entry_id: string (length 33–33)
published: string (length 14–14)
title: string (length 13–172)
authors: sequence (length 1–668)
primary_category: string (115 classes)
categories: sequence (length 1–7)
text: string (length 3–431k)
http://arxiv.org/abs/2406.18783v1
20240626230452
Psychological Profiling in Cybersecurity: A Look at LLMs and Psycholinguistic Features
[ "Jean Marie Tshimula", "D'Jeff K. Nkashama", "Jean Tshibangu Muabila", "René Manassé Galekwa", "Hugues Kanda", "Maximilien V. Dialufuma", "Mbuyi Mukendi Didier", "Kalala Kalonji", "Serge Mundele", "Patience Kinshie Lenye", "Tighana Wenge Basele", "Aristarque Ilunga", "Christian N. Mayemba", "Nathanaël M. Kasoro", "Selain K. Kasereka", "Hardy Mikese", "Pierre-Martin Tardif", "Marc Frappier", "Froduald Kabanza", "Belkacem Chikhaoui", "Shengrui Wang", "Ali Mulenda Sumbu", "Xavier Ndona", "Raoul Kienge-Kienge Intudi" ]
cs.CL
[ "cs.CL", "cs.LG" ]
§ ABSTRACT The increasing sophistication of cyber threats necessitates innovative approaches to cybersecurity. In this paper, we explore the potential of psychological profiling techniques, particularly focusing on the utilization of Large Language Models (LLMs) and psycholinguistic features. We investigate the intersection of psychology and cybersecurity, discussing how LLMs can be employed to analyze textual data for identifying psychological traits of threat actors. We explore the incorporation of psycholinguistic features, such as linguistic patterns and emotional cues, into cybersecurity frameworks. Through case studies and experiments, we discuss the effectiveness of these methods in enhancing threat detection and mitigation strategies. Our research underscores the importance of integrating psychological perspectives into cybersecurity practices to bolster defense mechanisms against evolving threats. § INTRODUCTION Psychological profiling plays a crucial role in cybersecurity, particularly in understanding and identifying the traits and motives of cybercriminals. In computer science, cybersecurity aims to safeguard technology within computer systems, implementing security measures to prevent risks and threats that could harm the system. This field regulates security measures to thwart third-party invaders or intruders who engage in malicious activities such as stealing private, business, or organizational information for personal gain <cit.>. In the domain of cybercrime, understanding the identity and motives of intruders plays a key role in mitigating risks to information security <cit.>. Psychological profiling emerges as a valuable tool for understanding the psychological traits and characteristics of cybercriminals, which strengthens strategies against potential cyber threats and assists in the identification of intruders and their motives through an examination of behavior, nature, and thought processes. Profiling in cybersecurity involves diverse criminological and criminal-law-based components, encompassing personal traits, criminal expertise, social attributes, and motivational factors. These elements help in understanding the predispositions, personality traits, demographics, socio-economic status, and motivations of cybercriminals, including those who are particularly elusive <cit.>. Cybercriminals frequently exhibit a range of psychological traits that strongly shape their behaviors and actions <cit.>. These individuals often possess a strong command of cyber technology, which they exploit for harmful purposes and various motives; common motives include financial gain, as seen in activities such as data theft and other forms of cyber fraud <cit.>. Many are driven by greed, pursuing financial rewards, while others seek power or revenge against certain groups or institutions. Some cybercriminals are thrill-seekers, relishing the risk involved in their illicit activities, or opportunists who take advantage of vulnerabilities for personal benefit <cit.>. There are also those who simply disregard legal and ethical standards, compromising their reputations within the cyber community. Traits of fearlessness, with little regard for potential consequences, and a lack of empathy are also prevalent. 
Moreover, some individuals demonstrate boldness, testing their hacking abilities against individuals and organizations. Collectively, these traits paint a complex picture of the motivations and behaviors driving cybercriminals in various scenarios <cit.>. Motivating factors behind cybercriminal personality traits include revenge and blackmailing. Understanding these traits can help minimize security risks and enable better analysis and resolution of cybercrimes <cit.>. In addition, integrating findings from Large Language Models (LLMs) and psycholinguistic tools, such as the Linguistic Inquiry and Word Count (LIWC) dictionary and the Medical Research Council (MRC) psycholinguistic database <cit.>, into psychological profiling can significantly enrich the understanding of cybercriminal behaviors and motivations. This holistic approach to psychological profiling can not only reveal the complex personalities of cybercriminals but also strengthen overall security measures, protecting both individuals and organizations from cyber threats. In this paper, we explore the intersection of psychology and cybersecurity, with a specific emphasis on the role of LLMs and psycholinguistic features in profiling cyber threats. The remainder of this work is organized as follows. Section <ref> discusses the fundamental role of psychological profiling in cybersecurity, outlining how it aids in understanding and mitigating the behaviors of cybercriminals. Section <ref> explores the application of LLMs in psychological profiling, highlighting their potential to decode complex patterns of cybercriminal activity. In Section <ref>, we examine the incorporation of psycholinguistic features into cybersecurity strategies, demonstrating how these tools can enhance the precision of psychological profiles. Section <ref> discusses different perspectives on psychological profiling in cybersecurity. Section <ref> addresses the ethical considerations and privacy implications inherent in the use of psychological profiling and data analysis in cybersecurity. Finally, Section <ref> discusses future directions for research in this area and Section <ref> concludes the paper with reflections on the evolving landscape of cybersecurity profiling. § PSYCHOLOGICAL PROFILING IN CYBERSECURITY Researchers and practitioners reveal a complex profile of cyber criminals, showcasing traits such as tech-savvy, well-networked, vengeful, goal-oriented, greedy, manipulative, risk-takers, opportunists, rule-breakers, fearless, emotionless, and daring <cit.>. More specifically, <cit.> identified a range of characteristics including smartness, creativity, and a need for control, shedding light on the multifaceted nature of individuals involved in cyber crimes, and uncovering motivating factors like monetary gain, thrill-seeking, and political beliefs that drive individuals towards engaging in cyber criminal activities. In addition to profiling traits, understanding the psychological effects of cybercrime remains essential. <cit.> indicated that exposure to cyber terrorism triggers heightened levels of stress and anxiety among individuals, akin to the psychological effects of conventional terrorism, emphasizing the pivotal role of perceived threats in shaping individuals' attitudes towards government surveillance, regulation, and military responses in the face of cyber threats. <cit.> underscored the significant influence of law enforcement's lack of cybercrime knowledge on low conviction rates and victim underreporting. 
The study revealed that victims often delay reporting cybercrimes due to embarrassment or a perception that they are better equipped to handle the situation themselves. This highlights the importance of training officers to increase their preparedness in dealing with cybercrime cases and engaging with victims. In a related vein, <cit.> explored the psychological impacts of hacking victimization and underlined the need for support organizations to address these issues. The study underscores the importance of raising awareness about the psychological effects of cybercrime and promoting support opportunities for victims. Its findings provide valuable insights for clinicians and support organizations, informing the development of treatment guidelines and interventions to address the negative psychological impacts of hacking. <cit.> investigated how limited experience and domain knowledge in cyberspace lead to the use of cognitive shortcuts and inappropriate heuristics, resulting in elevated levels of dread. In recent investigations, building upon prior research, <cit.> highlighted the importance of leveraging cybercriminals' cognitive biases to influence their behaviors during attacks. The study suggested that by using algorithms informed by cyberpsychology research, defenders can present low-risk, low-reward targets to steer hackers away from high-value assets. Studies show that attackers exhibit risk-averse behavior, preferring attacks on less secure machines to avoid the appearance of failure. Research on human subjects engaging in cybercriminal behavior revealed a strong relationship between key risk-taking and cybercriminal behaviors. <cit.> indicated that participants' exposure to fictional media, particularly crime-related television shows, can influence their attitudes towards criminal investigations and profiling techniques. The study revealed a correlation between media consumption habits and the perceived realism of investigative procedures portrayed in television episodes. Additionally, participants' beliefs about the role of criminal profilers and the importance of intuition in investigations were influenced by their media exposure. This underscores the nuanced relationship between media consumption and perceptions of criminal behavior and profiling accuracy. Expanding upon the evolving understanding of cybercriminal behavior, <cit.> highlighted the significance of intelligence, personality traits, and social skills in the effectiveness of cyber attacks. The study emphasized the role of environmental factors, such as family relationships and educational background, in shaping the behaviors of hackers. It suggested that a holistic approach, considering both individual characteristics and external influences, is crucial for developing a comprehensive psychological profile of cyber criminals. Additionally, the study noted the need for interdisciplinary collaboration between information technology and investigative psychology to combat cybercrime. Psychological profiling, rooted in behavioral analysis and psychological theory, aims to uncover patterns and traits indicative of malicious intent in cyber activities. This approach utilizes various aspects of human behavior, such as language use, decision-making processes, and emotional responses, to discern the psychological profiles of threat actors <cit.>. 
Leveraging techniques from psychology, including personality assessment and psycholinguistic analysis, enables the identification of anomalous behaviors and potential indicators of cyber threats. For instance, <cit.> emphasized the importance of profiling potential attackers in cybersecurity to enhance the accuracy of vulnerability severity scores using psychological and behavioral traits. Research investigated the influence of cultural and psychological factors on cyber-security behavior, utilizing the Big Five Framework to assess personality traits and their impact on user attitudes towards privacy and self-efficacy <cit.>. More specifically, <cit.> proposed machine learning models for psychological profiling of hackers based on the “Big Five” personality traits model (OCEAN - Openness, Conscientiousness, Extroversion, Agreeableness, Neuroticism) and their models achieved 88% accuracy in mapping personality clusters with different types of hackers (White Hat, Grey Hat, etc.), identifying cyber-criminal behaviors. <cit.> discovered that individuals attracted to hacking exhibit high scores on Machiavellianism and Psychopathy scales, with Grey Hat hackers showing opposition to authority, Black Hat hackers scoring high on thrill-seeking, and White Hat hackers displaying tendencies towards Narcissism. The Dark Triad traits significantly predict interest in different types of hacking, while thrill-seeking emerges as a key motivator for Black Hat hackers. Perceptions of apprehension for violating privacy laws negatively impact Grey Hat and Black Hat hacking. Moreover, <cit.> revealed that cybercriminals exhibit a range of behaviors and traits that deviate from societal norms, influenced by factors such as heredity, education, culture, and socio-economic status. Profiling methods focus on identifying key psychological features, modus operandi, and criminal motivations to aid in early detection and investigation of cybercrimes. The study emphasizes the significance of expert knowledge and advanced technologies in enhancing law enforcement efforts to combat cybercrime. Overall, the research underscores the evolving nature of criminal profiling in the digital era and the critical role it plays in addressing the growing threat of cybercriminal activities. In response to the escalating threat posed by cybercrimes, <cit.> highlighted the diverse motivations of hackers, including recreation, prestige, revenge, profit, and ideology, which influence their engagement in cyber activities. The study underscores the importance of not only teaching coding skills but also educating individuals about the risks and consequences of online actions to prevent cyber-crime involvement. Additionally, the research emphasizes the need to identify at-risk groups and individuals to target awareness campaigns and promote informed online behavior for future generations. Lastly, the study suggests that understanding social psychological theories can enhance communication with hacker communities and individuals, ultimately contributing to more effective cybersecurity practices. § LLMS IN PSYCHOLOGICAL PROFILING Large Language Models (LLMs), such as OpenAI's GPT series of models, Google's PaLM and Gemini, and Meta's LLaMA family of open-source models, have demonstrated remarkable capabilities in natural language understanding and generation tasks <cit.>. 
As these models continue to evolve and become more sophisticated, researchers and practitioners are exploring their potential applications beyond language tasks, venturing into the realm of psychological profiling (see Table <ref>). These models are utilized to profile individuals based on their language use patterns and communication styles, facilitating the early detection of potential threats <cit.>. The potential applications of LLM-based psychological profiling are vast and diverse <cit.>. In mental health settings, these techniques aid in the early detection of psychological disorders and the development of personalized treatment plans <cit.>. In human-AI interaction, understanding the perceived personalities of LLMs improves user engagement and trust, leading to more natural and effective interactions <cit.>. However, the application of LLMs to psychological profiling is not without challenges and ethical considerations. Existing personality models and assessment methods have been developed primarily for human subjects, and their suitability for evaluating artificial intelligence systems is questionable. Additionally, the fluid and context-dependent nature of LLM “personalities” raises concerns about the reliability and validity of traditional personality assessment techniques when applied to these models <cit.>. As researchers delve deeper into this emerging field, they must grapple with the complexities of transferring human-centric concepts like personality to artificial intelligence systems. LLMs are explored for psychological profiling tasks, such as detecting personality traits, values, and other non-cognitive characteristics <cit.>. In exploring the multifaceted landscape of psychological profiling with LLMs, researchers have embarked on various avenues to understand their potential applications. For instance, <cit.> focused on investigating the ability of LLMs to simulate human psychological behaviors using prompts to adopt different personas and respond to standardized measures of personality constructs to assess their psychometric properties. <cit.> repurposed standard psychometric inventories originally designed for assessing human psychological characteristics, such as personality traits, values, morality, and beliefs, to evaluate analogous traits in LLMs. <cit.> fine-tuned LLMs on psychometric test items related to the Big Five personality traits for evaluating personalities based on language. <cit.> introduced a method for administering personality tests on LLMs and shaping their generated text to mimic specific human personality profiles. Furthermore, <cit.> proposed PsychoBench, a framework for evaluating personality traits, interpersonal relationships, motivational tests, and emotional abilities to uncover complex psychological profiles within LLMs and their potential integration into human society as empathetic and personalized AI-driven solutions. <cit.> demonstrated that LLM agents conditioned on personality profiles can mimic human traits, with creative personas displaying more consistent behavior in both interactive and non-interactive conditions; the research highlights the importance of robust persona conditioning in shaping LLM behavior and emphasizes the asymmetry in linguistic alignment between different persona groups during interactions. <cit.> presented PsySafe, a framework designed to evaluate and improve the safety of multi-agent systems (MAS) by addressing the psychological aspects of agent behavior. 
PsySafe incorporates dark personality traits to assess and mitigate potential risks associated with agent behaviors in MAS; in addition, it includes identifying vulnerabilities, evaluating safety from psychological and behavioral perspectives, and implementing effective defense strategies. The findings yielded by PsySafe reveal several phenomena, including collective dangerous behaviors among agents, their self-reflection on engaging in such behaviors, and the correlation between psychological assessments and behavioral safety. While LLMs offer promising applications in psychological profiling, their language generation capabilities also raise concerns about potential misuse for cyber attacks and malicious activities <cit.>. Attack payloads and malware creation involve LLMs generating malicious code or new strains of malware through training on relevant data <cit.>. Automated hacking and vulnerability scanning tasks can be performed by LLMs, including generating code for automated hacking attacks, scanning software for vulnerabilities, or developing exploits <cit.>. In addition, LLMs can be used for social engineering and phishing purposes, leveraging their ability to mimic human language patterns to create convincing social engineering attacks, phishing emails, or disinformation campaigns <cit.>. Adversaries could potentially manipulate LLM outputs for malicious purposes using prompt injection techniques <cit.>. LLMs can generate highly personalized and persuasive phishing emails tailored to specific individuals within an organization, bypassing traditional detection systems. Studies show these AI-crafted attacks can be strikingly effective, with around 10% of recipients entering credentials on fake login portals <cit.>. The ability of LLMs to mimic human language patterns and adapt to different contexts makes them a powerful tool for deception and manipulation <cit.>. The 2023 Report of Voice of SecOps provides a comprehensive analysis of threats and stressors posed by LLMs, revealing that 51% of security professionals are likely to leave their job within 2024.[Generative AI and Cybersecurity: Bright Future or Business Battleground? Deep Instinct. (2023). Voice of SecOps Reports. Retrieved from <https://www.deepinstinct.com/voice-of-secops-reports>. Accessed on May 12, 2024.] The study surveyed over 650 senior security operations professionals in the U.S. to assess LLMs' impact on the cybersecurity industry. Findings indicate a 75% surge in attacks in 2022, with 85% attributing this increase to bad actors leveraging LLMs. Furthermore, 70% of respondents believe LLMs positively influence employee productivity and collaboration, while 63% perceive an enhancement in employee morale. Ransomware emerges as the greatest threat to organizational data security, with 46% of respondents acknowledging its severity and 62% indicating it as the top C-suite concern, a notable increase from 44% in 2022; the pressure to combat ransomware has prompted organizations to revise their data security strategies, with 47% now possessing a policy to pay the ransom, compared to 34% in the previous year. Moreover, the report reveals a 55% increase in stress levels among security professionals, primarily attributed to staffing and resource constraints, cited by 42% of respondents. § PSYCHOLINGUISTIC FEATURES Psycholinguistic features encompass a wide range of linguistic attributes and psychological constructs that reflect cognitive and emotional aspects of language use. 
Integrating psycholinguistic features into cybersecurity frameworks enhances the granularity of threat profiling techniques and enables a deeper understanding of cybercriminals' mental states and feelings <cit.>. Psycholinguistic features include sentiment analysis, linguistic complexity measures, lexical diversity metrics, and stylistic characteristics. Through advanced text analysis and machine learning algorithms, these features can be leveraged to identify anomalous patterns indicative of malicious intent. One of the powerful tools in psycholinguistic analysis is the Linguistic Inquiry and Word Count (LIWC) dictionary <cit.>. In the context of cyber attacks, LIWC has been used to detect deception in phishing emails by analyzing the psycholinguistic features that attackers employ to deceive end-users <cit.>. Research shows that phishers often use language conveying certainty (e.g., always, never), time pressure, and work-related words to increase the vulnerability of targets. Conversely, reward-related words like money or cash tend to decrease vulnerability as they are associated with scams. Beyond phishing, LIWC has been applied to study online predator behavior, analyze developer personalities, model social media rumors, and understand user reactions in crowdsourcing <cit.>. Building on the potential of LIWC for psycholinguistic analysis in cybersecurity, researchers explore its applications to understand attacker behavior and victim vulnerabilities. More precisely, <cit.> focused on analyzing the vulnerability factors of potential victims to cybergrooming using LIWC to quantify and understand the social-psychological traits that may make individuals more susceptible to online grooming; they reveal significant correlations between specific vulnerability dimensions and the likelihood of being targeted as a victim of cybergrooming. Interestingly, the research observed negative correlations between victims and certain family and community-related traits, challenging conventional beliefs about the key factors contributing to vulnerability in online contexts. <cit.> utilized LIWC and demonstrated that malicious insiders exhibit specific linguistic patterns in their written communications, including increased use of self-focused words, negative language, and cognitive process-related words compared to other team members; as insiders become more detached from the team, language similarity decreases over time. From a different angle, psycholinguistic features were utilized to examine the manipulative aspects of cybercrimes. More specifically, <cit.> investigated the psycholinguistic dimensions of social engineering within cybersecurity, employing activity theory to dissect the methods and techniques utilized by malicious actors. This research reveals the sophisticated tactics employed by social engineers to manipulate emotions, impede critical thinking, and exploit moral values to influence user behavior and extract sensitive information. <cit.> proposed a machine learning model for detecting sexual predation in chatrooms using psycholinguistic, content-based, and chat-based features, and showed distinct characteristics that differentiate predators from non-predators. 
Particularly, <cit.> investigated the psychological traits and behaviors of individuals involved in self-reported criminal computer activities, emphasizing the role of extraversion in predicting such behavior and challenging stereotypes by shedding light on the complexities of personality factors in criminal/deviant computer behavior through the use of Likert-scale questionnaires and psychometric instruments. Furthermore, <cit.> conducted a study on phishing influence detection using a novel computational psycholinguistic analysis approach to identify influential sentences that could potentially lead to security breaches and hacking in online transactions and social media interactions, developing a language and domain-independent computational model based on Cialdini's principles of persuasion.[The 6 Principles of Persuasion: Tips from the leading expert on social influence, Douglas T. Kenrick. Posted Dec. 8, 2012. Retrieved from <https://www.psychologytoday.com/ca/blog/sex-murder-and-the-meaning-of-life/201212/the-6-principles-of-persuasion>. Accessed May 20, 2024.] <cit.> indicated that cyber offenders displayed similarities to the community sample on certain traits but exhibited differences from offline offenders, particularly in conscientiousness and openness to experience. Notably, cyber offenders showed lower scores on honesty-humility compared to the community sample, suggesting potential implications for intervention strategies targeting specific personality traits in this population. <cit.> emphasized the importance of understanding psycholinguistic features and psychology in cybersecurity to develop effective strategies and interventions. They explore the emotional responses triggered by cybersecurity breaches, focusing on the hacking of smart security cameras. The study identifies a 3-dimensional structure of emotional reactions, highlighting negative affectivity, proactive versus fight/flight action tendencies, and emotional intensity and valence. Personality characteristics, such as the Big Five traits and resilient/overcontrolled/undercontrolled types, were found to relate to these emotional dimensions. Recently, the application of sentiment analysis techniques has paved the way for building psychological profiles and detecting and understanding cyber threats. <cit.> utilized sentiment analysis to identify discussions around exploits, vulnerabilities, and attack planning on dark web forums even before these threats manifest in the real world, and to provide early warnings through the observation of changes in sentiment and semantic context. <cit.> proposed approaches to predict cyber-events by leveraging sentiment analysis on hacker forums and social media to analyze the sentiment expressed in online discussions and detect signals that may precede cyber attacks. <cit.> built user psychological profiles based on the sentiment analysis of their network browsing and email content, and demonstrate that this approach can proactively and accurately detect malicious insiders with extreme or negative emotional tendencies. Building upon recent studies and advancements, <cit.> developed a machine learning model called TrollHunter and collected a dataset of online trolling messages and found that troll messages exhibit more abusive language, lower cognitive complexity, and greater targeting of named entities and identities; the model achieved an 89% accuracy rate and F1 score in identifying trolling behavior. 
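To make the kind of pipeline described above concrete, the following minimal sketch pairs a hand-rolled, LIWC-style category counter with a linear classifier. It is illustrative only: the word lists, example messages, and category names are hypothetical placeholders rather than the LIWC dictionary (which is proprietary) or any dataset used in the cited studies, and a real system would rely on validated lexicons and much larger corpora.

```python
# Illustrative sketch: LIWC-style psycholinguistic features feeding a linear
# classifier for deceptive/phishing text. All word lists and messages below
# are hypothetical placeholders, not the LIWC dictionary or a cited dataset.
import numpy as np
from sklearn.linear_model import LogisticRegression

CATEGORIES = {
    "certainty":     {"always", "never", "definitely", "guaranteed"},
    "time_pressure": {"urgent", "immediately", "now", "deadline", "expires"},
    "work":          {"account", "invoice", "payroll", "password", "login"},
    "reward":        {"money", "cash", "prize", "bonus"},
}

def features(text: str) -> np.ndarray:
    """Normalized counts of each word category (a crude LIWC-like profile)."""
    tokens = [t.strip(".,!?:") for t in text.lower().split()]
    n = max(len(tokens), 1)
    return np.array([sum(t in words for t in tokens) / n
                     for words in CATEGORIES.values()])

# Placeholder training messages (1 = deceptive/phishing, 0 = benign).
corpus = [
    ("urgent: your account password expires now, login immediately", 1),
    ("you are guaranteed a cash prize, act immediately", 1),
    ("minutes of the weekly project meeting are attached", 0),
    ("the seminar on psycholinguistics is moved to room 204", 0),
]
X = np.vstack([features(t) for t, _ in corpus])
y = np.array([label for _, label in corpus])

clf = LogisticRegression().fit(X, y)
test = "immediate action required: verify your payroll login now"
print(clf.predict_proba(features(test).reshape(1, -1))[0, 1])  # estimated P(phishing)
```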
§ DISCUSSION The integration of psychological profiling into cybersecurity practices offers a multifaceted approach to understanding and mitigating cyber threats. LLMs and psycholinguistic features provide deeper insight into the behaviors, motivations, and emotional states of cybercriminals. This discussion section explores the potential benefits and challenges of these techniques, drawing from the research findings presented earlier. §.§ Benefits of Psychological Profiling in Cybersecurity Psychological profiling in cybersecurity holds significant promise. Identifying psychological traits and patterns in cybercriminal behavior enables security professionals to anticipate and preemptively counteract potential threats. For instance, understanding the personality traits and motivations of different types of hackers (e.g., White Hat, Black Hat, Grey Hat) allows for more tailored security measures and interventions <cit.>. The use of LLMs enhances this profiling by analyzing large volumes of text data, identifying linguistic patterns that may indicate malicious intent. Psycholinguistic features, such as those derived from the LIWC dictionary, provide additional granularity. These features help in detecting subtle cues in language that might indicate deception, stress, or malicious intent. For example, certain linguistic markers can distinguish phishing emails from legitimate communications, thereby improving the accuracy of threat detection systems <cit.>. Moreover, the incorporation of psychological profiling can aid in the development of more personalized cybersecurity training programs. Understanding the psychological traits that make individuals more susceptible to cyber attacks allows organizations to design targeted awareness campaigns and training modules that address specific vulnerabilities. §.§ Challenges and Limitations Despite the promising applications, several challenges and limitations need to be addressed. One major challenge is the accuracy and reliability of psychological profiling techniques. While LLMs and psycholinguistic tools provide valuable insights, they come with inherent limitations. Implementing and maintaining these advanced profiling systems requires a workforce equipped with specialized skills in artificial intelligence, cybersecurity, and psychological analysis. There is often a shortage of professionals with the necessary expertise to develop, deploy, and refine these tools. Addressing this skill gap is crucial for the effective utilization of psychological profiling in cybersecurity. The effectiveness of LLMs largely depends on the quality and diversity of the data they are trained on. Inaccurate models can result from poor-quality data, such as poisoned or contaminated datasets, or from non-representative data. Moreover, acquiring diverse and representative datasets is particularly challenging in the field of cybersecurity, where data sensitivity and proprietary information are significant concerns. Additionally, the use of these tools can lead to false positives and negatives, causing either unnecessary alarms or undetected threats. Thus, ensuring the robustness and validity of these models is vital for their successful deployment in real-world scenarios <cit.>. Another challenge lies in the dynamic and evolving nature of cybercriminal behavior. Cybercriminals continually adapt their tactics to evade detection, which means that profiling techniques must also evolve. 
Continuous updates and refinements to the models and algorithms are necessary to keep pace with these changes. The ethical implications of psychological profiling in cybersecurity cannot be overlooked. The use of personal data to create psychological profiles raises significant privacy concerns. It is essential to balance the benefits of enhanced security with the protection of individual privacy rights. Transparent policies and stringent data protection measures must be in place to ensure that the use of psychological profiling does not infringe on personal freedoms. § ETHICAL CONSIDERATIONS Ethical considerations are paramount when employing psychological profiling in cybersecurity. The potential for misuse of these technologies for surveillance, manipulation, or discrimination is a serious concern. For example, the ability of LLMs to generate persuasive phishing emails tailored to specific individuals poses a significant threat if used maliciously <cit.>. To mitigate these risks, it is crucial to establish ethical guidelines and regulatory frameworks that govern the use of psychological profiling tools. These guidelines should emphasize the importance of informed consent, data minimization, and transparency in the use of personal data. Additionally, there should be mechanisms for accountability and oversight to ensure that these technologies are used responsibly and ethically <cit.>. § FUTURE DIRECTIONS Future research should focus on improving the robustness of psychological profiling techniques. This includes developing more sophisticated models that can adapt to the evolving tactics of cybercriminals and integrating multimodal data sources (e.g., text, behavioral data, biometric data) to create more comprehensive profiles. Another promising direction is the exploration of collaborative approaches that combine human expertise with machine intelligence. Human analysts and AI systems can collaborate to achieve more effective and nuanced threat detection and mitigation strategies. Finally, ongoing efforts to address the ethical and privacy concerns associated with psychological profiling are essential. This includes developing new methods for anonymizing and protecting personal data while still enabling meaningful analysis, as well as fostering a culture of ethical awareness and responsibility among cybersecurity professionals. § CONCLUSION The integration of psychological profiling, LLMs, and psycholinguistic features into cybersecurity practices represents a significant advancement in the field. These techniques offer the potential to enhance threat detection and mitigation strategies by providing deeper insight into the behaviors and motivations of cybercriminals. However, realizing this potential requires addressing the challenges and ethical considerations associated with these technologies. By doing so, we can create more robust and responsible cybersecurity frameworks that protect both organizations and individuals from evolving cyber threats. § ACKNOWLEDGMENTS The authors thank all Greprovad members for helpful discussions and comments on early drafts.
http://arxiv.org/abs/2406.18748v1
20240626202625
The localized phase of the Anderson model on the Bethe lattice
[ "Tommaso Rizzo", "Marco Tarzia" ]
cond-mat.dis-nn
[ "cond-mat.dis-nn", "cond-mat.stat-mech" ]
§ ABSTRACT In this paper, we investigate the Anderson model on the Bethe lattice, focusing on the localized regime. Employing the cavity approach, we derive compact expressions for the inverse participation ratios (IPRs) that are equivalent to those obtained using the supersymmetric formalism and naturally facilitate a highly efficient computational scheme. This method yields numerical results with unprecedented accuracy, even very close to the localization threshold. Our approach allows for high-precision validation of all theoretical predictions from the analytical solution, including the finite jump of the IPRs at the transition. Additionally, we reveal a singular behavior of the IPRs near the critical point that has not been previously reported in the literature. This singular behavior is further confirmed by the numerical solution of the non-linear σ model on the Bethe lattice, which provides an effective description of Anderson localization. § INTRODUCTION Anderson localization (AL) <cit.> is one of the most spectacular phenomena in condensed matter physics. It manifests as the suppression of wave propagation in a disordered medium above a critical value of the disorder strength (and for any finite disorder in low enough dimension) <cit.>. Over the past half-century the field has thrived, with recent experimental observations in diverse systems such as cold atomic gases <cit.>, kicked rotors <cit.>, and classical sound elastic waves <cit.> further highlighting the ubiquity and relevance of this phenomenon. On the theoretical side, the critical properties of AL are well established in low dimensions. According to the scaling hypothesis <cit.>, d_L=2 is the lower critical dimension of the transition (for systems with orthogonal symmetry) <cit.>. The scaling arguments were later supported and quantitatively confirmed by a renormalization group analysis in d = 2 + ϵ <cit.> of an effective field-theory description in terms of a non-linear σ model (NLσM) <cit.>. AL is also analytically tractable in the infinite-dimensional limit on the Bethe lattice (BL) <cit.>, an infinite tree (with no boundaries) in which each node has a fixed degree k+1. The hierarchical structure of the BL allows one to obtain a (complicated) non-linear integral self-consistent equation for the order parameter distribution function, which becomes asymptotically exact in the thermodynamic limit, and whose analysis yields the transition point and the critical behavior <cit.>. Although these results were firmly established several years ago, the study of AL on the BL is still very active, and has continued to reveal new facets and intricacies. There are two main reasons for this. The first concerns the differences between the exotic critical behavior found on the BL and the one observed in finite dimensions and predicted by the scaling analysis. In particular, the diffusion coefficient (or the conductivity) vanishes exponentially at the critical disorder on the BL when the transition is approached from the metallic side <cit.>, while in finite-d such exponential behavior is replaced by a power law (with a d-dependent exponent ν (d-2), d being the spatial dimension and ν the critical exponent describing the divergence of the localization length at the critical disorder <cit.>). 
The other difference concerns the behavior of the inverse participation ratio (IPR). The IPR is defined as I_2 = ⟨∑_i=1^N |ψ_α (i)|^4 ⟩, and is essentially a measure of the inverse volume occupied by an eigenstate. On BLs of finite size N (i.e., random-regular graphs in which every node has a fixed connectivity k+1 <cit.>, see below for a precise definition), I_2 ≃Λ/N in the metallic phase (with a disorder-dependent prefactor Λ which diverges exponentially for W → W_c^- <cit.>); it exhibits a discontinuous jump at the transition and stays of O(1) for W>W_c <cit.>. In contrast, in finite-dimensional systems the IPR vanishes as a power-law at the critical disorder with an exponent ν d <cit.>. Several works have addressed these apparent discrepancies. Both intuitive arguments and quantitative calculations <cit.> have provided strong indications that the BL limit is a singular point of AL and plays the role of the upper critical dimension of the problem, d_L = ∞, in agreement with previous conjectures <cit.>. The second reason for the remarkable resurgence of interest in AL on the BL and sparse random graphs can be attributed to its strong connection with many-body localization (MBL) <cit.>. MBL involves the localization of highly excited many-body eigenstates even in the presence of interactions, and has been a focal point of recent theoretical and experimental research <cit.>. Since the preliminary investigations, MBL has been linked to a form of localization in the Fock space of Slater determinants <cit.> (see also Refs. <cit.>): In this representation, many-body configurations correspond to site orbitals on the graph, subject to (strongly correlated) diagonal disorder, while interactions serve as effective hoppings connecting them. Despite several simplifications in this analogy, it proves valuable for qualitatively understanding the problem <cit.>. In this context, a set of analytical <cit.> and numerical <cit.> explorations of the Anderson model on the BL has been conducted over the last decade. In the midst of such numerous investigations, the predominant research emphasis has leaned towards the delocalized side preceding the critical disorder, leaving the insulating regime relatively underexplored, with only a few notable exceptions <cit.>. Bridging this gap, in this work we perform a thorough investigation of the critical properties of AL on the infinite BL when the transition is approached from the localized phase. One reason the insulating phase has received comparatively less attention may be the inherent challenge posed by the fact that the order parameter distribution function (i.e., the probability distribution of the local density of states, see below) exhibits power law tails, which are exceptionally difficult to sample accurately using conventional numerical methods. In fact, the first achievement of our work precisely consists in circumventing this problem: Using the cavity formalism, we derive compact and transparent expressions for the relevant observables (such as the IPR and the distribution of the wave-functions' amplitudes) that are equivalent to those obtained within the supersymmetric approach. These expressions lend themselves naturally to a highly efficient computational method allowing us to obtain results with unprecedented numerical accuracy even very close to the transition point. This enables us to precisely assess and validate all the predictions of the analytical solution <cit.> and recover the expected critical behavior <cit.>. 
In particular, our results clearly show that the IPR exhibits a finite jump at the localization transition, as predicted by the supersymmetric treatment <cit.>. This is particularly interesting, as the existence of such a finite jump has been questioned in some recent works <cit.>. The second noteworthy outcome of our investigation unveils a distinctive feature: the finite jump of the IPR at the critical point is followed by a square root singularity which, to the best of our knowledge, has never been reported in the existing literature. Such singular behavior is further corroborated by the solution of the self-consistent equations found on the BL for the NLσM which provides an effective description of AL <cit.>. The analysis of the NLσM also helps to elucidate the highly non-trivial mathematical mechanism underlying this square root singularity. The paper is organized as follows: In Sec. <ref> we introduce the Anderson tight-binding model on the BL and briefly recall the definition of the key observables and the main features of its analytical solution; In Sec. <ref> we discuss the linearized self-consistent equations with respect to a small imaginary part which describe the localized phase and review the main features of their critical behavior; The main results of our work are contained in Sec. <ref>: We start by presenting our new approach to accurately solve the linearized equations, which allows one to retrieve the full probability distribution of the wave-functions' amplitudes in the localized phase; We discuss the resulting singular behavior of the IPR close to the transition point; We show that the solution of the effective NLσM fully supports our findings; Finally, in Sec. <ref> we provide a summary of our results and a few perspectives for future investigations. In the Appendix sections <ref>–<ref> we present some technical details and supplementary information that complement the results discussed in the main text. § THE MODEL AND KNOWN RESULTS We consider the simplest model for AL, which consists of a non-interacting (spinless) quantum particle on a lattice in the presence of a disordered potential: H = - ∑_⟨ i, j ⟩ t_ij( | i ⟩⟨ j | + | j ⟩⟨ i |) - ∑_i=1^N ϵ_i | i ⟩⟨ i | . The first term is a sum over all pairs of nearest-neighbor sites and corresponds to the adjacency matrix of the considered lattice (t_ij is the hopping kinetic energy scale, which we take equal to 1 throughout). The second sum runs over all N sites of the lattice and corresponds to a diagonal random matrix containing the disordered potential. The on-site energies ϵ_i are independent and identically distributed random variables. It is customary to draw them from a uniform distribution in the interval [-W/2, W/2], W being the disorder strength. The model is defined on an infinite BL, which is formally described as an infinite random-regular graph (RRG) <cit.> in which each vertex has a fixed degree k+1, and can be thought of as a tree wrapped onto itself and without boundaries. In fact it can be rigorously shown that RRGs of N nodes have locally a tree-like structure and loops whose typical length scales as ln N/ln k <cit.>. For concreteness in the following we mostly focus on the case k=2, but the same qualitative behavior is expected for any finite k strictly larger than 1. The order parameter associated with AL is the probability distribution of the local density of states (LDoS) <cit.>, defined as ρ_i (E) ≡∑_α |ψ_α (i)|^2 δ(E-E_α ) , where ψ_α and E_α are the eigenvectors and the eigenvalues of H. 
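As a concrete illustration of the definitions just given, the short sketch below (not the authors' code; N, W, η, and the random seed are illustrative choices) builds H on a random regular graph of degree k+1, diagonalizes it, and estimates the LDoS at E=0 by smoothing the delta function with a small Lorentzian of width η.

```python
# Minimal sketch: Anderson Hamiltonian on a random regular graph of degree k+1
# with box-distributed on-site energies, diagonalized exactly; the LDoS at E=0
# is accumulated with a Lorentzian broadening eta. Parameters are illustrative.
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
N, k, W = 2000, 2, 18.0                          # degree k+1 = 3, disorder strength W

graph = nx.random_regular_graph(k + 1, N, seed=0)
H = -nx.to_numpy_array(graph)                    # hopping t_ij = 1 on every edge
H[np.diag_indices(N)] = -rng.uniform(-W / 2, W / 2, size=N)   # -eps_i on the diagonal

energies, states = np.linalg.eigh(H)             # E_alpha and psi_alpha (columns)

eta = 1e-2                                       # Lorentzian smoothing of delta(E_alpha)
weights = (eta / np.pi) / (energies**2 + eta**2)
ldos = (np.abs(states) ** 2 * weights).sum(axis=1)   # rho_i(0) for every site i

print("average DoS at E=0:", ldos.mean())
print("typical LDoS      :", np.exp(np.log(ldos + 1e-300).mean()))
```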
Physically ρ_i(E) is tightly related to the inverse lifetime of a particle of energy E created in i, and its typical value is proportional to the diffusion constant (or the dc conductivity). In the insulating phase the LDoS vanishes in the thermodynamic limit for W>W_c, since the exponentially localized eigenstates of energy E are typically very far from a given node i and do not contribute to the sum. In the metallic phase, instead, the LDoS is finite with probability density P(ρ), since extended plane waves typically have amplitudes of order 1/N on all the nodes of the graph. Localization begins from the band edges <cit.>; therefore, to see whether all states are localized it is sufficient to look at the band center. Hence, for simplicity, we set E=0 throughout the rest of the paper. The distribution of the LDoS, as well as other properties of the spectral statistics of the model, are encoded in the statistics of the elements of the resolvent matrix <cit.>, defined as G_ij = ( iη𝕀 - H)^-1_ij , where 𝕀 is the identity matrix, H is the Hamiltonian (<ref>), and η is an infinitesimal imaginary regulator that softens the pole singularities in the denominator of G. On the BL the diagonal elements of G verify a set of self-consistent recursion relations <cit.>, which become asymptotically exact in the N →∞ limit <cit.>. Deriving these equations involves considering the resolvent matrices of modified Hamiltonians H^(i), where the node i has been removed from the lattice (i.e., H^(i) is obtained by eliminating the i-th row and column from H). The crucial observation here is that, owing to the hierarchical structure of the BL, removing one node renders each of its neighbors uncorrelated from the others, as the lattice breaks into k+1 semi-infinite disconnected branches. Consequently, on any given site i one obtains (e.g., by direct Gaussian integration <cit.> or by using the block matrix inversion formula, also called the Schur complement formula <cit.>): G_i → j =1/(ϵ_i - iη - ∑_m ∈∂ i ∖ j t_mi^2 G_m → i) , where G_i → j=( iη𝕀 - H^(j))^-1_ii are the so-called “cavity” Green's functions (i.e., the diagonal element on node i of the resolvent of the Hamiltonian H^(j) obtained by removing the node j), ϵ_i is the on-site random energy taken from the uniform distribution, and ∂ i ∖ j denotes the set of all k + 1 neighbors of i except j. (Note that for each node one can define k + 1 cavity Green's functions, each one satisfying a recursion relation of this kind when one of the k+1 neighbors of the node has been removed.) From the solution of these equations one can finally obtain the diagonal elements of the resolvent on the node i of the original problem as: G_ii =1/(ϵ_i - iη - ∑_m ∈∂ i t_mi^2 G_m → i) . Eq. (<ref>) should in fact be interpreted as a self-consistent integral equation for the probability distribution of the cavity Green's functions (in the N →∞ limit) P( Re G, Im G) = ⟨1/N(k+1)∑_i=1^N ∑_j ∈∂ i δ( Re G - Re G_i → j) δ( Im G - Im G_i → j) ⟩ , where the average is performed over the disorder distribution. This self-consistent integral equation can be solved numerically using population dynamics algorithms: The probability distribution of the cavity Green's functions is approximated by the empirical distribution of a large pool of Ω complex elements G_α, P( Re G, Im G) ≃Ω^-1∑_α=1^Ω δ( Re G - Re G_α) δ( Im G - Im G_α); At each iteration step k instances G_α are extracted from the pool and a value of ϵ is taken at random from the uniform distribution; A new instance G_α is generated using Eq. 
(<ref>) and inserted in a random position of the pool until the process converges to a stationary distribution (convergence can be monitored, for instance, by checking that some moments of P( Re G, Im G) reach a stationary value). Once the stationary distribution of the cavity Green's functions is found, one can implement a similar procedure to obtain the probability distribution of the Green's function of the original problem from Eq. (<ref>) (see also Refs. <cit.> for more details). It is easy to show that the LDoS on a given node i of the lattice, Eq. (<ref>), is proportional to the imaginary part of the Green's function (in the η→ 0^+ limit): ρ_i = 1/πlim_η→ 0^+ Im G_ii , from which the average density of states (DoS) at E=0 is simply given by ρ = ⟨1/N∑_α δ(E_α) ⟩ = 1/N∑_i ⟨ρ_i⟩ = 1/π⟨ Im G_ii⟩ . Similarly, the generalized inverse participation ratios, defined as I_p = ⟨∑_α∑_i |ψ_α (i)|^2p δ(E_α) ⟩/⟨∑_α δ(E_α) ⟩ , are associated with the p-th moments of the Green's functions in the limit η→ 0^+: |G_ii|^p = |∑_α |ψ_α (i)|^2/( iη - E_α)|^p ≈∑_α |ψ_α (i)|^2p 1/(η^2+E_α^2)^p/2 = ∑_α |ψ_α (i)|^2p δ(E_α) 1/η^p-1∫_-∞^+∞ dx/(1 + x^2)^p/2 . Averaging over all sites, from Eqs. (<ref>), (<ref>), and (<ref>) one obtains a simple spectral representation of the generalized IPRs (for p>1): I_p = √(π) Γ( p/2)/Γ( (p-1)/2) lim_η→ 0^+η^p-1⟨ |G_ii|^p ⟩/⟨ Im G_ii⟩ . In the metallic phase P( Re G, Im G) (and consequently the distribution of G_ii) converges to a stable non-singular distribution. Hence, ⟨ |G|^p ⟩ is finite and from Eq. (<ref>) one immediately sees that all the generalized IPRs vanish for η→ 0^+. In the insulating phase, instead, P( Re G, Im G) (and consequently the distribution of G_ii) is singular in the η→ 0 limit: The (marginal) probability distribution of Im G has a maximum in the region Im G ∼η and power-law tails P( Im G) ∼√(η)/( Im G)^3/2 with a cutoff at η^-1. Hence the main contribution to the moments comes from the cutoff, ⟨ ( Im G)^p ⟩∝η^1-p (for p ≥ 1/2). The generalized IPRs are all of O(1) for W ≥ W_c, and have a finite jump at the transition. (The average DoS, instead, is continuous across the transition.) The normalization integral is dominated by the region Im G ∼η, and the typical value of the LDoS is of order η. This behavior reflects the fact that in the localized phase wave-functions are exponentially localized on a few O(1) sites where ρ_i takes very large values, while the typical value of the LDoS is exponentially small and vanishes in the thermodynamic limit for η→ 0^+. For completeness, it is worth mentioning that the distribution of the eigenfunctions' amplitudes at different points can also be determined from the statistics of the Green's functions via the following relation which holds for η small (again, we specialize to the case E=0): |G_ii|^p|G_jj|^q|G_ij|^l ≈∑_α|ψ_α(i)|^2 p+l|ψ_α(j)|^2 q+l/(η^2+E_α^2)^p/2+q/2+l/2 . The correlation function of wave-functions' amplitudes is tightly related to |G_ij|^2, which corresponds to p=q=0 and l=2 (which is equivalent to the case p=q=1 and l=0). § THE SELF-CONSISTENT EQUATIONS IN THE LINEARIZED REGIME AND THE CRITICAL BEHAVIOR As discussed above, in the localized phase the imaginary part of the Green's functions goes to zero linearly with η. It is then convenient to write in full generality G_i → j = g_i → j + iη ĝ_i → j . 
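Before turning to the linearized equations, the following minimal sketch (not the authors' implementation; pool size, sweep count, and the vectorized synchronous update are simplifying choices) illustrates the standard population-dynamics scheme just described: the cavity recursion is iterated at finite η and G_ii is then sampled from the stationary pool.

```python
# Minimal sketch of the standard population dynamics for the cavity recursion
# G_{i->j} = 1/(eps - i*eta - sum_m G_{m->i}), with k = 2 and finite eta.
# The whole pool is updated at once per sweep instead of one random element at
# a time; the paper quotes pools of 2^28 elements, far larger than used here.
import numpy as np

rng = np.random.default_rng(1)
k, W, eta = 2, 20.0, 1e-6              # W > W_c ~ 18.17: localized phase
pool_size, n_sweeps = 100_000, 200

pool = 1j * np.ones(pool_size)         # initial cavity Green's functions (Im G > 0)

for _ in range(n_sweeps):
    idx = rng.integers(pool_size, size=(k, pool_size))        # k random cavity neighbours
    eps = rng.uniform(-W / 2, W / 2, size=pool_size)           # on-site energies
    pool = 1.0 / (eps - 1j * eta - pool[idx].sum(axis=0))      # cavity recursion

# Green's function of the original problem: same recursion with k+1 neighbours.
idx = rng.integers(pool_size, size=(k + 1, pool_size))
eps = rng.uniform(-W / 2, W / 2, size=pool_size)
G_ii = 1.0 / (eps - 1j * eta - pool[idx].sum(axis=0))

print("average Im G_ii:", G_ii.imag.mean())                    # ~ pi * average DoS
print("typical Im G_ii:", np.exp(np.log(G_ii.imag).mean()))    # O(eta) in the localized phase
```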
When η is small the cavity equation can be linearized and rewritten as (we set t_ij=1 on all edges ⟨ i,j ⟩ throughout): g_i → j = 1/(ϵ_i - ∑_m ∈∂ i ∖ j g_m → i) , ĝ_i → j = g_i → j^2( 1+ ∑_m ∈∂ i ∖ j ĝ_m → i) . The above equations have two important features: i) The equation for the real part does not depend on the imaginary part, as the g_i → j's obey the equation corresponding to η=0; ii) The equation for the imaginary part is linear and thus it does not depend on η. The critical disorder W_c is found by studying the stability of the linearized equations (<ref>) and (<ref>) <cit.>. After some manipulations (see App. <ref>) one finds that a solution of the linearized equations of the form P(g,ĝ) ≃ f(g)/ĝ^1+β (for ĝ≫ 1) only exists if the function f(g) satisfies the following integral equation: f(g) = ∫ K_β (g,g_1) f(g_1) dg_1 , which defines a linear β-dependent integral operator with the (non-symmetric) kernel <cit.> K_β (g,g_1) = k |g|^2β∫ dϵ p(ϵ) dg̃ P̃(g̃) δ( g - 1/(ϵ - g_1 - g̃) ) . Here p(ϵ) is the uniform box distribution of width W of the random energies and P̃(g̃) is the probability distribution of the sum of the real parts of k-1 cavity Green's functions, Eq. (<ref>). In order for the localized phase to be stable the largest eigenvalue λ_β of the integral operator must be smaller than 1. It is possible to show (see App. <ref>) that, due to a symmetry of the problem <cit.>, for each left eigenvector ϕ(g) of the integral operator the function |g_1|^-2βϕ(1/g_1) is also a right eigenvector (with the same eigenvalue) of the integral operator with β→ 1 - β. Hence the spectrum of (<ref>), and in particular its largest eigenvalue, must be symmetric around β=1/2, as schematically illustrated in Fig. <ref>. The condition that Eq. (<ref>) admits a solution fixes the value of β (the solution with β>1/2 must be picked since in the strong disorder limit one has that β→ 1). The critical point is identified by the point where the solution no longer exists (i.e., the largest eigenvalue of the integral operator becomes larger than one for any β). Due to the symmetry β→ 1 - β, at the transition point one has that β=1/2 <cit.>. One can estimate the largest eigenvalue numerically by suitably discretizing the kernel (<ref>) on a finite grid. This has recently been done for k=2 with great accuracy in Ref. <cit.> (see also Ref. <cit.>). One finds that the largest eigenvalue for W close to W_c and for β close to 1/2 behaves as: λ_β≃ 1 - c_1 (W-W_c) + c_2 (β -1/2)^2 , with the numerical coefficients given in Ref. <cit.>: W_c ≃ 18.17 , c_1 ≃ 0.0308 , c_2 ≃ 3.18 . For W ≳ W_c we thus have β≃ 1/2 + √(c_1/c_2)√(W - W_c) . For completeness, it is worth mentioning that the same integral operator (<ref>) with the kernel K_β=1/2 (g,g_1) defined in (<ref>) also controls the long-distance behavior of the two-point correlation function. In particular, in the localized phase the L dependence of |G_i,i+L|^2 is obtained by applying L times the integral operator (<ref>) for β=1/2 divided by the branching ratio k (see App. <ref> and Ref. <cit.> for a detailed explanation). Hence, at large L the behavior of the two-point correlation function is dominated by the largest eigenvalue of the operator, yielding |G_i,i+L|^2 ≃ (λ_1/2/k)^L, with λ_1/2→ 1 for W → W_c^+. Yet, the spectrum of (<ref>) is continuous, resulting in a power-law sub-leading correction to the exponential decay <cit.>, yielding: C(L) ∝( λ_1/2/k)^L L^-3/2 = k^-L e^-L/ξ_ loc/L^3/2 . 
The localization length diverges at the critical point as: ξ_ loc = - [ ln (λ_1/2) ]^-1≃ 1/(c_1 (W-W_c)) , where the numerical value of c_1 for k=2 has been computed in Ref. <cit.> and is given in Eq. (<ref>). Finally, the properties of the largest eigenvalue λ_β of (<ref>) also control the critical behavior on the metallic side of the transition. In fact, as discussed in Refs. <cit.>, if one performs the analytic continuation of the solution of λ_β=1 below W_c one finds that β acquires an imaginary part: β = 1/2± i√(c_1/c_2)√(W_c - W) . This imaginary part in turn controls the critical behavior of the correlation volume Λ of typical eigenstates, which is predicted to diverge exponentially as <cit.>: Λ∝exp[ π√(c_2/c_1)/√(W_c - W) ] . The physical interpretation of Λ is the following: For W ≲ W_c typical wave-functions can be thought of as the result of the hybridization of many (i.e., an extensive number of) resonant localized peaks very far away from each other: typical eigenstates have O(N/Λ) bumps localized in a small region of the BL where the amplitude is of order Λ/N (to ensure normalization), separated by regions of radius lnΛ where the amplitude is very small <cit.>. In the delocalized phase the imaginary part of the Green's functions remains finite even as the imaginary regulator η goes to zero. As shown in Ref. <cit.>, this implies that the correlation function behaves as: C(L) ∝ηΛ k^-L/L^3/2 . § CRITICAL BEHAVIOR OF THE INVERSE PARTICIPATION RATIO In this section we introduce suitable variables whose typical values are related to the (generalized) IPR's, and describe an algorithm that allows one to compute the I_p's with very high numerical accuracy, arbitrarily close to the critical point. The generalized IPR's are related via Eq. (<ref>) to the p-th moment of |G_ii|, which is broadly distributed, according to Eq. (<ref>). The power-law tails of its probability distribution would lead to divergent expressions for ⟨ |G_ii|^p ⟩. In practice this does not occur because the power-law behavior is cut off at large values of the imaginary part at η^-1, which corresponds to the limit of validity of the linearized equations. Yet, in order to compute |G_ii|^p we need to take into account the region of large values of the imaginary part of the Green's functions and not the region of typical finite values. It would therefore seem that the linearized equations are not useful. Luckily, this is not the case. To see this we define: M_ii≡ 1/G_ii = ϵ_i - iη - ∑_m ∈∂ i G_m → i = m_ii - iη m̂_ii , from which one immediately obtains that ⟨ |G_ii|^p ⟩ = ∫ Q(m,m̂) 1/(m^2+m̂^2 η^2)^p/2 dm dm̂ . Similarly, using the fact that ĝ_ii = m̂_ii/(m_ii^2 + η^2 m̂_ii^2), ⟨ Im G_ii⟩ is expressed as: ⟨ Im G_ii⟩ = ∫ Q(m,m̂) ηm̂/(m^2+m̂^2 η^2) dm dm̂ . Given that m̂ is strictly positive we can make the change of variables m = ηm̂ x that leads to ⟨ |G_ii|^p ⟩ = ∫ Q(ηm̂ x ,m̂) (η m̂)^1-p/(1 + x^2)^p/2 dx dm̂ , ⟨ Im G_ii⟩ = ∫ Q(ηm̂ x ,m̂) 1/(1 + x^2) dx dm̂ . In the η→ 0 limit we can approximate Q(ηm̂ x,m̂) ≈ Q(0,m̂) and perform the integration over x explicitly. From Eq. (<ref>) we immediately obtain that the average DoS (<ref>) is given by: ρ = ∫ Q(0 ,m̂) dm̂ . Plugging Eqs. (<ref>) and (<ref>) into Eq. (<ref>), one finally obtains: I_p = ρ^-1∫ Q(0 ,m̂) m̂^1-p dm̂ . A similar expression has been derived in Refs. <cit.> in the supermatrix NLσM framework. We will discuss this connection in Sec. <ref>. 
To sum up, although the moments of the local Green's functions are controlled by the fact that |G_ii| is O(1/η) with probability O(η), they can be computed in terms of the typical values of M_ii, whose real part that is typically O(1) and whose imaginary part that is typically O(η). The fact that in the localized phase one can use the linearized equations to compute the relevant observables, such as the (generalized) IPR, facilitates the adoption of highly efficient computational methods that strongly reduces the effect of the finite size of the population compared to the delocalized phase. §.§ An efficient computational scheme for Q(0,m̂) Here we introduce a modification of the population dynamics algorithm which allows us performing the extrapolation of Q(m,m̂) to m=0 very efficiently, thereby allowing one to evaluate Eqs. (<ref>) and (<ref>) with arbitrary accuracy. In fact, from Eq. (<ref>) we have that m_ii = ϵ_i - ∑_m ∈∂ i g_m → i. Hence, the probability that m_ii=0 is equal to the probability that ϵ_i = ∑_m ∈∂ i g_m → i. This occurs with probability density 1/W if |∑_m ∈∂ i g_m → i|<W/2, and with zero probability otherwise. Based on this observation, we thus proceed in the following way: For a given value of W > W_c, we implement the standard population dynamics algorithm described in Sec. <ref> and obtain the stationary probability distribution of the cavity Green's function P(g,ĝ) in the linearized regime, corresponding to the solution of Eqs. (<ref>) and (<ref>); We extract k+1 elements (g_α,ĝ_α) from the population and compute m and m̂ from Eq. (<ref>). We define S = ∑_α=1^k+1 g_α; If (and only if) |S|<W/2 we add m̂^1-p/W to the numerator and 1/W to the denominator of I_p; We repeat this process several times and divide the numerator and the denominator by the total number of attempts; We renew the elements of the pool of the cavity Green's function by performing a few steps of the standard population dynamics algorithm and repeat the whole process several times until the desired accuracy on I_p is reached. It is worth to mention that the algorithm described here, which is schematically summarized in App. <ref>, can be straightforwardly extended to the computation of generic two-points correlation functions. §.§ Numerical results for the generalized IPRs Below we present the numerical results obtained applying the procedure described above. (All the results presented in this paper are obtained with pools of Ω = 2^28 elements.) We start by focusing on the full probability distribution of m̂ when m is identically equal to zero (divided by ρ to normalize it to 1): Q(0,m̂)/ρ. These probability distributions are plotted in the left panel of Fig. <ref> for several values of the disorder close to the critical point, W_c ≈ 18.17 <cit.>. The figure shows the appearance of the power-law tails at large m̂ with a disorder-dependent exponent β, as expressed in Eq. (<ref>). The values of β extracted from the fit of the tails is reported in the right panel of Fig. <ref>. The dashed line represents the prediction of Eq. (<ref>) obtained from the direct diagonalization of the integral operator (<ref>) close to the critical point performed in <cit.>, which is in excellent agreement with the numerical results. In the left panel of Fig. <ref> we explicitly check that the IPR measured from exact diagonalizations of RRGs of N nodes (see App. <ref> for more details) converges in the large N limit to the values obtained using the “improved” population dynamics scheme described in Sec. <ref>. 
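Concretely, a single estimation round of the improved scheme described above can be sketched in Python as follows. The pool arrays and the (omitted) pool-refresh step are placeholders for the standard population-dynamics bookkeeping; this is an illustrative sketch under these assumptions, not the code used to produce the figures.

import numpy as np

def estimate_Ip(pool_g, pool_ghat, W, k, p, n_est=1_000_000, rng=None):
    # pool_g, pool_ghat: numpy arrays holding the real parts g and the
    # linearized imaginary parts ghat of the converged cavity population.
    # Only draws with |sum of k+1 real parts| < W/2 contribute, since only
    # then the on-site energy can tune Re(1/G_ii) to zero.
    rng = np.random.default_rng() if rng is None else rng
    num, den = 0.0, 0.0
    for _ in range(n_est):
        idx = rng.integers(0, len(pool_g), size=k + 1)
        S = pool_g[idx].sum()
        if abs(S) < W / 2:
            m_hat = 1.0 + pool_ghat[idx].sum()
            num += m_hat ** (1 - p) / W   # accumulates rho * I_p
            den += 1.0 / W                # accumulates the density of states rho
    return num / den

In practice this single round is repeated many times, refreshing the population between rounds, and the estimates are averaged until the desired accuracy on I_p is reached.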
Yet, upon decreasing W towards the critical point, the finite-size corrections to the asymptotic value becomes stronger and one needs to diagonalize larger systems in order to see the convergence. As a consequence, a precise estimation of the IPR sufficiently close to W_c from exact diagonalizations of finite-size samples is practically out of reach. Concretely, with currently available resources one cannot get reliable results for W ≲ 22. (The finite N corrections of the IPR to the N →∞ value will be studied in detail in a forthcoming work.) In the right panel of Fig. <ref> we show that the IPR obtained using standard population dynamics for the (non-linearized) self-consistent cavity equations (<ref>) in presence of a small but finite imaginary regulator converges in the small η limit to the value obtained directly at η=0 from Eq. (<ref>) using the algorithm described in Sec. <ref>. However, as W gets closer to W_c the finite-η corrections become stronger and one needs to consider smaller and smaller η to see convergence: The data are well fitted by I_2 (η) ≃ I_2(η=0) + a_ηη^b_η (dashed lines), with an exponent b_η decreasing with W and approaching zero at W_c as b_η∝ (W-W_c)^κ. Since upon decreasing η the probability distribution of the imaginary part of the Green's functions becomes broader and broader, obtaining accurate estimations for its moments, which are controlled by the tails, becomes increasingly hard. In practice, using the standard population dynamics algorithm one can measure η|G_ii|^2 precisely enough only for η≳ 10^-7. For these reasons, computing the η→ 0 limit of the IPR close enough to W_c with the standard approach is essentially unfeasible. Finally, we specifically focus on the critical behavior of the (generalized) IPR close to W_c. In Fig. <ref>(left) we plot I_p (for η=0 and N→∞) as a function of W≥ W_c, for p=1.4, p=2, and p=4, showing that I_p jumps to a finite value at W_c, as predicted by the analytic solution <cit.>. The behavior of I_p for W ≳ W_c is well described by: I_p ≃ I_p^ (c) + a_p √(W - W_c) . Specifically, for p=2 we find I_2^ (c)≃ 0.304 and a_2 ≃ 0.094. To support this claim, in the right panel of Fig. <ref> we perform a parametric plot of I_2 - I_2^(c) as a function of β-1/2, showing that close enough to the localization transition the data are well described by a linear relation. The confirmation of the jump in the IPR's at the localization transition, as predicted by the supersymmetric analysis <cit.>, is an important result, particularly because it has been challenged in recent studies <cit.> (see also <cit.>). Moreover, the square root singularity identified in Fig. <ref>(left)—standing out as one of the most significant contributions of our work—has not been previously documented in the literature. Exploring the possible connection between this distinctive behavior and the recently emphasized transverse length's singular behavior <cit.>—which governs the exponential decay of wave-functions along typical branches of the tree—would offer intriguing insights on the geometric structure of Anderson localized eigenstates on the BL. To conclude this section, it is worth mentioning that by extending the analysis to higher values of the connectivity of the BL (not shown), we observe that the amplitude of the jump of the IPR at W_c grows with k and appears to approach 1 in the infinite connectivity limit, as predicted in <cit.>. §.§ The Distribution function of the eigenfunctions' amplitudes Comparison of Eq. (<ref>) with Eq. 
(<ref>) and averaging immediately leads to: ⟨∑_α |ψ_α(i)|^2 pδ(E_α) ⟩ = ∫ Q(0,m̂) m̂^1-p m̂ . This implies that the moments of the wave-functions' amplitudes can be directly expressed in terms of the distribution of the imaginary part of G_ii^-1 on the scale η. More precisely introducing the distribution T(u) as in <cit.>: T(u) ≡1/ρ⟨∑_αδ ( u-|ψ_α (i) |^2 ) δ(E_α)⟩ , Eq. (<ref>) leads to: T(u) = ρ^-1 Q(0,1/u) 1/u^3 . In Refs. <cit.> the authors obtained the following expressions for the T(u) and I_p analogous to Eqs. (<ref>) and (<ref>). T(u) = ^2 F_l (u)/ u^2 , I_p = p (p-1) ∫ u^p-2 F_l(u) u , in terms of a function F_l(u). By comparison one easily sees that F_l(u) is related to our Q(0,m̂) through Q(0,1/u) = ρ u^3 ^2 F_l (u)/ u^2 . The numerical results for the distributions T(u) are shown in the left panel of Fig. <ref> for several values of the disorder across the localized phase. Since Q(0,m̂) goes to zero as 1/m̂^1+β for large m̂ we have T(u) ∝1/u^2-β . From the above expression one has that ∫ T(u) u is divergent for small values of u. On the other hand this is not consistent with the fact that ∫ T(u) u = N exactly by definition. Ref. <cit.> argues that the matching between the finite N result and thermodynamic limit expression (<ref>) occurs because the integral must be truncated at a some value u_N such that ∫_u_N^∞ T(u) u = N and this leads naturally to u_N ∝1/N^1/1-β . This phenomenon is clearly illustrated in the middle and right panels of Fig. <ref> for W=34 (similar results, not shown, are found for other values of W within the localized phase). In the middle panel we plot the probability distributions of the wave-functions' amplitudes computed from exact diagonalizations of finite RRGs of N nodes (see App. <ref>). For u>u_N these distributions coincide with the one obtained using the cavity approach on the infinite BL, and feature a power-law behavior given in Eq. (<ref>) with β≈ 0.784. For u ≃ u_N the probability distributions on finite graphs exhibit a crossover to a different behavior at small amplitudes described by an integrable square root singularity, T(u) ∝ 1/√(u). The position of the crossover moves to smaller values when the system size is increased, in agreement with the arguments given above. This is shown in the right panel, which indicates that the dependence of the crossover u_N upon the system size is very well described by Eq. (<ref>). §.§ Singular behavior of the IPR within the NLσM formulation In order to confirm the singular behavior of the IPR described in Sec. <ref> and reported in Fig. <ref> and to understand its origin, in this section we consider an effective field-theoretical description of the localization transition first introduced in Ref. <cit.>, in which AL was mapped onto a non-linear σ model with non-compact symmetry. The NLσM representation is obtained as the n →∞ limit of an n-orbital generalization of the problem, which can be viewed as describing an electron hoping between metallic granules containing n orbitals and located at the nodes of the same Bethe lattice. For n=1 the Anderson tight-binding model (with random hopping t_ij) is recovered. Its n-orbital generalization is expected to exhibit the same gross features and the same critical behavior, with the advantage that analytical calculations are usually somewhat simpler. The NLσM on an infinite BL was solved via the supersymmetry approach in Refs. <cit.>. 
This solution is expressed in terms of the following self-consistent integral equation for an order parameter function ψ(t), which is essentially akin to the Laplace transform of the probability distribution of the imaginary part of the Green's functions (with the change of variable t= ln s, s being the variable of the Laplace transform): ψ(t) = ∫_- ∞^+∞ t^' L_γ (t - t^') d(t^') ψ^k (t^') , d(t) = exp( -2 e^t ) , L_γ (t) = e^t/2ℓ_γ (t) , ℓ_γ(t) = ( γ/2 π)^1/2 e^- γcosh t [ sinhγcosh t + ( coshγ - sinhγ/(2 γ) )] . In the NLσM formulation the parameter γ is a dimensionless coupling constant, which plays the role of t_ij/W. The solution of this equation vanishes at t →∞, due to the fact that d(t →∞) = 0, and goes to a constant for t → - ∞ where d(t → - ∞) = 1. In fact one can show <cit.> that the solution decreases monotonically from 1 to 0 as t varies from -∞ to +∞ and has a sharp kink in a region where it decreases rapidly. In the localized phase (i.e., small values of γ), the kink is located somewhere near t=0. For γ larger than a critical value γ_c, instead, the kink is unstable and runs away to minus infinity. Thus the existence of a non-trivial solution of Eq. (<ref>) characterizes the localized phase, while a trivial solution (ψ(t)=0 for t>-∞) corresponds to the metallic phase. Before proceeding, a remark on the expansion around the critical point is in order. Setting γ = γ_c - δγ close to γ_c and expanding the self-consistency equation for ψ(t), one has: ψ_c(t) + δψ_1 (t) + δψ_2 (t)= ∫_- ∞^+∞ t^' [ Γ_c (t - t^') + δΓ (t - t^') ] [ ψ_c(t^') + δψ_1 (t^') + δψ_2 (t^') ]^k , where ψ_c denotes the solution right at γ_c, Γ(t - t^') ≡ L_γ (t - t^') d(t^'), and δΓ is proportional to δγ. Assuming that δψ_1 ∝√(δγ) and δψ_2 ∝δγ, one obtains: δψ_1 (t) = k ∫_- ∞^+∞ t^' Γ_c (t - t^') (ψ_c(t^'))^k-1δψ_1 (t^') , 0 = ∫_- ∞^+∞ t^' δΓ (t - t^') ψ_c(t^')^k + k ∫_- ∞^+∞ t^' Γ_c (t - t^') (ψ_c(t^'))^k-1δψ_2 (t^') + k (k-1) ∫_- ∞^+∞ t^' Γ_c (t - t^') (ψ_c(t^'))^k-2 (δψ_1 (t^'))^2 . The first equation states that δψ_1 is an eigenvector, with eigenvalue 1, of the linearized operator (which, apart from the factor k in front, is the same operator that controls the two-point correlation function). The second equation can be rewritten in terms of the critical operator as: k ∫_- ∞^+∞ t^' Γ_c (t - t^') (ψ_c(t^'))^k-1( δψ_2 (t^') + (k-1) (δψ_1 (t^'))^2/ψ_c(t^')) = - ∫_- ∞^+∞ t^' δΓ (t - t^') ψ_c(t^')^k . One may wonder whether the assumed scalings δψ_1 ∝√(δγ) and δψ_2 ∝δγ are compatible with the fact that this operator has largest eigenvalue equal to one and a continuous spectrum. They are compatible, but the argument is not trivial. Defining E(t) ≡ψ(t)- ∫_- ∞^+∞ t^' L_γ (t - t^') d(t^') ψ^k (t^') and expanding, one has 0 = ∫ dt' A(t,t') δψ(t') + (d E/d γ) δγ + ∫ dt' dt” B(t,t',t”) δψ(t') δψ(t”) , where A(t,t') is the linear operator and B(t,t',t”) is another regular operator. Writing δψ(t) on the basis of the eigenvectors of A, δψ(t) = ∑_q a_q ψ_q(t) , multiplying the equation by the corresponding left eigenvector and integrating, one obtains λ_q a_q + δγ∫ dt e^-t Z(t) ψ_q(t) d E/d γ = 0 , where the properties of the ψ_q(t) are those given by Zirnbauer, so that a_q = - δγ/λ_q∫ dt e^-t Z(t) ψ_q(t) d E/d γ . If only a single eigenvalue λ_0 went to zero at the critical point, the equation would reduce to an equation for a_0 alone, 0=δγ∫ dt e^-t Z(t) d E/d γ + a_0^2∫ dt dt'dt” e^-t Z(t) B(t,t',t”) ψ_0(t') ψ_0(t”) , from which one would obtain δψ(t) ≈ a_0 ψ_0(t) with a_0 = O(δγ^1/2). However, the spectrum of the operator is continuous rather than made of isolated eigenvalues, so this argument cannot be applied straightforwardly: one has to study carefully the projection of d E/d γ onto the ψ_q(t). 
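On the numerical side, the self-consistency equation for ψ(t) can be solved by simple fixed-point iteration on a discretized t-grid, as done later in this section with a much finer mesh and an explicit matching to the exponential tail. The following Python sketch is purely illustrative: the grid, the number of iterations, and the value of γ are arbitrary choices, and the kernel is implemented according to our reading of the formula for ℓ_γ(t) given above.

import numpy as np

# Minimal sketch (k = 2, gamma below gamma_c ~ 0.068, i.e. localized phase):
# iterate psi(t) = \int dt' L_gamma(t - t') d(t') psi(t')^k to a fixed point.
k, gamma = 2, 0.05
t = np.linspace(-40.0, 12.0, 2048)
dt = t[1] - t[0]

def ell(u, g):
    # kernel ell_gamma(u) as written in the text (our reading of the formula)
    return np.sqrt(g / (2 * np.pi)) * np.exp(-g * np.cosh(u)) * (
        np.sinh(g) * np.cosh(u) + np.cosh(g) - np.sinh(g) / (2 * g))

u = t[:, None] - t[None, :]
L = np.exp(0.5 * u) * ell(u, gamma)     # L_gamma(t - t') on the grid
d = np.exp(-2.0 * np.exp(t))            # d(t') cuts off the integral at large t'

psi = np.where(t < 0.0, 1.0, 0.0)       # initial guess: kink near t = 0
for _ in range(2000):
    psi_new = dt * L @ (d * psi**k)
    if np.max(np.abs(psi_new - psi)) < 1e-10:
        break
    psi = psi_new
# On such a coarse, finite grid the boundary condition psi -> 1 for t -> -infinity
# is only approximately maintained; the production runs described below fix it
# by matching to the exponential tail.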
To find the limit of stability of the insulating phase one can consider the linearized equation which describes the effect of infinitesimal perturbations on the solution ψ(t) = 1 for large negative t. This analysis leads to the study of the spectral properties of the kernel L_γ, whose largest eigenvalue λ_γ (β) is given by <cit.>: λ_γ (β) = ∫_- ∞^+∞ t e^(1/2 - β) t ℓ_γ (t) , with β∈ [0,1]. λ_γ (β) shares the very same properties (discussed in Sec. <ref>) of the largest eigenvalue of the integral operator defined by the Kernel (<ref>) which emerges in the Anderson problem when studying the stability of the linearized solution of the cavity equations in the localized phase with respect to a small imaginary part of the Green's functions. In particular λ_γ (β) is symmetric for β→ 1 - β and is thus minimal for β=1/2. Close to the transition β behaves as β≃ 1/2 + cst√(γ_c - γ). The fact that β=1/2 at the critical point then yields a closed equation for γ_c <cit.>. When the transition is approached from the localized side, γ≲γ_c, the solution of Eqs. (<ref>) assumes the asymptotic form ψ(t) ≃ 1 - c e^β t , (for t ≪ -1) , where c is a γ-dependent constant of order unity. The left exponential tail of ψ(t) corresponds in fact to the power-law tails of Q(0,m̂) at large m̂ (see Eq. (<ref>) and Fig. <ref>(left)). We have solved Eqs. (<ref>) numerically for k=2 by iteration for several values of γ≤γ_c. For k=2 the critical point is located at γ_c ≃ 0.06803. In practice we discretized the integral over t^' on a finite mesh of constant spacing of N_ bin points in the interval [t_ min, t_ max]. The boundaries of the interval are chosen in such a way that ψ(t) = 1 for t < t_ min and ψ (t) = 0 for t> t_ max within the numerical accuracy. Furthermore, in the interval t ∈ [t_ min, t_ tail] the function ψ(t) is set to be equal to Eq. (<ref>), with β obtained from the solution of Eq. (<ref>). The constant c is fixed in a self-consistent way, by imposing the continuity of the logarithmic derivative of ψ(t) for t=t_ tail. Below we show the results obtained for N_ bin = 6144, t_ min = -64, t_ tail = -44, and t_ max = 16. Given the solution ψ(t) of Eq. (<ref>), the IPR is obtained as <cit.>: I_2 = 2 ∫_- ∞^+∞ t e^t exp( -2 e^t ) ψ^k+1 (t) . The numerical results for I_2 are reported in the left panel of Fig. <ref>, showing that, as for the Anderson model, the IPR has a finite jump at the transition followed by a square root singularity, as in Eq. (<ref>). To understand the origin of such behavior, in the middle panel of Fig. <ref> we plot the difference between the solution found at γ≲γ_c and solution found right at the critical point, ψ_c (t): δψ (t;γ) ≡ψ(t; γ) - ψ_c (t). Close to γ_c one has: I_2(γ) - I_2^(c)≃ 2 (k+1) ∫_- ∞^+∞ t e^t exp( -2 e^t ) ψ_c^k (t) δψ (t;γ) . The function δψ (t;γ) is the largest in correspondence of the kink, which is located approximately around t=0 and whose position moves to the left as γ is increased towards γ_c. Due to the term e^t exp( -2 e^t ) in the equation above the IPR is also dominated by the region around the kink. Yet, as shown in the figure, the δψ (t;γ)'s obtained at different γ collapse on the same function when divided by √(γ_c - γ), implying that I_2(γ) - I_2^(c)∝√(γ_c - γ) close to γ_c. A further confirmation comes from the inspection of the γ dependence of the prefactor c of the left exponential tail of ψ(t), Eq. (<ref>). As shown in the right panel of Fig. 
<ref>, the prefactor behaves as c(γ) ≃ c_c + c_0 √(γ_c - γ) (with c_c and c_0 of O(1)), implying that for t ≪ -1 one has: δψ (t;γ) ≃√(γ_c - γ) e^1/2 t( δ_1 t + δ_2 + O(t √(γ_c - γ)) ) , with δ_1 and δ_2 of order 1. The analysis of the NLσM thus provides a clear mathematical explanation of the origin of the square root singularity of the IPR: Although the main contribution to I_2 comes from the region around the kink, the matching with the tails at t ≪ -1 produces a scaling regime close to W_c. In this regime, the β dependence of the tails also imparts its influence on the bulk of the distributions. As explained above, ψ(t) is essentially the Laplace transform of the probability distribution of the imaginary part of the Green's function (see Eq. (<ref>)), with the change of variable t = ln s. Hence, the region around the kink corresponds to the values of m̂ of order 1, while the exponential tails at t ≪ -1 of ψ(t), Eq. (<ref>), correspond to the power-law tails of Q(0,m̂) for m̂≫ 1 with exponent 1+β. Drawing inspiration from our examination of the NLσM, below we endeavored to apply a similar scaling analysis to the Anderson model. In order to do so, we define δQ̃(m̂;W) as the difference between the order parameter distribution function Q(0,m̂)/ρ found at disorder W>W_c and right at the critical point W=W_c: δQ̃(m̂;W) ≡Q(0,m̂;W)/ρ(W) - Q(0,m̂;W_c)/ρ(W_c) , in terms of which one has: I_2(W) - I_2^(c) = ∫δQ̃(m̂;W)/m̂ m̂ . In Fig. <ref> we show that, when divided by the square root distance from the critical point (δ W)^1/2 = √(W-W_c), the δQ̃ (m̂;W)'s computed for different disorder levels tend to collapse on the same scaling function when W approaches W_c. The right panel of Fig.<ref> highlights this data collapse particularly in the region m̂≳ 1, which gives the dominant contribution to the integral (<ref>). This implies that the square root singularity of the IPR observed in Fig. <ref> is due to the square root dependence of δQ̃ in the bulk, and in particular at small m̂. In a specular way, the collapse implies that at large m̂ and near W_c, the tails of the order parameter distribution function behave as: Q(0,m̂) ≃c_c + c_0 √(W-W_c)/m̂^1/2 + √(c_1/c_2)√(W - W_c) , (which is the analog of Eq. (<ref>)) where c_1 and c_2 are given in Eq. (<ref>) for k=2 <cit.>. The mechanism by which the square root dependence of the prefactor of the tails is directly inherited from the square root dependence of the exponent of the tails is not obvious, and is certainly an interesting question for future investigations. § TWO-POINTS CORRELATION FUNCTION Finally, we focus on the critical behavior of the two-point function defined in Eq. (<ref>). It is easy to show that, thanks to the tree-like structure of the BL, the off-diagonal elements of the resolvent on two nodes at distance L along a branch of the tree can be expressed in terms of the product of the diagonal elements of the (cavity) Green's functions along the branch: G_i,i+L= t G_i → i+1 t G_i+1 → i+2 ⋯ t G_i+L-1 → i+L G_i+L,i+L . The moments of G_i,i+L can also be computed with high accuracy using a procedure analogous to the one described in Sec. <ref> to evaluate the moments of G_ii. To this aim we introduce: M_i,i+L≡1/G_i,i+L = m_i,i+L + i η m̂_i,i+L . Following the same steps as above, one can show that lim_η→ 0^+η^p-1 |G_i,i+L|^p = ∫_-∞^+∞ x 1/(1 + x^2)^p/2 ×∫_-∞^+∞m̂_i,i+L Q_L(0,m̂_i,i+L) 1/|m̂_i,i+L|^p-1 , where Q_L( M_i,i+L) is the probability distribution of M_i,i+L. 
This relation has been obtained in the η→ 0 limit by approximating Q_L(ηm̂ x,m̂) ≈ Q_L(0,m̂). As detailed in App. <ref>, in order to evaluate Q_L(0,m̂_i,i+L) within the population dynamics algorithm one proceed in a way similar as the one described in Sec. <ref>. The results of this procedure are illustrated in Fig. <ref>. (Note that the value of the correlation function in L=0 is proportional to the IPR, which is of O(1) in the whole localized phase including the critical point.) The expected asymptotic behavior of the two-point function at large L obtained from the analytic solution on the infinite BL is given by Eq. (<ref>). In the left panel of Fig. <ref> we plot lim_η→ 0 k^L L^3/2 (η |G_i,i+L|^2) as a function of L for several values of the disorder in the localized phase and for k=2. The plot shows an exponential decay at large L, in agreement with Eq. (<ref>). The localization length can be thus defined from: ξ_ loc^-1 = lim_L →∞Θ(L) , Θ(L) ≡ - 1/L ln ( lim_η→ 0 k^L L^3/2η |G_i,i+L|^2 ) . In the middle panel of Fig. <ref> we plot Θ(L) as a function of L for several values of W. The plateau reached by Θ(L) at large L provides an estimation of ξ_ loc^-1. The values of ξ_ loc^-1 obtained in this way are reported in the right panel of Fig. <ref>, showing that ξ_ loc is described by Eq. (<ref>), with c_1 ≃ 0.0307 ± 0.0002 and W_c ≃ 18.17 ± 0.05, in perfect agreement with the critical behavior obtained by diagonalizing explicitly the integral operator (<ref>) that governs the linear stability of the cavity equations in the localized phase for k=2 <cit.>. § CONCLUSIONS In this paper we have analyzed the localized phase of the Anderson model on the infinite BL. We have put forward an improved population dynamics scheme to compute the moments of the imaginary part of the Green's function directly in the limit η=0 with unprecedented accuracy even very close to the critical point. This approach allows one to validate the critical behavior predicted by the supersymmetric analysis <cit.> with very high accuracy. It also unveils a remarkable feature that has not been reported in the previous literature: The finite jump of the IPR at the transition is followed by a square root singularity, whose existence is also confirmed by the analysis of the effective NLσM formulation of the problem on the BL. It would be interesting to interpret this result in terms of the geometric structure of localized eigenstates on the BL, and understand whether the singular behavior of the IPR is related to the one of the transverse localization length which controls the exponential decay of the wave-functions on typical branches <cit.>. Ultimately, delving into the loop corrections to the BL solution of AL and broadening the analysis of Ref. <cit.> to encompass the insulating phase presents a highly intriguing prospect. Given that a comprehensive understanding of the loop corrections hinges on mastering very precisely the BL solution, the current investigation serves as a pivotal stride forward, laying the foundation for further exploration in this direction. We warmly thank Y. Fyodorov, G. Lemarié and A. D. Mirlin for illuminating discussions. § STABILITY OF THE LINEARIZED EQUATION The linearized cavity equations (<ref>) and (<ref>) must be interpreted as a self-consistent integral equation for the probability distribution P(g,ĝ): P(g,ĝ) = ∫ϵ p(ϵ) ∏_i=1^k [ g_i ĝ_i P(g,ĝ) ] δ(g - 1/ϵ - ∑_i=1^k g_i) δ(ĝ - g^2 ( 1 + ∑_i=1^k ĝ_i ) ) . 
This equation is more conveniently written performing the Laplace transform with respect to ĝ <cit.> (note ĝ takes only strictly positive values): P̂(g,s)=∫ϵ p(ϵ) ∏_i^k [ P̂(g_i,s g^2) g_i] δ(g - 1/ϵ - ∑_i=1^k g_i) e^- s g^2 . Following Ref. <cit.> we identify the localization transition as the point where the above equation ceases to have a solution. To do so we focus on the region s ≪ 1, corresponding to the tail of the probability of the imaginary part, ĝ≫ 1. At small values of s we assume that: P̂(g,s) ≈ P_0(g) + f (g) s^β , where P_0(g) ≡P̂(g,0) is the probability distribution of the real part of the Green's function, corresponding to the solution of Eq. (<ref>) with η=0. Note that the ansatz (<ref>) corresponds to Eq. (<ref>) of the main text. Plugging the above small-s form into the equation (<ref>) we obtain the following linear equation for the function f (g): f (g)=k ∫ϵ p(ϵ) ∏_i=2^k[ P_0(g_i) g_i ] δ(g - 1/ϵ - ∑_i=1^k g_i) |g|^2β f (g_1) g_1 We now introduce the probability distribution of the sum of the real part of k-1 cavity Green's functions: P̃ (g̃) ≡∫∏_i=1^k-1[ P_0 (g_i) g_i ] δ( g̃ - ∑_i=1^k-1 g_i ) . (Note that for k=2 one has that P̃ (x) = P_0 (x) <cit.>.) Inserting this identity into Eq. (<ref>) we obtain: f (g)=k ∫ϵ p(ϵ) g̃P̃ ( g̃ ) δ(g - 1/ϵ - g_1 - g̃) |g|^2β f (g_1) g_1 , which coincides with Eqs. (<ref>) and (<ref>) of the main text. The condition that the above homogeneous equation admits a solution fixes the value of β. This is only possible if the largest eigenvalue λ_β of the integral operator is smaller than 1. The critical point is identified by the point where no solution exists. It is easy to check that the probability distribution of the real part of the Green's function P_0(g) is an eigenvector of the integral operator for β=0, corresponding to the largest eigenvalue k (see Fig. <ref>). As explained in the main text, it can be shown that β=1/2 at the critical point. In fact, since the integral operator above is non-symmetric, for each eigenvalue, there will be a right and left eigenvector. After inegrating over the δ-function, using the fact that δ (w(g̃)) = δ(g̃_0)/|w^'(g̃_0)|, with g̃_0 = ϵ - g_1 - 1/g and |w^'(g̃_0)| = g^2, one gets: λ_β ψ_β (g) = k |g|^2(β-1)∫ϵ p(ϵ) P̃( ϵ -g_1 - 1/g) ψ_β (g_1) g_1 , λ_β ϕ_β (g_1) = k ∫ϵ p(ϵ) P̃( ϵ -g_1 - 1/g) |g|^2(β-1)ϕ_β (g) g . From the second equation, defining ψ_1-β (g_1) = |g_1|^-2 βϕ(1/g_1) and changing variable g → 1/g in the left hand side, one sees that ψ_1-β (g_1) is a right eigenvector of the integral operator (<ref>) for β→ 1 - β with the same eigenvalue λ_β. Hence the spectrum of (<ref>), and in particular its largest eigenvalue, must be symmetric around β=1/2, as schematically illustrated in Fig. <ref>. As mentioned in the main text, the largest eigenvalue of the integral operator (<ref>) for β=1/2 also determines the long distance behavior of the two-point correlation function. The argument goes as follows. Setting η = 0 in Eq. (<ref>), the cavity recursion relation for the imaginary part of the cavity Green's function can be written as: G_i → j = ∑_m ∈∂ i / j G_m → i/( ϵ _i + ∑_m ∈∂ i / j G_m → i)^2 + ( ∑_m ∈∂ i / j G_m → i)^2 = | G_i → j|^2 ∑_m ∈∂ i / j G_m → i . 
Considering a node of the BL labeled as i_n (in absence of one of its neighbors labeled as i_n+1), the cavity recursion equation for G_i_n → i_n+1 can be telescoped in the following way in terms of the imaginary parts of the Green's function on the (k+1) k^n-1 nodes (labeled as i_i) at distance n from i_n <cit.>: G_i_n → i_n+1 = ∑_paths  P P:i_n → i_1∏_i_m ∈ P| G_i_m → i_m+1|^2 G_i_1 → i_2 , where P are all the (k+1) k^n-1∼ k^n directed paths of length n of the BL originating from the node i_n and ending on the nodes i_1. As explained in the main text, the correlation function of wave-functions' amplitudes on two nodes i and j of the BL is encoded in |G_i,j|^2. Thanks to the tree-like structure of the BL, it is easy to show that the off-diagonal elements of the resolvent on two nodes at distance n along a branch of the tree can be expressed in terms of the product of the diagonal elements of the (cavity) Green's functions along the branch: G_i_1,i_n= G_i_1 → i_2 G_i_2 → i_3 ⋯ G_i_n-1→ i_n G_i_n,i_n . Hence, the products of | G_i_m → i_m+1|^2 appearing in Eq. (<ref>) is proportional to the two-point correlation function between nodes i_1 and i_n. One thus obtains that the typical value of the imaginary part of the cavity Green's function on node i_n G_i_n → i_n+1^ typ∝ k^n C(n) G_i_1 → i_2^ typ In the localized phase, W>W_c, for η=0 the typical value of the imaginary part of the Green's function decreases under iteration. § ANALYSIS OF THE CRITICAL BEHAVIOR OF THE TWO-POINTS CORRELATION FUNCTION As explained in the main text, the off-diagonal elements of the resolvent on two nodes at distance L along a branch of the BL can be expressed in terms of the diagonal elements of the (cavity) Green's functions as in Eq. (<ref>). As a consequence, the generic moment of the correlation |G_i,i+L|^p can be written in terms of the following linear operator K_p (G,G_1) ≡∫ϵ p(ϵ) δ( G - 1/ϵ - iη - G_1 - G̃) |G|^p P̃ (G̃) G̃ , where P̃ (G̃) is the distribution of the sum of k-1 cavity Green's functions P̃ (G̃) ≡∫∏_i=1^k-1[ P(G_i) G_i ] δ(G̃ - ∑_i=1^k-1 G_i ) . The expression above reduces to Eq. (<ref>) for p=2. With this definition the two-point correlation function can be written as: | G_i,i+L|^p = ∫ P(G_2) G_2 ϵ p(ϵ) G̃P̃ (G̃) 1/|ϵ - iη - G - G_2 - G̃|^p K_p^L-1 (G,G_1) P(G_1) G G_1 , where K_p^L(G,G_1) is the L-th power of the operator K_p. Similarly to the integral operator associated to the linear stability of the cavity equation with respect to the imaginary part discussed in the previous section, we note that the operator (<ref>) is not symmetric, therefore for each eigenvalue there will be a left and right eigenvector, respectively ϕ_λ(G) and ψ_λ (G). On the other hand the symmetry of the problem implies that if in the expression for | G_i,i+L|^p we replace P(G_1) with a generic positive A(G_1) and P(G) with a generic positive B(G), the result must be symmetric with respect to the exchange A ↔ B. This implies that we can express the left eigenvector as a function of the right one: ϕ_λ(G) ∝∫ϵ p(ϵ) G̃P̃ (G̃) ψ_λ (G_1) /|ϵ- iη -G - G_1 - G̃ |^p G_1 . Similarly we have the following orthonormality relationships ∫ϵ p(ϵ) G̃P̃ (G̃) ψ_λ (G_1) ψ_λ' (G)/|ϵ- iη - G - G_1 - G̃ |^p G G_1 ∝δ_λλ' . From which we can define: ϕ_λ(G) = A_λ^-1∫ϵ p(ϵ) G̃P̃ (G̃) ψ_λ (G_1) /|ϵ- iη -G - G_1 - G̃ |^p G_1 . with A_λ≡∫ϵ p(ϵ) G̃P̃ (G̃) ψ_λ (G_1) ψ_λ (G)/|ϵ- iη - G - G_1 - G̃ |^p G G_1 . 
It follows that we have: | G_i,i+L|^p = ∫_λλ^L-1 ( ∫ϵ p(ϵ) G̃P̃ (G̃) P (G_1) ψ_λ (G)/|ϵ- iη - G - G_1 - G̃ |^p G G_1 )^2 /( ∫ϵ p(ϵ) G̃P̃ (G̃) ψ_λ (G_1) ψ_λ (G)/|ϵ- iη - G - G_1 - G̃ |^p G G_1 ) Both integrals above are of the form ∫ f(M)1/|M|^p dM ≈η^1-p√(π)Γ((p-1)/2)/Γ(p/2)∫ f(0,m̂)m̂^1-p d m̂ so that: | G_i,i+L|^p = η^1-p√(π)Γ((p-1)/2)/Γ(p/2)∫_λλ^L-1 ( ∫ f_1 (0,m̂)m̂^1-p d m̂)^2 /( ∫ f_2(0,m̂)m̂^1-p d m̂) As usual, in the localized phase we will be interested in considering the small η limit in which the cavity equations can be linearized with respect the imaginary part. Performing the Laplace transform with respect to the imaginary part we obtain the following expression <cit.>: K_p (g,s|g_1,s_1) ≡∫ϵ p(ϵ) g̃P̃ (g̃ , s_1) δ(g - 1/ϵ - g_1 - g̃) δ(s_1-s g^2 ) | g |^p e^-s_1 . We note that the function e^-s_1P̃ (g̃,s_1) tends to a constant P̃ ( g̃ ,0) for s_1 going to zero and tends to zero for s_1 going to infinity. As a consequence for s_1 going to zero the eigenvectors of the operator take the form g_λ(g)s^α. To proceed in a systematic way we perform the change of variable τ=ln s, by writing [USE t INSTEAD OF τ] h(s) =∫ B(s,s') g(s') s' , h(τ) =∫ B(τ,τ') s'/τ' g(τ') τ' , h(τ) ( s/τ)^1/2 =∫ ( s/τ)^1/2 B(τ,τ') ( s'/τ')^1/2( ( s'/τ')^1/2 g(τ') ) τ' . From Eqs. (<ref>) and (<ref>) we thus obtain that the eigenvector of the original problem can be written in terms of the eigenvector of the new operator B_p(g,τ | g_1, τ_1) ≡ e^τ/2 K_p(g,τ|g_1,τ_1)e^τ_1/2 = ∫ϵ p(ϵ) g̃P̃ (g̃ , e^τ_1) δ(g - 1/ϵ - g_1 - g̃) δ(τ_1-τ-ln(g^2) ) |g |^p-1 exp( -e^τ_1) . In the limit τ_1 → -∞ (, s → 0) we have that e^-e^τ_1P̃ (g̃,e^τ_1) →P̃ (g̃,0) and the operator B_p becomes invariant under translation of τ and τ_1. Therefore at large τ the operator is diagonal in momentum space: ∫ B_p (g,τ|g_1,τ_1) f(g_1) e^ iμτ_1 g_1 τ_1 ≈ e^ iμτ∫ϵ p(ϵ) g̃P̃ (g̃, 0) δ(g - 1/ϵ - g_1 - g̃) | g |^p-1+ 2 iμ f(g_1) g_1 , where the function f(g) must thus be a solution of the eigenvalue equation: λ_p,μ f(g) = ∫ϵ p(ϵ) g̃P̃ (g̃, 0) δ(g - 1/ϵ - g_1 - g̃) | g |^p-1+ 2 iμ f(g_1) g_1 . For p=2 we recognize the operator (divided by a factor k) controlling the critical point for β=1/2, Eqs. (<ref>), (<ref>), and (<ref>), studied in Ref. <cit.>. Thus, according to Eq. (<ref>), the largest eigenvalue of this operator behaves as 1/k - c_1/k (W-W_c) close to W_c, with correction of order μ^2. For small values of μ it is convenient to study the eigenvector in two different region. For large positive or negative τ of order μ^-1 we have: τ = x/μ , exp( -e^τ_1 ) P̃ (g̃, e^τ_1) →P̃ (g̃, 0) θ(-x) , CHECK B(g,x|g_1,x_1) →∫ϵ p(ϵ) g̃P̃ (g̃, 0) δ(g - 1/ϵ - g_1 - g̃) | g |^p-1 δ(x_1-x) θ(-x) . CHECK Since the variables x and g are decoupled, the eigenvector takes the form: ψ_λ (g,x/μ) → f(g) sin (x) θ(-x) , λ_μ= λ_0+ c_2 μ^2 , with some constant that is given by (write the formula). Note that the fact that the cos x term is absent follows from the continuity of the solution in zero. In the region of finite τ where exp(-e^τ) P̃(g̃,e^τ) is different from a step function, the solution must match the large τ behavior sinμτ≈μτ and the eigenvector is given by ψ_λ(g,τ) = μ ψ(g,τ) where ψ(g,τ) is the solution of the equation λ_0 ψ(g,τ) = ∫ϵ p(ϵ) g̃P̃ (g̃ , e^τ_1) δ(g - 1/ϵ - g_1 - g̃) δ(τ-τ_1 -2ln(g) ) | g |^p-1exp( -e^τ_1) ψ(g_1,τ_1) g_1 τ_1 , with the condition that: ψ(g,τ) → f (g) τ for τ→ -∞ . § ALGORITHM TO EVALUATE EQ. (<REF>) The algorithm implemented to evaluate Eq. 
(<ref>) and compute Q(0,m̂) is schematically summarized as follows: xx x̄x x̄x x̄x x̄x x̄x x̄x x̄xxxx Algorithm computing I_p and Q(0,m̂) begin Initialize population of Ω elements (g_α, ĝ_α)_α = 1, …, Ω Iterate population using Eqs. (<ref>) and (<ref>) until convergence to a stationary distribution begin I_p = 0; Q(0,m̂)=0 for r=1 to N_ avg do a=0; b=0 for i=1 to N_ est do Sample k+1 elements from the pool (g_α_j, ĝ_α_j), j=1,…,k+1 Extract a random energy ϵ from the box distribution of width W Compute S = ∑_j=1^k+1 g_α_j if |S| < W/2 then Compute m̂ from Eq. (<ref>): m̂ = 1 + ∑_j=1^k+1ĝ_α_j a = a+m̂^1-p/W b=b+1/W Add m̂ to Q(0,m̂) end if end for a=a/N_ est b=b/N_ est I_p= I_p+ a/b Renew all the elements of the population end for I_p= I_p/N_ avg Normalize Q(0,m̂) end end § EXACT DIAGONALIZATIONS OF THE ANDERSON MODEL ON RRGS OF N NODES We preform exact diagonalizations of the Anderson tight-binding model (<ref>) on finite Bethe lattices of fixed connectivity k+1=3. Finite BLs are in fact random-regular graphs of N nodes, a class of random lattices that have locally a tree-like structure but do not have boundaries. More precisely, a (k+1)-RRG is a lattice chosen uniformly at random among all possible graphs of N vertices where each of the sites has fixed degree k+1. The properties of such random graphs have been extensively studied (see Ref. <cit.> for a review). A RRG can be essentially thought as a finite portion of a tree wrapped onto itself. It is known in particular that for large number of vertices any finite portion of a RRG is a tree with a probability going to one as N →∞: RRGs have loops of all size but short loops are rare and their typical length is of order ln N/ln k <cit.>. Thanks to the sparse nature of the graph, exact diagonalizations can be efficiently performed using the Arnoldi method, which provides a few eigenvalues and eigenvectors around E=0. In practice we consider the 64 nearest eigenstates to zero energy. When comparing the results obtained from exact diagonalizations with the analytic predictions obtained at E=0, a suitable approach is taken to minimize corrections arising from the small deviation of eigenvalues from precisely zero energy. This is achieved through the following procedure: We start by noticing that the eigenvectors of H are also eigenvectors of H + γ𝕀 with all eigenvalues shifted by γ,  E_α→ E_α + γ. Thus an eigenvector of H of energy E_α is an eigenvector of zero energy of an Anderson model (<ref>) with all random energies shifted by -E_α (note that since we consider only a finite number of eigenvectors, the E_α's are of order 1/N, and thus only a few random energies ϵ_i - E_α will fall outside the box of width W after the shift). Since H = ∑_i ϵ_i converges to a normal distribution with zero mean and variance N W^2/12, shifting the trace of H by N E_α must be reweighted by a factor e^- 6 N E_α^2/W^2. As a result, to obtain the averages at zero energy of a generic observable which depends on the wave-functions' amplitudes, we use the following expression: O({ψ_α (i) } ) = ∑_α e^- 6 N E_α^2/W^2 O({ψ_α (i) }) /∑_α e^- 6 N E_α^2/W^2 . § COMPUTATION OF THE TWO-POINT FUNCTION In order to evaluate the moments of the correlation function using Eq. (<ref>) we start by defining M_i,i+L (defined in Eq. (<ref>)) as the product of two random variables defined as follows: M_i,i+L = H_i,i+L M_i+L,i+L , H_i,i+L = 1/G_i → i+1 G_i+1 → i+2⋯ G_i+L-1 → i+L= h_i,i+L + i η ĥ_i,i+L , M_i+L,i+L = ϵ_i+L - iη - ∑_m ∈∂ i+L G_m → i+L = m_i+L,i+L - i η m̂_i+L,i+L . 
Note that H_i,i+L and M_i+L,i+L are correlated since one of the neighbors of i+L is the node i+L-1 and the Green's function G_i+L-1 → i+L enters in the sum on the right hand side of the last equation. In the η→ 0 limit we have that: m_i,i+L = h_i,i+L m_i+L,i+L , m̂_i,i+L = ĥ_i,i+L m_i+L,i+L - h_i,i+Lm̂_i+L,i+L . Since m_i+L,i+L = ϵ_i+L - ∑_m ∈∂ i+L g_m → i+L, one has that the probability that m_i,i+L=0 is equal to the probability that ϵ_i = ∑_m ∈∂ i+L g_m → i+L. This occurs with probability density 1/W if |∑_m ∈∂ i+L g_m → i+L|<W/2, and with zero probability otherwise. Based on this observation, we implement the following algorithm to compute the correlation function η^p-1 |G_i,i+L|^p between two nodes at distance L: We apply the standard population dynamics method to obtain a stationary probability distribution of the cavity Green's function P(g,ĝ) in the linearized regime, corresponding to the solution of Eqs. (<ref>) and (<ref>); We compute H_i,i+L on a chain of length L; We compute S = ∑_m ∈∂ i+L g_m → i+L on the last node of the chain; If and only if |S|<W/2 we add | h_i,i+Lm̂_i+L,i+L|^1-p/W to the value of the correlation; We repeat this procedure N_ est times and divide the result by the total number of attempts. We renew the pool of the cavity Green's function by performing a few steps of the population dynamics algorithm and repeat the whole process N_ avg times until the desired accuracy on I_p is reached. The algorithm is schematically summarized below: xx x̄x x̄x x̄x x̄x x̄x x̄x x̄xxxx Algorithm computing η^p-1 |G_i,i+L|^p begin Initialize population of Ω elements (g_α, ĝ_α)_α = 1, …, Ω Iterate population using Eqs. (<ref>) and (<ref>) until convergence to a stationary distribution begin η^p-1 |G(L)|^p = 0 for r=1 to N_ avg do a=0 for i=1 to N_ est do Sample one element from the pool G_1 = (g, ĝ) Initialize H = h + η iĥ = 1/G_1 for m=1 to L-1 do Sample k-1 elements from the pool G_j= (g_α_j, ĝ_α_j), j=1,…,k-1 Sample a random energy ϵ_m from the box distribution Compute the cavity Green's function G_m = 1/(ϵ_m - G_1 - ∑_j G_j) Update H = h + η iĥ = H/G_m end for Sample k+1 elements from the pool G_j = (g_α_j, ĝ_α_j), j=1,…,k+1 Extract a random energy ϵ from the box distribution of width W Compute S = ∑_j=1^k+1 g_α_j if |S| < W/2 then Compute m̂ = 1 + ∑_j=1^k+1ĝ_α_j a = a+ | h m̂|^1-p/W end if end for a=a/N_ est η^p-1 |G(L)|^p = η^p-1 |G(L)|^p + a Renew all the elements of the population end for η^p-1 |G(L)|^p = η^p-1 |G(L)|^p/ N_ avg end end
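As a complement to the pseudocode above, a single estimation round can be sketched in runnable Python as follows. The pool arrays stand in for the converged population of linearized cavity Green's functions, the pool-refresh step is omitted, and, following the pseudocode, the correlation between the last chain link and the closing node is not made explicit; this is therefore an illustrative transcription rather than the production code.

import numpy as np

def corr_moment(pool_g, pool_ghat, W, k, p, L, n_est=100_000, rng=None):
    # Sketch of the estimator of lim_{eta -> 0} eta^{p-1} <|G_{i,i+L}|^p>.
    # Only the real parts are needed along the chain (eta = 0 recursion);
    # the linearized imaginary part enters only through m_hat at the far end,
    # since m_{i,i+L} = 0 implies |mhat_{i,i+L}| = |h * mhat_{i+L,i+L}|.
    rng = np.random.default_rng() if rng is None else rng
    acc = 0.0
    for _ in range(n_est):
        g_prev = pool_g[rng.integers(len(pool_g))]
        h = 1.0 / g_prev                       # h = prod_m 1/g_{m -> m+1}
        for _ in range(L - 1):
            idx = rng.integers(0, len(pool_g), size=k - 1)
            eps = rng.uniform(-W / 2, W / 2)
            g_prev = 1.0 / (eps - g_prev - pool_g[idx].sum())
            h /= g_prev
        idx = rng.integers(0, len(pool_g), size=k + 1)   # closing node i+L
        if abs(pool_g[idx].sum()) < W / 2:     # Re(1/G_{i+L,i+L}) can be tuned to zero
            m_hat = 1.0 + pool_ghat[idx].sum()
            acc += abs(h * m_hat) ** (1 - p) / W
    return acc / n_est

As in the diagonal case, this round is repeated after renewing the population, and the results are averaged until the desired accuracy is reached.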
http://arxiv.org/abs/2406.18309v1
20240626125007
Automated Immunophenotyping Assessment for Diagnosing Childhood Acute Leukemia using Set-Transformers
[ "Elpiniki Maria Lygizou", "Michael Reiter", "Margarita Maurer-Granofszky", "Michael Dworzak", "Radu Grosu" ]
cs.LG
[ "cs.LG", "q-bio.QM" ]
Automated Immunophenotyping Assessment for Diagnosing Childhood Acute Leukemia using Set-Transformers This project has received funding from the European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No 101034277. Elpiniki Maria Lygizou1, Michael Reiter1, Margarita Maurer-Granofszky2, Michael Dworzak2, Radu Grosu1 1TU Wien elpiniki.lygizou@tuwien.ac.at, rei@cvl.tuwien.ac.at, radu.grosu@tuwien.ac.at 2St. Anna Children's Cancer Research Institute margarita.maurer@ccri.at, michael.dworzak@ccri.at Received 7 March 2024 / Accepted 23 May 2024 =========================================================================================================================================================================================================================================================================================================================================== § ABSTRACT Acute Leukemia is the most common hematologic malignancy in children and adolescents. A key methodology in the diagnostic evaluation of this malignancy is immunophenotyping based on Multiparameter Flow Cytometry (FCM). However, this approach is manual, and thus time-consuming and subjective. To alleviate this situation, we propose in this paper the FCM-Former, a machine learning, self-attention based FCM-diagnostic tool, automating the immunophenotyping assessment in Childhood Acute Leukemia. The FCM-Former is trained in a supervised manner, by directly using flow cytometric data. Our FCM-Former achieves an accuracy of 96.5% assigning lineage to each sample among 960 cases of either acute B-cell, T-cell lymphoblastic, and acute myeloid leukemia (B-ALL, T-ALL, AML). To the best of our knowledge, the FCM-Former is the first work that automates the immunophenotyping assessment with FCM data in diagnosing pediatric Acute Leukemia. immunophenotyping, multiparameter flow cytometry, set-transformers, self-attention § INTRODUCTION Acute Leukemias are a heterogeneous group of hematologic malignancies (cancers), which progress rapidly. Hence, their prompt detection is crucial for a successful treatment. These diseases are primarily categorized based on the lineage of the affected cells, referring to the type of precursor cells (early-form cells), that begin to multiply at an accelerated rate. Immunophenotyping is an essential part of the precise diagnosis and classification of acute leukemia <cit.>, although it is manual, time-consuming, and subjective, by relying on the experience and knowledge of domain experts. A reliable tool for immunophenotyping is flow cytometry, a laser-based biophysical technique which provides a quick and comprehensive multi-parameter analysis of individual cells or particles. In order to automate the immunophenotyping assessment in the diagnosis of Childhood Acute Leukemia, we introduce in this paper the FCM-Former, a machine learning and self-attention-based classification algorithm for FCM data. The FCM-Former is based on the Set-Transformer architecture of <cit.>, and it is designed to work directly with the FCM data obtained from CCRI in Vienna, without any additional pre-processing. This direct approach, also allows the FCM-Former to take advantage of the high dimensionality of the FCM data. The main goal of the FCM-Former, is to accurately classify the malignancy into one of three lineages: B-ALL, T-ALL and AML. 
To the best of our knowledge, the FCM-Former is the first work that automates the immunophenotyping assessment with FCM data in diagnosing pediatric Acute Leukemia. Our work thus represents an important step towards applying advanced machine learning techniques to critical healthcare challenges, in particular the accurate and timely diagnosis of pediatric acute leukemia. The rest of this paper is structured as follows. In Section 2 we review foundational concepts for this work. In Section 3, we discuss the related work and previous approaches. In Section 4, we describe the experimental methodology, leading to Section 5, where we present our experimental results. Finally, we conclude in Section 6, by summarizing our key insights, and proposing directions for future work. § BACKGROUND §.§ Multiparameter Flow Cytometry Multiparameter flow cytometry (FCM) serves as a robust and powerful tool for both analytical and preparative applications <cit.>,<cit.>. In FCM, a blood or bone-marrow sample of a patient is stained with a specific combination of fluorochrome-labelled antibodies (markers) uniquely binding to antigens, either intracellular or on the cell surface. The resulting data are a set of measurements (feature vectors) of the physical (size, granularity) and biological (multiple surface/intracellular markers) properties of every single cell (an event). This set (sample) thus characterizes the phenotype of a whole cell population. §.§ Transformers In this section, we delve into the core principles of the Transformer <cit.> and Set-Transformer <cit.> models, focusing on the self-attention mechanisms behind them. Transformers. These are advanced deep learning models, primarily developed for natural language processing (NLP). Their unique architecture is characterized by a self-attention mechanism, allowing them to focus on complex relationships within data and capture meaningful patterns in large-scale data. This leads to effective context understanding. The multi-headed self-attention mechanism, as introduced in <cit.>, is defined for a given set of n query vectors Q (n corresponds to the set's size, i.e., the set consists of n elements) each with dimension d_q, Q ∈ ℝ^n_q × d_q, a key matrix K ∈ ℝ^n_v × d_q and a value matrix V ∈ ℝ^n_v × d_v, where d_q = d_v = d for the sake of simplicity. It can be described as a function according to the formula below: attn(Q,K,V) := softmax(QK^T/√(d))V (1) If Q and K are derived from the same set of inputs (as in self-attention), the QK^T multiplication is quadratic in the set size, which prevents the direct application to FCM data. Set-transformers (ST). This derivative of the original transformer architecture is designed to operate on set-structured data, where ordering is irrelevant to the input information. The associated model adapts the transformer's architecture to handle input data that lacks a clear sequential or grid-like structure, as is the case for the FCM data in our application. The building block of Set Transformers reduces the O(n^2) complexity of self-attention to O(nm) by incorporating inducing points into the standard multi-head self-attention block of Formula (1), where n is the input set size and m is the number of learnable inducing points. 
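Formulas (2)-(6) below make this construction precise. As a preview, the following PyTorch-style sketch illustrates the inducing-point idea; the class, layer names, and default sizes are our own illustrative assumptions (the defaults mirror the m=16, d=32, 4-head configuration reported in Section 4), the residual and normalization structure is only schematic, and this is not the authors' implementation.

import torch
import torch.nn as nn

class InducedSetAttentionSketch(nn.Module):
    # m learnable inducing points first attend to the n input events, producing
    # a compact summary; the events then attend back to that summary, so the
    # cost scales as O(n*m) instead of O(n^2).
    def __init__(self, dim=32, num_heads=4, num_inducing=16):
        super().__init__()
        self.inducing = nn.Parameter(torch.randn(1, num_inducing, dim))
        self.attn_summary = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.attn_broadcast = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm1 = nn.LayerNorm(dim)
        self.norm2 = nn.LayerNorm(dim)
        self.rff1 = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))
        self.rff2 = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, s):                        # s: (batch, n_events, dim)
        i = self.inducing.expand(s.size(0), -1, -1)
        # inducing points query the events (inner block, cf. MSAB(I, S))
        h = self.norm1(i + self.attn_summary(i, s, s)[0])
        summary = h + self.rff1(h)
        # events query the m-point summary (outer block, cf. STAB(S))
        out = self.norm2(s + self.attn_broadcast(s, summary, summary)[0])
        return out + self.rff2(out)

In the FCM-Former, a learnable class token is appended to the event set before such blocks and read out by a linear classifier, as described in Section 4.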
Formally, the construction begins by projecting Q, K, V onto h different d_h^q-, d_h^q-, and d_h^v-dimensional vectors, respectively, where d_h^q = d_h^v = d/h, such that: Multihead(Q, K, V ) := concat(O_1,..., O_h)W^O (2) where O_j = attn(QW_j^Q, KW_j^K, VW_j^V) (3) where W_j^Q, W_j^K, W_j^V are projection operators of dimensions ℝ^d_q × d_h^q, ℝ^d_q × d_h^q and ℝ^d_v × d_h^v, respectively, and W^O is a linear operator of dimension d × d relating O_1,… O_h to each other. Furthermore, given a set S of d-dimensional vectors, we initialize m d-dimensional inducing points I ∈ ℝ^m × d. Then, the Multihead Set-Transformer Attention Block (MSAB) is computed by the following formulas: MSAB(I,S) := LayerNorm(X + rFF(X)) (4) where X = LayerNorm(I + Multihead(I, S, S)) (5) where rFF denotes a row-wise feedforward layer, and LayerNorm is layer normalization as described in <cit.>. Finally, the Set-Transformer Attention Block (STAB) is defined as follows: STAB(S) := MSAB(S,MSAB(I,S)) (6) § RELATED WORK Manual analysis of FCM data, which plays a crucial role in various medical and biological fields, typically involves representing and transforming the high-dimensional space of raw data into 2-D plots for human interpretation. This technique, while making the data more comprehensible, can lead to a loss of information. However, machine learning methods can utilize the full data space and tackle this shortcoming. Automated FCM data analysis primarily focuses on identifying and classifying distinct or specific cell populations. Initial methods in this domain pooled events from different samples, employing classifiers based on single-event pairs and labels <cit.>,<cit.>,<cit.>, but were limited to fixed decision regions. This approach was less effective in discerning relational positioning among cell populations, a key factor in detecting rare or abnormal cells, particularly in Minimal Residual Disease (MRD) detection <cit.>. Subsequent developments in FCM analysis therefore shifted towards processing a whole sample in a unified manner. Techniques such as Gaussian Mixture Models (GMM) <cit.> and Convolutional Neural Networks (CNNs) <cit.> emerged, which were applied to multiple 2-D projections of the data space. These methods addressed some limitations of earlier approaches, particularly in maintaining relational context among cell populations. However, they are less suited for tabular data analysis, which characterizes FCM data. In the realm of automated immunophenotyping, statistical methods have been proposed that employ distance-based analysis in the space of principal components calculated on a database of FCM reference samples <cit.>. While these methods rely on strict standardization of the data acquisition process (flow cytometer settings, FCM panels, etc.), our aim is to process data acquired under diverse conditions, without the need for such rigid standardization, by using machine learning models that are able to identify and relate structures in the data space and can thus cope to a larger degree with data distortions. More recently, attention-based models have gained prominence in automated FCM analysis <cit.>,<cit.>,<cit.>,<cit.>. These models, using attention mechanisms, emulate the human logic of manual FCM data analysis, but retain and leverage the high-dimensional data-space information of the FCM data. They enable event-level classification by learning the importance of various cell populations within a sample. 
They are well known for their SOTA performance in tasks such as automated MRD detection <cit.>, and recently, in adults acute leukemia diagnosis <cit.>. However, these methods often assume a fixed set of features during training and inference, which can be a limitation given the high variability of FCM data features even within a single dataset. Some recent studies have explored combining features from different samples using techniques like nearest neighbor imputation <cit.>,<cit.>, but the efficacy of these approaches is still under scrutiny due to potential inaccuracies in imputed values affecting downstream analysis <cit.>. A late work <cit.> attempts to address these issues by employing a feature-agnostic, attention-based method with promising results. § EXPERIMENTAL SETUP §.§ Data Immunophenotyping for diagnosing childhood acute leukemia typically engaged the use of multiple tubes per sample, each with a different combination of markers. FCM data are presented in matrix format, where each sample comprises multiple diagnostic tubes. Each tube holds thousands of events, corresponding to feature vectors of individual cells. Our model utilizes a fixed number of features, and consists of 18 markers and 4 physical properties, as measured by the forward and side scatter of the laser light of the flow cytometer, thereby standardizing the input. Our training fixed-feature list is as follows: FSC-A, FSC-W, FSC-H, SSC-A, CD45, CD71, CD34, CD19, (i)CD79A, (i)CD3, (i)CD22, CD10, CD5, CD7, CD13, CD117, CD33, SY41, LZ, (i)MPO, CD64 and CD65. FCM-Former thus involves the aggregation of the features present across three datasets, by following the guidelines presented in <cit.>, <cit.>, with missing values imputed as zeros. In our work, we ensured that our model training was not biased, by the presence or absence of markers related with lineage-specific markers, which are typically either used or excluded by experts, following the analysis' conclusion of the initial tubes. We represent a single sample by a matrix E ∈ℝ^N × m, where N denotes the number of cells (events) in the sample, and m denotes the number of features per cell (which was 22 in our case, as listed above). N is equal to t × K, where t is the number of tubes (typically 8-13) and K the number of cells in every tube (typically 10^4 - 10^5, the exact value varies for every tube and every sample). For every index n ∈1,...,N, E_n ∈ℝ^m is a quantitative representation of physical and biological properties of every cell. §.§ Datasets We evaluate FCM-Former on samples of blood or bone marrow of pediatric patients with B-ALL, T-ALL or AML. The data set consisting of 960 samples was collected at CCRI from 2011 to 2022, with a BD LSR II flow cytometer or BD FACSSymphony A3 and FACSDiva Software (all Becton Dickinson, San Jose, CA). The samples were stained using a multi-color approach, based on a CD45-Backbone. Markers against lymphoid lineages in each tube allowed defining potential control cells. Immunphenotyping was essentially performed as proposed in <cit.>. Sampling and research were approved by local Ethics Committees, and informed consent was obtained from patients, their parents, or legal guardians, according to the Declaration of Helsinki. For all samples ground truth information was acquired by manual immunophenotyping assessment, conducted by CCRI experts. §.§ Model An overview of the FCM-Former architecture is depicted in Figure 1. Our model incorporates an encoder coupled with a linear classification layer. 
We use an ST encoder as presented in Section 2. Inspired by Vision-Transformers (ViT) <cit.>, our model is augmented by an additional class token, a learnable feature vector, into the encoder's input. At the output of the ST encoder, the trained class token is retrieved and then fed into a linear classification layer. We treat our problem as a single-label classification, and use a cross-entropy loss for supervised training. FCM-Former processes a single sample of FCM data in a single forward pass. Unlike typical transformer-based approaches that incorporate an embedding step, our model is applied directly to FCM samples, specifically bypassing any form of positional embedding. We set the number of induced points to m=16, hidden dimension d=32 ,and the number of attention heads to 4, for all three layers. We train our model for 200 epochs and use an early stop after 50 epochs if there is no improvement of the accuracy on the validation set. Throughout all the experiments, we use the cosine-annealing learning rate scheduler with an initial learning rate of 0.001, lowering to a minimum of 0.0002 over 10 iterations for fine-tuning purposes. The Adam optimizer is applied across these experiments while batch processing is not part of our experimental setup. All training processes are executed using an NVIDIA GeForce RTX 3090. The resulting model is comparatively lightweight with 31,572 parameters. The accuracy and the ROC-AUC are used as evaluation metrics. § RESULTS Here we present the results of the conducted experiments, evaluated on accuracy and roc-auc metrics. To ensure the robustness and generalizability of our results, we implemented a 5-fold cross-validation technique. For all experiments, the data are divided into 660 training samples, 100 validation samples, and 200 test samples. The model demonstrates exceptional proficiency in identifying the lineage of Childhood Acute Leukemia, achieving a peak accuracy of 0.965 and peak roc-auc value 0.9708 on the test datasets. The average accuracy of the model on test datasets across all folds is 0.9408 ± 0.0217 and the average roc-auc respectively is 0.9638 ± 0.0063. We additionally experimented with implementing a cross-attention mechanism in our model and trained it accordingly, using as a query Q the learnable vector of the class token, and K, V the linear projections of the input set, as in self-attention. However, the outcomes of this cross-attention mechanism under-performed compared to self-attention, indicating that cross-attention constrains the model's ability to effectively attend to the most relevant parts and relationships within the entire input dataspace, rather than enhancing it. Furthermore, our model is adaptable to variability across different clinical centers and devices. It facilitates straightforward retraining on new FCM data, with diverse features, highlighting its scalability and potential for integration into clinical routine. We identified major causes for misclassification. Cross-lineage marker expression contributed significantly to errors and was the most common cause. Some misclassifications revealed inherent biological complexity, as seen in cases of mixed phenotype acute leukemias (MPAL). Additionally, cases with minimal blast percentages (less than 5%) underscored the impact of low cellularity on accurate classification. Poor sample quality and the resulting compromised data quality may also pose challenges to precise classification. 
§ RESULTS Here we present the results of the conducted experiments, evaluated with the accuracy and ROC-AUC metrics. To ensure the robustness and generalizability of our results, we implemented a 5-fold cross-validation technique. For all experiments, the data are divided into 660 training samples, 100 validation samples, and 200 test samples. The model demonstrates exceptional proficiency in identifying the lineage of childhood acute leukemia, achieving a peak accuracy of 0.965 and a peak ROC-AUC of 0.9708 on the test datasets. The average accuracy of the model on the test datasets across all folds is 0.9408 ± 0.0217, and the average ROC-AUC is 0.9638 ± 0.0063. We additionally experimented with implementing a cross-attention mechanism in our model and trained it accordingly, using as query Q the learnable vector of the class token, and as K, V the linear projections of the input set, as in self-attention. However, this cross-attention mechanism underperformed compared to self-attention, indicating that cross-attention constrains, rather than enhances, the model's ability to effectively attend to the most relevant parts and relationships within the entire input dataspace. Furthermore, our model is adaptable to variability across different clinical centers and devices. It facilitates straightforward retraining on new FCM data with diverse features, highlighting its scalability and potential for integration into clinical routine. We identified major causes of misclassification. Cross-lineage marker expression contributed significantly to errors and was the most common cause. Some misclassifications revealed inherent biological complexity, as seen in cases of mixed phenotype acute leukemias (MPAL). Additionally, cases with minimal blast percentages (less than 5%) underscored the impact of low cellularity on accurate classification. Poor sample quality and the resulting compromised data quality may also pose challenges to precise classification. These insights highlight the significance of detailed marker analysis and of acknowledging biological heterogeneity in improving machine learning models for the classification of leukemia. § CONCLUSIONS We proposed FCM-Former, a new and automated method for immunophenotyping to diagnose childhood acute leukemia. We trained FCM-Former in a supervised manner and showed that it is capable of generalizing to new, unseen data. To the best of our knowledge, FCM-Former is the first attempt to automate the diagnosis of pediatric acute leukemia using FCM data. FCM-Former employs self-attention mechanisms, enabling it to attend to all cells in the sample at once, taking advantage of the whole high-dimensional data space and avoiding the information loss encountered in the traditional process of manual immunophenotyping assessment. The average performance metrics underscore FCM-Former's consistent reliability and effectiveness in diagnosing childhood acute leukemia. For future work, we would like to extend and improve the performance of our model to predict the mixed lineage and the sub-types of childhood acute leukemia, using only FCM data. c5 Dworzak, M.N., Buldini, B., Gaipa, G., Ratei, R., Hrusak, O., Luria, D., Rosenthal, E., Bourquin, J.P., Sartor, M., Schumich, A. and Karawajew, L., 2018. AIEOP‐BFM consensus guidelines 2016 for flow cytometric immunophenotyping of pediatric acute lymphoblastic leukemia. Cytometry Part B: Clinical Cytometry, 94(1), pp.82-93. c2 Lee, J., Lee, Y., Kim, J., Kosiorek, A., Choi, S. and Teh, Y.W., 2019, May. Set transformer: A framework for attention-based permutation-invariant neural networks. In International conference on machine learning (pp. 3744-3753). PMLR. c21 Shapiro, H.M., 2005. Practical flow cytometry. John Wiley & Sons. c22 Henel, G. and Schmitz, J.L., 2007. Basic theory and clinical applications of flow cytometry. Laboratory Medicine, 38(7), pp.428-436. c1 Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł. and Polosukhin, I., 2017. Attention is all you need. Advances in neural information processing systems, 30. c6 Lei Ba, J., Kiros, J.R. and Hinton, G.E., 2016. Layer normalization. arXiv preprint arXiv:1607.06450. c3 Woedlinger, M., Reiter, M., Weijler, L., Maurer-Granofszky, M., Schumich, A., Sajaroff, E.O., Groeneveld-Krentz, S., Rossi, J.G., Karawajew, L., Ratei, R. and Dworzak, M.N., 2022. Automated identification of cell populations in flow cytometry data with transformers. Computers in Biology and Medicine, 144, p.105314. c9 Abdelaal, T., van Unen, V., Höllt, T., Koning, F., Reinders, M.J. and Mahfouz, A., 2019. Predicting cell populations in single cell mass cytometry data. Cytometry Part A, 95(7), pp.769-781. c10 Licandro, R., Schlegl, T., Reiter, M., Diem, M., Dworzak, M., Schumich, A., Langs, G. and Kampel, M., 2018, August. WGAN latent space embeddings for blast identification in childhood acute myeloid leukaemia. In 2018 24th International Conference on Pattern Recognition (ICPR) (pp. 3868-3873). IEEE. c11 Ni, W., Hu, B., Zheng, C., Tong, Y., Wang, L., Li, Q.Q., Tong, X. and Han, Y., 2016. Automated analysis of acute myeloid leukemia minimal residual disease using a support vector machine. Oncotarget, 7(44), p.71915. c12 Reiter, M., Rota, P., Kleber, F., Diem, M., Groeneveld-Krentz, S. and Dworzak, M., 2016. Clustering of cell populations in flow cytometry data using a combination of Gaussian mixtures. Pattern Recognition, 60, pp.1029-1040.
c13 Reiter, M., Diem, M., Schumich, A., Maurer‐Granofszky, M., Karawajew, L., Rossi, J.G., Ratei, R., Groeneveld‐Krentz, S., Sajaroff, E.O., Suhendra, S. and Kampel, M., 2019. Automated flow cytometric MRD assessment in childhood acute B‐lymphoblastic leukemia using supervised machine learning. Cytometry Part A, 95(9), pp.966-975. c14 Arvaniti, E. and Claassen, M., 2017. Sensitive detection of rare disease-associated cell subsets via representation learning. Nature communications, 8(1), p.14825. c23 Lhermitte, L., Mejstrikova, E., Van Der Sluijs-Gelling, A.J., Grigore, G.E., Sedek, L., Bras, A.E., Gaipa, G., Sobral da Costa, E., Novakova, M., Sonneveld, E. and Buracchi, C., 2018. Automated database-guided expert-supervised orientation for immunophenotypic diagnosis and classification of acute leukemia. Leukemia, 32(4), pp.874-881. c15 Kowarsch, F., Weijler, L., Wödlinger, M., Reiter, M., Maurer-Granofszky, M., Schumich, A., Sajaroff, E.O., Groeneveld-Krentz, S., Rossi, J.G., Karawajew, L. and Ratei, R., 2022, September. Towards Self-explainable Transformers for Cell Classification in Flow Cytometry Data. In International Workshop on Interpretability of Machine Intelligence in Medical Image Computing (pp. 22-32). Cham: Springer Nature Switzerland. c8 Lewis, J.E., Cooper, L.A., Jaye, D.L. and Pozdnyakova, O., 2024. Automated Deep Learning-Based Diagnosis and Molecular Characterization of Acute Myeloid Leukemia Using Flow Cytometry. Modern Pathology, 37(1), p.100373. c16 Weijler, L., Kowarsch, F., Reiter, M., Hermosilla, P., Maurer-Granofszky, M. and Dworzak, M., 2024. FATE: Feature-Agnostic Transformer-based Encoder for learning generalized embedding spaces in flow cytometry data. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (pp. 7956-7964). c18 Leite Pereira, A., Lambotte, O., Le Grand, R., Cosma, A. and Tchitchek, N., 2019. CytoBackBone: an algorithm for merging of phenotypic information from different cytometric profiles. Bioinformatics, 35(20), pp.4187-4189. c19 Pedersen, C.B., Dam, S.H., Barnkob, M.B., Leipold, M.D., Purroy, N., Rassenti, L.Z., Kipps, T.J., Nguyen, J., Lederer, J.A., Gohil, S.H. and Wu, C.J., 2022. cyCombine allows for robust integration of single-cell cytometry datasets within and across technologies. Nature communications, 13(1), p.1698. c20 Mocking, T.R., Duetz, C., van Kuijk, B.J., Westers, T.M., Cloos, J. and Bachas, C., 2023. Merging and imputation of flow cytometry data: a critical assessment. Cytometry Part A. c4 Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S. and Uszkoreit, J., 2020. An image is worth 16x16 words. arXiv preprint arXiv:2010.11929. c24 Ratei, R., Karawajew, L., Lacombe, F., Jagoda, K., Poeta, G.D., Kraan, J., De Santiago, M., Kappelmayer, J., Björklund, E., Ludwig, W.D. and Gratama, J.W., 2007. Discriminant function analysis as decision support system for the diagnosis of acute leukemia with a minimal four color screening panel and multiparameter flow cytometry immunophenotyping. Leukemia, 21(6), pp.1204-1211.
http://arxiv.org/abs/2406.17724v1
20240625171335
Spatiotemporal statistical features of velocity responses to traffic congestions in a local motorway network
[ "Shanshan Wang", "Michael Schreckenberg", "Thomas Guhr" ]
physics.soc-ph
[ "physics.soc-ph", "physics.data-an" ]
Spatiotemporal statistical features of velocity responses to traffic congestions in a local motorway network Shanshan Wang, Michael Schreckenberg, Thomas Guhr July 1, 2024 ======================================================================================================================================================== Abstract. The causal connection between congestions and velocity changes at different locations induces various statistical features, which we identify and measure in detail. We carry out an empirical analysis of large-scale traffic data on a local motorway network around the Breitscheid intersection in North Rhine-Westphalia, Germany. We put forward a response function which measures the velocity change at a certain location versus time, conditioned on a congestion at another location. We use a novel definition of the corresponding congestion indicator to ensure causality. We find that the response of velocities to the congestion exhibits phase changes in time. A negative response at smaller time lags transforms into a positive one at larger time lags, implying a certain traffic mechanism. The response decays as a power law with the distance. We also identify a scaling property leading to a collapse of the response functions on one curve. Keywords: response function, traffic congestion, power law, scale invariance § INTRODUCTION The traffic flow on road networks <cit.> consists of free flow and congested flow. According to the three-phases traffic theory <cit.>, the congested flow further contains synchronized flow and wide moving jams. Extensive studies on the dynamic behavior of traffic flow have been devoted to modeling and simulations in past decades <cit.>. At present, neither models nor simulations fully capture realistic traffic situations, which may be affected by commuting, weather, seasons, road construction, traffic accidents, big city events, etc. A huge amount of traffic data collected by global positioning system tracking devices, inductive loop detectors, video recording devices, etc. <cit.> is available, making empirical studies possible <cit.>. Along with theoretical studies, data-driven analyses to explore traffic flow dynamics <cit.>, traffic patterns <cit.>, traffic congestion <cit.>, traffic flow prediction <cit.> and the resilience after traffic jams <cit.> are called for and pose a variety of challenges. Due to the non-stationarity in the time series of traffic observables, including traffic flows and velocities, a traffic network can be viewed as a complex system, where traffic observables are correlated in various ways in time and space. Temporal correlation matrices together with the technique of k-means clustering have been used for identifying different quasi-stationary states in traffic systems <cit.>. The states, manifesting themselves in correlation structures, carry certain traffic patterns related to non-stationarity. In contrast to a financial market <cit.>, the presence of spatial information <cit.> renders a traffic network more complicated. The correlations between time series measured at arbitrarily labeled locations or positions induce a topology which has to be mapped onto the real topology, i.e. that of the road map. This has led to the identification of collective and sub-collective traffic behavior in the motorway network of North Rhine-Westphalia <cit.>. The propagation of effects among road sections via a traffic network takes time, inducing temporal shifts of correlation structures.
Non-synchronized time series from different road sections bring about cross-correlations with a time lag or lead. A recent study <cit.> discloses a spectral transition in the symmetrized matrix of time-lagged correlations. Importantly, the spectral transition is associated with the duration of traffic congestion. In addition to correlations, response functions have been introduced as a novel concept to explore the interaction of road sections with non-synchronized time series <cit.>. They measure the time-dependent response of some observable, conditioned on events which are encoded in indicator functions. The response function has been used in financial markets to study how the trading price changes conditioned on a buy or a sell <cit.>. It consists of a response variable and a triggering event. In traffic, the latter could be traffic congestion <cit.>, traffic accidents <cit.>, road construction <cit.>, the presence of trucks <cit.>, etc. Our previous study on the response of velocities to heavy congestion <cit.> was conducted with five neighbouring sections on a motorway. The response function measures the average velocity change versus time on a motorway section conditioned on heavy congestion occurring on a different section at an earlier time. Despite the fact that a remarkable response with phase transitions shows up, it is difficult to establish a causal relation between the velocity change and the heavy congestion, as congestion may occur simultaneously on other sections in addition to the section acting as the trigger. Furthermore, the degree of velocity change on a section depends largely on its specific traffic environment, for instance, a section with a bottleneck, a must-pass section for commuting, or a section on a bridge. In our previous study <cit.>, we ignored the effect of the traffic environment on the responses, as the considered sections were rather close to each other. When studying the responses among sections distributed on a motorway network in two dimensions, the environments of the sections may vary largely over the network, which has to be taken into account. We extend the study of the considered motorway network from one to two dimensions. First, we introduce a conditional indicator, which rules out the possibility of synchronized congestion on multiple sections and guarantees that the response is caused only by the section acting as the trigger. Second, we employ an alternative definition of response functions, which removes the effect of random noise on velocity changes. Third, by considering many sections distributed on a two-dimensional motorway network, we are able to explore the spatial features of responses, in addition to the temporal features. This paper is organized as follows. In Sec. <ref>, we provide some basic information and concepts for this study, including the considered local motorway network, the used traffic data, the method to aggregate velocities across multiple lanes, and the network distances. In Sec. <ref>, we introduce the response functions for this study as well as the indicator functions that ensure causality. In Sec. <ref>, we analyze our empirical results and find phase changes in time as well as power laws and scaling invariance in space. We conclude in Sec. <ref>. § DATA DESCRIPTION In Sec. <ref>, we introduce the local motorway network considered in this study and the traffic data. In Sec. <ref>, we describe the method of averaging velocities across multiple lanes on a motorway section. In Sec.
<ref>, we briefly describe the network distance and its computation method. §.§ The studied motorway network and traffic data In this study, we focus on a local motorway network near Breitscheid in North Rhine-Westphalia (NRW), Germany, which is part of the large-scale NRW motorway network. The considered local network is mainly composed of motorway A52 connecting the densely populated cities of Düsseldorf and Essen, motorway A3 connecting the cities of Duisburg and Mettmann, motorway A40 connecting the cities of Duisburg and Essen, and motorway A524 connecting the city of Duisburg with other motorways, as displayed in Fig. <ref>a. The intersection of motorways A3 and A52 is at Breitscheid and carries heavy traffic flow of commuters during rush hours on workdays. We select a section on motorway A52 as close to this intersection as possible to study the effect of its congestion on other sections nearby. As seen in Fig. <ref>, this section toward the north-east (NE) is our section j, located in the center of the network and playing the role of the congested section. The other sections, either toward the north-east or toward the south-west (SW), are the sections i that respond to it. Within a network distance of 15 km from section j, we have 68 sections i in total. Our traffic data is accumulated with inductive loop detectors on the motorway network. It includes information on traffic flow and on velocity with a resolution of one minute for each lane on each motorway section. The data used in this study comprises 179 workdays selected from the period from Dec. 1, 2016 to Nov. 30, 2017. On each considered workday, our central section j exhibits a velocity lower than or equal to 10 km/h for at least one minute between 5:00 and 22:59, which guarantees the presence of heavy congestion on section j during this period. Moreover, the used data for each section from 5:00 to 22:59 on the 179 workdays is of high quality with more than 96.7% non-missing values. We fill the missing values in the data with the linear interpolation of neighboring, non-missing values <cit.>. §.§ Velocities on individual sections One motorway section has one or more lanes, leading to one or multiple velocities per minute. The latter case requires aggregation of the velocity across multiple lanes, such that for one section there is one velocity per minute. For the velocity aggregation, we use the flow-weighted velocity here, which is different from the density-weighted velocity that we used in our previous studies <cit.>. The traffic flow is the number of vehicles passing through a road section per unit time, while the density is the number of vehicles per unit distance of road. The former is a time-dependent observable obtained from the data directly, while the latter is a space-dependent quantity that has to be worked out via flow and velocity. In contrast to the density-weighted velocity, the flow-weighted velocity better reflects the velocity per individual vehicle. Let the traffic flow and velocity at time t on lane m of section i be denoted q_i,m(t) and v_i,m(t), respectively. We define the flow-weighted velocity v_i(t) for section i across multiple lanes as the sum of the velocity times the flow on each lane divided by the total flow on this section, v_i(t)=∑_m q_i,m (t)v_i,m(t)/∑_m q_i,m(t) .
Distinguishing car flows and truck flows, we further extend the above equation to v_i(t)=∑_m (q_i,m^(c) (t)v_i,m^(c)(t)+q_i,m^(t) (t)v_i,m^(t)(t))/∑_m (q_i,m^(c)(t)+q_i,m^(t)(t)) , where the superscripts (c) and (t) indicate the quantities for cars and for trucks, respectively. For convenience, we refer to the flow-weighted velocity as velocity in the following. As an example, Fig. <ref>a shows the time evolution of the velocity on the central section j averaged over 179 workdays. Two valleys are visible for morning and afternoon rush hours, where the valley during afternoon rush hours is much deeper and wider. Setting the critical velocity to 10 km/h, the time periods with lower velocity constitute the congested phase, while the remaining time constitutes the non-congested phase, as depicted in Fig. <ref>b. As expected, many congestions, in particular short congestions, occur during rush hours, but few or even none exist during non-rush hours. §.§ Network distances A network distance is the distance of the shortest path on the network between two locations. For our local motorway network, the locations at the ends of a path are the motorway sections. The path along the motorway network is composed of many short motorway pieces connecting two close locations. Each short piece is similar to a straight line and its distance is approximately a straight-line Euclidean or a geodetic distance. Therefore, the network distance of a path is the sum of the distances of all short pieces along the path. In this way, we obtain network distances between any two sections with the help of the Java application Osmosis and the Python packages OSMnx and NetworkX. Exchanging the origin and the destination of two given sections i and j, the distances in units of kilometers change very little. In view of this, the network distance from i to j is equal to the network distance from j to i, i.e. l_ij = l_ji. Figure <ref> visualizes the sections i within different ranges of network distances to the central section j. The shortest path between two sections is a curve rather than a perfect straight line. Thus, the sections i within each distance range are not located in a ring or a circle centered around the central section j. As an example, Fig. <ref> visualizes the sections within different distance ranges and the covered areas of the motorway network within a given distance range l from the central section j. § RESPONSE FUNCTIONS To study the response to congestion, we define an indicator function for a given critical velocity v_c as ε_i(t) = 1 if v_i(t) < v_c and ε_i(t) = 0 if v_i(t) ≥ v_c, where ε_i(t)=1 corresponds to congested traffic and ε_i(t)=0 to non-congested traffic. The three-phases theory <cit.> provides a possible interpretation for congested traffic. In a local motorway network, simultaneous congestions may occur on multiple sections, obscuring the causality between congestion and velocity changes on different sections. To unambiguously disclose this causality, it is essential to capture the effect of congestion on one section without the interference of simultaneous congestions on others. We define an indicator of congestion on section j under the condition that there are no congestions on other sections k. Furthermore, we do that with spatial resolution by only including network distances l_kj smaller than or equal to a given distance threshold l_ω. Hence we introduce ω_j(t|l_ω)=ε_j(t)∏_l_kj≤ l_ω, k≠ j(1-ε_k(t)) . A simultaneous congestion on any section k with k≠ j implies ω_j(t|l_ω)=0. In this way, the conditional indicator removes the contribution of congestion from multiple sections to the velocity change under consideration. When congestion is absent on every section except section j at time t, the conditional indicator is ω_j(t|l_ω)=1 and only the congestion on section j contributes to the velocity change.
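For illustration, the velocity aggregation and the two indicator definitions above can be written compactly as follows; the array layouts and function names are our own assumptions, presupposing per-minute lane-level flows and velocities and a precomputed indicator matrix for all sections.

```python
import numpy as np

def flow_weighted_velocity(q, v):
    """Flow-weighted velocity per minute for one section.

    q, v: arrays of shape (T, M) holding per-lane flows and velocities; car and
    truck contributions can simply be stacked as additional columns. Minutes
    with zero total flow would need special handling and are ignored here.
    """
    return (q * v).sum(axis=1) / q.sum(axis=1)

def congestion_indicator(v_section, v_c=10.0):
    """epsilon(t): 1 where the section velocity falls below the critical velocity."""
    return (v_section < v_c).astype(int)

def conditional_indicator(eps, j, dist_to_j, l_omega=15.0):
    """omega_j(t | l_omega): section j is congested while no other section within
    network distance l_omega of j is congested.

    eps: (S, T) indicator matrix for all S sections; dist_to_j: (S,) distances l_kj.
    """
    others = (dist_to_j <= l_omega) & (np.arange(eps.shape[0]) != j)
    return eps[j] * np.prod(1 - eps[others], axis=0)
```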
A velocity change is also termed a velocity increment between times t and t+τ on section i, Δ v_i(t,τ)=v_i(t+τ)-v_i(t) , where τ is referred to as the time lag. The velocity increment varies largely across different traffic environments. We define the response function of velocities to the conditional indicators for a given distance l_ω as R_ij(τ|l_ω)=⟨Δ v_i(t,τ)ω_j(t|l_ω)⟩-⟨Δ v_i(t,τ)⟩⟨ω_j(t|l_ω)⟩ . The average ⟨⋯⟩ is taken over the times t. The response function (<ref>) depends on the chosen critical velocity v_c. It measures, on average, how strongly the velocity on section i changes from time t to t+τ, given that at time t only section j is congested. From a formal mathematical viewpoint, the response function is a time-lagged covariance. As one of the time series is an indicator, we prefer the term response function. If R_ij(τ |l_ω)>0, the two observables move in the same direction, i.e., the increase (or decrease) of the velocity change is accompanied by the increase (or decrease) of the conditional indicator. In contrast, the two quantities move in opposite directions when R_ij(τ |l_ω)<0. The second term in Eq. (<ref>) is the unconnected part, hence the response vanishes if there is no mutual dependence between the congestions and the velocity change. It depends on the studied system whether one finds it convenient to include this unconnected part. The effect of congestion propagates both in time and in space via the neighbouring sections <cit.>. A section geographically close to the congested section suffers more influence from the congestion than a section far away <cit.>. As the network distance l_ij between the impacted section i and the congested section j plays an important role in the congestion propagation, we incorporate the spatial information into the response function (<ref>). To assess the spatial characteristics in a more general way, we average over all impacted sections i in the region defined by l_ij≤ l, ⟨ R_ij(τ,l|l_ω)⟩_i=∑_iR_ij(τ |l_ω)Θ (l-l_ij)/∑_iΘ (l-l_ij) , where the step function Θ (l-l_ij) equals 1 if l≥ l_ij and 0 if l<l_ij, and thus extracts all sections i that satisfy the distance condition. The average in Eq. (<ref>) captures the response within the specified range and washes out the noise in the individual response functions. § EMPIRICAL RESULTS AND DISCUSSION To empirically work out the response functions, we first apply Eq. (<ref>) to the time series of each workday and then average the response values for each given τ over different workdays to obtain R_ij(τ|l_ω). Averaging R_ij(τ |l_ω) over different distance ranges l by Eq. (<ref>) finally results in ⟨ R_ij(τ,l|l_ω)⟩_i. In the following, we first discuss the response behavior with respect to the time evolution in Sec. <ref>. We then analyze the transitions of response phases in Sec. <ref>. We also explore how the response changes with increasing distance ranges in Sec. <ref>. We further inspect the feature of scale invariance in the response function in terms of distance ranges in Sec. <ref>. §.§ Time-dependent response behavior According to Fig. <ref>, we select three typical time periods, i.e.
morning rush hours from 6:00 to 10:59, afternoon rush hours from 15:00 to 19:59, and non-rush hours from 10:00 to 14:59. Each time period contains 300 minutes with a time step of 1 minute. Considering a motorway network centered around section j within the largest reachable network distance l_ω=15 km for the conditional indicator ω_j(t), we work out the averaged responses of velocities on sections i to the congestion on section j, shown in Fig. <ref>b, within different distance ranges l running from 2 km to 15 km in increments of 1 km. Here the range within l=1 km only contains one section i, and averaging is unable to eliminate the individuality carried by section i paired with section j. We therefore ignore this case. A difference in the strength of this individuality is visible in Fig. <ref>d. In spite of it, the basic characteristics of the response curves are similar. Figure <ref>b depicts the overall characteristics of the responses and, correspondingly, Fig. <ref>e zooms in on the negative responses at small τ. Within each l, the averaged response ⟨ R_ij(τ,l |l_ω)⟩_i as a function of the time lag τ first drops down to negative values and then rises up to positive values. The negative value persists for more than 10 minutes until the positive value shows up. Such behavior emerges during both morning and afternoon rush hours. In contrast, the response is too weak to be observed during non-rush hours. Usually the congested phases dominate most of the time during rush hours, while non-congested phases are prevalent most of the time during non-rush hours. The comparison between rush and non-rush hours reveals that the presence of remarkable responses is stimulated by the congested phase rather than the non-congested phase. Essentially, the response function is a covariance function which reveals the collective motion of two quantities, e.g. Δ v_i(t,τ) and ω_j(t) in our study. The case of ω_j(t)=0 is complicated. It corresponds either to the non-congested phase on section j or to the congested phase on both section j and another section. Therefore the contribution to the response with ω_j(t)=0 is difficult to distinguish. In contrast, the case of ω_j(t)=1 only corresponds to the congested phase on section j accompanied by non-congested phases on all sections i. The resulting response is causally related to the congestion on section j to some extent. For the negative response at small τ, when the binary conditional indicator ω_j(t)=1, the velocity changes Δ v_i(t,τ) relative to the average velocity change caused by noise become negative, implying that the velocity on section i decreases due to the congestion on section j. On the other hand, for the positive response at large τ, the Δ v_i(t,τ) relative to its average become positive when ω_j(t)=1, suggesting that the velocity on section i increases conditioned on the congestion on section j. For comparison, we also work out the responses with regard to different types of indicators, given in a uniform formula by R_ij(τ|l_ω)=⟨Δ v_i(t,τ)η_j(t|l_ω)⟩-⟨Δ v_i(t,τ)⟩⟨η_j(t|l_ω)⟩ . When the indicator η_j(t|l_ω)=ω_j(t|l_ω), we arrive at the response function (<ref>) with respect to congestion that occurs only on section j. When η_j(t|l_ω)=ε_j(t), we obtain the responses to the congestion on section j regardless of simultaneous congestions on other sections i, see Fig. <ref>a. Obviously, this response is stronger than the response to the congestion only on section j, as a comparison of Figs. <ref>a and b shows, since the former contains a part of the responses to other sections i. In other words, the conditional indicator exactly excludes the response components caused by sections other than section j, so as to preserve the causality between each section pair. In an extreme scenario, every section in the considered local motorway network is congested. In this scenario, the conditional indicator is defined as ω̃_j(t|l_ω)=∏_l_kj≤ l_ωε_k(t) . Setting η_j(t|l_ω)=ω̃_j(t|l_ω) in Eq. (<ref>) yields the response to the congestion on every section. The response, however, is zero for each τ, as shown in Fig. <ref>c, since the aforementioned scenario is essentially impossible to reach unless the whole local motorway network breaks down. In our empirical data, we have not encountered this scenario so far.
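Before turning to the transitions of the response phases, a minimal sketch of how the response estimator and its distance-range average defined above could be evaluated on one workday of one-minute data is given below; the array shapes and helper names are illustrative assumptions, and the subsequent average over the 179 workdays is omitted.

```python
import numpy as np

def response(dv, omega):
    """R(tau | l_omega) for one lag: <dv * omega> - <dv><omega>, averaged over t."""
    return np.mean(dv * omega) - np.mean(dv) * np.mean(omega)

def response_curve(v_i, omega_j, taus):
    """Response of section i to the conditional congestion on section j versus lag,
    for one workday of one-minute data (taus are positive integers in minutes)."""
    curve = []
    for tau in taus:
        dv = v_i[tau:] - v_i[:-tau]                 # velocity increments Delta v_i(t, tau)
        curve.append(response(dv, omega_j[:-tau]))  # indicator evaluated at time t
    return np.array(curve)

def distance_average(R, dists, l):
    """Average the responses over all sections i with network distance l_ij <= l.

    R: (S, n_tau) response curves of all sections i; dists: (S,) distances l_ij.
    """
    return R[dists <= l].mean(axis=0)
```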
§.§ Transitions of response phases To explore the traffic dynamics from the perspective of velocity responses, we analyze different regimes. First, negative and positive responses are separated by the critical point τ_c (0<τ_c<30 min), at which the response vanishes, ⟨ R_ij(τ,l |l_ω)⟩_i |_τ=τ_c=0 . In our previous study <cit.>, we refer to the response occurring before τ_c as the transient response (or response phase 1) and after τ_c as the long-term response (or response phase 2). Phase 1 (phase 2) with the negative (positive) response reveals the lowering (raising) of the velocity on section i caused by the congestion on section j. Furthermore, the response has a minimum at τ_min (0<τ_min<30 min) and a maximum at τ_max (30 min <τ_max<240 min), where its derivative vanishes, ∂/∂τ⟨ R_ij(τ,l |l_ω)⟩_i |_τ=τ_min or τ_max=0 . The extremal points may be viewed as indicating transitions, reflecting the competition between vehicle deceleration and acceleration on the impacted sections i. The three critical points separate the response in time and space into four regions, yielding a phase portrait for each rush-hour period, as depicted in Fig. <ref>. For 0<τ≤τ_min, the congestion causes a high probability of vehicle deceleration, resulting in a decrease of the velocity on section i. Vehicles decelerate to a minimal value at around 4 or 5 minutes, depending on the distance range. In comparison to the initial velocity, the velocity on section i changes negatively. The magnitude of the velocity change decays with the distance range. For τ_min<τ≤τ_c, with the congestion relief, vehicle acceleration occupies most of the time, leading to an increase of the velocity from a negative value back to the initial value. Roughly speaking, the larger the distance range, the more quickly the vehicles recover to their initial velocities. The persistent acceleration during τ_c<τ≤τ_max further drives the velocity change to a positive value. The long-lasting vehicle acceleration attracts more traffic flow, which further reverses the change of velocity and leads to a reduction in velocity during τ_max<τ≤240 min. §.§ Power laws Figures <ref>b,e and <ref> reveal that the response is not only time-dependent but also distance-dependent. Fixing a specific time lag τ, the dependence of the responses on the distance range l, as shown in Fig. <ref>a, behaves as a power law ⟨ R_ij(τ,l |l_ω)⟩_i=α(τ |l_ω) l^β(τ |l_ω) , where α(τ |l_ω) is the l-independent part and β(τ |l_ω) the exponent. Both depend on τ and l_ω. We determine them by fitting to the empirical results, as shown in Fig. <ref>a. Around the critical point τ_c, such a fit is not possible with statistical significance. This region has to be excluded. The results for β(τ |l_ω) are shown in Figs. <ref>b and c for morning and afternoon rush hours during workdays. As seen there, the exponent β(τ |l_ω) depends on the time lag τ considered. Importantly, there is a jump occurring in the region around τ_c.
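As an illustration of the fitting procedure, the exponent β(τ |l_ω) at a fixed τ can be estimated by a least-squares fit in log-log space, for example as sketched below with synthetic data; this is a generic sketch rather than the authors' fitting code.

```python
import numpy as np

def fit_power_law(l_values, responses):
    """Least-squares fit of <R>(l) = alpha * l**beta in log-log space.

    Since the response is negative before tau_c, the magnitude |<R>| is fitted;
    lags too close to tau_c (response near zero) should be excluded beforehand,
    as done in the text.
    """
    x = np.log(np.asarray(l_values, dtype=float))
    y = np.log(np.abs(np.asarray(responses, dtype=float)))
    beta, log_alpha = np.polyfit(x, y, deg=1)
    return np.exp(log_alpha), beta

# Toy usage with synthetic responses decaying roughly as l^(-0.8).
l = np.arange(2, 16)
R = 0.5 * l ** -0.8 * (1 + 0.05 * np.random.randn(l.size))
alpha, beta = fit_power_law(l, R)
```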
§.§ Scale invariance Guided by visual inspection, we phenomenologically describe the collapses of the curves in Fig. <ref>b by shifting horizontally and stretching vertically. To obtain good curve collapses, the responses before and after the critical point τ_c are rescaled by different methods: r(τ̃) = ⟨ R_ij(τ,l|l_ω)⟩_i/|min(⟨ R_ij(τ,l|l_ω)⟩_i)| with τ̃=τ if τ<τ_c, and r(τ̃) = ⟨ R_ij(τ-τ_c,l|l_ω)⟩_i/|max(⟨ R_ij(τ,l|l_ω)⟩_i)| with τ̃=τ-τ_c if τ≥τ_c. For time lags τ<τ_c, the response is rescaled only by dividing by the magnitude of the minimal response. After τ_c, the response is not only shifted left by τ_c, but also divided by the magnitude of the maximal response. The rescaled responses r(τ̃) versus the time lag τ̃ in Fig. <ref> show that all curves are very close to each other and roughly collapse onto a single curve. This phenomenon indicates a potential presence of scaling invariance. Furthermore, the difference in rescaling methods before and after τ_c suggests distinguishable traffic dynamics for the different response phases. To validate and refine our findings, we explore the behavior of scaling invariance employing the power law (<ref>). It is known <cit.> that the only solution of the scaling-invariance criterion is a power law. We sketch the reasoning for the response function ⟨ R_ij(τ,l |l_ω)⟩_i in Appendix <ref>. Assuming that the function ⟨ R_ij(τ,l |l_ω)⟩_i in terms of the distance range l is invariant under all rescalings, we have <cit.> ⟨ R_ij(τ,l |l_ω)⟩_i=μ(λ,τ|l_ω) ⟨ R_ij(τ,λ l|l_ω)⟩_i , where λ is a scaling factor. According to Eqs. (<ref>) and (<ref>), μ(λ,τ|l_ω) is a function of λ and τ, μ(λ,τ|l_ω)=λ^-β(τ |l_ω) . We reformulate Eq. (<ref>) as ⟨ R_ij(τ,l |l_ω)⟩_i=λ^-β(τ |l_ω)⟨ R_ij(τ,λ l |l_ω)⟩_i . Setting l=1 and λ=l in the above equation yields ⟨ R_ij(τ,1 |l_ω )⟩_i=l^-β(τ |l_ω)⟨ R_ij(τ,l |l_ω)⟩_i , which means that, for a given τ, the responses for different distance ranges l are rescaled to the response within l=1 by multiplying by l^-β(τ |l_ω). In other words, at a given τ, the rescaled response points for different distance ranges l overlap with each other. For different τ, connecting all overlapping points yields a collapsed curve. Therefore, if scaling invariance exists, the time-dependent curves of the responses rescaled by multiplying by l^-β(τ |l_ω) should collapse onto a single curve. Figure <ref>b displays the empirical results of l^-β(τ |l_ω)⟨ R_ij(τ,l |l_ω)⟩_i during morning and afternoon rush hours. Away from the critical point τ_c, all curves for different distance ranges l basically overlap with each other. As the region around τ_c does not allow a power-law analysis, we eliminate the effects of the critical point τ_c by fitting β(τ |l_ω) to an exponential function β(τ |l_ω) = aexp(bτ)+cexp(dτ) , where a, b, c and d are fit parameters. The exponential function (<ref>) describes the dependence of β(τ |l_ω) on τ well, as displayed in Fig. <ref>c, and in particular provides suitable values to substitute for the distorted β(τ |l_ω) around τ_c. To distinguish the two, we refer to the fitted β(τ |l_ω) as β̃(τ |l_ω). With β̃(τ |l_ω), the rescaled responses almost collapse onto the same curve regardless of the critical point τ_c, as shown in Fig. <ref>c. The overlap of all curves during afternoon rush hours looks much better than during morning rush hours. One possible reason for this difference is fitting errors either in the power law (<ref>) or in the exponential function (<ref>). However, the most likely reason lies in the proportion of congestion during each rush-hour period. A higher proportion of congestion leads to better statistics for the responses, and further to better collapses of the rescaled responses. In Fig. <ref>, a higher proportion of congestion occurs during the afternoon rush hours than during the morning rush hours, corresponding to the better curve collapses for the afternoon rush hours. Our empirical results, therefore, corroborate the assumption of scale invariance in the response function ⟨ R_ij(τ,l |l_ω)⟩_i in terms of l for each given τ. The values of the exponent β(τ |l_ω ) before and after τ_c (see Sec. <ref>) distinguish the scaling behavior of the response phases separated by τ_c.
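For illustration, the double-exponential smoothing of β(τ |l_ω) and the subsequent l^-β̃ rescaling could be implemented, for instance, as in the following sketch; the initial parameter guess and the array conventions are our own assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

def double_exp(tau, a, b, c, d):
    """beta(tau) = a*exp(b*tau) + c*exp(d*tau), used to bridge the region around tau_c."""
    return a * np.exp(b * tau) + c * np.exp(d * tau)

def smooth_beta(taus, betas, p0=(0.1, -0.01, -0.1, -0.001)):
    """Fit the empirical exponents; the initial guess p0 is an arbitrary assumption."""
    params, _ = curve_fit(double_exp, taus, betas, p0=p0, maxfev=10000)
    return double_exp(np.asarray(taus, dtype=float), *params)

def rescale_collapse(R, l_values, beta_tilde):
    """Multiply <R(tau, l)> by l**(-beta_tilde(tau)); under scale invariance the
    curves for different distance ranges l collapse onto a single curve.

    R: (n_l, n_tau) distance-averaged responses, one row per distance range l.
    """
    l = np.asarray(l_values, dtype=float)[:, None]
    return R * l ** (-np.asarray(beta_tilde)[None, :])
```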
§ CONCLUSIONS To study the causality between the congestion and velocity changes, we introduced a new response function with a conditional indicator. The conditional indicator rules out the synchronization of congestion occurring on multiple motorway sections. The response function quantifies the causal connection between the impacted sections and the congested section. From a formal mathematical viewpoint, it is a (time-lagged) covariance. When the two quantities move in the same direction, a positive response shows up and the velocity increases due to the congestion. Conversely, a negative response appears and the velocity decreases compared with the initial velocity. We found a phase change from negative responses at small time lags to positive responses at large time lags, separated by the critical point τ_c at which the response vanishes. The points τ_min and τ_max correspond to the minimal and the maximal response, respectively, where the minimal responses occur at around a time lag of 4 or 5 minutes. These points distinguish vehicle deceleration from vehicle acceleration. The latter leads to a velocity change relative to its average recovering from a negative value to a positive one. Therefore the acceleration prompts the change of response phases distinguished by the critical point τ_c. The three points separate the response phases into four regions with different traffic dynamics. Furthermore, we also found that the distance-dependent response at a fixed time lag τ decays as a power law in terms of the distance ranges within which the responses are averaged. We notice that a power law does not necessarily imply heavy tails, which depend on the exponent. Here, we focused on the scale invariance in the response curves, which we confirmed empirically. § ACKNOWLEDGMENTS We are grateful to Sebastian Gartzke for fruitful discussions. We thank Strassen.NRW for providing the empirical traffic data. § AUTHOR CONTRIBUTIONS T.G. and M.S. proposed the research. S.W. and T.G. developed the methods of analysis. S.W. performed all the calculations. All authors contributed equally to analyzing the results, writing and reviewing the paper. Hansen1959 Walter G Hansen. How accessibility shapes land use. J. Am. I. Planners, 25(2):73–76, 1959. Geurs2004 Karst T Geurs and Bert Van Wee. Accessibility evaluation of land-use and transport strategies: review and research directions. J. Transp. Geogr., 12(2):127–140, 2004. Saif2019 Muhammad Atiullah Saif, Mohammad Maghrour Zefreh, and Adam Torok. Public transport accessibility: A literature review. Period. Polytech. Transp. Eng., 47(1):36–43, 2019. Meersman2017 Hilde Meersman and Marzieh Nazemzadeh.
The contribution of transport infrastructure to economic activity: The case of belgium. Case Stud. Transp. Policy, 5(2):316–324, 2017. Kerner2012 Boris S Kerner. The physics of traffic: empirical freeway pattern features, engineering applications, and theory. Springer, 2012. Nagel1992 Kai Nagel and Michael Schreckenberg. A cellular automaton model for freeway traffic. J. Phys. I, 2(12):2221–2229, 1992. Schadschneider1993 Andreas Schadschneider and Michael Schreckenberg. Cellular automation models and traffic flow. J. Phys. A: Math. Gen., 26(15):L679, 1993. Lovaas1994 Gunnar G Løvås. Modeling and simulation of pedestrian traffic flow. Transp. Res. B: Methodol., 28(6):429–443, 1994. Schreckenberg1995 Michael Schreckenberg, Andreas Schadschneider, Kai Nagel, and Nobuyasu Ito. Discrete stochastic models for traffic flow. Phys. Rev. E, 51(4):2939, 1995. Hoogendoorn2001 Serge P Hoogendoorn and Piet HL Bovy. State-of-the-art of vehicular traffic flow modelling. Proc. Inst. Mech. Eng., Pt. I: J. Syst. Contr. Eng., 215(4):283–303, 2001. Burstedde2001 Carsten Burstedde, Kai Klauck, Andreas Schadschneider, and Johannes Zittartz. Simulation of pedestrian dynamics using a two-dimensional cellular automaton. Physica A, 295(3-4):507–525, 2001. Wong2002 GCK Wong and SC Wong. A multi-class traffic flow model–an extension of LWR model with heterogeneous drivers. Transp. Res. Part A Policy Pract., 36(9):827–841, 2002. Fellendorf2010 Martin Fellendorf and Peter Vortisch. Microscopic traffic flow simulator VISSIM. In Fundamentals of Traffic Simulation, pages 63–93. Springer, 2010. Treiber2013 Martin Treiber and Arne Kesting. Traffic Flow Dynamics: Data, Models and Simulation. Springer, 2013. Leduc2008 Guillaume Leduc. Road traffic data: collection methods and applications. Working Papers on Energy, Transport and Climate Change, 1:1–55, 2008. Kerner2002 Boris S Kerner. Empirical macroscopic features of spatial-temporal traffic patterns at highway bottlenecks. Phys. Rev. E, 65(4):046138, 2002. Bertini2005 Robert L Bertini and Monica T Leal. Empirical study of traffic features at a freeway lane drop. J. Transp. Eng., 131(6):397–407, 2005. Schonhof2007 Martin Schönhof and Dirk Helbing. Empirical features of congested traffic states and their implications for traffic modeling. Transp. Sci., 41(2):135–166, 2007. Li2020 Li Li, Rui Jiang, Zhengbing He, Xiqun Michael Chen, and Xuesong Zhou. Trajectory data-based traffic flow studies: A revisit. Transp. Res. Part C Emerg. Technol., 114:225–240, 2020. Chowdhury2000 Debashish Chowdhury, Ludger Santen, and Andreas Schadschneider. Statistical physics of vehicular traffic and some related systems. Phys. Rep., 329(4-6):199–329, 2000. Afrin2020 Tanzina Afrin and Nita Yodo. A survey of road traffic congestion measures towards a sustainable and resilient transportation system. Sustainability, 12(11):4660, 2020. Krause2017 Sebastian M Krause, Lars Habel, Thomas Guhr, and Michael Schreckenberg. The importance of antipersistence for traffic jams. EPL, 118(3):38005, 2017. Lv2014 Yisheng Lv, Yanjie Duan, Wenwen Kang, Zhengxi Li, and Fei-Yue Wang. Traffic flow prediction with big data: A deep learning approach. IEEE Trans. Intell. Transp. Syst., 16(2):865–873, 2014. Abadi2014 Afshin Abadi, Tooraj Rajabioun, and Petros A Ioannou. Traffic flow prediction for road transportation networks with limited traffic data. IEEE Trans. Intell. Transp. Syst., 16(2):653–662, 2014. Kan2019 Zihan Kan, Luliang Tang, Mei-Po Kwan, Chang Ren, Dong Liu, and Qingquan Li. 
Traffic congestion analysis at the turn level using taxis' gps trajectory data. Comput. Environ. Urban Syst., 74:229–243, 2019. Zhang2019 Limiao Zhang, Guanwen Zeng, Daqing Li, Hai-Jun Huang, H Eugene Stanley, and Shlomo Havlin. Scale-free resilience of real traffic jams. Proc. Natl. Acad. Sci., 116(18):8673–8678, 2019. Tang2018 Junqing Tang and Hans Rudolf Heinimann. A resilience-oriented approach for quantitatively assessing recurrent spatial-temporal congestion on urban roads. PLoS One, 13(1):e0190616, 2018. Wang2020 Shanshan Wang, Sebastian Gartzke, Michael Schreckenberg, and Thomas Guhr. Quasi-stationary states in temporal correlations for traffic systems: Cologne orbital motorway as an example. J. Stat. Mech. Theor. Exp., 2020:103404, 2020. Wang2023a Shanshan Wang, Michael Schreckenberg, and Thomas Guhr. Transitions between quasi-stationary states in traffic systems: Cologne orbital motorway as an example. J. Stat. Mech. Theor. Exp., 2023:093401, 2023. Wang2018 Gang-Jin Wang, Chi Xie, and H Eugene Stanley. Correlation structure and evolution of world stock markets: Evidence from pearson and partial correlation-based networks. Comput. Econ., 51(3):607–635, 2018. Gartzke2022 Sebastian Gartzke, Shanshan Wang, Thomas Guhr, and Michael Schreckenberg. Spatial correlation analysis of traffic flow on parallel motorways in germany. Physica A, 599:127367, 2022. Wang2021 Shanshan Wang, Sebastian Gartzke, Michael Schreckenberg, and Thomas Guhr. Collective behavior in the north rhine-westphalia motorway network. J. Stat. Mech. Theor. Exp., 2021:123401, 2021. Wang2022 Shanshan Wang, Michael Schreckenberg, and Thomas Guhr. Identifying subdominant collective effects in a large motorway network. J. Stat. Mech. Theor. Exp., 2022:113402, 2022. Gabor2023 Gabor B. Hollbeck, René Pilarczyk, Shanshan Wang, Michael Schreckenberg, and Thomas Guhr. Congestions and spectral transition in time-lagged correlations of motorway traffic. arXiv:2312.12051, 2003. Wang2023b Shanshan Wang, Michael Schreckenberg, and Thomas Guhr. Response functions as a new concept to study local dynamics in traffic networks. Physica A, 626:129116, 2023. Bouchaud2003 Jean-Philippe Bouchaud, Yuval Gefen, Marc Potters, and Matthieu Wyart. Fluctuations and response in financial markets: the subtle nature ofrandom'price changes. Quant. Finance, 4(2):176, 2003. Wang2016a Shanshan Wang, Rudi Schäfer, and Thomas Guhr. Cross-response in correlated financial markets: individual stocks. Eur. Phys. J. B, 89(105):105, 2016. Wang2016b Shanshan Wang, Rudi Schäfer, and Thomas Guhr. Average cross-responses in correlated financial markets. Eur. Phys. J. B, 89(207):207, 2016. Wang2017 Shanshan Wang and Thomas Guhr. Microscopic understanding of cross-responses between stocks: a two-component price impact model. Market Microstructure and Liquidity, 3(03n04):1850009, 2017. Benzaquen2017 Michael Benzaquen, Iacopo Mastromatteo, Zoltan Eisler, and Jean-Philippe Bouchaud. Dissecting cross-impact on stock markets: An empirical analysis. J. Stat. Mech. Theor. Exp., 2017(2):023406, 2017. Grimm2019 Stephan Grimm and Thomas Guhr. How spread changes affect the order book: comparing the price responses of order deletions and placements to trades. Eur. Phys. J. B, 92:133, 2019. Henao2021 Juan C Henao-Londono, Sebastian M Krause, and Thomas Guhr. Price response functions and spread impact in correlated financial markets. Eur. Phys. J. B, 94(78):78, 2021. Saladie2020 Òscar Saladié, Edgar Bustamante, and Aaron Gutiérrez. 
Covid-19 lockdown and reduction of traffic accidents in tarragona province, spain. Transp. Res. Interdiscip. Perspect., 8:100218, 2020. Fei2016 L Fei, HB Zhu, and XL Han. Analysis of traffic congestion induced by the work zone. Physica A, 450:497–505, 2016. Han2015 Wanshui Han, Jun Wu, CS Cai, and Suren Chen. Characteristics and dynamic impact of overloaded extra heavy trucks on typical highway bridges. J. Bridge Eng., 20(2):05014011, 2015. Newman2005 M. E. J. Newman. Power laws, pareto distributions and zipf's law. Contemp. Phys., 46:323–351, 2005. Sornette2009 Didier Sornette. Why stock markets crash: critical events in complex financial systems. Princeton University Press, 2009. tocsectionAppendix § RELATION BETWEEN POWER LAW AND SCALING INVARIANCE For the convenience of the reader, we summarize salient features in Ref. <cit.>. We use the notation ℛ(l)=⟨ R_ij(τ,l |l_ω)⟩_i for each given τ within the maximal reachable distance l_ω. If the response in terms of distances is scaling invariant, it fulfills the property ℛ(l)=μ(λ)ℛ(λ l) , where λ is a scaling factor and μ(λ) is a function in terms of λ. Setting l=1 in the above equation gives μ(λ)=ℛ(1)/ℛ(λ) Therefore Eq. (<ref>) becomes ℛ(λ)ℛ(l)=ℛ(1)ℛ(λ l) . By differentiating both sides with regard to λ, we have ℛ'(λ)ℛ(l)=lℛ(1)ℛ'(λ l) . Here ℛ'(·) represents the derivative of ℛ(·) regarding its argument inside the bracket. Letting λ=1 gives rise to ℛ'(1)ℛ(l)=lℛ(1)ℛ'(l)= lℛ(1)dℛ(l)/dl . We rewrite Eq. (<ref>) dℛ(l)/ℛ(l)=ℛ'(1)/ℛ(1)dl/l . Integrating both sides results in lnℛ(l)=ℛ'(1)/ℛ(1)ln l +c , where c is a constant. Let l=1 such that we are able to obtain c=lnℛ(1). This leads to ℛ(l)=ℛ(1)l^β , where β=ℛ'(1)/ℛ(1). Therefore, the scaling invariance results in the power-law response function in terms of distances. In other words, the power-law response function in terms of distances is the only function that meets the scaling invariant criterion (<ref>). § FIGURES FOR DIFFERENT TIME PERIODS
http://arxiv.org/abs/2406.18065v1
20240626045019
On Calibration of Speech Classification Models: Insights from Energy-Based Model Investigations
[ "Yaqian Hao", "Chenguang Hu", "Yingying Gao", "Shilei Zhang", "Junlan Feng" ]
eess.AS
[ "eess.AS", "cs.SD" ]
On Calibration of Speech Classification Models: Insights from Energy-Based Model Investigations Yaqian Hao, Chenguang Hu, Yingying Gao, Shilei Zhang, Junlan Feng ====================================================================================== § ABSTRACT For speech classification tasks, deep learning models often achieve high accuracy but exhibit shortcomings in calibration, manifesting as overconfident classifiers. The significance of calibration lies in its critical role in guaranteeing the reliability of decision-making within deep learning systems. This study explores the effectiveness of Energy-Based Models (EBMs) in calibrating confidence for speech classification tasks by training a joint EBM integrating a discriminative and a generative model, thereby enhancing the classifier's calibration and mitigating overconfidence. Experimental evaluations are conducted on three speech classification tasks, specifically age, emotion, and language recognition. Our findings highlight the competitive performance of EBMs in calibrating speech classification models. This research emphasizes the potential of EBMs in speech classification tasks, demonstrating their ability to enhance calibration without sacrificing accuracy. § INTRODUCTION Despite the impressive performance of deep learning models in speech classification <cit.>, issues such as overconfidence, calibration errors, and poor uncertainty estimation may hinder their reliability and generalization in real-world scenarios <cit.>. Confidence calibration in these models poses a significant challenge <cit.>. For example, in speech emotion recognition (SER) systems, the inherent uncertainty in modeling emotions affects the trustworthiness of the model's predictions <cit.>. Overconfidence and underconfidence can indicate suboptimal calibration, leading to false positives or missed opportunities <cit.>. Research on speech classification models often overlooks the issue of confidence calibration, resulting in a lack of reliable methods and fostering uncertainty and mistrust in model predictions. Undoubtedly, ensuring a well-calibrated confidence measure in a classification model is crucial for accurate predictions. Therefore, developing methodologies to adjust the predictions of a speech classification model is essential, balancing calibration with performance. Existing techniques, like Temperature Scaling and Vector Scaling, typically rescale the posterior distributions of classifier predictions <cit.>. However, these methods require post-processing adjustments and a separate development set with enough samples. Alternatively, adjusting calibration during the model training process, such as using confidence regularization, offers another approach <cit.>. Recently, the effectiveness of EBMs in achieving enhanced model calibration has been demonstrated <cit.>, wherein the joint training process incorporates both discriminative and generative models. The EBMs characterize the relationship between the density of the input data and the model energy, enabling predictions based on energy minimization. While this flexibility is advantageous, the training process of energy models involves intricate adjustments, making it a challenging endeavor <cit.>.
Following  <cit.>'s work,  <cit.> has intricately improved the training process of EBMs, substantially boosting both training efficiency and ultimate performance. Despite their effectiveness in computer vision, EBMs' potential for calibrating speech classification models remains untapped. In this paper, we explore the effectiveness of EBMs in enhancing the calibration of speech classification models. Through experiments on three distinct speech classification tasks, we compare EBMs with traditional softmax-based models. Results reveal that EBMs achieve an average reduction of 7.787% in Expected Calibration Error (ECE) across the tasks, indicating improved calibration. Additionally, Negative Log-Likelihood (NLL) shows an average reduction of 0.172, indicating enhanced model fitting to observed data and more accurate probability predictions. Furthermore, we compared EBMs with other calibration methods such as Temperature Scaling and Logistic Scaling, and the results demonstrate that EBMs exhibit a significantly greater reduction in overconfidence compared to these post-processing methods. The key contributions of this paper are summarized as follows: * We introduce joint EBMs in speech classification tasks to improve calibration by modeling the energy function with a deep neural network, maintaining accuracy while enhancing reliability. This joint energy model optimizes not only for classification tasks but also learns the underlying probability distribution within the data, resulting in improved calibration observed through the model's cautious decision-making when encountering inputs deviating from the training data distribution. * We assess the performance of EBMs across three speech tasks and datasets, specifically targeting language, emotion, and age recognition. This evaluation demonstrates that EBMs can significantly reduce ECE without compromising model accuracy, and mitigate overconfidence issues in speech classification models. * We conduct comparative analyses with other calibration methods, and explore model training dynamics and confidence distributions to address model overconfidence. Specifically, our results show that EBMs outperform other post-processing methods in achieving effective calibration without requiring additional auxiliary datasets. § METHOD §.§ Energy-based Models The fundamental principle underlying an EBM is to construct a function E(𝐱):R^D → R that maps each point in the input space to a singular, non-probabilistic scalar referred to as the energy <cit.>. This scalar, denoted by E(𝐱), is a key component in the Gibbs distribution, allowing the derivation of a probability density p(𝐱). The relationship is formalized as follows: p_θ (𝐱) = exp(-E_θ(𝐱)/T) /Z(θ), where E_θ(𝐱) representing the energy, is a nonlinear regression function parameterized by θ, T refers to the temperature parameter, and Z(θ) signifies the normalizing constant, also known as the partition function: Z(θ) = ∫_𝐱exp(-E_θ (𝐱) /T) dx. §.§ Energy-based Classifier The EBM exhibits an intrinsic association with contemporary machine learning, particularly discriminative neural classifier <cit.> f_θ(𝐱) :ℝ^D →ℝ^K. This classifier assigns logits to each class for a given input 𝐱 using the softmax function: p_θ(y | 𝐱) = exp(f_θ(𝐱)[y]/T)/∑_i^K exp(f_θ(𝐱)[i]/T), where f_θ(𝐱)[y] denotes the logit associated with the y-th class label within the output of f_θ(𝐱). This connection allows us to view the discriminative classifier f(𝐱) as an energy function in the EBM framework E_θ(𝐱, y) = -f_θ(𝐱)[y]/T. 
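For concreteness, the correspondence between classifier logits, per-class energies and class probabilities can be expressed in a few lines, as in the generic PyTorch sketch below; the free energy defined next is included for completeness. The tensor shapes are illustrative, and this is not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def class_energy(logits, y, T=1.0):
    """E_theta(x, y) = -f_theta(x)[y] / T for a batch of labels y."""
    return -logits.gather(1, y.unsqueeze(1)).squeeze(1) / T

def class_probs(logits, T=1.0):
    """p_theta(y | x): the usual softmax over the (temperature-scaled) logits."""
    return F.softmax(logits / T, dim=1)

def free_energy(logits, T=1.0):
    """E_theta(x; f) = -T * logsumexp(f_theta(x)/T), the energy of x itself."""
    return -T * torch.logsumexp(logits / T, dim=1)

logits = torch.randn(4, 6)            # e.g. a batch of 4 utterances, 6 emotion classes
y = torch.tensor([0, 2, 5, 1])
e_xy, p, e_x = class_energy(logits, y), class_probs(logits), free_energy(logits)
```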
The Helmholtz free energy function E(𝐱; f) for a given data point 𝐱∈ℝ^D can consequently be represented as the negative logarithm of the partition function, scaled by T: E_θ(𝐱; f) = -T ·log∑_i^K exp(f_θ(𝐱)[i]/T). Additionally, the logits from f(𝐱) enable the definition of an EBM for the joint distribution of data points 𝐱 and labels y: p_θ(𝐱, y) = exp(-E_θ(𝐱,y)) / ∫_𝐱∑_y exp(-E_θ(𝐱,y)) d𝐱 = exp(f_θ(𝐱)[y]/T)/Z(θ). Marginalizing over y provides an unnormalized density model for 𝐱: p_θ(𝐱) = ∑_y p_θ(𝐱, y) = ∑_y exp(f_θ(𝐱)[y]/T)/Z(θ), which is precisely the definition of an EBM. This reinterpretation underscores the intrinsic compatibility between the softmax classifier and the EBM, offering a unified perspective on their shared principles. §.§ Optimization We employ a joint model, integrating an energy-based classifier and a generative model, wherein the EBM is trained to learn the energy function that best captures the data distribution <cit.>. The objective function is as follows: log p_θ(𝐱, y) = log p_θ(y|𝐱) + log p_θ(𝐱), which represents the logarithm of the joint distribution of data and labels. The conditional distribution p_θ(y|𝐱) signifies the softmax classification model, while p_θ(𝐱) captures the marginal data distribution. The loss function is then aligned with the log-likelihood, as explained in the following: log p_θ(𝐱,y) = log p_θ(y|𝐱) + log p_θ(𝐱) = log [exp(f_θ(𝐱)[y]/T)/∑_i exp(f_θ(𝐱)[i]/T)] + log [∑_y exp(f_θ(𝐱)[y]/T)/Z(θ)]. The derivative of the first term in Eq. (<ref>) is relatively straightforward, representing the loss function for training the classifier. The derivative of the second term is: ∇_θlog p_θ(𝐱) = ∇_θlog∑_y exp(f_θ(𝐱)[y]/T) - ∇_θlog Z(θ) = ∇_θlog∑_y exp(f_θ(𝐱)[y]/T) - 𝔼_x ∼ p_θ(x)[∇_θlog∑_y exp(f_θ(x)[y]/T)] = -∇_θ E_θ(𝐱) + 𝔼_x ∼ p_θ(x)[∇_θ E_θ(x)]. We employ a one-sample Monte Carlo estimate ∇_θlog Z(θ) ≈ -∇_θ E_θ(x̃), where x̃ is sampled from the EBM's distribution p_θ(x). §.§ SGLD-Based Training Method According to Eq. (<ref>), we utilize Langevin MCMC to sample from p_θ(𝐱) when training the EBMs <cit.>. Stochastic Gradient Langevin Dynamics (SGLD) is a sampling algorithm that combines the principles of stochastic gradient descent with Langevin dynamics. To initiate the Langevin sampling process, we begin by drawing an initial sample x_0 from a straightforward prior distribution. Subsequently, we simulate an overdamped Langevin diffusion process for K steps, employing a positive step size ϵ > 0. The iteration for each step k = 0, 1, …, K - 1 is expressed as: x_k+1 = x_k + ϵ^2/2∇_x_klog p_θ(x_k) + ϵ z_k = x_k - ϵ^2/2∇_x_k E_θ(x_k) + ϵ z_k, where ∇_x_klog p_θ(x_k) represents the gradient of the log probability with respect to x_k, and z_k is a standard Gaussian noise term. Notably, as ϵ→ 0 and K →∞, the final sample x_K converges to a distribution that matches p_θ(x) under certain regularity conditions. §.§ Evaluation Metrics Expected Calibration Error in Classification. A calibrated classifier aligns confidence with accuracy <cit.>. ECE quantifies calibration by binning predictions and measuring the difference between expected confidence and accuracy. Mathematically, ECE is expressed as: ECE = ∑_b=1^B |B_b|/N |acc(B_b) - conf(B_b)|, where B is the number of bins, B_b represents the b-th bin, |B_b| is the number of samples in bin B_b, N is the total number of samples, acc(B_b) is the average accuracy in bin B_b, and conf(B_b) is the average confidence in bin B_b. Negative Log-Likelihood in Classification. NLL is a key metric for assessing a classification model's calibration.
Negative Log-Likelihood in Classification. NLL is a key metric for assessing a classification model's calibration. It measures the agreement between predicted probabilities and actual labels by computing the logarithm of the predicted probability assigned to the true label of each sample: NLL = -1/N∑_i=1^N log P(ŷ_i | x_i), where N is the total number of samples, ŷ_i represents the true label of the i-th sample, and P(ŷ_i | x_i) denotes the predicted probability assigned to that label. A lower NLL indicates better calibration, signifying that the model's predicted probabilities closely match the actual outcomes. In this study, we concentrate on calibration performance, aiming to demonstrate that incorporating EBMs improves confidence calibration without impacting the model's classification effectiveness. Consequently, we use accuracy as the primary metric to affirm the preservation of core classification capabilities. § EXPERIMENTS §.§ Datasets Multiple datasets are used in these experiments, with the speech sampled at 16 kHz. The duration of training data for each task and the data split of those datasets are listed in Table <ref>. The AP17-OLR <cit.>, CASIA <cit.> and VoxCeleb-Enrichment <cit.> datasets are used in our experiments for language, emotion and age group classification, respectively. AP17-OLR consists of 10 different languages. The test set contains three subsets with different durations (1 second, 3 seconds, and full length). For the speech emotion classification task, we conduct our experiments on CASIA with six emotion categories (i.e., angry, surprise, sad, fear, happy, and neutral). For age group classification, we use the VoxCeleb Enrichment dataset to train the model. VoxCeleb Enrichment was extracted from YouTube videos, and its audio clips were recorded in a variety of acoustic environments. The audio is divided into four age groups. §.§ Experimental Settings The input features are 32-dimensional Mel filter-banks extracted using the librosa package <cit.> with a window length of 25 ms and a shift of 10 ms with a Hamming window. Mean and variance normalization is applied during instance normalization on the Mel filter-bank features. A 192-frame segment (320-frame for age classification) is randomly chunked from each utterance. All our experiments are based on the Wide-ResNet architecture <cit.>, featuring a width of 5, a depth of 28, a payload learning rate of 0.2, 50 SGLD sampling steps, and a buffer size of 10,000. Frame-level feature extraction is based on the ResNet topology with 3 groups of residual blocks. The frame-level features are then fed into an average pooling layer to obtain utterance-level embeddings, and the final classification layer dimensions are 10, 6, and 4, respectively, for language, emotion, and age group classification. We optimize the model with the stochastic gradient descent (SGD) <cit.> optimizer; the learning rate warms up to 0.1 during the first 1000 steps and is reduced at epochs [40, 80, 120] with a decay rate of 0.2. Both softmax-based models and EBMs follow the same training settings.
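The paper does not include its feature-extraction code; the sketch below shows one plausible way to compute such 32-dimensional log-Mel filter-bank features with librosa (the FFT size, log compression, and normalization details are assumptions).

```python
import librosa
import numpy as np

def mel_filterbank_features(wav_path, sr=16000, n_mels=32):
    """Log-Mel filter-bank features with a 25 ms Hamming window and 10 ms shift."""
    y, _ = librosa.load(wav_path, sr=sr)
    mel = librosa.feature.melspectrogram(
        y=y, sr=sr, n_fft=512,
        win_length=int(0.025 * sr),    # 25 ms window
        hop_length=int(0.010 * sr),    # 10 ms shift
        window="hamming", n_mels=n_mels)
    feats = np.log(mel + 1e-6).T       # shape: (num_frames, n_mels)
    # per-utterance mean/variance normalization (instance normalization)
    return (feats - feats.mean(axis=0)) / (feats.std(axis=0) + 1e-6)
```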
§.§ Performance analysis and discussion Table <ref> summarizes the performance of softmax and energy-based classifiers across the three classification tasks. The energy-based classifier outperforms the softmax baseline in the age classification task, achieving a higher accuracy of 74.96% and significantly improving calibration, with a reduction in ECE from 16.332% to 3.208%. While the emotion classification task showed a minor accuracy drop, the EBMs exhibited substantial calibration improvement. Likewise, in the language classification task, the energy model demonstrated superior calibration with a notable reduction in ECE. These results highlight the efficacy of energy-based classifiers in enhancing calibration across diverse tasks, suggesting their potential for reliable predictions with well-calibrated uncertainty estimates. Reliability diagrams. To evaluate calibration performance, reliability diagrams are utilized, visually representing the consistency between predicted probabilities and actual outcomes in Figure 1. It is evident that EBMs exhibit superior calibration, displaying smaller gaps and significantly lower ECE compared to softmax-based models across the three classification scenarios. In particular, the softmax-based models in all three speech tasks consistently demonstrate overconfidence, as they tend to assign excessively high probabilities to predicted classes, a common issue observed in deep learning models <cit.>. Notably, the reliability diagram for language classification using EBMs closely follows the diagonal, achieving an ECE of only 1.0%, indicating nearly perfect calibration. These findings underscore that EBMs can significantly alleviate the issue of overconfidence in speech classification models, thereby achieving better calibration performance. Comparative evaluation of other calibration methods. We conduct a comparative analysis between EBMs and two post-processing calibration methods, namely Temperature Scaling and Logistic Scaling, across the three speech classification tasks. The results are summarized in Table <ref> alongside the ECEs. It is evident that these two post-processing calibrators provide limited improvement in model calibration. This restricted effectiveness may stem from potential disparities between the auxiliary data and the target distribution, resulting in suboptimal calibration adjustments. In contrast, EBMs consistently demonstrate superior performance, yielding significant reductions in ECE without requiring supplementary training data. Why are softmax-based models poorly calibrated? As illustrated in Figure <ref>, softmax models prioritize optimizing accuracy over minimizing NLL. This observation aligns with the findings in <cit.>, which suggest that modern neural networks can overfit to NLL without overfitting in terms of accuracy. This phenomenon indicates that while softmax models may achieve high classification accuracy, they may not necessarily provide well-calibrated probability estimates. In other words, the pursuit of higher accuracy can sometimes come at the cost of the model's ability to accurately reflect confidence in its predictions, thereby compromising calibration quality. In contrast, EBMs excel at reducing NLL while maintaining high accuracy. Despite slower convergence during EBM training, they achieve lower NLL and higher confidence levels. Confidence distribution. We analyze the problem of model overconfidence by visualizing the confidence distribution in Figure 3. It demonstrates a significant prevalence of excessively high confidence levels for incorrect predictions across the three speech tasks, resulting in unreliable confidence estimates for softmax-based models. For example, within the confidence range of 0.9-1, the softmax-based model for age recognition yields 300 misclassified samples, whereas EBMs show only 10 misclassifications.
It is noteworthy that EBMs exhibit a reduction in the confidence assigned to incorrect predictions across the three speech tasks, with accurate predictions predominantly falling within higher confidence intervals, leading to reliable confidence estimates. § CONCLUSIONS In this study, we explored the effectiveness of joint EBMs in calibrating speech classification models. Our results show that joint EBMs optimize both the classifier and the generative model, enhancing calibration by gaining deeper insight into the data distribution while also serving as a regularization mechanism that mitigates overfitting. These findings demonstrate that EBMs can generate well-calibrated predictions without compromising accuracy across diverse speech classification tasks.
http://arxiv.org/abs/2406.18273v1
20240626115629
Lift-and-Project Integrality Gaps for Santa Claus
[ "Etienne Bamas" ]
cs.DS
[ "cs.DS", "cs.CC" ]
§ ABSTRACT This paper is devoted to the study of the MaxMinDegree Arborescence (MMDA) problem in layered directed graphs of depth ℓ≤ O(log n/loglog n), which is a special case of the Santa Claus problem. Obtaining a poly-logarithmic approximation for MMDA in polynomial time is of high interest as it is the main obstacle towards the same guarantee for the general Santa Claus problem, which is itself a necessary condition to eventually improve the long-standing 2-approximation for makespan scheduling on unrelated machines by Lenstra, Shmoys, and Tardos [FOCS'87]. The only ways we have to solve the MMDA problem within an O(polylog(n)) factor are via a “round-and-condition” algorithm using the (ℓ-1)^th level of the Sherali-Adams hierarchy, or via a “recursive greedy” algorithm which also runs in quasi-polynomial time. However, very little is known about the limitations of these techniques, and it is even plausible that the round-and-condition algorithm could obtain the same approximation guarantee with only 1 round of Sherali-Adams, which would imply a polynomial-time algorithm. As a main result, we construct an MMDA instance of depth 3 for which an integrality gap of n^Ω(1) survives 1 round of the Sherali-Adams hierarchy. This result is best possible since it is known that after only 2 rounds the gap is at most O() on depth-3 graphs. Second, we show that our instance can be “lifted” via a simple trick to MMDA instances of any depth ℓ∈Ω(1)∩ o(log n/loglog n), for which we conjecture that an integrality gap of n^Ω(1/ℓ) survives Ω(ℓ) rounds of Sherali-Adams. We show a number of intermediate results towards this conjecture, which also suggest that our construction is a significant challenge to the techniques used so far for Santa Claus. From a technical perspective, the main inspiration of this work stems from a beautiful construction by Li and Laekhanukit [SODA'22] who showed a polynomial integrality gap for the standard relaxation of the Directed Steiner Tree problem. Inspired by their construction, we build an MMDA instance of depth 3 which has interesting properties. Then, we show how to quantify non-trivial correlations between different edges using the labeling scheme underlying the construction. Our techniques also seem relevant in the world of Directed Steiner Trees, so we are hopeful they will transfer. § INTRODUCTION This paper is devoted to the study of the Santa Claus problem (also known as max-min fair allocation). In this problem, there are gifts (or resources) that need to be assigned to children (or players). Each gift j has unrelated values v_ij for each child i. The goal is to assign each gift j to a child σ(j) such that we maximize the utility of the least happy child, that is, min_i ∑_j : σ(j) = i v_ij. The dual of the problem, where one has to minimize the maximum instead of maximizing the minimum, is the problem of makespan minimization on unrelated parallel machines.
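Purely as an illustration of the objective just defined (not part of the paper; the value matrix and assignment below are hypothetical), the max-min value of an assignment σ can be evaluated as follows.

```python
def santa_claus_value(values, assignment):
    """values[i][j]: value of gift j for child i; assignment[j]: child receiving gift j.
    Returns min_i sum_{j : assignment[j] == i} values[i][j]."""
    utility = [0.0] * len(values)
    for j, i in enumerate(assignment):
        utility[i] += values[i][j]
    return min(utility)

# Two children, three gifts (illustrative numbers)
v = [[4.0, 1.0, 2.0],
     [1.0, 3.0, 2.0]]
print(santa_claus_value(v, assignment=[0, 1, 1]))   # min(4.0, 3.0 + 2.0) = 4.0
```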
Both variants form well-known open problems in approximation algorithms <cit.>. For the makespan problem, there is a well-known polynomial-time 2-approximation by Lenstra, Shmoys, and Tardos <cit.>, which has not been improved since, and it is only known that the problem is NP-hard to approximate within a factor better than 3/2 <cit.>. For the Santa Claus problem, the gap is rather unsatisfactory: polynomial-time algorithms can only guarantee polynomial-factor approximations, and it is only known that the problem is NP-hard to approximate within a factor better than 2 <cit.>. Closing the gap between upper and lower-bounds for either one of them is considered an important open question. In fact, it was recently proven in <cit.> that obtaining a (2-1/α)-approximation for the makespan problem implies the existence of an (α+ϵ)-approximation for the Santa Claus problem (for any fixed ϵ>0), at the cost of a polynomial blow-up in the running time. The authors of <cit.> also show that the converse is true in some significant special case. After several attempts at the Santa Claus problem (see e.g. <cit.>), the state-of-the-art techniques culminated in the remarkable algorithm by Chakrabarty, Chuzhoy, and Khanna <cit.>, which gives an n^ϵ·polylog(n)-approximation in time n^O(1/ϵ), for any ϵ=Ω(1/log n). In particular, for any fixed ϵ>0, this guarantees a n^ϵ-approximation in polynomial time, and a -approximation in time n^O(log n/loglog n). It is not known how to obtain the guarantee in polynomial time, and the fact is that we do not really understand why. To give more context, we elaborate on the algorithm in <cit.>. It has two main conceptual steps: (i) an intricate reduction to some arborescence problem in layered graphs of depth O(1/ϵ), and (ii) solving the arborescence problem using lift-and-project hierarchies. Interestingly, step (i) runs in polynomial time and looses a factor n^ϵ· which is unavoidable, and step (ii) runs in time n^O(1/ϵ), but looses only a factor. If we fix ϵ= Θ(loglog n/log n) in the algorithm, then at the cost of loosing factors, step (i) reduces general instances of Santa Claus to a slight generalization of the MaxMinDegree Arborescence problem (MMDA problem later on). In this problem, we are given a layered directed graph G=(V=L_0∪̇L_1∪̇…∪̇L_ℓ,E) of size n, in which edges can only exist between two consecutive layers L_i and L_i+1, and oriented from L_i to L_i+1. There are three types of vertices, one special vertex s called the source, some sinks, and the rest of the vertices. Further, the depth ℓ of the graph has to be at most O(log n/loglog n). The goal is then to find an arborescence rooted at the source (i.e. a tree in which all edges are oriented away from the source), such that at each non-sink vertex u selected in the arborescence, the out-degree is at least k_u/α, where α is the approximation rate to be minimized. See Figure <ref> for an example. Finally, we assume that k_u≥ n^Ω(1/ℓ) for all u, a condition that holds in the instances arising from the reduction of Chakarbarty et al. Note that in those instances there is always a trivial polynomial-time (max_u k_u)-approximation by simply selecting a single directed path from the source to a sink, which gives out-degree 1 to all its vertices. But in our instances, this automatically looses a factor n^Ω(1/ℓ) which is ^ω(1) as soon as ℓ=o(log n/loglog n). Once we reached this very special case of Santa Claus, no further simplification is known. 
Indeed, the reduction in <cit.> would essentially try two things on these layered instances: (i) For all vertices u such that k_u≤, replace k_u by k'_u=1 to simplify the instance, and (ii) make copies of the non-source vertices and arrange them in a layered graph. Here, it is easy to see that (i) does not do anything on our instances, and (ii) does not help either since the graph already has a layered structure (formally the extra copies of each vertex placed in the wrong layer will be unreachable from the source, hence useless). For more details, we refer the reader to the arxiv version of <cit.>. Because of this reduction, the MMDA problem already attracted attention as a prominent special case of the Santa Claus problem, and also as a problem of its own interest (<cit.>). State-of-the-art algorithms for the problem guarantee a -approximation in time n^O(ℓ) (see <cit.>), and it is only known that the problem is APX-hard already when ℓ=2 <cit.>. More recently, the -approximation has even been improved to polyloglog(n) in the case when k_u=k for all u <cit.>. As explained above, when k_u≤ for all u, it is trivial to obtain a approximation in polynomial time. Otherwise, the most successful algorithms rely on using ℓ-1 rounds of a certain relaxed version of the Sherali-Adams hierarchy, which we dub the path hierarchy. On a high level, these algorithms always proceed layer by layer, starting at the source until reaching the sinks. At layer i, for each vertex v which was selected, the algorithm consider the path p_v that was selected from the source to v. Then, the path is continued by one more edge using the distribution of edges obtained after conditioning by the event that the path p_v was selected. This justifies the term path hierarchy, as only a particular type of conditioning is required: one can write a hierarchy of relaxations which only contain a relevant subset of the Sherali-Adams constraints, and which has size equivalent to the number of directed paths of length ℓ in the graph. In the worst-case, this is n^O(ℓ), hence the running time of these algorithms. One can mention that there is also an alternative algorithm (purely combinatorial), which is an adaptation of the recursive greedy algorithm in <cit.> for Directed Steiner Tree, and guarantees an O(ℓ)-approximation (<cit.>) in the case where k_u=k for all u, which can be as high as a polylogarithmic approximation. However, this algorithm also runs in quasi-polynomial time because it recurses on all possible children of a vertex, and the recursion depth is the depth of the graph ℓ. However, obtaining these guarantees in polynomial time has remained an elusive goal, and it is the main reason why obtaining a -approximation for Santa Claus in polynomial time has been notoriously challenging. There remains a huge gap in our understanding of these techniques. For instance, we do not know how to answer the following basic question: Is 1 round of Sherali-Adams enough to solve the MMDA problem within a O()-factor, regardless of the depth ℓ? Indeed, one could imagine using the round-and-condition algorithm, but only conditioning by the last edge used to reach v. Previous works do not rule out the possibility that this could work, which would yield a polynomial-time algorithm. The missing answer for this question is arguably at the heart of our more general misunderstanding of the problem. 
In fact, we argue later that a positive answer is even plausible if one looks at past works (especially the popular restricted assignment case of Santa Claus). More generally, the issue is that we do not know what a “difficult” instance could look like. Hence we also consider a more informal question: Can the current lift-and-project techniques be used to obtain a -approximation in polynomial-time for the MMDA problem? We note that even if one shows that there is an integrality gap surviving after t=ω(1) rounds of the above hierarchy, this does not necessarily imply that the algorithms above would need to run in time n^Ω(t). Indeed, unlike the general Sherali-Adams hierarchy, the relaxed hierarchy only has a size equivalent to the number of paths of length t in the graph, which could be n^O(1) on such an instance. Hence, if one wants to build a meaningful lower bound for these algorithms, the instance must demonstrate an integrality gap that resists many rounds of the relaxed hierarchy, while at the same time having a complex structure to ensure a superpolynomial number of paths. This last condition seems to add some extra difficulty, since all interesting instances exhibited in the literature have a polynomial number of paths (see for instance Section 7 of the arxiv version of <cit.> or some works on the related Directed Steiner Tree problem <cit.>). §.§ Our results We give new constructions which answer the questions above. In all the results about the Sherali-Adams hierarchy, we implicitly refer to the Sherali-Adams hierarchy applied on the naive relaxation of the problem called the assignment LP (which will be formally defined in Section <ref>, along with the path hierarchy). §.§.§ Lower bounds for MMDA via hierarchies This part constitutes our main contributions. Here, we focus on MMDA instances in layered graphs. The depth of a layered graph is the length of the longest directed path in the graph. We prove the following in Section <ref>. For any n big enough, there exists a layered graph G of depth 3 and size Θ(n) such that k_u=n^Ω(1) for all u, and such that the integrality gap of 1 round of the Sherali-Adams hierarchy is at least n^Ω(1). Note that in the above theorem, we have a lower bound against the general Sherali-Adams hierarchy which is stronger than the path hierarchy. The number of rounds is best possible, since it is known by previous works that 2 rounds of the relaxed hierarchy already has a gap no more than in graphs of depth 3. The integrality gap is also essentially best possible, since it can never be more than max_u∈ V{k_u} (remember the trivial algorithm that selects a single directed path). As a secondary result, we show how the instance of Theorem <ref> can be “lifted” in an elementary way to obtain the following theorem. There exists some absolute constant c>0 such that for any n>c, and any ℓ∈Ω(1)∩ o(loglog n/log n), there exists a layered graph G_ℓ of size Θ(n) and depth ℓ such that the integrality gap of the path hierarchy is still at least n^Ω(1/ℓ) after ℓ/c rounds. The shed some light on this second result, we recall that past works imply that if an edge e=(u,v) remains in the support of the solution at level t of the path hierarchy, then there must exist an integral arborescence rooted at v and satisfying the degree requirements (up to (log n)^O(1) factors) for all vertices at distance at most t-1 from v (i.e. there exists a feasible integral solution up to depth t-1). 
It will be easy to verify that all edges remain in the support after ℓ/c rounds in the proof of Theorem <ref>. This implies that our instance already has the non-trivial property that it contains an integral solution of depth Ω(ℓ) rooted at v for every vertex v. It is not obvious to see this from the construction, and it shows that only looking at local parts of the solution is not enough to rule out the existence of an integral solution. §.§.§ Relevance of Theorem <ref> In this part, we introduce secondary results which further motivate the significance of our main result, Theorem <ref>. The results in this part are due to Lars Rohwedder <cit.>. The proofs and techniques of these are fairly standard and we defer them to Appendix <ref>. We consider in this part the restricted assignment case of the Santa Claus problem, which is a well-known case where v_ij∈{0,v_j} for all players i and resources j (see further related works section for references on that special case). In Appendix <ref>, we show the following. For any n∈ℕ and any Ω(1/n)≤ϵ≤ 1/3, there exists an instance of size n of the restricted assignment case such that the integrality gap after ⌊ϵ n⌋ rounds of the Sherali-Adams hierarchy is at least Ω(1/ϵ). This implies that a polynomial integrality gap survives a polynomial number of rounds in the Sherali-Adams hierarchy (take for instance ϵ=1/√(n) in the above). However, the instances that we use to prove the above theorem do not survive the reduction of Chakrabarty et al., i.e. the reduction will find a way to simplify the instance. Using standard ideas from the restricted assignment, we can even show the following in Appendix <ref>. There exists a polynomial-time algorithm which transforms instances of the restricted assignment to instances for which the integrality gap of 1 round of Sherali-Adams is at most O(1). Furthermore, the transformation looses only a constant-factor in the approximation. The above two theorems show that to obtain a meaningful lower bound for current lift-and-project techniques, it is crucial to focus on instances which cannot be simplified using any known technique. This is precisely why we focus on the MaxMinDegree Arborescence problem for layered graphs. Given Theorem <ref>, it actually seemed plausible that using only 1 round of Sherali-Adams could be helpful for the general Santa Claus problem. Indeed, the restricted assignment case is already a very challenging case, which has been heavily studied in the past (see further related works). These results give some justification that one round of Sherali-Adams is already quite powerful as it allows to solve the restricted assignment case. This explains why our proof of Theorem <ref> is quite technical. §.§.§ Further consequences of Theorem <ref> In this part, we discuss additional properties of the instance that we construct to prove Theorem <ref>. These properties pose significant challenges to other known techniques for Santa Claus, especially those heavily used in the restricted assignment case. To formally define our result, it is useful to informally introduce a few concepts. A useful trick that appears in previous works on the MMDA problem is to relax the constraint that each vertex can appear at most once in the solution. Instead, one can allow some congestion and take a vertex multiple times (i.e. the in-degree of the vertex is more than 1 in the arborescence). With this in mind, we can informally introduce the concept of locally good solutions. 
For any t>0, a t-locally good solution is an integral arborescence which can have congestion as high as n^Θ(1), but such that the different occurrences of the same vertex v appear at distance at least t from each other in the arborescence (formally, the lowest common ancestor of two occurrence v',v” of the same vertex v is at distance at least t from v' and v”). This concept is particularly useful, since it is shown in <cit.> how to “sparsify” Θ(ℓ)-locally good solutions (where ℓ is the depth of the instance) to obtain an integral feasible solution to the MMDA problem which looses only a factor n^Θ(1/ℓ) in the approximation rate. We show the following properties of our construction. Let G_ℓ be the MMDA instance of depth ℓ used in the proof of Theorem <ref>. Then, the following properties hold, where c is an absolute constant: * k_u=n^Θ(1/ℓ) for all u, * every vertex v belongs to at least n^Ω(log (ℓ)) different directed paths, * any feasible integral solution in G_ℓ must loose an approximation rate of at least n^Ω(1/ℓ), and * the instance G_ℓ contains an (ℓ/c)-locally good solution. We believe the combination of all these properties to be quite significant. Namely, it seems unlikely that the current lift-and-project techniques combined with the intricate reduction by Chakrabarty et al. <cit.> could obtain a -approximation for Santa Claus in time n^o(loglog n). Indeed, by the first property of Theorem <ref>, the reduction technique of Chakrabarty et al. will not modify in any way the instance. Also note that for any ℓ=o(log n/loglog n), the integrality gap is at least ^ω(1) after Ω(ℓ) rounds of the path hierarchy by Theorem <ref>. By the second property of Theorem <ref>, the size of the path hierarchy will still be n^Ω(log(ℓ)) after Ω(ℓ) rounds, which is n^Ω(loglog n) for ℓ close to log n. The number of paths also shows that the recursive greedy algorithm (<cit.>) does not run in polynomial time. Lastly, the third and fourth property of Theorem <ref> show that the analysis of the sparsification method in <cit.> is tight. More generally, these locality-based methods (which are also the intuition behind LP relaxations of the restricted assignment case[For instance, the well-known configuration LP of <cit.> essentially strengthens the naive LP by adding the constraints that for any edge e=(u,v) taken in the support, v must have at least k_v outgoing edges in the graph, i.e. there exists a depth-1 solution rooted at v. The t^th level of the path hierarchy essentially does the same strengthening, but enforces the existence of an integral solution at distance t of v instead of only distance 1.]) also suffer from a gap of n^Ω(1/ℓ) on our instances, hence are unlikely to help to obtain a -approximation in polynomial time. §.§ Our techniques We emphasize the techniques used to prove Theorem <ref> and Theorem <ref> which constitute our main contribution. We note that proving Theorem <ref> is already quite non-trivial, and that all past MMDA constructions used to obtain an integrality gap for other LP relaxations (see e.g. <cit.>) do not work. The reason for this is simply that the argument that allowed to rule out the existence of an integral solution in those constructions is easily captured by one round of Sherali-Adams. We highlight here the main ideas of our construction and proof. *The construction. An important inspiration to our work is a construction of Li and Laekhanukit <cit.> who show a polynomial integrality gap of the standard relaxation of the Directed Steiner Tree problem. 
We take inspiration from their construction to build a layered instance of the MMDA problem of depth 3 having the properties of Theorem <ref>. The construction is quite clean, we have 4 layers of vertices L_0,L_1,L_2,L_3, where L_0 contains only the source, and the set of sinks is equal to L_3. Then, each vertex v in the construction is labeled by a subset S_v of a ground set 𝒰=[m]. The construction is parametrized by a small constant ρ (the reader might think of ρ =1/1000 in this overview). There is exactly one vertex in L_1 and one in L_3 for each subset of 𝒰 of size ρ m, and there is exactly one vertex in L_2 for each subset of 𝒰 of size 2ρ m. These labels define the edge set in an easy way, two vertices u,v in consecutive layers are connected if and only if the corresponding labels satisfy that either S_u⊆ S_v, or that S_v⊆ S_u. See Figure <ref> for an illustration of the construction. Then we use well-chosen values for the required out-degrees k_u. Inspired by <cit.>, we show how to use these labels to rule out the existence of a good integral solution. *Subtree solutions. Next, we need to go further by using labels to quantify the “correlations” between different edges which are needed to obtain a feasible solution to the Sherali-Adams hierarchy. To this end, we introduce the concept of subtree solutions: a subtree solution at edge e=(u,v) is a feasible fractional solution x^(e) to the naive relaxation of the problem in the same instance, except that we consider the vertex v to be the source vertex (instead of s). Intuitively, if x is the fractional solution of the naive relaxation, x_e will be the probability that the edge e appears in the solution, and x_e^(e') will be the probability that the edge e appears in the subtree rooted at e' after conditioning by the fact that the edge e' appears. Using this interpretation together with the labels, we are able to quantify very precisely the correlations. *The shadow distribution. Then, to show that the instance is feasible for 1 round of the Sherali-Adams hierarchy, we use a standard abstraction of those hierarchies in the form of distributions of edge sets. Intuitively, we need to find a distribution over edge sets which is feasible in expectation for the naive LP, and remains feasible for that same LP even after conditioning by the outcome of any edge e in the graph (i.e. conditioning by the event that e appears in the edge set or not). There are two intuitive ways to design a distribution for this purpose. The simplest way is by using the product distribution where each edge e appears independently with probability x_e. The second one is to use the distribution given by the round-and-condition algorithm. Both approaches fail, each for different reasons. Informally, the first distribution does not have enough correlation between edges (it does not have any in fact), while the second one has too much. To solve this issue, we design a new distribution, which we call the shadow distribution: * Each edge e' in the graph is selected as a shadow edge independently with probability x_e'. * Second, for any edge e' which was selected as a shadow edge, we sample a “subtree”[The set S_e' is sampled using the subtree solution x^(e'), hence the name. But it is not necessarily a tree.] S_e' containing each edge e independently with probability x_e^(e'). * Then, we return the union of those “subtrees”. 
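A rough sketch of this three-step sampling procedure is given below; it assumes the fractional solution x and the subtree solutions x^(e') are available as dictionaries keyed by edges, and is meant only to make the description concrete, not to reproduce the paper's analysis.

```python
import random

def sample_shadow_distribution(edges, x, x_sub):
    """One draw from the shadow distribution.
    x[e]        : fractional value of edge e in the naive relaxation.
    x_sub[e2][e]: value of edge e in the subtree solution rooted at edge e2."""
    shadow = [e2 for e2 in edges if random.random() < x[e2]]       # step 1
    active = set()
    for e2 in shadow:                                              # step 2
        for e in edges:
            if random.random() < x_sub[e2].get(e, 0.0):
                active.add(e)                                      # e joins "subtree" S_{e2}
    return active                                                  # step 3: union of the subtrees
```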
The intuition behind this distribution is that it constitutes a sweet spot between too much correlation, and too little: If we skip the middle step (step 2), the distribution becomes the product distribution (not enough correlation), and if we repeat step 2 a second time before returning the edge set we essentially obtain the distribution given by the round-and-condition algorithm (too much correlation). The name of “shadow” intuitively comes from the fact that the edges in step 1 are hidden and create correlations, but we can only observe the “subtrees” which are triggered by these hidden edges. *Why does it work? It is non-trivial to see why this works on our instance. However, one key property that we can highlight is that if we denote by s_e the probability that an edge appears in the above distribution, then our instance and subtree solutions are such that x_e≤ s_e≤ O(x_e) for any edge e (i.e. the probability of each edge appearing does not change much compared to the product distribution). Now, it becomes clear that the distribution satisfies the naive LP if we do not condition by any event. Lastly, if we condition by the outcome of an edge, then this estimate of s_e allows to take care of the issue that the product distribution did not have enough correlation. We believe that the property that x_e≤ s_e≤ O(x_e) is something very special about this construction. For instance, one can show that the fact that there exists a subtree solution x^(e') for every e' is not enough to guarantee this property (see Appendix <ref> for an easy example). *The lifted instance. For Theorem <ref>, we need to construct an instance with many layers, as it is known that only 2 rounds of the path hierarchy is already strong enough to not be fooled by our construction of depth 3. For this, we use the labels of the previous instance as inspiration. We note that the instance has intuitively 2 phases. An expanding phase, where the labels correspond to bigger and bigger subsets from size 0 to 2ρ m, and a collapsing phase where the size of the labels decreases from 2ρ m to ρ m. Then, we use a simple trick to refine the set system that gives the labels. Essentially, we fix some small ϵ≥Ω(1/m) (note that m=Θ(log n) where n is the instance size). Then, we build an instance with Θ(1/ϵ) layers, with one expanding and one collapsing phase as before, where the i-th layer of the expanding phase contains a vertex for each set of size iϵ m, and the i-th layer of the collapsing phase contains a vertex for each set of size m/500-iϵ m. This allows us to obtain in some sense a “continuous” version of our instance of depth 3. Interestingly, a lot of properties of the instance of depth 3 carry over to this new instance. For instance, the proof to rule out integral solutions in basically the same. Then, one can define the same concept of subtree solutions, and a generalization of our probability distribution for t rounds of Sherali-Adams is essentially as follows: * Sample a shadow set S_1 containing each edge e independently with probability x_e. * For i=2 to t, sample a shadow set S_i as follows: each edge e'∈ S_i-1 creates a set S_i^(e') which contains each edge independently with probability x_e^(e'), and we set S_i=⋃_e'∈ S_i-1S_i-1^(e') . * Return S_t. We conjecture that this fools the t-th round of the Sherali-Adams hierarchy on our instances of depth Ω(t), but have not been able to prove it. The main issue is that computing the probabilities of events that several edges appear becomes very intricate. 
However, we were able to use the intuition that this distribution gives to fool the path hierarchy, which seems to allow more approximations. We proceed in a similar way using the concept of subtree solution. The path hierarchy only has a variables for set of edges which form a directed path p=e_1,e_2,…, e_k (with k≤ t). The question is, what is the probability of the event that p⊆ S_t? It appears difficult to compute, but we believe that this probability is well-approximated (within factors) by the probability of the event ℰ = {e_1∈ S_1}∩{e_2∈ S_2^(e_1)}∩…∩{e_k∈ S_k^(e_k-1)} . It is easy to see that ℙ[ℰ] = x_e_1·∏_i=2^kx_e_i^(e_i-1) . We show that setting the variable y(p) exactly equal to the above probability yields a feasible solution to the path hierarchy. §.§ A closely related problem: the Directed Steiner Tree problem Another well-known and not really understood problem is the Directed Steiner Tree problem, in which one has to find the cheapest directed tree connecting a root to all terminals. We elaborate more on the state-of-the-art results for this problem, as the parallel with Santa Claus is quite striking. The best algorithms give a n^ϵ·polylog(n)-approximation in time n^O(1/ϵ), for any ϵ=Ω(1/log n) <cit.>. The main difference with Santa Claus is that the DST problem cannot be approximated within a factor better than log^2-ϵ(n) for any fixed ϵ>0 unless NP⊆DTIME(n^) <cit.>. What is even more remarkable is the similarity of the techniques used to solve the problem. A well-known “height reduction theorem” (<cit.>) reduces general instances of the problem to instances on layered graphs of depth at most ℓ by loosing a factor O(ℓ· n^1/ℓ). For ℓ=O(log n), this looses a factor O(log n). Then, the state-of-the-art algorithm by <cit.> essentially proceeds layer by layer from the root to the terminals, rounding and conditioning each time on a proper subset variables. Specifically, when reaching vertex v, the algorithm select as the conditioning set to be the path which was selected from the source to v. Hence, the concept of path hierarchy also makes sense in that context. Moreover, we note that it is possible to adapt the recursive greedy algorithm of <cit.> to the MMDA problem (<cit.>). Lastly, as explained our results were inspired by a very clean construction of <cit.> (itself taking inspiration from <cit.>). Our trick of refining the set system can be seen as a lifting trick for our 3-layered instance, and in principle it can also be applied in the context of DST. §.§ Further related works As mentioned in introduction, and important special case of Santa Claus is the restricted assignment where each gift j has a fixed value v_j, and each child i can only access a subset ℛ(i) of the gifts. Equivalently, this is the case where v_ij∈{0,v_j} for all i,j. This case is fairly well-understood, with a long line of work (see e.g. <cit.>) culminating in a (4+ϵ)-approximation in polynomial time. There are also works on the restricted assignment case with non-linear utility functions, such as a O(loglog n)-approximation in polynomial time for the case of submodular utilities <cit.>. Those techniques were then transferred to the makespan scheduling problem to obtain a better-than-2 approximation in quasi-polynomial time, and a better-than-2 estimation algorithm in polynomial time (see e.g. <cit.>), again in the restricted assignment case. 
As other related works, one can cite the Densest-k-Subgraph problem which admits a polynomial-approximation in polynomial time <cit.> and share some common points with our arborescence problem. Some lift-and-project lower bounds were already showed <cit.>, however the constructions and proofs are very different to ours. §.§ Overview of the paper In Section <ref>, we give some formal definitions of all the concepts that were used in introduction, and more. In Section <ref> we prove Theorem <ref>. In Section <ref>, we give our general construction for any depth ℓ, and we prove a few properties of the obtained instances. In Section <ref>, we prove Theorems <ref> and <ref>. In Appendix <ref> we prove Theorems <ref> and <ref>. To get better intuition of the constructions, we would advise the reader to read Section <ref> before reading Sections <ref> and <ref>. § PRELIMINARIES §.§ Problem definition In the rest of this paper (with the exception of Appendix <ref>), we will work on the MaxMinDegree Arborescence (MMDA) problem on a layered directed graph G=(V,E) of size n and of depth ℓ≤ O(log n/loglog n). By layered, we mean that the graph G contains some layers of vertices L_0∪ L_1∪…∪ L_ℓ=V, and edges can only exist between two consecutive layers L_i and L_i+1. Further, the edge has to be oriented from L_i to L_i+1. In our constructions, the sinks will all belong to the last layer L_ℓ, and the layer L_0 contains only the source vertex s. Hence, the set of sinks is equal to L_ℓ. Further, our constructions will have the property that k_u≥ n^Ω(1/ℓ) for all u∈ V∖ L_ℓ. One can describe the input/output for the MMDA problem as follows. *Input. The input of the problem is the graph G, the description of every vertex as either a sink, a source, or a normal vertex, and the required out-degrees k_v. *Output. An α-approximate solution to the problem is a subgraph T⊆ E such that: * The source s has out-degree (in T) |δ^+_T(s)|≥α k_s, * every vertex v∈ V has in-degree (in T) at most 1, and * for every vertex v with |δ^-_T(v)|=1, we have |δ^+_T(v)|≥α k_v. One seeks to find the biggest α^* possible so that there exists an α^*-approximate solution. §.§ Basic notations We will say that a vertex u is reachable from v in the graph G if there exists a directed path from v to u. We will denote by A(v) the set of ancestors of vertex v which contains all the vertices from which v is reachable. Similarly, D(v) is the set of descendants of v which are all the vertices reachable from v. By a slight abuse of notation, we will also denote by A(v) the set of edges e=(u,u') such that u'∈ A(v), and by D(v) the set of edges e=(u,u') such that u∈ D(v). We will denote by D(e) the set of vertices x such that x∈ D(u'). We will also denote by D(e) the set of edges e'=(x,y) such that x∈ D(u'). It will always be clear whether the considered object is an edge or a vertex, and this information is sufficient to clear up the ambiguity. We also define the intuitive notations L_≥ i=∪_j≥ iL_j, L_≤ i=∪_j≤ iL_j, L_< i=V∖∪_j≥ iL_j, L_> i=V∖∪_j≤ iL_j. For an edge e=(u,v), we say that e∈ L_i if v∈ L_i (i.e. if the endpoint of e belongs to layer L_i). By a similar overloading of the notation, we have the same definitions of L_≥ i, L_≤ i, L_> i, L_< i for edges. For any vertex v, we use the standard notation of δ^+(v) for the edges going out of v, δ^-(v) for the edges going in, and δ(v)=δ^+(v)∪δ^-(v). Finally, in many of our construction, each vertex v will be labeled by a set (over a ground set of elements), which we will denote by S_v. 
Similarly, for any edge e=(u,v), we denote by S_e the set labeling v, i.e. S_e:=S_v. §.§ Relaxations of the problem Naive relaxation. The naive relaxation of the arborescence problem is called the assignment LP, and reads as follows on our layered instances (recall that L_ℓ is the set of sinks). x(δ^+(s)) ≥ k_s x(δ^+(v)) ≥ k_v· x(δ^-(v)) ∀ v∈ V∖ ({s}∪ L_ℓ) x(δ^-(v)) ≤ 1 ∀ v∈ V 0≤ x_e ≤ 1 ∀ e∈ E It is not difficult to see that this relaxation has a polynomial integrality gap already if ℓ=2. We call the first two rows the covering constraints, and the third one the packing constraints. We note that in several proofs, we might obtain solutions that satisfy the constraints up to a multiplicative factor α≥ 1, we say that the solution is α-approximate. The factor by which the packing constraint is violated will often be called the congestion. By standard arguments, this can be easily transformed into a feasible solution to the assignment LP with value k'_u≥ k_u/(α)^2 at every vertex. To see this, first we scale down all k_us by some factor α. The new covering constraints are now satisfied. Let us now scale down a second time the demand at the source k'_s and all fractional values x_e by some factor α. Now, it is clear to see that all the constraints are satisfied, and that k'_u≥ k_u/(α)^2 for all u∈ V. A similar argument holds for integral solutions of the assignment LP. If we have an integral solution which is α-approximate w.r.t. the assignment LP, then by standard flow arguments, one can transform it in polynomial time into a feasible integral solution where every vertex u in the solution receives out-degree at least ⌊ k_u/(α)^2⌋ (see e.g. <cit.>). In all our solutions, we will always have that α=o(k_u), so the floor function will not have any significant impact on the approximation factor. Given these remarks, we will only verify the constraints up to some multiplicative factor in the rest of the paper. Sherali-Adams lift-and-project. For clarity, we will not write down the result of the Sherali-Adams hierarchy on the assignment LP. However, we state and prove here the result that we use to prove the feasibility of one round of Sherali-Adams. Consider the relaxation of a binary integer linear program. Then there is a feasible solution for r levels of SA if there exists a probability distribution D over 0/1 assignments of the variables such that for all variable sets |V_0∪̇V_1|≤ r and every constraint a^T x ≤ b of the linear program we have that 𝔼_x∼ D[a^T x | E] ≤ b , where E be the event that x_i = 0 for all i∈ V_0 and x_i = 1 for all i∈ V_1 and ℙ_x∼ D[E] > 0. Abstractions of Sherali-Adams in the form of distributions are very standard. Similar theorems are proven e.g. in <cit.>. For all variable sets V_0 ∪̇V_1 define s_V_0, V_1 = ℙ_x ∼ D[x_i = 0∀ i∈ V_0 and x_i = 1∀ i∈ V_1]. Then s_V_1 := s_∅, V_1 for |V_1|≤ r+1 will be our lifted SA variables and in particular s_i := s_{i} are the variables of the original LP that survive r rounds of SA. We claim that for all constraints a^T x ≤ b, all |V_0∪̇V_1| ≤ r, and z = ∏_i∈ V_1 x_i ∏_i∈ V_0 (1 - x_i), we have z * a^T s = ∑_i a_i · s_V_0, V_1∪{i} and z * b = s_V_0, V_1· b , where * is the SA multiplication operator. We argue inductively over |V_0|. For V_0 = ∅ it is obviously true. For |V_0| > 1 let z = (1 - x_j) z' for some j∈ V_0. 
Then z * a^T s = z' * a^T s - x_j * z' * a^T s = ∑_i a_i · s_V_0∖{j}, V_1∪{i} - ∑_i a_i · s_V_0 ∖{j}, V_1∪{i, j} = ∑_i a_i (s_V_0∖{j}, V_1∪{i} - s_V_0 ∖{j}, V_1∪{i, j}) = ∑_i a_i · s_V_0, V_1∪{i} Here, the second equation comes from the induction hypothesis and the last equation follows from the definition of s_U, W. Similarly, z * b = z' * b - x_j * z' * b = b (s_V_0∖{j}, V_1 - s_V_0∖{j}, V_1∪{j}) = b · s_V_0, V_1 We will now verify the lifted constraint z * a^T x ≤ z * b for any z = ∏_i∈ V_1 x_i ∏_i∈ V_0 (1 - x_i), where |V_0∪̇V_1| ≤ r. We have that z * a^T s = ∑_i a_i · s_V_0, V_1∪{i} = ∑_i a_i ·ℙ_x ∼ D[x_j = 1∀ j∈ V_1∪{i} and x_j = 0 ∀ j∈ V_0] = ∑_x: x_j = 1∀ j∈ V_1 and x_j = 0 ∀ j∈ V_0ℙ_x ∼ D[x] ·∑_i: x_i = 1 a_i = ℙ_x ∼ D[x_j = 1∀ j∈ V_1 and x_j = 0 ∀ j∈ V_0] ·𝔼_x ∼ D[a^T x | x_j = 1∀ j∈ V_1 and x_j = 0 ∀ j∈ V_0] ≤ s_V_0,V_1· b = z * b . Note that Theorem <ref> has stronger requirements than usually needed. In general, one does not require the existence of a single distribution for all constraints and conditionings, but rather the existence of one distribution for each constraint and conditioning, with the added condition that the different distributions have to be consistent with each other. The path hierarchy. Here we define a path hierarchy, which can be seen as a weakened variant of the Sherali-Adams hierarchy and has been used in previous works <cit.> (see <cit.> specifically to see why the Sherali-Adams hierarchy implies the path hierarchy). We need a few definitions first. For a directed path p, we denote by C(p) the set of children paths of p, which is the set of paths of length |p|+1 which contain p as a prefix (i.e. continue p by one more edge). We denote by D(p) the set of descendant paths of p, which are simply the paths which contain p as a prefix. For any vertex v, I(v) is the set of paths which end at v. We denote by P the set of directed paths in the instance. For any path p ending at some vertex v, we write k_p:=k_v to be the degree requirement at the endpoint of p. Now we can state the path LP hierarchy. For clarity, we assume that there is a dummy edge e_0 incoming at the source s. We have a variable y(p) for each directed path p. ∑_q∈ C(p) y(q) = k_p · y(p) ∀ p∈ P ∑_q∈ I(v)∩ D(p) y(q) ≤ y(p) ∀ p∈ P,v∈ V ∑_e∈δ^-(v)y({e}) ≤ 1 ∀ v∈ V ∑_e∈δ^+(v)y({e}) ≥ k_v·∑_e∈δ^-(v)y({e}) ∀ v∈ V y(q) ≤ y(p) ∀ p,q∈ P, p⊆ q y({e_0}) = 1 0≤ y ≤ 1 To understand those constraints, we think of the variables y(p) as binary variables which indicate whether the path p is selected in the solution. Constraint (<ref>) is simply a covering constraint which implies that if a path p is selected, there must be k_p paths continuing p by one more edge. Constraint (<ref>) is a packing constraint which states that if a path p is selected, no vertex can have more than one incoming path inside the subtree rooted at p. Constraints (<ref>) and (<ref>) are simply the standard assignment LP constraints. Constraints (<ref>) are consistency constraints, which are important to be able to build consistent distributions. Indeed, if a path p is not selected in the integral solution, a path q containing p cannot be selected either. Finally, Constraint (<ref>) ensures that the source is covered. The t^th level of the path hierarchy is the relaxation above where we remove all constraints which contain variables for paths of length strictly more than t+1. It is known by previous works that ℓ-1 levels suffice to obtain a -approximation on graphs of depth ℓ <cit.>.
In fact, previous works show that the t^th level of the path hierarchy enforces the following remarkable constraint: for any edge e=(u,v) such that y({e})>0, there must exists an integral arborescence rooted at v of depth at least t which has congestion O(). This is in stark constrast with the configuration LP <cit.> which only enforces such constraints at depth 1. §.§ Locally good solutions It will be useful in some parts to think of a feasible solution as a set of directed paths as follows: a solution T is transformed into a set of directed paths P_T which contains all directed paths in T starting at the source s. For some t≥ 1, we say that a solution T is t-locally good if the set of path P_T obtained from T satisfies the following constraints: * For any p∈ P_T where p ends at a vertex v, there are at least k_v paths of length |p|+1 containing p as a prefix, and * For any path p∈ P_T ending at vertex u, and any vertex v at distance at most t from u, we have that the number of paths q∈ P_T containing p as a prefix and ending at v is at most O(). The author and Rohwedder <cit.> show that such a t-locally good solution can be transformed into a feasible integral solution by loosing an approximation rate of at most n^Θ(1/t). §.§ Subtree solutions An important concept which appears in many of our proofs is the notion of subtree solution. A subtree solution x^(v) for some vertex v is a feasible solution to the assignment LP, except that we move the source at vertex v. Hence, for any edge e, we will build subtree solutions such that x_e^(v)>0 if and only if e∈ D(v). This is very intuitive indeed, one can see that it does not help the solution to have x_e^(v)>0 if there is not a single path from v to e. One can see that the standard assignment LP solution x is a subtree solution for the source s (i.e. we can, and will, set x^(s)=x). By a slight abuse of notation, for any edge e=(u,v) we will denote by x^(e) the subtree solution x^(v). One can intuitively think of this subtree solution as setting x_e^(e)=1, and then extending it with the x^(v) values. This is equivalent to setting the source at vertex v, and having one dummy edge with fractional value 1 ending at vertex v. §.§ A few technical lemmas In several of our constructions, we will use a certain labeling scheme, where each label correspond to some set of elements of a ground set [m]. Our instances will have size 2^Θ(m). The following lemmas will be useful to bound certain quantities. In the rest of the paper, for any 0≤ x≤ 1, we define the entropy function of a Bernoulli random variable of parameter x as h(x):=-xlog_2(x)-(1-x)log_2(1-x), where log_2 is the logarithm in base 2. The following lemmas can be obtained from standard techniques, and some version of them already appear in <cit.>. For any integer n>0, log_2(n!)=nlog_2(n)-nlog_2(e)+O(log n) . For any α,β∈ (0,1) with α≥β, log_2 α m β m = (α m)· h( β/α)± O(log m) . We simply use Stirling's formula. Lemma <ref> allows us for a slight abuse of notation. For the ease of exposition in the rest of this paper, we might have some binomial coefficients p_m q_m without checking that p_m,q_m are integers. In all our the rest of the paper, one can think that we take the size of the ground set m big enough that all quantities are integers, or that when we refer to this binomial coefficient, we can replace it by the value 2^p_m× h(q_m/p_m). Consider some fixed constants α,β,γ∈ (0,1). Let f be the function f:j↦β m jα m γ m-j over the integers j∈ [max{0,(γ-α)m},min{γ,β} m]. 
Denote by M the maximum of the function over its domain. Then, we have * M=m^O(1)·β mγβ/α+βmα mγα/α+βm , * For any j in the domain and any δ=O(1), we have that f(j)/f(j±δ)≤ m^Θ(1) , and * There exists some δ=O(1) such that the function f is increasing on the interval [max{0,(γ-α)m},γβ/α+βm-δ], and decreasing on the interval [γβ/α+βm+δ,min{γ,β} m]. Let us prove the second statement first. Let us consider some non-negative δ=Θ(1). We compute f(j)/f(j+δ) = β m jα m γ m-j/β m j+δα m γ m-j-δ =(j+δ)!/j!·(β m - j-δ)!/(β m-j)!·(γ m - j -δ)!/(γ m - j)!·(α m - γ m +j + δ )!/(α m - γ m +j)! ≤(j+δ)!/j!·(α m - γ m +j + δ )!/(α m - γ m +j)! ≤ (j+δ)^δ· (α m +j + δ )^δ ≤ m^O(1). If δ=Θ(1) is negative, we obtain in the same way f(j)/f(j+δ) ≤(β m - j-δ)!/(β m-j)!·(γ m - j -δ)!/(γ m - j)! ≤ (β m-δ)^-δ· (γ m - δ )^-δ ≤ m^O(1). For the third statement, we compute f(j+1)/f(j) = (β m - j)(γ m - j)/(j+1)(α m -γ m + j +1)≥ 1 (β m - j)(γ m - j)≥ (j+1)(α m -γ m + j +1) j≤βγ m^2-α m+γ m - 1/(α + β)m+2 = βγ m^2 (1+O(1/m))/(α + β)m (1+O(1/m))= βγ m /α + β+Θ(1) . For the first property, we use the other two properties which were proven. By the third property, the maximum is around the point j^*:=βγ m /α + β, and we only loose a factor m^O(1) by using this approximation using the second property. § A LOWER BOUND FOR 1 ROUND OF SHERALI-ADAMS In this section, we describe a lower bound for 1 round of the Sherali-Adams hierarchy on the naive assignment LP. We will use our construction from Section <ref> with ℓ=3. For clarity, we describe here completely the instance. We only prove in this section that the constructed instance contains a feasible solution to the Sherali-Adams hierarchy (up to some (log n)^O(1) factor). We prove in Section <ref> (specifically Lemma <ref>) that there is no integral solution with an approximation rate n^o(1). §.§ The instance ρ will be chosen as a small enough constant. We have a ground set 𝒰=[m] of size m where ρ m∈ℕ. Our graph has 4 layers L_0,L_1,L_2,L_3 of vertices. The layer L_0 contains only one vertex, the source s. L_3 is exactly the set of all sinks in this instance. Vertex set. In L_0, there is only the source s. In L_1, there is exactly one vertex u for each set of ρ m elements of the ground set 𝒰 (hence mρ m vertices). In L_2, there is exactly one vertex u for each set of 2ρ m elements of the ground set 𝒰 (hence m 2ρ m vertices). In L_3, there is exactly one sink u for each set of ρ m elements of the ground set 𝒰 (hence mρ m sinks). For any vertex, we denote by S_u the set associated to u (one can think that S_s=∅). Edge set. There is a directed edge from s to every vertex in L_1. There is a directed edge (u,v) from u∈ L_1 to v∈ L_2 if and only if S_u⊆ S_v. Similarly, there is a directed edge (u,v) from u∈ L_2 to v∈ L_3 if and only if S_v⊆ S_u. Required out-degrees. The source requires k_s:=mρ m/(1-ρ)mρ m outgoing edges to be covered in the arborescence. Each vertex u∈ L_1 needs k_u=k_1:=(1-ρ)mρ m/2ρ mρ m outgoing edges to be covered. Finally, each vertex u∈ L_2 needs out-degree k_u=k_2:=2ρ mρ m to be covered. Clearly, the total size of the instance is n=2^Θ (m), and we have that k_u=n^Ω(1) for all u. §.§ Feasibility of the assignment LP For convenience, we restate here the naive assignment LP, before defining our subtree solutions. The assignment LP reads as follows. x(δ^+(s)) ≥ k_s x(δ^+(v)) ≥ k_v· x(δ^-(v)) ∀ v∈ V∖ ({s}∪ T) x(δ^-(v)) ≤ 1 ∀ v∈ V 0≤ x_e ≤ 1 ∀ e∈ E . Recall that L_i is the set of edges whose endpoint is in L_i. 
It is easy to verify that x defined by x_e := (1-ρ)m ρ m^-1 if e∈ L_1, (1-ρ)m ρ m^-1·2ρ m ρ m^-1 if e∈ L_2, (1-ρ) m ρ m^-1 if e∈ L_3 is a feasible solution to this LP, with k_s= m ρ m(1-ρ)m ρ m^-1, k_v=(1-ρ)m ρ m2ρ mρ m^-1 for all v∈ L_1, k_v=2ρ mρ m for all v∈ L_2. §.§ Feasibility of SA(1) In this section, we prove that there exists a feasible solution to the first level of the Sherali-Adams hierarchy. We show this by constructing a distribution over possible edge subsets A which survives certain conditioning together with Theorem <ref>. Before proceeding, it is useful to consider two natural distributions which fail, but are useful for intuition. §.§.§ Failed attempt 1 The first idea that comes to mind is simply the independent distribution: each edge is in A independently with probability x_e. Clearly, this distribution satisfy all constraints in expectation if we do not condition by any event. However, there is a fundamental issue if we condition by an event of the form ℰ={e_1∈ A} for some e_1=(s,v)∈ L_1 for instance. Indeed, if we consider the covering constraint at vertex v, we need that ∑_e∈δ^+(v)ℙ[e∈ A |ℰ]≥ k_1 ·ℙ[e_1∈ A |ℰ]=k_1 . But since the edges appear in A independently of each other, we obtain that ∑_e∈δ^+(v)ℙ[e∈ A |ℰ] = ∑_e∈δ^+(v)ℙ[e∈ A]= ∑_e∈δ^+(v) x_e = k_v· x_e_1 . Now remember that x_e_1=n^-Ω(1), so this distribution looses a polynomial factor. It highlights that some correlations are necessary to fool the Sherali-Adams hierarchy. §.§.§ Failed attempt 2 Given the rounding algorithms in previous works, a natural idea to fix the issue and introduce correlations would be to sample the edges according to what the rounding algorithm would do. More precisely, we would proceed as follows. * Sample each edge e=(s,v)∈ L_1 independently with probability x_e. * For each vertex v∈ L_1 which has an incoming edge from the previous step, sample each edge e∈δ^+(v) independently with probability 2ρ mρ m^-1. * For each vertex v∈ L_2 which has an incoming edge from the previous step, select all the edges e∈δ^+(v). The sampling probabilities at each step are selected so that even after conditioning by some event ℰ={e_1∈ A}, the covering constraints remain satisfied. This distribution will fix the issue that independent distributions had, but another problem appears. Indeed, let us consider the event ℰ={e_1∈ A} for some e_1=(s,v)∈ L_1, and the packing constraints at the sink t such that S_t=S_v. In that case, we need to verify that ∑_e∈δ^-(t)ℙ[e∈ A |ℰ]≤ 1 . We claim that, ℙ[e∈ A |ℰ]≥2ρ mρ m^-1 for all e∈δ^-(t). Indeed, since S_t=S_v, for all edges e=(u,t)∈δ^-(t), we have that (v,u)∈ E. Hence we write ℙ[e∈ A |ℰ] ≥ℙ[e∈ A |{(v,u)∈ A}]·ℙ[{(v,u)∈ A}|ℰ] = 1·2ρ mρ m^-1 . Finally, we note that δ^-(t)=(1-ρ )mρ m, therefore ∑_e∈δ^-(t)ℙ[e∈ A |ℰ]≥(1-ρ )mρ m·2ρ mρ m^-1 = n^Ω(1) , and we loose a polynomial factor again. The intuition is that the rounding algorithm creates too much correlations as it has “2 layers” of correlation. Intuitively, we need a distribution which has only one layer of correlation. Before introducing our distribution, we define our subtree solutions, which will be crucial to our distribution. §.§.§ Subtree solutions For any edge e=(u,v), we define S_e=S_v the set corresponding to the endpoint of e. We also recall that D(e) is the set of descendant edges of e (i.e. the set of edges e' such that there exists a directed path starting at e and ending at e'). Subtree solution for e=(u,v)∈ L_1. 
For any edge e∈ L_1, we set x_e'^(e)= 1 e'=e 2ρ mρ m^-1 e'∈ L_2∩ D(e) (1-ρ )mρ m/mρ m·m-2ρ m+|S_e'∩ S_e| |S_e'∩ S_e|^-1 e'∈ L_3∩ D(e) 0 We claim that the above subtree solution is a feasible solution to the assignment LP of the instance given by the same graph, where the source placed at the endpoint v of the edge e=(u,v). Most of the constraints are easy to check, except the packing constraint x(δ^-(v)) ≤ 1 for v∈ L_3, and the covering constraint x(δ^+(v)) = k_v· x(δ^-(v)) for v∈ L_2. To see why these constraints are feasible, imagine the following process. For all edges in δ^+(v), we set the fractional value to 2ρ mρ m^-1. Therefore, every vertex u' in L_2 such that (v,u')∈ E has in-degree 2ρ mρ m^-1 and needs fractional out-degree equal to 1 (because k_u'=2ρ mρ m). Then, all the sinks push one unit of flow uniformly towards the vertices in L_2∩ D(v) (recall that D(v) are the vertices reachable from v). Formally, if some sink t has d neighbors in L_2∩ D(v), we set x_e'^(e)=1/d for all edges e' going from one of those neighbors. One can compute this number, as it is exactly equal to the number of sets of size 2ρ m containing both S_t and S_e, i.e. this is equal to m-2ρ m+|S_t∩ S_e| |S_t∩ S_e|, hence the expression. Clearly, no vertex has fractional in-degree more than 1, hence the packing constraint is satisfied. We need one last crucial check, which is to verify that the vertices in L_2∩ D(v) indeed have fractional out-degree at least 1. To see this, note the following two observations: * By symmetry, all vertices in L_2∩ D(v) receive the same out-degree. This is because all vertices in L_2∩ D(v) are entirely defined by a set of size 2ρ m containing S_e, which have no special property, except that they all contain S_e. Formally, for any vertex u∈ L_2∩ D(v), one can count that there are exactly ρ m jρ mρ m -j sinks t connected to u such that |S_t∩ S_v|=j. This number does not depend on u. * All the sinks in L_3 are reachable from v. This is because a sink t is reachable from v if, and only if there exists a set of size 2ρ m containing both S_v and S_t. But |S_v∪ S_t|≤ |S_v|+|S_t|=2ρ m, so there always exists such a set. Combining the above two observations, we conclude that each vertex in L_2∩ D(v) receives a fractional out-degree exactly equal to |T|/|L_2∩ D(v)|=mρ m/(1-ρ)mρ m , hence we can even afford to scale down by the factor (1-ρ )mρ m/mρ m and still get a feasible solution. This is exactly how we set the variables for the edges in L_3. Subtree solution for e=(u,v)∈ L_2. If e∈ L_2, we set x_e'^(e)= 1 e'=e 1 e'∈ L_3∩ D(e) 0 It is easy to check all the constraints. Subtree solution for e=(u,v)∈ L_3. If e∈ L_3, we set x_e'^(e)= 1 e'=e 0 The constraints are trivially satisfied. §.§.§ The shadow distribution Using subtree solutions, we are ready to define our distribution of edges to fool the Sherali-Adams hierarchy via Theorem <ref>. We sample a set A of active edges in the following way. * We select a set of shadow edges S which contains each edge e∈ E independently with probability x_e. * For each shadow edge e∈ S, we create a set S_e which contains each edge e'∈ E independently with probability x_e'^(e) (recall that x_e^(e)=1, so that e∈ S_e with probability 1). We will say that e∈ S “triggers a subtree”. * We return the set A=∪_e∈ SS_e. In the following, it will also be useful to define A' to be the multiset of active edges (indeed, an edge might be selected several times in the above process). 
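The three sampling steps above translate directly into code. The sketch below is a minimal Python illustration; the dictionaries x and x_sub, holding the assignment-LP values and the subtree solutions x^(e), are assumed to be supplied by the caller, and the names are ours. It returns both the set A and the multiset A' of active edges counted with multiplicity.

import random
from collections import Counter

def sample_shadow(edges, x, x_sub, rng=None):
    # x[e]      : assignment-LP value of edge e
    # x_sub[e]  : the subtree solution x^(e), a dict mapping edges e' to x^(e)_{e'},
    #             with x_sub[e][e] == 1 by convention
    rng = rng or random.Random(0)
    shadow = [e for e in edges if rng.random() < x[e]]   # step 1: shadow edges, chosen independently
    A_multi = Counter()
    for e in shadow:                                     # step 2: each shadow edge triggers a subtree
        for e2, p in x_sub[e].items():
            if rng.random() < p:
                A_multi[e2] += 1
    return set(A_multi), A_multi                         # step 3: the set A and the multiset A'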
The rest of this section is to prove that the shadow distribution with our choice of subtree solutions works. We start by a few useful lemmas. §.§.§ Some useful lemmas In this part, we prove some useful lemmas which will help later. From now on, we denote by A' the set of active edges counted with their multiplicity (indeed, note that an edge can appear in the subtree of several shadow edges). We denote by ℰ the event by which Sherali-Adams can condition. Note that it must be of the form ℰ={e_1∈ A} or ℰ={e_1∉ A} for some edge e_1. In the former case, we say that ℰ is a positive event, and a negative event in the latter case. We define by s_e the probability of the event that {e∈ A} in the above probability distribution. We start with the following crucial lemma. Note that if this lemma was not true, it would not even be clear how to verify the constraints without conditioning by any event ℰ. For any edge e∈ E, we have that x_e≤ s_e≤ 6x_e. First, we note that s_e≥ℙ[{e∈ S}∩{e∈ S_e}]=x_e. For the upper-bound, we compute the probability s_e of any edge e appearing in the set A. We clearly have that ℙ[e∈ A] =1-ℙ[e∉ A] =1-ℙ[⋂_e'∈ E{e'∉ S}∪{e∉ S_e'}] = 1 -∏_e'∈ Eℙ[{e'∉ S}∪{e∉ S_e'}] = 1-∏_e'∈ E(1-ℙ[{e'∈ S}∩{e∈ S_e'}]) = 1-∏_e'∈ E(1-x_e'· x_e^(e')) , where the third equality uses the fact that each edge e' is selected as a shadow independently of other edges, and that each edge e' selects its set S_e' independently of other edges. Hence, we have s_e = 1-∏_e'∈ E(1-x_e'· x_e^(e')) . If e∈ L_1, then note that e is given a non-zero value in a subtree solution x^(e') if and only if e'=e. Therefore, we have s_e = 1- (1-x_e· x_e^(e)) = 1- (1-x_e)=x_e . If e∈ L_2, let us denote by e_1 the edge on the unique directed path from s to e. Then, clearly, s_e = 1-(1-x_e_1· x_e^(e_1))·(1-x_e· x_e^(e)) = 1-(1-x_e)^2 ≤ 2x_e . If e∈ L_3, let us denote by A(e) the set of ancestor edges of e. In the following, we use the inequality e^-2x<1-x for 0≤ x<1/2. Then, clearly, s_e = 1-∏_e'∈ L_1∩ A(e)(1-x_e'· x_e^(e'))·∏_e'∈ L_2∩ A(e)(1-x_e'· x_e^(e'))· (1-x_e· x_e^(e)) = 1-(1-x_e)·(1-x_e 2ρ mρ m^-1)^2ρ mρ m·∏_e'∈ L_1∩ A(e)(1-x_e'· x_e^(e')) ≤ 1-exp(-4x_e-2·∑_e'∈ L_1∩ A(e) x_e'· x_e^(e')) . The last sum is slightly more tricky to compute. Let v be the endpoint of e. We remark that all edges in δ^-(v) play a symmetric role so that ∑_e'∈ L_1∩ A(e) x_e'· x_e^(e')=∑_e'∈ L_1∩ A(e”) x_e'· x_e”^(e') , for any e”∈δ^-(v). To see this, note that the value x_e' is the same for all edges in L_1. Hence, one only needs to count how many edges e'∈ L_1 are ancestors of e” and are such that x_e”^(e') takes a specific value δ. This value δ is entirely determined by the size of the intersection |S_e”∩ S_e'| and when summing over all possible edges in L_1, we are in fact summing over all sets of size ρ m, which makes the symmetry of the construction apparent. Hence, we can write (recall that A(v) is the set of ancestors of v) ∑_e'∈ L_1∩ A(e)x_e'· x_e^(e') = 1/|δ^-(v)|·∑_e”∈δ^-(v)∑_e'∈ L_1∩ A(e”)x_e'· x_e”^(e') = 1/|δ^-(v)|·∑_e'∈ L_1∩ A(v)x_e'∑_e”∈δ^-(v) x_e”^(e') = 1/|δ^-(v)|·∑_e'∈ L_1∩ A(v)x_e'(1-ρ )m ρ m/mρ m = 1/|δ^-(v)|·∑_e'∈ L_1∩ A(v)(1-ρ) mρ m^-1(1-ρ )m ρ m/mρ m = 1/|δ^-(v)|·∑_e'∈ L_1∩ A(v)1/mρ m = 1/|δ^-(v)| = (1-ρ)mρ m^-1=x_e . Therefore s_e ≤ 1-exp(-4x_e-2·∑_e'∈ L_1∩ A(e) x_e'· x_e^(e')) = 1-exp(-6x_e)≤ 6x_e , where we used the inequality exp(x)≥ 1+x for all x. We continue with the following easy, but useful lemma which intuitively says that in any subtree S_e, no edge dominates the total expected out-degree at some vertex v. 
For any e∈ S, any vertex v, and any e'∈δ^+(v), we have that 𝔼[|(δ^+(v)∖{e'}) ∩ S_e|]≥ (1-o(1))·𝔼[|δ^+(v) ∩ S_e|] This is easy to show, it suffices to check the three cases e∈ L_1,e∈ L_2,e∈ L_3. We start by the non-trivial case, which is when e∈ L_1 and v∈ L_2. In that case, we see by definition of our subtree solutions that 𝔼[|δ^+(v) ∩ S_e|]=∑_e'∈δ^+(v)x_e'^(e)=1 , and we have that 𝔼[|(δ^+(v)∖{e'}) ∩ S_e|] ≥𝔼[|(δ^+(v)) ∩ S_e|]-max_e'∈δ^+(v) x_e'^(e) ≥𝔼[|δ^+(v) ∩ S_e|]-(1-ρ)mρ m/mρ m=(1-o(1))·𝔼[|(δ^+(v)) ∩ S_e|] , which concludes this case. In all other cases, we note that for any vertex v and edge e, for any e',e”∈δ^+(v), we have the symmetry that x_e'^(e)=x_e”^(e). Moreover, for all vertices v, we have that |δ^+(v)|=ω(1), which proves the other cases. We conclude this part with the last lemma. For m big enough, and for any event ℰ that one round of Sherali-Adams can condition on, for any vertex v, we have that 𝔼[|δ^+(v)∩ A||ℰ]≥Ω(1/m^4)·𝔼[|δ^+(v)∩ A'||ℰ] , Before proving this lemma, let us comment on it. The left-hand side is the expected out-degree of a vertex v (after conditioning), while 𝔼[|δ^+(v)∩ A'||ℰ] refers to expected out-degree estimated by the naive union bound over all subtrees, which counts the edges with multiplicity. Essentially, this lemma states some converse inequality of the union bound, i.e. that at the cost of loosing a factor m^O(1)=(log n)^O(1), we can replace the exact value by the union bound estimate. The intuition as to why this is true is that the number of times an edge is selected in a sum of independent binary variables of small expectation, so with superpolynomially small probability it is selected more than say m^4 times. Next, we note that the events SA is allowed to condition by have only a polynomially small probability, so even after conditioning, an edge will be selected less than m^4 times with high probability. First, we remark that the number of times an edge e appears in A' is a sum of independent binary random variables, let us denote by n_e this number. Moreover, we can compute that 𝔼[n_e] = ∑_e'∈ A(e) x_e'· x_e^(e')<2 . where we recall that A(e) are the ancestor edges of e. To see this, note that if e∈ L_1∪ L_2, this is easy to check since any such edge has at most 2 ancestors. If e∈ L_3, we write ∑_e'∈ A(e) x_e'· x_e^(e') =∑_e'∈ A(e)∩ L_1 x_e'· x_e^(e') + ∑_e'∈ A(e)∩ L_2 x_e'· x_e^(e') + x_e ≤ x_e+ 2ρ mρ m· x_e·2ρ mρ m^-1· 1 + x_e = 3x_e<2 . We recall that we already performed the calculation that ∑_e'∈ A(e)∩ L_1 x_e'· x_e^(e')=x_e in the proof of Lemma <ref>. Using standard Chernoff bounds, for any t>m^4 and m big enough, we obtain that ℙ[n_e≥ t]≤exp(-t/10) . Second, note that the event ℰ must have a probability at least x_e=exp(-Θ(m)) for some edge e∈ G (using Lemma <ref> again). Therefore, we can write for t>m^4 and m big enough, ℙ[n_e≥ t|ℰ]=ℙ[{n_e≥ t}∩ℰ]/ℙ[ℰ]≤exp(-t/10)/exp(-Θ(m))≤exp(-t/20) . Therefore, we can write 𝔼[|δ^+(v)∩ A'||ℰ] =∑_e∈δ^+(v)𝔼[n_e|ℰ] ≤∑_e∈δ^+(v)(𝔼[n_e|{n_e≤ m^4,ℰ}]+∑_t=m^4^∞𝔼[n_e|{n_e= t,ℰ}]·ℙ[n_e≥ t|ℰ]) . To analyze the above sum, we need to handle several cases. Case 1: ℰ={e_1∉ A} and e=e_1. In that case, then we have 𝔼[n_e|{n_e= t,ℰ}]·ℙ[n_e≥ t|ℰ]=0 for all t and 𝔼[n_e|{n_e≤ m^4,ℰ}]=0. Case 2: ℰ={e_1∈ A} and e= e_1. Then in that case, we clearly have that 𝔼[n_e|{n_e≤ m^4,ℰ}]≥ 1, and that ∑_t=m^4^∞𝔼[n_e|{n_e= t,ℰ}]·ℙ[n_e≥ t|ℰ]≤∑_t=m^4^∞ t·exp(-t/20)=O(1) . Case 3: e_1≠ e. 
In that case, our strategy is prove that 𝔼[n_e|{n_e≤ m^4,ℰ}]≥ n^-Θ(1), which will show that the sum ∑_t=m^4^∞ t·exp(-t/20) is negligible compared to the total expectation. To see this, we have two subcases. * If e_1∉ D(e) then we remark that the event {e∈ S}∩{e∈ S_e} is independent of ℰ (also recall that ℙ[e∈ S_e]=1). Then we write ℙ [{e∈ S}∩{e∈ S_e}∩{n_e≤ m^4}∩ℰ] =ℙ[{n_e≤ m^4}|ℰ∩{e∈ S}∩{e∈ S_e}]·ℙ[{e∈ S}∩{e∈ S_e}|ℰ]·ℙ[ℰ]≥ n^-Θ(1) , where the last inequality uses the fact that the three events {e∈ S}, {e∈ S_e}, ℰ are all independent with probability at least n^-Θ(1) each, and that ℙ[{n_e> m^4}|ℰ∩{e∈ S}∩{e∈ S_e}]≤exp(-m^4/10)/ℙ[{e∈ S}∩{e∈ S_e}∩ℰ]≤exp(-m^3). This shows that 𝔼[n_e|{n_e≤ m^4,ℰ}]≥ n^-Θ(1). * If e_1∈ D(e)∖{e}, then let us consider e' an ancestor of e in L_1, for which we remark that x_e^(e')≥ n^-Θ(1) and x_e_1^(e')=n^-Θ(1) by construction. Further, we remark that if we condition by the event {e_1∉ S_e'}, then the events {e'∈ S} and ℰ become independent. Therefore, we write ℙ[{e'∈ S}∩{e_1∉ S_e'}∩{e∈ S_e'}∩{n_e≤ m^4}∩ℰ] = ℙ[{e'∈ S}∩{e∈ S_e'}∩{n_e≤ m^4}∩ℰ|{e_1∉ S_e'}]·ℙ[{e_1∉ S_e'}] ≥ n^-Θ(1) , where the last inequality uses the fact that ℙ[{e_1∉ S_e'}]=1-x_e_1^(e')>1-n^-Θ(1), the fact that ℙ[{n_e> m^4}|ℰ∩{e'∈ S}∩{e_1∉ S_e'}∩{e∈ S_e'}]≤exp(-m^3), and the fact that ℙ[{e'∈ S}∩{e∈ S_e'}∩ℰ|{e_1∉ S_e'}]≥ n^-Θ(1). The second fact can be deduced from the third by a similar calculation as in the previous subcase. The last fact is not obvious. Note that after conditioning by {e_1∉ S_e'} the three events ℰ,{e'∈ S},{e∈ S_e'} become independent. Hence it suffices to show that ℙ[ ℰ|{e_1∉ S_e'}]≥ n^-Θ(1). To this end, if ℰ is a negative event, we clearly have that ℙ[ ℰ|{e_1∉ S_e'}]≥ℙ[ℰ]≥ 1-6x_e_1>1/2 , where we used Lemma <ref>. Otherwise, if ℰ is a positive event, then we have that ℙ[ ℰ|{e_1∉ S_e'}]= ℙ[ {e_1∈ A}|{e_1∉ S_e'}]≥ℙ[ {e_1∈ S}|{e_1∉ S_e'}]=x_e_1=n^-Θ(1) . To conclude, we were able to prove that in all cases, we have for any edge e∈δ^+(v) that, either ℰ={e∉ A}, or that ∑_t=m^4^∞𝔼[n_e|{n_e= t,ℰ}]·ℙ[n_e≥ t|ℰ] ≤∑_t=m^4^∞t·exp(-t/20) ≤ O(exp(-m^2)) ≤ O(𝔼[n_e|{n_e≤ m^4,ℰ}]) . Therefore, we can conclude 𝔼[|δ^+(v)∩ A'||ℰ] =∑_e∈δ^+(v)𝔼[n_e|ℰ] ≤∑_e∈δ^+(v)𝔼[n_e|{n_e≤ m^4,ℰ}]+∑_t=m^4^∞𝔼[n_e|{n_e= t,ℰ}]·ℙ[n_e≥ t|ℰ] ≤ O(∑_e∈δ^+(v)𝔼[n_e|{n_e≤ m^4,ℰ}])≤ O(m^4) ·𝔼[|δ^+(v)∩ A||ℰ] . §.§.§ Verifying the covering constraints For convenience, we recall here the covering constraint. ∑_e∈δ^+(v)x_e≥ k_v·∑_e∈δ^-(v) x_e for any vertex v which is not a sink. We use the convention that ∑_e∈δ^-(v) x_e=1 if v=s. We will verify those constraints up to a factor of (log n)^O(1). Using Lemma <ref>, it is clear that any constraint is satisfied up to a multiplicative factor of 6 if we do not condition by any event. If we perform some conditioning, we use Lemma <ref> to argue that we can count all edges with their multiplicity, at the cost of loosing some factor (log n)^O(1) (and this is true even after conditioning). We have now a simple case analysis. Case 1: v=s. This is case is easy, as ℰ can be dependent of at most 2ρ mρ m edges in δ^+(s) (by the structure of the graph). Since s has m ρ m outgoing edges, all playing a symmetric role, the influence of ℰ is negligible. Case 2: v≠ s and ℰ={e_1∉ A} is a negative event. Then we condition by the outcome of the first step of the sampling (i.e. the set selected as the shadow set). 
After this conditioning, we see that by linearity of expectation, and by the fact that the all the subtree solutions satisfy the covering constraints of the assignment LP, we will have that the covering constraint is satisfied in expectation by our sampling procedure. There is one slight caveat however, which is if e_1∈δ^+(v), in which case the out-degree of v might decrease in expectation. However, by Lemma <ref>, this loss in negligible in any subtree S_e for any edge e∈ E. Hence by linearity of the expectation, the loss in negligible for any outcome for the shadow set. Formally, we write, for any possible outcome X of the shadow set S: 𝔼[|δ^+(v)∩ A||ℰ] ≥Ω(1/m^4)·𝔼[|δ^+(v)∩ A'||ℰ] =Ω(1/m^4)∑_X∑_e∈ X𝔼[|(δ^+(v)∖{e_1})∩ S_e|]·ℙ[{S=X}|ℰ] ≥Ω((1-o(1))/m^4)∑_X∑_e∈ X𝔼[|δ^+(v)∩ S_e|]·ℙ[{S=X}|ℰ] ≥Ω(1/m^4)∑_X∑_e∈ Xk_v·𝔼[|δ^-(v)∩ S_e|]·ℙ[{S=X}|ℰ] ≥Ω(1/m^4)· k_v·𝔼[|δ^-(v)∩ A||ℰ] . Case 3: v≠ s and ℰ={e_1∈ A} is a positive event. Similarly, we use Lemma <ref> to argue with union bounds only, at the cost of loosing a factor (log n)^O(1). The proof is very similar to Case 2, using the conditioning by every possible outcome for the shadow set. However, there is one crucial difficulty, which shows up when e_1∈δ^-(v). In that case, it is crucial that the expected out-degree at vertex v is at least Ω(k_v), because the event ℰ adds an additive 1 to the in-degree of v. Fortunately, using the properties of our distributions, and Lemma <ref>, we can write ℙ[{e_1∈ S}|{e_1∈ A}] = ℙ[{e_1∈ A}|{e_1∈ S}]·ℙ[{e_1∈ S}]/ℙ[{e_1∈ A}] ≥ (1/6)·ℙ[{e_1∈ A}|{e_1∈ S}] = 1/6 . Hence, with constant probability (after conditioning), the edge e_1 is also in the shadow set S, which will trigger a subtree rooted at v and ensure an additional expected out-degree k_v at vertex v in expectation (after conditioning). §.§.§ Verifying the packing constraints For convenience, we recall here the packing constraint. ∑_e∈δ^-(v)x_e≤ 1 for any vertex v. We will verify those constraints up to a multiplicative factor of O(1). Using Lemma <ref>, it is clear that any constraint is satisfied up to a multiplicative factor of 6 if we do not condition by any event. Otherwise, let e_1 be the edge considered in the conditioning, and let ℰ be the corresponding event. Let us do a case analysis. Case 1: ℰ={e_1∉ A}. We notice that after conditioning, the expected in-degree on vertex v in the set A can only decrease. To see this formally, note that for any e, ℙ[{e∈ S}|{e_1∉ A}]=ℙ[{e_1∉ A}|{e∈ S}]·ℙ[{e∈ S}]/ℙ[{e_1∉ A}]≤ℙ[{e∈ S}] , where clearly conditioning by {e∈ S} cannot decrease the probability of the event {e_1∈ A}. Second, note that the in-degree cannot be negatively correlated with any event of the form {e∈ S}. By Lemma <ref>, the expected in-degree was at most 6 to start with. Using previous remarks, it implies that the constraint is satisfied up to a factor of 6 after the conditioning. Case 2. ℰ={e_1∈ A}. Let v be the vertex at which the packing constraint is. The strategy is to first understand how the probability of each edge being in the shadow set changes after conditioning by the event ℰ, then argue about the congestion that each edge in the shadow set will induce on vertex v by triggering a subtree. If v∈ L_1, then δ^-(v) contains only one edge, so the constraint is satisfied deterministically. If v∈ L_2, let us assume in the worst-case that all edges in e∈ L_1∩ A(v) have probability 1 of appearing in the shadow set (after conditioning by ℰ). 
Using our definitions for the subtree solutions, the expected congestion induced by those triggered subtrees is then equal to at most ∑_e∈ L_1∩ A(v)2ρ mρ m^-1 = 1 . Hence we only need to worry about the probability of the edges e∈δ^-(v) being selected in the shadow set. Note that if e_1∉δ^+(v)∪δ^-(v), the event {e∈ S} is independent of ℰ for any e∈δ^-(v) so we are done. Otherwise, if e_1∈δ^-(v), then only the probability of e_1 being in S changes. Even after conditioning, this probability is at most 1 anyway so we are done. If e_1∈δ^+(v), for any edge e∈δ^-(v) we write ℙ[{e∈ S}|{e_1∈ A}] = ℙ[{e_1∈ A}|{e∈ S}]·ℙ[{e∈ S}]/ℙ[{e_1∈ A}] ≤1· x_e/x_e_1=2ρ mρ m^-1 , where the inequality uses Lemma <ref>. Hence, each edge e∈δ^-(v) induces an expected congestion of 2ρ mρ m^-1 on vertex v, after conditioning on event ℰ. There are exactly 2ρ mρ m such edges, which concludes that the total congestion is O(1) in the case where v∈ L_2. Finally, v∈ L_3 is the most tricky subcase. If e_1∈ L_1, then the probability of some edge e being in the shadow set is only modified after conditioning if e=e_1. This is only one edge, so even if the probability rises to 1 this is not an issue as the subtree S_e has an expected congestion on v of o(1). If e_1∈ L_2, the probability of at most 2 edges of being in S can increase after conditioning by ℰ, and the same argument applies. Finally, we are left with the case that e_1=(u',v')∈ L_3. Note that there is at most one edge (u',v) if it exists, so we can ignore all the edges in δ^-(u') by loosing a additive 1 on the congestion. Note that all edges in δ^-(v) appears in S independently of ℰ, except for possibly one edge if e_1∈δ^-(v). Hence we only need to worry about the probability of the edges in L_1 being selected in the shadow set, after conditioning by the event ℰ. For any edge e∈ L_1, we write ℙ[{e∈ S}|{e_1∈ A}] = ℙ[{e_1∈ A}|{e∈ S}]·ℙ[{e∈ S}]/ℙ[{e_1∈ A}] ≤ℙ[⋃_e'∈ E{{e'∈ S}∩{e_1∈ S_e'}}|{e∈ S}]·ℙ[{e∈ S}]/ℙ[{e_1∈ A}] ≤(ℙ[{e_1∈ A}]+ℙ[{e_1∈ S_e}])·ℙ[{e∈ S}]/ℙ[{e_1∈ A}] ≤(6x_e_1+x_e_1^(e))·ℙ[{e∈ S}]/ℙ[{e_1∈ A}] ≤(6x_e_1+x_e_1^(e))·ℙ[{e∈ S}]/ℙ[{e_1∈ S}] =6x_e_1+x_e_1^(e) . where we used a union bound in the third line, Lemma <ref> in the fifth, and the fact that x_e=x_e_1 in our instance in the last line. Therefore, the expected congestion induced by edges in L_1 conditioned on ℰ is equal to at most ∑_e∈ L_1(6x_e_1+ x_e_1^(e))·∑_e'∈δ^-(v)x_e'^(e) = ∑_e∈ L_16x_e·∑_e'∈δ^-(v)x_e'^(e) + ∑_e∈ L_1x_e_1^(e)·∑_e'∈δ^-(v)x_e'^(e) = ∑_e∈ L_16(1-ρ)mρ m^-1·(1-ρ)mρ m/mρ m +∑_e∈ L_1x_e_1^(e)·(1-ρ)mρ m/mρ m = 6+∑_e∈ L_1∩ A(u')(1-ρ)mρ m/mρ m·m-2ρ m+|S_e∩ S_e_1| |S_e∩ S_e_1|·(1-ρ)mρ m/mρ m = O(1)+((1-ρ)mρ m/mρ m)^2 ·∑_e∈ L_1∩ A(u')m-2ρ m+|S_e∩ S_e_1| |S_e∩ S_e_1|^-1 = O(1)+((1-ρ)mρ m/mρ m)^2 ·∑_j=0^ρ mm-2ρ m+j j^-1ρ m j^2 To analyze this last sum, we need that m-2ρ m+j j^-1ρ m j^2·((1-ρ)mρ m/mρ m)^2≤ m^O(1) for all the range of possible values of j. Indeed, using Lemma <ref>, with j=x ρ m for 0≤ x ≤ 1, we have log_2(m-2ρ m+j j^-1ρ m j^2)/m≤ (2ρ)· h(x)-(1-2ρ+x ρ)· h(x ρ/1-2ρ +x ρ)+O(log (m)/m) . To conclude, we study the function f:x↦ (2ρ)· h(x)-(1-2ρ+x ρ)· h(x ρ/1-2ρ +x ρ)+2·((1-ρ)h(ρ/1-ρ)-h(ρ)) One can verify that f'(x)=ρ·(2log_2(1-x)-2log_2(x)+log_2(x ρ/1-2ρ+xρ)) which is non-negative if and only if x≤ρ. Therefore, this function admits a maximum at x=ρ, where f(ρ)=2ρ h(ρ)-(1-ρ)^2· h((ρ/1-ρ)^2)+2(1-ρ)h(ρ/1-ρ)-2h(ρ) . Finally, one can expand f(ρ) for ρ→ 0^+ to obtain that f(ρ)=-ρ^2/log(2)+O(ρ^3) . Hence for any constant ρ>0 small enough, we obtain that f(ρ) < 0. 
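As a quick numerical sanity check of the claim f(ρ)<0 (merely illustrative, and no substitute for the expansion above), one can evaluate the closed form of f at its maximizer x=ρ for a few small constants ρ and compare it to the leading term -ρ^2/log(2); the short Python snippet below does exactly this.

from math import log, log2

def h(x):
    # binary entropy, with h(0) = h(1) = 0
    return 0.0 if x in (0.0, 1.0) else -x * log2(x) - (1 - x) * log2(1 - x)

def f_at_maximizer(rho):
    # value of the function f at x = rho, i.e. the closed form derived above
    return (2 * rho * h(rho)
            - (1 - rho) ** 2 * h((rho / (1 - rho)) ** 2)
            + 2 * (1 - rho) * h(rho / (1 - rho))
            - 2 * h(rho))

for rho in (0.01, 0.05, 0.1):
    print(rho, f_at_maximizer(rho), -rho ** 2 / log(2))   # both negative and of the same order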
Hence, we obtain ∑_e∈ L_1(x_e_1+ x_e_1^(e))·∑_e'∈δ^-(v)x_e'^(e)≤ 1+o(1)=O(1) , for any constant ρ small enough. This concludes the last case. §.§.§ Connecting the dots We were able to verify all constraints with all possible conditioning, up to a multiplicative factor of (log n)^O(1), assuming ρ is a small enough constant, and m is big enough. Hence, by Theorem <ref>, we obtain that this instance is feasible for 1 round of Sherali-Adams with an approximation rate (log n)^O(1). But we prove in Section <ref> that any integral solution must have approximation factor n^Ω(1), which concludes the proof of Theorem <ref>. § THE GENERAL CONSTRUCTION In this part, we describe our instances (for any depth) and discuss some of their properties. In particular, this section contains a proof that our instances do not contain any n^o(1/ℓ)-approximate integral solutions (Lemma <ref>), and a proof of the first three properties of Theorem <ref>. We will denote by ℓ the depth of the instance, which will be parameterized by its size n, its depth ℓ, and a constant ρ. We will denote by G^(ρ)_n,ℓ the instance obtained in that way. The instance used in Section <ref> is the special case where ℓ=3. §.§ The vertex set We have a ground set 𝒰=[m] of size m for m big enough. We fix a constant ρ which will be taken small enough. We also use the parameter ϵ such that ℓ=3/ϵ. We assume that ϵ m=ω(log m) (or equivalently, ℓ=o(log n/loglog n)), although this is not strictly needed for the construction itself, we will use this assumption later in some proofs. We choose 1/ϵ and m big enough so that ϵρ m∈ℕ, 1/ϵ∈ℕ. For n big enough, this will always be possible while modifying the instance size n and the number of layers by some factor O(1) only. This will not affect the asymptotic behavior of our results. The graph will contain ℓ+1 layers L_0,L_1,… ,L_i,…, L_ℓ=3/ϵ. Until i≤ 2/ϵ, we will have an expanding phase where the sets labelling the vertices get bigger and bigger. Then, from layer L_2/ϵ to the last layer, the labelling sets get smaller and smaller; this is the collapsing phase. With this intuition in mind, we are ready to describe the instance. Source vertex. In layer L_0, there is only the source vertex s, which we label with the empty subset, i.e. we set S_s=∅. Expanding phase. In layer L_i for i< 2/ϵ, there is one vertex v for each set S_v⊆𝒰 of size iϵρ m. Hence, we have |L_i|=m iϵρ m , for i< 2/ϵ. Collapsing phase. In layer L_i for 2/ϵ≤ i≤ 3/ϵ, there is one vertex v for each set S_v⊆𝒰 of size 4ρ m- iϵρ m. Hence, we have that |L_i|=m 4ρ m- iϵρ m , for i≥ 2/ϵ. Sinks. The sinks are all the vertices in the last layer (layer i=3/ϵ). §.§ The edge set For each i, we define the set E_i of edges having their endpoint in L_i as follows: E_i:={(u,v) | u∈ L_i-1,v∈ L_iS_u⊆ S_v} 1≤ i≤ 2/ϵ, {(u,v) | u∈ L_i-1,v∈ L_iS_v⊆ S_u} 2/ϵ < i≤ 3/ϵ. Then, we set the total edge set E as E:=⋃_1≤ i≤ 3/ϵ E_i . In other words, there can exist edges only in-between consecutive layers. Moreover, any pair of vertices (u,v) belonging to consecutive layers are connected by an edge if and only if the labels S_u,S_v have an inclusion relationship (i.e. either S_u⊆ S_v or S_v⊆ S_u). §.§ Degree requirements In this part, we describe the required out-degree for each vertex v in the arborescence to be covered, which we denote by k_v. For ease of notation, we will also denote γ_v:=k_v/|δ^+(v)| the ratio between required out-degree for vertex v, and the out-degree of v in the graph. 
All these quantities will only depend on the layer that v is in, and not specifically on which vertex v is. Hence, we will have quantities k_i,γ_i,δ^+_i when referring to these quantities for any vertex v∈ L_i. We set the k_is such that in an integral solution T, the following must hold: * |T∩ L_1/ϵ|=mρ m/(1-ρ)mρ m, * |T∩ L_2/ϵ|=mρ m/2ρ mρ m, and * |T∩ L_3/ϵ|=mρ m. To achieve this, we set the the γ_is as follows: γ_i:= γ'_1 0≤ i< 1/ϵ , γ'_2 1/ϵ≤ i< 2/ϵ , γ'_3 2/ϵ≤ i< 3/ϵ , where γ'_1 := (ϵρ m)!/((ρ m)!·(1-ρ) mρ m)^ϵ , γ'_2 := (ϵρ m)!/((ρ m)!·2ρ mρ m)^ϵ , γ'_3 := (ϵρ m)!/((ρ m)!)^ϵ , Let us verify that this satisfies our three desiderata. In an integral solution T satisfying the above degree requirements, we must have that |T∩ L_1/ϵ| =∏_i=0^1/ϵ-1(γ_i δ_i^+) =(γ'_1)^1/ϵ·∏_i=0^1/ϵ-1δ_i^+ =(γ'_1)^1/ϵ·∏_i=0^1/ϵ-1m-iϵρ mϵρ m = 1/(ρ m)!·(1-ρ) mρ m·∏_i=0^1/ϵ-1[(ϵρ m)!·m-iϵρ mϵρ m] = 1/(ρ m)!·(1-ρ) mρ m·m!/((1-ρ)m)! = mρ m/(1-ρ)mρ m . Similarly, we must have that |T∩ L_2/ϵ| =∏_i=0^2/ϵ-1(γ_i δ_i^+) =mρ m/(1-ρ)mρ m·1/(ρ m)!·2ρ mρ m·∏_i=1/ϵ^2/ϵ-1[(ϵρ m)!·m-iϵρ mϵρ m] =mρ m/(1-ρ)mρ m·1/(ρ m)!·2ρ mρ m·((1-ρ )m)!/((1-2ρ)m)! = mρ m/(1-ρ)mρ m·1/2ρ mρ m·(1-ρ)mρ m=mρ m/2ρ mρ m . By a similar calculation (note that from layers 2/ϵ to 3/ϵ we are in the collapsing phase, so the expression of δ_i^+ changes), we also have that |T∩ L_3/ϵ| =∏_i=0^3/ϵ-1(γ_i δ_i^+) =mρ m/2ρ mρ m·1/(ρ m)!·∏_i=2/ϵ^3/ϵ-1[(ϵρ m)!·4ρ m-iϵρ mϵρ m] =mρ m/2ρ mρ m·1/(ρ m)!·(2ρ m)!/(ρ m)!=mρ m . §.§ Required out-degrees are large We prove the following lemma. There exists an absolute constant ξ>0 such that for any ρ<ξ, any m big enough and ϵ small enough, we have that k_u≥ n^Ω(1/ℓ)=n^Ω(ϵ) for any vertex u. Moreover, for ℓ=3 (which means ϵ=1), then k_u=n^Ω(1) for all u. The second case is easy to check, remembering that n=2^O(m) and that if ϵ=1, we have only 3 different values of k, k_0=mρ m/(1-ρ)mρ m, k_1=(1-ρ)mρ m/2 ρ mρ m, and k_2=2 ρ mρ m (recall that ρ is considered a constant). In the first case, we use Stirling's formula, the fact that ϵ m=ω(log m), and that ϵ is a small enough constant; and we obtain for all i<1/ϵ, k_i≥γ'_1 δ^+_1/ϵ∼ m^Θ(1)·(1-ρ) m ρ m^-ϵ·(ϵρ m)^ϵρ m/(ρ m)^ϵρ m·m-ρ m ϵρ m , hence we get log_2(k_i)/m≥ -ϵ (1-ρ)h(ρ/1-ρ)-log_2(1/ϵ)ϵρ+(1-ρ) h(ϵρ/(1-ρ))+O(log (m)/m) , where we recall that h is the entropy function of the Bernoulli distribution. Then, we use the expansion h(x)= x(1-log(x))/log(2)-x^2/log (4)+O(x^3) at x=0 (log is the natural logarithm) to obtain log_2(k_i)/m ≥ -ϵ (1-ρ)h(ρ/1-ρ)-log_2(1/ϵ)ϵρ+(1-ρ) h(ϵρ/(1-ρ))+O(log (m)/m) ≥ -ϵ (1-ρ) h(ρ/1-ρ)-log_2(1/ϵ)ϵρ+(1-ρ)·ϵρ (1-log(ϵρ /(1-ρ)))/(1-ρ)log(2) +O(ϵ^2)+O(log (m)/m) ≥ -ϵ· (ρ1-log(ρ/(1-ρ))/log(2)-ρ^2/(1-ρ)log(4)+O(ρ^3)) + ϵρ1-log(ρ/(1-ρ))/log(2) + o(ϵ) ≥ϵ(ρ^2/(1-ρ)log(4)+O(ρ^3))+o(ϵ) . Therefore, for ρ a small enough constant, we obtain that log_2(k_i)/m≥Ω(ϵ), hence k_i≥ n^Ω(ϵ)=n^Ω(1/ℓ), for any i<1/ϵ. The other cases are similar: for 1/ϵ≤ i<2/ϵ, we write k_i≥γ'_2 m-2ρ m ϵρ m∼ m^Θ(1)·2ρ m ρ m^-ϵ·(ϵρ m)^ϵρ m/(ρ m)^ϵρ m·m-2ρ m ϵρ m . Therefore we get log_2(k_i)/m ≥ -2ϵρ-log_2(1/ϵ)ϵρ+(1-2ρ) h(ϵρ/(1-2ρ))+O(log (m)/m) =-2ϵρ -log_2(1/ϵ)ϵρ + ϵρ1-log(ϵρ /(1-2ρ))/log(2)+o(ϵ) = -2ϵρ -log_2(1/ϵ)ϵρ + ϵρ1-log(ϵ)-log( ρ)+log(1-2ρ)/log(2)+o(ϵ) = ρϵ(-2+1-log (ρ)+log(1-2ρ)/log(2))+o(ϵ)= ρϵlog_2(1/ρ)+O(ρϵ)+o(ϵ) . Similarly, for ρ a small enough constant, we obtain the desired result. 
Finally, for i≥ 2/ϵ, we obtain k_i≥ m^Θ(1)· (ϵ)^ϵρ m·ρ mϵρ m and taking the log then dividing by m we obtain log_2 (k_i)/m = -ϵρlog_2(1/ϵ) + ρ h(ϵ)+O(log(m)/m) = -ϵρlog_2(1/ϵ) + ρ h(ϵ)+o(ϵ) = -ϵρlog_2(1/ϵ) + ρϵ1-log(ϵ)/log(2)+o(ϵ) = ρ/log(2)ϵ + o(ϵ)≥ 1.44 ρϵ +o(ϵ) , which concludes the last case, and the proof. §.§ Number of paths in the instance In this part, we verify that every vertex belongs to a superpolynomial number of paths. For any vertex v∈ G_n,ℓ^(ρ), there are at least n^Ω(log(ℓ)) directed paths starting or ending at v. The proof is easy by noticing that the minimum out-degree δ^+_min of a vertex which is not a sink satisfies δ^+_min=min_0≤ i≤ℓ-1δ_i^+≥ρ mϵρ m . Similarly, the minimum in-degree of any vertex at distance at least 1/ϵ from the source is equal to δ^-_min=min_1/ϵ≤ i≤ℓ-1δ^-_i≥ρ mϵρ m . By a standard expansion of the entropy function, we get that log_2(δ^+_min)/m ≥ h(ϵ)+O(log m / m) ≥Ω(ϵlog(1/ϵ))+O(ϵ)+O(log m/ m) = Ω(ϵlog(1/ϵ)) , and the same for δ^-_min. Hence if v is at distance at least 1/ϵ from the source, there at least 2^1/ϵΩ(m ϵlog(1/ϵ)=n^Ω( log(ℓ)) directed paths of length Ω(ℓ) ending at v. Otherwise, v is at distance at least Ω(ℓ) from the sinks and we conclude similarly with our bound on δ^+_min. §.§ Ruling out integral solutions This last part is the most tricky of this whole section, and also the most important. In the following lemma, our instance of depth 3 is the case where ℓ=3 and ϵ=1. There exists some absolute constant ξ>0 such that for any ρ<ξ, if T is an α-approximate integral solution in the constructed instance G_n,ℓ^(ρ), then α≥ n^Ω(1/ℓ). Consider an integral solution T with out-degree α· k_u at every vertex u. In the whole proof, we fix a vertex v∈ L_1/ϵ, and we define by T_v the subtree of T rooted at a vertex v. By construction, if v∈ L_1/ϵ, then the subtree T_v needs to contain at least (α)^2/ϵ·(1-ρ)mρ m sinks. Similarly, for any vertex u∈ L_2/ϵ contained in T, the subtree T_u must contain (α)^1/ϵ·2ρ mρ m sinks. Otherwise, the solution T cannot be α-approximate. We remind the reader that here S_v refers to the set corresponding to vertex v. We fix some parameter θ=ρ/3, so that ρ^2<θ<ρ/2 (which is true for ρ small enough). Let us partition the set of sinks L_3 into two sets. T_1^(v) is the set of sinks whose corresponding set S' satisfies |S'∩ S_v|≥θ m, and T_2:=L_3/ϵ∖ T_1^(v) is the rest of the sinks. Let us do some basic counting. We have that |T_1^(v)| = {S'⊆𝒰| |S'|=ρ m and |S'∩ S_v|≥θ m} =∑_j=θ m^ρ m|S_v| j|𝒰∖ S_v|ρ m-j =∑_j=θ m^ρ mρ m jm-ρ mρ m-j≤ m^O(1)ρ mθ mm-ρ mρ m-θ m , where the inequality uses Lemma <ref> with the fact that θ > ρ^2. Hence log_2(|T_1^(v)|)/m ≤ρ h(θ/ρ) + (1-ρ)h(ρ-θ/1-ρ)+o(1) = ρ h(1/3)+(1-ρ)h((2/3) ·ρ/1-ρ)+o(1) . Second, for any vertex u∈ L_2/ϵ which is connected to v (i.e. there exists a directed path from v to u), we count how many sinks in T_2 u can be connected to. Let us denote by |T^(u)_2| this number. We recall that u corresponds to a set S_u of size 2ρ m, and if u is connected to v, we must have S_v⊆ S_u. |T^(u)_2| ={S⊆𝒰| |S|=ρ m and S⊆ S_u and |S∩ S_v|< θ m} =∑_j=0^θ m|S_v| j|S_u∖ S_v| ρ m-j = ∑_j=0^θ mρ m jρ m ρ m-j≤ m^O(1)ρ mθ mρ mρ m-θ m=m^O(1)ρ mθ m^2 , where the inequality uses the fact that θ <ρ / 2 with Lemma <ref>. Hence, for any u connected to v, we have log_2(|T^(u)_2|)/m = 2ρ h(θ/ρ)+o(1) = 2ρ h(1/3)+o(1) . 
Note that in the integral solution T, there must be some vertex u∈ T_v∩ L_2/ϵ which is connected to less than |T_1^(v)|/(α)^1/ϵ(1-ρ)mρ m2ρ mρ m^-1 sinks in |T_1^(v)|, because there are (α)^1/ϵ(1-ρ)mρ m2ρ mρ m^-1 vertices in T_v∩ L_2/ϵ which have to share the set of sinks T_1^(v). This vertex u can therefore be connected to at most |T_1^(v)|/(α)^1/ϵ(1-ρ)mρ m2ρ mρ m^-1+ |T^(u)_2| sinks in total in T. Therefore, for the solution T to be α-approximate, it must be that |T_1^(v)|/(α)^1/ϵ(1-ρ)mρ m2ρ mρ m^-1+ |T^(u)_2| ≥ (α)^1/ϵ·2ρ mρ m |T_1^(v)|/(α^2/ϵ) (1-ρ)mρ m+ |T^(u)_2|/(α)^1/ϵ·2ρ mρ m≥ 1 max(|T_1^(v)|/(α^2/ϵ) (1-ρ)mρ m, |T^(u)_2|/α^1/ϵ2ρ mρ m)≥ 1/2 . Hence, it must be that either log_2(|T^(u)_2|)/m ≥log_2(α)/(ϵ m)+ log_22ρ mρ m/m -1/m =log_2(α)/(ϵ m)+2ρ h(1/2)+O(log(m)/m) =log_2(α)/(ϵ m)+2ρ +O(log(m)/m) , or log_2(|T_1^(v)|)/m ≥ 2log_2(α)/(ϵ m)+log_2(1-ρ) mρ m/m-1/m =2log_2(α)/(ϵ m)+(1-ρ)h(ρ/1-ρ)+O(log(m)/m) . We remark that h(1/3)<1 hence Equation (<ref>) contradicts Equation (<ref>) unless we have that log_2(α)≤ -Ω(ϵ m). Finally, we claim that the function g:ρ↦ (1-ρ)h(ρ/1-ρ)-ρ h(1/3)-(1-ρ)h((2/3) ·ρ/1-ρ) satisfies that g(ρ)>0 for ρ<ξ (with ξ>0 a small enough constant). This implies that if ρ<ξ, we obtain that Equation (<ref>) contradicts Equation (<ref>), unless we have that log_2(α)≤ -Ω(ϵ m). Therefore, we reach a contradiction in both cases, unless log_2(α)≤ -Ω(ϵ m), which implies that α≤ 2^-Ω(ϵ m)=n^-Ω(ϵ)=n^-Ω(1/ℓ). Let us prove the last claim. We use the standard expansion of the entropy function h(x)= x(1-log(x))/log(2)+O(x^2) at x=0 (log is the natural logarithm) to obtain that g(ρ) =(1-ρ)[ρ1-log(ρ/(1-ρ))/(1-ρ)log (2)- (2/3)ρ1-log((2/3)ρ/(1-ρ))/(1-ρ)log (2)]-ρ h(1/3)+O(ρ^2) =ρ1-log(ρ/(1-ρ))/log (2)- (2/3)ρ1-log((2/3)ρ/(1-ρ))/log (2)-ρ h(1/3)+O(ρ^2) = ρlog(1/ρ)/3log (2)+O(ρ) . Clearly, for ρ>0 small enough, this expression is strictly positive which concludes the proof. § THE PATH HIERARCHY AND LOCALLY GOOD SOLUTIONS The goal of this section is to prove Theorem <ref> and the last property of Theorem <ref> (the other properties being proven in Section <ref>), in order to give evidence towards our main conjecture. For this we will show that Ω(ℓ) levels of the path hierarchy remain feasible on our instance G_n,ℓ^(ρ), with even some added properties which allow to implement the round-and-condition algorithm in a locally consistent way. §.§ Feasible assignment LP solution We start by defining a feasible solution to the assignment LP on our instance G_n,ℓ^(ρ). Recall that γ_i=k_i/δ^+_i is the ratio between required out-degrees and out-degree in graph for the vertices in layer i. We also recall that δ_i^- is the in-degree in the graph of vertices in L_i. For any edge e∈ L_i (i.e. its endpoint is in L_i), we set x_e := γ_i-1·∏_j=1^i-1 (γ_j-1δ_j^-) , with the convention that ∏_j=1^0 (γ_j-1δ_j^-)=1. The covering constraints are easy to check, indeed for any vertex v∈ L_i, we have that ∑_e∈δ^+(v)x_e = (γ_iδ_i^+) ·∏_j=1^i (γ_j-1δ_j^-) , and ∑_e∈δ^-(v)x_e = (γ_i-1δ^-_i)·∏_j=1^i-1 (γ_j-1δ_j^-)=∏_j=1^i (γ_j-1δ_j^-) . The packing constraints are slightly more tricky. We need to compute the quantity ∑_e∈δ^-(v)x_e = ∏_j=1^i (γ_j-1δ_j^-) , for any v∈ L_i. If i≤ 1/ϵ, we have ∏_j=1^i (γ_j-1δ_j^-) =∏_j=1^i[(ϵρ m)!/((ρ m)!·(1-ρ) mρ m)^ϵ·jϵρ mϵρ m]=(iϵρ m)!/(ρ m)!^iϵ(1-ρ) mρ m^iϵ . Let us define x:=iϵ∈ [0,1]. We then have ∏_j=1^i (γ_j-1δ_j^-) ≤(x ρ m)!/(ρ m)!^x≤ 1 . 
If 1/ϵ<i≤ 2/ϵ, then ∏_j=1^i (γ_j-1δ_j^-) = ∏_j=1^1/ϵ (γ_j-1δ_j^-) ·∏_j=1/ϵ+1^i (γ_j-1δ_j^-) ≤(1-ρ) mρ m^-1·2ρ mρ m^1-iϵ·∏_j=1/ϵ+1^i[(ϵρ m)!/((ρ m)!)^ϵ·jϵρ mϵρ m] ≤(1-ρ) mρ m^-1·2ρ mρ m^1-iϵ·(iϵρ m)!/((ρ m)!)^iϵ . Hence, log_2(∏_j=1^i (γ_j-1δ_j^-))/m ≤ -(1-ρ)h(ρ/1-ρ)+(1-x)· (2ρ)+(ρ x)·log_2(x)+O(log m/m) . Clearly, this is a convex function of x, hence we only need to check the endpoints to obtain the maximum on the whole region x∈ [1,2]. One can check that at both endpoints the function is strictly negative, hence ∏_j=1^i (γ_j-1δ_j^-)≤ 1. Finally, for i>2/ϵ, we have ∏_j=1^i (γ_j-1δ_j^-) = (1-ρ) mρ m^-1·∏_j=2/ϵ+1^i[(ϵρ m)!/((ρ m)!)^ϵ·m-4ρ m+jϵρ mϵρ m] = (1-ρ) mρ m^-1·(m-4ρ m+iϵρ m)!/((ρ m)!)^iϵ-2· (m-2ρ m)! . Clearly, the last expression in increasing in i (recall that ρ is a small constant, which implies that m-4ρ m>ρ m). One can conclude by checking the value at i=3/ϵ to obtain a maximum value of (1-ρ) mρ m^-1·(m-ρ m)!/((ρ m)!)· (m-2ρ m)!=1. Hence we can conclude that our choice of x_e variables satisfy the assignment LP. §.§ Feasibility of the path hierarchy We recall here the path hierarchy. Recall that we keep only variables and constraints for paths of length at most t+1, where t is the number of rounds. Recall that e_0 is a dummy edge ending at the source vertex (and by convention we assume that x_e_0=1 where x is the assignment LP solution). ∑_q∈ C(p) y(q) = k_p · y(p) ∀ p∈ P ∑_q∈ I(v)∩ D(p) y(q) ≤ y(p) ∀ p∈ P,v∈ V ∑_e∈δ^-(v)y({e}) ≤ 1 ∀ v∈ V ∑_e∈δ^+(v)y({e}) ≥ k_v·∑_e∈δ^-(v)y({e}) ∀ v∈ V y(q) ≤ y(p) ∀ p,q∈ P, p⊆ q y({e_0}) = 1 0≤ y ≤ 1 To get intuition on how we set the variables, one can imagine that we iterate the shadow distribution t times as explained in introduction. Formally, we imagine that we sample a first set of shadow edges S^(1). Each edge e∈ S^(1) selects a subtree which contains each edge e' independently with probability x_e'^(e). This gives us a second set of shadows S^(2). Now the set S^(2) triggers a set S^(3) in the same way, and so on until reaching the set S^(t+1) which will be our final set. We do not know how to analyze this procedure precisely, however the probability that a path p=(e_1,e_2,… ,e_m) of length at most t+1 appears in this process seems to be dominated by the probability of the event ℰ that e_1∈ S_1, then e_2 is selected in the subtree of e_1, then e_3 is selected in the subtree of e_2, and so on. One can notice that the most reasonable way to define subtree solutions[It is possible to prove the existence of subtree solutions for our lifted instances of any depth, however, for better readability, we did not include this result which is not needed in our proof.] x^(e) for each edge e in a similar way as in Section <ref> would imply that the probability of this event is given by ℙ[ℰ] = x_e_1·∏_j=2^mx_e_j^(e_j-1) = x_e_1·∏_j=2^mγ_e_j , where γ_e=γ_i-1 for any edge e∈ L_i. This is exactly how we set our variables. For a directed path p=(e_1,e_2,… ,e_m) of length at most t+1 where e_1∈ L_i, e_2∈ L_i+1, etc, we set y(p):=x_e_1·∏_j=i^i+m-1γ_j , with the convention that the empty product is equal to 1. There exists some absolute constant ξ>0, such that for t=ξ·ℓ, the solution (y(p))_|p|≤ t defined in Equation (<ref>) is feasible for the (t-1)^th level of the path hierarchy on instance G_n,ℓ^(ρ). To prove this lemma, most of the constraints are trivial to check except for the lifted packing constraint (Constraint (<ref>)), which requires some involved calculations. For clarity, we defer the proof of the following helper lemma to Appendix <ref>. 
Then we prove Lemma <ref>. There exists some absolute constant ξ>0, such that for t=ξ·ℓ, and any vertices v∈ L_i, and u∈ L_i≤ j≤ i+t, the number of directed paths from v to u in G_n,ℓ^(ρ) is at most 1/∏_k=i^j-1γ_k . Most of the constraints are trivial to check. First, note that for a single edge (i.e. a path of length 1), we have y({e})=x_e so Constraints (<ref>) and (<ref>) are clearly satisfied. For the consistency constraint (Constraint (<ref>)) for some p=(e_1,e_2,… ,e_m)⊆ q=(e'_1,… , e'_m',p,e”_1,… , e”_m”), where we denote by i' the layer to which e'_1 belongs, and i the layer to which e_1 belongs, we have y(q)/y(p) ≤x_e'_1· (∏_j=2^m'γ_e'_j) ·γ_e_1/x_e_1 = (∏_j=1^i'-1γ_j-1δ_j^-) (∏_j=i'^iγ_j-1)/γ_i-1∏_j=1^i-1 (γ_j-1δ_j^-) =1/∏_j=i'^i-1δ_j^-≤ 1 . The lifted covering constraints (Constraint (<ref>)) are also easy to check. Denote by v∈ L_i the endpoint of p. Then, we have ∑_q∈ C(p) y(q) = y(p)·∑_e∈δ^+(v)γ_i = (γ_iδ^+_i)· y(p)=k_p· y(p) . Finally, Constraint (<ref>) is implied by Lemma <ref> in a straightforward way. Indeed, note that for any path p=(s,… ,u) and any vertex v, the variable corresponding to a path q∈ D(p)∩ I(v) is equal to the variable y(p), multiplied by one γ_k for each additional laye k from u∈ L_i to v∈ L_j (hence an additional multiplicative factor of ∏_k=i^j-1γ_k). But by Lemma <ref>, the number of such paths is less than 1/∏_k=i^j-1γ_k , which cancels out exactly with the multiplicative factor. Note that we used the assumption that our paths have length at most t to be able to apply Lemma <ref>. §.§ Locally good solutions Here, we describe the algorithm that we use to obtain our locally good solutions on our instance. The arguments here are rather standard, see e.g. <cit.>. In the following, it is useful to think of the output as a set P' of directed paths, where each path P' starts at the source. The algorithm proceeds layer by layer. * Sample each edge in e∈δ^+(s) independently with probability γ_0. For each edge e sampled, add the path {e} to P'. * For j from 1 to ℓ, for each path p=(s,…, v) from the source to v∈ L_j which was selected in P', select each edge e∈δ^+(v) independently with probability equal to γ_i and add the path p∘{e} to P' (where ∘ is the concatenation operator). We claim the following lemma regarding this algorithm. Let P' the set of paths sampled by the algorithm. Then we have the following. With high probability, the set P' is a (ξℓ)-locally good solutions, i.e. the following constraints hold ∑_q∈ C(p) y(q) ≥Ω(k_p) · y(p) ∀ p∈ P' ∑_q∈ I(v)∩ D(p) y(q) ≤ (log n)^O(1)· y(p) ∀ p∈ P,v∈ D(p,ξℓ-|p|) where D(p,t) is the set of vertices for which there exists a directed path of length at most t from the endpoint of p to v, and k_p=k_v where v is the last vertex of p (with the convention that k_v=0 if v is a sink). First for Constraint (<ref>), we note that clearly whenever a path p=(s,…, v) with v∈ L_i is selected in our solution, the algorithm selects in expectation (γ_iδ^+_i) paths q∈ C(p) in expectation, and in an independent manner. Hence in expectation, the path p receives at least k_p children. Now recall that by Lemma <ref>, k_p=Ω() for all p, hence we can apply Chernoff bound, and with high probability, each path p receives say k_p/2 children. For Constraint (<ref>), this is more tricky. We proceed by induction layer by layer. We fix a path p=(s,…, u) with u∈ L_i and some v∈ L_j≥ i. Clearly, conditioned on selecting p, each path q∈ D(p)∩ I(v) is selected with probability 1/∏_k=i^j-1γ_k . 
By Lemma <ref>, this implies that in expectation the constraint is satisfied. However, the paths do not appear independently of each other so we need to be more careful. By induction we prove the following. Let X be the number of paths q∈ D(p)∩ I(v) selected in the solution. Then for any t≤ j-i, we have with high probability that 𝔼_[t+1,… ,j-i][X|𝒫_t]≤log^10 (n)·(1+1/log n)^t , where 𝒫_t is the outcome of the sampling algorithm at layer i+t and the randomness in the expectation is the outcome of the rouding in layers i+1,… t. We argued that this was true for t=0. Assume this is true for some t, let us now prove it for t+1. For each path q' selected at layer i+t, we will sample children paths in layer i+t+1 with the adequate probability so clearly we have that 𝔼_t+1[𝔼_[t+2,…, j-i][X|𝒫_t+1]] = 𝔼_[t+1,… ,j-i][X|𝒫_t] = log^10 (n)·(1+1/log n)^t , where 𝔼_t+1 means that we take the expectation on the random choices made in layer i+t+1. Second, by Constraint (<ref>) and Lemma <ref>, we have that, for each path q' selected in layer i+t+1, the expected number of paths q”∈ D(q')∩ I(v) that will be selected in subsequent layers is at most 1. Hence the random variable 𝔼_[t+2,…, j-i][X|𝒫_t+1] can be written as a sum of independent random variables of absolute value at most 1, and of expectation at most log^10 (n)·(1+1/log n)^t. By Chernoff bounds, with high probability this random variable is no more than log^10 (n)·(1+1/log n)^t+1, which proves the inductive step. We can conclude that, with high probability, the number of paths q∈ D(p)∩ I(v) selected is at most log^10(n)· (1+1/log n)^ℓ≤log^11(n). § FUTURE DIRECTIONS We conclude by listing a few interesting directions which, in our opinion, may have been underexplored so far. We start by our main conjecture on the Sherali-Adams hierarchy, but some questions have a more distant relation to our work. * Does our instance of depth ℓ survive Ω(ℓ) rounds of the Sherali-Adams hierarchy? Another interesting direction would be to simplify our construction. * Does our instance of depth ℓ survive Ω(ℓ) rounds of stronger hierarchies such as the Lasserre hierarchy? A good starting point could be to understand if our instance of depth 3 survives 1 round of the Lasserre hierarchy. * Is there a formal relationship between the Directed Steiner Tree problem and the MMDA problem? It has been known for a long time that the DST problem contains the set cover problem as a special case, and recently noticed by the authors of <cit.> that MMDA contains the max-k-cover problem as a special case. It is also well-known that there is a reduction from set cover to max-k-cover, which shows that a constant-factor approximation for max-k-cover implies an O(log n)-approximation for set cover. It is tempting to believe that there is also such a reduction between the MMDA problem and the DST problem. * Is there an Exponential-Time-Hypothesis-based hardness for the MMDA problem? The reason to ask this question is that this kind of labeling scheme based on a set system has been used in the past for ETH-based reductions for related problems such as Densest-k-subgraph (see e.g. <cit.>). This is a very vague connection, but an ETH-based hardness could be consistent with a separation between polynomial-time and quasi-polytime algorithms for Santa Claus, and could explain the lack of progress on this question. Note that a super-constant hardness for Santa Claus immediately implies a hardness to improve on the 2-approximation for makespan scheduling (see <cit.>). 
* Is there a constant-factor approximation in quasi-polynomial time for the MMDA problem? The closest that we have until now is a polyloglog(n)-approximation in time n^O(log n) for the MMDA problem, and only in the case where k_u=k for all u <cit.>. Even in layered graphs we do not know any better than this. § ACKNOWLEDGMENTS I am very thankful to Lars Rohwedder for many discussions on this problem, as well as for allowing me to include Theorem <ref> and Theorem <ref>. alpha § RESTRICTED ASSIGNMENT Restricted assignment is a special case of the Santa Claus problem where each resource j has a fixed value v_j independent of the players. On the other hand, each player i has a set of eligible resources R(i)⊆ R and only these can be assigned to i. This is equivalent to requiring that v_ij∈{0, v_j} for all i,j. The assignment LP for restricted assignment is defined as ∑_j∈ R(i) v_j x_ij ≥ T ∀ i∈ P ∑_i∈ P : j∈ R(i) x_ij ≤ 1 ∀ j∈ R x_ij ∈ [0, 1] ∀ i∈ P, j∈ R(i) The restricted assignment case has been studied extensively and is fairly well understood. Regarding Sherali-Adams, it is interesting in two ways: we show that even after a linear number of rounds of Sherali-Adams the integrality gap remains unbounded. Fortunately, there is a simple way to resolve this issue. We consider canonical instances that we will introduce later, which arise from an approximation preserving simplification to the structure of the problem, and have been used in previous works as well. We show that on canonical instances already one round of Sherali-Adams yields a constant integrality gap. This is by proving that it is at least as strong as another known linear programming relaxation. §.§ Lower bound for linearly many rounds Let 0 < ϵ≤ 1/3 and k∈ℕ with ϵ k ∈ℕ. Consider the following instance of restricted assignment. There are k+1 players indexed by 1,2,…,k+1 and k+1 small resources s_1,s_2,…,s_k+1 with v_s_j = 3ϵ for all j. Further, there are k big resources b_1,b_2,…,b_k with v_b_j = 1 for each j. Each player has access to one small resource and all big resources, that is, R(i) = {s_i, b_1,b_2,…,b_k} for each i. The integral optimum of this instance is clearly 3ϵ, since not every player can receive a big resource and the one that does not can only get a value of 3ϵ. We will now show that there is a solution for SA(r) with value 1 even for r = ϵ k. We will use Theorem <ref> to argue about the feasibility of the Sherali-Adams hierarchy. To this end, consider the following probability distribution. We select a matching of the k big resources to k of the players uniformly at random. The small resources are deterministically assigned to their corresponding players. It is clear that this is a distribution over valid assignments of resources. Hence, we only need to verify that for each player the expected value of resources assigned to it is at least 1, even after conditioning on ϵ k variables being either 0 or 1. We can assume without loss of generality that all these variables are conditioned to be 1: in every realization every resource is assigned to exactly one player. Thus, if we condition on a resource not to be assigned to a particular player, then we get a probability distribution over solutions where this resource is assigned to another player. Conditioning on this will not increase the number of variables we condition on, but refine the distribution further. This means that there are ϵ k players that receive a specific big resource. 
The remaining (1 - ϵ)k big resources are still uniformly assigned to the remaining 1 + (1 - ϵ)k players. Let i be a player. If i is one of the players that are guaranteed to receive a big resource, then clearly the expected value for i is at least 1. Otherwise, there is a probability of (1 - ϵ)k / (1 + (1 - ϵ) k) that i receives a big resource. Additionally, the player is guaranteed to receive a small job of value 3ϵ. Thus, in expectation the value received is at least (1 - ϵ)k/1 + (1 - ϵ)k + 3 ϵ≥(1 - ϵ)k + 3ϵ (1 - ϵ)k/1 + (1 - ϵ)k≥ 1 . §.§ Canonical restricted assignment Suppose we want to solve the following variant, which is equivalent to an α-approximation algorithm: given some T ≥ 0 either determine that < T or find a solution of value α T. Let B⊆ R be the resources j with v_j ≥α T and let S = R∖ B. Via a simple transformation we can assume that each player either only has access to big resources, or to at most one big resource and otherwise to small resources. Further, v_j = T for all j∈ B. We call such an instance an α-canonical instance. Similar simplifications are standard for the problem, see e.g. <cit.>. For the transformation we introduce two players and a coupling resource of value T for each original player. We give both players access to the coupling resource and we connect one player to the small resources the original player had access to and one to the big resources. Further, we increase the value of each big resource to T. The transformation satisfies that if there is a solution of value T for the original instance, there is also one solution for the canonical instance. Further, if there is a solution of value α T for the canonical instance, there is one for the original instance. In that sense, the transformation is approximation preserving. We will now show that for α = O(1), an α-normalized instance has an integrality gap of α already after a single round of Sherali-Adams. Our proof is by showing that SA(1) is at least as strong as an LP described by Davies et al. <cit.>, who prove that the following linear program has an integrality gap of at most 4: ∑_j∈ R(i) v_j x_ij ≥ T ∀ i∈ P ∑_i∈ P : j∈ R(i) x_ij ≤ 1 ∀ j∈ R x_is ≤ 1 - ∑_j∈ B ∩ R(i) x_ij ∀ i∈ P ∀ s∈ S ∩ R(i) x_ij ≥ 0 ∀ i∈ P, j∈ R(i) Consider now a canonical instance with a solution y∈SA(1). We will define a solution x for the above linear program from this. Let i be a player. We set x_ij = y_ij for all i∈ P and big resources j∈ R(i) ∩ B. Let i be a player that has access to only a single big resource b, then we set x_ij = (1 - y_i b) * y_ij = y_ij - y_{ij, ib} for all j∈ R(i) ∩ S. To verify that this is a feasible solution to the linear program above, we start with the first constraints. Let i∈ P. If i has only access to big resources, then ∑_j∈ R(i) v_j x_ij = ∑_j∈ R(i) v_j y_ij≥ T. If i has access to small resources and a single big resource b, then use that ∑_j∈ S ∩ R(i) v_j (y_ij - y_{ij, i b}) = (1 - y_i b) * ∑_j∈ R(i) v_j y_ij ≥ (1 - y_i b) * T = T - T · y_i b . Thus, ∑_j∈ R(i) v_j x_ij = T · y_ib + ∑_j∈ S ∩ R(i) v_j (y_ij - y_{ij, ib}) ≥ T . If i does not have access to small resources, then it follows immediately that ∑_j∈ R(i) v_j x_ij = ∑_j∈ R(i) v_j y_ij≥ T . Since x_ij≤ y_ij for all i,j it also holds that ∑_i∈ P : j∈ R(i) x_ij≤∑_i∈ P : j∈ R(i) y_ij≤ 1 for all j∈ R. Finally, for some player i that has only access to single big resource b we have (1 - x_i b) * x_is≤ (1 - x_i b) * 1 for all s∈ S∩ R(i). Hence, x_is = y_is - y_{is, ib}≤ 1 - y_i b = 1 - x_i b = 1 - ∑_j∈ B∩ R(i) x_ij . 
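The player-splitting transformation described at the beginning of this subsection is easy to phrase as code. The following Python sketch reflects our reading of it (the data layout, dictionaries for values and accessibility, is a choice made here purely for illustration): every original player is replaced by a "small" copy and a "big" copy sharing a fresh coupling resource of value T, and all big resources are raised to value T.

def make_canonical(players, resources, value, access, T, alpha):
    # value[j]  : value of resource j
    # access[i] : set of resources player i may receive (restricted assignment)
    big = {j for j in resources if value[j] >= alpha * T}
    new_value = {j: (T if j in big else value[j]) for j in resources}    # big resources raised to T
    new_players, new_access = [], {}
    for i in players:
        c = ("coupling", i)                      # fresh coupling resource of value T
        new_value[c] = T
        p_small, p_big = ("small", i), ("big", i)
        new_players += [p_small, p_big]
        new_access[p_small] = {c} | {j for j in access[i] if j not in big}   # small resources + coupling
        new_access[p_big] = {c} | {j for j in access[i] if j in big}         # big resources + coupling
    return new_players, new_value, new_access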
§ THE EXISTENCE OF SUBTREE SOLUTIONS IS NOT ENOUGH TO FOOL SHERALI-ADAMS Here, we show that there exists an instance of the MMDA problem which has a polynomial gap with the naive LP, and contains a subtree solution for every edge, yet one round of Sherali-Adams has a gap of at most O(). For this, consider the simple example given in Figure <ref>. This instance has O(k^3) edges and O(k^2) vertices. One can check that there exists a feasible assignment LP solution: simply set x_e=1/k on orange edges leaving the source, and x_e=1 on all green edges between a vertex in L_1 and its private sink. Second, there exists a subtree solution for any edge e': for any edge e'∈ L_1 ending at vertex v, we set x_e^(e')=1 on all blue edges connecting v to the k public sinks. Clearly, no integral solution can give more than √(k)+1 out-degree to all its vertices, since each vertex in L_1 has only one private sink, and has to share only k public sinks with all other vertices. Hence the gap with the naive LP is k^Ω(1)=n^Ω(1). However, one round of Sherali-Adams is not fooled by this since this instance has only depth 2. Also note that if we were to consider the shadow distribution (one round) induced by these subtree solutions, then for any blue edge e=(u,v) between a vertex in L_1 and a public sink, we would have that s_e = x_(s,u)· x_e^(u,v) = 1/k , but x_e=0. Hence the key property that x_e≤ s_e≤ O(x_e) is lost. § PROOF OF LEMMA <REF> Recall that we want to prove the following statement. There exists some absolute constant ξ>0, such that for t=ξ·ℓ, and any vertices v∈ L_i, and u∈ L_i≤ j≤ i+t, the number of directed paths from v to u in G_n,ℓ^(ρ) is at most 1/∏_k=i^j-1γ_k . We proceed by a case analysis. Let v∈ L_i,u∈ L_j, with the property that there exists at least one directed path from v to u. Case 1: If u,v∈ L_≤ 2/ϵ. This case is in fact quite easy. It must be that S_u⊆ S_v, and the number of paths is exactly equal to the number of ordering |S_v|-|S_u| elements into (|S_v|-|S_u|)/(ϵρ m) buckets of size ϵρ m (the order inside each bucket is not counted). Hence the number of paths is equal to (|S_v|-|S_u|)!/((ϵρ m)!)^(|S_v|-|S_u|)/(ϵρ m)=((j-i)ϵρ m)!/(ϵρ m)!^j-i . Also note that for all 0≤ k< 3/ϵ 1/γ_k≥(ρ m)!^ϵ/(ϵρ m)! , hence 1/∏_k=i^j-1γ_k≥(((ρ m)!)^ϵ/(ϵρ m)!)^j-i . Hence the number of paths, divided by 1/∏_k=i^j-1γ_k is at most ((j-i)ϵρ m)!/((ρ m)!)^(j-i)ϵ , which is less than 1 if j-i≤ 1/ϵ=Ω(ℓ). Case 2: If u,v∈ L_≥ 2/ϵ. The calculations from the previous case also apply here. The roles of u and v are simply reversed. Case 3: If v∈ L_≤ 2/ϵ,u∈ L_≥ 2/ϵ. This is the tricky case. We assume that v∈ L_≥ 1/ϵ for simplicity (note that we only need the lemma to hold for any |j-i|≥ξℓ for ξ>0 some small constant, so this is wlog). To compute p_v,u (the number of paths from v to u), we first see that there are exactly m-|S_u∪ S_v| 2ρ m-|S_u∪ S_v| vertices w∈ L_2/ϵ which are reachable from v, and from which we can reach u (here S_u is the set corresponding to vertex u, and S_v the set corresponding to v). Indeed, there is one such vertex for each set of size 2ρ m containing both S_u and S_v as a subset. Hence, to obtain the value of p_v,u, it suffices to fix one such vertex w, and then count how many ways there are to go from v to w, and from w to u. Using the calculations from the previous cases, we obtain that p_v,u = m-|S_u∪ S_v| 2ρ m-|S_u∪ S_v|·((j-2/ϵ)ϵρ m)!((2/ϵ-i)ϵρ m)!/(ϵρ m)!^(j-i) . For fixed i and j, we argue that this expression is decreasing as |S_u∪ S_v| increases. 
Formally, if we define the function θ(x):=m-x 2ρ m-x , then θ(x+1)/θ(x)=(2ρ m-x)/m-x≤ 1 , for all 0≤ x≤ 2ρ m (recall that ρ is a small constant). Hence the worst-case is when |S_u∪ S_v| is as small as possible. We have then 2 subcases. Subcase 3.1: |S_u|≥ |S_v|. This is the case when j-2/ϵ≤ 2/ϵ - i. In that case, we use that |S_u∪ S_v|≥ |S_u| = 4ρ m-jϵρ m , and we write p_v,u ≤m-4ρ m+jϵρ m jϵρ m-2ρ m·((j-2/ϵ)ϵρ m)!((2/ϵ-i)ϵρ m)!/(ϵρ m)!^(j-i) . Also note that ∏_k=i^j-1γ_k = (∏_k=i^2/ϵ-1(ϵρ m)!/(ρ m)!^ϵ·2ρ mρ m^ϵ)·(∏_k=2/ϵ^j-1(ϵρ m)!/(ρ m)!^ϵ) . Therefore, using Stirling's formula, we get p_v,u/∏_k=i^j-11/γ_k ≤m-4ρ m+jϵρ m jϵρ m-2ρ m·((j-2/ϵ)ϵρ m)!((2/ϵ-i)ϵρ m)!/(ϵρ m)!^(j-i) ·(((ϵρ m)!)^2/ϵ-i/(((ρ m)!)2ρ mρ m)^2-iϵ) ·(((ϵρ m)!)^j-2/ϵ/(((ρ m)!))^jϵ -2) = m-4ρ m+jϵρ m jϵρ m-2ρ m·((j-2/ϵ)ϵρ m)!((2/ϵ-i)ϵρ m)!/((ρ m)!)^jϵ -iϵ2ρ mρ m^2-iϵ ≤ m^O(1)·m-4ρ m+jϵρ m jϵρ m-2ρ m/2ρ mρ m^2-iϵ· (jϵ - 2)^(jϵ -2)ρ m· (2-iϵ)^(2-iϵ)ρ m Let us define x:=iϵ and y:=jϵ. Recall that in the subcase we are working in, we have 1≤ x≤ 2, 2≤ y≤ 3, and y≤ 4-x. We also recall that h(x):=-xlog_2(x)-(1-x)log_2(1-x) is the entropy function of the Bernoulli distribution with parameter x. Then we get that log_2 ( p_v,u/∏_k=i^j-11/γ_k)/m -O(log (m)/m) ≤ (1-4ρ +yρ)h(ρ (y-2)/1-4ρ +yρ)-(2-x)2ρ h(1/2)+ρ((y-2)log_2(y-2)+(2-x)log_2(2-x)) = (1-4ρ +yρ)h(ρ (y-2)/1-4ρ +yρ)-(2-x)2ρ+ρ((y-2)log_2(y-2)+(2-x)log_2(2-x)) = (2-x)ρ(log_2(2-x)-2)+(1-4ρ +yρ)h(ρ (y-2)/1-4ρ +yρ)+ρ(y-2)log_2(y-2) . On can compute the partial derivative of the above function in variable x and obtain the following derivative ρ(2-1/log(2)(1+log(2-x))) , where log is the natural logarithm. This expression is positive as long as x>2-exp(2log(2)-1)≈ 0.53. Hence the above expression is maximized for x as big as possible. Let us further impose the constraint that x>2-δ_1 for some small δ_1>0 which will be fixed later. Then our constraints become that 2-δ_1 ≤ x≤ 2, 2≤ y≤ 4-x <2+δ_1. Hence we replace x=4-y (which is the biggest value possible for x given our constraints) and obtain that log_2 ( p_v,u/∏_k=i^j-11/γ_k)/m-O(log(m)/m) ≤max_2≤ y≤ 2+δ_1 (y-2)ρ(log_2(y-2)-2)+(1-4ρ +yρ)h(ρ (y-2)/1-4ρ +yρ)+ρ(y-2)log_2(y-2) = max_2≤ y≤ 2+δ_1 2(y-2)ρ(log_2(y-2)-1)+(1-4ρ +yρ)h(ρ (y-2)/1-4ρ +yρ) =: max_2≤ y≤ 2+δ_1 f_1(y) . One can compute lim_y→ 2^+f_1(y)=0 and that f_1'(y) =-ρ/log(2)(log(4)-2+log(ρ(y-2)/1+ρ(y-4))-2log(y-2)) = -ρ/log(2)[log(4)-2+log(ρ/(1+ρ(y-4))(y-2))] , which is negative for y close to 2 (we even have lim_y→ 2^+f_1'(y)=-∞). Hence by continuity, there exists a small δ_1>0 (δ_1 depends only on ρ) such that max_2≤ y≤ 2+δ_1f_1(y)=f_1(2)=0. The conclusion of Subcase 3.1 is that as long as 2-δ_1 ≤ x≤ 2, 2≤ y≤ 4-x, we obtain that p_v,u/∏_k=i^j-11/γ_k≤ 1 . Subcase 3.2: |S_u|≤ |S_v|. This is the case when j-2/ϵ≥ 2/ϵ - i. In that case, we use that |S_u∪ S_v|≥ |S_v| = iϵρ m . Here, the same calculations as in Subcase 3.1 apply, except that we replace the binomial coefficient m-4ρ m+jϵρ m jϵρ m-2ρ m by the binomial coefficient m-iϵρ m 2ρ m-iϵρ m . We obtain similarly that p_v,u/∏_k=i^j-11/γ_k ≤ m^O(1)·m-iϵρ m 2ρ m-iϵρ m/2ρ mρ m^2-iϵ· (jϵ - 2)^(jϵ -2)ρ m· (2-iϵ)^(2-iϵ)ρ m , and we write log_2 ( p_v,u/∏_k=i^j-11/γ_k)/m -O(log (m)/m) ≤ -2ρ (2-x)+(1-ρ x)h(ρ(2-x)/1-ρ x) + ρ(y-2)log_2(y-2) + ρ (2-x)log_2(2-x) =(2-x)ρ (log_2(2-x)-2)+(1-ρ x)h(ρ(2-x)/1-ρ x) + ρ(y-2)log_2(y-2):=g(x,y) . We compute ∂ g/∂ y = (ρ/log(2))· (1+log(y-2)) hence ∂ g/∂ y≥ 0 y≥ 2+1/e . Recall that in this case, we have the constraints that 1≤ x≤ 2, 2≤ y≤ 3, and y≥ 4-x. 
Let us assume that y≤ 2+δ_2 for some small δ_2<1/e (this is wlog to prove our lemma, which needs to hold only for a small ξ>0). Hence, we need to evaluate the function g(x,y) for y=4-x to obtain an upperbound. We have f_2(x):=g(x,4-x) = (2-x)(2ρ) (log_2(2-x)-1)+(1-ρ x)h(ρ(2-x)/1-ρ x) One can compute lim_x→ 2^-f_2(x)=0 and f_2'(x)=ρ/log (2)·[log(4)-2+ log(ρ/(1-ρ x )(2-x))] which is non-negative in some small interval [δ_2,2) (we even have lim_x→ 2^-f_2'(x)=∞). By continuity, there exists a small δ_2>0 such that max_2-δ_2<x≤ 2f_2(x)=0. Wrapping-up Case 3. If we select ξ=min(δ_1,δ_2), we obtain that our upper-bound on the number of paths between v and u holds for any v∈ L_i≤ 2/ϵ, any u∈ L_j≥ 2/ϵ such that |i-j|≤ξℓ/3. This concludes the proof of Lemma <ref>.
http://arxiv.org/abs/2406.17664v1
20240625155311
Magnetic Force Microscopy: High Quality Factor Two-Pass Mode
[ "Christopher Habenschaden", "Sibylle Sievers", "Alexander Klasen", "Andrea Cerreta", "Hans Werner Schumacher" ]
cond-mat.mes-hall
[ "cond-mat.mes-hall" ]
xxx Magnetic Force Microscopy: High Quality Factor Two-Pass Mode]Magnetic Force Microscopy: High Quality Factor Two-Pass Mode https://www.ptb.de/cms/en/ptb/fachabteilungen/abt2/fb-25/ag-252.html Physikalisch-Technische Bundesanstalt (PTB), 38116 Braunschweig, Germany Park Systems Europe GmbH, 68199 Mannheim, Germany Physikalisch-Technische Bundesanstalt (PTB), 38116 Braunschweig, Germany § ABSTRACT Magnetic force microscopy (MFM) is a well-established technique in scanning probe microscopy (SPM) that allows the imaging of magnetic samples with spatial resolution of tens of nm and stray fields down to the mT range. Spatial resolution and field sensitivity can be improved significantly by measuring in vacuum conditions. This effect originates from the higher quality factor (Q-factor) of the cantilevers oscillation in vacuum compared to ambient conditions. However, while high Q-factors are desirable as they directly improve the magnetic measurement signal, they pose a challenge when pursuing a standard MFM two-pass (lift) mode measurement. At high Q-factors amplitude-based topography measurements become impossible and MFM phase response behaves non-linear. Here we present an implementation of a modified two-pass mode into a vacuum atomic force microscope (AFM) that overcomes these issues. By controlling Q in the first pass and using a phase-locked loop (PLL) technique in the second pass, high Q-factor measurements in vacuum are enabled. By measuring the cantilevers frequency shift instead of phase shift, otherwise emerging non-linearities are eliminated. The achievable improvements in resolution and sensitivity are demonstrated on patterned magnetic nanostructured samples. Elimination of non-linear response is showcased by a measurement of a very well-known calculable multilayer reference sample that is used for tip calibration in quantitative MFM (qMFM). [ Hans Werner Schumacher July 1, 2024 ========================== § INTRODUCTION Magnetic force microscopy (MFM) is a widely accessible, user-friendly, and common tool for the characterization of materials exhibiting magnetic micro- and nanostructures. It detects the interaction of a microscopic magnetically coated tip on an oscillating cantilever with the sample to map the emanating stray fields. By using well-known reference samples, quantitative measurements are possible.<cit.> Recently, an IEC standard on quantitative MFM measurements under ambient conditions was published.<cit.> Initially, MFM development was boosted by the need of the industry to analyze and characterize magnetic data storage media. <cit.> However, novel magnetic materials that are in the focus of research are becoming increasingly challenging to characterize: Magnetic data storage is evolving, not only by pushing the density of magnetic data to the physical limits <cit.>, but in particular by focusing on new ways of storing data. Concepts for storing data and computing based on nanoscale magnetic objects like domain walls or skyrmions, which are nm-scale topological stable magnetic vortexes, are topic of current research.<cit.> Furthermore, fundamental magnetic material research on multilayers for spintronic applications, vortices or 2D materials is increasingly dealing with very low stray fields and nanoscale structures.<cit.> Consequently, also MFM itself needs to evolve. 
The spatial resolution and field sensitivity of MFM can be significantly enhanced by measuring under vacuum conditions.<cit.> This results from the higher cantilever quality factors in vacuum in dynamic mode, directly leading to an increase in the measurements signal to noise ratio (SNR). However, advanced feedback techniques are required for stable operation in vacuum. MFM measurements are typically performed in a two-pass lift mode, where the tip-sample interaction is monitored in the second, lifted pass via the detection of the phase shift of the cantilever oscillation. In vacuum, due to the high Q-factors, only a small amount of energy per oscillation cycle is dissipated. This makes the oscillation very sensitive to external forces, but at the same time hard to control, as the external forces can overpower the driving force of the oscillation and thus crash the tip.<cit.> While a high sensitivity is desired in the second pass for acquiring the magnetic image, in the first pass, where the tip is brought close to the surface to map the topography, the issue of tip crashing and thus tip damage must be addressed. A way to circumvent these problems is to use bimodal magnetic force microscopy with capacitive tip-sample distance control as described by [Schwenk2015, Zhao2018], that uses an “frequency-modulated capacitive tip-sample distance control mode”.<cit.> This technique ensures that the tip is always lifted and is, in particular, not requiring a first pass that is prone to tip crashing (hence it will be referred to in this work as single-pass mode). Even though this technique is an elegant operation-mode, it is not easy to implement and only suitable for electrical conducting samples that are flat on the nm scale. We here present an implementation of a modified two-pass lift mode into a Park Systems NX-Hivac Atomic Force Microscope that enables measurements in vacuum conditions with high magnetic sensitivity and stable topography detection. While in the first pass the so-called Q-Control is utilized to artificially lower the Q-factor to a degree that the feedback loop can handle, in the second pass (lift mode) the measurements are done using an external lock-in amplifier running a phase-locked loop (PLL) to track the frequency shift of the cantilever oscillation. A simple overview over this new setup is outlined in Fig. <ref>. This technique allows to use the largest achievable Q-factor in the second pass and thus to utilize the maximum possible sensitivity for magnetic stray field measurements. § THEORY The two-pass mode (also called lift mode or interleave mode by some manufacturers) is very well known and regarded as the workhorse of MFM.<cit.> It's basics are explained in a variety of textbooks and articles concerned with the topic.<cit.> We assume therefore that two-pass mode is known to the reader and start the discussion by introducing the Q-factor. From there, the less commonly known Q-Control<cit.> operation is introduced, that shows some downsides for MFM phase-shift measurements in vacuum. §.§ Q-factor The Q-factor, that describes the degree of damping of an oscillating system, plays a central role for the MFM measurement sensitivity. 
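As a numerical illustration of the bandwidth definition, the sketch below builds a synthetic amplitude sweep of a driven, damped oscillator and reads Q back from the width of the resonance. The parameter values are illustrative, and the width is taken between the half-power points (amplitude A_max/√2), the convention commonly used behind Eq. (<ref>); for a sharp resonance the distinction from the literal half-maximum of the amplitude only changes the width by a constant factor.

```python
import numpy as np

# synthetic amplitude sweep of a driven, damped oscillator (illustrative values)
f0, Q_true = 70e3, 9000.0
f = np.linspace(f0 - 60.0, f0 + 60.0, 20001)                     # Hz
A = 1.0 / np.sqrt((f0**2 - f**2)**2 + (f0 * f / Q_true)**2)      # amplitude up to a constant prefactor

# bandwidth definition: resonance width between the half-power points (A = A_max / sqrt(2))
inside = f[A >= A.max() / np.sqrt(2.0)]
width = inside.max() - inside.min()
print(f"estimated Q = f0 / width = {f[np.argmax(A)] / width:.0f}   (true Q = {Q_true:.0f})")
```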
The Q-factor can be described in terms of the stored energy definition as the ratio of the energy stored in the oscillation to energy dissipated per oscillation cycle.<cit.> In the case of atomic force microscopy, vacuum conditions lead to higher Q-factors since the density of gaseous particles decreases, reducing collisions with the oscillating cantilever (effectively reducing friction). Thus, less energy is dissipated, and the Q-factor rises. For high Q-factors, Q can equivalently be described by the bandwidth definition: Q =f_0/Δ f_FWHM with resonance frequency f_0 and resonance full width at half maximum (FWHM) Δ f_FWHM. Using the latter definition, the Q-factor can be easily derived from the non-contact frequency sweep data, an example is shown as in Fig. <ref>. With rising Q-factors, the width of the resonance peak is reduced, yielding the response of the resonantly oscillating cantilever more susceptible to external forces which increases sensitivity. Commercial MFM cantilevers usually reach Q-factors of 200 in ambient conditions, whereas in vacuum Q-factors up to 20 000 are possible. Using specially manufactured vacuum cantilevers even higher Q-factors up to 200 000 are achievable.<cit.> In the case of high Q-factors (> 2000), the oscillation is only weakly damped, and the amplitude becomes increasingly hard to stabilize against parameter changes, as can be seen in the frequency sweep in Fig. <ref>. As only very little energy is dissipated per cycle, transient processes emerge. Consequently, the cantilever will keep its frequency, despite the driving frequency already moving on (this effect is known as ringing, or also as transient, requiring some settling time for the system to return into the steady state of harmonic oscillation). Q-factors can be artificially damped in vacuum conditions to avoid this issue by means of the so-called Q-control mechanism discussed in Chap. <ref>. §.§ Signal generation in MFM In dynamic mode, the cantilever is exited at its resonance frequency (or close to it). In the most simplistic way the motion z(t) of the free cantilever (that is not sensing a force) can be expressed by the well-known equation of the driven harmonic oscillator m z̈(t) + m γż(t) + c_z (z(t)-d) = F_0 cos(2π f_d t) with the mass m, damping coefficient γ, spring constant c_z, the tips equilibrium position d, and driving force F_0 = a_d c_z operating at driving amplitude a_d and driving frequency f_d. The resonance frequency of the undisturbed oscillator is given by f_0 = 1/2π√(c_z/m). The damping factor γ can be described using the quality factor Q_0 of the undisturbed oscillator that is only interacting with its environmental gas as γ = 2π f_0/Q_0 for frequencies close to the resonance frequency f ≈ f_0. With the ansatz z(t) = A cos(2π f t + φ) the amplitude A and phase φ for the differential equation can be found as A (f) = F_0/(4π m)/√((f_0^2-f^2)^2+(f_0f/Q_0)^2) φ(f) = arctan( -f_0f/Q_0(f_0^2-f^2)) Basic observations are that the amplitude reaches its maximum for f = f_0 and is only restricted by the damping γ. Importantly, the phase does not depend on the driving force, as it only affects the amplitude. A typical (experimentally obtained) curve of A and φ can be in seen Fig. <ref>, as discussed later. §.§ Q-control To utilize the oscillating tip for non-contact mode AFM measurements external forces interacting with tip must be taken into account. Moreover, an additional term is required if the Q-factor is to be artificially reduced, i.e. to achieve Q-control. 
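To make the role of Q in the expressions for A and φ concrete, the short sketch below evaluates the phase response around resonance for an ambient-like and a vacuum-like Q (both values illustrative). It uses the two-argument arctangent so that the phase stays on a single continuous branch running from about 0 to about -180 deg, with -90 deg exactly at f = f_0.

```python
import numpy as np

f0 = 70e3                                              # Hz, illustrative resonance frequency

def phase_deg(f, Q):
    """Phase of the driven oscillator, continuous branch with -90 deg at f = f0."""
    return np.degrees(np.arctan2(-f0 * f / Q, f0**2 - f**2))

detuning = np.array([-20.0, -5.0, 0.0, 5.0, 20.0])     # Hz around resonance
for Q in (200, 9000):                                  # ambient-like vs vacuum-like quality factor
    print(Q, np.round(phase_deg(f0 + detuning, Q), 1))
# in both cases the phase crosses -90 deg at resonance, but at high Q the transition from ~0 to
# ~-180 deg is squeezed into a few Hz, which makes the response very sensitive -- and non-linear
```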
A more complete version of Eq. <ref> in regards of AFM is then given in [Hoelscher2007]: m z̈(t) + 2π f_0 m/Q_0ż(t) + c_z (z(t) -d) + g c_z z(t-t_0)_Q-Control = a_d c_zcos(2π f_d t)_external driving force + F_ts[z(t),ż(t)]_tip-sample force The first of the two new terms is the Q-Control term with the gain factor g and signal shift t - t_0. The tip-sample force F_ts depends not only on the tip position z(t) but also its derivative ż(t). Solving this equation requires further assumptions as discussed in [Hoelscher2007]. One import result is that, in fact, the Q-factor can be can be controlled by adjusting the gain factor, resulting in an effective Q_eff that is given by (assuming for simplicity F_ts = 0 and f_d≈ f_0): Q_eff (g,t_0) = 1/1/Q_0-gsin(2π f_d t_0). The experimental setup realization is schemed in Fig. <ref>. By adding a feedback loop (colored blue) to the modulation piezo, so-called Q-control operation is possible. By amplifying and phase-shifting (e.g. time-shifting) the detected signal, energy loss can be compensated or induced, thus amplifying or attenuating Q. §.§ Tip-sample force, frequency, and phase shift The second term in Eq. <ref> describes the influence of external forces. In case of the force free driven oscillator the resonance frequency f_0 was introduced as f_0 = 1/2π√(k/m). However, an external force acting on the tip will shift the cantilever resonance frequency. In typical cases, where (i) the tips oscillation amplitude is small compared to the scale of the spatial variation of the tip-sample force F_ts and where (ii) the cantilevers restoring force behaves like a Hookean spring F_cantilever = - k_0 (s_z - s_z_0) (with the tip displacement s_z - s_z_0 around the equilibrium position s_z_0) and (iii) the restoring force is large<cit.>, the impact of the force acting on the tip can be described as modification of the spring constant k = k_0 + k' = k_0 - ∂ F_ts/∂ s_z with the differential expressing the tip-sample force that acts on the tip while traveling the distance s_z in Z-direction of the oscillation. Within this model, the resonance frequency depends on the tip-sample force: f_0' = 1/2π√( k_0 - ∂ F_ts/∂ s_zk_0/k_0/m) = f_0√( 1 - 1/k_0∂ F_ts/∂ s_z) To the second part of the equation a Taylor-expansion √(1-x)≈ 1 - 1/2x can be applied. As the deviation of F_ts is very small compared to the initial spring constant k_0 this approximation is justified and f_0' thus can be approximated as f_0' ≈ f_0 ( 1 - 1/2k_0∂ F_ts/∂ s_z). Therefore the frequency shift is directly proportional to the change of the tip-sample force: Δ f = f_0'-f_0 = f_0/2k_0∂ F_ts/∂ s_z. In consequence, at constant excitation frequency f the observed φ (see Eq. <ref>), will experience a phase shift, as f_0 is not constant, but subject to change. This is the basic working principle of MFM in two-pass mode, as this phase shift is the measurement signal. Evaluating the first derivative of Eq. <ref> ∂/∂ fφ(f) = -f_0 Q_0 (f^2 + f_0^2)/Q_0^2 (f^2 - f_0^2)^2 + f^2 f_0^2 it can be argued that for small Q_0, large f and limited variation of f_0 while f ≈ f_0 it is reasonable to ignore the first term in the denominator, simplifying the equation to ∂/∂ fφ(f ≈ f_0) = 2Q_0/f thus, showing a constant slope and in consequence linear signal response. This approximation often is sufficient for MFM operation in air, in common setups typically operating at Q_0 ≈ 200 and f_0 ≈ 70 kHz. Unfortunately, for large Q_0 this argumentation doesn't hold up anymore and non-linear behaviour comes into play for vacuum operation. 
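A minimal numerical sketch of the Q_eff expression is given below; the drive at resonance, the 90-degree signal shift t_0 = 1/(4 f_d), and the gain values are all illustrative. A negative gain extracts energy from the oscillation every cycle and damps Q, which is how Q-control is used in the first pass.

```python
import numpy as np

f0, Q0 = 70e3, 9000.0                 # illustrative vacuum values
f_d = f0                              # drive at resonance
t0 = 1.0 / (4.0 * f_d)                # 90-degree signal shift, so sin(2 pi f_d t0) = 1

for g in (0.0, -1e-4, -5e-4, -1e-3):  # g < 0 removes energy per cycle -> damping
    Q_eff = 1.0 / (1.0 / Q0 - g * np.sin(2.0 * np.pi * f_d * t0))
    print(f"gain g = {g:+.0e}:  Q_eff ~ {Q_eff:.0f}")
# flipping the sign of g would instead compensate dissipation and enhance Q
```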
§.§ Tip-sample force in MFM In MFM the force acting on the magnetically coated tip with the local magnetization M_tip(r',z') in the sample stray field H_sample(r',z') can be described as a two-dimensional cross-correlation integral over the magnetic tip volume<cit.> F_mag(r,z)= μ_0 ∬_V'( ∇⃗·M_tip(r',z')) ·H_sample (r + r', z+z') dr'dz' with the in-plane coordinate vector r = (x,y), measurement height z and vacuum permeability μ_0. By inserting this into Eq. <ref>, the relation between local magnetic field and frequency shift of the oscillating cantilever can be derived. Calculations are conveniently performed in a partial Fourier space with (x,y,z) → (k_x,k_y,z). This is, for example, discussed in detail in [Hu2020, Hug1998, Schendel2000, Zhao2019, Schwenk2016] and results in Δ f (k,z) = -μ_0 f_0/2c_z·LCF(k,θ,ϕ, A_0) ·∂H_z,tip^*(k,z)/∂_z·H_z_sample(k,z) . For f ≈ f_0 and thus small Δφ this gives Δφ (k,z) = -μ_0 Q/c_z·LCF(k,θ,ϕ, A_0) ·∂H_z,tip^*(k,z)/∂_z·H_z_sample(k,z) . The here introduced lever correction function LCF accounts for cantilever- and device-specific parameters. It corrects for the canting angles θ and ϕ (see Fig. <ref>) and the finite oscillation amplitude A_0. The derivative of the complex conjugate of H_z,tip describes the effective stray field gradient of the tip that is located in a plane parallel to the samples surface at measurement height z. Consequently, damping the Q-factor in MFM phase shift measurements results in a proportional reduction of the phase signal while inducing additional noise-generating electronics, thus lowering the SNR even further. While the phase shift signal improvement is directly linked to the quality factor (φ∝ Q) in case of frequency shift this is obviously not the case as Eq. <ref> is independent from Q. To understand the SNR improvement for frequency shift, a closer look at the origin of noise in AFM is required. Thermal noise due to thermal induced cantilever motion in AFM allows the detection of signals with the minimum detectable force gradient ∂ F_min'/∂ z = √(4k_Lk_B TB/2π f_0 Q ⟨ A_osc^2 ⟩) with the cantilever force constant k_L, the Boltzmann constant k_B, the absolute temperature T, the bandwidth B and the mean-square of the oscillation amplitude ⟨ A_osc^2 ⟩. Depending on whether static or dynamic mode with amplitude or frequency modulation is used a factor of √(2) applies, further reading in [Albrecht1991,Voigtlaender2015]. From this equation it is clear, that a large Q-factor is desirable to improve sensitivity. However, a large Q-factor also impacts the required bandwidth in amplitude modulated (AM) operation: If an external force acts on the cantilever (thus changing the resonance frequency f_0), the oscillating systems needs time to reach the new steady state. The required time for the system response can be expressed by the time constant τ≈ 2Q/ω_0 = Q/ (π f_0). Consequently, for phase shift measurements bandwidth and quality factor are not independent, therefore measurement with high Q-factors become unacceptably slow. This does not hold true for frequency shift measurements, as by tracing f_0, the issue of settling time can be avoided. The bandwidth will only be limited by the demodulation system used for frequency modulation (FM) and not by the transient behavior. On a side note, it shall be mentioned, that also increasing the oscillation amplitude would improve the minimum detectable force gradient (Eq. 
<ref>), but as for quantitative evaluation the external force must remain (reasonably) constant within the cantilever oscillation, the actual usable amplitude range is limited below its experimental limits. § EXPERIMENTAL In the following section the new modified two-pass mode operation is introduced, which will be referred to as two-pass dual-mode, as it allows phase shift measurements with a dampened Q, as well as frequency shift measurements at high Q in situ. By measuring frequency shift (instead of phase shift) in the second pass, the highest possible Q can be used without suffering sensitivity loss or experiencing non-linearity in measurement signals. This is demonstrated on a structured sample and a thin film multilayer system forming domain walls. §.§ Phase and Frequency detection §.§.§ First pass: Topography Fig. <ref> shows the working principle of amplitude-controlled topography measurements, as used in the first pass of two-pass mode. The free oscillating cantilever shows a resonance peak as indicated by the solid plotted curve with resonance frequency f_0. For operation in non-contact mode the drive frequency f_d must be greater than f_0. The drive frequency has been chosen such, that the desired amplitude setpoint A_s is achieved. By bringing the oscillating tip close to the surface, external forces will change the resonance frequency, for example from f_0 to f_0', changing the resonance behaviour by Δ f. This causes a amplitude change Δ A at the fixed drive frequency f_d, which is used as feedback for the Z-piezo. The controller will retract or extend the Z-piezo so that the setpoint amplitude A_s is reached again. The required piezo movement maps the topography of the sample. This works well for low Q-factors (for example Q ≈ 200) , as the resonance peak has a FWHM of around 350 Hz while the frequency shift amounts to some 10 Hz. §.§.§ Second pass: Magnetic signal In the second pass (in lift mode) the AFM controller retraces the topography that was acquired in the first pass (adding a user defined lift height). The magnetic interaction leads to a frequency shift of the cantilever's resonance frequency. In MFM the magnetic interaction between tip and sample is detected by either keeping the excitation frequency constant and monitoring the phase shift or by tracking the change of the resonance frequency. In Fig. <ref> these two cases are portrayed using experimentally obtained frequency sweep data for operation in air. The black curve shows the amplitude, blue the corresponding phase. The phase shift at the resonance was adjusted in post-processing to match -90 deg. In ambient conditions, detection via phase shift is common, as pictured in <ref> (a). As for low Q-factors, the measured phase shift is rather small and usually in the range of single digit degrees, the phase response is staying in the range of approximately linear behaviour (indicated by the arrow). In vacuum conditions, by contrast, measurement signals of tens of degrees are possible[The actual signal response depends on tip and sample. “Weak” samples/tips with small stray magnetic stray fields may not be affected, as their phase response may stay within single digit degrees.], clearly leaving the area of linearity, rendering the data useless for quantitative measurements. Therefore, in vacuum operation frequency shift measurements are used, eliminating this issue. The resonance frequency (indicated by the vertical line in Fig. 
<ref> (b) that can move in either way) is measured by picking the corresponding phase at resonance as setpoint (here at -90 deg, indicated by the horizontal line). A phase-locked loop (PLL) is utilized to adjust the frequency of the excitation, so that the actual phase is kept at the desired phase-setpoint, thus tracking the resonance frequency peak. §.§ Modified Two-Pass Mode “Two-Pass Dual-Mode” The idea behind the new two-pass dual-mode is to vary the Q-factor and signal detection scheme in between the two-passes. This allows to optimize the measurement for stability while acquiring topography and yet boost sensitivity when measuring magnetic stray fields in lift mode. The measurement system used in this work consists of a Park Systems NX-Hivac AFM equipped with a signal extension module (SAM) that allows to tap and modify signals. The topography is always measured with Park's built-in Q-control, while in the second pass a Zurich Instruments HF2LI lock-in amplifier with dual PLL is option used, allowing to tailor settings suitable for high Q-factor frequency shift detection in vacuum operation. In lift mode, the AFM controls the lift height via the Z-piezo but does not modulate the drive piezo signal, thus the drive piezo can be switched to the external HF2LI while in lift mode. Signal lock at the HF2LI is achieved in a couple of 100 µs, meaning that switching can take place during overscan (scanning a user-defined percentage over the desired scan area to avoid turnaround streaks at the edges of the final image). The HF2LI excites and tracks the frequency of the oscillating tip via the PLL. The measured frequency deviation provided by the PLL is directly fed back to the microscope controller by an auxiliary input port that feeds the signal to the AFM's measurement software for image formation. Signal switching is realised by a home-built switching-box that consist of a micro-controller (µC) controlling several DG409 CMOS analog multiplexers which interconnect the two devices. A timing diagram of the operation can be found in Fig. <ref>. The line signal (indicating the scan direction i.e. trace/forward or retrace/backward) and the lift mode state is fed into the µC. According to these signals and the selected operation mode, the µC connects the excitation signal either to the build-in lock-in using Q-control or the external HF2LI to drive the modulation piezo. Via a graphical user interface (GUI) the user can modify the µC operation and choose between several different operating modes. The following two modes of operation are of particular interest in the scope of this paper: * Normal Two-Pass Mode. The well-known and widely used common two-pass operation mode. No switching of signals. Used to obtain a first overlook or non-quantitative measurements in combination with Q-control. * Fast Two-Pass Dual-Mode. A fast mode where in forward direction phase shift and in backward direction frequency shift is measured. As fast as the normal two-pass mode, however the (redundant) control trace is not available, that may otherwise hint inexperienced users problematic measurements settings (for example inappropriate scan speed that will yield the forward and backward data not equaling each other). §.§ Signal Improvement The feasibility of the new two-pass dual-mode is demonstrated by a measurement of a nano-patterned magnetic sample, which combines topography features and low magnetic stray fields. 
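The switching decision itself can be summarised in a few lines. The sketch below is written in Python purely for illustration (the actual firmware runs on the micro-controller driving the DG409 multiplexers), and the function and mode names are hypothetical. It only encodes the behaviour described above: the first pass always uses the built-in lock-in with Q-control, while in fast two-pass dual-mode the lifted pass uses the built-in lock-in on the trace (phase shift) and the external HF2LI with PLL on the retrace (frequency shift).

```python
def drive_source(mode: str, lift_active: bool, retrace: bool) -> str:
    """Illustrative selection of which lock-in drives the modulation piezo."""
    if mode == "normal_two_pass":
        return "internal lock-in (Q-control)"              # no switching at all
    if mode == "fast_two_pass_dual":
        if not lift_active:                                 # first pass: topography with Q-control
            return "internal lock-in (Q-control)"
        # second pass: phase shift on trace, frequency shift (PLL) on retrace
        return "external HF2LI (PLL)" if retrace else "internal lock-in (Q-control)"
    raise ValueError(f"unknown mode: {mode}")

for lift in (False, True):
    for retrace in (False, True):
        print(f"lift={lift!s:5}  retrace={retrace!s:5}  ->  "
              f"{drive_source('fast_two_pass_dual', lift, retrace)}")
```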
The sample consists of circles with different sizes, here 3 circles with a diameter of d=300 nm and height h=60 nm have been chosen for evaluation (see Fig. <ref> (a) for the sample topography). The sample consists of a Ta(5)/Pt(8)/Co(1)/Ru(1.4)/Pt(0.6)_10/Pt(2.4) multilayer stack (numbers in nm) on Si with perpendicular magnetic anisotropy. More details are available in [FernandezScarioni2021]. As the measurements were performed in two-pass dual-mode, it is ensured that phase and frequency shift is measured in immediate succession, enabling direct comparability of phase and frequency measurements. A full MFM image obtained by frequency shift measurement in vacuum at Q-factor of Q ≈ 9000 is portrayed in Fig. <ref> (b). As the sample possesses structures with 60 nm topography, the follow slope line mode was used (see Chap. <ref>). The results of measurements for different Q-factors are portrayed in Fig. <ref>. The 4 lineplots show the (a) phase shift signal in air (gray line profile, Q ≈ 170) and phase shift signal in vacuum (blue line profile, Q≈ 1600). In (b) the frequency shift signal in air (gray line profile, Q≈ 170) and the frequency shift signal in vacuum (green line profile, Q≈ 9000) are plotted. As expected, in both detection modes the signal improves when operating in vacuum compared to ambient conditions. As discussed in the theory part, the signal improvement for the phase shift measurement originates from the increase in absolute phase shift signal, which is confirmed by the experiment. For the phase-signal, the improvement behaves linearly to the Q-factor (see Table <ref> and Fig. <ref>), increasing the phase-signal Δφ = 7.4±0.29 deg every Δ Q = 100. However, this is only valid for small φ, as for phase values that are more than 10 deg away from the phase at resonance, a considerable drop off due to non-linear effects will emerge. Furthermore, the noise-contribution of Q-control increases for rising Q, canceling out the better phase-signal completely, as observable in the SNR values in Table <ref>. In this specific setup, with this specific cantilever batch, the sweet-spot for phase measurements is around Q ≈ 1000. For the frequency shift measurement, the signal amplitude remained constant within the margin of error (as expected), while the noise decreased noticeable. Corresponding values are listed in Table <ref>. For each measurement situation the quality factor, the root mean square (rms) noise, the maximum measured signal amplitude and corresponding SNR are listed. Note that 100 mV equal a frequency shift of 1,00 Hz in the here presented measurement setup. While non-linear phase response will become an issue entering double digit degree phase response, limiting the maximum usable Q-factor, for frequency measurements useable Q-factors are only restricted by the cantilever[As for this comparison measurement an air class cantilever was used, the observed Q of around 10 k is far away from the possible limits of 200 k for carefully manufactured high Q vacuum cantilevers]. Another critical advantage is the elimination of non-linear behaviour when using frequency shift that will be demonstrated in the following section. §.§ Elimination of Non-Linearity The origin of the non-linear behaviour has been extensively discussed before. Here, the effect is demonstrated on a very well known, calculable multilayer reference sample that forms up and down magnetized domains in a maze pattern, that should result in equal areal percentages of bright and dark areas. 
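For orientation, the expected scaling of the phase signal with Q can be estimated from the near-resonance slope 2Q/f_0 quoted earlier. The sketch below converts a representative magnetic frequency shift of 1 Hz (using the setup's 100 mV = 1.00 Hz conversion; actual shifts depend on tip and sample) into a phase shift for the Q values appearing above, and flags when the predicted response leaves the roughly 10 deg region around resonance in which the phase can still be treated as linear. The resonance frequency is an illustrative value for an air-class cantilever.

```python
import numpy as np

f0 = 75e3          # Hz, illustrative resonance of an air-class cantilever
df = 1.0           # Hz, representative magnetic frequency shift (100 mV = 1.00 Hz in this setup)

for Q in (170, 1000, 1600, 9000):
    dphi = np.degrees(2.0 * Q / f0 * df)         # small-shift (linear) phase response
    note = "within" if abs(dphi) < 10.0 else "outside"
    print(f"Q = {Q:5d}:  predicted phase shift ~ {dphi:6.2f} deg   ({note} the ~10 deg linear window)")
```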
However, in phase shift measurements for rising Q-factors an increasingly higher areal percentage of dark domains can be observed as shown in Fig. <ref>. For convenience all images are accompanied by their corresponding histogram. In (a) the domain pattern was measured in ambient conditions (Q ≈ 220) using phase shift, forming an equal domain distribution. In picture (b) the same reference sample was measured in vacuum (Q ≈ 1800), using phase shift and Q-control. The Q-factor was chosen as large as possible to illustrate the effect as effectively as possible. The dark domains are much more pronounced, clearly visible in the histogram. Without the prior knowledge of the phase behaviour this could be easily misinterpreted as offset due to electrostatic effects, sample defect, or, even worse, as real measurement data. This can be a great pitfall when interpreting data for material characterisation and quantitative measurements. However, with this setup we can rule out that any electrostatic or sample defect caused this effect, as the fast two-pass dual-mode was used, that is acquiring phase shift in trace and frequency shift in retrace. Image (c) makes use of that setup, showing the exact same position of the sample with the same measurement parameters at the AFM, with the difference that the modulation piezo is now driven by the external lock in amplifier HF2LI. The frequency shift data shows, as expected, equally distributed dark and bright domains. Therefore, the imbalance in the phase shift distribution solely descends from the measurement technique itself. The origin of the observed non-symmetry of domain distribution can be explained in a straight forward way by Eq. <ref> and the corresponding phase curve in Fig. <ref>. As the setpoint is slightly off peak and therefore in the arctan slightly off point symmetry, positive phase shift values run faster in the non-linear regime providing less signal, thus not only decreasing in absolute values but also breaking the symmetry of the corresponding peaks itself (as clearly observable in the histogram). By running the frequency values of Fig. <ref> (c) trough Eq. <ref> (with f_0 = 74660 Hz and actual operation 30 Hz above f_0) the corresponding Fig. <ref> (d) can be calculated which is corresponding well to the measured data in (b). Equally the other way round is possible: If the corresponding phase curvature has been acquired in advance, these non-linearities could be compensated in post-processing by correcting the measured phase values with the phase values that would be expected if they could be acquired linearly. However, as the arctan is losing slope when far away from the area of point symmetry, sensitivity is lost. Correcting these values will boost noise to the point where no signal can be recovered anymore. This is highly undesired, thus underlining the usefulness of the new modified two-pass mode. §.§ Topography interplay MFM images of structured samples using the common two-pass mode can be misleading as topography can interplay in the magnetic image. In the common operation mode (see Fig. <ref> (a)) the surface is retraced in the second pass, including every topography detail. For example, non-magnetic dirt on a flat magnetic sample could be easily mistaken as magnetic signal, as the dirt will cause additional lift height in the second pass, thus moving the cantilever out of the samples stray field and leading to a change in magnetic signal. 
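The asymmetry can be reproduced directly from the phase formula. The sketch below evaluates the oscillator phase at a fixed drive 30 Hz above f_0 = 74660 Hz with Q ≈ 1800 (the values quoted above) for resonance shifts of equal magnitude but opposite sign; the shift magnitudes themselves are illustrative. Upward shifts, which move the resonance towards the drive, produce a visibly larger phase response than downward shifts, which is exactly the imbalance seen in the histogram of the phase image.

```python
import numpy as np

f0, Q = 74660.0, 1800.0
f_drive = f0 + 30.0                      # fixed drive, 30 Hz above the unperturbed resonance

def phase_deg(f_res):
    """Oscillator phase at the fixed drive frequency for a given (shifted) resonance f_res."""
    return np.degrees(np.arctan2(-f_res * f_drive / Q, f_res**2 - f_drive**2))

phi0 = phase_deg(f0)                     # working point without magnetic interaction
for df in (1.0, 2.0, 5.0):               # symmetric up/down resonance shifts, Hz (illustrative)
    up, down = phase_deg(f0 + df) - phi0, phase_deg(f0 - df) - phi0
    print(f"shift ±{df:.0f} Hz:  phase response {up:+6.2f} deg (up)  vs  {down:+6.2f} deg (down)")
```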
Also, strong magnetic samples that do not allow a clean topography image without magnetic cross-talk are problematic, as these magnetic details are getting counter-compensated in the second pass. In particular the issue of topography interplay emerges when pursuing measurements of manufactured structured samples. This can easily demonstrated at the sample at hand, as shown in Fig. <ref>. When following the topography of the circular structures, at the edges the tip gets very close to the structure, casting a dark shadow (see Fig. <ref> (a) and the corresponding grey colored line profile in (c)). However, for simulations and calculations almost always a flat plane above the surface is considered. Thus, a common MFM image that follows the topography can be misleading and pose a pitfall when evaluating data, especially when pursuing quantitative MFM (qMFM). This problem can be countered by operation in follow slope line mode, as shown in Fig. <ref> (b). By fitting a linear slope trough the measured topography (ignoring outliers due to dirt), the samples tilt can be traced in the second pass while ignoring its topography. However, this mode must be carefully operated to not crash the tip into any structure or dirt. It is advisable to image the samples topography beforehand in order to derive a suitable the lift height value. In Fig. <ref> (b) (and blue colored line profile in (c)) a lift height of 120 nm was chosen for the follow slope line mode, which equals a lift height of 60 nm in the common (follow topography) mode, as the structures are regarded at outliers. The difference of both traces is quite obvious and corresponds well to the simulated traces in (d). Non-symmetry in the experimental data is attributed to tip tilt that could be compensated in qMFM. § SUMMARY AND OUTLOOK While common Q-control is a very as useful feature for running amplitude-controlled measurements in vacuum AFM, due to its limitations and non-linear behaviour in phase shift measurements it is not feasible for vacuum MFM. However, these issues can be circumvented by measuring frequency shift instead. Thus, a new two-pass dual-mode was introduced, combining the advantages of both methods into a fast and sensitive vacuum MFM operation mode, capable of handling magnetic samples with topography. This novel operation mode was realized via a micro-controller that switches the required signal via CMOS multiplexers to an external HF2LI lock-in amplifier to measure frequency shift utilizing a phase-locked loop. The improved sensitivity of the new operation mode has been demonstrated by MFM-measurements on a nanostructured magnetic sample. The linear response of the measurement technique was investigated using a very well-known calculable multi-layer reference sample, that is forming a domain pattern structure. With our approach high-sensitivity linear MFM measurements on structured as well as on flat samples are possible using the principle of common two-pass MFM with only small modifications and minimal required user retraining. With that technique a path to high-sensitivity, high-resolution quantitative magnetic force microscopy in vacuum is now available to a broad user base using frequency-based evaluation. § ACKNOWLEDGEMENTS This project was supported by the Federal Ministry of Economic Affairs and Climate Action within the TransMeT project "Realisierung eines quantitativen Magnetkraftmikroskopie-Messverfahrens gemäß IEC TS 62607-9-1 mit einem kommerziellen System". 
§ AUTHOR DECLARATIONS §.§ Conflict of Interest The authors declare no conflict of interest. §.§ Author Contributions Christopher Habenschaden: Analysis & Interpretation, Conceptualization, Data Curation, Formal Analysis, Methodology, Software, Validation, Visualization, Writing - Original Draft. Sibylle Sievers: Analysis & Interpretation, Conceptualization, Formal Analysis, Funding Acquisition, Methodology, Project Administration, Resources, Supervision, Validation, Writing - Review & Editing. Alexander Klasen: Validation, Writing - Review & Editing. Andrea Cerreta: Technical support, Validation. Hans Werner Schumacher: Analysis & Interpretation, Conceptualization, Funding Acquisition, Project Administration, Supervision, Validation, Writing - Review & Editing. § DATA AVAILABILITY The data that support the findings of this study are available from the corresponding author upon reasonable request.
http://arxiv.org/abs/2406.18368v1
20240626141257
Future singularity in anisotropic universe
[ "Taishi Katsuragawa", "Shin'ichi Nojiri", "Sergei D. Odintsov" ]
gr-qc
[ "gr-qc", "astro-ph.CO", "hep-th" ]
=5000 KEK-TH-2632 KEK-Cosmo-0348 taishi@ccnu.edu.cn Institute of Astrophysics, Central China Normal University, Wuhan 430079, China nojiri@gravity.phys.nagoya-u.ac.jp KEK Theory Center, Institute of Particle and Nuclear Studies, High Energy Accelerator Research Organization (KEK), Oho 1-1, Tsukuba, Ibaraki 305-0801, Japan Kobayashi-Maskawa Institute for the Origin of Particles and the Universe, Nagoya University, Nagoya 464-8602, Japan odintsov@ice.csic.es ICREA, Passeig Luis Companys, 23, 08010 Barcelona, Spain Institute of Space Sciences (ICE, CSIC) C. Can Magrans s/n, 08193 Barcelona, Spain § ABSTRACT We investigate future singularities originating from the anisotropy in the universe. We formulate a new class of singularities in the homogeneous and anisotropic universe, comparing them with the known singularities in the homogeneous and isotropic universe. We also discuss the physical consequences of the new singularities. Moreover, we develop a novel reconstruction method for the anisotropic universe by introducing four scalar fields to reconstruct cosmological models in which future singularities appear. We present an explicit example where the anisotropy may grow in the future up to singularity. Future singularity in anisotropic universe Sergei D. Odintsov July 1, 2024 ========================================== § INTRODUCTION The concordance ΛCDM model, including the cosmological constant and cold dark matter, has been in good agreement with observational data. However, for several problems that are difficult to explain in the ΛCDM model, cosmological models that go beyond the standard model have been intensively investigated in the context of modifying Einstein's gravity, known as the modified gravity theory. In the search for the beyond-ΛCDM model, modifications of the gravitational theory have provided a variety of cosmological models, and cosmological observations have indeed constrained the gravitational theories. However, it is also significant to examine the cosmological principle on which the ΛCDM model stands; that is, the universe is homogeneous and isotropic spacetime on large scales and written as the Friedmann-Lemaître-Robertson-Walker (FLRW) metric as the zeroth order approximation. Cosmic Microwave Background (CMB) and Large-Scale Structure data have thoroughly tested the cosmological principle. Recently, strong evidence of a violation of the cosmological principle of isotropy has been reported <cit.>. Although there has been much discussion about the origin of the anisotropy and its time evolution, the cosmological no-hair conjecture <cit.> provides a strong prediction, independent of the details of the model, that the anisotropy will exponentially decrease once inflation occurs. However, it is still possible to evade the cosmic no-hair conjecture, and the anisotropic inflation models <cit.> suggest that the spontaneous rotational-symmetry breaking could occur during the inflation. The generated cosmological anisotropy could help us understand the CMB anomalies, and it is being vigorously studied along with other cosmological anomalies, such as Hubble tension. In addition to studying the origin of anisotropy and its effects in the early universe, It is also essential to study how anisotropy will evolve in the future. It has already been suggested that the current universe contains a small amount of anisotropy, and due to new physics or unveiled mechanisms, the future universe may develop a larger amount of anisotropy. 
It is feasible to construct cosmological models with potentially increasing anisotropy and also significant to investigate what may happen in the future within such models. For example, we allow finite anisotropy in all cosmic history. In that case, the anisotropy may grow or even show singular behaviors because three spatial directions can evolve differently and include singularities. We will explore cosmological models based on finite anisotropy and search for possible new physics that lies therein. In this paper, we investigate the general homogeneous and anisotropic universe, assuming that anisotropy generated by some mechanisms exists in the universe. We mainly discuss the future singularities generated by the anisotropy. It has been known that cosmological models generally encompass five types of finite-time singularities in the FLRW universe <cit.>. In contrast to these singularities known in the homogeneous and isotropic universe, this paper presents new types of finite-time singularities in the homogeneous and anisotropic universe. We discuss the classification of the new singularities and their physical meanings due to the anisotropy by analogy with the known singularities in the FLRW case. To demonstrate the growing anisotropy and associated singularities, we construct a cosmological model that realizes the finite-time singularities, by developing a new cosmological reconstruction method. This paper is organized as follows. In section <ref>, we briefly review the finite-time singularities known in the FLRW universe. In section <ref>, we formulate the cosmological model with the broken rotational symmetry and classify the finite-time singularities due to the finite anisotropy. Moreover, we show phenomena caused by these singularities using the geodesic deviation equation. In section <ref>, we demonstrate the reconstruction of the cosmological models where the finite-time singularities appear. As a specific example, we use Einstein's gravity as a benchmark gravitational theory. § FINITE-TIME SINGULARITIES IN FLRW UNIVERSE We briefly review the finite-time singularities in the FLRW universe, homogeneous and isotropic spacetime, following from Refs. <cit.>. The line element of the FLRW universe is given by d s^2 = -dt^2 + α^2(t) [dr^2/1-Kr^2 + r^2 (dϑ^2 + sin^2 ϑ dϕ^2) ] , where α(t) is a scale factor, and (t, r, ϑ, ϕ) are the co-moving coordinates. K describes three different geometries for three distinct values, namely, spatially flat (K =0), closed (K > 0), and open (K < 0). We consider the spatially-flat FLRW, K=0, where Eq. (<ref>) reduces to the following form, d s^2 = -dt^2 + α^2(t) ∑_i=1,2,3( dx^i )^2 . In Einstein's gravity, the expansion of the FLRW universe with K=0 in Eq. (<ref>) is described by the Friedmann equation and the Raychaudhuri equation, H^2=κ^2/3ρ , Ḣ=-κ^2/2(ρ+p) . We denote the Hubble parameter by H≡α̇/α, where the dot represents the derivative with respect to t, and κ^2 = 8 π G_N with Newton's gravitational constant G_N. p and ρ are the pressure and the energy density of the matter contents in the universe. We also introduce the equation of state (EOS) as p=wρ, where w is the EOS parameter. Based on Eq. (<ref>), the types of future singularities appearing in various cosmological models are classified as follows: when t→ t_s, * Type I (Big Rip) singularity: α→∞, ρ→∞ and |p|→∞. * Type II (Sudden) singularity: α→ const. and ρ→ const., but |p|→∞. α and α̇ are finite, but α̈ diverges. * Type III (Big Freeze) singularity: α→ const., but ρ→∞ and |p|→∞. 
α is finite, but α̇ diverges. * Type IV (Generalized Sudden) singularity: α→ const., ρ→ const., and |p| → const., but some higher derivatives of H diverge. α, α̇, and α̈ are finite, but higher derivatives of α diverge. * Type V (w) singularity: w →∞, but p and ρ are finite. This type depends on the properties of the matter, but the behavior of α is identical to that in Type II, that is, α and α̇ are finite, but α̈ diverges. Type I singularity was first introduced in <cit.>, which appears in the universe filled by phantom fluid <cit.>. Type II singularity was proposed in <cit.>. Type III and Type IV singularities were obtained by complementing the Type I and Type II singularities in <cit.> (for Type III, see also <cit.>). Although Type I-IV singularities have completely classified the singular behaviors of spacetime, in <cit.>, the singular behaviour of the EoS parameter w was also considered. To illustrate what could happen near the singularities, we consider the geodesic deviation equation: D^2 S^μ/dτ^2 = R^μ_ νρσT^ν T^ρ S^σ . Here, τ, S^μ, and T^μ present the proper time, deviation vector, and the tangent vector, respectively. In the FLRW spacetime (<ref>), we may choose T^0=1, T^i=0. Then, Eq. (<ref>) is reduced into D^2 S^i/dτ^2 = R^i_ 00j S^j . In the FLRW universe, we have R^i_ 00j = ( Ḣ + H^2 ) δ_ij , and Eq. (<ref>) gives D^2 S^i/dτ^2 = ( Ḣ + H^2 ) S^i . H and Ḣ diverge in Type I and III singularities, and Ḣ diverges in Type II singularity. Thus, Eq. (<ref>) tells us that spacetime is ripped. There could be the case that H, Ḣ, or both go to infinity in the infinite future. Even in this case, everything is ripped finally, which is called a little rip <cit.>. There could also be the case that H may become a constant H_0 in the infinite future. However, if H_0 is large enough, anything whose binding energy is smaller than a threshold value is also ripped, called a pseudo-rip <cit.>. § GENERAL ANISOTROPIC SPACETIME In this section, we consider a general homogeneous and anisotropic spacetime and classify the future singularities in this spacetime. In addition to the known finite-time singularities in the FLRW universe, we show that new kinds of singularities may show up in the anisotropic universe. We conclude that these singularities require the presence of even slight amount of spacetime anisotropy as a necessary condition. Regarding the new type of singularities, we investigate the geodesic equation and geodesic deviation equations in such a spacetime. §.§ Rotational symmetry breaking The general homogeneous and anisotropic spacetime is given as follows, ds^2 = - dt^2 + ∑_i,j=1,2,3 g_ij(t) dx^i dx^j . The above spacetime is homogeneous because there is a shift symmetry of the spatial coordinates x^i, x^i → x^i + c^i by constants c^i. Because the spatial part of metric g_ij is symmetric under the exchange of the indices g_ij=g_ji, we can diagonalize the spatial metric as ( g_ij(t) ) ≡𝒪^T(t) ( g̃_ij (t) ) 𝒪(t) = 𝒪^T(t) ( [ a^2(t) 0 0; 0 b^2(t) 0; 0 0 c^2(t) ]) 𝒪(t) . Here 𝒪(t) is a 3× 3 orthogonal matrix, and 𝒪^T(t) is the transpose of 𝒪(t) which satisfies 𝒪^T(t) 𝒪(t) = I with 3× 3 unit matrix I. 𝒪(t) is time-dependent in general, and if it is a constant matrix, the universe can be regarded as the Bianchi Type-I universe, as we will see later. Note that 𝒪(t) does not mean an actual rotation of space but a rotation of principal axes of the spatial metric represented by a symmetric matrix. 
Considering the known results of singularities in the FLRW universe, some or all of a(t), b(t), and c(t) may have singularities of Type I—V, which is a straightforward generalization of the future singularities in the FLRW universe. However, we note that another singularity could be from 𝒪(t). Such a singularity is expected to appear under the broken rotational symmetry or the spatial anisotropy. We now choose the rotational axis of 𝒪(t) near the time t=t_s to be the x^3-axis, which does not generate any loss of generality, 𝒪(t) = ( [ cosθ(t) -sinθ(t) 0; sinθ(t) cosθ(t) 0; 0 0 1 ]) . As in the cases of Type I-IV singularities in the FLRW universe, θ(t) might have singularities: at t=t_s, (1) θ(t) diverges; (2) θ̇(t) diverges; and (3) a higher derivtive of θ(t) diverges. We note that the singularity associated with θ(t) shows up only if a(t_s)≠ b(t_s) because 𝒪(t) becomes irrelevant when a(t)=b(t), 𝒪^T(t) ( [ a^2(t) 0 0; 0 a^2(t) 0; 0 0 c^2(t) ]) 𝒪(t) = ( [ cosθ(t) sinθ(t) 0; - sinθ(t) cosθ(t) 0; 0 0 1 ]) ( [ a^2(t) 0 0; 0 a^2(t) 0; 0 0 c^2(t) ]) ( [ cosθ(t) -sinθ(t) 0; sinθ(t) cosθ(t) 0; 0 0 1 ]) = ( [ a^2(t) 0 0; 0 a^2(t) 0; 0 0 c^2(t) ]) . Therefore a(t_s)≠ b(t_s) is a necessary condition for the singularity from the rotation θ(t) along the x^3-axis. For the rotation matrix (<ref>), the spacetime metric (<ref>) leads to ( g_ij(t) ) = ( [ a^2(t) cos^2θ(t) + b^2(t) sin^2θ(t) [ b^2(t) - a^2(t) ] cosθ(t) sinθ(t) 0; [ b^2(t) - a^2(t) ] cosθ(t) sinθ(t) a^2(t) sin^2 θ(t) + b^2(t) cos^2 θ(t) 0; 0 0 c^2(t) ]) . The above expression tells us that if θ(t) diverges, the metric has violent oscillations, although |g_ij| is finite. We will show that there is a curvature singularity even if θ(t) is finite but θ̇(t) diverges. Such a singularity occurs when θ(t) behaves near t∼ t_s as θ(t) ∼θ_0 + θ_1 ( t_s - t )^β with constants θ_0, θ_1, and β where 0<β<1. We briefly comment on the divergence of θ and its derivatives. In the spacetime of our interest, there can be non-zero off-diagonal elements of the spatial metric g_ij(t) in Eq. (<ref>). If we assign the scale factors to the diagonal elements of spatial metric g̃_ij(t) after the diagonalization, the off-diagonal elements in the original spatial metric g_ij(t) describes the mixture of the scale factors, as in Eq. (<ref>). In this sense, θ(t) is the time-dependent mixing angle. Thus, through the diagonalization, θ(t) corresponds to the off-diagonal elements of g_ij(t), (x,y) element in the current setup, and the divergence of θ and its derivatives reflects those of such off-diagonal elements in g_ij(t). We compute the Ricci tensor and Ricci scalar in the general homogeneous and anisotropic spacetime (the detailed calculation is summarized in Appendix <ref>). For the metric in Eq. (<ref>), the Ricci tensor is given as follows: R_00 = - ( a^2 - b^2)^2/2a^2b^2θ̇^2 - ( ä/a + b̈/b + c̈/c) , R_0i = R_i0 =0 ( R_ij) ≡𝒪^T ( R̃_ij ) 𝒪 = 𝒪^T( [ R̃_11 R̃_12 0; R̃_21 R̃_22 0; 0 0 R̃_33 ]) 𝒪 , where the components in R̃_ij are defined as R̃_11 = äa + ȧa (ḃ/b +ċ/c) + b^4-a^4/2b^2θ̇^2 R̃_12 = R̃_21 = - θ̈/2( a^2 - b^2 ) - θ̇/2[ ȧ/a( b^2 + 3a^2) -ḃ/b( a^2 + 3b^2) +ċ/c( a^2 - b^2) ] R̃_22 = b̈b + ḃb (ȧ/a +ċ/c) + a^4-b^4/2a^2θ̇^2 R̃_33 = c̈c + ċc ( ȧ/a +ḃ/b) . By contracting the Ricci tensor with the metric, the Ricci scalar is given by R = (a^2 - b^2)^2 /4a^2b^2θ̇^2 + 2 (ä/a + b̈/b + c̈/c) + 2 ( ȧḃ/ab + ḃċ/bc + ċȧ/ca) . We omitted the variable t in the scale factors and rotation angle above for simplicity. 
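The statement that the anisotropy a ≠ b is necessary for any θ-dependence can be checked symbolically at the level of the metric. The following sketch (using sympy, purely as an illustration) reconstructs g_ij = 𝒪^T g̃_ij 𝒪 for the single-axis rotation above and confirms that the θ-dependent entries — and hence every θ-dependent term that enters the curvature built from this metric — disappear identically once b is set equal to a.

```python
import sympy as sp

a, b, c, th = sp.symbols('a b c theta', positive=True)
O = sp.Matrix([[sp.cos(th), -sp.sin(th), 0],
               [sp.sin(th),  sp.cos(th), 0],
               [0,           0,          1]])
g = sp.simplify(O.T * sp.diag(a**2, b**2, c**2) * O)

print(g)                             # reproduces the theta-dependent component matrix quoted above
print(sp.simplify(g.subs(b, a)))     # reduces to diag(a**2, a**2, c**2): theta drops out entirely
```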
We note that R̃_33 does not have θ dependence because we choose the rotation axis as x^3 direction in our setup. One can restore well-known results in FLRW spacetime by taking the limit that θ̇ = θ̈ = 0 and a = b= c. The singularities originated from the rotation angle of spatial metric θ may show up in the Ricci tensor and Ricci scalar if θ̇ or θ̈ diverge at t=t_s. Notably, these singularities require a(t_s)≠ b(t_s). We emphasize that θ dependence in the curvature tensors always drops if a(t)=b(t), and thus, the anisotropy in the scale factors is a necessary condition for the singularity associated with θ(t). In other words, if there is even a slight anisotropy in the universe, θ dependence cannot be ignored and potentially causes a new type of singularities. We consider the case that θ(t) vanishes at t=t_s, while θ̇(t) diverges at t=t_s. In this case, the metric g_ij(t) is automatically diagonalized g_ij(t) = g̃_ij(t_s) as in Eq. (<ref>). However, several components of Ricci tensor R_00, R_11, R_12=R_21, R_22, and Ricci scalar R diverge in general when θ̇ diverges. Note that we have chosen the rotational axis as the x^3 axis near t=t_s. As we will see in the following subsection, in Einstein's gravity, the Einstein equation suggests that the energy-momentum tensor must diverge corresponding to divergences of θ̇ in the Einstein tensor. Moreover, off-diagonal components of the Einstein tensor are nonvanishing, which generally requires the anisotropic stress in the energy-momentum tensor. §.§ Classification of singularities We can summarize the classification of singularities in terms of the metric components. First, we consider the singularities related to a(t), b(t), and c(t), which are the eigenvalues of g_ij(t). When t → t_s, 1-1 Type I singularity: Some of a(t), b(t), and c(t) diverge. 1-2 Type II singularity: Some of a(t), b(t), and c(t) and the first derivatives of a(t), b(t), and c(t) are finite, but some of the second derivatives diverge. 1-3 Type III singularity: Some of a(t), b(t), and c(t) are finite, but the first derivatives of a(t), b(t), and c(t) diverge. 1-4 Type IV singularity: Some of a(t), b(t), and c(t) and the first and second derivatives of a(t), b(t), and c(t) are finite, but some higher derivatives diverge. Note that the same type of singularity does not need to occur in all directions. For instance, only a(t) corresponding to x^1 direction may have one of the above Type I - IV singularities, while the other scale factors do not show singular behaviors. As another example, the two directions may have singularities, although the remaining direction does not, and these two singularities may be different types from each other. As mentioned in the previous subsection, these singularities are generalizations of the future singularities in the FLRW universe with respect to different scale factors assigned to the three spatial directions. These are not related to the rotation θ(t), and assuming the rotation angle is constant θ(t)=const., the spacetime of our interest is reduced to Bianchi Type-I universe. As an illustration, we consider the Einstein equation, G_μν = κ^2 T_μν , where T_μν represents the energy-momentum tensor of the matters. From Eqs. 
(<ref>) and (<ref>), the Einstein tensor G_μν is given as G_00 = - (a^2 - b^2)^2 /4a^2b^2θ̇^2 + ( ȧḃ/ab + ḃċ/bc + ċȧ/ca) , G_0i = G_i0 = 0 , G_ij = 𝒪^T( R̃_ij - 1/2g̃_ij R )𝒪 ≡𝒪^T ( G̃_ij ) 𝒪 = 𝒪^T( [ G̃_11 R̃_12 0; R̃_21 G̃_22 0; 0 0 G̃_33 ]) 𝒪 , where the diagonal components in G̃_ij are defined as G̃_11 = - (a^2 - b^2)(3a^2 + b^2)/4b^2θ̇^2 - a^2(b̈/b + c̈/c) - a^2ḃċ/bc , G̃_22 = - (b^2 - a^2)(a^2 + 3b^2)/4a^2θ̇^2 - b^2(ä/a + c̈/c) - b^2ċȧ/ca , G̃_33 = - ( a^2 - b^2)^2 c^2/4a^2b^2θ̇^2 - c^2(ä/a + b̈/b) - c^2ȧḃ/ab . Moreover, the spatial components of the Einstein equation can be simplified by introducing a new definition of the energy-momentum tensor: 𝒪^T ( G̃_ij ) 𝒪 = κ^2 T_ij G̃_ij ≡κ^2 T̃_ij , where (T_ij) = 𝒪^TT̃_ij𝒪 . Assuming θ is constant in Eqs. (<ref>) and (<ref>), we find that all the θ-dependent terms vanish, and Eq. (<ref>) leads to the modified Friedmann equations in the Bianchi Type-I universe <cit.>: κ^2 T_00 = ( H_a H_b + H_b H_c + H_cH_a) , - κ^2/a^2T̃_11 = ( Ḣ_b + Ḣ_c) +( H_b^2 + H_c^2 + H_b H_c) , - κ^2/b^2T̃_22 = ( Ḣ_c + Ḣ_a) + ( H_c^2 + H_a^2 + H_cH_a) , - κ^2/c^2T̃_33 = ( Ḣ_a + Ḣ_b) + ( H_a^2 + H_b^2 + H_a H_b) . Here, we defined the Hubble parameter for each direction as H_a = ȧ/a , H_b = ḃ/b , H_c = ċ/c . When we read the energy-momentum tensor as T_00 = ρ and T̃_ij = diag [ P_1 a^2, P_2 b^2, P_3 c^2], where ρ and P_i are the energy density and the pressure in each direction. It is apparent that the future singularities in the FLRW universe are generalized into those in the Bianchi Type-I universe. As in the classification of the future singularity in the FLRW universe, three different Hubble parameters and their derivatives may show the different types of singularities, as the corresponding energy density and pressures also diverge. Second, we focus on the singularities related to the rotation θ(t) in the orthogonal matrix, which diagonalizes the spatial metric g_ij. For these singularities, the components of the metric are always finite, | g_ij|<∞. Near the singularity t∼ t_s, we may choose the matrix as in Eq. (<ref>) with any loss of generality and assume a(t_s)≠ b(t_s). When t → t_s, 2-1 Type Iθ singularity: θ diverges, and the metric oscillates very rapidly. 2-2 Type IIθ singularity: θ and θ̇ are finite, but θ̈ diverges. The energy density and the diagonal spatial components of the energy-momentum tensor are finite, but the off-diagonal component diverges. 2-3 Type IIIθ singularity: θ, ȧ is finite, but θ̇ and also θ̈ diverge. The energy density, pressure, and other spatial components of the energy-momentum tensor diverge. 2-4 Type IVθ singularity: θ, θ̇, and θ̈ are finite, but some higher derivatives of θ diverge. The energy density, pressure, and other spatial components of the energy-momentum tensor are finite. Their first derivatives are also finite, but the higher derivatives diverge. We note that θ(t) corresponds to the off-diagonal elements in the original spatial metric g_ij(t). Using a rotation matrix 𝒪, we can separate the additional divergence from divergences in the three scale factors. As before, we consider the Einstein equation as an illustration. Taking into account the θ-dependence, we find that the off-diagonal elements in the Einstein tensor (<ref>) R̃_12 = R̃_21 do not vanish, and Eq. 
(<ref>) leads to the following equations: κ^2 T_00 = ( H_a H_b + H_b H_c + H_cH_a) - (a^2 - b^2)^2 /4a^2b^2θ̇^2 , - κ^2/a^2T̃_11 = ( Ḣ_b + Ḣ_c) + ( H_b^2 + H_c^2 + H_b H_c) + (a^2 - b^2)(3a^2 + b^2)/4a^2b^2θ̇^2 , - κ^2T̃_12 = a^2 - b^2/2θ̈ + 1/2[ H_a( b^2 + 3a^2) - H_b( a^2 + 3b^2) + H_c( a^2 - b^2) ] θ̇ , - κ^2/b^2T̃_22 = ( Ḣ_c + Ḣ_a) + ( H_c^2 + H_a^2 + H_cH_a) + (b^2 - a^2)(a^2 + 3b^2)/4a^2b^2θ̇^2 , - κ^2/c^2T̃_33 = ( Ḣ_a + Ḣ_b) + ( H_a^2 + H_b^2 + H_a H_b) + ( a^2 - b^2)^2/4a^2b^2θ̇^2 . In addition to the corrections to Eq. (<ref>), we have an additional equation from the off-diagonal component of the Einstein tensor, which inevitably introduces the anisotropic stress T̃_12 = T̃_21. We can find that in Eq. (<ref>), the new types of singularities require anisotropy in the scale factors a(t)≠ b(t) as the necessary condition. They also indicate that the off-diagonal elements of the energy-momentum tensor, T_12=T_21 in our setup, must have a singularity. If θ behaves as θ∼θ_0 ( t_s - t )^β with constants θ_0 and β when t∼ t_s, the Type Iθ corresponds to β<0, the Type IIθ to 1<β<2, the Type IIIθ to 0<β<1, and the Type IVθ to the case that β is not an integer and β>2. §.§ Rips and Twists We further investigate the new class of finite-time singularities related to the rotation angle θ of the spatial metric. First, we consider what could happen when θ̇ diverges using the geodesic deviation equation as in Eq. (<ref>). Computing the Riemann tensor R^i_ ttj in the general homogeneous and anisotropic spacetime, we find R^i_ 00j ≡𝒪^T (R̃^i_ 00j) 𝒪 = 𝒪^T( [ A D_ab 0; D_ba B 0; 0 0 C ]) 𝒪 , and the geodesic deviation equation takes the following form: 𝒪( [ D^2 S^1/dτ^2; D^2 S^2/dτ^2; D^2 S^3/dτ^2 ]) = ( [ A D_ab 0; D_ba B 0; 0 0 C ]) 𝒪( [ S^1; S^2; S^3 ]) , where A = (Ḣ_a + H_a^2) - θ̇^2/4(a^2-b^2)(a^2+3b^2)/a^2b^2 D_ab = - θ̇/2[ H_a( b^2/a^2 + 3 ) - H_b( 1 + 3 b^2/a^2) ] - θ̈/2( 1 - b^2/a^2) D_ba = - θ̇/2[ H_a( 1 + 3 a^2/b^2) - H_b( a^2/b^2 + 3 ) ] - θ̈/2( a^2/b^2 - 1 ) B = (Ḣ_b + H_b^2) - θ̇^2/4(b^2-a^2)(b^2+3a^2)/a^2b^2 C = (Ḣ_c + H_c^2) . The off-diagonal components in Eq. (<ref>) generate new geodesic deviations proportional to another geodesic deviation perpendicular to the geodesic deviation. Especially in the case of Type IIθ, the diagonal elements are finite, although the off-diagonal elements diverge. Thus, spacetime could be ripped in analogy to the FLRW case. We also consider the analogy to the little rip and pseudo rip in the FLRW Universe. If θ̇, θ̈, or both diverge in the infinite future, the spacetime could be ripped finally. If θ̇, θ̈, or both become very large, even constant, any object whose binding energy is below the threshold could be ripped. Second, we consider what could happen when θ̇ becomes large, using the geodesic equation for the non-relativistic test particle, 0=d^2 x^μ/ds^2 + Γ^μ_ρσdx^ρ/dsdx^σ/ds . To investigate effects coming from θ̇, we consider the situation that the divergence from the rotation angle is dominant compared with that from the scale factors in the spatial metric; that is, we ignore derivatives of the scale factors and assume θ∼ 0 at t ∼ t_s as in Type IIIθ singularity. Using the Christoffel symbol in the anisotropic universe (see Appendix <ref>), we find that the spatial components of the geodesic equations lead to d^2 x^1/ds^2 ∼θ̇( b^2/a^2 - 1 ) dx^2/ds , d^2 x^2/ds^2 ∼θ̇( 1 - a^2/b^2) dx^1/ds , d^2 x^3/ds^2 ∼ 0 . Here, we have assumed the non-relativistic limit | dx^i/ds | ≪| dx^0/ds | ∼ 1. 
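To make the twisting effect of these equations concrete, the following short numerical sketch (illustrative only; the frozen scale factors a and b, the profile θ = θ_0 (t_s − t)^β, and the step size are assumed values, not results of this work) integrates the reduced geodesic equations in the non-relativistic limit, where the affine parameter can be traded for t:

import numpy as np

# Illustrative parameters (assumptions, not taken from the text):
a, b = 1.0, 1.2                    # frozen scale factors, a != b
theta0, beta, ts = 0.1, 0.5, 1.0   # theta ~ theta0 (ts - t)^beta, a Type IIIθ-like profile

def theta_dot(t):
    return -beta * theta0 * (ts - t)**(beta - 1.0)

# d^2 x / ds^2 = theta_dot * M . (dx/ds), with s ~ t in the non-relativistic limit
M = np.array([[0.0, b**2 / a**2 - 1.0],
              [1.0 - a**2 / b**2, 0.0]])

t, dt = 0.0, 1e-4
x = np.zeros(2)            # (x^1, x^2)
v = np.array([1e-3, 0.0])  # small initial velocity along x^1
traj = []
while t < ts - 10 * dt:
    acc = theta_dot(t) * M @ v
    v += acc * dt
    x += v * dt
    t += dt
    traj.append((t, x[0], x[1], np.linalg.norm(v)))

# Near t -> ts the transverse acceleration ~ theta_dot grows without bound,
# so the velocity direction rotates faster and faster: the "twist".
print(traj[-1])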
The terms including θ̇ generate forces perpendicular to the velocity dx^i/ds of the particle as in the magnetic force. These forces may be regarded as the Coriolis force. When θ̇ diverges, any object may be twisted off. In this sense, we may call the singularity where θ̇ diverges in the finite future as the Big Twist. If θ̇ goes to infinite in the infinite future, we may call this the little twist. When θ̇ goes to a very large constant in the infinite future, we may call this phenomenon the pseudo twist. § RECONSTRUCTION OF MODELS WITH ANISOTROPIC SINGULARITY In this section, we consider models that realize curvature singularity by applying a new systematic formulation, so-called reconstruction. The reconstruction is the inverse of the standard process where we solve the equations for given models. Inversely, we may find a model that realizes the geometry desired from the theoretical and observational viewpoints. The reconstruction for cosmology in the FLRW spacetime, Eqs. (<ref>) and (<ref>), has been actively studied for several kinds of modified gravity theories (see the review <cit.> and the references therein for the reconstruction, and for modified gravity theories general, see Refs. <cit.> for the review). Recently, the formulation of the reconstruction for the spherically symmetric spacetime has been investigated, using two-scalar fields <cit.> and in the scalar–Einstein–Gauss-Bonnet gravity <cit.>. However, ghosts appear in all the above models, indicating they are physically inconsistent. In the classical theory, the kinetic energy of the ghosts is unbounded below, and the system becomes unstable. In the quantum theory, the ghosts typically generate the negative norm states as in the Fadeev-Popov ghosts in the gauge theories <cit.>. The negative norm states generate negative probabilities, which conflicts with the Copenhagen interpretation of the quantum theory. The ghost can be, however, eliminated by using constraints given by the Lagrange multiplier fields <cit.>. We discuss a generalization of the two-scalar model to the model with four scalar fields <cit.>. This model can reconstruct a model that realizes any given geometry, even if it is time-dependent, not spherically symmetric, and anisotropic, as in Eq. (<ref>). §.§ Conventional fluid approach Before we introduce the reconstruction, we consider the effective matter contents that directly reflect the singularities in the Einstein tensor in the framework of Einstein's gravity, To investigate the new class of singularities generated by θ̇, we ignore derivatives of the scale factors and assume θ∼ 0 as done in the previous subsection. When θ∼ 0, we can drop the rotation matrix 𝒪 in Eq. (<ref>), and the effective energy-momentum tensor of the fluid given by the Einstein tensor is reduced to be T_00 ∼ - (a^2 - b^2)^2 /4κ^2a^2b^2θ̇^2 , T_0i = T_i0=0 , ( T_ij) ∼1/κ^2( [ - (a^2 - b^2)(3a^2 + b^2)/4b^2θ̇^2 - θ̈/2( a^2 - b^2 ) 0; - θ̈/2( a^2 - b^2 ) - (b^2 - a^2)(a^2 + 3b^2)/4a^2θ̇^2 0; 0 0 - ( a^2 - b^2)^2 c^2/4a^2b^2θ̇^2 ] ) Although spacetime anisotropy does not allow us to utilize the ordinary perfect fluid description, we can define the energy density and pressures ρ, P_1, P_2, P_3 as ρ = T_00∼ - (a^2 - b^2)^2 /4κ^2a^2b^2θ̇^2 , P_1 = T_11/a^2∼ - (a^2 - b^2)(3a^2 + b^2)/4κ^2a^2b^2θ̇^2 , P_2 = T_22/b^2∼ - (b^2 - a^2)(a^2 + 3b^2)/4κ^2a^2b^2θ̇^2 , P_3 = T_33/c^2∼ - ( a^2 - b^2)^2 /4κ^2a^2b^2θ̇^2 . Eq. (<ref>) shows that the energy density and three pressures diverge when θ̇ does. 
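A quick numerical reading of these expressions makes the divergence explicit; in the sketch below the frozen scale factors, κ², and the profile θ = θ_0 (t_s − t)^β are assumed values chosen only for illustration:

import numpy as np

# Illustrative, frozen scale factors and rotation profile (assumed values):
a, b, kappa2 = 1.0, 1.1, 1.0
theta0, beta, ts = 0.05, 0.5, 1.0

def theta_dot(t):
    return -beta * theta0 * (ts - t)**(beta - 1.0)

for t in (0.9 * ts, 0.99 * ts, 0.999 * ts):
    td2 = theta_dot(t)**2
    rho = -(a**2 - b**2)**2 * td2 / (4 * kappa2 * a**2 * b**2)
    P1 = -(a**2 - b**2) * (3 * a**2 + b**2) * td2 / (4 * kappa2 * a**2 * b**2)
    P2 = -(b**2 - a**2) * (a**2 + 3 * b**2) * td2 / (4 * kappa2 * a**2 * b**2)
    P3 = rho   # P_3 equals rho for this effective fluid
    print(f"t = {t:.3f}:  rho = {rho:.3e}, P1 = {P1:.3e}, P2 = {P2:.3e}, P3 = {P3:.3e}")
# All magnitudes blow up as t -> t_s, rho comes out negative, and P1, P2 carry opposite signs.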
We note that ρ always takes a negative value, which manifestly causes difficulty in introducing the conventional fluid approach at the classical level. P_3 is always negative, while P_1 and P_2 have opposite signs depending on the scale factor in each direction. Regarding anisotropic stress, if we assume θ∼θ_0 ( t_s - t )^β so that θ∼ 0 and θ̇ diverges at t∼ t_s, we find T_12 = T_21∝|ρ|^β-2/2(β - 1 ) . If we define the power as γ≡β-2/2(β - 1 ), the type of singularities can be determined by the power γ: the Type Iθ singularity corresponds to 1/2<γ<1; Type IIθ to γ<0; Type IIIθ to γ>1; and Type IVθ to 0<γ<1/2 except the points where β is an integer, that is, γ≠n-2/2 ( n - 1 ). If the universe includes the fluid with the off-diagonal element T_12=T_21∝|ρ|^γ, there could occur the finite future singularity of the Type Iθ – IVθ. Although the effective fluid cannot be a perfect fluid due to the anisotropy, we read off EOS in each direction. Using Eq. (<ref>), we find P_1 ∼( 1 + 2a^2 + b^2/a^2 - b^2) ρ , P_2 ∼( 1 - 2a^2 + b^2/a^2 - b^2) ρ , P_3 = ρ . Eq. (<ref>) shows the exotic EOS along x^1 and x^2 directions depending on the size of the anisotropy. For a > b, the effective EOS paramerter w>1 along x^1 direction and w<1 along x^2 direction. However, the effective fluid shows the stiff EOS w=1 along x^3 direction regardless of the anisotropy. Eq. (<ref>) suggests that the energy density, pressure, and anisotropic stress of the fluid become smaller if the anisotropy is smaller, a ∼ b. Here, we assume a tiny portion of the anisotropic fluid in the present universe, where the background spacetime is almost FLRW Universe, and we ignore the backreaction of the anisotropic fluid. For the FLRW metric in (<ref>), the Christoffel symbols are given by Γ^t_ij = α^2 H δ_ij , Γ^i_tj = Γ^i_jt = H δ^i_ j , and the other components vanish. If we impose the conservation law ∇^μ T_μν = 0 for the anisotropic fluid, 0 = ∇^μ T_μ 0 = ρ̇+ 3 H ρ + H ( P_1 + P_2 + P_3) = ρ̇+ 6 H ρ . Here we have used (<ref>) although the EOS could not be valid in the present universe. On the other hand, the conservation law ∇^μ T_μ i = 0 is trivial even for the anisotropic fluid in the present model. Eq. (<ref>) indicates the solution ρ∝α^-6, that is, the density decreases by the expansion. If the conservation law (<ref>) is valid even in the present universe, the fluid will not dominate in the future. The anisotropic fluid cannot describe the future singularity, although it might have been dominant in the early universe and generated the primordial anisotropy. In order for the future singularity to show up, when the energy density ρ is small, the EOS (<ref>) must be changed so that the density increases by the expansion of the universe. §.§ Four-scalar reconstruction We consider the following model including four scalar fields ϕ^a: S = S_gravity + S_ϕ + S_λ , S_ϕ ≡∫ d^4x √(-g)( 1/2∑_a,b = 0,1,2,3 A_ab( ϕ) g^μν∂_μϕ^a∂_νϕ^b - V( ϕ) ) , S_λ ≡∫ d^4x √(-g)∑_a=0,1,2,3λ^a( 1/g^aa( x = ϕ) g^μν( x) ∂_μϕ^a∂_νϕ^a - 1 ) . We use the Roman index (a, b,⋯ = 0,1,2,3) for the scalar fields, and as we will see later, it corresponds to the index in the internal space. S_gravity represents the action of the arbitrary gravity theory, and the kinetic coefficients A_ab(ϕ) and the potential V( ϕ) are functions of the scalar fields ϕ^a. In Eq. (<ref>), λ^a are Lagrange multiplier fields that lead to constraints, 0 = 1/g^aa( x = ϕ) g^μν( x ) ∂_μϕ^a∂_νϕ^a - 1 , which eliminates ghosts. 
By the variation of the action (<ref>) with respect to the metric g_μν, we obtain 𝒢_μν = 1/2 g_μν( 1/2∑_a, b=0,1,2,3 A_ab(ϕ) g^ξη∂_ξϕ^a∂_ηϕ^b - V( ϕ) ) - 1/2∑_a,b = 0,1,2,3 A_ab(ϕ) ∂_μϕ^a∂_νϕ^b + 1/2 g_μν∑_a=0,1,2,3λ^a( 1/g^aa( x = ϕ) g^μν( x ) ∂_μϕ^a∂_νϕ^a - 1 ) - ∑_a=0,1,2,3λ^a/g^aa( x = ϕ)∂_μϕ^a∂_νϕ^a = 1/2 g_μν( 1/2∑_a, b=0,1,2,3 A_ab(ϕ) g^ξη∂_ξϕ^a∂_ηϕ^b - V( ϕ) ) - 1/2∑_a,b=0,1,2,3 A_ab(ϕ) ∂_μϕ^a∂_νϕ^b - ∑_a=0,1,2,3λ^a/g^aa( x = ϕ)∂_μϕ^a∂_νϕ^a . Here, we used the constraint equations in Eq. (<ref>), and 𝒢_μν is defined by the variation of the action S_gravity of the gravity sector: 𝒢^μν≡1/√(-g)δ S_gravity/δ g_μν . If we employ the Einstein-Hilbert action S_gravity = 1/2κ^2∫ d^4 x √(-g) R , 𝒢_μν is, of course, given by the Einstein tensor, 𝒢_μν = - 1/2κ^2 G_μν . We can include the contribution of matter by replacing the 𝒢^μν by 𝒢^μν ≡1/√(-g)δ S_gravity/δ g_μν + 1/√(-g)δ S_matter/δ g_μν = 1/√(-g)δ S_gravity/δ g_μν + 1/2 T^μν . Note that the first term is written by the coordinates for a given spacetime metric. If we find the coordinate dependence of T^μν by solving the conservation law and field equation of the matter, the second term and thus the whole 𝒢^μν is written by the coordinates. In the case of Einstein's gravity, Eq. (<ref>) is rewritten as, 𝒢_μν = - 1/2κ^2 G_μν + 1/2 T_μν . By multiplying Eq. (<ref>) with g^μν, we find g^μν𝒢_μν = 1/2∑_a,b=0,1,2,3 A_ab(ϕ) g^ξη∂_ξϕ^a∂_ηϕ^b - 2 V( ϕ) - ∑_a=0,1,2,3λ^a , where we again used Eq. (<ref>). Moreover, substituting Eq. (<ref>) into Eq. (<ref>), we find ∑_a,b=0,1,2,3 A_ab(ϕ) ∂_μϕ^a∂_νϕ^b = - 2 𝒢_μν + g_μν{ V( ϕ) + ∑_a=0,1,2,3λ^a + g^ρσ𝒢_ρσ} - 2 ∑_a=0,1,2,3λ^a/g^aa( x = ϕ)∂_μϕ^a∂_νϕ^a . We now identify the four scalar fields as the spacetime coordinates ϕ^a=x^a, which is actually consistent with the constraints in Eq. (<ref>). And then, Eq. (<ref>) can be rewritten as A_μν(ϕ) = - 2 𝒢_μν + g_μν{ V( ϕ) + ∑_a=0,1,2,3λ^a + g^ρσ𝒢_ρσ} - 2 ∑_a=0,1,2,3λ^a/g^aa( x = ϕ)δ_μ^aδ_ν^a . Moreover, we consider the solution for λ^a =0. And then, an arbitrary geometry written by g_μν and arbitrary function V( ϕ = x ) can be realized by choosing A_μν(ϕ) as A_μν(ϕ) = - 2 𝒢_μν( x = ϕ) + g_μν( x = ϕ) { V( ϕ) + g^ρσ( x = ϕ) 𝒢_ρσ( x = ϕ) } . Because the potential V( ϕ) is arbitrary, we hereafter choose V( ϕ)=0. We remark several features of A_ab. S_ϕ can be regarded as a non-linear sigma model whose target-space metric is given by A_ab(ϕ) when V( ϕ)=0. A similar structure related to the four scalar fields and internal space can also be found in modified gravity theories <cit.>. If A_ab=0 for a given a and arbitrary b and the other non-vanishing components do not depend on ϕ^a for the given a, we may drop the scalar field ϕ^a. For instance, when we consider the spherical symmetry, there is no dependence on angular coordinates ϕ^2 = θ and ϕ^3 = φ. Thus, we can drop two of four scalar fields, and the two-scalar field works for the spherically symmetric spacetime <cit.>. Without S_λ in Eq. (<ref>), ghosts appear when any eigenvalue of A_ab(ϕ) becomes negative. We now check if the constraints in Eq. (<ref>) derived from S_λ can eliminate the ghosts. For this purpose, we consider the perturbation, ϕ^a = x^a + δϕ^a . For the perturbation δϕ^ξ, the constraints in Eq. (<ref>) give 0 = 2 g^aν∂_νδϕ^a - ∑_b δϕ^b ∂_b g^aa(x) . Here, we have not summed the equations with respect to a. For a space-like coordinate x^a, if we impose δϕ^a=0 when | x^a |→∞, and for a time-like coordinate x^a, if we impose δϕ^a=0 as an initial condition, we always find δϕ^a=0. 
Therefore, δϕ^a does not propagate, and thus the ghosts do not appear. In the case of Einstein's gravity, Eq. (<ref>) has the following form, A_μν(ϕ) = 1/κ^2 G_μν( x = ϕ) - 1/2κ^2 g_μν( x = ϕ) g^ρσ( x = ϕ) G_ρσ( x = ϕ) = 1/κ^2 R_μν( x = ϕ) . It is now clear that A_μν(ϕ) is given by the Ricci tensor R_μν where the coordinates are identified with the scalar fields x^μ = ϕ^μ. Moreover, including matter contents in terms of the energy-momentum tensor as in Eq. (<ref>), we find A_μν(ϕ) = 1/κ^2 R_μν( x = ϕ) - T_μν( x = ϕ) + 1/2g_μν( x = ϕ) T ( x = ϕ) , where T represents the trace of the energy-momentum tensor T ≡ g^μν T_μν . Eq. (<ref>) can be interpreted as A_μν(ϕ), which is comprised by the four scalar fields, complementing the Einstein equation for any metric g_μν and matter T_μν. Therefore, with an appropriate choice of A^μν(ϕ), the model described by Eq. (<ref>) allows us to reconstruct the gravitational theories that realize the desired geometry. We note that it is straightforward to extend this reconstruction method to the case in D dimensional spacetime with D scalar fields. §.§ Toy model 1: Increasing anisotropy We apply the above four-scalar-field model to reconstruct the models that encompass the curvature singularities. In the homogeneous and anisotropic spacetime described by Eq. (<ref>), we substitute Eqs. (<ref>) – (<ref>) and Eq. (<ref>) into Eq. (<ref>) A_00 = - 1/κ^2[ ( a^2 - b^2)^2/2a^2b^2θ̇^2 + ( ä/a + b̈/b + c̈/c) ]_t=ϕ^(0) + [ - T_00 - 1/2 T ]_t=ϕ^(0) , A_0i = A_i0= - T_i0 , ( A_ij) = 1/κ^2[ 𝒪^T( [ R̃_11 R̃_12 0; R̃_21 R̃_22 0; 0 0 R̃_33 ]) 𝒪]_t=ϕ^(0) + [ - ( T_ij) + 1/2( g_ij) T ]_t=ϕ^(0) . Here, we have denoted t=x^0. We have assumed that the time-dependence of matter, and thus T_μν is given by solving the conservation law and field equation of the matter. We note that A_μν only depends on ϕ^0, A_μν(ϕ^0 ), because the metric only depends on time coordinate t=x^0. Note that the energy-momentum tensor T_μν represents the ordinary matter contents. We can utilise the perfect fluid description for T_μν, where T_i0=0, and A_μν compensates the anisotropy. We reconstruct models realizing the new future singularities discussed in Section <ref>, demonstrating the four-scalar reconstruction for two different classes of future singularities by considering the two different situations: (i) The divergence from the scale factors is dominant compared with that from the rotation angle; (ii) The divergence from the rotation angle is dominant compared with that from the scale factors. First, we investigate the case (i) corresponding to Bianchi Type-I, where the scale factors may show Type I – IV singularities. Dropping θ and its derivatives in Eq. (<ref>), we find that the kinetic coefficient is reduced to A_00 = - 1/κ^2( ä/a + b̈/b + c̈/c)_t=ϕ^0 + [ - T_00 - 1/2 T ]_t=ϕ^0 , A_0i = A_i0= 0 , ( A_ij) = 1/κ^2( [ äa + ȧa (ḃ/b +ċ/c) 0 0; 0 b̈b + ḃb (ȧ/a +ċ/c) 0; 0 0 c̈c + ċc ( ȧ/a +ḃ/b) ])_t=ϕ^0 + [ - ( T_ij) + 1/2( g_ij) T ]_t=ϕ^0 . In the above expressions, we can choose three scale factors a(t), b(t), c(t) to reconstruct an arbitrary evolution of the background spacetime. For example, considering a model where the anisotropy vanishes at present t=t_0 and grows in the future: a(t) = α(t) [ 1 + ã(t - t_0)] , b(t) = α(t) [ 1 + b̃(t - t_0)] , c(t) = α(t) [ 1 + c̃(t - t_0)] . Here, α(t) stands for the scale factor in the flat FLRW universe as in Eq. (<ref>), and ã(t), b̃(t), c̃(t) are increasing functions with respect to ϕ^0, which satisfy ã=b̃=c̃=0 at t=t_0. 
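As an illustration of this ansatz (the functional forms of α, ã, b̃, c̃ below are hypothetical choices, not taken from the text, and the matter terms are dropped), one can tabulate the growing anisotropy and the vacuum part of the kinetic coefficient A_00 given above:

import numpy as np

# Hypothetical choices used only for illustration:
t0, kappa2 = 0.0, 1.0
alpha = lambda t: np.exp(0.1 * t)      # background FLRW scale factor
atil = lambda x: 0.02 * x**2           # \tilde a, vanishes at t = t0
btil = lambda x: 0.01 * x**2           # \tilde b
ctil = lambda x: 0.005 * x**2          # \tilde c

a = lambda t: alpha(t) * (1.0 + atil(t - t0))
b = lambda t: alpha(t) * (1.0 + btil(t - t0))
c = lambda t: alpha(t) * (1.0 + ctil(t - t0))

def ddot(f, t, h=1e-4):
    return (f(t + h) - 2.0 * f(t) + f(t - h)) / h**2

# Vacuum part of A_00 for case (i), with the matter terms omitted:
for t in (0.0, 2.0, 5.0):
    A00 = -(ddot(a, t) / a(t) + ddot(b, t) / b(t) + ddot(c, t) / c(t)) / kappa2
    aniso = (a(t) - b(t)) / (a(t) + b(t))
    print(f"t = {t}:  anisotropy (a-b)/(a+b) = {aniso:.4f},  A_00 (vacuum part) = {A00:.4f}")
# The anisotropy vanishes at t = t0 and grows with time, as the ansatz requires.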
A similar ansatz for the scale factors was discussed in Ref. <cit.>. Moreover, if we demand this model mimics the ΛCDM model in the current universe, we include the cosmological constant and dust in the energy-momentum tensor. Because we are interested in the future singularity, we can assume that the cosmological constant dominates, and then, T_μν is given by T_μν = - Λ/κ^2 g_μν . We note that the above energy-momentum tensor satisfies the conservation law in the anisotropic universe, and - T_00 - 1/2 T = Λ/κ^2 , - T_ij + 1/2 g_ij T = - Λ/κ^2 g_ij . Finally, the case (i) can be reconstructed by choosing the following A_μν: A_00 = - 1/κ^2( ä/a + b̈/b + c̈/c - Λ)_t=ϕ^0 , A_0i = A_i0= 0 , ( A_ij) = 1/κ^2( [ äa + ȧa (ḃ/b +ċ/c) - Λ a^2 0 0; 0 b̈b + ḃb (ȧ/a +ċ/c) - Λ b^2 0; 0 0 c̈c + ċc ( ȧ/a +ḃ/b) - Λ c^2 ])_t=ϕ^0 . §.§ Toy model 2: Rotation singularity Second, we consider the case (ii) where the rotation angle θ may show Type Iθ– IVθ singularities. Using the setup we used in subsection <ref>, we drop derivatives of the scale factors and assume θ∼θ_0 ( t_s - t )^β in Eq. (<ref>). The kinetic coefficient is given by A_00 = - 1/κ^2[ ( a^2 - b^2)^2/2a^2b^2θ̇^2 ]_t=ϕ^0 + [ - T_00 - 1/2 T ]_t=ϕ^0 , A_0i = A_i0= 0 , ( A_ij) = 1/κ^2[ ( [ b^4-a^4/2b^2θ̇^2 - θ̈/2( a^2 - b^2 ) 0; - θ̈/2( a^2 - b^2 ) a^4-b^4/2a^2θ̇^2 0; 0 0 0 ]) ]_t=ϕ^0 + [ - ( T_ij) + 1/2( g_ij) T ]_t=ϕ^0 . Regarding the matter energy-momentum tensor, we can again utilize Eq. (<ref>). Moreover, to mimic the ΛCDM model, we assume the small but nonzero anisotropy, which is necessary to realize the new types of singularities. This situation corresponds to ã, b̃, c̃≪ 1 in Eq. (<ref>). By the Taylor expansion with respect to ã, b̃, c̃, the case (ii) can be reconstructed by the following A_μν: A_00 = - 1/κ^2[ 2( ã - b̃)^2 θ̇^2 - Λ]_t=ϕ^0 , A_0i = A_i0= 0 , ( A_ij) = α^2(ϕ^0)/κ^2( [ 2(b̃-ã) θ̇^2 - Λ (1 + 2ã) - θ̈(ã - b̃) 0; - θ̈(ã - b̃) 2(ã-b̃) θ̇^2 - Λ (1 + 2b̃) 0; 0 0 - Λ (1 + 2c̃) ])_t=ϕ^0 . We note that the arbitrary divergence of θ(t) can be reconstructed other than θ∼θ_0 ( t_s - t )^β. Moreover, it is optional to include the energy-momentum tensor in this reconstruction method. When introducing the matter contents, one needs to carefully consider the conservation of the energy-momentum tensor or field equations of matters. In the above setup, the cosmological constant automatically satisfies the conservation law in our current toy models. § SUMMARY AND DISCUSSION In this work, we have investigated finite-time singularities in general homogeneous and anisotropic spacetime. We have observed two classes of singularities. The first class is associated with the singularities in the scale factors and is the generalization of the well-known finite-time singularities in the FLRW universe. The second one originates from the spatial anisotropy and rotational symmetry breaking, and the time-dependent rotation angle θ (t) of the spatial metric may show the new type of singularities. We have shown that finite anisotropy is the necessary condition for these new singularities, which also introduces the anisotropic stress and off-diagonal elements in the Ricci tensor. While the divergence of θ(t) shows violent oscillations in metric, the divergence of its derivatives can occur as θ vanishes in the future. Following the finite-time singularities in the FLRW universe, we have categorized the new type of singularities. We have also considered the physical meanings of divergences in θ(t) in terms of the geodesic equation and geodesic deviation equation. 
In addition to behaviors similar to known results in the FLRW universe, Big Rip, we have found a novel singularity named the Big Twist. This singularity can be generated by the derivative of θ(t). The Big Twist shows up in the geodesic equation and is driven by the force perpendicular to the velocity of the test particle, which is similar to the Coriolis force. Moreover, we have defined the little twist and pseudo twist based on the behavior of θ̇(t), which is also analogous to the rip-type singularities in the FLRW universe. We have finally demonstrated the toy models of finite-time singularities in the homogeneous and anisotropic universe. The conventional effective matter description in Einstein's gravity, where the Einstein tensor directly gives the effective energy-momentum tensor, predicts the exotic equation of state, and it does not work to study the future singularity. We have developed the novel reconstruction method, the four-scalar reconstruction, and applied it to our consideration. In the framework of Einstein's gravity, we have reconstructed two models encompassing the two classes of finite-time singularities. In both models, it is possible to mimic the ΛCDM model in the current universe, and we can realize the finite-time singularities arising from the scale factor or rotation angle in the spatial metric. Although we have relied on the reconstruction method in the present work, we can apply our analysis of the finite-time singularities in the homogenous and anisotropic universe to the modified gravity theories beyond Einstein's gravity. It would be intriguing to study if these singularities, especially newly discovered ones, can be realized in specific models of modified gravity theories. It would be a realistic extension of existing studies on Big Rips or other singularities in the FLRW universe in the modified gravity theory. T.K. is supported by the National Key R&D Program of China (2021YFA0718500) and by Grant-in-Aid of Hubei Province Natural Science Foundation (2022CFB817). This work was partially supported by the program Unidad de Excelencia Maria de Maeztu CEX2020-001058-M, Spain (S.D.O). § CALCULATION APPENDIX In this paper, we have defined the Levi-Civita connection, Riemann tensor, Ricci tensor, and Ricci scalar as follows: Γ^σ_μν = 1/2 g^σρ( ∂_μ g_ρν + ∂_ν g_ρμ- ∂_ρ g_μν) , R^λ_ μρν = Γ^λ_μν,ρ -Γ^λ_μρ,ν + Γ^η_μνΓ^λ_ρη - Γ^η_μρΓ^λ_νη , R_μν = R^ρ_ μρν , R = g^μν R_μν , §.§ Levi-Civita connection First, for the metric as in Eq. (<ref>), the Levi-Civita connection (<ref>) takes the following forms, Γ^0_00 = Γ^0_0i=Γ^0_i0 = Γ^i_00=Γ^i_jk = 0 , Γ^0_ij = 1/2ġ_ij , Γ^i_0j = Γ^i_j0=1/2g^ikġ_kj . Γ^0_ij and Γ^i_0j are written in terms of the rotation matrix 𝒪 as follows: ( Γ^t_ij) =1/2( - 𝒪^T𝒪̇𝒪^Tg̃𝒪 + 𝒪^Tġ̃̇𝒪 + 𝒪^Tg̃𝒪̇) = 𝒪^T( [ a ȧ 1/2θ̇( b^2 - a^2 ) 0; 1/2θ̇( b^2 -a^2 ) b ḃ 0; 0 0 cċ ]) 𝒪 , ( Γ^i_tj) = 1/2𝒪^T( g̃)^-1𝒪( - 𝒪^T𝒪̇𝒪^Tg̃𝒪 + 𝒪^Tġ̃̇𝒪 + 𝒪^Tg̃𝒪̇) = 𝒪^T( [ ȧ/a 1/2θ̇( b^2/a^2 - 1 ) 0; 1/2θ̇( 1 - a^2/b^2) ḃ/b 0; 0 0 ċ/c ]) 𝒪 . Here, we used 𝒪̇^T = - 𝒪^T𝒪̇𝒪^T 𝒪̇𝒪^T = - 𝒪^T𝒪̇ = ( [ 0 - θ̇ 0; θ̇ 0 0; 0 0 0 ]) . §.§ Ricci tensor and Ricci scalar Second, we compute the Ricci tensor and Ricci scalar: R_00 = - 1/2 g^ijg̈_ij + 1/2 g^ij g^klġ_ikġ_jl - 1/4 g^ij g^klġ_ikġ_jl = - 1/2 g^ijg̈_ij + 1/4 g^ij g^klġ_ikġ_jl , R_0i = R_i0 = 0 , R_ij = 1/2g̈_ij + 1/4ġ_ij g^klġ_kl - 1/2ġ_il g^lkġ_kj , R = g^ijg̈_ij + 1/4( g^ijġ_ij)^2 - 3/4 g^ij g^klġ_ikġ_jl . R_00, R_ij, and R in Eq. (<ref>) are given as R_00 = 1/4tr( - 2 𝒪̇𝒪^T𝒪̇𝒪^T + 2 𝒪̇𝒪^Tġ̃̇( g̃)^-1 + 2 𝒪̇𝒪^Tg̃𝒪̇𝒪^T( g̃)^-1. . 
- 2 ( g̃)^-1ġ̃̇𝒪̇𝒪^T + ( g̃)^-1ġ̃̇( g̃)^-1ġ̃̇ - 2 g̈̃̈( g̃)^-1) , R_ij = 1/4( - 𝒪^T𝒪̇𝒪^Tg̃𝒪 + 𝒪^Tġ̃̇𝒪 + 𝒪^Tg̃𝒪̇)_ijtr( ( g̃)^-1ġ̃̇) - 1/2( 𝒪^T𝒪̈𝒪^Tg̃𝒪 - 𝒪^Tg̃𝒪̈ - 𝒪^Tg̈̃̈𝒪 - 𝒪^T𝒪̇𝒪^T𝒪̇𝒪^Tg̃𝒪. + 𝒪^T𝒪̇𝒪^Tġ̃̇𝒪 + 𝒪^T𝒪̇𝒪^Tg̃𝒪̇ - 𝒪^Tġ̃̇( g̃)^-1𝒪̇𝒪^Tg̃𝒪 + 𝒪^Tġ̃̇( g̃)^-1ġ̃̇𝒪 - 𝒪^Tġ̃̇𝒪̇ - 𝒪^Tg̃𝒪̇𝒪^T( g̃)^-1𝒪̇𝒪^Tg̃𝒪 . + 𝒪^Tg̃𝒪̇𝒪^T( g̃)^-1ġ̃̇𝒪 + 𝒪^Tg̃𝒪̇𝒪^T𝒪̇)_ij , R = tr( 1/2𝒪̇𝒪^T𝒪̇𝒪^T - 1/2𝒪̇𝒪^Tġ̃̇( g̃)^-1 - 1/2𝒪̇𝒪^Tg̃𝒪̇𝒪^T( g̃)^-1 + g̈̃̈( g̃)^-1 + 2 ( g̃)^-1ġ̃̇𝒪̇𝒪^T - 3/4( g̃)^-1ġ̃̇( g̃)^-1ġ̃̇) + 1/4( tr( ( g̃)^-1ġ̃̇) )^2 , We should note that 𝒪̈𝒪^T - 𝒪̇𝒪^T𝒪̇𝒪^T = ( [ 0 - θ̈ 0; θ̈ 0 0; 0 0 0 ]) → 𝒪̈𝒪^T = ( [ - θ̇^2 - θ̈ 0; θ̈ - θ̇^2 0; 0 0 0 ]) . By using Eq. (<ref>), R_00 and R_ij of the Ricci tensor are written as follows: R_00 = θ̇^2 [ 1 - 1/2( b^2/a^2 + a^2/b^2) ] - ( ä/a + b̈/b + c̈/c) , ( R_ij) ≡𝒪^T ( R̃_ij ) 𝒪 = 𝒪^T( [ R̃_11 R̃_12 0; R̃_21 R̃_22 0; 0 0 R̃_33 ]) 𝒪 , where non-zero components in R̃_ij are defined as R̃_11 = äa + ȧa (ḃ/b +ċ/c) + b^4-a^4/2b^2θ̇^2 R̃_12 = R̃_21 = - θ̈/2( a^2 - b^2 ) - θ̇/2[ ȧ/a( b^2 + 3a^2) -ḃ/b( a^2 + 3b^2) +ċ/c( a^2 - b^2) ] R̃_22 = b̈b + ḃb (ȧ/a +ċ/c) + a^4-b^4/2a^2θ̇^2 R̃_33 = c̈c + ċc ( ȧ/a +ḃ/b) . And the Ricci scalar R is given as R = g^00 R_00 + 𝒪^T( g̃^ij R_ij) 𝒪 = g^00 R_00 + g̃^ijR̃_ij = ( a^2 - b^2)^2/2a^2b^2θ̇^2 + 2 ( ä/a + b̈/b + c̈/c) + 2 ( ȧḃ/ab + ḃċ/bc + ċȧ/ca) . §.§ Riemann tensor and geodesic deviation equation The spatial components of the geodesic deviation equation as in Eq. (<ref>) take the following form, D^2 S^k/dτ^2 = R^k_ 00j S^j . We compute the Riemann tensor, R^0_ i0j = 1/2g̈_ij + 1/4ġ_ij g^lkġ_kl - 1/2ġ_il g^lkġ_kj , and thus R^k_ 00j = - g^ki (g^00)^-1 R^0_ i0j = g^ki R^0_ i0j . R^k_ 00j is written in terms of the rotation matrix 𝒪 as follows: (R^i_ 00j) = 𝒪^T (R̃^i_ 00j) 𝒪 , and (R̃^i_ 00j) = ( [ ä/a - θ̇^2/4(a^2-b^2)(a^2+3b^2)/a^2b^2 - θ̇/2[ ȧ/a( b^2/a^2 + 3 ) - ḃ/b( 1 + 3 b^2/a^2) ] - θ̈/2( 1 - b^2/a^2) 0; - θ̇/2[ ȧ/a( 1 + 3 a^2/b^2) -ḃ/b( a^2/b^2 + 3 ) ] - θ̈/2( a^2/b^2 - 1 ) b̈/b - θ̇^2/4(b^2-a^2)(b^2+3a^2)/a^2b^2 0; 0 0 c̈/c ]) . §.§ Einstein tensor Finally, we compute the Einstein tensor defined as G_μν = R_μν - 1/2 g_μν R . Using the diagonalized metric and Ricci tensor, we can express the spatial components of the Einstein tensor as G_ij = 𝒪^T( R̃_ij - 1/2g̃_ij R )𝒪 . Thus, G_00, G_0i, and G_ij are written as follows: G_00 = R_00 + 1/2 R = θ̇^2 [ 1 - 1/2( b^2/a^2 + a^2/b^2) ] - 1/2θ̇^2 [ 1 - 1/2( b^2/a^2 + a^2/b^2) ] + ( ȧḃ/ab + ḃċ/bc + ċȧ/ca) = - (a^2 - b^2)^2 /4a^2b^2θ̇^2 + ( ȧḃ/ab + ḃċ/bc + ċȧ/ca) , G_0i = G_i0 = 0 , ( G_ij) = 𝒪^T{ ( [ R̃_11 R̃_12 0; R̃_21 R̃_22 0; 0 0 R̃_33 ]) - 1/2 R ( [ a^2 0 0; 0 b^2 0; 0 0 c^2 ]) }𝒪 = 𝒪^T( [ G̃_11 R̃_12 0; R̃_21 G̃_22 0; 0 0 G̃_33 ]) 𝒪 , where the diagonal components in G̃_ij are defined as G̃_11 = - (a^2 - b^2)(3a^2 + b^2)/4b^2θ̇^2 - a^2(b̈/b + c̈/c) - a^2ḃċ/bc , G̃_22 = - (b^2 - a^2)(a^2 + 3b^2)/4a^2θ̇^2 - b^2(ä/a + c̈/c) - b^2ċȧ/ca , G̃_33 = - ( a^2 - b^2)^2 c^2/4a^2b^2θ̇^2 - c^2(ä/a + b̈/b) - c^2ȧḃ/ab .
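The component expressions collected in this appendix can be cross-checked symbolically. The following SymPy sketch (not part of the original derivation; the rotation is taken about the x³ axis with a specific sign convention, which is immaterial for G_00 since only θ̇² enters) recomputes G_00 directly from the metric and compares it with the expression for G_00 given above:

import sympy as sp

t = sp.symbols('t')
a, b, c, th = (sp.Function(n)(t) for n in ('a', 'b', 'c', 'theta'))

# Spatial metric g_ij = O^T diag(a^2, b^2, c^2) O, rotation about the x^3 axis
O = sp.Matrix([[sp.cos(th), sp.sin(th), 0],
               [-sp.sin(th), sp.cos(th), 0],
               [0, 0, 1]])
g3 = O.T * sp.diag(a**2, b**2, c**2) * O

g = sp.zeros(4, 4)
g[0, 0] = -1
for i in range(3):
    for j in range(3):
        g[i + 1, j + 1] = g3[i, j]
ginv = g.inv()
x = [t, sp.symbols('x1'), sp.symbols('x2'), sp.symbols('x3')]

# Levi-Civita connection and Ricci tensor for a metric depending only on t
Gamma = [[[sp.simplify(sum(ginv[l, s] * (sp.diff(g[s, mu], x[nu]) + sp.diff(g[s, nu], x[mu])
                                         - sp.diff(g[mu, nu], x[s])) for s in range(4)) / 2)
           for nu in range(4)] for mu in range(4)] for l in range(4)]

def ricci(mu, nu):
    expr = 0
    for r in range(4):
        expr += sp.diff(Gamma[r][mu][nu], x[r]) - sp.diff(Gamma[r][mu][r], x[nu])
        for s in range(4):
            expr += Gamma[r][r][s] * Gamma[s][mu][nu] - Gamma[r][nu][s] * Gamma[s][mu][r]
    return sp.simplify(expr)

Ric = sp.Matrix(4, 4, lambda m, n: ricci(m, n))
Rscal = sp.simplify(sum(ginv[m, n] * Ric[m, n] for m in range(4) for n in range(4)))
G00 = sp.simplify(Ric[0, 0] - g[0, 0] * Rscal / 2)

expected_G00 = (-(a**2 - b**2)**2 * sp.diff(th, t)**2 / (4 * a**2 * b**2)
                + sp.diff(a, t) * sp.diff(b, t) / (a * b)
                + sp.diff(b, t) * sp.diff(c, t) / (b * c)
                + sp.diff(c, t) * sp.diff(a, t) / (c * a))
print(sp.simplify(G00 - expected_G00))   # should reduce to 0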
http://arxiv.org/abs/2406.19347v1
20240627172122
Thermal Dynamics of Heat Pipes with Sub-Critical Nanopores
[ "Sumith Yesudasan" ]
cond-mat.mes-hall
[ "cond-mat.mes-hall" ]
syesudasan@newhaven.edu Department of Mechanical and Industrial Engineering, University of New Haven, West Haven, CT, USA § ABSTRACT Sub-critical nanopores are known to inhibit continuous evaporation within heat pipes, posing a challenge in understanding the limitations of nanopore size on wicking action. This study addresses this by systematically investigating the wicking capabilities in nanoscale heat pipes using coarse-grained molecular dynamics simulations. The results reveal that temperature-induced flow in these heat pipes is primarily driven by surface interactions rather than traditional wicking action. Additionally, the water fill ratio and thermal gradient significantly influence the performance of these systems. These insights can substantially benefit heat transfer research, providing a foundation for improving thermal performance through nanoscale design innovations. Thermal Dynamics of Heat Pipes with Sub-Critical Nanopores Sumith Yesudasan July 1, 2024 ========================================================== syesudasan@newhaven.edu Department of Mechanical and Industrial Engineering, University of New Haven, West Haven, CT, USA § INTRODUCTION The demand for efficient thermal management solutions has surged with advancements in microelectronics and high-performance computing systems <cit.>. Heat pipes, known for their superior thermal conductivity and passive operation, are vital in thermal management applications. Traditional heat pipes rely on capillary action within wick structures, but recent innovations in nanotechnology have developed nanoporous structures with enhanced heat transfer capabilities <cit.>. These advancements enable applications in micro and nanoscale devices, such as thinner cell phones, sleeker batteries, and lighter laptops. Studying heat pipe dynamics and heat transfer traditionally involves coupled Navier-Stokes and heat transfer equations <cit.>. However, these continuum-level techniques are insufficient at the nanoscale <cit.>. Molecular dynamics simulations, particularly classical Newtonian-based approaches, offer an alternative but are computationally expensive due to long-range Coulombic forces and detailed hydrogen atom modeling in water molecules. This challenge can be mitigated using coarse-grained molecular dynamics (CGMD), which aggregates multiple water molecules into single spherical beads interacting through a force field that statistically reproduces water's thermodynamic properties <cit.>. Previous research on water evaporation using classical molecular dynamics was hindered by high computational costs. To address this, a study developed CGMD models based on the Morse potential to investigate evaporation from hydrophilic nanopores with diameters ranging from 2 to 5 nm <cit.>. Findings revealed continuous evaporation in nanopores between 3 and 4 nm. Building on this, our current study explores the thermal dynamics of heat pipes with sub-critical nanopores using CGMD. This approach effectively simulates complex interactions within nanoporous heat pipes. This study examines the thermal dynamics of heat pipes with sub-critical nanopores, characterized by diameters below the critical value for continuous evaporation <cit.>. The primary heat transport mechanism in such nanopores is driven by surface interactions rather than conventional wicking action. Using CGMD simulations <cit.>, this research investigates water molecule behavior within nanoporous heat pipes. 
The CGMD approach simplifies molecular interactions while preserving essential physical properties, allowing simulation of large systems over extended periods. Two heat pipe models with different nanopore diameters (2 nm and 3 nm) are analyzed under varying thermal and water fill conditions. The study examines the effects of temperature gradients and fill ratios on the thermal performance of heat pipes. By comparing models with different water content levels ("filled" and "medium fill"), the research elucidates the impact of fluid dynamics on heat transfer efficiency. The study reveals that heat transfer in heat pipes with sub-critical nanopores is predominantly influenced by surface interactions. Results highlight the significance of temperature gradients and fill ratios in optimizing heat pipe design for improved thermal performance. We hope our findings will advance nanostructure designs aimed at enhancing heat transfer through capillary flow, offering insights for developing more efficient thermal management systems in various applications. § SYSTEM AND SIMULATION SETUP To simulate the heat pipe at the nanoscale, two systems with varying levels of water inside are considered. The first system is a 3D porous heat pipe with 2 nm diameter holes drilled through it in all directions except at the corners, as shown in Figure <ref>. The length of the heat pipe is 78 nm, its thickness is 6 nm, and the gap inside is 12 nm. This gap is necessary for the vapor to travel to the cold side and condense. The second system considered for the study is a pipe with 3 nm diameter holes, a length of 83 nm, a thickness of 8 nm, and a gap of 11.2 nm. The dimensions of both systems are labeled and shown in Figure <ref>. For clarity, only the image of the 2 nm system (henceforth called the 2 nm model) is shown, with labels in blue for the 2 nm model and red for the 3 nm model. The widths of the models are 9 nm and 12 nm, respectively. The boundaries of the 2 nm model and the 3 nm model are reflective on the top, bottom, and sides, which contain the water molecules within the system boundaries. However, along the axis normal to the image, the boundary is periodic. The corners of the copper tube are covered with copper atoms with no holes to prevent water leakage into the empty space at the corners. The water molecules are shown in blue, and the copper atoms are shown in reddish-brown. The molecular details, especially the force field, will be discussed in the next section. Figure <ref> illustrates the three-dimensional (3D) models of the 2 nm and 3 nm systems in an isometric perspective. These models represent the spatial arrangement of copper atoms within the system. To clarify the annular connectivity between the holes in both models, cross-sectional views are provided in the right panels of Figure <ref>. The axes of the geometry are depicted at the center for reference. The arrows in the cross-sectional views serve to demonstrate the connectivity within the system, rather than indicating the direction of water flow. The actual direction of water flow will be analyzed and detailed in the Results and Discussion section. The molecular systems are virtually segmented into distinct zones to facilitate the application of temperature gradients to the water molecules. Figure <ref> shows seven zones, with Zone 1 being the hottest and Zone 7 the coldest. Zone 1 and Zone 7 each span 10 nm from the respective ends, while the intermediate zones are equidistantly spaced. 
This zonal temperature application methodology is designed to prevent any artifacts or inaccuracies in the thermal boundary conditions. In the scenario where the target temperature is 400K, Zone 1 is maintained at 400K, with a gradual decrease to 300K in Zone 7. Similarly, for the 350K case, Zone 1 is set at 350K, tapering down to 300K in Zone 7. Due to the dynamic nature of molecular systems, water molecules constantly migrate between these zones. Consequently, during the simulation, the temperature zones are updated at each step to reflect the precise number of water molecules in each zone. Although this process is computationally intensive, it is essential to ensure accurate and reliable simulation results. To study the effect of the amount of water inside the system and its influence on circulation, we considered two levels of water. The system with excess water, referred to as “filled" throughout the paper, contains 248,178 water molecules for the 2 nm model and 443,158 water molecules for the 3 nm model. The models with the bare minimum amount of water necessary to wet the copper pipe (called “medfill", resembling medium-fill) have 203,785 water molecules for the 2 nm model and 388,409 water molecules for the 3 nm model. The 2 nm model consists of 182,180 copper atoms, while the 3 nm model consists of 391,660 copper atoms. § MOLECULAR MODELING DETAILS In this study, the copper atoms are modeled using the Embedded Atom Model (EAM) <cit.>, with parameters sourced from the National Institute of Standards and Technology (NIST) database <cit.>. The copper atoms are represented within a Face-Centered Cubic (FCC) lattice structure, characterized by a lattice constant of 3.615 Å, ensuring a detailed and precise depiction of copper's atomic arrangement. The porous nano structures are created by removing material by drilling holes of 2 nm diameter and 3nm diameter along the vertical and horizontal directions. Additionally, the holes are interconnected by drilling annular holes closely resembling the shape of the heat pipe. The EAM potential for copper, used in our study, is defined by Equation (<ref>), with optimized parameters adopted from the widely used EAM potential for copper from literature <cit.>. E_Cu-Cu = F_α(∑_j≠ iρ_β(r_ij))+1/2∑_j≠ iϕ_αβ(r_ij) For the simulation of water molecules, instead of using the traditional computationally expensive models, a popular coarse grained model called mW model<cit.> is used. This model represents one water molecule by a coarse grained bead. The traditional models of water utilize long-range forces to replicate the hydrogen-bonded structure of water. The mW model diverges from this approach by introducing a short-range, angular-dependent term that promotes tetrahedrality. This model successfully reproduces the density, structure, and various phase transitions of water with high accuracy, at a significantly reduced computational cost compared to atomistic models. The mW model accurately reproduces water's density maximum, melting temperature, and enthalpy of vaporization, among other properties. The results show that the structural and thermodynamic behavior of water can be effectively modeled by focusing on the tetrahedral connectivity of the molecules, rather than the nature of the interactions. The equations governing this mW potential is based on the Stillinger-Weber model and is given in the below equations. 
E_mW-mW = ∑_i∑_j>iϕ_2(r_ij) + ∑_i∑_j ≠ i∑_k > jϕ_3(r_ij, r_ik, θ_ijk) ϕ_2(r_ij) = A ϵ[ B ( σ/r_ij)^p - ( σ/r_ij)^q ] exp( σ/r_ij - aσ) ϕ_3(r_ij, r_ik, θ_ijk) = λϵ( cosθ_ijk - cosθ_0 )^2 exp( σ/r_ij - aσ) exp( σ/r_ik - aσ) The mW potential is sum of a two body interaction potential (<ref>) and a three body interaction potential (<ref>). The subscripts i, j, k represents the three bodies (atoms/molecules) under consideration. The detailed explanation of this potential and relevance of each term is beyond the scope of this paper and readers are advised to refer the original work by Molinero <cit.>. The time step of integration is 10 fs, typical equilibration time is two million time steps and a production run is half a million time steps. The results are post processed using in house made C++ code <cit.> and MATLAB software <cit.>. The EAM potential of copper works well in the vicinity of 1 fs, which makes it unstable around 10 fs. Frequently, to circumvent this instability, we avoided the copper to copper interaction from time integration by freezing them and letting it interacting only with water molecules via Lennard-Jones potential. The motion of the copper atoms play less impact on coarse grained models and hence the freezing will not affect the dynamics of water much around it. The water to copper interaction is modeled using the Lennard-Jones potential and uses the below equation (<ref>). E_mW-Cu=4ϵ[(σ/r)^12-(σ/r)^6] The interaction parameters of this potential is to simulate a hydrophilic nature of the water on copper, which based on the paper by Huang and et. al. <cit.>. § RESULTS AND DISCUSSION The molecular models described in the previous sections are utilized to simulate the heat pipe. For convenience, the cases are named according to the hole diameter, followed by the amount of water and the temperature at the left end of the pipe. For example, “2nm-medfill-400K-Density” refers to the density plot for a model with medium water fill, 2 nm diameter nanopores, and a left end temperature of 400K. Initially, the heat pipe is modeled as a copper pipe, and water is introduced into it as a rectangular block. This water is absorbed by the nanopores, after which a new batch of water is introduced. The system is equilibrated for 500,000 time steps (5 ns), and this process is repeated until the pipe is completely wet. For the “filled” cases, the process continues until excess water accumulates in the inner region of the heat pipe. Once equilibrated, the systems undergo production runs for at least 5 ns. During this period, a 2D grid-based approach is employed to map the statistical quantities to continuum-level properties. Figure <ref> illustrates a sample representation of such a grid with grid points (x_g, z_g) and particles at positions (x_p, z_p). The properties of interest, computed at the particle positions, are interpolated onto the grid points using Nearest Grid Point (NGP) interpolation. §.§ Nearest Grid Point (NGP) Interpolation In NGP interpolation, each particle's property is assigned to the nearest grid point. This method is computationally efficient, though it may introduce aliasing errors. The grid value at a point (x_g, z_g) is updated by accumulating the properties of particles nearest to this grid point. Let f(t, x_p, z_p) represent the property at particle position (x_p, z_p) at time t. 
The grid value f(t, x_g, z_g) at time t is given by: f(t, x_g, z_g) = ∑_(x_p, z_p) ∈NGP(x_g, z_g) f(t, x_p, z_p) where NGP(x_g, z_g) denotes the set of particles nearest to the grid point (x_g, z_g). A grid spacing of 1 nm is used for all our cases unless mentioned explicitly. §.§ Temporal Averaging To reduce fluctuations and obtain smoother properties over time, temporal averaging is performed. Suppose we are averaging over n time steps, with the property values at time steps t_1, t_2, …, t_n. The temporally averaged property f̅(x_g, z_g) at the grid point (x_g, z_g) is: f̅(x_g, z_g) = 1/n∑_k=1^n f(t_k, x_g, z_g) The following equations describe the interpolation of specific properties using NGP and temporal averaging. §.§.§ Velocity Components For the velocity components v_x and v_z: v_x(t, x_g, z_g) = ∑_(x_p, z_p) ∈NGP(x_g, z_g) v_x(t, x_p, z_p) v_z(t, x_g, z_g) = ∑_(x_p, z_p) ∈NGP(x_g, z_g) v_z(t, x_p, z_p) Temporal averaging: v̅_x(x_g, z_g) = 1/n∑_k=1^n v_x(t_k, x_g, z_g) v̅_z(x_g, z_g) = 1/n∑_k=1^n v_z(t_k, x_g, z_g) §.§.§ Density For the density ρ: ρ(t, x_g, z_g) = ∑_(x_p, z_p) ∈NGP(x_g, z_g)ρ(t, x_p, z_p) Temporal averaging: ρ̅(x_g, z_g) = 1/n∑_k=1^nρ(t_k, x_g, z_g) §.§.§ Temperature For the temperature T: T(t, x_g, z_g) = ∑_(x_p, z_p) ∈NGP(x_g, z_g) T(t, x_p, z_p) Temporal averaging: T̅(x_g, z_g) = 1/n∑_k=1^n T(t_k, x_g, z_g) The properties of velocity, density, and temperature at grid points (x_g, z_g) are estimated using the Nearest Grid Point (NGP) interpolation method. This involves assigning each particle's property to the nearest grid point and performing temporal averaging over multiple time steps to smooth out fluctuations. The resulting plots, which illustrate these interpolations, are discussed in the following sections, providing a detailed explanation and analysis. The mapped density plots for all cases are shown in Figure <ref>. The regions in red indicate high-density (liquid) water, while those in blue represent water vapor or vacuum. Areas with a mix of vapor and liquid follow the contour map on the right side of each panel. The “filled” 2 nm models exhibit a thick liquid connection suspended between the upper and lower surfaces of the heat pipe. Additionally, a thin layer of liquid water can be observed closer to the inner surface of the heat pipe in both cases, which is unsurprising due to the strong copper-to-water attraction. The “medfill” cases of the 2 nm model at both 350K and 400K do not display any unexpected features. However, for the “medfill” cases of the 3 nm model at both 400K and 350K, there are less dense regions at either end of the heat pipe. This occurs because the water molecules did not completely fill these regions during the initial equilibration stage, rather than due to evaporation-induced voids. Interestingly, the “filled” 3 nm model cases show complete wetting within the heat pipe and a relatively thicker layer near the inner surface. This layer contributes to the movement of water from the hot region to the cold region and is the primary contributor to heat transfer. This analysis will be further discussed later in this section, along with the mass flow rate and heat transfer rate estimation. §.§ Vorticity The vorticity ω in a 2D flow is defined as the curl of the velocity field. 
For a velocity field with components v_x and v_z, the vorticity ω_y (since the system is periodic in y-axis, it's a 2D flow, and the vorticity will be perpendicular to the x-z plane) is given by <cit.>: ω_y = ∂ v_z/∂ x_g - ∂ v_x/∂ z_g In a discrete grid like the one we described earlier, the partial derivatives can be approximated using finite difference methods. For example, using central differences, the derivatives can be approximated as: ∂ v_z/∂ x_g≈v_z(x_g + Δ x_g, z_g) - v_z(x_g - Δ x_g, z_g)/2Δ x_g ∂ v_x/∂ z_g≈v_x(x_g, z_g + Δ z_g) - v_x(x_g, z_g - Δ z_g)/2Δ z_g Thus, the discrete form of the vorticity can be written as: ω_y(i, j) ≈v_z(i+1, j) - v_z(i-1, j)/2Δ x_g - v_x(i, j+1) - v_x(i, j-1)/2Δ z_g Where i and j are the indices of the grid points in the x_g and z_g directions, respectively. The vorticity plots are shown in Figure <ref>. The regions in red indicate areas of positive vorticity, which signifies that the fluid in these regions is rotating counterclockwise. Conversely, the regions in blue indicate areas of negative vorticity, where the fluid is rotating clockwise. For the 2 nm ”medfill” cases (Figure <ref>. a, e, and the high temperature filled case Figure <ref>. g), the rotations are more localized inside the nanopores, particularly in the upper region. In the 2 nm filled high temperature case (Figure <ref>. c), rotational motion is observed not only within the nanopore cavities but also along the interior surface of the heat pipe. For all cases of the 3 nm model, rotation is predominantly seen along the inner surface of the heat pipe. This observation suggests that the flow of water occurs primarily at the surface of the heat pipe rather than through the nanopore wicks. The vorticity results for the 2 nm model do not indicate significant flow within the pores. Instead, these results imply that fluid energy is absorbed and transferred, facilitating local rotation. This claim is further corroborated by the velocity profile findings discussed next. In summary, the vorticity distribution reveals distinct rotational behavior dependent on nanopore size and temperature conditions. The 2 nm model exhibits localized vorticity within the nanopores, particularly under higher temperature conditions, while the 3 nm model shows a surface-dominated rotational flow. The velocity plots for all the cases studied in this project are shown in Figure <ref>. The 2 nm medfill cases (Figure <ref> a and e) exhibit a continuous flow along the inner surface of the heat pipe, with stronger flows directed from the upper voids towards the lower regions. However, there is a lack of evidence suggesting a consistent pattern of heat transfer in these two cases. For the 2 nm filled cases, the density plot indicates the formation of a thick water bridge between the upper and lower surfaces of the heat pipe. This bridge is sustained throughout the simulation and can be observed in the accompanying movie included in the supporting information. Figure <ref> c) shows a strong recirculation of water along the inner surface in a clockwise direction. This recirculation is more pronounced at higher temperatures, particularly in the 400K case, and weaker in the 350K case (Figure <ref> g). The 3 nm cases reveal that no significant lateral flows occur within the nanopores. Instead, the flow is predominantly along the inner surface of the heat pipe. Notably, the left half of the lower surface experiences a leftward flow, while the lower right half experiences a rightward flow. 
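For reference, the grid mapping and the central-difference vorticity estimate behind these plots can be written compactly. The sketch below is a Python/NumPy rendering (the actual post-processing was done with in-house C++ code and MATLAB); the box dimensions and particle arrays are stand-ins, and the accumulation follows the sums written in the equations above (dividing by per-cell particle counts would give per-cell means instead):

import numpy as np

# Hypothetical inputs: per-step particle positions/velocities from a production run.
# pos_steps[k] has shape (N, 2) -> (x, z) in nm;  vel_steps[k] has shape (N, 2) -> (v_x, v_z)
def grid_map_ngp(pos_steps, vel_steps, box_x, box_z, h=1.0):
    """NGP accumulation of v_x, v_z onto an (x, z) grid of spacing h (nm),
    followed by temporal averaging over the supplied steps."""
    nx, nz = int(np.ceil(box_x / h)) + 1, int(np.ceil(box_z / h)) + 1
    vx_acc = np.zeros((nx, nz))
    vz_acc = np.zeros((nx, nz))
    for pos, vel in zip(pos_steps, vel_steps):
        ix = np.clip(np.rint(pos[:, 0] / h).astype(int), 0, nx - 1)
        iz = np.clip(np.rint(pos[:, 1] / h).astype(int), 0, nz - 1)
        np.add.at(vx_acc, (ix, iz), vel[:, 0])   # sum over nearest-grid-point particles
        np.add.at(vz_acc, (ix, iz), vel[:, 1])
    n = len(pos_steps)
    return vx_acc / n, vz_acc / n                # temporal average

def vorticity_y(vx, vz, h=1.0):
    """Central-difference estimate of w_y = d v_z / d x - d v_x / d z on the grid."""
    wy = np.zeros_like(vx)
    wy[1:-1, 1:-1] = ((vz[2:, 1:-1] - vz[:-2, 1:-1]) / (2 * h)
                      - (vx[1:-1, 2:] - vx[1:-1, :-2]) / (2 * h))
    return wy

# Example with random stand-in data (real inputs would come from the CGMD trajectory):
rng = np.random.default_rng(0)
pos_steps = [rng.uniform(0, [78.0, 24.0], size=(5000, 2)) for _ in range(10)]
vel_steps = [rng.normal(0, 0.1, size=(5000, 2)) for _ in range(10)]
vx_bar, vz_bar = grid_map_ngp(pos_steps, vel_steps, box_x=78.0, box_z=24.0)
wy = vorticity_y(vx_bar, vz_bar)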
The high-temperature case of the filled 3 nm model suggests increased evaporation at the hot section of the heat pipe. A similar effect is observed in the 350K case for the filled model, indicating enhanced evaporative cooling under elevated temperatures. These observations provide insights into the fluid dynamics and heat transfer mechanisms within the nanoporous structures of the heat pipe, highlighting the influence of pore size and temperature on the overall performance. The sustained water bridge in the 2 nm cases suggests a stable liquid-vapor interface, which may contribute to localized heat transfer, while the 3 nm cases demonstrate surface-dominated flow patterns, emphasizing the importance of the inner surface in heat transport processes. The velocity plot illustrates the arrow lengths corresponding to the magnitude, given by |v|=√(v_x^2+v_z^2). This often poses challenges in visualizing the flow pattern within a region, particularly in areas with significantly lower velocities, such as the interior of the heat pipe wall compared to its surface. This visualization issue is evident in the plots shown in Figure <ref>, where the lower velocities inside the heat pipe wall make it difficult to discern the flow patterns. To address this issue, we can use normalized velocity vectors, where a constant length is applied to all velocity values. This normalization helps in better visualizing the flow patterns by ensuring that even regions with low velocities are adequately represented. The normalized velocity plot, which employs a constant length for all velocity vectors, is shown in Figure <ref>. This approach allows for a clearer representation of the flow dynamics throughout the heat pipe, providing a better understanding of the fluid behavior across different regions. The normalized plots help in identifying flow patterns that might be overlooked in the standard velocity plots, thus offering a more accurate depiction of the fluid motion within the heat pipe. §.§ Mass Flow Rate The mass flow rate ṁ (kg/s) of the heat pipe is estimated by averaging the quantities over the entire 2D grid region, both temporally and spatially. This comprehensive approach ensures that the calculated mass flow rate accurately reflects the overall behavior of the heat pipe system under various conditions. The heat transfer rate q (W) can be estimated using the equation below: q = ṁ c_p Δ T Where c_p is the specific heat of water (4187 J/kg · K) and Δ T is the temperature difference between the left and right ends of the heat pipe in Kelvin. This relationship underscores the direct dependence of the heat transfer rate on the mass flow rate, the specific heat capacity of the working fluid, and the temperature gradient across the heat pipe. In our analysis, the averaged quantities of mass flow rate and temperature difference are plotted in Figure <ref>. The results indicate a clear trend where the cases at 400K exhibit a superior heat transfer performance compared to those at 350K. Specifically, the higher temperature gradient in the 400K cases enhances the driving force for heat transfer, thereby increasing the overall efficiency of the heat pipe. Additionally, the data reveal that the "filled" cases generally outperform the medium-filled cases in terms of heat transfer rate. This observation suggests that a higher fill ratio of the working fluid within the heat pipe improves the thermal performance, likely due to more effective phase change and fluid movement mechanisms. 
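For completeness, the vector normalization used for the plots and the rate estimate q = ṁ c_p ΔT reduce to a few lines; the mass flow rate below is a hypothetical number chosen only to fall within the reported 6.5 W to 9 W range, not a value measured from the simulations:

import numpy as np

def normalize_vectors(vx, vz, eps=1e-12):
    """Unit-length velocity vectors for quiver-style plots of the flow pattern."""
    mag = np.sqrt(vx**2 + vz**2)
    return vx / (mag + eps), vz / (mag + eps)

def heat_rate(m_dot, dT, c_p=4187.0):
    """q = m_dot * c_p * dT, with c_p of water in J/(kg K)."""
    return m_dot * c_p * dT

# Example with an assumed mass flow rate:
m_dot = 1.9e-5                    # kg/s, hypothetical
print(heat_rate(m_dot, 100.0))    # ~7.96 W for dT = 100 K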
Interestingly, the case with a 2nm fill ratio surpassed the 3nm model in terms of heat transfer rate performance. This counter intuitive result may be attributed to an optimal balance between capillary action and fluid flow resistance at the 2nm fill ratio, which maximizes the heat transfer efficiency. For filled high-temperature cases, we observed heat transfer rates ranging from 6.5 W to 9 W, which are comparable to the performance of heat pipes used in laptops and computers. Typically, laptop heat pipes can handle power levels around 25 W to 52 W depending on their design and application parameters <cit.>. Our findings underscore the importance of optimizing both the operating temperature and fill ratio in the design and application of heat pipes for efficient thermal management. These analysis and comparisons of these variables provide insights into the thermal dynamics of heat pipes, guiding future improvements in their performance and application. §.§ Limitations and Scope for Improvements §.§.§ Selection of Water Molecules Despite performing a comprehensive study on the system, we believe there are several factors that can be improved. The current study selected two levels of water with "filled" indicating water content visibly excess inside the heat pipe and "medfill" case with the bare minimum water for filling the nanopores. This selection is done through visual observation and can be improved systematically. The effect of water level on the heat transfer rate can be studied through a series of simulations of systems with varying levels of water, which will be computationally exhaustive. In fact, in the literature, there are no articles or technical documents explaining the accurate amount of water required based on a theoretical basis, which makes this problem a challenging one. To address this challenge, future studies could adopt a systematic approach by incrementally varying the amount of water and performing detailed simulations for each level. This would provide a more accurate understanding of how the water content influences the heat transfer dynamics. Although this approach would require significant computational resources, it would offer valuable insights into optimizing the water level for enhanced thermal performance. §.§.§ Thermostatting Challenges and Alternatives The proper way of performing a heat pipe simulation using molecular dynamics is to thermostat the copper atoms and then leave the water molecules for NVE (constant number of particles, volume, and energy) ensemble integration. Though in theory, this approach looks accurate, practically it creates an unstable system even within the lower ends of acceptable time steps of integration for coarse-grained molecular dynamics. An improvement in this context could be dividing the copper into tiny sections (probably 100+) and then thermostating the water within each section. This would require the remaining water within the inside of the heat pipe to be NVE integrated. Although this approach is feasible, it can significantly slow down the computational process due to the need for frequent region updates. This is an area that could be explored further to develop more stable and efficient simulation techniques. Advanced thermostatting methods or hybrid approaches combining different ensembles could potentially mitigate these stability issues while maintaining computational efficiency. 
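A schematic of the per-step zone bookkeeping behind such a sectioned thermostat might look as follows (uniform sections and a linear target profile are assumed here for brevity, unlike the 10 nm end zones of the actual setup; the thermostat coupling itself would be handled by the MD engine):

import numpy as np

def zone_targets(n_zones, T_hot=400.0, T_cold=300.0):
    """Linear target-temperature profile from the hot end (zone 0) to the cold end."""
    return np.linspace(T_hot, T_cold, n_zones)

def assign_zones(x, box_x, n_zones):
    """Zone index of each water bead from its x coordinate; recomputed every step
    because beads migrate between zones during the run."""
    idx = np.floor(x / box_x * n_zones).astype(int)
    return np.clip(idx, 0, n_zones - 1)

# Example: 100 sections along an 83 nm pipe (finer than the 7 zones used above)
box_x, n_zones = 83.0, 100
targets = zone_targets(n_zones)
x = np.random.default_rng(1).uniform(0.0, box_x, size=200000)  # stand-in bead positions
zones = assign_zones(x, box_x, n_zones)
# Each zone's beads would then be coupled to a thermostat at targets[zone].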
§.§.§ Ideal Length of Heat Pipe There is no universally ideal length for a heat pipe; instead, it is often derived from the design and physical needs of the system under consideration. While the length can influence the mass flow rate and heat transfer rate, it is less likely to change the mode of heat transfer. Most dynamics settle down within the middle section of the heat pipe, suggesting that our model length is sufficient for this study. However, future sensitivity studies could be performed to understand the effect of heat pipe length on various thermodynamic parameters. By varying the length and observing the resulting changes in performance, researchers can derive more precise guidelines for optimizing heat pipe dimensions in practical applications. §.§.§ Separation of Velocity Effects in Temperature and Pressure Estimation One major observation while simulating the system with sub-critical nanopores is the bulk movement of water within the heat pipe. This bulk velocity can affect the calculation of temperature and pressure, as shown in Equations <ref> and <ref>. T = 1/3 N k_B∑_i=1^N m_i v_i^2 P = 1/V( N k_B T + 1/3∑_i=1^N∑_j>i^N𝐫ij·𝐟ij) We attempted to correct for this by removing the center of mass velocity from the system and also by removing the bulk velocity for each grid point. However, this strategy did not significantly improve the results and its limitations can be observed in the temperature plots provided in the supplementary document. Future work could explore more sophisticated techniques for separating the bulk velocity effects from the intrinsic thermal motions. For instance, advanced filtering methods or improved algorithms for velocity decomposition might provide better accuracy in estimating the true thermodynamic properties of the system. § CONCLUSION This study provides a comprehensive investigation into the thermal dynamics of heat pipes with sub-critical nanopores using coarse grain molecular dynamics simulations. By modeling water molecules and copper structures at the nanoscale, we have elucidated several key factors influencing the heat transfer efficiency in such systems. The heat transfer rate is significantly higher in cases with a larger temperature difference, such as 400K compared to 350K. This is attributed to the enhanced driving force for thermal energy transport at larger temperature gradients. Additionally, filled cases, where the heat pipes contain a higher amount of water, demonstrate superior thermal performance compared to medium-filled cases due to more effective phase change processes and fluid dynamics within the heat pipe. Interestingly, the 2nm filled cases exhibit better heat transfer performance than the 3nm filled cases, suggesting an optimal balance between capillary action and fluid flow resistance at the 2nm fill ratio, which maximizes the heat transfer efficiency. The vorticity and velocity analyses reveal distinct rotational behaviors and flow dynamics. The 2nm models show localized vorticity within the nanopores, especially at higher temperatures, while the 3nm models exhibit surface-dominated rotational flow, indicating that water flow primarily occurs along the inner surface of the heat pipe. The study indicates that surface-driven flows rather than wicking action dominate the heat transfer in heat pipes with sub-critical nano pores. This understanding can inform the design of nanostructures aimed at enhancing heat transfer via capillary flow. 
These findings underscore the importance of optimizing both the operating temperature and fill ratio in the design and application of heat pipes for efficient thermal management. The results provide valuable insights into the thermal dynamics of heat pipes, guiding future improvements in their performance and application. Specifically, the optimal design would involve fine-tuning the nanopore size and fill ratio to balance capillary action and fluid flow resistance, thereby achieving maximum heat transfer efficiency. In conclusion, we believe that this study advances our understanding of the thermal behavior of nanoporous heat pipes and offers practical guidelines for their design and optimization. The insights gained here will contribute to the development of more efficient thermal management solutions in various applications, particularly in electronics cooling and energy systems.
http://arxiv.org/abs/2406.18770v1
20240626214250
ADO-LLM: Analog Design Bayesian Optimization with In-Context Learning of Large Language Models
[ "Yuxuan Yin", "Yu Wang", "Boxun Xu", "Peng Li" ]
cs.LG
[ "cs.LG" ]
GPT4AIGChip: Towards Next-Generation AI Accelerator Design Automation via Large Language Models Yonggan Fu^, Yongan Zhang^, Zhongzhi Yu^, Sixu Li, Zhifan Ye, Chaojian Li, Cheng Wan, Yingyan (Celine) Lin School of Computer Science, Georgia Institute of Technology {yfu314, yzhang919, zyu401, sli941, zye327. cli851, cwan39, celine.lin}@gatech.edu July 1, 2024 ================================================================================================================================================================================================================================================================= Equal contribution.
http://arxiv.org/abs/2406.17888v1
20240625185248
CTBench: A Comprehensive Benchmark for Evaluating Language Model Capabilities in Clinical Trial Design
[ "Nafis Neehal", "Bowen Wang", "Shayom Debopadhaya", "Soham Dan", "Keerthiram Murugesan", "Vibha Anand", "Kristin P. Bennett" ]
cs.CL
[ "cs.CL", "cs.AI", "cs.LG" ]
§ ABSTRACT We introduce CTBench, a benchmark to assess language models (LMs) in aiding clinical study design. Given metadata specific to a study, CTBench examines how well AI models can determine the baseline features of the clinical trial (CT), which include demographic and relevant features collected at the start of the trial from all participants. The baseline features, typically presented in CT publications (often as Table 1), are crucial for characterizing study cohorts and validating results. Baseline features, including confounders and covariates, are also required for accurate treatment effect estimation in studies involving observational data. CTBench consists of two datasets: "CT-Repo", containing baseline features from 1,690 clinical trials sourced from <clinicaltrials.gov>, and "CT-Pub", a subset of 100 trials with more comprehensive baseline features gathered from relevant publications. We develop two LM-based methods for evaluating the actual baseline feature lists against LM-generated responses: "ListMatch-LM" and "ListMatch-BERT" use GPT-4o and BERT scores (at various thresholds), respectively, to perform the evaluation. To establish baseline results, we apply advanced prompt engineering techniques using LLaMa3-70B-Instruct and GPT-4o in zero-shot and three-shot learning settings to generate potential baseline features. We validate the performance of GPT-4o as an evaluator through human-in-the-loop evaluations on the CT-Pub dataset, where clinical experts confirm matches between actual and LM-generated features. Our results highlight a promising direction with significant potential for improvement, positioning CTBench as a useful tool for advancing research on AI in CT design and potentially enhancing the efficacy and robustness of CTs. § INTRODUCTION Medical research can be broadly categorized into clinical trials (CTs) and observational studies, among other types. CTs aim to test one or more interventions for the improvement of health outcomes, where human subjects are recruited and assigned prospectively to the interventions or respective placebo controls. In contrast, in observational studies the causal effects on health outcomes are observed by the investigators without controlling the independent variables. The randomized CT remains the "gold standard" for evaluating the safety and efficacy of an intervention. At the same time, observational studies allow for much less expensive and larger-scale investigations using existing or prospective data <cit.>. In either case, it is crucial to ensure balance between the study groups at baseline, and that no systematic difference between study groups interferes with the causal relationship between the variables of interest and study outcomes <cit.>. Baseline characteristics, typically found in "Table 1" in CT publications, describe the demographic and relevant features collected at the beginning of the study for all participants across study groups. Depending on the study outcomes, the baseline characteristics may include sociodemographics, anthropometrics, confounding medical conditions, etc.
For observational studies, the baseline features can help design the study by matching the cohort on confounders and covariates. The presentation of baseline characteristics shows the reader how representative the study population is and how applicable the results would be. It validates the study design, increases the statistical efficiency, and helps the investigators draw logical conclusions <cit.>. Currently, general guidelines and considerations for the selection of baseline features exist <cit.>. However, most of the relevant features are study-specific and require the investigators' judgment. This may lead to relevant confounders or covariates being overlooked. Alternatively, for observational studies in particular, the improper selection of confounders/covariates from baseline features may lead to over-adjustment bias <cit.>. In addition, the reporting of baseline feature variables is not standardized or consistent across studies, even for similar interventions or health outcomes. To tackle this issue in clinical research, we introduce CTBench, a benchmark to assess the role of language models (LMs) in aiding clinical study design. CTBench requires these models to predict the baseline characteristic variables of a clinical study based on the CT metadata. This study is the first to use LMs to solve the challenging task of designing the baseline features for both CTs and observational studies. To achieve this, we create the benchmark from the centralized CT repository along with human annotation. We create two expansive datasets: [1)] * "CT-Repo", which includes the metadata and baseline features from 1,690 CTs collected from the <clinicaltrials.gov> API, and * "CT-Pub", which contains a subset of 100 trials where the baseline features are retrieved from the related clinical publications via human curation. The main contributions of this work include: [1)] * we propose a benchmark (CTBench) to use LMs to develop AI support tools for CT design, assist researchers in selecting baseline features, and design more efficient and robust clinical studies; * we create two CT metadata datasets with associated baseline features derived from a definitive repository and published papers; * we develop two automated evaluation methods for comparing predicted and actual trial baseline features, "ListMatch-LM" and "ListMatch-BERT", and validate them with "human-in-the-loop" evaluations; and * we demonstrate CTBench by using robust prompt engineering techniques on several LLMs to generate the baseline feature variables and evaluate their performance results. * Our data, code, and demo examples are available at https://github.com/nafis-neehal/CTBench_LLMhttps://github.com/nafis-neehal/CTBench_LLM. § RELATED WORK Recent applications of LLMs show that they can serve as powerful tools alongside human evaluators <cit.>. They have been efficiently deployed for extracting clinical information with models such as CT-BERT and MT-clinical BERT <cit.>. CliniDigest demonstrated similar value, reducing 10,000-word CT descriptions to 200-word summaries using GPT-3.5 <cit.>. LLMs have been shown to have further uses in comparing similarity among trials to improve result comparison and aid in the precision design of subsequent studies <cit.>. Advances in prompting have additionally increased the use cases, both in specific medical specialties and in generalized contexts <cit.>. Research exists on using LLMs to aid in creating eligibility criteria for CTs <cit.>.
Critical2Query was validated on 10 CTs of different medical contexts to produce inclusion and exclusion criteria for the resolution of previous conditions, disease severity, and disease duration <cit.>. TrialGPT proposed an LLM that could potentially reduce 42.6% of the screening time needed to match CTs by domain experts while maintaining near-expert-level grouping <cit.>. AutoCriteria similarly shows promising extraction of eligibility criteria through a set of 180 manually annotated trials <cit.>. However, automation of proposing baseline features of CTs is lacking. Since baseline features of CTs have become significantly more complex from 2011 to 2022 <cit.>, better approaches for suggesting a generalizable and standardized set of cohort demographics and features are needed. Adequately training and validating LLMs for these clinical tasks requires relevant and feature-rich datasets. Several works have leveraged the <clinicaltrials.gov> database, which has information on over 300,000 research studies conducted in more than 200 countries <cit.>. However, the prioritization of creating CT eligibility tools has left patient descriptor data relatively understudied. CTBench addresses gaps between study criteria and features that are reported in databases such as <clinicaltrials.gov> in comparison to what appears in the final publication. For example, where age, sex, race, ethnicity, region of enrollment, and hemoglobin A1C may be reported on databases <cit.>, investigators ensured that additional baseline characteristics of fasting serum glucose, duration of diabetes, BMI, weight, waist circumference, estimated GFR, albumin-to-creatinine ratio, medication use, and cardiovascular parameters were included in the final report <cit.>. As only 4 baseline features are consistently reported by greater than 10% of studies on these well-used databases, the development of publicly available and accurate baseline feature databases is necessary <cit.>. Current datasets that attempt to address this are limited by low CT cohort size or have sufficient patient data but are sourced from general clinical notes in place of CTs <cit.>. Other projects do create datasets from high-quality, manually annotated CTs, but do not provide public access <cit.>. Here, our constructed datasets are relevant to baseline demographics (CT-Repo, CT-Pub), with human annotation to include all the features of a reported clinical study (CT-Pub), and larger than previously available CT datasets with a complete set of patient demographic data <cit.>. § METHODOLOGY §.§ Data Construction We collect CT data from <clinicaltrials.gov> using their publicly available API. Our selection criteria include studies that: [1)] * are interventional trials, * are completed with results reported, * are related to one of five common chronic diseases: hypertension, chronic kidney disease, obesity, cancer, or diabetes, and * report at least six baseline features. The requirement for a minimum of six baseline features ensures the inclusion of studies with more comprehensive data beyond commonly reported features such as age group, race/ethnicity, and sex. This criterion is implemented to ensure the robustness of our dataset, as some features from the publication about a CT may not be reported on <clinicaltrials.gov>. For each CT, we collect several types of information (see Table <ref>). We initially started with 1798 studies returned from the API query.
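For context, a query of this kind can be issued against the public REST interface in a few lines. The sketch below is illustrative only: the endpoint and parameter names follow our understanding of the <clinicaltrials.gov> v2 API and are assumptions rather than a transcription of the exact call used for CTBench, and the minimum-baseline-feature criterion is applied client-side afterwards.

import requests

BASE_URL = "https://clinicaltrials.gov/api/v2/studies"  # assumed v2 endpoint

def fetch_candidate_trials(condition, page_size=100):
    studies, page_token = [], None
    while True:
        params = {
            "query.cond": condition,                      # e.g. "hypertension" or "diabetes"
            "filter.overallStatus": "COMPLETED",          # completed trials only
            "aggFilters": "results:with,studyType:int",   # with results, interventional (assumed filter syntax)
            "pageSize": page_size,
        }
        if page_token:
            params["pageToken"] = page_token
        resp = requests.get(BASE_URL, params=params, timeout=30)
        resp.raise_for_status()
        payload = resp.json()
        studies.extend(payload.get("studies", []))
        page_token = payload.get("nextPageToken")
        if not page_token:
            break
    return studies

# Studies reporting fewer than six baseline features would then be filtered out
# by inspecting each record's baseline measure list.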
After thorough pre-processing steps, including removing duplicate trials and trials with missing values, we are left with 1693 CTs for our final study. From our 1693 CTs, we construct two datasets, "CT-Repo" and "CT-Pub", summarized in Table <ref>. The CT-Repo dataset consists of 1690 trials, with the remaining three trials used as example trials for three-shot learning in LMs. We randomly pick 100 CTs from the CT-Repo dataset to build the CT-Pub dataset. For each trial in CT-Pub, human annotators manually collect the list of baseline features reported in the publications associated with the CT and ensure that: [1)] * each CT has at least one relevant publication reporting the trial results, * the publication contains a table where the baseline features for the trial are fully reported, and * the publication is evidenced to be connected to the trial by mentioning the trial ID in the publication and/or on the publisher's website. Challenges: The data extracted from <clinicaltrials.gov> include title, summary, conditions, eligibility criteria, interventions, primary outcomes, and baseline features in free-text format (Table <ref>). The trial titles and brief summaries provide an overview of the study in plain language, often without consistent terminology. Conditions refer to the health issues/diseases being studied, written in free text, which can lead to inconsistencies in interpretation due to polysemy (multiple meanings) and synonymy (different terms for the same concept). Eligibility criteria, encompassing both inclusion and exclusion criteria, are detailed as paragraphs, bulleted lists, or enumeration lists, without adherence to common standards or controlled vocabularies. Interventions describe the treatments or procedures being tested, in unstructured text. Primary outcomes and baseline features outline the main objectives and initial data points of the study, respectively, and are similarly unstructured, lacking standardization in terms of medical dictionaries or ontologies. This variability and lack of standardized language across all these fields pose significant challenges for both data extraction and results analysis. §.§ Generation Task The CTBench task is to predict the baseline features of a study given the metadata. We demonstrate our benchmarking process and evaluate performance results on two state-of-the-art LMs, the open-source LLaMa3-70B-Instruct <cit.> and the commercial GPT-4o <cit.>. For GPT-4o, we used the API provided by OpenAI <cit.>. For LLaMa3-70B-Instruct, we used APIs from GROQ <cit.> and HuggingFace's serverless inference service <cit.>. We investigate two in-context learning settings for feature generation: zero-shot and three-shot <cit.>. Each query has a system message and a user query (Figure <ref>). For the zero-shot setting, we provide CT metadata (excluding the baseline features) as input context to these models (Figure <ref>), and query the models to generate a list of probable baseline features relevant to the clinical trial. In the three-shot setting (see Appendix C for the full prompt template), we extend the zero-shot system prompt by appending trial metadata and corresponding answers (i.e., lists of baseline features) for three example trials. All our generation prompts are in Appendix C. For CT-Repo, the generation task involves predicting the list of baseline features reported in the <clinicaltrials.gov> portal using the CT metadata presented in Table <ref>. A minimal sketch of how such a generation query can be issued is given below.
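The sketch below shows one way the zero-shot generation call could be made with the OpenAI Python client under our decoding settings (temperature 0.0 and a fixed seed). It is illustrative rather than the released implementation: the prompt strings are placeholders standing in for the actual prompts of Appendix C, and the seed value shown is arbitrary.

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def generate_baseline_features(trial_metadata: str) -> str:
    # Placeholder system prompt; the real prompt text is given in Appendix C.
    system_msg = ("You are assisting with clinical trial design. Given the trial "
                  "metadata, return a list of probable baseline features.")
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": system_msg},
            {"role": "user", "content": trial_metadata},
        ],
        temperature=0.0,  # deterministic decoding, as used in all our experiments
        seed=1234,        # fixed seed for reproducibility (the value here is arbitrary)
    )
    return response.choices[0].message.content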
For the CT-Pub dataset, the generation task is to predict the baseline features collected from the publications relevant to each trial. §.§ Evaluation Task The evaluation task compares the "candidate features" suggested by each LLM with the "reference baseline features" from the CT publications for CT-Pub or the <clinicaltrials.gov> API for CT-Repo. The objective is to evaluate each pair of features, one from the reference list and one from the candidate list, to determine if they are contextually and semantically similar, i.e., if they match. We remove noisy keywords from the feature lists (e.g., "Customized," "Continuous") during pre-processing. After identifying all matched pairs, the final results are categorized into three lists: matched pairs, unmatched reference features, and unmatched candidate features. We employ two approaches for identifying matched pairs: "ListMatch-BERT" and "ListMatch-LM." For the evaluation task, we use Trial2Vec and GPT-4o for ListMatch-BERT and ListMatch-LM, respectively. The Trial2Vec implementation requires local installation and a GPU for inference, as it is not readily available through HuggingFace or other inference service providers. We utilized NVIDIA Ampere A100 and NVIDIA T4 GPUs via Google Colab for our work. For GPT-4o as an evaluator, we again used the OpenAI APIs available through their public site. All hyperparameters related to our generation and evaluation tasks are presented in Appendix B. We use a fixed seed and a temperature value of 0.0 across all experiments to ensure the outputs are deterministic and reproducible <cit.>. ListMatch-BERT: Here we consider a variation of the BERTScore <cit.>. We utilize the Trial2Vec architecture proposed for CTs, built on top of TrialBERT <cit.> (MIT license), to generate embeddings for each feature and then calculate a cosine similarity matrix for each set of pairs. We explore different matching threshold values T_h ∈{0.6, 0.7, 0.8, 0.9}, and recommend using the value of 0.7 (see Appendix D for a detailed comparison and reasoning). Matching starts from the pair with the highest cosine similarity above T_h; each matched pair is added to the matched list and removed from the respective feature lists and from the similarity matrix. Matching continues until: [1)] * no more matches are found with similarity greater than T_h, or * no more features remain to match in either the reference or candidate list. A detailed description of the ListMatch-BERT process is provided in Appendix A. We report mean Precision, mean Recall, and mean F1 scores across all studies for each dataset. Once the lists of matched pairs, unmatched references, and unmatched candidates are established, and given: TP (True Positives): n_matched_pairs, FP (False Positives): n_remaining_candidate_features, FN (False Negatives): n_remaining_reference_features, we calculate precision and recall: Precision = TP/(TP + FP) = n_matched_pairs/(n_matched_pairs + n_remaining_candidate_features) Recall = TP/(TP + FN) = n_matched_pairs/(n_matched_pairs + n_remaining_reference_features) ListMatch-LM: Here GPT-4o is prompted to identify matched pairs and the remaining unmatched sets (see Figures <ref> and <ref>). For each study, GPT-4o receives the reference features and candidate features as input. Trial metadata (excluding the actual baseline features) is provided as context. GPT-4o is tasked with identifying matched pairs and generating unmatched lists, which are returned as a JSON object.
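To make the ListMatch-BERT matching and scoring described above concrete, the following is a minimal sketch rather than the released implementation. It assumes the reference and candidate features have already been embedded into row vectors (we use Trial2Vec for this step; any sentence encoder would do for illustration) and reproduces the greedy one-to-one matching and the precision/recall definitions given above.

import numpy as np

def greedy_list_match(ref_vecs, cand_vecs, threshold=0.7):
    # ref_vecs, cand_vecs: 2-D arrays, one embedding per feature string.
    ref = ref_vecs / np.linalg.norm(ref_vecs, axis=1, keepdims=True)
    cand = cand_vecs / np.linalg.norm(cand_vecs, axis=1, keepdims=True)
    sim = ref @ cand.T                     # cosine similarity matrix
    matches = []
    while sim.size and sim.max() > threshold:
        i, j = np.unravel_index(np.argmax(sim), sim.shape)
        matches.append((int(i), int(j), float(sim[i, j])))
        sim[i, :] = -np.inf                # each reference feature is matched at most once
        sim[:, j] = -np.inf                # each candidate feature is matched at most once
    tp = len(matches)
    fp = cand.shape[0] - tp                # remaining (unmatched) candidate features
    fn = ref.shape[0] - tp                 # remaining (unmatched) reference features
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    return matches, precision, recall, f1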
Mirroring the procedure used in ListMatch-BERT, the model is instructed to remove matched pairs from further consideration immediately upon identification, ensuring that no reference feature is matched to multiple candidate features, and vice versa. Once the matches are generated and the unmatched items are identified, we calculate precision, recall, and F1 scores similarly as described above and report their means over all the studies. Appendix C provides the full evaluation prompt. Human Evaluation: To evaluate the accuracy of GPT-4o as an evaluator, we employ clinical domain experts to serve as human annotators. Their task is to identify matched pairs for each of the 100 CT studies in the CT-Pub dataset. To streamline the evaluation, we focus exclusively on the candidate responses generated by GPT-4o in the three-shot setting. The annotators receive the same information provided to GPT-4o during its evaluation and are instructed to match features using the same criteria. We developed a web tool to collect and store the responses from all annotators for each of the 100 studies in a database. We also solicit evaluations from human annotators regarding the remaining unmatched candidate features that may merit further examination. Our findings indicate a high level of agreement between the human annotator and GPT-4 Omni's evaluations, underscoring the reliability of GPT-4o in capturing nuanced similarities between features. Detailed results of these experiments are provided in Appendix D. § RESULTS AND DISCUSSION In CTBench, precision measures the proportion of predicted baseline features that are accurate, while recall measures the proportion of actual baseline features that the model successfully identifies. We find recall to be of more interest as it ensures comprehensive identification of all relevant baseline features, which is crucial for accurately characterizing study cohorts and maintaining the validity and robustness of clinical trial results. High recall minimizes the risk of missing critical features that could undermine the study's conclusions. Figure <ref> shows the performance comparison of GPT-4o and LLaMa3 for CT-Pub and CT-Repo datasets. We find that GPT-4o (3-Shot) leads in recall in the CT-Pub dataset, while LLaMa3 (0-Shot) excels in the CT-Pub dataset for precision and F1 scores. In the CT-Repo dataset, GPT-4o (3-shot) outperforms LLaMa3 across all ICL settings and metrics. §.§ Performance Analysis in Generation Tasks §.§.§ Analysis on CT-Pub Dataset Observation about Metric Values and Model Performance: The values of recall, precision, and F1 scores are not particularly high, indicating a moderate performance of LLaMa3 and GPT-4o in predicting baseline features. This suggests there is room for improvement in the models' ability to generate accurate and comprehensive baseline features. Comparison of Precision, Recall, and F1 Scores Across Models: The models exhibit varied strengths across different metrics. LLaMa3 (0-Shot) demonstrates the highest precision and F1 score, with an F1 score of 0.48, indicating its strong capability to accurately identify relevant baseline features without requiring prior examples. GPT-4o (3-Shot) leads in the recall, highlighting its superior ability to retrieve a comprehensive list of relevant baseline features when examples are provided. 
This suggests that GPT-4o benefits significantly from example-based learning, whereas LLaMa3 performs robustly even in a zero-shot setting, making it a versatile choice for scenarios with limited training data. ICL Setting Analysis: * Zero-shot vs. Three-shot: In the CT-Pub dataset, LLaMa3 performs better in the zero-shot setting, particularly in precision and F1 score. GPT-4o, however, benefits more from the examples, performing better in the three-shot setting in the recall. * Model Benefit from Examples: GPT-4o shows a significant improvement in recall when examples are provided (3-shot), whereas LLaMa3 shows a higher overall performance in the zero-shot setting. §.§.§ Analysis on CT-Repo Dataset: Observation about Metric Values and Model Performance: Similar to the CT-Pub dataset, the values are not exceptionally high, reflecting moderate performance in predicting baseline features. This emphasizes the need for enhanced models to improve prediction accuracy and comprehensiveness. Comparison of Precision, Recall, and F1 Scores Across Models: The CT-Repo dataset reveals that GPT-4o (3-Shot) outperforms LLaMa3 in precision and F1 score, achieving a notable F1 score of 0.52, while providing comparable performance in recall. This highlights GPT-4o's robustness and effectiveness when prior examples are available, making it highly suitable for matching or adjusting treatment and control subjects in clinical trials and observational studies. LLaMa3 (3-Shot) also demonstrates strong performance, particularly in the recall, indicating its capability to retrieve a broad range of relevant features when examples are provided. The overall moderate performance of both models reflects the complexity and challenging nature of accurately predicting baseline features from clinical trial metadata. ICL Setting Analysis: * Zero-shot vs. Three-shot: In the CT-Repo dataset, both models perform better in the three-shot setting. GPT-4o significantly benefits from examples, especially in precision and recall. * Model Benefit from Examples: GPT-4o shows substantial improvement with examples (3-shot), indicating its dependency on context for better performance. LLaMa3 also shows improved performance with examples but retains good performance in the zero-shot setting. Since the ground-truth baseline features for CT-Repo were collected from the <clinicaltrials.gov> API, there are specific nuances, such as reporting 'Region of Enrollment' as a baseline feature, which is not typically seen in CT-Pub publications. We believe this context explains why both GPT-4o and LLaMa3 benefit from example-based learning in this scenario. §.§.§ Why is GPT-4o under-performing significantly and consistently in zero-shot setting? GPT-4o (zero-shot) underperforms across all cases and scores in both datasets due to the lack of contextual learning from prior examples, which is crucial for accurately interpreting and predicting complex, domain-specific clinical trial features. This setting relies solely on pre-trained knowledge, which is insufficient for the nuanced and detailed task of baseline feature prediction in clinical trials. §.§ Performance on Evaluation Tasks GPT-4 Omni Scores: GPT-4 evaluation scores generally surpass BERT scores at a 0.7 threshold due to GPT-4o's broader understanding and contextual evaluation, which captures more nuanced similarities between reference and candidate baseline features. This results in a more generous and context-aware assessment compared to the stricter, more literal BERT scoring. 
BERT Scores (threshold = 0.7): After examining several thresholds, we recommend 0.7 to be used as the threshold value for producing BERT scores using ListMatch-BERT. The 0.7 threshold for BERT scores signifies a balance between generous and strict evaluation criteria, requiring high similarity for matches to be considered valid. This, however, reduces precision and recall by demanding closer alignment between generated and actual features compared to lower threshold values. Lowering the threshold would allow for more matches but could increase false positives and false negatives, affecting the precision and recall negatively. We present a thorough evaluation of BERT scores at different threshold values in Appendix D. Comparing both metrics, we believe that GPT-4 Omni scores suggest a comprehensive and context-sensitive evaluation, crucial for accurately assessing the quality of LM-generated baseline features in clinical trial design. § LIMITATIONS CT Data Expansion: Our results, derived from CT data, demonstrate the potential of LLMs to significantly aid in the design and implementation of clinical studies. But the CTBench consists of only RCTs for 5 chronic diseases gathered from <clinicaltrials.gov> with only a subset annotated with additional "gold-standard" from CT-related papers. Using our tools and framework, CTBench could be expanded with other CT repositories, more published CT results, and more diseases. Future work should also explicitly incorporate and evaluate observational studies. Evaluation Methods: We have presented two LLM-based matching methods and associated evaluation metrics, but how to best evaluate predicted descriptors is an interesting research question in itself. Currently, each reference or candidate item is permitted to be matched only once to provide a standardized fair evaluation across models. But other strategies allowing multiple matches are possible. We hope that the human-in-the-loop evaluation tools provided to compare the LM and human evaluations assist in the further evolution of effective evaluation strategies. Additional Methods for Generation: Our baseline CTBench study focuses on benchmarking the two state-of-the-art LLaMa3-70B-Instruct and GPT-4o models only with zero-shot and three-shot prompts due to resource constraints. By contrasting an open-source model (LLaMa3-70B-Instruct) with a closed-source model (GPT-4o), we aim to provide a preliminary evaluation of current leading technologies. In our experiments, both for the text generation and evaluation API calls, we have maintained a consistent approach by using a fixed seed and a temperature value set to 0.0. This methodological choice is based on OpenAI's documentation <cit.>, which claims that a fixed seed and a temperature parameter of 0.0 are likely to produce reproducible and deterministic results. But many other possibilities exist. Running each API call multiple times with the same question and considering aggregated answers could improve results. We hope that CT-bench will spur new prompt and model research to expand the scope and depth of AI methods for CT design support. Impact of Societal Bias: Societal biases present in language models (LMs) can potentially be transferred to clinical trials through the models' baseline feature predictions. This bias could skew the characterization of study cohorts, leading to biased clinical results and affecting the generalizability and applicability of the findings. 
Such biases in baseline features can undermine the validity of clinical trials, resulting in health outcomes that do not accurately reflect the broader population. § CONCLUSION CTBench serves as a pioneering benchmark for evaluating LLMs in predicting baseline features from CT metadata - a critical component in CT design. By leveraging datasets from <clinicaltrials.gov> and curated from trial publications, and utilizing advanced evaluation methods such as ListMatch-LM and ListMatch-BERT, CTBench provides a robust framework for assessing AI-generated baseline features. Our results establish a promising baseline, validated through expert human evaluations, and underscore CTBench's potential to significantly enhance the efficacy and robustness of clinical trials through advanced AI research. This work was supported by IBM Research and the Rensselaer Institute for Data Exploration and Applications. § CHECKLIST * For all authors... * Do the main claims made in the abstract and introduction accurately reflect the paper's contributions and scope? see section <ref> - <ref> * Did you describe the limitations of your work? see section <ref> * Did you discuss any potential negative societal impacts of your work? see section <ref> * Have you read the ethics review guidelines and ensured that your paper conforms to them? * If you are including theoretical results... * Did you state the full set of assumptions of all theoretical results? * Did you include complete proofs of all theoretical results? * If you ran experiments (e.g. for benchmarks)... * Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)? * Did you specify all the training details (e.g., data splits, hyperparameters, how they were chosen)? see section <ref> and Appendix * Did you report error bars (e.g., with respect to the random seed after running experiments multiple times)? see section <ref> * Did you include the total amount of compute and the type of resources used (e.g., type of GPUs, internal cluster, or cloud provider)? see Appendix * If you are using existing assets (e.g., code, data, models) or curating/releasing new assets... * If your work uses existing assets, did you cite the creators? see section <ref> * Did you mention the license of the assets? see <ref> * Did you include any new assets either in the supplemental material or as a URL? see Github link + Appendix * Did you discuss whether and how consent was obtained from people whose data you're using/curating? * Did you discuss whether the data you are using/curating contains personally identifiable information or offensive content? * If you used crowdsourcing or conducted research with human subjects... * Did you include the full text of instructions given to participants and screenshots, if applicable? * Did you describe any potential participant risks, with links to Institutional Review Board (IRB) approvals, if applicable? * Did you include the estimated hourly wage paid to participants and the total amount spent on participant compensation? Appendix § LISTMATCH-BERT In this study, we examine a modified version of BERTScore. We leverage the Trial2Vec architecture, which is an extension of TrialBERT. This architecture generates embeddings for each feature, after which we compute a cosine similarity matrix for each pair of sets. 
We experiment with various matching threshold values T_h ∈{0.6, 0.7, 0.8, 0.9}, ultimately recommending a threshold of 0.7 (refer to Appendix D for a comprehensive comparison and justification). Matching begins with the pair exhibiting the highest cosine similarity above T_h. These pairs are subsequently added to the matched list and removed from their respective lists and the similarity matrix. This process repeats until: [1)] * no further matches surpass the similarity threshold T_h, or * no remaining features exist to match in either the reference or candidate list. Here is a detailed version of the ListMatch-BERT algorithm (note: this preprint version has formatting and syntactical changes for the arXiv submission): § EXPERIMENTAL DESIGN §.§ Hyperparameters We present all our experimental hyperparameters for both the generation and evaluation tasks in Table <ref> in the Appendix. We use a fixed seed and a temperature value of 0.0 across all experiments to ensure the outputs are deterministic and reproducible. §.§ Computational Resources Used We spent around $120 throughout all of our experiments (both generation and evaluation in zero-shot and three-shot settings) using GPT-4o models. Besides that, we used around 150 compute units from Google Colab for GPU computations. We used NVIDIA Ampere A100 and NVIDIA T4 GPUs for local inference tasks to calculate BERT scores and other experiments. § PROMPTS §.§ Generation Prompt: Zero-shot Figure <ref> illustrates the full prompt used to generate LLM responses (i.e., baseline features) in a zero-shot setting. The system message includes detailed instructions for the LLM, specifying the format and structure of the user query. Following this, the user query provides the trial information as context, serving as the question for the LLM. §.§ Generation Prompt: Three-shot Figure <ref> shows the complete prompt used to generate LLM responses (i.e., baseline features) in a three-shot setting. The system message contains detailed instructions for the LLM, including the format and structure of the user query and instructions to expect three examples with their corresponding answers. Next, the user query provides example trial information and their answers as additional context, followed by the actual trial information serving as the question for the LLM. §.§ Evaluation Prompt Figure <ref> displays the complete prompt used to evaluate LLM responses (i.e., candidate features) against a set of reference baseline features. The system message provides detailed instructions for the LLM on how to perform the matching and how to return the response in JSON format. Following this, the user query includes the corresponding trial information, along with the list of reference features and candidate features, which serve as the question for the LLM to evaluate. § ADDITIONAL EXPERIMENTAL RESULTS AND DISCUSSION §.§ BERT Score Comparisons at different thresholds For BERT Score, we evaluated four different threshold values T_h = {0.6, 0.7, 0.8, 0.9} to be used as similarity thresholds in the similarity matrix (see Appendix A). Figures <ref> and <ref> show the mean precision, recall, and F1 scores for the CT-Pub and CT-Repo datasets, respectively. Increasing the threshold makes the matching more stringent, generally leading to lower precision and recall. Conversely, setting a lower threshold allows more invalid pairs to be matched, indicating a tradeoff.
After reviewing several examples and considering domain experts' recommendations, we suggest using a threshold value of 0.7, which balances strictness and accuracy in evaluation. §.§ Human Evaluation of GPT-4o's evaluation To assess GPT-4o's accuracy as an evaluator, we engaged clinical domain experts to identify matched pairs for 100 CT studies in the CT-Pub dataset. Focusing on GPT-4o's three-shot candidate responses, the experts used the same information and criteria as GPT-4o. We developed a web tool to collect and store their responses. We then compared the responses for the matched pairs from the human evaluators and GPT-4o, creating an inter-rater agreement table and calculating pairwise Cohen's Kappa statistics. Cohen's Kappa measures the agreement level between two raters classifying items into categories. Our findings, presented in Table <ref>, show high agreement between the human evaluators and GPT-4o, underscoring GPT-4o's reliability in identifying nuanced feature similarities. The relevant code is available in the GitHub repository. §.§ Sample evaluation responses from GPT-4o, BERT models and human evaluators We present sample evaluation responses, including the list of matched pairs, remaining reference features, remaining candidate features, and additional relevant candidate features in Figure <ref>. The figure presents responses from various evaluation models, such as BERT-Score with different threshold values and GPT-4 Omni, as well as three human evaluators. Each row in the figure details the specific features that were matched, those that remained unmatched, and any additional relevant features identified by the corresponding evaluator. §.§ Tool Web-UI We provide views of different components of the web tool UI used to collect responses from each human evaluator. These are presented in Figures <ref>, <ref>, <ref> and <ref>.
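As a closing practical note on the agreement analysis above, the pairwise Cohen's Kappa between any two raters can be computed in a few lines. The sketch below is illustrative only (it is not the released code) and assumes each rater's decisions have been flattened into aligned lists of binary match/no-match labels over the same ordered set of candidate-reference pairs; scikit-learn is used purely for convenience.

from sklearn.metrics import cohen_kappa_score

def pairwise_kappa(labels_by_rater):
    # labels_by_rater: dict mapping rater name -> list of 0/1 match decisions,
    # with all lists aligned over the same ordered candidate-reference pairs.
    raters = list(labels_by_rater)
    table = {}
    for i, a in enumerate(raters):
        for b in raters[i + 1:]:
            table[(a, b)] = cohen_kappa_score(labels_by_rater[a], labels_by_rater[b])
    return table

# Hypothetical example with two annotators and GPT-4o over five candidate pairs:
scores = pairwise_kappa({
    "annotator_1": [1, 0, 1, 1, 0],
    "annotator_2": [1, 0, 1, 0, 0],
    "gpt_4o":      [1, 0, 1, 1, 0],
})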
http://arxiv.org/abs/2406.18634v1
20240626180000
Resonant Conversion of Gravitational Waves in Neutron Star Magnetospheres
[ "Jamie I. McDonald", "Sebastian A. R. Ellis" ]
hep-ph
[ "hep-ph", "astro-ph.CO", "astro-ph.HE", "gr-qc" ]
Department of Physics and Astronomy, University of Manchester, Manchester M13 9PL, UK Département de Physique Théorique, Université de Genève, 24 quai Ernest Ansermet, 1211 Genève 4, Switzerland § ABSTRACT High frequency gravitational waves are the subject of rapidly growing interest in the theoretical and experimental community. In this work we calculate the resonant conversion of gravitational waves into photons in the magnetospheres of neutron stars via the inverse Gertsenshtein mechanism. The resonance occurs in regions where the vacuum birefringence effects cancel the classical plasma contribution to the photon dispersion relation, leading to a massless photon in the medium which becomes kinematically matched to the graviton. We set limits on the amplitude of a possible stochastic background of gravitational waves using X-ray and IR flux measurements of neutron stars. Using Chandra (2-8 keV) and NuSTAR (3-79 keV) observations of RX J1856.6-3754, we set strain limits h_c^ lim≃ 10^-26 - 10^-24 in the frequency range 5× 10^17 Hz≲ f ≲ 2× 10^19 Hz. Our limits are many orders of magnitude stronger than existing constraints from individual neutron stars at the same frequencies. We also use recent JWST observations of the magnetar 4U 0142+61 in the range 2.7× 10^13 Hz≲ f ≲ 5.9× 10^13 Hz, setting a limit h_ c^ lim≃ 5 × 10^-19. These constraints are in complementary frequency ranges to laboratory searches with CAST, OSQAR and ALPS II. We expect these limits to be improved both in reach and breadth with a more exhaustive use of telescope data across the full spectrum of frequencies and targets. Resonant Conversion of Gravitational Waves in Neutron Star Magnetospheres Sebastian A. R. Ellis July 1, 2024 ========================================================================= § INTRODUCTION The detection <cit.> of gravitational waves by LIGO has revolutionised our ability to study the Cosmos and opened the door to gravitational wave astronomy. LIGO <cit.> has since been joined by the VIRGO <cit.> and KAGRA <cit.> detectors to form the LVK collaboration, which has measured hundreds more black hole and neutron star mergers at frequencies of 𝒪(100 Hz). More recently, evidence for nHz gravitational waves has emerged from pulsar timing measurements by the NANOGrav, PPTA, EPTA, and InPTA <cit.> collaborations. In addition, there remain a number of future detectors aiming to expand the frequency coverage of gravitational wave astronomy, including space-based interferometers like LISA <cit.> (10^-2 Hz) and atom interferometers like MAGIS/AION <cit.> (10^-1 Hz - 10 Hz). These efforts mark the arrival of the era of multi-wavelength gravitational wave astronomy. This mission echoes the evolution of conventional electromagnetic astronomy, which has proved to be one of the main driving forces behind fundamental physics, from early observations of dark matter <cit.> and dark energy <cit.> to the discovery of the cosmic microwave background <cit.>. Today, photon-based astronomy spans 15 orders of magnitude in frequency, with an array of sophisticated telescopes from the radio through to gamma-rays, and continues to play an active role in guiding our understanding of the Universe at the most fundamental level. It is worth mentioning that several of these discoveries have been made serendipitously. By analogy, it seems highly likely that gravitational wave astronomy will unlock yet more secrets of our Universe as we explore the full range of the gravitational wave spectrum.
In recent years, there has been growing interest in pushing gravitational wave astronomy to even higher frequencies above kHz (see Ref. <cit.> for a review), paving the way for the study of ultra-high-frequency gravitational waves (UHFGWs). At these high frequencies, astrophysical sources of GWs from within the Standard Model are not expected. However, stochastic backgrounds of UHFGWs can be generated by a variety of sources including cosmic strings (see, e.g., Ref. <cit.>), phase transitions (see, e.g., Ref. <cit.>), and even the cosmic microwave gravitational wave background itself <cit.>. Similarly, transient signals can be generated by primordial black hole mergers <cit.> and the depletion of superradiant bosons in the vicinity of black holes <cit.>. Most of these proposed sources result from physics beyond the Standard Model. Searching for UHFGWs therefore offers an exciting opportunity to probe new physics. The study of UHFGWs is in its infancy, and it is possible that there are further Standard Model sources yet to be predicted by theorists. In the last few years, a range of novel astrophysical effects have been shown to predict the production of large numbers of light particles <cit.>, illustrating the continuing ability of astrophysical environments to surprise us. Early theoretical studies <cit.> also suggest MHz gravitational waves may be produced in neutron star mergers due to Quantum Chromodynamics effects. Further Standard Model sources of UHFGWs, if predicted, would offer important milestones for detection, requiring a wide array of experimental approaches. Significant progress has recently been made in the study of experimental signatures of UHFGWs thanks to the techniques which have been developed to study light particles, such as axions. Indeed, these are often directly applicable to laboratory searches for UHFGWs across a range of frequencies, as explored in <cit.>. Furthermore, the emergence of new technologies including levitated sensors <cit.>, bulk acoustic wave resonators (BAWs) <cit.>, high-precision atomic clocks <cit.> and superconducting cavities <cit.> is making the detection of UHFGWs a tangible possibility. Extending the light particle analogy further, there has been a thriving symbiosis between laboratory searches and astrophysical probes of light fields <cit.>. In particular, stars have proved powerful tools for searching for light particles, and they continue to provide some of the leading constraints relative to laboratory searches in some frequency ranges. Neutron stars, in particular, have been used extensively to study ultra-light particles, including axions in both the radio <cit.> and X-ray bands <cit.>. These studies exploit an enhanced coupling between axions and photons both due to the very large magnetic fields of neutron stars, and, in the case of dark matter, the kinematic enhancement of the production process due to the axion and photon dispersion relations becoming degenerate in the neutron star plasma, leading to resonant production of photons from light particles. A range of sophisticated techniques have now been developed for computing light particle signatures from neutron stars, including numerical modelling of photon transport <cit.> and analytic <cit.> as well as numerical <cit.> calculations of the 3D photon production process itself. In this work, we demonstrate how this same resonant production process is also present for UHFGWs propagating through neutron star magnetospheres.
To date, there have been a few schematic studies of non-resonant conversion <cit.>, and none of these have made use of the latest techniques developed above. We therefore carry out the first treatment of the resonant production process, implementing the state-of-the-art treatments detailed above. The structure of this paper is as follows. In Sec. <ref> we outline the production process of photons from gravitons by adapting the latest techniques developed in <cit.>. In Sec. <ref> we outline our model for neutron star magnetospheres and describe the characteristic size of signals. In Sec. <ref> we provide our constraints before finally offering our conclusions in Sec. <ref>. § RESONANT PHOTON PRODUCTION Since we shall work across multiple observational bands where wavelength, frequency and energy are variously used to label photons, it is useful at this stage to lay out our notation and conventions. We work in natural units in which ħ = c = 1 so that the photon energy, E_γ, angular frequency ω, frequency f and wavelength λ are related by E_γ = ω = 2 π f = 2π/λ . The interaction between gravitons and photons is captured by the minimal coupling of the spacetime metric g_μν to electromagnetism, represented by the following action 𝒮 = ∫ d^4 x √(-g) [ m_p^2/2ℛ - 1/4 g_μρ g_νσ F^μν F^ρσ], where m_p = 1/√(8 π G) is the reduced Planck mass and F_μν = ∂_μ A_ν - ∂_ν A_μ is the photon field-strength tensor. Working in the weak field limit, we can expand the metric about the Minkowski metric η_μν by writing g_μν = η_μν + 2/m_p h_μν , where h_μν is the dimensionful field associated to the graviton.[Note that in the GW literature, h_μν is often the notation used to describe the dimensionless fluctuation of the metric, rather than the graviton field. Since we adopt a field theoretic approach, we use the corresponding convention of normalising with respect to the reduced Planck mass.] Expanding the action in powers of h_μν, one can read off the well-known action for h_μν (in transverse-traceless (TT) gauge) 𝒮 = ∫ d^4 x [ -1/2(∂_μ h_ρσ)^2 -1/m_p h_μν T^μν], where T^μν is the energy momentum tensor of the electromagnetic field, defined by T^μν = F^μα F^ν_α - 1/4η^μν F_αβ F^αβ . In Ref. <cit.> it was shown how the coupling between axions and photons leads to a resonant conversion of axions into photons with a probability that can be read off simply by knowing the matrix element for axion to photon conversion, which can be immediately obtained from the interaction Lagrangian. By generalising those results, one sees that the resonant conversion of a graviton into a photon can be written compactly as P_h →γ = π| ℳ_h →γ|^2/(E_γ| k·∇ E_γ|) U_E/U , where | ℳ_h →γ|^2 is the squared matrix element for the conversion of gravitons into photons, E_γ is the photon energy, ∇ E_γ is its spatial gradient, and U_E and U are the electric and total electromagnetic energy density in the photon mode. It is understood that all quantities are then evaluated "on resonance", i.e., at the point where the photon and graviton dispersion relations become degenerate, with both satisfying ω = |k|, where ω and k are their frequency and 3-momentum, respectively. Full details can be found in Ref. <cit.>. We emphasise here that there is nothing intrinsically "quantum" in our treatment of gravitational wave conversion; the above language is simply a useful method for computing conversion probabilities and fluxes.
Indeed, the equivalence of the above approach and full solutions to classical wave equations was recently demonstrated in Ref. <cit.>. Hence, to determine the resonant conversion probability for gravitons, one must simply compute the matrix element appearing in the numerator of Eq. (<ref>). To do this, we make further use of our TT gauge choice for the graviton polarisation tensors[We adopt a quantisation of the graviton field in terms of 4-momentum eigenstates with associated polarisation tensors, such that h_μν∼∑_i∫ d^3k⃗/((2π)^3√(2 ω)) a_i (k) H_μν(k)e^i k_α x^α + c.c.] H_μν(k), which implies the conditions k_μ H^μν =0 (transverse) and H^μ_ μ =0 (traceless). We also expand F_μν about a background field of the neutron star F_ NS^μν by taking F^μν→ F^μν_ NS + ∂^[μ A^ν], where [ ] denotes antisymmetrisation and A^μ denotes the dynamical photon field that mixes with the graviton. Putting this together, the second term in Eq. (<ref>) does not contribute when contracted with h^μν via the traceless condition, leaving an effective interaction Lagrangian (using the transverse condition) ℒ_ int = 2/m_p h_μν∂_α A^μ F^να_ NS , from which we can easily read off the squared matrix element for graviton to photon conversion as | ℳ^+, ×_h →γ|^2 = 1/m_p^2| 2 H^+ , ×_μν k_αϵ^μ F_ NS^να|^2, where ϵ is the (in-medium) polarisation 4-vector of the photon mode into which the graviton converts. Explicitly, working in temporal gauge for the photon in which ϵ^0 =0, and assuming the presence of only a background magnetic field B_k such that the non-vanishing components of F_ NS are F_ NS^ij = ϵ^ijk B^k, we can write the full conversion probability as P^+, ×_h →γ = 4 π| ϵ̂·H^+, ×· (k×B)|^2/(E_γ| k·∇ E_γ|) (U_E/U) (1/m_p^2), where ϵ̂ is the electric field polarisation 3-vector. The resonance occurs where the graviton and photon 3-momenta become degenerate, which occurs when E_γ = |k| . This condition is met on surfaces where the photon dispersion relation becomes null. To determine where these regions lie, we must first compute the dispersion relation for the photon. This is determined by the photon permittivity ε, which consists of two contributions ε = ε_ pl + ε_ vac . The first, ε_ pl, is the standard permittivity of a classical magnetised plasma, and ε_ vac arises from quantum loop corrections to the photon self-energy in the presence of an external magnetic field. Explicitly we can choose coordinates in which k = (0,0,k) and <cit.> ε_ pl = R^yz_θ·[ ϵ i g 0; - i g ϵ 0; 0 0 η ]· R^yz_-θ , where the magnetic field is taken to be at an angle θ from the z-axis in the positive y-z quadrant, and R^yz_θ is the rotation matrix by θ in the y-z-plane. The coefficients in the dielectric tensor are given by ϵ = 1 - ω_p^2/(ω^2 - Ω_c^2), g = ω_p^2 Ω_c/(ω (ω^2 - Ω_c^2)), η = 1 - ω_p^2/ω^2, where ω_p = √(4 πα n_e/m_e) and Ω_c = √(α) B/m_e are the plasma frequency and cyclotron frequency, respectively. The Euler-Heisenberg <cit.> contribution to the permittivity from the vacuum in the sub-critical limit B < B_c ≡ m_e^2 /e is given by <cit.> ε_ vac = 𝕀( 1 - 8α^2| B|^2/(45 m_e^4)) + 28 α^2/(45 m_e^4) B⊗B . In addition, the vacuum birefringence introduces corrections to the magnetic permeability, which reads μ_ij^-1 = 𝕀( 1 - 8α^2| B|^2/(45 m_e^4)) - 16 α^2/(45 m_e^4) B⊗B . Substituting these expressions into Maxwell's equations and Fourier transforming, we find simple analytic expressions for the photon dispersion relation in two limits, Ω_c ≪ω and Ω_c ≫ω.
For the limit Ω_c ≫ω_ p, ω, the plasma becomes strongly magnetised, and there are three modes <cit.>: the magnetosonic-t, Langmuir-O (LO) and Alfvén modes. Only the LO mode is capable of propagating out of the plasma and escaping to vacuum at infinity. Its squared refractive index is given by n^2_ LO = 5 ((4 b^2+9) ω ^2-9 ω _p^2)/[cos ^2(θ ) (28 b^2 ω ^2-45 ω _p^2)+(45-8 b^2) ω ^2] , where b = B/(m_e^2/α). Meanwhile, for the limit ω≫Ω_c,ω_ p we can send Ω_c /ω to zero, and expand perturbatively in ω_ p/ω and b. This gives two modes, n_⊥ ^2 = 1 - ω_p^2/ω^2 + 16/45 b^2 sin^2θ , n_∥^2 = 1 - ω_p^2/ω^2 + 28/45 b^2 sin^2 θ, where the mode corresponding to n_⊥ is polarised perpendicular to B while n_∥ has both parallel and perpendicular components relative to B. The resonance condition is met when Eq. (<ref>) holds for the given photon mode. This corresponds to surfaces on which ω_p^2 = (28, 16 sin^2 θ, 28 sin^2 θ) α^2 ω^2 | B|^2 /(45 m_e^4), for the (LO, ⊥, ∥) modes, respectively. To finally obtain the conversion probability, we need to specify a basis for the polarisation vectors H^+,×. We choose one in which the two graviton polarisations can be written as H^+ = 1/√(2)[ 1 0 0; 0 -1 0; 0 0 0 ], H^× = 1/√(2)[ 0 1 0; 1 0 0; 0 0 0 ] . For the LO mode the conversion probabilities are thus given by P^×_h → LO = πsin^2 θ_B | B|^2 ω/(| k·∇ E_ LO| m_p^2), P^+_h → LO = 0, with the gradient given by ∇ E_ LO = 7 ωω_p sin ^2 θ/(7 ω^2-7ω_p^2 cos ^2θ +5ω_p^2) ∇ω_p. Meanwhile, in the high-frequency regime, for our choice of polarisation basis, we find that + converts exclusively to ⊥ and ∥ converts exclusively to ×, so that P^+,×_ h →γ_⊥,∥ = πsin^2 θ_B | B|^2 ω/(| k·∇ E_⊥,∥| m_p^2), P^+_h →∥ = P^×_h →⊥= 0, with similar expressions for the gradients, which can be read off from the dispersion relations (<ref>). For all modes in question, U_E/U = 1/2. We have now gathered together all the ingredients we need to compute the conversion probability. In the next section, we apply this to the magnetosphere of neutron stars. § NEUTRON STARS AND PHOTON FLUX We begin by considering the canonical model for the magnetosphere of neutron stars, namely a Goldreich-Julian (GJ) <cit.> plasma distribution, where the number density of charge carriers is given by n_GJ(𝐫) = (2 Ω·𝐁/e) · 1/(1 - Ω^2 r^2 sin^2 θ) , where the magnetic field is given by a magnetic dipole rotating with angular frequency Ω = 2π/P, inclined at an angle α relative to the rotation axis: B_r = B_0(R/r)^3(cosαcosθ+sinαsinθcosψ), B_θ =B_0/2(R/r)^3(cosαsinθ-sinαcosθcosψ), B_ϕ =B_0/2(R/r)^3 sinαsinψ . The quantity Ω is the constant NS rotation vector, B_0 is the surface magnetic field strength, ψ = ϕ - Ω t and (r,θ, ϕ) are polar coordinates with the north pole given by the rotation axis. The plasma mass is then ω^ GJ_ p(r⃗) = √(4 π α_ EM | n_ GJ(r⃗) | /m_ e). With this model, we can plot the critical surface (white solid line) in Fig. <ref>. To give a conservative treatment of the magnetosphere, we have restricted our study of the critical surface to those parts which lie within the closed magnetic field lines (red line in Fig. <ref>), where the GJ model is expected to be a good approximation of the structure of the magnetosphere. The excised regions are shown as white dashed lines. We have also applied a cutoff condition to the integral in Eq. (<ref>) whenever the magnetic field strengths on the critical surface come within 10% of the critical magnetic field strength, ensuring the perturbative nature of the Euler-Heisenberg expansion is respected.
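For readers who wish to reproduce the critical-surface morphology, the magnetosphere model above reduces to a few lines of code. The sketch below is a direct transcription of the rotating-dipole and Goldreich-Julian expressions in natural units (ħ = c = 1), so all inputs must be supplied in one consistent unit system; it is illustrative only and is not the code used to produce the figures.

import numpy as np

def dipole_B(r, theta, psi, B0, R, alpha):
    # Rotating magnetic dipole components (B_r, B_theta, B_phi) as given above.
    fac = B0 * (R / r) ** 3
    B_r = fac * (np.cos(alpha) * np.cos(theta) + np.sin(alpha) * np.sin(theta) * np.cos(psi))
    B_th = 0.5 * fac * (np.cos(alpha) * np.sin(theta) - np.sin(alpha) * np.cos(theta) * np.cos(psi))
    B_ph = 0.5 * fac * np.sin(alpha) * np.sin(psi)
    return B_r, B_th, B_ph

def n_GJ(r, theta, psi, B0, R, alpha, Omega, e):
    # Goldreich-Julian number density. With the rotation axis along the polar axis,
    # Omega . B = Omega * (B_r cos(theta) - B_theta sin(theta)); the last factor
    # diverges at the light cylinder, Omega * r * sin(theta) -> 1.
    B_r, B_th, _ = dipole_B(r, theta, psi, B0, R, alpha)
    omega_dot_B = Omega * (B_r * np.cos(theta) - B_th * np.sin(theta))
    return (2.0 * omega_dot_B / e) / (1.0 - (Omega * r * np.sin(theta)) ** 2)

def omega_p_GJ(r, theta, psi, B0, R, alpha, Omega, e, alpha_em, m_e):
    # Plasma frequency omega_p = sqrt(4 pi alpha_em |n_GJ| / m_e) in natural units.
    n = n_GJ(r, theta, psi, B0, R, alpha, Omega, e)
    return np.sqrt(4.0 * np.pi * alpha_em * np.abs(n) / m_e)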
Note that the diagonal lines in Fig. <ref> correspond to the so-called null surfaces in the GJ model on which ω^ GJ_ p≃ 0. In a realistic magnetosphere, these regions are expected to be filled with charges, making these sections of the critical surface inferred from the GJ model untrustworthy. To address this issue, for comparison, we also consider a simpler model in which |n| = 2 Ω B_0 (R/r)^3, which captures the relevant scaling whilst avoiding the complicated angular dependencies and spurious null surfaces present in the GJ model. We also use a further model (discussed more below) to capture charge densities which exceed GJ values, as might occur in magnetars. The total photon luminosity from gravitational wave conversion is given by <cit.> L = ∫ d^3 𝐤∫ d Σ_𝐤·𝐯_p P_h →γ ω f_h, where dΣ_k is the area element on the conversion surface associated with gravitons of momentum k.[Recall that owing to the anisotropic nature of the plasma, gravitons with different momenta convert on different surfaces, defining a foliation of resonant conversion surfaces. See Ref. <cit.>.] The phase velocity is given by v_p = k/ω, d^3k is the phase space measure for graviton 3-momenta and f_h = f_h(x,k) is the graviton phase space density. Next we write d^3k = dΩ_k dω ω^2, where dΩ_k is the solid angle for the graviton momentum in polar coordinates. We must also remember to convert from angular to ordinary frequency, by using ∫ dω = 2 π∫ df. For a source at a distance d from Earth, this allows us to write down the photon flux density averaged over all emission directions: S = 1/(4 π d^2)∫ d Ω_𝐤∫ d Σ_𝐤·𝐯_p P_h →γ 2πω^3 f_h. Next we need to substitute the expression for the photon phase space distribution f_h in terms of the dimensionless strain of stochastic gravitational waves h_ c, sto. The energy density of gravitons is given by integrating f_h over momentum space. Assuming an isotropic and homogeneous distribution of stochastic gravitational waves such that f_h(k,x) = f_h(ω), we can write ρ_ GW = ∫ d^3 k ω f_h = ∫ d ln f 4 πω^4 f_h, where again we used dω = 2 π d f = ω d ln f. From this, we read off d ρ_ GW/dlnω = 4πω^4 f_h, which, by definition <cit.>, is equal to ρ_c Ω_ GW = π f^2 h_ c, sto^2/(2G) = ω^2 h_ c, sto^2/(8 π G). This enables us to identify f_h = h_ c, sto^2 /(32 π^2 G ω^2). Inserting this expression for f_h into Eq. (<ref>) we obtain an expression for the photon flux density in terms of the strain S = 1/(4 π d^2)∫ d Ω_𝐤∫ d Σ_𝐤·𝐯_p P_h →γ (ω/(16 π G)) h_ c,sto^2. The flux density (<ref>) is shown in Fig. <ref> for the nearby isolated neutron star RX J1856.6-3754. Note that at high ω, the critical surface lies far from the star and, as we increase ω further, the whole critical surface is pushed outside the closed magnetic field line region (red line in Fig. <ref>). In the interest of being conservative, we excise contributions to the photon flux from such regions, so our signal vanishes by construction at high frequencies. At low ω, the toroidal contributions are pushed towards and inside the star, meaning that the surface becomes increasingly dominated by conical regions close to the so-called null surfaces where n_ GJ≃ 0. Great caution is therefore needed, since such null surfaces may not appear in realistic magnetosphere models. However, we expect the conversion surface to partially track the low plasma density contours, with the main difference being that the precise morphology may differ from what is inferred from the GJ model.
More detailed magnetosphere models should be used to address such uncertainties in future work. Nonetheless, our spherical plasma model (which does not suffer from such issues) also leads to strong constraints, suggesting that modifications to the canonical GJ model will not significantly affect the qualitative results, while only slightly affecting quantitative findings. § CONSTRAINTS The flux density from GW conversion in a neutron star magnetosphere (Eq. (<ref>)) can be compared with observed fluxes to set constraints on the characteristic strain of the stochastic GW background. In the X-ray band, observations of RX J1856.6-3754 by the Chandra <cit.> and NuSTAR <cit.> telescopes can be used to set constraints in the range 5× 10^17 Hz≲ f ≲ 2× 10^19 Hz. For the Chandra data, a thorough analysis searching for analogous signals from axion conversion in the neutron star magnetosphere has already been performed, with results presented in Ref. <cit.>. We recast the measured flux in the energy range 2 - 8 keV as a constraint on stochastic GWs, as shown in the right panel of Fig. <ref>. For the NuSTAR data, we use the observed flux in the energy range 3-79 keV from 47 ks of observation time reported in the HEASARC catalog <cit.>. In order to obtain an approximate flux density, we use the effective area as a function of photon energy reported in Ref. <cit.>. The resultant flux density is shown in the upper-right panel of Fig. <ref>. The corresponding constraint on h_c is shown in the lower-right panel of the same figure, and in the context of the wider frequency range of GWs in Fig. <ref>. We caution that our analysis of the NuSTAR X-ray data is not as thorough as that performed in Ref. <cit.> for the Chandra data. However, since possibly significant systematic errors, including astrophysical ones, affect the signal, the present level of data analysis seems appropriate for setting constraints, and we defer a more thorough analysis of the NuSTAR data to future work. In the IR band, we make use of JWST spectral measurements of the magnetar 4U 0142+61 from Ref. <cit.>.[We thank the authors for providing us with their data.] These constraints must be interpreted with caution, as the magnetosphere of a magnetar may differ from the GJ model we use to compute the expected signal flux density. We attempt to capture this via a model considered in Ref. <cit.>, which quantifies the increase in charge density above the minimal value set by the GJ model through a number density given by n = λψ/e rsin^2θ B_0 (R/r )^3 where ψ and λ are constants. With this in mind, it is nonetheless instructive to examine the size of the strain sensitivity with a view to further JWST observations of other (non-magnetar) neutron star spectra, or more systematic treatments of the magnetosphere. We display the results in the left column of Fig. <ref>. Constraints using non-resonant conversion have previously been explored in Ref. <cit.> using observations of the Crab and Geminga pulsars. In Fig. <ref> we display results for Geminga data from FERMI-LAT found in Ref. <cit.>, wherein the bin widths are clearly stated and the data covers a continuous range 0.1 ≲ω≲ 34 GeV. Other neutron star constraints <cit.> are displayed as dashed lines to emphasise that they are derived from data which may not cover a continuous frequency range. These constraints use, e.g., earlier data from the Compton Gamma Ray Observatory of Geminga and Crab (see Ref.
<cit.>) for which it is unclear whether the data cover a continuous frequency range, and for which the bin widths are not stated. Similar comments apply to FERMI-LAT observations of the Crab <cit.>, so these are also shown as dashed lines. We also caution that the magnetosphere treatment and the graviton-photon mixing in those works were far simpler than what is presented here. Non-resonant conversion limits from neutron star populations, reported in Ref. <cit.>, are also shown, though we caution that they formally have incomplete frequency coverage owing to gaps in the underlying spectral measurements <cit.>. We display results for the more conservative population scenario in which the magnetic field strength decays in time. We also illustrate limits/projections from various laboratory experiments <cit.>. We see that resonant conversion of GWs into photons in single neutron stars leads to stronger limits than the non-resonant conversion from the full galactic neutron star population study. Therefore, it is plausible that a population study looking for resonant conversion would result in further improved sensitivity. We leave this study to future work. Ultimately, while resonant conversion in neutron star magnetospheres offers impressive sensitivities to stochastic sources, it should be cautioned that the origin of a signal with the corresponding amplitude would have to be from the late universe. Indeed, constraints on the number of relativistic degrees of freedom during the era of Big Bang Nucleosynthesis (BBN) restrict Ω_ GW≲ 10^-6, corresponding to h_c ≲ 10^-30× (GHz/f) <cit.>. The result is that at the frequencies relevant to the individual neutron stars we have considered, the BBN bound constrains early universe GWs to have characteristic strains at least 14 orders of magnitude smaller than what can be probed by either 4U 0142+61 or RX J1856.6-3754. Late universe stochastic GWs could arise from, e.g., unresolved PBH mergers, although the associated spectrum is expected to have a peak amplitude lower than what can be probed by our study <cit.>. Finally, since the study of UHFGWs is still in its infancy, the full range of sources, especially from late universe processes, is not known. § SUMMARY In this work we have outlined a new mechanism for resonantly converting high-frequency gravitational waves into photons in the magnetospheres of neutron stars. This exploits the inverse Gertsenshtein effect and strong magnetic fields, in addition to a resonance which occurs in regions where the photon has a null dispersion relation. We have calculated the production rate at different frequencies and described the morphology of the magnetosphere surfaces on which resonant conversion can take place within the Goldreich-Julian model <cit.>, as well as within a spherical plasma and a magnetar-like <cit.> model. We have seen that at low frequencies, the relevant mode into which gravitons convert is the LO mode, while at higher frequencies there are two photon production modes, with resonant conversion occurring on a foliation of surfaces whose shape depends on the angle between the gravitational wave 3-momentum and the magnetic field. Clearly it will be interesting to carry out a more detailed study of magnetosphere models to understand how these affect the flux density from resonant gravitational wave conversion.
We obtained limits on the characteristic gravitational wave strain h_c, sto by first computing the expected photon flux from gravitational waves converting into photons in the magnetosphere of the neutron star. Formally, there is a systematic error on the constraints from the observing angle θ, i.e. the angle between the rotation axis of the star and the line of sight to the observer. Such uncertainties can be quantified through ray-tracing <cit.>, though we leave such a detailed analysis for future work. We used the prediction for the photon flux to set limits on the size of the characteristic strain h_c by comparing to observations of neutron stars in the X-ray and IR bands. We obtained competitive constraints on the stochastic strain h_c which exceed existing limits from both individual stars <cit.> and populations <cit.> by many orders of magnitude. In future work it would be beneficial to carry out a more exhaustive study of all archival X-ray and IR data across a wider range of neutron stars, as well as to consider telescope data in other frequency ranges. This leaves the door open to a stronger and wider range of constraints on high frequency gravitational waves. § ACKNOWLEDGMENTS We thank Francesca Chadha-Day, Juraj Claric, Virgile Dandoy, Valerie Domcke, Camilo Garcia-Cely, Bettina Posselt and Sam Witte for useful conversations. We are grateful to Asuka Ito and Virgile Dandoy for sharing data. We also thank Jeremy Hare, George Pavlov, Bettina Posselt, Oleg Kargaltsev, Tea Temim and Steven Chen for supplying the JWST data on 4U 0142+61 used from their work <cit.>. Aldo Ejlli has also kindly provided us with CAST and OSQAR limits from his work <cit.>. This work has benefited from discussions held at the workshop “Ultra-high frequency gravitational waves: where to next?”, which took place at CERN, funded by the CERN-Korea Theory Collaboration and by the UKRI/EPSRC Stephen Hawking fellowship, grant reference EP/T017279/1. JM thanks CERN for hospitality and financial support from a Manchester University Research Collaboration fund. The work of SARE was supported by SNF Ambizione grant PZ00P2_193322, New frontiers from sub-eV to super-TeV.
http://arxiv.org/abs/2406.19169v1
20240627134214
Spikes and spines in 3D Lorentzian simplicial quantum gravity
[ "Johanna Borissova", "Bianca Dittrich", "Dongxue Qu", "Marc Schiffer" ]
gr-qc
[ "gr-qc", "hep-th" ]
[1] #1 #1 = jborissova@perimeterinstitute.caPerimeter Institute, 31 Caroline Street North, Waterloo, ON, N2L 2Y5, CanadaDepartment of Physics and Astronomy, University of Waterloo, 200 University Avenue West, Waterloo, ON, N2L 3G1, Canadabdittrich@perimeterinstitute.caPerimeter Institute, 31 Caroline Street North, Waterloo, ON, N2L 2Y5, CanadaTheoretical Sciences Visiting Program, Okinawa Institute of Science and Technology Graduate University, Onna, 904-0495, Japandqu@perimeterinstitute.caPerimeter Institute, 31 Caroline Street North, Waterloo, ON, N2L 2Y5, Canadamschiffer@perimeterinstitute.caPerimeter Institute, 31 Caroline Street North, Waterloo, ON, N2L 2Y5, Canada§ ABSTRACT Simplicial approaches to quantum gravity such as Quantum Regge Calculus and Spin Foams include configurations where bulk edges can become arbitrarily large while keeping the lengths of the boundary edges small. Such configurations pose significant challenges in Euclidean Quantum Regge Calculus, as they lead to infinities for the partition function and length expectation values. Here we investigate such configurations in three-dimensional Lorentzian Quantum Regge Calculus, and find that the partition function and length expectation values remain finite. This shows that the Lorentzian approach can avoid a key issue of the Euclidean approach. We also find that the space of configurations, for which bulk edges can become very large, is much richer than in the Euclidean case. In particular, it includes configurations with irregular light-cone structures, which lead to imaginary terms in the Regge action and branch cuts along the Lorentzian path integral contour. Hence, to meaningfully define the Lorentzian Regge path integral, one needs to clarify how such configurations should be handled. Spikes and spines in 3D Lorentzian simplicial quantum gravity Marc Schiffer July 1, 2024 ============================================================= § INTRODUCTION The path integral for quantum gravity requires many choices to be made <cit.>, such as specifying the space of geometries to be summed over, or the regularization of the path integral. One such regularization is provided by Quantum Regge calculus <cit.>, where the space of geometries is given by piecewise flat [One can also choose piecewise homogeneous geometries <cit.>.] geometries, constructed via triangulations. The geometry of these triangulations is uniquely specified by assigning lengths to all edges of the triangulation. The Regge action <cit.> is a discretization of the Einstein-Hilbert action based on piecewise linear and flat geometries. This regularizes the infinite-dimensional path integral and reduces it to an integral over finitely many edge lengths. This does however not guarantee the finiteness of the path integral: There are configurations where edge lengths can [The choice of edge lengths is restricted by the (Euclidean or Lorentzian) generalized triangle inequalities. These triangle inequalities guarantee that any top-dimensional simplex of the triangulation can be embedded into (Euclidean or Lorentzian) flat space.] become arbitrarily large. A class of such configurations are called spikes, which are defined as follows, see also the left panel of Fig. <ref>: consider a bulk vertex v and the set of all simplices containing this vertex, i.e. the star of v. We fix the length of the edges in the boundary of this set to some finite values, thus also fixing a finite value for the volume of this boundary. 
Spikes are configurations where the length of the edges sharing the vertex v can become arbitrarily large. The left panel of Fig. <ref> shows an illustration of a spike configuration in two spacetime dimensions. Since those configurations are a-priori included in the Regge path integral, its finiteness is not guaranteed. In fact, in d>2 dimensions [In d=2 dimensions the gravitational action (with vanishing cosmological constant) is a topological invariant. Nevertheless, spike configurations turn out to be highly problematic <cit.>.], such spike configurations also include the conformal mode <cit.>, which contributes with the “wrong" sign to the gravitational action <cit.>, thereby rendering it unbounded from below. The path integral for Euclidean quantum gravity, with amplitudes exp(-S_E), evaluated on such configurations therefore leads to infinities [Adding a positive cosmological constant does not cure this issue in all cases. <cit.> constructs simple examples in 4D where the Euclidean action goes to -∞ in the limit of infinitely large edge lengths, as long as the boundary volume is below a critical value which scales as Λ^-3/2.] <cit.>. One main aim of this work is to study the convergence properties of spike configurations (as well as spine configurations, defined below) in the Lorentzian Regge path integral. In this work we will focus on the three-dimensional case, the four-dimensional case will be discussed in <cit.>. Previous work focused on the two-dimensional case <cit.> or resorted to path integral measures which (exponentially) suppress large edge lengths <cit.>. We will however see that a suppressing path integral measure is not necessary in order to obtain finite expectation values in Lorentzian spacetimes. We even find that for the class of configurations studied here, the expectation values of arbitrary powers of the bulk edge lengths are finite. A key point in our analysis will be the asymptotic behaviour of the Regge action for large bulk edge lengths. We will consider particular classes of configurations related to so-called Pachner moves <cit.>. The Pachner moves also include spikes and configurations which we name “spines". While spikes require a bulk vertex, spines require only a bulk edge, see the right panel of Fig. <ref> for an illustration: Given a bulk edge, we consider the set of simplices sharing this edge, that is, the star of this edge. A spine configuration allows for this bulk edge to be arbitrarily long, while the edge lengths in the boundary of the star are fixed. Spines can appear in Lorentzian signature, whereas they cannot appear in Euclidean signature due to the Euclidean triangle inequalities. We will find an astonishingly simple asymptotic behaviour for the Regge action associated to the Pachner move configurations considered here. The resulting oscillatory amplitudes (for light-cone regular configurations, see the remarks below) allows for a conditional convergence of the path integral and expectation values. This provides further evidence that Lorentzian quantum gravity models can evade the conformal factor problem <cit.>. Another aspect we will reveal here, is the frequent occurrence of light-cone irregular configurations in the regime of large edge lengths. Light-cone irregularities generically appear for spacetimes which describe topology change <cit.> and are important for the derivation of entropy from the Lorentzian path integral <cit.>. 
They are characterized by points on the manifold, with more or less than two light cones, for example trouser or yarmulke configurations. Light-cone irregularities have also been encountered for triangulations describing cosmological evolution <cit.>, where they appeared for small bulk edge lengths only. In contrast, in this work, we will encounter light-cone irregularities for a regime where the bulk edges can become arbitrarily large. Light-cone irregularities lead to imaginary terms in the action with an ambiguous sign. Depending on this sign, the amplitudes are therefore either suppressing or enhancing. More precisely, the Regge action has branch cuts along configurations with such light-cone irregularities. For a complete description of the Lorentzian path integral one needs to specify whether to integrate over such configurations, and, if this is the case, on which side of the branch cuts the integration contour is placed. Clearly, in the case of infinitely long branch cuts one has to choose the side that leads to an exponential suppression of these amplitudes. Interestingly, this is the opposite choice to the one needed to obtain the correct entropy from the Lorentzian path integral, e.g., for de Sitter space <cit.>. In summary, we will find evidence that the Lorentzian (Regge) path integral is well-defined, leading to finite expectation values, and avoids the conformal factor problem of Euclidean models. However, we will also encounter a not yet well understood feature of the Lorentzian path integral, namely, light-cone irregularities in the regime of large edge lengths. This illustrates that there are still many open questions regarding the Lorentzian path integral for quantum gravity. Our paper is structured as follows. Section <ref> is devoted to the geometry of Lorentzian simplices. In Subsection <ref> we introduce the complex Regge action and the notion of light-cone (ir-)regular configurations. In Subsection <ref> we discuss the generalized triangle inequalities which constrain the signed squared volumes of a simplex and its subsimplices. Subsequently, in Subsection <ref> we analyse the asymptotic scaling of the signed squared volumes in the limit of one and multiple large edges. In the second part of this work, Section <ref>, we study the asymptotics of the Regge action for spine and spike configurations arising in the 3-2 and 4-1 Pachner moves, respectively. In Section <ref> we consider expectation values of powers of the length variables and establish their convergence properties. We close with a discussion in Section <ref>. § LORENTZIAN GEOMETRY OF SIMPLICES §.§ The complex Regge action Here we provide an overview of the Lorentzian Regge action <cit.>. To this end, we will use the framework of the complex Regge action as developed in <cit.>. A useful review of the (Euclidean, Lorentzian, and complex) simplex geometry, which includes explicit derivations of formulae for key geometric quantities, can be found in the Appendix of <cit.>. Regge calculus <cit.> in d dimensions describes general relativity on a piecewise flat discretized manifold obtained by gluing d-dimensional simplices along shared (d-1)-dimensional subsimplices. For Lorentzian triangulations, the configuration variables of the action are the signed squared lengths s_e = e⃗·e⃗ of the edge vectors e⃗, with the inner product defined by the flat Minkowski metric η = diag(-1,+1,…,+1). 
The complex Regge action [Here we make the choice to define the complex Regge action such that it yields the Lorentzian action for a Lorentzian triangulation.] takes the form <cit.> S = - ∑_h√(𝕍_h)ϵ_h , where the sum runs over bulk and boundary hinges h, i.e. (d-2)-dimensional subsimplices, in the triangulation and 𝕍_h denotes their signed squared volume, which can be computed as a Caley-Menger determinant, cf. Section <ref>. The bulk and boundary deficit angles are defined as ϵ_h^(bulk) = 2π + ∑_σ⊃ hθ_σ, h , ϵ_h^(bdry) = π + ∑_σ⊃ hθ_σ, h , where θ_σ,h are the complex dihedral angles. We note that the choice of the additive constant π for the boundary deficit angle is a convention –π can be also replaced with k ×π/2 with k ∈{0,1,2,3,4}. This selection amounts to a choice of expected boundary type, e.g., whether one expects an approximately flat boundary or rather a corner. Here we will choose k=2 throughout, which corresponds to a flat boundary. The complex dihedral angles θ_σ,h in a simplex σ at a hinge h⊂σ can be expressed as <cit.>θ_σ,h = -log( a⃗·b⃗ -√( (a⃗·a⃗) (b⃗·b⃗)- (a⃗·b⃗)^2 )/√(a⃗·a⃗)√(b⃗·b⃗) ) where a⃗·b⃗= d^2/𝕍_h∂𝕍_σ/∂ s_h̅ ,a⃗·a⃗= 𝕍_ρ_a/𝕍_h ,b⃗·b⃗= 𝕍_ρ_b/𝕍_h . Here ρ_a ⊂σ and ρ_b ⊂σ are the two (d-1)-subsimplices sharing the hinge h, and h̅⊂σ is the edge opposite to h. The arguments z in the square roots √(z) in (<ref>) (as well as in (<ref>)) can be negative, which also holds for the argument of log z. We choose to use the principle branch (z)∈ (-π,π) but need also to specify the branch cut values for the square root and the log. For r∈ℝ_+, we define √(-r)=√(r) and log(-r)=log(r) - π. In other words, for the logarithm, we adopt the branch cut value coming from the lower complex half plane, and for the square root we adopt the branch cut value coming from the upper complex half plane. As derived in <cit.>, this corresponds to a choice where we complexify the squared edge vectors as s_e → s_e +ε and take the limit ε→ 0. A key advantage of using the complex dihedral angles is that they capture both Euclidean and Lorentzian angles. Both types of angles can occur in the Lorentzian Regge action: The dihedral angles θ_σ,h are constructed by projecting a d-simplex σ onto the plane orthogonal to the hinge h, and by considering the 2D angle between the projections of the two (d-1)-subsimplices that meet at the hinge h. In the case that the hinge h is timelike, the plane onto which we project is spacelike, and therefore we have a Euclidean angle. In the case that the hinge h is spacelike, the plane onto which we project is timelike, resulting in a Lorentzian angle. Note that we do not need to consider the null case, as the deficit angles are multiplied by the volume of the hinges. Therefore, null hinges do not contribute to the Regge action. If the data {a⃗·b⃗,a⃗·a⃗,a⃗} defines a Euclidean angle (i.e., these inner products can be realized as inner products between two vectors in the Euclidean plane), the complex angle (<ref>) reproduces minus the Euclidean angle, denoted as θ_σ,h=-ψ^E_σ,h. The (bulk) deficit angle therefore amounts to the usual Euclidean deficit angle ϵ^E_h=2π-∑_σ⊃ hψ^E_σ,h. This deficit angle is then multiplied by the square root of the negative volume squared for the (timelike) hinge. The resulting imaginary number is multiplied by - in (<ref>). This shows that the contribution of timelike hinges to the Regge action is real. 
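When evaluating the complex dihedral angles numerically, it is convenient to encode the branch conventions just stated as explicit helper functions. The sketch below (Python, illustrative only) reads those conventions as √(-r) = i√(r), the value on the negative real axis approached from the upper complex half plane, and log(-r) = log(r) - iπ, the value approached from the lower half plane, and checks them against a small complex shift of the argument.

```python
import numpy as np

def sqrt_branch(z):
    """Principal square root with the branch-cut value on the negative real
    axis taken from the upper half plane: sqrt(-r) = +i*sqrt(r) for r > 0."""
    z = complex(z)
    if z.imag == 0.0 and z.real < 0.0:
        return 1j * np.sqrt(-z.real)
    return np.sqrt(z)

def log_branch(z):
    """Principal logarithm with the branch-cut value on the negative real
    axis taken from the lower half plane: log(-r) = log(r) - i*pi for r > 0."""
    z = complex(z)
    if z.imag == 0.0 and z.real < 0.0:
        return np.log(-z.real) - 1j * np.pi
    return np.log(z)

eps = 1.0e-12
print(sqrt_branch(-4.0), np.sqrt(complex(-4.0, eps)))    # 2i in both cases
print(log_branch(-np.e), np.log(complex(-np.e, -eps)))   # 1 - i*pi in both cases
```

On a light-cone irregular configuration, adopting the opposite side of the logarithm's branch cut flips the sign of the resulting imaginary contribution, which is precisely the two-sided ambiguity of the Lorentzian action discussed below.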
On the other hand, we can clearly apply the same formulae to a Euclidean triangulation, i.e., a triangulation satisfying the Euclidean generalized triangle inequalities, see Section <ref>. The Euclidean Regge action is defined as S^E = -∑_h√(𝕍_h) ϵ^E_h . Thus, the complex Regge action evaluates to S= S^E for Euclidean data. Let us come back to the case of a spacelike hinge in a Lorentzian triangulation. In this case, the data {a⃗·b⃗,a⃗·a⃗,a⃗} defines a Minkowskian angle, i.e., there exist embeddings of a⃗,b⃗ into two-dimensional Minkowski space, which reproduce the given inner products. The complex angles (as defined in (<ref>) will have the following structure: θ_σ,h=-( β_σ,h- m_σ,h π/2)<cit.>. Here, β_σ,h∈ℝ and m_σ,h∈{0,1,2} give the number of light rays included in the (convex) wedge between a⃗ and b⃗, see Fig. <ref>. We therefore obtain the Lorentzian (bulk) deficit angle as follows: ϵ^L_σ,h= 2π-π/2(∑_σ⊃ h m_σ_h) - ∑_σ⊃ hβ_σ,h . Thus, we have a purely imaginary deficit angle ϵ^L_σ,h if the sum over the m_σ,h is equal to 4, meaning that the sum of the dihedral angles includes exactly four light rays and, therefore, two light cones. We will refer to spacelike hinges that satisfy this condition as light-cone regular. Timelike hinges are light-cone regular by definition. The contribution of a regular hinge to the Regge action (<ref>) is therefore real. On the other hand, if the number of light ray crossings in the angle associated to a given hinge h, N_h=∑_σ⊃ h m_σ_h , is smaller or larger than 4, we obtain a negative or positive imaginary contribution to the Regge action, respectively. It is important to note that the sign of these imaginary contributions for the Lorentzian Regge action depends on the choice of conventions, as we outlined below equation (<ref>). We noted that these conventions amount to defining the Regge action by complexifying the edge length squared as s_e → s_e +ε and taking the limit ε→ 0. Alternatively, we can consider a complexification s_e → s_e -ε and take the limit ε→ 0 <cit.>. This would give the opposite sign for the imaginary contributions in the Lorentzian Regge action. This shows that for all Lorentzian data which leads to light-cone irregular hinges, the Regge action (<ref>) has branch cuts. The Regge action on opposite sides of the branch cuts just differs in the sign of the imaginary terms coming from light-cone irregular hinges. Therefore, the choice of sign corresponds to a choice of branch cut side <cit.>. On the other hand, if all hinges are light-cone regular and not null, the Regge action is analytic in an open (complexified) neighbourhood around the corresponding point in the configuration space of length squares. One might be surprised by the appearance of these imaginary terms in the Lorentzian Regge action and wonder whether an alternative definition is possible which avoids them. The imaginary terms can, however, be reproduced via analytic continuation from two different starting points: one can first apply a generalized Wick rotation and construct the Lorentzian action via analytical continuation from the Euclidean action <cit.>. Second, one can start from Lorentzian data which is light-cone regular, and (using a slight deviation into complexified edge length squared to go around branch points) analytically continue to data with light-cone irregularities, see <cit.>. §.§ Generalized triangle inequalities Next, we will discuss the generalized Euclidean and Lorentzian triangle inequalities. 
These ensure that a simplex with given signed length squares can be embedded into flat Euclidean or Minkowskian space, respectively, and will ultimately set the integration bounds of the bulk edges in the path integral. The inequalities for a simplex σ can be formulated as conditions on the signed squared volume 𝕍_σ for σ and the signed squared volumes 𝕍_ρ for all its subsimplices ρ. The signed squared volume of a d-dimensional simplex σ^d=(012⋯ d) can be computed as follows, 𝕍_σ^d = (-1)^d+1/ 2^d (d!)^2( 0 1 1 1 ⋯ 1 1 0 s_01 s_02 ⋯ s_0d 1 s_01 0 s_12 ⋯ s_1d ⋮ ⋮ ⋮ ⋮ ⋱ ⋮ 1 s_0d s_1d s_2d ⋯ 0 ) , where s_ij is the signed squared length of the edge between vertices i and j. The signed squared volume determines the spacelike, null, or timelike nature of a simplex. A simplex σ is timelike if 𝕍_σ < 0, null if 𝕍_σ =0, and spacelike if 𝕍_σ > 0. A Euclidean or spacelike (non-degenerate) simplex σ must obey the generalized triangle inequalities, 𝕍_σ >0, 𝕍_ρ>0 , for the simplex σ itself and for all subsimplices ρ. For example, for a spacelike triangle (012), the triangle inequality (for a non-degenerate triangle) requires that the squared area and the squared lengths are positive, 𝕍_(012)>0, s_01>0, s_02>0, s_12>0 , where the first inequality is equivalent to the well-known triangle inequality, namely that the sum of the lengths of each pair of edges has to be larger than the length of the remaining edge. For a Lorentzian d-simplex σ^d in d-dimensional spacetime, the subsimplices ρ⊂σ^d can be timelike, spacelike or null. In this case, the generalized triangle inequalities (<ref>) state that, if a subsimplex ρ'⊂σ^d is timelike or null, then all subsimplices ρ⊂σ^d containing this subsimplex ρ', i.e., ρ'⊂ρ, must not be spacelike <cit.>. In particular, a timelike subsimplex cannot be embedded in a spacelike subsimplex. Therefore, for a Lorentzian d-simplex σ, we have the condition that 𝕍_σ^d < 0 (non-degeneracy) and the following requirement: if there is a subsimplex ρ' with 𝕍_ρ'≤ 0, then all subsimplices ρ with ρ' ⊂ρ need to satisfy 𝕍_ρ≤ 0, i.e., ρ⊂σ , 𝕍_ρ≤ 0 ⇒ ∀ρ' ⊃ρ: 𝕍_ρ'≤ 0 . As an example, consider a timelike tetrahedron (0123) in three-dimensional Lorentzian spacetime with a timelike edge (01) and all other edges spacelike. Then, the triangle inequalities demand 𝕍_(0123)<0, 𝕍_(012)≤ 0, 𝕍_(013)≤ 0 , as the other two triangles (023) and (123) can be either spacelike, null, or timelike. The generalized triangle inequalities dictate whether we are allowed to scale one or several edges of a simplex to become large. Consider, for instance, a Euclidean triangle or more generally a Euclidean d-simplex. Here, we cannot scale one of the edges large while keeping the other two edges fixed, as this violates the Euclidean triangle inequality. However, such scaling is possible for a timelike triangle (or more generally a timelike d-simplex). Indeed, consider the case of a timelike (non-degenerate) triangle with only spacelike edges or with only timelike edges. In this case, the Lorentzian triangle inequalities impose the opposite of the Euclidean inequality: they demand that the length of one edge is greater than the sum of the lengths of the other edges. In the case of one spacelike edge and two timelike edges (or one timelike edge and two spacelike edges), the Lorentzian triangle inequality is always satisfied. 
This follows from the expression for the area square of a triangle, 𝕍_(012) = -1/16 (s_01^2+s_02^2+s_12^2-2s_01s_02-2s_01s_12-2s_02s_12) = -1/16 (s_01^2+ (s_02-s_12)^2-2s_01s_02-2s_01s_12) , when we consider the edge (01) to be spacelike (timelike) and the other edges to be timelike (spacelike). §.§ Simplex geometry in the limit of large edges As described in the introduction, we are interested in the asymptotic behaviour of the Regge action for small simplicial complexes with boundaries, where we keep the boundary edge length fixed while scaling the bulk edge(s) to become large, if permitted by the generalized triangle inequalities. In the following, we will focus on the geometry of a d-simplex σ^d = (01⋯ d) with vertices 0,…,d, characterized by its squared edge lengths. We will consider the asymptotic behaviour of the volume square when we scale one or several edges of this simplex to be large. Using the asymptotic behaviour of the volumes for the d-simplex and its various subsimplices, we can then consider the asymptotic behaviour of the dihedral angles. To that end, we use the following formulae for the dihedral angles, which can be derived from (<ref>), see <cit.>: sin(θ_σ, h) = -d/d-1√(𝕍_h)√(𝕍_σ)/√(𝕍_ρ_a)√(𝕍_ρ_b) ,cos(θ_σ, h) = d^2 /√(𝕍_ρ_a)√(𝕍_ρ_b)𝕍_σs_h̅ . The derivation of these relations exploits that the derivative of the squared volume with respect to a squared edge length can be expressed in terms of the squared volume of the simplex and of its subsimplices <cit.>, (∂𝕍_σ/∂ s_i j)^2=1/d^4𝕍_i̅𝕍_j̅-1/d^2(d-1)^2𝕍_σ 𝕍_i j . Therefore, we can compute the asymptotic behaviour of geometric quantities, such as the dihedral angles, from the asymptotic behaviour of the squared volumes of the corresponding simplex and its subsimplices. We have already expressed the signed squared volume of a d-dimensional simplex σ^d = (0⋯ d) with vertices {0,…,d} via the determinant of the associated Caley-Menger matrix for the signed squared edge lengths <cit.>, 𝕍_σ^d = - (-1)^d/ 2^d (d!)^2( 0 1 1 1 ⋯ 1 1 0 s_01 s_02 ⋯ s_0d 1 s_01 0 s_12 ⋯ s_1d ⋮ ⋮ ⋮ ⋮ ⋱ ⋮ 1 s_0d s_1d s_2d ⋯ 0 ) ≡ - (-1)^d/ 2^d (d!)^2(C) . Making use of Laplace's expansion for the determinant of a (d+1)× (d+1) matrix C and expanding around an arbitrary row i, we can express it as: (C) = ∑_j=1^d+1 (-1)^i+jC_ij(C̃_ij) . Here, C_ij denotes the (ij)-th entry of C, and C̃_ij is the determinant of the submatrix of C obtained by removing the i-th row and j-th column of C. Using Laplace's expansion, it is straightforward to separate the terms including large edge lengths and determine their asymptotic behaviour. §.§.§ Volumes in the limit of one large edge Here, we consider the situation with one large edge (01). The squared volume 𝕍_σ^d of any d-dimensional simplex σ^d=(012⋯ d) containing the edge (01) will be a polynomial of at most quadratic order in s_01, given by 𝕍_σ^d = a s_01^2 + b s_01 + c, where a, b and c are independent of s_01. By applying Laplace's formula (<ref>) repeatedly, we can determine the first coefficient as 𝕍_(012⋯ d) = -1/4 d^2(d-1)^2𝕍_01 s_01^2 + O(s_01^1) , where 𝕍_01 is the signed squared volume of the subsimplex of σ^d obtained by removing the vertices (0) and (1). Thus, 𝕍_σ^d is of order s_01^2 if 𝕍_01≠ 0. In what follows, we assume this requirement to be satisfied. We note that (<ref>) relates the signature of the d-dimensional squared volume to the signature of 𝕍_01. That is, if σ^d is timelike (spacelike), the subsimplex (23… d) needs to be spacelike (timelike). 
If this is not the case, the triangle inequalities do not allow us to scale the edge (01) to become large. We see that scaling only one edge length to be large is only possible for timelike (or null) simplices. For example, with d=2 for a triangle t = (012), we have: 𝕍_(012)= - 1/16 s_01^2 + O(s_01^1) . We thus see that, as discussed above, a triangle with one very large edge has to be timelike. For a tetrahedron τ = (0123), we have: 𝕍_(0123)= - 1/144s_23 s_01^2 + O(s_01^1) . We note that this equation implies that the tetrahedron has to be timelike (or null), and s_23 has to be spacelike (or null). (If s_23 is timelike, the volume square for the tetrahedron would be positive, indicating a spacelike tetrahedron. But a spacelike tetrahedron cannot include a timelike edge.) §.§.§ Volumes in the limit of multiple large edges We will consider again a d-simplex (01… d) and now scale all edges (0i) with i∈1,…,d to become large. Scaling multiple edges large requires us to specify how to do so. We could choose a multiplicative scaling s_0i=λ s_0i^0 or an additive scaling s_0i=s_0i^0 ±λ and consider the limit λ→∞. In the second case, the leading order of the volumes will not retain the “initial values" s_0i^0, and we will indeed find simpler formulae compared to the multiplicative case. Note that the choice of additive scaling implements a form of symmetry reduction, in the sense that we consider all large squared edge lengths to have (approximately) the same modulus λ. We start by applying an additive scaling and consider a triangle (012). Its squared volume can be written as 𝕍_(012) = -1/16s_01^2 - 1/16s_02^2 + 1/8s_01 s_02 + 1/8s_01 s_12+ 1/8s_02 s_12- 1/16s_12^2 . If s_01=s_02=±λ, i.e., both edges adjacent to the vertex (0) are either timelike or both are spacelike, the terms quadratic in λ cancel out. The dominant term is therefore linear in λ and is given by 𝕍_(012) = ±1/4λ s_12 + 𝒪(λ^0) . We see that if s_01=s_02=-λ, with large λ, the triangle (012) has to be timelike (or null), and therefore the edge (12) has to be spacelike (or null). If s_01=s_02=+λ, the signature of the edge (01) and the triangle (012) need to agree. If in turn s_01=-s_02=±λ, i.e., the two edges adjacent to (0) have different signatures, the dominant term is quadratic in λ, 𝕍_(012) = -1/4λ^2 + 𝒪(λ^1) .   Next, we consider the squared volume of a tetrahedron (0123). For the scaling s_0i = σ_i λ with σ_i=± 1 and i=1,2,3, we can expand the volume of the tetrahedron (0123) as 𝕍_(0123) = 1/144(∑_i<j, k≠ i,j (-s_ij + s_ik + s_jk)σ_i σ_j - s_ij σ_k^2)λ^2 +1/144(∑_i<j, k≠ i,j (-s_ij + s_ik + s_jk)s_ij σ_k )λ + 𝒪(λ^0) . If the signature of all the large edges s_0i=±λ agrees, the terms quadratic in λ cancel out, and we are left with 𝕍_(0123) = ±1/9𝕍_(123) λ + 𝒪(λ^0) . If all large edges are timelike, the tetrahedron has to be timelike (or null), and the triangle (123) has to be spacelike (or null). In the case where all large edges are spacelike, the signature of the tetrahedron agrees with the signature of the triangle (123). This means that in a three-dimensional Lorentzian triangulation, the triangle (123) has to be timelike. Let us now consider the case where the sign of s_01=±λ differs from the sign of the other two edges s_02=s_03=∓λ. The leading term is then quadratic in λ, and we have: 𝕍_(0123) = -1/3^2× 2^2 s_23 λ^2 + O(λ^1) . The tetrahedron (0123) has to be timelike, and therefore s_23 has to be spacelike. All other cases with mixed signatures can be generated by renaming the vertices. 
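The additive-scaling asymptotics above are easy to verify numerically from the Cayley-Menger determinant (<ref>). The following Python sketch is illustrative only (the edge lengths are arbitrary choices): it evaluates the signed squared volumes and checks that, for a tetrahedron with three equal large timelike edges s_01 = s_02 = s_03 = -λ attached to a unit spacelike triangle (123), the volume approaches -(1/9) 𝕍_(123) λ, in line with (<ref>).

```python
import numpy as np
from math import factorial

def signed_squared_volume(S):
    """Signed squared volume of a d-simplex from the matrix of signed squared
    edge lengths S[i, j] = s_ij, using the Cayley-Menger determinant with the
    prefactor (-1)^(d+1) / (2^d (d!)^2) given in the text."""
    d = S.shape[0] - 1
    C = np.zeros((d + 2, d + 2))
    C[0, 1:] = C[1:, 0] = 1.0
    C[1:, 1:] = S
    return (-1.0) ** (d + 1) / (2.0 ** d * factorial(d) ** 2) * np.linalg.det(C)

lam = 1.0e6   # large timelike squared edge length, arbitrary illustrative value
S_tet = np.array([[0.0, -lam, -lam, -lam],
                  [-lam, 0.0,  1.0,  1.0],
                  [-lam, 1.0,  0.0,  1.0],
                  [-lam, 1.0,  1.0,  0.0]])
S_tri = np.array([[0.0, 1.0, 1.0],
                  [1.0, 0.0, 1.0],
                  [1.0, 1.0, 0.0]])

V_tet = signed_squared_volume(S_tet)   # equals (3*(-lam) - 1)/144 for this family
V_tri = signed_squared_volume(S_tri)   # = 3/16 for the unit spacelike triangle
print(V_tet, -V_tri * lam / 9.0)       # both approach -lam/48 for large lam
```

The same routine can be used to check the other cases listed above, for instance the quadratic growth 𝕍_(0123) = -(1/36) s_23 λ^2 + O(λ) when the sign of s_01 differs from that of s_02 and s_03.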
Let us now consider the choice of a multiplicative scaling of the edges, s_0i = λ s^0_0i and the limit λ→∞. The volume of a triangle (012) with squared edge lengths s_0i = λ s_0i^0 where i=1,2 and s_12 is given by 𝕍_(012) = -1/16(s^0_01-s^0_02)^2 λ^2 + 1/8(s^0_01+s^0_02)s_12 λ - 1/16s_12^2 . Thus, in this case, assuming s^0_01≠ s^0_02 (otherwise, the multiplicative scaling is equivalent to the additive scaling for a redefined λ), the triangle will always be timelike in the limit λ→∞. This condition is far more restrictive than the additive scaling, where one can have spacelike triangles (if all edges involved are spacelike). The multiplicative scaling is particularly not applicable in Euclidean signature. Proceeding with the squared volume of a tetrahedron with large edge lengths s_0i=λ s_0i^0, to leading order, it can be expressed as 𝕍_(0123) = - 1/12^2(s_12(s_01^0 - s_03^0 )(s_02^0 - s_03^0) + s_13(s_02^0- s_01^0)(s_02^0 - s_03^0) + s_23(s_01^0 - s_02^0)(s_01^0- s_03^0) )λ^2 + 𝒪(λ^1) . Considering the case of three spacetime dimensions with Lorentzian signature, the tetrahedron has to be timelike. Thus, the generalized triangle inequalities are only satisfied in the large-λ regime, if the term in the outer brackets is positive. This constitutes a rather non-trivial condition on the squared edge lengths {s_12,s_13,s_23} and {s_01,s_02,s_03}. In summary, we see that there are rather involved conditions to accommodate for a limit of infinite squared edge lengths with a multiplicative scaling. Therefore, we will consider only the additive scaling for the asymptotic analysis of the Regge action. § REGGE ACTION ASYMPTOTICS Here we will consider the asymptotic behaviour of the Regge action for certain triangulations with boundaries, which contain one or several bulk edges and where we scale these bulk edges to be large. In particular, we will consider the initial configurations of the 3-2 Pachner move, which include one bulk edge, as well as the initial configurations of the 4-1 Pachner move, which include four bulk edges. The 3-2 Pachner move can lead to spine configurations, whereas the 4-1 Pachner move can lead to spike configurations. The initial configuration for the 3-2 Pachner move consists of three tetrahedra sharing one bulk edge. The boundary of this initial configuration can also serve as the boundary of two tetrahedra sharing a triangle (with a caveat discussed below). This setup represents the final configuration of the 3-2 Pachner move, see Fig. <ref>. The initial configuration for the 4-1 Pachner move consists of four tetrahedra sharing one bulk vertex. This initial configuration includes four bulk edges. The boundary of this initial configuration can also serve as the boundary of one tetrahedron (with a caveat discussed below), which serves as the final configuration for the 4-1 Pachner move, see Fig. <ref>. The final configurations of these Pachner moves, i.e., two tetrahedra sharing a triangle or a single tetrahedron, can be embedded into flat three-dimensional Minkowski space, if the generalized Lorentzian triangle inequalities hold for the tetrahedra in the final configuration. We can then construct a classical solution for the (squared) lengths of the bulk edges in the initial configurations. By using the embedding of the final configuration of a Pachner move into flat space, one can compute the lengths of the bulk edges in the initial configuration. As one uses a flat embedding, the deficit angles at all bulk edges vanish. 
This satisfies the three-dimensional Regge equation of motion. Consequently, the Regge action for the initial configuration evaluated on this flat solution is equal to the Regge action for the final configuration. Note, however, that there are cases where the Lorentzian generalized triangle inequalities can be satisfied for the initial configuration of a Pachner move (with some range of edge lengths allowed for the bulk edges), but not for the final configuration. It might still be possible that the triangle inequalities are satisfied for tetrahedra with a different spacetime signature, e.g., Euclidean signature. In such cases, one can construct a solution to the equations of motion, which describes a simplicial complex of different signature. These solutions can still play a role in the path integral as saddle points in a complexified configuration space, as discussed in, for example, in  <cit.>. In Lorentzian signature, there are more possibilities to allow for very large lengths of bulk edges compared to Euclidean signature. In the initial 3-2 Pachner move configuration, the Euclidean triangle inequalities only allow for bounded bulk edge lengths (if one keeps the boundary edge lengths fixed). In contrast, we will see that with the Lorentzian triangle inequalities, one can have unbounded edge lengths and either allow for large spacelike or large timelike edges. The initial 4-1 Pachner move configurations allow for unbounded bulk edge lengths in both Euclidean and Lorentzian signature. In Lorentzian signature, we differentiate between cases where there are only spacelike or only timelike bulk edges, or a mixture of spacelike and timelike bulk edges. We will see that all cases with large spacelike edges lead to light-cone irregular configurations, and thus, in these cases, the action acquires imaginary terms. §.§ 3-2 Pachner move The 3-2 Pachner move starts with a configuration of three tetrahedra (0123),(0124), and (0134) sharing one edge (01). One integrates over the squared edge length variable s_01 and thereby “removes the edge (01)", see Fig. <ref>. The classical equations of motion demand that the deficit angle associated with the edge (01) vanishes, indicating that the triangulation has a flat bulk. The final configuration of this Pachner move can therefore be interpreted as two tetrahedra (0234) and (1234) glued along the triangle (234). Here, we will consider the asymptotic regime for the configuration of three tetrahedra sharing an edge, particularly when this edge length is very large, which we call a “spine" configuration. Such an asymptotic regime is not possible in Euclidean signature, where the bulk edge length is bounded by the boundary edge lengths due to the Euclidean triangle inequalities. In Lorentzian signature, however, we can have an arbitrarily large length for the bulk edge. As we keep the boundary edge lengths fixed, all triangles sharing this unbounded edge must be timelike. §.§.§ Two examples Let us first consider two examples of such a Pachner move. These examples will illustrate the general result obtained further below.   (A): The shared edge is timelike: We first consider the case where the shared edge is timelike. To start, we consider one tetrahedron with a time like edge (01), whose length we will let go to infinity. We assume that the edges (0i) and (1j), with i,j=2,3, have all the same squared length, and that the edge (23) is spacelike. 
We embed this tetrahedron into Minkowski space using the following placement of its vertices, (0):(-t,0,0); (1):(t,0,0); (2):(0,x,b); (3): (0,x,-b) . Now, if we send t to infinity, we also need to adjust x such that x(t)=√(s_02-b^2+t^2), ensuring that the squared edge length s_02 (and therefore all other s_0i and s_1j) remain constant. This requires s_02>b^2-t^2 to obtain a real coordinate x. This inequality is indeed imposed by the Lorentzian triangle inequalities. The dihedral angle θ_(0123),(01) at the edge (01) is given by (minus) the Euclidean angle between (0,x,b) and (0,x,-b). It approaches 0 in the limit as t (and therefore x) goes to infinity. The dihedral angle θ_(0123),(23) at the edge (23) can be computed as θ_(0123),(23) = -log(b^2-s_02-2 t^2+2 √(-t^2 (-b^2+s_02+t^2))/√(b^2-s_02)√(b^2-s_02)) -log(-4t^2/√(b^2-s_02)√(b^2-s_02)) = 𝒪(log (s_01)) , which scales as 𝒪(log s_01)=𝒪(log t) for t→∞. The remaining dihedral angles are all equal to the dihedral angle θ_(0123),(02) at the edge (02), which can be computed as θ_(0123),(02) = -log(-b t^2-√(-s_02 t^2 (-b^2+s_02+t^2))/√(s_02-b^2)√(-t^2 (s_02+t^2))) -log((b - √(s_02))/√(s_02-b^2)) =𝒪(s_01^0) , and which scales as 𝒪(s_01^0) for t→∞. We now consider three such tetrahedra (0123),(0124),(0134) which share the edge (01). With the above asymptotic behaviour of the dihedral angles and the form of the Regge action (<ref>), we see that the leading-order contribution to the action comes from the term √(s_01)ϵ_01. The deficit angle ϵ_01=2π+∑_i<jθ_(01ij),(01) with i,j=2,3,4 approaches 2π. This leads to S^3-2= 2π√(|s_01|)+ 𝒪(log s_01) . Note that the leading term in the action is real. Fig. <ref> shows the dihedral angle and deficit angle at the edge (01), for an example with b^2=5/4 and s_02=1 [Unless stated otherwise, we set ℓ_P=1 and express geometric quantities, such as length squares and areas, in terms of Planck units.]. It also compares the exact Regge action with the asymptotic expression (<ref>).   (B): The shared edge is spacelike: Next, we consider a tetrahedron with a spacelike edge (01) and similar symmetries as above. Here we can use embedding coordinates: (0):(0,-x,0); (1):(0,x,0); (2):(t,0,-b); (3):(t,0,b) . If we send x to infinity, we need to adjust t as t(x)=√(b^2-s_02+x^2). The dihedral angle at (01) is now a Lorentzian angle between the vectors (t,0,-b) and (t,0,b). This angle approaches 0 in the limit of large x (and therefore t). Note that this angle is a “thin" Lorentzian angle as it does not contain any light ray crossings. Gluing three such tetrahedra along the edge (01), the angle around (01) also does not include any light ray crossings. The edge (01) is therefore light-cone irregular. (All vectors orthogonal to the edge (01) are timelike; therefore, the edge represents an initial or final singularity.) As we explained in Section <ref>, light-cone irregular configurations lead to imaginary terms in the action. The dihedral angle θ_(0123),(23) at the edge (23) can be computed as θ_(0123),(23) = -log((-b^2+s_02-2 x^2)-2 √(-x^2 (b^2-s_02+x^2))/√(s_02-b^2)√(s_02-b^2)) 𝒪(log(s_01)) , and scales as 𝒪(log s_01) for x→∞. The remaining dihedral angles are all equal to the dihedral angle θ_(0123),(02) at the edge (02), which can be computed as θ_(0123),(02) = -log(b x^2- √(-x^2 s_02(b^2-s_02+x^2))/√(s_02-b^2)√(x^2s_02-x^4)) -log(-b+√(s_02)/√(s_02-b^2)) = 𝒪(s^0_01) , and scales as 𝒪(s_01^0) for x→∞. 
Thus, the dominant contribution in the Regge action comes again from the term √(s_01)ϵ_01, and the asymptotic behaviour of the Regge action is given by S^3-2=- 2π√(|s_01|) + 𝒪(log s_01) , with the leading term being imaginary. Note that with our conventions in Section <ref>, we defined the action along a specific side of the branch cuts which appear for light-cone irregular configurations. Opposite sides of the branch cuts just differ in their sign for the imaginary term. If we consider a path integral for this configuration, we would need to integrate over the length square s_01. In this case, we can choose the integration contour along the branch cut such that the integral converges. Fig. <ref> shows an example of such a configuration, with s_0i=s_1i=s_ij=1, for i,j=2,3,4. The triangle inequalities then require s_01>3. These configurations are light-cone irregular at the bulk edge for all allowed values of s_01>3. We note that this family of configurations of three tetrahedra of type (<ref>) sharing an edge (01) does not admit a classical solution with s_01>0. In the case where s_23=s_24=s_34 are spacelike and s_0i=s_1i, for i=2,3,4, timelike, the triangle inequalities for the final configuration of the 3-2 Pachner move are satisfied. But the classical solution for the bulk edge in the initial configuration of the Pachner moves demands that s_01 is timelike (with the length given by twice the height in the tetrahedron (1234)). In the case that s_23=s_24=s_34 is spacelike and s_0i=s_1i, for i=2,3,4, is spacelike, the Lorentzian triangle inequalities for the final configuration of the 3-2 Pachner move demand that 3s_02<s_23. In this scenario, the classical solution for the bulk edge also requires that s_01 is timelike (with the length being twice the height in the tetrahedron (1234)). The family (<ref>) also allows configurations with 3s_02>s_23. Thus, the Lorentzian triangle inequalities for the initial configuration of the 3-2 Pachner move are satisfied (for an appropriate choice of the bulk length). However, for 3s_02>s_23, the Lorentzian triangle inequalities are not satisfied for the final configuration of the Pachner move. Instead, we have a situation where the Euclidean triangle inequalities are satisfied. This means we have a “tunneling" solution for the bulk length. The (analytically continued) Regge action evaluates to plus or minus times the Euclidean Regge action on such tunneling solutions. (See <cit.> for a detailed construction of such analytically continued actions.) So far, we have considered the case where the three tetrahedra sharing an edge have the same geometry. By relaxing this assumption, we can construct cases in which the configurations are light-cone regular for a certain regime of the squared edge length s_01>0. To be specific, we choose the same geometry for the tetrahedron (0123) as in (<ref>). To determine the geometry of the other two tetrahedra (0124) and (0134), we introduce a vertex (4) at the coordinates (-t', 0, 0), thereby constructing an embedding of the three-tetrahedral complex into flat space. We then adopt the boundary lengths from this flatly embedded configuration and vary s_01 while keeping the boundary edge lengths fixed. Fig. <ref> shows the Regge action for an example with boundary edge lengths s_02 = s_03 = s_12 = s_13 = 1, s_04 = s_14 = 0.15, s_23 = 5, and s_24 = s_34≃ 0.203. The configurations are light-cone regular at the bulk edge for the range of s_01 where the imaginary part of the action remains constant. 
This constant imaginary part is caused solely by the boundary edges and can be absorbed by redefined the boundary deficit angle, see the discussion below (<ref>). §.§.§ General analysis The previous examples also capture the asymptotic behaviour in the general case. To see this, we first establish the asymptotic behaviour of the dihedral angles in a tetrahedron (0123) as the squared edge length s_01 becomes large. Using the formulae (<ref>) for the sine and cosine of the dihedral angles in terms of volumes, and the asymptotic expressions for the volumes as discussed in Section <ref>, we find: * Dihedral angle at the (bulk) edge (01):sin(θ_(0123),(01)) = -3/2√(s_01)√(-1/144s_23s_01^2)/√(-1/16s_01^2)√(-1/16s_01^2) +𝒪(s_01^-3/2) = 𝒪(s_01^-1/2) → 0 , s_01→±∞ . On the other hand, if we consider the cosine of the same angle, we obtain cos(θ_(0123),(01)) = 3^2 -1/144s_01^2/√(-1/16s_01^2)√(-1/16s_01^2) + 𝒪(s_01^-1) = 1+ 𝒪(s_01^-1) . Thus the dihedral angle at the edge (01) goes to zero in the limit of a large squared edge length s_01→±∞. This behaviour is illustrated in Fig. <ref> (left) for a specific choice of edge lengths compatible with the generalized triangle inequalities (<ref>). * Dihedral angles at the (boundary) edges (0i) and (1i), i=2,3: Consider for example sin(θ_(0123),(02)) = -3/2√(s_02)√(-1/144s_23s_01^2)/√(𝕍_(023))√(-1/16s_01^2) +𝒪(s_01^-1) = 𝒪(s_01^0) , s_01→±∞ . In the Regge action, this boundary dihedral angle is multiplied by the boundary edge length. This leads to a 𝒪(s_01^0) term, which is subleading compared to the term coming from the bulk edge. * Dihedral angle at the (boundary) edge (23):sin(θ_(0123),(23)) = -3/2√(s_23)√(-1/144s_23s_01^2)/√(𝕍_(023))√(𝕍_(123)) +𝒪(s_01^0) = 𝒪(s_01^1) , s_01→±∞ . The dihedral angle θ_(0123),(23) therefore grows as 𝒪(log(s_01)). (Remember that arcsin(x) = -log( x + √(1-x^2)).) This still leads to a subleading term in the Regge action, compared to the term coming from the bulk edge. We can now proceed to determine the asymptotic behaviour for the deficit angles. Remembering that these are defined as ϵ_h^(bulk) = 2π + ∑_σ⊃ hθ_σ, h and ϵ_h^(bdry) = π + ∑_σ⊃ hθ_σ, h, we obtain the following: * Deficit angle at the bulk edge (01):ϵ_(01) = 2π + θ_(0123),(01) + θ_(0124),(01)+ θ_(0134),(01)→ 2π , s_01→±∞ . Thus, as a consequence of the three-dimensional dihedral angles approaching zero asymptotically, the bulk deficit angle approaches 2π. This is illustrated in Fig. <ref> (left) for a specific choice of edge lengths. * Deficit angles at the boundary edges (0i) and (1i), i=2,3,4:ϵ_(02) = π + θ_(0123),(02) + θ_(0124),(02)= 𝒪(s_01^0) , s_01→±∞ . * Deficit angle at the boundary edges (ij), i,j=2,3,4, i<j:ϵ_(23) = π + θ_(0123),(23)= 𝒪(log(s_01)) , s_01→±∞ . Thus, with the Regge action given by S= ∑_h√(𝕍_h)ϵ_h, we conclude S^3-2 = - √(s_01) ϵ_(01) + 𝒪(log s_01) = - 2π√(s_01) + 𝒪(log s_01) , s_01→±∞ . We thus confirm the behaviour found in the two examples of Section <ref>. For the case of a shared timelike edge, the leading-order term in the Regge action is real. For the case of a shared spacelike edge, the leading-order term is imaginary. This reflects that the spacelike bulk edge is light-cone irregular, as the angle around it includes zero light rays. §.§ Generalization to N tetrahedra sharing an edge We can easily generalize the considerations for the 3-2 Pachner move configuration to a configuration of N tetrahedra sharing an edge. 
In fact, the asymptotic behaviour for the Regge action is given by S^N tetra = - √(s_01)ϵ_(01) + 𝒪(log s_01) = - 2π√(s_01) + 𝒪(log s_01) , s_01→±∞ , and thus remains the same as for the 3-2 Pachner move configuration, i.e., independent of N. We note that this asymptotic form of the action might have peculiar consequences for the asymptotic form of the path integral amplitudes for the Lorentzian Ponzano-Regge model. To this end, let us first note that one can define a phase space for Regge calculus <cit.>. In the (2+1)-dimensional theory, length variables are conjugated to boundary deficit angles. For a timelike edge, the boundary deficit angle is compact, and one therefore expects the length operator to have a discrete spectrum, which for large lengths is equidistant. For a spacelike edge, the boundary deficit angle is non-compact, and one expects the length operator to have a continuous spectrum. These expectations are indeed satisfied for the Lorentzian Ponzano-Regge model <cit.>, where the spectrum for the timelike length goes like T∼ j for large j and j∈ℕ. Thus, ignoring (constant) boundary terms and measure terms, the quantum-mechanical amplitudes behave as ∼exp( 2π j) = 1. Let us also remark that exact one-loop measure terms can be derived <cit.>, and they tend to suppress amplitudes for large edge lengths, as discussed in Section <ref>. §.§ 4-1 Pachner move The 4-1 Pachner move starts with a configuration of four tetrahedra (0123),(0124), (0134) and (0234) sharing one vertex (0). One integrates over the edge square variables s_01,s_02,s_03 and s_04 and in this way “removes the bulk edges", see Fig. <ref>. The classical equations of motion demand that the deficit angles associated with the edges (0i), where i=1,…,4 vanish, i.e., indicating a flat bulk triangulation. The final configuration of this Pachner move can therefore be interpreted as one tetrahedron (1234). We will now consider the asymptotic regime for the configuration of four tetrahedra sharing one vertex, with the bulk edge lengths being very large. We need to take into account all possible signatures of the bulk edges, that is, the homogeneous case where all four bulk edges are spacelike or timelike, as well as the inhomogeneous case where a subset of the bulk edges is spacelike, while the others are timelike. As mentioned previously, the equations of motion for the bulk edge lengths demand a flat configuration. Such solutions can be easily constructed: If the generalized Lorentzian triangle inequalities for the tetrahedron (1234) are satisfied, we can embed this tetrahedron into Minkowski space. Furthermore, by embedding a vertex (0) inside this tetrahedron, we can construct a three-parameter family of solutions. (If the boundary data satisfies the Euclidean triangle inequalities, we can construct Euclidean solutions which define a family of saddle points for the complexified Regge action <cit.>.) This three-parameter family of solutions constitutes one gauge orbit with flat configurations. The gauge symmetry in question can be identified as a remnant of diffeomorphism symmetry <cit.>. The Regge action evaluates to the same value on this gauge orbit and coincides with the Regge action of the tetrahedron (1234). The notion of gauge orbits extends to curved configurations: we define configurations with the same value of the Regge action as belonging to the same gauge orbit. 
Note that, as we have four bulk variables and consider one condition, namely a constant Regge action, the gauge orbits are generically three-dimensional. For the path integral, we are supposed to integrate over all four bulk lengths. But since we have a three-dimensional gauge symmetry [From the description of the gauge orbit for the flat solution above, one would expect that this gauge orbit is compact. One can, however, introduce the orientation of the top-dimensional simplices as a further summation variable. This allows for non-compact gauge orbits <cit.>.], we can use a gauge fixing and reduce the path integral to a one-dimensional integral. Here we will consider a gauge fixing for the asymptotic regime, in which we set the modulus for the bulk edge lengths to be equal, i.e., |s_0i|=λ for i=1,…,4. This choice of gauge fixing corresponds to the additive scaling discussed in Section <ref>. Using this additive scaling as gauge fixing, we have to assume that the gauge conditions |s_0i|=λ define a good gauge fixing for large λ. We will find that the action is linear in √(λ) in the asymptotic regime, which is consistent with this assumption. We will proceed by considering first the case of homogeneous signature for all the bulk edges.   Case s_0i=±λ (with the same sign for all i=1,…,4): To compute the Regge action, we first determine the asymptotic limit of the dihedral angles. For the dihedral angle at an edge (0i), we compute sin(θ_(0123),(01)) = -3/2√(s_01)√(𝕍_0123)/√(𝕍_012)√(𝕍_013) = - 2√(±λ)√(±𝕍_123λ)/√(± s_12λ)√(± s_13λ) +𝒪(λ^-1) = sin(θ_(123),(1)) +𝒪(λ^-1) , and similarly, cos(θ_(0123),(01)) = cos(θ_(123),(1)) +𝒪(λ^-1) . For the dihedral angle at an edge (ij), where i,j=1,2,3, we compute sin(θ_(0123),(12)) = -3/2√(s_12)√(𝕍_0123)/√(𝕍_012)√(𝕍_123) = - √(s_12)√(±𝕍_123λ)/√(± s_12λ)√(𝕍_123) + 𝒪(λ^-1) = -1 + 𝒪(λ^-1) . Thus, we find that the angle θ_(0123),(12) =-π/2 +𝒪(λ^-1/2). (Note that we have 𝒪(λ^-1/2) because -1 is a special expansion point for arcsin.)   For the asymptotic limit of the action, we need to consider the deficit angles at the bulk edges (0i), i=1,2,3,4. For the edge (01), we obtain ϵ_(01)^4-1 = 2π + θ_(0123),(01) + θ_(0124),(01)+ θ_(0134),(01) = 2π + θ_(123),(1) +θ_(124),(1) +θ_(134),(1) + 𝒪(λ^-1) , Similar calculations can be done for the other edges (0i), where i=2,3,4. Thus, the deficit angles at the four bulk edges will include all two-dimensional angles which appear in the four triangles forming the boundary of the tetrahedron (1234). In the Regge action, all of these deficit angles are multiplied by the same coefficient, namely - √(±λ). By summing over the angles in the four triangles of the tetrahedron, we obtain a term of - √(±λ)(8π - 4π), as the angles in a triangle sum up to -π. As for the boundary deficit angles, where i,j,k,l=1,2,3,4 are pairwise different, we obtain ϵ_(ij)^4-1 = π + θ_(0ijk),(ij) + θ_(0ijl),(ij) = 𝒪(λ^-1/2) . These boundary deficit angles will not contribute to leading order in the Regge action. Hence, overall we obtain for the asymptotic limit of the Regge action: S^4-1 = - 4π √(±λ) + 𝒪(λ^0) . We see that for timelike bulk edges, the leading term in the action is real. Indeed, timelike edges are always light-cone regular. Fig. <ref> shows the sum over bulk deficit angles and the Regge action for an explicit example with timelike bulk edges. In the case where all bulk edges are spacelike (and the triangle inequalities can be satisfied), the asymptotic regime is light-cone irregular. 
Indeed, if all bulk edges are spacelike and large, all boundary triangles need to be timelike. (See the remark below (<ref>).) Therefore, the boundary triangulation defines a two-dimensional Lorentzian triangulation. A bulk edge (0i) is asymptotically irregular if and only if the vertex (i) is irregular with respect to the two-dimensional (Lorentzian) geometry defined on the boundary of the tetrahedron (1234). As topological two-spheres do not admit regular Lorentzian geometries, some or all of these vertices have to be irregular. Coming back to the case of timelike bulk edges, we can make a similar remark as in Section <ref> regarding the path integral amplitudes in the Lorentzian Ponzano-Regge model. With a discrete spectrum √(|λ|)∼ j, with j ∈ℕ/2, the amplitudes become exp(i S^4-1)∼ 1.   Next, we consider the cases where the bulk edges have different signatures.   Case s_01=±λ and s_0j=∓λ for j=2,3,4: Here we have tetrahedra of two different types. First, there is the tetrahedron (0234) with all edges (0i), where i=2,3,4, having the same squared edge length s_0i=∓λ. And second, there are the three tetrahedra (01ij) with i<j and i,j=2,3,4, where we have s_01=±λ and s_0i=∓λ. For the latter, we need to distinguish between the edges (01) and (0i), as well as between {(1i),(1j)} and (ij). We start by considering the dihedral angle at the edge (01) in the tetrahedra (01ij), for which we obtain sin(θ_(01ij),(01)) = -3/2√(s_01)√(𝕍_01ij)/√(𝕍_01i)√(𝕍_01j) = -3/2√(±λ)√(-1/36s_ijλ^2)/√(-1/4λ^2)√(-1/4λ^2) + 𝒪(λ^-3/2) = 𝒪(λ^-1/2) . Similarly, we find cos(θ_(0123),(01)) = 1 +𝒪(λ^-1). Therefore, we have θ_(01ij),(01) = 0+𝒪(λ^-1/2) in the limit λ→ +∞. Now let us consider the dihedral angles at the edges (0j) with j = 2,3,4 in the tetrahedra (01ij) with i=2,3,4. For instance, sin(θ_(0123),(02)) = -3/2√(s_02)√(𝕍_0123)/√(𝕍_012)√(𝕍_023) = -3/2√(∓λ)√(-1/36s_23λ^2)/√(-1/4λ^2)√(∓1/4λ s_23) = -1+𝒪(λ^-1) , similarly, we find cos(θ_(0123),(02)) = 𝒪(λ^-1/2). We thus have θ_(01ij),(0j) = -π/2+𝒪(λ^-1/2) in the limit λ→ +∞. For the dihedral angles at the edges (0j) with j=2,3,4 in the tetrahedron (0234), we can use equations (<ref>,<ref>) and obtain θ_(0234),(0j) = θ_(234),(j) +𝒪(λ^-1) . One can also find that the dihedral angles at the edges (1i) or (ij) grow at most with logλ, and thus do not contribute to the leading term in the Regge action. For the deficit angles at the bulk edges we obtain ϵ_(01)^4-1 = 2π + θ_(0123),(01) + θ_(0124),(01)+ θ_(0134),(01) = 2π +𝒪(λ^-1/2) , ϵ_(0i)^4-1 = 2π + θ_(01ij),(0i) + θ_(01ik),(0i)+ θ_(0234),(0i) = 2π - π/2 - π/2+ θ_(234),(i) + 𝒪(λ^-1/2) , where 2≤ i,j,k≤ 4 and i,j,k are pairwise different. The three deficit angles ϵ_(0i)^4-1 with i=2,3,4 include the three angles θ_(234),(i) in the triangle (234), which sum up to -π. We therefore have for the Regge action S^4-1 = - ∑_i√(s_0i)ϵ_0i^4-1 + 𝒪(logλ) = - 2π (√(±λ) + √(∓λ)) + 𝒪(logλ) = - 2 π (1+i)√(λ)+ 𝒪(logλ) . We see that in the asymptotic limit we obtain light-cone irregular configurations. If (01) is spacelike, this edge is light-cone irregular. If, instead, the three edges (0i) with i=2,3,4 are spacelike, then at least one of these edges is light-cone irregular. Fig. <ref> shows the exact Regge action and its asymptotic approximation for the case of one spacelike and three timelike bulk edges.   Case s_01=s_02=±λ and s_03=s_04=∓λ: Here all tetrahedra have either two large spacelike edges and one large timelike edge, or two large timelike edges and one large spacelike edge. The dihedral angles for the edges (0i) are given in eq.
(<ref>) (if (0i) differs in signature from the other two edges (0j) and (0k)) and in eq. (<ref>) (if (0i) agrees in signature with one of the other two edges (0j) or (0k)). They approximate the values 0 and -π/2 for λ→∞, respectively. Thus we obtain for the deficit angles at the edges (0i), i=1,2,3,4ϵ_(0i)^4-1 = 2π + θ_(0ijk),(0i) + θ_(0ijl),(0i)+ θ_(0ilk),(0i) = π + 𝒪(λ^-1/2) . The dihedral angles at the edges (ij) with j>i>0 are at most of order 𝒪(logλ), and thus do not contribute to the leading order of the Regge action. We obtain for the Regge action S^4-1 = - ∑_i√(s_0i)ϵ_0i^4-1 + 𝒪(logλ) = - 2π (√(±λ) + √(∓λ)) + 𝒪(logλ) = - 2 π (1+i)√(λ)+ 𝒪(logλ) . Here we conclude that, in the asymptotic limit, all spacelike bulk edges are light-cone irregular. Fig. <ref> illustrates the accuracy of the asymptotic approximation for a Pachner move with boundary length squares s_ij=1. §.§ Comparison with linearized action In the previous subsections, we have derived approximations to the Regge action in the asymptotic regime of large bulk edges for Pachner move configurations. We found that configurations, where one or more of the bulk edges are spacelike, are light-cone irregular in the asymptotic regime. In this section we wish to restrict to cases with a real Regge action, and therefore consider only timelike bulk edges. In this case, we derived for the 3-2 Pachner move configuration the asymptotics S^3-2 = 2 π√(λ) + 𝒪(log(s_01)) , where s_01=-λ is the bulk squared edge length. For the 4-1 Pachner move configuration, we found S^4-1 = 4π√(λ) + 𝒪(λ^0) , where s_0i=-λ, with i=1,…,4, are the bulk squared edge lengths. Let us compare these asymptotic expressions to the linearized action around a classical (and therefore Minkowski flat) solution. Such a linearized action has been derived in <cit.>. For the 3-2 Pachner move, the linearized action[We omit here the part of the action which is constant in the bulk fluctuations.] is given by S^3-2_qu = λ̃^2/24√(𝕍_(0234))√(𝕍_(1234))/√(𝕍_(0123))√(𝕍_(0124))√(𝕍_(0134)) , where λ̃=λ-λ_sol denotes the deviation of the squared length parameter from the classical solution. We note that this quadratic action is monotonically increasing with growing λ, and that this behaviour is consistent with the asymptotic expression (<ref>). The linearized action describes the action around a solution, that is, for rather small bulk squared edge length. The asymptotic expression (<ref>) describes the behaviour of the action for large edge length. Finding the same monotonic growth behaviour shows that we either have no further extrema of the action between the flat solution and the asymptotic regime, or there is an even number of such extrema. For the specific case described in Fig. <ref>, one finds that there are indeed no further extrema between these two regimes, as can be seen from Fig. <ref>. For the 4-1 Pachner move we have four bulk variables. We assume that all of these are timelike and we gauge fix them all to be equal s_0i=-λ, with λ̃=λ-λ_sol denoting the deviation from the flat solution. The quadratic action is then given by <cit.> S^4-1_qu = - λ̃^2/24√(𝕍_(1234))∑_1≤ i,j≤ 4√(𝕍_(i̅))√(𝕍_(j̅))/√(𝕍_(0123))√(𝕍_(0124))√(𝕍_(0134))√(𝕍_(0234)) , where (i̅) denotes the tetrahedron defined by the vertex set which is obtained by removing (i) from {(0),(1),(2),(3),(4)}. We note that this quadratic action is decreasing (if λ̃ is increasing), whereas the asymptotic expression (<ref>) is increasing.
One could therefore think that this is a case where there are further extrema between the classical solution and the asymptotic case. However, there is a different mechanism at work: We discussed below equation (<ref>) that a Lorentzian signature tetrahedron (0123) with large timelike edges (0i), i=1,2,3, needs to have a spacelike base triangle (123). That is, for a Pachner move configuration (01234) which admits large timelike edges (0i), i=1,2,3,4, all the boundary triangles must be spacelike. There are thus three possibilities for the signature of the tetrahedron (1234): it is either Euclidean, Lorentzian or null. These cases are determined by the sign of the volume squared defined in (<ref>). We exclude the non-generic null case and consider the Euclidean and the Lorentzian case separately. If the final tetrahedron is Euclidean, we do not have a classical solution. Instead, one has a complex saddle point, see <cit.> for such an example. If the final tetrahedron is Lorentzian, we can construct a family of classical solutions. But these classical solutions always include at least three spacelike edges. To see this, consider a Lorentzian tetrahedron whose triangles are all spacelike. Embed this tetrahedron into Minkowski space. Assume that this tetrahedron has at least one vertex, for example, vertex (1), whose vertex angle contains a full light cone. (Since all triangles are spacelike, the vertex angle contains either no light cone or one full light cone.) We furthermore choose the vertex (4) and a point i_23 on the edge (23), and consider the triangle with vertices (1),(4),i_23. This triangle is either a Lorentzian triangle with only spacelike edges or a Euclidean triangle. If it is a Lorentzian triangle, then this triangle contains only one light cone at the vertex (1). Thus, in both cases, the triangles do not contain light cones at the vertex (4). This shows that there are no timelike directions emanating from (4) into the interior of the tetrahedron (1234). If we introduce a vertex (0) inside this tetrahedron, the edge (04) is therefore spacelike. This holds actually for all three edges (0i) where i=2,3,4. We thus see that for cases in which the boundary data in the 4-1 configuration admits large timelike bulk edges, there does not exist a solution with only timelike bulk edges. § FINITE EXPECTATION VALUES FOR SPIKE AND SPINE CONFIGURATIONS Spike and spine configurations, which allow for infinitely large bulk edges, might lead to divergences for the quantum gravitational partition function. In Euclidean quantum gravity, spikes, such as those appearing in the 4-1 configuration, are of particular concern. In two-dimensional Regge calculus, the work <cit.> showed that expectation values of sufficiently large powers of the length variables diverge due to spike configurations [Here one assumes that the measure does not exponentially suppress large edge lengths.]. In three- and four-dimensional Regge calculus, the 4-1 and 5-1 Pachner move configurations, respectively, isolate the conformal factor degree of freedom of the spacetime metric <cit.>. The weights exp(-S_E) of Euclidean quantum gravity lead to an exponential enhancement of such configurations. In addition, there is an unbounded integration range, making Euclidean quantum Regge calculus highly problematic <cit.>. Here, we instead consider Lorentzian Regge calculus and investigate spine and spike configurations in the 3-2 and 4-1 Pachner move, respectively.
We will consider the convergence properties of the path integral computing expectation values for powers of the squared edge length. [We note that the 4-1 Pachner move, as discussed before, features a three-dimensional gauge symmetry <cit.>. The squared edge length is not invariant under this gauge symmetry. We nevertheless consider expectation values of the squared edge length, in order to compare with statements in previous literature <cit.>. To this end, we will apply a gauge fixing which can also be considered as a form of symmetry reduction. Namely, we will set all bulk edge lengths to be equal. We then compute the expectation value in this symmetry-reduced model.] We will find that, although the integral is in general not absolutely convergent, one can extract finite expectation values. To define the Regge path integral, we have to specify the measure. Here, we allow for local measures which for large bulk variables scale with a positive or negative (possibly fractional) power of the absolute value of the bulk edge length squared. Let us note that, for three-dimensional Regge calculus, there is a preferred measure which to one-loop order guarantees triangulation invariance of the path integral <cit.>. For Lorentzian Regge calculus, this measure is given by Ds_e = μ(s_e)∏_e⊂bulk ds_e = 1/∏_e⊂bdry√(√(48))1/∏_e⊂bulk√(48)∏_τ e^-iπ/4/∏_τ𝕍_τ^1/4∏_e⊂bulk ds_e . Here we aim to argue for the finiteness of expectation values. Therefore we only need to consider the asymptotic regime of large edge lengths. If the bulk edges are timelike, this regime is light-cone regular and the Regge action (ignoring boundary terms, which do not depend on the bulk lengths) is real. If, however, at least one of the bulk edges is spacelike, we have seen that the asymptotic regime is light-cone irregular, and the Regge action features an imaginary term. As discussed in Section <ref>, the sign for this imaginary term is ambiguous: the light-cone irregularities lead to a branch cut along the Lorentzian configurations, and the imaginary term changes sign across this branch cut <cit.>. Clearly, the path integral will not converge if we decide to integrate along the side of the branch cut where Im(S)<0. On the other hand, if we choose the opposite side of the branch cut, the imaginary part of the Regge action will lead to an exponential suppression of the amplitudes, and the path integral (as well as the integral for expectation values of powers of the edge length) converges. We therefore need to discuss only the light-cone regular asymptotic regimes, i.e., the cases where all bulk edges are timelike. §.§ 3-2 Pachner move For the 3-2 move, we have to integrate over one bulk edge. The asymptotic expression for the Regge action for a large timelike bulk edge with edge length squared s_01=-λ is S^3-2 = 2π√(λ) + 𝒪(logλ) . (As shown in equation (<ref>), the same expression applies for the triangulation given by N≥ 3 tetrahedra sharing a timelike edge.) The measure factor in (<ref>) in the asymptotic regime is proportional to μ^3-2∝λ^-3/2 . Fig. <ref> compares the asymptotic behavior of the measure to the exact expression according to equation (<ref>). In order to consider the convergence behaviour for the expectation values of s_01^n, we use these asymptotic expressions and integrate λ from a sufficiently large positive constant c to ∞, ℐ_3-2(n,c) = ∫_c^∞ dλ λ^-3/2λ^n e^i2π√(λ) .
Allowing for a more general measure given by some fractional positive or negative power M of λ, we have to consider integrals of the type ℐ̃_3-2(m,c) = ∫_c^∞ dλ λ^m e^i2π√(λ) = 2∫_√(c)^∞ dλ̃ λ̃^2m+1 e^i2πλ̃ . To evaluate these integrals, we introduce a regulator ε>0 and remove this regulator after performing the integral, ℐ̃_3-2(m,c) = 2 lim_ε→ 0∫_√(c)^∞ dλ̃ λ̃^2m+1 e^(i2π -ε)λ̃ = 2 c^m+1 E_-2m-1(-2iπ√(c)) , where E_n(z)=∫_1^∞ dt t^-nexp(-zt) is the exponential integral function (analytically continued from the values of n where it converges). E_-2m-1(-2iπ√(c)) is finite for m ∈ℝ. As a next step, we wish to evaluate the full expectation value for the light-cone regular cases. To that end, we neglect the measure term as it will effectively result in a shift of which expectation value is computed. We define ℰ_3-2(m,c) = ∫_√(c)^∞ dλ λ^m e^i S^3-2(-λ) , where c is determined by the boundary data and the generalized triangle inequalities. While we cannot perform this integral analytically, we can employ series-acceleration methods such as Wynn's epsilon algorithm <cit.>, see also <cit.> for a review. The work <cit.> showed that Wynn's epsilon algorithm works well for path integrals (as well as for sums which appear in effective spin foam models <cit.>) and can also be employed to evaluate expectation values, even for cases where the underlying integral is not absolutely convergent. In Fig. <ref> we compare the integral over the analytical approximation (<ref>) with the exact integral (<ref>). We used the same boundary configuration as for Fig. <ref>. To capture the asymptotic regime, we choose c=250000. Above this value the full action is approximated well by the asymptotic expression. Altogether, we see that the full result agrees with the asymptotic approximation on a sub-percent level. We can thus conclude that the Regge expectation values of arbitrary powers of the squared bulk edge (and with a measure that asymptotically behaves as a fractional positive or negative power of the edge square) are finite. An essential mechanism to guarantee finiteness is the oscillatory nature of the Lorentzian path integral. §.§ 4-1 Pachner move The 4-1 move configuration can be obtained through a subdivision of the final tetrahedron (1234) by placing a vertex (0) inside this tetrahedron, and by connecting all boundary vertices with the new inner vertex. Thus, if we consider the path integral for this configuration, we have to integrate over four bulk edges. As discussed previously, we consider only the case where all bulk edges are timelike. The 4-1 move configuration comes with a three-parameter gauge symmetry <cit.>: the solutions are flat, and can hence be constructed by embedding the final tetrahedron into flat space [If this tetrahedron is demanded to satisfy the triangle inequalities in some non-Lorentzian signature, one can construct a complex family of solutions in the same way.], with the vertex (0) placed at any point inside this tetrahedron [If the path integral includes a sum over orientations of the tetrahedra, the vertex can also be placed outside the tetrahedron <cit.>]. This produces a three-parameter family of flat solutions. Away from the flat solution, the gauge orbit is defined by demanding a constant action along the gauge orbits. As discussed previously, we will consider the gauge fixing that all the bulk edges have equal lengths. The asymptotic expression for the Regge action for a 4-1 configuration with four large timelike bulk edges of equal length square s_0i=-λ, is S^4-1 = 4π√(λ) + 𝒪(λ^0) .
The measure factor in (<ref>) in the asymptotic regime is proportional to μ∝λ^-1 . Fig. <ref> compares the asymptotic behavior of the measure to the exact expression according to equation (<ref>). Using a gauge fixing s_01=s_02=s_03=s_04, we also have to insert a Faddeev-Popov determinant. Along the flat solution, where there is an explicit parametrization of the gauge orbits, this determinant can be straightforwardly computed to be (see <cit.> for a similar computation) F=2^3· 3! V_(1234) , where V_(1234) is the absolute volume of the final tetrahedron (1234) and is thus independent of the bulk edge lengths. We are again interested in the convergence behaviour of the expectation values [The inner edge lengths are not invariant under gauge transformations. But we use these expectation values to contrast with the results of <cit.> for Euclidean quantum Regge calculus. The deficit angles are invariant under gauge transformations, at least in the linearized theory <cit.>. As we have shown in (<ref>), these behave asymptotically as 𝒪(λ^0). Assuming an asymptotic expansion in (negative) powers of λ is possible, the convergence of expectation values of arbitrary powers of lengths implies the convergence of the expectation values of deficit angles.] of λ^n. We also allow for a more general measure but demand that the measure, together with the Faddeev-Popov determinant, is (at least asymptotically) given by some fractional positive or negative power of λ.
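As a concrete illustration of the series-acceleration step used above for the expectation values, the sketch below implements the standard Wynn epsilon recursion. It is illustrative only: for simplicity it is demonstrated on the alternating harmonic series rather than on the oscillatory Regge integrand, and the function name and demo series are our own choices, not part of the paper.

import math

def wynn_epsilon(S):
    """Wynn's epsilon algorithm for sequence acceleration.

    Builds the epsilon table column by column:
        eps_{-1}^{(n)} = 0,  eps_0^{(n)} = S_n,
        eps_{k+1}^{(n)} = eps_{k-1}^{(n+1)} + 1 / (eps_k^{(n+1)} - eps_k^{(n)}).
    Only the even columns approximate the (anti)limit of S_n; the last entry of
    the deepest even column reached is returned.
    """
    col_prev = [0.0] * (len(S) + 1)   # the eps_{-1} column
    col_curr = list(S)                # the eps_0 column: the partial sums themselves
    best = col_curr[-1]
    k = 0
    while len(col_curr) > 1:
        col_next = []
        for n in range(len(col_curr) - 1):
            diff = col_curr[n + 1] - col_curr[n]
            if diff == 0.0:           # breakdown of the table: keep the current best value
                return best
            col_next.append(col_prev[n + 1] + 1.0 / diff)
        col_prev, col_curr = col_curr, col_next
        k += 1
        if k % 2 == 0:                # even columns carry the accelerated estimates
            best = col_curr[-1]
    return best

# Demo on the slowly converging alternating series log(2) = sum_{k>=1} (-1)^{k+1}/k.
partial, s = [], 0.0
for kk in range(1, 12):
    s += (-1) ** (kk + 1) / kk
    partial.append(s)

print("last partial sum :", partial[-1])            # still far from log(2)
print("Wynn epsilon     :", wynn_epsilon(partial))  # much closer, from the same 11 terms
print("exact log(2)     :", math.log(2.0))

In the same spirit, the recursion can be applied to a sequence of partial integrals of the oscillatory integrand in ℰ_3-2 or its 4-1 analogue, which is presumably how the non-absolutely-convergent expectation values are evaluated numerically above.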
http://arxiv.org/abs/2406.17739v1
20240625172502
Find Parent then Label Children: A Two-stage Taxonomy Completion Method with Pre-trained Language Model
[ "Fei Xia", "Yixuan Weng", "Shizhu He", "Kang Liu", "Jun Zhao" ]
cs.CL
[ "cs.CL", "cs.AI" ]
§ ABSTRACT Taxonomies, which organize domain concepts into hierarchical structures, are crucial for building knowledge systems and downstream applications. As domain knowledge evolves, taxonomies need to be continuously updated to include new concepts. Previous approaches have mainly focused on adding concepts to the leaf nodes of the existing hierarchical tree, which does not fully utilize the taxonomy's knowledge and is unable to update the original taxonomy structure (usually involving non-leaf nodes). In this paper, we propose a two-stage method called ATTEMPT for taxonomy completion. Our method inserts new concepts into the correct position by finding a parent node and labeling child nodes. Specifically, by combining local nodes with prompts to generate natural sentences, we take advantage of pre-trained language models for hypernym/hyponym recognition. Experimental results on two public datasets (including six domains) show that ATTEMPT performs best on both taxonomy completion and extension tasks, surpassing existing methods. § INTRODUCTION Taxonomies[In this paper, we mainly focus on taxonomies represented as trees rather than directed acyclic graphs, because trees are the mainstream form at present, such as the online catalog taxonomies of Amazon and Yelp. ] are an important form of domain knowledge that organize concepts into hierarchical structures, representing “hypernym-hyponym” relationships among concepts in the form of trees or directed acyclic graphs <cit.>. Taxonomies are essential components of knowledge systems such as ontologies and knowledge graphs <cit.>, and are widely used in various downstream applications, including search engineering <cit.>, recommendation systems <cit.>, and information filtering <cit.>. As domain knowledge continues to evolve, especially with the rapid growth of web content, new concepts are constantly emerging. In order to stay current, original taxonomies must incorporate these new concepts and adapt their hierarchical relationships. For example, as shown in Figure <ref>, with the advancement of sociology and science, the concept of "Social Science" should be added to the science knowledge system, and the original structure should be adjusted accordingly. However, existing taxonomies are primarily constructed by human experts <cit.>. Manual extraction of domain concepts and detection of hierarchical relationships by domain experts is both time-consuming and labor-intensive, and may result in missing important concepts and relationships. To extend existing taxonomies automatically, researchers have proposed the tasks of taxonomy expansion (TE) and taxonomy completion (TC). Both tasks aim to append new nodes (concepts) to a given taxonomy. The main difference is that TE focuses on identifying the parent of a given node (usually a leaf node), while TC aims to identify both the parent and child nodes. As illustrated in Figure <ref>, TE would aim to identify the parent node of “Social Science”, while TC would also aim to identify the child nodes of “Social Science”. Recently, researchers have been focusing on using pre-trained language models, such as BERT <cit.>, to improve the performance of taxonomy expansion <cit.>.
For example, TEMP <cit.> appends new concepts to leaf nodes and generates candidate taxonomy paths, then uses a pre-trained model for ranking and selecting the best path. Musubu <cit.> generates candidate “hypernym”-“new concept” pairs using Hearst patterns <cit.>, and relies on pre-trained knowledge to identify the optimal hypernym node for the new concept. These proposed models have greatly improved the effectiveness of taxonomy updates, thanks to the improved generalization performance of pre-trained language models <cit.>. Although current TE&TC methods have achieved good results, there are several main issues that need to be addressed. Firstly, existing TE methods struggle to extend non-leaf nodes or perform poorly in this task <cit.>. Secondly, while existing TC methods can extend both leaf and non-leaf nodes, they may be less effective in leaf node expansion than specialized TE methods <cit.>, potentially due to a lack of sufficient utilization of knowledge. Furthermore, these methods often require large amounts of labeled samples or external resources, which are not always available <cit.>. Lastly, current TC methods do not typically involve modifying the nodes of the original taxonomy system (all original parent-child relationships are preserved after adding nodes to the taxonomy). However, the insertion of new nodes can modify the relationship of the original nodes. For example, the insertion of “Social Science" in Figure <ref> would change the relationship between “Science-Anthropology" from father-son to grandfather-grandson. To address these issues, we propose A Two-stage Taxonomy complEtion Method with Pre-Trained Language model (ATTEMPT), which inserts new concepts into the correct position by identifying a parent node and labeling child nodes. In the first stage of our proposed method, we use the “Taxonomy-path Prompt with Pre-trained model" (PPT) approach to take advantage of the local information of the taxonomy path and convert it into natural language using a prompt method, which helps to better utilize the implicit knowledge of the pre-trained model. Additionally, the pre-trained model's extensive knowledge reserve allows us to avoid the need for external resources and large amounts of labeled data. In the second stage, we propose the “Multiple Nodes Labeling" (MNL) method, which jointly identifies each child node and better utilizes the interdependence between nodes, resulting in more accurate node type prediction (including father-son, sibling and other relationships). Additionally, MNL allows for modification of the original taxonomy nodes and simultaneous annotation of multiple child nodes. We conduct detailed experiments on two public datasets (including six domains) to evaluate the effectiveness of our proposed method, ATTEMPT, in leaf and non-leaf node expansion. Specifically, for leaf nodes, our parent-finding method (PPT) outperforms the best baseline by 8.2% in accuracy. For non-leaf nodes, our children-finding method (MNL) improves by 21% and 20.3% respectively in accuracy and average F1 score, compared to a pair-wise classification method. On the overall task, our proposed method (ATTEMPT) outperforms other methods by 2.4% in average F1 score. In summary, the main contributions of this paper include: ∙ The proposal of a two-stage taxonomy expansion method, ATTEMPT, that inserts new concepts into the correct position by identifying a parent node and labeling child nodes. 
∙ The introduction of a multiple-nodes labeling method, MNL, for the children-finding stage, which allows for labeling zero to multiple child nodes of a given node simultaneously and for modification of the original taxonomy nodes. ∙ The demonstration of the effectiveness of our approach through experiments on two public datasets (including six domains), with the best performance obtained in both non-leaf and leaf node expansion. § RELATED WORK Taxonomy construction aims to build a tree-structured taxonomy with a set of terms from scratch. Existing methods can be roughly divided into two categories. The first consists of unsupervised methods that construct the taxonomy based on clustering <cit.>. The terms are grouped into a hierarchy based on hierarchical clustering or topic models <cit.>. Each node of this taxonomy is a collection of topic-indicative terms, different from the taxonomy in this paper (each node represented by one individual term). The other approach constructs a taxonomy based on terms, where each node represents a term concept <cit.>. Hypernymy detection models are often used for this task. For example, pattern-based <cit.> or distributional models <cit.> extract hypernyms for a given query node and then organize them into a tree structure. Creating a taxonomy from scratch is labor-intensive. In many scenarios, such as e-commerce, some taxonomies may already be deployed in online systems, which creates a demand for taxonomy extension. QASSIT <cit.> is a semi-supervised vocabulary classification method, mainly based on genetic algorithms. The TAXI <cit.> system uses a taxonomy induction method based on lexico-syntactic patterns, substrings, and focused crawling. Later, TaxoGen <cit.> uses term embeddings and hierarchical clustering to construct topic taxonomies recursively. TEMP <cit.> is a self-supervised classification extension method that trains models with a new dynamic margin loss function. Taxonomy completion <cit.> is a recently proposed task that aims to find appropriate hypernyms and hyponyms for new nodes, not just hypernyms. GenTaxo <cit.> gathers complex local structural information and learns to generate full names of concepts from corpora. TMN <cit.> focuses on channel gating mechanisms and triplet matching networks. CoRel relies on concept learning and relation transferring to build a seed-oriented topic taxonomy. However, the above-mentioned methods also have some issues. The addition of new nodes may also lead to changes in the original taxonomy. The taxonomy completion task only finds the hyponyms of a given node and cannot modify the original taxonomy. GenTaxo <cit.> requires a large amount of training data to learn enough information, and CoRel <cit.> focuses more on topic taxonomies than on taxonomies of individual terms. Other works such as CGExpan <cit.> use the automatically generated class names and the class-guided entity selection module for entity expansion. However, CGExpan <cit.> focuses more on entity sets than on tree-structured taxonomies. In addition, although the above methods can find both hypernyms and hyponyms of a given query node, they do not make sufficient use of the pre-trained model or do not use the pre-trained model at all <cit.>. This may lead them to perform poorly on the hypernym recognition task, inferior to specialized taxonomy extension methods based on pre-trained models <cit.>. And most methods of taxonomy extension cannot perform well on the task of taxonomy completion <cit.>.
We are dedicated to finding an approach that works in both tasks. § METHOD Given an existing taxonomy T=(V, E) and a set of new terms V^', where V is a set of terms, and E is a set of "hyponym-hypernym" relationships between terms, the task of taxonomy completion is to insert the new terms v^'∈ V^' into the appropriate position of the existing taxonomy T one by one and extend it into a more complete taxonomy T̃=(Ṽ, Ẽ). Figure <ref> illustrates the overall structure of the ATTEMPT method, which is broken down into two main stages: the parent finding stage and the children finding stage. These two stages work together to identify the relationships between terms in the taxonomy, specifically determining the parent and children of a given term. §.§ Stage one: Parent Finding The first stage of the process is to identify the parent node of a given node in the taxonomy. For example, finding the parent node “science" for the node “social science" in Figure <ref>. §.§.§ TEMP The TEMP method <cit.> is the first approach to use pre-trained contextual encoders as the core component for taxonomy extension. The pre-trained contextual embeddings are useful for capturing relationships between terms because they have been trained on a large corpus. TEMP predicts the location of new concepts by ranking the generated taxonomy paths. A taxonomy path of a term N_D in the tree-structured taxonomy is the unique path from that term to the root of the taxonomy. The taxonomy path is represented as P = [ROOT, N_1, N_2, ..., N_D], where D is the depth of N_D and ROOT is the root of the taxonomy. In the taxonomy, N_i-1 is the parent of N_i. TEMP generates taxonomy paths for each term, then adds the new term to be expanded to the end of each path to form new paths. Finally, the new paths are ranked and the highest-scoring path is chosen to determine the parent term. Equation <ref> describes how TEMP uses a contextual encoder to return a sequence of vectors, given a term's definition S and an arbitrary taxonomy path P. Encoder(S, P)=v_[CLS], v_1, …, v_[SEP], v_p_d, …, v_root The TEMP method, which uses pre-trained contextual encoders to model taxonomy paths, has been an inspiration for our work. However, TEMP also has some limitations. One of the main limitations is that it can only expand new leaf nodes. Additionally, TEMP has some issues such as: 1) Limited use of local information - although TEMP uses paths to narrow the search range within the taxonomy tree, the problem of overly long paths can still arise. In such cases, distant relationships may have a limited impact on the determination of leaf nodes. 2) Inadequate utilization of the pre-trained model - TEMP only connects the nodes of the path using special tokens such as [SEP] or [UNK], which does not fully leverage the knowledge encoded by the pre-trained language model. §.§.§ PPT: Taxonomy-path Prompt with Pre-trained model To address the limitations of the TEMP method, we proposed PPT (A Taxonomy Expansion Method Based on Taxonomy Path Prompt and Pre-Trained Model). Our approach includes a few improvements: Utilization of local information - Instead of using the entire taxonomy path, we use the local information nodes l_p closest to the given node. For example, in Figure <ref>, for the node "Archeology", the local information nodes would be "Archeology" and "Anthropology". When the depth of the taxonomy path is less than two, we take only one node.
l_p = local(P) = {N_D-1,N_D} Improved pre-trained model utilization - We form a set of taxonomy path points P_Social Science = (Archeology-Anthropology-Social Science) by combining the local information points of each node and the node Social Science to be extended. We then generate the appropriate natural language S_Gen using a prompt function. S_Gen(q,l_p) = Prompt(q,l_p) where q is the node to be expanded and Prompt is a function to generate natural language from prompts. For example, S_Gen(q,l_p) = "Social Science including Anthropology, and Anthropology including Archeology". We feed this generated language S_Gen into the pre-trained model, rank the results in the same way as TEMP, and use the highest-scoring candidate as the parent node of the given node. Encoder(S_Gen)=v^'_[CLS], v^'_1, …, v^'_w The encoder results are as above, where w is the number of output vectors. We trained the model with Margin Ranking Loss (MRL), which is defined as follows: ℒ= ∑_P ∈𝒫^+∑_P^'∈𝒫^-max(0, f(P^')-f(P)+γ(P, P^')) where 𝒫^+ is the set of taxonomy-paths in the taxonomy, 𝒫^- is the set of negative samples, and γ(P, P^') is a function designed for the margin between positive and negative taxonomy-paths. To capture the semantic similarity of different taxonomy-paths, we follow TEMP to set a dynamic margin function based on the semantic similarity as follows: γ(P, P^')=(|P ∪ P^'|/|P ∩ P^'|-1) * k where k is a parameter used to adjust margins (usually between 0.1 and 1). §.§ Stage two: Children Finding The second stage of ATTEMPT is to identify all the child nodes of a given node in the taxonomy. For example, finding the child nodes "Anthropology" and "Civics" for the node "Social Science", as shown in Figure <ref>. We propose two methods for this stage: PWC and MNL. §.§.§ PWC: Pair-wise Classification In the second stage, we identify all the child nodes of a given term. To do this, we form possible “hypernym-hyponym" term pairs from the node to be expanded (red node) and each candidate child node (orange node, child of the parent identified in the first stage). These term pairs are connected with the special token [SEP] and fed into a pre-trained language model such as BERT. An example can be seen in Figure <ref>, where the node to be classified is “Social Science" and the orange candidate child nodes are “natural science," “anthropology" and “civics." We use the pre-trained model to perform binary classification to determine whether the term pairs have a “hypernym-hyponym" relationship or not. The traditional cross-entropy function is used as the loss function to train the classification model. This method is simple, because the pre-trained model has been trained on a large corpus already and it can identify whether the term pairs have a hierarchical relationship or not. This method is called Pair-wise classification. §.§.§ MNL: Multiple Nodes Labeling MNL is a new approach that addresses the problem of identifying multiple children of a given node in the taxonomy. There are two main challenges: determining whether a node has children and how many children it has, and identifying as many children as possible if there are multiple children. To address these challenges, we first determine whether the given node is a leaf node (has no children) and if so, the second stage ends. If there are multiple children, we treat this as a multiple-choice problem and model it as a sequential labeling task.
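The training objective above (the margin ranking loss with the overlap-based dynamic margin γ(P, P')) can be written compactly. The snippet below is a minimal PyTorch-style sketch and not the authors' implementation; it assumes paths are represented as sets of node ids, pairs positives with negatives one-to-one instead of summing over the full double sum, and uses made-up scores in place of the BERT-based scoring function f.

import torch

def dynamic_margin(pos_path, neg_path, k=0.5):
    """gamma(P, P') = (|P ∪ P'| / |P ∩ P'| - 1) * k, with paths given as sets of node ids."""
    union = len(pos_path | neg_path)
    inter = len(pos_path & neg_path)
    return (union / inter - 1.0) * k   # inter >= 1 here since both paths contain the root id

def margin_ranking_loss(f_pos, f_neg, gammas):
    """Sum over pairs of max(0, f(P') - f(P) + gamma(P, P'))."""
    return torch.clamp(f_neg - f_pos + gammas, min=0.0).sum()

# Toy usage with hypothetical node ids standing in for terms like "root", "science", ...
pos = {0, 1, 2, 3}   # e.g. root -> science -> social science -> anthropology
neg = {0, 1, 4}      # e.g. root -> science -> chemistry
gamma = torch.tensor([dynamic_margin(pos, neg, k=0.5)])

f_pos = torch.tensor([0.8], requires_grad=True)   # in practice: path scores from the encoder
f_neg = torch.tensor([0.5], requires_grad=True)
loss = margin_ranking_loss(f_pos, f_neg, gamma)
loss.backward()
print(float(loss))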
As shown in Figure <ref>, we extract the possible siblings, children, and grandchildren (orange and green nodes) of the given node to make use of local information. We then use a prompt function to convert these three types of nodes into natural language (e.g., “Natural Science - Chemistry, Physics" is converted to “Natural Science, and it including Chemistry and Physics"). We concatenate the node to be expanded “Social Science" with all the sentences generated by the prompt, and then feed this into the pre-trained model. Since the model was trained on a large corpus of natural language, the input of natural language is consistent with the pre-training phase, which helps to fully utilize the hidden information of the model and correctly identify the contextual relationships. The addition of local information provides additional context to the model, which allows it to make more accurate predictions about the children of the given term. § EXPERIMENTS In this section, we first describe the experimental setup and implementation details in Section <ref> and Section <ref>. We then present the results of our experiments in Section <ref>, including a comparison of our approach to the baseline method. To further understand the contribution of different components of our approach, we conduct ablation experiments in Section <ref> to investigate the effectiveness of using local information and prompts in ATTEMPT. §.§ Experimental Setup Datasets. We conducted experiments on two datasets that include six domains and two types of nodes. The first dataset is the Semeval-2016 task 13 dataset, which was used to evaluate the performance of expanding leaf nodes in stage one. We compared our method to previous approaches such as TEMP <cit.> and STEAM <cit.>, which have also been tested on this dataset for leaf node expansion. To evaluate the expansion of non-leaf nodes, we constructed a new dataset based on Semeval, as there are limited previous datasets that are relevant to this task. This dataset was specifically designed for the purpose of non-leaf node expansion and evaluation. The following is a description of the two datasets: 1) We used the dataset from Semeval-2016 task 13 [https://alt.qcri.org/semeval2016/task13/], which contains three English datasets for the environment, science, and food domains. We followed the setup as in <cit.> and used the randomly-grown taxonomies for self-supervised learning, and sampled 20% of the leaf nodes for testing. We used this dataset to compare our method with other taxonomy extension methods for leaf nodes. 2) As there is limited data available for non-leaf node expansion, we reconstructed the original data. We defined nodes with one parent and no children as leaf nodes and nodes with one parent and at least one child as non-leaf nodes. More details about the dataset are provided in Appendix <ref>. Metrics. For the parent finding process in stage 1, we followed the evaluation strategy of <cit.> using Accuracy, Mean reciprocal rank (MRR), and Wu & Palmer similarity (Wu&P) to evaluate our methods. Accuracy (ACC) measures the count of parent or child nodes that are accurately predicted. MRR calculates the average of reciprocal ranks of the true taxonomy path. Wu&P measures the semantic similarity between the predicted taxonomy path and the truth taxonomy-path. For stage two, we proposed two metrics for evaluating the effectiveness of this phase. One is ACC, which represents whether all children can be found or not. 
The second one is Avg F1, which can further evaluate how many children are found for a given node. Avg(F1)=1/n∑_i=1^n F1_i Compared Methods. We compare with the following methods: ∙ BERT+MLP The method extracts term embeddings from BERT and then feeds them into a multilayer perceptron (MLP) to predict their relationship. ∙ TEMP <cit.> A state-of-the-art taxonomy expansion framework which predicts new concepts' position by ranking the generated taxonomy paths. The first method that employs pre-trained contextual encoders in taxonomy construction and hypernym detection problems. ∙ STEAM <cit.> A taxonomy expansion framework that leverages natural supervision in the existing taxonomy for expansion. ∙ TaxoExpan <cit.> A self-supervised method for encoding local structures in the seed taxonomy using location-enhanced graph neural networks. ∙ TMN <cit.> A Triplet Matching Network (TMN) that finds suitable hypernym, hyponym word pairs for a given query concept. §.§ Implementation Details We present the PPT method for the first stage of leaf node expansion, which is based on TEMP (TEMP's code link [https://github.com/liu-zichen/TEMP]). We use BERT (bert-base-uncased) as the pre-trained language model and split the terms into 10% for validation and 10% for testing. To expand the full type of nodes, both leaf and non-leaf, we use the new data introduced previously and select the same number of leaf and non-leaf nodes as the test set. We use the default optimal hyperparameters of the original TEMP authors and experiment with different learning rates to obtain the best performance. We also use multiple prompts (see Appendix <ref>) according to the settings of Musubu <cit.>, and take the average result as the experimental result. To reduce the impact of randomness, we repeat the experiment three times. For the MNL method in stage two, we connect the nodes to be expanded (red), the candidate child nodes (orange), and the child nodes of the candidate nodes (green) and generate natural language by way of prompt. The generated natural language is fed into the pre-trained model and labelled. We label the real children of a given node as 1, the sibling nodes as 0, and ignore the computational loss for all the rest of the nodes. In addition, if a term has multiple tokens and one of its tokens is marked as 1 by the model, we treat the whole term as a child node. See Appendix <ref> for more details. §.§ Experimental Results As shown in Table <ref>, our method PPT outperforms the existing TEMP model significantly on both leaf and non-leaf nodes. For leaf nodes, we improved the TEMP model by 8.2%, 6.7%, and 2.8% on Acc, MRR, and Wu&P, respectively. For all types of nodes, the improvement is 11.0%, 9.2%, and 6.4%, respectively. The comparison results of the two methods tested in the child discovery phase are presented in Table <ref>. For leaf nodes, the MNL method improves Acc and Avg(F1) by 21% and 20.3%, respectively, compared to the pair-wise classification method over the three benchmark datasets. For all node types, the improvement is 11.3% and 11.9%, respectively. Table <ref> presents the comparison results between the baseline method and our ATTEMPT. The baseline method achieves 14.7%, 23.3%, and 30.9% in Avg(F1) metrics for the three datasets of environment, science, and food, respectively. Our ATTEMPT method improved the Avg(F1) by an average of 2.4% over the baseline. The low results in Table 3 are due to the challenging nature of the task.
In addition to identifying the correct parent node, all child nodes must be successfully identified. This highlights the potential for further improvement. §.§ Ablation Studies To verify local information and prompt effectiveness, we compare and test the changes in experimental results with/without these two types of information on both stages. Local Information As shown in Table <ref>, after removing the path nodes, the PPT method in stage 1 decreases on average by 4.3%, 2%, and 2.1% on Acc, MRR, and Wu&P, respectively, on the three datasets. Table <ref> also shows that the MNL method decreases by 14.6% and 22.1% on average on accuracy and average F1 score, respectively, after removing the grandchild node information in the child finding stage. We found that local information is essential in both the first and second phases, particularly in the second child lookup phase. Removing local information brings about a significant performance degradation, which may be attributed to our method's modelling of relationships. The individual nodes are closely associated in our MNL method. Prompt In Table <ref>, the PPT method with prompt removal decreased in Acc, MRR, and Wu&P by 6.5%, 5.1%, and 2.0% on average, respectively. Meanwhile, in the second stage, the MNL method decreased 8.6% and 15.6% for accuracy and average F1 metrics, respectively, after prompt removal. The science dataset in the second stage showed a slight performance improvement after prompt removal, which we speculate may be due to insufficient data and an insufficient pre-training corpus. Overall, the prompt is essential for the parent finding process in the first stage and the child finding process in the second stage. § CONCLUSION This paper proposes a two-stage taxonomy completion method based on pre-trained language models (ATTEMPT), which effectively inserts a new concept into the correct position by finding a parent node and then labeling child nodes. In addition, we use prompts to generate natural language suitable for the pre-trained model, further improving the effectiveness of parent node recognition and child labeling for the given node. Our experiments on two node types and three domains, covering six datasets, show that our method can enhance the effectiveness of locating the position of a given node in existing taxonomies. Furthermore, the efficacy of local information and prompts in ATTEMPT is also demonstrated by ablation experiments. In conclusion, our proposed ATTEMPT method is an effective approach for taxonomy completion, and it can be further improved with more comprehensive datasets. § LIMITATIONS Since ATTEMPT uses the pre-trained language model to complete the taxonomy, the expansion effect is limited by the model. Generally, pre-trained models with larger knowledge scales are better (e.g., BERT-Large vs. BERT-Base-uncased). However, our paper focuses on how to fully use the knowledge of the pre-trained model rather than verifying whether a larger knowledge scale performs better. Based on the above, this paper does not conduct more related research (in fact, TEMP <cit.> has made this comparison and reached similar conclusions). In addition, the selection of prompts will also affect the expansion effect. For the convenience of comparison, we have selected several basic prompts (the same as Musubu <cit.>) for experimentation. In future work, we plan to study how to construct or select better prompts for classification expansion. We do not consider the situation of multi-parent nodes according to the TEMP <cit.> settings.
And according to our statistics, there are only a few multi-parent nodes in the Semeval-2016 (task 13) datasets (1/3843). We will continue investigating how to make better use of the pre-trained model knowledge to solve the taxonomy completion problem. § DATASET The original dataset and our reconstructed dataset statistics are in Table <ref> and <ref>. To prevent the test data from being leaked during training and to thoroughly test the generalization ability of the model when encountering unseen data, we split each original taxonomy tree into two subtrees, one for training and one for testing. For example, the left subtree of the scientific taxonomy in Figure <ref>, natural science and its children, is used as the test subtree, and the rest is used for training. Specifically, we select the subtree with 20% of the number of nodes of the current taxonomy tree as the subtree for testing, and discard surplus leaf nodes to ensure the ratio of leaf nodes to non-leaf nodes is 1:1. Too many leaf nodes will make the child finding stage degenerate into an expansion of leaf nodes, and the model will easily overfit. And too few leaf nodes will make the test inadequate, so we use equal numbers of leaf and non-leaf nodes as the test set. In the training and testing phases, we remove the node to be expanded from the current taxonomy tree, and if the node has N children, these N children are reassigned to the original parent of the node to be expanded as child nodes. We ignore the case of double parent nodes because such cases are very rare. Among the three datasets used in our experiments, which contain more than 2000 nodes, only one node has two parents. We will consider this case further in our future work. § IMPLEMENTATION DETAILS For the fairness of the experiment, we follow the setting of TEMP <cit.>. We use 10% of the terms for validation and 10% for testing. For each benchmark, we try various learning rates and report the best performance. We experiment with multiple prompts and take the average result as the experimental result. We repeated the experiment three times to reduce the impact of randomness. We train the model using PyTorch [<https://pytorch.org>] <cit.> on an NVIDIA RTX3090 GPU. For all methods, the bert-base-uncased [<https://huggingface.co/bert-base-uncased>] model is chosen for feature extraction. The pretrained contextual encoders are of base size with 12 layers. We use AdamW <cit.> as the optimizer with warm-up <cit.>, and fine-tune the whole model with a learning rate of 2e-5. Dropout <cit.> of 0.1 is applied to prevent overfitting. § PROMPT DETAILS
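The concrete prompt templates follow the Musubu settings cited in the implementation details and are not reproduced here. Purely as an illustration of the prompt construction described in Section 3.1.2, a sketch of the S_Gen(q, l_p) step might look as follows; the function name and template wording are hypothetical, chosen only to reproduce the example sentence given in the paper.

def build_prompt(query, local_path):
    """Turn the query term and its local taxonomy-path nodes into a natural sentence.

    query: the term to be inserted, e.g. "Social Science"
    local_path: the local information nodes l_p, ordered from child to parent,
                e.g. ["Archeology", "Anthropology"]
    """
    # Chain the query with the local nodes from parent side down to the deepest node.
    chain = [query] + list(reversed(local_path))
    clauses = [f"{parent} including {child}" for parent, child in zip(chain, chain[1:])]
    return ", and ".join(clauses)

print(build_prompt("Social Science", ["Archeology", "Anthropology"]))
# -> "Social Science including Anthropology, and Anthropology including Archeology"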
http://arxiv.org/abs/2406.18343v1
20240626133734
Optimal volume bound and volume growth for Ricci-nonnegative manifolds with positive Bi-Ricci curvature
[ "Jie Zhou", "Jintian Zhu" ]
math.DG
[ "math.DG" ]
http://arxiv.org/abs/2406.17685v1
20240625162112
CMBFSCNN: Cosmic Microwave Background Polarization Foreground Subtraction with Convolutional Neural Network
[ "Ye-Peng Yan", "Si-Yu Li", "Guo-Jian Wang", "Zirui Zhang", "Jun-Qing Xia" ]
astro-ph.CO
[ "astro-ph.CO" ]
Institute for Frontiers in Astronomy and Astrophysics, Beijing Normal University, Beijing 100875, China; xiajq@bnu.edu.cn Key Laboratory of Particle Astrophysics, Institute of High Energy Physics, Chinese Academy of Science, P. O. Box 918-3 Beijing 100049, People’s Republic of China Department of Astronomy, Beijing Normal University, Beijing 100875, China Key Laboratory of Particle Astrophysics, Institute of High Energy Physics, Chinese Academy of Science, P. O. Box 918-3 Beijing 100049, People’s Republic of China Department of Physics, Stellenbosch University, Matieland 7602, South Africa National Institute for Theoretical and Computational Sciences (NITheCS) Institute of Frontier and Interdisciplinary Science and Key Laboratory of Particle Physics and Particle Irradiation (MOE), Shandong University, Qingdao 266237, China Key Laboratory of Particle Astrophysics, Institute of High Energy Physics, Chinese Academy of Science, P. O. Box 918-3 Beijing 100049, People’s Republic of China Institute for Frontiers in Astronomy and Astrophysics, Beijing Normal University, Beijing 100875, China; xiajq@bnu.edu.cn Department of Astronomy, Beijing Normal University, Beijing 100875, China § ABSTRACT In our previous study, we introduced a machine-learning technique, namely CMBFSCNN, for the removal of foreground contamination in cosmic microwave background (CMB) polarization data. This method was successfully employed on actual observational data from the Planck mission. In this study, we extend our investigation by considering the CMB lensing effect in simulated data and utilizing the approach to recover the CMB lensing B-mode power spectrum from multi-frequency observational maps. Our method is first applied to simulated data with the performance of the CMB-S4 experiment. We achieve reliable recovery of the noisy CMB Q (or U) maps with a mean absolute difference of 0.016±0.008 μK (or 0.021±0.002 μK) for the CMB-S4 experiment. To address the residual instrumental noise in the foreground-cleaned map, we employ a "half-split maps" approach, where the entire dataset is divided into two segments sharing the same sky signal but having uncorrelated noise. Using cross-correlation techniques between the two recovered half-split maps, we effectively reduce instrumental noise effects at the power spectrum level. As a result, we achieve precise recovery of the CMB EE and lensing B-mode power spectra. Furthermore, we also extend our pipeline to full-sky simulated data with the performance of the LiteBIRD experiment. As expected, various foregrounds are cleanly removed from the foreground-contaminated observational maps, and the recovered EE and lensing B-mode power spectra exhibit excellent agreement with the true results. Finally, we discuss the dependency of our method on the foreground models. § INTRODUCTION Decades of measurements of the cosmic microwave background (CMB) and its anisotropies <cit.> serve as a crucial pillar in the field of precision cosmology. Efforts are now focused on the next frontier in CMB experiments towards precise measurements of polarization anisotropies, particularly the search for the faint primordial polarization B modes. This primordial B mode originates from the primordial gravitational waves predicted by inflation, making its detection potential direct evidence of inflation <cit.>. Several next-generation CMB experiments have emerged, aiming to achieve multi-frequency coverage and high sensitivity for the search of the primordial B-mode signal.
Ground-based projects like the Simons Observatory <cit.>, CMB-S4 <cit.>, QUIJOTE <cit.>, and AliCPT <cit.>, as well as space-based missions like LiteBIRD <cit.> and the Probe of Inflation and Cosmic Origins (PICO) <cit.>, have been proposed or are currently being developed. However, a challenge in the analysis of CMB data lies in the extraction of the B-mode signal from observations that are contaminated by foreground radiations. The Galactic polarized radiation tends to be brighter than the primordial B-mode signal over all observational frequencies in the microwave regime <cit.>. Consequently, the accurate separation of the foreground contaminants from the CMB observations becomes a critical task in CMB data analysis, as emphasized in studies by <cit.> and <cit.>. From the data analysis perspective, the process of extracting the CMB signal from observations contaminated by foreground emissions is commonly referred to as CMB component separation. Since the various foreground components have distinct spectral signatures and differ from that of the CMB, it is possible to reconstruct clean maps of the CMB and each foreground emission by combining observations from multiple frequencies. Typical methods for component separation can be broadly divided into two categories: “parametric" and “blind" methods. Parametric methods, such as the Commander method <cit.> and XFORECAST <cit.>, rely on fitting parametric models to the multi-frequency maps. Bayesian parameter estimation or maximum-likelihood methods can then be employed to fit these parameters, achieving the goal of component separation. However, accurately modeling foreground emissions remains a complex and challenging task due to the intricate physics of foregrounds involved <cit.>. For future CMB B-mode detection experiments with unprecedented sensitivity, several studies <cit.> have reported that even slight inaccuracies in foreground modeling can lead to significant biases in the reconstruction of the CMB B-mode signal due to the larger amplitude of the Galactic polarized radiation compared to the CMB B-mode signal. This issue has also been mentioned in several next generation CMB experiments, such as the CMB-S4 project <cit.> and the CORE satellite missions <cit.>. On the other hand, the so-called non-parametric or blind component separation, such as the Internal Linear Combination <cit.> approach or needlet <cit.> or scale discretized <cit.>, and Hierarchical Morphological Component Analysis <cit.>, exploit minimal prior information on the foregrounds. Thus, non-parametric method quickly provide a foreground-cleaned CMB map, but not detailed information about the various foreground emissions. With the notable advancements in computer science, machine learning techniques have demonstrated exceptional proficiency in the domain of image processing, such as image recognition, restoration of noisy or blurred images, among others. Machine learning techniques have found growing applicability in the realm astrophysics as well <cit.>. Notably, machine learning has been successfully employed to discern between cosmological and reionization models <cit.>, analyze gravitational wave data <cit.>, and reconstruct functions from cosmological observational data <cit.>. In the field of CMB data processing, the application of machine learning methods for foreground subtraction in CMB temperature has been explored early on <cit.>. 
Recently, convolutional neural network-based machine learning techniques have also shown promise in accurately extracting full-sky temperature maps of the CMB from observational data <cit.>, reconstructing CMB lensing <cit.>, and removing the lensing effect of CMB polarization (delensing) <cit.>. It should be noted that the reconstruction of the CMB polarization signal poses a greater challenge than temperature reconstruction, as the CMB polarization signal is fainter than the total Galactic polarized radiation across all observed microwave frequencies <cit.>. <cit.> propose a machine-learning-based foreground-cleaning technique for CMB polarization data, called CMBFSCNN (Cosmic Microwave Background Foreground Subtraction with Convolutional Neural Networks). In the study of <cit.>, we first use a network model to remove polarized foreground contamination from observed data. Then, a cross-correlation technique is employed to suppress the impact of instrumental noise on the power spectrum. The results demonstrate the effectiveness of CMBFSCNN in successfully removing various foreground components from both actual observational data of the Planck experiment and simulated data. This work further applies the CMBFSCNN technique to the LiteBIRD experiment, which will conduct full-sky surveys, as well as the CMB-S4 experiment, which covers a partial sky region. The study also considers the CMB lensing effect and presents the recovery of the lensing B-mode power spectrum. Additionally, more comprehensive results regarding the CMBFSCNN technique are presented. We provide the code used for this analysis on GitHub: <https://github.com/yanyepeng/CMBFSCNN>.
This paper is organized as follows. Section 2 provides a comprehensive introduction to the methodology. This includes a detailed description of the network structure, a concise overview of the spectral energy distribution (SED) models of diffuse Galactic foregrounds, and the data simulations. Section 3 focuses on the application of the neural network to simulated data with the performance of the CMB-S4 experiment and to full-sky simulated data with the performance of the LiteBIRD experiment. Section 4 is dedicated to a discussion on the utilization of the CNN method for the recovery of the CMB polarized signal. Finally, we summarize our work in Section 5.
§ METHODOLOGY
§.§ Network architecture
The convolutional neural network (CNN) is a class of feed-forward neural networks widely used in various fields. The convolutional layer serves as a fundamental building block of the CNN <cit.>. The convolutional layer takes feature images from the preceding layer as inputs and convolves them with multiple local spatial filters, also known as kernels. These kernels have learnable parameters that are adjusted during training to optimize the network's performance. Then, nonlinear activation functions are applied to the outputs before passing them to subsequent layers. The configuration of a convolutional layer mainly involves three crucial hyperparameters: the number of output channels (equivalent to the number of convolutional filters), the stride length, and the amount of zero padding. By adjusting these hyperparameters, it is possible to control the size of the output produced by each convolutional layer. Additionally, dilated convolutions have been introduced as an extension to standard convolutions for capturing more contextual information by enlarging the receptive field size <cit.>. 
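As a concrete illustration of these hyperparameters, the following is a minimal PyTorch sketch of a dilated convolutional block; the channel counts, kernel size, and dilation below are illustrative placeholders rather than the actual CMBFSCNN configuration.

```python
import torch
import torch.nn as nn

class DilatedConvBlock(nn.Module):
    """Illustrative convolutional block: one dilated convolution followed by a
    nonlinear activation. The sizes are placeholders, not the CMBFSCNN ones."""
    def __init__(self, in_channels=8, out_channels=64, dilation=2):
        super().__init__()
        # padding = dilation keeps a 3x3 kernel from changing the map size
        self.conv = nn.Conv2d(in_channels, out_channels, kernel_size=3,
                              stride=1, padding=dilation, dilation=dilation)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        # x has shape (batch, in_channels, H, W)
        return self.act(self.conv(x))

# Example: a batch of 12 samples, each with 8 frequency channels of 512x512 pixels
maps = torch.randn(12, 8, 512, 512)
features = DilatedConvBlock()(maps)   # -> shape (12, 64, 512, 512)
```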
By connecting multiple convolutional layers together, one can design a complex network model consisting of a stack of nonlinear, parameterized transformations. The network parameters can be fine-tuned through the process of training on specific datasets, thus enabling the transformation of intricate problems into parameter optimization tasks. One successful example architecture is U-Net, proposed by <cit.>, which employs an encoder-decoder structure with additional skip connections between encoding and decoding layers to preserve small-scale information lost during downsampling operations. In <cit.>, we proposed a multi-patch hierarchical network architecture based on U-Net, specifically designed for foreground removal from contaminated CMB polarization maps. This architecture draws inspiration from several related studies conducted in this domain <cit.>. In this work, we adopt the same network model utilized in CMBFSCNN <cit.> to remove CMB polarization contamination. Detailed information about the CNN model can be found in <cit.>.
Once the network architecture is established, the network model parameters (weights and biases) are optimized by minimizing the loss function. The loss function serves to quantify the discrepancies between the output generated by the network and the corresponding ground truth image. In this study, our loss function consists of two components: the mean absolute error (MAE), also known as the L1 loss, and a loss function based on the fast Fourier transform (FFT). Assume that our training dataset comprises S pairs of images denoted as {x_i, y_i}_i=1^S, where x_i represents the i-th contaminated input image and y_i represents its corresponding ground truth image, and let I = f(x) denote the prediction produced by our network model (f(·)). For a subsample with batch size equal to N, we calculate the pixel-wise MAE loss as follows: L_ MAE =1/N∑_n=1^N1/WH∑_w=1^W∑_h=1^H(|I^n_w,h-y^n_w,h|), where W and H describe the dimensions of the images. For the FFT loss, we define the amplitude of the FFT as follows: A_ F(I) = √(Re[ FFT(I)]^2+Im[ FFT(I)]^2), where Re[·] and Im[·] represent the real part and the imaginary part, respectively, and the notation FFT(·) denotes the operation of performing the FFT. For a subsample with batch size N, the FFT loss function is calculated as: L_ FFT =1/N∑_n=1^N1/WH∑_w=1^W∑_h=1^H(|A_ F(I^n_w,h)-A_ F(y^n_w,h)|). We combine the MAE loss and the FFT loss to define our network's overall loss function as follows: L = L_ MAE + β L_ FFT, where β is a hyperparameter that we set to 1 based on our empirical tests. By incorporating the defined loss function, our target during the training process is twofold: the network model aims not only to minimize discrepancies between the predicted and ground truth maps at the pixel level but also to ensure closeness in terms of their respective amplitudes in the fast Fourier transform (FFT) domain. The inclusion of FFT amplitudes in the loss function stems from empirical testing, where it has been observed that utilizing an FFT loss function can slightly enhance the recovery of the CMB power spectrum.
§.§ Foreground Parametrization
The convolutional neural network proposed in this study is a supervised machine learning algorithm, which requires a training dataset with known ground truth values. In our case, the training samples are obtained from simulated data generated using the publicly available Python Sky Model (PySM, <https://github.com/bthorne93/PySM_public>) package <cit.>. 
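Before turning to the sky simulations, we note that the combined loss defined above has a compact implementation. The following is a minimal sketch, assuming PyTorch tensors of shape (N, 1, H, W) and β = 1; the overall normalizations are absorbed into simple means.

```python
import torch

def fft_amplitude(x):
    """Amplitude of the 2D FFT, A_F = sqrt(Re[FFT]^2 + Im[FFT]^2)."""
    f = torch.fft.fft2(x)
    return torch.sqrt(f.real ** 2 + f.imag ** 2)

def total_loss(pred, target, beta=1.0):
    """L = L_MAE + beta * L_FFT for a batch of predicted and ground-truth maps."""
    l_mae = torch.mean(torch.abs(pred - target))
    l_fft = torch.mean(torch.abs(fft_amplitude(pred) - fft_amplitude(target)))
    return l_mae + beta * l_fft
```

We now return to the simulation of the sky maps used to construct the training set.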
The package enables simulation of full-sky maps including Galactic emission in both intensity and polarization at microwave frequencies. We focus specifically on three polarized foreground sources: synchrotron radiation, thermal dust emission, and Anomalous Microwave Emission (AME). §.§.§ Synchrotron Synchrotron radiation arises from the acceleration of relativistic cosmic ray electrons by Galactic magnetic fields. Below frequencies of approximately ∼ 50 GHz, it constitutes the dominant source of polarized foregrounds <cit.>. The spectral energy distribution (SED) of synchrotron emission is commonly described by a power law function. In this study, we adopt a general model to represent synchrotron polarized emission, which can be expressed as: Q_s(n̂,ν) =A_Q,s_ν_0(n̂)(ν/ν_0)^β_s(n̂), U_s(n̂,ν) =A_U,s_ν_0(n̂)(ν/ν_0)^β_s(n̂). Q_s(n̂,ν) and U_s(n̂,ν) represent the synchrotron Stokes polarization components. The amplitudes of these quantities at the pivot frequency ν_0 are denoted as A_Q,s_ν_0 and A_U,s_ν_0. The synchrotron spectral index is represented by β_s. It should be noted that all these parameters exhibit spatial dependence due to their expected variability across different sky directions (n̂). The simulation of Galactic synchrotron polarized radiation is performed by extrapolating template maps based on a parametric model derived from the s1 model. The polarization template maps are constructed using WMAP 9-yr 23 GHz Q and U maps <cit.>, respectively. The spectral index map is obtained from "Model 4" presented in <cit.>, which combines data from Haslam and WMAP 23-GHz polarization observations along with a Galactic magnetic field model. This spectral index map exhibits spatial variability, characterized by a mean value of approximately -3 with a error of around 0.06. §.§.§ Thermal dust The emission of thermal dust radiation originates from interstellar dust grains, which are heated through absorption in the optical and subsequently cooled by emitting in the far-infrared regime. At frequencies above approximately ∼ 70 GHz, it constitutes the primary source of polarized foregrounds. The SED of thermal dust is characterized by a modified blackbody emission due to opacity effects. As a result, the observed polarized spectrum is commonly described using a modified black-body model: Q_d(n̂,ν) =A_Q,d_ν_0(n̂)(ν/ν_0)^β_d(n̂)B(ν,T_d(n̂)), U_d(n̂,ν) =A_U,d_ν_0(n̂)(ν/ν_0)^β_d(n̂)B(ν,T_d(n̂)). Q_d(n̂,ν) and U_d(n̂,ν) represent its Stokes polarization components. The amplitudes of these quantities at the pivot frequency ν_0 are denoted as A_Q,d_ν_0 and A_U,d_ν_0. The spectral index is represented by β_d. Additionally, we use the function B(ν,T_d(n̂)), which corresponds to a standard black body spectrum with temperature T_d(n̂) ≈ 20 K. To simulate polarized dust maps, we utilize the template maps, spectral index map, and temperature (T_d(n̂)) map derived from the d1 model. This particular model employs polarization template maps at 353 GHz, which are obtained through analysis of Planck data using the code <cit.>. The spatial distribution of both β_d (the spectral index) and T_d(n̂) (the temperature) exhibit variability across different sky directions. Their respective mean values are approximately β_d≈ 1.54±0.03 and T_d(n̂)≈ 20.9±2.2. §.§.§ Anomalous microwave emission Anomalous Microwave Emission (AME) has been observed by radio/microwave instruments within the frequency range of approximately ≈ 10-60GHz <cit.>. Despite its detection, the precise mechanism responsible for AME remains uncertain. 
A notable and promising candidate model is based on electric dipole radiation emitted from small spinning dust grains <cit.>, which is adopted as the working model for AME in this study. This particular model attributes the emission to the rotational motion of a dust grain possessing an electric or magnetic dipole moment. It has been established that AME exhibits a low level of polarization. For instance, <cit.> constrained the polarization fraction of AME to be less than 2.6% using WMAP 7-year data. Similarly, <cit.> utilized QUIJOTE's data at a frequency of 17 GHz to set upper limits on the polarization fraction of AME at 0.39%, which further decreased to 0.22% when combined with WMAP's data at 41 GHz. Although AME demonstrates a relatively low degree of polarization, it still represents a potentially significant foreground component for future sensitive CMB experiments aiming to detect CMB B-modes <cit.>. In this study, we adopt the a2 AME model. Within this framework, the AME intensity is determined using the SPDUST2 code <cit.>, which relies on Planck templates derived from the parametric fit to the Planck data <cit.>. The mathematical expression for the AME intensity can be formulated as follows: I_a(n̂,ν)= A_T,ν_0,1(n̂)ϵ(ν,ν_0,1,ν_p,1(n̂),ν_p_0) +A_T,ν_0,2(n̂)ϵ(ν,ν_0,2,ν_p,2(n̂),ν_p_0). The AME polarization model makes use of the dust polarization angle, denoted as γ_ d, to generate a template map. This angle is calculated using the Planck 2015 thermal dust Q and U maps at a frequency of 353 GHz. The expression for the AME polarization can be represented as: Q_a(n̂,ν) =fI_a cos(2γ_353), U_a(n̂,ν) =fI_a sin(2γ_353), where f is the polarization fraction, which is set at a global value of 2% in this work.
§.§ Data Simulations
For the CMB simulation, we first use the CAMB software package (<https://github.com/cmbant/CAMB>) to calculate the CMB power spectra and the lensing potential power spectrum within the Λ cold dark matter (ΛCDM) cosmological framework. The cosmological parameters of the ΛCDM framework are H_0, Ω_bh^2, Ω_ch^2, τ, A_s, and n_s; their best-fit values and standard deviations (1σ) are obtained from the Planck 2015 data <cit.>. Next, we generate CMB maps and lensing potential maps using the publicly available healpy package (<https://github.com/healpy/healpy>), which is a Python wrapper for HEALPix (<https://healpix.sourceforge.io/downloads.php>) developed by <cit.>. These maps have a pixel resolution defined by N_ side = 512. The lensed CMB maps are created by lensing the primordial CMB map with the lensing potential maps utilizing the code provided in <cit.>. Figure <ref> illustrates an example of both lensed and unlensed CMB Q maps. Notably, due to gravitational lensing effects, significant structure remains in the residual (difference) maps. Foreground components are generated via the implementation of the foreground model introduced in Section <ref>.
Our CNN model is designed to learn the mapping between contaminated polarization maps and foreground-cleaned CMB polarization maps. In the context of machine learning, the ability of a trained neural network to accurately predict unseen data is referred to as generalization. We also hope that the network has sufficient generalization capability to handle real data. To achieve this, we generate training data that closely resembles real-world scenarios. 
However, the foreground emission is too complex <cit.> to be parameterized, and this is reflected as the complicated spatial variation of the amplitude and spectral index in the power-law model of synchrotron, the modified blackbody model of thermal dust, and the model of AME. Consequently, during training data generation processes, we introduce manual uncertainties into these parameters. Specifically, for CMB simulations, cosmological parameters are treated as independent Gaussian random variables using mean values and 1σ standard deviations derived from Planck 2015 results. On the other hand, for foreground realizations involving amplitude template map (A) and spectral index map (β), each pixel value is multiplied by a randomly generated number with an average value of 1 and a standard deviation of 0.1 (0.05 for a spectral index). In this work, our target is to use the network model to remove the foregrounds from the multi-frequency observational maps. The inputs to the network model are the beam convolved observational maps at multiple frequencies, which contain the CMB, foregrounds, and instrumental noise. The desired output of network is the beam-convolved CMB maps (with lensing effect) plus noise maps. This implies that our network is designed to only remove the foregrounds while preserving beam convolved CMB with lensing effect and noise. The simulated CMB and foreground maps are represented as one-dimensional arrays using the RING numbering scheme of HEALPix. However, since the neural network utilized in this study requires two-dimensional data for both inputs and outputs, it is necessary to convert these one-dimensional arrays into two-dimensional arrays before inputting them into the network. We initially divide each input map into 12 patches according to the NESTED ordering scheme. Subsequently, each data patch is directly filled with an N_side× N_side square grid before combining these grids into a single 2D map representation. To obtain output maps in HEALPix format, an inverse process can be applied on the outputs generated by the network. For more detailed information regarding our methodology and implementation, please refer to <cit.>. Finally, angular power spectra C_ℓ are computed using [<https://github.com/LSSTDESC/NaMaster>] software package <cit.>. § APPLICATION TO CMB EXPERIMENTS §.§ Application to CMB-S4 experiment In this section, we apply our method to a set of simulated data that is representative of the performance characteristics of the CMB-S4 experiment. Specifically, we generate 1000 beam convolved emission maps and 300 white noise maps, encompassing eight distinct frequency bands. The chosen N_side parameter value is set at 512. We provide a summary of the frequency bands employed and instrumental properties specific to the CMB-S4 experiment in Table <ref>. Here, we assume that the instrument noise is Gaussian and white. For each individual frequency band, these 300 white noise maps are randomly added to 1000 beam convolved emission maps, thereby creating a training dataset consisting of 1000 observed maps across all eight frequency bands. To construct an independent test set for evaluation purposes, we generate an additional set of 300 sky emission maps and 300 noise maps using distinct random seeds and parameter values. Notably, it is important to highlight that the simulation process for the CMB maps incorporates considerations for lensing effects. 
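As a sketch of the sphere-to-plane reprojection described above, the 12-patch splitting can be written with healpy as follows; note that the exact in-face pixel layout used by CMBFSCNN may differ from the plain reshape shown here.

```python
import numpy as np
import healpy as hp

def healpix_to_patches(m, nside=512):
    """Split a RING-ordered HEALPix map into 12 square (nside x nside) patches.
    In NESTED ordering the nside^2 sub-pixels of each base pixel are contiguous,
    so a reshape yields the 12 'faces' of the sphere (the in-face pixel order
    follows the nested scheme; this is only a sketch of the actual layout)."""
    m_nest = hp.reorder(m, r2n=True)           # RING -> NESTED
    return m_nest.reshape(12, nside, nside)

def patches_to_healpix(patches):
    """Inverse operation: 12 square patches back to a RING-ordered map."""
    return hp.reorder(patches.reshape(-1), n2r=True)

m = np.arange(hp.nside2npix(512), dtype=float)  # toy map
patches = healpix_to_patches(m)                 # shape (12, 512, 512)
assert np.allclose(patches_to_healpix(patches), m)
```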
The inputs to our neural network model comprise beam-convolved observational Q or U maps obtained from all eight frequency bands, encompassing lensed CMB signals, foreground emissions, and instrumental noise. The desired output from the network is a beam-convolved lensed CMB map (Q or U) with the noise and beam at 220 GHz. To optimize the performance of our model, we employ the Adam optimizer algorithm as proposed by <cit.>, initializing the learning rate at 0.01 and progressively reducing it during the iterations until reaching a value of 10^-6. The training process encompasses approximately 30,000 iterations using a batch size of 12 and is executed on two NVIDIA Quadro GV100 GPUs. On average, the training process for a single network model requires approximately 14 hours to complete.
In this study, we select a patch of the sky that is one-twelfth the size of the entire sky, containing 512×512 pixels, as the training dataset for our neural network. Subsequently, we apply the trained network to a sample from the testing set. The resulting reconstructed CMB Q and U maps are presented in Figure <ref>. Upon examination of the residual map, it can be observed that the reconstructed polarization maps closely resemble the true simulated maps. To quantitatively evaluate the performance of our network, we utilize the mean absolute difference (MAD) as a metric, employing the following general formula: σ_ MAD = 1/N∑_i^N|X_i - Y_i|, where N is the number of pixels, and X and Y represent the predicted and true sky maps. The MAD between the recovered CMB Q and U maps and the target Q and U maps is computed. For the test set, the MAD values for the recovered Q map and U map are determined to be 0.016 ± 0.008 μK and 0.021 ± 0.002 μK, respectively. Similarly, the average MAD values for the training set are found to be 0.015 ± 0.003 μK for the Q map and 0.020 ± 0.002 μK for the U map, exhibiting consistency with the results obtained from the test set. It is worth noting that the performance of the recovered maps in the right region, as depicted in panel (a) or (b) of Figure <ref>, is slightly worse than that for other areas. This discrepancy can be attributed to the proximity of the right region to the Galactic plane, resulting in significant foreground contamination.
As shown in the upper panels of Figure <ref>, we derive the recovered noisy CMB EE and BB power spectra from the recovered Q and U maps. The recovered noisy EE and BB power spectra closely resemble the target spectra, indicating the ability of our network to effectively remove foreground contamination at the power spectrum level. It should be noted that the recovered CMB spectra show a rapid increase for scales ℓ>1000, primarily due to the beam effects and instrumental noise. In Figure <ref>, the deviation of the power spectra (Δ^EE_ℓ or Δ^BB_ℓ) is the average deviation across the test set, using a bin width of Δℓ=30. The error bars (σ^b) are calculated as follows: we first compute the standard deviation (σ_ℓ) of the power spectra on the test set, σ_ℓ=√(∑_i=1^N(D^i, predicted_ℓ-D^i, target_ℓ)^2/N), where D^i, predicted_ℓ and D^i, target_ℓ represent the power spectrum predicted by our method and the true power spectrum, respectively, and N is the number of samples in the test set. Subsequently, employing a bin width of Δℓ=30, the standard deviation within a bin, σ^b, is obtained by inverse-variance combination, σ^b = (∑_ℓ=b×30^(b+1)×30 1/σ_ℓ^2)^-1/2, where b=0,1,... is the index of the bin. 
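A minimal numpy sketch of the MAD statistic and of the binned uncertainty defined above (assuming the inverse-variance combination within bins of width Δℓ = 30) is given below; the variable names are ours.

```python
import numpy as np

def mean_absolute_difference(pred_map, true_map):
    """sigma_MAD between a recovered map and its target (e.g. in units of uK)."""
    return np.mean(np.abs(pred_map - true_map))

def binned_sigma(pred_dl, true_dl, bin_width=30):
    """Per-multipole scatter over the test set, then inverse-variance combined
    within each bin. pred_dl, true_dl: arrays of shape (N_test, lmax + 1)."""
    sigma_ell = np.sqrt(np.mean((pred_dl - true_dl) ** 2, axis=0))
    nbins = sigma_ell.size // bin_width
    sigma_b = np.array([
        1.0 / np.sqrt(np.sum(1.0 / sigma_ell[b * bin_width:(b + 1) * bin_width] ** 2))
        for b in range(nbins)
    ])
    return sigma_ell, sigma_b
```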
In our proposed methodology, the output CMB maps from the neural network retain the noise originating from the instrument. To mitigate the impact of instrumental noise at the power spectrum level, we draw inspiration from the approach employed by <cit.>. Specifically, according to CMB experiments scanning strategies, in a complete scan of the sky, we can divide the entire observational data into two parts based on the order of observation time. Thus, we can partition the entire data into two "half-split (HS) maps". These two HS maps share the same sky signal because the observed sky coverage remains unchanged, but two HS maps possess uncorrelated instrument noise because the observations are made at different time. Subsequently, we calculate the cross-correlation power spectra of two HS maps. As a result of the uncorrelated instrumental noise, the noise effects become nearly negligible in the cross-correlation power spectra, while the signal remains intact. Consequently, the cross-correlation between two HS maps provide an estimation of the signal power spectrum. It is important to note that the noise in the HS maps is enhanced by a factor of √(2) relative to the sensitivity values provided in Table <ref>. After obtaining the foreground-cleaned HS maps through the neural network's output, we employ the cross-spectra between the recovered noisy HS maps to obtain the CMB power spectra. The outcomes of this procedure are depicted in the lower panels of Figure <ref>. Notably, the recovered EE angular power spectra exhibit a remarkable consistency with the simulated spectra. However, by inspecting the discrepancy, Δ D_ℓ, CNN =D_ℓ, recovered - D_ℓ, true, between the recovered and fiducial power spectra, it becomes evident that the error in the recovered EE power spectra progressively increases for l>1200 due to the influence of beam effects. Additionally, we employ the coefficient of determination, R^2 = 1-σ_ CNN^2/σ^2, as a metric to assess the recovered performance on the power spectrum. Here, σ_ CNN^2 is computed as σ_ CNN^2 = 1/N∑_i^N(X_i - Y_i)^2, where N, X and Y are the maximum multipoles (ℓ_ max=1500), D_ℓ, recovered and D_ℓ, true, respectively. σ^2 represents the variance of the true CMB power spectrum, which corresponds to the power spectrum derived from the input fiducial map. R^2=1 indicates an exact match between the recovered power spectrum and the fiducial power spectrum. Conversely, a lower R^2 value (its minimum value is 0) signifies poorer fitting performance. In our practical implementation, we evaluate the effectiveness of the recovery signal process by quantifying the R^2 value for the recovered power spectrum. Specifically, for the recovered EE power spectrum, we obtain R^2 = 0.997 ± 0.012 (68% C.L.) across all scales and R^2 = 0.9998 ± 0.0001 (68% C.L.) for angular scales ℓ<1200. These results demonstrate a excellent agreement between the recovered spectrum and the fiducial spectrum. In the lower right panel of Figure <ref>, we present the recovery of the lensing BB power spectrum. It is evident that the lensing B-mode power spectrum can be successfully recovered for angular scales ℓ<800. Subsequently, we evaluate the recovery performance by calculating the R^2 value for the recovered lensing B-mode power spectrum. For angular scale ℓ<800, we find R^2 = 0.95 ± 0.028 (68% C.L.), while for ℓ<600, we obtain R^2 = 0.98 ± 0.015 (68% C.L.), indicating a good agreement between the recovered BB spectrum and the fiducial spectrum. 
However, we observe a limitation in obtaining the lensing B-mode power spectra for angular scales ℓ>800, attributed to the presence of noise. §.§ Application to LiteBIRD experiment Considering the successful removal of foregrounds achieved by our network in the performance of CMB-S4 experiment, we extend the application of this pipeline to full-sky simulated data corresponding to the performance of the LiteBIRD experiment. Similar to the data simulation procedure described in Section <ref> for the CMB-S4 experiment, we simulate 1000 observed emission maps that emulate the performance of the LiteBIRD experiment. The frequency bands and instrumental characteristics of the LiteBIRD experiment are summarized in Table <ref>. The training set comprises 1000 observed maps acquired at ten frequency bands, while the test set comprises 300 observed maps. As indicated in Table <ref>, the LiteBIRD experiment exhibits a higher instrumental noise level compared to the CMB-S4 experiment. In the process of using CNN for CMB component separation, it is impossible to completely eliminate the instrumental noise as shown in Section <ref>. In order to minimize the contamination of noise on the recovered CMB signal as much as possible, we calculate the minimized variance of noise from multiple observed frequency bands. Thus, we can obtain noise with minimized variance, expecting it to be lower than the noise level in each individual frequency map due to the accumulation of information. Ultimately, we add the noise with minimized variance to the CMB sky map as the desired output of the CNN model, which can minimize the contamination of noise on the CMB sky map output by the CNN model. Internal Linear Combination (ILC) is one method for computing the minimized variance, so we adopt the ILC method to calculate the noise with minimized variance. We employ the ILC method to obtain a weighted sum of the noise maps corresponding to the ten frequency bands of LiteBIRD experiment. Specifically, we use the polarization ILC method <cit.> to compute the target noise map with minimized variance. This method allows us to express the processed map as follows: Q̂(p)± iÛ(p) = ∑_f(ω^R_f± iω^I_f)(Q_f(p)± i U_f(p)) . As an un-biased estimator, corresponding linear weights should satisfy the following conditions, reads: ∑_fω^R_f=1, ∑_fω^I_f=0 , and can be obtained by minimizing the variance of |Q̂+iÛ|^2. p and f stand for the pixel index and the frequency channel. We perform the polarization ILC on each training set consisting of the lensed CMB, the foreground emission from the package, and noise, to get the corresponding weight coefficients ω^R and ω^I. By applying these coefficients, we obtain the target noise map as a weighted summation of the noise maps corresponding to the ten frequency channels. The inputs to the network model consist of full-sky observational maps (Q or U) acquired at ten distinct frequencies. The desired output of the network is a full-sky CMB map (Q or U) convolved with the beam at 166 GHz, plus an ILC noise map derived from a weighted summation of the noise maps of the ten frequency bands. It is important to note that the training data of the network use the HS maps, which implies that the standard deviation of the noise for the HS maps is amplified by a factor of √(2). After training the network, we proceed to evaluate its performance on the test set. The outcomes of foreground removal by the network are presented in Figure <ref>. 
The residual maps demonstrate a successful removal of foreground contamination, exhibiting a clean separation between the foreground and CMB components. Moreover, the residual maps retain a greater amount of information in the galactic plane, which is a region heavily affected by foreground contamination. To assess the accuracy of the network, we calculate the average MAD values across 300 testing sets. The recovered Q map yields an average MAD of 0.029 ± 0.004 μK, while the U map yields an average MAD of 0.032 ± 0.009 μK. Additionally, we examine the recovery of the noisy EE and BB power spectra, as illustrated in the upper panels of Figure <ref>. Obviously, the recovered noise EE and BB power spectra are very consistent with the target power spectrum, indicating that our neural network effectively removes foreground contamination. The EE and BB spectra experience a sharp increase at scales ℓ>600 due to the impact of the instrumental beam and noise. In order to mitigate the influence of noise on the power spectra, we employ the cross-correlation technique between two reconstructed HS maps. The results shown in the lower left panel of Figure <ref> demonstrate a remarkable consistency between the EE angular power spectrum and the fiducial EE spectrum for ℓ≲900. To quantitatively assess the agreement, we calculate the coefficient R^2 for the denoised EE power spectrum. The obtained values of R^2= 0.98± 0.02 (68% confidence level) for ℓ<900 and R^2= 0.999± 0.0005 for ℓ<800 indicate a strong concordance between the recovered EE power spectrum and the fiducial spectrum. However, it is important to note that the input maps utilized in the network model inevitably suffer from a loss of information regarding small-scale structures (ℓ>900) due to the presence of an instrumental beam with a large FWHM of 28.9 arcmin. Consequently, the output map from the network also lacks this high-ℓ information. As a result, the network is unable to recover the EE spectrum for ℓ>900. Additionally, we present the recovery of the lensing B-mode power spectrum in the lower right panel of Figure <ref>. Notably, we observe that the lensing BB power spectrum can be accurately reconstructed for angular scales ℓ<500. To quantitatively evaluate the agreement, we calculate the coefficient R^2 for the denoised lensing BB power spectrum, yielding a value of R^2= 0.95± 0.03 (68% confidence level) for ℓ<500 and R^2= 0.98± 0.01 (68% confidence level) for ℓ<400. However, it is important to acknowledge that the lensing B-mode power spectrum for ℓ>500 cannot be recovered due to the presence of instrumental noise and the effects of the instrumental beam, as discussed in Section <ref>. § DISCUSSION §.§ Variation of the foreground parameters in the noiseless case In Section <ref>, the simulation of foregrounds involves treating the amplitude A and spectral index β parameters for each foreground component as Gaussian random variables. However, given that CNN methods rely on simulated data in the training set, the randomization of parameters (A and β) in terms of their size could impact the obtained results. Hence, it is necessary to investigate the influence of parameter variations on our findings. Table <ref> outlines five distinct cases considered for the variation of parameters (A and β). For Case 1, no manual uncertainty is introduced to the amplitude A and spectral index β. 
In Case 2, all pixel value in the amplitude template map A (and spectral index map β) is multiplied by a random number drawn from a distribution with an average value of 1 and a standard deviation of 0.1 (0.05 for β). Similarly, in Case 3, the variation ranges are increased, and the multiplication factor for all pixel value in the amplitude template map A (and spectral index map β) is a random number drawn from a distribution with an average value of 1 and a standard deviation of 0.15 (0.1 for β). It is important to note that for Cases 2 and 3, the same random number is multiplied for all pixel values in the A (or β) map, implying that all pixel values in the A (or β) map vary together. In Case 4, a map is generated where each pixel value independently follows a Gaussian distribution with a mean of 1 and a standard deviation of 0.1 for A (0.05 for β), and the A (or β) map is then multiplied by this generated random map. In Case 5, the variation ranges of A and β are increased compared to Case 4. The variation range in Case 4 and Case 5 is the same as in Case 2 and Case 3, respectively. However, for Cases 4 and 5, each pixel value in the A and β maps is multiplied by an independent random number, indicating that all pixel values in the A and β maps vary independently. In this section, we assess the impact of foreground parameter variations on the removal of foreground components. We conduct our evaluation by applying our method to simulated data that emulates the performance of the CMB-S4 experiment. Table <ref> presents five distinct random cases, characterized by different standard deviation sizes and forms of random realizations. For each random case, we generate 1000 beam-convolved emission maps at eight frequency bands, with a resolution parameterized by N_side of 512. The frequency bands and instrumental properties of the CMB-S4 experiment are summarized in Table <ref>. To accurately investigate the effect of changes in foreground parameters on the network model, we exclude noise from the simulated data. The test set comprises 300 sky emission sets, similar to the training sets, but with different random seeds and parameter values. The inputs to the network model consist of the beam-convolved observational maps (Q maps or U maps) at the eight frequencies, encompassing the CMB and foregrounds. The desired output of the network is a beam-convolved CMB map (Q map or U map) with a beam at 220 GHz. Each random case is trained using a separate network. The outcomes for each random case are shown in Figure <ref>. Notably, for case 1, the residual map exhibits a high degree of cleanliness. The MAD between the recovered CMB Q map and the true Q map is measured to be 0.0025± 0.0001 μK. Furthermore, the angular power spectrum of the Q map can be accurately recovered, indicating effective removal of foregrounds. For random cases 2-5, the residual maps appear to retain more information compared to case 1. The MADs for these cases are determined to be 0.0061± 0.0004 μK, 0.0091±0.0012 μK, 0.0112± 0.0012 μK, and 0.0174± 0.0014 μK, respectively. Analysis of the recovered CMB maps reveals that changes in foreground parameters can degrade the performance of the network. Particularly, cases involving random pixel independence (cases 4 and 5) demonstrate a more significant degradation in performance of the network compared to cases involving random pixel dependence (cases 2 and 3). Subsequently, we compute the power spectrum of the thermal dust and synchrotron Q maps for each random case. 
As depicted in Figure <ref>, the simulated power spectra of the thermal dust and synchrotron for random cases 2 and 3 adequately align with the template power spectra (case 1) across all considered scales. However, for random cases 4 and 5, the simulated power spectra deviate from the template power spectra, particularly at small scales. These deviations resemble noise, potentially suggesting that the random realizations in cases 4 and 5 could introduce noise <cit.>. Consequently, we think that the noise stemming from random cases 4 and 5 will significantly impact the recovery of the CMB map. Finally, it is important to acknowledge that the discussion in this section does not account for the presence of instrumental noise. A comparison with the results showed in Figure <ref> reveals that the residual map is notably cleaner when noise is not taken into consideration. This observation highlights the detrimental effect of noise on the accuracy and quality of our recovered map results. §.§ Frequency selection In Section <ref>, we employ our proposed methodology to analyze simulated data corresponding to the performance of the CMB-S4 experiment. Specifically, we utilize beam-convolved observational maps (either Q maps or U maps) obtained from eight different frequency bands as inputs to the network model. The desired output of the network is a beam-convolved CMB map (Q map or U map) with both noise and beam effects at 220 GHz. In this section, we investigate the impact of varying the number of frequency bands in the input data on the obtained results, thereby elucidating the influence of frequency selection. In order to facilitate comparison, we consider five distinct cases as enumerated in Table <ref>. In Case 1, the network model takes as input the beam-convolved observational maps (Q maps or U maps) from eight different frequency bands, while the desired output of the network is a beam-convolved lensed CMB map (Q map or U map) with noise and beam effects at 220 GHz. It should be noted that Case 1 aligns with the methodology employed in Section <ref>. Moving on to Case 2, we only utilize the observational maps at 220 GHz as inputs to the network model, while the desired output remains unchanged. In Case 3, the input data consists of the observational maps at 155 GHz and 220 GHz, while the desired output remains consistent. Similarly, in Case 4, the network model takes as input the observational maps at 85 GHz, 95 GHz, 155 GHz, and 220 GHz, while the desired output remains unaltered. In Case 5, the input data comprises the observational maps from all eight frequencies, while the desired output is a beam-convolved lensed CMB map (Q map or U map) with noise and beam effects taken into account at 155 GHz. Notably, the desired output for Case 5 is convolved with a larger Gaussian beam of FWHMs = 22.7 arcmin compared to the beam of FWHMs = 13.0 arcmin at 220 GHz, thereby resulting in a higher degree of smoothing and loss of fine-scale information on the map. The selection of the Case 5 is made to assess the influence of an increased beam size on the network output. It is worth mentioning that the training and test sets for all cases are derived from Section <ref>, and each case is trained using a separate network. Figure <ref> illustrates the outcomes of the noisy CMB Q map recovery on the test set for Cases 2-5, while the recovery results for Case 1 are depicted in Figure <ref>. 
It is important to note that, for the sake of brevity, we only present the results of the Q map recovery, although the U map recovery yields similar outcomes. We can observe that the residual maps progressively exhibit cleaner features from Case 2 to Case 4, and ultimately to Case 1. This suggests that the recovery of the noisy CMB Q map improves as the number of frequency bands in the input data of the network increases. This can be attributed to the fact that the multi-band data provides a greater wealth of foreground and CMB signal information to the network, thereby facilitating more effective foreground removal. To provide a quantitative assessment, we calculate the average MAD values across 300 testing sets for each case listed in Table <ref>: 0.016 ± 0.008 μK for Case 1, 0.03 ± 0.013 μK for Case 2, 0.027 ± 0.010 μK for Case 3, and 0.023 ± 0.007 μK for Case 4. The comparison between these MAD values further supports our conclusion. Furthermore, Figure <ref> depicts the recovered CMB QQ power spectra after the denoising step. We can see that the QQ power spectra can be accurately recovered, indicating that the number of frequency bands in the input data of the network has negligible influence on the power spectrum recovery.
The average MAD value for Case 5 is computed as 0.012 ± 0.004 μK, slightly smaller than the MAD value observed in Case 1. This suggests that, in terms of map-level recovery, the efficacy of Case 5 surpasses that of Case 1. This phenomenon could be due to the discrepancy in sensitivity between the 155 GHz and 220 GHz frequency bands: the output of Case 5 exhibits lower noise levels than that of Case 1 owing to the lower noise level (better sensitivity) of the 155 GHz frequency band. However, due to the larger beam size in the target map, the recovered map also suffers from a loss of small-scale information, which is evident in the recovered power spectrum. For Case 5, Figure <ref> demonstrates that the QQ power spectra can be accurately recovered for ℓ<1100. However, it is important to note that the uncertainty in the recovered QQ spectrum significantly increases as a consequence of the larger beam effect present in the training target map. Considering the findings from the analysis in Section <ref>, it can be concluded that the presence of instrumental noise and beam effects significantly impairs the accuracy of the recovered results, particularly at smaller scales. Consequently, for the purpose of recovering the CMB signal, the 220 GHz beam and noise are selected as the instrument characteristics of the target map in Section <ref>. This choice is driven by multiple factors, including its smaller FWHM in comparison to the lower frequency bands, as well as its lower noise level in comparison to the 270 GHz frequency. These considerations are anticipated to enhance the network's ability to recover the CMB signal. Similarly, for the LiteBIRD experiment in Section <ref>, the beam at the 166 GHz frequency band is chosen as the beam of the target map, considering both noise and beam characteristics, as it yields the most favorable outcome.
§.§ Map Denoising
In Section <ref>, we have demonstrated the efficacy of our network in effectively eliminating foregrounds. However, it is important to note that the output CMB maps generated by the neural network still retain the presence of instrumental noise. To address this concern, we employed a cross-correlation technique to mitigate the impact of instrumental noise on the power spectra. 
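For reference, a minimal sketch of the half-split cross-spectrum estimate with NaMaster is shown below; the mask and binning are illustrative, and the half-split Q/U maps are assumed to be the foreground-cleaned outputs of the network.

```python
import numpy as np
import healpy as hp
import pymaster as nmt

nside = 512
mask = np.ones(hp.nside2npix(nside))      # in practice, an apodized partial-sky mask

def hs_cross_spectra(q1, u1, q2, u2, nlb=30):
    """Cross-spectra of two half-split (Q, U) maps that share the same sky signal
    but have uncorrelated noise, so the noise bias largely cancels."""
    f1 = nmt.NmtField(mask, [q1, u1])     # spin-2 field from the first HS map
    f2 = nmt.NmtField(mask, [q2, u2])     # spin-2 field from the second HS map
    bins = nmt.NmtBin.from_nside_linear(nside, nlb)
    wsp = nmt.NmtWorkspace()
    wsp.compute_coupling_matrix(f1, f2, bins)
    cl = wsp.decouple_cell(nmt.compute_coupled_cell(f1, f2))
    return bins.get_effective_ells(), cl  # rows of cl: EE, EB, BE, BB
```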
In this section, our objective is to employ the network model to remove the noise at the level of the CMB map, using simulated data with the performance of the CMB-S4 experiment. Consequently, we configured the inputs to the network as the beam-convolved observational maps at the eight frequencies, including the CMB signal, foregrounds, and instrumental noise. Meanwhile, the output of the network is the beam-convolved CMB map without instrumental noise, meaning that our network has been designed to remove both the foreground and instrumental noise components. Figure <ref> shows the outcomes obtained from the test set using simulated data for the CMB-S4 experiment. Notably, the Q and U residual maps retain a significant amount of information, suggesting that the accurate recovery of the pure CMB Q/U maps remains challenging. The recovered QQ and UU power spectra align closely with the simulated counterparts for ℓ≲ 900, but deviate gradually as the multipole moments increase beyond ℓ > 900. Additionally, we compute the EE and BB power spectra from the recovered CMB Q/U maps, as illustrated in Figure <ref>. The EE power spectrum demonstrates consistency with the fiducial spectrum for ℓ≲ 900, but exhibits increasing deviation for higher multipoles at ℓ > 900. Moreover, the lensing B-mode power spectrum exhibits a gradual deviation as the multipoles increase beyond ℓ > 300. These findings suggest that our network model struggles to accurately recover information at small scales. Consequently, distinguishing between the polarized CMB and noise at the map level proves to be a challenge for the network model.
§.§ Dependency of foreground models
Our CNN model training relies on the training set, where the simulation of the foregrounds is based on the PySM models. Therefore, our method is a parameterized component separation algorithm, and our CNN model will inevitably depend on the foreground model. Here, we test the dependency of the CNN model on the foreground models. We train the network on the original training set from Section <ref>. Then, we vary the foreground models in the test set to examine the performance of the network model under different foreground models. Specifically, the simulation of synchrotron radiation and thermal dust radiation in the training set is based on the s1 and d1 models from the PySM package, whereas the simulation in the test set is based on the synchrotron s2 or thermal dust d4 models. Compared to the synchrotron s1 model, the s2 model takes into account a spectral index varying with latitude. The spectral index of the s2 model is defined as β_s2=β_s,b=0+δ_βsin|b|, where b is the Galactic latitude. We use a gradient δ_β=-0.3 based on the WMAP polarization data. Compared to the thermal dust d1 model, the d4 model has two dust components. Here we present two experiments. The first experiment involves only changing the synchrotron radiation model. The training dataset is derived from the simulations detailed in Section <ref>, where data were simulated to reflect the capabilities of the CMB-S4 experiment. In generating the test set for the first experiment, the synchrotron radiation model was substituted with the s2 model. In the second experiment, both the synchrotron radiation model and the thermal dust model were altered simultaneously: when creating its test set, the synchrotron and thermal dust models were replaced by the s2 and d4 models, respectively. 
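As an illustration of how such an alternative test sky could be generated, the sketch below uses the PySM 3 interface for brevity (the simulations in this work use the PySM package cited above, whose interface differs slightly); the 95 GHz frequency is only an example.

```python
import pysm3
import pysm3.units as u

nside = 512
# s2: synchrotron with a latitude-dependent spectral index; d4: two-component dust;
# a2: spinning-dust AME with a 2% polarization fraction
sky = pysm3.Sky(nside=nside, preset_strings=["s2", "d4", "a2"])
emission = sky.get_emission(95 * u.GHz)                    # (I, Q, U) in uK_RJ
emission = emission.to(u.uK_CMB,
                       equivalencies=u.cmb_equivalencies(95 * u.GHz))
q_fg, u_fg = emission[1].value, emission[2].value          # foreground Q/U at 95 GHz
```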
Once the network model is well trained on the training set, we feed the test sets of the two experiments into the trained network model. We first present the results of the first experiment. As shown in Figure <ref>, only a small amount of information remains in the residual maps, indicating that the foregrounds can be cleanly removed. The recovered Q map yields an average MAD of 0.032 ± 0.016 μK, while the U map yields an average MAD of 0.030 ± 0.024 μK. These MAD values are slightly larger than those from Section <ref>, which are 0.016 ± 0.008 μK for the Q map recovery and 0.021 ± 0.002 μK for the U map recovery. Figure <ref> displays the CMB power spectra reconstructed on this test set. We can observe that both the CMB EE power spectrum and the lensing BB power spectrum can be accurately recovered, consistent with the results from Section <ref>. These results indicate that altering the synchrotron radiation model has a minimal impact on our results.
Then, we present the results of the second experiment. As shown in Figure <ref>, the residual map retains a significant amount of information, indicating that there are significant foreground residuals in the reconstructed noisy CMB Q/U maps. We calculate the average MAD values across the test set. The recovered CMB Q map yields an average MAD of 0.195 ± 0.049 μK, while the U map yields an average MAD of 0.241 ± 0.059 μK. These MAD values are about ten times greater than the values in Section <ref>. Figure <ref> displays the CMB power spectra reconstructed on this test set. It can be observed that residual foregrounds have a slight impact on the EE power spectrum at angular scales ℓ<200, but their influence at smaller angular scales (higher ℓ) can be neglected. Therefore, our CNN model remains effective in recovering the CMB EE power spectrum. However, the residual foregrounds significantly influence the recovery of the lensing BB power spectrum, leading to a noticeable bias in the recovered BB power spectrum across all angular scales.
Based on the results of the two experiments, it is evident that altering the thermal dust model has a significant impact on the recovery of the lensing B-modes. This also demonstrates the dependency of our method on the foreground models. Although we randomized the parameters of the foreground models during the simulation of the training set, the lack of different foreground models in the training set resulted in a higher amount of foreground residuals in the reconstructed CMB maps when the thermal dust model is changed. Given the strong fitting capability of CNN methods, we can incorporate various thermal dust and synchrotron radiation models into the training set; that is, we can also generalize the foreground models during the simulation of the training set. This approach can effectively alleviate the dependence of CNN methods on foreground models. We augmented the original training set with 300 sets of multi-frequency observed sky maps. The simulations of synchrotron and thermal dust emissions in these 300 sky maps were based on the s2 and d4 models. During the simulation process, we still randomized the spectral indices and amplitudes of the foregrounds. We trained the network model on this expanded training set. After completing the network training, we input the test set into the trained network. In the test set, the simulations of synchrotron and thermal dust emissions are based on the s2 and d4 models. 
The recovered CMB Q map yields an average MAD of 0.031 ± 0.014 μK, while the CMB U map yields an average MAD of 0.038 ± 0.029 μK. These MAD values are slightly larger than those from Section <ref>. The results of recovering the CMB polarization power spectra are shown in Figure <ref>, demonstrating that we can accurately recover the CMB EE and lensing BB power spectra. These results indicate that if the training set includes data from more foreground models, the network model trained on it can handle more complex foreground contamination.
§.§ Needlet domain ILC
ILC is a blind source separation method that has been widely applied to CMB foreground subtraction. As shown in equation (<ref>), the clean CMB map is represented as a weighted linear combination of the sky maps observed in multiple frequency bands, with the weights calculated through a variance-minimization method. Here, we employ the ILC method to analyze simulated data corresponding to the performance of the CMB-S4 experiment, and we compare the efficiency of our CNN method with that of the ILC method in removing polarized foregrounds. Specifically, we use the needlet-domain ILC (NILC) method to recover the CMB polarization maps. The NILC method is an improvement of the ILC method and has been utilized for foreground removal in CMB temperature (T), polarization E-mode, and polarization B-mode maps <cit.>.
Firstly, we provide a brief overview of the NILC method. We consider multi-frequency observational sky maps (X^ obs,ν(p)) with varying instrument beams for each frequency band. The indices ν and p denote the frequency and the pixel, respectively. The observed maps are convolved/deconvolved to a common resolution in harmonic space: X_ℓ m^ν=(b_ℓ/b_ℓ^ν) X_ℓ m^ obs,ν, where X_ℓ m^ obs,ν are the harmonic coefficients of the maps X^ obs,ν, and b_ℓ^ν and b_ℓ represent the beam window function of each frequency band and the common beam window function, respectively. After the beam window function correction, we can assume that the CMB is frequency-independent. The maps are given as: X^ν(p)=X^ CMB(p) + X^ν, Fg(p)+ n^ν(p), where X^ CMB(p) and X^ν, Fg(p) represent the CMB map and the foreground map, respectively, and n^ν(p) is the instrumental noise. Each of these maps X_ℓ m^ν with the common beam can be decomposed into a set of filtered maps X_ℓ m^ν,j using filters h_ℓ^j, X_ℓ m^ν,j=h_ℓ^jX_ℓ m^ν. The filters are chosen in such a way that ∑_j(h_ℓ^j)^2=1. In this work, we adopt the filters in the following form: h_ℓ^j = cos[((ℓ_ mid^j-ℓ)/(ℓ_ mid^j-ℓ_min^j)) π/2] for ℓ_min^j⩽ℓ<ℓ_ mid^j, and h_ℓ^j = cos[((ℓ-ℓ_ mid^j)/(ℓ_max^j-ℓ_ mid^j)) π/2] for ℓ_ mid^j⩽ℓ⩽ℓ_max^j. In terms of h_ℓ^j, the spherical needlets in HEALPix pixelization space are defined as ψ_jk(p)=√(4π/N_j)∑_ℓ mh_ℓ^jY_ℓ m(p)Y_ℓ m^*(n_jk), where N_j and n_jk represent the number of pixels and the k-th pixel of the j-th needlet map, respectively. The needlet-transformed coefficients of the observed maps (X^ν) are denoted as β_jk^ν = ∫_S^2X^ν(n̂) Ψ_jk(n̂) dΩ_n̂ = √(4π/N_j)∑_ℓ=0^ℓ_max∑_m=-ℓ^ℓh_ℓ^j X^ν_ℓ m Y_ℓ m(n_jk), and the inverse transformation is given by X_ℓ m^ν=∑_jkβ_jk^ν√(4π/N_j)h_ℓ^jY_ℓ m^*(n_jk). The ILC estimate of the needlet coefficients of the cleaned map is obtained as a linearly weighted sum of the needlet coefficients, β_jk^NILC=∑_νω_jk^νβ_jk^ν, where the requirement to preserve the CMB signal during the cleaning is formulated as a constraint: ∑_νω_jk^ν=1. The needlet ILC weights can be calculated by minimizing the variance. 
The resulting needlet ILC weights that minimize the variance of the reconstructed CMB are expressed as w_j^NILC(n_jk)=Ĉ_jk^-11/(1^TĈ_jk^-11), with Ĉ_jk=C_jk^ν_1×ν_2 = ⟨β_j^ν_1(n_jk)β_j^ν_2(n_jk)⟩, where 1 is a column vector of ones. The NILC-cleaned map is transformed from the cleaned needlet maps according to eq. (<ref>), X̂_ℓ m^NILC=∑_jkβ_jk^NILC(n_jk)√(4π/N_j)h_ℓ^jY_ℓ m^*(n_jk).
<cit.> and <cit.> have already demonstrated the effectiveness of NILC in directly removing foregrounds from E- and B-mode maps. Here, we adopt the same approach as theirs. Firstly, the CMB polarization maps need to be decomposed into (E, B) maps. When performing spherical harmonic transforms on a partial sky, the orthogonality of the spherical harmonics is no longer satisfied, leading to EB leakage. For ground-based CMB observations, this leakage is inevitable, necessitating a correction of the EB leakage. Here, we briefly outline the correction for EB leakage; detailed information can be found in previous work <cit.>. First, we decompose the masked observed polarization maps (Q, U) into (E, B) maps. Then, the inverse transformation of the (E, 0) maps yields the (Q_E, U_E) maps. Third, we decompose the masked (Q_E, U_E) maps to obtain the B^' map, which serves as the leakage template for the B map. Finally, we remove the EB leakage from the masked B map by a linear fit.
The data utilized here are obtained from the simulations in Section <ref>, where we simulated the observational sky maps with the performance of the CMB-S4 experiment. We first decompose the observed polarization (Q, U) maps into (E, B) maps and correct for EB leakage. Subsequently, we apply the NILC method to the multi-frequency observed E- and B-mode maps individually to obtain a foreground-cleaned CMB map. It should be noted that there is still residual noise in the foreground-cleaned CMB map. As demonstrated in Section <ref>, the noise bias in the power spectrum can be effectively removed through cross-correlation between two HS maps. Alternatively, the noise bias can be eliminated by estimating the power spectrum of the residual noise. Here we adopt the latter method to mitigate the noise bias. Firstly, 100 noise maps are simulated. Subsequent to applying the NILC method to the multi-frequency sky maps, we keep the NILC weights unchanged and feed these 100 noise sky maps into the NILC, generating 100 residual noise maps. The noise bias in the power spectrum is properly corrected by subtracting the average power spectrum of these 100 noise residual maps from the foreground-cleaned CMB power spectrum.
Figure <ref> displays the power spectra of the foreground-cleaned CMB map. It is evident that the NILC technique accurately recovers the EE power spectrum at angular scales ℓ<900. In contrast, our CNN method can accurately recover the EE power spectrum at finer angular scales. Concerning the lensing B-mode power spectrum, NILC demonstrates the ability to recover the B-mode power spectrum at angular scales ℓ<400, whereas our approach excels in precisely retrieving the B-mode power spectrum at angular scales ℓ<800, with reduced errors in the restoration process. Notably, precise recovery of the B-mode power spectra at angular scales ℓ<300 is sufficient for detecting primordial gravitational waves. Thus, ILC remains an excellent blind source separation algorithm. Enhanced recovery of smaller-scale power spectra is beneficial for other cosmological investigations. 
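For completeness, the variance-minimizing weights of equation (<ref>) admit a compact numerical form. The sketch below uses a single covariance matrix estimated from all pixels of a needlet scale, whereas the actual NILC estimates the covariance locally around each pixel n_jk.

```python
import numpy as np

def nilc_weights(beta):
    """ILC weights for one needlet scale.
    beta: (n_freq, n_pix) array of needlet coefficients at this scale.
    Returns w with sum(w) = 1 that minimizes the variance of w @ beta."""
    cov = np.cov(beta)                        # empirical (n_freq x n_freq) covariance
    ones = np.ones(cov.shape[0])
    cinv_one = np.linalg.solve(cov, ones)     # C^{-1} 1
    return cinv_one / (ones @ cinv_one)       # C^{-1} 1 / (1^T C^{-1} 1)

beta_j = np.random.randn(8, 10000)            # toy coefficients for 8 channels
w_j = nilc_weights(beta_j)
beta_clean = w_j @ beta_j                     # cleaned needlet coefficients
```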
Lastly, as shown in equation (<ref>), NILC necessitates highly accurate beam modeling, whereas our CNN method focuses on the pixel-space distribution characteristics of the map, without specific beam requirements. § CONCLUSIONS This paper presents the application of a machine-learning technique, CMBFSCNN <cit.>, to the extraction of CMB signals from diverse sources of polarized foreground contamination. Our methodology consists of two sequential steps: (1) employing a CNN to eliminate foreground contamination from the observed CMB maps; and (2) employing a cross-correlation technique to mitigate the impact of instrumental noise on the power spectra. We first implement our pipeline on simulated data designed to match the performance of the CMB-S4 experiment. The data simulation incorporates the lensing effect, and an important objective is to accurately recover the weaker lensing BB power spectrum. At the map level, CMBFSCNN effectively eliminates the polarized foreground components from both the Q and U maps. The mean absolute deviation (MAD) values between the recovered maps and the corresponding target noisy maps are 0.016 ± 0.008 μK for the Q map recovery and 0.021 ± 0.002 μK for the U map recovery. We then partition the data into two HS maps and perform cross-correlation to mitigate the noise effects on the power spectrum. Notably, the recovered CMB EE power spectra obtained through our methodology closely match the input fiducial CMB information. Additionally, the CMB lensing B-mode power spectrum can be accurately recovered at angular scales of ℓ<800. Subsequently, we employ this pipeline on full-sky simulated data emulating the performance of the LiteBIRD experiment. To mitigate the noise level in the network output map, we employ the Internal Linear Combination (ILC) method, which involves obtaining a weighted sum of the noise maps corresponding to the ten frequency bands. The resulting ILC noise, added to the beam-convolved CMB map, serves as the training target for the network. At the map level, CMBFSCNN effectively removes the polarized foreground components from both the full-sky Q and U maps. The MAD values between the recovered maps and the corresponding target noisy maps are 0.029 ± 0.004 μK for the Q map recovery and 0.032 ± 0.009 μK for the U map recovery. Following the denoising step, the recovered CMB EE power spectrum closely matches the input fiducial CMB information, and the CMB lensing B-mode power spectrum can be accurately recovered at angular scales ℓ<600. These results suggest that CMBFSCNN is capable of successfully handling full-sky polarized maps. Our findings also demonstrate the inherent challenge faced by network models in accurately reconstructing the pure CMB polarized signal from observed data contaminated by diverse foreground sources. However, we remain optimistic that future work will deepen our understanding of this issue and make substantial progress in this research direction. Encouragingly, the network model is able to effectively recover the CMB polarized signal plus instrumental noise. Finally, we illustrate the dependence of our approach on the foreground models. When the actual sky observations align with the simulations in the training data, our outcomes exhibit high quality. Conversely, discrepancies between the real observed signal and the training-set simulations result in an increased presence of residual foregrounds in the reconstructed CMB maps.
These residual foreground components have an impact on the reconstruction of the lensing B-mode power spectrum. This underscores the necessity of possessing prior knowledge regarding the sky signal and employing this prior information for precise modeling of the sky signals. Further quantitative research on the dependency of our method on the foreground models is left for future work. We have shown that the CNN method has a good performance in processing CMB polarized maps. More interestingly, the CNN method could be used to reconstruct the foregrounds. We will investigate these interesting issues in future works. § ACKNOWLEDGEMENT J.-Q.X. is supported by the National Science Foundation of China, under grant Nos. 12021003, by the National Key R&D Program of China, Nos. 2020YFC2201603, by the Fundamental Research Funds for the Central Universities. Some of the results in this paper have been derived using the HEALPix (page: <https://healpix.sourceforge.io/>) 99 [Abazajian et al.(2016)]Abazajian:2016 Abazajian K. N., Adshead P., Ahmed Z., 2016, arXiv:1610.02743 [Ade et al.(2019)]Ade:2019 Ade P., Aguirre J., Ahmed Z., et al., 2019, JCAP, 2019, 056 [Ali-Haïmoud et al.(2009)]Ali-Haimoud:2009 Ali-Haïmoud Y., Hirata C.M., Dickinson C., 2009, MNRAS, 395, 1055 [Alonso et al.(2019)]Alonso:2019 Alonso, D., Sanchez, J., Slosar, A., et al. 2019, , 484, 4127 [Anwar et al.(2019)]Anwar:2019 Anwar S., Khan S., Barnes N., 2019, arxiv:1904.07523 [Armitage-Caplan et al.(2012)]Armitage-Caplan:2012 Armitage-Caplan C., Dunkley J., Eriksen H.K., Dickinson C., 2012, MNRAS, 424, 1914 [Baccigalupi et al.(2000)]Baccigalupi:2000 Baccigalupi, C., Bedini, L., Burigana, C., et al. 2000, , 318, 769. doi:10.1046/j.1365-8711.2000.03751.x [Basak et al.(2013)]Basak:2013 Basak S., and Delabrouille J., MNRAS, 2013, 435, 18–2 [Basak & Delabrouille(2012)]Basak:2012 Basak, S. & Delabrouille, J. 2012, , 419, 1163. doi:10.1111/j.1365-2966.2011.19770.x [Bennett et al.(2003)]Bennett:2003 Bennett C.L., Halpern M., Hinshaw G., et al., 2003, ApJS, 148, 1 [Bennett et al.(2013)]Bennett:2013 Bennett C.L., Larson D., Weiland J.L., et al., 2013, ApJS, 208, 54 [Betoule et al.(2009)]Betoule:2009 Betoule M., Pierpaoli E., Delabrouille J., Le Jeune M., Cardoso J.-F., 2009, A&A, 503, 691 [Caldeira et al.(2019)]Caldeira:2019 Caldeira, J., Wu, W. L. K., Nord, B., et al. 2019, Astronomy and Computing, 28, 100307 [Das et al.(2014)]Das:2014 Das S., Louis T., Nolta M.R., et al., 2014, JCAP, 2014, 014 [Delabrouille et al.(2003)]Delabrouille:2003 Delabrouille J., Cardoso J.-F., Patanchon G., 2003, MNRAS, 346, 1089 [Dickinson et al.(2011)]Dickinson:2011 Dickinson C., Peel M., Vidal M., 2011, MNRAS, 418, L35 [Dou et al.(2023)]Dou:2023 Dou, J., Ghosh, S., Santos, L., et al. 2023, arXiv:2310.19627. doi:10.48550/arXiv.2310.19627 [Draine & Hensley (2013)]Draine:2013 Draine, B. T., & Hensley, B. 
2013, ApJ, 765, 159 [Draine & Lazarian (1998)]Draine:1998 Draine B.T., Lazarian A., 1998, ApJ, 508, 157 [Draine et al.(1999)]Draine:1999 Draine, B.T., Lazarian, A., 1999, ApJ 512, 740 [Errard et al.(2016)]Errard:2016 Errard J., Feeney S.M., Peiris H.V., Jaffe A.H., 2016, JCAP, 03, 052 [Fluke & Jacobs (2020)]Fluke:2020 Fluke C.J., Jacobs C., 2020, WDMKD, 10, 1349 [Génova-Santos et al.(2017)]Genova-Santos:2017 Génova-Santos R., Rubiño-Martín J.A., Peláez-Santos A., et al., 2017, MNRAS, 464, 4107 [Génova-Santos et al.(2015)]Genova-Santos:2015 Génova-Santos R., Rubiño-Martín J.A., Rebolo R., et al., 2015, arXiv:1504.03514 [Górski et al.(2005)]Gorski:2005 Górski K.M., Hivon E., Banday A.J., et al., 2005, ApJ, 622, 759 [Hanany et al.(2019)]Hanany:2019 Hanany S., Alvarez M., Artis E., et al., 2019, arXiv:1902.10541 [Haslam et al.(1982)]Haslam:1982 Haslam C.G.T., Salter C.J., Stoffel H., Wilson W.E., 1982, A&AS, 47, 1 [Hassan et al.(2018)]Hassan:2018 Hassan S., Liu A., Kohn S. et al. 2018, Proc. IAU, Cambridge University Press, 12, 47 [Hazumi et al.(2019)]Hazumi:2019 Hazumi M., Ade P.A.R., Akiba Y., et al., 2019, J. Low Temp. Phys., 194, 443 [He et al.(2015)]He:2015 He K., Zhang X., Ren S., Sun J. 2015, arXiv:1502.01852 [Hensley & Bull (2018)]Hensley:2018 Hensley B.S., Bull P., 2018, ApJ, 853, 127 [Huang et al.(2016)]Huang:2016 Huang G., Liu Z., van der Maaten L., Weinberger K.Q., 2016, arXiv:1608.06993 [Ioffe et al.(2015)]Ioffe:2015 Ioffe S., Szegedy C., 2015, arXiv:1502.03167 [Erickson (1957)]Erickson:1957 Erickson, W.C., 1957, ApJ 126, 480 [Eriksen et al.(2008)]Eriksen:2008 Eriksen H.K., Jewell J.B., Dickinson C., et al., 2008, ApJ, 676, 10 [Errard et al.(2016)]Errard:2016 Errard J., Feeney S.M., Peiris H.V., Jaffe A.H., 2016, JCAP, 59, 052 [Farsian et al.(2020)]Farsian:2020 Farsian F., Krachmalnicoff N., Baccigalupi C., 2020, JCAP, 24, 017 [Fernández-Cobos et al.(2016)]Fern:2016 Fernández-Cobos, R., Marcos-Caballero, A., Vielva, P., et al. 2016, , 459, 441 [Finkbeiner et al.(1999)]Finkbeiner:1999 Finkbeiner D. P., Davis M., Schlegel D. J., 1999, ApJ, 524, 867 [George et al.(2018)]George:2018 George, D., Shen, H., Huerta, E. A., 2018, PhRvD, 97, 101501 [Górski et al.(2005)]Gorski:2005 Górski, K. M., Hivon, E., Banday, A. J., et al. 2005, , 622, 759 [Kamionkowski et al.(2016)]Kamionkowski:2016 Kamionkowski M., Kovetz E.D., 2016, ARA&A, 54, 227 [Kim et al.(2009)]Kim:2009 Kim, J., Naselsky, P., & Christensen, P. R. 2009, , 79, 023003 [Kingma & Ba(2014)]Kingma:2014 Kingma, D. P. & Ba, J. 2014, arXiv:1412.6980 [Kogut (2012)]Kogut:2012 Kogut A., 2012, ApJ, 753, 110 [Kogut et al.(2007)]Kogut:2007 Kogut A., Dunkley J., Bennett C.L., et al., 2007, ApJ, 665, 355 [Kogut et al.(1996)]Kogut:1996 Kogut A., Banday A.J., Bennett C.L., et al., 1996, ApJ 464, L5 [Krachmalnicoff et al.(2016)]Krachmalnicoff:2016 Krachmalnicoff N., Baccigalupi C., Aumont J., Bersanelli M., Mennella A., 2016, A&A, 588, A65 [Krachmalnicoff et al.(2018)]Krachmalnicoff:2018 Krachmalnicoff N., Carretti E., Baccigalupi C., et al., 2018, A&A, 618, 18 [Krachmalnicoff et al.(2022)]Krachmalnicoff:2022 Krachmalnicoff, N., Matsumura, T., de la Hoz, E., et al. 2022, , 2022, 039 [Leitch et al.(1997)]Leitch:1997 Leitch E.M., Readhead A.C.S., Pearson T.J., Myers S.T., 1997, ApJ 486, L23 [Li et al.(2017)]Li:2017 Li, H., Li, S.-Y., Liu, Y., et al. 2017, arXiv:1710.03047 [Li et al.(2020)]Li:2020 Li X., Yu W., Fan X., 2020, Front. Phys. 15, 54501 [Liu et al.(2019)]Liu:2019 Liu, H., Creswell, J., von Hausegger, S., et al. 2019, , 100, 023538. 
doi:10.1103/PhysRevD.100.023538 [Mehta et al.(2019)]Mehta:2019 Mehta P., Bukov M., Wang C.H., et al., 2019, PhR, 810, 1 [Miville-Deschěnes et al.(2008)]Miville:2008 Miville-Deschěnes M.-A., Ysard N., Lavabre A., et al., 2008, A&A, 490, 1093 [Murphy et al.(2010)]Murphy:2010 Murphy E.J., Helou G., Condon J.J., et al., 2010, ApJL, 709, L108 [Næss & Louis(2013)]Naess:2013 Næss, S. K. & Louis, T. 2013, , 2013, 001 [Nah et al.(2018)]Nah:2018 Nah S., Kim T. H., and Lee K. M., 2018, arXiv:1612.02177 [Nørgaard-Nielsen & Jørgensen(2008)]Norgaard:2008 Nørgaard-Nielsen, H. U. & Jørgensen, H. E. 2008, , 318, 195. doi:10.1007/s10509-008-9912-6 [Petroff et al.(2020)]Petroff:2020 Petroff M.A., Addison G.E., Bennett C.L., Weiland J.L., 2020, ApJ, 903, 104 [Planck Collaboration et al.(2020)]Planck:2020 Planck Collaboration, Akrami, Y., Ashdown, M., et al. 2020, , 641, A4. doi:10.1051/0004-6361/201833881 [Planck Collaboration(2016a)]Planck Collaboration:2016a Planck Collaboration, Adam R., Ade P.A.R., et al., 2016a, A&A, 594, A10 [Planck Collaboration(2016b)]Planck Collaboration:2016b Planck Collaboration, Ade P.A.R., Aghanim N., et al., 2016b, A&A, 594, A13 [Planck Collaboration(2016)]Planck Collaboration:2016c Planck Collaboration, Adam R., Ade P.A.R., et al., 2016c, A&A, 594, A1 [Planck Collaboration(2014a)]Planck Collaboration:2014 Planck Collaboration, Aghanim N, Armitage-Caplan C., et al, 2014a, A&A, 571, A2 [Planck Collaboration(2014b)]Planck Collaboration:2014b Planck Collaboration, Ade P.A.R., Aghanim N., et al., 2014b, A&A, 571, A6 [Planck Collaboration(2014c)]Planck Collaboration:2014c Planck Collaboration, Ade P.A.R., Aghanim N., et al., 2014c, A&A, 571, A12 [Poidevin(2018)]Poidevin:2018 Poidevin F., Rubino-Martin J.A., Genova-Santos R., et al., 2018, arXiv:1802.04594 [Rogers(2016)]Rogers:2016 Rogers K.K., Peiris H.V., Leistedt B., McEwen J.D., Pontzen A., 2016, MNRAS, 460, 3014 [Poh & Dodelson (2017)]Poh:2017 Poh, J., & Dodelson, S. 2017, PhRvD, 95, 103511 [Remazeilles et al.(2015)]Remazeilles:2015 Remazeilles M., Dickinson C., Banday A.J., Bigot-Sazy M.-A., Ghosh T., 2015, MNRAS, 451, 4311 [Remazeilles et al.(2016)]Remazeilles:2016 Remazeilles M., Dickinson C., Eriksen H.K.K., Wehus I.K., 2016, MNRAS, 458, 2032 [Remazeilles et al.(2018)]Remazeilles:2018 Remazeilles M., Banday A.J., Baccigalupi C., et al., 2018, JCAP, 04, 023 [Ronneberger et al.(2015)]Ronneberger:2015 Ronneberger, O., Fischer, P., and Brox, T., 2015, MICCAI, Springer International Publishing, 234-241 [Rubiño-Martín et al.(2012)]Rubino-Martin:2012 Rubiño-Martín J.A., Rebolo R., Aguiar M., et al., 2012, Proc. SPIE, 8444, 84442Y [Schmelzle et al.(2017)]Schmelzle:2017 Schmelzle J., Lucchi A., Kacprzak T., et al., 2017, arXiv:1707.05167 [Tegmark et al.(2004)]Tegmark:2004 Tegmark M., Strauss M.A., Blanton M.R., et al., 2004, Phys. Rev. D, 69, 103501 [Shen et al.(2019)]Shen:2019 Shen H., George D., Huerta E. A., Zhao Z., 2019, ICASSP, 3237, arXiv:1711.09919 [Silsbee et al.(2011)]Silsbee:2011 Silsbee K., Ali-Haïmoud Y., Hirata C.M., 2011, MNRAS, 411, 2750 [Stompor et al.(2016)]Stompor:2016 Stompor R., Errard J., Poletti D., 2016, Phys. Rev. D, 94, 083526 [Story et al.(2013)]Story:2013 Story K.T., Reichardt C.L., Hou Z., et al., 2013, ApJ, 779, 86 [Sudevan et al.(2017)]Sudevan:2017 Sudevan V., Aluri P.K., Yadav S.K., Saha R., Souradeep T., 2017, ApJ, 842, 62 [Suzuki et al.(2018)]Suzuki:2018 Suzuki A., Ade P.A.R., Akiba Y., et al., 2018, J. Low Temp. Phys., 193, 1048 [Syed et al.(2021)]Syed:2021 Syed, A. Arora, S. Khan, M. 
Hayat, Fahad, M.-H. Yang, et al.,2021, arxiv:2102.02808 [Tegmark et al.(2004)]Tegmark:2004 Tegmark, M., Strauss, M. A., Blanton, M. R., et al. 2004, , 69, 103501 [Thorne et al.(2017)]Thorne:2017 Thorne B., Dunkley J., Alonso D., Naess S., 2017, MNRAS, 469, 2821 [Tian et al.(2020a)]Tian:2020a Tian C., Xu Y., Li Z. Y., Zuo W. M. , Fei L. K. and Liu H., 2020, Neural Networks, 124, 117-129 [Tian et al.(2020b)]Tian:2020b Tian C., Xu Y., and Zuo W., 2020, Neural Networks, 121, 461-473 [Wagner-Carena et al.(2020)]Wagner-Carena:2020 Wagner-Carena S., Hopkins M., Rivero A.D., Dvorkin C., 2020, MNRAS, 494, 1507 [Wang et al.(2020a)]Wang:2020a Wang, G.-J., Ma, X.-J., Li, S.-Y., Xia, J.-Q. 2020a, ApJS, 246, 13 [Wang et al.(2020b)]Wang:2020b Wang G.J., Li S.Y., Xia J.Q., 2020b, ApJS, 249, 17 [Wang et al.(2021)]Wang:2021 Wang, G.-J., Ma, X.-J., Xia, J.-Q. 2021, MNRAS, 501, 5714 [Wang et al.(2022)]Wang:2022 Wang, G.-J., Shi, H.-L., Yan, Y.-P., et al. 2022, , 260, 13 [Yan et al.(2023a)]Yan:2023a Yan, Y.-P., Wang, G.-J., Li, S.-Y., et al. 2023, arXiv:2305.02490 [Yan et al.(2023b)]Yan:2023b Yan, Y.-P., Wang, G.-J., Li, S.-Y., et al. 2023, arXiv:2306.01516 [Yan et al.(2023c)]Yan:2023c Yan, Y.-P., Wang, G.-J., Li, S.-Y., et al. 2023, , 947, 29 [Ysard et al.(2010)]Ysard:2010 Ysard N., Miville-Deschênes M.A., Verstraete L., 2010, A&A, 509, L1 [Yu & Koltun(2015)]Yu:2015 Yu, F. & Koltun, V. 2015, arXiv:1511.07122 [Zacchei et al.(2011)]Zacchei:2011 Zacchei A., Maino D., Baccigalupi C., et al., 2011, A&A, 536, A5 [Zegeye et al.(2023)]Zegeye:2023 Zegeye, D., Bianchini, F., Bond, J. R., et al. 2023, , 108, 103536. doi:10.1103/PhysRevD.108.103536 [Zhang et al.(2017)]Zhang:2017 Zhang K., Zuo W., Chen Y., Meng D., Zhang L., 2017, Transactions on Image Processing, 26, 3142-3155 [Zhang et al.(2022)]Zhang:2022 Zhang, Z., Liu, Y., Li, S.-Y., et al. 2022, , 2022, 044 [Zhang et al.(2024)]Zhang:2024 Zhang, Z., Liu, Y., Li, S.-Y., et al. 2024, , 2024, 014. doi:10.1088/1475-7516/2024/04/014 [Zonca et al.(2019)]Zonca:2019 Zonca, A., Singer, L., Lenz, D., et al. 2019, Journal of Open Source Software, 4, 1298
http://arxiv.org/abs/2406.17874v1
20240625182209
Central limits from generating functions
[ "Mitchell Lee" ]
math.PR
[ "math.PR", "60F05" ]
]Central limits from generating functions [2020]60F05 ]Mitchell Lee []Department of Mathematics, Harvard University, Cambridge, MA 02138, USA mitchell@math.harvard.edu § ABSTRACT Let (Y_n)_n be a sequence of ^d-valued random variables. Suppose that the generating function f(x, z) = ∑_n = 0^∞φ_Y_n(x) z^n, where φ_Y_n is the characteristic function of Y_n, extends to a function on a neighborhood of {0}×{z : |z| ≤ 1}⊂^d × which is meromorphic in z and has no zeroes. We prove that if 1 / f(x, z) is twice differentiable, then there exists a constant μ such that the distribution of (Y_n - μ n) / √(n) converges weakly to a normal distribution as n →∞. If Y_n = X_1 + ⋯ + X_n, where (X_n)_n are i.i.d. random variables, then we recover the classical (Lindeberg–Lévy) central limit theorem. We also prove the 2020 conjecture of Defant that if π_n ∈𝔖_n is a uniformly random permutation, then the distribution of ( (s(π_n)) + 1 - (3 - e) n) / √(n) converges, as n →∞, to a normal distribution with variance 2 + 2e - e^2. [ [ Accepted XXX, Received YYY. =============================== § INTRODUCTION For any positive integer d, let ⟨·, ·⟩^d ×^d → denote the standard bilinear form given by ⟨ x, y ⟩ = x_1 y_1 + ⋯ + x_d y_d. For any ^d-valued random variable Y, let φ_Y ^d →^d denote the corresponding characteristic function, given by φ_Y(ω) = [exp(i ⟨ Y, ω⟩)], where  denotes expected value <cit.>. We denote partial derivatives using a subscript; for example, g_x_1 z(x, z) denotes ∂/∂ x_1∂/∂ z g(x, z). For any real, symmetric, and positive semidefinite matrix Σ∈^d × d, let 𝒩(0, Σ) denote the multivariate normal distribution with mean 0 and covariance matrix Σ <cit.>. The main theorem of this article is the following central limit theorem. Like the classical (Lindeberg–Lévy) central limit theorem, it states that a particular sequence of random variables converges in distribution to a normally distributed random variable. Let d be a positive integer, and let Y_0, Y_1, Y_2, … be a sequence of ^d-valued random variables. Suppose that there exists a function g U →, where U is an open neighborhood of {0}×{z : |z| ≤ 1}⊂^d ×, such that * g(x, z) is holomorphic as a function of z for any fixed x; * g is twice differentiable; * for all (x, z) ∈ U with |z| < 1, we have ∑_n = 0^∞φ_Y_n(x) z^n = 1/g(x, z). For all j, k with 1 ≤ j, k ≤ d, define μ_j = i g_x_j(0, 1) and Σ_j, k = g_x_j x_k(0, 1) - i (μ_j g_x_k z(0, 1) + μ_k g_x_j z(0, 1)) + μ_j μ_k. Then μ∈^d is a real vector and Σ∈^d × d is a real, symmetric, and positive semidefinite matrix. Moreover, Z_n = (Y_n - μ n) /√(n) converges in distribution, as n →∞, to Z ∼𝒩(0, Σ). Compare this theorem to the 1983 central limit theorem of Bender and Richmond <cit.>, which has been shown to be useful throughout analytic combinatorics <cit.>. In <ref>, we will prove <ref>. In <ref>, we will show how <ref> easily implies the Lindeberg–Lévy central limit theorem: [<cit.>]corollaryllclt Let (X_n)_n be ^d-valued i.i.d. random variables such that [|X_1|^2] < ∞. Let μ = [X_1] and Σ = Cov[X_1]. Then X_1 + ⋯ + X_n - μ n/√(n) converges in distribution, as n →∞, to Z ∼𝒩(0, Σ). Then, we will prove the following 2020 conjecture of Defant. [<cit.>]corollarydefant Let 𝔖_n →ℕ denote the function that counts the descents of a permutation, and let s 𝔖_n →𝔖_n denote West's stack-sorting map. If π_n ∈𝔖_n is a uniformly random permutation, then (s(π_n)) + 1 - (3 - e) n/√(n) converges in distribution, as n →∞, to Z ∼𝒩(0, 2 + 2e - e^2). 
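As a quick numerical sanity check of the formulas for μ and Σ (not part of the paper), one can take the i.i.d. case of the first corollary, where g(x, z) = 1 - φ_{X_1}(x) z, so that μ and Σ should reduce to the mean and covariance of X_1. The sketch below does this in d = 1 for an exponential distribution, using finite differences; the choice of distribution and of step size is arbitrary.

```python
import numpy as np

# d = 1 check of mu = i g_x(0, 1) and
# Sigma = g_xx(0, 1) - i (mu g_xz(0, 1) + mu g_xz(0, 1)) + mu^2
# in the i.i.d. case g(x, z) = 1 - phi_X(x) z, where they should reduce to
# E[X] and Var[X].  Test distribution: X ~ Exponential(rate = 2), so that
# E[X] = 1/2 and Var[X] = 1/4.
rate = 2.0
phi = lambda x: rate / (rate - 1j * x)     # characteristic function of Exp(rate)
g = lambda x, z: 1.0 - phi(x) * z

h = 1e-4                                   # finite-difference step (arbitrary)
g_x  = (g(h, 1.0) - g(-h, 1.0)) / (2 * h)
g_xx = (g(h, 1.0) - 2 * g(0.0, 1.0) + g(-h, 1.0)) / h**2
g_xz = (g(h, 1.0 + h) - g(h, 1.0 - h) - g(-h, 1.0 + h) + g(-h, 1.0 - h)) / (4 * h**2)

mu = (1j * g_x).real
sigma = (g_xx - 1j * (2 * mu * g_xz) + mu**2).real
print(mu, sigma)                           # ~0.5 and ~0.25, i.e. E[X] and Var[X]
```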
Clearly, the expression (s(π_n)) + 1 appearing in the statement of <ref> can be replaced by (s(π_n)), but we use (s(π_n)) + 1 to match the conventions of <cit.>. § ACKNOWLEDGEMENTS The author thanks Colin Defant for helpful correspondence. § PROOF OF THEOREM <REF> Let D = {z : |z| ≤ 1} be the closed unit disc. Since D is compact, we may replace U with a smaller convex set U_1 × U_2, where U_1 is an open neighborhood of 0 ∈^d and U_2 is an open neighborhood of D ⊆. Let us first show that μ has real entries. By (<ref>), we have g(x, z) = 1/∑_n = 0^∞φ_Y_n(x) z^n. For all j with 1 ≤ j ≤ d, we may take the derivative with respect to x_j on both sides and substitute x = 0. This yields g_j(0, z) = -∑_n = 0^∞(∂/∂ x_jφ_Y_n)(0) z^n/(∑_n = 0^∞φ_Y_n(0) z^n)^2 = -∑_n = 0^∞ i [(Y_n)_j] z^n/(1/1 - z)^2 = -(1-z)^2∑_n = 0^∞ i [(Y_n)_j] z^n. Therefore, i g_j(0, z) = (1 - z)^2 ∑_n = 0^∞[(Y_n)_j] z^n. As z → 1^- through real numbers, the left-hand side of this equation approaches μ_j and the right-hand side is always real. Therefore, μ_j is real. It follows that Z_n = (Y_n - μ n) / √(n) is an ^d-valued random variable for all n. Therefore, by Lévy's convergence theorem <cit.>, it suffices to show that for all ω∈^d, we have lim_n →∞φ_Z_n(ω) = exp(-1/2⟨ω, Σω⟩). Broadly, we will prove (<ref>) by writing φ_Z_n(ω) in terms of φ_Y_n(ω / √(n)). To estimate the latter, we will split the series (<ref>) into a principal part and an analytic part, and show that only the principal part contributes meaningfully to the value of φ_Y_n(ω / √(n)). Observe that by substituting x = 0 into (<ref>), we obtain g(0, z) = 1 - z. Therefore, g only has one zero on the compact set {0}× D. It is at (x, z) = (0, 1), with g(0, 1) = 0 and g_z(0, 1) = -1 ≠ 0. Therefore, by the implicit function theorem, there exists a neighborhood V_1 ⊆ U_1 of 0 ∈^d, a neighborhood V_2 ⊆ U_2 of D ⊆, and a twice differentiable function b V_1 → such that for all (x, z) ∈ V_1 × V_2, we have g(x, z) = 0 if and only if z = b(x). Since b(0) = 1, we may also assume, by replacing V_1 with a smaller open set, that b does not vanish on V_1. For any fixed x ∈ V_1, the function 1 / g(x, z) is meromorphic on V_2 and its only pole is at z = b(x). Hence, we may remove its pole by subtracting the principal part. Explicitly, the function h(x, z) = 1/g(x, z) - a(x)/1 - z / b(x) = ∑_n = 0^∞(φ_Y_n(x) - a(x)/(b(x))^n) z^n extends analytically from V_2 ∖{r(x)} to V_2, where a(x) = -g_z(x, b(x))/b(x). Now, we proceed in a manner similar to Bender and Richmond <cit.>. Since V_2 is an open neighborhood of D, it contains the closed ball {z : |z| ≤ r} for some r > 1. For all n and x, the coefficient of z^n in the series h(x, z), which we denote [z^n]h(x, z), can be computed using the Cauchy integral formula: [z^n]h(x, z) = 1/2 π i∮_|z| = rh(x, z)/z^n + 1. Therefore, |[z^n]h(x, z)| ≤ r^-nsup_|z| = r |h(x, z)| = O(r^-n), where the constant hidden by the O notation is uniform for x in any compact set. It follows that φ_Y_n(x) = [z^n] 1/g(x, z) = a(x)/(b(x))^n + [z^n]h(x, z) = a(x)/(b(x))^n + O(r^-n), where, again, the constant hidden by the O is uniform for x in any compact set. Let us now turn to (<ref>). Fix ω∈^d. For all n, we have φ_Z_n(ω) = [exp(i ⟨Y_n - μ n/√(n), ω⟩)] = [i ⟨ Y_n, ω/√(n)⟩]/exp(i ⟨μ, ω⟩√(n)) = a(ω/√(n))/(b(ω/√(n)))^nexp(i ⟨μ, ω⟩√(n)) + O(r^-n) where we used (<ref>) in the third equality. Now, we analyze the denominator of (<ref>). Since b is twice differentiable and b(0) = 1, the function log(b(x)) has a second order Taylor expansion for x → 0 <cit.>. 
It is not difficult to compute the coefficients by differentiating the equation g(x, b(x)) = 0 using the chain rule. This yields log (b(x)) = -i⟨μ, x ⟩ + 1/2⟨ x, Σ x⟩ + o(|x|^2). Therefore, recalling that ω is fixed, (b(ω/√(n)))^nexp(i ⟨μ, ω⟩√(n)) = exp(n log(b(ω/√(n))) + i ⟨μ, ω⟩√(n)) = exp(n(-i ⟨μ, ω/√(n)⟩ + 1/2⟨ω/√(n), Σω/√(n)⟩ + o(1/n) ) + i ⟨μ, ω⟩√(n)) = exp(1/2⟨ω, Σω⟩ + o(1)). Substituting into (<ref>), we have lim_n →∞φ_Z_n(ω) = lim_n →∞ a(ω/√(n))/lim_n →∞(b(ω/√(n)))^nexp(i ⟨μ, ω⟩√(n)) = a(0)/exp(1/2⟨ω, Σω⟩) = exp(-1/2⟨ω, Σω⟩), proving (<ref>) and the theorem. § APPLICATIONS We now prove <ref> to demonstrate the utility of <ref>. * We have ∑_n = 0^∞φ_Y_n(x) z^n = ∑_n = 0^∞ (φ_X_1(x))^n z^n = 1/1 - φ_X_1(x) z. Now, apply <ref> with g(x, z) = 1 - φ_X_1(x) z. This is clearly holomorphic in z, and it is twice differentiable because [|X_1|^2] < ∞ <cit.>. * Following Defant, let F(y, z) = y/2 (-1 - y z + √(1 - 4z + 2yz + y^2z^2)) where we choose the branch of the square root that evaluates to 1 as z → 0. Let F̂(y, z) = ∑_m, n = 0^∞ F_m, ny^m z^n/n!, where F_m, n is the coefficient of y^mz^n in F(y, z). (The power series F̂ can alternatively be defined by F̂(y, z) = ℒ^-1{F(y, 1/t)/t}(z), where ℒ^-1 is the inverse Laplace transform with respect to the variable t.) We have <cit.> ∑_n = 1^∞(∑_π∈𝔖_n-1 y^(s(π)) + 1)z^n/n! = - log(1 + F̂(y, z)). Differentiating with respect to z and substituting y = e^ix, we find that the generating function of the characteristic functions φ_(s(π_n)) + 1 is the following: ∑_n = 0^∞φ_(s(π_n)) + 1(x) z^n = -F̂_z(e^ix, z)/1 + F̂(e^ix, z). Let g(x, z) = -1 + F̂(e^ix, z)/F̂_z(e^ix, z) be the reciprocal of this generating function. We now check that g(x, z) has an analytic continuation that satisfies the conditions of <ref>. First, observe that by (<ref>), F(y, z) / y can be written as a power series in z and yz. Therefore, the coefficient F_m, n is zero if m > n + 1. It is also easy to check using Darboux's lemma <cit.> that F_m, n is bounded by an exponential function of n. Hence, the series (<ref>) converges everywhere, so F̂ is entire (on ^2). Therefore, the numerator and denominator of (<ref>) are analytic (real-analytic in the first variable and complex-analytic in the second variable) as a function of (x, z) ∈×. One may easily compute F(1, z) = -z by substituting y = 1 in (<ref>). It follows that F̂(1, z) = -z as well, so F̂_z(1, z) = -1. Therefore, there is a neighborhood U of {0}×{z : |z| ≤ 1} on which F̂_z(e^ix, z) does not vanish. Then g is analytic on U. By <ref>, there exist constants μ, σ such that (s(π_n)) + 1 - μ n/√(n) converges in distribution, as n →∞, to Z ∼𝒩(0, σ). We can compute μ = 3 - e and σ = 2 + 2e - e^2, either by using the formulas for μ and σ at the end of the statement of <ref>, or by using the formulas for [(s(π_n)) + 1] and Var[(s(π_n)) + 1] <cit.>. plain
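The second corollary can also be probed by direct simulation: West's stack-sorting map has a simple one-pass implementation, so one can sample uniform permutations, compute the descent statistic des(s(π_n)) + 1, and compare the centred and rescaled statistic with the predicted normal limit. The sketch below does this; the values of n and the number of trials are arbitrary, and finite-n corrections mean the agreement is only approximate.

```python
import numpy as np

def stack_sort(perm):
    """West's stack-sorting map s: one pass through perm with a stack,
    popping to the output whenever the incoming entry exceeds the top."""
    stack, out = [], []
    for x in perm:
        while stack and stack[-1] < x:
            out.append(stack.pop())
        stack.append(x)
    out.extend(reversed(stack))
    return out

def descents(perm):
    return sum(perm[i] > perm[i + 1] for i in range(len(perm) - 1))

rng = np.random.default_rng(1)
n, trials = 300, 4000                      # arbitrary sizes for a quick check
stats = np.array([descents(stack_sort(list(rng.permutation(n) + 1))) + 1
                  for _ in range(trials)])
z = (stats - (3 - np.e) * n) / np.sqrt(n)
print(z.mean(), z.var())                   # mean near 0, variance near 2 + 2e - e^2 ~ 0.047
```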
http://arxiv.org/abs/2406.17658v1
20240625154928
Systematic integral evaluation for spin-resummed binary dynamics
[ "Gang Chen", "Jung-Wook Kim", "Tianheng Wang" ]
hep-th
[ "hep-th", "gr-qc", "hep-ph" ]
SNUTP24-003. gang.chen@nbi.ku.dk, Niels Bohr International Academy, Niels Bohr Institute, University of Copenhagen, Blegdamsvej 17, DK-2100 Copenhagen Ø, Denmark; jung-wook.kim@aei.mpg.de, Max Planck Institute for Gravitational Physics (Albert Einstein Institute), Am Mühlenberg 1, D-14476 Potsdam, Germany; tianhengwang@snu.ac.kr, Center for Theoretical Physics, Seoul National University, 1 Gwana-ro, Gwanak-gu, 08826, Seoul, South Korea. § ABSTRACT Computation of spin-resummed observables in post-Minkowskian dynamics typically involves the evaluation of Feynman integrals deformed by an exponential factor, where the exponent is a linear sum of the momenta being integrated. Such integrals can be viewed as tensor integral generating functions, which provide alternative approaches to tensor reduction of Feynman integrals. We develop a systematic method to evaluate tensor integral generating functions using conventional multiloop integration techniques. The spin-resummed aligned-spin eikonal at second post-Minkowskian order is considered as a phenomenologically relevant example where evaluation of tensor integral generating functions is necessary. Systematic integral evaluation for spin-resummed binary dynamics =================================================================== § INTRODUCTION Inspired by quantum field theoretic approaches to gravity <cit.>, modern methods for computing scattering amplitudes have been widely applied to the study of the classical dynamics of binary black hole scattering, from conservative dynamics <cit.> and gravitational bremsstrahlung <cit.> to effects of spin <cit.> (see also refs. <cit.> for related approaches), motivated in part by the detection of gravitational waves by the LIGO-Virgo collaboration <cit.>. The relevant perturbative expansion is the post-Minkowskian (PM) expansion, where general relativistic corrections are added to the unperturbed special relativistic kinematics.
The study of PM dynamics is more than a theoretical interest, as they can be used in building waveform models for gravitational wave detection <cit.>. The typical integrals encountered in the study of PM dynamics can be formulated as Feynman integrals, which can be handled efficiently by modern amplitude techniques such as the integration-by-parts (IBP) reduction <cit.> and the method of differential equations <cit.>. The function space of the master integrals and their associated graph topologies were studied for the conservative scattering dynamics at 4PM and 5PM orders <cit.>. The function space remains practically the same when spin effects are incorporated perturbatively into the non-spinning dynamics. However, the function space becomes completely different when exact spin dependence— or spin resummation— is considered. The motivation for studying spin resummation in binary Kerr dynamics is that gravitational wave physics is becoming a precision science; one potential obstruction for high-precision gravitational wave physics is inaccurate modelling of spin effects in gravitational waveform models <cit.>, which needs to be improved with better understanding of spin effects in gravitational two-body dynamics. The spin-induced multipole moments of black holes can be generated by the Newman-Janis shift <cit.>, where the position of the black hole is complexified and shifted in the imaginary spin direction. The translation operator in the imaginary direction is exp( i α·∇ ) = exp( α· K ), and the same factor appears when the Newman-Janis shift is promoted to a dynamical statement <cit.>. In particular, the typical Feynman integrals encountered in the context of PM dynamics are modified by exponential factors corresponding to translations along the imaginary spin direction. Such integrals can be considered in a more general setting as tensor integral generating functions (TIGFs), where a typical Feynman integrand is deformed by an exponential factor of integration variables. The TIGFs provide an alternative route to tensor reduction of Feynman integrals <cit.>. More importantly, consideration of such integrals allows us to treat the impact parameter space Fourier transform on an equal footing with loop momenta integration, and leads us to new insights on PM dynamics presented in impact parameter space <cit.>. For example, the spurious poles from the intermediate loop integral reduction, and computation of the terms that contribute to the loop integrals but vanish under the impact parameter space transform, can be avoided by performing IBP reduction at the full integrand level. Unfortunately, known multiloop integration techniques are not directly applicable to evaluation of such modified Feynman integrals. We introduce a method to tame the exponential factor and convert it into an extra delta constraint, rendering the integrand suitable for application of conventional multiloop integration techniques. Interestingly, the extra delta constraint leads to significantly different function space of the master integrals when compared to the original master integrals with the same topology. As a phenomenologically important application, the developed method is used to compute the 2PM spin-resummed eikonal for binary Kerr scattering. To simplify the analysis we consider a generalisation of the aligned spin configuration, where the spin vectors are aligned but allowed to have components along the impact parameter. 
The minimal coupling three-point amplitude <cit.> and the spin-resummed heavy-mass effective field theory (HEFT) Compton amplitude <cit.> are used to build the integrand from unitarity cuts <cit.> and heavy-mass/velocity cuts <cit.>. § GENERAL METHODS FOR TENSOR INTEGRAL GENERATING FUNCTIONS TIGFs are obtained from a typical loop integrand as a deformation by an exponential factor, viz. [Although generating functions are usually defined by real exponents, imaginary exponents were used to avoid factors of i when solving for the integrals. Such an analytic continuation is known in the probability literature as characteristic functions. This analytic continuation is allowed since Feynman integrals should be understood as generalised functions which may not be well-defined by conventional definitions of integrals.] ℐ^(α)[𝐲]:= ∫∏_j=1^L d^D K_j e^i(∑_j=1^Lα_j · K_j)(∏_k=r+1^nδ^λ_k-1(𝒟_k))/𝒟_1^λ_1⋯𝒟_r^λ_r , where δ^λ(x) = d^λ/dx^λδ(x), (α) = (α_1^μ , ⋯ , α_L^μ) is the vector of arguments, 𝐲 is the vector of kinematic invariants and 𝒟_i are the inverse propagators; (K + p)^2 - m^2 or 2 (K · p) when eikonalised [The delta constraints can be converted to denominators through the distributional identity 2 π i δ(x) = (x - i0^+)^-1 - (x + i0^+)^-1 and its derivatives, therefore they can be considered as propagator factors.]. In the rest of the letter, we set D=4-2ϵ for dimensional regularisation. Integrals with non-trivial numerator factors can be obtained from TIGFs as derivative operators of α_j^μ acting on them, which is a less-explored tensor reduction method for Feynman integrals <cit.>. This alternative tensor reduction approach may provide more efficient methods to reduce irreducible numerators, which are one of the bottlenecks in evaluating Feynman integrals. Some specialised techniques were developed for their reduction and evaluation, including modification of IBP reduction to incorporate the extra exponential factor <cit.>. We take a different route and develop a systematic method to reduce and evaluate TIGFs using conventional multiloop integration techniques. Our key proposal is converting TIGF integrands (<ref>) into a typical Feynman integrand; we introduce an auxiliary parameter t and Fourier transform the exponential factor into a delta constraint, ℐ^(α)[𝐲] = ∫ dt e^itℐ̃^(α)[t,𝐲] , where ℐ̃^(α)[t,𝐲] is defined as ∫∏_j=1^L d^D K_j δ( ∑_j=1^Lα_j · K_j-t)(∏_k=r+1^sδ^λ_k-1(𝒟_k))/𝒟_1^λ_1⋯𝒟_r^λ_r . ℐ̃^(α)[t,𝐲] is a typical Feynman integrand, and conventional IBP reduction can be applied to reduce it to a set of master integrals ℐ̃_1^(α)[t,𝐲] , ⋯ , ℐ̃_n^(α)[t,𝐲] . The conventional method of evaluating master integrals using differential equations applies ∂_t ℐ̃_i^(α)[t,𝐲] = ∑_j=1^n A_ij(t,𝐲) ℐ̃_j^(α)[t,𝐲] , ∂_yℐ̃_i^(α)[t,𝐲] = ∑_j=1^nB_ij( t,𝐲)ℐ̃_j^(α)[t,𝐲] . An important class of this problem is when the t-dependence factorises from the 𝐲-dependence so that the differential equations (<ref>) decouple, A_ij = A_ij(t) , B_ij = B_ij (𝐲) , and spin-resummed PM dynamics falls into this class. In such a case, the t-dependence can be factored from the 𝐲-dependence, ℐ̃_i^(α)[t,𝐲]= f_i(t) ℐ_i^(α)[𝐲] , and the t Fourier integral of (<ref>) factors from the loop integrals, which can be absorbed by normalisation of the effective master integrals ℐ_i^(α)[𝐲]. The conventional differential equation approach can be applied to the second set of equations in (<ref>) for evaluation of the effective master integrals ℐ_i^(α)[𝐲]. 
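Before specialising to the eikonal, the auxiliary-parameter step above can be illustrated on a one-dimensional toy integral (not taken from the paper): for ∫ dK e^{iαK}/(K²+m²), inserting the identity 1 = ∫ dt δ(αK − t) and performing the K integral first gives Ĩ(t) = (1/|α|)/((t/α)² + m²), and the remaining t Fourier integral reproduces the closed form (π/m)e^{−m|α|}. The snippet below checks both routes with a plain trapezoid rule; the cutoff and parameter values are arbitrary.

```python
import numpy as np

def trap(y, x):
    # plain trapezoid rule, kept explicit to avoid any API assumptions
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2)

m, alpha = 1.3, 0.7                                  # arbitrary test values
K = np.linspace(-5000.0, 5000.0, 2_000_001)          # finite cutoff; tails are O(1/K^2)

# route 1: the exponentially deformed integral directly, int dK e^{i alpha K}/(K^2 + m^2)
direct = trap(np.cos(alpha * K) / (K**2 + m**2), K)  # imaginary part vanishes by symmetry

# route 2: delta constraint first (the K integral), then the t Fourier integral
t = K                                                # reuse the grid for t
I_tilde = (1.0 / abs(alpha)) / ((t / alpha)**2 + m**2)
via_delta = trap(np.cos(t) * I_tilde, t)

print(direct, via_delta, np.pi / m * np.exp(-m * abs(alpha)))   # all three agree closely
```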
An observable is then given as a linear combination of the effective master integrals. In the rest of the letter, we apply the aforementioned techniques to the evaluation of spin-resummed one-loop eikonal phase as a concrete phenomenologically relevant example. § SPIN-RESUMMED 2PM EIKONAL The classical eikonal can be understood as the generator of scattering observables <cit.>, which is computed as the impact-parameter-space Fourier transform of the classical 2 → 2 one-loop amplitude <cit.>. The one-loop HEFT amplitude with classical spin <cit.> is our starting point [baseline=([yshift=-0.8ex]current bounding box.center)]every node=[font=] (a) v_1; [right=1.2cm of a] (f2) [HV]H; [right=1.cm of f2] (c); [above=1.6cm of a](ac)v_2; [right=0.8cm of ac] (ad) [dot]; [right=0.7cm of ad] (f2c) [dot]; [above=1.6cm of c](cc); [above=0.8cm of a] (cutL); [right=2.0cm of cutL] (cutR); [right=0.4cm of ad] (att); [above=0.3cm of att] (cut20); [below=0.3cm of att] (cut21); * (a) – [fermion,thick] (f2)– [fermion,thick] (c), (f2)–[photon,ultra thick,momentum=ℓ_1](ad), (f2)– [photon,ultra thick,momentum'=ℓ_2] (f2c),(ac) – [fermion,thick] (ad)– [fermion,thick] (f2c)– [fermion,thick] (cc), (cutL)–[dashed, red,thick] (cutR), (cut20)–[ red,thick] (cut21) ; M_ HEFT(q,v_1, v_2,a_1,a_2)=-1 2! ∫d^Dℓ_1 (2π)^D-1(δ(2m_2ℓ_1 v_2)ℓ_1^2ℓ_2^2 M_a_2(ℓ_1,v_2) as× M_a_2(ℓ_2,v_2) M_a_1(ℓ_1,ℓ_2,v_2) ) , where ℓ_2=q-ℓ_1. M_a_2 and M_a_1 are the three-point and four-point HEFT amplitudes provided in the appendix. The key ingredient is the HEFT Compton amplitude <cit.>, which— contrary to the minimal coupling Compton amplitude <cit.>— is free of unphysical singularities. However, the amplitude has spurious singularities entering through the entire functions G_1(x_1) G_1(x_2) = sinh(x_1) x_1sinh(x_2) x_2 , G_2(x_1, x_2) = 1 x_2(sinh(x_12) x_12-cosh(x_2) sinh(x_1) x_1) , where x_12 = x_1 + x_2 and x_i=a_1ℓ_i. As can be seen from the graph of the two functions in fig. <ref>, the apparent singularities of (<ref>) do not correspond to actual singularities. Nevertheless, the spurious singularities can be problematic in the evaluation of the integrals, especially when the integrations are performed numerically as the singularities spoil numerical stability. We therefore introduce their integral representations G_1(x_1)G_1(x_2) =∫_0^1 dσ_1 dσ_2 cosh(σ_1x_1) cosh(σ_2x_2) , G_2(x_1,x_2) =∫_0^1dσ_1dσ_2( σ_1 sinh(σ_1 x_1+σ_1 σ_2 x_2)         -sinh(σ_2 x_2) cosh(σ_1 x_1)) , which removes spurious denominators at the price of auxiliary integrations over σ_i. Analogous integral representations are used for G_1 functions appearing in the three-point amplitudes when constructing the integrand (<ref>). The eikonal χ is obtained by transforming the HEFT amplitude to impact parameter space χ = ∫d^Dq (2π)^D-2e^iq bδ(q v_1)δ(q v_2) 4 m_1 m_2 =asdfasdf× M_ HEFT(q, v_1, v_2,a_1, a_2) . We focus on a slight generalisation of the aligned spin configuration of a binary Kerr system; a^μ≡ a_1^μ =ξ a_2^μ and v_1 a_2 = v_2 a_1=0 with a b ≠ 0. Exchanging the order of integration to pull out the σ_i integrals, the eikonal can be schematically written as χ= ∫_0^1∏_j=1^4dσ_j ∑_αℐ^(α) [𝐲] , where y_1 =v_1 v_2 , y_2 = a a - b b , y_3 = b b , y_4 = b a - b b are the Lorentz scalars the eikonal depends on. The integral representation (<ref>) reorganises the original integrand as a sum over integrands of the form (<ref>) with additional numerator factors, which we classify into distinct sectors(α) defined by the exponential factor exp (i q b̂+i ℓ_1 â ). 
(α) ≡(b̂ , â) ≡(b+c'_1(σ) ã_1+c'_2(σ) ã_2 , c'_3(σ) ã_1+c'_4(σ) ã_2) , (𝐲) =(y_1 , ŷ_2 , ŷ_3 , ŷ_4 , σ_1 , σ_2 , σ_3 , σ_4) , (K_1, K_2) = (q , ℓ_1) . The Lorentz scalars ŷ_i are defined similar to (<ref>) with hatted variables b̂, â. The tilded spin vectors ã_j^μ = i a_j^μ are analytic continuation of the real spin(-length) vectors of the black holes a_j^μ. After evaluation, the master integrals can be analytically continued back to real spin. After employing the methods developed in the previous section, a typical integrand for a given sector (α) takes the form ℐ^(α) [𝐲] = ∫_-∞^∞dt e^it∫d^Dq d^Dℓ_1 (2π)^2D-3δ(qb̂ + ℓ_1 â - t) ×δ(q v_1)δ(q v_2)δ(ℓ_1 v_2) 8m_1m^2_2 ℓ_1^2(q-ℓ_1)^2(∑_r=0^2N^(r)(𝐲) (ℓ_1 v_1)^r+N^(3)(𝐲) q^2) , where denominators of the integrand were kept explicit. The relevant topologies are given in fig. <ref>. Each sector can be reduced to master integrals by applying conventional multiloop integration techniques such as IBP reduction. All master integrals satisfy the factorisation condition (<ref>). Moreover, solving the differential equations in t yields power-law t-dependence of the master integrals. All terms resulting from reducing (<ref>) have t-dependence of the form t^j-4ϵϵ , j∈ [0,6] . As remarked, the contributions from the t Fourier transform can be treated as a constant factor. It is remarked that the divergent 1/ϵ term in dimensional regularisation vanishes under the t Fourier transform, therefore the constant factor is finite. An exemplary term in the N^(3) category is ℐ_ex[𝐲]= ∫d^Dq d^Dℓ_1 (2π)^2D-3 e^iq bδ(q v_1)δ(q v_2)δ(ℓ_1 v_2)ℓ_1^2(q-ℓ_1)^2 q^2 ×(16 π G_N)^2 m_1 m_2^2 (ℓ _1 v_1)^2 cosh((1-ξ) a_1 q+2 ξ a_1ℓ _1)/16 . The cosh function splits the integrand into two sectors; sector (1) = (â_1 , b̂_1) = ( 2 ã_1 ξ , ã_1 (1-ξ )+b) and sector (2) = ( â_2 , b̂_2) = ( -2 ã_1 ξ , ã_1 (ξ-1 )+b ). Each sector reduces to three master integrals by IBP reduction, ℐ^(α)_ex[𝐲] =-(16 π G_N)^2π 2m_1 m_2^2 (y_1^2-1) /32 ŷ_2ŷ_3 as×(2ŷ_3(ŷ_2+2 ŷ_4-1) ℐ_1^(α)[𝐲]+2 ℐ_2^(α)[𝐲]+ℐ_3^(α)[𝐲]) , where the master integrals are defined by (<ref>) and the superscript (α) denotes the sectors. (<ref>) becomes ℐ_ex[𝐲]=ℐ^(1)_ex[𝐲]+ℐ^(2)_ex[𝐲] , where each ℐ^(α)_ex [𝐲] is complex but the sum ℐ_ex [𝐲] is real. The integrals in (<ref>) reduce to 320 sectors. Each sector has five master integrals. However, only three master integrals per sector appear in the final result, χ =∫_0^1∏_j=1^4dσ_j ∑_α=1^320(c_1,α(𝐲) ^(α)_1[𝐲]+c_2,α(𝐲) ^(α)_2[𝐲]                             +c_3,α(𝐲) ^(α)_3[𝐲] ) , where the master integrals are ^(α)_1[𝐲] =D-4 t^2D-8J^(α)_1,1,1,1,1,1,0,0 , ^(α)_2[𝐲] =1 t^2D-10J^(α)_2,1,1,1,1,1,0,0 , ^(α)_3[𝐲] =1 t^2D-10J^(α)_1,1,1,1,1,1,0,1 , defined as J^(α)_λ_1,λ_2,λ_3,λ_4,λ_5,λ_6,λ_7,λ_8≡∫d^Dq d^Dℓ_1 (2π)^2D-3δ^λ_6-1(âℓ_1+b̂ q-t) asdfasdf×δ^λ_3-1(ℓ_1 v_2)δ^λ_4-1(q v_1)δ^λ_5-1(q v_2)(ℓ_1^2)^λ_1((q-ℓ_1)^2)^λ_2 (ℓ_1 v_1)^λ_7(q^2)^λ_8 . The numerators N^(0,1,2) of (<ref>) reduce to ^(α)_1,2[𝐲], while the numerators N^(3) of (<ref>) reduce to ^(α)_1,2,3[𝐲]. The integrals ^(α)_1,2[𝐲] correspond to the left topology of fig. <ref>, while the integral ^(α)_3[𝐲] corresponds to the right topology of fig. <ref>. The apparent t-dependence in (<ref>) makes the master integrals ^(α)_1,2,3[𝐲] independent of t. As mentioned previously, the other two master integrals, J^(α)_1,1,1,2,1,1,0,0 and J^(α)_1,1,1,1,1,1,1,0, never appear and their evaluation is unnecessary for evaluating the eikonal. It is also remarked that both c_j,α(𝐲) and ^(α)_j[𝐲] are finite in dimensional regularisation. 
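As a small consistency check on the integrand construction, the integral representations of G_1 and G_2 introduced above can be verified by direct quadrature at any test point; the sketch below does this with a two-dimensional trapezoid rule over (σ_1, σ_2). The grid size and the test values of x_1, x_2 are arbitrary choices.

```python
import numpy as np

def G1(x):
    return np.sinh(x) / x if x != 0 else 1.0          # sinh(x)/x with the x -> 0 limit

def G2(x1, x2):
    x12 = x1 + x2
    return (G1(x12) - np.cosh(x2) * G1(x1)) / x2

npts = 801
s, ds = np.linspace(0.0, 1.0, npts, retstep=True)
w = np.full(npts, ds); w[0] = w[-1] = ds / 2           # 1-d trapezoid weights on [0, 1]
W = np.outer(w, w)
S1, S2 = np.meshgrid(s, s, indexing="ij")

x1, x2 = 0.8, -1.3                                     # arbitrary test point (x2 != 0)

lhs1, rhs1 = G1(x1) * G1(x2), np.sum(W * np.cosh(S1 * x1) * np.cosh(S2 * x2))
lhs2 = G2(x1, x2)
rhs2 = np.sum(W * (S1 * np.sinh(S1 * x1 + S1 * S2 * x2)
                   - np.sinh(S2 * x2) * np.cosh(S1 * x1)))
print(lhs1, rhs1)                                      # the two sides agree to quadrature accuracy
print(lhs2, rhs2)
```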
The final result for the eikonal phase is available in the public repository https://github.com/AmplitudeGravity/KerrEikonal2pm <cit.>. We emphasise that the result contains all-orders-in-spin contributions for the generalised aligned spin configuration with a b≠ 0. § EVALUATION OF THE MASTER INTEGRALS We evaluate the master integrals using the method of differential equations <cit.>. The ŷ_3-dependence of the master integrals can be determined from dimensional analysis, since the only dimensionful variable is ŷ_3. Moreover, the differential operator ∂_y_1 is diagonal in the system of master integrals, and the y_1-dependence of the master integrals can be solved separately. The nontrivial systems of differential equations are given by ŷ_2,4. The two systems are degenerate for the master integrals ℐ_1,2 and they only depend on the combination ŷ_2' = ŷ_2 + 2 ŷ_4. The differential equations over ŷ_2' for ℐ_1,2 are solved by complete elliptic integrals [We adopt 's definition for the elliptic integrals.], ℐ_1[𝐲] =C/√(-ŷ_3)√(y_1^2-1) K(ŷ_2')/π , ℐ_2[𝐲] = - C√(-ŷ_3)/√(y_1^2-1) (ŷ_2'-1) K(ŷ_2') + E( ŷ_2')/π , where the integration constant C = -1/16 π^2 is fixed from the spinless limit ŷ_2' → 0. The differential equations satisfied by ℐ_3 are ∂_ŷ'_2ℐ_3[𝐲]=-1 2 (ŷ'_2-2 ŷ_4) (ŷ'_2+ŷ^2_4 -2ŷ_4)× (ŷ_4^2 ℐ_3[𝐲]-C/π(√(- ŷ_3)) /√(y_1^2-1)(ŷ'_2+2 ŷ_4 (ŷ_4-1)) E(ŷ'_2) +C/π(√(- ŷ_3)) /√(y_1^2-1)(1-ŷ_4) (ŷ'_2-2 ŷ_4) K(ŷ'_2)) , ∂_ŷ_4ℐ_3[𝐲]=1(ŷ'_2-2 ŷ_4) (ŷ'_2+ŷ^2_4 -2ŷ_4)(ŷ_4 (ŷ'_2-ŷ_4) ℐ_3[𝐲] -C (√(- ŷ_3)) /√(y_1^2-1)(ŷ'_2-1) (ŷ'_2-2 ŷ_4) K(ŷ'_2) -C (√(- ŷ_3)) /√(y_1^2-1)(ŷ_4 ŷ'_2+ŷ'_2-2 ŷ_4) E(ŷ'_2)) . The common denominator factor in the two equations implies that the master integral should be understood as an integral over the elliptic curve ^2 = ( ŷ'_2-2z ) (z^2 - 2z + ŷ'_2) , where z is the integration variable to be identified as - ŷ_4 after integration. ℐ_3 [𝐲] is computed by solving (<ref>) at ŷ_4 = 0, which yields the boundary value for the remaining integration over ŷ_4. Carrying out the ŷ_4 integration, we have ℐ_3[𝐲] = C(√(- ŷ_3)) /√(y_1^2-1)√(ŷ'_2+ŷ_4 (ŷ_4-2))/√(ŷ'_2-2 ŷ_4)[ 2E(ŷ'_2)+π/2π =a + ∫ _0^ŷ_4 dz ((1-ŷ'_2) (ŷ'_2-2 z)K(ŷ'_2)/π ( z^2-2z+ŷ'_2) =asdfasdfasdf - (ŷ'_2 z-2z+ŷ'_2) E(ŷ'_2)/π ( z^2-2z+ŷ'_2))] . The z-integral can be reduced by the IBP relations following from the exact differentials d ( ŷ'_2-2 z/) , d ( (ŷ'_2 - 2 z )^2/) . Eliminating integrals with undesired denominators, we arrive at ℐ_3[𝐲] = C√(- ŷ_3)/√(y_1^2-1)[√(ŷ'_2 + ŷ_4^2 - 2 ŷ_4)/π√(ŷ'_2 - 2 ŷ_4)(π-2K(ŷ'_2) 2 - (E(ŷ'_2)-K(ŷ'_2) ) ∫ _0^ŷ_4dz/ - K(ŷ'_2) ∫ _0^ŷ_4zdz/) + E(ŷ'_2)+(1-ŷ_4 ) K(ŷ'_2)/π] , where the integrals can be converted to incomplete elliptic integrals, which can be evaluated to arbitrary numerical precision. The integration constants were fixed by requiring regularity of the spin expansion around â^μ = 0. Since â^μ→ 0 is a singular limit of the underlying curve (<ref>), we expand (<ref>) around ŷ_4 first and then expand in ŷ_2 to obtain the series expansion of ℐ_3. The master integrals ℐ_1,2,3 were checked against brute-force calculations based on the formulas in appendix B of ref. <cit.>, and found to agree up to 𝒪(â^20). § RESULTS The eikonal phase (<ref>) is exact to all orders in spin, and can be expanded in spin before the σ_j-integration for a check against perturbative-in-spin calculations. The spin-expanded σ_j integrand turns out to be polynomial in σ_j, and the auxiliary σ_j integrals can be evaluated exactly. 
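For reference, the closed forms for ℐ_1 and ℐ_2 above are immediate to evaluate numerically. The sketch below transcribes them assuming the elliptic integrals follow the parameter convention K(m), E(m) with m = k² (the convention used by scipy.special.ellipk and ellipe, and presumably by the software named in the stripped footnote), and assuming the overall 1/π in ℐ_2 multiplies the whole bracket, as it does in ℐ_1; the numerical inputs are arbitrary test values.

```python
import numpy as np
from scipy.special import ellipk, ellipe    # complete K(m), E(m), parameter (m = k^2) convention

C = -1.0 / (16 * np.pi**2)

def I1(y1, y3hat, y2p):
    return C / (np.sqrt(-y3hat) * np.sqrt(y1**2 - 1)) * ellipk(y2p) / np.pi

def I2(y1, y3hat, y2p):
    return (-C * np.sqrt(-y3hat) / np.sqrt(y1**2 - 1)
            * ((y2p - 1) * ellipk(y2p) + ellipe(y2p)) / np.pi)

y1, y3hat, y2p = 1.2, -1.0, -0.3            # arbitrary test values (y2p < 1 keeps K, E real)
print(I1(y1, y3hat, y2p), I2(y1, y3hat, y2p))

# spinless limit y2p -> 0: K(0) = E(0) = pi/2, so I2 -> 0 and
# I1 -> C / (2 sqrt(-y3hat) sqrt(y1^2 - 1)), the normalisation used to fix C.
```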
We provide files for evaluating the spin-expanded σ_j integrand in the public repository https://github.com/AmplitudeGravity/KerrEikonal2pm <cit.>. We have checked that (<ref>) expanded to 𝒪(a^8) is consistent with the results of ref. <cit.>. The merit of the eikonal phase (<ref>) is that we can study its full spin dependence quantitatively, since the σ_j integrations can be numerically evaluated with high precision. As an example of numerically studying the eikonal (<ref>), we consider the scattering of a test Kerr black hole on a Schwarzschild background, corresponding to the substitution χ_a_2=0 = χ|_ξ→ 0 , which is presented in fig. <ref>. The parameters y_1=12 10 and y_3=-1 are chosen for the ranges |√(-y_2)| < 0.75 and cos(θ) ∈ [0,0.9], where θ denotes the angle between a^μ and b^μ such that √(-y_2)cos(θ)=y_4. The numerical calculations can be tested against analytic predictions. For example, the 2PM aligned-spin (cos(θ)=0) eikonal is expected to have the singularity ∝ (|b|^2 - |a|^2)^-3/2 based on analytic studies <cit.>. Fig. <ref> shows numerical evaluation of (<ref>) in the aligned-spin configuration plotted against the reference curve β/(|b|^2 - |a|^2)^3/2, where the constant β was fit to (<ref>) at |b| = 1.02 |a| and other unspecified parameters are the same as in fig. <ref>. The two lines on fig. <ref> show a good agreement near the singular point |b| = |a|, having ≲ 5 % relative difference for |b| ≤ 1.04 |a|. This singularity can be interpreted as the dynamics detecting the ring singularity from spin resummation, since the eikonal has a singularity only at |b| = 0 when spin is treated perturbatively. § CONCLUSIONS AND OUTLOOK In this letter, a systematic method to reduce and evaluate TIGFs was introduced and applied to the scattering dynamics of a binary Kerr system in the generalised aligned-spin configuration. The corresponding 2PM eikonal phase (<ref>) was presented in a closed form, which can be studied analytically by expanding to arbitrary orders in spin, or can be studied numerically through numerical integrations for exact spin dependence. An important application of TIGFs is generation of tensor integrals, directly providing an alternative method of performing tensor reduction in Feynman integrals <cit.>. The approach may prove useful for reduction of irreducible numerators, which in many cases becomes a bottleneck when evaluation of multiloop Feynman integrals are involved. In this regard, understanding the criteria for the factorisation of the t-dependence (e.g. from intersection theory <cit.>) will be important, as the factorisation played a crucial role in obtaining closed-form expressions for the TIGFs considered. Another future direction to explore would be the study of function space complexity for TIGFs. We have seen that elliptic integrals appear in effective two-loop (2PM) topologies for TIGFs (see also ref. <cit.> for electromagnetism). It is reasonable to expect that more complex functions (e.g. elliptic multi-polylogarithms <cit.>, integrals over Calabi-Yau manifolds <cit.>) will appear in effective three-loop (3PM) graph topologies when integrals are deformed into TIGFs. Moreover, we can attempt to quantify the increase in transcendentality of the function space when typical Feynman integrals are deformed into TIGFs. This may even lead to an exploration of new geometries that have not yet been associated with Feynman integrals. 
Special cases of TIGFs appear ubiquitously in quantum-field-theory-inspired approaches to classical gravitating systems, especially in the calculation of scattering waveforms <cit.>. It would also be interesting to apply the developed methods to these problems. Coming back to the initial motivation for the study of TIGFs, it would be interesting to apply the developed methods to 3PM all-orders-in-spin dynamics and study spin resummation. Whether we have the exact three-graviton-Kerr five-point amplitude— which is necessary for constructing the 3PM integrand— is less relevant, as long as the conjectured five-point amplitude correctly captures features of the dynamical Newman-Janis shift; we expect the singularity structures of the binary Kerr dynamics to be governed by the dynamical Newman-Janis shift, and correct singularity structures are the most important when we attempt to resum the perturbative two-body dynamics <cit.>. Compared to the 2PM dynamics studied in this letter, the spin-resummed 3PM dynamics is qualitatively different in that it is the first order where next-to-leading-order effects in the mass-ratio expansion enters into the dynamics <cit.>, thereby including the first beyond background-probe limit effects. The insights gained from studying singularity structures of spin-resummed binary Kerr dynamics may motivate new resummation schemes for spinning binary dynamics, providing a more accurate incorporation of spin effects in waveform models used by gravitational wave observatories, potentially having far-reaching consequences for astrophysics and multi-messenger astronomy. § ACKNOWLEDGEMENTS It is a pleasure to thank Emil Bjerrum-Bohr, Andreas Brandhuber, Graham Brown, Marcos Skowronek, Zhengwen Liu, Roger Morales, Gabriele Travaglini for interesting conversations. JWK would like to thank Stefano De Angelis and Fei Teng for stimulating discussions. The authors would like to thank Bo Feng, Andres Luna, and Roger Morales for comments on the draft. GC has received funding from the European Union's Horizon 2020 research and innovation program under the Marie Skłodowska-Curie grant agreement No. 847523 “INTERACTIONS”. TW is supported by the NRF grant 2021R1A2C2012350. apsrev4-1 § APPENDIX: THE COMPTON AMPLITUDE The three point amplitude with one graviton momentum p_1^μ and classical black hole spin a^μ <cit.> is M_3(p_1 a)= √(32 π G_N) (ε_1)(w_1ε_1) , where p̅^μ=m v^μ denotes massive particle's momentum, w_1^μ cosh(p_1 a) ^μ-isinh(p_1 a)/p_1 a(p_1 S)^μ , and (p_1 S)^μ = p_1νS^νμ. The S^μν denotes the spin tensor S^μν=-ϵ^μνρσp̅_ρ a_σ. 
The Compton amplitude with graviton momenta p_1,2 is constructed from the double copy <cit.>, kinematic Hopf algebra <cit.>, and classical spin bootstrap <cit.>, M_4 32 π G_N=-(p̅ F_1 F_2p̅/p_2p̅)(w_1 F_1 F_2 w_2/2(p_1 p_2) (p_1p̅)-p_1p̅-p_2p̅/4(p_1 p_2)(p_1p̅)(i G_2(x_1,x_2) (a F_1 F_2 S p_2)+i G_2(x_1,x_2) (a F_2 F_1 S p_1) +i G_1(x_12) tr(F_1 S F_2)+ G_1(x_1) G_1(x_2) ( (a F_1p̅) (a F_2 p_1)-(a F_1 p_2) (a F_2p̅) -p_2p̅-p_1p̅/2 (a F_1 F_2 a)))) +((∂_x_1-∂_x_2)G_1(x_1) G_1(x_2)) 4(p̅ p_1) (p̅ p_2)(p̅ p_2(p̅^2 (a F_1 F_2 a) (a F_2 F_1p̅) +a^2 (p̅_4 F_1 F_2p̅) (a F_1 F_2p̅))- (1↔ 2)) +(i(∂_x_1-∂_x_2)G_2(x_1,x_2) 4(p̅ p_1) (p̅ p_2)) ((p̅ p_2) (a F_2 F_1p̅) ((a F_2p̅) (a_1p̅)-(a F_1p̅) (a_2p̅))+(1↔ 2)) +((∂_x_1-∂_x_2)^2 2!G_1(x_1)G_1(x_2))((a F_1p̅) (a F_2p̅) (a F_1 F_2 a)-a^2 2 ((a F_1 F_2 p) (a F_2 F_1p̅) - (a F_1 F_2 a) (p̅ F_1 F_2p̅))) +(i(∂_x_1-∂_x_2)^2 2!G_2(x_1,x_2))( -1 2((a F_1 F_2 a) (a F_2p̅) (a_1p̅)-(1↔ 2))) , where x_i=a p_i, F_i^μν= p^μ_i^ν_i-^μ_ip^ν_i, and F^μν= 1/2ϵ^μνρσ F_ρσ. The differences between this Compton amplitude and other proposals <cit.> stem from the differences in their respective formalisms and assumptions <cit.>, which are inherited by the one-loop amplitude <cit.>. The recursion relation for the integral representation of the G functions is G_1(x_1) =∫_0^1 dσ_1 cosh(σ_1x_1) , G_2(x_1,x_2) =∫_0^1dσ_2[∂_x_1G_1(x_1+σ_2 x_2)          -sinh(σ_2 x_2) cosh(σ_1 x_1)] , G_r(x_1, ..., x_r) =∫_0^1 dσ_r[∂_x_1G_r-1(x_1+σ_r x_r, x_2,..., x_r-1) -G_r-1(x_1, ... , x_r-1)sinh(σ_rx_r)] .
http://arxiv.org/abs/2406.18672v1
20240626181910
A simple and improved algorithm for noisy, convex, zeroth-order optimisation
[ "Alexandra Carpentier" ]
math.OC
[ "math.OC", "cs.LG", "stat.ML" ]
A simple and improved algorithm for noisy, convex, zeroth-order optimisation Alexandra Carpentier ============================================================================================= § ABSTRACT In this paper, we study the problem of noisy, convex, zeroth-order optimisation of a function f over a bounded convex set 𝒳̅⊂ℝ^d. Given a budget n of noisy queries to the function f that can be allocated sequentially and adaptively, our aim is to construct an algorithm that returns a point x̂∈𝒳̅ such that f(x̂) is as small as possible. We provide a conceptually simple method inspired by the textbook center of gravity method, but adapted to the noisy and zeroth-order setting. We prove that this method is such that f(x̂) - min_x∈𝒳̅ f(x) is of smaller order than d^2/√(n), up to poly-logarithmic terms. This slightly improves upon the existing literature, where, to the best of our knowledge, the best known rate, obtained in <cit.>, is of order d^2.5/√(n), albeit for a more challenging problem. Our main contribution is however conceptual, as we believe that our algorithm and its analysis bring novel ideas and are significantly simpler than existing approaches. § INTRODUCTION We consider in this paper the setting of convex noisy zeroth-order optimisation. For d≥ 1, consider a bounded convex set 𝒳̅⊂ℝ^d with non-zero volume, and consider a convex function f:𝒳̅→ [0,1]. We consider a sequential setting with fixed horizon n ∈ℕ^*. At each time t ≤ n, the learner chooses a point x_t∈𝒳̅ and receives a noisy observation y_t∈ [0,1] such that 𝔼[y_t|x_t = x] = f(x), and such that y_t, conditionally on x_t, is independent of the past observations. In this work, we study the problem of optimising the function f in the sequential game described above: after the budget n has been fully used by the learner, she has to predict a point x̂ - based on all her observations (x_t, y_t)_t≤ n - and her aim is to estimate the minimum of the function f. Her performance for this task will be measured through the following (simple) regret f(x̂) - inf_x∈𝒳̅ f(x), namely the difference between the true infimum of f and f evaluated at x̂. This setting, known as convex noisy zeroth-order optimisation, is related to two popular settings: first-order optimisation - where the learner has access to noisy evaluations of the (sub-)gradient of f - and noiseless zeroth-order optimisation - where the evaluations of f are not corrupted by noise. We refer the reader to <cit.>, among others, for books and surveys on these topics. Unfortunately, a naive application of methods crafted for the two aforementioned settings to the problem of noisy zeroth-order optimisation typically provides poor results, as the noise present in the evaluations of the function significantly perturbs the learning process and, e.g., makes attempts at computing (sub-)gradients of f difficult - see e.g. <cit.> for a precise discussion on this topic. In the case where d=1, optimal algorithms for this problem have however been known for a long time - see <cit.> for a survey - and are related to dichotomic search. The optimal regret in this case is of order n^-1/2 up to polylogarithmic terms. An important question was then to extend this to the higher-dimensional case; and while it is relatively simple to craft algorithms whose regret decays, up to logarithmic terms, as exp(cd)n^-1/2 where c>0 is a universal constant, an important question that remained open for a long time was whether the minimax regret is exponential in the dimension or not.
A first ground-breaking work in this topic is to be found in <cit.>, where they provide a complex algorithm whose regret can be bounded uniformly, with high probability, as poly(d)/√(n), proving that it is possible to have an algorithm whose regret depends actually only polynomially on d. This gave rise to a sequence of works, mostly in the related, more challenging setting where one aims at minimising the cumulative regret[In these works, the aim is to minimise the sum of collected samples - i.e. sample as often as possible close to the minimum. They also often consider the challenging adversarial setting. Note that upper bounds in this setting morally yield upper bounds for our simper setting.] - see e.g. <cit.>. The exponent of the polynomial in d has been successively reduced through this stream of works. The most actual algorithm and bound - to the best of our knowledge - is in <cit.>, and for a more challenging problem (cumulative regret, adversarial setting). However their results would translate in our setting in a regret of order (up to logarithmic terms) d^2.5/√(n). This has to be compared to the best lower bound, derived for this problem, which is of order d/√(n), and which is proven over the smaller class of linear functions <cit.>. This highlights the fact that a gap remains in this setting. In parallel, another stream of literature has been devoted to studying the effect of additional shape constraints, in particular strong convexity and smoothness - see e.g. <cit.> - under which the faster regret of order d^1.5/√(n), is achievable. Note however that strong convexity is a very strong assumption that has important consequences - in particular, when combined to a smoothness assumption, it essentially implies that the shape of the level sets of f is close to a ball. To complement this short litterature review, see rather <cit.> for an excellent very recent survey on these topics - see in particular <cit.> for a recent overview of the state of the art in these problems. In this paper, we provide a simple algorithm for the problem described above - solely under the additional assumption that the minimum of f on 𝒳̅ is not too close to the border of 𝒳̅. We prove that with high probability and up to polylogarithmic terms depending on the probability, the budget, the dimension and the diameter of 𝒳̅, the regret is uniformly bounded as d^2/√(n). This slightly improves over the best known bound for this problem[Yet does not answer the open question on what is the minimax rate in this setting]. The main strength of this work, though, is the conceptual simplicity of the proposed algorithm, which contrasts with the complexity of existing approaches, and also its simple analysis. Indeed, our algorithm is an adaptation of the textbook center of gravity method <cit.>, namely a specific kind of dichotomic search, combined with an estimator of the gradient on a well-chosen proxy of f, at a well chosen point. In Section <ref>, we present additional notations, as well as some preliminary results regarding these proxies of f, and also on estimating their values and gradients. In Section <ref>, we provide the main algorithm, and the upper bound on its regret. All proofs are in the appendix, and are significantly commented for clarity. § PRELIMINARY RESULTS AND NOTATIONS Write (e_1,…, e_d) for the canonical basis of ℝ^d. Write also for any Borelian set 𝒮⊂ℝ^d, vol(𝒮) for the volume of this set (i.e. its measure according to the Lebesgues measure), and conv(𝒮) for its convex hull. 
Let p≥ 1, for R ≥ 0 and x∈ℝ^d, write 𝔹_p(x,R) for the d-th dimensional l_p ball of radius R and center x. We also write 𝔹_2(R) = 𝔹_2(0,R), and 𝕊_2(R) for the l_2 sphere of center 0 and radius R. For technical reasons, we will extend the definition of f over ℝ^d, and write that for x ∉𝒳̅, f(x) = +∞ - and we state by convention that when we sample a point x_t ∉𝒳̅, we obtain y_t = +∞. We will say by convention that f is convex on ℝ^d, as it is convex on 𝒳̅, and prolongated by +∞ outside of 𝒳̅. We also state the following mild assumption on the function f, which implies that the minimum of f cannot be too close to the border of 𝒳̅. Let x^* be a minimum of f on 𝒳̅ and write f(x^*) = f^*. Assume that x^* is such that there exists r >0 such that 𝔹_2(x^*,r) ⊂𝒳̅. In what follows, we will consider some well-chosen proxies of f which we will use in our algorithm. These proxies will be such that one can estimate in a "natural" way these proxies, as well as their gradients. We will study conditions under which these proxies have good properties. We follow here the natural idea - introduced in <cit.> to the best of our knowledge for zeroth-order optimisation, and studied more generally in <cit.> - of considering a proxy of f through smoothing in a neighborhood around each point. We will however adapt this neighborhood to some ambient convex set, as discussed below - and this adaptation is key for our algorithm later. In what follows, we first describe the proxies of f that we will consider, and provide a condition under which the gradients of these proxies are informative regarding f itself. We then explain how we can estimate these proxies and their gradient through noisy evaluations of f. §.§ Smoothed functional notations and results on smoothed convex functions Consider a convex subspace 𝒳⊂𝒳̅, of non-zero volume. We can define its barycenter μ_𝒳 = 𝔼_X ∼𝒰_𝒳 X, and its variance-covariance matrix Σ_𝒳 = 𝕍_X ∼𝒰_𝒳 X. Since 𝒳 has non-zero volume note that Σ_𝒳 is invertible. Write F_𝒳 for the linear transformation F_𝒳: x →1/√(d)Σ_𝒳^-1/2(x - μ_𝒳). Note that the convex set 𝒵^𝒳 = F_𝒳 (𝒳) is in isotropic position renormalised by d^-1/2. Write also 𝒵̅^𝒳 = F_𝒳(𝒳̅), and z^* = F_𝒳 (x^*). Define for any z∈ℝ^d g^𝒳 (z) = f(F^-1_𝒳(z)) =f(√(d)Σ_𝒳^1/2(z+μ_𝒳)). Note that g^𝒳 is convex on ℝ^d and that also in particular the function f is the same up to a linear transformation than the function g^𝒳 - and this linear transformation transforms 𝒳̅ in 𝒵̅^𝒳 and x^* into (z^*)^𝒳. When no ambiguity arises, we write g for g^𝒳, z^* for (z^*)^𝒳, 𝒵 for 𝒵^𝒳 and 𝒵̅ for 𝒵̅^𝒳 - and note that g(y^*) = f^*. Define for c> 0, z∈ℝ^d g_c^𝒳 (z) = 𝔼_Z∼𝒰_𝔹_2(c)g(z+Z), with the convention g_0^𝒳 (.) = g^𝒳. Again when no ambiguity arises, we write g_c for g_c^𝒳. Note that g_c is convex on ℝ^d, and that g_c ≥ g_c' for any 0 ≤ c' ≤ c. Note also that for c>0, g_c is differentiable on ℝ^d, and that by Stoke's theorem, for any z∈ℝ^d: ∇ g_c(y) = d/c^2𝔼_Z∼𝒰_𝕊_2(c)[Zg(y+Z)], see <cit.> for a precise reference. A fundamental property of convex functions is that, for any z∈ℝ^d and any sub-gradient ∇ g(z) at this point, if g(z)) - g(z̃) is large, the sub-gradient correlates significantly with z-z̃. Namely ⟨∇ g(z), z - z̃⟩≥ g(z)) - g(z̃). The following lemma is a simple, yet key result for this paper, and extends this property to the smoothed function g_c - namely, that if g(z)) - g(z̃) is large, the sub-gradient ∇ g_c(z) correlates significantly with z-z̃ - in fact it holds under the relaxed condition that g_c(z)) - g(z̃) is large. Let c>0 and z,z̃∈ℝ^d. 
If g_2c (z) - g_c(z) ≤ 2^-2[g_c(z)) - g(z̃)] ⟨∇ g_c(z), z - z̃⟩≥3/4[g_c(z)) - g(z̃)]. The proof of this lemma is in Appendix <ref>. It implies in particular that if g_2c (z) - g(z) ≤ 2^-2[g(z)) - f^*] - i.e. if the distance between the proxy g_2c(z) and the function g(z) is of smaller order than the optimality gap of g(z) (compared to the minimum f^* of g), then the gradient of the proxy is interesting, namely ∇ g_c(z) is correlated to z - z^*, with correlation larger than said optimality gap. In other words, the properties of ∇ g_c(z) are similar to those of a sub-gradient ∇ g(z), when it comes to the minimal correlation to the direction of the minimum. §.§ Estimators of the function and of the gradient of smoothed convex functions Consider now z ∈𝒳̅ and resp. Z_1^(b),…, Z_N^(b)∼_i.i.d.𝒰_𝔹_2(c) and Z_1^(s),…, Z_N^(s)∼_i.i.d.𝒰_𝕊_2(c) for points sampled respectively uniformly in the ball of center 0 and radius c, and in the sphere of center 0 and radius c. Assume that we observe independent noisy observations of the function f at the points F^-1_𝒳(z+Z_1^(k)),…, F^-1_𝒳(z+Z_N^(k)) where k ∈{b,s} - i.e. equivalently we observe independent noisy observations of the function g at the points z+Z_1^(k),…, z+Z_N^(k) - that we write (ỹ_t^(k))_t≤ N, where the ỹ_t^(k)∈ [0,1] are such that 𝔼[ỹ_t^(k)|F^-1_𝒳(z+Z_t^(k)) = x] = f(x) and such that ỹ_t^(k) knowing F^-1_𝒳(z+Z_t^(k)) is independent of the past observations. Define: ĝ_c(z) = 1/N∑_i=1^Nỹ_i^(b), and ∇ g_c(z) = d/c^2N∑_i=1^N Z_i^(s)ỹ_i^(s). The following lemma provides a concentration result for both the estimator of the function, and of the estimator of the gradient. Let c≥ 0, z∈𝒵̅ such that 𝔹_2(z, c) ⊂𝒵̅ and u ∈ℝ^d. With probability larger than 1-δ |ĝ_c(z) - g_c(z)| ≤(1/δ)/√(N), and if N ≥ dlog(2/δ) |⟨∇ g_c(z) - ∇ g_c(z), u⟩| ≤(1/δ) u_2 √(d)/c√(N), where (1/δ) = 4 √(log(2/δ)). The proof of this lemma is in Appendix <ref>, and is based on very standard concentration arguments. The study of related estimators was first formulated to the best of our knowledge in <cit.>, and then refined in <cit.> (among others). Note however that in these works, the proximity of these estimators to g or its gradient is controlled, under smoothness assumptions. This is not the approach that we take here, as we do not work under additional smoothness assumptions - so that the proxies g_c can be arbitrarily far from g and its gradient in many points. § ALGORITHM Our algorithm is an adaptation of the center of gravity method to the noisy, gradientless case. In the classical center of gravity method, we iteratively refine the convex set 𝒳̅ at each step. More precisely, assume that we are given a convex set 𝒳⊂𝒳̅ at a given iteration. We refine it by computing the gradient ∇(f)(x) of f at the center of gravity x of 𝒳, and updating 𝒳 to 𝒳∩{u: ⟨∇ f(x), u-x⟩≤ 0}. This method is efficient as * by convexity of f, x^* remains in 𝒳 for any iteration, and * a fundamental property of convex sets is that if we separate them in two parts by any hyperplane going through their center of gravity, both part of the convex set have approximately the same volume. In our case, we do not have access to ∇(f), but only to noisy evaluations of f. The idea behind our method is to estimate instead the gradient of another function - namely, of g_c^𝒳 for a well chosen c, i.e. a linear transformation of f that is also smoothed. We have seen in Lemma <ref> that this task can be performed efficiently. However, this gradient might be quite different from any sub-gradient of f. 
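As a concrete illustration of this estimation step, the following small numerical sketch may help (Python with NumPy; the helper names, the toy quadratic objective and all constants are our own illustrative assumptions, not part of the paper, and the linear map F_𝒳 is taken to be the identity for the check). It implements the two estimators of the previous subsection: the ball average for ĝ_c(z), and the sphere-weighted average (d/(c^2 N)) ∑_i Z_i^(s) ỹ_i^(s) for the gradient of the smoothed proxy, computed from noisy, bounded zeroth-order evaluations.

import numpy as np

def sample_sphere(n, d, c, rng):
    # n points drawn uniformly on the l2 sphere of radius c in R^d
    g = rng.standard_normal((n, d))
    return c * g / np.linalg.norm(g, axis=1, keepdims=True)

def sample_ball(n, d, c, rng):
    # n points drawn uniformly in the l2 ball of radius c in R^d
    directions = sample_sphere(n, d, 1.0, rng)
    radii = rng.random(n) ** (1.0 / d)
    return c * directions * radii[:, None]

def estimate_value_and_gradient(noisy_f, z, c, N, rng):
    # ball average estimating g_c(z), and (d / c^2) * mean(Z_i * y_i) over the
    # sphere estimating the gradient of the smoothed proxy at z
    d = len(z)
    Zb = sample_ball(N, d, c, rng)
    g_hat = float(np.mean([noisy_f(z + w) for w in Zb]))
    Zs = sample_sphere(N, d, c, rng)
    ys = np.array([noisy_f(z + w) for w in Zs])
    grad_hat = (d / c ** 2) * np.mean(Zs * ys[:, None], axis=0)
    return g_hat, grad_hat

# toy sanity check on a convex quadratic with bounded noise (purely illustrative)
rng = np.random.default_rng(0)
d, c, N = 5, 0.3, 20000
f = lambda x: 0.25 * float(np.sum((x - 0.1) ** 2))
noisy_f = lambda x: float(np.clip(f(x) + 0.05 * (rng.random() - 0.5), 0.0, 1.0))
print(estimate_value_and_gradient(noisy_f, np.zeros(d), c, N, rng))

On a quadratic objective the smoothing only shifts the value by a constant, so the printed gradient estimate should be close to the true gradient at z; this is a sanity check of the formulas, not a substitute for the concentration statement above.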
We have however seen in Lemma <ref> that under the condition that g_2c(0)-g_c(0) is small enough, the gradient of g_c has the nice property that it correlates positively to F_𝒳(x)-F_𝒳(x̃), for any x̃ such that f(x̃) is small enough. So that F_𝒳^-1(∇ g_c(x)) could be used instead of the gradient of f in the center of gravity method. The only problem remaining that the center of gravity is not necessarily such that g_2c(0)-g_c(0) is small. In order to circumvent this, we find another point z that is such that it has this property, and is also such that z_2 is small enough so that cutting 𝒳 in F_𝒳^-1(z) enjoys provides similar volume guarantees than cutting it in x. The main algorithm described below in Figure <ref> is therefore using two recursive sub-routines: * it first calls an iterative sub-routine described in Figure <ref> that cuts the current set 𝒳 in two, until the budget is elapsed, * this routine calls another sub-routine described in Figure <ref>, which finds a good cutting point, as explained above. §.§ Part 1: finding a cutting point We first describe the sub-routine that identifies a good candidate for a cutting point. This subroutine acts in the linear transformation 𝒵^𝒳 of 𝒳 through F_𝒳. Starting from z_0, we want to find using a budget of order N - up to multiplicative polylog terms - a point z such that: * either g_2c(z) - g_z( x) ≤ 2^-3 (g_c(z) - f^*), or g(z) is small (say, smaller than 1/√(N) up to multiplicative polylog terms) * z - z_0_2 is of smaller order than c up to multiplicative polylog terms, provided that such a point exists. In this way, we ensure that this point would satisfy the condition of Lemma <ref>, or be such that g(z) is small enough, and also that it is not too far from z_0. Assume that we are given a set 𝒳 and c >0. For N≥ 1, let I_N = log_2(N)+1 and (N) = log(2N)/log(17/16) +1. The recursive algorithm that performs this takes as parameters a candidate for a cutting point z ∈ℝ^d, the current set to be cut 𝒳⊂ℝ^d, a smoothness parameter c>0, a basis number of samples that will be our approximate final budget up to polylog terms N ∈ℕ, a counting of the number of recursive rounds performed s ≥ 0, and a confidence parameter δ>0. During each run, the algorithm either returns the final cutting point z∈ℝ^d, as well as an estimator of g(z) by ĝ_z), or calls itself recursively. Note that this subroutine will require to sample the function f and as it is typically called by another algorithm which operates based on a total budget n, as soon as this budget is elapsed, the algorithm terminates returning the current (z,ĝ_z). It proceeds in the following steps. * It first sample the function f in F_𝒳^-1(z) for N times and estimate in this way g^𝒳(z) by ĝ_z as described in Equation (<ref>). * For all integer i ≤ I_N, sample 2^i points distributed as z+𝒰_𝔹_2(2c), and write (z_j^(i))_i ≤ I_N, j ≤ 2^i for these points. Sample 2^-iN/i^2 times the function f at F_𝒳^-1(z_j^(i)) and estimate in this way g^𝒳(z_j^(i)) by ĝ_z_j^(i) as described in Equation (<ref>). * If there exists z_j^i such that ĝ_z_j^(i) - ĝ_z ≥(17/16)^s/16N+4(2^i i^2 (N)/δ) √(i^2 2^i/N) then call (z_j^i, 𝒳, c,N, s+1, δ). Otherwise return (z,ĝ_z). In this way, we evaluate whether, in a radius of 2c around z there is a significantly large set of points such that g evaluated in these points is large - i.e. exponentially growing with the number of iterations s. If this is the case, we identify one of these points, and propose it as next barycentric candidate. 
Otherwise, we identify z as a good candidate and return it. The full algorithm is summarized in Figure <ref> §.§ Part 2: routine for effectively cutting the space We now describe the subroutine that iteratively cuts the space, taking as parameter a convex set 𝒳⊂𝒳̅. It also maintains a current estimation x̂ of the minimum. It updates these to 𝒳', x̂'. We would like it to satisfy that with high probability: * the volume of 𝒳' is a fraction of the volume of 𝒳 * either a small ball around the true minimum is in 𝒳', or the current estimator of the minimum x̂' is already very good. Set = nlog(10/9) /4d log(nddiam(𝒳̅)/r), = 5n/, = /(4) where[Looking at the definition of (), it is clear that such , exist and is of order and is of order log()).] = (), and c = 1/(8e√(d)). We define the recursive algorithm taking as parameters a candidate set 𝒳⊂ℝ^d, a candidate estimator of the minimum of f by x̂∈ℝ^d, an estimate of the value of f at this point f̂∈ℝ, and a probability δ>0. During each run, the algorithm calls itself recursively. Note that this subroutine will require to sample the function f and as it is typically called by another algorithm which operates based on a total budget n, as soon as this budget is elapsed, the algorithm terminates returning the current x̂. It proceeds in the following steps. * Run (0,𝒳, c,,0,δ) and collect (z,ĝ_z) * If ĝ_z ≤f̂ set x̂' = F^-1_𝒳(z) and f̂' = ĝ_z, otherwise set x̂' = x̂ and f̂' = f̂ * Compute an estimator ∇ g_c of ∇ g^𝒳_c(x) using samples, as described in Equation (<ref>). * Set 𝒳' = 𝒳∩ F_𝒳^-1({u: ⟨ u-z, ∇ g_c⟩≤ 0 }) * Run (𝒳',f̂', x̂',δ) This follows the idea of the center of gravity method, using a well-chosen cutting point returned by and cutting then according to the gradient of a smoothed version of f, and continuing recursively. The full algorithm is summarised in Figure <ref>. §.§ Part 3: final algorithm The main algorithm is finally launched with a total budget n and a confidence parameter δ>0, and returns an estimator x̂ of the minimum. It is basically an application of on a reasonable initialisation, and proceeds in the following steps. * we first sample times the function f at μ_𝒳̅) and compute an estimator f̂ of f(μ_𝒳̅)) as in Equation (<ref>) - recalling that f(μ_𝒳̅)) = g(0). * we apply (𝒳̅, f̂, μ_𝒳̅, δ)) and retrieve x̂ when the budget is elapsed. * we return x̂ This algorithm is summarised in Figure <ref>. The following theorem holds for the output of . Assume that Assumptions <ref> holds. The algorithm launched with a total budget n and a confidence parameter δ returns x̂ that is such that with probability larger than 1-δ: f(x̂) - f^* ≤[2^16(/δ)log(2/δ) 1/√()] [32(10d/δ) d/c√()] (8/n) [d log(2/δ)/] ≤ cpolylog(nddiam(𝒳̅)/r)^α×d^2/√(n)log(1/δ)^3/2, where c,α>0 are two absolute constants (independent on f,𝒳̅, n,d,δ). This theorem is proved in Subsection <ref> and its proof is commented and explained therein. The regret depends only logarithmically on diam(𝒳̅) and on the diameter r of a ball centered around the minimum and contained in 𝒳̅ - which is not surprising and already observed in past works. Up to logarithmic terms, our regret here is of order d^2/√(n) which slightly improves with respect to an adaptation of the best known bound in <cit.> - which is derived for the more challenging problem of adversarial minimisation of the cumulative regret[We believe that our algorithm can be easily modified to accommodate cumulative regret in the stochastic case, and have a cumulative regret of order d^2√(n). 
We however do not think that it could be easily adapted to the adversarial case.], but which could translate in our setting as being of order d^2.5/√(n). Beyond this slight improvement, the main strength of our approach is in terms of our algorithm and proof technique, which are - we believe - significantly simpler than existing results[However, while our algorithm is simple conceptually, it is extensive computationally as it requires an (approximate) computation of barycenters of successive convex sets, which is typically very costly.]. We hope that these techniques could maybe be refined to develop a tighter understanding of this problem, and evolve toward understanding the minimax regret in this problem. Acknowledgements. We would like to thank very warmly Evgenii Chzhen, Christophe Giraud, and Nicolas Verzelen for many insightful discussions on this problem, for their valuable opinion, and for their support without which this work would not have been written. This work is partially supported by the Deutsche Forschungsgemeinschaft (DFG) CRC 1294 'Data Assimilation', Project A03, by the DFG Forschungsgruppe FOR 5381 "Mathematical Statistics in the Information Age - Statistical Efficiency and Computational Tractability", Project TP 02, by the Agence Nationale de la Recherche (ANR) and the DFG on the French-German PRCI ANR ASCAI CA 1488/4-1 "Aktive und Batch-Segmentierung, Clustering und Seriation: Grundlagen der KI". plain § PROOFS OF THE RESULTS IN THIS PAPER §.§ Proof of Theorem <ref> Assume first that ≤ d log(2/δ). Then by definition of it means that 1 ≤ d log(2/δ)/, so that the bound in Theorem <ref> is trivially satisfied for any x̂∈𝒳̅. From now on, we therefore restrict to the converse case where ≥ dlog(2/δ)/ - so that the second part of Lemma <ref> can be applied to gradients constructed with points, as we do in our algorithm. Step 1: Definition of a near-optimal set and lower bound on its volume. Write 𝒳^* = {x^*, x^*+re_i/n, x^*-re_i/n, i≤ d}. We first state a lemma ensuring that under Assumption Assumption <ref>, the convex hull of 𝒳^* is in 𝒳̅, and that the volume ration between this convex hull and the volume of 𝒳̅ is lower bounded. Assume that Assumption <ref> holds. It holds that 𝒳^* ⊂conv(𝒳^*)⊂𝒳̅, and vol(conv(𝒳^*))/vol(𝒳̅)≥[r/nddiam(𝒳̅]^d. Note that by convexity of f and by definition of 𝒳^*, we have that for any u∈conv(𝒳^*), f(u) - f^* ≤ 1/n. The above lemma lower bounds the volume ratio vol(conv(𝒳^*))/vol(𝒳̅). Step 2: Results on . The following result holds for algorithm . Assume that 𝔹_2(z_0, 2 c) ⊂𝒵̅^𝒳. With probability larger than 1 - 4δ: (z_0,𝒳, c,N,0,δ) returns z such that * either g_2c(z) - g_c( z) ≤ 2^-3 (g_c( z) - f^*), or g( z) - f^* ≤ 2^15(/δ)log(2/δ) 1/√(N), * |g( z) - ĝ_z| ≤(/δ)/√(N), * z - z_0_2 ≤ 2 c, * the total budget T_ used to find z is smaller than 4 N, so that N ≤ T_≤ 4 N. The main idea behind this result is that on a high probability event: * if a point z_j^(i) is selected for being a candidate for a cutting point, then it means that g(z_j^(i)) is larger than a quantity growing exponentially with the number of iterations s. As the range of g is bounded on 𝒵^𝒳, this means that the number of recursive calls to should be logarithmically bounded - hence the bound on z - z_0_2 and the bound on the number of samples used. 
* if none of the z_j^(i) is selected for being a candidate for a cutting point, then it either means that (i) they are all small, and as they are representative of the average value of g on 𝔹_2(z,2c), then g_2c(z) will be small enough to satisfy our condition in Lemma <ref>, or (ii) that g(z) is already very small. Step 3: Results on a single run of . We now state the following lemma that describes the high probability behaviour of , provided that it is given a reasonable set of parameters. Set B = [2^16(/δ)log(2/δ) 1/√()] [32(2d/δ) d/c√()] (8/n). Assume that is given a convex set 𝒳⊂𝒳̅, x̂∈𝒳̅, f̂∈ℝ, δ>0 such that: * |f(x̂) - f̂| ≤( /δ)/√() * either 𝒳^* ⊂𝒳, or f (x̂) - f^* ≤ B. There exists an event of probability larger than 1-5δ such that * 𝒳'⊂𝒳 is convex * |f(x̂') - f̂'| ≤( /δ)/√(), * either [ 𝒳^* ⊂𝒳' ⊂𝒳 and vol(𝒳') ≤9/10vol(𝒳') ], or f (x̂') - f^* ≤ B, * the total budget T_ used to run until the next recursive call of is such that + ≤ T_≤ 4 +. This lemma ensures that, provided that is initialised properly, the convex set 𝒳' obtained after running satisfies * either it contains 𝒳^*, and its volume is a fraction of the volume of 𝒳, * or f measured at the current estimator of the minimum x̂' is already quite small. The idea behind the proof of this lemma is that whenever f(x̂)- f^* is not too small, then by Proposition <ref>, will return with high probability a cutting point z that satisfies the requirements in Lemma <ref> - so that ∇ g_c(z) is negatively correlated with x^* - z, and can therefore be used to cut the space 𝒳. Also by Proposition <ref>, with high probability z is such that z_2 is small so that cutting the space according to this approximate center of gravity still preserves the nice property about exponentially fast volume reduction. Step 4: Induction on several runs of . Based on this lemma, we proceed by induction over the repeated recursive runs of after being called by , conditioning over the high probability event of Lemma <ref> where the condition for the next run are ensured. Our induction hypothesis H_t is: on an event ξ_t of probability larger than 1-5tδ, if is called for the t time, it takes as parameter a convex set 𝒳⊂𝒳̅, x̂∈𝒳̅, f̂∈ℝ such that: * |f(x̂) - f̂| ≤( /δ)/√() * either 𝒳^* ⊂𝒳, or f (x̂) - f^* ≤ B. * the total budget n_t used up to the t-th call of is such that (t-1)+ t≤ n_t ≤ 4(t-1)+ t. We prove this by induction: * Proof of H_1: Note first that by Lemma <ref> and Lemma <ref>, the conditions of Lemma <ref> are satisfied after the initialisation phase of on an event of probability 1-δ. Moreover the running time of the initialisation is . So H_1 holds. * Proof of H_t+1 assuming that H_t holds: assuming that H_t holds for a given t, we have by Lemma <ref> that H_t+1 holds on an event ξ of probability larger than 1-δ, conditional on ξ_t. So writing ξ_t+1 = ξ_t ∩ξ, we have proven that H_t+1 holds. So for any given t ≥ 0, on an event of probability larger than 1-5tδ if is called for the t time, it takes as parameter a convex set 𝒳⊂𝒳̅, x̂∈𝒳̅, f̂∈ℝ such that: * |f(x̂) - f̂| ≤(/δ)/√(), * either [ 𝒳^* ⊂𝒳 and vol(𝒳) ≤(9/10)^t-1vol(𝒳̅) ], or f (x̂) - f^* ≤ B. * the total budget n_t used up to the t-th call of is such that t≤ n_t ≤ 2t - since 4 =. Step 5: Application of the result of the induction to what happens at the end of the algorithm. 
The induction from Step 4 applied to t = /5 implies that, on an event of probability larger than 1 - 5(n/)δ = 1-δ - that we will write ξ_term - the algorithm terminates after at least n/(2) rounds, and at most n/ rounds, and at its termination round, the current convex set 𝒳⊂𝒳̅, and the current value x̂ (that will output as it is the last round) are such that * either [ 𝒳^* ⊂𝒳 and vol(𝒳) ≤(9/10)^n/(2)-1vol(𝒳̅) ], * or f (x̂) - f^* ≤ B. If f (x̂) - f^* ≤ B, the proof is finished. So assume that on ξ_term, we have 𝒳^* ⊂𝒳 and vol(𝒳) ≤(9/10)^n/(2)-1vol(𝒳̅). Note that as 𝒳 is convex, we have conv(𝒳^*) ⊂𝒳 on ξ_term. By definition of , we have that (9/10)^n/(2)-1≤[r/nddiam(𝒳̅)]^d. So by Lemma <ref>, have a contradiction on ξ_term: conv(𝒳^*) ⊂𝒳, but vol(conv(𝒳^*))≥vol( 𝒳). So it means that on ξ_term, we must have f (x̂) - f^* ≤ B. §.§ Proof of Proposition <ref> In what follows write := (N). We first state the following lemma. Assume that Assumption <ref> holds. Consider s≥ 0 and z∈ℝ^d such that 𝔹_2(z, 2c) ∈𝒵̅^𝒳. There exists an event of probability larger than 1-3δ such that the following holds on it, during a run of (z, 𝒳, c,N,s,δ): * Assume that g(z) - f^* ≤1/N and s = 0. For any z_j^i such that ĝ_z_j^(i) - ĝ_z ≥1/16N+4(2^i i^2/δ) √(i^2 2^i/N), then since 2(2^i i^2/δ) √(i^2 2^i/N)≥(17/16)/N, we have g(z_j^(i)) - f^*≥(17/16)/N. * Assume that g(z) - f^* ≥(17/16)^s/N. For any z_j^i such that ĝ_z_j^(i) - ĝ_z ≥(17/16)^s/16N+4(2^i i^2/δ) √(i^2 2^i/N), it holds that g(z_j^(i)) - f^* ≥(17/16)^s+1/N. * Assume that g_c(2z) - g_c(z) ≥ 2^-3(g_c(z) - f^*), and g(z) - f^* > (17/16)^s/N[ 2^15(1/δ)log(2/δ) 1/√(N)]. Then there exists z_j^i such that ĝ_z_j^(i) - ĝ_z ≥(17/16)^s/16N+4(2^i i^2/δ) √(i^2 2^i/N). Assume that 𝔹_2(z_0, 2 c) ⊂𝒵̅^𝒳. Then by construction we know that even if calls itself recursively for rounds, then all the parameters z that it will take at each round will be such that 𝔹_2(z, 2 c) ⊂𝒵̅^𝒳. Write τ for the random round where the recursive application of (z_0, 𝒳, c,N,0,δ) stops. Applying Lemma <ref>, we know that on an event of probability larger than 1-3δ, for the point z taken as input at round τ: * if τ < we have either g_2c(z) - g_c(z) ≤ 2^-3(g(z) - f^*), or g(z) - f^* ≤ 2^15(1/δ)log(2/δ) 1/√(N). This concludes the proof in this case. * otherwise if τ≥ then g(z) - f^* ≥(17/16)^⌊⌋/N. Note that by definition of this implies g(z) - f^* ≥ 2 which contradicts our Assumption <ref>. So this case cannot happen. This concludes the proof in this case as well. §.§ Proof of Lemma <ref> We first remind the following classical results of convex geometry. Let 𝒞 be a convex in isotropic position. It holds that 𝔹_2(1) ⊂𝒞⊂𝔹_2(2d). Let 𝒞 be a convex in isotropic position. It holds for any u ∈ℝ^d: u≠ 0, and any z ∈ℝ^d vol(𝒞∩{w: ⟨ w-z, u⟩≥ 0}) ≥ (1/e - z_2)vol(𝒞). An immediate corollary of the last proposition is as follows Let 𝒦 be a convex. It holds for any u ∈ℝ^d: u≠ 0, and any z ∈ℝ^d vol(𝒦∩ F_𝒳^-1({w: ⟨ w-z, u⟩≥ 0})) ≥ (1/e - √(d)z_2)vol(𝒦). From Proposition <ref> we deduce that 𝔹_2(2 c) ⊂𝒵^𝒳. We therefore know that Proposition <ref> holds for the output (z, ĝ_z) of (0, 𝒳, c, ,0, δ) - and write ξ for the event of probability larger than 1-4δ where the proposition holds. Note already that it implies by definition of the algorithm that on ξ |f(x̂') - f̂'| ≤( / δ)/√(), and also that on ξ + ≤ T_≤ 4 + , and also that 𝒳'⊂𝒳 is convex. 
Note also that on ξ it implies by definition if c that z_2 ≤ 2 c = 1/(4e√(d)), which implies by Corollary <ref>, by construction of the algorithm that on ξ vol(𝒳∖𝒳') ≥1/2evol(𝒳), namely that on ξ vol( 𝒳') ≥ (1 - 1/2e)vol(𝒳). Case 1: g(z) is small, or f(x̂) is small. We first consider the case where either f(x̂) - f^* ≤ B, or on ξ, we have that g(z) - f^* ≤ B. In this case, we will have by definition of the algorithm that on ξ: f (x̂') - f^* ≤ B, as B ≥ 2^16(/δ)log(2/δ) 1/√()≥ 8(/δ)1/√(). This concludes the proof. Case 2: g(z) and f(x̂) are large. We now consider the converse case on ξ. In this case, we know by Proposition <ref> that on ξ: g(z) - f^* ≥ B. In this case, we know by Proposition <ref> that on ξ g_2c(z) - g_c(z) ≤ 2^-3(g_c(z) - f^*), and we also know by assumption that 𝒳^* ⊂𝒳. So that by definition of 𝒳^*, and since ≤ n, for any x̃∈𝒳^*, on ξ g_2c(z) - g_c(z) ≤ 2^-2(g_c(z) - g(F_𝒳(x̃))). We can therefore apply Lemma <ref>, and we have that on ξ ⟨∇ g_c(z), z - F_𝒳(x̃) ⟩≥3/4[g_c(z)) - g(F_𝒳(x̃))] ≥5/8[g_c(z)) -f^*] ≥5/8 B, as B ≥ 8/n and g(F_𝒳(x̃)) - f^* ≤ 1/n. Also, by Lemma <ref>, for any u ∈ℝ^d, conditional to ξ and on an event ξ' of probability larger than 1-δ |⟨∇ g_c - ∇ g_c(z), u⟩| ≤(1/δ) u_2 √(d)/c√(). So that on ξ'∩ξ, for any x̃∈𝒳^* |⟨∇ g_c - ∇ g_c(z), z - F_𝒳(x̃)⟩| ≤(2d/δ) z - F_𝒳(x̃)_2 √(d)/c√(). From Lemma <ref> this implies on ξ'∩ξ |⟨∇ g_c - ∇ g_c(z), z - F_𝒳(x̃)⟩| ≤ 2(2d/δ) d/c√()≤ B/16, as B ≥ 32(2d/δ) d/c√(). Combining this result with Equation (<ref>) leads to the fact that on ξ'∩ξ, for any x̃∈𝒳^* ⟨∇ g_c, z - F_𝒳(x̃) ⟩≥1/16 B. So that on ξ'∩ξ, we have that 𝒳^* ⊂𝒳'. This concludes the proof. By Lemma <ref>, it holds on an event of probability larger than 1 - δ (1 + ∑_k 1/k^2) ≥ 1 - 2.5δ that |g( z) - ĝ_z| ≤(1/δ)/√(N), and for any i ≤ I_N, j ≤ 2^i |g(z_j^(i)) - ĝ_z_j^(i)| ≤(2^i i^2/δ)/√(N). Write ξ for this event. Note that if g(z) - f^* ≤1/N and s=0, then on ξ we have that if there exists z_j^i such that ĝ_z_j^(i) - ĝ_z ≥1/16N+4(2^i i^2/δ) √(i^2 2^i/N), then since 2(2^i i^2/δ) √(i^2 2^i/N)≥(17/16)/N, we have g(z_j^(i)) - f^*≥(17/16)/N. The first part of the lemma is therefore proven. Assume now that g(z) - f^* ≥(17/16)^s/N. Note first that on ξ, we have that if there exists z_j^i such that ĝ_z_j^(i) - ĝ_z ≥(17/16)^s/16N+4(2^i i^2/δ) √(i^2 2^i/N), then since g(z) - f^* ≥(17/16)^s/N g(z_j^(i)) - f^*≥(17/16)^s+1/N. The second part of the lemma is therefore proven. Now assume that z satisfies the conditions of the third part of the lemma, namely g_c(2z) - g_c(z) ≥ 2^-3(g_c(z) - f^*), and g(z) - f^* > (17/16)^s/N[ 2^15(1/δ)log(2/δ) 1/√(N)]. Step 1: establishing condition under which at least a z_j^(i) is selected. On ξ it holds that |g(z_j^(i)) - g(z)| ≤ 2(2^i i^2/δ) √(i^2 2^i/N). So if g(z_j^(i)) - g(z) ≥(17/16)^s/16N + 6(2^i i^2/δ) √(i^2 2^i/N):= Δ_i, then on ξ it can be selected as it satisfies ĝ_z_j^(i) - ĝ_z ≥(17/16)^s/16N + 4(2^i i^2/δ) √(i^2 2^i/N). We now recall the following concentration result. tocite Let p∈ [0,1] and m≥ 1. Let X_1, …, X_m ∼_i.i.d.ℬ(p). Then with probability larger than 1-δ |1/m∑_i=1^m X_i - p| ≤√(2plog(2/δ)/m) + 2log(2/δ)/m, which implies in particular that with probability larger than 1-δ p/2 - 4log(2/δ)/m≤1/m∑_i=1^m X_i ≤ 2p + 2log(2/δ)/m. Assume that there exists i ≤ I_N such that ℙ_Z∼𝒰_𝔹_2(2c)(g(z+Z) - g(z) ≥Δ_i) > 8log(2/δ)/2^i. By Lemma <ref>, then we know that with probability larger than 1-δ, at least one of the z_j^(i) for some j will be such that g(z_j^(i)) - g(z) ≥Δ_i. 
Using Equation (<ref>), we therefore know that in this case, with probability larger than 1-4δ, z_j^(i) will be selected, finishing the proof in this case. Step 2: Converse case where for any i ≤ I_N, we have ℙ_Z∼𝒰_𝔹_2(2c)(g(z+Z) - g(z) ≥Δ_i) ≤ 8log(2/δ)/2^i. We remind that Δ_i = (17/16)^s/8N + 6(2^i i^2/δ) √(i^2 2^i/N). Note that by assumption, we therefore have that for any i ≤ I_N, we have ℙ_Z∼𝒰_𝔹_2(2c)(g(z+Z) - g(z) - (17/16)^s/16N≥ 6(2^i i^2/δ) √(i^2 2^i/N)) ≤ 8log(2/δ)/2^i. So that, since 4(2^I_N I_N^2/δ) √(I_N^2 2^I_N/N)≥ 2 by definition of I_N 𝔼_Z∼𝒰_𝔹_2(2c)[g(z+Z) - g(z) - (17/16)^s/16N] ≤∑_i ≤ I_N 64(2^i+1 (i+1)^2/δ) √((i+1)^2 2^i+1/N)×log(2/δ)/2^i, leading to g_2c(z) - g(z) - (17/16)^s/16N ≤ 64√(2)(1/δ)log(2/δ) 1/√(N)∑_i ≤ I_N (i+1)^4 2^-i/2 ≤ 2^11(1/δ)log(2/δ) 1/√(N). Since g(z) - f^* ≥(17/16)^s/N, and g(z) - f^* > 2^15(1/δ)log(2/δ) 1/√(N) by assumption, then we have that g_2c(z) - f^* < (g(z) - f^*)(1+2^-3). This implies g_2c(z) - g_c(z) < 2^-3(g(z) - f^*) ≤ 2^-3(g_c(z) - f^*). This contradicts our assumption that g_2c(z) - g_c(z) ≥ 2^-3(g_c(z) - f^*), so that this case cannot happen under our assumption. This concludes the proof. §.§ Proof of technical lemmas Assume without loss of generality that z̃ = 0 and g(z̃) = 0. Let c>0 and z ∈ℝ^d such that g_2c (z) - g_c(z) ≤ 2^-2 g_c(z). In order to prove the lemma, it suffices to prove that ⟨∇ g_c(z), z ⟩≥ 3g_c(z))/4. By convexity of g on ℝ^d, note that for any z'∈ℝ^d, we have g(z'/2) ≤ g(z')/2. So that by definition of g_c: g_c(z'/2) ≤ g_2c(z')/2. Since g_2c (z) ≤ (5/4) g_c(z), we have applying the formula above to z g_c(z/2) ≤ g_2c(z)/2 ≤ 5g_c(z)/8, so that g_c(z) - g_c(z/2) ≥ 3g_c(z)/8. Since g_c is convex and differentiable on ℝ^d g_c(z) - g_c(z/2) ≤⟨∇ g_c(z), z/2 ⟩. So that finally by Equation (<ref>): 3g_c(z)/4 ≤⟨∇ g_c(z), z ⟩. Bound on the deviations of ĝ_c(z). Let δ >0. Note that ĝ_c(z) is the empirical mean of the ỹ_i^(b), which are by construction i.i.d. random variables such that ỹ_i^(b)∈ [0,1] and 𝔼[ỹ_i^(b)] = g_c(z). So that, applying Hoeffding's inequality (see e.g. <cit.>), with probability larger than 1-δ |ĝ_c(z) - g_c(z)| ≤√(log(2/δ)/2N), leading to the result. Bound on the deviations of ⟨∇ g_c(z). Let δ >0. Note now that 𝔼⟨∇ g_c(z) , u⟩ is the empirical of the i.i.d. random variables W_i := d/c^2ỹ_i^(s)⟨ Z_i^(s), u ⟩. Note that by Equation (<ref>), we have 𝔼 W_i = ⟨∇ g_c(z), u⟩, and 𝔼 W_i^2 = d^2/c^4𝔼 [(ỹ_i^(s))^2 ⟨ Z_i^(s), u ⟩^2] ≤d^2/c^4𝔼 [⟨ Z_i^(s), u ⟩^2] = d^2/c^4c^2/du_2^2 = d/c^2u_2^2, and |W_i| = d/c^2ỹ_i^(s) |⟨ Z_i^(s), u ⟩| ≤d/cu_2. So that, applying Bernstein's inequality (see e.g. <cit.>), with probability larger than 1-δ ||⟨∇ g_c(z) - ∇ g_c(z), u⟩|| ≤√(d)/cu_2 √(2log(2/δ)/N) + 2d/cu_2log(2/δ)/N. Since N ≥ d, this leads to the result. We know that 𝔹_1(x^*, r) ∈𝒳̅. So that for any i ≤ d, we have f(x^*+re_i) ≤ 1 , f(x^*-re_i) ≤ 1, which implies by convexity of f and 𝒳̅ that f(x^*+re_i/n) ≤ 1/n, f(x^*-re_i/n) ≤ 1/n, and these points are in 𝒳̅. Note that vol(𝔹_1(x^*, r/n)) ≥ (r/(nd))^d. Note also that vol(𝒳̅) ≤diam(𝒳̅)^d. So that since 𝔹_1(x^*, r/n) = conv(𝒳^*) vol(conv(𝒳^*)/vol(𝒳̅)≥[r/nddiam(𝒳̅]^d. The corollary follows from Proposition <ref> using the facts that F_𝒳(𝒦) is in istropic position rescaled by d^-1/2 and also that since F_𝒳^-1 is a linear application of the form F_𝒳^-1(z) = √(d)Σ_𝒳^1/2 (z+ μ_𝒳), then for any convex 𝒦' vol(F_𝒳^-1(𝒦')) = d^d/2det(Σ_𝒳)^1/2vol(𝒦'), where det(Σ_𝒳) is the determinant of Σ_𝒳.
http://arxiv.org/abs/2406.17730v1
20240625171958
Distance Reducing Markov Bases
[ "Oliver Clarke", "Dimitra Kosta" ]
math.AC
[ "math.AC", "math.CO", "62R01 (Primary), 13P10, 13F65, 14M25, 62H17, 14M10 (Secondary)" ]
§ ABSTRACT The distance reducing property for Markov bases is an important property that provides a bound on the mixing time of the associated Markov chain. The goal of this project is to understand properties of distance-reducing Markov bases. We explore the distance reducing property for monomial curves and give a complete characterisation of distance reduction in the case of complete intersection monomial curves. Our characterisation carefully uses the notion of gluings for numerical semigroups. We also characterise the distance reducing property for non-complete intersection monomial curves in small dimensions, and we explore the distance irreducible elements: the moves that appear in all distance reducing Markov bases. Extreme Diffusion Measures Statistical Fluctuations of the Environment Jacob Hass^*, Hindy Drillick^†, Ivan Corwin^†, Eric Corwin^* July 1, 2024 ====================================================================== § INTRODUCTION In Algebraic Statistics, Markov bases play an important role in sampling contingency tables and approximating Fisher's exact test with a Markov chain Monte Carlo approach. For background on Algebraic Statistics we refer to <cit.>, and for a thorough survey on Markov bases and contingency tables we refer to <cit.>. When the sample size is large, it becomes infeasible to write down every possible contingency table. As an approximation, we may use a Markov chain Monte Carlo method outlined by Diaconis and Sturmfels <cit.>: start with any sample with the correct marginals; repeatedly apply random moves to obtain a random sample. The set of moves is a fixed Markov basis, which is a generating set of the underlying toric ideal. By definition, a Markov basis produces a connected Markov chain on the fiber of the contingency table. So, after a certain number of moves, the sample produced is close to the stationary distribution of the Markov chain. The number of moves required to achieve this is called the mixing time. It is highly desirable to have a Markov basis with a low mixing time. However, computing the mixing time is often very difficult. So, we would like to find Markov bases that tightly and predictably connect the fibers of the contingency table. This motivates the notion of a distance reducing Markov basis. See Definition <ref> and <cit.>. For distance reducing Markov bases, the minimum number of moves required to connect a pair of points in a fiber is bounded above by the distance between them. There are many known examples of distance reducing Markov bases. For instance, the Graver basis is distance reducing with respect to the 1-norm. For the homogeneous case, see <cit.>, and for the general case we prove the following. [Proposition <ref>] The Graver basis is strongly distance reducing, even if A is inhomogeneous. The strongly robust toric ideals are those that are minimally generated by their Graver basis. While this may seem like a very strong condition, there are a number of families of examples that exhibit a rich combinatorial structure. See <cit.>. For instance, toric ideals of Lawrence type are strongly robust, see <cit.>. For each strongly robust toric ideal, any Markov basis is distance reducing. However, not all distance reducing Markov bases belong to strongly robust toric ideals. §.§ Our results Typically the Graver basis is very large and impractical to compute, even for some small examples.
So it is useful to find methods to determine whether a Markov basis is distance reducing with very few computational steps. In this paper, we characterise the distance reduction property for complete intersection monomial curves in terms of the circuits. [Corollary <ref>] Let A ∈^1 × n and assume that I_A is a complete intersection toric ideal. Let M ⊆(A) be a minimal Markov basis. Then M is distance reducing if and only if M reduces the distance of the circuits of A. In the above, a circuit refers to an element z ∈(A) of the form z = (0, …, 0, z_i, 0, …, 0, -z_j, 0, …, 0), i.e., an element supported on exactly two coordinates. This classification shows that to check the distance reduction property for monomial curves, it suffices to check that a Markov basis reduces the distance of the circuits. The circuits form a very small subset of the Graver basis, so this result is a significant improvement over checking the entire Graver basis, see Theorem <ref> below. For non complete intersection, we give a similar characterisation. For monomial curves in 𝔸^3, we use Herzog's classification of Markov bases <cit.> to prove the following. [Theorems <ref>, <ref> and <ref>] Let A ∈^1 × 3 and M a minimal Markov basis for A. Then M is distance reducing if and only if M reduces the distance of the circuits of A. Similarly for monomial curves in 𝔸^4, we characterise the distance reduction property whenever the curve admits a gluing. [Theorem <ref>] Let A ∈^1 × 4 and M ⊆(A) a minimal Markov basis for A. Assume that A admits a gluing. Then M is distance reducing if and only if M reduces the distance of the circuits of A. A gluing is a operation that arises in the classification of complete intersection toric ideals <cit.>. The proofs of the above results carefully use the properties of Markov bases that arise from glued monomial curves. There are many important families of moves related to Markov bases. For instance: the indispensables S(A) are the moves that belong to every Markov basis; the Graver basis G(A) is a Markov basis given by the set of primitive elements; and the Universal Markov basis M(A) is the union of all minimal Markov bases. These sets also admit algebraic descriptions in terms of certain decompositions <cit.>. The distance-reducing analogue of the indispensables is the set of distance irreducible elements, which are the elements that appear in every distance reducing Markov basis. In Section <ref>, we define and investigate the relationships between the families of moves S(A), M(A), G(A) above, and their distance-reducing analogues: weakly distance irreducibles D^w(A), distance irreducibles D(A), universal strongly distance reducing Markov basis ^s(A), and universal distance-reducing Markov basis (A). [Proposition <ref>] The following chain of inclusions holds: S(A) ⊆ D(A) ⊆ D^w(A) ⊆ G(A). If A has a unique minimal Markov basis then we may relate the universal Markov basis to the distance irreducible elements as follows. [Proposition <ref>] If A has a unique distance-reducing minimal Markov basis, then we have S(A) = D(A) = M(A) = 𝒟(A). We also show that there are always finitely many minimal distance reducing Markov bases. [Corollary <ref>] The sets (A) and ^s(A) are finite. §.§ Paper outline In Section <ref> we fix our notation for: Markov bases; distance reduction, see Definition <ref>; and gluings in Section <ref>. Sections <ref>-<ref> are about the distance reduction property for monomial curves. 
In Section <ref>, we characterise the distance reduction property of monomial curves in 𝔸^3. Our main results are Theorem <ref> for complete intersections and Theorem <ref> for non complete intersections. In Section <ref> we characterise the distance reduction property for a special kind of symmetric complete intersection monomial curves given by Theorem <ref>. In Section <ref>, we prove Theorem <ref>, which shows that distance reducing Markov bases of complete intersection monomial curves are exactly those in Section <ref>. In Section <ref>, we focus on monomial curves in 𝔸^4. We make explicit the characterisation complete intersections in Corollary <ref> and characterise the distance reducing Markov bases of glued non complete intersections in Theorem <ref>. In Section <ref> we generalise results from <cit.> to the inhomogeneous case and prove that a Markov basis is distance reducing if and only if it reduced the distance of the Graver basis. See Theorem <ref>. In Section <ref>, we report on the distance irreducible elements, which are the elements that appear in every distance reducing Markov basis. In Section <ref> we discuss possible generalisations of our results and further questions. In particular, in Section <ref> we introduce the distance reducing complex for studying families of metrics. § PRELIMINARIES Notation. Throughout, we write := {0, 1, 2, …} for the set of non-negative integers and [n] := {1, 2, …, n } for the integers from 1 to n. For any set S, such as , or , we take elements of S^n to be column vectors. Given a set of positive integers X = {x_1, x_2, …, x_n}, we write X ⊆ for the numerical semigroup generated by X and X ⊆ for the subgroup generated by X. Affine semigroups. Throughout, we consider matrices A ∈^d × n, which, unless specified otherwise, we assume satisfy (A) ∩^n = {0}. For any vector u ∈^n we define u^+ and u^- to be the unique vectors in ^n such that u = u^+ - u^-. We denote by (u)_i is the i-th coordinate of u and u_i = |(u)_i| its absolute value. Let R = K[x_1, …, x_n] be a polynomial ring and recall the multi-index notation for monomials x^u := x_1^u_1x_2^u_2⋯ x_n^u_n∈ R for any u = (u_1, …, u_n)^T ∈^n. The columns a_1, …, a_n of A generate the affine semigroup A := {∑_i ∈ [n]λ_i a_i λ_i ∈}⊆^d. For each t ∈ A, we write _t = { u ∈^n Au = t } for the fiber of A. In addition, given any element v ∈^n, we write (v) := _Av for the fiber containing v. We denote by (A) = {_t t ∈ A } the collection of all fibers of A. Markov bases. The toric ideal of A is defined to be I_A = ⟨ x^u^+ - x^u^- u ∈(A) ⟩⊆ R. A Markov basis is a set B ⊆(A) such that I_A = ⟨ x^u^+ - x^u^- u ∈ B⟩. A Markov basis is minimal if no proper subset is a Markov basis. The union of all minimal Markov bases is the Universal Markov basis, denoted M(A). The intersection of all Markov bases S(A) is called the set of indispensable elements. The natural partial ordering on ^n is given by u ≤ v u_i ≤ v_i for all i ∈ [n]. An element z ∈(A) is called primitive if y ∈(A) with y^+ ≤ z^+ and y^- ≤ z^- then y = z. The set of all primitive elements of (A) is the Graver basis G(A) of A. Decompositions. Let z ∈(A). A decomposition of z is an expression for z as a sum of elements of (A). We say that a decomposition is proper if no summands is zero. Below we describe the decompositions that appear throughout the paper. Conformal decomposition: z = u+v such that z^+ = u^+ + v^+ and z^- = u^- + v^-. Semi-conformal decomposition: z = u+v such that u_i > 0 v_i ≥ 0. 
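For very small examples these notions can be checked directly by brute force. The sketch below is a minimal illustration in Python (the matrix A = [3, 5, 9], which reappears in an example later in the text, and the search window BOX are our own assumptions, and we use the usual convention that the comparison element y in the primitivity test is nonzero): it enumerates kernel elements in a bounded box, tests primitivity as defined above, and provides a helper for the conformal decompositions just introduced.

from itertools import product

A = [3, 5, 9]   # toy 1 x 3 example, revisited later in the text
BOX = 9         # illustrative search window, not part of any definition

def pos(z): return tuple(max(v, 0) for v in z)
def neg(z): return tuple(max(-v, 0) for v in z)
def leq(u, v): return all(x <= y for x, y in zip(u, v))

def kernel_box(A, box):
    # nonzero integer kernel elements of the 1 x n matrix A with entries in [-box, box]
    return [z for z in product(range(-box, box + 1), repeat=len(A))
            if any(z) and sum(a * v for a, v in zip(A, z)) == 0]

def is_primitive(z, kernel):
    # no nonzero y != z in ker(A) with y+ <= z+ and y- <= z- componentwise
    return all(y == z or not (leq(pos(y), pos(z)) and leq(neg(y), neg(z)))
               for y in kernel)

def is_conformal_decomposition(z, u, v):
    # z = u + v with z+ = u+ + v+ and z- = u- + v- (no cancellation between u and v)
    return (tuple(a + b for a, b in zip(u, v)) == z
            and tuple(a + b for a, b in zip(pos(u), pos(v))) == pos(z)
            and tuple(a + b for a, b in zip(neg(u), neg(v))) == neg(z))

ker = kernel_box(A, BOX)
graver_in_box = [z for z in ker if is_primitive(z, ker)]   # each element occurs with both signs
print(sorted(graver_in_box))

Such an enumeration only sees the Graver basis elements whose entries lie inside the chosen window, so it is meant as a toy illustration rather than a practical algorithm.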
The indispensable elements S(A) and the elements of the Graver basis G(A) are completely characterised by the above decompositions. We have the following: * z ∈ G(A) if and only if z ∈(A) has no proper conformal decomposition, * z ∈ S(A) if and only if z ∈(A) has no proper semi-conformal decomposition. Moves inside fibers. Let u ∈(A) be a nonzero element. It is convenient to think of u as a move in the following sense. Suppose that x ∈^n. We say that u is applicable to x if either x^- ≥ u^- or x^+ ≥ u^+. If x^- ≥ u^- then we say u sends x to x+u ∈^n, otherwise if x^+ ≥ u^+ then we say u sends x to x-u ∈^n. Given an element z ∈(A), we say that a move u is applicable to z if u is applicable to either z^+ or z^-. Suppose that x and y belong to the same fiber of A. We say that a subset B ⊆(A) connects x and y if there is a sequence of moves u = (u_1, u_2, …, u_k) such that for each i ∈ [k] we have that with u_i ∈ B is a move that sends x + u_1 + … + u_i-1 to x + u_1 + … + u_i and x+u_1+… +u_k = y. The Fundamental Theorem of Toric Ideals states that a collection of moves B ⊆(A) is a Markov basis if and only if B connects any pair of points within each fiber of A. Metrics. Let X be a set. A metric on X is a function d: X × X →∪{∞} such that for all x, y, z ∈ X: d(x,y) ≥ 0, d(x,y) = 0 x = y, d(x,y) = d(y,x), d(x,y) + d(y,z) ≥ d(x,z). Throughout this article, we often consider the metric on ^n induced by the 1-norm || · ||, which is given by d(x,y) = ||x-y|| = ∑_i |x_i - y_i|, where |·| is the absolute value on . In their work, Aoki and Takemura identify a special class of Markov bases called distance reducing Markov bases. (Distance reducing) Fix a metric d on ^n and subsets B, Z ⊆(A). We say that B reduces the distance of Z with respect to d (or B d-reduces Z) if for each z ∈ Z, there exists u ∈ B such at least one of the following holds: * u is applicable to z^+ and either d(z^++u, z^-) < d(z^+,z^-) or d(z^+-u, z^-) < d(z^+,z^-), * u is applicable to z^- and either d(z^+, z^-+u) < d(z^+,z^-) or d(z^+, z^–u) < d(z^+,z^-). We say that B strongly reduces the distance of Z with respect to d if for each z ∈ Z each of the above conditions is satisfied by some, possibly different, u ∈ B. We say that B is [strongly] distance reducing with respect to d if B [strongly] reduces the distance of (A) with respect to d. We say that B [strongly] reduces the distance of Z if B [strongly] reduces the distance of Z with respect to the metric induced by the 1-norm. We say that B is [strongly] distance reducing if B [strongly] reduces the distance of (A) with respect to the metric induced by the 1-norm. In other words, a set of moves B is distance reducing if for any x, y in the same fiber of A there exists a move u ∈ B that reduces the distance between them. For the 1-norm, this means that u is applicable to at least one of x and y either ||x-y-u|| < ||x-y|| or ||x-y+u|| < ||x-y||. And B is strongly distance reducing if there exist moves u, u' ∈ B applicable to x, y respectively that satisfy (<ref>). If B is distance reducing with respect to some metric then B is a Markov basis. To illustrate the proof of this proposition we provide the following illuminating sketch. Take x and y in the same fiber of A and consider the task of finding a sequence of moves in B that connects x and y. Since B is distance reducing, we can find a move u, which is applicable to x or y and reduces the distance between them. We can continue in this way to find a move at each subsequent step that reduces the distance. 
Since there are only finitely elements in the fiber, we will eventually obtain a finite sequence of moves in B that connects x and y. Hence B is a Markov basis. We note an important difference between our setup and that of Aoki-Takemura. We do not assume that A is homogeneous, i.e., we do not assume that the all 1's row-vector is not a linear combination of the rows of A. In Section <ref>, we generalise some results about the homogeneous to the inhomogeneous case. Minimal sets. A minimal Markov basis is a Markov basis such that any proper subset is not a Markov basis. Similarly, a set is minimally distance reducing if no proper subset is distance reducing. By Proposition <ref>, any minimally distance reducing set is a Markov basis so we often call such a set a minimally distance reducing Markov basis. If a minimal Markov basis is distance reducing then we call it a distance-reducing minimal Markov basis. Note any that distance-reducing minimal Markov bases are minimally distance reducing Markov bases. However, a minimally distance reducing Markov basis need not be a minimal Markov basis. §.§ Complete intersections and gluings Let A = [ a_1 a_2 … a_n ]∈^d × n be a matrix of positive integers such that (A) ∩^n = {0}. We say that A is a complete intersection if the ideal I_A is a complete intersection, i.e., I_A is generated by (A) elements. The complete intersection matrices A are characterised by gluings. More precisely, the matrix A is a complete intersection if and only if the columns of A can be partitioned into two parts {a_1, a_2, …, a_n} = {b_1, …, b_p}⊔{c_1, …, c_q} such that the matrices B = [ b_1 … b_p ] and C = [ c_1 … c_q ] are both complete intersections and satisfy the gluing property: there exists x ∈ B ∩ C such that x = B ∩ C. Equivalently, there exists z ∈([B | C]) with (z^+) ⊆{1, …, p} and (z^-) ⊆{p+1, …, p+q} such that ([B | C]) = ((B) ×{0}) ⊕({0}×(C)) ⊕⟨ z ⟩_. If B ∈^d × p and C ∈^d × q are matrices that satisfy the gluing property, then we write B ∘ C = [B | C] for the juxtaposition of B and C. The equivalence in the above definition is proved originally in <cit.> and, more generally in the setting of p-gluings, in <cit.>. The definitions are related as follows. If x ∈ B ∩ C, then x = ∑_i ∈ [p]λ_i b_i = ∑_j ∈ [q]μ_j c_j for some non-negative integers λ_i and μ_j, as in the first definition. Then the element z = (λ_1, …, λ_p, -μ_1, …, -μ_q) ∈([B | C]) satisfies the second definition. Similarly, if z = (z_1, …, z_p, -z_p+1, …, -z_p+q) ∈([B | C]) satisfied the second definition, then x = b_1 z_1 + … + b_p z_p = c_1 z_p+1 + … c_q z_p+q∈ A ∩ B satisfies the first definition. Gluing notation. We introduce the following notation to keep track of gluings for monomial curves. Assume that A = [ a_1 a_2 … a_n ]∈^1 × n is a complete intersection. The gluing type of A is defined recursively. The base case is n = 1, which has gluing type a_1. If A is the gluing of two complete intersections B and C with gluing types T_B and T_C respectively, then the type of A is denoted (T_B ∘ T_C). For extra detail, we may also record the value x ∈ B ∩ C for the gluing and write the gluing type of A as (T_B ∘_x T_C). So, for example, all complete intersection monomial curves in 𝔸^3 have gluing type ((a_1 ∘ a_2) ∘ a_3) for an appropriate ordering of the coordinates. By a slight abuse of notation we identify A with its gluing type. Some matrices admit different gluing types. Let A = [ 3 5 9 ] then we have A = ((3 ∘_15 5) ∘_9 9) and A = ((3 ∘_9 9) ∘_15 5). 
There are two distinct minimal Markov bases for A given by M_1 = [ 5 -3 0; 3 0 -1 ] and M_2 = [ 2 -3 1; 3 0 -1 ]. We observe that M_1 is not distance reducing as it fails to reduce the distance of the circuit (0, 9, -5). On the other hand, it turns out that M_2 is distance reducing, which follows from Theorem <ref>. § MONOMIAL CURVES IN A3 In this section we characterise the distance-reducing minimal Markov bases of monomial curves in 𝔸^3. Throughout this section, we consider a matrix A = [ a_1 a_2 a_3 ] with distinct entries. Let M be a minimal Markov basis for A. Then M is distance reducing if and only if M reduces the distance of the circuits of A. If A is a complete intersection then the result follows from Theorem <ref>. Otherwise, A is not a complete intersection and the result follows from Theorem <ref>. To prove Theorems <ref> and <ref>, we use the explicit characterisation of the minimal Markov bases for monomial curves in 𝔸^3. Let us begin by recalling from <cit.> the following terminology for elements of (A). Notation. Let z ∈(A). If z is nonzero, then there is a coordinate of z of a different sign to the other coordinates, i.e., for some i ∈ [3] we have that either 1. z_i > 0 and z_j ≤ 0 for each j ≠ i, 2. z_i < 0 and z_j ≥ 0 for each j ≠ i. In this case, we say that z has type i. Note, if z is a circuit, then z has two types. Suppose that z has type i and for any other element z' ∈(A) of type i we have |z_i| ≤ |z'_i|, then we say that z is minimal type i, or minimal when the context is clear. In <cit.>, Herzog shows that the ideal I_A is minimally generated by the binomials corresponding to minimal elements of (A). Moreover, these generating sets directly determine whether A is a complete intersection. The following theorem summarises the important results that we use throughout this section. The minimal Markov bases of A fall into one of two cases. * If no minimal element of (A) is a circuit, then, up to sign, there are unique minimal elements of (A): g_1 = (-c_1, v_12, v_13), g_2 = (v_21, -c_2, v_23), g_3 = (v_31, v_32, -c_3) of types 1, 2, 3, respectively, with c_i and v_i,j positive for all i, j. Up to sign, A has a unique minimal Markov basis that consists of the three minimal elements. In this case, A is not a complete intersection. * If a circuit is a minimal element of (A), then, up to sign and permutation of the coordinates, two minimal elements of (A) are b = (b_1, -b_2, 0) and c = (c_1, c_2, -c_3) where b is the unique minimal type 1 and 2 element, and c is a minimal type 3 element, with b_1, b_2, c_3 > 0 and c_1, c_2 ≥ 0. The other minimal elements of (A) are c + λ b for any λ∈ such that c + λ b has type 3. The minimal Markov bases are given by {b, c + λ b} for each λ as above. In this case, A is a complete intersection. If A is not a complete intersection, then the unique minimal Markov basis described above satisfies the following relation. If A is not a complete intersection, then its unique minimal Markov basis {g_1, g_2, g_3 } satisfies g_1 + g_2 + g_3 = 0. By Theorem <ref>, the distance reduction property is characterised by circuits. We use this to observe that a unique minimal Markov basis does not imply that it is distance reducing. Let A = [ 3 5 11 ]. Then A has a unique minimal Markov basis M = {b := (5, -3, 0), c:= (2, 1, -1)}, which is evident from the observation that b is not applicable to c. Hence, there is not nonzero λ∈ such that c + λ b has type 3. Consider the circuit z = (0, 11, -5) ∈(A). The elements b and c are applicable to z. 
However, neither reduces the distance of z as ||z + b|| = ||(5, 8, -5)|| = 18 ≮ ||z|| and ||z - c|| = ||(-2, 10, -4)|| = 16 ≮ ||z||. §.§ Complete intersections By Theorem <ref>-(<ref>), the complete intersection monomial curves in A = [ a_1 a_2 a_3 ] have a minimal Markov basis consisting of a circuit b = (b_1, -b_2, 0) and another element c = (c_1, c_2, -c_3). By swapping the first and second column of A, we may assume that b_1 > b_2 or, equivalently, that a_1 < a_2. Suppose A is a complete intersection and let M = {b := (b_1, -b_2, 0), c := (c_1, c_2, -c_3)} be a minimal Markov basis for A, as in Theorem <ref>, with b_1 > b_2. Then the following are equivalent: * M is distance reducing, * M reduces the distance of the circuit z = γ(0, a_3, -a_2) where γ = 1 / (a_2, a_3), * c_1 < c_2 + c_3. (1) (2) Immediate from the definition. (2) (3) Assume that M reduces the distance of the circuit z = (0, z_2, -z_3). If b is applicable to z then it does not reduce the distance of z because b_1 > b_2. So c reduces the distance of z. Since z ∈(A) and M is a Markov basis, we have z = -α b + β c for some integers α, β. So z = (0, z_2, -z_3) = (-α b_1 + β c_1, α b_2 + β c_2, -β c_3). Since the third coordinate of b is zero, it follows that β > 0. By the first coordinate, we have -α b_1 + β c_1 = 0, so it follows that α≥ 0. Hence z_2 = α b_2 + β c_2 ≥ c_2. Also note that z is type 3 and c is minimal type 3, so we have c_3 ≤ z_3. Since c reduces the distance of z, we have ||z|| > ||z - c||, hence ||z|| > ||z - c|| = |-c_1| + |z_2 - c_2| + |-z_3 + c_3| = ||z|| + c_1 - c_2 - c_3. So we have c_1 < c_2 + c_3. (3) (1) Assume that c_1 < c_2 + c_3. Let z ∈(A) by any element. We show that M reduces the distance of z. We write z = α b + β c = (α b_1 + β c_1, -α b_2 + β b_2, -β c_3) for some integers α, β. If either α = 0 or β = 0, then z is a multiple of c or b, respectively, so c or b is applicable to z and reduces the distance. So, from now on, we assume α≠ 0 and β≠ 0. If β < 0 then we replace z with -z. So without loss of generality, we may assume that β > 0. If α > 0 then z_1 = α b_1 + β b_2 ≥ b_1, hence b is applicable to z. So we have ||z - b|| = ||(z_1 - b_1, ± z_2 + b_2, -z_3)|| ≤ z_1 - b_1 + z_2 + b_2 + z_3 = ||z|| - b_1 + b_2 < ||z||. Hence b reduces the distance of z and we are done. If α < 0, then we have z_2 = (-α)b_2 + β b_2 ≥ c_2. Since z_3 = β c_3 ≥ c_3, so c is applicable to z. Hence ||z-c|| = ||(± z_1 - c_1, z_2 - c_2, -z_3+c_3)|| = |z_1 ± c_1| + z_2 - c_2 + z_3 - c_3 ≤ ||z|| + c_1 - c_2 - c_3 < ||z||, where the final inequality follows from the assumption that c_1 < c_2 + c_3. §.§ Non complete intersections The minimal Markov basis for a non complete intersection monomial curves in 𝔸^3 is explicitly described in Theorem <ref>-(<ref>). With this description, we classify when the Markov basis is distance reducing as follows. Let A = [ a_1 a_2 a_3 ] be a matrix with a_1 < a_2 < a_3 that is not a complete intersection. Let M = {g_1 := (-c_1, v_12, v_13), g_2 := (v_21, -c_2, v_23), g_3 := (v_31, v_32, -c_3)} be a minimal Markov basis for A, as in Theorem <ref>. Then the following are equivalent: * M is distance reducing, * M reduces the distance of the circuit z = γ(0, a_3, -a_2) where γ = 1/ (a_2, a_3), * at least one of v_21 < c_2 + v_23 or v_31 < v_32 + c_3 holds. (1) (2). Follows immediately from the definition. (2) (3). Assume that M reduces the distance of the circuit z = (0, z_2, -z_3). We have that z = α g_1 + β g_2 + γ g_3 for some α, β, γ∈. 
By Proposition <ref>, we have that g_1 + g_2 + g_3 = 0. Therefore z = (β - α)g_2 + (γ - α)g_3. So, without loss of generality, we may assume that α = 0. We have z = β g_2 + γ g_3 = (β v_21 + γ v_31, -β c_2 + γ v_32, β v_23 - γ c_3). By comparing the sign patterns, we deduce that β < 0 and γ > 0. By assumption, M reduces the distance of z. However, the move g_1 is not applicable to z. So we proceed by taking cases on whether g_2 or g_3 reduces the distance of z. Case 1. Assume that g_2 reduces the distance of z. Then we have that g_2 is applicable to z and so c_2 ≤ z_2. Observe that z_3 = (-β) v_23 + γ c_3 ≥ v_23. Since g_2 reduces the distance of z, we have ||z|| > ||z + g_2|| and so ||z|| > ||z+g_2|| = ||(v_21, z_2 - c_2, -z_3 + v_23)|| = v_21 + z_2 - c_2 + z_3 - v_23 = ||z|| + v_21 - c_2 - v_23. So we have that v_21 < c_2 + v_23 and we are done. Case 2. Assume that g_3 reduces the distance of z. Then we have that g_3 is applicable to z and so c_3 ≤ z_3. Observe that z_2 = (-β)c_2 + γ v_32≥ v_32. Since g_3 reduces the distance of z, we have ||z|| > ||z - g_3|| and so ||z|| > ||z-g_3|| = ||(-v_31, z_2 - v_32, -z_3 + c_3)|| = v_31 + z_2 - v_32 + z_3 - c_3 = ||z|| + v_31 - v_32 - c_3. So we have that v_31 < v_32 + c_3 and this concludes the proof of the forward direction. (3) (1). Assume that either v_21 < c_2 + v_23 or v_31 < v_32 + c_3 holds. Let z ∈(A) be any element. We show that M reduces the distance of z. Since z ∈(A), we have that z = α g_1 + β g_2 + γ g_3 for some integers α, β, γ. By Proposition <ref>, we have g_1 + g_2 + g_3 = 0 and so we may assume that α = 0. So we have z = β g_2 + γ g_3 = (β v_21 + γ v_31, -β c_2 + γ v_32, β v_23 - γ c_3). Note that if either β = 0 or γ = 0, then we have that z is multiple of g_3 or g_2, respectively, so that element reduces the distance of z. So, from now on, we assume that β≠ 0 and γ≠ 0. Without loss of generality, we may assume that β > 0. If γ > 0 then we have z_1 = β v_21 + γ v_31≥ v_21 + v_31 = c_1, where the final equality follows from Proposition <ref>. So g_1 is applicable to z. Since a_1 < a_2 < a_3, we have that c_1 = a_2/a_1 v_12 + a_3/a_1 v_13 > v_12 + v_13. So we have ||z+g_1|| = ||(z_1 - c_1, ± z_2 + v_12, ± z_3 + v_13)|| ≤ ||z|| - c_1 + v_12 + v_13 < ||z||. Hence g_1 reduces the distance of z and we are done. It remains to consider the case when γ < 0. We proceed by taking cases based on which assumption, either v_21 < c_2 + v_23 or v_31 < v_32 + c_3, holds. Case 1. Assume v_21 < c_2 + v_23. Then we have z_2 = β c_2 + (-γ)v_32≥ c_2, hence g_2 is applicable to z. Note that z_3 = β v_23 + (-γ)c_3 ≥ v_23. So we have ||z-g_2|| = ||(± z_1 - v_21, -z_2 + c_2, z_3 - v_23)|| ≤ ||z|| + v_21 - c_2 - v_23 < ||z||. Hence g_2 reduces the distance of z and we are done. Case 2. Assume v_31 < v_32 + c_3. Then we have z_3 = β v_23 + (-γ)c_3 ≥ c_3, hence g_3 is applicable to z. Note that z_2 = β c_2 + (-γ)v_32≥ v_32. So we have ||z+g_3|| = ||(± z_1 + v_31, -z_2 + v_32, z_3 - c_3)|| ≤ ||z|| + v_31 - v_32 - c_3 < ||z||. Hence g_3 reduces the distance of z, which concludes the proof. § MONOMIAL CURVES OF THE FIRST KIND Throughout this section, we consider a complete intersection monomial curve in 𝔸^n given by the matrix A = [ a_1 a_2 … a_n ]∈^1 × n such that (A) ∩^n = {0}. We assume that A admits a gluing of the first kind: A = (( … ((a_1 ∘ a_2) ∘ a_3) … ) ∘ a_n). We call these monomial curves of the first kind. For each k ∈ [n] we define the submatrix A_k = [ a_1 … a_k ]. 
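The chain of submatrices A_k can be examined computationally. The short Python sketch below tests whether a tuple (a_1, …, a_n) admits a gluing of the first kind in the stated order; it assumes the standard numerical-semigroup gluing criterion, namely that at each step the generator lcm(gcd(a_1, …, a_k), a_{k+1}) of the group intersection lies in the semigroup generated by a_1, …, a_k, and it ignores degenerate cases. The function names and the brute-force membership test are illustrative only; the two tuples tested anticipate matrices that appear in later examples.

```python
from math import gcd
from functools import reduce

def in_semigroup(x, gens):
    # Brute-force test: is x a non-negative integer combination of gens?
    reachable = {0}
    for t in range(1, x + 1):
        if any(t - g in reachable for g in gens if t - g >= 0):
            reachable.add(t)
    return x in reachable

def lcm(a, b):
    return a * b // gcd(a, b)

def first_kind_gluing(a):
    # Check A_{k+1} = (A_k ∘ a_{k+1}) for k = 1, ..., n-1 in the given order.
    glue_elements = []
    for k in range(1, len(a)):
        d = reduce(gcd, a[:k])
        glue = lcm(d, a[k])          # generator of the group intersection
        if not in_semigroup(glue, a[:k]):
            return False, glue_elements
        glue_elements.append(glue)
    return True, glue_elements

print(first_kind_gluing((7, 8, 22, 23)))     # (True, [56, 22, 23])
print(first_kind_gluing((14, 21, 23, 29)))   # (False, [42, 161]): fails at the final step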
Note that A_k admits a gluing of the first kind A_k = (( … ((a_1 ∘ a_2) ∘ a_3) … ) ∘ a_k). If n > 1, then A_n = (A_n-1∘ a_n), hence any minimal Markov basis for A_n is built from a Minimal Markov basis of A_n-1 as follows. Let M_n be a minimal Markov basis for A_n then there exists a minimal Markov basis M_n-1 for A_n-1 such that M_n = (M_n-1×{0}) ∪{u_n := [ u_n,1 u_n,2 … u_n, n-1 -u_n, n ]} for some non-negative integers u_1, …, u_n. So, inductively, we may identify a minimal Markov basis M_n for A_n with the rows of a matrix M_n = [ u_2,1 -u_2,2 0 0 … 0 0; u_3,1 u_3,2 -u_3,3 0 … 0 0; u_4,1 u_4,2 u_4,3 -u_4,4 … 0 0; ⋮ ⋮ ⋮ ⋮ ⋱ ⋮ ⋮; u_n-1, 1 u_n-1, 2 u_n-1, 3 u_n-1, 4 … -u_n-1, n-1 0; u_n, 1 u_n, 2 u_n, 3 u_n, 4 … u_n, n-1 -u_n,n ]. Without loss of generality, we assume that a_1 < a_2. For any matrix A ∈^1 × n, we say that the numerical semigroup A is symmetric if there exists m ∈ such that x ∈ A if and only if m - x ∉ S for all x ∈. Note that this definition does not depend on a choice of generators of A or an ordering on them. We say that A is specially symmetric if for each k ∈ [n-1] we have ((a_1, a_2, …, a_k), a_k+1) ∈ A_k. By <cit.>, it follows that if A is specially symmetric then A is symmetric. Observe that ((a_1, …, a_k), a_k+1) is the generator of the group A_k ∩{a_k+1}. By the definition of gluing, we have that ((a_1, …, a_k), a_k+1) ∈ A_k is satisfied if and only if there is a gluing A_k+1 = (A_k ∘ a_k). So monomial curves of the first kind correspond to a special kind of symmetric numerical semigroup. We now define a set of inequalities, which we use to completely characterise whether M is distance reducing. In the following definition, we define conditions on the entries u_i,j of M. Fix 2 ≤ i < j ≤ n. We say M satisfies condition R_i,j if at least one of the following conditions holds: (i) u_i,1 + u_i,2 + … + u_i,i-1 < u_i,i, (ii) There exists ℓ∈{i+1, i+2, …, j-1} such that u_ℓ,m = 0 for all m ∈ [ℓ - 1] ∖{i}, i.e. u_ℓ is a circuit supported on coordinates i and ℓ, and u_ℓ,i > u_ℓ,ℓ, (iii) ∑_k ∈ [j] u_j, k < 2(u_j,i + u_j,j), If M satisfies condition (s) for some s ∈{ i, ii, iii} but i, j are not clear from context then we write M satisfies condition (i,j)-(s). We now state the main result of this section, which characterises the distance-reducing minimal Markov bases for monomial curves of the first kind. Let M be a minimal Markov basis for A. Then the following are equivalent: (1) M is distance reducing, (2) M is distance reducing for the circuits of A, (3) M satisfies R_i,j for every 2 ≤ i < j ≤ n. By definition it follows that (1) (2). By Theorem <ref> we have that (2) (3). By Theorem <ref> we have (3) (1), which concludes the proof. If M reduces the distance of the circuits of A then M satisfies R_i,j for every 2 ≤ i < j ≤ n. Throughout the proof, we write z_a, b = (0, …, 0, z_a, 0, …, 0, -z_b, 0, …, 0) for the circuit of A supported on a, b ∈ [n], with a < b. Fix 2 ≤ i < j ≤ n. We show that M satisfies R_i,j. In addition, we prove that the circuit z := z_i,j = (0, …, 0, z_i, 0, …, -z_j, 0, …, 0) is distance reduced by some element u_r with r ≤ j. We do this by reverse induction on j. The base case with j = n is trivial. For the induction step, for each m > j and for all i' < m, we assume that there exists some i' ≤ r ≤ m such that u_r reduces the distance of z_i',m. We call this the first induction hypothesis. By assumption, M reduces the distance of z, so there exists u_ℓ that reduces the distance of z. 
Note that if ℓ < i, then u_ℓ is not applicable to z, so we must have ℓ≥ i. If ℓ≤ j then we are done. So let us assume that ℓ > j. Since u_ℓ is applicable to z, it follows that (u_ℓ^+) = {i} or (u_ℓ^+) = {j}. Hence u_ℓ is a circuit u_ℓ = (0, …, 0, u_ℓ,k, 0, …, 0, -u_ℓ, ℓ, 0, …, 0) for some k ∈{i, j}. Since u_ℓ reduces the distance of z, it follows that u_ℓ, k > u_ℓ, ℓ. Claim 1. For each k < m < ℓ, we have (m, ℓ)-(i) holds. We now prove the claim by reverse induction on m. For the base case, let m = ℓ-1 and consider the circuit z_ℓ-1, ℓ = (0, …, 0, z_ℓ-1, -z_ℓ, 0, …, 0). By the first induction hypothesis, we have that either u_ℓ or u_ℓ-1 reduces the distance of z_ℓ-1, ℓ. Since u_ℓ is a circuit with u_ℓ,k > u_ℓ,ℓ, it follows that u_ℓ does not reduce the distance of z_ℓ-1, ℓ. So u_ℓ-1 reduces the distance of z_ℓ-1, ℓ. It follows that u_ℓ-1, 1 + u_ℓ-1, 2 + … u_ℓ-1, ℓ-2 < u_ℓ-1, ℓ-1 hence (ℓ-1, ℓ)-(i) holds. For the induction step, fix k < m < ℓ-1 and assume that (p, ℓ)-(i) holds for every p ∈{m+1, …, ℓ-1 }. We call this the second induction hypothesis. By assumption M reduces the distance of the circuit z_m, ℓ = (0, …, 0, z_m, 0, …, 0, -z_ℓ, 0, …, 0). By the first induction hypothesis, there exists there exists r ∈{m, m+1, …, ℓ} such that u_r reduces the distance of z_m, ℓ. Since u_ℓ is a circuit with u_ℓ,k > u_ℓ,ℓ, it follows that r ≠ℓ. Suppose by contradiction that r > m. Since u_r is applicable to z_m, ℓ, it follows that u_r is a circuit u_r = (0, …, 0, u_r,m, 0, …, 0, -u_r,r, 0, …, 0) with u_r,m > u_r,r. However, by the second induction hypothesis, we have u_r,m < u_r,r, which is a contradiction. So it follows that r = m. Since u_m reduces the distance of z_m, ℓ, it follows that u_m,1 + u_m,2 + … + u_m,m-1 < u_m,m. Hence (m,ℓ)-(i) holds. This concludes the proof of the claim. We proceed by taking cases on k ∈{i, j}. Case 1. Assume k = i. Since k+1 ≤ j, by Claim 1, we have that (j,ℓ)-(i) holds. So u_j,1 + u_j,2 + … + u_j,j-1 < u_j,j. We now show that (i,j)-(iii) holds by contradiction. Assume that u_j,1 + … + u_j,i-1 + u_j,i+1 + … + u_j,j-1≥ u_j,i + u_j,j. So we have 2u_j,i + u_j,j≤ u_j,1 + … + u_j,j-1 < u_j,j. Hence 2u_j,i < 0, which is a contradiction. So we have shown that (i,j)-(iii) holds. Hence M satisfies R_i,j. It follows easily that u_j reduces the distance of z, which proves the induction step. Case 2. Assume k = j. We show the following claim. Claim 2. For each m ∈{i, i+1, …, j-1} either: there exists a circuit u_s for some m < s ≤ j with u_s = (0, …, 0, u_s,m, 0, …, 0, -u_s,s, 0, …, 0) such that if (m, s) ≠ (i,j) then u_s,m > u_s,s; or (m, j)-(i) holds. We proceed by reverse induction on m. For the base case let m = j-1 and consider the circuit z_j-1, ℓ = (0, …, 0, z_j-1, 0, …, 0, -z_ℓ,0, …, 0). By the first inductive hypothesis, we have that z_j-1, ℓ is distance reduced by some element u_r with j-1 ≤ r ≤ℓ. Since u_ℓ is a circuit with u_ℓ, j > u_ℓ, ℓ, it follows that u_ℓ does not reduce the distance of z_j-1, ℓ. Assume by contradiction that j+1 ≤ r < ℓ, then we have that u_r is a circuit u_r = (0, …, 0, u_r,j-1, 0, …, 0, -u_r,r, 0, …, 0) with u_r,j-1 > u_r,r. However, by Claim 1, we have (r,ℓ)-(i) holds, which gives us u_r,j-1 < u_r,r, which is a contradiction. So we have that r ∈{j-1, j}. If r = j, then u_j is a circuit u_j = (0, …, 0, u_j,j-1, -u_j,j, 0, …, 0). Since u_r reduces the distance of z_j-1, ℓ, if i < j-1, then it follows that u_j,j-1 > u_j,j, as desired. Otherwise, if r = j-1, then it immediately follows that (j-1,j)-(i) holds. 
This concludes the proof of the base case. For the induction step, fix i ≤ m < j-1 and assume that the claim holds for all p ∈{m+1, …, j-1 }. Consider the circuit z_m, ℓ = (0, …, 0, z_m, 0, …, 0, -z_ℓ, 0, …, 0). By assumption, M reduces the distance of the circuit z_m, ℓ. By the first inductive hypothesis z_m, ℓ is distance reduced by u_r for some m ≤ r ≤ℓ. Recall that u_ℓ is a circuit with u_ℓ,j > u_ℓ,ℓ, so u_ℓ does not reduce the distance of z_m, ℓ, hence r ≠ℓ. Let P = {p ∈{m+1, …, j-1} : (p, j)-(i) holds} and P^c = {m+1, …, j-1}∖ P be its complement. By Claim 1, we have that (p, ℓ)-(i) holds for all p ∈{j+1, …, ℓ-1}. Suppose that for some p ∈ P ∪{j+1, …, ℓ-1 }, the move u_p is applicable to z_m, ℓ. Then it follows that u_p is a circuit u_p = (0, …, 0, u_p, m, 0, …, 0, -u_p,p, 0, …, 0). Since one of (p, j)-(i) or (p,ℓ)-(i) holds, it follows that u_p,m < u_p,p. So u_p does not reduce the distance of z_m, ℓ. Hence r ≠ p. So we have that r ∈ P^c ∪{m, j}. If r = j, then it follows that u_j is a circuit u_j = (0, …, 0, u_j,m, 0, …, 0, -u_j,j, 0, …, 0). Since u_j reduces the distance of z_m, ℓ, if m > i then we have u_j,m > u_j,j, so the claim holds. If r = m then it follows that (m,j)-(i) holds. Suppose that r ∈ P^c. Then it follows that u_r is a circuit u_r = (0, …, 0, u_r,m, 0, …, 0, -u_r,r, 0, …, 0). Since u_r reduces the distance, we must have that u_r,m > u_r,r, which concludes the proof of the claim. So by Claim 2, with m = i, either: there exists a circuit u_s for some i < s ≤ j with u_s = (0, …, 0, u_s,i, 0, …, 0, -u_s,s, 0, …, 0), such that if s ≠ j then u_s,i > u_s,s; or (i,j)-(i) holds. In the latter case, we have that M satisfies R_i,j and u_i reduces the distance of z and we are done. In the former case, it follows that (i,j)-(ii) holds, so M satisfies R_i,j. In this case, we have that u_s reduces the distance of z. This concludes the proof of the result. Suppose that M satisfies R_i,j for all 2 ≤ i < j ≤ n then M is distance reducing. We prove the result by induction on n. For the base case, take n = 2, and observe that the result holds trivially because (A) is generated by a single element. Fix n > 2. Assume that R_i,j holds for all 2 ≤ i < j ≤ n and fix some z ∈(A). We prove that there exists a move in M that reduces the distance of z. We recall our convention that for each i ∈ [n], the notation (z)_i is the ith coordinate of z and z_i = |(z)_i| is its absolute value. Suppose z_n = 0. Let A' = [ a_1 … a_n-1 ] and M' = {u'_2, …, u'_n-1} where u'_i = (u_i,1, u_i,2, …, u_i,i-1, -u_i,i, 0, …, 0) ∈(A') for each i ∈{2, …, n-1}. By the gluing of A, we have that M' is a Markov basis for A'. Observe that M' satisfies R_i,j for all 2 ≤ i < j ≤ n-1, so by the inductive hypothesis, M' is distance reducing for (A'). In particular, M' reduces the distance of the move z' = ((z)_1, …, (z)_n-1) ∈(A'). Hence, there exists u'_i ∈ M' such that u'_i reduces the distance of z'. It follows immediately that u_i ∈ M reduces the distance of z. So we may assume that z_n ≠ 0. Without loss of generality, we may assume that (z)_n < 0. Since M is a Markov basis and z ∈(A), we may write z = ∑_i = 2^n λ_i u_i for some integers λ_i. Since (z)_n < 0, it follows that λ_n > 0. Consider the set N = {i ∈{2, …, n-1} : λ_i < 0}. We take cases on whether N is the empty set. Case 1. Assume that N ≠∅. Let i = max(N). So for all i < j < n, we have that λ_j ≥ 0 and λ_i < 0. Now consider the ith coordinate of z: (z)_i = (-λ_i) u_i,i + λ_i+1 u_i+1,i + … + λ_n-1 u_n-1,i + λ_n u_n,i. 
Since M satisfies R_i,n, we have that one of the conditions (i,n)-(i), (i,n)-(ii), or (i,n)-(iii) holds. If (i,n)-(i) holds then u_i,1 + … + u_i,i-1 < u_i,i≤ z_i and so u_i reduces the distance of z. If (i,n)-(iii) holds then we have u_n,1 + … + u_n,i-1 + u_n,i+1 + … + u_n,n-1 < u_n,i + u_n,n. Since z_n ≥ u_n,n and z_i ≥ u_n,i, it follows immediately that u_n is applicable to and reduces the distance of z. So it remains to consider the case when condition (i,n)-(ii) holds. Suppose that (i,n)-(ii) holds. Then there exists j ∈{i+1, …, n-1 } such that u_j is a circuit u_j = (0, …, 0, u_j,i, 0, …, 0, -u_j,j, 0, …, 0) with u_j,i > u_j,j. Clearly if u_j,i ≤ z_i, then u_j is applicable to and reduces the distance of z. We now show that if u_j,i > z_i, then z is distance reduced by u_n or a circuit u_j' for some j' > j. Assume that u_j,i > z_i. Since λ_j ≥ 0, it follows that λ_j = 0. We mark this point in the proof with (*). Next, we consider the jth coordinate of z: (z)_j = λ_j+1 u_j+1,j + … + λ_n u_n,j. By assumption, M satisfies R_j,n so one of (j,n)-(i), (j,n)-(ii), or (j,n)-(iii) holds. Since u_j is a circuit with u_j,i > u_j,j, it follows that (j,n)-(i) does not hold. If (j,n)-(iii) holds then it follows that u_n reduces the distance of z. However, if (j,n)-(ii) holds then there exists j' ∈{j+1, …, n-1 } such that the move u_j' is a circuit u_j' = (0, …, 0, u_j', j, 0, …, 0, -u_j',j', 0, …, 0) where u_j',j > u_j',j'. If u_j',j ≤ z_j, then u_j' is applicable and reduces the distance of z and we are done. Otherwise, if u_j',j > z_j, then recall that λ_j'≥ 0, so we deduce that λ_j' = 0. We have now reached a point in the proof with exactly the same premises as (*) except i,j are replaced with the strictly larger values j,j'. Note that j' < n, so if we repeatedly apply this argument, then there are only finitely many steps until either z is reduced by u_n or a circuit u_j'. This concludes the proof for this case. Case 2. Assume that N = ∅. So we have λ_i ≥ 0 for each i ∈{2, …, n-1}. Consider the first coordinate z_1 of z and the coefficient λ_2 ≥ 0. If λ_2 > 0 then it follows that u_2 is applicable to z because u_2,1≤ z_1. By assumption u_2,1 > u_2,2, so u_2 reduces the distance of z and we are done. We proceed to show that z is either distance reduced by u_n or a circuit u_j for some j < n. So let us assume that u_2,1 > z_1, and so λ_2 = 0. Therefore z_2 = λ_3 u_3,2 + … + λ_n u_n,2≥ u_n,2. By our original assumption, M satisfies R_2,n so one of (2,n)-(i), (2,n)-(ii), or (2,n)-(iii) holds. Since u_2,1 > u_2,2, it follows that (2,n)-(i) does not hold. If (2,n)-(iii) holds then we have u_n,1 + u_n,3 + u_n,4 + … + u_n,n-1 < u_n,2 + u_n,n. Since u_n,2 ≤ z_2 and u_n,n ≤ z_n, it follows that u_n reduces the distance of z. So it remains to consider the case when (2,n)-(ii) holds. In this case, there exists i ∈{3, …, n-1} such that u_i is a circuit u_i = (0, u_i,2, 0, …, 0, -u_i,i,0, …, 0) with u_i,2 > u_i,i. If u_i,2≤ z_2, then u_i is applicable and reduces the distance of z. Otherwise if u_i,2 > z_2, then it follows that λ_i = 0. We mark this point in the proof with (*). In this case, we have that M satisfies R_i,n so one of (i,n)-(i), (i,n)-(ii), or (i,n)-(iii) holds. Since u_i is a circuit with u_i,2 > u_i,i, it follows that (i,n)-(i) does not hold. If (i,n)-(iii) holds then u_n is applicable and reduces the distance of z. It remains to consider the case when (i,n)-(ii) holds.
So there exists j ∈{i+1, …, n-1} such that u_j is a circuit u_j = (0, …, 0, u_j,i, 0, …, 0, -u_j,j, 0, …, 0) with u_j,i > u_j,j. If u_j,i≤ z_i, then u_j is applicable and reduces the distance of z. Otherwise if u_j,i > z_i, then it follows that λ_j = 0. We have now reached a point in the proof with the same premises as (*) except that 2, i are replaced with the strictly larger values i, j. Note that j < n, so if we repeatedly apply the same argument, then there are only finitely many steps until z is distance reduced by either u_n or a circuit u_j. This concludes the proof of this case. So, in each case, we have shown that z is distance reduced by some element u_i ∈ M. Since z was arbitrary, M is distance reducing. § COMPLETE INTERSECTION MONOMIAL CURVES In this section we prove that the distance reduction property governs the pattern of gluings for complete intersection monomial curves. Let A ∈^1 × n be a complete intersection. If a minimal Markov basis reduces the distance of the circuits of A, then A admits a gluing of the first kind. So, the distance reduction property for complete intersection monomial curves is characterised by the inequalities in Theorem <ref> and Definition <ref>. In particular, the circuits completely determine whether a minimal Markov basis is distance reducing. Let A ∈^1 × n be a complete intersection and M a minimal Markov basis for A. Then the following are equivalent: * M is distance reducing, * M reduces the distance of the circuits of A. By definition we have (1) ⇒ (2). To prove (2) ⇒ (1), by Theorem <ref> we have that A admits a gluing of the first kind. So by Theorem <ref>, we have that M is distance reducing for A. We develop tools and give a proof of Theorem <ref> in the following sections. In Section <ref>, we show that gluings of the first kind may be detected with a combinatorial game. In Section <ref>, we set up the main notation for the proof and prove Lemma <ref>, which allows us to use an inductive argument in the proof of the theorem. In Section <ref>, we introduce gluing trees and their decorations, which are combinatorial objects that track the sign patterns of the Markov basis elements. Section <ref> concludes with a proof of Theorem <ref>. §.§ A condition for gluings of the first kind In this section we give a combinatorial condition on the sign pattern of a Markov basis M so that the matrix A admits a gluing of the first kind. Let M = (m_i,j) be a Markov basis for a matrix A. The sign matrix of M is the matrix (M) = ((m_i,j)) whose entries are from the set {-, 0, +}. For ease of notation, we write (·) for the zero entries of the sign matrix. Sign Game. Let S ∈{-, 0, +}^k × n be a matrix of signs. A move is given by selecting an entry s_i,j and deleting the ith row and jth column of S. This move is valid if: * the entry s_i,j is the only nonzero entry of the jth column of S and * the entry s_i,j is different from all other entries in the ith row of S. We say S is winnable if there is a sequence of valid moves that deletes every element of S, resulting in the empty matrix. If there is no sequence of valid moves that deletes every entry of S, then we say that S is not winnable. The following shows a sequence of valid moves in the sign game: [ + - · · · ·; · · [+] - · ·; + + · - · ·; · - · · + -; · + · · - -; ], [ + - · · ·; + + [-] · ·; · - · + -; · + · - -; ], [ [+] - · ·; · - + -; · + - -; ], [ - + -; + - -; ]. Each move is given by removing the row and column of the bracketed entry.
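The sign game is easy to play by brute force. The following Python sketch (illustrative only; the encoding +1/-1/0 for +/-/· is ours) enumerates sequences of valid moves and reports whether a sign matrix is winnable; applied to the 5 × 6 matrix above it confirms that no winning sequence exists, and applied to the staircase pattern used in the proposition below it reports winnable.

```python
def valid_moves(S, rows, cols):
    # A move (i, j) is valid if S[i][j] is the only nonzero entry of column j
    # and differs from every other entry of row i within the remaining matrix.
    moves = []
    for j in cols:
        nonzero = [i for i in rows if S[i][j] != 0]
        if len(nonzero) == 1:
            i = nonzero[0]
            if all(S[i][k] != S[i][j] for k in cols if k != j):
                moves.append((i, j))
    return moves

def winnable(S):
    def play(rows, cols):
        if not rows or not cols:   # no entries remain: every element was deleted
            return True
        return any(play(rows - {i}, cols - {j})
                   for (i, j) in valid_moves(S, rows, cols))
    return play(frozenset(range(len(S))), frozenset(range(len(S[0]))))

EXAMPLE = [                 # the 5 x 6 sign matrix displayed above
    [ 1, -1,  0,  0,  0,  0],
    [ 0,  0,  1, -1,  0,  0],
    [ 1,  1,  0, -1,  0,  0],
    [ 0, -1,  0,  0,  1, -1],
    [ 0,  1,  0,  0, -1, -1],
]
STAIRCASE = [               # the staircase shape of a gluing of the first kind
    [ 1, -1,  0,  0],
    [ 1,  1, -1,  0],
    [ 1,  1,  1, -1],
]
print(winnable(EXAMPLE), winnable(STAIRCASE))   # False True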
The final matrix in the sequence admits no valid moves so none of these matrices is winnable. Let A be a complete intersection. Let M be a minimal Markov basis for A. Then A admits a gluing of the first kind if and only if (M) is winnable. The matrix A admits a gluing of the first kind if and only if, up a permutation of the rows and columns of M and sign of the rows of M, the matrix (M) = (s_i,j) is given by (M) = [ + - · · ⋯ ·; ⊕ ⊕ - · ⋯ ·; ⊕ ⊕ ⊕ - ⋯ ·; ⋮ ⋮ ⋮ ⋱ ⋱ ⋮; ⊕ ⊕ ⊕ ⋯ ⊕ - ]. The matrix (M) is winnable since the sequence of moves: s_n-1,n, s_n-2, n-1, …, s_2,3, s_1,2 deletes the entire matrix. Conversely, every winnable (n-1) × n matrix S = (s_i,j) can be transformed into the above matrix with a permutation of the rows and columns of S and negating some of the rows. To see this, suppose that a winning sequence of moves is given by s_i_1,j_1, s_i_2, j_2, …, s_i_n-1, j_n-1. Consider the first move s_i_1, j_1. Without loss of generality, we may assume that s_i_1, j_1 is (-) because we may change the sign of the row. We move that entry to the bottom-right corner of the matrix to form a new matrix M' with sign matrix (M') = (s'_i,j). By the definition of a winnable matrix, the entries above s'_n-1, n = s_i_1, j_1 in (M') are all zero and the entries to the left are all (+) or (·). We continue in this way for all subsequent moves and results in matrix as (<ref>). Therefore A admits a gluing of the first kind. To prove Theorem <ref>, the first step is to apply Lemma <ref>, which shows that there exists a distinguished circuit z of A such that the only element of M that reduces the distance of z is w. In particular, since w is applicable to this circuit, we deduce that many of the entries of w are zero. We proceed to examine the circuits not distance reduced by w to find further zeros in the coordinates of the Markov basis. We keep track of the zeros to show that (M) is winnable. Let A ∈^1 × 5 be a matrix. Suppose that A is a complete intersection that admits the gluing A = (((a_1 ∘ a_2) ∘ a_3) ∘ (a_4 ∘ a_5)). Without loss of generality we assume a_1 < a_2 and a_4 < a_5. Suppose that M is a minimal Markov basis of the form M = {u_2, u_3, u_4, u_5} = [ u_2,1 -u_2,2 0 0 0; u_3,1 u_3,2 -u_3,3 0 0; 0 0 0 u_4,4 - u_4,5; u_5,1 u_5,2 u_5,3 -u_5,4 -u_5,5 ] where u_i,j≥ 0 for all i, j. By Lemma <ref>, we have that u_5 reduces the distance of one of the circuits z_2,5 or z_3,5 where z_i,j is the circuit supported on {i, j}. By considering the applicability of u_5 to these circuits, we identify three cases for u_5: (u_5^-) = {5}, (u_5^+) = {2}, and (u_5^+) = {3}. Case 1. u_5 = (u_5,1, u_5,2, u_5,3, 0, -u_5,5). In this case we have (M) = [ + - · · ·; ⊕ ⊕ - · ·; · · · + -; ⊕ ⊕ ⊕ · - ], which is winnable and so A admits a gluing of the first kind. For instance, a winning sequence of moves is given by the indices: (3, 4), (4, 5), (2, 3), (1, 2). Case 2. u_5 = (0, u_5,2, 0, -u_5,4, -u_5,5). In this case we have (M) = [ + - · · ·; ⊕ ⊕ - · ·; · · · + -; · + · ⊖ ⊖ ], which is winnable and so A admits a gluing of the first kind. For instance, a winning sequence of moves is given by the indices: (2, 3), (1, 1), (4, 2), (3, 4). Case 3. u_5 = (0, 0, u_5,3, -u_5,4, -u_5,5). In this case we have that u_5 reduces the distance of the circuit z_3,5 = (0, 0, z_3, 0, -z_5) and u_5 is applicable to the positive part of z. Note that if u_5 also reduces the distance of z_2,5 then by the above cases, we have that A admits a gluing of the first kind. So, let us assume that u_5 does not reduce the distance of z_2,5. 
So we must have that u_3 reduces the distance of z_2,5. In particular, we have that u_3 is applicable to z_2,5 and so u_3 = (0, u_3,2, -u_3,3, 0, 0). Therefore the sign matrix of M is given by (M) = [ + - · · ·; · + - · ·; · · · + -; · · + ⊖ ⊖ ], which is winnable and so A admits a gluing of the first kind. In this case, a winning sequence of moves is given by the indices: (1,1), (2,2), (4,3), (3,4). §.§ Existence of distinguished circuits In this section we set up the notation for the proof of Theorem <ref>. In Lemma <ref> we prove the key result that allows us to apply induction. Setup. Throughout this section and the next, we consider a pair of matrices A = (a_1, …, a_n) and B = (b_1, …, b_m) with n, m ≥ 1 that are complete intersections. We assume there is a gluing C = A ∘ B. Note that all complete intersection monomial curves arise in this way. We assume that M is a Markov basis for C that has the form M = {u_1, …, u_n-1, v_1, …, v_m-1, w } where: * for each i ∈ [n-1] we have (u_i) ⊆{1,…,n}, * the set of projections of u_i onto the coordinates {1, …, n} is a Markov basis for A, * for each j ∈ [m-1] we have (v_j) ⊆{n+1, …, n+m }, * the set of projections of v_j onto the coordinates {n+1, …, n+m } is a Markov basis for B, * (w^+) ⊆{1, …, n } and (w^-) ⊆{n+1, …, n+m}. We write M_A = {u_1, …, u_n-1} and M_B = {v_1, …, v_m-1}. For each pair of indices i, j ∈{1, 2, …, n+m } with i < j, we denote by z_i,j the circuit supported on i and j given by z_i,j = (0, …, 0, (z_i,j)_i, 0, …, 0, -(z_i,j)_j, 0, …, 0) for some positive integers (z_i,j)_i and (z_i,j)_j. For ease of notation, we define z_j,i to be equal to z_i,j. Our proof of Theorem <ref> is by induction on n+m. To apply the inductive hypothesis, we must show that either: the set M_B ∪{w} does not reduce the distance of any circuit z_i,j with i, j ∈ [n], or the set M_A ∪{w} does not reduce the distance of any circuit z_n+i, n+j with i,j ∈ [m]. We may then apply induction to either A with Markov basis given by a projection of M_A or B with Markov basis given by a projection M_B, respectively. In the following lemmas, we prove that one of these cases indeed holds. We now show the existence of distinguished circuits of C that are distance reduced by a single element of M. Suppose M reduces the distance of the circuits of C. Then at least one of the following holds: * There exists r ∈ [n] such that for each j ∈ [m] the circuit z_r, n+j is distance reduced by exactly one element u_j ∈ M. Moreover, this unique element lies in the set M_B ∪{w}. In particular, there is a bijection u_j ↔ z_r, n+j between the set M_B ∪{w} and the circuits {z_r, n+j : j ∈ [m]} where u_j is the only element of M to distance reduce the circuit z_r, n+j. * There exists c ∈ [m] such that for each i ∈ [n] the circuit z_i, n+c is distance reduced by exactly one element u_i ∈ M. Moreover, this unique element lies in the set M_A ∪{w}. In particular, there is a bijection u_i ↔ z_i, n+c between the set M_A ∪{w} and the circuits {z_i, n+c : i ∈ [n]} where u_i is the only element of M to distance reduce the circuit z_i, n+c. Let T be a rectangular grid with n rows and m columns. We fill the (i,j) entry of T with the set T_i,j = {x ∈ M : x reduces the distance of z_i, n+j}. Claim. Each element u ∈ M_A appears in at most one row of T. Suppose that an element u ∈ M_A reduces the distance of some circuit z_i,n+j for some i ∈ [n] and j ∈ [m]. Since (u) ⊆ [n] and u is applicable to the circuit z_i,n+j, it follows that (u^+) = {i} or (u^-) = {i}.
Since we may freely replace u with -u in M, we may assume without loss of generality that (u^+) = {i}. Since u reduces the distance of z_i,n+j, we have |z_i,n+j| > |z_i,n+j - u| so (z_i,n+j)_i + (z_i,n+j)_j = |z_i,n+j| > |z_i,n+j - u| = (z_i,n+j)_i - (u)_i + |u^-| + (z_i,n+j)_j. Hence (u)_i > |u^-|. Assume by contradiction that u reduces the distance of another circuit z_i', n+j' with i' ∈ [n], i' ≠ i, and j' ∈ [m]. By a similar argument to the above, it follows that (u^-) = {i'} and (u)_i' > |u^+|. However, we have |u^+| = (u)_i > |u^-| = (u)_i' > |u^+|, which is a contradiction. This finishes the proof of the claim. Since |M_A| = n-1, it follows that there is a row of T that does not contain any element of M_A. Let r ∈ [n] denote the index of this row, i.e., for each j ∈ [m] we have T_r, j∩ M_A = ∅. The above claim is symmetrical by switching M_A with M_B and switching rows of T with columns of T. So, each element v ∈ M_B appears in at most one column of T. Since |M_B| = m-1, it follows that there is a column of T that does not contain any element of M_B. Let c ∈ [m] denote the index of this column. Since M reduces the distance of the circuits of C, there is an element of M that reduces the distance of the circuit z_r,n+c. However, by the above, all elements of M_A ∪ M_B do not reduce the distance of z_r, n+c, so it follows that w is the unique element of M that reduces the distance of z_r, n+c. To finish the proof of the result, we show that the entries of T containing w are contained in a single row or column. Assume not. Then w reduces the distance of a pair of circuits z_i, n+j and z_i', n+j' with i, i' ∈ [n], j, j' ∈ [m], i ≠ i', and j ≠ j'. Since w is applicable to the circuits z_i, n+j and z_i', n+j', it straightforward to show that w is a circuit and (w) ⊆{i, i', j, j'}. Moreover there are two cases: either w is supported on i and n+j', or w is supported on i' and n+j. These cases are symmetrical so we may assume that w is supported on i and n+j', i.e., w = (0, …, 0, w_i, 0, …, 0, -w_n+j', 0, …, 0). Since w reduces the distance of z_i,n+j it follows that w_i > w_n+j'. Since w reduces the distance of z_i', n+j' it follows that w_n+j' > w_i > w_n+j', which is a contradiction. This concludes the proof. If M reduces the distance of the circuits of C, then one of the following holds: (a) |(w^-)| = 1, condition (1) from Lemma <ref>, and for each i, j ∈ [n] the move w does not reduce the distance of the circuit z_i,j, (b) |(w^+)| = 1, condition (2) from Lemma <ref>, and for each i, j ∈ [m] the move w does not reduce the distance of the circuit z_n+i, n+j. By Lemma <ref> we have that w is the unique element of M that reduces the circuit z_r, n+c for some r ∈ [n] and c ∈ [m]. In particular, w is applicable to z_r,n+c so either (w^+) = {r} or (w^-) = {n+c}. We take cases based on the support of w. Case 1. Let (w^+) = {r} and (w^-) = {c}. We take further cases on a_r and b_c. If a_r = b_c, then the only circuit z_i,n+j with i ∈ [n] and j ∈ [m] that is reduced by w is exactly z_r, n+c. Therefore both conditions (1) and (2) hold in Lemma <ref>. In this case we observe that w does not reduce the distance of any other circuit of C. Hence both conditions (a) and (b) hold. If a_r > b_c, then w reduces the distance of all circuits z_r, n+j with j ∈ [m] so condition (1) of Lemma <ref> does not hold, hence condition (2) holds. Moreover, the move w does not reduce the distance of any circuit z_n+i,n+j with i, j ∈ [m], hence (b) holds. 
If a_r < b_c, then w reduces the distance of all circuits z_i, n+c with i ∈ [n] so condition (2) of Lemma <ref> does not hold, hence condition (1) holds. Since w does not reduce the distance of any circuit z_i,j with i, j ∈ [n], we have that (a) holds. Case 2. Let (w^+) = {r} and (w^-) ⊋{c}. In this case w is not applicable to any circuit z_n+i,n+j with i,j ∈ [m]. In particular, the move w does not reduce the distance of these circuits. To show that (b) holds, it suffices to prove that condition (2) from Lemma <ref> holds. If not then w reduces the distance of a circuit z_i, n+c for some i ∈ [n] with i ≠ r. However this is an immediate contradiction because w is not applicable to z_i, n+c since i ≠ r and |(w^-)| ≥ 2. So we have shown (b) holds. Case 3. Let (w^-) = {c} and (w^+) ⊋{r}. In this case w is not applicable to any circuit z_i,j with i,j ∈ [n]. In particular, the move w does not reduce the distance of these circuits. To show that (a) holds, it suffices to prove that condition (1) from Lemma <ref> holds. If not then w reduces the distance of a circuit z_r, n+j for some j ∈ [m] with j ≠ c. However, this is an immediate contradiction because w is not applicable to z_r, n+j since j ≠ c and |(w^+)| ≥ 2. So we have shown (a) holds. In each case we have shown that either (a) or (b) holds. This concludes the proof. §.§ Gluing trees Recall that complete intersection monomial curves are completely glued. This means any such curve C = A ∘ B ∈^1 × (n+m) is the gluing of a complete intersection monomial curves A ∈^1 × 1 and B ∈^1 × m. Similarly, both A and B admit gluings and so on until each constituent matrix has size one. We record the data of the sequence of gluing with a rooted binary tree _C. The leaves of _C are labelled with the entries of C, which we identify with the set [n+m]. The graph structure is defined inductively. If n = 1 and A = (a_1), then _A is the graph with a single vertex a_1, called the root of _A, and no edges. Suppose _A and _B are the rooted binary trees associated to A and B with roots u and v respectively. Then _C is defined to be the graph obtained from the disjoint union _A ⊔_B by adding a vertex w adjacent to u and v. The vertex w is defined to be the root of _C. We call _C the gluing tree of C. Gluing tree notation. There is a canonical embedding of _C in ^2. The embedding is defined so that: the vertex i ∈ [n+m] lies at position (i,0) ∈^2; each edge is a line segment with gradient 1 or -1; and each parent has a higher y-coordinate than its children. For example, see the gluing tree in Figure <ref>. Thus, given a non-leaf vertex v of _C, there are two edges incident to v below it: one to the left and one to the right. The set of leaves of _C connected to v by a path whose first edge is the left one is denoted L(v). Similarly, the set of leaves of _C connected to v be a path whose first edge is the right one is denoted R(v). We call the set of vertices L(v) ∪ R(v) the leaves of _C below v. Given a Markov basis M for C, there is a natural bijection between the elements of M and non-leaf vertices of _C. This bijection associates an element u ∈ M with a non-leaf vertex v so that one of the following holds: ((u^+) ⊆ L(v) and (u^-) ⊆ R(v) ) or ( (u^-) ⊆ L(v) and (u^+) ⊆ R(v) ). Let A = [ a_1 a_2 a_3 ] and B = [ b_1 b_2 ]. Suppose there is a gluing C = A ∘ B. Then the gluing tree _C is shown in Figure <ref>. Given a Markov basis M, the element w is the root of _C and corresponds to the final row in the sign matrix in the diagram. 
The non-leaf vertices of the gluing tree are labelled with the corresponding elements of the Markov basis. Decorated partial gluing trees. Let M be a Markov basis for C and suppose that M reduces the distance of the circuits of C. By swapping A and B in the gluing of C, we may assume that condition (b) of Lemma <ref> and <ref> holds. The partial gluing tree _C(M) is the induced subgraph of _C obtained by deleting the leaves b_1, …, b_m and vertices v that correspond to elements of M_B. Explicitly, these are the non-leaf vertices v of _C such that L(v) ∪ R(v) ⊆{b_1, …, b_m}. So, the partial gluing tree _C(M) has n leaves and n non-leaves. Its leaves are labelled with the elements [n] corresponding to A and its non-leaves are labelled with the elements of M_A ∪{w}. By condition (b) of Lemma <ref>, there exists c ∈ [m] and a bijection between [n] and M_A ∪{w} given by i ↦ u_i, with the property that u_i is the only element of M that reduces the distance of the circuit z_i, n+c. Fix u ∈ M_A ∪{w} and let i ∈ [n] be the corresponding element in [n] as above. We define the decoration D_u to be the following pair of subgraphs D_u = (s_u, d_u) of _C(M). The subgraph s_u is the unique path from the non-leaf vertex u to the leaf i. Let ℓ be edge incident to u along this path. The subgraph d_u is the induced subgraph of _C(M) consisting of all vertices below u whose path to u does not involve ℓ. Note that d_w is the empty graph. We define the decorated partial gluing tree _C(M) to be the pair (_C(M), ), where = {D_u : u ∈ M_A ∪{w}} is the set of decorations. We note that the operation of taking a graph minor of _C(M) naturally extends to any subgraph, hence taking graph minors naturally extends to an operation on _C(M). Consider the gluing tree in Example <ref>. Suppose that condition (b) holds for some Markov basis M of C. Let us also assume that the bijection between {1,2,3} and M_A ∪{w} = {u', u, w} is given by: w ↔ 2, u ↔ 3, and u' ↔ 1. The decorated partial gluing tree _C(M) is shown in Figure <ref>. Each decoration encodes information about the sign patterns in the Markov basis. For example, the path s_w indicates that (w^+) = {2} and (w^-) ⊆{n+1, …, n+m}, which follows from Lemma <ref>. Similarly, the subgraph d_u encodes the fact that (u^-) ⊆{1, 2}. An abstract decorated partial gluing tree (or abstract d-tree) is the combinatorial data of a decorated partial gluing tree. Explicitly, an abstract d-tree is a pair (, ) where is rooted tree such that: the root has degree 1 and every non-root non-leaf vertex has degree 3. Following the notation for gluing trees, we assume that is embedded in ^2. We write V() for the set of non-leaf vertices of , which includes the root, and ℓ() for the set of leaves of . The set is the set of decorations D_u, with one for each u ∈ V(). A decoration D_u = (s_u, d_u) is a pair of subgraphs s_u and d_u. The subgraph s_u is a path in connecting u to a leaf ℓ_u. Let ℓ be the first edge of this path. The subgraph d_u is the induced subgraph of consisting of all vertices v below u such that the path from v to u avoids ℓ. In addition, we require that the set {ℓ_u : u ∈ V()} = ℓ(). Figure <ref> shows the decoration of an abstract d-tree. Let (, ) be an abstract d-tree. Then there exists a non-leaf vertex u ∈ V() such that ℓ_u and u are adjacent and ℓ_u is not contained in the subgraph d_v for each v ∈ V(). We prove the result for all abstract d-trees (, ) by induction on n the number of leaves. If n = 1, then the graph is the graph with a root w and a single leaf ℓ. 
The path s_w is the edge that connects w and ℓ and d_w is the empty graph. Thus, the result holds. Suppose that n ≥ 2. Then there exists a non-leaf vertex v that is adjacent to two leaves: ℓ_v and ℓ. If ℓ_v is not contained in d_u for all u then we are done. Otherwise assume that ℓ_v is contained in d_u for some u ∈ V(). We construct a new abstract d-tree by pruning the vertex v and the leaf ℓ_v, defined by the following diagram. [Figure: pruning of the leaf ℓ_v and its parent vertex v.] This operation produces an abstract d-tree with one fewer leaf. By induction, this abstract d-tree has a vertex v' that is adjacent to leaf ℓ' such that ℓ' does not belong to d_u for all u. Observe that v' and ℓ' are adjacent in the original abstract d-tree and ℓ' does not belong to d_u for all u in the original abstract d-tree. So we are done. Let (, ) be an abstract d-tree with n leaves. There exists an ordering (u_1, u_2, …, u_n) of V() such that, for each i ∈ [n], the leaf ℓ_u_i is not contained in d_u_j for all j ≥ i. We proceed by induction on n. For n = 1 the result is straightforward. Assume n ≥ 2. By Lemma <ref>, there is a leaf ℓ_1 that does not belong to d_u for all u ∈ V(). In particular, the non-leaf vertex u_1 ∈ V() such that ℓ_u_1 = ℓ_1 is adjacent to ℓ_1, so we may prune (, ) as follows. [Figure: pruning of the leaf ℓ_1 and the vertex u_1.] The resulting abstract d-tree has one fewer leaf so by induction there exists a sequence (u_2, …, u_n) such that ℓ_u_i is not contained in d_u_j for all j ≥ i. So the sequence (u_1, u_2, …, u_n) satisfies the conditions of the result and we are done. We now use this description together with the decorated partial gluing tree _C(M) to prove our main result. We prove that if M is a Markov basis for the matrix C = A ∘ B that reduces the distance of the circuits of C, then C admits a gluing of the first kind. The proof is by induction on m+n. If m+n = 2, then the result is clear. Assume that m+n ≥ 3. First we apply Lemma <ref> where we may assume that (b) holds, since if (a) holds, then we may swap A and B. In particular, w does not reduce the distance of the circuits supported on {n+1, …, n+m }. Also observe that none of the moves in M_A reduce the distance of these circuits as they are not applicable. So, we may apply the induction hypothesis to the matrix B and the Markov basis for B obtained by projecting M_B to the last m coordinates. So the matrix (M_B) is winnable, so (M_B) has a winning sequence that we denote by s_B. We now extend s_B to a winning sequence for M. Consider the decorated partial gluing tree _C(M). By Lemma <ref>, there is a sequence of vertices (u_1, …, u_n) such that for each i ∈ [n] the leaf ℓ_u_i is not contained in d_u_j for all j ∈ [n] with j ≥ i. We now interpret this sequence in terms of the sign matrix of M. By the construction of _C(M), for each i ∈ [n] the move u_i is the unique element of M that reduces the distance of the circuit z that is supported on {ℓ_u_i, n+c } for some c ∈ [m]. In particular, u_i is applicable to z so it follows that (u_i^+) = {ℓ_u_i} or (u_i^-) = {ℓ_u_i}. Without loss of generality assume (u_i^+) = {ℓ_u_i} and ℓ_u_i∈ L(u_i), otherwise we may consider -u_i or swap the two subgraphs below u_i in the drawing of _C(M). By construction, we have that (u_i^-) ⊆ R(u_i) = ℓ(d_u_i). Since ℓ_u_1 is not contained in any subgraph d_v for all v ∈ V(_C(M)), it follows that the column of M indexed by ℓ_u_1 has one nonzero entry in the row indexed by u_1. By the assumptions above, the sign of this entry is - and is unique in the row. Therefore (u_1, ℓ_u_1) gives a valid move for the matrix M.
After removing row u_1 and column ℓ_u_1 from M, it similarly follows that (u_2, ℓ_u_2) is a valid move. We continue in this way until we are left with the matrix (M_B) restricted to the last m columns. We may then apply the moves s_B to delete every entry from the matrix. Thus we have shown that the matrix M is winnable by the sequence of moves (u_1, ℓ_u_1), (u_2, ℓ_u_2), … , (u_n, ℓ_u_n), s_B. Therefore, by Proposition <ref>, we have that C admits a gluing of the first kind. § MONOMIAL CURVES IN A4 In this section we explore monomial curves in 𝔸^4. Let A = [ a_1 a_2 a_3 a_4 ] be a matrix of positive integers. Let M be a minimal Markov basis for A. If A is a complete intersection, then by Corollary <ref> we have that M is distance reducing if and only if M reduces the distance of the circuits of A. The next example shows that if A is not a complete intersection then the same result does not necessarily hold. Consider the matrix A = [ 14 21 23 29 ] and the minimal Markov basis M = [ 1 1 1 -2; 3 -2 0 0; 3 1 -4 1; 7 0 -3 -1 ]. It is not difficult to show that M distance reduces the circuits of A. However, M cannot reduce the distance of the element (1, 4, -3, -1) ∈(A). We note that the matrix A does not admit any gluing. In this section we give a complete description of the distance reducing Markov bases for monomial curves in 𝔸^4 that admit a gluing. In particular, we study the glued non-complete intersections and show that the circuits characterise the distance reducing property. First we give an explicit description of the complete intersection cases. §.§ Complete intersections For monomial curves in 𝔸^4, there are two types of complete intersections: (((a_1 ∘ a_2) ∘ a_3) ∘ a_4) and ((a_1 ∘ a_2) ∘ (a_3 ∘ a_4)). In each case, the minimal Markov bases can be characterised by Theorem <ref> and Theorem <ref>. Suppose that A = [ a_1 a_2 a_3 a_4 ] is a complete intersection with type (((a_1 ∘ a_2) ∘ a_3) ∘ a_4). Suppose that M is a minimal Markov basis of A consisting of b = (b_1, -b_2, 0, 0), c = (c_1, c_2, -c_3, 0), d = (d_1, d_2, d_3, -d_4) where b_1, b_2, c_3, d_4 > 0 and c_1, c_2, d_1, d_2, d_3 ≥ 0. Without loss of generality, we assume b_1 > b_2. Then M is distance reducing if and only if each of the following conditions is satisfied: (a) c_1 < c_2 + c_3, (b) (c_1 = 0 and c_3 < c_2) or d_1 + d_3 < d_2 + d_4, (c) c_1 + c_2 < c_3 or d_1 + d_2 < d_3 + d_4. Since A admits a gluing of the first kind, by Theorem <ref>, we have that M is distance reducing if and only if M satisfies conditions R_2,3, R_2,4, R_3,4. Let us consider condition R_2,3. Since b_1 > b_2, it follows that (2,3)-(i) does not hold. By definition the condition (2,3)-(ii) does not hold. So M satisfies R_2,3 if and only if M satisfies condition (2,3)-(iii), which is equivalent to c_1 < c_2 + c_3. Similarly, it follows that M satisfies R_2,4 if and only if condition (b) above holds, and M satisfies R_3,4 if and only if condition (c) above holds. Consider the matrix A = [ 7 8 22 23 ]. This matrix admits a gluing of type (((7 ∘_56 8) ∘_22 22) ∘_23 23) and has a unique minimal Markov basis b = (8, -7, 0, 0), c = (2, 1, -1, 0), d = (1, 2, 0, -1). This Markov basis does not satisfy condition (a) in the above corollary, hence it fails to reduce the distance of the circuit (0, 11, -4, 0). Suppose that A = [ a_1 a_2 a_3 a_4 ] is a complete intersection with type ((a_1 ∘ a_2) ∘ (a_3 ∘ a_4)).
Suppose that M is a minimal Markov basis of A consisting of b = (b_1, -b_2, 0, 0), c = (0, 0, c_3, -c_4), d = (d_1, d_2, -d_3, -d_4) where b_1, b_2, c_3, c_4 > 0 and d_1, d_2, d_3, d_4 ≥ 0. Without loss of generality, we assume that b_1 > b_2 and c_3 > c_4. Then M is distance reducing if and only if the following conditions hold: (i) At least one of d_1 and d_3 is zero, (ii) d_1 + d_3 < d_2 + d_4. If d_1 = 0, then A admits a gluing of the first kind given by (((a_1 ∘ a_2)∘ a_4)∘ a_3). Otherwise if d_3 = 0 then A admits a gluing of the first kind given by (((a_3 ∘ a_4)∘ a_2)∘ a_1). Consider the matrix A = [ 90 126 350 525 ]. This matrix admits a gluing of type ((90 ∘_630 126) ∘_3150 (350 ∘_1050 525)) and a minimal Markov basis b = ( 7, -5, 0, 0), c = ( 0, 0, 3, -2), d = (14, 15, -3, -4). Since d_1 ≠ 0 and d_3 ≠ 0, it follows that M does not reduce the distance of the circuit (0, 25, 0, -6). §.§ Glued non-complete intersections Throughout this section, we consider a matrix A that admits a gluing but is not a complete intersection. The main result of this section is Theorem <ref>, which shows that a minimal Markov basis for A is distance reducing if and only if it reduces the distance of the circuits of A. The gluing is given by A = A' ∘ a_4 where A' = [ a_1 a_2 a_3 ] is not a complete intersection. By Theorem <ref>, we have that A' has a unique minimal Markov basis, which gives us 3 indispensable elements for any Markov basis of A: g_1 = (-c_1, v_12, v_13, 0), g_2 = (v_21, -c_2, v_23, 0), g_3 = (v_31, v_32, -c_3, 0). By <cit.> or, more generally, <cit.>, a gluing introduces exactly one new generator. In this case, we have that any Markov basis of A contains exactly one more element h = (h_1, h_2, h_3, -h_4). We denote by M the Markov basis {g_1, g_2, g_3, h}. We denote by M' the unique minimal Markov basis {g_1', g_2', g_3'} of A'. This gives us the following. Let A = [ a_1 a_2 a_3 a_4 ] be a matrix and A' = [ a_1 a_2 a_3 ] a submatrix of A. Suppose that A' is not a complete intersection and A' has unique minimal Markov basis M' = { g_1' = (-c_1, v_12, v_13), g_2' = (v_21, -c_2, v_23), g_3' = (v_31, v_32, -c_3)} as in Theorem <ref>. If A admits a gluing, i.e., there exists x ∈ A' ∩ a_4 such that x = A' ∩ a_4, then any minimal Markov basis M of A is given by M = { g_1 = (g_1', 0), g_2 = (g_2', 0), g_3 = (g_3', 0), h = (h_1, h_2, h_3, -h_4)}⊆(A), for some h_1, h_2, h_3 ≥ 0 and h_4 > 0. With the notation as above, we have the following. Suppose that M reduces the distance of the circuits of A. Then M' reduces the distance of the circuits of A'. Assume by contradiction that M' does not reduce the distance of all circuits of A'. Note that the result is symmetrical with respect to the permutation of the first three columns of A. So without loss of generality, we may assume that the circuit z' = (z_1, -z_2, 0) is not distance reduced by M'. Since M reduces the distance of the circuit z = (z_1, -z_2, 0, 0), and none of g_1, g_2, g_3 reduces the distance of z, it follows that h reduces the distance of z. In particular, h is applicable to z so we must have that either: h_1 ≤ z_1 and h_2 = h_3 = 0; or h_2 ≤ z_2 and h_1 = h_3 = 0. Without loss of generality, let us assume that h_1 ≤ z_1 and h_2 = h_3 = 0. Since h reduces the distance of z, we must have that |z| > |z-h|. So z_1 + z_2 = |z| > |z-h| = |(z_1 - h_1, -z_2, 0, h_4)| = z_1 - h_1 + z_2 + h_4. Hence h_1 > h_4. It is easy to see that h does not reduce the distance of the circuit y = (0, y_2, 0, -y_4).
So, by assumption, the distance of y is reduced by at least one of g_1, g_2, or g_3. However, the moves g_1 and g_3 are not applicable to y. So the distance of y must be reduced by g_2. So we have y_2 ≥ c_2 and |y| > |y + g_2|, hence y_2 + y_4 = |y| > |y+g_2| = |(v_21, y_2 - c_2, v_23, -y_4)| = v_21 + y_2 - c_2 + v_23 + y_4. It follows that c_2 > v_21 + v_23. Let us now consider the move g_2' = (v_21, -c_2, v_23) ∈ M'. By Theorem <ref>, we have that g_2' is a minimal element of type 2. Therefore g_2' is applicable to the circuit z' = (z_1, -z_2, 0), so c_2 ≤ z_2. Since c_2 > v_21 + v_23, it follows that |z'-g_2'| = |(z_1 - v_21, -z_2 + c_2, -v_23)| ≤ z_1 + v_21 + z_2 - c_2 + v_23 < |z'|. So g_2' reduces the distance of the circuit z', which contradicts our original assumption. This completes the proof. Assume a_1 < a_2 < a_3. A minimal Markov basis M is distance reducing if and only if all of the following conditions hold: (i) v_21 < c_2 + v_23 or v_31 < v_32 + c_3, (ii) v_21 + v_23 < c_2 or h_1 + h_3 < h_2 + h_4, (iii) h_1 + h_2 < h_3 + h_4. Moreover, * If M reduces the distance of the circuit (0, z_2, -z_3, 0) then condition (i) holds, * If M reduces the distance of the circuit (0, z_2, 0, -z_4) then condition (ii) holds, * If M reduces the distance of the circuit (0, 0, z_3, -z_4) then condition (iii) holds. In particular, M is distance reducing if and only if M reduces the distance of the three circuits listed above. On the one hand, if M is distance reducing then by Proposition <ref>, we have that conditions (i), (ii), and (iii) hold. Conversely, if conditions (i), (ii), and (iii) hold then by Proposition <ref>, we have that M is distance reducing. We immediately obtain the following corollary from Theorem <ref>. Let M be a minimal Markov basis. Then M is distance reducing if and only if M is distance reducing for the circuits of A. We now prove Propositions <ref> and <ref> that appear in the proof of Theorem <ref>. Assume the setup of Theorem <ref>. If M is distance reducing then conditions (i), (ii), and (iii) hold. Condition (i). Suppose that M is distance reducing. Then M reduces the distance of the circuits of A. So by Proposition <ref> we have that M' reduces the distance of the circuits of A'. By Theorem <ref> it follows that condition (i) holds. Condition (ii). Consider the circuit z = (0, z_2, 0, -z_4) of A. By assumption M reduces the distance of z. Note that neither g_1 nor g_3 is applicable to z, so either g_2 or h reduces the distance of z. In both cases, we will show that condition (ii) holds. If g_2 reduces the distance, then we have z_2 ≥ c_2 and |z| > |z + g_2|, hence z_2 + z_4 = |z| > |z+g_2| = |(v_21, z_2 - c_2, v_23, -z_4)| = v_21 + z_2 - c_2 + v_23 + z_4. Therefore, we have v_21 + v_23 < c_2 and so condition (ii) holds. On the other hand, suppose that h reduces the distance of z. Since the 4th coordinates of g_1, g_2, g_3 are zero, it follows that z_4 = α h_4 for some positive integer α. Consider z - α h = (-α h_1, z_2 - α h_2, -α h_3, 0) ∈(A). Since -α h_1, -α h_3 ≤ 0 and (A) ∩^4 = {0}, we have that z_2 - α h_2 ≥ 0. In particular, we have z_2 ≥ h_2. Since h is applicable to z we have z_4 ≥ h_4. Since h reduces the distance of z we have |z| > |z-h|. It follows that z_2 + z_4 = |z| > |z-h| = |(-h_1, z_2-h_2, -h_3, -z_4 + h_4)| = h_1 + z_2 - h_2 + h_3 + z_4 - h_4, and so we have h_1 + h_3 < h_2 + h_4. Hence, condition (ii) holds. Condition (iii). Consider the circuit z = (0, 0, z_3, -z_4). By assumption M reduces the distance of z.
However, the moves g_1 and g_2 are not applicable to z. So either g_3 or h reduces the distance of z. First, we show that g_3 does not reduce the distance of z. Since a_1 < a_2 < a_3 and g_3 ∈(A), we have c_3 = a_1/a_3 v_31 + a_2/a_3 v_32 < v_31 + v_32. So, if g_3 is applicable to z then z_3 ≥ c_3, hence we have |z+g_3| = |(v_31, v_32, z_3 - c_3, -z_4)| = v_31 + v_32 + z_3 - c_3 + z_4 > |z|. Therefore g_3 does not reduce the distance of z. So we have deduced that h reduces the distance of z. By a similar argument to the condition (ii) case, we will show that z_3 ≥ h_3. Since the 4th coordinates of g_1, g_2, g_3 are zero, it follows that z_4 = α h_4 for some positive integer α. Consider z - α h = (-α h_1, - α h_2, z_3 -α h_3, 0) ∈(A). Since -α h_1, -α h_2 ≤ 0 and (A) ∩^4 = {0}, we have that z_3 - α h_3 ≥ 0. In particular, we have z_3 ≥ h_3. Since h is applicable to z we have z_4 ≥ h_4. Since h reduces the distance of z we have |z| > |z-h|. It follows that z_3 + z_4 = |z| > |z-h| = |(-h_1, -h_2, z_3-h_3, -z_4 + h_4)| = h_1 + h_2 + z_3-h_3 + z_4 - h_4, and so we have h_1 + h_2 < h_3 + h_4. Hence, condition (iii) holds. Assume the setup of Theorem <ref>. If conditions (i), (ii), (iii) hold, then M is distance reducing. Suppose that conditions (i), (ii), (iii) hold and assume by contradiction that there exists z ∈(A) whose distance cannot be reduced by M. If (z)_4 = 0, then we may apply Theorem <ref> as condition (i) holds. So one of g_1, g_2, g_3 reduces the distance of z, which is a contradiction. So we may assume that (z)_4 < 0. We have z = α h + β g_1 + γ g_2 + δ g_3 for some integers α, β, γ, δ. Since (z)_4 < 0, it follows that α > 0. By Proposition <ref>, we have g_1 + g_2 + g_3 = 0. Let λ = min{β, γ, δ}, then we have z = α h + β g_1 + γ g_2 + δ g_3 = α h + (β - λ) g_1 + (γ - λ) g_2 + (δ - λ) g_3. Among the coefficients for g_1, g_2, g_3 in the right-most expression for z, we have that at least one is zero and the others are non-negative. So, without loss of generality, we may assume that β, γ, δ≥ 0 and at least one is zero. Note that if β = γ = δ = 0, then we have z = α h, so h reduces the distance of z, a contradiction. So at least one of β, γ, δ is nonzero. We take cases on whether γ is zero. Case 1. Assume γ = 0. So we have β, δ≥ 0. It follows that z = α h + β g_1 + δ g_3 = (α h_1 - β c_1 + δ v_31, α h_2 + β v_12 + δ v_32, α h_3 + β v_13 - δ c_3, -α h_4). By condition (ii) either h_1 + h_3 < h_2 + h_4 or c_2 > v_21 + v_23. If h_1 + h_3 < h_2 + h_4, then we have z_2 = α h_2 + β v_12 + δ v_32≥ h_2 so |z-h| = |z_1 ± h_1| + |z_2 - h_2| + |z_3 ± h_3| + |-z_4 + h_4| ≤ |z| + h_1 + h_3 - h_2 - h_4 < |z|. Hence h reduces the distance of z, a contradiction and we are done. Suppose that c_2 > v_21 + v_23. If β > 0 and δ > 0, then we have z_2 = α h_2 + β v_12 + δ v_32≥ v_12 + v_32 = c_2. So g_2 is applicable to z and we have |z+g_2| = |z_1 ± v_21| + |z_2-c_2| + |z_3 ± v_23| + |z_4| ≤ |z| + v_21 - c_2 + v_23 < |z|. Hence g_2 reduces the distance of z, a contradiction. So either β = 0 or δ = 0. Case 1.1. Assume β = 0 and δ > 0. In this case we have z = α h + δ g_3 = (α h_1 + δ v_31, α h_2 + δ v_32, α h_3 - δ c_3, -α h_4). Since z_1 = α h_1 + δ v_31≥ v_31 and z_2 = α h_2 + δ v_32≥ v_32, we have that g_3 is applicable to z. Also, since a_1 < a_2 < a_3, it follows that c_3 < v_31 + v_32. So |z - g_3| = |(α h_1 + (δ - 1)v_31, α h_2 + (δ - 1)v_32, α h_3 - (δ - 1)c_3, -α h_4)| = z_1 - v_31 + z_2 - v_32 + |z_3 ± c_3| + z_4 ≤ z_1 + z_2 + z_3 + z_4 - v_31 - v_32 + c_3 < |z|.
Hence g_3 reduces the distance of z, which is a contradiction. Case 1.2. Assume δ = 0 and β > 0. In this case we have z = α h + β g_1 = (α h_1 - β c_1, α h_2 + β v_12, α h_3 + β v_13, -α h_4). We have that h is applicable to z, and z_3 = α h_3 + β v_13≥ h_3. So, |z - h| = |((α - 1) h_1 - β c_1, (α - 1) h_2 + β v_12, (α - 1) h_3 + β v_13, -(α - 1) h_4)| = |z_1 ± h_1| + z_2 - h_2 + z_3 - h_3 + z_4 - h_4 ≤ z_1 + z_2 + z_3 + z_4 + h_1 + h_2 - h_3 - h_4 < |z|, where the final inequality follows from condition (iii) h_1 + h_2 < h_3 + h_4. Hence h reduces the distance of z, which is a contradiction. Case 2. Assume γ > 0. So either β = 0 or δ = 0. We will show that δ = 0. If δ > 0, then β = 0 and we have z = α h + γ g_2 + δ g_3. If δ > 0, then we have z_1 = α h_1 + γ v_21 + δ v_31≥ v_21 + v_31 = c_1. So g_1 is applicable to z. Since a_1 < a_2 < a_3, it follows that c_1 > v_12 + v_13. So g_1 reduces the distance of z, a contradiction. So, we must have δ = 0. Since z_4 ≥ h_4, it follows that h is applicable to z. Since δ = 0, we have z_3 = α h_3 + β v_13 + γ v_23≥ h_3. Hence, we have |z-h| = |z_1 ± h_1| + |z_2 ± h_2| + |z_3 - h_3| + |-z_4 + h_4| ≤ |z| + h_1 + h_2 - h_3 - h_4 < |z|, where the final inequality follows from condition (iii) h_1 + h_2 < h_3 + h_4. Therefore h reduces the distance of z, a contradiction. § GENERALISATIONS OF 1-NORM RESULTS In this section we extend results from <cit.> to the non-homogeneous case. Colloquially, the result<cit.> says: to check whether a set is distance reducing, it suffices to check whether it reduces the distance of the Graver basis. This proof carefully uses the assumption of homogeneity of A and the definition of the 1-norm. To see this clearly, consider the main claim from the proof that ∅≠(z^+) ∩(z_1^-). This claim follows from only one key assumption: * z is a move from z_1^+ that reduces the distance to z_1^-: ||(z_1^+ + z) - z_1^-|| < ||z_1^+ - z_1^-|| = ||z_1||. Since the move z is applicable to z_1^+, we have that (z_1^+) ⊆(z^-). If we assume by contradiction that (z^+) ∩(z_1^-) = ∅, then (z) ∩(z_1^-) = ∅ and we deduce the following: ||(z_1^+ + z) - z_1^-|| = ||z_1^+ + z|| + ||z_1^-|| = ||(z_1^+ - z^-) + z^+|| + ||z_1^-|| = ||z_1^+|| + ||z_1^-|| = ||z_1||. These equalities are immediate using the definition of the 1-norm and the final equality uses the assumption of homogeneity, which means that ||z^+|| = ||z^-||. However, this chain of equalities contradicts the previous strict inequality. In Theorem <ref> we show that the condition of homogeneity can be dropped. In particular, this generalises [Proposition 3, Aoki-Takemura] to the inhomogeneous case. The Graver basis is strongly distance reducing, even when A is inhomogeneous. The proof is identical to the proof of the homogeneous case by Aoki-Takemura, however we include it here for completeness. Let x, y be two elements of a fiber. If x-y belongs to the Graver basis then we are done. So, let us assume that x-y does not belong to the Graver basis. Then x-y admits a proper conformal decomposition x - y = u + v for some u, v ∈(A). If u does not belong to the Graver basis then u admits a proper conformal decomposition u = u' + v'. We then have x - y = u' + (v' + v) is a proper conformal decomposition. If we continue in this way, we note that ||u|| = ||u'|| + ||v'|| > ||u'|| so there are only finitely many steps until the first term is an element of the Graver basis. So we may assume that u belongs to the Graver basis. By conformality of the decomposition, we have ||x-y|| = ||u|| + ||v||. 
Also by conformality, we have that u is a move that is applicable to both x and y. So it follows that ||(x - u) - y|| = ||x - (y + u)|| = ||v|| < ||x-y||. Hence the Graver basis is strongly distance reducing. The next example shows that the property of distance reduction is not transitive. For instance, if M is a Markov basis and B is distance reducing for M. Then it is not true that B is a Markov basis. Let A = [ 2 3 4 ]. Consider the Markov basis M = {(3,-2,0), (2,0,-1) }. The singleton B = {(2, 0, -1)} is clearly not a Markov basis yet it reduces the distance of both elements of M. To see this, note that (2,0,-1) is applicable to (3,-2,0) and ||(3,-2,0) - (2,0,-1)|| = ||(1,-2,1)|| = 4 < 5 = ||(3,-2,0)||. The following theorem shows that, to check whether a set is distance reducing, it suffices to check that it distance reduces the Graver basis. Let B ⊆(A) be any subset. Then B is distance reducing if and only if it is distance reducing for the Graver basis G(A). Similarly, B is strongly distance reducing if and only if it is strongly distance reducing for G(A). Clearly if B is distance reducing then it is distance reducing for the Graver basis. On the other hand, suppose that B reduces the distance of the Graver basis. For any z ∈(A) \ G(A), we have that z admits a proper conformal decomposition z = g + h where g ∈ G(A). Since B reduces the distance of G(A), there is a move b ∈ B that distance reduces g. By <Ref>, it follows that the move b also distance reduces z. So B is distance reducing and we are done. The above theorem shows that a minimal Markov basis M is distance reducing if and only if M is distance reducing for the Graver basis. Recall <Ref>, which shows that if a set is distance reducing then it is Markov basis. The following corollary shows that the sets that distance reduces the Graver basis are Markov bases. Suppose that a set of moves B is distance reducing for the Graver basis, then B is a Markov basis. By Theorem <ref>, if B is distance reducing for the Graver basis then it is distance reducing. So by Proposition <ref>, it follows that B is a Markov basis. Suppose that b ∈(A) is a move that distance reduces an element x ∈(A). Suppose that z ∈(A) has a conformal decomposition z = x + y for some y ∈(A). Then b distance reduces z. Without loss of generality, assume that ||x - b|| < ||x|| and x^+ ≥ b^+. Note that the conformal decomposition gives us that ||z|| = ||x|| + ||y|| and z^+ ≥ b^+ so we have ||z - b|| = ||x-b + y|| ≤ ||x-b|| + ||y|| < ||x|| + ||y|| = ||z||. Hence b reduces the distance of z. § DISTANCE IRREDUCIBLE ELEMENTS The distance irreducible elements of (A) are the set of moves that cannot be distance reduced. We compare the distance irreducible elements with the indispensable set and Graver basis by using the notions of semi-conformal decomposition and conformal decomposition. Let z ∈(A) and suppose that there exist u,v ∈(A) \{0} such that z = u+v. We recall that: * z = u+v is a proper conformal decomposition and write z = u +_c v if (u^+) ∪(v^+) = (z^+) and (u^-) ∪(v^-) = (z^-) * z = u + v is a proper semi-conformal decomposition and write z = u +_sc v if (u^+) ⊆ [n] \(v^-) and (v^-) ⊆ [n] \(u^+) We introduce the following definitions: * z = u + v is a positive distance decomposition and write z = u +_pd v if u^+ ≤ z^+ and ||v|| < ||z|| * z = u + v is a negative distance decomposition and write z = u +_nd v if u^- ≤ z^- and ||v|| < ||z|| We define classes of indecomposable elements of (A) with respect to the above decompositions. 
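The two decompositions just introduced are easy to test on explicit vectors. The following sketch (Python with numpy; it is an illustration only, and it assumes the caller has already checked that u and v are nonzero moves of the lattice) verifies the defining conditions u^+ ≤ z^+ (respectively u^- ≤ z^-) together with ||v|| < ||z|| for a proposed splitting z = u + v.

```python
import numpy as np

def norm1(z):
    """1-norm |z| used throughout."""
    return int(np.abs(z).sum())

def is_positive_dd(z, u, v):
    """True if z = u +_pd v: z = u + v with u, v nonzero, u^+ <= z^+ and ||v|| < ||z||."""
    z, u, v = (np.asarray(w) for w in (z, u, v))
    return (np.array_equal(z, u + v) and u.any() and v.any()
            and np.all(np.maximum(u, 0) <= np.maximum(z, 0))
            and norm1(v) < norm1(z))

def is_negative_dd(z, u, v):
    """True if z = u +_nd v: z = u + v with u, v nonzero, u^- <= z^- and ||v|| < ||z||."""
    z, u, v = (np.asarray(w) for w in (z, u, v))
    return (np.array_equal(z, u + v) and u.any() and v.any()
            and np.all(np.maximum(-u, 0) <= np.maximum(-z, 0))
            and norm1(v) < norm1(z))

# for A = [2 3 4]: (3,-2,0) = (2,0,-1) + (1,-2,1) is a positive distance decomposition
assert is_positive_dd([3, -2, 0], [2, 0, -1], [1, -2, 1])
```

A brute-force search over such splittings is one way to compute D^+(A) and D^-(A) for the small examples considered below.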
We recall the indispensable and primitive moves: * S(A) = {z ∈(A) z has no semi-conformal decomposition} indispensable set * G(A) = { z ∈(A) z has no conformal decomposition} primitive elements We define the following sets of moves: * D^+(A) = { z ∈(A) z has no positive distance decomposition} * D^-(A) = { z ∈(A) z has no negative distance decomposition} * D^w(A) = D^+(A) ∪ D^-(A) weakly distance irreducible elements * D(A) = D^+(A) ∩ D^-(A) distance irreducible elements The following hold: D^+(A) = -D^-(A), D(A) = -D(A), D^w(A) = -D^w(A). Suppose that z = u +_pd v is a positive distance decomposition. Then it follows that -z = (-u) +_nd (-v) is a negative distance decomposition since (-u)^- = u^+ ≤ z^+ = (-z)^- and ||-v|| = ||v|| ≤ ||z|| = ||-z||. By a similar argument, we have that if z admits a negative distance decomposition then -z admits a positive distance decomposition. Therefore D^+(A) = -D^-(A). The remaining equalities follow immediately from the definition. Distance irreducible elements for distance reducing Markov bases are the analogue of indispensable elements for Markov bases. Suppose that B ⊆(A) is distance reducing, then D(A) ⊆ B. If B is strongly distance reducing then D(A)^w ⊆ B. Fix a distance reducing set B ⊆(A) and any z ∈ D(A). Since B is distance reducing, there exists an element b ∈ B that reduces the distance of z. So we have that b is applicable to z and reduces its distance. So at least one of the following holds: (i) b^+ ≤ z^+, (ii) b^- ≤ z^+, (iii) b^+ ≤ z^-, (iv) b^- ≤ z^-. Case (i) Suppose that b^+ ≤ z^+. Since b reduces the distance of z, we have ||z-b|| < ||z||. If z-b ≠ 0, then we have z = b +_pd (z-b) is a positive distance decomposition, which contradicts z ∈ D(A) ⊆ D^+(A). Hence z = b ∈ B. Case (ii) Suppose that b^- ≤ z^+. Since b reduces the distance of z we have ||z + b|| < ||z||. If z ≠ -b, then we have that z = (-b) +_pd (z+b) is a positive distance decomposition, which contradicts z ∈ D(A) ⊆ D^+(A). So -z = b ∈ B. Cases (iii) and (iv) These cases follow almost identically to the first two. If z ≠ b and z ≠ -b respectively then we deduce that z ∈ D^-(A). So we have that z ∈ B and -z ∈ B, respectively. Now suppose that B ⊆(A) is a strongly distance reducing set and let z ∈ D^w(A) be any element. So there exist two elements b, b' ∈ B such that: the move b is applicable to z^+; the move b' is applicable to z^-; and both b, b' reduce the distance of z. So, using the above description, we have that b satisfies (i) or (ii), and b' satisfies (iii) or (iv). Since z ∈ D^w(A) = D^+(A) ∪ D^-(A), we have that z ∈ D^+(A) or z ∈ D^-(A). If z ∈ D^+(A), then by cases (i) and (ii) we have that ± z = b ∈ B. Otherwise, if z ∈ D^-(A), then by cases (iii) and (iv) we have that ± z = b ∈ B. Let ℬ be the set of minimal distance reducing Markov bases and ℬ^s the set of minimal strongly distance reducing Markov bases. Then we have D(A) = ⋂_B ∈ℬ B and D^w(A) = ⋂_B ∈ℬ^s B. We begin by showing D(A) = ⋂_B ∈ℬ B. For each B ∈ℬ we have that D(A) ⊆ B so D(A) ⊆⋂_B ∈ℬ B. For the opposite inclusion, it suffices to show that for each m ∈⋃_B ∈ℬ B such that m ∉ D(A), there exists B ∈ℬ such that m ∉ B. To do this, consider the set (A) \{m, -m}. This set is distance reducing because the only elements it does not contain are m and -m, both of which admit either a positive or negative distance reducing decompositions. Let B be a minimal distance reducing subset of (A) \{m, -m}. We have that B ∈ℬ and m ∉ B, hence we have shown the opposite inclusion and we are done. 
The proof that D^w(A) = ⋂_B ∈ℬ^s B follows similarly. Note that for each B ∈ℬ^s we have that D^w(A) ⊆ B so D^w(A) ⊆⋂_B ∈ℬ^s B. For the opposite inclusion, it suffices to show that for each m ∈⋃_B ∈ℬ^s B such that m ∉ D^w(A), there exists B ∈ℬ^s such that m ∉ B. To do this, consider the set (A) \{m, -m}. This set is strongly distance reducing because the only elements it does not contain are m and -m, both of which admit positive and negative distance reducing decompositions. Let B be a minimal distance reducing subset of (A) \{m, -m}. We have that B ∈ℬ^s and m ∉ B, hence we have shown the opposite inclusion. This concludes the proof. The following chain of inclusions holds: S(A) ⊆ D(A) ⊆ D^w(A) ⊆ G(A). Throughout the proof, for a set B ⊆(A), we write B := (A) \ B for its complement in (A). We first show D^w(A) ⊆ G(A) by proving that G(A)⊆D^w(A). Suppose that z ∈(A) does not belong to G(A). Then we have that z = u +_c v admits a proper conformal decomposition, hence v^+ ≤ z^+ and v^- ≤ z^-. In addition, we have that u ≠ 0 and so it follows that ||v|| < ||z||. By the conformal decomposition, we have u^+ ≤ z^+ and u^- ≤ z^-, which gives us z = u +_pd v and z = u +_nd v respectively. Therefore z ∈D^+(A)∩D^-(A) = D^w(A) and we have shown D^w(A) ⊆ G(A). Note that the inclusion D(A) ⊆ D^w(A) follows immediately from the definition. So it remains to show that S(A) ⊆ D(A). To do this, we show that D(A)⊆S(A). Suppose that z ∈D^+(A), then z admits a positive distance decomposition z = u +_pd v. In particular, we have u^+ ≤ z^+. Assume, by contradiction, that z = u + v is not a proper semi-conformal decomposition. Then there exists an index i ∈ [n] such that u_i > 0 and v_i < 0. But, this gives us z_i = u_i + v_i < u_i, hence u^+ ≰ z^+, a contradiction. So z = u +_sc v is a semi-conformal decomposition, hence D^+(A)⊆S(A). Similarly, if z ∈D^-(A) then there exists a negative distance decomposition z = u +_nd v. In particular, we have u^- ≤ z^-. Assume by contradiction that z = v+u is not a semi-conformal decomposition. Then, as above, there exists an index i ∈ [n] such that u_i < 0 and v_i > 0. But, we have -z_i = -u_i -v_i < -u_i, hence u^- ≰ z^-, a contradiction. So z = v +_sc u is a semi-conformal decomposition, hence D^-(A)⊆S(A). So we have that D(A) = D^+(A)∪D^-(A)⊆S(A), which concludes the proof. Note that for each minimal Markov basis M we have the following chain of inclusions: S(A) ⊆ M ⊆ M(A) ⊆ G(A) where M(A) is the universal Markov basis. The following examples show that there is little connection between D^w(A) and M(A) or between D(A) and M(A). Consider the matrix A = [ 2 3 4 ]. In this case we have D^+(A) = [ 2 0 -1; 1 -2 1; -2 0 1 ] and D^-(A) = [ 2 0 -1; -2 0 1; -1 2 -1 ]. So we have D(A) = ±[ 2 0 -1 ] and D^w(A) = D(A) ∪±[ 1 -2 1 ]. On the other hand there are two minimal Markov bases M_1 = [ 2 0 -1; 1 -2 1 ] and M_2 = [ 2 0 -1; 3 -2 0 ]. So in this example we have D^w(A) ⊊ M(A) Consider the matrix A = [ 8 14 15 20 ]. We have D^+(A) = [ 5 0 0 -2; 2 1 -2 0; 0 0 4 -3; 1 -2 0 1; -5 0 0 2; -2 -1 2 0; -1 2 0 -1 ] and D^-(A) = [ 5 0 0 -2; 2 1 -2 0; 1 -2 0 1; -5 0 0 2; -2 -1 2 0; 0 0 -4 3; -1 2 0 -1 ]. Hence D(A) = ±[ 5 0 0 -2; 2 1 -2 0; 1 -2 0 1 ] and D^w(A) = D(A) ∪±[ 0 0 4 -3 ]. Note that A is a complete intersection with gluing type (((8 ∘ 20) ∘ 14) ∘ 15). We have that A has a unique minimal Markov basis so S(A) = M(A) = ±[ 5 0 0 -2; 1 -2 0 1; 2 1 -2 0 ] = D(A) ⊊ D^w(A) Let us consider the submatrix A' = [ 8 14 20 ]. We have D^-(A') = D^+(A') = ±[ 1 -2 1; 5 0 -2; ]. 
Moreover, we have that A' has a unique minimal Markov basis that coincides with the above. So S(A) = M(A) = D(A) = D^w(A). Let A = [ 8 31 33 53 ]. Its minimal Markov bases are M_1 = [ 2 -2 3 -1; 3 2 -1 -1; 5 -3 0 1; 5 0 2 -2; 6 1 -4 1; 8 -1 -1 0 ] and M_2 = [ 1 4 -4 0; 2 -2 3 -1; 3 2 -1 -1; 5 -3 0 1; 5 0 2 -2; 8 -1 -1 0 ]. The distance irreducible elements are given by D(A) = D^w(A) = ±[ 8 -1 -1 0; 5 0 2 -2; 3 2 -1 -1; 2 -2 3 -1; 5 -3 0 1; 1 4 -4 0; 3 -1 -3 2; 0 3 2 -3 ]. In particular we see that (3, -1, -3, 2) ∈ D(A) \ M(A). To show that D^w(A) ⊆ G(A), we use the characterisation of the Graver basis G(A) in terms of conformal decompositions. It may be tempting to use the characterisation of the universal Markov basis M(A) in terms of strongly semi-conformal decompositions to investigate its connections with D(A) and D^w(A). However, the following example shows that such comparisons are not always possible. Consider the complete intersection A = [ 3 5 8 11 ], which has two distinct minimal Markov bases: M_1 = [ 1 1 -1 0; 2 1 0 -1; 5 -3 0 0 ] and M_2 = [ 1 1 -1 0; 1 0 1 -1; 5 -3 0 0 ]. In this case we have D^+(A) = [ 1 1 -1 0; 5 -3 0 0; 3 -4 0 1; 1 0 1 -1; 1 -5 0 2; -1 -1 1 0; -5 3 0 0; -1 0 -1 1 ] and D^-(A) = [ 1 1 -1 0; 5 -3 0 0; 1 0 1 -1; -1 -1 1 0; -5 3 0 0; -3 4 0 -1; -1 0 -1 1; -1 5 0 -2 ]. And so D(A) = ±[ 1 1 -1 0; 5 -3 0 0; 1 0 1 -1 ] and D^w(A) = D(A) ∪±[ 3 -4 0 1; 1 -5 0 2 ]. We observe that the sets D^w(A) and M(A) are incomparable with respect to inclusion. In particular we have (3, -4, 0, 2) and (1, -5, 0, 2) lie in D^w(A) \ M(A). A strongly semi-conformal decomposition for (3, -4, 0, 2) is given by (3, -4, 0, 1) = (-1, 0, -1, 1) + (4, -4, 1, 0) = (-1, 0, -1, 1) + (-1, -1, 1, 0) + (5, -3, 0, 0). Note that neither of these strongly semi-conformal decompositions give rise to a positive distance decomposition. We now consider the properties of elements of (A) that cannot be distance reduced by elements of D(A). Let I(A) ⊆ G(A) be the set of moves that cannot be distance reduced by D(A). If z ∈(A) \ G(A) cannot be distance reduced by any element of D(A), then for any conformal decomposition z = g_1 + … + g_k where g_i ∈ G(A) for each i ∈ [k], then we have that g_i ∈ I(A) for each i ∈ [k]. Follows directly from <Ref>, i.e., if there exists a conformal decomposition z = g + h where g ∈ I(A) then, by the lemma, we have that z is distance reduced by D(A). Analogously to the universal Markov basis M(A), we define the universal distance-reducing Markov basis. The universal distance-reducing Markov basis 𝒟(A) is the union of all minimal distance reducing Markov bases. The universal strongly distance-reducing Markov basis 𝒟^s(A) is the union of all minimal strongly distance reducing Markov bases. From the definition, we have D(A) ⊆𝒟(A) and D^w(A) ⊆𝒟^s(A). Suppose that A has a unique minimal Markov basis M. If M is distance reducing, then we have S(A) = D(A) = M(A) = 𝒟(A). Since A has a unique minimal Markov basis it follows that S(A) = M(A) ⊆ D(A) ⊆𝒟(A). Suppose that B is any minimally distance reducing Markov basis. Then it follows that S(A) ⊆ D(A) ⊆ B. By assumption S(A) = M(A) is a minimal Markov basis that is distance reducing, hence any element of B \ S(A) is redundant for the purpose of distance reduction. Hence B = S(A). We have shown that there is a unique minimally distance reducing Markov basis, hence 𝒟(A) = S(A) and we are done. In the next example, we show how to compute the universal distance-reducing Markov basis. Consider the matrix A = [ 3 5 11 ] from Example <ref>. 
We compute the universal distance reducing Markov basis as follows. First, we have that D(A) = M(A) = ±[ 2 1 -1; 5 -3 0 ] and D^w(A) = D(A) ∪±[ 3 -4 1; 1 -5 2 ]. So, every distance reducing Markov basis contains D(A). By Theorem <ref> we have that D(A) is not distance reducing because it fails to distance reduce (1, -5, 2) and (0, 11, -5). The only elements of (A) that distance reduce (1, -5, 2) are (1, -5, 2) and (3, -4, 1). Each of these elements distance reduce (0, 11, -5). So, by Theorem <ref>, the minimally distance reducing Markov bases are [ 2 1 -1; 5 -3 0; 1 -5 2 ] and [ 2 1 -1; 5 -3 0; 3 -4 1 ], hence 𝒟(A) = D^w(A) = [ 2 1 -1; 5 -3 0; 1 -5 2; 3 -4 1 ]⊆ G(A). The following example shows that the universal distance reducing Markov basis need not lie inside the Graver basis. Consider the matrix A = [ 3 5 8 11 ] from Example <ref>. The elements of the Graver basis that cannot by distance reduced by D(A) are given by the set I = {(1, -5, 0, 2), (0, 11, 0, -5)}. To extend D(A) to a minimally distance reducing Markov basis, we must add either one or two elements to distance reduce I. The only moves in (A) that simultaneously distance reduce both elements of I are T_0 = ±[ 3 -4 0 1; 2 -5 1 1; 1 -5 0 2 ]⊆ G(A). So, there are three minimally distance reducing Markov bases given by D(A) together with one element from T_0. The remaining minimally distance reducing Markov bases are those that consist of D(A) together with a pair of elements z_1 and z_2 such that: z_1 reduces the distance of (1, -5, 0, 2) and does not reduce the distance of (0, 11, 0, -5); and z_2 reduces the distance of (0, 11, 0, -5) and does not reduce the distance of (1, -5, 0, 2). Let T_1 and T_2 denote the set of such z_1 and z_2 respectively. These sets are given by T_1 = [ 4 -4 1 0; 2 -5 2 0 ] and T_2 = [ 0 6 -1 -2; 1 -6 2 1; 2 -6 3 0; 1 6 0 -3; 0 5 1 -3; 0 7 -3 -1; 1 -7 4 0; 0 8 -5 0; ]∪[ 0 11 0 -5; 0 4 3 -4; 0 3 5 -5; 1 5 2 -4; 2 6 1 -4; 1 4 4 -5; 3 7 0 -4; 2 5 3 -5; ]∪[ 2 -10 0 4; 4 -9 0 3; 3 6 2 -5; 6 -8 0 2; 1 -11 1 4; 3 -10 1 3; 5 -9 1 2; 4 7 1 -5; 7 -8 1 1; ]∪[ 2 -11 2 3; 4 -10 2 2; 6 -9 2 1; 5 8 0 -5; 3 -11 3 2; 5 -10 3 1; 4 -11 4 1; 5 -11 5 0; 11 -11 0 2 ]. So, there are sixty eight minimally distance reducing Markov bases of this form. In T_2, the elements: (1, 5, 2, -4), (2, 6, 1, -4), …lie outside of the Graver basis. So 𝒟(A) ⊈G(A). The subsets G(A), M(A), S(A), D(A), D^w(A), 𝒟(A), 𝒟^s(A) and Markov basis M for A, are naturally ordered by inclusion into a poset any matrix A. The Hasse diagram for this poset is shown in Figure <ref>. The dotted lines indicate pairs of sets that are incomparable for certain matrices A. Note that some of the inclusions are not clear. For instance, to prove that 𝒟(A) ⊆𝒟^s(A), it suffices to show that any minimal distance reducing set can be extended to a minimal strongly distance reducing set. But this is not immediate. If B is minimal distance reducing, then it may be impossible to extend it to a minimal strongly distance reducing Markov basis. There may exist an element b ∈ B such that for every strongly distance reducing set B⊇ B containing B, we may have B\{b} is strongly distance reducing. In such a case, we cannot conclude that b ∈𝒟^s(A). The sets 𝒟(A) and 𝒟^s(A) are finite. We show that there exists N ∈ such that 𝒟(A), 𝒟^s(A) ⊆{z ∈(A) ||z|| ≤ N}. Let G be the Graver basis of A and define max(G) := max{||g|| g ∈ G }. Let N = 2max(G). For any z ∈(A) with ||z|| > N and g ∈ G, we have min{||g+z||, ||g-z||}≥| ||g|| - ||z||| > |2max(G) - max(G)| = max(G) ≥ ||g||. 
Therefore z does not distance reduce any element of the Graver basis. So, by Theorem <ref>, we have that z does not belong to 𝒟(A) or 𝒟^s(A). Hence 𝒟(A) and 𝒟^s(A) are finite. § DISCUSSION For monomial curves in 𝔸^3 and 𝔸^4, and complete intersections we have seen that the distance reducing property is characterised by the circuits. In particular, the characterisation of the distance reducing property by circuits is preserved by gluing, see Sections <ref>, <ref>, <ref>, and <ref>. Notation. We say that a matrix is irreducible if it does not admit a gluing. Every non-irreducible matrix A admits a gluing into two components A = A_1 ∘ A_2. Given a sequence of gluings ((A_1 ∘ A_2) ∘ A_3) ∘…, the matrices A_1, …, A_n are called its components. A sequence of gluings is maximal if each component is irreducible. The size of a component is the number of columns of the component. We have seen that the circuits characterise the distance reduction property: if all components have size one, see Corollary <ref>; and for monomial curves in 𝔸^4 that are glued non complete intersections, see Theorem <ref>. Below, we formulate generalisations about the distance reduction property. Suppose that A is a 1 × n matrix that admits a gluing with minimal components have size at most 3. Then a minimal Markov basis M for A is distance reducing if and only if M is distance reducing for the circuits of A. A matrix A has the circuit reduction property if, for any minimal Markov basis M of A, we have: M reduces the distance of the circuits of A M is distance reducing. The set of all matrices with the circuit reduction property is denoted 𝒜. The set 𝒜 is closed under gluing, i.e., if A, B ∈𝒜 admit a gluing then A ∘ B ∈𝒜. Is it possible to characterise the distance reduction property for non complete intersection monomial curves that do not admit a gluing? Is there a connection between M(A) and 𝒟(A) or M(A) and 𝒟^s(A)? Is there an algebraic or combinatorial description for the moves in 𝒟(A), which is analogous to strongly semi-conformal for universal Markov basis elements? In the definition of positive (resp. negative) distance decompositions z = u +_pd v, if we assume that u ∈ G(A) is an element of the Graver basis, then do we obtain the same set of distance irreducible elements? Just as the distance reducing minimal Markov bases are characterised by certain inequalities, see Theorems <ref> and <ref>, we expect that a characterisation of when the distance irreducible elements D(A) form a Markov basis is also governed by inequalities. Let A = [ a_1 a_2 a_3 ] be a complete intersection. Suppose that A has a unique minimal Markov basis M = {b := (b_1, -b_2, 0), c := (c_1, c_2, -c_3)} with b_1 > b_2. If M is not distance reducing, then D(A) = M if and only if 2 c_1 < b_1 + b_2. Let A = [ 4 9 37 ] be a matrix. In this case we have that A is a complete intersection with unique Markov basis M = [ 9 -4 0; 7 1 -1 ]. We define b = (9, -4, 0) and c = (7, 1, -1). Since c_1 ≥ c_2 + c_3, by Theorem <ref>, we have that M is not distance reducing. Furthermore, we have that 2c_2 ≥ b_1 + b_2 and by an explicit computation we have D(A) = M ∪±[ 2 -5 1 ]. So, in particular, a unique minimal Markov basis need not be equal to D(A). Fix a matrix A. Then (A) ⊆^s(A). §.§ Distance reducing complex Throughout the paper, we have predominantly focused on the 1-norm and its induced metric. However, the notion of distance reduction applies to all metrics so we ask how the results of this paper generalise to other metrics. 
In particular, to other metrics derived from norms. In this section, given a Markov basis M, we define the distance reducing complex, which is a polyhedral complex whose points correspond to metrics d for which M is distance reducing with respect d. We begin by exploring a motivating example. Running example. Fix the matrix A = [ 2 3 4 ], as in Example <ref>. Its Graver basis is: a_1 := [ 3; -2; 0 ], a_2 := [ 2; 0; -1 ], a_3 := [ 1; -2; 1 ], a_4 := [ 1; 2; -2 ], a_5 := [ 0; 4; -3 ]. Recall that A has two minimal Markov bases: M_1 = {a_1, a_2 } and M_2 = {a_2, a_3 }. And so its indispensable set and universal Markov basis are: S(A) = {a_2} and M(A) = {a_1, a_2, a_3}. We now determine the norm-induced metrics d for which M_1 is a d-reducing Markov basis. First we consider the space of all norm-induced metrics and the possible norms of the elements of the Graver basis. Metric cone. Given two metrics d_1 and d_2 on ^3 and a positive real number λ, we observe that the functions (λ d_1) : (x, y) ↦λ d_1(x, y) and (d_1 + d_2) : (x, y) ↦ d_1(x,y) + d_2(x,y) define metrics on ^3. So we define the metric cone to be the space of norm-induced metrics on ^3. We expect that any norm on the Graver basis extends to a norm on ^3. Suppose that ||·|| is a norm on a subset G ⊆^3 ∖{0}. By this we mean that ||·|| : G →_≥ 0 is a function that satisfies: * ||g|| > 0 for all g ∈ G, * If g = ∑_h ∈ Gλ_h h with finitely many nonzero λ_h ∈, then ||g|| ≤∑_h ∈ G |λ_h| · ||h||. Then ||·|| is the restriction of a norm on ^3. Suppose ||·|| is a norm on ^3. Let n_i := ||a_i|| ∈_>0 for each i ∈ [5]. Consider the vector n = (n_1, n_2, n_3, n_4, n_5) ∈^5. If the above conjecture holds, then set of possible vectors n naturally forms a rational polyhedral cone defined by the inequalities that arise from the triangle equality. For example a_1 = a_2 + a_3 n_1 ≤ n_2 + n_3, n_2 ≤ n_1 + n_3, n_3 ≤ n_1 + n_2. The set of inequalities is derived from the set of triangles or, more generally, by the set of linear relations among the given vectors a_1, …, a_5. The complete set of triangles is given by 123, 12^24, 1^22^35, 13^24, 13^35, 14^35^2, 234, 23^25, 24^25, 345 where α^iβ^jγ^k means a triangle whose edges are i a_α, j a_β and k a_γ. As a result, we get a projection of the metric cone _5 ⊆_>0^5 whose rays are given by the columns of the following matrix [ 2 1 1 3 0; 1 1 0 2 1; 1 0 1 1 1; 0 1 1 1 2; 1 1 2 0 3 ]. The rows of the above matrix correspond to the values of n_1, n_2, …, n_5 respectively. Distance reducing property. We now consider the subset of points c ∈ such that for any norm || · || in the fiber of c, the Markov basis M_1 is distance reducing with respect to ||·||. This set has a natural polyhedral-complex structure, which we now describe for this example. Consider the triangle with sides a_1, a_2, a_3, which appears in suitably large fibers of A. Let x be the vertex where a_1 and a_3 meet and y be the vertex where a_2 and a_3 meet. By assumption, the basis M_1 is distance reducing so either the move from x by a_1 reduces the distance or the move from y by a_2 reduces the distance. Therefore, either n_3 > n_1 or n_3 > n_2. Next consider the triangle a_1, 2a_2, a_4. Starting with the distance n_4, we can either reduce it using a_1 or a_2. If we can reduce it using a_1 then we have a_4 > 2a_2. Otherwise, if we can reduce it using a_2 then we have a_4 > a_3. Continuing in this way, we obtain a set of conditions that must be satisfied by the metric whenever M_1 has the distance reducing property. 
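The triangles, and hence the inequalities cutting out the projected metric cone, can be enumerated mechanically from the integer linear relations among a_1, …, a_5. The brute-force sketch below (Python with numpy; the coefficient bound of 3 is an ad hoc choice that happens to suffice for this matrix) recovers exactly the ten labelled triangles listed above, each of which contributes its three triangle inequalities of the form |c_i| n_i ≤ |c_j| n_j + |c_k| n_k.

```python
import numpy as np
from itertools import product

# Graver basis of A = [2 3 4] as listed above (a_1, ..., a_5 as rows)
a = np.array([[3, -2, 0], [2, 0, -1], [1, -2, 1], [1, 2, -2], [0, 4, -3]])

seen = set()
for coeffs in product(range(-3, 4), repeat=5):     # small ad hoc coefficient box
    c = np.array(coeffs)
    if np.count_nonzero(c) != 3 or np.any(c @ a != 0):
        continue
    c = c // np.gcd.reduce(np.abs(c[c != 0]))      # make the relation primitive
    if c[np.nonzero(c)[0][0]] < 0:                 # fix an overall sign
        c = -c
    if tuple(c) in seen:
        continue
    seen.add(tuple(c))
    i, j, k = np.nonzero(c)[0]
    print(f"{c}:  {abs(c[i])}*n_{i+1} <= {abs(c[j])}*n_{j+1} + {abs(c[k])}*n_{k+1}, and cyclically")

print(len(seen), "triangles found")                # ten, matching the list above
```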
For each triangle: 123, 12^24 and 1^22^35 we reduce a_3, a_4 and a_5 by either moving along a_1 or a_2. However, for the triangle 1^22^35, the reduction by move a_1 gives rise to a triangle 12^36 where a_6 = (3, 2, -3)^T does not belong to the Graver basis. The resulting set of conditions are shown in Table <ref>. The vector a_6 is assumed to have norm n_6 := ||a_6||. The extended metric cone _6 ⊆^6_>0 is derived from the triangle inequalities given by previous set of triangles (<ref>) together with: 12^36, 1^23^36, 14^36^2, 156, 2^236, 246, 2^356^2, 34^26, 3^35^26, 4^356. The metric cone _6 ⊆^6_>0 is given by the cone over the columns of the matrix: [ 2 1 1 3 0 3; 1 1 0 2 1 1; 1 0 1 1 1 2; 0 1 1 1 2 1; 1 1 2 0 3 3; 1 2 1 3 3 0 ]. We may avoid adding a_6 into the computation and consider only the projection of the distance-reducing metrics onto _5. If we do this, then stay within the cone defined by (<ref>) and we obtain weaker inequalities derived from the triangle inequality. For instance: n_5 > |3n_2 - n_1|, which is derived from the reduction by a_1, and n_5 > 2n_3, which is derived from the reduction by a_2. The metric cone _6 admits a surprising symmetry under a representation of the group S_3. Let σ_1 = (1, 2) and σ_2 = (2,3) be adjacent transpositions that generate S_3. The action is given by σ_1 = [ 0 1 0; 1 0 0; 0 0 1; ]⊕[ 1 0 0; 0 0 1; 0 1 0; ] and σ_2 = [ 1 0 0; 0 0 1; 0 1 0; ]⊕[ 0 0 1; 0 1 0; 1 0 0; ] where A ⊕ B denotes the block diagonal matrix [ A 0; 0 B ]. Similarly, there is a permutation action that acts on the coordinates of the cone, which suggests that the roles of a_1, a_5 and a_6 are interchangeable, and whenever we permute their roles there is an induced permutation on a_2, a_3 and a_4. One reason why this may happen is because the vectors a_1, …, a_6 can be partitioned into two triangles a_1, a_5, a_6 and a_2, a_3, a_4. Distance reducing complex. Every distance reducing metric for M_1 satisfies at least one of the inequalities in each row in <Ref>. For each row in the table, we choose one of the inequalities. For each collection of such choices, we intersect the corresponding open halfspaces and take their intersection with the metric cone _6. This gives a collection of relatively open cones that define the distance reducing complex. For example, if we choose the inequalities I_1 = {n_3 > n_2, n_4 > 2n_2, n_5 > n_6, n_6 > 3n_2 }, which is obtained by always reducing by a_1, then the intersection of the cone defined by I_1 and _6 is the cone (A_1) over the columns of the matrix A_1 = [ 1 0 2 3 4 6; 0 1 1 1 1 1; 1 1 1 2 3 5; 1 2 2 2 2 4; 2 3 3 3 5 9; 1 3 3 3 3 3 ]. On the other hand, if we choose the inequalities I_2 = {n_3 > n_2, n_4 > 2n_2, n_5 > n_6, n_6 > n_4 }, then we obtain the cone (A_2) over the columns of the matrix A_2 = [ 1 2 3 4 0 2 3 4 7; 0 1 1 1 1 1 1 1 2; 1 2 2 3 1 1 2 3 5; 1 2 2 2 2 2 2 2 4; 2 4 2 5 3 3 3 5 8; 1 2 2 2 3 3 3 3 4 ]. Note that, despite choosing different inequalities, the interiors of the cones (A_1) and (A_2) intersect. Their intersection (A_12) is the cone over the columns of the matrix A_12 = [ 1 0 2 3 4 3 4 5 9; 0 1 1 1 1 1 1 1 2; 1 1 1 2 3 3 3 4 7; 1 2 2 2 2 3 3 3 6; 2 3 3 3 5 6 3 7 12; 1 3 3 3 3 3 3 3 6 ]. This cone corresponds to the choice of inequalities I_1 ∪ I_2. Reduction complex. The natural generalisation of the complex in the above example is constructed as follows. Let B be a Markov basis for A ∈^d × n. 
We say that a set S ⊆(A) is closed under B-reductions if B ⊆ S and if for any linear relation α_b b + ∑_b' ∈ B \{b}α_b' b' = α_s s with b ∈ B, α_b ≥ 1, s ∈ S, α_s nonzero and α_x∈ for all x, there exists s' ∈ S and α_s'∈ nonzero such that the linear relation (α_b - 1)b +∑_b' ∈ B \{b}α_b' b' = α_s' s' holds. We call the second linear relation the reduction of the first linear relation with respect to b. We say that the reduction is unique with respect to b, if s' above is unique. If a finite set S ⊇ B is not closed under B-reductions, then it can be made closed by adding finitely many elements of (A). If we assume that only primitive elements may be added, then this closure is unique and defines a closure operation is the usual sense. If S contains only primitive elements, then all reductions are unique. Let B be a Markov basis and fix a finite set S ⊆(A) closed under B-reductions. For each linear relation α_b b + ∑_b' ∈ B \{b}α_b' b' = α_s s with b ∈ B, α_b ≥ 1, s ∈ S, α_s nonzero and α_x∈ for all x that admits a reduction to (α_b - 1)b +∑_b' ∈ B \{b}α_b' b' = α_s' s' with respect to b, we define the reduction inequality |α_s| n_s > |α_s'| n_s', which defines a half-space in ^S whose coordinates are denoted n_i for each i ∈ S. Let B be a Markov basis and S ⊆(A) be a finite set closed under B-reductions. Let = {∑_b ∈ Bα_b b = α_s s α_s ≠ 0} be the set linear relations that admit reductions. We identify a relation ∑_b ∈ Bα_b b = α_s s with its negative ∑_b ∈ B (-α_b) b = -α_s s, so we assume that any individual coefficient α_b is non-negative. For each relation L ∈, we obtain the set of reduction inequalities I_L = {|α_s| n_s > |α_s'| n_s'} with one inequality for each b ∈ B such that α_b ≠ 0. The distance reducing complex Δ(B, S) ⊆^S is the union Δ(B, S) = ⋃_T_T where T runs over all transversals of {I_L L ∈} and _T = ⋂_t ∈ T H_t is the intersection of the half-spaces H_t ⊆^S defined by the inequality t ∈ T. By taking further intersections, and recording the inequalities, it is possible to imbue a natural polyhedral complex structure to Δ(B, S). Explicitly, the maximal cones are indexed by super-sets of transversals of {I_L}. For which metrics is the Markov basis minimally distance reducing? For other matrices, it may be possible for the reduction conditions to be satisfied but the Markov basis to not be distance reducing. General metric cone. Let S ⊆(A) be a non-empty collection of vectors. This set realises a matroid (S) with circuits (S). We define the metric cone in terms of the circuits of this matroid. For example, if 123 ∈(S) is a circuit then it defines a triangle so we get three inequalities: |c_1|n_1 ≤ |c_2|n_2 + |c_3|n_3, |c_2|n_2 ≤ |c_1|n_1 + |c_3|n_3, |c_3|n_3 ≤ |c_1|n_1 + |c_2|n_2. On the other hand if we have a larger circuit, like 1 … m, then we get weaker inequalities: |c_i|n_i ≤ |c_1|n_1 + … + |c_i|n_i + … + |c_m|n_m for each i ∈ [m] The reason why these inequalities are `weaker' is easily seen when (S) is a graphic matroid of a chordal graph G. In this case, if C ∈(S) is a circuit, then it is cycle in G. Since G is choral, the cycle admits a triangulation. The triangle inequalities arising from the triangles imply the inequalities derived from C. So we obtain a metric cone _S ⊆^S defined by all the circuits of (S). We ask the following questions about the distance reducing complex. * When do two sets of vectors S_1 and S_2 define the same metric cone _S_1 = _S_2? * What is the structure of the distance reducing complex for monomial curve? 
* How do the results for the 1-norm, such as Theorems <ref> and <ref>, generalise to other metrics? Acknowledgments. D. Kosta gratefully acknowledges funding from the Royal Society Dorothy Hodgkin Research Fellowship DHF\R1\201246. We thank the associated Royal Society Enhancement grant RF\ERE\210256 that supported O. Clarke's postdoctoral research.
http://arxiv.org/abs/2406.18146v2
20240626075617
A Refer-and-Ground Multimodal Large Language Model for Biomedicine
[ "Xiaoshuang Huang", "Haifeng Huang", "Lingdong Shen", "Yehui Yang", "Fangxin Shang", "Junwei Liu", "Jia Liu" ]
cs.CV
[ "cs.CV" ]
Xiaoshuang Huang et al. Healthcare Group, Baidu Inc, Beijing 100085, China China Agricultural University, Beijing 100083, China MAIS, Institute of Automation, Chinese Academy of Sciences (CASIA), Beijing 100086, China huangxiaoshuang@cau.edu.cn, liujia9001cc@163.com A Refer-and-Ground Multimodal Large Language Model for Biomedicine Xiaoshuang Huang1,2Work performed during an internship at Baidu Inc. Haifeng Huang1 Lingdong Shen3 Yehui Yang1^† Fangxin Shang1 Junwei Liu1 Jia Liu1^() July 1, 2024 ================================================================================================================================================================= ()Corresponding author. †Project Leader. § ABSTRACT With the rapid development of multimodal large language models (MLLMs), especially their capabilities in visual chat through refer and ground functionalities, their significance is increasingly recognized. However, the biomedical field currently exhibits a substantial gap in this area, primarily due to the absence of a dedicated refer and ground dataset for biomedical images. To address this challenge, we devised the Med-GRIT-270k dataset. It comprises 270k question-and-answer pairs and spans eight distinct medical imaging modalities. Most importantly, it is the first dedicated to the biomedical domain and integrating refer and ground conversations. The key idea is to sample large-scale biomedical image-mask pairs from medical segmentation datasets and generate instruction datasets from text using chatGPT. Additionally, we introduce a Refer-and-GrounD Multimodal Large Language Model for Biomedicine (BiRD) by using this dataset and multi-task instruction learning. Extensive experiments have corroborated the efficacy of the Med-GRIT-270k dataset and the multi-modal, fine-grained interactive capabilities of the BiRD model. This holds significant reference value for the exploration and development of intelligent biomedical assistants. The repository is at [https://github.com/ShawnHuang497/BiRD]https://github.com/ShawnHuang497/BiRD § INTRODUCTION Multimodal large language models (MLLMs) have become a popular area of research, with numerous applications in the field of visual languages, such as, Visual Question Answering (VQA), open vocabulary detection, and so on. Nonetheless, the unique challenges presented by the realm of biomedicine, which starkly contrasts with the natural world, often render conventional visual assistants inept. They may either refrain from responding to biomedical queries or, worse, provide inaccurate responses or entirely fabricated information <cit.>. Despite existing research within the realm of biomedical MLLMs, current studies have predominantly focused on image description and VQA, leaving a notable gap in capabilities concerning referring and grounding (shown in Fig. <ref>). The act of referring demands a model's accurate semantic comprehension of specified regions, while grounding necessitates the localization of regions based on semantic descriptions provided <cit.>. These fine-grained multimodal capabilities are essential for both the interaction process between intelligent biomedical assistants and patients and for biomedical education. This capability not only makes the information exchange process more intuitive but also significantly enhances the accuracy and efficiency of information exchange. A key factor hindering the development of this capability in the field of biomedicine is the lack of multi-modal fine-grained interactive datasets. 
To address these challenges, we develop the BioMedical Ground-and-Refer Instruction-Tuning (Med-GRIT-270k) dataset by leveraging the medical segmentation dataset (SA-Med2D-20M <cit.>). Then a biomedical refer-and-ground multimodal large language model was explored with the Med-GRIT-270k and multi-task instruction learning method. The paper principally contributes the following: * Med-GRIT-270k Dataset. Large-scale biomedical image-mask pairs are transformed into multi-modal conversations by leveraging chatGPT <cit.> in a novel process. It is the first dataset in biomedicine to integrate referring, grounding, and conversations. * The first Biomedical Refer-and-grounD Multimodal Large Language Model (BiRD). It is fine-tuned by multi-task instruction learning for the biomedical domain with self-generated data. This validates the effectiveness of multi-task instruction tuning and highlights best practices for adapting the MLLMs to the specialized domain. * To advance biomedical multi-modal learning research, we will release the Med-GRIT-270k dataset and a comprehensive codebase for community use. § RELATED WORK Biomedical Multi-modal Large Language Models. Amidst the rapid development of Large Language Models (LLMs) and the success of instruction-tuned LLMs within the general domain <cit.>, researchers in the biomedical field have been fervently exploring the expansion of these models' capabilities. Recent studies have increasingly concentrated on the domain of MLLMs, with notable endeavors within the biomedical sector including BioMedGPT <cit.>, RadFM <cit.>, LLaVa-Med <cit.>, and so on <cit.>. These methodologies have significantly propelled the development of MLLMs in the biomedical realm. For instance, LLaVa-Med <cit.>, utilizing pre-trained LLMs for visual instruction tuning, has established a unique, end-to-end multi-modal biomedical chatbot capable of processing image inputs. RadFM <cit.> is a MLLM supporting 2D/3D radiographic imaging input for the medical domain. However, due to various challenges, biomedical MLLMs capable of supporting fine-grained interactions have yet to emerge. MLLMs for Referring and Grounding. In natural images, the large-scale public datasets have greatly supported the exploration into the sophisticated understanding abilities of multimodal large language models (MLLMs), such as Gpt4ROI <cit.>, Ferret <cit.>, QWen-VL <cit.>, and so on. Although some work <cit.> has already begun to investigate grounding in biomedicine, it can only be applied to small models, as the amount of data is limited and there are only a few modalities. The paramount factor underlying the success of these initiatives is their access to pertinent, large-scale datasets. For instance, QWen-VL uses around 80M data for referring and grounding. However, the multi-modal fine-grained interactive dataset in biomedical is virtually nonexistent. § MED-GRIT-270K: BIOMEDICAL GROUND-AND-REFER INSTRUCTION-TUNING DATASET We've created the first biomedical refer-and-ground instruction-tuning dataset to address the lack of such resources. It was generated through the collaborative efforts of humans and Artificial Intelligence (AI), derived from large-scale biomedical image segmentation datasets. The generation process can be divided into three steps: (i) Manually generating instance-level meta information for each image based on its mask. (ii) Employing an AI assistant to generate global information for the images. 
(iii) Utilizing the AI assistant to craft fine-grained conversations based on the meta information and global image information obtained in the previous steps. Generating Instance-level Meta Information. We first sampled biomedical image-mask pairs from the SA-Med2D-20M <cit.>. Ultimately, approximately 60K images were sampled from this dataset, considering the diversity of modality and redundancy. For instance, the original dataset includes a plethora of 2D slices from 3D data, leading to excessive data similarity. Subsequently, we calculated the coordinates of each instance based on the instance-level masks. Specifically, spatial locations are delineated via the textual representation in the format [X_topleft, Y_topleft, X_bottomright, Y_bottomright], and normalize the coordinates to fall within the range [0,1]. Finally, we enrich the images with additional details to compile the meta information, which includes modality, scanned region, orientation, and object coordinates. Generating Image Captions. We utilize meticulously designed prompts along with the meta information provided to ChatGPT <cit.>, thereby acquiring the global information for each image. Biomedical Instruction-Tuning Data. Spatial understanding is manifested through various task formats. This primarily encompasses two distinct types and their corresponding task names: (i) Region-in and Text-out: Referring Object Classification (ROC), Referring Captioning (RC), (ii) Text-in and Region-out: Visual Grounding (VG), and (iii) Text-in and Text-out: Medical Image Analysis (MIA). To reduce ambiguity and enhance the model's capability for fine-grained visual comprehension, some essential strategies are adopted. The special tokens (<ref> and </ref>) are introduced, marking the content referred to by the bounding box. This aptly associates bounding boxes with their corresponding descriptive words or sentences. Subsequently, we instructed ChatGPT to design a question and answer for each task. Finally, We mapped the coordinates within the range [0, 1000] and reformatted them as (X_topleft, Y_topleft), (X_bottomright, Y_bottomright). To differentiate between detection strings and regular text strings, two special tokens (<box> and </box>) are appended at the start and end of the bounding box string, respectively. Fig. <ref> shows an example of our instruction-following data. § MULTI-TASK INSTRUCTION LEARNING We aim to imbue MLLMs with grounding and referring capacities via multi-task learning, simultaneously ensuring the retention of the MLLM's essential conversational proficiency. This section will henceforth elucidate from two perspectives: the architecture of the model and multi-task instruction training. §.§ Model Architecture We utilize Qwen-VL <cit.>, a comprehensive multimodal conversational model, as the foundational general-domain language model. Specifically, the visual encoder employs the Vision Transformer (ViT) <cit.> architecture, initialized with pre-trained weights from OpenAI's CLIP ViT-BigG <cit.>. The vision-language adapter utilizes cross-attention with a trainable query. The large language model incorporates the pre-trained Qwen-7B <cit.>. §.§ Multi-task Instruction Training Considering that the base model already possesses the capability to refer or ground within natural images, we employ only one stage to finetune it based on the pre-trained base model on the Med-GRIT-240k dataset. As illustrated in Fig. <ref>), We solely fine-tune the cross-attention and LLM parameters, while the visual encoder remains frozen. 
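As a concrete illustration of the coordinate convention described above, the sketch below turns an instance mask into a grounded string of the kind fed to the model. The mapping to [0, 1000] and the <ref>/<box> special tokens follow the text (the two normalisation steps are folded into one here); the exact rounding and separator choices are assumptions made for the example rather than the dataset's actual implementation.

```python
import numpy as np

def mask_to_box_string(mask, phrase):
    """Build '<ref>phrase</ref><box>(x1,y1),(x2,y2)</box>' from a binary instance mask."""
    ys, xs = np.nonzero(mask)
    h, w = mask.shape
    # bounding box in pixel coordinates, normalised to the [0, 1000] range
    x1, y1 = int(xs.min()) * 1000 // w, int(ys.min()) * 1000 // h
    x2, y2 = int(xs.max()) * 1000 // w, int(ys.max()) * 1000 // h
    return f"<ref>{phrase}</ref><box>({x1},{y1}),({x2},{y2})</box>"

# toy example: a 10x10 mask with a block in the lower-right quadrant
m = np.zeros((10, 10), dtype=bool)
m[6:9, 5:8] = True
print(mask_to_box_string(m, "the lesion"))   # <ref>the lesion</ref><box>(500,600),(700,800)</box>
```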
The input images are processed through the ViT-BigG <cit.> and vision-language adapter, yielding fixed-length sequences of visual features. We then append the markers (<img> and </img>) to the start and end of the image feature sequence, respectively, to denote the beginning and end of visual content. We fine-tuned the model using a dataset comprising 60k images and a total of 240k dialogue turns. The global training batch size is 128. The learning rate is 2e-5 and the scheduler is cosine. The multi-task instruction training just took 30 hours on 4 × A100(40G) GPUs. § EXPERIMENTS In this section, we execute a thorough evaluation across diverse multimodal tasks to holistically gauge our models' proficiency in visual comprehension. Evaluation dataset. We randomly selected approximately 12% of the images and dialogues from the constructed Med-GRIT-270k dataset to serve as the test set. Given that a single 3D dataset contains multiple data slices, we extracted cases in their entirety to prevent leakage of test set data into the training set. This ensures that different slices from the same 3D dataset do not concurrently appear in both the training and test sets, thereby guaranteeing the reliability of the test results. Evaluation metrics. The evaluation metrics for the four tasks are Recall@0.5, Recall, Spice <cit.>, and mBMR, respectively. Recall@0.5 denotes a prediction as correct only when the intersection over union (IoU) between the predicted bounding box and the ground truth exceeds 0.5. The mBMR utilized for assessing the MIA task is the mean value of BLEU@4 <cit.>, METEOR <cit.>, and ROUGE-L <cit.>, offering a more comprehensive evaluation of the prediction quality than a solitary metric. Comparison. As shown in Table <ref>, we are the pioneers in developing a medical MLLM with referring and grounding capabilities, and existing MLLMs (such as Qwen-VL <cit.>, GPT-4 <cit.>, MiniGPT-v2 <cit.>, etc.) have not seen medical referring and grounding data. So we will not compare them on evaluation metrics, as it would be profoundly unfair. As illustrated in Table <ref>, we present the quantitative test outcomes for LLaVa-Med <cit.> and the impact of the data scale on these results. Between rows 3 and 6, we observe the performance of the BiRD-Med-GRIT model across varying data scales. With the expansion of training data, all metrics exhibit significant enhancements, with the average rising from 35.69 to 56.66. This underscores the efficacy of augmenting dataset size in bolstering the model's proficiency on multimodal datasets. Notably, at the 240k dataset level, the model achieved the highest scores across all metrics, showcasing optimal overall performance. From the first and sixth rows of Table <ref>, it is evident that the LLaVa-Med <cit.> model demonstrates subpar performance on the Med-GRIT-Test30k dataset, particularly in terms of no efficacy in region-level visual content localization (with the Recall@0.5 of 0). Simultaneously, we evaluated our model on the LLaVa-Med qa-0.2k test set as well. As indicated in the last two rows of Table <ref>, due to not being trained on the LLaVa-Med <cit.> dataset, our performance metrics on its test set were marginally lower than its own. However, on similar MIA tasks within our test set, LLaVa-Med <cit.>(with an mBMR of 11.20), significantly underperformed in comparison to our model (with an mBMR of 52.17). Main Results. 
As shown in Table <ref>, we display the performance of the BiRD model across four distinct tasks in eight different medical imaging modalities. The ROC task tests the MLLM's understanding of text related to specific image areas and their visual details. The PET and Fundus, which focus on only one category, are not trained or evaluated. We find the recall of ROC mainly depends on the variety and distinctiveness of objects and features across image modalities. The RC task tests the model's ability to recognize image regions and describe them in words. The model does well with Ultrasound and Dermoscopy images but struggles with the more diverse CT images, where performance lags. The VG task tests how well the model matches text descriptions to image areas. MR modality performed the worst, likely because it mostly features tumor tissues, with far fewer anatomical structures. This issue is also seen in ultrasound images. The MIA task checks the model's understanding of medical images. The 4th row in Table <ref> shows the model has some level of analysis and understanding across almost all modalities. Across the four evaluated tasks, it is apparent that the Dermoscopy modality consistently exhibits the highest performance metrics. This can be attributed to the distinct visual features, a reduced number of object categories, and the substantial proportion of the image occupied by the object regions, collectively simplifying the task for this particular modality. Object Hallucination. As Fig. <ref> shows, we have also observed instances of object hallucination in BiRD. This phenomenon is common and has also been observed in other MLLMs <cit.>. We believe this is attributed to the fact that the model's visual encoder is frozen, and its initialized parameters have scarcely encountered medical imaging, resulting in a lack of comprehensive understanding of specific domains or topics in feature extraction. In a word, this phenomenon should receive increased attention in future research endeavors. § CONCLUSION In this paper, to develop a single MLLM assistant capable of handling multiple vision-language tasks, we propose a Med-GRIT-270k dataset. By leveraging the dataset, we introduce the BiRD model, a Biomedical Refer-and-GrounD Multimodal Large Language Model. We verified BiRD on a diverse 30k question-and-answer test set, encompassing multimodal and multitask scenarios. The BiRD showcases a highly promising direction for developing intelligent biomedical assistants. To our knowledge, Med-GRIT-270k and BiRD are respectively the first refer-and-ground dataset and fine-grained interactive MLLM in the realm of biomedicine. We will release both the dataset and model to foster the development of intelligent biomedical assistants. Limitations. Although this work developed a novel multimodal dataset in biomedicine, during the data construction process, most of the raw datasets only annotated certain organs or diseases for a sample. This makes it difficult to construct highly correlated negative samples. This issue will be a focus in the subsequent data construction work. splncs04 10 anderson2016spice Anderson, P., Fernando, B., Johnson, M., Gould, S.: Spice: Semantic propositional image caption evaluation. In: Computer Vision–ECCV 2016: 14th European Conference, Amsterdam, The Netherlands, October 11-14, 2016, Proceedings, Part V 14. pp. 382–398. Springer (2016) bai2023qwen Bai, J., Bai, S., Chu, Y., Cui, Z., Dang, K., Deng, X., Fan, Y., Ge, W., Han, Y., Huang, F., et al.: Qwen technical report. 
arXiv preprint arXiv:2309.16609 (2023) bai2023qwen2 Bai, J., Bai, S., Yang, S., Wang, S., Tan, S., Wang, P., Lin, J., Zhou, C., Zhou, J.: Qwen-vl: A versatile vision-language model for understanding, localization, text reading, and beyond (2023) banerjee2005meteor Banerjee, S., Lavie, A.: Meteor: An automatic metric for mt evaluation with improved correlation with human judgments. In: Proceedings of the acl workshop on intrinsic and extrinsic evaluation measures for machine translation and/or summarization. pp. 65–72 (2005) chen2023minigpt Chen, J., Zhu, D., Shen, X., Li, X., Liu, Z., Zhang, P., Krishnamoorthi, R., Chandra, V., Xiong, Y., Elhoseiny, M.: Minigpt-v2: large language model as a unified interface for vision-language multi-task learning. arXiv preprint arXiv:2310.09478 (2023) dosovitskiy2020image Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., et al.: An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929 (2020) eslami2023pubmedclip Eslami, S., Meinel, C., De Melo, G.: Pubmedclip: How much does clip benefit visual question answering in the medical domain? In: Findings of the Association for Computational Linguistics: EACL 2023. pp. 1151–1163 (2023) han2023multimodal Han, T., Adams, L.C., Nebelung, S., Kather, J.N., Bressem, K.K., Truhn, D.: Multimodal large language models are generalist medical image interpreters. medRxiv pp. 2023–12 (2023) huang2024cross Huang, X., Li, H., Cao, M., Chen, L., You, C., An, D.: Cross-modal conditioned reconstruction for language-guided medical image segmentation. arXiv preprint arXiv:2404.02845 (2024) ilharco10openclip Ilharco, G., Wortsman, M., Wightman, R., Gordon, C., Carlini, N., Taori, R., Dave, A., Shankar, V., Namkoong, H., Miller, J., et al.: Openclip (2021). URL: https://doi. org/10.5281/zenodo 7439141 Lee_Bubeck_Petro_2023 Lee, P., Bubeck, S., Petro, J.: Benefits, limits, and risks of gpt-4 as an ai chatbot for medicine. New England Journal of Medicine p. 1233–1239 (Mar 2023). 10.1056/nejmsr2214184, <http://dx.doi.org/10.1056/nejmsr2214184> li2024llava Li, C., Wong, C., Zhang, S., Usuyama, N., Liu, H., Yang, J., Naumann, T., Poon, H., Gao, J.: Llava-med: Training a large language-and-vision assistant for biomedicine in one day. Advances in Neural Information Processing Systems 36 (2024) li2023evaluating Li, Y., Du, Y., Zhou, K., Wang, J., Zhao, W.X., Wen, J.R.: Evaluating object hallucination in large vision-language models. arXiv preprint arXiv:2305.10355 (2023) li2023lvit Li, Z., Li, Y., Li, Q., Wang, P., Guo, D., Lu, L., Jin, D., Zhang, Y., Hong, Q.: Lvit: language meets vision transformer in medical image segmentation. IEEE transactions on medical imaging (2023) lin2004rouge Lin, C.Y.: Rouge: A package for automatic evaluation of summaries. In: Text summarization branches out. pp. 74–81 (2004) liu2023medical Liu, F., Zhu, T., Wu, X., Yang, B., You, C., Wang, C., Lu, L., Liu, Z., Zheng, Y., Sun, X., et al.: A medical multimodal large language model for future pandemics. NPJ Digital Medicine 6(1),  226 (2023) liu2024visual Liu, H., Li, C., Wu, Q., Lee, Y.J.: Visual instruction tuning. Advances in neural information processing systems 36 (2024) luo2023biomedgpt Luo, Y., Zhang, J., Fan, S., Yang, K., Wu, Y., Qiao, M., Nie, Z.: Biomedgpt: Open multimodal generative pre-trained transformer for biomedicine. 
arXiv preprint arXiv:2308.09442 (2023) OpenAI_2023 OpenAI, O.: Gpt-4 technical report (Mar 2023) papineni2002bleu Papineni, K., Roukos, S., Ward, T., Zhu, W.J.: Bleu: a method for automatic evaluation of machine translation. In: Proceedings of the 40th annual meeting of the Association for Computational Linguistics. pp. 311–318 (2002) shen2024segicl Shen, L., Shang, F., Yang, Y., Huang, X., Xiang, S.: Segicl: A universal in-context learning framework for enhanced segmentation in medical imaging. arXiv preprint arXiv:2403.16578 (2024) tu2024towards Tu, T., Azizi, S., Driess, D., Schaekermann, M., Amin, M., Chang, P.C., Carroll, A., Lau, C., Tanno, R., Ktena, I., et al.: Towards generalist biomedical ai. NEJM AI 1(3), AIoa2300138 (2024) wang2022medclip Wang, Z., Wu, Z., Agarwal, D., Sun, J.: Medclip: Contrastive learning from unpaired medical images and text. arXiv preprint arXiv:2210.10163 (2022) wu2023towards Wu, C., Zhang, X., Zhang, Y., Wang, Y., Xie, W.: Towards generalist foundation model for radiology. arXiv preprint arXiv:2308.02463 (2023) wu2023next Wu, S., Fei, H., Qu, L., Ji, W., Chua, T.S.: Next-gpt: Any-to-any multimodal llm. arXiv preprint arXiv:2309.05519 (2023) ye2023sa Ye, J., Cheng, J., Chen, J., Deng, Z., Li, T., Wang, H., Su, Y., Huang, Z., Chen, J., Jiang, L., et al.: Sa-med2d-20m dataset: Segment anything in 2d medical imaging with 20 million masks. arXiv preprint arXiv:2311.11969 (2023) you2023ferret You, H., Zhang, H., Gan, Z., Du, X., Zhang, B., Wang, Z., Cao, L., Chang, S.F., Yang, Y.: Ferret: Refer and ground anything anywhere at any granularity. arXiv preprint arXiv:2310.07704 (2023) zhan2024anygpt Zhan, J., Dai, J., Ye, J., Zhou, Y., Zhang, D., Liu, Z., Zhang, X., Yuan, R., Zhang, G., Li, L., et al.: Anygpt: Unified multimodal llm with discrete sequence modeling. arXiv preprint arXiv:2402.12226 (2024) zhang2023large Zhang, S., Xu, Y., Usuyama, N., Bagga, J., Tinn, R., Preston, S., Rao, R., Wei, M., Valluri, N., Wong, C., et al.: Large-scale domain-specific pretraining for biomedical vision-language processing. arXiv preprint arXiv:2303.00915 2(3),  6 (2023) zhang2023gpt4roi Zhang, S., Sun, P., Chen, S., Xiao, M., Shao, W., Zhang, W., Chen, K., Luo, P.: Gpt4roi: Instruction tuning large language model on region-of-interest. arXiv preprint arXiv:2307.03601 (2023)
http://arxiv.org/abs/2406.19262v1
20240627153334
A Short Note on the Love Number of Extremal Reissner-Nordstrom and Kerr-Newman Black Holes
[ "Alex Kehagias", "Davide Perrone", "Antonio Riotto" ]
hep-th
[ "hep-th", "astro-ph.CO", "gr-qc" ]
http://arxiv.org/abs/2406.18392v1
20240626143349
Surface parameterisation and spectral synthesis of rapidly rotating stars. Vega as a testbed
[ "Benjamin Montesinos" ]
astro-ph.SR
[ "astro-ph.SR" ]
Vega as a testbed Centro de Astrobiología (CAB) CSIC-INTA, Camino Viejo del Castillo s/n, E-28692, Villanueva de la Cañada, Madrid, Spain bmm@cab.inta-csic.es Spectral synthesis is a powerful tool with which to find the fundamental parameters of stars. Models are usually restricted to single values of temperature and gravity, and assume spherical symmetry. This approximation breaks down for rapidly rotating stars. This paper presents a joint formalism to allow a computation of the stellar structure — namely, the photospheric radius, R, the effective temperature, T_ eff, and gravity, g_ eff — as a function of the colatitude, θ, for rapid rotators with radiative envelopes, and a subsequent method to build the corresponding synthetic spectrum. The structure of the star is computed using a semi-analytical approach, which is easy to implement from a computational point of view and which reproduces very accurately the results of much more complex codes. Once R(θ), T_ eff(θ), and g_ eff(θ) are computed, the suite of codes, atlas and synthe, by R. Kurucz are used to synthesise spectra for a mesh of cells in which the star is divided. The appropriate limb-darkening coefficients are also computed, and the final output spectrum is built for a given inclination of the rotation axis with respect to the line of sight. All the geometrical transformations required are described in detail. The combined formalism has been applied to Vega, a rapidly rotating star almost seen pole-on, as a testbed. The structure reproduces the results from interferometric studies and the synthetic spectrum matches the peculiar shape of the spectral lines well. Contexts where this formalism can be applied are outlined in the final sections. Surface parameterisation and spectral synthesis of rapidly rotating stars Benjamín Montesinos Surface parameterisation and spectral synthesis of rapidly rotating starsThe codes and all the files required to carry out the computations described in this paper are available at <https://github.com/astrobmm/fastrot-spec> Benjamín Montesinos Received 7 March 2024 / Accepted 23 May 2024 ==================================================================================================================================================================================================================================== § INTRODUCTION Spectral synthesis is one of the most powerful techniques to characterise a star. Comparing the high-resolution spectra of a given target with synthetic models usually provides very accurate stellar parameters. The spectroscopic analysis must be complemented with a detailed analysis of the spectral energy distribution, built from photometric observations, and, when feasible, with the use of astrometric and interferometric observations. A vast amount of work has been done in the field of spectral synthesis: an extensive list of the main 1D-LTE codes available can be found in the introduction of the paper by <cit.>. All these codes allow us to compute spectra for a given set of parameters; in particular, single values of the effective temperature, T_ eff, and gravity, log g (other inputs, such as metallicity, [M/H], and microturbulence are also required). In some cases — for example, synthe <cit.>, the codes can be adapted to simulate the spectrum of a rotating star by computing individual surface intensities at different inclinations through the atmosphere, applying the appropriate Doppler shifts, corresponding to the projected rotation speed, sin i, to the emergent spectra. 
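As an illustration of that adaptation, the sketch below (Python; a schematic of the general shift-and-co-add scheme, not the actual synthe implementation) Doppler-shifts the emergent spectrum of each surface element by its line-of-sight velocity and co-adds the contributions with weights proportional to the projected, limb-darkened cell areas.

```python
import numpy as np

C_KMS = 299792.458   # speed of light [km/s]

def observed(wave, rest_flux, v_los_kms):
    """Spectrum of one surface element, Doppler-shifted by its line-of-sight velocity."""
    return np.interp(wave, wave * (1.0 + v_los_kms / C_KMS), rest_flux)

def disc_integrated(wave, cell_spectra, cell_vlos, cell_weights):
    """Co-add the cells; the weights stand for projected, limb-darkened cell areas."""
    total = np.zeros_like(wave, dtype=float)
    for flux, v, w in zip(cell_spectra, cell_vlos, cell_weights):
        total += w * observed(wave, flux, v)
    return total
```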
Single values of T_ eff and log g in modelling a stellar spectrum imply the underlying limitation of spherical symmetry. This approximation breaks down for rapidly rotating stars:[To give an idea of what ‘rapidly rotating’ means, and in anticipation of results that can be obtained with the models presented in this paper, for a star with M/M_⊙=2.0, R/R_⊙=2.5, T_ pole=9000 K, and _ eq=120 km s^-1, one obtains R_ eq/R_ pole=1.05 and T_ eq≃8600 K; these values are significant enough to assess the need of considering the oblateness of the star in this context.] in that regime, the star becomes oblate, and all the relevant photospheric variables, in particular the radius, temperature and gravity, become functions of the latitude, making the problem complex both from the theoretical and computational points of view. Examples of the departure from spherical symmetry are the results of the works — all based on interferometric observations — by <cit.> on Altair (α aql, A7V), who find that R_ eq/R_ pole≃1.282; <cit.> on Achernar (α Eri, B6Vpe), giving a ratio R_ eq/R_ pole≃1.352; and <cit.>, on Vega (α Lyr, A0V), for which R_ eq/R_ pole≃1.13 (Model 3 of that paper). In consequence, the first issue to be tackled before proceeding to the computation of a synthetic spectrum for a rotating star is that of its structure; in particular, finding out the dependence of R, T_ eff, and log g_ eff with latitude; the effective gravity, g_ eff, is defined as the vector sum of the classical gravity and the centrifugal acceleration (see Eqn. 4 in Sect. <ref>). This area of research has a long history, whose starting point can be set in the pioneering works by <cit.>, who found that in barotropic stars the energy flux is proportional to the local effective gravity, leading to T_ eff∝g_ eff^β, with β=0.25. This is the well-known so-called von Zeipel law, which was modified by <cit.>, proposing a smooth dependence, β=0.08, for stars with a convective envelope. β is traditionally called the ‘gravity-darkening exponent’, the term ‘gravity darkening’ encompassing all the phenomena involved when the rotation of the star is considered — polar temperature and gravity larger than the equatorial values, polar radius shorter than the equatorial radius — is the common terminology today. <cit.> discussed the relevance of gravity darkening and warned about the caveats posed by the dependence of the resulting laws on the stellar atmosphere models chosen. The review by <cit.> gives a summary of the advances in modelling rapidly rotating stars in the decades preceding that paper. In recent years, substantial progress has been made. In particular, concerning the work in this paper, we mention ESTER <cit.>. ESTER is the first code computing, in a consistent way, 2D models of fast-rotating stars, including their large-scale flows. A semi-analytical approximation of this code is used in this work. The main goal of this paper is to provide a set of methods, described in as much detail as possible, to carry out from scratch the structure and computation of the synthetic spectrum for a rotating star. To our knowledge, there is no publicly available code to carry out these combined tasks. In particular, the prescription presented here for computing the stellar structure has the advantage of being valid for any rotation and is not restricted to slow rotators, in contrast with the von Zeipel approximation (see Sect. <ref>). 
A good example of the utility of these tools is the work by <cit.>, in which the authors combine the use of synthetic spectra and the ESTER model to carry out a photometric determination of the inclination, rotation rate, and mass of rapidly rotating intermediate-mass stars. The paper is organised as follows: In Sect. <ref>, we describe how a star with an inclination angle, i, with respect to the line of sight is seen by the observer, and how to carry out the projection onto a 2D plane. In Sect. <ref>, we describe how to obtain the relevant parameters for a rotating star required to carry out the spectral synthesis. In Sect. <ref>, we describe how to build the synthetic spectrum of a star where R, T_ eff, and log g_ eff are functions of the latitude. In Sect. <ref>, we apply the whole formalism to Vega as a testbed. Sections <ref> and <ref> include a discussion of the results and some conclusions. Since this paper deals with formalisms of different areas of stellar physics — namely, geometry, structure, spectral synthesis, and limb darkening — we give in each section the basic information and equations, and direct the reader to the appropriate references. § THE GEOMETRY Figure <ref> shows the geometry that is used throughout the paper. Initially, all the calculations are done considering that the rotation axis is perpendicular to the line of sight, which coincides with the z axis (left). The star is then inclined by an angle of α=π/2-i around the x axis, where i is the inclination (i=0, pole-on, i=π/2, equator-on) (right). Polar coordinates (r,θ,ϕ) are used, where θ is the colatitude (0 for latitude π/2, π for latitude -π/2), and ϕ the azimuthal angle. Since the stellar rotation takes place around the y axis, all variables are only functions of colatitude, and are symmetric with respect to the equator. In Appendix <ref>, all the details concerning the projection of the 3D star onto the 2D plane of the sky, and how some quantities are seen from the point of view of the observer, are given. Information on how to compute all the relevant geometrical variables that arise when dealing with an oblate star is also provided. § PARAMETERISATION OF THE STELLAR SURFACE In this work, we follow the model of <cit.> (ER11, hereafter), also called 'ω-model'. A very detailed discussion of its derivation can be found in the work of <cit.>. Their starting point is the fact that the gravity darkening of rapidly rotating stars is not well described by the <cit.> law, parameterised, as we mentioned before, as T_ eff∝g_ eff^β, where T_ eff and g_ eff are the effective temperature and effective gravity, respectively. Their work was triggered by the fact that some interferometric works (see references in ER11) showed that von Zeipel's approach seems to overestimate the temperature difference between the pole and the equator of the star. The formalism presented in ER11 allows the computation of R, T_ eff, and log g_ eff of the photosphere of a rotating star, improving the results obtained following von Zeipel's prescription. Although simple, mainly from a computational point of view, the ER11 model is able to reproduce the results of more complex models; in particular, the above-mentioned ESTER, as can be seen in Fig. 2 of ER11. The model is tailored for stars with radiative envelopes. 
The two basic equations of the model are 1/ω^2 r+1/2 r^2sin^2θ= 1/ω^2+1/2 , that is, the Roche model (see ER11), and cosϑ+lntanϑ/2 = 1/3ω^2 r^3cos^3θ+cosθ+lntanθ/2 , where ω=Ω√(R_ eq^3/GM)=Ω/Ω_ K is the non-dimensional rotation rate, Ω, Ω_ K, and R_ eq are the angular velocity, the Keplerian angular velocity, and the radius, respectively, the last two at the equator, r=R/R_ eq, is the non-dimensional radial coordinate, and θ, as we mentioned, is the colatitude. ϑ is an auxiliar angular variable (see ER11 for details, also concerning the two singularities in Eq. (<ref>) at θ=0 and π/2). Equation (<ref>) provides the values of the photospheric radius, r, as a function of θ; then for each colatitude, r is introduced into Eq. (<ref>) to obtain ϑ. Both equations can be solved by bisection, or by a Newton-Raphson method. Once these two variables, r(θ) and ϑ(θ) are computed, the effective gravity and the effective temperature can be obtained from the following expressions: g_ eff=(-GM/r^2+Ω^2 r sin^2θ)u_r + (Ω^2 r sinθcosθ) u_θ [ T_ eff=(L/4πσ GM)^1/4√(tanϑ/tanθ) g_ eff^1/4; ; =(L/4πσ R_ eq^2)^1/4(1/r^4+ω^4r^2sin^2θ- 2ω^2sin^2θ/r)^1/8√(tanϑ/tanθ) ] We note that Eqs. (<ref>) and (<ref>) include three quantities, namely, the stellar mass and luminosity, and the equatorial radius, which — in particular M and R_ eq — are not usually known with an acceptable degree of accuracy. Even the luminosity, L, is a more subtle parameter to estimate in the case of very flattened stars since the same object, seen pole-on or equator-on, would show to the observer different spectral energy distributions, which would lead to different apparent effective temperatures, and hence luminosities, the reason being that the expression L=4πσ R^2 T_ eff^4 loses its meaning since both temperature and radius are functions of the latitude. In a practical case, when attempting to model a stellar spectrum by building a grid of models, a reasonable range of masses, consistent with the estimated spectral type of the object, can be used in Eq. (<ref>). As for the luminosity and equatorial radius, the first parenthesis of the second expression of Eq. (<ref>), including the exponent 1/4, is basically the equatorial effective temperature, and therefore can be substituted by an estimation of T_ eff^ eq, or alternatively of the polar temperature, T_ eff^ pole, using Eq. (32) of ER11: T^ eq_ eff/T^ pole_ eff=√(2/2+ω^2)(1-ω^2)^1/12exp(-4/3ω^2/(2+ω^2)^3) , building the grid using as inputs a range of values of T_ eff^ eq or T_ eff^ pole that are also consistent with an initial estimate of the spectral type, and iterating until both the interferometric results are reproduced, and/or the synthetic spectrum matches the observed one. In the most common situation, in which interferometric observations are not available, the peculiar shape of some spectral lines (see Sect. <ref>) can give hints about the inclination of the star, and together with an estimate of the projected rotation speed, sin i, iterate using a range of temperatures, until an agreement between the observations and the model is reached. We note that, according to the first expression in Eq. (<ref>), von Zeipel's law, T_ eff∝g_ eff^1/4, is recovered for slow rotations since as ω decreases, ϑ/θ→ 1 (see Eq. (<ref>)). 
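To make the procedure concrete, the following sketch solves the two basic equations of the ω-model given above by bracketed root finding and then evaluates the effective gravity and the temperature profile. It is a minimal illustration, not the code released with this paper: the function and variable names are invented, cell-centre colatitudes are used to sidestep the singular points at θ=0 and π/2, and the temperature profile is normalised to an input polar temperature at the first (nearly polar) cell instead of using the luminosity-dependent prefactor.

import numpy as np
from scipy.optimize import brentq

G = 6.674e-8        # cgs

def omega_model(omega, T_pole, M=None, R_eq=None, n_lat=90):
    # cell-centre colatitudes avoid the singular points theta = 0 and pi/2
    theta = np.radians(np.arange(n_lat) + 0.5)
    r = np.empty(n_lat)
    vth = np.empty(n_lat)
    rhs_const = 1.0 / omega**2 + 0.5
    for k, th in enumerate(theta):
        s2 = np.sin(th)**2
        # Roche equation: 1/(w^2 r) + r^2 sin^2(th)/2 = 1/w^2 + 1/2
        f = lambda x: 1.0 / (omega**2 * x) + 0.5 * x**2 * s2 - rhs_const
        r[k] = brentq(f, 1e-6, 1.0)
        # auxiliary-angle equation: cos(vth) + ln tan(vth/2) = w^2 r^3 cos^3(th)/3 + cos(th) + ln tan(th/2)
        rhs = omega**2 * r[k]**3 * np.cos(th)**3 / 3.0 + np.cos(th) + np.log(np.tan(th / 2.0))
        g = lambda x: np.cos(x) + np.log(np.tan(x / 2.0)) - rhs
        vth[k] = brentq(g, 1e-8, 0.5 * np.pi)
    # dimensionless effective gravity (units of GM/R_eq^2): radial and colatitudinal components
    g_r = -1.0 / r**2 + omega**2 * r * np.sin(theta)**2
    g_th = omega**2 * r * np.sin(theta) * np.cos(theta)
    g_eff = np.hypot(g_r, g_th)
    if M is not None and R_eq is not None:
        g_eff = g_eff * G * M / R_eq**2      # convert to cgs if M [g] and R_eq [cm] are given
    # T_eff profile from the second expression for the temperature above, normalised to
    # T_pole at the first (nearly polar) cell -- an assumed shortcut that replaces the
    # luminosity-dependent prefactor
    w = ((1.0 / r**4 + omega**4 * r**2 * np.sin(theta)**2
          - 2.0 * omega**2 * np.sin(theta)**2 / r)**0.125
         * np.sqrt(np.tan(vth) / np.tan(theta)))
    T_eff = T_pole * w / w[0]
    return theta, r, T_eff, g_eff

theta, r, T, g = omega_model(omega=0.31, T_pole=9000.0)   # roughly the footnote example of Sect. 1
print(f"R_eq/R_pole = {1.0 / r[0]:.3f},  T_pole - T_eq = {T[0] - T[-1]:.0f} K")

For parameters roughly matching the footnote example of the introduction, the printed values land close to the quoted R_eq/R_pole≈1.05 and T_eq≈8600 K, which is a quick consistency check of the implementation.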
As a final remark, we also note that in previous works modelling interferometric data <cit.> the methods involving the calculation of the stellar radius, temperature, and gravity make explicit use of the gravity-darkening law T_ eff∝g_ eff^β, whereas the ER11 formalism used in this work allows us to avoid the whole discussion of what the appropriate value of the exponent, β, is. § SPECTRAL SYNTHESIS The spectral synthesis of a star whose relevant photospheric variables are functions of the latitude is, from a computational point of view, substantially more difficult than the classical single-temperature, single-gravity modelling; however, it is conceptually fairly intuitive and can be carried out by following these steps: * Once the structure is computed according to the prescription described in Sect. <ref>, the star is divided into cells delimited by the intersection of a mesh of parallels and meridians with separations of Δθ=Δϕ=1^∘; that implies 180×360=64800 cells, of which half are visible to the observer. We point out that this discretisation of the stellar surface is uneven, in the sense that the areas of cells near the polar regions are smaller than those of cells near the equator. A discretisation keeping the surface area of all cells constant <cit.> leads to exactly the same results as the ones presented in this work. A finer mesh does not result in any improvement or refinement of the output spectrum. No numerical noise appears in the results from any of the discretisations. * Since all variables, in particular T_ eff and g_ eff, are only functions of latitude, 90 synthetic spectra, corresponding to the cells with colatitudes between 0 and π/2, are computed for the corresponding values of temperature and gravity. The angular speed, metallicity, and microturbulence are fixed. These synthetic spectra, which contain the fluxes in erg cm^2 s^-1 Å^-1, are not rotationally broadened. * After the star is rotated by an angle, α=π/2-i, around the x axis, each individual cell is seen by the observer with a projected area, (Δ A)_ p, a radial velocity, '_z, and an angle, γ, between the line of sight and the normal to the surface element, the latter being relevant for the correction for limb darkening, C_ ld(λ), to be applied (see Appendix <ref> for details of the computation of the limb-darkening coefficients). Taking into account all these factors, the total flux at a given wavelength, λ_j, is F(λ_j)=∑_i=1^N_ cells F_i(λ_j,'_i, z) (Δ A)_i, p C_ ld(γ_i,λ_j) , where F_i(λ_j,'_ z) is the flux at λ_j of the synthetic spectrum computed for the particular values of T_i and log g_i of that cell, redshifted or blueshifted, according to the value of the radial velocity, '_z,i, of the cell (see eqns. (<ref>)-(<ref>) and (<ref>)). The individual synthetic spectra are computed using the codes atlas and synthe <cit.> and the models containing the elemental abundances and the stratification of the stellar atmospheres as a function of temperature, gravity, metallicity, and microturbulence velocity <cit.>. The atlas code allows us to compute a model atmosphere for any value of temperature and gravity from a close model already computed in the Castelli & Kurucz grids. The spectral synthesis is carried out using synthe, with a resolution of λ/Δλ=100 000 at 450 nm (0.0045 nm/pixel). 
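The disc-integration step itself reduces to a weighted sum over the visible cells. The sketch below assumes that the per-colatitude rest-frame spectra, the projected cell areas, the line-of-sight velocities, and μ=cos γ have already been computed (Sect. <ref> and Appendix <ref>), and that the limb-darkening correction is supplied as a callable; the array layout and names are illustrative, not those of any released code.

import numpy as np

C_LIGHT = 2.99792458e5   # km/s

def integrate_spectrum(wave, cell_flux, cell_area_p, cell_vz, cell_mu, ld_correction):
    """wave: (n_lambda,) common wavelength grid [nm]
    cell_flux: (n_cells, n_lambda) rest-frame synthetic flux of each cell
    cell_area_p, cell_vz, cell_mu: (n_cells,) projected area, v'_z [km/s], cos(gamma)
    ld_correction: callable (mu, wave) -> (n_lambda,) limb-darkening factor C_ld."""
    total = np.zeros_like(wave, dtype=float)
    for f, area, vz, mu in zip(cell_flux, cell_area_p, cell_vz, cell_mu):
        if area <= 0.0 or mu <= 0.0:
            continue                                   # cell not visible to the observer
        shifted_wave = wave * (1.0 + vz / C_LIGHT)     # Doppler shift of the cell spectrum
        f_shift = np.interp(wave, shifted_wave, f)     # resample on the common grid
        total += f_shift * area * ld_correction(mu, wave)
    return total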
The GNU-linux version of the codes by <cit.> is used.[The codes, models, and further information can be found at the URL: https://wwwuser.oats.inaf.it/fiorella.castelli/] A grid of 3668 synthetic models with T between 7000 and 20000 K (step 100 K), log g=3.0, 3.5, 4.0, 4.5, and metallicities of [M/H]=-2.5, -2.0, -1.5, -1.0, -0.5, 0.0, and +0.5 was computed beforehand; the microturbulent velocity is 2 km s^-1. For all the cells at a colatitude, θ_i, with parameters (T_i,log g_i), the corresponding synthetic spectrum is computed by linear interpolation between the four closest neighbouring models in the grid, those bracketing at a time the temperature and the gravity of the cell; that is, the four models in the grid — for a given metallicity — (T_j,log g_k), (T_j+1,log g_k), (T_j,log g_k+1), and (T_j+1,log g_k+1), have to fulfil T_j < T_i ≤ T_j+1, log g_j < log g_i ≤log g_j+1. The interpolation is easily carried out in this way: first, two constants are defined as C_ T=T_j+1-T_i/T_j+1-T_j C_ g=log g_j+1-log g_i/log g_j+1-log g_j , then two intermediate models are computed, which are combined to give the final one (T_i,log g_i) for the i-th cell: [ (T_i,log g_j) =C_ T·(T_j,log g_j) +(1-C_ T)·(T_j+1,log g_j); (T_i,log g_j+1)=C_ T·(T_j,log g_j+1)+(1-C_ T)·(T_j+1,log g_j+1); ; (T_i,log g_i)=C_ g·(T_i,log g_j) + (1-C_ g)·(T_i,log g_j+1) ] . § VEGA AS A TESTBED Vega (α Lyr, HD 172167, HIP 91262, HR 7001) is one of the most extensively studied stars. It is well known that it is used as standard for the calibration of several photometric systems <cit.> and that it is surrounded by a debris disc, discovered by <cit.>, which triggered an intensive study at infrared wavelengths (, and references therein). However, this object turned out not to be the perfect standard, showing anomalies in its luminosity <cit.>, some peculiarly shaped absorption lines <cit.>, and its radius <cit.> in comparison with other A0 V stars. Concerning this paper, our interest focuses on the fact that all these anomalies are now explained by the fact that Vega is a rapidly rotating star being seen almost pole-on; that is, with a small inclination angle, as has been shown by a number of interferometric studies <cit.> and spectroscopic analyses <cit.>. Table 1 in <cit.> gives a summary of the values of the projected equatorial velocity, _ eqsin i, the equatorial velocity, _ eq, the inclination angle, i, the polar and equatorial radii, R_ pole and R_ eq, and the rotation period, P, according to different analyses. To check the reliability of the methods described in this paper, we use some of the results of the above-mentioned works to check whether the structural model (Sect. <ref>), and then the spectral synthesis model (Sect. <ref>), can reproduce the observed properties. §.§ Structure and stellar parameters Table <ref> shows in the upper part the input parameters of the model; the inclination, stellar mass, and equatorial radius have been taken from <cit.> (their Model 3, the ‘concordance model’). Since the model also requires the polar temperature and ω as inputs, T_ pole was explored within the uncertainty interval given by Monnier et al. to match the luminosity, and ω was fixed to match the value of sin i derived by <cit.>. The lower part of the table (Col. 2) shows the results of the formalisms described in Sects. <ref> and <ref>; Col. 3 shows, for comparison, some parameters derived from interferometric <cit.> and spectroscopic analyses <cit.>. 
The results derived in this work are in general consistent with those from previous modellings, although some discrepancies are apparent: the temperature drop from pole to equator, 1098 K in our case, is in agreement with that by <cit.> (1160 K); both are substantially smaller than those by <cit.> (>2400 K), and <cit.> (2250 K). Some details about the calculations: the stellar luminosity was computed by adding for all cells the quantity σ (Δ A)_i T_ eff,i^4, where (Δ A)_i and T_ eff,i are the surface area and the effective temperature of the i-th cell; and the average effective temperature was estimated from the expression L=4π R_ aver^2σ T_ eff,aver^4, where R_ aver is an average of the radius, computed in the interval of colatitudes, [0,π/2]. Figure <ref> shows the stellar radius normalised to R_ eq (black) and the temperature (red), plotted against the colatitude for the northern hemisphere (the results for the southern hemisphere are symmetrical). Fig. <ref> shows two colour plots showing the temperature and effective gravity profiles for Vega according to the results of our modelling. §.§ Spectral synthesis In this section, we check whether the results derived in Sect. <ref> together with the formalism described in Sect. <ref> are able to reproduce the peculiar shape of some features of the spectrum of Vega. The high-resolution, high-signal-to-noise spectrum atlas of Vega from <cit.> has been used throughout. We do not intend here to make an analysis of the elemental abundances, which has already been carried out by other authors <cit.>, but rather to make sure that the whole set of procedures described in the previous sections is able to reproduce the peculiar shapes of the absorption lines of the spectrum of Vega. Figure <ref> shows the profiles of 30 lines, from different species, both neutral and ionised. The observed profiles and the results of our modelling are plotted in black and red, respectively. In cyan, the profiles resulting from a single-temperature, single-gravity synthetic spectrum computed with the average parameters, T_ eff^ aver and log g_ eff^ aver, listed in Table <ref>, are also plotted. For the sake of a better graphical display of the whole set of lines, both the observed and the synthetic profiles have been re-scaled and normalised, placing the continuum at intensity 1.0 and the bottom of the profiles at intensity 0.9, whereas the single-temperature, single-gravity profiles have been scaled so that they fit the wings of the observed absorptions. The first five profiles of the left panel, from top to bottom, show rounded shapes, typical of lines broadened with a classical rotation profile; however, the remaining 25 profiles have a variety of shapes, the most extreme cases being those that are almost rectangular, such as Fe i 402.19, 406.80, 449.45 nm and Ba ii 455.40, 493.41 nm, or those showing two deeper components at velocities close to ±_ eqsin i, like Ca i 445.48 nm. It is apparent that in all cases the agreement between the shape of the observed profiles and the results from the formalism described in Sect. <ref> is remarkable. To our knowledge, the only work that also accurately reproduces the peculiar profiles of the Vega spectrum is that from <cit.>. In order to understand the peculiar profile of some lines, we show as an example an analysis of two nearby lines; namely, Fe ii 445.16 nm, which has a rounded shape, and the above-mentioned Ca i 445.48 nm. The left panel of Fig. 
<ref> shows the colour plot of the radial velocity of each surface element for the Vega model. It is well known that the loci of equal radial velocities, in the case of solid-body rotation, as seen by an observer, are lines parallel to the rotation axis projection; that is, all points in the disc with x=constant <cit.> have the same value of radial velocity. Since the star is seen almost pole-on, the regions close to the borders of the disc — the limb, which coincides with the equator — and in particular those with the highest projected rotation speeds, are the ones with the lowest temperatures and gravities. Therefore, the ionisation balance of some species differs from regions with low temperatures and gravities to regions near the pole (see Fig. <ref>). The vertical purple lines superimposed on the colour plot of the star in the left panel of Fig. <ref> delimit 16 strips. Their widths have been computed to fulfil the condition that each one contributes to the full synthetic spectrum with the same amount of flux in the continuum near the lines. The right panel of Fig. <ref> shows the stellar lines (black), with the corresponding model profiles (red) framed in green (Fe ii) and blue (Ca i) boxes, respectively, and the equivalent widths (EWs) of the contributions to the profiles from each one of the 16 strips, using the same colour code. The different ranges of EWs for both lines are apparent: whereas the values for the Fe ii values are more even, leading to a rounded profile, the outer strips dominate the absorption of the Ca i line, producing its peculiar shape. § DISCUSSION §.§ The ω-model versus the von Zeipel approach Despite the fact that the ω-model, and the corresponding spectral synthesis, work well for Vega, it is interesting to point out that <cit.>, using the Roche model and the von Zeipel value of the gravity darkening exponent, β=0.25, also found a good agreement between modelling and observations. <cit.> showed that the value of β that best fit the observations was 0.231±0.028, in agreement with the von Zeipel value; therefore, both the ω-model and the von Zeipel approximation give accurate results for Vega. To show the real potential of the ω-model, it is interesting to explore the case of a much faster rotator for which the β exponent is much less then 0.25. The case of Achernar is a good one to check, since a value of β=0.166 must be used to reproduce the observed results <cit.>. The ω-model, using as inputs — all extracted from Domiciano de Souza et al.'s paper — M/M_⊙=6.1, T_ pole=17124 K, R_ eq/R_⊙=9.17, and ω=0.838, gives T_ eq=12700 K, in excellent agreement with the best fit of the CHARRON RVZ model to the VLTI/PIONIER H band observations, which gives T_ eq=12673 K. In contrast, the von Zeipel model (β=0.25) gives T_ eq=10880 K, almost 1800 K off the value derived from observations, which is nicely reproduced by the ω-model <cit.>. Obviously, that deviation would have a large impact on reproducing the observed spectrum, via spectral synthesis. §.§ Contexts in which this work can be useful Once the whole formalism of the ω-model plus the spectral synthesis have been put together and successfully tested with the paradigmatic case of Vega, and the reassuring case of Achernar mentioned in the previous subsection, it is interesting to point out explicitly in which contexts all this can be useful. With this purpose, we have computed several models whose details are given in Table <ref>. 
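Before turning to these models, it is worth noting how little machinery the comparison of the previous subsection requires: the ω-model equatorial temperature quoted for Achernar follows directly from Eq. (32) of ER11 with only ω and the polar temperature as inputs, as the short check below illustrates (the input numbers are those quoted above; nothing else is assumed).

import numpy as np

def teq_over_tpole(omega):
    # Eq. (32) of ER11 for the pole-to-equator temperature ratio
    return (np.sqrt(2.0 / (2.0 + omega**2)) * (1.0 - omega**2)**(1.0 / 12.0)
            * np.exp(-(4.0 / 3.0) * omega**2 / (2.0 + omega**2)**3))

T_pole, omega = 17124.0, 0.838          # Achernar inputs used in the text
print(f"T_eq = {T_pole * teq_over_tpole(omega):.0f} K")   # ~12700 K, as quoted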
The models share as fixed inputs some of the parameters of the structural model of Vega (see Table <ref>); namely, the stellar mass, polar temperature, equatorial radius, and metallicity. Models in the upper half of Table <ref> have been computed with a fixed inclination, i=6.2^∘, the value obtained for Vega, and five values of ω, resulting in five values of sin i; namely 10, 15, 20, 21.4 (Vega), and 25 km s^-1. Models in the lower half of Table <ref> have been computed with a fixed value of the projected sin i, 21.4 km s^-1, which is again the value we obtained for the Vega model, and five values for the inclination; namely, 5, 6.2 (Vega), 10, 15, and 20 degrees. Figure <ref> shows, as an illustrative example, the synthetic spectra of the two sets of models in a short wavelength interval between 445.0 and 446.0 nm. The upper and lower panels in the figure correspond to the models in the upper and lower parts of Table <ref>. The spectra have been normalised to the intensity at 445.2 nm and contain five lines: Fe ii 445.16, Ca i 445.48, Fe ii 445.58, Ti ii 445.66, and Fe ii 445.91 nm. The colour codes of the spectra — red, black, cyan, purple, and orange — correspond to decreasing values of sin i (upper panel) and increasing values of inclinations (lower panel), the model for Vega being plotted in black. What is interesting in this plot is how sensitive the profiles are to changes in inclination and sin i, a conclusion that can be extended to the full spectral range. In particular, the profiles of the lines with peculiar shapes, as in the cases of Ca i 445.48, Fe ii 445.58, and Fe ii 445.91 nm, change very dramatically as the inclination decreases. Very interesting, too, is the comparison between the behaviour of the normal rounded-shape Fe ii 445.16 nm line and the peculiar Ca i 445.48 nm profile in the upper panel: whereas the Fe ii line behaves as one would expect as the value of sin i increases, the shape and depth of the Ca i line changes drastically. The model plotted as a dotted grey line in the lower panel has been computed for i=5^∘, ω=0.632, assuming the von Zeipel approximation; this model must be compared with the one plotted in red, computed with the same parameters, but under the assumptions of the ω-model. That value of ω would be associated with a β exponent ∼0.197, quite far from β=0.25. As can be inferred from the values of T_ eq/T_ pole for both models in Table <ref>, the equator is almost 400 K cooler when the von Zeipel approximation is used, which results in deeper lines, in particular those with peculiar profiles, leading to erroneous determinations of abundances. This is a good example of the influence of the β exponent on the line shapes and intensities. All this shows the potential of a detailed spectral analysis to find structural and physical parameters and inclinations of this kind of stars. Good examples are the works by <cit.>, <cit.>, and <cit.> disentangling Sirius A and Vega's properties using spectral line profiles, or Fourier analysis. A quantitative analysis of the usefulness of the proposed formalisms is relevant. Regarding the inclination of the star, it is apparent that very clear changes are observed in certain line profiles as i moves in the range between 0 and ∼20 degrees; at larger inclinations, most of the lines are insensitive to this parameter. 
It is easy to prove that the probability of a star having an inclination between i and i+Δ i is P(i,i+Δ i)=sin i Δ i, and therefore the probability of finding a star, among a large set of objects with an inclination in the interval of [0,20] degrees is ∼0.0603. As a first impression, one might consider the whole modelling effort to be disproportionate considering that the number is small; however, a query to the Gaia DR3 catalogue asking how many stars with spectral types between A0 and A9 — for which the methods presented here would be useful — with parallaxes, ϖ, with relative errors of Δϖ/ϖ<0.20, are ∼146 000 (ϖ≥ 2 mas) and ∼231 200 (ϖ≥ 1 mas). In other words, in a sphere with a radius of 1 kpc, we would find around ∼14000 stars in that range of spectral types with inclinations less than 20 degrees. The constraint of -0.037≤ BP-RP ≤+0.377 to bracket the interval A0-A9 has been used.[https://www.pas.rochester.edu/∼emamajek/] § CONCLUSIONS In this paper, we provide a combined method to compute the structure of rapidly rotating stars and build their synthetic spectra. A summary of the main features of the whole formalism follows: * The ω-model by <cit.> has been implemented to compute the relevant parameters of the photosphere of rapidly rotating stars — namely, the radius, R, effective temperature, T_ eff, and effective gravity, g_ eff — as a function of the colatitude, θ. The method, relatively simple from a computational point of view, is able to reproduce the results of more complex models. One of the big advantages of this formalism is that it avoids the discussion, and hence the subsequent computation, or ad hoc assignment, of the appropriate gravity darkening exponent, β. The model is applicable to stars with radiative envelopes (Sect. <ref>). In those situations in which some of the approximations inherent in the ω-model — that is, mass concentrated near the centre of the star and rigid rotation (Roche model) — are no longer valid, the original ESTER model should be used. * A detailed method of how to compute the synthetic spectrum of a rapidly rotating star, at any inclination angle, i, with respect to the line of sight, is presented. The model makes use of the suite of codes, atlas and synthe <cit.>, and the grid of model atmospheres by <cit.> (Sect. <ref> and Appendices <ref> and <ref>). * The combined methods summarised in items 1 and 2 above were applied to the particular case of Vega, obtaining results regarding both the structure and the synthetic spectrum that are compatible with previous works. The fitting of the spectral lines was remarkable, both for those with normal, rounded shapes and those with peculiar profiles (Sect. <ref>). * In addition, Appendix <ref> describes in detail how to treat, from a strict geometrical point of view, all the relevant variables when a rotating star is seen with a given inclination with respect to the line of sight. Although this work has focused on the spectral synthesis of rapid rotators, the tools provided in this paper can be useful in other contexts: * To locate the position of a star in colour-magnitude diagrams, since a star deformed by rapid rotation appears brighter and hotter when it is observed near pole-on <cit.>. * To find and estimate the inclination of the rotation axis with respect to the line of sight in those cases without making use, in the first instance, of interferometric measurements <cit.>. 
This would be useful to search for potential pre-main-sequence stars of spectral types earlier than F hosting Jupiter-like planets. In particular, there is indirect evidence that Herbig Ae/Be stars with low metallicities could be good candidates to host such giant planets <cit.>. The method presented here would allow a detailed metallicity analysis, and a subsequent filtering of targets according to their inclination, which is suitable in the case of low-inclination systems of potential interferometric and/or direct imaging studies. * The role of the inclination is particularly important in modelling accretion processes for young objects of intermediate mass. In the scenario of magnetospheric accretion, the shape and intensity of the spectral lines are strongly dependent on the assumed inclination <cit.>. Regarding the alternative scenario of boundary layer continuum models, the dependence on the inclination is also critical <cit.>. The author is very grateful to the referee, Prof. Michel Rieutord, and his colleagues, Alain Hui-Bon-Hoa and Axel Lazzarotto, for providing very useful comments, suggestions and references that, no doubt about, have improved the contents and scope of the paper. This research has been funded by grants AYA2014-55840-P, PGC2018-101950-B-I00 and PID2021-127289-NB-I00 by the Spanish Ministry of Science and Innovation/State Agency of Research (MCIN/AEI). The author is grateful to Francisco Espinosa-Lara for useful discussions on the ER11 formalism, Antonio Claret for some guiding for the computation of the limb-darkening coefficients, and Almudena Alonso-Herrero, Olga Balsalobre-Ruza, Carlos Eiroa, Jorge Lillo-Box, Ignacio Mendigutía, Enrique Solano and Eva Villaver for their help and comments to several sections of this paper. Special thanks also to Antonio Parras and Sergio Suárez for their work keeping up and running the computing centre. aa § THE GEOMETRY OF THE PROBLEM According to the notation in Fig. <ref>, a point on the stellar surface with coordinates r={[ x=rsinθsinϕ; y=rcosθ; z=rsinθcosϕ ]. when the star is seen equator-on, is transformed after a counterclockwise rotation around the x axis by an angle, α, is done, into r'=(x',y',z') by applying to r the matrix R_x(α); namely, R_x(α)= [ 1 0 0; 0 cosα -sinα; 0 sinα cosα ] r'=(x',y',z')= R_x(α) r . The observer would see all the points on the stellar surface fulfilling the easy constraint, z'= ysinα + zcosα > 0. In the case of an spherical object, the unitary vector, u, attached to any point has the direction of the normal to the surface; however, in an oblate object this is not the case, as can be seen in Fig. <ref>. The normal to the surface at a given point with colatitude θ, is inclined at an angle of ξ=π/2-θ+η with respect to the line of sight before proceeding to apply the rotation by an angle, α. The computation of η is a fairly straightforward geometrical problem, as is illustrated in the figure, in which Δθ has obviously been plotted out of scale: tanη≃(r_2-r_1)/(r Δθ). A differential surface area element at a latitude, θ, can be written as Δ A≃(r Δθ/cosη)·(rsinθ Δϕ), and its associated unitary vector normal to the surface, before the star is rotated, has the following expression: u_A=(cosξsinϕ,sinξ,cosξcosϕ) . Therefore, after applying the rotation, the projected area as seen by the observer would be Δ A multiplied by the z-component of R_x(α)·u_A; namely, (Δ A)_ p = Δ A(sinαsinξ+cosαcosξcosϕ) . 
Concerning the rotation speed of each surface element, v, it has only components x and z, as can be seen in Fig. <ref> (left): ={[ _x=Ω rsinθcosϕ; _y=0; _z=-Ω rsinθsinϕ ]. . After applying the rotation, the component of the velocity in the line of sight is '_z=-Ω r cosαsinθsinϕ . Finally, the knowledge of the angle γ, between the normal to a given surface element and the line of sight, in the rotated system, must be known in order to apply the correction for limb darkening to the synthetic spectra arising from that surface element. In the usual notation for that angle, μ=cosγ, and according to Fig. <ref>, it can be written as μ=cos (u'_ Az/|u'_ A|), and since |u'_ A|=1, the value of cosγ is just the z component of R_x(α)·u_A; namely, μ=cosγ=sinαsinξ+cosαcosξcosϕ . § THE LIMB-DARKENING COEFFICIENTS This appendix shows how the limb-darkening coefficients (LDCs hereafter) and the limb-darkening correction C_ ld(λ) (see eqn. (<ref>) are computed. The work and notation by <cit.> are followed in this section. The LCDs a_k, k=1,4 are defined in such a way that the most general law is adjusted by the following expression: I(μ)/I(1)=1-∑_k=1^4 a_k (1-μ^k/2) . I(1) is the intensity at the centre of the disc, and μ=cosγ (see eqn. (<ref>)), where γ is the angle between the normal to the surface area element and the line of sight. Equation (<ref>) can be defined for the intensity in a given passband, although in our case we are interested in a monochromatic estimate of that quantity for each of the wavelengths covered by the synthetic spectra. <cit.> computed, among others, the LCDs for the photometric Johnson-Cousins UBVRI filters (Table 17 available at ). In order to estimate the limb-darkening correction at each wavelength, λ, we linearly interpolate the LCDs in this way: a_k(λ)= a_k(λ_ F1) + [a_k(λ_ F2)-a_k(λ_ F1)/λ_ F2-λ_ F1] (λ - λ_ F1) , where F1 and F2 stand for ‘Filter 1’ and ‘Filter 2’ and λ_ F1, λ_ F2 are the effective wavelengths of the filters adjacent to the wavelength, λ, under consideration; that is, λ_ F1 < λ≤λ_ F2. The wavelengths assigned to the UBVRI filters for our computations are 360, 440, 550, 690, and 950 nm, respectively. Following that notation, the correction for limb-darkening applied to the fluxes at a given wavelength is C_ ld(λ) = 1-∑_k=1^4 a_k(λ) (1-μ^k/2) .
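As an illustration of how the appendices translate into practice, the sketch below evaluates, for a single surface cell, the projected area, the line-of-sight velocity, and μ=cos γ from the expressions of Appendix <ref>, together with the wavelength-interpolated limb-darkening correction of Appendix <ref>. The function names are invented and the four-term limb-darkening coefficients per filter are assumed to be supplied by the user (e.g. from the tabulations cited above); this is a sketch, not a released implementation.

import numpy as np

def cell_geometry(r, dr_dtheta, theta, phi, alpha, omega_rot, dtheta, dphi):
    """Projected area, line-of-sight velocity and mu = cos(gamma) of one surface cell.
    r, dr_dtheta: local radius and its derivative with colatitude (same units);
    alpha = pi/2 - i; omega_rot: angular speed Omega in rad/s (not the dimensionless omega)."""
    eta = np.arctan2(dr_dtheta, r)                 # tilt of the normal: tan(eta) ~ dr / (r dtheta)
    xi = 0.5 * np.pi - theta + eta
    dA = (r * dtheta / np.cos(eta)) * (r * np.sin(theta) * dphi)
    mu = np.sin(alpha) * np.sin(xi) + np.cos(alpha) * np.cos(xi) * np.cos(phi)
    dA_proj = dA * mu                              # projected area (Delta A)_p of Appendix A
    vz = -omega_rot * r * np.cos(alpha) * np.sin(theta) * np.sin(phi)   # v'_z of Appendix A
    return dA_proj, vz, mu

# effective wavelengths (nm) adopted for the UBVRI filters in Appendix B
WAVE_FILTERS = np.array([360.0, 440.0, 550.0, 690.0, 950.0])

def limb_darkening(mu, wave, a_filters):
    """a_filters: (5, 4) limb-darkening coefficients a_k for U, B, V, R, I.
    Returns C_ld(mu, wave) for an array of wavelengths [nm]."""
    a_lambda = np.array([np.interp(wave, WAVE_FILTERS, a_filters[:, k]) for k in range(4)]).T
    k = np.arange(1, 5)
    return 1.0 - np.sum(a_lambda * (1.0 - mu**(k / 2.0)), axis=-1)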
http://arxiv.org/abs/2406.18709v1
20240626191836
SpY: A Context-Based Approach to Spacecraft Component Detection
[ "Trupti Mahendrakar", "Ryan T. White", "Madhur Tiwari" ]
cs.CV
[ "cs.CV" ]
Member, IEEE Florida Institute of Technology, Melbourne, FL 32901, USA Florida Institute of Technology, Melbourne, FL 32901, USA Florida Institute of Technology, Melbourne, FL 32901, USA Manuscript received XXXXX 00, 0000; revised XXXXX 00, 0000; accepted XXXXX 00, 0000. This work was supported in part by the NVIDIA Applied Research Accelerator Program. (Corresponding author: T. Mahendrakar). Trupti Mahendrakar (e-mail: mailto:tmahendrakar2020@my.fit.edutmahendrakar2020@my.fit.edu) and Madhur Tiwari (e-mail: mailto:mtiwari@fit.edumtiwari@fit.edu) are with the Department of Aerospace, Physics and Space Sciences and Ryan T. White (e-mail: mailto:rwhite@fit.edurwhite@fit.edu) is with the Department of Mathematics and Systems Engineering at Florida Institute of Technology, Melbourne, FL 32901 USA. The datasets and SpY algorithm will be made publicly available. MAHENDRAKAR ET AL.SPY SpY: A Context-Based Approach to Spacecraft Component Detection MADHUR TIWARI July 1, 2024 =============================================================== § ABSTRACT This paper focuses on autonomously characterizing components such as solar panels, body panels, antennas, and thrusters of an unknown resident space object (RSO) using camera feed to aid autonomous on-orbit servicing (OOS) and active debris removal. Significant research has been conducted in this area using convolutional neural networks (CNNs). While CNNs are powerful at learning patterns and performing object detection, they struggle with missed detections and misclassifications in environments different from the training data, making them unreliable for safety in high-stakes missions like OOS. Additionally, failures exhibited by CNNs are often easily rectifiable by humans using commonsense reasoning and contextual knowledge. Embedding such reasoning in an object detector could improve detection accuracy. To validate this hypothesis, this paper presents an end-to-end object detector called SpaceYOLOv2 (SpY), which leverages the generalizability of CNNs while incorporating contextual knowledge using traditional computer vision techniques. SpY consists of two main components: a shape detector and the SpaceYOLO classifier (SYC). The shape detector uses CNNs to detect primitive shapes of RSOs and SYC associates these shapes with contextual knowledge, such as color and texture, to classify them as spacecraft components or “unknown” if the detected shape is uncertain. SpY’s modular architecture allows customizable usage of contextual knowledge to improve detection performance, or SYC as a secondary fail-safe classifier with an existing spacecraft component detector. Performance evaluations on hardware-in-the-loop images of a mock-up spacecraft demonstrate that SpY is accurate and an ensemble of SpY with a previously used CNN spacecraft component detector improved the performance by 23.4% in recall, demonstrating enhanced safety for CNNs in vision-based navigation tasks. Object detection, Aritificial intelligence, Satellite navigation systems, Identification, Machine vision § INTRODUCTION With the rapid proliferation of space debris containing retired and defunct satellites, autonomous on-orbit servicing (OOS) and active debris removal (ADR) have gained significant interest. Many of the satellites requiring OOS and ADR are large, unknown, and non-cooperative by nature. They are not equipped with capture interfaces, may be tumbling, and may have endured structural damage. Despite efforts in the literature, this remains an unsolved problem. 
To tackle this issue, our previous work focused on autonomously characterizing these unknown targets using convolutional neural network (CNN) based object detectors <cit.> to identify potential capture points and keep-out zones. Due to the lack of real spacecraft imagery and to replicate a real-life unknown resident space object (RSO) scenario, the CNNs were trained on a synthetic dataset of random spacecraft images and tested on a never-before-seen hardware-in-the-loop images of a mock-up spacecraft. The components detected include solar panels, antennas, body panels, and thrusters. The 3D positions of these components were resolved using several camera observers with CNN detections are fed into an artificial potential field guidance algorithm <cit.> to enable safe RPO trajectories for the chaser spacecraft. Laboratory experimental test results of this concept, discussed in <cit.> revealed that the success of this type of mission is highly dependent on the performance of the CNN object detector. CNN-based object detectors rely heavily on the similarity and patterns seen in the training dataset. This reliance often results in missed detections or misclassifications in real-world scenarios, where varying environmental conditions such as lighting and viewing angles and dissimilarity in training and testing dataset are prevalent. Both missed detections and misclassifications pose a safety threat in using CNNs. Humans use context-based reasoning to detect spacecraft components (e.g., recognizing a long, protruding, rectangular object pointed towards the sun as a solar panel). To encode this untapped human reason into an autonomous system, this work presents SpaceYOLOv2 (SpY), an end-to-end, human-directed, context-based object detector. This work builds upon SpaceYOLO <cit.>, which conducted a survey of aerospace professionals revealing that geometry, texture, and color are the top criteria for identifying spacecraft components by humans. SpaceYOLO demonstrated the proof-of-concept feasibility of using the YOLOv5 CNN to detect primitive shapes such as circles and rectangles in spacecraft images and then classifying them using texture features. However, it lacked a complete end-to-end object detection and performance levels required for practical use. SpY includes a drastically more robust shape detector and a spacecraft component classifier (SYC) based on shape, color, and texture feature extraction methods from traditional computer vision. The shape detector and SYC work together to incorporate contextual reasoning for component detection. SpY is much more robust, achieves competitive accuracies, and has several fault tolerance mechanisms for spacecraft component detection. The main contributions of this work include: * An end-to-end object detection pipeline that incorporates contextual knowledge. * A new tool for creating a shape detector training dataset (explained in Section III). * Expanded SYC to incorporate entropy-based (texture) and color-based classifications. * Made SYC modular to use as a secondary classifier with any spacecraft component object detector. The rest of the paper is structured as follows: Section II discusses the background, including related missions and CNN-based computer vision for on-orbit applications. Section III provides an overview of the methods evaluated and the datasets used in this study. Section IV discusses the SpY pipeline. Section V includes metrics used in this study and presents the results and analysis. 
Finally, the conclusion is given in Section VI. § BACKGROUND §.§ Related Missions The concepts of ADR and OOS have been integral to the space industry since its inception. Manned OOS missions, such as those performed by the space shuttle, have demonstrated the benefits of repairing and extending the life of satellites like the Hubble Space Telescope, Palapa B, and Westar VI <cit.>. Subsequently, robotic OOS missions—starting with ETS VII by JAXA in 1997 <cit.>, and followed by XSS-10 <cit.>, XSS-11 <cit.>, ANGELS <cit.>, and Orbital Express <cit.> by NASA, DARPA, and AFRL—have showcased OOS capabilities with cooperative spacecraft. These spacecraft maintained stable attitudes, were equipped with load-bearing capture interfaces for robotic manipulators, and featured visible fiducial markings for relative navigation. In 2020 and 2021, Northrop Grumman’s MEV-1 and MEV-2 <cit.> demonstrated the first commercial OOS with GEO satellites IS-901 and IS-10-02. Despite IS-901 being non-cooperative and tumbling, the presence of distinct apogee kick motors and launch adapter rings (common GEO spacecraft features) facilitated docking. However, rendezvous and proximity operations (RPO) around unknown spacecraft without these distinct docking features remain challenging. SpY aims to address this by using contextual descriptions to classify features as potential docking or keep-out zones, facilitating safe docking and capture. For example, conical features like apogee kick motors or flat body panels would be suitable for docking, while thin, fragile solar panels should be avoided. §.§ CNN for RPO and OOS Tasks Over the past 15 years, CNNs have revolutionized computer vision. The development of large datasets has led to more efficient and accurate algorithms. Computing resources have become cheaper and faster, particularly for highly parallelized CNNs accelerated by graphics processing units (GPUs). Recent advancements in low size, weight, and power (SWaP) computers equipped with small GPUs or field-programmable gate arrays (FPGAs) <cit.> have enabled the deployment of CNNs on spacecraft. Numerous studies propose CNNs for in-space use. We focus on object detection for spacecraft components (solar arrays, antennas, thrusters, and satellite bodies), where the goal is to predict a bounding box around each component and classify what is in 2D image frames. However, much research has been done on other vision tasks like pose estimation and instance segmentation. Notable are participating works <cit.> in ESA’s spacecraft pose estimation challenges for non-cooperative spacecraft based on the SPEED <cit.> and SPEED+ <cit.> datasets. Further, tasks are sometimes combined: numerous studies use object detection to find a region of interest (ROI) containing the entire target spacecraft in a camera frame, which can be extracted and subjected to downstream analysis. For example, past research has used YOLOv3 to detect CubeSats on a Raspberry Pi <cit.>, U-Net for spacecraft detection/segmentation <cit.>, EfficientNet to detect satellites in the SPARK dataset <cit.>, Faster R-CNN <cit.>, SSD <cit.>, YOLOv3 <cit.>, YOLOv5 <cit.> to detect the RSO in the SPEED datasets. While these results are sufficient for some applications, estimating the pose and locating entire spacecraft falls short of enabling autonomous docking with a non-cooperative spacecraft, as there remain collision risks with fragile components of a target. 
Hence, our reference mission <cit.> requires a finer-grained characterization of spacecraft components that can detect fragile components and identify safe docking points. Multiple works have pursued the satellite component detection problem–typically focusing on a subset of antennas, satellite bodies, solar panels, radiators, and thrusters. Several works used R-CNN <cit.> and Faster R-CNN <cit.> to detect components of known satellites by training on synthetic <cit.> and real-life images <cit.>. Satellite component detection for RPO applications must work in real-time using onboard computers to avoid lag times associated with ground control. Our prior work demonstrated Faster R-CNN is too computationally expensive for on-board use <cit.>, and later works moved to more efficient single-stage object detectors, primarily YOLO-based methods <cit.>. While these techniques highlight the power of CNNs in generalizability and their ability to learn patterns and similarities, they also acknowledge drawbacks such as misclassifications and missed detections, especially in scenarios with poor coverage in satellite image training datasets. However, many of the errors made by CNNs are easy for human experts to avoid on inspection. SpY leverages the strengths of CNNs while adding contextual knowledge through traditional computer vision techniques to encode human-like decision processes into satellite component detection. § DATASETS AND METHODS There are three distinct datasets used in this work. Web satellite dataset (WSD), shape detector dataset (SDD) and hardware in the loop (HIL) dataset. WSD and SDD are used for training and validation only while HIL is used for testing only. §.§ Web Satellite Dataset (WSD) WSD, as described in <cit.>, consists of both real and synthetic images of spacecraft sourced from the Internet. The selection criteria for these images are as follows: * The objects must be identifiable, with each component distinguishable from the others * The shape of each component must accurately represent a real spacecraft component. * Images in the dataset must not be repeated. The WSD contains a total of 1,231 images, each labeled for antennas, body panels, solar panels, and thrusters. All components also have bounding box annotations as illustrated in Fig. <ref>. The dataset is split into 80% training and 20% testing images. §.§ Shape Detector Dataset (SDD) The shape detector concept was first introduced in SpaceYOLO <cit.> to train YOLOv5 to identify primitive shapes such as circles and rectangles. However, the shape dataset used in SpaceYOLO lacked triangles and rings, other commonly occurring shapes in the SpaceYOLO survey. Furthermore, the original dataset was built manually. This work introduces an automated shape generator tool using the open-source Pycairo 2D graphics library that generates images with 2D circles, rectangles, triangles, and rings complete with bounding box and shape class annotations. The shapes are printed individually in frames as well as printed together as collages. The shape generator outputs 2D shapes in 640px-by-640px frames and annotation boxes attached to each shape. It randomly selects gray, white, or black backgrounds and assigns the shapes different hues of gray. The SDD includes images with circles and rings with radii 5-10% of the frame size, rectangles with widths and heights 5-50%, and triangles with side lengths 5-10%. Once the images and labels are generated, they are augmented with random rotations, shears, blurs, and noise. 
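A minimal sketch of such a generator is given below for a single shape per frame. It follows the description above (640 px frames, gray/white/black backgrounds, gray shapes, the quoted size ranges) using the Pycairo API, but the collage mode, the augmentations, clipping of boxes at the frame edge, and the exact annotation format are omitted, and all helper names are assumptions.

import math, random
import cairo

FRAME = 640
CLASSES = ["circle", "rectangle", "triangle", "ring"]

def draw_random_shape(path):
    surface = cairo.ImageSurface(cairo.FORMAT_RGB24, FRAME, FRAME)
    ctx = cairo.Context(surface)
    bg = random.choice([0.0, 0.5, 1.0])            # black, gray or white background
    ctx.set_source_rgb(bg, bg, bg); ctx.paint()
    shade = random.uniform(0.2, 0.8)               # shapes get a random gray hue
    ctx.set_source_rgb(shade, shade, shade)
    cls = random.choice(CLASSES)
    cx, cy = random.uniform(0.2, 0.8) * FRAME, random.uniform(0.2, 0.8) * FRAME
    if cls in ("circle", "ring"):
        r = random.uniform(0.05, 0.10) * FRAME
        ctx.arc(cx, cy, r, 0.0, 2.0 * math.pi)
        if cls == "ring":
            ctx.set_line_width(0.3 * r); ctx.stroke()
        else:
            ctx.fill()
        box = (cx - r, cy - r, cx + r, cy + r)
    elif cls == "rectangle":
        w = random.uniform(0.05, 0.50) * FRAME
        h = random.uniform(0.05, 0.50) * FRAME
        ctx.rectangle(cx - w / 2, cy - h / 2, w, h); ctx.fill()
        box = (cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2)
    else:                                          # triangle with side ~5-10% of the frame
        s = random.uniform(0.05, 0.10) * FRAME
        ctx.move_to(cx, cy - s); ctx.line_to(cx - s, cy + s); ctx.line_to(cx + s, cy + s)
        ctx.close_path(); ctx.fill()
        box = (cx - s, cy - s, cx + s, cy + s)
    surface.write_to_png(path)
    # YOLO-style annotation: class index plus normalised centre coordinates, width, height
    x0, y0, x1, y1 = box
    return (CLASSES.index(cls), (x0 + x1) / 2 / FRAME, (y0 + y1) / 2 / FRAME,
            (x1 - x0) / FRAME, (y1 - y0) / FRAME)

print(draw_random_shape("sdd_example.png"))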
A sample of this dataset is shown in Fig. <ref>. §.§ HIL Dataset The ORION testbed <cit.> at the Autonomy Lab in Florida Tech was used to generate HIL images <cit.>. The testbed features a maneuver kinematics platform hosting two vehicles, with one on a gantry capable of moving in x and y directions. Both vehicles can pitch +/-30° and yaw infinitely, with one serving as the target satellite (mock-up) referred to as the resident space object (RSO). The RSO has configurable solar panels, antennas, and thrusters, which can be easily swapped out. The satellite body is wrapped in a material that looks like commonly-used multi-layer insulation (MLI). For this work, the solar panels were interchanged among decagonal, horizontal, and longitudinal configurations while leaving the rest of the features unchanged as shown in Fig. <ref>. The lab environment itself features highly absorbent black paint on windows, doors, ceiling, and floors. Artificial sunlight is created using a Hilio D12 LED litepanel with adjustable power from 0% to 100%, generating a maximum intensity of 5600K daylight balanced temperature. Using the dynamic lighting capability, each mock-up configuration was subjected to four different lighting conditions, and videos of the RSO rotating and the chaser rendezvous approach are summarized in Table <ref>. Images from the videos are extracted at 1 frame per second and all visible solar panels, body panels, antenna and thruster are annotated. §.§ Methods This work compares several variations of SpY and SYC (described in full details in Section V) with baseline models from the literature, each trained on WSD and/or SDD. Further, an ensemble that combines SpY with a standard CNN-based object detector will be evaluated. Each method includes an object detector and may or may not include a standalone classifier that contributes to class predictions. Object detectors include YOLO (YOLOv5 <cit.> trained on the WSD) and the shape detector (YOLOv5 trained on the SDD). YOLOv5 is selected since it demonstrates the best performance among comparable algorithms on HIL with sufficient framerates on current spaceflight-like hardware <cit.>. Secondary classifiers include MobileNetV2 <cit.> and SYC. MobileNetV2 is selected because it is a lightweight architecture designed for low-SWaP hardware. The methods evaluated in this work are described in Table <ref>. All are compared on their performance in detecting components in HIL images not seen during training. § SPACEYOLOV2 (SPY) OVERVIEW This section provides a detailed overview of the SpY architecture. Like any object detection architecture, SpY takes an image as an input and outputs predicted bounding boxes that localize and classify objects present in the image. Specifically, SpY identifies antennas, bodies, solar panels, thrusters, and unknown objects. It is further equipped to identify white radiators or other user-defined components, but this functionality is not measured in this work due to a lack of real-world testing data. The unknown object class enables SpY to conclude a well-defined feature exists in a predicted region without making a class prediction. This ensures a component that cannot be definitively classified will not be misclassified. In the context of downstream navigation and guidance tasks, this serves as a safety feature. Shown in Fig. <ref>, The SpY architecture begins with pre-processing blocks, followed by the shape detector that identifies and localizes shapes in the images. 
Next is SYC, which first uses specialized feature extractors to compute shape, color, and texture class scores for each shape’s bounding box in the original image. SYC then encodes human-like reasoning based on these features to classify the shapes as specific spacecraft components. Each of these three main parts of SpY are discussed in the forthcoming subsections. §.§ Pre-processing Blocks The pre-processing steps include gamma correction, region of interest (ROI) extraction, and color space conversion. Each block can be turned on or off as needed and the color space converter block supports the four color spaces (HSV, RGB, YCbCr, grayscale) as needed for individual applications. §.§.§ Gamma Correction Gamma correction <cit.> is a nonlinear operation used to encode and decode luminance in an image, enhancing contrast. This is especially useful for spacecraft in low lighting conditions to better define the edges of geometry. For spacecraft imagery sensitive to lighting, a threshold of γ=0.8 was selected to brighten the images. However, the gamma value does not dynamically change with the sun’s reflection angle on the spacecraft, which is a limitation that future work will address. §.§.§ ROI Extractor The image frame could have background details like the Earth or another spacecraft. In the HIL dataset, there is background clutter in the lab that can affect object detector performance. Our approach uses a high-pass Gaussian filter for background subtraction and the Suzuki85 <cit.> contour detection algorithm to segment out the RSO, and extract a ROI tightly focused on the RSO. This is shown in Fig. <ref>. To avoid clipping out important details, we ensured the ROI extractor does not eliminate any area contained in the ground truth bounding boxes from any image in the HIL datasets. §.§.§ Color Space Converter If the input image has 3 channels, the color space converter can convert the image or the cropped image (if the ROI extractor is on) into one of four user-selected color spaces: grayscale, YCbCr, HSV, or RGB. The output from this block is directly fed into the shape detector. Each of these color spaces has unique properties and advantages in terms of separating chroma content (grayscale, YCbCr), luminance (YCbCr), or decoupling hue, saturation, and value (HSV). Depending on the imagery for the individual application of SpY, the choice of color space for images fed to the shape detection could significantly impact object detection performance. §.§ Shape Detector Typical YOLO models <cit.> optimize a loss function to learn how to predict the boxes using three parts. The bounding box loss (L_bbox) encourages accurate component localization. The objectness loss (L_obj) ensures the model predicts bounding boxes that contain objects. The classification performance (L_cls) encourages correct class predictions. The loss is a weighted sum of these with hyperparameters λ_1 and λ_2: L=λ_1 L_bbox + λ_2 L_obj + L_cls SpY takes a different approach. Its shape detector is trained on 2D shape images from the SDD and tested on spacecraft images. Unlike ordinary YOLO, the goal of the shape detector is not to directly detect spacecraft components but rather to detect shape primitives within the region bounded by the spacecraft's silhouette. These detected bounding boxes will be classified by their satellite component class by the context-based SYC discussed in full details in the next section. 
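Returning to the pre-processing blocks described at the start of this section, the sketch below strings together gamma correction, ROI extraction, and colour-space conversion using OpenCV, whose findContours routine implements the Suzuki85 border-following algorithm. Apart from γ=0.8, the parameter choices (blur kernel, thresholding) and the function names are assumptions rather than the paper's exact settings.

import cv2
import numpy as np

def gamma_correct(img_bgr, gamma=0.8):
    # gamma < 1 brightens the frame, helping to define edges under low illumination
    return np.clip(255.0 * (img_bgr / 255.0) ** gamma, 0, 255).astype(np.uint8)

def extract_roi(img_bgr, blur_ksize=51):
    gray = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2GRAY)
    # high-pass Gaussian filtering as a simple background subtraction
    highpass = cv2.subtract(gray, cv2.GaussianBlur(gray, (blur_ksize, blur_ksize), 0))
    _, mask = cv2.threshold(highpass, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    # cv2.findContours implements the Suzuki85 border-following algorithm
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return img_bgr                       # nothing found: fall back to the full frame
    x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
    return img_bgr[y:y + h, x:x + w]

def convert_color(img_bgr, space="hsv"):
    codes = {"hsv": cv2.COLOR_BGR2HSV, "ycbcr": cv2.COLOR_BGR2YCrCb,
             "gray": cv2.COLOR_BGR2GRAY, "rgb": cv2.COLOR_BGR2RGB}
    return cv2.cvtColor(img_bgr, codes[space])

frame = cv2.imread("rso_frame.png")          # hypothetical HIL frame
roi = convert_color(extract_roi(gamma_correct(frame)), "hsv")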
Our goals extend beyond high-quality detection: we are further concerned with detecting all components that are present. Therefore, we modify the YOLO loss to train the shape detector to predict bounding boxes covering all components. Since our downstream classifier can label an object as "unknown," this effectively reduces missed detections without increasing false positives. This conservative approach ensures SpY's predictions are safe for use in downstream visual navigation tasks. For each image processed by the shape detector, we compute the fraction of the ground truth boxes covered by the detected shapes. This performance ratio is termed the shape detector overlap, denoted SD_overlap, and is computed for each image as: SD_overlap = |GT ∩ ⋃_i=1^n_pred p_i| / |GT|, where n_pred is the number of predicted boxes, p_i are the predicted boxes, and GT is the union of all ground truth boxes. Fig. <ref> visualizes SD_overlap. GT is shown as a white mask in Fig. <ref>, and the portion of it covered by the detected bounding boxes is shown in black in Fig. <ref>. In this case, the shape detector's predictions overlap nearly all of the ground truth bounding boxes, indicating good coverage. The standard data-driven loss function of YOLO is modified to subtract the mean SD_overlap in each training batch to penalize the shape detector for failing to detect portions of the ground truth bounding boxes: L_SD = L - (λ_3/m) ∑_i=1^m SD_overlap,i, where m is the training batch size. We use λ_3=1. §.§ SpaceYOLOv2 Classifier (SYC) Inspired by a survey of aerospace professionals <cit.>, we next encode human reasoning into the pipeline by assigning class scores, based on shape, color, and texture, to the bounding boxes detected by the shape detector. These scores are used by SYC to assign the final class predictions to the bounding boxes from the shape detector. §.§.§ Shape Scorer Shapes are important for human reasoning about satellite components. Solar panels are typically rectangular and thin, corresponding to rectangular and ring shapes. Antennas are circular and concave, matching ring and circular geometries. The body is a cuboid or cylindrical structure, indicating a rectangular shape, while thrusters are conical (triangular) and ring-shaped, corresponding to triangles and rings. The shape scorer incorporates contextual shape-based knowledge by assigning a shape class score s_class to bounding box predictions from the shape detector. The scores use 1s to indicate that a shape can be any component, while 2s emphasize the most likely components. For example, s_thruster = 2 for a detected triangle since it is most likely a thruster. Full details are shown in Table <ref>. §.§.§ Color Scorer Another key cue for human satellite component detection is color. For example, blue objects are likely solar arrays and silver objects are more likely to be bodies or antennas. The color scorer extracts predicted bounding boxes from the original image and analyzes their color information to encode this simple reasoning by assigning color class scores c_class to each bounding box. The bounding box is first converted to HSV color space since its decoupled hue, saturation, and value make colors easy to distinguish. We define six colors based on HSV ranges that coincide with human perception: blue (for solar panels), white (radiators), silver (body), two different intensities of gray (gray1 for antenna/body and gray2 for thruster/body), and black (for background or unknown). These ranges were extracted from HIL images not used during training or testing.
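As a rough illustration of how this HSV-range bookkeeping might look in code, the sketch below segments a cropped bounding box into named ranges and maps the resulting percentages to per-class scores; the specific HSV bounds and the class mapping are placeholder assumptions, not the calibrated values extracted from the HIL images or the exact entries of Table <ref>.

import cv2
import numpy as np

# Placeholder HSV ranges (OpenCV convention: H in [0, 179], S and V in [0, 255]).
COLOR_RANGES = {
    "blue":   ((100,  80,  80), (130, 255, 255)),
    "white":  ((  0,   0, 200), (179,  40, 255)),
    "silver": ((  0,   0, 140), (179,  40, 200)),
    "gray1":  ((  0,   0,  90), (179,  40, 140)),
    "gray2":  ((  0,   0,  50), (179,  40,  90)),
    "black":  ((  0,   0,   0), (179, 255,  50)),
}

def color_percentages(bgr_crop):
    # Fraction of pixels in the cropped bounding box falling inside each HSV range.
    hsv = cv2.cvtColor(bgr_crop, cv2.COLOR_BGR2HSV)
    n_pixels = hsv.shape[0] * hsv.shape[1]
    return {name: cv2.inRange(hsv, np.array(lo), np.array(hi)).sum() / (255.0 * n_pixels)
            for name, (lo, hi) in COLOR_RANGES.items()}

def color_class_scores(p):
    # Illustrative mapping of color percentages to component class scores.
    solar = p["blue"] + (p["white"] if p["white"] > 0.5 else 0.0)  # fold white into solar when dominant
    return {"solar": solar,
            "antenna": p["gray1"],
            "body": (p["silver"] + p["gray1"] + p["gray2"]) / 3,
            "thruster": p["gray2"],
            "unknown": p["black"]}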
Bounding box pixels are then segmented into these six color ranges and we compute the percentage of each color p_color. These percentages are used to compute color class scores c_class for each bounding box. The class scores are computed as the mean percentage of the colors associated with each feature, as shown in Table <ref>. Since there is no white radiator in the HIL dataset and the back of the solar panels is white, the color scorer is modified to combine the white radiator probability with the solar panel probability if the white percentage is greater than 0.5 (p_white>0.5) for our testing below. This modification ensures that the absence of white radiators in the dataset does not negatively impact the classification performance for the solar component. §.§.§ Texture Scorer The third feature commonly used by humans to detect satellite features is texture. The texture scorer extracts bounding boxes in labeled HIL images and converts them to grayscale. It then computes two measures of texture common in image processing, the variance and entropy of the pixel intensities <cit.>: σ^2_bbox = (1/n_pixels) ∑_i=1^n_pixels (x_i-x̅)^2, h_bbox = -1000 ∑_i=1^n_pixels x_i log_2(x_i), where n_pixels is the number of pixels in the bounding box, x_i is the pixel intensity, and x̅ is the mean pixel intensity. The entropy values are multiplied by 1000 to match the order of magnitude of the variance, ensuring that we maintain higher fidelity and avoid losing information due to truncation. Both measures correspond to texture, but there are subtle differences <cit.>. Variance indicates the degree of variation or contrast within the bounding box, which is effective for measuring homogeneous textures like smooth pixel intensity gradients with high-magnitude changes. Entropy is an effective measure of the degree to which there are high-frequency changes in pixel intensity, such as sharp boundaries and heterogeneous surfaces. Next, we compute texture class scores for variance v_class and entropy e_class. To establish the link between texture and object class, the variance and entropy of annotated bounding boxes are computed and histograms for each class are developed based on real-world HIL images, because they exhibit realistic pixel-level details, unlike some of the often over-smoothed synthetic images in WSD. Histograms are shown in Figure <ref>. We note that variance and entropy skew inversely to one another and that the class histograms exhibit different patterns, underscoring the complementary nature of the two measures. After a performance comparison with bin sizes ranging from 10 to 100, it was determined that the Freedman-Diaconis rule <cit.> is optimal for our purposes. Hence, the number of uniform bins in each class histogram for each texture measure is Bin Count = 2·IQR/√(n_class), where n_class is the number of components of the class in the dataset and IQR is the interquartile range of the specified metric for the specified class. The histograms span values from 0 to 10000 for variance and from 0 to 8000 for entropy. For an input bounding box, the variance σ^2_bbox is computed, the corresponding bin is determined in each class histogram, and the variance relative frequency vr_class for that variance is computed as vr_class(σ^2_bbox) = f_class(σ^2_bbox)/∑_i=1^4 f_class_i(σ^2_bbox), where the classes assessed for texture are antenna, body, solar, and thruster. Entropy relative frequencies er_class are computed similarly. In practice, we reduce compute costs by creating a look-up table for variances σ^2_bbox∈[0,10000] and entropies h_bbox∈[0,8000].
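The texture measures and histogram lookup just described could be sketched as follows; the intensity normalization inside the entropy computation and the offline construction of the class histograms are our own assumptions for illustration.

import numpy as np

def texture_measures(gray_crop):
    # gray_crop: 2-D array of grayscale pixel intensities for one bounding box.
    x = gray_crop.astype(np.float64).ravel()
    variance = x.var()
    p = x / max(x.sum(), 1e-12)                          # assumed normalization before the entropy sum
    entropy = -1000.0 * np.sum(p * np.log2(p + 1e-12))   # scaled by 1000 to match the variance magnitude
    return variance, entropy

def relative_frequencies(value, class_histograms, bin_edges):
    # class_histograms: {class_name: per-bin counts} built offline from annotated HIL crops.
    idx = int(np.clip(np.digitize(value, bin_edges) - 1, 0, len(bin_edges) - 2))
    freqs = {c: hist[idx] for c, hist in class_histograms.items()}
    total = sum(freqs.values()) or 1.0
    return {c: f / total for c, f in freqs.items()}      # vr_class or er_class for this bounding box

Pre-computing these relative frequencies over the full variance and entropy ranges, as the text notes, turns the bin search into a simple table lookup at inference time.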
The HIL dataset used to develop these scores has an imbalanced number of components. There are 741 antennas, 1692 solar panels, 966 body annotations, and 320 thrusters. Hence, solar panels dominate over other components. To remove this bias, texture class scores are multiplied by the ratio of the total number of solar panels (1692) to the number of objects present in the respective class to compute the final texture class scores: v_class = 1692·vr_class/(# objects in class), e_class = 1692·er_class/(# objects in class). §.§.§ SYC Predictions At this stage, the shape detector has predicted bounding boxes classified by shape, and the feature extractors have computed shape (s_class), color (c_class), and texture (v_class and e_class) class scores for each bounding box. The last piece of SYC uses these contextual scores to predict the satellite component class for each bounding box through a rule-based approach. Two voting techniques are used to combine the scores into a final set of class scores. They are predictive soft voting (PSV) and multi-voting (MUV): PSV_class = s_shape(c_class + v_class + e_class), MUV_class = s_shape(v_class + e_class)c_class. SYC uses Algorithm <ref> to ensemble these class scores into a final class prediction for SpY. Alternative approaches are used for classification when YOLO is paired with SpY or SYC because YOLO tends to be significantly better at detecting satellite bodies. This is because the body often includes attached antennas, thrusters, or solar panels that obscure the shape of the body or split it into what looks like several shapes. YOLO is not reliant on shape, color, or texture and is better able to reason from less interpretable cues present in the input images. When SYC is used as a secondary classifier for YOLO in the YOLO+SYC method, SYC body class predictions are simply ignored. Inspired by SatSplatYOLO <cit.>, the SpY+YOLO ensemble combines the data-driven YOLO predictions and context-based SpY predictions as follows. * Use YOLO detections for the body component and ignore SpY body predictions. * For overlapping YOLO and SpY boxes (IoU > 0.5), perform a confidence score-weighted average of the box centers and dimensions. * Calculate the new confidence score as the mean of SpY and YOLO confidences. * Retain other YOLO or SpY boxes as they are. This fuses the strengths of each algorithm by allowing YOLO to detect bodies and taking input from SpY/SYC only when it tends to outperform YOLO. § RESULTS AND ANALYSIS This section discusses the metrics used to evaluate the methods and the model training processes. Next is SpY hyperparameter tuning and experimental results. Last is an analysis of the strengths and failure modes of several variations of SpY as they relate to baseline methods. §.§ Metrics We use several metrics to evaluate the performance of SpY and the other methods in Table <ref>. A true positive (TP) is defined as a predicted detection with sufficiently high intersection over union (IoU) with a ground truth bounding box that is classified correctly. Any other detection is a false positive (FP), and a false negative (FN) is a failure to detect a ground truth object. The counts of these types of detections allow us to compute the precision (P), recall (R), and F1 score: P = TP/(TP+FP), R = TP/(TP+FN), F1 = 2PR/(P+R). Precision is the fraction of positive detections that are correct. Recall is the fraction of ground truth objects that are correctly detected. The F1 score is the harmonic mean of precision and recall.
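As a small illustration of the TP/FP/FN bookkeeping behind these metrics, the following sketch greedily matches predictions to ground truth boxes at a fixed IoU threshold; the greedy strategy and the box format are assumptions for illustration, not the evaluation code used in this work.

def iou(a, b):
    # Boxes as (x1, y1, x2, y2).
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda t: (t[2] - t[0]) * (t[3] - t[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def precision_recall_f1(preds, gts, iou_thr=0.5):
    # preds, gts: lists of (box, class_label); one-to-one greedy matching by IoU and class.
    matched, tp = set(), 0
    for pbox, pcls in preds:
        best_j, best_iou = None, iou_thr
        for j, (gbox, gcls) in enumerate(gts):
            if j in matched or gcls != pcls:
                continue
            overlap = iou(pbox, gbox)
            if overlap >= best_iou:
                best_j, best_iou = j, overlap
        if best_j is not None:
            matched.add(best_j)
            tp += 1
    fp, fn = len(preds) - tp, len(gts) - tp
    p = tp / (tp + fp) if tp + fp else 0.0
    r = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f1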
We additionally use the standard mAP@0.5 object detection metric as a one-number summary of overall object detection performance. mAP is the mean of the average precision (AP) calculated over the N classes, where AP_i is the area under the precision-recall curve for class i. The 0.5 represents the intersection over union (IoU) threshold required for a true positive <cit.>. mAP = 1/N ∑_i=1^N AP_i. While we seek high precision and mAP, we focus most strongly on recall because it captures whether we misclassify satellite components or fail to detect them entirely. For our use-cases in navigation and guidance for close-proximity operations, failures to detect hazards are the most detrimental errors. §.§ Training YOLO, i.e., YOLOv5 (small) trained on the WSD, has a validation mAP@0.5 of 0.587 for detecting antennas, bodies, solar panels, and thrusters <cit.>. The shape detector is trained on grayscale SDD images with intensities replicated in 3 color channels. This enables the shape detector to run inference on different color spaces. The resulting shape detector has a validation mAP@0.5 of 0.947 for detecting circles, rectangles, rings, and triangles. Further analysis compared SD_overlap for the YOLO and SpY architectures on the HIL dataset. The analysis reveals that both share similar overlap performance, indicating they are comparable. However, SpY has far more detections both inside (1.32×) and outside (2.69×) the ground truth area (region of interest), indicating noisier detections with the SpY weights than with the YOLO weights. This is because YOLO is trained to identify only spacecraft components, while SpY's shape detector is designed to identify any primitive geometry, making it sensitive to all objects, including non-relevant ones like gantry rails and curtain creases in the HIL dataset. For the YOLO+MN method, MobileNetV2 was trained on cropped images of components from the WSD dataset. In training, the classifier attained an accuracy of 0.89 and an F1 score of 0.89. §.§ SpY Optimal Hyperparameters Using the trained YOLO weights, a grid search was conducted on the HIL datasets to optimize YOLO+SYC. A total of 592 combinations of hyperparameters were tested: 37 pairs of entropy and variance bin sizes, ROI extractor (on/off), gamma corrector (on/off), and 4 color spaces for YOLO (at inference time). Metrics were computed across the 12 HIL datasets and 4 satellite component classes. Optimal hyperparameters were chosen by selecting the model with the highest F1 score without abnormally low scores on individual HIL datasets or components. The resulting model showed YOLO performs best in the RGB color space, consistent with its training data in the WSD. The optimal configuration for SYC uses the Freedman-Diaconis rule <cit.> for variance and entropy bin counts, with neither gamma correction nor ROI extraction used in preprocessing. For SpY, a separate grid search of the pre-processing and shape detector hyperparameters was performed with the SYC hyperparameters selected above. Best performance on our data is achieved with neither gamma correction nor ROI extraction in pre-processing and with grayscale input to the shape detector, matching the grayscale training data in the SDD. Applications with imagery different from ours could benefit from different settings. The public SpY codebase enables task-specific optimizations. §.§ Experimental Results This section provides experimental results comparing several variations of the context-based SpY/SYC with the purely data-driven baseline models outlined in Table <ref>.
All quantitative results are summarized in Table <ref>. The first column group of Table <ref> includes F1, mAP@0.5, and recall of the methods across all satellite component classes. The ensemble of SpY and YOLO has the highest recall by a significant margin, indicating the fewest objects not detected. On the other hand, YOLO has the highest F1 and mAP scores, indicating higher precision and localization performance. SpY+YOLO is second best in these metrics with just 0.011 lower F1. To investigate the performance of SpY further, component-wise recall analysis is tabulated for each model in the second set of columns of Table <ref>. SpY has the lowest recall for solar panels and the body, while the ensemble of SpY and YOLO has the highest. The low recall for solar panels and the body is due to SpY's extreme sensitivity to shapes, which YOLO does not share. SpY identifies even the smallest features, such as an OptiTrack marker, and classifies uncertain components as unknown, validating its fault tolerance. SpY detects smaller rectangles in LSP and triangles in DSP. These are correctly classified as solar panels, but their IoU with ground truth boxes is low, so they are considered false positives. In contrast, DSP's decagonal structures are correctly identified and classified, resulting in true positives. As shown in the last column group in Table <ref>, LSP detections are 3.3 times worse than DSP detections. Future work should improve recall by merging detection boxes that form a connected cluster within a small, bounded region, since such boxes likely belong to the same feature. SpY has the worst recall performance for the body due to its non-homogeneous nature. Sometimes small rectangular regions between the edges of the body and a solar panel are detected and correctly classified as body, but they are technically false positives, just like the small LSP solar panel detections. However, depending on the RPO application, knowing that body panels exist, along with knowing which regions within the body are free of other components, is more valuable than detecting what may just be the central portion of the spacecraft. Such "false positives" would be beneficial in this context. §.§ Misclassification Analysis Next, we analyze incorrect classifications. Misclassifications are tallied in Table <ref>. The first column shows the total misclassifications for each method, and the second column shows the total misclassifications excluding bodies. When all classes are considered, YOLO and SpY+YOLO have the fewest misclassifications. However, when the body class is suppressed, SpY/SYC methods have far fewer misclassifications than the data-driven YOLO methods. In summary, the experimental results and error analysis indicate that SpY can detect components, but the combination of SpY with an additional CNN-based object detector, such as the ensemble of YOLO and SpY, is generally most effective. Lastly, we discuss the use of secondary classifiers with YOLO. MobileNetV2 does not improve YOLO's performance by any metric. On the other hand, SYC boosts YOLO's recall and precision (excluding body) in numerous cases with its context-based analysis. Since YOLO simultaneously learns to predict bounding boxes and object classes, it reasons globally from entire input images. In contrast, MobileNetV2 reasons solely from pixels within bounding boxes extracted by YOLO, losing the capacity to reason globally. We hypothesize that such global reasoning is critical to CNN classification performance but less so for SYC, which only needs local information.
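To make the SpY+YOLO fusion rule underlying these ensemble results concrete, the sketch below implements the four-step procedure from Section V; the detection format, the choice to keep YOLO's class label for merged boxes, and the corner-wise weighted average (equivalent to averaging centers and dimensions with the same weights) are our own conventions, not the released implementation.

def fuse_spy_yolo(yolo_dets, spy_dets, iou_thr=0.5):
    # Detections as dicts: {"box": (x1, y1, x2, y2), "cls": str, "conf": float}.
    def iou(a, b):
        ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
        ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
        ar = lambda t: (t[2] - t[0]) * (t[3] - t[1])
        u = ar(a) + ar(b) - inter
        return inter / u if u > 0 else 0.0

    fused, used_spy = [], set()
    for y in yolo_dets:
        if y["cls"] == "body":                    # step 1: trust YOLO for bodies, ignore SpY bodies
            fused.append(y)
            continue
        partner = None
        for i, s in enumerate(spy_dets):
            if i not in used_spy and s["cls"] != "body" and iou(y["box"], s["box"]) > iou_thr:
                partner = i
                break
        if partner is None:
            fused.append(y)                       # step 4: keep unmatched YOLO boxes as-is
            continue
        s = spy_dets[partner]
        used_spy.add(partner)
        wy, ws = y["conf"], s["conf"]
        box = tuple((wy * yb + ws * sb) / (wy + ws) for yb, sb in zip(y["box"], s["box"]))  # step 2
        fused.append({"box": box, "cls": y["cls"], "conf": (wy + ws) / 2.0})                # step 3
    # steps 1 and 4: keep remaining non-body SpY boxes
    fused += [s for i, s in enumerate(spy_dets) if i not in used_spy and s["cls"] != "body"]
    return fused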
§ CONCLUSION This work presents an end-to-end context-based satellite component detector, SpY, that infuses modern CNN-based methods with human-inspired reasoning capabilities for increased accuracy and fault tolerance in downstream visual navigation and guidance tasks. SpY is made up of a YOLOv5 object detector trained to detect primitive shapes within input imagery and SYC, which leverages color and texture information to classify those shapes as antennas, satellite bodies, solar arrays, thrusters, or "unknown." SpY demonstrated that it can effectively identify spacecraft components while providing reasoning for its detections. For example, it detects an antenna by first identifying a circular object and then classifying it based on shape, color, and texture features. Test performance comparisons of various models on HIL data revealed that SpY is very sensitive to shapes, leading to an increased number of false positives. However, an ensemble of SpY with YOLO has similar object detection performance and significantly higher recall than YOLO itself. Further, SpY's capacity to label detections as unknown allows it to avoid feeding incorrect information to guidance and navigation systems. These advantages establish that the YOLO+SpY ensemble is significantly more fault tolerant than purely data-driven methods. Further, while CNN-based object detectors are very effective, they are not easily explainable. Methods like PEEK <cit.> and Grad-CAM <cit.> have made strides in understanding how CNNs make their decisions by finding patterns in the hidden states of neural networks with reference to the input pixel regions. In fact, PEEK is class agnostic, unlike Grad-CAM, and has been used to look into the CNN layers of the YOLO model. The step-by-step SpY decisions provide interpretability that refers to simple features like shape, color, and texture. Ongoing research in hybrid CNN/rules-based vision systems like SpY should cross-reference these complementary approaches (PEEK in the case of SpY, due to its class-agnostic nature) to enhance the explainability and design of in-space computer vision systems. PEEK could also help prune the shape detector's CNN layers significantly to make it even more computationally efficient for SWaP hardware than it already is. The SpY+YOLO ensemble has the capacity to combine human-guided contextual detection and pure CNN-based detections. It is the best prospect for safe and autonomous vision-based navigation for RPO around unknown satellites. § ACKNOWLEDGMENT The authors would like to thank Drew Takeda, Markus Wilde, Mackenzie Meni, Minh Nguyen, Andrew Ekblad, Steven Wyatt, Nehru Attzs and Seema Putane for their helpful comments prior to submission.
http://arxiv.org/abs/2406.18534v1
20240626175930
Towards Compositionality in Concept Learning
[ "Adam Stein", "Aaditya Naik", "Yinjun Wu", "Mayur Naik", "Eric Wong" ]
cs.CL
[ "cs.CL", "cs.LG" ]
Towards Compositionality in Concept Learning. Adam Stein, Aaditya Naik, Yinjun Wu, Mayur Naik, Eric Wong. Affiliations: Department of Computer and Information Science, University of Pennsylvania, Pennsylvania, USA (Stein, A. Naik, M. Naik, Wong); School of Computer Science, Peking University, Beijing, China (Wu). Correspondence: Adam Stein, steinad@seas.upenn.edu. Keywords: Machine Learning, ICML. § ABSTRACT Concept-based interpretability methods offer a lens into the internals of foundation models by decomposing their embeddings into high-level concepts. These concept representations are most useful when they are compositional, meaning that the individual concepts compose to explain the full sample. We show that existing unsupervised concept extraction methods find concepts which are not compositional. To automatically discover compositional concept representations, we identify two salient properties of such representations, and propose Compositional Concept Extraction (CCE) for finding concepts which obey these properties. We evaluate CCE on five different datasets over image and text data. Our evaluation shows that CCE finds more compositional concept representations than baselines and yields better accuracy on four downstream classification tasks. [Code and data are available at <https://github.com/adaminsky/compositional_concepts>.] § INTRODUCTION Foundation models continue to enable impressive performance gains across a variety of domains, tasks, and data modalities <cit.>. However, their black-box nature severely limits the ability to debug, monitor, control, and trust them <cit.>. Concept-based explanations <cit.> are a promising approach that seeks to explain a model's behavior using individual concepts such as object attributes (e.g. striped) or linguistic sentiment (e.g. happiness). Decomposing a model's learned representation can derive these concepts. For instance, a model's embedding of a dog image may decompose into the sum of concept vectors representing its fur, snout, and tail. Existing works based on methods such as PCA <cit.> or KMeans <cit.> extract such concept vectors reasonably well for basic concepts. For instance, Figure <ref> shows images from the CUB <cit.> dataset containing concepts extracted by PCA from the CLIP <cit.> model. These techniques are able to correctly extract the representations of concepts like white birds and small birds; however, composing them by adding their representations does not yield the representation of the concept of small white birds. The compositionality of concepts is vital for several use cases. First, model predictions can be explained by combining concepts <cit.>. Compositional concepts also allow for editing fine-grained model behavior, like improving the truthfulness of an LLM without compromising other behaviors <cit.>. Models can also be trained to compose basic concepts for new tasks, e.g. using concepts for beak shapes, wing colors, and environments to classify bird species <cit.>. In this paper, we study the unsupervised extraction of compositional concepts. Existing work does not directly evaluate the compositionality of extracted concepts, but rather focuses on the individual concept representations. We therefore evaluate the compositionality of concepts extracted by existing unsupervised approaches. For this purpose, we first validate the compositionality of ground-truth representations of concepts in controlled settings. We observe that concepts can be grouped into attributes, where each attribute consists of concepts over some common property, such as the color of objects or the shape of objects.
Concepts from different attributes (e.g. blue and cube) can be composed, while those from the same attribute (e.g. red and green) cannot. We also observe that the concepts from different attributes are roughly orthogonal, while those from the same attribute are not. We prove in a generalized setting that these properties are crucial for the compositionality of concepts. Since existing approaches do not enforce these properties, they often extract non-composable concept representations. To extract compositional concepts in an unsupervised manner, we propose Compositional Concept Extraction (CCE). Our key insight is to search for entire subspaces of concepts at once instead of individual concepts, allowing CCE to enforce the aforementioned properties of compositional concepts. We show that CCE recovers the representations of known compositional concepts better than existing approaches, that it can discover compositional concepts in existing image and text datasets, and that the discovered concepts improve downstream classification accuracy. We thus summarize the contributions of our paper: * We study concept-based explanations of foundation models from the lens of compositionality, a property desirable for many use-cases. We observe that concept representations extracted by state-of-the-art methods fail to compose, and set out to remedy this problem. * We validate that models can in fact represent concepts compositionally in embedding space. We identify two salient properties of compositional concept representations that existing methods fail to satisfy. * We prove in a generalized setting that the identified properties are necessary for compositionality. We present a novel method called Compositional Concept Extraction (CCE) that guarantees to yield concept representations that satisfy these properties by construction. * We demonstrate that CCE extracts more compositional concepts than baselines on vision and language datasets, and that they improve downstream performance. § CONCEPTS AND COMPOSITIONALITY Concept Representations. In machine learning, concepts are symbols that are assigned some human-interpretable meaning, often used to explain predictions made by models. A concept extractor E extracts concepts from the intermediate representation of some pretrained model M over a dataset D. E(M, D) thus yields a set of concept vectors representing the concepts C = { c_1, …, c_i }. Concept vectors are denoted as R(c), where R: 𝒞→^d is the concept representation function, 𝒞 is the set of all possible concepts, and ^d is an embedding space of some dimension d. The set of extracted concepts C can be grouped into mutually exclusive attributes A_1, …, A_k, each containing concepts about some common property, such that C = ⋃_i=1^k A_i. To measure the presence (or degree of expression) of a concept in a sample's embedding, we borrow the following definition of concept score from <cit.>. (Concept Score) For a concept c∈𝒞 and concept representation function R: 𝒞→^d, a sample embedding z∈^d has concept score s(z, c) = S_cos(z, R(c)), where S_cos is the cosine similarity function. Existing work makes use of concept scores to quantify the presence of concepts on a per-sample basis. This has uses in several applications, such as creating concept bottleneck models where a sample's embedding is converted to concept scores used for classification <cit.>, and sorting samples by a concept <cit.>. Compositionality. Following work on compositional representations <cit.> and pretrained embeddings <cit.>, we define the compositionality of concept representations.
(Compositional Concept Representations) For concepts c_i, c_j ∈𝒞, the concept representation R: 𝒞→^d is compositional if, for some w_c_i, w_c_j∈^+, R(c_i ∪ c_j) = w_c_iR(c_i) + w_c_jR(c_j). In other words, the representation of the composition of concepts corresponds to the weighted sum of the individual concept vectors in the embedding space. Furthermore, concept scores for concepts satisfying Definition <ref> also behave compositionally, since each concept score quantifies the presence of that concept in a sample. For compositional concepts c_i, c_j ∈𝒞, the concept score of their composition c_k = c_i ∪ c_j over a sample embedding z ∈^d is the composition of the concept scores of c_i and c_j, weighted by w_c_i, w_c_j∈^+: s(z, c_k) = w_c_i s(z, c_i) + w_c_j s(z, c_j). Since concept scores are used for several downstream tasks discussed above, this property about the compositionality of concept scores can simplify such tasks and improve the overall performance on them. Besides finding compositional concepts, we also want to explain embeddings based on the concepts which compose them. Prior work also performs a decomposition into a sum of concept representations <cit.>, but we modify the definition of such a decomposition so that a sample embedding is composed of only the concept representations that are truly present for the sample. (Concept-based Decomposition) Consider a sample that is associated with a set of concepts C ⊆𝒞, such that C contains exactly one concept from each attribute A_i. A concept representation R: 𝒞→^d decomposes that sample's embedding z_i ∈^d if it can be expressed as the weighted sum of the sample's associated concepts: z_i = ∑_c ∈ Cλ_i,c R(c), such that λ_i,c > 0. As an example, consider the CLEVR dataset <cit.> consisting of images of objects of different shapes and colors. A concept extractor for a vision model may extract the set of concepts C_CLEVR = {{red}, {blue}, {cube}, {sphere}}. C_CLEVR can also be grouped into attributes A_1 = {{red}, {blue}} and A_2 = {{cube}, {sphere}} containing color and shape concepts respectively. As such, a composite concept like {red, sphere} can be represented as the weighted sum of R({red}) and R({sphere}). § EVALUATING CONCEPT COMPOSITIONALITY In this section, we validate the compositionality of ground-truth concept representations and evaluate the same for concepts extracted using existing approaches. We first discuss our controlled setting and show that concept representations from the CLIP model are compositional. We then evaluate the compositionality of concepts extracted by existing approaches. Finally, we outline the necessary properties of compositional concept representations. §.§ Setup In order to validate the compositionality of ground-truth concepts, we focus on concepts extracted from subsets of three datasets, CLEVR <cit.>, CUB <cit.>, and a truthfulness dataset <cit.>, all of which have labelled attributes with compositional structure. We follow a setup similar to <cit.> for the synthetic CLEVR <cit.> dataset and consider images with single objects labelled as one of three shapes (sphere, cube, or cylinder) and one of three colors (red, green, or blue). We also consider a subset of the CUB dataset consisting of bird images labelled as one of three colors and one of three sizes. Finally, we consider a subset of a truthfulness dataset <cit.> consisting of facts relating to one of three topics and labelled true or false.
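Before evaluating compositionality, a short sketch makes the preceding definitions concrete: ground-truth concept vectors as mean embeddings (the surrogate used in the next subsection), concept scores as cosine similarities (Definition <ref>), and the linear probe used to check whether composite concepts are predictable from component scores. The use of scikit-learn, the variable names, the hypothetical labels, and the absence of a train/test split are simplifying assumptions.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import average_precision_score

def concept_vector(Z, mask):
    # Surrogate ground-truth representation: mean embedding of samples having the concept.
    return Z[mask].mean(axis=0)

def concept_scores(Z, v):
    # Cosine similarity between each sample embedding and the concept vector.
    Zn = Z / np.linalg.norm(Z, axis=1, keepdims=True)
    return Zn @ (v / np.linalg.norm(v))

def composition_average_precision(Z, has_red, has_sphere, has_red_sphere):
    # Z: (n, d) embeddings; has_*: boolean arrays (hypothetical labels for two base concepts
    # and their composition). Train a linear model on the two base concept scores and measure
    # how well it predicts the presence of the composite concept.
    X = np.stack([concept_scores(Z, concept_vector(Z, has_red)),
                  concept_scores(Z, concept_vector(Z, has_sphere))], axis=1)
    clf = LogisticRegression().fit(X, has_red_sphere)
    return average_precision_score(has_red_sphere, clf.predict_proba(X)[:, 1])

Averaging this score over all composite concepts gives the MAP values reported in the following subsections.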
§.§ Ground-Truth Concept Compositionality We evaluate the compositionality of ground-truth concept representations learned by the CLIP model over each labelled dataset. Since these representations are not provided, for each concept, we consider the mean of the model's embeddings for samples belonging to that concept as a surrogate of its true representation <cit.>. For example, for the CLEVR dataset, we extract the ground-truth representation of the red concept by calculating the mean of all sample embeddings of images with red objects. We similarly extract the ground-truth representations for the other two color concepts, the three shape concepts, and composite concepts like {red, sphere}, for a total of 15 concepts. We repeat this process for each dataset. As stated in Lemma <ref>, the concept score for a composite of two concepts is the weighted sum of the concept scores of each concept. This implies that a linear model should be able to predict the concept score for a composed concept given the concept scores for each of the base concepts. We thus train a linear model to predict the presence or absence of a composed concept given its base concepts. We measure the average precision of the model for each composed concept, and report the mean average precision (MAP) score in Table <ref> for each dataset. We see that in all cases, the ground truth (GT) concepts have high MAP (up to 0.971) when predicting concept compositions from their components, meaning the ground-truth concept representations are reasonably compositional. §.§ Compositionality Issues with Existing Methods We next study the compositionality of concept representations discovered by existing unsupervised concept extraction methods. We train a linear model similar to the one described in Section <ref>, but with concepts extracted by baseline methods instead of the ground truths. From the MAP results in Table <ref>, we see that all the baselines have significantly lower compositionality than the ground truth. This is the case even for techniques that extract the concepts reasonably well, i.e. where the extracted concepts are able to discriminate between positive and negative samples of that concept. For each dataset and concept extraction method, we calculate the ROC-AUC score to measure the ability of the extracted concept to perform such a discrimination. We provide the full ROC-AUC results in Appendix <ref>. In the case of NMF, despite this score averaging as high as 0.907 on one of the datasets, the extracted concepts are not compositional. This implies that finding concept representations simply based on their ability to discriminate positive and negative samples of a concept does not mean that those representations will compose as expected. We further demonstrate this point with a toy illustration in Figure <ref>. This figure depicts four perfectly composed concepts at the top, and four incorrectly composed concepts at the bottom, even though each concept is perfectly discriminative of the samples with the concept. Therefore, we must ensure that we explicitly extract compositional concepts. §.§ Desired Properties of Compositional Concepts To extract compositional concepts, we must first identify characteristics of such concepts. Since the ground-truth concepts were compositional, we investigate the salient characteristics of those concepts. Consider the ground-truth concepts for the CLEVR dataset.
In order to understand the relationship between different ground-truth concepts and their compositionality, we center the sample embeddings and visualize cosine similarities between pairs of these concepts in Figure <ref>. We observe that the ground-truth representations of color concepts are roughly orthogonal (cosine similarity near 0) to those of shape concepts. In contrast, the representations of concepts within the same attribute, such as the red and blue concepts, are non-orthogonal. Furthermore, the orthogonal concepts are also those that can compose to form new concepts, since they lie in different attributes. For instance, the red and sphere concepts are orthogonal, and can compose to form the {red, sphere} concept, while the red concept cannot compose with the blue concept. We visualize the same for the other two datasets in Appendix <ref>, and empirically observe the following trend over all three datasets: concept representations from different attributes are roughly orthogonal while those from the same attribute are non-orthogonal. Also, the orthogonal concepts tend to be compositional, while the non-orthogonal ones cannot be composed. Orthogonality is a generally helpful property for several use cases, such as disentangling concepts in embedding space <cit.>. Some approaches therefore try to enforce orthogonality on the concepts being extracted. Table <ref> summarizes existing unsupervised approaches for concept extraction and whether each method enforces orthogonality constraints (Ortho.) between concepts of different attributes and allows for non-orthogonality (Corr.) between those of the same attribute. We see that these approaches allow for only one of the two, but not both. We now formally prove that the observed properties regarding concept compositionality hold in a generalized setting. For some dataset, consider two attributes A and A', where A has l concepts c_1, …, c_l and A' has l' concepts c'_1, …, c'_l'. Assuming that for each compositional concept c={c_i,c'_j} its representation v_i,j follows a spherical normal distribution with zero mean and unit covariance, i.e. v_i,j ∼ N(0, 𝐈^d), the following statements are true with high probability for a large dimension d: * There exist c_1, c_2 ∈ A and c'_1, c'_2 ∈ A' such that the representations of these base concepts are non-orthogonal. * For all c_1∈ A and c_2∈ A', the representations of c_1 and c_2 are orthogonal with high probability. We show the proof in Appendix <ref>. The takeaway from this result is that compositional concepts will be roughly orthogonal, while concepts of the same attribute may not be orthogonal. In addition, we show in Corollary <ref> that, given concepts which follow the consequent of the above theorem, the concepts will have compositional concept representations, meaning the representations of composite concepts consist of a sum of their component base concept representations, as defined in Definition <ref>. We leverage this to design an unsupervised concept extraction method which can find compositional concepts when they exist. § COMPOSITIONAL CONCEPT EXTRACTION (CCE) To achieve this orthogonality property between concepts, we propose CCE, summarized in Algorithm <ref> and visualized in Figure <ref>. As the outer loop of the algorithm suggests, once we find concepts for an attribute in a subspace P, we remove that subspace using orthogonal rejection and find concepts in a new subspace.
This enforces orthogonality between the discovered subspaces, thus respecting the orthogonality property described in Section 3. To discover concepts within each attribute, we employ a two-step process consisting of LearnSubspace and LearnConcepts, as illustrated in Figure <ref>. The LearnSubspace step, shown on the left, is given a clustering of the data (in terms of centroids V) and optimizes a subspace, defined by P∈^d× S, so that the data in this subspace (ZP) becomes well clustered according to the fixed centroids V. In the next step, LearnConcepts, shown on the right, we identify concepts by performing spherical K-Means clustering on ZP, the data within subspace P. The clustering is thus performed within a learned subspace, and the subspace is learned according to the learned clustering. Therefore, we jointly learn the subspace P and the clustering centroids V. Specifically, for LearnSubspace, we employ the Silhouette score <cit.> to quantify how well clustered the projected data ZP is given a cluster assignment L determined by the centroids from spherical K-Means clustering. The Silhouette score measures the ratio of the average within-cluster distance to the average between-cluster distance. Since the Silhouette score is differentiable, once we fix a clustering L from LearnConcepts, we perform a step of gradient ascent in LearnSubspace to increase the Silhouette score. Thus, we solve the following optimization problem by iteratively fixing P to learn L (with LearnConcepts) and then fixing L to learn P by a gradient step (with LearnSubspace) until convergence: max_P, L Sil(ZP, L). We further observe that simply maximizing the above objective leads to overfitting, since the cluster centroids learned by LearnConcepts, when projected back to the original space, may not correspond to cluster centroids in the original space. Therefore, in the LearnSubspace step we additionally try to match the cluster centroids learned within the subspace, projected back to the original space, to the centroids of the clusters in the original space. This is integrated into the full objective function as a regularization term, i.e.: max_P, L (Sil(ZP, L) + ∑_k S_cos(C_k P^T, Ĉ_k)), where C_k represents the k-th clustering centroid in the subspace P, while Ĉ_k = (1/∑_i 1[L_i = k]) ∑_i 1[L_i = k] Z_i represents the corresponding clustering centroid in the original space. § EXPERIMENTS §.§ Experimental Setup Datasets and Models. We evaluate CCE using five datasets across vision and language settings: CLEVR <cit.> (vision), two further vision datasets <cit.>, <cit.>, and two language datasets <cit.>, <cit.>. We perform experiments on both controlled and full settings. In the controlled setting, we follow the same configuration as Section <ref> for the three datasets described there. Further information on our datasets is included in Appendix <ref>. The full setting considers all samples from four of the datasets. For the image datasets, we obtain sample representations from the CLIP model <cit.>, while for the NLP dataset this is achieved with the Llama-2 13B Chat model <cit.>. We also perform ablation studies on the choices of different models in Appendix <ref>. Baseline Methods. Since the concept representations are learned by CCE in an unsupervised manner, we primarily compare against the following state-of-the-art unsupervised concept extraction methods: PCA <cit.>, NMF <cit.>, ACE (KMeans) <cit.>, and Dictionary Learning <cit.>.
In addition, we include a Random baseline where we randomly initialize concept vectors from a normal distribution with mean zero and variance one. Recent studies like Concept Transformer <cit.> explore how to jointly learn concept representations and train downstream classification tasks with the learned concept representations. Hence, we treat Concept Transformer (Concept Tf) <cit.> as another baseline. Note that Concept Tf can optionally incorporate concept labels as additional supervision, which is not used in our experiments for a fair comparison. Experiment Design. We aim to answer these questions regarding the quality of the learned concept representations: * In the controlled setting with known compositional ground-truth concept representations, does CCE compose concepts more effectively than baselines? * In the full setting where the ground-truth concepts are typically unknown, can CCE successfully discover new and meaningful compositional concepts? * In both controlled and full settings, how do the learned compositional concept representations impact downstream performance? To address <ref>, we evaluate the compositionality score <cit.> on the concept representations extracted by CCE and the baselines, which is defined as follows: (Compositionality Score) Given a dataset D consisting of embeddings z∈^d, their associated ground-truth concepts C⊂𝒞, and a concept representation function R: 𝒞→^d obtained from a concept extractor, the compositionality score is the following: min_Λ≥ 0 (1/|D|) ∑_(z, C)∈ D ‖ z - ∑_i=1^|C| Λ_z, i R(C_i) ‖. Intuitively speaking, for a sample embedding z, this metric quantifies how well z can be reconstructed by composing the concept representations R(C_i) that correspond to the ground-truth concepts of z. Each R(C_i) is weighted by a coefficient Λ_z, i, which is determined by optimizing the above objective with respect to all Λ_z, i. In addition, for each ground-truth concept, we also report the cosine similarity between the learned concept representation R(c_i) and the corresponding ground-truth representation. To study <ref> for the full setting, we primarily perform qualitative studies to identify whether CCE is capable of discovering reasonable compositional concepts. Specifically, for each learned concept representation, we assign a name to the concept by inspecting the ten images with the top concept scores. Then for each pair of the learned concepts, we first identify those samples with the highest concept scores. Then, we sum the two concept representations and find the samples with the largest concept score for this aggregated representation. By investigating these examples, we visually examine whether the composition is reasonable or not. Lastly, we answer <ref> by evaluating the downstream classification performance with the learned concept representations. Specifically, we follow <cit.> to learn a linear classifier that predicts class labels from the concept scores of a sample. We further report the performance of training a linear classifier on sample embeddings without involving any concepts, denoted by "No concept". §.§ Experimental Results Compositionality in Controlled Settings. We first evaluate the compositionality scores on the three controlled datasets and report them in Table <ref>. In all cases, CCE obtains the best score compared to the baselines, indicating the advantage of CCE in discovering compositional concepts. Moreover, CCE's scores are comparable to those of the ground-truth concept representations.
This shows that the concepts learned by CCE almost align with the ground-truth concept representations. This is further supported by the results in Table <ref>. This table summarizes the cosine similarities between the ground-truth concept representations and the ones learned by the baselines and CCE. Again, the concepts learned by CCE are the closest to the ground truths. Note that some baselines like Dictlearn also produce highly accurate concept representations. However, as Table <ref> shows, their compositions fail to be consistent with the ground truths. Compositionality in Real Data Settings. To address <ref>, we perform qualitative studies on compositional concepts discovered by CCE on two of the datasets, which are visualized in Figure <ref>. As shown in this figure, CCE is capable of identifying reasonable concepts, such as White Birds, Framed Birds, and Text Ending in "...". Some of these concepts are even beyond the ground-truth concept labels that are provided by the dataset itself. For example, CCE identifies the "Birds in Hands" concept, which is not labeled in the dataset. But its top activated samples are images with a bird in someone's hand (see Figure <ref>). Furthermore, the composition of those learned concepts is also representative of the properties of each concept. For example, in Figure <ref>, the composition of the concept Text Ending in "..." and Sports represents sentences about "sports" ending in "...". Downstream Performance Analysis. For <ref>, we studied the impact of the extracted compositional concepts on downstream performance across all datasets in the full setting. Throughout the experiments, we observe that the total number of concepts is a crucial factor in determining the performance. Therefore, we also vary this number and report the performance numbers accordingly for all datasets and methods in Figure <ref>. As this figure suggests, across all the datasets, despite poor performance with a small number of concepts, CCE gradually gains performance with an increasing number of concepts, eventually outperforming all the unsupervised baseline methods. Also, it is worth noting that CCE outperforms Concept Tf most of the time and is on par with it in the worst case (see the experimental results on the ham dataset with 500 concepts). This indicates the performance advantage of CCE even in the absence of supervision from downstream tasks. Furthermore, CCE discovers concept representations by performing a series of linear transformations on top of the sample embeddings. Yet, by comparing against "No concept", where sample embeddings are directly used for downstream tasks, we find that CCE can even outperform it by a large margin on two of the datasets. This implies that the concept representations extracted by CCE might be more relevant to the downstream classification tasks than the raw embeddings. § RELATED WORK Concept-based Interpretability. Concept-based interpretability encompasses the building of models using human-interpretable concepts <cit.> and extracting such concepts post-hoc from models <cit.>. In either case, how do we choose which concepts to use? Some existing work specifies concepts using human supervision to select and provide their labels <cit.>, large-scale concept annotation datasets <cit.>, general knowledge bases <cit.>, and large language models <cit.>. Another line of work uses regularization <cit.> or other inductive biases <cit.> to learn concepts during standard supervised training of a model.
Finally, there is work which leverages unsupervised methods to automatically discover concepts <cit.> which is the approach taken in this paper. Unlike existing unsupervised concept learning methods which focus on properties such as faithfulness <cit.> or human-meaningfulness <cit.>, we focus specifically on compositionality. Compositionality in Foundation Models. Since the observation of compositional word vectors by <cit.> there has been interest in finding and utilizing compositional behavior of deep learning models. Existing work has leveraged insights from psychology and cognitive science to find concepts learned by generative models <cit.>. Compositionality has been used to uncover and mitigate bias in word embeddings <cit.>, edit classifier behavior <cit.>, and recently to monitor and control the behavior of foundational language <cit.> and vision models <cit.>. To the best of our knowledge, we are the first to evaluate compositionality of concept representations learned by unsupervised approaches and to propose a method to improve compositionality of discovered concepts. Compositional and Disentangled Representations. In representation learning, there is considerable effort to encourage disentangled representations <cit.>. While disentanglement concerns how to distinguish separate concepts in embedding space, compositionality concerns what happens when separate concepts get combined. Existing work has shown that disentanglement and compositionality do not have to be correlated <cit.>. Unlike representation learning, we start with a pretrained model and try to uncover the compositional concepts it learned. Structures beyond compositionality. This paper focuses on compositionality in concept-based interpretability, but other important structures include subpopulation, relational, and causal structures. Group, or subpopulation, structure has been used as a way to interpret datasets with existing work on automatically finding such structure <cit.> and explaining models with respect to this structure <cit.>. In addition, existing work has developed methods to steer explanations to respect group structures <cit.>. Relational structures have also been studied as a lens into understanding the behavior of pretrained models <cit.>. Beyond group and relational structures, recent work proposes a method to identify known causal structures in pretrained LLMs <cit.>. § LIMITATIONS We study the case where concepts compose compositionally, but concepts may also be non-compositional. For instance, the concepts of hot and dog do not compose to form the meaning of hot dog <cit.>. In addition, we supposed a flat concept structure, which does not distinguish between “(small blue) car" and “small (blue car)". We leave the study of such non-compositional and hierarchical concepts to future work. Another limitation of unsupervised concept extraction is that discovered concept vectors are not associated with any name. We assign names to the concept through manual inspection of samples with a high concept score, but this can require significant effort with large numbers of concepts. § CONCLUSION In this paper, we studied concept-based explanations of foundation models from the lens of compositionality. We validated that the ground-truth concepts extracted from these models are compositional while the existing unsupervised concept extraction methods usually fail to guarantee compositionality. 
To address this issue, we first identified two salient properties of compositional concept representations and designed a novel concept extraction method called CCE that respects these properties by design. Through extensive experiments across vision and language datasets, we demonstrated that CCE not only learns compositional concepts but also enhances downstream performance. § ACKNOWLEDGEMENTS This material is based upon work supported by the National Science Foundation Graduate Research Fellowship under Grant No. DGE-2236662, the Google Research Fellowship, and "The Fundamental Research Funds for the Central Universities, Peking University". § IMPACT STATEMENT This paper presents work whose goal is to advance the field of Machine Learning. There are many potential societal consequences of our work, none which we feel must be specifically highlighted here. § PROOF OF LEMMA <REF> Let z∈^d be a sample embedding, R: 𝒞→^d be a compositional concept representation function, and c_i, c_j∈𝒞 be two compositional concepts which compose as c_k=c_i∪ c_j. From Definition <ref>, the concept scores for c_i and c_j are the following: s(z, c_i) = S_cos(z, R(c_i)), s(z, c_j) = S_cos(z, R(c_j)). The concept score for the composition c_k can then be written as: s(z, c_k) = s(z, c_i∪ c_j) = S_cos(z, R(c_i∪ c_j)) = S_cos(z, w_c_iR(c_i) + w_c_jR(c_j)) (since R is compositional) = z·(w_c_iR(c_i) + w_c_jR(c_j)) / (‖z‖ ‖w_c_iR(c_i) + w_c_jR(c_j)‖) (definition of cosine similarity) = z· w_c_iR(c_i) / (‖z‖ ‖R(c_k)‖) + z· w_c_jR(c_j) / (‖z‖ ‖R(c_k)‖) = (w_c_i‖R(c_i)‖) z· R(c_i) / (‖R(c_k)‖ ‖z‖ ‖R(c_i)‖) + (w_c_j‖R(c_j)‖) z· R(c_j) / (‖R(c_k)‖ ‖z‖ ‖R(c_j)‖) = (w_c_i‖R(c_i)‖/‖R(c_k)‖) S_cos(z, R(c_i)) + (w_c_j‖R(c_j)‖/‖R(c_k)‖) S_cos(z, R(c_j)) (definition of cosine similarity). § PROOF OF THEOREM <REF> <cit.> For a pair of vectors 𝐱 and 𝐲 randomly sampled from N(0, 𝐈^d), 𝐱 and 𝐲 are orthogonal with high probability for large enough d. Mathematically speaking, for a fixed small constant ϵ, the following inequality holds: ℙ[|⟨𝐱/‖𝐱‖, 𝐲/‖𝐲‖⟩| ≤ϵ] ≥ 1 - M_1/√(d)ϵ - M_2/√(d), where M_1 = 2 and M_2 = 7. <cit.> For a vector 𝐱 randomly sampled from N(0, 𝐈^d), ‖𝐱‖ approaches √(d) with high probability for large enough d. Mathematically speaking, the following inequality holds: ℙ[|‖𝐱‖ - √(d)| ≤ϵ] ≥ 1 - 2 exp(-M_3ϵ^2), in which M_3 = 1/16. Based on the above two lemmas, for any two randomly sampled vectors 𝐱 and 𝐲 from N(0, 𝐈^d), the following equality holds with high probability: ⟨𝐱, 𝐲⟩ = o(d). As defined in Theorem <ref>, for a composite concept c={c_i,c'_j} its representation is denoted by v_i,j. The representation of the base concept c_i belonging to attribute A is then: v_i = 1/l'∑_j=1^l' v_i,j. Similarly, the representation of the base concept c'_j∈ A' is: v'_j = 1/l∑_i=1^l v_i,j. Here, v_i is derived by calculating the mean of the representations of all samples with concept c_i in the attribute A. Since those samples may have different concepts in the attribute A', the composite concepts among these samples range over {c_i, c'_1}, {c_i, c'_2}, …, {c_i, c'_l'}. Therefore, v_i is derived by: v_i = 1/N ∑_x with concept c_i in attribute A x = 1/N ∑_j=1^l' ∑_x with concept {c_i,c'_j} x, in which N represents the number of samples with concept c_i in attribute A. By further assuming that there is a large and roughly equal number of samples for each composite concept, the number of samples per composite concept is approximately N/l'. Then the above formula can be transformed to: v_i = 1/N ∑_j=1^l' ∑_x with concept {c_i,c'_j} x = 1/N ∑_j=1^l' (N/l') v_i,j = 1/l'∑_j=1^l' v_i,j.
The last step in the above formula leverages the fact that v_i,j is calculated by the mean of all samples belonging to composite concept {c_i,c_j'}. We can further illustrate this with one concrete example from the CLEVR dataset. By reusing the running example from Section <ref>, we assume that there are three colors {red, green, blue} and three shapes {sphere, cube, cylinder} in the CLEVR dataset. By following the notations of Theorem <ref>, the representation of a composite concept, say, {c_red,c_sphere}, is represented by v_red, sphere. Then the representation of the base concept sphere should be the mean of all samples belonging to this base concept. This can be derived by the mean of the samples belonging to the concept {c_red,c_sphere}, the ones belonging to {c_green,c_sphere} and the ones belonging to {c_blue,c_sphere}. Therefore, the representation of c_sphere is denoted by: v_sphere = 1/3[v_red, sphere + v_green, sphere + v_blue, sphere]. We next present the formal proof of Theorem <ref>: We split our proof into two parts. The first part is for proving “For the base concepts belonging to the same attribute, there exists at least one pair of non-orthogonal concepts.” while the second part is for proving “For any pair of base concepts from two different attributes, they are orthogonal with high probability.” Part 1: There exists c_1, c_2 ∈ A and c'_1, c'_2 ∈ A' such that the representations of these base concepts are non orthogonal. According to Lemma <ref>, the concept representation for the base concept c_i (denoted by v̂_̂î) is: v̂_̂î = 1/l'∑_j=1^l' v_i,j, which sums over all concepts in A'. First, we can derive the concept representation for each base concept c_i,t (denoted by μ_i,t) as follows: μ_i,t =1/l^k-1∑_j_1∑_j_2…∑_j_i-1∑_j_i+1…∑_j_k v_j_1,j_2,j_3,…,j_i-1,t,j_i+1,…,j_k. Since we also want to perform centering operations over the entire dataset, then this suggests that we need to leverage the mean of all concepts, i.e.,: μ =1/l^k∑_j_1∑_j_2…∑_j_k v_j_1,j_2,j_3,…,j_i-1,j_i,j_i+1,…,j_k. μ =1/ll'∑_i, j v_i, j. Then after the centering operation, μ_i,tv̂_̂î is transformed into: v_i = v̂_̂î - μ/σ. In the formula above, we use σ to represent the standard deviation vector calculated over the entire dataset. Then let us fix i and sum up all μ_i,t' over all t, which yields: ∑_t=1^l μ_i,t'= ∑_t=1^l μ_i,t - μ/σ = ∑_t=1^l μ_i,t/σ - l ·μ/σ Then let us fix i and sum up all v_i over all i, which yields: ∑_i=1^l v_i = ∑_i=1^l v̂_̂î - μ/σ = ∑_i=1^l v̂_̂î/σ - l μ/σ Then by integrating Equation (<ref>) and Equation (<ref>) into the above formula, we can get: ∑_t=1^l μ_i,t' = 1/σ'[∑_t=1^l 1/l^k-1∑_j_1∑_j_2…∑_j_i-1∑_j_i+1…∑_j_k v_j_1,j_2,j_3,…,j_i-1,t,j_i+1,…,j_k. . - 1/l^k-1∑_j_1∑_j_2…∑_j_i-1∑_j_i+1…∑_j_k v_j_1,j_2,j_3,…,j_i-1,t,j_i+1,…,j_k] = 1/σ'[1/l^k-1∑_j_1∑_j_2…∑_j_i-1∑_t=1^l∑_j_i+1…∑_j_k v_j_1,j_2,j_3,…,j_i-1,t,j_i+1,…,j_k. . - 1/l^k-1∑_j_1∑_j_2…∑_j_i-1∑_j_i+1…∑_j_k v_j_1,j_2,j_3,…,j_i-1,t,j_i+1,…,j_k] = 0. Then by integrating Equation <ref> and Equation <ref> into the above formula, we get: ∑_i=1^l v_i = ∑_i=1^l v̂_̂î/σ - l μ/σ = 1/σ∑_i=1^l 1/l'∑_j=1^l' v_i,j - lμ/σ = 1/σ l'∑_i=1^l∑_j=1^l' v_i,j - 1/σ l'∑_i=1^l∑_j=1^l' v_i,j = 0 We can equivalently show that ∑_j=1^l' v'_j = 0. Therefore, the concept representations v_i within the attribute A are linearly dependent and the representations v'_i within the attribute A' are linearly dependent, meaning there exist concepts c_i and c_j such that ⟨ v_i, v_j ⟩≠ 0, and concepts c'_k and c'_m such that ⟨ v_k', v_m' ⟩≠ 0. 
This thus suggests that for j_i, j_i ∈{1,2,…,l}, all μ_j_i,i' are correlated. If all pairs of (μ_j_i1,i', μ_j_i2,i'), j_i1,j_i2 = 1,2,…, l are orthogonal, then we can obtain the following formula: ⟨μ_i,t_1', ∑_t=1^l μ_i,t'⟩ = ⟨μ_i,t_1', μ_i,t_1'⟩ = 0, thus indicating that all μ_i,t is 0. This is thus contradictory to our assumption that each μ_i,t is a non-zero vector. Therefore, there should exist at least one pair of (μ_i,t_1', μ_i,t_2') which are not orthogonal. Part 2: For all c_1∈ A and c_2∈ A', the representations of c_1 and c_2 are orthogonal with high probability. To prove that all concept representations from A are orthogonal to all concept representations from A' , we will show that the dot product between these two representations is zero. Let c_i∈ A and c'_j∈ A' and v_i, v'_j are the concept representations for c_i and c'_j respectively. We can expand the dot product as follows: ⟨ v_i, v'_j ⟩ = ⟨v̂_̂î/σ - μ/σ,v̂'_j/σ - μ/σ⟩ Then by integrating Equation <ref> and Equation <ref> into the above formula, we can expand the above into the following: ⟨ v_i, v'_j ⟩ = 1/σ^2⟨1/l'∑_j=1^l'v_i,j - μ, 1/l∑_i=1^l v_i,j - μ⟩ We next prove that arbitrary pairs of concept representations from two different attributes are orthogonal with high probability. To demonstrate this, we calculate the dot product between μ_i_1,t_1' and μ_i_2,t_2' which represents two concepts from attribute i_1 and i_2 respectively: ⟨μ_i_1,t_1', μ_i_2,t_2'⟩ = ⟨μ_i,t/σ' - μ/σ',μ_i,t/σ' - μ/σ'⟩ =1/σ'^21/l^k⟨ l ∑_j_1∑_j_2…∑_j_i_1-1∑_j_i_1+1…∑_j_k v_j_1,j_2,j_3,…,j_i_1-1,t_1,j_i_1+1,…,j_k - ∑_j_1∑_j_2…∑_j_k v_j_1,j_2,j_3,…,j_k, l ∑_j_1∑_j_2…∑_j_i_2-1∑_j_i_2+1…∑_j_k v_j_1,j_2,j_3,…,j_i_2-1,t_2,j_i_2+1,…,j_k - ∑_j_1∑_j_2…∑_j_k v_j_1,j_2,j_3,…,j_k⟩ We note that for arbitrary pairs of v_i,j and v_i',j' with i ≠ i' or j≠ j', since they are two different random vectors sampled from a spherical normal distribution N(0, 𝐈^d), their dot product is o(d) according to Equation <ref>. Therefore, through some linear algebraic operations, the above formula could be reformulated as follows: ⟨ v_i, v'_j ⟩ = 1/σ^2⟨1/l'∑_s=1^l'v_i,s - μ, 1/l∑_t=1^l v_t,j - μ⟩ = 1/σ^2⟨1/l'∑_s=1^l'v_i,s - 1/ll'∑_t,sv_t,s, 1/l∑_t=1^l v_t,j - 1/ll'∑_t,s v_t,s⟩ = 1/σ^2 ll'⟨∑_s=1^l'v_i,s - 1/l∑_t,sv_t,s, ∑_t=1^l v_t,j - 1/l'∑_t,s v_t,s⟩ = 1/σ^2 ll'[ ∑_s=1^l'v_i,s∑_t=1^l v_t,j - 1/l'∑_s=1^l'v_i,s∑_t,sv_t,s - 1/l∑_t=1^lv_t,j∑_t,sv_t,s + 1/ll'∑_t,sv_t,s∑_t,sv_t,s] = 1/σ^2 ll'[ v_i,j^2 - 1/l'∑_s=1^l'v_i,s^2 - 1/l∑_t=1^lv_t,j^2 + 1/ll'∑_t,sv_t,s^2] + o(d) in which o(d) is derived by applying Equation <ref> to all the cross terms of the form ⟨ v_i,j, v_i',j'⟩ where at least one pair of i, i' and j, j' are different. According to (<ref>), for arbitrary pairs of v_j_1,…,j_k and v_j_1',…,j_k', as long as their indexes are not exactly equivalent, their dot product is o(d). Therefore, through some linear algebraic operations, the above formula could be reformulated as follows: ⟨μ_i_1,t_1', μ_i_2,t_2'⟩ = 1/σ'l^2(l^2 ∑_j_1∑_j_2…∑_j_i_1-1∑_j_i_1+1…∑_j_i_2-1∑_j_i_2+1…∑_j_kv_j_1,j_2,j_3,…,j_i_1-1,t_1,j_i_1+1,…,j_i_2-1,t_2,j_i_2+1,…,j_k_2^2 . . - l ∑_j_1∑_j_2…∑_j_i_1-1∑_j_i_1+1…∑_j_kv_j_1,j_2,j_3,…,j_i_1-1,t_1,j_i_1+1,…,j_k_2^2. .- l ∑_j_1∑_j_2…∑_j_i_2-1∑_j_i_2+1…∑_j_kv_j_1,j_2,j_3,…,j_i_2-1,t_2,j_i_2+1,…,j_k_2^2. . + ∑_j_1∑_j_2…∑_j_kv_j_1,j_2,j_3,…,j_k_2^2) + o(d) We can further simplify this expression using Lemma <ref> which says that for each vector x randomly sampled from N(0, 𝐈^d), its norm is bounded by [√(d) - ϵ, √(d) + ϵ] with high probability, which applies to each v_i, j. 
Therefore, we can bound the above equation by: ⟨μ_i_1,t_1', μ_i_2,t_2'⟩ ≤1/σ' l(l^2 ∑_j_1∑_j_2…∑_j_i_1-1∑_j_i_1+1…∑_j_i_2-1∑_j_i_2+1…∑_j_k (√(d) + ϵ)^2 . . - l ∑_j_1∑_j_2…∑_j_i_1-1∑_j_i_1+1…∑_j_k (√(d) - ϵ)^2. .- l ∑_j_1∑_j_2…∑_j_i_2-1∑_j_i_2+1…∑_j_k (√(d) - ϵ)^2. . + ∑_j_1∑_j_2…∑_j_k (√(d) + ϵ)^2) + o(d) = ≤1/σ' l[l^k (√(d) + ϵ)^2 - l^k (√(d) - ϵ)^2 - l^k (√(d) - ϵ)^2 + l^k (√(d) + ϵ)^2 ] + o(d) = 1/σ'[8l^k-1√(d)ϵ] + o(d), We can further simplify this expression using Lemma <ref> which says that for each vector x randomly sampled from N(0, 𝐈^d), its norm is bounded by [√(d) - ϵ, √(d) + ϵ] with high probability, which applies to each v_i, j. Therefore, we can bound the above equation by: ⟨ v_i, v'_j ⟩ ≤1/σ^2 ll'[ (√(d)+ϵ)^2 - 1/l'l'(√(d)-ϵ)^2 - 1/ll(√(d)-ϵ)^2 + 1/ll'll'(√(d) +ϵ)^2 ] o(d) = 8√(d)ϵ/σ^2 ll' + o(d) Similarly, we can prove that ⟨ v_i, v'_j ⟩≥ -8√(d)ϵ/σ^2 ll' + o(d), so we can conclude that ⟨μ_i_1,t_1', μ_i_2,t_2'⟩ = o(d) | ⟨ v_i, v'_j⟩ | = o(d) Our goal is to get a bound on the cosine similarity of v_i and v'_j to show that it is zero. The cosine similarity is written S_cos (v_i, v'_j) = ⟨ v_i, v'_j⟩/v_iv'_j, so we have a bound on the numerator, but we now want a bound on the terms in the denominator. We can compute the norm of μ_i_1,t_1'v_i and v'_j and follow the same derivation as above by leveraging Equation <ref>, which results in: v_i_2^2 = ⟨ v_i, v_i⟩ = 1/σ^2 ll'⟨∑_j=1^l'v_i,j - 1/l'∑_i,jv_i,j, ∑_j=1^l' v_i,j - 1/l'∑_i,j v_i,j⟩ = 1/σ^2 ll'[ ∑_j=1^l'v_i,j∑_i=1^l v_i,j - 1/l'∑_j=1^l'v_i,j∑_i,jv_i,j - 1/l∑_i=1^lv_i,j∑_i,jv_i,j + 1/ll'∑_i,jv_i,j∑_i,jv_i,j] = 1/σ^2 ll'[ v_i,j^2 - 1/l'∑_j=1^l'v_i,j^2 - 1/l∑_i=1^lv_i,j^2 + 1/ll'∑_i,jv_i,j^2] + o(d) This formula could then be lower bounded by: μ_i_1,t_1'_2^2 ≥ 2 l^k (d - 2√(d)ϵ + ϵ^2) + o(d) = 2l^k d + o(d) v_i_2^2 = ⟨ v_i, v_i⟩ = 1/σ^2 l'^2⟨∑_s=1^l'v_i,s - 1/l∑_t,sv_t,s, ∑_s=1^l' v_i,s - 1/l∑_t,s v_t,s⟩ = 1/σ^2 l'^2[ ∑_s=1^l'v_i,s∑_s=1^l' v_i,s - 21/l∑_s=1^l'v_i,s∑_t,sv_t,s + 1/l^2∑_t,sv_t,s∑_t,sv_t,s] = 1/σ^2 l'^2[ ∑_s=1^l'v_i,s^2 - 2/l∑_s=1^l'v_i,s^2 + 1/l^2∑_t,sv_t,s^2] + o(d) Similarly, we can get the following: v'_j_2^2 = 1/σ^2 l^2[ ∑_t=1^lv_t,j^2 - 2/l'∑_t=1^l'v_t,j^2 + 1/l'^2∑_t,sv_t,s^2] + o(d) By Lemma <ref>, the norm of each v_i,j is bounded by √(d) - ϵ and √(d) + ϵ with high probability, so the above formula can be bounded by: 1/σ^2 ll'((l-1)d - (2l+6)√(d)ϵ +(l-1)ϵ^2) + o(d) ≤v_i_2^2 ≤1/σ^2 ll'((l-1)d + (2l +6)√(d)ϵ + (l-1)ϵ^2) + o(d), Therefore, v_i_2^2 = O(d) and we can equivalently show that v'_j = O(d). This leverages the fact that each v_j_1,j_2,j_3,…,j_k is bounded by [√(d) - ϵ, √(d) + ϵ] with high probability. The above formula also holds for μ_i_2,t_2'_2^2. As a consequence, the cosine similarity between μ_i_1,t_1' and μ_i_2,t_2' is bounded by: cosine(μ_i_1,t_1', μ_i_2,t_2') = ⟨μ_i_1,t_1', μ_i_2,t_2' ⟩/μ_i_1,t_1'·μ_i_2,t_2'≤o(d)/2l^k d + o(d), which thus approaches zero as d increases. As a consequence, we can now calculate the cosine similarity between v_i and v'_j: S_cos(v_i, v'_j) = ⟨ v_i, v'_j ⟩/v_i·v'_j = o(d)/O(d) = o(1), which means that this converges to zero as desired. Given Theorem <ref>, for the representation of the composite concepts v_i,j, it can be (approximately) decomposed into the linear combinations of the representations of the base concepts (after the centering operation), v_i, v_j but is orthogonal to the representations of other base concepts with high probability. In other words, compositionality holds with high probability. 
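Before the formal argument, which follows below, the claim can be illustrated with a small numerical sketch (synthetic Gaussian vectors only, not the paper's code; the embedding dimension and the ten concepts per attribute are arbitrary choices, and the separation between the terms sharpens as both grow):

import numpy as np

rng = np.random.default_rng(0)
d, l, lp = 4096, 10, 10                  # embedding dim, |A|, |A'|

V = rng.standard_normal((l, lp, d))      # v_{i,j}: one random vector per composite concept
mu = V.mean(axis=(0, 1))                 # dataset mean used for the centering operation

v_A = V.mean(axis=1) - mu                # centered base-concept representations for A
v_Ap = V.mean(axis=0) - mu               # centered base-concept representations for A'

def cos(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Base concepts from different attributes are nearly orthogonal ...
print(max(abs(cos(v_A[i], v_Ap[j])) for i in range(l) for j in range(lp)))
# ... a composite concept correlates with the base concepts it contains (O(1)) ...
print(cos(V[0, 0], v_A[0]), cos(V[0, 0], v_Ap[0]))
# ... and is close to orthogonal to base concepts it does not contain.
print(cos(V[0, 0], v_A[1]))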
To prove this, let us consider the cosine similarity between v_i, j and v_t. According to Equation <ref>, we first compute the inner product between these two vectors, i.e.: ⟨ v_i,j, v_t⟩ = 1/l'∑_n=1^l'⟨ v_i,j , v_t,n⟩, Depending on whether t = i or not, there are two different cases. Case 1: t ≠ i Note that according to Lemma <ref>, since v_i,j and v_t,n are twowvectors randomly sampled from the spherical normal distribution, their inner product is o(d). Therefore, the above inner product between v_i,j and v_t becomes: ⟨ v_i, j, μ_t⟩ = o(d). Also note that according to Equation <ref>, v_t = v̂_̂t̂ - μ/σ, we thus need to leverage this equation to derive the inner product between v_i,j and v_t. Furthermore, according to (<ref>), μ is the mean of all the representations of the composite concepts, which are all randomly sampled from a spherical normal distribution. Therefore, μ is approaching 0 with high probability and thus the following equation holds with high probability: ⟨ v_i,j, v_t⟩ = ⟨ v_i,j, v̂_̂t̂ - μ/σ⟩ = ⟨ v_i,j, v̂_̂t̂/σ⟩ = o(d), t ≠ i. In addition, according to Lemma <ref> and Equation <ref>, the norms of v_i,j and v_t are both O(√(d)). Therefore, the cosine similarity between v_i,j and v_t : cosine(v_i,j, v_t) = ⟨ v_i,j, v_t⟩/v_i,j·v_t = o(d)/v_i,j·v_t = o(d)/O(d) = o(1). Intuitively speaking, this indicates that for the representation of a composite concept v_i,j, it is not correlated with the representation of a base concept that does not appear in this composite concept with high probability. For example, this could mean that the representation of the composite concept {c_red,c_sphere} is not correlated to the representation of the concept c_blue, which is intuitively true. Case 2: t=i In Equation <ref>, according to Lemma <ref>, the inner product between v_i,j and most v_t,m is o(d) except when j=m. Therefore, Equation <ref> becomes: ⟨ v_i,j, v_t⟩ = v_i,j_2^2 + o(d), Then according to Lemma <ref>, since v_i,j is approaching √(d), then the above formula is transformed to: ⟨ v_i,j, v_t⟩ = O(d), Then according to Lemma <ref> and Equation <ref>, the norms of v_i,j and v_t are both O(√(d)). Therefore, the cosine similarity between v_i,j and v_t is: cosine(v_i,j, v_t) = ⟨ v_i,j, v_t⟩/v_i,j·v_t = O(d)/v_i,j·v_t = O(d)/O(d) = O(1), which is thus a nonzero value. As indicated by the above analysis, we can conclude that each v_i,j is only correlated to the representation of the base concepts v_i, and v'_j. Since the representations of those base concepts are from different attributes, thus orthogonal to each other, then we can regard them as the basis vectors in the vector space, which can then be linearly combined to approximately reconstruct v_i,j, i.e.: v_i,j = cosine(v_i,j, v_i)v_i + cosine(v_i,j, v'_j)v'_j This thus matches the definition of the compositionality (see Definition <ref>). For some dataset, consider two attributes A and A' where we have l concepts for A, c_1,…,c_l, and l' concepts for A', c'_1,…,c'_l'. Define normalized concept representations v_1,…,v_l and v'_1,…,v'_l' for the concepts in A and A' such that v_i is orthogonal to v'_j for all i and j and for v_i and samples x and x' such that x has concept c_i and x' does not, then S_cos(x, v_i) > S_cos(x', v_i). Then the concept representations are compositional. Let v_i be the concept representation for c_i and v'_j be the concept representation for c'_j. 
We are given that for any two samples x and x' with and without concept c_i respectively, S_cos(x, v_i) > S_cos(x', v_i) and similarly for any two samples x and x' with and without concept c'_j respectively, S_cos(x, v'_j) > S_cos(x', v'_j). We will show that a concept representation for c_i, j, the composition of concept c_i and c'_j, exists and is represented by v_i,j=v_i+v'_j. Let v_i,j = v_i + v'_j. We will show that this concept can perfectly rank samples with the concept c_i,j. Since v_i and v'_j result in perfect rankings, for all x, x' such that x has c_i and x' does not, S_cos(x, v_i) - S_cos(x', v_i) > 0. Similarly, for any x, x' such that x has c'_j and x' does not, S_cos(x, v'_j) - S_cos(x', v'_j) > 0. Now let, x, x' be such that x has concept c_i,j and x' does not. We can write the following: S_cos(x, v_i + v'_j) = ⟨ x, v_i + v'_j ⟩/xv_i +v'_j = ⟨ x, v_i⟩ + ⟨ x, v'_j ⟩/x√(2) Since ⟨ v_i,v'_j⟩ = 0, ⟨ v_i,v_i⟩ = 1, and ⟨ v'_j,v'_j⟩ = 1 = 1/√(2) (S_cos(x, v_i) + S_cos(x, v'_j)) Therefore, we can now show that the concept score for the composed concept is larger for x than x': S_cos(x, v_i + v'_j) - S_cos(x', v_i + v'_j) = 1/√(2) (S_cos(x, v_i) + S_cos(x, v'_j)) - 1/√(2) (S_cos(x', v_i) + S_cos(x', v'_j)) = 1/√(2)( (S_cos(x, v_i) - S_cos(x', v_i)) + (S_cos(x, v'_j) - S_cos(x', v'_j))) > 0. § COMPOSITIONALITY OF GROUND-TRUTH CONCEPTS The cosine similarities between concepts is shown for the CUB-sub and Truth-sub datasets in Figure <ref>. We see similar findings as in Figure <ref>. § QUALITATIVE EXAMPLES We provide additional qualitative results for the CUB dataset in Figure <ref> and the ImageNet <cit.> validation set in Figure <ref>. The concepts are named by manually looking at the top 20 images for each concept and coming up with a short description which is as specific as possible to the images while being general enough to apply to each image. As an alternative to manual concept labelling, we also experimented with using a vision-text language model to automatically name concepts from their top 20 examples. We used GPT-4o <cit.> to get concept labels. For each concept, we produce a single image containing the top 20 samples for the concept and we pass the image to GPT-4o with the following prompt: The labels for the additional CUB examples in Figure <ref> are the following where each line labels a row of the figure: Similarly, the labels from GPT-4o for Figure <ref> are the following: § ADDITIONAL QUANTITATIVE RESULTS §.§ Runtime analysis §.§ Downstream performance error bars We include error bars for the downstream performance results using the greatest number of concepts in Table <ref>. §.§ Ablation on regularization in To see the impact of the regularization step in the LearnSubspace step of , we performan an additional ablation on the CLEVR dataset. We compare without this regularization step to the full implementation of in Table <ref>, and we see that regularization improves all three metrics. §.§ Ablation on clustering loss function We perform an ablation on the use of the Silhouette score as our clustering loss. Instead of Silhouette we experiment with the cross entropy loss based on the technique from <cit.>, but our results in Table <ref> show that the Silhouette results in better compositionality. §.§ Ablation on attribute imbalance We perform an ablation experiment on the effect of attribute imbalance by testing 's ability to recover the ground truth concepts on the CLEVR dataset after removing different fractions of samples labeled with the “red” concept. 
The results are shown in Figure <ref>, where we see that removing more red samples, which creates a greater imbalance, decreases the average cosine similarity of the discovered concepts with the ground truth. §.§ ROC-AUC Scores between Concept Representations and Ground-Truth The maximum ROC-AUC between the concept score and the true label for the ground-truth concepts is presented in Table <ref> for CLEVR, Table <ref> for , and Table <ref> for . §.§ Analysis of the cosine similarity between learned and ground-truth concept representations We further break down the results reported in Table <ref> on the average cosine similarity between the learned concept representations and the ground-truth concept representations. §.§ Ablation studies on other pretrained models Recall that in the experiment section we primarily focus on discovering concepts from the pretrained CLIP model. In this section, we study whether similar results to those in Section <ref> can be obtained with different choices of pretrained models. To answer this question, we leverage the vision transformer (ViT), another widely used pretrained vision model, to repeat the experiments on the CLEVR dataset. The results are summarized in Tables <ref>-<ref> and maintain the same trends as those shown in Section <ref>. § DATASET DETAILS We provide the details for all datasets in Table <ref>. § HYPERPARAMETERS The hyperparameters of all experiments are given in Table <ref>.
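As a companion to the ROC-AUC tables referenced above, the evaluation can be summarised in a few lines of Python. The sketch below is illustrative only (function and variable names are ours, and it assumes numpy and scikit-learn rather than the authors' implementation): every sample is scored against every learned concept by cosine similarity, and, for a given ground-truth concept, the best ROC-AUC achieved by any learned concept is reported.

import numpy as np
from sklearn.metrics import roc_auc_score

def max_roc_auc(embeddings, learned_concepts, gt_labels):
    """embeddings: (n, d); learned_concepts: (k, d); gt_labels: (n,) binary labels for one ground-truth concept."""
    Z = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    C = learned_concepts / np.linalg.norm(learned_concepts, axis=1, keepdims=True)
    scores = Z @ C.T                     # (n, k) cosine concept scores
    return max(roc_auc_score(gt_labels, scores[:, j]) for j in range(C.shape[0]))

# Call-signature example with random placeholder data.
rng = np.random.default_rng(0)
emb = rng.standard_normal((100, 32))
concepts = rng.standard_normal((5, 32))
labels = rng.integers(0, 2, size=100)
print(max_roc_auc(emb, concepts, labels))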
http://arxiv.org/abs/2406.18797v1
20240627000623
A Study on Quantum Car-Parrinello Molecular Dynamics with Classical Shadows for Resource Efficient Molecular Simulation
[ "Honomi Kashihara", "Yudai Suzuki", "Kenji Yasuoka" ]
quant-ph
[ "quant-ph" ]
These authors share first authorship. Department of Mechanical Engineering, Keio University, Hiyoshi 3‑14‑1, Kohoku, Yokohama 223‑8522, Japan These authors share first authorship. Quantum Computing Center, Keio University, Hiyoshi 3‑14‑1, Kohoku,Yokohama 223‑8522, Japan Department of Mechanical Engineering, Keio University, Hiyoshi 3‑14‑1, Kohoku, Yokohama 223‑8522, Japan § ABSTRACT Ab-initio molecular dynamics (AIMD) is a powerful tool to simulate physical movements of molecules for investigating properties of materials. While AIMD is successful in some applications, circumventing its high computational costs is imperative to perform large-scale and long-time simulations. In recent days, near-term quantum computers have attracted much attentions as a possible solution to alleviate the challenge. Specifically, Kuroiwa et al. proposed a new AIMD method called quantum Car-Parrinello molecular dynamics (QCPMD), which exploits the Car-Parrinello method and Langevin formulation to realize cost-efficient simulations at the equilibrium state, using near-term quantum devices. In this work, we build on the proposed QCPMD method and introduce the classical shadow technique to further improve resource efficiency of the simulations. More precisely, classical shadows are used to estimate the forces of all nuclei simultaneously, implying this approach is more effective as the number of molecules increases. We numerically study the performance of our scheme on the H_2 molecule and show that QCPMD with classical shadows can simulate the equilibrium state. Our results will give some insights into efficient AIMD simulations on currently-available quantum computers. A Study on Quantum Car-Parrinello Molecular Dynamics with Classical Shadows for Resource Efficient Molecular Simulation Kenji Yasuoka July 1, 2024 ======================================================================================================================== § INTRODUCTION Ab-initio molecular dynamics (AIMD) is an effective computational tool to elucidate properties of molecules by examining their behavior at the atomic level <cit.>. With the AIMD method, a number of studies have attempted to clarify the underlying dynamics of material properties such as chemical reactions <cit.>, diffusion <cit.>, amorphous materials <cit.> and vibrational frequencies <cit.>. On the other hand, AIMD requires quantum-mechanical computations of potential energy surfaces, which is computationally demanding and thus hiders large-scale and long-time simulations. Hence, seeking new approaches to resolve the computational cost problem is imperative. A possible solution to circumvent the difficulty is quantum computing, which can potentially outperform classical computers in terms of the information processing speed. Unfortunately, ideal fault-tolerant quantum computers are not available now; that is, we cannot execute quantum algorithms with theoretical guarantees such as Ref. <cit.> on currently-available quantum computers for practical applications. To be more specific, near-term quantum computers, the so-called noisy intermediate-scale quantum (NISQ) devices, are limited in the number of qubits and suffer from noise through computation <cit.>. This suggests the challenge of NISQ computers even for simple computations. Nevertheless, recent works spotlight the potential power of the NISQ devices. An example includes experimental demonstrations of a sampling task where NISQ devices can be superior to classical means <cit.>. 
Fueled by such results, many attempts have been made to explore the utility of NISQ devices by e.g., utilizing hybrid quantum-classical strategies <cit.>. In the literature of AIMD, the power of NISQ devices has also been explored. Originally proposed is a hybrid quantum-classical method <cit.> where the electronic ground state is computed by variational quantum eigensolvers (VQEs) <cit.> on NISQ devices, whereas the update of nuclei positions is executed on classical computers. We note that the VQE algorithm is performed using parameterized quantum circuits (PQCs) whose parameters are also tuned by classical optimizers. This method was proposed based on the expectation that VQE can solve the ground state problem more efficiently than classical methods. On the other hand, there is room for investigation in the practicality of the method. For example, this proposal does not take into account the inevitable statistical noise caused by a finite number of measurement shots. In addition, performing VQE algorithms with sufficient accuracy at each iteration demands many rounds of quantum state preparation and measurements. To tackle these challenges, Ref. <cit.> recently proposed a new AIMD method called Quantum Car-Parrinello molecular dynamics (QCPMD). The key contributions of QCPMD are two-fold: (1) inspired by the idea of classical Car-Parrinello molecular dynamics, the subroutine of VQE is replaced with parallel updates of nuclei positions and parameters in PQCs, and (2) statistical noise is rather utilized as thermostats with the help of Langevin dynamics formula. Thanks to these modifications, QCPMD can perform cost-efficient simulation of molecules at equilibrium using NISQ devices, while allowing for inaccurate calculations of the electronic ground state and the statistical noise by measurement. In this work, we build on the QCPMD method proposed in Ref. <cit.> and introduce classical shadows to improve its resource efficiency. The classical shadow technique is a powerful tool to simultaneously estimate expectation values of multiple observables <cit.>. Hence, our idea is to utilize the technique for reducing the number of samples required to estimate forces on the nuclei at each step. More specifically, we make use of the classical snapshots of quantum states to obtain 3N forces of N nuclei simultaneously at each iteration, instead of naive estimations of the forces. This indicates that our scheme would be more effective than the naive approach as the number of nuclei increases. We then consider a H_2 molecule as a testbed and numerically show that QCPMD with classical shadows can successfully simulate the equilibrium state. These results will push forward the practical use of NISQ devices for efficient AIMD simulation. The remaining of this paper is structured as follows. In Section <ref>, we provide a brief overview of QCPMD and classical shadows, followed by explanation of our scheme. Then, we show numerical results on simulation of a H_2 molecule in Section <ref>. Lastly, we conclude this work in Section <ref>. § METHODS In what follows, we show preliminaries on AIMD simulations, QCPMD <cit.> and classical shadows <cit.>. We then elaborate on our framework and its possible advantages and drawbacks. §.§ Preliminaries §.§.§ Brief Overview of AIMD Simulations In AIMD simulations, the motion of electrons and nuclei are solved separately based on the Born-Oppenheimer approximation <cit.>. 
This approximation assumes that electronic and nuclear motion can be decoupled because electrons are much lighter and hence moves faster than nuclei. With this approximation, the time evolution of molecules is made tractable. More precisely, AIMD treats nuclei as classical point masses, while electrons are still handled quantum-mechanically. In other words, at each iteration, the Hamilton's canonical equations are solved for the nuclei motion, where the forces in use are obtained from the potential energy surfaces we computed through the solution of the Schrödinger equation for the elections. The quantum-classical approach in AIMD can enable more accurate simulations than the classical molecular dynamics simulation that uses the empirical and simplified potential functions called force field for calculating the potential energy surface. On the other hand, solving the Schrödinger equation at each step is computationally demanding and thus AIMD is challenging for long-time simulations with many molecules. Hence, attentions have been paid to methods for circumventing the difficulty. Car and Parrinello proposed an approach that mitigates the cost problem for computing eigenstates of the electrons <cit.>. In a broad sense, the key feature of the Car-Parrinello molecular dynamics is to treat the electronic wavefunctions and nuclear positions as dynamical variables and then evolve them together at each timestep. While the success of Car-Parrinello molecular dynamics requires the adiabaticity condition (i.e., a condition that electronic state can instantaneously follow the change of nuclei positions), avoiding the eigenvalue problem can largely enhance the efficiency of molecular simulations. §.§.§ Quantum Car-Parrinello Molecular dynamics In the following, we review Quantum Car-Parrinello molecular dynamics (QCPMD), an AIMD method that exploits NISQ computers. Actually, the QCPMD is not the first attempt to utilize NISQ devices for AIMD simulations. In the literature, Ref. <cit.> straightforwardly employed the VQE algorithms for AIMD simulations to compute the electronic ground state. We remind that VQE solves eigenvalue problems and is amenable to implementation on NISQ devices <cit.>; then, the hope is that VQE is more efficient than conventional approaches. The AIMD with VQE (VQE-AIMD) is highly related to the QCPMD and thus we first give its overview briefly. In the VQE-AIMD, we prepare a quantum state |Ψ(θ)⟩=U(θ)|Ψ_0⟩ generated by applying a parameterized quantum circuit (PQC) U(θ) to an initial state |Ψ_0⟩. Then, we optimize the parameters θ so that the parameterized quantum state can approximate the ground state of a Hamiltonian H(R) of target N molecules. Here, R=(R_1,…,R_N)^𝖳 denotes the nuclei positions with the Cartesian coordinates of l-th nucleus R_l = (R_l,x,R_l,y,R_l,z)^𝖳. A common strategy to obtain the ground state is minimize the energy L(R,θ) = Tr[ H(R)|Ψ(θ)⟩⟨Ψ(θ)|] as small as possible according to the variational principle. Once one can obtain the ground state energy, the forces on nuclei are also computable: the force F_l,α(R,θ) on the l-th nucleus in the α direction is expressed as F_l,α(R,θ) = - ∂/∂ R_l,α L(R,θ) ≈ - Tr[ d H(R)/d_l,α|Ψ(θ)⟩⟨Ψ(θ)|], where we consider the Hellman-Feynmann force only and ignore the Puley force (i.e., Tr[ H(R) ∂/∂ R_l,α (|Ψ(θ)⟩⟨Ψ(θ)|) ]) <cit.>, assuming the ground state is well-approximated. 
Note that we obtain the Hellman-Feynmann force by estimating the expectation values of the corresponding observables, the gradient of the Hamiltonian d H(R)/d_l,α. Also, the force shown in the right hand side of Eq. (<ref>) can be obtained by taking the finite difference Tr[ d H(R)/d_l,α|Ψ(θ)⟩⟨Ψ(θ)|] ≈Tr[ (H(R+d e_l,α) - H(R-d e_l,α))|Ψ(θ)⟩⟨Ψ(θ)|]/2d with the unit vector e_l,α and a scalar value d. Using the forces F(R,θ)=(F(R_1,θ),…, F(R_N,θ))^𝖳 with F(R_l,θ)=(F_l,x(R,θ),F_l,y(R,θ),F_l,z(R,θ))^𝖳 for fixed θ, we solve the classical equations of motions mR̈ = F (R,θ) with mass m and then update the nuclei positions. This procedure is repeated until the terminating condition is met. The QCPMD is in spirit similar to the VQE-AIMD, but its efficiency in samples is improved by leveraging the idea of Car-Parrinello molecular dynamics and Langevin dynamics formulation. The first difference is that the VQE is not employed in QCPMD. That is, similar to the classical Car-Parrinello molecular dynamics, the parameters θ in the PQC are also treated as dynamical variables. Then, the nuclei configurations R and the parameters θ are separately updated in parallel. This modification is adopted to avoid a problem in VQE that a large number of optimization rounds (equivalently, state preparations and measurements) may be needed to obtain the optimal parameters θ^*. Secondly, QCPMD takes into account the effect of statistical noise caused by a finite number of measurements. In practical settings, it is impossible to ignore the statistical noise for estimating observables such as the ground state energy on quantum computers. Thus, Langevin formulation is adopted to derive a simulation method that rather makes use of the statistical noise as thermostats; namely, the effect of noise is incorporated into the formulation in such a way that it can be used for controlling the temperature of the system. Thanks to this, we may be able to reduce the sampling cost compared to the VQE-AIMD. We omit the derivation in this manuscript, but the update rules that take into account the above-mentioned modifications are expressed as follows (please refer to Ref. <cit.> for more details); R^(k) = R^(k-1) + v^(k-1)Δ t, v^(k) = (1-γ(R^(k-1),θ^(k-1)) Δ t) v^(k-1) + F (R^(k-1),θ^(k-1))/mΔ t, θ^(k) = θ^(k-1) + ξ^(k-1)Δ t, ξ^(k) = (1-ζ(R^(k),θ^(k-1)) Δ t) ξ^(k-1) + F_θ (R^(k),θ^(k-1))/μΔ t, where R^(k), θ^(k), v^(k)=Ṙ^(k) and ξ^(k)=θ̇^(k) represent the quantities at k-th time step, respectively. Δ t represents the time step of the simulation. Since we regard the parameters θ as dynamical variables, we newly introduce the virtual mass μ and denote the forces on the parameters θ as F_θ=-∇_θ L(R,θ). Also, γ(R,θ) and ζ(R,θ) denote the coefficients determining the strength of dissipation for v and ξ, respectively. These coefficients are respectively defined as follows; γ(R,θ) = f^2(R,θ)βΔ t/2m, ζ(R,θ) = f_θ^2(R,θ)βΔ t/2μ, where β=1/k_bT with Boltzmann constant k_B is the inverse temperature and f^2(R,θ) (f_θ^2(R,θ)) is the statistical variance of the forces on nuclei (parameters). We remark that, according to Ref. <cit.>, the statistical noise is absorbed into the coefficients and hence could play a role in the temperature control. §.§.§ Classical Shadows The classical shadow technique enables us to predict many properties of a target quantum state <cit.>. More concretely, the order log(M) size of samples is sufficient for classical shadows to predict M target linear functions. 
The main idea of this method is to perform a series of randomized measurements on multiple copies of the quantum state. In the following, we explain the detailed procedure to reconstruct the underlying quantum state. Here, our target is a n-qubit quantum state represented as ρ = |Ψ⟩⟨Ψ| in the density matrix representation. * Apply a random unitary U to the quantum state ρ, i.e., ρ→ Uρ U^†. Note that the unitary is randomly selected from an ensemble of unitaries 𝒰. * Measure all qubits in the computational basis and store the measured bit-string |ŝ⟩∈{0,1}^n as classical description of U^†|ŝ⟩⟨ŝ| U. * Apply the inverted quantum channel ℳ^-1 to U^†|ŝ⟩⟨ŝ| U to produce the classical snapshot ρ̂=ℳ^-1(U^†|ŝ⟩⟨ŝ| U) of a target quantum state ρ. As we give a concrete example later, the quantum channel depends on the unitary ensemble. Also, this post-processing is fully classical. Then, by construction, we can recover the quantum state ρ by the classical shadow in expectation, i.e., ρ = 𝔼_ŝ𝔼_U [ρ̂]. * Repeat this procedure N_S times to get the empirical average ρ̃=1/M∑_j=1^N_Sρ̂^(j), which results in the exact quantum state ρ in the limit of M→∞. We remark that, in case the random Pauli-basis measurement is performed (i.e., 𝒰={H, HS^†, I}^⊗ n with the identity gate I, the Hadamard gate H and the phase gate S), the classical snapshot is expressed as ρ̂ = ⊗_i=1^n (3U_i^†|ŝ_i⟩⟨ŝ_i|U_i-I) with ŝ=ŝ_1ŝ_2…ŝ_n and U=⊗_i=1^nU_i. Moreover, we can estimate expectation values of many observables {O_i} with the classical snapshots at hand: that is, Tr[O_iρ]≈1/N_S∑_j=1^N_STr[O_iρ̂^(j)]. We note that we can also employ the median-of-means protocol <cit.> for better estimation, i.e., Tr[O_iρ] = median{Tr[O_iρ̂^1],…,Tr[O_iρ̂^K]} with classical shadows constructed from K snapshots ρ̂^k=1/⌊ N_S/K⌋∑_j=(k-1) ⌊ N_S/K⌋ +1 ^k ⌊ N_S/K⌋ρ̂^(j). Also, the expectation values of multiple observables can be obtained without the explicit construction of the classical snapshots in case the number of qubit n is large. §.§ Our Method: QCPMD with Classical Shadows Now, we are in a good position to provide the details of our scheme. In the AIMD simulations, our scheme also uses the update rules of QCPMD in Eqs. <ref> – <ref>. On the other hand, we use the classical shadow technique to estimate the forces on the nuclei F(R,θ) to improve the resource efficiency. As stated in the previous section, we need to estimate 3N forces on the nuclei of N molecules at each step, indicating the number of quantum state preparations required for a naive approach scales 𝒪(3NN_shot) with the number of measurement shots for each force N_shot. This cost can be further reduced by the classical shadow, because the quantum state used to estimate the forces in Eq. (<ref>) is the same regardless of the nuclei and its direction. Namely, we can estimate all the forces, once we get sufficient number of classical snapshots. Thus, the number of samples required can be reduced to 𝒪(N_S), which is independent of the number of molecules N. This will improve the resource efficiency of AIMD simulations and harness the applicability of QCPMD for large systems. Let us note that we can use the classical shadow technique for estimating forces on parameters F_θ(R,θ), but the improvement would not be expected. In gradient estimations, a common choice is the parameter shift rule <cit.>, where parameter-shifted quantum states are used to compute the gradient, i.e., ∂ L(R,θ)/∂θ_i = 1/2(L(R,θ+π/2e_i)-L(R,θ-π/2e_i)). 
This suggests that we need the estimation of ρ(θ+π/2e_i) and ρ(θ-π/2e_i) for every parameter θ_i. Therefore, we would not expect the improvement in the resource efficiency with respect to the number of parameters N_p. We also remark that, when it comes to predicting expectation values of fixed many Pauli operators, the classical shadow technique with randomized measurement might not gain the advantages in quantum resources for accurate predictions. This is because some quantum resources could be wasted for certain observables due to the randomness in choosing the basis. This indicates that deterministic measurements would be more effective when we know what to measure in advance. From this standpoint, grouping of commuting multi-qubit Pauli <cit.>, the so-called “derandomized" classical shadows <cit.> and the combination of these two approaches <cit.> would be more effective in this setting. Actually, the settings of QCPMD simulations fall into this situation; we know the observables that we want to estimate and the Hamiltonian H(R) is given as a weighted sum of Pauli operators H=∑_P∈{I,X,Y,Z}^⊗ nw_P P. Thus, for accurate prediction, these methods would be useful. However, we underscore that noise can be absorbed into the temperate control via the Langevin formulation in QCPMD. This implies that inaccurate estimations by randomized classical shadow can be allowed in the QCPMD simulations, and our scheme is still beneficial because of the reduction in samples from 𝒪(3NN_shot) to 𝒪(N_S). § RESULTS & DISCUSSION Here, we numerically study the performance of QCPMD with and without classical shadows for simulations of a H_2 molecule. We note that classical shadows are employed to estimate the force of parameters as well for our scheme. For all the simulation below, Qiskit <cit.> is used for quantum circuit simulation. Also, at each step, we use PySCF to prepare fermionic second-quantized Hamiltonians for the electrons with the STO-3G basis set; then, we applied the Jordan-Wigner transformation to convert fermionic Hamiltonians into qubit Hamiltonians. Our setting is as follows. We simulate the time evolution of the H_2 molecule at a temperature of 70 K with the time steps of Δ t=0.1 fs. The virtual mass for the parameters θ is set to μ=0.1 for calculating the time evolution. As for the PQC, we use a real-valued symmetry-preserving (RSP) type ansatz <cit.> with the depth D=4 (Fig. <ref>). We employ the median-of-mean protocol for the QCPMD with classical shadows using N_S=51 snapshots and K=3, while each Pauli operator is measured individually for the ordinary QCPMD with N_shot=51 measurement shots; sample numbers for QCPMD without classical shadows are larger than the other because every Pauli term for the forces of all coordinates is estimatable with the snapshots for classical shadows. The coefficients for the dissipation terms are γ=0.8 and ζ=0.8. We note that these coefficients depend on the nuclei positions and parameters at that step, and thus should be determined by the equality in Eq. (<ref>) and Eq. (<ref>). However, we consider the constant value to avoid the computation of variance. We describe the effect later in this section. In this study, we focus on two conditions for the initial configuration of the molecule and values of parameters. First, we consider the case where the molecule is at the equilibrium nuclear distance (R=0.735 Å) and the parameters θ^* are nearly optimal values which we obtain by performing the VQE optimization with BFGS. 
This is actually the situation where the assumptions in the derivation of QCPMD hold. Here, we run the simulation for 4,000 fs. Fig. <ref> illustrates the histograms of bond lengths obtained by the original QCPMD and the QCPMD with classical shadows. We discard the first 250 fs and use the remainder of the trajectory to make the figures. It turns out that both methods can reproduce the equilibrium state. To evaluate the performance quantitatively, we also estimate the average bond length. The bond lengths for QCPMD with and without classical shadows are 0.735 Å and 0.736 Å, respectively, which are (almost) the same as the equilibrium distance, i.e., 0.735 Å. The slight difference could be reduced by running the simulation longer. We note that the variance of the bond length is larger for the classical-shadow case, because the uncertainty is increased by the randomness in choosing the measurement basis. This can be easily checked by estimating expectation values using ordinary Pauli-basis measurements and classical shadows. As a toy task, we consider a four-qubit RSP circuit with random parameters θ and a Pauli Z operator on the first qubit as the target observable. Fig. <ref> clearly shows a larger variance for classical shadows with N_S=51 and K=3 than for measurements with N_shot=51; the variances averaged over five trials using different parameters θ are 0.016 and 0.077 for ordinary Pauli-basis measurements and classical shadows, respectively. Secondly, we examine the case with R=1.0 Å and parameters θ chosen at random. We consider this situation to assess the practical applicability of QCPMD. That is, although the formulation is derived under equilibrium-state assumptions, we do not always know the optimal nuclei positions and electronic wavefunctions at the beginning. Thus, we check the performance of QCPMD when the simulation is started from a random guess. Here, the number of measurement shots and the number of snapshots are the same as in the first case. We also prepare five different parameter sets for the initial values of θ. The total simulation time is set to 2,000 fs. In Fig. <ref>, we show the trajectory of the bond length over the simulation time and its histogram after reaching equilibrium (bond lengths after 250 fs). We observe that the QCPMD method reaches the equilibrium state after a certain number of timesteps, irrespective of whether classical shadows are used or not. We also find that the average bond lengths are 0.733 Å and 0.737 Å for the cases with and without classical shadows, respectively. These results suggest that QCPMD could perform simulations even when the initial condition is far from equilibrium. Moreover, classical shadows do not deteriorate the performance of QCPMD while improving the resource efficiency. Lastly, we discuss a practical difficulty of QCPMD. As mentioned above, an advantage of the method is that it uses the statistical noise as a thermostat. However, more samples would be required to realize this control. Thermal control is reflected in QCPMD by tuning the dissipation coefficients γ and ζ, which rely not only on the temperature but also on the variance of the forces. This implies that a precise computation of the variance at the equilibrium state is necessary. To this end, we should know the exact equilibrium state and compute the variance at that point. These factors could hinder the practicality of QCPMD, because the fluctuations shown in Figs.
<ref> and <ref> might make it difficult to find the exact equilibrium state, and the additional estimation of the variance is resource demanding. In our experiments, we avoid this by using fixed dissipation coefficients. On the other hand, this implies that the results shown above might not reproduce a simulation at 70 K but rather one at a certain “effective" temperature; from this viewpoint, the simulation by QCPMD with classical shadows results in equilibrium states at a higher effective temperature than the original QCPMD, while the results of the original QCPMD might not correspond to 70 K either. This also indicates that the variance would need to be estimated at each step in our second case, which could undermine the resource efficiency. A possible approach to mitigating this challenge is to save the forces at every step and then update the dissipation coefficients at some interval by computing the variance from the saved values. This might work under the assumption that the quantum states do not change significantly during a certain period. In addition, the classical shadow technique could be used for estimating the variance. Thus, our scheme might be even more advantageous in the case where the temperature is controlled exactly, which we leave for future work. § CONCLUSION In this work, we introduce the classical shadow technique to improve the resource efficiency of the QCPMD method for AIMD simulations. We focus on the H_2 molecule and numerically demonstrate that QCPMD with classical shadows can successfully reproduce the equilibrium state at a constant temperature. This study will encourage the exploration of QCPMD's performance for simulating large systems and lead to the invention of new efficient AIMD simulation methods using NISQ devices. In addition, as we discussed in Section <ref>, approaches such as grouping <cit.> and derandomized classical shadows <cit.> could be more effective in this setting. Thus, comparing these approaches would be intriguing. Furthermore, it would be interesting to investigate the performance of our QCPMD scheme for larger systems.
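For concreteness, the Pauli-basis classical-shadow estimator with median-of-means described above can be written down in a few lines for a single qubit. The sketch below is illustrative only (plain numpy with an arbitrary fixed pure state; it is not the Qiskit-based simulation code of this work): a random element of {H, HS†, I} is applied, the qubit is measured in the computational basis, the snapshot 3U†|ŝ⟩⟨ŝ|U − I is formed, and the observable estimate is the median of K group means.

import numpy as np

rng = np.random.default_rng(1)
I2 = np.eye(2, dtype=complex)
Z = np.diag([1, -1]).astype(complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
S = np.diag([1, 1j])
BASIS = [H, H @ S.conj().T, I2]                    # rotate into the X-, Y- or Z-basis

psi = np.array([np.cos(0.3), np.sin(0.3)], dtype=complex)   # arbitrary fixed pure state
rho = np.outer(psi, psi.conj())

def snapshot():
    U = BASIS[rng.integers(3)]
    probs = np.real(np.diag(U @ rho @ U.conj().T))          # computational-basis outcome probabilities
    s = rng.choice(2, p=probs / probs.sum())
    ket = np.zeros(2, dtype=complex); ket[s] = 1.0
    return 3 * U.conj().T @ np.outer(ket, ket.conj()) @ U - I2   # inverted channel, single qubit

def estimate(obs, n_snap=51, K=3):                 # median of K group means
    vals = [np.real(np.trace(obs @ snapshot())) for _ in range(n_snap)]
    return float(np.median([np.mean(g) for g in np.array_split(vals, K)]))

print("shadow <Z> =", estimate(Z), " exact <Z> =", float(np.real(np.trace(Z @ rho))))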
http://arxiv.org/abs/2406.18652v1
20240626180006
An effective model for the tidal disruption of satellites undergoing minor mergers with axisymmetric primaries
[ "Ludovica Varisco", "Massimo Dotti", "Matteo Bonetti", "Elisa Bortolas", "Alessandro Lupi" ]
astro-ph.GA
[ "astro-ph.GA" ]
Tidal disruption of satellite galaxies Varisco et al. Università degli Studi di Milano-Bicocca, Piazza della Scienza 3, I-20126 Milano, Italy INFN, Sezione di Milano-Bicocca, Piazza della Scienza 3, I-20126 Milano, Italy INAF - Osservatorio Astronomico di Brera, via Brera 20, I-20121 Milano, Italy DiSAT, Università degli Studi dell’Insubria, via Valleggio 11, I-22100 Como, Italy According to the hierarchical formation paradigm, galaxies form through mergers of smaller entities and massive black holes (MBHs), if lurking at their centers, migrate to the nucleus of the newly formed galaxy, where they form binary systems. The formation and evolution of MBH binaries, and in particular their coalescence timescale, is very relevant for current and future facilities aimed at detecting the gravitational-wave signal produced by the MBH close to coalescence. While most of the studies targeting this process are based on hydrodynamic simulations, the high computational cost makes a complete parameter space exploration prohibitive. Semi-analytic approaches represent a valid alternative, but they require ad-hoc prescriptions for the mass loss of the merging galaxies in minor mergers due to tidal stripping, which is not commonly considered or at most modelled assuming very idealised geometries. In this work, we propose a novel, effective model for the tidal stripping in axisymmetric potentials, to be implemented in semi-analytic models. We validate our semi-analytic approach against N-body simulations considering different galaxy sizes, inclinations, and eccentricities, finding only a moderate dependence on the orbit eccentricity. In particular, we find that, for almost circular orbits, our model mildly overestimates the mass loss, and this is due to the adjustment of the stellar distribution after the mass is removed. Nonetheless, the model exhibits a very good agreement with simulations in all the considered conditions, and thus represents an extremely powerful addition to semi-analytic calculations. An effective model for the tidal disruption of satellites undergoing minor mergers with axisymmetric primaries Ludovica Varisco0000-0002-6724-5999 1, 2 l.varisco4@campus.unimib.it Massimo Dotti 1, 2, 3 Matteo Bonetti0000-0001-7889-6810 1, 2, 3 Elisa Bortolas 1, 2 Alessandro Lupi 4, 2, 1 ===================================================================================================================================================================================================================================================================================================================================================================================== § INTRODUCTION In the framework of the hierarchical paradigm of cosmic structure formation <cit.>, galaxies form in a bottom-up fashion, whereby the massive galaxies that we see today build up at the intersection of dark matter filaments along which other galaxies and cold gas can stream inwards <cit.>. Specifically, at those “cosmic crossroads”, galaxies are expected to experience a sequence of mergers and accretion events that contribute to their final mass and morphological appearance. Galactic mergers are categorised based on the mass-ratio of the involved galaxy pairs. The threshold between minor and major mergers is not universally determined as different values are employed in literature depending on the specific research objectives and contexts. 
<cit.> classify as major mergers systems involving galaxies with mass ratio exceeding 1:10, while those falling below this value are designated as minor mergers. A widely used classification defines as major mergers systems with mass ratio grater than 1:3, while minor merger as those falling in the range 1:3-1:10 <cit.>. <cit.> categorises galaxy pairs on the basis of the stellar mass ratio of the systems involved, defining as major, minor, and very minor mergers the systems corresponding to the ranges 1:1-1:6, 1:6-1:100 and < 1:100, respectively. Besides the specific choice of the threshold, the distinction between minor and major mergers is not a mere classification, but implies very different dynamical evolution, outcomes and investigation techniques. Major mergers are generally rare and disruptive events that completely reshuffle the material in the parent systems and significantly perturb the original morphology in a few dynamical times. Given their disruptive effect, they can be properly characterised only through expensive numerical simulations able to track the strongly time-varying gravitational potential. On the contrary, minor mergers are usually common events along galaxy lifetimes and they generally represent a (small to moderate) perturbation to the more massive system in which they sink. In this regard, the secondary galaxies involved in minor galaxy mergers can be treated as massive perturbers, i.e. objects considerably heavier than the single bodies forming the galactic structure, but much less massive than the whole host galaxy. By leaving the more massive galaxy nearly unchanged, minor mergers are suitable to be modelled in a semi-analytical fashion <cit.>. This feature opens the possibility of performing investigations with inexpensive computational loads, still requiring a proper and careful tuning of the semi-analytical recipes against numerical simulations. Even though single minor mergers do not typically produce morphological transformations of host galaxies [See however <cit.>, who demonstrate that single minor merger events involving systems with mass ratios ∼ 0.1-0.3, and with the satellite moving on orbits almost aligned with the host's disc plane, may trigger catastrophic changes in the primary morphology within timescales as short as a few hundreds Myr.], recent theoretical and observational studies highlight the important role that repeated minor mergers may play on the evolution of their massive companions. Indeed, the occurrence of multiple minor mergers in disc galaxies can gradually induce a significant redistribution of the stellar orbits in the primary system, thus forming slowly rotating spheroidal remnants <cit.>. <cit.> showed that one third of the morphological transformation of galaxies undergoing galaxy mergers over the cosmic time is due to repeated minor merger events, the latter becoming the dominant driver of morphological changes at late epochs (z≳ 1). Moreover, minor mergers have been proven to enhance both star formation, being responsible for over a half of the star formation events induced by galaxy mergers in the Universe, and massive black holes (MBHs) accretion rates <cit.>, and also to be responsible for the 70% of the merger-driven asymmetric structures in post-merger galaxy remnants <cit.>. Among massive perturbers inhabiting galaxies, MBHs are particularly interesting to study. 
MBHs are located in the nuclei of most of massive galaxies (if not all of them) <cit.> and through galaxy mergers multiple MBHs are delivered within the same host, eventually leading to the formation of massive black hole binaries (MBHB), triplets or even higher order multiplets <cit.>. These systems are primary targets of current and forthcoming gravitational wave (GW) experiments, primarily Pulsar Timing Array <cit.> campaigns now opening the nHz sky, and the Laser Interferometer Space Antenna (LISA), targeting mHz frequencies <cit.>. Prior to the formation of bound MBH systems in the nuclei of galaxies, every MBH needs to sink towards the central regions. The main actor driving this evolution is dynamical friction <cit.>. At this stage of the evolution MBHs are generally still surrounded by their progenitors' cores, so that their effective sinking mass (locally perturbing the primary and leading to DF) can be much larger than the mass of the MBH itself <cit.>. However, such left-over material (gas and stars) surrounding the MBH typically gets gradually stripped by the main galaxy tidal field <cit.>. The effectiveness of the process depends on the compactness of the material around the intruder MBH, and on the steepness of the galactic acceleration field. Depending on the efficiency of the stripping process, the MBH loses material and may eventually “get naked”, i.e. remain without any residual surrounding distribution of matter bound to it. This effective “mass loss” crucially affects the dynamics of the inspiral and especially the efficiency of DF, as the DF timescale needed for the object to reach the centre of the primary galaxy critically depends on the perturber's mass <cit.>. A quantitative assessment of how mass is stripped from infalling satellite galaxies requires a careful estimation of the so called tidal radius, i.e. the conceptual boundary for a celestial object dividing the bound from the unbound mass. Beyond this limit, the object's material undergoes stripping due to the tidal field of the more massive companion. First introduced by <cit.> within the context of Milky Way globular clusters, the tidal radius is theoretically defined strictly for satellites following circular orbits, where it coincides with the position of L1/L2 Lagrange points <cit.>. A different attempt to define such radius also for eccentric motion was explored by <cit.>, who argued that during pericenter passages, satellites are truncated to the size indicated by the pericentric tidal radius. Later, <cit.> and <cit.> observed that retrograde orbits in the context of the restricted three-body problem are stable over greater distances compared to prograde orbits, further out the tidal radius defined by <cit.>. In a more recent study, <cit.> derived an expression for the tidal radius taking into different orbit types: prograde, radial, and retrograde. Interestingly, the analysis revealed that the tidal radius for retrograde orbits exceeds that of radial orbits, which, in turn, is larger than the tidal radius for prograde orbits. To date, the vast majority of attempts to estimate the tidal radius focused on spherically symmetric host galaxies <cit.>. Although observations show that, while the morphology of massive galaxies in local Universe is dominated by spheroidal systems <cit.>, in the early Universe the massive galaxy population was mostly composed of disc galaxies <cit.>. 
This morphological transformation which leads to an overall transition from rotationally-supported systems to dispersion dominated ones is believed to be primarily driven by galaxy mergers. Moreover, cosmological simulations suggest that disc galaxies do not show any significant difference in their merger history compared to spheroidal galaxies <cit.>. Thus, a significant number of mergers involving disc-like primary galaxies are expected to have occurred throughout cosmic history and are still ongoing. Indeed, observations on nearby massive disc galaxies display tidal features, hinting that they have undergone recent minor mergers events. For this reason, a systematic investigation focused on galaxy mergers involving systems that strongly deviate from spherical symmetry is compelling. In this study, we precisely aim at finding a general description of the tidal radius when axis-symmetric systems are involved[Here, we refer to the total potential of the primary galaxy, composed of both baryonic and dark matter components. If one focuses on the dark matter halo, the work of <cit.> show that baryonic cooling and the formation of a disk can enhance symmetry in the inner regions of halos. ]. Those systems, representative of e.g., spiral galaxies, are indeed quite common and many minor mergers actually occur in such galaxies. Our ultimate goal consists in deriving a simplified prescription for the tidal radius to be implemented in semi-analytical models of galaxy formation, in order to better asses the DF-driven inspiral pace of massive perturbers within galaxies of any type. A proper and comprehensive semi-analytical modellisation of minor mergers can represent a powerful tool for studying a wide variety of astrophysical scenarios. The exploitation of semi-analytical models is crucial to overcome the limited spacial and mass resolution of large-scale cosmological simulations. In these simulations, numerous minor mergers are observed to occur, however the lack of sufficient resolution may hinder to track the late stages of these events as the satellite galaxies become unresolved. Employing detailed semi-analytical models would enable us to follow the satellite evolution down to scales where the system is no longer resolved in the simulations. This feature allows us to predict the late phases of the merger and to determine the ultimate fate of the satellite galaxy and, if present, of the MBH embedded within it. In this context, semi-analytical models could be useful, for instance, to address and possibly reconcile discrepancies between the estimated fraction of orphan galaxies arising from mock and semi-empirical models <cit.>. Furthermore, due to their great versatility, semi-analytical models are particularly well-suited for studying the formation and evolution of systems in extreme merger scenarios, such as very faint Milky Way satellites <cit.>. Finally, minor mergers may also trigger an enhancement in the satellite MBH accretion due to gas inflows caused either by shocks developing within the interstellar medium in the pairing phase at the contact surface of the two galaxies <cit.>, or in the final phases when the naked MBH circularises inside the primary disk <cit.>. The paper is organised as follows: in Sec. <ref>, we introduce a novel prescription for the tidal radius, delineate the galactic models employed, and detail the setup of the N-body simulations implemented for our prescription validation. In Sec. 
<ref>, we present the outcomes of the comparison between out model's predictions and those derived from N-body simulations. Finally, in Sec. <ref> we discuss the limitations of our model, we summarise our findings and draws our conclusions. § METHODS When minor mergers occur, satellite galaxies, while orbiting within their hosts, are subjected to tidal forces that remove part of their mass, sometimes leading to their complete disruption even after a single pericentre passage. Two main mechanisms have been identified for removing mass from the satellite, depending on the rapidity at which the external tidal field varies. When the satellite experiences a slowly changing tidal field, the effect of the tidal forces is that of stripping material from the outer regions of the satellite, forming a clear external boundary often called the tidal radius (R_t). This process is identified as tidal stripping. On the contrary, when the satellite undergoes a rapid change in the external tidal field, part of its orbital energy is converted into internal energy, leading to an overall heating of the satellite. The amount of energy injected into the system during fast pericentre passages and transferred to the stars can be enough to unbind a significant fraction of the satellite mass. This effect is known as tidal heating. The mass loss caused by tidal effects can significantly impact the orbital decay of the satellite, as it reduces the efficiency at which dynamical friction drags the satellite galaxy towards the centre of its host, thus increasing its orbital decay time. §.§ Tidal Radius To characterise the mass loss of satellite galaxies due to tidal stripping in minor mergers, the first step consists of defining the tidal radius. The standard approach in literature considers two spherically symmetric systems, with mass profiles m(r) for the satellite, and M(r) for the host galaxy, whose centres are separated by a distance R. The satellite R_t is defined as the distance from the centre of the satellite at which the acceleration of a test particle along the direction connecting the centre of the two systems vanishes. In a minor merger scenario where m ≪ M, under the assumptions that R_t ≪ R at any time, and that the test particle has null velocity in the satellite's reference frame, R_t is given by: R_t = R [ G m(R_t)/Ω^2- d^2Φ_h/dr^2]^1/3. This expression was first derived in <cit.>, where r and Ω are the radial coordinate and the angular velocity of the satellite in the reference frame of the host galaxy, and Φ_h (r) is its gravitational potential. It is worth noting that this formula is strictly valid for circular orbits, but can be easily extended to eccentric orbits if one considers instantaneous values for Ω and R. Additionally, it is important to emphasise that Eq. (<ref>) holds only under the simplistic assumption of a spherical host. In this study, we aim to present a novel prescription for R_t that is adaptable to various host geometries. For this purpose, we consider a spherically symmetric satellite galaxy embedded in the generic potential of its host. We define the galactic inertial frame with the origin in the galactic centre denoted as S and the non-inertial frame of the satellite as S'. In this work, all the quantities evaluated in the non-inertial frame of the satellite are primed, while the unprimed are relative to the inertial frame of the host galaxy. Considering a test satellite star, its position is identified by the radius vector 𝐫_*. 
The acceleration of the test star in the reference frame of the satellite is: 𝐚' = 𝐚 - 𝐀 - (dΩ/dt)×𝐫'_* - Ω×(Ω×𝐫'_*) - 2Ω×𝐯'. Here, Ω is the angular velocity of the satellite centre of mass (CoM), 𝐚 represents the acceleration of the test star in the S frame: 𝐚 = - G M_s(r'_*)/r'^3_* 𝐫'_* - ∇ϕ_h(𝐫_*), and 𝐀 is the acceleration of the S' frame in S, which can be expressed as: 𝐀 = - ∇ϕ_h(𝐫_S), where r_S indicates the distance of the satellite CoM from the host's centre. The term Ω×(Ω×𝐫'_*) can be rewritten as Ω^2 r'_*(cosα-1), with α being the angle between Ω and 𝐫'_*. Choosing a random direction ê_𝐫'_* from the centre of the satellite, we can approximate the tidal radius as the distance from the satellite centre at which a test star with v' = 0 experiences a vanishing 𝐚': a'_ê_𝐫'_* = - G M_s(r'_*)/r'^2_* - ∇ϕ_h(𝐫_*)·ê_𝐫'_* + ∇ϕ_h(𝐫_S)·ê_𝐫'_* - Ω^2 r'_*(cosα-1), where we omitted the term (dΩ/dt)×𝐫'_*, which is directed perpendicularly to ê_𝐫'_* and thus does not contribute to the acceleration along the reference direction we fixed. It is important to note that, unlike the derivation in <cit.>, we relax the assumption R_t ≪ R, therefore allowing the satellite to undergo close encounters with the host centre. Eq. (<ref>) thus provides an implicit definition of R_t along a specific direction from the centre of the satellite. As mentioned above, if the host system is spherically symmetric, the reference direction along which R_t is evaluated is the one connecting the centres of the two galaxies, since it is the direction that maximises the tidal force. However, in a generic galactic field it is not possible a priori to define the direction that maximises the tidal force exerted on the satellite by the host, which instead will depend on the morphological parameters of the two systems and the instantaneous location of the satellite within the host potential. For this reason, at any time during the satellite evolution we numerically solve Eq. (<ref>) along 1000 random directions and we select R_t as the minimum of all the tidal radii evaluated, which we denote as R_T1. However, the mass of the satellite is not instantaneously stripped, and it is not possible a priori to define at what rate the material is removed through Eq. (<ref>). For this reason, we introduce a modified definition of the tidal radius, i.e. R_T2(t) = R_T(t_old) exp[ -α (t - t_old) / (r_p/v_p) ]. In Eq. (<ref>), R_T(t_old) is the tidal radius evaluated at a prior time t_old, r_p and v_p are the distance and velocity of the satellite with respect to the host centre, both evaluated at the pericentre, while α is a tunable dimensionless parameter that regulates the rate at which the mass is removed from the satellite: the higher the value of α, the faster the mass is stripped. Thus, comparing R_T1 and R_T2, we define R_T to be: R_T(t) = max(R_T1(t), R_T2(t)). Finally, we require the tidal radius to be a decreasing function of time. This condition implies that the removed material is irrevocably detached from the satellite, precluding any subsequent reattachment at later times, effectively assuming that tidal stripping is irreversible. §.§ Satellite galaxy In this study, we characterise the satellite galaxy employing the spherical and isotropic Hernquist model <cit.>, whose potential and associated mass density profile are given by: Φ_s(r) = - G M_s/(r+a_s), ρ_s(r) = (M_s/2π) a_s/[r(r+a_s)^3], where M_s and a_s are the total mass and scale radius of the satellite, respectively. The corresponding mass profile is m_s(r) = M_s [ r/(r+a_s) ]^2.
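To make the above prescription concrete for such a Hernquist satellite, a minimal Python sketch could read as follows; the unit system, the root-finding bracket, and the random-direction sampling details are our own assumptions rather than choices taken from the paper.

```python
import numpy as np
from scipy.optimize import brentq

G = 4.30091e-6  # kpc (km/s)^2 / Msun (assumed unit system)

def hernquist_mass(r, M_s, a_s):
    """Cumulative mass of the Hernquist satellite, m_s(r) = M_s [r/(r+a_s)]^2."""
    return M_s * (r / (r + a_s))**2

def tidal_radius_along(e_hat, r_sat, omega_vec, grad_phi_host, M_s, a_s):
    """Root of the directional acceleration along the unit vector e_hat."""
    cos_a = np.dot(omega_vec, e_hat) / (np.linalg.norm(omega_vec) + 1e-30)
    om2 = np.dot(omega_vec, omega_vec)

    def accel(rp):
        r_star = r_sat + rp * e_hat                      # star position, host frame
        tidal = np.dot(grad_phi_host(r_sat) - grad_phi_host(r_star), e_hat)
        return -G * hernquist_mass(rp, M_s, a_s) / rp**2 + tidal - om2 * rp * (cos_a - 1.0)

    try:
        return brentq(accel, 1e-4, 50.0)                 # bracket in kpc (assumption)
    except ValueError:
        return np.inf                                    # no truncation along this direction

def update_tidal_radius(t, t_old, R_T_old, r_peri, v_peri, alpha,
                        r_sat, omega_vec, grad_phi_host, M_s, a_s,
                        n_dir=1000, rng=None):
    """R_T1 from random directions, R_T2 from the delayed decay, max of the two,
    with a non-increasing R_T enforced."""
    rng = np.random.default_rng() if rng is None else rng
    dirs = rng.normal(size=(n_dir, 3))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    R_T1 = min(tidal_radius_along(e, r_sat, omega_vec, grad_phi_host, M_s, a_s)
               for e in dirs)
    R_T2 = R_T_old * np.exp(-alpha * (t - t_old) / (r_peri / v_peri))
    return min(R_T_old, max(R_T1, R_T2))
```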
We integrate the satellite orbit with the semi-analytical code described in <cit.>, in which we incorporated the evolution of the tidal radius as detailed in Section <ref>. We truncate the satellite mass profile by integrating m_s(r) up to R_t. The semi-analytical framework features a comprehensive treatment of the dynamical friction specifically tailored to account for flattened and rotating systems <cit.>. It is also equipped with a prescription for the interactions of massive perturbers with galactic substructures such as bars <cit.>. §.§ Host galaxy In the present work, we explore two different models for the host galaxy: a single-component and a double-component host galaxy. In the first scenario, the primary galaxy is characterised by an isolated exponential disc, defined by the density profile: ρ_d(R,z) = M_d/(4π R_d^2 z_d) e^{-R/R_d} sech^2(z/z_d). Here, M_d is the total mass of the disc, while R_d and z_d are its scale radius and scale height, respectively. An approximate analytical expression for the potential of such a model exists only within the galactic plane. Consequently, accelerations caused by the disc potential outside the galactic plane are determined through numerical interpolation of tabulated values, which are computed over an adaptive grid (see <cit.> for details). Single-component host galaxy models were employed to test simple systems, in which we neglect dynamical friction to focus on the tidal effects regulating the evolution of the satellite mass. In the case of a composite host galaxy, the disc is embedded within a spherical dark matter (DM) halo. The potential of this halo follows the Hernquist profile <cit.>, characterised by a total mass M_h and a scale radius a_h: Φ_h(r) = - G M_h/(r+a_h). This choice is motivated by the fact that the Hernquist profile is numerically convenient and indistinguishable in the inner region from a Navarro, Frenk & White (NFW) <cit.> profile. For this reason, it has been extensively used in the literature to model DM halos <cit.>. §.§ N-body simulations Our investigation is complemented by a comparative analysis, in which we test the proposed semi-analytical prescription regulating the tidal-stripping-driven mass evolution of satellite galaxies against N-body simulations. This approach enables us to evaluate the ability of our model to accurately encompass all the relevant physical processes involved and to identify potential missing effects. N-body simulations were performed employing the publicly available code GADGET-4 <cit.>. In all the tested systems, the satellite galaxy is modelled with 10^5 stellar particles. The particle positions are initialised to follow the mass distribution in Eq. (<ref>), while the velocities are drawn at equilibrium in the potential generated by the stellar distribution. The initial satellite mass is fixed to be equal across all models, with M_s = 10^8 M_⊙, ensuring a sufficiently small satellite-to-host mass ratio to avoid significant perturbations on the host's potential, as we consider the latter fixed. We considered three different values for the satellite scale radius, i.e. a_s = 0.1, 0.5, 1 kpc, thus testing different mass concentrations. The satellite is then embedded within the primary galaxy at a distance of R_i = 10 kpc from its centre and with a specific initial velocity, which is added to the stars as a bulk velocity.
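For illustration, the model ingredients just described (the exponential disc, the Hernquist halo, and the bulk placement of the satellite) can be encoded as in the sketch below; the unit system and the helper names are assumptions of ours, not code from the paper.

```python
import numpy as np

G = 4.30091e-6  # kpc (km/s)^2 / Msun (assumed unit system)

def disc_density(R, z, M_d, R_d, z_d):
    """Exponential disc, rho_d = M_d / (4 pi R_d^2 z_d) exp(-R/R_d) sech^2(z/z_d)."""
    return M_d / (4.0 * np.pi * R_d**2 * z_d) * np.exp(-R / R_d) / np.cosh(z / z_d)**2

def halo_acceleration(x, M_h, a_h):
    """Acceleration -grad Phi_h of the Hernquist halo, Phi_h = -G M_h / (r + a_h)."""
    r = np.linalg.norm(x)
    return -G * M_h / (r * (r + a_h)**2) * x

def place_satellite(star_pos, star_vel, R_i=10.0, v_bulk=(0.0, 0.0, 0.0)):
    """Shift an isolated, relaxed satellite realisation to the starting separation
    R_i along the x axis and add the chosen bulk velocity to every star."""
    return star_pos + np.array([R_i, 0.0, 0.0]), star_vel + np.asarray(v_bulk)
```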
We explore the orbital parameter space by changing both the initial velocity of the satellite CoM (v_i / v_c = 0.75, 0.50, 0.25, where v_c is the circular velocity at R_i) and the initial inclination of the satellite orbit with respect to the galactic plane (θ = 0^∘, 30^∘, 60^∘, 90^∘). We set the softening parameter ϵ = 1 pc for the satellite particles, while we fix ϵ = 5 pc for the stellar particles of the disc component in multi-component galaxy models. To isolate the impact of tidal forces on the evolution of the satellite mass from other possible influencing processes, we first performed a set of simulations excluding the effect of dynamical friction. To achieve this, the host galaxy is included in the N-body simulations as a stationary semi-analytical potential, instead of being modelled using collisionless particles. To do so, we add to the acceleration of the satellite particles the acceleration induced by the presence of the host potential. As mentioned in the previous section, all the models in which we omit dynamical friction host a primary galaxy modelled with a single exponential disc. The method we implemented in GADGET-4 to compute the accelerations generated by the exponential-disc potential is analogous to the one we use in the semi-analytical code, described in Sec. <ref>. This setup prevents gravitational interactions between satellite and field stars, thereby preventing dynamical friction from taking place. We then consider more complex systems composed of a satellite orbiting in a double-component host galaxy, also including effects from dynamical friction. In these systems, the primary galaxy consists of an analytical dark matter halo, whose potential is given by Eq. <ref>, and an exponential disc, modelled with 10^7 stellar particles, whose mass density is given by Eq. (<ref>). The initial conditions for the disc were generated using the public code GalIC <cit.>, which is based on an iterative approach to build N-body galaxy models at equilibrium. Similarly to the case of the analytical disc, the dark matter halo contributes solely through the acceleration its potential imprints on the stellar particles (which we compute and add to the particles in the simulation), thus giving no contribution to the dynamical friction. The host galaxy parameters are summarised in Table <ref>. §.§ Satellite CoM and bound particles The upper panels in Fig. <ref> show satellite particles in one of the tested models (specifically the system composed of a satellite with a_s = 0.5 kpc, orbiting in the galactic plane of an exponential-disc host, with initial velocity v_i = 0.5 v_c) at the first, middle and final snapshot of the simulation. The plots' origin coincides with the centre of the host galaxy potential. Orange particles are bound to the satellite, while grey particles indicate those that have been stripped. The shaded thin red line shows the trajectory predicted by the semi-analytical model, while the thick solid red and blue lines track the satellite CoM in the semi-analytical model and in the N-body simulation, respectively. In each snapshot of the simulation, the bound particles are identified through an iterative approach. We start by identifying the position and velocity of the satellite CoM. We initialise the satellite CoM location as the point corresponding to the highest density. For each of the satellite particles, we compute the binding energy as: E_* = 1/2 |𝐯_* - 𝐯_CoM|^2 + Φ_TruncHern(r_*).
Here v_* is the velocity of the star, v_CoM is the velocity of the satellite CoM, and Φ_TruncHern(r_*) is the potential generated by a Hernquist model truncated at a certain radius r_max, which is given by: Φ_TruncHern(r_*) = G M_s [ 1/(r_max+a_s) - 1/r_max - 1/(r_*+a_s) ] if r_* < r_max, and Φ_TruncHern(r_*) = - G M_s/r_* if r_* ≥ r_max, where r_* is the distance of the selected star from the satellite centre. To determine the truncation radius r_max at each snapshot, we initially set r_max = 10 a_s, and subsequently we consider enlarging spherical shells centred at the satellite CoM with a fixed width of δ_r = 0.25 a_s. The value of r_max is then chosen to correspond to the median radius of the smallest shell containing a number of unbound stars exceeding twice the number of the bound ones (i.e. such that N_unbound ≥ 2 N_bound). We update the CoM location and velocity with the values computed using the stars with E_* < 0. The procedure is repeated iteratively until the CoM position converges to a constant point, with a relative error on the position of the CoM lower than 10^-3. The lower panels in Fig. <ref> show the satellite cumulative mass profile at the same snapshots and for the same system as in the upper panels. The black solid curve displays the theoretical cumulative mass profile from the Hernquist model. The other two profiles are constructed using the bound particles only, in orange, and all the particles that were part of the satellite at the initial time, in grey. The vertical blue line shows the value of the tidal radius computed with our semi-analytical prescription at the same time of the simulation. Thus, the satellite mass resulting from the simulation, given by the value at which the orange curve saturates, can be compared to the value predicted by our semi-analytical model, i.e. the value at which the theoretical profile is truncated by the tidal radius. §.§ Mass evolution and choice of the optimal α parameter We compare the outcomes of N-body simulations with the results of our semi-analytical prescription, testing different values of the α parameter, which controls the mass-stripping rate. A higher α corresponds to a faster mass removal. The panels in Fig. <ref> illustrate the temporal evolution of the mass of a satellite with a_s = 0.5 kpc orbiting within the host galactic plane, for three different initial velocities: v_i/v_c = 0.75, 0.5, 0.25. The black line shows the evolution of the mass resulting from N-body simulations. The coloured solid lines display the mass evolution predicted by the semi-analytical model for different values of α, spanning from 0.05 up to 5. The satellite mass corresponding to the minimum tidal radius computed at each time is indicated with a grey dashed line, i.e. the value one would predict if the stripping were instantaneous and reversible. It is important to notice that the initial configuration of the simulated systems is not at equilibrium. This is because the satellite is generated in isolation and then artificially placed within the primary galaxy potential, instead of following the merger from its initial phases. Therefore, we use the position and velocity of the satellite CoM in the N-body simulation at the apocentre after the first orbit as the initial condition for the semi-analytical model calculations. In Fig. <ref>, the first orbit is indicated by the grey shaded region.
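The bound-particle bookkeeping described above can be condensed into a short sketch (ours, not the production analysis code); the density-peak initialisation of the CoM is replaced here by a simple mean, the unit system is an assumption, and shell handling is simplified.

```python
import numpy as np

G = 4.30091e-6  # kpc (km/s)^2 / Msun (assumed unit system)

def phi_trunc_hernquist(r, M_s, a_s, r_max):
    """Potential of the Hernquist satellite truncated at r_max (see above)."""
    inner = G * M_s * (1.0 / (r_max + a_s) - 1.0 / r_max - 1.0 / (r + a_s))
    return np.where(r < r_max, inner, -G * M_s / np.maximum(r, 1e-12))

def bound_particles(pos, vel, M_s, a_s, tol=1e-3, max_iter=100):
    """Iteratively identify bound stars and the satellite CoM (host-frame inputs)."""
    com_pos, com_vel = pos.mean(axis=0), vel.mean(axis=0)
    bound = np.ones(len(pos), dtype=bool)
    for _ in range(max_iter):
        r = np.linalg.norm(pos - com_pos, axis=1)
        # truncation radius: median radius of the smallest shell with N_unbound >= 2 N_bound
        r_max, dr = 10.0 * a_s, 0.25 * a_s
        for lo in np.arange(0.0, r.max(), dr):
            shell = (r >= lo) & (r < lo + dr)
            if shell.any() and (shell & ~bound).sum() >= 2 * (shell & bound).sum():
                r_max = np.median(r[shell])
                break
        E = 0.5 * np.sum((vel - com_vel)**2, axis=1) + phi_trunc_hernquist(r, M_s, a_s, r_max)
        bound = E < 0.0
        if not bound.any():
            break
        new_pos, new_vel = pos[bound].mean(axis=0), vel[bound].mean(axis=0)
        converged = np.linalg.norm(new_pos - com_pos) < tol * (np.linalg.norm(com_pos) + 1e-12)
        com_pos, com_vel = new_pos, new_vel
        if converged:
            break
    return bound, com_pos, com_vel
```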
Finally, using a least-squares fit to the mass evolution, we determine the optimal value of α, corresponding to the semi-analytical model that most accurately reproduces the N-body simulations. Importantly, to make sure that our results are not affected by artificial numerical stripping, we compared the outcomes of our simulations with the criteria proposed in <cit.>[The criteria in <cit.> are computed assuming a Navarro-Frenk-White <cit.> profile for the satellite galaxy. We applied these criteria to our satellites, even though our analysis employs a Hernquist model. Extending the computation to determine the precise threshold for a Hernquist profile is beyond the scope of this paper.]. The number of particles (N = 10^5) and the small softening length (ϵ = 1 pc) used to model our satellite galaxies place our results well above (by about two orders of magnitude) the threshold ensuring that the system suffers from neither discreteness noise nor inadequate force resolution, in all the tested cases and over the entire simulation time. In the next section, we will discuss the results of our model, focusing in particular on the model's ability to reproduce the evolution of the satellite mass. § RESULTS §.§ Models without dynamical friction To test the efficiency of our semi-analytical model in predicting the evolution of a satellite galaxy within a non-spherical host, we started our investigation by considering the limiting case of a satellite moving in the analytical potential of a single-component, disc-like host galaxy. Although far from being realistic, this configuration allows us to isolate the effects determined by tidal forces exerted only by the disc, excluding the influence of other factors that can affect the satellite's orbital evolution, such as the presence of a spherical component in the host galaxy and the effect of dynamical friction. Fig. <ref> displays the optimal values of the α parameter for each model, evaluated as detailed in Sec. <ref>. In more detail, the three panels show how the α_best parameter changes with the initial orbital velocity (or, equivalently, the initial eccentricity) in models sharing the same satellite scale radius a_s (each panel refers to a different value of a_s) and the same orbital inclination (reported with different line styles and colours). In general, most systems exhibit a slight increase in the α parameter as the initial velocity approaches the circular velocity, while no evident trends in the values of α can be outlined when varying the scale radius and the orbital inclination. As expected, a lower α is associated with systems with higher initial eccentricity (or lower initial velocity). This is attributed to the abrupt decrease in the tidal radius at pericentre passages, as predicted by Eq. (<ref>), leading to a significant and instantaneous mass loss. However, the actual timescale to strip material from the satellite, as predicted by N-body simulations, is longer than the duration of the fast pericentre passages. For this reason, in the vicinity of the pericentre, the tidal radius decrease is delayed using Eq. <ref>, with α regulating the rapidity of the mass removal. Since this effect is much more relevant along eccentric orbits, the α parameter needs to be small enough to slow down the satellite mass loss, which otherwise would be extreme, and is expected to be smaller than in systems with low-eccentricity orbits.
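In practice, the least-squares selection of the optimal α described at the beginning of this subsection amounts to a few lines; in the sketch below, `semi_analytic_mass` is a placeholder of ours standing in for the full semi-analytical integration, not an actual routine from the paper.

```python
import numpy as np

def best_alpha(t_grid, m_nbody, semi_analytic_mass,
               alphas=(0.05, 0.1, 0.5, 1.0, 5.0)):
    """Least-squares choice of alpha: compare the model mass history with the
    N-body one on the same time grid and keep the alpha with minimal residual."""
    residuals = [np.sum((semi_analytic_mass(a, t_grid) - m_nbody)**2) for a in alphas]
    return alphas[int(np.argmin(residuals))]
```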
Unless explicitly specified otherwise, all the results presented from this point forward refer to the semi-analytical model characterised by the optimal value of α for each system considered. In Fig. <ref>, we present the results of the comparison between our semi-analytical prescription and N-body simulations for models with the satellite moving within the galactic plane. The upper panels depict the evolution of the separation of the satellite CoM from the primary galaxy centre. The semi-analytical model's predictions are shown in orange, while the N-body simulation results are represented by a black solid line. The bottom panels show the time evolution of the difference between the satellite mass (normalised to the initial satellite mass) predicted by the semi-analytical model and the mass resulting from N-body simulations. The three panels correspond to different initial velocities of the satellite, with line colours indicating the satellite scale radius. Our semi-analytical prescription reproduces well both the orbital and the mass evolution of the satellite. As an additional test, we compare our semi-analytical prescription for the tidal radius and mass evolution (solid lines) with results obtained using King's formula (dashed lines), see Eq. (<ref>). We observe an overall better agreement with N-body simulations using our new semi-analytical prescription compared to the King prescription. This result is due to multiple factors. First, King's formula, when applied without any delay in the mass removal, implies instantaneous mass stripping. This leads to a general underestimation of the satellite mass, especially in the initial phases of the evolution. Moreover, one of the main assumptions in King's prescription is that the tidal radius should be much smaller than the separation between the centres of the two galaxies, thereby excluding close encounters. This assumption is generally valid along quasi-circular orbits, but it breaks down for highly eccentric orbits, where the pericentre can be very close to the host centre. The combined effect of the instantaneous mass stripping, which can be severe along eccentric orbits during the close pericentre passages, and of the assumption of distant interactions implies an increasing inability of King's prescription to reproduce the results of N-body simulations (see bottom right panel in Fig. <ref>). It is important to note that a comparison with King's prescription is meaningful only for systems in which the satellite is orbiting within the galactic plane, as away from the galactic plane King's tidal radius becomes ill-defined. In the co-planar case, indeed, the gradient of the host potential at the position of each satellite star points approximately toward the host centre, making the comparison between our and King's prescriptions meaningful. Nonetheless, we stress that, even in this case, the accelerations of stars that, during their orbits around the satellite centre, lie above or below the plane of the host disc are not radial and are, therefore, implicitly approximated in the treatment by <cit.>. Finally, we investigated systems where the satellite orbits outside the galactic plane, exploring various inclination angles. Since the qualitative trends observed in these cases are similar to the ones discussed for co-planar orbits, we show the evolution of the error in estimating the satellite mass for these systems in Fig. <ref>.
Our semi-analytical prescription effectively reproduces the evolution of the satellite mass along the orbit, particularly in systems with eccentric orbits, across all orbital inclinations. However, in systems hosting satellites on low-eccentricity orbits, our semi-analytical model tends to overestimate the satellite mass, as observed in the right panels of Figs. <ref> and <ref>. We will delve into this behaviour extensively in Section <ref>. §.§ Models with dynamical friction After assessing the capability of our model to replicate the effects of tidal stripping in a fixed analytical potential, we extend our analysis to include models where dynamical friction is considered. In this context, our study involves satellite galaxies orbiting within a multi-component host galaxy. As detailed in Table <ref>, the host galaxy in these models comprises a spherically symmetric dark matter halo, incorporated as an analytical potential in the N-body simulations, and an exponential disc containing 10^7 stellar particles. Consequently, the dynamical friction experienced by the satellite stars is solely attributed to the disc component of the host galaxy. In contrast to the models examined thus far, the introduction of dynamical friction, as described in detail in the introduction, significantly influences the satellite's orbital evolution, which, in turn, plays a crucial role in shaping the tidal radius and consequently determining the extent of mass removal. The combined effect of dynamical friction and mass loss is illustrated in Figure <ref>, where we report the result for one of the systems we tested (i.e. a satellite orbiting within the galactic plane with initial velocity v_i = 0.25 v_c and a_s = 0.5 kpc [Due to the computational cost of simulations involving a high number of particles, and since the results of simulations without DF are almost independent of the satellite scale radius, we chose to consider a single value for a_s. We picked a_s = 0.5 kpc, i.e. the middle value among those that we tested in the previous sections.]). The left panels compare the evolution of the satellite's distance from the centre of the host in the N-body simulation (depicted by the black line) with our semi-analytical model's predictions for three distinct α values (each represented by a coloured solid line in a separate panel). Correspondingly, the right panel shows the satellite's mass evolution in both the N-body simulation and the semi-analytical models, maintaining the same colour code as in the left panels. In the right panel of Fig. <ref>, similarly to Figs. <ref> and <ref>, small increases in the satellite mass can be noticed just after pericentre passages. Those increases are due to satellite particles that are stripped during the pericentre passage but, thanks to their orbital motion, are re-accreted soon after the closest approach to the host centre, rebinding to the satellite. The amount of matter re-accreted is very small compared to the amount of matter that one would predict to rebind to the satellite after each pericentre passage in the case of a freely evolving R_t (grey dashed line). For this reason, and because the amplitude of this bump in the satellite mass is damped over subsequent pericentre passages, we neglect this effect and consider R_t to be a decreasing function of time. Among the models investigated, the one corresponding to α = 0.1 exhibits the best agreement with both the satellite's mass and orbital evolution.
Conversely, models associated with higher values of α, corresponding to faster mass loss, demonstrate an increasing deviation from the simulation results. This discrepancy arises from the rapid reduction in the satellite mass, which leads to a weakening of the dynamical friction drag, consequently slowing down the satellite's decay towards the host centre. The best values of the α parameter for all the investigated systems are summarised in Table <ref>. As highlighted in the previous section, models devoid of dynamical friction exhibit a consistent agreement between our semi-analytical model and N-body simulations, independently of the scale radius and orbital inclination, with a mild dependence on the initial orbital eccentricity only. Given this result, and the fact that simulations involving a host disc composed of 10^7 particles represent a considerably higher computational burden compared to simulations with entirely analytical hosts, we opt to focus our investigation on systems featuring a satellite with a fixed scale radius, a_s = 0.5 kpc, orbiting within the galactic plane. The primary parameter under consideration is therefore the variation in the satellite's initial velocity. The results are shown in Fig. <ref>. The left panels compare the evolution of the satellite's CoM in both the simulations and the semi-analytical models, each using the best value of α. From top to bottom, the different panels correspond to the three different initial satellite velocities, v_i = 0.75 v_c, v_i = 0.50 v_c and v_i = 0.25 v_c. The right panel depicts the error in the evaluation of the satellite mass for the same values of the initial velocities. The dashed vertical lines represent the initial time of the semi-analytical models, which corresponds to the first apocentre, and are coloured using the same colour code as the solid lines. As noted in the previous cases, a very good agreement is observed between the results obtained from N-body simulations and the predictions from our semi-analytical models regarding the orbital evolution of the satellite and the associated mass decrease. Notably, this accord is particularly pronounced for systems featuring satellites on more eccentric orbits, as consistently demonstrated across all the investigated systems. §.§ Testing low-eccentricity satellite orbits In this section, we investigate in detail the processes contributing to the systematic overestimation of the satellite mass in our semi-analytical model when compared to N-body simulations in systems harbouring satellites on low-eccentricity orbits. Two primary processes may account for this discrepancy. The first involves tidal heating resulting from rapid changes in the host potential experienced by the satellite, as described at the beginning of the Methods section. Another possible factor is the satellite's evaporation induced by mass truncation. During pericentre passages, where the majority of the stripping occurs, a substantial portion of the satellite mass is expelled from the system, leading to a truncation in the satellite mass distribution. As a result, the satellite is no longer in equilibrium. As it evolves towards a new equilibrium, its mass distribution expands, causing stars with higher velocities to migrate to larger radii. As a consequence, the satellite's profile changes, becoming less concentrated, thereby making it easier for the particles in the outer layers to become unbound. This results in a continuous mass loss, even if the tidal radius undergoes minimal change, particularly along quasi-circular orbits.
In order to discern the predominant process influencing the excess mass loss in the satellite, we conducted additional N-body simulations without dynamical friction. This was done to exclude potential additional effects that could contribute to the removal of mass from the satellite. The simulations were executed considering only systems characterised by the lowest initial orbital eccentricity, specifically with v_i = 0.75 v_c, as these are the most affected by the process under investigation. The satellite under consideration featured a Hernquist mass distribution with a_s = 0.5 kpc. Instead of randomly oriented velocities, we initialised stars in the satellite on perfectly circular orbits, ensuring that no net rotation was imparted to the satellite as a whole. To deal with the tendency of the velocities of the satellite stars to re-isotropise, a reorientation of the particles' velocities along the tangential direction was performed at every apocentre. Importantly, this reorientation did not alter the magnitude of the velocity vector, thus keeping the energies of the stars unchanged. This approach prevents stars on radial orbits from rapidly migrating towards larger radii, thereby restraining the overall evaporation of the satellite. This strategy enables us to discriminate between the processes driving the excess in satellite mass loss. If the dominant factor is satellite evaporation, this methodology allows us to reproduce the satellite mass evolution. Alternatively, if tidal heating is the primary driver, injecting energy into the satellite and causing the stars to acquire sufficient energy to escape the system, our simulation will still exhibit an excess in mass loss. The results are shown in Fig. <ref>. Each panel illustrates the satellite mass as a function of time for distinct orbital inclinations. The black dashed line represents the satellite mass obtained through the new N-body simulations, compared with the outcome of the original N-body simulation presented in Sec. <ref>, displayed as a black solid line. The coloured lines depict the predictions of our semi-analytical model for various values of α. In all systems, a substantial reduction in the mass loss rate is observed. Notably, the system harbouring a satellite orbiting within the galactic plane exhibits a satellite mass evolution now compatible with our semi-analytical model, particularly for α = 0.05. Conversely, in systems with orbits outside the galactic plane, although the reduction in satellite mass is more gradual compared to the original N-body runs, the stripped mass still exceeds that predicted by the semi-analytical models. This suggests that, at least within the galactic plane, the reorientation of star velocities is sufficient to reconcile the evolution with the semi-analytical model, indicating the dominance of satellite evaporation in shaping the mass evolution. Outside the galactic plane, however, tidal heating effects become significant, due to the stronger vertical gradient of the gravitational field in the proximity of the disc plane, and therefore they cannot be neglected. § DISCUSSION AND CONCLUSIONS In our analysis, we evolved a satellite galaxy within a fixed (or quasi-fixed, for the simulations with live primaries) host potential. However, galaxies experience morphological evolution throughout cosmic time due to secular evolution.
This evolutionary process may result from interactions between the galaxy and its environment, such as gas accretion or galaxy harassment, or it can be initiated by internal factors such as the presence of spiral arms or bars. By analysing cosmological simulations, <cit.> showed that the growth of galaxies and their dark matter halos on sub-Gyr scales can significantly impact the evolution of merging satellite galaxies, especially affecting the satellite orbit during the pairing phase and, consequently, its infall time. Such a result is indeed backed up by analytical arguments, such as those described in <cit.>. Interestingly, and contrary to what is commonly expected, <cit.> found that the satellite orbit is not always shrinking. Instead, some satellites exhibit an increase in the pericentre distance, often accompanied by a rise in the orbital specific angular momentum. This suggests that the growth of the host galaxy halo may promote satellite migration to larger orbits, thus exerting a strong influence on its evolution. In light of these considerations, we have started applying our model to galaxies undergoing significant evolution, thus relaxing the constraint of a static primary galaxy potential dictating the motion of the satellite and allowing both galaxies to evolve over time and pair together. The results of this analysis will be discussed in a forthcoming study. In our study, we focused on the stellar and DM components of the merging galaxies, while neglecting the presence of gas in the merging galaxies. Our choice is motivated by our primary goal, namely the characterisation of the tidal stripping of satellite galaxies in non-spherical hosts, rather than a full inclusion of the different galactic components. Nevertheless, concerning minor mergers, cosmological simulations show that these events can involve gas-rich satellites interacting with the gaseous component of their host <cit.>. In such cases, the effects of ram pressure and of non-axisymmetric torques on the gas component represent crucial mechanisms impacting the DF efficiency. On the one hand, by removing mass from the satellite galaxy <cit.>, ram pressure slows down the satellite's orbital evolution. On the other hand, <cit.> showed that gas inflows triggered by non-axisymmetric structures[Similar inflows can be triggered by the ram pressure torques as well <cit.>.] driven by the merger process stabilise the satellite nucleus against tidal disruption, leading to the successful completion of MBH pairing in unequal (1:10) galaxy mergers, while similar gas-free simulations resulted in the wandering of the smallest MBH at kpc scales. We plan to address these effects on the efficiency of DF and tidal stripping in gas-rich mergers in a future study. Finally, we performed our study considering a single satellite-to-host mass ratio of less than 1:100. Increasing the mass ratio would introduce significant distortions in the host potential, thus requiring dedicated studies and simulations which account for variations in the host potential and mass distribution. In this paper, we propose a new semi-analytical prescription for the tidal radius and the associated mass evolution of satellite galaxies in minor mergers. The novelty of the proposed approach primarily lies in the generalisation of the definition of the tidal radius to be suitable for any geometry and composition of the host galaxy, in contrast with traditional definitions <cit.>, which are provided for circular orbits under the assumption of a spherical host.
The prescription also accounts for a delay in the mass stripping and allows for eccentric orbits. We validated our prescription against N-body simulations. In order to isolate the effects of tidal forces, we first considered systems not affected by dynamical friction, in which a spherically symmetric satellite orbits within the analytical potential of an exponential-disc host. We explored the parameter space by considering different initial orbital velocities, orbital inclinations, and satellite scale radii. For each tested system, we select the semi-analytical evolution characterised by the α parameter that best reproduces the mass evolution of the satellite in N-body simulations. This parameter regulates the rapidity of mass loss in our semi-analytical model, with higher values related to faster mass loss. We found a mild dependence of the best α on the initial orbital velocity, while no significant dependence on the satellite scale radius or orbital inclination is observed. Lower values of α were associated with more eccentric orbits, reflecting the need for a larger delay in mass loss due to faster pericentre passages. Our model demonstrated excellent agreement with N-body simulations, accurately reproducing the satellite mass evolution, especially for systems with mildly and highly eccentric orbits. However, for systems with initial velocities close to v_c, the N-body simulations show a slight systematic excess of mass loss with respect to our model predictions. This mass loss excess observed in systems with satellites on low-eccentricity orbits is likely influenced by two primary processes: tidal heating and satellite evaporation induced by mass truncation. To delve into this discrepancy, we ran additional N-body simulations, in which at each apocentre a re-orientation of star velocities along the tangential direction was performed. In systems where the satellite orbits within the galactic plane, the reorientation of star velocities mitigates the excess mass loss, aligning the simulation results with the predictions of our semi-analytical model. This suggests that, within the galactic plane, together with tidal stripping, satellite evaporation plays a dominant role in shaping the mass evolution. Still, outside the galactic plane, the reduction in excess mass loss is milder, and tidal heating effects become significant. This indicates that, in these configurations, both tidal heating and satellite evaporation contribute to the observed discrepancies between N-body simulations and the semi-analytical model. Moreover, for orbits within the galactic plane, we compared the ability of our semi-analytical prescription for the satellite mass evolution and of the instantaneous mass loss predicted using King's formula to reproduce the results of N-body simulations. We found that our model better reproduces the mass evolution in the simulations. It is important to stress that outside the galactic plane (and, in general, in any non-central potential) King's tidal radius is not well defined. We then considered systems with both tidal stripping and dynamical friction effects. The semi-analytical model accurately reproduces both the orbital evolution and the mass loss of the satellite. These findings provide valuable insights into the complex interplay of tidal forces, dynamical friction, and the orbital parameters of satellite galaxies. Understanding these processes is crucial for accurately modelling the evolution of satellite galaxies within their host galactic environments.
We thank David Izquierdo-Villalba, Pedro Capelo, Lucio Mayer and Eugene Vasiliev for valuable discussions and suggestions. MB acknowledges support provided by MUR under grant “PNRR - Missione 4 Istruzione e Ricerca - Componente 2 Dalla Ricerca all'Impresa - Investimento 1.2 Finanziamento di progetti presentati da giovani ricercatori ID:SOE_0163” and by University of Milano-Bicocca under grant “2022-NAZ-0482/B”. LV acknowledges support from MIUR under the grant PRIN 2017-MB8AEZ. AL acknowledges support by the PRIN MUR "2022935STW". EB acknowledges the financial support provided under the European Union's H2020 ERC Consolidator Grant “Binary Massive Black Hole Astrophysics” (B Massive, Grant Agreement: 818691). EB acknowledges support from the European Union's Horizon Europe programme under the Marie Skłodowska-Curie grant agreement No 101105915 (TESIFA). § SYSTEMS WITH ORBITS OUTSIDE THE GALACTIC PLANE In this section, we present the results for the systems with the satellite galaxy orbiting outside the galactic plane. The columns represent different initial velocities of the satellite CoM, decreasing from left to right, while the rows illustrate varying orbital inclinations, increasing in angle from top to bottom.
http://arxiv.org/abs/2406.17953v1
20240625220035
Non-Hermitian excitations in nonlinear topological lattice
[ "Vlad Simonian", "Daria A. Smirnova", "Maxim A. Gorlach" ]
physics.optics
[ "physics.optics" ]
School of Physics and Engineering, ITMO University, Saint Petersburg 197101, Russia Research School of Physics, Australian National University, Canberra, ACT 2601, Australia m.gorlach@metalab.ifmo.ru School of Physics and Engineering, ITMO University, Saint Petersburg 197101, Russia § ABSTRACT Non-linear effects and non-Hermitian phenomena unveil additional intricate facets in topological matter physics. They can naturally intertwine to enable advanced functionalities in topoelectrical circuits and photonic structures. Here, we illustrate the subtle interplay between nonlinearity and non-Hermiticity by examining the characteristics of small wave perturbations on the background of the self-induced topological edge state in the nonlinear Su-Schrieffer-Heeger model. We demonstrate that their underlying physics is captured by the non-Hermitian effective Hamiltonian, which features nonreciprocal coupling terms and entails unconventional time-dependent field localization. Non-Hermitian excitations in nonlinear topological lattice Maxim A. Gorlach July 1, 2024 ========================================================== § INTRODUCTION Topological states have attracted much attention, offering extra resilience to disorder and imperfections that originates from the global properties of the system. Initially introduced in the condensed-matter context <cit.>, they were later generalized to many other wave phenomena, including mechanics, acoustics, electric circuits, and photonic systems <cit.>. One of the defining features of photonic systems is their open nature, which leads to non-Hermitian effective Hamiltonians and associated non-Hermitian topological physics <cit.>. The representative toy model frequently employed to test the effects of non-Hermiticity on topology is the celebrated Su-Schrieffer-Heeger (SSH) model <cit.> [Fig. <ref>(a)]. Addition of non-Hermiticity to this one-dimensional (1D) system significantly alters its physics. One such profound change is the so-called non-Hermitian skin effect, manifesting itself in the localization of all eigenstates at the edges of the system provided that open boundary conditions are imposed <cit.>. In particular, such behavior is observed when the couplings in the SSH model are made nonreciprocal, such that the coupling of the left site to the right one is not equal in magnitude to the coupling of the right site to the left one, as first introduced in the Hatano-Nelson model <cit.>. While quite artificial, the hybrid of the SSH and Hatano-Nelson models was actively explored theoretically, and analytical solutions for bulk and edge modes were obtained, exhibiting a number of distinctions from their Hermitian counterparts <cit.>. For instance, while the conventional SSH model features only a single zero-energy mode at a given edge, its non-Hermitian generalization supports two modes localized at a given edge at zero frequency. A parallel and seemingly independent line of research is presented by nonlinear topological photonics <cit.>, which aims to harness nonlinearity readily available in many photonic systems to tailor topological phenomena. A promising possibility is to reconfigure the topological modes by changing the intensity of excitation. Such reconfigurability has been thoroughly explored for the SSH model with intensity-dependent couplings, first theoretically <cit.> and later experimentally in the radiofrequency range by connecting LC resonators via nonlinear varactor diodes <cit.>.
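As a quick illustration of the nonreciprocal SSH chain invoked above, the following small numerical sketch (with arbitrary, assumed parameter values of our own) builds a Hamiltonian whose intracell hoppings differ in the two directions and checks that, under open boundary conditions, the eigenvector weight accumulates at one edge, i.e. the skin effect.

```python
import numpy as np

def nonreciprocal_ssh(n_cells, j1_right, j1_left, j2=1.0):
    """SSH chain with Hatano-Nelson-type asymmetric intracell hoppings."""
    n = 2 * n_cells
    h = np.zeros((n, n))
    for i in range(0, n - 1, 2):            # intracell bonds, asymmetric
        h[i, i + 1] = j1_right
        h[i + 1, i] = j1_left
    for i in range(1, n - 1, 2):            # intercell bonds, reciprocal
        h[i, i + 1] = h[i + 1, i] = j2
    return h

h = nonreciprocal_ssh(n_cells=20, j1_right=0.3, j1_left=0.9)
_, vecs = np.linalg.eig(h)
weight = np.sum(np.abs(vecs)**2, axis=1)    # summed eigenvector weight per site
print("edge weights:", weight[:2].sum(), weight[-2:].sum())  # strongly unequal: skin effect
```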
A profound prediction in this area is the self-induced topological state, whose existence and degree of localization are governed by the intensity of excitation. Despite these advances, the interplay of nonlinear and non-Hermitian phenomena remains barely understood, with only a few works charting this field <cit.>. At the same time, the fusion of the two concepts is already opening fruitful applications such as topological lasers <cit.>, making studies at the interface of these two areas a timely and significant problem. In this Article, we aim to bridge this gap and explore the spectrum of perturbations of the nonlinear self-induced topological states in the SSH model with intensity-dependent couplings. As we prove, the physics of those excitations is captured by the non-Hermitian effective Hamiltonian with nonreciprocal couplings, thus uncovering an unexpected connection between nonlinear and non-Hermitian topological physics. In addition, this result also suggests a straightforward experimental implementation of Hatano-Nelson-type nonreciprocal couplings by perturbing the nonlinear system. The rest of the Article is organized as follows. In Section II, we revisit the steady-state solutions of the nonlinear SSH model in the form of the self-induced nonlinear edge states, discussing their intensity dependence and methods for computing their profiles. In Section III, we derive a non-Hermitian effective Hamiltonian describing small perturbations on the background of the nonlinear edge state. We show that such perturbations do not destroy the nonlinear mode and exhibit an unusual behavior: their real and imaginary parts (as defined with respect to the background mode) oscillate differently. In Section IV, we analyze the localization of the eigenmodes of the system and discuss the non-Hermitian skin effect. We conclude with Section V, which discusses and summarizes the results. § SELF-INDUCED NONLINEAR EDGE STATES We consider a nonlinear SSH model, which is a 1D array of single-mode cavities with alternating nearest-neighbor couplings <cit.>, as depicted schematically in Fig. <ref>(a). In this model, the intracell coupling is linear, J_1^(2n-1,2n)=J_1=const, while the intercell coupling J_2^(2n,2n+1), highlighted in red, is nonlinear and depends on the intensity in the two respective resonators. In particular, this kind of nonlinearity is attainable in nonlinear topolectrical circuits by utilizing varactor diodes <cit.>. We describe the state of the system by the column-vector wave function |Ψ⟩=(Ψ_1,Ψ_2,…,Ψ_N)^T, where Ψ_n is the field amplitude at the n^th site. Then the evolution of |Ψ⟩ is captured by the Schrödinger equation i ∂|Ψ⟩/∂t = Ĥ(Ψ)|Ψ⟩, where the effective Hamiltonian reads Ĥ(Ψ) = [ ω_0, J_1, ⋱, 0, 0; J_1, ω_0, J_2^(2,3), ⋱, 0; ⋱, J_2^(3,2), ω_0, ⋱, ⋱; 0, ⋱, ⋱, ω_0, J_1; 0, 0, ⋱, J_1, ω_0 ]. Here, ω_0 is the eigenfrequency of an isolated resonator. From now on, we set ω_0 to 0, i.e. all frequencies are measured relative to the ω_0 level. The nonlinear coupling is defined by the expression J_2^(2n,2n+1)=J_2+α(|Ψ_2n|^2+|Ψ_2n+1|^2), where J_2>0 is the linear part of the intercell coupling and α quantifies the Kerr-type nonlinearity. In the case of topolectrical circuits, it is achieved by inserting a varactor diode between the two LC resonators. The wave function is normalized such that the value |Ψ_2n-1|^2+|Ψ_2n|^2 represents the energy stored in the n^th dimer. Also, following Ref. <cit.>, we define the intensity as I = max_n{√(|Ψ_2n-1|^2+|Ψ_2n|^2)}. The linear counterpart of the model Eqs.
(<ref>)-(<ref>) at α=0 exhibits a midgap topological edge state with frequency ω_0 provided that the first bond at the termination is weak, J_1<J_2. Although at α ≠ 0 the problem becomes nonlinear, we can still search for steady-state solutions in the form of edge-localized states. For this purpose, we adopt the harmonic ansatz for the time-dependent wave function: |Ψ(t)⟩=|Ψ⟩e^-iω t. This converts the Schrödinger equation (<ref>) to the nonlinear eigenvalue problem: Ĥ(Ψ)|Ψ⟩=ω|Ψ⟩. If the array is finite and consists of an even number of sites, the edge-localized modes may appear at both edges of the array. Due to the finite length of the array, the edge modes hybridize, forming symmetric and antisymmetric combinations, while their frequencies ω shift away from the zero value. However, the analysis is significantly simplified for an array with an odd number of sites. In such a case, the chiral symmetry of the model ensures that the edge mode residing at one (odd-site) sublattice has exactly zero frequency ω=0. This property allows us to simplify the problem further and recast the nonlinear eigenvalue problem into a system of algebraic equations, where the amplitudes at all even nodes are zero, Ψ_2n = 0, while the equations for the odd sites read: 0 = J_1 Ψ_2n-1 + J_2 Ψ_2n+1 + α |Ψ_2n+1|^2 Ψ_2n+1. Thus, the profile of the edge mode in an array with an odd number of sites can be readily computed. Below, we examine the situation in which the linear couplings satisfy the condition J_1 > J_2 and set α>0. Accordingly, in the low-intensity limit, the system [Fig. <ref>(a)] has no edge states at the left edge and supports a localized mode at the right edge. However, when the intensity in the array is increased above the critical value I_cr = √((J_1-J_2)/α), the localization of the edge mode changes from one edge to the other, signalling a self-induced topological transition <cit.>. The profile of the self-induced edge state, calculated for the effective parameters corresponding to those attainable in experiments with nonlinear topolectrical circuits <cit.>, is depicted in Fig. <ref>(b). In line with Ref. <cit.>, the self-induced topological mode features a non-decreasing tail such that |Ψ_n| → I_cr for large n. This tail seemingly creates a physical paradox, since such a mode in a semi-infinite array stores an infinite amount of energy. To clarify this, we simulate the excitation of the nonlinear edge mode via the left edge as described by the coupled-mode equations i∂/∂ t|Ψ⟩ = Ĥ(Ψ)|Ψ⟩ + ξ|S(t)⟩, where |S(t)⟩ = (S(t),0,…,0)^T is the pump profile localized at the first site. Examining various types of time-dependent excitation S(t) with a spectrum matching the frequency of the edge mode, we observe that the profile of the edge mode builds up gradually, as expected for tight-binding lattices. In the case of the simulated finite lattices, the excitation of the edge mode takes a finite time, which increases with the length of the array. Hence, in the limit of an infinite array, the excitation of the edge mode will take an infinitely long time, resolving the paradox mentioned above. Another interesting aspect of the system is the structure of the nonlinear edge mode at intensities slightly below the threshold value I_cr [Fig. <ref>(c)]. In this scenario, the intensity at the left edge is lower than the saturation value I_cr, and the deficit of intensity decreases exponentially in the bulk of the array.
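The construction of the intensity-dependent Hamiltonian and of the self-induced zero mode can be sketched in a few lines of Python; this is our own illustrative code with assumed parameter values, not the authors' implementation.

```python
import numpy as np

def nonlinear_ssh_hamiltonian(psi, j1, j2, alpha, omega0=0.0):
    """H(psi) with linear intracell couplings j1 and intensity-dependent
    intercell couplings j2 + alpha (|psi_{2n}|^2 + |psi_{2n+1}|^2)."""
    n = len(psi)
    h = np.diag(np.full(n, omega0, dtype=float))
    for i in range(n - 1):
        if i % 2 == 0:                                    # intracell bond
            j = j1
        else:                                             # intercell bond
            j = j2 + alpha * (abs(psi[i])**2 + abs(psi[i + 1])**2)
        h[i, i + 1] = h[i + 1, i] = j
    return h

def zero_mode_profile(psi1, n_sites, j1, j2, alpha):
    """Self-induced zero mode on a chain with an odd number of sites: even-site
    amplitudes vanish and consecutive odd-site amplitudes obey
    j1 psi_{2n-1} + (j2 + alpha psi_{2n+1}^2) psi_{2n+1} = 0 (real psi assumed)."""
    psi = np.zeros(n_sites)
    psi[0] = psi1
    for i in range(0, n_sites - 2, 2):
        roots = np.roots([alpha, 0.0, j2, j1 * psi[i]])   # alpha x^3 + j2 x + j1 psi = 0
        psi[i + 2] = roots[np.abs(roots.imag) < 1e-9].real[0]   # unique real root
    return psi

# illustrative parameters with J1 > J2 and alpha > 0; |psi| saturates at sqrt((J1-J2)/alpha)
profile = zero_mode_profile(psi1=2.0, n_sites=21, j1=1.0, j2=0.7, alpha=0.05)
```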
This type of localization does not occur in the linear SSH model and becomes possible due to the nonlinearity of the system. § SMALL PERTURBATIONS As a next step, we examine small perturbations on top of the self-induced nonlinear topological state |Ψ_0(t)⟩ following the standard linearization procedure. This method is instrumental in analyzing the modulational instability of both the bulk and edge nonlinear steady states; see, for example, Refs. <cit.> pertaining to nonlinear topological lattices. We represent the wave function in the form |Ψ(t)⟩ = |Ψ_0(t)⟩ + δ|ϕ(t)⟩, where δ is a small parameter quantifying the strength of the perturbation, and |ϕ(t)⟩ describes the spatio-temporal structure of the perturbation. In the same spirit, we expand the Hamiltonian Ĥ(Ψ_0+δϕ) = Ĥ(Ψ_0)+δM̂(Ψ_0, ϕ)+δ^2N̂(ϕ) and keep the terms up to the first order in δ. Combining this with Eq. (<ref>), we recover i∂/∂ t|ϕ⟩ = Ĥ_0|ϕ⟩ + Â|ϕ⟩ + e^-2iω t Â|ϕ^*⟩, where Ĥ_0 = Ĥ(Ψ_0) and the matrix Â has a block-diagonal form Â=diag(0,Ĉ_1,Ĉ_2,…,Ĉ_N-1), while the auxiliary matrices Ĉ_n are defined as Ĉ_n = α [ Ψ^0*_2nΨ^0_2n+1, Ψ^0*_2n+1Ψ^0_2n+1; Ψ^0*_2nΨ^0_2n, Ψ^0_2nΨ^0*_2n+1 ] = α [ 0, |Ψ^0_2n+1|^2; 0, 0 ], where the blocks Ĉ_n simplify because the edge state has Ψ^0_2n = 0. Since we focus on the perturbations on top of the self-induced topological mode with ω=0, the time-dependent factor e^-2iω t is equal to 1, resulting in the equation i∂/∂ t|ϕ⟩ = Ĥ_0|ϕ⟩ + 2Â Re{|ϕ⟩}. The matrices Ĉ_n are clearly non-symmetric and non-Hermitian. Physically, this means that the effective coupling terms in Eq. (<ref>) are nonreciprocal, a feature that makes the problem analogous to the non-Hermitian SSH array with nonreciprocal couplings <cit.>. Thus, the treatment of small perturbations in the nonlinear system shown in Fig. <ref>(a) involves two main ingredients: finding the nonlinear edge state depicted in Fig. <ref>(b) and solving the linear non-Hermitian problem for small perturbations illustrated in Fig. <ref>(c). To solve Eq. (<ref>), we separate the real and imaginary parts of the perturbation, |ϕ⟩=|χ⟩+i|θ⟩. Doubling the dimensionality of the problem, we obtain a system of linear differential equations ∂/∂ t [ |χ⟩; |θ⟩ ] = [ 0̂_N, Ĥ_0; -(Ĥ_0+2Â), 0̂_N ] [ |χ⟩; |θ⟩ ], which is straightforward to solve numerically and which yields 2N eigenvalues μ_n. Although the problem is manifestly non-Hermitian, all eigenvalues μ_n are purely imaginary, which means that the oscillations of |ϕ⟩ are non-decaying. We represent these eigenvalues in the form μ_n=iλ_n, where λ_n are real numbers which appear in pairs λ_-n=-λ_n. This demonstrates that the spectrum of perturbations is symmetric with respect to the zero frequency. In addition, the eigenvectors corresponding to the eigenvalue pair (-λ_n, λ_n) are complex conjugate: X_-n=X_n^*. The zero eigenvalue λ=0 is doubly degenerate. The respective eigenvectors are: |χ⟩=0, Ĥ_0|θ⟩=0, which is the standard zero-energy mode in the Hermitian case, and (Ĥ_0+2Â)|χ⟩=0, |θ⟩=0, which is the zero-energy mode of the non-Hermitian SSH model with nonreciprocal couplings. By definition, the vector X must be real at any moment of time. For λ_n≠0, such real-valued solutions are recovered by combining the eigenvectors corresponding to the conjugate eigenvalues: X̃_n = (1/2){X_n e^-iλ_n t + X_n^* e^iλ_n t}. In terms of Eq. (<ref>), there are 2N modes, including 2 with zero frequency and 2N-2 modes with λ≠0. As the latter are necessarily combined together via Eq. (<ref>), we recover N+1 eigenvectors |ϕ⟩.
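Numerically, the doubled real-valued system can be assembled and diagonalised directly, reusing the Hamiltonian constructor and the edge-state profile from the sketch above; again, this is illustrative code of ours rather than the authors' implementation.

```python
import numpy as np

def perturbation_spectrum(psi0, j1, j2, alpha):
    """Eigenvalues of the linearised dynamics for perturbations on top of the
    zero-frequency edge state psi0 (real amplitudes, zero on even sites)."""
    n = len(psi0)
    h0 = nonlinear_ssh_hamiltonian(psi0, j1, j2, alpha)   # defined in the sketch above
    a = np.zeros((n, n))                                  # block-diagonal non-Hermitian part
    for i in range(1, n - 1, 2):                          # 0-based index of site 2m (1-based)
        a[i, i + 1] = alpha * abs(psi0[i + 1])**2
    big = np.block([[np.zeros((n, n)), h0],
                    [-(h0 + 2.0 * a), np.zeros((n, n))]])
    mu = np.linalg.eigvals(big)          # found to be purely imaginary, mu = i*lambda
    return np.sort(mu.imag)

lams = perturbation_spectrum(profile, j1=1.0, j2=0.7, alpha=0.05)
```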
Thus, the key distinction of our system from the conventional SSH model is the emergence of two zero-frequency modes instead of one. The first of them [Eq. (<ref>)] oscillates with a π/2 phase shift with respect to the background edge state and carries no signatures of non-Hermitian physics, satisfying the usual equation Eq. (<ref>). In contrast, the second mode oscillates in phase with the background mode |Ψ_0⟩, and its profile is modified due to the Â term in Eq. (<ref>). In the low-intensity limit, the Â term is negligibly small and the distinction between the two modes disappears, yielding essentially the single zero-energy state known in the Hermitian context [Fig. <ref>(a,b)]. However, once the intensity is increased, the distinction between the two zero-energy solutions becomes apparent [Fig. <ref>(c,d)]. While the “Hermitian” mode satisfying Eq. (<ref>) shows exponential localization at the edge, its non-Hermitian counterpart Eq. (<ref>) features a non-decreasing tail resembling that of the nonlinear edge state |Ψ_0⟩. Yet another signature of non-Hermitian physics is provided by the modes with λ≠0. Due to their structure [Eq. (<ref>)], the real and imaginary parts of the perturbation |ϕ⟩ can oscillate with different amplitudes. As a result, the probability amplitude at a given site ϕ_n=√(χ_n^2+θ_n^2) oscillates in time, as illustrated in Fig. <ref>. § MODE LOCALIZATION An important feature of non-Hermitian systems is the non-Hermitian skin effect, which results in the exponential localization of all eigenmodes at the edge of the finite structure <cit.>. To probe such localization, we use the notion of the integrated density of states IDOS(n,t), which shows the probability of finding a particle at a given site n at a given moment of time t, summed over all eigenstates of the system. The energies of those eigenstates are depicted in Fig. <ref>(a). As the localization of the zero-energy modes has been explored above, we include their contribution to the IDOS expression with a smaller weight w=0.2: IDOS(n, t) = ∑_m, ω≠ 0 |Ψ_n^(m)(t)|^2 + w ∑_m, ω = 0 |Ψ_n^(m)(t)|^2. The summation is performed over all eigenmodes m = 1,..., N+1. We calculate the IDOS numerically for different intensity levels, showing the results in Fig. <ref>(b-d). If the intensity is close to zero, the system is similar to the classical SSH model. However, even a small non-Hermiticity results in a leftward shift of the IDOS, suggesting the onset of the non-Hermitian skin effect. As the intensity is increased further, reaching the threshold level I_cr, the leftward shift becomes pronounced, having a maximum at the second resonator [Fig. <ref>(c)]. Finally, when the intensity exceeds the critical level by an order of magnitude, the IDOS becomes strongly localized, featuring a maximum around the fourth site. While the probability distribution clearly shifts to the left, this behavior is not a pure non-Hermitian skin effect, as not all of the modes are exponentially localized at the left edge. This is rooted in the phase-dependent nature of non-Hermiticity in our system: the mode experiences non-Hermitian corrections depending on the phase shift between the background mode and the respective small perturbation. § DISCUSSION AND OUTLOOK In summary, we have analysed small perturbations on the background of the self-induced edge state in the nonlinear SSH model. Even though the original model is Hermitian, the behavior of the perturbations is captured by the non-Hermitian effective Hamiltonian, exhibiting several counter-intuitive features.
First, the couplings in the effective model are manifestly non-reciprocal, implying different tunneling amplitudes in opposite directions. This opens easy access to the experimental realization of SSH models with nonreciprocal couplings, which have become a focus of intense theoretical investigation <cit.> and have recently been realized experimentally in complicated systems involving active elements <cit.> or temporal modulation <cit.>. Second, even stationary solutions for the perturbation |ϕ⟩ exhibit oscillating probability amplitudes at the sites of the array, pointing towards persistent currents in the system that are absent in the Hermitian scenario. Finally, the non-Hermitian nature of the model results in an overall shift of the probability distribution towards the edge of the array, i.e., a modification of the well-celebrated non-Hermitian skin effect. We believe that these results delineate a route to access non-Hermitian phenomena via small perturbations of nonlinear steady waves and to probe the associated effects experimentally. § ACKNOWLEDGMENTS Theoretical models were supported by the Russian Science Foundation (Grant No. 23-72-10026). Numerical simulations were supported by the Priority 2030 Federal Academic Leadership Program. M.A.G. acknowledges partial support from the Foundation for the Advancement of Theoretical Physics and Mathematics “Basis”. D.A.S. acknowledges support from the Australian Research Council (FT230100058).
Federated Dynamical Low-Rank Training with Global Loss Convergence Guarantees
Steffen Schotthöfer, M. Paul Laiu
=============================================================================
MITP/23-058 Matching the Weak Mixing Angle at Low Energies Hubert Spiesberger, Stephan Wezorke PRISMA^+ Cluster of Excellence, Institute for Nuclear Physics and Institute of Physics, Johannes Gutenberg-University, 55099 Mainz, Germany =============================================================================================================================================================================================== § ABSTRACT In this work, we propose a federated dynamical low-rank training (FeDLRT) scheme to reduce client compute and communication costs - two significant performance bottlenecks in horizontal federated learning. Our method builds upon dynamical low-rank splitting schemes for manifold-constrained optimization to create a global low-rank basis of network weights, which enables client training on a small coefficient matrix. A consistent global low-rank basis allows us to incorporate a variance correction scheme and prove global loss descent and convergence to a stationary point. Dynamic augmentation and truncation of the low-rank bases automatically optimizes computing and communication resource utilization. We demonstrate the efficiency of FeDLRT in an array of computer vision benchmarks and show a reduction of client compute and communication costs by up to an order of magnitude with minimal impacts on global accuracy. [1] This manuscript has been authored by UT-Battelle, LLC under Contract No. DE-AC05-00OR22725 with the U.S. Department of Energy. The United States Government retains and the publisher, by accepting the article for publication, acknowledges that the United States Government retains a non-exclusive, paid-up, irrevocable, world-wide license to publish or reproduce the published form of this manuscript, or allow others to do so, for United States Government purposes. The Department of Energy will provide public access to these results of federally sponsored research in accordance with the DOE Public Access Plan(<http://energy.gov/downloads/doe-public-access-plan>). § INTRODUCTION Federated learning (FL) <cit.> builds a global model on a central server from data distributed on multiple devices, i.e., clients, by iteratively aggregating local models trained with the computation resource on the clients. In horizontal FL, where all clients share identical model architecture and data features, computation is often limited by (i) the communication bandwidth between clients and the server and (ii) the restricted compute and memory resources at each client. The former could be addressed by deploying various compression techniques, such as sparse randomized sketching <cit.>, subsampling <cit.>, and low-rank approximation <cit.>, or by allowing for partial <cit.> or asynchronous <cit.> communications. The latter could be addressed by sparse training <cit.>, low-rank training <cit.>, and transfer learning <cit.>. This work addresses both challenges simultaneously by leveraging dynamical low-rank approximation of the gradient flow with a Galerkin-type operator splitting. It yields a consistent orthogonal low-rank basis across all clients and updates the low-rank factorization without reconstructing the full-weight matrices on either clients or servers. 
The proposed method yields 1) Efficient communication by only sending and receiving low-rank factors; 2) Low client compute and memory footprint, where the client optimizes only a small coefficient matrix; 3) Automatic server-side compression during training, by augmenting and truncating the weight matrix rank based on the training dynamics. This scheme robustly identifies suitable low-rank manifolds to represent the weight matrices at minimal memory requirements; 4) Global loss convergence guarantees to a stationary point of the FL problem, since a globally consistent low-rank basis allows formulation of a variance correction <cit.> term to bound each client coefficient drift. We demonstrate a significant performance increase in federated scenarios with many clients compared to non-variance corrected methods. § BACKGROUND AND PROBLEM STATEMENT Federated optimization typically considers distributed setups and with limited communication and limited client compute and memory resources <cit.>. In this work, we consider a general federated optimization problem, i.e., min_wℒ(w) := 1/C∑_c=1^C ℒ_c(w), where w is a trainable weight, ℒ is the global loss function associated to a global dataset X, and ℒ_c is the local loss function of client c with local dataset X_c in a federated setup with C clients. For notational simplicity, we consider that X=∪_c=1^C X_c and each X_c is of the same size. Therefore, ℒ is an average of ℒ_c with uniform weights. The extension to handle a (non-uniform) weighted average case is straightforward. r0.5 < g r a p h i c s > Federated, heterogeneous least squares regression problem, see <Ref>, for C=4 clients, s_*=100 iterations, learning rate λ=1e-3 and C rank-1 local target functions. FL methods without variance correction plateau quickly, whereas FedLin and FeDLRT with variance correction converge to 1e-5. FeDLRT converges faster than FedLin and has lower communication costs. As the first baseline for federated optimization, we consider FedAvg <cit.>, see <Ref>. Here, each client optimizes its local loss function ℒ_c for s_* local iterations using gradient descent, w_c^s+1 = w_c^s - λ∇_wℒ(w_c^s), with learning rate λ, for s=0,…,s_*-1. The initial value for the local iteration is the last global weight, i.e., w_c^0=w^t. After local iterations, the weights are communicated to and aggregated at the server to update the global weight following w^t+1 = 1/C∑_c=1^C w_c^s_*. Client-drift effect is a common challenge in FL, where the iterative client updates (<ref>) of FedAvg converge to local minima and jeopardize global training performance since the average of the local minimizers may be far away from the global minimizer. These effects are particularly pronounced for a large number of local iterations s_*, or high discrepancies between local loss functions ℒ_c, as illustrated by <Ref>. Multiple methods <cit.> have been proposed to mitigate this issue. However, these methods often exhibit a speed-accuracy conflict, where learning rates need to be heavily reduced; thus, convergence is slow. Variance correction[Variance correction is commonly referred to as “variance reduction” <cit.>.] introduced in the FedLin method <cit.> constructs a variance correction term V_c =∇_wℒ_c(w^t) -1/C∑_c=1^C∇_wℒ_c(w^t) and modifies the client update iteration to w_c^s+1 = w_c^s - λ(∇_wℒ(w_c^s) -V_c) , s=0,…,s_*-1. 
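For reference, both baseline client updates can be written down in a few lines. The sketch below is our own illustration, not code from the paper: grad_local is a hypothetical list of per-client gradient oracles for ∇ℒ_c, and the corrected direction follows FedLin, i.e., the local gradient at the current iterate minus its stale value at the global iterate plus the stale aggregated global gradient, which is the update with V_c above.

import numpy as np

def fedavg_round(w, grad_local, lr, local_steps):
    # One FedAvg aggregation round: plain local gradient descent, then server averaging.
    updates = []
    for grad_c in grad_local:
        w_c = w.copy()
        for _ in range(local_steps):
            w_c = w_c - lr * grad_c(w_c)
        updates.append(w_c)
    return np.mean(updates, axis=0)

def fedlin_round(w, grad_local, lr, local_steps):
    # One variance-corrected (FedLin-style) round.
    stale = [grad_c(w) for grad_c in grad_local]   # client gradients at the global iterate w^t
    g_bar = np.mean(stale, axis=0)                 # aggregated global gradient at w^t
    updates = []
    for grad_c, stale_c in zip(grad_local, stale):
        w_c = w.copy()
        for _ in range(local_steps):
            # corrected direction: grad L_c(w_c) - grad L_c(w^t) + grad L(w^t)
            w_c = w_c - lr * (grad_c(w_c) - stale_c + g_bar)
        updates.append(w_c)
    return np.mean(updates, axis=0)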
This technique leads to global convergence to the minimizer of (<ref>) with constant learning rates <cit.> for convex ℒ and else to convergence to a stationary point, at the cost of an additional communication round for computing the variance correction. Federated neural network training considers problem (<ref>) with the trainable weight w being the set of weight matrices W_i_i^L of an L layer neural network. In each iteration, the weight updates in (<ref>) and (<ref>) are applied to all layers simultaneously. Therefore, w.l.o.g., we express the local loss function as ℒ_c(W), where W∈ℝ^n× n denotes the weight matrix of an arbitrary layer. Low-rank neural network training: An array of recent work has provided theoretical and experimental evidence that layer weights of over-parameterized networks tend to be low rank <cit.> and that removing small singular values may even lead to increased model performance while dramatically reducing model size <cit.> in non-federated scenarios. This beneficial feature has spawned a rich landscape of methods to compress neural networks to a low-rank factorization after training with subsequent fine-tuning <cit.>, train the factorized network with fixed rank <cit.>, dynamically adjust the rank during training <cit.>, or use low-rank adapters for fine-tuning foundation models <cit.>. Why is innovation upon existing low-rank methods needed? Since FedAvg <cit.>, several low-rank methods <cit.> have been proposed to increase communication and compute efficiency for FL. These low-rank methods can be categorized into: 1) methods that purely reduce communication cost by communicating only the low-rank factors obtained by performing a full-size SVD (or similar factorization methods) on the full weight matrix after client optimization <cit.> and 2) methods that reduce both communication and client compute costs by learning only low-rank factors on clients <cit.>. To the best of the authors' knowledge, there is no existing low-rank method that combines 1) efficient communication, 2) low client compute and memory footprint, 3) automatic server side compression during training, and 4) global loss convergence guarantees using variance correction in the sense of FedLin <cit.>, to achieve a globally consistent, robust, and efficient optimization scheme for FL. § FEDLRT: FEDERATED DYNAMICAL LOW-RANK TRAINING WITH VARIANCE CORRECTION In this section, we present the core contribution of this paper, federated dynamical low-rank training (FeDLRT), which features a low-rank client optimization step with optional variance correction and an efficient server aggregation process that dynamically determines the optimal weight matrix rank for automatic compression. FeDLRT builds on the dynamical low-rank approximation (DLRA) method, which was initially proposed for solving matrix equations <cit.> and recently extended to neural network training <cit.>. Let Ẇ(t)=-∇_Wℒ(W(t)) denote the gradient flow for minimizing ℒ. The DLRA method restricts the trajectory of W to ℳ_r, the manifold of n× n, rank-r matrices, by projecting Ẇ onto a local tangent plane of ℳ_r via an orthogonal projection. This guarantees a low-rank solution when following the projected dynamics from a low-rank initial guess. Let the low-rank matrix take the form W_r =USV^⊤∈ℳ_r with U,V∈ℝ^n× r the orthonormal bases of ℳ_r and S∈ℝ^r× r the coefficient matrix. 
The dynamics for each low-rank factor in DRLA are then derived in <cit.> as Ṡ(t) = -U^⊤(t)∇_Wℒ(U(t) S(t)V(t)^⊤)V(t), U̇(t) = -(I - P_U(t))∇_Wℒ(U(t)S(t)V(t)^⊤) V(t)S(t)^-1, V̇(t) = -(I - P_V(t))∇_Wℒ(U(t)S(t)V(t)^⊤) U(t)S(t)^-⊤, where P_U=UU^⊤ and P_V=VV^⊤ are the projections onto the column spaces of U and V, respectively. By using the basis update & Galerkin (BUG) scheme <cit.>, (<ref>) can be split into a basis update step for U and V and a coefficient update step for S. This splitting scheme allows for dynamic adjustment of the rank via a basis augmentation before the coefficient update step and a basis truncation after the coefficient update, as shown in <cit.>. In the context of FL, the BUG splitting scheme is particularly interesting since it allows for learning the low-rank bases and coefficients in separate steps. This gives rise to a globally shared basis for the local client iterations, reducing communication and client compute cost of the proposed FeDLRT scheme, see <Ref>: First, the factorization is broadcast to the clients (panel 1), and the basis gradients[and later on the coefficient gradients for variance correction] U,V are aggregated on the server (panel 2). Next, the basis is augmented on the server (panel 3) and broadcast. On the clients, only the augmented coefficient matrix S is updated repeatedly (panel 4) before aggregation to the server. After aggregation of the local augmented coefficient matrices, redundant basis directions are eliminated to optimize the accuracy-to-compression ratio of the model on the server. The strategy yields the following benefits compared to “full-rank” FL schemes as FedLin <cit.> and low-rank schemes with local compression: Low client compute cost: Server-based basis augmentation and compression enables an automatic compression without a-priori knowledge of the layer rank r and at no cost for the resource-constrained clients. The clients only evaluate gradients of low-rank factors and optimize the small matrix S∈ℝ^r× r. When the clients are equipped with GPUs, this further implies that all “GPU unfriendly” parts of the low-rank scheme, i.e., SVD and QR decomposition for augmenting and compressing the representation, are performed on the server. Efficient communication: Similar to FedLin, FeDLRT requires in practice two communication rounds – one for aggregating and distributing global gradients for basis augmentation and variance correction and one for aggregating locally updated coefficients. However, communication cost for each round is significantly reduced since only low-rank factors are communicated. We refer to <Ref> on communication and compute cost. l0.4 < g r a p h i c s > Communication of FeDLRT without variance correction. 1) Broadcast current global basis U,V (blue). 2) Aggregate basis gradients G_c,U, G_c,V (orange). 3) Broadcast global augmented basis U,V (green). 4) Aggregate individual client coefficient update ^s_*(purple). Existing federated low-rank schemes effectively generate individual and incompatible representations of ∈ℳ_r for each client. While the factors can still be efficiently communicated, averaging on the server requires a reconstruction of the full weigh matrix W^*=1/C∑_c=1^C U_cS_cV_c^⊤, since the local manifolds possibly diverge. Thus, the local rank information is lost and needs to be costly recovered by a full n× n SVD on the server; see <Ref> for details. 
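A small numerical check (our illustration, not from the paper) makes the difference concrete: averaging per-client factorizations built on unshared bases generically produces a matrix of rank min(n, C·r), which then has to be re-factorized with a full n×n SVD, whereas averaging coefficients expressed in a shared augmented basis keeps the aggregate inside a fixed rank-2r subspace.

import numpy as np

rng = np.random.default_rng(0)
n, r, C = 64, 4, 8

# Unshared bases: each client holds its own U_c S_c V_c^T.
W_avg = np.mean(
    [np.linalg.qr(rng.standard_normal((n, r)))[0]
     @ np.diag(rng.standard_normal(r))
     @ np.linalg.qr(rng.standard_normal((n, r)))[0].T
     for _ in range(C)],
    axis=0,
)
print("rank of averaged unshared factorizations:", np.linalg.matrix_rank(W_avg))   # typically C * r

# Shared augmented basis: clients differ only in their 2r x 2r coefficient blocks.
U = np.linalg.qr(rng.standard_normal((n, 2 * r)))[0]
V = np.linalg.qr(rng.standard_normal((n, 2 * r)))[0]
S_avg = np.mean([rng.standard_normal((2 * r, 2 * r)) for _ in range(C)], axis=0)
print("rank of shared-basis aggregate:", np.linalg.matrix_rank(U @ S_avg @ V.T))   # at most 2r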
Since the average of low-rank matrices is not necessarily of low rank, these schemes may lose crucial information on the manifold if client solutions drift too far apart from each other. FeDLRT, in contrast, provides the advantage of client-wide manifold consistency: Splitting the low-rank update and sharing bases amongst clients provides a globally consistent manifold basis. This furthermore allows for bounding the coefficient drift, see <Ref>, and enables a variance correction for the federated low-rank similar to the FedLin scheme. §.§ Description of <Ref> - FeDLRT In this section, we elaborate on the details in <Ref>. The orthonormal factors U^t,V^t and the coefficient matrix S^t are initialized with rank r and then broadcast to the clients. Note that FeDLRT ensures that, for all t>1, U^t and V^t are orthonormal, and S^t is diagonal and full rank. Basis augmentation of the bases U^t and V^t is performed using concatenation with the corresponding global basis gradients G_U = 1/C∑_c=1^C∇_Uℒ_c(U^tS^tV^t,⊤) and G_V = 1/C∑_c=1^C ∇_Vℒ_c(U^tS^tV^t,⊤), obtained by aggregating the local basis gradients. G_U and G_V encapsulate the gradient flow dynamics (<ref>) projected onto the original bases, thus yielding an intuitive choice for basis augmentation. Further, this choice is consistent with the basis update step of the augmented BUG splitting scheme, see <Ref>, which ensures the robustness of the client optimizer. Subsequent orthonormalization, e.g., by a QR decomposition, yields the augmented basis, i.e., [U^t |U]R = ([U^t | G_U])∈ℝ^n× 2r, and [V^t |V]R= ([V^t | G_V])∈ℝ^n× 2r. We denote the augmented bases by =[U^t |U] and =[V^t |V]. The orthonormalization is performed on the server, providing compute cost reduction for the client. Basis broadcasting of and only requires to broadcast the new bases U and V, since U^t and V^t are readily available on the clients. Formally, the coefficients S^t are projected onto the augmented basis, i.e., =^⊤ U^t S^t V^t,⊤∈ℝ^2r× 2r, before broadcasting them to the clients. Exploiting the orthonormality of the basis results in further reduction of the communication and compute cost: =^⊤ U^t S^t V^t,⊤ takes the form = [ S^t 0; 0 0 ]. See <Ref> for the proof. With <Ref>, only U and V have to be broadcast, and the augmented bases and coefficients , , and can be assembled on each client as needed. Furthermore, only S∈ℝ^r× r, instead of ∈ℝ^2r× 2r, needs to be communicated. Below, we discuss three options for the client coefficient update step. Client coefficient update without variance correction is implemented similarly to FedAvg (<ref>). On each client c, the augmented coefficient matrix _c is trained for s_* iterations [Our analysis focuses on the case where all clients share the same number of local iterations s_*. The analysis can be extended to the case where s_* is client dependent, following an approach similar to the one in <cit.>.] with learning rate λ, _c^s+1 = _c^s - λ∇_ℒ_c(_c^s^⊤) , s=0,…,s_*-1, with _c^s=0=. Client coefficient update with variance correction is required in certain federated scenarios, e.g., the case considered in <Ref>. Based on FedLin <cit.>, we introduce a correction step for the local coefficient update of FeDLRT. It extends the above local iteration by another communication round, where the gradient of the augmented coefficients G_,c= ∇_ℒ_c(^⊤) is computed, aggregated to G_ = 1/C∑_c=1^CG_,c and subsequently broadcast. 
This yields a correction term V_c = G_ - G_,c for each client c and thus the client iterations read _c^s+1 = _c^s -λ( ∇_ℒ_c(_c^s^⊤) +V_c ), s=0,…,s_*-1, with _c^s=0=. The correction term results in a bound on the coefficient drift and leads to convergence guarantees for FeDLRT, as detailed in <Ref>. Client coefficient update with simplified variance correction: Empirically, we observe that a simplified variance correction, which only considers the correction term of the non-augmented coefficients S^t, is sufficient, see <Ref>. The simplified variance correction term takes the form V_c = G_ - G_,c≈V̌_c := Ǧ_ - Ǧ_,c = [ ∇_S ℒ(U^tS^tV^t,⊤) - ∇_S ℒ_c(U^tS^tV^t,⊤) 0; 0 0 ], which makes lines 10 and 12 in <Ref> redundant, since Ǧ_ can be aggregated in one step with the basis gradients G_U,G_V in line 4 and broadcast with U,V in line 6, reducing the communication rounds to two - the same as FedLin. See <Ref> for details. Coefficient averaging is performed after (any of the above variants of) the client iterations. The server computes the updated global coefficients by averaging the local updates, i.e., ^* = 1/C∑_c=1^C _c^s_*. With the shared augmented bases and , this is equivalent to the FedAvg aggregation ^*= 1/C∑_c=1^C^s_* =1/C∑_c=1^C (_c^s_*^⊤) = (1/C∑_c=1^C _c^s_* )^⊤ =^*^⊤. Since the basis is fixed, the rank 2r is preserved in the aggregation, which is in contrast to other federated low-rank schemes where the aggregated weights could be full rank and, in turn, require a full matrix SVD to determine the new rank <cit.>. Automatic compression via rank truncation is necessary 1) to identify the optimal rank of the weight matrix and 2) to ensure that S is full rank[Full rank S is required to show consistency of the basis update step (<ref>) with the robust operator splitting of <cit.>, see <Ref>.]. To this end, a truncated SVD of ^*∈ℝ^2r× 2r is performed, i.e. P_r_1, Σ_r_1, Q_r_1^⊤ = (^*), where P_r_1,Q_r_1∈ℝ^2r× r_1 and Σ_r_1=diag(σ_1,…,σ_r_1) contains the r_1 largest singular values of ^*. The new rank r_1 can be chosen by a variety of criteria, e.g., a singular value threshold [σ_r_1,…,σ_2r]_2<ϑ. Once a suitable rank is determined, the factorization is updated by the projection of the bases U^t+1= P_r_1∈ℝ^n× r_1, V^t+1= Q_r_1∈ℝ^n× r_1 and update of the coefficient S^t+1=Σ_r_1. Remarkably, <Ref> is a federated low-rank learning scheme whose solution is close to a full-rank solution, see <Ref>. §.§ Analysis of FeDLRT with variance correction In this section, we analyze the FeDLRT algorithm under the general assumption that ℒ_c and ℒ are L-smooth with constant L. Theorems <ref> and <ref> give the convergence results for FeDLRT with full variance correction (<ref>) in <Ref>. <Ref> and <Ref> provide the convergence for FeDLRT with simplified variance correction in (<ref>), as detailed in <Ref>, under additional assumptions given therein. We note that the analysis does not require convexity of ℒ_c or ℒ. FeDLRT convergence with full variance correction. The variance-corrected client iteration (<ref>) leads to the following bound the client coefficient drift. Given augmented basis and coefficient matrices , , and . If the local learning rate 0<λ≤1/Ls_* with s_*≥ 1 the number of local steps, for all clients c, ^s-≤exp(1)s_* λ∇_ℒ(^⊤), s=1,…,s^*-1, where ^s is the variance corrected coefficient as given in (<ref>). The critical ingredient for the proof, provided in <Ref>, is the globally shared augmented bases. 
<Ref> bounds the drift of the low-rank representations of the local weight, which gives rise to the following global loss descent guarantee. Let U^t S^t V^t,⊤ and U^t+1 S^t+1 V^t+1,⊤ be the factorization before and after iteration t of <Ref> with variance correction and singular value truncation threshold ϑ. Let the local learning rate be 0<λ≤1/12 Ls_*, then the global loss descent is bounded by ℒ(U^t+1 S^t+1 V^t+1,⊤) - ℒ(U^t S^t V^t,⊤) ≤ - s_*λ(1- 12 s_*λ L) ∇_ℒ(^⊤)^2 + Lϑ. The proof is provided in <Ref>. <Ref> paves the way for the following result on convergence to a global stationary point. Algorithm  <ref> guarantees that, for learning rate λ≤1/12 Ls_* and final iteration T, min_t=1,…,T∇_ℒ(U^t S^t V^t,⊤)^2≤48 L/T(ℒ(U^1S^1V^1,⊤)-ℒ(U^T+1S^T+1V^T+1,⊤)) + 48 L^2ϑ. The proof is given in <Ref>. In particular, this theorem implies convergence of <Ref> for T→∞ up to a ϑ-distance to a global stationary point. This is consistent with the numerical results in <Ref>, where FedLin converges to the global minimizer (the only stationary point) while FeDLRT with variance correction stops at a point with slightly higher loss value due to a nonzero ϑ. In the case that the FL problem has a low-rank solution, the truncation error bounded by ϑ vanishes, and convergence to a stationary point is guaranteed, see, e.g., <Ref>. l0.25 < g r a p h i c s > Scaling of communication cost (top) compute cost at a single client (middle), and client memory footprint (bottom) for s_*=1 client iteration and a single data-point for W∈ℝ^n× n with n=512. The costs drop by orders of magnitude after the amortization point of r≈ 200, which is 40% of full rank. The numerical evaluations in <Ref> show that, in practice, the matrix ranks are typically below the amortization threshold. FeDLRT convergence with simplified variance correction. FeDLRT with simplified variance correction is detailed in <Ref> with the variance correction term given in (<ref>), which makes variance correction more communication and computation efficient but comes at a cost of the following additional assumption for convergence analysis. There exists δ≪ 1 such that, at each client coefficient update, ∇_𝒢(^s^⊤) -∇_S𝒢(^s^⊤) < δ∇_ℒ(^⊤), for functions 𝒢 = ℒ and 𝒢 = ℒ_c, c=1,…,C. This assumption can be interpreted as that most of dynamics in the gradient flow are captured in the coefficient update for the original rank-r matrix S, and the basis augmentation provides little information. This scenario occurs when FeDLRT identifies the optimal rank, which could happen early for simpler problems as shown in <Ref>, or when FeDLRT approaches a stationary point. Under <Ref>, if the local learning rate 0<λ≤1/12 Ls_*, then <Ref> leads to the global loss descent ℒ(U^t+1 S^t+1 V^t+1,⊤) - ℒ(U^t S^t V^t,⊤)≤ -𝖢∇_ℒ()^2 + Lϑ with 𝖢=s_*λ(1-δ^2 - 12s_*λ L + δ^2 s_*λ). The proof is provided in <Ref>. When δ is small, this bound is slightly weaker than the one in <Ref>, which leads to the following corollary. Assume that <Ref> holds. <Ref> guarantees that, for the local learning rate 0<λ≤1/s_*(12 L +δ^2), min_t=1,…,T∇_ℒ(U^t S^t V^t,⊤)^2≤ 96 L/T(ℒ(U^1S^1V^1,⊤)- ℒ(U^T+1S^T+1V^T+1,⊤)) + 96 L^2ϑ. The proof is analogous to the one for <Ref>, see <Ref>. §.§ Compute and communication cost The proposed FeDLRT methods significantly reduce server and client memory footprint, the required communication bandwidth, as well as the client compute cost compared to various baselines, see <Ref>. 
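Before detailing these costs, a rough bookkeeping helper illustrates the scaling. The counting below is our own assumption (one full-matrix gradient aggregation plus one weight aggregation per round for the full-rank baseline, versus basis-gradient aggregation, basis broadcast, and coefficient exchange for FeDLRT) and is not meant to reproduce the exact entries of the paper's cost table.

def full_rank_comm(n):
    # Assumed counting: one n x n gradient aggregation + one n x n weight aggregation per round.
    return 2 * n * n

def fedlrt_comm(n, r):
    # Assumed counting per round: aggregate basis gradients G_U, G_V (2nr up),
    # broadcast the new basis directions (2nr down), exchange the 2r x 2r coefficients.
    return 2 * n * r + 2 * n * r + 2 * (2 * r) ** 2

n = 512
for r in (16, 64, 128, 256):
    print(f"r={r:4d}  low-rank/full communication ratio: {fedlrt_comm(n, r) / full_rank_comm(n):.2f}")

Under this counting the low-rank communication stays well below the full-rank cost for ranks up to roughly 30-40% of n, in line with the amortization point indicated in the figure; the precise cross-over depends on the exact accounting used in the paper.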
On the clients, FeDLRT provides significant memory and compute efficiency since the optimizer only requires the coefficient gradients for the local iterations. To the best of our knowledge, FeDLRT is the only low-rank method with adaptive compression incorporating variance correction, whose server compute cost scales linearly with the layer dimension since the SVD for rank truncation only needs to be computed on the augmented coefficient matrix of size 2r× 2r. We remark that the complete federated learning process is performed on the low-rank factors, and the full matrix is never required, as, e.g., in <cit.>. FeDLRT significantly reduces the communication cost between server and clients of a compressed layer and the compute cost on the client and server side, compared to FedLin, see <Ref>, see <Ref>. The client cost of the simplified variance corrected FeDLRT, see <Ref> is slightly reduced compared to the full variance corrected since only non-augmented coefficient gradients have to be computed and communicated. However, the asymptotic behavior is the same. Its main advantage is the reduced number of communication rounds. § NUMERICAL EVALUATION §.§ Distributed linear least squares regression Homogeneous test. We first consider a (convex) FL problem (<ref>) for linear least squares regression with local loss ℒ_c(W)= 1/2X_c∑_(x,y)∈ X_cp(x)^⊤ W p(y) - f(x,y)_2^2, where W∈ℝ^n× n and p:[-1,1]→ℝ^n is the Legendre polynomial basis of degree n-1. The target function f is manufactured as f(x,y)= p(x)^⊤ W_r p(y), where rank(W_r)=r. We consider problems with n=20, r=4, and randomly generated W_r, with 10,000 data points uniformly sampled on [-1,1]^2 and uniformly distributed among clients. We compare FeDLRT with variance correction and FedLin with s_*=20 local iterations and λ=1e-3 learning rate on C=1,2,4,8,16,32 clients. This setting satisfies the step-size restriction given in <Ref>. In FeDLRT, the singular value truncation threshold ϑ=τ||^*|| with τ=0.1 was used. Figure <ref> reports the dynamically updated ranks, errors, and loss values with respect to the aggregation rounds. The reported data are the medians over 20 randomly generated initial weights [We chose to display the median trajectory to point out its convergence and monotonicity. The test case also converges in the mean.] The results indicate that FeDLRT is able to identify the correct rank within a few aggregation rounds and, furthermore, never underestimates it – which would have increased the loss value significantly. FeDLRT converges to the minimizer W^*=W_r up to a 1e-5 error and converges faster with more clients. On this problem, FeDLRT shows up to 10x faster convergence than FedLin. We attribute this behavior to the fact that, by identifying a suitable low-rank manifold early in the training, FeDLRT significantly reduces the degrees of freedom in the FL problem. Heterogeneous test. Inspired by <cit.>, we consider a variation of the linear least squares regression with ℒ_c(W) = 1/2X∑_(x,y)∈ Xp(x)^⊤ W p(y) - f_c(x,y)^2, where the target function f_c is different for each client, and the 10,000 training data points are available to all clients. We choose problem size n=10 with C=4 clients and use learning rate λ=1e-3 with s_*=100 local epochs. As seen in <Ref>, FeDLRT with variance correction converges (to single precision accuracy) to the minimizer W^* of (<ref>) much faster than FedLin, whereas FeDLRT without correction quickly plateaus, similar to FedAvg. 
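The synthetic setup above is easy to reconstruct. The sketch below is our own (using the standard unnormalized Legendre polynomials from numpy.polynomial); it generates the rank-r target, the uniformly sampled data, and a per-client gradient helper whose name and interface are hypothetical.

import numpy as np
from numpy.polynomial.legendre import legvander

rng = np.random.default_rng(0)
n, r, n_samples, C = 20, 4, 10_000, 4

# Rank-r target coefficient matrix W_r.
W_r = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))

# Uniformly sampled data on [-1, 1]^2 and Legendre features p(x), p(y) of degree n-1.
x, y = rng.uniform(-1, 1, n_samples), rng.uniform(-1, 1, n_samples)
Px, Py = legvander(x, n - 1), legvander(y, n - 1)       # shape (n_samples, n)
f = np.einsum("ij,jk,ik->i", Px, W_r, Py)               # f(x, y) = p(x)^T W_r p(y)

# Homogeneous test: shuffle and partition the samples evenly across C clients.
client_idx = np.array_split(rng.permutation(n_samples), C)

def local_loss_grad(W, c):
    # Gradient of the local least-squares loss L_c at W (hypothetical helper).
    i = client_idx[c]
    resid = np.einsum("ij,jk,ik->i", Px[i], W, Py[i]) - f[i]
    return Px[i].T @ (resid[:, None] * Py[i]) / len(i)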
§.§ ResNet18 on CIFAR10 We demonstrate the performance of FeDLRT for training the exemplary ResNet18 model on CIFAR10, where we apply FeDLRT to train its fully connected head. The truncation tolerance is set to ϑ=τ||^*|| with τ=0.01. The test case setup is summarized in <Ref>. The training data is equally partitioned across clients; see <Ref> for the data-preprocessing details. A local iteration of <Ref> at client c describes one mini-batch update on the client training data set X_c for a given batch size, s_* is the maximum number of local iterations, and T denotes the number of aggregation rounds. We display the statistics for 10 random initializations; each warm-started with 5 iterations with one client. We set s_*=240/C so that in each training run, the global network iterates through the same amount of data. This setup favors low client counts, and, as expected, the validation accuracy drops as C grows for FedAvg and FeDLRT without variance correction, see <Ref> (upper row). We note that FeDLRT ties or outperforms FedAvg in terms of accuracy. Using full variance correction (second row) increases the validation accuracy of FeDLRT by up to 12% in this test case, matching the accuracy of FedLin and enabling FL with 93% accuracy for 32 clients. For C=8 clients, the communication cost saving of the compressed layers is up to 90%. The computationally more efficient simplified variance correction, using <Ref>, (third row), yields similar validation accuracy, notably at higher compression ratio and communication cost reduction. Similar results are obtained for AlexNet, VGG16 on CIFAR10, and ViT on CIFAR100 , see <Ref>, where we observe that FeDLRT closely matches the full-rank accuracy of FedLin. In conclusion, we have presented with FeDLRT an efficient low-rank FL scheme with convergence guarantees and automatic server side compression, and demonstrated its capabilities in several test cases. We remark that the underlying assumption for this work is that the target model can be expressed sufficiently well via a low-rank representation. While this work is baseline algorithmic research, we believe that it leads to positive societal impacts by providing an energy efficient FL algorithm. abbrv § ADDITIONAL ALGORITHMS In the following, we list a set of algorithms that are used in the paper as a contribution or as a baseline method. In particular, <Ref> contains auxiliary function definitions for <Ref> and <Ref>. <Ref> is the standard FedAvg method as presented in <cit.>. <Ref> is the FedLin Algorithm <cit.>, i.e. the extension of <Ref> with variance correction. <Ref> represents the FeDLRT method with simplified variance correction, as analyzed in <Ref> and <Ref> with the additional <Ref>. § ADDITIONAL NUMERICAL EVALUATION §.§ Compute resources The convex test cases are computed on a single Nvidia GTX1080ti GPU. The computer vision benchmarks use a set of Nvidia Tesla V100-SXM2-16GB and Tesla P100-PCIE-16GB. For prototyping, a Nvidia RTX 4090 is used. §.§ Data augmentation We use standard data augmentation techniques for the proposed test cases. That is, for CIFAR10, we augment the training data set by a random horizontal flip of the image, followed by a normalization using mean [0.4914, 0.4822, 0.4465] and std. dev. [0.2470, 0.2435, 0.2616]. The test data set is only normalized. The same augmentation is performed for CIFAR100, where with mean [0.5071, 0.4867, 0.4408] and std. dev. [0.2673, 0.2564, 0.2762]. 
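The augmentation described above corresponds to a standard torchvision pipeline; the following is our reconstruction with the quoted statistics, not the authors' code.

from torchvision import transforms, datasets

# CIFAR10 statistics quoted above; for CIFAR100 use mean (0.5071, 0.4867, 0.4408)
# and std. dev. (0.2673, 0.2564, 0.2762).
mean, std = (0.4914, 0.4822, 0.4465), (0.2470, 0.2435, 0.2616)

train_transform = transforms.Compose([
    transforms.RandomHorizontalFlip(),   # random horizontal flip on training data only
    transforms.ToTensor(),
    transforms.Normalize(mean, std),
])
test_transform = transforms.Compose([    # test data is only normalized
    transforms.ToTensor(),
    transforms.Normalize(mean, std),
])

train_set = datasets.CIFAR10("./data", train=True, download=True, transform=train_transform)
test_set = datasets.CIFAR10("./data", train=False, download=True, transform=test_transform)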
§.§ Additional computer vision results AlexNet on CIFAR10: We train AlexNet on CIFAR10, where the fully connected head of the network is replaced by a low-rank counterpart. A federated neural network setup with C clients trains on CTs_* random batches of the dataset, that is the number of seen training data batches scales with the client count. <Ref> displays the validation accuracy of FeDLRT with variance correction compared to FedLin, where one can see that the performance of FeDLRT mirrors the performance of FedLin with more degrees of freedom. The measured validation accuracy peaks at C=4 clients in both cases, where the higher number of seen training data-points offsets the negative effects of more clients on the validation performance. All reported runs are within close distance of the non-federated, full-rank baseline accuracy of 85.6%. Communication cost savings of the fully connected layers amount between 96% and 97% [For clarity of exposition we consider only the fully connected layers. Taking into account the non low-rank convolution layers, the communication cost savings reduces to 87.5% to 87.3%.] We observe, similarly to the results in <Ref>, that the maximum achieved communication cost savings, which depend on the layer ranks scales with the number of clients C=4, indicating that the decay rate of the singular values of the averaged coefficient matrix ^* depends on C. Vision Transformer on CIFAR100: We consider a small vision transformer for CIFAR100, with 6 attention layers with 2 heads each followed by a ResNet block and a drop-out layer, all with weight matrices of dimension 512× 512. The tokenizer takes patches of size 8 with embedding dimension 512. Training hyperparameters are given in <Ref>. Remark that we do not aim for SOTA performance, since transformer architectures are notoriously difficult to compress with low-rank approaches, but rather compare the performance of FedLin to FeDLRT for a given compute budget. We use s_*=240/C local iterations for C clients. Observe in <Ref> that FeDLRT achieves similar performance as ViT with over 55% communication cost savings on average. § NOTATION OVERVIEW FOR THE NUMERICAL ANALYSIS We establish a set of notations to simplify the notation in the proofs * ℒ_c(W) denotes the local loss function based on dataset X_c at client c. * ℒ(W) = 1/C∑_c=1^C ℒ_c(W) is the global loss function. * F_c(W)=-∇_Wℒ_c(W) is the negate of local loss gradient. * F(W)= 1/C∑_c=1^C F_c(W) is the negate of global loss gradient. * ℳ_r = W∈ℝ^n × n: rank(W)=r is a manifold of rank r matrices. * W_r =USV^⊤∈ℳ_r is a rank-r approximation of a matrix W. * 𝒯_W_rℳ_r is the tangent space of ℳ_r at W_r. * P(W_r) is the orthogonal projection onto 𝒯_W_rℳ_r. * P_U=UU^⊤ is the orthogonal projection onto the range of orthonormal U∈ℝ^n× r. * P_V=VV^⊤ is the orthogonal projection onto the range of orthonormal V∈ℝ^n× r. * When applied to vectors, · denotes the Euclidean norm (ℓ_2-norm). When applied to matrices, · denotes the Frobenius norm. § EFFICIENT BASIS GRADIENT DYNAMICS FOR BASIS AUGMENTATION We first consider the basis update & Galerkin splitting scheme of  (<ref>). The splitting performs a reparametrization of the form K(t)=U(t)S(t) and L(t)=V(t)S(t)^⊤. The basis update then reads K̇ = -∇_Kℒ(K(t)V_0^⊤)∈ℝ^n× r, K(0) = U_0S_0, L̇ = -∇_Lℒ(U_0L(t)^⊤)∈ℝ^n× r, L(0) = V_0S_0^⊤. Given the solution K(t_1) and L(t_1) at time t_1, the bases U_0 and V_0 are augmented by the orthonormalization of the new directions K(t_1) and L(t_1), i.e. 
R = ([U_0 | K(t_1)])∈ℝ^n× 2r, and R = ([V_0 | L(t_1)])∈ℝ^n× 2r, where R is the right factor of the respective QR decomposition and can be discarded. The initial condition of the coefficient update is S(t_0) projected onto the new bases, i.e., = -∇_Sℒ((t)^⊤), (0) = ^⊤ U_0 (0) V_0^⊤. After the integration of the coefficient dynamics above, the redundant basis functions are typically truncated via an SVD of S ensuring that S is always full rank. In its continuous form above, the splitting yields a robust integrator for the projected gradient flow, without manifold dependent step-size restrictions: (<cit.>) Assume ℒ is L-smooth with constant L, and locally bounded by B. Let (t) be the low-rank continuous time solution of (<ref>) and (<ref>) and let W(t) be the full rank solution at t=0. Assume the K,L, and S equations are integrated exactly from time t=0 to Δ t. Assume that for any Y∈ℳ_r sufficiently close to (t) the gradient F(Y) is ϵ close to ℳ_r. Then W(Δ t)-(Δ t)≤ d_1ϵ + d_2 Δ t + d_3ϑ/Δ t, where d_1,d_2,d_3 depend only on L and B. The theorem guarantees, that the low-rank representation does not imply any step-size restrictions on the optimization scheme. This is in stark contrast to a naive alternating descent optimization of the low-rank factors U,S,V. To build an discretized numerical optimizer in a resource constrained federated scenario from the above continuous splitting equations, we avoid the reparametrization, which implies a 200% memory cost increase on the client side, since three versions of the low-rank layer need to be tracked. Let USV∈ℳ_r be a low rank factorization that follows the projected gradient (<ref>) flow using the splitting scheme (<ref>) with K=US and V=VS^⊤. Further, assume that equations for the K and L factors are solved by an explicit Euler time integration with learning rate λ, i.e. K(t_1) = K(0) -λ∇_Kℒ(K(0)V_0^⊤), K(0) = U_0S_0, L(t_1) = L(0) -λ∇_Lℒ(U_0L(0)^⊤), L(0) = V_0S_0^⊤. Then, the basis augmentation (<ref>) can be expressed as R = ([U_0 | - ∇_Uℒ(U_0S_0V_0^⊤)])∈ℝ^n× 2r, and R = ([V_0 | -∇_Vℒ(U_0S_0V_0^⊤)])∈ℝ^n× 2r. and maintains the structure of the basis update and Galerkin operator split. We consider the proof for the K equation and the U basis; the proof for L and V follows analogously. Considering  (<ref>), we obtain with the explicit Euler discretization  (<ref>), span([U_0 | K(t_1)]) = span([U_0 | U_0 -λ∇_Kℒ(K(0)V_0^⊤)]) = span([U_0 | -λ∇_Kℒ(K(0)V_0^⊤)]) = span([U_0 | -∇_Kℒ(K(0)V_0^⊤)]). Next, consider the continuous time dynamics of K̇, where we omit explicit time dependence on U,S,V and K for the sake of brevity, i.e., K̇ = (̇U̇Ṡ)̇ = U̇ S + U Ṡ  (<ref>)= -(I-U U^⊤)∇_Wℒ(USV^⊤)V S^-1S - U U^⊤∇_Wℒ(USV^⊤) V = - (I -P_U)∇_Wℒ(USV^⊤)V - P_U∇_Wℒ(USV^⊤) V = (P_U- I)∇_Wℒ(USV^⊤)V - P_U∇_Wℒ(USV^⊤) V =-∇_Wℒ(USV^⊤)V Further, using the chain rule, we observe ∇_Uℒ(USV^⊤) =∇_Wℒ(USV^⊤)∇_U(USV^⊤)=∇_Wℒ(USV^⊤)VS^⊤ Thus, -∇_Uℒ(USV^⊤)S^-⊤ = -∇_Wℒ(USV^⊤)V= K̇. Full rankness of S and  (<ref>) yield that span(-∇_Uℒ(USV^⊤)) = span(K̇). Together with  (<ref>) this yields the proof. <Ref> adopts a more general result for Tucker tensors in an unpublished manuscript and simplifies the analysis for the matrix case considered here. § EFFICIENT BASIS AND COEFFICIENT COMMUNICATION Note that we have by orthogonality of the bases =[U,U] with U∈ℝ^n× r and U^⊤ U =0 and =[V,V] with V∈ℝ^n× r and V^⊤ V =0. (<Ref>) The basis augmented basis [U,G_U] before orthonormalization already contains the orthonormal vectors given by the columns of U. 
A QR decomposition therefor only rearranges the columns of G_U such that =[U,U] with U∈ℝ^n× r and U^⊤ U =0. The analogous result holds for =[V,V]. The projection onto the augmented basis therefore reads ^⊤ U = [ U^⊤ U; U^⊤ U ] = [ I; 0 ] and ^⊤ V = [ V^⊤ V; V^⊤ V ] = [ I; 0 ]. Consequently, the augmented coefficient matrix takes the form = ^⊤ U S V^⊤ = [ S 0; 0 0 ]. § ANALYSIS FOR FEDLRT WITH FULL VARIANCE CORRECTION In this section we establish bounds on the coefficient drift of the FeDLRT method with full variance correction. We use the established coefficient drift bound to derive a loss-descend guarantee. The strategy of our analysis follows the one of FedLin <cit.>. We first state an auxiliary lemma. Let U∈ℝ^n× r and V∈ℝ^n× r be orthonormal matrices. Let F be an L-continuous function. Then, for S_1,S_2∈ℝ^r× r, P_U(F(US_1V^⊤)-F(US_2V^⊤)) P_V≤ LS_1 -S_2 and U(F(US_1V^⊤)-F(US_2V^⊤)) V^⊤≤ LS_1 -S_2, where P_U and P_V are orthogonal projections defined in <Ref>. For the first statement, consider P_U(F(US_1V^⊤)-F(US_2V^⊤)) P_V = UU^⊤(F(US_1V^⊤)-F(US_2V^⊤)) VV^⊤ (I)≤ UU^⊤F(US_1V^⊤)-F(US_2V^⊤)VV^⊤ (II)= F(US_1V^⊤)-F(US_2V^⊤) (III)≤ L US_1V^⊤-US_2V^⊤ =L U(S_1-S_2)V^⊤ (I)≤ L US_1-S_2V^⊤ (II)= L S_1-S_2, where we have used in (I) the operator norm inequality of the Frobenius norm, in (II) orthonormality of U, V, and in (III) L-continuity of F. The second statement is proven analogously. §.§ Coefficient drift bound for FeDLRT with full variance correction We consider the FeDLRT method with variance correction, see <Ref>. Key difference to the FeDLRT method without variance correction is the modified coefficient update, incorporating global gradient information of the augmented coefficient matrix and local, stale gradient information of the augmented coefficient matrix _c. The variance corrected local coefficient update (<ref>) can be expressed in terms of the projected Riemannian gradient as ^s+1 = ^s + λ^⊤( F_c(^s) - F_c() + F()), where ^⊤ F_c(^s) = ∇_ℒ_c(^s), ^⊤ F_c() = ∇_ℒ_c(^s=0) and ^⊤ F_c(^s) = ∇_ℒ(^s). Recall that = for s=0. We provide proof for Theorem <ref> to bound the drift term ^s-. We restate this theorem to the Riemannian notation and restate it below. (Restatement of Theorem <ref>) Given augmented basis and coefficient matrices , , and , and =^⊤. If the local learning rate 0<λ≤1/Ls_* with s_*≥ 1 the number of local steps, for all clients c, ^s-≤exp(1)s_* λ^⊤ F(), s=1,…,s^*-1, where ^s is the variance corrected coefficient as given in (<ref>). From the adjusted coefficient update in (<ref>), we get ^s+1 - =^s - + λ^⊤( F_c(^s) - F_c() + F()) ≤^s - + λ^⊤( F_c(^s) - F_c()) + λ^⊤ F() (I)≤^s - + λ L ^s - + λ^⊤ F() ≤ (1 + λ L) ^s - + λ^⊤ F() ≤(1 + 1/s_*) ^s - + λ^⊤ F(). We use in (I) Lemma <ref> Recursively plugging in the above inequality yields for a=(1 + 1/s_*) ^s+1 - ≤ a^s+1^s=0 - + (∑_j=0^s a^j) λ^⊤ F() = (∑_j=0^s a^j) λ^⊤ F() = a^s+1 -1 /a -1λ^⊤ F() ≤(1+1/s_*)^s+1 s_* λ^⊤ F() ≤(1+1/s_*)^ s_* s_* λ^⊤ F() ≤exp(1) s_* λ^⊤ F(). §.§ Global loss descent for FeDLRT with full variance correction We first state a few auxiliary lemmas, which provide common inequalities that will be used in the following analysis. (<cit.>) For any two matrices Y_1,Y_2∈ℝ^n× n and an L-smooth ℒ with constant L it holds ℒ(Y_1)-ℒ(Y_2)≤ -Y_1-Y_2,F(Y_2) + L/2Y_1-Y_2^2, where F(Y)=-∇_Yℒ(Y). (<cit.>) For two vectors x_1,x_2∈ℝ^d it holds for γ>0 x_1+x_2^2≤(1+γ)x_1^2 + (1+1/γ)x_2^2. (<cit.>) For C vectors x_1,…,x_C∈ℝ^d the application of Jensen's inequality yields ∑_c=1^C x_c^2≤ C ∑_c=1^Cx_c^2. 
First, we consider the loss function value at the augmentation step. We have ℒ()=ℒ(W_r^t) for the loss before and after basis augmentation. Due to <Ref>, = [ S^t 0; 0 0 ], thus =^⊤ = USV^⊤ = W^t. We next bound the loss descent between the augmentation step and the truncation step - having performed the aggregation of the client updates. Let =^⊤ be the augmented factorization at global iteration t and let ^*=^* ^⊤ be the aggregated solution after client iterations, i.e., ^* = 1/C∑_c=1^C_c^s_*. Then the variance corrected coefficient update (<ref>) yields the guarantee 2ℒ(^*)-ℒ() ≤ -( s_*λ)(1-( s_*λ)L)^⊤ F()^2 + (Lλ/C∑_c=1^C∑_s=0^s_*-1^s -)^⊤ F() + L^3λ^2 s_*/C∑_c=1^C ∑_s=0^s_*-1^s-^2. From (<ref>), P_ =^⊤, P_ =^⊤, and the fact that ^s=0= for all c=1,…,C, ^s_* = _c^s_*^⊤ = _c^s=0^⊤ + ^⊤∑_s=0^s_*-1λ( F_c(^s) - F_c() + F())^⊤ = -λ∑_s=0^s_*-1P_ F_c(^s)P_ - λ P_(F()-F_c())P_. Averaging across clients leads to ^* = 1/C∑_c=1^C^s_* = - λ/C∑_c=1^C∑_s=0^s_*-1 P_ F_c(^s) P_- λ/C∑_c=1^C P_(F()-F_c()) P_ = - λ/C∑_c=1^C∑_s=0^s_*-1 P_ F_c(^s) P_, where we have used the definition of the global and local gradient at , i.e., 1/C∑_c=1^CF_c() = F(). Based on L-continuity of F and F_c, (<ref>), and <Ref>, we obtain further ℒ(^*)-ℒ() ≤^*-,F() + L/2^*-^2 = -λ/C∑_c=1^C∑_s=0^s_*-1 P_ F_c(^s)P_ ,F() +L/2λ/C∑_c=1^C∑_s=0^s_*-1 P_ F_c(^s) P_^2. Next, we bound each of the two right-hand-side terms separately. We first express the first term as -λ/C∑_c=1^C∑_s=0^s_*-1 P_ F_c(^s) P_,F() = -λ/C∑_c=1^C∑_s=0^s_*-1 P_( F_c(^s) -F_c())P_ + P_(λ/C∑_c=1^C∑_s=0^s_*-1 F_c())P_ ,F() = -λ/C∑_c=1^C∑_s=0^s_*-1 P_( F_c(^s) -F_c())P_ + P_s_*λ/C∑_c=1^C F_c() P_ ,F() = - P_(λ/C∑_c=1^C∑_s=0^s_*-1 F_c(^s) -F_c())P_ +P_ s_*λ F() P_,F() = -^⊤( λ/C∑_c=1^C∑_s=0^s_*-1 F_c(^s) -F_c()) , ^⊤ F() ^⊤ - s_*λ^⊤ F(),^⊤ F() = -λ/C∑_c=1^C∑_s=0^s_*-1^⊤( F_c(^s) -F_c()) , ^⊤ F() - s_*λ^⊤ F()^2, where the definitions of P_ and P_ are used. Following this, the first term then can be bounded by -λ/C∑_c=1^C∑_s=0^s_*-1 P_ F_c(^s) P_,F() ≤ λ/C∑_c=1^C∑_s=0^s_*-1^⊤( F_c(^s) -F_c()) ^⊤ F() - s_*λ^⊤ F()^2 ≤ Lλ/C∑_c=1^C∑_s=0^s_*-1^s -^⊤ F() - s_*λ^⊤ F()^2, where Lemma <ref> is invoked in the last inequality. Following a similar approach, we express the second term as L/2λ/C∑_c=1^C∑_s=0^s_*-1 P_ F_c(^s) P_^2 =L/2λ/C∑_c=1^C∑_s=0^s_*-1P_( F_c(^s)-F_c())P_+ s_*λ P_ F()P_^2, which can be bounded by L/2λ/C∑_c=1^C∑_s=0^s_*-1 P_ F_c(^s) P_^2 (I)≤ Lλ/C∑_c=1^C∑_s=0^s_*-1P_( F_c(^s)-F_c())P_^2+( s_*λ)^2LP_ F()P_^2 (II)≤ L/C∑_c=1^Cλ^2 s_* ∑_s=0^s_*-1P_( F_c(^s)-F_c())P_^2+( s_*λ)^2LP_ F()P_^2 (III)≤ L^3λ^2 s_*/C∑_c=1^C ∑_s=0^s_*-1^s-^2+( s_*λ)^2L P_ F()P_^2 (IV)≤ L^3λ^2 s_*/C∑_c=1^C ∑_s=0^s_*-1^s-^2+( s_*λ)^2L^⊤ F()^2, where Lemma <ref> with γ=1 is used in in (I), Jensen's inequality is used in (II), Lemma <ref> is used in in (III), and (IV) follows from the Operator norm inequality of the Frobenius norm in combination with orthonormality of U and V^⊤. Plugging these two bounds into (<ref>) gives ℒ(^*)-ℒ() ≤ -λ/C∑_c=1^C∑_s=0^s_*-1 P_ F_c(^s)P_ ,F() +L/2λ/C∑_c=1^C∑_s=0^s_*-1 P_ F_c(^s) P_^2 ≤ Lλ/C∑_c=1^C∑_s=0^s_*-1^s -^⊤ F() - s_*λ^⊤ F()^2 + L^3λ^2 s_*/C∑_c=1^C ∑_s=0^s_*-1^s-^2+( s_*λ)^2L^⊤ F()^2 = -( s_*λ)(1-( s_*λ)L)^⊤ F()^2 + (Lλ/C∑_c=1^C∑_s=0^s_*-1^s -)^⊤ F() + L^3λ^2 s_*/C∑_c=1^C ∑_s=0^s_*-1^s-^2, which concludes the proof. With this result, we next bound the loss descent between the augmentation and coefficient aggregation step in the following theorem. Under the same assumptions as in <Ref>. 
Let the local learning rate be 0<λ≤1/12 Ls_* with number of local iterations s_*≥ 1. Then, ℒ(^*)-ℒ() ≤ - s_*λ(1- 12 s_*λ L) ^⊤ F()^2. Applying the drift bound given in <Ref> to the loss descent bound given by <Ref> in (<ref>) leads to -( s_*λ)(1-( s_*λ)L)^⊤ F()^2 + (Lλ/C∑_c=1^C∑_s=0^s_*-1(exp(1)s_* λ^⊤ F()) )^⊤ F() + L^3λ^2 s_*/C∑_c=1^C ∑_s=0^s_*-1(exp(1)s_* λ^⊤ F())^2 = -( s_*λ)(1-( s_*λ)L)^⊤ F()^2 +Lλ^2 s_*^2 exp(1)^⊤ F()^2 + L^3λ^4 s_*^4exp(2)^⊤ F()^2 = -( s_*λ)(1-( s_*λ)L - ( s_*λ)Lexp(1) - ( s_*λ)^3 L^2 exp(2) ) ^⊤ F()^2 ≤ -( s_*λ)(1-( s_*λ)L (1+exp(1)+exp(2))) ^⊤ F()^2 ≤ -( s_*λ)(1- 12( s_*λ)L) ^⊤ F()^2, where we have used that ( s_*λ)L≤ 1 and that 1+exp(1)+exp(2)≈ 11.107≤ 12. We are now prepared to prove <Ref>, which we restate in terms of Riemannian gradients as below. (Restatement of <Ref>) Let U^t S^t V^t,⊤ and U^t+1 S^t+1 V^t+1,⊤ be the factorization before and after iteration t of <Ref> with variance correction and singular value truncation threshold ϑ. Let ℒ_c and ℒ be L-smooth with constant L, and let the local learning rate be 0≤λ≤1/12 Ls_*. Then the global loss descent is bounded by ℒ(U^t+1 S^t+1 V^t+1,⊤) - ℒ(U^t S^t V^t,⊤) ≤ -( s_*λ)(1- 12( s_*λ)L) ^⊤ F()^2 + Lϑ. Consider ℒ(W_r^t+1) and ℒ(^*), i.e., the loss values before and after the truncation step. By the mean value theorem, we obtain for some h∈[0,1] ℒ(W_r^t+1) = ℒ(^*) + -F(h W_r^t+1 +(1-h)^*),W_r^t+1 - ^* ≤ℒ(^*) + F(h W_r^t+1 +(1-h)^*)W_r^t+1 - ^* ≤ℒ(^*) + Lϑ where L-smoothness and the fact that ϑ≥W_r^t+1 - ^* are used in (II), where the latter follows from the singular value truncation threshold. Combining the above arguments with <Ref> and <Ref> yields ℒ(W_r^t+1) - ℒ(W_r^t) = (ℒ(W_r^t+1) - ℒ(^*)) + (ℒ(^*) - ℒ()) + (ℒ() - ℒ(W_r^t)) ≤ Lϑ -( s_*λ)(1- 12( s_*λ)L) ^⊤ F()^2, which concludes the proof. §.§ Global convergence of FeDLRT with full variance correction (Restatement of <Ref>) Assume that ℒ is L-smooth with constant L for all c=1,…,C. Let ^t ^t ^t,⊤ be the augmented representation at iteration t. Then Algorithm  <ref> guarantees for the learning rate λ≤1/12 Ls_* and final iteration T min_t=1,…,T∇_ℒ(U^t S^t V^t,⊤)^2≤48 L/T(ℒ(^t=1)-ℒ(^t=T+1)) + 48 L^2ϑ. Consider <Ref>, ℒ(W_r^t+1) - ℒ(W_r^t) ≤ Lϑ -( s_*λ)(1- 12( s_*λ)L) ∇_ℒ(U^t S^t V^t,⊤)^2, and assume that λ s_* = 1/24L, i.e. λ=1/24L s_*≤1/Ls_*, which obeys the learning rate requirement of Theorem <ref>. Plugging this learning rate into (<ref>) gives ∇_ℒ(U^t S^t V^t,⊤)^2≤ 48 L (ℒ(^t)-ℒ(^t+1)+ Lϑ). Averaging from t=1 to t=T yields min_t=1,…,T∇_ℒ(U^t S^t V^t,⊤)^2 ≤1/T∑_t=1^T∇_ℒ(U^t S^t V^t,⊤)^2 ≤48 L/T(ℒ(^t=1)-ℒ(^t=T+1)) + 48 L^2ϑ, which concludes the proof. § ANALYSIS FOR FEDLRT WITH SIMPLIFIED VARIANCE CORRECTION We consider the FeDLRT method with simplified variance correction, see <Ref>. Key difference to the standard FeDLRT with full variance correction, see <Ref> is the modified coefficient update, incorporating global gradient information of the non-augmented coefficient matrix S for the variance correction term, that is V̌_c = Ǧ_ - Ǧ_,c = [ ∇_S ℒ(U^tS^tV^t,⊤) - ∇_S ℒ_c(U^tS^tV^t,⊤) 0; 0 0 ]. Using the Riemmanian gradient, we can equivalently write V̌_c = [U^⊤| 0 ]( F()-F_c())[ V; 0 ] = ^⊤[ I 0; 0 0 ](F_c() - F())[ I 0; 0 0 ]. Remember the simplified variance corrected local coefficient update, given by ^s+1 = ^s + λ^⊤( F_c(^s) +[ I 0; 0 0 ](F_C() - F())[ I 0; 0 0 ]) = ^s + λ^⊤( F_c(^s) ) +V̌_c. §.§ Global loss descent for FeDLRT with simplified variance correction In the following we provide proof for a global loss descent for <Ref>, i.e. 
using the local coefficient update with variance correction (<ref>). (Restatement of <Ref>) Under <Ref>, if the local learning rate 0<λ≤1/12 Ls_*, then <Ref> leads to the global loss descent ℒ(^t+1) - ℒ(^t)≤ - s_*λ (1-δ^2 - 12 s_*λ L + δ^2s_*λ) ^⊤ F()^2 + Lϑ, with ^t=U^t S^t V^t,⊤ and ^t+1=U^t+1 S^t+1 V^t+1,⊤. We split the adjusted coefficient update in (<ref>) into the non-augmented r× r matrix S and the tree off-diagonal blocks given by the augmentation : = - [ S 0; 0 0 ]. Analogously to the proof of <Ref>, we consider ℒ(^*)-ℒ() ≤^*-,F() + L/2^*-^2 = ^*^⊤-^⊤,F() + L/2^*^⊤-^⊤^2 = ^* -,^⊤ F() + L/2^*-^2 = ^* -,-∇_ℒ() + L/2^*-^2, where the transformation uses orthonormality of and and definition of the projected gradient. We split the right hand side in terms corresponding to augmented terms and non-augmented terms S according to (<ref>), i.e., S^* -S,-∇_Sℒ() + L/2S^*-S^2, which is treated exactly as in the proof of <Ref>, and the augmented terms ^* -,-∇_ℒ() + L/2^*-^2. First we bound the term (<ref>). Remember that =0 at the start of the local iterations due to orthonormality of ,. The coefficient update (<ref>) for S reads S_c^s+1 = S_c^s + λ U^⊤( F_c(^s) - F_c() + F())V. Then we can readily apply <Ref> to obtain the bound S^* -S,-∇_Sℒ() + L/2S^*-S^2 ≤ -( s_*λ)(1- 12( s_*λ)L) U^⊤ F()V^2. Next, we bound (<ref>), starting with the first term: ^* -,-∇_ℒ() (I)=^* -0,-∇_ℒ() = -λ/C∑_c=1^C∑_s=0^s_*-1∇_ℒ_c(^s),- ∇_ℒ() = λ/C∑_c=1^C∑_s=0^s_*-1∇_ℒ_c(^s), ∇_ℒ() ≤λ/C∑_c=1^C∑_s=0^s_*-1∇_ℒ_c(^s)∇_ℒ() (II)≤λ/C∑_c=1^C∑_s=0^s_*-1δ^2∇_ℒ()∇_ℒ() = δ^2 s_* λ∇_ℒ()^2 = δ^2 s_* λ^⊤ F()^2, where we use =0 in (I), and <Ref> in (II). Next, we bound the second term L/2^*-^2 = L/2 -λ/C∑_c=1^C∑_s=0^s_*-1∇_ℒ(^S)^2 (I)≤ L/2λ^21/C∑_c=1^C∑_s=0^s_*-1∇_ℒ(^S)^2 (I)≤ L/2s_*λ^21/C∑_c=1^C∑_s=0^s_*-1∇_ℒ(^S)^2 ≤ s_*L/2δ^2λ^21/C∑_c=1^C∑_s=0^s_*-1∇_ℒ()^2 ≤ L/2δ^2(s_*λ)^2∇_ℒ()^2 = L/2δ^2(s_*λ)^2^⊤ F()^2 , where we used Jensen's inequality in (I) again <Ref>. We combine the bound on the non-augmented terms (<ref>) and the two bounds above for the augmented terms to ℒ(^*)-ℒ()≤^*-,F() + L/2^*-^2 ≤ -( s_*λ)(1- 12( s_*λ)L) U^⊤ F()V^2 + δ s_* λ^⊤ F()^2 +δ(s_*λ)^2^⊤ F()^2 (I)≤ -( s_*λ)(1- 12( s_*λ)L) ^⊤ F()^2 + δ s_* λ^⊤ F()^2 +δ(s_*λ)^2^⊤ F()^2 = -( s_*λ)(1-δ^2 - 12( s_*λ)L + δ^2(s_*λ)) ^⊤ F()^2, where we use in (I) U^⊤ F()V≤^⊤ F(). Using <Ref>, we can conclude the proof: ℒ(U^t+1 S^t+1 V^t+1,⊤) - ℒ(U^t S^t V^t,⊤) ≤ -( s_*λ)(1-δ^2 - 12( s_*λ)L + δ^2(s_*λ)) ^⊤ F()^2 + Lϑ. §.§ Global convergence of FeDLRT with simplified variance correction (Restatement of <Ref>) Under <Ref>, <Ref> guarantees for the learning rate λ≤1/s_*(12 L +δ^2) min_t=1,…,T∇_ℒ(^t)^2≤96 L/T(ℒ(^1)-ℒ(^T+1)) + 96 L^2ϑ, with ^t=U^t S^t V^t,⊤, ^1=U^1 S^1 V^1,⊤. and ^T+1=U^T+1 S^T+1 V^T+1,⊤. Consider <Ref>, ℒ(^t+1) - ℒ(^t)≤ -( s_*λ)(1-δ^2 - 12( s_*λ)L + δ^2(s_*λ)) ^⊤ F()^2 + Lϑ and assume that λ s_* = 1/(12 L +δ^2), i.e. λ=1/s_*(12 L +δ^2)≤1/Ls_*, which obeys the learning rate requirement of Theorem <ref>. Plugging this learning rate into (<ref>) gives ∇_ℒ(^t)^2≤ 96 L (ℒ(^t)-ℒ(^t+1)+ Lϑ), where we use (1/4-δ^2)≤1/4 and 1/(12 L +δ^2)≤1/12 L Averaging from t=1 to t=T yields min_t=1,…,T∇_ℒ(^t)^2 ≤ 1/T∑_t=1^T^⊤ F()^2 ≤ 96 L/T(ℒ(^t=1)-ℒ(^t=T+1)) + 96 L^2ϑ, which concludes the proof.
European Commission, Joint Research Centre (JRC) 21027 Ispra, Italy Department of Information and Communication Systems Engineering, University of the Aegean, Karlovasi, 83200, Samos, Greece Assessing the Effectiveness of LLMs in Android Application Vulnerability Analysis Vasileios Kouliaridis 1 Georgios Karopoulos 1 Georgios Kambourakis 2 Received XXX; accepted YYY ================================================================================= § ABSTRACT The increasing frequency of attacks on Android applications coupled with the recent popularity of large language models (LLMs) necessitates a comprehensive understanding of the capabilities of the latter in identifying potential vulnerabilities, which is key to mitigate the overall risk. To this end, the work at hand compares the ability of nine state-of-the-art LLMs to detect Android code vulnerabilities listed in the latest Open Worldwide Application Security Project (OWASP) Mobile Top 10. Each LLM was evaluated against an open dataset of over 100 vulnerable code samples, including obfuscated ones, assessing each model's ability to identify key vulnerabilities. Our analysis reveals the strengths and weaknesses of each LLM, identifying important factors that contribute to their performance. Additionally, we offer insights into context augmentation with retrieval-augmented generation (RAG) for detecting Android code vulnerabilities, which in turn may propel secure application development. Finally, while the reported findings regarding code vulnerability analysis show promise, they also reveal significant discrepancies among the different LLMs. § INTRODUCTION As mobile devices continue to proliferate, the need for secure software development practices remains still of high priority. The predominant Android platform has become a prime target for attackers and malware writers, seeking to exploit vulnerabilities in the vast cosmos of mobile applications <cit.>. The importance and volume of mobile vulnerabilities has led the Open Web Application Security Project (OWASP) to periodically publish a current, reputable list of the most prevalent vulnerabilities detected in mobile applications, namely OWASP Mobile Top 10. <cit.>. This list can serve as a key benchmark in assessing the performance of any tool in finding software vulnerabilities <cit.>. An emerging approach to detecting Android code vulnerabilities is the use of large language models (LLMs) for code analysis. Actually, the use of LLMs for code analysis is traced back to the early 2010s. That is, in 2013, the introduction of Word2Vec <cit.>, a shallow neural network, marked the beginning of deep learning-based language models. That algorithm was capable of learning word embeddings (an encoding of the meaning of the word) from large datasets. In 2018, Google introduced Word2Vec's successor, a language model known as Bidirectional Encoder Representations from Transformers (BERT) <cit.>. BERT was designed to be bidirectionally trained, meaning it can learn information from both the left and right sides of a given text during training, therefore obtaining a better understanding of the context. In the realm of code analysis, LLMs began to gain traction around 2017. One of the early applications of LLMs in code analysis was code completion. Models like GPT-2 <cit.>, fully released in Nov. 2019, were trained on a large corpus of source code data. By understanding the structure and context of the code, these models could predict the most likely code to follow a given input. 
In 2020, OpenAI <cit.> introduced GPT-3 <cit.>, a significantly larger model with 175B parameters. This model showed improved capabilities in generating human-like text and was even able to generate code when given a task description. The ability of LLMs to analyze and understand code has also been demonstrated in recent studies <cit.>. Nevertheless, to the best of our knowledge, the literature lacks a comprehensive comparison of the ability of these models to detect Android code vulnerabilities so far. The present work aims to fill this gap by comparing the ability of nine state-of-the-art LLMs to detect Android code vulnerabilities listed in the OWASP Mobile Top 10. Specifically, each model is evaluated regarding its performance in identifying key vulnerabilities against a dataset comprising snippets of vulnerable Android code. The assessment of each model is done through a combination of manual and automated evaluation methods. We additionally pinpoint the strengths and weaknesses of each LLM and provide insights into the factors that conduce to their performance. Overall, this study provides valuable insights into the use of LLMs for detecting mobile code vulnerabilities, thus contributing to the development of effective methods for secure mobile coding. The contributions of the paper are summarized as follows. * We present a thorough comparative analysis on the capabilities and performance of nine leading LLMs, i.e., GPT 3.5, GPT 4, GPT 4 Turbo, Llama 2, Zephyr Alpha, Zephyr Beta, Nous Hermes Mixtral, MistralOrca, and Code Llama in identifying vulnerabilities residing in Android applications. The experiments conducted provide concrete evidence of the LLMs' capabilities for such tasks, also identifying the limitations per LLM. These insights are critical for anyone interested in understanding the trade-offs associated with each LLM. * We provide a comparison between the code analysis results as given by the nine LLMs against two well-known, publicly available static application security testing (SAST) tools, namely, Bearer <cit.> and MobSFscan <cit.>. * We examine the impact of context augmentation on LLMs and contribute a set of guidelines regarding the selection and fine-tuning of LLMs towards enhancing the security posture of Android code. * We offer an open dataset to the community for driving research in this field forward. The remainder of this paper is structured as follows. The next section presents previous work on the use of LLM for code vulnerability analysis. Section <ref> details our methodology, while the results per LLM are given in section <ref>. The last section concludes and proposes some lines for future research. § PREVIOUS WORK In recent years, LLMs have gained significant attention in the field of cybersecurity for their potential to provide assistance in various domains, including vulnerability detection, penetration testing, and security analysis. State-of-the-art surveys such as <cit.> and <cit.>, as well as a more recent but not yet peer-reviewed study <cit.>, provide comprehensive overviews of the current state and potential future applications of LLMs in cybersecurity. These works analyze the challenges, practical implications, and future research directions to exploit the full potential of these models in ensuring cyber resilience. The rest of this section will focus on literature dealing with software vulnerability analysis using LLMs. 
This includes works that have already been peer-reviewed, as well as more recent research that has been self-archived for the sake of completeness. In <cit.>, transformer-based LLMs are evaluated in the task of code vulnerability detection. The authors evaluate such LLMs, including BERT, DistilBERT, CodeBERT, GPT-2 and Megatron, against C/C++ source code snippets from two publicly available datasets. The results showed that LLMs perform well in software vulnerability tasks; indicatively, the best scoring model, GPT-2, had an F1-score above 95% in all tests. In the context of software engineering, <cit.> investigates the use of in-context learning to improve the ability of LLMs to detect software vulnerabilities, showcasing the adaptability of LLMs to learn from context-specific examples. The authors use code retrieval to search for code snippets that are similar to the examined code and feed them to the LLM together with the examined code and its analysis. Their experimental results show that this approach has better performance than the original GPT model. Another set of works, adds verification in the vulnerability detection process. An empirical study of using LLMs for vulnerability assessment in software was conducted in <cit.>. The authors used four well-known pre-trained LLMs to identify vulnerabilities in two labeled datasets, namely code gadgets and CVEfixes, and static analysis as a reference point. The used LLMs include GPT-3.5, Davinci and CodeGen, and the analysis was limited to two kinds of vulnerabilities: SQL injections and buffer overflows. The study concluded that LLMs do not perform well at detecting vulnerabilities, presenting high false-positive rates, but could complement and improve the traditional static analysis process. Concerns about the safe use of code assistants are addressed in <cit.>. In this case, LLMs are used to produce code which is then assessed manually and using static analysis. This study provides empirical insights into how developers interact with LLMs, underscoring the importance of user awareness to mitigate security risks associated with assisted code generation. Moving to non peer-reviewed works, the work of <cit.> delves into the application of LLMs in static binary taint analysis, demonstrating how these models can assist in vulnerability inspection of binaries. A binary is first disassembled and decompiled, and an LLM is used to identify security sensitive functions that may contain vulnerabilities, as well as candidate dangerous flows. In the last phase, the LLM combines the previous results to produce a vulnerability report for the examined binary. The authors of <cit.> propose DefectHunter, a vulnerability detection mechanism that combines various technologies, including LLMs. Its architecture has three main building blocks: a tool for extracting structural information from code snippets, a pre-trained LLM for generating semantic information, and a Conformer mechanism to identify vulnerabilities from the previously extracted structural and semantic data. The authors of <cit.> evaluated ChatGPT and GPT-3 in detection of Common Weakness Enumeration (CWE) vulnerabilities contained in code. Using a custom real-world dataset with Java files from open GitHub repositories, they concluded that the detection capabilities of the aforementioned models are limited. In <cit.>, an empirical study of the potential of LLMs for detecting software vulnerabilities is presented. 
The authors tested 129 code samples from various GitHub repositories, written in eight different languages, and their results showed that GPT-4 identified around four times more vulnerabilities than traditional, rule-based, static code analysis tools. In addition, the LLMs were asked to provide fixes for the identified vulnerabilities. The models used include GPT-3 and GPT-4. Apart from generic code, LLMs have been used for detecting vulnerabilities in smart contracts. LLM4Vuln <cit.> is an evaluation framework for vulnerability detection systems based on LLMs, focusing on smart contract vulnerabilities. The difference from other similar works is that, instead of benchmarking the performance of LLMs in vulnerability detection, the authors evaluate the vulnerability reasoning capabilities of each model. Similarly, the authors of <cit.> proposed GPTLens, a framework for detecting vulnerabilities in smart contracts using LLMs. GPTLens takes a different approach from the traditional one-stage detection in order to decrease false positives. The detection process is broken down in two steps, where the LLM takes two different roles: auditor and critic. As an auditor, the LLM provides a large range of vulnerabilities for the examined contract, whereas as a critic it verifies the claims produced in the first step. The performed experiments show that GPTLens presents improved results over the single-stage vulnerability detection. § METHODOLOGY This section details our methodology, including the creation of the benchmark dataset, the selection of LLMs, and the evaluation process. §.§ Dataset Also with reference to Section <ref>, to our knowledge, there is no publicly available dataset containing vulnerable Android code covering each one of the OWASP Mobile Top 10 vulnerabilities. The most relevant dataset to our study is LVDAndro <cit.>, which however is labelled based on CWE. Additionally, since LVDAndro was created using actual Android applications, it contains a significant proportion of non-vulnerable code. In view of this shortage, for the needs of our experiments, we created a new dataset coined Vulcorpus <cit.> containing 100 pieces of vulnerable code. It is important to note that the term “piece of code”, hereafter called sample, refers to a part of an application, not its full codebase. All the samples were written in Java by exploiting common insecure coding practices, e.g., logging private information, not filtering input/objects, etc., and target the Android OS. However, obviously, the same vulnerabilities apply to other mobile platforms, say, iOS. More specifically, Vulcorpus contains 10 samples for each of the OWASP Mobile Top-10 vulnerabilities of 2024, which are briefly explained in subsection <ref>. Every sample exhibits one or maximum two interrelated vulnerabilities, while one or two of these samples per vulnerability category are obfuscated using the well-known renaming technique. Half of the samples per vulnerability contain code comments regarding the specific vulnerability. Moreover, to assess each LLM in detecting privacy-invasive code, we created three more samples which perform risky actions without asking the user for confirmation. These actions are: * Get the precise location of the device through the “android.permission.ACCESS_FINE_LOCATION” permission, and directly share the latitude and longitude over the Internet via API. 
According to the Android API <cit.>, this permission has a “dangerous” protection level, namely it may give the requesting application access to user's private data, among others. * Capture an image via the “ACTION_IMAGE_CAPTURE” intent <cit.> , and subsequently attempt to share the captured image file via API. * Open local documents through the “ACTION_OPEN_DOCUMENT” intent <cit.>, and attempt to send them to a remote host via API. The latter three samples are also available at <cit.> along with Vulcorpus. §.§ List of vulnerabilities This subsection briefly delineates each vulnerability contained in the current OWASP Mobile Top 10 list. For more details regarding each vulnerability, the reader is referred to <cit.>. It is important to note that the list differs from its 2016 version, given that four vulnerabilities contained in the 2016 list have been replaced with new ones in the current list. The reader should also keep in mind that while some categories of vulnerabilities, say, M5 are straightforward, others might be more complicated for LLMs to understand, such as the M7. Improper credential usage (M1): Poor credential management can lead to severe security issues, namely, unauthorized users may be able to gain access to sensitive information or administrative functionalities within the mobile app or its backend systems. This in turn leads to data breaches and fraudulent activities. Inadequate supply chain security (M2): By exploiting vulnerabilities in the mobile supply chain, attackers may be able to manipulate application functionality. For example, they can insert malicious code into the mobile application's codebase or libraries <cit.>, as well as modify the code during the application's build process to introduce backdoors, spyware, or other type of malware. The attacker can also exploit vulnerabilities in third-party software libraries, software development kits (SDKs), or hard-coded credentials to gain access to the mobile app or the backend servers. Overall, this type of vulnerabilities can lead to unauthorized data access or manipulation, denial of service, or complete takeover of the mobile application or device. Insecure authentication/authorization (M3): Poor authorization could lead to the destruction of systems or unauthorized access to sensitive information, while poor authentication results in the inability to identify the user making an action request, leading to the inability to log or audit user activity. This situation makes it difficult to detect the source of an attack, understand any underlying exploits, or develop strategies to prevent future attacks. Obviously, authentication failures are tightly coupled to authorization failures; when authentication controls fail, authorization cannot be performed. That is, if an attacker can anonymously execute sensitive functionality, it indicates that the underlying code is not verifying the user's permissions, highlighting failures in both authentication and authorization controls. Insufficient input/output validation (M4): A mobile application that does not adequately validate and sanitize data from external sources, like user inputs or network data, is susceptible to a range of attacks, including SQL injection, command injection, and cross-site scripting. Insufficient output validation can also lead to data corruption or presentation vulnerabilities, possibly allowing the malicious actor to inject harmful code or manipulate sensitive information shown to the users. 
Insecure communication (M5): Modern mobile applications typically communicate with one or more remote servers. This renders user data susceptible to interception and modification, if they are transmitted in plaintext or using an outdated encryption protocol. Inadequate privacy controls (M6): Privacy controls aim to safeguard Personally Identifiable Information (PII), including names and addresses, credit card details, emails, and information related to health, religion, sexuality, and political opinions. This sensitive information can be used to impersonate the victim for fraudulent activities, misuse their payment data, blackmail them with sensitive information, or harm them by destroying or manipulating sensitive data. Insufficient binary protections (M7): The application's binary may hold valuable information, such as commercial API keys or hard-coded cryptographic secrets. Furthermore, the code within the binary itself could be valuable, for instance, containing critical business logic or pre-trained AI models. In addition to gathering information, attackers may also manipulate app binaries to gain access to paid features for free or to bypass other security controls. In the worst-case scenario, popular apps could be altered to include malicious code and then distributed through third-party app stores or under a new name to deceive unsuspecting users. Security misconfiguration (M8): These occur when security settings, permissions, or controls are improperly configured, leading to vulnerabilities and unauthorized access. Insecure data storage (M9): Such vulnerabilities may stem from weak encryption, insufficient data protection, insecure data storage mechanisms, and improper handling of user credentials. Insufficient cryptography (M10): The use of obsolete cryptographic suites, primitives, or cryptographic practices may lead to loss of data confidentiality, integrity, and inability to impose source authentication among others. Typical repercussions include data decryption, manipulation of cryptographic processes, leak of encryption keys, etc. §.§ Selection of LLM For the purposes of our experiments, nine contemporary, well-known LLMs were chosen: three commercial models, i.e., GPT 3.5, GPT 4, and GPT 4 Turbo, and six open source models, i.e., Llama 2, Zephyr Alpha, Zephyr Beta, Nous Hermes Mixtral, MistralOrca, and Code Llama. According to their documentation, these models have been pre-trained on large amounts of text data, including code, having demonstrated performance in various software engineering tasks, including code analysis. That is, their ability to understand code syntax and semantics makes them well-suited for identifying vulnerabilities residing in code. Additionally, their large size and diverse training data make them less likely to overfit to a specific codebase. A succinct description of each LLM is given below. * GPT 3.5 (gpt-35-turbo version Nov. 2023) <cit.>: It is a powerful language model that has been pre-trained on a large corpus of text data, including code. It has demonstrated performance in various natural language processing (NLP) tasks and has been used for code analysis tasks such as code completion, code search, and code summarization. * GPT 4 <cit.>, <cit.>: It is the newest version of GPT being pre-trained on an even larger corpus of text data, including code. It has demonstrated improved performance over GPT 3.5 in various NLP tasks and has been used for code analysis, including code review and repair. 
* GPT 4 Turbo (gpt-4-1106): It is a variant of GPT 4, specifically designed for tasks that require faster inference times, such as code analysis. It has been pre-trained on the same large corpus of text data as GPT 4, but optimized for faster performance. * Llama 2 (Llama-2-70b-chat) <cit.>: This LLM has been pre-trained on a diverse set of text data, including code. It has demonstrated performance in various NLP tasks and has also been exploited for code analysis, including code summarization and code search. * Zephyr Alpha (zephyr-7b-alpha) <cit.>: It is pre-trained on a huge corpus of text data from diverse sources, including books, articles, and websites. This model has been fine-tuned with a mix of publicly available and synthetic datasets on top of Mistral LLM. Despite its small size (7B parameters), it potentially shows a performance comparable to several models with a number of parameters in the range of 20-30B. * Zephyr Beta (zephyr-7b-beta) <cit.>: This model has been fine-tuned with a mix of publicly available and synthetic datasets on top of Mistral LLM. It is the successor of Zephyr Alpha, therefore considered significantly more powerful than its predecessor. Based on its documentation, it is fast and competent, showing a performance comparable to the best open-source models having around 70B parameters. * Nous Hermes Mixtral (nous-hermes-2-mixtral-8x7b-dpo) <cit.>: It is one of the most powerful open-source models available, comprising a fine-tuned version of the Mixtral base model. * MistralOrca (mistral-7b-openorca) <cit.>, <cit.>, <cit.>: It has been fine-tuned with Open-Orca datasets on top of Mistral LLM. Despite its small size, it outperforms Llama 2 13B, showing a performance comparable to several models with a number of parameters in the range of 20-30B. * Code Llama <cit.>: It is a special version of Llama 2, tailored specifically for coding applications. This specialized version has been refined through extensive additional training on code-focused data, with prolonged exposure to relevant datasets. The result is a tool with alleged superior coding proficiency that builds upon the foundation of Llama 2. More specifically, Code Llama can generate code and create explanations about code in response to prompts in both programming and natural language. Its capabilities extend to assisting with code completion and troubleshooting code errors. Furthermore, Code Llama is versatile, supporting a broad array of widely-used programming languages, including Python, C++, Java, PHP, JavaScript, C Sharp, and Bash. In this work, we examine the smallest pre-trained model, namely, the 7B version. In addition, for this LLM, in a separate run, we employed LlamaIndex <cit.> to improve the detection capabilities of Code Llama. LlamaIndex is a data framework for LLM-based applications, enhancing them with additional contextual data. This context augmentation technique is called Retrieval-Augmented Generation (RAG) and can be used to address the restrictions of LLMs by giving them access to contextual, current data. For the RAG process, we used 50% of Vulcorpus, i.e., only the samples that contain code comments regarding the specific vulnerability. Android's application quality and security guidelines and code examples <cit.> were also added as input to the RAG, along with information on each vulnerability from the OWASP website <cit.>.
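The exact RAG wiring is not prescribed in the paper; the following minimal Python sketch shows how such a pipeline could be assembled with LlamaIndex, assuming the indexed context (the commented Vulcorpus samples, the Android security guidelines, and the OWASP Mobile Top 10 pages) has been saved as plain-text files in a local directory and that Code Llama 7B is served locally through Ollama. The directory layout, the model tag, the retrieval settings, and the sample file name are illustrative assumptions, and the import paths may differ between LlamaIndex releases.

```python
# Hypothetical sketch of a RAG-augmented vulnerability analysis with LlamaIndex.
# Paths, the Ollama model tag, and similarity_top_k are assumptions, not values
# taken from the paper.
from llama_index.core import SimpleDirectoryReader, VectorStoreIndex, Settings
from llama_index.llms.ollama import Ollama

Settings.llm = Ollama(model="codellama:7b", request_timeout=300.0)
# NOTE: for a fully local setup, a local embedding model must also be assigned to
# Settings.embed_model; otherwise the default may fall back to a remote service.

# Context corpus: commented Vulcorpus samples, Android app-security guidelines,
# and OWASP Mobile Top 10 descriptions, stored as text files.
documents = SimpleDirectoryReader("rag_context/").load_data()
index = VectorStoreIndex.from_documents(documents)
query_engine = index.as_query_engine(similarity_top_k=4)

def analyze(sample_path: str) -> str:
    """Ask the RAG-augmented model to analyze one non-indexed code sample."""
    with open(sample_path, encoding="utf-8") as fh:
        code = fh.read()
    prompt = ("Check if there are any security issues in the following code; "
              "if there are, explain the issue.\n\n" + code)
    return str(query_engine.query(prompt))

print(analyze("vulcorpus/m1_sample_without_comments.java"))  # hypothetical file name
```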
§.§ Evaluation process All nine pre-trained LLMs listed in subsection <ref>, except Code Llama, were run on the GPT@JRC platform, a system developed by the European Commission's Joint Research Centre (JRC). Code Llama was run on a local computer with an M2 processor and 16GB unified memory. Each LLM was fed with Vulcorpus to compare its performance in identifying potential vulnerabilities and proposing code improvements. To this end, as detailed in Section <ref>, we use a simple scoring system to present (a) the number of vulnerabilities each LLM was able to detect, and (b) whether the LLM proposed valid suggestions for possibly fixing the vulnerability. Both these partial scores have a maximum value of 10/10 per vulnerability category, i.e., one point for each piece of vulnerable code the LLM was able to detect and annotate. It is important to note that the input or question given to each LLM has a major effect on its output. For our study, each LLM was queried as follows: “Check if there are any security issues in the following code; if there are, explain the issue”. As previously mentioned, the LLMs used in this work are pre-trained. This means that the associated libraries, possibly needed by each code sample but not included in the input, cannot be analyzed. This mostly affects the analysis regarding the M2 vulnerability. Therefore, to evaluate LLMs against M2, instead of Java code, we used 10 libraries with known vulnerabilities as input. These libraries, also included in Vulcorpus for reasons of reproducibility, were published before the training date of each LLM. In a final stage, as detailed in Section <ref>, the results of each LLM were compared and crosschecked against those produced by two well-known SAST tools, namely Bearer <cit.> and MobSFscan <cit.>. Bearer is a static application security testing tool, which uses built-in rules covering the OWASP Top 10 and Common Weakness Enumeration (CWE) Top 25. MobSFscan is a static analysis tool that uses MobSF's <cit.> security rules and can find insecure code patterns in Android or iOS source code. Finally, we also assessed the performance of each LLM in detecting privacy-invasive behaviors, using the three samples detailed in subsection <ref>. The output was rated using three categories: (a) not privacy-invasive, (b) potentially privacy-invasive, and (c) privacy-invasive. § RESULTS Tables <ref> and <ref> recapitulate the results for each LLM. Particularly, each line of Table <ref> indicates if the specific model Detected the vulnerability (denoted with the letter “D”), and if it explained the situation and provided a valid solution for Improving the code (denoted with the letter “I”). Actually, the “I” aspect is a key factor in evaluating each LLM (also against each other), as this is the sole indicator of whether the LLM actually “perceives” the security issue. Overall, with reference to Table <ref>, the best performers in terms of total vulnerabilities detected are Code Llama (81/100), GPT 4 (67/100), Nous Hermes Mixtral (62/100), Zephyr Beta (54/100), and Zephyr Alpha (53/100), followed by GPT 4 Turbo (50/100), GPT 3.5 (42/100), MistralOrca (37/100), and Llama 2 (30/100). On the other hand, the best performers in terms of total code improvement suggestions are GPT 4 (83/90), GPT 4 Turbo (66/90), Zephyr Alpha (58/90), Zephyr Beta (56/90), and Nous Hermes Mixtral (56/90), followed by Code Llama (44/90), MistralOrca (38/90), GPT 3.5 (37/90), and Llama 2 (31/90).
Overall, GPT 4 emerges as the top performer, considering a composite score of high “D” and high “I”. On the other hand, LLMs like Code Llama, which do identify the correct vulnerability but fail to provide corrections or suggestions regarding the problematic lines of code, may indicate an insufficiently trained model for this type of analysis. When looking at each vulnerability individually, GPT 4 achieved a perfect score for M1 and M6, MistralOrca for M9, Zephyr Alpha for M5 and M10, Zephyr Beta for M5 and M9, and Llama 2 and Code Llama for M5. Regarding the rest of the vulnerabilities, namely, M2, M3, M4, M7, and M8, the best performers were GPT 3.5 (7/10), Zephyr Beta and Code Llama (9/10), Nous Hermes Mixtral (9/10), Code Llama (9/10), and Nous Hermes Mixtral and Code Llama (9/10), respectively. Concerning M2, recall from subsection <ref> that it was tested using 10 vulnerable libraries published before the training date of each LLM. Even so, the low M2 detection performance in Table <ref> for all the LLMs but GPT-3.5 may indicate that these libraries were not considered during LLM training, so the respective scores can be regarded only as indicative. The same applies to the “I” score for M2, which is marked as N/A. As discussed in subsection <ref>, to address these limitations, LLMs used for vulnerability detection can capitalize on context augmentation; this way the LLM is provided with access to contextual, up-to-date data. After averaging the “D” score for all the nine LLMs, we sort in ascending order the OWASP Top 10 vulnerabilities in Table <ref>. This mean score provides an estimation of the detection difficulty per vulnerability as experienced by the different LLMs. The same table also includes the best performer(s) along with its score in parentheses. As observed from the table, from an LLM viewpoint, M2 is the toughest vulnerability with an average score of 2.11. As explained in subsection <ref>, this poor outcome is conceivably due to lack of sufficient, up-to-date information at the LLMs' side. Generally, this low score is somewhat expected, as for this vulnerability the LLMs are checking for known security issues in a list of libraries instead of analyzing the application's code. On the other hand, the highest average detection score was observed in M5, where four LLMs achieved a perfect score. No less important, with reference to the last stage of the experiments as given in subsection <ref>, regarding the detection of privacy-invasive actions, six, eight, and six of the LLMs correctly perceived potential privacy-invasive actions for location, camera, and local file sharing, respectively. The best performer was Zephyr Alpha, which clearly marked two out of three codes as privacy-invasive and the other as potentially privacy-invasive. The worst performer in this type of experiment was MistralOrca, which was unable to detect any possible privacy-invasive actions. Additionally, Table <ref> presents the results regarding the use of RAG on Code Llama. As explained in subsection <ref>, in this experiment, only half of the samples per vulnerability were indexed for RAG, along with text and code examples from Android's app security guidelines <cit.> and all the CVEs related to the vulnerable libraries used for M2. After that, we analyzed the other half of the samples, i.e., the non-annotated ones without comments on the particular vulnerability.
As observed from Table <ref>, the results show improvements in both the detection performance and the generation of code suggestions vis-à-vis the base model. Precisely, by feeding a large list of vulnerable libraries, the optimized Code Llama model achieved a perfect score for M2, an improvement of approximately 233% compared to that in Table <ref>. Nevertheless, for reaching this performance in real-world scenarios, the RAG process should involve an up-to-date dataset comprising known vulnerable library versions. Interestingly, except M2, the optimized Code Llama model detected the vulnerabilities and suggested improvements for all the M1, M3, M4, M5, M6, and M7 samples. As seen in the three bottom lines of Table <ref>, a nearly perfect performance (4/5) was also observed for all the M8, M9, and M10 samples. As previously mentioned, the performance of the LLMs was also compared against two reputable SASTs, namely Bearer and MobSFscan. Precisely, as shown in Table <ref>, across the 100 samples of Vulcorpus, Bearer found 29 security issues, while MobSFscan detected 12 issues. Excluding M2, this result suggests that, for several vulnerability types, the performance of at least some of the LLMs may significantly or even by far surpass that of well-known SASTs. For instance, comparing the numbers of Table <ref> with the average scores of Table <ref> it can be argued that the former observation applies especially to M3, M4, and M9, and in a smaller extent to M1, M6, and M7. Moreover, a side conclusion is that both the LLMs and SASTs score well in certain vulnerabilities, i.e., M10, and to a lesser extent M5; nevertheless, this is somewhat expected given that vulnerabilities of these two types are generally considered easier to detect. § CONCLUSIONS Our study provides empirical evidence regarding the effectiveness of using LLMs for Android code vulnerability analysis. GPT-4 and Code Llama emerged as the top performers among the nine LLMs tested, the latter excelling in detection, but failing to provide sufficient code improvements, and the former showing promising results both in detection and code improvement. Notably, the study highlights the superior performance of specific LLMs for particular types of vulnerabilities. For instance, MistralOrca and Zephyr Beta performed exceptionally well for M9, while Zephyr Alpha excelled in M10. These findings suggest that while some LLMs have a general proficiency in vulnerability detection, others may be more specialized, indicating the potential for strategic selection of LLMs based on the targeted vulnerability type. When comparing open LLM models with commercial ones, we can see that the open models were the best performers in seven out of ten categories of vulnerabilities, i.e., M3, M4, M5, M7, M8, M9, M10. On the other hand, considering mean detection and improvements scores, as presented in Table <ref>, the situation is mixed. Our findings also reveal that while some LLMs are capable of detecting Android code vulnerabilities, their overall performance is still in an early stage. For example, several LLMs struggled with M7, while others were unable to identify M2, reflecting the inherent complexity and subtlety of such vulnerabilities. This outcome points to a need for further research towards enhancing LLMs' capabilities in more nuanced areas of Android security. 
As an additional step, we evaluated the use of RAG in augmenting LLMs for vulnerability analysis, with our results demonstrating that RAG can significantly reinforce detection performance. Regarding the detection of privacy-invasive actions, the obtained results indicate a mixed level of sensitivity among the LLMs, with Zephyr Alpha being the top performer. However, MistralOrca's inability to identify any potential privacy-invasive actions underscores the variability in performance and the need for increased model robustness in privacy analysis concerning mobile platforms. No less important, after comparing the performance of LLMs with that of well-respected SASTs on the same set of vulnerable samples, it can be said that the former seem more adept at identifying code vulnerabilities. Altogether, the results of the present study provide valuable insights into the current state of LLMs in Android vulnerability detection. While certain models show high efficacy, there is ample room for improvement and targeted optimizations, particularly in addressing complex and subtle vulnerabilities. Nevertheless, for obtaining a more complete view, more experiments with larger datasets are needed.
http://arxiv.org/abs/2406.18993v1
20240627083539
Interference Cancellation Based Neural Receiver for Superimposed Pilot in Multi-Layer Transmission
[ "Han Xiao", "Wenqiang Tian", "Shi Jin", "Wendong Liu", "Jia Shen", "Zhihua Shi", "Zhi Zhang" ]
eess.SP
[ "eess.SP" ]
Interference Cancellation Based Neural Receiver for Superimposed Pilot in Multi-Layer Transmission July 1, 2024 ============================================================================================================================= § ABSTRACT In this paper, an interference cancellation based neural receiver for superimposed pilot (SIP) in multi-layer transmission is proposed, where the data and pilot are non-orthogonally superimposed in the same time-frequency resource. Specifically, to deal with the intra-layer and inter-layer interference of SIP under multi-layer transmission, interference cancellation with superimposed symbol aided channel estimation is leveraged in the neural receiver, accompanied by the pre-design of a pilot code-division orthogonal mechanism at the transmitter. In addition, to address the complexity issue for inter-vendor collaboration and the generalization problem in practical deployments, respectively, this paper also provides a fixed SIP (F-SIP) design based on a constant pilot power ratio and scalable mechanisms for different modulation and coding schemes (MCSs) and transmission layers. Simulation results demonstrate the superiority of the proposed schemes in terms of block error rate and throughput performance compared with existing counterparts. § INTRODUCTION Accurate channel estimation is a key issue in wireless communication systems, which can be achieved through various kinds of pilots in the fifth generation (5G) new radio (NR) system <cit.>, such as the demodulation reference signal (DMRS), channel state information reference signal (CSI-RS) and sounding reference signal (SRS). Towards the sixth generation (6G) <cit.>, we can expect to see greater advancements in massive multiple input multiple output (MIMO), hybrid beamforming and high-speed scenarios, as well as an increased focus on vertical applications such as sensing and positioning. These will undoubtedly lead to a further growing demand for diverse kinds of pilots, which may result in increased competition for air interface wireless resources between data and pilot transmission. In the 5G NR system <cit.>, pilot design has been standardized as a series of pre-defined patterns and sequences, which fail to consider the implicit channel characteristics of specific scenarios. Recently, deep learning (DL) based methods for air interface enhancement show great potential in system performance improvement <cit.>. Specifically, DL based pilot designs, including sequence <cit.> and pattern <cit.> designs with corresponding neural network (NN) receivers, have been proposed to learn the optimal pilot and the receiver corresponding to specific channel characteristics. However, the pilot in the above solutions is allocated orthogonally to the data, which results in considerable pilot overhead and hence a loss of spectral efficiency. A non-orthogonal solution, namely the superimposed pilot (SIP) <cit.>, allocates the pilot and data in the same time and frequency resource grids to alleviate the pilot overhead problem, where the corresponding pilot power distribution and neural receiver can be jointly trained in an end-to-end manner <cit.>. Despite the great throughput performance with reduced pilot overhead, existing DL based SIP <cit.> also suffers from some drawbacks from the perspective of multi-layer transmission and practical deployment.
Firstly, multi-layer transmission by precoding uses multiple transmit and receive antennas to simultaneously send multiple data streams, significantly enhancing throughput and making it crucial for advanced standards like 5G and beyond. However, existing DL based SIP methods do not consider the more serious intra-layer and inter-layer interference caused by SIP in multi-layer transmission, which results in performance loss and calls for a brand-new architecture at both transmitter and receiver. Secondly, trainable parameters are at both the base station (BS) and the user equipment (UE), where the two-sided framework brings much more complexity to inter-vendor training collaboration, e.g. data collection, model training, monitoring, and other model life cycle management issues <cit.>. Thirdly, the generalization over different configurations, such as the number of transmission layers and the modulation and coding scheme (MCS) <cit.>, is also ignored. In this paper, an interference cancellation based neural receiver for SIP in multi-layer transmission is proposed, which involves the innovative design of multiple mechanisms to face the challenges of multi-layer transmission and practical deployment. The main contributions of this article are summarized as follows. * To deal with the intra-layer and inter-layer interference of SIP in multi-layer transmission, interference cancellation with superimposed symbol aided channel estimation is leveraged in the neural receiver, accompanied by the pre-design of a pilot code-division orthogonal mechanism at the transmitter. * Considering practical deployment and standardization, a fixed SIP (F-SIP) based on a constant pilot power ratio is designed, where the realized one-sided model simplifies the inter-vendor collaboration. * To address the generalization problem in practical deployment, scalable mechanisms for different modulation and coding schemes (MCSs) and transmission layers are also proposed, where one and the same model can work effectively in different MCS and layer configurations. * Various kinds of simulation results are provided to demonstrate the superiority of the proposed scheme in terms of block error rate (BLER) and throughput performance compared with existing counterparts. These abundant simulations are performed with 3rd Generation Partnership Project (3GPP) link level channels, which may hopefully provide some referable insights for 3GPP discussions in the future. The rest of this paper is organized as follows. The system model and existing pilot solutions are introduced in Section <ref>. The proposed scheme, which involves the innovative design of multiple mechanisms at the transmitter and receiver, is presented in Section <ref>. Numerical experiments are provided in Section <ref>, and conclusions are given in Section <ref>. § SYSTEM DESCRIPTION §.§ System Model We consider a typical downlink MIMO system with N_t transmit antennas at the BS and N_r receive antennas at the UE, where S subcarriers with T consecutive orthogonal frequency division multiplexing (OFDM) symbols are allocated. Specifically, since we mainly focus on multi-layer transmission, the equivalent downlink channel tensor after precoding in the frequency domain can be denoted as 𝐇∈ℂ^S × T × L × N_r, where L denotes the number of layers. The received signal for the rth receive antenna can be expressed as 𝐘_r = ∑_l=1^L𝐇_r,l∘𝐗_l + 𝐍_r where 𝐘_r∈ℂ^S × T denotes the received signal, 1 ≤ r ≤ N_r and 1 ≤ l ≤ L are the receive antenna index and layer index, respectively.
∘ denotes the Hadamard product, 𝐇_r,l∈ℂ^S × T is a slice of tensor 𝐇 and denotes the equivalent channel for the rth receive antenna and lth layer. 𝐍_r∈ℂ^S × T is the corresponding additive white complex Gaussian noise with variance of σ^2 per element according to signal to noise ratio SNR =10log_10(𝔼{∑_l=1^L|x_l,s,t|^2} / σ^2). 𝐗_l∈ℂ^S × T is the matrix of transmitted symbols for the lth layer, which is capable of carrying data, pilot namely DMRS in 5G NR or DL based orthogonal pilot, or superimposed symbols from pilot and data as introduced later. For all schemes, the transmitted symbols are assumed to have an average energy equal to one, i.e., 𝔼{∑_l=1^L|x_l,s,t|^2}=1, where 1≤ s ≤ S and 1≤ t ≤ T. §.§ DMRS in 5G NR In this subsection, typical existing pilot solution of demodulation reference signal (DMRS) standardized in 5G NR <cit.> is introduced, wherein the pilots and data symbols are orthogonally allocated on different resource elements (REs). As shown in Fig. <ref>, two basic pilot patterns with number of OFDM symbols carrying pilot per slot N_p=1 and N_p=4 are designed for lower and higher speed, respectively. Meanwhile, pilots between different layers are designed orthogonally by frequency-division multiplexing (FDM) and code-division multiplexing (CDM). Based on the pilots with pre-defined patterns, the legacy receiver performs channel estimation and data detection with some linear algorithms, such as linear minimum mean square error (LMMSE). Obviously, this kind of orthogonal pilot patterns bring inevitable overhead. In addition, the pattern switching for different scenarios also lead to cumbersome signaling exchange between BS and UE. Moreover, empirically designed pattern fail to consider the implicit characteristics of increasingly complex channel scenarios. These bottlenecks may result in considerable performance loss of throughput. §.§ Pilot based on DL DL-based pilot design shows significant improvements compared with traditional solution in 5G NR. For DL-based orthogonal pilot design, the trainable parameters Φ and Θ are equipped on the transmitter g(·;Φ) and receiver f(·;Θ), respectively, where Φ configures the pilot sequence <cit.> or pilot pattern <cit.>, and Θ underpins neural receiver. The parameters Φ and Θ are jointly trained through an end-to-end manner, i.e., min_Φ, Θ ℒ_bce(𝐁,f(h(g(𝐁;Φ));Θ) where 𝐁∈{0,1}^S × T × M denotes the original encoded information bits, g(𝐁;Φ) denotes the transmitting signal, h(·) denotes the process of passing channel, and f(𝐘;Θ) represents the recovered bits or corresponding log-likelihood ratio (LLR), respectively. M is the number of bits per symbol according to the modulation order, 𝐘 = h(g(𝐁;Φ)) ∈ℂ^S × T × N_r is the received signal, and ℒ_bce denotes the binary crossentropy loss function. Obviously, orthogonal pilot in above solutions bring cumbersome signaling for pattern switching and inevitable overhead for pilot allocation. As for non-orthogonal DL-based solution <cit.> where pilot and data are superimposed and the parameters Φ configures pilot power ratio, there are still non-negligible challenges in multi-layer transmission. More difficult than non-orthogonal multiple access (NOMA) <cit.> problem which only introduces the inter-user data interference, the SIP suffers not only from inter-layer data interference but also from intra-layer pilot and data interference. Moreover, the issues of the complexity of two-sided model and generalization of MCS and number of layers mentioned in Section <ref> also need to be addressed. 
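Before moving to the proposed schemes, the received-signal model introduced above can be made concrete with a minimal numpy sketch that builds a toy S × T grid per layer and forms the per-antenna received signal; the dimensions, the flat i.i.d. Rayleigh channel and the QPSK data are illustrative assumptions rather than the CDL setup used later in the simulations.

```python
# Toy illustration of Y_r = sum_l H_{r,l} ∘ X_l + N_r (Hadamard product per RE).
# S, T, L, N_r, the i.i.d. Rayleigh channel and the QPSK symbols are assumptions.
import numpy as np

rng = np.random.default_rng(0)
S, T, L, N_r = 48, 14, 2, 4          # subcarriers, OFDM symbols, layers, rx antennas
snr_db = 10.0

# Equivalent post-precoding channel H[s, t, l, r].
H = (rng.standard_normal((S, T, L, N_r)) + 1j * rng.standard_normal((S, T, L, N_r))) / np.sqrt(2)

# Per-layer QPSK symbols, scaled so that E{sum_l |x_{l,s,t}|^2} = 1.
bits = rng.integers(0, 2, (S, T, L, 2)) * 2 - 1
X = (bits[..., 0] + 1j * bits[..., 1]) / np.sqrt(2 * L)

# Noise variance from SNR = 10 log10(E{sum_l |x|^2} / sigma^2) with unit signal power.
sigma2 = 10 ** (-snr_db / 10)
N = np.sqrt(sigma2 / 2) * (rng.standard_normal((S, T, N_r)) + 1j * rng.standard_normal((S, T, N_r)))

# Received grid: element-wise product of channel and symbols, summed over layers.
Y = np.einsum("stlr,stl->str", H, X) + N
print(Y.shape)                        # (48, 14, 4)
```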
§ PROPOSED SCHEMES In this section, the motivation for designing the proposed schemes is first discussed. Then the proposed mechanisms at the transmitter and receiver are introduced. Finally, the total framework is formulated by combining all proposed mechanisms. §.§ Motivation §.§.§ Challenge of SIP in multi-layer transmission In single-layer SIP transmission <cit.>, the neural receiver can handle the interference of non-orthogonal pilot and data well. However, as the number of transmission layers increases, a new challenge of intra-layer and inter-layer interference is introduced that is difficult for existing neural receivers to cope with. In more detail, accurate channel estimation is required to mitigate the inter-layer interference by exploiting the low correlation of the channels of different layers brought by the precoding process. Conversely, less intra-layer and inter-layer interference to the pilot is also required for accurate channel estimation. Increasing the power of the pilot can reduce intra-layer interference, yet it is followed by a reduction in the equivalent SNR of the data. These create intractable contradictions and call for a novel framework design that solves the intra-layer and inter-layer interference challenge under multi-layer SIP transmission. §.§.§ Challenge of SIP in practical deployment Considering the NN model in practical wireless communication systems, it is essential to employ appropriate parameter training to adapt the model to different transmission conditions and to deploy it with low latency. However, to implement SIP, the trainable parameters of the power ratio and the NN model reside at the transmitter and receiver, respectively. This two-sided model brings cumbersome inter-vendor collaboration such as data collection, model training, updating and switching <cit.>. Moreover, appropriate structure design for different system configurations also brings the problem of generalization. Specifically, distinct configurations such as the number of layers and the MCS can lead to varying dimensions of the neural receiver inputs and outputs, and hence to different model structures. Thus, one cannot simply utilize a mixed dataset for model training to achieve structure generalization. Consequently, it is imperative to devise an efficient one-sided neural receiver capable of addressing the model structure generalization challenge, rather than relying on extensive model life cycle management to ensure model deployment and application. §.§ Pre-design at Transmitter Before introducing the interference cancellation based neural receiver, the pre-design at the transmitter, including the intra-layer non-orthogonal F-SIP and the inter-layer orthogonal code-division pilot, is first proposed in this section to deal with the challenges of SIP in practical deployment and multi-layer transmission, respectively. §.§.§ Intra-Layer Non-Orthogonal Fixed Superimposed Pilot The intra-layer non-orthogonal F-SIP is first introduced in this subsection. Different from the orthogonal pilot patterns in 5G NR, the pilot and data symbols in the proposed F-SIP are non-orthogonally superimposed in the power domain. Different from the existing two-sided SIP solution with trainable parameters at both transmitter and receiver, a fixed power allocation ratio 0<α<1 is pre-set at the transmitter.
Then the transmitted matrix 𝐗_l in (<ref>) with superimposed symbols for the lth layer can be denoted as 𝐗_l = √(1-α)𝐃_l + √(α)𝐏_l where 𝐃_l∈ℚ^S × T and 𝐏_l∈ℂ^S × T denote the data and pilot matrix for the lth layer, respectively, and ℚ denotes the constellation set according to the MCS configuration. Obviously, all REs are assigned a unified and fixed power ratio α, instead of a trainable parameter matrix Φ with extra model training complexity for a two-sided structure. A model management friendly one-sided framework serves as a premise here, and the time-frequency resource overhead of orthogonal pilots can be completely omitted. §.§.§ Inter-Layer Orthogonal Code-Division Pilot In this subsection, the F-SIP is further extended to multi-layer transmission, wherein the inter-layer pilot interference introduced by F-SIP should be eliminated. An intuitive way to deal with inter-layer interference is using orthogonal pilots between different layers, such as FDM, time-division multiplexing (TDM) and CDM. However, considering that pilots are allocated on orthogonal time and frequency REs in different layers for TDM and FDM, respectively, these two candidates are not suitable for F-SIP transmission where pilots and data are superimposed in all REs, because channel estimation based on interpolation in the time or frequency domain may result in severe performance loss, especially in high-speed or heavily frequency-selective scenarios. Therefore, CDM is selected in this paper since it can achieve inter-layer pilot orthogonality while ensuring that all REs can be equipped with pilots. Specifically, the CDM condition for F-SIP based multi-layer transmission can be expressed as ‖𝐏_l∘𝐏_k‖_F = 0, l ≠ k, where 1 ≤ l ≤ L and 1 ≤ k ≤ L are the layer indices and ‖·‖_F denotes the Frobenius norm. To satisfy (<ref>), all S × T REs in one layer are divided into G = S × T / L CDM groups as shown in Fig. <ref>. The pilots of different layers in the same group are distinguished by the proposed discrete Fourier transform orthogonal mask code (DFT-OMC), i.e., the pilot sequence can be generated by 𝐩_l,g = p̂_g𝐜_l where 𝐩_l,g∈ℂ^L × 1 is the vectorized pilot symbols of layer 1 ≤ l ≤ L and group 1 ≤ g ≤ G, p̂_g∈ℙ is the pilot seed for group g, and ℙ denotes the set of seeded constellation symbols with average power of 1/L and zero mean, e.g., binary phase shift keying (BPSK) and quadrature phase shift keying (QPSK). 𝐜_l∈ℂ^L × 1 is the DFT-OMC of layer l. Furthermore, DFT vectors are utilized to generate 𝐜_l, i.e., 𝐜_l = [1, ..., e^-j2π n(l-1)/L,...,e^-j2π (L-1)(l-1)/L]^T where 0≤ n ≤ L-1. Using the proposed DFT-OMC, the pilot orthogonality between different layers can be guaranteed with the required power normalization constraint. §.§ Interference Cancellation based Neural Receiver In this section, a novel neural receiver for F-SIP with multiple enhancing mechanisms is proposed, where interference cancellation and superimposed symbol aided channel estimation are introduced to cope with the challenge of SIP in multi-layer transmission. The layer and MCS scalable mechanisms are also provided to solve the challenge of SIP in practical deployment. §.§.§ Interference Cancellation with Superimposed Symbol Aided Channel Estimation Next, in order to further handle the inter-layer and intra-layer interference when receiving F-SIP, a receiver with interference cancellation and superimposed symbol aided channel estimation is proposed, in which the algorithm includes V outer iterations to realize interference cancellation.
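Before detailing the receiver processing flow, the transmitter-side pre-design described above can be illustrated with a short numpy sketch: DFT-OMC codes are built from DFT vectors, spread over one pilot seed per CDM group, and superimposed on the data with the fixed power ratio α. The group-to-RE mapping, the QPSK-like seeds and the grid dimensions are illustrative assumptions; orthogonality is checked group-wise in the code domain, i.e., the per-group inner product of two layers' pilots vanishes.

```python
# Sketch of the transmitter pre-design: F-SIP with DFT-OMC pilots that are
# code-division orthogonal across layers within each CDM group of L REs.
# Group-to-RE mapping, seed constellation and dimensions are assumptions.
import numpy as np

rng = np.random.default_rng(1)
S, T, L, alpha = 48, 14, 2, 0.05
G = S * T // L                                    # CDM groups per layer

n = np.arange(L)
C = np.exp(-2j * np.pi * np.outer(n, n) / L)      # row l is the DFT-OMC code c_l

# One QPSK-like pilot seed per group with average power 1/L and zero mean.
seeds = ((rng.integers(0, 2, G) * 2 - 1) + 1j * (rng.integers(0, 2, G) * 2 - 1)) / np.sqrt(2 * L)
P = seeds[None, :, None] * C[:, None, :]          # P[l, g, j]: layer l, group g, RE j in the group

# Code-domain orthogonality between layers: per-group inner products vanish.
assert np.allclose(np.sum(P[0] * P[1].conj(), axis=-1), 0)

# Map the groups onto the S x T grid (simple consecutive mapping, an assumption)
# and superimpose QPSK data according to X_l = sqrt(1-a) D_l + sqrt(a) P_l.
P_grid = P.reshape(L, S, T)
d_bits = rng.integers(0, 2, (L, S, T, 2)) * 2 - 1
D = (d_bits[..., 0] + 1j * d_bits[..., 1]) / np.sqrt(2 * L)   # scaled so total power per RE is one
X = np.sqrt(1 - alpha) * D + np.sqrt(alpha) * P_grid
print(X.shape)                                     # (2, 48, 14)
```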
Note that the reception for L layers is formulated as L inner iterations in this paper for ease of explanation, which can be parallelized and accelerated by a graphics processing unit according to the proposed layer-scalable mechanism as introduced later. Fig. <ref> shows the signal processing flow of the proposed receiver, where the IC, DD, CE, Rec modules denote the interference cancellation, data detection, channel estimation and signal reconstruction procedures, and the Enc, Dec and Mod modules denote the channel encoding, channel decoding and modulation procedures, respectively. As shown in Fig. <ref>, for the ith outer iteration and the lth inner iteration, the inputs of the channel estimation model include the received signal 𝐘^x_l,i-1∈ℂ^S × T × N_r, from which the reconstructed interference of the other L-1 layers 𝐘^x_l',i-1∈ℂ^S × T × N_r, l'∈{u| 1≤ u ≤ L, u ≠ l} has been canceled, the reconstructed data of the lth layer 𝐃_l,i-1∈ℚ^S × T, the reconstructed superimposed symbol of the lth layer 𝐗_l,i-1∈ℂ^S × T and the pilot of the lth layer 𝐏_l∈ℂ^S × T, in which the reconstructed tensors are obtained in the (i-1)th iteration and the interference cancellation can be formulated as 𝐘^x_l,i-1 = 𝐘 - ∑_l'𝐘^x_l',i-1 where 𝐘∈ℂ^S × T × N_r denotes the raw received signal concatenating the N_r received signals 𝐘_r in (<ref>). Beyond the basic information of the pilot components in the received signal 𝐘^x_l,i-1 and the pilot 𝐏_l for channel estimation, it should be noted that the reconstructed data 𝐃_l,i-1 and superimposed symbol 𝐗_l,i-1 are regarded as aided information, acting as `alternative pilots', so that the data components in 𝐘^x_l,i-1 no longer interfere with channel estimation but can instead be exploited to enhance its performance. In more detail, with the enhancement of the aided information, the channel can be estimated according to three pairs of variables as follows. * Pilot components in the received signal 𝐘^x_l,i-1 and the pilot 𝐏_l, which is the basic information. * Data components in the received signal 𝐘^x_l,i-1 and the reconstructed data 𝐃_l,i-1, which is the aided information brought by the proposed method. * Received signal 𝐘^x_l,i-1 and superimposed symbol 𝐗_l,i-1, which is also the aided information brought by the proposed method. Introducing the aided information allows for a lower pilot power ratio α, as long as the channel estimation using only the pilot 𝐏_l in the first iteration has a certain performance to ignite the subsequent iterations, since the reconstructed tensors are initialized to all zeros. A virtually unaffected data-equivalent SNR can hence be guaranteed. Furthermore, even though using either 𝐗_l,i-1 or 𝐃_l,i-1 combined with 𝐏_l as input provides the same amount of information, all three inputs can facilitate model learning during the training phase. Moreover, the estimated channel of the lth layer 𝐇_l,i∈ℂ^S × T × N_r and the received signal 𝐘^d_l,i-1∈ℂ^S × T × N_r are fed into the data detection NN model, where in 𝐘^d_l,i-1 the reconstructed interference of the other L-1 layers 𝐘^x_l',i-1∈ℂ^S × T × N_r, l'∈{u| 1≤ u ≤ L, u ≠ l} and the reconstructed pilot interference of the lth layer 𝐘^p_l,i-1∈ℂ^S × T × N_r have been canceled by using 𝐘^d_l,i-1 = 𝐘 - ∑_l'𝐘^x_l',i-1 - 𝐘^p_l,i-1 where 𝐘^p_l,i-1 is constructed from 𝐘^p_l,i-1 = 𝐇_l,i-1∘𝐏'_l and 𝐏'_l∈ℂ^S × T × N_r duplicates the pilot matrix 𝐏_l N_r times. Note that the MCS information m is also an input of the data detection model and is used to achieve MCS generalization, which will be explained in detail later.
The model output, namely the LLR tensor 𝐕_l,i∈ℝ^S × T × M, can then be obtained, where M is the number of bits per symbol according to the modulation order indicated by the configured MCS index m. In addition to supervision during model training, 𝐕_l,i is also exploited for reconstructing the data and superimposed symbol tensors by using 𝐃_l,i = Mod(Enc(Dec(𝐕_l,i))) and 𝐗_l,i = √(1-α)𝐃_l,i + √(α)𝐏_l where Dec(·), Enc(·) and Mod(·) represent the channel decoding, channel encoding and modulation procedures implemented according to the MCS configuration, respectively. 𝐁'_l,i = Dec(𝐕_l,i)∈ℝ^S × T × M denotes the received information bits of the lth layer in the ith iteration. The interference of the lth layer for the cancellation procedure in the (i+1)th iteration can finally be calculated by using 𝐘^x_l,i = 𝐇_l,i-1∘𝐗'_l,i and 𝐗'_l,i∈ℂ^S × T × N_r duplicates the reconstructed superimposed symbol tensor 𝐗_l,i N_r times. §.§.§ Layer-Scalable Mechanism Under multi-layer transmission, the signals from other layers can be regarded as interference for receiving each target layer, so the problems to be solved in each layer are relatively similar. Therefore, the layer-scalable mechanism is implemented in the proposed receiver, where the L layers share the same channel estimation and data detection NN models as well as the same signal processing flow. The layer scalability can be achieved by the proposed layer-common structure, since the number of layers only affects the batch size of model inference instead of the inner NN size, so that it can be conveniently parallelized and accelerated by a graphics processing unit. Meanwhile, since multiple layers share the NN structure and parameters, the lightweight model is also more friendly to terminal deployment than layer-specific models whose complexity increases as the number of layers increases. §.§.§ MCS-Scalable Mechanism The MCS generalization is further addressed in this subsection. The inner NN of the proposed data detection model is designed according to the maximum number of bits per symbol M_max supported by the system, supplemented by the configured MCS index m as auxiliary knowledge, resulting in a model structure compatible with multiple MCSs. Note that the MCS index m is tiled to a tensor 𝐌∈{m}^S × T × 1 as an input of the NN, which facilitates the concatenation of the inputs. After performing feature extraction by the NN, the model produces an intermediate redundant feature map of size S × T × M_max. By cropping this feature map in the third dimension according to M, the final output LLR tensor 𝐕_l,i∈ℝ^S × T × M can be obtained, where M is the number of bits per symbol according to the modulation order indicated by the configured MCS m. After collecting the LLR tensors of all L layers, they can be fed to the following channel decoder. §.§.§ Model Implementation Fig. <ref> shows the NN structure implementing the channel estimation and data detection models. The well-known ResNet <cit.> block is utilized, wherein two sequential batch normalizations, rectified linear unit (ReLU) activations and two-dimensional convolutional layers (Conv2D) with a residual connection are implemented in each block. Since the principles of the proposed scalable mechanisms are insensitive to the structure of the feature extraction model, other flexible implementations can be effectively employed, such as the multi-layer perceptron mixer <cit.> and the Transformer <cit.>. The hyperparameter settings of the model are given in Table <ref> in the simulation part of Section <ref>.
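The feature-extraction block is described above only at a high level, and the deep learning framework is not stated in the paper; the following PyTorch sketch of one such residual block is therefore an illustration under assumptions, with the number of feature maps and the 3x3 kernel size chosen arbitrarily.

```python
# Minimal sketch of the ResNet-style block described above: two sequences of
# BatchNorm -> ReLU -> Conv2D wrapped by a residual (skip) connection.
# PyTorch, 64 channels and 3x3 kernels are assumptions, not the paper's choices.
import torch
import torch.nn as nn

class ResBlock(nn.Module):
    def __init__(self, channels: int = 64):
        super().__init__()
        self.body = nn.Sequential(
            nn.BatchNorm2d(channels),
            nn.ReLU(),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.body(x)          # residual connection

# Example: a feature map over the S x T resource grid with 64 channels.
x = torch.randn(1, 64, 48, 14)           # (batch, channels, S, T)
print(ResBlock(64)(x).shape)             # torch.Size([1, 64, 48, 14])
```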
§.§ Framework of Proposed Scheme By combining the above mechanisms, the proposed receiver can be summarized in Algorithm <ref>. For the sake of simplicity, Algorithm <ref> mainly presents the stem of the proposed receiver, which helps readers understand its macro framework. Finally, the total framework can be formulated as min_Θ̂_ce,Θ̂_dd 1/V∑_i=1^V{τℒ_bce(𝐁,𝐕_i) + (1-τ)ℒ_mse(𝐇,𝐇_i)} s.t. 𝐕_i, 𝐇_i = f̂(𝐘,𝐏,L,m; Θ̂_ce,Θ̂_dd), 1 ≤ i ≤ V where ℒ_bce and ℒ_mse denote the binary crossentropy and mean square error loss functions, respectively, and τ denotes the weighting between the two loss functions. 𝐁∈{0,1}^S × T × L × M and 𝐇∈ℂ^S × T × L × N_r represent the original encoded bits and the ideal channel, respectively. 𝐕_i∈ℝ^S × T × L × M and 𝐇_i∈ℂ^S × T × L × N_r represent the LLR and estimated channel collecting 𝐕_l,i and 𝐇_l,i of the L layers, respectively. 𝐏∈ℂ^S × T× L denotes the pilot tensor collecting 𝐏_l of the L layers. f̂(·), Θ̂_ce and Θ̂_dd denote the proposed receiver and the corresponding NN parameters of the channel estimation and data detection models, respectively. Compared with existing methods, the proposed framework is capable of supporting multi-layer transmission of SIP with practicality and scalability. The challenges mentioned in Section <ref> are well addressed. § SIMULATION RESULTS In this section, numerical results of our proposed F-SIP with scalable neural receiver (Proposed) and two baselines are presented. Specifically, the standardized technology in the 5G NR system, i.e., orthogonal pilots of 5G NR in Fig. <ref> with LMMSE channel estimation and data detection, is used as a baseline (Baseline I), wherein the covariance matrix for LMMSE channel estimation is calculated over 10^5 channel samples. It can reflect the gain compared with the existing system design, providing strong simulation result guidance for subsequent application implementation and standardization work. In addition, the state-of-the-art method from academia depicted in (<ref>) with a two-sided model and trainable SIP <cit.> is also compared as another baseline (Baseline II). The proposed solution provides improvements for a series of problems that Baseline II does not solve. Therefore, the gain reflected by comparing with this representative SIP solution can well illustrate the advantages of our solution. The clustered delay line (CDL) channel model is considered here, which has been widely utilized for link-level evaluation in 3GPP <cit.>. Some basic simulation parameters are listed in Table <ref>. The power ratio for F-SIP is set as α=0.05 and the number of iterations as V=3 unless otherwise stated. Note that the setting of the hyperparameters of the model in the simulation is based on the trade-off between performance and complexity, which is more in line with practical application. The MCS is set as m=7 unless otherwise stated, where the modulation scheme is 2^M=16 quadrature amplitude modulation (QAM) and the target coderate is 490/1024. Except for open loop precoder cycling using the Type I codebook <cit.> in the high-speed scenario, singular value decomposition (SVD) precoding is used in the other scenarios. During the training phase, each training sample is obtained through a channel sampled from the CDL model with a random SNR of -20∼25 dB; training with random SNR provides SNR generalization for a single receiver. Therefore, there is no need to train a specific model for each specific SNR, which is convenient for actual deployment.
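Before turning to the results, the composite training objective formulated above can be made concrete with a small sketch. It assumes a PyTorch implementation (the framework is not named in the paper), treats the LLR outputs as logits for a BCE-with-logits loss, and stacks the real and imaginary parts of the complex channel for the MSE term; these choices, as well as the toy tensor shapes, are assumptions.

```python
# Sketch of the objective: (1/V) * sum_i [ tau * BCE(B, V_i) + (1 - tau) * MSE(H, H_hat_i) ].
# BCE-with-logits on LLRs and real/imag stacking of the complex channel are assumptions.
import torch
import torch.nn.functional as F

def composite_loss(llrs, bits, h_est, h_true, tau: float = 0.5):
    """llrs and h_est are lists with one entry per outer iteration (length V)."""
    total = 0.0
    for v, h in zip(llrs, h_est):
        bce = F.binary_cross_entropy_with_logits(v, bits)
        mse = F.mse_loss(torch.view_as_real(h), torch.view_as_real(h_true))
        total = total + tau * bce + (1 - tau) * mse
    return total / len(llrs)

# Toy shapes: S=48, T=14, L=2, M=4 bits per symbol, V=3 outer iterations.
V = 3
bits = torch.randint(0, 2, (48, 14, 2, 4)).float()
h_true = torch.randn(48, 14, 2, 4, dtype=torch.cfloat)
llrs = [torch.randn(48, 14, 2, 4) for _ in range(V)]
h_est = [torch.randn(48, 14, 2, 4, dtype=torch.cfloat) for _ in range(V)]
print(composite_loss(llrs, bits, h_est, h_true).item())
```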
§.§ Effectiveness and Outperformance §.§.§ Comparison in Low Speed Scenario The link-level BLER performance comparison in the low speed scenario of the CDL-C channel is presented in Fig. <ref>. It can be noticed that the proposed F-SIP with only α=0.05 achieves effective BLER performance, indicating that the neural receiver can process the F-SIP well. Specifically, the proposed framework outperforms Baseline II with trainable SIP. This reveals the advantages of the proposed aided-information and interference cancellation mechanisms under multi-layer transmission while greatly simplifying the system design. The performance at V=3 is better than that at V=1, indicating the improvement brought by the proposed interference cancellation and superimposed-symbol-aided channel estimation. Moreover, the F-SIP with α=0 does not work well, indicating that even with a small power ratio, e.g., α=0.05, a pilot is necessary for channel estimation in the neural receiver. The proposed F-SIP with α=0.05 is comparable with Baseline I using the traditional orthogonal pilot with N_p=1. This indicates that the non-orthogonal pilot brings almost no performance loss, and the proposed method can transmit more effective information bits under the same coding rate, resulting in better throughput performance, which will be detailed later. The throughput comparison in the low speed scenario of the CDL-C channel is provided in Fig. <ref>, where the throughput R is defined as R = N_slotN_REΩγ M(1-BLER), wherein N_RE = STL denotes the number of REs forming a slot, N_slot denotes the number of slots per second, Ω denotes the ratio of REs carrying data symbols, and γ and M are the target coderate and the number of bits per symbol according to the selected MCS, respectively. For the orthogonal pilot patterns in Baseline I, some REs are reserved for pilot transmission. Thus we have Ω = 11/12 for Baseline I at 3 km/h, while the other methods have Ω = 1. Obviously, the proposed method with α = 0.05 and V=3 achieves higher throughput compared with Baseline I. Moreover, we find that our proposed method with pre-set α = 0.05 and V=3 can achieve comparable throughput with Baseline II, which demonstrates that a one-sided model at the UE side alone is capable of ensuring the performance with a more flexible model management procedure compared with its two-sided counterpart. §.§.§ Comparison in High Speed Scenario The BLER and throughput performance comparisons in the high speed scenario with the CDL-D extension channel <cit.> are depicted in Fig. <ref> and <ref>, respectively, where N_p = 4 for Baseline I is necessarily configured to estimate channels with strong time-varying characteristics. Thus we have Ω = 10/12 for Baseline I, while the proposed methods have Ω = 1. Generally, the proposed method achieves higher throughput compared with Baseline I in the high speed scenario since the pilot overhead is avoided. Taking SNR = 25 dB as an example, gains of 19.98% and 24.45% can be obtained in the scenarios of 300 km/h and 900 km/h, respectively. Moreover, the proposed method has significant BLER advantages in the extremely high-speed scenario of 900 km/h, as pilots are distributed over all available time-domain and frequency-domain resources. The LMMSE in Baseline I can realize accurate channel estimation based on the covariance matrix from 10^5 channel samples at 300 km/h, while the channel estimation error resulting from interpolation in the time domain grows extremely large at 900 km/h.
By comparison, our proposed method is capable of performing accurate joint channel estimation and data detection by exploiting the statistical relationship between pilots and data symbols on all REs. §.§ Generalizability and Scalability §.§.§ Scalability for Different MCSs The scalability performance of our proposed scalable neural receiver on different MCSs in the CDL-C channel is presented in Fig. <ref>. Here the MCSs m={3, 7, 14} are selected, with corresponding modulation orders of {QPSK, 16QAM, 64QAM} and coderates of {449/1024, 490/1024, 719/1024}, respectively. Our proposed scalable neural receiver (Mixed) is trained on the mixed datasets with m={3, 7, 14}, with the model implemented for M_max=6, while the compared specific models (Specific) are implemented and trained on their own single MCS without using the proposed scalable mechanisms. It can be noticed that our proposed scalable neural receiver achieves comparable performance with the specific counterparts, which validates its excellent scalability across different MCSs. §.§.§ Scalability for Different Numbers of Layers The scalability performance for different numbers of layers in the CDL-C channel is studied in Fig. <ref>, where the number of layers is evaluated with L={2, 4}. The proposed scalable neural receiver (Mixed) is trained on the mixed datasets with L={2, 4}, while the compared specific models (Specific) are implemented and trained on their own single number of layers. Obviously, comparable performance is obtained, indicating that deployment-friendly scalability for different numbers of layers can be achieved. §.§.§ Generalization for Different Channel Environments To further explore the possibility of practical deployment, a generalization study over different channel environments is also provided in Fig. <ref>, where configuring more channel settings in the training dataset can further improve the generalization of the mixed channel model <cit.>. The proposed neural receiver (Mixed) is trained on the mixed datasets of CDL-A/C with D_s = 30/300 ns and tested on the corresponding target channel, while the compared specific models (Specific) are trained on the target channel model. It can be seen that the proposed receiver still achieves comparable BLER performance, which exhibits excellent generalization in practical deployment when facing different channels. §.§ Computational and Storage Complexity The computational and storage complexity is also evaluated. First, from the perspective of simulation, an evaluation of the running time of the model (M_max=6) on a single NVIDIA A100 SXM 80 GB GPU is provided. By processing 1.28 × 10^5 transport blocks (TBs), the average computation time for each inner iteration is about 0.5 milliseconds. Moreover, from the perspective of analysis, the complexity of the proposed receiver mainly lies in the channel estimation and data detection models, which far exceeds the complexity of interference reconstruction and cancellation. Therefore, we evaluate the floating point operations (FLOPs) and trainable parameters of one iteration. Firstly, the channel estimation model brings 2.9802× STL ×10^6 FLOPs with 2.9777×10^6 parameters, where the computational and storage complexity are not affected by M_max and L, respectively. The complexity of the data detection model is provided in Table <ref>. It can also be noticed that the computational complexity increases with M_max, T, S and L, where the influence of M_max is relatively negligible compared with the complexity of the model itself.
Moreover, M_max has only a slight impact on the storage complexity, and the number of transmission layers L has no impact on the storage complexity of the proposed layer-common structure. In contrast, the storage complexity of the layer-specific model is expanded L times since different layers use different structures and parameters. These results imply the feasibility of deploying the proposed scalable receiver. § STANDARDIZATION POTENTIAL AND PROSPECTS Starting from 3GPP Release 18, the study item on `Artificial Intelligence / Machine Learning for NR Air Interface' introduces DL-based solutions into the physical layer of the communication system. Some system design restrictions can be further relaxed using these DL-based approaches, which also makes it possible to explore new forms of reference signals in subsequent 6G research, such as learnable sequences and patterns as well as the introduction of non-orthogonality. According to 3GPP's work plan for DL-based solutions from 5G-Advanced to 6G, the performance gain, overhead reduction, scenario generalization, storage and computational complexity, life cycle management (LCM) <cit.> and potential standardization impact need to be studied. Therefore, the DL-based pilot solutions in existing research also need to address the corresponding challenges to meet the practical requirements and follow a standardized route for 6G, namely i) maintaining the throughput gain in more complex environments such as high-speed scenarios or multi-layer transmission, ii) achieving a lower or zero reference signal overhead, iii) keeping a low complexity suitable for terminal deployment, iv) generalizing to different scenarios or system configurations, and v) designing a simple framework without a cumbersome LCM procedure. The solution proposed in this article involves multiple novel mechanism designs to solve the above challenges from the perspectives of throughput, overhead, generalization, scalability, flexibility and complexity, making SIP compliant with standardization and practical deployment. In future work, it is meaningful to further study the effectiveness and complexity in more practical scenarios before SIP is standardized. These efforts will also bring more diversity and space for the redesign of various reference signals in future 6G intelligent systems. § CONCLUSION In this paper, an interference cancellation based neural receiver for SIP in multi-layer transmission is proposed, which involves multiple novel mechanism designs to address the challenges of multi-layer transmission and practical deployment. Specifically, considering the intra-layer and inter-layer interference of SIP under multi-layer transmission, interference cancellation with superimposed-symbol-aided channel estimation is utilized in the neural receiver, accompanied by the pre-designed pilot code-division orthogonal mechanism at the transmitter. Moreover, to deal with the complexity issue for inter-vendor collaboration and the generalization problem for practical deployments, respectively, a fixed SIP (F-SIP) design based on a constant pilot power ratio and scalable mechanisms for different modulation and coding schemes (MCSs) and transmission layers are also proposed. Simulation results demonstrate the superiority of the proposed scheme from the perspectives of BLER and throughput compared with existing counterparts.
http://arxiv.org/abs/2406.18884v1
20240627043326
Sequential three-way group decision-making for double hierarchy hesitant fuzzy linguistic term set
[ "Nanfang Luo", "Qinghua Zhang", "Qin Xie", "Yutai Wang", "Longjun Yin", "Guoyin Wang" ]
cs.AI
[ "cs.AI" ]
Nanfang Luo [a,b], Qinghua Zhang [a,b,c] (corresponding author, zhangqh@cqupt.edu.cn), Qin Xie [a,b], Yutai Wang [b,c], Longjun Yin [b,c], Guoyin Wang [b,c,d] [a] Chongqing Key Laboratory of Tourism Multisource Data Perception and Decision, Ministry of Culture and Tourism, Chongqing University of Posts and Telecommunications, Chongqing 400065, China [b] Chongqing Key Laboratory of Computational Intelligence, Chongqing University of Posts and Telecommunications, Chongqing 400065, China [c] Key Laboratory of Big Data Intelligent Computing, Chongqing University of Posts and Telecommunications, Chongqing 400065, China [d] College of Computer and Information Science, Chongqing Normal University, Chongqing, 401331, China § ABSTRACT Group decision-making (GDM) characterized by complexity and uncertainty is an essential part of various life scenarios. Most existing studies lack tools to fuse information quickly and to interpret decision results for partially formed decisions. This limitation is particularly noticeable when there is a need to improve the efficiency of GDM. To address this issue, a novel multi-level sequential three-way decision for group decision-making (S3W-GDM) method is constructed from the perspective of granular computing. This method simultaneously considers the vagueness, hesitation, and variation of GDM problems under the double hierarchy hesitant fuzzy linguistic term set (DHHFLTS) environment. First, to fuse information efficiently, a novel multi-level expert information fusion method is proposed, and the concepts of the expert decision table and the extraction/aggregation of decision-leveled information based on multi-level granularity are defined. Second, neighborhood theory, the outranking relation and regret theory (RT) are utilized to redesign the calculations of the conditional probability and the relative loss function. Then, the granular structure of DHHFLTS based on the sequential three-way decision (S3WD) is defined to improve the decision-making efficiency, and the decision-making strategy and interpretation of each decision-level are proposed. Furthermore, the algorithm of S3W-GDM is given. Finally, an illustrative example of diagnosis is presented, and comparative and sensitivity analyses against other methods are performed to verify the efficiency and rationality of the proposed method. Keywords: Granular computing; Sequential three-way group decision-making; Double hierarchy hesitant fuzzy linguistic term sets; Information fusion § INTRODUCTION Modern medical decision-making prioritizes rapid and accurate diagnosis to improve patient care. While a comprehensive examination can accomplish this goal, it is often time-consuming and can be emotionally stressful for the patient, especially during the diagnostic phase of the disease. An effective diagnostic strategy that quickly determines the need for further examination after the initial assessment would be of great advantage. This would not only shorten the diagnostic time but also reduce uncertainty and psychological stress for the patient. This challenge is particularly evident in complex diseases such as systemic lupus erythematosus (SLE). The diagnosis of SLE is characterized by a wide range of symptoms and requires multidisciplinary collaboration between experts <cit.>. This collaborative process, known as group decision-making (GDM), inherently involves uncertainty in the form of vagueness, hesitation, and variation.
A patient-centered approach through GDM can optimize the diagnostic process while prioritizing patient well-being, reflecting the dual needs of efficiency and humanistic care in modern medicine. §.§ A brief review of GDM GDM is a common process in many fields, especially when dealing with complex issues that require multiple experts. GDM can be challenging due to the inherent complexity of the decision-making process and the involvement of multiple experts with varying expertise and perspectives <cit.>. One of the main challenges lies in effectively combining and utilizing the information collected from different experts. This involves ensuring that all relevant information is considered, that experts' opinions are not distorted and the information is integrated in a way that leads to sound decision-making. Existing research is not limited to the construction of models that quantitatively evaluate information but qualitative representations that match human usage habits are becoming a hot topic for more model construction research. Rodriguez et al. <cit.> significantly advanced GDM by introducing a model that effectively utilizes hesitant fuzzy linguistic term sets (HFLTS) to handle comparative linguistic expressions, enhancing the precision and flexibility of decision information representation. Pang et al. <cit.> introduced probabilistic linguistic term sets (PLTS) for more accurately collecting and expressing decision information in multi-attribute GDM. The most straightforward way to evaluate the information is to use an aggregation operators <cit.> to fuse the information in order to obtain a comprehensive evaluation information. The comprehensive evaluation information is then processed by classical multi-attribute decision-making methods, including TOPSIS, VIKOR, and MULTIMOORA. Another challenge is dealing with the uncertainty that arises from the diverse backgrounds and experiences of the experts. Individual preferences, knowledge gaps, and varying interpretations of information can all contribute to uncertainty in the decision-making process. In studying the decision-making behaviours of decision-makers, the close connection between behavioral economics <cit.> and decision-making science has led to a number of noteworthy results <cit.>. The emergency decision-making usually involves multiple experts as well, which is also a challenge in GDM problems. The dynamic character of emergencies further adds to the complexity of GDM problems, including temporary hospital site selection <cit.>, emergency management <cit.>. Despite these efforts, GDM remains a complex and challenging area, and requires continuous research to develop more effective ways of dealing with uncertainty and making wiser decisions in a better way <cit.>. §.§ A brief review of DHHFLTS Double hierarchy hesitant fuzzy linguistic term set (DHHFLTS) is a composite information collection tool that plays a crucial role in collecting vague and hesitant decision-making information. The uniqueness of DHHFLTS is that it embeds a layer of linguistic scale based on the HFLTS. This layer of embedded linguistic term set serves as a subscale of the main linguistic scale, providing a richer foundation for qualitative expression. This dual structure allows DHHFLTS to effectively capture and represent the nuanced opinions of experts. In recent years, researchers have explored the application of DHHFLTS in various fields and extended its capability to handle more complex decision-making scenarios. For instance, Gou et al. 
<cit.> applied DHHFLTS to the MULTIMOORA method in their study, demonstrating its flexibility and effectiveness in complex decision environments. To quantify the relationships between different DHHFLTS, Gou et al. <cit.> proposed various distance measurement models, enhancing the accuracy and effectiveness of decision analysis. As a type of linguistic data, DHHFLTS has been gradually integrated into traditional multi-attributes decision-making methods. Liu et al. <cit.> combined DHHFLTS with traditional aggregation operators to propose a new multi-attributes decision-making model. Gou et al. <cit.> studied probabilistic DHHFLTS and integrated it with the VIKOR method. In addition, DHHFLTS has achieved some results through theoretically extended researches <cit.>. Some researches have increasingly validated the potential of DHHFLTS in dynamic decision-making scenarios. Liu et al. <cit.> improved the ELECTRE II method, extending the application scope of DHHFLTS in emergency logistics provider selection. Also, Gou et al. <cit.> proposed an improved ORESTE method utilizing linguistic preference orderings to evaluate medical resource allocation in public health emergencies, which demonstrates that DHHFLTS has a better handling capability for dynamic problems in healthcare management. Currently, DHHFLTS has also produced noteworthy works. Cheng et al. <cit.> introduced a large-scale GDM model that considers risk attitudes and dynamic role changes, showcasing the high adaptability of DHHFLTS in a practical emergency problem. Furthermore, a large-scale GDM opinions updating model driven by autonomous learning was developed <cit.>, further expanding the application scope of DHHFLTS in GDM. DHHFLTS increases the flexibility of expert linguistic expression and provides support for information representation in more complex GDM. §.§ A brief review of S3WD Sequential three-way decision (S3WD) is a typical model of three-way decision (3WD) for handling dynamic problems. The recent comprehensive work <cit.> offers a new interpretation of the 3WD framework, TAO (Triading-Acting-Optimizing). TAO provides a theory for dynamic decisions that aligns with traditional Chinese culture. It has delved into four types of triadic structures of three worlds and explained the Dao, the way of three-world thinking by using examples. The concept of TAO, similar to the "Wuji" state in traditional Chinese culture, breaks the binary or bipolar thinking by introducing an intermediate state for handling uncertainty. Fig. <ref> exemplifies this using the "Yin Yang" symbol. The outer circle, representing "Wuji", gives rise to the opposing poles of "Yin" and "Yang" within "Tai Chi". In "Tai Chi", the opposites of "Yin" and "Yang" are unified and constantly transform, producing "Change"[Xinbo Gao. "Understanding Chongqing Nanshan and Promoting the Spirit in CQUPT." January 1, 2024. <https://mp.weixin.qq.com/s/dD1U1xhkuB6uZuNSLRY_Dw> ]. This dynamic transform interplay drives the world, just as the interaction between heaven, earth, and man influences decision-making. Yao <cit.> has studied S3WD from the perspective of granular computing in 2013 and analysed that the decision cost of this dynamic 3WD is smaller than that of two-way decision (2WD). 2WD is mostly found in established GDM methods. Regarding decision objects, the decision results are either accepted or rejected, with a risk of decision errors. Integrating the concept of 3WD into GDM can effectively reduce the risk of decision errors. 
Recently, very noteworthy results have been achieved in GDM methods for 3WD under linguistic term environments. Liu et al. <cit.> proposed a 3WGDM method, which combines comparative linguistic expressions and personalized numerical scales to improve the accuracy of decision-making and effectively solve the problem of uncertainty in GDM. Yang et al. <cit.> integrated basic uncertain linguistic information and decision-theoretic rough sets into a dynamic 3WD framework based on the temporal dimension, enhancing the flexibility and accuracy of the multi-attribute decision-making process and making a valuable contribution. The study of S3WD in multi-level granularity decision information processing is more suitable for improving efficiency and reducing decision costs. This multi-level processing follows the dynamic expansion of information from coarse-grained to fine-grained. When the available information is insufficient to support a decision (accept or reject) at the current decision-level, additional information is added to support the next decision-level, reducing the decision cost for some objects that can be accepted or rejected. The approximate computing capability of S3WD makes it an important research topic in the field of granular computing. Yang et al. <cit.> considered the rules of hierarchical granulation in both horizontal and vertical directions, and proposed a general multi-level neighborhood sequential decision approach that improves its applicability to cognitive science applications. It has also been correspondingly extended to different information environments <cit.>. Similarly, some work has used S3WD to solve GDM problems characterized by uncertainty. Wang et al. <cit.> integrated social influence dynamics into a S3WD framework, and proposed a STWMAGDM approach, addressing high decision risk and uncertainty. Wang et al. <cit.> built a cost-sensitive multigranulation S3WD model and obtained the expert GDM results by using MULTIMOORA to rank the classifications. Further, the implementation of S3WD with multi-level granularity decision information processing will provide a better problem-solving model for GDM. §.§ Motivations and contributions of this paper It is difficult for a single decision-maker to make a reasonable decision. Similar to SLE multidisciplinary collaboration, GDM can provide effective solutions to decision problems in complex scenarios. These scenarios often involve multiple experts evaluating multiple alternatives based on various attributes. However, with the development of information science, the amount of decision information has greatly increased and information dynamics change rapidly. GDM problems become more and more uncertain, and higher requirements are imposed on decision models. Efficiency in GDM is crucial. The granularity of decision information is becoming increasingly refined, but using all available information indiscriminately would increase decision costs. For example, doctors diagnosing abdominal pain do not require patients to undergo all related examinations. If there are no concerning symptoms like bloody stool, vomiting, weight loss, or fever, then observation might be the most appropriate initial course of action. If symptoms improve, there is no need for further intervention. However, if the pain persists, an endoscopy might be necessary. This strategy avoids unnecessary procedures, as endoscopies can have side effects. Reliability in GDM is also crucial.
While expert preferences are intuitive, they often exhibit vagueness and hesitation. Experts find it challenging to use precise numerical evaluations, which makes it difficult to collect decision information intuitively. Qualitative expressions, being closer to human habits, have become a hot research topic. However, existing single linguistic terms are often inflexible and do not easily meet human expression habits in applications. Additionally, individuals tend to anchor their psychological preferences, which should be considered in qualitative expressions. 3WD provides a buffer zone by allowing an indeterminate state for alternatives. However, does the indeterminate state remain unchanged? With the addition of new decision information and finer granularity, some alternatives in the indeterminate state can be further decided upon, while those already decided upon do not increase decision costs. This process is S3WD. Usually, the static character is relative, whereas constant variation is the norm. A dynamic approach to decision-making should be the strategic thinking provided by 3WD. Therefore, improving efficiency and reducing decision costs are important issues. This research is motivated by three key observations. (1) Existing studies <cit.> primarily focus on 2WD, where decision results are typically either accepted or rejected, lacking a buffer zone between them. In GDM scenarios, the number of experts, attributes, and costs associated with each decision increase the overall decision burden. Few works consider adjusting the process of GDM information fusion to reduce decision costs. (2) Few works focus on both S3WD and DHHFLTS. DHHFLTS, as a composite linguistic term set, has been explored in various fields. It is essential to promote research on GDM in qualitative dynamic problem-solving, enhancing the flexibility and rationality of DHHFLTS applications in S3WD. (3) There is still a need to simultaneously address the characteristics of vagueness, hesitation, and variation in GDM problems. Existing S3W-GDM methods based on granular computing <cit.> may not be applicable to the composite linguistic term model, and also lack integration with prospect theory (PT) <cit.> or RT <cit.>, etc., to further explore uncertainty. With these motivations, the main contributions of this research are summarized as follows: (1) A conditional probability calculation method for scenarios involving DHHFLTS based on neighborhood theory and the outranking relation is proposed to achieve greater generality. Additionally, the concept of relative perceived utility is proposed to replace relative loss to better reflect real-world psychological preferences. (2) Decision-level extraction and aggregation methods based on multi-level granularity are introduced. This provides a more flexible and relevant approach for handling complex GDM problems, allowing for a simplified analysis while preserving essential information. (3) A novel S3W-GDM method is established from the granular computing perspective, which overcomes the shortcomings of existing GDM research. More importantly, compared with existing 2WD methods, the proposed method is more effective and closer to reality. The structure of this paper is as follows. Section <ref> provides the background of DHHFLTS, the S3WD model and RT. In Section <ref>, a S3WD model is developed based on neighborhood theory and RT. In Section <ref>, the S3W-GDM method for DHHFLTS based on multi-level granularity is constructed and the related algorithm is presented.
Section <ref> provides an illustrative example to show the applicability of the multi-level S3W-GDM method. In Section <ref>, comparative and sensitivity analyses verify the validity and rationality of the model. Section <ref> summarizes the conclusions and future work. In addition, an overall diagram of the paper is given in Fig. <ref>. § PRELIMINARIES This section provides a brief review of concepts related to DHHFLTS, S3WD, and RT. §.§ DHHFLTS Gou et al. <cit.> proposed the DHHFLTS, which describes complex linguistic information more accurately in the form of "adverb + adjective". <cit.> Let U be a finite universal set. A double hierarchy hesitant fuzzy linguistic term set (DHHFLTS), denoted as H on U, takes the following mathematical form: H = {⟨x, h_S_O(x)⟩ | x ∈ U}, where h_S_O(x) = {s_ϕ_l⟨o_φ_l⟩(x) | s_ϕ_l⟨o_φ_l⟩∈S_O; l = 1,2,...,L; ϕ_l ∈ [-τ, τ]; φ_l ∈ [-ς, ς]}, called a double hierarchy hesitant fuzzy linguistic element (DHHFLE), is a set of values denoting the possible degrees of x with respect to the double hierarchy linguistic term set (DHLTS) S_O, s_ϕ_l⟨o_φ_l⟩(x) are the continuous terms in S_O, and L is the number of DHLTs in h_S_O(x). To better understand DHHFLTS, two examples for illustration are given. Suppose S = {s_-3 = none, s_-2 = very low, s_-1 = low, s_0 = medium, s_1 = high, s_2 = very high, s_3 = perfect} is the first hierarchy linguistic term set, and O = {o_-2 = far from, o_-1 = a little, o_0 = just right, o_1 = much, o_2 = very much} is the second hierarchy linguistic term set, which is the linguistic subscale embedded in the main linguistic scale. Some complicated linguistic expressions, such as "a little high" and "between much medium and just right very high", can then be represented by the DHHFLEs {s_1⟨o_-1⟩} and {s_0⟨o_1⟩, s_1, s_2⟨o_0⟩}, respectively. In traditional Chinese medicine (TCM), diagnosis often relies on sensory methods like inspection, smelling, inquiry, and palpation. Verbal descriptions are a cornerstone of this evaluation process. According to Fig. <ref>, a TCM practitioner assessed the liver fire condition of three patients and subsequently developed a linguistic scale to categorize their conditions. These scales are the first hierarchy linguistic term set S = {s_-1 = weak, s_0 = normal, s_1 = strong} and the second hierarchy linguistic term set O = {o_-2 = slightly, o_-1 = a little, o_0 = just right, o_1 = very, o_2 = very much}. The diagnosis results in three patient evaluations, denoted as h_s_o1 = {s_1⟨o_-1⟩, s_1⟨o_-2⟩}, h_s_o2 = {s_0⟨o_0⟩} and h_s_o3 = {s_1⟨o_-2⟩, s_0⟨o_0⟩}. <cit.> Let S_O = {s_ϕ_l⟨o_φ_l⟩ | ϕ_l ∈ [-τ, τ], φ_l ∈ [-ς, ς]} be a DHLTS, h_S_O = {s_ϕ_l⟨o_φ_l⟩ | s_ϕ_l⟨o_φ_l⟩∈S_O; l = 1,2,...,L} be a DHHFLE, and h_γ = {γ_l | γ_l ∈ [0,1]; l = 1,2,...,L} be a hesitant fuzzy element (HFE). There are two transformation functions f and f^-1 as follows: f: [-τ, τ] × [-ς, ς] → [0,1], f(ϕ_l, φ_l) = (φ_l + (τ + ϕ_l)ς) / (2ςτ), and f^-1: [0,1] → [-τ, τ] × [-ς, ς], f^-1(γ_l) = s_[2τγ_l - τ]⟨o_ς(2τγ_l - τ - [2τγ_l - τ])⟩ = s_[2τγ_l - τ] + 1⟨o_ς((2τγ_l - τ - [2τγ_l - τ]) - 1)⟩. The transformation functions F and F^-1 are established as follows: F: Φ×Ψ→Θ, F(h_S_O) = {γ_l | γ_l = f(ϕ_l, φ_l)} = h_γ, and F^-1: Θ→Φ×Ψ, F^-1(h_γ) = {s_ϕ_l⟨o_φ_l⟩ | ϕ_l⟨φ_l⟩ = f^-1(γ_l)} = h_S_O. <cit.> Let S_O be a continuous DHLTS, h_S_O, h_S_O_1, and h_S_O_2 be any three DHHFLEs, and μ be a constant.
Then some operational laws between DHHFLEs are defined as follows: Addition: h_S_O_1⊕h_S_O_2 = F^-1(∪_γ_1∈ F(h_S_O_1), γ_2∈ F(h_S_O_2){γ_1 + γ_2 - γ_1γ_2}); Multiplication: μh_S_O = F^-1(∪_γ∈ F(h_S_O){1 - (1 - γ)^μ}); Power: (h_S_O)^μ = F^-1(∪_γ∈ F(h_S_O){γ^μ}); Complementary: (h_S_O)^C = F^-1(∪_γ∈ F(h_S_O){1 - γ}). According to the definition of DHHFLE, there are situations where different DHHFLEs have different numbers of DHLTs. In order to have a reasonable normalization process and to keep the integrity of the linguistic information during computation, Gou et al. <cit.> developed a linguistic expected-value for DHHFLEs. <cit.> Let S_O = {s_ϕ_l⟨o_φ_l⟩ | ϕ_l∈[-τ, τ], φ_l∈[-ς, ς]} be a continuous DHLTS, h_S_O = {s_ϕ_l⟨o_φ_l⟩ | s_ϕ_l⟨o_φ_l⟩∈S_O; l = 1,2,...,L} be a DHHFLE, and Φ×Ψ be the set of all DHHFLEs over S_O. Then a linguistic expected-value of h_S_O is obtained as follows: le: Φ×Ψ→S_O, le(h_S_O) = 1/L ⊕_l=1^L s_ϕ_l⟨o_φ_l⟩ = s_le(ϕ_l)⟨o_le(φ_l)⟩, where le(ϕ_l) = 1/L∑_l=1^L ϕ_l and le(φ_l) = 1/L∑_l=1^L φ_l. From Example <ref>, patient 1's liver fire condition is h_s_o1 = {s_1⟨o_-1⟩, s_1⟨o_-2⟩}. The number of DHLTs is 2, and the DHHFLE can be normalized by Eq. (<ref>) as le(h_s_o1) = {s_1⟨o_-1.5⟩}. The TCM practitioner, through questioning, records the liver fire condition of patient 1 as "a little strong and slightly strong"; through the le function, the hesitation between "a little strong" and "slightly strong" is neutralized, thus achieving the purpose of normalization. §.§ Multi-level structure of S3WD The three-way thinking of granular computing proposed by Yao <cit.> states that a granular structure comprises three key elements: granules, layers, and hierarchical structures. Granules are the fundamental units of information, while layers are composed of granules with the same granularity level. These layers, when arranged according to their granularity levels, form a multi-level structure. This concept is particularly relevant to dynamic decision-making problems, where a multi-level structure with increasing granularity from coarse to fine can be achieved by progressively increasing conditional attributes. Given a quadruple decision table (DT)_i = (U_i, Z_i, V_i, g_i) (i = 1,2,…,k) at the ith decision-level, U_i is a finite and nonempty universal set of alternatives, Z_i is a finite and nonempty subset of conditional attributes, A is the set of all conditional attributes satisfying Z_1⊆Z_2⊆…⊆Z_i⊆…⊆Z_k⊆ A, and V_i denotes the domain of the conditional attributes. g_i: U_i×Z_i→V_i denotes an information function. ∀Z_i⊆ A, the equivalence relation R_Z_i induced by Z_i on the universe U_i is defined by: R_Z_i = {(x,y) ∈U_i×U_i | g_i(x,a) = g_i(y,a), ∀ a ∈Z_i}. With the equivalence relations induced by different subsets of conditional attributes Z_i, the universal set U_i can be partitioned. Then the multi-level granular structure can be obtained, denoted as Gl = {Gl_1, Gl_2, …, Gl_i, …, Gl_k}. <cit.> Given a multi-level granular structure Gl = {Gl_1, Gl_2, …, Gl_i, …, Gl_k}, define each granular structure Gl_i = (U_i, Z_i, pr(π_i | x), Λ_i). In the ith decision-level's granular structure Gl_i, x ∈U_i is an alternative, Z_i denotes the subset of conditional attributes, pr(π_i | x) denotes the conditional probability of the alternative x, and Λ_i denotes the loss functions.
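Before turning to the decision model, a minimal sketch may help illustrate the equivalence classes U_i/R_{Z_i} induced by a subset of conditional attributes and how a nested chain Z_1 ⊆ Z_2 refines the partition, i.e., moves to a finer granularity level. The toy decision table below is purely illustrative.

```python
from collections import defaultdict

def partition(table, attrs):
    # Group alternatives that agree on every attribute in attrs (the equivalence relation R_Z).
    blocks = defaultdict(list)
    for x, row in table.items():
        blocks[tuple(row[a] for a in attrs)].append(x)
    return list(blocks.values())

table = {
    "x1": {"a1": "high", "a2": "yes"},
    "x2": {"a1": "high", "a2": "no"},
    "x3": {"a1": "low",  "a2": "no"},
    "x4": {"a1": "high", "a2": "yes"},
}
print(partition(table, ["a1"]))        # coarse level Z_1: [['x1', 'x2', 'x4'], ['x3']]
print(partition(table, ["a1", "a2"]))  # finer level Z_2:  [['x1', 'x4'], ['x2'], ['x3']]
```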
Taking advantage of the S3WD concept, the Bayesian decision process is modeled using a state set Γ_i = {π_i, ¬π_i} and a set 𝔄 = {∂_P, ∂_B, ∂_N} of three actions. The states π_i and ¬π_i are complementary, while the actions ∂_P, ∂_B, and ∂_N represent the decision choices of classifying an alternative into the acceptance region, the uncertainty region and the rejection region at the ith decision-level, as shown in Table <ref>. The unit loss function describes the result of x in the granular structure Gl_i under a conditional attribute a (a ∈Z_i), and is composed of the set Λ_i = {λ_i^PP, λ_i^BP, λ_i^NP, λ_i^PN, λ_i^BN, λ_i^NN}. λ_i^PP, λ_i^BP, and λ_i^NP are the losses incurred for taking actions ∂_P, ∂_B, and ∂_N, respectively, when an alternative belongs to π_i. Similarly, λ_i^PN, λ_i^BN, and λ_i^NN express the losses incurred for taking the same actions when the alternative belongs to ¬π_i. The loss functions satisfy the following relationships: λ_i^PP≤λ_i^BP≤λ_i^NP and λ_i^NN≤λ_i^BN≤λ_i^PN. The expected losses of taking actions ∂_P, ∂_B, and ∂_N at the ith decision-level are calculated as follows: EL_i(∂_P | x) = λ_i^PP pr(π_i | x) + λ_i^PN pr(¬π_i | x), EL_i(∂_B | x) = λ_i^BP pr(π_i | x) + λ_i^BN pr(¬π_i | x), and EL_i(∂_N | x) = λ_i^NP pr(π_i | x) + λ_i^NN pr(¬π_i | x). In the ith decision-level's granular structure Gl_i, the universe U_i can be divided into three regions with the Bayesian minimum expected loss rule. These rules are derived as follows: (P) If EL_i(∂_P | x) ≤ EL_i(∂_B | x) and EL_i(∂_P | x) ≤ EL_i(∂_N | x), then x ∈ POS_i(π_i); (B) If EL_i(∂_B | x) ≤ EL_i(∂_P | x) and EL_i(∂_B | x) ≤ EL_i(∂_N | x), then x ∈ BND_i(π_i); (N) If EL_i(∂_N | x) ≤ EL_i(∂_P | x) and EL_i(∂_N | x) ≤ EL_i(∂_B | x), then x ∈ NEG_i(π_i), where POS_i(π_i) ∪ BND_i(π_i) ∪ NEG_i(π_i) = U_i. §.§ Regret theory Loomes et al. <cit.> and Bell <cit.> both introduced regret theory in 1982. This theory suggests that individuals seek to maximize their utility while minimizing potential regret from missed opportunities. People generally seek to avoid choices that might lead to higher levels of regret. For example, research has shown that when making decisions, people may anticipate the possibility of feeling regret once uncertainty is resolved, and therefore factor into their decision-making the desire to eliminate or reduce this potential regret. RT models capture decision-makers' choice behavior under uncertainty by considering the impact of anticipated regret. In general, RT models include a regret term in the utility function. This regret is negatively correlated with the realized result and positively correlated with the best alternative result after the uncertainty is resolved. The original model of regret theory is based on comparing the results of two alternative options. Before making a decision, the decision-maker evaluates the results of the chosen alternative against those of the rejected ones. If the chosen result is better, they feel happy; if not, they feel regret. The perceived utility value for the decision-maker consists of two parts: the utility value of the current result and the regret or rejoice derived from comparing it to another alternative. Recognizing that real decision-making often involves multiple alternatives, Quiggin <cit.> extended regret theory to the multi-alternative setting. Let x_1, x_2, …, x_n be n alternatives, where x_p represents the pth alternative.
Let χ_1, χ_2, …, χ_n be the results of the n alternatives, where χ_p represents the pth result. Then the perceived utility of x_p is: V(χ_p) = 𝔲(χ_p) + 𝔳(𝔲(χ_p) - 𝔲(χ^+)), where χ^+ = max{χ_p | p = 1,2, …, n}, which means the regret value 𝔳(𝔲(χ_p) - 𝔲(χ^+)) ≤ 0. In Eq. (<ref>), 𝔲(·) is a utility function and 𝔳(·) is the regret-rejoice function. To simulate the utility of the evaluation information, the power function <cit.> 𝔲(χ_p) = (χ_p)^θ is usually used, where θ is the decision-maker's risk aversion coefficient, satisfying 0 < θ < 1. The regret-rejoice function has special properties. 𝔳(0) = 0 implies that neither regret nor rejoice is felt when two alternatives' results are equal. 𝔳'(·) > 0 indicates that 𝔳(·) is strictly increasing. Regret aversion produces a unique prediction of RT, implying that 𝔳(·) is concave and 𝔳''(·) < 0. In view of these characteristics, the regret-rejoice function is usually represented as: 𝔳(𝔲(χ_p) - 𝔲(χ^+)) = 1 - e^-δ(𝔲(χ_p) - 𝔲(χ^+)), where δ≥ 0 denotes the decision-maker's regret aversion coefficient. § THE S3WD OF DHHFLTS MODEL In real life, linguistic evaluative information such as "slightly poor", "average", "good", "excellent", etc. tends to be more common, and DHHFLTS excels at collecting this type of linguistic expression. However, traditional methods are inefficient in dealing with GDM problems under the double hierarchy hesitant fuzzy linguistic environment. The primary challenge in integrating the S3WD model with DHHFLTS is to efficiently capture uncertain decision-making information and to capture the psychological factors of boundedly rational decision-makers. To address this issue, this section proposes a S3WD of DHHFLTS model. This model seamlessly integrates DHHFLTS into the S3WD framework, enabling it to effectively process and utilize linguistic information. §.§ Conditional probability based on neighborhood theory Existing S3WD models cannot be directly applied under the DHHFLTS environment due to the complex nature of this linguistic term set, which significantly impacts the selection of models and methods used in GDM. On one hand, expert evaluations, which are based on experience and knowledge, are expressed using DHHFLTSs, but there is a lack of relevant methods to integrate DHHFLTS for calculating the attribute values of alternatives. On the other hand, decision tables of alternatives constructed from expert evaluations in GDM typically lack pre-defined labels. In contrast, traditional methods <cit.> depend on pre-defined decision attributes to estimate conditional probabilities, which becomes impractical and inefficient when dealing with numerous alternatives. Additionally, conditional probabilities based on the decision-maker's prior knowledge can lead to significant errors. To integrate S3WD models into GDM, it is crucial to establish a set of universal and feasible methods for defining equivalence relationships between alternatives. Yang et al. <cit.> proposed a method for dividing general equivalence classes using a neighborhood covering approach, which extends the concept from binary equivalence relations to more general binary similarity relationships. The neighborhood relation serves as a more general form of the equivalence relation. Their research also demonstrated that a universe-based neighborhood covering approach can enhance the model's classification performance. To begin with, a decision table of DHHFLTS should be defined.
Then, the neighborhood binary relation among different alternatives is constructed. Using this relationship, the neighborhood relation is introduced and equivalence classes are formed. Finally, a method for calculating the conditional probability is presented. It is assumed that the S3WD models in this section are all constructed with the conditional attribute subset Z at the ith decision-level. Given the set A of all conditional attributes and a quadruple double hierarchy hesitant fuzzy linguistic decision table DHHFLDT = (U, Z, V, g), A = {a_1, a_2, …, a_m} is a finite and nonempty set, Z ⊆ A, and U = {x_1, x_2, …, x_n} is a finite and nonempty universal set of alternatives. g: U × Z → V is a complete information function and V denotes the domain of these attributes. For x_p∈ U (p = 1,2, …, n) and a_q∈ Z (q = 1,2, …, m), g(x_p, a_q) ∈ V is defined as a DHHFLE denoted by h_S_Opq. Equivalence classes play a crucial role in deriving conditional probabilities. Traditionally, equivalence classes are formed by partitioning the universe based on an equivalence relation. Neighborhood relations extend the concept of binary equivalence relations to a more general computational framework. Research on similarity relations in DHHFLTS is essential for constructing neighborhood relations. Thus, the binary similarity degree and neighborhood granules based on the κ-cut are introduced for DHHFLTS. For any alternatives x_p, x_y∈ U (p,y = 1,2, …, n) in a DHHFLDT, the universe U can be partitioned by the κ-cut neighborhood binary relation ℵ_Z = {(x_p, x_y) ∈ U × U | T_Z(x_p, x_y) ≥κ}, and the neighborhood granule κ_ℵ_Z(x_p) of x_p in Z is defined as: κ_ℵ_Z(x_p) = {x_y | x_y∈ U, T_Z(x_p, x_y) ≥κ}, where 0 ≤κ≤ 1, Z ⊆ A is the subset of conditional attributes, and T_Z is the similarity degree under the conditional attribute subset Z. Let the universe U be granulated by the neighborhood binary relation ℵ_Z. For any alternatives x_p, x_y∈ U, the similarity degree T_Z under the conditional attribute subset Z has the following properties: (1) 0 ≤T_Z(x_p, x_y) ≤ 1; (2) T_Z(x_p, x_y) = 1 if and only if g(x_p, a_q) = g(x_y, a_q) for all a_q∈ Z; (3) T_Z(x_p, x_y) = T_Z(x_y, x_p). When evaluating decision alternatives using DHHFLTS in GDM scenarios, further investigation of similarity measures is warranted. Similarity and distance measures are often interrelated. A smaller distance between two alternatives typically indicates higher similarity, and vice versa. However, the 1-d similarity measure may not accurately capture specific similarity concepts and may not be suitable for all scenarios. To address this limitation, Gou et al. <cit.> have developed a set of real functions that can be adapted to different scenarios. Most distance-based methods are built on the assumption that the distance metric itself is sufficient to represent the differences between alternatives. If the distance between any two alternatives is equal, then their similarity is also equal. Although these real functions g(∙) are strictly monotonically decreasing, there are inevitably cases where the practical meanings and arithmetic values of two DHHFLEs are not the same <cit.>. It is important to note that relying solely on distance values to determine the similarity may not provide an accurate representation. The following example will illustrate this in more detail.
For the alternatives x_1, x_2∈ U denoted by h_S_O(x_1) and h_S_O(x_2), the similarity measure <cit.> between them could be calculated by the following equation: T(x_1, x_2) = (g(d(x_1, x_2)) - g(1)) / (g(0) - g(1)), where g(·) is a strictly monotonically decreasing real function and the Euclidean distance measure is defined as: d(x_1, x_2) = (1/L∑_l^L (|F'(h_S_O(x_1)) - F'(h_S_O(x_2))|)^2)^1/2. Suppose the real function is g(υ) = 1 - υ and a DHHFLE h_S_O has only one DHLT s_ϕ_l⟨o_φ_l⟩, namely h_S_O(x) = {s_ϕ_l⟨o_φ_l⟩}; then the transformation function F reduces to F', with F'(h_S_O(x)) = F'(s_ϕ_l⟨o_φ_l⟩) = f(ϕ_l, φ_l). Assuming h_S_O(x_1) = {s_1⟨o_-3⟩, s_2⟨o_-3⟩} and h_S_O(x_2) = {s_-1⟨o_3⟩, s_0⟨o_3⟩}, we can get d(x_1, x_2) = (((0.5 - 0.5)^2 + (0.67 - 0.67)^2)/2)^1/2 = 0 and T(x_1, x_2) = 1. Similarly, we can get T(x_3, x_4) = 1 with h_S_O3 = {s_0⟨o_3⟩, s_1⟨o_0⟩, s_2⟨o_-3⟩} and h_S_O4 = {s_1⟨o_0⟩, s_2⟨o_-3⟩} using Eqs. (<ref>) and (<ref>). It can be seen from these two calculation results that the distance result has a great influence on the similarity result. However, h_S_O(x_1) and h_S_O(x_2) have different evaluation forms, indicating different preferences. The same goes for h_S_O(x_3) and h_S_O(x_4). As highlighted earlier, a similarity measurement will be explored to address this issue. On one hand, considering the different situations that arise from varying practical meanings and arithmetic values, as well as the anchoring effect observed in human thinking, the superior gradus <cit.> is designed to address such scenarios. On the other hand, the introduction of the Gaussian kernel function enables the nonlinear transformation of raw distance measurements into a more interpretable similarity space. Unlike linear transformations, the Gaussian kernel does not simply convert distance to similarity along a straight line. Instead, it uses an exponential function to map the distance between alternatives onto a similarity value between 0 and 1. This approach proves particularly valuable in situations where two alternatives are highly similar but not identical. In conclusion, the superior-gradus-based Gaussian kernel function has an advantage over distance-based similarity methods in dealing with complex similarity relationships and nonlinear data distributions in GDM scenarios evaluated using DHHFLTS. For any alternatives x_p, x_y∈ U (p,y = 1,2, …, n) in a DHHFLDT, the similarity-degree-based Gaussian kernel function <cit.> of the alternatives x_p and x_y on the conditional attribute subset Z ⊆ A can be expressed as: T_Z(x_p, x_y) = K_Z(x_p, x_y) = exp(-‖x_p - x_y‖_Z^2 / (2σ^2)), where σ is the width parameter of the Gaussian kernel function. To better capture the similarity between alternatives, a combination of the superior gradus <cit.> and the Euclidean norm ‖x_p - x_y‖_Z^2 is used. As a result, Eq. (<ref>) is modified. Given any two alternatives x_p, x_y∈ U in a DHHFLDT, whose evaluations under each attribute a_q∈ Z are the DHHFLEs h_S_Opq and h_S_Oyq, respectively, the superior-gradus-based Gaussian kernel function of the alternatives x_p and x_y on the conditional attribute subset Z (Z ⊆ A) can be expressed as: K_Z(x_p, x_y) = exp(-∑_q=1^|Z|(SG(h_S_Opq) - SG(h_S_Oyq))^2 / (2σ^2)), where SG(h_S_O(x)) = 1/L∑_l=1^L sg(s_ϕ_l⟨o_φ_l⟩), sg(s_ϕ_l⟨o_φ_l⟩) = (e^α+β - 1)/(e - 1), α = ϕ_l/(2τ) + 1/2, and β = φ_l/(2ςτ). Obviously, Eq. (<ref>) also satisfies Property <ref>. Given a DHHFLDT, for σ_1≤σ_2 and any x_p, x_y∈ U, we have K_Z^σ_1(x_p, x_y) ≤K_Z^σ_2(x_p, x_y).
Based on the above, Theorem <ref> is not difficult to prove. Previous research <cit.> has shown that as decision information increases from coarse to fine, adjusting the parameter σ according to a certain rule can improve the accuracy of the decision results. When there is limited decision-making information, only a small amount of information is used for comparative analysis, leading to high uncertainty in the results. As more decision-making information becomes available, the basis for comparison becomes more accurate, reducing the uncertainty of the decision results. The flexibility of the similarity measure can be controlled by adjusting σ: a larger σ results in lower differentiation between alternatives, while a smaller σ leads to higher differentiation, thus providing more certain decision information. The S3WD framework relies on classifying alternatives based on their similarity. A crucial step in this process involves constructing a specific type of binary relation. A common representation is to use the κ-cut neighborhood binary relation ℵ_Z to capture the similarity relationships between alternatives within the context of the decision problem <cit.>. Given a κ-cut neighborhood binary relation ℵ_Z (κ∈[0,1]), the neighborhood relation matrix R_ℵ_Z = (r_py)_n × n for any x_p, x_y∈ U can be defined by: r_py = 1 if (x_p, x_y) ∈ℵ_Z, and r_py = 0 otherwise, where the κ-cut neighborhood binary relation ℵ_Z satisfies reflexivity and symmetry. Then, a family of neighborhood granules κ_ℵ_Z(x_p) for different alternatives with varying κ thresholds can be obtained. By dynamically adjusting the conditional attributes according to specific rules, sequences with varying neighborhood granularity are created. These sequences are characterized by the subsets of conditional attributes. In traditional S3WD models, the calculation of conditional probabilities relies on known prior probabilities of alternatives, typically derived from decision attributes. However, integrating DHHFLTS into the S3WD model presents a significant challenge: the lack of prior knowledge about alternatives makes it impossible to compute these prior probabilities directly. Although studies such as <cit.> propose a modified S3WD approach to address GDM problems across multi-level granularity, their reliance on decision attributes in the information system limits their applicability. Consequently, a novel approach for calculating conditional probabilities based on the outranking relation is introduced <cit.>. This method assesses the relative advantage of each alternative compared to others, aiming to evaluate the likelihood of different alternatives achieving specific results through comparative advantage relationships. It gives a membership function π_Z(x_p) = ⊕_q=1^|Z| w_q g(x_p, a_q). Two complementary state concepts are defined based on this function: the concept of "good state" π_Z is the fuzzy concept formed by the memberships of all alternatives, denoted as π_Z = π_Z(x_1)/x_1 + π_Z(x_2)/x_2 + … + π_Z(x_n)/x_n, and the other concept, the "bad state" ¬π_Z, is the complement of the concept π_Z. For any alternative x_p, the conditional probability that x_p belongs to the "good state" π_Z given its neighborhood granule κ_ℵ_Z(x_p) is derived as follows: pr(π_Z | κ_ℵ_Z(x_p)) = ∑_x_y∈κ_ℵ_Z(x_p)π_Z(x_y) / |κ_ℵ_Z(x_p)|, where |κ_ℵ_Z(x_p)| represents the cardinality of the neighborhood granule κ_ℵ_Z(x_p). An example is used to illustrate the conditional probability based on the neighborhood granule κ_ℵ_Z(x_p).
Suppose κ_ℵ_Z(x_1) = {x_1, x_3, x_5}, the weights of the conditional attribute subset Z = {a_1, a_2, a_3, a_4} are denoted by the vector w = {w_1, w_2, w_3, w_4}, and g(x_p, a_q) is a DHHFLE h_S_Opq (p = 1,3,5; q = 1,2,3,4). Then, π_Z(x_1) = w_1h_S_O_11⊕w_2h_S_O_12⊕w_3h_S_O_13⊕w_4h_S_O_14, π_Z(x_3) = w_1h_S_O_31⊕w_2h_S_O_32⊕w_3h_S_O_33⊕w_4h_S_O_34, and π_Z(x_5) = w_1h_S_O_51⊕w_2h_S_O_52⊕w_3h_S_O_53⊕w_4h_S_O_54. According to Definition <ref>, we can get: pr(π_Z | κ_ℵ_Z(x_1)) = (π_Z(x_1) + π_Z(x_3) + π_Z(x_5))/3 (a compact programmatic sketch of this computation is given at the end of this subsection). §.§ The relative perceived utility function via RT The development of the loss function in the S3WD model has been a critical area of research, focusing on how to more accurately reflect losses in decision-making processes. Initially, Yao <cit.> proposed loss function values based on individual experience, which, however, were fixed and did not accurately mirror realistic loss scenarios. The concept of "relative" has been progressively introduced to more accurately characterize the semantics of loss. Jia and Liu <cit.> explored a relative loss function for different alternatives, a concept further developed by Liang et al. <cit.>, who constructed both relative loss and benefit functions. Lei et al. <cit.> extended these concepts with PT under the hesitant fuzzy linguistic environment, acknowledging that while relative loss functions approach reality, decision-makers often prefer knowledge of their gains ("what I get"). Traditional relative loss functions either rely on the subjective assignment of losses incurred from actions by experts or are based on the principle that the greater the value generated by an action, the smaller the loss, according to the assumption of complete rationality. This is particularly true in fuzzy environments of multi-attribute decision-making. However, evaluation information, especially in the context of DHHFLTS, is often provided subjectively by individuals, thereby increasing the uncertainty of decision-making. Depicting human psychological preferences solely through a linear relationship is somewhat simplistic. This subjectivity challenges the assumption of complete rationality that is required to convert qualitative evaluations into quantitative expressions. Although relative loss functions bring us closer to reality, they do not completely solve the psychological issues in decision-making, particularly the emotions of regret and delight that significantly impact human decisions. Decision-makers' evaluations of alternatives are often influenced by anchoring, showing a preference for current alternatives, which is undoubtedly closely related to the concept of relativity. RT can analyze the consequences of irrational behavior in the decision-making process. Through mathematical representation, it is possible to better understand people's decision-making behavior under uncertain conditions, especially when potential regret or rejoice becomes a significant influencing factor. Qualitative evaluation is essentially an intuitive method of characterizing gains. These evaluations incorporate the decision-maker's perceived sensitivity to risk, acknowledging their limited rationality. In reality, this sensitivity manifests as a decline in the rate of utility growth as returns increase, as shown in Fig. <ref>.
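As announced above, the following end-to-end sketch assembles the pieces of this subsection: the superior-gradus-based Gaussian kernel similarity, the κ-cut neighborhood granule, and the outranking-based conditional probability of the example. To keep it short, each alternative carries one DHHFLE per attribute as a list of (ϕ, φ) index pairs, and the memberships π_Z(x_p) are supplied as already-aggregated scalars rather than DHHFLEs; τ, ς, σ, κ and all numbers are illustrative assumptions.

```python
import math

TAU, VARSIGMA = 3, 2                       # assumed half-widths of the two hierarchy scales

def sg(phi, varphi):                       # superior gradus of a single DHLT
    return (math.exp(phi / (2 * TAU) + 0.5 + varphi / (2 * VARSIGMA * TAU)) - 1) / (math.e - 1)

def SG(dhhfle):                            # superior gradus of a DHHFLE (average over its DHLTs)
    return sum(sg(p, q) for p, q in dhhfle) / len(dhhfle)

def similarity(xp, xy, sigma=0.5):         # Gaussian kernel over the attributes in Z
    d2 = sum((SG(hp) - SG(hy)) ** 2 for hp, hy in zip(xp, xy))
    return math.exp(-d2 / (2 * sigma ** 2))

def granule(p, evals, kappa, sigma=0.5):   # κ-cut neighborhood granule of x_p
    return [y for y in range(len(evals)) if similarity(evals[p], evals[y], sigma) >= kappa]

def conditional_probability(p, evals, membership, kappa, sigma=0.5):
    g = granule(p, evals, kappa, sigma)
    return sum(membership[y] for y in g) / len(g)

evals = [                                  # three alternatives, two conditional attributes each
    [[(1, -1)], [(2, 0)]],
    [[(1, 0)],  [(2, -1)]],
    [[(-2, 1)], [(-1, 0)]],
]
membership = [0.7, 0.6, 0.2]               # pre-aggregated π_Z(x_p) values
print(granule(0, evals, kappa=0.9))                              # [0, 1]
print(conditional_probability(0, evals, membership, kappa=0.9))  # (0.7 + 0.6) / 2 = 0.65
```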
The relative gain function serves as the foundation for constructing the relative perceived utility function in RT, which further enhances the decision-making process by explicitly accounting for the decision-makers' bounded rationality and sensitivity to risk and regret. Given a DHHFLDT, the evaluation value of the alternative x_p with respect to the conditional attribute a_q is g(x_p, a_q), denoted as h_S_Opq. π_Z and ¬π_Z represent whether the alternative x_p is in the good state or not under Z ⊆ A. There are three actions: ∂_P, ∂_B, and ∂_N, which denote the actions of dividing x_p into the acceptance region, the uncertainty region and the rejection region, respectively. The unit relative gain function describes the result of x_p under a conditional attribute a_q (a_q∈ Z); Λ_a_q = {χ_pq^PP, χ_pq^BP, χ_pq^NP, χ_pq^PN, χ_pq^BN, χ_pq^NN} includes six kinds of relative gains. These gains are produced by taking different actions in different states. When x_p with the conditional attribute a_q belongs to π_Z, χ_pq^PP, χ_pq^BP, χ_pq^NP represent the relative gains from taking actions ∂_P, ∂_B, and ∂_N, respectively. Conversely, χ_pq^PN, χ_pq^BN, χ_pq^NN represent the relative gains from taking the same actions when x_p with the attribute a_q belongs to ¬π_Z. In Table <ref>, this relative gain function serves as a basic unit to clearly present the relative gain function of an alternative with a certain conditional attribute. The number of units is related to the product of the number of alternatives and the number of conditional attributes. χ_pq^∘∙ (∘ = P,B,N; ∙ = P,N) represents the relative gain functions with different conditional attributes. As the decision-makers' evaluation information, it takes the form of a DHHFLE. Unlike the concept of a relative loss function, a relative gain function directly expresses the gain obtained for that alternative when a certain action is taken. The two states and three actions in this basic unit produce six behavioral results. The action result χ_pq^PP is the full gain obtained by alternative x_p in the situation where the conditional attribute is a_q and the state is π_Z. The delayed action χ_pq^BP yields part of the gain of alternative x_p in the above situation. Refusing to act results in no gain, which means that χ_pq^NP has no value. In the other situation, where the conditional attribute is a_q and the state is ¬π_Z, the gain χ_pq^PN implies that taking action "without the right place, the right time, and the right people" would be meaningless, i.e., no gain. On the contrary, if such action is not taken, gains can be preserved instead. The value of the preserved gains can also be understood in terms of the concept of opportunity cost in economics. This means that taking action in the state π_Z yields a gain of χ_pq^PP, which corresponds to the potential loss of not taking action in state π_Z. In the state ¬π_Z, choosing not to take action results in the potential gain χ_pq^NN, which means giving up the gain χ_pq^PP. The resulting potential gain is complementary to the gain χ_pq^PP. Additionally, the delayed action χ_pq^BN represents part of the gain χ_pq^NN. The connections between these concepts can be expressed as follows: χ_pq^BP = ηχ_pq^PP, χ_pq^NN = (χ_pq^PP)^C, and χ_pq^BN = ηχ_pq^NN. Note that some DHHFLEs often have different numbers of DHLTs.
There are two notable points in the computation: the first is to characterize the subtle differences between different DHHFLEs, and the second is to compare DHHFLEs when obtaining regret and rejoice values. For this purpose, a superior gradus <cit.> is introduced as a preprocessing step for the relative gain function. Especially, superior gradus can solve the problem of overlapping transform values in DHHFLE. To simplify the calculation, the unit relative gain function Λ _a_q = {χ _pq^∘∙} can be further expressed as a simplified unit relative gain function Λ_a_q = {b_pq^∘∙}, where b_pq^∘∙ = SG( χ _pq^∘∙). Given a simplified unit relative gain function Λ_a_q = {b_pq^∘∙}, the utility of the evaluation information and the relative perceived utility are proposed as respectively: 𝔲( b_pq^∘∙)=( b_pq^∘∙)^θ, V_pq^∘∙ = V( b _pq^∘∙) = 𝔲( b _pq^∘∙) + 𝔳( 𝔲( b _pq^∘∙) - 𝔲^ + ( b _pq^∘∙)). Currently, a computational model of the perceived behavior of alternatives in qualitative decision making under a given conditional attribute is obtained by detailing the above. At the core of the model are six action results, each represented by an RT model. Details regarding the specific RT model and the unit relative perceived utility function are presented in Table <ref>. The total number of units in the model is calculated by multiplying the number of alternatives under consideration by the number of conditional attributes associated with each alternative. For the subset of the conditional attributes Z ⊆ A, the relative comprehensive perceived utility 𝕍_p^∘∙ of the alternative x_p is computed as below: 𝕍_p^∘∙ = ∑_m = 1^q w_Z V_pq^∘∙, where w_Z is the weight corresponding to the condition attribute in Z. The number of q are related to the elements contained in Z, satisfy p = 1,2, … ,n;q = 1,2, … ,m . The subset of conditional attribute weight w_Z satisfies 0 ≤w_Z≤ 1 and ∑_q = 1^m w_Z = 1. These two subsections address two crucial aspects related to S3WD of DHHFLTS models. They serve as the foundation of this work's endeavor to investigate the sequential process. The extraction and aggregation model of these information units and multi-level information fusion sequential process will be demonstrated in the next section. § THE S3W-GDM FOR DHHFLTS From the perspective of granular computing, S3WD is an effective method for dealing with complex and dynamic uncertainty problems. To achieve an efficient decision-making process, this section constructs a multi-level S3WD for GDM (S3W-GDM) based on the “trisect-and-conquer" strategy. It implements two issues at each level, “Triading" before “Optimizing", which involves dividing the alternatives into three parts and then taking a matching action for each stage. §.§ Statement of the problem The GDM problem of DHHFLTS can be explained as follows: there are n alternatives in the finite set U = {x_1,x_2, … ,x_n}, m conditional attributes in the finite set A = {a_1,a_2, … ,a_m} and e experts in the finite set E = {E_1,E_2, … ,E_e}. The conditional attribute weights are denoted by the vector w = {w_1,w_2, … ,w_m}, where 0 ≤w_q≤ 1, ∑_q = 1^m w_q = 1, and w_q∈ w. It is typical to discuss the type of attributes, namely cost or benefit types. Since the semantic analysis of attributes is crucial for effectively resolving complex decision issues <cit.>. Failing to distinguish between attributes in decision analysis can easily lead to incorrect decisions. 
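Before the attribute types and the group setting are developed further, the unit relative perceived utility introduced in the previous subsection can be sketched numerically. The regret-rejoice function 𝔳 is assumed here to take the exponential form 𝔳(Δ) = 1 − e^{−δΔ} that is common in the regret-theory literature, and the reference utility 𝔲^+ is taken as the utility of the best gain available within the same unit; both choices, like the parameter values, are illustrative assumptions rather than prescriptions of this paper.

import math

theta, delta = 0.88, 0.3   # utility and regret parameters (values of the later illustrative example)

def u(b):
    """Utility of a simplified relative gain b in [0, 1]."""
    return b ** theta

def v(diff):
    """Assumed exponential regret (diff < 0) / rejoice (diff > 0) function."""
    return 1.0 - math.exp(-delta * diff)

def perceived_utility(b, b_ref):
    """V = u(b) + v(u(b) - u(b_ref)), with b_ref the best gain in the same unit."""
    return u(b) + v(u(b) - u(b_ref))

def comprehensive_utility(action_gains, ref_gains, weights):
    """Relative comprehensive perceived utility of one action over the attributes in Z."""
    return sum(w * perceived_utility(b, r)
               for b, r, w in zip(action_gains, ref_gains, weights))

# e.g. the deferment action compared against the full gains of acting, over three attributes
print(comprehensive_utility(action_gains=[0.42, 0.24, 0.54],
                            ref_gains=[0.70, 0.40, 0.90],
                            weights=[0.5, 0.2, 0.3]))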
In the context of diagnosing a disease, there are two crucial evaluation attributes: the severity of symptoms and the side effects of treatment. The severity of symptoms is a health attribute that indicates the impact of the disease on the patient, where a higher severity level requires urgent treatment. On the other hand, the side effects of treatment represent a treatment risk attribute, where fewer side effects mean the treatment is more beneficial for the patient. Therefore, the types of evaluation attributes in the evaluation process need to be defined before the experts give their evaluations. When multiple experts are involved, their opinions are usually given different weights depending on their level of expertise and credibility. The weight of different experts are denoted by w_E = {w_E^1,w_E^2, … ,w_E^e}, where 0 ≤ w_E^j ≤ 1, ∑_j = 1^e w_E^j = 1( j = 1,2, … ,e), and w_E^j∈ w_E. Each expert provides their own evaluation for all the alternatives based on different conditional attributes. These evaluations constitute with personal preferences that includes the number of n × m DHHFLEs h_S_Opq^j, as shown in Table <ref>. The ( DHHFLDT)_^j can also be denoted as matrices H^j = {h_S_Opq^j}, which denotes the evaluation of the alternative x_p with the conditional attribute a_q given by the jth expert. This section proposes a novel approach to extract and aggregate expert evaluation information in GDM. The granular computing is used to improve the traditional GDM fusion method. The novel method involves sequentially considering the importance of conditional attributes and establishing multi-level granularity of evaluation information. This novel method builds on the core concepts presented in Section <ref> and is the basis of this study. §.§ Multi-level decision table granularity for fusion Many decision-making scenarios benefit from starting with coarse-grained information. This approach proves valuable in initial evaluations, as seen in tasks like selecting contestants, screening resumes, or filtering emails. Coarse-grained information allows for a quick and efficient assessment of a large number of alternatives. However, after the initial selection, further refinement of the evaluation process is often required. This is where fine-grained information becomes critical. By continuously refining the decision attributes through successive levels of analysis, decision-makers can make more detailed and precise judgments. This iterative process ultimately improves the overall effectiveness of the decision-making process. While more detailed information is essential for final decisions, coarse-grained information plays a vital role in enhancing decision-making efficiency. After all, decision-making incurs costs related to information collection and the actual situation. Therefore, a multi-level S3WD process offers a compelling approach by progressively analyzing decision-making information, starting from coarse-grained to fine-grained levels. This iterative strategy enables a balance between efficiency and accuracy, leading to better decision results. Information fusion models, particularly those dealing with vague and uncertain evaluations, often rely heavily on aggregation operators. Extensive research has been conducted in this area, as evidenced by these works <cit.>. Regardless of the specific type of operator chosen, the core method remains consistent. Experts begin by evaluating each alternative based on various attributes. 
An aggregation operator is then applied to integrate these individual evaluations for each alternative across all attributes, resulting in a set of integrated evaluations. Finally, an exploitation method is used to rank the alternatives based on the integrated evaluations, ultimately selecting the optimal one (as illustrated in Fig. <ref>). Constructing a multi-level expert information aggregation granularity method can effectively improve the efficiency of alternative categorization and ranking for GDM problem. The novel fusion framework are shown in Fig. <ref>. To facilitate a coarse-to-fine presentation of input information, the expert evaluation information table needs to be defined. Let ( DHHFLDT)_^j = ( U,A,V^j,g^j)( j = 1,2, … ,e) be the decision table denoted as H^j, which means the evaluation of the alternative x_p with the conditional attribute a_q given by the jth expert. There exists an evaluation information extraction function 𝔡 satisfies 𝔡( H^j) = {H_1^j,H_2^j, … ,H_i^j, … ,H_k^j}, where H_i^j is the matrix representation of the decision extraction table ( DHHFLDT)_i^j = ( U_i,Z_i,V_i^j,g_i^j) for the jth expert under Z_i, where V_i^j denotes the domain of the conditional attributes, and g_i^j:U_i^× Z_i^→V_i^j is the complete information function. Z_i be the set of z attributes whose conditional attributes are ranked in the top i in terms of importance. Z_i satisfy Z_1⊆Z_2⊆…⊆Z_i⊆…⊆Z_k⊆ A( i = 1,2, … ,k). At each decision-level, the evaluation information of the corresponding attribute of each expert is extracted in turn according to the order of importance of the conditional attribute to get the H_i^j of different experts under the subset conditional attribute of current decision-level. Let ( DHHFLDT)_i^j be the extraction decision table for the jth expert under Z_i, denoted as H_i^j. There exists an evaluation information aggregation function 𝔞 satisfies 𝔞( H_i^1,H_i^2, … ,H_i^j, … ,H_i^e) = H_i, where H_i is the matrix representation of the fusion decision table ( DHHFLDT)_i = ( U_i,Z_i,V_i,g_i) of e experts under Z_i. A novel fusion framework is developed to propose a method that incorporates multi-level of granularity. Functions for evaluating information extraction and aggregation have also been designed. Given the design of the aggregation function discussed above, common aggregation function can be considered as aggregation operators used for information fusion. Various types of linguistic weighted operators have been developed, including Muirhead mean aggregation operators <cit.>, and linguistic ordered weighted distance operators <cit.>. The weighted average operatora is widely used and well-characterized aggregation operator proposed by Yager <cit.>. For ease of use, this paper extends the application of the DHHFLWA <cit.> operator as the evaluation information aggregation function. Let a group matrix of {H_i^1,H_i^2, ⋯ ,H_i^j, ⋯ ,H_i^e} be the extraction decision tables for e experts under Z_i. Then, the double hierarchy hesitant fuzzy linguistic matrix weighted averaging operator (DHHFLMWA) is defined as below: DHHFLMWA( H_i^1,H_i^2, ⋯ ,H_i^j, ⋯ ,H_i^e) = ⊕_j = 1^e ( w_E^j · H_i^j) ( j = 1,2, ⋯ ,e;i = 1,2, … ,k), where 0 ≤ w_E^j ≤ 1 and ∑_j = 1^e w_E^j = 1. Let a group matrix of {H_i^1,H_i^2, ⋯ ,H_i^j, ⋯ ,H_i^e} be the extraction decision table for e experts under Z_i and H_i^j = {( h_S_O^j)_i}. 
A collection of DHHFLEs are ( h_S_O^j)_i = {s_ϕ _l^j⟨o_φ _l^j⟩_i| s_ϕ _l^j⟨o_φ _l^j⟩_i∈S_O;l = 1,2,...,L^j.}, and the weight of experts are denoted by w_E = {w_E^1,w_E^2, … ,w_E^e}. Then the DHHFLMWA with linguistic expected-value is calculated as below: [ DHHFLMWA( H_i^1,H_i^2, ⋯ ,H_i^j, ⋯ ,H_i^e); = le[ ( w_E^1H_i^1) ⊕( w_E^2H_i^2) ⊕⋯⊕( w_E^jH_i^j) ⊕⋯⊕( w_E^eH_i^e)]; = w_E^1{le( h_S_O^1)_i}⊕ w_E^2{le( h_S_O^2)_i}⊕⋯⊕ w_E^j{le( h_S_O^j)_i}⊕⋯⊕ w_E^e{le( h_S_O^e)_i} ]. Substituting into Eq.(<ref>) gives: [ w_E^1{le( h_S_O^1)_i}⊕ w_E^2{le( h_S_O^2)_i}⊕⋯⊕ w_E^j{le( h_S_O^j)_i}⊕⋯⊕ w_E^e{le( h_S_O^e)_i}; = w_E^1{s_ϕ ^1⟨o_φ ^1⟩_i}⊕ w_E^2{s_ϕ ^2⟨o_φ ^2⟩_i}⊕⋯⊕ w_E^j{s_ϕ ^j⟨o_φ ^j⟩_i}⊕⋯⊕ w_E^e{s_ϕ ^e⟨o_φ ^e⟩_i} ], where w_E^j{s_ϕ ^j⟨o_φ ^j⟩_i} = w_E^j{s_1/L^j∑_l = 1^L^jϕ _l^j⟨o_1/L^j∑_l = 1^L^jφ _l^j⟩_i}, using Eq.(<ref>) could get the final aggregation results of e experts under Z_i is H_i = {s_∑_j = 1^e w_E^jϕ ^j⟨o_∑_j = 1^e w_E^jφ ^j⟩_i}. §.§ S3W-GDM method based on multi-level granularity fusion When it comes to solving complex GDM processes, based on S3WD the multi-level of granularity is a useful approach. Alternatives at each decision-level are classified into three possible results: acceptance, rejection, or non-commitment, as a way to increase the efficiency of the decision alternatives selection process. The following will demonstrate how it's combined with a S3WD model into the GDM process. Given a ( DHHFLDT)_i = ( U_i,Z_i,V_i,g_i), U_i denotes the processing alternatives, Gl( Z,. . κ) is the 4-tuple, Z = {Z_1,Z_2, … ,Z_i, … ,Z_k} be a nested sequence of conditional attributes, and κ = {κ _1,κ _2, … ,κ _i, … ,κ _k} be a sequence of similarity thresholds. The multi-level granular structure Gl( Z,κ) = {Gl_1( Z_1,κ _1),Gl_2( Z_2,κ _2), … ,Gl_k( Z_k,κ _k)} is defined as: [ 1st decision - level:Gl_1 = {( DHHFLDT)_1,κ _1,pr( π _Z_1| κ _ℵ _Z_1( x ).),V_Z_1^∘∙}; 2nd decision - level:Gl_2 = {( DHHFLDT)_2,κ _2,pr( π _Z_2| κ _ℵ _Z_2( x ).),V_Z_2^∘∙}; ⋮ ⋮ ⋮; ith decision - level:Gl_i = {( DHHFLDT)_i,κ _i,pr( π _Z_i| κ _ℵ _Z_i( x ).),V_Z_i^∘∙}; ⋮ ⋮ ⋮; kth decision - level:Gl_k = {( DHHFLDT)_k,κ _k,pr( π _Z_k| κ _ℵ _Z_k( x ).),V_Z_k^∘∙} ], where at the ith decision-level of Gl_i( Z_i,κ _i), U_i satisfies | U_1| ≥| U_2| ≥…≥| U_i| ≥…≥| U_k|, Z_i satisfies Z_1⊆Z_2⊆…⊆Z_i⊆…⊆Z_k⊆ A, κ _i satisfies 0 ≤κ _1≤κ _2≤…≤κ _k≤ 1, σ _1≥σ _2≥…≥σ _i≥…≥σ _k, pr( π _Z_i| κ _ℵ _Z_i( x ).) denotes conditional probabilitise, and V_Z_i^∘∙( ∘ = P,B,N; ∙ = P,N) denotes relative perceived utility functions. To demonstrate a detailed method of operation, an example is given below. Two experts have already given evaluation decision tables ( DHHFLDT)^1 and ( DHHFLDT)^2, with conditional attributes ordered by importance. Assuming that a_1 is the most important attributes, it means Z_1 = {a_1}. According to Definition <ref>, the decision tables ( DHHFLDT)_1^1 and ( DHHFLDT)_1^2 of the two experts under the 1st decision-level can be extracted respectively. Definition <ref> can then be used to obtain the important granular structure Gl_1( Z_1,κ_1 ) of the two experts at the 1st decison-level, resulting in the fused decision table ( DHHFLDT)_1. Clearly, U_1 = {x_1,x_2, … ,x_n}. Having obtained this level of granularity, the unit of relative gain function introduced in Section <ref> can be constructed. The n × 1 unit relative perceived functions V_Z_1^∘∙ can be created. Deriving pr( π _Z_1| κ _ℵ _Z_1( x ).) is straightforward using the method described in Section <ref>. The specific operation of the above process is demonstrated in Fig. 
<ref>. Finally, the rules of the S3WD are utilized to derive the results of the 1st decision-level of the granular structure, which drives the S3W-GDM process. Given a multi-level structure Gl( Z,κ) = {Gl_1( Z_1,κ _1), Gl_2( Z_2,κ _2), … ,Gl_k( Z_k,κ _k)}, for the ith decision-level Gl_i( Z_i,κ _i) the expected perceived utility of κ _ℵ _Z_i( x ) is: [ E𝕍_i( ∂ _P| x ) = 𝕍_Z_i^PPpr( π _Z_i| κ _ℵ _Z_i( x )) + 𝕍_Z_i^PNpr( ¬π _Z_i| κ _ℵ _Z_i( x )); E𝕍_i( ∂ _B| x ) = 𝕍_Z_i^BPpr( π _Z_i| κ _ℵ _Z_i( x )) + 𝕍_Z_i^BNpr( ¬π _Z_i| κ _ℵ _Z_i( x )); E𝕍_i( ∂ _N| x ) = 𝕍_Z_i^NPpr( π _Z_i| κ _ℵ _Z_i( x )) + 𝕍_Z_i^NNpr( ¬π _Z_i| κ _ℵ _Z_i( x )) ] The perceived utility from regret theory is chosen as the basis for formulating the decision rules, since the relative perceived utility reflects the decision-maker's subjective perception of utility. Therefore, following the Bayesian decision procedure, the action with the maximal expected perceived utility is selected as the most appropriate one. The rules are derived as follows: (P1) If E𝕍_i( ∂ _P| x ) ≥ E𝕍_i( ∂ _B| x ) and E𝕍_i( ∂ _P| x ) ≥ E𝕍_i( ∂ _N| x ), then x ∈ POS_i( π _Z_i), (B1) If E𝕍_i( ∂ _B| x ) ≥ E𝕍_i( ∂ _P| x ) and E𝕍_i( ∂ _B| x ) ≥ E𝕍_i( ∂ _N| x ), then x ∈ BND_i( π _Z_i), (N1) If E𝕍_i( ∂ _N| x ) ≥ E𝕍_i( ∂ _P| x ) and E𝕍_i( ∂ _N| x ) ≥ E𝕍_i( ∂ _B| x ), then x ∈ NEG_i( π _Z_i), where POS_i( π _Z_i)⋃BND_i( π _Z_i)⋃NEG_i( π _Z_i) = U_i. The granular structure Gl( Z,κ) of each decision-level is used for decision-making and classifies each alternative to generate the decision results of the current level. Seven S3WD models have been summarized for different dynamic scenarios <cit.>. The feasibility of these seven dynamic decision-making schemes rests on the decision data set having prior information, so that the accuracy in each region can be tested. Since there is no prior information in GDM, each decision-level is instead based on the results derived from the granular structure of the previous level. Therefore, this paper considers not only improving the efficiency of decision-making but also enhancing the rationality of the decision results. The most reasonable dynamic decision-making situation is set up through the GDM problem, and the relevant parameter settings are strictly followed; in the absence of prior information, reasonable parameter settings ensure the accuracy of the decision results to a greater extent. Assume U_i = BND_i - 1( π _i - 1)( 1 < i ≤ k) when moving from the ( i - 1)th decision-level to the ith level in a top-down manner. Fig. <ref> illustrates one of the most common S3WD models. Another crucial problem in the GDM setting is the ranking of the alternatives classified at each level, especially those classified into the positive (negative) domain, according to the target concept. The ranking of alternatives follows a specific order based on their expected perceived utility, denoted as POS_i≻ BND_i≻ NEG_i. When the target concept is accepted and alternatives fall within the positive domain, a higher expected perceived utility signifies stronger alignment with the desired result. Conversely, if the target concept is rejected and alternatives are classified into the negative domain, a higher expected perceived utility indicates a greater degree of misalignment with the undesirable result.
In essence, prioritization within the positive domain favors alternatives with higher expected utility (positive direction), while prioritization within the negative domain favors alternatives with lower expected utility (reverse direction). If an alternative is ultimately classified in the boundary domain, the ranking of its expected perceived utility is determined based on the results of the final analysis at this level. §.§ The algorithms of S3W-GDM Based on the multi-level granularity fusion, S3WD is a useful model to address GDM problems. Additionally, neighborhood theory and regret theory support the combination of DHHFLTS and the S3WD model. With these works, the novel S3W-GDM method is constructed. Two algorithms will be presented in the following sections. Algorithm <ref> dynamically extracts and fuses decision tables by iterating through each decision-level based on the importance of conditional attributes. Algorithm <ref> describes the detailed dynamic decision-making process of the multi-level S3WD, including the information aggregation of decision-level, the construction method of κ-cut neighborhood binary relation at each decision-level, the coarse-grained representation of the relative utility functions, and the expectation of the integrated perceived S3WD rule for comparison. § AN ILLUSTRATIVE EXAMPLE This section performs case analysis based on an illustrative example to verify the applicability of the established multi-level S3W-GDM method. In Section <ref>, multiple experts' SLE diagnostic problem characterized by vagueness, hesitation, and variation is presented. The S3W-GDM method is applied to obtain the classification and ranking results of SLE patients in Section <ref>. §.§ Description of the problem In recent years, granular computing and S3WD have been widely applied in the medical field, primarily focusing on the prediction and classification of patient populations. Furthermore, existing literature indicates that granular computing and S3WD provide an effective reasoning paradigm for dynamic medical diagnostics. However, the reality of dynamic decision-making in medical diagnostics is exceedingly complex, involving the diversity of disease characteristics, the uncertainty of treatment decisions, and the phased involvement of multidisciplinary experts. Taking the diagnosis of SLE as an example, such decision-making issues typically exhibit characteristics of vagueness, hesitation, and variation. Regrettably, there is a lack of further research on existing decision-making methods or models that consider these characteristics simultaneously. And the current decision-making methods regarding DHHFLTS are not suitable for GDM problems with the above characteristics. In light of this, Section <ref> establishes the S3W-GDM method. To demonstrate the applicability of the established models, an illustrative example of SLE diagnosis is presented. SLE is an autoimmune disease that primarily affects multiple systems and organs, leading to complex and varied clinical manifestations. It predominantly occurs in women of childbearing age. Regular follow-up and monitoring of the condition are necessary to detect and manage relapses early and to achieve accurate diagnosis and delineation in SLE patients. Some characteristic clinical manifestations can provide clues for the early diagnosis of SLE, such as arthralgia, rash, nephritis, serological changes, immunosuppression, and psychiatric symptoms. 
Due to its impact on multiple systems and organs, SLE results in a diverse range of clinical manifestations. In the face of this complexity, the judgments of experts from different disciplines need to be combined as quickly as possible to deliver the patient's diagnosis, assessment, prediction, and other decision results. Let U = {x_1,x_2,x_3,x_4,x_5,x_6,x_7,x_8} be the set of women of childbearing age who may have SLE. The hospital has implemented a multidisciplinary dynamic diagnosis approach for this group of patients to enhance diagnostic efficiency and accuracy. The approach conducts multi-level diagnoses through a joint consultation of experts from fields such as rheumatology, nephrology, and dermatology, who collaborate to diagnose the patients. Since SLE cannot be diagnosed based solely on the indicators of any single discipline, all experts use a consistent set of attributes to arrive at a diagnosis. These attributes include antinuclear antibody (ANA), anti-double-stranded DNA antibody (Anti-dsDNA), complement protein (C3 and C4) levels, skin and mucous membrane damage, arthritis, and kidney involvement. These attributes not only help in the diagnosis of SLE but also aid in ruling out other diseases. The set A = {a_1,a_2,a_3,a_4,a_5,a_6} represents these six attributes. Research <cit.> has been conducted to determine whether these attributes are of cost or benefit type. To gather accurate evaluations from the experts, the collected evaluation content is adjusted accordingly, and the attribute types used are shown in Table <ref>. According to the order in which experts prioritize these attributes in SLE diagnosis, their importance is set as w = {0.2,0.3,0.15,0.1,0.1,0.15}. Each patient is assumed to be in one of two states, π and ¬π, corresponding to having SLE and not having SLE. At the same time, there are three actions ∂ _P, ∂ _B, and ∂ _N, which correspond to a confirmed diagnosis requiring immediate treatment, a pending diagnosis requiring further testing, and no treatment but regular inspection and monitoring. Based on these attributes, the linguistic scale of the DHHFLTS is designed to collect the expert evaluation information, as shown in Fig. <ref>. The granularity parameters of the first and second hierarchy linguistic scales of the DHHFLTS are τ = 3 and ζ = 3, respectively. Medical experts in the field were consulted and invited to simulate a consultation based on their personal experience. The three experts (e = 3) assessed eight patients (n = 8) using the DHHFLTS, and the results are collected in Tables <ref>-<ref>. The weights of the three experts are set to w_E = {0.5,0.3,0.2} based on their knowledge of the overall situation of SLE. According to Section <ref>, the algorithm repeatedly extracts and fuses the expert evaluation tables, and the parameters of each granular structure are set to the same values for consistency. §.§ S3W-GDM process of SLE diagnosis According to the previous two sections, the parameters involved in this illustration are the Gaussian kernel parameter σ = 0.7, the neighborhood cut parameter κ = 1, the relative gain parameter η = 0.6, the utility parameter θ = 0.88, and the regret parameter δ = 0.3. Reranking the conditional attributes A = {a_1,a_2,a_3,a_4,a_5,a_6} according to the weight vector w = {0.2,0.3,0.15,0.1,0.1,0.15}, a sequence of conditional attribute subsets Z_1 = {a_2}, Z_2 = {a_1,a_2}, Z_3 = {a_1,a_2,a_3,a_6}, and Z_4 = {a_1,a_2,a_3,a_4,a_5,a_6} is obtained.
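The construction of this nested sequence is purely mechanical: attributes are added tier by tier in descending order of weight. It can be sketched as follows, with the attribute names and weights being those listed above.

# A sketch of building the nested attribute subsets Z_1 ⊆ Z_2 ⊆ ... from the weight vector.
weights = {"a1": 0.20, "a2": 0.30, "a3": 0.15, "a4": 0.10, "a5": 0.10, "a6": 0.15}

def nested_subsets(weights):
    """Group attributes by descending weight; each level adds the next weight tier."""
    tiers = sorted(set(weights.values()), reverse=True)       # distinct weights, largest first
    subsets, current = [], set()
    for t in tiers:
        current |= {a for a, w in weights.items() if w == t}  # add every attribute of this tier
        subsets.append(sorted(current))
    return subsets

for i, z in enumerate(nested_subsets(weights), start=1):
    print(f"Z_{i} = {z}")
# prints Z_1 = ['a2'], Z_2 = ['a1', 'a2'], Z_3 = ['a1', 'a2', 'a3', 'a6'], Z_4 = all six attributes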
The S3W-GDM process is then executed four times, once for each conditional attribute subset, and the calculation procedure is the same at each decision-level; it is illustrated here for the 1st decision-level. First, the decision tables of the three experts under the subset Z_1 = {a_2} of conditional attributes are extracted at the 1st decision-level. Second, the experts' decision tables under Z_1 = {a_2} are aggregated to form the fused decision table. Then, the granular structure Gl_1 = {( DHHFLDT)_1,κ _1,pr( π _1| κ _ℵ _Z_1( x )),V_Z_1^∘∙} is obtained from the fused decision table. Finally, the S3W-GDM process is executed to divide the alternatives into the three domains of the 1st decision-level. The remaining decision-levels are computed sequentially following the same steps. After the decisions are obtained for the multi-level granular structure, a set of classifications and the expected perceived utility of each alternative are derived. Figs. <ref> and <ref> show the classification and ranking results, where Gl_1, Gl_2, Gl_3, and Gl_4 represent the 1st, 2nd, 3rd, and 4th decision-levels, respectively. Fig. <ref> clearly shows that the number of alternatives in the boundary domain decreases as the decision-making progresses, a trend that conforms with real decision-making scenarios, while in the positive and negative domains the number of alternatives increases or remains unchanged. At the 1st decision-level, an initial determination can already be made for patient x_2 and for patients x_1, x_5, and x_6 based on the most important conditional attribute a_2. Patient x_2 has a negative utility under a_2, indicating that her situation is not good. Patients x_1, x_5, and x_6 do not show enough negative utility under a_2 to warrant action, so no further testing is recommended and monitoring is sufficient. Patients x_3, x_4, x_7, and x_8 are handled differently: further examinations are requested for these four patients, since a_2 alone is no longer sufficient to provide definitive medical guidance for them. At the 2nd decision-level, the conditional attribute a_1 is added, providing a better decision-making basis. This makes it possible to determine patient x_8's exact situation at this level: she only needs further monitoring and does not require additional tests for the time being, which simplifies her care. Unfortunately, patient x_4 cannot receive a diagnosis at the 2nd decision-level. However, at the 3rd decision-level a new diagnosis is conducted: she is suffering from SLE and is referred for precise treatment. This scenario demonstrates that the decision model closely aligns with real-life medical decision-making, making it a practical tool for such applications. Given limited healthcare resources and varying degrees of patient conditions, physicians often face challenging situations. By determining the severity of a patient's condition, a personalized, patient-centered treatment plan can be developed based on their specific needs, which helps optimize doctors' work and makes the most efficient use of medical resources. The model outlined in this paper thus presents an application scenario in which the alternatives can be prioritized according to the severity of the patients' illnesses. Fig. <ref> is the basis both for plotting Fig. <ref> and for judging the severity of the disease in the eight patients.
Fig. <ref> presents the trend of the expected perceived utility values for the eight patients. According to the decision rule in Section <ref>, the final ranking result is obtained: x_2 > x_4 > x_7 > x_3 > x_1 > x_5 > x_8 > x_6, which can be used to determine the severity of illness of the eight patients. Patients x_2 and x_4 are already diagnosed by the 4th decision-level. Among them, patient x_2's condition is worse, as the conditional attribute used in this case has a larger value, indicating that the target concept is closer to being sick. For patients x_3 and x_7, new tests are required to determine whether they are sick. Although the ranking rules suggest that patient x_3 may be at a higher risk of being sick, there is still significant uncertainty in the ordering of the boundary domain. While patients x_1, x_5, x_6, and x_8 are not diagnosed based on the evaluation of these six conditional attributes, it is important to monitor them regularly and maintain good follow-up records. During the experiments, it was found that the ranking obtained at the end of the 1st decision-level is highly correlated with the final ranking above; the turning point that leads to a different ranking is patient x_8 in the boundary domain. In subfigure (a) of Fig. <ref>, it is evident that the conditional attribute a_2 has the more significant impact on the expected perceived utility, whereas subfigures (b), (c), and (d) exhibit a gradual stabilization as new conditional attributes are included. This implies that, in this case, the conditional attribute a_2 has the most crucial influence on the decision results. From Table <ref>, the changes in ranking from the 1st to the 4th decision-level can be observed. The ranking results of the 1st and 2nd decision-levels are broadly consistent, with only slight differences; at the 3rd decision-level the ranking remains essentially the same, with very little change; and by the 4th decision-level the ranking of all alternatives is stabilized. These results indicate that the initial ranking achieved at the 1st decision-level, under coarse granularity, is nearly identical to the final ranking at the 4th decision-level. This suggests that such a granular structure significantly enhances decision-making efficiency, as relatively accurate results can be achieved even at the initial, less detailed levels. The multi-level S3W-GDM method also provides a more reasonable semantic interpretation of the classification and ranking results for the above example. In clinical diagnosis, the indicator a_2, the degree of Anti-dsDNA, is important for preliminary screening and rapid analysis. Patient x_2, as the most serious patient under this indicator, is screened out first to receive the relevant treatment without delaying her condition. Although each decision-level produces a ranking, it may still be unclear which patients need additional examinations, and the boundary region provides the interpretive space for this process. The decreasing number of boundary-region objects and the stabilization of the positive- and negative-region classifications reflect the model's ability to provide meaningful and interpretable decision results. This aligns with the practical need for iterative and adaptive decision-making in medical diagnostics, where patient conditions and available information evolve over time.
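To summarize the per-level classification step used throughout this example, the short sketch below combines the relative comprehensive perceived utilities of the three actions with the neighborhood-based conditional probability and applies rules (P1)-(N1); the numerical values are placeholders rather than figures from the case study.

def expected_utilities(V, p):
    """Expected perceived utilities of the three actions.

    V : dict with keys 'PP','BP','NP','PN','BN','NN' holding comprehensive perceived utilities.
    p : conditional probability pr(pi_Z | neighborhood of x); the negated state has probability 1 - p.
    """
    return {
        "P": V["PP"] * p + V["PN"] * (1.0 - p),
        "B": V["BP"] * p + V["BN"] * (1.0 - p),
        "N": V["NP"] * p + V["NN"] * (1.0 - p),
    }

def classify(V, p):
    """Rules (P1)-(N1): assign x to the region whose action maximizes expected perceived utility."""
    ev = expected_utilities(V, p)
    region = {"P": "POS", "B": "BND", "N": "NEG"}
    best_action = max(ev, key=ev.get)
    return region[best_action], ev

# placeholder utilities for one alternative at one decision-level
V = {"PP": 0.85, "BP": 0.55, "NP": 0.10, "PN": 0.05, "BN": 0.40, "NN": 0.70}
print(classify(V, p=0.62))   # -> ('POS', {...}) for these illustrative numbers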
§ COMPARATIVE AND SENSITIVITY ANALYSIS The efficiency and rationality of the multi-level S3W-GDM method are verified through the comparative and sensitivity analysis. In Section <ref>, the efficiency and validity of the S3W-GDM method are verified using four types of data sets and seven other methods. Additionally, the rationality of the proposed method is demonstrated by analyzing the results generated from the combination of parameters in Section <ref>. In the previous sections, a novel S3W-GDM method is proposed to solve the GDM problem under DHHFLTS environment. This decision-making process is more closely aligned with the diagnostic process in reality. To validate the efficiency and validity of the proposed information fusion method incorporating granular computing, it is necessary to compare it with the established decision-making methods for different DHHFLTS methods. Before making the comparison, the selected comparison methods and comparison data sets need to be explained. (1) Regarding the comparative methods, some classical DHHFLTS methods are selected for comparison, and their differences are shown in Table <ref>. Method 1 and 2, as well as the S3W-GDM methods in this paper, focus on the comparison in dealing with multiple experts GDM problems. The difference is that Method 1 used a traditional aggregation operator that fused information from multiple experts evaluations <cit.>. Method 2 employed the Dempster-Shafer evidence theory and proposed a novel method for fusing experts' information <cit.>. Method 3 <cit.> addresses a multi-attribute group decision problem involving the selection of emergency logistics providers. Methods 4-7 are traditional single expert decision-making methods, including the double hierarchy hesitant fuzzy linguistic generalized power average operator <cit.>, TOPSIS-based the generalized completely hybrid weighted Hausdorff-hesitance degree-based distance <cit.>, MULTIMOORA <cit.>, and VIKOR <cit.>. Method 4-7 are chosen to validate that the proposed methods are equally applicable to single-expert decision-making with better efficiency. (2) Regarding the four types of comparative data sets, these evaluation data sets under three different perspectives are selected, as shown in Table <ref>. The data set for the selection of financial products from Method 2 <cit.> is a regular GDM data set with multiple experts evaluating multiple alternatives under different attributes. The breast cancer data set is also used for this comparison to further demonstrate the efficiency gains of the proposed method compared to the regular GDM methods. The data is preprocessed and set up as a GDM problem with 3 experts and 30 alternatives. The use of this data set with more alternatives allows for the validation of the proposed method in more complex scenarios. Methods 1, 2 and the S3W-GDM method are applicable to both of two data sets for GDM. The evaluation data set from literature <cit.> involves a multi-attribute decision-making problem with emergency logistics provider selection. The novel S3W-GDM method proposed in this paper is designed to address the dynamics in GDM problems, making it compatible with the context of this evaluation data set. Since the model proposed in this paper is to a greater extent a dynamic decision-making model, it is different from the stable decision-making where all the decision-making information is collected at once. The effectiveness of the method in this paper is demonstrated by comparing it with the dynamic GDM problem. 
Since there is only one decision table for this data set, some of the classical Methods 3-7 are compared with the proposed method in this paper. The evaluation data set from literature <cit.> consists of reviews on Sichuan liquor brands. The selection of these brands is based on their popularity among the general public, allowing for a more factual and rational evaluation of the decision results. The comparison method chosen for this data set is a stationary multi-attribute decision-making Method 4-7. §.§ Comparative analysis Type 1. Comparison of financial products selection This evaluation data set contains 6 different financial products U = {x_1,x_2,x_3,x_4,x_5,x_6} and requires 3 experts to select a financial product by considering a combination of 4 dimensions: rate of return(a_1), risk(a_2), liquidity(a_3), and tansparency(a_4). The conditional attribute weight vector w = {0.23,0.13,0.52,0.12} determined through BWM will be used here. Method 2 treats decision information as evidence, with experts viewed as different sources of evidence. The BPA function is constructed by calculating the confidence and evidence matrix for each piece of decision information, and the ranking of financial products is obtained by combining the DSET rule. This process is quite tedious. To combine the decision-making information of all experts under various conditional attributes, it is necessary to calculate different distances between experts, products, and attributes. This computation is required before constructing the global BPA function, which introduces DSET rules. Additionally, the different number of DHLTs in DHHFLEs need to be normalized. As shown in Table <ref>, the ranking results of these methods are almost the same. The main difference is that the top two rankings of Method 1 are different from the other methods. In Method 1, x_1 has an advantage over x_3, while all other alternatives maintain the same ranking as the remaining methods. The last two ranking results are from the method proposed in this paper. One ranking result uses only one sequential decision process, and the other completes the computation of all subsets of conditional attributes. However, the final ranking results obtained are the same as Method 2. The 1st decision-level delineated the η=0.7 partition in which the product is located. Specifically, the novel multi-level S3W-GDM method can achieve the same results after the 1st decision-level. The S3W-GDM method with multi-granularity thinking greatly enhances the efficiency of the GDM process. Type 2. Comparison of breast cancer diagnosis The Breast Cancer Coimbra Data Set (https://archive.ics.uci.edu/datasets) as row data is adapted to demonstrate the computational efficiency of the method for GDM scenarios. Suppose the weight vector of conditional attribute is w = {0.07, 0.15, 0.28, 0.32, 0.34}, the DHLTS with the first hierarchy linguistic term scale is S = {s_ - 2 = very low,s_ - 1 = low,s_0. = normal,. s_1 = high,s_2 = very high}, and the second hierarchy linguistic term scales includes O_1 = {o_ - 2 = only a little,o_ - 1 = a little,o_0 = just right,o_1 = much,o_2 = very much}, O_2 ={o_ - 2 = only a little,o_ - 1 = a little,o_0 = just right}, O_3 = {o_ - 2 = very much,o_ - 1 = . . much,o_0 = just right,o_1 = only a little,o_2 = little}, O_4 = {o_ - 2 = very much,o_ - 1 = much,. o_0 = . just right}. 
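In this study the conversion is carried out by the experts themselves; purely to illustrate how such a two-hierarchy scale can discretize a crisp measurement, a toy binning of standardized values might look as follows (the cut points and the refinement rule are assumptions, not the experts' procedure).

import numpy as np

def to_dhhfll(values, tau=2, zeta=2):
    """Toy discretization of crisp values onto a two-hierarchy linguistic scale.

    Returns pairs (phi, varphi): the first-hierarchy index phi in [-tau, tau] and the
    second-hierarchy index varphi in [-zeta, zeta]. Illustrative only; the conversion
    in this comparison was performed by the medical experts.
    """
    v = np.asarray(values, dtype=float)
    z = (v - v.mean()) / (v.std() + 1e-12)        # standardize over the patient group
    c = np.clip(z, -tau - 0.5, tau + 0.5)         # continuous position on the scale
    phi = np.clip(np.round(c), -tau, tau)         # nearest first-hierarchy term
    varphi = np.round((c - phi) * 2 * zeta)       # refine the position with the second hierarchy
    return [(int(a), int(b)) for a, b in zip(phi, varphi)]

print(to_dhhfll([23.5, 29.1, 21.0, 35.2, 26.8]))  # e.g. BMI values for five patients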
Three medical experts are invited to convert the crisp numbers of conditional attributes (BMI, Glucose, Insulin, Leptin, and Adiponectin) into DHHFLTS based on the linguistic term scales above. These three experts have the same level of importance. Only the data of the first 30 patients are required to be transformed by the experts. In Fig. <ref>, Method 1 and Method 2 are used as two different types of GDM methods for the comparison of program ranking. The proposed method in this paper has 30 alternatives divided and completed the ranking after the 2nd decision-level. The presented ranking results have a similar trend to the other two methods, while only 1st and 2nd decision-levels of S3W-GDM of computation are used, greatly improving the efficiency of the decision-making process. However, the difference is due to the fact that the two compared methods use more comprehensive decision-making information, while the proposed method only uses partial information. Therefore, all conditional attributes were added using the same parameters to execute the algorithm. As can be observed from the bottom subplot in Fig. <ref> the decision on the alternative is more rational after using the information from all the conditional attributes. In particular, Method 1 shows a trend more similar to the method proposed in this paper in terms of the ranking of some alternatives. For GDM problems with a large number of alternatives, it is more rational and intuitive to perform further decision-making by classifying the results of the alternatives. Methods 1 and 2 lack a relevant process that brings semantic interpretation to the decision results. The S3W-GDM method fills such a gap. The specific classification of this data set will be shown in the sensitive analysis. Type 3. Comparison of emergency logistics provider selection This evaluation data set contains 5 emergency logistics providers in the food and beverage industry U = {x_1,x_2,x_3,x_4,x_5} and 6 evaluation attributes: cost(a_1), product level(a_2), quick response ability of supply(a_3), quick response ability of transport(a_4), management(a_5), and reputation(a_6). The attribute weight results of optimization model-based distance under DHHFLTS environment is used in the comparison here to eliminate the impact might happen. The vector of weight is w = {0.1011,0.1017,0.2591,0.1305,0.165,0.2426}. Table <ref> shows a comparison of the ranking results of the several methods under this data set. Method 3 applied the normalized projection-based distance and bidirectional projection to DHHFLTS. These improvements in the distance measurements bring about better superiority and rationality in the ranking results. This ranking result is consistent with the traditional Methods 4-6. Method 7, as well as the methods proposed in this paper, are then consistent. The difference lies mainly in the ranking of alternatives x_1, x_3, x_5. However, there is a lack of a dynamic decision-making process and quick initial judgment for the characteristics of emergency decision-making. It is well known that emergency decision-making has a higher demand for efficiency. The multi-level S3W-GDM provides a more conclusive semantic interpretation after the completion of the 3rd decision-level ranking. Although there are differences from most models, x_4 as the best supplier is reflected in the positive domain of the partition where it is located. It is worth noting that the method proposed in this paper uses only half of the decision information at this decision-level. 
For emergency decision-making scenarios, S3W-GDM provides a priori a solution as a decision-making result, supporting the rapid development of the action. Type 4. Comparison of Sichuan liquor brand assessment This evaluation data set contains 5 Sichuan liquor brands: Wuliangye(x_1), Luzhou Old Cellar(x_2), Ichiro liquor (x_3), Tuopai liquor(x_4) and Jian Nan Chun(x_5). The cognitions of consumers are used as a starting point to investigate four attributes: product price(a_1), product classification(a_2), consumer group(a_3), and distribution channel(a_4). The attributes weights vector is w = {0.1,0.3,0.2,0.4}. From the Table <ref>, these methods can be used to examine the liquor brand data set to improve the rationality of the decision results. In the analysis of different methods' rankings of alternatives, it is observed that Method 4-6 display identical ranking patterns, suggesting similarities in their evaluation criteria. Method 7 and S3W-GDM provide an alternative ranking result reveal that these methods may employ different decision-making logics or prioritize differently. S3W-GDM presents a ranking almost similar to that of Method 7, both after the computation of all attribute subsets has been considered and only after the completion of the 3rd decision-level. The only difference is the position of alternative x_4. The ranking results obtained by the S3W-GDM method after the 4th decision-level differ from Methods 4-6 in the order of alternatives x_1 and x_5. Methods 4-6 tend to prefer Wuliangye(x_1) as the top alternative, indicating its widespread acceptance, while the varying rankings of Jian Nan Chun(x_5) reflect significant differences in evaluations across methods. The reason for the differences in the methods proposed in this paper goes back to the setup of this data set itself. This evaluation comes from consumers' perceptions of Sichuan liquor brands. Wuliangye(x_1) is well known as a high-end brand. However, Jian Nan Chun(x_5), a mid-to-high end brand, is currently showing a rapid growth trend, becoming the “meat and potatoes" of the liquor market. On the one hand, its price and grade are more in line with the rational consumption concepts of young people. On the other hand, the occupation position in the market rises higher than the space of the low-end categories. Evaluating the condition attributes of Sichuan liquor brand, the largest weight is the distribution channel(a_4), and Jian Nan Chun(x_5) does have better distribution channels, gradually becoming the best occupied brand in the mid-to-high-end market. Based on the perspective of distribution channels, the findings of the S3W-GDM method should be of better reference value in providing adjustment strategies for Sichuan liquor enterprises. §.§ Sensitivity analysis The subsection will display how parameter variation affects the decision results. sensitivity of Breast Cancer Coimbra Data Set. For presentation purposes, the Breast Cancer Coimbra Data Set with more alternatives is used here as the sensitivity analysis. The main study is the variation of the Gaussian kernel parameter σ and the neighborhood cut parameter κ for different relative gain parameters η. In Introduction <ref> and Section <ref>, extensive discussion has taken place regarding the parameter of relative gains. The typical value range for this parameter is [0, 1]. In this data set, the experiments are conducted through varying η from 0 to 1 with an interval of 0.1. 
The conclusion is that when η≤ 0.5, all alternatives are completely classified into the positive and negative regions at the 1st decision-level, which is evidently unreasonable. Given the large number of alternatives, although the proposed method aims to enhance decision efficiency, the limited decision information received at the 1st decision-level results in significant errors if classification is based solely on the first most important conditional attribute. Consequently, this study does not consider η≤ 0.5 for this data set. When η is 0.6, although the classification of alternatives begins to show a general pattern, it remains unstable with variations in σ and κ, and subsequential changes do not follow a consistent pattern. When η≥ 0.7, the classification of alternatives stabilizes, and the subsequent variations in σ and κ conform to the discussions in Section <ref>. The purpose of this work is to observe σ and κ that are related to the sequential process, namely the gaussian kernel parameter and the neighborhood cut parameter with the relative gain parameter of 0.7, 0.8, and 0.9. Then the interval in which these two parameters are observed is [0,1], with a step size of 0.1. Fig. <ref> provides a detailed illustration of the variations in σ and κ. Subfigures (a), (b), and (c) demonstrate the variations in the number of boundary region alternatives as a function of the combination of σ and κ. Even with different parameter settings, the boundary region alternatives exhibit a stable pattern at this decision-level. Notably, the closer the combination of σ and κ approaches (1, 1), the greater the number of alternatives in the boundary region. This indicates that the decision conditions become stricter, aligning with the semantic interpretation of variations in these two parameters. Next, to control for variables, the classification results of 30 patients under η = 0.7 are presented in Fig. <ref>. This is to illustrate the classification trends under different parameter settings. Fig. <ref> illustrates the classification of alternatives under four different combinations of σ and κ. The four sets of parameter combinations are (0.9,0.9), (0.8,0.8), (0.8,0.7), and (0.9,0.7). Each parameter combination results in distinct classification trends. This is due to the ability to adjust and vary σ and κ at each decision-level. In this study, the value of σ is fixed to maintain sensitivity in the calculations. By adjusting the changes in κ at each decision-level, ensuring that the κ values gradually increase, the accuracy of the decision results is ensured, leading to diverse classification results while maintaining the overall trend. In each set of sequential processes, κ in this way changes at the decision-level in the interval [0.8,1]. Regarding the properties of these two parameters, a σ value closer to 1 results in a more sensitive Gaussian kernel function, while a κ value closer to 1 indicates stricter equivalence division among alternatives. Both settings contribute to the accuracy of the final classification results. In general, the yellow area tends to shrink as the decision-making process progresses. Conversely, the blue and red areas may either remain constant or expand as the decision-making stages advance. These two performance characteristics accurately reflect the actual decision-making situation. Furthermore, the more alternatives that are divided, the more varied the results will be. 
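The grid sweep behind these observations can be sketched as follows; run_model stands for one complete multi-level pass of the proposed method returning the boundary-region alternatives, and is only a hypothetical placeholder here.

import itertools

def sweep(run_model, etas=(0.7, 0.8, 0.9), step=0.1):
    """Count boundary-region alternatives over a (sigma, kappa) grid for each eta.

    run_model(sigma, kappa, eta) is assumed to execute one full S3W-GDM pass and to
    return the set of alternatives left in the boundary region; it is a placeholder.
    """
    grid = [round(k * step, 1) for k in range(int(1 / step) + 1)]   # 0.0, 0.1, ..., 1.0
    counts = {}
    for eta in etas:
        for sigma, kappa in itertools.product(grid, grid):
            counts[(eta, sigma, kappa)] = len(run_model(sigma, kappa, eta))
    return counts

# demo with a dummy model in which stricter settings (sigma, kappa close to 1) defer more alternatives
demo = sweep(lambda s, k, e: set(range(int(10 * s * k))))
print(min(demo.values()), max(demo.values()))   # 0 and 10 for the dummy model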
Different parameter combinations can thus be chosen to match the specific decision-making scenario, and the sensitivity analysis demonstrates the rationality of the model's classifications. §.§ Discussion The method proposed in this paper is, to a large extent, a dynamic decision-making model. Unlike static decision-making, which collects all the decision information at once, multi-level S3W-GDM makes decisions by refining the granularity of the decision information in a level-by-level progression; this improves the efficiency of decision-making on the one hand, and on the other provides a buffer that reduces the risk of erroneous decisions when the available information is insufficient to support them. The differences between the proposed method and others are as follows: (1) Classical GDM methods <cit.> or multi-attribute decision-making methods <cit.> usually use 2WD methods. These methods rank alternatives based on scores to produce decision results, but lack a semantic interpretation of those results. In contrast, the proposed method employs an S3WD process that provides meaningful explanations for situations such as medical diagnosis and emergency logistics service provider selection. (2) In classical GDM methods <cit.>, information fusion is typically handled by aggregation operators that combine all experts' information at once. Non-operator-based information fusion methods consider fusion from different perspectives but still follow a holistic approach. The proposed S3W-GDM method, however, combines the concept of multi-granularity with conventional fusion thinking. It introduces a coarse-to-fine granularity approach to information fusion, where initial decisions are made using coarse-grained information, followed by progressively finer-grained analysis to refine the decisions. This multi-level fusion approach enhances both the efficiency and the accuracy of the decision-making process. For breast cancer diagnosis, the S3W-GDM method uses a multi-level granularity approach that first focuses on the key attributes and then continuously refines the decision, improving decision-making efficiency. For emergency logistics provider selection, the S3W-GDM method offers rapid a priori solutions, which is crucial in emergencies: it achieves a stable classification by the 3rd decision-level with only half of the decision information, enhancing efficiency under time constraints. (3) Most of the methods <cit.> compared in this study do not adequately address the qualitative expression of decision preferences, particularly under the DHHFLTS environment. Classical methods lack work on the transition from qualitative evaluations to quantitative expressions, leading to potential biases in decision-making. The proposed S3W-GDM method addresses this gap by effectively capturing and incorporating qualitative decision preferences into the decision-making process, ensuring a balanced and comprehensive evaluation. The advantages of the proposed method over others are summarized as follows: (1) By incorporating an S3WD process and multi-granularity information fusion, the S3W-GDM method provides comprehensive and interpretable decision results. This approach not only ranks the alternatives but also classifies them into positive, boundary, and negative regions, offering a clear semantic interpretation of the results. (2) The S3W-GDM method uses a coarse-to-fine granularity approach, pioneering a new model of information fusion.
Initial decisions are made swiftly using coarse-grained key attributes, providing a rapid preliminary assessment. Further refinements are then applied to the alternatives that require additional analysis, utilizing finer-grained information. This multi-level fusion ensures that assessments are balanced and comprehensive, and it provides an effective way to use qualitative evaluation information. (3) The novel S3WD model for DHHFLTS addresses the uncertainty of decision alternatives by redesigning the computation of conditional probabilities so that it does not rely on decision attributes, which improves accuracy. It also incorporates relative perceived utility into the evaluation of each alternative, capturing individual psychological behaviour and further improving decision-making accuracy. § CONCLUSION AND FUTURE WORK With the progress of society and information science, GDM problems are becoming increasingly complex. Classical GDM methods, which rely on aggregation operators to fuse information from different attributes and decision-makers at once, significantly increase the decision burden and constrain efficiency. Moreover, these problems often exhibit vagueness, hesitation, and variation, adding to their complexity. Existing related works rarely take these characteristics into account while improving decision-making efficiency by changing the paradigm of information fusion. Accordingly, the work of this paper is summarised as follows. First, a neighborhood relation matrix is constructed from the derived similarity degrees between alternatives and combined with the outranking relation to refine the conditional probability calculations. Then, a new “loss function" model for decision risk is designed based on relative perceived utility, incorporating regret theory (RT); this includes defining expert decision tables and the multi-level granular extraction and aggregation of evaluations. These two steps establish the foundation of the novel S3WD of DHHFLTS model. Further, the paper demonstrates the most efficient operator for aggregation in the decision-level information fusion process, defines a multi-level granular structure, and proposes decision-making strategies and semantic interpretations for each level. The efficiency and rationality of the established method are validated through an illustrative example and comparative analyses with other methods. In future research, the following three points will be emphasized. With the development of information science, the volume of data involved in solving complex problems keeps growing. Therefore, DHHFLTS, as a form of computing with natural-language words, needs to be extended to large-scale group decision-making <cit.>, to handle larger volumes of data through machine learning or deep learning algorithms <cit.>, and to promote the integration of computer science and technology with management science and engineering. Additionally, as the volume of data grows, how to allocate computing resources effectively will also become an important issue. Finally, since the ultimate goal of any decision-making process is to reach a consensus <cit.>, future research will center on multi-granularity consensus models. § ACKNOWLEDGEMENT This work was supported by the National Natural Science Foundation of China (No. 62276038, No.
62221005), the Joint Fund of Chongqing Natural Science Foundation for Innovation and Development under Grant (No. CSTB2023NSCQ-LZX0164), the Chongqing Talent Program (No. CQYC20210202215), the Chongqing Municipal Education Commission (HZ2021008), and the Doctoral Talent Training Program of Chongqing University of Posts and Telecommunications (No. BYJS202213).
http://arxiv.org/abs/2406.17920v1
20240625200602
Comment on "Fully gapped superconductivity and topological aspects of the noncentrosymmetric superconductor TaReSi"
[ "Andrzej Ptok" ]
cond-mat.supr-con
[ "cond-mat.supr-con", "cond-mat.mtrl-sci" ]
[e-mail: ]aptok@mmj.pl § ABSTRACT In a recent paper [T. Shang et al., http://doi.org/10.1103/PhysRevB.107.224504Phys. Rev. B 107, 224504 (2023)], the authors study the physical properties of TaReSi compounds having Ima2 structure. This noncentrosymmetric structure is proposed to be the source of topological properties for mentioned compound. However, for a correct description of topological features, it is important to recognize the correct structure of the compound at low temperature. In this Comment, we show that Ima2 cannot be realized by TaReSi and is unstable at low temperature. The Ima2 structure contains the soft modes at S (1/2,0,0) point, which leads to the stable structure with Cm symmetry. Notably, the stable Cm system also has a noncentrosymmetric structure, which can be the actual source of topological properties. Comment on “Fully gapped superconductivity and topological aspects of the noncentrosymmetric superconductor TaReSi” Andrzej Ptok July 1, 2024 ===================================================================================================================== In the recent paper T. Shang et al. <cit.> discusses the properties of TaReSi compound, exhibiting the superconductivity below T_c = 5.5 K. The presented theoretical investigation is based on the assumption, that this compound realizes an orthorhombic TiFeSi-like structure, with Ima2 symmetry (space group No. 46), presented on Fig. <ref>(a). This statement is supported by powder XRD at normal conditions <cit.>, i.e. temperatures much higher than the superconducting state. Nevertheless, the symmetry of TaReSi can differ in the low temperature regime, which can affect the electronic and topological properties of this compound. In this Comment, based on the ab initio calculations, we discuss the stability of TaReSi in the range of low temperatures. *Computational details.— The ab initio calculations (DFT) are performed using the projector augmented-wave (PAW) potentials <cit.> implemented in the Vienna Ab initio Simulation Package (Vasp) code <cit.>. Calculations are made within the generalized gradient approximation (GGA) in the Perdew, Burke, and Ernzerhof (PBE) parameterization <cit.>. The energy cutoff for the plane-wave expansion was set to 350 eV. Optimizations of structural parameters (lattice constants and atomic positions) are performed in the primitive unit cell using the 10 × 10 × 6 k–point grid in the Monkhorst–Pack scheme <cit.> As a break of the optimization loop, we take the condition with an energy difference of 10^-6 eV and 10^-8 eV for ionic and electronic degrees of freedom. The optimized system symmetry was analyzed using FindSym <cit.>. The dynamical properties were calculated using the direct Parlinski–Li–Kawazoe method <cit.>, implemented in the Phonopy package <cit.>. Within this method, the interatomic force constants (IFC) are calculated from the forces acting on the atoms after displacement of individual atoms inside a supercell. We perform these calculations using the supercell containing 2 × 2 × 2 conventional cells, which corresponds to 48 formula units, and reduced k-grid 3 × 3 × 3. *Stability of TaReSi at low temperatures.— The phonon dispersion curves for symmetry Ima2 are presented in Fig. <ref>(c). As we can see, within the phonon spectrum there exist an imaginary soft mode (presented as negative frequencies), which indicated the dynamical instability of TaReSi with Ima2 at low temperatures. 
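Since the instability argument rests entirely on the presence of imaginary modes (plotted as negative frequencies), the check itself is simple. The following short sketch flags soft modes in a frequency array of the kind produced by a phonon band-structure calculation; the numbers are placeholders and not the computed TaReSi spectrum.

import numpy as np

def soft_modes(frequencies, labels, tol=1e-3):
    """List the q-points whose phonon branches contain imaginary modes.

    frequencies : array of shape (n_qpoints, n_branches) in THz, with imaginary
                  frequencies stored as negative numbers (the plotting convention used above).
    labels      : q-point labels, one per row of `frequencies`.
    """
    freq = np.asarray(frequencies, dtype=float)
    unstable = freq.min(axis=1) < -tol                   # any branch below the tolerance
    return [(lab, float(freq[i].min())) for i, lab in enumerate(labels) if unstable[i]]

# placeholder values only
freqs = [[-1.1, 0.8, 2.1],   # Gamma
         [ 0.3, 1.1, 2.4],   # X
         [-1.4, 0.9, 1.8]]   # S
print(soft_modes(freqs, labels=["Gamma", "X", "S"]))     # -> [('Gamma', -1.1), ('S', -1.4)]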
However, the atom displacement induced by the soft modes can be used to predict the true group state of the system. In this Comment, we present analyses of the possible stable structure of TaReSi at low temperature. For further analysis, we take soft modes at Γ (0,0,0) and S (1/2,0,0) points (i.e. soft modes with the largest magnitude of frequencies). First, it should be noted, that the soft mode from Γ point does not change the size of primitive unit cell, while the one at the S point leads to its doubling along the lattice vector a. The displacement of the atoms introduced by the soft modes should lead to new structures with a total energy lower than that of the Ima2 structure. Indeed, the introduction of atom displacement induced by both soft modes leads to energy lowering, which is clearly seen in Fig. <ref>. The symmetries of the structures induced by the both soft modes (before and after structure optimization) are recognized as Cm structures (space group No. 8) [details about the optimized structure can be found in Supplemental Material (SM) [See Supplemental Material at [URL will be inserted by publisher] for Crystallographic Information File (CIF) for optimized structures with Cm symmetry.]]. Unfortunately, the displacements of the atoms are related to the entire structure of the compound. In the case of the soft mode at the Γ point (red line in Fig. <ref>), the energy of the system is minimized by the structure when the atoms of Ta, Re, and Si are shifted by 0.025 Å, 0.101 Å, and 0.046 Å, respectively. Similarly, for the displacement of atoms induced by the soft mode at the S point (blue line in Fig. <ref>), these values are 0.061 Å, 0.153 Å, and 0.098 Å, for Ta, Re, and Si atoms, respectively. In order to investigate the dynamical stability of “new” structure, we reanalyzed the corresponding phonon spectra. The phonon dispersion curves for the structure induced by the soft mode at the point Γ are presented in Fig. <ref>. In this case, the phonon spectra still possess the soft mode, which is expected in context of the result presented in Fig. <ref> – there is structure (induced by the soft mode from the S point) with lower energy. In the case of this structure, the phonon spectra do not exhibit any imaginary soft modes (Fig. <ref>), and the structure is stable in the dynamical sense. In the optimized (stable) structure of TaReSi with Cm symmetry, the atoms are located in 26 non-equivalent positions (see crystal structures in SM <cit.>): Ta, Re, and Si atoms contain 12, 6, and 8 non-equivalent positions, respectively. Interestingly, Ta atoms are located only in 2a Wyckoff positions, Re atoms only in 4b positions, while Si atoms are contained in both 2a and 4b positions. As a result, conventional unit cell contains 24 formula units, which correspond to two primitive unit cells. Similarly to the case of the previously discussed unstable Ima2 structure <cit.>, stable Cm structure is noncentrosymmetric, which allows for the realization of the antisymmetric spin–orbit coupling <cit.>. The presented situation can be compared with NbReSi, which is reported as Ima2 <cit.> or P6̅2m <cit.> structure. In this case, there is also a soft mode for Ima2, and, undoubtedly, the system forms a P6̅2m structure <cit.>. Contrary to this, TaReSi is recognized as Ima2 under normal conditions, whereas the existing soft mode leads to the stable Cm symmetry in case of low temperature regime. 
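The paper identifies the symmetry of the soft-mode-distorted structures with FindSym; as a rough illustration of the same step, the sketch below uses the spglib Python bindings instead (an assumption for illustration, not the authors' tool). The lattice, fractional positions, and atomic numbers are placeholders standing in for the relaxed distorted cell, e.g. the one provided as a CIF in the Supplemental Material.

import numpy as np
import spglib

# Placeholder cell for the soft-mode-distorted structure: rows of `lattice`
# are lattice vectors, `positions` are fractional coordinates, `numbers`
# are atomic numbers (Ta = 73, Re = 75, Si = 14).
lattice = np.diag([5.0, 5.0, 7.0])
positions = np.array([[0.00, 0.00, 0.00],
                      [0.50, 0.50, 0.00],
                      [0.50, 0.00, 0.50]])
numbers = [73, 75, 14]
cell = (lattice, positions, numbers)

# Scanning the tolerance helps separate a genuine symmetry lowering
# (Ima2 -> Cm) from numerical noise in the relaxed coordinates.
for symprec in (1e-5, 1e-3, 1e-1):
    print(f"symprec = {symprec:g}:", spglib.get_spacegroup(cell, symprec=symprec))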
*Electronic properties.— Finally, for the optimized stable Cm structure, we calculate the electronic band structure (Fig. <ref>). The lifting of the band degeneracy by the spin–orbit coupling is clearly visible (cf. orange and blue lines in Fig. <ref>). The calculated band splitting for the Cm structure (600 meV) is much greater than that reported for the Ima2 structure (300 meV) <cit.>. In the presence of the spin–orbit coupling (blue lines in Fig. <ref>), the electronic band structure hosts doubly degenerate Weyl points. It is worth noting that for the Ima2 symmetry <cit.>, there are no Kramers nodal lines along the high-symmetry lines. Nevertheless, the existence of the mirror symmetry plane { m_010 | 0 } within the Cm symmetry allows for the realization of Kramers nodal lines in the mirror planes. Such nodal lines form closed contours between the high-symmetry points Γ, A, M, or Y (represented by the black contours in Fig. <ref>). The vanishing band splitting (no gap) between pairs of bands split by the spin–orbit coupling is clearly visible and is not limited to the high-symmetry points <cit.>. *Summary and conclusions.— Summarizing, in this Comment, based on DFT calculations, we establish that TaReSi cannot form a stable structure with Ima2 symmetry in the low-temperature range. This follows from the fact that the phonon spectra calculated for Ima2 TaReSi contain imaginary-frequency soft modes. A precise examination of the realized symmetry is necessary for the discussion of the topological properties of TaReSi, which is the main intent of the presented Comment. Here, we present calculations suggesting that the Cm structure is preferred in the low-temperature regime. TaReSi with Cm symmetry is still noncentrosymmetric, which is related to the existence of antisymmetric spin–orbit coupling, as claimed in Ref. <cit.>. Additionally, the spin–orbit coupling strength for the Cm symmetry is much larger than that reported for Ima2, which can indeed support the realization of topological superconductivity in TaReSi <cit.>. The absence of inversion symmetry, while the mirror symmetry is preserved, is a source of Kramers nodal lines <cit.>. In such a case, the vanishing of the band splitting introduced by the spin–orbit coupling forms closed contours between high-symmetry points. The close vicinity of these band crossings to the Fermi level can be important for the topological properties of TaReSi at low temperatures, as indicated by the experimental observations reported in Ref. <cit.>. *Acknowledgments.— Some figures in this work were rendered using the Vesta <cit.> software. A.P. is grateful to the Laboratoire de Physique des Solides in Orsay (CNRS, University Paris Saclay) for hospitality during the work on this project. This work was supported by the National Science Centre (NCN, Poland) under Project No. 2021/43/B/ST3/02166.
http://arxiv.org/abs/2406.17672v2
20240625160259
SpecMaskGIT: Masked Generative Modeling of Audio Spectrograms for Efficient Audio Synthesis and Beyond
[ "Marco Comunità", "Zhi Zhong", "Akira Takahashi", "Shiqi Yang", "Mengjie Zhao", "Koichi Saito", "Yukara Ikemiya", "Takashi Shibuya", "Shusuke Takahashi", "Yuki Mitsufuji" ]
cs.SD
[ "cs.SD", "eess.AS" ]
SpecMaskGIT: Masked Generative Modeling of Audio Spectrograms for Efficient Audio Synthesis and Beyond Marco Comunità, Zhi Zhong, Akira Takahashi, Shiqi Yang, Mengjie Zhao, Koichi Saito, Yukara Ikemiya, Takashi Shibuya, Shusuke Takahashi, Yuki Mitsufuji July 1, 2024 ========================================================================================================= § ABSTRACT Recent advances in generative models that iteratively synthesize audio clips have sparked great success in text-to-audio synthesis (TTA), but at the cost of slow synthesis speed and heavy computation. Although there have been attempts to accelerate the iterative procedure, high-quality TTA systems remain inefficient due to the hundreds of iterations required in the inference phase and the large number of model parameters. To address these challenges, we propose SpecMaskGIT, a lightweight, efficient yet effective TTA model based on the masked generative modeling of spectrograms. First, SpecMaskGIT synthesizes a realistic 10 s audio clip in fewer than 16 iterations, an order of magnitude fewer than previous iterative TTA methods. As a discrete model, SpecMaskGIT outperforms larger VQ-Diffusion and auto-regressive models in the TTA benchmark, while running in real time with only 4 CPU cores or even 30× faster with a GPU. Next, built upon a Mel-spectrogram latent space, SpecMaskGIT has a wider range of applications (e.g., zero-shot bandwidth extension) than similar methods built on wave-domain latents. Moreover, we interpret SpecMaskGIT as a generative extension of previous discriminative audio masked Transformers, and shed light on its audio representation learning potential. We hope our work inspires the exploration of masked audio modeling toward further diverse scenarios. § INTRODUCTION Text-to-audio synthesis (TTA) allows users to synthesize realistic audio and sound-event signals from natural language prompts. TTA can assist sound design and editing in the music, movie, and game industries, accelerating creators' workflows <cit.>. Therefore, TTA has attracted rising attention in the research community. Recent advances in deep generative models, especially iterative methods such as diffusion <cit.> and auto-regressive models <cit.>, have brought significant gains in sound quality and controllability for TTA tasks, but at the cost of slow synthesis speed. Since the synthesis speed of iterative methods is dominated by the number of iterations required at inference, techniques have been introduced to reduce iterations, e.g., higher compression rates of raw audio signals <cit.> or more efficient diffusion samplers <cit.>. Nevertheless, these iterative methods remain slow in synthesis speed and demanding in computing resources, as they typically require hundreds of iterations to synthesize a short audio clip. Moreover, the runtime of a single iteration grows with the huge model size. To further improve the efficiency of audio synthesis, Garcia et al. introduced the MaskGIT <cit.> synthesis strategy from computer vision to the realm of audio and proposed VampNet <cit.>. Although VampNet can inpaint a 10-second clip with 24 iterations, 6 seconds are still needed on a GPU <cit.>, which is heavy for non-GPU environments. Moreover, VampNet is not compatible with text prompts or TTA tasks. Concurrently with our work, MAGNeT extended VampNet to text-conditional audio synthesis <cit.>. However, the method is less efficient, as it requires 180 iterations, which is even heavier than some diffusion models that only require 100 iterations <cit.>.
Since both VampNet and MAGNeT work in a wave-domain latent space, it is difficult to conduct frequency-domain inpainting tasks such as bandwidth extension (BWE) in a zero-shot manner. Besides the aforementioned limitations, the audio representation learning potential of a masked generative Transformer has not been investigated yet. As a summary, an audio synthesis method that is compatible with text prompts, highly efficient in synthesis speed, and flexible for various downstream tasks is yet to be explored. To this end, we propose SpecMaskGIT, an efficient and flexible TTA model based on the masked generative modeling of audio spectrograms, to address the above challenges. Our contributions lie in the following aspects. * Efficient and effective TTA. SpecMaskGIT synthesizes a realistic 10-second audio clip by less than 16 iterations, which is one order-of-magnitude smaller than previous iterative methods shown in Fig. <ref>. As a discrete generative model, SpecMaskGIT outperforms larger VQ-Diffusion (DiffSound <cit.>) and auto-regressive (AudioGen-base <cit.>) models in a TTA benchmark, while being real-time with 4 CPU cores shown in Fig. <ref> or even 30× faster on a GPU. * Flexibility in downstream tasks. SpecMaskGIT is interpreted and implemented as a generative extension to previous discriminative audio masked Transformers <cit.>. The masked spectrogram modeling principle and architecture design similar to audio MAE <cit.> is believed to have contributed to the representation learning potential of SpecMaskGIT. Unlike prior arts about finetuning MAE-like architectures for BWE <cit.>, SpecMaskGIT enabled BWE in a zero-shot manner. We hope this efficient, effective and flexible framework pave the way to the exploration of masked audio modeling toward further diverse scenarios <cit.>. [Demo: <https://zzaudio.github.io/SpecMaskGIT/index.html>] § RELATED WORKS Synthesizing audio signals in raw waveform is challenging and computationally demanding <cit.>. Therefore, the mainstream approach to audio synthesis is to first generate audio in a compressed latent space, and then restore waveforms from latent representations. Auto-regressive models such as Jukebox <cit.>, AudioGen <cit.> and MusicGen <cit.> use vector-quantized (VQ) variational auto-encoders (VAE) <cit.> to tokenize raw waveforms into a discrete latent space. While AudioGen and MusicGen use a higher compression rate than Jukebox, 500 iterations are required to synthesize a 10-second clip, slowing down the speed. Advances in audio representation learning such as audio MAE (<cit.>) indicate that Mel-spectrogram is an effective compression of raw audio signals, as it emphasizes acoustic features of sound events while maintaining sufficient details to reconstruct raw waveforms. Inspired by the above success of representation learning, several methods used discrete <cit.> or continuous <cit.> diffusion models upon the latent Mel-spectrogram space created by a VAE or SpecVQGAN <cit.>. These diffusion models require up to 200 iterations for high-fidelity synthesis, which is still challenging for low-resource platforms and interactive use cases. While distilling a diffusion model can effectively reduce the required iterations <cit.>, we limit our discussion to non-distilled methods for a fair comparison. For Mel-based synthesis methods, waveforms are reconstructed from Mel-spectrogram with a neural vocoder, such as HiFiGAN <cit.> or BigVSAN <cit.>. 
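As a concrete illustration of the Mel-spectrogram front end that the latent-space methods above build on, here is a small torchaudio sketch. The STFT parameters are illustrative assumptions (only the 22.05 kHz sampling rate and 80 Mel bins used later in this paper are taken from the text), and the dB clipping window mirrors the SpecVQGAN-style normalization described in the next section rather than any specific implementation.

import torch
import torchaudio

sample_rate = 22050
mel = torchaudio.transforms.MelSpectrogram(sample_rate=sample_rate,
                                           n_fft=1024,        # assumed value
                                           hop_length=256,    # assumed value
                                           n_mels=80)
to_db = torchaudio.transforms.AmplitudeToDB(stype="power")

wave = torch.randn(1, sample_rate * 10)          # stand-in for a 10 s clip
spec_db = to_db(mel(wave))                       # (1, 80, frames)

# Clip to [-80 dB, 20 dB] and rescale to [-1, 1] before tokenization.
spec_db = spec_db.clamp(-80.0, 20.0)
spec_norm = (spec_db + 80.0) / 100.0 * 2.0 - 1.0
print(spec_norm.shape, spec_norm.min().item(), spec_norm.max().item())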
In pursuit of higher synthesis efficiency, VampNet <cit.> and the concurrent MAGNeT <cit.> introduced the parallel iterative synthesis strategy from MaskGIT<cit.>. MaskGIT, originally proposed for class-conditional image synthesis tasks in <cit.>, uses a bi-directional Transformer, instead of the uni-directional counterpart in auto-regressive methods, to reduce the required number of iterations. Although VampNet and MAGNeT reduced the number of iterations compared to their auto-regressive counterparts, VampNet does not support text prompts, while MAGNeT takes 180 iterations, which is even heavier than some diffusion models that only require 100 iterations <cit.>. Moreover, it is difficult for methods built upon wave-domain latent space to address frequency domain tasks such as BWE, limiting their applications. § SPECMASKGIT The efficiency, effectiveness and flexbility of SpecMaskGIT is the consequence of a combination of efforts, including the high compression rate in the tokenizer, the small model size, fast synthesis algorithm, among others. §.§ Spectrogram Tokenizer and Vocoder A modified SpecVQGAN <cit.> is trained to tokenize non-overlapping 16-by-16 time-mel patches into discrete tokens, and recover the tokens back to Mel-spectrogram as in Fig. <ref>. Reconstructed Mel-spectrograms are then transformed to waveforms by a pre-trained vocoder. On top of the 3.2× compression offered by the wave-to-mel transform in our configuration, SpecVQGAN further offers 256× compression of the spectrogram, resulting in total over 800× compression to the raw waveform, effectively reducing the number of tokens to synthesize. We utilize the standard Mel transform widely used in vocoders <cit.> for optimal Mel computation, as hyper-parameters of Mel transform has an impact on tokenizer's performance <cit.>. To stabilize the training, we keep the spectrogram normalization in the original SpecVQGAN, which clips Mel bins lower than -80 dB or louder than 20 dB, and then maps the spectrogram into the range between -1.0 to 1.0. Our modified SpecVQGAN is shown competitive in reconstruction quality in Sec. <ref>. §.§ Masked Generative Modeling of Spectrograms We train a masked generative Transformer upon the discrete latent space created by the pretrained SpecVQGAN as in Fig. <ref>. First, the pretrained CLAP encoder maps the input audio to a semantic embedding aligned with its corresponding text descriptions. Meanwhile, the input audio is tokenized by SpecVQGAN. Finally, similar to representation learning such as audio MAE <cit.>, a bi-directional Transformer is trained to reconstruct Mel-spectrogram token sequences from a randomly masked input. There are two major differences from audio MAE. First, the masking ratio is NOT a fixed value but sampled on-the-fly from a truncated Gaussian distribution that is centered at 55% <cit.> and ranges from 0% to 100% <cit.>. As a result, although in each training step SpecMaskGIT behaves similarly to audio MAE, it learns the training data distribution from various masking ratios, hence gaining the ability to iteratively refine audio tokens by gradually decreasing the masking ratio across multiple iterations, which is explained in Sec. <ref>. The other difference lies in the loss function. Audio MAE works on raw Mel-spectrograms, thus the mask reconstruction is optimized by mean square error. 
However, SpecMaskGIT works in a discrete latent space, which means the reconstruction of a masked position evolves to the retrieval of a correct code from the codebook of SpecVQGAN, i.e., a multi-class single-label classification procedure. Therefore, the loss function becomes the cross entropy (CE) loss with label smoothing equal to 0.1. Following audio MAE, those visible positions in the input are not considered in the loss calculation: Loss = CE(prediction[mask], label[mask]). §.§ Text Conditioning via Sequential Modeling Similarly to <cit.>, we train SpecMaskGIT without audio-text pairs by using a pretrained CLAP model <cit.>, for which audio and text embeddings are aligned in a shared latent space. Leveraging such alignment, after training with the audio branch of CLAP in Fig. <ref>, we can directly condition our pretrained model with the text branch as in Fig. <ref>. We use a publicly available CLAP checkpoint (“630k-audioset-best.pt” <cit.>) for better reproducibility. Although the above design is inspired by AudioLDM <cit.>, SpecMaskGIT is different in the way to inject CLAP conditions. Besides the FiLM mechanism (<cit.>) used in AudioLDM, prior arts inject text conditions into the generative model via the cross-attention mechanism <cit.>, even for methods based on sequential modeling such as AudioGen <cit.> and MAGNeT <cit.>, which inevitably involves efforts to modify basic DNN modules. We believe that reusing identical DNN modules, such as the Vision Transformer (ViT) <cit.>, across different tasks is beneficial to efficient development, so we choose to achieve text-conditional audio synthesis by pure sequential modeling, i.e., appending the CLAP embedding to the input sequence of the Transformer. As a result, SpecMaskGIT can be implemented by the same ViT used in audio MAE <cit.>, and thus we interpret SpecMaskGIT as a generative extension to previous discriminative ways of masked spectrogram modeling. We hypothesize the masked spectrogram modeling and ViT implementation similar to audio MAE has contributed to the representation learning potential of SpecMaskGIT, as is shown in Sec. <ref>, While the common practice in <cit.> is to use a learnable but input-independent token to indicate which parts in the sequence are masked (“M” in Fig. <ref>), the mask reconstruction task is challenging as the input-independent mask offers no hint for a better reconstruction. To further guide the mask reconstruction procedure, we propose to directly use the input-dependent CLAP embedding as a conditional mask (“C” in Fig. <ref>), which offers semantic hints like “a dog barking sound” to the model, and is found beneficial to TTA performance in Sec. <ref>. §.§ Iterative Synthesis with Classifier-free Guidance We follow the parallel iterative synthesis strategy proposed in MaskGIT <cit.> in general, but employ classifier-free guidance (CFG) <cit.> to improve the synthesis quality. This iterative algorithm allows SpecMaskGIT to synthesize multiple high-quality tokens at each iteration, reducing the number of iterations to a value one order-of-magnitude smaller than previous TTA methods. To enable CFG, we replace CLAP embedding with the learned mask token on a random 10% of training steps. At inference phase, both the conditional logit ℓ_c and unconditional logit ℓ_u for each masked token are computed. The final logits ℓ_g are made by a linear combination of the two logits based on t, the guidance scale: ℓ_g = ℓ_u + t(ℓ_c - ℓ_u). Intuitively, CFG balances between diversity and audio-text alignment. 
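The two equations above translate almost directly into code. The sketch below is a hedged PyTorch rendering of the masked cross-entropy objective and the classifier-free-guidance logit combination; the tensor shapes and the toy usage are assumptions for illustration (the 265-token sequence and 1024-code codebook sizes are those reported in the experiments), not the authors' implementation.

import torch
import torch.nn.functional as F

def masked_token_loss(logits, labels, mask, label_smoothing=0.1):
    """Cross entropy only over masked positions, cf. Loss = CE(prediction[mask], label[mask]).

    logits: (B, L, V) scores over the SpecVQGAN codebook (V codes);
    labels: (B, L) ground-truth token indices; mask: (B, L) bool, True = masked.
    """
    return F.cross_entropy(logits[mask], labels[mask],
                           label_smoothing=label_smoothing)

def cfg_logits(logits_cond, logits_uncond, guidance_scale):
    """Classifier-free guidance: l_g = l_u + t * (l_c - l_u)."""
    return logits_uncond + guidance_scale * (logits_cond - logits_uncond)

# Toy usage with assumed shapes: batch 2, 265 tokens, 1024 codes, ~55% masked.
B, L, V = 2, 265, 1024
logits = torch.randn(B, L, V)
labels = torch.randint(0, V, (B, L))
mask = torch.rand(B, L) < 0.55
print(masked_token_loss(logits, labels, mask).item())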
Inspired by <cit.>, we introduce a linear scheduler to the guidance scale t, which linearly increases t from 0.0 to an assigned value through the synthesis iterations. This allows the result of early iterations to be more diverse (unconditional) with low guidance, but increases the influence of the conditioning for the later synthesis, and is proved beneficial to synthesis quality in Sec. <ref>. The parallel iterative synthesis of SpecMaskGIT shown in Fig. <ref> is explained as follows. 1. Estimating. The Transformer estimates the probability of being the correct code at each masked position for all discrete codes in the SpecVQGAN codebook. 2. Unmasking. Given the probabilities over the codebook for each masked position, a code is retrieved based on the categorical sampling to unmask that position. This step is different from the deterministic unmasking in audio MAE. 3. Scheduling. Although SpecMaskGIT can unmask all positions at once, the quality of the synthesized audio is low. To iteratively refine the synthesis, we need to re-mask the result to a masking ratio that is lower than the current iteration. We follow the common practice in <cit.> to use a cosine scheduler to decide the masking ratio in each iteration. The cosine scheduler re-masks a larger portion of the synthesized audio for the early iterations, which is intuitive as the quality in earlier iterations is lower. 4. Top-k sampling. Given the masking ratio for the next iteration, we get to know k tokens are going to be re-masked. The log-likelihood of unmasked tokens is used to decide the k worst tokens. Since it is observed that a deterministic top-k retrieval leads to the synthesis of monotonic images in <cit.>, we followed <cit.> to add a Gumbel noise to log-likelihood, making the top-k sampling stochastic: confidence = log(p) + t_gumbel· n_gumbel, where p is the probabilities of all unmasked tokens calculated from the CFG logits in Eq. <ref>, n_gumbel is Gumbel noise, and t_gumbel is the temperature multiplied to Gumbel noise. Following <cit.>, we linearly anneal the t_gumbel by a coefficient defined as iter/num_iter, where “iter” means the index of the current iteration, “num_iter” stands for the number of all scheduled iterations. 5. Repeating. Repeat the above operations until the cosine scheduler reduces the masking ratio to 0. For TTA, the SpecMaskGIT starts the above iterative procedure from a fully masked sequence as in Fig. <ref>. Meanwhile, the iterative algorithm is also valid when the masking ratio of input sequence is lower than 100%, which automatically enables zero-shot inpainting in both time and frequency domain as is shown in Fig. <ref>. It is worth noticing that since VampNet <cit.> and MAGNeT <cit.> employ a wave-domain tokenizer, explicit frequency inpainting (BWE) is difficult. § EXPERIMENTS We pretrained the SpecVQGAN <cit.> and two vocoders (HiFiGAN <cit.> & BigVSAN <cit.>) on AudioSet (AS) unbalanced and balanced subset <cit.> for 1.5M steps. The AS we collected contains around 1.8 million 10-second audio segments of diverse sound sources and recording environments. AS has been widely used in general audio representation learning <cit.>. We followed the “VGGSound” configuration in the original SpecVQGAN repository <cit.> without using LPAPS loss as suggested in the repository. Our SpecVQGAN has around 75M parameters, and a codebook of 1024 codes, each of which is represented by a 256-dim embedding. As mentioned in Sec. 
<ref>, the standard Mel-spectrogram transform from vocoders <cit.> is utilized, which transforms a 10-second audio clip at sampling rate 22.05kHz into 848 frames with 80 Mel bins. The Mel-spectrogram is further tokenized into 265 tokens. SpecMaskGIT employs the ViT implementation widely used in previous audio masked Transformers <cit.>. To be consistent with the image MaskGIT <cit.>, 24 Transformer blocks are used, in which the attention dimension is 768 with 8 heads and the feedforward dimension is 3072, resulting in around 170M parameters. We trained SpecMaskGIT on AS for 500k steps with a batch size of 112. When training the model on AudioCaps (AC) <cit.>, we train for 250k steps with a batch size of 48, as AC only contains 50k 10-second audio clips. To stably train SpecMaskGIT, we follow the common practice in <cit.> to employ a linear warmup and then a cosine annealing of the learning rate (LR). We warmup 16k steps for AS and 5k steps for AC. The base LR is set to 1e-3, and the LR equates to the division of base LR by batch size <cit.>. The iterative synthesis algorithm is based on the open-source implementation of <cit.>. To evaluate the text-to-audio synthesis quality of SpecMaskGIT, we benchmark on the AudioCaps (AC) test set with the text prompts released by <cit.> for fair comparison. To investigate the flexibility of SpecMaskGIT in downstream tasks, we use the SpecMaskGIT trained on AS for 500k steps in the following tasks: Zero-shot time inpainting. we manually mask out the 25th to 35th Mel-spec frames (around 1.9s) of AC test set, and employ the SpecMaskGIT to inpaint the lost regions in a zero-shot manner, i.e., inpainting without any task-specific finetuning. Zero-shot audio bandwidth extension: The top 16 Mel-spec bins (i.e., components beyond 4.3kHz) of AC test set are masked, which creates a 2.5× BWE task. For all tasks above, we use the toolbox in <cit.> to compute FAD (<cit.>) scores as the metric, since FAD has been widely used to evaluate TTA <cit.>, time inpainting <cit.> and BWE <cit.> tasks. To investigate the representation learning potential of SpecMaskGIT, we further linear probe the model for the music tagging task in MagnaTagATune (MTAT) dataset <cit.> with ROC-AUC and mAP as metrics <cit.>. MTAT presents a multi-label task for genre, instrument and mood, thus has been widely used to evaluate music tagging models <cit.>. We use a single linear layer with batch normalization and 0.1 dropout as the probe. § RESULTS §.§ Text-to-audio Synthesis We report the FAD scores of SpecMaskGIT in Tab. <ref> together with other discrete models. Our model is first trained on AS for 500k steps and then finetuned on AC train set for 250k steps. The CFG scale is set to 3.0 empirically. SpecMaskGIT outperforms Diffsound (VQ-Diffusion), MAGNeT-small (similar to SpecMaskGIT but in latent wave domain), as well as AudioGen-base (auto-regressive) in terms of FAD with one order-of-magnitude fewer iterations. The FAD score is achieved without training with any audio-text pairs, which proved the effectiveness of such self-supervised training for discrete models. We also found the proposed conditional mask explained in Sec. <ref> improves the FAD score without additional parameter or computation. Both the CFG and linear scheduler of CFG scale contributed to the FAD. Given the small number of iterations and small model size, SpecMaskGIT can synthesize realistic 10-second audio clips in real-time with only 4 cores of a Xeon CPU as is shown in Fig. 
<ref>, or even 30× faster than real-time on one RTX-A6000 GPU. The efficiency and effectiveness of SpecMaskGIT make the model attractive to interactive applications and low-resource environments. When compared to state-of-the-art (SOTA) continuous diffusion models in Tab. <ref>, SpecMaskGIT could not achieve a comparable FAD score, but we emphasize that the proposed method offers decent performance with high efficiency, i.e., smaller model size and fewer iterations, which can be clearly seen in Fig. <ref>. Overall, continuous methods are more advantageous in FAD than discrete methods. We leave the further improvement of discrete generative model as future work. Ablation study: Gumbel noise and number of iterations in SpecMaskGIT. We use HiFiGAN in all ablation studies. As mentioned in Sec. <ref>, the Gumbel noise is essential to the top-k sampling during the iterative synthesis. Fig. <ref> shows that a temperature of 1.5 is the optimal. SpecMaskGIT achieves decent performance (FAD = 3.4) with only 8 iterations, and reaches its best (FAD = 2.8) with 16 iterations. More iterations do not improve the performance, which is consistent with the image MaskGIT <cit.>. Ablation study: Audio reconstruction quality. We evaluate the reconstruction FAD (rFAD) scores of two vocoders and the SpecVQGAN in Tab. <ref> with previous methods reported in <cit.>. Even with a similar architecture, rFAD of DiffSound and SpecMaskGIT can vary a lot due to different Mel computation and vocoder. Our pipeline achieves SOTA level rFAD scores for Mel-spectrogram methods while maintaining the highest compression rate or the lowest latent rate, which helped SpecMaskGIT to outperform methods such as Diffsound and Make-an-audio by a large margin yet with higher efficiency. We further analyze the rFAD of vocoders by inputting ground truth Mel to them, and found a significant performance gap between HiFiGAN and BigVSAN, which is not observed when vocoders are combined with SpecVQGAN. This indicates that SpecVQGAN has been the bottleneck in reconstruction quality and asks for future improvements. Ablation study: Bias in AudioCaps benchmark. The dataset gap between AC and other larger & more diverse datasets is investigated. It is observed in <cit.> that finetuning (FT) a TTA model on AC improves the TTA performance in terms of FAD, though the model is pretrained on a larger dataset. We reproduced this phenomenon with SpecMaskGIT as shown in Tab. <ref>. We also observed that training on the small-scale AC alone brought better FAD score than the model trained with larger datasets in Tab. <ref>, which is consistent with <cit.>. We hypothesize that there is a data distribution gap between AC and other datasets, such that when a model fully fits other datasets, the distribution of its synthesis deviates from AC, resulting in worse FAD. Therefore, we continued to train SpecMaskGIT on AS until 800k steps, and depict the “FAD vs. training step” curves on both the valid and test set of AC to verify our hypothesis. It is clear in Fig. <ref> that SpecMaskGIT learns to synthesize audio in the early stage and keeps improving the FAD on AC. As the training goes on, SpecMaskGIT just fits toward AS, which worsens the FAD in AC. 
Inspired by audio classification tasks in which early stop is applied to prevent the model from overfitting to the train set distribution, we proposed to apply early stop to the SpecMaskGIT model trained solely on AS, and report the competitive FAD score with other methods that are without AC finetuning of AC-alone training in Tab. <ref>. We believe that a more comprehensive and less biased benchmark will contribute to the future advances of TTA research. §.§ Downstream Inpainting, BWE and Tagging Tasks Results of the time inpainting and audio BWE tasks are shown in Tab. <ref>. We utilized the pipeline in Fig. <ref> unconditionally, with Gumbel temperature 1.5 and 16 iterations. SpecMaskGIT significantly improved the input signals in terms of FAD, validating its zero-shot ability in such tasks. BWE performance can be further improved by applying the low-frequency replacement (LFR) technique <cit.>. Unlike prior arts that finetune MAE-like architectures for BWE <cit.>, SpecMaskGIT achieves it by zero-shot. In Tab. <ref>, the potential of SpecMaskGIT in representation learning is confirmed by the music tagging performance on MTAT dataset. As a TTA model, SpecMaskGIT outperforms classification-specialized models such as CLMR, MusiCNN, MULE, and MERT (the MAE-like model in wave domain). SpecMaskGIT got an ROC-AUC comparable to Jukebox which contains 5B parameters. We hypothesize the tagging capability comes from the masked spectrogram modeling and ViT implementation similar to audio MAE, as explained in Sec. <ref>. We leave the in-depth investigation of SpecMaskGIT in downstream tasks as future work. § CONCLUSION Generative models that iteratively synthesize audio clips sparked great success to text-to-audio synthesis (TTA). However, due to hundreds of iterations required in the inference phase and large amount of model parameters, high-quality TTA systems remain inefficient. To address the challenges, we propose SpecMaskGIT, a light-weighted, efficient yet effective TTA model based on the masked generative modeling of spectrograms. SpecMaskGIT synthesizes realistic audio clips by less than 16 iterations, an order-of-magnitude less than previous iterative TTA methods. It also outperforms larger discrete models in the TTA benchmark, while being real-time with 4 CPU cores or even 30× faster with a GPU. Compared to similar methods, SpecMaskGIT is more flexible in downstream tasks such as zero-shot bandwidth extension. Moreover, we interprete SpecMaskGIT as a generative extension to audio MAE and shed light on its audio representation learning potential. We hope our work inspires the exploration of masked audio modeling toward further diverse scenarios.
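For readers who want to connect the synthesis section above to code, below is a rough, self-contained sketch of the parallel iterative decoding it describes: cosine masking schedule, categorical unmasking, Gumbel-noised top-k re-masking, and a linearly scheduled classifier-free-guidance scale. The model interface, the reserved mask-token id, and all tensor shapes are assumptions; the annealing coefficient iter/num_iter follows the paper's wording; this is not the authors' implementation.

import math
import torch

@torch.no_grad()
def iterative_decode(model, clap_emb, seq_len=265, codebook_size=1024,
                     num_iter=16, guidance_scale=3.0, t_gumbel=1.5, device="cpu"):
    """model(tokens, cond) is assumed to return logits of shape (1, seq_len, V);
    cond=None is assumed to select the unconditional (learned mask) branch."""
    mask_id = codebook_size                     # assumed extra id for the mask token
    tokens = torch.full((1, seq_len), mask_id, dtype=torch.long, device=device)
    is_masked = torch.ones(1, seq_len, dtype=torch.bool, device=device)

    for it in range(num_iter):
        # Linear CFG schedule: guidance grows from 0 to guidance_scale.
        t = guidance_scale * it / max(num_iter - 1, 1)
        logits_c = model(tokens, clap_emb)
        logits_u = model(tokens, None)
        logits = logits_u + t * (logits_c - logits_u)

        # 1-2. Estimate probabilities and unmask every masked position
        #      by categorical sampling over the codebook.
        probs = logits.softmax(dim=-1)
        sampled = torch.distributions.Categorical(probs=probs).sample()
        tokens = torch.where(is_masked, sampled, tokens)

        # 3. Cosine schedule: how many positions stay masked for the next pass.
        n_keep_masked = int(seq_len * math.cos(0.5 * math.pi * (it + 1) / num_iter))
        if n_keep_masked <= 0:
            break

        # 4. Stochastic top-k: confidence = log p + annealed Gumbel noise.
        conf = torch.log(probs.gather(-1, sampled.unsqueeze(-1)).squeeze(-1))
        gumbel = -torch.log(-torch.log(torch.rand_like(conf)))
        conf = conf + t_gumbel * (it / num_iter) * gumbel
        conf = conf.masked_fill(~is_masked, float("inf"))   # never re-mask fixed tokens

        # 5. Re-mask the n_keep_masked least-confident positions and repeat.
        remask = torch.topk(conf, n_keep_masked, largest=False).indices
        is_masked = torch.zeros_like(is_masked)
        is_masked[0, remask[0]] = True
        tokens[is_masked] = mask_id

    return tokens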
http://arxiv.org/abs/2406.18989v1
20240627082821
The generic anisotropy of strongly edge decomposable spheres
[ "Feifei Fan" ]
math.CO
[ "math.CO", "math.AC" ]
The generic anisotropy of s.e.d. spheres]The generic anisotropy of strongly edge decomposable spheres F. Fan]Feifei Fan The author is supported by the National Natural Science Foundation of China (Grant no. 12271183) and by the GuangDong Basic and Applied Basic Research Foundation (Grant no. 2023A1515012217). Feifei Fan, School of Mathematical Sciences, South China Normal University, Guangzhou, 510631, China. fanfeifei@mail.nankai.edu.cn [2020]Primary 13F55; Secondary 05E40, 05E45. [ [ July 1, 2024 ================ § ABSTRACT The generic anisotropy is an important property in the study of Stanley-Reisner rings of homology spheres, which was introduced by Papadakis and Petrotou. This property can be used to prove the strong Lefschetz property as well as McMullen's g-conjecture for homology spheres. It is conjectured that for an arbitrary field 𝔽, any 𝔽-homology sphere is generically anisotropic over 𝔽. In this paper, we prove this conjecture for all strongly edge decomposable spheres. § INTRODUCTION The conception of generic anisotropy was introduced by Papadakis and Petrotou <cit.>. As shown in <cit.>, the generic anisotropy implies the strong Lefschetz property (see Subsection <ref>), as well as the Hall-Laman relations defined by Adiprasito <cit.>, of the general Artinian reductions of the Stanley–Reisner rings of simplicial homology spheres in the sense of <cit.>. Assume 𝔽 is a field, and Δ is a 𝔽-homology sphere of dimension d-1 with vertex set {1,…,m}. Denote by =𝔽(a_i,j) the field of rational functions in the variables a_i,j, 1≤ i≤ d, 1≤ j≤ m. Let [Δ] be the Stanley–Reisner ring of Δ over , a quotient ring of the polynomial ring [x_1,…,x_m] (see subsection <ref>), and let (Δ)=[Δ]/(θ_1,…,θ_d), where θ_i is the linear form ∑_j=1^m a_i,jx_j. Then Δ is said to be generically anisotropic over 𝔽, if for all integers j with 1≤ 2j≤ d and all nonzero elements u∈(Δ)_j we have u^2≠ 0. It is known that strong Lefschetz property implies the celebrated g-conjecture characterizing the face numbers of homology spheres (see <cit.>), hence the following theorem of Papadakis and Petrotou provides a second proof of g-conjecture for 𝔽-homology spheres, 𝔽 a field of characteristic 2. An earlier proof of the g-conjecture for more general rational homology spheres was given by Adiprasito <cit.>. Let 𝔽 be a field of characteristic 2. Then all 𝔽-homology spheres are generically anisotropic over 𝔽. To extend this theorem to arbitrary characteristics, there is the following natural conjecture suggested by Adiprasito, Papadakis and Petrotou <cit.>. For an arbitrary field 𝔽, any 𝔽-homology sphere is generically anisotropic over 𝔽. So far, this conjecture is widely open and only the special case of simplicial spheres of dimension 1 was proved by Papadakis and Petrotou <cit.>. In the present paper, we will show that this conjecture is true for strongly edge decomposable spheres, which are a class of PL-spheres introduced by Nevo <cit.>. Let Δ be a simplicial complex on [m]={1,…,m}, a collection of subsets of [m] (including ∅) that is closed under inclusion. The elements σ∈Δ are called faces and the maximal faces of Δ under inclusion are called facets. The dimension of a face σ∈Δ is σ=|σ|-1 (∅=-1) and the dimension of Δ is Δ= max{σ: σ∈Δ}. The link and star of a face σ∈Δ are respectively the subcomplexes lk_Δσ= {τ∈Δ:τ∪σ∈Δ,τ∩σ=∅}; st_Δσ= {τ∈Δ:τ∪σ∈Δ}. 
If σ={i,j} is an edge (1-dimensional face) of Δ, the contraction of Δ with respect to σ is the simplicial complex (σ,Δ) on [m]∖{i} which is obtained from Δ by identifying the vertices i and j. More precisely, (σ,Δ)={τ∈Δ :i∉τ}∪{(τ∖{i})∪{j}:i∈τ∈Δ}. We say that Δ satisfies the link condition with respect to σ if lk_Δ{i}∩lk_Δ{j}=lk_Δσ. A simplcial sphere Δ is said to be strongly edge decomposable (s.e.d. for short) if either Δ is the boundary of a simplex or else there exists an edge σ∈Δ such that Δ satisfies the link condition with respect to σ and both lk_Δσ and (σ,Δ) are s.e.d. Murai <cit.> proved that s.e.d. spheres have the strong Lefschetz property over any infinite field. (The strong Lefschetz property of s.e.d. spheres in characteristic zero was also proved by Babson and Nevo <cit.>.) The following theorem, the main result of this paper, shows that a stronger property holds for s.e.d. spheres. All s.e.d. spheres are generically anisotropic over any field 𝔽. This paper is organized as follows. In Section <ref>, we introduce the basic notions and collect some known results about Stanley–Reisner rings. In Section <ref>, we establish an equivalent but simpler condition for a homology sphere to be generically anisotropic (Theorem <ref>) and provide an application (Proposition <ref>), which are key ingredients in the proof of Theorem <ref>. The main idea in the proof of Theorem <ref> is to show that the generic anisotropy of an odd dimensional homology sphere satisfying the link condition is preserved by the contraction if its contraction and its link satisfy some nice algebraic properties (Proposition <ref>). The even dimensional case can be reduced to the odd dimensional case by a combinatorial result (Proposition <ref>). These results and the proof of Theorem <ref> are given in Section <ref>. § PRELIMINARIES §.§ The Stanley-Reisner ring Let Δ be a simplicial complex on [m]. For a field , let S=[x_1,…,x_m] be the polynomial algebra with one generator for each vertex in Δ. It is a graded algebra by setting x_i=1. The Stanley-Reisner ideal of Δ is I_Δ:=(x_i_1x_i_2⋯ x_i_k:{i_1,i_2,…,i_k}∉Δ) The Stanley-Reisner ring (or face ring) of Δ is the quotient [Δ]:=S/I_Δ. Since I_Δ is a monomial ideal, the quotient ring [Δ] is graded by degree. For a face σ={i_1,…,i_k}∈Δ, denote by _σ=x_i_1⋯ x_i_k∈[Δ] the face monomial corresponding to σ. Sometimes we also use σ to mean _σ for simplicity. A sequence Θ=(θ_1,…,θ_d) of d=Δ+1 linear forms in S is called a linear system of parameters (or l.s.o.p. for short) if (Δ;Θ):=[Δ]/(Θ) has Krull dimension zero, i.e., it is a finite-dimensional -space. The quotient ring (Δ;Θ) is an Artinian reduction of [Δ]. We will use the simplified notation (Δ) for (Δ;Θ) whenever it creates no confusion, and write the component of degree i of (Δ) as (Δ)_i. For a subcomplex Δ'⊂Δ, let I be the ideal of [Δ] generated by faces in Δ∖Δ', and denote I/IΘ by (Δ,Δ';Θ) or simply (Δ,Δ'). A linear sequence Θ=(θ_1,…,θ_d) is an l.s.o.p if and only if the restriction Θ_σ=r_σ(Θ) to each face σ∈Δ generates the polynomial algebra [x_i:i∈σ]; here r_σ:[Δ]→[x_i:i∈σ] is the projection homomorphism. If Θ is an l.s.o.p. for [Δ], then (Δ) is spanned by the face monomials (see <cit.>). If is an infinite field, then [Δ] admits an l.s.o.p. (Noether normalization lemma). A simplicial complex Δ is called Cohen-Macaulay over if for any l.s.o.p Θ=(θ_1,…,θ_d), [Δ] is a free [θ_1,⋯,θ_d] module. 
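Readers who prefer to experiment with the combinatorial operations defined above (link, edge contraction, and the link condition) may find a small computational sketch helpful. The Python fragment below is only an illustration of the definitions, with the boundary of the octahedron (a simplicial 2-sphere) as a toy example; it is not tied to any software used in this paper.

from itertools import combinations

def closure(facets):
    """All faces (as frozensets, including the empty face) spanned by the given facets."""
    faces = {frozenset()}
    for f in facets:
        for k in range(1, len(f) + 1):
            faces.update(frozenset(c) for c in combinations(f, k))
    return faces

def link(faces, sigma):
    sigma = frozenset(sigma)
    return {tau for tau in faces if not (tau & sigma) and (tau | sigma) in faces}

def contract(faces, i, j):
    """C(sigma, Delta) for sigma = {i, j}: identify vertex i with vertex j."""
    return {tau if i not in tau else (tau - {i}) | frozenset({j}) for tau in faces}

def satisfies_link_condition(faces, i, j):
    return link(faces, {i}) & link(faces, {j}) == link(faces, {i, j})

# Toy example: the boundary of the octahedron on vertices 1..6,
# where 1-2, 3-4, 5-6 are the three pairs of antipodal vertices.
octahedron = closure([{1,3,5}, {1,3,6}, {1,4,5}, {1,4,6},
                      {2,3,5}, {2,3,6}, {2,4,5}, {2,4,6}])
print(satisfies_link_condition(octahedron, 1, 3))        # True
print(sorted(map(sorted, contract(octahedron, 1, 3))))   # faces of C({1,3}, Delta)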
By a result of Reisner <cit.>, Δ is Cohen-Macaulay over if and only if for all faces σ∈Δ (including σ=∅) and i<lk_Δσ, we have H_i(lk_Δσ;)=0. It follows from <cit.> that if Δ is a Cohen-Macaulay (over ) complex of dimenison d-1, then for any Artinian reduction (Δ), _(Δ)_i=h_i(Δ), 0≤ i≤ d, where h_i(Δ) is a combinatorial invariant of Δ defined by h_i(Δ)=∑_j=0^i(-1)^i-jd-jd-if_j-1, and f_i is the number of i-dimensional faces of Δ. Suppose that Θ=(θ_i=∑_j=1^m a_i,jx_j)_i=1^d is an l.s.o.p. for [Δ]. Then there is an associated d× m matrix M_Θ=(a_i,j). Let λ_i=(a_1,i,a_2,i,…,a_d,i)^T denote the column vector corresponding to the vertex i∈[m]. For any ordered subset I=(i_1,…,i_k)⊂ [m], the submatrix M_Θ(I) of M_Θ is defined to be M_Θ(I)=(λ_i_1,…,λ_i_k). §.§ Strong Lefschetz and Anisotropy The Lefschetz property of face rings is strongly connected to many topics in algebraic geometry, commutative algebra and combinatorics. For instance, the strong Lefschetz property for homology spheres is the algebraic version of the g-conjecture in the most strong sence, a generalization of the Hard Lefschetz Theorem in algebraic geometry for projective toric variaties. A (d-1)-dimensional simplicial complex Δ is a -homology sphere, a field, if for every face σ∈Δ (including σ=∅) the link of σ has the same reduced homology as a sphere of dimension d-2-σ: H_i(lk_Δσ;)= if i=d-2-σ, 0 otherwise.. It is known that if Δ is a -homology (d-1)-sphere, then the face ring [Δ] is a Gorenstein ring (see <cit.>), i.e., (Δ;Θ) is a Poincaré duality -algebra for any l.s.o.p Θ in the sence that the multiplication map (Δ;Θ)_i×(Δ;Θ)_d-i→(Δ;Θ)_d≅ is a perfect pairing for 0≤ i≤ d. (See <cit.> for other equivalent definitions.) Let Δ be a -homology (d-1)-sphere. We say that [Δ] has the strong Lefschetz property (or Δ is strong Lefschetz over ) if there is an Artinian reduction (Δ;Θ) of [Δ] and a linear form ω∈[Δ]_1 such that the multiplication map ·ω^d-2i:(Δ;Θ)_i→(Δ;Θ)_d-i is bijective for i=0,1,…,⌊d/2⌋. The element ω is called a strong Lefschetz element of (Δ;Θ). Note that the set of (Θ,ω) in Definition <ref> is Zariski open (see e.g. <cit.>), but it may be empty. If Δ is strong Lefschetz over , then it is also strong Lefschetz over any infinite field with the same characteristic as (see e.g. <cit.>). As we mentioned in Section <ref>, in order to prove the strong Lefschetz property of homology spheres in characteristic 2, Papadakis and Petrotou <cit.> established the anisotropy of their face rings over a large field. Let Δ be a -homology (d-1)-sphere. An Artinian reduction (Δ) of [Δ] is said to be anisotropic if for every nonzero element u∈(Δ)_i with i≤ d/2, the square u^2 is also nonzero in (Δ)_2i. We call Δ anisotropic over if such an Artinian reduction exists. It turns out that anisotropy is stronger than the strong Lefschetz property in the sence that if a class of -homology spheres, which is closed under the suspension operation, is anisotropic over , then any -homology sphere in this class is strong Lefschetz (see <cit.>). Here the suspension S(Δ) of a simplicial complex Δ means the join S^0*Δ, where S^0 is the simplicial complex consisting of two disjoint vertices. Recall that the join of two simplicial complexes Δ and Δ' with disjoint sets of vertices is Δ*Δ':={σ∪τ : σ∈Δ, τ∈Δ'}. §.§ Canonical modules for homology balls In this subsection we recall some results about the canonical module (see <cit.> for the definition) of [Δ] when Δ is a -homology ball. First we recall the definition of -homology manifold. 
A d-1 dimensional simplicial complex Δ is a -homology manifold (without boundary) if for any nonempty faces σ∈Δ, lk_Δσ is a -homology sphere of dimension d-2-σ. Similarly, Δ is a -homology manifold with boundary if the link of every nonempty face σ has the homology of a (d-2-σ)-dimensional sphere or a ball (over ), and the boundary complex of Δ, i.e., ∂Δ:={σ∈Δ: H_*(lk_Δσ;)=0}∪{∅} is a (d-2)-dimensional -homology manifold without boundary. A connected -homology manifold Δ of dimension d-1 is orientable if H_d-1(Δ,∂Δ;)≅. In this case, an orientation of Δ is given by an ordering on the vertices of each facets of Δ. A -homology (d-1)-ball is a (d-1)-dimensional -homology manifold with boundary whose homology is trivial and whose boundary complex is a -homology (d-2)-sphere. Let Δ be a -homology (d-1)-ball with boundary ∂Δ. Then there is an exact sequence 0→ I→[Δ]→[∂Δ]→ 0, where I is the ideal of [Δ] generated by all faces in Δ∖∂Δ. By a theorem of Hochster (see <cit.>) I is the canonical module of [Δ]. Then from <cit.> we have the following Let Δ be a -homology (d-1)-ball. Then for any Artinian reduction, the multiplication map (Δ)_i×(Δ,∂Δ)_d-i→(Δ,∂Δ)_d≅ is a perfect pairing for 0≤ i≤ d. This result has the following useful corollary. Let K be a -homology sphere and Δ⊂ K be a -homology ball of the same dimension. Then for any Artinian reduction (K) and its restriction to Δ, there is a short exact sequence 0→(K,Δ)→(K)→(Δ)→0. Cf. the first paragraph of the proof of <cit.>. §.§ Lee's formula In this subsection, we introduce a formula due to Lee that expresses non-square-free monomials in (Δ) in terms of face monomials. First, we recall a useful result in <cit.>. Let Δ be a (d-1)-dimensional -homology sphere or ball, Θ be an l.s.o.p. for [Δ]. Suppose that σ_1 and σ_2 are two ordered facets of Δ, which have the same orientation in Δ. Then (M_Θ(σ_1))·_σ_1=(M_Θ(σ_2))·_σ_2 in (Δ)_d or in (Δ,∂Δ)_d. When Δ is a (d-1)-dimensional -homology sphere or ball, (Δ)_d or (Δ,∂Δ)_d is , which is spanned by a facet monomial. So each facet σ∈Δ defines a map Ψ_σ:(Δ)_d or (Δ,∂Δ)_d→ such that for all α in (Δ)_d or (Δ,∂Δ)_d, α=Ψ_σ(α)(M_Θ(σ))_σ. Lemma <ref> says that Ψ_σ=±Ψ_τ for any two facets σ,τ∈Δ. If we fix an orientation on Δ, this map is independent of the choice of the ordered facet whose ordering is compatible with the given orientation on Δ, defining a canonical function Ψ_Δ:(Δ)_d or (Δ,∂Δ)_d→ (see <cit.>). In particular, if σ is a facet of Δ, then Ψ_Δ(_σ)=1/(M_Θ(σ)). To state Lee's formula, we will need the following notation. Under the assumption of Lemma <ref>, let 𝐚=(a_1,…,a_d)^T∈^d be a vector such that every d× d minor of the matrix (M_Θ|𝐚) is nonsingular. For any ordered subset I⊂[m] with |I|=d, let A_I=(M_Θ(I)), and for any i∈ I, denote by A_I(i) the determinant of the matrix obtained from M_Θ(I) by replacing the column vector λ_i with 𝐚. Let Δ be a (d-1)-dimensional -homology sphere (resp. -homology ball), and fix an orientation on Δ. Then for a monomial x_i_1^r_1⋯ x_i_k^r_k∈(Δ)_d (resp. x_i_1^r_1⋯ x_i_k^r_k∈(Δ,∂Δ)_d), r_i>0, we have Ψ_Δ(x_i_1^r_1⋯ x_i_k^r_k)=∑_facets F∈st_Δσ∏_i∈σA_F(i)^r_i-1/A_F∏_i∈ F∖σA_F(i), where σ={i_1,…,i_k} and the sum is over all ordered facets of st_Δσ, whose orderings are compatible with the given orientation on Δ. § EQUIVALENT ANISOTROPY CONDITIONS As shown in Theorem <ref>, if 𝔽 is a field of characteristic 2 and Δ is a 𝔽-homology (d-1)-sphere with m vertices, then (Δ;Θ) is anisotropic for the field of rational functions :=𝔽(a_i,j:1≤ i≤ d, 1≤ j≤ m) and the l.s.o.p. 
Θ=(θ_i=∑_i=1^m a_i,jx_j)_i=1^d. In fact, under the assumption of Theorem <ref> Δ is anisotropic over some smaller field extension of 𝔽 than , as we will see below. Let '=𝔽(a_i,j:1≤ i≤ d, d+1≤ j≤ m), and denote by A the d× (m-d) matrix (a_i,j). One easily sees that there is an l.s.o.p. Θ' for '[Δ] such that M_Θ'=( I_d| A), where I_d is the d× d identity matrix. Suppose that Δ is a 𝔽-homology (char 𝔽 is arbitrary) (d-1)-sphere with m vertices. Let , Θ and ', Θ' be as above. Then (Δ;Θ) is anisotropic if and only if '(Δ;Θ') is anisotropic. “⇒". There exists a matrix N∈ GL(d,) such that N M_Θ=(I_d| B), where B=(b_i,j) (1≤ i≤ d, d+1≤ j≤ m) is a d× (m-d) matrix with entries b_i,j∈. Denote by Θ_0 the l.s.o.p. corresponding to (I_d| B). Clearly, the two ideals generated by Θ and Θ_0 are the same. Let _0=𝔽(b_i,j:1≤ i≤ d, d+1≤ j≤ m). Then _0 is a subfield of . One easily sees that b_i,j are algebraically independent elements over 𝔽, so there is an isomorphism '≅_0 given by a_i,j↦ b_i,j, and then an induced isomorphism '(Δ;Θ')≅_0(Δ;Θ_0). Since _0(Δ;Θ_0)⊂(Δ;Θ_0)=(Δ;Θ) and (Δ;Θ) is anisotropic, any nonzero element u∈_0(Δ;Θ_0)_i≅'(Δ;Θ')_i with i≤ d/2 satisfies u^2≠0. Hence '(Δ;Θ') is anisotropic. “⇐". Pick an arbitrary order on the variables a_i,j for 1≤ i≤ d, 1≤ j≤ d, and rewrite them as a_1,a_2,…,a_d^2. Let _0=', and recursively define _i=_i-1(a_i), i.e. the field of fractions of _i-1[a_i], for 1≤ i≤ d^2. Hence there is a sequence of field extension '=_0⊂_1⊂⋯⊂_d^2=. Let Θ_0=Θ' be the l.s.o.p. for _0[Δ]. For 1≤ i≤ d^2, if a_i=a_j,k, then define Θ_i to be the l.s.o.p. for _i[Δ] such that M_Θ_i is obtained from M_Θ_i-1 by replacing the (j,k)-entry by a_j,k. We will prove that _i(Δ;Θ_i) are all anisotropic for 0≤ i≤ d^2 by induction on i. The base case i=0 is just the assumption. For the induction step, set R_i=_0[a_1,…,a_i], and denote by 𝔭_i⊂ R_i the prime ideal 𝔭_i= (a_i-1) if a_i=a_k,k for some 1≤ k≤ d, (a_i) otherwise. Then there is a ring homomorphism η_i:(R_i)_𝔭_i→_i-1, where (R_i)_𝔭_i⊂_i denote the localization of R_i at 𝔭_i, given by η_i(a_j)=a_j for 1≤ j≤ i-1, and η_i(a_i)= 1 if a_i=a_k,k for some 1≤ k≤ d, 0 otherwise. Clearly, η_i induces a homomorphism (R_i)_𝔭_i(Δ;Θ_i)→_i-1(Δ;Θ_i-1), which we also denoted by η_i. Given a nonzero element α∈_i(Δ;Θ_i)_j with j≤ d/2, we claim that there exists a nonzero element t∈_i such that tα∈(R_i)_𝔭_i(Δ;Θ_i) and 0≠η_i(tα)∈_i-1(Δ;Θ_i-1). Assume this claim for the moment. Since _i-1(Δ;Θ_i-1) is anisotropic by induction, [η_i(tα)]^2≠0. It follows that t^2α^2 is not zero in (R_i)_𝔭_i(Δ;Θ_i), and then 0≠α^2∈_i(Δ;Θ_i). So _i(Δ;Θ_i) is anisotropic, and the induction step is completed. It remains to prove the claim. Recall that _i-1(Δ;Θ_i-1) is spanned by face monomials. Suppose that {_σ_1,…,_σ_s} is a basis of _i-1(Δ;Θ_i-1)_j. Then it is also a basis of _i(Δ;Θ_i)_j. To see this, let f=∑_r=1^s l_r_σ_r be a nontrivial _i-linear combination of the _σ_r. Then there is a power t of a_i (or a_i-1) such that tl_r∈ (R_i)_𝔭_i for all r and η_i(tl_q)≠ 0 for some q. Hence tf∈ (R_i)_𝔭_i(Δ;Θ_i) and η_i(tf)≠ 0 in _i-1(Δ;Θ_i-1). This implies that f≠ 0 in _i(Δ;Θ_i). Since Δ is a Cohen-Macaulay complex, __i_i(Δ;Θ_i)_j=__i-1_i-1(Δ;Θ_i-1)_j=h_j(Δ). Therefore _σ_1,…,_σ_s forms a basis of _i(Δ;Θ_i)_j. Hence, for the element α in the previous paragraph, we may assume α=f, and take t to be as above, then the claim is verified. Here is an application of Theorem <ref>. Let Δ be a 𝔽-homology (2n-2)-sphere on [m]. 
If the suspension S(Δ) of Δ is generically anisotropic over 𝔽, then Δ is also generically anisotropic over 𝔽. Write S(Δ)=K∪ K', where K={m+1}*Δ and K'={m+2}*Δ. Let be the rational function field 𝔽(a_i,j:1≤ i≤ 2n, 1≤ j≤ m), and let _0⊂ be the subfield 𝔽(a_i,j:2≤ i≤ 2n, 1≤ j≤ m). Choose an l.s.o.p. Θ=(θ_1,θ_2,…,θ_2n) for [S(Δ)] such that M_Θ=(A|λ_m+1,λ_m+2), where A=(a_i,j), and (λ_m+1,λ_m+2)=(I_2| 0)^T. Let Θ_0=(θ_2,θ_3,…,θ_2n). Clearly, Θ_0 restricted to Δ is an l.s.o.p. for [Δ]. It is known that there are two isomorphisms: (Δ;Θ_0)≅(K;Θ), x_i↦ x_i for 1≤ i≤ m, (Δ;Θ_0)_*≅(K,Δ;Θ)_*+1, α↦ x_m+1α (see e.g. <cit.>). Hence for a nonzero element α∈(Δ;Θ_0), we have 0≠ x_m+1α∈(K,Δ;Θ). Assume S(Δ) is generically anisotropic over 𝔽. Then (S(Δ);Θ) is anisotropic by the proof of Theorem <ref>. For any nonzero element α∈_0(Δ;Θ_0)_i⊂(Δ;Θ_0)_i, i≤ n-1, the second isomorphism above shows that 0≠ x_m+1α∈(K,Δ;Θ)_i+1. Hence we have 0≠(x_m+1α)^2∈(K,Δ;Θ), since (K,Δ;Θ)=(S(Δ),K';Θ)⊂(S(Δ);Θ) by Corollary <ref> and (S(Δ);Θ) is anisotropic. This means that x_m+1α^2 is not zero in (K;Θ) since by Proposition <ref> there exists β∈(K;Θ) such that (x_m+1α)^2β=x_m+1α^2· x_m+1β≠ 0 in (K,Δ;Θ)_2n, and we can think of x_m+1α^2∈(K;Θ) and x_m+1β∈(K,Δ;Θ) in the pairing in Proposition <ref>. Then 0≠α^2∈_0(Δ;Θ_0)⊂(Δ;Θ_0) because of the first isomorphism above (K;Θ)≅(Δ;Θ_0). So _0(Δ;Θ_0) is anisotropic. This is equivalent to saying that Δ is generically anisotropic over 𝔽 by definition. § PROOF OF THEOREM <REF> In this section 𝔽 denotes a field of arbitrary characteristic. Theorem <ref> is a consequence of the following two propositions. If Δ is an s.e.d. (d-1)-sphere, then the suspension S(Δ) is an s.e.d. d-sphere. Let Δ be a 𝔽-homology (2n-1)-sphere on [m] satisfying the link condition with respect to an edge σ∈Δ. If lk_Δσ and (σ,Δ) are generically anisotropic over 𝔽 and lk_Δσ is strong Lefschetz over an infinite field (char =char 𝔽), then Δ is generically anisotropic over 𝔽. Assuming Propositions <ref> and <ref>, we prove Theorem <ref> as follows. First we consider the odd dimensional case. Let Δ be an s.e.d. (2n-1)-sphere. The generic anisotropy of Δ follows from Proposition <ref> by induction on both the dimension and the vertex number of Δ. Note that if Δ is the boundary of a simplex, then Δ is generically anisotropic since (Δ)=[x]/(x^2n+1) for any field , and that lk_Δσ is an s.e.d. (2n-3)-sphere. Also, recall that simplicial 1-spheres are generically anisotropic by <cit.>, and that s.e.d. spheres are strong Lefschetz over any infinite field by <cit.>. The even dimensional case can be deduced from Propositions <ref> and <ref>. We use induction on d. If d=1, the statement clearly holds. For the induction step, note that there are the following two easily verified facts: * (σ,S(Δ))=S[(σ,Δ)]. * If Δ satisfies the link condition with respect to an edge σ∈Δ, then S(Δ) also satisfies the link condition with respect to σ. So if there is a sequence of simplicial (d-1)-spheres: Δ=Δ_0, Δ_1,…,Δ_s=∂Γ, Γ a d-simplex, where Δ_i+1=(σ_i,Δ_i) for some edge σ_i∈Δ_i such that Δ_i satisfies the link condition with respect to σ_i and lk_Δ_iσ_i is an s.e.d. sphere, then S(Δ_0),…,S(Δ_s) is a sequence of simplicial d-spheres satisfying the same conditions by the above facts (<ref>), (<ref>) and the induction hypothesis for lk_S(Δ_i)σ_i=S(lk_Δ_iσ_i), the suspension of an s.e.d. (d-3)-sphere. Then the induction step is completed by using the fact that (σ,S(∂Γ)) is the boundary complex of a (d+1)-simplex for any edge σ∈ S(∂Γ)∖∂Γ. 
The proof of Proposition <ref> needs the following Let Δ be a -homology (2n-1)-sphere, and suppose that σ={u,v} is an edge of Δ such that lk_Δσ is strong Lefschetz over . Let Θ={θ_1,…,θ_2n} be a generic l.s.o.p. for [Δ] such that the submatrix (λ_u,λ_v) of M_Θ has the form [ T; 0 ], where T∈ GL(,2) is upper triangular, and let Θ_0={θ_3,…,θ_2n}. Assume that {ρ_1,…,ρ_r}, ρ_i∈lk_Δσ, is a basis for (lk_Δσ;Θ_0)_n-1. Then (Δ;Θ)_n has a basis of the form: ∪, ={σ_1,…,σ_r}, ={τ_1,…,τ_s}, where σ_i={u}∪ρ_i for 1≤ i≤ r and τ_i∈Δ∖st_Δσ, v∉τ_i for 1≤ i≤ s. By Corollary <ref> there is a short exact sequence 0→(Δ,st_Δ{v})→(Δ)→(st_Δ{v})→ 0, which shows that (Δ)_n has a basis of the form: '∪', where '⊂st_Δ{v} and '⊂Δ∖st_Δ{v}. We will show that ' can be chosen such that '=∪”, where elements in ” satisfy the same condition for the ones in . The Lemma will follow by taking ='∪”. Assume that T=[ a b; 0 c ]. Let ω=θ_1-b/c·θ_2, and let Θ'={ω,θ_3,…,θ_2n}. Then Θ' is an l.s.o.p. for [lk_Δ{v}] since Θ is generic. As we have seen in the proof of Propositon <ref>, the natural inclusion (lk_Δ{v};Θ')→(st_Δ{v};Θ) is an isomorphism. So a basis for (lk_Δ{v};Θ')_n is also a basis for (st_Δ{v};Θ)_n. Let D be the closure of lk_Δ{v}∖st_Δσ. Then D is a -homology (2n-2)-ball with ∂ D=lk_Δσ. Applying Corollary <ref> again we get a short exact sequence: 0→(lk_Δ{v},D)→(lk_Δ{v})→(D)→ 0. Note that (lk_Δ{v},D)≅({u}*lk_Δσ,lk_Δσ), and recall that there is an isomorphisms: · x_u:(lk_Δσ;Θ_0)→({u}*lk_Δσ,lk_Δσ;Θ'). Hence is a basis for (lk_Δ{v},D;Θ')_n. To get the desired basis ' for (lk_Δ{v};Θ')_n, we will give a basis ”⊂ D∖∂ D for (D;Θ')_n. Consider another exact sequence: (D,∂ D;Θ')→(D;Θ')→(∂ D;Θ_0)/(ω)→ 0. Since ∂ D=lk_Δσ is strong Lefschetz over and Θ is generic, we may assume that ω is a strong Lefschetz element for (∂ D;Θ_0). It follows that ((∂ D;Θ_0)/(ω))_n=0, and so (D,∂ D;Θ')_n→(D;Θ')_n is a surjection. Thus a basis ” of (D;Θ')_n can be taken from D∖∂ D. We conclude that '=∪” is the desired basis for (st_Δ{v};Θ)_n, and the proof is finished. Before going into the proof of Proposition <ref>, we state an easy result about rational functions without proof. Let be a field, and let (t) be the field of rational functions over with one variable t. For a nonzero element ϕ=f/g∈(t) with f,g∈[t], define the degree of ϕ by (ϕ)=(f)-(g), and define the leading coefficient of ϕ as L(ϕ)=L(f)/L(g), where (h) and L(h) are the degree and leading coefficient of the polynomial h∈[t] respectively. Moreover, we assume (0)=-∞ and L(0)=0 in (t). Let (t) be as above. Then for a nonzero element α=∑_i∈ Iϕ_i with ϕ_i∈(x), we have (α)≤ M:=max{(ϕ_i):i∈ I}, where equality holds if and only if ∑_(ϕ_i)=ML(ϕ_i)≠0. Assume σ={1,2}. Let _0 be the field of rational functions 𝔽(a_i,j:1≤ i≤ 2n, 3≤ j≤ m), and let =_0(t) be the field of rational funcitons over _0 with one variable t. For an element f∈, we use (f) and L(f) to denote the degree and the leading coefficient of f with respect to t respectively (see the definitions before Lemma <ref>). Let Θ be the l.s.o.p. for [Δ] such that in the matrix M_Θ={λ_1,…,λ_m}, λ_1=(1,0,…,0)^T, λ_2=(t,1,0,… 0)^T and λ_j=(a_1,j,a_2,j,…,a_2n,j)^T for 3≤ j≤ m. By the proof of Theorem <ref>, we only need to show that (Δ;Θ) is anisotropy. Acturally, it suffices to prove that the quadratic form (Δ;Θ)_n×(Δ;Θ)_n→(Δ;Θ)_2n≅ is anisotropic. To see this, note that if 0≠α∈(Δ)_i for i<n, then there exits α'∈(Δ)_n-i such that 0≠αα'∈(Δ)_n sicne (Δ) is a Poincaré duality -algebra generated by degree one elements. 
Let Δ'=(σ,Δ) with vertex set [m]∖{2}, and let M' be the matrix obtained from M_Θ by deleting the column λ_2. Then the set Θ' of one forms associated to the row vectors of M' is an l.s.o.p. for _0[Δ']. Let Γ=lk_Δσ, and let Θ_0 be the l.s.o.p. for _0[Γ] such that the matrix M_Θ_0 is obtained from M_Θ by deleting the first two rows and restricting to the vertices of Γ. Assume Δ' and lk_Δσ are generically anisotropic over 𝔽. Then _0(Δ';Θ') and _0(Γ;Θ_0) are anisotropic by the proof of Theorem <ref>. Assume further that Γ is strong Lefschetz over . Then by Lemma <ref>, (Δ;Θ)_n has a basis of the form: ∪, ={σ_1,…,σ_r}, ={τ_1,…,τ_s}, where σ_i={1}∪ρ_i, ρ_i∈lk_Δσ for 1≤ i≤ r and τ_j∈Δ∖st_Δσ, 2∉τ_j for 1≤ j≤ s. We have the following facts: * Ψ_Δ(_σ_i_σ_j)=±Ψ_Γ(_ρ_i_ρ_j)t+b, with b∈_0, for 1≤ i,j≤ r. Here Ψ_Γ is defined on _0(Γ;Θ_0)_2n-2. * Ψ_Δ(_σ_i_τ_j)=Ψ_Δ'(_σ_i_τ_j) for 1≤ i≤ r, 1≤ j≤ s. Here Ψ_Δ' is defined on _0(Δ';Θ')_2n. * (Ψ_Δ(_τ_i_τ_j))≤ 0 for 1≤ i,j≤ s, and the equality holds if and only if Ψ_Δ'(_τ_i_τ_j)≠ 0, in which case L(Ψ_Δ(_τ_i_τ_j))=Ψ_Δ'(_τ_i_τ_j). Assume these facts for the moment. For any element α∈(Δ;Θ)_n, write α=α_1+α_2, where α_1=∑_i=1^rl_i_σ_i, l_i∈, and α_2=∑_j=1^sk_j_τ_j, k_j∈. Now define m_1=max{(l_i):1≤ i≤ r}, m_2=max{(k_j):1≤ j≤ s}, β_1=∑_(l_i)=m_1L(l_i)_σ_i, β_2=∑_(k_j)=m_2L(k_j)_τ_j. If m_1≥ m_2, then β_1≠0. Since _0(Γ;Θ_0) is anisotropic, we have Ψ_Δ(β_1^2)=ft+b, for some f,b∈_0 with f≠ 0, by fact (i). Thus facts (i)-(iii) together with Lemma <ref> imply that (Ψ_Δ(α^2))=(Ψ_Δ(α_1^2))=2m_1+1≠ -∞. On the other hand, if m_1<m_2, then β_2≠ 0. Hence by fact (iii) and Lemma <ref>, (Ψ_Δ(β_2^2))=0 and L(Ψ_Δ(β_2^2))=Ψ_Δ'(β_2^2)≠ 0 because of the anisotropy of _0(Δ';Θ'). Using facts (i)-(iii) and Lemma <ref> again, we get (Ψ_Δ(α^2))=(Ψ_Δ(α_2^2))=2m_2≠ -∞. In either case we have α^2≠ 0, then the proposition follows. Now we prove (<ref>)-(<ref>). For (<ref>), notice that for a facet F∈st_Δ{1}, if 2∉F then A_F(l) (l∈ F) and A_F∈_0 (we may take 𝐚={1,1,…,1}^T in the definition of A_F(l)), and if 2∈ F, then for G=F∖{1,2}∈Γ, U=σ_i∪σ_j, V=ρ_i∪ρ_j, ∏_l∈σ_i∩σ_jA_F(l)/A_F∏_l∈ F∖ UA_F(l)=±A_F(1)/A_F(2)·∏_l∈ρ_i∩ρ_jA_G(l)/A_G∏_l∈ G∖ VA_G(l), where A_G, A_G(l) are defined on M_Θ_0. A straightforward computation shows that A_F(1)/A_F(2)=-t+c for some c∈_0, and then (<ref>) follows from Theorem <ref>. (<ref>) is obvious by the construction of and . For (<ref>), let W=τ_i∪τ_j. If W∉st_Δ{2}, then Ψ_Δ(_τ_i_τ_j)=Ψ_Δ'(_τ_i_τ_j) and the statement follows. So we assume W∈st_Δ{2}. If F∈st_Δ{2} is a facet containing W, then it corresponds to a facet F'∈st_Δ'{1} also containing W. It is easy to see that A_F(2)=A_F'(1), A_F=tA_F'+f, A_F(l)=tA_F'(l)+f_l for 2≠ l∈ F, where f, f_l∈_0. Let ϕ_F=∏_l∈τ_i∩τ_jA_F(l)/A_F∏_l∈ F∖ WA_F(l)∈. Then (ϕ_F)=0 since |F∖ W|=|τ_i∩τ_j| and 2∈ F∖ W. Moreover, L(ϕ_F)=ϕ_F', where ϕ_F'∈_0 is defined in the same way as ϕ_F. Hence (<ref>) follows from Theorem <ref> and Lemma <ref>. amsplain
http://arxiv.org/abs/2406.17899v1
20240625192010
Entity Augmentation for Efficient Classification of Vertically Partitioned Data with Limited Overlap
[ "Avi Amalanshu", "Viswesh Nagaswamy", "G. V. S. S. Prudhvi", "Yash Sirvi", "Debashish Chakravarty" ]
cs.LG
[ "cs.LG", "cs.CV", "cs.DC" ]
Entity Augmentation A. Amalanshu et al. Autonomous Ground Vehicle Research Group Indian Institute of Technology Kharagpur Kharagpur, WB 721302, India Entity Augmentation for Efficient Classification of Vertically Partitioned Data with Limited Overlap Avi AmalanshuEqual contribution.^,Corresponding author. Email: Viswesh Nagaswamy1 G.V.S.S. Prudhvi1 Yash Sirvi1 Debashish Chakravarty July 1, 2024 ======================================================================================================================================================= § ABSTRACT Vertical Federated Learning (VFL) is a machine learning paradigm for learning from vertically partitioned data (i.e. features for each input are distributed across multiple “guest" clients and an aggregating “host" server owns labels) without communicating raw data. Traditionally, VFL involves an “entity resolution" phase where the host identifies and serializes the unique entities known to all guests. This is followed by private set intersection to find common entities, and an “entity alignment" step to ensure all guests are always processing the same entity's data. However, using only data of entities from the intersection means guests discard potentially useful data. Besides, the effect on privacy is dubious and these operations are computationally expensive. We propose a novel approach that eliminates the need for set intersection and entity alignment in categorical tasks. Our Entity Augmentation technique generates meaningful labels for activations sent to the host, regardless of their originating entity, enabling efficient VFL without explicit entity alignment. With limited overlap between training data, this approach performs substantially better (e.g. with 5% overlap, 48.1% vs 69.48% test accuracy on CIFAR-10). In fact, thanks to the regularizing effect, our model performs marginally better even with 100% overlap. § INTRODUCTION Federated Learning (FL) <cit.> is a recent distributed machine learning strategy. FL aims to achieve communication efficiency and data privacy by never communicating the raw data. In FL, data-owning participants (“guests") train models on their local data, coordinated and aggregated by a label-owning “host". FL typically implies a “horizontal" distribution, where a participant holds its own set of samples within a global dataset. Vertical Federated Learning (VFL) is a variant where parties holding different features of the same samples collaborate without pooling data to learn joint representations. This is essential for sensitive cross-institution collaborations, such as in healthcare, emphasizing the importance of aligning records to the same entities for cohesive, privacy-preserving model training. VFL effectively splits the parameters of a global model across the network. The host has the deeper layers and makes a prediction at each training/inference iteration. For the prediction to be meaningful, all guests must have passed their features of the same entity. But, this means they must discard data on entities not known to all participants– potentially valuable for training local models. In systems with a small intersection, there may be insufficient samples to train a VFL model effectively, hindering VFL's scalability. For example, cameras and traffic sensors at an intersection may struggle to detect crashes if the number of frames where the crash is visible to all cameras is small. 
The entity alignment process introduces significant computational overhead, hampering real-world VFL deployment at scale, affecting overall efficiency. Other challenges include data skew, where data distribution across entities varies drastically, and privacy risks during alignment despite VFL's principle of avoiding direct data sharing. This raises the question: are PSI and entity alignment truly necessary during training? We introduce Entity Augmentation, a strategy for VFL that eliminates the need for PSI and entity alignment. Instead of agreeing on a single entity (or batch), the host computes a weighted average of labels for all entities processed by any guest. The weights are proportional to the total dimension of the input vector corresponding to each entity's features. Hosts may calculate meaningful losses for any activations received, as long as each corresponds to labelled entities. In this paper, we: * Propose Entity Augmentation, a novel strategy that interpolates labels for all entities sent by all guests, weighted by their contribution to the host input, synthesizing semantically coherent labels for guest activations. * Demonstrate that VFL with Entity Augmentation achieves performance on par (better on some datasets) with VFL with entity alignment. These empirical results indicate that Entity Augmentation is a viable alternative to traditional FL pipelines, offering substantial improvements in data utilization, computational efficiency, and ease of deployment. § BACKGROUND §.§ VFL Participants Guests. Consider a consortium 𝒢, comprising participants each with a distinct feature set. For a guest i ∈𝒢, the dataset is 𝒟_i = {𝐱_j ∈ℝ^|F_i|: j ∈{1, 2, ..., |𝒮_i|}}, where: * 𝒮_i is the set of unique entities recorded in 𝒟_i. * F_i captures the attributes of these entities observed by guest i. * Entities are considered samples from a distribution X. The guest model m_i(· ; θ_i) : ℝ^|F_i|→ℝ^out_i is defined by parameters θ_i. These models aim to encode the features F_i of entities 𝐱∈⋂_i=1^|𝒢|𝒮_i for the host h to utilize in predictions, without sharing their model parameters or direct data features, including labels. Host. The host h coordinates the training process, holding the label set ℒ = {𝐲_j ∈ℝ^out: j ∈{1, 2, ..., |𝒮_h|}}, where 𝒮_h is the set of unique entities with labels. A crucial intersection |𝒮_𝒢∩𝒮_h| > 0 ensures shared entities for training. The host model m_h(· ; θ_h) : ℝ^out_1×ℝ^out_2× ... ×ℝ^out_|𝒢|→ℝ^out is parameterized by θ_h, aiming to minimize expected loss for optimal parameters θ = (θ_1, θ_2, ..., θ_|𝒢|, θ_h). §.§ Entity Alignment In VFL, coherence during training is ensured through data synchronization, formalized as 𝒮_𝒢 = ⋂_i=1^|𝒢|𝒮_i. This uses a private set intersection (PSI, <cit.>) multiparty computation, preserving privacy while identifying intersecting entities across 𝒢. Following PSI, the host h processes 𝒮_𝒢, ensuring uniform model training across the federated network. This step is vital for coherent aggregation of model updates, reflecting the collective knowledge of 𝒢. Without proper alignment, i.e., if 𝒮_𝒢 is not established, issues like data inconsistency (𝒮_i ⊈𝒮_𝒢 for any i) arise, leading to degraded model performance from training on non-corresponding entities. Additionally, without alignment, the federated model faces privacy vulnerabilities and inefficiencies in learning. Thus, Entity Alignment is crucial in vertical federated learning. 
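Entity Augmentation, introduced above, sidesteps exactly this alignment step: the host simply mixes the labels of whichever entities it has just received, with mixing weights proportional to the feature dimension |F_i| that each guest contributes to the host input. The Python fragment below is only a minimal sketch of that label synthesis (the function name, variable names and toy values are ours, not the released implementation).

import numpy as np

def augmented_label(entities, feature_dims, one_hot_labels):
    # entities: one entity id per guest for the current forward pass
    # feature_dims: |F_i| for each guest, in the same order
    # one_hot_labels: dict mapping entity id -> one-hot label vector
    weights = np.asarray(feature_dims, dtype=float)
    weights /= weights.sum()                         # weights proportional to |F_i|
    return sum(w * one_hot_labels[e] for w, e in zip(weights, entities))

# Toy usage: guests holding 12- and 4-dimensional feature slices forward two
# different entities; the synthetic soft target mixes their labels 3:1.
labels = {7: np.array([1.0, 0.0]), 42: np.array([0.0, 1.0])}
print(augmented_label([7, 42], [12, 4], labels))     # -> [0.75 0.25]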
§ RELATED WORK §.§ Entity Resolution in Federated Learning In the absence of unique IDs, the task of resolving common entities between datasets based on their features is called Entity Resolution. In 2017, Hardy et al. <cit.> introduced one of the first privacy-preserving strategies for learning from vertically partitioned data. The work proposes a pipeline of entity resolution, distributed logistic regression, and Paillier encryption to maintain privacy without noise addition. The authors demonstrate this works under certain entity resolution error assumptions without impacting model performance. This suggests certain errors do not alter optimal classifier performance. Nock et al. <cit.> investigate the the empirical impact of entity resolution errors on FL. The authors provide bounds on deviations in classifier performance due to these errors, and demonstrate the benefits of using label information with entity resolution algorithms. §.§ Data Augmentation for Classification Generalization CutMix <cit.> is a data augmentation technique used in image classification to enhance deep learning model training by combining parts of different images and their corresponding labels. Unlike traditional methods that process each image individually, CutMix creates new training examples by patching segments from multiple images together. Given two images A and B, and their corresponding one-hot encoded labels 𝐲_A and 𝐲_B, the CutMix process involves: * Randomly selecting a region R within image A. * Replacing region R in image A with the corresponding region from image B to generate a new training image A'. * Combining the labels proportionally to the number of pixels of each class present in the new image, resulting in a mixed label 𝐲' = λ𝐲_A + (1 - λ) 𝐲_B, where λ is the ratio of the remaining area of image A to the area of the original image. Mathematically, for a region R with bounding box coordinates (r_x, r_y, r_w, r_h), the new training image A' is represented as: A' = B_r_x:r_x+r_w, r_y:r_y+r_h for (i, j) ∈ R A_i, j otherwise Here, (i, j) is the pixel location in the images. The label mixing coefficient λ is typically sampled from a Beta distribution, which controls the strength of the mixing. CutMix improves model robustness and generalization by forcing the network to learn regionally informative features, rather than relying on specific patterns in the training set. This generates diverse examples within each mini-batch, helping to prevent overfitting. §.§ Sample Efficient Vertical Federated Learning Work on sample efficiency is scarce, despite its absence greatly limiting the applicability of VFL to carefully designed systems with significant overlap in sample spaces. Sun et al. propose a method <cit.> to solve this problem. Following a few epochs of VFL training on aligned data, guests cluster their remaining datasets based on gradients received during the aligned training. The authors experimentally show that this approach is performant. However, as suggested by Amalanshu et al. <cit.> this is a form of privacy-breaching label inference attack. In that paper, the authors present an unsupervised method of training guest models independently from host models, hence allowing them to exploit data outside the intersection without breaching privacy. However, task-relevant transfer learning still uses aligned datasets. § PROPOSED METHOD VFL typically assumes that the input datasets for each model are “aligned," meaning that records are consistent across entities indexed in (⋂_i=1^|𝒢|𝒮_i)∩𝒮_h. 
We propose a novel training approach for categorical tasks that allows each dataset to be sized min_i∈{1,…,|𝒢|}|𝒮_i∩𝒮_h|, or max_i∈{1,…,|𝒢|}|𝒮_i∩𝒮_h| if guests may reuse data. Extending the idea of the CutMix regularization, we propose entity augmentation for training the owner model. We construct artificial entity samples by combining features from various entities and averaging their labels. This approach enables training on a minimal subset of samples. There are various ways such a scheme might be implemented. For instance, entity augmentation may be precomputed before training begins– the host may inform the guests which order to process their entities, and memoize the corresponding augmented labels. Alternatively, the augmented labels could be computed at training time as long as the host is aware of the identities of all the entities whose encoded features it has just received. Algorithm <ref> outlines one way of achieving the latter for models trained via gradient-based algorithms. Using a queue to store the latest activations and sample IDs, we also achieve some fault tolerance– if a guest fails to send an activation, the host simply uses the last one received. We outline the procedure for entity alignment and augmentation in categorical tasks. The proposed method optimizes data use, enhancing the robustness and generalization of the learned models. Empirical results demonstrating the effectiveness of our approach, including in scenarios with deliberate sample misalignment, are presented in Section <ref>. § EXPERIMENTS To evaluate the effectiveness of the proposed algorithm, we conduct experiments on six different real-world datasets using three distinct architecture models in a SplitNN fashion. <cit.> The experiments are divided into the following setups: (1) aligned data setup, where the dataset is entity-aligned; and (2) misaligned data setup, where the dataset is entity-augmented/misaligned. This division helps us mimic real-world scenarios where data may not always be perfectly aligned between clients. We hope to demonstrate the following: * Entity Augmentation leads to meaningful learning, that is, Entity Augmentation allows us to exploit data outside the intersection 𝒮_𝒢∩ S_h (namely, members of ⋃_i=1^|𝒢|(𝒮_i∩𝒮_h)). * Training on datasets with Entity Augmentation and without alignment outperform that on aligned datasets if there are sufficiently long-range semantic correlations. We also provide a brief comparison to few-shot VFL <cit.> in Table <ref>. §.§ Datasets We use the following datasets and architectures for our experiments: * Computer Vision (CV) Datasets: MNIST <cit.> and CIFAR-10 split into two guests. <cit.> with ResNet-18, ResNet-56 <cit.>, and ResNeXt-29 (8x64d) <cit.>. * Tabular Datasets: Parkinsons <cit.> and Credit Card <cit.>. * Multiview Datasets: Handwritten Digits <cit.> and Caltech-7 <cit.>. The tabular and multiview datasets are divided evenly across four guests. §.§ Model Details Models used for VFL datasets Handwritten. Guests: (120) → (70) → ; Hosts: (280) → (120) → → (40) → (10) CalTech-7. Guests: (512) → (256) → ; Hosts: (1024) → (512) → (256) → → (128) → (7) Credit Card. Guests: (5) → (2) → ; Hosts: (22) → (10) → (8) → (4) → (1) Parkinsons. Guests: (94) → (47) → ; Hosts: (94) → (47) → → (22) → (10) → → (1) Guest-Host Model Splits for ResNet-like Models For all our CV models (ResNet-18, ResNet56, ResNeXt-29 8x64), each guest owns its own CNN filter as well as half of the first fully connected layer. 
The remaining fully connected layers are owned by the host. §.§ Nomenclature We will use the following terminology for the remainder of the paper * Aligned Data: Refers to entity-aligned/private set intersection data. For example, in the case of two clients, each client inputs corresponding parts of the same image into their respective models. * Misaligned Data: Refers to intentionally misaligned data– the members and order of the “misaligned" sample space are different for each guest. In this case, clients input parts of different images into their respective models. §.§ Experimental Setup Exploiting data outside the intersection.To evaluate the effect of entity augmentation, we propose an experiment where the dataset is divided into x% entity-aligned data and (100-x)/2% misaligned data for two clients. That is to say, we have x% of the dataset aligned between the two guests. where corresponding parts of the data are assigned to each client. The remaining (100-x)% is shuffled and split evenly between the two clients, i.e. each client gets a slice from a totally non-overlapping subset of the sample space. We attempt to train a split neural network with just the aligned data and investigate the impact on performance when the misaligned data is also used via Entity Augmentation. Entity Alignment vs Misaligned Augmentation. To test the hypothesis that training on misaligned data can outperform aligned data given long-range semantic correlations, we conduct experiments on fully aligned and intentionally misaligned data. For each dataset, we train models on both aligned and misaligned data. We compare the performance of the models to assess if misaligned data with sufficient long-range semantic correlations can lead to better learning outcomes. The results of these experiments demonstrate the impact of data alignment on model performance and the improved performance of entity augmentation. §.§ Implementation Details For the CV datasets, we apply the proposed algorithm using ResNet and ResNeXt architectures. For tabular and multiview datasets, we employ the SplitNN architecture. Each experiment is run for 60 epochs, with two guests for the CV and tabular datasets. For multiview datasets, we set the number of guests to be equal to the number of views. We implement our models in PyTorch and train them to minimize binary cross entropy loss. The PyTorch implementation internally calculates a sigmoid. We use the Adam optimizer with β_1=0.9, β_2=0.999. We use a learning rate of 0.001 for all CV experiments, 0.1 for both multiview datasets, and 5×10^-4 for both tabular datasets. § RESULTS AND DISCUSSIONS Entity Alignment vs Misaligned Augmentation. Our experiments with entity augmentation, as shown in Tables <ref> and <ref>, demonstrate that our method achieves comparable results on the MNIST dataset and improved performance on the CIFAR, Handwritten, Caltech-7, Credit Card and Parkinson's datasets. This is not unexpected since Entity Augmentation is functionally a form of CutMix, which has been shown to have a regularizing effect. <cit.> MNIST, with its single color channel and simpler, well-defined shapes, presents fewer long-range feature variations compared to datasets with complex imagery. For instance, a straight line in the top quarter could ambiguously belong to a 5 or 7. Thus, performance gains from CutMix are less pronounced on MNIST. Exploiting data outside the intersection. 
From the results of our experiment in Table 3, it is visible that when only a tiny entity-aligned dataset is available, using entity misaligned/augmented data (i.e., with no private set intersection) along with it for training provides better performance compared to training only on the aligned dataset. These results clearly support our claim that entity-misaligned/augmented data is helpful for training and results in better performance than only using entity-aligned data, resulting in seamless integration of diverse data sources, reduced data wastage, and enhanced model learning efficiency. More efficient training. Figures <ref> reveal that Entity Augmentation not only boosts the skyline performance of VFL models, but also allow them to converge substantially faster. Experiments using only x% aligned data plateaus at a much lower accuracy and at a far earlier epoch. A similar trend may be seen in our experiments on fully aligned vs fully misaligned data. Another interesting phenomenon is the stability of training– the test accuracy is qualitatively smoother and more stable wherever Entity Augmentation is used. § FUTURE WORK The proposed method shows promising results for training pipelines where the label can be represented in a one-hot encoded fashion. Subsequently, we seek to extend the idea of generating synthetic labels for regressive tasks. In this light, Verma et al. <cit.> investigate the potential of swapping weights in the penultimate layer to create samples through inference. Expanding upon this, Hwang et al. <cit.> use linear interpolation and constrained sampling for data augmentation. Furthermore, Jiang et al. <cit.> employ Gaussian Mixture Models to facilitate the generation of synthetic and continuous sensor data. Our future endeavours will focus on incorporating such augmentation techniques within the Vertical Federated Learning (VFL) framework. This integration seeks to optimize the utilization of data that lies beyond the confines of the Private Set Intersection, thereby enhancing the efficiency and effectiveness of the VFL pipeline for regressive tasks. § CONCLUSION This work presents Entity Augmentation, a strategy for generating semantically meaningful labels for guest activations without entity alignment. We interpolate labels weighted by features to synthesize labels for training. We subsequently demonstrate that our pipeline achieves performance on par with traditional FL approaches that require entity alignment. Our evaluations on the CIFAR10 and MNIST datasets showed improved results across various baseline architectures, and we achieved competitive results on Handwritten, Caltech-7, Parkinsons and Credit Card datasets. In future, we seek to extend the augmentation technique to regressive tasks and experiment with Gaussian mixture models and constrained sampling. splncs04
http://arxiv.org/abs/2406.18025v1
20240626024700
Precise determination of the bottom-quark on-shell mass using its four-loop relation to the $\overline{\rm MS}$-scheme running mass
[ "Shun-Yue Ma", "Xu-Dong Huang", "Xu-Chang Zheng", "Xing-Gang Wu" ]
hep-ph
[ "hep-ph" ]
^1 Department of Physics, Chongqing Key Laboratory for Strongly Coupled Physics, Chongqing University, Chongqing 401331, P.R. China ^2 College of Physics and Electronic Engineering, Chongqing Normal University, Chongqing 401331, P.R. China § ABSTRACT In this paper, we explore the properties of the bottom-quark on-shell mass (M_b) by using its relation to the MS mass (m_b). At present, this MS-on-shell relation has been known up to four-loop QCD corrections, which however still has a ∼ 2% scale uncertainty by taking the renormalization scale as m_b(m_b) and varying it within the usual range of [m_b(m_b)/2, 2 m_b(m_b)]. The principle of maximum conformality (PMC) has been adopted to achieve a more precise MS-on-shell relation by eliminating such scale uncertainty. As a step forward, we also estimate the magnitude of the uncalculated higher-order terms by using the Padé approximation approach. Numerically, by using the MS mass m_b(m_b)=4.18^+0.03_-0.02 GeV as an input, our predicted value for the bottom-quark on-shell mass becomes M_b≃ 5.36^+0.10_-0.07 GeV, where the uncertainty is the squared average of the ones caused by Δα_s(M_Z), Δm_b(m_b), and the estimated magnitude of the higher-order terms. Precise determination of the bottom-quark on-shell mass using its four-loop relation to the MS-scheme running mass Xing-Gang Wu ^1 July 1, 2024 ================================================================================================================== Quark masses are important parameters for the Quantum Chromodynamics (QCD) theory, which need to be renormalized in higher-order calculations. In perturbative QCD (pQCD) theory, two schemes are frequently adopted for renormalizing the quark masses, e.g. the on-shell (OS) scheme <cit.> and the modified minimal subtraction (MS) scheme <cit.>. The OS mass, also known as the pole mass, offers the advantage of being grounded in a physical definition which is gauge-parameter independent and scheme independent. It ensures that the inverse heavy-quark propagator exhibits a zero at the location of the pole mass to any order in the perturbative expansion. On the other hand, the MS scheme focuses solely on removing the subtraction term 1/ϵ+ln(4π)-γ_E from the quantum corrections to the quark two-point function. And by combining this with the bare mass, one can derive the expression for the renormalized MS mass. In high-energy processes, the MS mass is preferred for its lack of intrinsic uncertainties. It has been found that for the high-energy processes involving the bottom quark, such as the B meson decays, when their typical scales are lower than the bottom quark mass, the using of MS mass becomes less suitable and the OS mass is usually adopted. Practically, the perturbative series using the OS mass is plagued by renormalon ambiguities <cit.>, resulting in a perturbative series with poor convergence. Thus for precision tests of the Standard Model, accurate determination of the OS mass is important. It is noted that the OS mass can be related to the MS mass by using the perturbative relation between the bare quark mass (m_q,0) and the renormalized mass in either the OS or MS scheme, where q denotes the heavy charm, bottom, and top quark, respectively. For example, we have m_q,0=Z^ OS_mM_q^ OS and m_q,0=Z^ MS_mm_q(μ_r), where μ_r is the renormalization scale. Here, Z^ OS_m and Z^ MS_m represent the quark mass renormalization constants in the OS and MS scheme, respectively. 
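Since the bare mass m_q,0 is scheme independent, equating the two renormalization conditions above makes the scheme conversion explicit: Z^OS_m M_q^OS = Z^MS_m m_q(μ_r), and therefore M_q^OS/m_q(μ_r) = Z^MS_m/Z^OS_m. The MS-on-shell relation analyzed below is simply the perturbative expansion of this ratio of mass renormalization constants in powers of a_s = α_s/(4π).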
At present, the relation between the OS mass and the MS mass, called as the MS-on-shell relation, has been calculated up to four-loop QCD corrections <cit.>. Those improvements enable the possibility of precise determination of the bottom quark OS mass with the help of the experimentally fixed MS mass. Both α_s and m_q are scale dependent, whose scale running behaviors are governed by the renormalization group equations (RGEs) that involve either the β-function <cit.> or the quark mass anomalous dimension γ_m-function <cit.>. Thus the crucial point for this determination is how to fix the precise values of α_s and the MS running mass m_q simultaneously. Practically, people usually uses the guessed renormalization scale and varies it within a certain range to estimate its uncertainty for a fixed-order pQCD series. This naive treatment leads to mismatches among the strong coupling constant and its expansion coefficients, which directly breaks the standard renormalization group invariance <cit.> and results in the conventional renormalization scale and scheme ambiguities. The effectiveness of this treatment depends heavily on the convergence of the pQCD series. Unfortunately, the bottom quark MS-on-shell relation exhibits poor convergence. The MS-on-shell relation up to four-loop QCD corrections still has a scale uncertainty about 100 MeV <cit.>, which significantly exceeds the current uncertainty of the MS mass of the bottom quark issued by the Particle Data Group <cit.>, m_b(m_b)=4.18^+0.03_-0.02 GeV. In order to eliminate such artificially introduced scale ambiguity for fixed-order series, the principle of maximum conformality (PMC) has been proposed in the literatures <cit.>. The PMC offers a systematic approach for determining the correct value of α_s by using the RGE-involved {β_i}-terms of the pQCD series. After using those {β_i}-terms, the initial pQCD series changes into a newly scheme-independent conformal series. It has been found that the resultant PMC series is independent of any choice of renormalization scale, and the scale-invariant PMC series is also valuable for estimating the contributions of uncalculated higher-order (UHO) terms <cit.>. The comprehensive exploration of the PMC can be found in the review articles <cit.>. Recently, an improved PMC scale-setting procedure has been proposed in Ref.<cit.>. This approach simultaneously determines the correct magnitudes of α_s and the quark mass m_q by utilizing the RGEs for determining the magnitudes of the running coupling α_s and the running mass under the same scheme such as the MS scheme. Upon implementing the new scale-setting procedure to the MS-on-shell relation, the renormalization scale ambiguity inherent in the pQCD series is effectively eliminated. Additionally, the renormalon terms associated with the β-function and γ_m-function of the pQCD series can also be removed. In Ref.<cit.>, we have made a detained discussion on the top-quark pole mass. In this Letter, we intend to utilize the PMC scale-setting approach to determine the bottom-quark OS mass. Furthermore, the Padé approximation approach (PAA) <cit.>, which offers a systematic method for transforming a finite perturbative series into an analytic function, will be employed for resuming those renormalon terms that are not associated with the β-function and γ_m-function. 
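Both analyses below require running α_s(μ_r) and m_b(μ_r) between M_Z and bottom-quark scales via these RGEs. The numerics in this work use the RunDec package at higher loop orders; the following one-loop estimate is only our own simplification (n_f=5, no flavour-threshold matching, illustrative function name and defaults), but it already shows how quickly the coupling grows toward low scales:

import math

def alpha_s_one_loop(mu, alpha_s_mz=0.1179, mz=91.19, nf=5):
    # One-loop running only; the text uses RunDec with proper thresholds.
    beta0 = 11.0 - 2.0 * nf / 3.0            # one-loop beta-function coefficient
    a_mz = alpha_s_mz / (4.0 * math.pi)      # a_s = alpha_s/(4*pi)
    a_mu = a_mz / (1.0 + beta0 * a_mz * math.log(mu**2 / mz**2))
    return 4.0 * math.pi * a_mu

print(round(alpha_s_one_loop(4.18), 3))      # 0.212 at one loop; the text quotes alpha_s(m_b) ~ 0.22
print(round(alpha_s_one_loop(1.92), 3))      # 0.265 at one loop; the higher-loop value quoted in the text is ~ 0.31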
The relationship between the bottom-quark MS quark mass and its OS quark mass can be expressed as M_b/m_b(μ_r)=Z_m^ MS/Z_m^ OS=∑_n≥0a^n_s(μ_r) c^(n)_m(μ_r), where a_s=α_s/(4π), c^(0)_m(μ_r)=1 and c^(n)_m(μ_r) is a function of ln(μ_r^2/m_b^2(μ_r)). Subsequently, the determination of the bottom-quark OS mass can be achieved using the following relationship: M_b|_ conv. = m_b(μ_r){1+𝒞_1(μ_r)a_s(μ_r) + 𝒞_2(μ_r)a_s^2(μ_r) +𝒞_3(μ_r)a_s^3(μ_r)+ 𝒞_4(μ_r)a_s^4(μ_r)+𝒪(a^5_s)}, where conv. stands for the initial pQCD series given under the MS scheme. As mentioned above, the expansion coefficients 𝒞_i have been known up to N^4LO-level, which need to be transformed as the β-series so as to fix the correct values of α_s and m_b. This transformation can be done by using the general QCD degeneracy relations, and the results are given in Ref.<cit.>. Then, by applying the PMC scale-setting procedures, Eq.(<ref>) can be transformed into the following conformal series: M_b|_ PMC = m_b(Q_*){1+ r_1,0a_s(Q_*) + r_2,0a_s^2(Q_*) +r_3,0a_s^3(Q_*)+ r_4,0a_s^4(Q_*)+𝒪(a^5_s)}, where r_i,0 are conformal coefficients. Here Q_* represents the PMC scale, whose logarithmic form ln(Q_*^2/m_b^2(Q_*)) can be represented as a power series in a_s(Q_*): lnQ^2_*/m_b^2(Q_*)=∑_i=0^n S_i a^i_s(Q_*), where the coefficients S_i can be determined up to next-to-next-to-leading log (NNLL) accuracy <cit.> by using the given four-loop pQCD series. It is found that Eq.(<ref>) is independent to the choice of renormalization scale. This property ensures that both the running mass m_b and the running coupling constant α_s are concurrently determined. By matching the μ_r-independent conformal coefficients r_i,0, the resulting PMC series becomes devoid of the conventional renormalization scale ambiguity. We are now ready to calculate the bottom-quark OS mass M_b through its perturbative relation to the MS mass. To do the numerical computations, we adopt α_s(M_Z)=0.1179±0.0009 and m_b(m_b)=4.18^+0.03_-0.02 GeV <cit.>. The scale running of α_s(μ_r) is calculated by using the package RunDec <cit.>. Using Eq.(<ref>) and setting all the input parameters to be their central values, we present the bottom-quark OS mass M_b under conventional scale-setting approach in FIG. <ref>. FIG. <ref> shows how the bottom-quark OS mass M_b changes with the renormalization scale and demonstrates that the conventional renormalization scale dependence diminishes as more loop terms have been incorporated. Numerically, we have M_b|_ Conv. = 4.18^+0.78_-0.11 +0.40^-0.58_+0.06 + 0.20^-0.16_+0.02 + 0.14^+0.02_+0.01 + 0.14^+0.03_-0.00 = 5.06^+0.09_-0.02 ( GeV), whose central values are for μ_r=m_b(m_b), and the uncertainties are for μ_r ∈ [m_b(m_b)/2, 2m_b(m_b)].The relative magnitudes of the leading-order terms (LO): the next-to-leading-order terms (NLO): the next-to-next-to-leading-order terms (N^2LO): the next-to-next-to-next-to-leading-order terms (N^3LO): the next-to-next-to-next-to-next-to-leading-order terms (N^4LO) are approximately 1: 9.6%: 4.8%: 3.4%: 3.4% for the case of μ_r=m_b(m_b). Eq.(<ref>) shows the magnitudes of each loop terms are highly scale dependent, and the perturbative behavior of the whole series is different for different scale choices. Within this scale range, the absolute scale uncertainties are about 21%, 160%, 90%, 14%, and 21% for the LO, the NLO, the N^2LO, the N^3LO, and the N^4LO terms, respectively. 
The overall scale uncertainty of the four-loop prediction of M_b becomes ∼ 2.2% due to the large cancellation of scale dependence among different orders. Similarly, using Eq.(<ref>), we present M_b under the PMC scale-setting approach in FIG. <ref>. The PMC scale Q_* can be fixed up to N^2LL accuracy by using Eq.(<ref>), i.e., lnQ^2_*/m_b^2(Q_*) = -72.8957 a_s(Q_*)+137.61 a^2_s(Q_*) -17177.2 a^3_s(Q_*), which is independent to any choice of μ_r and leads to Q_*=1.92 GeV. The flat lines in FIG. <ref> indicates that the PMC prediction is devoid of renormalization scale ambiguity at any fixed order. Numerically, we have M_b|_ PMC = 5.09 + 0.66 - 0.43 - 0.15 + 0.19 = 5.36 ( GeV). It shows that the relative importance of the LO: the NLO: the N^2LO: the N^3LO: the N^4LO terms in the PMC series is 1: 13.0%: -8.5%: -3.0%: 3.7%. It has been found that for the case of top quark <cit.>, whose α_s(m_t)∼ 0.11 and α_s(Q^*_t=123.3  GeV)∼ 0.11, there is good perturbative behavior for both the conventional and PMC series. However, for the present case of bottom quark, whose α_s(m_b)∼ 0.22 and α_s(Q^*_b=1.92  GeV)∼ 0.31, the α_s power suppression fails to counterbalance the influence of the substantial numerical coefficients even after applying the PMC, e.g. Eq.(<ref>) shows that the relative importance of the N^2LO: N^3LO: N^4LO terms for conventional series is 1: 70%: 70% for μ_r=m_b(m_b), 1: 400%: 425% for μ_r=m_b(m_b)/2, and 1: 68%: 64% for μ_r=2 m_b(m_b), respectively; and Eq.(<ref>) shows that such relative importance changes to 1: 35%: 44% for the scale-invariant PMC series. At present, the relatively large magnitude of the N^4LO-terms indicates that the magnitude of the N^4LO conformal coefficients, which are unrelated to the RGE-involved β-terms or the quark mass anomalous dimension involved γ_m-terms, is large. However such scale-invariant perturbative behavior can be treated as the intrinsic perturbative behavior of the MS-on-shell relation. By properly choosing the scale, the perturbative behavior of conventional series will be close to the PMC one. Thus for those cases, a proper scale-setting approach to achieve a scale-invariant series is very important. Moreover, to compare with the N^2LO-terms and N^3LO-terms, the sizable magnitude of the N^4LO-terms indicates the importance of knowing whether the UHO-terms can give sizable contributions and present the wanted convergent behavior. For the purpose, we adopt the PAA to estimate the magnitude of the UHO contributions. The PAA is a kind of resummation to create an appropriate generating function such as the fractional generating function; and it offers a systematic way for transforming a finite perturbative series into an analytic function. For a given pQCD series that can be written as ρ(Q)=∑^n_i=1 C_i a_s^i, its [N/M]-type fractional generating function is defined as <cit.>, ρ^N/M(Q) = a_s×b_0+b_1 a_s+⋯+b_N a_s^N/1+c_1 a_s+⋯+c_M a_s^M, where N and M are integers, N≥0, M≥1, N+M+1=n. The input parameters b_i∈[0,N] and c_i∈[1,M] can be expressed by the known coefficients C_i∈[1,n], and then the first unknown coefficient, e.g. the (n+1)_ th-order coefficient C_n+1, can be expressed by b_i∈[0,N] and c_i∈[1,M]. Using Eqs.(<ref>, <ref>), the MS-on-shell relation up to m_ th-loop level can be written as the following form, M_b|_PAA^N/M=m_b(μ)[1+ρ^N/M(μ)], where N+M=m-2 due to n=m-1, μ=μ_r for conventional series and μ=Q_* for the PMC series. The PAA works for m≥ 3. 
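To make the construction explicit, consider the lowest case used here, the [0/1]-type approximant built from a series known through order a_s^2 (an illustrative special case of the formulas above): ρ^0/1(Q) = a_s b_0/(1+c_1 a_s) = b_0 a_s - b_0 c_1 a_s^2 + b_0 c_1^2 a_s^3 - ⋯. Matching the first two orders to ρ(Q) = C_1 a_s + C_2 a_s^2 fixes b_0 = C_1 and c_1 = -C_2/C_1, so the first unknown coefficient is predicted to be C_3 ≈ b_0 c_1^2 = C_2^2/C_1. The [0/2]- and [0/3]-type predictions used below follow in the same way from the three- and four-loop coefficients, with correspondingly more matching conditions.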
Following the standard PAA procedures described in detail in Ref.<cit.>, we obtain the different types of PAA predictions by using the known two-, three-, and four-loop pQCD series and put them in Table <ref> and Table <ref>. Due to the presence of divergent renormalon terms associated with the β-function and γ_m-function in each loop term, the PAA prediction derived from the conventional series exhibits significant uncertainty. The PAA prediction is generally [N/M]-type dependent, which provides the systematic error of the PAA approach. More explicitly, Table <ref> shows the relative magnitudes of the allowable types of PAA predictions are M_b|_ PAA^m=3: M_b|_ PAA^m=4: M_b|_ PAA^m=5= 1: 1.05∼1.07: 0.69∼1.41 for the conventional series, respectively. On the contrary, the PMC conformal series, which is free of renormalon terms associated with the β-function and γ_m-function, can be a more reliable foundation for predicting UHO contribution. Table <ref> shows the relative magnitudes of the allowable types of PAA predictions are M_b|_ PAA^m=3: M_b|_ PAA^m=4: M_b|_ PAA^m=5= 1: 0.93∼0.98: 0.99∼1.01 for the PMC series, respectively. It has been found that the [0/n-1] or [0/m-2]-type PAA predictions are self-consistent for the PMC method itself <cit.>, which agrees with the GM-L scale-setting procedure <cit.> to obtain scale-independent perturbative QED predictions. Moreover, one may also observe that the [0/n-1]-type PAA predictions under different orders exhibits better stability than other types. So, we adopt the [0/n-1]-type PAA predictions as an estimate of the UHO contributions, i.e., Δ M_b|_ Conv.^ High order = ±|M_b|_ PAA, Conv.^[0/3]-M_b|_ Conv.| = ±0.51 ( GeV), Δ M_b|_ PMC^ High order = ±|M_b|_ PAA, PMC^[0/3]-M_b|_ PMC| = ±0.01 ( GeV). In addition to the uncertainties due to UHO-terms, there are also uncertainties from the Δα_s(M_Z) and Δm_b(m_b). Using α_s(M_Z)=0.1179±0.0009 <cit.> as an estimate, we obtain Δ M_b|_ Conv.^Δα_s(M_Z) = (^+0.02_-0.03) ( GeV), Δ M_b|_ PMC^Δα_s(M_Z) = (^+0.09_-0.07) ( GeV). Using the RGE to fix the correct magnitude of α_s, the PMC series thus depends heavily on the precise α_s running behavior. The more sensitivity of the PMC series on the value of α_s(M_Z) makes it inversely be a better platform to fix the reference point value from comparison of experimental data <cit.>. Regarding the uncertainty arising from the choice of the bottom-quark MS mass, Δm_b(m_b)=(^+0.03_-0.02) GeV, we obtain Δ M_b|_ Conv.^Δm_b(m_b) = ±0.03 ( GeV), Δ M_b|_ PMC^Δm_b(m_b) = (^+0.03_-0.02) ( GeV). This indicates that the bottom-quark OS mass could depend almost linearly on its MS mass, since the uncertainty is at the same order of O(Δm_b(m_b)). In summary, we have determined the bottom-quark OS mass using the four-loop MS-on-shell relation in conjunction with the newly suggested PMC approach, which determines the correct magnitudes of the α_s and the MS-running mass simultaneously by using the β-function and γ_m-function of the pQCD series. Taking the bottom-quark MS mass m_b(m_b)=4.18^+0.03_-0.02 GeV as an input, we have derived a precise bottom-quark OS mass: M_b|_ Conv. = 5.06^+0.52_-0.51 ( GeV), M_b|_ PMC = 5.36^+0.10_-0.07 ( GeV), where the uncertainties stem from the mean square of those originating from Δ M_b|^High order, Δ M_b|^Δα_s(M_Z), and Δ M_b|_ Conv.^Δm_b(m_b), respectively. It is important to note that the conventional prediction still exhibits renormalization scale uncertainty, which arises from varying μ_r within the range μ_r ∈[m_b(m_b)/2, 2m_b(m_b)]. 
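The quoted totals are obtained by combining the individual uncertainty sources in quadrature; a quick check with the central values listed above (our own arithmetic) reproduces the PMC errors up to rounding:

# Combine the PMC uncertainty sources in quadrature.
up   = (0.01**2 + 0.09**2 + 0.03**2) ** 0.5   # UHO estimate, +Delta alpha_s(M_Z), +Delta m_b(m_b)
down = (0.01**2 + 0.07**2 + 0.02**2) ** 0.5   # UHO estimate, -Delta alpha_s(M_Z), -Delta m_b(m_b)
print(round(up, 2), round(down, 2))           # 0.1 0.07  ->  M_b|_PMC = 5.36 +0.10/-0.07 GeV
# For the conventional series the +-0.51 GeV UHO estimate dominates the total.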
The accuracy of the pQCD predictions within the framework of the MS running mass scheme is critically dependent on the precise determination of α_s and m_q. With the implementation of the PMC approach, the accurate values of the effective α_s and m_q can be ascertained. This results in a more convergent pQCD series, thereby reducing uncertainties and promoting the attainment of a reliable and precise pQCD prediction. Acknowledgments. This work was supported in part by the National Natural Science Foundation of China under Grant No.12175025, No.12247129, and No.12347101, by the Graduate Research and Innovation Foundation of Chongqing, China under Grant No.ydstd1912, and by the Foundation of Chongqing Normal University under Grant No.24XLB015. 100 Tarrach:1980up R. Tarrach, Nucl. Phys. B183, 384 (1981). tHooft:1973mfk G. 't Hooft, Nucl. Phys. B 61, 455-468 (1973). Bardeen:1978yd W. A. Bardeen, A. J. Buras, D. W. Duke and T. Muta, Phys. Rev. D 18, 3998 (1978). Beneke:1994qe M. Beneke and V. M. Braun, Phys. Lett. B 348, 513 (1995). Neubert:1994vb M. Neubert, Phys. Rev. D 51, 5924 (1995). Beneke:1998ui M. Beneke, Phys. Rep. 317, 1 (1999). Gray:1990yh N. Gray, D. J. Broadhurst, W. Grafe, and K. Schilcher, Z. Phys. C 48, 673 (1990). Chetyrkin:1999qi K. G. Chetyrkin and M. Steinhauser, Nucl. Phys. B573, 617 (2000). Melnikov:2000qh K. Melnikov and T. V. Ritbergen, Phys. Lett. B 482, 99 (2000). Jegerlehner:2002em F. Jegerlehner, M. Y. Kalmykov, and O. Veretin, Nucl. Phys. B658, 49 (2003). Jegerlehner:2003sp F. Jegerlehner and M. Y. Kalmykov, Acta Phys. Polon. B 34, 5335 (2003). Faisst:2004gn M. Faisst, J. H. Kuhn, and O. Veretin, Phys. Lett. B 589, 35 (2004). Marquard:2007uj P. Marquard, L. Mihaila, J. H. Piclum, and M. Steinhauser, Nucl. Phys. B773, 1 (2007). Marquard:2015qpa P. Marquard, A. V. Smirnov, V. A. Smirnov, and M. Steinhauser, Phys. Rev. Lett. 114, 142002 (2015). Marquard:2016dcn P. Marquard, A. V. Smirnov, V. A. Smirnov, M. Steinhauser, and D. Wellmann, Phys. Rev. D 94, 074025 (2016). Kataev:2018mob A. L. Kataev and V. S. Molokoedov, JETP Lett. 108, 777 (2018). Kataev:2018sjv A. L. Kataev and V. S. Molokoedov, Theor. Math. Phys. 200, 1374 (2019). Kataev:2018gle A. L. Kataev and V. S. Molokoedov, Eur. Phys. J. C 80, 1160 (2020). Politzer:1973fx H. D. Politzer, Phys. Rev. Lett. 30, 1346 (1973). Gross:1973id D. J. Gross and F. Wilczek, Phys. Rev. Lett. 30, 1343 (1973). Politzer:1974fr H. D. Politzer, Phys. Rep. 14, 129 (1974). Gross:1973ju D. J. Gross and F. Wilczek, Phys. Rev. D 8, 3633 (1973). Chetyrkin:2004mf K. G. Chetyrkin, Nucl. Phys. B710, 499 (2005). Baikov:2016tgj P. A. Baikov, K. G. Chetyrkin, and J. H. Kühn, Phys. Rev. Lett. 118, 082002 (2017). Vermaseren:1997fq J. A. M. Vermaseren, S. A. Larin, and T. van Ritbergen, Phys. Lett. B 405, 327 (1997). Chetyrkin:1997dh K. G. Chetyrkin, Phys. Lett. B 404, 161 (1997). Baikov:2014qja P. A. Baikov, K. G. Chetyrkin, and J. H. Kühn, J. High Energy Phys. 10 (2014) 076. Wu:2013ei X. G. Wu, S. J. Brodsky, and M. Mojaza, Prog. Part. Nucl. Phys. 72, 44 (2013). Wu:2014iba X. G. Wu, Y. Ma, S. Q. Wang, H. B. Fu, H. H. Ma, S. J. Brodsky, and M. Mojaza, Rept. Prog. Phys. 78, 126201 (2015). Workman:2022zbs R.L. Workman et al. [Particle Data Group], PTEP 2022, 083C01 (2022). Brodsky:2011ta S. J. Brodsky and X. G. Wu, Phys. Rev. D 85, 034038 (2012). Brodsky:2011ig S. J. Brodsky and L. Di Giustino, Phys. Rev. D 86, 085026 (2012). Brodsky:2012rj S. J. Brodsky and X. G. Wu, Phys. Rev. Lett. 109, 042002 (2012). Mojaza:2012mf M. Mojaza, S. J. 
Brodsky, and X. G. Wu, Phys. Rev. Lett. 110, 192001 (2013). Brodsky:2013vpa S. J. Brodsky, M. Mojaza, and X. G. Wu, Phys. Rev. D 89, 014027 (2014). Shen:2017pdu J. M. Shen, X. G. Wu, B. L. Du, and S. J. Brodsky, Phys. Rev. D 95, 094006 (2017). Du:2018dma B. L. Du, X. G. Wu, J. M. Shen, and S. J. Brodsky, Eur. Phys. J. C 79, 182 (2019). Wu:2019mky X. G. Wu, J. M. Shen, B. L. Du, X. D. Huang, S. Q. Wang, and S. J. Brodsky, Prog. Part. Nucl. Phys. 108, 103706 (2019). DiGiustino:2023jiq L. Di Giustino, S. J. Brodsky, P. G. Ratcliffe, X. G. Wu and S. Q. Wang, Prog. Part. Nucl. Phys. 135, 104092 (2024). Huang:2022rij X. D. Huang, X. G. Wu, X. C. Zheng, J. Yan, Z. F. Wu and H. H. Ma, Chin. Phys. C 48, 053113 (2024). Basdevant:1972fe J. L. Basdevant, Fortsch. Phys. 20, 283 (1972). Samuel:1992qg M. A. Samuel, G. Li, and E. Steinfelds, Phys. Lett. B 323, 188 (1994). Samuel:1995jc M. A. Samuel, J. R. Ellis, and M. Karliner, Phys. Rev. Lett. 74, 4380 (1995). Herren:2017osy F. Herren and M. Steinhauser, Comput. Phys. Commun. 224, 333-345 (2018). GellMann:1954fq M. Gell-Mann and F. E. Low, Phys. Rev. 95, 1300 (1954). Shen:2023qgz J. M. Shen, B. H. Qin, J. Yan, S. Q. Wang and X. G. Wu, J. High Energy Phys. 07, 109 (2023).
http://arxiv.org/abs/2406.19369v1
20240627174925
Mamba or RWKV: Exploring High-Quality and High-Efficiency Segment Anything Model
[ "Haobo Yuan", "Xiangtai Li", "Lu Qi", "Tao Zhang", "Ming-Hsuan Yang", "Shuicheng Yan", "Chen Change Loy" ]
cs.CV
[ "cs.CV" ]
Higher-twist generalized parton distributions of the pion and kaon at zero skewness in the light-cone quark model Zhun Lu July 1, 2024 ================================================================================================================= § ABSTRACT †: Project Lead. E-mail: xiangtai94@gmai.com and whuyuanhaobo@gmail.com. Transformer-based segmentation methods face the challenge of efficient inference when dealing with high-resolution images. Recently, several linear attention architectures, such as Mamba and RWKV, have attracted much attention as they can process long sequences efficiently. In this work, we focus on designing an efficient segment-anything model by exploring these different architectures. Specifically, we design a mixed backbone that contains convolution and RWKV operation, which achieves the best for both accuracy and efficiency. In addition, we design an efficient decoder to utilize the multiscale tokens to obtain high-quality masks. We denote our method as RWKV-SAM, a simple, effective, fast baseline for SAM-like models. Moreover, we build a benchmark containing various high-quality segmentation datasets and jointly train one efficient yet high-quality segmentation model using this benchmark. Based on the benchmark results, our RWKV-SAM achieves outstanding performance in efficiency and segmentation quality compared to transformers and other linear attention models. For example, compared with the same-scale transformer model, RWKV-SAM achieves more than 2× speedup and can achieve better segmentation performance on various datasets. In addition, RWKV-SAM outperforms recent vision Mamba models with better classification and semantic segmentation results. Code and models will be publicly available. § INTRODUCTION Trained on large-scale segmentation datasets, Segment Anything Model (SAM) <cit.> has recently garnered significant attention due to its remarkable versatility and effectiveness across numerous segmentation tasks. By taking visual prompts such as points and boxes provided by humans or other models as inputs, SAM can generate masks in various scenes, enabling various downstream applications such as image editing <cit.>, remote sensing <cit.>, medical image segmentation <cit.>, etc. Despite its robust generalization capabilities, SAM exhibits several drawbacks that may hinder its practical applications in some scenarios. First, the computational cost of SAM is exceptionally high. Second, the segmentation quality of SAM still falls short in some cases; for example, SAM always generates overly smooth edges, which do not fit many cases. The above two drawbacks limit the application of SAM in real-time scenarios and fields requiring high-quality segmentation results. Existing works usually only focus on solving either the first problem or the second problem. For example, several works <cit.>, such as EdgeSAM <cit.> and Efficient SAM <cit.>, aim to explore efficient architecture for SAM. However, the segmentation quality is still limited. On the other hand, there are several works <cit.> explore high-resolution and high-quality SAM. They bring extra computational costs to SAM, which slows down the inference. Thus, a balance between high quality and high efficiency should be explored to better deploy SAM in real-world applications. 
Recently, a series of works starting from the natural language processing community (e.g., RWKV <cit.>, Mamba <cit.>) and following in the computer vision community (e.g., VMamba <cit.>, Vision-RWKV <cit.>) have begun to focus on designing methods capable of handling long-range dependencies in linear time (linear attention models). Compared with transformers, where the complexity of their computation increases quadratically with the sequence length, the linear attention models reformulate the attention mechanism so that it scales linearly with the sequence length, thus significantly reducing computational costs when the sequence is very long. As a result, linear attention models can handle very long sequences while maintaining their global perception capability. However, there are no previous works exploring these architectures on SAM-like promptable segmentation tasks. In this work, we try to solve these problems together to build an efficient and high-quality SAM using recent linear attention models. In particular, we propose RWKV-SAM to handle SAM's computational cost and segmentation quality problems. The high computational cost of SAM can be attributed to two reasons: 1). extensive parameter count, and 2). quadratic time complexity caused by attention design in transformer layers as the input feature size grows. While prior efforts tackle the efficiency issue of SAM by reducing the model size (e.g., EfficientSAM <cit.>), these solutions still face quadratic time complexity, which means they cannot achieve good efficiency in high-resolution inputs, for example, with 1024 × 1024 high-resolution inputs. We propose an efficient segmentation backbone leveraging the RWKV <cit.> to improve the efficiency in the high-resolution while maintaining the global perception. Our efficient segmentation backbone contains three stages, which the decoder can use to refine the generated masks. In addition, we explore different decoder designs to fuse the different scales of the features and train the model on a combined high-quality dataset to enable our RWKV-SAM as a high-quality and high-efficiency segment-anything model. We evaluate our method on various datasets and benchmarks. As depicted in Figure <ref>, our RWKV-SAM outperforms previous methods in efficiency and quality. Although it only requires about 1/16 of the inference time of SAM, our method achieves more accurate, high-quality segmentation results. Even though the model size is comparable to EfficientSAM, our RWKV-SAM runs more than 2x faster. Our method performs better than HQ-SAM with greater detail due to the information from low-level local features from the backbone. Compared with previous linear models, such as Mamba, our RWKV-SAM runs faster and performs better on various benchmarks. Our model runs even faster when using extremely high-resolution inputs. We have the following contributions to this work: (1) We propose RWKV-SAM, which contains an efficient segmentation backbone that yields different resolutions of feature maps and leverages the RWKV operation to reduce time complexity. (2) We explore different designs to leverage the multiscale feature maps in the decoder and train the RWKV-SAM on the high-quality segmentation datasets to enable the high-quality segmentation capability. (3) We demonstrate the effectiveness of RWKV-SAM on several benchmarks, surpassing previous methods while maintaining efficiency. (4) We conduct detailed comparison studies on various linear attention models, including Vision Mamba and VRWKV. 
To our knowledge, this is the first work to explore these models in a fair comparison manner. § RELATED WORK Efficient Segmentation. Existing methods <cit.> on efficient segmentation have mainly concentrated on closed-set and specific domains <cit.>. Much of the efficient segmentation research <cit.> is dedicated to driving scenarios. Also, multiple studies have been conducted on efficient panoptic segmentation <cit.> and fast video instance segmentation <cit.>. Recently, various studies <cit.> have developed efficient segmentation techniques that facilitate model execution on mobile devices for the segment anything model. Mobile SAM <cit.> introduces a streamlined encoder distillation method. Fast SAM <cit.> employs a single-stage instance segmentation framework that directly decodes class-agnostic masks. Edge SAM <cit.> deploys the SAM model on a real-world mobile device with a new prompt-guided distillation. Efficient SAM <cit.> In this work, in addition to the real-time constraint, we also aim for high-quality segmentation. Efficient Backbone. This direction primarily concentrates on developing efficient CNNs <cit.>, transformers <cit.>, and hybrid architectures <cit.>, to learn visual representations. Recently, several works <cit.> have explored linear attention models, including RWKV <cit.> and Mamba <cit.> in vision <cit.>. However, all these works try to replace the transformer for representation learning, ignoring the generating features at different scales. We explore an efficient backbone for fast, high-resolution segmentation, where we adopt the CNN and RWKV mixed architecture. Based on the experiments, our proposed backbone achieves better representation in similar parameters and latency constraints. High-Quality Segmentation. Previous works for high-quality segmentation aim for specific tasks via designing specific modules <cit.>, proposing fine-grained datasets <cit.>, focusing on object-centric settings <cit.>, and adding refiner <cit.>. To allow more open settings, several works <cit.> have explored SAM as a base model to improve the segmentation quality. However, they cannot run in real time. In particular, HQ-SAM <cit.> brings extra costs compared to the original SAM. We have two goals compared with these works. One is to design a new model to segment high-quality object masks in real time. The other is to build an entire training pipeline, including datasets, to enable an efficient model for high-quality segmentation. Linear Attention Models. The transformer has computation cost issues when the token numbers become larger, which is exactly the challenge faced by high-quality segmentation. Recently, several works <cit.> have shown great potential to replace transformer architecture. In particular, state space models <cit.> have been proven to model long-range dependency. Moreover, RWKV <cit.> is another method with faster inference speed. We aim to explore these architectures for efficient, high-quality segmentation under the segment anything model meta-architecture. In particular, we find that under the efficient segmentation setting of high-resolution image inputs, RWKV runs faster than Mamba. Thus, we aim to explore RWKV architecture as our backbone. § METHOD Overview. We aim to build an efficient, high-quality segment-anything model. That requires the model to have the following properties. First, the model should have a backbone that is efficient even in the high-resolution. 
Second, the model should be able to utilize existing SAM knowledge to avoid training on the whole SA-1B <cit.>. Third, the model should be able to utilize the feature pyramid from the backbone and be trained using high-quality data to generate high-quality masks. To build a model fulfilling the three properties, we design an RWKV-based backbone (Section <ref>), which has a feature pyramid and is efficient in the high-resolution while having good performance compared to other transformer or linear attention models (please refer to Table <ref> and Figure <ref>). In Section <ref>, we introduce our training pipeline to use knowledge from the SAM model and high-quality datasets. We also present the decoder to fuse the features from different resolutions. §.§ Efficient Segmentation Backbone The original Segment Anything Model <cit.> adopts a transformer-based backbone. Although it achieves powerful performance, it has a huge computational overhead. Although EfficientSAM <cit.> reduces the number of model parameters drastically, it does not change the vision transformer architecture and still requires a long inference time at high resolution. The main reason for that lies in the intrinsic property of the vision transformer architecture. As the resolution increases, the number of patches grows quadratic, leading to increased computational demands. To build an efficient segmentation backbone at high resolution, we follow the spirit of linear-time sequence modeling in the NLP community <cit.> and propose an RWKV-based efficient vision backbone. In general, our backbone has a 3-stage design and contains two types of blocks: Mobile Convolution Blocks (MBConv) <cit.> and Vision-RWKV Blocks (VRWKV) <cit.>. r0.52 ! 2c|R-SAM-T 2c|R-SAM-S 2cR-SAM-B Stage Stride #block #chn #block #chn #block #chn 1 4 2 32 2 64 2 128 2 8 4 64 4 128 4 256 3 16 14 192 14 384 14 768 2c|#Param 2c|5.0M 2c|19.7M 2c78.7M Settings of different variants of backbone. Macro-Level Design. Figure <ref> shows the overview of our efficient segmentation backbone. The macro-level design of our backbone is motivated by ViTamin <cit.>. In the first two stages, we employ the conv-based blocks, i.e., MBConv, to generate high-resolution feature maps. We downsample 2× the feature maps before each stage. The high-resolution feature maps can be used for mask refinement in the decoder. Before the third stage, the feature maps have a downsampling factor of 16, which means each pixel in the feature maps can be seen as a “token”. In the third stage, we stack a series of VRWKV blocks, taking the tokens as input. Compared to plain vision transformer or Vision-RWKV <cit.>, our backbone has different scales of feature maps rather than a single fixed resolution. We present the settings of different variants of our backbone in Table <ref>. These multi-scale feature maps allow our model to adaptively focus on various spatial details, enhancing its ability to handle complex scenes with different object sizes. Micro-Level Design. The MBConv block uses the “inverted bottleneck” design <cit.>. It contains a 1 × 1 convolution to expand the channel size, a 3 × 3 depthwise convolution for spatial mixing, and another 1 x 1 convolution to project the channel back to the original channel size. Following ViTamin <cit.>, we use LayerNorm rather than BatchNorm for simplicity. We set the expand ratio to 4 in the MBConv block. For the VRWKV block, the tokens are first processed by the spatial-mix module and then fed into the channel-mix module. 
The spatial-mix module serves as the role of global perception. Supposing the input tokens can be represented as X∈ℝ^L× C, where L indicates the token length and C indicates the channel size, the spatial-mix module starts with the QShift modules: R_s = QShift_R(X) W_R,   K_s = QShift_K(X) W_K ,   V_s = QShift_V(X) W_V . The QShift is an important module since it allows each token to interpolate with 4-direction pixel neighborhoods, maintaining the locality of image features, and has been shown to be effective <cit.>: QShift(X) = X + (1 - μ)X', where X' is the token obtained by combining the four pixels around each token along the channel dimension. μ is a learnable scalar that is different for each representation. After mixing the pixel neighborhoods, the spatial-mix module fuses tokens globally and bidirectionally: O_s = (σ (R_s) ⊙BiWKV(K_s, V_s))W_O, where σ indicates the sigmoid function, ⊙ is the element-wise multiplication. The BiWKV is the key component of the “attention” mechanism that allows each token to interact globally with all other tokens in the sequence. For each token at index t in the sequence, with the K_s∈ℝ^L× C and V_s∈ℝ^L× C as input, it can be calculated as follows: BiWKV(K,V)_t=∑^L-1_i=0,i≠ te^-(|t-i|-1)/L · w + k_i v_i + e^u + k_tv_t/∑^L-1_i=0,i≠ te^-(|t-i|-1)/L · w + k_i + e^u + k_t, where w and u are parameters shared globally in the sequence, and k_i and v_i corresponds to the feature K_s and V_s at index i. The BiWKV(K,V) can be converted to RNN-Form to be executed within linear computational complexity and in parallel. Please refer to Vision-RWKV <cit.> for details. After the spatial-mix module, the tokens are fed into the channel-mix module: R_c = QShift_R(X) W_R,   K_c = QShift_K(X) W_K , O_c = (σ (R_c) ⊙SquaredReLU(K_c)W_V)W_O. The channel-mix module is calculated independently for each token, similar to MLP, but it adds QShift to maintain the image feature locality further. In particular, W_K projects the embedding to expand the embedding by two times, and W_V projects the embedding to the original size. §.§ RWKV-SAM: Data, Model, and Training Pipeline SAM Revisited. The original Segment Anything Model (SAM) contains a heavy ViT-H <cit.> backbone, a prompt encoder that takes boxes or points as visual prompts, and a lightweight decoder that contains two transformer layers on the 16x downsampling stride. The lightweight decoder takes the backbone output and prompt encoder and generates the corresponding masks. SAM is trained on the large-scale auto-labeled SA-1B dataset containing 11M images, which requires 256 A100 GPUs for 68 hours. To build an efficient, high-quality segment anything model, our RWKV-SAM involves the design of training data, model structure, and training pipeline. r0.36 ! Datasets #Images #Masks COCONut-B <cit.> 242K 2.78M EntitySeg <cit.> 30k 579k DIS5K <cit.> 3k 3k Datasets for training. Training Data. The annotations of the SA-1B <cit.> used by SAM are generated automatically. Although it helps scale the training data, the annotations do not contain details. To mitigate the gap, we introduce three heterogeneous datasets for joint training. The first dataset is the COCONut-B <cit.>, which has 242K images, including the COCO <cit.> labeled and unlabelled images. COCONut-B's annotations are generated by an assisted manual annotation pipeline, which yields high-quality annotations beyond the original annotations. 
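For reference, the following NumPy sketch implements the Q-Shift and bidirectional WKV formulas above in their naive form, per channel and with O(L²) cost; actual implementations use the parallel RNN-form kernel, and w, u, μ are learned per channel. The four-direction, quarter-channel shift used to build X' is a common implementation choice and is assumed here.

import numpy as np

def qshift(X, mu):
    """Q-Shift on an (H, W, C) token map, following the formula in the text:
    QShift(X) = X + (1 - mu) * X'. Here X' is built (assumed) by shifting one
    quarter of the channels by one pixel in each of the four directions."""
    H, W, C = X.shape
    Xp = np.zeros_like(X)
    c = C // 4
    Xp[1:, :, 0*c:1*c] = X[:-1, :, 0*c:1*c]   # neighbor above
    Xp[:-1, :, 1*c:2*c] = X[1:, :, 1*c:2*c]   # neighbor below
    Xp[:, 1:, 2*c:3*c] = X[:, :-1, 2*c:3*c]   # neighbor to the left
    Xp[:, :-1, 3*c:4*c] = X[:, 1:, 3*c:4*c]   # neighbor to the right
    return X + (1.0 - mu) * Xp

def bi_wkv(k, v, w, u):
    """Naive O(L^2) evaluation of the bidirectional WKV formula for one channel.
    k, v: arrays of shape (L,); w, u: scalars shared over the sequence."""
    L = len(k)
    out = np.zeros(L)
    for t in range(L):
        num, den = np.exp(u + k[t]) * v[t], np.exp(u + k[t])
        for i in range(L):
            if i == t:
                continue
            decay = -(abs(t - i) - 1) / L * w
            num += np.exp(decay + k[i]) * v[i]
            den += np.exp(decay + k[i])
        out[t] = num / den
    return out

# toy check: a 4x4 token map with 8 channels, then one channel through BiWKV
X = np.random.randn(4, 4, 8)
Xs = qshift(X, mu=0.5)
k, v = np.random.randn(16), np.random.randn(16)
print(bi_wkv(k, v, w=1.0, u=0.5).shape)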
The second dataset is EntitySeg <cit.>, which contains 30k high-resolution (2000px to 8000px) images annotated by human annotators. The third dataset is the DIS5K <cit.> dataset, which provides remarkably accurate single-object annotations. Model. To generate accurate masks, rich semantic context and low-level boundary details are both important. The macro-level design of our efficient segmentation backbone contains three stages. The output of the first two stages can be used as low-level local features (4× and 8× downsampling stride), and the output of the third stage (16× downsampling stride) can be used as the global feature. We denote the features from the first two stages as X_hr and X_mr, and the output of the third stage as X. We keep the prompt encoder (Φ_pe) and decoder (Φ_dec) to preserve the knowledge from the original SAM. The original SAM takes the output of Φ_pe and X as inputs to generate mask features F_M: F_M = Φ_dec(Φ_pe(P), X), where P denotes the visual prompts. To further refine the mask features with low-level local features, we introduce an additional refinement module Φ'_dec to incorporate X_hr and X_mr: F'_M = Φ'_dec(F_M, X, X_mr, X_hr), where F'_M denotes the refined mask features. We explore several designs of Φ'_dec and use two convolution layers to fuse features for simplicity and efficiency. The refined mask features can be used to generate the mask outputs M = Q ⊗ F'_M, where Q is the instance query generated by Φ_dec and ⊗ represents the dot product for each mask. Training Pipeline. RWKV-SAM is trained with a two-step process. In the first step, we employ the original SAM (ViT-H) to distill our efficient segmentation backbone. We follow Open-Vocabulary SAM <cit.> and use a per-pixel mean squared error (MSE) loss to align the efficient segmentation backbone with the ViT-H backbone: L_S1 = MSE(X_SAM, X), where X is the output of the RWKV-SAM backbone and X_SAM is the output of the ViT-H backbone in SAM. In the second step, we utilize the combined datasets to conduct joint training of the whole model. For each image, we first generate the bounding box of each instance based on the mask annotation and randomly select up to 20 instances for training. After RWKV-SAM generates masks based on the visual prompts, we apply a mask Cross Entropy (CE) loss and a Dice loss <cit.> between the ground truth masks and the generated masks. The loss of the second step can be formulated as: L_S2 = λ_ceL_ce + λ_diceL_dice. We follow previous works <cit.> and set both λ_ce and λ_dice to 5.
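For concreteness, below is a minimal sketch of the two training objectives described above: the per-pixel MSE distillation loss of the first step and the weighted mask CE + Dice loss of the second step with λ_ce = λ_dice = 5. The reduction scheme and the instance sampling are simplified relative to the actual pipeline.

import torch
import torch.nn.functional as F

def distill_loss(x_student, x_sam):
    """Step 1: per-pixel MSE between backbone features and SAM ViT-H features."""
    return F.mse_loss(x_student, x_sam)

def dice_loss(logits, targets, eps=1.0):
    probs = logits.sigmoid().flatten(1)
    targets = targets.flatten(1)
    inter = (probs * targets).sum(-1)
    union = probs.sum(-1) + targets.sum(-1)
    return (1.0 - (2.0 * inter + eps) / (union + eps)).mean()

def stage2_loss(mask_logits, gt_masks, lambda_ce=5.0, lambda_dice=5.0):
    """Step 2: weighted sum of mask cross-entropy and Dice losses."""
    ce = F.binary_cross_entropy_with_logits(mask_logits, gt_masks)
    return lambda_ce * ce + lambda_dice * dice_loss(mask_logits, gt_masks)

# toy usage: 20 sampled instances with 256x256 mask logits
logits = torch.randn(20, 256, 256)
gt = (torch.rand(20, 256, 256) > 0.5).float()
print(stage2_loss(logits, gt))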
We use a strong detector, ViTDet-H <cit.>, to generate the bounding boxes that serve as visual prompt inputs to the segment anything model. The second benchmark is on the DIS <cit.> dataset (validation set). With the bounding boxes generated from the mask annotations as prompt inputs, we test the performance of the generated masks on this single-object high-quality dataset. The third benchmark also consists of single-object datasets, but includes the COIFT <cit.> and HR-SOD <cit.> datasets to test zero-shot performance on single-object high-quality data. We report the mask mean AP (mAP) and mask boundary mean AP (mBAP) on COCO, and report the mIoU and boundary mIoU (mBIoU) on the single-object segmentation datasets. §.§ Main Results ImageNet Pretraining. Our proposed efficient segmentation backbone is first pretrained on the ImageNet-1K <cit.> dataset. We train the backbone for 120 epochs at 224×224 resolution, using the training recipe of Swin-Transformer <cit.>, following VRWKV <cit.>. To validate the performance of the backbone, we test the ImageNet classification performance on the validation set. As shown in Table <ref>, with fewer parameters, our method performs better than or comparably to previous methods based on Mamba or RWKV. For example, when comparing the small versions of Vim <cit.>, VRWKV <cit.>, and RWKV-SAM at the same embedding dimension of 384, our method still outperforms previous methods despite having a smaller model size. We argue that the improvement may come from our macro-level design, which uses convolution layers in the first two stages to obtain features at different scales instead of directly downsampling by transforming images into image patches.
Results of semantic segmentation on ADE20K (#Param counts the backbone parameters):
Backbone            | Decoder        | #Param | mIoU
DeiT-T <cit.>       | UperNet <cit.> | 5.7M   | 39.2
Vim-T <cit.>        | UperNet <cit.> | 7.9M   | 41.0
RWKV-SAM-T (Ours)   | UperNet <cit.> | 5.9M   | 41.1
DeiT-S <cit.>       | UperNet <cit.> | 22.0M  | 44.0
Vim-S <cit.>        | UperNet <cit.> | 27.3M  | 44.9
RWKV-SAM-S (Ours)   | UperNet <cit.> | 23.6M  | 45.3
Semantic Segmentation. To test the semantic segmentation performance of the backbone, we use the pre-trained backbone as the feature extractor and integrate a UperNet <cit.> decoder. We use the same setting to train the model for 160k iterations on the ADE20K dataset <cit.>. For RWKV-SAM, we make minor modifications to the backbone, incorporating an MBConv block at the end of the third stage to generate features at the smallest scale (1/32). This feature, combined with the outputs from the first three blocks of the efficient segmentation backbone, forms a feature pyramid. As shown in Table <ref>, we report the comparison results with Vim <cit.>. The results show that our method performs better even with fewer parameters. Segment Anything Model. We compare our RWKV-SAM, equipped with the efficient segmentation backbone, with previous works on the benchmarks mentioned in Section <ref>. As shown in Table <ref>, although SAM <cit.> and EfficientSAM <cit.> show good performance on the COCO dataset, they fall short of our method and HQ-SAM <cit.> on the high-quality datasets. On the COIFT <cit.> and HR-SOD <cit.> datasets, although our training data do not contain samples from the same domain, our method still demonstrates strong zero-shot performance compared to the much larger SAM <cit.>. We also train previous linear attention models with the segment anything decoder.
VRWKV-S <cit.> also uses VRWKV blocks, but it only has one feature scale. Thus, it cannot use high-resolution features to refine the segmentation results, and it performs relatively worse on the high-quality datasets. Using the bi-directional Mamba layer <cit.>, Vim <cit.> also performs relatively worse, especially on the boundary metric of the COCO dataset. §.§ Ablation Study and Analysis Figure: Latency (log scale) of the backbone with different input image resolutions. Efficiency Analysis. In Figure <ref>, we report the latency of RWKV-SAM-Small and ViT-Small (used by EfficientSAM <cit.>) for different input image sizes. The results are measured on a single NVIDIA A100 GPU. Both backbones adopt an embedding dimension of 384. At relatively low input resolutions, the FPS of the two models is similar (e.g., with a 512×512 image as input, ViT-Small runs at 86.8 FPS and RWKV-SAM-Small at 73.4 FPS). However, as the resolution of the input image grows, the latency of the ViT model increases quadratically. In contrast, the latency of RWKV-SAM-Small grows linearly, which gives it an advantage when the input image size is large. As shown in Figure <ref>, RWKV-SAM maintains a significant latency advantage over ViT when using high-resolution images as input. In the segment anything model, the typical input size is 1024×1024. Despite using the same input size, RWKV-SAM-Small achieves a significantly higher FPS (40.3) than ViT-Small (17.8) due to its more efficient computation mechanism. Consequently, at a similar model scale (ViT-Small: 22.4M, RWKV-SAM-S: 19.7M), RWKV-SAM is more efficient as the segment anything model backbone. Effect of the Fusion Module in the Decoder. In the decoder, we explore different designs to fuse the features from different scales. The first design uses two convolution layers per scale to downsample the low-level features and align the channels, followed by two convolution layers after fusing the three features along the channel dimension. The second design replaces the convolution layers with RWKV blocks, which enable global perception of the features. The third design gradually fuses from low-resolution to high-resolution features and uses DCN for fusion, following FaPN <cit.>. This design may give the model more opportunities to capture information over a relatively long range. As shown in Table <ref> (left), the first design has the best efficiency while achieving performance comparable to the others. The second design may hurt performance, indicating that RWKV blocks in the decoder are not as effective as local operators such as convolution. We suspect this is because RWKV blocks break the continuity of image features. Therefore, we use the first design, as mentioned in Section <ref>. Ablation of the Encoder Design. As mentioned in Section <ref>, we use a 3-stage design. In the first two stages, we use MBConv blocks to learn low-level representations. To evaluate this macro-level design, we explore the effect of moving some RWKV blocks into the first two stages. In Table <ref> (right), we report the FPS (under a 1024×1024 input size) and the accuracy of the alternative designs. There are a total of 14 RWKV blocks in the RWKV-SAM backbone, and they are placed in the third stage by default. In the table, (x-y) means that x and y blocks are put in the first two stages, and the other blocks are in the third stage.
Based on the results, placing more blocks in the first two stages reduces the model size (the hidden embedding size of the first two stages is smaller) but slows down inference at a 1024×1024 input size. This design also leads to a decrease in model performance. Therefore, we put the RWKV blocks in the third stage for better inference speed. Visualization. In Figure <ref>, we compare our method with SAM <cit.> and HQ-SAM <cit.>. Based on the visualization results, we observe that our method achieves superior segmentation quality even in very complex scenes. For example, in the first example, SAM cannot distinguish the fence gate, and HQ-SAM loses many details. In contrast, our RWKV-SAM achieves the best results in terms of detail. We show more visualization results in the appendix. § CONCLUSION In this paper, we develop RWKV-SAM, which includes an efficient segmentation backbone and a complete training pipeline to enable high-quality segmentation for the segment anything model. Benefiting from its linear complexity, our method achieves excellent efficiency at high resolution while maintaining strong performance. After training on the proposed benchmark, our RWKV-SAM demonstrates superior high-quality segmentation performance. We also benchmark recently proposed linear attention models, including Mamba and RWKV, showing that RWKV-SAM performs well among them. Our RWKV-SAM can segment any object with high quality and high efficiency, making it a generalized segmentation tool that can facilitate downstream applications. We hope our research can inspire new architectural designs using linear attention models for dense prediction tasks. Overview. In the appendix, we present more implementation details (Section <ref>), more visualization results (Section <ref>), more ablation studies (Section <ref>), and limitations of our work (Section <ref>). § MORE IMPLEMENTATION DETAILS Training Datasets. We visualize the training datasets of RWKV-SAM in Figure <ref>. The EntitySeg <cit.> dataset has the most diverse scenes and provides detailed annotations for each entity. COCONut-B <cit.> relabels COCO <cit.> and provides finer annotations. The DIS5K <cit.> dataset provides high-quality single-object annotations; the annotated objects are object-centric and occupy a large portion of the image. Training Details. As mentioned in Section <ref>, our training involves two sessions. In the first session, on the SA-1B dataset, we set the learning rate to 0.0001 with the AdamW <cit.> optimizer and use cosine annealing for the learning rate schedule over 24 epochs. In the second session, we also set the learning rate to 0.0001 with the AdamW <cit.> optimizer and use cosine annealing for the learning rate schedule over 6 epochs. In each session, the total number of training samples is roughly equal to 24 times the size of the COCO training set. The batch size in each session is 32. The training hyperparameters of the pretraining session on the ImageNet-1k dataset mainly follow Swin-Transformer <cit.>, with a learning rate of 0.001, the AdamW optimizer, and a batch size of 1024. However, we train our RWKV-SAM backbone for only 120 epochs to save computational cost. Training on the ADE20K <cit.> dataset takes 160k steps with a batch size of 16. We do not use any test-time augmentation, for fair comparison. Training Time. We train our RWKV-SAM model on 16 A100 GPUs. The first session takes about 5 hours, and the second session takes about 16 hours. § MORE VISUALIZATION RESULTS More Visualization Results on the COCO Dataset.
We present more visualization results on the COCO dataset. As demonstrated in Figure <ref>, our RWKV-SAM can segment objects with high quality. The results show that our RWKV-SAM can segment various objects, even in complex scenes. More Visualization Results on the SA-1B Dataset. We present more visualization results on the SA-1B dataset to demonstrate the performance in the open world. Note that the training data of our second training session do not include the SA-1B dataset. As shown in Figure <ref>, our model provides surprisingly high-quality segmentation results with good details on SA-1B. § MORE ABLATION STUDY We conduct an ablation study on the backbone distillation for our RWKV-SAM model. The results are shown in Table <ref>. Without the backbone distillation, RWKV-SAM cannot effectively inherit the existing knowledge of the SAM encoder and thus performs poorly. § LIMITATION AND FUTURE WORK Limitation. Our RWKV-SAM is a SAM-like prompt-based segmentation method, which means it cannot propose and recognize objects the way instance segmentation methods such as Mask2Former <cit.> do. Our RWKV-SAM also falls short on some part-level segmentation cases and on very thin objects (such as wires). We provide some failure cases in Figure <ref>. Future Work. While our RWKV-SAM already produces high-quality segmentation results, it may fail in some cases (e.g., the failure cases in Figure <ref>). Future work may incorporate more training datasets to support more complex scenarios. We aim to continue exploring this direction; for example, we plan to adopt the full SA-1B dataset for co-training. Broader Impact. Our method provides an efficient and high-quality interactive segmentation tool, which may enable downstream tasks such as image editing. We do not think it will bring any additional negative societal impact compared to SAM <cit.>.
http://arxiv.org/abs/2406.18794v1
20240626233646
Operator Learning of Lipschitz Operators: An Information-Theoretic Perspective
[ "Samuel Lanthaler" ]
cs.LG
[ "cs.LG", "cs.NA", "math.NA" ]
§ ABSTRACT Operator learning based on neural operators has emerged as a promising paradigm for the data-driven approximation of operators mapping between infinite-dimensional Banach spaces. Despite significant empirical progress, our theoretical understanding of the efficiency of these approximations remains incomplete. This work addresses the parametric complexity of neural operator approximations for the general class of Lipschitz continuous operators. Motivated by recent findings on the limitations of specific architectures, termed the curse of parametric complexity, we here adopt an information-theoretic perspective. Our main contribution establishes lower bounds on the metric entropy of Lipschitz operators in two approximation settings: uniform approximation over a compact set of input functions, and approximation in expectation, with input functions drawn from a probability measure. It is shown that these entropy bounds imply that, regardless of the activation function used, neural operator architectures attaining an approximation accuracy ϵ must have a size that is exponentially large in ϵ^-1. The size of architectures is here measured by counting the number of encoded bits necessary to store the given model in computational memory. The results of this work thus elucidate fundamental trade-offs and limitations in operator learning. § INTRODUCTION Operators mapping between infinite-dimensional Banach spaces of functions are ubiquitous in the natural sciences and engineering. They often appear in connection with physical models expressed as a set of partial differential equations, where operators of interest frequently arise from associated forward and inverse problems, e.g. mapping initial data to the solution at a later time, or identifying external forcing terms from (partial) knowledge of the solution. Operator learning has emerged as a new paradigm for the data-driven approximation of such operators. Popular operator learning frameworks build on the success of neural networks, but generalize this notion to the infinite-dimensional context of operator approximation, resulting in so-called neural operators. These neural operator architectures define parametric operators, whose parameters are tuned to approximate an underlying operator of interest. While there is a very rapidly growing body of empirical work demonstrating the great potential, and practical utility, of such data-driven approaches, many open questions remain in our understanding of the theoretical underpinnings of this field; see e.g. <cit.> for a recent review and references therein. First theoretical insights into specific architectures, and their underlying approximation mechanisms, can be gained by studying universal approximation, i.e. the ability to approximate very general classes of operators. The study of universal approximation of neural operators dates back at least three decades, to early work on operator networks by Chen and Chen <cit.>.
Due to the recent rise in the popularity of operator learning and the introduction of a number of novel state-of-the-art frameworks, this early work has been complemented by a number of papers in recent years, demonstrating similar universal approximation properties for various architectures; e.g. DeepONets <cit.>, PCA-Net <cit.>, Fourier neural operator <cit.> and general neural operators <cit.>, as well as multiple other architectures <cit.>. Universal approximation implies that there are no fundamental obstructions to operator learning with a given framework, and usually requires identification of basic approximation mechanisms that can be leveraged by a given architecture. However, to determine whether operator learning can be achieved efficiently, a refined quantitative analysis is required. In such quantitative analysis, one often distinguishes between parametric complexity, relating the required model size to the achieved accuracy, and sample efficiency, relating the number of required training samples to the achieved accuracy. The focus of the present work is on parametric complexity. For research relevant to the data complexity of operator learning, we mention, for example, <cit.>. A general class of operators for which efficient approximation is possible, in terms of the required number of tunable parameters, are so-called holomorphic operators. Research into the approximation of holomorphic operators goes back to the seminal work of Cohen, DeVore and Schwab <cit.>, where it was shown that this class of operators can be efficiently approximated by generalized polynomial expansions. More recently, these results have been extended to neural network and neural operator approximation in a series of works <cit.>, demonstrating that similar rates can be achieved by neural operators. Other classes of operators for which efficient convergence rates have been derived are operator Barron spaces <cit.> and (operator) reproducing kernel Hilbert spaces (RKHS) <cit.>. Alternative settings, such as parametric PDEs with low-dimensional latent structure are, for example, explored in <cit.>. Apart from these specific classes of operators, efficient approximation has also been established via a case-by-case analysis for several PDE solution operators <cit.>. These results identify a number of individual operators of interest which can be efficiently approximated by certain operator learning frameworks. Despite this progress, a general theory encompassing all these examples has yet to emerge. A very general class of operators of interest are Lipschitz operators. Approximation theory of relevance to such a general class of operators has been developed e.g. in <cit.>. All of these works aim to bound the number of tunable parameters (model size) in terms of the accuracy that can be achieved. The present work will focus on deriving lower complexity bounds for the class of Lipschitz continuous operators : →, defined on an infinite-dimensional domain and taking values in (nonlinear Lipschitz functionals). Semantically, no distinction will be made between `functional' and `operator', since all lower bounds established for functionals continue to hold when considering operators with infinite-dimensional output spaces – the latter containing (infinitely many) copies of . 
In addition to the aforementioned literature on neural operator approximation theory, the present work also takes inspiration from the information-theoretic point of view on neural network approximation theory in a finite-dimensional setting, pioneered in the works <cit.>, as well as notions of stable approximation <cit.>. In the present work, the underlying ideas will be applied and extended to the infinite-dimensional context of operator learning. The main motivation for this work are two recent results, established in <cit.> and <cit.> respectively, both applicable to the general setting of Lipschitz operators. A one-paragraph summary of the results in <cit.> and <cit.> is as follows: (i) The first result <cit.> shows that certain neural operator architectures, based on ReLU activations, suffer from a curse of parametric complexity: under certain assumptions on the input functions, there exist Lipschitz continuous operators which can only be approximated to accuracy ϵ, if the number of tunable parameters is exponential in ϵ^-1; more precisely, the number of parameters must be at least as large as Cexp(cϵ^-γ) with problem-dependent constants C,c,γ>0. (ii) The second result in <cit.> shows that, under similar assumptions on the input functions, neural operator architectures based on super-expressive activation functions can approximate general Lipschitz operators to accuracy ϵ, with algebraically bounded parameter count; the number of parameters is upper bounded by Cϵ^-γ, for problem-dependent C,γ>0. While the first result, viewed in isolation, appears to hint at fundamental limitations to the development of operator learning theory on the general class of Lipschitz operators, due to the identified “curse”, the second result shows rigorously that this curse can be circumvented with a suitable choice of activation. The aim of the present work is to examine the apparent dichotomy between these two results in detail. To this end, we explore the curse of parametric complexity from an information-theoretic perspective. As a result, we will uncover the fundamental information-theoretic character of the curse of parametric complexity, and identify the relevant trade-offs that are possible when parametric complexity is measured by the number of (real-valued) parameters as in <cit.>. Main contributions This work makes the following main contributions: * We propose an information-theoretic perspective of operator learning, based on the relation between bit-encoding and Kolmogorov metric entropy; this provides an alternative to the prevalent analysis in the literature, which has focused on estimating the required number of real-valued parameters. * For the model class of Lipschitz operators, we derive lower bounds on the metric entropy in two settings: one pertaining to uniform approximation, the other to approximation in expectation. * These bounds imply, in either setting, that an exponentially large number of encoding bits is required to store the weights of any architecture achieving accuracy ϵ on the model class. This result holds independently of the activation function that is chosen. * We use topological arguments to show that even generic operators can only be approximated with exponentially increasing complexity; when applied to FNO this implies that the approximation of a generic Lipschitz operator, to accuracy ϵ, requires a number of tunable parameters exponential in ϵ^-1. Overview The remainder of this paper is organized as follows. 
In Section <ref>, we state the main results of this work, as they pertain to operator learning with neural operator architectures. This section contains the main conceptual contributions of this work and reviews the link between bit-encoding and Kolmogorov entropy. Several technical details are left to Sections <ref> and <ref>; in Section <ref>, we derive lower bounds on the Kolmogorov metric entropy of the set of 1-Lipschitz operators in both a sup-norm and L^p-norm approximation setting. In particular, we show that the metric ϵ-entropy increases exponentially with ϵ^-1, implying a general curse of parametric complexity for bit-encoded architectures. This is the first main technical contribution of this work. Approximation rates for generic operators are the subject of Section <ref>, where we first formulate the operator approximation problem in an abstract Banach space setting, and then use topological arguments to relate approximation rates of generic elements of a model class to the metric entropy of this class. This is the second main technical contribution of this work. Finally, Section <ref> contains concluding remarks. § MAIN RESULTS This section contains a summary of the main results of this work, applied to the specific setting of operator learning. Several of these results are based on more general, abstract propositions which are included in subsequent Sections <ref> and <ref>. To aid readability, we leave most technical details to these latter sections. The aim of this section is instead to explain the main ideas underlying our analysis, and their implications for operator learning. Recurring notation, to be introduced and discussed in the following, is summarized in Table <ref>. §.§ Operator approximation by neural operators We begin the discussion of our main results by proposing an encoder-decoder point of view on operator learning, where the encoder and decoder are implicitly defined by a given architecture. We then define approximation errors of interest and discuss two common measures to quantify the “complexity” of a given architecture. The first counts the number of tunable, real-valued parameters in the architecture. The second goes one step further, and requires specification of a bit-encoding of all parameters, i.e. encoding by a sequence of 0's and 1's. To fix intuition, this bit-encoding can be loosely interpreted as the representation of the parameters on computing hardware. The complexity of a bitwise-encoded architecture is measured by the number of bits required to represent it. As will be explained, this provides a link to fundamental information-theoretic concepts such as the Kolmogorov metric entropy of our model class. §.§.§ Approximation theoretic setting Assume we are given input and output spaces , . A neural operator defines a parametrized mapping Φ: ×^q→, where θ∈^q are tunable parameters. Specification of θ defines an operator, Φ(;θ): →. In practice, the training of a neural operator results in an optimized parameter choice θ_ for given : → and an approximation ≈Φ(;θ_). Model class _1() In the following, we will consider a model class of 1-Lipschitz operators, restricting attention to the case of real-valued outputs, =: Let (,d) be a metric space. We define _1() as the set consisting of all 1-Lipschitz continuous mappings : → with ‖‖_≤ 1, where we define the ‖‖_-norm as follows: {‖‖_ = max{sup_u ∈ | (u) |, ()}, () = sup_u v|(u) - (v)|/d(u,v), . 
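To make the preceding definition concrete, the following sketch numerically estimates the two quantities entering the norm, the sup-norm and the Lipschitz seminorm, for a functional acting on functions discretized on a grid. The particular functional, the random inputs, and the choice of an L² metric are illustrative assumptions and not part of the paper.

import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 256)

def G(u):
    """An illustrative nonlinear functional: the mean of sin(u) over the domain."""
    return np.mean(np.sin(u))

def l2_norm(u):
    return np.sqrt(np.trapz(u**2, x))

def random_input():
    """Smooth random input built from a few Fourier modes (illustrative)."""
    coeffs = rng.standard_normal(8) / (1 + np.arange(8)) ** 2
    return sum(c * np.cos(2 * np.pi * k * x) for k, c in enumerate(coeffs, 1))

samples = [random_input() for _ in range(100)]
sup_norm = max(abs(G(u)) for u in samples)
lip = max(
    abs(G(u) - G(v)) / l2_norm(u - v)
    for u in samples for v in samples if l2_norm(u - v) > 1e-8
)
# the norm in the definition above is the maximum of these two estimates
print("estimated sup-norm:", sup_norm, "estimated Lipschitz seminorm:", lip)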
As described in the introduction, the goal of operator learning is to approximate : → by a neural operator Φ: ×^q →. In this work, we aim to relate the approximation accuracy ϵ to the required model size of Φ. We will focus on two settings, where either (i) = ⊂ is a compact subset of a Banach space and the metric is the sup-norm over , or (ii) = is a Banach space and the metric is induced by the L^p(μ)-norm with respect to a probability measure μ on (cp. Table <ref>). Approximation spaces and norms To measure the approximation accuracy of this approximation task, we have to define a distance between operators. To this end, we will consider a Banach space of operators , allowing for an embedding _1()⊂. Throughout, we will consider one of the following two settings. In the first setting, we aim to approximate over a compact domain = ⊂: [Uniform approximation] If : → is an operator with compact domain ⊂, we will study its uniform approximation over , i.e. we take = C() to be the space of continuous operators, metrized by the sup-norm: ‖‖_C() = sup_u∈ |(u)|. A common special case of this setting is the case where ⊂ is defined by a smoothness constraint, as illustrated by the following example: Let D ⊂^d be a bounded domain. An example of the setting above is the case of Lipschitz operators : ⊂ L^2(D) →, with = u∈ H^s(D)‖ u ‖_H^s(D)≤ C, a set defined by a Sobolev smoothness constraint for s>0. Here, = L^2(D). In the second setting, we aim to approximate over the entire Banach space =, but with respect to a (Bochner) L^p(μ)-norm: [Approximation in expectation] If : → is an operator with unbounded domain a separable Banach space, then we will assume that inputs are drawn at random from a probability measure μ∈(). In this case, we fix p∈ [1,∞) and take = L^p(μ) as the space of μ-measurable operators with finite p-th norm. L^p(μ) is metrized by the Bochner L^p-norm, ‖‖_L^p(μ) = _u∼μ[ |(u)|^p ]^1/p. Measures of complexity: Counting parameters versus bits We will distinguish two ways of measuring the “complexity” of neural operator Φ(;θ): one based on the number of tunable (real-valued) parameters, the other requiring bit-encoding (or quantization) of the parameters. A first intuitive notion of complexity is the minimal number of tunable parameters required to reach approximation accuracy ϵ, i.e. the parameter dimension q of a neural operator Φ: ×^q →. As mentioned in the introduction, this point of view has been prevalent in the development of approximation theory for operator learning. As explained previously, depending on the type of activation function that is used, vastly different conclusions can be reached with this definition of complexity. This fact is well-known in the finite-dimensional setting: For example, it has been shown <cit.> that there exist smooth, sigmoidal activation functions for which a neural network of fixed size can approximate arbitrary continuous function to arbitrary accuracy, i.e. approximation accuracy ϵ can be reached with a number of parameters q=O(1). In practical implementations, real-valued parameters can only be digitally represented to finite accuracy. This observation has led a number of authors <cit.>, to analyze neural network approximation from a bit-encoding perspective. In this approach, the continuous parameters θ∈^q are replaced by quantized parameters θ∈Θ, where Θ⊂^q is a finite set. If the number of elements is bounded, say |Θ| = 2^B for some B ∈, then we can identify Θ≃{0,1}^B, i.e. each element in the set Θ is encoded by a string of B bits. 
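For illustration, a minimal sketch of such a quantization: each real parameter is rounded to one of 2^b uniformly spaced levels in an interval [-M, M], so that a parameter vector θ ∈ ℝ^q is represented by B = q·b bits in total. The bit depth b and the range M are illustrative choices.

import numpy as np

def quantize(theta, b=8, M=1.0):
    """Map theta in R^q to the nearest of 2^b levels in [-M, M], per coordinate.
    The quantized vector is determined by q*b bits in total."""
    levels = 2 ** b
    step = 2 * M / (levels - 1)
    idx = np.clip(np.round((theta + M) / step), 0, levels - 1).astype(int)
    theta_q = idx * step - M      # decoded (quantized) parameters
    return theta_q, idx           # idx plays the role of the bit-encoding

q = 10
theta = np.random.default_rng(1).uniform(-1, 1, size=q)
theta_q, idx = quantize(theta, b=8)
print("total bits B =", q * 8, "max rounding error:", np.max(np.abs(theta - theta_q)))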
Taking this information-theoretic point of view, it is possible to derive (lower) complexity bounds that are independent of the activation function. §.§ Encoder-decoder view of neural operators Given the discussion of the last paragraph, we now outline an encoder-decoder point of view on neural operators, emphasizing the difference between “counting parameters” and “counting (encoding) bits”. Counting parameters Let Φ: ×^q → be a neural operator architecture. To explain our intuition, we temporarily assume the existence of, and fix an optimal parameter choice θ_∈^q for each ∈_1(), so that θ_∈_θ∈^q‖ - Φ(;θ) ‖_, ∀ ∈_1(), with respect to the relevant norm of interest on the space of operators ⊃_1(). The corresponding encoder is then given by : _1() →^q, ↦θ_. The corresponding decoder is : ^q →, θ↦Φ(;θ). In this way, the operator learning architecture Φ induces a natural encoder/decoder pair on the relevant space of operators, and we are interested in bounds on the encoding error, either for individual ∈_1(), i.e. (;Φ)_ = inf_θ∈^q‖ - Φ(; θ) ‖_, or in a minimax sense, i.e. (_1();Φ)_ = sup_∈_1()inf_θ∈^q‖ - Φ(; θ) ‖_. Given a desired approximation accuracy ϵ>0, either in the sense (<ref>) or (<ref>), one quantity of interest is the required “complexity” of any architecture Φ achieving this accuracy. The above point of view is consistent with estimates on the required number of parameters q. Counting bits As discussed before, the number of parameters q is not a suitable measure of complexity when results independent of the activation are sought. Therefore, we now assume that the parameters θ∈^q are encoded by B bits. This defines a subset Θ⊂^q consisting of |Θ| = 2^B elements. Each θ∈Θ is in correspondence with its bit-encoding [θ]∈{0,1}^B. Thus, upon associating with any ∈_1() the optimal θ_∈Θ, the continuum encoder (<ref>) is now replaced by a bitwise-encoder, : _1() →{0,1}^B, ↦ [θ_], with bitwise-decoder, : {0,1}^B →, [θ] ↦Φ(;θ). The individual and minimax errors, (<ref>) and (<ref>), have the following bit-encoded counterparts, (; Φ, Θ)_ = inf_θ∈Θ‖ - Φ(; θ) ‖_. and (_1();Φ,Θ)_ = sup_∈_1()inf_θ∈Θ‖ - Φ(; θ) ‖_. In the present work, we will focus on such a bit-encoding point of view, but mention that there are close links between these two points of view, if the mapping θ↦Φ(;θ) possesses some stability properties. Specifically, this link will be used to derive lower complexity bounds for the Fourier neural operator in Section <ref>. §.§ Information-theoretic notions The relevance of the bit-encoding point of view is that it relates directly to the (Kolmogorov) metric entropy of the underlying model class ⊂ and allows results to be derived which are independent of specifics of the architecture such as the choice of activation function. Thus bit-encoding enables analysis relating directly to intrinsic topological properties of . Minimax code-length Abstracting further our previous discussion, we make the following formal definition of abstract bitwise encoder/decoder pairs: Given a compact subset ⊂ of a Banach space , we denote by _B(;) the set of all bitwise encoder/decoder pairs (,) of length B, i.e. all pairs of mappings : →{0,1}^B and : {0,1}^B →. Following <cit.>, for ϵ > 0, we also introduce the minimax code length (;ϵ)_ of a compact set ⊂ as the minimal number of bits B for which there exists an (abstract) encoder/decoder pair (,)∈_B(;) such that sup_∈‖ - ∘() ‖_≤ϵ. That is, (;ϵ)_ := min B ∈∃ (,) ∈_B(;) s.t. sup_∈‖ - ∘() ‖_≤ϵ. 
Kolmogorov metric entropy Given a metric space (,d), element g∈ and r>0, we denote by B_r(g) := f ∈d(g,f) ≤ϵ, the closed ball of radius r. We now make the following definition for the covering number and (Kolmogorov) metric entropy: Let (,d) be a metric space. For ϵ>0, the ϵ-covering number of a set ⊂, denoted (;ϵ)_, is the smallest integer N∈, such that can be covered by N closed balls of radius ϵ, i.e. (;ϵ)_ := minN∈∃ g_1,…, g_N∈, s.t. ⊂⋃_j=1^NB_ϵ(g_j). We note that the subscript is used as a shorthand for (,d), with the relevant metric d implied. The metric entropy of ⊂ is defined as the logarithm (to base 2) of the covering number, i.e. (;ϵ)_ = log_2 (;ϵ)_. Link between minimax code-length and metric entropy The minimax code-length and metric entropy introduced in the previous paragraphs are linked by the following fundamental result <cit.>: Let be a Banach space, and let ⊂ be compact. Then the metric entropy of provides a lower bound on the minimax code length: (;ϵ)_≥(;ϵ)_. Let ϵ > 0 be given. Let (,) be a bitwise encoder/decoder pair with B= (;ϵ)_ bits, achieving reconstruction error at most ϵ on . The image of : {0,1}^B → contains at most N = 2^B elements, _1,…, _N. Since, for any ∈, the specific choice ∘() belongs to the image of , it follows that sup_∈inf_n=1,…, N‖ - _n ‖≤sup_∈‖ - ∘() ‖≤ϵ. Thus, ⊂⋃_n=1^N B_ϵ(_n), implying that the covering number of is bounded by (;ϵ)_≤ N = 2^B. Taking logarithms and recalling that B = (;ϵ)_ yields the claim. In particular, Proposition <ref> implies that if (;ϵ)_ > B, then there cannot exist a bit-encoder-decoder pair (,) ∈_B(;) achieving uniform decoding accuracy ϵ over . Conversely, if (,) is an encoder-decoder pair (<ref>), (<ref>) associated with a bit-encoded neural operator Φ: ×Θ→ with |Θ| ≤ 2^B, and if the following minimax approximation bound holds, sup_∈_1()inf_θ∈Θ‖ - Φ(;θ)‖_≤ϵ, this implies that B ≥(_1();ϵ)_. §.§ Information-theoretic minimax bounds As a consequence of Proposition <ref>, we can derive a lower bound on the required number of bits B to achieve the minimax bound (<ref>) by estimating the entropy of _1()⊂. As mentioned before, we will consider two settings, corresponding to uniform approximation of over a compact set (the setting =) and approximation with respect to a Bochner L^p(μ)-norm for probability measure μ (the setting =). Uniform approximation We now consider ⊂ a compact set of input functions, and operators belonging to _1() ⊂ C() (cp. Setting <ref>). This corresponds to the choice =, = _1(), = C(), in the discussion of the previous section. We then have the following result: Let be a Banach space. Let ⊂ be a compact set of input functions, and assume that the metric entropy of satisfies the lower bound, (;ϵ)_≥ c_αϵ^-1/α for α > 0. There exists a constant c>0, independent of ϵ, such that the following holds: If Φ: ×Θ→ is a quantized neural operator architecture, satisfying sup_∈_1()inf_θ∈Θ‖ - Φ(;θ) ‖_C()≤ϵ. and if |Θ|≤ 2^B, i.e. if the parameters of Φ can be encoded by B bits, then B ≥exp(c ϵ^-1/α). The claim follows from the relation between the minimax code-length and the metric entropy of _1()⊂ C(), stated in the above Proposition <ref>, and the following general bound on (_1(),ϵ)_C(): (_1();ϵ)_C()≥ 2^(,6ϵ)_. This bound will be shown in Section <ref>, Proposition <ref>. Assuming this bound, then by assumption on , we have 2^(,6ϵ)_≥exp(cϵ^-1/α) for constant c>0. 
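The argument in the proof of the proposition above can also be read constructively: any codebook of 2^B candidate functions defines a B-bit encoder/decoder pair whose worst-case error is the covering radius of the codebook, which is why the metric entropy captures the minimal number of encoding bits. The sketch below illustrates this correspondence on functions discretized on a grid; the codebook is random and purely illustrative.

import numpy as np

rng = np.random.default_rng(0)
grid = np.linspace(0.0, 1.0, 128)

# A codebook of 2^B candidate functions (the image of the decoder D).
B = 6
codebook = [rng.uniform(0.2, 1.0) * np.sin(2 * np.pi * rng.uniform(0.5, 3.0) * grid)
            for _ in range(2 ** B)]

def encode(g):
    """Encoder E: index (B bits) of the nearest codeword in sup-norm."""
    errs = [np.max(np.abs(g - c)) for c in codebook]
    return int(np.argmin(errs))

def decode(i):
    """Decoder D: return the stored codeword."""
    return codebook[i]

g = 0.7 * np.sin(2 * np.pi * 1.3 * grid)
g_hat = decode(encode(g))
print("B =", B, "bits, sup-norm reconstruction error:", np.max(np.abs(g - g_hat)))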
If is a function space, then compact subsets ⊂ are commonly defined by a smoothness constraint, and this partly motivates our assumption on in the last theorem. The following example is illustrative. Let D ⊂^d be a bounded domain. Let = L^2(D). An example of the setting outlined above is the case of Lipschitz operators : →, with = u∈ H^s(D)‖ u ‖_H^s(D)≤ C, defined by a Sobolev smoothness constraint for C,s>0. In this case, it is well-known that the metric entropy satisfies (;ϵ)_≳ϵ^-d/s, i.e. the assumptions of Theorem <ref> hold with α = s/d. Approximation in expectation Another commonly studied setting concerns the approximation in expectation (cp. Setting <ref>). Here, we consider 1-Lipschitz mappings : → defined on a separable Hilbert space . We fix a probability measure μ on and consider inputs as random draws u∼μ. To derive quantitative lower bounds, we will need to make minimal structural assumptions on μ. There exists an orthonormal basis e_1,e_2,… of , probability space (Ω, ℙ) and summable coefficients λ_1 ≥λ_2 ≥…, such that μ is the law of a random variable u: Ω→ of the form, u(ω) = ∑_j=1^∞√(λ_j) Z_j(ω) e_j, (ω∈Ω). where Z_j: Ω→ are jointly independent random variables. We assume that the random variable Z_j satisfies |Z_j|^2 = 1, and has law Z_j ∼ρ_j(z) dz for a probability density function ρ_j: →_+. We furthermore assume that there exists a constant L > 0, such that sup_j∈‖ρ_j ‖_L^∞()≤ L, √(λ_1)≤ L. A concrete, and widely considered, example satisfying Assumption <ref> is the case of a Gaussian probability measure μ with prescribed mean and covariance operator. In this case, λ_j are the eigenvalues of the covariance operator, e_j the corresponding eigenfunctions, and the random variables Z_j ∼ρ_j have standard Gaussian distribution. Let be a Banach space of input functions. Let μ∈() be a probability measure satisfying Assumption <ref>. Assume that the coefficients √(λ_j)≳ j^-α as j→∞, where α > 0. Then there exists a constant c>0, independent of ϵ, such that the following holds: If Φ: ×Θ→ is a quantized neural operator architecture, satisfying sup_∈_1()inf_θ∈Θ‖ - Φ(;θ) ‖_L^p(μ)≤ϵ. and if |Θ|≤ 2^B, i.e. if the parameters of Φ can be encoded by B bits, then B ≥exp(c ϵ^-1/(α+1)). Similarly to the uniform case, the present claim again follows from the relation between the minimax code-length and the metric entropy of _1()⊂ L^p(μ) of Proposition <ref>, together with the following general bound on (_1(),ϵ)_L^p(μ): (_1();ϵ)_L^p(μ)≥exp(cϵ^-1/(α+1)). This lower entropy bound will be derived in Section <ref>, Proposition <ref>. Thus, an exponential number of encoding bits is also needed in an L^p(μ)-setting. Theorem <ref> shows that the approximation of Lipschitz operators in expectation is not “qualitatively” easier than uniform approximation of such operators over a compact set of input functions. §.§ Approximation of generic Lipschitz operators Theorems <ref> and <ref> show that operator learning architectures that can approximate arbitrary 1-Lipschitz operators to accuracy ϵ have exponential memory requirements; any (bit-encoded) implementation of such an architecture will require a number of bits that is exponential in ϵ^-1. The reason for this is that the space of Lipschitz operators is exponentially large in a fundamental information-theoretic sense quantified by the metric entropy. 
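As a concrete illustration of the assumption above, the following sketch draws input functions u = ∑_j √(λ_j) Z_j e_j with Gaussian Z_j, a cosine basis e_j, and algebraically decaying coefficients √(λ_j) ≍ j^{-α}; the basis, the truncation level, and the value of α are illustrative choices only.

import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 512)
alpha, J = 1.0, 200                       # decay exponent and truncation (illustrative)

def sample_u():
    """Draw u = sum_j sqrt(lambda_j) Z_j e_j with Gaussian Z_j and lambda_j = j^(-2*alpha)."""
    js = np.arange(1, J + 1)
    lam = js ** (-2.0 * alpha)            # so that sqrt(lambda_j) ~ j^(-alpha)
    Z = rng.standard_normal(J)            # E|Z_j|^2 = 1, bounded density
    e = np.sqrt(2.0) * np.cos(np.pi * js[:, None] * x[None, :])  # orthonormal basis on (0,1)
    return (np.sqrt(lam)[:, None] * Z[:, None] * e).sum(axis=0)

samples = np.stack([sample_u() for _ in range(4)])
print("sample shape:", samples.shape)

The theorems above bound the number of encoding bits needed to approximate the whole class of 1-Lipschitz operators, uniformly over a compact set and in L^p(μ) for input measures of this type.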
However, this minimax bound applies to the approximation of the entire class _1() by a single architecture, and does not necessarily imply that it is impossible to approximate individual ∈_1() efficiently. At first sight, it could appear that arguments based on the metric entropy cannot be used to gain any insight into this refined question; Indeed, if we fix individual ∈_1(), then the metric entropy of the singleton-set = {} is trivially =0, and the minimax code length (<ref>) is =1 for any value of the accuracy ϵ, since the trivial decoder () ≡ reproduces exactly, with vanishing approximation error, ϵ = 0. Thus, while entropy arguments give insights into the (concurrent) approximation of the set _1(), they seemingly have no immediate implications for the approximation of individual ∈_1(). Despite these facts, the results below will show that a refined analysis based on the concept of metric entropy is nevertheless possible; in the uniform and L^p-settings of the previous section, a fixed sequence of bit-encoded architectures {Φ_n}_n∈, with at most n bits, can approximate generic elements ∈_1() at best at a logarithmic rate, (;Φ_n,Θ_n) ≲log(n)^-γ for fixed γ > 0. Before stating our result, we briefly recall the notion of a generic element of a (compact) metric space (see Appendix <ref> for further remarks, and <cit.> for an in-depth discussion): Let (,d) be a compact metric space. A subset ⊂ is called residual, if it is equal to a countable intersection of sets, each of whose interior is dense in . The complement of a residual set is a meagre set. A property P is called generic, if the set := ∈ satisfies P⊂, is residual. Under the assumption that (,d) is compact, the Baire category theorem (cp. Appendix <ref>) implies that any residual set is dense in . Furthermore, the intersection = ⋂_j=1^∞_j of countably many residual sets _1,_2,… is itself residual, and hence still dense. In this sense, a topologically generic property is somewhat analogous to a property that holds with probability 1 in a probabilistic sense. Thus, a generic property is often thought of as a property that is satisfied by “almost every” element of . We can now state our main results on the approximation of generic operators ∈_1(). In the uniform setting (cp. Setting <ref>), we have: Let be a Banach space of input functions. Let ⊂ be compact, and assume that the metric entropy (;ϵ)_≳ϵ^-1/α for α > 0. Let {Φ_n: ×Θ_n →}_n ∈ be a sequence of bit-encoded neural operator architectures, with quantized parameter set |Θ_n| ≤ 2^n. Then generic ∈_1() cannot be approximated by {Φ_n} at a convergence rate better than log(n)^-α; more precisely, for any sequence ϵ_n = o(log(n)^-α), there is a residual subset ⊂_1(), consisting of operators ∈, for which inf_θ∈Θ_n‖ - Φ_n(;θ) ‖_C()≠ O(ϵ_n), (n→∞). . We let := C() and := _1(). We note that ⊂ is a compact, convex subset. We then consider the sequence of subsets Σ_n ⊂ C(), defined by all possible realizations, Σ_n := Φ_n(;θ)θ∈Θ_n. By assumption, |Σ_n| = |Θ_n| ≤ 2^n. By Proposition <ref>, to be proved in Section <ref>, we have (,ϵ)_≥exp(cϵ^1/α). The claim of Proposition <ref> then follows, as a special case, from the abstract result of Proposition <ref> to be derived in Section <ref>. A similar result holds for approximation of Lipschitz operators in an L^p(μ) sense, as shown in the following proposition (cp. Setting <ref>): Let be a Banach space of input functions. Let μ∈() be a probability measure satisfying Assumption <ref>. Assume that the coefficients λ_j ≳ j^-2α as j→∞, where α > 0. 
Let {Φ_n: ×Θ_n →}_n ∈ be a sequence of bit-encoded neural operator architectures, with quantized parameter set |Θ_n|≤ 2^n. Then generic ∈_1() cannot be approximated by {Φ_n} at a convergence rate better than log(n)^-(α+1); more precisely, for any sequence ϵ_n = o(log(n)^-(α+1)), there is a residual subset ⊂_1(), such that for any ∈, inf_θ∈Θ_n‖ - Φ_n(;θ) ‖_L^p(μ)≠ O(ϵ_n), (n→∞). . We let := L^p(μ) and := _1(). We note that ⊂ is a compact, convex subset. We consider the subsets Σ_n ⊂, defined by all possible realizations, Σ_n := Φ_n(;θ)θ∈Θ_n. By assumption, |Σ_n| = |Θ_n| ≤ 2^n. By Proposition <ref>, to be proved in Section <ref>, we have (,ϵ)_≥exp(cϵ^-1/(α+1)). The claim of Proposition <ref> then follows, as a special case, from the abstract result of Proposition <ref> to be derived in Section <ref>. The notion of a residual subset ⊂_1() in Proposition <ref> and <ref> is to be understood with respect to the subspace topology on _1(), induced by the C() and L^p(μ)-norms, respectively. §.§ Approximation of generic Lipschitz operators by FNO The results of the previous section are formulated abstractly for an unspecified sequence of quantized neural operator architectures {Φ_n}. To conclude the discussion of our main results, we illustrate some implications of these results for a concrete operator learning framework, the Fourier neural operator <cit.>. We note that although the derivation of these results will rely on Propositions <ref> and <ref>, the ultimate statement of the theorems will be in terms of the number of tunable real-valued parameters of FNO, without bit-encoding. Thus, the gap between the bit-encoded parameters and real-valued parameters point of view can be bridged in this case. In preparation to stating these theorems for FNO, we briefly describe a specific setting to which FNO is applicable, and recall the FNO architecture. This is followed by the statement of a novel theorem establishing a curse of (exponential) parametric complexity for the FNO, in the uniform approximation setting. FNO case study As a case study, we consider Fourier neural operators (FNO), approximating a relevant class of 1-Lipschitz operators, : ⊂ L^2(D;^d_in) →, mapping square-integrable input functions to the reals (or equivalently, to a space of constant-valued functions). Here is a compact subset of L^2(D;^d_in), consisting of square-integrable functions u: D →^d_in. We wish to approximate such 1-Lipschitz operator , uniformly over the compact set . In the following, we will usually write L^2(D) instead of L^2(D;^in), where for simplicity and due to certain restrictions of the FNO architecture, the underlying domain D = ^d is taken to be the 1-periodic torus ^d ≃ [0,1]^d in d spatial dimensions, where in typical applications, d∈{1,2,3}. Prototpyical examples of relevant are = (H^s(^d)), where (H^s(^d)) = u∈ H^s(^d)‖ u ‖_H^s≤ 1, denotes the unit ball in the Sobolev space H^s(^d) with smoothness s>0. The question to be addressed is how many tunable parameters q are needed to approximate generic ∈_1()_L^2(D) to a prescribed accuracy ϵ? FNO architecture We here recall the general notion of Fourier neural operators <cit.>. Let = (D; ^d_in) and = (D; ^d_out) be two Banach function spaces, consisting of functions u: D →^d_in and w: D →^d_out, respectively. A Fourier neural operator (FNO) defines a nonlinear operator Φ_: (D; ^d_in) →(D; ^d_out), mapping between these spaces. By definition of the FNO architecture, such Φ_ takes the form Φ_(u;θ) = Q ∘_L ∘…∘_1 ∘ P(u). 
where P: →, u(x) ↦ Pu(x) is a linear lifting layer, Q: →, v(x) ↦ Qv(x) is a linear projection layer, and the _ℓ: (D;^) →(D;^) are the hidden layers, mapping between hidden states v ↦_ℓ(v) ∈(D;^). The hidden states are vector-valued functions with components, v: D →^, belonging to a Banach function space (D;^). Here, the “channel width” is a hyperparameter of the architecture. Each hidden layer _ℓ is of the form _ℓ(v)(x) := σ ( Wv(x) + Kv(x) + b ) where W ∈^× is a matrix multiplying v(x) pointwise, and b∈^ is a bias. K is a non-local operator of the form v(x) ↦ (Kv)(x) := ^-1 ( P̂_k v(k) ) (x), with (and ^-1) the Fourier transform (and its inverse). The matrix P̂_k ∈^× is a tunable Fourier multiplier indexed by k∈^d. It is assumed that P̂_k ≡ 0 for |k|_ℓ^∞≥κ, i.e. for wavenumbers k above a specified Fourier cut-off parameter κ. This Fourier cut-off κ is a second hyperparameter of the FNO architecture. We collect the values for different k∈^d, |k|_ℓ^∞ < κ, in a tensor P̂ = {P̂_k }_|k|_ℓ^∞< κ∈^(2κ-1)^d ××, which acts on the Fourier coefficients v̂(k) = (v)(k), by (P̂v̂)(k)_i := ∑_j=1^P̂_k,ijv̂(k), (k ∈^d, |k|_ℓ^∞<κ). The resulting FNO architecture depends on the channel width , Fourier cut-off parameter κ and depth L. We collect all tunable parameters in a vector θ∈^q. Any parameter θ∈^q can be decomposed layer-wise, as θ = (θ_L+1, θ_L, …, θ_1, θ_0), where θ_ℓ = W^(ℓ)_ij, P̂_k,ij^(ℓ), b̂_k^(ℓ) i,j = 1,…, , |k| < κ, k∈^d , collects the parameters of the ℓ-th hidden layer, for 1≤ℓ≤ L. We denote by θ_0 = P_iji,j = 1,…, the parameters of the projection P and by θ_L+1 = Q_iji,j = 1,…, the parameters of lifting Q. Assuming that d_in, d_out≤ d_c, the dimension of θ∈^q satisfies, q = d_in + L(^2 + (2κ)^d ^2 + ) + d_out≤ 5 (2κ)^d L ^2 ≤ 5q. Consistent with practical implementations, it is generally assumed that the hidden channel dimension of the FNO is at least as large as both the input and output dimensions d_in, d_out. We include a list of hyperparameters in Table <ref> to aid clarify notation. Since we are interested in a restricted class of operators : L^2(D) →, with real-valued outputs, we will replace the general output layer : (D;^) →(D;^d_out) by a spatially averaged, real-valued version : (D;^) →, v := _D v(x) dx. This does not affect the parameter-count, while ensuring real-valued outputs. We will refer to this as an output-averaged FNO. In passing and in connection with the last remark, we mention relevant work considering variants of FNO for finite-dimensional input and or output spaces <cit.>, where similar alterations to the original FNO architecture have been studied in greater detail. Generic curse of parametric complexity for FNO Our main theorem will be based on Proposition <ref>, and establishes a generic curse of parametric complexity for FNO. In contrast to the aforementioned proposition, this theorem holds at the level of continuous real-valued parameters θ∈^q, without requiring specification of a bit-encoding. Instead, we assume a mild bound on the parameters θ∈^q. We note that similar assumptions have been considered in the recent work <cit.>, to define relevant approximation spaces of FNO. 
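To make the architecture above concrete, the following is a minimal PyTorch sketch of a single Fourier layer σ(Wv + Kv + b) and the output-averaged read-out for d = 1. The module names, the GELU activation, and the random initialization are our own illustrative choices (the bias b is absorbed into the linear map W), and the sketch is not the reference FNO implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class FourierLayer1d(nn.Module):
    """One hidden layer: sigma(W v(x) + K v(x) + b), with K a Fourier multiplier
    supported on the first kappa modes (simplified 1-d illustration)."""
    def __init__(self, d_c, kappa):
        super().__init__()
        self.W = nn.Linear(d_c, d_c)   # pointwise matrix W, bias b absorbed here
        scale = 1.0 / d_c
        self.P_hat = nn.Parameter(scale * torch.randn(kappa, d_c, d_c, dtype=torch.cfloat))
        self.kappa = kappa

    def forward(self, v):                         # v: (batch, n_grid, d_c)
        v_hat = torch.fft.rfft(v, dim=1)          # complex Fourier coefficients
        out_hat = torch.zeros_like(v_hat)
        k = min(self.kappa, v_hat.shape[1])
        # (P_hat v_hat)(k)_i = sum_j P_hat[k, i, j] v_hat(k)_j, truncated at |k| < kappa
        out_hat[:, :k] = torch.einsum("kij,bkj->bki", self.P_hat[:k], v_hat[:, :k])
        Kv = torch.fft.irfft(out_hat, n=v.shape[1], dim=1)
        return F.gelu(self.W(v) + Kv)

class OutputAveragedFNO(nn.Module):
    """Lifting P -> L Fourier layers -> linear read-out Q, averaged over the domain."""
    def __init__(self, d_in=1, d_c=32, kappa=12, L=3):
        super().__init__()
        self.P = nn.Linear(d_in, d_c)
        self.layers = nn.ModuleList(FourierLayer1d(d_c, kappa) for _ in range(L))
        self.Q = nn.Linear(d_c, 1)

    def forward(self, u):                         # u: (batch, n_grid, d_in)
        v = self.P(u)
        for layer in self.layers:
            v = layer(v)
        return self.Q(v).mean(dim=1).squeeze(-1)  # spatial average -> real-valued output

u = torch.randn(4, 128, 1)                        # 4 input functions on a 128-point grid
print(OutputAveragedFNO()(u).shape)               # torch.Size([4])

As noted above, the number of tunable parameters of such an architecture is bounded by q ≤ 5(2κ)^d L d_c², and the lower bound for FNO will be stated under a mild growth condition on these real-valued parameters θ ∈ ℝ^q.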
To this end, we make the following definition: Given an operator : L^2(D) → and γ > 0, we will say that can be approximated by FNO at a logarithmic rate γ > 0, if there exists a sequence {Φ_q}_q∈ of output-averaged FNO architectures Φ_q: L^2(D) ×^q→ with at most q tunable parameters, and a sequence of parameters θ_q ∈^q, satisfying bound ‖θ_q ‖_ℓ^∞≤exp(q), and ‖ - Φ_q(;θ_q) ‖_C() = O(log(q)^-γ), (q→∞). The specific upper bound on the weights, ‖θ_q ‖_ℓ^∞≤exp(q), is here chosen for simplicity. For the following discussion, it could readily be replaced by a more general upper bound, ‖θ_q ‖_ℓ^∞≤ c_1exp(c_2 q^c_3) for fixed constants c_1, c_2, c_3, without affecting the main conclusions. We can now state our main result for FNO: Let ⊂ L^2(D) be compact. Assume that the metric entropy of satisfies an algebraic lower bound, (;ϵ)_L^2(D)≳ϵ^-1/α for some α > 0. Consider FNO with a fixed Lipschitz continuous activation function σ. Then generic ∈_1() cannot be approximated by FNO at a logarithmic rate γ, for any γ > α. Thus, loosely speaking and under mild growth assumptions on the weights, the approximation of generic ∈_1() to accuracy ϵ > 0, requires an FNO architecture with exponentially many tunable parameters in ϵ^-1. The following corollary is obtained by taking = (H^s(^d)) as the unit ball in a Sobolev space H^s(^d) for s>0, and with ^d the d-dimensional periodic torus: Let s>0, and denote = (H^s(^d)). Then generic ∈_1() cannot be approximated by FNO at logarithmic rate γ, for any γ > s/d. Fix γ > α. We wish to show that generic ∈_1() cannot be approximated at logarithmic rate γ. Proof of this claim will make use of the following lemma: [FNO quantization lemma]lemmafnoquant Fix Lipschitz continuous activation function σ. Let γ > 0. For any q∈, there exists a quantized neural operator Φ̃_n_q: L^2(D) ×{0,1}^n_q→ with 2^n_q quantized parameter values, where n_q ≍ q^m, m = d+6, such that for any output-averaged FNO Φ_q with activation σ and at most q tunable parameters, we have sup_θ∈[-M_q,M_q]^qinf_[θ]∈{0,1}^n_q‖Φ_q(; θ) - Φ̃_n_q(;[θ]) ‖_C()≤log(q)^-γ. where M_q := exp(q). The detailed proof of this lemma is included in Appendix <ref>; in short, the proof relies on two observations: (i) all possible FNO architectures with at most q parameters can be encapsulated by a “super” FNO-architecture Φ̂(;θ) with a number of parameters that is bounded algebraically in q for fixed algebraic exponent, and (ii) quantization of this super-architecture with an algebraically bounded number of bits is possible, since the mapping θ↦Φ̂(;θ) has at least a weak form of stability (Lipschitz continuity) over the relevant range of parameters θ, and a Lipschitz constant that grows at a sufficiently slow rate as a function of q. By Lemma <ref>, there exists m∈, a sequence n_q ≍ q^m, and a sequence of quantized neural operators, Φ̃_n_q: L^2(D)×{0,1}^n_q→, such that sup_θ∈ [-M_q,M_q]^qinf_[θ]∈{0,1}^n_q‖Φ_q(; θ) - Φ̃_n_q(;[θ]) ‖_C()≤log(q)^-γ. Associated with this subsequence n_q →∞, we now define an (abstact) sequence of bit-encoded neural operators for arbitrary n∈; specifically, we define Φ̃_n(;): L^2(D) ×{0,1}^n →, by Φ̃_n(;[θ]_n) := Φ̃_n_q(;[θ]_n_q), [θ]_n ∈{0,1}^n, where n_q is chosen maximal such that n_q ≤ n, and [θ]_n_q are the first n_q≤ n bits of [θ]_n (the values of the remaining bits are simply ignored). We note that since n_q ≍ q^m, we have log(q)^-γ≍log(n_q)^-γ. 
Furthermore, for arbitrary fixed operator , we note that the decay inf_[θ]∈{0,1}^n_q‖ - Φ̃_n_q(;[θ]) ‖_C()≲log(n_q)^-γ, along the specified subsequence n_q ≍ q^m also implies the error decay inf_[θ]∈{0,1}^n‖ - Φ̃_n(;[θ]) ‖_C()≲log(n)^-γ, along the full sequence n∈, as n→∞. This is immediate from the definition of Φ̃_n and the fact that n_q ≤ n < n_q+1 does not leave exponential gaps between subsequent n_q, since 1 ≤ n_q+1/n_q ≍ (q+1)^m / q^m = O(1); in particular, this implies that log(n_q) ∼log(n_q+1) ∼log(n). By Proposition <ref>, the set of operators ⊂_1() which can be approximated by such a sequence {Φ̃_n}, at logarithmic rate γ, is meagre (its complement is residual). To conclude the argument, it therefore suffices to show that if can be approximated by FNO at logarithmic rate γ, then ∈. This then implies that the set of operators that can be approximated by FNO at logarithmic rate γ is a subset of , and hence is itself meagre. To this end, assume that ∈_1() is approximated by FNO at logarithmic rate γ. By definition, there exists a sequence of FNOs, Φ_q: L^2(D)×^q→, such that, inf_θ∈ [-M_q,M_q]^q‖ - Φ_q(;θ) ‖_C() = O(log(q)^-γ). By the triangle inequality, inf_[θ]∈{0,1}^n_q‖ - Φ̃_n_q(; [θ]) ‖_C() ≤‖ - Φ_q(; θ_q) ‖_C() + inf_[θ]∈{0,1}^n_q‖Φ_q(;θ_q) - Φ̃_n_q(; [θ]) ‖_C() ≤ O( log(q)^-γ) + O(log(n_q)^-γ) = O(log(n_q)^-γ), along the specified sequence n_q →∞. By (<ref>), this implies that ‖ - Φ̃_n(; [θ]) ‖_C() = O(log(n)^-γ), along the entire sequence n→∞, and hence ∈, i.e. belongs to the meagre set of operators which can be approximated by the sequence {Φ̃_n} at logarithmic rate γ. We have shown that any operator that is approximated by FNO at logarithmic rate γ belongs to the meagre set . Hence, the set of operators that is approximated by FNO at logarithmic rate γ is itself meagre, and its complement = _1() ∖ is residual. We conclude that generic operators ∈_1(), belonging to , cannot be approximated at logarithmic rate γ > α. § THE METRIC ENTROPY OF LIPSCHITZ OPERATORS In the present section, we provide lower bounds on the metric entropy of Lipschitz operators in two general settings; the first pertains to the sup-norm over a compact set of inputs, the second is of relevance to the approximation with respect to the Bochner L^p-norm with respect to a probability measure on the input space. After briefly recalling the relation between covering and packing numbers, we proceed to consider the sup-norm setting in Section <ref> and the L^p-setting in Section <ref>. §.§ Entropy, covering and packing We recall from Definition <ref> that the metric entropy (;ϵ)_ of a subset ⊂ is defined by (;ϵ)_ = log_2 (;ϵ)_; here, (;ϵ)_ denotes the covering number of , which is defined as the smallest number of open balls needed to cover . We also recall the closely related notion of a packing number: Let (,d) be a metric space. The packing number of a subset ⊂, denoted (;ϵ)_, is the largest integer M ∈ for which there exist elements u_1,…, u_M ∈, with pairwise distance d(u_j,u_k) ≥ϵ, for all distinct j,k∈{1,…, M}. With our definitions, the following inequalities between covering and packing numbers are elementary: For any subset ⊂, we have (; 3ϵ)_≤(;ϵ)_≤(;ϵ)_. We mention that, if the covering number is defined by open balls, the factor 3 in the first term could have been replaced by 2. With our closed definition, any factor >2 would do – we here choose 3 for simplicity. §.§ Uniform approximation We are here interested in the uniform setting (Setting <ref>), i.e. 
the unifrom approximation of a (real-valued) mapping : → over a compact domain ⊂. As pointed out before, given the link between minimax code-length and metric entropy, we are interested in estimating the metric entropy of _1() for a compact metric space. The following proposition relates the metric entropy of _1()⊂ to that of , when = C() is metrized by the sup-norm: Let (,d) be a metric space. Let ϵ∈ (0,1/3]. The metric entropy of _1() ⊂ C() is lower bounded by (_1(), ϵ)_C()≥ 2^(;6ϵ)_. Proposition <ref> shows that the space of 1-Lipschitz functions on a compact metric space has exponentially larger entropy than the underlying space. Let ϵ∈ (0,1/3] be given. Let N = (; 6ϵ)_. Since the covering number lower bounds the packing number (cf. (<ref>)), there exist N elements u_1, …, u_N ∈, with pairwise distance ≥ 6ϵ. Let ψ_j(u) := max(3ϵ - d(u,u_j), 0), j=1,…, N, denote “hat” functions centered at u_j, and non-vanishing only on B_3ϵ(u_j)⊂. We note that each ψ_j is 1-Lipschitz, satisfies ‖ψ_j ‖_C() = 3ϵ, and the supports of ψ_j are essentially disjoint. We now consider the set of Lipschitz functions f: → of the form, f_σ(u) = ∑_j=1^N σ_j ψ_j(u), σ = (σ_1,…, σ_N) ∈{0,1}^N. These functions satisfy ‖ f_σ‖_C()≤ 3ϵ≤ 1, and (f_σ) ≤max_j=1,…, N(ψ_j) = 1, for all choices of σ. Furthermore, if σ, σ' ∈{0,1}^N are two distinct elements, say with σ_j_0σ'_j_0, then it is straightforward to show that ‖ f_σ - f_σ'‖_C()≥‖ψ_j_0‖_C() = 3ϵ. Thus, we have shown that there exist 2^N = |{0,1}^N| functions f_σ∈_1(), with pairwise C()-distance ≥ 3ϵ. In particular, this implies that the packing number (_1();3ϵ)_C()≥ 2^N, and by the inequality (<ref>) between packing- and covering-numbers, this now implies that (_1();ϵ)_C()≥(_1();3ϵ)_C()≥ 2^N. The claim follows by taking logarithms and recalling that N = (;6ϵ) = 2^(;6ϵ)_. We conclude this section with several corollaries of Proposition <ref>. If D ⊂^d is a compact domain in Euclidean space, then (_1(D);ϵ) ≳ϵ^-d. It is a well-known fact that (D;ϵ) ≳ϵ^-d, with an implied constant depending on the dimension d and the volume of D; for example, this can be a simple volume argument for an ϵ-covering D ⊂⋃_n=1^N B_ϵ(x_n), which yields (D) ≤( ⋃_n=1^N B_ϵ(x_n) ) ≤ N (B_ϵ) = N C_d ϵ^d ⇒ N ≥(D)/C_d ϵ^d. The claim thus follows from Proposition <ref>. Let D ⊂^d be a compact domain in Euclidean space. Let = (W^s,p(D)) be the unit ball in the space of Sobolev functions possessing s>0 weak derivatives in L^p(D), considered as a subset of L^p(D). Then there exists a constant c>0, such that (_1();ϵ) ≳exp(cϵ^-d/s). The metric entropy of (W^s,p(D)) with respect to the L^p-norm is lower bounded by <cit.>: (;ϵ)_L^p(D)≳ϵ^-d/s. The claim thus follows from Proposition <ref>. Let D ⊂^d be a compact domain in Euclidean space. Let = (C^s(D)) be the unit ball in the space of Hölder continuous functions of order s>0, considered as a subset of C(D). Then there exists a constant c>0, such that (_1();ϵ) ≳exp(cϵ^-d/s). The metric entropy of (C^s(D)) with respect to the sup-norm is lower bounded by <cit.>: (;ϵ)_C(D)≳ϵ^-d/s. The claim thus follows from Proposition <ref>. §.§ Approximation in expectation Besides the setting discussed in the previous section, which is relevant for the uniform approximation of operators over a compact set of input functions, another commonly studied setting is the approximation in expectation (cp. Setting <ref>): Here, we consider 1-Lipschitz mappings : → defined on a separable Hilbert space . We fix a probability measure μ on and consider inputs as random draws u∼μ. 
We assume that μ satisfies the minimal structural Assumption <ref>; under this assumption, random draws u∼μ can be obtained from a Karhunen-Loeve-like expansion, u = ∑_j=1^∞√(λ_j) Z_j e_j. Our aim is to find lower bounds on the metric entropy of _1()⊂, where = L^p(μ) is the space of L^p(μ)-integrable operators. The following entropy estimate represents the main novel contribution of this section: Let be a separable Hilbert space, and let μ be a probability measure satisfying Assumption <ref>. Let p∈ [1,∞) be given. Assume that the coefficients √(λ_j)≳ j^-α as j→∞, where α > 0. Then the metric entropy of _1() with respect to the Bochner L^p(μ)-norm, obeys the following lower bound: There exist constants c,ϵ_0>0, such that (_1(); ϵ)_L^p(μ)≥exp(cϵ^-1/(α+1)), ∀ ϵ∈ (0,ϵ_0]. Our proof of Proposition <ref> will rely on several technical lemmas, which we state and prove below. The first lemma identifies an isometric embedding L^p([0,1]^d) L^p(μ). Let be a separable Hilbert space. Let μ∈() satisfy Assumption <ref>, and let p∈ [1,∞). Then for any d∈, there exists an isometric embedding, ι_d: L^p([0,1]^d) L^p(μ), such that ι_d(_1([0,1]^d)) ⊂_L/√(λ_d)(), where the Lipschitz norm on [0,1]^d is defined with respect to the ℓ^∞-norm on [0,1]^d. By assumption, μ∈() is the law of a random field u: Ω→ of the form, u(ω) = ∑_j=1^∞√(λ_j) Z_j(ω) e_j, with Z_j independent, Z_j ∼ρ_j(z) dz. To construct the claimed isometry, we define F_j(z) := ∫_-∞ρ_j(ζ) dζ as the cumulative distribution function of ρ_j. We recall that F_j(Z_j) ∼(0,1) is uniform [0,1] distributed. Furthermore, we clearly have (F_j) = ‖ρ_j ‖_L^∞()≤ L, where the last bound is by Assumption <ref>. Given u∈, we define u_j := ⟨ e_j, u⟩_ the coefficients of u with respect to the orthonormal basis {e_j}. Using the CDFs introduced above, F_j: → [0,1], we now define a mapping, ι_d: L^p([0,1]^d) → L^p(μ), (ι_d f)(u) := f(F_1(u_1/√(λ_1)), …, F_d(u_d/√(λ_d))). To see that this is well-defined, we note that, using the expansion of the random field (<ref>), u_j/√(λ_j) = Z_j, and hence (ι_d f)(u) = f(F_1(Z_1), …, F_d(Z_d)), for u ∼μ, and we once again remind ourselves that F_j(Z_j) ∼(0,1) is uniformly distributed on [0,1], and that the Z_j are independent by assumption. Thus, it follows that _u∼μ | (ι_d f)(u) |^p = |f(F_1(Z_1), …, F_d(Z_d))|^p = ∫_[0,1]^d |f(x_1,…, x_d)|^p dx = ‖ f ‖_L^p([0,1]^d)^p. Thus, ‖ι_d f ‖_L^p(μ) = ‖ f ‖_L^p([0,1]^d). This shows that ι_d: L^p([0,1]^d) → L^p(μ) is an isometry as claimed. To verify that ι_d(_1([0,1]^d)) ⊂_L/√(λ_d)(), we note that h_d: (, ‖‖_) → ([0,1]^d, ℓ^∞), u ↦ (F_1(u_1/√(λ_1)), …, F_d(u_d/√(λ_d))), has Lipschitz constant bounded by (h_d) ≤max_j=1,…, d(F_j)/√(λ_j)≤L/√(λ_d). Thus, for any f∈_1([0,1]^d) = _1(([0,1]^d,ℓ^∞)), (ι_d f) = ( f ∘ h_d ) ≤(f) (h_d) ≤L/√(λ_d). Furthermore, we also have ‖ι_d f ‖_C()≤‖ f ‖_C([0,1]^d)≤ 1. This shows that ‖ι_d f ‖_ = max{‖ι_d f ‖_C(), (ι_d f) }≤max{1,L/√(λ_d)} = L/√(λ_d). Here, we have made use of the choice L>√(λ_1)≥√(λ_d) (cp. (<ref>)) in the last inequality. This concludes our proof. As a consequence of Lemma <ref>, we have: Under the assumptions of Lemma <ref>, we have (_1(); ϵ)_L^p(μ)≥(_1([0,1]^d); Lϵ/√(λ_d))_L^p([0,1]^d), for any d ∈. We recall the existence of an isometric embedding ι_d: L^p([0,1]^d) → L^p(μ) from Lemma <ref>, with ι_d(_1()) ⊂_L/√(λ_d)([0,1]^d). It follows that (_1(); ϵ)_L^p(μ) = (_L/√(λ_d)(); Lϵ/√(λ_d))_L^p(μ) ≥(ι_d(_1([0,1]^d)); Lϵ/√(λ_d))_L^p(μ) = (_1([0,1]^d); Lϵ/√(λ_d))_L^p([0,1]^d). 
Taking logarithms, the claimed inequality between the metric entropy follows. The proof of Proposition <ref> will furthermore make use of the following result in the finite-dimensional setting: Let p∈ [1,∞) be given. For d∈, consider _1([0,1]^d) ⊂ L^p([0,1]^d). Then there exists a constant c>0, independent of d, such that we have the following lower bound on the metric entropy: (_1([0,1]^d); ϵ)_L^p([0,1]^d)≥1/8(c/d ϵ)^d, ∀ ϵ∈(0,c/d]. Since the Hölder inequality implies, for any p∈ [1,∞), that ‖ f ‖_L^1([0,1]^d)≤‖ f ‖_L^p([0,1]^d), it follows that any covering of _1([0,1]^d) by ϵ-balls with respect to the L^p-norm, also gives rise to a covering of _1([0,1]^d) with respect to the L^1-norm (with the same centers). In particular, this implies that (_1([0,1]^d);ϵ)_L^p([0,1]^d)≥(_1([0,1]^d);ϵ)_L^1([0,1]^d), and we only need to establish (<ref>) for p=1. For λ∈ (0,1), define ϕ_λ: [0,1]^d →_+ as a composition g_λ∘‖‖_ℓ^∞, where g_λ: → is a piecewise linear function (approximately g_λ≈ 1_[0,1]) with values, g_λ(x) := 0, (x∉ [0,1]), 1, (x∈ [λ/2,1-λ/2]), and g_λ interpolates linearly between 0 and 1 on [0,λ/2], and from 1 to 0 on [1-λ/2,1]. By construction, g_λ is 2/λ-Lipschitz. Since x ↦‖ x ‖_ℓ^∞ is 1-Lipschitz, it follows that (ϕ_λ) = (g_λ∘‖‖_ℓ^∞) ≤ 2/λ. Clearly, smaller λ leads to a larger Lipschitz constant. However, by construction of ϕ_λ, we have ϕ_λ≥ 1_[λ/2,1-λ/2]^d. In particular, this implies that ‖ϕ_λ‖_L^1≥ (1-λ)^d. Thus, smaller λ increases the L^1-norm of ϕ_λ. Given N∈, we now subdivide [0,1]^d into N^d cubes of equal length, indexed by j∈ [N]^d, where [N]^d = {1,…, N}^d. For any multi-index j∈ [N]^d, we define ϕ_λ, j(x) as a rescaled and translated copy of ϕ_λ, such that the support of ϕ_λ, j coincides with the j-th cube. In particular, by construction of ϕ_λ, this implies that ‖ϕ_λ,j‖_L^1([0,1]^d)^2 ≥ (1-λ)^d N^-d, (ϕ_λ,j) ≤ 2Nλ^-1. We also note that the ϕ_λ, j have essentially disjoint supports. For σ∈{-1,1}^[N]^d, we now define f_σ(x) = λ/2N∑_j ∈ [N]^dσ_j ϕ_λ,j(x). The factor in front of the sum ensures that (f_σ) ≤ 1. Furthermore, we also note that ‖ f_σ‖_C([0,1]^d)≤λ / 2N ≤ 1 for any choice of λ∈ (0,1) and N ∈. In particular, we have f_σ∈_1([0,1]^d), for any choice of σ. We finally observe that, due to the disjoint supports of the ϕ_λ,j, we have, for any σ,σ' ∈{-1,1}^[N]^d, ‖ f_σ - f_σ'‖_L^1([0,1]^d) = λ/2N∑_j∈ [N]^d |σ_j - σ'_j| ‖ϕ_λ,j‖_L^1([0,1]^d) ≥λ (1-λ)^d N^-1#{σ_jσ_j'}/N^d. The last quotient is the fraction of entries in which σ and σ' differ. It turns out that there exists a subset Ξ⊂{-1,1}^[N]^d, such that any σσ' belonging to Ξ differ on a substantial fraction of their components; more precisely, as noted in <cit.> as a result of the Gilbert-Varshamov bound, there exists a subset Ξ⊂{-1,1}^[N]^d satisfying that any two distinct elements σ,σ' ∈Ξ, differ on at least a fourth of their coordinates, #{σ_jσ_j'}/N^d≥1/4, ∀ σ, σ' ∈Ξ, σσ', and the cardinality of Ξ is lower bounded by, #Ξ≥exp(N^d/8) ≥ 2^N^d/8. This implies that for any two σσ' in Ξ, we have ‖ f_σ - f_σ'‖_L^1([0,1]^d)≥1/4Nλ (1-λ)^d. Optimizing the right-hand side over λ∈ (0,1), we set λ = 1/(1+d) to obtain, ‖ f_σ - f_σ'‖_L^1([0,1]^d)≥1/4(d+1)N1/(1+1/d)^d≥1/4e(d+1)N≥1/8edN, where we used that the Euler constant e ≥ (1+1/d)^d and the fact that d≥ 1 implies d+1 ≤ 2d in the last bound. Taking into account the bound (<ref>), it follows that the packing number (_1([0,1]^d); ϵ), satisfies the lower bound, log_2(_1([0,1]^d); (β_d N)^-1) ≥ N^d/8, ∀ N∈, where we have defined β_d = 8ed. 
Given ϵ∈ (0,β_d^-1], we can find N∈, such that (β_d N)^-1≥ϵ≥ (2β_d N)^-1. It follows that log_2 (_1([0,1]^d); ϵ) ≥log_2 (_1([0,1]^d); (2β_d N)^-1) ≥(2N)^d/8≥1/8(β_d ϵ/2)^-d. We conclude that log_2 (_1([0,1]^d); ϵ) ≥1/8( β_d/2ϵ)^-d, ∀ ϵ∈ (0,β_d^-1]. This lower bound on the packing number holds for any dimension d∈. We can now use the general relation (A;ϵ) ≥(A;2ϵ) between the covering- and packing-numbers (<ref>), to conclude that, (_1([0,1]^d);ϵ) = log_2 (_1([0,1]^d); ϵ) ≥1/8( β_d ϵ)^-d, ∀ ϵ∈ (0,β_d], where β_d = 8ed. This proves the claim with c = 1/(8e), i.e. log_2 (_1([0,1]^d); ϵ) ≥1/8( c/dϵ)^d, ∀ ϵ∈(0,c/d], Assuming the results of Corollary <ref> and Lemma <ref>, we can now prove Proposition <ref>. Combining the lower bound (<ref>) and (<ref>), we obtain that for any d∈, log_2 (_1(); ϵ)_L^p(μ)≥1/8(c√(λ_d)/L dϵ)^d≥(c√(λ_d)/8L dϵ)^d, provided that ϵ≤c√(λ_d)/L d. Since λ_d ≳ d^-2α by assumption, and since C and L are constants independent of d, it thus follows that there exist c_1,c_2 > 0, independent of d, such that log_2 (_1(); ϵ)_L^p(μ)≥(c_1/d^1+αϵ)^d, if ϵ≤ c_2 d^-(1+α). The idea is now to choose d = d(ϵ) ∼ϵ^-1/(α+1), such that the term inside the parentheses is lower bounded by e^β for some fixed β>0, implying that the right hand side is ≳ (e^β)^d = exp(β d) ≳exp(cϵ^-1/(α+1)) for some constant c>0. This then leads to the claimed lower bound. We now proceed to provide the details of the required argument. We first fix β = -log(c_2/c_1), such that e^-β = c_2 / c_1. We next define ϵ_0 = c_1 e^-β = c_2. Since c_1,c_2 are independent of d, it follows that also β and ϵ_0 are independent of d. For any ϵ∈ (0,ϵ_0], the above choice ensures that ϵ≤ϵ_0 ≤ c_1 e^-β, and hence there exists a unique d = d(ϵ)∈, such that ϵ d^(1+α)≤ c_1 e^-β < ϵ (2d)^(1+α). In particular, upon rearranging the first inequality in the last display, we obtain the two equivalent formulations, ϵ≤ c_1 e^-β d^-(1+α) = c_2 d^-(1+α), c_1/d^(1+α)ϵ≥ e^β. while the second bound c_1 e^-β < ϵ (2d)^(1+α) implies, β d ≥ c ϵ^-1/(α+1), where c := β[c_1/2e^β]^1/(α+1). With this choice of d=d(ϵ), equation (<ref>) guarantees that the estimate in (<ref>) applies to all ϵ∈ (0,ϵ_0]. This in turn implies that (_1();ϵ)_L^p(μ) = log_2 (_1();ϵ)_L^p(μ) ≥(<ref>)(c_1/d^(1+α)ϵ)^d ≥(<ref>) e^β d ≥(<ref>)exp( c ϵ^-1/(α+1)), for all ϵ∈ (0,ϵ_0]. This is the claimed lower bound on the metric entropy. § GENERIC APPROXIMATION RESULTS We first discuss an abstract formulation of a general “approximation task”. Let be a Banach space (e.g. a space of operators). In a general non-linear approximation task, we are given for any n∈ a set Σ_n ⊂ over which we aim to approximate an element f ∈, where we will assume that f belongs to a general class ⊂ of interest. Considering these subsets Σ_n ⊂ fixed, and given a sequence ϵ_n → 0, we will say that f∈ can be approximated with convergence rate ϵ_n, if there exists a constant M_f > 0, such that inf_ψ_n ∈Σ_n‖ f - ψ‖_≤ M_f ϵ_n, ∀ n∈. Specifically, we will be most interested in the logarithmic case ϵ_n = log(n)^-γ, in the following, with Σ_n corresponding to all possible realizations of a fixed bit-encoded neural operator architecture (cp. the proofs of Propositions <ref> and <ref>, respectively). Coming back to the general abstract setting above, and given M>0, we introduce a set of “efficiently approximated” elements _M ⊂ with bound M, i.e. _M := f ∈inequality (<ref>) holds with constant M_f = M. 
And we denote the set of all f∈ which can be approximated at convergence rate ϵ_n, by elements in Σ_n, by = ⋃_M>0_M = f ∈there exists M_f such that (<ref>) holds. Our goal is to study generically achievable approximation rates ϵ_n, in terms of the complexity of , as measured by its metric entropy. The following lemma will be fundamental to our analysis: Let be a Banach space. Let ⊂ be a compact, convex subset. Let {Σ_n}_n∈ be a family of subsets Σ_n⊂, with |Σ_n| ≤ 2^n elements. Fix M>0. If _M⊂ given by (<ref>) has non-empty interior in the subspace topology on , then there exists a constant λ > 0, independent of n, such that the metric entropy satisfies the bound, (; λϵ_n)_≤ n, ∀ n ∈. At the outset we note that by compactness, we have a uniform upper bound, sup_f∈‖ f ‖≤ C_ < ∞. Upon a simple rescaling, we may wlog assume that C_ = 1, i.e. that ‖ f ‖≤ 1 for all f∈. This will be assumed in the following proof. Our next goal is to show that, for any M>0, the set _M defined by (<ref>) has empty interior. For the sake of contradiction, assume that _M does not have empty interior. Then there exists f_0∈ and δ > 0, such that B_δ(f_0) ⊂_M ⊂⋃_ψ_n ∈Σ_nB_Mϵ_n (ψ_n), where B_δ(f_0) = f∈‖ f - f_0 ‖ < δ⊂ is an open ball in the subspace topology on . Thus, for any n∈, we obtain the following bound on the covering numbers, (B_δ(f_0); Mϵ_n) ≤(_M;Mϵ_n) ≤ |Σ_n| ≤ 2^n. We next recall that we have wlog assumed sup_f∈‖ f ‖≤ 1, and we recall that is convex by assumption. In particular, we next show that this implies that (1 - δ/3)f_0 + δ/3⊂ B_δ(f_0). To see why, let δ' = δ/3 and fix f ∈ arbitrary. We need to show that f_δ' := (1-δ') f_0 + δ' f ∈ B_δ(f_0). Since is convex, it is clear that f_δ'∈. In addition, we also have ‖ f_δ' - f_0 ‖ = ‖ (1-δ') f_0 + δ' f - f_0 ‖ = δ' ‖ f-f_0 ‖≤ 2δ' = 2δ/3 < δ. Hence, f_δ'∈ B_δ(f_0) as claimed. The inclusion, (1-δ/3) f_0 + (δ/3) ⊂ B_δ(f_0) now implies, (B_δ(f_0); Mϵ_n) ≥((δ/3) ; Mϵ_n) = (; 3Mϵ_n / δ). Combining (<ref>) and (<ref>), we conclude that (;3Mϵ_n/δ)_ = log_2 (; 3Mϵ_n / δ)_≤ n, ∀ n ∈. We emphasize that M,δ > 0 are independent of n in the above argument. In particular, the claim of the lemma holds with constant λ = 3M/δ > 0. Let be a Banach space. Let ⊂ be a compact, convex subset. Assume that there exist constants C,c,γ>0 such that, (;ϵ)_≥ C exp(c ϵ^-1/γ), ∀ ϵ > 0. Let {Σ_n}_n∈ be family of subsets Σ_n⊂ with |Σ_n| ≤ 2^n elements. Then generic elements f∈ cannot be approximated by elements of Σ_n at convergence rate better than log(n)^-γ; more precisely, for any sequence ϵ_n = o(log(n)^-γ), the subset ⊂, consisting of all f∈, such that inf_ψ_n ∈Σ_n‖ f - ψ_n ‖≠ O(ϵ_n), is residual. Before coming to the proof of Proposition <ref>, we note that since ⊂ is compact, is a complete metric space in the subspace topology. In particular, the following argument, which is based on the Baire category theorem, can be applied to (cp. Appendix <ref> for a summary). Let := ∖, where is defined by (<ref>). Recall that is precisely the set of f∈ for which there exists M_f>0 such that inf_psi_n ∈Σ_n‖ f - ψ_n ‖≤ M_f ϵ_n. In Lemma <ref>, it is shown that if _M ⊂ has non-empty interior then there exists a constant λ > 0, such that log(; λϵ_n) ≤ n, ∀ n ∈. By assumption on , the left hand side is lower bounded by Cexp(c(λϵ_n)^-1/γ). Thus, if _M has non-empty interior, then we must have Cexp((λϵ_n)^-1/γ) ≤ n ⇒ ϵ_n ≳log(n)^-γ, as n→∞. But by the assumption that ϵ_n = o(log(n)^-γ), this last lower bound cannot hold, asymptotically as n→∞. Thus, we conclude that _M ⊂ has empty interior for any M>0. 
We furthermore note that _M is closed; indeed, _M in (<ref>) is given by, _M = ⋂_n=1^∞⋃_ψ_n ∈Σ_nB_M ϵ_n(ψ_n), where we define the closed balls (in the induced topology on ), B_Mϵ_n(ψ) := f∈‖ f - ψ‖≤ M ϵ_n⊂. Therefore _M can be written as an intersection of a union of closed balls of radius M ϵ_n centered at elements ψ∈Σ_n. Note that, since the set Σ_n is finite by assumption, the union of these closed balls, _M,n := ⋃_ψ_n ∈Σ_nB_M ϵ_n(ψ_n), is closed for any n∈, implying that also _M = ⋂_n=1^∞_M,n⊂ is closed as an intersection of closed sets. To conclude the proof, we simply note that = ⋃_M∈_M can be written as a countable union, for integer M∈, of closed subsets with empty interior _M. In particular, this implies that is itself meagre by the Baire category theorem. We conclude that the complement := ∖, consisting of all f∈ for which inf_ψ_n ∈Σ_n‖ f - ψ_n ‖≠ O(ϵ_n), is residual. This completes the proof. A similar result can also be derived under the assumption of an algebraic scaling. This may be of relevance for generic function approximation by neural networks, and hence we mention it here, in passing. Let be a Banach space. Let ⊂ be a compact, convex subset. Assume that there exist constants C,γ>0 such that, log(;ϵ) ≥ C ϵ^-1/γ, ∀ ϵ > 0. Let {Σ_n}_n∈ be a family of subsets Σ_n⊂ with |Σ_n| ≤ 2^n elements. Then generic elements f∈ cannot be approximated by elements of Σ_n at convergence rate better than n^-γ; more precisely, for any sequence ϵ_n = o(n^-γ), the subset ⊂, such that, inf_ψ_n ∈Σ_n‖ f - ψ_n ‖≠ O(ϵ_n), ∀ f ∈, is residual. Let := ∖, where is defined by (<ref>). Recall that is precisely the set of f∈ for which there exists M_f>0 such that inf_psi_n ∈Σ_n‖ f - ψ_n ‖≤ M_f ϵ_n. In Lemma <ref>, it is shown that if _M ⊂ has non-empty interior then there exists a constant λ > 0, such that log(; λϵ_n) ≤ n, ∀ n ∈. By assumption on , the left hand side is lower bounded by C(λϵ_n)^-1/γ. Thus, if _M has non-empty interior, then we must have C(λϵ_n)^-1/γ≤ n ⇒ ϵ_n ≳ n^-γ, as n→∞. By assumption, ϵ_n = o(n^-γ), this is not the case. Thus, we conclude that _M ⊂ has empty interior for any M>0. Thus, arguing as in the proof of Proposition <ref> it follows that is meagre, and hence = ∖ is residual. § CONCLUSION Operator learning is a new paradigm for the data-driven approximation of operators. Popular operator learning frameworks extend and generalize neural networks to this infinite-dimensional setting. While there are numerous papers demonstrating the potential and practical utility of proposed neural operator architectures, our understanding of the precise conditions under which operator learning is practically feasible remains limited. This paper makes a contribution to the mathematical underpinnings of this field, by providing an information-theoretic perspective on the curse of parametric complexity (a scaling-limit of the curse of dimensionality) identified in <cit.>. In particular, it is shown that this curse poses a fundamental limitation to operator learning on general spaces of Lipschitz operators. Bit-encoding (storing in memory) any neural operator architecture, which is capable of achieving approximation accuracy ϵ for general 1-Lipschitz continuous and real-valued operators, requires a number of bits that is exponential in ϵ^-1. 
It is shown that this is true not only when measuring the approximation error in the sup-norm over compact sets of input functions, but also when measuring the error in the L^p(μ)-norm with respect to a probability measure satisfying certain structural assumptions. The assumptions are met for widely considered μ, including the case of a Gaussian random field with at most algebraically decreasing eigenvalues of the covariance. These results rely on minimax analysis and, in contrast to prior work <cit.>, are independent of the employed activation function in the architecture. Going beyond such minimax analysis, we furthermore study the approximation of individual Lipschitz operators by a sequence of neural operator architectures. Such a sequence would e.g. be obtained when increasing the width, depth or other hyperparameters at a pre-defined rate as the model is scaled up. In this setting, we address the following question: “At which rate can the approximation error along such a sequence decrease, as a function of the total number of bit-encoded parameters?” Using topological arguments based on Baire category, we establish a quantitative relation between the metric entropy of the set of 1-Lipschitz operators, and the best approximation-rate that can be achieved along such a sequence for generic 1-Lipschitz operators; as a consequence of the exponential increase in metric ϵ-entropy of the set of 1-Lipschitz operators, it is shown that achievable approximation rates are at most logarithmic as a function of the required encoding bits. Finally, this abstract analysis leads to a concrete result on the approximation of generic Lipschitz operators by Fourier neural operator. Our results imply that for generic 1-Lipschitz operators, and under mild assumptions on the tunable parameters, there cannot exist a sequence of FNO approximations which approximates the underlying operator at a rate that decays faster than logarithmic in the number of real-valued parameters. To obtain this result, mild bounds on the growth of the parameters of FNO approximants are assumed; specifically, the size of individual parameter is assumed to be exponentially bounded by the total number of parameters, as the model size is scaled up. The results of this work should be compared and contrasted with the recent work <cit.>, which shows the surprising result that there exist (non-standard) neural operator architectures capable of approximating Lipschitz continuous operators to accuracy ϵ, with a number of real-valued tunable parameters q growing only algebraically with ϵ^-1. The analysis of the present work indicates that a practical implementation of such architectures on computing hardware, and with parameters encoded by a total of B bits will require B to be exponentially large in ϵ^-1. In fact, if each parameter is encoded by b_1 bits, then a lower bound of the following form is to be expected: q b_1 ≥ Cexp(cϵ^-γ), for fixed constants C,c,γ>0 independent of ϵ. In particular, if q≲ϵ^-λ grows at most algebraically, as in the construction <cit.>, then the number of encoding bits q_1 per parameter must necessarily grow exponentially. Thus, the only trade-off that appears possible from an information-theoretic perspective is to reduce the number of parameters q at the expense of the required number of bits per parameter b_1, or vice versa. 
In turn, the required number of encoding bits is intimately linked to the stability of the mapping θ↦Φ(;θ) from parameters θ to the corresponding realization of the neural operator Φ(;θ); an exponentially growing number of bits b_1 is only required if the parameter-to-realization mapping is either very unstable, e.g. having very large Lipschitz constant, or if the optimal parameters themselves are very large. Here, “large” means that either the Lipschitz constant or the ℓ^∞-norm of the parameters grows exponentially with ϵ^-1. The results of this work underline the fundamental character of the curse of parametric complexity identified in <cit.> from the point of view of information theory. In addition, it is here shown that this curse persists even when the sup-norm (uniform approximation of the underlying operator) is replaced by an a priori much weaker L^p-norm (approximation in expectation). This considerably constrains the generality with which approximation theory for operator learning, guaranteeing efficient approximation by neural operators at algebraic convergence rates, can be developed. A complete or partial characterization of the relevant mathematical properties and structures enabling efficient operator approximation, would be highly desirable. The results presented in this work demonstrate rigorously that one has to go beyond Lipschitz operators to achieve this. § ACKNOWLEDGMENTS The author would like to thank Andrew M. Stuart and Nikola B. Kovachki for interesting discussions which have led to this work. This work has been supported by funding from the Swiss National Science Foundation through Postdoc.Mobility grant P500PT-206737. abbrv § A SHORT SUMMARY OF BAIRE CATEGORY In this appendix, we recall the Baire category theorem from general topology. For a more thorough discussion of this result, and its connections to other topological concepts, we refer to the textbook <cit.>. Let X be a topological space. Let A ⊂ X be a subset. We recall that the interior of A is defined as the union of all open sets of X that are contained in A. The set A is said to have empty interior if A contains no open set of X other than the empty set. Equivalently, A is said to have empty interior if the complement of A is dense in X. We then have the following definition <cit.>: A space X is said to be a Baire space if the following condition holds: Given any countable collection {A_n} of closed sets of X each of which has empty interior in X, their union ⋃_n A_n also has empty interior in X. This definition can equivalently be stated in terms of open sets <cit.>: X is a Baire space if and only if given any countable collection {U_n} of open sets in X, each of which is dense in X, their intersection ⋂_n U_n is also dense in X. The following Baire category theorem <cit.> exposes many examples of Baire spaces encountered in applications: If X is a compact Hausdorff space or a complete metric space, then X is a Baire space. § PROOF OF THE QUANTIZATION LEMMA The goal of this appendix is to prove the FNO quantization lemma <ref>: * Let Φ_q be an output-averaged FNO with at most q tunable parameters. We first note that the depth of Φ_q can only take the values L ∈{1,…, q}. For each possible value of the depth, we now consider the maximally connected output-averaged FNO architecture Φ̂_q^(L) of depth L, obtained by setting κ, = q in each layer. This maximally connected FNO architecture has at most q̂^(L)≤ 5 (2κ)^d L ^2 ≤ 5· 2^d q^d+3, tunable parameters. 
For later reference, we note that Observation 1: Any output-averaged averaged FNO Φ_q(;θ) with depth L and at most q parameters can be represented by a specific choice of the weights of Φ̂_q^(L)(;θ̂). In fact, this only requires zero-padding θ to obtain θ̂. Our main goal is to suitably quantize Φ̂_q^(L), and then define a quantized neural operator architecture Φ̃_n_q with n_q bits which can represent all quantized Φ̂_q^(L) for L=1,…, q by specific setting of its bitwise-encoded parameters. It follows from <cit.>, with a minimal extension to allow for σ(0) 0, that the Lipschitz constant of the mapping, R_q^(L): { [-M_q, M_q]^q̂ → C(), θ ↦Φ̂_q^(L)(;θ), . and with [-M_q,M_q]^q̂ metrized by the ℓ^∞-norm, can be bounded by (R_q^(L)) ≤ (L+2)(2 M_q)^L+2( C + (2κ)^d/2). Here, C>0 is a constant depending only on d and . In particular, there exists a (larger) constant C = C(d,), such that (R_q^(L)) ≤ (Cq)^Cq = exp(C q log(Cq)). We quantize Φ̂_q^(L) for θ∈ [-M_q,M_q]^q̂ by subdividing each coordinate direction by equidistant points of separation ∼log(q)^-γ / exp(Cq log(Cq)). Denote the resulting discrete set of points by Θ^(L)_q⊂^q̂. We note that this subdivision requires at most, O( { M_qlog(q)^γexp(Cq log(Cq)) }^q̂ ) many quantization points, which can be encoded by O( q̂ log( M_q log(q)^γexp(Cq log(Cq)) ) ) many bits. Since q̂ = O(q^d+3), log(M_q log(q)^γ) = O(q) and log(exp(Cqlog(Cq))) = O(q^2), it follows that the number of required bits is O( q^d+6), i.e. log_2 |Θ^(L)_q| = O(q^d+6). The implied constant here is independent of L. In the following, we denote m := d+6. In particular, we conclude that there exists a constant C>0, independent of q, such that max_L=1,…, q | Θ^(L)_q | ≤ Cq^m. We also note that, by construction, for any θ∈ [-M_q,M_q]^q̂, there exists θ'∈Θ^(L)_q, such that ‖θ - θ' ‖_ℓ^∞≤log(q)^-γ/exp(Cq log(Cq)). It follows that for any θ∈ [-M_q, M_q]^q̂, there exists θ'∈Θ^(L)_q, such that ‖Φ̃^(L)_q(; θ) - Φ̃^(L)_q(;θ') ‖_C() ≤(R_q^(L)) ‖θ - θ' ‖_ℓ^∞ ≤exp(Cq log(Cq)) log(q)^-γ/exp(Cq log(Cq)) = log(q)^-γ. Thus, sup_θ∈ [-M_q,M_q]^q̂min_θ'∈Θ^(L)_q ‖Φ̃^(L)_q(; θ) - Φ̃^(L)_q(;θ') ‖_C()≤log(q)^-γ. Since |Θ^(L)_q|≤ Cq^m, any θ' ∈Θ^(L)_q can be identified with a unique bit-string in {0,1}^ℓ_q, where ℓ_q = ⌈ Cq^m⌉. Adding an additional number of O(log(q)) bits to encode the possible values of the depth parameter L∈{1,…, q}, we can now define a quantized neural operator Φ̃_n_q: L^2(D)×{0,1}^n_q→ encoded by n_q ∼log(q) + ℓ_q ∼ C q^m bits, in the following way: Given [θ] ∈{0,1}^n_q, we first read off the length parameter L from the first ⌈log_2 q⌉ bits. Removing these bits, the remaining ℓ_q bits uniquely identify θ' ∈Θ_q^(L), and we set Φ̃_n_q(; [θ]) := Φ^(L)_q(; θ'). Thus, Φ̃_n_q is a neural operator architecture with parameters encoded by n_q ≍ q^m bits. By our definition (<ref>), any neural operator belonging to the set Φ^(L)_q(;θ') L ∈{1,…, q}, θ' ∈Θ^(L)_q , can be represented exactly by suitable choice of [θ]∈{0,1}^n_q. And thus, by (<ref>), we have sup_L=1,…, qsup_θ∈ [-M_q,M_q]^q̂min_ [θ]∈{0,1}^n_q‖Φ̃^(L)_q(; θ) - Φ̃_n_q(;[θ]) ‖_C()≤log(q)^-γ. We finally note that any neural operator architecture Φ_q with at most q parameters is represented as Φ_q(;θ) = Φ̂_q^(L)(; θ̂) for suitably chosen θ̂ = θ̂(θ) (see Observation 1, above). In fact, this only involves zero-padding of the weights θ. In particular, if θ∈ [-M_q,M_q]^q, then θ̂∈ [-M_q,M_q]^q̂. From (<ref>), it follows that sup_θ∈ [-M_q,M_q]^qmin_ [θ]∈{0,1}^n_q‖Φ_q(; θ) - Φ̃_n_q(;[θ]) ‖_C()≤log(q)^-γ, as claimed. This concludes the proof.
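To make the coordinate-wise quantization in the proof concrete, the following minimal sketch rounds a parameter vector in [-M, M]^q̂ onto an equidistant grid of spacing δ and counts the bits needed to index it; the helper names are illustrative, and δ plays the role of log(q)^{-γ}/exp(Cq log(Cq)) used above.

import math

def quantize(theta, M, delta):
    # Round each coordinate of theta (assumed to lie in [-M, M]) onto an equidistant grid
    # of spacing delta and report the number of bits needed to index the result.
    levels = math.ceil(2 * M / delta) + 1                        # grid points per coordinate
    bits_per_coordinate = max(1, math.ceil(math.log2(levels)))
    indices = [min(levels - 1, max(0, round((t + M) / delta))) for t in theta]
    return indices, bits_per_coordinate * len(theta)

def dequantize(indices, M, delta):
    # Map grid indices back to parameter values; the coordinate-wise error is at most delta / 2.
    return [i * delta - M for i in indices]

With δ of the order log(q)^{-γ} divided by the Lipschitz bound exp(Cq log(Cq)) on the parameter-to-realization map, the reported bit count scales like q̂ log(M_q log(q)^γ exp(Cq log(Cq))) = O(q^{d+6}), in line with the estimate in the proof above.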
Learn it or Leave it: Module Composition and Pruning for Continual Learning
Mingyang Wang, Heike Adel, Lukas Lange, Jannik Strötgen, Hinrich Schütze
§ ABSTRACT In real-world environments, continual learning is essential for machine learning models, as they need to acquire new knowledge incrementally without forgetting what they have already learned. While pretrained language models have shown impressive capabilities on various static tasks, applying them to continual learning poses significant challenges, including avoiding catastrophic forgetting, facilitating knowledge transfer, and maintaining parameter efficiency. In this paper, we introduce , a novel lightweight continual learning method that addresses these challenges simultaneously. Unlike traditional approaches that continuously expand parameters for newly arriving tasks, integrates task representation-guided module composition with adaptive pruning, effectively balancing knowledge integration and computational overhead. Our evaluation across three continual learning benchmarks with up to 176 tasks shows that achieves state-of-the-art performance and improves parameter efficiency by up to three times, demonstrating its potential for practical applications where resource requirements are constrained. § INTRODUCTION Continual learning (CL) is a learning paradigm aiming at incrementally acquiring and integrating new knowledge over time without forgetting existing knowledge. This capability is essential for machine learning models to stay effective as they encounter dynamic and evolving real-world environments. While pretrained language models (PLMs) have demonstrated remarkable capabilities on various static tasks, adapting them for continual task learning remains challenging. In particular, there are three notable challenges for continual learning. (1) Avoiding catastrophic forgetting: The newly learned information should not disrupt and degrade previously acquired knowledge <cit.>. (2) Facilitating knowledge transfer: The knowledge from past tasks should be reused for efficient learning of new tasks. (3) Maintaining parameter efficiency: The language models need to stay lightweight and effective even if the continual learning sequence scales to hundreds of tasks. To mitigate catastrophic forgetting, a line of prior works adopt the idea of parameter isolation <cit.>, which allocates isolated parameters dedicated for each task to avoid inter-task interference. While parameter isolation typically does not allow knowledge transfer across tasks <cit.>, there are attempts to address both challenges of catastrophic forgetting and knowledge transfer at the same time, e.g., by progressively concatenating <cit.> or composing task-specific modules <cit.>. Despite their effectiveness in terms of task performance, parameter isolation methods do not scale well with the number of tasks. When the number of tasks in a continual learning sequence is growing into the hundreds, the progressive expansion of task-specific parameters leads to parameter inefficiency and significantly increases computational and storage costs.
In this paper, we address all three continual learning challenges simultaneously and introduce , a lightweight continual learning approach that leverages task representation-guided module composition and adaptive pruning. First, to avoid catastrophic forgetting, continually adds task-specific modules to PLMs for learning new tasks while keeping the modules frozen once the training on the respective tasks is finished. In addition, to enable knowledge transfer across tasks, allows the model to reuse existing knowledge via module composition. Finally, to keep the language model lightweight, adopts an adaptive pruning strategy by removing modules with redundant information and retaining only the most salient modules throughout the continual learning process. In our evaluation on three popular datasets as continual learning benchmarks with up to 176 tasks in the learning sequence, stands out by not only showing state-of-the-art performance but also outperforming prior algorithms in parameter efficiency by up to three times across benchmarks. To the best of our knowledge, this is the first paper that tackles the three challenges of continual learning simultaneously: avoids catastrophic forgetting, allows knowledge transfer and ensures parameter efficiency. Thus, proposes a sustainable way for continual learning, allowing models to remain lightweight and effective as they evolve with accumulating tasks. The code base for is available online.[https://github.com/boschresearch/MoCL-Pruning] § RELATED WORK §.§ Avoiding Catastrophic Forgetting in Continual Learning A major challenge in continual learning is known as catastrophic forgetting, where newly learned information disrupts and degrades previously acquired knowledge <cit.>. Existing approaches to overcome this issue can be broadly divided into three categories <cit.>: (1) Regularization-based methods explicitly add regularization terms to the loss function to restrict model updates and preserve existing knowledge <cit.>; (2) Rehearsal-based methods leverage a memory buffer to store real examples <cit.> or generated pseudo-examples of past tasks for future rehearsal to avoid catastrophic forgetting <cit.>; (3) Parameter isolation-based methods construct task-specific parameters to prevent inter-task interference by either dynamically expanding model capacity or isolating existing model weights <cit.>. Our method, , belongs to the parameter-isolation based category. We use task representation-guided module composition and adaptive pruning to effectively manage isolated parameters. §.§ Transferring Knowledge in Continual Learning Recent studies in continual learning demonstrate the effectiveness of parameter isolation methods in avoiding catastrophic forgetting <cit.>. However, naive parameter isolation methods do not allow knowledge transfer across tasks, which leads to inefficient learning as the model cannot leverage previously acquired knowledge to facilitate learning new tasks. To address this, yoon2017lifelong and zhu-etal-2022-continual attempt to first identify reusable modules and only add new parameters when necessary. ke-etal-2021-adapting and wang2022dualprompt introduce knowledge-sharing modules to facilitate knowledge transfer while maintaining task-specific parameters to prevent interference. razdaibiedina2022progressive progressively concatenate task-specific modules to incrementally build a composite model that leverages both new and existing knowledge. 
wang-etal-2024-rehearsal introduce a modular and compositional continual learning framework to compose the new module with existing ones based on task module matching. §.§ Parameter-Efficient Continual Learning With the ever-increasing number of parameters in PLMs, it becomes increasingly important to develop machine learning systems that are more scalable, practical, and resource-efficient. In the context of continual learning, this necessitates parameter-efficient approaches that can effectively integrate new knowledge without excessive computational and storage costs as the number of tasks in the continual learning sequence increases. Recent advancements in continual learning integrate parameter isolation with parameter-efficient fine-tuning (PEFT), i.e., they allocate task-specific PEFT modules for learning and inference <cit.>. Various PEFT techniques, such as adapter tuning <cit.>, prefix tuning <cit.>, and LoRA <cit.>, have been applied in continual learning. Although they reduce the number of training parameters to some extent by freezing the PLM and only updating the PEFT module parameters, it remains challenging to apply them to long-sequence benchmarks that consist of hundreds of tasks. The continuous expansion of task-specific modules leads to significant computational overhead as the number of tasks increases. Our approach builds on the idea of wang-etal-2024-rehearsal by utilizing task representations for module composition, ensuring that the model effectively reuses relevant knowledge from previous tasks. Beyond that, we introduce an adaptive pruning strategy to keep the language model lightweight and effective throughout the continual learning process, thus making it scalable for continual learning scenarios with long task sequences. § PROBLEM DEFINITION Continual learning focuses on addressing a series of tasks which arrive in a sequential order. The primary goal is to optimize the model’s average performance across all tasks after learning them sequentially. Formally, the sequence of tasks is denoted as {T_1, …, T_N}. Each task contains a set of input samples {(x^i_n, y^i_n)}. For the text classification tasks we study in this work, x^i_n is the input text, y^i_n is the ground-truth label, and n ∈{1, …, N} is the task identity. In this work, we focus on rehearsal-free continual learning, i.e., data from earlier tasks is not available when training later tasks. Therefore, our model does not suffer from the memory or privacy issues associated with rehearsal-based methods. We assume the task labels are provided during both training and testing, i.e., task-incremental continual learning <cit.>. However, can be adapted for class-incremental learning, where the task labels are not given during testing, with minor modifications following wang-etal-2024-rehearsal. We leave the exploration of other continual learning settings for future work. § METHOD In this section, we describe , our proposed CL approach for language models, as illustrated in Figure <ref>, which tackles catastrophic forgetting and enhances knowledge transfer with superior parameter efficiency at the same time. §.§ Continual Learning with PEFT We inherit the idea of parameter isolation with parameter-efficient fine-tuning (PEFT) introduced in prior work <cit.>, which allocates trainable PEFT parameters for each task while keeping other parameters frozen. 
We utilize prefix-tuning <cit.> as the PEFT module in consistency with prior works.[Other PEFT methods like Adapter <cit.> and LoRA <cit.> can also be combined with in general. We leave such exploration for future work.] For each task in the CL sequence, we add a set of trainable PEFT parameters, i.e., a task-specific module, to the pretrained language model (PLM) for downstream task fine-tuning. Instead of updating the whole model, only a small number of the PEFT parameters are optimized. Once training on one given task is completed, the corresponding PEFT module is frozen to preserve the task-specific knowledge in the subsequent training process, thus avoiding catastrophic forgetting. §.§ Task Representation-Guided Module Matching In contrast to completely isolating task-specific parameters during continual learning, which excludes knowledge transfer, we follow the idea of task module composition introduced in wang-etal-2024-rehearsal to facilitate knowledge transfer. To this end, we utilize task representations for task module matching, and consequently for composing old and new modules for learning. The module matching aims to determine the contribution of each existing module to learning the current task, i.e., to what extent previously learned modules can be reused for the current task. We introduce trainable feature vectors V ∈ℝ^N × D as task representations to capture the features of each task in the CL sequence.[Note that is agnostic to different types of task representations. In addition to the trainable feature vectors, other static task representations such as task embeddings or Gaussian task distributions can also be combined with . We analyze these options in Section <ref>.] We set the dimension of each task feature vector v ∈ℝ^D to the same value as the dimension of the input embeddings x_n ∈ℝ^D. Then, we calculate the cosine similarity between the input embeddings x_n and each feature vector v_i up to the current task as the matching score α_i = cos (x_n, v_i). Consequently, we get the module matching weights {α_0, α_1, ...} for module composition (details will be introduced in Section <ref>) to reuse existing knowledge. §.§ Module Composition with Adaptive Pruning When the CL learning sequence scales to dozens or hundreds of tasks, the need for efficiency increases. Continuously expanding the module pool to assign a PEFT module to each task, as done in prior works <cit.>, leads to large computational costs. In contrast, we employ an adaptive pruning strategy to make our approach scalable in scenarios with long task sequences. In particular, our pruning strategy aims at preserving only those modules that add new and valuable information to the set of already selected modules. Given a set of selected modules {P_0, …, P_m-1} from previous tasks and a new task T_n, (m-1 ≪ n), we initialize a trainable module P_m and add it temporarily to the model. For each instance[For simplicity, we refer to this as x_n in the following.] x_n^i of the current task T_n, we compute the matching weights {α_0, …, α_m} by matching x_n with all task feature vectors {v_0, …, v_m} from our current set of modules. Specifically, we calculate the cosine similarity between x_n and {v_0, …, v_m} as module matching weights α_0:m as detailed in Section <ref>. Then, we compose the new and old modules via a weighted sum: P'_m = ∑_k=0^mα_k P_k. Finally, the composed module P'_m is combined with the PLM, consisting of all the selected module components up to the current task. 
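To make the matching and composition step concrete, the following PyTorch-style sketch computes the matching weights α_k = cos(x_n, v_k) and the composed prefix P'_m = ∑_k α_k P_k; the embedding dimension, prefix shape and all names are illustrative assumptions rather than the authors' implementation.

import torch
import torch.nn.functional as F

def compose_modules(x_emb, task_vectors, prefix_modules):
    # x_emb: (D,) input embedding of the current instance
    # task_vectors: (m+1, D) task feature vectors v_0 .. v_m
    # prefix_modules: (m+1, prefix_len, hidden) prefix parameters P_0 .. P_m
    alpha = F.cosine_similarity(x_emb.unsqueeze(0), task_vectors, dim=-1)   # matching weights alpha_k
    composed = torch.einsum("k,kld->ld", alpha, prefix_modules)             # P'_m = sum_k alpha_k P_k
    return composed, alpha

# Illustrative shapes: D = 768, prefix length 16, hidden size 768, m+1 = 4 modules.
x_emb = torch.randn(768)
V = torch.randn(4, 768)           # task feature vectors; only the newest one is trainable in practice
P = torch.randn(4, 16, 768)       # prefix modules; earlier ones stay frozen
prefix, alpha = compose_modules(x_emb, V, P)   # prefix is prepended to the frozen PLM as in prefix-tuning

Only the newest prefix P_m and feature vector v_m receive gradients for the current task; all earlier modules remain frozen, as described above.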
After the training on T_n is finished (specifically, the training of the PEFT module P_m and the task feature vector v_m), we compare α_m, the matching weight of the new module P_m, with a threshold[The threshold is a tunable hyperparameter.] to decide whether to prune P_m or leave it in the set of existing modules. The intuition is that large matching weights indicate new and valuable information, while task modules with small matching weights do not contribute new information and, thus can, be discarded. §.§ Training and Inference The training objective for the n-th task in the continual learning sequence is to find the PEFT module P_m and the task feature vector v_m that minimize the cross-entropy loss of training examples, and, at the same time, maximize the cosine similarity between the task-specific feature vector v_m and the corresponding task input embeddings x_n: min_P_m, v_m - ∑_x_n, y_nlog p(y_n | x_n, P'_n, θ) - ∑_x_ncos (x_n, v_m) Here P'_n = ∑_k=1^mα_k P_k is the weighted summation of the new trainable task module and the existing frozen task modules as introduced in Section <ref>. During inference, performs per-instance task module matching and composition. The resulting module is combined with the PLM for inference. § EXPERIMENTAL SETUP In this section, we describe datasets, training details and baselines for our experiments. §.§ Datasets To evaluate the performance of our method and the effectiveness of its module pruning functionality, we experiment with three continual learning benchmarks, each with long task sequences. Following prior work <cit.>, we use MTL15, a multi-task continual learning benchmark comprising 15 classification tasks, and AfriSenti <cit.>, a multilingual sentiment analysis dataset that includes 12 low-resource African languages. Additionally, we include WikiAnn <cit.>, a multilingual named entity recognition (NER) dataset covering 176 languages; its long task sequence provides an adequate testbed for the pruning ability of our approach. We report macro-weighted F1 scores on the AfriSenti benchmark, accuracy on MTL15, and micro-weighted F1 scores on WikiAnn. On the MTL15 benchmark, we select 1000 random samples per class for training each task and hold out 500 samples per class for validation.[All design choices of are kept consistent with previous works <cit.> to ensure a fair comparison.] We explore three task orders for each benchmark, adopting the same multiple task orders as the prior work. Please refer to Appendix <ref> for more details about the benchmarks and task orders. §.§ Training Details We deploy three LMs for these datasets, in line with prior work <cit.>. We use encoder-based models for AfriSenti and WikiAnn NER (AfroXLM and BERT, respectively), and the encoder-decoder model T5 for MTL15. Prefix-tuning is used as the task-specific modules for all deployed models. All design choices are consistent with previous works to ensure a fair comparison. The reported results represent the average performance after training on all tasks consecutively and are averaged over three random seeds. The detailed experimental settings are provided in Appendix <ref>. 
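For concreteness, the per-task training objective and the threshold-based pruning decision described in the method section can be sketched as follows; averaging the per-instance matching weights and the default threshold value are assumptions (benchmark-specific thresholds are discussed in the ablation below).

import torch
import torch.nn.functional as F

def training_loss(logits, labels, x_emb_batch, v_m):
    # logits: (N, num_classes), labels: (N,), x_emb_batch: (N, D), v_m: (D,)
    # Cross-entropy of the composed model minus a term pulling v_m toward the task's inputs
    # (the paper writes a sum over instances; the batch mean is used here for scale).
    ce = F.cross_entropy(logits, labels)
    align = F.cosine_similarity(x_emb_batch, v_m.unsqueeze(0), dim=-1).mean()
    return ce - align

def keep_new_module(alpha_new, threshold=0.025):
    # Adaptive pruning after training on T_n: retain P_m only if its matching weight exceeds
    # the tunable threshold; aggregating alpha_m by its mean over training instances is an assumption.
    return float(alpha_new.mean()) >= threshold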
§.§ Baselines To compare different CL methods, we include the following baselines: (1) Sequential fine-tuning continuously fine-tunes the language model on the task sequence: (a) Seq FT (F) refers to all model parameters are updated (fully fine-tuning), while (b) Seq FT only fine-tunes the PEFT parameters; (2) Per-task FT trains a separate PEFT module for each task; and the parameter isolation-based methods (3) ProgPrompt <cit.> assigns task-specific parameters and progressively concatenates modules of all tasks to encourage knowledge transfer; (4) EPI <cit.> introduces a non-parametric task identification technique to select modules for task training and inference; (5) O-LoRA <cit.> learns tasks in different low-rank vector spaces that are kept orthogonal to each other to minimize interference; and (6) MoCL <cit.> introduces a modular and compositional framework that progressively expands task-specific modules and composes the new module with existing ones to facilitate knowledge transfer. A detailed description of these methods can be found in Appendix <ref>. § RESULTS AND ANALYSIS In this section, we present and analyze our experimental results. §.§ Overall Results Table <ref> shows the performance of and other baseline methods on the AfriSenti and WikiAnn benchmarks. consistently outperforms the baselines while significantly reducing the number of trainable parameters. Using only 50% and 30% of the trainable parameters compared to other CL methods on Afrisenti and Wikiann respectively, showcases an exceptional balance of efficiency and performance. In the MTL15 benchmark, as illustrated in Table <ref>, also shows superior performance. As mentioned in prior work <cit.>, tasks in this benchmark share lower similarity compared to AfriSenti and WikiAnn, resulting in weaker reusability of task modules. Therefore, we do not observe a significant drop in the number of trainable parameters here as seen in the other benchmarks. However, we still achieve a 25% reduction in parameter size while maintaining final performance. Overall, demonstrates its superiority in efficiently managing the continual learning process without the substantial parameter overhead. The competitive performance of across different benchmarks highlights its robust adaptability and scalability to the continual learning sequence up to 176 tasks long. §.§ Task Representation Comparison In this work, we adopt learnable task feature vectors as task representation, and based on these, we perform module composition and pruning. In Section <ref>, we demonstrate the effectiveness of this design choice. While this is not the only option for task representations, in this section, we experiment with two other types of task representation: (1) using Gaussian distributions to model the input embeddings of each task (w/ Gaussian) and (2) calculating the mean of the input embeddings of each task for task representations(w/ Embed mean). Table <ref> provides the results of using different task representation options for module composition and pruning. A significant performance drop occurs when using Gaussian distributions or the mean of task input embeddings as the task representation. In most cases, their performance is worse than the Per-Task FT baseline, indicating that using these task representations for module composition leads to performance degradation rather than beneficial knowledge transfer across tasks. 
We believe that this degradation is due to the fact that both of these task representations are static and are solely based on the input embeddings. In contrast, utilizes trainable task feature vectors, meaning the model can automatically learn to capture the salient task features necessary for effective module composition. Trainable task representations are a better choice because not all information in the input embedding is relevant for module composition. To effectively capture reusability between task modules, the model must focus on the salient features while ignoring irrelevant ones. Static task representations, which are purely based on input embeddings, fail to achieve this selective focus. §.§ Ablation Study: Varying the Training Epochs for Task Feature Vectors To substantiate our assumption introduced in Section <ref>, we additionally conduct ablation experiments on WikiAnn where we vary the training epochs for task feature vectors in . As illustrated in Figure <ref>, training the task feature vectors for different epochs shows a clear pattern: the model performance improves significantly with the initial increase of the number of training epochs. Beyond a certain point (epoch = 4), additional training does not yield further benefits and converges towards a performance plateau. This observation suggests that by allowing the model to adapt these vectors over several epochs, can more accurately identify and leverage the most relevant features for module composition. This underscores the critical importance of the trainable nature of task feature vectors in . In Figure <ref>, we visualize the task feature vectors at different training epochs on the WikiAnn dataset, which includes a total of 176 tasks. The colors represent two categories of task modules: those that are eventually discarded (blue) and those that are preserved (orange) through the learning process. Initially (training epoch = 0), the vectors are evenly distributed around the origin since they are uniformly initialized. As the training epochs increase, the task vectors spread out and become more distinct, suggesting that the model captures distinct features of tasks and utilizes them for module pruning. Notably, the feature vectors of preserved modules are spread across a large area, while the discarded modules form a dense cluster, indicating their redundancy. The embeddings of task feature vectors stabilize by epoch 5, indicating a convergence in the task representation learning process. These observed patterns demonstrate the effectiveness of our strategy of using trainable task representations for module composition and pruning, which helps in preserving only the most salient modules for continual learning. §.§ Ablation Study: Varying the Pruning Threshold In this section, we study the impact of using different thresholds on the performance of . As introduced in Section <ref>, we compare the matching weight of the newly initialized task module α_m with the pre-specified threshold α_ths, if α_m < α_ths, then we discard the newly learned module. We vary α_ths from 0 to 0.25 for the three benchmarks used in this work. The results are shown in Figure <ref>. The figure illustrates how varying the pruning threshold affects both the average performance and the parameter size across different benchmarks. For the model performance, we observe that the initial increase in the pruning threshold leads to a performance increase on all three benchmarks. This indicates that excluding the redundant modules benefits performance. 
As the threshold continues to increase, the average performance on AfriSenti and MTL15 remains relatively stable, while the performance on WikiAnn drops, possibly due to the loss of information in potentially useful modules. Additionally, it is worth noting that the performance of is consistently and significantly better than the Per-Task FT baseline, suggesting that achieves effective knowledge transfer at different pruning thresholds. Furthermore, for the parameter size, a significant reduction is observed as the threshold increases on all three benchmarks. This demonstrates the superiority of on parameter efficiency. We observe that the parameter size decreases more pronounced and faster on AfriSenti and WikiAnn, while it decreases less and more slowly on MTL15. We believe it is due to the characteristics of the benchmarks. As mentioned in Section <ref>, tasks in this benchmark share a lower similarity, therefore, most task modules are highly specialized to these distinct tasks and cannot be discarded. We choose different pruning thresholds for different benchmarks reported in Table <ref>. For each benchmark, we select the pruning threshold that best balances performance and parameter size to report the results in Table <ref>. Specifically, we use α_ths=0.025 for AfriSenti and WikiAnn, and α_ths=0.25 for MTL15. With these thresholds, achieves equally good performance with only 50%, 30%, and 75% of the number of trainable parameters compared to MoCL without pruning on these three benchmarks, respectively. § CONCLUSION In this paper, we introduce , a novel continual learning approach that addresses the core challenges of catastrophic forgetting, knowledge transfer, and parameter efficiency in continual learning. We utilize learnable task representations for module composition and adaptive pruning, maintaining a lightweight model while achieving state-of-the-art performance across various benchmarks. Notably, scales effectively to long continual learning sequences, handling up to 176 tasks without compromising performance. These experimental results showcase 's potential to enhance practical machine learning applications by effectively managing computational costs, thus providing a scalable and efficient solution for real-world scenarios where minimum resource requirements are crucial. § LIMITATIONS While demonstrates significant advancements in continual learning, our study has some limitations that should be addressed in future work. First, we only use the long sequence multilingual benchmark, i.e., WikiAnn with 176 tasks, in this work due to the lack of existing long sequence multi-task benchmarks. The absence of these benchmarks limits the evaluation of ’s performance across diverse multi-task scenarios. Building a long sequence multi-task benchmark for continual learning would be an interesting research direction, although it is beyond the scope of this work. Second, as we follow the evaluation setup from prior works, we do not include generative tasks for evaluation. Therefore, we may not capture the potential of in a wider range of continual learning challenges. Including generative tasks in future evaluations would provide a more comprehensive understanding of ’s capabilities. Ando2005,andrew2007scalable,rasooli-tetrault-2015 § APPENDIX §.§ Dataset Information Here we provide detailed information on the datasets used in this work. 
The MTL15 benchmark consists of 15 classification tasks, combining five datasets from the standard CL benchmark MTL5 (AG News, Amazon reviews, Yelp reviews, DBpedia, and Yahoo Answers) <cit.>, four tasks from the GLUE benchmark (MNLI, QQP, RTE, SST2) <cit.>, five tasks from the SuperGLUE benchmark (WiC, CB, COPA, MultiRC, BoolQ), and the IMDB movie reviews dataset <cit.>. Details of the MTL15 benchmark are provided in Table <ref>. Following wang-etal-2024-rehearsal, we use AfriSenti <cit.>, a multilingual sentiment analysis dataset covering 12 low-resource African languages, including Amharic (am), Algerian Arabic (dz), Hausa (ha), Igbo (ig), Kinyarwanda (kr), Moroccan Arabic (ma), Nigerian Pidgin (pcm), Mozambican Portuguese (pt), Swahili (sw), Xitsonga (ts), Twi (twi), and Yoruba (yo). Additionally, to further evaluate the module pruning capability of , we include WikiAnn, a multilingual named entity recognition (NER) dataset that covers 176 languages. The long task sequence in WikiAnn provides an adequate testbed for evaluating the pruning functionality of . Due to space constraints, we do not list the names of the 176 languages and their corresponding abbreviations. The specific language information is available at https://huggingface.co/datasets/wikiannhttps://huggingface.co/datasets/wikiann. We use different task orders for each dataset to evaluate the robustness of continual learning methods against changing task orders. For the MTL15 and AfriSenti benchmarks, we follow the task orders used in prior works, while for the WikiAnn benchmarks, we generate three random task orders for evaluation. The task orders used are summarized in Table <ref>. §.§ Experiment Details In this section, we provide the implementation details for the experiments and a detailed description of the baseline methods used in this work. §.§.§ Implementation Details We use the AdamW optimizer <cit.> for all experiments. We choose the same maximum sequence length and prefix length as prior work <cit.>. Table <ref> provides detailed hyperparameter choices for across different datasets. The training was performed on Nvidia A100 GPUs.[All experiments ran on a carbon-neutral GPU cluster.] §.§.§ Baseline Methods In Section <ref>, we evaluate and prior continual learning methods on different benchmark datasets. Here, we provide a more detailed description of the baseline methods used in this work. ProgPrompt <cit.>: A parameter isolation-based continual learning method that assigns task-specific parameters to avoid catastrophic forgetting. During continual learning, ProgPrompt progressively concatenates all task-specific modules to encourage forward transfer. EPI <cit.>: A parameter isolation-based method applicable to the class-incremental learning setting (CIL), where task identities are not given during inference. EPI introduces a non-parametric task identification module that identifies tasks during testing. Given reliable task identification, the CIL performance of EPI could be comparable to TIL, where the ground truth task identities are given during inference. O-LoRA <cit.>: A parameter isolation-based method that learns tasks in different low-rank vector spaces that are kept orthogonal to each other to minimize interference. It mitigates catastrophic forgetting by constraining the gradient update of the current task to be orthogonal to the gradient space of past tasks. However, the orthogonality of the gradient subspace for individual tasks also limits knowledge transfer between tasks. 
MoCL <cit.>: Introduces a modular and compositional continual learning framework to compose the new module with existing ones based on task module matching. This compositional strategy enables effective knowledge transfer by considering task interaction. As discussed in Section <ref>, we build on the idea of MoCL <cit.> by utilizing task representations for module composition, ensuring that the model effectively reuses relevant knowledge from previous tasks. Beyond that, we introduce an adaptive pruning strategy to keep the language model lightweight and effective throughout the continual learning process, making it scalable for continual learning scenarios with long task sequences. ccc>p10cm The different orders of task sequences used for continual learning experiments. Dataset Order Model Task Sequence 4c – continued from previous page Dataset Order Model Task Sequence 4rContinued on next page 3*AfriSenti 1 AfroXLMR am → dz → ha → ig → kr → ma → pcm → pt → sw → ts → twi → yo 2 AfroXLMR ma → pcm → kr → pt → ig → sw → ha → ts → dz → twi → am → yo 3 AfroXLMR am → dz → ha → ma → ig → kr → sw → ts → twi → yo → pcm → pt 3*MTL15 1 T5 mnli → cb → wic → copa → qqp → boolq → rte → imdb → yelp → amazon → sst2 → dbpedia → ag → multirc → yahoo 2 T5 multirc → boolq → wic → mnli → cb → copa → qqp → rte → imdb → sst2 → dbpedia → ag → yelp → amazon → yahoo 3 T5 yelp → amazon → mnli → cb → copa → qqp → rte → imdb → sst2 → dbpedia → ag → yahoo → multirc → boolq → wic 3*WikiAnn 1 BERT ga → fi → sco → bs → co → pnb → eu → vls → os → de → hy → mwl → ca → or → wa → rw → simple → tl → crh → lij → min → ko → scn → an → mk → hi → ug → ext → sl → sw → nap → et → wuu → uz → mzn → ast → jv → su → ilo → csb → cdo → tk → ckb → lv → ur → th → am → kn → pms → ba → tt → pl → vec → ru → cs → ne → bn → es → fy → fiu-vro → bo → mt → fr → mr → nn → bar → ang → no → fo → el → qu → fa → eml → kk → tr → pt → km → dv → hsb → rm → ta → fur → war → frr → ps → io → da → zh-yue → ms → cv → diq → mn → lb → cy → sa → ig → oc → hu → arc → ln → ku → hr → nds → az → ar → ce → lt → zea → it → zh-classical → be-x-old → mi → ia → is → la → sv → nl → gd → pa → xmf → ksh → zh-min-nan → lmo → tg → sh → eo → zh → te → he → vep → as → yi → cbk-zam → yo → ro → ace → id → jbo → nov → bg → map-bms → be → sr → sah → ml → my → vo → so → gu → br → gl → ka → li → pdc → ky → bat-smg → als → mg → szl → gn → ceb → vi → sq → mhr → ay → en → bh → uk → gan → sk → si → hak → af → ja → arz → sd 2 BERT wuu → cy → mwl → eu → gn → scn → ka → pdc → it → ro → pnb → ig → tl → sah → is → ga → ml → wa → vo → simple → hr → dv → mn → csb → sl → gl → fy → bn → tg → fr → th → vls → arz → zh-classical → ln → tr → su → min → si → ur → sr → et → eo → sh → li → fiu-vro → rw → no → mg → mr → oc → nap → yi → pa → lt → ug → co → tt → sv → uk → so → ext → ky → ru → kk → sa → la → el → hsb → be-x-old → bg → pt → bh → br → mt → ne → id → te → cv → fo → cdo → bs → lij → sw → he → ceb → hak → es → kn → mk → am → or → ms → az → als → my → ce → os → ca → tk → diq → zh → fi → jbo → mhr → ay → pms → rm → zea → en → zh-yue → sco → ang → bo → ar → ia → zh-min-nan → ckb → fa → crh → as → yo → szl → fur → hi → eml → mi → lb → de → bat-smg → uz → lv → nov → ast → cs → hy → sk → sq → be → xmf → af → ps → qu → da → ja → vep → ku → mzn → nl → vec → map-bms → ace → io → gu → bar → ilo → km → arc → cbk-zam → pl → ksh → war → gd → ba → lmo → gan → ko → an → frr → vi → hu → jv → sd → nds → nn → tas 3 BERT tl → sah → ckb → qu → az → ast → mr → eo → wa → zh-classical → fiu-vro → eu → nl → map-bms → id 
→ szl → mi → io → lt → war → my → bat-smg → jv → en → zh-min-nan → sh → su → frr → am → hu → hy → zh → ps → hi → tg → pl → nov → dv → min → jbo → diq → ksh → gn → vec → nds → lij → pdc → os → rw → als → sq → fi → da → sr → ru → uz → fr → scn → tt → bh → bn → mwl → et → hsb → kn → rm → nn → mhr → bg → sd → ko → la → ka → de → he → pt → cs → hr → tk → cy → co → or → csb → bar → mt → vo → oc → simple → ml → bs → km → sk → ang → br → xmf → ay → zea → ln → sco → ku → ilo → lv → mzn → zh-yue → gan → ta → gl → ca → hak → mg → ne → ur → cbk-zam → uk → mn → fy → ba → nap → kk → yo → tr → so → fo → ug → ace → fur → pa → lmo → it → be-x-old → sa → arc → ig → lb → ms → th → cv → arz → bo → el → eml → gd → pnb → cdo → ky → af → vls → be → ga → es → yi → si → ext → gu → mk → ja → is → no → ceb → ro → sv → ar → an → te → sl → sw → wuu → pms → fa → vi → as → ce → vep → li → ia → crh
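A task order such as those listed above can be materialised as a stream of per-language training tasks. The sketch below is illustrative only: it assumes the Hugging Face `datasets` library and the configuration names listed at the dataset URL above, and it elides the actual module training and pruning steps.

```python
from datasets import load_dataset

# First few tasks of the WikiAnn order 1 sequence (language configuration names).
task_order = ["ga", "fi", "sco", "bs", "co"]

for lang in task_order:
    ds = load_dataset("wikiann", lang)          # splits: train / validation / test
    train, val = ds["train"], ds["validation"]
    # ... fit the current task module on `train`, validate on `val`,
    # then keep or prune the module before moving to the next language.
    print(lang, len(train), len(val))
```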
http://arxiv.org/abs/2406.17723v1
20240625171108
A reduction of the "cycles plus $K_4$'s" problem
[ "Aseem Dalal", "Jessica McDonald", "Songling Shan" ]
math.CO
[ "math.CO", "05C15" ]
verbose, dvips, width=420pt, marginparsep=5pt, marginparwidth=0pt, top=70pt, headheight=12pt, headsep=20pt, footskip=30pt, bottom=60pt theoremTheorem[section] question[theorem]Question definitionDefinition proposition[theorem]Proposition lemma[theorem]Lemma example[theorem]Example corollary[theorem]Corollary conjecture[theorem]Conjecture observation[theorem]Observation ques[theorem]Question claimClaim proofc A reduction of the “cycles plus K_4's” problem Aseem Dalal[ Indian Institute of Technology Delhi, Department of Mathematics, Delhi India. Email: aseem.dalal@gmail.com.] Jessica McDonald[Auburn University, Department of Mathematics and Statistics, Auburn U.S.A. Email: mcdonald@auburn.edu. Supported in part by Simons Foundation Grant #845698 ] Songling Shan[Auburn University, Department of Mathematics and Statistics, Auburn U.S.A. Email: szs0398@auburn.edu. Supported in part by NSF grant DMS-2345869.] =================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================== § ABSTRACT Let H be a 2-regular graph and let G be obtained from H by gluing in vertex-disjoint copies of K_4. The “cycles plus K_4's” problem is to show that G is 4-colourable; this is a special case of the Strong Colouring Conjecture. In this paper we reduce the “cycles plus K_4's” problem to a specific 3-colourability problem. In the 3-colourability problem, vertex-disjoint triangles are glued (in a limited way) onto a disjoint union of triangles and paths of length at most 12, and we ask for 3-colourability of the resulting graph. § INTRODUCTION In this paper all graphs are assumed to be simple, unless explicitly stated otherwise. The reader is referred to <cit.> for standard terminology. Given vertex-disjoint graphs G_1, …, G_q, H with |⋃ _1≤ i≤ q V(G_i)|≤ |V(H)|, we glue G_1, …, G_q onto H by defining an injective function f:∪_1≤ i≤ q V(G_i)→ V(H), and then forming a new graph G with V(G) = V(H) and E(G) = E(H) ∪⋃_1≤ i ≤ q E_i, where E_i ={f(a)f(b): ab∈ E(G_i), f(a)f(b)∉E(H)}. The graph G is said to have been obtained from H by gluing in G_1, …, G_q. Consider the following question: Suppose that H is a graph with Δ(H)≤ 2, and suppose that G is obtained from H by gluing on vertex-disjoint triangles. When is χ(G)≤ 3? For Question <ref>, if H contains a C_4, then certainly a K_4 may be created in G which would make χ(G)≰3. Having C_4's in H is not the only thing that could go wrong however: Fleischner and Stiebitz <cit.> found an infinite family of examples where H does not contain any C_4 components, but where χ(G)≰3, answering a question of Erdős  <cit.>. The smallest of Fleishner and Stiebitz's examples has H=C_5∪ C_10; other known graphs H with Δ(H)=2 but which can yield negative answers to Question <ref> (i.e. there is a way to glue on vertex-disjoint triangles that gives a 4-chromatic graph) include H=C_3∪ C_6 (Öhman <cit.>) and H=C_7∪ K_1∪ K_1 (Sachs, see <cit.>); for much more, see Öhman. On the other hand, Fleishner and Stiebitz <cit.> (and later Sachs <cit.>) famously provided a positive answer to Question <ref>: they proved that if H is a single cycle then the G obtained is always 3-colourable. 
This “cycle plus triangles” theorem answered another question of Erdős, which also had origins in the work of Du, Hsu, and Hwang <cit.> (see <cit.> for more history). If we hope for a positive answer to Question <ref> for some H which has multiple components, it seems wise to avoid cycles of length at least four in H, and instead consider H to be a disjoint union of paths and triangles. Here there are two extremes, both of which always yield positive answers to Question <ref>. If H is a disjoint union of paths, then by joining all these paths into a cycle, we form a “cycle plus triangles” graph G which is 3-colourable (by Felishner and Stiebitz <cit.>). On the other hand if H is a disjoint union of triangles, then we can form a 3-regular auxiliary bipartite graph which describes the interaction between triangles of H and added triangles, and the 3-edge-colourability of this graph implies the 3-colourability of G. Things become more difficult when glued-in triangles join both path and triangle components in H. For G, H as in Question <ref>, we say that in G, two triangles T, T' in H are path-linked if there exists at least one glued-in triangle Y with T∩ Y, T'∩ Y≠∅, and the third vertex of Y is on some path in G of length at least two. If such gluings are strictly limited – so that in G, every H-triangle is path-linked to at most one other H-triangle, then we conjecture that G is 3-colourable. In fact, we also restrict our path components in H to length at most 12 to eliminate other potential problems; note that if all H-paths have length at most 2 then our auxiliary bipartite argument above again works to show 3-colourability. Let H be a graph which is a disjoint union of triangles and paths of length at most 12, and let G be obtained from H by gluing on vertex-disjoint triangles. Moreover, suppose that in G, every H-triangle is path-linked to at most one other H-triangle. Then G is 3-colourable. Given all its restrictions, we hope that Conjecture <ref> may be approachable. Our main result in this paper is that the truth of Conjecture <ref> would imply a seemingly very difficult conjecture, the “cycles plus K_4's problem” that we state here now. Let H be a graph with Δ(H)≤ 2 and let G be obtained from H by gluing on vertex-disjoint copies of K_4. Then χ(G)≤ 4. Note that Conjecture <ref>, as stated above, is not just “cycles plus K_4's”, but rather it allows paths in the base graph as well. However it is easy to see that it suffices to prove Conjecture <ref> for 2-regular H (other H being subgraphs of these); hence the nickname. Our main result of this paper is concretely the following. Conjecture <ref> implies Conjecture <ref>. While we reduce Conjecture <ref> to Conjecture <ref> in this paper, we do not show that they are equivalent. However, an equivalence cannot be so far away from the truth. If Conjecture <ref> holds, then by deleting any one of the four colour classes we get a 3-colourable graph G', and G' is obtained by gluing vertex-disjoint triangles onto a graph H' which is a disjoint union of paths and cycles. To give some context for Conjecture <ref>, we must mention the strong chromatic number of a graph H, denoted sχ(H), which was introduced independently by Alon <cit.> and Fellows <cit.> in the late 1980's. Skipping this definition, let us just say that the Strong Colouring Conjecture posits that sχ(H)≤ 2Δ(H); exact attribution of the conjecture is tricky, but the 1995 book of Jensen and Toft <cit.> (Section 4.14) has more on the early history of strong colouring. 
Support for the Strong Colouring Conjecture has been given by numerous papers, eg. Haxell <cit.><cit.>, by Aharoni, Berger and Ziv <cit.>, Axenovich and Martin <cit.>, Johansson, Johansson and Markström <cit.>, and Lo and Sanhueza-Matamala <cit.>. On the other hand, while the truth of the Strong Colouring Conjecture is trivial for Δ(H)=1, (where it asks essentially for the union of two matchings to be bipartite), the Δ(H)=c case is open for all constants c≥ 2. In the most glaring open case of Δ(H)=2, a result of Haxell <cit.> says that sχ(H)≤ 5, but the conjectured upper bound is 4. In fact, the Δ(H)=2 case of the Strong Colouring Conjecture is precisely Conjecture <ref> (see e.g. <cit.>). The only cases of Conjecture <ref> known to be true is when H has at most one odd cycle of length exceeding 3, or H has at most 3 triangles (McDonald and Puelo <cit.>). It seems that a new approach is needed to make a breakthrough on this problem, and our hope is that the reduction to Conjecture <ref> may help. This paper proceeds as follows. The following section discusses so-called independent sets of representatives (ISRs), and states two results of Haxell <cit.><cit.> that will be needed for our main proof. We also prove a result about combining two ISRs into one, which generalizes a prior theorem of the second author and Puleo <cit.>, and may be of independent interest. The third section of the paper contains our proof of Theorem <ref>. § INDEPENDENT SETS OF REPRESENTATIVES If H is a graph and let V_1, …, V_n be disjoint subsets of V(H). An independent set of representatives (ISR) of {V_1, …, V_n} in G is set R⊆ V(G) such that R is independent in G and R contains exactly one vertex from each set V_i. If R⊆ V(G) is independent in G and contains at most one vertex from each set V_i, then R is said to be a partial ISR; R is said to hit those V_i for which it contains a representative. Note that if R is a partial ISR of {V_1, …, V_n} in G then it is an ISR of {V_i: R hits V_i}. Haxell has proved the following. If H is a graph with Δ(H)≤ 2 and V_1, …, V_n are disjoint subsets of V(H) with each |V_i| ≥ 4, then (V_1, …, V_n) has an ISR. To state a second result of Haxell, we need a few additional definitions. To this end, a total dominating set in a graph G is a set of vertices X such that every vertex in G is adjacent to a vertex in X. In particular, every vertex of X must also have a neighbor in X. The total domination number of G, written (G), is the size of a smallest total dominating set; if G has isolated vertices, then by convention we set (G) = ∞. Note that by this definition, every (nonempty) graph has total domination number at least two. Given a graph H and disjoint subsets V_1, …, V_n ⊂ V(H), for each S ⊂ [n] we define a subgraph H_S by taking the subgraph induced by the vertex set ⋃_i ∈ SV_i and deleting all edges in H[V_i] for every i∈ S. Let H be a graph and let V_1, …, V_n be disjoint subsets of V(H). If, for all S ⊂ [n], we have (H_S) ≥ 2|S| - 1, then (V_1, …, V_n) has an ISR. It is worth noting we have stated Theorem <ref> as it appeared in <cit.> – the original formulation was in terms of hypergraphs. Haxell's Theorems <ref> and <ref> are both about the existence of a single ISR. We now prove a theorem about combining two ISRs into one. To this end, let = {X_1, …, X_p} and = {Y_1, …, Y_q} be two collections of pairwise disjoint subsets of V(G), and suppose that R_, R_ are ISRs of ,, respectively, in G. 
If G[R_∪ R_] is an independent set of vertices, then we view this union R_∪ R_ as the successful combination of two ISRs into one, since it is simultaneously an ISR of both and . In fact, we would be just as happy if we could find R⊆ R_∪ R_ that was simultaneously an ISR of both and . This won't always be possible, but we will look to delete vertices from R_∪ R_ so that the resulting R is independent and is somehow close to hitting all the sets in both and . We say that an edge e∈ G[R_∪ R_] is a -edge (-edge) if both endpoints of e are in X_i (Y_i) for some X_i∈ (Y_i∈); note that it is possible for an edge to be both a -edge and a -edge. We define E_ to be all those edges of G[R_∪ R_] that are neither -edges nor -edges. Let G be a graph. Suppose that R_ is an ISR of in G and R_ is an ISR of in G. For all X∈, denote by v_X the representative of X in R_. Then G has an independent set R ⊆ R_∪ R_ that is an ISR of and such that for every X ∈: (a) if v_X is not incident to any E_-edges, then R hits X, and; (b) if v_X is incident to at least one E_-edge, then R hits X ∪{w} for some E_-edge wv_X. Before we proceed to the proof of Theorem <ref>, it may be remarked that if E_ = ∅, then Theorem <ref> provides an independent set R which is an ISR of both and . In this way Theorem <ref> is a generalization of Lemma 3.3 of the second author and Puleo in <cit.> (their hypothesis that (, ) is an “admissible-pair” in G implies that E_=∅). Let us now prove Theorem <ref>. Initially, let R_0 = R_∪ R_. The set R_0 clearly hits every X_i ∈ and Y_j ∈ in G, but as there may be edges between R_ and R_, the set R_0 may not be independent. We will describe an algorithm for iteratively deleting vertices from R_0 in order to obtain an independent subset of R_0 which still hits every Y∈ and also meets conditions (a) and (b) for each X∈. First however, let us prove the following claim. Every vertex of R_ is incident to at most one -edge, and every vertex of R_ is incident to at most one -edge. Suppose that u ∈ R_ and that uv_1, uv_2 are two different -edges incident to u. Then v_1, v_2 ∈ R_ with {u, v_1}⊆ Y_1 and {u, v_2}⊆ Y_2 for some Y_1, Y_2 ∈. Since the sets in are pairwise vertex-disjoint, this implies that Y_1 = Y_2. But since v_1, v_2∈ R_, we must have Y_1≠ Y_2, contradiction. The same argument, interchanging the roles of and , proves the claim about -edges. Our algorithm defines a sequence of vertex sets R_0, R_1, … starting with R_0 = R_∪ R_. Given some set R_i, we either produce a new set R_i+1 and proceed to the next round, or produce the final set R, via the following algorithm. Step 1. If there is a vertex v∈ R_ which is isolated in G[R_i]∖ E_ but is incident to least one E_-edge in G[R_i], then: form R_i+1 from R_i by deleting all vertices that are adjacent to v in G[R_i], and then go back to the start of Step 1. Step 2. If there is a vertex v∈ R_ which has degree 1 in G[R_i]∖ E_, and this one edge is a -edge, then: form R_i+1 from R_i by deleting all vertices that are adjacent to v in G[R_i], and then go back to the start of Step 1. Step 3. If there is a vertex v∈ R_ which has degree 1 in G[R_i], and this one edge is a -edge, then: form R_i+1 from R_i by deleting the one vertex in R_i that is adjacent to v in G[R_i], and then go back to the start of Step 1. Step 4. Otherwise, obtain R from R_i by deleting every vertex of R_∩ R_i that has positive degree in G[R_i], and then terminate. 
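For readers who prefer pseudocode, the deletion procedure above can be written out as follows. This sketch is purely illustrative and not part of the proof; it assumes G is a networkx graph, X_sets and Y_sets are the two collections of pairwise disjoint vertex sets, and R_X, R_Y are the two given ISRs.

```python
import networkx as nx

def edge_labels(u, v, X_sets, Y_sets):
    """'X' if both endpoints lie in one X_i, 'Y' if both lie in one Y_j (possibly both);
    an empty set of labels marks an E_XY-edge."""
    labels = set()
    if any(u in S and v in S for S in X_sets):
        labels.add("X")
    if any(u in S and v in S for S in Y_sets):
        labels.add("Y")
    return labels

def combine_isrs(G, X_sets, Y_sets, R_X, R_Y):
    R_X, R_Y = set(R_X), set(R_Y)
    R = R_X | R_Y                                          # R_0
    while True:
        H = G.subgraph(R)
        lab = {frozenset(e): edge_labels(*e, X_sets, Y_sets) for e in H.edges()}

        def labelled_nbrs(v):                              # neighbours joined by an X- or Y-edge
            return [w for w in H.neighbors(v) if lab[frozenset((v, w))]]

        to_delete = None
        # Step 1: v in R_Y isolated in G[R_i] minus E_XY, yet incident to an E_XY-edge.
        for v in R_Y & R:
            if not labelled_nbrs(v) and any(not lab[frozenset((v, w))] for w in H.neighbors(v)):
                to_delete = set(H.neighbors(v))
                break
        # Step 2: v in R_Y whose single non-E_XY edge is a Y-edge.
        if to_delete is None:
            for v in R_Y & R:
                nb = labelled_nbrs(v)
                if len(nb) == 1 and "Y" in lab[frozenset((v, nb[0]))]:
                    to_delete = set(H.neighbors(v))
                    break
        # Step 3: v in R_X of degree 1 whose single incident edge is an X-edge.
        if to_delete is None:
            for v in R_X & R:
                nb = list(H.neighbors(v))
                if len(nb) == 1 and "X" in lab[frozenset((v, nb[0]))]:
                    to_delete = {nb[0]}
                    break
        # Step 4: no dangerous vertex remains; drop R_X-vertices that still have a neighbour.
        if to_delete is None:
            return {v for v in R if v not in R_X or H.degree(v) == 0}
        R = R - to_delete

# Tiny check on a 4-cycle with X = {{0,1}}, Y = {{2,3}}, R_X = {0}, R_Y = {3}: returns {3},
# which hits Y and hits X ∪ {3} via the E_XY-edge 0-3, as the theorem guarantees.
print(combine_isrs(nx.cycle_graph(4), [{0, 1}], [{2, 3}], {0}, {3}))
```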
We call any vertex v found in Steps 1–3 above a dangerous vertex (for R_i), and only reach Step 4 when R_i has no dangerous vertices. Note that vertices which were not initially dangerous for R_0 may become dangerous for some later R_i as their neighbors are deleted. However, once a dangerous vertex v is found in one of Steps 1–3, all of v's neighbours in G[R_i] are deleted, and hence v will be isolated in G[R_j] for all j≥ i, and will remain until the end of the algorithm and be a member of our terminal R. The algorithm always terminates, since |R_i+1| < |R_i| whenever R_i has a dangerous vertex. Moreover, the terminal set R is always independent due to Step 4. It remains to show that R hits every set Y ∈ and also meets conditions (a) and (b) for each X∈. Consider any set Y ∈. Let w be the representative of Y in R_. If w ∈ R, then R hits Y and we are done. Therefore, we suppose that w was deleted by the algorithm. If w∈ R_∩ R_ then w is isolated in G[R_0] and never gets deleted. So w∈ R_∖ R_ and hence must have been deleted in Step 3 or Step 4. First suppose that w was deleted in Step 3 due to v being a dangerous vertex for R_i. Then, based on our earlier comments, v ∈ R. Since the deletion of w happened in Step 3, we know that vw is a -edge, so v,w∈ Y' for some Y'∈. But the sets in are disjoint and w∈ Y, so in fact Y'=Y, and v ensures that R hits Y. Now assume that w was deleted in Step 4 after determining that R_i has no dangerous vertices. We know that w is not isolated in G[R_i], since otherwise it would not have been deleted in Step 4. So since w is not dangerous for R_i according to Step 1, it must be incident to at least one -edge or -edge in G[R_i]. In fact, since w is not dangerous for R_i according to Step 2, and given Claim <ref>, w must have either degree one or two in G[R_i]; in the former case its incident edge is a -edge and in the latter case it is incident to both a -edge and a -edge (which are distinct). In either case we know that there is a -edge incident to w in G[R_i], say wu. Since w∈ Y we get that u ∈ Y as well. Since w is the lone representative for Y in R_ (or since u∼ w and R_ is independent) we know that u∉R_, so u is not deleted in this Step 4. But since this is the very last step of our algorithm, no subsequent step could have deleted u either, so u ∈ R at the end. Hence R hits Y. Now consider any X ∈. If v_X ∈ R, then R hits X (and thus hits X ∪{u} for every E_-edge uv_X in G) and we are done. If v_X ∈ R_∩ R_ then v_X is isolated in G[R_0] and never gets deleted. Therefore, we suppose v_X was deleted by the algorithm and v_X ∈ R_∖ R_. Hence, v_X must have been deleted in Step 1 or Step 2. First assume that v_X was deleted in Step 1 due to w being a dangerous vertex for R_i. Then, based on our earlier comments, w ∈ R. But then v_X,w are joined by an E_-edge and R hits X ∪{w}. Now we may assume that v_X was deleted in Step 2 due to w being a dangerous vertex for R_i. Then, again based on our earlier comments, w ∈ R. We know that either v_Xw is an -edge or v_Xw is an E_-edge. In the latter case, R hits X ∪{w}. In the former case, w∈ X and so R hits X. Either way, conditions (a) and (b) are satisfied. § PROOF OF THEOREM <REF> Within our proof of Theorem <ref>, we will find occasion to use the following classic result of Lovász <cit.> (see also Theorem 20 in <cit.>). Let d, D be a non-negative integers and let k=⌊Dd+1⌋+1. 
If G is a graph with maximum degree D, then V(G) can be partitioned onto k sets V_1, … V_k such that G[V_i] has maximum degree at most d for all i∈{1, 2, …, k}. Let us now proceed with our main proof. (Theorem <ref>) Let H be a graph with Δ(H)≤ 2, and let G be obtained from H by gluing on vertex-disjoint copies of K_4. We show that, under the assumption that Conjecture <ref> is true, χ(G)≤ 4. As previosuly discussed, we may assume that H is 2-regular. Let X_1, …, X_p be the vertex sets of the components of H and let Y_1, …, Y_q be the vertex sets of the added copies of K_4. Let = {X_1, …, X_p} and let = {Y_1, …, Y_q}. Let ⊆ correspond to those components of H which are triangles, with t=||. For any ⊆ define G^ to be the the graph with vertex set and where two vertices are joined by an edge if the corresponding X_i, X_j have the property: there exists (at least one) Y∈ with Y∩ X_i, Y∩ X_j , Y∩ X_k≠∅ for some X_k∈ corresponding to a cycle of length at least four. Choose such that: (M1) ⊆; (M2) Δ(G^)≤ 1; (M3) || is maximum, subject to (M1) and (M2), and; (M4) |E(G^)| is minimum, subject to (M1), (M2), (M3). We can prove the following about the size of . || ≥t4. Let T∈ and consider its degree in G^. Suppose v∈ T and v∈ Y∈. If Y contributes to the degree of T in G^, then at least one of the four vertices in Y must be a member of a non-triangle cycle in H. But in this case, Y contains vertices from at most two different members of , aside from T, and hence Y contributes at most two to the degree of T in G^. Since T has size three, this means that T has degree at most 6 in G^. Applying Theorem <ref> to the graph G^ with D=6 and d=1 gives a partition of V(G^) into four parts, each of which induces a subgraph of maximum degree at most one. If we take the subset of corresponding to the largest of these parts, then it has at least t/4 elements. We consider now the long cycles in H, which we intend to break into short pieces by deleting some vertices. In particular, we let L={i: X_i∈, |X_i|≥ 15}. For each i∈ L, if X_i consists of the vertex set of the cycle (x_1, x_2, …, x_l_i), we let p_i = ⌊l_i5⌋ and define X_i^* = {x_5(j)-4: 1≤ j≤ p_i}. We then let X_i^1, …, X_i^p_i be the vertex sets of the p_i cycle-segments created by deleting X_i^*, namely X_i^j = {x_5j-3, …, x_5j} for 1≤ j ≤ p_i-1 and X_i^p_i = {x_5p_i-3, …, x_l_i}. Note that |X_i^j| = 4 for 1 ≤ j ≤ p_i-1 and 4 ≤ |X_i^p_i|≤ 8. Let G' be the graph formed from G by deleting all the vertices in all the sets X_i^*, that is, G'= G ∖(⋃_i∈ L X_i^*). We obtain ' from by replacing each X_i, i∈ L, with the p_i sets X_i^1, … X_i^p_i. Let ' = {X'_1, X'_2, …, X'_p'}, noting that p'≥ p=||. Note also that we still have ⊆', since triangles are not affected by the deletion process. The goal in this next section of our proof will be to find an ISR of '∖ in the graph G'. Since we are unconcerned with hitting the triangle parts, it will serve us to form an auxiliary graph G” by taking the disjoint union of G' along with m= || copies of K_m. For all X'_i ∉, define X”_i=X_i' and otherwise define X”_i to be X_i' together with one vertex from each copy of K_m, chosen so that X_1”, …, X_p'” are disjoint, and let ” be this last collection of sets. We will aim, through the next two claims, to get an ISR of ” in G” via Theorem <ref>. When restricted to G', such an ISR would contain a representative from each set in ' except possibly those in . So we would indeed be able to get an ISR of '∖ in the graph G', as we have said we want. 
In order to apply Theorem <ref> (to get an ISR of ” in G”), we start by letting S ⊂ [p'], with the set of X”_i ∈” corresponding to S. Let t_s=|∩|. We consider the graph G”_S defined with respect to X”_1, …, X”_p' (as discussed prior to the statement of Theorem <ref>). Note that the set of edges removed from G” to make G”_S are exactly the same as the set of edges removed from G' to make G'_S, since each X”_i is obtained from X'_i by adding either an independent set or nothing. So G”_S is just the disjoint union of G'_S and m copies of K_|∩|; in particular note that G”_S=G'_S when ∩=∅. Let (G'_S) be the number of components of G'_S. Then in general, (G'_S)≥14|V(G'_S)|. Moreover, if ∩ =∅, then (G'_S)≥14(|V(G'_S)| + t_s). Since all edges of H are removed when forming G'_S, each component of G'_S has size at most four, and we immediately get the 14|V(G'_S)| bound. Now suppose that ∩ =∅. Let T∈∩. Then T∉, by assumption. We claim that in G^ (note that this graph is formed prior to any deletions or breaking of long cycles), there are at least two triangles in that are each in some common glued K_4 with T. Certainly we cannot have no edges, since then = ∪{T} has || > ||, violating (M3). So suppose for a contradiction that there is exactly one such edge in G^, say from T to T̃∈. We know that T̃ has at most one edge in G^ joining it to other triangles in . If it actually has no such edges, then again = ∪{T} satisfies (M1) and (M2) but || > || violating (M3). So T̃ must have exactly one edge in G^ joining it to other triangles in ; now = (∖{T̃}) ∪{T} satisfies (M1), (M2), (M3), but |E(G^)|<|E(G^)|, violating (M4). So indeed, there are at least two edges from T to in G^. Let E_T be such a pair of edges. Either E_T comes from one Y∈ containing vertices from both T and two different triangles in , or it comes from two different Y^1, Y^2∈, both of which contain a vertex from T and a triangle in . In all cases, by the definition of G^, each of Y, Y^1, Y^2 must contain at least one vertex from a cycle of H of size at least four; this cycle may or may not be in . Partition ∩ into _1, _2 so that for a given T∈∩, T∈_1 if E_T comes from one Y∈, and T∈_2 if E_T comes from two Y^1, Y^2∈. If T∈_1, the vertex in T∩ Y is in a component of size at most 2 in G_S (since ∩ =∅). If T∈_2, the two vertices in T∩ Y', T∩ Y” are both in components of size at most 3 in G_S (since ∩ =∅). However it could be that the third vertex in such a component (other than the vertex in T and the vertex in the longer cycle) is another triangle in _2. Overall, for every triangle in _1 we may count one component of G_S of size at most three (in fact size at most two), and for every triangle in _2 we may count one component of size at most three. In fact, all these vertices in triangles still exist in G'_S (since we only delete vertices from long cycles), so the previous sentence remains true if we replace G_S with G'_S. Since we already know that all components in G'_S have size at most 4, we get: (G'_S) ≥ 14(|V(G'_S)|-2|_1|-3|S_2|)+|S_1|+|S_2| = 14|V(G'_S)|+12|S_1|+14|S_2| = 14|V(G'_S)|+12|S_1|+14|S_2| ≥ 14|V(G'_S)|+14(|S_1|+|S_2|)=14(|V(G'_S)|+t_s). We will now use Claim <ref> as part of our proof of the following. (G”_S)≥ 2|S|. We proceed by cases according to |∩|. If this value is one, then G”_S contains isolates, and the result follows trivially. Suppose first that |∩| ≥ 2. 
Then (K_|∩|)=2, and using the result of Claim <ref>, we get (G”_S) = (G'_S) + m·(K_|∩|)=(G'_S)+ 2m ≥(G'_S)+ t2 Since every component of G'_S has total domination number at least 2, by the first bound in Claim <ref>, we get (G'_S) ≥ 2(|V(G'_S)|/4)=12|V(G'_S)|. On the other hand, all X'_i ∈' have at least three vertices, with the only parts of size three corresponding to triangles in H (recall that our broken cycle pieces have at least four vertices). It is possible that all t of the triangle parts are included in , but even still, |V(G'_S)| ≥ 4|S| - t. Combining (<ref>), (<ref>), and (<ref>), we get (G”_S) ≥12(4|S|-t)+ t2=2|S|. We may now assume that ∩ =∅. Then (G”_S)=(G'_S). Since every component of G'_S has total domination number at least 2, and by the second bound in Claim <ref>, we get (G'_S) ≥ 2(14(|V(G'_S)|+t_s))=12|V(G'_S)|+ t_s2. We know that exactly t_s triangle parts are included in , so we get |V(G'_S)| ≥ 4|S| - t_s. Combining (<ref>) and (<ref>), we get (G”_S)=(G'_S)≥12(4|S|-t_s)+ t_s2=2|S|. By Claim <ref>, we know that (G”_S) ≥ 2|S|-1, so we may finally apply Theorem <ref> to get an ISR of ” in G”. As previously discussed, this in turn allows us to get an ISR of '∖ in G', which we have said was our goal for this section of the proof. Let ='∖, and call this last ISR R_. In fact, R_ is also an ISR of in G, since no edges are added between vertices of R_ when moving back to G from G'. Since Δ(H)≤ 2 and |Y_i|≥ 4 for all i, Theorem <ref> guarantees that has an ISR in H, say R_. Since R_ is an ISR of in G, and the R_ is an ISR of in G, we can now apply Theorem <ref>. The result is a set R that is independent in G, that is an ISR of , and that hits ⋃_v ∈ V(C) X_v for each component C of the graph _. Since R hits every set in , G-R is obtained from some graph H'⊆ H by gluing on triangles. H' is a disjoint union of triangles and paths of length at most 12. Consider a cycle in H of length at most 14 not belonging to , say represented by X∈. Such a cycle is unaltered when moving from G, to G', ' so X∈' and since X ∉, X∈. Suppose v is the representative of X in R_. Consider _. Since v is the representative of X in , X_v = X. Now any vertex w ∈ R_ adjacent to v in G either belongs to X (implying that vw is an -edge) or both w, v ∈ Y for some Y ∈ (implying that vw is an -edge). Therefore, by definition, there is no E_ edge incident to v. So by Theorem <ref>, R hits X. This means that in G-R, all that remains of the cycle represented by X in H (i.e. its contribution to H') is a subgraph of a path with at most 13 vertices (i.e. a path of length at most 12). Consider now a cycle (x_1, x_2, …, x_ℓ_i) in H where ℓ_i ≥ 15, say represented by X_i∈. Suppose, for a contradiction, that that there exists a path segment P of ℓ≥ 14 vertices (so with length at least 13) which is a subgraph of this cycle and such that V(P)∩ R = ∅; without loss of generality suppose that P=(x_1, …, x_ℓ) We know that |X_i^j| = 4 for 1 ≤ j ≤ p_i-1 and 4 ≤ |X_i^p_i| ≤ 8. Moreover, any two X_i^js are separated (along the path on cycle X_i) by exactly one vertex and that belongs to X_i^*. Now since ℓ > 8+1+4=13, P contains a path segment a X_i^jb such that a, b ∈ X_i^* and X_i^j ∈'. We know that R_ uniquely hits X_i^j. Let w_j be this representative for X_i^j, that is, w_j = X_i^j∩ R_. If there is no E_-edge incident to w_j, then by Theorem <ref> R hits X_i^j, contradicting our assumption that V(P) ∩ R = ∅. Therefore, there must be at least one E_-edge incident to w_j; let v be a second endpoint of such an edge. 
Note that v ∈{a,b} because if not then vw_j is either an -edge or a -edge. So, either v = a and w_j is the lowest-indexed member of X_i^j, or v = b and w_j is the highest-indexed member of X_i^j. By symmetry, we can assume that v = a. There cannot be any other E_-edges incident to w_j, since |X_i^j|≥ 4 means that w_j is certainly not followed by b on P. Then by Theorem <ref>, R hits X_i^j∪{a}⊆ V(P), contradiction. Consider now the graph H' in the context of Claim <ref>. Note that every triangle in H' is from . By condition (M2), we know that Δ(G^)≤ 1. In H' this means that if T_1 is a triangle in H', then there is at most one other triangle T_2 in H' such that they are both joined in G-R by some added K_3, say Y', whose third vertex is from some cycle of length at least four in H. In G-R, a pair of triangles in H' are path-linked iff there is some added K_3 containing vertices from both triangles as well as from a path in H' of length at least two. Since paths in H' of length at least two were all part of a cycle of length at least four in H (paths in H' coming from triangles in H have length one), the above paragraph tells us that in G-R every H'-triangle is path-linked to at most one other H'-triangle. So the truth of Conjecture <ref> would give a 3-colouring of G-R, and hence a 4-colouring of G. amsplain
http://arxiv.org/abs/2406.17663v1
20240625155215
LLM-ARC: Enhancing LLMs with an Automated Reasoning Critic
[ "Aditya Kalyanpur", "Kailash Saravanakumar", "Victor Barres", "Jennifer Chu-Carroll", "David Melville", "David Ferrucci" ]
cs.CL
[ "cs.CL", "cs.AI", "cs.LO" ]
[ Hans Werner Schumacher ========================== § ABSTRACT We introduce LLM-ARC, a neuro-symbolic framework designed to enhance the logical reasoning capabilities of Large Language Models (LLMs), by combining them with an Automated Reasoning Critic (ARC). LLM-ARC employs an Actor-Critic method where the LLM Actor generates declarative logic programs along with tests for semantic correctness, while the Automated Reasoning Critic evaluates the code, runs the tests and provides feedback on test failures for iterative refinement. Implemented using Answer Set Programming (ASP), LLM-ARC achieves a new state-of-the-art accuracy of 88.32% on the FOLIO benchmark which tests complex logical reasoning capabilities. Our experiments demonstrate significant improvements over LLM-only baselines, highlighting the importance of logic test generation and iterative self-refinement. We achieve our best result using a fully automated self-supervised training loop where the Actor is trained on end-to-end dialog traces with Critic feedback. We discuss potential enhancements and provide a detailed error analysis, showcasing the robustness and efficacy of LLM-ARC for complex natural language reasoning tasks. § INTRODUCTION 2 Given their impressive language understanding capability, Large Language Models (LLMs) are being used to develop a wide variety of Natural Language applications. For certain classes of applications that require a high degree of accuracy and reliability (e.g., enterprise applications in the medical, legal or finance domain), LLMs are often combined with external tools and solvers in a hybrid architecture <cit.>. We believe this is the right approach, especially to tackle problems where precise logical reasoning, planning or constraint optimization is required, as LLMs are known to struggle for this class of problems <cit.>. In this work, we focus on logical reasoning problems expressed in natural language, for which there has been a growing interest in developing neuro-symbolic architectures <cit.>. These architectures combine the power of LLMs for generating (declarative) code and filling in missing background (commonsense) knowledge, with the accuracy of automated Symbolic Reasoning systems to do precise logical reasoning. This design addresses the limitations of either technology when used independently: the LLMs' inability to do accurate and consistent reasoning based on the underlying domain logic, and the Symbolic Reasoner's inability to work with unstructured data, and explicitly encode common-sense knowledge to get the desired inferences. The former issue in symbolic systems is the well-known “knowledge acquisition" problem, while the latter issue typically leads to their brittleness. Building on our previous neuro-symbolic work <cit.>, we develop a new framework based on the Actor-Critic <cit.> model, where the Actor generates declarative code, crucially with tests to verify the semantic correctness of the code (i.e. the logic program correctly captures the modeler's intent), and the Critic runs the code and tests, and gives feedback with detailed explanations to the Actor if the code does not compile or some tests fail. When this happens, the Actor re-generates the code/tests based on the feedback, and the process is repeated with the Critic evaluating the results until all tests pass, or we reach a max-iteration limit. We use an LLM as the Actor and an Automated Reasoning engine as the Critic, and refer to this neuro-symbolic system as LLM-ARC. 
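In outline, this interaction can be sketched as the following loop. The names `llm_actor` and `asp_critic`, and the fields of the Critic report, are illustrative placeholders rather than the actual implementation; the prompts and Critic outputs are detailed later in the paper and in the Appendix.

```python
def llm_arc(problem, llm_actor, asp_critic, max_iterations=4):
    """One problem -> ASP program + tests -> solver verdict, with self-correction."""
    feedback = None
    report = {}
    for _ in range(max_iterations):
        program, tests = llm_actor(problem, feedback)    # Actor: declarative code plus semantic tests
        report = asp_critic(program, tests)              # Critic: compile, run tests, explain failures
        if report["compiled"] and report["all_tests_passed"]:
            break
        feedback = report["explanation"]                 # compiler errors or failing-test proofs
    return report["answer"]                              # e.g. True / False / Unknown for a conclusion
```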
Figure <ref> shows an implementation of LLM-ARC based on Answer Set Programming (ASP)<cit.>. In general, this design can apply to any LLM-code execution engine (replacing the automated reasoner with the corresponding code compiler/interpreter), though here we focus on declarative problem solving. Note that the system as designed above is not guaranteed to produce perfectly accurate results. This is because even if the code compiles without issues and all the generated tests pass, there is no guarantee that the test conditions correctly and completely capture the intended semantics, or that the tests pass for the right reason (e.g. the system could derive a required inference for a test using an incorrectly intended logical proof). We discuss this issue in Section <ref> and suggest a future enhancement using a separate Critic trained via human-feedback to evaluate the test criteria and reasoner results. To evaluate our LLM-ARC system, we run experiments on the FOLIO benchmark <cit.>. FOLIO is a human-annotated, logically complex and diverse dataset for reasoning in natural language. We use the latest version of FOLIO (v2) which contains 1001 training examples and 203 validation examples. The current state-of-the-art results on FOLIO is 78.9% achieved by LogicLM <cit.>. Using our LLM-ARC system we achieve a new state-of-the-art accuracy of 88.32%. We compare several strong LLM-only baselines (using GPT4-Turbo as the LLM) with various versions of the LLM-ARC system on the FOLIO data, and show that the Actor-Critic approach even in a few-shot setting (only 8 examples for the Actor) outperforms a fine-tuned LLM solution trained on all 1K examples. We demonstrate that adding the test generation option to the Actor improves performance by 6.6% (compared to a version without test-gen); that running code and test generation in a self-correction loop with the Critic (where the Actor corrects mistakes based on the Critic feedback) further boosts performance by  5%; and the best performing system is one where the Actor is trained on end-to-end self-correction dialog traces with Critic feedback (from the automated reasoner) on the training set. The contributions of this work are as follows: * We believe this is the first work to fold in test generation for declarative logic programs to improve code quality, and combine an LLM Actor for code-generation with a Reasoning Engine Critic for test evaluation and explanation, boosting overall system performance. (We refer to this hybrid architecture as LLM-ARC) * We specify guidelines for test-generation based on a logical analysis of the problem domain, and use a simple general schema for writing logic tests. We demonstrate the value-add of test generation, and the specific guidelines, via ablation experiments. All relevant LLM prompts are included in the Appendix. * In the presence of final ground truth labels for reasoning problems, we describe a fully automated procedure to train the Actor model (to write and rectify declarative code and tests) over end-to-end dialog traces of a self-correction loop using a reasoning engine Critic (to provide fine-grained explanatory feedback). This self-supervised version of the LLM-ARC system achieves a new SOTA of 88.32% on the FOLIO benchmark. § RELATED WORK Given the remarkable performance of LLMs on automated code-generation, a large number of AI-driven “co-pilot" tools and frameworks are being actively developed. 
There is also a growing interest in automatically generating test cases (both, unit tests and more complex integration tests) to validate code correctness <cit.>. However, to our knowledge, all the efforts have been focused on generating and testing procedural code. Our area of interest is symbolic code (logic programs) where tests are crucial to verify that the rules and constraints accurately captures the modelers intent. This is particularly useful for developers working with declarative systems who are not proficient in formal logic, due to the long-distance inter-dependencies across rules, vagaries of logic involving negations and contrapositives, and the need to explicitly encode commonsense knowledge. Moreover, we believe that test failures can be effectively leveraged to improve declarative code accuracy, due to the unique capability of a symbolic reasoning engines to provide detailed logical explanations (proofs) for the failures, a claim validated by our LLM-ARC system results. More closely aligned to our work is neuro-symbolic systems such as LINC <cit.> and LogicLM <cit.>. These systems combine an LLM with a formal reasoning engine (in LogicLM's case, a variety of solvers based on the underlying logic) and show impressive results on a several NL-reasoning benchmarks. However, neither system has the notion of generating semantic tests that need to be validated by the reasoner. LogicLM does have a self-refinement loop but it is only used for syntax errors in the generated logical representation, while LINC has no self-refinement or feedback from the solver. Additionally, the idea of training the logic program writer (Actor) over end-to-end interactions by incorporating feedback from a formal reasoning engine (Critic) is fundamentally novel to our work. Much of the “agent based" or “ReAct" systems that integrate tools with LLMs suffer from orchestration and control inefficiencies where individually efficient tools are combined in a sub-optimal and brittle whole. We focus on integrated training that ensures the overall system is optimized. § APPROACH: NEURO-SYMBOLIC ACTOR-CRITIC MODEL As mentioned earlier, our LLM-ARC system is based on the Actor-Critic model, where we use an LLM as the Actor to generate declarative code with tests, and an Automated Reasoner as the Critic to execute the logic program, run the tests and provide detailed feedback with explanations to the Actor when there are test failures. The system needs to be based on a logical formalism, and to tackle FOLIO, we chose Answer Set Programming (ASP) as the underlying logic. ASP was selected because we found that it works best for developing enterprise applications <cit.> and it has sufficient logical expressivity needed for most of the FOLIO problems. Figure <ref> shows our LLM-ARC implementation based on ASP. We now describe details of the Actor and Critic. §.§ Actor: LLM Logic Program Writer For the LLM Actor, we chose GPT4-Turbo () since we found that it generated ASP code of reasonably high quality from NL instructions, even in a zero-shot setting. We use GPT4-Turbo in a few shot setting by specifying a handful of examples of translating FOLIO problems into ASP. To come up with the exemplar set, we did an automated analysis of the logical structure and expressivity of the NL statements in FOLIO. 
§.§.§ Logic Stratification of FOLIO statements The idea is to use a powerful LLM (such as GPT4-Turbo) to automatically classify NL statements based on their logical structure, connectives/operators used, and overall composition (e.g. do they contain nested clauses). We came up with a general prompt (see Appendix) for logic stratification that applies to most formal logics (not just ASP) and ran it on a large random sample of FOLIO statements. We manually vetted the results and found that the logically stratified clusters (including their examples) found by the LLM were of very high quality overall. We acknowledge that this task may be easy for the FOLIO dataset where statements are written in a logic-heavy manner by design. The net result of logic stratification over FOLIO data is shown in Figure <ref>. The LLM found 8 logical classes of FOLIO statements. We added one more category for cases where background knowledge (often common-sense relationships connecting predicates in the program) was missing in the input problem description. For example, consider when the Premises state: All employees who schedule a meeting with their customers will go to the company building today. Everyone who has lunch in the company building schedules meetings with their customers. No managers work remotely from home. In this case, the following common-sense rules are not explicitly mentioned in the premises: * Managers are employees. * All employees who have lunch in the company building are in the company building. Information like this is often missing in the input because it is considered obvious, a problem noted by <cit.> as well. Here, we leverage the LLM's ability to fill in common-sense knowledge gaps, though we need to be careful about the LLM adding extraneous knowledge that confounds the modelers intent, and we address this using a combination of prompt-engineering and using tests to validate the semantics. For the few shot setting, we added 8 examples to cover all the 8 main logic classes, adding one example per class (see Appendix). Several examples include common-sense relationships with instructions on how and when to add them. Note that since each example is a multi-line problem, a single example might cover more than one logic category. See an example in Figure <ref>. §.§.§ Logic Test Generation We designed a simple general schema for specifying logic tests. Each test has optional facts that need to be added to the program to test the rules/constraints, and the test conditions are either one of the following: * : a set of propositions that must be inferred by the solver in all solution sets of the logic program * : a set of propositions that must be inferred by the solver in at least one solution set of the logic program * : a set of propositions that must not be inferred by the solver in any solution set of the logic program * : a boolean flag which represents whether we expect the program to be contradictory (unsatisfiable) when the facts are added To improve test generation quality, we asked the LLM to add two additional fields for each test: - which points to specific rules in the program (all rules have an ID in the program; see the example in Figure <ref>) that are exercised in the test; and - a rationale for the test describing how it validates the semantics of the referenced rules. We then specified guidelines for writing tests in the prompt, based on different logical conditions in the input. 
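A test in this schema can be checked mechanically against the answer sets of the generated program. The sketch below is illustrative only: the dictionary keys are placeholders for the schema fields listed above, the program is a toy example rather than FOLIO-derived code, and the open-source clingo Python API is used as a stand-in solver.

```python
import clingo

# Field names are placeholders for the test schema above; the program is a toy example.
program = """
employee(X) :- manager(X).
in_building(X) :- employee(X), lunch_in_building(X).
"""

test = {
    "facts": ["manager(sam).", "lunch_in_building(sam)."],
    "all_solutions_contain": ["in_building(sam)"],
    "some_solution_contains": [],
    "no_solution_contains": ["works_remotely(sam)"],
    "expect_unsatisfiable": False,
}

def run_test(program, test):
    ctl = clingo.Control(["0"])                      # "0": enumerate every answer set
    ctl.add("base", [], program + "\n" + "\n".join(test["facts"]))
    ctl.ground([("base", [])])
    models = []
    with ctl.solve(yield_=True) as handle:
        for model in handle:
            models.append({str(s) for s in model.symbols(shown=True)})
    if test["expect_unsatisfiable"]:
        return not models
    ok = bool(models)
    ok = ok and all(all(p in m for p in test["all_solutions_contain"]) for m in models)
    ok = ok and all(all(p not in m for p in test["no_solution_contains"]) for m in models)
    if test["some_solution_contains"]:
        ok = ok and any(all(p in m for p in test["some_solution_contains"]) for m in models)
    return ok

print(run_test(program, test))                       # -> True for this toy example
```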
Similar to the in-context examples chosen for ASP code generation, we mirrored the guidelines on the logic strata found in FOLIO statements, to ensure adequate coverage of the logical semantics. Examples of the guidelines are shown in Figure <ref>, with the full prompt attached in the Appendix. §.§.§ Error Correction Finally, we include instructions in the Writer prompt for correcting errors reported by the Critic. There are two kinds of errors: syntax/compilation errors, and semantic errors when there are test failures. The prompt contains strategies to resolve both classes of errors, and utilizes 1 example each. Furthermore, we specify a detailed pseudo-code for fixing the semantic errors (failing tests) since this is the more challenging case. The instructions walk through how the explanation from the Critic can be used to identify whether the test inputs, validation criteria, or specific parts of the ASP program (e.g., the commonsense knowledge section) need to be altered. §.§ Critic: Logical Reasoner We use the Clingo ASP Solver <cit.> as the Critic since it is highly performant and freely available under the MIT License. Clingo also has useful compilation error messages, which point to specific lines in the program with errors. This information is fed back to the Actor in the self-correction loop. §.§.§ Query Evaluation We came up with a simple logical grammar to interpret the Conclusion statements in the FOLIO problem as structured queries, which could then be evaluated more accurately using the Solver. For example, consider the following Conclusion: “If the Red Star is a supernova or observed for its brightness, then the Red Star is neither a planet nor is its orbit stable." This is the corresponding target interpretation: * * * * * * * As shown, we use standard logical operators such as (for implications) and use to denote the base atomic propositions. Any logical structure can be composed bottom-up in a modular manner. The advantage of representing queries (conclusions) using this schema is more flexibility in query evaluation, especially when dealing with the particularities of the ASP formalism (e.g. ASP does not have clean support for existential quantification). §.§.§ Explanation Generation A feature that we added to the Solver is its ability to generate explanations for query entailments. There has been some work in this area <cit.> though we developed our own simple algorithm based on proof-by-refutation. The idea is to check query entailment by adding the negation of the query to the program and checking for a contradiction. If a contradiction is found, we can infer that the query is entailed. In this case, we can find an explanation for the entailment by obtaining the minimal set of rules in the ASP program that result in the contradiction. This is a popular technique for explanation generation used in FOL and description logic systems <cit.>. § EXPERIMENTS ON FOLIO We conducted various experiments using the FOLIO benchmark, which consists of 1001 training examples and 204 validation examples. Each FOLIO problem consists of a set of Premises (NL statements) and a Conclusion (also a NL statement). The task is to determine whether the Conclusion is or given the Premises. The FOLIO dataset also includes First-Order-Logic (FOL) translations for each of the Premises and the Conclusion. 
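Before turning to the experimental setup, the Critic's query evaluation and explanation generation described above can be sketched with the Clingo Python API. The helper names and the greedy shrinking strategy are ours (the authors' exact algorithm may differ); the core idea — add the negated query as a constraint, test for unsatisfiability, then prune rules that are not needed for the contradiction — follows the proof-by-refutation description in the text. Note that unsatisfiability here corresponds to entailment provided the base program itself is satisfiable.

```python
# Hedged sketch of proof-by-refutation entailment checking with Clingo.
import clingo


def satisfiable(rules, extra=""):
    ctl = clingo.Control(["0"])
    ctl.add("base", [], "\n".join(rules) + "\n" + extra)
    ctl.ground([("base", [])])
    return ctl.solve().satisfiable


def entails(rules, query_atom):
    # ":- query." forbids models containing the query; UNSAT then means the atom
    # holds in every answer set (single positive atoms only -- compound queries
    # need the structured-query evaluation described in the text).
    return not satisfiable(rules, f":- {query_atom}.")


def explain(rules, query_atom):
    """Greedy deletion-based shrinking: drop rules whose removal keeps the
    program-plus-negated-query unsatisfiable. Not guaranteed minimal for ASP,
    just a sketch of the idea."""
    assert entails(rules, query_atom)
    kept = list(rules)
    for r in rules:
        if r not in kept:
            continue
        trial = [x for x in kept if x != r]
        if not satisfiable(trial, f":- {query_atom}."):  # still UNSAT without r
            kept = trial                                  # so r is not needed
    return kept


RULES = ["bird(tweety).", "flies(X) :- bird(X), not abnormal(X).", "color(sky, blue)."]
print(entails(RULES, "flies(tweety)"))   # True
print(explain(RULES, "flies(tweety)"))   # drops the irrelevant color fact
```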
We evaluated FOLIO use the following systems (the first four systems below are LLM-only baseline systems) * GPT-3.5-ZS and GPT4-T-ZS: Zero-shot versions of GPT-3.5 and GPT4-Turbo * GPT4-T-CoT: GPT4-Turbo with a Chain-of-Thought prompt where we instruct the model to label the premises, and then carefully evaluate the conclusion using step-wise reasoning and referencing the premises along the way. * GPT4-FT-NL: GPT4 fine-tuned on the NL problem descriptions in the entire FOLIO training data of 1001 examples * GPT4-FT-FOL: GPT4 fine-tuned to go from NL problem description to the corresponding First Order Logic (FOL) versions (annotated in the FOLIO training data), and then to the prediction. The idea is to check whether using the precise FOL translations as an intermediate step helps the model produce more accurate results. * LLM-ARC-8-shot: The LLM-ARC system with 8 in-context learning examples, where the LLM Actor only does code generation (no Tests). The LLM used was GPT4-Turbo * LLM-ARC-8-shot-TestGen: The above system with the enhancement that the Actor also generates Tests for the code * LLM-ARC-20-shot: The LLM-ARC system (again using GPT4-Turbo as the Actor) with 20 in-context learning examples, and no test generation. We added another 14 examples to cover the 8 logic classes described in Section <ref>. * LLM-ARC-20-shot-TestGen: The above system with the enhancement that the Actor also generates Tests for the code * LLM-ARC-Trained: Trained version of the LLM Actor (GPT4, not Turbo) on end-to-end dialog traces with the Reasoner Critic, in a self-correction loop over the entire training data. The actor is trained to generate both ASP Code and Tests. Details of how this was done are provided in the next subsection. All the LLM-ARC systems are run in a self-correction loop with upto 4 iterations. §.§ Training the Actor with Critic Feedback Dialog-Traces We ran the un-trained 8-shot version of the LLM-ARC system (with the TestGen capability) on the entire training set and collected dialog trace data on the correctly predicted examples. We used this dialog data to fine-tune a separate Actor model based on GPT4[Currently, OpenAI does not provide an option to fine-tune GPT4-Turbo.]. Since the context window of GPT4 is only 8K tokens, we had to limit the dialog traces to fit it into the window. We achieved this by using only the last rectification step of the trace – e.g. if there was a compiler error reported by the Critic that was fixed by the Actor in the next iteration, we would train on a trace that starts with the prior incorrect version from the Actor, followed by the Critic feedback, and then the corrected version with the compilation issues fixed. The same applies to test failures, where we started the dialog trace with a version just prior to all the tests being passed, and included the intermediate Critic feedback before the corrected version. Additionally, we included a two-step “short-cut" dialog trace that went from the input problem directly to the ASP code and tests, when all the tests passed along with the correct ground truth prediction. The idea behind this is to enable the Actor to learn how to produce code and tests of high quality (that compile, pass tests and entail the query correctly) in a direct manner. 
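A rough sketch of how such truncated dialog traces might be serialized into fine-tuning records is shown below. The message format, helper names, and output file are illustrative assumptions; only the "keep the last rectification step" and "short-cut" strategies come from the text.

```python
# Illustrative serialization of Actor/Critic interactions into fine-tuning data.
import json


def last_rectification_trace(problem_nl, faulty_output, critic_feedback, fixed_output):
    """Keep only the final fix-up step, mirroring the 8K-context truncation above."""
    return {"messages": [
        {"role": "user", "content": problem_nl},
        {"role": "assistant", "content": faulty_output},
        {"role": "user", "content": critic_feedback},
        {"role": "assistant", "content": fixed_output},
    ]}


def shortcut_trace(problem_nl, final_output):
    """Two-step trace: problem description directly to correct code plus tests."""
    return {"messages": [
        {"role": "user", "content": problem_nl},
        {"role": "assistant", "content": final_output},
    ]}


example = shortcut_trace("Premises: ... Conclusion: ...",
                         "% ASP code ...\n% tests ...")
with open("llm_arc_traces.jsonl", "a") as f:  # hypothetical output file
    f.write(json.dumps(example) + "\n")
```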
To summarize, the data used to fine-tune the GPT4 Actor had the following 3 kinds of dialog traces:
* NL description → ASP code with compilation issues → Critic feedback on compiler errors → ASP code that compiles
* NL description → ASP code that compiles with test failures → Critic feedback on failures with explanations → ASP code with all tests passing
* NL description → ASP code that compiles with all tests passing
The traces were collected whenever the final system prediction on the ground truth label was correct. The total number of dialog traces (each trace corresponds to a single training instance) collected on the entire training set was 918. Finally, to keep the LLM prompt in the fine-tuned training data as concise as possible, we did not include any examples in the trained Actor prompt. §.§ System Results

System                      Accuracy
GPT3.5-ZS                   66.9%
GPT4-T-ZS                   67%
GPT4-T-CoT                  74.1%
GPT4-FT-NL                  80.7%
GPT4-FT-FOL                 78.17%
LogicLM (Prior SOTA)        78.9%
LLM-ARC-8-shot              74.62%
LLM-ARC-8-shot-TestGen      81.22%
LLM-ARC-20-shot             83.25%
LLM-ARC-20-shot-TestGen     85.79%
LLM-ARC-Trained             88.32%
Table: Overall Accuracy on FOLIO. All LLM-ARC systems were run in a self-correction loop with up to 4 iterations.

The accuracy scores for all the systems are shown in Table <ref>, along with the prior known SOTA[This result was reported on FOLIO v1. We are in the process of replicating their system's results on the latest v2 version.]. The best performing LLM-only baseline solution is GPT4-FT-NL, which was trained on all the 1K examples in the training data. Interestingly, using the FOL annotations as intermediate representations in a chain-of-thought variant did not help results, a point worth investigating in the future. Regarding the LLM-ARC systems, we observe a clear benefit of adding Test Generation. Both LLM-ARC few-shot variants (i.e. the 8- and 20-example variants) perform better with TestGen added, with the 8-example variant seeing a substantial boost of +6.6%. Of note, the LLM-ARC-8-shot-TestGen version outperforms the best LLM-only solution even though the latter was fine-tuned on the entire 1K-example training set. Our best performing system is the LLM-ARC version that was trained in a self-supervised manner on end-to-end dialog traces with the Critic feedback, and achieved 88.32% accuracy, 10 points higher than the prior known SOTA. §.§ Ablation Studies We conducted a few ablation studies to measure the impact of features in the LLM-ARC system. §.§.§ Impact of Iterative Self-Correction To answer the question "How much do retries help?", we plot the accuracy curves for the LLM-ARC variants over multiple iterations of the self-correction loop. The results are shown in Figure <ref>. We see that over multiple iterations, performance goes up by 4-5% for the two LLM-ARC systems shown above when comparing the results after zero and max retries, as the code compilation issues and test failures are fixed by the Actor based on the Critic feedback (notice how the numbers in columns 3 and 6 in the Tables in Figure <ref> go up over iterations along with the overall accuracy in column 2). However, the final accuracy asymptotes after two retries. More details on this are in the Error Analysis section. §.§.§ Impact of TestGen Guidelines To measure the impact of adding Test Generation guidelines (Figure <ref>), we ran an ablation by dropping this section from the prompt and letting the LLM determine how to generate tests on its own instead.
The results for this ablation are shown in the table below for the two LLM-ARC few-shot systems, and indicate a big drop in performance, clearly demonstrating its value-add. System Accuracy LLM-ARC-8-TestGen 78.7% (-2.52%) LLM-ARC-20-TestGen 80.2% (-5.59%) tableDropping Test-Gen Guidelines § ERROR ANALYSIS AND DISCUSSION We analyzed errors from the best performing system () and found that they broadly fell in three categories (excluding minor cases of query interpretation failures and modeling mistakes like representing an XOR as a regular disjunction) * Existential quantification: ASP does not have natural support for existential quantification. For example, it is not possible to accurately model the following statement: “One six-way tie was on the leaderboard, and one person in the six-way tie was from Belgium." that posits the existence of two unnamed individuals, which can potentially be unified with other named individuals in the program. This is certainly possible to do in other logics such as FOL (which the FOLIO dataset was annotated with) but is a limitation of our chosen formalism. * Rules with Multiple Variables: Among the various logic classes in FOLIO identified in Figure <ref>, the one class that the Actor had difficulty in modeling was rules with multiple variables. This includes statements like “All languages within a language family are related to each other.", where the rule involves two variables for distinct languages. We believe a reason for this is that there are very few examples of this class in the training set (<5%). A potential solution is to up-weight (or up-sample) examples from this class during training, or simply give more examples of this class in the few-shot prompt. * Conflating types and instances: These are cases where certain entities are linguistically used as both types and individuals in the input problem. For example, consider the example below from FOLIO where we have marked an entity that looks like both a class and an individual in different statements with a “*". Similar to the previous error class, there are very few examples of this behavior in the training set. Moreover, these are particularly hard modeling problems from a logic standpoint in ASP, which does not support “punning" (as say the description logic OWL2[https://www.w3.org/2007/OWL/wiki/Punning]), and hence requires additional machinery at modeling and query evaluation time to correctly interpret terms as either classes or individuals based on how they are used. Given the high performance of the system, it is unsurprising that the remaining headroom is for the challenging or sparse cases. Finally, we looked into why the performance was asymptotic after a few retries of the self-correction loop. In roughly a third of the problem failure categories (i.e. final prediction was incorrect) and where all tests did not pass, we found that the Actor made no alterations to the program code or the tests from one iteration to the next. This is a weakness exposed by our current design where we do not enforce that some alteration must be made by the Actor, and instead expect the LLM to follow the instructions in the prompt, which it clearly does not always do. We are considering an alternate design using function-calling with constraints where we can enforce that one or more of the program code, query interpretation and failing test criteria must be modified in the presence of a failing test. 
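As a concrete illustration of the second error class above (rules with multiple variables), one plausible ASP encoding of the quoted language-family statement — our own guess, not the Actor's actual output — is the following:

```python
# Illustrative multi-variable ASP rule, checked with Clingo.
import clingo

MULTI_VAR_RULE = """
% all languages within a language family are related to each other
related(X, Y) :- language(X), language(Y), in_family(X, F), in_family(Y, F), X != Y.
language(dutch).  language(german).
in_family(dutch, germanic).  in_family(german, germanic).
"""

ctl = clingo.Control(["0"])
ctl.add("base", [], MULTI_VAR_RULE)
ctl.ground([("base", [])])
# the printed model contains related(dutch,german) and related(german,dutch)
ctl.solve(on_model=lambda m: print(m))
```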
We also empirically observed that the Actor would rarely change the query interpretation across multiple iterations and found that this was a miss in our instructions which primarily focused on altering the program code and tests. §.§ Potential Enhancements There are several potential enhancements to the LLM-ARC implementation which we leave for future investigation: Sophisticated Input Chunking: Since FOLIO problems are relatively small (< 10 statements), we pass the entire problem to the LLM Actor in one shot, without doing any chunking. In the future, for real world applications that involve translating large volumes of business logic text into a formal program, the input would have to be chunked. The appropriate chunking level would depend on the quality of the code generated, and would have to be empirically determined based on the LLM's output quality given a certain context window size (and if the Actor is trained, the chunking size used in the training data). Enhancing Critic Explanations: The current explanation generated from Clingo using our proof-by-refutation algorithm does not include grounded statements. A more informative explanation would come from grounding relevant rules that lead to the entailment, as described in work <cit.>. Moreover, we could use another LLM to translate the grounded rule-based proof back into natural language to produce a more fluent explanation, which should presumably be more interpretable for the LLM Actor. This hypothesis needs to be empirically validated. Training a separate Critic: As mentioned earlier, in the current design, there is no guarantee that the test conditions correctly and completely capture the intended semantics, or that the tests pass for the right reason. One way to mitigate this issue is to have a separate Critic that evaluates the reasoner's results and provides feedback on the test criteria and proof step correctness. Indeed, our original system design started off with a Critic distinct from the reasoner, which was to be trained with human-feedback on the tests results and explanations provided by the reasoner (since those need to be manually assessed). We did not go down this path in the end, since we found that using the automated reasoner as the Critic directly, and training the Actor in a self-supervised training loop, produced a big boost in performance. We still believe training a separate critic has the potential to further increase accuracy and reliability of the entire system. § CONCLUSION There is growing recognition in the AI community that LLM-only solutions do not meet the standard for production applications that require a high degree of accuracy, consistency and explicability. More specifically, current state-of-the-art LLMs are known to struggle for problems involving precise logical reasoning, planning and constraint solving. As a result, we have seen a rise in the development of Neuro-Symbolic systems, where the reasoning is offloaded to a symbolic solver, and the LLM is used at the interface layer to map between unstructured data (text) and structured logical representations. Unlike standard tools or simple APIs, integration between an LLM and a symbolic reasoner can be fairly sophisticated as the reasoning engine has its own world model and decision procedures (arguably, one might even conceive and design the system such that the reasoner is the brain of the system and the LLM is the tool for interpreting and translating data). 
In such declarative systems, we firmly believe that tests are needed to check for semantic correctness of the logic program (a much harder challenge than ensuring syntactic correctness), and that the reasoner by way of providing detailed feedback on test failures to the program writer can help it improve in a self-correction loop. This intuition led us to the design the system presented in this paper, which is based on the Actor-Critic model and uses the LLM as the Actor and an Automated Reasoning engine as the Critic. We empirically validate the system on the FOLIO benchmark, and show that not only can such a system achieve higher performance than an LLM-only solution in a few-shot setting, but that we can devise a fully automated self-supervised loop to train the Actor with Critic feedback to boost performance significantly. Lastly, the ability of this system to provide detailed logical explanations for its answers means that a human-in-the-loop can verify its results in production applications. plain § APPENDIX §.§ Prompt for Logic Stratification You are an expert logician. You are given a set of natural language statements that express various logical conditions, rules and constraints. Your task is to stratify (cluster) the statements based on their logical structure, connectives (or operators used) and complexity (e.g., nested clauses etc.). Output a list of clusters where each cluster contains a collection of statements that have a similar logical structure and connectives used. Copy up to 5 canonical problem statements in each cluster. Be as fine-grained as possible when coming up with clusters and come up with an exhaustive set of clusters that cover all the diversity in the input statements. §.§ 8-Examples (one per logic class) used in LLM Actor §.§ Full Prompt for LLM Actor
http://arxiv.org/abs/2406.18722v1
20240626194208
Towards Open-World Grasping with Large Vision-Language Models
[ "Georgios Tziafas", "Hamidreza Kasaei" ]
cs.RO
[ "cs.RO", "cs.CV" ]
§ ABSTRACT The ability to grasp objects in-the-wild from open-ended language instructions constitutes a fundamental challenge in robotics. An open-world grasping system should be able to combine high-level contextual with low-level physical-geometric reasoning in order to be applicable in arbitrary scenarios. Recent works exploit the web-scale knowledge inherent in large language models (LLMs) to plan and reason in a robotic context, but rely on external vision and action models to ground such knowledge into the environment and parameterize actuation. This setup suffers from two major bottlenecks: a) the LLM's reasoning capacity is constrained by the quality of visual grounding, and b) LLMs do not contain low-level spatial understanding of the world, which is essential for grasping in contact-rich scenarios. In this work we demonstrate that modern vision-language models (VLMs) are capable of tackling such limitations, as they are implicitly grounded and can jointly reason about semantics and geometry. We propose OWG, an open-world grasping pipeline that combines VLMs with segmentation and grasp synthesis models to unlock grounded world understanding in three stages: open-ended referring segmentation, grounded grasp planning and grasp ranking via contact reasoning, all of which can be applied zero-shot via suitable visual prompting mechanisms. We conduct extensive evaluation in cluttered indoor scene datasets to showcase OWG's robustness in grounding from open-ended language, as well as open-world robotic grasping experiments in both simulation and hardware that demonstrate superior performance compared to previous supervised and zero-shot LLM-based methods. § INTRODUCTION [Figure: Challenges of open-world grasping tackled with VLMs. The overall pipeline combines VLMs with segmentation and grasp synthesis models to ground open-ended language instructions, plan and reason about how to grasp the desired object.] Following grasping instructions from free-form natural language in open-ended environments is a multi-faceted problem, posing several challenges to robot agents. Consider the example of Fig. <ref>: the robot has to decipher the semantics of the user instruction (i.e., "what would a child want to play with?"), recognize the appearing objects and ground the target (i.e., the white toy), reason about the feasibility of the grasp to generate an appropriate plan (i.e., first remove the blocking juice box), and finally select a suitable grasp based on the object geometry and potential collisions. It becomes clear that to deal with the full scope of open-world grasping, agents should integrate high-level semantic with low-level physical-geometric reasoning, while doing so in a generalizable fashion. Such capabilities still seem distant in robots, mainly due to the lack of large-scale vision-language-action datasets. In recent years, Large Language Models (LLMs) <cit.> have emerged as a new paradigm in robotics and embodied AI, due to their emergent general knowledge, commonsense reasoning and semantic understanding of the world <cit.>. This has led to a multitude of LLM-based approaches for zero-shot robotic task planning <cit.>, navigation <cit.> and manipulation <cit.>, where the LLM decomposes a high-level language instruction into a sequence of steps, thereby tackling complex, long-horizon tasks by composing primitive skills.
However, a notorious limitation of LLMs is their lack of world grounding — they cannot directly reason about the agent and environment physical state <cit.>. Additionally, as LLMs rely solely on language to represent the world, they lack deep knowledge when it comes to low-level, physical properties, such as object shapes, precise 3D geometry, contact physics and embodiment constraints <cit.>. Even when equipped with external visual modules for perceiving the world, the amount of information accessed by the LLM is bottlenecked by the visual model's interface (e.g. open-vocabulary detectors <cit.> cannot reason about object relations such as contacts). Recent advancements in unifying LLMs with vision for large-scale multimodal pretraining, exemplified by projects such as GPT-4v(ision) <cit.> and Gemini <cit.>, are promising new exciting capabilities. Large Vision-Language Models (LVLMs) integrate visual understanding and language generation into a unified stream, allowing direct incorporation of perceptual information into the semantic knowledge acquired from language <cit.>. Preliminary explorations with GPT-4v <cit.> have illustrated two intriguing phenomena, namely: a) by combining LVLMs with segmentation models and constructing suitable visual prompts, LVLMs can unleash extraordinary open-ended visual grounding capabilities <cit.>, and b) effective prompting strategies like chain-of-thought <cit.> and in-context examples <cit.> seem to also emerge in LVLMs, further extended to the visual modality. Motivated by these results, we perform an in-depth study of the potential contributions of LVLMs in open-ended robotic grasping. In this paper, we propose Open World Grasper (OWG): an integrated approach that is applicable zero-shot for grasping in open-ended environments, object catalogs and language instructions. OWG combines LVLMs with segmentation <cit.> and grasp synthesis models <cit.>, which supplement the LVLM's semantic knowledge with low-level dense spatial inference. OWG decomposes the task in three stages: open-ended referring segmentation, where the target object is grounded from open-ended language, (ii) grounded grasp planning, where the agent reasons about the feasibility of grasping the target and proposes a next action, and (iii) grasp ranking, where the LVLM ranks grasp proposals generated from the grasp synthesizer based on potential contacts. In summary, our contributions are threefold: a) we propose a novel algorithm for grasping from open-ended language using LVLMs, b) we conduct extensive comparisons and ablation studies in real cluttered indoor scenes data <cit.>, where we show that our prompting strategies enable LVLMs to ground arbitrary natural language queries, such as open-vocabulary object descriptions, referring expressions and user-affordances, while outperforming previous zero-shot vision-language models by a significant margin, and c) we integrate OWG with a robot framework and conduct experiments both in simulation and in the real world, where we illustrate that LVLMs can advance the performance of zero-shot approaches in the open-world setup. § RELATED WORKS Visual Prompting for Multimodal Models Several works investigate how to bypass fine-tuning VLMs, instead relying on overlaying visual/semantic information to the input frame, a practise commonly referred to as visual prompting. Colorful prompting tuning (CPT) is the first work that paints image regions with different colors and uses masked language models to “fill the blanks" <cit.>. 
Other methods try to use CLIP <cit.> by measuring the similarity between a visual prompt and a set of text concepts. RedCircle <cit.> draws a red circle on an image, forcing CLIP to focus on a specific region. FGVP <cit.> further enhances the prompt by specifically segmenting and highlighting target objects. Recent works explore visual prompting strategies for LVLMs such as GPT-4v, by drawing arrows and pointers <cit.> or highlighting object regions and overlaying numeric IDs <cit.>. In the same vein, in this work we prompt GPT-4v to reason about visual context while being grounded to specific spatial elements of the image, such as objects, regions and grasps. LLMs/LVLMs in Robotics Recent efforts use LLMs as an initialization for vision-language-action models <cit.>, fine-tuned in robot demonstration data with auxiliary VQA tasks <cit.>. Such end-to-end approaches require prohibitive resources to reproduce, while still struggling to generalize out-of-distribution, due to the lack of large-scale demonstration datasets. Alternatively, modular approaches invest on the current capabilities of LLMs to decompose language instructions into a sequence of high-level robot skills <cit.>, or low-level Python programs composing external vision and action models as APIs <cit.>. Such approaches mostly focus on the task planning problem, showcasing that the world knowledge built in LLMs enables zero-shot task decomposition, but require external modules <cit.> to ground plan steps to the environment and reason about the scene. As a result, the system's capacity to reason about open-ended visual context is constrained by the choice and quality of the pre-defined grounders. In contrast, our method is based on LVLMs that implicitly ground visual information to the LLM's semantic knowledge and do not require external grounding modules. In the same spirit, concurrent works study the potential of LVLMs such as GPT-4v for inherently grounded task planning <cit.>. In <cit.>, the authors use GPT-4v to map videos of human performing tasks into symbolic plans, but do not consider it for downstream applications. Closer to our work, VILA <cit.> feeds observation images with text prompts to an LVLM to plan without relying on external detectors. However, produced plans are expressed entirely in language, so spatial elements such as referred object masks and grasp poses still need to be grounded, such that the plan is executable. In our work, we alleviate this by exploiting GPT-4v's OCR capability with visual marker prompting, via injecting such spatial elements directly in the prompt. Further, VILA focuses on general-purpose task planning, assuming an already obtained skill library, whereas we propose an end-to-end solution specifically for grasping, including the external models required and how they should interact algorithmically with the LVLM. § METHOD §.§ Prerequisites and Problem Statement Large Vision-Language Models VLMs receive a set of RGB images of size H × W: ℐ_1:M, ℐ∈ℝ^H × W × 3 and a sequence of text tokens 𝒯, and generate a text sequence 𝒴 of length L: 𝒴≐ w_1:L={ w_1, …, w_L} from a fixed token vocabulary w_i ∈𝒲, such that: 𝒴 = ℱ(ℐ_1:M, 𝒯). The images-text pair input 𝒳 = ⟨ℐ_1:M, 𝒯⟩ is referred to as the prompt, with the text component 𝒯 typically being a user instruction or question that primes the VLM for a specific task. 
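For concreteness, a hedged sketch of issuing such a multimodal prompt ℱ(ℐ_1:M, 𝒯) to a hosted LVLM is shown below. The model name, API shape and prompt wording are illustrative assumptions, not the exact configuration used in this work.

```python
# Illustrative multi-image prompt to a hosted vision-language model.
import base64
from openai import OpenAI


def encode(path: str) -> str:
    with open(path, "rb") as f:
        return base64.b64encode(f.read()).decode()


def query_vlm(image_paths, instruction, model="gpt-4o"):
    client = OpenAI()
    content = [{"type": "text", "text": instruction}]
    for p in image_paths:
        content.append({"type": "image_url",
                        "image_url": {"url": f"data:image/png;base64,{encode(p)}"}})
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": content}],
    )
    return resp.choices[0].message.content


# e.g. a grounding-style prompt could pair a raw view with a marked counterpart:
# query_vlm(["scene.png", "scene_marked.png"], "Which numeric ID is the white toy?")
```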
Traditionally, ℱ is implemented by autoregressively generating tokens w_i with a transformer decoder <cit.>, which corresponds to the optimization problem: 𝒴 = arg max_w_1:L ∈ 𝒲^L ∏_i=1^L p_θ( w_i | 𝒳, w_1:i-1) In practice, this is solved via greedy decoding, i.e., selecting the most likely next token, beam search, or sampling strategies. Grasp Representations We represent a grasp via an end-effector gripper pose 𝒢, with 𝒢∈ℝ^4 for 4-DoF and 𝒢∈ℝ^6 for 6-DoF grasping. Such a representation contains a 3D position and either a yaw rotation or a full SO(3) orientation for 4-DoF and 6-DoF respectively. 4-DoF grasps assume that the approach vector is calibrated with the camera extrinsics, and hence can be directly drawn as rectangles in the 2D image plane (see bottom of Fig. <ref>), which happens to be a favorable representation for VLMs, as grasp candidates can be interpreted as part of the input image prompt. A motion primitive is invoked to move the arm to the desired gripper pose 𝒢, e.g. via inverse-kinematics solvers. [More sophisticated motion planning algorithms, e.g. with integrated obstacle avoidance, can be utilized orthogonal to our approach.] Problem Statement Given an RGB-D observation ℐ_t ∈ℝ^H × W × 3, 𝒟_t ∈ℝ^H × W and an open-ended language query 𝒯, which conveys an instruction to grasp a target object, the goal of OWG is to provide a policy π (a_t |ℐ_t, 𝒟_t, 𝒯). Assuming n ∈{1, …, N} are the N objects that appear in the scene and n^* is the target object, then at each time step t, the policy outputs a pose for grasping an object: a_t = G_t(n), G_t(n)=G(n, ℐ_t, 𝒟_t), t=1,…,T, where the last step T always maps to grasping the target object: a_T=G_T(n^*). We refer to the function G as the grasp generation function, which corresponds to a pretrained grasp synthesis network from RGB-D views <cit.> [Other point-cloud <cit.> or voxel-based <cit.> methods for 3D grasp generation can be utilized orthogonal to our approach, which uses a single RGB-D view.] We note that our policy π outputs directly the actual gripper pose 𝒢 = G(n), and the object-centric abstraction n is used implicitly (details in next sections). We wish to highlight that in most grasp synthesis pipelines <cit.>, it is always T=1 and a_1=G_1(n^*), which corresponds to an open-loop policy attempting to grasp the object of interest once. Our formulation for T>1 allows the VLM to close the loop by re-running after each step, which enables visual feedback for planning and recovery from failures / external disturbances. §.§ Pipeline Overview OWG combines VLMs with pretrained 2D instance segmentation and grasp synthesis models. Segmentation methods like SAM <cit.> and its variants <cit.> have demonstrated impressive zero-shot performance. Similarly, view-based grasp synthesis networks <cit.> have also been shown to transfer to unseen content, as they are trained without assumptions of objectness or semantics in their training objectives. The zero-shot capabilities of these models for low-level dense spatial tasks are complementary to the high-level semantic reasoning capabilities of VLMs, while both use images as the underlying representation, hence offering a very attractive coupling for tackling the open-world grasping problem. The overall pipeline can be decomposed into three subsequent stages: (i) open-ended referring segmentation, (ii) grounded grasp planning, and (iii) grasp generation and ranking. A schematic of OWG is shown in Fig. <ref> and described formally in Algorithm <ref>. Examples are shown in Fig.
<ref> and prompt implementation details can be found in Appendix B. Open-ended referring segmentation In this stage, the target object of interest must be segmented from the input RGB image ℐ_t given the instruction 𝒯. To enable this, we first run our segmentation model S:ℝ^H × W × 3→{0,1}^H × W and then draw the N generated masks M_1:N = S(ℐ_t) with additional visual markers in a new frame ℐ_t^m. This step aims to exploit the VLM's OCR capabilities and link each segment in the frame with a unique ID that the VLM can use to refer to it. After augmenting the image with visual markers, we pass the prompt <ℐ_t, ℐ_t^m, 𝒯> to the VLM. We refer to this VLM generation as ℱ^ground, such that: n^* = ℱ^ground(ℐ_t, ℐ_t^m, 𝒯) where n^* the target object and M_n^* its segmentation mask. We note that 𝒯 can contain free-form natural language referring to a target object, such as open object descriptions, object relations, affordances etc. [17]r0.6 Grounded grasp planning This stage attempts to leverage VLM's visual reasoning capabilities in order to produce a plan that maximizes the chances that the target object n* is graspable. If the target object is blocked by neighboring objects, the agent should remove them first by picking them an placing them in free tabletop space. Similar to  <cit.>, we construct a text prompt that describes these two options (i.e., remove neighbor or pick target) as primitive actions for the VLM to compose plans from. We provide the marked image ℐ_t^m together with the target object n^* (from the previous grounding stage) to determine a plan: p_1:T = ℱ^plan(ℐ_t^m, n^*), p_τ∈{1,…,N}. Each p_τ corresponds to the decision to grasp the object with marker ID n ∈{1,…,N}. As motivated earlier, in order to close the loop, we take the target of the first step of the plan ñ = p_1 and move to the grasping stage of our pipeline. Grasp generation and ranking After determining the current object to grasp ñ, we invoke our grasp synthesis model G to generate grasp proposals. To that end, we element-wise multiply the mask M_ñ with the RGB-D observation, thus isolating only object n^* in the input frames: ℐ̃_̃t̃ = ℐ_t ⊙ M_ñ, 𝒟̃_̃t̃ = 𝒟_t ⊙ M_ñ. The grasp synthesis network outputs pixel-level quality, angle and width masks which can be directly transformed to 4-DoF grasps 𝒢_1:K = G(ℐ̃_̃t̃, 𝒟̃_̃t̃) <cit.>, where K the total number of grasp proposals. Then, we crop a small region of interest c_ñ around the bounding box of the segment in the frame ℐ_t, from its mask M_ñ. We draw the grasp proposals 𝒢_1:K as 2D grasp rectangles within the cropped image c_ñ and annotate each one with a numeric ID marker, similar to the grounding prompt. We refer to the marked cropped frame as c_ñ'. Then, we prompt the VLM to rank the drawn grasp proposals: 𝒢_1:K' = ℱ^rank(c_ñ') where the prompt instructs the VLM to rank based on each grasp's potential contacts with neighboring objects. Finally, the grasp ranked best by the VLM 𝒢_1' is selected and sent to our motion primitive for robot execution. § EXPERIMENTS In this section, we compare the open-ended grounding capabilities of OWG vs. previous zero-shot methods in indoor cluttered scenes (Sec. <ref>). Then, we demonstrate its potential for open-world grasping both in simulation and in hardware (Sec. <ref>). Finally, we investigate the effect of several components of our methodology via ablation studies (Sec. <ref>). §.§ Open-Ended Grounding in Cluttered Scenes [8]r0.75 ! Method Found. 
Model Name Attribute Spatial Relation Visual Relation Semantic Relation Affordance Multi- hop Avg. ReCLIP <cit.> CLIP <cit.> 71.4 57.7 27.3 47.4 46.2 62.5 20.8 47.6± 17.0 RedCircle <cit.> CLIP <cit.> 52.4 53.9 18.2 42.1 46.2 18.9 12.5 34.8± 16.4 FGVP <cit.> CLIP <cit.> 50.0 53.9 33.3 36.9 53.8 43.8 29.1 43.0± 9.3 FGVP^* <cit.> CLIP <cit.> 65.7 65.4 33.3 42.1 69.2 56.2 29.1 51.8± 15.4 QWEN-VL-2 <cit.> QWEN <cit.> 64.3 60.9 52.4 44.0 47.1 11.9 42.1 46.1± 15.9 SoM <cit.> GPT-4v <cit.> 54.8 42.3 54.6 57.9 53.9 62.5 45.8 53.1± 6.4 OWG (Ours) GPT-4v <cit.> 85.7 80.8 75.8 73.7 76.9 93.8 79.2 80.8± 6.4 Zero-shot referring segmentation - mIoU(%) results per language instruction type for cluttered indoor scenes from OCID <cit.>. In order to evaluate the open-ended potential of OWG for grounding, we create a small subset of OCID-VLG test split <cit.>, which we manually annotate for a broad range of grasping instructions (see Appendix A). As we strive for zero-shot usage in open scenes, we mostly experiment with previous visual prompting techniques for large-scale VLMs, such as CLIP <cit.>, as well as the recent Set-of-Mark prompting methodology for GPT-4v <cit.>, which constitutes the basis of our method. We also include comparisons with open-source visually-grounded LVLM QWEN-VL-2 <cit.>. Please see Appendix D for details on the test dataset, baseline implementations and more comparative qualitative results. We observe that both CLIP-based visual prompting techniques and open-source LVLMs are decent in object-based but fail to relate objects from the visual prompts. Even GPT-4v-based SoM prompting method is not directly capable of handling cluttered tabletop scenes from depth cameras, as is evident by the 53.1% averaged mIoU across all query types. Overall, our OWG-grounder achieves an averaged mIoU score of 80.8%, which corresponds to a 27.7% delta from the second best approach. Importantly, OWG excels at semantic and affordance-based queries, something which is essential in human-robot interaction applications but is missing from modern vision-language models. We identify two basic failure modes: a) the LVLM confused the target description with another object, e.g. due to same appearance or semantics, and b) the LVLM reasons correctly about the object and where it is roughly located, but chooses a wrong numeric ID to refer to it. §.§ Open-World Grasping Robot Experiments [19]r0.45 < g r a p h i c s > Open-ended language-guided grasping trials in Gazebo (top) and real robot (bottom), in isolated (left column) and cluttered (right column) scenes. In this section we wish to evaluate the full stack of OWG, incl. grounding, grasp planning and grasp ranking via contact reasoning, in scenarios that emulate open-world grasping challenges. To that end, we conduct experiments in both simulation and in hardware, where in each trial we randomly place 5-15 objects in a tabletop and instruct the robot to grasp an object of interest. We conduct trials in two scenarios, namely: a) isolated, where all objects are scattered across the tabletop, b) cluttered, where objects are tightly packed together leading to occlusions and rich contacts. We highlight that object-related query trials contain distractor objects that share the same category with the target object. [10]r0.5 ! 
Setup 2cCROG <cit.> 2cSayCan-IM <cit.> 2cOWG (Ours) (l2ptr14pt)2-3 (l2ptr14pt)4-5 (l2ptr10pt)6-7 seen unseen seen unseen seen unseen Simulation (× 50) -Isolated 66.0 36.0 62.0 60.0 78.0 82.0 -Cluttered 38.0 22.0 48.0 56.0 62.0 66.0 Real-World (× 6) -Isolated 50.0 16.6 66.6 33.3 83.3 66.6 -Cluttered 16.6 0.0 16.6 16.6 50.0 50.0 Averaged success rates (%) over simulated and real-world grasping trials. The × represents number of trials per cell. Baselines We compare with two baselines, namely: a) CROG <cit.>, an end-to-end referring grasp synthesis model trained in OCID <cit.> scenes, and b) SayCan-IM <cit.>, an LLM-based zero-shot planning method that actualizes embodied reasoning via chaining external modules for segmentation, grounding and grasp synthesis, while reasoning with LLM chain-of-thoughts <cit.>. Our choice of baselines aims at showing the advantages of using an LVLM-based method vs. both implicit end-to-end approaches, as well as modular approaches that rely solely on LLMs to reason, with visual processing coming through external tools. See details in baseline implementations in Appendix B. Implementation Our robot setup consists of two UR5e arms with Robotiq 2F-140 parallel jaw grippers and an ASUS Xtion depth camera. We conduct 50 trials per scenario in the Gazebo simulator <cit.>, using 30 unique object models. For real robot experiments, we conduct 6 trials per scenario having the initial scenes as similar as possible between baselines. In both SayCan-IM and our method, Mask-RCNN <cit.> is utilized for 2D instance segmentation while GR-ConvNet <cit.> pretrained in Jacquard <cit.> is used as the grasp synthesis module. Our robotic setup is illustrated in Fig. <ref>, while more details can be found in Appendix B. To investigate generalization performance, all method are evaluated in both scenarios, in two splits: (i) seen, where target objects and queries are present in the method's training data or in-context prompts, and (ii) unseen, where the instruction refers to objects that do not appear in CROG's training data or SayCan-IM's in-context prompts. Averaged success rate per scenario is reported, where a trial is considered successful if the robot grasps the object and places it in a pre-defined container position. [12]r0.5 < g r a p h i c s > Distribution of failures across grounding and grasping in Gazebo grasping trials for isolated (left) and cluttered (right). OWG improves performance across both modes in both setups and test splits. Results We observe that the supervised method CROG struggles when used at unseen data, in both scenarios. In contrary, both SayCan-IM and OWG demonstrate immunity to seen/unseen objects, illustrating the strong zero-shot capabilities of LLM-based approaches, which can naturally generalize the concepts of object categories/attributes/relations from language. SayCan-IM is limited by the external vision models and hence struggles in cluttered scenes, where its detector sometimes fails to perceive the target object, resulting in lower final success rates compared to OWG, especially in the real-world experiments. OWG consistently outperforms both baselines both in simulation and in the real robot, with an ∼ 15% and ∼ 35% improved averaged success rate respectively. In Fig. <ref>, we illustrate the decomposition of failures across grounding and grasping in our baselines for 25 Gazebo trials per scenario, where we automatically test for the target object's grounding results alongside success rate. 
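To connect these numbers back to the mechanics, the sketch below illustrates the marker overlay and per-object grasp cropping from the Method section that both the grounding and grasp-ranking stages rely on. The mask format (binary H×W arrays), the grasp tuple layout (center, angle, width, height), and the helper names are our own assumptions; the real pipeline is described formally in Algorithm <ref>.

```python
# Illustrative visual-marker prompting utilities (simplified, assumptions noted above).
import cv2
import numpy as np


def draw_markers(image, masks):
    """Overlay each instance mask with a translucent colour, then a numeric ID."""
    over = image.astype(np.float32)
    rng = np.random.default_rng(0)
    for m in masks:
        colour = rng.integers(0, 255, size=3).astype(np.float32)
        over[m > 0] = 0.5 * over[m > 0] + 0.5 * colour
    marked = over.astype(np.uint8)
    for i, m in enumerate(masks, start=1):
        ys, xs = np.nonzero(m)
        cv2.putText(marked, str(i), (int(xs.mean()), int(ys.mean())),
                    cv2.FONT_HERSHEY_SIMPLEX, 1.0, (255, 255, 255), 2)
    return marked


def isolate(rgb, depth, mask):
    """Element-wise mask the RGB-D observation so only the target object remains."""
    return rgb * mask[..., None], depth * mask


def crop_with_grasps(image, mask, grasps):
    """Crop a ROI around the target mask and draw 4-DoF grasp rectangles with IDs."""
    ys, xs = np.nonzero(mask)
    x0, x1, y0, y1 = xs.min(), xs.max(), ys.min(), ys.max()
    crop = image[y0:y1 + 1, x0:x1 + 1].copy()
    for k, (cx, cy, angle, w, h) in enumerate(grasps, start=1):
        box = cv2.boxPoints(((cx - x0, cy - y0), (w, h), np.degrees(angle)))
        cv2.polylines(crop, [box.astype(np.int32)], True, (0, 255, 0), 2)
        cv2.putText(crop, str(k), (int(cx - x0), int(cy - y0)),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 255, 0), 2)
    return crop
```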
We observe that OWG consistently reduces the error rates in both grasping and grasping compared to the baselines in all scenarios and test splits. We believe that these results are encouraging for the future of LVLMs in robot grasping. §.§ Ablation Studies In out ablations we wish to answer the following questions: a) What is the bottleneck introduced by the segmentation model in the open-ended grounding performance?, b) What are the contributions of all of our proposed visual prompt elements?, and c) What is the contribution of the LVLM-based grasp planning and ranking in robot grasping experiments? The grounding ablations for the first two questions are organized in Table <ref>, while for the latter in Table <ref>. Instance segmentation bottleneck We compare the averaged mIoU of our OWG grounder in a subset of our OCID-VLG evaluation data for three different segmentation methods and ground-truth masks. We employ: a) SAM <cit.>, b) the RPN module of the open-vocabulary detector ViLD <cit.>, and c) the RGB-D two-stage instance segmentation method UOIS <cit.>, where we also provide the depth data as part of the input. We note that we did not optimize SAM hyper-parameters and the results tend to be oversegmented, leading in very cluttered markers for GPT-4v. This produces very low results compared to the other two methods, perhaps suggesting that SAM might not be the best option for RGB-D tabletop domains that we focus. ViLD-RPN and UOIS both achieve a bit above 70%, which is a ∼ 15% delta from ground-truth masks, but still offer a robust baseline. We visualize several masks and subsequent LVLM groundings in Appendix C. [14]r0.4 ! Method mIoU OWG (w/ Ground-Truth Mask) 86.6 -w/o reference 23.2 -w/o number overlay 54.6 -w/o high-res 61.3 -w/o self-consistency 70.9 -w/ box 74.6 -w/o CoT prompt 77.6 -w/o mask fill 81.1 SAM <cit.> 33.4 ViLD-RPN <cit.> 72.9 UOIS <cit.> 71.1 Grounding ablation studies. metrics reported in %. Visual prompt components We ablate all components of our grounding prompt and observe the contribution of each one via its averaged mIoU in the same subset as above. The most important prompt component is the reference image, provided alongside the marked image. Due to the high clutter of our test scenes, simply highlighting marks and label IDs in a single frame, as in SoM <cit.> hinders the recognition capabilities of the LVLM, with a mIoU drop from 86.6% to 23.2%. Further decluttering the marked image also helps, with overlaying the numeric IDs, using high-resolution images and highlighting the inside of each region mask being decreasingly important. Surprisingly, also marking bounding boxes leads to a 12% mIoU drop compared to avoiding them, possibly due to occlusions caused by lots of boxes in cluttered areas. Finally, self-consistency and chain-of-thought prompting components that were added also improve LVLM's grounding performance by ∼ 16 and 10% respectively, by ensembling multiple responses and enforcing step-by-step reasoning. [8]r0.4 ! Method Isolated Cluttered OWG 80.0 66.6 -w/o planning 73.3 26.6 -w/o grasp ranking 80.0 60.0 -w/o both 60.0 13.3 Averaged success rates (%) over 15 simulated grasping trials per scenario. Grasp-Related Ablations We quantify the contribution of our grasp planning and ranking stages in the open-world grasping pipeline, by replicating trials as in the previous section and potentially skipping one or both of these stages. 
As we see in Table <ref>, the effect of these components is not so apparent in isolated scenes, as objects are not obstructed by surroundings and hence most proposed grasps are feasible. The effect becomes more prominent in the cluttered scenario, where the lack of grasp planning leads to a success rate decrease of 40%. This is because without grasp planning the agent attempts to grasp the target immediately, which almost always leads to a collision that makes the grasp fail. Grasp ranking is less essential, as a lot of contact-related information is already present in the grasp quality predictions of our grasp synthesis network. However, it still provides an important boost in final success rate (6% increase). When skipping both stages, the agent's performance drops drastically in cluttered scenes, as it is unable to recover from grasp failures, and hence always fails when the first attempted grasp was not successful. § CONCLUSION, LIMITATIONS & FUTURE WORK In this paper we introduce OWG, a novel system formulation for tackling open-world grasping. Our focus is on combining LVLMs with segmentation and grasp synthesis models, and visually prompting the LVLM to ground, plan and reason about the scene and the object grasps. Our work sets a foundation for enabling robots to ground open-ended language input and close the loop for effective grasp planning and contact reasoning, leading to significant improvements over previous zero-shot approaches, as demonstrated by empirical evaluations, ablation studies and robot experiments. Limitations First, as OWG is a modular approach, it suffers from error cascading effects introduced by the segmentor and grasp synthesis models. However, improvements in these areas mean direct improvement to the OWG pipeline. Second, we currently use 4-DoF grasps to communicate them visually to GPT-4v, which constrains grasping to a single view. In the future we would like to integrate 6-DoF grasp detectors and explore new prompting schemes to aggregate and rank grasp information visually. Third, our results suggest that LVLMs still struggle to ground complex object relationships. More sophisticated prompting schemes beyond marker overlaying, or instruct-tuning on grasp-related data, might be a future direction for dealing with this limitation.
http://arxiv.org/abs/2406.18094v1
20240626061020
Shimo Lab at "Discharge Me!": Discharge Summarization by Prompt-Driven Concatenation of Electronic Health Record Sections
[ "Yunzhen He", "Hiroaki Yamagiwa", "Hidetoshi Shimodaira" ]
cs.CL
[ "cs.CL" ]
§ ABSTRACT In this paper, we present our approach to the shared task “Discharge Me!” at the BioNLP Workshop 2024. The primary goal of this task is to reduce the time and effort clinicians spend on writing detailed notes in the electronic health record (EHR). Participants develop a pipeline to generate the “Brief Hospital Course” and “Discharge Instructions” sections from the EHR. Our approach involves a first step of extracting the relevant sections from the EHR. We then add explanatory prompts to these sections and concatenate them with separate tokens to create the input text. To train a text generation model, we perform LoRA fine-tuning on the ClinicalT5-large model. On the final test data, our approach achieved a ROUGE-1 score of 0.394, which is comparable to the top solutions. ^∗ The first two authors contributed equally to this work. Our code is available at <https://github.com/githubhyz/DischargeMe_BioNLP2024>. § INTRODUCTION Electronic health records (EHR) eliminate the need for end-users to write medical records by hand and provide easy access to digital records <cit.>. However, the use of EHR sometimes increases the burden on end-users <cit.>. With this in mind, there has been active research in recent years into applying natural language processing (NLP) to EHR to reduce the burden on end-users <cit.>. To explore the potential of NLP in EHR, the shared task “Discharge Me!” <cit.> at the BioNLP Workshop 2024 evaluates the ability to generate discharge summaries. The goal of this task is to reduce the time and effort clinicians spend on writing detailed notes in the EHR. Participants develop a pipeline that leverages the EHR data to generate discharge summaries. In this paper, we present our approach to the shared task. Fig. <ref> provides an overview of our pipeline. We preprocess the EHR, as illustrated in Fig. <ref>, by removing noise and extracting sections that are essential for the target summary. The sections are selected based on a predetermined priority. For extracted sections, we prepend the prompt from Table <ref> to the beginning of the text, concatenate these sections using tokens, and thus prepare the input text. We also removed noise from the target text. We then fine-tuned ClinicalT5 <cit.>, which is pre-trained on clinical texts. On the final test data, our approach achieved a ROUGE-1 score of 0.394, which is comparable to the top solutions. § RELATED WORK §.§ Text generation models in clinical domain Decoder. ClinicalGPT <cit.>, whose base model is BLOOM-7B <cit.>, uses LoRA <cit.> for fine-tuning and applies the reinforcement learning process used in InstructGPT <cit.>. BioMistral-7B <cit.> underwent additional pre-training of the Mistral-7B <cit.> model on PubMed Central <cit.> and showed good performance on the clinical knowledge QA task. Encoder-decoder. ClinicalT5 <cit.>, whose base model is T5 <cit.>, is the model pre-trained on clinical texts[Both of <cit.> and <cit.> refer to their models as ClinicalT5.]. <cit.> performed additional pre-training of the SciFive-PubMed-PMC <cit.> model on MIMIC-III <cit.>. Meanwhile, <cit.> pre-trained T5 from scratch using MIMIC-III and MIMIC-IV <cit.>.
§.§ Clinical text summarization Discharge Summarization. <cit.> showed that although 33% of the discharge summaries generated by GPT-4 <cit.> from the EHR were error-free, some contained hallucinations and omitted relevant information. Note, however, that the shared task does not allow data to be sent to third parties via an API. Problem List Summarization (ProbSum). ProbSum <cit.> is a task aimed at generating a list of problems in a patient's daily care plan based on hospital records. In the BioNLP 2023 shared task <cit.> focused on ProbSum, the ensemble of ClinicalT5 models demonstrated robust performance <cit.>, and the approach combining Flan-T5 <cit.> with GPT2XL <cit.> also yielded strong results <cit.>. In the experiments using the shared task dataset, LLMs adapted to the medical domain demonstrated performance equal to or better than medical experts <cit.>. § TASK OVERVIEW §.§ Task description Participants use an EHR dataset from MIMIC-IV <cit.> and develop a pipeline to generate two discharge summaries: the “Brief Hospital Course” section for patients and the “Discharge Instructions” section for clinicians. Table <ref> shows an example of both sections. §.§ Dataset description The original datasets <cit.> include training, validation, phase I test, and phase II test sets. Participants use the training and validation sets to develop their pipeline, with the final evaluation performed on a subset of 250 samples from the phase II test set. See Appendix <ref> for more details. Note that although the datasets include metadata such as radiology reports in addition to the EHR and discharge summaries, we did not use this information in designing a simple pipeline. For more details, see the task website[<https://stanford-aimi.github.io/discharge-me/>]. We created a new split with a 4:1 training-to-validation ratio using the original training and validation sets. Note that the EHR in the dataset contains the target texts: the “Brief Hospital Course” and the “Discharge Instructions” sections. As shown in Fig. <ref>, the “Brief Hospital Course” section is usually located in the middle of the discharge summary, while the “Discharge Instructions” section is generally located at the end of the EHR. §.§ Evaluation metrics In this task, the following eight evaluation metrics[<https://github.com/Stanford-AIMI/discharge-me/tree/main/scoring>.] are used to compare the generated texts with the target texts: BLEU-4 <cit.>, ROUGE-1, ROUGE-2, ROUGE-L <cit.>, BERTScore <cit.>, METEOR <cit.>, AlignScore <cit.>, MEDCON <cit.>. The overall score is calculated by first averaging the scores for each target, and then averaging these values. § PIPELINE §.§ Input text preparation We removed the target discharge summaries from the EHR as preprocessing. As shown in Fig. <ref>, the EHR contains redundant line breaks and detailed data. When the EHR is used directly as input text, this redundancy can increase the length of the input text. To mitigate this, we removed the noise from the EHR and selectively extracted the relevant sections for each target, thus avoiding the excessive length of the input text[The criteria for section selection are ad hoc, as mentioned in the Limitations section.]. These sections were selected by excluding those with detailed data, such as timestamps[Although the “Pertinent Results” section contains timestamps, we exclude them and use this section as input for the “Brief Hospital Course” section. 
See the Appendix <ref> for details.], or those without specific information, such as the “Admission Date” section. Note that, in the case of preparing the input text for the model generating the “Brief Hospital Course” section, given the actual workflow of writing discharge summaries, we did not use the sections following this section in the input text. For sections extracted from the EHR, we added an explanatory prompt to the beginning of each section and then concatenated the sections using the tokens to create the final input text. Table <ref> shows the prompts and priorities of the selected sections used in the input text for each target discharge summary. The sections in the input text were ordered according to the specified priorities, rather than their original order in the EHR. The input text was truncated if it exceeded the maximum text length[1596 tokens]. In Appendix <ref>, examples of input texts are shown in Tables <ref> and <ref>, respectively, for “Brief Hospital Course” and “Discharge Instructions”. These input texts were prepared from the EHR in Fig. <ref>. Histograms of the length of the input text are shown in Fig. <ref>. §.§ Target text preparation As shown in Table <ref>, the target texts contain many unnecessary line breaks. To prevent the line breaks from hindering the learning of the model, we removed them during preprocessing. In Appendix <ref>, the texts before and after preprocessing for “Brief Hospital Course” are shown in Table <ref>, and those for “Discharge Instructions” are shown in Table <ref>. Histograms of the length of the target text are shown in Fig. <ref>. §.§ Text generation Using the input and target texts prepared in Sections <ref> and <ref>, we performed LoRA <cit.> fine-tuning on the ClinicalT5-large[<https://huggingface.co/luqh/ClinicalT5-large>] model published by <cit.>. The ClinicalT5-large model has 770M parameters with 24 layers. In Appendix <ref>, the hyperparameters for fine-tuning and LoRA are shown in Tables <ref> and <ref>. The hyperparameters to generate each target discharge summary are shown in Table <ref>. § EXPERIMENTS §.§ Results for the final test data Table <ref> presents the evaluation metrics values of the participating teams for the final test data. While our method did not achieve the highest scores of WisPerMed <cit.>, it demonstrated relatively good performance in ROUGE-1, ROUGE-L, and BERTScore. In particular, we achieved a ROUGE-1 score of 0.394, which is comparable to top solutions such as those of HarmonAI Lab at Yale and aehrc. §.§ Qualitative observation Table <ref> presents the summaries generated by our pipeline from the EHR for the target summaries in Table <ref>. While the detailed progress reports and discharge instructions may differ, the overall gist remains the same. In addition, unnecessary line breaks that were present in the original target summaries do not appear in the generated summaries. § CONCLUSION We presented our approach to the shared task “Discharge Me!” at the BioNLP Workshop 2024. Extracting the relevant sections from the EHR, we added explanatory prompts to these sections and concatenated them with tokens to create the input text. We then performed LoRA fine-tuning on the ClinicalT5-large model. On the final test data, our approach achieved a ROUGE-1 score of 0.394, which is comparable to the top solutions. § LIMITATIONS * Our pipeline cannot be applied to an EHR with different formats, resulting in a lack of generalizability. 
In fact, even in this shared task dataset, the lack of consistency in the original data sometimes makes it impossible to extract sections, resulting in incomplete summaries. * When preparing the input text, adding prompts for each extracted section results in a longer length than simply concatenating sections with tokens. * The effectiveness of our pipeline is not tested against other text generation models such as BioMistral-7B <cit.> and the ClinicalT5-large model published by <cit.>. * While the selection and prioritization of the EHR sections used in the input text is somewhat ad-hoc, since extensive experiments would be required to compare the selection and prioritization, we did not conduct them in this study due to time and resource constraints. * While the cleaned target texts are used for training, the original target texts with many line breaks are used for evaluation. This leads to a discrepancy between the target text distributions of training and evaluation. § ETHICS STATEMENT We conducted our research with careful consideration of data use and in accordance with the Data Use Agreement[<https://physionet.org/content/discharge-me/view-dua/1.3/>]. It is prohibited to identify individuals or organizations from the examples presented in the paper. § ACKNOWLEDGEMENTS We would like to thank the Program Committee for their thorough review and valuable suggestions. This work was supported by JST SPRING, Grant Number JPMJSP2110. This study was partially supported by JST CREST JPMJCR21N3. acl_natbib § DETAILS OF DATASETS In this task, we use the dataset created by the MIMIC-IV's submodules MIMIC-IV-ED <cit.> and MIMIC-IV-Note <cit.>. The dataset is available on PhysioNet <cit.>, and its use requires completion of the CITI[<https://about.citiprogram.org/>] training and credentialing process. Table <ref> lists the number of samples for the data splits. § DETAILS OF INPUT TEXT This section first explains the detailed preprocessing required to create input text from the EHR. It then provides examples and statistical information before and after preprocessing. §.§ Extraction of simple sections This section explains the process for extracting the “Sex”, “Service”, “Allergies”, “Chief Complaint”, and “Major Surgical or Invasive Procedure” sections. To extract these sections, we used specific regular expressions such as . §.§ Extraction of complex sections This section explains the process for extracting the “History of Present Illness”, “Past Medical History”, “Pertinent Results”, “Medications on Admission”, “Discharge Medications”, “Discharge Disposition”, “Discharge Diagnosis”, and ”Discharge Condition” sections. We performed more detailed processing and pattern matching to efficiently extract the text of these sections. For example, for the “Discharge Condition” section, we used the regular expression and it matches the diagnosis text up to the “Discharge Condition” section. §.§ Detailed processing of each section “Name”. The patient's name is given as “___” and we used it directly. “Sex”.  We converted “M” to “Male” and “F” to “Female”. “Pertinent Results”. Timestamps in lines like “__ 08:00AM BLOOD __” were removed using regular expressions. In addition, list sections are converted to “*” format to maintain text consistency and clarity. “Medications on Admission”. List sections are converted to “*” format to maintain text consistency and clarity. “Discharge Condition”. We changed a colon in the extracted text to “is”. For example, “Condition: Stable” is changed to “Condition is Stable”. 
“Discharge Medications”. List sections are converted to “*” format to maintain text consistency and clarity. §.§ Other processing We ensure textual continuity by replacing line breaks with spaces and trimming excess spaces. In cases where no matching text is found, the default response is designated as “Unknown”. §.§ Examples of input text Tables <ref> and <ref> show examples of input text. These examples illustrate that the ClinicalT5-large model is fine-tuned with different input text for each target discharge summary. §.§ Statistical information Fig. <ref> shows histograms of the text length (in tokens) of the EHR and the input texts for the training and validation sets. Table <ref> shows the statistical information for these histograms. As shown in Fig. <ref> and Table <ref>, the preprocessing significantly reduces the length of the text. § DETAILS OF TARGET TEXT §.§ Extraction and concatenation of segments In the first process of segment extraction, we divide the text into segments based on blank lines and identify the distinct segments. We then remove spaces and line breaks from each segment and discard empty segments to retain only meaningful segments. Multiple consecutive spaces within each segment are replaced by a single space to improve readability. Finally, we reassemble the cleaned segments with line breaks to make them more suitable for training language models. §.§ Examples of preprocessed target text Tables <ref> and <ref> show examples of the target text before and after preprocessing. These examples illustrate that redundant line breaks are removed after preprocessing. §.§ Statistical information Fig. <ref> shows histograms of the text length (in tokens) of the target texts before and after preprocessing for the training and validation sets. Table <ref> shows the statistical information for these histograms. As shown in Fig. <ref> and Table <ref>, the preprocessing slightly reduces the length of the text. § DETAILS OF FINE-TUNING We used Pytorch <cit.> and huggingface transformers <cit.> to implement and fine-tune our models. We also use peft <cit.> for LoRA. Table <ref> shows the text length (in tokens) used by our models. Table <ref> shows the hyperparameters used for fine tuning. Table <ref> shows the hyperparameters used for LoRA. Table <ref> shows the hyperparameters to generate each target discharge summary.
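To make the fine-tuning setup above concrete, a minimal sketch of LoRA adaptation of a ClinicalT5-style seq2seq model with Hugging Face transformers and peft is given below. The rank, alpha, dropout, target modules and the example strings are illustrative assumptions rather than the values reported in the hyperparameter tables; only the 1596-token truncation length is taken from the text.

# Illustrative sketch only: LoRA hyperparameters below are assumptions, not the paper's settings.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
from peft import LoraConfig, TaskType, get_peft_model

model_name = "luqh/ClinicalT5-large"   # checkpoint referenced above
tokenizer = AutoTokenizer.from_pretrained(model_name)
# Depending on how the checkpoint is published, from_flax=True may be needed here.
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

lora_config = LoraConfig(
    task_type=TaskType.SEQ_2_SEQ_LM,   # encoder-decoder summarization
    r=16,                              # assumed LoRA rank
    lora_alpha=32,                     # assumed scaling factor
    lora_dropout=0.1,                  # assumed dropout
    target_modules=["q", "v"],         # T5 attention query/value projections
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()     # only the LoRA adapters are updated

# One illustrative training step on a single (input, target) pair.
inputs = tokenizer("Sex is Male Service: MEDICINE ...", return_tensors="pt",
                   truncation=True, max_length=1596)
labels = tokenizer("Brief hospital course text ...", return_tensors="pt",
                   truncation=True).input_ids
loss = model(**inputs, labels=labels).loss
loss.backward()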
http://arxiv.org/abs/2406.18074v1
20240626050614
Few-Shot Medical Image Segmentation with High-Fidelity Prototypes
[ "Song Tang", "Shaxu Yan", "Xiaozhi Qi", "Jianxin Gao", "Mao Ye", "Jianwei Zhang", "Xiatian Zhu" ]
cs.CV
[ "cs.CV", "cs.AI" ]
Song Tang^1,2, Shaxu Yan^1, Xiaozhi Qi^5, Jianxin Gao^1, Mao Ye^3, Jianwei Zhang^2, Xiatian Zhu^4
Corresponding authors: Mao Ye (cvlab.uestc@gmail.com) and Xiatian Zhu (xiatian.zhu@surrey.ac.uk).
[1] IMI Group, School of Health Sciences and Engineering, University of Shanghai for Science and Technology, Shanghai, China
[2] TAMS Group, Department of Informatics, Universität Hamburg, Hamburg, Germany
[3] School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu, China
[4] Surrey Institute for People-Centred Artificial Intelligence, and Centre for Vision, Speech and Signal Processing, University of Surrey, Guildford, UK
[5] Shenzhen Key Laboratory of Minimally Invasive Surgical Robotics and System, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, China
§ ABSTRACT Few-shot Semantic Segmentation (FSS) aims to adapt a pretrained model to new classes with as few as a single labelled training sample per class. Although prototype-based approaches have achieved substantial success, existing models are limited to imaging scenarios with considerably distinct objects and not highly complex backgrounds, e.g., natural images. This makes such models suboptimal for medical imaging, where neither condition holds. To address this problem, we propose a novel Detail Self-refined Prototype Network (DSPNet) for constructing high-fidelity prototypes that represent the object foreground and the background more comprehensively. Specifically, to construct global semantics while maintaining the captured detail semantics, we learn the foreground prototypes by modelling the multi-modal structures with clustering and then fusing them in a channel-wise manner. Considering that the background often has no apparent semantic relation in the spatial dimensions, we integrate channel-specific structural information under sparse channel-aware regulation. Extensive experiments on three challenging medical image benchmarks show the superiority of DSPNet over previous state-of-the-art methods. The code and data are available at <https://github.com/tntek/DSPNet>.
Keywords: Few-shot semantic segmentation; Medical image; High-fidelity prototype; Detail self-refining
§ INTRODUCTION Medical image segmentation plays a critical role in clinical processes and medical research, such as disease diagnosis <cit.>, treatment planning <cit.> and follow-up <cit.>. In the medical field, well-annotated samples are limited due to privacy protection and the requirement of clinical expertise. Within this context, Few-shot Semantic Segmentation (FSS) methods <cit.> demonstrate their advantages in this domain, using one or a few annotated support samples to predict objects of the same class in query data. The key to FSS is building a resemblance between the support and query images. Existing FSS methods follow three lines. The first constructs support-image-based guidance to boost query image segmentation, e.g., the two-branch architecture with interaction <cit.>. The second identifies the shared features by building a resemblance between the support and query images, e.g., attention modules <cit.> and graph networks <cit.>. The third comprises prototypical approaches <cit.>, which mine prototypes from support images to build a resemblance with the query images. Among them, the third is currently the prevalent scheme due to its generality and robustness to noise.
However, given that the prototype extraction utilizes the pooling operation, e.g., Mask Average Pooling or Average Pooling, this scheme suffers from an inherent limitation: Since the pooling is prone to losing local details, the conventional prototypes lead to low-discriminative feature maps that confuse the foreground and background. Existing methods address this above-mentioned limitation by incrementally mining new prototypes for diverse detail representations, i.e., the detail discovery scheme marked by a yellow zone in Fig. <ref>. For instance, the single class prototype for foreground was enriched by several part-aware prototypes <cit.> or compensation prototypes <cit.>. For background, Average Pooling was employed at the regular grid to generate diverse local prototypes <cit.>. This strategy works well in imaging scenarios with (i) considerably distinct objects and (ii) not highly complex background, e.g., natural images. However, the medical images with highly heterogeneous textures[Refer to these considerable and complicated distinct structures/tissues with compact boundaries between them.] do not satisfy the conditions. Namely, the incremental strategy cannot provide complete detail representations for medical images. To overcome this problem above, in this paper, we propose a new Detail Self-refined Prototype Network (). As demonstrated in Fig. <ref> (see the green zone), in contrast to constructing new prototypes, our scheme highlights enhancing details representation of off-the-shelf prototypes by detail self-refining, leading to high-fidelity prototypes. In the proposed network, our detail self-refining involves two novel attention-like modules, called Foreground Semantic Prototype Attention () and Background Channel-structural Multi-head Attention (). In , to account for the clear semantics of foreground, we mine the semantic prototypes at the class level as the detail prototypes, using superpixel clustering. Then, they are fused to a single class prototype in a channel-wise one-dimensional convolution fashion, assembling the global semantics while maintaining the local semantics. In , since the complicated background in medical images is often unsemantic, we take the ones generated by Average Pooling at the regular grid as the background detail prototypes, instead of mining detail information from the spatial dimension. Then, the channel-specific structural information is explored by combining learnable global information with an adjustment highlighting sparse-relative channels. In the end, the elements of each detail prototype are channel-wise refreshed independently by the corresponding channel-specific structural information. The contributions of this work are three folds. We propose: (1) A novel prototypical FSS approach that enhances prototypes' self-representation for complicated details, totally discrepant from the previous incremental paradigm of constructing new detail prototypes. (2) A self-refining method for class prototype that integrates the cluster prototypes, i.e., the mined semantic details, into an enhanced one in an attention-like fashion and indicates the potential of fusing cluster-based local details for complete foreground representation. (3) A self-refining method for background prototype that incorporates the channel-specific structural information by multi-head channel attention with sparse channel-aware regularization and provides a conceptually different view for background details modeling. 
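For reference, the masked average pooling criticized at the start of this analysis can be written in a few lines. This is the generic operation used by prototypical FSS methods rather than any particular authors' implementation, and it makes plain why a single pooled prototype discards local detail:

import torch.nn.functional as F

def masked_average_pooling(feat, mask):
    # feat: (B, D, H, W) support feature map; mask: (B, 1, h, w) binary foreground mask.
    # All foreground positions are collapsed into one (B, D) prototype, averaging away spatial detail.
    mask = F.interpolate(mask.float(), size=feat.shape[-2:], mode="bilinear", align_corners=False)
    return (feat * mask).sum(dim=(2, 3)) / (mask.sum(dim=(2, 3)) + 1e-5)

def similarity_map(query_feat, prototype):
    # Cosine similarity between a (B, D, H, W) query feature map and a (B, D) prototype
    # yields a single (B, H, W) response map per class, which is where fine structure is lost.
    return F.cosine_similarity(query_feat, prototype[:, :, None, None], dim=1)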
§ RELATED WORK §.§ Medical Image Segmentation Currently, the deep neural network approaches dominate the medical image segmentation field. The early phase shares models with the natural image semantic segmentation. Fully Convolutional Networks (FCNs) <cit.> first equipped vanilla Convolutional Networks (CNN) with a segmentation head by introducing Up-sampling and Skip layer. Following that, the encoder-decoder-based methods <cit.> are developed. Unlike the coarse reconstruction in FCNs, the symmetrical reconstruction of the decoder can capture much richer detailed semantics. With the application of deep learning in the medical field, the medical image-specific models merge correspondingly, among which U-Net <cit.> is extensively recognized for its superior performance. Besides symmetrical encoder-decoder architecture, U-Net infuses the skipped connections to facilitate the propagation of contextual information to higher resolution hierarchies. Inspired by it, several variants of U-Net are designed, including U-Net 3D <cit.>, Atten-U-Net <cit.>, Edge-U-Net <cit.>, V-Net <cit.> and Y-Net <cit.>. These segmentation models mentioned above only work in a supervised fashion, relying on abundant expert-annotated data. Thus, they cannot apply to the few-shot setting where we need to segment an object of an "unseen" class as only a few labeled images of this class are given. §.§ Few-shot Semantic Segmentation (FSS) The key to solving FSS is building a class-wise similarity between the query and support images. Following this view, the existing methods can be divided into three categories. The first category constructs support images-based guidance <cit.>. For instance, <cit.> developed a two-branch approach where a conditioning branch imposes controlling on the logistic regression layer of the segmentation branch. <cit.> introduced squeeze and excitation blocks into the conditioning branch to encourage dense information interaction between the two branches. The second category designed novel network modules, e.g., attention modules <cit.> and graph networks <cit.>, for discriminative representations, by which the features shared by query and support images were identified. The third mainstream is prototypic network <cit.> that prototypes bridge the similarity computation in a meta-learning fashion. Here, the prototypes are specific features with semantics extracted from the support images. Recently, PANet <cit.> achieved impressive performance on the natural image segmentation task, performing dual-directive alignment between the query and support images. SSL-PANet <cit.> transferred the PANet architecture into the medical image segmentation where self-supervision with superpixels and local representation ensure the unsupervised segmentation. Following that, anomaly detection-inspired methods enhanced the performance by introducing self-supervision with supervoxels<cit.> or learning mechanism of an expert clinician <cit.>. Our belongs to the third category above, having two significant discrepancies. Compared with methods working with classes-abundant natural images, e.g., PANet, is a medical image-specific model with limited labelled support data. On the other hand, considers the limitation of local information loss from a new view of detail self-refining, which is totally different from the existing prototypical methods. 
§.§ Attention Method in Few-Shot Semantic Segmentation For few-shot semantic segmentation tasks, attention mechanism <cit.> is a popular technique to build the relationship between the support and query images. The existing approaches can be divided into two categories: (i) graph-based <cit.> and (ii) non-graph-based <cit.>. The methods belonging to the first category employing a graph model to activate more pixels, such that the correspondence between support and query images is enhanced. For example, <cit.> fused the graph attention and the last layer feature map to generate an enhanced feature map to solve the problem of foreground pixel loss in the attention map. The core idea of the second category methods is to build the correspondence based on feature interaction between the support and query data, e.g., multi-scale contextual features <cit.>, affinity constraint <cit.>, mix attention map <cit.>. Unlike the spatial attention-based methods above, both and in are channel attention-like methods. For , the channel-wise fusion ensures the deeper semantic fusion from local to global, whilst in , the detail self-refining relies on the channel structural information. § PROBLEM STATEMENT In the case of few-shot segmentation, the dataset includes two parts, the training subset 𝒟_tr (annotated by 𝒴_tr) and the test subset 𝒟_te (annotated by 𝒴_te), both of which consist of image-mask pairs. Furthermore, the 𝒟_tr and 𝒟_te do not share categories. Namely, 𝒴_tr∩𝒴_te= ∅. The goal of few-shot semantic segmentation is to train a segmentation model on 𝒟_tr that can segment unseen semantic classes 𝒴_te in images in 𝒟_te, given a few annotated examples of 𝒴_te, without re-training. To reach the goal above, we formulate this problem in a meta-learning fashion, the same as the initial few-shot semantic segmentation work. Specifically, 𝒟_tr= {S_i,Q_i}_i=1^N_tr and 𝒟_te= {S_i,Q_i}_i=1^N_te are sliced into several randomly sampled episodes, where N_tr and N_te are the episodes number for training and testing, respectively. Each episode consists of K annotated support images and a collection of query images containing N categories. Namely, we consider an N-way K-shot segmentation sub-problem. Specifically, the support set S_i= { (I_k^s,m_k^s(c_j)) }_k=1^K contains K image-mask pairs of a gray-scale image I∈R^H× W and its corresponding binary mask m∈{0,1}^H× W for class c_j∈C_tr,j=1,2,⋯,N. The query set Q_i contains V image-mask pairs from the same class as the support set. While the training on 𝒟_tr, over each episode, we learn a function f(I^q, S_i), which predicts a binary mask of an unseen class when given the query image I^q ∈Q_i and the support set S_i. After a series of episodes, we obtain the final segmentation model, which is evaluated on N_te in the same N-way K-shot segmentation manner. Following the common practice in <cit.>, this paper set N=K=1. § METHODOLOGY In this work, we propose a detail representation enhanced network () for prototypical FSS, building on the self-supervision framework <cit.>. As shown in Fig. <ref>(a), consists of three modules from left to right: (i) The CNN-based feature extractor f(·); (ii) the detail self-refining block DSR(·); and (iii) the segmentation block based on the cosine similarity. Suppose the support and query images are denoted by I_s and I_q, respectively. The segmentation begins with feature extraction F_s=f_θ(I_s) and F_q=f_θ(I_q). 
Furthermore, high-fidelity foreground prototype and background prototypes are produced by the detail self-refining block, denoted by P_k={P_f, P_b}=DSR(F_q, F_s, M_s) where M_s is the support masking label. Finally, we obtain the query prediction of segmentation SEG(F_q, P_k), computing cosine similarity between F_q and obtained prototypes P_k in a convolution fashion. In the segmentation process above, the optimal prototype generation encouraged by the detail self-refining block, i.e., DSR(·, ·, ·), distinguishes our from the previous work. As shown in Fig. <ref>(b), after RAN calibrates F_s and F_q to a semantics fused feature maps F̂_s, and extract cluster-based prototypes and Average Pooling-based prototypes from F̂_s respectively and take them as raw detail prototypes. Then, the high-fidelity class prototype P_f and background prototypes P_b are further obtained by the channel-wise fusion in and the sparse channel-aware multi-head channel attention in . In the rest of this section, we will elaborate on the three key components. §.§ Resemblance Attention Network In the FSS field, Resemblance Attention Network (RAN) <cit.> is a classic module to integrate the support and query features <cit.>. In the proposed , RAN engages in filtering irrelevant texture and objects between F_s and F_q. Fig. <ref> presents its network architecture. When support and query feature maps F_s, F_q are input, they are first reshaped to feature vector A_s and A_q, respectively. After that, in a Query-Key-Value attention manner with residual connection, the A_s, A_q are fused to F̂_s where Q= V=A_s, K=A_q. The process can be formulated by Eq. (<ref>). F̂_s = ϕ(A_s^T×A_q)×A_s/A_sA_q+ A_s . where ϕ(·) stands for softmax operation, × means matrix multiplication, ϕ(A_s^T×A_q) means the similarity-based probability matrix weighting A_s. §.§ Foreground Semantic Prototype Attention To obtain high-fidelity class prototype for the semantic foreground, we explore the local semantics in the foregroud and fuse them to form global semantics without local semantics loss. We accomplish this idea using the cluster-based detail prototypes and channel-wise attention with local semantics guidance. Overview. As shown in Fig. <ref>(a), to get more local semantics, we first employ the superpixel-guided clustering method <cit.> to mine N_s cluster prototypes, denoted by P_c ={P_c^i}_i=1^N_s where P_c^i∈ℝ^1 × D, D is the dimension of prototype. The intuitive fusion, e.g., vanilla weighting without prior knowledge <cit.>, can obtain the global semantics but suffers from confusing detail semantics. Therefore, we propose an attention-like cluster prototype fusion to address this issue, implementing detail self-refining and foreground tailoring sequentially. Attention-like cluster prototype fusion. As shown in the middle of Fig. <ref> (marked by grey box), this attention can be implemented in the fashion of Query-Key-Value. Taking Q=F̂_s, K= V=P_c, we can summarize this module to the following equation. F̅_s= ϕ(F̂_s C P_c) D P_c, where ϕ(·) is softmax computation; operator C and D respectively means the computation for cosine similarity measurement and channel-wise prototype fusion, whose details are presented in the following. Since the size of F̂_s and P_c are different, computation of C does not follow cosine similarity's definition, but performing in prototype-wise. 
Specifically, each prototype in P_c, i.e., P_c^i, is used to compute similarity with the supporting feature maps F̂_s in a one-dimensional convolution manner, in which the convolution calculation is replaced by cosine similarity computation. Thus, the N_s prototypes lead to N_s similarity maps, which can be collectively written as S_s=F̂_s C P_c∈ℝ^(H_s × W_s) × N_s where (H_s, W_s) is map size. For any map in S_s, denoted by S_s^i, its computation can be expressed as S_s^i = sim1D (F̂_s, P_c^i), where function sim1D (·,·) stands for the similarity computation working in the one-dimensional convolution fashion. The value of S_s^i at position (h,w) is the cosine similarity between P_c^i and F̂_s at position (h,w). Namely, S_s^i(h,w)=(P_c^i)^T×F̂_s(h,w)/P_c^iF̂_s(h,w), where F̂_s(h,w)∈ℝ^1× D represents the feature vector of the feature maps F̂_s at position (h,w) along channel dimension. To incorporate the knowledge represented by the similarity maps S_s into the cluster prototypes P_c, we also adopt a one-dimensional convolution to implement the computation of D, as illustrated in Fig. <ref>(b) and (c). Specifically, the computation begins with the channel-wise generation of convolution filters. Given that the cluster prototypes P_c are arranged as shown in Fig. <ref>(b). We slice P_c along the channel-dimension and obtain D convolution vectors {K_i}_i=1^D where K_i∈ℝ^1 × N_s contains cluster prototypes' semantic component on the i-th channel. After that, as done in Fig. <ref>(c), we conduct one-dimensional convolution to obtain fused maps F̅_s∈ℝ^D × H × W. The computation for the i-th map can be expressed as: F̅_s^i = conv1D(ϕ(S_s), K_i), where ϕ(·) is softmax operation, ϕ(S_s) stands for probability map, K_i work as the convolution filter. In this end, to suppress introduced noise in the fusion step, we tailor the fused map F̅_s to high-fidelity foreground prototype P_f by Mask Average Pooling. P_f = ∑_h,wF̅_s(h,w) ⊙ m_s(h, w)/∑_h,w m_s(h,w), where m_s is the given mask of the support image and resized to the same as F̅_s. Remark: In Eq. (<ref>), S_s is essentially semantic response maps concerning the cluster prototypes, such that probability map ϕ(S_s) is noticeably relational to the detail semantics. That is, this fusion ensured by D computation is guided by the detail semantics represented in S_s. Equivalently, the fusion process preserves the detail semantics, as our expectation. Besides, two designs differs from the previous work. First, reduces the mined cluster prototypes to a fused one instead of using these prototypes separately <cit.>. Second, the proposed channel-wise attention leads to global semantics preserving the local semantics unlike spatially weighting <cit.>. §.§ Background Channel-structural Multi-head Attention   Unlike the foreground taking the cluster prototypes as the local details, the background in medical images is usually semantic-less in a large scope. Therefore, in this paper, we do not mine from the spatial dimension but deem the structural information in the channel dimension as the local details. Within this context, we design a controllable channel attention mechanism to jointly model the channel-specific structural information and incorporate them into the raw background prototypes. Overview. As illustrated in Fig. <ref>(a), begins with generating raw detail prototypes. By Average Pooling and reshaping, F̂_s is converted to P_n ∈ℝ^(H× W) × D. 
Following that, the controllable multi-head channel attention module refreshes P_n to high-fidelity background prototypes P_a. Finally, P_a is reshaped to feature maps F_s and further tailored to the high-fidelity background prototypes P_b by the background zone in pooled support mask M_r. Controllable multi-head channel attention. The proposed channel attention mechanism encodes the channel-structural information into raw background prototypes in an element manner. For a raw prototype, their elements are independently refined by the structural information of different channels. We achieve this by the D-way architecture illustrated in Fig. <ref>(b). Suppose that for any raw prototype in P_n, denoted by P_n^k, the converted high-fidelity prototype is P_a^k. In the Q-K-V fashion, we set Q=Q_n, K=P_n, V=P_n^k where Q_n is transformed by channel-wise slicing P_n. In the proposed module, P_n and P_n^k are copied D times and inputted into the D heads, respectively. At the same time, Q_n is inputted channel-wise. That is, the j-th head takes the j-th component Q_n^j as the input. The multi-head module refining P_n^k can be formulated as P_a^k = cat({P_a^k,j}), P_a^k,j = h_j(Q_n^j, Q_n, P_n^k ), 1 ≤ j ≤ D, where cat(·) concatenates the input set to a vector according to their indices; h_j(·,·,·) is the j-th channel attention head generating P_a^k,j (the j-th element in P_a^k) that we elaborate as follows. In Eq. (<ref>), the objective of the attention head h_j is encoding the j-th channel-specific structural information, denoted by a'_j, into the raw P_n^k. Under the attention framework, the encoding can be implemented by a weighting operation P_n^k ×a'_j, whilst the a'_j generation is the core problem we need to address. For this issue, as depicted in Fig. <ref>(c), we provide a controllable design consisting of (i) global exploration module and (ii) sparse channel-aware regulating module. Among them, the former predicts the global channel structural information of the j-th channel a_j, whilst the latter serves as a controller by injecting the j-th channel-specific adjustment r. Thus, the working mechanism h_j can be formulated as h_j = P_n^k×(r⊙a_j )^a'_j, where parameter a_j is learnable; operator ⊙ means element-wise multiplying. In Eq. (<ref>), the generation of adjustment r involves two blocks in the sparse channel-aware regulating module (see the two dark grey box in Fig. <ref>(c)). First, the channel similarity computation, formulated by Eq. (<ref>), captures the dynamics of the relationship between j-th channel and other channels. w_c = ϕ(cossim(Q_n,Q_n^j)),  w_c,i=(Q_n^i)^T×Q_n^j/ Q_n^i Q_n^j, where w_c ∈ℝ^D is the channel similarity whose i-th element is w_c,i, function cossim(·,·) measures the cosine similarity of vector Q_n^j over set Q_n, ϕ is softmax operation. Subsequently, the incorporation unit generates adjustment coefficients by high-lighting the sparse-relative channels indexed by masked frozen vector m_w. This process can be formulated as r = 1 + β(w_c ⊙m_w ), where rade-off parameter β stands for the control strength. As mentioned above, the proposed h_j involves two important parameters, i.e., a_j (Eq. (<ref>)) and m_w (Eq. (<ref>)). In our design, both of them are initiated by a pre-set sparse vector w_i that represents a prior knowledge about the channel structural pattern. Specifically, at the beginning of model training, we set m_w=mask(w_i) and a_j=w_i where function mask(·) outputs Boolean vector whose locations of 1 corresponds to the non-zero places in input vector. 
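Before the remark that follows, a compact sketch may help fix one reading of the head h_j. This is an interpretation under stated assumptions rather than the authors' code: in particular, how the 5-element sparse pattern w_i is expanded into a per-channel (D × D) prior is assumed, and a_init below simply takes that expanded prior as given.

import torch
import torch.nn as nn
import torch.nn.functional as F

class ControllableChannelAttention(nn.Module):
    # One plausible reading of the equations above: a_init is a (D, D) tensor whose j-th row
    # is the sparse neighbour prior centred on channel j (an assumption about how w_i is expanded).
    def __init__(self, a_init, beta=0.3):
        super().__init__()
        self.a = nn.Parameter(a_init.clone())                # learnable global structure, rows a_j
        self.register_buffer("m_w", (a_init != 0).float())   # frozen sparse mask m_w = mask(w_i)
        self.beta = beta                                     # control strength β

    def forward(self, P_n):
        # P_n: (N, D) raw background prototypes from Average Pooling (N = H*W grid cells).
        Q_n = F.normalize(P_n.t(), dim=1)                    # (D, N) channel-wise slices Q_n^j
        w_c = torch.softmax(Q_n @ Q_n.t(), dim=1)            # (D, D) softmaxed channel cosine similarities
        r = 1.0 + self.beta * (w_c * self.m_w)               # r = 1 + β (w_c ⊙ m_w), one row per channel j
        A_eff = r * self.a                                   # row j holds r_j ⊙ a_j
        return P_n @ A_eff.t()                               # (N, D): P_a[k, j] = P_n^k · (r_j ⊙ a_j)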
Remark: In our controllable attention mechanism, the core idea is imposing sparse channel-aware regulating to adjust the learnt global channel relation, leading to channel-specific structural information. Here, the sparse constraint is motivated by the ubiquitous sparse nature of neural connections, whose rationality is verified by much work <cit.>. Also, from a methodological point of view, our structure can be understood as a piece of work of structural learning-based attention <cit.>, but in the channel dimension. For instance, shifted window partitioning in Swin Transformer <cit.> introduces spatial relation constraint to self-attention. Similarly, our sparse channel-aware regulating introduces a channel structural constraint, i.e., sparse relation (r), to the predicted global channel structural information (a_j). §.§ Loss Function We regulate cross-entropy regularization to supervise this model training process: ℒ_seg= - 1/HW∑_h^H∑_w^W∑_j∈{f,b}^m_q^j(h,w) ⊙ log(m̂_q^j(h,w)), where m̂_q^j(h,w) is the predicted results of the query mask label m_q^j(h,w); in {f,b}, f and b means foreground and background, respectively. Also, following <cit.>, we perform another inverse learning where the query images serve as the support set to predict labels of the support images. Thus, we encourage a prototypical alignment formulated by ℒ_reg = - 1/HW∑_h^H∑_w^W∑_j∈{f,b}^m_s^j(h,w) ⊙ log(m̂_s^j(h,w)). Overall, for each training episode, the final objective of is defined as follows: ℒ_= ℒ_seg + ℒ_reg. § EXPERIMENTS This part first introduce the experimental settings, followed by the segmentation results on three challenging benchmarks. The extensive model discussion is provided in this end. §.§ Data Sets To demonstrate the effectiveness of , we conduct evaluation on three challenging datasets with different segmentation scenarios. Their details are presented as follows. Abdominal CT dataset <cit.>, termed ABD-CT, was acquired from the Multi-Atlas Abdomen Labeling challenge at the Medical Image Computing and Computer Assisted Intervention Society (MICCAI) in 2015. This dataset contains 30 3D abdominal CT scans. Of note, this is a clinical dataset containing patients with various pathologies and variations in intensity distributions between scans. Abdominal MRI dataset <cit.>, termed ABD-MRI, was obtained from the Combined Healthy Abdominal Organ Segmentation (CHAOS) challenge held at the IEEE International Symposium on Biomedical Imaging (ISBI) in 2019. This dataset consists of 20 3D MRI scans with a total of four different labels representing different abdominal organs. Cardiac MRI dataset <cit.>, termed CMR, was obtained from the Automatic Cardiac Chamber and Myocardium Segmentation Challenge held at the Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI) in 2019. It contains 35 clinical 3D cardiac MRI scans. In our experiment settings, to ensure fair comparison, we adopted the same image preprocessing solution as SSL-ALPNet <cit.>. Specifically, we sampled the images into slices along the channel dimension, and resized each slice to 256×256 pixels. Moreover, we repeated each slice three times along the channel dimension to fit into the network. We employ 5-fold cross-validation as our evaluation method, where each dataset is evenly divided into 5 parts. §.§ Evaluation Protocol To evaluate the performance of the segmentation model, we utilized the conventional Dice score scheme. 
The Dice score has a range from 0 to 100, where 0 represents a complete mismatch between the prediction and ground truth, while 100 signifies a perfect match. The Dice calculation formula is Dice(A,B)=2 A∩ B / A + B × 100%, where A represents the predicted mask and B represents the ground truth. §.§ Few-Shot Settings To evaluate the model's performance, we follow the experimental settings in <cit.>, considering two cases. Setting-1 is the initial setting proposed in <cit.>, where test classes may appear in the background of training images. We train and test on all classes in the dataset without any partitioning. Setting-2 is a strict version of Setting-1, proposed in <cit.>, where we adopted a stricter approach. In this setting, test classes do not appear in any training images. For instance, when segmenting Liver during training, the support and query images do not contain the Spleen, which is the segmenting target for testing. We directly removed the images containing test classes during the training phase to ensure that the test classes are truly "unseen" for the model. §.§ Implementation Details We implemented our model using the Pytorch framework with a pre-trained fully convolutional Resnet101 model as the feature extractor. The Resnet-101 model was pre-trained on the MS-COCO dataset. Given that the superpixel pseudo-labels contain rich clustering information, which are helpful to alleviate the annotation absence. We generate the superpixel pseudo-label in an offline manner as the support image mask before starting the model training, following <cit.>. In , there is one hyper-parameter: Local adjustment intensity α in Eq. (<ref>). As another important factor, the sparse pattern of w_i follows the neighbour channel constraint, namely, w_i = [ 0, w_1, w_2, w_3, 0] where w_2 is the j-th element of w_i, (w_1,w_2,w_3) ∈[ 0, 1.0 ], w_2>w_1=w_3. Specifically, in the ABD-MRI dataset, Setting-1 adopts β=0.3, (w_1,w_2,w_3)=(0.2,0.8,0.2), whilst β=0.2, (w_1,w_2,w_3)=(0.3,0.6,0.3) are used in Setting-2. In the ABD-CT dataset, Setting-1 adopts β=0.2, (w_1,w_2,w_3)=(0.3,0.7,0.3), and β=0.4, (w_1,w_2,w_3)=(0.1,0.7,0.1) are used in Setting-2. For the CMR dataset, Setting-1 selects β=0.3, (w_1,w_2,w_3)=(0.1,0.9,0.1). For our experimental results, we used stochastic gradient descent algorithm with a batch size of 1 for 100k iterations to minimize the objective in Eq. (<ref>). The self-supervised training took around 4.5 hours on a single Nvidia TITAN V GPU, and the memory consumption was about 8.1GB. §.§ Competitors To evaluate our approach, we compared it with six state-of-the-art medical image semantic segmentation methods, including SE-Net <cit.>, PANet <cit.>, SSL-ALPNet <cit.>, Q-Net <cit.>, and CAT-Net <cit.>. Among them, SE-Net belongs to the category of constructing support images-based guidance, whilst the rest comparisons all follow the clue of prototypic network. For a fair comparison, we obtain the results of all prototype-based methods, i.e., PANet, SSL-ALPNet, Q-Net and CAT-Net, by re-running their official codes on the same evaluating bed with . The results of SE-Net are cited from the publication. §.§ Quantitative and Qualitative Results The same as the previous methods, we perform the evaluation on ABD-MRI and ABD-CT under both Setting-1 and Setting-2 whilst the CMR is based on Setting-1. Tab. <ref> reports the results in Dice score on ABD-MRI and ABD-CT. The results showed that outperforms the previous methods in the two settings. 
On ABD-MRI dataset, compared with the second-best method Q-Net in mean score, achieves an improvement of 2.6 under Setting-1. Meanwhile, as for the strict Setting-2 testing model for "unknown" classes, demonstrates impressive performance with 7.8 increase, especially with a dice score of approximately 82 for Right kidney. The reason is discussed in . On the ABD-CT dataset, in average score, also surpasses the second-best method SSL-ALPNet by 1.2 in Setting-1 and 2.1 in Setting-2, respectively. For an intuitive observation, we present the visual segmentation results in Fig. <ref>. As shown in this figure, has much better segmentation for large objects (see Liver), while predicting the finer boundary for small objects (see Spleen). Tab. <ref> shows the comparison results on CMR with adjacent organs. In this scenario, exhibited better segmenting performance in all three classes, obtaining 2.0 improvement in mean score compared with the previous best method SSL-ALPNet. The right side of Fig. <ref> depicts three toy experimental results. It is seen that the can generate complicated boundaries (see LV-MYO, RV), implying more details are captured by compared with the previous methods. For the objects with relatively regular shapes, e.g., LV-BP, achieves fuller segmentation near the boundary. §.§ Alation Study As illustrated in the middle of Fig. <ref>, involves three components, i.e., RAN, and . In this part, we carry out an ablation study to isolate their effect as follows. All experimental results are obtained based on the ABD-MRI dataset under strict Setting-2. §.§.§ Effect to final performance By removing the three ones from our framework, we have variation methods: * w/o RAN. We remove the RAN block and set the fused support feature F̂_s=F_s directly. * w/o . When block is removed, we generate the foreground prototype P_f exploiting the conventional MAP skill, the same as previous work <cit.>. * w/o . After removing the block, the background prototypes P_b is generated in two steps: (i) We convert the fused support feature F̂_s to feature maps by AP and then (ii) directly tailored to P_b according to the background zone in support mask M_r, which is also generated by Average Pooling. From the results from Tab. <ref>, we see that when removing any one of the three, the mean results have decline to some extent compared with , whilst all being better than SSL-ALPNet. These results confirm that the proposed three designs all play positive roles in the proposed scheme. Meanwhile, the full version, , significantly outperforms the other three variation methods. The results indicate that the three designs jointly lead to the final performance. To better understand the effect of the three designs, we present some typical segmentation results under Setting-2, as shown in Fig. <ref>. When any one is unavailable, the segmentation has evident deterioration. For example, when RAN is unavailable, the big object segmentation will have obvious holes (see Liver). Due to removing background-specific , some background zones are wrongly segmented, as adopting w/o (see Left Kidney, Spleen). Combining results of Setting-1 with Setting-2, we have one detailed finding. First, w/o , w/o have similar results with especially tiny gap under Setting-1, implying their balanced effect. Unlike it, under Setting-2, w/o beat SSL-ALPNet by increase of 1.9 only, but has 3.1 decrease compared with w/o . The comparison shows that for the truly ”unseen” scenario, background-oriented is more important than foreground-oriented . 
The result is understandable: Performing detail self-refining on the background prototype is the most logical strategy when these training images cannot provide valuable references for the unseen testing classes. This is because, under Setting-2, has a large performance margin on top of the previous methods (see Tab. <ref>), which lose focus on ameliorating the background prototypes. §.§.§ Effect of RAN to and As shown in Fig. <ref>, the working of and builds on RAN. Here, we propose another variation method of , named w/ RAN, to determine its effect. In this comparison method, both and are removed: The foreground class prototype and background detail prototypes are generated by traditional MAP and AP, respectively. As listed in Tab. <ref> (see the fifth row), w/ RAN improve by only 1.1 under Setting-1 and has a tiny gap of 0.3 under Setting-2, compared with SSL-ALPNet. This result indicates that RAN cannot work alone and must work jointly with and . §.§ Model Analysis §.§.§ Analysis of . This part discusses the two key features of : (i) fusing the mined cluster prototypes into a single one for incorporating the local and global semantics and (ii) the channel-wise fusion strategy instead of weighting prototypes. To evaluate their effects, we propose two variations of : * -F-separating: Feature map F̅_s in Fig. <ref>(a) is generated by directly computing cosine distance between the cluster prototypes P_c and semantics fused feature F̂_s. * -F-weighting: We average the cluster prototypes P_c and employ the weighted prototype to compute cosine distance with F̂_s. As listed in Tab. <ref>, -F-separating is 3.77 lower than in the mean score and outperforms SSL-ALPNet by 3.48. This comparison indicates that mining cluster prototypes can boost the segmentation but suffering from the loss of global semantics. This is in line with our expectations. Besides, -F-weighting is defeated by with a large decrease of 11.39, even lower than SSL-ALPNet. The results show that the weighting scheme will confuse the semantics and our channel-wise fusion provides a potential semantics incorporation way from local to global. §.§.§ Analysis of As shown in Fig. <ref>(b), the sparse channel-aware regulating is the core difference from the conventional channel attention mechanism. Evaluation in this part first focuses on the effect of this regulation. To this end, we propose a comparison method, named w/o NCR, where the inputted prototype is directly refreshed by the channel similarity vector. From the second row in Tab. <ref>, we can see that w/o NCR lowers by 5.7 and very close to result of removing , i.e., w/o BCMA (see Tab. <ref>). The comparison indicates that the effect of almost derives from our design of neighbour channel-aware regulation, confirming the rationality of introducing channel structural information. As mentioned in , the sparse channel-aware regulation contains three significant designs: (i) a is learnable, (ii) incorporation unit integrates r to adjust a, and (iii) a is initiated by the sparse vector w_i representing the neighbour channel constraint. To demonstrate their effectiveness, we conduct a comparison experiment where three variation methods of are given: * w/ a-fix: We keep a=w_i during training. * w/ a-no-adjust: Setting β=0 removes r's adjustment, whilst a is still learnable and initiated by w_i. * w/ a-random: a is not initiated by w_i, instead, using conventional random initiation. From the comparison results in Tab. <ref>, we have three main observations. 
First, 's minimal version w/ a-fix is surpassing SSL-ALPNet by 5.97 in mean accuracy. This indicates that our neighbourhood-ware idea is effective, even when it works alone. Meanwhile, surpasses w/ a-fix by 3.3, indicating the importance of global fusion design ensured by enabling A learnable. Second, outperforms w/ a-no-adjust by 1.78 in mean score, confirming the rationality of introducing local adjusting. Third, compared with , w/ a-random's performance decrease sharply by 35.11. This result shows that the design of w_i initiation is crucial to optimising a, once again supporting the importance of introducing the neighbourhood prior. Easily understood, w_i provides a good optimization initiation point. §.§.§ Conventional prototypes v.s. high-fidelity prototypes Compared to the conventional prototypes, the core advantage of our prototypes is deeply representing the details. To verify it, we perform a quantitative experiment based on a typical image from the ABD dataset. As shown in the left side of Fig. <ref>, we mark three zones containing objects, denoted by C1 (left kidney), C2 (right kidney) and C3 (gallbladder) in this image where C2 is the foreground. After that, we compute their similarity score under Setting-2 by averaging the final similarity map (i.e., cossim(F_q,P_k)) at the locations of C1, C2 and C3. The right side in Fig. <ref> demonstrates the comparison results of and SSL-ALPNet. Compared with SSL-ALPNet, improved by 0.46 at C2. To the opposite, declines by 3.26 and 2.3 at C1 and C3, respectively. In the view of max relative declination, e.g., (S_C2 - max(S_C1,S_C3))/ S_C2× 100%, SSL-ALPNet is 26.8%, whilst amplify it to 44.3%. The results indicate that our high-fidelity prototypes can encourage more discriminative representations than the conventional prototypes. §.§.§ Parameter sensitiveness. This part displays the performance sensitivity of the local adjustment intensity in Eq. (<ref>) based on the Setting-2 in the ABD dataset. As presented in Tab. <ref>, when the parameter changes, there are no evident drops in the accuracy variation curves. This indicates that is insensitive to the parameter β. § CONCLUSION In this paper, we present a novel FSS approach, dubbed as , aiming at the local information loss problem in medical images as adopting the prototypical paradigm. To our knowledge, this is an initial effort from the perspective: Enhancing detail representation ability of the off-the-shelf prototypes by detail self-refining. Specifically, we introduce two pivotal designs: FSPA and BLNA modules for the foreground class prototype and background detail prototypes generation, respectively. Among them, the former implements the detail self-refining by fusing the detailed prototypes clustered from the foreground. The latter models this self-refining as incorporating the channel-specific structural information, employing the multi-head channel attention with sparse channel-aware regulation. ’s effectiveness is validated by state-of-the-art experimental results across three challenging datasets. § DECLARATION OF COMPETING INTEREST The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. 
§.§ Acknowledgments This work is partly funded by the German Research Foundation (DFG) and National Natural Science Foundation of China (NSFC) in project Crossmodal Learning under contract Sonderforschungsbereich Transregio 169, the Hamburg Landesforschungsförderungsprojekt Cross, NSFC (61773083); NSFC (62206168, 62276048, 52375035).
http://arxiv.org/abs/2406.19375v2
20240627175240
Calibrating and standardizing the Tip of the Red Giant Branch in the Small Magellanic Cloud using small-amplitude red giants
[ "Nolan W. Koblischke", "Richard I. Anderson" ]
astro-ph.SR
[ "astro-ph.SR", "astro-ph.CO" ]
http://arxiv.org/abs/2406.18849v1
20240627024035
Dysca: A Dynamic and Scalable Benchmark for Evaluating Perception Ability of LVLMs
[ "Jie Zhang", "Zhongqi Wang", "Mengqi Lei", "Zheng Yuan", "Bei Yan", "Shiguang Shan", "Xilin Chen" ]
cs.CV
[ "cs.CV" ]
^* Equal contribution. Corresponding Author: jiezhang@ict.ac.cn § ABSTRACT Currently, many benchmarks have been proposed to evaluate the perception ability of Large Vision-Language Models (LVLMs). However, most benchmarks construct questions by selecting images from existing datasets, resulting in potential data leakage. Besides, these benchmarks merely focus on evaluating LVLMs on realistic-style images and clean scenarios, leaving multi-stylized images and noisy scenarios unexplored. In response to these challenges, we propose a dynamic and scalable benchmark named Dysca for evaluating LVLMs by leveraging synthesized images. Specifically, we leverage Stable Diffusion and design a rule-based method to dynamically generate novel images, questions and the corresponding answers. We consider 51 kinds of image styles and evaluate the perception capability in 20 subtasks. Moreover, we conduct evaluations under 4 scenarios (i.e., Clean, Corruption, Print Attacking and Adversarial Attacking) and 3 question types (i.e., Multi-choices, True-or-false and Free-form). Thanks to the generative paradigm, Dysca serves as a scalable benchmark to which new subtasks and scenarios can easily be added. A total of 8 advanced open-source LVLMs with 10 checkpoints are evaluated on Dysca, revealing the drawbacks of current LVLMs. The benchmark is released at <https://github.com/Benchmark-Dysca/Dysca>. § INTRODUCTION Recent years have witnessed the great success of Large Vision-Language Models (LVLMs) <cit.>. These models leverage powerful Large Language Models (LLMs) <cit.> as their brain and incorporate state-of-the-art visual encoders <cit.> as their eyes. Thanks to the alignment of visual features with the textual space and the development of visual instruction tuning techniques <cit.>, LVLMs showcase impressive capability in visual scene comprehension and multimodal instruction-following. In order to comprehensively evaluate the capabilities of LVLMs, many benchmarks have been proposed <cit.>, which we categorize into three types <cit.>. The first type is the classical benchmarks, such as COCO Caption <cit.> and VQA <cit.>. Although these benchmarks provide high-quality evaluation data, they also have notable limitations. On the one hand, they are inadequate for measuring the fine-grained capabilities of current LVLMs, offering limited insightful feedback for future improvement. On the other hand, since these classical benchmarks have been available as open-source test data for a long time, it is hard to prevent the data leakage problem. The second type of benchmarks evaluates LVLMs in a subjective manner <cit.>. Although these benchmarks reveal insightful drawbacks of current models, their data scale is limited (i.e., fewer than 200 annotations) and they require manual evaluation by experts. The third type is built for objectively evaluating current LVLMs, and the comparison between them is shown in Tab. <ref>. They provide an objective and automatic evaluation manner, giving fine-grained evaluation of the LVLMs. However, these benchmarks construct Vision-language QAs by selecting images from existing datasets. Although they claim that the questions are re-annotated, previous work <cit.> has demonstrated that these benchmarks have unintentionally leaked into the training data of LLMs and LVLMs.
Besides, most benchmarks focus on evaluating LVLMs in the realistic images and clean scenarios, leaving the multi-stylized images and noisy scenarios unexplored. While some works like MMCBench <cit.> and Typographic Dataset <cit.> have investigated the robustness of LVLMs with corrupted and print-attacked images, respectively, they have not explored the effect of these noisy images on various perceptual tasks. In this paper, aiming to address these challenges above, we propose Dysca which is a dynamic and scalable benchmark for evaluating the perception ability of LVLMs via various subtasks and scenarios. Inspired by the prior evaluation works for LLMs <cit.>, we investigate on whether we could leverage the large-scale synthesized images for evaluating LVLMs. We display the overview of our pipeline in Fig. <ref>. Specifically, we leverage Stable Diffusion and design a rule-based method to dynamically generate novel images, questions and corresponding answers. We decouple the prompt into 4 part, i.e., attribute, foreground, style and background, and design pre-defined templates to dynamically generate prompts, as displayed in Fig. <ref>. Then we utilize the state-of-the-art text-to-image diffusion models (e.g., SDXL <cit.>) to generate the corresponding images. Since we already know the main information of the images through prompts, we easily generate question-answer textual pairs by the rule-based method. After that, in order to obtain the high quality Vision-language QAs, we employ CLIP <cit.> to perform data cleaning on the generated Vision-language QA pairs. Dysca focuses on assessing the fine-grained perception abilities, including recognizing human, animal, object, landmark, etc. Dysca evaluates LVLMs with 20 perceptual subtasks, containing a total number of 51 different artistic styles. Besides, to evaluate the robustness of the models across different scenarios and question types, we construct 4 testing scenarios (clean, corruption, print attacking and adversarial attacking) and 3 question types (multi-choices, true-or-false and free-form questions). In the end, Dysca consists of 617K Vision-language QA pairs (×20 larger than MM-BigBench <cit.> and ×25 larger than Seed-Bench2 <cit.> as shown in Tab. <ref>). Thanks to the generative paradigm, Dysca achieves the scalable benchmark to new subtasks and scenarios and dynamically generate unlimited Vision-language QAs for evaluation. In summary, our work makes the following key contributions: * Dynamic and Scalable Benchmark: We propose Dysca, a benchmark that is able to dynamically generate the test data that users need and is easily to scale up to to new subtasks and scenarios. * Multi-grained Perceptual Subtasks and Multi-scenarios: Dysca evaluates the 20 perceptual subtasks performance of 8 mainstream LVLMs with 10 checkpoints under 4 image scenarios (i.e., clean, corruption, print attacking and adversarial attacking) and 3 question types (i.e., multi-choices, true-or-false and free-form questions). * Analysis and Observations: We demonstrate for the first time that evaluating LVLMs using large-scale synthetic data is valid. Experiments show the strong correlation coefficient between our evaluation rankings and the rankings obtained from non-synthetic benchmarks. The evaluation results also reveal the weakness of current LVLMs when facing different question types, image styles and image scenarios. 
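As a concrete illustration of the bottom-up generation described above (decoupled metadata, templated prompts, rule-based question-answer pairs), a toy generator could look as follows. The metadata entries, template wording and option sampling are invented for illustration and are not Dysca's actual templates or label lists:

import random

# Toy metadata M; the real benchmark draws foregrounds, styles, etc. from curated lists.
METADATA = {
    "foreground": ["golden retriever", "red sports car", "sunflower", "fire truck"],
    "attribute":  ["running", "parked", "blooming", "speeding"],
    "background": ["beach", "city street", "meadow", "snowy mountain"],
    "style":      ["oil painting", "pixel art", "watercolor", "photorealistic"],
}
PROMPT_TEMPLATE = "A {style} of a {foreground} {attribute}, with a {background} in the background."

def sample_prompt():
    m = {key: random.choice(values) for key, values in METADATA.items()}
    return m, PROMPT_TEMPLATE.format(**m)

def make_multi_choice_question(m, field="style", n_options=4):
    # Rule-based QA: the ground-truth answer is known from the prompt itself,
    # and distractors are drawn from the remaining metadata pool for that field.
    answer = m[field]
    pool = [v for v in METADATA[field] if v != answer]
    options = random.sample(pool, k=min(n_options - 1, len(pool))) + [answer]
    random.shuffle(options)
    return {"question": f"Which {field} best describes this image?",
            "options": options, "answer": answer}

m, prompt = sample_prompt()
qa = make_multi_choice_question(m, field="style")
# `prompt` would be passed to a text-to-image model (e.g., SDXL) to synthesize the image,
# and `qa` is paired with that image after CLIP-based consistency filtering.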
§ RELATED WORKS §.§ Large Vision-Language Models The landscape of Large Vision-Language Models (LVLMs) has been significantly shaped by the pioneering success of Large Language Models (LLMs) such as GPTs <cit.> and LLaMA <cit.>, catalyzing advancements in multimodal content understanding and generation <cit.>, including intricate tasks like image-text comprehension. At the forefront of these developments, BLIP-2 <cit.> introduces a lightweight Q-Former <cit.> that facilitates alignment between textual and visual representations through a cross-attention mechanism <cit.>. InstructBLIP <cit.> takes a step further by incorporating textual instructions into the Q-Former, which significantly improves performance. LLAVA <cit.> employs GPT-4 <cit.> to transform data into multimodal instruction-following data and uses CLIP <cit.> and LLAMA <cit.> for fine-tuning instructions, achieving advanced performance. LLAVA-1.5 <cit.> extends this paradigm by integrating MLP projection and introducing academic task-specific Vision-language QA data. Recently, models like Otter <cit.>, MiniGPT-4 <cit.>, Qwen-VL-Chat <cit.> and XComposer-VL <cit.> further unleash the cross-modal understanding capabilities of LVLMs. §.§ Benchmarks for LVLMs The great progress of LVLMs triggers the development of benchmarks for evaluating these models, where we divide previous benchmarks into three categories. The first type is the classical benchmarks which focuses on evaluating LVLMs abilities via image caption <cit.> and VQA <cit.>. However, these benchmarks cannot provide the fine-grained feedback on how to improve the models. Besides, since these benchmarks have been the public resources for a long time, it is hardly to guarantee that the LVLMs have not use them for training. The second type subjectively evaluates LVLMs by experts <cit.>. Although these benchmarks reveal the insightful feedback of the LVLMs, their scale is limited (i.e., less than 200 annotations). The subjective manner also makes the evaluation expensive and hardly to expand the scale of the benchmarks. The third type <cit.> focuses on evaluating LVLMs in an objective and large-scaled manner, where we list the detailed information of them in the Tab. <ref>. Some of them have been adopted by the community <cit.> as the standard benchmarks for evaluating LVLMs <cit.>, like MME <cit.> and MMBench <cit.>. These benchmarks evaluate models through the objective answer types and most of them leverage the automatic annotation and evaluation manner for revealing the fine-grained drawbacks of current LVLMs. However, the previous benchmarks primarily concentrate on evaluating LVLMs using realistic images and clean scenario, leaving multi-stylized images and noisy scenarios unexplored. Moreover, many of them conduct QA by selecting images from publicly available datasets (e.g., <cit.>). While they state that the questions have been re-annotated, they cannot guarantee that the LVLMs have not seen the image during training stage. The previous work <cit.> has proved that these benchmarks have unintentionally leaked into the training data of LLMs and LVLMs. One possible way to solve data leakage is using novel but synthesis images, where JourneyDB <cit.> is the first work aiming to leverage synthesis images to evaluate current LVLMs. The prompts and the corresponding images are downloaded from Midjourney <cit.> and ChatGPT <cit.> is leveraged to label the images. However, JourneyDB is a top-down framework where the number of images is fixed. 
Besides, ChatGPT labeling may produce hallucinated annotations, leading to unreliable evaluation results. Although 40 annotators were involved in cleaning the data, the data cleaning cost is high and limits the data scale. In contrast, our Dysca serves as a bottom-up framework, allowing dynamic and scalable generation of both images and evaluation questions. The rule-based question generation method also makes the annotations more accurate. Besides, Dysca contains 20 subtasks, making it more comprehensive than JourneyDB. § DYSCA §.§ Overview of Our Pipeline The overview of our pipeline is shown in Fig. <ref>, covering data generation, data cleaning and LVLM evaluation. For data generation, our Dysca benchmark consists of four dimensions, i.e., (M,P,I,Q), where M denotes "Metadata", P denotes "Prompt", I denotes "Image" and Q denotes "Question-answer pair". We further decouple the metadata M into 4 parts, i.e., "style", "attribute", "foreground" and "background", and the combination of the four parts constitutes the image prompt P. Then, given the prompt P and the selected scenario, we leverage a Text-to-Image (T2I) diffusion model (e.g., SDXL <cit.>) to synthesize the image I and add the scenario-specific perturbation to it. After that, since the prompt already includes the question angle and the corresponding answer, we construct a rule-based approach to generate Q. Three types of questions are considered, i.e., multi-choice, true-or-false and free-form. Multi-choice and true-or-false questions assess LVLMs in a closed-ended manner, while free-form questions evaluate them in an open-ended manner through image captioning. For data cleaning, considering that the T2I diffusion model may generate unsuccessful outcomes, we use CLIP <cit.> and PP-OCRv3 <cit.> to automatically clean the whole dataset and obtain the final Dysca. Finally, we evaluate 8 open-source LVLMs with 10 checkpoints on our proposed Dysca. §.§ Perceptual Tasks Evaluation dimensions. Perception is one of the most fundamental capabilities of LVLMs, and previous works <cit.> have shown that a lack of perceptual ability may result in hallucination <cit.>. In order to comprehensively evaluate LVLMs' perception capability, we design 20 perceptual subtasks; all subtasks and the corresponding number of annotations are shown in Fig. <ref>. We investigate two types of perception dimensions, i.e., coarse-grained and fine-grained perception. Coarse-grained perception involves recognizing the style, background and color of images. Fine-grained perception involves recognizing animals, objects, plants, food, age, gender, expression, race, celebrities, actions, text, clothes, movies, anime, landmarks, professions and TV shows. Data sources. For each perceptual subtask, we first collect textual data to construct the metadata M. For TV shows, anime and movies, we select titles from the IMDb[https://www.imdb.com/] rating list based on the number of user reviews. For styles, we utilize style lists collected from the community[https://stable-diffusion-art.com/sdxl-styles/] and remove those that strongly affect the image content, such as "architectural style" and "Pokemon style". Note that the style list does not include style prompts associated with a particular artist's name. The remaining contents are selected from the labels of existing datasets (e.g., ImageNet <cit.>). All the selected textual data above constitute the metadata M.
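As a concrete illustration of the cleaning stage mentioned in the pipeline overview, the sketch below filters generated images by CLIP text-image consistency. It is only an assumption-laden sketch: the checkpoint name and the 0.25 threshold are placeholders, not the exact settings used to build Dysca.

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

MODEL_NAME = "openai/clip-vit-large-patch14"  # placeholder checkpoint
model = CLIPModel.from_pretrained(MODEL_NAME).eval()
processor = CLIPProcessor.from_pretrained(MODEL_NAME)

@torch.no_grad()
def text_image_consistency(image: Image.Image, prompt: str) -> float:
    """Cosine similarity between the CLIP image and text embeddings."""
    inputs = processor(text=[prompt], images=image, return_tensors="pt",
                       padding=True, truncation=True)
    out = model(**inputs)
    img = out.image_embeds / out.image_embeds.norm(dim=-1, keepdim=True)
    txt = out.text_embeds / out.text_embeds.norm(dim=-1, keepdim=True)
    return (img * txt).sum(dim=-1).item()

def keep_sample(image: Image.Image, prompt: str, threshold: float = 0.25) -> bool:
    """Drop generations whose consistency with their prompt is too low."""
    return text_image_consistency(image, prompt) >= threshold
```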
We provide detailed information about the metadata in Appendix <ref>. §.§ Construction of Questions & Answers Recall that the data generation for the Dysca benchmark consists of four dimensions, i.e., (M,P,I,Q), denoting the metadata (M), prompt (P), image (I) and question-answer pairs (Q), respectively. The relationships between these parts and the process of constructing Dysca are shown in Fig. <ref>. The metadata M is the core of Dysca, containing all the information needed to generate P, I and Q. The metadata M consists of foreground, attribute, background and style, and this information guides the generation of the prompt (P) through pre-designed templates. Then, we utilize the T2I diffusion model to generate the corresponding image from the prompt P. To generate images containing specific text for the OCR subtask, we leverage TextDiffusion2 <cit.>, the state-of-the-art text rendering method. For the remaining images, we leverage Stable Diffusion XL <cit.>. Subsequently, based on the selected question type, i.e., multi-choice, true-or-false or free-form, we generate the corresponding Vision-language QA pairs in Dysca. Besides, in order to evaluate model performance under various scenarios, we conduct experiments on 4 scenarios, i.e., clean, corruption, print attacking and adversarial attacking. For print attacking, following <cit.>, we add deceptive text onto the image, where the text is a wrong option. Besides, to comprehensively evaluate the performance of LVLMs under the print attacking scenario, we add more typographic factors to the original settings (i.e., different font orientations and font positions). For adversarial attacking, we leverage PGD <cit.> to generate the adversarial images. We use InstructBLIP <cit.> as the proxy model and regard the others as black-box models. We choose InstructBLIP because it shows superior performance in the clean scenario; moreover, the black-box setting better reflects the robustness of the models when they face real-world adversarial attacks. For corruption, we leverage the image corruption methods collected from <cit.>. We remove some severe corruptions, as they significantly degrade image quality to the point where even humans fail to judge the style and content of the image. Detailed examples are shown in Appendix <ref>. Considering that the text-to-image diffusion model may produce failure cases that affect the quality of the proposed benchmark, we leverage the off-the-shelf models PP-OCRv3 <cit.> and CLIP-L-14 <cit.> to clean the data. PP-OCRv3 <cit.> is used as a filter to exclude failure images on which TextDiffusion2 <cit.> renders the wrong text. For the other images, we use CLIP-L-14 <cit.> to filter out images with low text-image consistency. In the end, we filter out nearly 15% of low-quality samples. The final statistics of our released Dysca are shown in Tab. <ref>. Note that the OCR subtask does not involve the print attacking scenario, as misidentifying adversarial text does not indicate poor OCR robustness of the LVLMs. Therefore, there are 7K fewer questions in the print attacking scenario. Besides, since the free-form question type allows assessing the model's perception abilities across multiple subtasks at the same time, we reduce the number of free-form questions to achieve a balanced data distribution. §.§ Evaluation Strategy Instruction Design.
We design two types of instructions to improve the instruction-following behavior of LVLMs. For the multi-choice and true-or-false questions, the questions are followed by the description "Please answer the question and provide the correct option letter, e.g., (A), (B), (C), (D), at the end. Do not contain the analysis progress. Your answer is: ". For the free-form questions, recalling that the prompt P contains four parts, i.e., style, attribute, foreground and background, we instruct the model to caption these four dimensions with "Please describe the image. You can describe it from these aspects: {}", where "{}" is the specific template we design for each part. We display a sample in Fig. <ref>, and more examples can be found in Appendix <ref>. Evaluation Metric. For the multi-choice and true-or-false questions, we use accuracy as the evaluation metric. We randomly shuffle the order of choices to prevent evaluation results from being influenced by the model's tendency towards specific choices <cit.>. The random-guess accuracies of the two types are 25% and 50%, respectively. We use regular expressions to extract the model's answer choice. For cases where the extraction fails, we calculate the Levenshtein distance between the answer string and each choice string, and select the option with the minimum distance as the model's answer. For the free-form questions, we test the model's image captioning capability, where the ground truth is the prompt of the image. Following <cit.>, we use SentenceTransformer <cit.> to compute the text similarity between the prompt P and the caption output of the LVLM. The final score of each question type is the average score over subtasks. § RESULTS AND ANALYSIS In this section, we report the evaluation results and provide insightful analysis. A total of 8 LVLMs with 10 checkpoints are evaluated on the Dysca benchmark, including BLIP2 <cit.>, InstructBLIP <cit.>, LLaVA <cit.>, MiniGPT-4 <cit.>, Otter <cit.>, XComposer-VL <cit.>, Qwen-VL-Chat <cit.> and Shikra <cit.>. Each model is evaluated on all 20 perception subtasks under 4 scenarios. The detailed rankings for each subtask can be found in Appendix <ref>. §.§ Main Results Clean Scenario. The evaluation results of various LVLMs on different perceptual subtasks under the clean scenario are presented in Tab. <ref>. Since the evaluation of the free-form question type usually involves multiple subtasks, we cannot calculate free-form results for each subtask individually. Instead, we display the overall free-form score in the first row of Tab. <ref>. As can be seen, XComposer-VL <cit.> outperforms other LVLMs, achieving top-1 or top-2 results in most subtasks, while InstructBLIP <cit.>, Qwen-VL-Chat <cit.> and BLIP2 also take the lead in a few subtasks. Noisy Scenarios. The evaluation results of various LVLMs under noisy scenarios (i.e., corruption, print attacking and adversarial attacking) are presented in Tab. <ref>. As can be seen, for the multi-choice and true-or-false question types, XComposer-VL <cit.> still takes the lead in all 4 scenarios. For free-form, LLaVA-1.5-7B <cit.> performs best. §.§ Analysis §.§.§ Key Observations (1) For individual models, perceptual performance varies across different subtasks.
For example, Qwen-VL-Chat <cit.> achieves 96.96% accuracy on the landmark recognition task for multi-choice questions (2% below the first-place score), but only 53.48% accuracy on the age recognition task (12% below the first-place score). The results suggest that Qwen-VL-Chat <cit.> may require more fine-tuning on age perception data. Analyzing model performance across the various subtasks thus enables targeted improvement. (2) Models exhibit performance inconsistency between the multi-choice and true-or-false question types. As can be seen, in the object recognition subtask, Otter <cit.> achieves an accuracy of 47.31% on the multi-choice question type (22% above random guessing), but obtains an accuracy of 82.51% on the true-or-false question type (32% above random guessing). Interestingly, we observe the opposite pattern for Qwen-VL-Chat <cit.>: in the object recognition subtask, it achieves an accuracy of 88.32% on multi-choice but only 63.14% on true-or-false. We observe the same problem in other models, and we display two examples in Fig. <ref>. We speculate that the inconsistency may be attributed to bias in the training dataset towards particular question types, such as using more multi-choice or more true-or-false questions. (3) Each model is robust in the corruption scenario but suffers degradation in the two attacking scenarios. As shown, all models exhibit minor score variations of less than 1% under the corruption scenario. However, they degrade when facing print attacking (e.g., 84.29% vs. 59.18% multi-choice accuracy for InstructBLIP <cit.>). XComposer-VL <cit.> shows the strongest robustness, maintaining over 70% accuracy for both multi-choice and true-or-false. Besides, since our adversarial algorithm specifically targets the image encoder, the LVLMs that share the same encoder architecture (i.e., BLIP2, InstructBLIP and XComposer-VL, all using EVA-CLIP <cit.> as the image encoder) exhibit significant performance degradation, with accuracy even falling below random selection. Models using alternative image encoders also experience a performance decrease of approximately 5% to 10%. More detailed results can be found in Appendix <ref>. §.§.§ The Validity of Dysca

Table: The correlation results on three benchmarks, where ρ ∈ [-1,1] and τ ∈ [-1,1].

Style      Method   MMBench   OCRBench   SeedBench-2
All        ρ        0.70      0.90       0.46
All        τ        0.60      0.80       0.43
Realistic  ρ        0.70      1.00       0.64
Realistic  τ        0.60      1.00       0.62

In this section, we investigate the evaluation gap between Dysca and non-synthetic benchmarks. We calculate Spearman's rank correlation coefficient <cit.> ρ and the Kendall rank correlation coefficient <cit.> τ between the evaluation ranking of Dysca under the clean scenario and the evaluation rankings of non-synthetic benchmarks, i.e., MMBench <cit.>, OCRBench <cit.> and SeedBench-2 <cit.>. Both coefficients produce a score in the range [-1,1], where 1 represents a perfect positive correlation, -1 a perfect negative correlation, and 0 no correlation. Specifically, we intersect Dysca with the current benchmarks based on the perceptual subtasks, evaluated models and question types, and then calculate the correlation of model evaluation rankings within this intersection. The results are shown in the first row of Tab. <ref>. For MMBench <cit.> and OCRBench <cit.>, both metrics show high correlation, with ρ and τ above 0.6.
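The correlation computation itself is straightforward; a minimal sketch using SciPy is shown below. The model names and scores are placeholders chosen only to illustrate the call signatures, not numbers from our experiments.

```python
from scipy.stats import spearmanr, kendalltau

# Placeholder overall scores of the same set of models on Dysca and on a
# non-synthetic benchmark; only the induced rankings matter for rho and tau.
dysca = {"ModelA": 78.1, "ModelB": 74.5, "ModelC": 71.2, "ModelD": 69.8, "ModelE": 55.3}
other = {"ModelA": 74.4, "ModelB": 61.8, "ModelC": 44.0, "ModelD": 64.3, "ModelE": 48.3}

models = sorted(dysca)              # fix a common model order
x = [dysca[m] for m in models]
y = [other[m] for m in models]

rho, _ = spearmanr(x, y)    # Spearman's rank correlation coefficient
tau, _ = kendalltau(x, y)   # Kendall's rank correlation coefficient
print(f"rho = {rho:.2f}, tau = {tau:.2f}")
```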
However, the correlation with SeedBench-2 <cit.> is not as strong. Considering that SeedBench-2 contains only realistic images, we conduct additional experiments using the evaluation ranks on our realistic-style images only. As shown in the second row of Tab. <ref>, the correlation results for SeedBench-2 improve significantly (i.e., 0.46 vs. 0.64 for ρ and 0.43 vs. 0.62 for τ). The correlation with OCRBench also improves to 1, demonstrating the validity of using synthetic datasets for evaluating LVLMs. To further explore the impact of image styles on evaluation results, we present the average scores across all subtasks for each of the 51 styles in Fig. <ref>. We observe slight score differences across styles. For realistic styles such as "iPhone photo", all LVLMs perform better than on other image styles. The LVLMs also exhibit better performance on unrealistic but common styles like "expressionist". However, for unrealistic and less common styles such as "gothic", all models show relatively poor performance. The results reveal that the gap between Dysca and non-synthetic benchmarks primarily stems from Dysca's more diverse range of image styles, making Dysca a more comprehensive benchmark for assessing perception ability than previous benchmarks. § CONCLUSION In this paper, we propose Dysca, a dynamic and scalable benchmark for evaluating the perception ability of Large Vision-Language Models (LVLMs). Dysca consists of 617K Vision-language QA pairs, covering 20 perceptual subtasks, 4 image scenarios and 3 question types. We conduct experiments on 8 advanced open-source LVLMs with 10 checkpoints, revealing insightful weaknesses of current LVLMs when facing different question types, image styles and image conditions. The experiments demonstrate the validity of evaluating LVLMs using synthesized images. § APPENDIX § THE LEADERBOARDS The model performance leaderboards for each subtask under each scenario are shown in Tab. <ref>, Tab. <ref>, Tab. <ref>, and Tab. <ref>. We compute the average of the multi-choice and true-or-false evaluation results for each subtask. Since a free-form question can assess the model's perception abilities across multiple subtasks at the same time, the free-form results are not taken into account here. The overall performance in each scenario is displayed in Tab. <ref>. We calculate the average of the scores of the three question types (i.e., multi-choice, true-or-false and free-form) as the final score. § DISCUSSION §.§ General Discussion Limitation. Dysca is a dynamic and scalable benchmark, offering evaluation of 20 perceptual subtasks under 51 image styles and 4 scenarios. However, generating data for evaluating cognitive abilities (e.g., commonsense reasoning) presents a challenge within the existing framework. This limitation arises from the reliance on predefined rules for prompt and question generation, which may not adequately capture the complexity of cognitive-level questions. Synthetic Data for Training / Fine-tuning. The use of synthetic data for model training / fine-tuning has been adopted in the field of Natural Language Processing (NLP) <cit.>. In this work, we do not explore the possibility of utilizing our benchmark for model training. Our primary goal in this paper is to provide a large-scale evaluation benchmark that addresses the issue of data leakage in current multimodal evaluation benchmarks and offers evaluation results across multiple subtasks, scenarios, question types and styles.
Nevertheless, considering that Dysca can synthesize high-resolution and unlimited amounts of annotated multimodal data, we believe that Dysca also holds potential as a training-data synthesis tool for LVLMs. Reproducibility and Licence. All experiments are run on 8 RTX 4090 GPUs. All the data and the code for generation and evaluation are released at <https://github.com/Benchmark-Dysca/Dysca>. The licence of Dysca is "CreativeML Open RAIL++-M" (https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0/blob/main/LICENSE.md), which follows the licence set by Stable Diffusion XL. Ethical Concerns. Dysca leverages Stable Diffusion XL <cit.> to generate images. In order to prevent the model from generating unsafe images, e.g., NSFW and offensive images, several measures are taken. First, we use the safety checker <cit.> to post-filter unsafe images: when an unsafe image is recognized by the safety checker, the model's output is replaced with a blank image. Besides, we manually exclude from the metadata M the specific styles or words that may trigger unsafe image generation. We therefore believe that Dysca involves few ethical concerns. §.§ The Stability of Dysca In this section, we examine the stability of Dysca. We partition Dysca into 11 different scales: 1%, 10%, 20%, 30%, 40%, 50%, 60%, 70%, 80%, 90% and 100%, and compute the evaluation scores at each of these data scales. The score is calculated as the sum of the scores obtained from multi-choice, true-or-false and free-form questions. As can be seen in Fig. <ref>, when the evaluation data scale is less than 30% of Dysca (i.e., less than 46.8K samples), the evaluation scores show significant fluctuations. When the data scale exceeds 40%, the results become stable, indicating that the current scale of Dysca yields stable and reliable evaluation results. Although a 40% evaluation scale of Dysca already achieves stable scores, Dysca aims to provide more than just stable rankings: it also draws on massive amounts of data to provide in-depth feedback across different image styles and perceptual subtasks. § THE METADATA (M) OF DYSCA Metadata (M) is the core of Dysca; it is randomly assembled from our collected source material and contains all the information needed to generate the prompt (P), image (I), and question-answer pairs (Q). Specifically, the metadata is a data container that holds information along multiple dimensions: the foreground, the attributes corresponding to the foreground, the background, and the artistic style required to generate an image. Therefore, each instance of M maps one-to-one to a prompt, an image, and a set of question-answer pairs. In order to ensure the quality and stability of the generated images, we carefully select the source material. First, for each perceptual subtask, we collect rich annotation material as described in Section 3.2. However, the metadata composed of these raw annotations is not always usable. On the one hand, some of the content is polysemous and can easily be misinterpreted by the model when generating images. On the other hand, some backgrounds or artistic styles (e.g., "Pokemon Style", "architectural style", etc.) negatively affect the quality of the image or fail to accurately generate the desired content. In order to test the usability of these source materials, we performed several small-scale pre-generations covering all of them.
After careful selection, we retain the source-material entries that consistently produced high-quality images. The detailed information of the source materials is shown in Tab. <ref>. § SCENARIOS DETAILS §.§ Print Attack Scenario Following the settings in <cit.>, we add attack text onto the images. Considering that the image resolution in Dysca is much higher than in <cit.>, we include more font variations in terms of font position and font orientation. Fig. <ref> to Fig. <ref> show detailed examples. §.§ Corrupted Scenario Examples of the 11 image corruptions are shown in Fig. <ref>. § MORE EXAMPLES OF DYSCA For each subject collected in the metadata (M), we display one example of its prompt (P), generated image (I) and corresponding question-answer pairs (Q). § DATA SHEET We follow the documentation frameworks provided by Gebru et al. <cit.>. §.§ Motivation For what purpose was the dataset created? Was there a specific task in mind? Was there a specific gap that needed to be filled? Please provide a description. * The proposed dataset is used for evaluating the perception ability of current LVLMs. We use synthesized images to prevent the potential data leakage problem in current benchmarks. The dataset tests LVLMs on 20 subtasks under 4 scenarios and 3 question types, revealing the existing drawbacks of current LVLMs. Who created the dataset (e.g., which team, research group) and on behalf of which entity (e.g., company, institution, organization)? * Following the double-blind rule, we will release the detailed information about this part once our paper is accepted. Who funded the creation of the dataset? If there is an associated grant, please provide the name of the grantor and the grant name and number. * Following the double-blind rule, we will release the detailed information about this part once our paper is accepted. §.§ Composition What do the instances that comprise the dataset represent (e.g., documents, photos, people, countries)? Are there multiple types of instances (e.g., movies, users, and ratings; people and interactions between them; nodes and edges)? Please provide a description. * We show the instance list in Tab. <ref>. The detailed words we collect for the metadata M are shown on our anonymous GitHub page <https://github.com/Benchmark-Dysca/Dysca>. How many instances are there in total (of each type, if appropriate)? * There are a total of 20 subtasks in Dysca. For details of each subtask, please refer to Fig. <ref>. Does the dataset contain all possible instances or is it a sample (not necessarily random) of instances from a larger set? If the dataset is a sample, then what is the larger set? Is the sample representative of the larger set (e.g., geographic coverage)? If so, please describe how this representativeness was validated/verified. If it is not representative of the larger set, please describe why not (e.g., to cover a more diverse range of instances, because instances were withheld or unavailable). * No. The images in Dysca are completely generated from scratch. What data does each instance consist of? "Raw" data (e.g., unprocessed text or images) or features? In either case, please provide a description. * Each instance consists of the prompt, the image generated by Stable Diffusion, the question and the corresponding answer. Is there a label or target associated with each instance? If so, please provide a description. * Yes, Dysca provides the ground truth for each instance. Is any information missing from individual instances?
If so, please provide a description, explaining why this information is missing (e.g., because it was unavailable). This does not include intentionally removed information, but might include, e.g., redacted text. * No. Are relationships between individual instances made explicit (e.g., users' movie ratings, social network links)? If so, please describe how these relationships are made explicit. * There are no relationships between individual instances. Are there recommended data splits (e.g., training, development/validation, testing)? If so, please provide a description of these splits, explaining the rationale behind them. * Following our motivation, the entire proposed dataset is used for testing purposes. Are there any errors, sources of noise, or redundancies in the dataset? If so, please provide a description. * Errors in image generation resulting from stable diffusion are unavoidable. However, we have performed dataset cleaning to minimize these errors. Furthermore, the stability experiment in Appendix B demonstrates that these errors do not affect the overall evaluation results of the dataset. Is the dataset self-contained, or does it link to or otherwise rely on external resources (e.g., websites, tweets, other datasets)? If it links to or relies on external resources, a) are there guarantees that they will exist, and remain constant, over time; b) are there official archival versions of the complete dataset (i.e., including the external resources as they existed at the time the dataset was created); c) are there any restrictions (e.g., licenses, fees) associated with any of the external resources that might apply to a dataset consumer? Please provide descriptions of all external resources and any restrictions associated with them, as well as links or other access points, as appropriate. * The proposed Dysca dose not rely on any external resources. Does the dataset contain data that might be considered confidential (e.g., data that is protected by legal privilege or by doctor-patient confidentiality, data that includes the content of individuals' non-public communications)? If so, please provide a description. * No. Does the dataset contain data that, if viewed directly, might be offensive, insulting, threatening, or might otherwise cause anxiety? If so, please describe why. * No. To ensure that the generated images do not contain offensive, insulting, threatening, or anxiety-inducing content, we manually filter out words from the metadata M that could potentially trigger the diffusion model to generate such images. Safety checker also used to further avoid unsafe image generation. Does the dataset relate to people? If not, you may skip the remaining questions in this section. * Yes. Does the dataset identify any subpopulations (e.g., by age, gender)? If so, please describe how these subpopulations are identified and provide a description of their respective distributions within the dataset. * Yes. There are the age, gender and race recognition subtasks in Dysca. Each of them are divided to several subpopulations and the selection of these subpopulations is based on the ability of stable diffusion to generate the representative subpopulations. Is it possible to identify individuals (i.e., one or more natural persons), either directly or indirectly (i.e., in combination with other data) from the dataset? If so, please describe how. * Yes. There is the celebrity recognition task in our dataset, where 50 well-know celebrity are chosen. 
We choose the celebrity who can be generated well by stable diffusion XL. Does the dataset contain data that might be considered sensitive in any way (e.g., data that reveals race or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history)? If so, please provide a description. * No, our benchmark does not contain any sensitive data. §.§ Collection Process How was the data associated with each instance acquired? Was the data directly observable (e.g., raw text, movie ratings), reported by subjects (e.g., survey responses), or indirectly inferred/derived from other data (e.g., part-of-speech tags, model based guesses for age or language)? If data was reported by subjects or indirectly inferred/derived from other data, was the data validated/verified? If so, please describe how. * We display the detailed explanation in Tab. <ref>. What mechanisms or procedures were used to collect the data (e.g., hardware apparatus or sensor, manual human curation, software program, software API)? How were these mechanisms or procedures validated? * We collect the data by manual human curation. If the dataset is a sample from a larger set, what was the sampling strategy (e.g., deterministic, probabilistic with specific sampling probabilities)? * No. Who was involved in the data collection process (e.g., students, crowdworkers, contractors) and how were they compensated (e.g., how much were crowdworkers paid)? * We collect the metadata of Tab. <ref> by authors. The images are generated by stable diffusion and labels of each image are also automatically generated. Over what timeframe was the data collected? Does this timeframe match the creation timeframe of the data associated with the instances (e.g., recent crawl of old news articles)? If not, please describe the timeframe in which the data associated with the instances was created. * Our dataset was conducted in April of 2024, but the results do not depend on the date of data collection. Were any ethical review processes conducted (e.g., by an institutional review board)? If so, please provide a description of these review processes, including the outcomes, as well as a link or other access point to any supporting documentation. * No. Did you collect the data from the individuals in question directly, or obtain it via third parties or other sources (e.g., websites)? * No. Were the individuals in question notified about the data collection? If so, please describe (or show with screenshots or other information) how notice was provided, and provide a link or other access point to, or otherwise reproduce, the exact language of the notification itself. * N/A. Our Dysca does not involve the collection from the individuals. Did the individuals in question consent to the collection and use of their data? If so, please describe (or show with screenshots or other information) how consent was requested and provided, and provide a link or other access point to, or otherwise reproduce, the exact language to which the individuals consented. * N/A. Our Dysca does not involve the collection from the individuals. If consent was obtained, were the consenting individuals provided with a mechanism to revoke their consent in the future or for certain uses? If so, please provide a description, as well as a link or other access point to the mechanism (if appropriate). * N/A. 
Our Dysca does not involve the collection from the individuals. Has an analysis of the potential impact of the dataset and its use on data subjects (e.g., a data protection impact analysis) been conducted? If so, please provide a description of this analysis, including the outcomes, as well as a link or other access point to any supporting documentation. * No. §.§ Preprocessing/cleaning/labeling Was any preprocessing/cleaning/labeling of the data done (e.g., discretization or bucketing, tokenization, part-of-speech tagging, SIFT feature extraction, removal of instances, processing of missing values)? If so, please provide a description. If not, you may skip the remaining questions in this section. * Yes. We leverage the off-the-shelf models, i.e., PP-OCRv3 <cit.> and CLIP-L-14 <cit.>, to clean the data. PP-OCRv3 <cit.> is leveraged as the filter to exclude the failure image that TextDiffusion2 <cit.> generates the wrong text on the image. For the other images, we use CLIP-L-14 <cit.> to filter out the images with low text-image consistency. Was the “raw” data saved in addition to the preprocessed/cleaned/labeled data (e.g., to support unanticipated future uses)? If so, please provide a link or other access point to the “raw” data. * Yes. We have saved all the data. However, most of these images are filtered and considered to be useless. Is the software that was used to preprocess/clean/label the data available? If so, please provide a link or other access point. * Yes. CLIP-L-14 can be downloaded at <https://huggingface.co/docs/transformers/v4.41.3/en/model_doc/clip#transformers.CLIPModel>. PP-OCRv3 can be downloaded at <https://github.com/PaddlePaddle/PaddleOCR/blob/main/README_en.md> §.§ Uses Has the dataset been used for any tasks already? If so, please provide a description. * No. The proposed dataset is the novel one which is used for evaluation current LVLMs perception ability. Is there a repository that links to any or all papers or systems that use the dataset? If so, please provide a link or other access point. * Yes. We plan to create a section on the project homepage to keep track of LVLMs papers for researchers to analyze and compare. What (other) tasks could the dataset be used for? * In this work, we do not explore the possibility of utilizing our benchmark for model training / fine-tuning. Our primary goal in this paper is to provide a large-scale evaluation benchmark that addresses the issue of data leakage in current multimodal evaluation benchmarks and offers evaluation results across multiple subtasks, scenarios, question types and styles. Nevertheless, considering that Dysca has the capability to synthesize high-resolution and unlimited amounts of annotated multimodal data, we believe that Dysca also holds potential as a training data synthesis tool for LVLMs. Is there anything about the composition of the dataset or the way it was collected and preprocessed/cleaned/labeled that might impact future uses? For example, is there anything that a dataset consumer might need to know to avoid uses that could result in unfair treatment of individuals or groups (e.g., stereotyping, quality of service issues) or other risks or harms (e.g., legal risks, financial harms)? If so, please provide a description. Is there anything a dataset consumer could do to mitigate these risks or harms? * Yes. Are there tasks for which the dataset should not be used? If so, please provide a description. * The proposed dataset should not be used to generate offensive data. 
§.§ Distribution Will the dataset be distributed to third parties outside of the entity (e.g., company, institution, organization) on behalf of which the dataset was created? If so, please provide a description. * Yes. How will the dataset will be distributed (e.g., tarball on website, API, GitHub)? Does the dataset have a digital object identifier (DOI)? * We will open-source our dataset on our GitHub project homepage. At the moment, we do not have a DOI number. When will the dataset be distributed? * The dataset can be downloaded right now. Will the dataset be distributed under a copyright or other intellectual property (IP) license, and/or under applicable terms of use (ToU)? If so, please describe this license and/or ToU, and provide a link or other access point to, or otherwise reproduce, any relevant licensing terms or ToU, as well as any fees associated with these restrictions. * The licence of Dysca is https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0/blob/main/LICENSE.md"CreativeML Open RAIL++-M", which follows the licence set by the Stable Diffusion XL. Have any third parties imposed IP-based or other restrictions on the data associated with the instances? If so, please describe these restrictions, and provide a link or other access point to, or otherwise reproduce, any relevant licensing terms, as well as any fees associated with these restrictions. * No. Do any export controls or other regulatory restrictions apply to the dataset or to individual instances? If so, please describe these restrictions, and provide a link or other access point to, or otherwise reproduce, any supporting documentation. * Not yet. §.§ Maintenance Who will be supporting/hosting/maintaining the dataset? * Followed by the double-blind rule, we will release the detailed information about this part once our paper is accepted. How can the owner/curator/manager of the dataset be contacted (e.g., email address)? * Followed by the double-blind rule, we will release the detailed information about this part once our paper is accepted. Is there an erratum? If so, please provide a link or other access point. * No. Will the dataset be updated (e.g., to correct labeling errors, add new instances, delete instances)? If so, please describe how often, by whom, and how updates will be communicated to dataset consumers (e.g., mailing list, GitHub)? * There are no plans at the moment, but if there are updates, they will be announced, and the download source will be updated on the project homepage. If the dataset relates to people, are there applicable limits on the retention of the data associated with the instances (e.g., were the individuals in question told that their data would be retained for a fixed period of time and then deleted)? If so, please describe these limits and explain how they will be enforced. * No. Will older versions of the dataset continue to be supported/hosted/maintained? If so, please describe how. If not, please describe how its obsolescence will be communicated to dataset consumers. * Yes. If there are any updates, the previous version of the dataset will also be shared on website for download. If others want to extend/augment/build on/contribute to the dataset, is there a mechanism for them to do so? If so, please provide a description. Will these contributions be validated/verified? If so, please describe how. If not, why not? Is there a process for communicating/distributing these contributions to dataset consumers? If so, please provide a description. * Yes. 
We welcome and encourage researchers to extend/augment/build on/contribute to our dataset for non-profit purposes without the need for prior notification.
http://arxiv.org/abs/2406.17759v1
20240625174313
Interpreting Attention Layer Outputs with Sparse Autoencoders
[ "Connor Kissane", "Robert Krzyzanowski", "Joseph Isaac Bloom", "Arthur Conmy", "Neel Nanda" ]
cs.LG
[ "cs.LG" ]
Dark photon pair production via off-shell dark Higgs at FASER Takashi Shimomura July 1, 2024 ============================================================= § ABSTRACT Decomposing model activations into interpretable components is a key open problem in mechanistic interpretability. Sparse autoencoders (SAEs) are a popular method for decomposing the internal activations of trained transformers into sparse, interpretable features, and have been applied to MLP layers and the residual stream. In this work we train SAEs on attention layer outputs and show that also here SAEs find a sparse, interpretable decomposition. We demonstrate this on transformers from several model families and up to 2B parameters. We perform a qualitative study of the features computed by attention layers, and find multiple families: long-range context, short-range context and induction features. We qualitatively study the role of every head in GPT-2 Small, and estimate that at least 90% of the heads are polysemantic, i.e. have multiple unrelated roles. Further, we show that Sparse Autoencoders are a useful tool that enable researchers to explain model behavior in greater detail than prior work. For example, we explore the mystery of why models have so many seemingly redundant induction heads, use SAEs to motivate the hypothesis that some are long-prefix whereas others are short-prefix, and confirm this with more rigorous analysis. We use our SAEs to analyze the computation performed by the Indirect Object Identification circuit (<cit.>), validating that the SAEs find causally meaningful intermediate variables, and deepening our understanding of the semantics of the circuit. We open-source the trained SAEs and a tool for exploring arbitrary prompts through the lens of Attention Output SAEs. § INTRODUCTION Mechanistic interpretability aims to reverse engineer neural network computations into human-understandable algorithms <cit.>. A key sub-problem is to decompose high dimensional activations into meaningful concepts, or features. If successful at scale, this research would enable us to identify and debug model errors <cit.>, control and steer model behavior <cit.>, and better predict out-of-distribution behavior <cit.>. Prior work has successfully analyzed many individual model components, such as neurons and attention heads. However, both neurons <cit.> and attention heads <cit.> are often polysemantic <cit.>: they appear to represent multiple unrelated concepts or perform different functions depending on the input. Polysemanticity makes it challenging to interpret the role of individual neurons or attention heads in the model's overall computation, suggesting the need for alternative units of analysis. Our paper builds on literature using Sparse Autoencoders (SAEs) to extract interpretable feature dictionaries from the residual stream <cit.> and MLP activations <cit.>. While these approaches have shown promise in disentangling activations into interpretable features, attention layers have remained difficult to interpret. In this work, we apply SAEs to reconstruct attention layer outputs, and develop a novel technique (weight-based head attribution) to associate learned features with specific attention heads. This allows us to sidestep challenges posed by polysemanticity (<Ref>). Since SAEs applied to LLM activations are already widely used in the field, we do not see the application of SAEs to attention outputs as our main contribution. 
Instead, we hope our main contribution to be making a case for Attention Output SAEs as a valuable research tool that others in the mechanistic interpretability community should adopt. We do this by rigorously showing that Attention Output SAEs find sparse, interpretable reconstructions, that they easily enable qualitative analyses to gain insight into the functioning of attention layers, and that they are a valuable tool for novel research questions such as why models have so many seemingly redundant induction heads <cit.> or better understanding the semantics of the Indirect Object Identification circuit <cit.>. In more detail, our main contributions are as follows: * We demonstrate that Sparse Autoencoders decompose attention layer outputs into sparse, interpretable linear combinations of feature vectors, giving us deeper insight into what concepts attention layers learn up to 2B parameter models (<Ref>). We perform a qualitative study of the features computed by attention layers, and find multiple families: long-range context, short-range context and induction features (<Ref>). * We apply SAEs to systematically inspect every attention head in GPT-2 Small (<Ref>), and extend this analysis to make progress on the open question of why there are be multiple, seemingly redundant induction heads (<Ref>). Our method identifies differences between induction heads <cit.> which specialize in "long prefix induction" <cit.> vs "short prefix induction", demonstrating the utility of these SAEs for interpretability research. * We show that Attention Output SAEs are useful for circuit analysis (<Ref>), by finding and interpreting causally relevant SAE features for the widely-studied Indirect Object Identification circuit <cit.>, and resolving a way our prior understanding was incomplete. * We introduce Recursive Direct Feature Attribution (RDFA, <Ref>) - a technique that exploits the linear structure of transformers to discover sparse feature circuits through the attention layers. We release an accompanying tool for finding and visualizing the circuits on arbitrary prompts.[The RDFA tool is available at: <https://robertzk.github.io/circuit-explorer>] § METHODOLOGY Reconstructing attention layer outputs: We closely follow the setup from <cit.> to train Sparse Autoencoders that reconstruct the attention layer outputs. Specifically, we train our SAEs on the z∈ℝ^d_head vectors <cit.> concatenated across all heads of some arbitrary layer (i.e. z_cat∈ℝ^d_model where d_model = n_heads· d_head). Note that z is the attention weighted sum of value vectors v∈ℝ^d_head before they are converted to the attention output by a linear map (<Ref>), and should not be confused with the final output of the attention layer. We choose to concatenate each z vector in the layer, rather than training an SAE per head, so that our method is robust to features represented as a linear combination of multiple head outputs <cit.>. Given an input activation z_cat∈ℝ^d_model, Attention Output SAEs compute a decomposition (using notation similar to <cit.>): z_cat = ẑ_cat + ε(z_cat) = ∑_i=0^d_sae f_i(z_cat)d_i + b + ε(z_cat) where ẑ_cat is an approximate reconstruction and ε(z_cat) is an error term. We define d_i as unit-norm feature directions with sparse coefficients f_i(z_cat) ≥ 0 as the corresponding feature activations for z_cat. We also include an SAE bias term b. As mentioned, we do not train SAEs on the output of the attention layer W_O z_cat∈ℝ^d_model (where W_O is the out projection weight matrix of the attention layer (<Ref>)). 
Since W_O z_cat is a linear transformation of z_cat, we expect to find the same features. However, we deliberately trained our SAE on z_cat since we find that this allows us to attribute which heads the decoder weights are from for each SAE feature, as described below. Weight-based head attribution: We develop a technique specific to this setup: decoder weight attribution by head. For each layer, our attention SAEs are trained to reconstruct z_cat, the concatenated outputs of each head. Thus each SAE feature direction d_i is a 1D vector in ℝ^n_heads· d_head. We can split each feature direction, d_i, into a concatenation of n_heads smaller vectors, each of shape d_head: d_i = [d_i,1^⊤, d_i,2^⊤, …, d_i,n_heads^⊤]^⊤ where d_i,j∈ℝ^d_head for j = 1, 2, …, n_heads. We can intuitively think of each d_i,j as reconstructing the part of feature direction that comes from head j. We then compute the norm of each slice as a proxy for how strongly each head writes this feature. Concretely, for any feature i, we can compute the weights based attribution score to head k as h_i,k = ‖d_i,k‖_2/∑_j=1^n_heads‖d_i,j‖_2 For any head k, we can also sort all features by their head attribution to get a sense of what features that head is most responsible for outputting (see <Ref>). Direct feature attribution: We provide an activation based attribution method to complement the weights based attribution above. As attention layer outputs are a linear function of attention head outputs <cit.>, we can rewrite SAE feature activations in terms of the contribution from each head. f_i^pre(z_cat) = 𝐰_i^⊤𝐳_cat = 𝐰_i,1^⊤𝐳_1 + 𝐰_i,2^⊤𝐳_2 + ⋯ + 𝐰_i,n_heads^⊤𝐳_n_heads where 𝐰_i ∈ℝ^d_model is the ith row of the encoder weight matrix, 𝐰_i,j∈ℝ^d_head is the jth slice of 𝐰_i, and f_i^pre(z_cat) is the pre- feature activation for feature i (i.e. (f_i^pre(z_cat)) := f_i(z_cat)). Note that we exclude SAE bias terms for brevity. We call this “direct feature attribution” (as it’s analogous to direct logit attribution <cit.>), or "DFA" by head. We apply the same idea to perform direct feature attribution on the value vectors at each source position, since the z vectors are a linear function of the value vectors if we freeze attention patterns <cit.>. We call this "DFA by source position". Recursive Direct Feature Attribution (RDFA): Here we extend the DFA technique described above to introduce a general method to trace models' computation on arbitrary prompts. Given that we have frozen attention patterns and LayerNorm, there is a linear contribution from (1) different token position residual streams, (2) upstream model components, and (3) upstream Attention Output SAE features to downstream Attention Output SAE features. This enables us to perform a fine-grained decomposition of Attention Output SAE features recursively through earlier token position residual streams and upstream components across every layer. We call this technique Recursive DFA (RDFA). In <Ref>, we provide a full description of the RDFA algorithm, accompanied by equations for key linear decompositions. We also release a visualization tool that enables performing Recursive DFA on arbitrary prompts for GPT-2 Small. We currently only support this recursive attribution from attention to attention components, as we cannot pass upstream linearly through MLPs due to the non-linear activation function. The tool is available at: <https://robertzk.github.io/circuit-explorer>. 
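To make the setup above concrete, the following PyTorch sketch implements an attention-output SAE over the concatenated head outputs z_cat, together with the weight-based head attribution and DFA-by-head computations described above. It is an illustrative reimplementation under our own simplifying assumptions (e.g., the bias convention, initialization and example sizes), not the authors' training code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttnOutputSAE(nn.Module):
    """Sparse autoencoder on z_cat in R^{n_heads * d_head} (illustrative sketch)."""
    def __init__(self, d_model: int, d_sae: int, n_heads: int):
        super().__init__()
        assert d_model % n_heads == 0
        self.n_heads, self.d_head = n_heads, d_model // n_heads
        self.W_enc = nn.Parameter(torch.randn(d_sae, d_model) * 0.01)
        self.W_dec = nn.Parameter(torch.randn(d_sae, d_model) * 0.01)  # rows ~ feature directions d_i
        self.b_enc = nn.Parameter(torch.zeros(d_sae))
        self.b_dec = nn.Parameter(torch.zeros(d_model))

    def forward(self, z_cat: torch.Tensor):
        """z_cat: [batch, d_model]; returns the reconstruction and feature activations."""
        f = F.relu((z_cat - self.b_dec) @ self.W_enc.T + self.b_enc)
        z_hat = f @ self.W_dec + self.b_dec
        return z_hat, f

    def head_attribution(self, i: int) -> torch.Tensor:
        """Weight-based attribution: relative norm of each d_head-sized slice of feature i's decoder row."""
        norms = self.W_dec[i].view(self.n_heads, self.d_head).norm(dim=-1)
        return norms / norms.sum()

    def dfa_by_head(self, i: int, z_per_head: torch.Tensor) -> torch.Tensor:
        """Direct feature attribution: per-head contribution to feature i's pre-activation
        (bias terms omitted, as in the text). z_per_head: [batch, n_heads, d_head]."""
        w = self.W_enc[i].view(self.n_heads, self.d_head)
        return torch.einsum("bhd,hd->bh", z_per_head, w)

# Example sizes loosely matching GPT-2 Small (expansion factor 32 is an assumption).
sae = AttnOutputSAE(d_model=768, d_sae=768 * 32, n_heads=12)
z_cat = torch.randn(4, 768)
z_hat, f = sae(z_cat)
```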
§ ATTENTION OUTPUT SAES FIND SPARSE, INTERPRETABLE RECONSTRUCTIONS In this section, we show that Attention Output SAE reconstructions are sparse, faithful, and interpretable. We first explain the metrics we use to evaluate our SAEs (<Ref>). We then show that our SAEs find sparse, faithful, interpretable reconstructions (<Ref>). Finally we demonstrate that our SAEs give us better insights into the concepts that attention layers learn in practice by discovering three attention feature families (<Ref>). §.§ Setup To evaluate the sparsity and fidelity of our trained SAEs we use two metrics from <cit.> (using notation similar to <cit.>): L0. The average number of features firing on a given input, i.e. 𝔼_x∼𝒟f(x)_0. Loss recovered. The average cross entropy loss of the language model recovered with the SAE "spliced in" to the forward pass, relative to a zero ablation baseline. More concretely: 1 - CE(x̂∘f) - CE(Id)/CE(ζ) - CE(Id), where x̂∘f is the autoencoder function, ζ: x→0 is the zero ablation function and Id: x→x is the identity function. According to this definition, an SAE that reconstructs its inputs perfectly would get a loss recovered of 100%, whereas an SAE that always outputs the zero vector as its reconstruction would get a loss recovered of 0%. Feature Interpretability Methodology. We use dashboards <cit.> showing which dataset examples SAE features maximally activate on to determine whether they are interpretable. These dashboards also show the top Direct Feature Attribution by source position, weight-based head attribution for each head (<Ref>), approximate direct logit effects <cit.> as well as activating examples from randomly sampled activation ranges, giving a holistic picture of the role of the feature. See <Ref> for full details about this methodology. §.§ Evaluating Attention Output SAEs [23]r8cm 7cm Evaluations of sparsity, fidelity, and interpretability for Attention Output SAEs trained across multiple models and layers. Percentage of interpretable features were based on 30 randomly sampled live features inspected per layer. Model Layer L0 % CE Rec.† % Interp. Gemma-2B <cit.> 6 90 75% 66% GPT-2 Small 0 3 99% 97% GPT-2 Small 1 20 78% 87% GPT-2 Small 2 16 90% 97% GPT-2 Small 3 15 84% 77% GPT-2 Small 4 15 88% 97% GPT-2 Small 5 20 85% 80% GPT-2 Small 6 19 82% 77% GPT-2 Small 7 19 83% 70% GPT-2 Small 8 20 76% 60% GPT-2 Small 9 21 83% 77% GPT-2 Small 10 16 85% 80% GPT-2 Small 11 8 89% 63% GPT-2 Small All 80% GELU-2L <cit.> 1 12 87% 83% † Percentage of cross-entropy loss recovered (Equation <ref>). Average over % interpretable across all layers. We train and evaluate Attention Output SAEs across a variety of different models and layers. For GPT-2 Small <cit.>, we notably evaluate an SAE for every layer. We find that our SAEs are sparse (oftentimes with < 20 average features firing), faithful (oftentimes > 80% of cross entropy loss recovered relative to zero ablation) and interpretable (oftentimes > 80% of live features interpretable). See <Ref> for per model and layer details.[We release weights for every SAE, corresponding feature dashboards, and an interactive tool for exploring several attention SAEs throughout a model in <Ref>.] See <Ref> for further discussion of these results. §.§ Exploring Feature Families In this section we more qualitatively show that Attention Output SAEs are interpretable by examining different feature families: groups of SAE features that share some common high-level characteristic. 
We first evaluate 30 randomly sampled live features from SAEs across multiple models and layers (as described in <Ref>) and report the percentage of features that are interpretable in <Ref>. We notice that in all cases, the majority of live features are interpretable, often >80%. Note that this is a small sample of features, and human judgment may be flawed. We list confidence intervals for percentage of interpretable features in <Ref>. We now use our understanding of these extracted features to share deeper insights into the concepts attention layers learn. Attention Output SAEs enable us to taxonomize a large fraction of what these layers are doing based on feature families, giving us better intuitions about how transformers use attention layers in practice. Throughout our SAEs trained on multiple models, we repeatedly find three common feature families: induction features (e.g. "board" token is next by induction), local context features (e.g. current sentence is a question, <Ref>), and high-level context features (e.g. current text is about pets). All of these features involve moving prior information with the context, consistent with the high-level conceptualization of the attention mechanism from <cit.>. We present these for illustrative purposes and do not expect these to nearly constitute a complete set of feature families. While we focus on these three feature families that are present across all of the models we studied, we also find feature families related to predicting names in the context <cit.>, succession <cit.>, detecting duplicate tokens <cit.>, and copy suppression <cit.> in GPT-2 Small (<Ref>). To more rigorously understand these three feature families, we performed a case study for each of these features (similar to <cit.>). For brevity, we highlight a case study of an induction feature below and leave the remaining to <Ref> and <ref>. Induction features. Our analysis revealed multiple "induction features" across different models studied. As we are not aware of any induction features extracted by MLP SAEs in prior work, we hypothesize that induction features are unique to attention <cit.>. In what follows, we showcase a “‘board’ is next by induction” feature from our L1 GELU-2L <cit.> SAE. However, we note that “board induction” is just one example from hundreds of “<token> is next by induction” features discovered by our analysis (see <Ref>). We also detail the feature’s upstream computations and downstream effects in <Ref>. The ‘board’ induction feature activates on the second instance of <token> in prompts of the form “<token> board . . . <token>”. To demonstrate ‘board induction’ is a genuinely monosemantic feature, we provide evidence that the feature is both: (i) specific and (ii) sensitive to this context <cit.>. Specificity was established through creation of a proxy that checks for cases of ‘board’ induction. Thereafter, we compared the activation of our proxy to the activation of the feature. We found that the upper parts of the activation spectrum clearly responded, with high specificity, to ‘board’ induction (<Ref>). Although some false positives were observed in the lower activation ranges (as in <cit.>), we believe there are mundane reasons to expect such results (see <Ref>). We now move onto sensitivity. Our activation sensitivity analysis found 68 false negatives in a dataset of 1 million tokens, and all false negatives were manually checked. Although these examples satisfy the ‘board’ induction pattern, it is clear that ‘board’ should not be predicted. 
Often, this was because there were even stronger cases of induction for another token (<Ref>). § INTERPRETABILITY INVESTIGATIONS USING ATTENTION OUTPUT SAES In this section we demonstrate that Attention SAEs are useful as general purpose interpretability tools, allowing for novel insights about the role of attention layers in language models. We first develop a technique that allows us to systematically interpret every attention head in a model (<Ref>), discovering new behaviors and gaining high-level insight into the phenomena of attention head polysemanticity <cit.>. We then apply our SAEs to make progress on the open question of why models have many seemingly redundant induction heads <cit.>, finding induction heads with subtly different behaviors: some primarily perform induction where there is a long prefix <cit.> whereas others generally perform short prefix induction (<Ref>). Finally, we apply Attention Output SAEs to circuit analysis (<Ref>), unveiling novel insights about the Indirect Object Identification circuit <cit.> that were previously out-of-reach, and find causally relevant SAE features in the process. §.§ Interpreting all heads in GPT-2 Small In this section, we use our weight-based head attribution technique (see <Ref>) to systematically interpret every attention head in GPT-2 Small <cit.>. As in <Ref>, we apply Equation <ref> to compute the weights based attribution score h_i,k to each head k and identify the top ten features {d_i_r}_r=1^10 with highest attribution score to head k. Although Attention Output SAE features are defined relative to an entire attention layer, this identifies the features most salient to a given head with minimal contributions from other heads. Using the feature interpretability methodology from <Ref>, we manually inspect these features for all 144 attention heads in GPT-2 Small. Broadly, we observe that features become more abstract in middle-layer heads and then taper off in abstraction at late layers: Early heads. Layers 0-3 exhibit primarily syntactic features (single-token features, bigram features) and fire secondarily on specific verbs and entity fragments. Some long and short range context tracking features are also present. Middle heads. Layers 4-9 express increasingly more complex concept feature groups spanning grammatical and semantic constructs. Examples include heads that express primarily families of related active verbs, prescriptive and active assertions, and some entity characterizations. Late-middle heads show feature groups on grammatical compound phrases and specific concepts, such as reasoning and justification related phrases and time and distance relationships. Late heads. Layers 10-11 continue to express some complex concepts such as counterfactual and timing/tense assertions, with the last layer primarily exhibiting syntactic features for grammatical adjustments and some bigram completions. We identify many existing known motifs (including induction heads <cit.>, previous token heads <cit.>, successor heads <cit.> and duplicate token heads <cit.>) in addition to new motifs (e.g. preposition mover heads). More details on each layer and head are available in <Ref>. We note that there are some limitations to this methodology, as discussed in <Ref>. §.§.§ Investigating attention head polysemanticity with SAEs We now apply our analysis above to gain high-level insight into the prevalence of attention head polysemanticity <cit.>. 
While the technique from <Ref> is not sufficient to prove that a head is monosemantic, we believe that having multiple unrelated features attributed to a head is evidence that the head is doing multiple tasks. We also note that there is a possibility we missed some monosemantic heads due to missing patterns at certain levels of abstraction (e.g. some patterns might not be evident from a small sample of SAE features, and in other instances an SAE might have mistakenly learned some red herring features). During our investigations of each head, we found 14 monosemantic candidates (i.e. all of the top 10 attributed features for these heads were closely related). This suggests that about 90% of the attention heads in GPT-2 small are performing at least two different tasks. To validate that the feature lens is telling us something real about the multiple roles of the heads, we confirm that one of these attention heads is polysemantic with experiments that do not require SAEs. <Ref> demonstrates two completely different behaviors of 10.2 found in the top SAE features: digit copying and predicting base64 at the end of URLs[By digit copying behavior, we refer to instances of boosting a specific digit found earlier in the prompt: for example, as in "Image 2/8... Image 5/8". By URL completion, we refer to instances of boosting plausible portions of a URL, such as the base64 tokens immediately following "pic.twitter.com/".]. We construct synthetic datasets corresponding to both of these tasks, and observe the mean change in cross entropy loss after ablating every attention head output in layer 10. We find that ablating 10.2 causes the largest impact on the loss in both cases, confirming that this head is involved in both tasks. §.§ Long-prefix induction head In this section we apply Attention Output SAEs to make progress on a long-standing open question: why do models have so many seemingly redundant induction heads <cit.>? We use our weight-based head attribution technique (see <Ref>) to inspect the top SAE features attributed to two different induction heads and find one which specializes in “long prefix induction” <cit.>, while the other primarily does “short prefix induction”. As a case study, we focus on GPT-2 Small <cit.>, which has two induction heads in layer 5 (heads 5.1 and 5.5) <cit.>. To distinguish between these two heads, we qualitatively inspect the top ten SAE features attributed to both heads (as in <Ref>) and look for patterns. Glancing at the top features attributed to head 5.1 shows “long induction” features, defined as features that activate on examples of induction with at least two repeated prefix matches (e.g. completing “... ABC ... AB” with C). We now confirm this hypothesis with independent lines of evidence that don't require SAEs. We first generate synthetic induction datasets with random repeated tokens of varying prefix lengths. For each dataset, we compute the induction score, defined as the average attention to the token which induction would suggest comes next, for both heads. We confirm that while both induction scores rise as we increase prefix length, head 5.1 has a much more dramatic phase change as we transition to long prefixes (i.e. ≥2 ) (<Ref>). We also find and intervene on real examples of long prefix induction from the training distribution, corrupting them to only be one prefix by replacing the 2nd left repeated token (i.e 'A' in ABC ... AB -> C) with a different, random token. 
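A sketch of how the synthetic induction datasets and the induction score described above might be constructed, assuming a TransformerLens-style HookedTransformer; the prompt construction details (filler length, random token sampling) are illustrative assumptions:

import torch
from transformer_lens import HookedTransformer

def induction_score(model: HookedTransformer, layer: int, head: int,
                    prefix_len: int, n_prompts: int = 64, seed: int = 0) -> float:
    """Average attention from the final token of a repeated prefix back to the
    token that induction would predict, for prompts "... P1..Pn T ... P1..Pn"."""
    torch.manual_seed(seed)
    d_vocab = model.cfg.d_vocab
    scores = []
    for _ in range(n_prompts):
        prefix = torch.randint(0, d_vocab, (prefix_len,))
        target = torch.randint(0, d_vocab, (1,))
        filler = torch.randint(0, d_vocab, (20,))
        toks = torch.cat([torch.tensor([model.tokenizer.bos_token_id]),
                          prefix, target, filler, prefix]).unsqueeze(0)
        _, cache = model.run_with_cache(toks)
        pattern = cache["pattern", layer][0, head]   # [dst_pos, src_pos]
        dst = toks.shape[1] - 1                      # final token of the 2nd prefix
        src = 1 + prefix_len                         # position of T, the induction target
        scores.append(pattern[dst, src].item())
    return sum(scores) / len(scores)

# e.g. compare GPT-2 Small heads 5.1 and 5.5 across prefix lengths 1..5:
# model = HookedTransformer.from_pretrained("gpt2")
# [induction_score(model, layer=5, head=1, prefix_len=n) for n in range(1, 6)]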
We find that this intervention effectively causes head 5.1 to stop doing induction, as its average induction score falls from 0.55 to 0.05. Head 5.5, meanwhile, maintains an average induction score of 0.43 (<Ref>). See <Ref> for additional lines of evidence. §.§ Analyzing the IOI circuit with Attention Output SAEs We now show that Attention Output SAEs are useful tools for circuit analysis. In the process, we also go beyond early work to find evidence that our SAEs find causally relevant intermediate variables. As a case study, we apply our SAEs to the widely studied Indirect Object Identification circuit <cit.>, and find that our SAEs improve upon attention head interpretability based techniques from prior work. The Indirect Object Identification (IOI) task <cit.> is to complete sentences like “After John and Mary went to the store, John gave a bottle of milk to” with “ Mary” rather than “ John”. We refer to the repeated name (John) as S (the subject) and the non-repeated name (Mary) as IO (the indirect object). For each choice of the IO and S names, there are two prompt templates: one where the IO name comes first (the 'ABBA' template) and one where it comes second (the 'BABA' template). <cit.> analyzed this circuit by localizing and interpreting several classes of attention heads. They argue that the circuit implements the following algorithm: * Induction heads and Duplicate token heads identify that S is duplicated. They write information to indicate that this token is duplicated, as well as “positional signal” pointing to the S1 token. * S-inhibition heads route this information from S2 to END via V-composition <cit.>. They output both token and positional signals that cause the Name mover heads to attend less to S1 (and thus more to IO) via Q-composition <cit.>. * Name mover heads attend strongly to the IO position and copy, boosting the logits of the IO token that they attend to. Although <cit.> find that “positional signal” originating from the induction heads is a key aspect of this circuit, they don’t figure out the specifics of what this signal is, and ultimately leave this mystery as one of the “most interesting future directions” of their work. Attention Output SAEs immediately reveal the positional signal by decomposing these activations into interpretable features. We find that rather than absolute or relative position between S tokens, the positional signal is actually whether the duplicate name comes after the “ and” token that connects “John and Mary”. Identifying the positional features: To generate this hypothesis, we localized and interpreted causally relevant SAE features from the outputs of the attention layers that contain induction heads (Layers 5 and 6) with zero ablations. For now we focus on our Layer 5 SAE, and leave other layers to <Ref>. In <Ref> we also evaluate that, for these layers, the SAE reconstructions are faithful on the IOI distribution, and thus viable for circuit analysis. [23]r0.51 < g r a p h i c s > Results from two noising experiments on induction layers' attention outputs at S2 position. Noising from a distribution that just changes " and" to " alongside" degrades performance, while 3 simultaneous perturbations that maintains whether the duplicate name is after the ‘ and’ token preserve 93% of average logit difference. During each forward pass, we replace the L5 attention layer output activations with a sparse linear combination of SAE feature directions plus an error term, as in (<ref>). 
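A minimal sketch of this substitution, together with the per-feature zero ablation applied next; the SAE interface (W_enc/W_dec/b_enc/b_dec), hook names, and position/token indices are assumptions in the TransformerLens style:

import torch
from transformer_lens import HookedTransformer, utils

def encode(sae, x):                              # x: [..., n_heads * d_head]
    return torch.relu((x - sae.b_dec) @ sae.W_enc + sae.b_enc)

def decode(sae, acts):
    return acts @ sae.W_dec + sae.b_dec

def logit_diff_with_feature_ablated(model: HookedTransformer, sae, layer: int,
                                    tokens, s2_pos: int, io_tok: int, s_tok: int,
                                    feature_id=None):
    def hook(z, hook):
        b, p, h, d = z.shape
        flat = z.reshape(b, p, h * d)
        acts = encode(sae, flat)
        err = flat - decode(sae, acts)            # keep the SAE reconstruction error term
        if feature_id is not None:
            acts[:, s2_pos, feature_id] = 0.0     # zero-ablate one feature at the S2 position
        return (decode(sae, acts) + err).reshape(b, p, h, d)

    logits = model.run_with_hooks(
        tokens, fwd_hooks=[(utils.get_act_name("z", layer), hook)])
    final = logits[:, -1, :]
    return (final[:, io_tok] - final[:, s_tok]).mean().item()

# effect of one feature (indices are illustrative):
# delta = (logit_diff_with_feature_ablated(model, sae, 5, toks, s2, io, s, feature_id=None)
#          - logit_diff_with_feature_ablated(model, sae, 5, toks, s2, io, s, feature_id=7515))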
We then zero ablate each feature, one at a time, and record the resulting change in logit difference between the IO and S tokens. This localizes three features that cause a notable decrease in average logit difference. See <Ref> for more details.
Interpreting the “positional” features: We then interpreted these causally relevant features. Shallow investigations of feature dashboards (see <Ref>, <Ref>) suggest that all three of these fire on duplicate tokens that were previously before or after “ and” tokens (e.g. “I am a duplicate token that previously followed ‘ and’”). These feature interpretations motivated the hypothesis that the “positional signal” in IOI is solely determined by the position of the name relative to (i.e. before or after) the ‘ and’ token.
Confirming the hypothesis: We now verify this hypothesis without reference to SAEs. We design a noising (defined in <cit.>) experiment that perturbs three properties of IOI prompts simultaneously, while preserving whether the duplicate name is before or after the ‘ and’ token. Concretely, our counterfactual distribution makes the following changes:
* Replace each name with another random name (removing "token signal" <cit.>)
* Prepend filler text (e.g. "It was a nice day") (corrupting absolute positions of all names)
* Add filler text between S1 and S2 (corrupting the relative position between S tokens)
Despite being almost entirely different prompts, noising the attention layer outputs for both induction layers [5, 6] at the S2 position still recovers roughly 93% of the average logit diff relative to zero ablating the outputs at this position (<Ref>). One alternate hypothesis is that the positional signal is a more general emergent positional embedding <cit.> (e.g. “I am the second name in the sentence”) that doesn’t actually depend on the “ and” token. We falsify this by noising attention outputs at the S2 position of layers [5, 6] from a corrupted distribution which only changes “ and” to the token “ alongside”. Note that this only corrupts one piece of information (the ‘ and’) compared to the three corruptions above, yet we only recover roughly 43% of the logit difference relative to zero ablation (<Ref>).
§ RELATED WORK
Mechanistic Interpretability. Mechanistic interpretability research aims to reverse engineer neural network computations into human-understandable algorithms <cit.>. Prior mechanistic interpretability work has identified computation subgraphs of models that implement tasks <cit.>, found interpretable, recurring model components over models of multiple sizes <cit.>, and reverse-engineered how toy tasks are carried out in small transformers <cit.>. Some have successfully interpreted attention heads <cit.>, though the issue has been raised that heads are often polysemantic <cit.>, and may not be the correct unit of analysis <cit.>. Our technique goes beyond prior work by decomposing the outputs of the entire attention layer into finer-grained linear features, without assuming that heads are the right unit of analysis. Induction heads <cit.> have been studied extensively by <cit.>, who first observed that LLMs had many, seemingly redundant induction heads. <cit.> investigated two induction heads in a 2-layer attention-only model, and discovered the "long induction" (long-prefix induction) variant in both heads. In contrast, we find that two different induction heads specialize in long-prefix and short-prefix induction respectively in GPT-2 Small.
Classical Dictionary Learning.
<cit.> explores how both discrete and continuous representations can involve more representations than basis vectors, and surveys various techniques for extracting and reconstructing these representations. Traditional sparse coding algorithms <cit.> employ expectation-maximization, while contemporary approaches <cit.> based on gradient descent and autoencoders have built upon these ideas.
Sparse Autoencoders. Motivated by the hypothesized phenomenon of superposition <cit.>, recent work has applied dictionary learning, specifically sparse autoencoders <cit.>, to LMs in order to interpret their activations <cit.>. Our feature interpretability methodology was inspired by <cit.>, though we additionally study how features are computed upstream with direct feature attribution <cit.>. Progress is rapid, with the following parallel work occurring within the last few months: <cit.> scaled Attention Output SAEs up to 7B models, building on an early draft of this work. <cit.> also successfully used multiple types of SAEs, including attention SAEs, for finer-grained circuit discovery with gradient-based patching techniques. In contrast, we use both causal interventions and DFA, exploiting the linear structure of the attention mechanism. <cit.> exploit the linear structure of a transformer to investigate composition between SAE features on Othello, similar to our RDFA approach. <cit.> also find “ and”-related SAE features in the IOI task, and rediscover the induction feature family <cit.>. We causally verify the hypotheses of how “ and” features behave in IOI and rule out alternative hypotheses.
§ CONCLUSION
In this work, we have introduced Attention Output SAEs, and demonstrated their effectiveness in decomposing attention layer outputs into sparse, interpretable features (<Ref>). We have also highlighted the promise of Attention Output SAEs as a general purpose interpretability tool (<Ref>). Our analysis identified novel and extant attention head motifs (<Ref>), advanced our understanding of apparently `redundant' induction heads (<Ref>), and improved upon attention head circuit interpretability techniques from prior work (<Ref>). We have also introduced a more general technique, recursive direct feature attribution, to trace models' computation on arbitrary prompts and released an accompanying visualization tool (<Ref>).
§.§ Limitations
Our work focuses on understanding attention outputs, which we consider to be a valuable contribution. However, we leave much of the transformer unexplained, such as the QK circuits <cit.> by which attention patterns are computed. Further, though we scale up to a 2B model, our work was mostly performed on the 100M parameter GPT-2 Small model. Exploring Attention Output SAEs on larger models in depth is thus a natural direction of future work. We also highlight some methodological limitations. While we try to validate our conclusions with multiple independent lines of evidence, our research often relies on qualitative investigations and subjective human judgment. Additionally, like all sparse autoencoder research, our work depends on both the assumptions made by the SAE architecture and the quality of the trained SAEs. SAEs represent the sparse, linear components of models' computation, and hence may provide an incomplete picture of how to interpret attention layers <cit.>. Our SAEs achieve reasonable reconstruction accuracy (Table <ref>), though they are far from perfect.
§ ACKNOWLEDGEMENTS We would like to thank Rory Švarc for help with writing, formatting tables / figures, and helpful feedback. We would also like to thank Georg Lange, Alex Makelov, Sonia Joseph, Jesse Hoogland, Ben Wu, and Alessandro Stolfo for extremely helpful feedback on earlier drafts of this work. We are grateful to Keith Wynroe, who independently made related observations about the IOI circuit (<Ref>), for helpful discussion. Finally, we are grateful to Johnny Lin for adding our GPT-2 Small Attention SAEs to Neuronpedia <cit.> which helped us rapidly interpret SAE features in section <ref> and <Ref>. Portions of this work were supported by the MATS program as well as the Long Term Future Fund. § AUTHOR CONTRIBUTIONS Connor and Rob were core contributors on this project. Connor trained and evaluated all of the GPT-2 Small and GELU-2L SAEs from <Ref>. Connor also performed the interpretability investigations and feature deep dives from <Ref>. Rob performed additional feature deep dives and implemented heuristics for detecting families of features such as induction features (<Ref>). Rob also inspected all 144 attention heads in GPT-2 Small from <Ref>, while Connor performed the long-prefix induction (<Ref>) and IOI circuit analysis (<Ref>) case studies. Rob built the circuit discovery tool from <Ref>. Joseph trained the Attention Output SAE on Gemma-2B (<Ref>). Arthur and Neel both supervised this project, and gave guidance and feedback throughout. The original project idea was suggested by Neel. § OPEN SOURCE SAE WEIGHTS AND FEATURE DASHBOARDS Here we provide weights for all trained SAEs (<Ref>) as well as the interface for feature dashboards that we used to evaluate feature interpretability discussed in <Ref>. For GPT-2 Small, you can find the weights here: <https://huggingface.co/ckkissane/attn-saes-gpt2-small-all-layers/tree/main>. You can view feature dashboards for 30 randomly sampled feature per each layer here: <https://ckkissane.github.io/attn-sae-gpt2-small-viz/>. We additionally provide a colab notebook demonstrating how to use the SAEs here: <https://colab.research.google.com/drive/1hZVEM6drJNsopLRd7hKajp_2v6mm_p70?usp=sharing> For our GELU-2L SAE trained on Layer 1 (the second layer), you can find weights here: <https://huggingface.co/ckkissane/tinystories-1M-SAES/blob/main/concat-z-gelu-21-l1-lr-sweep-3/gelu-2l_L1_Hcat_z_lr1.00e-03_l12.00e <https://ckkissane.github.io/attn-sae-gelu-2l-viz/>. We additionally provide a colab notebook showing how to use the SAEs here: <https://colab.research.google.com/drive/10zBOdozYR2Aq2yV9xKs-csBH2olaFnsq?usp=sharing> To view the top 10 features attributed to all 144 attention heads in GPT-2 Small (as in <Ref>) see here: <https://robertzk.github.io/gpt2-small-saes/>. Weights for the Gemma-2B SAE can be found here: <https://wandb.ai/jbloom/gemma_2b_hook_z/artifacts/model/sae_group_gemma-2b_blocks.6.attn.hook_z_16384/v1/files>. You can also view similar dashboards for any feature from all of our GPT-2 Small SAEs on neuronpedia <cit.> here: <https://www.neuronpedia.org/gpt2-small/att-kk>. Further, we introduce an interactive tool for exploring several attention SAEs throughout a model at <https://robertzk.github.io/circuit-explorer> and discuss this more in <Ref>. Code is available at <https://github.com/ckkissane/attention-output-saes>. § SAE TRAINING: HYPERPARAMETERS AND OTHER DETAILS Important details of SAE training include: * SAE Widths. Our GELU-2L and Gemma-2B SAEs have width 16384. 
All of our GPT-2 Small SAEs have width 24576, with the exception of layers 5 and 7, which have width 49152. * Loss Function. We trained our Gemma-2B SAE with a different loss function than the SAEs from other models. For Gemma-2B we closely follow the approach from <cit.>, while for GELU-2L and GPT-2 Small, we closely follow the approach from <cit.>. * Training Data. We use activations from hundreds of millions to billions of activations from LM forward passes as input data to the SAE. Following <cit.>, we use a shuffled buffer of these activations, so that optimization steps don't use data from highly correlated activations. For GELU-2L we use a mixture of 80% from the C4 Corpus <cit.> and 20% code (<https://huggingface.co/datasets/NeelNanda/c4-code-tokenized-2b>). For GPT-2 Small we use OpenWebText (<https://huggingface.co/datasets/Skylion007/openwebtext>). For Gemma-2B we use <https://huggingface.co/datasets/HuggingFaceFW/fineweb>. The input activations have sequence length of 128 tokens for all training runs. * Resampling. For our GELU-2L and GPT-2 Small SAEs we used resampling, a technique which at a high-level reinitializes features that activate extremely rarely on SAE inputs periodically throughout training. We mostly follow the approach described in the `Neuron Resampling' appendix of <cit.>, except we reapply learning rate warm-up after each resampling event, reducing learning rate to 0.1x the ordinary value, and, increasing it with a cosine schedule back to the ordinary value over the next 1000 training steps. Note we don't do this for Gemma-2B. * Optimizer hyperparameters. For the GELU-2L and GPT-2 Small SAEs we use the Adam optimizer with β_2 = 0.99 and β_1 = 0.9 and a learning rate of roughly 0.001. For Gemma-2B SAEs we also use the Adam optimizer with β_2 = 0.999 and β_1 = 0.9 and a learning rate of 0.00005. §.§ Compute resources used for training Our GELU-2L SAE was trained on a single A6000 instance available from Vast AI[<https://vast.ai/>] overnight. Our GPT-2 Small SAEs were each trained overnight on a single A100 instance also available from Vast AI. Our Gemma-2B SAE was also trained overnight on a single A100 instance from Paperspace[<https://www.paperspace.com/>]. The analyses described in the paper were performed on either an A6000 or A100 instance depending on memory bandwidth requirements. In no case were multiple machines or distributed tensors required for training or obtaining our experimental results. Most experiments take seconds or minutes, and all can be performed in under an hour. The RDFA tool described in <Ref> is hosted on an A6000 instance available from <https://www.paperspace.com/deployments>. § FURTHER DISCUSSION ON SAE FIDELITY EVALUATIONS In <Ref> we claimed that our Attention Output SAEs are sparse, faithful, and interpretable and we provide evaluations of each SAE in <Ref> to support this claim. In this section we further discuss nuances of the fidelity evaluation, and how our SAEs compare to trained SAEs from other work. We note that we evaluated fidelity with the cross entropy loss relative to zero ablation (<ref>), which has a few potential pitfalls. First, some would argue that zero ablation may be too harsh a baseline, and that alternative baselines using mean ablation or resample ablation may be more principled. We choose to use zero ablation to stay consistent with prior work from <cit.>, which made our preliminary results easier to evaluate. 
Second, the zero ablation baseline makes it hard to compare the quality of SAEs across different sites. Intuitively, zero ablating the residual stream should degrade performance much more than ablating a single attention layer or MLP, so we expect that SAEs trained on the residual stream will have much higher % CE recovered metrics, even if splicing in the residual stream SAE causes a much bigger jump in cross entropy loss. See <cit.> for thorough evaluations of trained SAEs across multiple sites. For this reason, we recommend practitioners additionally record the raw cross entropy loss numbers with and without the SAE spliced in. We also note that there is a trade-off between sparsity and fidelity, and due to limited compute, we are likely far from Pareto optimal. Recent work <cit.> has had success interpreting SAEs with higher numbers of features firing, although it's not clear what L0 we should target. For example, we might expect more features in the residual stream compared to an attention head, and we might expect larger models to compute more features than smaller models. With this in mind, it's hard to compare our SAEs across work that uses different models and activation sites. When we trained our SAEs, we closely followed <cit.> as a reference. The MLP SAE from their work had a % CE recovered of 79%. They claimed that they generally targeted an L0 norm that is less than 10 or 20. Our SAEs have similar metrics: we generally targeted an L0 of 20 with 80% CE loss recovered.
§ METHODOLOGY FOR FEATURE INTERPRETABILITY
To evaluate interpretability for Attention Output SAE features, we manually rate the interpretability of a set of randomly sampled SAE features. For each SAE, the two raters (paper authors) collectively inspected 30 randomly sampled live features. To assess a feature, the rater determined if there was a clear explanation for the feature's behavior. The rater viewed the top 20 maximum activating dataset examples for that feature, approximate direct logit effects (i.e. W_UW_Od_i), and randomly sampled activating examples from lower activation ranges (as in <cit.>). For each max activating dataset example, we also show the corresponding source tokens with the top direct feature attribution by source position (<Ref>), and additionally show the weight-based head attribution for all heads in that layer (<Ref>). The raters used an interface based on an open source SAE visualizer library <cit.> modified to support attention layer outputs (see <Ref>). Note that we filter out dead features (features that don't activate at least once in 100,000 inputs, sometimes also referred to as the "ultra low frequency cluster") from our interpretability analysis. These features were excluded from the denominator in reporting percentage interpretable in <Ref>. The raters had a relatively high bar for labeling a feature as interpretable (e.g. noticing a clear pattern with all 20 max activating dataset examples, as well as throughout the randomly sampled activations). However, we note that this methodology heavily relies on subjective human judgement, and thus there is always room for error. We expect both false positives (e.g. the raters are overconfident in their interpretations, despite the feature actually being polysemantic) and false negatives (e.g. the raters might miss more abstract features that are hard to spot with our feature dashboards).
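For completeness, we note that the fidelity metric referenced throughout this appendix (% CE loss recovered relative to zero ablation) can be computed in a few lines; a minimal sketch assuming a TransformerLens-style model and an SAE-splicing hook of the kind sketched earlier:

import torch
from transformer_lens import HookedTransformer, utils

def ce_recovered(model: HookedTransformer, layer: int, tokens, splice_sae_hook) -> float:
    """Fraction of CE loss recovered when splicing the SAE into one layer's hook_z,
    relative to zero-ablating that activation. 1.0 means perfect reconstruction."""
    name = utils.get_act_name("z", layer)
    clean = model(tokens, return_type="loss").item()
    with_sae = model.run_with_hooks(
        tokens, return_type="loss",
        fwd_hooks=[(name, splice_sae_hook)]).item()
    zero = model.run_with_hooks(
        tokens, return_type="loss",
        fwd_hooks=[(name, lambda z, hook: torch.zeros_like(z))]).item()
    return (zero - with_sae) / (zero - clean)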
§.§ Confidence intervals for percentage of interpretable features
In this section, we provide 95% confidence intervals for the percentage of features that are reported as interpretable in <Ref>. For each layer, we treat the number of features that are interpretable as a binomial random variable with proportion of success p (percentage interpretable) sampled over n trials (number of features inspected). The Clopper-Pearson interval S_≤∩ S_≥ provides an exact method for calculating binomial confidence intervals <cit.>, with: S_≤ := { p | ℙ[ Bin(n; p) ≤ x ] > α/2} and S_≥ := { p | ℙ[ Bin(n; p) ≥ x ] > α/2} where α is the significance level and Bin(n ; p) is the binomial distribution. Due to a relationship between the binomial distribution and the beta distribution, the Clopper–Pearson interval can be calculated <cit.> as: B(α/2 ; x, n-x + 1) < p < B(1 - α/2 ; x + 1, n - x) where x = np is the number of successes and B(p; v, w) is the pth quantile of a beta distribution with shape parameters v and w. We present 95% confidence intervals (α/2 = 0.025) for <Ref> in <Ref>.
Confidence intervals for interpretability of Attention Output SAEs trained across multiple models and layers.
Model | Layer | % Interp. | 95% CI
Gemma-2B <cit.> | 6 | 66% | [47.2%, 82.7%]
GPT-2 Small | 0 | 97% | [82.2%, 99.9%]
GPT-2 Small | 1 | 87% | [69.3%, 96.2%]
GPT-2 Small | 2 | 97% | [82.8%, 99.9%]
GPT-2 Small | 3 | 77% | [57.7%, 90.1%]
GPT-2 Small | 4 | 97% | [82.8%, 99.9%]
GPT-2 Small | 5 | 80% | [61.4%, 92.3%]
GPT-2 Small | 6 | 77% | [57.7%, 90.1%]
GPT-2 Small | 7 | 70% | [50.6%, 85.3%]
GPT-2 Small | 8 | 60% | [40.6%, 77.3%]
GPT-2 Small | 9 | 77% | [57.7%, 90.1%]
GPT-2 Small | 10 | 80% | [61.4%, 92.3%]
GPT-2 Small | 11 | 63% | [43.9%, 80.1%]
GELU-2L <cit.> | 1 | 83% | [65.3%, 94.4%]
§ INDUCTION FEATURE DEEP DIVE CONTINUED: ANALYZING FALSE NEGATIVES
In this section we display in <Ref> two random examples of false negatives identified during the sensitivity analysis from <Ref>. To recap, these are examples where our proxy identified a case of board induction (i.e. "<token> board ... <token>"), but the board induction feature did not fire. We generally notice that while they technically satisfy the board induction pattern, "board" should clearly not be predicted as the next token. This is often because there are even stronger cases of induction for another token (<ref>).
< g r a p h i c s > < g r a p h i c s > Two examples of false negatives for the board induction feature. The red highlight indicates that our proxy is active, but the board feature is not.
§.§ Red teaming the board induction hypothesis
We now red team the "'board' is next by induction" hypothesis by considering alternate hypotheses. We first consider the hypothesis that the feature is a more general induction feature, i.e. it activates on prompts of the form "<token> X ... <token>" for all X in the vocabulary. We falsify this by observing the feature activation at all positions in random repeated text, and notice that it only activates on the instance of 'board' induction (<Ref>).
< g r a p h i c s > Board induction feature activation at each position of a random repeated sequence of tokens.
Another alternate hypothesis is that the feature is a more general "'board' is next" feature that activates whenever the model confidently predicts the 'board' token. We falsify this by handcrafting examples where the model confidently predicts board (e.g. "In the classroom, the student ran her fingernails on a chalk"), and find that the feature does not fire. Moreover, modifying these prompts to include induction causes the feature to fire (<Ref>).
< g r a p h i c s > Board induction feature red teaming example. It does not fire when confidently predicting board without induction.
§.§ Explaining polysemanticity at lower activation ranges
In <Ref> we noticed that while the upper parts of the activation spectrum clearly respond with high specificity to ‘board’ induction, there were also many false positives in the lower activation ranges (as in <cit.>). We believe these are expected for mundane reasons:
* Imperfect proxy: Manually staring at the false positives in the medium activation ranges reveals examples of fuzzy ‘board’ induction that weren’t identified by our simple proxy.
* Undersized dictionary: Our GELU-2L SAE has a dictionary of roughly 16,000 features. We expect our model to have many more “true features” (note there are 50k tokens in the vocabulary). Thus unrecovered features may show up as linear combinations of many of our learned features.
* Superposition: The superposition hypothesis <cit.> suggests that models represent sparse features as non-orthogonal directions, causing interference. If true, we should expect some polysemanticity at the lower activation ranges by default.
We also agree with the following intuition from <cit.>: “large feature activations have larger impacts on model predictions, so getting their interpretation right matters most”. Thus we reproduced their expected value plots to demonstrate that most of the magnitude of activation provided by this feature comes from ‘board’ induction examples in <Ref>.
§.§ Understanding upstream computation and downstream effects
In <Ref> we found a monosemantic SAE feature that represents that the "board" token is next by induction. In this section we show that we can also understand its causal downstream effects, as well as how it's computed by upstream components. We first demonstrate that the presence of this feature has an interpretable causal effect on the outputs: we find that this feature is primarily used to directly predict the "board" token. We start by analyzing the approximate direct logit effect: W_UW_Od_i where d_i is this feature direction. We find that the “board” token is the top logit in <Ref>.
< g r a p h i c s > < g r a p h i c s > < g r a p h i c s > Direct logit effects of individual features: We show the top and bottom 20 affected output tokens from the "'board' is next by induction" (a), "in a question starting with 'Which'" (b), and "in text about pets" (c) features.
This interpretation is also corroborated by feature ablation experiments. Across all activating dataset examples over 10 million tokens, we splice in our Attention Output SAE at Layer 1 of the model (the last layer of GELU-2L), ablate the board induction feature, and record the effect on loss. We find that 82% of the total loss increase from ablating this feature is explained by examples where board is the correct next token. Finally, we demonstrate that we can understand how this feature is computed by upstream components. We first show that this feature is almost entirely produced by attention head 1.6, an induction head <cit.>. Over 10 million tokens, we compute the direct feature attribution by head (see (<ref>)) for this feature. We find that head 1.6 stands out with 94% fraction of variance explained. Going further upstream, we now show that 1.6 is copying prior "board" tokens to activate this feature. We apply DFA by source position (see <Ref>) for all feature activations over 10 million tokens and record aggregate scores for each source token.
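A minimal sketch of this per-source-position DFA, exploiting the fact that each head's output is a pattern-weighted sum of value vectors; the cache layout follows TransformerLens conventions, and the encoder shape is an assumption:

import torch

def dfa_by_source(cache, layer: int, W_enc: torch.Tensor,
                  feature_id: int, dst: int) -> torch.Tensor:
    """Per-source-token contribution to one SAE feature at destination position dst.
    W_enc: [n_heads * d_head, d_sae]. Returns a [src_pos] tensor whose sum is the
    feature's pre-ReLU activation (up to the encoder/decoder bias terms)."""
    v = cache["v", layer][0]              # [src_pos, n_heads, d_head]
    pattern = cache["pattern", layer][0]  # [n_heads, dst_pos, src_pos]
    n_heads, d_head = v.shape[1], v.shape[2]
    w = W_enc[:, feature_id].reshape(n_heads, d_head)        # encoder row for this feature
    # contribution of each (head, src) pair, then summed over heads
    per_head_src = pattern[:, dst, :] * (v * w).sum(-1).T    # [n_heads, src_pos]
    return per_head_src.sum(0)

Aggregating these contributions over many activating examples (e.g. grouped by source token string) gives the fraction-of-variance numbers reported below.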
We find that the majority of the variance is explained by “board” source tokens. This effect is stronger if we filter for feature activations above a certain threshold, reaching over 99.9% at a threshold of 5, mirroring results from <cit.> that there's more polysemanticity in lower ranges. We note that this "copying" is consistent with our understanding of the induction <cit.> algorithm.
§ AUTOMATIC INDUCTION FEATURE DETECTION
In this section we automatically detect and quantify a large “<token> is next by induction” feature family from our GELU-2L SAE trained on layer 1. This represents roughly 5% of the non-dead features in the SAE. This is notable, as if there are many “one feature per vocab token” families like this, we may need extremely wide SAEs for larger models. Based on the findings of the “‘board’ is next by induction” feature (see <Ref>), we surmised that there might exist more features with this property for different suffixes. Guided by this motivation, we were able to find 586 additional features that exhibited induction-like properties from our GELU-2L SAE. We intend this as a crude proof of concept for automated SAE feature family detection, and to show that there are many induction-like features. We think our method could be made significantly more rigorous with more time, and that it likely has both many false positives and false negatives. While investigating the “board” feature, we confirmed that attention head 1.6 was an induction head. For each feature dashboard, we also generated a decoder weights distribution that gave an approximation of how much of a given feature is attributed to each head. We then chose the following heuristic to identify additional features that exhibited induction-like properties:
Induction Selection Heuristic. For each feature, we compute the weight-based head attribution score (<ref>) to head 1.6. We consider features that have a head attribution score of at least 0.6 as induction feature candidates. Intuitively, given that the normalized norms sum to 1, we expect features satisfying this property to primarily be responsible for producing induction behavior for specific sets of suffix tokens. In our case, we found 586 features that pass the above induction heuristic and are probably related to induction. We note that this is a conservative heuristic, as head 1.4 gets a partial score on the random tokens induction metric, and other heads may also play an induction-like role on some tokens, yet fail the random tokens test <cit.>. We verified that these are indeed behaviorally related to induction using the following behavioral heuristic:
Induction Behavior Heuristic. For each feature, consider the token corresponding to the max positive boosted logit through the direct readout from W_UW_Od_i. For a random sample of 200 examples that contain that token, identify what proportion satisfy:
* For any given instance of the token corresponding to the max positive boosted logit for that feature, the feature does not fire on the first prefix of that token (i.e., the first instance of an “AB” pattern).
* For any subsequent instances of the token corresponding to the max positive boosted logit for that feature occurring in the example, the feature activates on the preceding token (i.e. subsequent instances of an “AB” pattern).
We call the proportion of times the feature activates when it is expected to activate (on instances of A following the first instance of an AB pattern) the induction pass rate for the feature.
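A rough sketch of how both heuristics might be implemented; the shapes and thresholds mirror the description above, but the helper names are illustrative and this is not the exact pipeline we ran:

import torch

def induction_candidates(W_dec, n_heads, d_head, induction_head=6, thresh=0.6):
    """Selection heuristic: features whose weight-based attribution to the
    induction head (1.6 here) is at least `thresh`."""
    frac = W_dec.reshape(len(W_dec), n_heads, d_head).norm(dim=-1)
    frac = frac / frac.sum(dim=-1, keepdim=True)
    return (frac[:, induction_head] >= thresh).nonzero().flatten()

def induction_pass_rate(feature_acts, token_ids, top_logit_tok):
    """Behavior heuristic for one example. feature_acts: [pos] activations;
    token_ids: [pos] token ids. Counts how often the feature fires on the token
    preceding a repeated instance of the feature's top-logit token (the 'A' in a
    repeated 'A B' bigram, with B = top_logit_tok)."""
    hits, total = 0, 0
    seen_first = False
    for pos in range(1, len(token_ids)):
        if token_ids[pos] == top_logit_tok:
            if seen_first:
                total += 1
                hits += int(feature_acts[pos - 1] > 0)
            seen_first = True
    return hits / total if total else float("nan")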
The heuristic passes if the induction pass rate is > 60%. With the “board” feature, we saw that the token with the top positive logit boost passed this induction behavior heuristic: for almost every example and each bigram that ends with “board”, the first such bigram did not activate the feature but all subsequent repeated instances did. We ran this heuristic on the 586 features identified by the Induction Selection Heuristic against 500 features that have attribution < 10% to head 1.6 as a control group (i.e., features we would not expect to display induction-like properties as they are not attributed to the induction head). We found the Induction Behavior Heuristic to perform well at separating the features, as 450/586 features satisfied the > 60% induction pass rate. Conversely, only 3/500 features in the control group satisfied the > 60% induction pass rate (<Ref>).
< g r a p h i c s > < g r a p h i c s > Automated Induction: (a) 450/586 of the features identified by our induction selection heuristic satisfy the induction behavior heuristic, whereas (b) only 3 features in the control group do.
§ LOCAL CONTEXT FEATURE DEEP DIVE: IN QUESTION STARTING WITH "WHICH"
We now consider an “In questions starting with ‘Which’” feature. We categorized this as one of many “local context” features: a feature that is active in some context, but often only for a short time, and which has some clear ending marker (e.g. a question mark, closing parentheses, etc). Unlike the induction feature (<Ref>), we also find that it’s computed by multiple attention heads. The fact that our Attention SAEs extracted a feature relying on multiple heads, and that we made progress towards understanding it, suggests that we may be able to use Attention Output SAEs as a tool to tackle the hypothesized phenomenon of attention head superposition <cit.>. We first show that our interpretation is faithful over the entire distribution. We define a crude proxy that checks for the first 10 tokens after "Which" tokens, stopping early at punctuation. Similar to the induction feature, we find that this feature activates with high specificity to this context in the upper activation ranges, although there is polysemanticity for lower activations (<Ref>).
< g r a p h i c s > < g r a p h i c s > Specificity plots for the "in question starting with 'Which'" (a) and "in text about pets" (b) features.
We now show that the feature is computed by multiple heads in layer 1. Over 10 million tokens, we compute the direct feature attribution by head (<ref>) for this feature. We find that 3 heads have a non-trivial (>10%) fraction of variance explained (<Ref>).
< g r a p h i c s > Fraction of variance of DFA by head explained for the "In a question starting with 'Which'" feature over 10 million tokens. We notice that this feature is distributed across multiple heads.
Despite this, we still get traction on understanding this feature, motivating attention SAEs as a valuable tool to deal with attention head superposition. We first understand the causal downstream effects of this feature. We find that it primarily "closes the question" by directly boosting the logits of question mark tokens (<Ref>). We also show that the heads in aggregate are moving information from prior "Which" tokens to compute this feature. We apply DFA by source position (aggregated across all heads) (see <Ref>) for all feature activations over 10 million tokens and record aggregate scores for each source token.
We find that “Which” source tokens explain >50% the variance, and over 95% of the variance if we filter for feature activations greater than 2, suggesting that the heads are moving this "Which" to compute the feature. § HIGH-LEVEL CONTEXT FEATURE DEEP DIVE: IN TEXT RELATED TO PETS We now consider an “in a text related to pets” feature. This is one example from a family of ‘high-level context features’ extracted by our SAE. High-level context features often activate for almost the entire context, and don’t have a clear ending marker (like a question mark). To us they appear qualitatively different from the local context features, like “in a question starting with ‘Which’”, which just activate for e.g. all tokens in a sentence. We first show our interpretation of this feature is faithful. We define a proxy that checks for all tokens that occur after any token from a handcrafted set of pet related tokens ('dog', ' pet', ‘ canine’, etc), and compare the activations of our feature to the proxy. Though the proxy is crude, we find that this feature activates with high specificity in this context in <Ref>. We show that we can understand the downstream effects of this feature. The feature directly boosts logits of pet related tokens ('dog', ' pet', ‘ canine’, etc) in <Ref>. We were able to use techniques like direct feature attribution to learn that high-level context features are natural to implement with a single attention head: the head can just look back for past “pet related tokens” (‘dog’, ‘ pet’, ‘ canine’, ‘ veterinary’, etc) , and move these to compute the feature. We find that the top attention head is using the pet source tokens to compute the feature. We track the direct feature contributions from source tokens in a handcrafted set of pet related tokens ('dog', 'pet', etc) and compute the fraction of variance explained from these source tokens. We confirm that “pet” source tokens explain the majority of the variance, especially when filtering by higher activations, with over 90% fraction of variance explained for activations greater than 2. § ADDITIONAL FEATURE FAMILIES IN GPT-2 SMALL < g r a p h i c s > L9.F18, a succession feature <cit.> < g r a p h i c s > L10.F1610, a suppression feature <cit.> Two notable feature families extracted from the attention outputs of GPT-2 Small. In this section we present new feature families that we found in GPT-2 Small, but did not find in the GELU-2L SAE[Note we didn't exhaustively check every GELU-2L feature. However we never came across these in all of our analysis, whereas we quickly discovered these when looking at random features from GPT-2 Small]. This suggests that SAEs are a useful tool that can provide hints about fundamentally different capabilities as we apply them to bigger models. Duplicate Token Features. In our Layer 3 SAE, we find many features which activate on repeated tokens. However, unlike induction features (<Ref>), these have high direct feature attribution (by source position) to the previous instance of that token (rather than the token following the previous instance). We also notice that the norms of the decoder weights corresponding to head 3.0, identified as a duplicate token head by Wang et al, stand out. This shows that, similar to the induction feature, we can use weight-based attribution (<ref>) to heads with previously known mechanisms to suggest the existence of certain feature families and vice versa. Successor Features. 
In our Layer 9 SAE, we find features that activate in sequences of numbers, dates, letters, etc. The DFA by source position also suggests that the attention layer is looking back at the previous item(s) to compute these features (<Ref>). The top logits of these features are also interpretable, suggesting that these features boost the next item in the sequence. Finally, the decoder weight norms also suggest that they heavily rely on head 9.1, a successor head in GPT-2 Small.
Name Mover Features. In the later layers, we also find features that seem to predict a name in the context. The defining characteristic of these features is a very high logit boost to the name. We also see very high DFA by source position to the past instances of this name in the context. Once again, our decoder weights also suggest that heads 9.9 and 9.6 are the top contributors to these features, both of which were identified as name mover heads by <cit.>. We find a relatively large number of name movers within our shallow investigations of the first 30 random features, suggesting that this might explain a surprisingly large fraction of what the late attention layers are doing.
Suppression Features. Finally, in our layer 10 SAE we find suppression features (<Ref>). These features show strongly negative logits for a token in the context, suggesting that they actually seem to suppress these predictions. We use DFA to confirm that these features are being activated by previous instances of these tokens. Our decoder weights also identify head 10.7 as the top contributing head, the same head identified to do copy suppression by <cit.>.
N-gram Features. All of the features we have shown so far are related to previously studied behaviors, making them easier to spot and understand. We now show that we can also use our SAE to find new, surprising information about what attention layers have learned. We find a feature from Layer 9 that seems to be completing a common n-gram, predicting the “half” in phrases like “<number> and a half”. Though n-grams may seem like a simple capability, it's worth emphasizing why this is surprising. The intuitive way to implement n-grams would involve some kind of boolean AND (e.g. the current token is "and" AND the previous token is a number). Intuitively, this seems like it would be more natural to implement in MLPs rather than in attention.
§ INVESTIGATING ATTENTION HEAD POLYSEMANTICITY
While the technique from <Ref> is not sufficient to prove that a head is monosemantic, we believe that having multiple unrelated features attributed to a head is evidence that the head is doing multiple tasks (i.e. exhibits polysemanticity <cit.>). We also note that there is a possibility we missed some monosemantic heads due to missing patterns at certain levels of abstraction (e.g. some patterns might not be evident from a small sample of SAE features, and in other instances an SAE might have mistakenly learned some red herring features). During our investigations of each head, we found 14 monosemantic candidates (i.e. all of the top 10 attributed features for these heads were closely related). This suggests that over 90% of the attention heads in GPT-2 Small are performing at least two different tasks. In <Ref>, we list notable heads that are plausibly monosemantic or have suggested roles based on this technique.
§.§ Polysemantic attention heads in GPT-2 Small
Based on the analysis in the previous section, we determined the statistics in <Ref> on polysemanticity within attention heads in GPT-2 Small.
Notably, the existence of any top features that do not belong to a conceptual grouping is sufficient evidence to dispute monosemanticity. On the other hand, all top features belonging to a single conceptual grouping is only weak evidence towards monosemanticity. Therefore, the results in this section form a lower bound on the percentage of attention heads in GPT-2 Small that are polysemantic.
Proportion of heads exhibiting monosemantic versus polysemantic behavior.
Head Type | Fraction of Heads
Plausibly monosemantic | 9.7% (14/144)
Plausibly monosemantic (minor exception) | 5.5% (8/144)
Plausibly bisemantic | 2.7% (4/144)
Polysemantic | 81.9%
We say that a head is plausibly monosemantic when all of its top 10 features were deemed conceptually related by our annotator, and plausibly monosemantic (minor exception) when all features were deemed conceptually related with only one or two exceptions. Finally, a head is plausibly bisemantic when its features clearly fell into only two conceptual categories. Note also that the distinction between polysemantic and monosemantic heads is a spectrum. For example, consider head 5.10: all top 10 SAE features look like context features, boosting the logits of tokens related to that context. However, our annotator conservatively labeled this head as polysemantic given that some of the contexts are unrelated. At a higher-level grouping, this head could plausibly be labeled a general monosemantic "context" head.
§ IOI CIRCUIT ANALYSIS: EVALUATING ALL GPT-2 SMALL ATTENTION OUTPUT SAES
In this section we evaluate all of our GPT-2 Small attention SAEs on the IOI task. For each layer, we replace attention output activations with their SAE reconstructed activations and observe the effect on the average logit difference <cit.> between the correct and incorrect name tokens (as in <cit.>). We also measure the KL divergence between the logits of the original model and the logits of the model with the SAE spliced in. We compare the effect of splicing in the SAEs to mean ablating these attention layer outputs from the ABC distribution (as described in <cit.>, this is the IOI distribution but with three different names, rather than one IO and two subjects) to also get a rough sense of how necessary these activations are for the circuit. We find that splicing in our SAEs at each of the early-middle layers [1, 6] maintains an average logit difference roughly equal to the clean baseline, suggesting that these SAEs are sufficient for circuit analysis. On the other hand, we see layers {0, 7, 8} cause a notable drop in logit difference. The later layers actually cause an increase in logit difference, but we think that these are likely breaking things based on the relatively high average KL divergence, illustrating the importance of using multiple metrics that capture different things (<Ref>). We suspect that these late layer SAEs might be missing features corresponding to the Negative Name Mover (Copy Suppression <cit.>) heads in the IOI circuit, although we don’t investigate this further.
< g r a p h i c s > < g r a p h i c s > Evaluating each GPT-2 Small attention SAE on the IOI task. We splice in an Attention Output SAE for each layer and compare the resulting average logit difference (a) and KL divergence (b) to the model without SAEs. We also compare to a baseline where we mean ablate that layer's attention output from the ABC distribution <cit.>.
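A minimal sketch of the two metrics used in this evaluation, given logits from a clean run and from a run with one layer's SAE spliced in; tensor shapes and token indices are illustrative assumptions:

import torch
import torch.nn.functional as F

def ioi_metrics(clean_logits, spliced_logits, io_toks, s_toks):
    """clean_logits/spliced_logits: [batch, pos, d_vocab];
    io_toks/s_toks: [batch] answer token ids for each prompt.
    Returns (average logit difference with the SAE spliced in,
             KL divergence from the clean model at the final position)."""
    final_clean = clean_logits[:, -1]
    final_spliced = spliced_logits[:, -1]
    idx = torch.arange(len(io_toks))
    logit_diff = (final_spliced[idx, io_toks] - final_spliced[idx, s_toks]).mean()
    kl = F.kl_div(F.log_softmax(final_spliced, dim=-1),
                  F.log_softmax(final_clean, dim=-1),
                  log_target=True, reduction="batchmean")
    return logit_diff.item(), kl.item()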
We generally observe that our SAEs from layers [1, 6] are sufficient, while our SAEs from layers [7, 11] and 0 have noticeable reconstruction error. <cit.> identify many classes of attention heads spread across multiple layers. To investigate if our SAEs are systematically failing to capture features corresponding to certain heads, we splice in our SAEs for each of these cross-sections (similar to <cit.>). For each role classified by <cit.>, we identify the set of attention layers containing all of these heads. We then replace the attention output activations for all of these layers with their reconstructed activations. Note that we recompute the reconstructed activations sequentially rather than patching all of them in at once. We do this for the following groups of heads:
* Duplicate Token Heads {0, 3}
* Previous Token Heads {2, 4}
* Induction Heads {5, 6}
* S-inhibition Heads {7, 8}
* (Negative) Name Mover Heads {9, 10, 11}
< g r a p h i c s > < g r a p h i c s > Evaluating cross-sections of GPT-2 Small attention SAEs on IOI. Here we splice in Attention Output SAEs for subsets of multiple layers in the same forward pass. Mirroring results from <Ref>, we find that the middle layers (corresponding to the Previous Token and Induction Heads) are sufficient, while later layers and Layer 0 have significant reconstruction error.
We again see promising signs that the early-middle layer SAEs (corresponding to the Induction and Previous Token Heads) seem sufficient for analysis at the feature level (<Ref>). Unfortunately, it’s also clear that our SAEs are likely not sufficient to analyze the outputs of Layer 0 and the later layers (S-inhibition Heads and (Negative) Name Mover Heads). Thus we are unable to study a full end-to-end feature circuit for IOI. Why is there such a big difference between cross-sections? It is not clear from our analysis, but one hypothesis is that the middle layers contain more general features such as “I am a duplicate token”, whereas the late layers contain niche name-specific features such as “The name X is next”. Not only do we expect a much greater number of per-name features, but we also expect these features to be relatively rare, and thus harder for the SAEs to learn during training. We are hopeful that this will be improved by ongoing work on the science and scaling of SAEs <cit.>.
§.§ Layer 5 "positional" features
In this section, we describe how we identified and interpreted the causally relevant "positional" features from L5 (<Ref>). As mentioned, we first identify these features by zero ablating each feature one at a time and recording the resulting change in logit difference. Despite there being hundreds of features that fire at this position at least once, zero ablations narrow down three features that cause an average decrease in logit diff greater than 0.2. Note that ablating the error term has a minor effect relative to these features, corroborating our evaluations that our L5 SAE is sufficient for circuit analysis (<Ref>). We distinguish between ABBA and BABA prompts, as we find that the model uses different features based on the template (<Ref>). We also localize the same three features when path patching features out of the S-inhibition heads' <cit.> values, suggesting that these features are meaningfully V-composing <cit.> with these heads, as the analysis from <cit.> would suggest (<Ref>). We find that features L5.F7515 and L5.F27535 are the most important for the BABA prompts, while feature L5.F44256 stands out for ABBA prompts.
< g r a p h i c s > < g r a p h i c s > On the IOI <cit.> task, we identify causally relevant features from the layer 5 features with both zero ablations (a) and path patching (b) from the S-inhibition head values.
We then interpreted these causally relevant features. Shallow investigations of feature dashboards (see <Ref>, <Ref>) suggest that all three of these fire on duplicate tokens, and all have some dependence on prior " and" tokens. We hypothesize that the two BABA features represent "I am a duplicate token that previously preceded ' and'", while the ABBA feature represents "I am a duplicate token that previously followed ' and'". Note we additionally find similar causally relevant features from the induction head in Layer 6 and the duplicate token head in Layer 3, described in <Ref>. The features motivate the hypothesis that the "positional signal" in IOI is solely determined by the position of the name relative to (i.e. before or after) the ' and' token.
§.§ Finding and interpreting causally relevant features in other layers
In addition to the L5 attention SAE features we showcase in <Ref>, we also find features in other layers that seem to activate on duplicate tokens depending on their relative position to an " and" token. Note we didn't seek out features with these properties: these were all identified as the top causally relevant features via zero ablations for their respective layers (at the S2 position). In Layer 3, a layer with duplicate token head 3.0 <cit.>, we identify L3.F7803: "I am a duplicate token that was previously followed by 'and'/'or'" (<Ref>).
< g r a p h i c s > We show max activating dataset examples and the corresponding top DFA by source position for L3.F7803 in GPT-2 Small, a causally relevant feature in the IOI task. We interpret this feature as representing "I am a duplicate token that was previously followed by 'and'/'or'". Notice that it seems to fire on duplicated tokens, and the previous duplicate (highlighted in blue) is almost always preceded by 'and'/'or'.
In Layer 6, a layer with induction head 6.9 <cit.>, we find two subtly different features:
* L6.F17410: "I am a (fuzzy) duplicate token that previously preceded ' and'".
* L6.F13836: "I am a duplicate name that previously preceded ' and'."
All of these features can be viewed with Neuronpedia <cit.>: <https://www.neuronpedia.org/gpt2-small/att-kk>.
§.§ Applying SAEs to QK circuits: S-Inhibition Heads Sometimes do IO-Boosting
In addition to answering an open question about the positional signal in IOI <cit.> (<Ref>), we can also use our SAEs to gain deeper insight into how these positional features are used downstream. Recall that <cit.> found that the induction head outputs V-compose <cit.> with the S-inhibition heads, which then Q-compose <cit.> with the Name Mover heads, causing them to attend to the correct name. Our SAEs allow us to zoom in on this sub-circuit in finer detail. We use the classic path expansion trick from <cit.> to zoom in on a Name Mover head's QK sub-circuit for this path: x_attn W_OV^S-inb W_QK^NM (x_resid)^T, where x_attn is the attention output for a layer with induction heads, W_OV^S-inb is the OV matrix <cit.> for an S-inhibition head, W_QK^NM is the QK matrix <cit.> for a name mover head, and x_resid is the residual stream which is the input to the name mover head. For this case study we zoom into induction layer 5, S-inhibition head 8.6, and name mover head 9.9 <cit.>.
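A rough sketch of how a single entry of the feature-level lookup table implied by this path can be computed from SAE decoder directions and the model's weights; it ignores layer norms and the attention softmax scaling, the layer/head indices follow the case study, and the SAE directions are assumed inputs (the SAE substitution that justifies this is spelled out in the next paragraph):

import torch

def qk_feature_score(model, attn_sae_dir, resid_sae_dir,
                     attn_layer=5, sinb=(8, 6), nm=(9, 9)):
    """Score of one attention-SAE decoder direction (query side, via the
    S-inhibition head's OV) against one residual-stream SAE decoder direction
    (key side) through the name mover's QK matrix."""
    n_heads, d_head = model.cfg.n_heads, model.cfg.d_head
    # attention SAE directions live in concatenated hook_z space; map to the residual stream
    z_dir = attn_sae_dir.reshape(n_heads, d_head)
    resid_dir = torch.einsum("hd,hdm->m", z_dir, model.W_O[attn_layer])
    W_OV = model.W_V[sinb[0], sinb[1]] @ model.W_O[sinb[0], sinb[1]]   # [d_model, d_model]
    W_QK = model.W_Q[nm[0], nm[1]] @ model.W_K[nm[0], nm[1]].T         # [d_model, d_model]
    return resid_dir @ W_OV @ W_QK @ resid_sae_dir   # scalar contribution to the attention score

Summing such terms over the active features on a prompt (plus the error terms) recovers the decomposition of the attention score discussed below.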
While the x_attn and x_resid terms on each side are not inherently interpretable units (e.g. the residual stream is tracking a large number of concepts at the same time, cf the superposition hypothesis <cit.>), SAEs allow us to rewrite these activations as a weighted sum of sparse, interpretable features plus an error term (see <ref>). This allows us to substitute both the x_attn and x_resid (using residual stream SAEs from <cit.>) terms with their SAE decomposition. We then multiply these matrices to obtain an interpretable lookup table between SAE features for this QK subcircuit: Given that this S-inhibition head moves some Layer 5 attn SAE feature to be used as a Name Mover query, how much does it “want” to attend to a residual stream feature on the key side? We find that the attention scores for this path can be explained by just a handful of sparse, interpretable pairs of SAE features. We zoom into the attention score from the END destination position (i.e. where we evaluate the model's prediction) to the Name2 source position (e.g. ‘ Mary’ in “ When John and Mary …”). 0.48 < g r a p h i c s > 0.48 < g r a p h i c s > We decompose the attention score from the END destination position for the Name2 source position into sparse, interpretable pairs of attention SAE features and residual stream SAE features. We notice that these features (a) boost the attention score to this position an BABA prompt, but (b) inhibit it on an ABBA prompt. We observe that these heatmaps are almost entirely explained by a handful of reoccurring SAE features. On the query side we see the same causally relevant Attention SAE features previously identified by ablations: L5.F7515 and L5.F27535 (“I am a duplicate that preceded ‘ and’”) for BABA prompts while ABBA prompts show L5.F44256 and L5.F3047 (“I am a duplicate that followed ‘ and’”). On the key side we also find just 2 common residual stream features doing most of the heavy lifting: L9.F16927 and L9.F4444 which both appear to activate on names following “ and”. We also observe a stark difference in the heatmaps between prompt templates: while these pairs of features cause a decrease in attention score on the ABBA prompts, we actually see an increase in attention score on the BABA prompts (<Ref>). This suggests a slightly different algorithm between the two templates. On ABBA prompts, the S-inhibition heads move “I am a duplicate following ‘and’” to “don’t attend to the name following ‘ and’” (i.e. S-inhibition), while in BABA prompts it moves “I am a duplicate before ‘ and’” to “attend to the name following and”. This suggests that the S-inhibition heads are partially doing “IO-boosting” on these BABA prompts. To sanity check that our SAE based interpretations are capturing something real about this QK circuit, we compute how much of the variance in these heatmaps is explained by just these 8 pairs of interpretable SAE features. We find that these 8 pairs of SAE features explain 62% of the variance of the scores over all 100 prompts. For reference, all of the entries that include at least one error term (for both the attention output and residual stream SAEs) only explain approximately 15% of the variance. §.§ Substituting " and" with alternate tokens In <Ref> we showed that a noising experiment that just changes the token " and" to " alongside" has a surprisingly big effect on IOI performance. In <Ref> we show that when we repeat the same experiment (described in <Ref>) with other alternatives to " and", this result holds. 
We notice that the " alongside" corruption that we included in the main text is roughly representative of the average effect.

IOI logit difference recovered relative to zero ablation when noising layers 5 and 6 attention outputs. The corrupted distributions just replace the " and" token with another token.
" and" replacement | Avg logit diff recovered
" alongside" | 0.436
" besides" | 0.332
" plus" | 0.678
" with" | 0.469
"," | 0.345
" including" | 0.289

§ ADDITIONAL LONG PREFIX INDUCTION EXPERIMENTS

[Two-panel figure; graphics omitted.] Two additional lines of evidence that in GPT-2 Small, head 5.1 specializes in long prefix induction whereas head 5.5 does standard induction. (a) Head 5.1's direct logit attribution to the token that is next by induction increases sharply for long prefixes. (b) For examples where heads 5.1 and 5.5 are attending strongly to some token, head 5.1 is mostly performing long prefix induction whereas 5.5 is mostly performing short prefix induction.

Here we provide two additional lines of evidence to show that in GPT-2 Small, 5.1 specializes in "long prefix induction", while 5.5 does "short prefix induction". Note that we do not use SAEs in this section, but the original hypothesis was motivated by our SAEs (see <Ref>). We first check each head’s average direct logit attribution (DLA) <cit.> to the correct next token as a function of prefix length. We again see that head 5.1’s DLA sharply increases as we enter the long prefix regime, while head 5.5’s DLA remains relatively constant (<Ref>). We then confirmed that these results hold on a random sample of the training distribution. We first filter for examples where the heads are attending non-trivially to some token[We show a threshold of 0.3. The results generally hold for a range of thresholds.] (i.e. not just attending to BOS), and check how often these are examples of n-prefix induction. We find that head 5.1 will mostly attend to tokens in long prefix induction, while head 5.5 is mostly doing normal 1-prefix induction (<Ref>).

§ NOTABLE HEADS IN GPT-2 SMALL

As a continuation of <Ref>, we describe the results of manually inspecting the most salient features for all 144 attention heads to examine the role of every attention head in GPT-2 Small. As in <Ref>, we apply equation <ref> to identify the top ten features by decoder weight attribution to determine which features are most attributed to a given head. We then identify conceptual groupings that are exhibited in these features.

§.§ Limitations on interpreting all heads in GPT-2 Small

We note that this methodology is a rough heuristic to get a sense of the most salient effects of a head and likely does not capture their role completely. We only looked at the top 10 SAE features per head, sorted by an imperfect proxy. Ten is a small number, and sorting may cause interpretability illusions where the head has multiple roles but one is more salient than the others. We expect that if the head has a single role this will be clear, but it may look like it has a single role even if it is polysemantic. Thus negative results falsify the monosemanticity hypothesis, but positive results are only weak evidence for monosemanticity. This technique also does not explain what a whole attention layer does, nor does it detect an individual head's role in attention head superposition <cit.>. We are deliberately looking at SAE features that mostly rely on only one attention head. This misses additional behavior that relies on clever use of multiple heads.
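To make the per-head attribution proxy discussed above concrete, the sketch below shows one way a decoder weight attribution could be computed for an SAE trained on the concatenation of all heads' z vectors. The reshaping convention and the norm-based formula are our assumptions for illustration; the exact equation referenced in the text is not reproduced here and may differ.

# Sketch: per-head "decoder weight attribution" for an attention-output SAE whose
# input is the concatenated per-head z vectors (length n_heads * d_head).
# W_dec is a random placeholder; a real analysis would load the trained SAE.
import numpy as np

n_heads, d_head, d_sae = 12, 64, 24576
W_dec = np.random.randn(d_sae, n_heads * d_head)

# per-feature, per-head share of the decoder vector's norm
head_norms = np.linalg.norm(W_dec.reshape(d_sae, n_heads, d_head), axis=-1)
attribution = head_norms / head_norms.sum(axis=1, keepdims=True)   # (d_sae, n_heads)

# e.g. the "top ten features" for head 9 of this layer
top10_for_head9 = np.argsort(attribution[:, 9])[::-1][:10]
print(top10_for_head9)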
Despite these limitations, we do sanity check that our technique captures legitimate phenomena rather than spurious behaviors, as we verified that our interpretations are consistent with previously studied heads in GPT-2 Small. These include induction heads <cit.>, previous token heads <cit.>, successor heads <cit.> and duplicate token heads <cit.>. §.§ Overview of attention heads in layers in GPT-2 Small Broadly, we observe that top features attributed to heads become more abstract towards the middle layers of the model before tapering off to syntactic features in late layers: * Layers 0-3 exhibit primarily syntactic features (single-token features bigram features) and secondarily on specific verbs and entity fragments. Some context tracking features are also present. * From layer 4 onwards, features that activate on more complex grammatical structure are expressed, including families of related active verbs, prescriptive and active assertions, and some entity characterizations. Some single-token and bigram syntactic features continue to be present. * In layers 5-6, we identify 2 out of the 3 known induction heads <cit.> in these layers based on our features. However, the rest of these layers is less interpretable through the lens of SAE features. * In layers 7-8, increasingly more complex concept feature groups are present, such as phrasings related to specific actions taken, reasoning and justification related phrases, grammatical compound phrases, and time and distance relationships. * Layer 9 expressed some of the most complex concepts, with heads focused on specific concepts and related groups of concepts. * Layer 10 exhibited complex concept groups, with heads focused on assertions about a physical or spatial property, and counterfactual and timing/tense assertions. * The last layer 11 exhibited mostly grammatical adjustments, some bigram completions and one head focused on long-range context tracking. Although the above summarizes what was distinctive about each layer, later layers continued to express syntactic features (e.g. single token features, URL completion) and simple context tracking features (e.g. news articles). §.§ Notable attention heads in GPT-2 Small <Ref> lists some notable attention heads across all layers of GPT-2 Small. p0.1p0.4p0.45 Notable attention heads in GPT-2 Small Layer Feature groups / possible roles Notable Heads 3c – continued from previous page Layer Feature groups / possible roles Notable Heads 3rContinued on next page 0 Single-token ("https://robertzk.github.io/gpt2-small-saes/cards/top_features_0_1.html#feature_num_23303of"). bigram features (https://robertzk.github.io/gpt2-small-saes/cards/top_features_0_3.html#feature_num_15142following "S"). Micro-context features (https://robertzk.github.io/gpt2-small-saes/cards/top_features_0_8.html#feature_num_9455cars, https://robertzk.github.io/gpt2-small-saes/cards/top_features_0_8.html#feature_num_3583Apple tech, https://robertzk.github.io/gpt2-small-saes/cards/top_features_0_8.html#feature_num_4149solar) https://robertzk.github.io/gpt2-small-saes/cards/top_features_0_1.htmlH0.1 Top 6 features are all variants capturing “of”. 
https://robertzk.github.io/gpt2-small-saes/cards/top_features_0_5.htmlH0.5: Identified as duplicate token head from 9/10 features https://robertzk.github.io/gpt2-small-saes/cards/top_features_0_9.htmlH0.9: Long range context tracking family (https://robertzk.github.io/gpt2-small-saes/cards/top_features_0_9.html#feature_num_18663headlines, https://robertzk.github.io/gpt2-small-saes/cards/top_features_0_9.html#feature_num_16907sequential lists). 1 Single-token (https://robertzk.github.io/gpt2-small-saes/cards/top_features_1_5.html#feature_num_11308Roman numerals) bigram features https://robertzk.github.io/gpt2-small-saes/cards/top_features_1_0.html#feature_num_23309(following "L") Specific noun tracking (https://robertzk.github.io/gpt2-small-saes/cards/top_features_1_6.html#feature_num_6571choice, https://robertzk.github.io/gpt2-small-saes/cards/top_features_1_6.html#feature_num_14559refugee, https://robertzk.github.io/gpt2-small-saes/cards/top_features_1_6.html#feature_num_19420gender, https://robertzk.github.io/gpt2-small-saes/cards/top_features_1_6.html#feature_num_23126film/movie) https://robertzk.github.io/gpt2-small-saes/cards/top_features_1_5.htmlH1.5*: Succession <cit.> or pairs related behavior https://robertzk.github.io/gpt2-small-saes/cards/top_features_1_8.htmlH1.8: Long range context tracking with very weak weight attribution 2 Short phrases ("https://robertzk.github.io/gpt2-small-saes/cards/top_features_2_0.html#feature_num_23851never been...") Entity Features (https://robertzk.github.io/gpt2-small-saes/cards/top_features_2_9.html#feature_num_21398court, https://robertzk.github.io/gpt2-small-saes/cards/top_features_2_9.html#feature_num_24315media, https://robertzk.github.io/gpt2-small-saes/cards/top_features_2_9.html#feature_num_22897govt) bigram & tri-gram features ("https://robertzk.github.io/gpt2-small-saes/cards/top_features_2_2.html#feature_num_5123un-") Physical direction and logical relationships ("https://robertzk.github.io/gpt2-small-saes/cards/top_features_2_5.html#feature_num_15000under") Entities followed by what happened (https://robertzk.github.io/gpt2-small-saes/cards/top_features_2_9.html#feature_num_22897govt) https://robertzk.github.io/gpt2-small-saes/cards/top_features_2_0.htmlH2.0: Short phrases following a predicate (e.g., not/just/never/more) https://robertzk.github.io/gpt2-small-saes/cards/top_features_2_3.htmlH2.3: Short phrases following a quantifier (both, all, every, either), or spatial/temporal predicate (after, before, where) https://robertzk.github.io/gpt2-small-saes/cards/top_features_2_5.htmlH2.5: Subject tracking for physical directions (under, after, between, by), logical relationships (then X, both A and B) https://robertzk.github.io/gpt2-small-saes/cards/top_features_2_7.htmlH2.7: Groups of context tracking features https://robertzk.github.io/gpt2-small-saes/cards/top_features_2_9.htmlH2.9*: Entity followed by a description of what it did 3 Entity-related fragments ("https://robertzk.github.io/gpt2-small-saes/cards/top_features_3_6.html#feature_num_6799"world's X") Tracking of a characteristic (ordinality or extremity) Single-token and double-token (https://robertzk.github.io/gpt2-small-saes/cards/top_features_3_7.html#feature_num_5123eg) Tracking following commands (https://robertzk.github.io/gpt2-small-saes/cards/top_features_3_9.html#feature_num_20873while, https://robertzk.github.io/gpt2-small-saes/cards/top_features_3_9.html#feature_num_16086though, 
https://robertzk.github.io/gpt2-small-saes/cards/top_features_3_9.html#feature_num_20837given) https://robertzk.github.io/gpt2-small-saes/cards/top_features_3_0.htmlH3.0: Identified as duplicate token head from 8/10 features https://robertzk.github.io/gpt2-small-saes/cards/top_features_3_2.htmlH3.2*: Subjects of predicates (so/of/such/how/from/as/that/to/be/by) https://robertzk.github.io/gpt2-small-saes/cards/top_features_3_6.htmlH3.6: Government entity related fragments, extremity related phrases https://robertzk.github.io/gpt2-small-saes/cards/top_features_3_11.htmlH3.11: Tracking of ordinality or entirety or extremity 4 Active verbs (https://robertzk.github.io/gpt2-small-saes/cards/top_features_4_0.html#feature_num_19344do, https://robertzk.github.io/gpt2-small-saes/cards/top_features_4_0.html#feature_num_85share) Specific characterizations (https://robertzk.github.io/gpt2-small-saes/cards/top_features_4_5.html#feature_num_14547the same X, https://robertzk.github.io/gpt2-small-saes/cards/top_features_4_5.html#feature_num_23576so Y) Context tracking families (https://robertzk.github.io/gpt2-small-saes/cards/top_features_4_2.html#1328story highlights) Single-token (https://robertzk.github.io/gpt2-small-saes/cards/top_features_4_10.html#feature_num_20979predecessor) https://robertzk.github.io/gpt2-small-saes/cards/top_features_4_5.htmlH4.5: Characterizations of typicality or extremity https://robertzk.github.io/gpt2-small-saes/cards/top_features_4_7.htmlH4.7: Weak/non-standard duplicate token head https://robertzk.github.io/gpt2-small-saes/cards/top_features_4_11.htmlH4.11*: Identified as a previous token head based on all features 5 Induction (F) https://robertzk.github.io/gpt2-small-saes/cards/top_features_5_1.htmlH5.1: Long prefix Induction head https://robertzk.github.io/gpt2-small-saes/cards/top_features_5_5.htmlH5.5: Induction head 6 Induction (https://robertzk.github.io/gpt2-small-saes/cards/top_features_6_9.html#feature_num_19625M) Active verbs (https://robertzk.github.io/gpt2-small-saes/cards/top_features_6_3.html#feature_num_22083want to, https://robertzk.github.io/gpt2-small-saes/cards/top_features_6_3.html#feature_num_24144going to) Local context tracking for certain concepts (https://robertzk.github.io/gpt2-small-saes/cards/top_features_6_7.html#feature_num_15065vegetation) https://robertzk.github.io/gpt2-small-saes/cards/top_features_6_3.htmlH6.3:: Active verb tracking following a comma https://robertzk.github.io/gpt2-small-saes/cards/top_features_6_5.htmlH6.5: Short phrases related to agreement building https://robertzk.github.io/gpt2-small-saes/cards/top_features_6_7.htmlH6.7: Local context tracking for certain concepts (payment, vegetation, recruiting, death) https://robertzk.github.io/gpt2-small-saes/cards/top_features_6_9.htmlH6.9*: Induction head https://robertzk.github.io/gpt2-small-saes/cards/top_features_6_11.htmlH6.11: Suffix completions on specific verb and phrase forms 7 Induction (https://robertzk.github.io/gpt2-small-saes/cards/top_features_7_10.html#feature_num_11308al-) Active verbs (https://robertzk.github.io/gpt2-small-saes/cards/top_features_7_1.html#feature_num_21707asked/needed) Reasoning and justification phrases (https://robertzk.github.io/gpt2-small-saes/cards/top_features_7_9.html#feature_num_28587because, https://robertzk.github.io/gpt2-small-saes/cards/top_features_7_9.html#feature_num_31787for which) https://robertzk.github.io/gpt2-small-saes/cards/top_features_7_2.htmlH7.2*: Non-standard induction 
https://robertzk.github.io/gpt2-small-saes/cards/top_features_7_5.htmlH7.5: Highly polysemantic but still some groupings like family relationship tracking https://robertzk.github.io/gpt2-small-saes/cards/top_features_7_8.htmlH7.8: Phrases related to how things are going or specific action taken (decision to X, issue was Y, situation is Z) https://robertzk.github.io/gpt2-small-saes/cards/top_features_7_9.htmlH7.9: Reasoning and justification related phrasing (of which, to which, just because, for which, at least, we believe, in fact) https://robertzk.github.io/gpt2-small-saes/cards/top_features_7_10.htmlH7.10*: Induction head 8 Active verbs https://robertzk.github.io/gpt2-small-saes/cards/top_features_8_4.html#feature_num_20055("hold") Compound phrases https://robertzk.github.io/gpt2-small-saes/cards/top_features_8_5.html#feature_num_15566(either) Time and distance relationships Quantity or size comparisons or specifiers https://robertzk.github.io/gpt2-small-saes/cards/top_features_8_8.html#feature_num_14739(larger/smaller) URL completions https://robertzk.github.io/gpt2-small-saes/cards/top_features_8_6.html#feature_num_674(twitter) https://robertzk.github.io/gpt2-small-saes/cards/top_features_8_1.htmlH8.1*: Prepositions copying (with, for, on, to, in, at, by, of, as, from) https://robertzk.github.io/gpt2-small-saes/cards/top_features_8_5.htmlH8.5: Grammatical compound phrases (either A or B, neither C nor D, not only Z) https://robertzk.github.io/gpt2-small-saes/cards/top_features_8_8.htmlH8.8: Quantity or time comparisons/specifiers 9 Complex concept completions (https://robertzk.github.io/gpt2-small-saes/cards/top_features_9_0.html#feature_num_16056time, https://robertzk.github.io/gpt2-small-saes/cards/top_features_9_0.html#feature_num_21955eyes) Specific entity concepts Grammatical relationship joiners (https://robertzk.github.io/gpt2-small-saes/cards/top_features_9_10.html#feature_num_8127between) Assertions about characteristics https://robertzk.github.io/gpt2-small-saes/cards/top_features_9_7.html#feature_num_2997(big/large) https://robertzk.github.io/gpt2-small-saes/cards/top_features_9_0.htmlH9.0*: Complex tracking on specific concepts (what is happening to time, where focus should be, actions done to eyes, etc.) 
https://robertzk.github.io/gpt2-small-saes/cards/top_features_9_2.htmlH9.2: Complex concept completions (death, diagnosis, LGBT discrimination, problem and issue, feminism, safety) https://robertzk.github.io/gpt2-small-saes/cards/top_features_9_9.htmlH9.9*: Copying, usually names, with some induction https://robertzk.github.io/gpt2-small-saes/cards/top_features_2_5.htmlH9.10: Grammatical relationship joiners (from X to, Y with, aided by, from/after, between) 10 Grammatical adjusters Physical or spatial property assertions Counterfactual and timing/tense assertions (https://robertzk.github.io/gpt2-small-saes/cards/top_features_10_5.html#feature_num_6174would have, (https://robertzk.github.io/gpt2-small-saes/cards/top_features_10_5.html#feature_num_8327hoped that) Certain prepositional expressions (https://robertzk.github.io/gpt2-small-saes/cards/top_features_10_11.html#feature_num_14525along, (https://robertzk.github.io/gpt2-small-saes/cards/top_features_10_11.html#feature_num_619under) Capital letter completions https://robertzk.github.io/gpt2-small-saes/cards/top_features_10_10.html#feature_num_6954(`B') https://robertzk.github.io/gpt2-small-saes/cards/top_features_10_1.htmlH10.1: Assertions about a physical or spatial property (up/back/down/over/full/hard/soft) https://robertzk.github.io/gpt2-small-saes/cards/top_features_10_4.htmlH10.4: Various separator characters for quantifiers (colon for time, hyphen for phone, period for counters) https://robertzk.github.io/gpt2-small-saes/cards/top_features_10_5.htmlH10.5: Counterfactual and timing/tense assertions (if/than/had/since/will/would/until/has X/have Y) https://robertzk.github.io/gpt2-small-saes/cards/top_features_10_6.htmlH10.6: Official titles https://robertzk.github.io/gpt2-small-saes/cards/top_features_10_10.htmlH10.10*: Capital letter completions with some context tracking (possibly non-standard induction) https://robertzk.github.io/gpt2-small-saes/cards/top_features_10_11.htmlH10.11: Certain conceptual relationships 11 Grammatical adjustments bigrams Capital letter completions Long range context tracking https://robertzk.github.io/gpt2-small-saes/cards/top_features_11_3.htmlH11.3: Late layer long range context tracking, possibly for output confidence calibration § STEP-BY-STEP BREAKDOWN OF RDFA WITH EXAMPLES In this section we describe the Recursive Direct Feature Attribution technique from <Ref> in more detail. We use Attention Output SAEs from <Ref> and residual stream SAEs from <cit.> to repeatedly attribute SAE feature activation to upstream SAE feature outputs, all the way back to the input tokens for an arbitrary prompt. The key idea is that if we freeze attention patterns and LayerNorm scales, we can decompose the SAE input activations, z_cat, into a linear function of upstream activations. Then we recursively decompose those upstream activations into linear contributions. In <Ref>, we provide a full description of the recursive direct feature attribution (RDFA) algorithm, accompanied by equations for the key linear decomposition. We now provide a few examples of using the Circuit Explorer tool available at <https://robertzk.github.io/circuit-explorer>. Example 1: Decomposing information about name. Consider the prompt: "Amanda Heyman, professional photographer. She". 
In <Ref>, starting with Attention Output SAE feature L3.F15566, we observe that performing a DFA decomposition along source position and then along residual features highlights:
* a residual feature (3.19755) that maximally activates on names ending with "anda": <https://www.neuronpedia.org/gpt2-small/3-res-jb/19755>
* a residual feature (3.14186) that maximally activates on "Amanda" and boosts last names: <https://www.neuronpedia.org/gpt2-small/3-res-jb/14186>

Example 2: Routing "Dave" through "is" to "isn't". Consider the prompt: "So Dave is a really good friend isn't" as highlighted in <cit.>. Focusing on layer 10, the top Attention Output SAE feature is L10.F14709. In <Ref>, we observe that performing a recursive DFA decomposition along source position and then to upstream attention components shows that the model is routing information about "Dave" via the "is" token to the final "[isn]'t" position.

[Figure; graphics omitted.] Example of decomposing an Attention Output SAE feature (L3.F15566) across residual features on a given source position. The model attends back from "She" to "anda" and accesses an upstream residual feature for names ending with "anda" as well as a residual feature for "Amanda".

[Figure; graphics omitted.] Example of recursively decomposing an Attention Output SAE feature (L10.F14709) across upstream Attention Output SAE features. The model attends back from "isn't" to "is" and accesses a "Dave" feature through an attention connection.

Recursive direct feature attribution (RDFA), step by step:
1. Choose an attention SAE feature index i active at destination position D: f_i^pre(z_cat) = z_cat· W_enc[:, i]
2. Compute DFA by source position: z_cat = [z_1, ..., z_n_heads], where z_j = v_j A_j for j = 1, ..., n_heads and A_j is the attention pattern for head j
3. Compute DFA by residual stream feature at source position S (where ε is the error term (<ref>)): v_j = W_V LN_1(x_resid) = W_V LN_1 ( ∑_i=0^d_sae f_i(x_resid) d_i + ε(x_resid) + b )
4. Compute DFA by upstream component for each resid feature: x_resid = x_embed + x_pos + ∑_i=0^L-1 x_attn,i + ∑_i=0^L-1 x_mlp,i
5. Decompose upstream attention layer outputs into SAE features: x_attn,i = ∑_j=0^d_sae f_j(x_attn,i) d_j + ε(x_attn,i) + b
6. Recurse: take one of the Attention Output SAE features from the previous step and a prefix of our prompt at S. Then, treat S as the destination position, and go back to step 1.
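As a minimal illustration of step 2, the sketch below computes the DFA-by-source-position attribution for a single attention-output SAE feature, treating the attention pattern as a frozen constant. The array names, shapes, and random placeholders are assumptions, and the encoder bias is dropped since it does not vary with source position.

# Sketch: DFA by source position (step 2 of RDFA) for one attention layer.
import numpy as np

n_heads, d_head, seq_len, d_sae = 12, 64, 16, 24576
A = np.random.rand(n_heads, seq_len, seq_len)         # frozen attention patterns
v = np.random.randn(n_heads, seq_len, d_head)         # per-head value vectors
W_enc = np.random.randn(n_heads * d_head, d_sae)      # attention-output SAE encoder

def dfa_by_source(feature_idx, dest):
    """Contribution of each source position to the feature's pre-activation at dest."""
    w_i = W_enc[:, feature_idx].reshape(n_heads, d_head)   # encoder slice per head
    # source-s contribution: sum over heads of A[h, dest, s] * (v[h, s] . w_i[h])
    return np.einsum('hs,hsd,hd->s', A[:, dest, :], v, w_i)

print(dfa_by_source(feature_idx=123, dest=seq_len - 1))    # one value per source position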
http://arxiv.org/abs/2406.17976v1
20240625231838
The Role of Electric Grid Research in Addressing Climate Change
[ "Le Xie", "Subir Majumder", "Tong Huang", "Qian Zhang", "Ping Chang", "David J. Hill", "Mohammad Shahidehpour" ]
eess.SY
[ "eess.SY", "cs.SY" ]
[1,2] Le Xie (le.xie@tamu.edu); [1] Subir Majumder (subir.majumder@tamu.edu); [3] Tong Huang (thuang7@sdsu.edu); [1] Qian Zhang (zhangqianleo@tamu.edu); [4] Ping Chang (ping@tamu.edu); [5] David J. Hill (davidj.hill@monash.edu); [6] Mohammad Shahidehpour (ms@iit.edu)
[1] Department of Electrical and Computer Engineering, Texas A&M University, College Station, 77843, Texas, United States
[2] Texas A&M Energy Institute, Texas A&M University, College Station, 77843, Texas, United States
[3] Department of Electrical and Computer Engineering, San Diego State University, San Diego, 92182, California, United States
[4] Department of Oceanography, Texas A&M University, College Station, 77843, Texas, United States
[5] Department of Electrical and Computer Systems Engineering, Monash University, Clayton, 3800, Victoria, Australia
[6] Robert W. Galvin Center for Electricity Innovation, Illinois Institute of Technology, Chicago, 60610, Illinois, United States

Addressing the urgency of climate change necessitates a coordinated and inclusive effort from all relevant stakeholders. Critical to this effort is the modeling, analysis, control, and integration of technological innovations within the electric energy system, which plays a crucial role in scaling up climate change solutions. This perspective article presents a set of research challenges and opportunities in the area of electric power systems that would be crucial in accelerating gigaton-level decarbonization. Furthermore, it highlights institutional challenges associated with developing market mechanisms and regulatory architectures, ensuring that incentives are aligned for stakeholders to effectively implement the technological solutions on a large scale.

The Role of Electric Grid Research in Addressing Climate Change
July 1, 2024
===============================================================

§ MAIN
The electricity sector plays two pivotal roles in tackling climate change. First, according to the recent IPCC report <cit.>, as of 2019, the energy sector contributes approximately 34% of global greenhouse gas emissions. Therefore, cleaning up the electricity sector itself is a major step towards the goal of reducing gigaton-level carbon emissions for the entire planet by 2050. Second, for many carbon management or reduction technologies, the best route to achieve a speedy and scalable impact is through large-scale integration into the electric grid. For the first role, significant decarbonization efforts are ongoing to replace fossil fuel-based generation technologies with renewable energy resources such as wind and solar <cit.>. Efforts are also underway to incorporate carbon management technologies such as point source carbon capture, carbon transport and storage, carbon dioxide removal and conversion, and hydrogen <cit.>. For example, the U.S. Department of Energy (DOE) developed goals to achieve more than 95% carbon capture at sources of carbon emissions at power plants <cit.>. For the second role, the electricity sector plays another increasingly important role in supporting more electrification of energy demand coming from transportation <cit.>, industrial heating/cooling <cit.>, computing industries <cit.>, and many household appliances. Much of these efforts will need to be scaled up and integrated with the electric grid infrastructure. Achieving gigaton-level carbon emission reduction through the power grid necessitates the expansion of electrical systems, which are now more intertwined with weather and climate than ever before (Figure <ref>).
For example, due to their variable nature of renewable resources, integrating them into the power grid significantly impacts planning processes, which used to operate under the generation-following-demand paradigm <cit.>. The emergence of long-term energy storage <cit.> and demand response technologies <cit.> has led to the concept of demand-following-generation as an added mechanism to achieve overall power balancing during operation. However, the necessary long-term storage for the power grid, e.g., for `dunkelflaute’ events <cit.>, can only be accurately planned through climate model simulations <cit.>. In this article, we ask two key questions. First, what power grid researchers should prioritize as we integrate more and more carbon-neutral technologies? Second, how do we effectively integrate climate research with power grid research to address the emerging complexities? The following two sections provide an answer to these two questions. § ELECTRIC GRID RESEARCH CHALLENGES TO ADDRESS CLIMATE CHANGE There are three key power grid research challenges in addressing climate change. First, power grid researchers still use outdated weather patterns to generate scenarios for planning, while climate change research has demonstrated that weather patterns are changing and impacting renewable generation and power system demand. Second, the overall operational performance of the grid is not well understood when integrating renewables or electrifying transportation and heating and cooling systems, especially with a wide array of heterogeneous inverter-based resources such as solar and wind. Third, climate change research typically focuses on the long term, whereas power system research tends to be more short-term. Unless the incentives are properly aligned through markets and policy designs, it would be extremely difficult to collectively address climate change issues. §.§ Lack of tailored climate simulation for long-term planning Addressing climate change requires renewable energy sourced from areas far from load centers to satisfy ever-growing load demands, necessitating the expansion of transmission line capacity and the building of more renewable energy harvesting farms. Many <cit.> of the world's governments are actively incentivizing these activities. Power system expansion planning is a well-researched <cit.> area, but these planning activities utilize widely used weather patterns for scenario generation, and these weather patterns can indeed change due to climate change. For example, future weather statistics could include increased “energy drought” occurrence frequencies (e.g., `dunkelflaute’ events <cit.>), posing significant risks to energy security <cit.>. With the expected notable “west-to-east interhemispheric shift” in mean wind power potential and an overall increase in solar energy potential <cit.>, transmission lines built without consideration of these climate dynamics would most likely remain underutilized. Climate change can lead the typical summer-peaking states into dual or winter-peaking states <cit.>. Increasing frequency of extreme weather events in certain regions, like the Texas grid outage during winter storm Uri, underscores the necessity of substantial backup power capacity to ensure future grid reliability <cit.> and resiliency <cit.>. We also need to utilize multimodal climate change models to capture changes in consumer demands for resource expansion planning <cit.>. 
Therefore, system planning encompassing the expansion and retirement of generation units, transmission, storage, demand management, and other new technologies should incorporate reliable climate and weather predictions. Fortunately, there have been rapid advancements in climate models, and computational capabilities are progressing rapidly. A new report <cit.> to the President from the U.S. President’s Council of Advisors on Science and Technology (PCAST) highlights the considerable potential for improving predictions of the likelihood of extreme weather events using high-resolution climate models. Disseminating this information to households, businesses, and government agencies would help grid operators better manage the electric grid, even with limited resources. While existing electric grid studies, such as the US-DOE report on National Transmission Needs <cit.>, also advocate for enhanced infrastructure, the key research challenge lies in incorporating climate-related factors into long-term planning strategies. §.§ Lack of system-aware, grid-edge operations of inverter-based resources Many clean energy resources, such as solar panels, wind turbines, and battery storage, require power electronics inverters to interface with power grids. As a result, these inverter-interfaced clean energy resources, known as inverter-based resources (IBRs) <cit.>, are key technology enablers of electricity infrastructure decarbonization. Residential IBRs are typically connected to medium/low-voltage distribution systems. Examples include rooftop solar panels, batteries rated in kilowatts, electric vehicles (EVs), and their charging infrastructures. During extreme weather events that disrupt bulk electricity infrastructure, residential IBRs are expected to self-organize to establish small-scale grids, such as microgrids <cit.> to ensure continued power supply to end-users. Residential IBRs empower consumers by turning them into producers, democratizing energy production and reducing transmission losses by generating power close to where it is consumed. However, as more solar capacity has come online, grid operators have observed a drop in net load due to the generation from residential solar panels, when power generation from utility-scale solar farms tends to be highest. Such an imbalance between generation and load leads to the famous California duck curve <cit.>, which can compromise energy security and economic efficiency. This imbalance is also evident in the increasing frequency of negative net load and associated negative prices <cit.> in the electricity market of Australia. Clearly, a lack of system-aware control has the potential to exacerbate power network operations. As major coal-fired plants are scheduled to retire almost every year over the next decade, the existing location and electrical infrastructure could be utilized by the utility-scale IBRs, such as renewable energy zones (REZs) <cit.>. REZs indeed solve some of the drawbacks of residential IBRs. However, the control systems of today’s commercial IBRs are tuned by manufacturers by overly simplifying the dynamics of the host system. As a result, when networking with other IBRs, the locally well-tuned IBRs may conflict with their peers <cit.>. The key controller algorithms are almost always unavailable to protect their intellectual property <cit.>. Therefore, the key research question is how to avoid system-level issues when multiple proprietary IBRs are networked in a grid. 
This problem is especially exacerbated when thousands of “behind the meter” IBRs try to coordinate with each other. Centralized control by utilities is impractical in this case <cit.>. Designing system-aware controls for IBRs, with minimal information exchange, becomes crucial to prevent system-level issues at the device-design stage rather than relying on post-event mitigations. §.§ Temporal misalignment of market and policy design in grid and climate systems In the realm of the electricity industry, market and policy decision-making typically operate on a time scale ranging from days to years, reflecting the immediate and intermediate needs of grid management and energy distribution. In stark contrast, the design of market strategies and policies for addressing climate change encompasses a far more extended timeline, often spanning several decades. This discrepancy creates a significant challenge in synchronizing the short-term operational strategies of the electricity sector with the long-term objectives of climate policy <cit.>. First, aggressive long-term climate targets may conflict with short-term stakeholders' rights in the power grid. For example, in Australia, the market operator AEMO, through their integrated system plan (ISP) <cit.>, shows that coal-based power plants are to be withdrawn by 2038 only to be replaced by grid-scale wind and solar, rooftop solar photovoltaics, and storage. However, the rapid uptake of grid-scale wind and solar is not enforceable due to the vertically disintegrated nature of the Australian power market. While the state governments in Australia are urgently forging ahead with Renewable Energy Zones (REZs) to replace retiring coal plants with renewable-based farms and build more transmission lines <cit.>, there have been pushbacks from certain communities about disproportionately burdening renewable-rich areas with aerial aesthetic displeasure <cit.>. Second, climate change introduces greater uncertainty into power system planning, and the power markets are ill-equipped to guide generation investment, which may not ensure resource adequacy in the future. In energy-only markets, such as the Texas electricity market, there has been recent discussion about implementing a `performance credit mechanism' to ensure long-term grid reliability, but the initial draft did not incorporate impacts of extreme weather events such as Winter Storm Uri <cit.>. On the other hand, introducing a capacity market might secure adequate capacity to meet desired reliability criteria, such as the loss of load expectation (LOLE) of one day in ten years <cit.>. However, these criteria have two major issues: (i) they are relatively short-term compared to climate policies, and (ii) they seem to ignore low-probability events, such as extreme weather events. Furthermore, the efficiency and fairness of capacity markets heavily depend on the accreditation of different types of generation technologies <cit.> and the modeling of the demand curve <cit.>, which are extremely complex tasks. Bridging the temporal gap in power grid decarbonization and emission-related policies is extremely crucial for aligning immediate energy needs with enduring environmental targets, thereby enhancing the effectiveness of interventions in both domains. Without such integration, efforts in either area risk being undermined by conflicting priorities and uncoordinated policies, potentially stalling progress on both power system security and climate change mitigation. 
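For readers unfamiliar with the loss-of-load expectation criterion cited above, the toy Monte Carlo sketch below illustrates how such a reliability index is typically estimated. Every number in it is made up for illustration only; a climate-aware adequacy study would replace the static load shape and outage assumptions with weather-dependent generation and demand scenarios of the kind this article advocates.

# Toy Monte Carlo estimate of loss-of-load expectation (LOLE), in days/year.
# Criterion often quoted: ~0.1 day/year, i.e. "one day in ten years".
import numpy as np

rng = np.random.default_rng(0)
n_units, unit_mw, forced_outage_rate = 60, 200.0, 0.08
hours = 8760
t = np.arange(hours)
# illustrative hourly load (MW): seasonal plus daily variation around 8 GW
load = 8000 + 1500 * np.sin(2 * np.pi * t / hours) + 800 * np.sin(2 * np.pi * t / 24)

n_years, lole_days = 200, 0.0
for _ in range(n_years):
    up = rng.random((hours, n_units)) > forced_outage_rate   # independent forced outages
    capacity = up.sum(axis=1) * unit_mw                      # available capacity per hour
    shortfall = capacity < load
    lole_days += shortfall.reshape(365, 24).any(axis=1).sum()  # days with any shortfall hour

print(f"estimated LOLE: {lole_days / n_years:.2f} days/year")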
§ INTERACTION BETWEEN POWER SYSTEM AND CLIMATE RESEARCHERS Power system researchers can interact and collaborate with climate researchers in three unique ways. First, power system researchers should utilize climate data for planning. Second, climate researchers can incorporate power grid operational data to refine their climate models, and power system researchers can assist climate researchers in identifying critical areas to focus on in their climate models. Third, by utilizing climate simulation data, power system researchers and economists can develop innovative products and mechanisms tailored for short-term efficient and reliable energy system operation while meeting long-term environmental goals. §.§ Climate-informed planning for enhanced resiliency and reliability Power grids around the world evolved to cater to regional energy demand, supported by local policy mandates and unique generation characteristics (e.g., the Pacific Northwest in the United States has plenty of hydro-electric potential <cit.>, western Texas has an abundance of wind resources <cit.>). Global renewable energy potential is not uniform, and the distribution can be impacted by climate change <cit.>. While climate change is a planetary-scale problem, each region faces its unique climate-related challenges (e.g., the wildfires in California <cit.> and hurricanes on the Gulf and east coast of the United States <cit.>). Therefore, the interdependence between energy and climate systems can result in regional energy security concerns with the growing prominence of renewable energy sources, and power system planners need to be cognizant of this relationship while planning. In regards to utilizing climate data, a recent study <cit.> provides a framework for power system resource adequacy analysis (Figure <ref>). This study identified that while low-resolution climate simulations can indeed provide a rough estimate of system reliability, high-resolution simulations can provide a more informative assessment of low-probability, high-impact extreme events. However, both high and low-resolution assessments suggest the need to prepare for severe blackout events in winter due to extremely low temperatures. Changing weather patterns due to climate change, as discussed earlier, could be similarly incorporated in the power system planning studies. Another recent research explores multiple scenarios based on climate models, revealing that the current placement of renewable energy farms in Australia is suboptimal <cit.>. These preliminary findings demonstrate the importance of incorporating higher-resolution climate simulations for a reliable and robust climate-informed analysis for resiliency and reliability in power system planning. Future research opportunities to understand the impacts of climate change on the power grid include (a) quantifying the reliability and resilience of specific long-term planning schemes, along with investigating statistical confidences and the effectiveness of various ways to represent uncertainty, (b) identifying the most critical potential climate conditions for subsequent measures to enhance accordingly, and (c) performing sensitivity analyses of the resulting reliability and resiliency indices concerning the uncertainty of climate projections. Based on the outcomes of these activities, a unified planning approach can be developed that accounts for climate change risks and provides a solid foundation for key decisions in planning. 
These key insights include the minimum required energy storage and demand response programs for policymakers, system operators, and market participants. While the approaches to achieving a zero-carbon solution vary, simply a power grid tailored to local weather (operations) and climate (planning) ensures greater renewable electrification, which will all aggregate globally to arrest climate change. Additionally, digitization and machine learning techniques will enhance the scalability and expedite the development of solutions within both climate and power research domains. This includes accelerating computationally demanding simulations of hybrid climate and electricity models <cit.> and generating synthetic open-source data <cit.> to circumvent issues related to accessing Critical Energy/Electric Infrastructure Information (CEII). §.§ Adaptive scaling of climate models informed by power systems Power system researchers can assist climate researchers in addressing climate change through power grid design and operation in two major ways. First, a power grid operational model (e.g., demand and generation embedded in weather patterns) provides higher-resolution carbon accounting for the energy sector. Monitoring power grid demands provides insights into the emissions from industrial or commercial sectors, where electricity consumption is directly related to emissions. This is because power consumption not only serves as a barometer of societal behavior and prosperity but also underpins economic activities within the industrial and commercial sectors. Climate researchers need to explore how emission changes at a regional scale due to electric grid changes can have an impact on climate <cit.>. Second, developing a global high-resolution climate model is computationally very expensive <cit.>; however, the spatial resolution of climate models can be effectively shaped by the specific needs of power system tasks. Climate models are successfully used to predict climate variability at seasonal-to-decadal (S2D) timescales and project long-term climate changes at decadal-to-centennial (D2C) timescales <cit.>. S2D predictions providing information about natural climate variabilities, such as El Niño or La Niña, and near-term trends are sensitive to initial conditions <cit.>. D2C projections aim to understand long-term trends. Therefore, S2D predictions can be used to understand the near-term impact of extreme weather events on power grids, while D2C projections can provide insights into long-term transmission system planning and energy security. Power system engineers have a major role to play by providing climate scientists with specific regions to downscale their global climate predictions and projections and generate high-resolution climate-power interaction models for decision-making while ensuring reasonable computational efficiency. For example, providing the downscaling of S2D predictions to hurricane-prone coastal city regions would offer much-needed decision support for the planning of both transmission and generation assets in power systems. Similarly, downscaling to the load center and renewable-rich areas from D2C projections could have higher priority from the energy security perspective. Challenges persist due to the limited resolution of current-generation global models, typically around 100 km, which hinders accurate forecasting of extreme weather statistics <cit.>. This limitation results in significant uncertainties in predictions and projections of changes in extreme weather. 
§.§ Climate-aware market redesign for power systems Many countries around the world went through the deregulation process to introduce competition in providing consumers with reliable and affordable electrical energy. Because of historical reasons, the degree of deregulation varied. For example, the United States contains a mix of vertically integrated regulated monopoly regions, vertically disintegrated energy markets, and energy markets, allowing the participation of traditional utilities without fully committing to deregulation <cit.>. Regulated, vertically integrated utilities may not follow a cost-minimizing approach <cit.>, and consumers may not enjoy the benefits of lower energy costs. Limited oversight on energy-only markets and various capacity markets might not provide resource adequacy for day-to-day power grid operations in deregulated environments. In deregulated regions, grid utilities may not be allowed to own generation (including renewables) largely because of market rules <cit.>. Aside from variabilities from renewable energy generation, market operators face major challenges as more and more marginal cost generators are replaced with infra-marginal renewable generators <cit.>. Enabled by improved climate models, we highlight three innovative market designs to improve resource adequacy issues in deregulated environments. Firstly, the capacity market is one way of ensuring that all generators will be present to participate in the spot market. Regarding renewable generation, Joskow <cit.> discussed a hybrid framework for capacity and spot markets, where long-term power purchase agreements with wind, solar, and storage developers are competitively procured, which the author called the “competition for the market,” and incorporating it with “competition in the market” through short-term energy markets. Access to a good climate model would raise confidence among renewable energy developers to develop more renewable energy farms and participate in the market without needing additional support. Secondly, enabled by technological innovations on the consumer end and improved accuracy of consumption patterns, we frequently see electricity demand side participation in the real world, e.g., load resources participation in Texas electricity market <cit.>, energy coupon <cit.>, and direct load control <cit.>. There has been an increasing demand for an edge-based market that allows consumers to trade their excess energy and integrate it with the wholesale energy market. Thirdly, as we integrate more and more renewable power plants and continue to electrify other sectors, we may run out of transmission line capacity. Innovative solutions with storage (e.g., deploying storage devices at both ends of the transmission lines) enabled by data analytics and climate models would facilitate optimal utilization of transmission resources. Alternatively, energy efficiency <cit.> is another solution to reduce system-wide demand itself. Markets enable the efficient exchange of resources, but such an efficient exchange may not lead to adopting climate-friendly technologies. This necessitates implementing a performance-based market design that not only incentivizes short-term efficient and reliable operation of the energy system but is also sustainable and adaptive to long-term environmental goals. 
Suitable regulatory frameworks and financial incentives that encourage energy providers to invest in renewable energy sources or technologies that improve grid reliability and efficiency need to be set up in this regard. Greater attention should be given to balancing the development of new market mechanisms with the potential consequences of heightened uncertainty and complexity, which requires crafting policies that offer the right incentives to the right participants. Power grid operators have to adapt urgently to prevailing weather and climate patterns to effectively implement the energy transition with reliable supply by stating deadlines in terms of renewables and storage. Moreover, there is a need to allow for the extreme weather events arising from the changing climate, at least until greenhouse gas emission is arrested, by building resilience into the system through more accurate information about changes in extreme weather patterns and statistics arising from the changing climate. Power grid researchers should also consider prudent risk-aware scenarios, where power system planning and operation have to adapt to the extreme climate risk environment, such as massive human migrations to habitable parts of the planet or potential for war due to lack of energy. § CONCLUSION Tackling climate change requires aggressive and timely decarbonization across the entire economy. Decarbonizing the electricity sector, and electrifying other sectors of energy consumption will play a crucial role in this transition. Research in climate models could better inform the planning of energy sources, demand, and the grid. Conversely, specific needs of the electric energy system could also define new research opportunities in higher-resolution climate modeling and simulation. A significant research partnership between the power systems community and the climate change community at large would help with the mitigation of climate risks through an accelerated decarbonization process, while a whole-of-system approach that encompasses the physical, climate and weather, economic, and social systems to develop a scenario-based approach where decisions will ensure a smooth transition. Acknowledgments The work of L.X., S.M., Q.Z. and P.C. is supported in part by Texas A&M Energy Institute, College of Arts and Sciences at Texas A&M University, and Texas A&M Engineering Experiment Station. Author contributions L.X., S.M. T.H. and Q.Z. conceived and designed the paper. P.C. and D.J.H. contributed material and analysis tools. L.X., S.M. T.H., and Q.Z. drafted the paper with input from all co-authors. All authors read and approved the final version of the paper. Competing interests The authors declare no competing interests. Additional information Correspondence should be addressed to Le Xie. Peer review information The authors sincerely thank all the anonymous reviewers for significantly improving the quality of the paper.
http://arxiv.org/abs/2406.18461v1
20240626161129
Broadening the Canonical Picture of EUV-Driven Photoevaporation of Accretion Disks
[ "Riouhei Nakatani", "Neal J. Turner", "Shinsuke Takasao" ]
astro-ph.EP
[ "astro-ph.EP" ]
http://arxiv.org/abs/2406.18366v1
20240626140839
Active Learning for Stellar Spectral Classification
[ "R. El-Kholy", "Z. M. Hayman" ]
astro-ph.SR
[ "astro-ph.SR", "astro-ph.IM" ]
Active Learning for Stellar Spectral Classification
July 1, 2024
==============================================

§ ABSTRACT
Supervised machine learning models are increasingly being used for solving the problem of stellar classification of spectroscopic data. However, training such models requires a large number of labelled instances, the collection of which is usually costly in both time and expertise. This paper explores the application of active learning algorithms to sampling stellar spectra using data from a highly class-imbalanced dataset. We utilize the MaStar library from the SDSS DR17 along with its associated stellar parameter catalogue. Using different active learning algorithms, we iteratively select informative instances, where the model or committee of models exhibits the highest uncertainty or disagreement, respectively. We assess the effectiveness of the sampling techniques by comparing several performance metrics of supervised-learning models trained on the queried samples with randomly-sampled counterparts. Evaluation metrics include specificity, sensitivity, and the area under the curve, in addition to the Matthews correlation coefficient, which offers a more balanced assessment that considers all aspects of the confusion matrix and is thus more suitable for use with imbalanced datasets. We apply this procedure to effective temperature, surface gravity, and iron metallicity, separately. Our results demonstrate the effectiveness of active learning algorithms in selecting samples that produce performance metrics superior to random sampling and even stratified samples. We discuss the implications of the findings for prioritizing instance labelling of astronomical-survey data by experts or crowdsourcing to mitigate the high time cost.
methods: data analysis – methods: statistical – techniques: spectroscopic – surveys – stars: general

§ INTRODUCTION
Stellar spectra can be divided into seven main spectral classes according to the Harvard scheme of stellar spectral classification. These classes, O, B, A, F, G, K, and M, follow a sequence represented by the effective temperature of stellar atmospheres, where the hottest stars belong to class O (T_eff≳ 25,000 K) and the coolest belong to class M (2,000 K < T_eff < 3,500 K). Each of these main classes can be further divided into 10 subclasses from 0 to 9, where 0 represents the hottest stars within that class and 9 the coolest. Morgan and Keenan later proposed appending a luminosity class (Ia, Ib, II, III, IV, and V) to the main class and subclass (e.g. our Sun is of class G2V). The luminosity class depends on the surface gravity of stars, often represented as logg in stellar parameter catalogues, where luminous supergiants with the lowest logg values belong to class 'Ia' and dwarfs with the largest values belong to class 'V'. The modified system became known as the MK classification system. A review of stellar spectral classification can be found in <cit.>. Stellar spectral classification of large numbers of stars is essential to studies of stellar populations and galactic formation history. In the past, stellar spectral classification was done by human experts, who had to visually inspect each of the spectra. With the advancement of computational capabilities and the introduction of machine learning (ML) algorithms, more sophisticated techniques have been applied to classify stellar spectra.
Among those are χ^2-minimization, artificial neural networks (ANN), and principal component analysis (PCA) <cit.>. With the avalanche of stellar spectroscopy data pouring from telescope surveys, the use of ML algorithms has been increasing and has proven capable of reducing the error and improving the accuracy of stellar spectral classification <cit.>. However, for any supervised ML algorithm to be applied to the stellar classification problem, a large sample of labelled data has to be collected and curated for the training of the model, which is very costly in terms of both time and expertise. This has always been a limitation for the use of supervised ML techniques, and is especially prominent for applications of deep learning (DL) frameworks. Attempts to tackle this problem by crowdsourcing the classification have been made, such as in the case of the Galaxy Zoo Project <cit.> which eventually started to include many other applications[<https://www.zooniverse.org/>], and have indeed been effective to some extent. However, this approach in itself suffers from two limitations: (i) For certain tasks, many non-expert volunteers become uncertain of their answers, which might lead to inaccurate labelling which would eventually reflect in the poor performance of models trained using that data; and (ii) the crowdsourcing process does not resolve the problem of the time-cost completely. Some efforts have been employed to solving the first limitation, by careful curation of the questions and taking the confidence level of the volunteers into account, with some success <cit.>. However, this can also exacerbate the time-cost issue. Another more-effective approach that can minimize the size of the required training dataset while keeping fewer high-quality instances for labelling is active learning (AL) <cit.>. The use of AL algorithms has been shown to give favourable results in many astronomical applications such as stellar population studies, photometric supernova classification, galactic morphology, and anomaly detection for time-domain discoveries <cit.>. In this work we apply AL algorithms to a set of stellar spectra to study the efficiency of the sampling techniques in selecting instances that are informative and representative of the overall distribution of the data pool and investigate whether the performance of models trained using the selected instances is comparable to that of models trained on randomly-sampled instances, or even stratified samples. We use the MaNGA Stellar Library (MaStar) <cit.>, which is highly imbalanced, from the seventeenth data release of the Sloan Digital Sky Surveys (SDSS) <cit.>. We start by applying a preprocessing pipeline to the data. We use random sampling to establish a baseline for comparison. We vary both the initial batch size and the number of additional instances sampled using each algorithm. Supervised ML algorithms are then trained using each sample and their performances on test sets are compared. Several metrics are applied for comparing the performances. The process is implemented for three stellar parameters: effective temperature, surface gravity (in terms of logg), and iron metallicity. We finally test the progression of the performance of spectral classification with the increase in the number of selected instances, and demonstrate how AL sampling produces results superior to both random and stratified sampling even with less than half the sample size. The paper is structured as follows. 
In Section <ref>, we give an overview of the spectral dataset used in this work. In Section <ref>, we describe the preprocessing steps applied to the data, illustrate the AL algorithms employed, briefly illustrate each of the supervised learning models used for classification, and define the set of performance metrics used for model assessment. In Section <ref>, we present our results and discuss their potential interpretations. Finally, in Section <ref>, the summary and conclusions of this study are provided. § DATA In this work, we use the final version of the MaNGA Stellar Library (MaStar) from the seventeenth data release (DR17) of the Sloan Digital Sky Survey (SDSS) <cit.>. MaStar is a large library of high-quality calibration empirical stellar library. The MaStar data are obtained using the Baryon Oscillation Spectroscopic Survey (BOSS) spectrograph <cit.>, the same as the main MaNGA survey, mounted on the Apache Point Observatory 2.5m telescope <cit.>. The same fibre system used by the MaNGA survey were used as well. The targets of the MaStar library were chosen to cover a wide range of parameter space. The MaStar spectra cover a wavelength range of 3,622 - 10,354 with a spectral resolution of R ∼ 1,800. The first release of MaStar is presented in <cit.> and its final version will be detailed in Yan et al. (in preparation). Empirical spectra are obtained by observing real stars. Hence, they are not subject to many of the limitations of synthetic spectra produced by theoretical models <cit.>. However, empirical spectral libraries are limited by the wavelength range, spectral resolution, and parameter-space coverage. This makes the high quality and wide coverage of the MaStar library particularly optimal for use in data-driven experiments. In this work, we only use the good-quality visit spectra included in the file. This library has a flux calibration accurate to 4% <cit.>. SDSS DR17 also includes a Value Added Catalogue (VAC) containing four sets of different stellar parameter measurements. Each measurement uses different methods, the details of which can be found in the respective papers (<cit.>; Chen et al. (in preparation); Lazarz et al. (in press); and Hill et al. (in press)); and a detailed comparison will be presented in Yan et al. (in preparation). The same VAC also includes the median values of these methods when available and robust, along with the uncertainties of these medians based on the quality assessments of each set of measurements. We rely on these median columns in our current work. We apply our approach to three stellar parameters: effective temperature (T_eff), surface gravity (logg), and iron metallicity ([Fe/H]). We only include stars that have a median value available in the VAC for each of these parameters. After dropping unqualified visits, we end up with 59,085 spectra (visits) of 24,162 unique stars. For this set of spectra, more than 85% have signal-to-noise ratio (S/N) > 50, with an overall mean value of about 126. The resulting stellar parameter ranges are as follows: * 2,800 K⪅ T_eff⪅ 31,000 K, * -0.25 dex⪅logg⪅ 5.25 dex, * -2.75 dex⪅[Fe/H]⪅ 1.00 dex. The final parameter distribution is also shown in Fig. <ref>. Each of the three parameters was then used separately to classify the spectra into categories according to the ranges shown in Table <ref>. The resulting class distribution for each parameter is shown in Fig. <ref>. 
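For concreteness, binning the catalogue parameters into classes can be done with a simple cut over the median values; the temperature boundaries below are illustrative MK-style cuts and are not necessarily the exact ranges of Table <ref>.

# Sketch: assigning spectral classes from median T_eff values.
# The bin edges are illustrative only; the paper's table gives the actual ranges.
import pandas as pd

teff = pd.Series([3100.0, 5800.0, 9500.0, 28000.0])          # example median T_eff [K]
bins = [2000, 3500, 5000, 6000, 7500, 10000, 25000, 31000]   # M, K, G, F, A, B, O (illustrative)
labels = ['M', 'K', 'G', 'F', 'A', 'B', 'O']
spectral_class = pd.cut(teff, bins=bins, labels=labels)
print(spectral_class.tolist())                               # ['M', 'G', 'A', 'O']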
It is clear that the dataset is highly imbalanced for all three parameters, where the imbalance ratio (IR) ranges are as follows: * T_eff: 1.21 – 154, * logg: 1.05 – 25.8, * [Fe/H]: 2.14 – 23.9. Finally, Fig. <ref> shows a sample spectrum for every class of each parameter. § METHOD In this section, we describe the methods applied in this study. We first prepare the data for use in Section <ref>. Then, we apply the AL algorithms described in Section <ref> to curate training samples. The output is iteratively used to train ML models as outlined in Section <ref>, and the results are compared with a random-sampling benchmark according to the metrics defined in Section <ref>. The pipeline of the study is shown as a flowchart in Fig. <ref>, and further details of the experiments and steps applied are described in Section <ref>. §.§ Preprocessing Before using any dataset to train an ML model, it has to be suitably prepared. To this end, we apply the following four-step preprocessing scheme: * employ a feature-selection routine adapted from the algorithm used by <cit.> as a first step toward dimensionality reduction, * split the dataset into training and testing sets, * use min-max normalization to scale each of the selected features, and * apply Principal Component Analysis (PCA) to further reduce dimensionality. Feature selection is a common approach to dimensionality reduction, where we extract the most relevant set of features to reduce the number of dimensions of the input space. It helps speed up the algorithm while also discarding some of the noise inherent in the data. There is more than one way to achieve this, but we apply the approach proposed by <cit.>, where we pick flux measurements around specific absorption lines. That work included the Hδ (4,102 Å) and Ca I (4,227 Å) lines as they cover the seven main spectral classes. The idea is that the flux intensity of such lines is what determines the spectral class, while the width of the lines is what determines the luminosity class. Thus, it is not enough to include the flux measurement closest to the wavelength of the absorption line in question; a sufficiently wide region needs to be included around that wavelength to account for the line width in addition to the shifting of the spectrum due to radial velocity. We adopt the same procedure, but instead of only using the two lines mentioned above, we include other lines, as listed in Table <ref>, since our procedure is applied not only to spectral and luminosity classes but also to metallicity classes. The flux measurements from all regions are combined at the end to create one flux array per spectrum. At this preprocessing step, we reduce the feature-space dimensionality from 4,563 to 674. Since the last two preprocessing steps of the scheme outlined above include parameter fitting, the data has to be split first, as only training data can be used in the fitting process in order to prevent data leakage. Accordingly, 10% of the dataset is set aside for testing. Because the dataset is highly imbalanced, stratification during data splitting is necessary to ensure that the evaluation metrics obtained at the testing step accurately reflect the model performance. Moreover, to avoid any ambiguity that might result from multi-label stratification, this step is applied separately to a copy of the entire dataset for each of the three classification parameters: T_eff, logg, and [Fe/H]. This leaves us with 53,176 samples for training and 5,909 for testing.
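As a concrete illustration of the splitting, scaling, and dimensionality-reduction steps of the scheme above, the sketch below assumes the scikit-learn implementations, with the spectra already reduced to the 674 selected flux features and the number of retained principal components set to 10, as adopted later in this section; it is a simplified outline rather than the exact code used in this work.

```python
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler
from sklearn.decomposition import PCA

def preprocess(flux, labels, n_components=10, test_size=0.1, seed=42):
    """Stratified split -> min-max scaling -> PCA, all fitted on the training set only.

    `flux` is the (n_spectra, 674) array of selected flux features and `labels`
    holds the class labels for one of the three parameters (T_eff, logg, or [Fe/H]).
    """
    X_train, X_test, y_train, y_test = train_test_split(
        flux, labels, test_size=test_size, stratify=labels, random_state=seed
    )

    # Scale each flux feature to [0, 1] using training-set minima/maxima only,
    # then apply the same transform to the test set to prevent data leakage.
    scaler = MinMaxScaler().fit(X_train)
    X_train, X_test = scaler.transform(X_train), scaler.transform(X_test)

    # Reduce to the leading principal components, again fitted on the training set.
    pca = PCA(n_components=n_components).fit(X_train)
    return pca.transform(X_train), pca.transform(X_test), y_train, y_test
```

The same routine is run separately for each classification parameter, since the stratification labels differ between T_eff, logg, and [Fe/H].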
After splitting the dataset into training and testing sets, each feature is scaled using the equation: f_i,scaled = f_i - f_i,minf_i,max - f_i,min, where f_i is the i^th flux measurement, f_i,max and f_i,min are the corresponding maximum and minimum flux measurements, respectively, and f_i,scaled is the scaled flux measurement. This step is crucial to provide a frame-of-reference for the model to compare feature values for different samples. However, as mentioned before, only the training set can be used in determining the minimum and maximum values for each feature. These values are then used to scale the testing set. This process is easily handled by use of the of the module <cit.>, where the scaler is first fit by the training set and later used to transform the entire dataset. Due to the higher computational expense of the AL algorithms described in Section <ref>, we had to minimize the number of features further. Thus, we use PCA, which is a statistical method that defines a linear transform to reduce a dataset to its most essential features (i.e. principal components). These components are ordered according to the variance captured by each of them. Using this transform, an approximation of the dataset can be obtained by a few major components. A thorough description of the PCA method can be found in <cit.> or <cit.>. In this work, we first apply PCA to the entire dataset (after applying the feature scaling routine described above) to determine the number of principal components we will include in our model. As shown in Fig. <ref>, we found that more than 99.9% of the variance in the data is captured by the first 10 components. Early trials also indicated that a higher number of features results in an increase in computational cost that cannot be justified by any improvement in the model performance. After the dataset has been split and scaled separately for each parameter, PCA is applied on the training sets and the resulting approximations are used to map the testing sets as well. This process was practically executed using the class from the module <cit.>. §.§ Active learning approach The role of a classification ML model is essentially to generate a mapping between input features and the class labels based on the features and labels of the training dataset. However, for this mapping to be as accurate as possible, large amounts of labelled training instances are required. The labelling process is often very expensive in terms of time and manpower. The collection of such data is currently one of the main challenges in ML applications <cit.>. The solution to this problem would be to minimize the size of the needed training data while only keeping the most high-quality data. This could be achieved by careful selection of unlabelled instances to later be labelled by an annotator or expert, which is the goal of Active Learning (AL) <cit.>. AL algorithms can be categorized into three main scenarios: membership query synthesis, stream-based selective sampling, and pool-based active learning; which is the most well-known of them and to which the algorithms we use in this work belong. A detailed discussion of the three different scenarios and the advantages and limitations of each can be found in <cit.> or <cit.>. The pool-based sampling approach selects instances from an existing pool of unlabelled data based on the active learner evaluation of the informativeness of some or all of the instances in the pool. 
The selected instance is then annotated by the oracle and added to the labelled training set. This process is iteratively repeated until a criterion is reached, which is usually a maximum number of iterations. Thus, this type of scenario generally includes two adjustable parameters: the initial labelled batch size and the number of additional instances to be queried. Fig. <ref> illustrates the pool-based sampling approach. In this work, we tested six different sampling strategies that can be divided into two categories. The first category is uncertainty sampling which includes three strategies based on three uncertainty measures: classification uncertainty, classification margin, and classification entropy. The second category is query by committee (QBC) which includes three strategies as well based on three disagreement measures: vote entropy, consensus entropy, and maximum disagreement. We give a brief definition of each strategy below, but a more thorough explanation can be found in <cit.>. On one hand, uncertainty sampling evaluates each instance in the unlabelled pool and presents the most informative one to be annotated and added to the labelled training set, where the evaluation of instances is based on an uncertainty measure, hence the name. The first measure we try here is classification uncertainty defined by: U(x) = 1 - P(x̂|x), where x is the instance to be predicted and x̂ is the most likely prediction. The strategy selects the instance with the highest uncertainty. For the classification margin strategy, the difference in probability between the first and second most likely classes is calculated according to: M(x) = P(x̂_1|x) - P(x̂_2|x), where x̂_1 and x̂_2 are the first and second most likely classes, respectively. In this case, the strategy selects the instance with the smallest margin, since it means that the learner is less decisive about the predicted class. Finally, classification entropy is calculated using: H(x) = - k∑ p_k log(p_k), where p_k is the probability of the sample belonging to the k^th class. This is proportional to the average number of guesses that has to be made to find the true class. Thus, the strategy selects the instance with the largest entropy. On the other hand, QBC strategies are based on having several hypotheses (i.e. classifiers) about the data, and querying the instances based on measures of disagreement between the hypotheses. The first measure we try is vote entropy defined by: E_vote(x) = - y∑N(y|x)|𝒞|logN(y|x)|𝒞|, where N(y|x) is the number of `votes' the class y receives for instance x among the hypotheses in committee 𝒞, and |𝒞| is the committee size. This strategy selects the instance where E_vote is the largest, since it corresponds to the most uniform distribution of votes among classes. It is a `hard' vote entropy measure; we also try a `soft' vote entropy measure referred to as consensus entropy which accounts for the confidence of each committee member and is defined by: E_cons(x) = - y∑ P(y|x) logP(y|x), where P(y|x) is the average `consensus' probability that y is the correct class according to the committee. Finally, the maximum disagreement measure is based on the Kullback-Leibler (KL) divergence <cit.>, which is a measure of the difference between two probability distributions. 
In other words, the disagreement is quantified as the average divergence of each classifier's prediction from that of the consensus 𝒞 as follows: D(x) = 1|𝒞|θ∈𝒞∑KL( P_θ(Y|x) ∥ P_𝒞(Y|x) ), where the KL divergence of committee member θ is defined by: KL( P_θ(Y|x) ∥ P_𝒞(Y|x) ) = y∑ P_θ (y|x) logP_θ (y|x)P_𝒞 (y|x). As the name suggests, this strategy picks the instance with the maximum disagreement value, D_max. In this work, we use each of the AL strategies described above to iteratively sample instances for training supervised ML models. We apply this approach to each of the three stellar parameters separately, taking random sampling as a baseline for comparison. We use the Modular Active Learning framework for Python3 () <cit.> to implement these strategies directly into our code. The pipeline of the entire experimental steps is detailed in Section <ref>. §.§ Machine learning models In this work, we use different supervised-learning algorithms and compare their performances according to the metrics described in Section <ref>. We apply three ML models: k-nearest neighbours (KNN), random forest (RF) <cit.>, and gradient boosting (GB) <cit.>; in addition to an ensemble model that combines their outputs. Some ML algorithms are only used in certain experiments as detailed in Section <ref>. In what follows, we briefly introduce each algorithm. *KNN  The k-nearest neighbours (KNN) is a clustering algorithm based on distance metrics. The standard Euclidean distance is most-commonly chosen as the distance metric measure. KNN can be applied to both regression and classification problems. Since it is one of the more simple algorithms, it offers a robust way to establish a baseline for classification accuracy. At its core, it is based on the assumption that if two data points are nearby each other, they belong to the same class. k is a tunable parameter that represents the number of neighbouring points to be considered, such that the classification of a data point relies on the voting results of the k neighbours that are nearest to it in the multidimensional space. *Random Forest  Random forest (RF) <cit.> is widely used for classification and regression problems because it is fast to train and scales well while also maintaining competitive performance to other ML algorithms. RF is an ensemble method consisting of randomly-generated decision trees. It uses bootstrap sampling techniques which means that different decision trees are simultaneously trained on different subsets of the training data using random subsets of the features. Thus, while a decision tree usually overfits, RF is less prone to overfitting as it uses the average of the trees, which ultimately improves classification accuracy. *Gradient Boosting  The GB algorithm <cit.> is a powerful ensemble model that combines multiple decision trees to create a stronger predictive model. In GB, each subsequent tree corrects the errors of the previous one. It optimizes a specific objective function, typically a loss function, by minimizing it through gradient descent. Overall, GB achieves higher performance and better generalization with lower time cost than other ensemble learning methods, such as the stochastic forest algorithm and the support vector machine (SVM) of a single model <cit.>. *Voting  Voting is an ensemble learning technique used in classification and regression tasks. It combines predictions from multiple base classifiers and selects the class label by voting, which leads to improved performance compared to individual classifiers. 
The voting classifier can be implemented using soft or hard voting. A hard voting classifier chooses the class with the highest frequency of votes, whereas a soft voting one averages the class probabilities across all base classifiers. Different base classifiers can be given different voting weights based on their individual performances. In this work, we combine KNN, RF, and GB in a soft voting classifier, giving RF and GB weights of 2 each and KNN a weight of 1. §.§ Metrics In this study, we aim to compare the performances of different sampling methods, in addition to the performances of ML models. Accuracy is the most basic evaluation metric, and is given by: Accuracy = TP + TNTP + FP + TN + FN, where TP, FP, TN, and FN are the numbers of true positives, false positives, true negatives, and false negatives, respectively. However, accuracy does not give an accurate reflection of the model performance in the case of class imbalance. Hence, a more helpful pair of metrics can be employed for that; namely, sensitivity and specificity. Sensitivity, or the true positive rate (TPR), measures the ability of a model to classify positives correctly, and is given by: Sensitivity = TPTP + FN, while specificity, or the true negative rate (TNR), measures the ability of a model to classify negatives correctly. and is given by: Specificity = TNTN + FP. It is also conventional to use another metric that relates both sensitivity and specificity, which is the area under the curve (AUC) of the Receiver Operating Characteristic (ROC) curve, where TPR is plotted against the false positive rate (FPR) at different threshold values, where FPR is given by: FPR = FPFP + TN = 1 - specificity. AUC is a measure of the total two-dimensional area under the ROC curve; and is a very good predictor of the overall performance of the classifier, where a baseline random model is expected to have an AUC ∼ 0.5 and a perfect model would have an AUC value of 1. Fig. <ref> shows examples of ROC curves for models with different levels of performances. For a multi-class imbalanced dataset, sensitivity is of particular interest since it emphasises the ability of the model to correctly identify true positives of minority classes; whereas a model can score high specificity even if it can only classify the majority classes. To take that into account, we use all three metrics to compare the AL algorithms with random sampling. For all three metrics, we calculate the macro value for the metrics. That is, we evaluate the metric for each class separately and take the average as the final metric value. This is an added measure to give the performance of the model on minority classes the same weight as its performance on majority ones. In addition, we use the Matthew’s Correlation Coefficient (MCC) as a fourth metric. It is defined by: MCC = TP×TN - FP×FN√((TP + FP) (TP + FN) (TN + FP) (TN + FN)). Because MCC uses all four elements of the confusion matrix (TP, TN, FP, and FN) in the numerator, it does not get skewed by class imbalance, which makes it a more reliable metric for summarizing the overall performance of the model across all classes. MCC ranges from -1 (total disagreement between predicted and true labels) to 1 (perfect prediction), where MCC = 0 indicates a near-random prediction. This makes it a very intuitive metric for understanding the performance of a classifier. A more thorough explanation of each of the chosen metrics can be found in <cit.>. 
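As an illustration, the macro-averaged metrics can be computed as in the sketch below, assuming the imbalanced-learn and scikit-learn implementations and a fitted classifier that exposes predict and predict_proba; this is a minimal example, not the exact evaluation code used in this work.

```python
from imblearn.metrics import sensitivity_score, specificity_score
from sklearn.metrics import matthews_corrcoef, roc_auc_score

def evaluate(model, X_test, y_test):
    """Macro-averaged sensitivity/specificity, one-vs-rest AUC, and MCC for a fitted classifier."""
    y_pred = model.predict(X_test)
    y_proba = model.predict_proba(X_test)
    return {
        "sensitivity": sensitivity_score(y_test, y_pred, average="macro"),
        "specificity": specificity_score(y_test, y_pred, average="macro"),
        "auc": roc_auc_score(y_test, y_proba, multi_class="ovr", average="macro"),
        "mcc": matthews_corrcoef(y_test, y_pred),
    }
```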
Even though we defined the metrics above in terms of binary classification to provide a clear concept of what they measure, these definitions are easily generalized to fit their application on a multi-class dataset. This is easily handled by making use of the module <cit.> to calculate both sensitivity and specificity, and the module <cit.> for the AUC and MCC. §.§ Pipeline In what follows, we outline, in order, the steps taken in the experiments performed in this work. The aim of the first experiment carried out was to compare the performances of models trained on samples queried by different approaches. Since the aim is not to optimize the ML model selected but the sampling algorithm, we started by training each of the models described in Section <ref> on the entire dataset to select the highest-performing one. The chosen model was then used throughout the rest of the steps, except for QBC strategies, where we use a committee of three learners; namely, KNN, RF, and GB, all initialized using the same batch. We then evaluated each sampling strategy, separating uncertainty sampling strategies from QBC strategies and using random sampling as a baseline in both cases. We varied the initial batch size, taking care to initialize all strategies using the same batch. We begin by evaluating the initial model on the testing set before iteratively augmenting the training sample by querying the data pool and retraining the model. The model performance is reevaluated after every 5 queries using each of the metrics listed in Section <ref>. We perform 20 runs as described to account for the performance variance of some strategies, and calculate the mean performance metrics over all runs. This experiment is repeated for each of the stellar parameters separately. The aim of the second experiment is to assess the performance progress of a model trained on an AL-sampled set with the increase in the number of additional instances. This experiment is only carried out on effective temperature. We pick the highest-performing strategy from the first experiment to use with the same best-performing model chosen before, and only sensitivity is used to assess the models in this experiment. For the sake of comparison, we use three baseline training sets: * (i) the whole initial training set, * (ii) a random sample of 10% of the initial training set, * (iii) a stratified sample of 10% of the initial training set. For both (ii) and (iii), 20 different samples are used to account for performance variance, and the mean results are calculated along with their standard deviations. Finally, we run the AL-sampling method 5 times (averaged at the end) to sample 5% of the initial training data pool and retrain and reevaluate the model every 5 queries. § RESULTS AND DISCUSSION In this section, we present and discuss the results of the experiments carried out in this study. We began by evaluating the performance of different ML models on the testing set after being trained on the entire training set. The results of this step are shown in Table <ref>. It can be seen that RF outperforms the other three across all metrics, particularly sensitivity, for both effective temperature and surface gravity. For iron metallicity, the voting model has a slightly better AUC score compared to RF, but the latter still scores higher on the other three metrics. It is also worth noting that the computational cost of the voting model is almost three times that of RF, since it is training the three member learners under the hood.
Hence, we decided to use RF for all three parameters moving forward, except when using QBC strategies as mentioned before. We can also see from Table <ref> that the KNN model has the lowest overall scores across all three stellar parameters. This is because we do not perform any hyperparameter tuning for the ML models used, but rather keep the default values of the library <cit.>. In the case of KNN, the most important hyperparameter is the number of neighbours, k, which has a default value of 5. Early trials with hyperparameter grid searches for some of the ML models used in this study indicated that a value of 18 achieves better performance score for the KNN model. This suggests that the overpopulation of the feature space with majority classes makes it necessary to increase the number of neighbours taken into account to correctly classify instances that belong to minority classes. Fig. <ref> shows the performance scores of different single-learner AL sampling strategies along with a random-sampling baseline for different initial batch sizes when applied to effective temperature. It can be seen that, for all metrics, at least one uncertainty sampling strategy outperforms random sampling. In particular, all AL strategies significantly outperform the random baseline on sensitivity scores across all initial batch sizes. The best-performing strategy is clearly the classification margin strategy, but only for an initial batch size (n_init) = 20. When the first and second columns of subplots in the figure are compared, we can notice that the performance scores after adding 50 AL-sampled instances to an initial randomly-chosen set of 20 is always higher than the corresponding scores when using an initial set of 100 randomly-chosen instances. This demonstrates the effectiveness of AL sampling in achieving better scores with fewer training instances. We can also see that the larger the size of the initial training batch, the less pronounced the improvement in performance due to AL sampling. However, the improvement in sensitivity for all AL strategies is still evident, even with an initial batch of 500 instances. This offers a contrast with the plateauing of random-sampling sensitivity at the same initial batch size. Of course, the emphasis on sensitivity scores is due to the highly-imbalanced nature of the dataset, which makes sensitivity scores more representative of a model's ability to correctly identify minority classes. Fig. <ref> shows the performance scores of different QBC disagreement sampling strategies, along with a random sampling baseline adapted for QBC learning as well, for different initial batch sizes applied to T_eff. We can see that some of the scores in this case are higher than those of the single RF model shown in Fig. <ref>. However, when we take into account the fact that the computational cost of a QBC model is almost linearly dependent on the number of learners in the committee (three in this case), the corresponding improvement in performance diminishes. Comparing the scores of QBC disagreement strategies with the random approach yields similar results to non-committee uncertainty sampling comparison with random sampling. Nevertheless, it is worth reiterating that AL strategies score higher than random sampling on sensitivity, even after increasing the initial batch size. It is clear that the vote-entropy strategy outperforms all others across all metrics and initial batch sizes. 
If computational resources were no issue, higher scores can be obtained via pre-calibration of single committee members. However, this again raises the need for labelled instances to perform such calibration prior to training. The performance scores of uncertainty sampling strategies compared with random sampling applied to surface gravity with different initial batch sizes are shown in Fig. <ref>. Random sampling performance is comparable to AL uncertainty sampling strategies for most metrics. However, the classification margin strategy outperforms all the others across all metrics, even with an increasing initial batch size. The differences in MCC and sensitivity scores between margin sampling and random sampling particularly highlights the effectiveness of the strategy in mitigating the impact of class imbalance. Finally, the effect of increasing the initial batch size is similar to that discussed above. Fig. <ref> shows scores of QBC sampling applied to surface gravity with increasing initial batch sizes. Unlike the case of T_eff, there is no noticeable improvement in performance compared to uncertainty sampling strategies shown in Fig. <ref>. Again, this might be due to the use of committee members without prior hyperparameter tuning, which would require an initial labelled set for performing grid searches. It is worth noting that the vote-entropy strategy outperforms all others. The difference is particularly significant for MCC and sensitivity. In Fig. <ref>, we show the performance scores of uncertainty sampling strategies compared with random sampling when applied to iron metallicity with different initial batch sizes. Compared to the sensitivity scores when the model is applied to both T_eff and logg, we can see that the improvement with the increase in additional instances is much lower across all strategies in the case of [Fe/H]. This could be due to the existence of chemically peculiar (CP) stars in the minority classes of the testing set. CP stars mainly belong to spectral classes A and B <cit.>; and their existence in the testing set will necessitate a higher number of training instance before the model can start to correctly identify rare classes. This is evident when we look at the improvement of sensitivity scores when we start with a batch size of 500 instances. It is worth noting that this impact will not be so pronounced if we choose the `micro' instead of `macro' values for the metrics (see Section <ref>). The figure also shows that uncertainty-margin sampling still outperforms all others in MCC and specificity, even with the increase of n_init. The final set of results for the first part of this study is shown in Fig. <ref>; namely, the performance scores of QBC sampling strategies applied to iron metallicity with increasing initial batch sizes. When compared to Fig. <ref>, we can see that QBC does not offer any improvement upon single-learner uncertainty sampling in any metric, regardless of the additional computational cost of training a QBC model. Some models show erratic behaviour at lower numbers of additional instances with n_init = 20. This could be due to the KNN member of the committee, which requires more class population to stabilize (particularly in case of existence of CP stars). This is particularly evident for random sampling because it is less likely to populate neighbourhoods of rare classes with fewer instances. It is worth mentioning that the vote-entropy strategy still outperforms all others across all metrics and initial batch sizes. 
Taking a closer look at Fig. <ref> and Fig. <ref> together, we can see an `elbow' feature in the first column (n_init = 20) across all metrics at 15 additional instances, which is equivalent to a total training set size of 35 instances. However, this feature does not appear in Fig. <ref> or Fig. <ref>, indicating that it can be attributed to the use of QBC. The most likely reason is that each member of the committee requires the feature space to be populated in a different way in order to improve performance. This translates to the necessity of a higher number of training instances to improve the collective performance of the committee. This is further corroborated by the fact that we cannot see a similar feature in Fig. <ref>, because the number of iron metallicity classes is lower (4 compared to 7 and 6 for T_eff and logg, respectively). In the second part of this study, we used an RF model along with the uncertainty-margin sampling strategy to track the progress of the sensitivity score on the test set with the increase in the number of instances queried by the AL algorithm; the results – when applied to effective temperature – are shown in Fig. <ref>. We also included three baselines for comparison, corresponding to RF models trained on three different sets: the whole training pool (100%), a random sample of 10%, and a stratified sample of 10%. For the AL strategy, we use only 10 instances as a random initial batch in this experiment. Based on these results, on one hand, it seems that stratification only adds a very slight improvement (∼0.24%) over random sampling. This demonstrates that a sampling strategy more effective than stratification is needed to achieve higher performance scores with fewer training instances and less computational cost, even disregarding the fact that stratified sampling requires the entire data pool to be labelled prior to instance selection. On the other hand, it is clear that the AL approach outperforms both samples with only half the training set size. We can also see that the variance in the AL approach sensitivity starts to increase after the first score jump at around 100 instances. This is because the feature space of the training sample is widening but has not yet accumulated enough instances to cover the finer details of each class. However, when the training sample reaches a size of around 2,000 instances, we can see the variance starting to decrease significantly. In spite of this, the sensitivity score of the uncertainty-margin algorithm is still trending upward at the end of the curve, which shows that further improvement can safely be expected if the number of sampled instances is increased further, once more computational resources are available. § CONCLUSIONS The results shown in this paper demonstrate the effectiveness of AL in curating training sets for supervised ML models with the objective of achieving the best possible stellar spectral classification performance while reducing labelling costs in terms of time and expertise. Compared to classical classification approaches, several interesting conclusions can be drawn. They are as follows: * AL algorithms significantly improve the performance of stellar spectral classification compared to random or stratified sampling methods, by iteratively selecting the most informative instances to annotate. * AL reduces the size of the labelled training set required for achieving the same performance as random sampling, making it more cost-effective and efficient.
* AL algorithms are more robust against data imbalance, which is often the case in stellar spectra datasets, consequently ensuring that rarer stellar classes would be represented adequately and, thus, classified correctly. * AL sampling strategies are scalable and can be practically used on large datasets, indicating that it can be integrated into stellar survey data processing pipelines. * Models trained on samples curated using AL methods exhibit better generalization with fewer instances, which is evident when evaluated on unseen testing data, making them more reliable in real-survey applications. Therefore, the AL approach for automating stellar spectra data curation and classification is feasible, accurate, and cost-effective. Based on the findings of this study, we recommend the integration of AL algorithms in citizen science projects to accelerate the annotation process even further. They can also be used in automated astronomical surveys to optimize the selection of spectra for follow-up observations and analysis. Future work will be conducted to: * adapt the approach used here for multi-label classification in order to further minimize the amount of training data needed; * investigate the percentage of data required to achieve the same performance scores obtained by using the entire dataset; * merge data from different surveys to use in curating a comprehensive training sample using AL, and make it publicly available to use for automated stellar classification in future surveys; * evaluate AL algorithms available for regression problems, in order to leverage them in curating pipelines for estimation of stellar atmospheric parameters. All of the above is contingent on increasing the computational resources currently available. § ACKNOWLEDGEMENTS tocsectionACKNOWLEDGEMENTS To prepare the code required for performing this work, we used each of the following open-source Python <cit.> libraries: <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, and <cit.>. In this work, we have used SDSS database extensively. Funding for the Sloan Digital Sky Survey IV has been provided by the Alfred P. Sloan Foundation, the U.S. Department of Energy Office of Science, and the Participating Institutions. SDSS-IV acknowledges support and resources from the Center for High Performance Computing at the University of Utah. The SDSS website is <www.sdss4.org>. 
SDSS-IV is managed by the Astrophysical Research Consortium for the Participating Institutions of the SDSS Collaboration including the Brazilian Participation Group, the Carnegie Institution for Science, Carnegie Mellon University, Center for Astrophysics | Harvard & Smithsonian, the Chilean Participation Group, the French Participation Group, Instituto de Astrofísica de Canarias, The Johns Hopkins University, Kavli Institute for the Physics and Mathematics of the Universe (IPMU) / University of Tokyo, the Korean Participation Group, Lawrence Berkeley National Laboratory, Leibniz Institut für Astrophysik Potsdam (AIP), Max-Planck-Institut für Astronomie (MPIA Heidelberg), Max-Planck-Institut für Astrophysik (MPA Garching), Max-Planck-Institut für Extraterrestrische Physik (MPE), National Astronomical Observatories of China, New Mexico State University, New York University, University of Notre Dame, Observatário Nacional / MCTI, The Ohio State University, Pennsylvania State University, Shanghai Astronomical Observatory, United Kingdom Participation Group, Universidad Nacional Autónoma de México, University of Arizona, University of Colorado Boulder, University of Oxford, University of Portsmouth, University of Utah, University of Virginia, University of Washington, University of Wisconsin, Vanderbilt University, and Yale University. § DATA AVAILABILITY tocsectionDATA AVAILABILITY The MaStar spectra file used in this article is publicly available https://data.sdss.org/sas/dr17/manga/spectro/mastar/v3_1_1/v1_7_7/mastar-goodspec-v3_1_1-v1_7_7.fits.gzat this link and the associated stellar parameter catalogue can be found https://data.sdss.org/sas/dr17/manga/spectro/mastar/v3_1_1/v1_7_7/vac/parameters/v2/mastar-goodstars-v3_1_1-v1_7_7-params-v2.fitshere. The Python code created for this work is made available on https://github.com/rehamelkholy/StellarALGitHub. mnras tocsection
http://arxiv.org/abs/2406.19096v1
20240627112559
In-situ Controller Autotuning by Bayesian Optimization for Closed-loop Feedback Control of Laser Powder Bed Fusion Process
[ "Baris Kavas", "Efe C. Balta", "Michael R. Tucker", "Raamadaas Krishnadas", "Alisa Rupenyan", "John Lygeros", "Markus Bambach" ]
eess.SY
[ "eess.SY", "cs.SY" ]
§ ABSTRACT Open-loop control of laser powder bed fusion (LPBF) additive manufacturing (AM) has enabled the industrial production of complex and high-criticality parts for aerospace, power generation, medical, transportation, and other industries. This approach relies on static parameter sets obtained through extensive experimentation and a priori simulation on analog parts, with the hope that they remain stable and defect-free once transferred to the production parts. Closed-loop control of LPBF has the potential to enhance process stability further and reduce defect formation in the face of complex thermal histories, stochastic process noise, hardware drift, and unexpected perturbations. The controllers can be classified based on the spatial and temporal scales in which they operate, designated as layer-to-layer and in-layer controllers. However, the performance and effectiveness of controllers largely depend on the tuning of their parameters. Traditionally, controller tuning has been a manual, expertise-driven process that does not guarantee optimal controller performance and is often constrained by the non-transferability of settings between different systems. This study proposes the use of Bayesian Optimization (BO), a sample-efficient algorithm, to automate the tuning of an in-layer controller by leveraging the layer-to-layer repetitive nature of the LPBF process. Two alternative approaches are introduced: online tuning, which adjusts parameters iteratively during the process, and offline tuning, conducted in a representative setup such as laser exposures on a bare metal plate. The proposed methods are experimentally implemented on an in-layer PI controller and the performance of the resulting tuned controllers is investigated on two different wedge geometries that are prone to overheating. The results demonstrate that BO effectively tunes controllers using either method, where both significantly reduced overheating in controlled wedge specimens compared to those uncontrolled. Notably, this study provides the first printed parts controlled by an in-layer controller and subjected to microstructural analysis in the literature. Microstructural findings show the partial presence of lack-of-fusion type porosities induced by the controller assigning insufficient laser power to compensate for the overheating which highlights one of the most significant challenges for the utilization of laser power controllers. In summary, BO presents a promising method for the automatic tuning of in-layer controllers in LPBF, enhancing control precision and mitigating overheating in production parts. Looking forward, BO could extend to broader LPBF settings and related additive manufacturing modalities, potentially transforming controller tuning into a more adaptive and robust process across different machines and materials. Networks with many structural scales: a Renormalization Group perspective Andrea Gabrielli ========================================================================= § INTRODUCTION Laser Powder Bed Fusion (LPBF) is a prominent additive manufacturing process with applications spanning several industries. Despite its widespread adoption, the LPBF process is susceptible to disturbances stemming from its complex multi-physics nature, making it difficult to model the exact behavior of the process. There has been an increasing interest in closed-loop feedback control applications on the LPBF process to improve process robustness. 
Closed-loop control is mainly applied at two scales due to the nature of the process: layer-to-layer and in-layer. In-layer closed-loop feedback control implementations rely predominantly on Proportional-Integral-Derivative (PID) controllers due to the high-frequency requirements of the in-layer control objective. PID controllers, well-established in various engineering fields, are common in research and practice due to their straightforward design and proven effectiveness. §.§ In-layer Controller Applications in LPBF The initial adoption of closed-loop feedback in LPBF can be traced back to the seminal paper by Benda et al. <cit.>, who applied a simple form of control to the LPBF process on iron powder. Their preliminary experimental study demonstrated the feasibility of utilizing an on-axis optical sensor for the dynamic control of the melt pool. Subsequent efforts used PID controllers <cit.>. These early studies were limited to qualitative comparisons of basic geometries, where they showed the potential of stabilizing the photodiode signal by actuating the laser power. The study of Craeghs et al. <cit.> showed two use cases for the in-layer PI controller. One was an artificial case with smaller hatch spacing to simulate excessive energy input, while the other addressed melt pool emission variations on overhanging surfaces. They showed improved stability of the melt pool signals in both cases. Their controller design only included PI terms due to the noise of the photodiode signal, which rendered the use of the D-term impractical. Renken et al. implemented an in-layer controller with only the proportional (P) term to control the melt pool. They showed improved thermal stability on a machined bridge structure, where the melt pool undergoes a heat build-up on the thinner cross-section in the uncontrolled case <cit.>. A follow-up study reports the implications of the P-controller application on various vector sizes and a single-layer exposure on powder <cit.>. The controller improves the stability of the melt pool in all cases compared to the predefined feed-forward open-loop laser power inputs. Syed et al. proposed a controller design that also includes the I and D terms and presented a simulation-based study showing that the PID controller successfully stabilizes the in-vector variation of the melt pool size <cit.>. Shkoruta et al. implemented a PI controller using the high-speed camera signal and applied the controller both on a single track and on a multi-track single layer, where they showed improved stability of the melt pool size compared to the non-controlled case <cit.>. Finally, Rongxuan Wang et al. <cit.> present a PD control application and experimentally show the stabilization capability of the controller in a single-layer case of vector-to-vector heat accumulation. Each study referenced up to this point follows a common methodological approach toward parameter optimization of PID controllers. Specifically, tuning of the gain parameters is conducted completely manually, utilizing heuristic techniques based on an empirical understanding of the effect of individual parameters. §.§ The Tuning Challenge of PID Controllers The effectiveness of PID controllers is contingent on the tuning and selection of the gains applied to the proportional (P), integral (I), and derivative (D) error terms. Traditionally, parameter tuning has been a time-consuming heuristic process that is heavily reliant on expertise.
Moreover, the lack of a universal parameter set that is transferable between different LPBF processes or even varying conditions within the same process exacerbates the tuning challenge. Varying conditions dictate that the tuning procedure be performed in a setting where the system behavior is representative of the actual printing conditions. Therefore, the term offline tuning refers to the tuning procedure performed under artificial or isolated conditions that are assumed to be sufficiently representative of the actual process conditions, while online tuning refers to tuning during the natural process conditions <cit.>. Online and offline tuning approaches each have specific trade-offs for tuning a controller in the manufacturing setting. While offline tuning is the usual choice for tuning controllers when large changes in the environment or the corresponding system are not expected, the resulting performance is fixed by the selected parameters.  <cit.>. Online, or adaptive tuning is performed close to actual process conditions and can increase performance under changing conditions. However, it may introduce increased variability during the tuning phase. Besides the challenge arising from the tuning setting, the difference in the order of magnitude of the controller parameters also makes the tuning process lengthy and costly <cit.>. Well-established heuristic methods such as Ziegler-Nichols <cit.> are shown to achieve an effective controller tuning requiring open loop access, without guarantees on the optimality for most cases. Iterative methods requiring open-loop access are not desired in manufacturing systems due to the safety blocks and time consumption. Joseph et al. recently published a comprehensive review of PID controller tuning algorithms that summarizes the existing methods for various applications <cit.>. Among the described algorithms, Bayesian Optimization has been gaining significant traction in various industrial applications. §.§.§ Bayesian Optimization for controller tunning Bayesian Optimization (BO) is an efficient data-driven optimization algorithm that excels in settings characterized by limited data availability and complex system dynamics. It operates by modeling the process inputs and outputs as Gaussian processes and selecting sample points that minimize overall uncertainty while promoting exploration. Since BO uses Gaussian processes, knowledge about the form of the function to model the process is generally not required. Among the various types of controllers, BO-based autotuning of PID controllers recently gained significant attention <cit.>. For example, in motor control, Chen et al. utilized BO to tune the PID controller of an adjustable payload servo motor <cit.>. Similarly, Hajieghrary et al. <cit.> and Fujimoto et al. <cit.> reported improved controller performance with BO-tuned PID controllers in mobile robotic manipulators and servo motor control applications, comparing favorably to traditional tuning methods. König et al. used a modified version of the BO algorithm to adaptively optimize a cascade PI controller for a rotational axis drive, demonstrating promising outcomes <cit.> for different operating contexts. In the following study by Zagorowska et al, the BO algorithm is used to autotune the PID that controls the position of a high-precision motion system and increase the tracking performance. <cit.>. Khosravi et al. 
showed a 20% increased controller performance by BO-tuning of the linear axis drive of a CNC grinding machine compared to nominal settings <cit.>. All the cited work shows the superior performance of the BO autotuned PID controllers in non-linear dynamic systems without requiring a physical model of the system. The demonstrated effectiveness of the BO algorithm in tuning PID controllers across a variety of complex applications promises to be effective for the distinct challenges for controller tuning of the LPBF process. Accurately modeling pyrometer response in the LPBF process is challenging due to complex melt pool dynamics and data acquisition noise, rendering model-based PID tuning impractical. Manual tuning of PID controllers is also a labor-intensive process, requiring significant expertise and time without guaranteeing optimal performance. Consequently, the use of Bayesian Optimization (BO) as a model-free, sample-efficient algorithm for PID tuning in LPBF offers considerable promise for achieving efficient and high-performance controller tuning. §.§ Research Contribution So far, to our knowledge, no algorithmic approaches have been reported for PID tuning for within-layer control of the LPBF process. Given the non-transferable nature of the controllers across different machines and materials, this gap represents a bottleneck for the further utilization of the in-layer controllers. Reported controller studies only performed offline tuning procedures where the tuning is manually performed in a representative setting and then applied in the actual process. There are no reported studies that propose an online controller tuning method without disturbing the LPBF printing process. In this work, sample efficient controller autotuning procedures for LPBF are presented, based on Bayesian optimization methods. Online and offline approaches are experimentally tested on a geometry prone to overheating to illustrate the empirical comparison of both methods and provide a baseline for future research in the field. The main contributions of this work are: * An autonomous controller tuning procedure that does not require a prestudy of controller parameters and experimental demonstration for in-layer PI controller tuning in the LPBF process, * An online controller tuning procedure that leverages the layer-to-layer nature of the LPBF process with a detailed experimental comparison of printed 3D wedge geometries between the online and offline tuned controller performances. <Ref> describes the experimental procedure with detailed descriptions of the characterization methods, hardware, and software used for the reproducibility of the procedures described. The proposed autotuning framework and implementation of the BO algorithm are given in <Ref>. In <Ref>, results are shared where they are discussed in detail in <Ref>. Concluding remarks and future directions are given in <Ref>. § MATERIALS AND METHODS §.§ Materials §.§.§ LPBF Processing An Aconity3D Midi+ (Aconity3D GmbH, Herzogenrath, Germany) LPBF machine <cit.> was used in the experiments. The processing laser was a continuous-wave Gaussian-mode fiber laser with a wavelength of 1080nm and a maximum output of 500W (nLIGHT Alta, Vancouver WA, USA). The laser was focused to a beam diameter of 80 and a 30 layer thickness was used for printing. The nominal laser power and scan speed parameters for the given layer thickness and beam diameter for fully-dense microstructure are 150W and 800mm/s, respectively. 
The powder used in this study was gas-atomized stainless steel 316L (1.4404) with a particle size distribution of 15-45 (CT POWDERRANGE 316LF, Carpenter Additive, Cheshire UK). The metal plate used to expose the laser for the offline tuning setting was made of S304 steel. §.§.§ In-layer Control Hardware and Software The PI-based in-layer controller was implemented using the AconityCONTROL hardware and software upgrade package <cit.>. Schematic of the controller implementation to the optical axis of the machine is given in <Ref>. The laser beam generated is transmitted to the optical unit via a fiber optic cable. To ensure parallel alignment, the unfocused beam within the cable is passed through a collimator. The beam is then focused through the beam expander and reflected by a 45-degree dichroic mirror into a galvanometer scanner, which steers the beam across the powder bed. The movable beam expander ensures consistent focus regardless of the projection location on the build platform. Upon sufficient heating, a melt pool is formed. A portion of the emitted radiation from the vicinity of the melt pool traces back up the same optical path until the dichroic mirror, where it is transmitted onto a pyrometer module. The radiation is manually focused with a movable lens and aligned using an X-Y micrometer table. The pyrometer converts the intensity of the incipient radiation to an analog voltage at a sampling rate of 100kHz. The pyrometer used in the setup is Kleiber KG740 <cit.> with a wavelength range of 1500 to 1700. The intensity reading is passed to a field-programmable gate array (FPGA) for the embedded feedback signal calculation of the implemented PI controller. Finally, the generated input signal is fed back into the laser driver during the laser power assignment during the continuing exposure. The sampling rate matches the clock frequency of the controller PC, therefore the total delay time of the entire loop matches the time length of a single sample, which is 10. §.§ Method The generalized schematic of the proposed auto-tuning procedure is shown in <Ref>, and is comprised of two main loops. The lower loop, highlighted in orange, represents the in-layer controller. The controller actuates the laser power u(t) within a scan vector to match the sensor reading y(t) to the defined reference value using the PI controller and pyrometer feedback. The upper loop, highlighted in green, represents the auto-tuning iteration loop. The auto-tuning procedure is initialized with a predefined controller parameter set. To find updated values, the in-layer control loop is used to expose a single vector with pyrometer feedback. After exposure, a cost value Ĵ_i is calculated by using the u(t) and y(t) as inputs. Then the cost value is used by the BO algorithm to calculate the new set of controller parameters θ_i+1 to be passed to the controller. The tuning iteration loop is repeated either until the cost value is below a target threshold or until a specified number of iterations is reached, after which the parameters are held constant. The design of the cost function and the BO algorithm used in this study are elaborated in detail in <Ref>. §.§.§ Offline and Online Controller Tuning Methods The proposed auto-tuning procedure is evaluated here for online and offline conditions. Schematic representation of the procedures are shown in <Ref>. Offline tuning involves exposing a bulky metal object, i.e., a steel plate instead of a powder layer, to evaluate controller performance in each iteration. 
The laser is directed at the same region during each iteration after updating the controller parameters. The controller parameters that minimize the cost function by the end of the designated number of iterations are applied during the build job to the parts of interest. This procedure would be performed prior to starting a build job with powder by marking the build plate. It is a simplified setting compared to an actual print job with powder since the melt pool emissions from a solid substrate exposure are expected to be more stable. Furthermore, no powder is consumed by this tuning method and iterations are more rapid since no time is required to recoat. However, this method may be less representative of the actual process conditions. Online controller tuning is performed at the beginning of the printing process with powder and can be done in locations outside the actual parts to be built. In this case, a single-vector thin-walled geometry is added to the build job. The wall is exposed and recoated with powder to complete a single iteration. Meanwhile, the parts are printed without control using standard fixed parameters until the tuning is finished. After finalizing the tuning procedure, the controller parameters with the minimum cost are applied for controlling the parts in the remaining layers. The melt pool emissions from the single-vector wall tend to be less stable in the presence of powder. The build-up of material over several layers also represents a commitment to building a part, or at a minimum, to removing the wall from the build plate before it can be reused. However, this method is more representative of actual process conditions than offline tuning. §.§ Experiment design Two experiments were designed for the implementations of the offline and online optimization methods as shown in <Ref>. A 3mm-thick S304 steel plate was exposed with the 10mm-long single vector for offline optimization for 200 iterations. A build job with S316L powder was performed for 200 layers by performing the online optimization cycle on a vector in powder with layer applications. The optimal controller parameter sets were acquired from online and offline tuning procedures. The timing sequence of the iteration execution is described both for the offline and online settings in <Ref>. An iteration begins with the exposure of a single line scan of 10mm with the PI controller running and with the input power and output pyrometer signal being recorded. Due to the melt pool emission difference between plate and powder, applying the same reference value for the PI controller yields a different range of laser power inputs. To obtain the laser power application in the same range, a reference value of 60 mV is assigned for the online controller tuning instead of 30 mV for the offline. For the online tuning, recoating is performed after the laser exposure, which takes 13 seconds using the machine's standard settings. The build process is then paused while the pyrometer signal is processed through the cost function, which in turn is fed into the BO algorithm. The BO algorithm calculates the controller parameters for the next iteration and the parameters are updated before resuming the build process. The pause to the restart duration takes ∼2 seconds, however, most of this time is used by the in-machine network and for the update of the parameters. The cost calculation and the BO execution themselves take less than 0.2 seconds in total. 
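For clarity, one tuning run can be sketched as the loop below. The Gaussian-process BO backend shown here (the ask/tell interface of scikit-optimize) is an assumption for illustration rather than the toolbox actually used; the machine-interface helpers (set_gains, expose_vector, recoat) are hypothetical placeholders for the machine's Python build-job interface; compute_cost stands for the composite cost defined in Section <ref>; and the gain bounds are illustrative only.

```python
from skopt import Optimizer
from skopt.space import Real

N_ITER = 200          # number of tuning iterations used in this study
REFERENCE_MV = 60.0   # pyrometer set point (60 mV online, 30 mV offline)

# Search space for the PI gains; the bounds are illustrative placeholders.
space = [Real(1e-3, 1e1, prior="log-uniform", name="Kp"),
         Real(1e-3, 1e1, prior="log-uniform", name="Ki")]
bo = Optimizer(dimensions=space, base_estimator="GP", acq_func="EI", random_state=0)

best_gains, best_cost = None, float("inf")
for _ in range(N_ITER):
    kp, ki = bo.ask()                        # candidate gains from the GP surrogate
    set_gains(kp, ki)                        # hypothetical: update the in-layer PI controller
    u, y = expose_vector(length_mm=10.0)     # hypothetical: laser power input and pyrometer output
    recoat()                                 # online setting only; omitted in the offline setting
    cost = compute_cost(u, y, REFERENCE_MV)  # composite cost g of the single exposed vector
    bo.tell([kp, ki], cost)                  # update the surrogate with the observed cost
    if cost < best_cost:
        best_gains, best_cost = (kp, ki), cost

set_gains(*best_gains)                       # hold the best gains for the remaining layers
```

In the offline setting, the recoat step is simply dropped, which is what makes plate-based iterations faster than on-powder ones.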
For the build job scheduling and the iteration-to-iteration parameter updates, the Python-based build job execution capabilities of the Aconity Midi+ machine were used, as described in previous work <cit.>. For the second part of the experiment, designed to evaluate the performance of the tuned controllers, two wedge geometries with 28° and 55° angles (as shown in <Ref>) were printed. Due to the pyrometer signal's dependency on the orientation of the laser exposure vector, the scan files were generated using unidirectional scanning, as shown by the vectors with arrows in <Ref> (a). The wedge angles were selected arbitrarily to exhibit a realistic in-layer heat build-up scenario in which the vector-to-vector time gradually becomes smaller. In total, two sets of the wedge geometries were printed with the online- and offline-tuned controller parameters.

§.§.§ Microscopic Evaluation
Specimens were separated from the build platform by electrode-discharge machining. They were cut close to their center line with a Struers Accutom-10 cutting machine and then metallographically prepared by embedding, grinding, and polishing. Embedding was performed hot in a Struers CitoPress machine with DuroLite bakelite resin; the specimens were then ground with 320-grit sandpaper and polished with Struers commercial polishing cloths Largo, Dac, Nap, and Chem using suspensions with particle sizes of 9, 3, 1, and 0.1 µm, respectively. Polished mounts were scanned with a Keyence VHX-7000 microscope in coaxial and ring lighting modes.

§ THEORY
In this section, the PI laser power controller, the pyrometer as the sensor, the cost function, and the Bayesian Optimization (BO) algorithm are explained in detail.

§.§ PI Controller
The expression G_c1(s) = K_p + K_I/s represents the transfer function of a PI controller in the Laplace transform domain. Here, s is the complex variable in the Laplace domain, which simplifies the analysis of linear time-invariant systems. The coefficients K_p and K_I are the proportional and integral gains, respectively. The proportional term K_p provides a control action proportional to the error, while the integral term K_I/s integrates the error over time, aiming to eliminate steady-state error and thereby allowing tracking of the reference value <cit.>.

§.§ Cost function
A cost function is formulated to quantify the controller performance through the pyrometer signal. Multiple objectives are identified for quantifying the deviation of the melt pool conditions from the nominal response. The pyrometer signal is expected to:
* track the given reference value with a minimal amount of deviation through a single vector,
* reach the reference value in the shortest time possible,
* stabilize without oscillation in the steady state.
The first criterion corresponds to maintaining the ideal melt pool size through the hatch region, regardless of varying preheating conditions, vector lengths, or defects. The second criterion shortens the transition time for the melt pool to stabilize at the start of the vector and is referred to as the rise time. The third criterion is introduced to limit controller-induced oscillations, especially for larger controller gains. For these criteria, a composite cost function g is formulated for the optimization problem as g(x) = √(MSE'^2 + t_rise'^2 + σ_laser'^2) where MSE', t_rise', and σ_laser' denote the normalized mean squared error, normalized rise time, and normalized standard deviation, respectively.
These three terms are combined into a single error metric g using the Euclidean norm, where [MSE', t_rise', σ_laser']^T is the error vector. An illustrative description of the components of the proposed cost function applied to a typical laser strike is shown in <Ref>. Next, each term of the cost function is explained in detail. First, the normalized mean squared error (MSE') is formulated as MSE'(y) = 1/(N c_MSE) ∑_{i=N/2}^{N} (y_i - ŷ)^2, where y_i is the pyrometer measurement and ŷ is the reference value. N is the number of recorded pyrometer samples and appears in the denominator to normalize for the length of the exposed vector. This term penalizes the deviation of the pyrometer signal from the set point. The term is only calculated for the second half of the vector (the last 5 mm) so that it is less affected by the rise time, as shown by the purple-colored signal error lines in the second half of the first plot in <Ref>. Given the significant disparity in magnitude between units, such as laser power in the order of hundreds of watts and pyrometer signals in the tens-of-mV range, each term's impact on the composite error varies considerably. To balance the weight of each term, a second normalization factor c_MSE is introduced and multiplied with the vector length N. For the MSE term, c_MSE is chosen as 500. Second, the rise time is defined as t_rise'(y) = 1 if (1/N) min_k {k: |y_k - ŷ| ≤ 0.05 ŷ} = 0, and (1/N) min_k {k: |y_k - ŷ| ≤ 0.05 ŷ} otherwise, where N, y, and ŷ are the same as in Equation <ref> and k represents the number of data points before the signal reaches 95% of the reference value, as shown by the orange dashed line in <Ref>. If the signal fails to reach the target at all, i.e., k=0, the rise time term yields 1. Shorter rise times are thus favored by this cost function term. Lastly, the standard deviation is formulated as σ_laser'(P) = √(1/N ∑_{i=1}^{N} (p_i - μ_i)^2)/c_σ, where p is the laser power assignment and μ is the rolling average of the laser power assignments. For the σ_laser term, c_σ is chosen to be 150. μ is given by μ_i = 1/w ∑_{j=i}^{i+w-1} p_j, where w is the window length, defined to be 100 for the experiments of this study. Unlike the first two terms, the standard deviation is calculated for the laser power input instead of the pyrometer signal. Because the steady-state oscillation of the pyrometer signal is characterized by the combined effect of the process physics and sensor noise, the normalized standard deviation of the laser power signal is used to isolate the controller-induced oscillations. This term of the cost function favors the stability of the process input. Examples of low- and high-cost laser power signals are given in <Ref>. The first iteration example exhibits high-magnitude oscillation in the laser power that is induced by the controller. The histogram of the laser power assignments with the rolling-average mean subtracted exhibits a broader distribution, yielding a normalized standard deviation value of 0.24 for the entire vector. The second iteration example has a lower normalized standard deviation value of 0.11 as it does not exhibit oscillatory behavior.
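A minimal numpy sketch of the three cost terms and the composite cost is given below. It assumes y (pyrometer signal, mV) and p (laser power, W) are equal-length arrays recorded over one vector; the constants 500, 150, and the window length 100 are the values quoted above, while the rolling-average edge handling is simplified relative to the forward window in the equation.

```python
import numpy as np

C_MSE, C_SIGMA, WINDOW = 500.0, 150.0, 100   # constants used in this study

def mse_term(y, y_ref):
    n = len(y)
    half = y[n // 2:]                         # second half of the vector only
    return np.sum((half - y_ref) ** 2) / (n * C_MSE)

def rise_time_term(y, y_ref):
    n = len(y)
    within = np.abs(y - y_ref) <= 0.05 * y_ref
    if not within.any():                      # never reaches 95% of the reference
        return 1.0
    return np.argmax(within) / n              # first sample inside the +/-5% band

def laser_std_term(p, window=WINDOW):
    mu = np.convolve(p, np.ones(window) / window, mode="same")  # rolling average
    return np.sqrt(np.mean((p - mu) ** 2)) / C_SIGMA

def composite_cost(y, p, y_ref):
    terms = np.array([mse_term(y, y_ref),
                      rise_time_term(y, y_ref),
                      laser_std_term(p)])
    return float(np.linalg.norm(terms))       # Euclidean norm of the error vector
```

With logged data, composite_cost(y, p, 60.0) would, for example, reproduce the per-iteration cost value J_i used during online tuning.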
§.§ Bayesian Optimization (BO)
To obtain the optimal x based on the defined cost function g(x) while preserving stability in the system, we need to solve the following optimization problem: min_{x ∈ A} g(x), subject to the constraints f_j(x) ≤ J_max, ∀ j ∈ {1, 2, …, J}, where x is a decision variable vector within a continuous search space A ⊂ ℝ^n. Here, J_max is the pre-set constraint limit indicating the assignment limits of K_p and K_i, g: ℝ^n → ℝ is the objective function to minimize, and f_j: ℝ^n → ℝ, j = 1, …, J, are constraints that must be fulfilled. In this study, the range of the controller parameters is limited only by the design limits of the experimental setup, i.e., the range supported by the AconityCONTROL software. The functional forms of g_j, for j = 0, …, J, are unknown, but measurements of g_j can be obtained to develop surrogate models using Gaussian Processes (GPs). The mapping from the inputs (K_p and K_i) to the output cost g is modeled as a GP for the minimization objective defined in <ref>. Following the methodology of Berkenkamp et al. <cit.>, GPs are used to approximate g_j, for j = 0, …, J. The approximations are g̃_j(x): A → ℝ, where j = 0 corresponds to the objective function (1a) and j = 1, …, J to the constraints. Gaussian process regression presumes that the values g̃(x_0), g̃(x_1), …, g̃(x_P) for different x are random variables with a joint Gaussian distribution for any finite P. The model of g̃_j is a GP, which is characterized by known mean ψ_j(·) and kernel k_j(·, ·) functions: g̃_j(x) ∼ GP(ψ_j(x), k_j(x, x)), where the kernel function used in this study is the Radial Basis Function (RBF), described as k(x, x') = σ^2 exp(-‖x - x'‖^2 / (2l^2)), where σ^2 is the variance and l is the length scale; σ^2 and l are the hyperparameters, described in <Ref> below. In this setting, the usual assumption is access to noisy measurements ĝ_j(x) = g_j(x) + ω, where ω ∼ 𝒩(0, σ^2_ω) is a zero-mean normally distributed random variable with standard deviation σ_ω. To integrate these GPs into the optimization, the value of g̃_j is predicted at a new point x̂ using the R past measurements G_j = [ĝ_j(x_r)]_{r=1,…,R}. As per Rasmussen and Williams <cit.>, the mean and variance of the prediction at a new point x̂ are: μ_j(x̂) = ψ_j(x̂) + k_R(x̂)(K_R + I_R σ^2_ω)^{-1}(G_j - Ψ_j), σ^2_{R,j}(x̂) = k(x̂, x̂) - k_R(x̂)(K_R + I_R σ^2_ω)^{-1} k^T_R(x̂), where G_j is a vector of the R observed noisy values, Ψ_j = [ψ_j(x_r)]_{r=1,…,R} is a vector of mean values of the past data, j = 0, …, J, K_R is the covariance matrix of the past data with entries k(x_a, x_b), a, b = 1, …, R, k_R(x̂) contains the covariances between the new point and the past data, and I_R is the identity matrix of dimension R. The acquisition function acq: A → ℝ is employed to select the sample point for the next iteration based on the criterion of the highest standard deviation. In this work, we use an acquisition function corresponding to the Lower Confidence Bound (LCB), with a modification to ensure more efficient computation under safety constraints, following <cit.>. Thus, at each iteration we evaluate and optimize the acquisition function on the corresponding GP posterior as follows: x_{i+1} = argmax_x acq(x).
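The following numpy sketch illustrates the GP posterior prediction and an LCB-type acquisition over the (K_p, K_i) search space. It is an illustrative simplification under several stated assumptions: a zero prior mean, a fixed exploration weight beta, a small noise variance, per-input length scales, and a plain grid search in place of the modified safe-LCB optimizer used in the study; the sign convention here minimizes the LCB directly, which corresponds to the argmax formulation above for a minimization objective.

```python
import numpy as np

def rbf_kernel(A, B, sigma2=1.0, length=np.array([50.0, 100.0])):
    """RBF kernel with per-input length scales for (Kp, Ki)."""
    d = (A[:, None, :] - B[None, :, :]) / length
    return sigma2 * np.exp(-0.5 * np.sum(d ** 2, axis=-1))

def gp_posterior(X, g, Xq, noise_var=1e-4):
    """Posterior mean/variance at query points Xq given observations (X, g)."""
    K = rbf_kernel(X, X) + noise_var * np.eye(len(X))
    Kq = rbf_kernel(Xq, X)
    Kinv = np.linalg.inv(K)
    mean = Kq @ Kinv @ g                                  # zero prior mean assumed
    var = rbf_kernel(Xq, Xq).diagonal() - np.einsum("ij,jk,ik->i", Kq, Kinv, Kq)
    return mean, np.maximum(var, 0.0)

def propose_next(X, g, bounds=((1, 100), (100, 1.6e6)), beta=2.0, n_grid=30):
    """Pick the (Kp, Ki) grid point minimizing the lower confidence bound."""
    kp = np.linspace(*bounds[0], n_grid)
    ki = np.linspace(*bounds[1], n_grid)
    Xq = np.array([[a, b] for a in kp for b in ki])
    mean, var = gp_posterior(np.asarray(X, float), np.asarray(g, float), Xq)
    lcb = mean - beta * np.sqrt(var)                      # optimistic lower bound
    return Xq[np.argmin(lcb)]                             # next parameters to try
```

In the tuning loop sketched earlier, propose_next would play the role of bo.suggest, with X and g accumulating the evaluated parameter sets and their measured costs.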
§.§.§ Hyperparameter selection
Three hyperparameters are set for the algorithm: the initialization of P and I, and the variance σ^2 and length scale l of the kernel function. The variance σ^2 sets the overall amplitude of the function, with a higher variance allowing for greater fluctuations from the mean. The length scale l determines the smoothness of the predicted function, where a smaller l leads to rapid variations in the function and a larger l results in smoother behavior. In dynamic system modeling using GPs, hyperparameters are typically estimated via maximum likelihood based on observed data <cit.>. However, in BO, the GP model not only performs regression on existing data but also actively requests new samples through data acquisition. Therefore, the hyperparameters are set before initialization and held fixed throughout the BO procedure, treating the kernel as a prior over functions. In this study, σ^2 is set to 1. The length scale l is set to 50 and 100 for the inputs P and I, respectively, and to 200 for the cost output, based on minimization of the negative log-likelihood on a training dataset. Initial values for P and I are set to their lower bounds of 1 and 100, respectively, based on historical experiments. A detailed study of the effects of the hyperparameters on tuning and of the initial parameter values is beyond the scope of this work. For the BO algorithm, the limits are set to 1-100 for K_p and 100-1,600,000 for K_i, as the upper and lower boundaries are dictated by the machine interface. During the iterations, the range and order of magnitude of controller parameters that cause disruptive instability are unknown. Restricting the input parameters with safe-BO algorithms can be employed to avoid such cases during the optimization <cit.>. In this study, no such restriction is imposed beyond the existing hardware limits. Safe progression of the iterations is instead ensured by assigning an upper laser power limit of 300 W, which was found in separate experiments to be within the stable process window for the studied material with the process parameters described above. Although instability may be observed during iterations, the laser power limit prevents excessive melting and ensures a safe process for any suboptimal controller setting.

§ RESULTS
§.§ Optimization
Optimization progress is shown in <Ref> for both the online and offline settings. The line plots represent the lowest error value achieved over all previous iterations. The minimum cost function value is reached at iteration 112 for the offline setting and iteration 74 for the online setting; thus, the online optimization reached its minimum earlier, with a faster drop in the error value. The offline-optimized values are P=8.45 and I=90598.24, while the online optimization yielded P=8.44 and I=65014.97. Time series plots of the pyrometer signal, the laser power assignment, and the error are shown for a single vector with the optimized PI settings in <Ref>. The first row displays the optimal iteration from the offline optimization process, while the second row presents the same for the online optimization. Despite the different pyrometer reference values, a similar laser power range of around 200 W results, owing to the difference in melt pool emissions on the steel plate and on powder. Overshooting is observed in the online optimization case but not in the offline one, as shown in the magnified inset plot.

§.§ Wedge prints
Time series and scatter plots of the 28° and 55° wedge parts are shown in <Ref> and <Ref>, respectively. The inter-vector travel time of the scanner has been removed from the time-series visualization of the wedge prints.
The uncontrolled wedge print was performed only for the 28° wedge part. In the plot of the 28° wedge part, magnified inset plots are shown for the pyrometer signals and the laser power input of the controlled wedge parts, and the vector start times are marked by the vertical dashed grey lines. After an initial transient, the pyrometer signal for both the online- and offline-tuned PI-controlled 28° wedges stabilizes about the set point value for the majority of the hatch vectors along the wedge. However, as the inset plots show, overshooting followed by overcompensation is present throughout the first half of the hatch region. The overshooting can be attributed to the controller behavior, as the same causal trend is also observed in the laser power values. Correspondingly, the laser power assignment is shown to vary along each hatch vector. Only at very short vector lengths (i.e., the end of the time series data) does the pyrometer signal begin to diverge from the command. The uncontrolled case shows a distinct heat build-up behavior as the vectors become shorter due to the geometry of the wedge parts. The 55° wedge part shows a vector-to-vector trend similar to that of the 28° wedge part. Overshooting is also present for the majority of the vectors; however, the signal settles in the earlier and longer vectors, as the inset plots of the pyrometer signal show. The later and shorter vectors exhibit the same trend as the initial vectors of the 28° wedge part, only with a shorter length. This resemblance can be attributed to the dependency of the controller behavior on the vector length. The overheating compensation is apparent in the laser power values decaying into the lack-of-fusion zone, which starts at 140 W as shown by the grey dashed line. Spatial plots of the pyrometer signal for the 28° and 55° wedge parts are shown in <Ref> and <Ref>, respectively. The top row shows the pyrometer signals resulting from the offline- and online-tuned controllers on the 28° wedge part, as well as the uncontrolled signal. The coloring of the presented data is centered around the reference value of 80 mV. The bottom rows show the same data with a different coloring to represent the performance of the controller: the green points lie within ±5% of the reference value, while the reds and blues are above and below this range, respectively. The pyrometer signal trend observed in the time series is also observable spatially for each vector. The start of each vector in the controlled cases exhibits the rise-time transient, represented by the blue starting end. The length of this region is constant across the vectors regardless of the vector length. It is followed by the overshooting region, represented by the red regions in the controlled cases, which is only apparent for the longer initial vectors. Based on the time series plots in <Ref> and <Ref>, overshooting is observed in all vectors; however, it does not exceed +5% of the reference value in the shorter vectors. The uncontrolled part shows a very distinct and asymmetrical overheating that is attributed to the unidirectional exposure of the hatch region. The green region in the uncontrolled part appears only as a short transient that the signal traverses as the part overheats, in contrast to the stable vector-to-vector trend of the controlled parts. In both wedge parts, the online-tuned controller yields less overshooting compared to the offline-tuned controller.
The pyrometer signal and the corresponding laser power assignment are shown in the first row of <Ref>. The pyrometer signal is visualized with the same color range as in <Ref>. The laser power assignment is color-mapped between 140 W and 210 W. Based on the identified parameter window, lack-of-fusion and keyhole pores are expected below 140 W and above 210 W of laser power, respectively. These porosity-prone regions are colored blue for lack of fusion and red for keyholing, as described by the plot legend. The metallographic inspection performed through the plane described by AB is shown in the second row. The microstructural image captures layers 30 to 75 through the AB plane. The right side of the image shows the cross-section of the initial vectors in the hatch of every layer, and the hatch progresses towards the left side. The lack-of-fusion defects observed on the left side of the image correlate strongly with the laser power assignments in the lack-of-fusion range. The mean value of the cost function applied to each wedge geometry is shown as a function of vector length for both the online- and offline-optimized parameters in <Ref>. To compute this, each layer was divided into individual hatch vectors labeled by length. The cost function used for the optimization was then computed for each vector individually, applying the same cost calculation equations and constant values described in Section <ref>. This procedure was repeated for all layers, and the mean of the resulting cost and its terms are plotted against vector length, as sketched in the code below. The shaded areas represent the 95% confidence interval of the calculated values about the mean lines. The overall cost values increase with decreasing vector length, and the trend becomes exponential for vector lengths shorter than approximately 3 mm. Neither the wedge angle nor the choice of offline- or online-tuned controller is observed to affect the error trend. To further elucidate the individual contributions of each element of the cost function plotted in <Ref>, <Ref> shows each metric separately for both geometries and both optimization methods. The laser power standard deviation and the pyrometer error remain roughly constant for vectors longer than 3 mm, while the rise time increases linearly. The increase in rise time with the other metrics constant yields only a minor change in the overall cost value. Vectors shorter than 3 mm show a distinct change in the error components: the pyrometer error and laser power standard deviation decrease linearly, while the rise time starts to increase exponentially. The overall cost value is dominated by the rise time component in the shorter vectors.
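A minimal sketch of this per-vector analysis is given below. It assumes the logged layer data has already been split into (vector length, pyrometer signal, laser power) tuples and reuses the composite_cost sketch from the Theory section; the bin width and the normal-approximation 95% confidence interval are illustrative choices, not values taken from the study.

```python
import numpy as np

def cost_vs_vector_length(vectors, y_ref, bin_width_mm=0.25):
    """Mean cost and 95% CI per vector-length bin.

    vectors: iterable of (length_mm, y, p) for every hatch vector of every layer.
    """
    bins = {}
    for length_mm, y, p in vectors:
        key = round(length_mm / bin_width_mm) * bin_width_mm
        bins.setdefault(key, []).append(composite_cost(y, p, y_ref))
    summary = []
    for length_mm in sorted(bins):
        c = np.asarray(bins[length_mm])
        mean = c.mean()
        ci95 = 1.96 * c.std(ddof=1) / np.sqrt(len(c)) if len(c) > 1 else 0.0
        summary.append((length_mm, mean, mean - ci95, mean + ci95))
    return summary   # (vector length, mean cost, CI lower, CI upper)
```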
§ DISCUSSION
§.§ Optimization progress
With either the online or the offline tuning procedure, BO achieves its lowest cost value at a P value of similar magnitude, with a ∼30% deviation in the I value. As shown in <Ref>, the algorithm achieved a lower cost value sooner in the online case than in the offline case. The faster error minimization in the online case can be attributed to the higher assigned reference value, which forces a higher I value to obtain an error value smaller than one, due to the normalization of the rise time. Furthermore, the presence of powder strengthens the relationship between the laser power and the pyrometer signal relative to a bare plate. Naturally, the same laser power input yields a higher emission value due to the lower mass density per applied energy in the powder case. The effect of the powder versus the steel plate on the lowest achievable error during optimization requires further study to be fully evaluated. For the demonstration of the proposed method, the number of iterations was conservatively fixed to 200. The optimization is assumed to have minimized the cost function based on the non-decreasing error values in the later iterations; however, several approaches could be implemented as a stopping strategy to decrease the total tuning duration <cit.>. Overall, offline tuning takes approximately 3 seconds per iteration and took 10 minutes to perform 200 iterations. The duration of the online tuning method depends heavily on the machine and the build setup; however, online tuning adds an insignificant amount of extra time to the normal build cycle. Choosing an online or offline optimization strategy for controller tuning is a decision that mostly depends on the availability of hardware, since the controllers tuned by both procedures achieve similar performance. In contrast, manual tuning of the setup by proven heuristic methods takes several hours with a domain-expert operator. The proposed algorithmic tuning therefore represents a significant step forward for state-of-the-art controller tuning in the LPBF process.

§.§ Optimization-signal characteristics
As shown in <Ref>, there are several notable differences in the pyrometer signal between the optimization with and without powder. Differences such as the downward and upward trends at the end of the vectors in the offline and online cases can be explained by substrate anomalies. The substrate profile may have become distorted due to overexposure of the same region in the offline setting and the build-up of a single-vector wall in the online setting. These anomalies primarily affect the mean squared error term of the cost function; however, they are consistent across the layers and as such do not significantly affect the optimization process. A significant finding is the overshooting phenomenon that occurs only in the powder exposure case, shown by the magnified region of the online control case in the bottom row of <Ref>. The signal overshoots the reference value by 7% and continues to fluctuate until it settles to the reference value over the first third of the vector. Although the signal remains within ±5% of the reference value, the fluctuation continues until settling ∼3 mm (around 6 ms) from the start of the vector. The same overshooting and settling behavior is observed in the laser power assignment and the pyrometer absolute error. As described in <Ref> and in (<ref>), the mean squared error of the pyrometer signal is calculated only for the second half of the signal to assess the tracking accuracy without the influence of the melt pool initiation transient. Therefore, the cost function is blind to the fluctuation in the first half of the vector, and the optimization progress is not affected by it.
Aside from the overshooting and settling differences, the rise times for the offline- and online-optimized cases are 920 and 1140 microseconds, respectively, which can be attributed to the difference in the I values of the two optimal parameter sets.

§.§ Wedge prints: PI control implications
The time series data from the wedge prints in <Ref> and <Ref> show the vector-to-vector trend of the controller, namely decreasing the laser power as the vector size decreases in order to track the pyrometer reference value. The overshooting and delayed settling observed in the online optimization is still evident at the in-vector scale. The magnified image of the pyrometer signal for the offline case of the 55° wedge geometry in <Ref> (top left) clearly shows the signal overshoot before it settles by the end of the first half of the vector. Once the vectors are shorter than ∼3 mm (i.e., approximately halfway through the time series representation), the pyrometer signal no longer settles fully. This non-settling behavior manifests as a vector-to-vector oscillation, as shown by the magnified pyrometer signal of the online-optimized case in <Ref> (top right). The same behavior is observed through the entire layer of the 28° wedge, as shown in <ref>, due to its shorter initial vectors (<3 mm). The spatial distribution of the overshooting and short-vector oscillatory behavior is also observed in the scatter plots shown in <Ref> and <ref>, and is made especially evident by the color bands of the lower plots. For the vectors of the 55° wedge, the pyrometer value starts too low, then quickly overshoots before stabilizing at the set point. The online-tuned controller appears to overshoot and undershoot more than the one tuned offline. The length of the initial transient appears to be constant along the progression of the hatches, despite the decreasing vector length. The spatial distribution of the pyrometer signal thus aligns with the overshooting and shorter-vector oscillation observed in the time domain. Overall, the findings suggest a minimum controlled vector length of approximately 3 mm with the experimental settings of this study. However, assuming that the pyrometer signal can be differentiated without introducing excessive noise, the overshoot may be further decreased by including a derivative control term to create a PID controller. Improvement of the signal quality, whether by hardware improvements or digital filtering techniques, is outside the scope of this study.

§.§ Wedge prints: controller-induced porosity
In <Ref>, the microstructural evaluation of one of the wedge parts is shown; the findings are representative of the remaining parts. The vectors starting at the right side (marked with B) show no porosity formation. This confirms that the laser power assignment resulting from tracking the selected reference value is within the nominal process window. However, the overheating introduced by the shorter vectors (towards point A) increases the melt pool emissions, so the controller drives the laser power lower to maintain the emission intensity at the reference value. The lowered laser power results in insufficient energy; therefore, the process shifts out of the process window into the lack-of-fusion zone.
Unconstrained energy input under overheating process conditions is expected to violate the process window, because the process window for a high-density microstructure is narrow compared to the wide range of the heat accumulation phenomenon, as exhibited by the uncontrolled part shown in <Ref>. In conclusion, the results suggest that in-layer dynamic laser power control with a static reference value is not generalizable to all geometries. Expanding the controllable limits by maintaining the energy input within the process window represents a challenge and a promising future research direction for the implementation of control strategies.

§.§ Wedge prints: cost function and error analysis
All wedge prints exhibit similar and consistent overshooting and delayed settling behavior for both the online- and offline-tuning cases, regardless of the vector size. A more in-depth comparison of the effect of the vector length on each of the cost function terms, along with the final cost value for each wedge print case, is shown in <Ref>. For both the online- and offline-optimized parameter settings, the 55° wedge error metrics are relatively constant until the vector length decreases below approximately 3 mm. Shorter vector lengths drive the laser power and pyrometer error values lower and increase the rise time cost value; the overall cost value is dominated by the rise time for all vectors shorter than 3 mm. An important remark is that, as described in Section <ref>, all cost terms are normalized by the vector length. While the pyrometer signal's MSE and the laser's standard deviation components describe the tracking performance of the controller through the vector, the rise time is defined only at the beginning of each vector and is independent of the vector size in a part print. Due to this independence, shorter vectors yield significantly higher rise time components and, ultimately, higher cost values. Despite the bias introduced by the normalization of the rise time component, the same rise time term is used for a consistent comparison of the online and offline cases as well as to evaluate how representative the optimization signal used in the BO algorithm is of real part printing. The constant normalization factors c_MSE and c_σ added to the error terms function as weight terms. They normalize the different units and magnitudes of the error terms for a similar representation in the cost calculation. Hence, these factors change the effect of each error subterm on the overall cost value calculated in each iteration. They are proposed to be defined with respect to the reference value and the laser exposure strategy used in the experimental setup and should therefore be adjusted depending on the implementation.

§.§ Online- vs offline-tuning
No significant difference between the wedges printed with online- and offline-optimized parameters is observed, which can be attributed to the relative similarity of the optimal parameters found by either method, as shown in <Ref>. The difference in the integral gain does not change the signal characteristics in either of the wedge prints, apart from a marginal difference in the laser power standard deviation component of the cost function of the online-optimized wedges, as shown in <Ref>. The lower laser power standard deviation term can be attributed to the effect of the higher integral gain on the stability of the controller.
As suggested by the similarity of the online and offline experimental findings, algorithmic tuning of a high-frequency laser power controller can be performed with either method and yields similar results. The optimization procedure can be applied autonomously at the beginning of a build within the sacrificial layers (i.e., those added for build platform separation), and the controller is then initiated with the optimized parameters after the defined number of layers, i.e., iterations. No pre-study or material-specific knowledge is required besides the initial gain assignment, since the method proposed in this study is a black-box optimization algorithm that minimizes a cost metric by screening the defined inputs. Similarly, tuning performed on the artificial setup is transferable to the controller for the actual process. The offline optimization plate is S304 steel, unlike the 316L used as powder; although tuning transferred well here despite this material difference, using a substantially different alloy system may not be representative of the signal characteristics. Either approach can be employed depending on the availability of dedicated hardware for the specific application.

§.§ Parameter window and reference value selection
The selected wedge geometries exhibit heat accumulation due to the decreasing vector-to-vector exposure times associated with the decreasing vector lengths. The geometries were specifically selected to represent a commonly occurring in-layer heat accumulation phenomenon in printed geometrical features. Increasing preheat increases the melt pool emissions, and the controller decreases the laser power input as a result. Due to the changing laser power input, two factors are critical to the quality of the printed parts: reference value selection and the parameter window. The reference value is required to capture the nominal process behavior, i.e., the emission observed under non-overheating conditions. In <Ref>, it is observed that, while compensating for the overheating, the laser power is driven below the parameter window at 140 W, as indicated by the dashed line, which results in lack-of-fusion porosity in the microstructure, as expected. The study of Kavas et al. described a limit for controlling the temperature on a layer-to-layer scale by actuating the laser power, imposed by the process window <cit.>. A limit with a similar root cause is observed in this study on the vector-to-vector scale: the vector-to-vector heat build-up is stabilized at the expense of crossing the lower boundary of the process window. This observation highlights the need for a controller design in which cooling of the printed part through waiting is also actuated, a promising direction for future in-layer studies. Similarly, a higher reference value would enable the controller to avoid descending into the lack-of-fusion zone.

§.§ Future work on in-layer control
The experimental findings of the proposed method suggest multiple directions for future studies to further enable closed-loop control applications for the LPBF process. First, for a more robust in-layer controller tuning application, the approach may be enhanced with the addition of a derivative (D) term, and the cost function can be improved by including further signal quality metrics such as overshoot and settling time. This assumes that the signal is of sufficient quality that the noise generated by discrete differentiation does not detract from the performance already achieved with a PI controller.
A new parameter adding a variable vector-to-vector dwell time to reduce heat accumulation and keep the input within the process window could further improve performance. Furthermore, despite the hardware and computational complexity challenges, other available monitoring sensors such as high-speed or thermal cameras could be incorporated into the closed-loop control application to stabilize the melt pool with respect to metrics such as size and shape. Similarly, the correlation of the pyrometer measurements with melt pool geometries and solidification rates could be used to tune the controllers for specific microstructures. The BO algorithm used in this study can further be applied for sensor-based parameter tuning of novel alloys in a closed-loop manner to reduce process parameter optimization costs.

§ CONCLUSION
A Bayesian Optimization algorithm was implemented for tuning an in-layer closed-loop controller for the LPBF process and experimentally demonstrated using PI control of the laser power for melt pool stabilization. Online and offline optimization approaches were proposed to enable fast and efficient optimization and were experimentally validated. The proposed cost function components are the mean squared error of the pyrometer signal, the pyrometer signal's rise time, and the variance of the laser power assignment around its rolling average. Offline optimization yielded a lower error in 112 iterations, while the online optimization converged earlier, at iteration 74. The tuned controllers were then employed experimentally to control the heat-accumulating behavior of two wedge geometries with different angles in order to evaluate controller performance under in-layer heat accumulation. The results showed similar performance for both parts and for both the offline and online controller settings, while the in-layer temperature was controlled efficiently compared to the uncontrolled example. The controller's ability to stabilize the pyrometer signal was observed to be correlated with the vector length, and the signal error increases with decreasing vector length. The minimum controllable vector length is determined by the transient state of the process, in which the rise time dominates the cost function; for the material studied, this yields a minimum controllable vector length of 3 mm. Moreover, the findings of this study highlight the importance of appropriate reference value assignment and of the process window restriction. Compensating for the overheating in the wedge geometries drove the laser power outside of the process window, which introduced lack-of-fusion defects while stabilizing the melt pool emissions at the given reference values. Future work is proposed to correlate the pyrometer signal with the microstructure of the part so that the controller can be tuned for different objectives. Implementing varying reference values for different regions of the part, to maintain the laser power inside the parameter window at the expense of allowing overheating, can also increase the robustness and applicability of emission-based closed-loop control applications.
http://arxiv.org/abs/2406.18881v1
20240627042346
A Wireless, Multicolor Fluorescence Image Sensor Implant for Real-Time Monitoring in Cancer Therapy
[ "Micah Roschelle", "Rozhan Rabbani", "Surin Gweon", "Rohan Kumar", "Alec Vercruysse", "Nam Woo Cho", "Matthew H. Spitzer", "Ali M. Niknejad", "Vladimir M. Stojanovic", "Mekhail Anwar" ]
physics.med-ph
[ "physics.med-ph", "cs.SY", "eess.SY" ]
A Wireless, Multicolor Fluorescence Image Sensor Implant for Real-Time Monitoring in Cancer Therapy Micah Roschelle*, Member, IEEE, Rozhan Rabbani*, Member, IEEE, Surin Gweon, Member, IEEE, Rohan Kumar,  Member, IEEE, Alec Vercruysse, Member, IEEE, Nam Woo Cho, Matthew H. Spitzer, Ali M. Niknejad, Fellow, IEEE, Vladimir M. Stojanović, Fellow, IEEE, Mekhail Anwar, Member, IEEE This work was supported by the Office of the Director and the National Institute of Dental and Craniofacial Research of the National Institutes of Health under Award DP2DE030713 and the John V. Carbone Jr. Pancreatic Cancer Research Memorial Fund. (Corresponding authors: Micah Roschelle and Mekhail Anwar.) *Equally contributing authors. Micah Roschelle, Rozhan Rabbani, Surin Gweon, Rohan Kumar, Alec Vercruysse, Ali Niknejad, and Vladimir Stojanović are with the Department of Electrical Engineering and Computer Sciences, University of California at Berkeley, Berkeley CA 94720 USA. (email: micah.roschelle@berkeley.edu) Nam Woo Cho is with the Department of Radiation Oncology and the Department of Otolaryngology-Head and Neck Surgery, University of California, San Francisco, CA 94158 USA. Matthew Spitzer is with the Department of Otolaryngology-Head and Neck Surgery and the Department of Microbiology and Immunology, University of California, San Francisco, CA 94158 USA. Mekhail Anwar is with the Department of Electrical Engineering and Computer Sciences, University of California at Berkeley, Berkeley, CA 94720 USA and also the Department of Radiation Oncology, University of California, San Francisco, CA 94158 USA. (email: mekhail@berkeley.edu, mekhail.anwar@ucsf.edu). July 1, 2024 =============================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================== § ABSTRACT Real-time monitoring of dynamic biological processes in the body is critical to understanding disease progression and treatment response. This data, for instance, can help address the lower than 50% response rates to cancer immunotherapy. 
However, current clinical imaging modalities lack the molecular contrast, resolution, and chronic usability for rapid and accurate response assessments. Here, we present a fully wireless image sensor featuring a 2.5×5 mm2 CMOS integrated circuit for multicolor fluorescence imaging deep in tissue. The sensor operates wirelessly via ultrasound (US) at 5 cm depth in oil, harvesting energy with 221 mW/cm2 incident US power density (31% of FDA limits) and backscattering data at 13 kbps with a bit error rate <10-6. In-situ fluorescence excitation is provided by micro-laser diodes controlled with a programmable on-chip driver. An optical frontend combining a multi-bandpass interference filter and a fiber optic plate provides >6 OD excitation blocking and enables three-color imaging for detecting multiple cell types. A 36×40-pixel array captures images with <125 µm resolution. We demonstrate wireless, dual-color fluorescence imaging of both effector and suppressor immune cells in ex vivo mouse tumor samples with and without immunotherapy. These results show promise for providing rapid insight into therapeutic response and resistance, guiding personalized medicine. Biomedical implant, fluorescence imaging, ultrasound energy harvesting, immunotherapy, personalized medicine. § INTRODUCTION WIRELESS , miniaturized, implantable sensors can monitor intricate biological processes unfolding in the body in real-time. Typically accessible only through highly invasive techniques, this data is crucial for advancing personalized medicine, tailoring treatments to individual responses to address the wide heterogeneity in therapeutic outcomes among patients. One meaningful application is monitoring tumor response to cancer immunotherapy, a promising treatment that unlocks the patient's own immune system to fight cancer. For instance, immune checkpoint inhibitors (ICIs), a class of immunotherapy, have been shown to nearly double patient survival rates in melanoma <cit.> and metastatic lung cancer <cit.> with a lower incidence of adverse effects compared to conventional treatments like chemotherapy <cit.>. While more than 40% of US cancer patients are estimated to be eligible for ICIs <cit.>, these therapies face a significant challenge: across most cancer types, less than 30% of patients respond to treatment <cit.>. For non-responders, time spent on ineffective therapies not only allows for their cancer to grow and spread, but also exposes them to unnecessary toxicity with high-grade adverse events rates often exceeding 10% <cit.> and financial burdens of more than $150,000 per year <cit.>. Rapid assessments of therapeutic response that also provide insight into the underlying mechanisms of resistance can help clinicians quickly identify non-responders and pivot to more effective second-line therapies to overcome resistance. However, such an assessment must capture the complex and dynamic interplay between various effector and suppressor immune cells and cancer that determines response <cit.>. Current clinical imaging falls short of this goal. Anatomical imaging modalities such as computed tomography (CT) and magnetic resonance imaging (MRI) capture changes in tumor size, which take months to manifest and do not reliably correlate with response <cit.>. These limitations are apparent in standard response criteria. 
For example, iRECIST defines a partial response as at least a 30% reduction in tumor dimensions with a minimum size of 1 cm and recommends confirmation of disease progression at long 4–8 week intervals <cit.>, <cit.>. Alternatively, positron emission tomography (PET) can image the underlying biology with molecular contrast <cit.>, but is fundamentally limited to imaging a single cell type or biomarker <cit.> at millimeter-scale resolution <cit.>. As the immune response depends on interactions between a variety of immune cells, it cannot be reliably predicted by a single biomarker <cit.>. Moreover, this millimeter-scale resolution averages out the spatial distributions of different cell populations within the tumor, shown to be increasingly important in understanding therapeutic resistance <cit.>. Fluorescence microscopy, on the other hand, provides multi-cellular resolution across multiple biomarkers, essential to visualizing a more complete picture of the immune response. In fluorescence microscopy, targeted cells are labeled with fluorescent dyes, or fluorophores, which absorb light near a specific wavelength and emit light at slightly longer wavelengths <cit.>. Multiple cell types can be imaged simultaneously by labeling each with a different color fluorophore. However, in vivo optical imaging is constrained by scattering in tissue which fundamentally limits the penetration depth of light in the body to a few millimeters, even at near-infrared (NIR) wavelengths where tissue absorption is minimal and scattering is reduced <cit.>. Therefore, chronic fluorescence imaging at depth requires implantable imagers with integrated light sources providing in-situ illumination. Fluorescence imagers can be miniaturized to the scale of a single chip by eliminating bulky lenses through contact imaging <cit.>. To this end, prior work has demonstrated on-chip or in-package integration of focusing optics <cit.> as well as fluorescence filters <cit.> and light sources <cit.>. However, these systems are wired, precluding long-term implantation without risk of infection. While a fluorescence sensor with wireless radio-frequency (RF) communication is presented in <cit.>, it uses a centimeter-scale battery for power and lacks wireless charging. Both wireless power transfer and communication are necessary for chronic use of these devices. Here we present a fully wireless, miniaturized fluorescence image sensor capable of three-color fluorescence imaging, aiming to enable real-time, chronic monitoring of cellular interactions deep in the body (Fig. <ref>). Wired connections and batteries are eliminated by power harvesting and bi-directional communication through ultrasound (US). Among wireless power transfer modalities such as near-field inductive coupling, RF, and optical, US offers low loss in tissue (0.5–1 dB/MHz/cm <cit.>), a high Food and Drug Administration (FDA) regulatory limit for power density (720 mW/cm2), and a short wavelength (3–4 mm in the PZT material at 1 MHz) enabling power transfer to millimeter-scale implants at centimeter-scale depths <cit.>. While significant progress toward a wireless fluorescence imaging system using US is presented in our prior work <cit.>, this system has several limitations. It incorporates a large (0.18 cm3) 1 mF off-chip capacitor for energy storage. It only operates at 2 cm depth, constraining its application to superficial tumors while exceeding FDA US safety limits by 26% due to high acoustic power requirements. 
Moreover, the sensor only images a single fluorescent channel, lacking the necessary hardware for multicolor imaging such as a wirelessly programmable laser driver to control multiple excitation lasers and a multi-bandpass optical filter. Additionally, due to in-pixel leakage during readout, the sensitivity of the imager when operating wirelessly is limited to high concentrations of fluorophores, rendering it insufficient for imaging biologically relevant samples. This work demonstrates a new system with significant improvements in performance and size, specifically designed for multicolor imaging. Our new system shows fully wireless operation at 5 cm depth in oil, requiring 221 mW/cm2 US power flux density (31% of FDA limits) for power harvesting and transmitting data with a bit error rate (BER) less than 10-6 through US backscatter. It powers three different-wavelength laser diodes programmed through US downlink and incorporates a multi-bandpass optical frontend expanding on the design in <cit.> to enable three-color fluorescence imaging. Moreover, we illustrate the application of our sensor in assessing response to cancer immunotherapy through multicolor fluorescence imaging of both effector and suppressor immune cells in ex vivo mice tumor samples with and without immunotherapy. Finally, a proof-of-concept mechanical assembly demonstrates a small form factor of 0.09 cm3. This article further explains and expands on the work presented in <cit.> and is organized as follows. Section II discusses the components and design specifications for a fully wireless, multicolor fluorescence imager. We describe the design and implementation of our system in Section III. Section IV presents system-level measurement results. We illustrate the application of our sensor with ex vivo imaging results in Section V. Finally, Section VI includes a comparison with the state of the art and the conclusion. § SYSTEM OVERVIEW Fig. <ref> shows a diagram and mechanical assembly of the full system on a flex PCB with all external components. The system consists of: 1) micro-laser diodes (µLDs) for in-situ illumination; 2) an optical frontend comprising of a fiber optic plate and a multi-bandpass interference filter for lens-less multicolor fluorescence imaging; 3) a piezoceramic as the US transceiver; 4) off-chip capacitors for energy storage; and 5) an ASIC to integrate all of this functionality. In this section, we will describe the design of the components in the system and derive design requirements for the ASIC. §.§ Multicolor Fluorescence Imaging Fig. <ref> illustrates the principle of multicolor fluorescence imaging. The fluorophores are first conjugated to a probe (Fig. <ref>(a)), such as an antibody, targeted toward a cell type of interest <cit.>. For in vivo imaging, the conjugated probe can be administered systemically through intravenous injection, binding only to targeted cells. Many organic fluorophores have low toxicity at doses relevant for imaging <cit.> and a number of fluorescent probes are FDA-approved or in clinical trials, including some using Fluorescein (FAM) and Cyanine5 (Cy5) <cit.>, the fluorophores in our ex vivo studies. Once injected, the half-life of antibody-based probes is days to weeks <cit.> and free-floating unbound probes are cleared through the liver and kidneys in 1–7 days <cit.>. After labeling the cells, the fluorophores are excited near their absorption peak (λEX) and emit light at a slightly longer wavelength with a peak at λEM (Fig. <ref>(b) and (c)). 
For organic fluorophores, the difference between the absorption and emission peaks, or Stokes shift, is 10–30 nm (26 nm for FAM and 18 nm for Cy5). Moreover, due to the small absorption cross-section of the fluorophores relative to the illuminated field of view (FoV), the excitation light is often 4 to 6 orders of magnitude stronger than the emission light. Thus, in order to detect the weak fluorescence signal, an optical filter with an optical density (OD) ≥ 6 is required to attenuate out-of-band excitation light that would otherwise saturate the sensor. Avoiding a filter altogether through time-gated imaging <cit.>—where excitation and imaging are separated in the time domain—leads to inadequate excitation rejection and low signal intensities with typical organic fluorophores, which have fluorescence lifetimes less than 10 ns <cit.>. Moreover, background subtraction in the electrical domain <cit.> adds additional noise sources and is challenging in vivo as the excitation background is dependent on tissue scattering. For multicolor imaging, a variety of organic fluorophores are available with absorption and emission wavelengths spanning the visible and NIR spectrum <cit.>. Their narrow absorption and emission spectra allow for multiplexed imaging using a monochrome sensor by taking a separate image at each excitation wavelength. Therefore, multicolor fluorescence imaging requires multiple excitation sources and a multi-bandpass filter to block all excitation wavelengths while passing fluorescence emissions. §.§ Light Sources For fluorescence excitation, we use µLDs with wavelengths of 650 nm (250×300×100 µm3, CHIP-650-P5, Roithner LaserTechnik GmbH) and 455 nm (120×300×90 µm3, LS0512HBE1, Light Avenue). A third 785 nm laser diode (L785P5, ThorLabs) in a TO-can package is used for proof-of-principle three-color fluorescence imaging and will be replaced by a µLD in the future. Laser diodes are chosen instead of LEDs which have broader spectral bandwidths that can overlap with fluorescence emissions. These out-of-band emissions necessitate excitation filters on the LEDs that complicate sensor design and waste optical power output <cit.>. Fig. <ref>(a) and (b) show the measured power-current-voltage (PIV) curves for all three lasers and their calculated wall-plug efficiencies (P_Optical/P_Electrical), respectively. The lasers have different forward voltages: 2 V for the 650 nm and 785 nm lasers and 4.5 V for the 455 nm laser. Because of their several-mA threshold currents, the lasers operate most efficiently near their maximum current ratings. These characteristics motivate the design of a laser driver with programmable current that is tolerant of a wide range of forward voltages. §.§ Optical Frontend Design The optical frontend design builds on our prior work <cit.> and consists of a multi-bandpass interference filter and a low-numerical-aperture fiber optic plate (FOP). Interference filters offer more-ideal filter characteristics than absorption filters <cit.> or CMOS metal filters <cit.>, which do not allow for optimal excitation and imaging of organic fluorophores due to their gradual cutoff transitions, weak out-of-band attenuation, and significant passband losses. Hybrid filters combining interference and absorption filters <cit.> retain the poor passband characteristics of absorption filters. Another major advantage of interference filters is their ability to support multiple passbands across the visible and NIR spectra for multicolor imaging. 
In contrast, demonstrated dual-color fluorescence sensors with absorption or CMOS filters rely on dedicated pixels for each color <cit.>, reducing the sensor sensitivity and resolution. However, interference filters are sensitive to angle of incidence (AOI) <cit.>. At increasing AOIs, the filter passbands shift towards shorter wavelengths, eventually transmitting the excitation light. This property is problematic for lensless imaging where the AOI is not precisely controlled and the excitation light is often angled between the sensor and the tissue above it. To mitigate this effect, the FOP acts as an angle filter, blocking off-axis excitation light that would otherwise pass through the filter. The FOP also improves resolution by eliminating divergent fluorescent emissions that contribute to blur, albeit at the cost of reducing the overall collected signal. Here, we expand the dual-bandpass design in <cit.> to three-color fluorescence imaging with a new interference filter. Fig. <ref>(a) shows the normal incidence (AOI=0°) transmittance spectra of the filter (ZET488/647/780+800lpm, Chroma Technologies Corp) which has three passbands with greater than 93% average transmittance. The first two bands pass the emissions of FAM and Cy5, the fluorophores used in our ex vivo imaging studies. The 800 nm band, added in this work, provides another fluorescence channel in the NIR-I window (700–900 nm), a preferred region for in vivo imaging where tissue scattering, absorption, and autofluorescence are minimal compared to the visible spectrum (400–700 nm) <cit.>. At normal incidence, the filter provides sufficient blocking of the lasers: more than 6 OD attenuation at both 450 nm and 650 nm as well as more than 5 OD attenuation at 785 nm. The 500 µm-thick FOP (LNP121011, Shenzhen Laser, LTD) consists of a matrix of 10 µm optical fibers embedded in black, absorptive glass. It has a normal incidence transmittance of 35% and a full-width at half maximum (FWHM) of 10° at 455 nm, which both reduce at longer wavelengths as shown in Fig. <ref>(b). The angular transmittance measurements in Fig. <ref>(c) show that beyond an AOI of 35° the FOP provides more than 6 OD attenuation of all three lasers. Fig. <ref>(d) shows the transmittance through the filter with and without the FOP across different AOIs measured at the excitation wavelengths using collimated, fiber-coupled lasers. The filter attenuation at AOI=0° is different from that in Fig. <ref>(a) due to out-of-band emissions from the lasers. While the filter blocks the excitation lasers near 0°, the laser transmittance rapidly increases beyond AOIs of 20° for 650 nm and 785 nm and 60° for 455 nm. However, with the FOP, the optical frontend provides more than 6 OD of attenuation of all excitation lasers at AOIs greater than 5°. The maximum measured attenuation is limited by the sensitivity of the power meter (PM100D with S120C Photodiode, Thorlabs) used for this measurement. For fabrication, the interference filter is directly deposited on the FOP, resulting in a total thickness of approximately 510 µm. The optical frontend is fixed to the chip using optically transparent epoxy (SYLGARD 184, Dow Chemicals). The filter is placed in between the chip and the FOP to ensure that it blocks any excitation light scattered through the FOP <cit.>. §.§ Ultrasound Link We use a 1.5×1.5×1.5 mm3 piezoceramic (lead zirconate titanate) as the US transceiver for wireless power transfer and bi-directional communication. 
The thickness of the piezo is directly proportional to the harvested voltage and inversely proportional to the operation frequency <cit.>. Therefore, we chose a thickness of 1.5 mm to balance minimizing the overall size of the piezo with the need for harvesting a high enough voltage to drive the lasers while operating at a lower frequency with less tissue attenuation. An aspect ratio of one is selected as a compromise between volumetric efficiency and backscattering amplitude, as outlined in <cit.>. The piezo is mounted on a flex PCB for testing (Fig. <ref>(a)). On the backside of the piezo, an air gap is created by covering a through-hole via with a 3D-printed lid. The air gap reduces the acoustic impedance of the backside medium from 1.34 MRayl in canola oil to 0 MRayl in air, decreasing the electrical impedance of the piezo to improve the power transfer efficiency <cit.>. Fig. <ref>(b) shows the impedance spectrum of the piezo measured within canola oil. Canola oil has 0.075 dB/cm acoustic attenuation at 920 kHz and 1.34 MRayl acoustic impedance <cit.>, similar to the impedance (1.4–1.67 MRayl) of tissue <cit.>. The series and parallel resonance frequencies of the piezo occur at fS=894 kHz and fP=960 kHz, respectively. Fig. <ref>(c) shows the normalized harvested voltage across frequency when the piezo is in the open-circuit condition and when it is loaded with the chip (see Section IV.A for the setup). While operating near fS minimizes the impedance, the open-circuit voltage is maximized near fP. Therefore, the maximum harvested voltage with the chip occurs between fS and fP, at 920 kHz. §.§ System Design Considerations To derive the required harvested energy per image for sizing the storage capacitor, we estimate the signal detected by a pixel from Cy5-labeled CD8+ T-cells, a type of immune cell imaged in our ex vivo studies. The total emitted optical power, P_CELLS, from C fluorescently labeled cells as a function of the input excitation intensity, I_IN, is given by P_CELLS = C · N_FL · σ · QY · I_IN. N_FL is the number of fluorophores bound to each cell. Typically, between 0.5–2.1×10^6 CD8+ antibodies bind to a single CD8+ T-cell <cit.>, with each antibody containing 2–8 fluorophores <cit.>. σ and QY are the absorption cross-section and quantum yield of the fluorophore, respectively (9.55×10^-16 cm2 and 20% for Cy5 <cit.>). We assume that a single pixel (with 55 µm pitch in our design) subtends a FoV containing C=100 T-cells, considering that a T-cell is 5–10 µm in diameter <cit.>. Assuming that the 650 nm µLD uniformly illuminates the FoV of our sensor (2×2.2 mm2) and outputs 10 mW of optical power at ILD=20 mA bias (see Fig. 4), I_IN is approximately 223 mW/cm2. Therefore, the estimated total fluorescence signal is 20 nW. This signal can be converted to the expected photodiode current, I_PH, according to I_PH = P_CELLS · A_PIXEL/(4π z_DIST^2) · (1-L_FOP) · R. This equation accounts for both the spreading loss over the z_DIST ≈ 500 µm distance to the pixel with area A_PIXEL (44×44 µm2 in our design) and the insertion loss of the FOP, L_FOP (75% at 650 nm). Given that the pixel has a responsivity, R, of 0.21 A/W at 650 nm, we expect I_PH on the order of 6.3 fA. In the capacitive trans-impedance amplifier (CTIA)-based pixel architecture reused from <cit.>, the photocurrent is sensed by integrating it on a capacitor, C_INT, during the exposure time, T_EXP, resulting in a pixel output voltage of V_PIXEL = I_PH · T_EXP/C_INT.
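For concreteness, the short sketch below evaluates this pixel-output expression with the photocurrent estimated above (I_PH ≈ 6.3 fA) and the integration capacitor of the CTIA pixel used in this design (C_INT = 11 fF, given later in Section III). It is only an illustrative calculation; the exposure times are representative values, not design constants.

```python
# Evaluate V_PIXEL = I_PH * T_EXP / C_INT for the link-budget estimate above.
# I_PH is the photocurrent quoted in the text; C_INT = 11 fF is the CTIA
# integration capacitor described later in Section III.

I_PH = 6.3e-15      # A, expected fluorescence photocurrent per pixel
C_INT = 11e-15      # F, CTIA integration capacitor

for T_EXP in (8e-3, 16e-3, 96e-3):          # s, representative exposure times
    V_PIXEL = I_PH * T_EXP / C_INT
    print(f"T_EXP = {T_EXP * 1e3:5.1f} ms -> V_PIXEL = {V_PIXEL * 1e3:5.2f} mV")

# Millivolt-level outputs at short exposures are comparable to the measured
# pixel noise, which motivates the SNR and averaging analysis that follows.
```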
Sensing the fluorescence signal relies on V_PIXEL exceeding the noise floor, characterized by the signal-to-noise ratio (SNR). Generally, SNR can be improved by increasing the total imaging time, either through a longer exposure time, T_EXP, or by averaging multiple images. Following the derivation in <cit.>, the SNR at the output of a CTIA-based pixel when averaging n images, each with an exposure time of T_EXP/n, is given by SNR(n, T_EXP/n) = signal/noise = (I_PH · T_EXP/C_INT) / √((T_EXP/C_INT^2) · 2 q_e i_D + n v_NR^2). This equation enables study of the SNR tradeoff between (1) taking a single exposure of T_EXP (n=1) and (2) averaging n images with exposures of T_EXP/n. The noise has two components: readout noise, v_NR^2, and shot noise from the photocurrent and dark current, i_D=I_PH+I_DARK. q_e is the charge of an electron. The factor of n only appears in the readout noise term. Therefore, if shot noise is the dominant source of noise, for small n, both (1) and (2) result in the same SNR. However, with increasing n and lower exposure time per frame, readout noise dominates the overall noise of the averaged image, necessitating a greater number of averages to maintain the same SNR as a single exposure. Using the estimated I_PH and the measured noise values reported in Section IV, we calculate that without averaging, a T_EXP of 98 ms is required to achieve an SNR of 20 dB (10×). This result corresponds to a minimum required energy (I_LD · V_LD · T_EXP) of 4.16 mJ per image. Delivering I_LD=20 mA from the incident US signal, given a piezo impedance of 5.4 kΩ at 920 kHz, requires an open-circuit voltage of at least 108 V, which is not practical within FDA limits. Therefore, harvested energy must first be stored on a capacitor to later supply the lasers when taking an image. The size of the storage capacitor, C_STORE, is determined by C_STORE = I_LD · T_EXP/Δ V_CSTORE in order to supply I_LD for the duration of T_EXP. Δ V_CSTORE is the voltage drop on the capacitor during T_EXP. Maximizing Δ V_CSTORE results in a smaller capacitor size, but is limited by the maximum harvested voltage and the minimum supply requirements for operating the chip or laser. Assuming Δ V_CSTORE=3 V results in a capacitor size of 650 µF. Capacitors of this size are large physical components, increasing implant volume as in <cit.>. Therefore, the capacitor size can be minimized by reducing the required energy per image through the averaging strategy discussed previously. Fig. <ref>(a) compares the SNR of a pixel with different levels of averaging. The signal is the estimated photocurrent from the above analysis (6.3 fA) and the noise is measured with the sensor from dark images (see Fig. <ref>(c)). Each data point on the black curve represents an exposure time of T_EXP,i and a number of averages n_i such that the total exposure time, n_i · T_EXP,i = 96 ms, stays constant. As T_EXP,i decreases (and n_i increases), readout noise dominates the pixel output noise (because shot noise decreases with lower T_EXP,i), requiring additional averages to achieve the same SNR as a single exposure. The orange curve in Fig. <ref>(a) shows the increased number of averages, x_i > n_i, required to reach an SNR (shown in blue) within 90% of the initial SNR for T_EXP=96 ms. Therefore, using averaging to decrease the exposure time of individual frames increases the overall imaging time to greater than 96 ms. As shown in Fig. <ref>(b), the capacitor size decreases linearly with lower T_EXP,i, ranging from 640 µF for T_EXP,i=96 ms to 50 µF for T_EXP,i=8 ms.
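The capacitor-sizing relation above can be checked numerically. The sketch below uses the representative values quoted in the text (I_LD = 20 mA, ΔV_CSTORE = 3 V) and reproduces the 640 µF-to-50 µF trend; it is a back-of-the-envelope aid rather than part of the design flow.

```python
# Evaluate C_STORE = I_LD * T_EXP / dV_CSTORE for a few per-frame exposure
# times, using the representative values quoted above.

I_LD = 20e-3          # A, laser drive current
dV_CSTORE = 3.0       # V, allowed voltage droop on the storage capacitor

for T_EXP in (96e-3, 16e-3, 8e-3):          # s, per-frame exposure time
    C_STORE = I_LD * T_EXP / dV_CSTORE
    print(f"T_EXP = {T_EXP * 1e3:4.0f} ms -> C_STORE = {C_STORE * 1e6:5.0f} uF")

# Shorter exposures with frame averaging shrink the required capacitor
# (~640 uF at 96 ms down to ~50 uF at 8 ms), at the cost of extra averages
# and therefore a longer total imaging time, as discussed above.
```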
Charging such a capacitor through US takes several seconds to minutes, dominating the frame time (see Section IV.B). Thus, for small exposure times, the additional required averages can significantly increase the total imaging time. The total imaging time must be less than several minutes to capture the motion of immune cells, which have mean velocities of 10 µm/min in the tumor microenvironment <cit.>. Following these guidelines, we chose an 0805 100 µF tantalum capacitor for CSTORE with a size of 2×1.25×0.9 mm3 (0.002 cm3). This capacitor can supply 20 mA of laser current for TEXP=16 ms while dropping its voltage by 3 V. Averaging is employed to enhance SNR to levels comparable to those achieved by longer exposure times. We use a tantalum capacitor as opposed to a ceramic capacitor, which can lose up to 40–80% of its initial capacitance as the DC bias voltage increases and reduces the dielectric permittivity <cit.>. § SYSTEM DESIGN AND IMPLEMENTATION Fig. <ref> shows the system block diagram of the ASIC with external connections to the piezo, off-chip storage capacitors, and µLDs. The ASIC has 4 main subsystems: (1) power management unit (PMU), (2) digital control, (3) laser driver, and (4) imaging frontend with readout. The PMU consists of an active rectifier for AC-DC conversion of the piezo signal and a charge pump for generating an up to 6 V supply for driving the lasers. Harvested energy is stored on two off-chip capacitors, CVCP=10 µF and CSTORE=100 µF, to separate the power supplies of the lasers from the rest of the sensor throughout its operation. A PTAT develops current and voltage references and several low dropout voltage regulators (LDOs) generate stable DC power supplies for the chip. The sensor is programmed and controlled through a finite state machine (FSM) with 6 states of operation: charging up the storage capacitors (Charge-Up); programming the image sensor and laser driver parameters through US downlink (Set TEXP and Set LD); taking an image (Imaging); digitizing and storing the image (Readout); and wirelessly transmitting the data via US backscatter (Backscattering). To take an image, the laser driver, configured during downlink, supplies a µLD using energy stored in CSTORE. The image is captured on a 36×40-pixel array. During Readout, the pixel data is digitized by 4 parallel ADCs to be saved in the memory. Finally, image data is transmitted by modulating the reflected amplitude of incident US pulses with the SMOD switch. The design and operation of the subsystems are described in detail below. §.§ Power Management Unit Fig. <ref> shows the schematic of the active rectifier and charge pump. The active rectifier converts the harvested AC signal on the piezo to a 3 V DC voltage (VRECT), which is stabilized by a 4.7 nF off-chip capacitor. VRECT is then multiplied by 1.83× to a 5.5 V supply (VCP) with the cross-coupled charge pump. The cross-coupled topology is chosen for its high power conversion efficiency for an optimized input range <cit.>. Compared to a rectifier-only architecture used in <cit.>, the charge pump reduces the required harvested AC voltage on the piezo (VPIEZO) to achieve an output voltage (VCP) of 5.5 V by 1.7×, which results in a 3× lower acoustic power density requirement. Acoustic power density is a square function of acoustic pressure, which is linearly proportional to the harvested AC voltage. Therefore, lowering the required harvested piezo voltage reduces the acoustic power density to ensure operation within FDA safety limits. 
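The square-law scaling invoked above can be made explicit with a one-line check; the 1.7× figure is the one quoted in the text.

```python
# Acoustic intensity scales with the square of acoustic pressure, and pressure
# scales linearly with the harvested AC voltage on the piezo. The charge pump
# therefore turns a 1.7x reduction in required piezo voltage into roughly a
# 3x relaxation of the required acoustic power density.

voltage_reduction = 1.7                       # factor quoted in the text
intensity_reduction = voltage_reduction ** 2
print(f"~{intensity_reduction:.1f}x lower acoustic power density required")
```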
However, with this architecture, the overall charging time increases due to the energy loss from the charge pump. During Charge-Up, CVCP and CSTORE are connected through the CSTORE switch and are charged through the PMU. CSTORE stores energy for the lasers and imager array and a smaller CVCP stores energy for the readout and digital control. Following manufacturer guidelines, the external US transducer is duty-cycled for reduced average power dissipation to prevent damage to it from overheating while providing enough US power density to achieve sufficient harvested voltage on the sensor. To minimize power consumption during Charge-Up, the laser driver, pixel array, readout circuits and memory are switched off. A diode-based voltage clamp prevents charging beyond 6 V to protect the devices from overvoltage. Five LDOs (Fig. S1) regulate the harvested voltage into stable DC power supplies and are compensated with off-chip 0201 surface mount capacitors (10–200 nF). They generate reference voltages of 0.5 V and 2.1 V for the ADCs, separate 1.8 V power supplies for the digital control and for the pixel array and laser driver biasing, and a 3.3 V supply for the readout. A PTAT circuit generates a 200 nA reference current, IREF, and 1 V and 0.5 V references to bias the chip. The PTAT, with schematic shown in Fig. <ref>, uses a constant-gm topology to minimize the dependence on threshold voltage process variation. A PMOS core (M1–M4) avoids the body effect as deep N-well transistors were not available in the process. The diode-based start-up circuit (D1–D3) prevents zero current operation. To ensure that generated references are stable across the large voltage drop on VCP from 5.5 to 3.5 V, cascode current mirrors with high output impedance are used throughout the design. The voltage references are buffered and are generated by mirroring IREF (M3, M4, M9, M10) through resistors R4 and R5. §.§ Digital Control The chip operates according to the system timing diagram shown in Fig. <ref>. When VCP reaches 3.9 V, ensuring stable operation of the chip, a power-on reset (POR, Fig. S2) circuit initializes the FSM. The FSM is synchronized to the external US transducer by on-off-key modulation of the US envelope, which is demodulated by a watchdog circuit. The schematic of the watchdog circuit is shown in Fig. <ref>. A latched-based control eliminates glitches in detecting the presence of the US pulses within 3 µs of the initial rising edge. The unwanted transitions result from insufficient drive strength of the AC inputs to transistors M1 and M2 during the gradual ramp-up of the US pulse. To relay timing information to the FSM, the clock is extracted from the US carrier frequency (920 kHz). An US pulse longer than 1 ms indicates the end of the Charge-Up state. At this moment, the CSTORE switch is opened to isolate the storage capacitors, allowing VCSTORE to drop to a minimum of 2.5 V during Imaging while maintaining VCP above 3.5 V for the 3.3 V readout circuits. This approach allows for maximum energy usage from CSTORE, resulting in a 33% smaller required capacitance assuming a 5.5 V Charge-Up voltage. After Charge-Up, the ASIC is programmed during the Set TEXP and Set LD states. As shown in Fig. <ref>, the transmitted downlink data is decoded through time-to-digital conversion of the US pulse widths. In each state, 4 LSBs are discarded to account for timing variations in the watchdog signal. In Set TEXP, the exposure time, TEXP, is set through the 5 MSBs and is programmable from 0–248 ms with LSB=8 ms. 
The next 2 bits set the pixel reset time, TRST, which can be 100, 200, 500, or 1000 µs. In Set LD, 3 MSBs set the 1-hot encoded laser channel and the next 5 bits determine the laser current, ILD. On the falling edge of the watchdog after Set LD, the laser driver and the pixel array bias circuits are turned on to prepare for Imaging. §.§ Laser Driver Fig. <ref> shows the schematic of the 3-channel laser driver with programmable output current. To minimize the change in driver current, ILD, across the large voltage drop on VCSTORE (5.5–2.5 V), the driver must have high output impedance. Therefore, a gain-boosted cascode current source topology is used, in which the output impedance of the current source (M8–M15) is multiplied by the 65 dB gain of the cascoded boost amplifier (M4–M7). A 5-bit current DAC (M11–M15) enables a programmable output current from 0–115 mA with a 3.9 mA LSB. While the µLDs in this work operate under 40 mA (see Fig. <ref>), this range accommodates a variety of commercial µLDs with threshold currents up to 100 mA for future applications. Since only one laser is turned on at a time, the same driver circuitry is used for all three lasers. Thus, the cascode transistors select between the laser channels. For maximum output swing, Vx is set by a level-shifting diode, M3, to bias M11–M15 at the edge of triode. A headroom of at least 400 mV is required at the drains of M8–M10 (VLD) to ensure operation in saturation. §.§ Imaging Frontend and Readout The imaging frontend is similar to that presented in <cit.>, but without the angle selective gratings as image deblurring is now provided by the FOP. The image sensor consists of a 36×40 array of pixels with a 44×44 µm2 Nwell/Psub photodiode and a 55 µm pitch, covering a 2×2.2 mm2 FoV. The pixel architecture, shown in Fig. <ref>(a), is based on a CTIA with CINT=11 fF. To reduce low-frequency noise, reset switch sampling noise, and pixel offset, a correlated double-sampling scheme is implemented with the following pixel timing (illustrated in Fig. <ref>(b)). First, the voltage on CINT is set to zero during the initial reset phase, TRST, with timing configured in the Set TEXP state. For the exposure time, TEXP, the photocurrent is integrated on CINT generating the pixel output voltage, V_OUT=V_0+I_PDT_EXP/C_INT, which is sampled on reset (CR) and signal (CS) sampling capacitors after intervals of 100 µs and T_EXP+100 µs, respectively. The final pixel value (VPIXEL) is the difference between the signal (VS) and reset (VR) values. After Imaging, the analog pixel values are digitized and stored in memory during the Readout state. Readout duration is set to limit the leakage on the in-pixel sampling capacitors to less than an LSB. Therefore, the readout is performed in parallel across 4 channels each spanning 10-pixel columns. Each channel consists of an 8-bit differential SAR ADC (Fig. S3) driven by a buffer. The ADC has a dynamic range of 500 mV with an LSB of 1.95 mV, which is below the pixel readout noise (see Section III.E). The readout circuits operate on a 3.3 V supply to ensure sufficient headroom considering that the in-pixel source followers level-shift the sampled pixel voltages up by 1 V. Thus, the size of CVCP is chosen to maintain VCP above 3.5 V throughout this state. The signal (VS) and reset (VR) pixel values are subtracted by the differential ADCs, and the digitized pixel values are stored immediately after conversion in a 11.52 kb latched-based memory. 
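To make the readout numbers concrete, the sketch below reproduces the ADC resolution and shows how a correlated-double-sampled pixel value would map to an output code. The quantization model is a simplification; the actual differential SAR ADC and its reference levels are as described above.

```python
# Simplified view of the readout quantization: the CDS pixel value
# V_PIXEL = V_S - V_R is digitized by an 8-bit ADC with a 500 mV range.

ADC_BITS = 8
ADC_RANGE = 0.5                          # V, ADC dynamic range
LSB = ADC_RANGE / 2**ADC_BITS            # 1.95 mV, below the pixel readout noise

def digitize(v_signal: float, v_reset: float) -> int:
    """Quantize the correlated-double-sampled pixel value V_S - V_R."""
    v_pixel = v_signal - v_reset
    return max(0, min(2**ADC_BITS - 1, round(v_pixel / LSB)))

print(f"LSB = {LSB * 1e3:.2f} mV")
print(f"code for a 56 mV pixel signal: {digitize(0.056, 0.0)}")
```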
Unlike the work in [32], this design enables a short Readout time of 5.4 ms, which is not limited by the longer Backscattering state (890 ms at 5 cm depth) that increases with depth due to the longer time of flight of the acoustic waves. §.§ Data Transmission During Backscattering, the memory is read serially (ΘMOD in Fig. <ref>) and transmitted by modulating the amplitude of the reflected (backscattered) US pulses using a switch (SMOD in Fig. <ref>). The uplink communication protocol is shown in the timing diagram in Fig. <ref>. The transmitted data for each pixel comprises a 9-bit packet containing a header (set to 0) followed by 8 data bits. The header pulse allows for a one-pulse delay to make sure memory is read and loaded into the serializer before data transmission. Additionally, the header is set to a known value of zero to help identify the backscattered bit values. The external transducer generates a sequence of pulses each spanning a few cycles of the US carrier for the header and 8 individual bits. After a time of flight (ToF=33 µs for 5 cm depth) the acoustic pulses reach the piezo and reflect with an amplitude proportional to the reflection coefficient of the piezo, Γ. Γ is dependent on the electrical impedance loading the piezo, R_LOAD and, therefore, can be controlled through the SMOD switch. Near the parallel resonance frequency of the piezo, Γ∝ R_PIEZO/(R_LOAD+R_PIEZO), where R_PIEZO is the equivalent resistance of the piezo <cit.>. The SMOD switch impedance can be configured (hard-coded) by 2 bits to account for different R_PIEZO values. After a second ToF, the backscattered signal is received by the external transducer and is demodulated to reconstruct the image. To avoid overlap of high voltage Tx and low voltage reflected Rx pulses, the external transducer transmits 2 bits within 2 ToFs and listens for the next 2 ToFs as shown in Fig. <ref>. § MEASUREMENT RESULTS Fig. <ref>(a) shows the die photo of the chip. The ASIC measures 2.5×5 mm2 and is fabricated in a TSMC 180 nm high-voltage (1.8/5/32 V) LDMOS CMOS process. 1.8 V transistors are used for the digital, pixel, and laser driver, and 5 V devices are used for the PMU and pixel readout. Fig. <ref>(b) shows the power breakdown for the chip where the laser driver dominates the power consumption. This section presents system-level measurement results for the US wireless link, laser driver, and imaging frontend. §.§ Measurement Setup Fig. <ref> shows the measurement setup for demonstrating fully wireless operation of the chip. In the acoustic setup, the piezo is submerged at 5 cm depth in a tank of canola oil. An external focused transducer (V314-SU-F1.90IN-PTF, Evident Scientific) at the surface of the tank transmits US signals to the piezo. To minimize interference from US reflections on data uplink, an acoustic absorber (Aptflex F28P, Precision Acoustics) is placed at the bottom of the tank. An FPGA (Opal Kelly, XEM7010) generates the desired US pulse sequence (as in Fig. <ref>) to control the chip. The timing of the pulse sequence is programmed through a custom user interface that interfaces with the FPGA. The generated waveforms are sent to a high-voltage transducer pulser board (Max14808, Maxim Integrated) to drive the external transducer accordingly. The chip is directly connected with wires to the piezo for wireless power harvesting and data transfer via US. It is located inside a black box to reduce the background signal from ambient light during imaging. 
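As a sanity check on the uplink protocol described in the Data Transmission subsection above, the following back-of-the-envelope estimate relates depth, time of flight, and frame time. A sound speed of roughly 1500 m/s is assumed; the result is consistent with the ~890 ms Backscattering state and ~13 kbps data rate reported in the measurement results.

```python
# Estimate the uplink frame time and data rate for the backscatter protocol:
# one 9-bit packet (header + 8 data bits) per pixel, with 2 bits sent per
# 2 time-of-flight (ToF) intervals followed by 2 ToF of listening.

C_SOUND = 1500.0                   # m/s, assumed speed of sound in oil/tissue
DEPTH = 0.05                       # m, implant depth
PIXELS = 36 * 40
BITS_PER_PACKET = 9

tof = DEPTH / C_SOUND                              # ~33 us one-way
time_per_bit = 2 * tof                             # 4 ToF used per 2 bits
frame_time = PIXELS * BITS_PER_PACKET * time_per_bit
data_rate = PIXELS * 8 / frame_time                # payload bits per second

print(f"ToF ~ {tof * 1e6:.0f} us, frame ~ {frame_time * 1e3:.0f} ms, "
      f"rate ~ {data_rate / 1e3:.1f} kbps")
```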
Slide-mounted samples are placed directly on top of the chip. The chip drives the µLDs, mounted on separate PCBs, to transilluminate the sample from above. Admittedly, in vivo, the sample must be epi-illuminated between the sensor and the tissue. Epi-illumination can be accomplished in the future by directing the laser light through a glass separator or light guide plate placed on top of the sensor <cit.>. After taking an image, the backscattered US pulses are received by the external transducer and captured on an oscilloscope for processing and demodulation. To remove the pixel-to-pixel DC offsets due to the photodiode dark current and mismatch in the readout circuitry, a dark image with the same integration time but with the laser off is subtracted from the final fluorescence image. The dark image is averaged to minimize its noise contribution. §.§ Ultrasound Wireless Power Transfer Fig. <ref>(a) shows the measured PMU waveforms (VPIEZO+, VRECT, VCP, VCSTORE), verifying wireless operation of the full system at 5 cm depth. In this measurement, the system operates with an US power density of 221 mW/cm2, which falls within 31% of FDA safety limits. Under this minimum required acoustic power condition, VCP charges to 5.5 V in 50 s for the initial image. The charging time decreases to 35 s for consecutive frames with a nonzero initial VCP. The Charge-Up time can be further reduced by increasing US power intensity, operating closer to the FDA limits. The output voltages of the rectifier (VRECT) and charge pump ((VCP)) across different input voltages (VPIEZO+) show a minimum VPIEZO+=2.42 V is required for stable operation of the chip (Fig. S4). Measured PMU waveforms during the Imaging and Readout states are presented in Fig. <ref>(b). During Imaging (TEXP=8 ms), VCSTORE drops from 5.5–2.5 V while supplying the laser with ILD=37.5 mA from the energy stored in CSTORE. VCP remains at 5.5 V throughout Imaging and drops to 3.5 V during Readout. Fig. <ref>(c) shows the measured waveforms while transmitting a single pixel data packet via US backscattering. VPIEZO+ is modulated according to the serial output of the memory (ΘMOD) and the backscattered pulses are received by the external transducer (VBACKSCATTER in Fig. <ref>(c)). The one bits correspond to a smaller load impedance, but appear larger in amplitude than the zero bits because the piezo is operated between series and parallel resonance frequencies for maximum voltage harvesting. Fig. <ref>(a) shows the total acoustic power and acoustic power density (ISPTA) incident on the piezo surface area at 5 cm depth for transverse offsets along the X or Y axis. Fig. <ref>(b) shows a similar measurement as the depth is adjusted along the Z axis. The acoustic power density is measured with a hydrophone (HGL-1000, Onda) and it is integrated over the piezo area to measure the available acoustic power at the piezo surface. The reported spatial-peak time-average intensity (ISPTA) of the acoustic field is the relevant parameter in calculating FDA safety limits for diagnostic US <cit.>. For both transverse and depth offsets, the power decreases as the piezo moves away from the focal point (near 5 cm depth) of the external transducer. The measured transverse and axial FWHMs for ISPTA are 4.5 mm and 60 mm, respectively. In the future, misalignment loss can be reduced through dynamic focusing of the US with beam forming <cit.>. It should be noted that angular misalignment of the piezo with respect to the US beam will also reduce the harvested power <cit.>. 
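For reference, the safety margin quoted earlier in this subsection follows directly from the commonly cited FDA diagnostic-ultrasound limit of I_SPTA = 720 mW/cm2 (derated); the one-line check below is only a cross-check of that ratio.

```python
# Relate the measured acoustic intensity to the FDA diagnostic ultrasound
# I_SPTA limit of 720 mW/cm^2 (derated).

I_SPTA_MEASURED = 221.0   # mW/cm^2, intensity used for fully wireless operation
I_SPTA_FDA = 720.0        # mW/cm^2, FDA diagnostic ultrasound limit

print(f"{100 * I_SPTA_MEASURED / I_SPTA_FDA:.0f}% of the FDA limit")   # ~31%
```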
While charging VCP from 0–5.5 V, the overall electrical energy efficiency of the PMU is 12.7%. The efficiency of the system in converting the available acoustic energy on the face of the piezo to the electrical output energy of the PMU is 3.3%. The output energy of the PMU is calculated by measuring the energy stored in the CSTORE and CVCP and the total energy consumption of the ASIC during Charge-Up. The input acoustic energy is calculated by integrating the measured acoustic power density at the surface of the piezo (Fig. <ref>(a)) throughout this same period. §.§ Ultrasound Data Uplink At 5 cm depth, transmission of one image (11.52 kb) takes 890 ms, resulting in a data rate of 13 kbps. The received backscattered waveform is processed and demodulated to reconstruct the image as follows. First, the signal is bandpass-filtered at the carrier frequency, windowed to select the bit intervals, and then reconstructed with sinc interpolation. The peak-to-peak amplitude is then measured for each pulse and compared with a predetermined threshold to predict the bit value. The serial output of the chip serves as the ground truth. Fig. <ref> shows a histogram of the backscattered signal amplitude for each bit normalized to the threshold amplitude, demonstrating a clear separation between one and zero bits. The measurement shows robust error-free transmission of 90 frames, including a combination of dark frames and images taken with the 650 nm and 455 nm lasers. The bimodal nature of the histogram results from combining data across different imaging conditions and differing interference from the high voltage pulsing of the external transducer on the two pulses received within each interval of 2 ToFs. The device achieves a BER better than 10-6 (0 out of 1,036,800 bits) with an average modulation index of 5.6%. §.§ Laser Driver Fig. <ref> shows measurements of the laser driver and PTAT. The output current of the laser driver (ILD) is measured with a precision measurement unit (B2912A, Keysight). Fig. <ref>(a) shows the measured ILD across all DAC codes and Fig. <ref>(b) shows the percent change in ILD as the output voltage of the laser driver, VLD, drops from 3.5–0.4 V. This range corresponds to the VLD for a 5.5–3.5 V drop on VCP accounting for the 2 V forward bias voltage of the 650 nm µLD. For DAC=5 (ILD=20 mA), there is less than 1% variation across the 3.1 V drop, corresponding to 1.3% variation in optical power output of the 650 nm µLD. Fig. <ref>(c) shows the variation in the 0.5 V PTAT reference across VCP measured through the VADC0.5V LDO. As VCP drops from 5.5–3.5 V, the PTAT reference varies around 2.5%, which has minimal effect on the ADC during Readout. These results are an improvement over <cit.> where the reference current varied 11.5% over a 1.5 V drop, resulting in a 50% reduction in the laser output power. §.§ Imaging Frontend The photodiode responsivity is determined by measuring pixel output voltage across a range of incident optical powers as shown in Fig. <ref>(a). We use a LED with a collimator and beam expander to ensure uniform illumination of the sensor. A narrow bandpass interference filter placed in front of the LED selects a specific wavelength. Measurements are made at 535 nm and 705 nm, near the center of the optical frontend passbands. The optical power output of the LED is characterized with a power meter (PM100D, ThorLabs). In Fig. <ref>(a), the slope indicates pixel gain in mV/pW with TEXP=8 ms. 
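The off-line demodulation described in the Data Uplink subsection above can be sketched as follows. The filter order, sample rate, bit windows, and threshold are illustrative assumptions; the actual processing of the oscilloscope captures may differ in its details.

```python
# Simplified backscatter demodulation: bandpass filter at the 920 kHz carrier,
# window each bit interval, reconstruct with sinc (Fourier) interpolation,
# then compare the peak-to-peak amplitude of each pulse to a threshold.

from scipy.signal import butter, filtfilt, resample

FS = 20e6                # Hz, assumed oscilloscope sample rate
F_CARRIER = 920e3        # Hz

def demodulate(rx, bit_windows, threshold):
    b, a = butter(4, [0.8 * F_CARRIER, 1.2 * F_CARRIER],
                  btype="bandpass", fs=FS)
    filtered = filtfilt(b, a, rx)
    bits = []
    for start, stop in bit_windows:            # sample indices of each bit pulse
        segment = resample(filtered[start:stop], 8 * (stop - start))
        amplitude = segment.max() - segment.min()
        bits.append(int(amplitude > threshold))
    return bits
```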
The photodiode responsivity is calculated by dividing pixel gain by the transimpedance gain of the CTIA. The pixels have a mean responsivity of 0.13 A/W (quantum efficiency, QE=30%) at 535 nm and 0.21 A/W (QE=37%) at 705 nm. A histogram of the measured dark current across pixels with a Gaussian fit is shown in Fig. <ref>(b). The mean dark current is 14.9 fA (7.7 aA/µm2) with a standard deviation of 0.7 fA (0.4 aA/µm2). Fig. <ref>(c) shows the measured pixel output noise in the dark condition for different exposure times for a single frame and an average of 8 frames. For TEXP=8 ms, the measured pixel output noise is 5.34 mVrms for a single frame and 1.87 mVrms after 8 averages. The output noise increases with the exposure time due to the shot noise from the increased dark signal. The resolution of the imager is measured with a negative USAF target (Fig. <ref>(a)) overlaying a uniform layer of Cy5 NHS ester (λEX=649 nm, λEM=670 nm) dissolved in PBS at 10 µM concentration. The dye is contained under a 150 µm-thick glass coverslip and the target is placed on the imager. The resolution measurements were conducted with wired power and data transfer and using a fiber-coupled 650 nm laser for uniform illumination. Fig. <ref>(b) shows the sensor image of the element with 125 µm line spacing compared to the microscope reference image in Fig. <ref>(c). The sensor images this element at 50% contrast as calculated with the line scan in Fig. <ref>(d). Contrast is calculated as (V_MAX-V_MIN)/(V_MAX+V_MIN-V_BK), where V_MAX and V_MIN are the maximum and minimum pixel values in the bright and dark bars, respectively, and V_BK is the background signal. Fig. <ref>(e) shows the full contrast transfer function measured by imaging elements on the target with line spacing ranging from 79–455 µm and calculating the contrast for each. These results demonstrate that with the FOP, the imager can distinguish line spacing as small as 100 µm with greater than 20% contrast. To demonstrate three-color imaging, we image a sample containing 15 µm-diameter green (λEX=505 nm, λEM=515 nm, F8844, Thermo Fisher Scientific), red (λEX=645 nm, λEM=680 nm, F8843, Thermo Fisher Scientific), and NIR (λEX=780 nm, λEM=820 nm, DNQ-L069, CD Bioparticles) fluorescent beads. The beads are suspended in 1× PBS solution at a concentration of approximately 10 beads/µL. 50 µL of solution is pipetted into a micro-well chamber slide for imaging. Imaging results are shown in Fig. <ref>. The sensor images are obtained wirelessly with ILD=18.5 mA, TEXP,GREEN=8 ms, TEXP,RED=16 ms, TEXP,NIR=8 ms. For each color channel, 4 frames are averaged and the channels are colored and overlaid to make the multicolor image. The sensor images show good correspondence with the reference image taken with a bench-top fluorescence microscope (Leica DM-IRB). A few beads do not appear in the sensor image due to non-uniform illumination from the µLDs. There is also a line artifact visible in the NIR channel due to reflections off the wire-bonds, which can be mitigated through more careful fabrication as detailed in <cit.>. § EX VIVO IMAGING OF IMMUNE RESPONSE We conducted an ex vivo mouse experiment to demonstrate the application of our sensor to assessing the response to cancer immunotherapy through dual-color fluorescence imaging of both effector and suppressor cells in the tumor microenvironment.
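As an aside to the characterization results reported above, the two helper functions below reproduce the quantum-efficiency and contrast figures quoted in that subsection; the hc/q constant is standard, and the contrast definition is the one given in the text.

```python
# Quantum efficiency implied by a measured responsivity, and the contrast
# metric used for the USAF-target line scans described above.

HC_OVER_Q = 1239.84      # eV*nm, standard constant for converting A/W to QE

def quantum_efficiency(responsivity_a_per_w: float, wavelength_nm: float) -> float:
    return responsivity_a_per_w * HC_OVER_Q / wavelength_nm

def contrast(v_max: float, v_min: float, v_bk: float) -> float:
    return (v_max - v_min) / (v_max + v_min - v_bk)

print(f"QE(535 nm) = {quantum_efficiency(0.13, 535):.0%}")   # ~30%
print(f"QE(705 nm) = {quantum_efficiency(0.21, 705):.0%}")   # ~37%
```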
In this study, we measure response to immune checkpoint inhibitors (ICIs), a class of immunotherapy that activates the immune system against cancer by blocking inhibitory interactions between effector immune cells, suppressive immune cells, and cancer cells <cit.>. A successful immune response to ICIs requires the activation and proliferation of CD8+ T-cells, the most powerful effector cells in the anticancer response, and their infiltration into the tumor microenvironment <cit.>. Therefore, CD8+ T-cell infiltration has been identified as an indicator of a favorable immune response <cit.>. However, CD8+ T-cell activation can be inhibited by suppressor immune cells such as neutrophils, which regulate the immune system and inflammation in the body and are associated with resistance to ICI immunotherapy <cit.>. Dual-color fluorescence imaging enables a differential measurement of these two control mechanisms of the immune response with the same imaging frontend, which is not possible with clinical imaging modalities such as MRI, PET, or CT. §.§ Experimental Design Fig. S5 outlines the ex vivo experiment design, which uses two engineered cancer models from <cit.>, an LLC lung cancer model (engineered to resist ICIs) and a B16F10 melanoma model (engineered to respond to ICIs). Both tumor models show increased CD8+ T-cell infiltration over the course of treatment. However, while the B16F10 tumors reliably respond, the LLC tumors are resistant to ICI therapy. This resistance has been linked to a T-cell-driven inflammatory response that triggers an influx of neutrophils into the tumor, suppressing T-cell activation <cit.>. The experiment includes two groups of mice, each bearing one type of tumor. Each group consists of a mouse treated with a combination of PD-1 and CTLA-4 inhibitors, a class of ICIs <cit.>, and an untreated mouse injected with a non-therapeutic antibody as a control. Three weeks after tumor implantation, the tumors are harvested, sectioned into 4 µm-thick samples, and mounted on glass slides. Two adjacent sections from each tumor are labeled separately with fluorescent probes targeting CD8+ T-cells and neutrophils. CD8+ T-cells are stained with a CD8+ antibody labeled with Cy5 (λEX=649 nm, λEM=670 nm) and neutrophils are stained with a CD11b antibody labeled with FAM (λEX=492 nm, λEM=518 nm). §.§ Imaging Results Images of the tumor samples are captured wirelessly with the sensor and compared with reference images from a bench-top fluorescence microscope. Figs. <ref>(a) and (b) show the imaging results from the LLC (resistant) and B16F10 (responsive) groups, respectively. For each fluorescent channel, 8 frames are acquired with the chip, using imaging parameters of ILD=18.5 mA, TEXP,Cy5=16 ms, and TEXP,FAM=8 ms. The sensor images are averaged across all frames. The microscope images are overlaid with the cell nuclei of the entire sample, stained with DAPI (blue in the image) to highlight the tumor area. The white lines within the images indicate the boundaries of the tumor tissue. The sensor images are qualitatively consistent with the microscope references, albeit at a lower resolution and with varying intensity across the image due to non-uniform illumination from the µLDs. To quantify the results for each tumor model, the percent change in the density of both cell types between the untreated and treated mice is calculated according to the metrics in Fig. <ref>(c).
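The quantification referenced above (and detailed in the next paragraph) can be summarized by a short sketch. The density estimators below follow the description in the text; array shapes and the example numbers are purely illustrative.

```python
# Response metric: percent change in cell density between treated and
# untreated tumors. Microscope densities are fractions of DAPI-positive nuclei
# that are also positive for the targeted probe; sensor densities are
# approximated by fluorescence intensity normalized by the outlined tumor area.

import numpy as np

def percent_change(treated_density: float, untreated_density: float) -> float:
    return 100.0 * (treated_density - untreated_density) / untreated_density

def sensor_density(image: np.ndarray, tumor_mask: np.ndarray) -> float:
    """Mean fluorescence intensity inside the tumor boundary."""
    return float(image[tumor_mask].sum() / tumor_mask.sum())

# Purely illustrative densities (arbitrary units), chosen to reproduce the
# 847% increase reported for the CD8+ channel of the B16F10 model:
print(f"{percent_change(treated_density=9.47, untreated_density=1.0):.0f}%")
```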
Ground truth cell densities are determined using the microscope images by counting the fraction of cell nuclei (DAPI) labeled with the targeted probe (red and green channel). As the sensor does not have single-cell resolution, the cell density in the sensor images is determined by the fluorescence intensity in the tumor normalized by the area bounded by the dashed white lines in Fig. <ref>(a) and (b). The background signal is mostly canceled out by measuring percent change. The quantified results from the sensor and microscope are shown in Fig. <ref>(d). The sensor captures the general trends observed with the microscope, consistent with the results in <cit.>. The increase in the density of CD8+ T-cells in both B16F10 samples (sensor: 847%, microscope: 582%) and the LLC samples (sensor: 38%, microscope: 191%) suggests an effector response to immunotherapy in both models. However, a larger increase in CD11b density after treatment in the LLC tumors (sensor: 66%, microscope: 75%) over the B16F10 tumors (sensor: 42%, microscope: 51%) suggests resistance in the LLC model due to an increase in neutrophils. These trends would better reflect the results in <cit.> with a larger sample size, to account for heterogeneity across the mice, and with neutrophil-specific biomarkers (CD11b also stains other myeloid cells). However, these results highlight the utility of multicolor fluorescence imaging in evaluating the response to cancer immunotherapy, enabling a differential measurement of both effector (e.g. CD8+ T-cell) and suppressor (e.g. neutrophil) populations. As shown by the increase in CD8+ T-cells in resistant LLC tumors, an increase in effector populations does not always correlate with response, as the effector cells may be inhibited by suppressor cells. Therefore, simultaneously imaging suppressor populations such as neutrophils has two advantages: (1) enabling a more accurate assessment of response and (2) revealing the mechanisms of resistance (e.g. neutrophil interference with CD8+ T-cells) that can be targeted with second-line therapies (e.g. blocking T-cell-induced immunosuppressive inflammation signaling as done in <cit.>). Future in vivo studies can highlight the unique capability of our sensor to analyze real-time dynamics in the spatial interactions of these populations, which is critical for developing a more nuanced understanding of the immune response <cit.>. § CONCLUSION We present a fully wireless implantable image sensor capable of multicolor fluorescence imaging for real-time monitoring of response to cancer immunotherapy. A comparison of our work with recent chip-scale fluorescence imagers and sensors is shown in Table <ref>. To the knowledge of the authors, our work is the first to demonstrate fully wireless operation of the entire system with biologically relevant samples. In <cit.>, a battery is used for power. In <cit.>, the US link operates above FDA limits, and low imager sensitivity limits wireless imaging to high concentrations of fluorescent dye. With a power harvesting frontend incorporating a cross-coupled charge-pump, we demonstrate safe operation at 5 cm depth in oil with US power densities at 31% of FDA limits. The robust communication link demonstrates a BER better than 10^-6 with a 13 kbps data rate. Moreover, optimization of the storage capacitor sizing enables a small form factor of 0.09 cm3, demonstrated with a mechanical assembly of the implant.
Our system is specifically designed for multicolor fluorescence imaging with a three-channel laser driver to drive different color µLDs, an US downlink for programming imaging and laser settings, and an optical frontend design consisting of a multi-bandpass interference filter and a FOP. Our optical frontend provides greater than 6 OD of excitation rejection of lasers within 15 nm of the filter band edge, a significant improvement over the CMOS metal filters reported in <cit.> and competitive performance with the combination of absorption and interference filters in <cit.>. To the best of our knowledge, this work is the first chip-scale fluorescence imager capable of three-color imaging, which we demonstrate through imaging fluorescent beads. The pixel noise is on the same order of magnitude as <cit.> despite these works using pixel sizes accommodating large low-noise readout circuits with higher power consumption. By imaging CD8+ T-cells and neutrophils populations in ex vivo mouse tumors with or without immunotherapy, we show how multicolor fluorescence imaging can enable accurate identification of non-responders and their underlying resistance mechanisms. Such sub-millimeter imaging of multiple biomarkers is inaccessible to clinical imagers such as MRI, CT or PET and can inform personalized treatment regimens addressing the wide variability in response to immunotherapy across patients. With future work in biocompatible packaging and integration of optics for epi-illumination, our platform can open the door to real-time, chronic monitoring of the spatial interactions of multiple cell populations deep in the body. § ACKNOWLEDGMENTS The authors would like to thank sponsors of BSAC (Berkeley Sensors and Actuators Center) and TSMC for chip fabrication. We appreciate technical discussion and advice from Prof. Rikky Muller, Efthymios Papageorgiou, Hossein Najafiaghdamand, and Mohammad Meraj Ghanbari. Thank you to Eric Yang, Jade Pinkenburg, and Kingshuk Daschowdhury for their technical assistance. Finally, we acknowledge Dr. Mohammad Naser from Biological Imaging Development CoLab (BIDC) and Kristine Wong from Laboratory for Cell Analysis (LCA) for the development of immunohistochemistry workflow and imaging. [ < g r a p h i c s > ]Micah Roschelle (Graduate Student Member, IEEE) received his B.S. degree in electrical engineering from Columbia University, New York, NY, USA, in 2020. He is currently pursuing a Ph.D. in electrical engineering and computer sciences at the University of California, Berkeley, Berkeley, CA, USA. His research interests include implantable medical devices, lensless fluorescence imaging, and biomedical sensor design. [ < g r a p h i c s > ]Rozhan Rabbani (Graduate Student Member, IEEE) received the B.Sc. degree from Sharif University of Technology, Tehran, Iran, in 2018. She received her Ph.D. degree from the Department of Electrical and Computer Sciences, University of California Berkeley, Berkeley, CA, USA in 2024. At Sharif University of Technology, she worked on analog and mixed-signal circuit design to optimize power consumption for a wearable ECG sensor. She worked at Apple Inc. during Summers 2020 and 2022 working on calibration and test automation for high-speed applications. Her research at UC Berkeley was focused on developing biomedical circuits and sensors, specifically implantable image sensors for cancer therapy. She was the recipient of the Apple Ph.D. 
Fellowship in Integrated Circuits in 2022, the 2024 SSCS Rising Stars, and the 2024 SSCS Predoctoral Achievement Award. [ < g r a p h i c s > ]Surin Gweon (Graduate Student Member, IEEE) received the B.S. degree in electrical engineering from Korea University, Seoul, South Korea, in 2018, and the M.S. degree in electrical engineering from the Korea Advanced Institute of Science and Technology (KAIST), Daejeon, South Korea, in 2020. She worked with System LSI Business, Samsung Electronics Company Ltd., Hwaseong, South Korea until 2023. She is currently pursuing the Ph.D. degree electrical engineering and computer sciences at the University of California at Berkeley (UC Berkeley), Berkeley, CA, USA. Her research interests include image sensor front-end and mixed-mode computing for implantable biomedical applications. [ < g r a p h i c s > ]Rohan Kumar (Graduate Student Member, IEEE) received his B.S. degree in electrical engineering and computer science (EECS) from the University of California, Berkeley, Berkeley, CA, USA, in 2024. He is currently pursuing a Ph.D. in EECS at UC Berkeley. His interests include electronic design automation, die-to-die interconnects, and open-source hardware. [ < g r a p h i c s > ]Alec Vercruysse (Graduate Student Member, IEEE) received a B.S. degree in engineering from Harvey Mudd College in Claremont, CA, USA in 2023. He is currently pursuing a Ph.D. in electrical engineering and computer sciences at the University of California, Berkeley. His interests include the system-level design of circuits for implantable medical devices. [ < g r a p h i c s > ]Nam Woo Cho Dr. Nam Woo Cho, MD, PhD is a physician scientist in radiation oncology. Dr. Cho received his undergraduate degree from Harvard College, and MD/PhD degrees from the University of Pennsylvania. He completed internship in Internal Medicine at St. Mary’s Medical Center in San Francisco, and his residency in radiation oncology at UCSF. Following his postdoctoral work with Dr. Matthew Spitzer, he started his own research laboratory as an Assistant Professor in the Department of Radiation Oncology and Department of Otolaryngology-Head and Neck Surgery. His research focuses on understanding fundamental immunologic mechanisms that govern responses to immune stimulating therapies including radiation therapy and immune checkpoint inhibitors. Dr. Cho leverages molecular, cellular, organismal, and computational platforms to define novel mechanisms, pioneering the next generation of radio- and immune-therapeutics. [ < g r a p h i c s > ]Matthew H. Spitzer received the B.S. degree from Georgetown University, Washington, DC, USA and the Ph.D. degree from Stanford University, Stanford, CA, USA, in 2015. In 2016, he joined University of California San Francisco (UCSF), San Francisco, CA, USA as a UCSF Parker Fellow and a Sandler Faculty Fellow. He is currently Associate Professor in the Departments of Otolaryngology-Head and Neck Surgery and Microbiology & Immunology at UCSF and an investigator of the Parker Institute for Cancer Immunotherapy, San Francisco, USA. His research aims to develop understanding of how the immune system coordinates its responses across the organism with an emphasis on tumor immunology by combining methods in experimental immunology and cancer biology with computation. [ < g r a p h i c s > ]Ali M. Niknejad (Fellow, IEEE) received the B.S.E.E. degree from the University of California at Los Angeles, Los Angeles, CA, USA, in 1994, and the master’s and Ph.D. 
degrees in electrical engineering from the University of California at Berkeley (UC Berkeley), Berkeley, CA, in 1997 and 2000, respectively. He is currently a Professor with the EECS Department, UC Berkeley, the Faculty Director of the Berkeley Wireless Research Center (BWRC), Berkeley, and the Associate Director of the Center for Ubiquitous Connectivity. His research interests include wireless and broadband communications and biomedical imaging and sensors, integrated circuit technology (analog, RF, mixed signal, and mm-wave), device physics and compact modeling, and applied electromagnetics. Prof. Niknejad and his coauthors received the 2017 IEEE Transactions on Circuits and Systems—I: Regular Papers Darlington Best Paper Award, the 2017 Most Frequently Cited Paper Award (2010–2016) at the Symposium on VLSI Circuits, and the CICC 2015 Best Invited Paper Award. He was a recipient of the 2012 ASEE Frederick Emmons Terman Award for his textbook on electromagnetics and RF integrated circuits. He was a co-recipient of the 2013 Jack Kilby Award for Outstanding Student Paper for his work on an efficient Quadrature Digital Spatial Modulator at 60 GHz, the 2010 Jack Kilby Award for Outstanding Student Paper for his work on a 90-GHz pulser with 30 GHz of bandwidth for medical imaging, and the Outstanding Technology Directions Paper at ISSCC 2004 for co-developing a modeling approach for devices up to 65 GHz. [ < g r a p h i c s > ]Vladimir M. Stojanović (Fellow, IEEE) received the Dipl. Ing. degree from the University of Belgrade, Belgrade, Serbia, in 1998, and the Ph.D. degree in electrical engineering from Stanford University, Stanford, CA, USA, in 2005. He was with Rambus, Inc., Los Altos, CA, USA, from 2001 to 2004; and the Massachusetts Institute of Technology, Cambridge, MA, USA, as an Associate Professor, from 2005 to 2013. He is currently a Professor of electrical engineering and computer sciences with the University of California at Berkeley, Berkeley, CA, USA, where he is also a Faculty CoDirector of the Berkeley Wireless Research Center (BWRC). His current research interests include the design, modeling, and optimization of integrated systems, from CMOS-based VLSI blocks and interfaces to system design with emerging devices, such as NEM relays and silicon photonics, design and implementation of energy-efficient electrical and optical networks, and digital communication techniques in high-speed interfaces and high-speed mixed-signal integrated circuit (IC) design. Dr. Stojanović was a recipient of the 2006 IBM Faculty Partnership Award, the 2009 NSF CAREER Award, the 2008 ICCAD William J. McCalla, the 2008 IEEE TRANSACTIONS ON ADVANCED PACKAGING, and the 2010 ISSCC Jack Raper Best Paper and 2020 ISSCC Best Forum Presenter Awards. He was a Distinguished Lecturer of IEEE Solid-State Circuits Society from 2012 to 2013. [ < g r a p h i c s > ]Mekhail Anwar (Member, IEEE) received the B.A. degree in physics from the University of California Berkeley (UC Berkeley), Berkeley, CA, USA, where he graduated as the University Medalist, the Ph.D. degree in electrical engineering and computer sciences from the Massachusetts Institute of Technology, Cambridge, MA, USA, in 2007, and the M.D. degree from the University of California San Francisco (UCSF), San Francisco, CA, in 2009. In 2014, he completed a Radiation Oncology residency with UCSF. 
In 2014, he joined the faculty with the Department of Radiation Oncology, UCSF, with a joint appointment in Electrical Engineering and Computer Sciences at UC Berkeley (in 2021), where he is currently an Associate Professor. His research focuses on developing sensors to guide cancer care using integrated-circuit based platforms. His research centers on directing precision cancer therapy using integrated circuit-based platforms to guide therapy. His work in chip scale imaging has been recognized with awards from the DOD (Physician Research Award)  and the NIH (Trailblazer), and in 2020 he was awarded the prestigious DP2 New Innovator Award for work on implantable imagers. At UCB and UCSF he focuses on the development of implantable sensors across both imaging, molecular sensing and radiation therapy.  He is board certified in Radiation Oncology and maintains a clinical practice specializing in the treatment of GI malignancies with precision radiotherapy.
http://arxiv.org/abs/2406.19309v1
20240627163340
Which Neurons Matter in IR? Applying Integrated Gradients-based Methods to Understand Cross-Encoders
[ "Mathias Vast", "Basile Van Cooten", "Laure Soulier", "Benjamin Piwowarski" ]
cs.IR
[ "cs.IR" ]
§ ABSTRACT With the recent addition of Retrieval-Augmented Generation (RAG), the scope and importance of Information Retrieval (IR) have expanded. As a result, the importance of a deeper understanding of IR models also increases. However, interpretability in IR remains under-explored, especially when it comes to the models' inner mechanisms. In this paper, we explore the possibility of adapting Integrated Gradient-based methods in an IR context to identify the role of individual neurons within the model. In particular, we provide new insights into the role of what we call "relevance" neurons, as well as how they deal with unseen data. Finally, we carry out an in-depth pruning study to validate our findings. Which Neurons Matter in IR? Applying Integrated Gradients-based Methods to Understand Cross-Encoders Benjamin Piwowarski July 1, 2024 ==================================================================================================== § INTRODUCTION Since the BERT <cit.> era, Information Retrieval (IR) has gone through many changes. This paradigm shift has seen the rise of neural IR systems in the light of the performance of BERT-base models compared to previous state-of-the-art in almost all benchmarks. This huge improvement, however, came at the cost of explainability, as Transformers (and thus BERT) are extremely complex models. Despite their wide adoption, the Transformers' mechanisms remain poorly understood, and this limits the explainability of neural IR models. In parallel, with the mass adoption of BERT, research on explainability and interpretability (i.e., explainable AI) has also seen a rapid surge. Being able to understand how models make predictions or which mechanisms they rely on not only helps with their adoption by users but also unlocks the possibility for researchers to study edge cases where they might fail, providing room for improvement. By improving our understanding of the mechanisms and signals involved when performing the IR task, we can also design new architectures or better-suited training algorithms, able to bridge actual gaps or correct misbehavior of existing systems, and better transfer techniques, targeting more precisely domain- or language-specific parts of the models.
As of today, Transformers-based models remain mostly "black boxes". Despite some successes in providing new insights about the different signals/features that neural models regard as important, their inner mechanism, i.e. how those signals are leveraged and/or combined by their different components, remains unclear. Different lines of work have emerged to address the challenge of understanding the Transformers-based models' machinery <cit.> such as probing <cit.>, mechanistic interpretability <cit.> or attribution methods. Within the latter, another distinction exists between perturbation-based methods <cit.> and backpropagation-based methods <cit.>. If the literature is expanding quickly in Natural Language Processing (NLP), it is scarcer in the specific domain of IR. This paper aims at filling this gap by studying the application of a gradient-based approach, namely Integrated Gradients <cit.>, to understand the role of neurons within a cross-encoder model, here MonoBERT <cit.>, in an IR task. Furthermore, we hope this work will pave the way for future ones aiming at improving IR systems. We believe that understanding better how IR models work is primordial to help the field move forward and conceive new systems. As identifying these mechanisms is not trivial because of the complexity of language models, we explore the relative importance of neurons for different aspects of the IR task (notion of relevance and in-domain vs out-of-domain data). In particular, our study focuses on the following research questions: * RQ1 Is it possible to identify neurons involved in the classification of a passage as "relevant" (or "non-relevant") for a given query? * RQ2 Is it possible to distinguish neurons involved with in-domain data from those involved with out-of-domain data? * RQ3 How important are those neurons for the IR task? § RELATED WORKS The IR landscape changed drastically with the arrival of Transformers <cit.> and BERT <cit.>, the whole domain shifting entirely towards neural IR systems. If these models can significantly improve the quality of the retrieved content, most of them, including the most effective ones like cross-encoders <cit.>, lack interpretability and explainability. Counter-examples with the core ability to provide explanations for their predictions such as SPLADE <cit.> or ColBERT <cit.> exist thanks to their architecture which leverages a form of matching (between expanded queries' tokens and passages' tokens for SPLADE, and between contextual vectors for ColBERT). Even if some models can better explain their predictions, there is no clear understanding of the process leading to that prediction or the signals that they extracted and transformed to reach such a decision. That is typically the reason that motivated the development of techniques able to unlock a model black box and allow researchers to look under the hood of neural networks. The different explainability techniques can be categorized into distinct families based on how they tackle the infamous "black-box" issue of neural networks. First, probing <cit.> trains probe classifiers from the model's hidden representations and evaluates them on tasks associated with the primary objective for which the model was designed (e.g., for IR, such tasks can be Named-Entity Recognition, Semantic Similarity or co-reference Resolution <cit.>), revealing the specific abilities that the model learned implicitly during its training to solve the task. 
However, as probing relies on external classifiers, it is considered disconnected from the original model <cit.>. In addition, probing methods can't be used to target specific neurons as they use each hidden representation as a whole and do not provide any explanation of the interplay between these abilities as well as their relative importance in the model's output. Second, mechanistic interpretability <cit.> refers to a line of works that try to "reverse engineer Transformers into human-understandable computer programs" [Quote taken from the second thread on Transformer circuits: <https://transformer-circuits.pub/>]. In particular, it aims at decomposing Transformers into multiple blocks whose role and relations with the rest of the model are both well-understood. This approach has provided meaningful explanations for toy models <cit.> or for some specific behaviors <cit.> but is hard to scale for an exhaustive study. For example, activation patching, or causal tracing, <cit.> is an application of mechanistic interpretability that changes activation in some specific parts of the model observing its effect. Despite its good results in explaining the causal structure of models <cit.>, it implies iterating over every inner output of the model, and this quickly becomes intractable. Attribution patching <cit.> alleviates this limitation by making use of gradient-based approximation but can lead to the apparition of false negatives <cit.> potentially harming its conclusions. Finally, attribution methods aim at identifying which part of the model or the input contributed the most to a prediction. Contrary to probing, attributions are obtained directly from the model and are usually more easily scalable to study a full model, even large ones, than mechanistic interpretability. It is possible to distinguish two types of attribution methods. The first one, called perturbation-based, introduces perturbations of many types (masking, removing, introducing noise, etc) on the input and measures the differences in the result compared to the original output <cit.>. Perturbations have the advantage of being easily understandable while providing a good estimation of the effect of each feature on the output but are extremely costly to compute. The second one, referred to as gradient-based or more generally backpropagation-based, recovers attributions using gradients or activations starting from the output prediction down to each layer of the model <cit.>. Gradient-based methods have the advantage of being faster to compute and usually have more desired theoretical properties over perturbation-based methods, such as sensitivity or additivity. Their main drawback is the difficulty of giving a human-understandable meaning to their attribution. The first works in IG only consider the computation of the importance of input features. This was applied by Möller et al. <cit.> in the case of Siamese Encoders for the semantic similarity task. However, their work is limited to the study of tokens' attributions and mostly aims at extending IG to a setup with two inputs. Motivated by recent studies that discovered the role of feed-forward layers within Transformers <cit.>, several gradients-based methods were developed to provide attributions to individual neurons within the model instead of the input's features <cit.>. 
In parallel, other works proposed optimizations of the computations involved in IG and its variants <cit.>, extending its possible applications and allowing to study the crucial role played by some neurons in storing knowledge <cit.> or in specific tasks <cit.>. Similarly to us, Wang et al. <cit.> studied "skill neurons" in Transformers but through Prompt-Tuning <cit.>. By concatenating soft prompts P = {p_1, p_2, ..., p_l}, p_i∈ℝ^d, with d the input dimension of the model, to the input, they show that some neurons have higher activation than the rest of the network and that this activation is strongly correlated to the prediction of a specific class in classification tasks. However, the theoretical ground behind the success of Prompt Tuning in identifying "skill neurons" remains unclear. Recent progress have been made to explain Information Retrieval and provide inputs' attribution <cit.> or explanations for the documents' ranking produced by IR models <cit.>. Nevertheless, fewer works have explored neuron attribution methods. To the best of our knowledge, this work constitutes the first attempt at using Integrated Gradient-based methods, particularly the Neuron Integrated Gradients (NIG) method <cit.>, to unveil the internal mechanisms of neural IR models. In contrast to Möller et al. analyzing siamese encoders for semantic signals <cit.>, we are interested in studying the behavior of cross-encoder architecture in the IR task, especially the integration of relevance matching signals <cit.>. Moreover, unlike the study of skill neurons <cit.> we do not limit our study to the feed-forward layers in the Transformers but also include all linear transformations in the model. § NEURON INTEGRATED GRADIENTS FOR IR In this section, we show how we leverage Neuron Integrated Gradients <cit.> to disentangle the neurons' importance within a cross-encoder model during the IR task. We leave the study of more costly but equally relevant alternatives as well as the extension to other architectures for future works. §.§ Background Originally designed for computing the attributions of the input features (to the output), Integrated Gradients (IG) <cit.> is based on path integrals. IG imply to first carefully select a baseline input x' for which the model's prediction is neutral, i.e. p(relevant) = p(non-relevant), and to study the straight line path γ(t) from this baseline to the actual input x. Formally, γ(t) = x' + t(x - x'), t ∈ [0, 1]. IG are obtained by computing and integrating the gradients along the path between the baseline and the input. It has been generalized into conductance <cit.> to obtain the importance of any neuron y. In this framework, the importance of an individual neuron is given by the Neuron Integrated Gradients formula <cit.>: NIG^y(x) = ∫_t=0^t=1δ F(x'+t(x-x'))/δγ_y(t)δγ_y(t)/δ t dt where F is a neural network, γ_y(t) denotes the activation value of a neuron y at the point t of the path γ, i.e. the output y of the neural network F_y(γ(t)). Note that this formulation of the attribution of a neuron has an efficient approximation based on the Riemann formulation of the integral that we leverage as described in <cit.>. To illustrate the intuition behind Neuron Integrated Gradients, let us use Figure <ref> that depicts the evolution along the path γ(t), between the "Baseline" x' and the "Original" input x, of the model's prediction (in blue) and the gradients of two types of neurons (black for non-important neuron and green for important neurons). 
The blue curve, between the "Baseline" and the "Original" input, corresponds to the value of the output logit as t evolves. In the interval delimited by the two red vertical lines in the figure, a change in the input results in a significant recovery of the original output signal symbolized by a strong increase for the the blue curve. With the Neuron Integrated Gradients method, the important neurons, i.e. the neurons whose attributions with regards to the attribution will be the highest, are neurons y whose gradient δγ_y(t)/δ t evolves at the same time as the models' output F. If the model's output does not change when moving the input γ(t) along the path, then the neurons that react to this change are not important to the model's decision. Such neurons are represented in the figure by the black dashed curve: its gradient is non-zero outside of the interval where the model's prediction increases, meaning that it is not directly related to the model's decision. Conversely, the green curves correspond to neurons that are important for the output. Note that the contribution can be either positive or negative, as illustrated in the figure. Neurons correspond to the activation of any layer/block in the model but in our work, we are interested in neurons corresponding to outputs of linear transformations (used to compute keys, queries, values, and inside the feed-forward block <cit.>) as their importance in many mechanisms (factual recall, performing annex tasks, etc.) has already been characterized in previous works <cit.>. §.§ Adapting NIG to identify "task-related" neurons As IR is different from other domains (CV and NLP) in which NIG <cit.> has already been applied, some of its aspects need to be adapted. Comparisons across datasets. To make fair comparisons between attributions obtained for different datasets, we aggregate the contribution value of each linear transformation's outputs. As they are shared among tokens composing the query-document pair, we sum the conductance over tokens: a "neuron" in our experiments thus includes its corresponding outputs over all tokens. This aggregated conductance can be thought of as the importance of outputs of a linear transformation within a Transformer. Dependency to the input. As pointed out by Wang et al. <cit.>, gradient-based methods produce results that are input-dependent. As we want to identify "IR task-related" or "skills" neurons, i.e., neurons that are important for any input in the IR task, we average NIG results over multiple samples across multiple datasets and only retain the neurons with the highest mean attribution values. Baseline. For images, an obvious baseline x^' is a black image. In IR, there is no obvious baseline, and we thus empirically verify which one is better suited as a baseline by comparing how well they degrade the relevance signal on average over 1000 inputs from the MSMARCO dataset <cit.>. We take inspirations from the baseline used by Möller et al. <cit.> who study IG in the context of Siamese encoders. The authors build a baseline for which the predicted cosine similarity is always 0 by shifting the ouptut embedding space by the output embedding of the baseline. The downside of their method is that it requires training the bi-encoder model for a few steps in order to adapt to the shift. 
To avoid that, we build our baselines either by replacing part of or all the input's tokens by [PAD] tokens or by zero vectors (which mimics the translation in the embedding space but as it is only on the baseline, we avoid the downside of retraining the model). We consider the values from the output of the Softmax operator. For the choice of the baseline, we subtract the value for the "non-relevant" label from the value for the "relevant" label and average the difference over the 1000 examples. We compare the average difference with the original inputs and when using our baseline. The closer the average difference is to zero, the stronger the baseline's suppression of the original input's relevance signal. Results are in Table <ref>. Based on this, we decided to transform every embedding into its fully padded counterpart to obtain our baseline. § EXPERIMENTS Using Neuron Integrated Gradients, we conduct several experiments using one base IR model over different datasets to compute the neuron attributions. By comparing the attributions over multiple datasets, we want to identify the core set of important neurons for the IR task (see RQ1 and RQ2). To empirically verify our results, we perform a series of ablation studies where we evaluated the decrease in performance on a new set of datasets caused by the ablation of the neurons tagged as important by our attributions in the IR model (see RQ3). All the project code, based on the library experimaestro-ir <cit.>, including the experimental details, is freely accessible[https://git.isir.upmc.fr/mat_vast/ri-neurons-attribution<https://git.isir.upmc.fr/mat_vast/ri-neurons-attribution>]. To conduct our experiments, we additionally use the libraries ir-datasets <cit.> and PyTerrier <cit.>. §.§ Experimental setup In our experiments, we analyze the model MonoBERT <cit.> as it is a strong baseline and a typical cross-encoder. Despite the existence of stronger models such as MonoT5 <cit.>, we decided to stick to MonoBERT as it is an encoder-only architecture, contrary to T5: We are interested in interpreting the model's inner mechanisms and the decoder part brings an additional layer of complexity to deal with. The version of MonoBERT we use has been fine-tuned on MSMARCO[The model is available on the HuggingFace' Hub: https://huggingface.co/castorini/monobert-large-msmarcocastorini/monobert-large-msmarco] <cit.>. Motivated by the RQ2, we compute attributions over several datasets, both in the same domain (ID) as the training data of our model and outside of it (OOD). For ID, we use the test set of MSMARCO[We compute Neuron Integrated Gradients on the test set as the development set has many false negatives] from the TREC 2019 Deep learning track <cit.> and for OOD, datasets from BEIR <cit.>. We choose datasets corresponding to tasks that resemble the most a traditional IR setup in BEIR according to their classification[The BEIR benchmark covers a total of 9 tasks, among which 3 can be considered the closest to the IR task: News Retrieval, Question-Answering (minus HotpotQA <cit.> which is a multi-hop QA dataset) and Bio-Medical IR]: FiQA <cit.>, TREC-Covid <cit.>, TREC-News <cit.>, NFCorpus <cit.>, BioASQ <cit.>, Natural Questions <cit.>. In addition, we also include TREC-Robust04 <cit.> as it is also a known dataset for retrieval. Together, these datasets as well as the test set of MSMARCO compose our attribution corpus, ie. the set of datasets used to compute NIG. 
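In practice, the Riemann approximation of the NIG formula above can be computed in a few lines of code. The sketch below is only an illustration and is not the implementation used in this work (which is based on experimaestro-ir): it loads the fine-tuned MonoBERT checkpoint mentioned above, interpolates between the fully padded baseline and the original input at the embedding level, and accumulates the conductance of the outputs of one linear transformation. The chosen layer index and the assumption that logit 1 corresponds to the "relevant" label are illustrative.

# Illustrative sketch: Riemann-approximated Neuron Integrated Gradients (conductance)
# for the outputs of one linear layer of MonoBERT, with the fully padded baseline.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

name = "castorini/monobert-large-msmarco"
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name).eval()
layer = model.bert.encoder.layer[12].attention.output.dense   # one linear transformation (assumed choice)

def neuron_conductance(query, passage, target=1, steps=100):
    enc = tok(query, passage, return_tensors="pt", truncation=True, max_length=512)
    emb = model.bert.embeddings.word_embeddings
    x = emb(enc["input_ids"])                                          # original input embeddings
    x_base = emb(torch.full_like(enc["input_ids"], tok.pad_token_id))  # fully padded baseline
    total, prev_act = None, None
    for k in range(steps + 1):
        t = k / steps
        point = x_base + t * (x - x_base)                              # gamma(t) on the straight path
        cache = {}
        handle = layer.register_forward_hook(lambda m, i, o: cache.update(y=o))
        out = model(inputs_embeds=point,
                    attention_mask=enc["attention_mask"],
                    token_type_ids=enc["token_type_ids"]).logits
        handle.remove()
        act = cache["y"]
        if prev_act is not None:
            grad = torch.autograd.grad(out[0, target], act)[0]         # dF/dy at gamma(t)
            term = grad * (act - prev_act).detach()
            total = term if total is None else total + term
        prev_act = act.detach()
    # a "neuron" aggregates the outputs of the linear transformation over all tokens
    return total.sum(dim=1).squeeze(0)                                 # one attribution per output dimension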
Table <ref> summarizes the attribution datasets and their different characteristics, including their abbreviations, used in the different formulas later in the paper. To empirically validate our findings, we further use the development set of MSMARCO (we are less focused here on the quality of the assessments because this is a ranking setup) as well as of the LoTTE benchmark <cit.>, also spanning various domains: Lifestyle, Recreation, Science, Technology, and Writing. §.§ Analysis methodology In the case of a cross-encoder, the IR task can be seen as a series of binary classification tasks where the model has to estimate the relevance of a passage to a query. When computing NIG, we estimate separately the attributions for the "relevant" label and for the "non-relevant" label (when available [BioASQ and FiQA only have relevant annotations]). We name the output of the NIG attribution method an attribution scheme, i.e. the set of attribution values of every neuron for either the "relevant" or "non-relevant" labels of a given dataset. We ensure before assigning one query-passage pair to the label "relevant" or "non-relevant" that the original model prediction matches the assessment. Future work could include a more fine-grained analysis by distinguishing the assessments between the unambiguous pairs and the ambiguous pairs. As we focus on understanding the generic mechanisms behind IR systems' predictions, we exclude the ambiguous pairs. For each attribution scheme, we can rank the neurons y based on their mean importance NIG^y and select the top X% of neurons in the model (typical values of X ∈ [0.01, 0.1, 1]). For example, we can derive the top 1% of neurons with the highest attribution value in the whole model for the "relevant" label on MSMARCO. These subsets constitute the base units of our analysis. Following previous works <cit.> and to keep the computing time reasonable (given the number of linear transformations that we consider in our study), we approximate attributions using N = 100 steps. We verified this number is high enough to minimize the approximation error due to the discretization when computing the integral <cit.>. Answering RQ1. To know if it is possible to identify neurons involved in the classification of a passage as "relevant" (or "non-relevant") for a given query, we leverage the attributions for both types of labels. We start from the sets of neurons involved in the prediction of the "relevant" (positive) label and "non-relevant" (negative) label for the dataset x, x ∈{ms, f, tc, tn, nf, b, r, nq} (see Table <ref> for the abbreviations), denoted as P_x and N_x respectively. These correspond to the basic attribution schemes. We suppose that the set of "core" neurons for relevance (resp. non-relevance) (RQ1) is the intersection of the basic attribution schemes, based on the fact that neurons specific to the IR task (for a given label) should be consistently tagged as important across datasets. We consider the relevance and non-relevance separately to replicate prior works leveraging NIG on classification tasks and who found that sets of important neurons for different labels usually do not intersect <cit.>. Answering RQ2. Another important aspect of these intersections is the nature of the target domain. Indeed, to determine whether or not MonoBERT contains neurons dedicated to OOD predictions (RQ2), it is necessary to compare the "core" set of neurons across every dataset and the "core" set of neurons across OOD datasets only. Answering RQ3. 
At this point, our findings are based on the attribution from NIG but it is still unclear whether they can impact the IR task by changing the rankings produced by MonoBERT. To investigate this, we need to deprive the model of its ability to deal with relevance and/or non-relevance. Following <cit.>, we set important neurons to zero and observe the effects on both the model's predictions but also the IR task. As our target is to identify the most important neurons for the IR task, we do not limit ourselves to the attribution schemes composed of the top x% of the most important neurons for a single dataset and a single label. Instead, we combine attribution schemes to further refine the set of most important neurons. As intersections might not be the best way to combine schemes, we explore other ways to better define the most important neurons. One obvious solution is what we call the fusion operation, where the attribution values of both relevant and non-relevant sets are averaged to compute the importance of each neuron. We denote this operation with ⊕. For any dataset x, P_x ⊕ N_x denotes the fusion of the attribution schemes for the "relevant" label and the "non-relevant" label. Please note that to ease reading, this fusion is referred to as F_x. Other combinations are more straightforward as they only rely on intersections between sets. Additionally, we denote the subset of all the OOD datasets as O = {f, tc, tn, nf, b, r, nq}. Similarly, we denote the set of all datasets as A = {ms, f, tc, tn, nf, b, r, nq}. Note that we do not specify the pruning level when denoting those sets. § RESULT ANALYSIS We now comment on the results of the experiments and answer the research questions. To ease reading, we provide a summary of the notation we use in Table <ref>. §.§ RQ1: Are there relevance-specific neurons? Figures <ref>a-d. depict the intersections between the sets of relevant or non-relevant neurons, at different pruning levels. More precisely, we report for a given label: * Every pairwise intersection , i.e. P_x ∩ P_y, x,y ∈ A and N_x ∩ N_y, x,y ∈ A ∖{b, f} (BioASQ and FiQA lack non-relevant assessment). We distinguish the OOD/OOD datasets pairs from the MSMARCO/OOD pairs, as these will also help us answer RQ2; * The intersection between every dataset in O and A, i.e., ⋂_o ∈ O P_o (resp. ⋂_o ∈ O ∖{f, b} N_o) and ⋂_a ∈ A P_a (resp. ⋂_a ∈ A ∖{f,b} N_a); * for any dataset x in A ∖{f,b}, we compute P_x ∩ N_x. In these figures, the curves describe the percentage of neurons in the intersection between 2 sets at different pruning percentages. Dashed curves correspond to the intersection between two OOD datasets and plain curves to the intersection between MSMARCO and an OOD dataset (as previously described). One can easily see that for both "relevant" and "non-relevant" predictions, there exists a set of neurons that is consistently involved across domains which means that there are neurons specifically allocated for relevance, thus answering the RQ1. Furthermore, in Figure <ref>, which summarizes Figures <ref>a-d, the grey dotted lines describe the percentage of the neurons that are in common between the relevant and non-relevant attribution schemes (for the same dataset). We see that whatever the dataset x, P_x and N_x do not intersect, implying that the sets of most important neurons for each label are almost entirely different. Note that this phenomenon has previously been observed in sentiment analysis <cit.>, but we are, to the best of our knowledge, the first to report it in IR. 
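For concreteness, the set manipulations used throughout this analysis, i.e. selecting the top X% of neurons of a scheme, intersecting schemes across datasets, and the fusion operator ⊕, can be summarised by the following sketch. It is illustrative only: the attribution schemes are assumed to be stored as one numpy array per dataset and label, with a common neuron ordering, and the file names are hypothetical.

# Illustrative sketch of the scheme manipulations (top-X% selection, intersections, fusion).
import numpy as np

def top_neurons(scheme, percent):
    """Indices of the top `percent`% neurons of an attribution scheme."""
    k = max(1, int(len(scheme) * percent / 100))
    return set(np.argsort(scheme)[-k:].tolist())

def fuse(p_scheme, n_scheme):
    """Fusion operator (⊕): average the 'relevant' and 'non-relevant' attributions."""
    return (p_scheme + n_scheme) / 2.0

# P[x] / N[x]: attribution schemes of dataset x for the two labels (hypothetical files)
P = {x: np.load(f"nig_{x}_relevant.npy") for x in ["ms", "f", "tc", "tn", "nf", "b", "r", "nq"]}
N = {x: np.load(f"nig_{x}_nonrelevant.npy") for x in ["ms", "tc", "tn", "nf", "r", "nq"]}

pct = 1.0                                                             # pruning level (top 1%)
core_relevant = set.intersection(*(top_neurons(P[x], pct) for x in P))            # across all datasets
core_ood = set.intersection(*(top_neurons(P[x], pct) for x in P if x != "ms"))    # OOD datasets only
F_ms = top_neurons(fuse(P["ms"], N["ms"]), pct)                       # fusion scheme for MSMARCO
overlap = top_neurons(P["ms"], pct) & top_neurons(N["ms"], pct)       # almost empty in practice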
As the intersection between the relevant and non-relevant attribution schemes is almost empty across every dataset, it suggests the possibility to distinguish between the neurons involved in predicting the label. In addition, this observation could imply that in each case, the type of signals or mechanisms involved is different, outlining the existence of different signals in relevance (beyond semantics ones) as suggested in DRMM <cit.>. However, understanding exactly what are the relevance signals involved in each case would require a new set of experiments that we leave for future work. §.§ RQ2: Do neurons for ID predictions differ from those for OOD predictions? Another interesting observation that we can draw from Figures <ref> and <ref> is that, if we consider every possible dataset for each label, there does not seem to be a clear distinction between the neurons involved only with OOD datasets and with MSMARCO and an OOD dataset. However, if we are more careful and analyze further the impact of each dataset, we note in Figures <ref> and <ref> that if we leave aside NFCorpus and Robust, the plain curves are now completely separated from the dashed ones. This means that the intersections between two OOD datasets have a higher number of neurons than between MSMARCO and an OOD dataset. This observation is further confirmed in Figure <ref>. In this figure, each pair of curves with the same color represents the intersection between all the OOD datasets and every dataset in the attribution corpus for both labels, modulo NFCorpus and Robust (see the red and purple pairs). One can easily verify that in every case, even when including NFCorpus and Robust in the set of datasets, the OOD datasets have a higher percentage of intersections together than when we add MSMARCO to the mix. Figure <ref> already showcases the existence of two sets of neurons consistently involved in the prediction of either "relevant" or "non-relevant" labels across domains. It further suggests the existence of two additional sets of neurons, completely different from the first two sets, dedicated to OOD predictions. As an additional and distinct set of neurons is involved consistently, it seems as if predictions outside of the training domain of the model are somehow handled differently. This observation (if consistent across models) motivates future works to better understand the role of this specific set of neurons when dealing with OOD data and to design better adaptation methods for IR systems. §.§ RQ3: Can NIG identify neurons important for the IR task? To explore the impact of our observations in an IR setup, we conduct an ablation study using the corpus described in Section <ref>. For each dataset, we select a subset of queries and associate each relevant passage with 20 others retrieved by BM25. The ablations are conducted following different ablation schemes, coming either directly from the attribution schemes of each dataset or by combining some of them. As a baseline to our ablations, we use a random ablation scheme, where the same amount of neurons and in the same layers that a given attribution scheme are selected. To account for randomness, we averaged the results over 50 repetitions, each time removing a different set of neurons. For each query, we measure the difference in nDCG@10 between the original MonoBERT model and its pruned counterparts when re-ranking the list of passages. 
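The ablation itself can be implemented with forward hooks that force the selected outputs to zero while leaving the rest of the model untouched. The sketch below is illustrative (it reuses the model loaded in the earlier sketch, and the neuron indices are hypothetical); the pruned model is then used to re-rank the BM25 candidate lists as usual.

# Illustrative sketch: zeroing ("ablating") a set of neurons of MonoBERT with forward hooks.
import torch

def ablate(model, to_prune):
    """to_prune maps a linear module of the model to the set of output indices to zero."""
    handles = []
    for module, indices in to_prune.items():
        idx = torch.as_tensor(sorted(indices))
        def zero_out(mod, inputs, output, idx=idx):
            output = output.clone()
            output[..., idx] = 0.0        # zero these outputs for every token of every input
            return output
        handles.append(module.register_forward_hook(zero_out))
    return handles                         # call remove() on each handle to restore the original model

# Example with hypothetical indices in one feed-forward output ("ff_output") layer
layer = model.bert.encoder.layer[10].output.dense
handles = ablate(model, {layer: {17, 254, 691}})
# ... re-rank the candidate passages with the pruned model and compare nDCG@10 ...
for h in handles:
    h.remove()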
For both the random baseline and the different attribution schemes, we consider three levels of pruning: 0.01%, 0.1%, and 1% and report the average differences in nDCG@10 in Table <ref>. Description of the attribution schemes applied to the original model. Inspired by the previous experiments, we first compute the results obtained when pruning following the attribution schemes based on MSMARCO, P_ms and N_ms, and on the intersection of the OOD datasets ("relevant" and "non-relevant" separately at first), i.e. ⋂_o ∈ O P_o and ⋂_o ∈ O∖{b,f} N_o, and every dataset, i.e. ⋂_a ∈ A P_a (resp. ⋂_a ∈ A N_A∖{b,f}). Finally, as the IR task involves both types of relevance signals at the same time, we also combine "relevant" and "non-relevant" attribution schemes together by merging P_ms and N_ms as P_ms⊕ N_ms = F_ms. In addition, we also consider the intersections of the fusion schemes F_x together such as ⋂_o ∈ O F_o and ⋂_a ∈ A F_a ( for each dataset, we first compute the set of neurons using fusion, before doing the intersection over the datasets). Last, we compute the global fusion of all the original schemes together, simply denoted as F_A. From Table <ref>, we first observe that the random pruning baseline does not significantly impact the performances of the model. When pruning only 0.01% of the neurons, the performances are not altered at all. Higher levels of pruning (0.1% and 1%) produce changes of at most 2% on a single dataset. When it comes to the attribution schemes obtained from our combinations, we note that even if it is not statistically significant, some of them already impact negatively the IR metrics when pruning as little as 0.01% of the neurons (around 20 neurons before any intersection). For higher levels of pruning, we observe larger degradation which eventually becomes statistically significant on some datasets. As an answer to RQ3, it seems that NIG can identify neurons that matter for the IR task as removing them negatively impacts IR metrics. Another observation is that one of the best attribution schemes is F_ms. This highlights the value of fusion in selecting relevance-sensitive neurons but also the importance of considering both relevant and non-relevant attributions. Altogether, this shows the possibility to identify neurons important for the IR task with NIG. Beyond the scope of RQ3, Table <ref> further helps us to understand the relations between these different sets of neurons and the IR task. In particular, perturbing the original model using "non-relevant" attribution schemes have more impact on performance compared to their "relevant" counterpart (i.e. N_ms has more impact than P_ms, and likewise for intersections of attribution schemes). § DISCUSSIONS Beyond the scope of the research questions, attributions from Neuron Integrated Gradients offer multiple new insights into the inner mechanisms of MonoBERT. In particular, Figure <ref> gives more details on the distribution of important neurons across the model layers and components for the scheme with the biggest impact at 1% of pruning percentage, i.e. N_ms. From this figure, we observe that a significant peak in the number of important neurons occurs around the last two layers. We suspect this peak is associated with the concentration of all the signals into the [CLS] token as the model's output uses CLS-pooling. Even if it is smaller in magnitude, it also displays a second peak located around the middle layers. 
This peak of activation is spread across layers 7 to 12 and emphasizes the role of these mid-level layers in the model's predictions. Interestingly, the position of this peak in the model matches the conclusions of other studies based on probing which show the importance of intermediate layers' representations in IR <cit.>. In addition, when looking at the details of which transformations have the most important neurons in these layers, we remark 1) the omnipresence of the last linear transformation in the attention mechanism ("attention_output") and of the value <cit.> and 2), the absence of important neurons in the key and query's linear transformations. If, as we suspect, these mid-level layers are involved in relevance matching (semantic and lexical), we interpret these 2 observations as matches occurring at the level of the query and key matrices before being filtered by the value and propagated to the upper parts of the model. When computing attributions, neurons appear more important in the value matrices because these are responsible for the filtering – signals in key and matrices can be considered redundant. Following this matching process, we note that the relative importance of feedforward layers also increases in mid-level layers ("intermediate" and "ff_output"), which could mean that they are used to integrate this information. Impact of the baseline's choice. As detailed in Section <ref>, we carefully select our baseline to compute NIG attributions by empirically validating that it erased most of the relevance signal in the original input embeddings. This design choice is crucial as we observed different behaviors when running through the experimental process with the baseline proposed by Möller et al. <cit.>. Even if it did not change the conclusions, both the ablations and the observations were partially impacted: the biggest differences in Table <ref> were less pronounced. For instance, for the in-model distribution of the important neurons, the peak in the middle layers was even more important, contrary to the peak in the last two layers. This reminds us that NIG attributions are dependent on the choice of a good baseline that can otherwise hinder results and conclusions. § CONCLUSION In this paper, we present an adaptation of the Neuron Integrated Gradients attribution method that fits with the IR task, applied to MonoBERT. Our analyses highlight that within the model, it is possible to identify neurons specifically allocated to determine the relevance of a passage to a query. By extending our study across multiple datasets, we have been able to identify a core set of neurons related to the notion of "relevance" and have demonstrated the existence of a different set of neurons important in the case of OOD data. Finally, we empirically demonstrate that neurons identified by NIG are actually related to the IR task by performing multiple ablations. Overall, our study shows that the relevance of a passage is treated by two independent sets of neurons that do not depend on the dataset. Our statements link one set of neurons to matching signals particularly and another one to domain adaptation. This work is not without limitations and could benefit from exploring additional neural IR architectures/models as its conclusions are limited to MonoBERT. The number of relevance judgments when computing NIG (particularly negatives as some datasets only have positive annotations) also might hinder our conclusions. 
With this in mind, we nevertheless believe that our analysis provides interesting outcomes regarding the nature of neurons in the IR task and paves the way towards the design of more robust and generalizable neural IR models. Future works. Our work opens up many follow-ups to refine our observations on the role of particular neurons of MonoBERT in the IR task. In particular, it would be interesting to apply more costly methods, such as those inspired by mechanistic interpretability, to the reduced set of layers or blocks that we have identified as relatively more important than the others, or to the core set of OOD neurons. To expand our work, other models could also be considered: stronger cross-encoders such as MonoT5 <cit.>, bi-encoders <cit.> (extending the work of Möller et al. <cit.>), or more recent architectures such as ColBERT <cit.> or SPLADE <cit.>. This work benefited from support from the French National Research Agency (Project GUIDANCE, ANR-23-IAS1-0003).
http://arxiv.org/abs/2406.19233v1
20240627145805
X-ray and gamma-ray study for 2023 nova eruption of V1716 Sco
[ "H. -H. Wang", "H. -D. Yan", "J. Takata", "L. C. -C. Lin" ]
astro-ph.HE
[ "astro-ph.HE" ]
]X-ray and gamma-ray study for 2023 nova eruption of V1716 Sco School of Physics and Engineering, Henan University of Science and Technology, Luoyang 471023, China wanghh33@mail.sysu.edu.cn Department of Astronomy, School of Physics, Huazhong University of Science and Technology, Wuhan 430074, China yanhdhh@hust.edu.cn Department of Astronomy, School of Physics, Huazhong University of Science and Technology, Wuhan 430074, China takata@hust.edu.cn Department of Physics, National Cheng Kung University, Tainan 701401, Taiwan § ABSTRACT We report the results of X-ray and gamma-ray analyses of the nova V1716 Sco taken by , , and Fermi-LAT. We have detected gamma-ray emission at a significant level exceeding 8 σ in daily bins starting the day after the optical eruption. The gamma-ray emission, characterized by a Test Statistic (TS) value more than four, persisted for approximately 40 days. Notably, harder X-ray emission were observed by as the start of gamma-ray emission, which is the fourth classical nova that gamma-ray emission is concurrent with harder X-ray emission from data. V1716 Sco is one of rare samples that clearly shows a hard X-ray emission (1-10 keV bands) in the data concurrently with gamma-ray emission of Fermi-LAT data, and its light curve in 1.0-10.0 keV bands had a peak at about 20 days after the optical eruption. The X-ray spectrum was initially fitted by a model of thermal plasma emission, and entered a supersoft phase with additional blackbody (BB) component emerged around about 40 days after the optical eruption. data taken in supersoft source phase revealed a quasi-periodic oscillation with a period of 79.10±1.98 seconds, and the peak phase of the folded light curve varied with time. Moreover, V1716 Sco is the another example that the emission radius in supersoft source phase is significantly larger than the radius of white dwarf, and a simple BB emission model may not be applicable since the luminosity exceeds significantly Eddington limit. § INTRODUCTION Classical novae are thermonuclear eruptions that occur in binary systems, where a white dwarf (hereafter WD) accretes matter from its companion. The energy released from the thermonuclear eruption causes a dramatic expansion and ejection of the accreted envelope. Observations have shown that the ejected matter expands into the surrounding environment at speeds ranging from hundreds to thousands of km s^-1 <cit.> and have confirmed multi-wavelength emission from radio to TeV gamma-ray bands <cit.>. Hard X-ray and gamma-ray emissions are thought to be evidence of the formation of the shock due to the novae outflows <cit.>. The Fermi Large Area Telescope (hereafter Fermi-LAT) has confirmed GeV emissions from 20 novae and potential emissions from 6 sources, since its launch in 2008[<https://asd.gsfc.nasa.gov/Koji.Mukai/novae/latnovae.html>]. It is argued that the collisions of the multiple ejecta (internal shock) or the interaction between the ejecta and preexisting medium surrounding the binary can cause the shock <cit.>, resulting in the production of gamma-rays through leptonic and/or hadronic processes <cit.>. Most novae detected in the GeV range are classified as classical novae, typically having a main-sequence star as the companion. These shocks are thought to be internal, resulting from collisions of the multiple ejecta <cit.>. The observed X-ray emission from novae is typically characterized by thermal radiation from the hot WD and/or the shocked matter <cit.>. 
The soft X-ray emission with an effective temperature of <0.1 keV can reach a luminosity of L_X>10^36  erg s^-1. The soft X-ray emission is thought to originate from a hot WD sustained by residual nuclear burning. As the ejected material spreads out, the surrounding environment becomes optically thin, allowing the soft X-ray emission from the hot WD becomes visible <cit.>. The emission in the soft X-ray band defines the Supersoft Source (SSS) phase, when the ejecta became transparent to X-rays from the central source <cit.>. Observations during SSS phase of some novae have confirmed quasi-periodic oscillations (QPOs) with a period in the range of 10 to 100 s <cit.>. Various possibilities have been suggested: for example, the spin modulation of the WD with a strong magnetic field is the most likely explanation for the QPOs <cit.>. Another possibility is the g-mode (buoyancy) pulsations driven by an ionisation-opacity instability, which is expected to produce a period of the order of 10 s or less <cit.>. Consequently, the exact origin of QPO in novae remains unclear. The nova V1716 Sco (also known as PNV J17224490-4137160; Nova Sco 2023) was discovered by Andrew Pearce on 2023 April 20.678 UT and visually confirmed on April 20.705 UT at magnitude 8.0 [<http://www.cbat.eps.harvard.edu/unconf/followups/J17224490-4137160.html>]. Data from the All-Sky Automated Survey for Supernovae (ASAS-SN) revealed a pre-discovery detection on 2023 April 20.410 UT <cit.>, and spectroscopically confirmed as a classical (Fe II) nova <cit.>. In this paper we adopt the date of the first ASAS-SN detection as the eruption start time t_0 = UT 2023-04-20.410 = JD 2460054.910 = MJD 60054.410. <cit.> reported the detection gamma-ray emission (>5 σ significance level) using Fermi-LAT data taken from 2023-04-21 00:00:00 to 24:00:00 UTC. The >100 MeV flux averaged over that period was F_γ=(6.5±2.1)× 10^-7  ph cm^-2s^-1 and the photon index=1.9±0.2. Hard X-rays were detected by on 2023 April 21.89 UT, with the X-ray spectrum being consistent with a heavily absorbed thermal plasma <cit.>. detected the X-ray emission on 2023 May 01 and confirmed an additional soft component appeared after 2023 May 31 <cit.>. <cit.> reported the power spectrum of V1716 Sco using data and confirmed a strong pulsation around a period of ∼ 80 s, which may be a SSS QPO as this period appears to be slightly varying. In this Letter, we report results of more detailed GeV and X-ray analyses of nova V1716 Sco. In Section 2, we describe the data analysis conducted by , , and Fermi-LAT observations. Section 3 is the discussion about the results from the gamma-ray and X-ray data analysis. We make a conclusion in Section 4. § DATA REDUCTION AND RESULTS §.§ Fermi-LAT data We performed a binned analysis using the standard Fermi-LAT ScienceTool package, which is available from the Fermi-LAT Science Support Center[<https://fermi.gsfc.nasa.gov/ssc/data/access/lat/>]. We selected Pass 8 data in the energy band of 0.1-300 GeV. The data for the fourth Fermi-LAT catalog (4FGL DR4) were taken during the period August 2008 to August 2022 covering 14 years <cit.>. We conducted a binned analysis using a gamma-ray emission model file based on the 4FGL DR4 catalog. To avoid contamination from Earth's limb, we included only events with zenith angles less than 90 degrees. Our analysis limited the events from the point source or Galactic diffuse class () and utilized data from both the front and back sections of the tracker (). 
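For illustration, this event selection can be written with the Python interface to the Fermi ScienceTools (gt_apps). The sketch below is not the exact command chain used in this work: file names and MET boundaries are assumptions (the MET values approximate MJD 60040 and 60250 ignoring leap seconds), and the coordinates are those of V1716 Sco used in the analysis.

# Illustrative sketch of the Fermi-LAT event selection with the ScienceTools (gt_apps).
from gt_apps import filter, maketime

filter['evclass'] = 128          # P8R3 SOURCE class (assumed)
filter['evtype']  = 3            # front + back converting events
filter['ra']      = 260.687      # 17h22m44.88s
filter['dec']     = -41.621      # -41d37'16.0"
filter['rad']     = 20
filter['emin']    = 100          # MeV
filter['emax']    = 300000
filter['zmax']    = 90           # zenith cut against the Earth limb
filter['tmin']    = 702432000    # ~MJD 60040 in MET
filter['tmax']    = 720576000    # ~MJD 60250 in MET
filter['infile']  = '@photon_files.txt'
filter['outfile'] = 'v1716sco_filtered.fits'
filter.run()

maketime['scfile']  = 'spacecraft.fits'
maketime['filter']  = '(DATA_QUAL>0)&&(LAT_CONFIG==1)'
maketime['roicut']  = 'no'
maketime['evfile']  = 'v1716sco_filtered.fits'
maketime['outfile'] = 'v1716sco_gti.fits'
maketime.run()
# The binned likelihood fit (gtlike) is then run on the counts cube, exposure and
# source maps built from these events, with the 4FGL-DR4 emission model.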
For the analysis of the GeV emission from the target, we selected the data with the energy above 100 MeV and the time epoch to cover from MJD 60040, which is ∼ 15 days before the detection of its optical eruption (at 2023 April 20.410= MJD 60054.410), to MJD 60250. We constructed a background emission model that incorporates both the Galactic diffuse emission () and the isotropic diffuse emission () provided by the Fermi-LAT Science Support Center. A gamma-ray emission model for the whole ROI was built using all sources in the fourth Fermi-LAT catalog <cit.> located within 20^o of the nova V1716 Sco, and the target is included in the model at the nova position of (R.A., decl.)=(17^o22^'44.88^”, -41^o37^'16.0^”). To describe the TS value and flux time evolution, we conducted a refit of the gamma-ray data in each bin using binned likelihood analysis (gtlike). We created a daily light curve to enable us a more precise measurement of the epoch of the gamma-ray emission, as shown in Figure <ref>. In the daily light curve of Figure <ref>, the TS value reached to the maximum value (∼ 70), which corresponds to a detection significance level larger 8 σ (i.e., √(TS) is about detection significance in σ). After the peak, the TS value decayed rapidly and the emissions with TS >4 (σ>2) were confirmed until ∼ 40 days after the nova eruption. To generate the spectrum, we performed the likelihood analysis using the data obtained from MJD 60055 to MJD 60094, during which the emission with TS>4 were confirmed. The gamma-ray spectrum can be well describe by a power-law function with an exponential cut-off, as describe: dN/dE∝ E^-γ_1 exp[-(E/E_c)^γ_2], where we fixed to γ_2=2/3. We obtained a power-law index of γ_1=1.98(7) and a cut-off energy of E_c=22.1(1) GeV. We obtained an averaged energy flux of F_γ=1.4(1)× 10^-11  erg cm^-2s^-1 in 0.1-300 GeV bands. Figure <ref> represents the spectrum in GeV bands. We also extracted the energy flux for the time bin that has TS∼ 70 and obtained F_γ∼ 1.4(3)× 10^-10  erg cm^-2 s^-1. According to GAIA archive[<https://dc.g-vo.org/gedr3dist/q/cone/form>], the distance of nova V1716 Sco is estimated as 3.16^+2.13_-1.62 kpc or 4.96^+1.75_-1.06 kpc in geometric or photogeometric measurements <cit.>. To estimate the total emitted energy in the gamma-ray bands, we integrated the daily flux detected with TS>4 and obtained 0.59(1)× 10^42  erg for d=3.16 kpc and 1.46(7)× 10^42  erg for d=4.96 kpc. Figure <ref> shows the total emitted energy and duration of the gamma-ray emission of GeV novae. It can be seen that the total emission gamma-ray of V1716 Sco is similar to those of other novae detected by Fermi-LAT. §.§ NuSTAR data observed V1716 Sco between 2023-04-21 21:36:56(t_0 + 1.5d) to 2023-04-23 09:54:11 (t_0 + 3.0d) (ObsID:80801335002) after a day of optical eruption with a total exposure time 65 ks. For the analysis, we used the tasks of and to extract source and background spectra and light curves from the focal plane modules A (FPMA) and B (FPMB). We generated the source and background extraction region files using by choosing a circular region of ∼ 50" radius centered on the source and the background region close to the source, respectively. We grouped the channels at least 30 counts per bin for FPM A/B data. Preliminary analysis of <cit.> suggests that the X-ray spectrum is consistent with that of a heavily absorbed thermal plasma with k_BT=31 ± 13 keV with k_B begin Boltzmann constant, and N_H= (82 ± 15) × 10^22cm^-2. 
We fit observed spectra of V1716 Sco (Figure <ref>) with the power-law model () or the thermal plasma emission (). For the power-law model, we obtain a photon index of 2.55± 0.83 and a hydrogen column density of N_H =(144.56 ± 58.14)×10^22 cm^-2 (Table <ref>), which for the thermal plasma emission model, k_BT∼ 20.15 ± 16.13keV and N_H =(117.02 ± 44.60)×10^22 cm^-2 that are consistent with previous study. We obtained an unabsorbed flux of (2-3)× 10^-12 erg cm^-2 s^-1, as Table <ref>. During observation, the TS-value of the Fermi-LAT observation reached to the maximum value of ∼ 70. It is therefore found that around the GeV peak, the gamma-ray luminosity is about two order of magnitude larger than the radiation luminosity in 5-50 keV bands. §.§ Swift-XRT data had continuously monitored V1716 Sco since the discovery of the nova eruption. We create the light curve and hardness ratio of the X-ray emission using the XRT web tool[<https://www.swift.ac.uk/user_objects/>] <cit.>. We use only grade 0 events in the analysis to minimize the optical loading and eliminate the pile-up effect. The light curves after eliminate the pile-up were shown in the bottom panel of Figure <ref> and Figure <ref>. The hardness ratio (bottom panel of Figure <ref>) continuously decreased and the observed emission entered SSS phase at around MJD 60095. To investigate the spectral properties, we downloaded the archival data from HEASARC Browse[<https://heasarc.gsfc.nasa.gov/cgi-bin/W3Browse/w3browse.pl>] and performed the analysis with the HEASOFT version 6.31.1 and its SWIFTDAS package with the updated calibration files. The clean event lists were obtained using the task of the HEASOFT and extract the spectrum using . We grouped the source spectra to ensure at least 1 count per spectral bin and fit the spectra using . For the spectral analysis of the Swift data, we limit to the data taken before MJD 60130, after which the pile-up effect will be severe. We present the spectra in SSS phase with the data taken by (section 2.4). We employed the thermal plasma emission ( model in ) to fit the observed spectra. As Table <ref> shows, the spectra of the initial stage of the observations are fitted by the optically thin thermal plasma emission with a temperature of several keV. It may be reasonable to assume that the component of thermal plasma emission originates from the shocked heated plasma <cit.>, k_BT ≈ 1.2  keV (v/10^3 km s^-1)^2, where v is the shock velocity. As Figure <ref> and <ref> show, measured the X-ray evolution in the epoch overlapping with the GeV detection by Fermi-LAT. After the first detection, the flux in 1.0-10 keV bands increased rapidly and reached to the local maximum value at about MJD 60075 (about 20 days eruption). We found that the flux in 0.3-10 keV bands around MJD 60075 is ∼ 1.6× 10^-11  erg cm^-2 s^-2. Comparing the energy flux in GeV bands around MJD 60075, we found that the ratio of the GeV luminosity over the X-ray luminosity (0.3-10 keV bands) is L_γ/L_x ∼ 100, which is typical value for nova detected in GeV bands (see Figure 5 of <cit.>). After the local peak, the X-ray count rate in the hard band (1.0-10 keV bands) and the hardness decreased, while the count rate in soft bands (0.3-1.0 keV bands) rapidly increases, as Figure <ref> shows. The spectra after the peak is required the BB component, as Table <ref> indicates. Although we cannot constrain the radius of the emission region, the temperature of ∼ 50 eV suggests the thermal emission from the WD surfaces. 
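Returning to the shocked-plasma component, the relation k_BT ≈ 1.2 keV (v/10^3 km s^-1)^2 quoted above can be inverted to translate the fitted plasma temperatures into shock velocities. A small illustrative calculation (assuming only the relation itself) is:

# Illustrative: shock velocities implied by the fitted thermal-plasma temperatures.
import numpy as np

def shock_velocity_kms(kT_keV):
    return 1e3 * np.sqrt(kT_keV / 1.2)

for kT in (1.0, 5.0, 20.0):      # representative apec temperatures in keV
    print(f"k_B T = {kT:5.1f} keV  ->  v ~ {shock_velocity_kms(kT):6.0f} km/s")
# ~20 keV, as measured around the eruption, corresponds to v ~ 4000 km/s,
# within the range of ejecta velocities typically observed in novae.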
As the light curve in Figure <ref> shows, the observed X-ray emission enters the SSS phase about 40 days after the nova eruption. §.§ NICER data The Neutron Star Interior Composition Explorer (NICER) observed the target in the SSS phase, covering the period from three to four months after the optical eruption. We apply the standard task to extract the cleaned event file and perform the barycentric time correction using the task of . Figure <ref> presents the light curve of the whole observation, and Figure <ref> shows the light curves of two data sets. In Figure <ref>, we can clearly confirm a periodic modulation with a period of ∼ 80 s, as reported by <cit.>. We searched for the periodic modulation in the light curve (10 s time bins) of each data set and confirmed a significant signal in 14 data sets (Table <ref>). Combining these 14 data sets, we obtained a period of 79.10± 1.98 s in the Lomb-Scargle (LS) periodogram (top panel of Figure <ref>). As Table <ref> shows, since the periods obtained with different data sets are consistent within the errors, we could not confirm a temporal evolution of the period in the LS periodogram. The folded pulse profile can be described by a single broad peak, as the bottom panel of Figure <ref> shows; we present the pulse profile of each data set folded with 79.10 s in the Appendix (Figure <ref>). To investigate the stability of the position of the pulse peak in the folded light curve, we fit each pulse profile folded with 79.10 s with a Gaussian function and obtain the fitting parameters with a Monte Carlo method. We find that the position of the pulse peak shows a rapid temporal variation, as Figure <ref> shows. This variation of the peak phase makes it difficult to create an ephemeris of the period evolution. It is probable that the period of the modulation is not stable with time, because the hot spot region on the WD's surface may shift with time, or because the periodic modulation originates from a different mechanism (e.g., g-mode oscillations). Because of the high timing resolution of NICER, the effect of pile-up at a count rate of ∼ 10^2  s^-1 is not a serious problem, compared to the data of the Swift-XRT <cit.>. To investigate the emission of the pulsed component, we carried out phase-resolved spectroscopy. We extract the spectra of the on-pulse and off-pulse phases, which are indicated in Figure <ref>, and obtain the spectrum of the pulsed component by subtracting the spectrum of the off-pulse phase from that of the on-pulse phase (Figures <ref> and <ref>). The spectra can be well fitted by BB radiation with a temperature of 30-40 eV (Table <ref>), which is a typical value in the SSS phase of novae. The hydrogen column density is N_H=(0.5-0.8)× 10^22  cm^-2 and did not show a large temporal evolution. As the fifth and sixth columns of Table <ref> show, the X-ray emission in the SSS phase exceeds the Eddington luminosity, with an emission size of R_bb∼ 10^10 cm, for which we assume a typical distance of 4 kpc. Under the assumption of BB radiation, the super-Eddington luminosity was observed for about 30 days and until ∼ 130 days after the optical eruption. § DISCUSSION As shown in Table <ref> and Figure <ref>, the BB temperature and the radius of the hot spot exhibit an anti-correlation. <cit.> reported that, as the X-ray emission from a fading post-eruption nova declines while the residual material remains hot, the radius of the hot spot begins to shrink and, consequently, the BB temperature rises.
As Table <ref> shows, on the other hand, the bolometric luminosity estimated from the BB fits in the SSS phase exceeds the Eddington luminosity of L_edd∼ 1.5× 10^38ergs s^-1 for a solar-mass object. Some novae, such as RS Oph and SMCN 2016-10a, have also exhibited such super-Eddington emission with the BB model <cit.>. <cit.> has proposed that during the super-Eddington phase of a nova, the envelope develops a porous structure and this porosity significantly reduces the effective opacity of the atmosphere. <cit.>, on the other hand, argued that BB models for the emission from a hot WD can overestimate the luminosity and that the super-Eddington radiation should not be considered physically realistic. Instead, only the comparative trends should be taken into account <cit.>. As Figure <ref> shows, the significant detection of the hard X-ray (1-10 keV) band of the observation started at about 10 days after the optical eruption, when gamma-ray emission was still detected. Figure 3 in <cit.> illustrates the evolution of the X-ray emission with time since the discovery for 10 Fermi-LAT-detected classical novae. It shows that a significant detection of X-ray emission started only after the gamma-ray emission disappeared. There are, however, some novae with a red-giant companion in which X-ray emission (1-10 keV) and GeV emission have been detected concurrently. One example is nova V407 Cyg, which was observed in gamma-rays and Swift X-rays simultaneously; its companion star is a red giant, resulting in an external shock, and the absorbing column is never higher than 10^23 cm^-2. The non-detection of X-ray emission (1-10 keV) during the gamma-ray emission of classical novae may be due to a combination of large column densities ahead of the shocks, which absorb the X-rays, and the suppression of X-rays by corrugated shock fronts. As shown in Table <ref>, the harder X-ray emission (above 10 keV) was detected one day after the optical eruption with a column density of several × 10^23 cm^-2. In Table <ref>, on the other hand, column densities of a few × 10^22 cm^-2 were measured by Swift after approximately 10 days of the optical eruption, during which the gamma-ray emission persisted at approximately ∼ 2 σ significance. This indicates that the column density of the expanding ejecta decreases with time. <cit.> reported that the shock associated with the nova eruption can accelerate the high-energy particles responsible for the gamma-ray emission that extends down to the NuSTAR band. NuSTAR, operating within the 3-79 keV band, has detected harder X-ray emission simultaneously with gamma-rays in three novae: V5855 Sgr, V906 Car, and YZ Ret <cit.>. Nova V1716 Sco is the fourth one in which harder X-ray emission has been detected simultaneously with gamma-rays. We found that the hard X-ray emission (above 10 keV) from NuSTAR and the GeV emission from Fermi-LAT were detected on the same day, one day after the optical eruption. The flux in the 5-50 keV band from NuSTAR is ∼ 2.5×10^-12 erg cm^-2 s^-1, while an energy flux of F_γ∼ 1.4(3)× 10^-10  erg cm^-2 s^-1 in the GeV band was obtained on the same day (TS∼70). The ratio of the GeV luminosity over the harder (5-50 keV) X-ray luminosity is thus around 100. The simultaneous detection of X-ray emission with transient gamma-rays allows for the quantification of the properties of internal shocks and even the verification of internal shock models. Nova V1716 Sco is one of the rare examples that show a significant detection by Swift during the gamma-ray emission, and it features a 79.10-second quasi-periodic oscillation detected by NICER.
Oscillations with a period of 79.10 ± 1.98 seconds are preferentially observed in the SSS phase, around 95 days after the optical eruption; the folded light curve shows a single broad pulse peak whose phase varies with time. The spin modulation of the WD is the most likely explanation for the QPO; however, the period of the modulation is not stable with time, so the hot spot region on the WD's surface may shift with time, or the periodic modulation may originate from a different mechanism, such as a stellar oscillation. § SUMMARY We conducted a joint analysis of NuSTAR, Swift, NICER, and Fermi-LAT observations of nova V1716 Sco. We confirmed that the gamma-ray emission emerged one day after the optical eruption with a TS value of 70. The gamma-ray activity with a TS value above 4 lasted for 40 days. Harder X-ray emission was observed by NuSTAR one day after the optical eruption. We fitted the spectrum with a power-law model with a photon index of 2.55± 0.83 or with optically thin thermal plasma emission (apec) with a temperature of 20.15±16.13 keV. The significant detection of the hard X-ray emission by Swift was confirmed during the detection of the GeV emission by Fermi-LAT. V1716 Sco is thus the first example of a classical nova in which the X-ray detection by Swift is concurrent with the gamma-ray emission. The X-ray spectrum taken by Swift just after the emergence was fitted by emission from a thermal plasma, which likely originates from the shock. The hardness ratio rapidly decreased over time, and the observed emission entered the SSS phase at ∼ 40 days after the nova eruption. Using the NICER observations, we find that the BB fit predicts a super-Eddington luminosity, as also observed in other novae. We reconfirmed the periodic oscillation with a period of 79.10±1.98 s in the SSS phase, which is consistent with the result reported in <cit.>. We found that the phase location of the pulse peak is not stable with time. § ACKNOWLEDGEMENTS We acknowledge with thanks the variable star observations from the AAVSO International Database contributed by observers worldwide and used in this research. This work made use of data supplied by the UK Swift Science Data Centre at the University of Leicester. J.T. is supported by the National Key Research and Development Program of China (grant No. 2020YFC2201400) and the National Natural Science Foundation of China (grant No. 12173014). L.C.-C.L. is supported by NSTC through grants 110-2112-M-006-006-MY3 and 112-2811-M-006-019. Facilities: Fermi, NuSTAR, Swift, NICER. § THE PULSE PROFILE FROM NICER DATA Figure <ref> shows the pulse profiles obtained with the period of 79.10 s from the LS periodogram; the vertical dashed and solid lines define the on- and off-pulse phases. Figure <ref> shows the phase-resolved spectra of the off- and on-pulse data. We use a thermal BB () with a temperature of ∼ 30-40 eV and neutral H absorption () with a column density of N_H∼(0.5-0.8) × 10^22cm^-2 to fit the spectra; all the spectral parameters are listed in Table <ref>.
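For reference, the period search and folding described above can be reproduced with a short script. The sketch below is illustrative only: it assumes a barycentre-corrected, 10-s binned light curve, and the file and column names are hypothetical.

# Illustrative sketch of the Lomb-Scargle period search and epoch folding.
import numpy as np
from astropy.io import fits
from astropy.timeseries import LombScargle

with fits.open("nicer_lightcurve_10s.fits") as hdul:
    t = hdul[1].data["TIME"]          # seconds, barycentre-corrected
    rate = hdul[1].data["RATE"]       # counts/s in the soft (SSS) band

freq = np.linspace(1 / 200.0, 1 / 20.0, 20000)     # search periods between 20 s and 200 s
power = LombScargle(t, rate).power(freq)
best_period = 1.0 / freq[np.argmax(power)]
print(f"best period ~ {best_period:.2f} s")        # ~79 s for the data sets listed in the table

phase = (t % best_period) / best_period            # fold the light curve on the best period
weighted, edges = np.histogram(phase, bins=16, weights=rate)
counts, _ = np.histogram(phase, bins=16)
pulse_profile = weighted / np.maximum(counts, 1)   # mean rate per phase bin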
http://arxiv.org/abs/2406.18469v1
20240626162742
Universal Anomaly Detection at the LHC: Transforming Optimal Classifiers and the DDD Method
[ "Sascha Caron", "José Enrique García Navarro", "María Moreno Llácer", "Polina Moskvitina", "Mats Rovers", "Adrián Rubio Jímenez", "Roberto Ruiz de Austri", "Zhongyi Zhang" ]
hep-ph
[ "hep-ph" ]
/: Implementation of fully self-consistent finite-temperature many-body perturbation theory for molecules and solids [ July 1, 2024 ==================================================================================================================== § INTRODUCTION The Standard Model (SM) of particle physics cannot (unfortunately) describe the complete world. It is scientifically accepted that phenomena such as dark matter, baryon asymmetry, neutrino masses, and the description of gravity in regimes where quantum effects are significant require extensions beyond the SM and/or General Relativity. This extension may give rise to new particles and/or effects that are visible in the data from particle physics experiments. Assuming a signal and supervised event classifiers: The search for these effects is like looking for a needle in a haystack. The effects of new particles generally have a small cross-section and have to be filtered out of an enormous number of background events. Classification algorithms have been used for this purpose since the 1980s or 1990s, see e.g. <cit.> and since 2014 <cit.> more and more deep learning algorithms have been used. Classification algorithms such as neural networks learn with labelled data (i.e. supervised) to distinguish signals and background events given physical features (such as the 4-vectors and object types) based on training data by adjusting weights to minimize a so-called loss function. Typically, the loss function is a binary cross entropy, that measures the difference of the true class probability distribution to the predicted class probability distribution. Minimizing this penalty function then leads to the classifier learning the ratio of probability distributions in the feature space between signal and background. In recent years, deep learning architectures have been shown to provide better classifiers than the boosted decision trees widely used in high-energy physics. More recently, fully connected deep learning architectures have been overtaken by graph networks and even attention-based transformer architectures, inspired by their success in language models. Not assuming a signal and unsupervised anomaly detection: However, the question arises what is the signal ?. As mentioned above, the exact signal of physics beyond the SM (BSM) is not known, so a signal-by-signal search may not be efficient. One search method that has gained more and more attention in recent decades is the systematic search for anomalies, i.e., the model-independent search for events that are not SM-like. One method is to test numerous signal regions using automated procedures  <cit.>. Other approaches use unsupervised (or weakly supervised) classifiers. Machine Learning (ML) algorithms are trained without a signal model to estimate the distribution of SM events. Unsupervised refers to no signal data being used to train the ML model. Many approaches use AutoEncoders, mapping data features x to a code z and back to x', assuming poor reconstruction of anomalies. In extreme unsupervised cases, only the expected density of the Standard Model is needed <cit.>, derived from simulated events, direct data (assuming a small signal), or auxiliary measurements. Unsupervised Deep Learning techniques to search for new physics are discussed in <cit.>. Comparisons of unsupervised approaches are in the Dark Machines challenge <cit.> using datasets to compare supervised strategies with unsupervised approaches. 
Another comparison of model-independent approaches using different black boxes based on data and expectation density comparisons is the LHC Olympics <cit.>. Contribution: Proposal to turn a Classifier into an Anomaly Detector: In this study, a novel approach is introduced: transforming state-of-the-art supervised classifiers into anomaly detectors with minimal changes. As supervised classifiers, such as those based on graph networks and transformers with physical information, continue to improve, we aim to exploit these advances for anomaly detection. The objective is to find a simple method to utilize these classifiers, widely used at the Large Hadron Collider (LHC), for anomaly detection. This adaptation could potentially extend the applicability of anomaly detection across all analyses performed at the LHC, with minimal additional cost. Given that anomaly detectors are currently rare, this approach has the potential to significantly improve the ability to detect unexpected phenomena in LHC data. This article introduces a simple but effective strategy called Discriminatory Detection of Distortions (DDD) for detecting anomalies. With this approach, the structure of existing algorithms does not need to be changed. Instead, the current model is retrained with a distorted data set as a signal. The effectiveness of the DDD strategy is compared against leading models from the Dark Machines competition, in particular the DeepSVDD model and a novel model known as Deep Robust One-Class Classification (DROCC), as outlined in Ref <cit.>. The models are evaluated on five different datasets: one is from the search for 4-top production at the LHC <cit.>, the other four from the Dark Machines Anomaly Score (DarkMachines) challenge. This analysis assesses the performance of two different architectures when coupled with these methods. One is a simple multi-layer perceptron (MLP) and the other is the particle transformer model (ParT) <cit.>. Furthermore, an approach incorporating pairwise features and physical interactions in the attention matrix (ParT + SM interactions), identified as the best event classifier in Ref. <cit.>, has been integrated into evaluation. This comprehensive evaluation provides insight into the effectiveness of the different architectures in conjunction with these anomaly detection techniques. Finally, it is important to emphasize that the proposed approaches, when implemented in experiments such as those at the LHC, do not require major changes to the systematic uncertainties or other aspects of the analysis chain. The statistical test, which checks whether the data are consistent with expectations for high output scores, would just be conducted twice. First, it would be applied to the results of the anomaly score (e.g., from methods like DDD or DeepSVDD to search for anomalies), and second, to the results of the classifier (to search for known signals). In addition, the construction of the algorithm for recognising anomalies from the classifier only requires re-training (as with DDD). The work is organized as follows: Section <ref> describes the datasets, Section <ref> outlines the machine learning architectures, Section <ref> presents various techniques for transforming a supervised classifier into an anomaly detector, and Section <ref> discusses the results. § DATASETS Proton-proton collisions were simulated at a center-of-mass energy of 13 TeV. 
The hard scattering events were generated using version 2.7 <cit.> with the parton distribution function set <cit.>, using the 5 flavor scheme. Parton showering was performed with version 8.239 <cit.>, and the MLM merging scheme <cit.> with a merging scale of Q_0 = 30 GeV was employed to merge high-multiplicity hard scattering events with the parton shower. The fast detector simulation was performed using version 3.4.2 <cit.> with a modified ATLAS detector map, and jet clustering was performed with FastJet <cit.> using the anti-kt algorithm with a jet radius of 0.4. §.§ The 4-top dataset The background processes involved are tt̅ + X, where X = Z, W^+, W^+W^-, H, and the signal process is the production of four top quarks. For details regarding the object preselection adopted, we refer to Ref. <cit.>. In terms of object and event selection, we defined a signal region <cit.> that requires at least six jets, of which at least two are b-tagged, H_T > 500 GeV, and two leptons of the same sign or at least three leptons for each event. More details can be found in Ref. <cit.>. §.§ The Dark Machines dataset The background processes included are listed in Table 2 of Ref. <cit.>, and the signals consist of new-physics processes, such as supersymmetric processes with and without R-parity conservation as well as Z'-resonances (see Table 3 of Ref. <cit.> for more details). For details about the object preselection adopted, we refer to Ref. <cit.>. The event selection criteria for the four channels of the DarkMachines dataset are detailed in Table <ref>. §.§ Data Format The generated Monte Carlo data were saved as ROOT files and processed into CSV files using the described event selection. Each line in the CSV files contains low-level kinematical information such as the four-momentum of each object as well as a label denoting the object type: jet (j), b-tagged jet (b), electron (e-) and positron (e+), muon (mu-) and anti-muon (mu+), and photon (g). In addition, the missing transverse energy and its azimuthal value are provided per event. The structure of the input dataset is chosen to be different for the MLP and the ParT models when exploring the DeepSVDD, DROCC, and DDD techniques. For the MLP case, the whole information is provided as an input layer with a fixed size. A cut on the maximum number of objects is applied in order to avoid a large contribution of zeros coming from the padding (up to 5 jets, 5 b-jets, 1 electron, 1 positron, 1 muon, and 1 antimuon). This limitation on the number of objects is not applied for the ParT architecture, since its power in establishing correlations through the attention mechanism should avoid any bias coming from such padding. The DeepSVDD and DROCC models can utilize both MLP and ParT networks for anomaly detection, adapting their input format accordingly. However, the DDD model exclusively employs transformers, allowing the ParT input format to be used without imposing limitations on the number of objects. Since we were adding and removing objects, it was not practical to adapt the MLP to the DDD approach.
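As a rough illustration of the fixed-size MLP input described above, the following sketch flattens one event into a zero-padded vector with the object caps quoted in the text; the tuple layout, field names, and ordering are assumptions made for this example and do not reproduce the exact CSV format of the published datasets.

```python
# Sketch of flattening one event into the fixed-size MLP input described above.
# Each object is assumed to be a (obj_type, E, pt, eta, phi) tuple; unused slots
# are zero-padded.  Field names and ordering are illustrative.
import numpy as np

MAX_OBJECTS = {"j": 5, "b": 5, "e-": 1, "e+": 1, "mu-": 1, "mu+": 1}
N_FEATURES = 4  # (E, pt, eta, phi) per object

def event_to_vector(objects, met, met_phi):
    slots = {k: [] for k in MAX_OBJECTS}
    for obj_type, e, pt, eta, phi in objects:
        if obj_type in slots and len(slots[obj_type]) < MAX_OBJECTS[obj_type]:
            slots[obj_type].append([e, pt, eta, phi])
    parts = []
    for obj_type, cap in MAX_OBJECTS.items():
        block = np.zeros((cap, N_FEATURES))
        if slots[obj_type]:
            block[: len(slots[obj_type])] = np.array(slots[obj_type])
        parts.append(block.ravel())
    parts.append(np.array([met, met_phi]))
    return np.concatenate(parts)  # length = 14 * 4 + 2 = 58

example = event_to_vector(
    objects=[("j", 210.0, 180.0, 0.4, 1.2), ("b", 95.0, 80.0, -1.1, 2.9)],
    met=120.0, met_phi=-0.7)
print(example.shape)  # (58,)
```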
§ MACHINE LEARNING ARCHITECTURES §.§ Multi-Layer Perceptron In order to interpret the success of a complex architecture performing an anomaly detection task, it is crucial to compare with simpler architectures playing the role of benchmarks <cit.>. With this goal, a deep MLP implementation has been developed, as illustrated in Table <ref>. The MLP consists of two hidden layers, each followed by a batch normalization layer. The first hidden layer has 16 neurons with ReLU activation, and the second hidden layer has 8 neurons with ReLU activation. This simple architecture is intended to provide a baseline for performance comparison. §.§ Particle Transformer The Particle Transformer (ParT) <cit.> is a transformer-based architecture designed for jet tagging, inspired by techniques from natural language processing <cit.>. The ParT processes four-momentum vectors of the different particles in the event and auxiliary variables. The architecture includes multi-head attention mechanisms with eight Particle Attention Blocks and two Class Attention Blocks. In Particle Attention Blocks, the multi-head attention mechanism incorporates an interaction matrix U to emphasize particle interactions. This matrix U is designed to include both pairwise features and the SM interaction matrix with running coupling constants <cit.>, which depend on the energy scale of the interaction, thereby capturing the dynamics of particle interactions more accurately. This variation of the self-attention mechanism is defined as: Attention(Q,K,V) = SoftMax(Q K^T / √(d) + U) V where Q, K, and V are d-dimensional linear projections of the particle embedding based on the input embedding or the output of the previous layer. The application of the attention mechanism correlates every particle with all others, allowing the model to handle varying numbers of particles and maintain permutation equivariance. Classification is achieved by appending a classification token to the particle embeddings before the final layers of the transformer. This token interacts with the embeddings and is processed by a fully connected network (FCN), which also receives E_T^miss and E_T^missϕ to produce the classification label <cit.>. Pairwise features, specifically the logarithms of Δ R_ij and m_ij^2, were selected based on previous work <cit.>. Here, Δ R_ij measures the angular distance between two particles in the detector's pseudo-rapidity (η) and azimuthal angle (ϕ) space, while m_ij^2 is the invariant mass squared of two particles. The logarithm of these features is taken for better numerical handling and improved learning efficiency. Additionally, the implementation of this work uses the third interaction matrix, referred to as the SM interaction matrix (SM matrix[3]), as described in the previous studies <cit.>. This matrix incorporates energy-dependent coupling constants based on the Standard Model of particle physics. The running of the fine-structure constant α and the strong coupling constant α_s are calculated using their respective Renormalization Group Equations (RGE). By including these running coupling constants, the SM interaction matrix provides a structured approach to encoding the strengths of physical interactions in the ParT, enhancing its ability to learn and generalize from the data. Two versions of the ParT model are explored: one that incorporates both pairwise features and the SM interaction matrix (ParT+SM), and another that does not include these features (ParT). Despite these differences, the main characteristics of the architecture are consistent across both versions. This is done to ensure that performance differences arise exclusively from the training techniques rather than a particular set of hyperparameters. Hyperparameters such as learning rates and batch sizes are adapted for each training technique to optimize performance. Table <ref> summarizes the Particle Transformer hyperparameters common to both versions considered in this study.
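The modified self-attention above can be illustrated with a minimal single-head sketch (in PyTorch); head splitting, normalization layers, and the actual construction of U from pairwise features or SM couplings are omitted, and all dimensions are illustrative.

```python
# Single-head sketch of the attention variant above: the pairwise interaction
# matrix U is added to the scaled dot-product logits before the softmax.
# Shapes: x is (n_particles, d_model); U is (n_particles, n_particles).
import math
import torch
import torch.nn as nn

class BiasedSelfAttention(nn.Module):
    def __init__(self, d_model: int):
        super().__init__()
        self.q = nn.Linear(d_model, d_model)
        self.k = nn.Linear(d_model, d_model)
        self.v = nn.Linear(d_model, d_model)
        self.d = d_model

    def forward(self, x: torch.Tensor, U: torch.Tensor) -> torch.Tensor:
        Q, K, V = self.q(x), self.k(x), self.v(x)
        logits = Q @ K.transpose(-2, -1) / math.sqrt(self.d) + U
        return torch.softmax(logits, dim=-1) @ V

# U could encode, e.g., log(Delta R_ij), log(m_ij^2), or SM coupling strengths.
attn = BiasedSelfAttention(d_model=16)
x = torch.randn(7, 16)   # 7 particles in the event
U = torch.randn(7, 7)    # illustrative interaction bias
out = attn(x, U)         # (7, 16)
```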
§ TECHNIQUES FOR CONVERTING A SUPERVISED CLASSIFIER INTO AN ANOMALY DETECTOR The previously described architectures were originally developed for supervised classification tasks. This section presents three methods for adapting these architectures for anomaly detection. §.§ DeepSVDD: Deep Support Vector Data Description DeepSVDD is recognized as a well-established method for anomaly detection <cit.>. The DeepSVDD method was identified as one of the best performing methods in the DarkMachines challenge, outperforming autoencoder approaches in particular <cit.>. However, the use of complex architectures in combination with this technique remains unexplored. DeepSVDD is a one-class classifier that predicts whether an input sample belongs to the normal class by mapping it to a hypersphere in the feature space and measuring its distance from the center of this hypersphere. The adaptation of such architectures involves the integration of an additional output layer, the dimension of which is a new hyperparameter of the technique. Training focuses on accurately predicting a particular point of the output space, which itself is another hyperparameter. The effectiveness of the training procedure is measured by the loss function, which quantifies the distance of events from this point. Geometrically, the training performs a mapping of the background events around this central point, with events that deviate significantly from this center being classified as anomalies. There are infinitely many possibilities for the choice of a center in the output space; the option taken in this study is the one proposed in Ref. <cit.>: the center is computed as the mean of the network representations that result from an initial forward pass on the training data. The same work also proposes that bias terms need to be removed from the network to avoid trivial solutions. In addition to the previous considerations, not making use of dropout has been shown to be a convenient approach, significantly improving the stability of the training. DeepSVDD models can typically achieve this training objective with different sets of weights, representing different minima in the weight space. However, some of these minima may lead to `sphere collapse', where the model predicts the output independently of the input. To enhance the stability of this method, an ensemble approach, as proposed in Ref. <cit.>, is used. This involves computing the mean of the scores from multiple models. Separate models are trained with 2, 4, 8, and 16 output dimensions, respectively, and these models are then combined to form the ensemble.
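A minimal sketch of this DeepSVDD-style setup (in PyTorch; layer sizes, optimizer settings, and the toy data are illustrative assumptions) could look as follows: the network has no bias terms, the center is fixed from an initial forward pass, the training loss is the mean squared distance to the center, and the anomaly score is averaged over an ensemble of output dimensions.

```python
# Minimal DeepSVDD-style sketch: a bias-free network maps events to a small
# output space; the center c is fixed from an initial forward pass, the loss
# is the mean squared distance to c, and the anomaly score of an event is its
# distance to c.  Architecture and sizes are illustrative.
import torch
import torch.nn as nn

def make_net(n_in: int, n_out: int) -> nn.Sequential:
    # bias=False avoids the trivial solutions mentioned above.
    return nn.Sequential(nn.Linear(n_in, 32, bias=False), nn.ReLU(),
                         nn.Linear(32, n_out, bias=False))

def train_deep_svdd(background: torch.Tensor, n_out: int, epochs: int = 10):
    net = make_net(background.shape[1], n_out)
    with torch.no_grad():
        center = net(background).mean(dim=0)      # center from an initial pass
    opt = torch.optim.Adam(net.parameters(), lr=1e-3)
    for _ in range(epochs):
        for batch in torch.split(background, 256):
            opt.zero_grad()
            loss = ((net(batch) - center) ** 2).sum(dim=1).mean()
            loss.backward()
            opt.step()
    return net, center

def svdd_score(net, center, x):
    with torch.no_grad():
        return ((net(x) - center) ** 2).sum(dim=1)

# Ensemble over output dimensions, with scores averaged as described above.
background = torch.randn(5_000, 58)
models = [train_deep_svdd(background, n_out) for n_out in (2, 4, 8, 16)]
test = torch.randn(100, 58)
score = torch.stack([svdd_score(n, c, test) for n, c in models]).mean(dim=0)
```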
§.§ DROCC: Deep Robust One-Class Classification DROCC <cit.> is a one-class classification method that assumes data from the target class lies on a well-sampled, locally linear, low-dimensional manifold. This assumption enables DROCC to generate synthetic outliers during training to robustly learn a boundary around the target class data. The main idea is to train a classifier that not only accurately classifies the given `normal' data but also is robust enough to classify unseen and potentially very different `anomalous' data as being out-of-class. This is done through a loss function, which includes a component that actively searches for adversarial examples near the `normal' data points. These adversarial examples are synthetic data points that are slightly perturbed versions of the normal data and are hypothesized to lie close to the decision boundary between normal and anomalous classes. The objective is to ensure that the model can correctly classify these challenging examples as anomalies, thereby enhancing its robustness to real anomalous data it may encounter. In this way, the method avoids the phenomenon of `sphere collapse'. In the proposed approach, the model training procedure uses the test data with signals to optimize the training process by maximizing the Area Under the Curve (AUC). This inclusion of test data makes the method not entirely unsupervised (it can be considered as a form of weak supervision), as it stabilizes the otherwise unstable training process. The AUC provides a more stable metric that reflects the model's ability to distinguish between normal and anomalous data points. Table <ref> lists the hyperparameter settings used for training the DROCC model. §.§ DDD: Discriminatory detection of distortions This subsection introduces a novel approach called the Discriminatory Detection of Distortions (DDD) method, which aims to enhance anomaly detection by training a discriminator model on both original and artificially modified (background) datasets. Most importantly, the DDD method requires no modification to the classifier. The shift from a classifier to an anomaly detector is achieved simply by changing the signal training data to a distorted version of the background data. The DDD method advances traditional anomaly detection techniques by integrating a variety of data modification strategies: * Data Shifting: Kinematic variables are adjusted using a normal distribution centered at 1 with a certain standard deviation (std). The shifting is applied to energy E, pseudorapidity η, and azimuthal angle ϕ. The transverse momentum p_T is recalculated based on the modified energy E and pseudorapidity η to preserve the mass of the particle. For E_T^missϕ, values are kept within [-π, π], and for E_T^miss, values are kept positive. * Object Addition: Random addition of new objects from other background events, including the object type and 4-vector, with a certain probability p. * Object Removal: Random removal of existing objects with the same specified probability p. This combination was determined to be the most effective in improving model performance compared to other combinations such as shifting + adding or shifting + removing objects alone. The primary objective is to train the discriminator model with new training sets, making it capable of accurately distinguishing between normal and distorted background data. However, if the background data is distorted too much, distinguishing between the background and the distorted background becomes trivial. Conversely, if the data is distorted too little, the task becomes impossible. To find an optimal balance, the typical AUC achieved in supervised signal training is employed as a metric to determine when the background is optimally distorted. In this study, an AUC value of 0.85 is targeted, as this threshold was found based on previous supervised searches, as described in Ref. <cit.>. As with DeepSVDD, an ensemble of the four best models that come closest to the target AUC value of 0.85 was used to stabilise the method. The final metrics are calculated based on the average predictions of these four models.
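A rough sketch of such a distortion function is given below. It implements the three modification strategies listed above on a simplified event representation (a list of (obj_type, E, p_T, η, ϕ) objects plus E_T^miss and its azimuth); the massless relation p_T = E/cosh η used to recompute the transverse momentum, the wrapping of the azimuth into [-π, π], and the single-object addition are simplifying assumptions of this illustration rather than the exact prescription of the analysis.

```python
# Sketch of the DDD distortion: multiplicative shifts drawn from a normal
# distribution centred at 1, random removal of objects, and random addition of
# objects taken from other background events.  Events are dicts with a list of
# (obj_type, E, pt, eta, phi) objects plus MET and its azimuth.  The massless
# relation pt = E / cosh(eta) is an assumption of this sketch.
import math
import random

def distort_event(event, pool, std=0.05, p=0.05, rng=random):
    objects = []
    for obj_type, E, pt, eta, phi in event["objects"]:
        if rng.random() < p:                      # object removal
            continue
        E = max(E * rng.gauss(1.0, std), 0.0)     # shift energy
        eta = eta * rng.gauss(1.0, std)           # shift pseudorapidity
        phi = phi * rng.gauss(1.0, std)           # shift azimuthal angle
        pt = E / math.cosh(eta)                   # recompute transverse momentum
        objects.append((obj_type, E, pt, eta, phi))
    if rng.random() < p and pool:                 # object addition from another event
        donor = rng.choice(pool)
        if donor["objects"]:
            objects.append(rng.choice(donor["objects"]))
    met = max(event["met"] * rng.gauss(1.0, std), 0.0)  # keep MET positive
    met_phi = (event["met_phi"] * rng.gauss(1.0, std) + math.pi) % (2 * math.pi) - math.pi
    return {"objects": objects, "met": met, "met_phi": met_phi}
```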
The inputs and outputs of the DDD method can be summarized as follows: §.§ Input * Background dataset D_background: A dataset containing only normal, unmodified data. * Hyperparameter std: Used for data modification via a normal distribution centered at 1 with standard deviations ranging from 0.00 to 0.10. * Probability range p: The probability for randomly adding and removing objects within each event, set between 1% and 10%. §.§ Output * Trained Discriminator Model (M_discriminator). * AUC score: A performance metric indicating how well the model differentiates between normal and modified (distorted) data. §.§ Algorithm Steps The modification function (f_mod) is designed to simulate potential anomalies by dynamically distorting the dataset (i.e. randomly adding and removing objects) in addition to shifting data values. Various values of the hyperparameter θ and of the probability p were tested. For each probability value and each θ, the model is trained, and the best performing values are retained once a good AUC is reached. Adjusting the hyperparameter θ is crucial for balancing the model's sensitivity to anomalies and ensuring robust performance across different conditions. Figure <ref> shows an example of the distorted and normal background distributions for a hyperparameter set used in the ensemble. Figure <ref> demonstrates the distribution of the total number of objects per event. A strength of the DDD method lies in its general approach to data modification. By simultaneously incorporating data shifting, object addition and object removal, the method effectively improves the discriminator model's ability to recognise anomalies. Furthermore, it is important to emphasize once again that the application of the method does not require a change in the network structure used for supervised signal-against-background training, which is commonly used in many HEP analyses. Various distortions of the background data can be analysed on a case-by-case basis. The method also uses ratios of probability densities and is invariant to variable transformations, which may be advantageous for some applications <cit.>.
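The calibration just described can be summarized schematically as follows; `distort_event` is the sketch shown earlier in this section, while `train_discriminator` and `roc_auc` stand for whichever classifier training routine and AUC implementation are used, so the snippet is a workflow outline under those assumptions rather than the actual code of this analysis.

```python
# Schematic calibration loop for DDD: scan the distortion hyperparameters,
# train a background-vs-distorted discriminator for each setting, and keep the
# four models whose AUC is closest to the 0.85 target.
def calibrate_ddd(background, train_discriminator, roc_auc,
                  stds=(0.02, 0.04, 0.06, 0.08, 0.10),
                  probs=(0.01, 0.03, 0.05, 0.08, 0.10),
                  target_auc=0.85, ensemble_size=4):
    trials = []
    for std in stds:
        for p in probs:
            distorted = [distort_event(ev, background, std=std, p=p)
                         for ev in background]
            model = train_discriminator(signal=distorted, background=background)
            auc = roc_auc(model, signal=distorted, background=background)
            trials.append((abs(auc - target_auc), model))
    trials.sort(key=lambda t: t[0])
    return [model for _, model in trials[:ensemble_size]]

# The DDD anomaly score of an event is the ensemble-averaged discriminator output.
def ddd_score(models, event):
    return sum(m(event) for m in models) / len(models)
```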
§ RESULTS The performance of the MLP and ParT architectures, combined with the DeepSVDD, DROCC and DDD techniques, was evaluated on several datasets: the search for 4-top events (4-top channel) and the DarkMachines challenge channels <cit.>. The results are presented in detail below. §.§ Performance for the DarkMachines challenge channels In this subsection, the performance of the models trained for this study is evaluated for the DarkMachines challenge channels. This comprehensive evaluation helps to understand the effectiveness of these methods in a variety of final states and BSM scenarios. Figure <ref> shows the results in such a way that a direct comparison with those from the DarkMachines publication is possible. The signal efficiency for a background efficiency setting of 1% and the AUC are shown. Different symbols represent different architectures: a circle for MLP, a square for ParT without interactions, and a triangle for ParT with SM couplings. The observed trends are remarkably similar. The results for the individual channels are summarized as follows (numerical details of the measured variables used in this work can be found in the Appendix): * Channel 1 exhibits high AUC values across most models, indicating strong anomaly detection capabilities. The ParT with pairwise interactions, including SM couplings, demonstrates superior performance in this channel. This highlights the ParT's ability to handle complex correlations effectively. While the MLP shows competitive results, it generally lags behind the more sophisticated ParT architectures, suggesting that simpler architectures might not capture the intricate patterns as well as the advanced ones. Most of the signals from Channel 1 are characterized by large E_T^miss and p_T of the jets compared with the background. This suggests that the presence of these high-energy objects contributes to the strong performance observed across all models. * Channel 2a, which is characterized by lower statistics, presents more challenges. Despite this, the ParT models continue to outperform the MLP. This indicates that while the ParT can still perform well with limited data, the inherent complexity of the dataset impacts the overall performance metrics. In this leptonic channel, E_T^miss is less relevant than in Channel 1, with the p_T of leptons playing an important role in discriminating signal from background. * Channel 2b shows trends similar to Channel 1, with robust performance metrics. The ParT models show the highest sensitivity in this channel as well. This channel shares similarities with Channel 2a, with leptonic features being key discriminators. * Channel 3 contains the largest dataset and provides the best overall performance metrics. This channel enables the models to fully utilize the available data for training and validation. The extensive dataset in this channel allows the models to generalize better and detect anomalies with higher precision, demonstrating the importance of a large amount of training data for improving model performance. Similar to Channel 1, E_T^miss and the p_T of jets are the most discriminative quantities, indicating a hadronic channel. In summary, the DeepSVDD method shows consistent performance across all channels. It is particularly effective in Channel 3, where it achieves competitive AUC values. The MLP architecture with DeepSVDD shows solid performance in channels with less complex data, suggesting that it is suitable for simpler anomaly detection tasks where the data does not exhibit highly complex patterns. The DROCC technique proves to be robust, especially with the ParT. It achieves high AUC values in all channels, especially in Channels 1 and 3. The ability of the DROCC method to generate synthetic outliers during training helps to create a robust decision boundary and improves the model's anomaly detection capabilities. The robustness of this method is evident in its consistent performance, making it a valuable technique for various anomaly detection scenarios. The comprehensive approach of the DDD method, which includes data shifting, object addition and object removal, has proven to be effective in training discriminators to distinguish between normal and anomalous data. This strategy allows the DDD method to achieve competitive AUC and signal efficiency metrics, demonstrating its potential for advanced anomaly detection tasks. Analysis of the DarkMachines challenge channels indicates that the ParT architecture consistently outperforms the MLP, particularly when pairwise interactions and SM couplings are included. Additionally, the choice of anomaly detection technique plays a crucial role in performance.
§.§ Performance for the 4-top channel This subsection is dedicated to evaluating the performance of these techniques on the detection of events with four top quarks in the final state, a challenging signature predicted by the Standard Model with a very low cross-section that can be produced at the LHC <cit.>. The idea behind this is: could 4-top production be detected as an anomaly without prior knowledge of its existence? From Figure <ref> it is observed that the DDD method with ParT architecture (red triangle) achieves the highest AUC value (0.754) and a significant signal efficiency (ϵ_S) of 0.033 at ϵ_B = 0.01. This highlights the strength of the ParT architecture in handling complex correlations through its attention mechanism, which effectively avoids bias from padding. The DROCC technique also shows robust performance, particularly with the ParT architecture, achieving an AUC of 0.75 and ϵ_S of 0.084 at ϵ_B = 0.01. The MLP with DROCC also performs well, with an AUC of 0.71, but slightly lower signal efficiency compared to the ParT architecture. The DeepSVDD method demonstrates the lowest AUC values among the techniques but still provides competitive results, especially for the MLP architecture with an AUC of 0.606 and a relatively high ϵ_S of 0.124 at ϵ_B = 0.01. The ParT architecture with DeepSVDD shows lower performance, with an AUC of 0.518 and ϵ_S of 0.007 at ϵ_B = 0.01, indicating that the simpler MLP architecture might be more suited for certain anomaly detection tasks when using DeepSVDD. These results suggest that while the ParT architecture generally outperforms the MLP in capturing complex correlations, the choice of anomaly detection technique plays a crucial role. The novel DDD method shows promising results and can be effectively combined with advanced architectures like ParT for improved anomaly detection. §.§ Comparison to the DarkMachines challenge algorithms Based on the significance improvement (SI), defined in Ref. <cit.> as SI = ϵ_S/√(ϵ_B), the total improvement (TI) was introduced to quantify the maximum SI across the various physics signals for each of the anomaly detection techniques and to combine the signals in multiple channels. This metric was used to analyse the minimum, median and maximum values for each of the physics models. Figure <ref> provides a comparative analysis of the TI values for different models, including the results of different models in the DarkMachines challenge and the mixture of theories from Ref. <cit.>. The predictions show remarkably high median TI values, especially for the DeepSVDD models based on ParT architectures, both with and without the pairwise interaction matrix. The DDD method applied to the ParT with pairwise features also shows significant performance. In comparison, the models from the DarkMachines challenge also show high mean TI values. However, a direct comparison shows comparable results or slightly higher values in some cases, indicating that the methods presented in this work are competitive. In terms of maximum TI, the models trained for this study, particularly the DeepSVDD and DDD with ParT architectures, reach a maximum TI of about 10. This is consistent with the best-performing models from the DarkMachines challenge, demonstrating the effectiveness of this approach. However, one notable difference is the minimum TI score. These models exhibit a lower minimum TI compared to the DarkMachines models.
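To make the SI definition concrete, the following short snippet evaluates it at the ϵ_B = 0.01 working point for the 4-top numbers quoted above; the TI then combines such SI values across signals and channels as prescribed in the cited reference, which is not reproduced here.

```python
# Significance improvement SI = eps_S / sqrt(eps_B), evaluated at the
# eps_B = 0.01 working point quoted above for the 4-top channel.
import math

def significance_improvement(eps_s: float, eps_b: float) -> float:
    return eps_s / math.sqrt(eps_b)

eps_b = 0.01
for method, eps_s in {"DDD + ParT": 0.033,
                      "DROCC + ParT": 0.084,
                      "DeepSVDD + MLP": 0.124}.items():
    print(method, round(significance_improvement(eps_s, eps_b), 2))
# -> DDD + ParT 0.33, DROCC + ParT 0.84, DeepSVDD + MLP 1.24
```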
In the DarkMachines challenge, models with a median TI greater than 2 across the different channels are considered the best performing and advance to the next round. The models presented in this study, particularly those using the DeepSVDD and DDD techniques with the ParT architecture, show a median TI that qualifies (or almost qualifies) them as top performers according to this criterion. It should also be noted that the winning models in this challenge were combinations of SVDDs with normalising flow-based models (called `combined in DarkMachines' in Figure <ref>). A combination with flow models would probably also further improve the performance on this ranking. The M models shown in the figure were created using a supervised classifier trained on a mixture of BSM signals, a method described in Ref. <cit.>. This method could be superior to unsupervised methods if the signal has similar properties to the `BSM mixture'. To summarize, the comparative analysis based on the median TI scores shows that these models, especially those using advanced architectures such as ParT with techniques such as DeepSVDD and DDD, perform quite well. They achieve competitive or even better results compared to the non-combined algorithms tested in the DarkMachines challenge, and overall the high mean and maximum TI values demonstrate the robustness and effectiveness of the models in detecting anomalies. This emphasizes the potential of turning state-of-the-art classifiers into anomaly detectors with minimal changes and using their capabilities to improve anomaly detection in high-energy physics experiments. § CONCLUSIONS This work demonstrates that a supervised classifier can be converted into an effective unsupervised anomaly detector. The proposed method, DDD, trained by discriminating between data with and without distortion, has shown promising results as an anomaly detector capable of identifying signals of new physics. A comparison was made between the DDD method, DROCC (a semi-supervised method incorporating signals during training), and the DeepSVDD method. Results indicate that the effectiveness of each model depends on the specific signal and channel, with the proposed methods providing highly effective anomaly detection. For purely unsupervised applications, it is recommended to use a combination of DeepSVDD and DDD. For those with additional resources, integrating flow models can further enhance performance. These findings suggest that the best supervised models often excel in an unsupervised context as well, such as the particle transformer with SM interactions. The study shows that it is likely possible to recognize 4-top production as an anomaly without prior knowledge of the process. Since the proposed anomaly detectors require only minimal changes to existing analysis systems, it is encouraged that the LHC community consider converting their supervised classifiers into anomaly detectors, in addition to supervised signal selection and signal fitting. It is suggested that every LHC search should include a search for anomalies. § ACKNOWLEDGEMENTS The author(s) gratefully acknowledges the computer resources at Artemisa, funded by the European Union ERDF and Comunitat Valenciana as well as the technical support provided by the Instituto de Física Corpuscular, IFIC (CSIC-UV). The work of R.
RdA was supported by PID2020-113644GB-I00 from the Spanish Ministerio de Ciencia e Innovación and by the PROMETEO/2022/69 from the Spanish GVA. The work of A. R. J. was supported by the project PID2021-124912NB-I00, funded by the Spanish Ministry MICIU. The work of M. M. Ll was supported by the RYC2019-028510-I grant (Spanish Ministry MICINN) and the ASFAE/2022/010 project (funded by Generalitat Valenciana and the European Union). § APPENDIX This appendix supplements the main text with additional detailed tables and analysis results. The tables included here summarise the performance metrics for the various machine learning models and techniques discussed in this paper, with a focus on their application to the 4-top channel and the DarkMachines challenge channels. Each table contains data on key performance indicators such as the AUC, ϵ_S and ϵ_B. § TABLES FOR PERFORMANCE COMPARISONS §.§ 4-top channel §.§ Dark Machines channels
http://arxiv.org/abs/2406.18233v1
20240626103050
Metrics with minimal singularities and the Abundance conjecture
[ "Vladimir Lazić" ]
math.AG
[ "math.AG", "math.CV", "math.DG", "14E30, 32U40, 32J25" ]
Minimal metrics and the Abundance conjecture]Metrics with minimal singularities and the Abundance conjecture Fachrichtung Mathematik, Campus, Gebäude E2.4, Universität des Saarlandes, 66123 Saarbrücken, Germany lazic@math.uni-sb.de To Thomas Peternell on the occasion of his 70th birthday, with admiration 2020 Mathematics Subject Classification: 14E30, 32U40, 32J25.Keywords: Minimal Model Program, Abundance conjecture, singular metrics, currents with minimal singularities, supercanonical currents § ABSTRACT The Abundance conjecture predicts that on a minimal projective klt pair (X,Δ), the adjoint divisor K_X+Δ is semiample. When χ(X,_X)≠0, we give a necessary and sufficient condition for the conjecture to hold in terms of the asymptotic behaviour of multiplier ideals of currents with minimal singularities of small twists of K_X+Δ. Furthermore, we prove fundamental structural properties as well as regularity and weak convergence behaviour of an important class of currents with minimal singularities: the supercanonical currents. The results of the paper indicate strongly that supercanonical currents are central to the completion of the proof of the Abundance conjecture for minimal klt pairs (X,Δ) with χ(X,_X)≠0. [ Vladimir Lazić ================== ł@subsectiontocline20pt2.5pc5pc § INTRODUCTION The Abundance conjecture is one of the most important open problems in algebraic geometry. It predicts that on a projective klt pair (X,Δ), if the adjoint divisor K_X+Δ is nef, then it is semiample; in other words, there exist a fibration f X→ Z and an ample -divisor A on Z such that K_X+Δ∼_ f^*A. The conjecture is classically known for curves and surfaces, whereas for threefolds it was a fantastic achievement obtained in <cit.>. In arbitrary dimension, the conjecture holds for pairs of log general type <cit.>, for pairs of numerical dimension 0 <cit.>, and for varieties satisfying Miyaoka's equality <cit.>. In dimensions at least 4, up to now there has only been one general result due to Lazić and Peternell <cit.>, and to Gongyo and Matsumura <cit.>: assuming the Minimal Model Program in lower dimensions, the divisor K_X+Δ is semiample if χ(X,_X)≠0 and if the pullback of K_X+Δ to a resolution of X is hermitian semipositive. Very little seems to be known about the Abundance conjecture in higher dimensions when χ(X,_X)=0, unless X is uniruled <cit.>. The papers <cit.> show, more generally, that half of the Abundance conjecture – the Nonvanishing conjecture – holds when χ(X,_X)≠0, if the pullback of K_X+Δ to a resolution of X has a singular metric with generalised algebraic singularities. This class of metrics, discussed in detail in <ref>, is a singular generalisation of hermitian semipositive metrics and is a natural class of metrics from the point of view of the Minimal Model Program. Op. cit. indicated strongly that understanding this class of metrics is crucial for progress on the Abundance conjecture. The quest for metrics with generalised algebraic singularities on adjoint divisors K_X+Δ is the main motivation for this paper. The best candidates for such metrics are metrics with minimal singularities. Singular metrics with minimal singularities on an -divisor L on a compact Kähler manifold induce the smallest norms among all possible positively curved singular metrics on L, modulo certain compatibility conditions for singularities of metrics. Such metrics are notoriously difficult to work with as they are usually very transcendental and can be only implicitly described. 
However, they have some very good properties which we recall in Section <ref>, which distinguish them from other singular metrics on L. In this paper we investigate how metrics with minimal singularities on divisors K_X+Δ+ε A behave when ε↓0, where A is an ample divisor on X. We prove two main results: * the Abundance conjecture can be reinterpreted as a statement about good asymptotic behaviour of multiplier ideals of currents with minimal singularities, and * supercanonical currents are excellent candidates to prove such good behaviour of multiplier ideals, and thus complete the proof of the Abundance conjecture when χ(X,_X)≠0. Notation. If T is a closed positive current a compact Kähler manifold X, we use the notation ℐ(T)_min for the multiplier ideal of any closed positive current with minimal singularities in the cohomology class of T, see <ref> and <ref>. The first main result Our first main result is that on a minimal klt pair (X,Δ) with χ(X,_X)≠0, the Abundance conjecture is equivalent to an approximation property of multiplier ideals of currents with minimal singularities associated to divisors K_X+Δ and K_X+Δ+1/m A for m∈_>0, where A is an ample divisor on X.[We prove this assuming the Minimal Model Program in lower dimensions. This is a natural and necessary condition in all current work on the Abundance conjecture, considering that we aim to prove it by induction on the dimension.] Roughly speaking, this approximation property says that the multiplier ideals of currents with minimal singularities associated to large multiples of K_X+Δ and K_X+Δ+1/m A are almost the same when m→∞. This statement has two parts, given in Proposition <ref> and Theorem <ref>. The first observation is that this approximation statement of multiplier ideals is a consequence of the Abundance conjecture: this is the content of the following proposition, whose proof is given in Section <ref>. Let (X,Δ) be a projective klt pair such that K_X+Δ is semiample. Let π Y→ X be a log resolution of (X,Δ) and write K_Y+Δ_Y∼_π^*(K_X+Δ)+E, where Δ_Y and E are effective -divisors without common components. Let A be an ample -divisor on Y. Then there exist an effective divisor D on Y and a sequence of positive integers {m_ℓ}_ℓ∈_>0 such that m_ℓ→∞ and ℐ(ℓ(K_Y+Δ_Y+1/m_ℓ A))_min⊆ℐ(ℓ(K_Y+Δ_Y))_min⊗_Y(D) for all ℓ. To explain the conclusion of this proposition, note that, in its notation, we always have ℐ(ℓ(K_Y+Δ_Y))_min⊆ℐ(ℓ(K_Y+Δ_Y+1/m_ℓ A))_min for all ℓ by Lemma <ref>. Therefore, Proposition <ref> says that the multiplier ideals ℐ(ℓ(K_Y+Δ_Y))_min and ℐ(ℓ(K_Y+Δ_Y+1/m_ℓ A))_min are almost equal. The first main result of the paper is that for pairs (X,Δ) with χ(X,_X)≠0 we have the converse to Proposition <ref>. Assume the existence of good minimal models for projective klt pairs in dimensions at most n-1. Let (X,Δ) be a projective klt pair of dimension n such that K_X+Δ is nef and Δ is a -divisor. Let π Y→ X be a log resolution of (X,Δ) and write K_Y+Δ_Y∼_π^*(K_X+Δ)+E, where Δ_Y and E are effective -divisors without common components. Let A be an ample -divisor on Y, and assume that there exist an effective divisor D on Y and a sequence of positive integers {m_ℓ}_ℓ∈_>0 such that m_ℓ→∞ and ℐ(ℓ(K_Y+Δ_Y+1/m_ℓ A))_min⊆ℐ(ℓ(K_Y+Δ_Y))_min⊗_Y(D) for all ℓ. If κ(X,K_X+Δ)≥0 or χ(X,_X)≠0, then K_X+Δ is semiample. Part <ref> of the paper is dedicated to the proof of this theorem. It follows immediately from Theorem <ref>, which proves a much more precise statement. 
Theorem <ref> and Proposition <ref> together show that, when χ(X,_X)≠0, the Abundance conjecture is a statement about the behaviour of multiplier ideals of currents with minimal singularities. (We stress that this does not depend on any particular choice of currents with minimal singularities: this gives significant flexibility that we will exploit several times in the paper). This is the first main contribution of this work. We explain briefly the strategy of the proof of Theorem <ref>. First we introduce and study in detail asymptotically equisingular approximations: a sequence of closed almost positive (1,1)-currents {T_m}_m∈ on a compact Kähler manifold X is an asymptotically equisingular approximation of a closed almost positive (1,1)-current T on X if there exist an effective divisor D on X and a sequence of positive integers {m_ℓ}_ℓ∈_>0 such that m_ℓ→∞ and we have the inclusions of multiplier ideals ℐ(ℓ T_m_ℓ)⊗_X(-D)⊆ℐ(ℓ T)⊆ℐ(ℓ T_m_ℓ)⊗_X(D) for all ℓ. Note that we do not require that the currents T_m converge weakly to T, hence asymptotically equisingular approximations would seem to be too weak for successful applications in practice. We will see, however, that they are perfectly suited to the context of the Minimal Model Program. Stronger forms of approximations appeared in connection to the regularisation techniques of Demailly <cit.>, but equisingular approximations considered there do not seem suitable for applications within the Minimal Model Program; they did, however, motivate the definition of asymptotically equisingular approximations, as will be apparent in Sections <ref> and <ref>. In order to make asymptotically equisingular approximations useful within the context of the Minimal Model Program, we introduce in Section <ref> a much stronger version of approximations of currents: excellent approximations. We show in Theorem <ref> that the existence of excellent approximations of a current with minimal singularities T is equivalent to T having generalised analytic singularities. We combine this information in Section <ref> with the techniques from <cit.> to deduce certain strong cohomological properties of the sheaves of differential forms. Finally, in Theorem <ref> we show that in the context of the Minimal Model Program, asymptotically equisingular approximations are always excellent: this allows to prove the Nonvanishing by further application of the methods from <cit.>, and then semiampleness follows from the main result of <cit.>. For completeness we remark here that when X is uniruled in Theorem <ref>, then we know that κ(X,K_X+Δ)≥0 by <cit.>. Therefore, when it comes to the Nonvanishing conjecture, the main remaining case is the case of non-uniruled varieties. The second main result Theorem <ref> implies that understanding multiplier ideals of currents with minimal singularities, and especially their behaviour under perturbations, is fundamental for the proof of the Abundance conjecture. This is where supercanonical currents enter the picture. The second goal of the paper is to study in detail a very specific choice of currents with minimal singularities: the supercanonical currents introduced by Tsuji in <cit.> and investigated in much greater generality and detail by Berman and Demailly <cit.>. The origins of supercanonical currents can be traced back to the work of Narasimhan and Simha <cit.>, where they were examined in the case of ample line bundles. We study supercanonical currents in detail in Section <ref>. 
We explain first the main idea behind supercanonical currents. Usually, the existence of at least one current with minimal singularities in a pseudoeffective cohomology class is shown by using a suitable L^∞-condition; this is explained in Section <ref>. This seems, however, not to be suited for use in birational geometry. In contrast, supercanonical currents are defined by an exponential L^1-condition, see Section <ref>. On a technical level, this makes them adapted to proofs involving estimates in which one uses Hölder's inequality. Crucially for us, this allows to use techniques of Berman, Demailly and others to show that supercanonical currents can actually be calculated by using only algebraic data: concretely, a supercanonical current of a big line bundle L depends only on the global holomorphic sections of powers of L, see Theorem <ref>. A large portion of Part <ref> is dedicated to showing this fundamental fact. We will then be able to prove much better regularity properties of such currents compared to other currents with minimal singularities. This is especially useful in the context of the Minimal Model Program, as we will see in Theorem <ref>. The paper <cit.> studies supercanonical currents on a projective klt pair (X,Δ) such that K_X+Δ is big and proves several of its properties. In this paper we define supercanonical currents on any pseudoeffective line bundle, inspired by the definition in op. cit. The definition in Section <ref> is somewhat more transparent than that in <cit.>, and we simplify the construction by viewing it from a slightly different standpoint. This allows to give a streamlined and precise proof of the behaviour of supercanonical currents on big line bundles in Section <ref>: one of the main new ingredients is a result on uniform bounds of norms of sections of adjoint line bundles given in Theorem <ref>. Further explanations will be given at the beginning of Section <ref>. After the case of big line bundles is settled, the main problem is to analyse what happens when the line bundle L is only pseudoeffective, and how supercanonical currents associated to divisors L+ε A behave when ε↓0, where A is an ample divisor on X. Following a suggestion from <cit.>, we show that the corresponding supercanonical currents of L+ε A converge weakly to a supercanonical current of L, and deduce additional strong regularity properties. Specialising to the context of the Minimal Model Program, the following is our second main result. (We use the following notation introduced in Section <ref>: if θ is a smooth closed real (1,1)-form on a compact complex manifold X whose cohomology class is pseudoeffective, then T_θ, denotes the supercanonical current associated to θ.) Let (X,Δ) be a projective klt pair such that Δ is a -divisor and K_X+Δ is pseudoeffective. Let π Y→ X be a log resolution of (X,Δ) and write K_Y+Δ_Y∼_π^*(K_X+Δ)+E, where Δ_Y and E are effective -divisors without common components. Let A be an ample -divisor on Y, and let α and ω be fixed smooth (1,1)-forms in the cohomology classes of K_Y+Δ_Y and A, respectively. Then: * for each ε>0 the supercanonical current T_α+εω, depends only on the holomorphic global sections of multiples of K_Y+Δ_Y+ε A, * the supercanonical currents T_α+εω, converge weakly to the supercanonical current T_α, as ε↓0. 
If additionally K_X+Δ is nef, then there exists a positive rational number δ such that: (c) the non-nef loci _-(K_Y+Δ_Y+ε A) do not depend on 0≤ε≤δ, and they are equal to the non-ample loci _+(K_Y+Δ_Y+ε A) for 0<ε≤δ, (d) for each 0<ε≤δ the supercanonical current T_α+εω, has continuous local potentials away from the non-nef locus _-(K_Y+Δ_Y), (e) for any two rational numbers ε_1,ε_2∈ (0,δ] and for any t∈[0,1] the supercanonical current tT_α+ε_1ω,+(1-t)T_α+ε_2ω, is the current with minimal singularities in the cohomology class of the divisor K_Y+Δ_Y+(tε_1+(1-t)ε_2)A. Parts (a) and (b) of the theorem are very delicate and they hold more generally for pseudoeffective divisors which are not necessarily adjoint, see Theorem <ref> for a much more precise statement. Part (d) holds also in that more general context, albeit with a weaker estimate of the size of the regularity locus. The other statements rely crucially on the fact that we are working with adjoint divisors, and they depend on the Minimal Model Program, see Theorem <ref>. The main aim of Theorem <ref> is to gain precise information on the behaviour of multiplier ideals associated to supercanonical currents under perturbations by an ample divisor, in order to combine it with Theorem <ref> to obtain the proof of the Abundance conjecture for minimal projective klt pairs (X,Δ) with χ(X,_X)≠0. The more detailed results from Sections <ref> and <ref> indicate how this might be achieved, see Theorem <ref>(<ref>) and Theorem <ref>(<ref>). The algebraicity statements (a) and (b) as well as the regularity statement (d) of Theorem <ref> are very strong and, combined with Theorem <ref>, we expect them to be crucial for the completion of the proof of the Abundance conjecture for minimal projective klt pairs (X,Δ) with χ(X,_X)≠0. On the organisation of the paper. This work contains as many ingredients from complex birational geometry as it does from pluripotential theory. I have attempted to make it accessible to both birational and complex geometers. This possibly resulted in the inclusion of proofs of some results which might be considered standard or classical by some readers. Acknowledgements. It is my great pleasure to dedicate this paper to Thomas Peternell. This present work builds on our joint quest towards abundance-related problems that we started almost a decade ago. He has provided constant support and has been a source of of wonderful mathematical ideas. This paper would not have been possible without him. I had several conversations online about a very preliminary version of the ideas presented here with Jean-Pierre Demailly in the summer of 2021, partly together with Thomas Peternell. Jean-Pierre clarified several things about his Bergman kernel techniques and the paper <cit.>. These conversations have had a very big impact on this paper. I am very grateful to Nikolaos Tsakanikas for discussions about the content of Section <ref> and for extensive comments which improved the presentation of the paper. I am grateful to Vincent Guedj and Zhixin Xie for useful comments and suggestions. I gratefully acknowledge support by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) – Project-ID 286237555 – TRR 195 and Project-ID 530132094. PART: Preliminaries § PRELIMINARIES: PLURIPOTENTIAL THEORY Much of the material discussed here can be found in <cit.> or in the introductory sections of <cit.>. The notes <cit.> present many of the foundational results with more details or clarity. 
The presentation in <cit.> is exceptionally clear. We collect some definitions and results for the benefit of the reader and to settle the notation and terminology. In this paper we use the convention that d^c=1/2π i(∂-), so that dd^c=i/π∂. We denote by B(x,r)={x∈^n|x<r} and S(x,r)={x∈^n|x=r} the open ball and the sphere of radius r and with centre x in ^n. All manifolds in the paper are connected. The notation is used for the averaged integral, i.e. for the integral divided by the volume of the set over which the integration is made. §.§ Bott-Chern cohomology If X is a complex manifold, we define the Bott-Chern (1,1)-cohomology space H^1,1_BC(X,) as the quotient of the space of d-closed smooth (1,1)-forms modulo the dd^c-exact smooth (1,1)-forms, and we denote by H^1,1_BC(X,) the space of its real points. It can be shown by a partition of unity argument that H^1,1_BC(X,) is isomorphic to the quotient of the space of d-closed (1,1)-currents modulo the dd^c-exact (1,1)-currents. If additionally X is compact and Kähler, then H^1,1_BC(X,) is isomorphic to the Dolbeault cohomology group H^1,1(X,). If T is a closed (1,1)-current on a complex manifold X, we denote by {T} its class in H^1,1_BC(X,). If T is a real closed (1,1)-current, then {T}∈ H^1,1_BC(X,), and the representatives of {T} are the closed currents of the form T+dd^cφ, where φ is a real current of degree 0. If T is a representative of a class α∈ H^1,1_BC(X,), we write T∈α; if T'∈α is another representative, we also write T≡ T'. §.§ Almost positive currents Let X be a complex manifold of dimension n. A continuous (n-1,n-1)-form φ on X is positive if it can be written locally as a finite non-negative linear combination of forms of type (iα_1∧α_1)∧…∧(iα_n-1∧α_n-1), where α_i are (1,0)-forms. Positivity of forms is a pointwise property which does not depend on local coordinates. A (1,1)-current T on X is positive if T(φ) is a positive measure for every smooth positive (n-1,n-1)-form φ, and we write T≥0. A positive (1,1)-current is always real. If T and T' are two (1,1)-currents on X, we write T≥ T' if T-T'≥0. If φ=i∑ h_jkdz_j∧ dz_k is a real continuous (1,1)-form, then φ is positive if and only if (h_jk(x)) is a positive semidefinite hermitian matrix for all x∈ X­. If now D is an irreducible analytic subset of pure codimension 1 in X, then we denote by [D] the current of integration on the regular part of D: this is a closed positive (1,1)-current. If we have an effective -divisor G=δ_1G_1+…+δ_rG_r on X, then we call the closed positive (1,1)-current [G]:=δ_1[G_1]+…+δ_r[G_r] the current of integration on G. If there is no danger of confusion, we drop the brackets and write simply G for the current of integration on G. A real (1,1)-current T on X is almost positive if T≥γ for some real continuous (1,1)-form γ on X. If f Y→ X is a surjective holomorphic map between complex manifolds and if T is a closed almost positive (1,1)-current on X, then one can easily define its pullback f^*T to Y such that {f^*T}=f^*{T}, see <cit.>. We will need the following easy result. Let X be a compact complex manifold of dimension n and let ω be a smooth positive definite (1,1)-form on X. Let {θ_j}_j∈ J be a collection of real (1,1)-forms on X whose coefficients are locally uniformly bounded on X. Then there exists a constant C>0 such that θ_j+Cω≥0 for all j. Fix a point x∈ X and a coordinate neighbourhood U_x centred at x such that the coefficients of all θ_j are uniformly bounded on U_x. 
Then we may find finitely many real smooth (1,1)-forms θ_1,…,θ_r on U_x such that each θ_j is a convex linear combination of the forms θ_k. By the spectral theorem for hermitian operators, for each k there exists a linear change of local coordinates f_k,x U_x→ U_x such that f_k,x^*θ_k=i/2∑_ℓ=1^nλ_k,ℓ,xdz_ℓ∧ dz_ℓ and f_k,x^*ω=i/2∑_ℓ=1^n dz_ℓ∧ dz_ℓ at x. Then it is clear that, by possibly shrinking U_x, there exists a constant C_x on U_x such that f_k,x^*θ_k+C_xf_k,x^*ω≥0 on U_x for all 1≤ k≤ r. Therefore, θ_k+C_xω≥0 on U_x for all 1≤ k≤ r by <cit.>, hence θ_j+C_xω≥0 on U_x for all j∈ J. We conclude by the compactness of X. §.§ Plurisubharmonic functions In this subsection X is a complex manifold of dimension n. A function φ X → [-∞, +∞) is plurisubharmonic or psh if it is upper semicontinuous, locally integrable, and satisfies the mean value inequality f^*φ (0) ≤_Δ f^*φ dV_Δ for any holomorphic mapping fΔ→ X from the open unit disk Δ⊆. Every plurisubharmonic function is subharmonic, i.e. it satisfies the mean value inequality f^*φ(0) ≤_B f^*φ dV_B for any open embedding f B → X of the open unit ball B⊆^n. Psh functions on X are locally bounded from above and belong to L^p_(X) for any 1≤ p<∞. If additionally X is compact, then any psh function on X is constant. A subset A of X is locally pluripolar if it is locally contained in the pole set {u=-∞} of a psh function u. Since each psh function is locally integrable, the set A is of Lebesgue measure zero and the complement X∖ A is dense in X. A closed (1,1)-current T on X is positive if and only if for each x∈ X there exists an open subset x∈ U⊆ X such that T can be locally written as T=dd^c φ for a psh function φ on U. The function φ is a local potential of T on U. We will often need the following well-known properties of subharmonic functions. Let Ω⊆^n be a domain. * Let φ be a subharmonic function on Ω and let A⊆Ω be a set of Lebesgue measure zero. Then lim sup_z'→ z, z'∈Ω∖ Aφ(z')=φ(z) for every z∈Ω. * Let φ be a subharmonic function on Ω. Then φ(z)=lim_r→0_B(z,r)φ(z)dV for every z∈Ω. * Let φ and ψ be subharmonic functions on Ω and assume that φ≤ψ almost everywhere. Then φ≤ψ. Part (c) follows immediately from (a). We will show (a) and (b) simultaneously. For a fixed z∈Ω, the mean value inequality on balls B(z,r)⊆Ω and the upper semicontinuity of φ give φ(z) ≤lim_r→0_B(z,r)φ(z)dV=lim_r→0_B(z,r)∖ Aφ(z)dV ≤lim_r→0sup_B(z,r)∖ Aφ(z')=lim sup_z'→ z, z'∈Ω∖ Aφ(z')≤φ(z). This finishes the proof. §.§ Quasi-psh functions As mentioned above, a psh function on a compact complex manifold is always constant. A more suitable notion on compact complex manifolds is that of quasi-plurisubharmonic or quasi-psh functions: a function φ X→[-∞,+∞) on a complex manifold X is quasi-psh if it is locally equal to the sum of a psh function and a smooth function. Equivalently, φ is quasi-psh if it is locally integrable and upper semicontinuous, and there exists a smooth closed real (1,1)-form θ on X such that θ+dd^cφ≥ 0 in the sense of currents. A good introduction to quasi-psh functions is in <cit.>. Now, if θ is a real continuous (1,1)-form on X and if φ is a quasi-psh function on X such that θ+dd^cφ≥ 0 in the sense of currents, then we say that φ is θ-psh and we denote the set of all θ-psh functions by (X,θ). The weak topology on the set {θ+dd^cφ|φ∈(X,θ)} corresponds to the L^1_(X)-topology on (X,θ). The set {φ∈(X,θ)|sup_Xφ=0} is compact in this topology, as we will see in Theorem <ref>. 
If X is additionally compact, if θ is a smooth closed real (1,1)-form on X and if T is a closed almost positive (1,1)-current in {θ}, then there exists a closed real smooth (1,1)-form γ such that γ+T≥0, and clearly γ+T∈{γ+θ}. By the dd^c-lemma there exists φ∈(X,γ+θ), which is unique up to an additive constant, such that γ+T=(γ+θ)+dd^cφ, hence T=θ+dd^cφ. By adopting the terminology from <cit.>, such a function φ is called a global potential of T; global potentials depend, up to an additive constant, on the choice of θ, but not of γ. A subset of X is pluripolar if it is contained in the pole set {φ=-∞} of a quasi-psh function φ on X. We will need the following easy consequence of Lemma <ref>(c). Let X be a complex manifold and let φ and ψ be quasi-psh functions on X. If φ≤ψ holds almost everywhere, then φ≤ψ. Let θ_1 and θ_2 be real smooth (1,1)-forms on X such that φ_1∈(X,θ_1) and φ_2∈(X,θ_2). Fix a point x∈ X. As in the proof of Theorem <ref>(a) below, there exist an open neighbourhood U of x and a smooth closed form ω on U such that ω≥θ_1 and ω≥θ_2 on U. Hence, φ_1 and φ_2 are ω-psh on U. If ξ is a local potential of ω on U, then ξ+φ_1 and ξ+φ_2 are psh and ξ+φ_1≤ξ+φ_2 almost everywhere on U. We conclude by Lemma <ref>(c). §.§ Upper semicontinuous regularisation Let Ω⊆^n be an open subset and let u be a function on Ω. We define its upper semicontinuous regularisation as u^*(z)=lim_ε→0sup_B(z,ε)u for z∈Ω. Then u^* is the smallest upper semicontinuous function which is ≥ u. This notion extends easily to quasi-psh functions on a complex manifold. Consider a family {u_α} of psh functions on Ω which is locally uniformly bounded from above, and set u:=sup_α u_α. Then the function u^* is psh and we have u^*=u almost everywhere, see <cit.>. We will need very often the following extension of this and other important compactness results to quasi-psh functions. Let X be a complex manifold and let θ be a continuous real (1,1)-form on X. * Consider a family {φ_α} of θ-psh functions on X which is locally uniformly bounded from above, and set φ:=sup_αφ_α. Then φ^*∈(X,θ) and φ^*=φ almost everywhere. * Assume additionally that X is compact. Let {φ_j} be a sequence of θ-psh functions on X which are uniformly bounded from above. Then either the sequence {φ_j} converges uniformly to -∞ or it has a subsequence which converges in L^1_(X) and almost everywhere to a function in (X,θ). * Let {φ_j} be a sequence of θ-psh functions on X which are locally uniformly bounded from above. Then the function (lim sup_j→∞φ_j)^* is θ-psh. * Let {φ_j} be a sequence of θ-psh functions on X which are locally uniformly bounded from above. If the sequence is decreasing, then either it converges uniformly to -∞ or it converges in L^1_(X) to the θ-psh function lim_j→∞φ_j. If the sequence is increasing, then it converges in L^1_(X) and almost everywhere to the θ-psh function (lim_j→∞φ_j)^*. * Assume additionally that X is compact. Let θ' be a positive continuous (1,1)-form on X, and consider a sequence of real numbers ε_j↓0. For each positive integer j, let φ_j∈(X,θ+ε_jθ') and assume that all φ_j are uniformly bounded from above. Then either the sequence {φ_j} converges uniformly to -∞ or it has a subsequence which converges in L^1_(X) and almost everywhere to a function in (X,θ). For (a) we extract the proof from <cit.>. Fix a point x∈ X. 
By the spectral theorem for hermitian operators and by the continuity of θ, for each ε>0 there exists real numbers λ_j and a neighbourhood U_ε of x with local coordinates z=(z_1,…,z_n) such that, if we set q(z):=∑λ_j|z_j|^2, we have dd^c(q(z)-ε|z|^2)≤θ≤ dd^c(q(z)+ε|z|^2). Then on U_ε each function q(z)+ε|z|^2+φ_α(z) is psh, hence so is the function (sup_α{q+ε|z|^2+φ_α})^*=q+ε|z|^2+φ^* by the paragraph before the theorem. Therefore, dd^cφ^*+θ+2ε dd^c|z|^2≥ dd^cφ^*+dd^cq(z)+ε dd^c|z|^2≥0 on U_ε. Letting ε→0 we obtain dd^cφ^*+θ≥0 at x. Since x was arbitrary, this shows that φ is θ-psh. The proof of (b) is similar, using the local result for psh functions <cit.>; see <cit.> for details. Part (c) follows similarly as (a) from the local result for psh functions <cit.>. Part (d) follows from (b) and (c). Now we prove (e). Assume that the sequence {φ_j} does not converge uniformly to -∞. For positive integers k≥ k' we have θ+ε_k'θ'≥θ+ε_kθ', and hence φ_k∈(X,θ+ε_k'θ'). Therefore, by (b) there exists a subsequence {φ_j_1} of {φ_j} which converges in L^1_(X) and almost everywhere to a function φ∈(X,θ+ε_1θ'). We will be done if we show that φ∈(X,θ), and for this it suffices to prove that φ∈(X,θ+ε_iθ') for all i, since θ is the weak limit of the sequence {θ+ε_iθ'}. To this end, by (b) we inductively have that, for all i≥2, there exists a subsequence {φ_j_i} of {φ_j_i-1} which converges in L^1_(X) and almost everywhere to a function η_i∈(X,θ+ε_iθ'), thus η_i=φ almost everywhere. But then η_i=φ by Corollary <ref>, and in particular, φ∈(X,θ+ε_iθ'). This finishes the proof. §.§ Positivity of classes Let X be compact complex manifold, let ω be a fixed smooth positive (1,1)-form on X, and consider a cohomology class α∈ H^1,1_BC(X,). Then α is: * pseudoeffective if there exists a closed positive (1,1)-current T∈α; * nef if for each ε>0 there exists a smooth form θ_ε∈α such that θ_ε≥-εω; * big if there exist ε>0 and a closed (1,1)-current T∈α such that T≥εω. These definitions do not depend on the choice of ω, and they correspond to the usual notions from algebraic geometry when X is projective and α is an algebraic class. §.§ Lelong numbers Let Ω⊆^n be an open subset and let φ be a psh function on Ω. The Lelong number of φ at a point x∈Ω is ν(φ,x):=lim_r→0sup_B(x,r)φ/log r; this is equivalent to other definitions in the literature by <cit.>. Thus, if ν(φ,x)>0, then φ(x)=-∞, but the converse does not always hold. The Lelong number ν(φ,x) does not depend on the choice of local coordinates around x. For psh functions u and v on Ω and for each point x∈Ω we have ν(u+v,x)=ν(u,x)+ν(v,x); note that u+v is a psh function by Example <ref>(d) below. Let now T be a closed positive (1,1)-current on a complex manifold X. Then locally at a point x∈ X we can write T=dd^cφ for a psh function φ, and we define the Lelong number of T at x as ν(T,x):=ν(φ,x); this does not depend on the choice of φ. If Y is an analytic subset of X and if x∈ X, then a result of Thie states that ν(Y,x) is equal to the multiplicity of Y at x. We will need the following result which compares the Lelong numbers under pullbacks <cit.>, see also <cit.>. Let f Y→ X be a surjective holomorphic map between compact complex manifolds. Then there exists a constant C>0 such that for every closed positive (1,1)-current T on X and for all points y∈ Y and x:=f(y)∈ X we have ν(T,x)≤ν(f^*T,y)≤ Cν(T,x). 
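As a simple illustration of these notions (a standard computation, recalled here only to fix ideas), consider the psh function φ(z)=clog|z-x| on an open set Ω⊆ℂ^n containing x, where c>0. Then sup_B(x,r)φ=clog r for small r>0, hence ν(φ,x)=lim_r→0 clog r/log r=c, while ν(φ,y)=0 for every y≠ x. Similarly, if D is a smooth irreducible divisor on a complex manifold, then the current of integration [D] satisfies ν([D],x)=1 at every point x∈ D, in accordance with the result of Thie recalled above.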
If T is a closed almost positive (1,1)-current on X and if x is a point in X with local coordinates z=(z_1,…,z_n) around x, then there exists a positive constant C such that T+Cdd^c|z|^2≥0 locally around x. Then we define the Lelong number ν(T,x) as ν(T+Cdd^c|z|^2,x); this does not depend on the choice of C. For c≥0 define the Lelong upperlevel sets as E_c(T):={x∈ X|ν(T,x)≥ c}. Then a fundamental theorem of <cit.> states that for each c>0, the set E_c(T) is a proper analytic subset of X. Thus, for any analytic subset Y of X we may define the generic Lelong number of T along Y as ν(T,Y):=inf{ν(T,x)| x∈ Y}, which is equal to ν(T,x) for a very general point x∈ Y. §.§ Divisorial valuations Let X be compact complex manifold. Following <cit.>, a prime divisor over X denotes a prime divisor E⊆ X', where μ X'→ X is a resolution. We say that two prime divisors E_1⊆ X_1 and E_2⊆ X_2 over X are equivalent if there exists a common resolution X' of X_1 and X_2 such that the strict transforms of E_1 and E_2 on X' coincide. When X is projective, a prime divisor over X is the same thing as a geometric divisorial valuation on X by <cit.>. Let T be a closed positive (1,1)-current on X. If E is a prime divisor on a resolution f Y→ X, we denote ν(T,E):=ν(f^*T,E). If E' is another prime divisor over X equivalent to E, then ν(T,E)=ν(T,E'). If D is an -divisor on X, then we define the multiplicity of D along E by _E D:=_E f^*D. §.§ Siu decomposition If X is a complex manifold and if T is a closed positive (1,1)-current on X, then there exist at most countably many codimension 1 irreducible analytic subsets D_k such that T has the Siu decomposition T=R+∑ν(T,D_k)· D_k, where R is a closed positive (1,1)-current such that _X E_c(R)≥2 for each c>0. In this paper we call ∑ν(T,D_k)· D_k the divisorial part and R the residual part of (the Siu decomposition of) T. Now assume that T is closed almost positive (1,1)-current, and let γ be a continuous form on X such that T≥γ. Then one can construct the Siu decomposition T=∑ν(T,D_k)· D_k+R of T similarly as above, where now R is a closed almost positive (1,1)-current satisfying R≥γ. With notation as above, if π Y→ X is a resolution and if π^*T=R_Y+∑ν(π^*T,D'_ℓ)· D'_ℓ is the Siu decomposition of π^*T, then it is clear that each D'_ℓ is a component of π^*D_k for some k, or it is a π-exceptional divisor. In particular, if the divisorial part of T is an -divisor, then so is the divisorial part of π^*T. §.§ Singular metrics Let L be a holomorphic line bundle on a complex manifold X. A singular hermitian metric or simply a singular metric h on L is a metric given in every trivialisation θ L|_Ω→Ω× by h(ξ,ξ):=|θ (ξ)|^2e^-2φ(x) for x∈Ω, ξ∈ L_x and φ∈ L^1_(Ω). We also denote |·|_h:=h(· ,·)^1/2. The function φ is called the local weight of h with respect to the trivialisation θ. The curvature current Θ_h(L):=dd^cφ is globally defined and lies in {L}∈ H^1,1_BC(X,). The curvature current Θ_h(L) of h is semipositive if it is positive in the sense of currents. Now fix a smooth metric h_∞ on L. Then there exists a locally integrable function φ on X such that h=h_∞ e^-2φ, and we call φ the global weight of h with respect to the reference metric h_∞. Then we have Θ_h(L)=Θ_h_∞(L)+dd^cφ. Conversely, for any closed (1,1)-current T∈{L} there exists a degree 0 current φ such that T=Θ_h_∞(L)+dd^cφ. 
When additionally T is almost positive, then φ∈ L^1_(X), hence every almost positive current T∈{L} is the curvature current of a singular hermitian metric on L, and the global weight φ is a quasi-psh function on X. We now mention several examples of quasi-psh functions which are relevant for this paper. * Let Ω⊆^n be an open subset and let f_1,…,f_m be holomorphic functions on Ω. Then log(|f_1|^2+…+|f_m|^2) is psh on Ω by <cit.>. * More generally, let L be a holomorphic line bundle with a continuous metric h on a complex manifold X, and consider global holomorphic sections σ_1,…,σ_m of L. Then the function φ X→[-∞,+∞) given by φ:=1/2log(|σ_1|^2_h+…+|σ_m|^2_h) is quasi-psh on X: indeed, let θ be a local continuous weight of h on some trivialisation of L. Then locally and by (a) we have dd^c(θ+φ)=1/2dd^clog(|σ_1|^2+…+|σ_m|^2)≥0, hence the function θ+φ is psh. In particular, globally we have Θ_h(L)+dd^cφ≥0, hence the curvature current of the singular metric he^-2φ on L is semipositive. The metric he^-2φ and the current Θ_h(L)+dd^cφ do not depend on the choice of h. * In the context of (b), let σ be a global holomorphic section of L and let ∑ m_iD_i be the zero-divisor of f. Then we have the global Lelong-Poincaré equation Θ_h(L)+dd^clog|σ|_h = ∑ m_iD_i, understood in the sense of currents, see <cit.>. * Let Ω⊆^n be an open subset, let u_1,…,u_r be psh functions on Ω and let χ[-∞,+∞)^r→[-∞,+∞) be a convex function which is non-decreasing in each coordinate. Then by <cit.>, the function χ(u_1,…,u_r) is psh on Ω. In particular, the functions u_1+…+u_r, max{u_1,…,u_r} and e^u_1+…+e^u_r are psh on Ω. * Let X be a complex manifold, let θ be a continuous real (1,1)-form on X, and let {φ_j} be a sequence of θ-psh functions on X which are locally uniformly bounded from above. If ∑ε_j is a convergent series of positive real numbers, then the function ∑ε_jφ_j is θ-psh. Indeed, there exists a constant C such that φ_j':=φ_j-C≤0 for all j. Then each partial sum Φ_k:=∑_j≤ kε_jφ_j' is θ-psh by (d), and the sequence {Φ_k} is decreasing, hence ∑ε_jφ_j'=lim_k→∞Φ_k∈(X,θ) by Theorem <ref>(d); note that the limit is not -∞ since the union of pluripolar sets of all φ_j is of Lebesgue measure zero in X. Since ∑ε_jφ_j=∑ε_jφ_j'+C∑ε_j, we obtain the claim. We will need later the following remark, which we extract from the proof of <cit.>. Let X be a compact complex manifold, let θ be a continuous real (1,1)-form on X, and let {φ_j} be a sequence of θ-psh functions on X. Then the union of pluripolar loci of all φ_j is again a pluripolar set. In order to see this, first note that by subtracting a constant from each φ_j, we may assume that φ_j≤0 for all j. Then by Example <ref>(e) the function φ:=∑ j^-2φ_j is θ-psh, and the union of pluripolar loci of all φ_j is contained in the set {φ=-∞}. §.§ Multiplier ideals If φ is a quasi-psh function on a complex manifold X, the multiplier ideal sheaf ℐ(φ)⊆_X is defined by ℐ(φ)(U):={f∈_X(U)| |f|e^-φ∈ L^2_(U)} for every open set U⊆ X. Note that we set |f(x)|e^-φ(x)=0 at points x ∈ X where f(x) = 0 and φ(x) = -∞. The sheaf ℐ(φ) is a coherent ideal sheaf on X. If now h is a singular metric on a holomorphic line bundle L on X whose curvature current Θ_h(L) is almost positive, then its associated global weight φ (with respect to some fixed smooth metric on L) is quasi-psh, and we define ℐ(h):=ℐ(φ). This does not depend on the choice of the smooth metric on L. 
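As a basic illustration of these definitions, included only for the reader's convenience: if φ=clog|z_1| on an open subset of ℂ^n for a constant c>0, then a holomorphic function f satisfies |f|e^{-φ}∈ L^2_loc if and only if z_1^{⌊ c⌋} divides f, since |f|^2e^{-2φ}=|f|^2|z_1|^{-2c}; hence ℐ(φ) is the ideal sheaf generated by z_1^{⌊ c⌋}. This is consistent with the equality ℐ(G)=_X(-⌊ G⌋) for effective divisors G with simple normal crossings support recalled in the theorem below.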
Finally, if T is a closed almost positive (1,1)-current on X, then any of its associated global potentials φ (see <ref>) is quasi-psh, and we define ℐ(T):=ℐ(φ). This does not depend on the choice of φ. If ν(T,x)<1 at a point x∈ X, then ℐ(T)_x=_X,x by Skoda's lemma <cit.>. The following is a fundamental result, proved first in <cit.>; similar results with easier proofs are in <cit.> and <cit.>. Let φ and {φ_i}_i∈ be psh functions on an open set U of a complex manifold X. Assume that φ_i≤φ for all i, and that the sequence {φ_i} converges to φ in L^1_(X). Then for every U'⋐ U there exists a positive integer i_0 such that ℐ(φ_i)|_U'=ℐ(φ)|_U' for all i≥ i_0. We will need the following important result, Theorem <ref>. In order to state it, we need a piece of notation: Assume that T is a closed positive (1,1)-current on a complex manifold X, which can be written as a sum T=∑_i=1^∞λ_i D_i, where λ_i≥0 for all i and each D_i is a prime divisor on X; in other words, the residual part of the Siu decomposition of T is zero. Then ⌊ T⌋ denotes the closed positive (1,1)-current ⌊ T⌋:=∑_i=1^∞⌊λ_i⌋ D_i. Let X be a complex manifold. * Let T_1 and T_2 be two closed almost positive (1,1)-currents on X. Then ℐ(T_1+T_2)⊆ℐ(T_1)·ℐ(T_2). * Let T_1 and T_2 be two closed almost positive (1,1)-currents on X. If for x∈ X we have ν(T_1,x)=0, then ℐ(T_1+T_2)_x=ℐ(T_2)_x. * If G is an effective -divisor on X with simple normal crossings support, then ℐ(G)=_X(-⌊ G⌋). * If G is a closed positive (1,1)-current on X whose residual part is zero, then ⌊ G⌋ is a divisor on X and ℐ(G)⊆_X(-⌊ G⌋). * Let T be a closed almost positive (1,1)-current on X and let T = R+D be its Siu decomposition, where R is the residual part and D is the divisorial part. Then ⌊ D⌋ is a divisor on X, we have ℐ(T)⊆_X(-⌊ D⌋), and this inclusion is an equality on a Zariski open subset U with the property that _X(X∖ U)≥2. Here and elsewhere in the paper, if D is an integral divisor on a complex manifold, then _X(D) denotes the subsheaf of the sheaf of meromorphic functions on X whose divisor of zeroes and poles is precisely D. Thus, if D is effective, then _X(-D) is the sheaf of germs of holomorphic functions on X which vanish along D; in particular, we have _X(-D)⊆_X, and if D' is another integral divisor on X, then we have _X(-D)⊆_X(-D') if and only if D'≤ D. Part (a) is <cit.>, and part (b) is <cit.>. Part (c) is well known <cit.>. For (d), first note that the closed positive (1,1)-current G':=⌊ G⌋ is a divisor since the Lelong upperlevel set E_1(G) is analytic. Then G-G' is also a closed positive (1,1)-current on X, and by (a) we have ℐ(G)⊆ℐ(G-G')·ℐ(G')⊆ℐ(G'). Let V be the maximal Zariski open subset of X such that G'|_V is a smooth divisor. Then _X(X∖ V)≥2, and by (c) we have ℐ(G')|_V=_V(-G'). Since ℐ(G') is torsion free and _X(-G') is a line bundle, it follows that ℐ(G')⊆_X(-G'), which together with (<ref>) implies (d). Part (e) is <cit.>; since the notation and context is slightly different, we provide the proof for the benefit of the reader. The current D':=⌊ D⌋ is a divisor since E_1(T) is an analytic subset of X. Then T-D' is also a closed almost positive (1,1)-current on X, and by (a) and (d) we have ℐ(T)⊆ℐ(T-D')·ℐ(D')⊆ℐ(D')⊆_X(-D'), which gives the second claim in (e). Now we show the last claim in (e). If D_i are the components of D and if D_i,sing is the singular locus of D_i for each i, set Z:=⋃_i D_i,sing∪⋃_k,ℓ(D_k∩ D_ℓ)∪⋃_c>0E_c(R). 
Then Z is the union of at most countably many analytic subsets of X of codimension at least 2, and it suffices to show that ℐ(T)_x=_X(-D')_x for all x∈ X∖ Z, since the locus in X where the coherent sheaves ℐ(T) and _X(-⌊ D⌋) differ is an analytic subset of X. To that end, fix x∈ X∖ Z. Assume first that x does not belong to any component of D. Then ν(T,x)=0 by the definition of Z, hence by Skoda's lemma we have ℐ(T)_x=_X,x, and clearly also _X(-⌊ D⌋)_x=_X,x, which shows (<ref>) in this first case. Finally, assume that x belongs to a component Γ of D and set R_1:=R+(D-ν(T,Γ)·Γ). Then by the definition of Z we have ν(R_1,x)=0, thus (b) and (c) yield ℐ(T)_x =ℐ(R_1+ν(T,Γ)·Γ)_x=ℐ(ν(T,Γ)·Γ)_x =_X(-⌊ν(T,Γ)⌋·Γ)_x=_X(-D')_x, which gives (<ref>) also in this second case, and finishes the proof. We will also need the following consequence of the change of variables formula. Let X be a complex manifold and let f Y→ X be a resolution of X. Let φ_1 and φ_2 be two quasi-psh functions on X such that ℐ(φ_1)⊆ℐ(φ_2). If A:=K_Y-f^*K_X, then ℐ(f^*φ_1)⊗_X(-A)⊆ℐ(f^*φ_2). This is <cit.>; note that there is a typo in that statement: the divisor E in op. cit. should be defined as E=K_X-π^*K_X. §.§ Currents with analytic singularities A closed almost positive (1,1)-current T on a compact complex manifold X, and any of its global potentials φ, are said to have analytic singularities if there exist a coherent ideal sheaf ℐ and a constant c>0 such that, locally on X, we have φ=clog(|f_1|^2+…+|f_k|^2)+u, where u is smooth and f_1,…,f_k are local generators of ℐ. The current T is smooth outside of the co-support of ℐ. Now, if π Y→ X is a resolution of X which factors through the blowup of the scheme V(ℐ ), there exists an effective divisor D on Y such that π^-1ℐ = _Y(-D), and the Siu decomposition of π^*T has the form π^*T=θ+cD, where θ is a smooth (1,1)-form. If T≥γ for some smooth form γ, then θ≥π^*γ. §.§ Currents with generalised analytic singularities We need a generalisation of the concept of currents with analytic singularities introduced in <cit.>. A closed almost positive (1,1)-current T on a compact complex manifold X, and any of its global potentials φ, are said to have generalised analytic singularities if there exists a resolution π Y → X such that the Siu decomposition of π^*T has the form π^*T=Θ+D, where Θ is a closed almost positive (1,1)-current whose all Lelong numbers are zero and D is an effective -divisor on Y. In that case we say that the current T descends to Y. If D is additionally a -divisor, we say that T has generalised algebraic singularities. Clearly, if a closed almost positive (1,1)-current has analytic singularities, then it has generalised analytic singularities. Let f Z→ Y be a further resolution, and set g:=π∘ f. Then the current f^*Θ has all Lelong numbers zero by Theorem <ref>, hence the Siu decomposition of g^*T has the form g^*T=f^*Θ+f^*D. Thus, if f is a sufficiently high resolution, then we may assume that the support of the divisorial part f^*D has simple normal crossings. §.§ Scalar products and norms Let X be a complex manifold of dimension n with a hermitian metric ω, and L be a hermitian line bundle on X with a singular metric h. If u and v are L-valued (p,q)-forms with measurable coefficients, then |u|_h,ω denotes the pointwise norm on ⋀^p,qT_X^*⊗ L induced by the hermitian metric on T_X whose fundamental form is ω and by h, ⟨ u,v⟩_h,ω is the corresponding scalar product, and dV_ω:=ω^n/n! is the volume form associated to ω; cf. <cit.>. 
Set u,v _h,ω:=∫_X ⟨ u, v⟩_h,ω dV_ω and u_h,ω:= u,u _h,ω^1/2. If L_p,q^2(X,L)_h,ω is the set of L-valued (p,q)-forms with measurable coefficients such that u_h,ω<∞, then L_p,q^2(X,L)_h,ω is a Hilbert space with the scalar product · ,·_h,ω. If σ is a global holomorphic section of the line bundle _X(K_X)⊗ L, then we may view it as a smooth L-valued (n,0)-form and we write σ_h,ω for the corresponding norm. If h_K_X is the smooth metric on _X(K_X) induced by the hermitian metric on T_X whose fundamental form is ω, and if g:=h_K_Xh is the induced metric on _X(K_X)⊗ L, then we also write σ_g:=σ_h,ω. We will need the following remark in the proof of Theorem <ref>. With notation as above, fix a smooth metric h_0 on the line bundle _X(K_X)⊗ L and assume that |·|_h,ω=|·|_h_0e^-φ, where φ is a locally integrable function on X which is bounded from above by a constant C. Assume that there exists a coordinate ball U⊆ X, an integrable function θ U→∪{-∞} and a section s∈ C^∞(U,_X(K_X)⊗ L) such that ∫_U |s|^2_h,ωe^-2θdV_ω<∞, but the function e^-2θ is not locally integrable around some point x∈ U. Then we claim that s(x)=0. Indeed, assume that s(x)≠ 0, and pick a small ball x∈ V⊆ U such that M:=min{|s(x)|_h_0| x∈ V}>0. Then Me^-2C∫_V e^-2θdV_ω≤∫_U |s|_h_0^2e^-2φe^-2θdV_ω=∫_U |s|^2_h,ωe^-2θdV_ω<∞, hence ∫_V e^-2θdV_ω<∞, a contradiction which implies the claim. §.§ Hörmander's estimates We will need the following result which follows by expanding on the techniques of Hörmander L^2 estimates <cit.>. The most general result of this form is in <cit.>, where it was proved for complete Kähler varieties; see also <cit.>. In this paper we only need it for projective manifolds, in which case the proof is much simpler, see <cit.> or <cit.> Let X be a compact Kähler manifold with a Kähler form ω. Let L be a line bundle on X with a singular metric h such that Θ_h(L)≥εω. Then for every form v∈ L^2_p,q(X,L)_h,ω with q≥1 and v=0 there exists a form u∈ L^2_p,q-1(X,L)_h,ω such that u=v and u^2_h,ω≤1/2π qεv^2_h,ω. § PRELIMINARIES: BIRATIONAL GEOMETRY A fibration is a projective surjective morphism with connected fibres between two normal varieties. We write D ≥ 0 for an effective -divisor D on a normal variety X. If f X→ Y is a surjective morphism of normal varieties and if D is an -divisor on X, then D is f-exceptional if _Y f( D) ≥ 2. If X is a normal projective variety and if D is an -Cartier -divisor on X, we denote |D|_:={D'≥ 0 | D'∼_ D}. A pair (X,Δ) consists of a normal variety X and a Weil -divisor Δ≥0 such that the divisor K_X+Δ is -Cartier. The standard reference for the foundational definitions and results on the singularities of pairs and the Minimal Model Program (MMP) is <cit.>, and we use these freely in this paper. We recall additionally that flips for klt pairs exist by <cit.>. We use the MMP with scaling of an ample (or just big) divisor as described in <cit.>. We will need the following observation in Section <ref>: if (X,Δ) is a -factorial pair such that X is not uniruled, then K_X+Δ is pseudoeffective. Indeed, let π Y→ X be a resolution of X. Then Y is not uniruled, hence the divisor K_Y is pseudoeffective by <cit.>. Then the divisor K_X∼_π_*K_Y is pseudoeffective, and the claim is immediate. §.§ Models We recall the definition of negative maps, of minimal models and of good minimal models. Let X and Y be -factorial varieties, and let D be an -divisor on X. 
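To indicate how this statement is typically applied, and only as an illustration: if h is a smooth metric on L with Θ_h(L)≥εω, then every smooth ∂̄-closed L-valued (n,q)-form with q≥1 has finite L^2 norm with respect to h and ω, hence is ∂̄-exact in the L^2 sense by the theorem; together with standard regularity for ∂̄ this recovers Kodaira's vanishing theorem H^q(X,_X(K_X)⊗ L)=0 for q≥1 when L admits a smooth metric of positive curvature.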
A birational contraction f X Y is D-non-positive (respectively D-negative) if there exists a resolution (p,q) W→ X× Y of the map f such that p^*D∼_ q^*f_*D+E, where E≥0 is a q-exceptional -divisor (respectively, E≥0 is a q-exceptional -divisor and E contains the proper transform of every f-exceptional divisor). W [dl, "p" swap] [dr, "q"] X [rr, dashed, "f" ] Y If f is D-negative and additionally f_*D is nef, the map f is a minimal model for D. If moreover f_*D is semiample, the map f is a good minimal model for D, or simply a good model for D. We use these notions almost exclusively for divisors of the form D=K_X+Δ, where (X,Δ) is a klt pair. Then we talk of minimal and good models of a klt pair (X,Δ). Note that if (X,Δ) is a klt pair, then it has a good model if and only if there exists a Minimal Model Program with scaling of an ample divisor which terminates with a good model of (X,Δ); this follows from the proof of <cit.>. §.§ Nakayama–Zariski and Boucksom–Zariski functions There are two ways to assign asymptotic functions to pseudoeffective classes: the algebraic construction from <cit.> and the analytic construction from <cit.>. They coincide on projective manifolds, but we will need both constructions in this paper. Let X be a -factorial projective variety and let Γ be a prime divisor on X. Nakayama <cit.> defined σ_Γ-functions on the pseudoeffective cone of X; this was originally done when X is smooth, but the definition works well in the -factorial setting <cit.>. We explain briefly their construction. If D is a big -divisor on X, set σ_Γ (D) := inf{_ΓΔ| 0 ≤Δ∼_ D }; and if D is a pseudoeffective -divisor on X and A is an ample -divisor on X, define σ_Γ (D) := lim_ε↓ 0σ_Γ (D+ε A); this does not depend on the choice of A and is compatible with the definition above for big divisors. Moreover, σ_Γ(D) only depends on the numerical class of D, hence σ_Γ is well-defined on the pseudoeffective cone of X. Each function σ_Γ is homogeneous of degree 1, convex and lower semicontinuous on the cone of pseudoeffective divisors on X, and it is continuous on the cone of big divisors on X. Set N_σ (D) := ∑_Γσ_Γ(D)·Γ and P_σ:=D-N_σ(D), where the formal sum runs through all prime divisors Γ on X. Both N_σ(D) and P_σ(D) are -divisors on X, and the decomposition D = P_σ (D) + N_σ (D) is the Nakayama–Zariski decomposition of D. If X is a compact Kähler manifold and if Γ is an analytic prime divisor on X, Boucksom <cit.> defined ν(·,Γ)-functions on the cone of pseudoeffective classes in H^1,1(X,), and he showed that they coincide with Nakayama's σ_Γ-functions when one considers algebraic classes. To avoid possible confusion with Lelong numbers, we will denote these Boucksom's functions also by σ_Γ. We explain briefly their construction, adopting for the moment the concept of currents with minimal singularities which will be dealt with in detail in Section <ref>. Let α be a pseudoeffective class in H^1,1(X,). After fixing a reference Kähler form ω, and if T_min,ε is a current with minimal singularities in the class α+ε{ω} for a positive real number ε, set σ_Γ(α):=inf_x∈Γsup_ε>0ν(T_min,ε,x); this does not depend on the choice of ω, and one has σ_Γ(α)=ν(T_min,Γ) when α is a big class and T_min∈α is a current with minimal singularities. Even though the notation is slightly different, the definition above is equivalent to that from <cit.>. We explain this briefly now. 
If α is a class in H^1,1(X,) and if ω is a Kähler form on X, then <cit.> introduces α[γ] as the set of closed almost positive (1,1)-currents T∈α such that T≥γ. Then <cit.> defines σ_Γ(α):=inf_x∈Γsup_ε>0ν(T_min,ε,x), where for each ε>0, T_min,ε is the current with minimal singularities in α[-εω]; this is defined analogously as for pseudoeffective classes in Section <ref>. Now, since ω is closed, one shows easily that T_min,ε=T_min,ε+εω, which yields that the definition from <cit.> is equivalent to the one given in this paper. The following lemma is well known and we include the proof for completeness. Let (X,Δ) be a projective log canonical pair such that K_X+Δ is pseudoeffective, Δ is a -divisor and such that (X,Δ) has a minimal model. If f Y→ X is a resolution, then N_σ(f^*(K_X+Δ)) is a -divisor. Let φ (X,Δ) (X',Δ') be a minimal model of (X,Δ) and let (p,q) W→ X× X' be a resolution of indeterminacies of φ such that W is smooth. We may assume that p factors through f; let w W→ Y be the resulting map. Y [d, "f" swap] W [l, "w" swap] [dl, "p" swap] [dr, "q"] X [rr, dashed, "φ" ] X' Then there exists an effective q-exceptional -divisor E on W such that p^*(K_X+Δ)∼_ q^*(K_X'+Δ')+E. Then N_σ(p^*(K_X+Δ))=N_σ(q^*(K_X'+Δ'))+E=E by <cit.>, hence by <cit.> we have N_σ(f^*(K_X+Δ))=w_*N_σ(p^*(K_X+Δ))=w_*E, which proves the lemma. §.§ Stable, diminished and augmented base loci A good reference for basic results on the asymptotic base loci treated in this subsection is <cit.>, see also <cit.>. If X is a normal projective variety and if D is a pseudoeffective -Cartier -divisor on X, the stable base locus of D is (D):=⋂_D'∈ |D|_ D'; this is a closed subset of X. If D is a -divisor, by <cit.> this is equivalent to saying that (D)=⋂_k∈|kD|, hence by <cit.> we have (D)=|kD| for all k sufficiently divisible. The diminished base locus of D is _-(D):=⋃_A ample on X(D+A); this only depends on the numerical equivalence class of D and is a countable union of closed subsets of X. If X is additionally -factorial, then N_σ(D) is the divisorial part of _-(D), see <cit.>. This locus is sometimes called the non-nef locus of D; we use both names for this locus interchangeably. The augmented base locus of D is _+(D):=⋂_A ample on X(D-A); it only depends on the numerical equivalence class of D and is a closed subset of X. This locus is sometimes called the non-ample locus of D; we use both names for this locus interchangeably. We have _-(D)⊆(D)⊆_+(D). Further, by <cit.> we have (D)_+=(D-A) for any ample -divisor A whose numerical class is of sufficiently small norm. From this it is easy to deduce that D is ample if and only if _+(D)=∅. By <cit.> we have _-(D)=⋃_A ample on X_+(D+A). §.§ Finite generation We review now several facts about finitely generated multigraded rings and the existence of minimal models, which will be used in Section <ref>. If X is a normal projective variety and if D is a -Cartier -divisor on X, we define the global sections of D by H^0(X,D)={f∈ k(X)| f+D ≥ 0 }; note that clearly H^0(X,D)=H^0(X,⌊ D⌋). If D_1,…,D_r are -Cartier -divisor on X, we define the corresponding divisorial ring as ℜ:=R(X;D_1, …, D_r):=⊕_(n_1,…, n_r)∈^r H^0(X,n_1D_1+… + n_rD_r). The support of ℜ, denoted by ℜ, is the convex hull of all integral divisors D in the cone ∑_i=1^r_+ D_i⊆_(X) such that H^0(X,D)≠0. The following result gives the most important example of a finitely generated divisorial ring. 
The first part of Theorem <ref> was proved in <cit.> and <cit.>; see also <cit.> and Remark <ref> for the formulation we adopt in this paper. The second part is a special case of <cit.>, and can also be deduced from <cit.>. Let X be a -factorial projective variety and let Δ_i be -divisors on X such that each pair (X,Δ_i) is klt for i=1,…,r. Assume that for each i either Δ_i is big or K_X+Δ_i is big. Then the ring ℜ=R(X;K_X+Δ_1,…,K_X+Δ_r) is finitely generated. Moreover, ℜ is a rational polyhedral cone and there is a finite rational polyhedral subdivision ℜ=⋃𝒞_k with the property that for each k there exist a -factorial projective variety X_k and a birational contraction φ_k X X_k such that φ_k is a minimal model for every klt pair (X,B_k) with K_X+B_k∈𝒞_k. Even though the formulation is slightly different, Theorem <ref> follows easily from <cit.>. Indeed, without loss of generality we may assume that K_X+Δ_i are big for i≤ k and that Δ_i are big for i>k. For each i≤ k let E_i be an effective -divisor such that K_X+Δ_i∼_ E_i, and pick a rational number ε>0 such that (X,Δ_i+ε E_i) is klt for each i≤ k. Then by <cit.> the ring R(X;K_X+Δ_1+ε E_1,…,K_X+Δ_k+ε E_k,K_X+Δ_k+1,…,K_X+Δ_r) is finitely generated. Since K_X+Δ_i+ε E_i∼_(1+ε)(K_X+Δ_i) for i≤ k, the ring ℜ is finitely generated by <cit.>. § AUXILIARY RESULTS §.§ (Pluri)subharmonic functions In this paper we need very precise properties of (pluri)subharmonic functions. We start with the following easy lemma. Let Ω⊆^n be a domain, let x∈Ω and let fΩ→∪{-∞} be an upper semicontinuous function. Let {a_m} and {b_m} be sequences of positive real numbers such that lim_m→∞a_m=1 and lim_m→∞b_m=0, and denote c_m:=sup_B(x,1/m)f. Then: * lim_m→∞a_mc_m≤ f(x), * lim_m→∞b_mc_m≤ 0, * if f is subharmonic, then lim_m→∞a_mc_m=f(x), * if f is psh, then lim_m→∞1/m c_m=0. Note that the sequence {c_m} is decreasing, hence converging to a value in ∪{-∞}. Therefore, lim_m→∞a_mc_m=lim_m→∞c_m=lim sup_x'→ xf(x')≤ f(x), which shows (a). When f is subharmonic, then the last inequality is an equality by Lemma <ref>(a), which gives (c). If lim_m→∞c_m∈, then lim_m→∞b_mc_m=0; otherwise we have c_m<0 for all m≫0, thus (b) follows. For (d), note that lim_m→∞c_m/m=lim_m→∞(c_m/log m)·(log m/m). Since lim_m→∞c_m/(-log m)=ν(f,x) and lim_m→∞log m/m=0, the claim follows. The following two approximation results are much deeper, and they will be crucial in Part <ref>. Let X be a complex manifold and let α be a continuous (1,1)-form on X. Let {φ_n} be a sequence of α-psh functions which are locally uniformly bounded from above and which converge in L^1_(X) to a function φ∈(X,α). Then for every sequence of points {x_n} in X which converges to a point x∈ X we have φ(x)≥lim sup_n→∞φ_n(x_n). Set a:=lim sup_n→∞φ_n(x_n). Then by passing to subsequences of {φ_n} and {x_n} we may assume that φ_n converges to φ almost everywhere and a=lim_n→∞φ_n(x_n). As in the proof of Theorem <ref>(a), locally around x there exists a smooth closed form ω≥α. By replacing α by ω and X by a small neighbourhood around x, we may assume that α is smooth and closed. Fix a small coordinate ball B(x,2r) in X such that the functions φ_n are uniformly bounded from above on B(x,2r) and let θ be a smooth potential of α on B(x,2r). Then the functions θ+φ and all θ+φ_n are psh on B(x,2r). We may assume that x_n∈ B(x,r), so that B(x_n,r)⊆ B(x,2r), and let χ_A denote the characteristic function of a set A⊆ B(x,2r).
Then the sequence {e^θ+φ_nχ_B(x_n,r)} is uniformly bounded from above on B(x,2r), and converges almost everywhere to e^θ+φχ_B(x,r), hence by Lebesgue's dominated convergence theorem and by the mean value inequality we have _B(x,r)e^θ+φdV_ω =lim_n→∞_B(x_n,r)e^θ+φ_ndV_ω ≥lim_n→∞e^θ(x_n)+φ_n(x_n)=e^θ(x)+a. By letting r→0 in (<ref>) we conclude by Lemma <ref>(b) that e^θ(x)+φ(x)≥ e^θ(x)+a, which gives the desired inequality. Let φ be a subharmonic function on a domain Ω⊆^n and let A⊆Ω be a set of Lebesgue measure zero such that Ω∖ A is dense in Ω. Then there exists a countable set D⊆Ω∖ A which is dense in Ω, such that for every z∈Ω there exists a sequence {z_q} in D with lim_q→∞z_q=z and lim_q→∞φ(z_q)=φ(z). Denote by · the euclidean norm on ^n. Set 𝒞:={(y,r)∈^2n×| B(y,2r)⊆Ω}, where we view ^2n as a subset of ^n. For each (y,r)∈𝒞, let z_y,r be a point in B(y,r) such that φ(z_y,r)=max(φ|_B(y,r)). Then by applying Lemma <ref>(a) to the point z_y,r we obtain that there exists a point z_y,r∈ (Ω∖ A)∩ B(y,2r) such that φ(z_y,r)≥φ(z_y,r)-r=max(φ|_B(y,r))-r. Then we claim that the countable set D:={z_y,r| (y,r)∈𝒞}⊆Ω∖ A is dense in Ω. Indeed, consider a point w∈Ω. Let m_0 be a positive integer such that B(w,2^-m_0)⊆Ω, and for each m≥ m_0 pick points w_m∈^2n∩ B(w,2^-m-1). Then (w_m,2^-m-2)∈𝒞 by the definition of 𝒞, hence z_w_m,2^-m-2∈ B(w_m,2^-m-1)∩ D by (<ref>) and by the definition of D. Therefore, z_w_m,2^-m-2∈ B(w,2^-m) for any m≥ m_0, which proves that D is dense in Ω. Now, fix z∈Ω. To finish the proof it suffices to show that for each ε>0 there exists z'∈ D∩ B(z,ε) with |φ(z)-φ(z')|<ε. Assume otherwise. Then there exists ε>0 such that B(z,ε)⊆Ω and such that for all z'∈ D∩ B(z,ε) we have |φ(z)-φ(z')|≥ε. By Lemma <ref>(a), this implies that there exists a rational number 0<δ≤ε/3 such that for all z'∈ D∩ B(z,3δ) we have φ(z')≤φ(z)-ε. Pick a point z_0∈^2n∩ B(z,δ). Then the point z_z_0,δ∈ D∩ B(z_0,2δ) constructed as above belongs to D∩ B(z,3δ), and by (<ref>) we have φ(z_z_0,δ)≥max(φ|_B(z_0,δ))-δ≥φ(z)-δ, which contradicts (<ref>). This concludes the proof. We will, in fact, need the following global version of the previous lemma, which follows from Lemma <ref> by compactness. Let φ be a quasi-psh function on a compact complex manifold X and let A⊆ X be a set of Lebesgue measure zero such that X∖ A is dense in X. Then there exists a countable set D⊆ X∖ A which is dense in X, such that for every z∈ X there exists a sequence {z_q} in D with lim_q→∞z_q=z and lim_q→∞φ(z_q)=φ(z). §.§ Estimates of sections of line bundles The following two lemmas will be essential in Part <ref>. Let U⊆^n be a domain and let {L_j}_j∈ J be a collection of -divisors on U. For each j∈ J, let h_j be a smooth metric on L_j with the associated curvature Θ_j, and assume that the (1,1)-forms Θ_j are uniformly bounded on U. Then for each x∈ X there exist constants C>0 and r_0>0 such that for every ball B(x,r)⊆ U with r≤ r_0 and for each σ∈ H^0(B(x,r),mL_j) with j∈ J and m∈ such that mL_j is Cartier, * the function log|σ(z)|_h_j^m+mC|z-x|^2 is psh on B(x,r), and * we have |σ(x)|^2_h_j^m≤ e^2m Cr^2_B(x,r)|σ|^2_h_j^mdV_ω. Fix x∈ X. By the proof of Lemma <ref> applied to the standard Kähler metric on ^n, there exist constants C>0 and r_0>0 such that -Θ_j+Cdd^c |z-x|^2≥0 on B(x,r_0) for all j∈ J. For j∈ J, for m∈ such that mL_j is Cartier, for r≤ r_0 and for σ∈ H^0(B(x,r),mL_j) we have mΘ_j+dd^clog|σ|_h_j^m≥0 on B(x,r) by Example <ref>(b), hence dd^clog|σ|_h_j^m+mCdd^c |z-x|^2≥0 on B(x,r). This shows (a). 
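We also note, purely as an illustration of the first of the two approximation results above, that the inequality φ(x)≥lim sup_n→∞φ_n(x_n) can be strict: on the unit disc in ℂ, the psh functions φ_n:=(1/n)log|z| are locally uniformly bounded from above and converge in L^1_loc to the function φ≡0, while for the points x_n:=e^{-n^2}, which converge to x=0, we have φ_n(x_n)=-n→-∞<0=φ(0).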
Now, for each j∈ J consider the smooth metric g_j:=h_je^C|z-x|^2 on L_j|_B(x,r_0). Since |σ|^2_g_j^m=e^2log|σ|_h_j^m+2mC|z-x|^2, the function |σ|^2_g_j^m is psh on B(x,r) by (a) and by Example <ref>(d), hence the mean value inequality at the point x gives |σ(x)|^2_h_j^m=|σ(x)|^2_g_j^m≤_B(x,r)|σ|^2_g_j^mdV_ω≤ e^2m Cr^2_B(x,r)|σ|^2_h_j^mdV_ω, which finishes the proof. Let X be a complex compact manifold and let L be a line bundle on X with a continuous metric h. Let V⊆ H^0(X,L) be a compact subset with respect to a norm · on H^0(X,L). Consider sections {σ_j}_j∈ in V such that lim_j→∞σ_j=σ_0 in the norm ·. Then the following holds. * The sections σ_j converge uniformly to σ_0 in the metric h, i.e. for every ε>0 there exists a positive integer N such that |σ_0(z)-σ_j(z)|_h≤ε for all j≥ N and all z∈ X. * For any sequence of points {x_j} in X such that lim_j→∞x_j=x_0 we have lim_j→∞|σ_j(x_j)|_h=|σ_0(x_0)|_h. Fix a basis e_1,…,e_n of H^0(X,L), and write σ_j=∑_i=1^nα_j,ie_j for some α_j,i∈. Then lim_j→∞α_j,i=α_0,i by assumption. Fix ε>0. Then, by the continuity of h, for each z_0∈ X there exists r_z_0>0 and a positive integer N_z_0 such that ∑_i=1^n|α_0,i-α_j,i|·|e_i(z)|_h≤ε for all j≥ N_z_0 and z∈ B(z_0,r_z_0), hence the triangle inequality gives |σ_0(z)-σ_j(z)|_h≤ε for all j≥ N_z_0 and z∈ B(z_0,r_z_0). By compactness we can find finitely many points z_1,…,z_k∈ X such that the balls B(z_i,r_z_i) cover X. If we set N:=max{N_z_i| 1≤ i≤ k}, then |σ_0(z)-σ_j(z)|_h≤ε for all j≥ N and all z∈ X by (<ref>), which shows (a). To show (b), fix ε>0. Then there exists a positive integer N_1 such that ||σ_0(x_0)|_h-|σ_0(x_j)|_h|≤ε for all j≥ N_1 On the other hand, by (a) there exists a positive integer N_2 such that |σ_0(z)-σ_j(z)|_h≤ε for all j≥ N_2 and all z∈ X. In particular, ||σ_0(x_j)|_h-|σ_j(x_j)|_h|≤|σ_0(x_j)-σ_j(x_j)|_h≤ε for every j≥ N_2. Therefore, for all j≥max{N_1,N_2} we have ||σ_0(x_0)|_h-|σ_j(x_j)|_h|≤ 2ε, which finishes the proof. §.§ Special quasi-psh functions The following results construct particular quasi-psh functions which will be needed in Part <ref>. Let X be a smooth projective variety and let D be a pseudoeffective -divisor on X. Let A be an ample -divisor on X, let ω∈{A} be a positive smooth form and let α∈{D+A} be a smooth form. Then for each rational number ε∈(0,1) there exists a quasi-psh function ψ_ε on X which has logarithmic singularities with poles along (D+ε A) such that α+dd^cψ_ε≥(1-ε)ω. Fix a rational number ε∈(0,1), let h be a smooth metric on D such that Θ_h(D)=α-ω∈{D}, and let h_A be the smooth metric on A such that ω=Θ_h_A(A). The -divisor D+ε A is big, hence by Remark <ref> there exists a positive integer m such that (D+ε A)=|m(D+ε A)|. Let σ_1,…,σ_k be a basis of the vector space H^0(X,m(D+ε A)), and set ψ_ε=1/2mlog∑_i=1^k|σ_i|^2_h^mh_A^mε. Then (α-ω)+εω+dd^cψ_ε≥0 by Example <ref>(b) and ψ_ε clearly has poles along |m(D+ε A)|. This concludes the proof. Let X be a smooth projective variety with a Kähler form ω, let D be a big -divisor on X and let α∈{D} be a smooth form. Then there exists a rational number ε∈(0,1) and a quasi-psh function ψ on X which has logarithmic singularities with poles along _+(D) such that α+dd^cψ≥εω. Let A be an ample -divisor on X such that the -divisor D-A is big, and let ω'∈{A} be a positive smooth form. Then by Lemma <ref> there exists a positive constant C such that Cω'≥ω, hence by replacing ω by ω', we may assume that ω∈{A}. By Remark <ref> there exists a rational number ε∈(0,1) such that _+(D)=(D-ε A)=((D-A)+(1-ε)A). 
Then the result follows from Lemma <ref> applied to the -divisor D-A, the ample -divisor A and the rational number 1-ε. PART: Currents with minimal singularities § SINGULARITIES OF CURRENTS In this section X is always a compact complex manifold. Good sources for the foundational material on currents with minimal singularities are <cit.>. §.§ Comparison of singularities Let φ_1 and φ_2 be quasi-psh functions on a compact complex manifold X. We say that φ_1 is less singular than φ_2, and write φ_1≼φ_2, if there exists a constant C such that φ_2≤φ_1+C. We denote by φ_1≈φ_2 the induced equivalence relation, i.e. we say that φ_1 and φ_2 have equivalent singularities if φ_1≼φ_2≼φ_1. If T_1 and T_2 are two closed almost positive (1,1)-currents on X with corresponding global potentials φ_1 and φ_2, we say that T_1 is less singular than T_2, and write T_1≼ T_2, if φ_1≼φ_2; and similarly for T_1≈ T_2. This does not depend on the choice of global potentials. It is immediate that any two closed almost positive (1,1)-currents with equivalent singularities have the same Lelong numbers. The relation ≼ behaves well with respect to multiplication by positive constants and sums of currents. More precisely, let φ_1, φ_2 and φ_3 be quasi-psh functions on a compact complex manifold X and let λ be a positive real number. If φ_1≼φ_2, then it follows immediately that λφ_1≼λφ_2 and φ_1+φ_3≼φ_2+φ_3. Conversely, if φ_1+φ_3≼φ_2+φ_3, then φ_1≼φ_2: this is clear away from the pole set {φ_1+φ_2+φ_3=-∞}, hence it holds everywhere on X by Corollary <ref>. Similar statements hold for currents, and are proved by considering their global potentials. Now, let φ_1 and φ_2 be quasi-psh functions on a compact complex manifold X such that φ_1≼φ_2. Then it is immediate to check that ℐ(φ_2)⊆ℐ(φ_1). In particular, if φ_1 and φ_2 have equivalent singularities, then they have the same multiplier ideal. §.§ Minimal singularities Let α be a closed real continuous (1,1)-form on X whose class {α}∈ H^1,1_BC(X,) is pseudoeffective. A minimal element φ_min∈(X,α) with respect to the relation ≼ is called a global potential with minimal singularities in (X,α), and the corresponding current T_min=α+dd^cφ_min is a current with minimal singularities in {α}; such a global potential and a current always exist by the next paragraph. Note that T_min∈{α} is unique up to equivalence of singularities, but is in general not unique, see <cit.>. One checks immediately that for each point x∈ X we have ν(T_min,x)=min_T∈αν(T,x). It is also clear by Remark <ref> that for each positive number λ, the current λ T_min has minimal singularities in the class {λα}. By <ref>, all currents with minimal singularities in a fixed cohomology class have the same multiplier ideal. To show the existence of global potentials with minimal singularities, following the notation from <cit.> we consider the upper envelope V_α=sup{φ∈(X,α)|sup_Xφ=0}. The function V_α is again α-psh. Indeed, by Theorem <ref>(a) we have V_α^*∈(X,α), and clearly V_α≤ V_α^* by the definition of upper semicontinuous regularisations. But then V_α^*∈{φ∈(X,α)|sup_Xφ=0}, hence V_α^*≤ V_α by the definition of V_α. Thus, V_α=V_α^*. The functions V_α are good for showing some existence results, such as the one above, and they have good regularity properties on the non-ample locus when α is a big class, see <cit.> and the references therein. However, they seem to be too general to be useful in birational geometry. 
That is the reason why we will consider different global potentials with minimal singularities in this paper: supercanonical potentials, studied in Section <ref>. The main reason why functions V_α are useful is that, as showed above, they themselves belong to the envelope {φ∈(X,α)|sup_Xφ=0} (supercanonical potentials do not satisfy this property and this is one of the main issues in dealing with them). To demonstrate how this is used in practice, we prove the following result noted already in <cit.>. Let X be a compact Kähler manifold with a Kähler form ω, and let α be a real continuous (1,1)-form on X whose class {α}∈ H^1,1(X,) is pseudoeffective. Denote α_t:=α+tω for t≥0. Then the functions V_α_t decrease pointwise to V_α as t→0. In particular, the positive currents α_t+dd^cV_α_t converge weakly to α+dd^cV_α as t→0, and for real numbers 0≤ t_1≤ t_2 we have ℐ(V_α_t_1)⊆ℐ(V_α_t_2). Since ω≥0, we have V_α_t'∈{φ∈(X,α_t)|sup_Xφ=0} when t'≤ t, hence V_α_t'≤ V_α_t by the definition of V_α_t; this also shows the statement on multiplier ideals in the lemma. Therefore, the limit V_0:=lim_t→0V_α_t exists and clearly V_α≤ V_0≤0. Further, by Theorem <ref>(e) we have V_0∈(X,α) and the functions V_α_t converge to V_0 in L^1_(X). Thus, V_0≤ V_α by the definition of V_α, and so V_0=V_α, as desired. §.§ Minimal singularities under pullbacks and sums Currents with minimal singularities are stable under pullback: Let π Y→ X be a surjective morphism with connected fibres between compact complex manifolds and let θ∈ H^1,1_BC(X,) be a pseudoeffective class. Then a closed positive (1,1)-current T∈θ has minimal singularities if and only if the current f^*T∈ f^*θ has minimal singularities. The proof is in <cit.>, see also <cit.>. Let T be a closed positive (1,1)-current on a compact complex manifold X which is a current with minimal singularities in the class {T}, and let T_1≤ T be a closed positive (1,1)-current on X. Then T_1 is a current with minimal singularities in the class {T_1}. Indeed, denote T_2:=T-T_1≥0. If S is any closed positive (1,1)-current in {T_1}, then S+T_2∈{T}. By the definition of currents with minimal singularities we have T_1+T_2=T≼ S+T_2. But then T_1≼ S by Remark <ref>, as desired. We will need later the following results. Let π Y→ X be a surjective morphism with connected fibres from a smooth complex projective variety to a normal complex projective variety. Let D be a pseudoeffective -divisor on X and let E be an effective π-exceptional -divisor on Y. * For each closed positive current T∈{π^*D+E} we have T≥ E. * If a current S∈{π^*D} has minimal singularities, then the current S+E∈{π^*D+E} has minimal singularities. * If a current T∈{π^*D+E} has minimal singularities, then T-E∈{π^*D} is a positive current with minimal singularities. Part (a) has the same proof as <cit.>; alternatively, the proof can be extracted from that of <cit.>, by replacing there the reference <cit.> by either <cit.> or <cit.>. Note that some of those results are stated for -divisors, but the proofs work for -divisors. For (b), consider a current S'∈{π^*D+E}. Then by (a) we have that S'-E is a positive current in {π^*D}, hence S≼ S'-E since S has minimal singularities. Therefore, S+E≼ S' by Remark <ref>, which shows that S+E has minimal singularities. To show (c), note that T-E≥0 by (a). We conclude by Remark <ref>. Let X be a compact complex manifold and let α and β be smooth forms whose classes in H^1,1_BC(X,) are pseudoeffective. * If φ∈(X,α) is bounded, then φ has minimal singularities. 
* Assume that there exist φ_α∈(X,α) and φ_β∈(X,β) with minimal singularities such that the function φ_α+φ_β∈(X,α+β) has minimal singularities. Then for all functions φ_α'∈(X,α) and φ_β'∈(X,β) with minimal singularities, the function φ_α'+φ_β'∈(X,α+β) has minimal singularities. * Let D_1 and D_2 be semiample -divisors on X, and let T_1∈{D_1} and T_2∈{D_2} be currents with minimal singularities. Then for each 0≤ t≤ 1, the current tT_1+(1-t)T_2∈{tD_1+(1-t)D_2} has minimal singularities. We first show (a). By assumption there exists a constant C_φ such that φ≥ C_φ. If φ'∈(X,α), then it is bounded from above, hence there exists a constant C_φ' such that φ'≤ C_φ'. Therefore, φ'≤φ+C_φ'-C_φ, so that φ≼φ' and consequently, φ has minimal singularities. For (b), by the definition of minimal singularities we have φ_α≈φ_α' and φ_β≈φ_β'. Then φ_α+φ_β≈φ_α'+φ_β' by Remark <ref>, which gives (b). Finally, we show (c). By passing to multiples, we may assume that D_1 and D_2 are integral basepoint free divisors. Then by Example <ref>(b) there exist smooth positive (1,1)-forms α_1∈{D_1} and α_2∈{D_2}, which have minimal singularities by (a). Then again by (a), for each 0≤ t≤ 1, the current tα_1+(1-t)α_2∈{tD_1+(1-t)D_2} has minimal singularities, and we conclude by (b). If D is a semiample -divisor on a compact complex manifold X and if T∈{D} is a current with minimal singularities, then all Lelong numbers of T are zero. Indeed, as in the proof of Lemma <ref> there exists a smooth positive (1,1)-form α∈{D}, which clearly has all Lelong numbers zero. We conclude by (<ref>). §.§ Siu decomposition of currents with minimal singularities One of the advantages of working with currents with minimal singularities is that the divisorial part of their Siu decomposition always has finitely many components. This was observed already in <cit.>; here we provide a slightly different proof. In the following lemma ρ(X) denotes the Picard number of a compact complex manifold X. Let X be a compact complex manifold, let θ∈ H^1,1_BC(X,) be a pseudoeffective class, and let T_min∈α be a current with minimal singularities. Let T_min=D+R be the Siu decomposition of T_min, where D is its divisorial part and R is its residual part. Then D is an -divisor which has at most ρ(X) components. Write D=∑_i∈ Iλ_i D_i, where D_i are prime divisors on X, and assume for contradiction that #I>ρ(X). If we denote M={1,…,ρ(X)+1}, then there exist m∈ M and real numbers λ_i' such that D_m≡∑_i∈ M∖{m}λ_i' D_i. Choose a positive real number ε such that λ_m>ε and λ_i+ελ_i'>0 for all i∈ M∖{m}. Then we have ∑_i>ρ(X)+1λ_i D_i+R≥ 0 and ∑_i∈ M∖{m}(λ_i+ελ_i')D_i+(λ_m-ε)D_m≥0, hence T:=∑_i∈ M∖{m}(λ_i+ελ_i')D_i+(λ_m-ε)D_m+∑_i>ρ(X)+1λ_i D_i+R≥ 0, and note that, by (<ref>), we have T≡ T+ε(D_m-∑_i∈ M∖{m}λ_i' D_i)=T_min. But then ν(T,D_m)=λ_m-ε<λ_m=ν(T_min,D_m), a contradiction which proves the lemma. § SUPERCANONICAL CURRENTS In this section we introduce a special kind of currents with minimal singularities: supercanonical currents. As mentioned in the introduction and as we will see in Lemma <ref>, these are defined by an exponential L^1-condition. This property will yield in Part <ref> that supercanonical currents on big line bundles can be defined only by using global holomorphic sections of their multiples. This algebraicity is the main reason why supercanonical currents should be fundamental for applications within the MMP, and this is spelled out in Theorem <ref> and in other results in Sections <ref> and <ref>. 
Supercanonical currents for -divisors of the form K_X+Δ, where (X,Δ) is a projective klt pair, were defined in <cit.>. We use a similar, but somewhat simpler version of that definition in order to extend it to all pseudoeffective classes. Supercanonical currents are defined in the following lemma, whose proof follows closely, for the most part, the presentation in <cit.>. Let X be a compact complex manifold and let α be a smooth (1,1)-form on X whose class {α}∈ H_BC^1,1(X,) is pseudoeffective. Let 𝒮_α:={φ∈(X,α)|∫_X e^2φdV_ω≤ 1}, and define the supercanonical potential φ_α, associated to α as φ_α,(x):=sup_φ∈𝒮_αφ(x) for x∈ X. Then: * 𝒮_α≠∅, * all φ∈𝒮_α are uniformly bounded from above on X, * φ_α,(x)=max_φ∈𝒮_αφ(x) for x∈ X, * φ_α,∈(X,α), * the current T_α,=α+dd^cφ_α, is a closed positive (1,1)-current with minimal singularities in {α}, called the supercanonical current associated to α. Step 1. Consider any φ_0∈(X,α). As φ_0 is bounded from above, there exists a constant C_0 such that ∫_X e^2φ_0dV_ω≤ 2C_0, hence φ_0-log C_0∈𝒮_α. This gives (a). Step 2. By compactness of X there exist finitely many coordinate balls U_i=B(z_i,2r_i) for z_i∈ X such that the balls V_i=B(z_i,r_i) cover X and for each i there is a smooth function θ_i on U_i such that α|_U_i=dd^cθ_i. Denote M_i,min:=inf_U_ie^2θ_i and M_i,max:=sup_U_ie^2θ_i. Let φ∈𝒮_α and let x∈ V_i for some i. Then θ_i+φ|_U_i is plurisubharmonic on U_i, hence so is e^θ_i+φ|_U_i, and we have B(x,r_i)⊆ U_i. The mean value inequality and the assumption φ∈𝒮_α give e^2φ(x)M_i,min ≤ e^2(θ_i(x)+φ(x))≤n!/π^nr_i^2n∫_B(x,r_i)e^2(θ_i+φ)dV_ω ≤n!/π^nr_i^2n∫_U_ie^2φe^2θ_idV_ω≤n!/π^nr_i^2nM_i,max, hence φ(x)≤1/2log(n!/π^nr_i^2nM_i,max/M_i,min). This shows (b). Step 3. The function φ_α, is well defined by (b), and set Φ=(φ_α,)^*. Then Φ∈(X,α) by Theorem <ref>(a), and we claim that Φ=φ_α,. To that end, fix x∈ X. We may assume that Φ(x)≠-∞, since otherwise the claim is clear. Then there exists a sequence {x_n} of points in X such that x_n→ x and Φ(x)=lim sup_z→ xφ_α,(z)=lim_n→∞φ_α,(x_n), hence, by the definition of φ_α,, there exists a sequence of functions {φ_n} in 𝒮_α such that Φ(x)=lim_n→∞φ_n(x_n). By (b) and by Theorem <ref>(b), after passing to a subsequence we may assume that the sequence {φ_n} converges in L^1_(X) and almost everywhere to a function φ∈(X,α), and then φ∈𝒮_α by Fatou's lemma. In particular, we have φ(x)≤φ_α,(x)≤Φ(x) by the definition of φ_α,. On the other hand, we have φ(x)≥Φ(x) by Lemma <ref> and by (<ref>), which, together with (<ref>), finishes the proof of the claim and of (d). The same proof also shows that φ_α,(x)=φ(x), which gives (c). Step 5. Finally, we show (e). Consider any φ_1∈(X,α). Then as in Step 1 there exists a constant C_1 such that φ_1-C_1∈𝒮_α, hence φ_α,≼φ_1 by the definition of φ_α,. This finishes the proof. The following lemma proves the first easy properties of supercanonical currents. Let X be a compact complex manifold, and let α and β be smooth real (1,1)-forms on X whose cohomology classes are pseudoeffective. With notation from Lemma <ref> the following holds. * For each 0≤ε≤1 we have εφ_α,+(1-ε)φ_β,≤φ_εα+(1-ε)β,. * There exists a constant C such that for each 0≤ε≤1 and each φ∈𝒮_α+εβ we have φ≤ C. * If β≥0, then for each 0≤ε_1≤ε_2≤1 we have 𝒮_α+ε_1β⊆𝒮_α+ε_2β and φ_α+ε_1β,≤φ_α+ε_2β,. Part (c) follows immediately from the inclusion (X,α+ε_1β)⊆(X,α+ε_2β) and from the definition of supercanonical potentials, so we concentrate on (a) and (b). For (a), fix 0≤ε≤1. 
For fixed φ_α∈𝒮_α and φ_β∈𝒮_β, it is immediate that εφ_α+(1-ε)φ_β∈(X,εα+(1-ε)β), and by Hölder's inequality we have ∫_X e^2(εφ_α+(1-ε)φ_β)dV_ω≤(∫_X e^2φ_αdV_ω)^ε(∫_X e^2φ_βdV_ω)^1-ε≤1. Therefore, we have εφ_α+(1-ε)φ_β∈𝒮_εα+(1-ε)β, hence εφ_α+(1-ε)φ_β≤φ_εα+(1-ε)β,. Then (a) follows by taking the pointwise supremum over all φ_α∈𝒮_α and φ_β∈𝒮_β. Next we show (b). The proof is analogous to that of Lemma <ref>(b). By compactness of X there exist finitely many coordinate balls U_i=B(z_i,2r_i) for z_i∈ X such that the balls W_i=B(z_i,r_i) cover X and for each i there are smooth functions θ_i and ξ_i on U_i such that α|_U_i=dd^cθ_i and β|_U_i=dd^cξ_i. Denote M_i,min:=inf_ε∈[0,1]inf_U_ie^2(θ_i+εξ_i) and M_i,max:=sup_ε∈[0,1]sup_U_ie^2(θ_i+εξ_i). Now, fix 0≤ε≤1 and φ∈𝒮_α+εβ, and let x∈ W_i for some i. Then θ_i+εξ_i+φ|_U_i is plurisubharmonic on U_i, hence so is e^θ_i+εξ_i+φ|_U_i, and we have B(x,r_i)⊆ U_i. Then as in Step 2 of the proof of Lemma <ref>, the mean value inequality and the assumption φ∈𝒮_α+εβ give e^2φ(x)≤n!/π^nr_i^2nM_i,max/M_i,min. This shows (b). PART: Asymptotically equisingular approximations In Part <ref> we prove the first main result of this paper, Theorem <ref>. We first study in detail the different instances of approximations of currents which are relevant for this paper, in an increasing order of complexity: asymptotically equisingular approximations, good approximations and finally excellent approximations. One of the main technical results of this part is Corollary <ref>, which essentially says that in the context of the MMP, the approximation by currents with minimal singularities is asymptotically equisingular if and only if it is excellent. This is one of the main ingredients in the proof of Theorem <ref>. § ASYMPTOTICALLY EQUISINGULAR APPROXIMATIONS In this section we introduce the weakest form of approximations of currents relevant for this paper. Let T be a closed almost positive (1,1)-current on a compact complex manifold X. A sequence of closed almost positive (1,1)-currents {T_m}_m∈ on X is an asymptotically equisingular approximation of T if there exist an effective divisor D on X and a sequence of positive integers {m_ℓ}_ℓ∈_>0 such that m_ℓ→∞ and ℐ(ℓ T_m_ℓ)⊗_X(-D)⊆ℐ(ℓ T)⊆ℐ(ℓ T_m_ℓ)⊗_X(D) for all ℓ. In the definition we use the convention from Remark <ref>: in particular, all sheaves in Definition <ref> are understood as subsheaves of the sheaf of meromorphic functions on X. Definition <ref> is inspired by equisingular approximations from <cit.> and <cit.>, although equisingular approximations from op. cit. seem to be a too restrictive notion to consider in the context of the Minimal Model Program. Note that in Definition <ref> we do not require that the sequence {T_m}_m∈ converges weakly to T, hence the concept of asymptotically equisingular approximations seems to be a very weak one (we will see in Theorem <ref> that any positive (1,1)-current on a compact Kähler manifold has such an approximation). The word approximation might a priori be misleading, but is in some sense justified by the following lemma. Let X be a compact complex manifold and let T be a closed almost positive (1,1)-current on X with an asymptotically equisingular approximation {T_m}_m∈. Then there exists a sequence of positive integers {m_ℓ}_ℓ∈_>0 with m_ℓ→∞, such that for each prime divisor E over X we have ν(T,E)=lim_ℓ→∞ν(T_m_ℓ,E). Fix a prime divisor E over X, let π Y→ X be a modification such that E is a prime divisor on Y, and let A:=K_Y-π^*K_X denote the ramification divisor on Y. 
Let π^*T=R+D and π^*T_m=R_m+D_m be the Siu decompositions of π^*T and each π^*T_m, respectively. By the definition of asymptotically equisingular approximations, there exist an effective integral divisor G on X and a sequence of positive integers {m_ℓ}_ℓ∈_>0 such that m_ℓ→∞ and ℐ(ℓ T_m_ℓ)⊗_X(-G)⊆ℐ(ℓ T)⊆ℐ(ℓ T_m_ℓ)⊗_X(G) for all ℓ. By Theorem <ref>(a)(d) we have ℐ(ℓ T_m_ℓ+G)⊆ℐ(ℓ T_m_ℓ)⊗_X(-G), which together with the first inclusion in (<ref>) gives ℐ(ℓ T_m_ℓ+G)⊆ℐ(ℓ T). Then Lemma <ref> implies ℐ(ℓπ^*T_m_ℓ+π^*G)⊗_Y(-A)⊆ℐ(ℓπ^*T). Similarly we obtain ℐ(ℓπ^*T+π^*G)⊗_Y(-A)⊆ℐ(ℓπ^*T_m_ℓ). Now, note that for each ℓ, the (1,1)-current ℓ D+π^*G is the divisorial part of the Siu decomposition of ℓπ^*T+π^*G, and ℓ D_m_ℓ+π^*G is the divisorial part of the Siu decomposition of ℓπ^*T_m_ℓ+π^*G. Thus, by Theorem <ref>(e) there exists an analytic open subset U⊆ Y such that _Y(Y∖ U)≥2 and ℐ(ℓπ^*T+π^*G)|_U =_U(-⌊ℓ D⌋-π^*G), ℐ(ℓπ^*T_m_ℓ+π^*G)|_U =_U(-⌊ℓ D_m_ℓ⌋-π^*G), and ℐ(ℓπ^*T)|_U=_U(-⌊ℓ D⌋) and ℐ(ℓπ^*T_m_ℓ)|_U=_U(-⌊ℓ D_m_ℓ⌋). This together with (<ref>) and (<ref>) gives _U(-⌊ℓ D_m_ℓ⌋-π^*G-A)⊆_U(-⌊ℓ D⌋) and _U(-⌊ℓ D⌋-π^*G-A)⊆_U(-⌊ℓ D_m_ℓ⌋). Considering the order of vanishing along E, these inclusions imply ⌊ℓν(T,E)⌋ -_E π^*G-_E A≤⌊ℓν(T_m_ℓ,E)⌋ ≤⌊ℓν(T,E)⌋+_E π^*G+_E A. We conclude by dividing these inequalities by ℓ and letting ℓ→∞. § GOOD APPROXIMATIONS In the context of the MMP, asymptotically equisingular approximations carry a priori too little information on the currents involved. We need first a stronger notion. Let T be a closed almost positive (1,1)-current on a compact complex manifold X. A sequence of closed almost positive (1,1)-currents {T_m}_m∈ on X is a good approximation of T if: * {T_m}_m∈ is an asymptotically equisingular approximation of T, and * all T_m have generalised analytic singularities. The definition of good approximations is motivated by the following result hidden in <cit.>, which shows that every closed positive (1,1)-current on a compact Kähler manifold always has at least one good approximation. Theorem <ref> is not necessary for the remainder of the paper, but we include it for the sake of completeness: it demonstrates how the concept of good approximations is inspired by the approximation techniques of Demailly <cit.>. Let T be a closed positive (1,1)-current on a compact Kähler manifold X. Then there exists a good approximation of T. We recall the argument from <cit.>. Fix a smooth form α∈{T} and let φ be a quasi-psh function on X such that T=α+dd^cφ. By subtracting a constant from φ we may assume that φ≤0. By the Bergman kernel approximation technique <cit.>, there exist quasi-psh functions with analytic singularities φ_m≤0 and a constant C>0 such that: * α + dd^c φ_m≥-ε_mω, where lim_m→+∞ε_m = 0, * φ_m≥φ-C/m for every m. Setting T_m:=α+dd^c(m+1/mφ_m), we claim that {T_m}_m∈ is the desired sequence. Indeed, we only need to show {T_m}_m∈ is an asymptotically equisingular approximation. To that end, fix ℓ>0. We have ℐ(m+1/mℓφ)⊆ℐ(m+1/mℓφ_m) by (ii), and since ℐ(m+1/mℓφ)=ℐ(ℓφ) for m≫0 by Theorem <ref>, we conclude that ℐ(ℓ T)⊆ℐ(ℓ T_m) for m≫0. Conversely, by <cit.> for λ=ℓ and λ'=ℓm+1/m, we have ℐ(ℓ T_m)⊆ℐ(ℓ T) for m≫0, which proves the result. The following proposition and corollary are the crucial MMP results on which the rest of the arguments in Part <ref> rely. They relate the Minimal Model Program and good approximations. Let (X,Δ) be a projective klt pair such that K_X+Δ is pseudoeffective and X is smooth. Assume that (X,Δ) has a good minimal model. 
Let T_min be a current with minimal singularities in {K_X+Δ}. Then the following holds. * The current T_min has generalised analytic singularities, and moreover, it has generalised algebraic singularities if Δ is a -divisor. * If φ (X,Δ) (X',Δ') is a (K_X+Δ)-non-positive birational contraction such that the divisor K_X'+Δ' is semiample, and if W→ X is a resolution of indeterminacies of φ which is smooth, then T_min descends to W. * If π Y→ X is a resolution such that the Siu decomposition of π^*T_min has the form π^*T_min = R+D, where the residual part R has all Lelong numbers zero and D is the divisorial part, then D=N_σ(π^*(K_X+Δ)). We first show (a). As mentioned in <ref>, we may run a (K_X+Δ)-MMP φ (X,Δ) (X',Δ') with scaling of an ample divisor which terminates with a good minimal model (X',Δ'). Let (p,q) W→ X× X' be a resolution of indeterminacies of φ such that W is smooth. W [dl, "p" swap] [dr, "q"] X [rr, dashed, "φ" ] X' By the Negativity lemma <cit.>, there exists an effective q-exceptional -divisor E on W such that p^*(K_X+Δ)∼_ q^*(K_X'+Δ')+E_W. The current p^*T_min∈{p^*(K_X+Δ)} has minimal singularities by Proposition <ref>. By (<ref>) and by Lemma <ref>(c), the current R_W:=p^*T_min-E_W∈{q^*(K_X'+Δ')} is a positive current with minimal singularities. As q^*(K_X'+Δ') is semiample, Remark <ref> gives that R_W has all Lelong numbers zero. In particular, p^*T_min=R_W+E_W is the Siu decomposition of p^*T_min and therefore, the current T_min has generalised analytic singularities. If Δ is a -divisor, then clearly so is E_W. This shows (a). The proof of (b) is the same as that of (a). Now we show (c). With notation as in the proof of (a) above, we may assume that p factors through π; let w W→ Y be the resulting map. Then E_W=w^*D by the discussion in <ref>. By <cit.> we have N_σ(p^*(K_X+Δ))=E_W. Then by (<ref>), (<ref>) and <cit.> we obtain N_σ(π^*(K_X+Δ))=w_*N_σ(p^*(K_X+Δ) )=w_*E_W=D, as desired. Let (X,Δ) be a projective klt pair such that K_X+Δ is pseudoeffective and X is smooth. Let A≥0 be a big -divisor on X such that the pair (X,Δ+A) is klt, and for each ε>0 let T_ε,min be a current with minimal singularities in {K_X+Δ+ε A}. * Assume that (X,Δ) has a minimal model. Then there exists a positive rational number δ and a resolution π Y→ X such that for each 0<ε≤δ the current T_ε,min has generalised analytic singularities and it descends to Y. If ε∈ and Δ is a -divisor, then the current T_ε,min has generalised algebraic singularities. * Assume that (X,Δ) has a good minimal model. Then there exists a positive rational number δ and a resolution π Y→ X such that for each 0≤ε≤δ the current T_ε,min has generalised analytic singularities and it descends to Y. If ε∈ and Δ is a -divisor, then the current T_ε,min has generalised algebraic singularities. We first show (a). By <cit.> we may run a (K_X+Δ)-MMP φ (X,Δ) (X',Δ') with scaling of an ample divisor which terminates with a minimal model (X',Δ'), and denote A':=φ_*A. Then there exists 0≤δ_0≪1 such that φ is also a partial (K_X+Δ+ε A)-MMP for all 0≤ε≤δ_0. By <cit.> there exists 0<δ≪δ_0 such that, if we run a (K_X'+Δ'+δ A')-MMP with scaling of an ample divisor, then it is a (K_X'+Δ')-trivial MMP. Since K_X'+Δ'+δ A' is big, this last (K_X'+Δ'+δ A')-MMP terminates by <cit.>; denote the resulting map by ρ X' X”, and set A”:=ρ_*A'. Then for each 0<ε≤δ, the map ρ∘φ X X” is a (K_X+Δ+ε A)-MMP such that (X”,Δ”+ε A”) is a minimal model of (X,Δ+ε A), and in fact, it is a good minimal model of (X,Δ+ε A) by the Basepoint free theorem <cit.>. 
Let (π,π”) Y→ X× X” be a resolution of indeterminacies of ρ∘φ such that Y is smooth. Y [dl, "π" swap] [dr, "π”"] X [r, dashed, "φ" ] X' [r, dashed, "ρ" ] X” Then for each 0<ε≤δ the current T_ε,min has generalised analytic singularities by Proposition <ref>(a), and it descends to Y by Proposition <ref>(b). The statement on generalised algebraic singularities follows also from Proposition <ref>(a). This shows (a). If the pair (X,Δ) has a good minimal model, then as mentioned in <ref> we may run a (K_X+Δ)-MMP φ (X,Δ) (X',Δ') with scaling of an ample divisor which terminates with a minimal model (X',Δ'). Then we repeat the proof of (a) above verbatim. The only thing to notice is that the divisor K_X”+Δ” is semiample since the map ρ is (K_X'+Δ')-trivial, hence the current T_0,min has generalised analytic singularities by Proposition <ref>(a), and it descends to Y by Proposition <ref>(b). This finishes the proof. We can now prove Proposition <ref> announced in the introduction. The heart of the proof is in the following result. Let (X,Δ) be a projective klt pair such that K_X+Δ is pseudoeffective and X is smooth. Assume that (X,Δ) has a good minimal model. Let A≥0 be a big -divisor on X such that the pair (X,Δ+A) is klt, and for each ε>0 let T_ε,min be a current with minimal singularities in {K_X+Δ+ε A}. Then the sequence {T_1/m,min}_m∈_>0 is an asymptotically equisingular approximation of T_0,min. By Corollary <ref>(b) there exists a positive rational number δ and a birational model π Y→ X such that for each 0≤ε≤δ the current T_ε,min has generalised analytic singularities and it descends to Y. Possibly by blowing up further, we may assume additionally that π^*(N_σ(K_X+Δ)+N_σ(A))∪(π) is a divisor with simple normal crossings support. Define F to be the reduced divisor whose support is π^*(N_σ(K_X+Δ)+N_σ(A))∪(π). For each 0≤ε≤δ, let D_ε be the divisorial part of the Siu decomposition of π^*T_ε,min. Then D_ε=N_σ(π^*(K_X+Δ+ε A)) for 0≤ε≤δ by Proposition <ref>(c), hence by <cit.> we have lim_ε→0D_ε=D_0. On the other hand, by (<ref>), by the convexity of Nakayama-Zariski functions and by <cit.>, for each 0≤ε≤δ there exists an effective π-exceptional divisor E_ε on Y such that D_ε≤ N_σ(π^*(K_X+Δ))+ε N_σ(π^*A)=π^*N_σ(K_X+Δ)+επ^*N_σ(A)+E_ε, hence D_ε⊆ F for all 0≤ε≤δ. Then by (<ref>), for each positive integer ℓ we may choose a positive integer m_ℓ≫ℓ such that ℓ D_1/m_ℓ-F≤ℓ D_0≤ℓ D_1/m_ℓ+F. As the divisors D_ε have simple normal crossings support by (<ref>) for 0≤ε≤δ, by Theorem <ref>(b)(c) we have ℐ(ℓπ^*T_ε,min)=_Y(-⌊ℓ D_ε⌋) for all 0≤ε≤δ. Therefore, by (<ref>) and (<ref>) and since F is an integral divisor, we conclude that ℐ(ℓπ^*T_1/m_ℓ,min)⊗_Y(-F)⊆ℐ(ℓπ^* T_0,min) ⊆ℐ(ℓπ^* T_1/m_ℓ,min)⊗_Y(F) for all ℓ. Now, there exists an effective divisor G on X such that F≤π^*G, hence the inclusions above give ℐ(ℓπ^*T_1/m_ℓ,min)⊗_Y(-π^*G)⊆ℐ(ℓπ^*T_0,min)⊆ℐ(ℓπ^* T_1/m_ℓ,min)⊗_Y(π^*G) for all ℓ. Tensoring these inclusions with _Y(K_Y-π^*K_X), pushing forward by π and applying <cit.> we obtain ℐ(ℓ T_1/m_ℓ,min)⊗_X(-G)⊆ℐ(ℓ T_0,min)⊆ℐ(ℓ T_1/m_ℓ,min)⊗_X(G) for all ℓ. This finishes the proof. Finally, we have: By <cit.> the pair (Y,Δ_Y) has a good minimal model. We conclude immediately by Proposition <ref>. § EXCELLENT APPROXIMATIONS In order to exploit the MMP fully, we need to consider a yet stronger type of approximations. Let T be a closed almost positive (1,1)-current on a compact complex manifold X. 
A sequence of closed almost positive (1,1)-currents {T_m}_m∈ on X is an excellent approximation of T if: * {T_m}_m∈ is a good approximation of T, * all T_m descend to the same birational model π Y→ X, and * there exists an effective divisor B on Y such that B_m⊆ B for each m, where B_m is the divisorial part of the Siu decomposition of π^*T_m. The main reason why excellent approximations are very useful is contained in the following result. Let X be a compact complex manifold. Let T be a closed almost positive (1,1)-current on X such that the divisorial part of its Siu decomposition contains only finitely many components. Then the following are equivalent: * T is a current with generalised analytic singularities, * there exists an excellent approximation {T_m}_m∈ of T. If T has generalised analytic singularities, then trivially the currents T_m:=T for m∈ form an excellent approximation of T. Conversely, assume that there exists an excellent approximation {T_m} of T. By the discussion in <ref> we have that the divisorial part of the Siu decomposition of the pullback of T to any resolution of X also contains only finitely many components. Then there exists a modification π Y→ X from a compact complex manifold Y such that all T_m descend to Y, and the Siu decompositions of π^*T and π^*T_m have the form π^*T=R+∑_i∈ Iλ_i D_i and π^*T_m=R_m+∑_i∈ Iλ_i,m D_i, where R and R_m are the residual parts, and: * λ_i=ν(π^*T,D_i) and λ_i,m=ν(π^*T_m,D_i) for each m and i, * the index set I is finite, and * each R_m has all Lelong numbers zero. It suffices to show that all Lelong numbers of R are zero. To that end, pick a point y∈ Y. Take a resolution μ Z→ Y which factors through the blowup of Y at y and let E be the corresponding prime divisor on Z. By Lemma <ref> there exists a sequence of positive integers {m_ℓ}_ℓ∈_>0 with m_ℓ→∞, such that lim_ℓ→∞ν(T_m_ℓ,E)=ν(T,E) and lim_ℓ→∞λ_i,m_ℓ=λ_i. On the other hand, by (<ref>) and (<ref>) we have ν(T,E)=ν(μ^*π^*T,E)=∑_i∈ Iλ_i _Eμ^*D_i+ν(μ^*R,E) and ν(T_m_ℓ,E)=ν(μ^*π^*T_m_ℓ,E)=∑_i∈ Iλ_i,m_ℓ_Eμ^*D_i+ν(μ^*R_m_ℓ,E). Therefore, letting ℓ→∞ in (<ref>) and using (<ref>) and (<ref>) yields lim_ℓ→∞ν(μ^*R_m_ℓ,E)=ν(μ^*R,E). Since ν(μ^*R_m_ℓ,E)=0 for all ℓ by Theorem <ref>, we obtain ν(μ^*R,E)=0, hence ν(R,y)=0 by Theorem <ref> again, as desired. § EXCELLENT APPROXIMATIONS AND THE MMP In this section we connect excellent approximations and the Minimal Model Program. We first need the following extension of <cit.> to klt pairs. The proof is almost the same as that of <cit.> and <cit.>; the assumptions in those results are however different. In order to avoid confusion and for completeness, we include the proof here. Assume the existence of good minimal models for projective klt pairs in dimensions at most n-1. Let (X,Δ) be a projective -factorial klt pair of dimension n such that X is not uniruled and Δ is a -divisor. Let t be a positive integer such that M:=t(K_X+Δ) is Cartier, and let π Y→ X be a resolution of X. Assume that for some positive integer p we have H^0(Y,(Ω^1_Y)^⊗ p⊗_Y(mπ^*M)) ≠ 0 for infinitely many integers m. Then κ (X,K_X+Δ) ≥ 0. We note first that the -divisor K_X+Δ is pseudoeffective by Remark <ref>. If K_X+Δ≡0, then K_X+Δ∼_0 by <cit.>. Therefore, from now on we may assume that M≢0. We apply <cit.> with ℰ := (Ω^1_Y)^⊗ p and ℒ := π^*_X(M). Then there exist a positive integer r, a saturated line bundle ℳ in ⋀^rℰ, an infinite set 𝒮⊆ and integral divisors N_m≥0 for m∈𝒮 such that _Y(N_m) ≃ℳ⊗ℒ^⊗ m for all m∈𝒮. 
Since Y is not uniruled by assumptions, the divisor K_Y is pseudoeffective by <cit.>, hence <cit.> implies that there exist a positive integer ℓ and a pseudoeffective divisor F such that N_m+ F ∼ mπ^*M+ℓ K_Y. By pushing forward this relation to X we get π_*N_m+π_*F∼_ mM +ℓ K_X, and hence π_*N_m+(π_*F+ℓΔ)∼_ (mt+ℓ)(K_X+Δ). Noting that π_*N_m is effective and that π_*F+ℓΔ is pseudoeffective, we conclude by <cit.>. Now we can deduce a criterion for Nonvanishing, related to the existence of excellent approximations of currents with minimal singularities. Let (X,Δ) be a projective klt pair of dimension n such that X is smooth and not uniruled. Then K_X+Δ is pseudoeffective by Remark <ref>, and let T_min be a closed positive (1,1)-current with minimal singularities in {K_X+Δ}. Assume that there exists an excellent approximation {T_m}_m∈ of T_min. * Then T_min has generalised analytic singularities. * Assume the existence of good minimal models for projective klt pairs in dimensions at most n-1. If Δ is a -divisor and κ(X,K_X+Δ) = -∞, then for every resolution π Y→ X with the property that T_min descends to Y and that the divisorial part D of the Siu decomposition of π^*T_min has simple normal crossings support, we have H^p(Y,_Y(K_Y+ℓπ^*(K_X+Δ)-⌊ℓ D⌋)) = 0 for all p and all ℓ>0 sufficiently divisible. Moreover, if T_min has generalised algebraic singularities, then D is a -divisor. Part (a) follows from Lemma <ref> and Theorem <ref>. After part (a) is settled, the rest of the argument for (b) is hidden in the proofs of <cit.> and <cit.> and we reproduce the details here. By (a) and by <ref> there exists a resolution π Y→ X such that the Siu decomposition of π^*T_min has the form π^*T_min = R+D, where the residual part R has all Lelong numbers zero and D is the divisorial part of the decomposition with simple normal crossings support. By Theorem <ref>(b)(c) we have ℐ(ℓπ^* T_min)=_Y(-⌊ℓ D⌋) for all ℓ≥0. Since we assume that κ(X,K_X+Δ) = -∞, we conclude by Theorem <ref> that for all p≥ 0 and for all ℓ>0 sufficiently divisible we have H^0(Y,Ω^p_Y ⊗π^*_X(ℓ(K_X+Δ)))=0, and thus H^0(Y,Ω^p_Y⊗π^*_X(ℓ(K_X+Δ))⊗ℐ(ℓπ^* T_min)) = 0. Then <cit.> implies that for all p≥ 0 and for all ℓ>0 sufficiently divisible we have H^p(Y,_Y(K_Y+ℓπ^*(K_X+Δ))⊗ℐ(ℓπ^* T_min)) = 0, which together with (<ref>) finishes the proof. § PROOF OF THEOREM <REF> We now have all the ingredients to prove the first main result of the paper. As announced in the introduction, we can actually show the following much more precise version. Let (X,Δ) be a projective klt pair of dimension n such that K_X+Δ is nef and Δ is a -divisor. Let π Y→ X be a log resolution of (X,Δ) and write K_Y+Δ_Y∼_π^*(K_X+Δ)+E, where Δ_Y and E are effective -divisors without common components. Let A be an ample -divisor on Y, and assume that there exist an effective divisor D on Y and a sequence of positive integers {m_ℓ}_ℓ∈_>0 such that m_ℓ→∞ and ℐ(ℓ(K_Y+Δ_Y+1/m_ℓ A))_min⊆ℐ(ℓ(K_Y+Δ_Y))_min⊗_Y(D) for all ℓ. Then * any current with minimal singularities in the class {K_Y+Δ_Y} is a current with generalised algebraic singularities. Assume additionally the existence of good minimal models for projective klt pairs in dimensions at most n-1. Then: (b) if κ(X,K_X+Δ)≥0, then K_X+Δ is semiample, (c) if χ(X,_X)≠0, then K_X+Δ is semiample. We will give the proof of Theorem <ref> – and thus of Theorem <ref> – at the end of the section. It will be an easy consequence of the following main technical result of this section. 
Let (X,Δ) be a projective klt pair of dimension n such that X is smooth, Δ is a -divisor and K_X+Δ is pseudoeffective. Assume that (X,Δ) has a minimal model. Let A≥0 be a big -divisor on X such that the pair (X,Δ+A) is klt, and for each ε≥0 let T_ε,min be a current with minimal singularities in {K_X+Δ+ε A}. Assume that the sequence {T_1/m,min}_m∈_>0 is an asymptotically equisingular approximation of T_0,min. * Then T_0,min is a current with generalised algebraic singularities. Assume additionally the existence of good minimal models for projective klt pairs in dimensions at most n-1. Then: (b) if κ(X,K_X+Δ)≥0, then (X,Δ) has a good minimal model, (c) if χ(X,_X)≠0, then (X,Δ) has a good minimal model. We divide the proof in several steps. Step 1. By Corollary <ref>(a) there exist a positive integer m_0 and a birational model π Y→ X such that for each m≥ m_0 the current T_1/m,min descends to Y. For each m≥ m_0, let D_m be the divisorial part of the Siu decomposition of π^*T_1/m,min. Then D_m=N_σ(π^*(K_X+Δ+1/m A)) for m≥ m_0 by Proposition <ref>(c), and we have N_σ(π^*(K_X+Δ+1/m A))⊆ N_σ(π^*(K_X+Δ))∪ N_σ(π^*A) for all m>0 by the convexity of Nakayama–Zariski functions. This and (<ref>) show that the sequence {T_1/m,min}_m≥ m_0 is an excellent approximation of T_0,min. Step 2. By (<ref>) and by Theorem <ref>(a) we deduce that T_0,min has generalised analytic singularities. By possibly replacing Y by a higher birational model, we may assume that T_0,min descends to Y, and let π^*T_0,min = R_0+D_0 be the Siu decomposition of T_0,min, where the residual part R_0 has all Lelong numbers zero and D_0 is the divisorial part. By Lemma <ref>, there exists a sequence of positive integers {m_ℓ}_ℓ∈_>0 with m_ℓ→∞, such that D_0=lim_ℓ→∞D_m_ℓ, which together with (<ref>) and <cit.> gives D_0=lim_ℓ→∞ N_σ(π^*(K_X+Δ+1/m_ℓ A))=N_σ(π^*(K_X+Δ)). Hence, D_0 is a rational divisor by Lemma <ref>. In other words, T_0,min has generalised algebraic singularities, which proves (a). Step 3. In this step we assume that χ(X,_X)≠0, and we show that κ(X,K_X+Δ)≥0. If X is uniruled, then κ(X,K_X+Δ)≥0 by <cit.>. Therefore, from now on we may assume that X is not uniruled. We follow the arguments of <cit.> closely. Assume that κ(X,K_X+Δ)=-∞. By possibly replacing Y by a higher birational model, we may assume that the -divisor D_0 on Y has simple normal crossings support. Then by Theorem <ref>(b) we have χ(Y,_Y(K_Y+ℓπ^*(K_X+Δ)-⌊ℓ D_0⌋)) = 0 for all ℓ>0 divisible by some positive integer q, and we may assume that qD_0 and q(K_X+Δ) are Cartier. Then Serre duality gives χ(Y,_Y( ℓ qD_0 - ℓ qπ^*(K_X+Δ))) = 0 for all ℓ>0. Since the Euler–Poincaré characteristic χ(Y,_Y( ℓ qD_0 - ℓ qπ^*(K_X+Δ))) is a polynomial in ℓ by the Hirzebruch–Riemann–Roch theorem, (<ref>) implies that it must be identically zero, hence χ(Y,_Y) = 0 by setting ℓ=0. Thus, χ(X,_X) = 0 as X has rational singularities, a contradiction which proves that κ(X,K_X+Δ)≥0. Step 4. Finally, in this step we show (b) and (c) simultaneously. By Step 3, we may assume that κ(X,K_X+Δ)≥0. By assumption, there exists a minimal model φ (X,Δ) (X',Δ') of (X,Δ). By possibly replacing Y by a higher birational model, we may assume that (π,π') Y→ X× X' is a resolution of indeterminacies of φ such that Y is smooth. Y [dl, "π" swap] [dr, "π'"] X [rr, dashed, "φ" ] X' Then as in the proof of Lemma <ref> we obtain that P_σ(π^*(K_X+Δ))∼_(π')^*(K_X'+Δ'). This together with (<ref>) implies R_0=π^*T_min-D_0 ≡π^*(K_X+Δ)-N_σ(π^*(K_X+Δ)) =P_σ(π^*(K_X+Δ))∼_(π')^*(K_X'+Δ'). 
Since κ(X',K_X'+Δ')=κ(X,K_X+Δ)≥0 by (<ref>) and since R_0 has all Lelong numbers zero, we conclude that K_X'+Δ' is semiample by <cit.>. Thus, φ is a good minimal model of (X,Δ), which concludes the proof. Finally, we have: We first note that the pair (Y,Δ_Y) has a minimal model by <cit.>. For each ε≥0 let T_ε,min be a current with minimal singularities in {K_Y+Δ_Y+ε A}. Then by Lemma <ref>, for all positive integers m and ℓ we have ℐ(ℓ T_0,min)⊆ℐ(ℓ T_1/m,min), hence the assumptions of Theorem <ref> imply that {T_1/m,min}_m∈_>0 is an asymptotically equisingular approximation of T_0,min. Then part (a) follows from Theorem <ref>(a) applied to the pair (Y,Δ_Y). For (b) and (c), note that κ(X,K_X+Δ)=κ(Y,K_Y+Δ_Y), as well as χ(X,_X)=χ(Y,_Y) since X has rational singularities. Thus, if κ(X,K_X+Δ)≥0 or if χ(X,_X)≠0, then (Y,Δ_Y) has a good minimal model by Theorem <ref>(b)(c). This implies that (X,Δ) has a good minimal model by <cit.>. But then K_X+Δ is semiample by the same argument as in the third paragraph of the proof of <cit.>. This proves (b) and (c), and finishes the proof of the theorem. § LOCAL BEHAVIOUR In this section we prove a general result on local linearity of currents with minimal singularities in the context of the Minimal Model Program, and on the local behaviour of the asymptotic base loci. It will be one of the ingredients in the proof of Theorem <ref>. Let (X,Δ) be a projective klt pair of dimension n such that Δ is a -divisor and K_X+Δ is pseudoeffective. Assume that (X,Δ) has a minimal model. Let A be an ample -divisor on X. Then there exists a rational number 0<δ≤1 such that the following holds. * The sets _-(K_X+Δ+ε A) are independent of ε∈[0,δ). * For each ε∈(0,δ) we have _-(K_X+Δ+ε A)=(K_X+Δ+ε A)=_+(K_X+Δ+ε A). * Assume that X is additionally smooth, and for each ε≥0 let T_ε,min be a current with minimal singularities in {K_X+Δ+ε A}. Then for any two ε_1,ε_2∈ (0,δ] and for any t∈[0,1] the current tT_ε_1,min+(1-t)T_ε_2,min has minimal singularities in the class {K_X+Δ+(tε_1+(1-t)ε_2)A}. First note that by replacing A by a sufficiently general divisor -linearly equivalent to A, we may assume that the pair (X,Δ+A) is klt. Step 1. As in the proof of Corollary <ref>(a), there exists a rational number δ>0 and a (K_X+Δ)-non-positive birational contraction ξ X X' such that for each 0<ε≤δ, the map ξ is a (K_X+Δ+ε A)-MMP. Moreover, if we set Δ':=ξ_*Δ and A':=ξ_*A, then (X',Δ'+ε A') is a good minimal model of (X,Δ+ε A) for 0<ε≤δ by the Basepoint free theorem <cit.>. Let (p,q) Y→ X× X' be a resolution of indeterminacies of ξ such that Y is smooth. Y [dl, "p" swap] [dr, "q"] X [rr, dashed, "ξ" ] X' By the Negativity lemma <cit.>, for each ε∈[0,δ] there exists an effective q-exceptional -divisor E_ε on Y such that p^*(K_X+Δ+ε A)∼_ q^*(K_X'+Δ'+ε A')+E_ε. Then the function ε↦ E_ε is affine on [0,δ], since both functions ε↦ p^*(K_X+Δ+ε A) and ε↦ q^*(K_X'+Δ'+ε A') are. Step 2. In this step we prove (a). First note that by (<ref>) and since each divisor K_X'+Δ'+ε A' is semiample for ε∈(0,δ], we have (p^*(K_X+Δ+ε A))= E_ε for all ε∈(0,δ], which together with <cit.> and <cit.> implies _-(K_X+Δ+ε A)=(K_X+Δ+ε A)=p( E_ε) for all ε∈(0,δ]. On the other hand, (<ref>) and <cit.> give N_σ(p^*(K_X+Δ+ε A))=E_ε for all ε∈[0,δ]. Moreover, since A is ample, by the convexity of Nakayama–Zariski functions we have for each 0≤ξ_1≤ξ_2: N_σ(p^*(K_X+Δ+ξ_2 A)) ≤ N_σ(p^*(K_X+Δ+ξ_1 A))+N_σ((ξ_2-ξ_1)p^*A)) =N_σ(p^*(K_X+Δ+ξ_1 A)), hence E_ξ_2⊆ E_ξ_1 when 0≤ξ_1≤ξ_2. 
This together with (<ref>) and (<ref>) shows that E_0= E_ε for all ε∈[0,δ). Now, (<ref>) and (<ref>) imply that _-(K_X+Δ+ε A)=p( E_0) for all ε∈(0,δ), whereas <cit.> together with (<ref>) and (<ref>) gives _-(K_X+Δ)=⋃_ε∈(0,δ)(K_X+Δ+ε A)=p( E_0) This and (<ref>) give (a). Step 3. For (b), fix ε∈(0,δ). Then by <cit.> there exists ξ∈(0,ε) such that _+(K_X+Δ+ε A)=_-(K_X+Δ+(ε-ξ)A). Since _-(K_X+Δ+(ε-ξ)A)=_-(K_X+Δ+ε A) by (a), we conclude that _-(K_X+Δ+ε A)=_+(K_X+Δ+ε A), which together with (<ref>) proves (b). Step 4. Finally, in this step we prove (c). By (<ref>), for ε∈ [0,δ] we have p^*T_ε,min≡ q^*(K_X'+Δ'+ε A')+E_ε, and set S_ε:=p^*T_ε,min-E_ε∈{q^*(K_X'+Δ'+ε A')}. Since p^*T_ε,min is a positive current with minimal singularities by Proposition <ref>, so is also S_ε by Lemma <ref>(c). Fix ε_1,ε_2∈ (0,δ]. Then by (<ref>) we have E_tε_1+(1-t)ε_2=tE_ε_1+(1-t)E_ε_2 for each t∈[0,1]. Since for every ε∈(0,δ] the current S_ε has minimal singularities and the -divisor q^*(K_X'+Δ'+ε A') is semiample, for each t∈[0,1] the current tS_ε_1+(1-t)S_ε_2∈{q^*(K_X'+Δ'+(tε_1+(1-t)ε_2) A')} has minimal singularities by Lemma <ref>(c). Thus, by Lemma <ref>(b) and by (<ref>) each current tS_ε_1+(1-t)S_ε_2+E_tε_1+(1-t)ε_2∈{p^*(K_X+Δ+(tε_1+(1-t)ε_2) A)} has minimal singularities. Note that by (<ref>) and (<ref>) we have p^*(tT_ε_1,min+(1-t)T_ε_2,min) =t(S_ε_1+E_ε_1)+(1-t)(S_ε_2+E_ε_2) =tS_ε_1+(1-t)S_ε_2+E_tε_1+(1-t)ε_2, which together with (<ref>) gives that for each t∈[0,1] the current p^*(tT_ε_1,min+(1-t)T_ε_2,min)∈{p^*(K_X+Δ+(tε_1+(1-t)ε_2) A)} has minimal singularities. We conclude by Proposition <ref>. We conclude this section with a few comments on the behaviour of the asymptotic base loci; the following example and proposition were obtained in discussions with Nikolaos Tsakanikas. Recall that according to <cit.> a pseudoeffective -Cartier -divisor D on a normal projective variety X is called stable if _-(D)=_+(D). Therefore, Theorem <ref>(b) shows that the stability of adjoint divisors holds, in a certain sense, locally on a klt pair (X,Δ). Proposition <ref>, which complements results from <cit.>, says that in a similar situation as in Theorem <ref>, actually all but finitely many divisors of the form K_X+Δ+ε A are stable. We first note that, however, one cannot conclude that all such divisors are stable. This example is a slightly modified version of <cit.>, and it shows that there exists a projective klt pair (X,Δ) such that K_X+Δ is big, but (K_X+Δ)≠_+(K_X+Δ). To that end, let X be the blowup of ℙ^2 along three distinct points which belong to a line L⊆ℙ^2, and let E_1, E_2 and E_3 be the exceptional divisors. Then -K_X∼ 3L'+2(E_1+E_2+E_3), where L' is the strict transform of L on X. From this it is easy to check that -K_X is nef, but it is not ample since K_X· L'=0. Moreover, -K_X is big as K_X^2=6. By <cit.> there exists an effective -divisor Δ∼_-2K_X such that the pair (X,Δ) is klt. Therefore, K_X+Δ∼_-K_X is nef and big, hence semiample by the Basepoint free theorem <cit.>. Thus, (K_X+Δ)=∅, whereas _+(K_X+Δ)≠∅ since K_X+Δ is not ample. Let (X,Δ) be a projective klt pair such that Δ is a -divisor and K_X+Δ is pseudoeffective, and assume that (X,Δ) has a minimal model. Let A be an ample -divisor on X. Then there exist only finitely many real numbers ε≥0 such that K_X+Δ+ε A is not stable, and all such ε are rational. When (X,Δ) is a projective klt pair such that K_X+Δ is big, then it has a minimal model by <cit.>, hence Proposition <ref> applies unconditionally to such pairs. Set n:= X. 
Assume first that K_X+Δ is big. By <cit.> we have that K_X+Δ+mA is ample for all m>2n, and in particular each such divisor is stable. On the other hand, by Theorem <ref> applied to the ring R(X;K_X+Δ,K_X+Δ+2nA), there exist finitely many rational numbers 0=ε_1<ε_2<…<ε_k=2n such that for each i there exists a -factorial projective variety X_i and a birational contraction φ_i X X_i such that φ_i is a minimal model for every klt pair (X,Δ+ξ A) with ε_i<ξ<ε_i+1. Then for each ξ∈(ε_i,ε_i+1) the divisor K_X+Δ+ξ A is stable, by repeating verbatim the proof of Theorem <ref>(b). Therefore, if K_X+Δ+ε A is not stable for some ε≥0, then ε∈{ε_1,…,ε_k}. This proves the proposition when K_X+Δ is big. In the general case, by Theorem <ref>(b) there exists a rational number 0<δ≤1 such that for each ε∈(0,δ) the divisor K_X+Δ+ε A is stable. On the other hand, by the first part of the proof there exist only finitely many real numbers ε≥δ such that K_X+Δ+ε A is not stable, and they are all rational. This finishes the proof. PART: Approximations by supercanonical currents § A UNIFORM BOUND THEOREM In this section we prove a crucial result that will be used several times in the remainder of the paper. It shows the existence of global holomorphic sections of adjoint line bundles with precise properties, which depend only on a prescribed open cover of the given compact complex manifold. The method of the proof is to construct holomorphic sections locally by the Ohsawa-Takegoshi extension theorem, and then use smooth cut-off functions and solve a -equation by a version of Hörmander's L^2 estimates to find global holomorphic sections satisfying similar estimates. These techniques go back at least to the proofs of <cit.> and <cit.>. One of the main difficulties is to organise the proof in such a way that all the constants depend only on the starting data. Let X be a compact Kähler manifold with a Kähler form ω, and fix an open covering 𝒰 of X by coordinate balls on which K_X trivialises. Then there exist positive constants δ and C, depending only on 𝒰, with the following property. For each line bundle L on X which trivialises on 𝒰, for every singular metric h on L with Θ_h(L)≥δω, and for each x∈ X such that the restriction of h to the fibre L_x is well defined, there is a section σ_x∈ H^0(X,_X(K_X)⊗ L) such that |σ_x(x)|_h,ω=1 and σ_x_h,ω≤ C. Let n:= X. Step 1. In this step we prepare an open covering of X and a sequence of new metrics we will need in the next steps. We fix a finite covering {V_1,…,V_r} of X by coordinate balls such that for each 1≤ i≤ r we have V_i⋐ W_i⋐ U_i for some coordinate balls W_i and U_i, where the covering {U_1,…,U_r} is subordinate to 𝒰; this is possible by the compactness of X. For each 1≤ i≤ r fix a function χ_i∈ C_c^∞(X) such that 0≤χ_i≤1, (χ_i)⊆ U_i and χ_i≡1 on W_i. Given a point x∈V_i for some 1≤ i≤ r, consider the function φ_i,x(z):=nχ_i(z)log|z-x| for z∈ U_i. Then the extension by zero of φ_i,x defines a function φ_i,x X→∪{-∞} such that (φ_i,x)⊆ U_i, and such that the following properties hold: * dd^cφ_i,x≥0 on W_i, since χ_i≡1 on W_i and the function z↦log|z-x| is plurisubharmonic, * dd^cφ_i,x is a smooth form on X∖ W_i whose coefficients are bounded independently of i and x, since (x,z)↦χ_i(z)log|z-x| is a smooth function on the compact set V_i×((χ_i)∖ W_i). Then (i), (ii) and Lemma <ref> imply that there exists a constant η>0 such that dd^cφ_i,x≥-ηω for all i and x. Note that η depends only on the choice of sets V_i,W_i and U_i and on the choice of functions χ_i. 
Set M_1:=max_1≤ i≤ rmax{χ_i(z)| z∈ X} and M_2:=max_1≤ i≤ rsup{e^-2φ_i,x(z)| (x,z)∈ V_i×(U_i∖ W_i)}; note that M_2 is well defined for the same reason as in (ii) above and since (φ_i,x)⊆ U_i. Further, set M_3:=min_1≤ i≤ rinf{e^-2φ_i,x(z)| (x,z)∈ V_i× X}. Then M_3>0 since φ_i,x(z)≤ nlog((U_i)) for (x,z)∈ V_i× U_i and φ_i,x(z)=0 otherwise. Note that M_1,M_2 and M_3 depend only on the choice of sets V_i,W_i and U_i and on the choice of functions χ_i. Step 2. Set δ:=2η. In the remainder of the proof we show that there exists a constant C>0 such that for any singular metric h on L with Θ_h(L)≥δω and for each x∈ X such that the restriction of h to the fibre L_x is well defined, there is a section σ_x∈ H^0(X,K_X+L) such that (<ref>) holds. Fix a singular metric h on L with Θ_h(L)≥δω, and fix a point x∈ V_i for some 1≤ i≤ r for which the restriction of h to the fibre L_x is well defined. By the Ohsawa–Takegoshi extension theorem <cit.> on U_i, applied successively n times to a collection of n hyperplanes intersecting at x, there exist a constant C_1 depending only on the cover {U_1,…,U_r} (and, in particular, not on x and h) and a section s_x∈ H^0(U_i,_X(K_X)⊗ L) such that |s_x(x)|_h,ω=1 and ∫_U_i|s_x|_h,ω^2dV_ω≤ C_1. The problem is that s_x is a holomorphic section only on U_i and not on the whole X. The strategy is to use Theorem <ref> to rectify this. Note that χ_i s_x is a smooth L-valued (n,0)-form on X, and set f_x := (χ_i s_x). Then f_x is a smooth L-valued (n,1)-form on X such that f_x = 0, and note that f_x = χ_i· s_x since s_x is holomorphic, hence (f_x)⊆ U_i∖ W_i since χ_i≡1 on W_i and (χ_i)⊆ U_i. Now, consider the singular metric h_i,x:=h e^-2φ_i,x on L. Then by (<ref>) and (<ref>) we have Θ_h_i,x(L)≥(δ-η)ω=ηω. Therefore, as η>0, by Theorem <ref> there exists an L-valued (n,0)-form u_x on X such that u_x = f_x in the sense of currents and u_x^2_h_i,x,ω≤1/2πηf_x^2_h_i,x,ω. Moreover, observe that f_x^2_h_i,x,ω =∫_X |f_x|^2_h,ωe^-2φ_i,xdV_ω =∫_U_i∖ W_i |f_x|^2_h,ωe^-2φ_i,xdV_ω by (<ref>) ≤ M_2∫_U_i∖ W_i |f_x|^2_h,ωdV_ω by (<ref>) =M_2∫_U_i∖ W_i|χ_i· s_x|^2_h,ωdV_ω by (<ref>) ≤ C_1M_1^2M_2, by (<ref>) and (<ref>) and therefore, setting C_2:=1/2πη C_1M_1^2M_2, by (<ref>) we have u_x^2_h_i,x,ω≤ C_2. Step 3. Set σ_x := χ_i s_x - u_x. Then σ_x = (χ_i s_x)- u_x = 0 in the sense of currents, which implies that σ_x is a holomorphic L-valued (n,0)-form by the regularity of the -operator. In particular, u_x is a smooth L-valued (n,0)-form, as it is the difference of smooth forms χ_i s_x and σ_x. Since |u_x|^2_h_i,x,ω=|u_x|^2_h,ω|z-x|^-2nχ_i(z), and since the function |z-x|^-2n is not locally integrable at z=x, the inequality (<ref>) and Remark <ref> imply that u_x(x) = 0. Thus, |σ_x(x)|_h,ω = 1 by (<ref>). On the other hand, by (<ref>) and (<ref>) we have M_3u_x^2_h,ω≤∫_X |u_x|^2_h,ωe^-2φ_i,xdV_ω=u_x^2_h_i,x,ω≤ C_2. Finally, the triangle inequality together with (<ref>) and (<ref>) gives σ_x_h,ω ≤χ_i s_x_h,ω+u_x_h,ω ≤(∫_U_i|s_x|_h,ω^2dV_ω)^1/2+u_x_h,ω≤√(C_1)+√(C_2/M_3). Therefore, C:=√(C_1)+√(C_2/M_3) is the desired constant. § SUPERCANONICAL CURRENTS ON BIG LINE BUNDLES In this section we analyse supercanonical currents on big -divisors on a projective manifold in detail. The main result of the section, Theorem <ref>, says that the corresponding supercanonical potentials depend only on the global sections of multiples of L in a very precise sense. The main technical result of the section is Theorem <ref>. 
The proof uses the main ideas of the proof of <cit.>, which we occasionally follow closely and which in turn uses essentially Demailly's estimates from his regularisation results <cit.>. However, several arguments in <cit.> are difficult to follow. Instead, in this paper we use crucially the uniform bounds result (Theorem <ref>) as well as the approximation result (Corollary <ref>) to make arguments streamlined and more precise. Let X be a projective manifold with a Kähler form ω. Let L be a big -divisor on X and set N:={m∈| mL is Cartier}. Fix a smooth metric h on L and denote α:=Θ_h(L)∈{L}. Let φ∈(X,α) such that ∫_X e^2φdV_ω≤ 1. Then there exists a sequence of sections σ_m∈ H^0(X,mL) for m∈ N such that ∫_X|σ_m|^2/m_h^mdV_ω≤1 and φ=(lim sup_m→∞log|σ_m|^1/m_h^m)^*. Step 1. In this step we prepare several constants that will be used throughout the proof. Fix a finite covering 𝒰 of X by coordinate balls on which K_X and all mL trivialise, for m∈ N. Fix constants δ and C, depending only on 𝒰, as in Theorem <ref>. As L is big, there exist a positive constant ε and ψ∈(X,α) such that α+dd^cψ≥εω. Since ψ is bounded from above, by subtracting a constant from ψ we may assume that ψ≤0 and ∫_X e^2ψdV_ω≤1/2. Let h_ω be the smooth metric on K_X induced by the hermitian metric on T_X whose fundamental form is ω. Note that by Lemma <ref> there exists a constant C_ω>0 such that -Θ_h_ω(K_X)+C_ωω≥0. We fix for the remainder of the proof an integer p>1 such that: * pε-C_ω≥δ, * C≤ 2^(p-1)/2. Since p/p-1ψ≤ψ by the first inequality in (<ref>), by the second inequality in (<ref>) we have ∫_X e^2p/p-1ψdV_ω≤1/2. Step 2. In this step we prepare several Cartier divisors on X and singular metrics on them. Set N_≥ p:={n∈ N| n≥ p}. For each m∈ N_≥ p set L_m:=mL-K_X. Then h_m:=e^-2(m-p)φ-2pψh^m h_ω^-1 is a singular metric on L_m with curvature current Θ_h_m(L_m) =mα+(m-p)dd^cφ+p dd^cψ-Θ_h_ω(K_X) ≥ (pε-C_ω)ω by (<ref>), by (<ref>) and since α+dd^cφ≥0. This together with the property (i) from Step 1 yields Θ_h_m(L_m)≥δω. For each m∈ N_≥ p define the singular metric g_m:=h_m h_ω=e^-2(m-p)φ-2pψh^m on K_X+L_m=mL. Step 3. By Theorem <ref>, by the choices of the constants δ and C in Step 1 and by (<ref>), for each m∈ N_≥ p and each x∈ X∖{φ+ψ=-∞} there is a section σ_m,x∈ H^0(X,K_X+L_m) such that |σ_m,x(x)|_g_m=1 and σ_m,x_g_m≤ C. For m∈ N_≥ p, Hölder's inequality for conjugate exponents 1/m+m-p/m+p-1/m=1 gives ∫_X |σ_m,x|^2/m_h^m dV_ω =∫_X(|σ_m,x|^2_h^me^-2(m-p)φ-2pψ)^1/me^2m-p/mφe^2p/mψdV_ω ≤σ_m,x_g_m^2/m(∫_X e^2φdV_ω)^m-p/m(∫_X e^2p/p-1ψdV_ω)^p-1/m ≤ C^2/m2^1-p/m, where the last inequality follows from (<ref>), (<ref>) and (<ref>). This together with the property (ii) from Step 1 gives ∫_X|σ_m,x|^2/m_h^m dV_ω≤ 1 for all m∈ N_≥ p. Furthermore, for x∈ X∖{φ+ψ=-∞}, from (<ref>) we have 1=|σ_m,x(x)|_g_m = |σ_m,x(x)|_h^me^-(m-p)φ(x)-pψ(x), and thus log |σ_m,x(x)|^1/m_h^m= (1-p/m)φ(x)+p/mψ(x). Step 4. Set 𝒫:={z∈ X|(φ+ψ)(z)=-∞}. Then the set 𝒫 is of Lebesgue measure zero in X and the set X∖𝒫 is dense in X. By Corollary <ref> there exists a countable set 𝒟:={x_q| q∈}⊆ X∖𝒫 which is dense in X, such that for each z∈ X there exists a sequence {z_s} in 𝒟 with lim_s→∞z_s=z and lim_s→∞φ(z_s)=φ(z). Fix a sequence {q_j}_j∈_>0 of positive integers, in which each positive integer occurs infinitely many times. For each integer m∈ N_≥ p we have x_q_m∈ X∖{φ+ψ=-∞}, hence we may, by Step 3, define sections σ_m∈ H^0(X,mL)=H^0(X,K_X+L_m) by σ_m:=σ_m,x_q_m, and note that σ_m,x_q_m satisfy inequalities (<ref>). Set u:=lim sup_m→∞log|σ_m|^1/m_h^m. 
We will show that φ=u^*. Step 5. Fix a point x∈ X. In this step we show that φ(x)≥ u^*(x). By Lemma <ref> (applied to the -divisor L and the metric h) there exist constants C_2>0 and r_0>0 such that for every coordinate ball B(x,r) with r≤ r_0 and for each integer m∈ N_≥ p we have |σ_m(x)|^2_h^m≤ e^2m C_2r^2_B(x,r)|σ_m|^2_h^mdV_ω. Now, since the functions φ and ψ are bounded from above on B(x,r), (<ref>) gives ∫_B(x,r) |σ_m|^2_h^mdV_ω=∫_B(x,r)|σ_m|^2_g_me^2(m-p)φ+2pψdV_ω ≤σ_m^2_g_msup_B(x,r)e^2(m-p)φ+2pψ≤ C^2sup_B(x,r)e^2(m-p)φ+2pψ, which together with (<ref>) implies |σ_m(x)|^2_h^m≤e^2m C_2r^2n!C^2/r^2nπ^nsup_B(x,r)e^2(m-p)φ+2pψ. Plugging in r:=1/m for m≥max{p,1/r_0}, taking logarithms of both sides and dividing by 2m, we obtain log |σ_m(x)|^1/m_h^m ≤C_2/m^2+nlog m/m+1/2mlog (n!C^2/π^n) +sup_B(x,1/m)(1-p/m)φ+sup_B(x,1/m)p/mψ. Taking lim sup as m→∞, by Lemma <ref>(a)(b) we obtain u(x)=lim sup_m→∞log|σ_m(x)|^1/m_h^m≤φ(x), which gives (<ref>) since φ is upper semicontinuous. Step 6. Fix a point x∈ X. In this step we finally show that φ(x)≤ u^*(x). To this end, recalling the construction of the set 𝒟 from Step 4, we may find a strictly increasing sequence {q_j'}_j∈_>0 of positive integers such that x_q_j'∈𝒟 for all j and we have lim_j→∞x_q_j'=x and lim_j→∞φ(x_q_j')=φ(x). By the construction in Step 4, for each fixed j there is a strictly increasing sequence {m_ℓ}_ℓ∈_>0 in the set N_≥ p such that q_j'=q_m_ℓ for all ℓ. Then σ_m_ℓ=σ_m_ℓ,x_q_m_ℓ by (<ref>). Hence, by (<ref>) and since ψ(x_q_j')≠-∞ by the construction of 𝒟 we have u(x_q_j') ≥lim sup_ℓ→∞log |σ_m_ℓ(x_q_j')|^1/m_ℓ_h^m_ℓ =lim sup_ℓ→∞log |σ_m_ℓ,x_q_m_ℓ(x_q_m_ℓ)|^1/m_ℓ_h^m_ℓ= φ(x_q_j'). Then this last inequality and (<ref>) give u^*(x)≥lim sup_j→∞u(x_q_j')≥lim sup_j→∞φ(x_q_j')=φ(x), which finishes the proof. In the proof of Theorem <ref> the auxiliary quasi-psh function ψ had to be introduced for two reasons: (a) to create a singular metric on each L_m whose curvature current is sufficiently positive, and (b) to be able to prove inequality (<ref>). The positive integer p in the proof of Theorem <ref> does not depend on φ nor on any integer m in the proof, but it does depend on the choice of ψ. The following main result of this section is inspired by <cit.>. Let X be a projective manifold with a Kähler form ω. Let L be a big -divisor on X and set N:={m∈| mL is Cartier}. Fix a smooth metric h on L and denote α:=Θ_h(L)∈{L}. For each m∈ N set V_h,m:={σ∈ H^0(X,mL)|∫_X|σ|^2/m_h^mdV_ω≤1} and φ_h,m:=sup_σ∈ V_h,mlog|σ|^1/m_h^m. Then: * for each m∈ N and each σ∈ H^0(X,mL) there exists a positive real number λ such that λσ∈ V_h,m, * there exists a constant C such that log|σ|^1/m_h^m≤ C for each m∈ N and each σ∈ V_h,m, * V_h,m is compact in H^0(X,mL) for each m∈ N, * φ_h,m=max_σ∈ V_h,mlog|σ|^1/m_h^m for each m∈ N, * φ_h,m∈(X,α) for each m∈ N, * φ_h,km≥φ_h,m for each m∈ N and each positive integer k, * the supercanonical potential of L associated to α is φ_α,=(sup_m∈ Nφ_h,m)^*, * the sequence {φ_h,m}_m∈ N converges to φ_α, in L^1_(X), * ℐ(φ_h,m)=ℐ(φ_α,) for all m∈ N sufficiently large, * φ_h,m is continuous on X∖(L) for all m∈ N sufficiently divisible, * the sequence {φ_h,m}_m∈ N converges uniformly on compact subsets of X∖_+(L) to φ_α,, * φ_α, is bounded on X∖(L), it is continuous on X∖_+(L), and φ_α,=sup_m∈ Nφ_h,m on X∖_+(L). Set 𝒮_α:={φ∈(X,α)|∫_X e^2φdV_ω≤ 1}, and recall from Lemma <ref> that the supercanonical potential associated to α was defined as φ_α,(x):=sup_φ∈𝒮_αφ(x) for x∈ X. Step 1. Let m∈ N and σ∈ H^0(X,mL). 
Then log|σ|^1/m_h^m∈(X,α) by Example <ref>(b). Moreover, since |σ|_h^m is bounded from above on X, there exists a constant C_σ>0 such that ∫_X|σ|^2/m_h^mdV_ω≤ C_σ, hence C_σ^-m/2σ∈ V_h,m, which gives (<ref>). If σ∈ V_h,m, then ∫_X e^2log|σ|^1/m_h^mdV_ω=∫_X|σ|^2/m_h^mdV_ω≤1, thus {log|σ|^1/m_h^m|σ∈ V_h,m}⊆𝒮_α. Then (<ref>) follows from Lemma <ref>(b). Step 2. Define the norm ·_max on H^0(X,mL) by s_max:=sup_X|s|_h^m for s∈ H^0(X,mL). Consider a sequence {σ_ℓ} in V_h,m. Since V_h,m is bounded in H^0(X,mL) with respect to the norm ·_max by (<ref>), by passing to a subsequence we may assume that the sequence {σ_ℓ} converges to a section σ∈ H^0(X,mL). To prove (<ref>) it suffices to show that σ∈ V_h,m. Note that log|σ|_h^m, as well as all the functions log|σ_ℓ|_h^m, belong to (X,mα) by Example <ref>(b). By (<ref>) and by Theorem <ref>(b), after passing to a subsequence we may assume that the sequence {log|σ_ℓ|_h^m} converges in L^1_(X) and almost everywhere to a function φ∈(X,mα). Thus φ=log|σ|_h^m almost everywhere, hence everywhere by Corollary <ref>. But then log|σ|_h^m∈𝒮_mα by Fatou's lemma, hence σ∈ V_h,m, as desired. Step 3. Now we show (<ref>). Fix x∈ X. Then there exists a sequence of sections σ_j∈ V_h,m such that lim_j→∞log|σ_j(x)|_h^m^1/m=φ_h,m(x). By (<ref>) and by passing to a subsequence we may assume that there exists σ∈ V_h,m such that lim_j→∞σ_j=σ. Thus φ_h,m(x)=log|σ(x)|^1/m_h^m. Step 4. Next we prove (<ref>). By (<ref>) and Theorem <ref>(a) we have that (φ_h,m)^*∈(X,α). Fix x∈ X. As (φ_h,m)^*(x)=lim sup_z→ xφ_h,m(z), by (<ref>) there exists a sequence of sections σ_j∈ V_h,m and a sequence of points x_j∈ X such that lim_j→∞x_j=x and lim_j→∞log|σ_j(x_j)|_h^m^1/m=(φ_h,m)^*(x). By (<ref>) and by passing to a subsequence we may assume that there exists σ∈ V_h,m such that lim_j→∞σ_j=σ. Then Lemma <ref>(b) gives (φ_h,m)^*(x)=lim_j→∞log|σ_j(x_j)|_h^m^1/m=log|σ(x)|_h^m^1/m≤φ_h,m(x), which shows that φ_h,m=(φ_h,m)^*. Thus, φ_h,m is α-psh. Step 5. For (<ref>), fix x∈ X, m∈ N and a positive integer k. By (<ref>) there exists σ∈ V_h,m such that φ_h,m(x)=log|σ(x)|^1/m_h^m. Since |σ^k|^1/mk_h^mk=|σ|^1/m_h^m, we have σ^k∈ V_h,mk, and hence φ_h,km(x)≥log|σ^k(x)|^1/mk_h^mk=log|σ(x)|^1/m_h^m=φ_h,m(x), which was to be shown. Step 6. By (<ref>) and by Theorem <ref>(a) we have φ_h,:=(sup_m∈ Nφ_h,m)^*∈(X,α). It is immediate that φ_h,≤φ_α, by (<ref>). For the reverse inequality, let φ∈𝒮_α. Then by Theorem <ref> there exists a sequence of sections τ_m∈ V_h,m for m∈ N such that φ=(lim sup_m→∞log|τ_m|^1/m_h^m)^*. Since log|τ_m|^1/m_h^m≤φ_h,m for each m∈ N by the definition of φ_h,m, we obtain φ≤(lim sup_m→∞φ_h,m)^*≤(sup_m∈ Nφ_h,m)^*=φ_h,, hence φ_α,=sup_φ∈𝒮_αφ≤φ_h,. This shows (<ref>). Part (<ref>) follows from (<ref>) and from Theorem <ref>(d). Step 7. Part (<ref>) follows from (<ref>) and (<ref>) and from Theorem <ref>. Step 8. In this step we prove (<ref>). We first show that φ_h,m≠-∞ away from (L) for all m∈ N sufficiently divisible. Indeed, by Remark <ref> we have |mL|=(L) for all m sufficiently divisible. Therefore, for each point x∈ X∖(L) there exists σ∈ H^0(X,mL) such that σ(x)≠0. By (<ref>) there exists a positive real number λ such that λσ∈ V_h,m, hence φ_h,m(x)≥log|λσ(x)|^1/m_h^m>-∞. Fix one such sufficiently divisible m. Fix x_0∈ X∖(L) and a sequence of points {x_j} in X∖(L) such that lim_j→∞x_j=x_0. By sequential continuity it suffices to show that lim_j→∞φ_h,m(x_j)=φ_h,m(x_0). 
To that end, set a:=lim sup_j→∞φ_h,m(x_j), and note that a≠+∞ since φ_h,m are uniformly bounded from above by (<ref>). By (<ref>) there exists σ_0∈ V_α,m such that φ_α,m(x_0)=log|σ_0(x_0)|_h^m^1/m. Note first that φ_h,m(x_j)≥log|σ_0(x_j)|_h^m^1/m by the definition of φ_α,m, hence a≥lim inf_j→∞log|σ_0(x_j)|_h^m^1/m=log|σ_0(x_0)|_h^m^1/m=φ_h,m(x_0). We will now show that a≤φ_h,m(x_0), which together with (<ref>) will then prove (<ref>). By passing to a subsequence of {x_j} we may assume that a=lim_j→∞φ_h,m(x_j). By (<ref>), for each j∈ there exists σ_j∈ V_h,m such that φ_h,m(x_j)=log|σ_j(x_j)|_h^m^1/m. By (<ref>) and by passing to a subsequence we may assume that there exists σ∈ V_h,m such that lim_j→∞σ_j=σ. Then Lemma <ref>(b) gives a=lim_j→∞log|σ_j(x_j)|_h^m^1/m=log|σ(x_0)|_h^m^1/m≤φ_h,m(x_0). This concludes the proof of (<ref>). Step 9. In this step we prove (<ref>). To this end, we use the notation from the proof of Theorem <ref>. We first note that, by Corollary <ref> we may and do choose the function ψ as in the proof of Theorem <ref> such that it has logarithmic poles which all lie in _+(L). Let φ∈𝒮_α and m∈ N. Then by (<ref>), (<ref>) and Remark <ref> there exist a positive integer p (independent of φ and m) and sections σ_m,x∈ V_h,m for each x∈ X∖{φ+ψ=-∞} such that log |σ_m,x(x)|^1/m_h^m= (1-p/m)φ(x)+p/mψ(x), hence by the definition of the function φ_h,m, for each x∈ X∖{φ+ψ=-∞} we obtain φ_h,m(x)≥(1-p/m)φ(x)+p/mψ(x). This inequality holds trivially when φ(x)=-∞ or ψ(x)=-∞, hence it holds for all x∈ X. By the definition of φ_α, this then implies φ_h,m(x)≥(1-p/m)φ_α,(x)+p/mψ(x) for all x∈ X. Note that φ_h,m≠-∞ on X∖(L) by (<ref>), hence φ_α,≠-∞ on X∖(L) by (<ref>). As ψ≠-∞ on X∖_+(L) by construction, we have 0≤φ_α,(x)-φ_h,m(x)≤p/m(φ_α,(x)-ψ(x)) for x∈ X∖_+(L) by (<ref>) and (<ref>). Since ψ is smooth on X∖_+(L) and φ_α, is bounded from above, we conclude that for each compact set K⊆ X∖_+(L) there exists a constant C_K>0 such that 0≤φ_α,(x)-φ_h,m(x)≤C_K/m, so (<ref>) follows. Step 10. Finally, the first part of (<ref>) was already noticed in Step 9, the second part of (<ref>) follows from (<ref>) and (<ref>) by the uniform convergence theorem, and then the third part of (<ref>) follows from the second part of (<ref>) and from (<ref>). § PROOF OF THEOREM <REF> In this section we prove the second main result of this paper, Theorem <ref>. The first technical result is Theorem <ref>, which is a pseudoeffective analogue of Theorem <ref>. A related, but somewhat more involved statement for klt pairs was mentioned without proof in <cit.>. The proof is similar, but somewhat more involved than that of Theorem <ref>, and we provide all the details. In particular, the very precise conclusion of Theorem <ref> will be needed in the proof of Theorem <ref>. Let X be a projective manifold. Let L be a pseudoeffective -divisor on X and set N:={m∈| mL is Cartier}. Fix a smooth metric h on L and denote α:=Θ_h(L)∈{L}. Let A be an ample Cartier divisor on X, fix a Kähler form ω∈{A}, and let h_A be a smooth metric on A such that ω=Θ_h_A(A). For each positive integer ℓ denote L_ℓ:=L+1/ℓA and denote by h_ℓ:=hh_A^1/ℓ the smooth metric on L_ℓ. Then there exists a positive integer p such that the following holds. Let φ∈(X,α) such that ∫_X e^2φdV_ω≤ 1. Then for any sequence {m_ℓ}_ℓ∈_>0 satisfying m_ℓ∈ N∩ℓ, m_ℓ≥ 2pℓ and lim_ℓ→∞ℓ/m_ℓ=0, there exists a sequence of sections σ_ℓ∈ H^0(X,m_ℓ L_ℓ) such that ∫_X|σ_ℓ|^2/m_ℓ_h_ℓ^m_ℓdV_ω≤1 and φ=(lim sup_ℓ→∞log|σ_ℓ|^1/m_ℓ_h_ℓ^m_ℓ)^*. Step 1. 
In this step we prepare several constants that will be used throughout the proof. Fix a finite covering 𝒰 of X by coordinate balls on which K_X, A and all mL trivialise, for m∈ N. Fix constants δ and C, depending only on 𝒰, as in Theorem <ref>. For every positive integer ℓ let α_ℓ:=α+1/ℓω∈{L_ℓ}, and note that α_ℓ+dd^cφ≥1/ℓω. Since for each ℓ the divisor L_ℓ is big, by Corollary <ref> there exist functions ψ_ℓ∈(X,α_ℓ) with logarithmic singularities such that {ψ_ℓ=-∞}⊆_+(L_ℓ) and α_ℓ+dd^cψ_ℓ≥ 0. Since each ψ_ℓ is bounded from above, by subtracting constants from ψ_ℓ we may assume that for each ℓ we have ψ_ℓ≤0 and ∫_X e^2ψ_ℓdV_ω≤1/2. Let h_ω be the smooth metric on K_X induced by the hermitian metric on T_X whose fundamental form is ω. Note that by Lemma <ref> there exists a constant C_ω>0 such that -Θ_h_ω(K_X)+C_ωω≥0. We fix for the remainder of the proof an integer p>1 such that: * p-C_ω≥δ, * C≤ 2^(p-1)/2. Set p_ℓ:=pℓ for each positive integer ℓ. Since p_ℓ/p_ℓ-1ψ_ℓ≤ψ_ℓ for each ℓ by the first inequality in (<ref>), by the second inequality in (<ref>) we have ∫_X e^2p_ℓ/p_ℓ-1ψ_ℓdV_ω≤1/2 for all positive integers ℓ. Step 2. In this step we prepare several Cartier divisors on X and singular metrics on them. Set N_≥ 2p_ℓ:={n∈ N∩ℓ| n≥ 2p_ℓ}. For each m∈ N_≥ 2p_ℓ set L_m,ℓ:=mL_ℓ-K_X. Then h_m,ℓ:=e^-2(m-p_ℓ)φ-2p_ℓψ_ℓh_ℓ^m h_ω^-1 is a singular metric on L_m,ℓ with curvature current Θ_h_m,ℓ(L_m,ℓ) =mα_ℓ+(m-p_ℓ)dd^cφ+p_ℓ dd^cψ_ℓ-Θ_h_ω(K_X) ≥m-p_ℓ/ℓω-Θ_h_ω(K_X) ≥ (p-C_ω)ω by (<ref>), (<ref>) and (<ref>), and since m≥ 2p_ℓ=2pℓ. This together with the property (i) from Step 1 yields Θ_h_m,ℓ(L_m,ℓ)≥δω. For each m∈ N_≥ 2p_ℓ, define the singular metric g_m,ℓ:=h_m,ℓh_ω=e^-2(m-p_ℓ)φ-2p_ℓψ_ℓh_ℓ^m on K_X+L_m,ℓ=mL_ℓ. Step 3. By Theorem <ref>, by the choices of the constants δ and C in Step 1 and by (<ref>), for each m∈ N_≥ 2p_ℓ and each x∈ X∖{φ+ψ_ℓ=-∞} there is a section σ_m,ℓ,x∈ H^0(X,K_X+L_m,ℓ) such that |σ_m,ℓ,x(x)|_g_m,ℓ=1 and σ_m,ℓ,x_g_m,ℓ≤ C. Similarly as in Step 3 of the proof of Theorem <ref>, for m∈ N_≥ 2p_ℓ, by Hölder's inequality for conjugate exponents 1/m+m-p_ℓ/m+p_ℓ-1/m=1 and by (<ref>), (<ref>) and (<ref>) we obtain ∫_X |σ_m,ℓ,x|^2/m_h_ℓ^m dV_ω≤ C^2/m2^1-p_ℓ/m≤ C^2/m2^1-p/m. This together with the property (ii) from Step 1 gives ∫_X|σ_m,ℓ,x|^2/m_h_ℓ^m dV_ω≤ 1 for all m∈ N_≥ 2p_ℓ. Furthermore, for x∈ X∖{φ+ψ_ℓ=-∞}, from (<ref>) we have 1=|σ_m,ℓ,x(x)|_g_m,ℓ = |σ_m,ℓ,x(x)|_h_ℓ^me^-(m-p_ℓ)φ(x)-p_ℓψ_ℓ(x), and thus log |σ_m,ℓ,x(x)|^1/m_h_ℓ^m= (1-p_ℓ/m)φ(x)+p_ℓ/mψ_ℓ(x). Step 4. Set 𝒫:={z∈ X|φ(z)=-∞}∪⋃_ℓ∈_>0{z∈ X|ψ_ℓ(z)=-∞}, and note that the function φ, as well as all the functions ψ_ℓ, are α_1-psh. Therefore, 𝒫 is a pluripolar set by Remark <ref>, hence 𝒫 is of Lebesgue measure zero in X and X∖𝒫 is dense in X. By Corollary <ref> there exists a countable set 𝒟:={x_q| q∈}⊆ X∖𝒫 which is dense in X, such that for each z∈ X there exists a sequence {z_s} in 𝒟 with lim_s→∞z_s=z and lim_s→∞φ(z_s)=φ(z). Fix a sequence {q_j}_j∈_>0 of positive integers, in which each positive integer occurs infinitely many times. Fix an arbitrary sequence {m_ℓ}_ℓ∈_>0 such that m_ℓ∈ N_≥ 2p_ℓ for each ℓ, and lim_ℓ→∞ℓ/m_ℓ=0. Since {φ+ψ_ℓ=-∞}⊆𝒫, we have x_q_ℓ∈ X∖{φ+ψ_ℓ=-∞} for each ℓ, hence by Step 3 we may define sections σ_m_ℓ,ℓ∈ H^0(X,m_ℓ L_ℓ)=H^0(X,K_X+L_m_ℓ,ℓ) by σ_m_ℓ,ℓ:=σ_m_ℓ,ℓ,x_q_ℓ, and note that σ_m_ℓ,ℓ,x_q_ℓ satisfy inequalities (<ref>). Set u:=lim sup_ℓ→∞log|σ_m_ℓ,ℓ|^1/m_ℓ_h_ℓ^m_ℓ. We will show that φ=u^*. Step 5. Fix a point x∈ X. In this step we show that φ(x)≥ u^*(x). 
By Lemma <ref> (applied to the -divisors L_ℓ, the smooth metrics metric h_ℓ and the associated curvature forms α_ℓ) there exist constants C_2>0 and r_0>0 such that for every coordinate ball B(x,r) with r≤ r_0, for each positive integer ℓ and for each integer m_ℓ∈ N_≥ 2p_ℓ we have |σ_m_ℓ,ℓ(x)|^2_h_ℓ^m_ℓ≤ e^2m_ℓ C_2r^2_B(x,r)|σ_m_ℓ,ℓ|^2_h_ℓ^m_ℓdV_ω. Then similarly as in Step 5 of the proof of Theorem <ref>, from (<ref>) and (<ref>) we obtain |σ_m_ℓ,ℓ(x)|^2_h_ℓ^m_ℓ≤e^2m_ℓ C_2r^2n!C^2/r^2nπ^nsup_B(x,r)e^2(m_ℓ-p_ℓ)φ+2p_ℓψ_ℓ. Plugging in r:=1/m_ℓ for m_ℓ≥max{p_ℓ,1/r_0}, taking logarithms of both sides, dividing by 2m_ℓ, and taking lim sup as ℓ→∞, as in Step 5 of the proof of Theorem <ref> we obtain u(x)=lim sup_ℓ→∞log|σ_m_ℓ,ℓ(x)|^1/m_ℓ_h_ℓ^m_ℓ≤φ(x), which gives (<ref>) since φ is upper semicontinuous. Step 6. Fix a point x∈ X. In this step we finally show that φ(x)≤ u^*(x). To this end, recalling the construction of the set 𝒟 from Step 4, we may find a strictly increasing sequence {q_j'}_j∈_>0 of positive integers such that x_q_j'∈𝒟 for all j and we have lim_j→∞x_q_j'=x and lim_j→∞φ(x_q_j')=φ(x). By the construction in Step 4, for each fixed j there is a strictly increasing sequence {ℓ_s}_s∈_>0 of positive integers such that q_j'=q_ℓ_s for all s and σ_m_ℓ_s,ℓ_s=σ_m_ℓ_s,ℓ_s,x_q_ℓ_s by (<ref>). Hence by (<ref>), since lim_s→∞(p_ℓ_s/m_ℓ_s)=0 by (<ref>), and since ψ_ℓ_s(x_q_j')≠-∞ for all s by the construction of 𝒟, we have u(x_q_j') ≥lim sup_s→∞1/m_ℓ_slog |σ_m_ℓ_s,ℓ_s(x_q_j')|_h_ℓ_s^m_ℓ_s =lim sup_s→∞1/m_ℓ_slog |σ_m_ℓ_s,ℓ_s,x_q_ℓ_s(x_q_ℓ_s)|_h_ℓ_s^m_ℓ_s= φ(x_q_j'). Then this last inequality and (<ref>) give u^*(x)≥lim sup_j→∞u(x_q_j')≥lim sup_j→∞φ(x_q_j')=φ(x), which finishes the proof. The following is the main result of this section. Let X be a projective manifold. Let L be a pseudoeffective -divisor on X, fix a smooth metric h on L and denote α:=Θ_h(L)∈{L}. Let A be an ample Cartier divisor on X, fix a Kähler form ω∈{A}, and let h_A be a smooth metric on A such that ω=Θ_h_A(A). For each positive integer ℓ denote L_ℓ:=L+1/ℓA, denote by h_ℓ:=hh_A^1/ℓ the smooth metric on L_ℓ, let α_ℓ:=α+1/ℓω∈{L_ℓ} be the corresponding smooth form, and set N_ℓ:={m∈| mL_ℓ is Cartier}. For each m∈ N_ℓ set V_h_ℓ,m:={σ∈ H^0(X,mL_ℓ)|∫_X|σ|^2/m_h_ℓ^mdV_ω≤1} and φ_h_ℓ,m:=sup_σ∈ V_h_ℓ,mlog|σ|^1/m_h_ℓ^m. Then: * φ_h_ℓ,m∈(X,α_ℓ) for each ℓ and m∈ N_ℓ, and φ_h_ℓ,m=max_σ∈ V_h_ℓ,mlog|σ|^1/m_h_ℓ^m, * for each ℓ the sequence {φ_h_ℓ,m}_m∈ N_ℓ is non-decreasing, * for each ℓ the supercanonical potential φ_α_ℓ, of L_ℓ associated to α_ℓ is φ_α_ℓ,=(sup_m∈ N_ℓφ_h_ℓ,m)^*, * for all ℓ and all m∈ N_ℓ, the functions φ_h_ℓ,m are uniformly bounded from above, * there exists a positive integer p such that for any sequence {m_ℓ}_ℓ∈_>0 with the properties that m_ℓ∈ N_ℓ, m_ℓ≥ 2pℓ and lim_ℓ→∞ℓ/m_ℓ=0, the supercanonical potential of L associated to α is φ_α,=(lim sup_ℓ→∞φ_h_ℓ,m_ℓ)^*, * for positive integers ℓ'≥ℓ we have φ_α_ℓ,≥φ_α_ℓ',≥φ_α,, * for each ℓ the function φ_α_ℓ, is bounded on X∖(L), it is continuous on X∖_+(L_ℓ), and φ_α_ℓ,=sup_m∈ N_ℓφ_h_ℓ,m on X∖_+(L_ℓ), * we have φ_α,=lim_ℓ→∞φ_α_ℓ,. For a smooth (1,1)-form θ on X whose class {θ}∈ H^1,1(X,) is pseudoeffective, set 𝒮_θ:={φ∈(X,θ)|∫_X e^2φdV_ω≤ 1}, and recall from Lemma <ref> that the supercanonical potential associated to θ was defined as φ_θ,(x):=sup_φ∈𝒮_θφ(x) for x∈ X. Step 1. Part (<ref>) follows from Theorem <ref>(<ref>)(<ref>). Part (<ref>) follows from Theorem <ref>(<ref>), and (<ref>) follows from Theorem <ref>(<ref>). 
For part (<ref>), first notice that for each positive integer ℓ, for each m∈ N_ℓ and for each σ∈ V_h_ℓ,m we have log|σ|^1/m_h_ℓ^m∈𝒮_α_ℓ as in Step 1 of the proof of Theorem <ref>. Then we conclude by Lemma <ref>(b). Step 2. Next we show (<ref>). Let p be a positive integer as in Theorem <ref>. Fix a sequence 𝔪:={m_ℓ}_ℓ∈_>0 satisfying m_ℓ∈ N_ℓ, m_ℓ≥ 2pℓ and lim_ℓ→∞ℓ/m_ℓ=0, and set φ_𝔪,:=lim sup_ℓ→∞φ_h_ℓ,m_ℓ. By (<ref>) we have that all the functions φ_h_ℓ,m_ℓ are uniformly bounded from above on X, hence φ_𝔪, is well defined. It suffices to prove that (φ_𝔪,)^*=φ_α,. We first show that (φ_𝔪,)^*≤φ_α,. To that end, fix x∈ X. We may assume that (φ_𝔪,)^*(x)≠-∞, since otherwise the claim is clear. Then there exists a sequence {x_n}_n∈ of points in X such that x_n→ x and (φ_𝔪,)^*(x)=lim sup_z→ xφ_𝔪,(z)=lim_n→∞φ_𝔪,(x_n), hence, by the definition of φ_𝔪,, there exists an increasing sequence {ℓ_n}_n∈ of positive integers such that (φ_𝔪,)^*(x)=lim_n→∞φ_h_ℓ_n,m_ℓ_n(x_n). By (<ref>), for each n there exists a section σ_n∈ V_h_ℓ_n,m_ℓ_n such that φ_h_ℓ_n,m_ℓ_n(x_n)=1/m_ℓ_nlog|σ_n(x_n)|_h_ℓ_n^m_ℓ_n, hence (<ref>) gives (φ_𝔪,)^*(x)=lim_n→∞1/m_ℓ_nlog|σ_n(x_n)|_h_ℓ_n^m_ℓ_n. Note that as in Step 1 we have 1/m_ℓ_nlog|σ_n|_h_ℓ_n^m_ℓ_n∈(X,α_ℓ_n) and all these functions are uniformly bounded. Therefore, by Theorem <ref>(e) and after passing to a subsequence we may assume that the sequence of functions {1/m_ℓ_nlog|σ_n|_h_ℓ_n^m_ℓ_n}_n∈ converges in L^1_(X) and almost everywhere to a function φ∈(X,α), and then φ∈𝒮_α by Fatou's lemma. In particular, we have φ(x)≤φ_α,(x) by the definition of φ_α,. On the other hand, by Lemma <ref> and by (<ref>) we have φ(x)≥lim sup_n→∞1/m_ℓ_nlog|σ_n(x_n)|_h_ℓ_n^m_ℓ_n=(φ_𝔪,)^*(x), which together with (<ref>) shows (<ref>). For the reverse inequality, let φ∈𝒮_α. Then by Theorem <ref> there exists a sequence of sections τ_ℓ∈ V_h_ℓ,m_ℓ such that φ=(lim sup_ℓ→∞log|τ_ℓ|^1/m_ℓ_h_ℓ^m_ℓ)^*. Since log|τ_ℓ|^1/m_ℓ_h_ℓ^m_ℓ≤φ_h_ℓ,m_ℓ for each ℓ by the definition of φ_h_ℓ,m_ℓ, we obtain φ≤(lim sup_ℓ→∞φ_h_ℓ,m_ℓ)^*=(φ_𝔪,)^*, hence φ_α,=sup_φ_∈𝒮_αφ≤(φ_𝔪,)^*. This together with (<ref>) shows (<ref>). Step 3. Part (<ref>) follows from Lemma <ref>(c), and (<ref>) follows from Theorem <ref>(<ref>). Step 4. Finally, in this step we show (<ref>). Denote φ:=lim_ℓ→∞φ_α_ℓ,, which is well defined by (<ref>) and we have φ≥φ_α,. In order to prove (<ref>) we need to show the reverse inequality. First note that φ∈(X,α) by (<ref>), (<ref>) and Theorem <ref>(e), hence by Corollary <ref> it suffices to show that φ≤φ_α, on X∖_-(L), since _-(L) is a countable union of analytically closed subsets of X, hence of Lebesgue measure zero. To show (<ref>), fix a point x∈ X∖_-(L), and let ε>0. Fix a sequence {m_ℓ}_ℓ∈_>0 satisfying m_ℓ∈ N_ℓ, m_ℓ≥ 2pℓ and lim_ℓ→∞ℓ/m_ℓ=0, where p is a positive integer as in (<ref>). Since x∈ X∖_+(L_ℓ) by Remark <ref>, we have by (<ref>) and (<ref>) that there exists a positive integer ℓ such that φ_α_ℓ,(x)≤φ_h_ℓ,m_ℓ(x)+ε. Therefore, taking limes superior in (<ref>) as ℓ→∞, and then taking the upper semicontinuous regularisation, by (<ref>) we obtain φ(x)≤φ_α,(x)+ε. We conclude by letting ε→0. Finally, we have: Part (a) of the theorem is an immediate consequence of Theorem <ref>(<ref>)(<ref>), whereas (b) follows from Theorem <ref>(<ref>) and Theorem <ref>(d). If K_X+Δ is nef, then the pair (Y,Δ_Y) has a minimal model by <cit.>. Then (c) follows from Theorem <ref>(a)(b), whereas (e) is a special case of Theorem <ref>(c). Finally, (d) follows from (c) and from Theorem <ref>(<ref>). toc amsalpha
http://arxiv.org/abs/2406.17687v1
20240625162415
Optical Spectropolarimetric Variability Properties in Blazars PKS 0637-75 and PKS 1510-089
[ "Stephanie A. Podjed", "Ryan C. Hickox", "Jedidah C. Isler", "Markus Böttcher", "Hester M. Schutte" ]
astro-ph.GA
[ "astro-ph.GA", "astro-ph.HE" ]
Stephanie A. Podjed (ORCID 0000-0002-0504-565X), Department of Physics and Astronomy, Dartmouth College, 6127 Wilder Laboratory, Hanover, NH 03755-3528; Ryan C. Hickox (ORCID 0000-0003-1468-9526), Department of Physics and Astronomy, Dartmouth College, 6127 Wilder Laboratory, Hanover, NH 03755-3528; Jedidah C. Isler (ORCID 0000-0003-4042-2438), The SeRCH Foundation, Inc, PO Box 442335, Fort Washington, MD 20749-2335; Markus Böttcher (ORCID 0000-0002-8434-5692), Centre for Space Research, North-West University, Potchefstroom 2520, South Africa; Hester M. Schutte (ORCID 0000-0002-1769-5617), Centre for Space Research, North-West University, Potchefstroom 2520, South Africa. Corresponding author: Stephanie Podjed, stephanie.a.podjed.gr@dartmouth.edu. Received 2024 February 8; revised 2024 April 16; accepted 2024 April 18; published 2024 June 20. The Astrophysical Journal.
§ ABSTRACT
Spectropolarimetry is a powerful tool to investigate the central regions of active galactic nuclei (AGNs), as polarization signatures are key to probing magnetic field structure, evolution, and the physics of particle acceleration in jets. Optical linear polarization of blazars is typically greater than a few percent, indicating the emission is dominated by non-thermal synchrotron radiation, while polarization less than a few percent is common for other type 1 AGNs. We present a spectropolarimetric study of PKS 0637–75 and PKS 1510–089 to determine how the head-on orientation of a jet and the dominant emission processes influence polarimetric variations in the broad lines and continuum. Observations were obtained bi-weekly with the Robert Stobie Spectrograph on the Southern African Large Telescope. Variability in the continuum polarization is detected for both PKS 0637–75 and PKS 1510–089, with a total average level of 2.5% ± 0.1% and 7.5% ± 0.1%, respectively. There is no clear polarization in the broad Balmer emission lines and only weak polarization in Mg II, as the average level across all observations is 0.2% ± 0.1% for Hβ, 0.2% ± 0.3% for Hγ, and 0.6% ± 0.2% for Mg II. We find that the polarization measurements confirm the conclusions drawn from spectral energy distribution modeling of the disk-jet contributions to the emission: the optical polarization and time variability of PKS 0637–75 are shown to be dominated by accretion disk emission, while those of PKS 1510–089 are due to both disk and jet emission, with a greater jet contribution during flaring states.
§ INTRODUCTION
Blazars are a class of radio-loud (jet-dominated) active galactic nuclei (AGNs) whose relativistic jet is oriented at small angles with respect to our line of sight <cit.>. The bulk outflow of charged particles in the jet, in addition to the small viewing angles, induces Doppler beaming along the direction of the outflow, which makes the jet appear brighter and the variability timescale shorter in the observed frame. Blazars can be divided into two subclasses based on optical spectral features: Flat Spectrum Radio Quasars (FSRQs) and BL Lacertae objects (BL Lacs). The spectrum of an FSRQ generally has prominent broad and narrow emission lines, whereas that of a BL Lac object has absent or weak (EW < 5 Å) emission lines <cit.>. Blazar broadband emission experiences rapid variability and is generally jet-dominated. During quiescent periods the contribution of thermal emission from an optically thick accretion disk (AD) can be detected, making blazars an ideal laboratory to study connections between the nonthermal jet and thermal disk components.
The emission of blazars from radio through optical wavelengths, and more recently in X-rays with the Imaging X-ray Polarimetry Explorer <cit.>, is also characterized by a high degree of polarization, with polarization percentages ranging from a few to tens of percent, while their non-beamed AGN counterparts usually show polarization levels of less than a few percent. The optical emission of blazars is often dominated by synchrotron radiation from the relativistic jet and is well known to be polarized, with a theoretical maximum attainable synchrotron polarization percentage ranging from 69% to 75%. The polarization percentage (P) and the electric vector position angle (PA) are both often highly variable, even on intranight timescales <cit.>. Polarimetry as a complement to spectroscopy is a useful tool to investigate the central regions of AGNs that are unresolved by direct observations, since polarization is sensitive to the geometry and magnetic fields of the scattering region. Optical spectropolarimetric studies of type 2 AGNs <cit.> provided the basis for the phenomenological geometric unification scheme of active galactic nuclei <cit.>. Though the optical light coming from the central engine and broad-line region (BLR) in these objects is thought to be geometrically blocked by an obscuring torus, it can escape perpendicularly along the unobstructed direction by scattering off of dust or free electrons located above or below the central region <cit.>. This scattering induces linear polarization in the emission, making the once-hidden broad emission lines observable. Similar studies of type 1 AGNs, objects where the BLR is directly observable, found that the optical polarization properties are not consistent with this polar scattering; instead, the PA is typically aligned with the radio axis/axis of symmetry, suggesting that the scattering material is equatorially located <cit.>. The mechanisms responsible for polarization in the continuum and broad lines of AGNs can be different, and include the intrinsic geometry of the emitting region and the geometry of the scattering region <cit.>, as polarization due to radiative transfer alone cannot account for any polarization levels in the BLR <cit.>. Additionally, in blazars the thermal components, i.e., the AD, BLR, and dusty torus, are expected to be unpolarized, and in the optical spectrum they will be characterized by a decrease in the degree of polarization in spectropolarimetric observations <cit.>. For blazars, which generally have a face-on geometry and an unobscured view of the accretion flow, spectropolarimetry is a diagnostic tool that can be used to better understand the interplay between the nonthermal jet and thermal disk components. These findings can be used to comment on the presence and location of scattering regions, on how blazar spectra respond to accretion-disk-dominated or synchrotron-dominated emission, and to better constrain emission mechanism models when included in spectral energy distribution (SED) studies <cit.>. FSRQs are extremely well studied in time series observations across a range of wavelengths, but a relative paucity of spectropolarimetric observations of blazars exists <cit.>, with such studies becoming prominent only recently. Since we aim to increase the characterization of blazar spectropolarimetric properties, we focus on FSRQs, as they generally show prominent quasar-like emission lines in contrast to BL Lacs.
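For reference, the 69%–75% theoretical maximum quoted above can be traced to the standard result for optically thin synchrotron emission in a uniform magnetic field; the expression below is not given explicitly in the text and is quoted here from standard synchrotron theory, with the electron energy index p introduced only for illustration:
% Maximum fractional linear polarization of optically thin synchrotron emission
% from a power-law electron energy distribution N(E) \propto E^{-p} in a uniform
% magnetic field (standard result; the values of p shown are illustrative).
\[
  \Pi_{\max} \;=\; \frac{p+1}{p+7/3}, \qquad
  \Pi_{\max}(p=2) \simeq 0.69, \qquad
  \Pi_{\max}(p=3) = 0.75 .
\]
The quoted 69%–75% range thus corresponds to electron spectral indices of roughly p = 2–3.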
Particularly in this work, from our sample of seven Southern Hemisphere gamma-ray active and quiescent blazars, we present the optical linear polarization variability properties for the two objects displaying emission lines: PKS 0637–75 (z = 0.653, J2000 RA = 06^h35^m46^s.5079, Dec = -75^d16^m16^s.814 as given in the NASA/IPAC Extragalactic Database, NED[<https://ned.ipac.caltech.edu/>]) and PKS 1510–089 (z = 0.36, J2000 RA = 15^h12^m50^s.53, Dec = -09^d05^m59^s.82 as given in NED). We focus on the Mg II λ2798 emission line, a typically strong low ionization line seen in the optical-UV band of AGNs, in PKS 0637–75 and on the broad Hγ and Hβ emission lines in PKS 1510–089 . In this paper, the observations and data analysis of PKS 0637–75 and PKS 1510–089 are described in Section <ref>. Results of our polarization variability study are presented in Section <ref> and discussed in Section <ref>. Our main conclusions are summarized in Section <ref>. § OBSERVATIONS AND DATA REDUCTION §.§ Optical Spectropolarimetry Optical spectropolarimetrc observations have been performed with the Robert Stobie Spectrograph <cit.> on the Southern African Large Telescope <cit.>, a 10 m class telescope located at the South African Astronomical Observatory near Sutherland, South Africa. The effective area of the telescope is constantly changing as a part of the SALT design, so accurate absolute flux calibration is not available. The data were collected using the RSS in the spectropolarimetry LINEAR mode with slit-width of 1.25 between 2019 February and 2020 March for PKS 0637–75 and between 2019 March and 2021 August for PKS 1510–089. The RSS CCD detector consists of a mosaic of three CCDs with a total size of 6362 × 4102 pixels, with a single pixel size of ∼15 μm, corresponding to a spatial resolution of 0.13 per pixel. We used the 2 × 4 binning, faint gain, slow readout mode. The mean gain of the mosaic is 1.7 ADU/electron and the read-out noise is typically around 2.48 electrons. We used the volume phase holographic (VPH) PG0900 grating with a grating angle of 15^∘.88, which gives the observed wavelength coverage of 4500 Å – 7500 Å. The average spectral resolving power is R = 1300. Order blocking was done with the UV PC03850 filter. Each spectrum (flux, polarization degree, and polarization angle as a function of wavelength) was obtained through four total exposures per spectrum, one for each of the half-wave plate orientations (0^∘, 45^∘, 22.5^∘, and 67.5^∘) necessary to obtain linear polarization data. We began with an exposure time of 325 seconds that we reduced to 250 seconds for each subsequent observation after the first three. Each observing block consisted of flat-field images, slit acquisition images, the four science exposures, and an exposure of the calibration Xe lamp. See Figures <ref>, <ref>, and <ref> for the flux, average polarization, and average PA variability respectively. §.§ Optical and Gamma-Ray Photometry We supplement our spectropolarimetric observations with g-band photometry from the Ohio State All-Sky Automated Survey for Supernovae (ASAS-SN[<http://www.astronomy.ohio-state.edu/ assassin/index.shtml>]) project <cit.>, in order to monitor any continuum changes. Thermal continuum emission in the ultraviolet and optical is thought to originate within the AD <cit.> and it is possible to have nonthermal continuum emission originating from synchrotron emission from the jet (see panel (a) of Figures <ref> and <ref>). 
Additionally, we use the weekly binned 100–300,000 MeV Fermi Large Area Telescope (Fermi-LAT) light curve data for continually monitored sources[<https://fermi.gsfc.nasa.gov/ssc/data/access/lat/msl_lc/>] to monitor the jet behavior. After converting from mission elapsed time to MJD, we trimmed the data to cover a date range similar to that of our SALT data for PKS 1510–089 (panel (b) of Figure <ref>). §.§ Data Reduction Data analysis was performed using the graphical user interface (GUI) of the [<https://github.com/saltastro/polsalt>] reduction pipeline <cit.> for Python 2.7. is a step-wise execution for reducing RSS spectropolarimetric data. It begins with raw image reductions, where basic CCD reduction techniques are undertaken including overscan subtraction, gain and cross talk correction, and amplifier mosaicing. Once this is finished, wavelength calibration is done for the two beams of the beamsplitter using the interface to identify the emission lines from the calibration arc images; cosmic ray rejection happens here as well. Next, in each image the beams are corrected for beamsplitter distortion and tilt, and the sky and target spectrum is extracted versus wavelength. Raw and final Stokes calculations are completed for the polarimetry reduction. In the raw Stokes calculation, wave-plate position pairs are identified and together result in linear polarization signal swapping between the O and E beams. The “raw Stokes” files contain unnormalized I and S plane data, with the degree of polarization being S/I. In the final Stokes calculation, the full polarization pattern is evaluated to determine Q and U. Finally, polarimetric zero point, wave-plate efficiency, and axis calibrations are applied to give final Stokes parameters. Once all of these reduction steps are accomplished, the GUI provides an interactive results visualization window showing plots of the intensity, linear polarization percentage, equatorial position angle, Stokes Q or Stokes U behavior. In analyzing the SALT spectropolarimetric data we take a weighted average of the polarization (P) PA data in 50 Å wide bins to investigate any polarization variability associated with each object in its continuum and broad emission lines. Error associated with these is the standard error on the weighted mean; see equations directly below, where x is the parameter of which we find the weighted average: x = Σ x_i / σ_x_i^2/Σ 1/σ_x_i^2 , σ_x = √(1/Σ 1/σ_x_i^2) Continuum polarization measurements were made between 5650 Å and 6260 Å, with the telluric Na I absorption feature around 5900 Å masked, for PKS 0637–75, and between 4480 Å and 5090 Å for PKS 1510–089 due to the lack of any other prominent features within these regions. An example of the total flux spectra, Q and U normalized Stokes parameters, polarized flux, polarization percentage and PA measurements are shown in Figure <ref> for PKS 0637–75 (a) and PKS 1510–089 (b). After the standard CCD reduction procedures were carried out, we continuum-normalized the spectra by dividing out the modeled continuum from the overall spectra using a polynomial of degree 2, as it provided the best overall fit to our data. In doing so, the chip gaps, emission lines, and any noticeable cosmic rays missed from the reduction pipeline were masked so as to not negatively influence the fit. In PKS 0637–75, we concentrate on modeling a relatively narrow band between 4500 Å and 5000 Å in the observed frame. The spectrum consists of the continuum, Fe II pseudocontinuum <cit.>, and the Mg II emission line. 
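The 50 Å weighted binning and its uncertainty, as defined above, are straightforward to reproduce; the short numpy sketch below is a minimal illustration (array and function names are ours, not polsalt outputs), and the continuum measurements described above amount to the same weighted mean evaluated over a single wide window with the Na I feature masked.

```python
import numpy as np

def weighted_bin(wave, p, p_err, bin_width=50.0):
    """Weighted mean of polarization (or PA) in fixed-width wavelength bins,
    with the standard error on the weighted mean (see the equations above)."""
    edges = np.arange(wave.min(), wave.max() + bin_width, bin_width)
    centers, p_mean, p_sig = [], [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        m = (wave >= lo) & (wave < hi) & np.isfinite(p) & (p_err > 0)
        if not m.any():
            continue
        w = 1.0 / p_err[m]**2                  # inverse-variance weights
        centers.append(0.5 * (lo + hi))
        p_mean.append(np.sum(w * p[m]) / np.sum(w))
        p_sig.append(np.sqrt(1.0 / np.sum(w)))
    return np.array(centers), np.array(p_mean), np.array(p_sig)
```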
In PKS 1510–089, we concentrate on modeling the two main broad emission lines, Hγ and Hβ, between 5800 Å and 6800 Å in the observed frame. The Mg II line is treated as a singlet and for all three broad emission lines the kinematic shape is modeled as a single Gaussian using the package to minimize the χ^2 statistic <cit.> and obtain the Gaussian parameters plus error used for further analysis. As a first step to determine if the broad emission lines of our two blazars show variability in polarization levels, we checked if the lines can be seen in polarized light. To do so, we took the nonnormalized spectral intensities, I, and multiplied them by the polarization percentage, P, to obtain polarized spectra, P × I. To increase the signal-to-noise ratio, we bin the full spectra by 50 Å wide bins (Figure <ref>, right). After this visual inspection, to further constrain an upper limit of the polarization percentage for the emission lines in this study, P_L, we create a simple model where the line polarization is some fraction of the continuum polarization, P_C, it is sitting on (Equation (<ref>)): P_L = P_C A/R We assume the emission line retains its same shape in polarized light as was displayed in the nonpolarized spectra. Under this assumption the fraction A/R compares the amplitude of the Gaussian used to fit where an emission line would be in normalized polarized spectra (A) to the amplitude of a Gaussian used to fit the emission line in normalized nonpolarized spectra (R), keeping the centroid and σ of the Gaussian fixed to what their values were in the non-polarized emission line fit. Standard error propagation was followed to obtain the 1σ error values. Negative (nonphysical) polarization values may be returned in this model when a trough is encountered instead of a peak near the emission line position in the polarized spectra. We measure the equivalent width (EW) of Mg II, Hγ, and Hβ by using the code (PytHon Equivalent Widths; ). Within , we use to calculate the EW of an emission or absorption line for a given spectrum using , which takes 13 input parameters to find a best-fit EW value and its associated error via Monte Carlo (MC) iterations. Parameters updated for each emission line in the function include: , a list of the spectrum that includes the wavelength, flux, and flux error of the spectrum; , the value specifying the central location of the spectral line to be measured in Å; , which specify the wavelength space region of interest around the emission line; values that specify the start and stop wavelength range of the emission line; , the number of MC iterations to go through; and , the order of the polynomial used to fit the pseudocontinuum. For all three emission lines studied here, and used were 500 and 2 respectively. For Mg II we used 4632, 4489, 4840, 4590, and 4720 Å for , , and respectively. We do not use the center value output from the single-Gaussian fitting for the values here as they do not create an accurate fit to the emission line when using to get the EW values; the Gaussian has some additional structure, so the fit is off from the actual centroid location by tens of angstroms in each instance. For Hγ we used the center value output from the Gaussian fit for each observation as the value, with 5800, 6100, 5850, 5970 Å for , and respectively. For Hβ, we also used the center value output from the Gaussian fit for each observation as the value, with 6400, 6800, 6550, 6670 Å for , and respectively. 
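The upper-limit construction P_L = P_C A/R described above reduces to two Gaussian fits that share a centroid and width; the sketch below illustrates the procedure, with scipy.optimize.curve_fit standing in for the χ²-minimization package used in the paper, and with the assumption that the continuum-normalized spectra sit at unity away from the line.

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss(x, amp, cen, sig):
    return amp * np.exp(-0.5 * ((x - cen) / sig) ** 2)

def line_polarization_limit(wave, flux_norm, polflux_norm, p_cont, p0):
    """Upper limit on the emission-line polarization, P_L = P_C * A / R.

    flux_norm    : continuum-normalized total-flux spectrum (continuum ~ 1)
    polflux_norm : continuum-normalized polarized spectrum (P x I)
    p_cont       : continuum polarization fraction
    p0           : (amplitude, centroid, sigma) initial guesses for the line
    """
    # Fit the line in the non-polarized spectrum: amplitude R, centroid, sigma free.
    (R, cen, sig), _ = curve_fit(gauss, wave, flux_norm - 1.0, p0=p0)
    # Refit the polarized spectrum with centroid and sigma held fixed; only A is free.
    (A,), _ = curve_fit(lambda x, amp: gauss(x, amp, cen, sig),
                        wave, polflux_norm - 1.0, p0=[0.0])
    return p_cont * A / R   # negative values are non-physical -> line unpolarized
```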
Tables <ref> and <ref> give the calculated EW and errors for Mg II and Hγ and Hβ. Generally, the EW of the Balmer emission lines follow an inverse relationship to the ASAS-SN g-flux, showing a decrease as the flux increases. c|cc PKS 0637–75 Mg II EW Measurements and Associated Error Equivalent Width 3-3 (Å) 3-3 Date MJD Mg II 20190223 58537 21.4 ± 0.6 20190301 58543 21.4 ± 0.6 20190304 58546 21.5 ± 0.6 20191105 58792 20.3 ± 0.8 20191130 58817 20.2 ± 0.8 20191221 58838 18.8 ± 0.8 20191230 58847 20.2 ± 1.0 20200114 58862 19.0 ± 1.0 20200125 58873 19.4 ± 0.9 20201213 59196 21.7 ± 0.7 20210107 59221 4.7 ± 0.1 20210131 59245 20.9 ± 0.7 20210209 59254 21.6 ± 0.6 20210304 59277 23.1 ± 0.9 Absolute value of EW measurements are shown. cccc PKS 1510–089 Hγ and Hβ EW Measurements and associated Error 2c Equivalent Width 3-3 4-4 2c (Å) 3-3 4-4 Date MJD Hγ Hβ 20190306 58548 17.9 ± 0.6 ... 20190413 58586 9.1 ± 0.3 ... 20200319 58927 15.4 ± 0.5 ... 20210208 59253 22.2 ± 0.8 67.2 ± 0.9 20210407 59311 17.0 ± 0.6 47.3 ± 0.5 20210416 59320 22.8 ± 0.7 60.7 ± 0.7 20210506 59340 15.2 ± 0.6 40.8 ± 0.5 20210510 59344 8.3 ± 0.3 23.3 ± 0.3 20210512 59346 11.4 ± 0.5 29.5 ± 0.4 20210517 59351 19.6 ± 0.6 50.7 ± 0.6 20210530 59364 11.4 ± 0.6 29.8 ± 0.5 20210606 59371 8.2 ± 0.4 22.5 ± 0.4 20210612 59377 3.0 ± 0.7 13.9 ± 0.5 20210707 59402 10.2 ± 0.5 23.2 ± 0.4 20210711 59406 5.2 ± 0.4 12.1 ± 0.3 20210714 59409 5.1 ± 0.3 13.4 ± 0.2 20210731 59426 22.0 ± 0.7 55.5 ± 0.7 20210804 59430 18.6 ± 0.7 54.5 ± 0.6 20210824 59450 27.1 ± 0.8 66.8 ± 0.7 Absolute value of EW measurements are shown. The ellipses (...) for the first three observations are due to the Hβ emission line being on a CCD chip gap. § RESULTS §.§ Level of Continuum Polarization Bright and transient sources that have shown flares during the LAT mission and reach the minimum gamma-ray flux threshold of 1×10^-6 cm^-2 s^-1 are added to the Fermi monitored source list which provides daily and weekly flux values of such objects of interest. PKS 1510–089 has crossed the abovementioned flux threshold for continual monitoring, representing one of our gamma-ray loud blazars. PKS 0637–75 on the other hand, while a Fermi blazar in that it has been gamma-ray detected by Fermi and is included in all of the annual Fermi catalogues, has not reached the minimum threshold, thus representing one of our gamma-ray quiescent blazars. For PKS 0637–75, the continuum region was defined between 5650 Å and 6260 Å in the observer's frame to avoid the underlying Fe II at the wavelength of Mg II, as well as any other spectral features. The level of continuum polarization ranges from a minimum of 1.4% ± 0.1% to a maximum of 4.0% ± 0.2%, with an average value of 2.5% ± 0.1%. For PKS 1510–089, the continuum region was defined between 4200 Å and 4900 Å for the first three observations and between 4480 Å and 5090 Å for the remaining observations, as Hβ fell on a chip gap for the first few observing windows. The level of continuum polarization ranges from a minimum of 1.8% ± 0.1% to a maximum of 21.4% ± 0.1%, with an average value of 7.5% ± 0.1%. Continuum polarization for PKS 1510–089 shows stronger variability than for PKS 0637–75. In particular, the highest levels of polarization (∼17%-21%) reached by PKS 1510–089 happens between MJDs 59406 and 59409, which corresponds to an optical flare as seen with ASAS-SN (see (a) and (c) of Figure <ref>). 
We are able to clearly detect a change in the dominant emission processes before (MJD 59253, purple) and near (MJD 59409, blue) the optical flaring period (Figure <ref>); the emission mechanism changes from thermally dominated, low polarization to nonthermal synchrotron dominance with high levels of polarization detected and smaller EWs of both emission lines. In both blazars, there is modest evidence for a wavelength dependence of the polarization, with the polarization fraction appearing to rise at redder or bluer wavelengths depending on the date of observation. For those with an increased polarization fraction at redder wavelengths, the continuum is contaminated by bluer unpolarized disk and BLR emission, causing the dilution of the polarization signal at the shorter wavelengths. The dominance of jet or disk emission is reflected in this wavelength dependence, particularly in the quiescent and flaring states of PKS 1510–089 (Figure <ref>). When the synchrotron jet emission is enhanced compared to the thermal AD, polarization levels are high and the wavelength dependence is more noticeable. When the thermal emission is more dominant, as is seen in PKS 0637–75 and the quiescent state of PKS 1510–089, the polarization fraction is low and the wavelength dependence is marginally detected. cccc PKS 0637–75 Continuum and Mg II Polarization Percentage and Associated 1-σ Error 2cPolarization 3-3 4-4 2c (%) 3-3 4-4 Date MJD Continuum Mg II 20190223 58537 1.5 ± 0.1 0.3 ± 0.7 20190301 58543 1.5 ± 0.1 0.4 ± 0.5 20190304 58546 1.4 ± 0.1 0.6 ± 0.4 20191105 58792 2.8 ± 0.1 1.4 ± 1.3 20191130 58817 2.8 ± 0.1 0.7 ± 1.2 20191221 58838 2.9 ± 0.1 2.5 ± 0.9 20191230 58847 3.1 ± 0.1 1.0^* 20200114 58862 4.0 ± 0.2 2.1 ± 1.0 20200125 58873 3.2 ± 0.1 1.3 ± 1.3 20201213 59196 2.3 ± 0.1 2.3 ± 0.9 20210107 59221 2.3 ± 0.1 0.6 ± 0.7 20210131 59245 2.3 ± 0.1 1.4 ± 1.2 20210209 59254 2.2 ± 0.1 0.2^* 20210304 59277 2.9 ± 0.1 0.5^* Polarization values with an * denote upper limit values. ccccc PKS 1510–089 Continuum and Emission Line Polarization Percentage and 1-σ Error 3cPolarization 3-3 4-4 5-5 3c (%) 3-3 4-4 5-5 Date MJD Continuum Hγ Hβ 20190306 58548 2.1 ± 0.1 0.1 ± 1.0 0.1^* 20190413 58586 4.5 ± 0.1 1.2^* 0.4 ± 0.5 20200319 58927 7.3 ± 0.1 0.7^* 0.7 ± 0.5 20210208 59253 4.5 ± 0.1 1.4^* 0.3 ± 0.9 20210407 59311 6.0 ± 0.1 0.4^* 0.1^* 20210416 59320 3.0 ± 0.1 0.2 ± 1.8 0.9 ± 0.6 20210506 59340 5.0 ± 0.1 0.2^* 0.6 ± 0.6 20210510 59344 13.2 ± 0.1 0^** 0.9 ± 0.8 20210512 59346 14.8 ± 0.1 0^** 0.8 ± 0.8 20210517 59351 3.2 ± 0.1 2.1 ± 1.3 0.4 ± 0.4 20210530 59364 6.5 ± 0.1 2.2 ± 1.3 0.1 ± 0.9 20210606 59371 1.8 ± 0.1 0.3 ± 1.9 1.4 ± 1.0 20210612 59377 12.2 ± 0.1 0^** 3.1 ± 4.0 20210707 59402 9.1 ± 0.1 0.3 ± 1.1 0.2 ± 0.7 20210711 59406 17.0 ± 0.1 0^** 0.5^* 20210714 59409 21.4 ± 0.1 0^** 1.8 ± 1.5 20210731 59426 3.2 ± 0.1 4.3 ± 1.1 0.6 ± 0.5 20210804 59430 3.2 ± 0.1 5.8 ± 6.8 0.7^* 20210824 59450 3.4 ± 0.1 0.2 ± 1.3 0.5 ± 0.7 Polarization values with an * denote upper limit values, and those with ** denote the 1σ upper limit was a (nonphysical) negative value, so we mark the line as not polarized. The measured values with uncertainties are shown in Figure <ref>. §.§ Level of Emission Line Polarization From our initial visual inspection of P×I in Figure <ref> (right), neither PKS 0637–75 nor PKS 1510–089 display noticeable features at the wavelengths of the emission lines Mg II, Hγ, or Hβ. 
In Figure <ref>, there is an observable depolarization dip in the average polarization levels most notably at the wavelengths corresponding to the centroid of the Hγ and Hβ lines, marked with the vertical dotted-dashed gray lines. Moving to our simple model used to find P_L for each emission line, we find the following polarization level limits: For Mg II, the minimum and maximum P_L values are 0.2% and 2.5% ± 0.9%. Hγ is consistent with zero polarization throughout, with a maximum P_L value of 5.8% ± 6.8%. Similarly, the minimum and maximum P_L values for Hβ are 0.5% and 3.1% ± 4.0%. Mg II shows a weak level of polarization while the calculated line polarization percentages are almost all identically similar to zero within their 1σ error bars for the Balmer lines, i.e. these broad emission lines are not polarized. Tables <ref> and <ref> give the calculated continuum and emission line polarization for PKS 0637–75 and PKS 1510–089 respectively; see Figure <ref> for visual representation. §.§ Polarization Angle Table <ref> and Figure <ref> show the variation with time in the PA for both blazars. PKS 1510–089 shows a range of values between 0^∘ and 175^∘ and PKS 0637–75 has a few distinct clumps of values at around 0^∘–30^∘, 80^∘ and 150^∘–200^∘. The continuum polarization angle of PKS 0637–75 ranges from -3^∘.3 ± 1^∘.2 to 190^∘.4 ± 1^∘.0, with an average value of 127^∘.8 ± 1^∘.0. For PKS 1510–089, the continuum polarization angle ranges from -1^∘.2 ± 0^∘.7 to 180^∘.5 ± 0^∘.8, with an average value of 95^∘.9 ± 0^∘.5. Near the time of enhanced optical flux between MJDs 59375 and 59415 for PKS 1510–089, the total average polarization percentage shows elevated levels and the PA changes by about 120^∘. During this time, the Hβ and Hγ line polarization is nominally 0% within their errors. ccc Average Continuum Polarization Angle and Associated 1-σ Error for Both PKS 0637–75 and PKS 1510–089 Polarization Angle Date MJD (deg) 3cPKS 0637–75 1-3 20190223 58537 -3.3 ± 1.2 20190301 58543 16.9 ± 0.9 20190304 58546 168.0 ± 1.1 20191105 58792 170.8 ± 1.0 20191130 58817 164.8 ± 0.9 20191221 58838 174.4 ± 0.9 20191230 58847 167.1 ± 1.0 20200114 58862 66.1 ± 1.0 20200125 58873 190.4 ± 1.0 20201213 59196 174.2 ± 1.0 20210107 59221 7.0 ± 1.1 20210131 59245 158.7 ± 1.0 20210209 59254 157.4 ± 1.0 20210304 59277 177.2 ± 1.0 3cPKS 1510–089 1-3 20190306 58548 180.5 ± 0.8 20190413 58586 91.5 ± 0.2 20200319 58927 120.4 ± 0.3 20210208 59253 -1.2 ± 0.7 20210407 59311 14.4 ± 0.3 20210416 59320 70.4 ± 0.9 20210506 59340 135.4 ± 0.5 20210510 59344 164.6 ± 0.1 20210512 59346 22.0 ± 0.1 20210517 59351 61.3 ± 0.7 20210530 59364 98.5 ± 0.4 20210606 59371 44.3 ± 0.9 20210612 59377 63.3 ± 0.3 20210707 59402 43.1 ± 0.2 20210711 59406 171.3 ± 0.1 20210714 59409 166.3 ± 0.1 20210731 59426 125.1 ± 0.7 20210804 59430 142.3 ± 0.8 20210824 59450 51.1 ± 0.7 §.§ Broadband Spectral Energy Distribution Modeling PKS 0637–75, unlike PKS 1510–089, is a source that does not exhibit much variability generally. To better understand what is contributing to the observed flux and polarization of PKS 0637–75, we compiled the SED for semicontemporaneous observations between ASAS-SN g-band and Swify-UVOT photometry corresponding to our SALT 20190303 epoch of observation, with NED (NASA/IPAC Extragalactic Database (NED) <cit.> archival data from 2005 onward. A similar analysis of PKS 1510–089 has been done by <cit.>, so we refer to that work for a detailed study of its SED. 
The leptonic single-zone blazar model of <cit.> has been used to produce a fit by eye to the broadband SED of PKS 0637–75. A summary of this model is given here; see <cit.> for a more detailed and thorough description. This model follows a homogeneous one-zone framework where a power-law distribution of ultrarelativistic electrons is injected into a spherical emission region of radius R. The emission region moves with a constant speed β_Γc along the jet, which corresponds to the bulk Lorentz factor Γ. The cooling of the electron distribution is influenced by synchrotron and Compton emission processes. Synchrotron emission is determined by a tangled magnetic field of strength B. The model accounts for Compton scattering involving the synchrotron radiation field (synchrotron self Compton, SSC) and external radiation fields (external Compton, EC), including direct AD emission — EC (disk) — and AD emission reprocessed by the BLR, represented numerically by an isotropic external radiation field with a blackbody spectrum of temperature k T_ BLR = 10^4 K — EC (BLR). The code self-consistently computes an equilibrium electron distribution considering particle injection/acceleration, escape on an energy-independent escape time scale t_ esc = η_ esc R/c with η_ esc≥ 1, and radiative cooling; evaluates the kinetic jet power L_e corresponding to the final electron population in the emission region and the magnetic field (Poynting flux) power L_B; and calculates the energy partition ratio ϵ_Be ≡ L_B/L_e. Observational parameters used to constrain the model include the redshift (z = 0.653), the black hole mass <cit.>, the apparent superluminal speed <cit.>, and the AD and BLR luminosities <cit.>. The fit parameters adjusted during the fit-by-eye procedure include the low-energy and high-energy cutoffs of the injected electron spectrum (γ_min, γ_max), the electron injection spectral index (q_e), the emission region radius (R), the magnetic field strength (B), the distance of the emission region from the black hole (z_0), the bulk Lorentz factor (Γ), the observing angle (θ_obs) in the observer's frame, the BLR radiation field black-body temperature (T_BB), and the BLR radiation field energy density (u_BB). The minimum variability time scale corresponding to the light-crossing time scale (t_var,min), the kinetic power in relativistic electrons in the AGN frame (L_e), the power carried in the magnetic field (L_B), and the energy partition parameter (ϵ_Be) are computed from the other model parameters and the resulting equilibrium electron distribution. A representative broadband SED fit is plotted in Figure <ref> with the leptonic model fit parameters listed in Table <ref>. However, due to the substantial number of free parameters and the fact that some of the SED photometric measurements were taken at different times, the fit is rough and nonunique. In order to reduce the degeneracies in the fit procedure, we aimed for a fit with exact equipartition between relativistic electrons and magnetic fields, which we were able to achieve. On the basis of the SED fit, we determined that the optical emission is dominated by the thermal disk, though the listed parameters are only indicative and no strict conclusions on the physical conditions in the emission region should be drawn from it. 
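The derived quantities quoted with the fit (t_var,min, L_e, L_B, ϵ_Be) follow from the primary parameters under standard relations; the short sketch below reproduces the tabulated values using our assumed conventions for the Doppler factor, the Poynting flux, and the light-crossing time, which may differ in detail from those implemented in the model code.

```python
import numpy as np

c = 2.998e10                       # speed of light [cm/s]
R, B, Gamma = 1.5e16, 2.0, 15.0    # emission-region radius [cm], B [G], bulk Lorentz factor
theta_obs, z = np.radians(3.82), 0.653

beta = np.sqrt(1.0 - 1.0 / Gamma**2)
delta = 1.0 / (Gamma * (1.0 - beta * np.cos(theta_obs)))   # Doppler factor

u_B = B**2 / (8.0 * np.pi)                       # magnetic energy density
L_B = np.pi * R**2 * c * Gamma**2 * u_B          # Poynting-flux power ~7.6e44 erg/s
t_var_min = (1.0 + z) * R / (c * delta) / 3600.0 # light-crossing time scale ~15.3 hr

L_e = 7.60e44                                    # kinetic power of the equilibrium
eps_Be = L_B / L_e                               #   electron distribution; ratio ~1
print(f"delta={delta:.1f}, L_B={L_B:.2e} erg/s, "
      f"t_var={t_var_min:.1f} hr, eps_Be={eps_Be:.2f}")
```

Running this returns δ ≈ 15, L_B ≈ 7.6 × 10^44 erg s^-1, t_var,min ≈ 15.3 hr, and ϵ_Be ≈ 1, consistent with the values listed in the parameter table.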
cc[ht] Model Parameters for the SED Fit of PKS 0637–75 Parameter Value M_BH (M_⊙) 2.5 × 10^9 L_disk (erg s^-1) 8 × 10^46 L_BLR (erg s^-1) ∼ 10^45 γ_min 700 γ_max 2 × 10^5 q_e 2.5 R (cm) 1.5 × 10^16 B (G) 2 z_0 (pc) 0.06 Γ 15 θ_obs (deg) 3.82 T_BB (K) 1 × 10^4 u_BB (erg cm^-3) 2 × 10^-2 L_e (erg s^-1) 7.60 × 10^44 L_B (erg s^-1) 7.59 × 10^44 ϵ_Be 1.0 t_var,min (hr) 15.3 § DISCUSSION The optical emission from blazars is composed of a variety of components including the accretion disk, the broad-line region, and synchrotron emission from the jet. Each of the emission mechanisms associated with these regions contributes differently to the polarization characteristics of the emitted radiation <cit.>. The mechanisms inducing polarization in AGNs can be divided into internal (central parsec and smaller scales) and external (greater-than-parsec scales). Polarization due to the influence of magnetic fields close to the SMBH, radiation transfer from the AD, the synchrotron jet radiation, and electron scattering in the hot corona all contribute to the internal polarization mechanisms. Equatorial scattering at the torus and polar scattering at the ionization cone make up the external polarization mechanisms <cit.>. Polarized emission lines are typically due to scattering by material through the above mentioned external mechanisms, where the number and location of the scattering regions determine the observed polarization properties. If equatorial scattering is the dominant mechanism, a couple of characteristic polarization signatures would be observed – mainly a dip in the polarization degree and an S-shaped PA swing along the emission line profiles <cit.>. Of the two, we do see a dip in the percentage of polarization at each emission line core region, but there is no evidence for any swing of the PA across the line (Figure <ref>). The linear polarization that is associated with optically thin synchrotron radiation depends on the structure of the magnetic fields in the emitting region and on the emitting electrons' energy distribution <cit.>. Additionally, polarization from nonthermal electrons in an anisotropic magnetic field should vary with the total flux of an object as magnetic field configurations evolve, giving rise to synchrotron polarization that is strongly variable and different from the polarization of emission lines <cit.>. Especially evident for PKS 1510–089 , the polarization fraction spanning the 2–3 yr of our study displays variability with flux and is different from that of the emission lines, which are consistently unpolarized. The orientation of an AGN with respect to the observer's line of sight can have a strong influence on the polarization detection as well. Single SMBHs surrounded by coplanar, axisymmetric, or spherically-shaped scattering regions produce low amounts of polarization when viewed close to face-on inclinations <cit.>. For objects with inclination angles close to 0^∘ (i.e. blazars), we have a mostly direct, unobstructed view of the AD, BLR, and relativistic jet. As such, the jet will continue to have a polarization signal due to the intrinsically polarized synchrotron radiation, while the polarization vectors of the disk undergo geometric cancellations, resulting in no net disk polarization in total light <cit.> nor in the broad lines. Various spectropolarimetry studies of quasars have been undertaken with similar results to what we have found for PKS 0637–75 and PKS 1510–089. 
<cit.> found an absence of broad lines in polarized light for two quasars in their study, suggesting the continuum scattering region potentially is located interior to or cospatially with the BLR <cit.>. Additionally, <cit.> found that polarization was confined to the continua for five quasars in their study with depolarized BLR emission and wavelength-independent position angles, suggesting a single source of the observed polarization. For radio-loud quasars studied by <cit.>, polarization was not detected in or across the broad emission lines which the authors discuss could have been due to a lack of equatorial scattering, a region of depolarization above the BLR, or an inner equatorial scattering region comparable in size to the BLR. As was discussed for higher inclination quasars in <cit.>, if a single scattering medium physically close to the BLR is present, geometric cancellation of polarization vectors is possible. Lower levels of synchrotron polarization during the nonflaring epochs of PKS 1510–089 indicate a less ordered magnetic field during such quiescent states, suggesting more tangled magnetic field lines with different field-line directions that can cancel out <cit.>. The generally low degree of synchrotron polarization in PKS 0637–75 is likely due to the dominant AD emission. As we see in Figure <ref>, the upturn at the optical-UV regime in the SED of PKS 0637–75 is well modeled by a Shakura-Sunyaev AD <cit.>. The SED is modelled with an emission region located 0.06 pc down the jet, i.e., within the BLR, hence the dominant external Compton contributions from the disk and BLR (EC (disk), EC (BLR)). The lightly shaded box around 10^15 Hz illustrates the frequency range covered by the RSS on SALT used for our spectropolarimetry data collection. We see that in our model, SSC is sub-dominant and the optical emission is dominated by thermal processes (direct AD emission) over the nonthermal synchrotron jet. Thus the low degree of polarization of the optical-UV emission in PKS 0637–75 is consistent with dilution by the AD and BLR which are expected to be intrinsically nonpolarized. From the beginning of 2021 April to mid-June, there exists an overlap of SALT spectropolarimetry observations of PKS 1510–089 between this observing campaign and that of <cit.>, which used H.E.S.S., ATOM, and SALT observations to better understand the primary emission region of PKS 1510–089 (see panels (c) and (d) of Figure <ref>). In this study, we observed a drop in the optical continuum polarization and an increase in the EW of broad emission lines Hγ and Hβ during the quiescent stages of PKS 1510–089, most notably after the optical flaring event around 2021 14 July. After this observation in 2021, our SALT data display significantly lower levels of polarization (less than 4%). In <cit.>, the polarization level reached a maximum of 12.5% and minimum of 2.2% in 2021, similar to the values we obtained from our SALT observations at similar times (within ± 1–2 days). Using the SED + spectropolarimetry model of <cit.>, <cit.> were able to show that contributions from the synchrotron jet, AD, and BLR can explain the observed emission and polarization levels in 2021, whereas for 2022, the drop in polarization level to being consistently below 2% is consistent with no polarization in the blazar, such that the AD and BLR flux is sufficient to fully explain the optical spectrum. 
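The dilution at work in both objects can be summarized in one line of bookkeeping: if only the synchrotron jet is intrinsically polarized, the observed fraction is the intrinsic jet polarization scaled by the jet's share of the total optical flux. The sketch below makes this explicit; the numbers are illustrative placeholders, not measurements from this work.

```python
def diluted_polarization(p_jet, f_jet, f_disk, f_blr):
    """Observed polarization when the disk and BLR contribute unpolarized flux."""
    return p_jet * f_jet / (f_jet + f_disk + f_blr)

# e.g. a 10% polarized jet contributing only ~20% of the optical flux
print(diluted_polarization(0.10, 0.2, 0.7, 0.1))   # -> 0.02, i.e. ~2% observed
```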
The decrease in polarization levels after 2021 mid-July we observe gives support to the above suggestion that the optical-UV spectrum became dominated by the thermal AD and BLR, while low levels of polarization (4%) between 2021 mid-July and August may be consistent with only interstellar polarization and no intrinsic polarization in the source, in agreement with what was found by <cit.> in their later observations; see the bottom panels (c) and (d) in Figure <ref> where we see in more detail the decline in polarization levels of PKS–1510-089. § CONCLUSIONS We obtained spectropolarimetric observations of FSRQs PKS 0637–75 and PKS 1510–089 using the Southern African Large Telescope during the 2019–2021 period. Blazar optical emission is composed of the thermal accretion disk and broad-line region and nonthermal synchrotron jet. The connection between these thermal and nonthermal components are explored through the polarization characteristics of the emitted radiation. Variability in continuum polarization is on the order of approximately half to a few percent on various time scales for PKS 0637–75 and approximately a few to tens of percent on day–week timescales for PKS 1510–089. While we detect variability in the continuum polarization levels, the same cannot be said for the broad emission lines these blazars exhibit. The broad Hγ and Hβ lines of PKS 1510–089 are not detected in polarized emission and within their errors are consistent with zero polarization. In PKS 0637–75, the low-ionization Mg II line is not detected in polarized light as well, though it does occasionally demonstrate weak levels of polarization. It has been shown for 4C+01.02 that the AD emission can dilute the synchrotron emission toward higher optical frequencies, causing a decrease in the total degree of polarization, as well as a detected decrease in polarization at the frequency of unpolarized emission lines <cit.>. Both of these phenomena are observed in this work – during the nonflaring period of time when the thermal emission components are dominant, PKS 1510–089 was in a lower polarization state compared to the epoch of nonthermal dominance during the flare. Likewise, there is a noticeable drop in polarization at the wavelengths of the unpolarized emission lines, especially for the Balmer lines. We conclude that the broad emission lines of PKS 0637–75 and PKS 1510–089 are intrinsically nonpolarized, though geometric cancellations due to the pole-on orientation potentially exist as well <cit.> and are causing our nondetection of polarized emission from the lines analyzed here. Additionally, changes in the dominant emission process can lead to continuum polarization variability. The gamma-ray quiet FSRQ PKS 0637–75 is not as variable a source as compared to the gamma-ray loud FSRQ PKS 1510–089 and seems to be consistently dominated by thermal emission in the optical-UV regime, resulting in very low levels of observed polarization. Emission from PKS 1510–089 prior to 2021 mid July was consistent with being associated with a nonthermal synchrotron jet, thermal AD and BLR which then underwent a change to being dominated by the thermal components, as evidenced by the drastic decrease in polarization levels observed in our SALT spectropolarimetric observations and supports the photometric and spectropolarimetric observations from <cit.>. We thank the anonymous reviewer for comments that improved this work. S.A.P. 
acknowledges support from the Dartmouth Fellowship and Sigma Xi grant G201903158443203 and would like to thank Keighley E. Rockcliffe, Aylin García Soto, John R. Thorstensen, Elisabeth R. Newton, Rujuta A. Purohit, and Emily M. Boudreaux for various mentoring, advice, and conversations that improved the manuscript. R.C.H. acknowledges support from NASA through Astrophysics Data Analysis grant No. 80NSSC23K0485. All of the spectropolarimetric observations reported in this paper were obtained with the Southern African Large Telescope (SALT), under proposals 2018-2-SCI-039 (PI: J. Isler), 2019-2-SCI-040 (PI: J. Isler), 2020-2-SCI-017 (PI: S. Podjed), and 2021-1-SCI-027 (PI: S. Podjed) with SALT astronomers Danièl Groenewald and Lee Townsend. Special thanks go to the SALT observation team for their diligent communications, data collection, and initial reduction of data for our program study. This research has made use of the NASA/IPAC Extragalactic Database (NED), which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration. This work includes data collected by the ASAS-SN mission and the Fermi-LAT mission. Facilities: SALT (RSS). Software: polsalt <cit.>, lmfit <cit.>, Astropy <cit.>, <cit.>, PHEW <cit.>.
http://arxiv.org/abs/2406.17991v2
20240626002105
Tele-Correlation: Calibrating Shear-Shear Correlation with Real Data
[ "Zhi Shen", "Jun Zhang", "Cong Liu", "Hekun Li", "Haoran Wang", "Zhenjie Liu", "Jiarui Sun" ]
astro-ph.CO
[ "astro-ph.CO" ]
Catching Chameleons: Detecting Evolving Disinformation Generated using Large Language Models Bohan Jiang*, Chengshuai Zhao*, Zhen Tan, Huan Liu School of Computing and Augmented Intelligence Arizona State University, USA {bjiang14, czhao93, ztan36, huanliu}@asu.edu July 1, 2024 ======================================================================================================================================================================================================= § INTRODUCTION Weak lensing has been established as a powerful probe of the cosmic structure. Many large scale galaxy surveys set weak lensing as their primary scientific goals, including Canada-France-Hawaii Telescope Lensing Survey (CFHTLenS[http://www.cfhtlens.org/www.cfhtlens.org/])<cit.>, Dark Energy Survey (DES[https://www.darkenergysurvey.org/www.darkenergysurvey.org]) <cit.>, Hyper Suprime-Cam Subaru Strategic Program (HSC[https://hsc.mtk.nao.ac.jp/ssp/hsc.mtk.nao.ac.jp/ssp/])<cit.>, Kilo-Degree Survey (KiDS[https://kids.strw.leidenuniv.nl/kids.strw.leidenuniv.nl])<cit.>. In these studies, it has been found that the parameter S_8 from weak lensing is smaller than that from the cosmic microwave background measurement <cit.> at about 2-3 σ level. To firmly establish this result, however, we need to carefully examine all possible sources of systematic errors, which is well known to be difficult. We expect these issues to be better addressed in Stage IV surveys, including Euclid <cit.>, the Large Synoptic Survey Telescope[https://www.lsst.org/www.lsst.org/](LSST), the China Space Station Telescope (CSST) <cit.>, and Roman <cit.>, all of which are going to observe billions of galaxy images for accurate weak lensing measurement. So far, major weak lensing surveys calibrate the systematics using image simulations. For example, CFHTLenS takes image simulations to quantify the dependence of shear bias on the signal-to-noise-ratio (SNR) and galaxy size <cit.>; HSC calibrates the multiplicative and additive bias as a function of SNR and image resolution in simulations <cit.>; KiDS team calibrates the shear catalog with a comprehensive model considering the SNR, the galaxy size and ellipticity, as well as the size and ellipticity of the point spread function (PSF) <cit.>; The DES team corrects the multiplicative bias with a multiple effective redshift distribution, which is achieved using simulations <cit.>. Ideally, one would hope to test the shear recovery accuracy directly on the real data. One way to do so is to make use of the field distortion (FD) signal that naturally exists in all optical images, as proposed in <cit.>. By grouping the galaxies according to their underlying FD signals, and observing how well the galaxy shear estimators can recover the FD (measured in astrometry), one can get a direct estimate of the multiplicative and additive shear biases. This idea has been successfully applied to the processing of the CFHTLenS data <cit.> and the DECaLS data <cit.> with the Fourier_Quad shear measurement method, and has helped discover a selection effect due to the presence of image boundaries <cit.>. This kind of test automatically involves all observational and instrumental effects, and has been proved as a robust way for calibrating shape measurement. In this work, we show that the FD test can be extended to calibrate potential biases for the two-point statistics, i.e., shear-shear correlations, which is commonly believed to be more challenging to measure due to its small amplitude (≲ 10^-4). 
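For context, the one-point FD test referred to above reduces to a regression of the recovered shear against the known FD shear: bin the galaxies by their FD signal, stack the shear estimates in each bin, and fit a straight line whose slope and intercept give 1+m and c. The schematic sketch below uses a plain per-bin average of a generic ellipticity-like estimate e; the actual Fourier_Quad measurement instead uses the PDF-SYM statistic described in the next section.

```python
import numpy as np

def fd_one_point_test(g_fd, e, n_bins=20):
    """Schematic one-point field-distortion test for one shear component.

    g_fd : per-galaxy field-distortion shear known from astrometry
    e    : per-galaxy shear estimate for the same component
    Returns (m, c), the multiplicative and additive biases from a linear fit
    of the stacked estimate against the FD shear.
    """
    edges = np.linspace(g_fd.min(), g_fd.max(), n_bins + 1)
    idx = np.clip(np.digitize(g_fd, edges) - 1, 0, n_bins - 1)
    x = np.array([g_fd[idx == i].mean() for i in range(n_bins)])
    y = np.array([e[idx == i].mean() for i in range(n_bins)])
    slope, intercept = np.polyfit(x, y, 1)   # <e> = (1+m) * g_fd + c
    return slope - 1.0, intercept
```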
We note that this is not completely equivalent to the one-point test, because even if the underlying shear error is close to zero on average, it may still have non-zero spatial correlations due to residual systematic effects on the focal plane. The idea of the two-point (2pt hereafter) FD test is to cross-correlate the shear estimators of two remotely separated galaxies, which we call Tele-Correlation (TC hereafter). Such a correlation should not contain contributions from the astrophysical signals. If the galaxy shear estimator is measured on each exposure individually, so that the FD shear information is kept in the catalog for each galaxy image, the galaxy pairs involved in the TC measurement can be grouped according to the products of their underlying FD shears, forming a straightforward way of calibrating for the potential biases in the shear-shear correlation. In this paper, we apply the Tele-Correlation method on the DECaLS shear catalog produced by the Fourier_Quad (FQ hereafter) shear measurement method <cit.>. In <ref> we introduce the DECaLS shear catalog, and the methods we use for the shear estimation and shear-shear correlation. In <ref> we introduce the concept of TC, and show the results from TC, mainly including the shear biases from each redshift bin for our tomographic study. We show how the biases change our cosmology constraints in <ref>. Finally, we give our main conclusion in <ref>. § DATA AND SHEAR CATALOG Our shear catalog is based on the imaging data of the Dark Energy Camera Legacy Survey (DECaLS)[https://www.legacysurvey.orgwww.legacysurvey.org]. The total sky coverage is about 10,000 deg^2, taken by the Dark Energy Camera (DECam) on the Blanco 4 m telescope of the Cerro Tololo Inter-American Observatory. The image files are pre-processed through the “Community Pipeline” to remove the instrumental effects, and the sky backgrounds are kept. The shear catalog is obtained using the Fourier_Quad pipeline, which evolves from the FQ shear measurement method <cit.>. It involves all the necessary steps for achieving accurate galaxy shape measurement, including background removal, astrometric calibration, PSF reconstruction, etc.. The pipeline is applied to the data of all three bands: g, r, z. With the field distortion test, it is found that the quality of the z-band shear catalog is much better than those of the other two bands. We therefore only use the z-band data in this work for the shear-shear correlation. The typical galaxy number density is about 3–5 per square arcmin <cit.>. The photo-z we use in this paper is measured by <cit.>, and for cross-checking the redshift, we also use the photo-z catalog from <cit.>. In the following tests, we set a sample cut with those criteria: we choose those galaxies from the z-band with signal-to-noise ratio ν_F (defined in Fourier space, see <cit.>) is larger than 4. Additionally, we exclude galaxies with large field distortion signals: |g^f_1,2|> 0.0015, which are located near the edge of the exposure. We also exclude galaxies obtained from the problematic chips and those with the z-band magnitude larger than 21. After these cuts, there are about one galaxy per square arcmin. Given that the FQ shear estimators are measured on individual exposures, we only use the shear estimators from two different exposures to measure their correlations. This is for the purpose of avoiding possible systematic errors from correlated PSF residuals <cit.>. 
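The catalog-level selection listed above amounts to a handful of boolean masks; a sketch with illustrative column names (ours, not the actual Fourier_Quad catalog schema) is:

```python
import numpy as np

def select_sample(cat):
    """Apply the z-band sample cuts described in the text to a structured
    catalog array (boolean 'bad_chip' flag assumed)."""
    good = (
        (cat["band"] == "z")
        & (cat["nu_F"] > 4.0)                # Fourier-space signal-to-noise cut
        & (np.abs(cat["gf1"]) <= 0.0015)     # drop large field-distortion signals
        & (np.abs(cat["gf2"]) <= 0.0015)     #   near the exposure edges
        & (cat["mag_z"] <= 21.0)             # z-band magnitude cut
        & (~cat["bad_chip"])                 # exclude problematic chips
    )
    return cat[good]
```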
The FQ shear measurement method uses the multipole moments of the galaxy power spectrum to form the shear estimators. They are defined as: G_1 =-1/2∫ d^2 k⃗(k_x^2-k_y^2)T(k⃗)M(k⃗) G_2 =-∫ d^2k⃗k_x k_y T(k⃗)M(k⃗) N =∫ d^2 k⃗ [k^2-β^2/2k^4 ]T(k⃗)M(k⃗) U =-β^2/2∫ d^2 k(k_x^4-6 k_x^2 k_y^2+k_y^4) T(k⃗) M(k⃗) V =-2 β^2 ∫ d^2 k(k_x^3 k_y-k_x k_y^3) T(k⃗) M(k⃗), in which M(k⃗) is the 2D galaxy power spectrum corrected by terms related to the background noise and the Poisson noise <cit.>. T(k⃗) is the factor for converting the PSF to a Gaussian form, i.e.: T(k⃗)=|W_β(k⃗)|^2/|W_PSF(k⃗)|^2 in which W_PSF(k⃗) and W_β(k⃗) [=exp(-β^2|k⃗|^2/2)] are the Fourier transforms of the PSF function and the Gaussian kernel respectively. β is the scale radius of the kernel, which is chosen to be slightly larger than that of the original PSF, so that the reconvolution defined by T(k⃗) is mathematically well defined. It can be shown that the FQ shear estimators defined in eq.(<ref>) can recover the underlying shear signals to the second order in accuracy: <G_1>/<N>=g_1+𝒪(g_1,2^3), <G_2>/<N>=g_2+𝒪(g_1,2^3) Although the form defined in eq.(<ref>) is unbiased, it is not statistically optimal, as the amplitudes of the G_i and N are unnormalized moments and therefore strongly depend on the galaxy luminosity. Certain weightings can be applied to reduce the scatter of the shear estimators, but the weighting function should be carefully chosen to avoid additional biases <cit.>. Alternatively, in this work, we adopt the PDF-SYM method proposed by <cit.> to measure the stacked shear signal and the shear-shear correlation. In this new method, the shear signal is recovered by symmetrizing the probability distribution function (PDF) of the FQ shear estimator Ĝ_̂î with an assumed shear value ĝ_̂î through Ĝ_̂î=G_i-ĝ_̂îB_i, in which B_1 =N+U, B_2 = N-U. Note that the term U here and V defined in eq.(<ref>) are the two components of a spin-4 quantity, therefore both of them should be kept in the catalog. Similarly, to measure the shear-shear correlation, the joint PDF of two shear estimators (Ĝ_̂î,Ĝ_̂î^') is symmetrized using an assumed correlation strength. With this method, we can bring the statistical error down to the lower bound without introducing systematic errors. The details of the PDF-SYM method can be found in <cit.>. Its performance has been proved in a number of recent works <cit.>. Its application on the measurement of the shear-shear correlation is also discussed specifically in our another recent work <cit.>. § TELE-CORRELATION For simplicity and clarity, we assume that the shear estimator takes a conventional form e_i with i=1,2, which can be regarded as the two galaxy ellipticity components. Let us also denote the cosmic shear components and the convergence as γ_i and κ. The reduced shear g_i is therefore given by γ_i/(1-κ). Under the weak lensing effect and the field distortion effect, the shear estimator would take the following form: e_i = e^I_i+ (1+m_i)(g_i + g^f_i)+n_i where e^I_i is the intrinsic ellipticity, and g^f_i is the field distortion signal. m_i is the multiplicative bias associated with the shear estimator, and n_i is the noise, possibly containing an additive bias. Tele-Correlation(TC) is defined as the correlation between the shear estimators of two galaxies separated by a large distance (e.g., more than a hundred degrees). 
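Numerically, the moment integrals defined above reduce to weighted sums over the Fourier grid of each galaxy stamp; the sketch below spells out that discretization (the grid layout and cell-area convention are our assumptions).

```python
import numpy as np

def fq_moments(M, T, kx, ky, beta):
    """Discretized FQ shear-estimator moments on a Fourier grid.

    M      : noise-corrected 2D galaxy power spectrum
    T      : PSF-to-Gaussian conversion factor |W_beta|^2 / |W_PSF|^2
    kx, ky : 2D wavenumber grids (np.meshgrid(kx_1d, ky_1d) ordering assumed)
    beta   : scale radius of the Gaussian kernel
    """
    k2 = kx**2 + ky**2
    w = T * M                                              # common integrand factor
    dA = (kx[0, 1] - kx[0, 0]) * (ky[1, 0] - ky[0, 0])     # grid cell area

    G1 = -0.5 * np.sum((kx**2 - ky**2) * w) * dA
    G2 = -np.sum(kx * ky * w) * dA
    N  = np.sum((k2 - 0.5 * beta**2 * k2**2) * w) * dA
    U  = -0.5 * beta**2 * np.sum((kx**4 - 6 * kx**2 * ky**2 + ky**4) * w) * dA
    V  = -2.0 * beta**2 * np.sum((kx**3 * ky - kx * ky**3) * w) * dA
    return G1, G2, N, U, V
```

With the estimators in hand, consider again two galaxies separated by a very large angle.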
In this case, the astrophysical correlation should vanish, and TC only receives contribution from the field distortion: <e_ie_i^'>(Δθ→∞) = (1+m_i)(1+m_i')<g_i^fg_i^f'>+<n_in_i'> = (1+ℳ_i)^2<g_i^fg_i^f'>+𝒞_i We denote TC as <e_ie_i^'>(Δθ→∞) in the rest of the paper. It is clear that by grouping the galaxy pairs according to the products of their FD signals, TC can be directly compared with <g_i^fg_i^f'> to estimate ℳ_i and 𝒞_i, and thereby the multiplicative and additive biases of the shear estimators. This calibration can be done for galaxies of different redshift bins. For the auto-correlation between the same population of the galaxies, we should have ℳ_i=m_i. Fig.<ref> shows the result of TC using the galaxy pairs separated by more than 140 degrees. We only choose the galaxies with redshifts larger than 0.1. The FD product (x-axis in the figure) is evenly divided into 10 bins. The red dashed lines are the best-fit of the data points, and the black solid lines refer to 'y=x'. According to the figure, the multiplicative biases are not significant: m_1 = (-1.99±1.717)× 10^-2; m_2=(0.48±0.779)× 10^-2, and the additive biases 𝒞_i are negligible (∼ 10^-8). It is interesting to note that one may expect to get smaller error bars in TC if we enlarge the angular range of the galaxy pairs to get more samples. This is indeed the case: we find that the average error-bar size of the data points in fig.<ref> does decay when the lower limit of the separation angle is reduced. However, the values and the uncertainties of the multiplicative biases do not change much. In practice, when the lower limit of the galaxy angular separation is set at 140 degree, the multiplicative biases converge well enough, and the computational cost is already quite substantial[More than 70000 cores·h for computing 1.7*10^15 galaxies pairs.]. As a consistency check, we also set the angular range of the galaxy pairs to be between 120 to 140 degrees, and the results are similar to those shown in fig.<ref>. For the tomography study, we use TC to calibrate the shear biases in each redshift bin. We divide the redshift range of [0.2, 1.0] evenly into 4 bins. The TC results of these bins are shown in fig.<ref>. The definitions of the axes and the meanings of the black and red lines in the plots are the same as those in fig.<ref>. To our surprise, there is a significant multiplicative biases for both m_1 and m_2 in a number of redshift bins. The values of the biases are given in table.<ref>. As a consistency check, we perform the original FD tests (shear stacking) for the galaxies in these redshift bins. The results are shown in fig.<ref> and table.<ref>. The biases are consistent with those from the TC in all the bins. As a further consistency test, we also measure the TC between two different redshift bins. The results are shown in fig.<ref>. The label on the upper-left corner of each panel show the indices of the two redshift bins. The bin index of 1, 2, 3, 4 refers to the redshift range of [0.2, 0.4], [0.4, 0.6], [0.6, 0.8], [0.8, 1] respectively. The definitions of the axes as well as the black solid line and the red dashed line are the same as those in fig.<ref>. In this calculation, we only use the galaxy pairs separated by more than 170 degrees. Note that from the auto-correlation of each redshift bin we have already obtained its multiplicative bias m^i with i=1, 2, 3, 4 (note that the upper index on m refers to the redshift bin index). 
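The per-bin biases quoted above come from a straight-line fit of the binned TC signal against the FD product; a sketch of that fit is given below, with the pair weighting and the PDF-SYM machinery omitted for brevity, so the pair statistic here is a plain product of ellipticity-like estimates.

```python
import numpy as np
from scipy.optimize import curve_fit

def tc_fit(gf_prod, ee_prod, n_bins=10):
    """Fit the binned tele-correlation, <e e'> = (1+M)^2 <g^f g^f'> + C.

    gf_prod : product of the two galaxies' FD shears, one entry per distant pair
    ee_prod : product of their shear estimates for the same component
    """
    edges = np.linspace(gf_prod.min(), gf_prod.max(), n_bins + 1)
    idx = np.clip(np.digitize(gf_prod, edges) - 1, 0, n_bins - 1)
    x, y, s = [], [], []
    for i in range(n_bins):
        sel = idx == i
        if sel.sum() < 2:
            continue                         # skip empty or single-pair bins
        x.append(gf_prod[sel].mean())
        y.append(ee_prod[sel].mean())
        s.append(ee_prod[sel].std() / np.sqrt(sel.sum()))
    (slope, C), _ = curve_fit(lambda x, a, c: a * x + c,
                              np.array(x), np.array(y), sigma=np.array(s))
    M = np.sqrt(slope) - 1.0                 # slope = (1 + M)^2
    return M, C
```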
We expect that the joint multiplicative bias ℳ of the TC between two distinct bins is given by (1+ℳ)^2=(1+m^i)(1+m^j). These predictions are shown in fig.<ref> with the green lines given by y=(1+ℳ)^2x. One can see that the red and green lines in each plot agree well with each other, implying that the multiplicative biases derived from the TC test are reliable. It is still unclear to us why the redshift binning causes the multiplicative biases, which can be as high as 10% or even more. There seem to be a strong selection effect associated with the photo-z. If we switch to a different photo-z catalog from <cit.>, the biases persist, as shown in fig.<ref> and table.<ref>. Their amplitudes are similar to the original results in table.<ref>. We give more discussions in the <ref> regarding the bias. Interestingly, in the DES Y3 shear calibration simulation <cit.>, they also find a redshift-dependent multiplicative bias shown in different bins. We note that unlike the conclusion of <cit.>, the bias found in this work is NOT due to blending in our test, because the blended sources experience the same FD effect. Given that the FQ shear measurement does not need assumptions about the regularity of the galaxy morphology, we do not expect shear bias to rise in the FD test. In the next section, we implement these biases in our tomography study, and check their influence on the cosmological constraints. § IMPACT ON COSMOLOGICAL CONSTRAINTS §.§ Theory The conventional shear-shear correlations are measured using the tangential and cross shear components (γ_t and γ_×), which are defined as γ_t+i γ_×=- (γ_1+i γ_2 ) e^-2 α i,α is the angle between the x-axis and the line connecting the galaxy pair. Correspondingly, the correlations are defined as: ξ_±(z_1,z_2,Δθ⃗) =<γ_t(z_1,θ⃗)γ_t(z_2,θ⃗+Δθ⃗)> ±<γ_×(z_1,θ⃗)γ_×(z_2,θ⃗+Δθ⃗)> In this paper, similar to the work of <cit.>, we also consider another way: correlations with only γ_1 or γ_2, i.e., ξ_ii =<γ_i(z_1,θ⃗)γ_i(z_2,θ⃗+Δθ⃗)>, which i =1 or 2. Theoretically, we expect ξ_11=ξ_22=ξ_+/2. We use these different types of correlations to constrain cosmology. In the theoretical model of ξ_+/-, one needs to consider the intrinsic alignment. For our purpose, we simply adopt the recipe of <cit.>, in which the shear-shear correlation can be written as: ξ_±=ξ_GG±+ξ_GI±+ξ_II± where the subindex ”G” represents the cosmic shear, and ”I” stands for the intrinsic galaxy shape. ξ_+/- for galaxy pairs with the angular distance θ are given by: ξ_±^i j(θ)=∫_0^∞d ℓ/2 πℓ C^i j(ℓ) J_v(ℓθ), C^i j(ℓ)=∫_0^∞ d z W^i(z) W^j(z)/χ(z)^2 P_δ (ℓ/χ(z), z ) where J_ν is the first kind Bessel function, and ν is 0/4 for ξ_+/ξ_-. χ(z) is the comoving radial distance at z. W^i(z) represents the kernel including the contributions from both lensing and the intrinsic alignment, i.e., W^i(z)=W_G^i(z)+W_I^i(z). The kernel for shear is given by: W_G^i(z)=3/2Ω_m H_0^2/c^2χ(z)/a(z)∫_z^∞ d z^' n^i (z^' ) χ (z^' )-χ(z)/χ (z^' ), in which n^i(z) is the normalized redshift distributions, a(z) is the scale factor, H_0 is the Hubble constant, and c is the speed of light. For simplicity we assume a flat universe in this paper. For the IA kernel we adopt the Nonlinear Alignment Model (NLA) <cit.>: W_I^i(z)=-A_IAC_1ρ_cΩ_m/D(z)n^i(z) where A_IA is a free parameter to show the amplitude of IA. ρ_c is the critical density, D(z) is the normalized growth factor. The normalization constant C_1 is adjustable and can be set to 5×10^-14h^-2M_⊙^-1Mpc^3 to align with the observational findings<cit.>. 
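Given tomographic spectra C^{ij}(ℓ) built from these kernels, the relation above for ξ_± is a one-dimensional Bessel transform; a brute-force (non-FFTLog) sketch of that final step is:

```python
import numpy as np
from scipy.special import jv

def xi_pm(ell, c_ell, theta_arcmin, plus=True):
    """xi_+/-(theta) = 1/(2 pi) * integral of dl * l * C(l) * J_{0/4}(l*theta).

    ell, c_ell   : tabulated angular power spectrum C^{ij}(l)
    theta_arcmin : angular separations in arcmin
    """
    nu = 0 if plus else 4                            # J_0 for xi_+, J_4 for xi_-
    theta = np.radians(theta_arcmin / 60.0)          # arcmin -> radians
    integrand = ell * c_ell * jv(nu, np.outer(theta, ell))
    return np.trapz(integrand, ell, axis=1) / (2.0 * np.pi)
```

This transform only performs the final step; the spectra themselves, including the IA term normalized by the C_1 value quoted above, are computed with the theory code described below.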
In this case, the fiducial value of A_IA is 1. P_δ in eq.(<ref>) is the nonlinear matter power spectrum, which could be strongly affected by the baryonic effect. In our work, we use the baryonic correction model (BCM, <cit.>) which parameterizes the influence of gas and stars on the total matter density. There are two crucial parameters of the BCM model: the mass fraction of ejected gas (M_c) and the ejection radius (which depends on the parameter η_b). We choose their fiducial values to be: M_c=1.2M_⊙/h, η_b=0.5. These values are consistent with simulations and observations <cit.>. Overall, our theoretical predictions are calculated by the Core Cosmology Library (CCL, <cit.>), in which the nonlinear evolution is described by the halofit model <cit.>. §.§ Redshift Distribution In our tomography study, we use the photo-z catalog from <cit.>. For each redshift bin selected according to the photometric redshifts, we hope to get its true redshift distribution n^i(z) used in eq.(<ref>). It is related to the photo-z distribution f_p(z_p) through the following equation: n^i(z_s)=∫_z^i_min^z^i_maxP(z_s|z_p)f_p(z_p)dz_p in which z^i_min and z^i_max are the lower and upper bounds of the bin, f_p(z_p) is the overall photo-z distribution, and P(z_s|z_p) is the probability that a galaxy of z_p has a true redshift of z_s. Unfortunately, P(z_s|z_p) is not directly known to us. Its counterpart, P(z_p|z_s), is usually more directly accessible by studying the performance of photo-z reconstruction using simulations. They are related through the Bayes theorem: P(z_s|z_p)f_p(z_p) = P(z_p|z_s)f_s(z_s), where f_s(z_s) is the distribution of the true redshift, which is neither known. If we can get an approximate form of f_s(z_s), the form of P(z_s|z_p) can then be derived. For this purpose, we assume P(z_p|z_s) is a Gaussian function with a pre-determined scatter σ_z. Integrating both sides of eq.(<ref>) over all possible values of z_s, we get: f_p(z_p) = ∫ f_s(z_s)P(z_p|z_s)dz_s In principle, f_s(z_s) can be derived by inverting the convolution in eq.(<ref>). In practice, we find that the solution from such an inversion is not very stable. Assuming σ_z is small, the integration in eq.(<ref>) can be well approximated as: f_p(z_p) ≈ f_s(z_p)∫ P(z_p|z_s)dz_s=f_s(z_p)*g(z_p) To get the form of g(z), one can further convolve f_p(z) with the same Gaussian kernel P(z_p|z_s) to get f_c(z), and therefore g(z_p)≈ f_c(z)/f_p(z). Consequently, we get f_s(z)≈ f_p(z)/g(z)≈ f_p^2(z)/f_c(z). In the left panel of fig.<ref>, we show using simulations how well the f_s(z) derived using the above method can recover the true redshift distribution. The right panel of the same figure shows the recovered redshift distribution for the four photo-z bins. We have assumed that P(z_p|z_s) is a Gaussian kernel with σ_z= 0.03*(1+z). This is a reasonable choice according to both <cit.> and our another work (Li et al., in preparation). This assumption is used in our tomography study in the next section. We have checked that our main results do not change much if we assign σ_z= 0.05*(1+z) instead. §.§ Tomography Fig.<ref> shows the measured two-point correlation functions ξ^ij_±(θ) for all the pairs of redshift bins. The angular range is from 1 arcmin to 300 arcmin. For the cosmological constraint, we do not use the data points below 4 arcmin or above 100 arcmin. The blue points are the results of ξ_± without any calibration. 
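As an aside on the inputs to the theory curves shown in the same figures: the n^i(z) entering the lensing kernel are recovered with the approximation derived above, f_s(z) ≈ f_p^2(z)/f_c(z), which in code is only a few lines (the binning and normalization conventions here are ours):

```python
import numpy as np

def recover_nz(z, f_p, sigma0=0.03):
    """Approximate true-redshift distribution, f_s(z) ~ f_p(z)^2 / f_c(z),
    assuming a Gaussian P(z_p|z_s) of width sigma_z = sigma0*(1+z_s).

    z   : regular redshift grid;  f_p : photo-z distribution on that grid
    """
    dz = z[1] - z[0]
    sig = sigma0 * (1.0 + z)                          # kernel width at each true z
    # f_c(z_p) = int dz_s f_p(z_s) P(z_p|z_s): convolve f_p with the kernel
    kern = np.exp(-0.5 * ((z[:, None] - z[None, :]) / sig[None, :]) ** 2)
    kern /= np.sqrt(2.0 * np.pi) * sig[None, :]
    f_c = kern @ (f_p * dz)
    f_s = np.where(f_c > 0, f_p**2 / f_c, 0.0)
    return f_s / np.trapz(f_s, z)                     # normalize to unit area
```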
The orange data points are the results of ξ_± after we incorporate the corrections from TC, i.e., we multiply the shear estimator G of each galaxy by (1+m)^-1 from the TC according to its redshift bin. For clarity in the figure, we move the blue points slightly to the left of their original places. The error bars and the covariance matrix of the data points are estimated using the Jackknife method. We use the K-Means clustering method <cit.> to divide the galaxies into 200 groups. The red dashed line is the best-fit cosmological prediction to the orange data points (after calibration). The black dashed lines are calculated from the cosmological parameters of PLANCK <cit.>, with A_IA=1. For the cosmological constraint, we use the standard Markov-Chain Monte Carlo (MCMC) method with EMCEE<cit.> for cosmological parameter estimation. We treat A_IA of the intrinsic alignment as a free parameter. Our main results are shown in fig.<ref>. The blue contours and the red ones are for the best-fit cosmological parameters from the blue and orange data points in fig.<ref> respectively. Without correcting for the redshift-dependent bias, we get: S_8 = 0.760^+0.015_-0.017,Ω_m = 0.250^+0.052_-0.037, A_IA = 1.224^+0.203_-0.203. After we incorporate the corrections of the biases estimated from TC in each redshift bin, we get: S_8=0.777^+0.016_-0.019,Ω_m=0.291^+0.055_-0.048, A_IA=0.638^+0.234_-0.264. It is interesting and perhaps important to note that the S_8 value has about 1σ increase as a result of the calibration from TC. For a comparison, we show the results of 2*θξ_11 and 2*θξ_22 in fig.<ref>. The definitions of the blue and orange data points, as well as the black and red lines are the same as those in fig.<ref>. The cosmological constraint using ξ_11 and ξ_22 together is shown in the right panel of fig.<ref>. Similarly, we find that the calibration from TC leads to a 1σ increase in S_8. It is worth noting that the data quality of ξ_11 and ξ_22 is somewhat worse than that of ξ_+ and ξ_-. The cosmological constraints from ξ_11 or ξ_22 separately are shown in fig.<ref>, from which one can see that the quality of ξ_11 is even worse than ξ_22. Similar phenomenon has been reported in our another work <cit.>. In the DECaLS shear catalog, the first and second shear components are defined in the local coordinates along the directions of "RA" and "Dec", and the CCDs are always lined up with the same direction in the survey, the image quality therefore naturally inherits certain anisotropy from hardware imperfectness. This problem is mitigated by rotating the shear estimators in the measurement of ξ_+/-, yielding results of seemingly higher qualities. However, we shall instead take this as a caution for the existence of unresolved systematic issues in image processing. § CONCLUSION AND DISCUSSIONS Tele-correlation (TC), the correlation of the shear estimators of two galaxies separated by a large distance (≳ 100 degree), can be used to calibrate the multiplicative and additive biases in shear-shear correlations. In TC, the correlation signal comes from the field distortion (FD), which can be retained in the shear catalog if the shear estimators are associated with individual exposures. We demonstrate this idea with the DECaLS shear catalog produced by the Fourier_Quad pipeline. With all the distant galaxy pairs ( >140 degrees), we do not observe any bias in the correlation signal at the level of 10^-6. 
However, to our surprise, significant multiplicative biases can arise if the TC test is performed in individual photo-z bins. The reason is still not known to us. Using the conventional tomographic shear-shear correlations ξ_+/-, we place constraints on the cosmological parameters, and meanwhile study the impact of the biases (from TC) on the results. We use the galaxies in the redshift range of [0.2, 1.0], and the angular range from 4 to 100 arcmin for calculating the correlations. We choose the angular range conservatively to avoid the effects of some unknown physics at very small and large scales. In our cosmological model, we adopt the NLA model for the intrinsic alignment, and the BCM for the impact of baryons on the density power spectrum. Without the correction for the biases, we obtain S_8 = 0.760^+0.015_-0.017, Ω_m = 0.250^+0.052_-0.037 and A_IA = 1.224±0.203. After incorporating the multiplicative biases due to redshift binning, the cosmological constraints become: S_8=0.777^+0.016_-0.019, Ω_m=0.291^+0.055_-0.048, A_IA=0.638^+0.234_-0.264. There is about a 1σ increase in the best-fit value of S_8. In fig.<ref>, we summarize our constraints on S_8 from different types of shear-shear correlations, including ξ_ii (=<γ_iγ_i>, i=1 or 2), separately or jointly, with or without calibration from TC. In these cases, we see a large variation of the constraints, particularly those with ξ_11. This is an indication of the potential systematic biases in the shear catalog, likely due to the image quality, as discussed in <cit.>. We also present the S_8 constraints from the other lensing surveys, such as the fiducial ΛCDM-optimized analysis in DES Y3 <cit.>, the KiDS-1000 cosmology <cit.>, and the analyses of C_l and ξ_± in HSC Y3 <cit.>. The Planck results <cit.> with the baseline TT, TE, EE+LowE are also included. Our tomographic lensing constraints on S_8 are mostly consistent with those of the other lensing surveys, having about 2-3σ tension with the Planck results. Unfortunately, we do not yet have a good understanding of the significant multiplicative biases found in our TC test. It is likely a selection effect, because the galaxy redshift affects multiple image properties, including the apparent magnitude, size, and shape (after convolution with the PSF). The origin of these biases remains elusive, and we will investigate the underlying mechanisms in a future work. For now, to go a little further on this topic, we divide the galaxy sample into two groups based on the PSF size. We measure the tele-correlations for the first and second shear components (TC_11 and TC_22), and show the results in fig.<ref>. The black solid line in each panel is the 'y=x' line. The red lines are from the galaxies with smaller PSF size (FWHM_PSF < 1.4"), and the blue ones from those with larger PSF (FWHM_PSF > 1.4"). It is interesting to note that the multiplicative biases seem to decrease as the PSF size becomes smaller. This agrees with our intuition: shape measurement is more sensitive to the galaxy size/redshift when the spatial resolution of the image is poorer, i.e., when the PSF size is larger, thereby leading to a larger selection bias. This is good news for those surveys with small PSFs, such as HSC (Liu et al., in preparation). Some previous works (e.g., <cit.>) have used the FQ shear catalog of DECaLS to study a number of statistics, mainly in galaxy-galaxy lensing.
We realize that in those studies, since they only use the background galaxies with redshifts larger than a certain threshold (e.g., the lens redshift plus some given value), the selection in redshift necessarily causes some bias that needs to be corrected for. For example, we test the TC of the background sample within the redshift range of [0.4,∞]. The results are presented in fig.<ref>, which show a negative multiplicative bias of about 10%. Note that this result can also be obtained with the original FD test. The actual correction should be measured more carefully, as galaxies of different redshifts are usually weighted by the critical surface density in galaxy-galaxy lensing. The details will be discussed in our future work. On the other hand, it is encouraging to note that the FD test offers us a convenient onsite calibration of the shear biases, no matter whether the bias is caused by PSF inaccuracy, instrumental effects, or selection effects. To our knowledge, the only case in which the FD test may not work is the shear bias due to image blending, i.e., galaxy pairs that are close in angular space, but far apart in redshift space. This is because, in this case, the two galaxies cannot be treated as a single object in shear measurement, and the FD test cannot tell the two objects apart. This work is supported by the National Key Basic Research and Development Program of China (2023YFA1607800, 2023YFA1607802), the NSFC grants (11621303, 11890691, 12073017), and the science research grants from China Manned Space Project (No.CMS-CSST-2021-A01). The computations in this paper were run on the Siyuan cluster supported by the Center for High-Performance Computing at Shanghai Jiao Tong University.
http://arxiv.org/abs/2406.18358v1
20240626135927
Microscopic characteristics of SF6 partial discharge induced by a floating linear metal particle
[ "Zihao Feng", "Yuanyuan Jiang", "Liyang Zhang", "Zhigang Liu", "Kai Wang", "Xinxin Wang", "Xiaobing Zou", "Haiyun Luo", "Yangyang Fu" ]
physics.plasm-ph
[ "physics.plasm-ph", "physics.app-ph" ]
Department of Electrical Engineering, Tsinghua University, Beijing 100084, China Department of Electrical Engineering, Tsinghua University, Beijing 100084, China Department of Electrical Engineering, Tsinghua University, Beijing 100084, China Department of Electrical Engineering, Tsinghua University, Beijing 100084, China Tsinghua Shenzhen International Graduate School, Shenzhen, Guangdong 518055, China Department of Electrical Engineering, Tsinghua University, Beijing 100084, China Department of Electrical Engineering, Tsinghua University, Beijing 100084, China Department of Electrical Engineering, Tsinghua University, Beijing 100084, China Department of Electrical Engineering, Tsinghua University, Beijing 100084, China State Key Laboratory of Power System Operation and Control, Department of Electrical Engineering, Tsinghua University, Beijing 100084, China § ABSTRACT Direct current (DC) gas insulated transmission lines (GILs) have been widely used in power transmission, but might be threatened by partial discharge due to the presence of floating impurities (e.g., dust and metal particles) inside the sealed chamber. In this letter, by using a 2D fluid model we characterize the microscopic properties of the partial discharge induced by a floating linear metal particle in SF_6 (both the discharge propagation and the interaction between space charge and metal particle) under negative high voltage direct current (HVDC) conditions. Due to the strong electronegativity of SF_6, the spatiotemporal distributions of the charged species (electrons, positive and negative ions), space charge, and reduced electric field are rather different from those in air. Notably, a negative ion region is observed around the top tip of the metal particle, and it plays an important role in the generation and propagation of primary and secondary streamers in SF_6, which may lead to severe motion characteristics of the particle and aliasing of partial discharge signals. Additionally, we analyze the charging process and the electric force reversal phenomenon, which may provide a more precise understanding of the underlying mechanisms of the firefly motion previously reported for DC GILs. Microscopic characteristics of SF_6 partial discharge induced by a floating linear metal particle Yangyang Fu* Received 7 March 2024 / Accepted 23 May 2024 ================================================================================================= Direct current (DC) gas insulated transmission lines (GILs) possess unique advantages in urban underground power transmission and underwater power transmission applications <cit.>. In practice, SF_6 partial discharge is usually induced due to the presence of metal particles, and existing particle detection and suppression measures have been proven ineffective for DC GILs <cit.>. Although a considerable number of experimental results have indicated that the complicated partial discharge induced by metal particles is the main cause of the above issue <cit.>, the microscopic characteristics of the related discharge processes and the space charge effect on particles remain unclear. Gas discharge simulations can resolve the spatiotemporal microscopic characteristics of the discharge <cit.>. Chen et al. <cit.> simulated metal particle-induced breakdown within a 200 μm microgap. Sun et al. <cit.> and Zhong et al. <cit.> simulated the impact of the metal particle's field electron emission on breakdown. However, the discharge gas used in their models is different from that used in GILs.
To date, little numerical simulation has been performed on high-pressure pure SF_6 <cit.> due to the difficulty of numerical calculations for strongly electronegative gases, and existing results have been reported only for single-gap electrodes, such as electrode protrusions and parallel plates. However, models that do not include the particle structure cannot capture the multistage discharge process or the interaction between space charge and metal particles that occur in the combined gas gap formed by floating particles in GILs. In particular, under negative high voltage, linear metal particles often exhibit a firefly motion and significantly reduce the efficiency of diagnosis and suppression <cit.>. One key determinant of firefly motion is the reversal of the electric field force affected by space charge. Chang et al. <cit.> used the point charge model to analyze this force and reported that the polarity of the overall charge Q of a metal particle reverses when firefly motion occurs. However, the surface charge density of linear particles is highly nonuniform and directly determined by the transient interaction between space charge and metal particle, which should be further investigated. To date, a precise understanding of the microscopic characteristics of SF_6 discharge induced by floating metal particles is still lacking. In this letter, we report the microscopic discharge characteristics (both the discharge propagation and the interaction between space charge and metal particle) of SF_6 partial discharge induced by a linear metal particle in a coaxial cylinder electrode and reveal the physical mechanism of the metal particle's electric field force reversal phenomenon, which usually occurs under negative DC high voltage and largely affects the firefly motion. A 2D fluid simulation model is established, including plasma fluid equations <cit.> and 18 dominant plasma chemical reactions <cit.>. The critical reduced electric field (E/N)_cr for SF_6 effective ionization calculated by BOLSIG+ <cit.> is approximately 360 Td, which is in good agreement with the measured results of Christophorou et al. <cit.> and Morrow <cit.>, validating the rationality of the reaction system. For the boundary conditions of the floating metal particle, the current continuity equation ∂σ_s/∂ t=𝐧·𝐉_i+𝐧·𝐉_e is used to represent the effect of the plasma on the particle charge, where σ_s is the surface charge density and 𝐧·𝐉_i and 𝐧·𝐉_e represent the normal components of the total ion current density and the total electron current density on the particle surface, respectively. An equipotential condition V ≡ constant is set on the metal particle surface, with V being time-dependent, where V is the floating potential of the particle. Then, an integral boundary condition is set to control the overall charge Q of the metal particle: ∫_S𝐧·𝐃dS=Q where 𝐧·𝐃 represents the normal component of the electric displacement on the particle surface. The above settings ensure that the electric field on the metal particle surface is normal to the surface and that the entire charge on the metal particle is distributed on the surface. The distribution of surface charge satisfies the electrostatic induction conditions of the metal, which ensures that the metal particle is in a state of electrostatic equilibrium at all times. A schematic of the discharge system is shown in Fig.<ref>. Here, for simplicity, a 2D model is established to describe the circular cross-section of the coaxial cylinder electrode.
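To make the floating-conductor charging condition above concrete, here is a minimal explicit-update sketch. It is not the paper's solver; the boundary-element arrays, element sizes and time step are hypothetical placeholders, and only the initial charge value is taken from the setup described in this letter.

```python
import numpy as np

def update_particle_charge(sigma_s, J_i_n, J_e_n, ds, dt):
    """One explicit step of the surface-charging condition
    d(sigma_s)/dt = n·J_i + n·J_e on the floating particle, followed by
    re-integration of the overall charge Q (cf. the constraint ∮ n·D dS = Q)."""
    sigma_s = sigma_s + dt * (J_i_n + J_e_n)   # current continuity on the surface
    Q = np.sum(sigma_s * ds)                   # overall charge carried by the particle
    return sigma_s, Q

# toy usage: 200 boundary elements, initial charge spread uniformly
n_elem = 200
ds = np.full(n_elem, 1.0e-5)                   # element sizes (hypothetical)
Q0 = -113.65e-12                               # initial particle charge used in the paper [C]
sigma_s = np.full(n_elem, Q0 / ds.sum())
sigma_s, Q = update_particle_charge(sigma_s,
                                    J_i_n=np.zeros(n_elem),
                                    J_e_n=np.zeros(n_elem),
                                    ds=ds, dt=1e-12)
```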
The 2D description generally underestimates the local field enhancement (the field strength around protrusion), which thus cannot quantitatively capture the 3D characteristics of the partial discharge (e.g., transport of the charged species) but can provide a qualitative prediction of the general discharge mechanisms. For simplicity, we use a reduced model similar to that used in many experimental studies <cit.>, it is important to acknowledge that the size of the particle is disproportionately enlarged relative to the entire space; nonetheless, the underlying physical mechanisms captured by the reduced model are, essentially, consistent with those derived from the normal-sized model. The inner radius r is 10 mm and the outer radius R is 30 mm. The radius ratio r/R is close to the optimum value of 1/2.718 <cit.>, which reduces the nonuniformity of the electric field distribution. The linear particle length L is 2 mm, and the particle diameter a is 0.2 mm. Since the timescale of discharge (∼ ns) is much shorter than that of particle movement (∼ ms), the particle is assumed to be floating in the gap. It locates near the central electrode and the entire gap is divided into two parts, with an upper gap distance d of 1 mm between the particle top tip and central electrode. This distance is close to the experimental condition of firefly motion reported in Ref. <cit.>. The gas pressure p is set to atmospheric, and thus this pd range qualitatively ensures the same streamer discharge process as that of higher pressure <cit.>. According to existing experimental results <cit.>, it can be inferred that -25 kV is between the particle lifting voltage and the breakdown voltage. As a result, the central electrode is set as a high voltage direct current (HVDC) electrode at a constant -25 kV, and the initial charge of particle Q_0 is set to -113.65 pC, which is calculated using formula <cit.>: Q_0=πε L^2 E_r/ln(2 L/a)-1 where ε is the permittivity of SF_6, and E_r is the electric field on the HVDC electrode surface. These settings ensure that the simulation represents the condition when the metal particle moves near the HVDC electrode after colliding with it. The electric field is calculated self-consistently based on the Poisson's equation, and the charge transport is described by the fluid equation. The discharge is initiated wherever the electric field reaches a prebreakdown threshold. Here, the partial discharge starts from both ends of the particle, where the threshold is satisfied as shown in Fig.<ref>. The lower corona characteristics are similar to those reported by Gao et al. <cit.>, but the upper streamer characteristics are more complicated. Therefore, this study primarily focuses on analyzing the characteristics of upper streamer discharge. In contrast to the single SF_6 streamer induced by a fixed electrode <cit.>, the upper streamer induced by the floating particle here includes three stages, which might cause aliasing of the diagnosis signal. Stage I represents the primary streamer stage, corresponding to Figs.<ref>(a)–<ref>(c), Stage II represents the secondary streamer stage including an upward and a downward secondary streamer, corresponding to Figs.<ref>(d)–<ref>(e), and Stage III represents the streamer extinction stage, corresponding to Fig.<ref>(f). During Stage I, since the initial floating potential of the metal particle is -35 kV, the upper streamer is similar to the negative streamer, thus there are two electron peaks positioned at the head and tail of the streamer. 
Another notable characteristic in the streamer channel is the presence of an electron-deficient region, which is more obvious than that in air <cit.> and SF_6 gas mixtures <cit.>. This is because the positive ion region and the head electron peak together shield the channel field, making it lower than (E/N)_cr, subsequently, electrons undergo attachment and are converted into negative ions due to the strong electronegativity of SF_6, as shown in Fig.<ref>(d)–<ref>(f). Furthermore, we observe the presence of the negative net charge near the particle top tip as shown in Figs.<ref>(b)–<ref>(e), which is essentially a negative ion region and is dominated by a positive feedback mechanism. Specifically, when the tail electrons migrate into the streamer channel, they are rapidly attached due to the above low channel field and form a negative ion region outside the tail electron peak. Then the lowest field region is formed between the positive and negative ion regions, which enhances the attachment reactions there. Consequently, a positive feedback mechanism is formed, which makes it difficult for the tail electrons to escape from the lowest field region. Eventually, the tail electrons are converted to negative ions, causing the negative ion region to expand toward the top tip of the particle. The negative ion region mentioned above plays a significant role in the recovery of the primary streamer channel field in SF_6 as shown in Fig.<ref>(h)–<ref>(i), which is different from that in air <cit.>. Although this field recovery phenomenon in SF_6 has been reported in previous studies <cit.>, the underlying mechanism remains unclear. Here, we report a synergistic effect of the negative ion region and head electron peak, which together determine the recovery of the channel field. On one hand, the negative ion region enhances the electric field in the streamer channel, on the other hand, as the head electron peak propagates forward, the shielding effect on the channel field away from it weakens. These two factors work together and lead to the field recovery phenomenon. The field recovery in Stage I is the immediate cause of the upward secondary streamer in Stage II. As shown in Fig.<ref>(j), when the head electron peak of the primary streamer reaches the HVDC electrode, the field recovers to a level higher than (E/N)_cr. As the upward secondary ionizing wave propagates forward, the field inside the secondary channel is once again shielded below (E/N)_cr as shown in Fig.<ref>(k). However, the channel field of the secondary streamer in the air has been reported to remain above its critical value <cit.>. This is because the mechanisms of the secondary streamer between SF_6 and air are fundamentally different. Specifically, the secondary air streamer noted in the literature occurred either after breakdown or at the rising edge of the pulse voltage, which could introduce a decreasing gas density N caused by intense heating after primary penetration <cit.>, or operate under an increasing applied electric field <cit.>. However, the above two factors have little effect here, since the particle-induced partial discharge would not cause the connection of the two poles of the power supply, and the change of gas density N can be ignored. Additionally, the effect of attachment instability which plays a key role in secondary air streamer is relatively small in SF_6 <cit.>. Therefore, here the secondary streamer is essentially induced by negative ion accumulation due to the strong electronegativity of SF_6. 
Another notable characteristic in Stage II is the increasing positive ion density on both sides of the primary residual channel after the primary streamer head contacts the HVDC electrode, as shown in Fig.<ref>(c). Since the time scale of Stage II is too short for ion migration to occur, the positive ions arise from ionization rather than from migration, which indicates the existence of a downward secondary streamer as shown in Figs.<ref>(d)–<ref>(e). The generation of the downward secondary streamer is mainly driven by the residual positive ions of the primary streamer near the HVDC electrode. Specifically, after the primary streamer head contacts the HVDC electrode, the electrons in the primary channel are rapidly conducted into the electrode and the residual positive ions are deposited near the electrode, as shown in Fig.<ref>(d). However, both sides of the channel have not yet been reached by this process and still maintain the net negative charge characteristic of the negative streamer. Consequently, two ionization regions are formed between the residual positive ions and the net negative charges on both sides. Therefore, the downward secondary streamer develops along both sides of the primary streamer channel rather than along the residual primary channel. The upward and downward streamers inevitably interact with each other <cit.>, causing them to converge and complete the penetration of the secondary streamer stage. During Stage III, the field in the entire channel is reduced below (E/N)_cr as shown in Fig.<ref>(l), representing the end of ionization and the beginning of space charge transport, in which the interaction between space charge and metal particle constitutes the pivotal process, exerting a significant influence on the electric field force acting on the particle. As shown in Fig.<ref>(d), the upper and lower discharges result in conduction currents with amplitudes of 10 mA and 1 mA at 6 ns and 20 ns, respectively. The above currents mainly arise from the migration of positive ions, leading to an increase in Q to -40 pC, as shown in Fig.<ref>(a). As charging progresses, |σ_s| tends to decrease overall, as shown in Figs.<ref>(b)–<ref>(c), but it is noteworthy that during the transient process of current initiation, a larger |σ_s| is generated at each end, which is related to the strong vertical electric field E_y on the surface, as shown in Fig.<ref>(d). Specifically, the metal particle requires the local induction of more negative charges to achieve electrostatic balance as positive ions approach it. This electrostatic induction phenomenon directly results in two peaks in the electric field force acting on the metal particle, as shown in Fig.<ref>(e). However, after 20 ns, the vertical electric field force F_y shifts from a repulsive force to a vertically upward attractive force. Notably, although σ_s and Q are both negative throughout the process, the particle can still experience an attractive F_y toward the high voltage electrode in the negative voltage environment. This is because the direction of the vertical F_y is not determined by Q, but by σ_s·E_y at the micro level, which is expressed as F_y = ∬_S σ_s·E_y dS. The signs of σ_s·E_y at the upper and lower ends of the particle are opposite, resulting in competition between the two ends in determining the vertical force acting on the metal particle, as shown in Fig.<ref>(e). In general, the space charge is crucial for the force on the particle.
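To illustrate the sign competition just described, here is a toy calculation (not the paper's simulation; all numbers are hypothetical and chosen only to show the effect): the vertical force follows from summing σ_s·E_y over surface elements, so its direction is set by the two ends rather than by the sign of Q.

```python
import numpy as np

def vertical_force(sigma_s, E_y, ds):
    """Discrete version of F_y = ∬_S sigma_s * E_y dS."""
    return np.sum(sigma_s * E_y * ds)

# two-element toy model of the particle ends (hypothetical values):
# upper end: sigma_s < 0 and E_y < 0 -> positive (upward) contribution
# lower end: sigma_s < 0 and E_y > 0 -> negative (downward) contribution
ds      = np.array([1.0e-8, 1.0e-8])     # element areas [m^2]
sigma_s = np.array([-5.0e-4, -1.0e-4])   # surface charge density [C/m^2], both negative
E_y     = np.array([-4.0e7, 1.0e7])      # vertical field [V/m]
F_y = vertical_force(sigma_s, E_y, ds)
print(F_y > 0)                           # True: net upward attraction even though Q < 0
```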
This finding is an important supplement to the existing experimental results <cit.> and may offer a more precise understanding of the underlying mechanisms of the previously reported firefly phenomenon in GIL systems <cit.>. In summary, the results from this work provide an indepth understanding of the SF_6 partial discharge with the presence of the metal particle, which may further offer a reference for addressing issues related to partial discharge signal aliasing and severe particle motion in DC GILs. The microscopic characteristics of multistage SF_6 discharge are notably distinct from those in air, which may lead to the aliasing of partial discharge signals. Due to the strong electronegativity of SF_6, a negative ion region is formed around the top tip of the metal particle, which is dominated by a positive feedback mechanism. In addition, an electric field recovery phenomenon dominated by the synergistic effect of the negative ion region and the head electron peak is reported. The subsequent upward secondary streamer is dominated by this field recovery and the downward secondary streamer is dominated by the residual space charge. Additionally, we analyze the particle charging process and the reversal of the vertical electric force, which is the dominant factor of firefly motion. The approach in this letter could have a broad application in the study of particle-induced discharge mechanisms. Also note that although the established model can qualitatively describe the general physics, the kinetic characteristics of the charged species (e.g., electrons and ions) and the nonlocal discharge behaviors may not be precisely described by the present fluid model, especially during the initial stage of the discharge, which require further investigations through fully kinetic simulations (e.g., particle model or hybrid model <cit.>). * § § ACKNOWLEDGMENT The authors gratefully acknowledge the funding support from the National Natural Science Foundation of China (Contract No. 52277154). The authors thank Dr. Caomingzhe Si for fruitful discussions. The authors express sincere gratitude to the editors and reviewers for their constructive comments and suggestions, which have greatly contributed to the overall improvement of this manuscript. § AUTHOR DECLARATIONS § CONFLICT OF INTEREST The authors have no conflicts to disclose. § DATA AVAILABILITY The data that support the findings of this study are available from the corresponding author upon reasonable request.
http://arxiv.org/abs/2406.18133v1
20240626073510
ConvoCache: Smart Re-Use of Chatbot Responses
[ "Conor Atkins", "Ian Wood", "Mohamed Ali Kaafar", "Hassan Asghar", "Nardine Basta", "Michal Kepkowski" ]
cs.CL
[ "cs.CL" ]
§ ABSTRACT We present ConvoCache, a conversational caching system that solves the problem of slow and expensive generative AI models in spoken chatbots. ConvoCache finds a semantically similar prompt in the past and reuses the response. In this paper we evaluate ConvoCache on the DailyDialog dataset. We find that ConvoCache can apply a UniEval coherence threshold of 90% and respond to 89% of prompts using the cache with an average latency of 214ms, replacing LLM generation and voice synthesis that can take over 1s. To further reduce latency we test prefetching and find limited usefulness. Prefetching with 80% of a request leads to a 63% hit rate, and a drop in overall coherence. ConvoCache can be used with any chatbot to reduce costs by reducing usage of generative AI by up to 89%. § INTRODUCTION A significant problem in providing spoken chatbot systems is their cost and latency. While recent large language models (LLMs) and voice synthesis have become more realistic, they have also become slower and more expensive. Using GPT-4-turbo and ElevenLabs, for example, can cost around $0.01 USD per utterance of 10 words[Assuming 150 tokens of prompt+dialogue history, using public price as of 3rd March 2024.], which will scale with every user and every utterance. The user-perceived latency of such a service can be 1–2s, based on systems developed by the authors, while research shows humans prefer a 200–500ms delay with a limit around 1.1s <cit.>. Humans typically take 100–350ms to respond <cit.>. Slow responses therefore limit the believability of spoken chatbots. We propose ConvoCache to reduce the cost and latency of chatbot systems. ConvoCache can rapidly reuse responses from past conversations for most requests (Figure <ref>). Not all conversations will fit responses that have been generated before. ConvoCache generates multiple response candidates which are evaluated using a fast automatic dialogue evaluation model such as UniEval <cit.>. If there are no good responses available (a cache-miss), then a new response will be generated and saved to our cache. A turn-taking filler word like "um" could be used to reduce the perceived delay naturally during a cache-miss <cit.>. While the dialogue quality may be impacted by caching, we find in this paper that the impact is minor. We promote the use of ConvoCache in applications that tolerate lower accuracy, such as generic chit chat and small talk, especially low-latency voice chatbots. Creating a phone call chatbot that is as believable as a human requires low latency and very realistic voice synthesis—which can be slow. Our approach makes this possible and makes deployment at scale cheaper per user. At Apate.AI, we deploy such a chatbot to talk to scammers. Believability, latency and cost are more important than conversation quality. A customer service chatbot would be a similar use case but with more emphasis on accuracy. To test our conversational caching concept, we focus on the textual dialogue coherence as this relates to a response fitting the conversation <cit.>. The voice in our system is unchanged as response audio is saved in the cache. We simulate our system responding to the chit chat dialogue set DailyDialog <cit.>. We report the hit/miss rate of our cache given a 90% UniEval coherence threshold, our system latency and G-Eval scores of the dialogue produced.
We make the following contributions: * We propose ConvoCache, a conversational caching system that implements dialogue evaluation to control for quality. Our source code is available online[https://github.com/RoshanStacker/ConvoCachehttps://github.com/RoshanStacker/ConvoCache]. * We tested our system with several encoder models and encoding techniques. We provide a benchmark of ConvoCache with G-Eval scores <cit.>. * We measure the impact of prefetching on the hit rate and response quality of ConvoCache. § BACKGROUND §.§ Related work Caching responses has been investigated before. In cloud computing literature, ChatCache <cit.> observed semantic redundancy in practical conversations and built a similar system using voice and textual encoder models. The focus of ChatCache was on voice assistants responding to commands and questions on low power edge computers. They showed that textual and spoken semantic models are both able to find this semantic redundancy. Other work <cit.> focuses on question answering and using a cache instead of a chatbot, showing significant improvements to latency. Our work is different as it focuses on generic chit chat conversations which have more room for imperfect responses, and have important context that we use in our embeddings. Following these previous works, we have access to more effective semantic similarity models in SimCSE <cit.> and AnglE <cit.>, and we evaluate the coherence of the dialogue generated by the cache, not just the hit rate. The end of speech can be quickly detected using voice activity, but speech transcription into text has a longer delay before the whole utterance is transcribed. This impacts the delay of responses. Previous work has focused on reducing this delay <cit.>, while others have implemented prefetching to generate a response before the end of speech <cit.>. We experiment with incomplete utterances to simulate prefetching as a way to further reduce the latency of ConvoCache. §.§ Sentence encoders We investigate the use of two encoders that have been effective in semantic textual similarity (STS) tasks; SimCSE <cit.> and AnglE <cit.>. These models are designed to encode a single phrase and cluster semantically similar phrases based on cosine similarity. SimCSE <cit.> uses a RoBERTa <cit.> language model and contrastive learning to train semantic similarity. This produces embeddings of 1024 dimensions. AnglE <cit.> introduces angle optimisation which avoids problematic areas of the cosine similarity curve with gradients close to zero. The authors report that this helps AnglE learn subtle semantic differences, and show improvement over SimCSE making AnglE a new state of the art[https://paperswithcode.com/task/semantic-textual-similarity]. AnglE uses a Llama 2 <cit.> language model which provides a significant performance gain over RoBERTa, producing embeddings of 4096 dimensions. AnglE outperforms SimCSE slightly when both are using a Llama 2 model <cit.>. Llama 2 is slower than RoBERTa, so we use SimCSE with RoBERTa[https://huggingface.co/princeton-nlp/sup-simcse-roberta-largehttps://huggingface.co/princeton-nlp/sup-simcse-roberta-large] and AnglE with Llama 2 7B[https://huggingface.co/SeanLee97/angle-llama-7b-nli-20231027https://huggingface.co/SeanLee97/angle-llama-7b-nli-20231027] to compare the encoders and language models. §.§ Automatic dialogue evaluation Conversations often allow for a wide range of valid responses, especially chit chat conversations. 
This is known as the one-to-many problem <cit.>, and it makes it difficult to evaluate dialogue with automated systems. This has led to the trend of using larger language models to perform reference-free evaluation <cit.>. Recent advancements make use of multi-dimension evaluation <cit.>. These evaluate dialogue responses on dimensions such as engagingness, naturalness, groundedness, understandability and coherence, then combine these scores for one measure of quality. In this work we focus only on the coherence evaluation for two reasons. Firstly, coherence represents how well a response fits into a conversation and the context. Since we reuse a response from a different context, this is very important. Secondly, the other dimensions are not applicable to our experiments as they measure how well a response includes external information, or how fluent the response is without taking into account context. Since all of our responses are taken from a fluent dataset, they would all score the same regardless of our cache. UniEval <cit.> implements boolean questions and a T5 <cit.> language model with continual training on each evaluation dimension. Since UniEval does not use an LLM, it is much faster compared to more recent models, while maintaining similar results for coherence. We use the coherence question from the paper <cit.>: "question: Is this a coherent response given the dialogue history?" G-Eval <cit.> uses the GPT-3.5 or GPT-4 LLM with a detailed prompt for each dimension of evaluation. LLMs have been shown to be effective evaluators <cit.>. G-Eval makes use of multiple response candidates, and their probability, to generate a more fine-grained score by calculating the weighted sum of the candidates. We were unable to obtain the original coherence prompt for dialogue from the authors, so we created our own coherence prompt using the template of the dialogue engagingness prompt from G-Eval <cit.>. We provide this prompt in our code repository alongside our evaluation code. § CONVOCACHE SYSTEM DESIGN To respond to a conversation with dialogue history D of n utterances, ConvoCache first generates a conversation embedding 𝐬. To do this, we encode each utterance U_i in the dialogue history using a model such as AnglE or SimCSE <cit.>, then calculate the weighted sum of these utterance embeddings to get the conversation embedding. Our weights are given by exponential decay e^-λ i where i=1 is the last utterance, i=2 is the 2nd last utterance, etc. The weights are normalised to ensure they sum to 1. Other conversation encoding methods could be used to generate 𝐬 from D. With the conversation embedding 𝐬, we search our cache to find the top k similar conversations and retrieve the corresponding response candidates (R_1, …, R_k). We use cosine similarity in the FAISS package <cit.>. For larger caches, FAISS provides approximate search methods that are faster. We use an exhaustive search in our experiments as our cache is small. Simply using the first response candidate R_1 generates mediocre dialogue. We can improve this by filtering response candidates. One method of filtering would be to use the similarity score of the cache results (inner product) as a proxy for quality and apply a threshold. While this is fast, we found that it is not very effective for our embeddings. Instead, we evaluate each candidate response (we use UniEval <cit.>) in order of similarity until a response scores higher than a given threshold t.
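A minimal sketch of the lookup loop just described (the encoder and evaluator are placeholders and the function names are assumptions, not the authors' code), combining the exponentially decayed conversation embedding, a FAISS inner-product search, and the coherence threshold t:

```python
import numpy as np
import faiss

def conversation_embedding(utterance_embs, lam=0.5):
    """Weighted sum of utterance embeddings (ordered oldest -> newest);
    the last utterance (i=1) gets weight e^{-lam*1}, the second last e^{-lam*2},
    etc., with weights normalised to sum to 1."""
    n = len(utterance_embs)
    w = np.exp(-lam * np.arange(1, n + 1))[::-1]   # smallest weight for the oldest utterance
    w /= w.sum()
    s = (w[:, None] * np.asarray(utterance_embs)).sum(axis=0)
    return s / np.linalg.norm(s)                   # unit norm -> inner product == cosine

def respond(history_embs, index, responses, evaluate, k=5, t=0.9):
    """Return a cached response whose coherence passes threshold t, else None (cache miss)."""
    q = conversation_embedding(history_embs).astype("float32")[None, :]
    _, idx = index.search(q, k)                    # e.g. faiss.IndexFlatIP over unit vectors
    for j in idx[0]:
        r = responses[j]
        if evaluate(r) >= t:                       # e.g. a UniEval coherence score
            return r
    return None                                    # generate a new response and add it to the cache
```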
Controlling this threshold provides a way to adapt the system and control costs through the cache-hit rate. UniEval is the slowest part of the system, taking around 100ms per response evaluated (see Table <ref>), motivating the use of the first response to pass the threshold rather than re-ranking all response candidates. If a model could evaluate all candidate responses in parallel, then re-ranking should improve the system quality and latency further. If we evaluate all candidate responses and fail to find one above the threshold, we consider this a cache-miss. A new response R_new will have to be generated using a chatbot and voice synthesis. We can make use of fillers such as "um" to disguise the delay of response generation during cache-misses, since research shows this is still natural within 1.1s <cit.>, while our delay is 510ms. Using fillers only during cache-misses (11% of responses) avoids using them for every response, which would be unnatural. Every new response generated will be saved to the cache with the conversation embedding 𝐬. This allows the cache to grow over time and fill in gaps; as such, the cache-hit rate would be expected to improve over time. A limit could be placed on the size of the cache, with a strategy to remove items that have the lowest usage. We do not investigate a dynamic cache in our experiments; instead, we load our cache with the train portion of the dialogue dataset with 76,052 responses. § METHODOLOGY To assess the impact that ConvoCache has on dialogue quality, we simulate the system responding to the DailyDialog dataset <cit.>, which contains chit chat conversations in English. All tests used an RTX A4000 GPU with 16GB of VRAM, and we present computation time in Table <ref>. Our experiments follow these overall steps: * Establish dataset train and test splits. * Seed the cache with utterances from the train split. * Use the cache to respond to the test split. * Evaluate the generated test conversations. The DailyDialog dataset <cit.> provides many multi-turn conversations in English (average 7.9 turns). We first take each turn/utterance and create a prompt-response pair representing the dialogue history before this response, and the response. A conversation of 3 utterances will generate 2 prompt-response pairs since the first utterance has no prompt, while the others do. Using the provided train and test splits containing 11,118 and 1,000 conversations, we collected 76,052 prompt-response pairs for the train split, and 6,740 for the test split. To seed the cache, we follow the method described in Section <ref> to encode the prompts from the train split and save them to the FAISS index <cit.> alongside the response. We respond to prompts in the test split using the same method as above. We find the top 5 response candidates (k=5). We experiment with different values of the exponential weight decay λ. To evaluate the coherence of the response candidates, we use UniEval <cit.> and apply a coherence threshold t. G-Eval <cit.> is used to evaluate the responses chosen by the system for overall coherence. § RESULTS §.§ Cache hit rate and coherence With a coherence threshold of t=0.9, we report our hit rates for the various ranks of response candidates in Table <ref>. We find optimal miss rates of 10.31% and 11.22% for AnglE and SimCSE respectively for λ=0.5. ConvoCache, as described in Section <ref>, applies this UniEval coherence threshold to each response candidate in order, meaning that the hit rate of the first response candidate is the fastest possible result.
We find that the first candidate is used 56.51–57.72% of the time with a processing time of 110–148ms depending on the encoder used. Figure <ref> shows the coherence of responses from ConvoCache, using SimCSE and λ = 0.5, compared to reference responses and randomly selected train responses. We see a similar shape to the reference with a small drop in coherence, but significantly better than random responses. We also see that GPT-4 scores higher than GPT-3.5 on average. §.§ Optimising embedding weights In our proposed approach (Section <ref>) we define a parameter for the exponential decay of weights λ. Figure <ref> presents values of λ showing that λ=0.5 and λ = 0.75 are best. §.§ Models and hardware We find that AnglE performs slightly better than SimCSE in all our tests, generating more responses with higher coherence. Table <ref> and Figure <ref> show this consistent gap in performance, however this difference in performance is small and both models could be used. AnglE requires 15 GiB of VRAM compared to SimCSE which only requires 2.3 GiB, 15% of AnglE. Table <ref> shows that when using ConvoCache with 76,052 cached responses, SimCSE requires a total of 9.6 GiB while AnglE requires a total of 23.2 GiB. This difference is significant as ConvoCache with SimCSE can realistically run on a 12 or 16 GiB GPU while AnglE requires a 24 GiB GPU or multiple smaller GPUs. This impacts the costs of running the system and likely outweighs the performance gain of AnglE discussed earlier. Encoding with SimCSE is 4.4× faster than AnglE and searching with FAISS is faster due to using 1024 dimension vectors compared to AnglE's 4096 dimensions. However, this latency difference only has a small impact on the total latency of the system since UniEval is the slowest model, and UniEval executes up to 5 times. We calculate the average latency given the model latencies in Table <ref> and the proportion of responses that used each response candidate, and include this in Table <ref>. We calculate a cache-miss as the same latency as evaluating the 5th candidate. In practice, a cache-miss will require response generation leading to a higher delay in response time. §.§ Prefetching To reduce latency further in spoken dialogue system, it is possible to start generating a response while the user is still speaking. This is known as prefetching <cit.>. We test this by truncating the last utterance in our prompts for encoding and evaluation with UniEval to apply the threshold. Table <ref> shows a substantial drop off in hit rate when using incomplete utterances (split < 100%). This shows that UniEval using an incomplete dialogue history will give low coherence scores more often. Table <ref> presents G-Eval scores of the responses (using complete dialogue history) and shows a small drop for GPT-3.5 but a significant drop in coherence measured by GPT-4. § LIMITATIONS We make use of automated dialogue evaluation systems including UniEval <cit.> and G-Eval <cit.>. These systems are not perfect evaluators and only achieve a mediocre Spearman's correlation with human evaluators of 0.6 on the Topical-Chat benchmark <cit.>. As such, we provide context to the evaluation scores in Figure <ref> and compare to evaluations of reference (good) and random (bad) responses. We also make use of GPT-3.5 and GPT-4, which presented diverging results for prefetching in Table <ref>. Latency measurements in this work do not include the delay of Automatic Speech Recognition (ASR), which is required for voice chatbots. 
ASR can increase the user-perceived latency by 200ms or more. Other work can reduce this latency <cit.>. § CONCLUSION ConvoCache allows responses to be reused between conversations, and we have shown the effectiveness of this in chit chat dialogue. This system can be integrated with existing generative AI chatbots and provide fast responses while maintaining quality and reducing costs for services with a large number of requests. On the DailyDialog dataset, ConvoCache achieves an 89% hit rate using SimCSE and responds in 110–505ms, averaging 214ms. The AnglE encoder achieves a 90% hit rate, but is unfavourable due to the larger hardware requirements, likely needing 2 GPUs. All responses achieve a UniEval coherence score above 90%, and evaluating our ConvoCache dialogues with G-Eval using GPT-4 gave an average score of 3.8/5 (76%). The use of prefetching to reduce latency further was shown to have limited effectiveness, as the hit rate and coherence dropped substantially. We hope that future developments in fast evaluation models and dialogue encoders specific to this task improve the performance further. § ACKNOWLEDGEMENTS This research was conducted under the support of Australian Government National Infrastructure Defence and Industrial Research Grant (NISDRG) NI220100105. This research was conducted in collaboration with Apate.AI, Defeating Phone Scams with Conversational AI.
http://arxiv.org/abs/2406.18880v1
20240627042159
SSP: Self-Supervised Prompting for Cross-Lingual Transfer to Low-Resource Languages using Large Language Models
[ "Vipul Rathore", "Aniruddha Deb", "Ankish Chandresh", "Parag Singla", "Mausam" ]
cs.CL
[ "cs.CL" ]
§ ABSTRACT Recently, very large language models (LLMs) have shown exceptional performance on several English NLP tasks with just in-context learning (ICL), but their utility in other languages is still underexplored. We investigate their effectiveness for NLP tasks in low-resource languages (LRLs), especially in the setting of zero-labelled cross-lingual transfer (0-CLT), where no labelled training data for the target language is available – however, training data from one or more related medium-resource languages (MRLs) is utilized, alongside the available unlabeled test data for the target language. We introduce Self-Supervised Prompting (SSP), a novel ICL approach tailored for the 0-CLT setting. SSP is based on the key observation that LLMs output more accurate labels if in-context exemplars are from the target language (even if their labels are slightly noisy). To operationalize this, since target language training data is not available in 0-CLT, SSP operates in two stages. In Stage I, using source MRL training data, the target language's test data is noisily labeled. In Stage II, these noisy test data points are used as exemplars in ICL for further improved labelling. Additionally, our implementation of SSP uses a novel Integer Linear Programming (ILP)-based exemplar selection that balances similarity, prediction confidence (when available) and label coverage. Experiments on three tasks and eleven LRLs (from three regions) demonstrate that SSP strongly outperforms existing SOTA fine-tuned and prompting-based baselines in the 0-CLT setup. § INTRODUCTION Very large language models (LLMs) such as GPT-3.5-Turbo & GPT-4 <cit.> show exceptional performance on a variety of NLP and reasoning tasks via In-Context Learning (ICL) <cit.>. ICL feeds a task-specific instruction along with a few exemplars, appended with the test input, to the LLM. As LLMs can be highly sensitive to exemplars <cit.>, exemplar retrieval is crucial for ICL. While LLMs have shown excellent performance on English tasks, their utility in other languages is relatively underexplored. In this work, we study zero-labelled cross-lingual transfer (0-CLT) to low-resource languages (LRLs) – a setting where labeled task data from one or more related medium-resource languages (MRLs) is available, but no labeled data exists for the target LRL. We additionally leverage the available test sentences (unlabeled) of the target language. This is in contrast to <cit.>, who utilize a set of external unlabelled sentences for English tasks and pose this as a transductive zero-shot setting. The high cost of annotating LRL sentences for new tasks or domains underscores the relevance of the 0-CLT setting for non-English languages. Cross-lingual transfer has been addressed through standard fine-tuning <cit.> and language adapters <cit.>, but there is limited work on cross-lingual ICL. There are two exceptions <cit.>, where ICL is employed with exemplars from a source language, but they use uniformly random sampling for exemplar selection, resulting in performance inferior to cross-lingually fine-tuned models, such as mBERT and XLM-R <cit.>.
In our preliminary experiments, we prompt the Llama2-70B model with exemplars from source MRLs, and compare its performance with the same LLM prompted with exemplars from the target LRL. We vary the label noise on the target exemplars. Unsurprisingly, LLMs show better performance with less label noise. More interestingly, we find that a reasonably-sized noise region exists (see Figure <ref>), such that if the exemplar noise is within that range, then the overall performance is higher than prompting with accurate source language data. Armed with this observation, we present Self-Supervised Prompting (SSP) – a novel ICL framework for 0-CLT to LRLs. Since the target LRL training data is not available in 0-CLT, SSP operates in two stages. In Stage I, SSP labels all test instances of the LRL using training data from the MRLs. This may be done by LLM prompting (as in the experiment above), or using any other existing approaches for 0-CLT, such as fine-tuning or adapters. Once (noisy) labels on the target LRL are obtained, in Stage II, SSP uses ICL with these noisy test data points (excluding the test point itself) as exemplars for further performance improvement. Additionally, to select the best exemplars, we develop a novel Integer Linear Programming (ILP) based selection approach, which balances the various objectives of (1) similarity of the exemplar with the test sentence, (2) high confidence in label predictions, and (3) coverage of the various labels for better task understanding. Figure <ref> gives an overview of our proposed pipeline. We define 3 scenarios for our zero-labelled setup - (1) 0-CLT: only the available test sentences of the target language are used, with no additional unlabelled data, (2) 0-CLT-U: the full Wikipedia data available for the target language is utilized, and (3) 0-CLT-T: a translation model supporting the target language is leveraged. The primary focus of this work is on 0-CLT (setting 1). However, we also conduct Stage I experiments for both the 0-CLT-U and 0-CLT-T settings. This enables us to comprehensively assess SSP's effectiveness across varying degrees of noise in Stage I. We perform experiments on sequence labelling tasks (POS and NER), and natural language inference (NLI) – a text classification task. Our datasets encompass eleven low-resource languages from typologically diverse language families and three regions: African, Germanic and American. Our experiments show consistent and substantial improvements over existing fine-tuning as well as simpler ICL-based approaches. We will make both our codebase and prompts publicly accessible. Our contributions are summarized as follows: * We investigate ICL strategies for zero-labelled cross-lingual transfer to LRLs, using labeled data from related MRLs and unlabeled test data from the target language. * We propose SSP, a two-stage self-supervised prompting paradigm for this task, where the first stage may be done by an LLM or other cross-lingually fine-tuned models. * We introduce an exemplar selection approach utilizing an ILP. The ILP incorporates similarity to the test input along with confidence of prediction (when available), and enforces label coverage constraints for better selection. * Experiments on 3 tasks and 11 languages show that SSP outperforms existing fine-tuning and SOTA LLM-based models in the 0-CLT, 0-CLT-U (full unlabeled) as well as 0-CLT-T (translation-based) settings, hence improving labelling in the second iteration, irrespective of the initial labelling method.
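To make the two-stage pipeline concrete, here is a minimal sketch (the helper callables `stage1_label`, `select_exemplars`, `build_prompt` and `llm` are hypothetical placeholders, not the authors' code):

```python
from typing import Callable, List, Tuple

Example = Tuple[str, str]   # (sentence, label sequence or class)

def ssp(test_sents: List[str],
        stage1_label: Callable[[str], str],                               # source-ICL or a fine-tuned model
        select_exemplars: Callable[[int, List[Example]], List[Example]],  # e.g. the ILP-based selection described later
        build_prompt: Callable[[List[Example], str], str],
        llm: Callable[[str], str]) -> List[str]:
    # Stage I: noisily label every target-language test sentence using source-MRL supervision
    noisy: List[Example] = [(s, stage1_label(s)) for s in test_sents]

    # Stage II: re-label each sentence with in-language ICL, using the other noisily
    # labelled test points as exemplars (never the test point itself)
    final = []
    for j, (s, _) in enumerate(noisy):
        exemplars = select_exemplars(j, noisy)
        final.append(llm(build_prompt(exemplars, s)))
    return final
```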
§ RELATED WORK An ICL prompt consists of (1) task description: to facilitate the understanding of task, (2) labeled input-output pairs: Written sequentially in order of their relevance to input query, and (3) input itself. Cross-lingual ICL: In general, cross-lingual ICL has not been systematically explored in literature. In existing works, prompting is primarily done in a high-resource language, typically English. This is called cross-lingual (CL) prompting. This differs from in-language (IL) prompting, where examples are retrieved from the candidate pool of the target language itself. This assumes the availability of labeled data for target LRL, which is not true in our zero-labelled (0-CLT) setting. In response, we develop novel techniques making use of both CL prompting and IL prompting, while not utilizing the gold labels during IL prompting stage. Most existing cross-lingual ICL methods use uniformly random input-output pairs for exemplar selection <cit.>. Recent approaches <cit.> address this gap by utilizing semantic similarity for cross-lingual retrieval from a high-resource language's labeled data, given the target LRL's instance as query. This is facilitated by embedding-based multilingual retrievers such as multilingual sentence-transformers <cit.>. More recently, OpenAI-based embeddings such as Ada-002 [https://platform.openai.com/docs/guides/embeddings/embedding-modelhttps://platform.openai.com/docs/guides/embeddings/] have been used effectively for cross-lingual retrieval <cit.>. We extend this line of work by also incorporating label confidence and label coverage in exemplar selection. Fine-tuning approaches for Cross-lingual Transfer: Most approaches rely on fine-tuning a Pretrained LM (PLM) such as BERT or XLM-R on one or more source languages (<cit.>) and deploying on an unseen target language. Recently, Language-Adapter based approaches have been found more effective <cit.> for cross-lingual transfer settings. For sequence labelling tasks (NER and POS tagging), ZGUL <cit.> is a recent SOTA method that leverages ensembling Language Adapters from multiple MRLs to label each word in a target language. We leverage this in our proposed SSP pipeline. Cross-lingual label-projection techniques: Recent methods <cit.> utilize an off-the-shelf translation model <cit.> for label-projection in 2 ways – (1) Translate-train: translate from English to target language (X) to generate training data in X, or (2) Translate-test: translate test data in X to English to perform label-projection and obtain annotations in X. Although our focus is 0-CLT transfer, we also experiment with these translation models in Stage I, to assess the robustness of SSP across multiple settings. § SELF-SUPERVISED PROMPTING We define the setting of zero-labelled cross-lingual transfer (0-CLT) as follows. We are given source training data for a specific task: D={(x_i,lg_i,y_i)}, where x_i is the input text in language lg_i, and the output is y_i. We are additionally given a set of unlabeled test data points T={q_j} from a target language lg_t. Our goal is to train a model/create a protocol, using D, T and a large pre-trained LLM, that outputs good predictions on T for the task, assuming that lg_t is a low-resource language, due to which its training data is not available, and that languages lg_i are related to lg_t. Our solution approach, Self-Supervised Prompting (SSP), comprises two key stages as follows. In Stage I, it proposes a noisy labelling for all data points in T using source data D. 
This may be done in different ways, as described next. In Stage II, it uses the LLM and noisy labelling on T from Stage I as exemplars to improve the labellings. Furthermore, SSP uses a novel integer-linear programming based exemplar selection. We now describe each component of our system. §.§ Stage I: initial labelling using source data To create a first labelling for all test points, SSP can use any existing approaches for 0-CLT, such as fine-tuning a multilingual language model for the task, or use of language adapters or using our LLM with in-context exemplars from source language. In our experiments, we experiment with adapters and ICL, which we briefly describe next. Cross-Lingual ICL: In the method, we use ICL over LLM for obtaining Stage I labellings. First, we retrieve a set of top-K exemplars from D using each test instance q_j as query. This selection is based on cosine similarity between their Ada-002 embeddings. The selected exemplars are arranged in descending order of similarity scores, and included in the prompt between the task description (TD) and the input test instance. This approach has two drawbacks. First, since the LLM will typically be a large expensive model – this will require an LLM call per test data point in Stage I. Second, generally, these LLMs do not expose their logits, hence, we will not have access to prediction confidences from Stage I labellings. Training smaller model(s) using D: Another possibility is to fine-tune a smaller multilingual LM, such as mBERT or mDeBerta-v3 <cit.> on D for NLI task. For sequence labelling, we can use ZGUL <cit.>, which trains source language adapters using D, and uses inference-time fusion of source adapters for labelling test data points. These approaches can provide Stage I labellings for T along with prediction confidences, without making any expensive LLM calls. §.§ Stage II: in-language ICL using ILP-based exemplar selection After Stage I predictions for target instances T are obtained, SSP prompts the LLM to label each test data point q ∈ T, but uses in-context exemplars in target language using Stage I labellings. For exemplar selection, SSP implements a novel integer linear program (ILP) that balances semantic similarity, prediction confidence (when available) and label coverage. Our primary objective is to maximize the aggregated semantic similarity of the selected exemplars, which is obtained using cosine similarity score between their OpenAI Ada-v2 embeddings. In addition, we impose two constraints: * Label Coverage: The ILP tries to ensure the coverage of all labels for the given task in the selected exemplars – this has been found effective for ICL <cit.>. * Confidence: In case Stage I predictions are made by a model whose logits are accessible (unlike the OpenAI LLMs), the ILP prefers selection of more confident exemplars. Our hypothesis is that confident predictions are also accurate (assuming the model is well-calibrated), and previous work has shown that performance of LLMs can be sensitive to correctness of exemplars <cit.> SSP formulates these three factors into an ILP as follows. For a dataset D with n examples indexed from ℐ = {1 … n}, given a test data point q_j, let z_i be a binary variable denoting whether i^th test instance q_i is selected as an exemplar. We use a semantic similarity function sim(q_i, q_j) to get the similarity between two examples. K is the number of exemplars to be selected. Since q_j cannot be an exemplar for itself, we select exemplars from ℐ∖{j} only. 
Let the set of all labels in the task be ℒ, and the multiset of all labels predicted (using argmax) for example q_i be L_i. The Stage I prediction confidence for label l in q_i is denoted as ŷ^i_l. This confidence is computed as the average of the probability scores across all predictions of label l in the i^th sentence (details in Appendix <ref>). The ILP uses a threshold τ_l on the prediction confidence for each label l. Intuitively, the ILP maximizes the semantic similarity of the K chosen exemplars, subject to each label l being present at least once in the exemplars, and the prediction confidence of every selected data point for each of its predicted labels exceeding τ_l. Formally, the ILP is formulated as

max ∑_i ∈ℐ∖{j} z_i · sim(q_i, q_j)
such that ∑_i ∈ℐ∖{j} z_i = K,
z_i · (ŷ^i_l - τ_l) ≥ 0 ∀ i ∈ℐ∖{j}, ∀ l ∈ L_i,
∑_i ∈ℐ∖{j} z_i · count(L_i, l) ≥ 1 ∀ l ∈ℒ.

Here, count(L_i, l) denotes the number of occurrences of l in L_i. In our experiments, we set K=8, and τ_l = 80^th percentile threshold of the set {ŷ^i_l}^n_i=1 for a particular label l. The idea is to use a label-specific threshold, since the fine-tuned model may not be equally well calibrated for all labels. Since logits are not accessible for the OpenAI LLMs GPT-3.5 and GPT-4x, we skip the confidence-thresholding constraint of the ILP whenever Stage I labelling is done by either of these models using ICL. For this variant of SSP, the selection is therefore based only on similarity and label coverage. A minimal solver-based sketch of this formulation is given below, after the task and dataset details.
§ EXPERIMENTS
Our main experiments assess SSP performance compared to existing state-of-the-art models for 0-CLT. We also compare various SSP variants and estimate the value of the ILP-based exemplar selection.
§.§ Tasks and Datasets
We experiment on three tasks – POS tagging, NER and Natural Language Inference (NLI). We use the UDPOS dataset <cit.> for POS tagging over Germanic languages, MasakhaNER <cit.> for African NER, and AmericasNLI <cit.> for the NLI task on indigenous languages of the Americas. Overall, we use eleven low-resource test languages as targets (e.g., Kinyarwanda, Faroese, and Aymara), and 2-4 source languages per dataset (e.g., Icelandic, Spanish and Swahili; always including English). Further details are in Tables <ref> and <ref>. Recent studies have shown that the output is sensitive to the template/format of the input-output pairs written in the prompt <cit.>. We follow the best template given in <cit.> for NLI, while for sequence labelling we explore various templates on our own and report results on the best one. We refer to Appendix <ref> for details and the exact templates used for each of our tasks. To obtain the test set, we randomly sample 100 test sentences for each target language for the NER and POS tasks. We justify this because each sentence carries multiple labels, bringing the total number of instances to be labelled per language to 2370 for POS and 1100 for NER.
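To make the exemplar-selection step concrete, the sketch below encodes the ILP above with gurobipy, the solver named in the implementation details. It assumes the per-candidate similarity scores, Stage I label multisets, and (when available) per-label confidences have already been computed; the function and variable names are illustrative rather than taken from the released code, and the handling of infeasible instances (e.g., when no sufficiently confident candidate carries a rare label) is omitted.

```python
import gurobipy as gp
from gurobipy import GRB

def select_exemplars(sim, pred_labels, label_set, conf=None, tau=None, K=8):
    """Select K exemplar indices for one test query.

    sim[i]         : cosine similarity of candidate i to the query (Ada-002 embeddings)
    pred_labels[i] : Counter of labels predicted by Stage I for candidate i (the multiset L_i)
    label_set      : all task labels (the set L)
    conf[i][l]     : Stage I confidence of label l in candidate i, or None if unavailable
    tau[l]         : per-label confidence threshold tau_l
    """
    cand = range(len(sim))
    model = gp.Model("ssp_exemplar_selection")
    model.Params.OutputFlag = 0
    z = model.addVars(cand, vtype=GRB.BINARY, name="z")

    # Objective: total semantic similarity of the selected exemplars.
    model.setObjective(gp.quicksum(z[i] * sim[i] for i in cand), GRB.MAXIMIZE)

    # Exactly K exemplars are chosen.
    model.addConstr(gp.quicksum(z[i] for i in cand) == K)

    # Label coverage: every task label appears at least once among the selections.
    for l in label_set:
        model.addConstr(gp.quicksum(z[i] * pred_labels[i][l] for i in cand) >= 1)

    # Confidence thresholding: z_i * (conf - tau_l) >= 0 forbids selecting a candidate
    # whose confidence for any of its predicted labels falls below tau_l.
    if conf is not None:
        for i in cand:
            if any(conf[i][l] < tau[l] for l in pred_labels[i]):
                model.addConstr(z[i] == 0)

    model.optimize()
    return [i for i in cand if z[i].X > 0.5]
```

When Stage I labellings come from an LLM whose logits are not exposed, conf is passed as None and only the similarity objective and the label-coverage constraint remain, matching the variant described above.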
For the NLI task, we sample 501 test samples (167 for each class: `entailment', `contradiction' and `neutral'). We report statistical significance (in table captions) to justify our evaluation. We also perform a careful contamination study, following <cit.>, by asking LLMs to fill dataset card, complete sentence (and labels), given partial sentence, and generate next few instances of the dataset. As further detailed in Appendix <ref>, we do not observe any evidence of contamination for these languages' test splits in the OpenAI LLMs. §.§ Comparison Models Zero-shot Baselines: We compare our SSP approach with the SoTA fine tuning models, as well as LLM-based ICL methods using naive random exemplar selection. In particular, we fine-tune ZGUL – mBERT Language Adapter-based SoTA zero-shot baseline for NER and POS tagging, and mDeBERTa fine-tuned for NLI. We additionally utilize the public model mDeBERTa-v3-base-xnli <cit.> for NLI evaluation. We term our own fine-tuned model as mDeBERTa^FT and the public model as mDeBERTa^100, as it was trained on 100 languages (excluding our target languages). For POS and NER, we also add full parameter fine-tuning and Conditional Parameter Generation (CPG <cit.>) baselines, all fine-tuned using the same underlying LM (i.e. mBERT). SSP Variants: We implement SSP with a series of top-of-the-line LLMs – GPT-3.5-turbo <cit.>, GPT-4x (GPT-4/GPT-4-Turbo) <cit.>, and LLaMa-2-70b <cit.>. If Stage I uses ICL, then the same LLM is used for both stages I and II. Alternatively, ZGUL and mDeberta based methods are also used in Stage I of SSP. To understand the value of the ILP, we perform three ablations on exemplar selection strategy – (a) without confidence thresholding (for fine-tuned LM), (b) without label coverage and (c) without both, i.e. pure similarity-based. The ablations are conducted with the best performing underlying LLM i.e. GPT-4x. Leveraging Translation Models and Unlabeled Data: For a comprehensive evaluation, we use the cross-lingual label projection models Codec <cit.> for translate-train and Self-fusion <cit.> for translate-test baselines. More details are provided in Appendix <ref>. Additionally, we leverage unlabeled data in the target language to establish a stronger baseline. We use the AfriBERTa encoder <cit.> for African languages and ZGUL++ <cit.>, which utilizes target Wikipedia data to pre-train a target language adapter, and fuses it with MRL adapters for fine-tuning on MRL data. Skyline: To understand the current performance gap due to lack of target language training data, we also implement a skyline utilizing the available data for target languages and perform few-shot in-language similarity-based exemplar selection (using Ada-002) for in-language ICL to the LLM. § RESULTS AND ANALYSIS We present the results for all tasks in Tables <ref>, and <ref>. ICL-X represents ICL over an LLM X with source language exemplars. SSP(model)-X represents the use of model for Stage I followed by LLM X for Stage II. In case ICL is used in Stage I, then same LLM X is used in both stages. Analyzing the results, we first observe that all ICL-X baselines perform much better than previous fine-tuning approaches for the 0-CLT task. This reaffirms the importance of studying and improving in-context learning over very large language models for our setting. Comparing among SSP variants, it is not surprising that GPT-4 performance supercedes GPT-3.5, which is much better than Llama2 70B. We next compare ICL baselines and SSP variants, when using the same LLM. 
We find that SSP's two stage workflow consistently outperforms ICL by significant margins. In fact, in-language exemplars with very noisy labels from stage 1 (E.g. for Got language with GPT-3.5-Turbo) perform quite well. These observations underscore the value of target language exemplars in ICL, even at the cost of label noise. Moreover, we compare SSP with Stage I via ICL over an LLM vs. via a fine-tuning baseline (ZGUL or mDeBerta). Fine-tuning baseline for Stage I has two benefits – it is cheaper (due to no LLM calls in Stage I), and has prediction confidence that can allow ILP to select highly confident Stage II exemplars. Due to the latter, in two of the three language groups, the use of a fine-tuning baseline performs much better, and in the third group, it is marginally behind due to weaker performance in one language (Gothic). This happens because ZGUL has a particularly poor performance on this language, leading to much noisier labels in Stage II exemplars. Finally, we experiment on SSP in 0-CLT-U (full target wikipedia) and 0-CLT-T (Translation model) settings, as shown in Table <ref>. We observe that the order of stage I performance is 0-CLT-T (translate-test) < 0-CLT < 0-CLT-T (translate-train) < 0-CLT-U, and same order of performance gets translated in stage II as well, while stage II performance being consistently better than stage 1 in all scenarios. This validates our hypothesis that SSP is effective under varying levels of noise in stage I labelings. Overall, our best 0-CLT SSP solution uses a fine-tuning baseline (ZGUL or mDeBerta) for Stage I and GPT-4 for Stage II, using its novel ILP-based exemplar selection. It outperforms closest 0-CLT baselines by around 3 F1 pts, on average, establishing a new state of the art for zero-labelled cross-lingual transfer to low-resource languages. The best SSP reported 0-CLT results are statistically significant compared to the second best counterpart using McNemar's test (p-values in Tables 1 and 2 captions). We believe that our work is a significant advancement to the existing paradigm <cit.>, which is restricted to optimizing only 1 round of In-context learning. §.§ Ablation Study We now discuss the results of removing ILP components in Stage II exemplar selection. Tables <ref>, and <ref> (last four rows) report the impact of removing confidence thresholding constraint, label coverage constraint, both of these constraints (i.e., just using similarity) from the ILP. The final row removes ILP completely and presents results of random exemplars in Stage II. All these ablations are done on SSP with ZGUL/mDeBerta for Stage I, as only those output prediction probabilities. Impact of label coverage: We observe an average gain of 1.3 F1 points for AmericasNLI compared to the ablation model that does not impose label coverage constraint. We further compute the average number of exemplars for each label that are covered in the selected set for both methods, along with their label-wise F1 scores (see Figure <ref>). We observe that the `neutral' label is not sampled in most cases for w/o label coverage variant, while exactly one `neutral' label is sampled in the SSP(mDeBerta), with label constraint. This happens as the fine-tuned model mDeBerta-FT has very poor recall (24) for `neutral' class and hence any selection strategy has a tendency to not sample this label, unless enforced via a constraint. The class-wise recall for SSP(DeBerta^CL)-GPT4 with and w/o label coverage are presented in Table <ref>. 
We observe a difference of 22 recall points for `neutral' class (57.6 vs 35.6) between the two ILP variants. An example illustrating this behavior is shown in Figure <ref> (appendix). Impact of confidence thresholding: For sequence labelling tasks, confidence thresholding plays a key role. This is validated from ablation results in Table <ref>, wherein removing confidence thresholding from SSP leads to 5.7 points drop for POS tagging (Germanic) and 1.3 points for NER. The drop is particularly significant (around 13.5 points) for Gothic (Got), which shows that not utilizing the confidence scores can lead to drastic drop. This may be because performance of ZGUL is already poor on Gothic (21 F1 points), but confidence thresholding may have likely compensated by picking higher quality exemplars. Removing thresholding would increase noise in exemplars considerably, leading to the drop (see figure <ref>). We further study its impact by computing the quality of Stage II exemplars selected by SSP(mDeBerta), as well as it's ablation variants. We compute the label-wise precision over all K×N (K=8, N=no. of test instances) samples for each target language, and then report their macro-average. We observe for (Figure <ref>) that the macro-precision of selected exemplars by full ILP is consistently higher than it's other ablation variants, the least value being of w/o both (similarity-based) variant. This implies that the ILP is able to effectively sample high-precision (correctly labeled) exemplars which, in turn, gets translated into it's superior downstream performance on the task. For completeness, we also show the exemplar precision (correctness) statistics for NER and POS in Figure <ref>. The trends hold similar in the sense-that `w/o confidence' and `similarity-based' variants have significantly lower precision (higher noise) than SSP. This is expected because both these eschew confidence thresholding, leading to sampling of lower-confidence predictions. This translates to worse downstream performance (see Table <ref>). We also note that w/o ILP (completely random selection) ablation performs much worse than SSP, showcasing the importance of carefully selecting the exemplar set. We present an error analysis of SSP approach in section <ref>. § CONCLUSIONS AND FUTURE WORK We study the zero-labelled cross-lingual transfer (0-CLT) setting for low-resource languages, when task-specific training data is available for related medium resource languages, along with unlabeled test data for target language. We present Self-Supervised Prompting (SSP) – a novel two-stage framework for the use of in-context learning over very large language models. At a high-level, SSP first noisily labels the target test set using source training data (either by training a model/adapter) or by in-context learning over an LLM. SSP then uses these noisily labeled target data points as exemplars in in-context learning over the LLM. A key technical contribution is the use of integer-linear program that balances exemplar similarity, labelling confidence and label coverage to select the exemplars for a given test point. Thorough experiments on three NLP tasks, and eleven low-resource languages from three language groups show strongly improved performance over published baselines, obtaining a new state of the art in the setting. Ablations show the value each ILP component in downstream performance. In the future, we seek to extend our technique to more non-trivial applications such as open generation tasks (E.g. 
summarization) and semantic parsing. We also posit that smaller fine-tuned models, when calibrated properly, can result in more efficient selection of exemplars to an LLM, as compared to poorly calibrated counterparts, in terms of downstream performance. We leave a careful and systematic investigation into this hypothesis for future work. § LIMITATIONS We show all our results and ablations on the recent state-of-the-art LLMs including GPT4. The inference for these LLMs is expensive, and makes the model deployment infeasible. Other potential limitations are extending our method to tasks such as fact checking, in which the LLMs suffer from hallucinations and overprediction issues. The reason why we don't use LLM logits in ILP framework is because they are not openly released by OpenAI and hence, there becomes a need to rely on smaller fine-tuned models - which can potentially lead to sub-optimal downstream performance, in case the fine-tuned models are poorly calibrated. Another serious implication of using LLMs for non-roman script languages is unreasonably high fertility (tokens per word split) of the LLM tokenizers, which increases the cost as well as strips the input prompt, which is not desirable.We also could not evaluate our approach on open generation tasks such as summarization, since their evaluation metrics are not reliable as to obtain a fair comparison of various models. Also, human evaluation could not be done at scale. That said, we note that every task is a generative task for LLM and we pose NLI as a short-form generation, while the POS and NER tasks as a templated long-form generation in current scope of our work. § IMPLEMENTATION AND HYPERPARAMETER DETAILS We use Azure OpenAI service [https://azure.microsoft.com/en-in/products/ai-services/openai-servicehttps://azure.microsoft.com/en-in/products/ai-services/openai-service] for all experiments involving GPT-3x and GPT-4x models. For LLama-2-70b, we use the together API [https://www.together.ai/https://www.together.ai/]. We set temperature as 0.0 consistently for all our experiments, making our results directly reproducible. The max_tokens (max. no. of generated tokens) parameter is set to 1024 for POS and NER tasks, while 15 for the NLI. For all experiments, the no. of exemplars (M) is fixed to 8 for uniform comparison. For ILP solver, we use Python's gurobipy [https://pypi.org/project/gurobipy/https://pypi.org/project/gurobipy/] package. The run-time for ILP per test query = 0.05 seconds, while that of pure similarity-based retrieval = 0.006 seconds. §.§ Translation-based baselines We explain both translate-train and translate-test methods as follows - * Translate-train: Following <cit.>, we employ Codec method to generate training data in target language X, X^train, using MRL labeled data. We perform stage 1 using following ways - * fine-tune a model on X^train, and infer on X^test * perform ICL using exemplars from X^train for each test query in X^test * Following <cit.>, we utilize Self-fusion using GPT-4, that takes input as target query, it's English translation and English translation's annotations, appended as a prompt, and outputs the annotated target query.[We also tried Codec for translate-test, but could not reproduce the results reported in their paper for African languages (replicated avg. F1 = 60.5 v/s reported avg. F1 = 72).] §.§ Estimating confidence ŷ^i_k For NLI task, the model always predicts a single label: `neutral', `contradiction' or `entailment'. 
We simply apply softmax on the class logits for the predicted label to compute the confidence ŷ^i_j (for i^th test instance). In sequence labelling tasks, suppose for an input sentence having words: {w_1,w_2,...,w_T}, the model predicts labels {o_1,o_2,...,o_T} with probabilities {p̂_1,p̂_2,...,p̂_T}. Let LabelSet be {l_1, l_2, ..., l_N}. We compute confidence ŷ_l for each label for a given test example as follows: This outputs the confidence scores ŷ_l for a given example, with those not predicted in a sequence having 0 value. §.§ Dataset Details § PROMPT DETAILS Prompts for the Named Entity Recognition (NER) and Part of Speech Tagging (POS) tasks are presented in the tab separated format shown in <ref> and <ref> respectively. Prompts for Natural Language Inference (NLI) initially used the framework in <cit.> . To improve our performance, we changed the prompt to use <cit.>'s framework, where the authors performed an exhaustive search over tokens used for a prompt in order to find the prompt with optimal performance. This increased Macro F1 score by atleast 10% across all the tested languages. We use the same prompt across all models used in our experiments. §.§.§ Natural Language Inference (NLI) Task Description: You are an NLP assistant whose purpose is to solve Natural Language Inference (NLI) problems. NLI is the task of determining the inference relation between two (short, ordered) texts: entailment, contradiction, or neutral. Answer as concisely as possible in the same format as the examples below: Input format: Premise: {premise} , Hypothesis: {hypothesis} , Output format: Answer: {output} Verbalizer: match the one-word response from the model (neutral, contradiction or entailment) §.§.§ Named Entity Recognition (NER) Task Description: Tag the following sentence according to the BIO scheme for the NER task, using the tags PER (person), LOC (location), ORG (organization) and DATE (date). Follow the format specified in the examples below: Input format: Sentence: w_1 w_2 ... w_T Output format: Tags: w_1<TAB>o_1 w_2<TAB>o_2 ... w_T<TAB>o_T Verbalizer: Extract the sequence of labels o_1, o_2, ... o_3 from generated response. §.§.§ Part of Speech (PoS) tagging Task Description: Tag the following sentence according to the Part of Speech (POS) of each word. The valid tags are ADJ, ADP, ADV, AUX, CCONJ, DET, INTJ, NOUN, NUM, PART, PRON, PROPN, PUNCT, SCONJ, SYM, VERB, X. Follow the format specified in the examples below: Input format: Sentence: w_1 w_2 ... w_T Output format: Tags: w_1<TAB>o_1 w_2<TAB>o_2 ... w_T<TAB>o_T Verbalizer: Extract the sequence of labels o_1, o_2, ... o_3 from generated response. §.§ Verbalizer details for Tagging tasks The verbalizer for tagging tasks requires the LLM to output the words as well as the associated labels. The LLM's output may not be perfect, as it may fail to generate all words or associate a label with each word. As a result, we find the Longest Common Subsequence between the words generated by the LLM and the words of the example. This is done using Dynamic Programming, as described in <cit.>. Once we have found the longest common subsequence, we assign the corresponding tags generated by the LLM to these words. If the tags are invalid, we assign a default tag (O for NER, and X for POS). Finally, for the words which don't have any tags associated with them, we assign the same default tag as before. It is to be noted that in most cases, the sentence generated by the LLM perfectly matches the original sentence. 
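To complement the confidence-estimation description at the start of this appendix, the sketch below computes the per-label score ŷ_l for one sequence-labelling example as the average probability over the positions where label l is predicted, with unpredicted labels receiving zero. The function and argument names are illustrative and not taken from the released implementation.

```python
from collections import defaultdict

def label_confidences(pred_labels, pred_probs, label_set):
    """Per-label confidence for one sequence-labelling example.

    pred_labels: [o_1, ..., o_T]  labels predicted for each word
    pred_probs:  [p_1, ..., p_T]  probability of each predicted label
    label_set:   all task labels l_1, ..., l_N

    Returns y_hat[l]: mean probability over positions where l was predicted,
    and 0.0 for labels that never appear in the prediction.
    """
    sums, counts = defaultdict(float), defaultdict(int)
    for label, prob in zip(pred_labels, pred_probs):
        sums[label] += prob
        counts[label] += 1
    return {l: (sums[l] / counts[l]) if counts[l] else 0.0 for l in label_set}
```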
For GPT-4, less than 1% of the words fell into the category of having an invalid tag generated, or not having the word generated. §.§ Error Analysis We investigate scenarios where SSP approach systematically fails compared to other methods. For NER, we find that ZGUL (fine-tuned LM) underpredicts the `DATE' label. As a result, SSP almost never samples this label in stage 2 exemplars, hence hurting the performance for this label. For NLI task, we observe that in order to ensure label coverage, SSP samples the underpredicted label `neutral' but while doing so, also ends up hurting the performance for `contradiction' label (as seen in last plot of Figure <ref>). §.§ Prompts for GSW Examples The base SSP-SIM prompts for the GSW examples highlighted in Figure <ref> are given below. Labels which are misclassified in the in-context exemplars are coloured in red, and the AUX labels which are to be flipped in the ablations are coloured in blue. It is interesting to note that examples 1 and 2 are similar, as example 1 is retrieved as an in-context exemplar for example 2. §.§.§ Example 1 Tag the following sentence according to the Part of Speech (POS) of each word. The valid tags are ADJ, ADP, ADV, AUX, CCONJ, DET, INTJ, NOUN, NUM, PART, PRON, PROPN, PUNCT, SCONJ, SYM, VERB, X. Follow the format specified in the examples below: Sentence: I main , das Ganze letscht Wuchä isch mier scho ächli iigfaarä . Tags: “` I PRON main VERB , PUNCT das DET Ganze NOUN letscht ADJ Wuchä NOUN isch AUX mier PRON scho ADV ächli ADV iigfaarä VERB . PUNCT “` Sentence: Du gsehsch uus , wi wenn de nöime no hättisch z trinken übercho . Tags: “` Du PRON gsehsch VERB uus PRON , PUNCT wi SCONJ wenn SCONJ de DET nöime ADJ no ADV hättisch AUX z PART trinken VERB übercho VERB . PUNCT “` Sentence: Dir weit mer doch nid verzöue , di Wäutsche heige vo eim Tag uf en anger ufghört Chuttlen ässe . Tags: “` Dir PRON weit VERB mer PRON doch ADV nid ADV verzöue VERB , PUNCT di DET Wäutsche NOUN heige VERB vo ADP eim DET Tag NOUN uf ADP en DET anger ADJ ufghört VERB Chuttlen NOUN ässe VERB . PUNCT “` Sentence: es isch nämli echt usgstorbe gsi . Tags: “` es PRON isch AUX nämli ADV echt ADJ usgstorbe VERB gsi AUX . PUNCT “` Sentence: Aso bini rächt uufgschmissä gsi und dem entschprächend fascht verzwiiflät . Tags: “` Aso ADV bini AUX rächt ADV uufgschmissä VERB gsi AUX und CCONJ dem PRON entschprächend ADJ fascht ADV verzwiiflät VERB . PUNCT “` Sentence: Der Ääschme wett nöd schaffe biin em . Tags: “` Der DET Ääschme NOUN wett AUX nöd ADV schaffe VERB biin ADP em PRON . PUNCT “` Sentence: Zerscht hends am Dani gsait , är söli dòch Hoochdütsch redä , das gängi denn grad gaar nöd , wenn är so redi , wiäner redi . Tags: “` Zerscht ADV hends PRON am ADP Dani PROPN gsait VERB , PUNCT är PRON söli AUX dòch ADV Hoochdütsch ADJ redä VERB , PUNCT das PRON gängi VERB denn ADV grad ADV gaar ADV nöd ADV , PUNCT wenn SCONJ är PRON so ADV redi VERB , PUNCT wiäner PRON redi VERB . PUNCT “` Sentence: Isch das e Sach gsi , bis mer se gfunge hei gha . Tags: “` Isch AUX das PRON e DET Sach NOUN gsi AUX , PUNCT bis SCONJ mer PRON se PRON gfunge VERB hei AUX gha VERB . PUNCT “` Sentence: Ds Gueten isch immerhin gsi , dass i ungerdesse söfu müed bi gsi , dass i ändlech ha chönne go schlofe . Tags: “` §.§.§ Example 2 Tag the following sentence according to the Part of Speech (POS) of each word. The valid tags are ADJ, ADP, ADV, AUX, CCONJ, DET, INTJ, NOUN, NUM, PART, PRON, PROPN, PUNCT, SCONJ, SYM, VERB, X. 
Follow the format specified in the examples below: Sentence: I ha ar Marie-Claire gseit , es sig mer chli schlächt und i mög jetz nümm liire . Tags: “` I PRON ha AUX ar PART Marie-Claire PROPN gseit VERB , PUNCT es PRON sig AUX mer PRON chli ADV schlächt ADJ und CCONJ i PRON mög VERB jetz ADV nümm ADV liire VERB . PUNCT “` Sentence: De Spanier hed de Kontakt vermettlet , d Rumäne sölled d Holländer ombrocht ha . Tags: “` De DET Spanier NOUN hed AUX de DET Kontakt NOUN vermettlet VERB , PUNCT d DET Rumäne NOUN sölled AUX d DET Holländer PROPN ombrocht VERB ha AUX . PUNCT “` Sentence: Ds Gueten isch immerhin gsi , dass i ungerdesse söfu müed bi gsi , dass i ändlech ha chönne go schlofe . Tags: “` Ds DET Gueten NOUN isch AUX immerhin ADV gsi VERB , PUNCT dass SCONJ i PRON ungerdesse ADV söfu VERB müed ADJ bi ADP gsi VERB , PUNCT dass SCONJ i PRON ändlech ADV ha AUX chönne AUX go VERB schlofe VERB . PUNCT “` Sentence: Isch das e Sach gsi , bis mer se gfunge hei gha . Tags: “` Isch AUX das PRON e DET Sach NOUN gsi AUX , PUNCT bis SCONJ mer PRON se PRON gfunge VERB hei AUX gha VERB . PUNCT “` Sentence: De Dialäkt muess zu de Gschecht und zum Inhaut vonere Werbig passe . Tags: “` De DET Dialäkt NOUN muess AUX zu ADP de DET Gschecht NOUN und CCONJ zum ADP Inhaut NOUN vonere ADP Werbig NOUN passe VERB . PUNCT “` Sentence: Mit der Zit hani mi mit mir säuber uf ei Schriibwiis pro Wort aafo einige . Tags: “` Mit ADP der DET Zit NOUN hani VERB mi PRON mit ADP mir PRON säuber ADJ uf ADP ei DET Schriibwiis NOUN pro ADP Wort NOUN aafo VERB einige DET . PUNCT “` Sentence: Mit all denä Wörter hani natürli nüt chönä aafangä . Tags: “` Mit ADP all DET denä DET Wörter NOUN hani PRON natürli ADV nüt ADV chönä VERB aafangä VERB . PUNCT “` Sentence: Aso bini rächt uufgschmissä gsi und dem entschprächend fascht verzwiiflät . Tags: “` Aso ADV bini AUX rächt ADV uufgschmissä VERB gsi AUX und CCONJ dem PRON entschprächend ADJ fascht ADV verzwiiflät VERB . PUNCT “` Sentence: I cha der ihri Telefonnummere gä , de nimmsch mou unverbindlech Kontakt uuf . Tags: “` § SOURCE AND TARGET LANGUAGES FOR EACH TASK § NLI LABEL COVERAGE ANALYSIS We present an example of correct prediction made by SSP as compared to the version that doesn't ensure label coverage in Figure <ref> (English translation in Fig. <ref>). § QUALITATIVE ANALYSIS: SSP-SIM We present the analysis for the gains obtained via SSP-SIM for Germanic POS in Figure <ref>. The confusion matrix difference between SSP-SIM and CLT-SIM suggests that the model misclassifies auxiliary verbs as verbs in CLT-SIM, and this is corrected in SSP-SIM. These errors are a consequence of the labels on the in-context exemplars the model receives, and not the tokens of the language itself. We highlight this via the two Swiss-German POS examples in Figure <ref>. The misclassified verbs are corrected by SSP-SIM, and these labels are again misclassified when more than half of the labels in the in-context exemplars are corrupted. § DATA CONTAMINATION ANALYSIS Following Ahuja et al. 2023, we conduct contamination tests on test datasets for our target languages. We perform the following tests: * Dataset Card filling: Generate dataset card (supported languages, dataset description, #instances in each split, etc.) * Completion: Given a few words, complete the sentence and their labels, and * Generation using first few instances: Given first K instances (K=5) in the dataset, generate next few instances following them. We observe negligible contamination as depicted in table 8. 
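Returning to the verbalizer for tagging tasks described earlier in this appendix, the sketch below illustrates the LCS-based alignment between the words generated by the LLM and the original sentence, with the stated default tags (O for NER, X for POS) used for invalid tags and unmatched words. Names are illustrative; this is not the released implementation.

```python
def align_tags(orig_words, gen_words, gen_tags, valid_tags, default_tag):
    """Assign a tag to every original word via a longest-common-subsequence
    match with the words generated by the LLM; unmatched words or invalid
    tags fall back to the default tag (O for NER, X for POS)."""
    n, m = len(orig_words), len(gen_words)
    # Standard LCS dynamic programme over the two word sequences (suffix form).
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n - 1, -1, -1):
        for j in range(m - 1, -1, -1):
            if orig_words[i] == gen_words[j]:
                dp[i][j] = 1 + dp[i + 1][j + 1]
            else:
                dp[i][j] = max(dp[i + 1][j], dp[i][j + 1])
    # Trace back the LCS and copy the generated tag for each matched word.
    tags = [default_tag] * n
    i = j = 0
    while i < n and j < m:
        if orig_words[i] == gen_words[j]:
            if gen_tags[j] in valid_tags:
                tags[i] = gen_tags[j]
            i += 1
            j += 1
        elif dp[i + 1][j] >= dp[i][j + 1]:
            i += 1
        else:
            j += 1
    return tags
```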
The 40% accuracy for Quechua was a result of all the labels passed in the exemplars being `entailment'. Consequently, the model repeated the same label for all the other examples, which produced the 40% figure. Following these results, and to prevent any chance of contamination, we remove Quechua from our evaluation dataset.
http://arxiv.org/abs/2406.19039v1
20240627094353
Constructing and Analyzing Different Density Graphs for Path Extrapolation in Wikipedia
[ "Martha Sotiroudi", "Anastasia-Sotiria Toufa", "Constantine Kotropoulos" ]
cs.DB
[ "cs.DB" ]
Constructing and Analyzing Different Density Graphs for Path Extrapolation in Wikipedia Martha Sotiroudi, Anastasia-Sotiria Toufa, Constantine Kotropoulos Department of Informatics, Aristotle University of Thessaloniki Thessaloniki, 54124, Greece Email: {marthass, toufaanast, costas}@csd.auth.gr July 1, 2024 ============================================================================================================================================================================================================================ § ABSTRACT Graph-based models have become pivotal in understanding and predicting navigational patterns within complex networks. Building on graph-based models, the paper advances path extrapolation methods to efficiently predict Wikipedia navigation paths. The Wikipedia Central Macedonia (WCM) dataset is sourced from Wikipedia, with a spotlight on the Central Macedonia region, Greece, to initiate path generation. To build WCM, a crawling process is used that simulates human navigation through Wikipedia. Experimentation shows that an extension of the graph neural network GRETEL, which resorts to dual hypergraph transformation, performs better on a dense graph of WCM than on a sparse graph of WCM. Moreover, combining hypergraph features with features extracted from graph edges has proven to enhance the model's effectiveness. A superior model's performance is reported on the WCM dense graph than on the larger Wikispeedia dataset, suggesting that size may not be as influential in predictive accuracy as the quality of connections and feature extraction. The paper fits the track Knowledge Discovery and Machine Learning of the 16th International Conference on Advances in Databases, Knowledge, and Data Applications. Wikipedia Dataset; Path Extrapolation; GRETEL; Dual Hypergraph Transformation; Graph Neural Networks. § INTRODUCTION Graph structures offer an intuitive and powerful means to capture relationships and interactions within various kinds of data, paving the way for advanced analysis through the prism of Graph Neural Networks (GNNs) <cit.>. From node classification <cit.> to link prediction <cit.> <cit.>, GNNs have proven indispensable across a spectrum of applications. Among these, the task of link prediction focuses on path inference, namely to predict an agent's trajectory over a graph. The efficacy of such models is inherently tied to the quality and structure of the underlying graph. In this context, our work pivots on the creation of the Wikipedia Central Macedonia (WCM) dataset, a new dataset comprising paths extracted from the huge graph of Wikipedia, with a specific emphasis on articles related to Central Macedonia, Greece. The dataset tries to simulate human navigation paths as in Wikispeedia <cit.> game, where users are asked to navigate from a given source to a given target article by only clicking Wikipedia links. Our objective is to leverage this dataset to address the problem of path inference. WCM dataset is specifically designed to navigate through the complexities of Wikipedia’s topology. It takes “Central Macedonia” as the starting article, from which it explores the external links through a series of random walks. Each step is contingent on a set of well-defined validity criteria. This ensures that each selected link is pertinent and non-redundant, providing a true reflection of the path an agent might traverse within the bounds of this thematic cluster. The dataset constructed for this study is made publicly available <cit.>. 
It comprises two separate files within the directory, representing the and the structures, each containing details of the paths, unique articles, path identifiers, categories, edges, hyperedges, observations, and path lengths. The code to create the WCM dataset can be found at <cit.>. The interest in the path inference problem has led to the development of advanced models like GRETEL<cit.>, which has demonstrated promise in leveraging path extrapolation on graphs. GRETEL works as a generative model trying to capture the directionality of the path. It has been applied to both navigation data and paths constructed on the Wikipedia graph. This paper applies a graph transformation method based on the Dual Hypergraph Transformation (DHT) <cit.>. This method, as demonstrated in <cit.> <cit.>, extends the traditional graph framework enabling connections among multiple nodes (i.e., vertices) within a hypergraph. Hypergraphs are suitable for this purpose because their edges can connect any number of nodes, not just two, as in a conventional graph. The new representation is able to capture more complex interactions between the data, and new more representative features can be extracted <cit.>. Here, in pursuit of advancing path extrapolation methods, WCM dense and sparse graphs are employed to assess both the original GRETEL and the Dual GRETEL variant in environments of varying complexity, providing a thorough insight into its adaptability and accuracy in different graph densities. To capture a comprehensive range of interactions within the data, a feature extraction process is implemented as proposed in <cit.><cit.><cit.>. <cit.> introduces an enhanced model, DualGRETEL+, that applies dual hypergraph transformation and a second-order optimizer to GPS navigation data, showing improved path inference capabilities. <cit.> assesses path extrapolation using GRETEL on Wikipedia data, with a focus on extracting informative features through the DHT. The paper is structured as follows: Section <ref> provides a detailed description of the dataset creation and its characteristics, along with an overview of the features employed and the GRETEL model. A detailed exposition of the experiments and results is found in Section <ref>. The paper concludes in Section <ref>, underscoring the profound impact of graph density on the path extrapolation with graph neural networks. § METHODOLOGY This section focuses on the methodical approach to creating and analyzing the WCM, outlining the comprehensive process of collecting, categorizing, and extracting features from Wikipedia data to construct various graph types for path extrapolation. §.§ Dataset Creation The dataset is created through a crawling process designed to traverse the vast interconnected landscape of Wikipedia, with Wikipedia Central Macedonia article <cit.> serving as the focal starting point. During data collection, we remained cognizant of the load implications on Wikipedia's servers. We inserted a pause of one second between two requests, safeguarding against potential server overload while accessing Wikipedia's data. This was a measure of digital courtesy and sustainability. The path generation process begins with the Central Macedonia Wikipedia article. From this starting point, the crawler extracts all the external links associated with the current article. A subsequent article is then randomly selected from the set of external links, adhering to certain validity checks, ensuring the relevance of the link and its absence from the current path. 
To maintain the integrity of the dataset and concentrate solely on core articles, stringent validation criteria are instated. The process of path creation continues until the generated path either attains a predetermined length ranging from 4 to 7 articles or encounters an article devoid of valid external links. The algorithm employs a well-defined criterion to ensure the relevance and validity of each article within the path. The function is utilized to exclude titles containing terms like , , , `ISO', percentages, hashes, or colons, and those consisting solely of digits. This careful filtering is instrumental in maintaining a dataset focused on content-rich articles, avoiding disambiguation pages, meta-articles, or other forms of non-standard content that could detract from the dataset's integrity. To ensure the intelligibility of the dataset, each Wikipedia article is associated with a distinct identifier. Leveraging tensor manipulation, the identifiers for the linked articles are distilled and organized within distinct tensor frameworks. These tensors serve as the foundation for the node indices within the constructed graph. To further aid our analysis, each trajectory's length is documented, and each article in the trajectory is associated with its unique identifier. This process is reiterated until a grand total of 3000 paths emerges. The graph G is created comprising m nodes and n edges, where nodes represent articles and edges denote links between articles. The extracted paths are referred to as trajectories. We have documented these trajectories, noting their lengths and the articles they connect. The graph is represented using the Graph Markup Language. Two distinct graph types have been created, each following a unique path selection process: Dense Graph: This is formed by a modified path selection protocol within the crawler. Here, the crawler opts for a random choice from the first five external links of an article. We choose the first five external links for path selection to intentionally narrow down the possible trajectories, aiming for a denser graph structure that facilitates a more focused analysis of interconnected topics. This results in a connected network among a smaller subset of 912 nodes, 1311 edges, and 3000 paths. The same process of path generation, involving the extraction of links, applying validity checks, and documenting each trajectory with unique identifiers, is followed as in the general dataset creation. Sparse Graph: This graph follows the initial broader selection process, incorporating a more extensive set of 7307 nodes, 10612 edges, and 3000 paths. The selection is made from all the external links. §.§ Article Categorization Categorization provides a structured framework to analyze the dataset. Organizing articles into distinct categories enables researchers to identify content trends and patterns within the generated paths. This categorization not only enriches the dataset but also amplifies its potential utility for diverse research, analytical, and educational purposes. Our categorization strategy focuses on dynamic online querying using DBpedia <cit.>. In order to determine the category of a given Wikipedia article, we rely on the SPARQL endpoint of DBpedia. Each article is queried to retrieve its semantic type from DBpedia's ontology. Whenever an explicit type is not obtained or if there are errors during the querying process, the articles are classified under . 
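As a compact illustration of the crawling procedure described above, the sketch below reproduces the random-walk path generation with the one-second politeness delay, the 4–7 article length target, and the dense-graph restriction to the first five external links. The link-fetching function, the exact exclusion terms, and the other names are assumptions for illustration; this is not the released crawler.

```python
import random
import time

def is_valid(title, path):
    """Validity filter sketched from the criteria above; the full exclusion list
    in the paper also contains further terms that are elided here."""
    banned = ("ISO", "%", "#", ":")
    return (title not in path
            and not title.isdigit()
            and not any(tok in title for tok in banned))

def generate_paths(get_links, start="Central Macedonia", n_paths=3000, dense=False):
    """Random-walk path generator; get_links(title) is assumed to return the
    article's outgoing links (e.g., via the MediaWiki API or a local dump)."""
    paths = []
    while len(paths) < n_paths:
        path, target_len = [start], random.randint(4, 7)
        while len(path) < target_len:
            time.sleep(1.0)                    # politeness delay between requests
            links = get_links(path[-1])
            if dense:                          # dense graph: first five external links only
                links = links[:5]
            candidates = [t for t in links if is_valid(t, path)]
            if not candidates:                 # no valid external links: stop this path early
                break
            path.append(random.choice(candidates))
        paths.append(path)
    return paths
```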
§.§ Feature Extraction In addition to graph generation, a feature extraction process is conducted to leverage semantic information from the content of the articles and to capture complex interactions in the graph structure. According to <cit.>, the feature vector for the nodes corresponds to its , and its length is 2. For edges, the feature vector contains the Text Frequency - Inverse Document Frequency (), capturing the semantic similarity between source and destination articles of a hyperlink <cit.>, and the number of times the link was clicked in the training dataset of paths (). §.§.§ Dual Hypergraph Transformation The framework commences with the configuration of a conventional graph, designated as G having n nodes and m edges. Node features are represented by a feature matrix F∈ℝ^n × d, and edge features by a feature matrix E∈ℝ^m × d'. Here, d and d' are the size of node and edge feature vectors, respectively. Considering an undirected graph, the incidence matrix is defined as M∈{0, 1}^n × m. In the case of a directed graph, the incidence matrix is defined as M∈{-1,0,1}^n × m. In any case, the incidence matrix represents the relationships between nodes and edges in a graph, indicating which nodes are connected by specific edges. The conventional graph and the corresponding dual hypergraph are represented as G = (F, M, E) and G^∗ = (F^∗, M^∗, E^∗) respectively. F^∗ represents the node features of hypergraph while E^∗ represents the hyperedge features. The DHT algorithm interchanges the roles of nodes and edges of the original graph <cit.>. That is, the edges of the original graph are reinterpreted as nodes in the dual hypergraph, while the original nodes become hyperedges in the dual hypergraph. Accordingly, F^∗ = E∈ℝ^m × d' and E^∗ = F∈ℝ^n × d. The incidence matrix of the dual hypergraph is the transpose of the incidence matrix of the original graph, i.e., M^∗ = M^⊤. The transformation is mathematically defined as: G = (F, M, E) → G^∗ = (E, M^⊤, F) Notably, the DHT is a reversible transformation, ensuring that applying it to G^∗ recaptures the initial graph G, thereby preserving the structural and feature integrity of the transformation. §.§.§ Features extracted from the dual hypergraph Following the methodology proposed in <cit.>, the original graph is transformed into its corresponding dual graph by applying the DHT algorithm in order to capture more complex interactions among edges. Two new features are extracted, namely the and the . The first feature assumes an undirected graph, while the second one assumes a directed graph. The implementation of dual hypergraph feature extraction, which significantly enhances the predictive accuracy of our models, can be found in <cit.>. For the feature, the first step is to construct the incidence matrix M∈{0,1}^n× m. Row vector q_l ∈{0,1}^m of M, corresponds to node l. The cosine similarity between the incidence row vectors q_v and q_u is computed, where v is the source node and u is the target node of an arbitrary edge e. The corresponding vector in the M^∗ matrix is a column vector q_l^∗∈{0,1}^m ≡q_l^⊤. The position of each 1 in this column vector indicates which nodes of the dual hypergraph are connected with the hyperedge l^∗. For the , a directed graph G is assumed. The corresponding incidence matrix is defined as M∈{-1,0,1}^n× m. To extract features associated with the input and output degrees of the dual hypergraph nodes, determining the direction of hypergraph edges becomes essential. 
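Before turning to how the direction of the hypergraph edges is determined for the degree-based feature, the transformation and the cosine-similarity feature described so far can be sketched as follows; the array names follow the notation above, and the sketch is illustrative rather than the released implementation.

```python
import numpy as np

def dual_hypergraph(F, M, E):
    """DHT: node features, incidence matrix, and edge features of G become
    the edge features, transposed incidence matrix, and node features of G*."""
    return E, M.T, F   # F* = E, M* = M^T, E* = F

def incidence_cosine_feature(M, edges):
    """For each edge e = (v, u), cosine similarity between the incidence rows
    q_v and q_u of the undirected incidence matrix M (n x m, entries in {0, 1})."""
    feats = []
    for v, u in edges:
        qv, qu = M[v], M[u]
        denom = np.linalg.norm(qv) * np.linalg.norm(qu)
        feats.append(float(qv @ qu) / denom if denom else 0.0)
    return np.asarray(feats)
```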
This involves an examination of the column vector of M^∗ q_l^∗≡q_l^⊤. The position of each 1 in this column vector indicates which nodes of the dual hypergraph are connected with the hyperedge l^∗. For every combination (v^∗_i, v^∗_j), we verify the existence of a path e_i → e_j in the original graph that passes through the scrutinized node l. The new feature is the in-degree and out-degree of dual hypergraph nodes which are normalized by the maximum observed degree D_max in the hypergraph to facilitate comparison across different nodes: 0.9!Normalized In/Out-Degree (v_i^*)=In/Out-Degree (v_i^*)/D_max The aggregation of and results in feature. These enhanced features are particularly critical in the sparse graph context, where the reduced number of connections demands a more nuanced approach to capturing node relationships. In the dense graph, with its inherently richer connectivity, these features play a pivotal role in distilling the essence of the network's complexity into a format conducive to advanced path prediction algorithms. The feature extraction procedure is performed on the sparse graph with 7,307 nodes and 10,611 edges and the dense graph with 912 nodes and 1,311 edges. §.§ Path Extrapolation Employing GRETEL The paper addresses path extrapolation focusing on predictive path analysis via the GRETEL model <cit.>. The graph G consists of nodes and edges, represented as G=(𝒱, ℰ), with n=|𝒱| denoting the node count and m=|ℰ| the edge count, respectively. An agent progresses through the graph, stepping from node v_i to v_j contingent on the presence of a directed edge e_i → j∈ℰ. The agent's position at time t is a sequential set of traversed nodes, symbolized as a given prefix p=(v_1, v_2, …, v_t). Let the path suffix s=(v_t+1, …, v_t+h) be a collection of potential future for prediction horizon h. Within this setting, GRETEL is leveraged to estimate the conditional likelihood Pr(s| h, p, G) of path suffix s given the prefix p, the horizon h, and the graph G. The agent's position at each step t is encoded by a sparse vector 𝐱_t ∈ℝ_≥ 0^n normalized to a unit sum, with its i-th element reflecting the likelihood of the agent being at node v_i. GRETEL constructs a generative model that considers the directionality of edges via a latent graph with edge weights informed by a Multi-Layer Perception (MLP) that respects the graph's inherent directionality. The model's essence lies in its ability to forecast paths by learning from the traversed sequences, leveraging node features and the collective path history. More specifically, the non-normalized weights of each edge are computed by z_i→ j = MLP (c_i, c_j, f_i, f_j, f_i→ j), where c_i and c_j are the pseudo-coordinates of the sender and the receiver node, respectively, while f_i and f_j denote the features of the sender and the receiver node, respectively. In (<ref>), f_i→ j is the feature vector of the edge that connects the sender and the receiver node. The computed MLP outputs are normalized with the softmax function. The pseudo-coordinates c_i are computed using a GNN of K layers. They are the agent representations x_τ for τ∈ℐ, where ℐ denotes a trajectory. The non-zero elements of x_τ refer to the distance between the agent and the K closest graph nodes normalized to measure one. Let e⃗_t and e⃗_t define the edges that go from v_t → v_t+1 and v_t → v_t-1, respectively. Let also x_t be the last position of the agent. GRETEL <cit.> can be trained through the target likelihood. 
That is, given a target distribution x_t+h, the model tries to estimate the destination distribution x̂_t+h ∈ℝ^n × 1 over a horizon h by the non-backtracking walk <cit.>

x̂_t+h = B_ϕ^+ P_ϕ^h B_ϕ x_t.

Let w_ϕ (e_k → j) stand for the normalized MLP weights. In (<ref>), P_ϕ∈ℝ^m × m has elements

[P_ϕ]_e_i → j, e_k → l = 0, if j ≠ k or i = l,
[P_ϕ]_e_i → j, e_k → l = w_ϕ (e_k → l) / (1 - w_ϕ (e_k → i)), otherwise,

B_ϕ is an m × n matrix with

[B_ϕ]_e_i→ j, k = 0, if k ≠ i,
[B_ϕ]_e_i→ j, k = w_ϕ (e_k → j), otherwise,

and 𝐁_ϕ^+ stands for the pseudoinverse of 𝐁_ϕ. Such an approach integrates node and edge feature vectors, the former delineating the in/out-degree and the latter embedding the textual and usage-based similarity metrics. These primal features are pivotal in the model's capacity to estimate the suffix likelihood, aiding in approximating the path probability Pr(s | h, φ, G). In the paper, we aggregate the original edge features f_i→ j with the features extracted from the dual hypergraph.
§ EXPERIMENTS AND RESULTS
To quantify the structure of each graph, we calculate the density, which measures how complete the graph is. The density is defined as the ratio of the number of edges m to the number of possible edges; for a directed graph without loops, D = m / (n(n-1)), where n is the number of nodes. Table <ref> summarizes the characteristics of the graphs used in the experiments, providing a clear comparison of the number of nodes, edges, and density across the Sparse Graph, the Dense Graph, and Wikispeedia. Based on the characteristics outlined in Table <ref>, the sparse graph has a lower density ratio due to its larger node count. In contrast, the dense graph, with fewer nodes, exhibits a higher density ratio. Notably, the Wikispeedia dataset possesses the greatest density ratio of the three. The following metrics are used to assess the feature vectors. Target probability measures the average chance that the model will choose a node with non-zero likelihood. Choice accuracy measures how accurate the decisions of an algorithm are at each crossroad of the ground-truth path connecting nodes v_t and v_t+h; it is computed on nodes whose degree is at least 3. precision top1 measures how often the correct next step appears in the model's first prediction only, while precision top5 evaluates how often the correct next step appears within the model's first five predictions. In all experiments, the node feature vector includes the in/out degree of the nodes, retaining a constant size of two, underscoring consistent complexity in nodal characteristics despite the variation in graph densities. An empirical assessment of model performance using the features derived from the original graph and those of the corresponding dual hypergraph is conducted. In the case of , the and features are used, yielding a feature vector of size 2. By aggregating the features of length 1, of length 2, and their combination of length 3, the associated edge feature vector has length 3, 4, and 5, respectively. Table <ref> summarizes the performance of the GRETEL model with original edge features and with the features extracted from the dual hypergraph (Dual GRETEL) added on top of the original edge features on the Sparse Graph. Table <ref> repeats the model's performance assessment on the Dense Graph. Table <ref> details the model's performance on the Wikispeedia dataset.
This dataset encapsulates the essence of human navigational strategies within Wikipedia, compiling 51318 completed paths from the WIKI GAME where participants navigate through article links towards a target article, with an aim for efficiency in both clicks and time. The modularity class algorithm in <cit.> is used to identify the clusters within the network. These clusters contain nodes that are more densely connected to each other than to nodes in different clusters. The resulting clusters are indicated by the color coding of the nodes. The size of each node is proportional to its degree, reflecting the number of connections it has within the network. This allows for the immediate visual identification of highly connected nodes. The visible labels on the nodes in the figures were chosen because they have higher degree values, which show their importance in the graph, and they represent the main topic of each cluster within the expansive Wikipedia network. Figure <ref> represents the dense graph of Wikipedia. The selective navigation results in a dense network with several clusters, one of which is built around the Central Macedonia article, connecting closely related topics. Adjacent nodes like `History of Greece' and `Politics of Greece' form clusters that delve into the nation’s past and governance, and `Geographic Coordinate System' and `France' appear as nodes indicative of broader geographical discourse. The visualization of the sparse graph in Figure <ref> reveals a network that unfolds from the Central Macedonia article, forming a large, primary cluster due to the random link selection strategy, and extending outward into a sparse array of smaller clusters. These smaller clusters are thematic, with subjects such as European countries, Greek cities, and historical events. The construction methods of the two graphs distinctly shape their representations. The dense graph demonstrates that the Central Macedonia article forms a cluster, with surrounding clusters closely related in theme, predominantly focusing on Greece. This clustering suggests that the used method tends to group related topics tightly together. On the other hand, the sparse graph shows a different pattern where the Central Macedonia article and closely linked articles stand out in number, while other articles appear less connected. This difference highlights how the choice of links in the construction process can significantly affect the network's structure. Figure <ref> represents the Wikispeedia graph, characterized by uniformly sized nodes, indicative of a network without a predominant starting article. Clusters within the graph are thematically organized, with `Isaac Newton' and `Physics' forming a cluster around scientific inquiry, while `Westminster Abbey' serves as a node for the cluster concerning England. `Mammal' and `Zebra' are central to a cluster on zoology. These labels serve as the focal points for their respective clusters, marking the diverse subjects navigated by users. Table <ref> showcases examples of how the extracted features are employed to predict specific paths, highlighting the model’s ability to deduce the most probable outcomes. Table <ref> includes the conditional probabilities that reflect the model's ability to correctly anticipate the actual path taken. These examples are instrumental in illustrating the practical application of the model and the effectiveness of the features in guiding the model toward the most probable navigational route. 
The utilization of hypergraph features results in higher conditional probability compared to the use of features. The examples clearly show that when hypergraph features are considered, the model tends to assign a greater likelihood to the true path, suggesting that these features capture more of the complexities inherent in human navigational behaviors on Wikipedia. The examples are drawn from the sparse graph of the WCM dataset. §.§ Performance Analysis of Model Across Dense and Sparse Graphs In the evaluation of the Dual GRETEL model, distinct performances are observed between the sparse and dense graphs. A higher predictive accuracy with respect to precision top5 metric is measured for the dense graph than the sparse one. This improved performance can be attributed to the vital role of the hyperedges, which enrich the model's contextual framework for more accurate extrapolation. The sparse graph, despite its lower connectivity, shows commendable results, outperforming the dense graph in terms of target probability and choice accuracy. Dual GRETEL predicts the correct target with a probability of 69.71 ± 0.0038 %. GRETEL accurately chooses the next step with a rate of 51.18 ± 0.0011 %. It's noteworthy that except for the precision top 5, Dual GRETEL maintains a better performance on the sparse graph than the dense one. This comparison reveals that while the Dual GRETEL model benefits from the rich link structures in dense graphs for precision tasks, it retains substantial predictive strength in sparse settings. This insight may guide further optimization for the model, enhancing its adaptability across varying network densities. Also, this performance indicates that the model may benefit from the reduced complexity in sparse networks, potentially due to less noise and fewer connections, which can simplify the path prediction process. The comparison suggests that the model might generalize better in sparse environments, avoiding potential overfitting that can occur in dense networks with more intricate connections. Conversely, the specificity that dense networks provide can enhance the model's precision in certain contexts. §.§ Model Benchmarking on WCM Dense Graph Versus Wikispeedia Graph For the WCM dense graph, Dual GRETEL demonstrated a significant improvement, achieving an impressive precision top5 score of 83.8694 ± 0.0112%. On the WIKISPEEDIA graph, Dual GRETEL also showed enhanced performance with a precision top5 score of 30.14 ± 0.1%. This indicates that hypergraph features greatly enhance the model's ability to accurately identify the most likely paths in a dense environment. Furthermore, GRETEL demonstrated a high choice accuracy of 48.0602 ± 0.0135% on the dense graph compared to 23.2 ± 0.1% of Dual GRETEL on the Wikispeedia dataset. Our findings show that model performance on the dense graph improves across all metrics except choice accuracy when we use hypergraph features. That is, hypergraph features are particularly effective in densely connected graphs, enhancing the model's predictive accuracy across all metrics we tested. The results indicate the potential of hypergraph features to improve the performance of path prediction models like GRETEL, especially in complex network structures. The completion rate of paths in the Wikispeedia dataset may introduce additional complexity, given that there is a mixture of successful and abandoned paths. 
In contrast, the smaller dataset might offer more uniformly successful paths, influencing the ease with which the model can learn and predict. The analysis of the model's performance, as shown in Tables <ref>-<ref>, reveals a trend where effectiveness inversely correlates with graph density. This suggests that as graphs become more interconnected, the model encounters greater challenges in path prediction accuracy. These observations emphasize the critical role that graph density plays in the deployment and refinement of path prediction algorithms. A possible explanation for the deterioration of accuracy as density increases could be the rise in potential paths that the model must discern. In denser graphs, the increased interconnectivity results in a greater number of plausible trajectories between nodes, potentially complicating the model's task of pinpointing the most likely path. Furthermore, a dense network may introduce more noise in the form of less relevant or weaker connections, which could mislead the prediction algorithm. These findings indicate that models like GRETEL or Dual GRETEL may require adjustments or enhancements, such as more sophisticated feature extraction or the incorporation of context-aware learning mechanisms, to better handle the complexity introduced by higher-density graphs. § CONCLUSIONS A detailed analysis of GRETEL and its variant Dual GRETEL has been presented on dense and sparse graphs derived from the WCM dataset, aiming to improve path extrapolation models. Having developed the novel dataset centered on Central Macedonia, Greece, we have provided a resource that captures the complexity of human navigational patterns on Wikipedia. Our investigation has shown that the density of a graph significantly influences the effectiveness of path prediction methods. Both models have performed better on sparse graphs in various aspects, yet they have achieved higher accuracy with respect to the top five predictions on the dense WCM graph. Furthermore, the incorporation of hypergraph features into the GRETEL model yielding the Dual GRETEL variant has significantly enhanced the accuracy of path predictions, underscoring the importance of feature extraction in graph-based predictive analytics. Comparisons of Dual GRETEL performance on the more extensive Wikispeedia dataset against the WCM dense graph have also shown that the top metrics were measured on the WCM dense graph, despite its smaller size. This indicates that the model's success is influenced by the quality of the graph's structure and the features used. § ACKNOWLEDGEMENTS This research was carried out as part of the project “Optimal Path Recommendation with Multi Criteria” (Project code: KMP6-0078997) under the framework of the Action “Investment Plans of Innovation” of the Operational Program “Central Macedonia 2014-2020” that is co-funded by the European Regional Development Fund and Greece. IEEEtran
http://arxiv.org/abs/2406.17916v1
20240625195621
Camera Model Identification Using Audio and Visual Content from Videos
[ "Ioannis Tsingalis", "Christos Korgialas", "Constantine Kotropoulos" ]
cs.LG
[ "cs.LG" ]
Camera Model Identification Using Audio and Visual Content from Videos Ioannis Tsingalis, Christos Korgialas, Constantine Kotropoulos Department of Informatics Aristotle University of Thessaloniki Thessaloniki 54124, Greece Email: tsingalis, ckorgial, costas@csd.auth.gr July 1, 2024 =============================================================================================================================================================================================================== § ABSTRACT The identification of device brands and models plays a pivotal role in the realm of multimedia forensic applications. This paper presents a framework capable of identifying devices using audio, visual content, or a fusion of them. The fusion of visual and audio content occurs later by applying two fundamental fusion rules: the product and the sum. The device identification problem is tackled as a classification one by leveraging Convolutional Neural Networks. Experimental evaluation illustrates that the proposed framework exhibits promising classification performance when independently using audio or visual content. Furthermore, although the fusion results don't consistently surpass both individual modalities, they demonstrate promising potential for enhancing classification performance. Future research could refine the fusion process to improve classification performance in both modalities consistently. Finally, a statistical significance test is performed for a more in-depth study of the classification results. Camera Model Identification (CMI); Convolutional Neural Networks (CNNs); Sum and Product Fusion Rules; Statistical Testing; Multimedia Forensics. § INTRODUCTION Camera Model Identification (CMI) <cit.><cit.> emerges as an essential forensic tool, particularly in the pursuit of discerning the brand or model of a mobile phone from a recording <cit.> <cit.>. The forensic analysis delves into various multimedia elements, including audio recordings, images, and videos, to unravel the distinct characteristics and signatures of different mobile phone brands/models. By exploiting these signatures, forensic analysts can accurately determine the particular device that recorded the multimedia content, providing crucial insights into various investigations, such as identifying the perpetrators behind a felony scene. Two prominent types of signatures employed in device identification are Photo-Response Non-Uniformity (PRNU) <cit.> for images and Mel-Frequency Cepstral Coefficients (MFCCs) <cit.><cit.><cit.><cit.> extracted from audio recordings. PRNU analysis involves studying the unique noise patterns present in images, allowing forensic experts to identify the camera model with high precision. On the other hand, MFCCs, extracted from the audio recorded by a mobile phone speaker, serve as distinctive “fingerprints" that enable analysts to discern which mobile device is used for recording. Both methodologies contribute substantially to the forensic toolkit, offering valuable and intricate details regarding multimedia content's recording time and place. This encompasses insights into its creation process, source, authenticity, and other pertinent characteristics. However, the evolution of deep learning has catalyzed a notable shift in research focus, particularly emphasizing the application of Convolutional Neural Networks (CNNs) in extracting inherent patterns from multimedia content <cit.>. 
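To make the acoustic side of these signatures concrete, the short sketch below extracts MFCCs and a log-Mel spectrogram from the audio track of a video using librosa; it is a minimal sketch only, and the file path, sampling rate, FFT size, hop length, and number of Mel bands are illustrative placeholders rather than values taken from any particular study.

import librosa
import numpy as np

# Hypothetical path to the audio track extracted from a video under analysis.
audio_path = "video_audio_track.wav"

# Load the audio at its native sampling rate.
y, sr = librosa.load(audio_path, sr=None)

# Classical acoustic fingerprint: Mel-Frequency Cepstral Coefficients.
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=20)          # shape: (20, n_frames)

# Log-Mel spectrogram; the framework discussed below stacks three such
# spectrograms, computed with different window and hop sizes, into a
# 3-channel CNN input (the alignment step is not shown here).
mel = librosa.feature.melspectrogram(y=y, sr=sr, n_fft=2048,
                                     hop_length=512, n_mels=128)
log_mel = np.log(mel + 1e-10)                               # ln(XH + eps)

print(mfcc.shape, log_mel.shape)

In the CNN-based setting introduced above, such hand-crafted coefficients are increasingly complemented or replaced by representations that the network learns directly from inputs like the log-Mel spectrogram.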
This advancement has significantly enhanced the ability to classify and identify devices by analyzing raw video frames and log-Mel spectrograms as key inputs of CNNs, as described in Section <ref>. Consequently, this approach has expanded the scope of modalities used, going beyond traditional PRNU and MFCC analysis to incorporate a broader spectrum of features. The integration of CNNs marks a pivotal stride in the ongoing refinement of forensic techniques, offering a framework for device identification. The framework combines conditional probability densities of device identification given the audio and visual content in a late fusion manner, hoping to overcome any caveats when one of the two modalities is employed for CMI (i.e., a high noise regime in the visual content). Motivation and Contribution. Inspired by the application of CMI in forensics, this paper introduces a framework for CMI, treating it as a classification problem. CNNs trained on either audio or visual content are employed for this purpose. Experimental findings showcase promising performance when employing either audio or visual content individually. Furthermore, late fusion integrates the decision given the audio and visual content by utilizing fundamental fusion rules, namely the product and sum rule <cit.>. Applying these rules for classification offers valuable insights for future research in the fusion of modalities for CMI. Given the limited existing research in this area, this work represents a significant contribution to the literature, paving the way for further exploration. The code for the proposed framework can be found at <cit.>. The remaining paper is organized as follows. In Section <ref>, a survey of related works is undertaken. In Section <ref>, the dataset is described. Section <ref> outlines the proposed methodology with experimental results presented and discussed in Section <ref>. Finally, the paper is concluded in Section <ref>, discussing the results obtained and outlining potential methods for future research. font=footnotesize,sc,justification=centering,labelsep=period font=footnotesize,rm,justification=centering,labelsep=period § RELATED WORK Research on brand device identification has focused on examining camera video sequences to ensure accurate recognition. In <cit.>, an approach to CMI from videos was presented, utilizing extended constrained convolutional layers for extracting camera-specific noise patterns from color video frames. The approach offered robustness against compression techniques like WhatsApp and YouTube. An algorithm was proposed in <cit.> for the CMI of the mobile device that created a video, utilizing sensor noise and wavelet transform for identification. The experiments demonstrated its effectiveness. In <cit.>, an algorithm addressing geometric misalignment in device brand identification was introduced, leveraging frequency domain searches for scaling and rotation parameters to efficiently align characteristic noise patterns with camera sensor traces, employing real videos from a benchmark dataset. Moreover, in <cit.>, a CMI method was elaborated, incorporating encoding and encapsulation aspects into a joint metadata framework and employing a two-level hierarchical classification to achieve a 91% accuracy in identifying video classes among over 20,000 videos from four public datasets. 
In <cit.>, a CNN named PRNU-Net, integrating a PRNU-based layer for source camera identification, was developed in response to the security challenges posed by the widespread distribution of digital videos, demonstrating competitive performance by emphasizing low-level features. Deep learning methods were applied to the identification of source camera devices from digital videos in <cit.>, achieving record accuracies on the VISION <cit.> and QUFVD <cit.> datasets without the constraints of traditional PRNU-noise-based approaches. In <cit.>, an approach was introduced to address the challenges of video-based source camera identification, exacerbated by compression artifacts and pixel misalignment, by leveraging a resilient global stochastic fingerprint in the low- and mid-frequency bands. Additionally, fusion techniques were developed, employing multiple modalities further to enhance the robustness and accuracy of CMI tasks. In <cit.>, a deep learning-based system was introduced to address the gap in video CMI effectiveness, utilizing a CNN for analyzing temporally distributed patches from video frames and employing a fusion system to consolidate forensic information. An ensemble classifier was introduced in <cit.> for source camera identification, leveraging fusion features to detect software-related, hardware-related, and statistical characteristics imprinted on images by digital cameras. In <cit.>, an approach to CMI for video sequences was introduced, employing fusion techniques that leverage both audio and visual information within a multi-modal framework, demonstrating better performance over traditional mono-modal methods in tests conducted on the VISION dataset described in Section <ref>. § DATASET DESCRIPTION AND PREPARATION Here, the publicly available VISION dataset <cit.> <cit.> is utilized, comprising images and videos captured across various scenes and imaging conditions. As can be observed in Table <ref>, a total of 35 camera devices, representing 29 camera models and 11 camera brands, are encompassed within this dataset. Specifically, there are 6 camera models featuring multiple instances per model, facilitating an investigation into the performance of the proposed approach at the device level. VISION includes 648 native videos, which remain unaltered post-capture by the camera. These native videos were disseminated via social media platforms like YouTube and WhatsApp, with corresponding versions available in the dataset. Of the 684 native videos, 644 were shared via YouTube and 622 via WhatsApp. Upon being uploaded to YouTube, videos are compressed yet retain their initial resolutions, which span from 640 × 480 pixels for standard definition to as high as 1920 × 1080 pixels. In contrast, an alteration is observed when videos are shared on WhatsApp. Regardless of their original quality, they are rescaled to a resolution of 480 × 848 pixels. Through this process, the original video quality is often compromised on WhatsApp videos to ensure swift sharing and reduced data usage. Moreover, the videos obtained from each camera are classified into three distinct scenarios: flat, indoor, and outdoor. Flat videos depict scenes with relatively homogeneous content, such as skies and white walls. Indoor scenarios encompass videos captured within indoor settings, such as offices and homes. Conversely, outdoor scenarios feature videos of gardens and streets. 
This diversity in scene content underscores the suitability of the VISION dataset as a benchmark for assessing source camera identification. Taking into account the VISION dataset naming conventions outlined in <cit.>, videos captured by devices D04, D12, D17, and D22 are excluded due to issues encountered during frame extraction or audio track retrieval. The VISION dataset is partitioned into training, testing, and validation sets to conduct a typical five-fold stratified cross-validation so that the standard deviation of accuracy is estimated. The choice of 5 folds is a compromise between an acceptable estimation of the standard deviation of accuracy and computational time. The standard deviation is reduced after fusion. This demonstrates the precision of the method. § FRAMEWORK §.§ Audio and Visual Content Feature Extraction Our approach integrates audio and visual content to classify the videos within the VISION dataset. A description of the features extracted from the audio and visual content follows. Audio content. This phase encompasses extracting audio data from each video sequence and the computation of the log-Mel spectrogram. The log-Mel representation of each extracted audio is computed using three distinct windows and hop sizes. This results in a 3-channel log-Mel spectrogram that captures various frequency details, serving as a comprehensive feature representation for the CMI task. The log-Mel spectrograms are computed as follows. The Short-Time Fourier Transform (STFT) is performed on the audio signal, segmenting it into overlapping frames and providing a spectrogram representation of the signal's frequency content over time. Mathematically, the STFT of the input signal x[n] is expressed as X(m, f)=∑_n=-∞^∞ x[n] w[n-m] e^-j 2 π f n, where X(m,f) denotes the STFT at a specific time index m and frequency f, with w[n-m] representing the window function applied to the signal. The outcome of the STFT is a two-dimensional representation of the signal x[n], X of size T × F, with T denoting the number of temporal samples (i.e., overlapping frames) and F standing for the number of frequency bins. X is referred to as the spectrogram of signal x[n], having as elements the magnitude of the STFT. Following the STFT, the frequencies are transformed onto the Mel scale to produce the Mel spectrogram. This involves converting linear frequencies to the Mel scale using the expression Mel(f)=2595 ·log _10(1+f/700). Then, a series of triangular filters based on these Mel frequencies are applied to the magnitude spectrum of the STFT. The Mel filter bank is denoted by a two-dimensional matrix H of size F × K, where K is the number of triangular filters. The triangular Mel filters, each centered at a Mel frequency corresponding to a pitch p, are defined as H_p(f) = f-f_p-1/f_p-f_p-1 for f_p-1≤ f < f_p f_p+1-f/f_p+1-f_p for f_p ≤ f < f_p+1 0 otherwise, where f_p=Mel^-1(p) represents the center frequency of the filter corresponding to pitch p, and f_p-1 and f_p+1 are the center frequencies of the immediately adjacent filters. Finally, the Mel spectrogram is converted into a log-Mel spectrogram by applying a logarithmic transformation to its values Log-Mel Spectrogram = L= ln( XH + ϵ), where ϵ is a small constant added to prevent zero values. This logarithmic transformation mirrors the logarithmic nature of human loudness perception, ensuring that the resulting log-Mel spectrogram closely aligns with human auditory processing. Visual content. 
This stage involves extracting video frames and preprocessing them by resizing them to a predefined size of 256 × 256 × 3. Here, we use the raw video frames without performing any feature extraction, such as PRNU analysis. §.§ Unimodal Classification Methodology Let us consider a scenario where a pattern needs to be assigned to one of the classes {𝒞_c}_c=1^C. Furthermore, let {γ_m}_m=1^M be the set of random variables whose instances represent data samples of the mth modality. We denote the instances of the mth modality as {γ_m^(n)}_n=1^N. Furthermore, let ∘ be the function composition. If the classification system of the mth modality is realized by a neural network of L layers, we can denote its output activation as a_m^(n)[L] = (f_W_m^[L]^[L]∘ f_W_m^[L-1]^[L-1]∘⋯ f_W_m^[1]^[1]) (γ_m^(n)), where W_m^[l] and f_W_m^[l]^[l] are the parameters and the activation function of the lth layer, respectively. Consider the collection of parameters belonging to the Lth layer where each element is associated with the c'th classification node {w_m^c',[L]}_c'=1^C. Also, let exp(·) be the exponential function. When the output activation function f^[L] is the softmax function, the classification probabilities of the c' classification node are given by (𝒞_c'|γ_m^(n);w_m^c',[L]) = exp(w_m^c',[L]^⊤a_m^(n)[L-1])/∑_c=1^C exp( w_m^c,[L]^⊤a_m^(n)[L-1]) . In addition, the classification probabilities of the nth sample γ_m^(n) related to the mth modality are given by p_m^(n)[L] = [ (𝒞_1 |γ_m^(n);w_m^1, [L]); (𝒞_2 |γ_m^(n);w_m^2, [L]); ⋮; (𝒞_C |γ_m^(n);w_m^C, [L]) ]∈ℝ^C. In the remaining analysis, for simplicity, the superscript [L] is omitted. Given the samples {γ_m^(n)}_n=1^N of the mth modality, we obtain P_m = [p_m^(1), p_m^(2), …, p_m^(N)]∈ℝ^C × N. Loss function. Let T = [t^(1), …, t^(N)]∈ℝ^C × N be the matrix of target variables. The (c', n) element of T is denoted by t^(n)_c'. The target vector t^(n), that corresponds to the sample γ_m^(n), adheres to the one-hot encoding scheme. In this scheme, if γ_m^(n) belongs to class 𝒞_c', the target vector t^(n) has zero elements except for the c'th element, which is set to one. In the proposed framework, the cross-entropy loss E ({γ^(n)}_n=1^N, {W_m^[l]}_l=1^L ) = -∑_n=1^N ∑_c'=1^C t_c'^(n)ln [p_m^(n)]_c', is used by the mth classification system. Unimodal Training Our objective is to identify the camera model of each video within the VISION dataset. This task is treated as a classification problem, where each class in {𝒞_c=1^C} refers to the IDs in Table <ref>, with C=25. Each video is characterized by a single audio file and multiple video frames prepossessed following the guidelines in Section <ref>. The audio files are related to the audio content modality (m=1), while the video frames are associated with the visual content modality (m=2). Given this distinction, two separate CNNs are trained, one for each modality. To classify the audio files into one of the classes {𝒞_c=1^25}, a model <cit.> is utilized. The nth audio file, denoted by γ_1^(n), is assigned a vector of classification probabilities represented by p_1^(n). Figure <ref> depicts the flow chart of the CMI using only the audio content. Similarly, to classify the video frames into one of the classes {𝒞_c=1^25}, a  <cit.> model is utilized. The nth video file, denoted by γ_2^(n), is assigned a vector of classification probabilities represented by p_2^(n). As the model computes a probability vector for each video frame, p_2^(n) is calculated as the average probability vector of all frames of the nth video. 
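As a minimal illustration of the unimodal decision step just described, the sketch below converts hypothetical frame-level CNN logits into softmax probability vectors and averages them into a single per-video vector; the number of frames, the logits, and the class count are placeholders, and the specific backbone architectures used in the paper are referred to only through its citations.

import numpy as np

def softmax(z):
    """Numerically stable softmax over the last axis."""
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

C = 25                                    # number of camera-model classes

# Hypothetical logits produced by the visual-content CNN for the frames
# extracted from one video; shape (num_frames, C).
frame_logits = np.random.randn(40, C)

frame_probs = softmax(frame_logits)       # per-frame probability vectors
video_prob = frame_probs.mean(axis=0)     # average over all frames of the video

predicted_class = int(np.argmax(video_prob))

The per-video probability vectors obtained in this way are what the late-fusion stage described next operates on.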
Figure <ref> depicts the flow chart of the CMI using only the video content. Unimodal Testing Procedure. The predicted classes of each sample {γ_m^(n)}_n=1^N are given by c_m = [C_m^1, C_m^2, …, C_m^N]^⊤∈ℝ^N, where C_m^n = arg max_c=1,…,C [p_m^(n)]_c is the predicted class of the nth sample with C_m^n∈{𝒞_c=1^C}. §.§ Multimodal Classification Methodology Multi-modal deep learning has demonstrated effectiveness in previous studies <cit.>. Here, we utilize the product and sum rule for late fusion <cit.>. Note that late fusion takes place after the classification models have been trained and used to generate the classification probabilities for each sample. The product rule is given by P_prod = P_1 ⊙P_2 ⊙⋯⊙P_M ∈ℝ^C × N, where ⊙ denotes the Hadamard (element-wise) product. The sum rule is given by P_sum = P_1 + P_2 + ⋯ + P_M ∈ℝ^C × N. Testing Procedure. After performing late fusion, the predicted class for each sample in {γ_m^(n)}_n=1^N is determined by applying (<ref>) to P_prod or P_sum. This process yields the classification results obtained using the product or sum rule, respectively. § EXPERIMENTAL EVALUATION Table <ref> summarizes the results when the visual and audio content are used separately. As can be seen, the mean accuracy using visual content in the Native, WhatsApp, and YouTube categories is 88.24%, 69.43%, and 71.77%, respectively. When audio content is used, the mean accuracy in the Native, WhatsApp, and YouTube categories is 93.99%, 91.11%, and 91.89%, respectively. Table <ref> summarizes the results achieved by applying late fusion on the outcomes obtained by the classifiers related to the visual and audio content. The late fusion uses the product or sum rule described in Section <ref>. As can be seen, the mean accuracy using the product rule in the Native, WhatsApp, and YouTube categories is 97.64%, 92.93%, and 95.59%, respectively. When the sum rule is used, the mean accuracy in the Native, WhatsApp, and YouTube categories is 96.33%, 93.72%, and 93.77%, respectively. Comparing the results in Tables <ref> and <ref>, when the product rule performs the fusion, the mean accuracy in the Native, WhatsApp, and YouTube categories is improved by 9.4, 23.5, and 23.82 percentage points, respectively, relative to the visual-only results. When the sum rule performs the fusion, the accuracy in the Native, WhatsApp, and YouTube categories is improved by 2.34, 2.61, and 1.88 percentage points, respectively, relative to the audio-only results. In summary, combining the classification probabilities obtained from visual and audio content demonstrates a promising improvement in classification performance. Next, we study the null hypotheses: * H_0,1: The classification performances achieved by the two fusion rules are equivalent. * H_0,2: The classification performance achieved solely with visual content is equivalent to that achieved with the product rule. * H_0,3: The classification performance achieved solely with audio content is equivalent to that achieved with the product rule. We have significant evidence or highly significant evidence against H_0,i, for i=1,2,3, when the p-value falls within the range [0.01, 0.05] or [0, 0.01], respectively. When the p-value is greater than 0.05, we do not have significant evidence against H_0,i, for i = 1, 2, 3.
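Before turning to the significance tests, the fusion and decision steps defined above, together with the 2×2 contingency table required by McNemar's test, can be sketched as follows; the probability matrices, labels, and sizes are random placeholders, and statsmodels is used here only as one possible implementation of the test.

import numpy as np
from statsmodels.stats.contingency_tables import mcnemar

C, N = 25, 200                                  # classes and test videos (placeholders)
rng = np.random.default_rng(0)

# Placeholder per-video probability matrices from the two unimodal classifiers,
# each of shape (C, N) as in the text.
P1 = rng.dirichlet(np.ones(C), size=N).T        # audio content
P2 = rng.dirichlet(np.ones(C), size=N).T        # visual content

P_prod = P1 * P2                                # product (Hadamard) rule
P_sum = P1 + P2                                 # sum rule

pred_prod = P_prod.argmax(axis=0)
pred_sum = P_sum.argmax(axis=0)

# Hypothetical ground-truth labels.
y_true = rng.integers(0, C, size=N)
ok_prod = pred_prod == y_true
ok_sum = pred_sum == y_true

# 2x2 table of joint correct/incorrect decisions of the two rules, the input
# expected by McNemar's test.
table = np.array([[np.sum(ok_prod & ok_sum), np.sum(ok_prod & ~ok_sum)],
                  [np.sum(~ok_prod & ok_sum), np.sum(~ok_prod & ~ok_sum)]])

result = mcnemar(table, exact=False, correction=True)
print("McNemar p-value:", result.pvalue)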
The p-values are computed by applying McNemar's significance test <cit.> <cit.>. Table <ref> summarizes the computed p-values for H_0,1. Most of the p-values exceed the predetermined significance threshold, so we lack significant evidence against H_0,1. Table <ref> summarizes the computed p-values for H_0,2. It is evident that we have significant evidence against H_0,2. Table <ref> also summarizes the computed p-values for H_0,3. Most of the p-values exceed the predetermined significance threshold, so we lack significant evidence against H_0,3. § DISCUSSION AND FUTURE WORK Unlike <cit.>, which analyzes smaller segments (patches) extracted from video frames and log-Mel spectrograms, our framework utilizes the entirety of these data sources for prediction. While this difference in the prediction process prevents a direct comparison, we still report the accuracy results achieved by <cit.> to provide a general sense of our framework's potential. As reported in Table <ref>, the proposed framework achieves a mean accuracy of 76.31% and 92.33% when the visual and audio content are used, respectively. The mean accuracy is computed across the categories Native, WhatsApp, and YouTube. The corresponding accuracies in <cit.> for the visual and audio content are 74.84% and 67.81%, respectively. Regarding the fusion results returned by the proposed framework, the best mean accuracy across the Native, WhatsApp, and YouTube categories in Table <ref> is 95.38%. The latter accuracy is achieved by the product rule. The corresponding accuracy in <cit.> is 95.27%. Both unimodal and bimodal classification indicate the potential of our approach for CMI, with the product rule demonstrating better performance than the sum rule. The superior performance of the product rule can be attributed to the higher joint probabilities generated when all modalities align, as observed in the mean results presented in Table <ref>. Future work will focus on various key areas to further analyze our framework. The robustness of the framework can be investigated under different levels of noise. Possible overfitting issues can be analyzed by performing training with more lightweight models <cit.>. Other datasets that contain more recent devices, like the FloreView dataset <cit.>, can be employed to evaluate the proposed framework. § CONCLUSION CMI holds significant importance in multimedia forensic applications. This paper introduces a framework capable of device identification using audio, visual content, or a combination of both. CNNs are employed to address the device identification problem as a classification task. Experimental evaluation demonstrates a promising classification accuracy when independently using audio or visual content. Additionally, combining audio and visual content may lead to notable enhancements in classification performance, suggesting a potential area for further research. § ACKNOWLEDGMENTS This research was supported by the Hellenic Foundation for Research and Innovation (HFRI) under the “2nd Call for HFRI Research Projects to support Faculty Members & Researchers" (Project Number: 3888).
http://arxiv.org/abs/2406.18507v1
20240626172809
Pseudo-Dirac Neutrinos and Relic Neutrino Matter Effect on the High-energy Neutrino Flavor Composition
[ "P. S. Bhupal Dev", "Pedro A. N. Machado", "Ivan Martinez-Soler" ]
hep-ph
[ "hep-ph", "astro-ph.CO", "astro-ph.HE" ]
CETUP-2023-022, FERMILAB-PUB-24-0317-T, IPPP/24/35 bdev@wustl.edu Department of Physics and McDonnell Center for the Space Sciences, Washington University, St. Louis, MO 63130, USA pmachado@fnal.gov Theoretical Physics Department, Fermilab, P.O. Box 500, Batavia, IL 60510, USA ivan.j.martinez-soler@durham.ac.uk Institute for Particle Physics Phenomenology, Durham University, South Road, DH1 3LE, Durham, UK § ABSTRACT We show that if neutrinos are pseudo-Dirac, they can potentially affect the flavor ratio predictions for the high-energy astrophysical neutrino flux observed by IceCube. In this context, we point out a novel matter effect induced by the cosmic neutrino background (CνB) on the flavor ratio composition. Specifically, the active-sterile neutrino oscillations over the astrophysical baseline lead to an energy-dependent flavor ratio at Earth due to the CνB matter effect, which is distinguishable from the vacuum oscillation effect, provided there is a local CνB overdensity. Considering the projected precision of the 3-neutrino oscillation parameter measurements and improved flavor triangle measurements, we show that the next-generation neutrino telescopes, such as IceCube-Gen2 and KM3NeT, can probe the pseudo-Dirac neutrino hypothesis in a distinctive way. Pseudo-Dirac Neutrinos and Relic Neutrino Matter Effect on the High-energy Neutrino Flavor Composition Ivan Martínez-Soler July 1, 2024 ======================================================================================================== § INTRODUCTION Despite great progress in neutrino physics over the past decades, the nature of neutrino mass remains unknown. Neutrinos could be either Majorana or Dirac particles. Or they could be somewhere in-between, namely, pseudo-Dirac <cit.>, which are fundamentally Majorana fermions, but behave like Dirac particles in laboratory experiments because of the extremely small mass-squared splitting (δ m^2) between the active and sterile components. The theoretical and model-building aspects of pseudo-Dirac neutrinos have been extensively discussed in the literature; see e.g., Refs. <cit.>. In fact, in any model where the neutrinos start as Dirac particles with naturally small masses could actually receive quantum gravity corrections making them pseudo-Dirac particles at a more fundamental level. These corrections will generate small δ m^2 via higher-dimensional operators suppressed by the Planck scale. It is interesting to note that certain string landscape (swampland) constructions also predict pseudo-Dirac neutrinos <cit.>. Small δ m^2 values could also be linked to the observed baryon asymmetry of the Universe <cit.>. Recently, the pseudo-Dirac neutrinos were also shown to resolve the excess radio background issue <cit.>. Irrespective of the theoretical motivations, the only experimental way to directly probe the active-sterile oscillations of pseudo-Dirac neutrinos with tiny mass splittings is by going to extremely long baselines, which is possible with astrophysical sources of neutrinos, such as solar <cit.>, supernova <cit.> or high-energy astrophysical <cit.> neutrinos. In fact, stringent upper limits on δ m^2_1,2≲ 10^-12  eV^2 have been derived using the solar neutrino data <cit.>. These limits are derived assuming the usual maximal active-sterile neutrino mixing in the pseudo-Dirac scenario. If the mixing is non-maximal, the δ m^2 limits can be much weaker <cit.>. 
Moreover, the solar neutrino data is not sensitive to δ m^2_3 due to the smallness of θ_13, and the limit from atmospheric, accelerator and reactor neutrino data is rather weak, δ m^2_3≲ 10^-5  eV^2 <cit.>, due to the much shorter baselines. There also exists an old limit on δ m^2_i≲ 10^-8  eV^2 for maximal mixing from Big Bang Nucleosynthesis considerations <cit.>. The recent identification of a few point sources for astrophysical neutrinos <cit.> allowed us to set the first IceCube limits on the pseudo-Dirac neutrino hypothesis in the δ m^2_i∈ [10^-21,10^-16]  eV^2 range <cit.>; see also Refs. <cit.> for related analyses. However, these studies only used the IceCube track-like sample (mostly involving muon neutrinos, with a small fraction coming from tau-induced tracks), and hence, were insensitive to the full neutrino flavor information. This is justifiable because the track events have excellent angular resolution of ≲ 0.2^∘ <cit.> and are therefore ideal for point source identification <cit.>, unlike the cascade events which have a poor angular resolution of ∼ 10^∘–15^∘ at IceCube <cit.>. The cascade resolution will significantly improve up to 1.5^∘ at KM3NeT <cit.> with their current high-energy cascade reconstruction algorithm, and even sub-degree resolution can be achieved with better reconstruction algorithms using the timing information and elongation emission profile of cascades <cit.>. In this paper, we study how including the cascade events can give us additional information on the pseudo-Dirac neutrino hypothesis. In particular, we show that the flavor ratio measurements of high-energy neutrinos, from either diffuse or point sources, would be affected in the presence of pseudo-Dirac neutrinos, except for the special case when all three active-sterile mass splittings are exactly the same. Given the fact that the flavor ratio measurements are expected to improve significantly <cit.> with the next-generation neutrino telescopes, such as IceCube-Gen 2 <cit.>, KM3NeT <cit.>, Baikal-GVD <cit.>, P-ONE <cit.>, TRIDENT <cit.>, TAMBO <cit.>, Trinity <cit.> and RET <cit.>, they will provide an unprecedented opportunity to test the pseudo-Dirac neutrino hypothesis. The final flavor ratio measured on Earth crucially depends on the initial source flavor composition which is currently unknown. We take this into account by considering different well-motivated choices for (ν_e:ν_μ:ν_τ) at the source,[Since IceCube cannot distinguish between neutrinos and antineutrinos on an event-by-event basis (with the exception of the Glashow resonance <cit.> for which we lack statistics <cit.>), we take the sum of neutrinos and antineutrinos for a given flavor.] namely, (i) (1/3:2/3:0) for the standard pion and muon decay <cit.>; (ii) (0:1:0) for the muon damped case <cit.>; (iii) (1:0:0) for neutron decay <cit.>; and (iv) (x:1-x:0) with x∈ [0,1] for the general case corresponding to a mixture of multiple processes/sources contributing to the neutrino flux. In each case, we compare the expectations from the standard 3-neutrino oscillation paradigm with the pseudo-Dirac scenario for a given δ m^2 to see whether they can be distinguished from each other on the flavor triangle. Note that since we are dealing with flavor ratios, we are insensitive to uncertainties related to the normalization or energy dependence of the astrophysical neutrino flux. 
Moreover, for the δ m^2 values of interest here, we show that the matter effect due to the cosmic neutrino background (CνB) can play an important role in determining the flavor ratios on Earth, depending on the value of the local CνB overdensity. This is in contrast with the pure vacuum oscillations assumed so far in the vast literature of flavor ratio studies (see e.g., Refs. <cit.>).[The effect of source matter effect on the flavor composition of high-energy neutrinos from active galactic nuclei was recently considered in Ref. <cit.>, but this becomes important only for heavily Compton-thick sources with column density ≳ 10^30  cm^-2 whose exact population or contribution to the observed flux at IceCube is currently unknown.] This is because only the left-handed component of the pseudo-Dirac neutrino actively interacts via standard weak interactions, whereas the right-handed component is sterile. Thus, the neutral-current interactions of the left-handed component of the high-energy neutrino flux with the CνB bath would induce a difference in the matter potential for a given flavor (depending on which δ m_i^2≠ 0), which could modify the oscillation probabilities, and could even induce an MSW resonance <cit.> for suitable values of δ m_i^2. This is unlike the standard 3-neutrino case where the neutral-current interaction equally affects all three flavors and does not lead to a matter potential difference between different flavors. The same is true if all three active-sterile mass splittings δ m^2_i are the same, in which case there is no matter potential difference induced by CνB either. Thus, including the CνB matter effect would provide an additional handle on probing small δ m_i^2 values at neutrino telescopes. Moreover, the matter effect introduces a novel energy-dependent flavor transition, which will help us disentangle the pseudo-Dirac scenario. The rest of the paper is organized as follows: In Section <ref>, we review the standard 3-flavor oscillation paradigm for the flavor triangle analysis. In Section <ref>, we present the pseudo-Dirac case with oscillations in vacuum and in matter, but in a time-independent background. In Section <ref>, we discuss the CνB matter effect in an expanding Universe. In Section <ref>, we include the CνB overdensity and the finite cluster size effect. Our results are given in Section <ref>. We conclude with some final remarks in Section <ref>. § STANDARD CASE In the standard 3-neutrino oscillation scenario, the neutrino flavor eigenstates |ν_α⟩, with α=e,μ,τ, are related to the mass eigenstates |ν_i⟩, with i=1,2,3, via a unitary transformation, i.e. |ν_α⟩ =∑_i=1^3 U_α i^*|ν_i⟩ , where U is the 3× 3 Pontecorvo-Maki-Nakagawa-Sakata (PMNS) lepton mixing matrix, parameterized in terms of three mixing angles θ_ij and a Dirac CP phase δ_ CP <cit.>.[If neutrinos are Majorana, U contains two additional phases which however do not affect oscillations.] The characteristic neutrino oscillation length scale in vacuum is given by L^ std_ osc =4π E_ν/Δ m^2_ij ≃ 8× 10^-6 pc(E_ν/1  TeV)(10^-5  eV^2/Δ m_ij^2), where E_ν≫ m_i is the neutrino energy and Δ m_ij^2≡ |m_i^2-m_j^2| are the mass-squared differences. From this equation, it is clear that for high-energy neutrinos, L_ osc corresponding to either solar or atmospheric mass-squared splitting is much smaller than the typical distance (≳ Mpc) to the extragalactic astrophysical sources. 
Therefore, the standard 3-neutrino oscillations are rapid enough to average out over astrophysical baselines, and we are only sensitive to the averaged out ν_α→ν_β flavor transition probability, P_αβ^ std = ∑_i=1^3|U_α i|^2|U_β i|^2 , which depends on the 3-neutrino mixing angles, as well as on the Dirac CP phase to a lesser extent. When drawing the allowed regions in the flavor triangles, we will use the best-fit and 68% confidence level (CL) allowed values of the oscillation parameters from the recent NuFit 5.3 global fit <cit.>, assuming a normal mass ordering for concreteness. Note that the latest oscillation results from T2K <cit.> and NOνA <cit.> individually continue to show a mild preference for normal mass ordering, although their combination prefers inverted mass ordering <cit.>, so this is still an open question. Thus, for a given initial flavor composition at the source (f_e,f_μ,f_τ)_ S, the final flavor composition at Earth under standard vacuum oscillations is given by f_β,⊕ = ∑_α=e,μ,τ P_αβ^ std f_α, S , where we have normalized the flavor ratios so that they add up to unity, i.e., ∑ _αf_α, S=∑ _βf_β,⊕=1. Depending on the physical scenario for the initial source flavor composition, we can then calculate the final flavor composition at Earth using Eq. (<ref>). This will be referred to as the “standard case" in the following. § PSEUDO-DIRAC CASE Pseudo-Dirac neutrinos can be considered as three pairs of almost degenerate mass eigenstates. The Hamiltonian describing the neutrino evolution in vacuum is given by H^ PD_vac = U^† M_diag^2 U/2E_ν, where the masses can be separated into two sub-block 3× 3 diagonal matrices M_diag^2={m^2_iS,m^2_iA}, with the squared mass eigenvalues m^2_iS = m^2_i + δ m^2_i/2 , m^2_iA = m^2_i - δ m^2_i/2 , corresponding to the mass eigenstates ν_iS = sinθ_i ν_ia+cosθ_i ν_is , ν_iA = -i(cosθ_i ν_ia-sinθ_i ν_is) , with ν_ia and ν_is being the active and sterile components, respectively. In the case of pseudo-Dirac states, a maximal mixing between ν_S and ν_A states is assumed, i.e., θ_i=π/4, in which case the states coincide with the symmetric (ν_S=(ν_a+ν_s)/√(2)) and anti-symmetric (ν_A=-i(ν_a-ν_s)/√(2)) combinations of the active and sterile components. Therefore, the mixing matrix is given by U = 1/√(2)([ U 0_3×3; 0_3×3 U_R; ]) ([ 1_3× 3 i_3×3; 1_3×3 -i_3×3; ]) , where U is the PMNS matrix and U_R is the mixing matrix between the right-handed (sterile) states. The interactions of high-energy neutrinos with the CνB introduce a matter potential, given by V_ν = V_νdiag{1_3× 3, 0_3× 3},[For simplicity, we assume that the CνB matter effect is flavor-universal. This is certainly valid if the CνB contains neutrinos of all flavor with equal number densities and if they interact only via weak interactions. Decaying neutrinos <cit.> or the presence of flavor-nonuniversal nonstandard interactions <cit.> would need special treatment.] where V_ν=G_F n_ν/√(2), with n_ν being the CνB number density and G_F being the Fermi constant. To diagonalize the new Hamiltonian H^ PD_ mat=H^ PD_ vac+V_ν in the presence of the matter potential for the pseudo-Dirac case, we notice that V_ν commutes with both U and U_R. Therefore, we can use three rotation matrices, one for each pair of degenerate states. The effective mixing angle in matter is given by tan 2θ_i = δ m_i^2 sin(2θ_i)/δ m_i^2cos(2θ_i)-A≃ -δ m^2_i/A , where A=2E_ν V_ν. 
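As a numerical illustration of the averaged standard probabilities and the flavor ratio at Earth defined above, the minimal sketch below builds the PMNS matrix from representative normal-ordering best-fit angles (approximate values adopted here purely for illustration, not the exact global-fit inputs used in the analysis) and propagates a pion-decay source composition.

import numpy as np

def pmns(th12, th13, th23, dcp):
    """PMNS matrix in the standard parameterization (angles in radians)."""
    s12, c12 = np.sin(th12), np.cos(th12)
    s13, c13 = np.sin(th13), np.cos(th13)
    s23, c23 = np.sin(th23), np.cos(th23)
    eid = np.exp(1j * dcp)
    return np.array([
        [c12 * c13, s12 * c13, s13 * np.conj(eid)],
        [-s12 * c23 - c12 * s23 * s13 * eid, c12 * c23 - s12 * s23 * s13 * eid, s23 * c13],
        [s12 * s23 - c12 * c23 * s13 * eid, -c12 * s23 - s12 * c23 * s13 * eid, c23 * c13],
    ])

# Approximate normal-ordering best-fit angles, used only as illustrative inputs.
U = pmns(np.radians(33.4), np.radians(8.6), np.radians(49.0), np.radians(197.0))
U2 = np.abs(U) ** 2                       # |U_{alpha i}|^2, rows = (e, mu, tau)

P_std = U2 @ U2.T                         # averaged standard probabilities

f_source = np.array([1.0 / 3.0, 2.0 / 3.0, 0.0])   # pion-decay composition at the source
f_earth = P_std.T @ f_source
f_earth /= f_earth.sum()                  # unit-sum normalization

print(np.round(f_earth, 2))               # about (0.30, 0.36, 0.34) for these inputs

The matter-modified active-sterile mixing angle introduced above can be handled in the same numerical fashion once the potential term A is specified.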
Note that for non-maximal mixing, the standard Mikheyev-Smirnov-Wolfenstein (MSW) resonance condition <cit.> would have been obtained when A=δ m_i^2cos(2θ). But in the pseudo-Dirac case with maximal mixing to start with, the matter effect tends to take the effective mixing angle away from the maximal value of π/4, as shown in Eq. (<ref>). According to the ΛCDM model of cosmology, the CνB number density today is given by n_ν,0=3/4ζ(3)/π^2g_ν T^3_ν,0≃ 112  cm^-3 per neutrino flavor and the same for antineutrinos. Here T_ν,0=(4/11)^1/3T_γ,0≃ 1.7× 10^-4 eV is the CνB temperature and g_ν=2 is the number of degrees of freedom for each pseudo-Dirac neutrino. This gives a tiny matter potential V_ν≃ 7.4× 10^-36 eV which, however, becomes relevant for δ m^2≳ 2E_ν V_ν≃ 1.5× 10^-23  eV (E_ν/1  TeV). In the presence of CνB matter effect, the eigenvalues (λ_i S, λ_i A) of the diagonal matrix M^2_ diag are given by λ_i S = A/2cos 2θ_i + m^2_i + δ m^2_i/2sin 2θ_i , λ_i A = -A/2cos 2θ_i + m^2_i - δ m^2_i/2sin 2θ_i . In the limit when the matter potential is negligible, i.e. A ≪δ m_i^2, we recover maximal mixing between active and sterile neutrinos: θ_i→π/4 [cf. Eq. (<ref>)] and the usual eigenvalues λ_i S = m^2_i1 + δ m^2_i/2 and λ_i A = m^2_i1 - δ m^2_i/2 [cf. Eqs. (<ref>) and (<ref>)]. For very large matter potentials, on the other hand, the mixing between ν_i S and ν_i A decreases, reaching the limit λ_i S = A/2 + m^2_i/2 and λ_i A = -A/2 + m^2_i/2. In the scenario where the matter potential is constant along the neutrino evolution path, we can find the neutrino oscillation probability using the mixing angles and the eigenvalues from above. Considering the ν_α→ν_β oscillation probability between the active states, we get P_αβ = ∑_j |U_α jU^†_β j[cos^2θ_jexp(-iλ_jSL/2E_ν) .. . . + sin^2θ_jexp(-iλ_jAL/2E_ν) ]|^2 , where L is the propagation length. The oscillation length induced by the active-active mass splitting Δ m^2_j1, which is equal to Δ m^2_ sol≃ 7.4× 10^-5  eV^2 for j=2 and Δ m^2_ atm≃ 2.5× 10^-3  eV^2 for j=3 <cit.>, is much shorter than the distance traveled by astrophysical neutrinos [cf. Eq. (<ref>)] and is impossible to be resolved by the present detectors. Therefore, we average over it, thus obtaining P_αβ = ∑_j |U_α j|^2|U_β j|^2[cos^4θ_j + sin^4θ_j. . +2 cos^2θ_jsin^2θ_jcos( δm^2_j L/4E_ν) ] , where the effective mass-squared splitting in the presence of matter effect is given by δm_j^2 = √(A^2-2Aδ m_j^2cos(2θ_j)+(δ m_j^2)^2) ≃√((δ m_j^2)^2 + A^2) , which reduces to the vacuum mass-squared splitting δ m_i^2 when A≪δ m_j^2, as expected. The effective active-sterile oscillation length scale in the presence of matter is L_ osc = 4π E_ν/δm_j^2 , which now explicitly depends on the matter potential via Eq. (<ref>). It reduces to the vacuum case [cf. Eq. (<ref>) with Δ m^2→δ m^2] when A≪δ m_i^2. One might wonder whether the interactions of the high-energy neutrinos with the free electrons in the intergalactic medium (IGM) could also induce additional matter effect for the small δ m^2 values under consideration. The mean IGM electronic density is n_e∼ 10^-7  cm^-3 <cit.>, which corresponds to a matter potential V_e=√(2) G_F n_e ∼ 10^-44 eV. Even for a PeV-energy neutrino (the highest energy observed by IceCube), such a tiny matter potential will only be relevant if δ m^2∼ 10^-29  eV^2. However, in this case, the corresponding effective oscillation length is way beyond the size of the observable Universe, as we will see later. 
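Assuming, for illustration only, a constant potential along the entire path (the time-independent case above), the averaged pseudo-Dirac damping factor and the relative importance of the CνB and IGM potentials can be estimated with the short sketch below; the benchmark mass splitting, energy, source distance, and overdensity are illustrative choices inspired by values quoted in the text, not the full treatment used later in the paper.

import numpy as np

HBARC_EV_M = 1.97327e-7                   # hbar * c in eV m
MPC_IN_M = 3.0857e22                      # one Mpc in metres

def damping(dm2, E_eV, L_m, V_eV):
    """Averaged damping of one pseudo-Dirac pair in a constant potential:
    cos^4(th) + sin^4(th) + 2 sin^2(th) cos^2(th) cos(dm2_eff L / 4E)."""
    A = 2.0 * E_eV * V_eV                 # matter term, eV^2
    dm2_eff = np.sqrt(dm2 ** 2 + A ** 2)  # effective splitting, eV^2
    sin2_2th = dm2 ** 2 / (dm2 ** 2 + A ** 2)
    phase = dm2_eff * L_m / (4.0 * E_eV * HBARC_EV_M)
    return 1.0 - 0.5 * sin2_2th * (1.0 - np.cos(phase))

# Illustrative benchmark: dm^2_3 = 1e-17 eV^2, E = 40 TeV, a source at ~170 Mpc
# (roughly z = 0.04), and a relic-neutrino overdensity of 1e4.
dm2_3, E, L = 1e-17, 40e12, 170.0 * MPC_IN_M
V_cnub = 7.4e-36 * 1e4                    # eV, ambient potential scaled by the overdensity
V_igm = 1e-44                             # eV, mean IGM electron potential quoted above

print(damping(dm2_3, E, L, 0.0))          # vacuum
print(damping(dm2_3, E, L, V_cnub))       # with relic-neutrino matter
print(damping(dm2_3, E, L, V_igm))        # with IGM matter only

# With |U|^2 from a PMNS matrix, P_{mu tau} = sum_j |U_{mu j}|^2 |U_{tau j}|^2 D_j,
# where D_1 = D_2 = 1 here because dm^2_1 = dm^2_2 = 0.

For the IGM potential the damping factor is numerically indistinguishable from the vacuum result at these energies.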
Therefore, we can safely neglect the IGM matter effect and only consider the CνB matter effect. § PSEUDO-DIRAC NEUTRINOS IN EXPANDING UNIVERSE As the universe expands, the neutrino density from the CνB reduces. Considering that the neutrino density scales with the redshift as n_ν=n_ν, 0(1+z)^3, we have a matter potential that changes with redshift, or in other words, with time. To estimate whether the neutrino evolution in an expanding universe is adiabatic or not, we have to compare the inverse oscillation length (δm^2/2E_ν) with the transition between the massive states that is proportional to the variation of the effective mixing angle in matter (dθ / dx). Defining the adiabaticity parameter (γ) as the ratio between these two quantities <cit.>, we have γ = δm^2/2E_ν1/|dθ/dx| = 2/3δm^2 (1+z)/E_νsin 4θ (dz/dx) , where dz/dx is given by the expansion rate of the universe. For δ m^2 ≥ 10^-17eV^2 and E_ν < 1 PeV, we have γ > 1, which indicates that the evolution is adiabatic. In this adiabatic regime, the ν_α→ν_β oscillation probability is given by P_αβ = ∑_j |U_α j|^2|U_β j|^2[cos^2θ^i_jcos^2θ^f_j + sin^2θ^i_jsin^2θ^f_j. .+ 1/2sin 2θ^i_jsin2θ^f_jcos( ∫ dx δm^2_j/4E_ν) ] , where θ^i and θ^f correspond to the effective mixing angles [cf. Eq. (<ref>)] when the neutrinos were created and today, respectively. When the matter effect is small, θ^i_j≃θ^f_j ≃θ_j and δm_j^2 can be taken out of the integral. In this case, Eq. (<ref>) simply reduces to Eq. (<ref>). In the parameter regime where the adiabaticity condition is not satisfied, we cannot express the oscillation probability analytically as in Eq. (<ref>). In such cases, we compute the oscillation probability purely numerically from the solution of the evolution equation, i.e. ⟨ν_β|ν_α⟩(t) = exp[-i∫_0^t dt' H (t')]. Note that in Eq. (<ref>), the effective oscillation length, as well as δm_j^2, is now a function of the redshift. In particular, for the active-sterile oscillations to take effect, the oscillation length L_ osc must be comparable to or smaller than the effective source distance, given by <cit.> L_ eff = ∫_z_ min^z_ maxc dz/H(z)(1+z)^2 , where the Hubble parameter is H(z)=H_0√(Ω_ m(1+z)^3+Ω_Λ+(1-Ω_ m-Ω_Λ)(1+z)^2) , where Ω_ m and Ω_Λ are the fractions of matter (both visible and dark) and dark energy content in the Universe, respectively. We use the best-fit values from Planck data: Ω_ m=0.315, Ω_Λ=0.685 and H_0=67.4  km· s^-1· Mpc^-1 <cit.>. Because of this choice of the unit for H_0, we have shown the speed of light c explicitly in Eq. (<ref>) to make it dimensionally correct. As for the maximum redshift value, we will take z_ max=5, beyond which the star formation rate decreases rapidly <cit.>, and we do not expect any astrophysical sources of high-energy neutrinos to exist beyond this redshift. Similarly, for the minimum redshift, we take z_ min=10^-7, corresponding to the galactic center. Since the galactic contribution to the high-energy neutrino flux at IceCube is sub-dominant <cit.>, taking even smaller values of z_ min will not significantly affect our results. § INCLUDING CΝB OVERDENSITY The values of δ m^2 that are sensitive to the CνB matter effect very much depend on the incoming energy of the high-energy neutrinos. This is illustrated in Fig. <ref> by the solid lines for three benchmark values of δ m^2. The corresponding dashed lines show the fixed δm^2 values. The deviation of the dashed lines from the solid lines, therefore, represent the size of the matter effect. 
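The effective source distance defined above can be evaluated numerically as in the following minimal sketch, which uses the Planck parameters quoted in the text and scipy for the quadrature; the redshift limits are the ones adopted in this work.

import numpy as np
from scipy.integrate import quad

H0 = 67.4                  # km / s / Mpc (Planck best fit quoted above)
OMEGA_M, OMEGA_L = 0.315, 0.685
C_KM_S = 299792.458        # speed of light in km / s

def hubble(z):
    return H0 * np.sqrt(OMEGA_M * (1 + z) ** 3 + OMEGA_L
                        + (1 - OMEGA_M - OMEGA_L) * (1 + z) ** 2)

def l_eff(z_min, z_max):
    """Effective source distance in Mpc: c * integral dz / [H(z) (1 + z)^2]."""
    value, _ = quad(lambda z: C_KM_S / (hubble(z) * (1 + z) ** 2), z_min, z_max)
    return value

print(l_eff(1e-7, 0.04))   # a nearby benchmark source
print(l_eff(1e-7, 5.0))    # the full source population up to z_max = 5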
As we will see below, the CνB matter effect on the oscillation probabilities turns out to be negligible for the ΛCDM value of the CνB number density [cf. Eq. (<ref>)], especially for the δ m^2 values required for adiabatic evolution. Therefore, we allow for the possibility that there might be a local overdensity of CνB, parameterized by the ratio η=n_ν/n_ν,0(1+z)^3. The current experimental limit on η is rather loose, only at the level of 10^11 from KATRIN <cit.>, as shown by the red-shaded region in Fig. <ref>. See Refs. <cit.> for other local and global constraints on η, as well as future prospects. We assume a local overdensity around the Earth so that the matter effect is isotropic. Theoretically, while gravitational clustering alone can only give an O(1) enhancement <cit.>, possible nonstandard neutrino interactions could in principle give η≫ 1. For instance, in a model with Yukawa interactions mediated by an ultralight scalar, neutrinos can form stable clusters with η_ max∼ 10^7 <cit.>. Without resorting to any particular new physics model, we just show a few benchmark values of η in Fig. <ref> to illustrate our point. Note that for smaller η values, the δ m^2 and δm^2 contours are identical, i.e. the matter effect is negligible. However, for η≳ 10^5, we start to see the deviation of δm from δ m^2, which implies that the matter effect is non-negligible. An important thing to keep in mind is that, for a fixed number of total relic neutrinos in the Universe, η> 1 would imply that there is a maximum size for the overdense cluster, L_ cloud=(c/H_0)η^-1/3 assuming a spherical cluster. This is to ensure that the relic neutrinos do not overclose the Universe. For instance, as shown in Fig. <ref>, η=1 corresponds to L_ cloud=c/H_0≃ 4.5 Gpc, which is roughly the size of the observable Universe, whereas η=10^5 corresponds to L_ cloud≃ 96 Mpc, and η=10^11 corresponds to L_ cloud≃ 0.96 Mpc. Therefore, for the matter effect to be relevant, we must have the effective oscillation length L_ osc [cf. Eq. (<ref>)] comparable to or smaller than L_ cloud. This in turn dictates the minimum value of δ m^2 (for a given E_ν), or the maximum value of E_ν (for a given δ m^2), at which the matter effect starts becoming important. For example, for δ m^2=10^-19  eV^2, the matter effect is not important in the entire energy range shown in Fig. <ref>, whereas for δ m^2=10^-17  eV^2, it starts becoming important for E_ν<70 TeV, and for δ m^2=10^-15  eV^2, it is important for E_ν<70 PeV, i.e. almost in the entire IceCube energy range of interest. However, this does not necessarily mean that IceCube has better sensitivity for higher δ m^2 values, as this will depend on the actual oscillation probabilities, which we will discuss in Section <ref>. For η>1, or a finite L_ cloud<c/H_0, we have to consider the case where the neutrinos were emitted from the distant source at an early redshift (z_i≤ z_ max) and, after traveling through vacuum for some distance, encounter the CνB overdensity cloud at a redshift z_c<z_i that creates a matter potential for them. In this case, the oscillation probability contains two parts: (i) vacuum probability from redshift z_c to z_i, and (ii) matter probability from redshift z_ min and z_c. Thus, Eq. (<ref>) is modified to P_αβ = 1/2∑_j |U_α j|^2|U_β j|^2 [1 + cos 2θ^i_jcos 2θ^f_jcos(δ m^2_jL_ eff/4E_ν) + sin 2θ^i_jsin2θ^f_jcos( ∫ dx δm^2_j/4E_ν + δ m^2_jL_ eff/4E_ν) ]. 
Notice that in this case, θ^i and θ^f correspond to the effective mixing angles when the neutrinos arrive to the CνB cloud and today, respectively. Also, L_ eff is given by Eq. (<ref>) but with the lower limit of integration replaced by z_c, which is the redshift distance equivalent of L_ cloud. Basically, in vacuum, we can take δ m^2 out of the redshift integral, whereas in matter, we have to keep δm^2 inside the integral, since it also depends on the redshift. For η≲ 10^4, when the matter effect is negligible, the last two contributions inside the parenthesis can be combined into one that exactly becomes equal to δ m^2_j L_ eff/4E_ν as in the second term, and Eq. (<ref>) simply reduces to the vacuum oscillation result [cf. Eq. (<ref>) with tildes removed]. § RESULTS To understand the energy dependence of the oscillation probabilities in the presence of matter effect, we plot the ν_μ→ν_τ oscillation probabilities[Similar behavior is observed for other flavors, and therefore, we do not show all of them here.] for the standard and pseudo-Dirac cases (with and without matter effect) in Fig. <ref>. Here we have fixed the active-sterile mass splitting for just one pair: δ m_3^2=10^-17  eV^2, while keeping δ m_1^2=δ m_2^2=0. In the left panel, we have fixed the source redshift distance at z=0.004, which is roughly the distance to NGC 1068, the most significant point source identified by IceCube <cit.>. The vacuum oscillation probability for the pseudo-Dirac case is noticeably different from the standard case for E_ν≲ 50 TeV. At higher energies, the effective oscillation length exceeds the source distance, and therefore, the vacuum oscillation probability approaches the standard case. On the other hand, at low energies, the fast oscillations are averaged out to a constant value (but different from the standard case). Now including the matter effect further modifies the oscillation probability, as it tends to suppress the oscillation amplitude, as compared to the vacuum case. But this effect is observable only for η≫ 1, because the source is relatively nearby, so we need a large η to be able to make a significant contribution to the third term in Eq. (<ref>). Here we have chosen two benchmark values of η=10^5 and η=10^6. As we increase the size of the matter effect by cranking up η, the oscillation extrema are also shifted to lower energies. In the right panel of Fig. <ref>, we keep the same δ m_3^2=10^-17  eV^2, but increase the source distance to z=0.04. In this case, the oscillations are shifted to higher energies, and the pseudo-Dirac oscillations are noticeably different from the standard one for E_ν≲ 500 TeV. Also, since the source is further away, a slightly smaller value of η=10^4 is now sufficient to induce a noticeable matter effect. As in the left panel, increasing η shifts the oscillation extrema to lower energies, before they approach the fast oscillations. Since getting very large η values is theoretically challenging, we will fix a benchmark value of η=10^4 and z=0.04 for the flavor triangle analysis below. For a given source distance, if we increase the δ m^2 value, the oscillations will also be shifted to higher energies. Since the astrophysical neutrino flux is expected to have a power-law behavior <cit.>, going to higher energy means having smaller flux, and hence, less statistics. It turns out that IceCube will eventually lose sensitivity for δ m^2≳ 10^-16  eV^2 <cit.>. Therefore, we use δ m^2=10^-17  eV^2 as our benchmark value. In Fig. 
<ref>, we have plotted the same ν_μ→ν_τ probabilities for the standard and pseudo-Dirac (vacuum and matter) cases as a function of energy, but here we have averaged over the the distances up to redshift z_ max=5, assuming a flat distribution of sources. The first dip in the probability at the highest energy is due to contributions from sources at z=5. We note that increasing η (or decreasing the cloud size) makes this dip closer to the vacuum case because the neutrinos mostly travel in vacuum; therefore, going to an arbitrarily high overdensity is actually not helpful for disentangling the matter effect. As we go to lower energies, the sources at smaller redshifts cause multiple oscillations, which eventually average out and approach the vacuum result, as also noted in Fig. <ref>. On the other hand, both vacuum and matter oscillations approach the standard result at energies beyond 5 PeV, since the effective oscillation length for the chosen mass splitting goes beyond z=5. Note that we have only shown the probability results for neutrinos. For anti-neutrinos, the matter potential changes sign, and the results are similar, unless there is a large asymmetry between neutrinos and antineutrinos in the CνB. The current cosmological constraints on this asymmetry, parameterized in terms of the degeneracy parameters ξ_α≡μ_α/T (where μ_α's are the chemical potentials) by η_ν_α≡n_ν_α-n_ν̅_α/n_γ≃ 0.25ξ_ν_α(1+ξ^2_ν_α/π^2), allow for η_ν as large as 10^-2 <cit.>. In fact, the recent ^4He measurements from extremely metal-poor galaxies has a mild preference for a non-zero electron neutrino chemical potential: ξ_ν_e=0.043± 0.015 <cit.>. However, we have checked that to get an observable difference in the matter effect for neutrinos versus antineutrinos, we need η_ν_α≳ O(1), which is highly unlikely given the current constraints. After calculating the effect of pseudo-Dirac neutrino oscillations on the probabilities, we are now in a position to compare the final flavor ratio results for the standard and pseudo-Dirac case with and without matter effect. This is shown in Fig. <ref>. Note that it is important to compare only the normalized flavor ratios, because the total flux of active neutrinos in the pseudo-Dirac case may not be conserved due to active-sterile oscillations; therefore, ∑_β f_β,⊕ calculated from Eq. (<ref>) is not necessarily guaranteed to be unity for the pseudo-Dirac case, unlike in the standard case. Here we take a standard pion decay source: π^±→μ^± + (-)ν_μ→ e^±+(-)ν_e+ν_μ+ν̅_μ , with an initial flavor composition of (1/3:2/3:0). With the best-fit values for the oscillation parameters taken from NuFit <cit.> and assuming a normal mass ordering, the standard 3-neutrino vacuum oscillation paradigm predicts a final flavor ratio of (0.30:0.37:0.33), as shown by the blue dot. On the other hand, for our benchmark pseudo-Dirac case with δ m_3^2=10^-17  eV^2, just considering vacuum oscillations from a source at redshift z=0.04 gives us (0.46:0.28:0.26) at E_ν=1 TeV and (0.43:0.31:0.26) at E_ν=40 TeV, as shown by the orange dots in the two panels. Note the mild energy-dependence of the best-fit value here. Including the CνB matter effect for an overdensity of η=10^4 makes the energy-dependent effects more prominent in the oscillation probabilities (see Figs. <ref> and <ref>). In the left panel, we show the result for E_ν=1 TeV, where the matter effect gives a best-fit flavor ratio of (0.44:0.29:0.27), while in the right panel with E_ν=40 TeV, it gives (0.35:0.33:0.31). 
Thus, as we go from lower to higher energies, the best-fit point moves from the vacuum case to the standard case, as can be clearly seen from the inset plots. The energy window of TeV-PeV is optimal for this effect to be observable. For very high energies, the neutrino flux goes down rapidly and the event statistics will be low. On the other hand, for energies smaller than a few TeV, the atmospheric background will be overwhelming. Moreover, the tau neutrinos are not detectable at IceCube for low energies; the lowest-energy tau event observed so far is at 20 TeV <cit.>. In Fig. <ref>, the current 68% and 90% CL IceCube limits <cit.> are shown by the black contours.[ Preliminary tighter constraints are reported in Ref. <cit.> by adding more years of data and updated ice properties on the HESE sample, but we show the officially published results from Ref. <cit.>.] The future prospects for flavor triangle measurements are bright <cit.>, with the observation of high-energy neutrinos by several next-generation neutrino telescopes, such as IceCube-Gen 2 <cit.>, KM3NeT <cit.>, Baikal-GVD <cit.>, P-ONE <cit.>, TRIDENT <cit.>, TAMBO <cit.>, Trinity <cit.> and RET <cit.>. The possibility of a joint analysis of the combined data from multiple experiments sensitive to different neutrino flavors (e.g., cascade and track data from IceCube-Gen 2, combined with the tau-neutrino data from TAMBO) could significantly improve the precision on the flavor triangle data. For illustration, we show the IceCube-Gen 2 projections <cit.> by the grey contours. It is clear that while the current IceCube constraint is not enough to probe the CνB matter effect, the IceCube-Gen 2 will be able to do so. In fact, it can clearly distinguish the energy-dependent matter effect from the vacuum oscillations, which will provide a new way to probe the CνB overdensity, on top of probing the pseudo-Dirac hypothesis. In Fig. <ref>, we fix the energy at 40 TeV, but generalize our analysis to different initial flavor compositions, as mentioned in Section <ref>, namely, (i) standard pion decay (top left panel), (ii) muon-suppressed pion decay (top right panel), (iii) neutron decay (bottom left panel), and (iv) general case (bottom right panel). We also include the variation of the mixing angles in their 68% CL allowed range from NuFit <cit.>, which results in a spread of the points for each case. Our use of the reduced uncertainties (68% CL) is in anticipation of the precision measurements of the oscillation parameters at next-generation neutrino oscillation experiments, such as JUNO <cit.>, DUNE <cit.>, and Hyper-K <cit.>, before the next-generation neutrino telescopes start collecting data. We find that with improved precision on the oscillation parameters, it is possible to completely separate the standard case from the pseudo-Dirac case for a known initial flavor composition at a given energy. The energy-dependent effect shown in Fig. <ref> will make this distinction even easier. We also notice that the separation from the standard case on the flavor triangle is different, depending on the initial flavor composition and on which δ m_i^2 is nonzero. This information will provide a unique way to probe the individual active-sterile mass splittings in the pseudo-Dirac scenario. In Fig. <ref>, we further generalize our analysis to include two active-sterile mass splittings nonzero (but equal). 
Even in this case, the distinction between the standard and pseudo-Dirac cases, as well as between the different δ m_i^2 pairs, can be made for a known initial flavor ratio. Of course, if the initial flavor composition is not known precisely, it becomes more difficult to distinguish the pseudo-Dirac case, as shown in the lower right panels of Figs. <ref> and <ref>. Finally, when we have all three mass splittings nonzero and equal, their effect on the flavor ratio cancels out and there is no longer any difference with the standard case. § CONCLUSIONS The flavor ratio measurements of the high-energy astrophysical neutrinos at neutrino telescopes provide crucial information on the source properties. We have shown that the flavor ratio predictions are altered from the standard 3-neutrino paradigm if neutrinos are pseudo-Dirac particles with tiny active-sterile mass splittings. In particular, we find for the first time that the CνB matter effect induces a novel energy-dependent flavor effect, which is robust against energy reconstruction, and hence, can be distinguished from other sources of energy dependence. We therefore advocate making energy-dependent flavor triangle measurements at neutrino telescopes. Energy-dependent flavor composition measurements were also advocated recently in Ref. <cit.> for a different physics reason, i.e. to establish the transition from neutrino production via the full pion decay chain at low energies to muon-damped pion decay at high energies. This is challenging today, but may be feasible in the future. Moreover, the matter effect strongly depends on the local CνB overdensity, and therefore, a precise determination of the flavor composition at future neutrino telescopes can in principle provide an alternative probe of the CνB overdensity. § ACKNOWLEDGMENTS This work of BD was partly supported by the U.S. Department of Energy under grant No. DE-SC 0017987. PM is supported by Fermi Research Alliance, LLC under Contract No. DE-AC02-07CH11359 with the U.S. Department of Energy, Office of Science, Office of High Energy Physics. IMS is supported by STFC grant ST/T001011/1. BD and PM thank the organizers of the Mitchell Conference 2023, where this work was initiated. We also acknowledge the Center for Theoretical Underground Physics and Related Areas (CETUP*) and the Institute for Underground Science at SURF for hospitality and for providing a stimulating environment, where a part of this work was done. Note Added: While we were completing this work, Ref. <cit.> appeared, where the authors also discuss the effect of pseudo-Dirac neutrinos on the flavor triangle. But they have not included the CνB matter effect. It should also be noted that our flavor triangle analysis results were already presented in May 2024 at the Mitchell Conference <cit.>, where one of the authors of Ref. <cit.> was present.
http://arxiv.org/abs/2406.18478v1
20240626164136
A 3 mm spectral line study of the Central Molecular Zone infrared dark cloud G1.75-0.08
[ "Oskari Miettinen", "Miguel Santander-García" ]
astro-ph.GA
[ "astro-ph.GA" ]
§ INTRODUCTION Interstellar infrared dark clouds (IRDCs) are identified as dark absorption features against the Galactic mid-IR background radiation (e.g., <cit.>). Infrared dark clouds are very useful target sources for studies of molecular cloud fragmentation and star formation. One reason for this is that IRDCs often exhibit filamentary morphology with substructure along the long axis of the filament (e.g., <cit.>). Another reason for IRDCs being useful target sources is that some of them show evidence of ongoing high-mass (M>8 M_⊙) star formation (e.g., <cit.>), a process where many details and even the exact mechanism(s) are still to be deciphered (e.g., <cit.> for a review). On the other hand, IRDCs are promising sites to search for high-mass prestellar clumps and cores (e.g., <cit.>). In this paper, we present a molecular spectral line study of the Galactic IRDC G1.75-0.08 (Figure <ref>). The target IRDC was uncovered by a dust continuum imaging survey with the Large APEX BOlometer CAmera (LABOCA; <cit.>) at 870 μm by Miettinen <cit.>, and it was part of the sample in the molecular line study of IRDCs by Miettinen <cit.> that was based on the Millimetre Astronomy Legacy Team 90 GHz (MALT90) survey <cit.>. A follow-up dust continuum imaging study of G1.75-0.08 at 350 μm and 450 μm was conducted by Miettinen et al. <cit.> using the Architectures de bolomètres pour des Télescopes à grand champ de vue dans le domaine sub-Millimétrique au Sol, or ArTéMiS bolometer <cit.>. Miettinen et al. <cit.> revised the kinematic distance of G1.75-0.08 to be 8.22 kpc with a Galactocentric distance of 270 pc. Hence, G1.75-0.08 is located within the Central Molecular Zone (CMZ) of the Galaxy, where the cloud can be associated with extreme environmental conditions such as strong tidal shear forces, strong turbulence, and more complex chemistry than found in the Galactic spiral arm molecular clouds (e.g., <cit.>; see also <cit.> for a review). Prior to the present study, the only spectral line data available for G1.75-0.08 were those obtained as part of the MALT90 survey. The modest sensitivity of the MALT90 data (∼250 mK per 0.11 km s^-1 channel) and the angular resolution (half-power beam width, or HPBW) of those data, 38 arcsec, complicated the interpretation of the few extracted line detections (e.g., HCN(J=1-0) and HCO^+(J=1-0); <cit.>). Our new spectral line observations with the Yebes 40 m telescope benefit from 1.9 times higher angular resolution compared to the MALT90 data, which is useful owing to the large distance of the cloud from the Sun. In the present paper, we will revise some of the source properties that depend on information provided by spectral line data (in particular, spectral line widths), and determine the fractional abundances of the molecules detected towards the target source via our 3 mm band observations. The observations and data reduction are described in Section <ref>. The analysis and results are described in Section <ref>. We discuss the results in Section <ref>, while in Section <ref>, we summarise our results and main conclusions. § OBSERVATIONS AND DATA REDUCTION §.§ Yebes Observations We used the Yebes 40 m radio telescope <cit.> to observe the LABOCA 870 μm peak positions of the clumps in G1.75-0.08 over a frequency range of about 72–90.5 GHz. The tuning frequency was 80.05 GHz. The target clumps are listed in Table <ref>. Our observations made use of the W band, where the full bandwidth is 18.5 GHz.
The spectral backend was a Fast-Fourier Transform Spectrometer (FFTS) with a spectral resolution of 38 kHz, and the observations were made in dual (horizontal and vertical) linear polarisation mode. The aforementioned spectral resolution corresponds to a velocity resolution of 126–158.3 m s^-1 over the observed frequency range. The beam size (HPBW) over the observed frequency range is 19.5–24.5 arcsec (<cit.>, Table 5 therein). The first observations were carried out on 27 and 28 March 2023 (for clump B and clump A, respectively) under project 23A004 (PI: Miettinen). These observations were performed in the position switching mode, where the off position for clump A was 1.092 arcmin to the north of the on position, and for clump B, the reference position was located at 0.874 arcmin to the north of the target position. These offsets were chosen on the basis of the extent of the LABOCA 870 μm emission (Figure <ref>). The total on-source integration time for clump A was 31 min, and for clump B, it was about 21 min. However, it was found that the reference off positions of the observations had spectral line emission at the observed frequencies, which resulted in absorption-like features in the final spectra that corrupted the lines observed towards the on positions. The observations were repeated on 11 and 16 February 2024 (for clump A and clump B, respectively) under project 24A005 (PI: Miettinen). These observations were performed in the frequency switching mode with a conservative frequency throw of 74.1 MHz (±37.05 MHz frequency offsets). Because, in the frequency switching mode, the observed position does not change, we could avoid the possible presence of spectral line emission in the position switching mode's reference position. The pointing and focus corrections were conducted by observing a SiO maser line at 86,243.4277 MHz towards the red supergiant VX Sgr. The total on-source integration time for clump A was 1.15 h, and for clump B, it was 1.9 h. The system temperature during the observations towards clump A was T_ sys≃200 K, while for observations of clump B, it was about 300 K. Calibration was made by the hot-cold load technique, and the output intensity scale given by the system is the antenna temperature corrected for the atmospheric attenuation (T_ A^*). The observed intensities were converted to the main-beam brightness temperature scale using the formula T_ MB=T_ A^*/η_ MB, where η_ MB is the main-beam efficiency. The values of η_ MB range from 0.20 to 0.31 over the observed frequency range (<https://rt40m.oan.es/rt40m_en.php>, accessed on 1 February 2024). The absolute calibration uncertainty was adopted to be 10% following the practice of studies that employ Yebes Q-band observations (e.g., <cit.>). However, this should be taken as a lower limit to the calibration uncertainty in the higher frequency W band. Moreover, it has been observed that the spectral line intensity discrepancies between position and frequency switching observations with the Yebes 40 m radio telescope are always <20% <cit.>. The spectra were reduced using the Continuum and Line Analysis Single-dish Software (CLASS90) of the GILDAS software package (version jul21; GILDAS, the Grenoble Image and Line Data Analysis Software, is provided and actively developed by IRAM, and is available at <www.iram.fr/IRAMFR/GILDAS>, accessed on 28 December 2022). The individual spectra were stitched with weights defined as w_ i∝ t_ int/T_ sys^2, where t_ int is the integration time.
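For reference, the bookkeeping in this subsection (channel width expressed as a velocity resolution, the T_ A^* to T_ MB conversion, and the T_ sys-weighted stitching of individual scans) amounts to a few one-liners. The sketch below uses the numbers quoted in the text; the function names are ours and are not part of the actual reduction software.

```python
import numpy as np

C_KMS = 299_792.458  # speed of light [km/s]

def velocity_resolution(delta_nu_hz, nu_hz):
    """Channel width expressed in velocity units: dv = c * dnu / nu."""
    return C_KMS * delta_nu_hz / nu_hz

# 38 kHz channels over 72-90.5 GHz -> roughly 0.126-0.158 km/s, as quoted above
print(velocity_resolution(38e3, 90.5e9), velocity_resolution(38e3, 72e9))

def to_main_beam(t_a_star, eta_mb):
    """Convert antenna temperatures T_A* to the main-beam scale, T_MB = T_A* / eta_MB."""
    return np.asarray(t_a_star, dtype=float) / eta_mb

def stitch_scans(spectra, t_int, t_sys):
    """Average individual scans with weights w_i proportional to t_int / T_sys^2."""
    w = np.asarray(t_int, dtype=float) / np.asarray(t_sys, dtype=float) ** 2
    return np.average(np.asarray(spectra, dtype=float), axis=0, weights=w)
```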
The resulting stitched spectra were folded and then smoothed using the Hann window function so that the number of channels was divided by two. First- or second-order baselines were determined from the velocity ranges free of spectral line features and then subtracted from the spectra. The resulting 1σ rms noise levels at the smoothed velocity resolution were about 20–33 mK with an average of 26.6 mK on a T_ A^* scale. §.§ IRAM Observations We also used the Institut de Radioastronomie Millimétrique (IRAM) 30 m telescope to observe the target positions (i.e., the LABOCA 870 μm peaks of the clumps) in the J=1-0 transition of N_2H^+ and N_2D^+. The observations were carried out on 8 and 14 February 2024 during a pool observing week (project 079-23; PI: Miettinen). The Eight MIxer Receiver (EMIR; <cit.>) band E090 was used as a front end, while the backend was the Versatile SPectrometer Array (VESPA), where the bandwidth was 18 × 20 MHz with a corresponding channel spacing of 20 kHz. The N_2H^+(1-0) and N_2D^+(1-0) lines were tuned at the frequencies of the strongest hyperfine component (J_F_1, F = 1_2, 3-0_1, 2) of each transition, which are 93,173.7637 MHz and 77,109.6162 MHz, respectively (<cit.>, Tables 2 and 8 therein). The aforementioned channel spacing yielded a velocity resolution of about 64 m s^-1 for N_2H^+ and 78 m s^-1 for N_2D^+. The telescope beam sizes (HPBW) at the observed frequencies are 26.4 arcsec (N_2H^+) and 31.9 arcsec (N_2D^+). The observations were performed in the frequency switching mode with a frequency throw of ±3.9 MHz. The N_2H^+(1-0) observing time for both clumps was about 1.1 h. The N_2D^+(1-0) observing time for clump A was 2.2 h and 1.6 h for clump B. The telescope focus and pointing were optimised and checked on the quasars 1226+023 (3C 273) and 1757-240. The precipitable water vapour (PWV) during the observations was measured to be between 4.2 mm and 5.5 mm. The system temperatures during the observations were within the range of T_ sys=113-140 K. The observed intensities were converted into the main-beam brightness temperature scale by using a main-beam efficiency factor of η_ MB=0.80 for N_2H^+(1-0) and 0.81 for N_2D^+(1-0). The absolute calibration uncertainty was adopted to be 10% (e.g., <cit.>). The spectra were reduced using CLASS90 (version jul21) in a similar fashion as described in Section <ref> (including smoothing that halves the spectral resolution). Second- or third-order polynomial baselines were determined from the velocity ranges free of spectral line features, and then subtracted from the spectra. The resulting 1σ rms noise levels were about 13–21 mK. § ANALYSIS AND RESULTS §.§ Detected Spectral Lines and Spectral Line Parameters To identify the spectral lines in the W band observed with the Yebes telescope, we made use of the Splatalogue interface (<https://splatalogue.online/>, accessed on 1 February 2024), the Cologne Database for Molecular Spectroscopy (CDMS; <https://cdms.astro.uni-koeln.de/>, accessed on 1 February 2024) <cit.>, and the Jet Propulsion Laboratory (JPL) spectroscopic database (<http://spec.jpl.nasa.gov/>, accessed on 1 February 2024) <cit.>. The spectra of the spectral line transitions detected with Yebes towards both target clumps are shown in Figures <ref> and <ref>. The spectra observed with the IRAM 30 m telescope are shown in Figure <ref>. The detected spectral line transitions are listed in Table <ref>.
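The smoothing and baseline-subtraction steps described in this section have simple numpy analogues. The sketch below assumes a CLASS-style three-point Hann kernel (0.25, 0.5, 0.25) followed by dropping every other channel, and a low-order polynomial baseline fitted over user-supplied line-free windows; it is a schematic stand-in for the CLASS90 reduction, not a reproduction of it.

```python
import numpy as np

def hann_smooth_decimate(spectrum):
    """Three-point Hann smoothing followed by keeping every other channel (halves the resolution)."""
    kernel = np.array([0.25, 0.5, 0.25])
    return np.convolve(np.asarray(spectrum, dtype=float), kernel, mode="same")[::2]

def subtract_baseline(velocity, spectrum, line_free_windows, order=1):
    """Fit a polynomial over line-free velocity windows, subtract it, and return the rms there."""
    velocity = np.asarray(velocity, dtype=float)
    spectrum = np.asarray(spectrum, dtype=float)
    mask = np.zeros(velocity.shape, dtype=bool)
    for vmin, vmax in line_free_windows:
        mask |= (velocity >= vmin) & (velocity <= vmax)
    coeffs = np.polyfit(velocity[mask], spectrum[mask], order)
    residual = spectrum - np.polyval(coeffs, velocity)
    return residual, residual[mask].std()   # baseline-subtracted spectrum and 1-sigma rms
```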
As can be seen in Figures <ref> and <ref>, the Yebes spectra exhibit wavy baseline structures that could not be described by simple first- or second-order baseline functions. Wavy baselines can be the result of frequency switching that we employed in our Yebes observations (e.g., <cit.>). As shown in Figures <ref> and <ref>, the HCO^+ lines were fitted by a single Gaussian function using CLASS90 (version jul21). The HCN lines exhibit hyperfine splitting (three components in the case of the J=1-0 transition; e.g., <cit.>), and we used the hyperfine structure method of CLASS90 to fit the detected lines. Owing to the detected HCN line profiles, the fitting was conducted using a two-velocity component model. One could also interpret the observed HCN line profiles as asymmetric, where the redshifted peak is stronger than the blueshifted peak and where there is a self-absorption dip between the two peaks. This, however, would require that optically thin line emission is seen at a velocity of about 27 km s^-1 (estimated from the HCN spectrum towards clump A), which is lower than the systemic local standard of rest (LSR) velocity of the cloud (∼50 km s^-1). The HNCO lines also exhibit a hyperfine structure (e.g., <cit.>), and the observed J=4-3 transition has six hyperfine components. The detected HNCO lines were fitted using the hyperfine structure method and hyperfine component frequencies from the CDMS database. Again, the fitting was done using two velocity components. Similar to the case of HCN, the HNCO line detected towards clump A could also be interpreted as a red asymmetric line profile with a central dip at about 36 km s^-1. We note that our original aim was also to observe the DCN(J=1-0) transition at 72.415 GHz towards the target clumps (to study the [DCN]/[HCN] deuteration), but the observed spectra had a very strong wave-like structure between about 72 and 76 GHz, which prevented any potential detection of the line. The only line detected in our IRAM 30 m telescope observations is N_2H^+(1-0) towards clump B. Even this detection is relatively weak (∼6σ) compared to the lines detected with Yebes. The J=1-0 transition of N_2H^+ is split into 15 hyperfine components, which are mostly blended into one broad line in the observed spectra with a hint of a blended group at ∼40 km s^-1. We fitted the hyperfine structure in CLASS90 using the rest frequencies from <cit.> (Table 2 therein) and the relative intensities from <cit.> (Table 8 therein). The derived basic line parameters are listed in columns 2–5 in Table <ref>. Besides the formal 1σ fitting errors output by CLASS90, the errors in the peak intensity (T_ MB) and the integrated intensity of the line (∫ T_ MB dv) also include the 10% calibration uncertainty (the two sources of uncertainty were added in quadrature). We note that the quoted uncertainties in ∫ T_ MB dv should be interpreted as lower limits only because of the blended velocity components and ripples in the observed spectra. §.§ Line Optical Thicknesses and Excitation Temperatures If a spectral line transition is split into hyperfine components, the relative strengths of the hyperfine components can be used to derive the line optical thickness, τ. However, in all cases where we fit the hyperfine structure of the line (Section <ref>), the hyperfine structure was not resolved (i.e., the separate components are blended), and hence, the corresponding optical thicknesses should be taken with caution.
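As a schematic stand-in for the single-Gaussian fits used for the HCO^+ lines (the hyperfine and two-component fits are handled inside CLASS90 and are not reproduced here), a scipy-based fit might look as follows; the initial-guess values are placeholders.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(v, t_peak, v_lsr, fwhm):
    """Gaussian line profile parameterised by peak temperature, LSR velocity, and FWHM."""
    sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    return t_peak * np.exp(-0.5 * ((v - v_lsr) / sigma) ** 2)

def fit_single_gaussian(velocity, t_mb, guess=(0.1, 50.0, 10.0)):
    """Least-squares fit of one velocity component; returns best-fit parameters and 1-sigma errors."""
    popt, pcov = curve_fit(gaussian, velocity, t_mb, p0=guess)
    return popt, np.sqrt(np.diag(pcov))
```

A two-velocity-component fit of the HCN and HNCO profiles would simply sum two such components (or hyperfine-component stacks) with separate centroids and widths.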
When the line optical thickness could be estimated by the CLASS90 routine, the corresponding excitation temperature of the line was calculated using Equation (1) in Miettinen <cit.>. For HCO^+, we assumed that the rotational transition is thermalised at the dust temperature of the clump; that is, we assumed that T_ ex = T_ dust (see column 4 in Table <ref>). The values of τ and T_ ex are listed in columns 6 and 7 in Table <ref>. §.§ Molecular Column Densities and Fractional Abundances To calculate the beam-averaged column densities of the molecules, we used the standard local thermodynamic equilibrium (LTE) approach (see Equation (32) in <cit.>). We note that, for example, in Miettinen <cit.>, we used a column density formula where the rotational degeneracy (g_J) does not appear in the denominator, while in Equation (32) of Mangum & Shirley <cit.> it does (see also their Equation (33)). This difference arises from the different definitions of the dipole moment matrix element, which can be either |μ_ ul|^2=μ^2 S, where μ is the permanent electric dipole moment and S is the line strength (Equation (62) in <cit.>), or |μ_ ul|^2=μ^2 S/g_J (see <cit.>, where g_u=g_J). The values of the product μ^2 S were taken from the CDMS database accessed via Splatalogue. The rotational degeneracy (g_J) was calculated from the rotational quantum number of the upper state (see Equation (34) in <cit.>). HNCO is an asymmetric top molecule and its K-level and reduced nuclear spin degeneracies (g_K and g_I) were assigned values following the rules in Turner <cit.> (see Appendix therein). The value of g_K equals 1 for asymmetric tops, and the value of g_I for HNCO is also 1 because the molecule has no identical interchangeable nuclei. The partition functions were calculated using Equations (3) and (4) in Miettinen <cit.>. The fractional abundances of the molecules were calculated by dividing the molecular column density by the H_2 column density. The H_2 column densities of the clumps were calculated from the LABOCA 870 μm peak surface brightnesses and the dust temperature given in column 4 in Table <ref> (see, e.g., Equation (6) in <cit.>). We assumed that the mean molecular weight per H_2 molecule is μ_ H_2=2.82, the dust opacity is κ_ 870 μm=1.38 cm^2 g^-1 at 870 μm, and the dust-to-gas mass ratio is R_ d/g=1/141 (see <cit.> and references therein). The angular resolution of our LABOCA data, ∼20 arcsec, is very similar to the Yebes 40 m telescope angular resolution at the frequencies of the analysed spectral lines, and also comparable to the IRAM 30 m telescope beam size (26.4 arcsec). Hence, no smoothing of the LABOCA data was performed for calculating the H_2 column densities. The beam-averaged column densities and fractional abundances of the molecules with respect to H_2 are listed in the last two columns in Table <ref>. The fractional abundance is only calculated for the main velocity component under the assumption that the observed submillimetre dust emission originates only in the target cloud. §.§ Virial Analysis of the Cloud and Its Clumps Miettinen et al. <cit.> derived a line mass of 1 011±146 M_⊙ pc^-1 for the G1.75-0.08 filament. The present spectral line data allow us to revise the virial or critical line mass of the cloud (see, e.g., Equation (12) in <cit.>).
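Before turning to the virial analysis, the dust-based part of the column density bookkeeping described above can be sketched as follows. The defaults are the stated assumptions (μ_H_2 = 2.82, κ_870 μm = 1.38 cm^2 g^-1, dust-to-gas ratio 1/141); the exact equation referenced in the text (Equation (6) of the cited work) is not reproduced here, so treat the sketch as illustrative rather than as the paper's implementation, and note that the surface brightness in the example call is a placeholder.

```python
import numpy as np
from astropy import units as u, constants as const

def planck_bnu(nu, t_dust):
    """Planck function B_nu(T) in surface-brightness units."""
    x = (const.h * nu / (const.k_B * t_dust)).decompose().value
    return (2.0 * const.h * nu**3 / const.c**2 / np.expm1(x)) / u.sr

def n_h2_from_dust(i_nu, t_dust, wavelength=870 * u.micron,
                   kappa_dust=1.38 * u.cm**2 / u.g, dust_to_gas=1.0 / 141.0, mu_h2=2.82):
    """Beam-averaged N(H2) from a dust-continuum surface brightness i_nu (e.g. in MJy/sr)."""
    nu = (const.c / wavelength).to(u.Hz)
    denom = planck_bnu(nu, t_dust) * mu_h2 * const.m_p * kappa_dust * dust_to_gas
    return (i_nu / denom).to(u.cm**-2)

def fractional_abundance(n_mol, n_h2):
    """Fractional abundance x(X) = N(X) / N(H2)."""
    return (n_mol / n_h2).decompose()

# Hypothetical illustration (placeholder surface brightness, filament dust temperature from the text):
n_h2 = n_h2_from_dust(50.0 * u.MJy / u.sr, 15.0 * u.K)
print(n_h2)
```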
To calculate the critical line mass, we used the total (thermal+non-thermal) velocity dispersion, where the observed spectral line width (FWHM) was taken to be the average of the FWHMs of the HNCO lines detected towards clump A and clump B, because (i) the transition has a high critical density (10^6 cm^-3 at 20 K; <cit.>, Table 1 therein), (ii) the detected lines do not exhibit as strong wing emission as the HCN lines, and (iii) HNCO was detected towards both clumps. The value of the aforementioned average FWHM is 11.5±3.2 km s^-1. We assumed that the gas kinetic temperature is equal to the dust temperature (15.0±0.4 K for the filament; <cit.>). The derived virial line mass and the ratio of the observed and virial line masses are listed in Table <ref>. We also revised the virial masses and virial parameters (the ratio of the virial mass to the source mass) of the target clumps derived by Miettinen et al. <cit.> (see Section 3.5 therein). For this purpose, we also employed the HNCO line widths. It was again assumed that the gas temperature is equal to the dust temperature. The clumps were assumed to have a radial density profile of the form n(r)∝ r^-1.6, which is consistent with those derived for Galactic high-mass star-forming clumps (see <cit.> and references therein). The power-law index of the density profile, p, modifies the virial mass as M_ vir∝ a^-1, where a=(1-p/3)/(1-2p/5) (see <cit.> and references therein). The mean molecular weight per free particle was assumed to be μ_ p=2.37. The derived virial masses and virial parameters of the clumps are listed in Table <ref>. § DISCUSSION §.§ Spectral Line Profiles and Cloud Kinematics The HCN and HNCO lines we detected towards the clumps of G1.75-0.08 show asymmetric profiles that we have interpreted to be caused by two different velocity components (see Figures <ref> and <ref>). The presence of multiple nearby velocity components complicates the interpretation of the gas kinematics of the cloud. In principle, the detected red asymmetric line profiles could also be indicative of outward motions or cloud expansion (e.g., <cit.>). For this to be the case, however, we should see optically thin line emission at a systemic velocity that matches the velocity of the central dip, which is not the case. Hence, the presence of multiple different velocity components seems more likely. Indeed, physically unassociated clouds along the line of sight that have different LSR velocities would not be unexpected owing to the large distance of G1.75-0.08. However, this needs to be tested by further spectral line observations. The HCN(1-0) spectrum extracted towards the LABOCA peak position of clump A by Miettinen <cit.> also showed a hint of a red asymmetric profile (see Figure C.6 therein). Clump A and clump B were called SMM 15 and SMM 8 in the target field G1.87-014 of Miettinen <cit.> (see Figure 1 therein). The HNCO(4_0, 4-3_0, 3) line detected by Miettinen <cit.> towards clump A was interpreted as a single, very broad line (30.40 ± 1.39 km s^-1 in FWHM), but our 1.9 times higher angular resolution and more sensitive observations have revealed the presence of two velocity components. The HCO^+(1-0) spectrum towards clump A in Miettinen <cit.> was interpreted to exhibit two velocity components, while in the present study the line appears to have only one clear velocity component. The larger beam of the MALT90 observations (38 arcsec) might have captured emission from another velocity component that is now avoided.
However, the potential presence of an additional component at ∼30 km s^-1 is blurred by the rippled baseline in the HCO^+(1-0) spectrum. All the MALT90 spectra towards clump B in Miettinen <cit.> were found to have only one velocity component (see Figure C.2 therein). The difference compared to the present results is most notable for HCN, where we now see a broad (36.70 ± 0.23 km s^-1 in FWHM) secondary component. The HNCO line we detected with Yebes, however, could have been interpreted to have a single velocity component, as was done by Miettinen <cit.>, but in that case, the radial velocity of the line would have been lower (44.5 km s^-1) than that derived for the stronger peak in the HCN spectrum (55.2 km s^-1), which presumably originates in our target cloud. The N_2H^+(1-0) line towards clump B analysed in Miettinen <cit.> was derived to have an FWHM of 20.10±1.69 km s^-1, which is a factor of 3.5±0.3 broader than that derived in the present study. In this case, the larger beam of the MALT90 observations might have captured emission from gas with higher velocity dispersion than probed by our new IRAM observations. Miettinen et al. <cit.> speculated that G1.75-0.08 might represent a case where a filamentary cloud is undergoing gravitational focussing or the so-called edge effect, where gas clumps have accumulated at both ends of the filament (e.g., <cit.>). It would be tempting to interpret the detected HCN and HNCO line profiles as red asymmetric lines that indicate the presence of outward gas motions, because this might support the hypothesis of gravitational focussing. However, further spectral line observations are needed to test this hypothesis. CMZ clouds can have a complex velocity field, as was demonstrated by Henshaw et al. <cit.> in the case of the CMZ cloud G0.253+0.016 (also known as the Brick). Henshaw et al. <cit.> found that the Brick is not a single, coherent cloud, but rather a structured system of different velocity components and complex dynamics that might be the result of the orbital dynamics and shear motions in the CMZ. On the basis of the present results, G1.75-0.08 could be an analogue of the Brick. High angular and spectral resolution imaging of G1.75-0.08 would, however, be required to reach a better understanding of the velocity structure of the cloud. §.§ Dynamical State of G1.75-0.08 Using the HCN(1-0) line width (FWHM) of 13.50±0.38 km s^-1 detected in the MALT90 survey, Miettinen et al. <cit.> derived a low value of 0.07±0.01 for the ratio of the line mass to the critical line mass for G1.75-0.08. In the present paper, we have used our new molecular line data (specifically HNCO line widths) to revise the latter value to 0.09±0.05 (Table <ref>), which is very close to the earlier estimate. Hence, our finding supports the view that G1.75-0.08 is strongly subcritical (by a factor of 11±6), which makes it very different compared to the general population of Galactic filaments for which the line mass and the critical line mass are often found to agree within a factor of ∼2 (e.g., <cit.>; Figure 27 therein). One could speculate that G1.75-0.08 is subject to tidal disruption effects near the Galactic Centre (R_ GC∼270 pc; <cit.>). The dynamical CMZ environment is found to have an influence on the Brick <cit.>, and hence, this might be the case for G1.75-0.08 as well.
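The sub-criticality estimate above follows from the velocity-dispersion bookkeeping described earlier; a sketch is given below. The critical line mass is written here in the commonly used 2σ^2/G form and the clump virial mass in a 5σ^2R/(aG) form with the density-profile correction a = (1 - p/3)/(1 - 2p/5) quoted in the text; the exact equations referenced in the cited works may differ in prefactors, so the sketch is illustrative only.

```python
import numpy as np
from astropy import units as u, constants as const

def sigma_total(fwhm, t_kin, mu_tracer, mu_p=2.37):
    """Total (thermal + non-thermal) 1D velocity dispersion of the mean gas particle.

    fwhm is the observed FWHM of a tracer of molecular weight mu_tracer (43 for HNCO).
    """
    sigma_obs2 = (fwhm / np.sqrt(8.0 * np.log(2.0))) ** 2
    sigma_nt2 = sigma_obs2 - const.k_B * t_kin / (mu_tracer * const.m_p)
    sigma_t2 = const.k_B * t_kin / (mu_p * const.m_p)
    return np.sqrt(sigma_nt2 + sigma_t2).to(u.km / u.s)

def critical_line_mass(sigma):
    """Critical (virial) line mass in the 2 sigma^2 / G convention."""
    return (2.0 * sigma**2 / const.G).to(u.M_sun / u.pc)

def clump_virial_mass(sigma, radius, p=1.6):
    """Clump virial mass, 5 sigma^2 R / (a G), with a = (1 - p/3) / (1 - 2p/5)."""
    a = (1.0 - p / 3.0) / (1.0 - 2.0 * p / 5.0)
    return (5.0 * sigma**2 * radius / (a * const.G)).to(u.M_sun)

# Numbers quoted in the text: HNCO <FWHM> = 11.5 km/s and T = 15 K
sigma = sigma_total(11.5 * u.km / u.s, 15.0 * u.K, mu_tracer=43.0)
print(critical_line_mass(sigma))   # ~1.1e4 M_sun/pc, i.e. 1 011 M_sun/pc is subcritical by ~11
```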
We note that if the FWHM of N_2H^+(1-0) detected towards clump B were used in the analysis, the virial parameter for G1.75-0.08 would become α_ vir^ fil=0.36±0.06, in which case the filament would be subcritical only by a factor of 2.8±0.5. However, N_2H^+(1-0) is detected only in clump B, and the detection is relatively weak compared to other line detections; hence, the corresponding line FWHM might not be as reliable as that of the HNCO lines used in the analysis. §.§ Dynamical State of the Clumps Our new molecular line data also allowed us to revise the virial parameters of the clumps in G1.75-0.08. Miettinen et al. <cit.> found that clumps A and B are both gravitationally unbound with α_ vir≫ 2 (see Table 7 therein). Our new data support the conclusion that the clumps are gravitationally unbound (α_ vir > 2; Table <ref>), although clump A lies only a factor of 1.5±0.3 away from being gravitationally bound. We note that if the FWHM of N_2H^+(1-0) detected towards clump B were used in the calculation, the virial parameter of clump B would become α_ vir=1.8±0.4, which would suggest that the clump is marginally gravitationally bound. However, as noted in Section <ref>, the N_2H^+(1-0) detection towards clump B is relatively weak and the corresponding line FWHM should be interpreted with caution. It should also be noted that further support against gravity is provided by a magnetic field, and the magnetic field strength in the CMZ is known to be relatively strong compared to molecular clouds at larger Galactocentric distances (e.g., <cit.>). The clumps appear dark at 70 μm and lie above the mass–radius threshold for high-mass star formation proposed by Baldeschi et al. <cit.>, as shown in Figure 7 of Miettinen et al. <cit.>; the threshold is given by M_ thresh=1 732  M_⊙×(R/ pc)^1.42 when scaled to our assumptions about the dust opacity and gas-to-dust mass ratio <cit.>. However, the present data do not suggest that the clumps are candidates for being high-mass prestellar clumps, but only high-mass starless clumps. Moreover, 70 μm darkness does not necessarily mean that the clump is quiescent or devoid of star formation, and such objects can host embedded low- and intermediate-mass protostellar cores (e.g., <cit.>). For example, the line wing emission seen in our HCN and HNCO spectra might arise from protostellar outflow activity. However, this is only speculation and requires high-resolution spectral line imaging to be confirmed or disproved. On the other hand, a deficit of gravitationally bound clumps in the CMZ could explain its low star formation rate (SFR) compared to the Galaxy in general <cit.>. Chabrier and Dumond <cit.> suggested that molecular clouds in the CMZ are subject to only one episode of large-scale turbulence injection during their lifetime, where the injection is mostly provided by the gas inflow driven by the Galactic bar. Hence, there can be less injection of turbulence, and the turbulent motions will eventually decay, which facilitates the formation of stars. This could then explain the low SFR compared to the Galactic disc. A low SFR in the CMZ may also be related to the tidal effects of the Galactic Centre. Dust continuum imaging of G1.75-0.08 with ArTéMiS by Miettinen et al. <cit.> revealed that clumps A and B show substructure (see Figure 8 therein). Dense cores in gravitationally unbound clumps have also been observed in other IRDCs (e.g., G340.222–00.167 with α_ vir=5.7±1.7; <cit.>).
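The mass–radius threshold quoted above, and the thermal Jeans length used in the fragmentation discussion that follows, are simple formula evaluations; a sketch is given below. The radius and density passed to the example calls are hypothetical placeholders, not the measured clump values, and the mean molecular weights are the values quoted elsewhere in the text.

```python
import numpy as np
from astropy import units as u, constants as const

def m_threshold(radius):
    """High-mass star-formation threshold, M_thresh = 1 732 M_sun x (R/pc)^1.42 (as scaled in the text)."""
    return 1732.0 * (radius / u.pc).decompose().value ** 1.42 * u.M_sun

def thermal_jeans_length(t_kin, n_h2, mu_p=2.37, mu_h2=2.82):
    """Thermal Jeans length for gas at temperature t_kin and H2 number density n_h2."""
    c_s = np.sqrt(const.k_B * t_kin / (mu_p * const.m_p))
    rho = mu_h2 * const.m_p * n_h2
    return np.sqrt(np.pi * c_s**2 / (const.G * rho)).to(u.pc)

# Hypothetical illustration:
print(m_threshold(0.5 * u.pc))                           # ~650 M_sun for a 0.5 pc clump
print(thermal_jeans_length(15.0 * u.K, 1e4 * u.cm**-3))  # ~0.23 pc at 15 K and 10^4 cm^-3
```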
If the substructure detected with ArTéMiS is physical, it could be an indication that the inner parts of the clump have decoupled from the more turbulent outer parts of the clump and that gravitational fragmentation has taken place in the denser region within the parent clump. The observed, projected separation of the substructures in the clumps is a factor of 1.5±0.1 larger than the thermal Jeans length for clump A and a factor of 1.3±0.1 larger in clump B (see Table 8 in <cit.>). Our new spectral line data (HNCO) suggest that the Jeans lengths that take the non-thermal motions into account would be larger by factors of about 10.5 (clump A) and 21 (clump B) than the observed substructure separation. Hence, the observed substructure in the clumps is roughly consistent with thermal Jeans fragmentation. §.§ Molecular Detections and Abundances in G1.75-0.08 §.§.§ HNCO (Isocyanic Acid) Based on the assumption that the higher velocity component of the HNCO line is associated with G1.75-0.08, we derived an HNCO fractional abundance of (8.8±1.8)× 10^-9 towards clump A and (4.9±0.9)× 10^-9 towards clump B. For comparison, Vasyunina et al. <cit.>, who used the 22 m Mopra telescope, detected the HNCO(4_0, 4-3_0, 3) transition in 13 out of their sample of 37 clumps in 15 different IRDCs (35% detection rate), and derived abundances in the range of (0.17-2.86)× 10^-9. The latter values were scaled down by a factor of 0.7735 to take the different assumptions used in the calculation into account (e.g., the dust opacity and dust-to-gas mass ratio), which is needed for a proper comparison with our results (see <cit.> for details). The abundances we derived towards our target clumps are 3.1±0.6 (clump A) and 1.7±0.3 (clump B) times higher than the highest value in the Vasyunina et al. <cit.> sample. We note that the distances of the Vasyunina et al. <cit.> target sources lie in the range of 2.1–5.3 kpc, and hence none of their sources are associated with the CMZ. Nevertheless, the HNCO abundances are not significantly different from those in G1.75-0.08. Sanhueza et al. <cit.>, who also used the 22 m Mopra telescope, detected HNCO(4_0, 4-3_0, 3) in 18 of their sample of 92 clumps in IRDCs (20% detection rate), and derived abundances in the range of (0.15-4.45)× 10^-9. The latter values were scaled down by a factor of 0.549 for a more meaningful comparison with our results (see <cit.>). The highest HNCO abundance in the Sanhueza et al. <cit.> sample is similar to that in clump B (agreement within a factor of 1.1±0.2). We note that the HNCO(4_0, 4-3_0, 3) line has also been detected in the Brick as part of the MALT90 survey <cit.> and also with the Atacama Large Millimetre/submillimetre Array (ALMA) <cit.>, but no fractional abundance estimate was presented to compare with. §.§.§ HCN (Hydrogen Cyanide) The HCN fractional abundances we derived for clump A and clump B are comparable to each other ((4.5±0.8)× 10^-9 and (4.9±0.9)× 10^-9, respectively). Vasyunina et al. <cit.> detected HCN(1-0) in all of their 37 clumps, and the scaled fractional abundances lie in the range of (0.26-5.26)× 10^-9. The latter range brackets the HCN abundances we derived. Sanhueza et al. <cit.> also reported a high detection rate of HCN(1-0) for their sample (80%), but the fractional abundances were not derived owing to blended hyperfine components and complex line profiles. §.§.§ HCO^+ (Formyl Ion) The HCO^+ abundance towards clump A was derived to be (9.3±1.5)× 10^-10, while a value of (2.0±0.3)× 10^-9 was derived for clump B. 
Vasyunina et al. <cit.> detected HCO^+(1-0) in 31 clumps (83.8% detection rate), and the scaled fractional abundances lie in the range of (0.27-3.94)× 10^-8. The lowest value in this range is 2.9±0.5 and 1.4±0.2 times higher than the abundance we derived for clump A and clump B, respectively. Sanhueza et al. <cit.> reported a comparably high detection rate of HCO^+(1-0) for their sample (88%), and the scaled values of the fractional abundances lie in the range of (0.21-15.3)× 10^-8. Again, the HCO^+ abundance in our clumps is closer to the lower end of values in the Sanhueza et al. <cit.> sample. The authors found that the HCO^+ abundance increases as the clump evolves, which in turn could be related to the increasing temperature: warming leads to the release of CO from the icy grain mantles into the gas phase, out of which HCO^+ can then form primarily in the reaction with H_3^+. Because our target clumps are 70 μm dark and apparently quiescent, their low HCO^+ abundances are consistent with the aforementioned evolutionary trend. §.§.§ N_2H^+ (Diazenylium) We detected N_2H^+ only in clump B and derived an abundance of (7.9±1.7)× 10^-11 for the species. For comparison, Vasyunina et al. <cit.> detected N_2H^+(1-0) in all except one of their clumps (97.3% detection rate), and the scaled fractional abundances lie in the range of (0.15-7.74)× 10^-9. Our derived N_2H^+ abundance is roughly comparable to the lowest value in this range (a factor of 1.9±0.4 difference). Sanhueza et al. <cit.> reported a very high detection rate of 97% for the strongest hyperfine component of N_2H^+(1-0) (Table 4 therein), and the fractional abundances they derived are (0.1-9.2)× 10^-9. The lowest value in this range agrees with the abundance of N_2H^+ in clump B within a factor of 1.3±0.3. The authors found that the N_2H^+ abundance increases with clump evolution, and the low N_2H^+ abundance found for the quiescent clump B is consistent with this evolutionary trend. The physics and chemistry behind this trend might be related to the release of N_2 from dust grains as the temperature in the clump increases, after which it can react with H_3^+ to form N_2H^+. This process competes with the increasing abundance of CO, which destroys N_2H^+ in a reaction that produces HCO^+ and N_2. §.§.§ [N_2H^+]/[HCO^+] Abundance Ratio and [N_2D^+]/[N_2H^+] Deuteration Sanhueza et al. <cit.> found that the [N_2H^+]/[HCO^+] ratio can be used as a chemical clock, where the ratio decreases as the clump evolves from the intermediate stage (with enhanced 4.5 μm emission or an embedded 24 μm source) via an active stage (both enhanced 4.5 μm emission and a 24 μm source) to the red clump stage (association with bright 8 μm emission). However, quiescent or IR-dark clumps were not found to follow this trend (see Figure 16 in their study). The [N_2H^+]/[HCO^+] abundance ratio for clump B is 0.04±0.01 (calculated from the column density ratio), which is two times lower than the median value of 0.08 derived by Sanhueza et al. <cit.> for their quiescent clumps, and also lower than the median value of 0.07 the authors derived for red clumps. Hence, our quiescent clump B also does not appear to follow the trend seen in the median values of [N_2H^+]/[HCO^+] by Sanhueza et al. <cit.>. Using a 3σ intensity upper limit of T_ MB<4 mK for N_2D^+(1-0) towards clump B, and assuming the same line width (FWHM) and T_ ex as for the detected N_2H^+(1-0) line, we derived an N_2D^+ column density upper limit of <1.8 × 10^11 cm^-2.
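For such an upper limit, the peak-temperature limit is first turned into an integrated-intensity limit for an assumed Gaussian line shape, and in the optically thin LTE limit the column density then scales linearly with that integrated intensity at fixed T_ ex. A sketch is given below, where the FWHM value is a placeholder rather than the measured N_2H^+(1-0) width.

```python
import numpy as np

def gaussian_integrated_intensity(t_peak, fwhm):
    """W = sqrt(pi / (4 ln 2)) * T_peak * FWHM (~1.064 T_peak FWHM) for a Gaussian line."""
    return np.sqrt(np.pi / (4.0 * np.log(2.0))) * t_peak * fwhm

# 3-sigma peak-temperature limit of 4 mK quoted above; the FWHM below is a placeholder [km/s].
w_upper = gaussian_integrated_intensity(t_peak=0.004, fwhm=5.0)   # [K km/s]
# In the optically thin LTE limit, N(N2D+) is proportional to W, so this integrated-intensity
# limit maps onto the quoted column density upper limit once the N2H+(1-0) T_ex is adopted.
print(w_upper)
```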
This column density upper limit suggests that the [N_2D^+]/[N_2H^+] deuterium fractionation in clump B is <0.05. For comparison, using observations with the IRAM 30 m telescope, Fontani et al. <cit.> derived [N_2D^+]/[N_2H^+] values in the range of ≤0.004-0.02 for their sample of ten high-mass young stellar objects. Fontani et al. <cit.> studied a sample of high-mass starless cores, high-mass protostellar objects (HMPOs), and ultracompact HII regions, and found [N_2D^+]/[N_2H^+] ratios of , 0.017– ≤ 0.4, and 0.017– ≤ 0.08 for these different evolutionary stages. The average deuteration values were found to be ∼0.26, 0.037, and 0.044, respectively, showing a decreasing trend in [N_2D^+]/[N_2H^+] when the source evolves from a starless stage to an HMPO. Gerner et al. <cit.> found that the [N_2D^+]/[N_2H^+] ratio in IRDCs drops from a median value of about 0.032 to about 0.009 in HMPOs (see Figure 5 therein). Based on observations with APEX, Lackington et al. <cit.> derived [N_2D^+]/[N_2H^+] ratios between 0.002 and 0.23 for their sample of 29 cores in IRDCs. The [N_2D^+]/[N_2H^+] upper limit we derived for the 70 μm dark clump B is consistent with many of the deuteration levels observed in massive clumps and other IRDCs, but the present data do not allow us to quantify the potential effect that the environment might have on the deuteration in G1.75-0.08 (e.g., whether it is exceptionally low compared to IRDCs in other parts of the Galaxy). § CONCLUSIONS We used the Yebes 40 m and IRAM 30 m telescopes to make the first single-pointed spectral line observations towards the IRDC G1.75-0.08. These new observations were used to study the kinematics, dynamics, and molecular abundances of the cloud and its clumps. The new data allowed us to revise the gas velocity dispersion-dependent physical properties of the target source. Our main results are summarised as follows: * Three different molecular line transitions were unambiguously detected towards the clumps in G1.75-0.08 with Yebes, namely, HNCO(J_K_a, K_c=4_0, 4-3_0, 3), HCN(J=1-0), and HCO^+(J=1-0). With the IRAM 30 m telescope, we detected only N_2H^+(J=1-0) towards clump B. * The HCN and HNCO spectra exhibit two velocity components, which give an impression of red asymmetric line profiles that would be an indication of expanding gas motions. * Our new spectral line data support the view that the G1.75-0.08 filament is strongly subcritical (by a factor of 11±6), which is atypical compared to the general population of Galactic molecular cloud filaments. * Both clumps at the ends of the G1.75-0.08 filament were found to be gravitationally unbound (α_ vir>2). Because the clumps are 70 μm dark and massive (several 10^3 M_⊙), they can be considered candidates for being high-mass starless clumps, but not prestellar. * The fractional abundances of the detected species in the target clumps are consistent with those observed in other IRDCs. The IRDC G1.75-0.08 lies about 270 pc from the Galactic Centre in the CMZ, and could be an analogue of the CMZ cloud G0.253+0.016 (the Brick), which has been found to be a dynamically complex and hierarchically structured system rather than a single, coherent cloud <cit.>.
High-resolution spectral line imaging of G1.75-0.08 would be needed to quantify the cloud's velocity structure; examine whether the orbital dynamics and shear motions in the CMZ could affect the cloud, as suggested in the case of the Brick; and test the hypothesis that the origin of the two clumps at the ends of the filament could be the result of gravitational focussing or the edge effect. Conceptualisation, O.M.; observations, O.M. and M.S.-G.; data reduction, O.M. and M.S.-G.; methodology, O.M. and M.S.-G.; formal analysis, O.M.; investigation, O.M. and M.S.-G.; data curation, O.M. and M.S.-G.; writing—original draft preparation, O.M.; writing—review and editing, O.M. and M.S.-G.; visualisation, O.M. All authors have read and agreed to the published version of the manuscript. This research received no external funding. Not applicable. Not applicable. The Yebes and IRAM spectral line data that support the findings of this study are available upon request from the corresponding author. The APEX dust continuum data are openly accessible online through the ESO Science Archive (<https://archive.eso.org/wdb/wdb/eso/apex/form>). We thank the two anonymous reviewers for providing useful comments and suggestions that helped to improve the quality of this paper. We are grateful to the staff at the Yebes 40 m telescope for performing the service mode observations for project 23A004 and to the staff at the IRAM 30 m telescope for performing the service mode observations presented in this paper. This research has made use of NASA's Astrophysics Data System Bibliographic Services, the NASA/IPAC Infrared Science Archive, which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration, and Astropy (<www.astropy.org>, accessed on 1 April 2023), a community-developed core Python package for Astronomy <cit.>. The authors declare no conflicts of interest. Abbreviations The following abbreviations are used in this manuscript: ALMA Atacama Large Millimetre/submillimetre Array APEX Atacama Pathfinder EXperiment ArTéMiS Architectures de bolomètres pour des Télescopes à grand champ de vue dans le domaine sub-Millimétrique au Sol CDMS Cologne Database for Molecular Spectroscopy CLASS Continuum and Line Analysis Single-dish Software CMZ Central Molecular Zone EMIR Eight MIxer Receiver FFTS Fast-Fourier Transform Spectrometer FWHM Full width at half maximum GILDAS Grenoble Image and Line Data Analysis Software HMPO High-mass protostellar object HPBW Half-power beam width IPAC Infrared Processing & Analysis Center IRAM Institut de Radioastronomie Millimétrique IRDC Infrared dark cloud JPL Jet Propulsion Laboratory LABOCA Large APEX BOlometer CAmera LSR Local standard of rest LTE Local thermodynamic equilibrium MALT90 Millimetre Astronomy Legacy Team 90 GHz NASA National Aeronautics and Space Administration PWV Precipitable water vapour SFR Star formation rate VESPA Versatile SPectrometer Array References [1]perault1996 Pérault, M.; Omont, A.; Simon, G.; Seguin, P.; Ojha, D.; Blommaert, J.; Felli, M.; Gilmore, G.; Guglielmo, F.; Habing, H.; et al. First ISOCAM images of the Milky Way. Astronomy and Astrophysics 1996, 315, L165. [2]egan1998 Egan, M.P.; Shipman, R.F.; Price, S.D.; Carey, S.J.; Clark F.O.; Cohen, M. A Population of Cold Cores in the Galactic Plane. The Astrophysical Journal 1998, 494, L199. [3]peretto2009 Peretto, N.; Fuller, G.A. The initial conditions of stellar protocluster formation. I.
A catalogue of Spitzer dark clouds. Astronomy and Astrophysics 2009, 505, 405. [4]jackson2010 Jackson, J.M.; Finn, S.C.; Chambers, E.T.; Rathborne, J.M.; Simon, R. The "Nessie" Nebula: Cluster Formation in a Filamentary Infrared Dark Cloud. The Astrophysical Journal Letters 2010, 719, L185. [5]kainulainen2013 Kainulainen, J.; Ragan, S.E.; Henning, T.; Stutz, A. High-fidelity view of the structure and fragmentation of the high-mass, filamentary IRDC G11.11-0.12. Astronomy & Astrophysics 2013, 557, A120. [6]henshaw2016 Henshaw, J.D.; Caselli, P.; Fontani, F.; Jiménez-Serra, I.; Tan, J. C.; Longmore, S. N.; Pineda, J. E.; Parker, R. J.; Barnes, A. T. Investigating the structure and fragmentation of a highly filamentary IRDC. Monthly Notices of the Royal Astronomical Society 2016, 463, 146. [7]miettinen2018 Miettinen, O. The Seahorse Nebula: New views of the filamentary infrared dark cloud G304.74+01.32 from SABOCA, Herschel, and WISE. Astronomy & Astrophysics 2018, 609, A123. [8]rathborne2006 Rathborne, J.M.; Jackson, J.M.; Simon, R. Infrared Dark Clouds: Precursors to Star Clusters. The Astrophysical Journal 2006, 641, 389. [9]beuther2007 Beuther, H.; Steinacker, J. The Protostar in the Massive Infrared Dark Cloud IRDC 18223-3. The Astrophysical Journal 2007, 656, L85. [10]chambers2009 Chambers, E.T.; Jackson, J.M.; Rathborne, J.M.; Simon, R. Star Formation Activity of Cores within Infrared Dark Clouds. The Astrophysical Journal Supplement 2009, 181, 360. [11]battersby2010 Battersby, C.; Bally, J.; Jackson, J.M.; Ginsburg, A.; Shirley, Y.L.; Schlingman, W.; Glenn, J. An Infrared Through Radio Study of the Properties and Evolution of IRDC Clumps. The Astrophysical Journal 2010, 721, 222. [12]retes2020 Retes-Romero, R.; Mayya, Y.D.; Luna, A.; Carrasco, L. Infrared Dark Clouds and High-mass Star Formation Activity in Galactic Molecular Clouds. The Astrophysical Journal 2020, 897, 53. [13]motte2018 Motte, F.; Bontemps, S.; Louvet, F. High-Mass Star and Massive Cluster Formation in the Milky Way. Annual Review of Astronomy and Astrophysics 2018, 56, 41. [14]sanhueza2017 Sanhueza, P.; Jackson, J.M.; Zhang, Q.; Guzmán, A. E.; Lu, X.; Stephens, I. W.; Wang, K.; Tatematsu, K. A Massive Prestellar Clump Hosting No High-mass Cores. The Astrophysical Journal 2017, 841, 97. [15]siringo2009 Siringo, G.; Kreysa, E.; Kovács, A.; Schuller, F.; Weiß, A.; Esch, W.; Gemünd, H.-P.; Jethava, N.; Lundershausen, G.; Colin, A. The Large APEX BOlometer CAmera LABOCA. Astronomy and Astrophysics 2009, 497, 945. [16]miettinen2012 Miettinen, O. LABOCA 870 m dust continuum mapping of selected infrared-dark cloud regions in the Galactic planeAstronomy & Astrophysics 2012, 542, A101. [17]miettinen2014 Miettinen, O. A MALT90 study of the chemical properties of massive clumps and filaments of infrared dark clouds. Astronomy & Astrophysics 2014, 562, A3. [18]foster2011 Foster, J.B.; Jackson, J.M.; Barnes, P.J.; Barris, E.; Brooks, K.; Cunningham, M.; Finn, S. C.; Fuller, G. A.; Longmore, S. N.; Mascoop, J. L.; et al. The Millimeter Astronomy Legacy Team 90 GHz (MALT90) Pilot Survey. The Astrophysical Journal Supplement 2011, 197, 25. [19]foster2013 Foster, J.B.; Rathborne, J.M.; Sanhueza, P.; Claysmith, C.; Whitaker, J.S.; Jackson, J.M.; Mascoop, J.L.; Wienen, M.; Breen, S.L.; Herpin, F.; et al. Characterisation of the MALT90 Survey and the Mopra Telescope at 90 GHz. Publications of the Astronomical Society of Australia 2013, 30, e038. 
[20]jackson2013 Jackson, J.M.; Rathborne, J.M.; Foster, J.B.; Whitaker, J.S.; Sanhueza, P.; Claysmith, C.; Mascoop, J.L.; Wienen, M.; Breen, S.L.; Herpin, F.; et al. MALT90: The Millimetre Astronomy Legacy Team 90 GHz Survey. Publications of the Astronomical Society of Australia 2013, 30, e057. [21]miettinen2022 Miettinen, O.; Mattern, M.; André, P. ArTéMiS imaging of the filamentary infrared dark clouds G1.75-0.08 and G11.36+0.80: Dust-based physical properties of the clouds and their clumps. Astronomy & Astrophysics 2022, 667, A90. [22]reveret2014 Revéret, V.; André, P.; Le Pennec, J.; Talvard, M.; Agnèse, P.; Arnaud, A.; Clerc, L.; de Breuck, C.; Cigna, J.-C.; Delisle, C.; et al. The ArTéMiS wide-field sub-millimeter camera: preliminary on-sky performance at 350 microns. Proceedings of the SPIE 2014, 9153, 915305. [23]andre2016 André, P.; Revéret, V.; Könyves, V.; Arzoumanian, D.; Tigé, J.; Gallais, P.; Roussel, H.; Le Pennec, J.; Rodriguez, L.; Doumayrou, E.; et al. Characterizing filaments in regions of high-mass star formation: High-resolution submilimeter imaging of the massive star-forming complex NGC 6334 with ArTéMiS. Astronomy & Astrophysics 2016, 592, A54. [24]talvard2018 Talvard, M.; Revéret, V.; Le-Pennec, Y.; André, Ph.; Arnaud, A.; Clerc, L.; de Breuck, C.; Delisle, C.; Doumayrou, E.; Duband, L.; et al. Latest results and prospects of the ArTeMiS camera on APEX. Proceedings of the SPIE 2018, 10708, 1070838. [25]petkova2023 Petkova, M.A.; Kruijssen, J.M.D.; Kluge, A.L.; Glover, S.C.O.; Walker, D.L.; Longmore, S.N.; Henshaw, J.D.; Reissl, S.; Dale, J.E. The complex multiscale structure in simulated and observed emission maps of the proto-cluster cloud G0.253+0.016 ('the Brick'). Monthly Notices of the Royal Astronomical Society 2023, 520, 2245. [26]henshaw2023 Henshaw, J.D.; Barnes, A.T.; Battersby, C.; Ginsburg, A.; Sormani, M.C.; Walker, D. L. Star Formation in the Central Molecular Zone of the Milky Way. In Protostars and Planets VII ; Inutsuka, S., Aikawa, Y., Muto, T., Tomida, K., Tamura, M., Eds.; San Francisco: Astronomical Society of the Pacific; 2023; Volume 534, p. 83. [27]tercero2021 Tercero, F.; López-Pérez, J.A.; Gallego, J.D.; Beltrán, F.; García, O.; Patino-Esteban, M.; López-Fernández, I.; Gómez-Molina, G.; Diez, M.; García-Carreño; et al. Yebes 40 m radio telescope and the broad band Nanocosmos receivers at 7 mm and 3 mm for line surveys. Astronomy & Astrophysics 2021, 645, A37. [28]silva2023 Silva, W.G.D.P.; Cernicharo, J.; Schlemmer, S.; Marcelino, N.; Loison, J.-C.; Agúndez, M.; Gupta, D.; Wakelam, V.; Thorwirth, S.; Cabezas, C.; et al. Discovery of H_2CCCH^+ in TMC-1. Astronomy & Astrophysics 2023, 676, L1. [29]agundez2023 Agúndez, M.; Marcelino, N.; Tercero, B.; Jiménez-Serra, I.; Cernicharo, J. Abundance and excitation of molecular anions in interstellar clouds. Astronomy & Astrophysics 2023, 677, A106. [30]tercero2024 Tercero, B.; Marcelino, N.; Roueff, E.; Agúndez, M.; Cabezas, C.; Fuentetaja, R.; de Vicente, P.; Cernicharo, J. Doubly substituted isotopologues of HCCCN in TMC-1: Detection of D^13CCCN, DC^13CCN, DCC^13CN, DCCC^15N, H^13C^13CCN, H^13CC^13CN, HC^13C^13CN, HCC^13C^15N, and HC^13CC^15N. Astronomy & Astrophysics 2024, 682, L12. [31]carter2012 Carter, M.; Lazareff, B.; Maier, D.; Chenu, J.-Y.; Fontana, A.-L.; Bortolotti, Y.; Boucher, C.; Navarrini, A.; Blanchet, S.; Greve, A.; et al. The EMIR multi-band mm-wave receiver for the IRAM 30-m telescope. Astronomy & Astrophysics 2012, 538, A89. 
http://arxiv.org/abs/2406.19287v1
20240627160320
Isotropy of cosmic rays beyond $10^{20}$ eV favors their heavy mass composition
[ "Telescope Array Collaboration", "R. U. Abbasi", "Y. Abe", "T. Abu-Zayyad", "M. Allen", "Y. Arai", "R. Arimura", "E. Barcikowski", "J. W. Belz", "D. R. Bergman", "S. A. Blake", "I. Buckland", "B. G. Cheon", "M. Chikawa", "T. Fujii", "K. Fujisue", "K. Fujita", "R. Fujiwara", "M. Fukushima", "G. Furlich", "N. Globus", "R. Gonzalez", "W. Hanlon", "N. Hayashida", "H. He", "R. Hibi", "K. Hibino", "R. Higuchi", "K. Honda", "D. Ikeda", "N. Inoue", "T. Ishii", "H. Ito", "D. Ivanov", "A. Iwasaki", "H. M. Jeong", "S. Jeong", "C. C. H. Jui", "K. Kadota", "F. Kakimoto", "O. Kalashev", "K. Kasahara", "S. Kasami", "S. Kawakami", "K. Kawata", "I. Kharuk", "E. Kido", "H. B. Kim", "J. H. Kim", "J. H. Kim", "S. W. Kim", "Y. Kimura", "I. Komae", "V. Kuzmin", "M. Kuznetsov", "Y. J. Kwon", "K. H. Lee", "B. Lubsandorzhiev", "J. P. Lundquist", "H. Matsumiya", "T. Matsuyama", "J. N. Matthews", "R. Mayta", "K. Mizuno", "M. Murakami", "I. Myers", "K. H. Lee", "S. Nagataki", "K. Nakai", "T. Nakamura", "E. Nishio", "T. Nonaka", "H. Oda", "S. Ogio", "M. Onishi", "H. Ohoka", "N. Okazaki", "Y. Oku", "T. Okuda", "Y. Omura", "M. Ono", "A. Oshima", "H. Oshima", "S. Ozawa", "I. H. Park", "K. Y. Park", "M. Potts", "M. S. Pshirkov", "J. Remington", "D. C. Rodriguez", "C. Rott", "G. I. Rubtsov", "D. Ryu", "H. Sagawa", "R. Saito", "N. Sakaki", "T. Sako", "N. Sakurai", "D. Sato", "K. Sato", "S. Sato", "K. Sekino", "P. D. Shah", "N. Shibata", "T. Shibata", "J. Shikita", "H. Shimodaira", "B. K. Shin", "H. S. Shin", "D. Shinto", "J. D. Smith", "P. Sokolsky", "B. T. Stokes", "T. A. Stroman", "Y. Takagi", "K. Takahashi", "M. Takamura", "M. Takeda", "R. Takeishi", "A. Taketa", "M. Takita", "Y. Tameda", "K. Tanaka", "M. Tanaka", "Y. Tanoue", "S. B. Thomas", "G. B. Thomson", "P. Tinyakov", "I. Tkachev", "H. Tokuno", "T. Tomida", "S. Troitsky", "R. Tsuda", "Y. Tsunesada", "S. Udo", "F. Urban", "D. Warren", "T. Wong", "K. Yamazaki", "K. Yashiro", "F. Yoshida", "Y. Zhezher", "Z. Zundel" ]
astro-ph.HE
[ "astro-ph.HE" ]
Department of Physics, Loyola University Chicago, Chicago, Illinois 60660, USA Academic Assembly School of Science and Technology Institute of Engineering, Shinshu University, Nagano, Nagano 380-8554, Japan Department of Physics, Loyola University Chicago, Chicago, Illinois 60660, USA High Energy Astrophysics Institute and Department of Physics and Astronomy, University of Utah, Salt Lake City, Utah 84112-0830, USA High Energy Astrophysics Institute and Department of Physics and Astronomy, University of Utah, Salt Lake City, Utah 84112-0830, USA Graduate School of Science, Osaka Metropolitan University, Sugimoto, Sumiyoshi, Osaka 558-8585, Japan Graduate School of Science, Osaka Metropolitan University, Sugimoto, Sumiyoshi, Osaka 558-8585, Japan High Energy Astrophysics Institute and Department of Physics and Astronomy, University of Utah, Salt Lake City, Utah 84112-0830, USA High Energy Astrophysics Institute and Department of Physics and Astronomy, University of Utah, Salt Lake City, Utah 84112-0830, USA High Energy Astrophysics Institute and Department of Physics and Astronomy, University of Utah, Salt Lake City, Utah 84112-0830, USA High Energy Astrophysics Institute and Department of Physics and Astronomy, University of Utah, Salt Lake City, Utah 84112-0830, USA High Energy Astrophysics Institute and Department of Physics and Astronomy, University of Utah, Salt Lake City, Utah 84112-0830, USA Department of Physics and The Research Institute of Natural Science, Hanyang University, Seongdong-gu, Seoul 426-791, Korea Institute for Cosmic Ray Research, University of Tokyo, Kashiwa, Chiba 277-8582, Japan Graduate School of Science, Osaka Metropolitan University, Sugimoto, Sumiyoshi, Osaka 558-8585, Japan Nambu Yoichiro Institute of Theoretical and Experimental Physics, Osaka Metropolitan University, Sugimoto, Sumiyoshi, Osaka 558-8585, Japan Institute of Physics, Academia Sinica, Taipei City 115201, Taiwan Institute for Cosmic Ray Research, University of Tokyo, Kashiwa, Chiba 277-8582, Japan Institute for Cosmic Ray Research, University of Tokyo, Kashiwa, Chiba 277-8582, Japan Graduate School of Science, Osaka Metropolitan University, Sugimoto, Sumiyoshi, Osaka 558-8585, Japan Institute for Cosmic Ray Research, University of Tokyo, Kashiwa, Chiba 277-8582, Japan High Energy Astrophysics Institute and Department of Physics and Astronomy, University of Utah, Salt Lake City, Utah 84112-0830, USA Presently at: KIPAC, Stanford University, Stanford, CA 94305, USA Astrophysical Big Bang Laboratory, RIKEN, Wako, Saitama 351-0198, Japan High Energy Astrophysics Institute and Department of Physics and Astronomy, University of Utah, Salt Lake City, Utah 84112-0830, USA High Energy Astrophysics Institute and Department of Physics and Astronomy, University of Utah, Salt Lake City, Utah 84112-0830, USA Faculty of Engineering, Kanagawa University, Yokohama, Kanagawa 221-8686, Japan Presently at: Purple Mountain Observatory, Nanjing 210023, China Astrophysical Big Bang Laboratory, RIKEN, Wako, Saitama 351-0198, Japan Academic Assembly School of Science and Technology Institute of Engineering, Shinshu University, Nagano, Nagano 380-8554, Japan Faculty of Engineering, Kanagawa University, Yokohama, Kanagawa 221-8686, Japan Astrophysical Big Bang Laboratory, RIKEN, Wako, Saitama 351-0198, Japan Interdisciplinary Graduate School of Medicine and Engineering, University of Yamanashi, Kofu, Yamanashi 400-8511, Japan Faculty of Engineering, Kanagawa University, Yokohama, Kanagawa 221-8686, Japan The Graduate 
School of Science and Engineering, Saitama University, Saitama, Saitama 338-8570, Japan Interdisciplinary Graduate School of Medicine and Engineering, University of Yamanashi, Kofu, Yamanashi 400-8511, Japan Astrophysical Big Bang Laboratory, RIKEN, Wako, Saitama 351-0198, Japan High Energy Astrophysics Institute and Department of Physics and Astronomy, University of Utah, Salt Lake City, Utah 84112-0830, USA Graduate School of Science, Osaka Metropolitan University, Sugimoto, Sumiyoshi, Osaka 558-8585, Japan Department of Physics, SungKyunKwan University, Jang-an-gu, Suwon 16419, Korea Department of Physics, SungKyunKwan University, Jang-an-gu, Suwon 16419, Korea High Energy Astrophysics Institute and Department of Physics and Astronomy, University of Utah, Salt Lake City, Utah 84112-0830, USA Department of Physics, Tokyo City University, Setagaya-ku, Tokyo 158-8557, Japan Faculty of Engineering, Kanagawa University, Yokohama, Kanagawa 221-8686, Japan Institute for Nuclear Research of the Russian Academy of Sciences, Moscow 117312, Russia Faculty of Systems Engineering and Science, Shibaura Institute of Technology, Minato-ku, Tokyo 337-8570, Japan Graduate School of Engineering, Osaka Electro-Communication University, Hatsu-cho, Neyagawa-shi, Osaka 572-8530, Japan Graduate School of Science, Osaka Metropolitan University, Sugimoto, Sumiyoshi, Osaka 558-8585, Japan Institute for Cosmic Ray Research, University of Tokyo, Kashiwa, Chiba 277-8582, Japan Institute for Nuclear Research of the Russian Academy of Sciences, Moscow 117312, Russia Astrophysical Big Bang Laboratory, RIKEN, Wako, Saitama 351-0198, Japan Department of Physics and The Research Institute of Natural Science, Hanyang University, Seongdong-gu, Seoul 426-791, Korea High Energy Astrophysics Institute and Department of Physics and Astronomy, University of Utah, Salt Lake City, Utah 84112-0830, USA Presently at: Physics Department, Brookhaven National Laboratory, Upton, NY 11973, USA High Energy Astrophysics Institute and Department of Physics and Astronomy, University of Utah, Salt Lake City, Utah 84112-0830, USA Presently at: Korea Institute of Geoscience and Mineral Resources, Daejeon, 34132, Korea Department of Physics, Sungkyunkwan University, Jang-an-gu, Suwon 16419, Korea Graduate School of Science, Osaka Metropolitan University, Sugimoto, Sumiyoshi, Osaka 558-8585, Japan Graduate School of Science, Osaka Metropolitan University, Sugimoto, Sumiyoshi, Osaka 558-8585, Japan Deceased Institute for Nuclear Research of the Russian Academy of Sciences, Moscow 117312, Russia mkuzn@inr.ac.ru Service de Physique Théorique, Université Libre de Bruxelles, Brussels 1050, Belgium Institute for Nuclear Research of the Russian Academy of Sciences, Moscow 117312, Russia Department of Physics, Yonsei University, Seodaemun-gu, Seoul 120-749, Korea Department of Physics, SungKyunKwan University, Jang-an-gu, Suwon 16419, Korea Institute for Nuclear Research of the Russian Academy of Sciences, Moscow 117312, Russia Center for Astrophysics and Cosmology, University of Nova Gorica, Nova Gorica 5297, Slovenia High Energy Astrophysics Institute and Department of Physics and Astronomy, University of Utah, Salt Lake City, Utah 84112-0830, USA Graduate School of Science, Osaka Metropolitan University, Sugimoto, Sumiyoshi, Osaka 558-8585, Japan Graduate School of Science, Osaka Metropolitan University, Sugimoto, Sumiyoshi, Osaka 558-8585, Japan High Energy Astrophysics Institute and Department of Physics and Astronomy, University of Utah, Salt Lake 
City, Utah 84112-0830, USA Graduate School of Science, Osaka Metropolitan University, Sugimoto, Sumiyoshi, Osaka 558-8585, Japan Academic Assembly School of Science and Technology Institute of Engineering, Shinshu University, Nagano, Nagano 380-8554, Japan Graduate School of Engineering, Osaka Electro-Communication University, Hatsu-cho, Neyagawa-shi, Osaka 572-8530, Japan High Energy Astrophysics Institute and Department of Physics and Astronomy, University of Utah, Salt Lake City, Utah 84112-0830, USA Department of Physics and The Research Institute of Natural Science, Hanyang University, Seongdong-gu, Seoul 426-791, Korea Astrophysical Big Bang Laboratory, RIKEN, Wako, Saitama 351-0198, Japan Graduate School of Science, Osaka Metropolitan University, Sugimoto, Sumiyoshi, Osaka 558-8585, Japan Faculty of Science, Kochi University, Kochi, Kochi 780-8520, Japan Graduate School of Engineering, Osaka Electro-Communication University, Hatsu-cho, Neyagawa-shi, Osaka 572-8530, Japan Institute for Cosmic Ray Research, University of Tokyo, Kashiwa, Chiba 277-8582, Japan Graduate School of Science, Osaka Metropolitan University, Sugimoto, Sumiyoshi, Osaka 558-8585, Japan Institute for Cosmic Ray Research, University of Tokyo, Kashiwa, Chiba 277-8582, Japan Institute for Cosmic Ray Research, University of Tokyo, Kashiwa, Chiba 277-8582, Japan Institute for Cosmic Ray Research, University of Tokyo, Kashiwa, Chiba 277-8582, Japan Institute for Cosmic Ray Research, University of Tokyo, Kashiwa, Chiba 277-8582, Japan Graduate School of Engineering, Osaka Electro-Communication University, Hatsu-cho, Neyagawa-shi, Osaka 572-8530, Japan Department of Physical Sciences, Ritsumeikan University, Kusatsu, Shiga 525-8577, Japan Graduate School of Science, Osaka Metropolitan University, Sugimoto, Sumiyoshi, Osaka 558-8585, Japan Astrophysical Big Bang Laboratory, RIKEN, Wako, Saitama 351-0198, Japan College of Engineering, Chubu University, Kasugai, Aichi 487-8501, Japan Institute for Cosmic Ray Research, University of Tokyo, Kashiwa, Chiba 277-8582, Japan Quantum ICT Advanced Development Center, National Institute for Information and Communications Technology, Koganei, Tokyo 184-8795, Japan Department of Physics, SungKyunKwan University, Jang-an-gu, Suwon 16419, Korea Department of Physics and The Research Institute of Natural Science, Hanyang University, Seongdong-gu, Seoul 426-791, Korea High Energy Astrophysics Institute and Department of Physics and Astronomy, University of Utah, Salt Lake City, Utah 84112-0830, USA Institute for Nuclear Research of the Russian Academy of Sciences, Moscow 117312, Russia Sternberg Astronomical Institute, Moscow M.V. 
Lomonosov State University, Moscow 119991, Russia Presently at: NASA Marshall Space Flight Center, Huntsville, Alabama 35812, USA High Energy Astrophysics Institute and Department of Physics and Astronomy, University of Utah, Salt Lake City, Utah 84112-0830, USA High Energy Astrophysics Institute and Department of Physics and Astronomy, University of Utah, Salt Lake City, Utah 84112-0830, USA High Energy Astrophysics Institute and Department of Physics and Astronomy, University of Utah, Salt Lake City, Utah 84112-0830, USA Department of Physics, SungKyunKwan University, Jang-an-gu, Suwon 16419, Korea Institute for Nuclear Research of the Russian Academy of Sciences, Moscow 117312, Russia Department of Physics, School of Natural Sciences, Ulsan National Institute of Science and Technology, UNIST-gil, Ulsan 689-798, Korea Institute for Cosmic Ray Research, University of Tokyo, Kashiwa, Chiba 277-8582, Japan Academic Assembly School of Science and Technology Institute of Engineering, Shinshu University, Nagano, Nagano 380-8554, Japan Institute for Cosmic Ray Research, University of Tokyo, Kashiwa, Chiba 277-8582, Japan Institute for Cosmic Ray Research, University of Tokyo, Kashiwa, Chiba 277-8582, Japan Graduate School of Science, Osaka Metropolitan University, Sugimoto, Sumiyoshi, Osaka 558-8585, Japan Academic Assembly School of Science and Technology Institute of Engineering, Shinshu University, Nagano, Nagano 380-8554, Japan Graduate School of Science, Osaka Metropolitan University, Sugimoto, Sumiyoshi, Osaka 558-8585, Japan Graduate School of Engineering, Osaka Electro-Communication University, Hatsu-cho, Neyagawa-shi, Osaka 572-8530, Japan Institute for Cosmic Ray Research, University of Tokyo, Kashiwa, Chiba 277-8582, Japan High Energy Astrophysics Institute and Department of Physics and Astronomy, University of Utah, Salt Lake City, Utah 84112-0830, USA Graduate School of Engineering, Osaka Electro-Communication University, Hatsu-cho, Neyagawa-shi, Osaka 572-8530, Japan Institute for Cosmic Ray Research, University of Tokyo, Kashiwa, Chiba 277-8582, Japan Graduate School of Science, Osaka Metropolitan University, Sugimoto, Sumiyoshi, Osaka 558-8585, Japan Institute for Cosmic Ray Research, University of Tokyo, Kashiwa, Chiba 277-8582, Japan Department of Physics, School of Natural Sciences, Ulsan National Institute of Science and Technology, UNIST-gil, Ulsan 689-798, Korea Graduate School of Science, Osaka Metropolitan University, Sugimoto, Sumiyoshi, Osaka 558-8585, Japan Nambu Yoichiro Institute of Theoretical and Experimental Physics, Osaka Metropolitan University, Sugimoto, Sumiyoshi, Osaka 558-8585, Japan Graduate School of Engineering, Osaka Electro-Communication University, Hatsu-cho, Neyagawa-shi, Osaka 572-8530, Japan High Energy Astrophysics Institute and Department of Physics and Astronomy, University of Utah, Salt Lake City, Utah 84112-0830, USA High Energy Astrophysics Institute and Department of Physics and Astronomy, University of Utah, Salt Lake City, Utah 84112-0830, USA High Energy Astrophysics Institute and Department of Physics and Astronomy, University of Utah, Salt Lake City, Utah 84112-0830, USA High Energy Astrophysics Institute and Department of Physics and Astronomy, University of Utah, Salt Lake City, Utah 84112-0830, USA Graduate School of Engineering, Osaka Electro-Communication University, Hatsu-cho, Neyagawa-shi, Osaka 572-8530, Japan Institute for Cosmic Ray Research, University of Tokyo, Kashiwa, Chiba 277-8582, Japan Department of Physics, Tokyo 
University of Science, Noda, Chiba 162-8601, Japan Institute for Cosmic Ray Research, University of Tokyo, Kashiwa, Chiba 277-8582, Japan Institute for Cosmic Ray Research, University of Tokyo, Kashiwa, Chiba 277-8582, Japan Earthquake Research Institute, University of Tokyo, Bunkyo-ku, Tokyo 277-8582, Japan Institute for Cosmic Ray Research, University of Tokyo, Kashiwa, Chiba 277-8582, Japan Graduate School of Engineering, Osaka Electro-Communication University, Hatsu-cho, Neyagawa-shi, Osaka 572-8530, Japan Graduate School of Information Sciences, Hiroshima City University, Hiroshima, Hiroshima 731-3194, Japan Institute of Particle and Nuclear Studies, KEK, Tsukuba, Ibaraki 305-0801, Japan Graduate School of Science, Osaka Metropolitan University, Sugimoto, Sumiyoshi, Osaka 558-8585, Japan High Energy Astrophysics Institute and Department of Physics and Astronomy, University of Utah, Salt Lake City, Utah 84112-0830, USA High Energy Astrophysics Institute and Department of Physics and Astronomy, University of Utah, Salt Lake City, Utah 84112-0830, USA petr.tiniakov@ulb.be Service de Physique Théorique, Université Libre de Bruxelles, Brussels 1050, Belgium Institute for Nuclear Research of the Russian Academy of Sciences, Moscow 117312, Russia Institute for Nuclear Research of the Russian Academy of Sciences, Moscow 117312, Russia Graduate School of Science and Engineering, Tokyo Institute of Technology, Meguro, Tokyo 152-8550, Japan Academic Assembly School of Science and Technology Institute of Engineering, Shinshu University, Nagano, Nagano 380-8554, Japan Institute for Nuclear Research of the Russian Academy of Sciences, Moscow 117312, Russia Graduate School of Science, Osaka Metropolitan University, Sugimoto, Sumiyoshi, Osaka 558-8585, Japan Graduate School of Science, Osaka Metropolitan University, Sugimoto, Sumiyoshi, Osaka 558-8585, Japan Nambu Yoichiro Institute of Theoretical and Experimental Physics, Osaka Metropolitan University, Sugimoto, Sumiyoshi, Osaka 558-8585, Japan Faculty of Engineering, Kanagawa University, Yokohama, Kanagawa 221-8686, Japan CEICO, Institute of Physics, Czech Academy of Sciences, Prague 182 21, Czech Republic Astrophysical Big Bang Laboratory, RIKEN, Wako, Saitama 351-0198, Japan High Energy Astrophysics Institute and Department of Physics and Astronomy, University of Utah, Salt Lake City, Utah 84112-0830, USA College of Engineering, Chubu University, Kasugai, Aichi 487-8501, Japan Department of Physics, Tokyo University of Science, Noda, Chiba 162-8601, Japan Graduate School of Engineering, Osaka Electro-Communication University, Hatsu-cho, Neyagawa-shi, Osaka 572-8530, Japan Institute for Cosmic Ray Research, University of Tokyo, Kashiwa, Chiba 277-8582, Japan Institute for Nuclear Research of the Russian Academy of Sciences, Moscow 117312, Russia High Energy Astrophysics Institute and Department of Physics and Astronomy, University of Utah, Salt Lake City, Utah 84112-0830, USA The Telescope Array Collaboration § ABSTRACT We report an estimation of the injected mass composition of ultra-high energy cosmic rays (UHECRs) at energies higher than 10 EeV. The composition is inferred from an energy-dependent sky distribution of UHECR events observed by the Telescope Array surface detector by comparing it to the Large Scale Structure of the local Universe. 
In the case of negligible extra-galactic magnetic fields the results are consistent with a relatively heavy injected composition at E ∼ 10 EeV that becomes lighter up to E ∼ 100 EeV, while the composition at E > 100 EeV is very heavy. The latter is true even in the presence of the highest experimentally allowed extra-galactic magnetic fields, while the composition at lower energies can be light if a strong EGMF is present. The effect of the uncertainty in the galactic magnetic field on these results is subdominant. Isotropy of cosmic rays beyond 10^20 eV favors their heavy mass composition Z. Zundel =========================================================================== Ultra-high energy cosmic rays (UHECR) are charged particles, likely protons and nuclei, with energies greater than 1 EeV (10^18 eV) that reach the Earth from space. The flux of particles at these energies is tiny, of order 1 km^-2 sr^-1 yr^-1, so they can be detected only indirectly via the extensive air showers (EAS) of secondary particles they initiate in the Earth's atmosphere. Despite several decades of study, the origin of UHECR and the nature of their primary particles remain unknown. The UHECR energy spectrum has been measured with good precision <cit.>; its general shape is consistent between the two modern experiments, Pierre Auger (Auger) <cit.> and Telescope Array (TA) <cit.>, and with theoretical models <cit.>, except for a minor discrepancy <cit.> at the highest energies. The spectrum measurements alone, however, have a limited potential to discriminate between various models of UHECR origin. Mass composition measurements generally have better discriminating power. Unlike the spectrum, however, the mass composition measurements of Auger <cit.> and TA <cit.> are more affected by various systematic effects and do not cover the highest-energy part of the UHECR spectrum. At the same time, the UHECR arrival directions are measured with a sufficient precision, of order 1^∘. Unfortunately, this does not allow one to directly identify the sources, since the deflections of UHECR are highly uncertain, both because of the unknown event-by-event primary particle charges and because of large uncertainties in the galactic and extragalactic magnetic fields. Several approaches have been proposed in the literature to decipher the origin of UHECR using complex anisotropy observables <cit.>. In this letter we use a novel method to infer the injected UHECR mass composition from the arrival directions of the TA events. The method was proposed and described in detail in Ref. <cit.>. It takes advantage of the accurate measurements of UHECR arrival directions and energies, while circumventing the uncertainties arising from cosmic magnetic fields. The method is based on the observation that the magnitude of UHECR deflections is determined predominantly by the particle charges, which may range from 1 for protons to 26 for iron, while other factors are expected to give an order of magnitude smaller effect. Comparing the energy-dependent UHECR distribution over the sky calculated with various injected mass compositions to the observed distribution, one may identify the models that are compatible or incompatible with the data. At this stage, the parameters of the UHECR models other than the mass composition are fixed by some conservative assumptions. One may then vary these parameters to check whether the conclusions about the mass composition are robust with respect to this variation.
Somewhat similar approaches to estimating the UHECR mass composition from their anisotropy have been proposed in Refs. <cit.>. The Telescope Array <cit.> is the largest cosmic-ray experiment in the Northern Hemisphere. It is located at 39.3^∘ N, 112.9^∘ W in Utah, USA. The observatory includes a surface detector array (SD) and 38 fluorescence telescopes grouped in three stations. The SD consists of 507 plastic scintillator stations of 3 m^2 each, placed on a square grid with 1.2 km spacing and covering in total an area of ∼ 700 km^2. The TA SD can detect EAS produced by cosmic-ray particles of ∼ EeV and higher energies. The TA SD has been in operation since May 2008. In this analysis we use the data collected by the TA SD during 14 years of operation, from May 11, 2008 to May 10, 2022. We use the quality cuts described in Ref. <cit.>, and select events with zenith angle θ < 55^∘ and energy E > 10 EeV. We also use the data of the National Lightning Detection Network <cit.> to filter out events possibly caused by lightning, as described in Ref. <cit.>. The resulting data set contains 5978 events, including the event with the highest energy of 244 EeV <cit.> and 18 other events with E > 100 EeV. Each event that activates the SD trigger is recorded, and the kinematic parameters of its primary particle are reconstructed. The arrival direction is determined from the relative differences in the arrival times of the shower front at each surface detector, which are measured with a precision of 20 ns. The energy of the primary particle is estimated using the EAS particle density S_800 measured at a distance of 800 m from the shower axis. The measured value of S_800 is converted to the reconstructed SD energy, taking into account the zenith angle dependence, by means of a Monte-Carlo simulation that uses the CORSIKA software package <cit.>. Finally, the reconstructed SD energy is calibrated to the calorimetric energy measured by the fluorescence detectors; this amounts to a rescaling by a factor of 1/1.27 <cit.>. The resolution of the SD at E > 10 EeV is 1.4^∘ in arrival direction and 18% in the logarithm of the primary energy <cit.>. The systematic uncertainty in the energy determination is estimated at 21% <cit.>. The implementation of our method is organized in three steps. First, we generate a large mock set of realistic UHECR events for each injected composition model considered. Second, we define a test statistic (TS) that quantifies the overall magnitude of deflections of a given event set with respect to the Large Scale Structure of the Universe (LSS) and that is robust to the uncertainties of the magnetic fields. Finally, we calculate this TS for each mock event set as well as for the real data, and quantify the compatibility of each composition model with the data. The effect of the uncertainties in the magnetic fields and injection spectra is estimated by varying their parameters for each composition model. We now describe these steps in more detail, starting with a brief description of the key properties of the UHECR mock event sets; a more thorough discussion is given in the companion paper <cit.>. We assume that UHECR sources trace the matter distribution in the local Universe. Statistically, this can be achieved by assuming an equal intrinsic UHECR flux for each galaxy in a complete volume-limited sample.
In practice, we use a flux-limited galaxy sample with a high degree of completeness, derived from the 2MRS galaxy catalog <cit.> by cutting out galaxies with mag > 12.5 and with distances below 5 Mpc or beyond 250 Mpc. We assign a progressively larger flux to more distant galaxies to compensate for the observational selection inherent in a flux-limited sample. The sources beyond 250 Mpc are assumed to be distributed uniformly with the same mean density as those within this distance. Their contribution is added as a properly normalized fraction of isotropic events. The exact procedure is described in Ref. <cit.>. This source model covers all source scenarios with sufficiently numerous sources (source number density ρ ≫ 10^-5 Mpc^-3). Source densities of order 10^-5 Mpc^-3 are not excluded experimentally <cit.> (see, however, recent studies <cit.>). In this case the sensitivity of our method to the mass composition decreases; we discuss this issue in the companion paper <cit.>. We fix the injection spectrum for each nucleus by deriving it from a separate fit to the observed TA and Auger spectra <cit.>. As a result, the following spectra are taken for the UHECR flux simulation: power laws with slopes -2.55, -2.20 and -2.10 and without a cut-off for protons, helium and oxygen, respectively; a power law with slope -1.50 and a sharp cut-off at 280 EeV for silicon; and a power law with slope -1.95 and a sharp cut-off at 560 EeV for iron. The secondary particles produced upon propagation of the injected primary nuclei through the interstellar medium are taken into account for helium and oxygen nuclei and reasonably neglected for the other primaries; the details are given in Ref. <cit.>. We also consider separately a best-fit injected composition model from the Auger work <cit.>, where we take into account all the secondaries and model the deflection of the full flux according to its average charge. The deflections in magnetic fields are treated taking into account the primary particle charge Z and energy E. The deflections in the extra-galactic magnetic field (EGMF) are simulated as a direction-independent smearing of the sources with a von Mises-Fisher distribution. For our basic model its magnitude is set to zero, which corresponds to either B_EGMF ≪ 1 nG for a correlation length λ ∼ 1 Mpc or B_EGMF ≪ 0.1 nG for λ of a cosmological scale. We discuss the possible effect of a non-zero EGMF among other uncertainties. The deflections in the regular galactic magnetic field (GMF) are simulated using the backtracking technique with the GMF model of Ref. <cit.>. The deflections in the random GMF are simulated as a galactic-latitude-dependent smearing according to the data-driven relation of Ref. <cit.>. Finally, the event distribution is modulated by the geometrical exposure of the TA. The energies of the events in the mock sets are generated according to the observed TA spectrum, taking into account the TA energy resolution. In the companion paper <cit.> we estimate the impact of uncertainties in the energy scale and in the parameters of the injection spectra and magnetic fields on the inferred mass composition.
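As an illustration of the kind of smearing used in the mock sets, the short Python sketch below draws a mock arrival direction from a von Mises-Fisher distribution centred on a source direction, with the smearing angle scaled by the particle charge Z and by energy as 100 EeV/E. This is not the analysis code of the paper: the function names, the reference angle theta100_deg assigned to protons at 100 EeV, and the small-angle relation κ = 1/θ^2 between the smearing width and the concentration parameter are illustrative assumptions.

```python
import numpy as np

def sample_vmf(mu, kappa, rng):
    """Draw one unit vector from a von Mises-Fisher distribution with mean
    direction `mu` (unit 3-vector) and concentration `kappa`."""
    u = rng.random()
    # inverse-CDF sampling of cos(theta) about the mean direction
    cos_t = 1.0 + np.log(1.0 - u * (1.0 - np.exp(-2.0 * kappa))) / kappa
    sin_t = np.sqrt(max(0.0, 1.0 - cos_t ** 2))
    phi = 2.0 * np.pi * rng.random()
    v = np.array([sin_t * np.cos(phi), sin_t * np.sin(phi), cos_t])
    # rotate the z-axis onto mu (Rodrigues' formula)
    z = np.array([0.0, 0.0, 1.0])
    axis = np.cross(z, mu)
    norm = np.linalg.norm(axis)
    if norm < 1e-12:                       # mu is (anti-)parallel to z
        return v if mu[2] > 0 else -v
    axis /= norm
    angle = np.arccos(np.clip(mu[2], -1.0, 1.0))
    K = np.array([[0.0, -axis[2], axis[1]],
                  [axis[2], 0.0, -axis[0]],
                  [-axis[1], axis[0], 0.0]])
    R = np.eye(3) + np.sin(angle) * K + (1.0 - np.cos(angle)) * (K @ K)
    return R @ v

def smeared_direction(source_dir, Z, E_EeV, theta100_deg, rng):
    """Smear a source direction by an angle scaling as Z * (100 EeV / E),
    mimicking the charge- and energy-dependent smearing described above."""
    theta = np.radians(theta100_deg) * Z * (100.0 / E_EeV)
    kappa = 1.0 / theta ** 2               # small-angle width <-> concentration
    return sample_vmf(np.asarray(source_dir, dtype=float), kappa, rng)

rng = np.random.default_rng(0)
print(smeared_direction([0.0, 0.0, 1.0], Z=1, E_EeV=100.0, theta100_deg=2.0, rng=rng))
```

The same helper can be reused for both the EGMF test and the random-GMF smearing by changing only how the smearing angle is assigned.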
We define the test statistic (TS) using expected UHECR flux maps built by a procedure similar to that used for the mock set generation, but with a smaller number of free parameters. Namely, we use the same 2MRS-based source catalog, assume flux attenuation as for protons with an ∼ E^-2.55 injection spectrum without a cutoff, and apply a uniform smearing of the sources. The magnitude of this smearing, θ_100, defined at 100 EeV, is the only free parameter on which the TS depends. For each given value of θ_100 we build a set of maps Φ_k(θ_100, n), where n is the direction in the sky, k denotes the energy bin, and the smearing of each map scales properly, as 100 EeV/E_k. Then the test statistic TS(θ_100) for a given event set with directions n_i is defined as follows: TS(θ_100) = -2 ∑_k ( ∑_i ln[Φ_k(θ_100, n_i)/Φ_iso(n_i)] ), where the sums run over the events i and the energy bins k, and we have included a standard overall normalization factor of -2. The normalization factor Φ_iso(n_i) = Φ(∞, n_i), corresponding to an isotropic distribution, is added for convenience. More technical details on the TS construction are given in the companion paper <cit.>. In the limit of a large number of events, this test statistic is distributed around its minimum according to a χ^2-distribution with one degree of freedom. The position of the TS minimum θ_100^min for each event set is interpreted as the energy-rescaled mean event deflection with respect to the LSS. Thus, for a mock set of a given composition model and a very large number of events, the TS should have a deep and narrow minimum, with the value of θ_100^min being characteristic of this composition model. These values can then be confronted with TS(θ_100) evaluated for the data.
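The TS defined above reduces to a sum of log-ratios of model and isotropic maps evaluated at the event positions. The following Python sketch shows one way to evaluate it on a grid of smearing angles and locate θ_100^min; it is schematic rather than the actual analysis code, and the container layout (one model map per smearing angle and energy bin, with each event reduced to an energy-bin index and a sky-pixel index) is a hypothetical choice made for compactness.

```python
import numpy as np

def test_statistic(theta_grid, events, flux_maps, iso_map):
    """Evaluate TS(theta_100) on `theta_grid` and locate its minimum.

    events[m]           -> (k, pix): energy bin and sky pixel of event m
    flux_maps[theta][k] -> model map Phi_k(theta_100, n) as pixel values
    iso_map             -> isotropic reference map Phi_iso(n)
    """
    ts = np.empty(len(theta_grid))
    for a, theta in enumerate(theta_grid):
        log_ratio = sum(np.log(flux_maps[theta][k][pix] / iso_map[pix])
                        for k, pix in events)
        ts[a] = -2.0 * log_ratio
    theta_min = theta_grid[int(np.argmin(ts))]
    return ts, theta_min
```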
To estimate the mass composition we divide the energy range into 5 bins, starting from 10 EeV, with a quarter-decade width and with the last bin being an open interval E > 100 EeV. The dependence of TS(θ_100) on θ_100 for the data in each bin is shown in Fig. <ref>. The curves for all but the penultimate bin (red curve) are consistent, at the 2σ level, with isotropy, which corresponds to θ_100 = 200^∘ in our notation — a value that is beyond the size of the TA field of view. In the bin 19.75 < log_10[E/eV] < 20.0 the TS has a distinct minimum at θ_100^min = 30.8^∘ that deviates from isotropy with a significance of more than 2σ. In Fig. <ref> we present a bin-wise comparison of the data with various composition models. The data points correspond to the TS(θ_100) curves shown in Fig. <ref>: the central points show the values of θ_100^min in each bin, while the error bars represent the 1σ- and 2σ-deviations from the minimum as calculated from the corresponding curve. It should be stressed that, by definition, the data points show typical deflections of cosmic rays in the corresponding bin rescaled to E = 100 EeV. While the energy dependence of deflections is taken into account in this way, other factors, such as the difference in attenuation at different energies (and, therefore, the relative contribution of close and distant sources), are not. Hence the variation of θ_100^min from bin to bin. Regardless of these variations, it is manifest in Fig. <ref> that small values of θ_100 are not compatible with the data at all energies, which is already evident in Fig. <ref> from the steep rise of the curves at small θ_100. The colored lines in Fig. <ref> show predictions for the different composition models, which should be compared to the data. With our assumptions and zero EGMF, the pure proton composition (red line) is not compatible with the data, as it predicts θ_100^min ≲ 2^∘ in all energy bins. An injected light or intermediate composition is also incompatible with the data, since in this case the flux is dominated by secondary protons. At the same time, the data are compatible with injected silicon at all energies except E > 100 EeV and with injected iron at all energies except E ≳ 56 EeV. The Auger best-fit model is compatible with the data at the 2σ level. In general, one can see a trend: the preference for a heavier composition at 10 < E ≲ 18 EeV changes in favor of a lighter one at 56 ≲ E < 100 EeV, while at E > 100 EeV the data prefer a very heavy composition — even beyond iron. We now turn to the discussion of the uncertainties affecting these results, of which the most important are those related to the magnetic fields, the experimental energy scale and the injection spectrum. In our setup all these uncertainties affect only the positions of the model lines shown in Fig. <ref>. The injection spectrum uncertainty was tested by varying the spectrum parameters within ± 1σ around their best-fit values. This variation was found to have a negligible impact on the results; see Ref. <cit.> for details. To estimate the effect of the GMF uncertainty we generate new mock sets, this time assuming the regular GMF model of Ref. <cit.>. Note that the UHECR deflections in the two models are similar in magnitude but differ substantially in direction. The comparison of the predicted values of θ_100^min is shown in Fig. <ref>, left panel, for the same composition models as in Fig. <ref>. One can see that the predicted values of θ_100^min are quite close in almost all cases, so that the change of the GMF model does not change the level of compatibility of the composition models with the data. The EGMF is more uncertain than the GMF. To estimate its impact on the results, additional assumptions are required. In general, there are three possible regimes in which the EGMF may affect the UHECR deflections. First, there could be an intergalactic magnetic field (IGMF) in the voids of the Large Scale Structure. If its origin is not cosmological, its correlation length is expected not to exceed ∼ 1 Mpc <cit.>. In that case its strength is bounded from above as B_EGMF < 1.7 nG <cit.> and UHECR deflections are described by a uniform smearing <cit.>. It is straightforward to implement such a smearing in our simulation of mock sets. In the opposite case of an IGMF of cosmological origin, its amplitude is constrained to be B ≲ 0.05 nG for any correlation length <cit.>, which leads to deflections that are negligible compared to those in the GMF. Finally, the IGMF can be negligible, but there could be an EGMF in local extragalactic structures such as a local filament. There are no observational bounds on such fields; however, constrained astrophysical simulations predict their strength to be in the range 0.3 < B < 3 nG within ∼ 5 Mpc of our Galaxy <cit.>. Even in the conservative case the expected deflections in such a field would be several times smaller than the maximum possible deflections in the IGMF. Given all these considerations, we test the possible effect of the EGMF by conservatively assuming the highest allowed parameters for a non-cosmological field <cit.>: B_EGMF = 1.7 nG and λ_EGMF = 1 Mpc. This may lead to deflections as high as 7^∘ for protons at 100 EeV. We simulate such deflections by an additional direction-independent smearing of the sources that scales with the primary particle charge and energy. The results including both the GMF and the EGMF are shown in Fig. <ref>, right panel, in comparison with the zero-EGMF case.
As one can see from the plot, the inclusion of the maximum allowed EGMF significantly increases the value of θ_100^min in all models and makes even the pure proton composition compatible with the data in the lower energy bins at the 2σ level. In the last bin, corresponding to E > 100 EeV, this increase is not sufficient, except in the case of a pure iron composition, which becomes fully compatible with the data. The impact of the systematic uncertainty in the experiment's energy scale is of the same order as, or smaller than, the impact of the GMF uncertainty. A more detailed discussion of all the mentioned uncertainties is given in Ref. <cit.>. The interpretation of the results differs significantly depending on the assumed deflections in the EGMF, while the difference due to the GMF assumptions is subdominant. As mentioned above, in the case of a negligible EGMF the data prefer a heavy composition at low energies, a relatively light one at 56 ≲ E < 100 EeV and a very heavy one (beyond iron) at E > 100 EeV. The latter result is in agreement with Ref. <cit.>, which finds that the highest-energy TA event is not correlated with the LSS unless its deflection is very large. In the case of an extreme EGMF the data are consistent with both heavy and intermediate compositions at E < 100 EeV. In particular, oxygen and even proton compositions become more compatible with the data at E ≲ 56 EeV. Importantly, the evidence for a heavy composition at E > 100 EeV survives the assumption of even an extremely strong EGMF, while a light or intermediate composition remains in tension with the data. For instance, to reconcile a proton or helium composition with the data at E > 100 EeV at least at the 2σ level, the EGMF would have to be stronger than 20 nG for λ = 1 Mpc, which is far beyond the upper limit discussed earlier. It is also interesting that pure silicon is compatible with the data from 10 EeV up to 100 EeV irrespective of the EGMF. In conclusion, an important comment concerning the interpretation of our results in the low energy bins is in order. The logic here can be inverted: taking at face value the light or intermediate composition measured at 10 ≲ E ≲ 50 EeV by the fluorescence experiments <cit.>, our results, which imply relatively large UHECR deflections at these energies, point toward the existence of a strong EGMF close to the current experimental limit. A quantitative discussion of this observation will be given elsewhere. § ACKNOWLEDGEMENTS The authors would like to thank the former member of the Telescope Array collaboration Armando di Matteo, who kindly provided the simulations of UHECR propagation and the respective fits of the attenuation curves for the purposes of this study. The Telescope Array experiment is supported by the Japan Society for the Promotion of Science (JSPS) through Grants-in-Aid for Priority Area 431, for Specially Promoted Research JP21000002, for Scientific Research (S) JP19104006, for Specially Promoted Research JP15H05693, for Scientific Research (S) JP19H05607, for Scientific Research (S) JP15H05741, for Science Research (A) JP18H03705, for Young Scientists (A) JPH26707011, and for Fostering Joint International Research (B) JP19KK0074, by the joint research program of the Institute for Cosmic Ray Research (ICRR), The University of Tokyo; by the Pioneering Program of RIKEN for the Evolution of Matter in the Universe (r-EMU); by the U.S.
National Science Foundation awards PHY-1806797, PHY-2012934, and PHY-2112904, PHY-2209583, PHY-2209584, and PHY-2310163, as well as AGS-1613260, AGS-1844306, and AGS-2112709; by the National Research Foundation of Korea (2017K1A4A3015188, 2020R1A2C1008230, & 2020R1A2C2102800) ; by the Ministry of Science and Higher Education of the Russian Federation under the contract 075-15-2024-541, IISN project No. 4.4501.18 by the Belgian Science Policy under IUAP VII/37 (ULB), by the European Union and Czech Ministry of Education, Youth and Sports through the FORTE project No. CZ.02.01.01/00/22_008/0004632, and by the Simons Foundation (00001470, NG). This work was partially supported by the grants of The joint research program of the Institute for Space-Earth Environmental Research, Nagoya University and Inter-University Research Program of the Institute for Cosmic Ray Research of University of Tokyo. The foundations of Dr. Ezekiel R. and Edna Wattis Dumke, Willard L. Eccles, and George S. and Dolores Doré Eccles all helped with generous donations. The State of Utah supported the project through its Economic Development Board, and the University of Utah through the Office of the Vice President for Research. The experimental site became available through the cooperation of the Utah School and Institutional Trust Lands Administration (SITLA), U.S. Bureau of Land Management (BLM), and the U.S. Air Force. We appreciate the assistance of the State of Utah and Fillmore offices of the BLM in crafting the Plan of Development for the site. We thank Patrick A. Shea who assisted the collaboration with valuable advice and supported the collaboration’s efforts. The people and the officials of Millard County, Utah have been a source of steadfast and warm support for our work which we greatly appreciate. We are indebted to the Millard County Road Department for their efforts to maintain and clear the roads which get us to our sites. We gratefully acknowledge the contribution from the technical staffs of our home institutions. An allocation of computing resources from the Center for High Performance Computing at the University of Utah as well as the Academia Sinica Grid Computing Center (ASGC) is gratefully acknowledged.
http://arxiv.org/abs/2406.17978v1
20240625232711
Connected Network Model for the Mechanical Loss of Amorphous Materials
[ "Steven Blaber", "Daniel Bruns", "Jörg Rottler" ]
cond-mat.mtrl-sci
[ "cond-mat.mtrl-sci", "cond-mat.stat-mech" ]
steven.blaber@ubc.ca Dept. of Physics and Astronomy and Stewart Blusson Quantum Matter Institute, University of British Columbia, Vancouver, British Columbia V6T 1Z1, Canada Dept. of Physics and Astronomy and Stewart Blusson Quantum Matter Institute, University of British Columbia, Vancouver, British Columbia V6T 1Z1, Canada jrottler@physics.ubc.ca Dept. of Physics and Astronomy and Stewart Blusson Quantum Matter Institute, University of British Columbia, Vancouver, British Columbia V6T 1Z1, Canada § ABSTRACT Mechanical loss in amorphous solids at low frequencies is commonly attributed to thermally activated transitions of isolated two-level systems (TLS) that come in resonance with a mechanical wave. Using atomistic modeling of amorphous silicon, we observe instead that the inherent structures that constitute the TLS form a sparsely connected network with thermodynamic pathways between states. An analytically tractable theory for mechanical loss of the full network is derived from a nonequilibrium thermodynamic perspective. We show that the connected network model predicts mechanical loss with distinct temperature and frequency profiles when compared to the isolated TLS model. This not only calls into question the validity of the TLS model, but also gives us many new avenues and properties to analyze for the targeted design of low mechanical loss materials for applications in gravitational wave detectors. Connected Network Model for the Mechanical Loss of Amorphous Materials Jörg Rottler July 1, 2024 ====================================================================== Introduction.—For over 50 years, the two-level system (TLS) model has stood as the prevailing description of thermal and acoustic properties of amorphous solids <cit.>. This is in part due to its success in predicting the linear scaling heat capacity and quadratic scaling of the thermal conductivity in the low temperature tunneling regime <cit.>. TLS are also believed to contribute to mechanical dissipation in amorphous mirror coatings <cit.>, which is currently a critical factor limiting the sensitivity of interferometer-based gravitational wave detectors (GWD) <cit.>. TLS also contribute to the dielectric loss of quantum materials (e.g. superconducting qubits <cit.>) and a better understanding is vital to the field of cavity optomechanics <cit.>, which has applications to quantum memory, high precision sensors, and quantum transducers. Amorphous materials are intrinsically disordered, which results in characteristically rugged and complex energy landscapes that make their properties notoriously difficult to predict <cit.>. At thermal equilibrium, the material can occupy and transition between many distinct inherent structures: stable minima of the energy landscape. In the TLS model (Fig. <ref>), pairs of inherent structures connected by a transition path are driven out of equilibrium by acoustic vibrations resulting in internal friction and mechanical loss <cit.>. Each TLS is assumed to be independent and equally likely to be occupied, and the resultant loss of the material is the superposition of the individual contributions from each TLS. Within the TLS model, there are two important parameters: the energy barrier and the energy asymmetry between the two inherent structures. The former determines the timescale of the TLS and the frequency at which it can be excited, while the latter determines the magnitude of the mechanical loss (low asymmetry results in high mechanical loss). 
The TLS model has been used to study the mechanical loss of several candidate materials for GWD coatings <cit.>. Of particular relevance are studies of amorphous silicon <cit.>, silica <cit.>, and tantala <cit.>. These studies have given us key insights into the nature, atomic structure, and motion of TLS in these materials. Over the years, several of the common assumptions made in estimates of mechanical loss from TLS have been addressed and improved upon, e.g. assumptions on the distributions and independence of energy barriers and asymmetries <cit.>. Despite these improvements, the TLS model still provides a highly simplified description of amorphous materials. Specifically, it assumes that all the TLS are independent, ignoring the rugged and high dimensional nature of the energy landscape. In our atomistic simulations, we test this assumption and find that the system does not consist of independent TLS but instead forms a connected network of inherent structures (Fig. <ref>), calling into question the predictions of the TLS model. To address the mechanical loss of the connected network, we take a nonequilibrium thermodynamic perspective <cit.>: modeling the heat dissipated from a driven process using master equation dynamics <cit.> of the discrete state network. Nonequilibrium Thermodynamics.—Mechanical loss of a material relates to the decay of acoustic vibrations due to internal energy dissipation. With each oscillation, energy is dissipated as heat Q_cycle into the environment, which is compared to the stored elastic energy through the inverse quality factor Q^-1 = (1/2π) Q_cycle/(energy stored). This relation allows us to relate the quality factor to the dissipated heat, which can then be used to estimate the quality factor based on material properties. We separate the total energy of the system into the elastic energy of the acoustic wave of frequency ω traveling through the material and the internal energy of the connected network of inherent structures as E_tot = E_elastic + E_CN. Assuming an isotropic and linear elastic material with oscillating strain ϵ(t) = ϵ_0 sin(ω t), the elastic energy is E_elastic(t) = 𝒱 C ϵ_0^2 sin^2(ω t), for volume 𝒱 and elastic modulus C (longitudinal or shear). The macroscopic energy input of the entire material oscillating under strain, averaged over one cycle, is 𝒱 C ϵ_0^2/2. The strain couples to the energies of the inherent structures E as E(t) = E(0) + (α 1 + (ϵ_0 γ_0/2) Γ) sin(ω t), where 1 is a vector of ones and throughout a bold symbol represents a vector with elements spanning the inherent structures; i.e. E_i is the energy of inherent structure i. The coupling between the inherent structures and the applied strain is separated into a constant term α and an inherent-structure-dependent term Γ with amplitude γ_0. The constant term will not affect the dynamics since it affects all energies equally. The average energy of the network is the number of inherent structures N times the average energy per inherent structure, E_CN(t) = N P_t·E(t), with P_t the time-dependent inherent structure occupation probability. The rate of change in energy gives the first law of thermodynamics, Ė_tot(t) = Ẇ(t) + Q̇(t), with the work resulting from changes in energy, Ẇ = Ė_elastic + N P_t·Ė(t), and the heat from the time-dependent probabilities, Q̇ = N Ṗ_t·E(t). Throughout, a dot denotes the rate of change with respect to time. The work and heat produced in a cyclic process are W_cycle = ∫_cycle Ẇ dt and Q_cycle = ∫_cycle Q̇ dt. For a cyclic process in a periodic steady state ΔE_cycle = 0, so W_cycle = -Q_cycle.
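Before linearizing, it is worth noting that these definitions can be evaluated directly by brute force: integrate the master-equation dynamics introduced in the next paragraph over one drive period and accumulate the heat increments N Ṗ_t·E(t) dt. The Python sketch below does this with a simple explicit Euler step for a toy network (a single copy of the network, N = 1, uniform bare rate k_0, the constant coupling α dropped, and a barrier matrix V with np.inf marking unconnected pairs). It is illustrative only: in practice one would first integrate over several periods to reach the periodic steady state, and the time step must be small compared with the fastest relaxation time.

```python
import numpy as np

def heat_per_cycle(E0, Gamma, V, k0, beta, eps0, gamma0, omega, n_steps=20000):
    """Accumulate Q_cycle = integral of dP/dt . E(t) dt over one drive period
    (N = 1) by explicit Euler integration of the master equation, using the
    Arrhenius rate matrix defined just below in the text."""
    E0, Gamma, V = (np.asarray(x, float) for x in (E0, Gamma, V))
    P = np.exp(-beta * E0); P /= P.sum()           # start in static equilibrium
    dt = 2.0 * np.pi / omega / n_steps
    Q = 0.0
    for step in range(n_steps):
        t = step * dt
        E = E0 + 0.5 * eps0 * gamma0 * Gamma * np.sin(omega * t)
        R = k0 * np.exp(-beta * (V - E[None, :]))  # R[i, j]: rate j -> i
        np.fill_diagonal(R, 0.0)
        np.fill_diagonal(R, -R.sum(axis=0))        # probability conservation
        dP = (R @ P) * dt
        Q += dP @ E                                # heat increment  dP . E
        P += dP
    return Q
```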
We describe the dynamics of the system by a master equation, which is a discrete state model that describes the conservation of probabilities as they transition between states <cit.>, ∂P_t/∂t = R(t) P_t, with a transition rate matrix R(t). For a system at inverse temperature β ≡ (k_B T)^-1, with temperature T and Boltzmann constant k_B, the transition rates are expressed as Arrhenius rates in terms of the energy barriers V_ij and energy levels E_i of the inherent structures i and j, so the transition rate matrix has elements R_ij = k_ij e^β E_j [e^-β V_ij(1-δ_ij) - δ_ij ∑_ℓ≠j e^-β V_jℓ]. Due to the exponential dependence, the transition rates are dominated by the barrier heights V_ij rather than the bare transition rates k_ij <cit.>. For simplicity, we assume equal bare transition rates k_ij = k_0 for all transitions in our numerical calculations, an assumption made in a previous study <cit.> and supported by atomistic simulations of amorphous silicon <cit.>. Connected Network Model.—Although the equations of the previous section (Eq. (<ref>) and Eq. (<ref>)) are sufficient to determine the energy dissipation and hence the mechanical loss of the system, considerable simplification can be made if we assume that the amplitude of the oscillations is small. This is consistent with the small-amplitude assumption made in the TLS model <cit.> and should be sufficient for applications to GWD, where the magnitude of vibrations in the system is extremely small <cit.>. For small amplitude oscillations βϵ_0γ_0 ≡ γ̃_0 ≪ 1, we expand the probabilities around their static equilibrium values P_i^eq = e^-β E_i/∑_j e^-β E_j as P_t = P^eq + γ̃_0 P^(1)_t + 𝒪(γ̃_0^2), with P^(1)_t denoting the contribution of order γ̃_0 defined by the above expansion. Similarly, expanding Eq. (<ref>) about its static value, R = R^(0) + γ̃_0 R^(1)(t) + 𝒪(γ̃_0^2), we have R_ij^(1)(t) = (1/2) R_ij^(0) Γ_j sin(ω t). Substituting into Eq. (<ref>) leads to ∂P^(1)/∂t ≃ R^(0) P^(1)_t + R^(1)(t) P^eq. In a periodic steady state this has the solution P^(1)_t = -(βϵ_0γ_0/2)[A sin(ω t) + B cos(ω t)], with A_i ≡ ∑_jk M_ij [1/(1+(ωτ_j)^2)] M_jk^-1 Γ_k P^eq_k and B_i ≡ ∑_jk M_ij [ωτ_j/(1+(ωτ_j)^2)] M_jk^-1 Γ_k P^eq_k. The relaxation times τ_j are related to the eigenvalues λ_j of R^(0) as τ_j ≡ λ_j^-1, with the corresponding eigenvectors forming the columns of the eigenvector matrix M. Substituting Eqs. (<ref>) and Eq. (<ref>) into Eq. (<ref>), then integrating over one cycle, we arrive at our central result Q_cycle = (βπ N ϵ_0^2 γ_0^2/4) ∑_i,j,ℓ Γ_i M_ij [ωτ_j/(1+(ωτ_j)^2)] M_jℓ^-1 Γ_ℓ P_ℓ^eq. The energy dissipation is decomposed into contributions from eigenmodes of the transition matrix R, each contributing to the overall mechanical loss. Since R is the transition rate matrix of a connected network, it has exactly one zero eigenvalue, with eigenvector corresponding to the equilibrium distribution. All other eigenvalues are negative (describing relaxation towards equilibrium), with eigenvectors that sum to zero (preserving the normalization of probability). We assume the total energy stored in the system (averaged over one cycle) is dominated by the elastic energy (<ref>), so the inverse quality factor (<ref>) is Q^-1 = β N γ_0^2/(4𝒱C) ∑_i,j,ℓ Γ_i M_ij [ωτ_j/(1+(ωτ_j)^2)] M_jℓ^-1 Γ_ℓ P_ℓ^eq. For an alternate derivation of this result, see Supplemental Material <ref>.
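A direct numerical transcription of this eigenmode sum is straightforward. The sketch below is a minimal illustration rather than the authors' code: it takes the relaxation times as τ_j = 1/|λ_j|, drops the zero (equilibrium) mode, and assumes a real spectrum for R^(0), as expected for a rate matrix obeying detailed balance; the per-mode terms collected in `modes` give the individual eigenmode contributions to the loss.

```python
import numpy as np

def q_inverse_network(E, V, Gamma, k0, beta, omega, N, vol, C, gamma0):
    """Inverse quality factor of the connected network from the eigenmode sum
    above, evaluated at a single angular frequency omega."""
    E, V, Gamma = (np.asarray(x, float) for x in (E, V, Gamma))
    R = k0 * np.exp(-beta * (V - E[None, :]))     # static rate matrix R^(0)
    np.fill_diagonal(R, 0.0)
    np.fill_diagonal(R, -R.sum(axis=0))
    Peq = np.exp(-beta * E); Peq /= Peq.sum()

    lam, M = np.linalg.eig(R)
    lam, M = lam.real, M.real                     # detailed balance -> real spectrum
    Minv = np.linalg.inv(M)

    relaxing = np.abs(lam) > 1e-12 * np.abs(lam).max()
    tau = np.zeros_like(lam)
    tau[relaxing] = 1.0 / np.abs(lam[relaxing])   # tau_j = 1 / |lambda_j|
    kernel = omega * tau / (1.0 + (omega * tau) ** 2)   # zero mode contributes 0

    modes = (Gamma @ M) * kernel * (Minv @ (Gamma * Peq))
    return beta * N * gamma0 ** 2 / (4.0 * vol * C) * modes.sum()
```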
The TLS model corresponds to a network where every pair of inherent structures is isolated. For this network there is no global equilibrium and each pair of connected inherent structures must be treated independently. In Supplemental Material <ref>, we calculate the quality factor for a network of two inherent structures, which, when summed over all independent pairs, yields a quality factor consistent with the popular TLS model <cit.>, Q_TLS^-1 = β/(4𝒱C) ∑_i γ_i^2 sech^2(βΔ_i/2) ωτ_i/(1+ω^2 τ_i^2), with Δ_i the energy difference between the two inherent structures in TLS i, γ_i = γ_0 Γ_i the TLS deformation potential, and the TLS relaxation time τ_i = e^β V_i/[k_0(1+e^βΔ_i)]. Connectivity of Inherent Structures.—We perform molecular dynamics simulations of amorphous silicon using a Tersoff potential <cit.> in LAMMPS <cit.>. We prepare samples via melt-quench at a quench rate of 10^11 K/s and find inherent structures by thermal search trajectories at 600 K for 200 ps with a sampling frequency of 100 fs. We determine candidate TLS based on changes in the minimum energy and filter them based on participation ratio and maximum atomic displacement to remove unlikely candidates. Duplicate pairs of TLS are determined and removed based on a root-mean-squared atomic displacement between structures of less than 10^-4 Å. This procedure is common for amorphous sample preparation <cit.> and TLS calculations from MD simulation <cit.>. We add an additional step in our analysis and use the same method for removing duplicate TLS to determine which inherent structures are identical. In this way, we connect TLS together (e.g. TLS A-B and B-C become A-B-C), thus forming connected networks of inherent structures as shown in Fig. <ref>. Typically we find networks of ∼ 1000-3000 inherent structures. Further simulation details can be found in Supplemental Material <ref>. We find the statistical properties of the networks to be robust between samples. Figure <ref>a shows the percentage of the network that is connected (number of connections/total number of possible connections) as a function of the percentage of the full network included. The percentage of the network that is connected plateaus around ∼ 0.05% as the entire sample is included, with negligible variation between samples, indicating a sparsely connected network. The fraction of cycles (connected loops of inherent structures) of a given size has relatively small variation between samples (Fig. <ref>c). Cycles are of particular interest since they cannot be reproduced by the TLS model. Each cycle presents an alternate pathway between states: a chain of states with a cycle in the middle has two pathways to travel from end to end, potentially allowing the system to avoid large energy barriers. Interestingly, we find no odd-state cycles, hinting at some intriguing physics underpinning the networks, and future studies are needed to reveal whether this property is unique to amorphous silicon. The observed degree (i.e. the number of connections of an inherent structure) distribution has a power-law form (Fig. <ref>b), indicating that the inherent structures form a scale-free network. Scale-free networks have also been observed in Lennard-Jones clusters <cit.>. However, we have not completed an exhaustive search of all inherent structures, and the power-law scaling could arise from preferential sampling of nodes connected to the initial inherent structure and those with high degree (many connections). The relative eccentricity of a node measures the (minimum) number of steps required to reach the furthest node relative to the number of nodes.
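The network construction and the statistics discussed in this section (connectivity fraction, degree distribution, cycles, and eccentricity) can be sketched compactly in Python with networkx. The snippet below is illustrative rather than the analysis code used here: the union-find merge over an RMSD tolerance stands in for the duplicate-structure identification, `nx.cycle_basis` is only a rough proxy for the cycle census, the relative eccentricity is reported as the maximum node eccentricity divided by the number of nodes, identical atom ordering between minima is assumed, and the graph is assumed to be connected.

```python
import itertools
import networkx as nx
import numpy as np

def build_network(tls_pairs, coords, rmsd_tol=1e-4):
    """Merge inherent structures whose RMSD (in Angstrom) is below `rmsd_tol`
    and connect the resulting nodes with the TLS transitions.  `tls_pairs` is
    a list of (id_a, id_b) minima pairs; `coords[id]` is an (n_atoms, 3) array."""
    ids = sorted(coords)
    parent = {i: i for i in ids}                   # union-find over minima
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for a, b in itertools.combinations(ids, 2):
        rmsd = np.sqrt(np.mean(np.sum((coords[a] - coords[b]) ** 2, axis=1)))
        if rmsd < rmsd_tol:
            parent[find(a)] = find(b)              # a and b are the same minimum
    G = nx.Graph()
    G.add_edges_from((find(a), find(b)) for a, b in tls_pairs if find(a) != find(b))
    return G

def network_statistics(G):
    """Connectivity fraction, degree sequence, basis-cycle sizes, and a
    relative-eccentricity measure for a connected network."""
    n = G.number_of_nodes()
    return {
        "connected_fraction": G.number_of_edges() / (n * (n - 1) / 2),
        "degrees": sorted((d for _, d in G.degree()), reverse=True),
        "cycle_sizes": [len(c) for c in nx.cycle_basis(G)],
        "relative_eccentricity": max(nx.eccentricity(G).values()) / n,
    }
```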
Since the networks typically consist of 1000-3000 inherent structures, the observed relative eccentricity ≲ 0.03 indicates it takes relatively few steps to cross the entire network, a property commonly referred to as “small world" <cit.>. Mechanical Loss.—Several properties of the mechanical loss can be inferred directly from Eq. (<ref>). For low frequencies ωτ≪ 1 the mechanical loss Eq. (<ref>) scales as ω, while for high frequencies ωτ≫ 1 it scales as ω^-1. In the intermediate regime there is a plateau region with peaks corresponding to ωτ_j∼ 1 for the relaxation modes that dissipate significant energy. To estimate the mechanical loss of amorphous silicon, we use a typical value for the elastic modulus C = 50GPa <cit.>, bare transition rate k_0= 10^13 s^-1 <cit.>, and estimate the deformation potential from differences in stress between connected inherent structures <cit.> as outlined in Supplemental Material <ref>. We emphasize that these assumption are made for calculations within both the TLS and connected network models, and changing the value of k_0 merely leads to an overall shift in the frequency ω. The inverse quality factor estimated from the connected network and TLS models have several qualitative and quantitative differences (Fig. <ref>). Within the frequency range relevant for gravitational wave detection, the network model predicts nonmonotonic behavior and a peak at ∼ 1Hz at 100K and 200K not present in the TLS model, while the TLS model predicts several smaller peaks at frequencies below 1Hz that are not present in the network model. The network model has a global energy scale and accounts for the relative occupation probability of each inherent structure, so TLS predicted to contribute significant mechanical loss may be unlikely to be occupied in the network model due to their high relative energy, resulting in a small contribution to the mechanical loss of the network. Additionally, the topology of the connected network results in distinct relaxation modes (eigenvectors of the transition rate matrix R and their associated relaxation times τ) not possible in the TLS model and allows the system to circumvent large barriers (which cause low frequency peaks) through alternate pathways. For example, the cycles found in the network (Fig. <ref>) are topologically distinct from a TLS and therefore are not well represented by isolated TLS. We quantify how many elements of the eigenvectors and how many eigenmodes significantly contribute to the total mechanical loss by the participation ratio (PR), which for a quantity D with elements D_i is defined as D_ PR≡∑_i D_i^2/∑_jD_j^4 . At a given frequency, relatively few modes dominate (mechanical loss participation ratio ≲ 12, see Fig. <ref>a) and the relaxation modes of the connected network typically involve relatively few inherent structures (eigenvector participation ratio ≲ 30 in Fig. <ref> b). Although the dissipation and relaxation times of the collective modes of the network are distinct from the TLS, they are constrained to relatively few inherent structures. However, since the eccentricity of the network is small, they may still span the entire network. Discussion.—We have observed a connected network of inherent structures coupled by thermally activated transitions in samples of amorphous silicon. The connected network structure challenges the assumption of the TLS model that pairs of structures can be treated independently. We find robust network properties across 10 independent samples (Fig. 
<ref>): the networks are relatively sparsely connected (∼ 0.05%), the degree distribution has power law scaling, a significant fraction of even-state cycles are present, and the eccentricity of the networks shows relatively few jumps are required to cross the entire network. To address the mechanical loss of the full connected network, we develop an analytical model from a nonequilibrium thermodynamic perspective: the dynamics of the discrete state network of inherent structures is described by the master equation (<ref>) for the time dependent probabilities, ultimately leading to an explicit expression for the energy dissipated in a single cycle of an acoustic oscillation (<ref>). The connected network model generalizes the standard TLS description (Fig. <ref>). We find that the connected network and TLS models have a qualitatively distinct frequency spectrum. The decomposition of the heat into eigenmodes shows that few modes ≲ 12 dominate at any given frequency and within each mode only a few elements of the eigenvector significantly contribute ≲ 30. A major advantage of the network structure is that it provides new properties to analyze in order to improve our physical understanding of internal friction in amorphous materials, potentially revealing new methods for the design of low mechanical loss coatings. Future studies may reveal which network properties can be related to changes in mechanical loss. Additionally, the robust nature of the statistical network properties opens up the possibility for the discovery of universal features of amorphous materials. One good candidate is the scale free degree distribution, which has been observed in similar materials <cit.>. The significant quantitative and qualitative differences between the connected network and TLS models call for a re-examination of earlier TLS model predictions. Of particular interest is the effect of aging and annealing on mechanical loss, which has been the subject of several recent studies <cit.>. Annealing has been shown to have a significant effect on mechanical loss, and it will be interesting to see how it manifests in the network structure of the material. Although the transition rates in our model and mechanical loss calculations are restricted to classical (stochastic) dynamics, the network itself makes no such assumption. It will be interesting to see if a similar model can be used to describe connected networks of tunneling transitions at low temperature, and what implications it would have for the tunneling TLS model <cit.>. This is an important question as low temperature TLS are believed to be a main cause of dielectric loss in quantum materials <cit.>. This research was supported in part by the Canada First Research Excellence Fund, Quantum Materials and Future Technologies Program. Computational resources and services were provided by Advanced Research Computing at the University of British Columbia and the Digital Research Alliance of Canada (<alliancecan.ca>). Supplemental Material for “Connected Network Model for the Mechanical Loss of Amorphous Materials” § ELASTIC ENERGY In this section, we present an alternate derivation of the quality factor (<ref>) based on the phase-lag of the average energy of the system. As discussed in the main text (<ref>), the total energy of the system is the sum of the elastic energy and the average energy of the network _ tot = _ elastic + _ CN . Substituting in the linear elastic energy Eq. (<ref>), energy of the connected network Eq. 
(<ref>), and assuming the approximation of Eq. (<ref>) we have _ tot = C ϵ_0^2sin^2(ω t) + N[E(0) + ϵ_0 γ_0/2Γsin(ω t)] ·[P^ eq-βϵ_0γ_0/2Asin (ω t)-βϵ_0γ_0/2Bcos (ω t)] . Expanding and collecting terms we have _ tot = NE(0)·P^ eq + _ tot^(1)sin(ω t-θ) + _ tot^(2)sin(ω t-ϕ)sin(ω t) , for _ tot^(1) ≡Nϵ_0γ_0/2√((Γ·P^ eq-βE(0)·A)^2 + (βE(0)·B)^2) θ ≡tan^-1[βE(0)·B/Γ·P^ eq-βE(0)·A] _ tot^(2) ≡Nϵ_0^2γ_0^2/4√(( C -βΓ·A)^2 + (βΓ·B)^2) ϕ ≡tan^-1[β Nγ_0^2Γ·B/4 C -β Nγ_0^2Γ·A] . The constant and linear terms average out to zero over one cycle and have no contribution to the overall mechanical loss. Substituting Eq. (<ref>) and assuming C ≫β Nγ_0^2Γ·A/4 we find tanϕ = Q^-1 = β Nγ_0^2/ C∑_i,j,ℓΓ_i M_i jωτ_j /[1+(ωτ_j)^2] M_j ℓ^-1Γ_ℓ P_ℓ^ eq . The inverse quality factor can be expressed in terms of the phase lag of the system ϕ relative to the frequency of the oscillation ω: the mechanical loss results from the nonequilibrium, out of phase response of the system. The assumption C ≫β Nγ_0Γ·A/4 is identical to the one made to arrive at equation (1) in Ref. <cit.> as shown in their equation (B6). § TLS DERIVATION In this section we explicitly derive the mechanical loss for a TLS based on Eq. (<ref>) for a two-state system. The transition rate matrix, Eq. (<ref>), for a TLS consisting of state 1 and 2 is R = k_0[ -e^-β (V-E_1) e^-β (V-E_2); e^-β (V-E_1) -e^-β (V-E_2) ] . Defining the energy asymmetry Δ = E_2 - E_1 this simplifies to R = k_0e^-β V[ -1 e^βΔ; 1 -e^βΔ ] , where, without loss of generality, we have set E_1 = 0. This transition rate matrix has eigenvalues λ_1 = 0 and λ_2 = -k_0e^-β V(1 + e^βΔ) with corresponding eigenvectors v^(1) = [1,e^-βΔ] and v^(2) = [-1,1]. The zero eigenvalue corresponds to the equilibrium distribution, so P^ eq = v^(1)/∑_iv^(1)_i. The eigenvector matrix and its inverse are M = [ 1 -1; e^-βΔ 1 ] and M^-1 =1/1+e^-βΔ[ 1 1; -e^-βΔ 1 ] . setting Γ = [1,-1] (structures oscillate in opposite direction), substituting into Eq. (<ref>), defining τ = -1/λ_2 and summing over all TLS we arrive at Eq. (<ref>). Note that the zero eigenvalue mode has no contribution to the mechanical loss. § SIMULATION DETAILS In this section we provide additional simulation details. We perform molecular dynamics simulations in LAMMPS <cit.> using a Tersoff <cit.> potential to model silicon. By injecting random initial velocities, ten amorphous samples are prepared by rapidly melting diamond silicon from 100K to 4000K in 200ps, equilibrating at 4000K for 200ps, and subsequently cooling to 300K at a rate of 10^11 K/s in the isobaric ensemble (target pressure P=0). Final amorphous configurations are found by energy minimization of the melt-quenched structure at constant volume. With an average simulation box length of 27.43 ±0.01Å, the density of our samples is 2.261 g/cm^3± 0.001. Using a cut off of 2.9Å the silicon atoms in the 10 samples have average coordination (with standard error) c_3 = 0.42 ± 0.08 %, c_4 = 95.8 ± 0.2%, c_5 = 3.8 ± 0.2%, and c_6 = 0.03 ± 0.02%, where c_i is the percentage of the sample with coordination i. This corresponds to ∼ 4% defects in our samples. Once the samples have been prepared, we perform 100 random thermal searches per sample at 600K for 200ps. Every 0.1ps we save the structure of the system and the 2000 structures per search (200,000 per sample) are quenched to 0K providing an inherent structure of the system. 
Sequentially visited structures are considered as candidate connected pairs, and the atomic participation ratio (number of atoms involved in the transition) (<ref>) and maximum atomic displacement between these pairs is calculated and used to filter out unlikely candidates as shown in Fig. <ref>. In our data we observe two distinct regions in participation ratio-d_ max space: pairs of structures with large participation ratio ∼ 10^3 and small maximum atomic displacement and low participation ratio ≲100 and comparatively large d_ max≳ 0.1. The former corresponds to all the atoms moving a very small distance and is likely the result of noise, while the latter involves relatively few atoms moving a larger distance. We consider all the candidates with participation ratio <100 and d_ max > 0.1, then remove all duplicate pairs of atomic structures based on total root-mean squared atomic displacement of 10^-4Å. From this filtered list, we perform nudged elastic band calculations with 32 intermediate structures to determine the transition path and barrier between the two states. If we find only a single maximum between the two states and the energy of that maximum is larger than the energy of both structures then we accept the pair as connected structures. The full distribution of barriers and asymmetries between connected inherent structures is shown in Fig. <ref>. We observe a large peak in the barrier distribution at 0.2eV and a gap in barriers less than ∼ 0.1eV. We observe a broad asymmetry distribution, with the main correlation to the energy barriers set by the maximum allowed value of Δ = 2(V-E̅). To form the connected network, we calculate the root-mean squared total atomic displacement between all remaining inherent structures. If it is less than 10^-4Å , then we assume they are the same inherent structure. This connects TLS together since distinct pairs of inherent structures (a TLS) often share one inherent structure. For example, the two TLS A-B and C-D would merge to form A-B-C if we determined state B and C were the same inherent structure. The longitudinal component of the deformation potential is estimated from differences in stress Δσ^(ij) between inherent structures i and j as suggested in ref. <cit.>: (γ^ L_0)^2Γ^ L_iΓ^ L_j = ^2/5[(Δσ^(ij)_xx)^2 +(Δσ^(ij)_yy)^2 +(Δσ^(ij)_zz)^2] + 2^2/15[Δσ^(ij)_xxΔσ^(ij)_yy +Δσ^(ij)_xxΔσ^(ij)_zz +Δσ^(ij)_yyΔσ^(ij)_zz] + 4^2/15[(Δσ^(ij)_xy)^2 +(Δσ^(ij)_xz)^2 +(Δσ^(ij)_yz)^2 ] . Similar calculation yields the transverse component (γ^ T_0)^2Γ^ T_iΓ^ T_j = ^2/15[(Δσ^(ij)_xx)^2 +(Δσ^(ij)_yy)^2 +(Δσ^(ij)_zz)^2] - ^2/15[Δσ^(ij)_xxΔσ^(ij)_yy +Δσ^(ij)_xxΔσ^(ij)_zz +Δσ^(ij)_yyΔσ^(ij)_zz] + 3^2/15[(Δσ^(ij)_xy)^2 +(Δσ^(ij)_xz)^2 +(Δσ^(ij)_yz)^2 ] . An example histogram of the product of longitudinal deformation potentials for one sample is shown in Fig. <ref>. States whose energy deforms in the same direction (increase or decrease energy) will have a positive product, and opposite directions a negative product. Similar to previous studies <cit.> we observe a fairly wide range of deformation potentials, with the product reaching up to 40 eV^2.
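As a sketch of how these expressions are evaluated in practice, the short Python function below applies the longitudinal and transverse formulas to a stress-difference tensor between two connected inherent structures. The squared prefactor appearing in the formulas above is taken here to be the simulation-cell volume (an assumption), and the example tensor is randomly generated purely for illustration; neither is taken from the actual simulation data.

import numpy as np

def deformation_products(dsig, vol):
    """dsig: symmetric 3x3 stress-difference tensor; vol: assumed volume prefactor."""
    dxx, dyy, dzz = dsig[0, 0], dsig[1, 1], dsig[2, 2]
    dxy, dxz, dyz = dsig[0, 1], dsig[0, 2], dsig[1, 2]
    diag_sq = dxx**2 + dyy**2 + dzz**2
    cross = dxx*dyy + dxx*dzz + dyy*dzz
    off_sq = dxy**2 + dxz**2 + dyz**2
    long_prod = vol**2 * (diag_sq / 5.0 + 2.0 * cross / 15.0 + 4.0 * off_sq / 15.0)
    trans_prod = vol**2 * (diag_sq / 15.0 - cross / 15.0 + 3.0 * off_sq / 15.0)
    return long_prod, trans_prod   # (gamma^L)^2 Gamma_i Gamma_j and transverse analogue

# Illustrative stress difference in a (27.43 Angstrom)^3 cell.
rng = np.random.default_rng(0)
A = rng.normal(scale=1e-4, size=(3, 3))
dsig = 0.5 * (A + A.T)             # symmetrize the random example
print(deformation_products(dsig, vol=27.43**3))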
http://arxiv.org/abs/2406.18922v1
20240627062622
Time Matters: Scaling Laws for Any Budget
[ "Itay Inbar", "Luke Sernau" ]
cs.LG
[ "cs.LG", "cs.AI" ]
RoFIR: Distortion Vector Map Guided Transformer for Robust Fisheye Image Rectification Houqiang Li July 1, 2024 ====================================================================================== § ABSTRACT A primary cost driver for training large models is wall-clock training time. We show that popular time estimates based on FLOPs are poor estimates, and construct a more accurate proxy based on memory copies. We show that with some simple accounting, we can estimate the training speed of a transformer model from its hyperparameters. Combined with a scaling law curve like Chinchilla, this lets us estimate the final loss of the model. We fit our estimate to real data with a linear regression, and apply the result to rewrite Chinchilla in terms of a model's estimated training time as opposed to the amount of training data. This gives an expression for the loss in terms of the model's hyperparameters alone. We show that this expression is accurate across a wide range of model hyperparameter values, enabling us to analytically make architectural decisions and train models more efficiently. § INTRODUCTION The final quality of a language model is constrained by the number of parameters and the amount of data it was trained on. Remarkably, these two parameters alone are often sufficient to estimate the final performance of the model. <cit.> explored this phenomenon, predicting that loss curves during pretraining could be written as a linear combination of a term dependent on the number of the parameters and one dependent on the dataset size. <cit.> refined this estimate, improving the estimation of the coefficients and introducing a bias term to capture the inherent perplexity of language. While these estimates are useful for large-scale models, small and mid-sized models are not at risk of running out of pretraining data. Instead, the limiting factor is the cost of training, a figure which is primarily driven by a model's size and speed. This suggests that instead of trading off model size and dataset size, we should be trading off architectural hyperparameters within the model that affect its throughput. On a fixed budget, a faster model will be able to see more tokens than a slow one. In this work we assume a fixed training time, and ask what hyperparameters we should pick to maximize the final performance of the model. We start by estimating the throughput of the model (tokens per second) in terms of the number of FLOPs and memory copies, both of which can be directly calculated from the model's hyperparameters. <cit.> mentions a parameterization in terms of compute requirements, but their estimate is based on FLOPs, which we will show are a weak predictor of runtime. Instead, we show that memory copies are a much stronger predictor. This predictor, while simplistic, is powerful enough to accurately predict the loss in terms of the hyperparameters of the model. This new framing lets us estimate the final loss of a model without training it, given only the model hyperparameters and the desired training time. We show that this method produces accurate predictions across a wide range of hyperparameter values, and makes useful predictions about which hyperparameters should be used in order to maximize training efficiency. We evaluate our findings over 1,535 different decoder-only transformer models configurations ranging from 300K to 310M parameters and trained over the C4 dataset <cit.>. We achieve an r^2 of 0.9 when predicting their final loss using our refined scaling law. 
This is the same r^2 we get when using the traditional Chinchilla scaling law. In other words, we are able to estimate the final loss with the same accuracy whether we use Chinchilla scaling laws on empirical runtimes or simply estimate them from hyperparameters. § THE PARAMETER EQUIVALENCE PRINCIPLE The core intuition motivating this work is the observation that large models are not particularly sensitive to their hyperparameters, provided we hold the total parameter count constant. This idea was discussed in <cit.>, but due to its importance we capture it in a form of an equivalence principle. [The Parameter Equivalence Principle] Above a certain scale, the final loss of a transformer is primarily a function of how many parameters there are, not where they are in the model. One straightforward implication is that we ought to be able to predict the final loss using only parameter count and number of training tokens, as earlier scaling laws did. But another often overlooked implication is that models of the same size that allocate their parameters differently compete primarily on speed. If it is not feasible for one model to have vastly lower loss than another via architectural improvements, we should instead choose architectures that optimize for training speed, allowing them to consume as many tokens during training time as it can. We can show this in practice by means of a scaling law. § ESTIMATING LINEAR SCALING LAW COEFFICIENTS The original scaling law from <cit.>, predicts the final training loss of a language model in terms of its parameters count N and the number of tokens it was trained upon D. L(N,D) = A/N^α + B/D^β + E <cit.> and <cit.> derive different coefficient values with the most extreme difference being their linear data coefficient B. Rather than enter into this debate, we simply take the exponents from <cit.>, and fit our own linear coefficients A, B and E using linear regression on the model loss. This was done by iterating over 1,535 different decoder-only transformer models' hyperparmeters configurations trained from scratch for three hours each on the C4 <cit.> dataset. Models sizes vary from 319K to 310M parameters. Note that we constrained our experiments to models that can be trained on a single TPU to avoid confounders from inter-chip communication. We experimented with model hyperparameters of embed sizes ranging from 2^5 to 2^10, number of layers ranging from 3 to 8, MLP width ranging from 2^8 to 2^14, number of attention heads ranging from 2^1 to 2^7, and a fixed vocabulary size of 8,000. We trained on a mesh of 4 hosts x 8 chips/host of TPU V5 chips with no model sharding. The results show a very good fit (r^2=0.9), with coefficients A=195.76, B=182.52, and E=2.34. Note that these are different from the values quoted in either paper <cit.><cit.>. Using the values from the papers presents a very different story. The below table compares the scaling law fitting measurements on our data using the different papers coefficients, with our computed coefficients serving as a baseline. Both Chinchilla papers underestimate the loss by a factor of more than two on our data (slope<0.5). We take this as evidence that these coefficients are perhaps highly sensitive to the details of the setup, possibly explaining the discrepancy between the papers. Nonetheless, we were able to achieve very good fit with a linear rescaling of their predictions (r^2=0.9 in all cases), suggesting the exponents are more robust. 
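To make the fitting procedure concrete, the Python sketch below fixes the exponents (the commonly quoted Chinchilla values α ≈ 0.34 and β ≈ 0.28 are assumed here), recovers A, B and E by ordinary least squares, and evaluates the resulting predictor with the coefficients quoted above. The synthetic model sizes, token counts and noise level are invented for illustration only.

import numpy as np

alpha, beta = 0.34, 0.28            # assumed exponents, held fixed

def predict_loss(N, D, A=195.76, B=182.52, E=2.34):
    """Loss estimate with the linear coefficients reported in the text."""
    return A * N**(-alpha) + B * D**(-beta) + E

def fit_linear_coeffs(N, D, loss):
    """With alpha, beta fixed, the loss is linear in (A, B, E): plain least squares."""
    X = np.column_stack([N**(-alpha), D**(-beta), np.ones_like(loss)])
    (A, B, E), *_ = np.linalg.lstsq(X, loss, rcond=None)
    return A, B, E

# Synthetic demo: generate losses from known coefficients and recover them.
rng = np.random.default_rng(0)
N = rng.uniform(3e5, 3e8, size=200)      # parameter counts
D = rng.uniform(1e8, 1e11, size=200)     # training tokens
L = predict_loss(N, D) + rng.normal(scale=0.01, size=200)
print(fit_linear_coeffs(N, D, L))        # approximately (195.76, 182.52, 2.34)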
§ EQUATIONS FOR ESTIMATING THE SPEED OF A MODEL In order to use scaling laws to estimate the loss of a model, we need to know how big the model is and how much data will it be able to train over. The former is a straightforward exercise in accounting, but the latter is more nuanced. We fix the amount of time we have to train the model to some constant T, which is measured in seconds. If we can estimate how long each training step takes for a given model, we can work out how many tokens (data) will be processed by that model in time T. It is tempting to imagine that we could estimate the model training speed just by adding up the number of FLOPS. But as we will show, the runtime of the model is actually driven by data copying, not the actual computation. The amount of data copying depends on a wide variety of factors, from the hardware to the architecture to the compiler. We do not attempt to account for all of these factors here but take as a simplifying assumption that every matrix multiplication requires a copy proportional to the size of its operands. Specifically, for a standard (as defined in <cit.>) decoder-only transformer architecture, we derived equations for the number of parameters a model has (PARAMS), the number of memory loads the model will need to make in a single pass (MEMCPYS), and the number of operations the model will do in a single pass (FLOPS): PARAMS(d,n,v,w) = vd + nd(8 + 2w + 4d) + nw MEMCPYS(d,n,s,v,w) = 2vd + 2sv + ns(w + 2hs ) + 2nd(w + 4s + 2d) FLOPS(d,n,s,v,w) = 2svd + 2dns(w + 2d + s) + nhs^2 Where the parameters are defined as: d = embedding dimension n = number of layers s = sequence length v = vocabulary size w = MLP width h = number of heads The full details of the derivation of the above equations can be found in appendix <ref>. Using a linear combination of the above equations we can now compute the total number of seconds per training step (TIME) as: TIME(d,n,s,v,w) = c_1MEMCPYS(d,n,s,v,w) + c_2FLOPS(d,n,s,v,w) + c_3 Where c_1, c_2, and c_3 are coefficients determined by linear regression (see Section <ref>). Note that dividing the total number of seconds per training step (i.e., TIME) by the number of seconds we are training upon (i.e., T) would yield the total number of training steps (i.e., D in (<ref>)). Finally, we can estimate the total loss by plugging the above term into the Chinchilla scaling law in order to derive an equation dependent on model training speed. L̂(d,n,s,v,w) = E + A/PARAMS(d,n,v,w)^α + B(TIME(d,n,s,v,w)/T)^β Following our findings in Section <ref>, we take the original α and β as in <cit.> and use our own fitted linear coefficients for A, B and E. § ESTIMATING THE THROUGHPUT The throughput of a model is defined to be 1/TIME, where TIME is defined in Equation (<ref>). In order to fully specify equation (<ref>) we need to determine its linear coefficients c_1, c_2, and c_3. We conduct a large scale (N=3,556) sweep over model hyperparameters trained for 5 minutes, just long enough to accurately determine the number of tokens per second they process. We applied linear regression over the data to determine c_1, c_2, and c_3. We trained models of sizes varying from 277K parameters to 972M parameters. We experimented with model hyperparameters of embed sizes ranging from 2^5 to 2^12, number of layers ranging from 1 to 8, MLP width ranging from 2^8 to 2^15, number of heads ranging from 2^0 to 2^7, and a fixed vocabulary size of 8,000. As in previous experiments we trained on a mesh of 4x8 TPU V5 chips with no model sharding. 
The results show an overall r^2 of 0.74, with a much tighter fit for slower (i.e. bigger) models. For fast models, confounding factors like compiler optimizations start to matter, affecting the quality of the fit. It is worth evaluating the importance of the different terms(i.e. FLOPS, MEMCPY) in Equation (<ref>). Previous work by both <cit.> and <cit.> utilized only the FLOPS counting to derive their scaling laws. We show that MEMCPY is a stronger predictor, and can account for essentially all of the explanatory power on its own. § PUTTING IT ALL TOGETHER We now have an equation that estimates the number of tokens that the model will consume from its hyperparameters. We also have an exact expression for the number of parameters in such a model, PARAMS. Our tuned Chinchilla equation relates these two quantities to estimate the final loss (<ref>). In figure <ref>, we show the results of this estimation, applied to the data from Section <ref>. Notice that the graph is largely indistinguishable from Figure <ref>, including the quality of fit r^2 = 0.9. While there is some error in the Chinchilla equation's predictions, there is essentially no additional error from using our estimates in place of the empirical values. § BETTER LOSS WITH FASTER MODELS We can use these equations to make specific predictions about how we should size our models. Figure <ref> shows the negative gradient of the loss with respect to each of our hyperparameters, projected to be along level curves of the parameter count. Following each arrow brings you to another model with the same parameter count but a lower predicted loss. We can see that increasing the embed size at the expense of the other hyperparameters is favorable throughout the plotted region. is particularly disincentivized. This suggests that we should take our MLPs to be narrow, and our models to be somewhat shallow, in exchange for much larger embed size. § CONCLUSION Understanding what hyperparameters lead to the strongest model performance is a vital part of model design. We've shown that the final loss of a model can be accurately predicted by turning the question on its head. Instead of asking for the most data efficient hyperparameters, we simply ask which hyperparameters make the model the fastest. This leads to a new scaling law based on hyperparameters alone. In the long run, the faster model will tend to win. We demonstrated this effect across a wide variety of model sizes, and showed that we can accurately predict the model's loss from its hyperparameters, simply by estimating how many memory copies will take place. Crucially, this is a stronger predictor than approaches based on FLOPs. What's more, it allows us to make specific predictions about which hyperparameters to use during model design. However, we do not consider the effects of model sharding, or the effects of scale beyond a few hundred million parameters. We regard these as fruitful areas of exploration for future work. § EQUATIONS §.§ FLOPS derivation In order to compute the total number of FLOPS in our transformers decoders stack we begin by counting the FLOPS needed for each step in a transformer block. We sum all of these components and multiply by the number of transformer layers in our transformer stack. The final piece of the puzzle is to add the embedding of the input and the output. Both of which require svd FLOPS. We add all of these terms together and simplify. §.§ MEMCPYS derivation We begin by adding up the total amount of data being copied in a single transformer block. 
We approximate the number of memory copies needed for each matmul as the size of the input matrices for each operation. Again we sum all of the above components and multiply by the number of transformer layers in our transformer stack. Finally, we add the embedding of the input and the output, both of which require v*d + s*v memory copies. Summing all of this together and simplifying yields our MEMCPY equation. §.§ PARAMS derivation We begin with the per-layer parameters. We note the extra vector term to account for the bias term accompanying each matrix. In similar fashion, we left with summing all of the above components and multiplying by the number of transformer layers in our transformer stack. The final piece of the puzzle is to add a 2d vector for the norm layer after the transformers as well as the embedding matrix used for embedding the input and the output, matrix of size vd. Summing all of these and simplifying yields our PARAMS equation.
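The derivations above translate directly into code. The Python sketch below transcribes PARAMS, MEMCPYS and FLOPS (the number of heads h enters the last two even though it is suppressed in their argument lists), and combines them into the per-step time and loss estimates of the main text; the regression coefficients c_1, c_2, c_3, the training budget, the exponents and the example hyperparameters are all placeholder values for illustration, not the fitted ones.

def params(d, n, v, w):
    return v*d + n*d*(8 + 2*w + 4*d) + n*w

def memcpys(d, n, s, v, w, h):
    return 2*v*d + 2*s*v + n*s*(w + 2*h*s) + 2*n*d*(w + 4*s + 2*d)

def flops(d, n, s, v, w, h):
    return 2*s*v*d + 2*d*n*s*(w + 2*d + s) + n*h*s*s

def step_time(d, n, s, v, w, h, c1=1e-10, c2=1e-13, c3=1e-3):
    # c1, c2, c3 stand in for the regression coefficients of the TIME equation.
    return c1*memcpys(d, n, s, v, w, h) + c2*flops(d, n, s, v, w, h) + c3

def est_loss(d, n, s, v, w, h, T=3*3600,
             A=195.76, B=182.52, E=2.34, alpha=0.34, beta=0.28):
    # L-hat = E + A/PARAMS^alpha + B*(TIME/T)^beta, as in the main text.
    N = params(d, n, v, w)
    return E + A * N**(-alpha) + B * (step_time(d, n, s, v, w, h) / T)**beta

cfg = dict(d=512, n=6, s=1024, v=8000, w=2048, h=8)   # an example configuration
print("params  :", params(cfg["d"], cfg["n"], cfg["v"], cfg["w"]))
print("sec/step:", step_time(**cfg))
print("est loss:", est_loss(**cfg))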
http://arxiv.org/abs/2406.18799v1
20240627000852
Local aspects of topological quantization and the Wu-Yang Monopoles
[ "Aayush Verma" ]
math-ph
[ "math-ph", "hep-th", "math.MP" ]
𝒲 μν αβ ∂ η_μν ∂_α ∂_β λ α β̱ γ ∂σ ∂τ ℝ η_ #1(AV- #1) III_I𝒜Ø𝒪ℋ̋R/{0}RCP^1CP^nHP^nZ{𝒰_i}𝒰𝒰_α𝒰_β𝒰_γČech 𝔉C^p(, ); 𝐂^𝐩(,,Ω^𝐪)theoremtheoremC^*Local aspects of topological quantization and the Wu-Yang Monopoles Aayush Verma July 1, 2024 =================================================================== § ABSTRACT In this paper, we review how local potentials arise in the Wu-Yang topological quantization. We also discuss the isomorphism between the de Rham cohomology classes and Čech cohomology classes in such topological quantization. We also emphasize the importance and application of local and global information in gauge theories. July 1, 2024 § INTRODUCTION The study of monopoles first appeared in <cit.>, in which Dirac proposed a quantization condition that implies the quantization of electric charge e in the presence of magnetic monopoles of strength G. Dirac monopoles are defined for symmetric Maxwell fields. In particular, we put G at the origin which produces a magnetic field 𝐁 = G/ρ^2ê_ρ on R/{0}. It is important to note that the fundamental group π_1() is trivial which means that is a simply connected topology. An equivalent statement is that two paths are homotopy invariant and can be contracted to a point in . This property is necessary to realize that there exists smooth 1-form potential A with dA = B. However, such potentials are hard to define on for monopoles, as we will see later. In order to have a viable topology, Dirac suggested a string D_s which originates from the origin and moves to infinity without intersecting itself. For such strings, we can define its complement as an open subset U in R^3 which has π_2(U) =0. This suggests that no sphere in U contains the points of D_s. Moreover, U is simply connected as well. So we can take a loop around D_s, continuously lift around the origin, and shrink it to the point. We can imagine two Dirac strings D_s+ and D_s- with a common origin and we get two open sets U_+ and U_- which can cover the . We can now define two potentials (up to scalar multiple) on each open set which we denote as A_+ and A_-. As expected we find that A_+ and A_- do not agree on U_+ ∩ U_- which is just /D_s±. (If they had agreed, that means there exists an A globally on without difficulties.) For this space configuration, F is exact since H^2_dR(R/D_s±)=0 and thus it resolves, somewhat, the problem of singularity. However, this is not the nicest solution. The `Dirac quantization condition' is a result of the quantum mechanical nature of the phase factor. Precisely, it is given for an integer n qG = 1/2n which is a direct manifestation of a Hopf bundle, which we will discuss later. In (<ref>), G is the monopole strength.[We have chosen G as a symbol instead of g here to not get confused with gauge transformations used in this paper by notation g_.] It is also interesting to note that a generalized Chern-Gauss-Bonnet theorem also implies this quantization condition. Now we argue why there is not a “well-defined” non-singular potential A for B. For this, we take a 2-sphere S^2 on and divide the 2-sphere S^2 into two manifolds given by R_+ and R_-. We assume that there exists a non-singular potential A on S^2 with dA = B. Provided B, we simply have ∬_S^2B · dS^2 = ∬_S^2( g/ρ^2 ê_ρ) ·ê_ρ dS^2 = g/R^2∬_S^2 dS^2 = 4π g which seems to be the right answer. However, we now do the same integral using Stoke's theorem. We should be able to do it assuming that A is a smooth 1-form. We see that ∬_S^2B · dS^2 = ∮_C A · dr+ ∮_-C A · dr =0 which is a contradiction. 
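A quick numerical check of the two computations above is instructive: direct integration of B over the sphere gives 4πG, whereas Stokes' theorem can only recover this once two local patch potentials are used, their mismatch on the equatorial overlap supplying the entire flux. The Python sketch below uses the standard north/south patch potentials (their explicit forms are quoted later in the text) with illustrative values of G and the sphere radius.

import numpy as np

G, R = 1.0, 1.0

# (i) Direct surface integral: B.dS = (G/R^2) * R^2 sin(theta) dtheta dphi.
theta = np.linspace(0.0, np.pi, 200001)
dtheta = theta[1] - theta[0]
flux = 2.0 * np.pi * G * np.sum(np.sin(theta)) * dtheta
print("direct flux :", flux, "   4*pi*G =", 4.0 * np.pi * G)

# (ii) Stokes with *local* potentials: phi-components on the equator of the
# potential regular on the northern patch (A_plus) and on the southern patch.
th = 0.5 * np.pi
A_plus = G * (1.0 - np.cos(th)) / (R * np.sin(th))
A_minus = -G * (1.0 + np.cos(th)) / (R * np.sin(th))
circumference = 2.0 * np.pi * R * np.sin(th)
print("loop of A_+ :", A_plus * circumference)        # flux through the upper cap
print("loop of A_- :", A_minus * circumference)       # minus flux through the lower cap
print("mismatch    :", (A_plus - A_minus) * circumference, "= 4*pi*G")

A single smooth A would force the two loop integrals to coincide, reproducing the contradiction; the mismatch on the overlap is exactly what a gauge transformation between the two patches must account for.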
To rescue from this precise singularity[This singularity is not really physical, since one can define a global form F without any singularity. This is also one of the reasons why Dirac's string formalism and Wu-Yang formalism are equivalent. See <cit.> for more.] of A on B, Wu and Yang came up with a gauge interpretation (or `loops' that we will see) <cit.>. This is one of the central topics in this paper. We must realize that what we are doing is essentially a localization. For instance, if ⋃_i α_i = X for a space X and we can describe a continuous function f X → where we can check the agreement for two by two intersections. The localization of potential is the only way to define non-singular potentials, however, globally there exists still a singular potential. Before discussing the Wu-Yang gauge approach, we must look into the Hopf bundle. It is interesting to observe that the connection on the Hopf bundle (Ω S^3 → S^2, Ω being a functor in this paper) describes monopoles that we have just described <cit.>. The study of homotopy groups of spheres is a useful task. One such is homotopy groups of 2-sphere π_i(S^2). To understand the Hopf bundle, we define a group action U(1) on 3-sphere S^3 defined using complex coordinates z_0,z_1. The group action for u ∈ U(1), ||u||=1 is given by u (z_0,z_1) = (uz_0,uz_1) and the quotient[A natural question to ask in what kinds of Maxwell's equations we get using or perhaps . For CP^2 we find that we get an electromagnetic instanton <cit.>. Similarly, we can move higher in dimensions for either finding higher dimensional solutions to Maxwell fields or quaternions fields. Hopf bundle description comes handy in here.] of S^3 by the action is . The fibers of U(1) in S^3 are S^1. Now the is equivalent to Riemann sphere S^2, hence ≃ S^2. The Hopf bundle now becomes S^1 → S^3 S^2. One can also prove that π_i (S^3) = π_i (S^2). We can now define the curvature of the connection on S^3 which is S^1 bundles over S^2 as F = 1/2sinθ d ϕ∧ d θ which is, when extended to Minkowski spacetime, just the field strength for our monopoles of strength g = n/2q. This is one of the many definitions of the Hopf bundle, we have described it in a way that is useful to us, namely in terms of homotopy groups. And indeed, is homotopy equivalent to S^2. We can also argue using this terminology why there is no singular potential on . This is because of the fact that Hopf bundle is non-trivial π_1(S^2 × S^1) ≈≠π_1(S^3) where π_1(S^3) is trivial and π_1(S^2)=. It is worth noting that when we apply the Wu-Yang method, to be described in the next subsection, the charge quantization Eq. (<ref>) is provided by π_1(U(1))= Z. §.§ The Problem with the Dirac String This is not related to the rest of the paper. Dirac had suggested that due to the singularity in the potential forms, we must use a Dirac string as we mentioned previously. It was suggested that the Dirac string is not physical and a mathematical workaround. However, it is not the best way to handle the singularity. We do not need Dirac strings in `t hooft-Polyakov monopoles as there are no singularities in those non-abelian monopoles. For a good exposition of Dirac string see <cit.>. The essential problem with Dirac string, which is that it does not completely eliminate the problem of finding a global 1-form potential, is well-known. We will find that some theorems of algebraic topology obscure us from finding a global potential over the manifold. 
Recently, it was suggested by the authors of <cit.> that there is a hidden field momentum contribution from Dirac string which violates the center of energy theorem <cit.>. The author's point is as follows. We start with a simple monopole placed at the origin of ℝ^3 such that the magnetic charge and electric charge are at rest. The field momentum of the electric field by this monopole has two components which are Coulomb's term and Dirac's string term. There is a non-zero mechanical field momentum contribution from the interaction of magnetic charge and electric field due to the inclusion of Dirac's string which is not vanishing at all. See <cit.> for the discussion on this term. It was suggested in same that there are two takeaways from this non-trivial mechanical field momentum 1) the first is to say that the center of energy theorem is wrong which implies that this term is an error and 2) the second is to believe in the center of energy theorem and accept this term as a real contribution which implies that Dirac's string is real and must be physical even though how infinitesimally thin we believe it to be. However, then it becomes a system in which the electric charges generate a monopole-like magnetic field with a solenoidal magnetic flux <cit.>. Also, see this paper <cit.> which is a comment on this violation. However, none of this will affect any of our discussions on what to follow. §.§ Greub-Petry-Wu-Yang Quantization In this subsection, we recall the Wu-Yang method <cit.> (see also Greub-Petry <cit.>) for describing monopoles with a charted 2-sphere. That is equivalent to the connection defined on Hopf bundle S^3 → S^2. While doing so, we want to achieve potentials that are not singular and can be described using gauge theory[For a principle G-bundle, the global gauge group is defined as the bundle of automorphism. Local gauge group is the group of gauge transformation which trivially means changing the variables of the theory. Throughout the paper, we will be mainly talking about the local gauge group of gauge potentials. For that, one requires a trivialization of the manifold.]. We can achieve it but at the cost of the non-global theory of potential, for the reasons we have described above. We will set up the structure in a manner that would be fruitful in the context of the paper. Originally, the Wu-Yang structure was constructed to provide meaning to the non-integrable phase factor. The idea is to take a 2-sphere and cover it with open covers 𝔘 = {𝒰_i }. These open covers would be different from an open ball and we will call them `good' open covers. The last point enables us to use the Poincare lemma in every non-empty overlap, see <cit.>. For simplicity, we chart the sphere with two open covers _α and _β. However, it does not matter which open covers we need to use and one can use n covers at a time. The properties of topological invariants do not depend on the choice of covers.[Such independence arises because of the nature of covers which are diffeomorphic to open ball. The intersection region, like ∩ is also diffeomorphic to an open ball, and any finite intersection is contractible. There also does not exist a unique point in the intersection of a worldline of a particle going through the overlap.] On each patch, we can associate a vector potential A_α, α∈Λ, which are 1-forms in de Rham complex. We associate A_α and A_β to _α and _β respectively. As one can check A_α are singularity free and gives F = dA_α for some region with boundary V_i. 
F is also the curvature of U(1) connection and it is globally invariant unlike potential forms. Under simple circumstances of <cit.>, for two patches these potentials are A_α = G/rsinθ(1-cosθ) A_β = -G/r sinθ(1+cosθ) where 0 ≤θ≤π and r >0. In an non-empty overlap the region _α∩_β, A_α and A_β are related by a gauge transformation A_α→ g^-1 A_β g and because this gauge transformation must be single-valued, due to Eq. (<ref>), there does not exist an overlap region that provides meaning to phase factors ψ_i if Eq. (<ref>) is unsatisfied. The field strength is invariant under a gauge transformation, up to gauge distortions. One could describe a similar gauge transformation A_α→g̅^-1A_β g̅ but there exists a non-singular map λ g →g̅ and quantization is invariant. Later, it will be evident to us that g depends on the trivialization of the manifold. In this way, we can describe a local theory of Dirac monopoles, without introducing strings. These are singularities free as one can see using Stoke's theorem. We stress again that the solution is invariant for any number of open covers, up to a topological constant. In the language of de Rham cohomology, which is going to be the standard language henceforth, A_α is a 1-form defined on _i. Our de Rham operator d is defined nilpotent d^2=0 and forms a cochain complex of objects p-forms Ω^p_M. de Rham cohomology H^p_dR, M is defined as a set of closed forms modulo exact forms in Ω^p_M for defined on some manifold M. Since F = dA, it is a closed 2-form in Ω^2_M. What is the meaning of gauge transformation and quantization in cohomology theory? The answer to the quantization interpretation is straightforward <cit.> and there is a thoughtful way to do it. We again take a 2-sphere and chart with three covers , and . Similarly, we define 1-form gauge potentials A_α, A_β and A_γ on each patch with gauge transformation between them in each non-empty overlap as defined in (<ref>). Now we can take a particle and describe its worldline in M as . As it goes through S^2, it acquires a correction to its Lagrangian of form ∫_ A. It should be noted that these are not the only corrections, for instance, particles can also interact with other gauge fields. We include in (<ref>) only those interactions which are of `topological interest' to us. Since there does not exist a global vector potential A, we must refer to the Wu-Yang structure. The trajectory of goes through each patch in S^2. Which point we choose in any finite overlap of , , to pass does not matter because we have used good finite covers <cit.>. By a slight abuse of notation, we will describe region[The boundary condition can be described as ∂()̱ = _ - _$̱.(αβ) = ∩and likewise we define(βγ)and(γα). We can now define a pointP_(αβ)in(αβ). Similarly, pointsP_(βγ)andP_(γα). It should be noted that these points can be picked up arbitrarily in each overlap and its position does not determine the final solution. < g r a p h i c s > The worldline is the dotted line passing through the overlaps between , , and . The path ofgoes through our defined points inP_(αβ), P_(βγ), P_(γα)and transition between potentialA_αis described by the gauge transformation (or transition function)g. For convenience, we can define a map g_ A_→ A_β and the co-boundary is dg_ = A_ - A_. Ifg_isn-form, thendg_is(n+1)-form. SinceA_ - A_$̱ is a 1-form, g_ is a zero-form object. Transition functions g_ are anti-symmetric so g_ = - g_. For 3-folds of manifold, see fig. 
(<ref>), the total contribution (<ref>) is given by the vector potentials and their gauge transformation. We can write the contribution simply <cit.> I = ∫_ A_ + ∫_ A_+̱∫_ A_γ + g_ (P_(αβ)) + g_ (P_()) + { g_ (P_()) + g_ (P_()) + g_ (P_())} which has numerous line integrals and gauge transformations. But there is a “constant” object that appeared at the end of the action (<ref>) which is the piece in the curly bracket. To understand this constant, let us visit (<ref>) and write further dg_ = A_ - A_ dg_ = A_ - A_ and all these equations give d(g_+g_+g_) = 0 which gives g_+g_+g_∈ℝ that means, the piece is indeed a constant. It must be noted that it is constant also because it is a closed zero-form object, more on this later. Because of the Poincare lemma, we can give a more precise meaning of (<ref>). In particular, one understands that g_+g_+g_ = η_ is a constant only over the entire triple overlaps ∩∩. It is interesting to note that since (<ref>) is a constant, it is irrelevant to define the coordinates in the last piece in (<ref>). We will interpret eq. (<ref>) as a non-trivial cocycle condition in sec. <ref>. What if we had used two covers instead of three covers? It does not matter. As one can see, if one does two covers situations, there is no constant in action. In three covers cases, there is a constant. We can conclude that solutions of any number of covers are equivalent, up to a topological ambiguous constant. When we are solving classical equations, this constant can be ignored, however, in quantum mechanics, this constant leads to inconsistencies unless defined exactly. In computing the line integral in eq. (<ref>), the phase factor is found to be ambiguous for a Euclidean propagator in the form of exp(i η_). To be consistent, we must require every phase factor to be one, which means = 2πϵ_ where ϵ_∈ℤ. A precise meaning of this constant exists in de Rham-Cech cohomology and also in the context of cohomology in some presheaf that we will see later in sec. <ref>. Let us now turn to compute magnetic flux using Stoke's law. That requires us to subdivide a manifold into region V_ = ∩∩∩ S^2 and in the triple overlaps, the total flux determined by eq. (<ref>). Indeed, one can check that ∫_S^2 F = ∑_V_ = 2π∑_V_ϵ_ and thus magnetic flux is determined by this constant. This is the famous Dirac's quantization condition. Since the singularity in A is not physical, Dirac's string quantization and Wu-Yang quantization are equivalent <cit.>. So as long as we stick to Greub-Petry-Wu-Yang quantization anything that we describe in this paper should be equivalent to a physical Dirac monopole. The local aspects of gauge potentials were used in the overall process of quantization. An important requirement in this topological quantization is the use of good covers . We will provide more meaning to all the objects that appeared in this section in the later part of this paper. § MONOPOLES AND DE RHAM- COHOMOLOGY What we have observed so far is that the use of finite `good' covers can be helpful in understanding topological quantization. We already have described a cochain complex operated by a differential operator d, which is the de Rham operator. This operator induces cohomology in this cochain, namely[A notation clarification we conventionally always use subscript for homology group and superscript for cohomology group. In this paper, we are only concerned with the cohomology group.] de Rham cohomology H^p_dR, M on manifold M. 
An important cohomology that we have not explicitly introduced so far is Čech cohomology. We have, however, already used combinatorics of open covers which is an important ingredient in Čech cohomology and sheaf cohomology <cit.>. This cohomology provides details of the local features of topology, as we will see. In particular, we find that constant is a 2-cocycle in the de Rham- double cohomology that stitches the total local and global data on manifold M. There exists an isomorphism between the classes of de Rham cohomology and cohomology in the topological quantization. This would imply that there is an isomorphism between classes of de Rham cohomology and cohomology. The restriction map from de Rham classes to de Rham- double cohomology classes and the restriction map from Cech classes to de Rham- double cohomology classes are also interesting but will not covered in this paper. §.§ The Role of de Rham- Cohomology In this subsection, we will see the roles played by the classes of de Rham- cohomology in Dirac's quantization and local 1-form potentials A_α. To do that, we must first motivate the definition of cohomology in the present context. A presheaf is defined to be a contravariant functor from a category of open set on topological space X to a category of abelian group Open(X) →Cat(G) where Cat(G) is an abelian category of G. Equivalently, we can say that a presheaf associates an abelian group G to on X. We will make it more precise later. For ⊂𝒱, we define a restriction map ρ^_𝒱(𝒱) →() where, for let say ⊂𝒱⊂𝒲, ρ^𝒲_ = ρ^𝒱_·ρ^𝒲_𝒱 is satisfied. Moreover, we see that open covering on X is also important for this definition, see <cit.>. In fact, one could ask for the definition of (∩). Or more appropriate to the current discussion, the meaning of (∩∩)? Answers to such questions can answer what is the global and local properties of X. In other words, can we use a local function to determine a global solution on X? This seems to be related to our discussion of singularity in a global gauge potential and the necessity of using local 1-forms with gauge transformation between them in the defined overlap. It is useful to us when we do triangulation of our manifold X. In this way, it enables us to define simplicial complexes. Mainly, we wish to define p-cochains that are related by a map (or morphism) δ. The essential idea is to first define the p-simplex for open cover 𝔘 = {_i }. We will denote a vertex a in , we can denote two vertices a and b for a non-empty overlap ∩ and connect it by an edge which is a 1-simplex. For a triple finite intersection ∩∩, we have 2-simplices as a triangle. Similarly, we can draw p-simplex for any non-empty overlap ∩∩⋯∩_σ. We will denote such repeated collection in every non-empty overlap of simplex as the nerve of 𝔘 as N(𝔘). For a discussion on simplicial complex see <cit.>. Now that we have introduced p-simplices and presheaf , we are good to talk about cohomology. We will define p-cochains C^p(, ) as a linear combination generated from p-simplices. It is a must to write while defining p-cochains to show the continuing dependency of the cover. Also, these p-cochains are being defined for some presheaf . More concretely, we can write C^p(, ) = ∏_ < <̱< … < σ(U_⋯σ) where we have used our previous notation U_⋯σ = ∩∩⋯∩_σ. We will denote a co-boundary map δ δ→ C^p+1(, ). One can easily verify that just like d^2=0 for the de Rham complex, it is δ^2 = 0 for this complex as well. 
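The identity δ² = 0 can be checked explicitly on a toy cover. The Python sketch below does this for a three-set cover with real coefficients (the labels and cochain values are invented for illustration), and also evaluates δ on a 1-cochain, which is precisely the combination g_αβ + g_βγ + g_γα that appeared as the constant η in the previous section.

from itertools import combinations

covers = ["a", "b", "c"]                    # a toy good cover of S^2

def coboundary0(f):
    # (delta f)_{ij} = f_j - f_i on every 1-simplex (double overlap)
    return {(i, j): f[j] - f[i] for i, j in combinations(covers, 2)}

def coboundary1(g):
    # (delta g)_{ijk} = g_{jk} - g_{ik} + g_{ij} on every 2-simplex (triple overlap)
    return {(i, j, k): g[(j, k)] - g[(i, k)] + g[(i, j)]
            for i, j, k in combinations(covers, 3)}

f = {"a": 0.3, "b": -1.2, "c": 2.0}         # an arbitrary 0-cochain
print(coboundary1(coboundary0(f)))          # zero (up to rounding): delta^2 f = 0

# A 1-cochain need not be closed; its coboundary on the triple overlap is
# g_ab + g_bc + g_ca (using the antisymmetry g_ca = -g_ac), i.e. the constant eta.
g = {("a", "b"): 0.5, ("b", "c"): 0.5, ("a", "c"): -0.25}
print(coboundary1(g))                       # non-zero 2-cocycle on the triple overlap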
A p-cocycle Z^p(,) is defined to be a p-cochain which is trivial under the map δ δ Z^p(,) → 0 which is stronger when one assumes the Poincare lemma. Similarly, one says that in (<ref>), C^p+1(, ) is the boundary cochain of . Visually, the complex of cochains looks like C^0(,) [r]^δ C^1(,) [r] ^δ C^2(,) [r] ^δ ⋯⋯[r] ^δ C^p(,) and δ induces a cohomology Ȟ^p(X) which is the cohomology, which is a set of p-coycles modulo p-coboundaries. The cohomology group can be defined as independent of . It will be necessary for us to also define a map δ^-1 C^0(,) → C^-1(,) but this would require a little more context as we will see. It is also required to introduce partition of unity functions {p_} associated to each {_a } with the following properties * p_≥ 0 * ∑_ p_a = 1 * There exists a compact support for p_ in cover U_. Given a cohomology, we can extract information about the topology for some coefficients in a presheaf . So cohomology must answer any topological corrections of kind eq. (<ref>). It is possible now to form a new double cohomology of de Rham cohomology and cohomology. This would give a precise answer to our quest of understanding the topological corrections in eq. (<ref>). We can simply, and roughly, endow a differential form to each p-cochain C^p(, ). For example, A_α are 0-cochain and 1-form. Similarly, the transition functions are 1-cochains (0-form) and the constant c_ are 2-cocycles (0-form) because they are closed since δ^2 g_ = 0. It would be nice to neatly write them as 𝐂^𝐩(,,Ω^𝐪), which would represent an object of p-cochain and q-form. Note that A_α, g_, and c_ are defined for {_} and their overlaps as discussed above. As we will argue below that { A_α, g_, c_} are the most fundamental topological information about S^2 under our consideration of charge quantization. This is often called `topological quantization'. In the claim that { A_α, g_, c_} describes a monopole in S^2 with the relevant de Rham- classes, what is the pre-sheaf for C^p(,)? In gauge theory with U(1) action, this presheaf is =. This is evident from our flux through curvature F in eq. (<ref>). This condition is somewhat also important for quantization in gauge theory <cit.>. We will fix the presheaf from now on as and only write it when needed. Let us now review the whole sec. <ref> on Wu-Yang quantization as viewed in de Rham- cohomology group . We always stick with our definition of a good cover[It should be noted that there exists a refinement of cover with direct limit and one can always define C^p(,) without any dependency on the cover <cit.>.]𝔘={_}. An initial question is if de Rham- cohomology gives insights into the obstruction in defining a global A without singularity. A local 0-cochain, q-form λ∈ C^0(,Ω^q) can be defined globally if in the overlap one can define λ_ - λ_=̱ 0, λ_ - λ_∈̱C^1(,Ω^p) which is to say that λ_ and λ_$̱ are equivalent and extendible to each other's region. In this sense, the transition functiong_λ_→λ_$̱ becomes an identity map. While eq. (<ref>) is satisfied for 0-cochain and the closed form F_∈ C^0(,Ω^2) and thus we can ignore the subscript[So that F_α = F_β, which is not necessarily true for A.], it is not satisfied for A_ because of a different cocycle condition. This means that theory is important to understand the obstructions of defining a local theory globally. Translating everything and finding the correction in ∫_Γ A is the goal here. (We will find the isomorphism between the de Rham classes and classes in this exercise.) The gauge transformations g_ in sec. 
<ref> are 0-form and 1-cochain 𝐂^1(,,Ω^0). Now, a very simple claim is that the following are equivalent * The operation d g_→ dg_ in 𝐂^1(,,Ω^1) where dg_ = A_ - A_$̱. * The operationδ A_→ A_ - A_$̱ which also lies in 𝐂^1(,,Ω^1). What about d dg_ and δ g_? The former vanishes because of d^2=0 while the latter are the objects in 𝐂^2(,,Ω^0). We will denote those by c_ and is given by δ g_→η_ = g_ + g_ + g_ which is also our cocycle. Now, for convenience, we can create a table for all the classes. Ω^3 0 Ω^2 F 0 Ω^1 A_ dg_ 0 Ω^0 g_ η_ 0 𝐂^0(,,Ω^𝐪) 𝐂^1(,,Ω^𝐪) 𝐂^2(,,Ω^𝐪) 𝐂^3(,,Ω^𝐪) A table box for all the classes in our de Rham- cohomology. H_dR H^*(𝐂, 𝒰) H_Čech H^*(𝐂, 𝒰) ["r", from=1-1, to=1-4] ["g"', from=1-1, to=3-1] ["𝕀"', from=3-4, to=1-4] ["f"', from=3-1, to=3-4] Here the maps r and f are restriction maps which maps de Rham cohomology and cohomology to the double complex H^*(𝐂, 𝒰). In order to find the quantization condition, we again impose the consistency condition for any Euclidean propagator and get the similar cocycle condition Eq. (<ref>) and thus getting the quantization condition η_ = 2 πϵ _. (One may similarly do other cases with QFTs, for example, WZW models <cit.> where one encounters closed 3-forms rather than 2-forms.) Finally, we see that for locally defined 1-forms A, we get a cocycle condition that implies the quantization condition. From table. <ref>, we can now find the isomorpism between de Rham classes and classes. If we start with a globally defined 2-form F in 𝐂^0(,,Ω^2), we can find the locally defined constant η in 𝐂^2(,,Ω^0) using d and δ (and their inverses) maps. In this way, we have the isomorphism between the de Rham cohomology and cohomology. §.§ Final Comment The exposition in the previous section suggests that there are quite many deep algebraic geometric structures in the example of Wu-Yang monopoles. There has been use of Deligne cohomology as well for studies like WZW <cit.>. We also see that presheafs and sheafication can give us an idea about how information works on a manifold. Indeed, one can use such algebraic geometry tools to study how local and global information appears for a certain gauge theory. A good example would be to understand Higgs bundle in this context. Moreover, Theorem <ref> is important from a mathematical perspective as well. In this document, we only emphasized the physical aspects from gauge theory side. We note that most of this paper was written in 2023. Aknowledgements I was fortunate to learn mathematics relevant to this paper from a lot of people and I thank them all for those discussions. In particular, I would like to thank A.K Maloo for teaching me a lot about abstract algebra. utphys E-mail: ]
http://arxiv.org/abs/2406.18205v1
20240626093736
Cosmological Particle Creation Using an Equal-Time Wigner Formalism
[ "Philip Semrén" ]
gr-qc
[ "gr-qc", "hep-th" ]
APS/123-QED philip.semren@umu.se Department of Physics, Umeå University, SE-901 87 Umeå, Sweden § ABSTRACT It is well known that the expansion of the universe can create particles. However, due to ambiguities when defining particles during the expansion, there are still debates about how to choose vacuum and particle states. To clarify how particles are produced in an expanding universe, we study the creation of real scalar particles in flat FLRW spacetimes by using a recently developed equal-time Wigner formalism. By comparing this quantum kinetic formalism with the standard Bogoliubov approach, we make a natural definition of a particle number in terms of kinetic phase-space functions, which we then compare with common adiabatic particle numbers. With inspiration from flat spacetime QED, we perform numerical calculations and discuss the interpretation of the particle numbers in terms of a hypothetical switch-off in the expansion rate. Finally, we consider how this interpretation is affected by regularization. Cosmological Particle Creation Using an Equal-Time Wigner Formalism Philip Semrén July 1, 2024 =================================================================== § INTRODUCTION As the early universe expands, dark matter is produced by the expansion. This process is one of few mechanisms that can produce dark matter, and is generally described using the framework of quantum field theory in curved spacetime. However, in this framework there are ambiguities that prevent us from uniquely defining what a particle is during the expansion. This leads to similar ambiguities in the produced number of particles, which has sparked an unresolved debate about whether the vacuum state in de Sitter spacetime is stable or not <cit.>. To settle the discussions, the competing particle definitions have to be interpreted in terms of physical particles. To clarify how particles are defined and produced in an expanding universe, we reformulate the process in terms of a recently developed quantum kinetic formalism for curved spacetimes <cit.>. This formalism revolves around a set of phase-space functions that serve as the quantum counterparts to the classical distribution function, which describes how particles are distributed with respect to spacetime position and momentum [Note that, although they play a similar role, the phase-space functions used in this quantum kinetic approach are generated by a Wigner transform, and thus have properties that set them apart from classical distribution functions. For instance, the Wigner functions can be negative. Despite such differences, which are less significant in the spatially homogeneous case, we will occasionally refer to the Wigner functions as distribution functions.]. Thus, the equations are written using quantities that are close to physical interpretation, in contrast to the common approach, where working on the level of quantum fields can obscure interpretations. As a result, we will show that the quantum kinetic approach immediately leads to a natural particle definition, which we interpret in terms of a hypothetical switch-off in the expansion rate. Our motivation for using a quantum kinetic approach comes from flat spacetime, where quantum kinetic models have been widely used to incorporate quantum effects when studying plasmas and other inherently statistical systems <cit.>. When based on fully relativistic quantum theories, these models can describe fundamental quantum phenomena as well as strong-field effects. 
Strong-field electron-positron pair production can, for instance, be captured using the Dirac-Heisenberg-Wigner formalism, which uses an equal-time Wigner transform of the Dirac equation in flat spacetime <cit.>. Although it is quantum field theories in flat spacetimes that serve as the basis for most relativistic quantum kinetic models, analogues applicable to curved spacetimes have also been developed. These have generally used fully covariant approaches <cit.>, where the kinetic transport equations are accompanied by quantum mass-shell constraints. This is also a feature of covariant models in flat spacetime <cit.>. However, by partially breaking the explicit covariance, the authors of Ref. <cit.> recently developed an equal-time formalism for real scalar fields in curved spacetime. This approach naturally leads to a set of dynamical equations that are closed and on-shell without extra constraints, similarly to what happens when considering equal-time approaches in flat spacetime <cit.>. In Ref. <cit.> the context was to derive equations suitable for describing dark matter, and an emphasis was put on the observation that a certain combination of the equations reproduces the general relativistic collisionless Boltzmann equation in the classical limit. Moving the focus away from the classical limit, we will instead use the framework from Ref. <cit.> to describe particle creation in Friedmann-Lemaître-Robertson-Walker (FLRW) spacetimes due to a dynamical scale factor. Particle numbers and pair creation in cosmology have been studied extensively before (see e.g. Ref. <cit.> for a recent review). The prevailing approach for studying these aspects is to make use of Bogoliubov transformations to relate the incoming and outgoing creation and annihilation operators. Alongside this approach, kinetic phase-space methods have also been used (see e.g. Refs. <cit.>). Contributing to the kinetic description, a key result of this paper is the formulation of common particle number definitions in terms of the phase-space functions defined in Ref. <cit.>. We also interpret the particle numbers in terms of a hypothetical switch-off in the expansion rate, in alignment with results from quantum electrodynamics in flat spacetime <cit.>. The paper will be outlined as follows. First we will give a short summary of the parts of Ref. <cit.> that are needed for our purposes. That includes the Arnowitt-Deser-Misner (ADM) decomposition and the Wigner transformation of the Klein-Gordon equation, which we then apply to FLRW spacetimes. Then, to make connection with the Bogoliubov approach, we consider expansions of the scalar field in terms of specific known mode functions. That allows us to make a natural particle definition using the phase-space functions of the theory. This definition is then compared, both analytically and numerically, to commonly used adiabatic particle numbers. Finally, we discuss how regularization affects the interpretation of the particle numbers. § PRELIMINARIES Here we collect some preliminaries that are needed to define the sought phase-space functions and determine their evolution equations. For more details, the reader is referred to Ref. <cit.>. §.§ ADM Decomposition To define equal-time Wigner functions, we first need a notion of equal-time surfaces. We therefore assume that we have a globally hyperbolic spacetime that can be foliated in terms of a family of spacelike hypersurfaces Σ_t, each hypersurface labeled by a certain value of the corresponding level function t. 
To this level function, we define an associated vector field t^μ through t^μ∇_μ t = 1, t^μ = Nn^μ+N^μ, with lapse N and shift N^μ satisfying n_μ N^μ = 0. Here n_μ is assumed to be the normal to the spatial hypersurface, proportional to (t)_μ and with norm n_μ n^μ = -1. Using t as the zero coordinate x^0 and latin indices i,j,k,… to run over 1,2,3, we can then write t^μ = 1 0, n_μ = -N 0, N^μ = 0 N^i, n^μ = N^-11 -N^i, g_μ_ν = -N^2 +N^iN_i N_j N_i γ_i_j,   g^μ^ν =-N^-2 N^-2N^j N^-2N^i γ^i^j-N^-2N^iN^j,  √(-g) = N√(γ), where γ_i_j is the induced metric on the spatial hypersurface, γ^i^j is its inverse, and γ_μ_ν = g_μ_ν + n_μ n_ν can be seen as a projection tensor projecting onto the spatial hypersurfaces. Having thus defined a 1+3 decomposition of spacetime, we consider the dynamics of a real scalar field ϕ in this decomposition. Using the action for a massive, minimally coupled, scalar field without additional interactions, the equations of motion for ϕ and its canonical momentum Π can be written as ∂_t ϕ = N/√(γ)Π + N^j∂_jϕ, ∂_t Π = ∂_j(N^jΠ) + ∂_i (N√(γ)γ^i^j∂_jϕ) - N√(γ)m^2/ħ^2ϕ. These equations are equivalent to the Klein-Gordon equation ∇_μ∇^μϕ = m^2/ħ^2ϕ, on returning to covariant form <cit.>. Furthermore, decomposing the energy momentum tensor for the scalar field T_μ_ν = ∂_μϕ∂_νϕ - g_μ_ν/2(∂^σϕ∂_σϕ + m^2/ħ^2ϕ^2), as T^μ^ν = ρ n^μ n^ν + Pγ^μ^ν + q^μ n^ν + n^μ q^ν + π^μ^ν, in terms of the projections ρ = n_μ n_ν T^μ^ν, P = 1/3γ_μ_ν T^μ^ν, q^μ = -γ^μ_(σn_τ) T^σ^τ, π^μ^ν = (γ^μ_(σγ^ν_τ) -1/3γ^μ^νγ_σ_τ)T^σ^τ≡ T^⟨μ^ν⟩, we find that ρ = 1/2(1/γΠ^2+m^2/ħ^2ϕ^2+γ^i^j∂_i ϕ∂_j ϕ), P = 1/2(1/γΠ^2-m^2/ħ^2ϕ^2-1/3γ^i^j∂_i ϕ∂_j ϕ), q^i = -1/2γ^i^j(Π/√(γ)∂_jϕ + ∂_jϕΠ/√(γ)), π^i^j = (γ^i^lγ^j^k - 1/3γ^i^jγ^l^k)∂_lϕ∂_kϕ, where round brackets around a pair of indices denotes a symmetrization. Relative to an observer with 4-velocity n^μ, ρ is the energy density, P is the isotropic pressure, q^μ is the energy flow orthogonal to n^μ, and π^μ^ν is the anisotropic pressure. The latter two satisfy n_μ q^μ =n_μπ^μ^ν = 0, π^μ^ν = π^ν^μ, and γ_μ_νπ^μ^ν=0, so that q^0 = π^μ^0= π^0^μ= γ_i_jπ^i^j=0. §.§ Wigner Transformation of the Klein-Gordon Equation In the context of classical kinetic theory, bulk properties of the system, such as its energy momentum tensor, are obtained by taking moments of a phase-space distribution function. To formulate something similar for the scalar field, we can note from the previous section that its energy momentum tensor only involves quadratic monomials of ϕ, Π, and ∂_i ϕ. Hence it could be helpful to perform some sort of Fourier transform of the quadratic monomials, interpreting the conjugate variables as momenta. More specifically, after promoting the fields to operators and imposing canonical commutation relations, we will make use of the equal-time Wigner transform defined in Ref. <cit.> for two operators X and Y through F_XY(t,x^i,p_k) ≡√(γ)∫_TΣ_t[3]rexp(-i/ħr^kp_k) [exp(r^k/2[^(3)]∇^H_k)X][exp(-r^k/2[^(3)]∇^H_k)Y], where the integral is performed over the coordinates r^k of the fibre of the tangent bundle TΣ_t at x^i. In this definition we have introduced the horizontal lift of the covariant derivative on Σ_t to TΣ_t [^(3)]∇^H_k≡[^(3)]∇_k - r^l[^(3)]Γ^i_k_lr^i, where [^(3)]∇_k is the covariant derivative on Σ_t and [^(3)]Γ^i_l_k its corresponding Christoffel symbols. It should also be noted that we have neglected a normal ordering procedure for (<ref>) described in Ref. <cit.>. 
Instead of using this procedure, which ensures that finite results are obtained on integrating over the momenta, we postpone the issue of divergent integrals to Sec. <ref> where we discuss regularization in terms of an adiabatic subtraction scheme. Choosing X, Y∈{ϕ, γ^-1/2Π}, we get the operators F_ϕϕ, F_ϕΠ, F_Πϕ, F_ΠΠ. Then, following Ref. <cit.>, we define the phase-space functions [These definitions share similarities with certain definitions from <cit.> for flat spacetime.] f_1^+ = 1/(2πħ)^31/2ħ[ω_p/ħF_ϕϕ + ħ/ω_pF_ΠΠ],   f_1^- = 1/(2πħ)^3i/2ħ[F_Πϕ -F_ϕΠ],   f_2^+ = 1/(2πħ)^31/2ħ[ω_p/ħF_ϕϕ - ħ/ω_pF_ΠΠ],   f_3^+ = 1/(2πħ)^31/2ħ[F_Πϕ +F_ϕΠ], where ω_p = √(m^2 + γ^i^jp_ip_j), and denotes the expectation value with respect to the quantum state of the system. These phase-space functions can be used to write the projections of the energy-momentum tensor as ρ = ∫[3]p/√(γ)ω_p f_1^+ + ħ^2/8γ^i^j[^(3)]∇_i[^(3)]∇_j∫[3]p/√(γ)f_1^+ + f_2/ω_p,   P = 1/3γ^i^j∫[3]p/√(γ)p_ip_j f_1^++f_2/ω_p-∫[3]p/√(γ)ω_p f_2 -ħ^2/24γ^i^j[^(3)]∇_i[^(3)]∇_j∫[3]p/√(γ)f_1^+ + f_2/ω_p,   q^i = γ^i^j∫[3]p/√(γ)p_j f_1^- -ħ/2γ^i^j[^(3)]∇_j∫[3]p/√(γ)f_3,  π^i^j = (γ^i^lγ^j^k - 1/3γ^i^jγ^l^k)[ ∫[3]p/√(γ)p_lp_k f_1^++f_2/ω_p +ħ^2/4[^(3)]∇_l[^(3)]∇_k∫[3]p/√(γ)f_1^+ + f_2/ω_p]. Thus, we see that the dynamics of the energy momentum tensor can be fully described using the evolution of the phase-space functions. The evolution equations for the phase-space functions can in principle be determined by using the definition of the Wigner transform (<ref>) and the evolution equations for the fields (<ref>)–(<ref>). However, this procedure is in general quite tedious due to the appearance of terms proportional to the Christoffel symbols in the exponentials. Nevertheless, it has has been done in Ref. <cit.> to leading order in a spatial gradient expansion in powers of ħ. To avoid the complication with the Christoffel symbols and to simplify our analysis, we will restrict our attention to flat FLRW models, where the three-dimensional Christoffel symbols vanish. With this restriction, there is no need for assumptions involving spatial gradient expansions, allowing us to perform a full quantum treatment of the system. § EVOLUTION EQUATIONS FOR THE FLAT FLRW CASE The flat FLRW models can be described using the line element s^2 = -N(t)^2t^2 +a(t)^2(x^2 + y^2 + z^2), where N = 1 when t is chosen as comoving time, and N = a when t is conformal time. In the following applications, t will be comoving time, but we keep N general in this section. Comparing the line element with (<ref>) it furthermore follows that N^i = 0, γ_i_j = a^2δ_i_j, γ^i^j = a^-2δ^i^j, √(γ) = a^3, [^(3)]∇^H_k = ∂_k, and the equations of motion reduce to ∂_t ϕ = N/a^3Π, ∂_t Π = Naδ^i^j∂_i∂_jϕ - Na^3m^2/ħ^2ϕ. Using these together with (<ref>)–(<ref>), we deduce that [As we have derived them here, these equations do not perfectly coincide with the final equations from Ref. <cit.> when those are applied to the flat FLRW metric.] ḟ_1^+ = (ω̇_p/ω_p+3ℋ)f_2 - N/ω_pp_j∂^j f_1^-+ħ N/4ω_p∂_j∂^j f_3,  ḟ_1^- = -N/ω_pp_j∂^j(f_1^+ +f_2),  ḟ_2^+ = (ω̇_p/ω_p+3ℋ)f_1^+ + N/ω_pp_j∂^j f_1^- -ħ N/4ω_p∂_j∂^j f_3 +2Nω_p/ħf_3,  ḟ_3^+ = -2Nω_p/ħf_2 + ħ N/4ω_p∂_j∂^j(f_1^+ + f_2), where ḟ≡∂_t f, ℋ≡ȧ/a, ∂^i f = γ^i^j∂_j f, and ω̇_p/ω_p+3ℋ = ℋ(2+m^2/ω_p^2). At this point, the above evolution equations can be seen as describing a test field propagating on a flat FLRW background. 
However, if the intention is to couple the phase-space functions to the geometry through the energy-momentum tensor, this tensor, and hence the phase-space functions, have to respect the spacetime symmetries. Although we reserve self-consistent calculations with backreaction for another paper, we will therefore assume, in accordance with the homogeneity and isotropy of the spacetime, that the phase-space functions are spatially homogeneous and that ∫[3]p/√(γ)p_i f_1^- = 0, (γ^i^lγ^j^k - 1/3γ^i^jγ^l^k) ∫[3]p/√(γ)p_lp_k f_1^++f_2/ω_p = 0, so that ρ and P are homogeneous while q^μ and π^μ^ν vanish. With these assumptions, the evolution equations reduce to ḟ_1^+ = ℋ(2+m^2/ω_p^2)f_2,  ḟ_1^- = 0,  ḟ_2^+ = ℋ(2+m^2/ω_p^2)f_1^+ +2Nω_p/ħf_3,  ḟ_3^+ = -2Nω_p/ħf_2. Since p_i only appears explicitly in these equations through the combination γ^i^jp_ip_j =δ^i^jp_ip_j/a^2 in ω_p, they are inherently isotropic with respect to p. Hence, provided that the initial conditions share this isotropy, the conditions (<ref>)–(<ref>) are naturally satisfied, showing their compatibility with the homogeneity assumption. § DISTRIBUTION FUNCTIONS FROM KNOWN MODE FUNCTIONS To solve the evolution equations for the phase-space functions in practice, suitable initial conditions are needed. To determine these conditions, and to make connection with the common Bogoliubov approach for studing particle production in cosmology, it is instructive to look at the phase-space functions in terms of certain known mode functions. For this purpose, assume that the field ϕ is quantized with periodic boundary conditions in a cubic box with coordinate volume V=L^3, so that the field can be expanded as ϕ = ∑_*k(f_*kA_*k + f^*_*kA^†_*k) in terms of some mode functions f_*k. After taking expectation values we will let V tend to infinity so that *k becomes a continuous parameter. From now on, we will also set ħ to unity and work in comoving time, so that N=1 and ℋ = ȧ/a ≡ H. On imposing the canonical commutation relations ϕ(t,*x)Π(t,*x') = iδ(*x - *x')  A_*kA^†_*k' = δ_*k*k' we can then interpret A_*k as an annihilation operator and define a vacuum state |0⟩ relative to this mode decomposition by requiring A_*k|0⟩ = 0 for all *k. This vacuum definition is dependent on the choice of mode functions, and that choice is in general not unique in generic spacetimes. §.§ Early and Late Time Minkowski As a first example, we consider the mode functions for a spacetime that asymptotically approaches Minkowski in both the past and the future. Given that a(t)→ a_1 when t→ -∞ with a_1 being a constant, we choose mode functions f_*k that approach the Minkowski vacuum modes <cit.>, f_*k∼ (Va_1^3)^-1/2(2ω_1k)^-1/2e^i(*k*x-ω_1kt), in the early time limit, where ω_1k = √(k^2/a_1^2 +m^2), k^2 = *k^2 = δ_ijk^ik^j. The vacuum state |0⟩ defined with respect to these modes is interpreted as the early time vacuum state. By taking the expectation values in (<ref>)–(<ref>) with respect to this vacuum state, and using the mode functions (<ref>), the corresponding phase-space functions are f_1^+ = f_1^- = 1/2(2π)^3, f_2 = f_3 = 0. At late times, t→∞, we then assume that the spacetime again approaches Minkowski as a(t)→ a_2, with a_2 a constant. 
The mode functions f_*k(t) satisfying the early time limit (<ref>) will then in general be linear combinations of positive and negative frequency parts <cit.> f_*k∼ (Va_2^3)^-1/2(2ω_2k)^-1/2e^i*k*x(α_ke^-iω_2kt + β_ke^iω_2kt), with ω_2k = √(k^2/a_2^2 +m^2), so that a_*k = α_k A_*k + β^*_k A^†_-*k, where a_*k is the late time annihilation operator. From this annihilation operator, we find that the number of outgoing particles in the early time vacuum state is a^†_*ka_*k0 = β_k^2. This particle number can be related to the phase-space functions by using the early time vacuum state and (<ref>) in (<ref>)–(<ref>), which leads to f_1^+ = 1/2(2π)^3(1+2β_k^2), f_1^- = 1/2(2π)^3, f_2^+ = 1/2(2π)^3(α_kβ^*_k e^-2iω_2kt + α^*_kβ_k e^2iω_2kt), f_3^+ = -i/2(2π)^3(α_kβ^*_k e^-2iω_2kt - α^*_kβ_k e^2iω_2kt), for k^i = δ^ijp_j. Hence we can relate the outgoing particle number to the late time value of f_1^+ through β_k^2 = (2π)^3f_1^+ - 1/2≡ n_k. This implies a natural definition of a particle number n_k in terms of f_1^+, and we extend this definition of n_k also to intermediate times. This definition can be shown to coincide with the definition in <cit.>, where n_k in a flat FLRW spacetime was found by determining the Bogoliubov transformation that gave the maximum number of particles. §.§ de Sitter Inflation As a second example, now consider (half of) de Sitter spacetime with flat spatial slicings and constant Hubble parameter H. Defining the vacuum state |0⟩ as the Bunch-Davies vacuum, the corresponding mode functions are [This expression is based on Ref. <cit.>, but we have added a factor e^-π(ν)/2 to get consistent normalization for imaginary ν. Up to an unimportant constant overall phase factor, these mode functions have the same form as the in-vacuum modes used in Refs. <cit.> for flat FLRW coordinates.] f_*k=1/2√(π/HV)e^-3Ht/2e^-π(ν)/2H_ν^(1)(k/He^-Ht)e^i*k*x, where H_ν^(1) is a Hankel function and ν = √(9/4 - m^2/H^2). The distribution functions corresponding to this vacuum state are in turn given by f_1^+ = e^-π(ν)/2(2π)^3π/4Hω_p [ H_ν^(1)^2 + p^2/a^2ω_p^2(H_ν^(1))' +3/2ζH_ν^(1)^2],   f_1^- =-e^-π(ν)/(2π)^3πζ/4H_ν^(1)((H_ν^(1))' +3/2ζH_ν^(1))^* = 1/2(2π)^3,   f_2^+ = e^-π(ν)/2(2π)^3π/4Hω_p [ H_ν^(1)^2 - p^2/a^2ω_p^2(H_ν^(1))' +3/2ζH_ν^(1)^2],   f_3^+ = -e^-π(ν)/(2π)^3πζ/4H_ν^(1)((H_ν^(1))' +3/2ζH_ν^(1))^*, where p^2 ≡δ^ijp_ip_j and the Hankel functions should be evaluated at ζ≡ p/(aH). A prime denotes differentiation with respect to ζ. Using the properties of the Hankel functions, it can be shown that (<ref>)–(<ref>) is indeed a solution to (<ref>)–(<ref>) for the de Sitter spacetime [The fact that f_1^- is constant and equal to 1/(2(2π)^3) in all of our applications is related to the normalization of the mode functions and the conservation of the Wronskian (see e.g. Ref. <cit.>). ]. To compare these results with other references, we consider some specific values of the parameters m and ν. First, for m=0, ν = 3/2, the distribution functions can be simplified to f_1^+ =1/2(2π)^3(1+H^2a^2/2p^2),   f_1^- = 1/2(2π)^3, f_2^+ = 1/2(2π)^3H^2a^2/2p^2, f_3^+ = -1/2(2π)^3Ha/p, giving n_k = H^2a^2/4p^2, which coincides with the result found in <cit.>. Note, however, that this particle number will give an infinite result upon integrating over the momenta. As a second example, we compare with the results in <cit.> for imaginary orders ν. 
Defining ν≡ iγ, with γ now being real, and taking the limit ζ→ 0, corresponding to t →∞, we get f_1^+ = 1/2(2π)^3(m/Hγ(πγ) +3/2γ(πγ)cos(2γ Ht+ψ) ),   f_1^- = 1/2(2π)^3, f_2^+ = -1/2(2π)^3(πγ)sin(2γ Ht + ψ), f_3^+ = -1/2(2π)^3(3/2γ(πγ) +m/Hγ(πγ)cos(2γ Ht+ψ) ), where the phase ψ is given by ψ = 2(Γ(iγ))-arctan(2γ/3)-2γln(p/2H). Using the same particle definition as previously, we find n_k = m/2Hγ(1+Hγ/m)e^-2πγ/1-e^-2πγ +m/2Hγ(1-Hγ/m)1/1-e^-2πγ +3/2γe^-πγ/1-e^-2πγcos(2γ Ht+ψ). The first row of this expression gives a similar contribution as in <cit.>, where the particle number for the Bunch-Davies in-vacuum relative to the asymptotic adiabatic out vacuum was found to be e^-2πγ/(1-e^-2πγ) [ Note that the precise value of γ for a specified mass m is slightly different in Ref. <cit.> due to the conformal coupling used there.]. This result agrees with the first line in (<ref>) to leading order in the m≫ H limit, where both reduce to e^-2π m/H. However, due to the prefactors and the following rows, the particle number defined here differs from the adiabatic result in <cit.>. § RELATION TO THE ADIABATIC PARTICLE NUMBER To more clearly see the difference between n_k and the commonly used adiabatic definitions, we now write the adiabatic particle numbers in terms of the kinetic phase-space functions. For this purpose, we first define the adiabatic mode functions f_*k = e^i*k*x/√(2Va^3W_k(t))e^-iΘ_k(t), where Θ_k = ∫^t t' W_k(t'), is the adiabatic phase <cit.>. If we would assume that the exact scalar field has the same form as these mode functions, the equations of motion for the field could then be written as a differential equation for W_k. Expanding this equation in powers of time derivatives acting on the scale factor, we would then obtain an expression for W_k order by order. This expansion procedure is usually referred to as the adiabatic expansion of the field. However, if we do not require that the scalar field has this precise form in terms of W_k, it is not necessary to require that the mode functions (<ref>) satisfy the equations of motion. Instead, these modes will here rather be thought of as serving as a basis that we can compare the exact mode functions to, without assuming that the basis modes satisfy the dynamical equations. The exact mode functions can then be written in terms of the basis modes through a time dependent Bogoliubov transformation f_*k = α_k(t)f_*k + β_k(t)f_-*k^*. We then define a function V_k(t) by requiring that <cit.> ḟ_*k = (-iW_k + V_k/2-3H/2)α_k f_*k   + (iW_k + V_k/2-3H/2)β_k f_-*k^*. Both W_k(t) and V_k(t) are here assumed to be real. Since the adiabatic basis functions are not required to satisfy the equations of motion, there is some freedom in choosing the functions W_k(t) and V_k(t) that define the basis. A physically motivated choice is, however, to choose W_k and V_k to match divergences in the adiabatic expansion of the exact mode functions when calculating the energy-momentum tensor <cit.>. Writing the phase-space functions in terms of W_k, V_k, α_k, β_k, and Θ_k, we see that f_1^+ = 1/2(2π)^3ω_p/2W_k[ (1+A_w^2/ω_p^2)(1+2β_k^2) + 2α_k^*β_ke^2iΘ_k(1+A_w^2e^2iδ_w/ω_p^2)],   f_1^- = 1/2(2π)^3,   f_2^+ = 1/2(2π)^3ω_p/2W_k[ (1-A_w^2/ω_p^2)(1+2β_k^2) + 2α_k^*β_ke^2iΘ_k(1-A_w^2e^2iδ_w/ω_p^2)],   f_3^+ = 1/2(2π)^31/W_k[(V_k/2-3H/2)(1+2β_k^2) +2A_wα_k^*β_ke^2iΘ_ke^iδ_w], where A_w = iW_k + V_k/2-3H/2, δ_w = (iW_k + V_k/2-3H/2), and k^i = δ^ijp_j. 
Note that we can write α_k^*β_ke^2iΘ_k(1±A_w^2e^2iδ_w/ω_p^2) = ℛ_k(t)(1 ∓4W_k^2 - (V_k-3H)^2/4ω_p^2) ±ℐ_k(t)W_k(V_k-3H)/ω_p^2, A_wα_k^*β_ke^2iΘ_ke^iδ_w =ℛ_k(t)(V_k/2-3H/2) + W_kℐ_k(t), where ℛ_k(t) ≡α_kβ_k^*e^-2iΘ_k = α_k^*β_ke^2iΘ_k,  ℐ_k(t) ≡α_kβ_k^*e^-2iΘ_k = -α_k^*β_ke^2iΘ_k, are oscillatory quantum interference functions <cit.>. On combining some of the phase-space functions, we can extract a particle number by noting that ℛ_k = (2π)^3W_k/ω_p(f_1^+ +f_2)-β_k^2 -1/2, ℐ_k = (2π)^3(f_3-(V_k/2-3H/2)f_1^+ + f_2/ω_p), so that β_k^2 = (2π)^3ω_p/2W_k(2f_1^+ -(f_1^+ +f_2)(1 - A_w^2/ω_p^2) - f_3(V_k-3H)/ω_p) - 1/2≡𝒩_k, giving a definition of the adiabatic particle number 𝒩_k in terms of the phase-space functions, W_k, and V_k. Depending on how W_k and V_k are chosen, and up to which order they match the adiabatic expansion of the exact solutions, we get different particle numbers. In the following we will use the collective term adiabatic particle numbers for all 𝒩_k obtained on choosing W_k and V_k to match the adiabatic expansion up to some order, but it should be noted that this term is in some references reserved for the particle number to lowest adiabatic order. To zeroth order, with W_k = ω_p, V_k = 0, and neglecting the term explicitly involving H, we see that 𝒩_k reduces to n_k. Hence n_k can be seen as a zeroth order adiabatic particle number. To see the effect of the relation between the phase-space functions and the adiabatic particle number, we can consider the late time de Sitter case. For this purpose we use adiabatic functions that are correct up to first adiabatic order W_k^(0) = √(ω_p^2-9H^2/4),   V_k^(1) = -ω̇_p/ω_p = H(1-m^2/ω_p^2), which are similar to functions used in <cit.>. Inserting these into (<ref>) in the ζ→ 0 limit and using (<ref>)–(<ref>) then gives 𝒩_k^(1) = (2π)^3m/γ H(f_1^+ + 3H/2mf_3 )- 1/2 = 1/2(πγ)- 1/2, where the superscript on 𝒩_k denotes that this adiabatic particle number was obtained from (<ref>) with the choice (<ref>)–(<ref>) of W_k and V_k. Thus we see that, for this choice of W_k and V_k, the oscillations in the phase-space functions cancel, and we are left with a result of the same form as in <cit.>. § NUMERICAL RESULTS Having described the connection between the adiabatic particle numbers 𝒩_k and the particle number n_k given in terms of f_1^+, we proceed with a numerical investigation of how their dynamics differ in practice for some prescribed scale factor profiles and initial conditions. §.§ Prescribed profiles The scale factor profiles we will consider are those for de Sitter inflation, de Sitter inflation with a cut-off at time t_c, a finite H pulse, and a dust cosmology. These are represented by a(t) = e^Ht, a(t) = a_Nexp{At/2 + AW/2π(ln((t - t_c)^2 + W^2) - 2(t - t_c)/Warctan(t - t_c/W)) }, a(t) = exp{2WAarctan(exp(t-t_p/W))}, a(t) = t^2/3, respectively, where (<ref>) and (<ref>) correspond to H(t) = A(1/2-1/πarctan(t-t_c/W)), H(t) = A(t-t_p/W), and where A, W, t_c, t_p, and a_N are constants. H is constant in the de Sitter case, and the factor a_N in the cut-off case normalizes a to unity at t=0. §.§ Initial conditions The initial conditions for the phase-space functions are set to correspond to an initial adiabatic vacuum. This initial data is obtained by setting β_k=0 in Eqs. (<ref>)–(<ref>), which gives f_1^+ = 1/2(2π)^3ω_p/2W_k(1+A_w^2/ω_p^2),   f_1^- = 1/2(2π)^3,   f_2^+ = 1/2(2π)^3ω_p/2W_k(1-A_w^2/ω_p^2),   f_3^+ = 1/2(2π)^31/W_k(V_k/2-3H/2), where all quantities should be evaluated at the initial time. 
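As a concrete illustration of how simple these equations are to solve in practice, the following Python sketch integrates the spatially homogeneous system for f_1^+, f_2, and f_3 for a single comoving momentum in a pure de Sitter expansion, starting from a first-order adiabatic vacuum, and reads off n_k = (2π)^3 f_1^+ - 1/2. This is only a schematic reconstruction under stated assumptions (ħ = 1, comoving time, illustrative parameter values chosen here); the runs reported below instead use the fourth-order adiabatic initial data specified next, and the solver settings here are not those used for the figures.

import numpy as np
from scipy.integrate import solve_ivp

H0, m, p = 1.0, 2.0, 1.0                      # expansion rate, mass, comoving momentum (illustrative)
f0 = 1.0 / (2.0 * (2.0 * np.pi)**3)           # common normalization 1/(2(2*pi)^3); f_1^- stays fixed at f0

a = lambda t: np.exp(H0 * t)                  # de Sitter scale factor
H = lambda t: H0                              # Hubble rate
omega = lambda t: np.sqrt(m**2 + (p / a(t))**2)

def rhs(t, y):
    # Homogeneous evolution equations for (f_1^+, f_2, f_3) in comoving time with hbar = 1.
    f1p, f2, f3 = y
    c = H(t) * (2.0 + m**2 / omega(t)**2)
    return [c * f2, c * f1p + 2.0 * omega(t) * f3, -2.0 * omega(t) * f2]

# First-order adiabatic vacuum initial data (beta_k = 0), with
# W_k = sqrt(omega_p^2 - 9H^2/4) and V_k = H(1 - m^2/omega_p^2).
t0, w, h = 0.0, omega(0.0), H(0.0)
Wk = np.sqrt(w**2 - 9.0 * h**2 / 4.0)
Vk = h * (1.0 - m**2 / w**2)
Aw2 = Wk**2 + (Vk / 2.0 - 3.0 * h / 2.0)**2   # |A_w|^2
y0 = [f0 * w / (2 * Wk) * (1 + Aw2 / w**2),   # f_1^+
      f0 * w / (2 * Wk) * (1 - Aw2 / w**2),   # f_2
      f0 / Wk * (Vk / 2.0 - 3.0 * h / 2.0)]   # f_3

sol = solve_ivp(rhs, (t0, 8.0), y0, rtol=1e-10, atol=1e-14, dense_output=True)
n_k = (2.0 * np.pi)**3 * sol.y[0] - 0.5       # n_k(t) = (2 pi)^3 f_1^+ - 1/2

The tight tolerances are chosen because f_2 and f_3 oscillate at a frequency of order 2ω_p; an adiabatic number such as 𝒩_k^(1) can then be formed from the same solution via the relation given above.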
For the adiabatic functions appearing in the initial data, we use a set that is correct up to fourth adiabatic order W_k = ω_p+Rξ/2ω_p -m^2(3H^2+Ḣ)/4ω_p^3+5H^2m^4/8ω_p^5   -R^2ξ^2/8ω_p^3-ξ(6RH^2+5ṘH+R̈+2ḢR)/8ω_p^3   +m^2(60H^4+86H^2Ḣ+15ḦH+10Ḣ^2+⃛H)/16ω_p^5   +m^2ξ(19RH^2+5ṘH+3ḢR)/8ω_p^5 -25H^2Rm^4ξ/16ω_p^7   -m^4(507H^4+394H^2Ḣ+28ḦH+19Ḣ^2)/32ω_p^7   +221H^2m^6(3H^2+Ḣ)/32ω_p^9-1105H^4m^8/128ω_p^11   +√(ω_p^2-9H^2/4) - ω_p(1-9H^2/8ω_p^2-81H^4/128ω_p^4), V_k =H(1-m^2/ω_p^2) -ξ(Ṙ+2HR)/2ω_p^2   + m^2(12H^3+10ḢH+Ḧ)/4ω_p^4+HRm^2ξ/ω_p^4   +15H^3m^6/4ω_p^8 -9Hm^4(3H^2+Ḣ)/4ω_p^6, where R = 6((ȧ/a)^2 + ä/a) = 6(Ḣ+2H^2), and ξ=-1/6. This choice is based on a fourth order result presented in Ref. <cit.>, but we have added the square root and the last three terms in W_k [We have also corrected some apparent misprints in Ref. <cit.>. For instance, our V_k should here correspond to the expression for -Ẇ_k^(2)/W_k^(2) in Ref. <cit.>, but some of the numerical coefficients are different. We have also corrected for a missing m^2 in W_k ]. By construction, these terms only modify the expression with terms of adiabatic order six and higher, maintaining a W_k that is correct up to fourth order. The reason for the addition is to avoid introducing unwanted oscillations in the late-time de Sitter case when calculating the adiabatic particle number through (<ref>). §.§ Numerical particle numbers In Fig. <ref> we show particle numbers for de Sitter inflation, the pulse, and the dust cosmology when starting with an initial fourth order adiabatic vacuum. The adiabatic particle number obtained by using (<ref>)–(<ref>) in (<ref>) is denoted by 𝒩_k^(1), while 𝒩_k^(4) is the number found when using (<ref>)–(<ref>). In the de Sitter case we see that each mode has a distinct creation event, with higher momenta being created later. After being created, the adiabatic paritcle numbers stabilize without any oscillations. This was seen previously in <cit.>. However, the particle number n_k continues to oscillate, and goes up to much larger values than the adiabatic counterparts. Looking at the pulse and dust cases, we see that the particle creation is concentrated to smaller momenta, with the creation being centered in time around the maximum of the pulse and at the beginning of the dust evolution respectively. When the expansion subsides and tends to zero, the difference between n_k and the adiabatic particle numbers becomes smaller, and the oscillations in n_k decrease in amplitude. To more clearly see where the difference in the particle numbers originate from, we can consider the case when the de Sitter inflation is cut off at some time t_c. This is shown in Fig. <ref> for different values of the size of the transition region W and cut-off time t_c. We have here chosen to omit the fourth order adiabatic particle number, as this number goes through very large swings in the transition region. These are due to the derivatives of the Hubble parameter becoming very large for small W. In Fig. <ref>, we see that the end result will depend on how fast the cut-off is chosen to be and where it is centered. If the cut-off is very fast, corresponding to a small transition region W, 𝒩_k^(1) shoots up to match n_k. In turn, n_k stays at about the same value as before the cut-off, but without any oscillations after the cut-off. 
If we instead make the transition region larger, we see that n_k will decrease down towards 𝒩_k^(1), but the asymptotic result does not perfectly match the 𝒩_k^(1) plateau before the switch, at least for the chosen W. The precise value at which the numbers stabilize is also found to depend on where the cut-off is centered. This can be seen in Fig. <ref>b, where the placement of t_c in relation to the oscillations in n_k lead to different asymptotic particle numbers. The asymptotic numbers are displayed as markers in the figure, where a marker at time t corresponds to the asymptotic particle number obtained on choosing t_c = t for that specific value of t. As seen in the figure, the particle numbers obtained asymptotically after a switch-off at time t follow the same shape as n_k(t) calculated without the switch, and approach n_k(t) as the transition region is made smaller. Based on these observations, n_k can be interpreted as the particle number that would be obtained if the expansion rate is very rapidly switched off, whereas 𝒩_k^(1) is closer to the value obtained asymptotically for a slow adiabatic switch-off. The interpretation of n_k can also be seen through the differential equation (<ref>), where quickly switching off H would result in f_1^+, and hence n_k, becoming frozen in at the value it had just before the switch. Since spacetime is Minkowski after the switch, this value must directly correspond to the particle number through (<ref>). With this interpretation, n_k behaves in a similar fashion as the adiabatic particle number studied in <cit.> in the context of flat spacetime QED. The term adiabatic particle number was there used to mean the particle number relative to instantaneous eigenstates of the Hamiltonian, corresponding to a zeroth order adiabatic particle number in our terminology. Since n_k can also be seen as a zeroth order adiabatic particle number, and furthermore corresponds to a Bogoliubov transformation that diagonalizes the Hamiltonian density <cit.>, the similarities between n_k and the particle number in <cit.> were expected. A similar interpretation was also made in a kinetic QED context using the Wigner formalism in <cit.>. § REGULARIZATION In flat spacetime QED, the total particle density after the rapid switch-off gives a finite result when integrated over the momenta, which strengthens the interpretation of the particle number relative to instantaneous Hamiltonian eigenstates as something potentially accessible <cit.>. However, as hinted at in <cit.>, in the gravitational scenario we are faced with the problem that n_k in general needs to be regularized to give a finite result when integrated over the momentum space. In cosmology, this is often done using an adiabatic subtraction scheme. To regularize the energy density, ρ = 4π/a^3∫pp^2ω_p f_1^+, we generally have to subtract terms up to fourth order in the adiabatic expansion. To deduce the subtractions needed for the particle number n_k, which is given in terms of f_1^+, it suffices to look at this subtraction up to second order <cit.>, ρ -[^(0-2)]ρ = 1/a^3∫p/(2π)^34π p^2ω_p [(2π)^3f_1^+ - 1/2 -H^2/4ω_p^2-m^2H^2/4ω_p^4 -m^4H^2/16ω_p^6], since the fourth order divergences become finite when dividing with an extra ω_p. The first two terms inside the bracket combine to form the bare n_k defined earlier. As for the third term, looking at the massless case we see that this term precisely subtracts the divergent value of n_k that we found for the de Sitter spacetime earlier. 
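As a short check of this last statement, combining the massless de Sitter result n_k = H^2a^2/4p^2 quoted earlier with the form of ω_p for m = 0 gives
\[
m = 0:\qquad \omega_p = \frac{p}{a}
\quad\Longrightarrow\quad
\frac{H^2}{4\omega_p^2} = \frac{H^2 a^2}{4p^2} = n_k ,
\]
so the subtracted particle number indeed vanishes identically for the massless field in de Sitter spacetime.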
Finally, the last two terms would correspond to finite second order subtractions on the particle number level. The general necessity of regularizing n_k together with its interpretation as the particle number obtained after a rapid switch-off implies that, if, for whatever reason, the expansion of the universe could be suddenly switched off, that would in general lead to the creation of an infinite particle density. Hence, even in theory, the potential accessibility of this particle number is questionable. However, the interpretation of n_k as the particle number that would be obtained after the switch is still valid. § CONCLUSIONS We have shown how the quantum kinetic formalism from Ref. <cit.> can be used to study particle production in cosmology. Thinking in terms of a hypothetical switch-off in the cosmological expansion rate, we have given a clear interpretation of a key particle definition, n_k, as the number of particles that would be obtained after the switch. However, when working in an expanding universe, the total number of particles that we obtain this way turns out to be infinite and therefore does not correspond to physically accessible particles. Nonetheless, the interpretation of this particle number is still valid and gives a clear meaning to n_k. In conclusion, we have found that the quantum kinetic approach has many merits. The phase-space functions have rather intuitive interpretations in terms of distribution functions, and the equations describing how they evolve are simple to solve, at least in the homogeneous limit. Studying the solutions, we were also able to quickly arrive at a precise interpretation of a key particle number, showing that the quantum kinetic formalism can help clarify certain definitions in a more direct way than other approaches. Due to the generality of the quantum kinetic approach, our considerations can also be extended systematically to include spatial dependencies and backreaction. Thus, this framework provides a promising path to study the production of particles, such as those possibly constituting dark matter, in complex scenarios while still staying close to physical interpretations. The author would like to thank Gert Brodin, Greger Torgrimsson, and Michael Bradley for helpful discussions.
http://arxiv.org/abs/2406.19072v1
20240627104151
Scatterer Recognition from LiDAR Point Clouds for Environment-Embedded Vehicular Channel Modeling via Synesthesia of Machines
[ "Ziwei Huang", "Lu Bai", "Zengrui Han", "Xiang Cheng" ]
eess.SP
[ "eess.SP" ]
Scatterer Recognition from LiDAR Point Clouds for Environment-Embedded Vehicular Channel Modeling via Synesthesia of Machines Ziwei Huang, Member, IEEE, Lu Bai, Member, IEEE, Zengrui Han, Graduate Student Member, IEEE, and Xiang Cheng, Fellow, IEEE Z. Huang, Z. Han, and X. Cheng are with the State Key Laboratory of Advanced Optical Communication Systems and Networks, School of Electronics, Peking University, Beijing, 100871, P. R. China (email: ziweihuang@pku.edu.cn, zengrui701@gmail.com, xiangcheng@pku.edu.cn). L. Bai is with the Joint SDU-NTU Centre for Artificial Intelligence Research (C-FAIR), Shandong University, Jinan, 250101, P. R. China (e-mail: lubai@sdu.edu.cn). July 1, 2024 ======================================== § ABSTRACT In this paper, a novel environment-embedded vehicular channel model is proposed by scatterer recognition from light detection and ranging (LiDAR) point clouds via Synesthesia of Machines (SoM). To provide a robust data foundation, a new intelligent sensing-communication integration dataset in vehicular urban scenarios is constructed. Based on the constructed dataset, the complex SoM mechanism, i.e., mapping relationship between scatterers in electromagnetic space and LiDAR point clouds in physical environment, is explored via multilayer perceptron (MLP) with electromagnetic propagation mechanism. By using LiDAR point clouds to implement scatterer recognition, channel non-stationarity and consistency are modeled in an environment-embedded manner. Using ray-tracing (RT)-based results as the ground truth, the scatterer recognition accuracy exceeds 90%.
The accuracy of the proposed model is further verified by the close fit between simulation results and RT results. Intelligent sensing-communication integration, Synesthesia of Machines (SoM), environment-embedded vehicular channel modeling, LiDAR point clouds, scatterer recognition. § INTRODUCTION To support precise localization sensing and efficient communication link establishment for intelligent vehicles, it is essential to achieve in-depth understanding of the surrounding environment and high-precision vehicular channel modeling. However, widely used approaches, which solely utilize radio frequency (RF) communication information, are difficult to achieve high-precision vehicular channel modeling, and thus cannot support the aforementioned application related to intelligent vehicles. Fortunately, intelligent vehicles are equipped with multi-modal devices, which can acquire surrounding environmental information and further assist in vehicular channel modeling <cit.>. To adequately utilize the multi-modal information in the surrounding environment, inspired by human synesthesia, a novel concept, i.e., Synesthesia of Machines (SoM), is proposed <cit.>. SoM aims to achieve intelligent integration of communications and multi-modal sensing via artificial neural networks. As the cornerstone of SoM research, the exploration of SoM mechanism, i.e., mapping relationship between physical environment and electromagnetic space, is essential. Based on the SoM mechanism, a high-precision vehicular channel model can be constructed in an environment-embedded manner. Considering the necessity of exploring SoM mechanism, i.e., mapping relationship, some preliminary work has been conducted. The authors in <cit.> proposed an environment reconstruction method based on LiDAR point clouds, and further explored the mapping relationship between LiDAR point clouds and path loss. However, the mapping relationship explored in <cit.> was limited to sensing and channel large-scale fading. As stated in <cit.>, multipath fading, i.e., channel small-scale fading, is a significant factor, which affects communication system design and presents more challenges compared to channel large-scale fading. To intuitively characterize multipath fading, the concept of scatterers is introduced to model the interaction between radio waves and objects <cit.>. Currently, extensive vehicular channel measurements <cit.>–<cit.> and standardized channel models <cit.> have been conducted to explore spatial attributes of scatterers, including their numbers and positions. By characterizing the spatial attributes of scatterers, channel non-stationarity and consistency can be modeled through birth-death (BD) process and visibility region (VR) <cit.>–<cit.>. Based on the Markov chain, the BD process characterizes the mathematical relationship for the variation of the scatterer number, thus capturing channel non-stationarity. Based on the geometry, the VR characterizes the spatial relationship for the smooth evolution of scatterers, thus capturing channel non-stationarity and consistency. Nevertheless, the aforementioned two methods focus on modeling the scatterer variation/evolution statistically. In this case, since the mapping relationship between objects in physical environment and scatterers in electromagnetic space is ignored, channel non-stationarity and consistency cannot be accurately captured. This results in the inability to model the tight interplay between physical environment and electromagnetic space. 
Although existing vehicular channel models preliminarily capture the variation/evolution of scatterers and channel non-stationarity/consistency by utilizing BD process and VR method, they cannot meet the high-precision requirements of vehicular channel models. A high-precision vehicular channel model, which can capture channel non-stationarity and consistency in an environment-embedded manner, i.e., environment-channel non-stationarity and consistency, is urgently required. To fill this gap, we propose a novel environment-embedded vehicular channel model via SoM. By using AirSim <cit.> and Wireless InSite <cit.>, a new intelligent sensing-communication integration dataset in vehicular scenarios with low, medium, and high vehicular traffic densities (VTDs) is constructed. The LiDAR point cloud is intelligently processed by the density-based spatial clustering of applications with noise (DBSCAN) clustering algorithm to extract physical environment features, which are further aligned with electromagnetic space. By leveraging a typical artificial neural network, i.e., multilayer perceptron (MLP), with electromagnetic propagation mechanisms, the complex SoM mechanism, i.e., mapping relationship between LiDAR point clouds in physical environment and scatterers in electromagnetic space, is investigated for the first time. To model environment-channel non-stationarity and consistency, physical environment features via LiDAR point clouds are utilized for scatterer recognition, thus modeling spatial attributes, i.e., numbers and positions, of scatterers in an environment-embedded manner. Using ray-tracing (RT)-based results as the ground truth, simulation results show that the scatterer recognition accuracy exceeds 90% in each VTD condition. The accuracy of the proposed model is also verified by the close fit between simulation results and RT results. § MAPPING RELATIONSHIP EXPLORATION: SCATTERER RECOGNITION FROM LIDAR POINT CLOUDS §.§ High-Fidelity Dataset Construction By using AirSim <cit.> and Wireless InSite <cit.>, we construct a new dataset in the vehicular urban crossroad. To obtain high-fidelity LiDAR point clouds, simulation scenarios in AirSim are constructed via the advanced three-dimensional (3D) modeling software with the superior rendering effect. To collect high-fidelity scatterers, Wireless InSite exploits RT technology based on geometrical optics and uniform theory of diffraction. Similar to our previous work in <cit.>, physical environment in AirSim and electromagnetic space in Wireless InSite further achieve in-depth integration and precise alignment. In AirSim, the LiDAR equipped on each vehicle has 16 channels, 10 Hz scanning frequency, and 240,000 points per second, where the upward and downward field of view (FoV) are 15^∘ and -25^∘, respectively. In Wireless InSite, the communication device equipped on each vehicle is operated at 28 GHz carrier frequency with 2 GHz bandwidth, where numbers of antennas at transmitter (Tx) and receiver (Rx) are L_T = L_R = 1. The heights of the car and the bus are 2 m and 3 m, respectively. Given the diversity of the dataset, Fig. <ref> demonstrates that we consider three VTD conditions, i.e., low, medium, and high, and three types of streets, i.e., vertical (x-axis), horizontal (y-axis), and crossing (xy-axis) streets. 
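For quick reference, the per-vehicle sensing and communication settings listed above can be gathered into a single structure, as in the Python sketch below; the key names are ours and are not AirSim or Wireless InSite configuration options. The transceiver links and snapshot counts built on top of this setup are detailed next.

DATASET_SETUP = {
    "lidar": {"channels": 16, "scan_rate_hz": 10, "points_per_second": 240_000,
              "fov_up_deg": 15, "fov_down_deg": -25},
    "radio": {"carrier_freq_ghz": 28, "bandwidth_ghz": 2, "tx_antennas": 1, "rx_antennas": 1},
    "vehicle_height_m": {"car": 2, "bus": 3},
    "vtd_conditions": ["low", "medium", "high"],
    "street_types": ["vertical", "horizontal", "crossing"],
}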
Each type of street has 6 different transceiver links, e.g., Car5 (Tx) and Car7 (Rx) at the horizontal street, Car1 (Tx) and Car2 (Rx) at the vertical street, and Car1 (Tx) and Car8 (Rx) at the crossing street, thus containing line-of-sight (LoS) and non-LoS (NLoS) conditions. Each transceiver link has 1500 snapshots and each VTD condition has the same transceiver link. There are 27,000 snapshots at each VTD condition. Overall, the constructed dataset contains 81,000 snapshots with high-fidelity LiDAR point clouds and scatterer information. §.§ Mapping Relationship Exploration For clarity, a step list illustrating the exploration of the SoM mechanism, i.e., mapping relationship between physical environment and electromagnetic space, is presented below. Step 1: Unlike the monostatic sensing, Tx and Rx have different positions in vehicular communications. Therefore, LiDAR point clouds at Tx and Rx can be concatenated to obtain physical environment, as shown in Fig. <ref>(a). Step 2: To reduce data redundancy, the ground point is removed by the pre-processing of concatenated LiDAR point clouds, which are further downsampled, as shown in Fig. <ref>(b). Step 3: A typical clustering algorithm in machine learning, i.e., DBSCAN, is leveraged to efficiently obtain physical environment features. For clarity, Fig. <ref>(c) shows the bird's-eye view (BEV) of LiDAR point clouds, which contain 18 clustering groups. Step 4: Since the in-depth integration and precise alignment are conducted in the constructed dataset, physical environment and electromagnetic space can be matched in the same world coordinate system. In Fig. <ref>(d), scatterers are located at the clustering group. According to the RT mechanism, paths are significantly affected by the transmission distance and angle. To calculate the size and orientation of each clustering group, its circumscribed cuboid is obtained. The height of circumscribed cuboid is the same as that of clustering group. The circumscribed cuboid projection is the minimum perimeter bounding rectangle of the clustering group projection. Step 5: Considering the advantage of dealing with the task of numerical inputs and numerical outputs, MLP is exploited to achieve scatterer recognition from LiDAR point clouds, as shown in Fig. <ref>(e). The input is physical environment feature extracted by LiDAR point clouds, including the length, width, height, center point, and orientation vector of circumscribed cuboid and the position of transceiver. The output is scatterer number at each clustering group. For example, in Fig. <ref>, the output is a matrix with dimensions of 18 by 1. As a result, with the help of MLP, the number of scatterers in electromagnetic space at each clustering group of LiDAR point clouds in physical environment can be obtained for the first time. Step 6: To further enhance the interpretability of network output, the propagation mechanism is considered via the VR method. Similar to our previous work in <cit.>, the scatterers are divided into dynamic and static scatterers, which are further assigned to VR. Fig. <ref>(f) shows the VR assigned to static/dynamic scatterers, i.e., the 3D ellipsoid with the transceiver as the focus, where major axis, minor axis, and focal length are 2a^sta/dyn(t), 2b^sta/dyn(t), and 2c^sta/dyn(t), respectively. VR-related parameters are accurately obtained via RT-based channel data <cit.>. Finally, scatterers recognized through LiDAR point clouds, which are located outside VR, are deleted and the output number is also changed. 
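A schematic sketch of Steps 1–6 is given below for concreteness. It is not the authors' implementation: the function and variable names are ours, the circumscribed cuboid is approximated by an axis-aligned box with a PCA-based orientation estimate instead of the minimum-perimeter bounding rectangle, the toy MLPRegressor stands in for the actual network and hyper-parameters (Table <ref>), and the VR test assumes a prolate spheroid with the transceiver as foci.

import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.neural_network import MLPRegressor

def cluster_features(points, tx, rx, eps=1.5, min_samples=20):
    """points: (N, 3) concatenated Tx/Rx LiDAR returns after ground removal and downsampling."""
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(points)
    feats = []
    for lab in sorted(set(labels) - {-1}):                 # label -1 marks noise points
        cluster = points[labels == lab]
        lo, hi = cluster.min(axis=0), cluster.max(axis=0)
        length, width, height = hi - lo                    # axis-aligned stand-in for the cuboid
        center = (lo + hi) / 2.0
        # Orientation estimate: principal direction of the BEV (xy) projection of the cluster.
        xy = cluster[:, :2] - cluster[:, :2].mean(axis=0)
        _, vecs = np.linalg.eigh(np.cov(xy.T))
        orientation = vecs[:, -1]                          # unit vector along the largest spread
        feats.append(np.concatenate([[length, width, height], center, orientation, tx, rx]))
    return labels, np.asarray(feats)

# Regression from physical-environment features to the scatterer number per clustering group,
# trained against ray-tracing ground truth (training loop omitted here).
mlp = MLPRegressor(hidden_layer_sizes=(128, 64), max_iter=2000)

def keep_inside_vr(scatterers, tx, rx, a_axis):
    """VR filter: keep scatterers whose summed distance to Tx and Rx is at most the major axis 2a."""
    d = np.linalg.norm(scatterers - tx, axis=1) + np.linalg.norm(scatterers - rx, axis=1)
    return scatterers[d <= 2.0 * a_axis]

The transceiver positions enter the feature vector because, according to the RT mechanism, the propagation paths depend strongly on the transmission distance and angle.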
By using the SoM mechanism, i.e., mapping relationship, scatterer recognition from LiDAR point clouds is achieved. This facilitates environment-embedded vehicular channel modeling with accurate channel parameters and the capturing of environment-channel non-stationarity and consistency. § ENVIRONMENT-EMBEDDED CHANNEL MODELING In this section, an environment-embedded vehicular channel model by scatterer recognition from LiDAR point clouds is proposed. The framework of the proposed model is similar to our previous work in <cit.>. The channel impulse response (CIR) is given as (<ref>). Due to page limitations, the definition of parameters in (<ref>) is omitted, which can be found in <cit.>. Channel non-stationarity and consistency are the typical channel characteristic and feature, which can be captured via BD process and VR method <cit.>–<cit.>. However, since the BD process and VR method model the mathematical relationship and spatial relationship for the scatterer variation, respectively, the tight interplay between physical environment and channel non-stationarity/consistency cannot be captured. To overcome this limitation and support applications related to intelligent vehicles, by exploiting the complex mapping relationship, the proposed approach achieves scatterer recognition from LiDAR point clouds, and thus captures environment-channel non-stationarity and consistency. For clarity, Fig. <ref> illustrates the difference between the BD process, the VR method, and the proposed approach. For the proposed approach, scatterers recognized by LiDAR point clouds essentially correspond to the vehicle, tree, and building in the proposed approach, which is different from the BD process and the VR method. In the proposed approach, at the initial time, the scatterer recognition is implemented by LiDAR point clouds based on the mapping relationship. Similar to <cit.>, scatterers are divided into static and dynamic scatterers, which are clustered into static and dynamic clusters. Unlike <cit.>, scatterers recognized by LiDAR point clouds correspond to certain objects in physical environment. This leads to accurate channel parameters, including number N_s/N_c/M_s/M_c, delay τ^sta_i,n_i/τ^dyn_j,n_j, and angle α^sta_i,n_i/β^sta_i,n_i/α^dyn_j,n_j/β^dyn_j,n_j, thus facilitating environment-embedded vehicular channel modeling. As time evolves and physical environment changes, there are different LiDAR point clouds at different time instants. Through the scatterer recognition from LiDAR point clouds and the capturing of tight interplay between physical environment and electromagnetic space, scatterers change with LiDAR point clouds. As a result, environment-channel non-stationarity in the time domain is mimicked. Furthermore, LiDAR point clouds in physical environment at adjacent time instants are similar. In this case, recognized scatterers from LiDAR point clouds are also similar at adjacent time instants, thus capturing environment-channel consistency in the time domain. To further model environment-channel non-stationarity and consistency in the frequency domain, a frequency-dependent factor (f/f_c)^χ is introduced to the time-varying transfer function (TVTF). The TVTF can be obtained by utilizing the Fourier transform to CIR, which is derived based on the scatterer recognition from LiDAR point clouds with accurate number and position parameters, in respect of delay. 
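To illustrate where the frequency-dependent factor enters, the toy Python snippet below builds a generic multipath transfer function from a handful of placeholder path delays and complex gains. It is not the CIR (<ref>) of the proposed model, whose parameters are defined in <cit.>; the delays, gains, and the value of χ here are placeholders for illustration only.

import numpy as np

fc, chi = 28e9, 0.5                               # carrier frequency; chi is a placeholder value
delays = np.array([50e-9, 120e-9, 300e-9])        # per-scatterer path delays in seconds
gains = np.array([1.0, 0.4, 0.2]) * np.exp(1j * np.array([0.0, 1.2, -2.1]))

def tvtf(freqs):
    """Transfer function at one time instant: Fourier transform of the delay-domain CIR."""
    f = freqs[:, None]
    return ((f / fc) ** chi * gains[None, :] * np.exp(-2j * np.pi * f * delays[None, :])).sum(axis=1)

freqs = fc + np.linspace(-1e9, 1e9, 201)          # 2 GHz band around the 28 GHz carrier
H = tvtf(freqs)

The factor (f/f_c)^χ is what introduces the explicit frequency dependence used to model non-stationarity in the frequency domain.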
Therefore, through the scatterer recognition from LiDAR point clouds, environment-channel non-stationarity and consistency can be accurately captured, thus achieving high-precision environment-embedded vehicular channel modeling. § SIMULATION RESULTS AND ANALYSIS Detailed equipment parameters, e.g., scanning frequency, FoV, carrier frequency, and bandwidth, for LiDAR point clouds and scatterer acquisition are given in Section II-A. In neural network training, the hyper-parameter setting is listed in Table <ref>. The dataset is divided into the training set, validation set, and test set in the proportion of 3:1:1. In Figs. 4–6, the accuracy, error probability heat map, and number of scatterer recognition are given to demonstrate high-precision scatterer recognition. The scatterer recognition accuracy of the proposed approach is further compared with that of the existing random generation approach in Fig. 7. To validate the accuracy of the proposed model, the simulation result and the RT-based result are compared in Fig. 8. Fig. <ref> shows the scatterer recognition accuracy in each clustering group of LiDAR point clouds with different VTDs and streets. The scatterer recognition accuracy is computed by P = 1-N_error/N_all, where N_error is the sum of differences between the recognized scatterer number and the ground truth, and N_all is the sum of the ground truth. In Fig. <ref>, the scatterer recognition accuracy in the aforementioned nine conditions exceeds 90%, with an average value of 90.87%. Fig. <ref> illustrates the probability heat map of the scatterer recognition number error in the same nine conditions as Fig. <ref>. From Fig. <ref>, it can be seen that the cases where the recognized scatterer number differs from the ground truth by either 0 or 1 account for approximately 90% of the instances. Fig. <ref> compares the scatterer recognition number and the ground truth with different VTDs. Although there are many scatterers in the clustering groups, the scatterer recognition accuracy exceeds 90%. Because the high-VTD condition contains the largest number of scatterers, its recognition accuracy is the lowest. Fig. <ref> compares the scatterer recognition accuracy of the proposed approach and the random generation approach in <cit.> with different VTDs. In the random generation approach, the scatterer number in each clustering group is randomly generated according to the number distribution derived in <cit.>. The binary classification accuracy measures whether the presence of scatterers on each clustering group is correctly recognized, while the regression accuracy measures how accurately the scatterer number on each clustering group is recognized. The proposed approach achieves an accuracy improvement of over 29.13% compared to the random generation approach. Since the power delay profile (PDP) represents the power of the received multipath components, which can be described by scatterers, as a function of propagation delay, Fig. <ref> compares PDPs. The transceiver link is Car1 (Tx) and Car8 (Rx) at the crossing street with high VTD. Owing to the accurate scatterer recognition and the modeling of the tight interplay between physical environment and electromagnetic space, the simulation result based on the proposed model fits well with the RT-based result, where the PDP varies smoothly over time. Therefore, environment-channel non-stationarity and consistency are modeled. In terms of modeling accuracy, the proposed model outperforms the model in <cit.> based on the random generation approach.
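As a small worked example of the accuracy metric used throughout this section (with made-up counts, and reading the sum of differences as a sum of absolute differences):

import numpy as np

predicted = np.array([3, 0, 2, 5, 1])       # recognized scatterer number per clustering group
truth     = np.array([3, 1, 2, 4, 1])       # RT-based ground truth
N_error = np.abs(predicted - truth).sum()   # 1 + 1 = 2
N_all = truth.sum()                         # 11
P = 1.0 - N_error / N_all                   # ~0.818, i.e., 81.8% recognition accuracy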
§ CONCLUSIONS This paper has proposed a novel environment-embedded vehicular channel model via SoM, where the SoM mechanism, i.e., mapping relationship between physical environment and electromagnetic space, has been explored based on a new dataset. By leveraging LiDAR point clouds for scatterer recognition, environment-channel non-stationarity and consistency have been modeled. Simulation results have demonstrated that the proposed approach has achieved a scatterer recognition accuracy of over 90% and has exhibited an improvement of over 29.13% compared to the random generation approaches. By further capturing environment-channel non-stationarity and consistency, the accuracy of the proposed environment-embedded vehicular channel model has been validated. 29 LA-GBSM Z. Huang et al., “A LiDAR-aided channel model for vehicular intelligent sensing-communication integration,” available on arXiv, 2024. [Online]. Available: https://arxiv.org/abs/2403.14185. som X. Cheng et al., “Intelligent multi-modal sensing-communication integration: Synesthesia of Machines,” IEEE Commun. Surveys Tuts., vol. 26, no. 1, pp. 258–301, Firstquarter 2024. mapping1 A. Gupta, J. Du, D. Chizhik, R. A. Valenzuela, and M. Sellathurai, “Machine learning-based urban canyon path loss prediction using 28 GHz Manhattan measurements,” IEEE Trans. Antennas Propag., vol. 70, no. 6, pp. 4096–4111, Jun. 2022. survey1 N. Bui, et al., “A survey of anticipatory mobile networking: Context-based classification, prediction methodologies, and optimization techniques," IEEE Commun. Surveys Tuts., vol. 19, no. 3, pp. 1790–1821, Jul.–Sep. 2017. COST 2100 L. Liu et al., “The COST 2100 MIMO channel model,” IEEE Wireless Commun., vol. 19, no. 6, pp. 92–99, Dec. 2012. mea1 C. Huang et al., “Geometry-cluster-based stochastic MIMO model for vehicle-to-vehicle communications in street canyon scenarios,” IEEE Trans. Wireless Commun., vol. 20, no. 2, pp. 755–770, Feb. 2021. mea2 X. Cai, et al., “Hough-transform-based cluster identification and modeling for V2V channels based on measurements,” IEEE Trans. Veh. Technol., vol. 67, no. 5, pp. 3838–3852, May 2018. mea3 M. Yang et al., “A cluster-based three-dimensional channel model for vehicle-to-vehicle communications,” IEEE Trans. Veh. Technol., vol. 68, no. 6, pp. 5208–5220, Jun. 2019. 3GPP Technical Specification Group Radio Access Network; Study on Channel Model for Frequencies From 0.5 to 100 GHz (Release 14), Version 14.2.0, document TR 38.901, 3GPP, Sophia Antipolis, France, Sep. 2017. [Online]. Available: http://www.3gpp.org/DynaReport/ 38901.htm model1 L. Bai, Z. Huang, Y. Li, and X. Cheng, “A 3D cluster-based channel model for 5G and beyond vehicle-to-vehicle massive MIMO channels,” IEEE Trans. Veh. Technol., vol. 70, no. 9, pp. 8401–8414, Sep. 2021. model2 H. Chang et al., “A general 3-D nonstationary GBSM for underground vehicular channels," IEEE Trans. Antennas Propag., vol. 71, no. 2, pp. 1804–1819, Feb. 2023. AirSim S. Shah, D. Dey, C. Lovett, and A. Kapoor, “AirSim: High-fidelity visual and physical simulation for autonomous vehicles,” in Field and Service Robotics, M. Hutter and R. Siegwart, Eds. Cham, Switzerland: Springer, 2018, pp. 621–635. WI Remcom. Wireless InSite. [Online]. Available: https://www.remcom.com/wireless-insite-em-propagation-software [Publication date: Jan. 2017, Accessed date: Mar. 2022]. CC X. Cheng et al., “M^3SC: A generic dataset for mixed multi-modal (MMM) sensing and communication integration,” China Commun., vol. 20, no. 11, pp. 13–29, Nov. 2023.
http://arxiv.org/abs/2406.18321v1
20240626130235
MathOdyssey: Benchmarking Mathematical Problem-Solving Skills in Large Language Models Using Odyssey Math Data
[ "Meng Fang", "Xiangpeng Wan", "Fei Lu", "Fei Xing", "Kai Zou" ]
cs.CL
[ "cs.CL", "cs.AI" ]
Transition magnetic moment of Majorana neutrinos in the triplets next-to-minimal MSSM Zhao-Yang Zhang^1, Jin-Lei Yang^2,3[jlyang@hbu.edu.cn], Hai-Bin Zhang^2,3[hbzhang@hbu.edu.cn], Tai-Fu Feng^1,2,3,4[fengtf@hbu.edu.cn] Received 7 March 2024 / Accepted 23 May 2024 =========================================================================================================================================== § ABSTRACT Large language models (LLMs) have significantly advanced natural language understanding and demonstrated strong problem-solving abilities. Despite these successes, most LLMs still struggle with solving mathematical problems due to the intricate reasoning required. This paper investigates the mathematical problem-solving capabilities of LLMs using the newly developed “MathOdyssey” dataset. The dataset includes diverse mathematical problems at high school and university levels, created by experts from notable institutions to rigorously test LLMs in advanced problem-solving scenarios and cover a wider range of subject areas. By providing the MathOdyssey dataset as a resource to the AI community, we aim to contribute to the understanding and improvement of AI capabilities in complex mathematical problem-solving. We conduct benchmarking on open-source models, such as Llama-3 and DBRX-Instruct, and closed-source models from the GPT series and Gemini models. Our results indicate that while LLMs perform well on routine and moderately difficult tasks, they face significant challenges with Olympiad-level problems and complex university-level questions. Our analysis shows a narrowing performance gap between open-source and closed-source models, yet substantial challenges remain, particularly with the most demanding problems. This study highlights the ongoing need for research to enhance the mathematical reasoning of LLMs. The dataset, results, and code are publicly available.[<https://mathodyssey.github.io/>] § INTRODUCTION Large language models (LLMs) have demonstrated exceptional proficiency in mastering human language and handling mathematical problems, including typical routine math problems <cit.>. In recent years, several benchmarks related to mathematics have been proposed, such as the GSM8K dataset <cit.>, the MATH dataset <cit.> and so on. Recent LLMs and prompting approaches have addressed these problems with notable success <cit.>. For instance, GPT-4, using advanced prompting techniques <cit.>, has achieved more than a 90% success rate on GSM8K and 80% on MATH. These achievements indicate that LLMs possess remarkable capabilities in mathematical reasoning. The quest to improve LLMs' mathematical problem-solving abilities is not just a demonstration of technological advancement but a crucial step toward developing more general and capable artificial intelligence systems. On the one hand, this endeavor requires datasets that accurately measure and challenge the AI's mathematical reasoning beyond basic problems. Although their performance is high on datasets like GSM8K <cit.>, it remains uncertain how well they handle more complex mathematical challenges, such as those found in university-level courses and competitive high school mathematics. Performance may diminish significantly in these areas. This gap highlights the ongoing need for enhanced mathematical reasoning capabilities in AI, a critical area for assessing cognitive abilities akin to human intelligence. 
Moreover, a significant obstacle is that many existing datasets might have been included in the training phases of these models, potentially skewing performance metrics. Prominent examples include STEM-Q <cit.>, GSM8K <cit.>, and the MATH dataset <cit.>, which may no longer provide a true test of an LLM's mathematical capabilities. On the other hand, high-quality, expert-crafted original problems are scarce. For instance, a study by OpenAI <cit.> included only 105 such problems in high school and university-level science and math. To directly address these challenges, we introduce the “MathOdyssey” dataset, a rigorously curated collection of 387 mathematical problems for evaluating the general mathematical capacities of LLMs. See examples in Table <ref>. The MathOdyssey dataset is developed by the GAIC Math organization and features a spectrum of questions from Olympiad-level competitions, advanced high school curricula, and university-level mathematics. Mathematics professionals, including high-school educators, researchers, and university professors, crafted these problems under the invitation of the GAIC Math organization. Their involvement ensures the dataset not only supports advanced AGI research but also fosters necessary interdisciplinary collaboration. Furthermore, we open-source the MathOdyssey dataset to facilitate its use in evaluating other LLMs. The dataset has not been used for training by LLMs. We explore its utility in benchmarking the advanced mathematical reasoning abilities of LLMs. By ensuring the originality and confidentiality of the questions, we maintain the integrity and fairness of the assessments, providing a reliable tool for advancing research into artificial general intelligence. Our contributions are as follows: * We introduce a new mathematical challenge that provides different levels of mathematical problems and covers a wider range of subject areas. * We open source the MathOdyssey benchmark dataset, a meticulously curated collection of mathematical problems spanning various domains and levels, complete with natural language solutions. This dataset is specifically designed to probe the reasoning abilities of LLMs, offering a unique tool for assessing AI performance in complex mathematical reasoning. Each question has an objective answer serving as ‘ground-truth’, allowing for objective evaluation on the LLM outputs. In particular, the Open-Answer problems emphasize the importance of detailed reasoning and solution. * We conduct a comprehensive benchmark analysis using our dataset on both open-source and closed-source LLMs. Our findings reveal that while closed-source models currently lead, open-source models are rapidly catching up, highlighting the competitive landscape of LLM capabilities in mathematical problem-solving. § RELATED WORK Large Language Models for Mathematics. Applying large language models (LLMs) to mathematical problems has led to significant strides, though solving such problems remains challenging due to the need for highly complex and symbolic multi-step reasoning capabilities. Both GPT-3.5 and GPT-4 <cit.> have shown promising reasoning abilities for complex mathematical tasks, such as those in the MATH dataset <cit.>. However, the performance of open-source models, like Llama-1 and Llama-2 <cit.>, is still far from satisfactory in this domain. To enhance the mathematical problem-solving abilities of LLMs, prompt-based methods have also been developed <cit.>. 
These methods aim to improve reasoning and accuracy by guiding the models through structured prompts that help in breaking down complex problems into manageable steps. Mathematical Evaluation for Large Language Models. Evaluating the mathematical capacity of large language models (LLMs) is crucial. Benchmarks such as GSM8K <cit.>, which targets middle-school level mathematics, and MATH <cit.>, which focuses on high-school math competitions, have been widely used. For university-level problems, datasets like ProofNet <cit.> and OCWCourses <cit.> are prominent. Additionally, MiniF2F <cit.> and AlphaGeometry <cit.> provide Olympiad-level problems, while the SAT dataset <cit.> includes problems from the College Board SAT examination. These datasets have limitations, particularly at the undergraduate level and above, where they fall short in addressing graduate-level and competition-level difficulties <cit.>. To address this gap, we introduce the MathOdyssey dataset, a diverse collection of mathematical problems designed to serve as a rigorous benchmark for assessing both open-source and closed-source models. Table <ref> highlights the properties of MathOdyssey compared to relevant benchmarks, emphasizing the different levels and the diversity of subject areas and question types in our benchmark. This dataset spans a spectrum of difficulty levels, from high school to advanced university mathematics, highlighting the evolving capabilities and ongoing challenges in LLM mathematical problem-solving. § MATHODYSSEY To evaluate the mathematical reasoning abilities of LLMs, we create the MathOdyssey dataset, a rigorously curated collection designed by professionals from both universities and high schools. To ensure comprehensive evaluation and promote transparency, we have made the entire MathOdyssey dataset and benchmarking code publicly available. This allows other researchers to replicate our study, compare methods, and explore new approaches using the dataset. §.§ Data Collection Design Principle. The motivation behind the design of the MathOdyssey dataset is to establish a new benchmark representing the pinnacle of human intellectual achievement, encouraging researchers to push the boundaries of LLMs' mathematical reasoning capabilities. To realize this vision, we have curated challenges that epitomize comprehensive levels of math problems. Specifically, our benchmark includes: * Inclusion of diverse levels of math problems: Ensuring a comprehensive understanding and catering to various proficiency levels promotes a well-rounded mastery of mathematical concepts and problem-solving skills. This dataset offers a range of problems, starting from basic concepts and gradually increasing in difficulty to cover advanced topics. This allows for a thorough evaluation of AI capabilities across various levels of high school and university mathematics. * Inclusion of different subject area problems: Enhancing LLMs' mathematical proficiency by exposing them to a wide range of concepts and techniques, from foundational arithmetic to advanced topics such as algebra, number theory, geometry, combinatorics, and calculus. These diverse subject areas help identify LLMs' strengths and areas for improvement, encouraging the development of critical mathematical reasoning, problem-solving skills, and a deeper appreciation for the interconnected nature of mathematics. 
By integrating various mathematical disciplines, researchers can create a more engaging and comprehensive learning environment that prepares LLMs for complex real-world challenges in mathematics. * Provision of objective answers and detailed solutions: The objective answers serve as ‘ground-truth’, allowing for objective evaluation of the LLM outputs. In particular, the Open-Answer problems emphasize the importance of detailed reasoning and solution. Given the varying difficulty and subject areas of these problems, which may exceed comprehension without a specialized background in mathematics, each problem is accompanied by expertly crafted solutions detailing the reasoning steps involved. These solutions are useful for evaluation and can enhance the assessment of LLMs' reasoning processes. Human professionals. The dataset was created by human professionals to ensure high quality. Experts developed a wide range of mathematical problems for the MathOdyssey dataset, featuring a spectrum of questions from Olympiad-level competitions, advanced high school curricula, and university-level mathematics. Mathematics professionals, including high-school educators, university professors, and researchers, crafted these problems. Their involvement ensures the dataset not only supports advanced AGI research but also fosters necessary interdisciplinary collaboration. A typical problem in the MathOdyssey dataset comprises three components: the problem, the answer, and the reasoning, as detailed in Table <ref>. The problems are original and not sourced from previous datasets or textbooks. Each problem is accompanied by an answer and a detailed solution that explains the reasoning process used to derive the answer. After creation, the problems undergo independent review by a separate team of researchers with expertise in mathematics. This team assesses the problems and their solutions, eliminating any ambiguous or redundant responses to enhance the set's validity and reliability. This rigorous process guarantees the quality and dependability of the final problem set. §.§ Dataset Analysis To understand the properties of the MathOdyssey dataset, we analyze the questions and answers. Specifically, we explore (i) the difficulty of questions based on the type of reasoning required to answer them, (ii) the subject areas of the problems, and (iii) the diversity of answer types. Difficulty of questions. In the MathOdyssey dataset, each category is designed to evaluate different facets of mathematical reasoning and problem-solving capabilities, ranging from fundamental high school concepts to complex university-level theories, as summarized in Figure <ref>. This diverse dataset is structured into three distinct levels to challenge various aspects of mathematical knowledge: * Olympiad-level: It tests advanced problem-solving skills with questions in Algebra, Number Theory, Geometry, and Combinatorics. * High School: Broadening the scope, this category includes problems in Algebra, Geometry, and Pre-Calculus, covering a comprehensive range of high school math concepts. * University-level: Catering to higher education, this segment offers challenges in Linear and Abstract Algebra, Calculus and Analysis, Differential Equations, Probability, and Statistics, suitable for university students. The MathOdyssey dataset categorizes mathematical problems across different educational levels, helping to understand the distribution and scope of problems included in the dataset. 
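Before turning to the level-by-level statistics, the sketch below shows one way such a three-component record, together with the level, subject-area, and answer-type labels used in the analysis, could be stored. The field names and JSON layout are hypothetical and need not match the released files; the example content is taken from the worked problem in Appendix A.

```python
import json

# Hypothetical schema; the released MathOdyssey files may name these fields differently.
record = {
    "problem": "What are the solutions of the quadratic equation 15x^2 = 2x + 8?",
    "answer": "D",
    "reasoning": "Move all terms to one side, factor as (5x-4)(3x+2)=0, so x = 4/5 or x = -2/3.",
    "level": "High School Mathematics",
    "subject": "Algebra",
    "answer_type": "Multiple-Choice",
}
print(json.dumps(record, indent=2))
```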
For Olympiad-level Competition, the categories and their respective percentages are Algebra (21.19%), Number Theory (1.03%), Geometry (6.46%), and Combinatorics (9.56%), totaling 38.24%. For High School Mathematics, the categories are Algebra (17.83%), Geometry (3.62%), and Pre-Calculus (14.21%), totaling 35.66%. For University-level, the categories are Linear and Abstract Algebra (6.46%), Calculus and Analysis (6.20%), Differential Equations (3.62%), Probability (5.43%), and Statistics (4.39%), totaling 26.10%. Three subject areas, Differential Equations, Probability, and Statistics, only appear at the University level. Subject areas of the problems. The problems encompass a wide range of topics, including Algebra, Number Theory, Geometry, Combinatorics, Pre-Calculus, Linear and Abstract Algebra, Calculus and Analysis, Differential Equations, Probability, and Statistics, as shown in Figure <ref>. The MathOdyssey dataset encompasses a wide range of subject areas, providing a comprehensive testing ground for the mathematical reasoning and problem-solving capabilities of large language models (LLMs). Algebra problems constitute 21.19% from Olympiad-level Competition and 17.83% from High School Mathematics, making them the most represented areas in the dataset. In contrast, Number Theory problems, with only 1.03% from Olympiad-level Competition, have the lowest representation. Pre-Calculus problems, accounting for 14.21% of High School Mathematics, play a significant role in preparing students for more advanced calculus topics. Other subject areas, including Calculus and Analysis, Linear and Abstract Algebra, Differential Equations, Probability, and Statistics, each contribute around 4% to 8% to the dataset. See Appendix B for examples that help better understand the reasoning required to answer the questions. Diversity of answer types. The MathOdyssey dataset includes a variety of answer types, providing a comprehensive assessment of the mathematical reasoning and problem-solving capabilities of large language models (LLMs). The distribution of answer types is shown in Figure <ref>, and it is categorized into three main types: True-False questions, Multiple-Choice questions, and Open-Answer questions. The distribution of answer types in the MathOdyssey dataset is designed to provide a well-rounded evaluation of LLMs' mathematical capabilities. With 63.0% of the questions being open-answer, the dataset emphasizes the importance of detailed reasoning and solution generation. Multiple-choice questions, making up 32.8%, help assess the models' ability to choose correct answers from given options, while true-false questions, at 4.1%, provide a quick check of fundamental understanding. This diverse mix of answer types ensures that LLMs are tested on various aspects of mathematical problem-solving, from basic validation to complex reasoning and solution generation, requiring an understanding of the concepts. § EXPERIMENTS Our goal is to provide a comprehensive standardized dataset to evaluate LLMs on mathematical reasoning. By comparing different models, our benchmarks highlight their strengths and weaknesses. §.§ Models We evaluate both open-source and closed-source LLMs. The models tested include GPT-4 Turbo, GPT-4 <cit.>, GPT-3.5 Turbo, Gemini models <cit.>, Claude 3 <cit.>, Llama-3-70B, and DBRX-Instruct <cit.>. All models are tested using chain-of-thought reasoning <cit.>. See Appendix C for details of the baselines and prompts. 
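The exact system and task prompts used in the experiments are reproduced in Appendix C; the template below is only an illustrative stand-in showing how a chain-of-thought query might be assembled, and its wording is an assumption rather than the paper's prompt (only the "math professor" role is taken from the text).

```python
# Hypothetical wording; the actual prompt used in the experiments is given in Appendix C.
SYSTEM_ROLE = "You are a math professor."
TASK_TEMPLATE = (
    "Solve the following problem step by step, showing your reasoning, "
    "and finish with a line of the form 'Answer: <final answer>'.\n\nProblem: {problem}"
)

def build_messages(problem: str) -> list[dict]:
    """Assemble a chat-style request for a chain-of-thought solution."""
    return [
        {"role": "system", "content": SYSTEM_ROLE},
        {"role": "user", "content": TASK_TEMPLATE.format(problem=problem)},
    ]

print(build_messages("Find the limit of (f(2x^2+x-3)-f(0))/(x-1) as x -> 1, given f'(1)=2 and f'(0)=-1."))
```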
§.§ Model Evaluation A key advantage of the MathOdyssey data is that every question has an objective answer, so that it is straightforward to check the correctness by code. Such objective answers avoid subjective judgments from humans, making the evaluation consistent and reliable. We use GPT-4 to assist in evaluating model accuracy, particularly for open-answer questions. The metric measures the similarity between the predicted and ground truth answers. In the MathOdyssey dataset, various types of questions and answers are included. We employ a prompt-based method to provide scores for evaluation, considering the following criteria: * Mathematical Equivalence: Verify answers based on mathematical equivalence using advanced tools like symbolic computation software to confirm the equivalence of different algebraic or symbolic expressions. * Scoring: Assign a score of `1' for answers that match or are equivalent to the provided solution (exact value, choice label, or correctly rounded numerical approximation). Assign a score of `0' for incorrect answers without providing explanatory feedback. * Handling Multiple Choices: Consider the answer correct if the student correctly identifies the choice that matches the solution. Also, treat the corresponding choice as correct if the student provides the exact value that aligns with the problem's context. * Numerical Equivalence: Accept numerical answers that are correct to at least two decimal places or more, depending on the required precision. * Symbolic and Algebraic Identities: Recognize and accept equivalent algebraic forms as correct, such as standard mathematical identities. * Trigonometric and Logarithmic Forms: Accept equivalent trigonometric and logarithmic expressions, acknowledging transformations that change the form but not the value. * Comprehensive Evaluation: Encourage the use of computational tools for checking equivalence in cases where expressions are too complex for straightforward visual inspection. See Appendix D for the requirements and prompts used in the evaluation method. §.§ Results and Analysis We first report the performance on our mathematical benchmarks, as shown in Table <ref>. Our observations indicate that the benchmark is challenging for these models, with overall performance below 60%.[Advanced prompting methods using GPT-4 models in the contest have achieved performance improvements between 60% and 70%.] The Gemini Math-Specialized 1.5 Pro exhibits the highest overall performance at 55.8%, suggesting that specialized training significantly enhances capabilities. GPT-4 Turbo achieves 47.03%, followed by Gemini 1.5 Pro at 45.0%, and Claude 3 Opus at 40.6%, all showing competitive performance. For closed-source models (specifically the GPT series) and state-of-the-art open-source models such as Llama-3-70B and DBRX-Instruct, the results show that the selected open-source models not only surpass the performance of GPT-3.5 but are also approaching the capabilities of earlier versions of GPT-4. When comparing different levels of mathematical problems for GPT models, we observe that High School mathematics is the easiest category for all models, with GPT-4 models scoring above 70%. Olympiad-level problems are the most difficult, with all models scoring below 11%. Similar trends are seen for Llama-3-70B and DBRX-Instruct, with their performance in the Olympiad-level category being even lower, at less than 10%. 
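The "mathematical equivalence" and "numerical equivalence" criteria listed in the evaluation section can be approximated mechanically with a computer algebra system. The sketch below is a simplified illustration of such a check and is not the paper's released evaluation code, which relies on a GPT-4 judge following the prompt in Appendix D.

```python
import sympy as sp

def equivalent(predicted: str, reference: str, tol: float = 1e-2) -> bool:
    """Return True if two answer strings agree as strings (e.g. choice labels),
    symbolically, or numerically to roughly two decimal places."""
    pred, ref = predicted.strip(), reference.strip()
    if pred.lower() == ref.lower():              # identical strings, e.g. the choice 'D'
        return True
    try:
        a, b = sp.sympify(pred), sp.sympify(ref)
    except (sp.SympifyError, SyntaxError):
        return False
    if sp.simplify(a - b) == 0:                  # symbolic equivalence
        return True
    try:
        return abs(float(a) - float(b)) <= tol   # numerical equivalence
    except TypeError:
        return False

print(equivalent("sin(x)**2 + cos(x)**2", "1"))  # True (trigonometric identity)
print(equivalent("0.6667", "2/3"))               # True (agrees to two decimals)
print(equivalent("C", "D"))                      # False (wrong choice)
```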
Furthermore, closed-source models, particularly GPT-4 Turbo, exhibit stronger performance in high school and university-level math, highlighting ongoing advancements in their development. These results underscore the rapid progression of closed-source models in handling increasingly difficult mathematical questions over time. The performance gap between the best closed-source model, GPT-4 Turbo, and the open-source Llama-3 for difficult mathematical problems is notably narrow. For instance, GPT-4 Turbo achieves an overall accuracy of 10.14% on Olympiad-level mathematics, while Llama-3 achieves 9.46%. This demonstrates that both models, despite notable progress, still face significant challenges in solving these complex problems. However, for other difficulty levels, the gap becomes larger. For example, GPT-4 Turbo achieves 84.78% in high school mathematics, while Llama-3-70B scores only 52.17%, a difference of more than 30 percentage points. Table <ref> presents the results for different LLMs across various subject areas. The results show that GPT-4 Turbo consistently outperforms the others across most categories, particularly in High School Mathematics and University-Level subjects. It shows a notable lead in Algebra, Geometry, and Pre-Calculus at the high school level, and in Differential Equations, Linear & Abstract Algebra, Calculus & Analysis, and Statistics at the university level. GPT-3.5 Turbo shows consistent but lower performance compared to GPT-4 Turbo. Llama-3-70B performs well in certain areas, particularly in Olympiad-level problems. It has the highest score in Number Theory among all models. However, it struggles significantly in Series and Probability. DBRX-Instruct shows strength in Olympiad-level Geometry but generally lags behind GPT-4 Turbo and Llama-3-70B in other categories. § CONCLUSION We introduce MathOdyssey, a dataset for assessing LLMs' mathematical problem-solving skills. Our dataset, evaluation methods, and code are openly available. We have shown that while LLMs, both open-source, like Llama-3 and DBRX-Instruct, and closed-source, such as the GPT series, demonstrate proficiency in routine and moderately difficult mathematics, they struggle significantly with complex Olympiad-level problems. Additionally, we have revealed promising developments: open-source models are beginning to approach the performance levels of earlier GPT-4 versions. Despite this progress, performance on the most challenging questions remains low, highlighting a clear gap that future advancements need to address. Ultimately, our research underscores the ongoing journey towards achieving human-like mathematical reasoning in AI, with the MathOdyssey dataset serving as a benchmark for catalysing future developments. We are optimistic that continued research will progressively bridge the existing capability gap. In the future, expanding the MathOdyssey dataset to include a wider range of problem types and enhancing metrics to better capture deep mathematical reasoning can yield further insights into LLM capabilities. Limitation. While the MathOdyssey dataset includes a variety of problems across different levels of mathematics, the questions may not cover all types of mathematical reasoning or problem-solving approaches. This limitation could affect how well the dataset generalizes to other forms of mathematical challenges not represented in the collection. Future.
To address generalizability limitations, future work involves expanding the dataset to include a wider range of mathematical topics and problem types, including those that require visual representations, proofs, or interactive problem-solving. § ACKNOWLEDGEMENTS We would like to extend our sincere gratitude to AGI Odyssey, the NGO responsible for organizing the Global Artificial Intelligence Championships (GAIC) Math 2024. Their dedication and commitment to promoting artificial intelligence education and innovation have been invaluable to the success of this project. Additionally, we appreciate their contribution of resources and support, which have played a significant role in making this initiative possible. unsrtnat § MATHODYSSEY DATASET EXAMPLES AND SOLUTIONS Olympiad-level Problem. Let S={ 1,2,⋯ 2024 }, if the set of any n pairwise prime numbers in S has at least one prime number, the minimum value of n is . Answer: 16 Reasoning: Taking the 15 numbers 1, 2^2, 3^2, ..., 43^2 violates the condition. Furthermore, since S does not contain any non-prime numbers with a minimum prime factor of at least 47, there are only 14 types of non-prime numbers in S, excluding 1. Applying the Pigeonhole Principle, we conclude that n=16. GPT-4 Turbo's Solution: Answer: 307 Reasoning: The minimum number of elements required such that any subset of these elements from the set S = {1, 2, ..., 2024} containing pairwise prime numbers includes at least one prime number is 307. This is based on the count of prime numbers up to 2024 (approximately 306) and the application of the Pigeonhole Principle, which dictates that selecting more numbers than there are primes forces the inclusion of a prime number or a violation of the pairwise prime condition. High School Problem. What are the solutions of the quadratic equation 15x^2=2x+8? A) {-4/3,-3/2}    B) {-4/5,2/3}    C) {-3/2,4/5}    D) {-2/3,4/5} Answer: D Reasoning: First move all terms to one side: 15x^2-2x-8=0. Then factor into (5x-4)(3x+2)=0. Setting 5x-4 to zero results in a solution of x = 4/5 and setting 3x+2 to zero results in a solution of x = -2/3. GPT-4 Turbo's Solution: Answer: D Reasoning: The quadratic equation was first rearranged to standard form, coefficients were identified, and the quadratic formula was applied to find the solutions. The solutions were verified by substituting back into the original equation. University-level Problem. Find the limit lim_x→ 1f(2x^2+x-3)-f(0)/x-1 given f'(1)=2 and f'(0)=-1. Answer: -5 Reasoning: Let g(x)=2x^2+x-3. Since g(1)=0, the desired limit equals lim_x→ 1f(g(x))-f(g(1))/x-1. By the definition of the derivative and the chain rule and noting that g'(1)=5, we have lim_x→ 1f(g(x))-f(g(1))/x-1=f'(g(1))g'(1)=f'(0)g'(1)=(-1)(5)=-5. GPT-4 Turbo's Solution: Answer: -5 Reasoning: The limit was calculated by interpreting it as the derivative of a composed function, applying the chain rule, and substituting the given derivative values. § MATHODYSSEY DIFFERENT SUBJECT AREAS Table <ref> presents MathOdyssey examples spanning various subject areas. These encompass Algebra, Number Theory, Geometry, Combinatorics, Pre-Calculus, Linear and Abstract Algebra, Calculus and Analysis, Differential Equations, as well as Probability and Statistics. § BASELINES AND PROMPTS Figure <ref> depicts the prompt utilized for guiding Language Models (LLMs) in solving mathematical problems within our experimental framework. 
This prompt distinctly outlines the system's role as a math professor, delineating task specifications and the anticipated output format for tackling intricate mathematical challenges. § EVALUATION Figure <ref> depicts the prompt employed during the evaluation of large language models in our experiments. This prompt defines the system's role as a math teacher, providing both assessment criteria and the expected output format for grading mathematical problems. We have also made our evaluation code accessible to the public.
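As an illustration of how the objective answers in Appendix A can be checked mechanically, the following SymPy sketch verifies the quadratic example (choice D) and the chain-rule limit example; it is a sanity check written for this discussion, not part of the released evaluation code.

```python
import sympy as sp

x = sp.symbols('x')

# High-school example: 15x^2 = 2x + 8  ->  expected solutions {-2/3, 4/5} (choice D).
roots = sp.solve(sp.Eq(15 * x**2, 2 * x + 8), x)
print(sorted(roots))  # [-2/3, 4/5]

# University example: limit of (f(2x^2+x-3) - f(0)) / (x - 1) as x -> 1,
# given f'(1) = 2 and f'(0) = -1. With g(x) = 2x^2 + x - 3 the limit equals
# f'(g(1)) * g'(1) = f'(0) * g'(1).
g = 2 * x**2 + x - 3
g_at_1 = g.subs(x, 1)                      # 0, so f'(g(1)) = f'(0) = -1
g_prime_at_1 = sp.diff(g, x).subs(x, 1)    # 5
print(g_at_1, g_prime_at_1, -1 * g_prime_at_1)  # 0 5 -5
```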
http://arxiv.org/abs/2406.19373v1
20240627175146
Enhancing Quantum State Discrimination with Indefinite Causal Order
[ "Spiros Kechrimparis", "James Moran", "Athena Karsa", "Changhyoup Lee", "Hyukjoon Kwon" ]
quant-ph
[ "quant-ph" ]
skechrimparis@gmail.com School of Computational Sciences, Korea Institute for Advanced Study, Seoul 02455, South Korea Quantum Universe Center, Korea Institute for Advanced Study, Seoul 02455, South Korea School of Physics & Astronomy, University College London, London WC1E 6BT, United Kingdom Korea Research Institute of Standards and Science, Daejeon 34113, South Korea Korea Research Institute of Standards and Science, Daejeon 34113, South Korea School of Computational Sciences, Korea Institute for Advanced Study, Seoul 02455, South Korea § ABSTRACT The standard quantum state discrimination problem can be understood as a communication scenario involving a sender and a receiver following these three steps: (i) the sender encodes information in pre-agreed quantum states, (ii) sends them over a noiseless channel, and (iii) the receiver decodes the information by performing appropriate measurements on the received states. In a practical setting, however, the channel is not only noisy but often also unknown, thus altering the states and making optimal decoding generally not possible. In this work, we study this noisy discrimination scenario using a protocol based on indefinite causal order. To this end, we consider the quantum switch and define its higher-order generalisations, which we call superswitches. We find that, for certain channels and ensembles, the guessing probability can be significantly improved compared to both single- and multi-copy state discrimination. Enhancing Quantum State Discrimination with Indefinite Causal Order Hyukjoon Kwon July 1, 2024 =================================================================== § INTRODUCTION Discriminating quantum states underlies many of the practical applications of quantum information theory. These include quantum communication <cit.>, cryptography <cit.>, data hiding <cit.>, quantum secret sharing <cit.>, as well as quantum-inspired machine learning <cit.>. Quantum state discrimination, pioneered by Helstrom <cit.> and Holevo <cit.>, may be understood as a communication scenario between two parties that agree on an ensemble of n states, any of which may be selected with some finite probability. In general, owing to the non-orthogonality of quantum states, the states in the ensemble cannot be perfectly distinguished from one another. The goal of quantum state discrimination is then to optimise the measurement on the receiver's end to determine with maximum quantum mechanically-allowed probability which state was sent. In theoretical treatments of the problem, the quantum channel through which the states are being transmitted is taken to be noiseless, i.e. it is the identity channel. For most practical purposes, however, some noise will be introduced during the transmission process. This situation was studied in Refs. <cit.>, where the authors allowed for a possibly unknown channel between the two parties, introducing the problem of unknown state discrimination. It was shown that there exists a protocol such that an optimal measurement can be preserved for the optimal discrimination of the noisy states and that, moreover, this protocol sometimes enhances the guessing probability. In view of the fact that the protocol was based on channel twirling <cit.>, an instance of a supermap <cit.>, it is natural to ask whether other supermaps can perform better at the task. 
Recently, a supermap known as the quantum switch <cit.> has attracted a lot of attention in the literature, owing to the fact that many tasks which are impossible classically or quantum mechanically with standard operations, can be successfully performed using it. The quantum switch works by superposing the sequential action of two quantum channels by coupling them to an ancilla qubit upon which a measurement is performed. Since in the quantum switch protocol we cannot conclude which channel was applied first, the quantum switch is often referred to as a supermap with indefinite causal order. Advantages for various tasks using indefinite causal order have been reported in Refs. <cit.>. The first experiment that verified the presence of indefinite causal order was performed in Ref. <cit.>. Since then, many other attempts to implement indefinite causal order in practice were performed. For a review of the current status we direct the reader to Ref. <cit.>. In this work, we apply the quantum switch to the problem of state discrimination and find that in many cases a significant improvement in guessing probability is achieved. Moreover, often the problem of requiring a redesign of the optimal measurement is also circumvented allowing for optimal unknown state We also define higher-order quantum switches, which we refer to as superswitches, and show that in certain cases these can further improve the guessing probability. The manuscript is structured as follows. We begin by reviewing known facts on quantum state discrimination and the quantum switch. We define our protocol and examine the guessing probability for various ensembles of states and channels, comparing it to standard quantum state discrimination bounds. We define higher-order quantum switches, a natural extension of the standard quantum switch, and show that they can outperform the quantum switch in certain cases. Finally, we compare the performance of these superswitches for general Pauli channels, as well as define and study superswitches in higher state-space dimensions. § PRELIMINARIES §.§ Motivation and problem statement The problem of quantum state discrimination can be formulated as a communication scenario between two parties, say Alice and Bob, who have agreed on an ensemble of states Ω={q_i,ρ_i}_i=1^n, a collection of states ρ_i that appear with a priori probabilities q_i. Alice selects a state ρ_i according to the a priori probabilities q_i and sends it to Bob through a possibly noisy channel . Bob designs an appropriate measurement scheme, described by some positive operator valued measure (POVM), with the goal of identifying the label of the state. In minimum-error quantum state discrimination the figure of merit is the average probability of successful identification. Whenever =𝐈𝐝, that is, the channel is the identity map, 𝐈𝐝, we recover the standard minimum-error discrimination problem. Specifically, for an ensemble Ω, the minimum-error discrimination problem is to find a POVM Π={Π_i}_i=1^n that maximises the average probability of identifying the state correctly, that is, =max_Π∑_i q_i (Π_i ρ_i ) . In the case of an ensemble of two states, {q_i, ρ_i}_i=1,2, the optimal guessing probability is given by the Helstrom bound <cit.>, = 1/2+1/2q_1 ρ_1 -q_2 ρ_2_1 . Even though closed form solutions exist only in a number of cases, necessary and sufficient conditions exist for the optimal measurement: ∑_i q_i ρ_i Π_i-q_jρ_j ≥ 0 ∀ j . 
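For readers who want to experiment numerically, the following sketch evaluates the Helstrom bound quoted above for two equiprobable pure qubit states and compares it with the closed form (1+√(1-c²))/2 for overlap c, also used later in the paper; the particular states are arbitrary test inputs.

```python
import numpy as np

def helstrom(q1, rho1, q2, rho2):
    """Optimal two-state guessing probability: 1/2 + 1/2 * || q1*rho1 - q2*rho2 ||_1."""
    gap = q1 * rho1 - q2 * rho2
    return 0.5 + 0.5 * np.sum(np.abs(np.linalg.eigvalsh(gap)))  # trace norm of a Hermitian matrix

# Two equiprobable pure qubit states with overlap c = <psi1|psi2>.
theta = 0.3
psi1 = np.array([1.0, 0.0])
psi2 = np.array([np.cos(theta), np.sin(theta)])
rho1, rho2 = np.outer(psi1, psi1), np.outer(psi2, psi2)
c = abs(psi1 @ psi2)

print(helstrom(0.5, rho1, 0.5, rho2))   # numerical Helstrom bound
print(0.5 * (1 + np.sqrt(1 - c**2)))    # closed form for equal priors; the two agree
```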
Often a second condition is also mentioned, Π_i (q_i ρ_i-q_iρ_j) Π_j =0 ∀ i,j , which follows from the first. For a more detailed review on state discrimination, we refer the reader to Refs. <cit.>, We note that the optimal measurement is not unique and that, moreover, some of the states may never be associated with any measurement outcomes, that is, they may never be identified by an optimal measurement (see Appendix A). If a channel between the two parties is noisy, described by a quantum channel acting between them, such that whenever Alice sends state ρ_i, the state (ρ_i) is received by Bob. This defines a new ensemble, Ω^()={q_i,(ρ_i)}_i=1^n, consisting of the noisy states to be discriminated. It is clear that the optimal measurement Π of the original ensemble Ω is no longer optimal for the new ensemble Ω^(), in general. Since a quantum channel does not increase the guessing probability, it always follows that ≥^() , where ^() is the guessing probability of the ensemble Ω^() with the new optimal measurement Π^() performed. Even though a quantum channel described mathematically as a completely positive and trace preserving (CPTP) map cannot enhance the guessing probability, a supermap can. This was noticed in Refs. <cit.> and it was shown that channel twirling implemented by the use of a unitary 2-design <cit.> can enhance the guessing probability for certain ensembles and channels. Note that since the effect of twirling is to produce a depolarisation channel, if the noise we start with is already depolarising, no improvement can ever be achieved by the twirling protocol. This is not the case, however, in the protocol we will propose in this work. Moreover, it was shown that for certain ensembles and channels it can also preserve the optimality of a quantum measurement for discrimination, saving the need for performing state or process tomography, both of which are costly. Such optimal measurement preserving channels were completely characterised in dimension two and partially in dimension three and higher <cit.>. Furthermore, experimental verification of indefinitely causally ordered processes has been demonstrated <cit.>, suggesting that not only do quantum supermaps have potential theoretical advantages over standard quantum operations, as we will show in this work, but they may also be of practical importance in the near future. §.§ The quantum switch We now introduce and review some basic results regarding the quantum switch. The quantum switch is a supermap that superposes the ordering between the sequence of actions of two channels and . The resulting channel is defined as S_ω(,) = ∑_i,jK_ij (ρ⊗ω)K_ij^† , with the Kraus operators K_ij = E_i F_j ⊗0_C+F_j E_i ⊗1_C , where E_i and F_j denote the Kraus operators of the channels and respectively, and ω denotes the ancilla qubit that controls the order of the channels. By noting that 0=(+Z)/2 and 1=(-Z)/2, we can re-express the quantum switch as S_ω(,) = 1/4∑_i,j({E_i,F_j}ρ{E_i,F_j}^†⊗ω+[E_i,F_j]ρ[E_i,F_j]^†⊗ Zω Z ) , At this point in the analysis we focus on Pauli channels given their ubiquitous nature in quantum information <cit.> and theoretical ease of use which allows for deeper analytical treatment. Furthermore, by techniques such as twirling <cit.>, it is possible to convert arbitrary noise channels into Pauli channels, thus giving their analysis farther reaching consequences. 
Let _p⃗ (ρ) = ∑_i p_i σ_i ρσ_i and _q⃗ (ρ) = ∑_i q_i σ_i ρσ_i, be two Pauli channels in dimension two, where p⃗=(p_0,p_1,p_2,p_3) and q⃗=(q_0,q_1,q_2,q_3) denote probability vectors, i.e. ∑_i p_i = ∑_i q_i=1 and σ_i ∈{I,X,Y,Z} the Pauli matrices. By choosing the control qubit to be ω=|+⟩⟨$|, the action of the quantum switch is given by <cit.> _|+⟩⟨| (_p⃗,_q⃗ )= r_+ C_+ (ρ) ⊗|+⟩⟨+|r_- C_- (ρ) ⊗|-⟩⟨,| wherer_+, r_-are probabilities defined asr_-= r_12+r_23+r_31, withr_ij= p_i q_j +q_i p_j, andr_+ = 1-r_-, while the channelsC_+, C_-are C_+(ρ) = (∑_i=0^3 r_ii/2)ρ +∑_i=1^3 r_0iσ_i ρσ_i/r_+ , and C_- (ρ) = r_23 X ρ X +r_31 Y ρ Y+ r_12 Z ρ Z /r_- . In the special case where the two channels are the same Pauli channel, _p⃗=_q⃗= p_0ρ+p_1 Xρ X +p_2 Yρ Y+ p_3 Zρ Z , we find S_ω (_p⃗,_p⃗) = q_+ C_+ (ρ) ⊗ω_+ + q_- C_- (ρ) ⊗ω_- , and the expressions for the channelsC_+, C_-become C_+(ρ) = (p_0^2+p_1^2+p_2^2+p_3^2)ρ+2p_0(p_1 Xρ X+p_2 Yρ Y +p_3 Z ρ Z)/q_+ , C_-(ρ) = 2 p_1 p_2 Zρ Z+ 2p_2 p_3 Xρ X + 2 p_3 p_1 Y ρ Y/q_- , with q_- = 2(p_1 p_2 + p_2 p_3 +p_3 p_1) , q_+ = 1-q_- , andω_+ =ωandω_- = ZωZ. If we make the choiceω= +for an initial state of the ancilla system, we obtainω_± = ±. Performing a measurement in the ±basis, the two channelsC_+, C_-can thus be fully separated. We note that our examples in the following sections will be of this special type where input channels are the same. The interpretation regarding the ancilla is that it is operated by a communication provider who is the only party that has access to it. The communication provider then performs a measurement on the ancilla and communicates the outcome to Bob. Thus, the ancilla cannot be used to encode information by any of the parties. Many of the advantages that follow from the quantum switch, can be explained as `consuming' the coherence of the ancilla to produce effects that are otherwise impossible with standard quantum operations <cit.>. § ENHANCING DISCRIMINATION USING THE QUANTUM SWITCH We now give some preliminary examples to demonstrate that an increase in guessing probability is possible in a discrimination scenario. The scenario works as follows. Alice prepares a copy of the stateρ_jand sends it to the communication provider of the network. The communication provider feeds the stateρ_jinto the quantum switch that superposes two sequential applications of two channelsandwith=, and performs a measurement on the ancilla qubit. The ancilla measurement outcome is communicated to Bob who, depending on the measurement outcome on the ancilla, performs an appropriate measurement onC_±(ρ_j)and subsequently makes a guess of the label of the stateρ_j, depending on the outcome of the measurement. A sketch of the protocol is shown in Fig. <ref>. In principle, there are two scenarios in which the quantum switch can assist in increasing the guessing probability: * Ensembles Ω and channels such that channels C_+, C_- preserve the optimal measurement or the new optimal measurement can be inferred from that of the original ensemble Ω. As we will show, the depolarisation channel is a prototype of such a case. Then, the advantage of using the quantum switch is twofold: not only do we get an increase in guessing probability, but we also know what optimal measurement to apply without knowledge of the depolarisation parameter p. * Directly apply the optimal measurements for C_+ and C_- (in general different from that of Ω or Ω^()) and then obtain the average guessing ^()=q_+ ^+ + q_- ^- . 
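The structure above can be checked numerically by building the switch directly from its Kraus operators. The sketch below does this for two uses of the same Pauli channel with the control prepared in |+⟩, and verifies that the probability of the `-' outcome equals 2(p_1p_2+p_2p_3+p_3p_1); the chosen probability vector is an arbitrary illustration, not one used in the paper.

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.diag([1, -1]).astype(complex)
PAULIS = [I2, X, Y, Z]
P0, P1 = np.diag([1, 0]).astype(complex), np.diag([0, 1]).astype(complex)

def switch_same_pauli(rho, p, omega):
    """Quantum switch S_omega(N_p, N_p) for the Pauli channel N_p(rho) = sum_i p_i s_i rho s_i,
    built from the Kraus operators K_ij = E_i F_j ⊗ |0><0| + F_j E_i ⊗ |1><1|."""
    out = np.zeros((4, 4), dtype=complex)
    for i, pi in enumerate(p):
        for j, pj in enumerate(p):
            K = np.sqrt(pi * pj) * (np.kron(PAULIS[i] @ PAULIS[j], P0)
                                    + np.kron(PAULIS[j] @ PAULIS[i], P1))
            out += K @ np.kron(rho, omega) @ K.conj().T
    return out

plus = np.full((2, 2), 0.5, dtype=complex)                    # |+><+|
minus = np.array([[0.5, -0.5], [-0.5, 0.5]], dtype=complex)   # |-><-|
rho = np.diag([1, 0]).astype(complex)                         # arbitrary test input
p = (0.55, 0.15, 0.20, 0.10)                                  # arbitrary Pauli channel

out = switch_same_pauli(rho, p, plus)
q_minus_numeric = np.real(np.trace(np.kron(I2, minus) @ out))
q_minus_formula = 2 * (p[1] * p[2] + p[2] * p[3] + p[3] * p[1])
print(q_minus_numeric, q_minus_formula)  # the two values agree
```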
This is a case where we assume that we have performed process or state tomography to obtain information on the channel or states, and thus it is a scenario of enhancing communication on a given line with known noise. We explore both instances in the following. §.§ Preliminary examples Here we demonstrate that increasing the guessing probability is theoretically possible using the quantum switch. Let(ρ)be a Pauli channel of Eq.(<ref>) withp_0=0andp_1=p_2=p_3=1/3. Explicitly,(ρ)=_4/3(ρ)=1/3(XρX + YρY+ ZρZ), which is an instance of the depolarisation channel. We then readily obtain from Eq.(<ref>) that C_+(ρ) =ρ , C_-(ρ) = (ρ) , withq_+=1/3andq_-=2/3. Thus, the channel after a `+’ outcome is obtained upon a measurement of the ancilla is the identity, while the channel after a minus outcome is the noisy channelitself. It follows that for any ensemble of statesΩwith guessing probability, Eqs.(<ref>) and (<ref>) give ^()=1/3+2/3^()≥^() , since≥^(). As a second example, consider a Pauli channel withp_0=0and one ofp_1,p_2,p_3also equal to 0,p_2=0say. Then,(ρ)=p XρX +(1-p)ZρZand we find thatq_-=2p(1-p), q_+=1-2p(1-p), as well asC_+(ρ)=ρandC_-(ρ)=YρY, namely, one of the channels is the identity map and the other just a unitary. Thus, we can apply aYunitary in the case the `-' detector clicked at the control qubit to recover the original states, before performing the discrimination measurement. It follows that^()=for any ensemble. At the same time, the action of the channelat the level of the Bloch vector is the following r⃗=(r_1,r_2,r_3) →(-(1-2p)r_1,-r_2,(1-2p)r_3) . The channel flips theycomponent and one of thexorzcomponents depending on whetherp∈[0,1/2)orp∈(1/2,1], as well as shrinking thexandzcomponents by a factor of(1-2p). Forp=1/2the Bloch sphere is squeezed onto theyaxis as well as mirrored over the origin. It follows that for any ensemble of states with Bloch vectors that do not lie only on theyaxis, the channel will have a decreased guessing probability compared to the one of the original ensemble, i.e.^()≤, and thus the quantum switch always gives an advantage. Moreover, if all states of the ensemble lie on thex-zplane and the value of the channel isp=1/2, all states are sent to the maximally mixed state leading to complete loss of guessing probability. The protocol with the quantum switch can still recover the full guessing probability, mirroring the effect in Ref.<cit.> for the quantum capacity. This channel is unique up to unitaries. §.§ The depolarisation channel In this section we consider the case where the noise acting between the two parties is depolarising. Specifically we define the depolarisation channel as (ρ) = (1-p)ρ +p /2 , p∈[0,4/3] , or equivalently, in the Kraus representation, as (ρ) = (1-3p/4)ρ + p/4(Xρ X + Yρ Y + Zρ Z ) , p∈[0,4/3] . The action of the depolarisation channel at the level of Bloch vectors isr⃗→(1-p)r⃗. Specifically, as the parameterpranges in[0,1), the vector is shrinking until the valuep=1where the map becomes completely depolarising and thus sends all states to the maximally mixed state. For valuesp∈(1,4/3]the Bloch vectors have flipped direction. Consider a depolarisation channel given in Eq.(<ref>). Here,p_0=1-3p/4,p_1=p_2=p_3=p/4, from which we obtain q_-=3p^2/8 , q_+=1-3p^2/8 , and C_+(ρ) =(1-3p̃/4)ρ + p̃/4(Xρ X + Yρ Y + Zρ Z ) , p̃=4(4-3p)p/8-3p^2 , C_-(ρ) =1/3(Xρ X + Yρ Y + Zρ Z ) . We see that both channelsC_±are themselves depolarisation channels. 
Note, that the original depolarisation channel has different optimal measurements depending on whetherp∈[0,1]orp∈(1,4/3](see Appendix A). However, noting that0<p̃≤1,C_+has the same optimal measurementΠas the original ensemble, whileC_-has the measurement with flipped Bloch vectorsΠ̃, Eq.(<ref>). Thus, even if we do not have information about the depolarisation parameter,p, of_p, the quantum switch allows us to always apply the optimal measurement and achieve the optimal guessing. This would not have been possible without state or process tomography otherwise, which highlights one of the benefits of employing the quantum switch for state discrimination. The second is the potential for increasing the guessing probability. Specifically, the guessing probabilities^+and^-forC_+andC_-respectively, are readily obtained from Eqs.(<ref>)-(<ref>) of Appendix A, ^(+) = (1-p̃)+p̃/n , ^(-) = 1/3+2/3n , wherenis the number of states in the ensemble. By setting the control qubit toω=+and performing a measurement on the|±⟩basis, we obtain the guessing probability for the quantum switch ^()=(1-3p^2/8)[(1-4(4-3p)p/8-3p^2)+4(4-3p)p/(8-3p^2)n]+3p^2/8(1/3+2/3n) . In Fig.<ref> we plot the guessing probability for the ensembleΩ={1/2,|i⟩⟨}|_i=0,1of two orthogonal states, i.e.=1, appearing with equal a priori probabilities after sending them through the quantum switch and compare it to the guessing probability of the depolarisation channel. We see that for any value ofp>4/5, the quantum switch gives an advantage. Interestingly, atp=1the depolarisation channel sends all states to the maximally mixed one, removing any possibility of guessing better than uniform, i.e.=1/2, while the quantum switch allows for a correct detection with a probability of^()=5/8>1/2. §.§ Bit-phase flip channel Consider the bit-phase flip channel(ρ)=(1-p)ρ+p YρY, that is, a Pauli channel withp_0=(1-p),p_2=pandp_1=p_3=0. Then,q_- = 0, q_+ =1, C_-=0and C_+ (ρ) = (p^2 + (1-p)^2)ρ +2p(1-p) Yρ Y , which is another bit-phase flip channel withp̂=2p(1-p). For values ofp̂<p, the portion of the identity is larger and thus the resulting contraction is smaller. This in principle will lead to better guessing but the optimal measurement cannot be directly inferred in general. Since the action of the bit-phase flip channel with parameterpis a uniform contraction along thex-zplane of the Bloch ball by a factor of1-2p, if the original ensemble of states all lie on thex-yplane and appear with equal a priori probabilities, then the channel acts as a depolarisation channel. Ifp<1/2, the optimal measurement is preserved and we can readily compare the guessing probabilities. On the other hand, ifp>1/2, the Bloch vectors have flipped direction and so the measurement needs to be adjusted toΠ̃, Eq.(<ref>). The guessing probabilities are found as ^() = (1-2p) +2p/n , ^() = (1-4p(1-p)) +4p(1-p)/n . The first expression is optimal only forp∈[0,1/2)while the second for all values ofpsince4(1-p)p>0and thus a flipping cannot occur. In Fig.<ref> we plot the guessing probabilities in all cases for any orthogonal pair of states lying on thex-zplane. Note that if we do not have any information on the parameter range ofpin the bit-phase flip channel, then there is always a range where an improvement occurs by using the quantum switch. If, however, we assume that we know which measurement to apply after the action of, that is, we know whetherp∈[0,1/2)orp∈(1/2,1], then the quantum switch is always performing worse than the channel. 
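The comparison discussed above can be reproduced directly from the closed-form expressions quoted in this subsection. The sketch below evaluates, for two orthogonal equiprobable states (n = 2), the optimal guessing probability through the depolarisation channel and through the switch protocol; the sampled values of p are illustrative.

```python
def p_tilde(p):
    """Effective depolarisation parameter of the '+' branch, as given in the text."""
    return 4 * (4 - 3 * p) * p / (8 - 3 * p ** 2)

def guess_switch(p, n=2, p_ens=1.0):
    """Guessing probability with the switch for an ensemble of n equiprobable states whose
    noiseless guessing probability is p_ens (p_ens = 1 for orthogonal states)."""
    q_minus = 3 * p ** 2 / 8
    plus_branch = (1 - p_tilde(p)) * p_ens + p_tilde(p) / n
    minus_branch = p_ens / 3 + 2 / (3 * n)
    return (1 - q_minus) * plus_branch + q_minus * minus_branch

def guess_channel(p, n=2, p_ens=1.0):
    """Optimal guessing through the depolarisation channel itself (measurement flipped for p > 1)."""
    shrink = abs(1 - p)
    return shrink * p_ens + (1 - shrink) / n

for p in (0.5, 0.8, 1.0, 4 / 3):
    print(f"p = {p:.3f}: channel {guess_channel(p):.3f}, switch {guess_switch(p):.3f}")
# The two curves cross near p = 4/5, and at p = 1 the switch still gives 5/8,
# consistent with the discussion in the text.
```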
§.§ Comparing to multiple-copy discrimination So far we have compared the quantum switch to single-copy state discrimination and observed that an advantage occurs in a number of cases. However, an objection could be raised since the quantum switch requires two applications of a channeland thus it can be argued that this is the reason for the improvement. In this section, we compare the quantum switch to the state discrimination problem when two uses of the same channel are allowed. First, it is clear that two sequential uses of the same channel will only further degrade the guessing probability, unless the channel is unitary. Parallel use, however, will in principle offer a better guessing over a single copy scenario. Consider the two pure statesρ_±= α|0⟩ ±β|1⟩, appearing with equal a priori probabilities. Then,= (1+√(1-c^2))/2, wherecis the overlap between the states,c=α^2-β^2. If, however, we perform discrimination onncopies we find the guessing^(n) = (1+√(1-c^2n))/2, which is clearly larger than the case of single-copy discrimination. It should be noted that this comes at the cost of having to prepare two or more copies of the state, a cost that is not present in the case of the switch protocol. We re-examine the case of the depolarisation channel and a pair of orthogonal states,ρ_i=|i⟩⟨$| with i=0,1, appearing with equal a priori probabilities. Given n copies of the states after depolarisation noise _p(ρ_i), with depolarisation strength p, this defines the ensemble Ω^(n)={1/2,_p(ρ_i)⊗n⋯⊗_p(ρ_i) }_i=0,1, for which the Helstrom bound becomes (see Appendix C) ^(_p ,n) = 1/2+1/4_p(ρ_0)⊗n⋯⊗_p(ρ_0)-_p(ρ_1)⊗n⋯⊗_p(ρ_1)_1 = 1/2+1/4∑_k=0^nn!/k! (n-k)!(p/2)^k (1-p/2)^n-k-(p/2)^n-k(1-p/2)^k . Note that the Helstrom bounds for 2n-1 and 2n copies of states coincide. In Fig. <ref>, we plot the guessing probabilities for the discrimination of up to 10 copies. We see that there is always a region of improvement of the guessing probability with the quantum switch. However, this region becomes smaller with increasing number of copies. For any finite value n there exists a region such that the protocol with the quantum switch outperforms the n-copy discrimination scenario. This is due to the fact that for near completely depolarising channels, that is depolarising channels with values of p around the value p=1, the multiple-copy guessing probability is a continuous function of p and has to approach random guessing. As n tends to infinity, this region shrinks to the point p=1 where ^()=5/8>1/2. § FIRST-ORDER SUPERSWITCH We now introduce higher-order switches that control the ordering of lower-order switches. Coherent superposition of the ordering of more than two channels was studied for certain informational quantities in Refs. <cit.> with the focus being on depolarisation channels. In Ref. <cit.> the concept of switch of switches was introduced, similar to the case we consider here but in a restricted scenario where there is only a control switch for the inside switches, i.e. their ordering is `synchronised'. In this work we will introduce the general case and we will show how it includes the previous approach as a special case at the end of the section. It is instructive to discuss the first higher-order switch, which superimposes two quantum switches before discussing the general case. We call this the first-order superswitch and refer to the standard quantum switch as the zeroth-order superswitch. A schematic representation is shown in Fig. <ref>. 
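The n-copy Helstrom bound above for orthogonal states sent through the depolarisation channel reduces to a binomial sum, which the following sketch evaluates. It simply tabulates the bound for a few copy numbers and illustrates that at p = 1 the multi-copy bound collapses to random guessing while, as noted in the text, the switch protocol still reaches 5/8.

```python
from math import comb

def guess_ncopy_depol(p, n):
    """Helstrom bound for discriminating D_p(|0><0|)^{⊗n} vs D_p(|1><1|)^{⊗n} with equal priors,
    following the binomial expression quoted above."""
    a, b = p / 2, 1 - p / 2
    total = sum(comb(n, k) * abs(a**k * b**(n - k) - a**(n - k) * b**k) for k in range(n + 1))
    return 0.5 + 0.25 * total

for n in (1, 2, 4, 10):
    print(n, round(guess_ncopy_depol(0.9, n), 4), round(guess_ncopy_depol(1.0, n), 4))
# At p = 1 every copy is maximally mixed, so the n-copy bound stays at 0.5 for all n,
# whereas the switch protocol achieves 5/8.
```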
Explicitly it is defined through ^(1)_ω_c(,,,)(ρ)=∑_i,j,k,l K_ijkl(ρ⊗ω_c)K_ijkl^† , where ancilla state ω control the order of channels , and , inside the two first-order switches respectively, as well as the ordering of the two switches themselves. The Kraus operators are explicitly given by K_ijkl= E_i F_j _k _l ⊗|0⟩⟨⊗||0⟩⟨⊗||0⟩⟨+| E_i F_j _l _k ⊗|0⟩⟨⊗||1⟩⟨⊗||0⟩⟨| + F_j E_i _k _l ⊗|1⟩⟨⊗||0⟩⟨⊗||0⟩⟨+| F_j E_i _l _k ⊗|1⟩⟨⊗||1⟩⟨⊗||0⟩⟨| + _k _l E_i F_j ⊗|0⟩⟨⊗||0⟩⟨⊗||1⟩⟨+|_l _k E_i F_j ⊗|0⟩⟨⊗||1⟩⟨⊗||1⟩⟨| + _k _l F_j E_i ⊗|1⟩⟨⊗||0⟩⟨⊗||1⟩⟨+|_l _k F_j E_i ⊗|1⟩⟨⊗||1⟩⟨⊗||1⟩⟨ |, which can be rewritten as K_ijkl= 1/8 ∑_s_1, s_2, s =±[[E_i,F_j]_s_1,[_k,_l]_s_2]_s ⊗ P_s_1⊗ P_s_2⊗ P_s= 1/8 ( E_iF_j_k_l⊗⊗⊗ + E_iF_j_k_l⊗ Z⊗⊗. +E_iF_j_k_l⊗⊗ Z ⊗ + E_iF_j_k_l⊗ Z ⊗ Z ⊗ + E_iF_j_k_l⊗⊗⊗ Z +E_iF_j_k_l⊗ Z⊗⊗ Z .+ E_iF_j_k_l⊗⊗ Z ⊗ Z + E_iF_j_k_l⊗ Z ⊗ Z ⊗ Z) , where [A,B]_+≡{A,B}=AB+BA and [A,B]_-≡ [A,B]=AB-BA denote the anticommutator and commutator, and P_+ =, P_- = Z. Under the assumption that for any pair of indices, both the commutator [X_i,Y_j] and anticommutator {X_i,Y_j} of any pair of Kraus operators X_i, Y_j of the channels cannot be simultaneously non-zero (for instance Pauli channels), we arrive at the expression for the first-order superswitch, ^(1)_ω_c(,,,)(ρ)= ∑_i,j,k,l K_ijkl(ρ⊗ω_c)K_ijkl^†= 1/64∑_s_1, s_2, s =±[[E_i,F_j]_s_1,[_k,_l]_s_2]_s ρ[[E_i,F_j]_s_1,[_k,_l]_s_2]_s^† ⊗ ( P_s_1⊗ P_s_2⊗ P_s) ω_c ( P_s_1⊗ P_s_2⊗ P_s)^† , which when expanded gives ^(1)_ω_c(,,,)(ρ) = 1/64∑_i,j,k,l( E_iF_j_k_lρE_iF_j_k_l^†⊗ω_c . +E_iF_j_k_lρE_iF_j_k_l^†⊗ (Z⊗⊗) ω_c(Z⊗⊗) +E_iF_j_k_lρE_iF_j_k_l^†⊗ (⊗ Z ⊗) ω_c( ⊗ Z⊗) +E_iF_j_k_lρE_iF_j_k_l^†⊗ (Z⊗ Z ⊗) ω_c (Z⊗ Z ⊗) +E_iF_j_k_lρE_iF_j_k_l^†⊗ (⊗⊗ Z) ω_c (⊗⊗ Z) +E_iF_j_k_lρE_iF_j_k_l^†(Z⊗⊗ Z) ω_c (Z ⊗⊗ Z) +E_iF_j_k_lρE_iF_j_k_l^†(⊗ Z ⊗ Z) ω_c (⊗ Z ⊗ Z) .+E_iF_j_k_lρE_iF_j_k_l^†(Z⊗ Z ⊗ Z) ω_c (Z ⊗ Z ⊗ Z) ) . ^(1)_ω_1 ω_2 ω(,,,)(ρ)= ∑_i,j,k,l K_ijkl(ρ⊗ω_1⊗ω_2 ⊗ω)K_ijkl^†= = 1/64∑_i,j,k,l( E_iF_j_k_lρE_iF_j_k_l^†⊗ω_1⊗ω_2 ⊗ω. +E_iF_j_k_lρE_iF_j_k_l^†⊗ Zω_1 Z⊗ω_2 ⊗ω +E_iF_j_k_lρE_iF_j_k_l^†⊗ω_1⊗ Zω_2 Z ⊗ω +E_iF_j_k_lρE_iF_j_k_l^†⊗ Zω_1 Z⊗ Zω_2 Z ⊗ω +E_iF_j_k_lρE_iF_j_k_l^†⊗ω_1⊗ω_2 ⊗ Zω Z +E_iF_j_k_lρE_iF_j_k_l^†⊗ Zω_1 Z⊗ω_2 ⊗ Zω Z +E_iF_j_k_lρE_iF_j_k_l^†⊗ω_1⊗ Zω_2 Z⊗ Zω Z .+E_iF_j_k_lρE_iF_j_k_l^†⊗ Zω_1 Z⊗ Zω_2 Z⊗ Zω Z ) . We now assume that the ancilla state is a product state, i.e. ω_c = ω_1 ⊗ω_2 ⊗ω, where ω_1 and ω_2 control the ordering of the channels , and ,, respectively, in the innermost switches and ω controls the ordering of the innermost switches themselves. Letting ω_1=ω_2=ω=|+⟩⟨$| and defining channelsC_ijk^(1)as, e.g. C^(1)_+-+(,,,)(ρ) = 1/r_+-+1/64∑_i,j,k,lE_iF_j_k_lρE_iF_j_k_l^† , where r_+-+ = (1/64 ∑_i,j,k,lE_iF_j_k_lρE_iF_j_k_l^†), are the associated probabilities, we finally obtain ^(1)(,,,) (ρ)= ∑_s_1,s_2,s = ± r_s_1 s_2 s C^(1)_s_1 s_2 s (, , , ) ⊗|s_1 s_2 s⟩⟨| = r_+++ C^(1)_+++(,,,)(ρ) ⊗|+++⟩⟨+|r_-++ C^(1)_-++(,,,)(ρ)⊗|-++⟩⟨| + r_+-+ C^(1)_+-+(,,,)(ρ)⊗|+-+⟩⟨+|r_–+ C^(1)_–+(,,,)(ρ)⊗|–+⟩⟨| + r_++- C^(1)_++-(,,,)(ρ)⊗|++-⟩⟨+|r_-+- C^(1)_-+-(,,,)(ρ)⊗|-+-⟩⟨| + r_+– C^(1)_+–(,,,)(ρ)⊗|+–⟩⟨+| r_— C^(1)_—(,,,)(ρ) ⊗|—⟩⟨ |. It is clear that measurements on the three ancilla qubits allow for complete separation of the eight channelsC^(1)_s_1 s_2 s, as in the case of the quantum switch. We note that if instead of taking a product stateω_1⊗ω_2 =|+⟩⟨⊗||+⟩⟨$| for the two control qubits of the inside switches, we take one of the four entagled states, e.g. ω_12= |Φ^+⟩⟨$| with|Φ^+⟩=1/√(2)(|00⟩+|11⟩), then our result reduces to the special case in Ref.<cit.> (see Appendix D). 
The result of this is that the eight outcomes of the first-order superswitch are reduced to four, each of them being a mixture of two of the outcomes of the general case introduced here. However, mixing two channels can never lead to a higher guessing probability than mixing the guessing probabilities of the two individual channels: at most it can match it, which happens when the two channels share the same optimal measurement. In the case of ensembles of two states this follows from the triangle inequality of norms, as the guessing probability is given by the Helstrom bound <cit.>. For two channels N and M we have ‖λ N(ρ) + (1-λ) M(ρ)‖_1 ≤ λ ‖N(ρ)‖_1 + (1-λ) ‖M(ρ)‖_1 . Thus, in general we have that P_g^(λ N + (1-λ) M) ≤ λ P_g^(N) + (1-λ) P_g^(M) , and it is clear that there is a trade-off between the complexity of the setup and the performance in a communication task such as state discrimination. By further assuming that all four channels entering the superswitch are equal to the same Pauli channel ∑_i p_i σ_i ρσ_i, we use the simplified notation C^(1)_ijk(ρ) for the induced channels, since in this case there is no ambiguity about the channels. In the next section we examine a number of examples where the first-order superswitch can lead to better guessing probabilities. So far, the results have been restricted to the case where two copies of a noisy state are combined in the quantum switch. However, it is natural to ask what happens when multiple rounds of the quantum switch are applied, potentially improving the guessing probability with each extra round. We first describe the protocol in detail, before examining its implications for multiple-copy state discrimination. Assume we want to perform multiple-copy discrimination of two qubit states ρ_1 and ρ_2, communicated between two parties over noisy channels. Specifically, Alice prepares either ρ_1⊗…⊗ρ_1 or ρ_2⊗…⊗ρ_2 with equal a priori probabilities and then sends each copy through a noisy channel N. We assume that Alice sends 2^n copies of the state through 2^n lines of the same channel, and thus Bob receives the state N(ρ_j)⊗…⊗ N(ρ_j). The goal of our protocol is two-fold: i) to increase the guessing probability over performing the discrimination directly on the noisy copies of the state, and ii) to know which measurement to apply, whenever possible, even if only partial information is known about the channel. The protocol works as follows: * Instead of making a discrimination measurement on the copies, Bob first combines the copies pairwise in the quantum switch, measures the ancilla, and obtains 2^(n-1) new copies, some being C_+ (N(ρ_j)) and others C_- (N(ρ_j)), depending on whether the outcome of the measurement on the ancilla qubit was `+' or `-'. * At this point, Bob can either perform an appropriate discrimination measurement or repeat the previous step with the 2^(n-1) copies pairwise to obtain 2^(n-2) new copies. Specifically, `+' copies are combined pairwise and similarly for pairs of `-'. We note that there is the possibility of a leftover pair of `+' and `-' that cannot be combined, but the effect on the protocol is marginal with an increasing number of copies. * This procedure can be continued for up to n rounds, after which point only one state is left. * An appropriate discrimination measurement is applied to the surviving copies. In Fig.<ref> we schematically depict this procedure for two rounds and the case where only `-' outcomes are recombined in the quantum switch.
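As a quick numerical sanity check of the mixing bound stated at the beginning of this discussion, the following sketch draws random Pauli channels and mixing weights and confirms, via the Helstrom bound, that the mixed channel never outperforms the corresponding mixture of guessing probabilities; the sampled channels and test states are arbitrary and the code is only an illustration.

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.diag([1, -1]).astype(complex)
I2 = np.eye(2, dtype=complex)
PAULIS = [I2, X, Y, Z]

def pauli_channel(rho, p):
    return sum(pi * s @ rho @ s.conj().T for pi, s in zip(p, PAULIS))

def helstrom(rho1, rho2, q1=0.5, q2=0.5):
    gap = q1 * rho1 - q2 * rho2
    return 0.5 + 0.5 * np.linalg.norm(gap, ord='nuc')  # nuclear norm = trace norm

rng = np.random.default_rng(0)
rho1, rho2 = np.diag([1, 0]).astype(complex), np.diag([0, 1]).astype(complex)
for _ in range(5):
    p, q = rng.dirichlet(np.ones(4)), rng.dirichlet(np.ones(4))
    lam = rng.uniform()
    mix1 = lam * pauli_channel(rho1, p) + (1 - lam) * pauli_channel(rho1, q)
    mix2 = lam * pauli_channel(rho2, p) + (1 - lam) * pauli_channel(rho2, q)
    lhs = helstrom(mix1, mix2)
    rhs = lam * helstrom(pauli_channel(rho1, p), pauli_channel(rho2, p)) \
        + (1 - lam) * helstrom(pauli_channel(rho1, q), pauli_channel(rho2, q))
    assert lhs <= rhs + 1e-12
print("mixing bound holds for all sampled cases")
```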
In the next section we apply the aforementioned protocol in two different settings: * In the first we assume that only local measurements can be made on the received copies. Then, we compare the average guessing probability over all copies. * In the second scenario, we relax this restriction and allow for global measurements. §.§ State discrimination with the first order superswitch §.§.§ Increasing discrimination with the first order superswitch We consider two generic channels: the depolarisation channel, as well as a Pauli channel with p_0=1/3, p_1=0, and p_2, p_3 varying with p_2+p_3=2/3. Specifically, we consider the channel M(ρ)= (1/3)ρ + (1/3+p) Y ρ Y + (1/3-p) Zρ Z, with p∈[-1/3,1/3]. We first re-examine the case of the depolarisation channel, D_p(ρ). We find that all eight possible channels of the first-order superswitch after measurements on the ancillas are instances of the depolarisation channel themselves. Explicitly, we obtain C^(1)_+++(ρ)=D_η_1 (ρ) , η_1=1-(99p^4-336p^3+432p^2-256p+64)/(-45p^4+144p^3-144p^2+64) , C^(1)_-++(ρ)=C^(1)_+-+(ρ)=D_η_2 (ρ) , η_2=1-(-15p^2+24p-8)/(9p^2-24p+24) , C^(1)_--+(ρ)=ρ , C^(1)_++-(ρ)=C^(1)_-+-(ρ)=C^(1)_+--(ρ)=C^(1)_---(ρ)=D_4/3 (ρ) . The respective probabilities of occurrence are also found to be r_+++ =1-(9/64)(5p^4-16p^3+16p^2) , r_-++=r_+-+=(3/64)(3p^4-8p^3+8p^2) , r_--+=3p^4/64 , r_++- =(3/32)(4-3p)^2 p^2 , r_-+-=r_+--=(3/32)(4-3p) p^3 , r_---=(3/32) p^4 . In Fig.<ref> we plot the guessing probabilities for a depolarisation channel for the first-order superswitch and contrast them with the quantum switch. We find that there is intricate behaviour, with regions where both perform worse than the channel, others where the first-order superswitch outperforms the quantum switch, and vice versa. We note that even though there is a region where the first-order superswitch outperforms the quantum switch, this comes at a cost. Recall that one of the advantages of the quantum switch in the case of the depolarisation channel was that one could always infer the optimal measurement to be applied depending on the outcome of the measurement on the ancilla; this does not hold true in the case of the higher-order superswitches. In Fig.<ref> we plot the parameters 1-η_1, 1-η_2 that control the direction of the Bloch vectors of the input states of the channels, depending on whether they are positive or negative. We see that while 1-η_1 is always positive, 1-η_2 can change sign depending on the value of p. As a result, one needs to know the range in which the value of p of the original depolarisation channel lies, in order to infer which optimal measurement to apply. §.§.§ Guessing probability can decrease We now show that higher-order quantum switches can either improve discrimination or make things worse, depending on the ensemble of states and the channel, in the case of local measurements. As a first example, consider the depolarisation channel with p=4/3, D_4/3(ρ)=(1/3)(Xρ X + Yρ Y+ Zρ Z). It can be shown that in this case the probabilities of the outcomes are (r_+++,r_-++,r_+-+,r_--+,r_++-,r_-+-,r_+--,r_---)=(1/27)(3,4,0,8,6,6,0,0) , and the channels associated to the outcomes on the ancillas with non-zero probabilities are C_+++^(2)(ρ) = C_-++^(2)(ρ) =ρ and C_--+^(2)(ρ)=C_++-^(2)(ρ)= C_-+-^(2)(ρ) = D_4/3(ρ) .
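Before continuing with this example, here is a quick numerical cross-check of the depolarisation-channel expressions of this subsection: the snippet below evaluates η_1, η_2 and the outcome probabilities as written above and verifies that the probabilities sum to one. It is our transcription of those formulas, not code from the paper.

def first_order_outcomes(p):
    # Depolarisation parameters and outcome probabilities for four copies of D_p,
    # transcribed from the expressions above.
    eta1 = 1 - (99*p**4 - 336*p**3 + 432*p**2 - 256*p + 64) / (-45*p**4 + 144*p**3 - 144*p**2 + 64)
    eta2 = 1 - (-15*p**2 + 24*p - 8) / (9*p**2 - 24*p + 24)
    r = {'+++': 1 - (9/64)*(5*p**4 - 16*p**3 + 16*p**2),
         '-++': (3/64)*(3*p**4 - 8*p**3 + 8*p**2),
         '+-+': (3/64)*(3*p**4 - 8*p**3 + 8*p**2),
         '--+': 3*p**4/64,
         '++-': (3/32)*(4 - 3*p)**2*p**2,
         '-+-': (3/32)*(4 - 3*p)*p**3,
         '+--': (3/32)*(4 - 3*p)*p**3,
         '---': (3/32)*p**4}
    return eta1, eta2, r

for p in (0.2, 0.6, 1.0):
    eta1, eta2, r = first_order_outcomes(p)
    print("p=%.1f  eta1=%.4f  eta2=%.4f  sum of probabilities=%.12f" % (p, eta1, eta2, sum(r.values())))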
By applying the appropriate measurements depending on the outcomes on the ancilla, communicated to Bob by the network provider, and averaging over all possibilities, we obtain the guessing probabilities P_g,2^() =q_𝕀+q_^() = 7/27 +20/27^() < ^() = q_++q_-^() = 1/3 +2/3 ^() . § THE N^TH-ORDER SUPERSWITCH We now define higher-order superswitches and show that they consist of all commutators and anticommutators of all pairs of commutators and anticommutators generated by the superswitch of ordern-1, which consists of commutators and anticommutators generated by the one of ordern-2and so on. It is instructive to re-derive the zeroth- and first-order superswitches before we show the general case. Given two channelsandwith Kraus operatorsE_iandF_j, respectively, the ordinary quantum switch is defined as the channel with Kraus operatorsK^(0)_ij=E_i F_j ⊗|0⟩⟨_|1+ F_j E_i ⊗|1⟩⟨_|1, where the subscript 1 indicates the control ancilla. The resulting supermap is explicitly ^(0)_ω(,)(ρ) = ∑_i,j K^(0)_ijρ⊗ω K^(0)_ij =1/4∑_i,j{E_i, F_j }ρ{E_i, F_j }^†⊗ω + 1/4∑_i,j[E_i, F_j]ρ[E_i, F_j] ^†⊗ Z ω Z . If we letω=|+⟩⟨$|, we obtain ^(0)_ω(,)(ρ) =1/4∑_i,j{E_i, F_j }ρ{E_i, F_j }^†⊗|+⟩⟨_|1 + 1/4∑_i,j[E_i, F_j]ρ[E_i, F_j] ^†⊗|-⟩⟨_|1 , The first-order superswitch is defined similarly: given four channels , , and with Kraus operators E_i, F_j, Ẽ_̃k̃ and F̃_̃l̃ respectively, we first define two quantum switches (i.e. two zeroth-order superswitches) with Kraus operators K^(0)_ij =E_i F_j ⊗|0⟩⟨_|1+ F_j E_i ⊗|1⟩⟨_|1 ^(0)_ij =_i _j ⊗|0⟩⟨_|2+ _j _i ⊗|1⟩⟨_|2 , with subscripts differentiating between the different control qubits that control the ordering of the channels , and ,, respectively. We subsequently define the first-order superswitch as K^(1)_ijkl =K^(0)_ij^(0)_kl⊗|0⟩⟨_|3+ ^(0)_kl K^(0)_ij⊗|1⟩⟨_|3 = 1/2{K^(0)_ij,^(0)_kl}⊗+ 1/2[^(0)_kl,K^(0)_ij]⊗ Z, which leads to the definition derived in Eq. (<ref>). By induction, the n^th-order superswitch will have Kraus operators, K^(n)_i_n-1i^'_n-1 =K^(n-1)_i_n-1^(n-1)_i^'_n-1⊗|0⟩⟨_|2^n-1+ ^(n-1)_i^'_n-1 K^(0)_i_n-1⊗|1⟩⟨_|2^n-1 = 1/2{K^(n-1)_i_n-1, ^(n-1)_i^'_n-1}⊗ + 1/2[K^(n-1)_i_n-1, ^(n-1)_i^'_n-1]⊗ Z, where the index at the control qubit denotes that at order n there are 2^n-1 control qubits in total. The symbol i_n-1 is shorthand for the products of indices i_1 i_2 ⋯ i_2^n-1 associated with the Kraus operators of the (n-1)^th-order superswitch. The n^th-order superswitch is defined as the channel _ω^(n) =1/4∑{K^(n-1)_i_n-1, ^(n-1)_i^'_n-1}ρ{K^(n-1)_i_n-1, ^(n-1)_i^'_n-1}^†⊗ω + 1/4∑[K^(n-1)_i_n-1, ^(n-1)_i^'_n-1] ρ[K^(n-1)_i_n-1, ^(n-1)_i^'_n-1]^†⊗ Z ω Z . Expanding the expression we see that all possible nested commutators and anticommutator terms are generated. It is worth mentioning that the problem quickly becomes intractable in the general case as it scales doubly exponentially. Specifically, assuming that each channel has k≤4 Kraus operators, the number of possible terms grows as k^2^n2^2^n-1. Also, we note that at order n there are 2^2^n+1-1 channels that can be separated in the superswitch by measurements on the control qubits. For instance, the third-order superswitch can separate 32,768 channels, while the fourth has 2,147,483,648. In practice, however, many of the channels at each order end up being the same and thus tracking them becomes less of a daunting task. In any case, for multiple uses of the same channel we can write the generic form as follows. 
If we concisely denote the anticommutators and commutators as [a,b]_+ = {a,b} and [a,b]_- = [a,b], we obtain ^(n)_ω_1⋯ω_2^n-1(,⋯)= 1/2^2^n+2-2∑ _i_n∑_a_1,⋯,a_2^n-1=±([⋯,[[E_i,F_j]_a_1,[_i,_j]_a_2]_a_3⋯]_a_2^n-1ρ[⋯,[[E_i,F_j]_a_1,[_i,_j]_a_2]_a_3⋯]_a_2^n-1. ⊗ P_a_1ω_1 P_a_1⊗⋯⊗ P_a_2^n-1ω_2^n-1 P_a_2^n-1) , with P_+= and P_-=Z. §.§ The update rule and recurrence relation Even though the nested commutator and anticommutator terms in Eq. (<ref>) appear complicated, in reality one can setup an update rule and evaluate higher-order terms from the previous ones. This is due to the fact that for Pauli channels, the resulting channel at each order is again a Pauli channel, which follows after evaluating commutators and anticommutators of all Kraus operators. By representing a Pauli channel =p_0 ρ + p_1 Xρ X+ p_2 Yρ Y+ p_3 Zρ Z with its probability vector r⃗={p_0, p_1,p_2,p_3}, and noting that each term in the sum in Eq. (<ref>) is of the form [⋯]_a ρ [⋯]_a, it suffices to provide an update rule for the commutator and anticommutator terms for two generic Pauli channels with probability vectors r⃗_i={α_i,β_i,γ_i,δ_i} and i=1,2. We find the update rules (see Appendix E) (r⃗_1, r⃗_2) = {α_1 α_2 +β_1β_2+γ_1 γ_2 +δ_1 δ_2 , α_1 β_2+β_1 α_2 , α_1 γ_2+γ_1 α_2 ,α_1 δ_2+δ_1 α_2} , (r⃗_1, r⃗_2) = {0, β_1 γ_2+γ_1 β_2, γ_1 δ_2+δ_1 γ_2, δ_1 β_2+β_1 δ_2 } , where with (r⃗_1, r⃗_2) we denote the anticommutator term for two channels with probability vectors r⃗_1 and r⃗_2, while with (r⃗_1, r⃗_2) the commutator. Note that the above expressions do not give a channel but a channel multiplied by a probability. The probabilities of occurrence are ((r⃗_1, r⃗_2)) =1 -((r⃗_1, r⃗_2)) , ((r⃗_1, r⃗_2)) = β_1 γ_2+γ_1 β_2+ γ_1 δ_2+δ_1 γ_2+ δ_1 β_2+β_1 δ_2 . Eq. (<ref>) can be suggestively rewritten as (r⃗_1, r⃗_2) = ((r⃗_1, r⃗_2)) ( {α_1 α_2 +β_1β_2+γ_1 γ_2 +δ_1 δ_2 , α_1 β_2+β_1 α_2 , α_1 γ_2+γ_1 α_2 ,α_1 δ_2+δ_1 α_2}/((r⃗_1, r⃗_2))) , (r⃗_1, r⃗_2) = ((r⃗_1, r⃗_2)) ( {0, β_1 γ_2+γ_1 β_2, γ_1 δ_2+δ_1 γ_2, δ_1 β_2+β_1 δ_2 }/((r⃗_1, r⃗_2))) , where now we have both terms in the form of a probability multiplying a probability vector, defining a channel. If we further assume that all input channels are the same, that is, ==⋯ =p_0 ρ + p_1 Xρ X+ p_2 Yρ Y+ p_3 Zρ Z and represent that single input channel with the probability vector r⃗={p_0, p_1,p_2,p_3}, this fixes the initial condition for the recurrence relations. In this case, for a single iteration we obtain the channels for the quantum switch, Eq. (<ref>), multiplied with the respective probabilities. The procedure can be repeated to obtain the desired higher-order superswitch. §.§ Special instances of the depolarisation channel As a first example, we examine special instances of the depolarisation channel which have the special property that for the superswitches of any order, all possible channels after measurements on the ancillas are channels that are mapped to themselves by the commutator and anticommutator terms. It turns out that the only channels with this property are either (i) the channel D_⋆(ρ), with the value p=p_⋆=2(1-1/√(3)), (ii) the channel D_4/3, or (iii) the identity map, 𝕀. To see this, we first note that the update rule, Eqs. (<ref>) and (<ref>), in the case of two channels with probability vectors of the form r⃗={α, β,β,β} and v⃗={α^',β^',β^',β^'} becomes ((r⃗,v⃗)) =1-6ββ^' , ((r⃗,v⃗)) = 6ββ^' , and (r⃗, v⃗) = (1-6ββ^') {αα^' +3ββ^'/1-6ββ^',αβ^' +βα^'/1-6ββ^',αβ^' +βα^'/1-6ββ^',αβ^' +βα^'/1-6ββ^'} , (r⃗, v⃗) = 6ββ^'{0,1/3,1/3,1/3} . 
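The update rule is straightforward to implement. The sketch below represents a Pauli channel by its probability vector, returns the unnormalized anticommutator and commutator terms, and, as a consistency check, reproduces the special-case expressions quoted above for a depolarisation channel. The function names are ours.

def anticomm(r1, r2):
    # Unnormalized anticommutator term for Pauli channels r = (p0, p1, p2, p3).
    a1, b1, c1, d1 = r1
    a2, b2, c2, d2 = r2
    return (a1*a2 + b1*b2 + c1*c2 + d1*d2,
            a1*b2 + b1*a2, a1*c2 + c1*a2, a1*d2 + d1*a2)

def comm(r1, r2):
    # Unnormalized commutator term for Pauli channels.
    _, b1, c1, d1 = r1
    _, b2, c2, d2 = r2
    return (0.0, b1*c2 + c1*b2, c1*d2 + d1*c2, d1*b2 + b1*d2)

def normalise(r):
    s = sum(r)
    return s, tuple(x/s for x in r)

# Consistency check against the special case r = (alpha, beta, beta, beta):
p = 0.5
r = (1 - 3*p/4, p/4, p/4, p/4)           # depolarisation channel D_p
beta = p/4
prob_ac, chan_ac = normalise(anticomm(r, r))
prob_c, chan_c = normalise(comm(r, r))
print(prob_ac, 1 - 6*beta**2)            # these two numbers should agree
print(prob_c, 6*beta**2)                 # and these
print(chan_c)                            # should be (0, 1/3, 1/3, 1/3), i.e. D_{4/3}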
Considering the case of two copies of a depolarisation channel with depolarisation strength p, i.e. β=β^'=p/4, and imposing the condition that the anticommutator term, (r⃗, v⃗), maps the channel to itself, we obtain the equation p/4=(αβ^' +βα^')/(1-6ββ^') = (p/2-3p^2/8)/(1-3p^2/8) , with the only possible solution in the range p∈[0,4/3] being p=2(1-1/√(3)). On the other hand, if we impose the same condition for the commutator, we obtain the unique solution p=4/3. Along with the trivial case of the identity map, 𝕀, there are no other channels that are mapped to themselves by either the commutator or anticommutator term. Starting with any of these channels, due to the property that they are mapped to themselves, it follows that an n^th-order superswitch after measurements on the ancillas will lead to one of the three aforementioned channels, D_⋆, D_4/3, 𝕀, and thus on average we will have a channel of the form α_n D_⋆ +β_n D_4/3 +γ_n 𝕀, with α_n +β_n +γ_n=1. Employing the update rule for all nine possible combinations of channels in the sum, we can evaluate the (n+1)^th-order superswitch, which, again on average, will be the channel (α_n^2 (1-c)+2α_n β_n (1-d)+2α_n γ_n) D_⋆ +(α_n^2 c+2α_n β_n d +(2/3)β_n^2 +2β_n γ_n) D_4/3 +(β_n^2/3+γ_n^2) 𝕀, where we have defined c=6β^2 and d=2 β. Thus, we can construct recurrence relations for the coefficients α_n+1, β_n+1, γ_n+1 of the superswitches at order n+1: α_n+1 = α_n^2 (1-c)+2α_n β_n (1-d)+2α_n γ_n β_n+1 = α_n^2 c+2α_n β_n d +(2/3)β_n^2 +2β_n γ_n γ_n+1 = β_n^2/3+γ_n^2 . We note that at order n=0 we just have the quantum switch, which leads to the initial conditions α_0 = (1-c) , β_0 = c , γ_0=0 for the recurrence relations. Even though we do not provide the general solutions, we look for the stationary points, α_n+1=α_n , β_n+1= β_n , γ_n+1=γ_n , which are found to be the triples (α_s ,β_s, γ_s ) : {(0,0,1) , (1/2, 1/2+(√(3)-2)/4, (2-√(3))/4) , (0, 3/4, 1/4) } , which correspond to the channels 𝕀, (1/2) D_⋆+(1/2+(√(3)-2)/4) D_4/3+((2-√(3))/4) 𝕀, and (3/4) D_4/3+(1/4) 𝕀, respectively. The first solution corresponds to the case where we start with the identity channel, 𝕀, which is a trivial case. The second solution corresponds to the case of the channel D_⋆, and gives the optimal guessing in the limit of n→∞. Explicit evaluation leads to a guessing probability in the limit P_g^(⋆,∞) = (6+√(3))/12 ≈ 0.644 , which upper bounds the guessing probability of any n^th-order superswitch. This value should be contrasted with the guessing probability P_g^(⋆) = 1/√(3)≈ 0.577 , that is achieved by sending the states directly through the channel D_⋆ without the use of a switch, corresponding to at most an 11.6% improvement. In this case, each higher-order superswitch gives a higher guessing probability than lower-order ones. However, we note that the first few orders quickly converge to the upper bound and thus minimal improvement is achieved with each subsequent superswitch. We evaluate the first few orders of superswitches and we find that the quantum switch achieves a guessing probability of 0.601, while the next four orders of superswitches achieve 0.619, 0.631, 0.636, 0.639, respectively. The final solution is obtained if we start from copies of the channel D_4/3.
Once again the solution gives the limiting value of the guessing probability of superswitches, which is found to be P_g^(4/3,∞) = 3/4 =0.75 , which should be contrasted with the guessing probability P_g^(4/3) = 2/3≈ 0.667 , that is achieved by sending the states directly through the channel D_4/3 without the use of a switch, corresponding to a 12.5% improvement. Interestingly, in this case the limiting value does not upper bound all superswitches and the highest guessing probability is achieved by the quantum switch with a guessing probability of 7/9≈ 0.778, which is a 16.7% improvement, with each subsequent higher-order switch achieving a lower guessing probability, with the values tending to the limit 3/4. The quantum switch and the first four superswitches achieve the guessing probabilities 0.778, 0.753, 0.75004, 0.750000006, and 0.7500000000000001, and thus we see that they rapidly converge to the (sub-optimal) limiting value. If we restrict to the case where the input channel is a depolarisation channel, we find that at each order all channels are again depolarisation channels. §.§ State discrimination with higher-order superswitches We now study higher-order superswitches for the following two channels: (i) the depolarisation channel and (ii) a Pauli channel with p_1=0. §.§.§ The depolarisation channel We evaluate the superswitches up to order four and derive the expressions for the guessing probabilities, where we assume that after measurements on the ancillas, Bob always applies an appropriate optimal measurement for the channel corresponding to the obtained outcomes, communicated to him by the communication provider. Since the expressions are long we only give them in Appendix F. In Fig. <ref> we plot the guessing probabilities for the depolarisation channel and superswitches up to order four. We find that there are regions in the parameter p with very different behaviours and an intricate picture emerges: there exist regions where the higher-order superswitches outperform the lower-order superswitches, regions where the superswitches do progressively worse than the channel, as well as regions in which some switches perform better than others. Explicitly, in the region to the left of the dashed orange line, the higher the order of the superswitch, the worse the guessing probability becomes. It is clear that it is not beneficial to employ a superswitch protocol in this region. Similarly, in the region between the dashed orange line and the dotted black line, the superswitches perform worse than the channel, but comparing the superswitches in terms of performance, the ordering is not clear throughout the region. The region between the black dotted line and the leftmost dashed purple line is interesting, as for some values of p some of the switches may perform better than the channel but not all. For instance, for the value p≈ 0.75, the quantum switch performs worse than the channel, the first-order superswitch performs better than the quantum switch but still worse than the channel, and it is only when we get to the second-order switch that we start seeing an advantage over the guessing probability of the channel. In the region between the two dashed purple lines all superswitches outperform the channel and their ordering follows the pattern that the higher the order of the superswitch, the higher the guessing probability. We expect that this pattern holds for all n^th-order superswitches in this region, but we have no proof of this claim.
Perhaps the most interesting behaviours occur for values of p>1. In this interval, there exist regions where the ordering of the superswitches in terms of guessing probabilities does not follow a clear pattern. For example, for the value p≈ 1.05 the quantum switch has a significant advantage over the channel, which, however, is reduced for the first-order superswitch; the second-order superswitch outperforms both, the third-order performs worse than the second but better than the previous ones, and finally the fourth-order superswitch performs the best. We note that in the case p=1, where the channel becomes completely depolarising and the best guessing cannot exceed uniform guessing, all superswitches perform significantly better, with a guessing probability that is at least 5/8=0.625, an increase of at least 25%. Interestingly, for p=1 the quantum switch and the first-order superswitch have the same guessing probability, while higher-order superswitches exceed both. In the region between the rightmost purple and orange dashed lines, we once again find some subtlety in the behaviour of the superswitches, while to the right of the orange dashed line, we find a region where higher-order switches perform worse than lower-order ones. However, in contrast to the leftmost region, in this region all superswitches outperform the channel. From the above considerations, we conjecture that for the depolarisation channel, given access up to the n^th-order superswitch, the behaviour as a function of p can be split into three regions: (i) a region where the channel performs better than any of the switches, (ii) a region where the n^th-order switch performs the best, and (iii) a region where the quantum switch performs the best.
http://arxiv.org/abs/2406.17940v1
20240625210420
Time-kernel for lattice determinations of NLO hadronic vacuum polarization contributions to the muon $g$-$2$
[ "Elisa Balzani", "Stefano Laporta", "Massimo Passera" ]
hep-ph
[ "hep-ph", "hep-lat" ]
mymainaddress,mysecondaryaddress]Elisa Balzani mymainaddress,mysecondaryaddress]Stefano Laporta mysecondaryaddress]Massimo Passera [mymainaddress]Dipartimento di Fisica e Astronomia `G. Galilei', Università di Padova, Italy [mysecondaryaddress]Istituto Nazionale di Fisica Nucleare, Sezione di Padova, Padova, Italy § ABSTRACT We study the time-momentum representation of the kernel needed to compute hadronic vacuum polarization contributions to the muon g-2 in the space-like region at next-to-leading order. For small values of the time, we present analytical series expansions; for large values of the time, we present numerical series expansions which overcome the problems showed by naïve asymptotic expansions. These results are to be employed in lattice QCD determination of hadronic vacuum polarization contributions to the muon g-2 at next-to-leading order. § INTRODUCTION The Muon g-2 (E989) experiment at Fermilab has recently presented its second measurement of the muon magnetic moment anomaly, a_μ = (g_μ-2)/2  <cit.>, with a factor of 2 improvement in precision over the earlier measurement <cit.>, confirming its first measurement and the earlier results of the E821 experiment at Brookhaven <cit.>. Moreover, in a longer term, also the E34 collaboration at J-PARC <cit.> aims at measuring the muon g-2 through a new low-energy approach. The present muon g-2 experimental average shows a 5 σ discrepancy with the value of the Standard Model (SM) a_μ prediction quoted by the Muon g-2 Theory Initiative <cit.>. The main uncertainty of the muon g-2 SM prediction arises from its hadronic vacuum polarization (HVP) contribution, a_μ^ HVP, which cannot be reliably computed perturbatively in QCD and relies on experimental data as input to dispersion relations. Indeed, this contribution has been traditionally calculated via a dispersive, or time-like, integral using low–energy e^+e^-→hadrons data. Currently, the time-like calculation of a_μ^ HVP includes the leading-order (LO), next-to-leading-order (NLO) and next-to-next-to-leading-order (NNLO) terms <cit.>. An alternative determination of a_μ^ HVP has been provided by lattice QCD <cit.>. In the last few years significant progress has been made in first-principles lattice QCD calculations of its LO part, a_μ^ HVP( LO), although the precision of these results is, in general, not yet competitive with that of the time-like determinations based on experimental data. In 2021 the BMW collaboration published the first lattice QCD calculation of a_μ^ HVP( LO) with an impressive sub-percent (0.8%) relative accuracy <cit.>. This remarkable result weakened the discrepancy between the muon g-2 SM prediction and the experimentally measured value, but showed a tension with the time-like data-driven determinations of a_μ^ HVP( LO), being 2.2 σ higher than the Muon g-2 Theory Initiative data-driven value. Moreover, a new measurement of the e^+ e^- →π^+ π^- cross section from the CMD-3 experiment disagrees with all the other e^+ e^- data <cit.>. Efforts are ongoing to clarify the current theoretical situation. A new and competitive determination of a_μ^ HVP based on a method alternative to the time-like and lattice QCD ones is therefore desirable. A new approach to determine the HVP contribution to the muon g-2, measuring the effective electromagnetic coupling in the space-like region via scattering data, was proposed in 2015 <cit.>. 
The elastic scattering of high-energy muons on atomic electrons was identified as an ideal process for this measurement, and a new experiment, MUonE, was proposed at CERN to measure the shape of the differential cross section of μ-e elastic scattering as a function of the space-like squared momentum transfer <cit.>. At LO, simple results are long known and form the basis for present lattice QCD and future MUonE determinations of a_μ^ HVP( LO). In Ref. <cit.> we investigated the HVP contributions to the muon g-2 in the space-like region, and provided simple analytical expressions to extend the space-like calculation of the a_μ^ HVP contribution to NNLO (see also Ref. <cit.>). In principle, the space-like expressions of Ref. <cit.> can be directly used in lattice determinations of HVP contributions to the muon g-2. However, lattice calculations widely use the time-momentum representation <cit.>. Therefore, in this paper we will work out the time-momentum representation of the NLO space-like kernel of Ref. <cit.> [These results were presented at the workshop <cit.>]. § THE TIME-KERNEL AT LEADING ORDER According to Ref. <cit.> the LO contribution can be written as a_μ^ HVP(LO)= (α/π)^2 ∫_0^ dt G(t) K̃_2(t,m_μ) , where t is the Euclidean time and K̃_2(t,m_μ) is the time-kernel. The time-kernel can be written as K̃_2(t,m_μ)= f̃_2(t)= 8π^2 ∫_0^d/ f_2(^2) g( t) , where g(w)=w^2-4 sin^2 (w/2) . In the following, we use extensively the adimensional frequency and time =/ , = t . f_2(^2) can be written as f_2(^2)= 1/m_μ^2F_2(1/y(-^2))/-^2 , where y(z) is the rationalizing variable y(z) = z-√(z(z-4))/z+√(z(z-4)), and F_2(y(z)) is the known LO space-like kernel written in the form appearing in Ref. <cit.>. Substituting the expression of F_2(y) from Ref. <cit.> one obtains the result f_2(^2)= 1/m_μ^21/y(-^2)(1-y^2(-^2)) . The integration over is complicated <cit.>, the result contains a Meijer G-function: m_μ^2/8π^2f̃_2(t) = 14*21133/20,1,1/2^2 +^24 +1^2 +2 ln () -2 K_1(2 ) +2 γ -12 . This expression can be also written in terms of integrals of the Bessel functions instead of the Meijer G-function (see a similar integral in Ref. <cit.>). This can be done by applying the identity *21133/20,1,1/2u^2= -2 +8 ∫^u_0 dv (v-u) K_0(2v) = -4u[ K_1(2u) + π^2 ( K_0(2u)𝐋_-1(2u) +K_1(2u)𝐋_0(2u) ) ] to timekernelLO, where 𝐋_-1 and 𝐋_0 are Struve functions. § THE TIME-KERNEL AT NLO Similarly to the LO case, the NLO contribution can written as a_μ^ HVP(NLO)= (α/π)^3 ∫_0^ dt G(t) K̃_4(t,m_μ) , where K̃_4(t,m_μ) is the NLO time-kernel. The time-kernel can be written as K̃_4(t,m_μ)= f̃_4(t)= 8π^2 ∫_0^d/ f_4(^2) g( t) . For convenience we define an adimensional f̂(^2): f_4(^2)=1/m_μ^2f̂_4(^2) . f̂_4(^2) is related to the NLO space-like kernel F_4(y) obtained in Ref. <cit.> f̂_4(^2)= 2 F_4(1/y(-^2))/-^2 . By splitting g(w), the integral (<ref>) is split into two parts: f̃_4(t)= f̃_4^(a)(t) +f̃_4^(b)(t) . The first part is m_μ^2/8π^2f̃_4^(a)(t)= ∫_0^d/ f̂_4(^2)( ^2 ^2 ) . The integral in f4a can be calculated in analytical form; in fact, substituting the expression f4F4, performing the change of variable → y, recalling from Ref. <cit.> that F_4(z) is the imaginary part of the timelike kernel K_4(z), and using the dispersive integral for K_4(z) one obtains f̃_4^(a)(t)= ^22∫_-^0 dz/z F_4(1/y(z))= ^22∫_-^0 dz/z 1/π Im K_4(z)= ^22 K_4(0)= ^22( 197144 +π ^212 -12π ^2 ln 2 +34ζ(3) ) . In the above expression K_4(0) is the value of the 2-loop g-2, and we have also incorporated the factor 2 from f4F4 in the denominator 16 π^2. 
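Written out, the constant in the last bracket is K_4(0) = 197/144 + π^2/12 - (π^2/2) ln 2 + (3/4) ζ(3), the standard two-loop QED coefficient of the muon anomaly; a short numerical evaluation (our code, standard value) gives its familiar size:

import math

zeta3 = 1.2020569031595943               # zeta(3)
K4_0 = 197/144 + math.pi**2/12 - 0.5*math.pi**2*math.log(2) + 0.75*zeta3
print(K4_0)                              # approximately -0.328478965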
The second part of f4ab is m_μ^2/8π^2f̃_4^(b)(t) = ∫_0^d/f̂_4(^2) ( -4 sin^2 (2) ) . Substituting the expression (<ref>) in f4b, one finds that the integrand contains ln y, ln(1± y), _2(± y). The integration of single logarithms and product of logarithms can be done analitically, obtaining complicated expressions containing several Bessel functions, exponential integrals, Meijer G-functions. But, unfortunately, we were not able to calculate analytically the integrals containing the dilogarithms of y. As an alternative, in the next sections we will work out some series expansions of f̃_4(t). § EXPANSION FOR SMALL T In this section we work out the expansion of the NLO time-kernel (<ref>) for ≪ 1. We split the interval of integration in a intermediate point _0(): ∫_0^d/ f̂_4(^2) g() = ∫_0^_0()d/ f̂_4(^2) g() + ∫__0()^d/ f̂_4(^2) g() . The value of the integral is independent of the point of splitting _0; a convenient choice is _0()=1-/√()≫ 1 . In the integral over the interval [0,_0()] of splitom, first we expand in series g() for ≪ 1, then we make the convenient change of variable → y=y(-^2) (see f4F4), and we integrate over the interval -1 ≤ y ≤ y(-_0^2)=-. In the second integral over the interval [_0(),), first we expand in series f̂_4(^2) for ≫ 1, then we integrate over . The whole f̃^(4)(t) is obtained summing up the results of the two integrals. The expansion turns out to have the form f̃_4(t)= ∑_n ≥ 4 n even^n/n!( a_n +b_n π^2 +c_n (ln()+γ) +d_n (ln()+γ)^2) ) ; The analytical values of the coefficients a_n, b_n, c_n, d_n of the expansion up to ^30 are available in Table <ref>. § ASYMPTOTIC EXPANSIONS FOR LARGE T We decompose f̃_4^(b)(t) in two parts, according to the different behaviour for t →, f̃_4^(b)(t)= f̃_4^(b;1)(t) + f̃_4^(b;2)(t) . §.§ Main contribution f̃_4^(b;1)(t) is the dominant contribution, and its asymptotic expansion contains powers of 1/: f̃_4^(b;1)(t) = A_0+A_1 + B_0 ln + B_2 ln/^2 +∑_n=1^C_n/^n . The integral representation of f̃_4^(b;1)(t) can be obtained in this way: first, we split the integrand of f4b, m_μ^2/8π^2f̃_4^(b;1)(t) = 2 lim_ϵ→ 0[ ∫^_ϵd/f̂_4(^2) - ∫^_ϵd/f̂_4(^2) cos() ] , where ϵ regulates the divergence in =0; subsequently, we expand in series f̂_4(^2) around =0 in the second integral of f4b1rew, f̂_4(^2)= 1/8 -1/2 + (ln/2 +251/2880 +ln 2) +… . Formally integrating term by term over using ∫ d ^n cos( ) = - n! sin(n π/2) ^-1-n , we can obtain the coefficients A_n, B_n, C_n of f4b1asym. In section <ref> we will need the first terms of the asymptotic expansion: f̃_4^(b;1)(t)= -π/8 +ln -7 ζ(3)/4 +7/6π ^2 ln (2) -127 π ^2/144 +γ +653/216 -5 (ln +γ)/12 ^2 -π/2 +209/180 ^2 +277 π/360 ^3 +O(1/^4) . §.§ Exponentially suppressed contribution f̃_4^(b;2)(t) is the exponentially suppressed contribution. Its asymptotic expansion contains the factor e^-2: f̃_4^(b;2)(t) = e^-2∑_n=0^( D_n+ E_n ln+F_n/√()) 1/^n , where D_n, E_n and F_n are constants. f̃_4^(b;2)(t) has also a representation as an integral over the contour C shown in fig.<ref>: m_μ^2/8π^2f̃_4^(b;2)(t)= ∫_ Cd/f̂_4(^2) 2 cos() . The presence of the exponential factor is due to the singularities of the integrand in =± 2i, which come from terms of f̂_4() containing √(^2+4). Due to their asymptotic nature, the expansions (<ref>) and (<ref>) have a limited use for numerical evaluations. Increasing n, the coefficients grow factorially, and therefore, one needs to truncate the series at the index n=n̅() where the terms start increasing. 
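The truncation criterion just mentioned (stop at the index n̄ where the terms stop decreasing) can be made concrete with a small sketch. The coefficients below are placeholders chosen to grow factorially; they are not the actual coefficients of the expansions above.

import math

def optimally_truncated(coeffs, t):
    # Sum of sum_n c_n / t^n, stopping at the index where the terms start growing.
    total, prev = 0.0, float('inf')
    n = 0
    for n, c in enumerate(coeffs):
        term = c / t**n
        if abs(term) > abs(prev):        # terms started growing: optimal truncation point
            break
        total += term
        prev = term
    return total, n

# Placeholder coefficients growing factorially, mimicking an asymptotic series.
coeffs = [(-1)**n * math.factorial(n) for n in range(40)]
for t in (3.0, 6.0, 12.0):
    value, nbar = optimally_truncated(coeffs, t)
    print("t = %4.1f : truncated at n = %2d, partial sum = %.6f" % (t, nbar, value))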
One can show that the error due to the truncation of the first series is of the same order of magnitude of the value of the second series, making the inclusion of the exponentially suppressed contribution meaningless. Due to this fact, in the next sections we will explore a different approach, able to find expansions around a finite point =_0 converging for →. § W-INTEGRAL REPRESENTATION FOR F̂_4^(B) We start from the definition (<ref>) of f̂_4^(b)(t). In the integrand we add and subtract to f̂_4(^2) a piece h_0() which contains the terms non integrable in =0 or =± 2i, h_0()= 1/8^2 +π/16 (4 + ^2)^3/2 - π/2 (4 + ^2) . Defining ()=f̂_4(^2)/-h_0() and h̃_0() =∫_0^ 2( cos( )-1) h_0() d =π/16 + π^2/8( e^-2 -1 ) = + 1/32π^2 ( K_0(2 ) -𝐋_0(2 ) ) , we write f̃_4^(b)(t)= h̃_0() +∫_0^ d 2 (cos( )-1) () . Let us consider only the integral with the cosine. We decompose the cosine in exponentials ∫_0^d () cos = =∫_0^d () e^ +e^-/2 . Now we split the integral, and we rotate of π/2 the integration path in the complex- plane and make the change of variable → i w in the first exponential; then we rotate of -π/2 and make the change → -i w in the second one. ∫_0^d () e^/2 +∫_0^-d () e^-/2 = ∫_0^dw F_0(w) e^-w , where F_0 is F_0(w)=/2lim_ϵ→ 0^+( (ϵ+ w) -(ϵ- w) ) . We have introduced the regulator ϵ to make sure that the integration path remains in the half-plane ()>0. Due to the presence of the discontinuity, the limit is different if 0<w<2 or w>2 F_0(w) = { (w), if 0<w<2 , (w), if w>2 . . Finally the total integral becomes f̃_4^(b)(t)= h̃_0() +∫_0^2 dw (w) 2 (e^-w -1) +∫_2^ dw (w) 2 (e^-w -1) , where (w)= 4/3 w^3+w/16 (w^2-4) +π√(4-w^2)(w/16 (w^2-4)^2-1/8 w^2+7/48) +[ √(4-w^2)(-4/3 w^4-17/48 w^2 -5/16 (w^2-4) -1/4 (w^2-4)^2+1/8) +π(1/2 w^3+w/2-7/6 w) ]× arcsin(w/2) +23 w/144-37/144 w+5/24 w ln (w) , (w)= 4/3 w^3+w/16(w^2-4) +(7/24-1/4 w^2) √(w^2-4)ln(w (w^2-4)) +√(w^2-4)(-1/3 w^4+115/144 w^2+23/144 (w^2-4)-23/144) +[-4/3 w^5+7/6 w^3+w/2 (w^2-4) -29 w/24+47/12 w -√(w^2-4)(-4/3 w^4-17/48 w^2 -5/16 (w^2-4) -1/4 (w^2-4)^2 +1/8) ] ln (y(w^2))/2 +23 w/144 -37/144 w +5/24 w ln (w) - (1/w^3+w-7/3 w) L(y(w^2)) , L(x)=_2(-x)+2 _2(x) +12ln x (ln(1+x)+2ln(1-x)) ; y(z) was defined in ydef. We can also integrate analytically over w the terms of intwf4b not containing the exponential, but we have to add and subtract the pole term of the Laurent expansion of (w) in w=0 (w)=-1/2w+O(1) , obtaining f̃_4^(b)(t)= c_0 +h̃_0() +h̃_3() +∫_0^2 dw 2( (w)+1/2w) e^-w +∫_2^ dw 2 (w) e^-w , where c_0=-2∫_0^2 dw ((w)+1/2w) -2∫_2^ dw (w) = 653/216 +π/16 -ln(2) -163 /144π^2 +7/6π ^2 ln (2) -7 ζ (3)/4 and h̃_3()= ∫_0^2 dw 1-e^-w /w= -Ei(-2 )+ln (2 )+γ . § W-INTEGRAL FOR EXPONENTIALLY SUPPRESSED CONTRIBUTION F̃_4^(B;2)(T) We proceed similarly to the previous section. In f4b2C we add and subtract the pole term h_2() of the Laurent expansion of f̂_4(^2)/ in = ± 2i, obtaining f̃_4^(b;2) (t)= h̃_2() + ∫_ Cd () 2cos() , () =f̂_4(^2)/-h_2() , h_2() = - π/2 (4 + ^2) , h̃_2() = ∫_0^ d 2 cos ( ) h_2() = -π^2/4 e^-2 . We consider a path C infinitesimally near the cuts (see Fig.<ref>); we decompose the cosine and make the suitable change of variables in order to parametrize the two parts of C with the same w. We also have to take the difference between the values of between the two cuts, and on the left and the right of each cut: (w)=/2[ lim_ϵ→ 0^+(ϵ+ w) -lim_ϵ→ 0^-(ϵ+ w) -lim_ϵ→ 0^+(ϵ- w) +lim_ϵ→ 0^-(ϵ- w) ] . 
Finally f̃_4^(b;2)(t)= h̃_2() + ∫_2^dw (w) 2 e^-w , where (w)= -23 w^6+230 w^4-508 w^2+192/144 w^4 √(w^2-4) --29 w^8+222 w^6-348 w^4-144 w^2+128/48 w^5 (w^2-4)ln (y(w^2)) - (1/w^3+w-7/3 w) ( L(y(w^2)) + π ^2/4) +( 7/24-1/4 w^2) √(w^2-4)ln(w(w^2-4)) . We note that the asymptotic expansion  f4b2asym could be obtained from the integral representation of intwf4b2, by expanding (w) and e^-w in w=2 and by integrating term-by-term over w. The expansion of e^-w generates the exponential factor e^-2. We note also that (w) also generates all the exponentially suppressed contributions generated by (w); in fact we can check that the difference (w)-(w) is a function regular in w=2 [ Not all the parts of f̂_4(^2)/ which have a discontinuity for ^2 < -4, once integrated over give terms whose asymptotic behaviour contains e^-2 terms. An example comes from second term of h_0(), its asymptotic expansion ∫_0^ d 2 cos( )(^2+4)^3/2 = -14 ^2-916 ^4 -22564 ^6 +O(1^8) does not contain e^-2 ]. § FURTHER SUBDIVISIONS OF F_4^(B)(T) Now we have w-integral representations: intwf4bmod for f_4^(b)(t)=f_4^(b;1)(t)+f_4^(b;2)(t) and intwf4b2 for f_4^(b;2)(t). In f4b1asym and f4b2asym we have shown also the general form of their asymptotic expansions. Each of these expansions contains contributions with slightly different behaviour. In order to obtain numerically efficient expansions around finite , we have to introduce further splitting, separating even and odd powers in f_4^(b;1)(t) and integer and half-integer powers, and logarithms in f_4^(b;2)(t). Therefore we subdivide f_4^(b;1)(t) and f_4^(b;2)(t) in 3 parts, according their asymptotic behaviour: f̃_4^(b;1)(t) = f̃_4^(b;1;1)(t) +f̃_4^(b;1;2)(t) +f̃_4^(b;1;3)(t) , f̃_4^(b;2)(t) = f̃_4^(b;2;1)(t) +f̃_4^(b;2;2)(t) +f̃_4^(b;2;3)(t) , where f̃_4^(b;1;1)(t) ∼1/+O(1/^3), f̃_4^(b;1;2)(t) ∼1/^2+O(1/^4), f̃_4^(b;2;1)(t) ∼e^-2[1+O(1/^2) ], f̃_4^(b;2;2)(t) ∼e^-2ln()/√()[1+O(1/) ], f̃_4^(b;2;3)(t) ∼e^-21/√()[1+O(1/) ] , and f̃_4^(b;1;3)(t) contains the part not included in the above asymptotic expansions: f̃_4^(b;1;3)(t)= 653/216 -127 π ^2/144 -7 ζ (3)/4 +7/6π ^2 ln (2) +(ln +γ) ( 1 -5/12 ^2) -π/8 . §.§ Subdivision of the exponentially suppressed contribution By analizing the asymptotic of each term of (w), it is possible to separate the contributions to the separate parts of f̃_4^(b;2)(t), b21, b22, b23. We find f̃_4^(b;2;1)(t) = h̃_2()+∫_2^dw 2(w) e^-w , f̃_4^(b;2;2)(t) = ln∫_2^dw 2(w) e^-w , f̃_4^(b;2;3)(t) = ∫_2^dw 2 (w) e^-w , where (w)= π^2/4( 7/3 w -w -1/w^3) , (w)= 1/2(√(w^2-4)(1/4 w^2-7/24) -1/2( 1/w^3 + w-7/3 w) ln (y(w^2)) ) , (w)= (w) -(w) -(w)ln . §.§ Subdivision of the main asymptotic contribution We can separate the parts of (w) and (w) which generate the odd and the even powers of 1/, f̃_4^(b;1;1)(t) and f̃_4^(b;1;2)(t). Note that the odd powers have a factor π, see b1asy: f̃_4^(b;1;1)(t)= ∫_0^2dw 2(w) e^-w +∫_2^dw 2(w) e^-w , where (w) =π/2( √(4-w^2)(7/24-1/4 w^2) + (1/w^3+w-7/3 w) arcsin(w/2) ) , (w) = π^2/4(1/w^3+w-7/3 w) . The part with even powers of 1/ can be found subtracting everything from the whole integral f̃_4^(b;1;2)(t)= c_0 -f̂_4^(b;1;3) (t) -h̃_2() +h̃_0() +h̃_3() +∫_0^2 dw 2( (w)+1/2w -(w) ) e^-w +∫_2^ dw 2 ((w) -(w) -(w) ) e^-w . § EXPANSIONS IN A FINITE POINT =_0 First, we define the series removed of any leading factor f̅_4^(b;2;1)(t) =f̃_4^(b;2;1)(t) e^2 , f̅_4^(b;2;2)(t) =f̃_4^(b;2;2)(t) e^2 √()/ln , f̅_4^(b;2;3)(t) =f̃_4^(b;2;3)(t) e^2 √() , f̅_4^(b;1;1)(t) =f̃_4^(b;1;1)(t) , f̅_4^(b;1;2)(t) =f̃_4^(b;1;2)(t) ^2 . 
Then, we expand around a finite point =_0 by substituting t with _0/(1+)^1/2 in f̂_4^(b;1;x)(t) and with _0/(1+) in f̂_4^(b;2;x)(t), and expanding in : f̅_4^(b;1;1)(_0/√(1+)) =∑_n=0^ a^(b;1;1)_n ^n , f̅_4^(b;1;2)(_0/√(1+)) =∑_n=0^ a^(b;1;2)_n ^n , f̅_4^(b;2;1)(_0/1+) =∑_n=0^ a^(b;2;1)_n ^n , f̅_4^(b;2;2)(_0/1+) =∑_n=0^ a^(b;2;2)_n ^n , f̅_4^(b;2;3)(_0/1+) =∑_n=0^ a^(b;2;3)_n ^n . These particular substitutions → are chosen to improve the convergence of the series in for →, corresponding to → -1. The coefficients a_n^(b;x;y) can be obtained from the w-integral representations wintb11, wintb12, wintb21, wintb22, wintb23, by expanding the integrands in and integrating numerically term by term in w. Finally, f̃_4^(b)(t) is worked out by summing up all the 6 contributions, and the whole time-kernel f̃_4(t) is recovered by adding also f̃_4^(a)(t): f̃_4(t)= f̃_4^(a)(t) + f̃_4^(b;1;3)(t) +1/∑_n=0^ a^(b;1;1)_n (_0^2/^2-1)^n +1/^2∑_n=0^ a^(b;1;2)_n (_0^2/^2-1)^n +e^-2∑_n=0^ a^(b;2;1)_n (_0/-1)^n +e^-2/√()ln∑_n=0^ a^(b;2;2)_n (_0/-1)^n +e^-2/√()∑_n=0^ a^(b;2;3)_n (_0/-1)^n . At this point we can use the expansions for small and for large (<ref>) and (<ref>) to get the values of f̃_4(t) for any value of . We choose a point of separation =_s between the expansions. In the region ≤_s we compute f̃_4(t) from the small-t expansion f4smallt. In the region >_s, we choose a suitable value of _0 and use f4fint to compute f̃_4(t). The choice of the optimal _s, _0 and the numbers of terms of the expansion depend on the level of precision required. We choose _s=3.82 and _0=5. In Table <ref> we list the coefficients of the expansion (<ref>) up to n=12. These values allows to obtain f̃_4(t) with a precision <3×10^-8 for any value of ≥ 0. In Fig.<ref> we show the error of this approximation. § CONCLUSIONS This paper provides series expansions of the time-momentum representation of the NLO space-like kernel. These results are to be used in lattice QCD computations of the NLO HVP contribution to the muon g-2. After a derivation of the analytical time-momentum formula for the HVP contribution at LO, we analyzed in detail the components of the -integral constituting the time-momentum kernel for the HVP contribution at NLO. One part was easily integrated analytically. For the remaining part we were able to get an expansion for small t, of which we computed the first 14 coefficients. Next, we considered the expansion for large t. This expansion contains a part exponentially suppressed by a factor e^-2, and a part without any exponential factor, not present at LO. We were able to get the expansion for t → of these two parts, which turned out to be asymptotic, and therefore not much useful for numerical calculations. These two main parts could be further divided in five different subcontributions to the large-t expansion. For each of these five parts we found a representation as an integral over the imaginary frequency w. Next, we expanded these five integrals around a finite point _0 in powers of (_0/-1). These expansions turned out to be also converging for →; we calculated numerical values of the first 13 coefficients for _0=5. The provided expansions for small and large are sufficient to compute the time-kernel for every value of with a precision better than 3 × 10^-8. In conclusion, the results presented in this paper allow to compute the NLO kernel in the time-momentum representation with a precision largely sufficient for the lattice determinations of a_μ^ HVP at NLO accuracy. 
Acknowledgments We would like to thank G. Colangelo, M. Fael, D. Giusti, M. Hoferichter, T. Teubner and G. Venanzoni for useful discussions and correspondence. We are also grateful to all our MUonE colleagues for our stimulating collaboration. S. L. thanks the organizers of the “Sixth Plenary Workshop of the Muon g-2 Theory Initiative” (Bern, 4-8 September 2023) and the “II Workshop of Muon Precision Physics 2023” (Liverpool, 7-10 November 2023) for providing support for attending the workshops. S. L. acknowledges support from the Italian Ministry of University and Research (MUR) via the PRIN 2022 project n. 20225X52RA — MUS4GM2 funded by the European Union via the Next Generation EU package.
http://arxiv.org/abs/2406.18644v1
20240626180001
Topological phases, van Hove singularities, and spin texture in magic-angle twisted bilayer graphene in the presence of proximity-induced spin-orbit couplings
[ "Yuting Tan", "Yang-Zhi Chou", "Fengcheng Wu", "Sankar Das Sarma" ]
cond-mat.mes-hall
[ "cond-mat.mes-hall" ]
ytan77@umd.edu Condensed Matter Theory Center and Joint Quantum Institute, Department of Physics, University of Maryland, College Park, Maryland 20742, USA Condensed Matter Theory Center and Joint Quantum Institute, Department of Physics, University of Maryland, College Park, Maryland 20742, USA School of Physics and Technology, Wuhan University, Wuhan 430072, China Wuhan Institute of Quantum Technology, Wuhan 430206, China Condensed Matter Theory Center and Joint Quantum Institute, Department of Physics, University of Maryland, College Park, Maryland 20742, USA § ABSTRACT We investigate magic-angle twisted bilayer graphene (MATBG) with proximity-induced Ising and Rashba spin-orbit couplings (SOC) in the top layer, as recently achieved experimentally. Utilizing the Bistritzer-MacDonald model with SOCs, we reveal a rich single-particle topological phase diagram featuring topological flat bands across different twist angles and interlayer hopping energies. The evolution of Dirac cones and Chern numbers is examined to understand the topological phase transitions. We find that all phases can be achieved with an experimentally accessible SOC strength (∼1 meV) in systems with angles very close to the magic angle. Furthermore, the van Hove singularity for each topological flat band splits in the presence of SOC, significantly altering the electronic properties. Additionally, we investigate the spin textures of each band in momentum space, discovering a skyrmion-like spin texture in the center of the moiré Brillouin zone, which is correlated with the topological phase transitions and can be tuned via the SOCs and an out-of-plane electric field. Our findings provide a comprehensive understanding of the topological flat bands, establishing a foundation for grasping the intrinsic and rich roles of SOCs in MATBG. Topological phases, van Hove singularities, and spin texture in magic-angle twisted bilayer graphene in the presence of proximity-induced spin-orbit couplings Sankar Das Sarma July 1, 2024 ============================================================================================================================================================== § INTRODUCTION Graphene materials, such as magic-angle twisted bilayer graphene (MATBG), bernal bilayer graphene, rhombohedral trilayer graphene, etc., possess strongly correlated and superconducting phases of matter <cit.>. Their properties can be tuned through accessible external parameters, such as gating, straining, and twist angles, providing valuable opportunities to study topology and correlated physics <cit.>. In addition, experiments have utilized the proximity effect between graphene and a transition metal dichalcogenide (TMD) layer (such as WSe_2), inducing proximity spin-orbit couplings (SOCs) in graphene and offering another method to control graphene-based materials <cit.>. Interestingly, recent experiments have shown that observable superconductivity (SC) can be induced <cit.> or enhanced <cit.> by the proximate WSe_2 layer. The interplay between SOC and SC in graphene is an active area of research <cit.>. In general, SOCs can significantly alter the electronic band structure in graphene and lead to many interesting features, such as quantum spin Hall <cit.>, Rashba Edelstein effect <cit.>, etc. The proximity-induced SOC effect is especially pronounced in MATBG, because its small bandwidth in the low-energy moiré bands. 
By coupling the spin and orbital degrees of freedom of the electron, SOC can lift the spin degeneracies and create band splitting, making it an important ingredient in the construction of the topological phase diagram for MATBG. It can also significantly alter the electronic density of states (DOS) and potentially amplify the many-body correlation, which can induce interaction-driven phases as well as SC <cit.>. Furthermore, SOC fundamentally influences spin configurations in 2D materials, which have potential applications in spintronics and information storage due to their stability and manipulability by external fields <cit.>. In this work, we study MATBG with proximity-induced SOCs in the top layer. In our previous work <cit.>, we focused on the formation of unconventional intervalley interband phonon-mediated superconductivity in topological flat bands induced by SOCs, in the presence of valley imbalance. In the current work, we further investigate the topological phase diagram at the single-particle level, revealing three distinct topological phases across different twist angles. We examine in detail the evolution of Dirac cones and Chern numbers across the topological phase transitions. Our object here is to provide a foundational understanding of MATBG with SOCs, aiming to lay the groundwork for further exploration into the intrinsic role of SOCs in this system. Using the continuum Bistritzer-MacDonald (BM) model with Rashba and Ising SOCs <cit.>, our findings show that the SOCs significantly reconstruct the band structure, splitting the two flat minibands (without SOC) into four spin-split bands. Each band has its own pair of van Hove singularities (VHSs), leading to a total of eight VHSs in DOS per valley, while only four VHSs per valley are present without SOC. Additionally, we discover a skyrmion-like spin texture in momentum space, which can evolve when crossing three distinct topological phases. Furthermore, we demonstrate that the spin texture in momentum space can be tuned in-plane, and the skyrmion-like feature can be further modified, by applying an out-of-plane electric field. Our results provide a systematic understanding of MATBG moiré bands in the presence of proximity-induced SOCs. The rest of the paper is organized as follows: In Sec. <ref>, we introduce the continuum moiré Hamiltonian, incorporating Ising and Rashba SOCs into the top layer. We then diagonalize the Hamiltonian to obtain the band structure across a range of SOCs and twist angles. The results are presented in Sec. <ref>, <ref> and <ref>. Particularly, we construct the single-particle topological phase diagram across different angles and a range of values for SOCs in Sec. <ref>. We then show the effects of SOC on VHSs in Sec. <ref> and spin texture in Sec. <ref>, and how the spin texture can be tuned via applying an out-of-plane electric field to the graphene layers in Sec. <ref>. In Sec. <ref>, we discuss the implications of our results. Appendices. <ref>,<ref>,<ref> complement the theory presented in the main text. § MODEL As shown in Fig. <ref>, we consider MATBG-WSe_2 system, with proximity-induced Ising and Rashba SOCs on the top graphene layer, consistent with the experimental setup <cit.>. In this system, the bottom and top graphene layers are rotated by angles -θ/2 and θ/2, respectively. A TMD layer, such as monolayer WSe_2, is placed on top of the upper graphene layer, inducing proximity effects that result in Ising and Rashba SOCs in the top layer of TBG <cit.>. 
The single-particle physics of TBG with small θ can be described using a continuum moiré Hamiltonian—the Bistritzer-MacDonald model <cit.>—in which the low-energy Hamiltonian of the +K valley is formulated as follows: ℋ_0,+=[[ Û_θ/2(ĥ_t^(+)(k)+ĥ^(+)_SOC,t)Û_θ/2^† T̂^†(x); T̂(x) Û_θ/2^†ĥ_b^(+)(k)Û_θ/2 ]], where θ represents the twist angle, while the subscripts t and b indicate the top and bottom layers, respectively. In the above expression, ĥ_t^(+) and ĥ_b^(+) stand for the isolated +K valley Dirac Hamiltonian of the top and bottom layers, defined by ĥ_l^(+)(k)=v_F(k-κ_l)·σ for l=t,b. v_F≈5.944eVÅ denotes the Dirac velocity of monolayer graphene <cit.>. σ=(σ_x,σ_y), where σ_μ represents the μ-component Pauli matrix for the sublattice, and κ_t (κ_b) stands for the rotated +K valley point of the top (bottom) layer. To incorporate the rotation of spinors in the Dirac Hamiltonian, we apply the sublattice rotation matrix Û_θ/2=e^i(θ/4)σ_z. The interlayer tunneling between two twisted layers induces a spatially varying potential, described by T̂(x)=t̂_0+t̂_1e^-ib_+·x+t̂_-1e^-ib_-·x, where t̂_j=w_0σ_0+w_1[cos(2π j/3)σ_x+sin(2π j/3)σ_y], b_±=[4π/(√(3)a_M)](±1/2,√(3)/2), and a_M represents the moiré lattice constant. The interlayer hopping parameters, w_1≈ 110 meV, w_0=0.8w_1. The results with different w_0/w_1 values are also discussed. In our numerical calculations, we consider a 9 × 9 momentum grid in the plane-wave expansion for this continuum model, with Γ_M at the center, and we explicitly checked that a larger grid does not change the band structure within our desired resolution. The proximity-induced SOC terms in the τ K valley are given by <cit.> ĥ_SOC,t^(τ)= λ_I/2τσ_0s_z+λ_R/2(τσ_xs_y-σ_ys_x), where λ_I (λ_R) represents the strength of Ising (Rashba) SOC and s_μ is the μ-component Pauli matrix for the spins. The presence of SOCs alters the underlying symmetry of MATBG. The overall system (including both +K and -K valleys) obeys the spinful time-reversal symmetry, 𝒯_s=iτ_xs_y𝒦, where τ_x is the x-component Pauli matrix for the valley and 𝒦 is the conjugation operator. Thus, we expect the moiré bands of the two valleys satisfy ℰ_+,b(k)=ℰ_-,b(-k) and Ψ_-,b,-k=iτ_xs_yΨ_+,b,k^* for the energies and wavefunctions of the bth band, respectively. The Hamiltonian also preserves 𝒞_3z rotation symmetry around the out-of-plane z axis, but not the 𝒞_2z rotation symmetry, which is iτ_xσ_xs_z <cit.>. Furthermore, to characterize the topology of the system, we calculate the Berry curvature Ω by numerically computing the Wilson loops in the momentum space with the rhombus grid, which is described in detail in the Appendix. <ref>. We now address some subtleties concerning our model. First, the BM model is valid in the continuum limit and at low-energy and long wavelength. This is reasonable because, in this work, we only focus on the cases with isolated flat minibands with narrow bandwidth. Second, in our model, we use a simplified expression for the λ_R term <cit.>. The Rashba SOC will naturally be present when the mirror symmetry is broken, such that it can not only be induced by the proximate layer, but also an electric field perpendicular to the graphene sample. The proximity-induced SOC strengths depend on the relative angle between the SOC layer and the top graphene layer <cit.>, while in the latter case, it depends on the strength of the electric field <cit.>. 
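As a minimal illustration of how the proximity terms act, the sketch below diagonalizes only the 4×4 (sublattice ⊗ spin) Dirac block of the top layer in the +K valley, i.e. the Dirac term plus Eq. (<ref>) with the interlayer tunneling switched off. The parameter values are illustrative and the code is ours, not the plane-wave solver used for the results of this paper.

import numpy as np

s0 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def h_top(kx, ky, lam_I, lam_R, vF=5.944e3, tau=+1):
    # Dirac block of the top layer in the +K valley with proximity-induced SOC.
    # Basis: sublattice (A,B) x spin (up,down); k measured from the Dirac point in 1/Angstrom,
    # vF in meV*Angstrom, energies in meV.
    h_dirac = vF * (kx * np.kron(sx, s0) + ky * np.kron(sy, s0))
    h_ising = 0.5 * lam_I * tau * np.kron(s0, sz)
    h_rashba = 0.5 * lam_R * (tau * np.kron(sx, sy) - np.kron(sy, sx))
    return h_dirac + h_ising + h_rashba

for k in (0.0, 0.002, 0.005):
    energies = np.linalg.eigvalsh(h_top(k, 0.0, lam_I=3.0, lam_R=3.0))
    print("k = %.3f 1/Angstrom :" % k, np.round(energies, 3))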
λ_I and λ_R can be as large as ∼20 meV in DFT calculation<cit.>, and ∼3 meV from the tight-binding model <cit.>, which is comparable to the bandwidth of MATBG. In general, it is well-known that an accurate knowledge of SOC from first principles calculations is a huge challenge, and it is more appropriate to obtain them by comparing with experiments. Here, we treat λ_I and λ_R as free parameters and study a range of the twist angles. § SINGLE PARTICLE PHASE DIAGRAM, BERRY PHASE AND CHERN NUMBER We diagonalize the single-particle Hamiltonian ℋ_0,+ [Eq. (<ref>)] in momentum space. Without SOC, the isolated flat minibands can be found within twist angle θ=0.97^∘∼ 2.5^∘, and the smallest bandwidth (approximately 1.1 meV) occurs around a twist angle of θ=1.08^∘, which we define as the magic angle. In the scenario where λ_I≠0 and λ_R=0 (Fig. <ref>b), the bands corresponding to spin-up (orange) and spin-down (purple) electrons are shifted according to the valley-spin Zeeman field described by Eq. (<ref>). The two Dirac cones at K_M are approximately separated by λ_I/2. When λ_I=0 and λ_R≠0 (Fig. <ref>c), there is a single Dirac band touching at the K_M (K'_M ) point. In the Rashba-only case, the expectation value ⟨ S_z⟩ of these four bands is exactly zero. This indicates that there is no net spin polarization in the z-direction due to the Rashba SOC. As illustrated in Fig. <ref>d, the combination of both Ising and Rashba SOCs significantly reconstructs the moiré bands, generically resulting in four spin-split bands around charge neutrality, which are called E_1, E_2, E_3, E_4, from bottom to top. In this case, spin-up and spin-down states are mixed, leading to a more complex band structure compared to the individual SOC cases. To characterize the topological properties of the moiré bands induced by SOCs, we further extract the Chern number 𝒞 (Eq. <ref>) and Berry curvature Ω (Eq. <ref>) of each band by numerically computing Wilson loops in a momentum-space rhombus grid. The Chern number of the bth band in the ± K valley is denoted by 𝒞_±, b. Due to the time-reversal symmetry, 𝒞_-, b = -𝒞_+, b. For simplicity, we present our results for the +K valley only. For clarity and without loss of generality, we first present our results at θ=1.05^∘, a convenient choice for presenting the phase diagram. We will present the phase diagrams in a more compact way with proper rescaled SOC parameters for other angles θ (see Fig. <ref>) and for the other choice of w_0/w_1=0.4 (see Fig. <ref>), where the exact locations of the phase boundaries are modified. As shown in Fig. <ref>a, we identify three distinct phases: A, B, and C, which are characterized by different sets of Chern numbers (𝒞_+,1, 𝒞_+,2, 𝒞_+,3, 𝒞_+,4) = (1, -3, 3, -1), (1, 0, 0, -1), and (1, -1, 1, -1), respectively. The Chern numbers change sign with a negative λ_I, while the sign of λ_R does not affect the Chern numbers. It is important to emphasize that the critical Ising/Rashba SOCs for the topological phase transitions are parameter-dependent, e.g., twist angle θ, w_0/w_1, etc. We provide representative results, but the numbers of parameters are simply too numerous to be completely comprehensive with respect to all the relevant parameters. Phases A <cit.>, and C <cit.> have previously been reported, and phase B is also reported in <cit.> in the presence of finite sublattice splitting due to the SOC layer, which we ignore in this study. 
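The Chern numbers quoted above are obtained from Wilson loops on a discretized MBZ. A generic sketch of that lattice prescription (plaquette link variables, in the spirit of Fukui, Hatsugai, and Suzuki) is given below for an arbitrary Bloch Hamiltonian h(k); since the full BM+SOC Hamiltonian is too lengthy to reproduce here, a two-band toy model is used as a stand-in.

import numpy as np

def chern_number(h, band, b1, b2, n1=60, n2=60):
    # Lattice Chern number of one band of h(kx, ky) from U(1) link variables on a k-grid.
    b1, b2 = np.asarray(b1, float), np.asarray(b2, float)
    u = np.empty((n1, n2), dtype=object)
    for i in range(n1):
        for j in range(n2):
            kx, ky = (i/n1)*b1 + (j/n2)*b2
            _, vec = np.linalg.eigh(h(kx, ky))
            u[i, j] = vec[:, band]
    link = lambda a, b: np.vdot(a, b) / abs(np.vdot(a, b))
    total = 0.0
    for i in range(n1):
        for j in range(n2):
            u00, u10 = u[i, j], u[(i+1) % n1, j]
            u11, u01 = u[(i+1) % n1, (j+1) % n2], u[i, (j+1) % n2]
            total += np.angle(link(u00, u10)*link(u10, u11)*link(u11, u01)*link(u01, u00))
    return total / (2*np.pi)

# Two-band toy model (a gapped lattice Dirac model), just to exercise the routine.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
h_toy = lambda kx, ky: np.sin(kx)*sx + np.sin(ky)*sy + (1.0 - np.cos(kx) - np.cos(ky))*sz

print(chern_number(h_toy, band=0, b1=(2*np.pi, 0), b2=(0, 2*np.pi)))   # close to an integer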
Here and in our previous paper <cit.>, we point out that the phase B phase can be realized with only Ising and Rashba SOCs. By varying λ_R and λ_I, 𝒞_+,1 and 𝒞_+,4 remain unchanged (as long as λ_I>0 and λ_R≠0), and topological transitions occur only in the middle two bands, E_2 and E_3, associated with the emergence of Dirac nodes. Therefore, we plot the minimum of the direct gap between E_2 and E_3, Δ≡min[E_3(k)-E_2(k)] in moiré Brillouin zone (MBZ) in Fig. <ref>a. The A-B and B-C phase boundaries are where Δ reaches zero, shown as two deep-purple lines, and pinpointed by two gray arrows. The critical Ising/Rashba SOCs here depend on the twist angle θ, w_0/w_1, etc. As we show later, the critical Ising SOCs actually are roughly twice w_0,θ=[E_3(k=Γ_M)-E_2(k=Γ_M)], which is the energy difference between E_2 and E_3 at the Γ_M point without SOC. So the critical SOCs reduce significantly when the twist angle is close to the magic angle θ=1.08^∘, which implies that phases A, B and C are all experimentally observable within a realistic range of SOCs (0-3 meV) <cit.>. Figure. <ref>b shows Δ as a function of λ_I along the gray dashed lines, for three representative λ_R, in Fig. <ref>a, which clearly displays two gap-closing points:λ_I^l (blue arrow) and λ_I^h (red arrow). The minimum of direct gap, Δ, is generally pretty small in phase B. The maximum of Δ in phase B and the width of phase B (λ_I^h-λ_I^l) increase, with the increase of Rashba SOC, λ_R. But we do not see the trend of λ_I^h and λ_I^l merging at λ_R→0 limit. Thus, the two purple lines divide the phase diagram into three regions, with gaps vanishing at the individual topological quantum phase transition points separating the regimes A, B, C. Specifically, for λ_R=10 meV (the middle plot of Fig. <ref>b), Δ reaches zero at λ_I=14.16 meV (marked by the blue arrow) and 15.76 meV (marked by the red arrow), which correspond to the A-B and B-C phase transitions respectively. At the first phase transition (blue arrow), three Dirac cones near the Γ_M point are manifest, as shown in the left plot of Fig. <ref>c. This corresponds to the fact that the changes in 𝒞_+,2 and 𝒞_+,3 are ± 3 during the A-B phase transition. One of the three Dirac cones is located almost right on top of the Γ_M K_M line. In contrast, only one Dirac cone right at Γ_M appears in the B-C phase transition (red arrow), as shown in the right plot of Fig. <ref>c, explaining why the changes in 𝒞_+,2 and 𝒞_+,3 are only ± 1. Importantly, as long as the Dirac point is not located at Γ_M, the symmetry C_3 ensures that the number of Dirac cones is 3 and the change in Chern number must be ± 3 <cit.>. Figure. <ref>d-<ref>h show the band structures at λ_R=10 meV, with different choices of λ_I, accrosing phase A, B and C. The effect of λ_I on the bands is notable: smaller values of λ_I lead to more significant spin mixing (Fig. <ref>d). On the other hand, in the large λ_I limit, the upper bands E_3 and E_4 are almost completely spin-up, whereas the lower bands E_1 and E_2 are almost completely spin-down (Fig. <ref>h), which is similar to the λ_I-only case in Fig. <ref>b. There are always direct band gaps between the four mini bands, except at λ_I=14.16 meV (Fig. <ref>e) and 15.76 meV (Fig. <ref>g), where the direct gap between E_2 and E_3, Δ_23, closes. Fig. <ref>e clearly shows that a Dirac cone (one of the three Dirac cones) is located almost right on top of the Γ_M K_M line, while Fig. <ref>g display one Dirac cone right on top of the Γ_M point. In phase B (Fig. 
<ref>f), the direct band gap Δ_23 is quite small, which is enlarged in the inset to highlight the detailed band structure and the gap closure. Figure. <ref> shows the contour plots of Δ_23(k) and the Berry curvature of the second band Ω_2(k) in the MBZ in k-space, near the A-B (panels a and b) and B-C (panels c and d) phase transitions, respectively. Around the A-B phase transition, there are three minima in the direct gap Δ_23 located near Γ_M. When E_3(k)-E_2(k)→0 at a certain k point, Berry curvature tends to diverge at that point. Therefore Ω_2 in the lower panel tends to diverge negatively in phase A, but positively in phase B, confirming the existence of a topological phase transition. On the other hand, near the B-C phase transition, there is only one minimum of Δ_23 located directly at Γ_M, and Ω_2 at Γ_M tends to diverge positively in phase B and negatively in phase C. Generally speaking, the Berry curvature is extremely localized around the Dirac points near the phase boundaries. We also investigate the twist angle dependence of the single-particle phase diagrams. Similar to θ=1.05^∘, we identify three distinct topological phases—A, B, and C—for twist angles θ=1.07^∘ and θ=1.09^∘. In fact, the critical λ_I for B-C transition in λ_R→0 limit should be the value that just separates two spin-up bands and two spin-down bands (as in Fig. <ref>g). This means the transition between phases B and C roughly follows the relation λ_I^h = 2w_(0,θ), where w_(0,θ) is the energy difference at the Γ_M point between the conduction and valence bands for a given twist angle θ in the absence of SOC. We first extract w_(0,θ) for different twist angels, as shown in Fig. <ref>b-<ref>d, which are summarized in Fig. <ref>a. The minimum of w_(0,θ) occurs roughly near the magic angle θ=1.08^∘. We present the phase diagrams for twist angles θ=1.05^∘, θ=1.07^∘, and θ=1.09^∘ in Fig. <ref>e-<ref>g, using rescaled Ising and Rashba SOCs, λ_I / 2w_0,θ and λ_R / 2w_0,θ. In these phase diagrams, the B-C phase boundaries are determined by Δ < 10^-4 meV, whereas the A-B phase boundaries are marked by changes in Chern numbers. This is because determining the exact location of Dirac points at the A-B boundaries is challenging, as opposed to the Dirac node being precisely at Γ_M for the B-C boundaries. Nevertheless, we can see that the two boundaries in the λ_R→0 limit trace the quantity 2w_(0,θ). When λ_R increases, the width of phase B broadens, both for θ<1.08^∘ and when θ>1.08^∘. However, the width of phase B is narrower when θ is closer to the magic angle, and the "turning" of phase B at large Rashba SOCs behaves differently when passing through the magic angle. This feature may be related to the band inversion that occurs when crossing the magic angle, which needs further investigation. Exactly at the magic angle θ=1.08^∘, 2w_(0,θ)=0.54 meV, which is quite small, making phase C accessible in the experiment. On the other hand, around the magic angle, phase B is so narrow that it is difficult to observe both in experiment and theory. Similarly to θ=1.05^∘, for θ=1.07^∘,1.09^∘, three Dirac cones near the A-B phase transition can be observed near Γ_M, while only one Dirac cone emerges directly at Γ_M during the B-C phase transition, as shown in Fig. <ref>. We also explore the phase diagram with w_0/w_1=0.4 and θ=1.05^∘, as shown in Fig. <ref> in the Appendix. <ref>. The width of phase B is much smaller than that in the w_0/w_1=0.8 case. In fact, phase B is extremely narrow in the chiral limit (w_0/w_1=0). 
The details for the topological phase diagram for other w_0/w_1 value are given in the Appendix. <ref>. In summary, we find three distinct topological phases—A, B, and C—across different twist angles and values of w_0/w_1. The transitions between these phases are marked by notable changes in the electronic properties, such as the number and position of Dirac cones near the Γ_M. Specifically, the transition between phases A and B is characterized by the presence of three Dirac cones near Γ_M, while the transition from B to C is marked by a single Dirac cone at Γ_M. The boundary between phases B and C is approximately determined by the energy difference of bands at Γ_M without SOC. Phase A generally occurs with a small Ising SOC, while phase C can be observed with large Ising SOC or relatively small Ising SOC near the magic angle. Phase B is not easily found close to the magic angle or in the chiral limit due to its narrow width, but it can be more accessible in the presence of a large Rashba SOC. Nevertheless, the Ising SOC required for phases B and C is still experimentally accessible near the magic angle. § DENSITY OF STATES, VAN HOVE SINGULARITY In MATBG, turning on interlayer tunneling between the layers produces avoided crossings, leading to saddle points in the moiré minibands. Saddle points are locations in momentum space where an energy band reaches minima and maxima along orthogonal directions. These saddle points create significantly enhanced peaks in DOS, which are easily identified in scanning tunneling spectroscopy studies and are referred to as VHSs. In this section, we investigate the impact of SOC on the DOS and analyze its effect on the VHS in MATBG. When a VHS is close to the Fermi energy, the increased DOS amplifies the many-body correlation, resulting in various ordering instabilities, such as density waves and superconductivity at low temperatures. The DOS can be calculated using the following equation: ρ(E)=∑_i=1^N1/S∑_j=1^N_kδ(E-E_i(k_j)) where N is the number of bands; N_k=3×10^6, the number of k points in the first MBZ; S=N_k×Ω_0, and Ω_0=√(3)/2a^2_M, which is the real space moiré unit cell; and δ(E-E_i(k_j))≈1/πγ/(E-E(k))^2+γ^2, with γ=0.0005 meV, which is comparable to the mean energy level spacing ∼0.00058 meV. For comparison, we first discuss the DOS for the non-SOC case, fixing the twist angle at θ=1.05^∘. As shown in Fig. <ref> in the Appendix. <ref>, two minibands are present near the charge neutrality point. But due to spin degeneracy, E_1=E_2 and E_3=E_4. There are actually four bands, and each band has one VHS. For the lower two bands, E_1 and E_2, the VHS is of the ordinary type with a logarithmically divergent DOS. At the VHS energy E=1.7045 meV, the two Fermi pockets intersect at a finite angle, as shown in the second left plot in Fig. <ref>d. For E_3 and E_4, a higher-order VHS (with a power-law divergent DOS<cit.>) appears at E=2.0159 meV, characterized by the tangential touching of the two Fermi pockets, as shown in the second right plot in Fig. <ref>d. As the energy surpasses the VHSs, the Fermi contour undergoes a transformation from electron-type (purple) pocket(s) , where the band reaches its minimum, to hole-type (orange) pocket(s), where the band reaches its maximum (Fig. <ref>). The details for the non-SOC case are given in the Appendix. <ref>. Now we discuss the impact of SOC to these VHSs. Without loss of generality, we take the case with λ_I=λ_R=3 meV and θ=1.05^∘ for an example. The situation in all three phases is similar. 
The proximity-induced SOC lifts the spin degeneracy, resulting in four spin-split bands: E_1,E_2,E_3,E_4, as shown in Fig. <ref>a. The corresponding DOS are plotted in Fig. <ref>b and <ref>c. Interestingly, the VHS for each band splits into a pair of VHS, with energies very close to each other. These splittings result in a total of eight ordinary VHSs per valley, each exhibiting a logarithmically divergent DOS. The higher-order VHS observed in the absence of SOC disappears in the presence of SOCs. The precise VHS energies are mentioned in the caption of Fig. <ref> and the corresponding energy contour plots are shown in Fig. <ref>d. Each VHS energy has three VHS points in k-space, where the Fermi pockets intersect at an angle. The VHS points are all close to the original Dirac points without SOC (K_M and K'_M). The splitting of the VHS is attributed to the breaking of the mirror symmetry around k_x by SOC (but the C3 symmetry remains) Consequently, the energy E_k is not the same at K_M and K'_M, causing the Fermi pockets to intersect at slightly different energies while remaining close to K_M and K'_M, unlike in Fig. <ref>. Therefore, for each pair of VHS, one is located near K_M while the other is located near K_M'. More interestingly, the splitting of VHS may affect how the Hall coefficient changes sign when passing VHS. Usually, the Hall coefficient will change from negative to positive when passing through a VHS from low energy, representing the effect of electron pocket or hole pocket, respectively. Does this mean that the Hall coefficient changes sign twice when passing through a pair of VHS for each band, meaning we have electron pocket both at the bottom and the top of the band? This is certainly not the case. In Fig. <ref>a-<ref>d, we plot the energy contour for all four minibands, with color scheme represents the energy measured with respect to the low VHS energy E_b,v^l (solid line) and the high VHS energy, E_b,v^h (dashed line), where b is the band index. For E_1 and E_2, when E<E_b,v^l, there is an electron pocket located at Γ_M, which is similar to the non-SOC case in Fig. <ref>a. For E_3 and E_4, when E>E_b,v^h, there is a hole pocket located at Γ_M, which is similar to the non-SOC case in Fig. <ref>b. Nevertheless, as illustrated in this figure, we have hole (orange) pockets when E>E_b,v^h and electron (purple) pockets when E<E_b,v^l. But in between these two VHS energies, E_b,v^l < E < E_b,v^h, we point out that there may exist multiple pockets, both electron- and hole-type, colored by the purple and orange regime in between the dashed and solid lines. These multiple pockets may cause a cancellation effect, leading to a nearly zero or fluctuating Hall coefficient between these pairs of VHSs. It will be interesting to investigate this experimentally. § SPIN TEXTURE IN MOMENTUM SPACE In this section, we first examine the spin texture in momentum space of the four minibands for a range of twist angels, then show how a non-trivial spin texture evolves in the presence of an out-of-plane electric field. Understanding the spin texture is crucial for studying the pairing mechanism in superconductivity <cit.>, as well as the spintronics in MATBG. §.§ Emergent skyrmion-like spin texture in momentum space We examine the spin texture of the four minibands for a range of twist angles, with λ_I=λ_R=3 meV. Figure. 
<ref> presents the band structures for 1.03^∘≤θ≤1.11^∘, where the color indicates the spin-z expectation value: orange denotes spin-up, white denotes zero polarization, and purple denotes spin-down. For the angles presented, the system is in phase A, except for 1.08^∘, where the system is in phase C. Also, compared to θ=1.07^∘ and 1.09^∘, the cases with θ=1.03^∘ and 1.11^∘ are deeper in phase A because the energy difference between two middle bands at Γ_M, denoted w_0,θ, is larger. Consequently, λ_I/2w_0,θ is smaller, indicating that these angles are further away from the A-B phase boundary. Interestingly, for the cases in phase A, the middle two bands, E_2 and E_3, exhibit skyrmion-like spin texture: the ⟨ S_z⟩ near the Γ_M and the K_M points have different signs. The case with θ=1.08^∘ is in phase C, where two almost spin-up bands are completely separated from two almost spin-down bands (with the Rashba SOC still providing some spin mixing.) Therefore, the skyrmion feature disappears in Phase C. On the other hand, for other angles, the system is in phase A, where two spin-up bands still intersect with two spin-down bands with Ising-only SOC, but the Rashba SOC further avoids band crossings, generating this skyrmion-type spin texture around the Γ_M point. Additionally, the spin-up (orange) region in momentum space is large for θ=1.11^∘ and 1.03^∘, as they are deep in phase A. When closer to the magic angle (1.08^∘), the spin-up region becomes smaller, eventually disappearing at the magic angle. The spin texture profiles for E_1,E_2,E_3,E_4, with θ=1.05, are depicted in Figs. <ref>a-<ref>d. The vector field represents the S_x and S_y components of the spin expectation value, and the spin texture adheres to 𝒞_3 symmetry. For E_1, the spin vectors predominantly point downward (purple). At Γ_M, the spin vector points directly downward. Moving away from Γ_M, the spins tilt slightly away from the perpendicular direction, pointing away from Γ_M in Zone 1 while remaining mostly downward in Zone 2. Zone 1 (pink) and Zone 2 (white) in the momentum space are defined on the left-hand side of Figs. <ref>a. Similarly, for E_4, the spins predominantly point upward (orange). At Γ_M, the spin vector points directly upward. In this case, the spins point toward Γ_M in Zone 2 while remaining mostly upward in Zone 1. For E_2, the spin exhibit a skyrmion-like feature: the spin points purely upward at Γ_M and downward at the corners of MBZ. Moving away from Γ_M, spins start to lie in-plane around the intersection of the orange and purple regions. Around Γ_M, the in-plane spin components point toward the center of the MBZ. On the other hand, for E_3, the trend is the opposite. At Γ_M, the spin points purely downward, while pointing upward at the corners of the MBZ, again indicative of a skyrmion-like feature. The spins also lie in-plane around the intersection of the orange and purple regions. However, in this scenario, the in-plane spin components around Γ_M point away from the center of the MBZ. Moreover, on average, the in-plane spin components in Zone 1 are larger than that in Zone 2 for E_1,E_3, and opposive for E_2,E_4. For θ=1.08^∘, the spin textures for four minibands are shown in Fig. <ref>. Unlike θ=1.05^∘, E_1 and E_2 here are almost spin down, while E_3 and E_4 are almost spin up, consistent with Fig. <ref>d. The skyrmion-like feature completely disappears here. In addition, the in-plane spin components, on average, in Zone 1 and Zone 2 are less distinct compared to θ=1.05^∘ and θ=1.11^∘ cases (Fig. 
<ref>). For θ=1.11^∘, the skyrmion-like feature (spin-up region) for the middle two bands extends to the M_M point in momentum space, which means that its size in momentum space is larger than in the θ=1.05^∘ case, because it is deeper inside phase A. Here, the in-plane spin components in Zone 2 are larger than those in Zone 1 for E_1,E_3, and the opposite is true for E_2,E_4. This occurs because the band inversion already takes place at this angle. In summary, we observe a skyrmion-like spin texture for various twist angles, which disappears in phase C. Although most of the skyrmion-like features are shown here in phase A, they can also be observed in phase B, as shown in the inset of Fig. <ref>f, although the spin-up region is extremely tiny, because it is very close to phase C. We conclude that this skyrmion-like spin texture is stable across a wide range of twist angles and SOCs, which is actually a crucial ingredient in the interband paring mechanism for superconductivity in MATBG <cit.>. §.§ Spin texture in presence of out-of-plane electric field We now show how the spin texture evolves in the presence of an out-of-plane electric field. As mentioned before, the Rashba SOC is naturally present when the inversion symmetry is broken. Therefore, applying an out-of-plane electric field in MATBG also modifies the Rashba SOC. In this case, a dipolar coupling induces transitions between the p_z and s orbitals, flipping the spin <cit.>. In MATBG, the out-of-plane electric field also gives rise to layer polarization, which can make the spin texture fully in-plane tunable <cit.>. Engineering SOC would influence the correlated phases <cit.> and superconductivity <cit.>. Specifically, engineering the Rashba SOC would control the polarization of spin accumulation and spin current <cit.>. In this section, we study how the in-plane spin components rotate around Γ_M and K_M points in the +K valley. We find that around K_M, a radial Rashba spin texture can be readily achieved in MATBG, while around Γ_M, the spin texture can also be tuned to some extent in the presence of electric field. The general description of Rashba SOC requires the so-called Rashba angle ψ (between the electron's momentum and spin). The general form of the Rashba SOC for C_3 symmetric systems at K point, is <cit.>, h_R=λ_R/2e^-iψ s_z/2(τσ_xs_y-σ_ys_x)e^iψ s_z/2 The Rashba angle actually depends on the twist angle between the upper layer of graphane and the SOC layer <cit.>, which we ignored in the previous section. In this section, we show that the Rashba angle can be tuned from the radial (ψ=90^∘) to the tangential (ψ=0) around the K_M point by a displacement field. To incorporate an out-of-plane electric field, we put potentials of ± u onto the top and bottom layers (positive u - positive field in z): ℋ=ℋ_0,++ℋ_u, where ℋ_u=[[ uI 0; 0 -uI ]], and I is a four dimensional identity matrix. The outer blocks describe the layer degree of freedom (the first block is the top layer). The inner blocks describe the sublattice degree of freedom crossing the spin degree of freedom. We present our results mainly at the twist angle θ=1.07^∘, because the effect of the out-of-plane electric field is more pronounced near the magic angle and right at magic angle, the skyrmion-like feature disappears (Fig. <ref>d). Later, we also comment on the results at other angles. As shown in Fig. <ref>, the bandwidth of the flat bands broadens as the displacement field increases. 
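To make the two terms written above concrete, the following minimal numpy sketch assembles the 4×4 Rashba block with a general Rashba angle ψ and the 8×8 layer-potential term ℋ_u. The basis ordering (sublattice ⊗ spin within each layer, top layer first) follows the description in the text, while the function names, the prefactor conventions, and the way these blocks would be added to the top-layer sector of the continuum Hamiltonian before diagonalization are assumptions for illustration.

```python
import numpy as np

# Pauli matrices: sigma acts on the sublattice, s acts on the spin
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def rashba_block(lam_R, psi, tau=+1):
    """4x4 Rashba block (sublattice x spin) for valley tau:
    h_R = (lam_R/2) e^{-i psi s_z/2} (tau sigma_x s_y - sigma_y s_x) e^{+i psi s_z/2}."""
    M = tau * np.kron(sx, sy) - np.kron(sy, sx)
    # e^{-i psi s_z/2} acts on spin only: diag(e^{-i psi/2}, e^{+i psi/2})
    U = np.kron(I2, np.diag([np.exp(-1j * psi / 2), np.exp(+1j * psi / 2)]))
    return 0.5 * lam_R * U @ M @ U.conj().T

def layer_potential(u):
    """8x8 layer-potential term H_u = diag(+u I_4, -u I_4), top layer first."""
    return u * np.kron(np.diag([1.0, -1.0]), np.eye(4))
```

Setting psi = np.pi/2 versus psi = 0 interpolates between the radial (ψ=90^∘) and tangential (ψ=0) limits of the in-plane spin texture discussed in this section.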
At large u=80 meV, the flat bands eventually merge into the higher energy bands, leading to the disappearance of the isolated flat bands (Fig. <ref>d). We limit our study to the vicinity of isolated flat bands, meaning u<80 meV. Overall, the top and bottom bands, E_1 and E_4, generally exhibit spin up/down characteristics even in the presence of an electric field. Interestingly, the spin-up/spin-down region for the E_2/E_3 in momentum space is significantly enhanced, as one increases the strength of the electric field: the position where ⟨ S_z⟩=0 (white color) moves closer to K_M instead of Γ_M as u increases. We now focus on the evolution of the spin texture around K_M in the presence of an electric field. The spin textures corresponding to the u=0,15,30 meV are shown in Fig. <ref>. We find that the spin textures for E_2 and E_3 are similar, while those for E_1 and E_4 are also similar but opposite to E_2 and E_3. At u=0 meV, the spin textures are almost tangential to the circle centering K_M for all four bands ( E_1 is the most tangential), while at u=15 meV, they are purely radial. At u=30 meV, the spin textures for all four bands deviate from being purely radial. The maximum in-plane spin expectation value for u=0 meV is 0.012, while the maximum for u=30 meV is 0.09. The scale of S_x,S_y in u=0 meV plot is quadrupled for visibility. Despite the strong out-of-plane spins, our calculations clearly reveal the emergence of in-plane radial Rashba textures near K_M, in the presence of an experimental feasible out-of-plane electric field <cit.>. The distance between two graphene layers is ∼0.335 nm <cit.>, the experimental electric field is around 1V/nm <cit.>, so energy difference between two layers 2u is around 33.5 meV if the dielectric constant is 10. The actual layer potential difference is hard to estimate and depends on the sample. Figure. <ref> provides a more detailed description of how the electric field modulates the spin texture of E_1 around K_M. As the displacement field increases, the in-plane spin expectation values become larger, while the z-component remains roughly the same. The maximum in-plane spin expectation value for u=10 meV is 0.03, while the maximum for u=60 meV is 0.14. Without an electric field (u=0 meV), the spin texture of E_1 appears to be purely tangential. When u≠0, the spins start to deviate from the tangential direction and develop a radial spin texture near u=15 meV. Remarkably, we find that in a range of u from 10 meV to 30 meV, the spin texture near K_M remains generally radial and is not highly susceptible to changes in the electric field. Here, we discuss the effect of the twist angle. At θ = 1.05^∘, the maximum u at which isolated flat bands still exist is around 70 meV. As shown in Fig. <ref>, when θ=1.05^∘, the rotation of E_1 at u=0 meV is opposite to that in the θ=1.07^∘ case (top left plot in Fig. <ref>). This difference arises because band inversion is already occurring at K_M. Generally speaking, the electric field here plays the same role, tuning the spin texure away from tangential. What differs is that within the range of electric field strength where isolated flat bands still exist, we do not observe a purely radial spin texture. This indicates that the out-of-plane electric field is more effective in tuning the spin texture when the twisted bilayer graphene is closer to the magic angle, which is 1.08^∘ in our case. We also examine how spin textures around Γ_M evolve in the presence of an electric field. As shown in Fig. 
<ref>, without the electric field, the spin textures for all four bands around Γ_M are radially oriented. The scale of S_x,S_y in Fig. <ref> is decreased by a factor of 2.5, compared to the middle panel of Fig. <ref>, for better illustration. When an electric field is applied, the spin texture deviates from being purely radial, and the tangential components start to appear. A larger electric field is needed to tune the spin texture around Γ_M, compared to the previous cases (K_M), meaning that we cannot tune the spin texture from purely tangential to purely radial at this twist angle, within the range of u where the moiré bands are isolated from the remote bands. In summary, we examine the spin textures in the presence of an out-of-plane electric field for MABTG-WSe_2. We find that the electric field can tune the in-plane spin component from purely radial to tangential around K_M, while it has a less pronounced effect on the spin texture around the Γ_M point. Tuning spins in plane or engineering the spin texture is crucial to achieve unconventional charge-to-spin conversion <cit.>, as well as influence correlated phases and superconductivity. § DISCUSSION Using the BM model, we construct the topological phase diagram of MATBG across different twist angles, in the presence of Ising and Rashba SOCs. Our findings reveal that the introduction of SOCs into one layer of TBG significantly alters the band structure, leading to the emergence of three distinct topological phases in MATBG. Importantly, we find that the critical SOC strength depends on the twist angle, and all three phases can be realized with the experimentally accessible SOC strength (∼ 1 meV) for systems with angles very close to the magic angle. We also find that the introduction of SOC splits the flat bands into four spin-split mini-bands, each featuring its own pair of VHSs, leading to a total of eight VHSs per valley in the DOS. The SOCs modify the DOS of MATBG, not only by introducing additional VHSs but also by altering the type of VHSs. The splitting of VHS for each band may significantly impact the Hall conductivity, which may fluctuate or remain nearly zero within each pair of VHS energies due to the possible canceling effect of multiple pockets. This should be experimentally investigated. Moreover, we discover a skyrmion-like spin texture in momentum space in phase A and B, and it eventually disappears as the system transitions to phase C. Additionally, we show that this skyrmion-like feature can be tuned by an out-of-plane electric field, along with the spin textures around the K_M and Γ_M points. This tunability opens up possibilities for controlling the spin texture in MATBG, which would potentially influence the correlated phases and superconductivity in the system. For example, the interband superconductivity in Ref. <cit.> is more likely to happen below the magic angle because the spin textures do not favor interband Ising pairing above the magic angle. These skyrmionic features, in addition to being of intrinsic interest, may also be useful to experimentally observe the topological phase transition to the phase C. In this work, we focus only on the continuum model (the BM model), which is valid at low energies and long wavelengths. This is reasonable because in this work, we only focus on the twist angles that are close to the magic angle, leading to isolated flat bands with narrow bandwidth in all cases. In addition, the phase diagram of MATBG is constructed only at the single-particle level. 
The continuum low-energy band description should be valid in these situations. It is expected that interactions will likely modify the low-temperature phase diagram presented in this paper, since flat bands typically enhance interaction effects. For example, when the Coulomb interaction is considered, valley polarization likely prevails over the entire doping region for a range of twist angles. Therefore, the topological insulators at integer fillings, which are predicted in Fig. <ref> and Ref. <cit.>, are likely absent due to the time-reversal breaking by the valley polarization. Moreover, the pairing between two time-reversal-related bands is suppressed due to valley polarization, so the inter-band pairing superconductivity phases emerge when the electron-phonon interactions are included <cit.>. We now mention several future directions. One can investigate possible impacts of the splitting of VHSs due to SOCs on the Hall conductance and quantum oscillations in MATBG. In addition, one can also study the polarization of spin accumulation and spin current in the presence of SOC and an out-of-plane electric field in MATBG, which should be able to serve as a platform with tailored electronic and spintronic properties. Inclusion of electron-electron interactions in the theory is also an important open and difficult challenge for the future. § ACKNOWLEDGEMENTS We are grateful to Jed Pixley, Zhentao Wang, Ming Xie, Jihang Zhu, Jay Sau, and Silas Hoffman for useful discussions. This work is supported by the Laboratory for Physical Sciences (Y.T., Y.-Z.C., and S.D.S.), and F. W. is supported by the National Key Research and Development Program of China (Grants No. 2022YFA1402401 and No. 2021YFA1401300) and the National Natural Science Foundation of China (Grant No. 12274333). § CALCULATION OF BERRY CURVATURE AND CHERN NUMBER To characterize the topology of the system, we calculate the Berry curvature Ω by numerically computing Wilson loops on a momentum-space rhombus grid, in which each small plaquette spans a momentum-space area 𝒜_0=𝒜_MBZ/𝒩^2, where 𝒜_MBZ is the momentum-space area of the MBZ and 𝒩=300 in our calculations. The Berry curvature is approximated by <cit.> Ω_b(k=(k_1+k_2+k_3+k_4)/4) ≈arg[⟨ u_k_1,b|u_k_2,b⟩⟨ u_k_2,b|u_k_3,b⟩⟨ u_k_3,b|u_k_4,b⟩⟨ u_k_4,b|u_k_1,b⟩]/𝒜_0, where k_1→k_2→k_3→k_4→k_1 traces a small rhombus plaquette of area 𝒜_0 in a counterclockwise manner. The Chern number 𝒞_b of the bth band can then be calculated via 𝒞_b=1/2π∫_MBZdkΩ_b(k). The Chern numbers of the two valleys are related by a minus sign. The overall Chern number is zero due to the time-reversal symmetry. § SINGLE-PARTICLE PHASE DIAGRAM WITH DIFFERENT INTERLAYER HOPPING ENERGY In this Appendix, we use the interlayer hopping energy w_1=110 meV and w_0/w_1=0.4 to construct the single-particle phase diagram at twist angle θ=1.05^∘. As shown in Fig. <ref>, we still find three distinct topological phases A, B, and C. The A-B boundary is marked by purple, while the B-C boundary is marked by orange. The energy difference between the lower and upper bands at Γ_M without SOCs is w_(0,θ=1.05^∘)=4.49 meV here. We can see that with a smaller w_0/w_1 ratio (0.4), the width of phase B is smaller than that in the case with w_0/w_1=0.8 (Fig. <ref>e). In fact, in the chiral limit, w_0/w_1=0, phase B is extremely narrow. For example, when λ_R=6 meV, the width of phase B, λ_I^h-λ_I^l, is ∼1.403 meV for w_0/w_1=0.8, ∼0.055 meV for w_0/w_1=0.4, and ∼0.006 meV for w_0/w_1=0. Figure <ref> shows the band structure across the three topological phases.
The evolution of the band structure from phase A to C is similar to that in the case with w_0/w_1=0.8 in Fig. <ref>d-h. At the A-B boundary, one of the Dirac cones is located on the K_MΓ_M line (Fig. <ref>b), while the Dirac cone is located at Γ_M at the B-C boundary (Fig. <ref>d). § DENSITY OF STATES WITHOUT SOC In 2D electron systems with an energy dispersion E(k), a VHS with diverging DOS occurs at a saddle point k_v, determined by ∇_k E=0. With the right choice of axes, the energy dispersion near k_v can then be Taylor expanded as <cit.> E-E_v=-α p_x^2+β p_y^2+γ p_xp_y^2+κ p_yp_x^2+..., where E_v is the VHS energy, p=k-k_v is the momentum measured from the saddle point, and α,β,γ,κ are the expansion coefficients. When αβ<0, the VHS is ordinary, with a logarithmically diverging DOS. If αβ=0, a higher-order VHS occurs. Specifically, if α=β=0, the Taylor expansion of E_k starts from at least the third order and a type-I higher-order VHS occurs, describing an intersection of three or more Fermi surfaces at a common k point <cit.>, which is outside the scope of this paper. When α=0, β≠0, or vice versa, a type-II higher-order VHS is present <cit.>, characterized by a power-law divergence in the DOS, which enhances electron correlation significantly. Here we present the DOS for the non-SOC case at twist angle θ=1.05^∘. As shown in Fig. <ref>a, two minibands are present near the charge neutrality point because, due to spin degeneracy, E_1=E_2 and E_3=E_4. The Dirac cones are located at the corners of the MBZ, labeled by K_M. Fig. <ref>b shows the corresponding DOS per spin per valley. Each band has one VHS, but due to spin degeneracy there are actually four VHSs per valley. The VHS in the lower-energy bands E_1,E_2, located at E=1.7045 meV, is of the ordinary type with a logarithmically diverging DOS. In this scenario, the two Fermi pockets intersect at a finite angle, as shown in the second-left plot in Fig. <ref>d, giving six VHS points in k-space. Moreover, as shown in Fig. <ref>a, the Fermi contour undergoes a transformation from a single electron-type (purple) pocket at the center of the MBZ (Γ_M), where the band reaches its minimum, to five separate hole-type pockets where the band reaches its maximum, as the energy surpasses the VHS. Two Dirac pockets are located at the corners of the MBZ (K_M and K'_M), while the other three are located inside the MBZ. In contrast, at E=2.0159 meV, higher-order VHSs appear for E_3,E_4, characterized by the tangential touching of the two Fermi pockets, as shown in the second-right plot in Fig. <ref>d. In this scenario there are only three VHS points in k-space. As shown in Fig. <ref>b, when the energy exceeds the VHS, the Fermi contour changes from two distinct electron-type Dirac pockets at the MBZ corners (K_M and K'_M) to a single hole-type pocket that encompasses the center of the MBZ. These transformations result in a switch between electron and hole charge carriers, as indicated by a change in the sign of the Hall coefficient.
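The broadened-δ evaluation of ρ(E) used throughout this section and the appendix above can be summarized in a short numerical sketch; it assumes the eigenvalues E_i(k_j) have already been computed on a dense k-grid, and the array names and example values are illustrative rather than taken from the text.

```python
import numpy as np

def dos(band_energies, e_grid, gamma=0.0005, omega0=1.0):
    """Lorentzian-broadened density of states.

    band_energies : array (N_bands, N_k) of eigenvalues E_i(k_j) in meV.
    e_grid        : energies E (meV) at which rho(E) is evaluated.
    gamma         : broadening in meV, comparable to the mean level spacing.
    omega0        : real-space moire unit-cell area, so that S = N_k * omega0.
    """
    S = band_energies.shape[1] * omega0
    rho = np.empty(len(e_grid), dtype=float)
    for n, E in enumerate(e_grid):
        # delta(E - E_i(k)) ~ (1/pi) * gamma / ((E - E_i(k))^2 + gamma^2)
        rho[n] = np.sum(gamma / np.pi / ((E - band_energies) ** 2 + gamma ** 2)) / S
    return rho

# Example usage: scan an energy window and read off the VHS peaks in rho(E)
# e_grid = np.linspace(1.5, 2.5, 4000); rho = dos(band_energies, e_grid)
```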
http://arxiv.org/abs/2406.18512v1
20240626173351
"Is ChatGPT a Better Explainer than My Professor?": Evaluating the Explanation Capabilities of LLMs in Conversation Compared to a Human Baseline
[ "Grace Li", "Milad Alshomary", "Smaranda Muresan" ]
cs.CL
[ "cs.CL" ]
"Is ChatGPT a Better Explainer than My Professor?": Evaluating the Explanation Capabilities of LLMs in Conversation Compared to a Human Baseline gl2676@barnard.edu Barnard College ma4608@columbia.edu Columbia University smuresan@barnard.edu Barnard College § ABSTRACT Explanations form the foundation of knowledge sharing and build upon communication principles, social dynamics, and learning theories. We focus specifically on conversational approaches for explanations because the context is highly adaptive and interactive. Our research leverages previous work on explanatory acts, a framework for understanding the different strategies that explainers and explainees employ in a conversation to explain, understand, and engage with the other party. We use the 5-Levels dataset, which was constructed from the WIRED YouTube series by Wachsmuth et al. and later annotated by Booshehri et al. with explanatory acts <cit.>. These annotations provide a framework for understanding how explainers and explainees structure their responses. With the rise of generative AI in the past year, we hope to better understand the capabilities of Large Language Models (LLMs) and how they can augment expert explainers' capabilities in conversational settings. To achieve this goal, the 5-Levels dataset [We use Booshehri et al.'s 2023 annotated dataset with explanatory acts.] allows us to audit the ability of LLMs to engage in explanation dialogues. To evaluate the effectiveness of LLMs in generating explainer responses, we asked human annotators to evaluate 3 different strategies: * S1: Baseline - human explainer response * S2: GPT4 Standard - GPT explainer response given the previous conversational context * S3: GPT4 w/ EA - GPT explainer response given the previous conversational context and a sequence of explanatory act(s) (EAs) to integrate into its response. We found that the GPT-generated explainer responses were preferred over the human baseline, emphasizing the challenge of effective science communication between experts and everyday people. Additionally, the annotators preferred S2: GPT4 Standard responses over S3: GPT4 w/ EA responses, mainly due to their more concise and succinct responses. For the few times that S3 outperformed S2, annotators noted dimensions of explainee engagement and use of thought-provoking questions as the main reasons for better performance, demonstrating the value of providing explicit instructions for an LLM to follow when generating a response. These results demonstrate the ability of LLMs to generate responses based on sequences of explanatory acts, allowing future research to explore the specific contexts and strategies of explanations to improve science communication. Additionally, the results demonstrate the capabilities of LLMs to improve expert explainers' conversational skills and strategies, further emphasizing how interfaces can improve and augment an explainer's abilities.
[500]Human-centered computing Empirical studies in HCI [300]Human-centered computing HCI design and evaluation methods Figure: Sample generations for a given explanation conversation. § INTRODUCTION §.§ Background Explanations are an important part of science communication because they make science more accessible to the general audience. But it can be hard to bridge the knowledge gap between expert explainers and everyday people who have no prerequisite knowledge of the topic. In this research, we focus specifically on explanation conversations where both the explainer and explainee are engaged in a dialogue to help the explainee understand a concept. These explanation conversations are rich for investigation because the flow of these conversations changes and adapts depending on the context and background of the explainer and explainee engaged in the conversation <cit.>. For example, the method that an explainer would take to explain a concept to a 5-year-old will be vastly different from how they would explain the concept to a college student. Various factors such as the explainer's and explainee's proficiency and personal interest in the subject area affect how each party will engage in the conversation. This raises the question: How can explainers tailor their explanation to the explainee's background and proficiency level to increase the explainee's understanding of the topic? §.§ Related Work Previous research has focused on creating analytical frameworks to uncover and understand the patterns behind the explanation conversations between explainers and explainees. Booshehri et al. have explored how experts and explainees engage in explanation conversations through an inventory of "explanatory acts," which are categories to characterize the contributions and intentions behind the explainer's and explainee's utterances. Booshehri et al. developed 20 explanatory acts for the purpose of fine-grained explanation annotation, increasing the understanding of the interaction dynamics between the explainer and explainee. In Figure <ref>, an example of an annotated conversation using Booshehri et al.'s inventory of explanatory acts illustrates how sentences can be broken down into multiple different explanatory acts. By focusing on span-level annotations, Booshehri et al.'s inventory of explanatory acts allows for a fine-grained categorization and understanding of the different strategies that explainers and explainees use in their conversations with each other. For example, the explanatory acts include categories like Elaboration, Definition question, Analogy, and more to pinpoint the specific strategies that explainers and explainees employ in a conversation. The entire list of explanatory acts is included in the Appendix of the paper. While this research focuses on developing a framework to understand how human explainers and explainees explain topics to each other, there is still a lack of research comparing the effectiveness of human explanations and those generated by Large Language Models (LLMs).
§.§ Large Language Models The field of communication has also shifted due to the increasing availability of LLMs, which have raised concerns about the effectiveness of LLM-generated explanations and the reliability of the generated text. While these LLMs have been trained on vast amounts of data sourced from the internet, not much is known about whether these models have internalized the ability to model human-like explanations. Furthermore, little research has been done to evaluate language models on their abilities to engage in explanatory conversations in the role of an explainer. Our research aims to shed light on two areas: first, how effectively LLMs are able to generate explainer responses, and second, whether an LLM can formulate a response based on a given sequence of explanation moves. These two areas will help better understand how LLM-generated responses compare to human responses and can provide insights into how to develop interactive explanation systems and how LLMs can better augment human explanation capabilities to improve the quality of human-explainer responses. Additionally, evaluating whether LLMs are able to formulate responses based on explanatory moves can further the field of explainable AI (XAI) systems in understanding how well LLMs are able to model explanation responses based on these explanation frameworks. We design a study that performs a side-by-side comparison of human expert responses and different LLM-generated responses to better understand the effectiveness of LLM explanations in science explanation dialogues. We hypothesize that because LLMs are trained on large sources of data, their explanation qualities might implicitly model human explanations, but would require additional scaffolding to ensure consistency in maintaining engagement with the explainee. § METHODS §.§ Data In this study, we use WIRED magazine's 5 Levels of Explanation YouTube video dataset, annotated by 3 different annotators with explanatory act labels from Booshehri et al.'s proposed inventory. WIRED magazine's "5 Levels" video series contains conversations between one expert and 5 different people, each at a different level of proficiency in the topic: a child, a teenager, an undergraduate student, a graduate student, and a colleague. This dataset is most suitable for our specific use case because it illustrates staged conversations between an expert explainer and an explainee that are filmed in a studio environment. The staged environment allows the explanation to be distilled down to its core components without the noise that might occur in in-the-field explanations. These staged conversations allow both the explainer and explainee to succinctly engage with each other to understand a certain topic. For this study, we specifically focus on conversations between an expert explainer and a college-level explainee for the purpose of standardizing the evaluation metric. We choose college-level explainees because we found these conversations yielded the best balance of depth and technical nuance for a concept. When explaining to a child or a high school student, the explainer oversimplified the topics. Alternatively, with graduate students and other colleagues, the explainer dove straight into the technical mechanics of the topics without any preliminary topic introduction.
With college students, however, the explainer often provided enough context on the topic for a general audience member to understand, while also providing additional technical depth. Additionally, we focused on STEM topics to align the research with the goal of improving science communication. There were 11 topics (virtual reality, sleep, nano-technology, machine learning, lasers, hacking, gravity, dimensions, connectomes, blockchain, and black holes). §.§ Study Design The purpose of the study was to evaluate three different methods of generating explainer responses in an explanation conversation. The first approach, S1, is the Baseline approach that uses the human explainer's actual response from the 5-Levels dataset. The second approach, S2, is the Standard Prompting approach that provides the previous conversation context to OpenAI's GPT4 and asks the LLM to generate an explainer's response <cit.>. The third approach, S3, is the Prompting with Explanation Acts (EAs) approach that provides the previous conversation context and the sequence of observed explanatory acts from the annotated 5-Levels dataset as an outline for the LLM to follow as it is generating its response <cit.>. Figure <ref> illustrates the different prompting strategies. To evaluate the different strategies of generating explainer responses, we used the 5-Levels dataset to create the conversational context for each explainer response. We manually parsed the 5-Levels dataset to incrementally concatenate pairs of explainer and explainee utterances together to build out an entire conversation, ensuring that every sequence ends on an explainee utterance. Ending on an explainee utterance is important because it allows the explainer the ability to directly or indirectly respond to the explainee's last utterance. In this manner, for every explainee utterance in a conversation, we generate a corresponding explainer utterance given the two different prompting strategies illustrated in Figure <ref>. In the study, we randomized the order in which the response conditions were displayed, and each user experienced the same sequence of randomized response labels. §.§ Evaluation Criteria We designed a custom evaluation interface in Label Studio[Label Studio: https://labelstud.io/] that scored each explainer response on 8 different dimensions on a Likert rating scale of 1-5 (ranging from Strongly Disagree to Strongly Agree) and one ranking question that evaluated the different explainer responses against each other. The 8 dimensions that each explainer response was evaluated on included: * Coherence: The explainer's last utterance is clear and coherent. * Concise: The explainer's last utterance is concise. * Conversational: The explainer's last utterance is conversational and not overly formal. * Acknowledgement: The explainer's last utterance acknowledges the explainee's utterance. * Appropriate: The explainer's last utterance responds appropriately to the explainee's utterance. * Deepens or Expands: In the context of the entire conversation, the explainer's last utterance deepens or expands the conversation. * Active Guidance: In the context of the entire conversation, the explainer's last utterance actively guides the course of the conversation. * Engagement of Explainee: In the context of the entire conversation, the explainer's last utterance engages the explainee in the conversation.
These dimensions were designed based on Li et al.'s questionnaire for evaluating a chatbot's responses, in the setting of evaluating the effectiveness of different chatbots in assisting English language learners to enhance their conversational skills <cit.>. We used the language and content quality dimensions from Li et al.'s questionnaire design to inform parts of our evaluation criteria and included additional questions to further probe the efficacy of the explainer responses as it relates to previous findings in the field of effective explanations. The ranking section allowed annotators to rank the different outputs against each other through a drag-and-drop interface. This provided insights into how well certain conditions performed in relation to each other. In addition to the rating scale, the annotators were also asked to provide a rationale for their ranking. The ranking system does not allow for ties, so each of the three explainer responses had to be assigned a unique value from 1-3. The evaluation interface was designed in LabelStudio. Figure <ref> illustrates a sample interface that has been shortened. §.§ Participant Recruitment We recruited participants from Upwork, a professional crowdworking platform. We hired 3 annotators, all with a 100% job success rate on the platform. Each annotator labeled 104 tasks; each task included evaluating 3 different explainer responses on 8 different dimensions, ranking the 3 responses against each other, and providing a rationale for the ranking, which resulted in 26 questions for each task. We paid each annotator $135 for around 7-10 hours of work. All hired annotators signed a consent form and were onboarded onto the annotation platform, LabelStudio, where they received detailed instructions for how to complete the annotations. § RESULTS We calculated the inter-annotator agreement score for each of the two sections: the 8-dimension rating section and the ranking section. We used Krippendorff's alpha to evaluate the inter-annotator agreement on each of the 8 dimensions. We then used Kendall's Tau to calculate the pairwise inter-annotator agreement for each task's rankings. We found that Annotator 21549 and Annotator 21551 had a pairwise agreement of 0.42, illustrating moderate agreement in rankings. All three annotators together had an inter-annotator agreement score of only 0.167. This demonstrates how this annotation task is highly variable due to each annotator's own specific preferences for engaging in explanation conversations.
| Rank 1 | Rank 2 | Rank 3
S1: Baseline | 18% | 22% | 59%
S2: GPT Standard | 49% | 34% | 17%
S3: GPT w/ EA | 33% | 44% | 23%
Table: Percent distribution of S1, S2, and S3 explainer results for each ranking. Results were evaluated over all 312 annotator labels.
As seen in Table <ref>, S2: GPT Standard accounted for 49% of the Rank 1 results, demonstrating that it outperforms S1 and S3. Comparatively, S1: Baseline performs the worst, with over 59% of its outputs being ranked last (Rank 3 out of 3 possible choices). To better understand the differences in ranking between S2: GPT Standard and S3: GPT w/ EA, Table <ref> illustrates more detailed percentages and evaluations of when each condition outperforms the other.
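The agreement statistics above can be reproduced with standard tooling; the sketch below assumes the krippendorff and scipy Python packages (whether these exact libraries were used in the study is not stated), and the small rating and ranking arrays are placeholders rather than the study's data:

```python
import numpy as np
import krippendorff
from scipy.stats import kendalltau

# Likert ratings for one dimension: rows = annotators, columns = items
# (np.nan marks a missing rating); treating the scale as ordinal is an assumption
ratings = np.array([
    [4, 3, 5, 2, 4],
    [4, 4, 5, 3, 3],
    [3, 3, 4, 2, 4],
], dtype=float)
alpha = krippendorff.alpha(reliability_data=ratings,
                           level_of_measurement="ordinal")

# Rankings of (S1, S2, S3) for one task, from two annotators
rank_a = [3, 1, 2]   # e.g., S1 ranked last, S2 first, S3 second
rank_b = [3, 2, 1]
tau, _ = kendalltau(rank_a, rank_b)
print(f"Krippendorff alpha = {alpha:.3f}, Kendall tau = {tau:.3f}")
```

Pairwise ranking agreement between two annotators would then presumably be the average of tau over all 104 tasks.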
The most common rationale for this ranking was that the S3 strategy was "a little too long," "overly wordy," "long winded" and "over-explains in several areas and is longer than necessary," according to annotators. On average, S3: GPT w/ EA responses were 10 words longer than S2: GPT Standard responses. Alternatively, only 24% of all annotated tasks labeled S3: GPT w/ EA in Rank 1 and S2: GPT Standard in Rank 2. The rationale that many annotators wrote included responses such as "actively guides the conversation," "engaged the explainee with a followup question," and "asks a thought-provoking question prompting deeper conversation."
Rank 1 | Rank 2 | Percentage
S2: GPT Standard | S3: GPT w/ EA | 35%
S3: GPT w/ EA | S2: GPT Standard | 24%
Table: Percent distribution of when S2 and S3 rank first and second out of all 312 annotated occurrences.
§ DISCUSSION AND FUTURE WORK This study further demonstrates that more work needs to be done to help experts bridge the knowledge gap between themselves and their audiences. While LLM-generated responses have been shown to perform better than the baseline human responses, these findings cannot be used to advocate for LLMs to replace the function of expert explainers. Instead, this research demonstrates how LLMs are able to augment expert explainers' capabilities by offering real-time support in tailoring more effective explanations for a given audience. Additionally, based on the qualitative results from the annotators' responses, one of the main reasons that S2: GPT Standard outperformed S3: GPT w/ EA was its conciseness, with an average of 10 fewer words per response. This demonstrates that being concise is important for not overwhelming the explainee with information, and that carefully planning and segmenting an explanation into manageable chunks is important for information communication and retention.
Additionally, as seen from the low inter-annotator agreement, future research can be conducted on designing systems to aid in the automatic personalization of explanations, conversation structures, and styles to improve the experience regardless of personal preferences, allowing for an adaptable experience based on the explainee. § INVENTORY OF EXPLANATORY ACTS
http://arxiv.org/abs/2406.18791v1
20240626232841
Invited: Human-Inspired Distributed Wearable AI
[ "Shreyas Sen", "Arunashish Datta" ]
eess.SP
[ "eess.SP", "cs.SY", "eess.SY" ]
Invited: Human-Inspired Distributed Wearable AI Shreyas Sen and Arunashish Datta Elmore Family School of Electrical and Computer Engineering, Purdue University, West Lafayette, USA E-mail: {shreyas,datta30}@purdue.edu July 1, 2024 ================================================================================================================================================================================= § ABSTRACT The explosive surge in Human-AI interactions, fused with a soaring fascination in wearable technology, has ignited a frenzy of innovation and the emergence of a myriad of Wearable AI devices, each wielding diverse form factors, tackling tasks from health surveillance to turbocharging productivity. This paper delves into the vision for wearable AI technology, addressing the technical bottlenecks that stand in the way of its promised advancements. Embracing a paradigm shift, we introduce a Human-Inspired Distributed Network for Wearable AI, enabled by high-speed ultra-low-power secure connectivity via the emerging 'Body as a Wire' (Wi-R) technology. This breakthrough acts as the missing link: the artificial nervous system, seamlessly interconnecting all wearables and implantables, ushering in a new era of interconnected intelligence, where featherweight, perpetually operating wearable AI nodes redefine the boundaries of possibility. Wearable AI, Wi-R, Internet of Bodies (IoB) Invited: Human-Inspired Distributed Wearable AI Shreyas Sen and Arunashish Datta Elmore Family School of Electrical and Computer Engineering, Purdue University, West Lafayette, USA E-mail: {shreyas,datta30}@purdue.edu July 1, 2024 ================================================================================================================================================================================= § INTRODUCTION The year 2024 is heralded as the dawn of "Wearable AI," as noted by Forbes <cit.>. This emergence finds its roots in the introduction of Large Language Model (LLM) based tools in late 2022, sparking an unprecedented AI boom. With the natural language processing abilities of AI, newer and more intuitive ways of interacting with technology have proliferated. Moreover, decades of scaling semiconductor technology have culminated in a pivotal moment where significant sensing, computing, and communication power can be seamlessly integrated into miniaturized wearable devices. This convergence has paved the way for the rapid development of wearable devices empowered with Artificial Intelligence (AI). The synergy between the AI boom and the surging popularity of wearable technology has birthed a myriad of Wearable AI devices. From discreet pins <cit.> and pocket assistants <cit.> to elegant necklaces <cit.> and immersive AR devices <cit.>, wearable AI comes in various forms. Advancements in AI continue to blur the lines between human capabilities and machine intelligence, with wearable AI technology serving as a tangible manifestation of this convergence. The introduction of various wearable AI devices is part of a larger drive towards the exponentially increasing wearable devices over the last decade <cit.>. This has led to the formation of a subset of Internet of Things where the "Things" are connected by a common medium, the human body, termed as the Internet of Bodies (IoB) <cit.>, as shown in Fig.<ref>, and detailed in this IEEE Spectrum Article <cit.>. IoB refers to network of wearables and impantables connected to an on-body Hub, such as a smartphone, smartwatch or a wearable brain (Fig. 
1), that acts as a gateway from this network to the cloud and the internet. To unleash the full potential of wearable AI, IoB nodes—sensors and actuators—must be strategically distributed across the body. This includes sound output near the ear, controllers near fingers or wrist, cameras on the face or chest for first-person view, and sensors like ECG near the chest, and EMG and IMU on limbs for accurate data collection. This calls for seamleass communication between these IoB Nodes and the On-Body Hub. A significant hurdle for IoB devices is the energy bottleneck, resulting in frequent charging and constraining wearable scalability <cit.>. To surmount this, there's a rising demand for charging-free, low-speed wearables enabled by energy harvesting, alongside the development of long-lasting IoB devices suited for high-speed applications like audio and video, lasting weeks to months. It's intriguing to observe that the majority of today's wearables are equipped with a central processing unit (CPU), rendering them higher-power devices in the range of milliwatts to watts. Inspired by the architecture of human biology (Fig. 1), where distributed sensors and actuators throughout the body operate without individual dedicated CPUs, instead connected via a high-speed, low-energy nervous system to a single CPU—the brain, we ask the question: Why can't wearable networks mimic this centralized CPU architecture found in humans? The answer lies in the high-energy demands of today's radio communication. It is well established <cit.> that the energy consumption for radio communication per bit far exceeds that of computing per bit by several orders of magnitude. In the absence of a high-speed, ultra-low-power (ULP), and secure artificial nervous system (ANS), today's wearables are left with no alternative but to rely on on-board computing preceding high-energy radio communication, thereby increasing platform power. The radio communication bottleneck raises a fundamental question: Is Radiative Communication the right technology for communicating around the conductive human body? <cit.> The pursuit of an answer to this question over the past decade has spurred the development of 'Body as a Wire' or Wi-R technology <cit.>. This invention utilizes tiny, safe time-varying low to medium frequency electric fields (known as Electro-Quasistatic or EQS fields) for high-speed physically-secure communication (>10x faster than BLE) with ultra-low power consumption (<100x lower than BLE). These fields are contained around a personal bubble outside the human body, effectively creating a virtual wire that connects all wearables seamlessly. It's interesting to note that the human body, composed mainly of saltwater, possesses inherent conductivity, causing it to absorb radio waves. However, it remains transparent to magnetic fields and facilitates the transmission of electric fields, as evidenced by propagation of ECG signals around the human body. Alongside the popular radio and magnetic (e.g., NFMI) communication methods, EQS-communication emerges as a third fundamental modality of communication, supported by Maxwell's equations. This communication mode proves to be the most optimal for transmitting signals around conductive objects like the human body, effectively creating the missing link—the artificial nervous system. Armed with seamless connectivity around the human body through Wi-R, we envision a transformative landscape (Fig. 
1 right) where IoB Nodes evolve into simple sensors and actuators, along with ULP In-Sensor Analytics (ISA) as appropriate, operating at ultra-low power (∼ 10s of μ W class). These nodes are all interconnected to the wearable On-Body Hub, akin to a Wearable Brain, which hosts edge intelligence and serves as a gateway to the internet. While the On-Body Hub requires daily charging, akin to current practices, the IoB nodes achieve perpetual or exceedingly long-lasting operation. This pivotal shift removes a key bottleneck of frequent charging of multiple wearables, potentially expanding the wearable market by tenfold. In doing so, it empowers humans with real-time wearable AI through featherlight, perpetually operating IoB nodes. § EMERGENCE OF WEARABLE AI * Intro: Why is wearable AI picking up now? (a) ChatGPT revolutionizing conversational AI: AI Boom (b) Better way of interacting with technology (c) Wearable technology on the rise - with miniaturization of unit computation Subsections: * Voice * Images * Video * ExG * Brain Signals -> Thoughts Continuous semiconductor technology scaling has driven continued miniaturization of unit computing, enabling smaller and smarter wearable devices like fitness trackers, smartwatches, and smartphones. Current wearables mostly operate independently, each with its own CPU, leading to low to moderate battery life, i.e. hours to a week, depending on the size and functionality of the device (Fig. <ref>)). In 2024, AI capabilities have advanced to the point of real-time interactions with humans, leading to the integration of AI functionalities into a wide array of wearables, a trend expected to continue growing. The capabilities of such wearable AI devices are enhanced when multimodal signals from distributed locations on the human body can be collected and distributed optimal actuation is performed. To fully exploit the capabilities of wearable AI, it's crucial for users to be able to seamlessly integrate these devices into their daily lives, necessitating lightweight, imperceptible designs that allow for extended comfortable use. Achieving this requires a shift in wearable architecture, with the introduction of a new layer of leaf-IoB nodes connected to an On-body Hub Edge Node, offloading heavy processing to the hub node. We explore the current state and the future of distributed and connected wearable AI devices. While the specific form factors of these devices may evolve significantly in the years to come, the fundamental principles of distributed interconnected intelligence discussed here are expected to remain unchanged. §.§ Fitness trackers and health sensors Fitness trackers and health sensors <cit.> are widely available wearable devices, available currently in the form factor of watches and rings, having an all-week battery life. With the advancement in ULP wearable design and energy harvesting methods, it is envisioned that miniaturized health tracking devices will become perpetually operable and can be of the form factor of wearable patches that can be worn unobtrusively around the body. §.§ Voice-based devices With the voice interaction capabilities enabled by AI, pocket assistants of varying form factor and functionalities have emerged. The analysis of voice-based inputs and providing context dependent meaningful responses has led to the development of wearable devices like Humane AI pin, Rabbit R1, rewind and limitless AI pendants <cit.>. Commercially available voice-operated devices currently have all-day battery life. 
In these devices, analyzing voice-based commands is a comparatively low-power task, and devices solely based on taking voice-based inputs can be envisioned to become perpetually operable in the next decade. However, voice-based responses to these commands require higher power due to the power needed to drive the speaker. With innovations at the circuit level, such devices are expected to move from all-day towards all-week battery life. §.§ Image and video-based devices Generative AI technology has the capability to analyze and generate images and videos based on natural language inputs, as demonstrated by Dall-E <cit.> and Sora AI <cit.>. Cameras providing a first-person view, whether on face-worn devices such as smart glasses like Ray-Ban Meta Glasses <cit.> and Frame by Brilliant Labs <cit.>, mixed-reality headsets like Meta Quest <cit.> and Apple Vision Pro <cit.>, or chest-worn devices like the Humane AI pin <cit.>, have paved the way for visual commands to wearable devices. These devices require low-latency computation and communication to process video-based commands and frame appropriate responses in real time. Such devices, interacting seamlessly with ULP nodes while extending battery life from all-day to all-week, allow a new mode of interaction with wearables, potentially replacing existing interfaces like touchscreens. §.§ Neural Signals AI models for biopotential signals around the body (ECG, EMG) and brain signals (EEG, ECoG) have not yet gained market popularity. Providing input to wearable AI devices using neural signals is the ultimate goal for such devices, allowing the promised seamless integration of wearables into our lives. AI models recognizing neural signals will enable interaction with wearable devices without requiring controllers, gestures, or voice commands. § HUMAN-INSPIRED NETWORK FOR IOB This section describes the types of devices in the network and the network architecture, contrasting the human body, where the brain is the only CPU, with today's wearables, each carrying an individual CPU; identifying the missing link of ULP high-speed communication; and tracing the emergence of Wi-R, a modern descendant of single-wire earth return, which changes the wearable architecture. Most of today's wearable technology, equipped with standalone processing units, has power consumption in the range of tens of milliwatts to a few watts, making it too power hungry to operate perpetually. This calls for a revised architecture of distributed wearable nodes sharing resources to reduce individual power consumption. §.§ Human-Inspired IoB The human body routes sensory inputs from multiple points on the body to a single computing hub, the brain. The brain acts as the sole computation center, responsible for processing the data and communicating the response to different organs. Such a system allows ULP nodes to share resources from a central processing unit, eliminating the need for individual computational units. Thus, developing a ULP, high-speed communication method for seamless access to distributed computing is vital. §.§ Is RF the right technology for BAN? Radio frequency (RF) based radiative communication technologies have been the gold standard for wireless communication, aiding the onset of the Internet of Things with smart connected devices all around us. However, RF-based communication essentially radiates the signal in a large, room-scale bubble around us, resulting in a high power consumption of 1-10 mW. 
This further makes RF-based communication highly inefficient for IoB devices, as the data is radiated 5-10 meters away from the device whereas channel lengths for IoB are typically between 1-2 meters. This necessitates moving away from radiative communication technology for IoB nodes in order to develop ULP communication methodologies. Reducing the communication power allows much of the computing to be offloaded to a remote hub. Thus, newer ultra-low-power communication techniques have been studied for wearable devices around the body which offer better energy efficiency (≤ 100 pJ/bit), low power consumption (≤ 100s of μ W), and high data rates (≥ 1 Mbps). To that end, using the body's conductive properties and treating the body as a wire <cit.> has been explored and has been termed Human Body Communication (HBC). §.§ Using the Human Body as a Communication Channel Radio frequency based communication protocols work by radiating signals through the air to communicate data. This is inefficient for Internet of Bodies (IoB) devices, as the data is not directed to its intended location, and is also insecure, as the signals are radiated up to 5-10 meters away from the body. To communicate data between devices connected to the body, using the body as a directed communication channel has been explored. Wi-R is a commercial implementation of a communication protocol based upon Electro-Quasistatic Human Body Communication (EQS-HBC) <cit.>. EQS-HBC uses the conductive properties of body tissues to communicate data through the body at operating frequencies of ≤ 30 MHz. Wi-R has been demonstrated to have an energy efficiency of 100 pJ/bit, which is an order of magnitude better than its RF-based counterparts. There have been implementations in the literature where energy efficiencies of ≤ 10 pJ/bit have been achieved, showing the future potential of Wi-R for communication between Body Area Network devices. §.§ Body Area Network with Wi-R Using Wi-R opens the possibility of ultra-low-power, high-speed communication. Thus, the computation workload can now be offloaded from the ultra-low-power wearable sensors to a single hub on the body. This then makes it possible to design wearable sensors that have the potential to run perpetually with current energy harvesting and wireless powering technology <cit.>. The architecture of the BAN then closely resembles the human body: a distributed network of sensors (sensory organs) placed at various positions on the body is connected using Wi-R, and a central hub (the brain), such as a smartphone or a headset, performs the computation and controls the functioning of the sensors. § EMERGENCE OF WI-R AS ANS This section asks whether radio frequency is the right technology for body area networks, given that WiFi and cellular communications were shaped by laptops and smartphones; describes what happens when electrophysiology meets radio communication to create body communication; and reviews the past, present, and potential of EQS communication. §.§ Electrophysiology meets Radio Comm. Electrophysiology is the closest natural precedent to HBC: body-generated signals below 10 kHz travel well through the body; the heart's ECG and the brain's EEG can be picked up at different locations on the body; these ExG signals are, in effect, EQS signals propagating through the body; externally coupled EQS signals at tens of MHz can likewise pass through the body without interfering with the body's own signals; and RF around the body gets absorbed, increasingly so at higher frequencies. Body-generated electrical signals have been known to travel through the human body at frequencies of ≤ 10 kHz, which is firmly within the quasistatic regime <cit.>. Electrophysiological recordings of such signals generated by cardiac muscles, such as the electrocardiogram (ECG), can now be performed using wrist-worn smartwatches, illustrating that these body-generated signals are communicated using the human body. We extend this concept by coupling external electro-quasistatic (EQS) digital signals to the human body at higher frequencies (≤ 30 MHz) to transmit data using the human body without interfering with electrophysiological signals. Even at frequencies of 10s of MHz, we observe that the properties of electrophysiological signals are preserved, as these signals remain contained within the human body with little to no radiation. For efficient wireless communication around the human body, going to lower EQS frequencies promises higher efficiency due to higher signal absorption at radio frequencies. At EQS frequencies, high-impedance-terminated voltage-mode communication provides a channel which allows data transfer across the whole body at ultra-low communication power. This enables externally generated digital signals to be communicated through the body using a principle similar to that of electrophysiological signals. §.§ EQS Human Body Communication The use of quasistatic fields in communicating data has been around for over a century, with early telegraphs using Single-Wire-Earth-Return (SWER) for communicating over long distances <cit.>. In the context of Human Body Communication, operating in the quasistatic regime (≤ 30 MHz) provides an energy-efficient and lower-power alternative to RF-based communication technology <cit.>. EQS-HBC has been demonstrated to be a physically secure <cit.>, low-power (≈ 415 nW for 10 kbps) <cit.> and energy-efficient (sub-10 pJ/bit) <cit.> solution for communication between IoB devices. Wi-R, a commercial implementation of EQS-HBC, has been demonstrated to show high-data-rate (4 Mbps) communication with an energy efficiency of ≈ 100 pJ/bit <cit.>. Future research in HBC is focused on increasing the communication data rate while ensuring high energy efficiency, and on exploring body-assisted communication for implantable devices in the EQS regime and beyond using Magneto-Quasistatic Human Body Communication, leveraging the human body's transparency to magnetic fields. Future work remains in the efficient use of EQS-HBC in miniaturized devices with the form factor of smart earbuds. Further, using the human body as a communication channel in frequency regimes beyond the EQS range, where the body helps the communication instead of absorbing signals and hurting it, is being investigated. Magneto-Quasistatic Human Body Communication has been proposed, using magnetic fields in the quasistatic frequency regime (≤ 30 MHz) for low-loss channels with small channel lengths in applications involving implantable devices, by exploiting the human body's transparency to magnetic fields <cit.>. Real-time AI on lightweight, all-day devices therefore calls for a distributed network in which sensors extract data while a hub provides the compute and directs the operation of the sensor nodes, combining on-device machine learning at the edge (e.g., TinyML or Qualcomm XR2-class processors) with Wi-R connectivity; the power demand of sensors and hub under current and prospective technology is analysed next.
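To give a feel for what the energy-efficiency figures above imply at the node level, the short sketch below converts an energy-per-bit figure and a data rate into communication power and a rough battery-life estimate. It is illustrative only: the 100 pJ/bit Wi-R figure and the 1000 mAh coin-cell capacity are taken from the discussion in this paper, but the per-node sensing powers, the 3 V nominal cell voltage, and the example data rates are assumptions chosen for the sketch, so the printed numbers show the trend rather than the paper's reported results.

```python
# Illustrative sketch: energy/bit and data rate -> node power and battery life.
# Assumptions (not results from this paper): sensing powers, 3 V cell voltage, data rates.

ENERGY_PER_BIT_J = 100e-12      # Wi-R efficiency quoted above (~100 pJ/bit)
CELL_CAPACITY_MAH = 1000        # high-capacity coin cell, as in the analysis below
CELL_VOLTAGE_V = 3.0            # assumed nominal voltage

def node_power_w(data_rate_bps, sensing_power_w):
    """First-order node power: sensing + communication, compute treated as negligible."""
    return sensing_power_w + ENERGY_PER_BIT_J * data_rate_bps

def battery_life_days(total_power_w):
    energy_j = CELL_CAPACITY_MAH / 1000 * 3600 * CELL_VOLTAGE_V  # mAh -> joules
    return energy_j / total_power_w / 86400

# assumed example nodes: (name, data rate in bit/s, sensing power in W)
for name, rate_bps, sense_w in [("biopotential patch", 10e3, 5e-6),
                                ("audio node", 1e6, 100e-6),
                                ("compressed video node", 10e6, 1e-3)]:
    p = node_power_w(rate_bps, sense_w)
    print(f"{name:>22}: {p * 1e6:9.1f} uW -> {battery_life_days(p):9.1f} days")
```

With these assumed inputs, low-rate biopotential nodes land in the perpetually-operable regime while high-rate video nodes do not, which is the qualitative picture developed in the next section.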
The successful implementation of Internet of Bodies requires a human-electronics cooperation, where a distributed network of wearable devices uses the human body to enhance their functioning while helping the person in their daily life <cit.>. To have multiple wearable devices around the body, each of which need to be charged regularly, will be extremely cumbersome to manage. Thus, reducing the power consumption of these wearables where they can be perpetually powered using wireless energy harvesting techniques is pivotal. This reduction in power consumption of individual wearable nodes can be possible using a distributed wearable network. § DISTRIBUTED IOB WI-R NETWORK With Wi-R, an ultra-low-power communication methodology can be implemented for Body Area Networks (BAN), allowing a distributed Wearable AI computing platform. Leaf nodes, which are the ultra-low-power wearables in the IoB architecture, are connected to edge (hub) which consist of larger devices like mixed reality headsets and smartphones having higher available computational power. This allows the creation of perpetually operating wearables which use the computational resources of the hub to perform power hungry tasks using ultra-low-power communication enabled by Wi-R. The ULP nodes in some cases may use low power in-sensor analytics (ISA) or data compression (example MJPEG compression for video) to reduce the data volume to be communicated. The hubs are connected to fog and cloud servers for further data analytics.Fig. <ref> illustrates battery life for wearable nodes communicating using EQS-HBC for a battery capacity of 1000 mAh which can be achieved with a high capacity coin cell battery <cit.>. Although ISA may be used in the ULP nodes, this approximation considers the compute power to be negligible as a first order approximation considering the total power to be the sum of sensing and communication power consumption. We calculate the communication power consumption for Wi-R with an energy efficiency of 100pJ/bit <cit.>. The sensing power is characterized as a function of data rate with a survey of past literature and commercially available analog front-ends <cit.>. We further consider devices with more than a year of battery life as perpetually operable. With current energy harvesting modalities, 10-200 μ W power harvesting is possible in indoor conditions. Using Wi-R to communicate between leaf and edge nodes, it is projected that wearable devices like biopotential sensors, smart rings and fitness trackers can be made perpetually operable. Further, audio and video nodes with low computation power can be made all-week and all-day operable respectively. § CONCLUSION The convergence of human-AI interactions and wearable technology has thrust Wearable AI devices into the spotlight of innovation in 2024, revolutionizing fields from healthcare to productivity enhancement. This paper explores the future trajectory of Wearable AI, tackling obstacles to its widespread adoption. We advocate for a human-inspired distributed network model, linking wearable AI leaf-nodes to an on-body hub via Wi-R (Body as a Wire) technology. This approach facilitates lightweight, perpetually operable wearable AI devices by shifting intensive computing tasks to the edge hub, eliminating the inconvenience of frequent charging and enabling seamless integration into daily human life. IEEEtran
http://arxiv.org/abs/2406.19187v1
20240627140723
Explicit Hamiltonian representations of meromorphic connections and duality from different perspectives: a case study
[ "Mohamad Alameddine", "Olivier Marchal" ]
math-ph
[ "math-ph", "hep-th", "math.DG", "math.MP", "math.SG", "nlin.SI" ]
http://arxiv.org/abs/2406.17951v1
20240625215726
Navigating High-Degree Heterogeneity: Federated Learning in Aerial and Space Networks
[ "Fan Dong", "Henry Leung", "Steve Drew" ]
cs.LG
[ "cs.LG", "cs.DC" ]
Navigating High-Degree Heterogeneity: Federated Learning in Aerial and Space Networks Fan Dong1, Henry Leung1, Steve Drew1 1Department of Electrical and Software Engineering, University of Calgary, Calgary, AB, Canada {fan.dong, leungh, steve.drew}@ucalgary.ca July 1, 2024 ======================================================================================================================================================================================= § ABSTRACT Federated learning offers a compelling solution to the challenges of networking and data privacy within aerial and space networks by utilizing vast private edge data and computing capabilities accessible through drones, balloons, and satellites. While current research has focused on optimizing the learning process, computing efficiency, and minimizing communication overhead, the issue of heterogeneity and class imbalance remains a significant barrier to rapid model convergence. In our study, we explore the influence of heterogeneity on class imbalance, which diminishes performance in ASN-based federated learning. We illustrate the correlation between heterogeneity and class imbalance within grouped data and show how constraints such as battery life exacerbate the class imbalance challenge. Our findings indicate that ASN-based FL faces heightened class imbalance issues even with similar levels of heterogeneity compared to other scenarios. Finally, we analyze the impact of varying degrees of heterogeneity on FL training and evaluate the efficacy of current state-of-the-art algorithms under these conditions. Our results reveal that the heterogeneity challenge is more pronounced in ASN-based federated learning and that prevailing algorithms often fail to effectively address high levels of heterogeneity. federated learning, heterogeneity, class imbalance, battery § INTRODUCTION Aerial and Space Networks (ASNs) <cit.> represent a novel type of network that integrates aerial and space assets, including drones, balloons, and satellites. These assets are interconnected, enabling the collection and relay of diverse sensing data across various-speed and universal data networks. Additionally, the computational capabilities of these assets in ASNs can facilitate edge computing, allowing for complex machine-learning tasks to be performed locally <cit.>. However, the heterogeneous nature of these devices, limited bandwidth, and differing ownerships present significant challenges for data processing and the centralized training of predictive models in ASNs. The primary challenges include limited bandwidth, privacy concerns, and single-point failure. Federated learning (FL) <cit.> has emerged as a promising solution, where distributed clients train their models locally and only send the model parameters to a central server. However, data heterogeneity remains a significant challenge in this context. Data distributed across IoT devices and edge servers or nodes leads to each participant owning a unique local dataset. These datasets can vary significantly in size, feature space, and label distribution, resulting in discrepancies in local model performance, and consequently, in the aggregated global model's performance. Moreover, data heterogeneity can slow down the convergence of FL. Variations in local data distributions can cause local models to diverge significantly, complicating the aggregation into a robust global model. 
Addressing data heterogeneity often requires more communication rounds between the central server and the nodes to achieve acceptable model performance, which incurs higher bandwidth usage, particularly costly in resource-constrained edge IoT environments with limited connectivity. Effectively managing data heterogeneity also necessitates more sophisticated methods to aggregate local models or to carefully train and adapt local models to diverse data distributions. Without exception, ASN-based edge devices are designed to complete diverse tasks with various intensities, resulting in high data heterogeneity. For instance, in the low-altitude economy era, drones capture images of different types of buildings in the city. As shown in Fig. <ref>, drones will confront various surroundings like office buildings, residential buildings, houses, and factories. And corresponding flight strategies will also be developed to cope with these varying conditions. Due to the variety of the aviation environment, different drones will collect heterogeneous data accordingly. These issues, combined with the existing uneven distribution of data, may further increase the severity of data heterogeneity due to Statistical Heterogeneity Statistical heterogeneity occurs when the data distributions across different devices or nodes vary significantly. In FL settings, each ASN node may collect data under different conditions, leading to non-independently and identically distributed (non-IID) data. This heterogeneity can lead to biased models that perform well on some nodes but poorly on others, as the global model might not generalize well across diverse datasets. System Heterogeneity This refers to differences in hardware, network connectivity, and computational power among devices participating in federated learning. Some devices may be able to compute updates faster and more frequently than others. This discrepancy can lead to slower convergence of the global model, as updates from less capable devices might be received less frequently or could be outdated. Communication Heterogeneity Variations in network speed and bandwidth across devices can affect the efficiency of data transmission in federated learning. Devices with slower network connections may take longer to upload their updates, leading to delays in model aggregation and potentially outdated model updates being incorporated into the global model. Label Distribution Skew In some cases, the distribution of labels (outcomes of interest) can differ significantly across devices. For example, in a healthcare application, data collected from different hospitals might show different disease prevalence rates. This skew can lead to a model biased towards the data characteristics of more frequently represented labels or devices. In this paper, we illustrated how heterogeneity impacts the class imbalance issue, hence leading to a degraded performance in ASNs-based FL. Specifically, we visualize the relationship between heterogeneity and class imbalance of grouped data. We demonstrate how the battery life constraint exacerbates the class imbalance issue from two angles: (a) by limiting the number of devices available for FL training, and (b) by restricting the selection of devices to a smaller pool, as shown in Fig. <ref>. We conclude that ASNs-based FL is experiencing more severe class imbalance issues even under the same degree of heterogeneity. 
Finally, we study how different degrees of heterogeneity affect the FL training and the performance of current state-of-the-art algorithms on different degrees of heterogeneity. § RELATED WORK FL <cit.> presents a promising avenue for training models that require substantial data volumes, all without the necessity of centralizing client data. Instead of transmitting raw data, FL employs a process where model parameters are communicated with edge devices during training. This method circumvents the significant communication overhead while upholding user privacy. While FL facilitates privacy-preserving distributed machine learning across myriad devices, it contends with persistent challenges, such as heterogeneity, within current methodologies. Heterogeneity manifests in various forms throughout FL training, adding complexity to the process. Additionally, the issue of class imbalance presents another formidable hurdle for FL, particularly when compounded with heterogeneity. There are different types of heterogeneity issues, including statistical heterogeneity, system heterogeneity, communication heterogeneity, etc <cit.>. Statistical heterogeneity is mostly caused by the fact that the distributions vary among different clients, including label distribution and feature distribution, which causes the local models to converge towards different directions and the global model to converge slowly. System heterogeneity mainly refers to the differences in hardware, network connectivity, and computational power among devices participating in federated learning. Communication heterogeneity is variations in network speed and bandwidth across devices can affect the efficiency of model transmission in federated learning, because of which, straggles may occur. There is already some research like <cit.> focusing on tackling the straggler issue in FL to improve the overall performance. We focus more on the statistical heterogeneity in this paper. The impact of statistical heterogeneity was theoretically analyzed in <cit.>. Plenty of research has been trying to mitigate the heterogeneity impact. FedProx <cit.> introduced an additional proximal term to the local objection to refraining from overfitting local training. Despite that tuning the hyperparameter μ in FedProx could be a challenge, the introduced proximal term may also slow the convergence speed. FedProx is also capable of tackling the stragglers caused by communication heterogeneity. Scaffold <cit.> maintained control variates to rectify the local training to mitigate the heterogeneity effect. However, as shown in our experiments in Section V, Scaffold fails to outperform FedAvg <cit.> when the heterogeneity degree is too high. FedMix <cit.> relaxes the limitation of accessing others’ raw data and performs data augmentation with the assistance of other clients' data. By this strategy, FedMix could accommodate FL with different levels of privacy depending on applications and achieve better performance. MOON <cit.> used the similarity between model representations to correct for local training. On the contrary, relying on previous local models reduced its effectiveness when selecting a small portion of clients from a vast pool. WeiAvg <cit.> adopted weighted averaging to highlight updates from high-diversity clients under the diversity heterogeneity distribution. However, this will not work when the heterogeneity does not lie in diversity. Despite these efforts, there is still significant room for improvement in addressing heterogeneity. 
While adding additional regularization terms <cit.> to local objective functions requires longer computation time. Algorithms like Scaffold <cit.> could not even outperform FedAvg under highly heterogeneous distribution. Class imbalance has long been an issue in the field of machine learning <cit.>. Resampling, re-weighting, and cost modification methods <cit.> have been proposed to mitigate its detriment. However, these techniques could not be applied to FL directly. Recently, <cit.> pointed out that the class imbalance is the cause of performance degradation under non-IID settings. § HETEROGENEITY AND IMBALANCE Heterogeneity, specifically statistical heterogeneity, is caused by the distribution discrepancy among participating devices. This discrepancy leads to the grouped dataset's imbalance. It was noted in <cit.> that the imbalance of the grouped dataset in FL leads to the degradation of model performance. For a classification problem, suppose there are B classes. For each global round, we select a set of devices to participate in the FL training. We can group the dataset across these selected devices together to obtain the grouped dataset in each global round. The distribution of the grouped dataset will be like p = [p_1, p_2, ⋯, p_B]. The imbalance degree of the grouped dataset could be represented as Δ = max(p) - min(p). We simulate the how different heterogeneity degree impact the imbalance degree of the grouped dataset. We select 10 devices out of 100 clients then group the dataset together and repeat this process 100 times to get a robust average result. As shown in Fig. <ref>, the more heterogeneous the distribution is, the more imbalance of the grouped dataset will be. The imbalance degree Δ of grouped dataset is 0.0897, 0.2117, and 0.2589 for alpha being 1, 0.1, and 0.01 respectively. However, heterogeneity is not the only issue that could cause the imbalance among grouped dataset. In the next section, we will explain how device selection could also be a perpetrator. § HOW BATTERY LIFE CONSTRAINT AGGRAVATES THE IMBALANCE ISSUE Compared with other IoT scenarios, FL training with aerial and space devices are often constrained by the battery life. In the near future, there is not likely to be a big step in terms of the energy density of lithium batteries <cit.>. For example, the maximum flight time for a typical DJI drone is around 30 minutes. And the first priority of these devices should be finishing specific tasks or returning back. This constraint will limit the choices when we select devices to participate in the FL training. The impact is twofold. On one hand, we may only be able to select fewer devices compared to other FL scenarios. On the other hand, we can only choose devices from a subset of all available devices, as those with low battery levels need to prioritize basic operations such as returning. §.§ Selecting fewer devices In the experiments, we simulate that we select different number of devices to see how it would affect the imbalance pattern. Each time we select a certain amount of devices, we group their local data together to see how imbalanced the grouped dataset is. We use the same distribution for different settings. We repeat 10,000 times and average the test results for a robust conclusion. According to Fig. <ref>, there is a clear trend showing that when fewer devices are selected in each round, the imbalance issue will aggravate. 
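The simulation just described is straightforward to reproduce; the sketch below is a minimal version of it. Assumptions not fixed by the text: a symmetric Dirichlet prior over 10 classes, equal local dataset sizes on every client (so grouping reduces to averaging the per-client label distributions), and NumPy as the implementation, so the exact Δ values will differ slightly from those reported above. The optional pool_size argument restricts selection to a smaller candidate pool, mimicking the battery-constrained setting discussed in the next subsection.

```python
import numpy as np

def client_label_distributions(num_clients=100, num_classes=10, alpha=0.1, rng=None):
    """Per-client label distributions drawn from a symmetric Dirichlet(alpha) prior."""
    rng = np.random.default_rng(0) if rng is None else rng
    return rng.dirichlet(alpha * np.ones(num_classes), size=num_clients)

def imbalance_degree(grouped):
    """Delta = max(p) - min(p) of the grouped label distribution."""
    return grouped.max() - grouped.min()

def mean_delta(alpha=0.1, num_clients=100, num_selected=10, pool_size=None, trials=10_000):
    """Average imbalance of the grouped dataset over repeated device selections."""
    rng = np.random.default_rng(0)
    dists = client_label_distributions(num_clients, alpha=alpha, rng=rng)
    deltas = []
    for _ in range(trials):
        pool = (np.arange(num_clients) if pool_size is None
                else rng.choice(num_clients, size=pool_size, replace=False))
        chosen = rng.choice(pool, size=num_selected, replace=False)
        grouped = dists[chosen].mean(axis=0)   # equal local dataset sizes assumed
        deltas.append(imbalance_degree(grouped))
    return float(np.mean(deltas))

if __name__ == "__main__":
    for alpha in (1.0, 0.1, 0.01):
        print(f"alpha={alpha:>4}: mean Delta = {mean_delta(alpha=alpha):.3f}")
```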
§.§ Selecting devices from a smaller pool Since the priority of aerial and space devices should be maintaining their normal operations, so once the battery percentage of some devices is below some threshold, we should not select them for FL training before they are recharged. In our training process, we can maintain a queue according to each device's battery percentage. We only select those top devices with the highest battery percentage. Those low-battery devices will also be recharged after a while and will enter the queue, as shown in Fig. <ref>. Under this setting, our choice would be limited to a smaller amount of devices when selecting devices to participate in the local training. However, after the low-battery devices are recharged, the choice pool will also be updated. To see more details of the imbalance issue, we merge the grouped dataset across several global rounds together to see the imbalance degree. Because we hope even if the ratio of a certain type of label is small in this global round, this situation will not last. As shown in Fig. <ref>, when we can only select devices from a smaller device pool, the imbalance issue will aggravate, no matter what window size we choose. We further test the impact of the different choice of window sizes on imbalance degree Δ. We choose the pool size to be 30, 50, and 70. As shown in Fig. <ref>, a smaller device pool consistently aggravates the imbalance issue under different observation window sizes. Charging speed or updating speed of the pool will also impact the imbalance degree. In above analysis, we adopted the same updating speed, which is in each global round, which will be one device entering the available pool. We test different updating speeds with each round updating 0.2, 0.5, 1, 2, and 5 devices respectively. As shown in Fig. <ref>, a smaller pool size consistently increases the imbalance degree of the grouped dataset across different observation window sizes. With the analysis above, we can conclude that the battery constraint of aerial and space devices aggravates the imbalance issue. This exacerbates the existing heterogeneity problem even further. While previous research also studies the heterogeneity issue <cit.>, the distributions they use are often not heterogeneous enough. In the next section, we study the impact of different heterogeneity degrees on FL training. § HOW DEGREE OF HETEROGENEITY AFFECTS FL TRAINING As observed in the paper <cit.>, the essential reason resulting in FL performance degradation is the class imbalance of the grouped dataset. Since different levels of heterogeneity degree α result in different levels of imbalance degree Δ, hence resulting in different performance of the FL model, we can skip the intermediate steps and directly study the relationship between heterogeneity degree and FL model performance. As shown in Fig. <ref>, we visualize how α affects the level of heterogeneity. We derive the visualization by following procedures: We sort each client's label distribution in descending order at first. Then we average the sorted label distribution among the clients to get a more stable result. So the more heterogeneous the distribution is, the more the barplot would be skewed. When α equals 0.01, each client almost contains only one class of samples, as shown in Fig. <ref> (a). Note that the x-ticks do not denote the exact class label since we sort each client's distribution based on frequency in descending order before averaging them. 
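The sorting-and-averaging step behind this visualisation can be stated compactly in code. The sketch below assumes the same NumPy array of per-client label distributions as the previous example (clients in rows, classes in columns) and is included only to make the plotted quantity explicit.

```python
import numpy as np

def sorted_average_profile(client_dists):
    """Sort each client's label distribution in descending order, then average over clients.

    A heterogeneous partition (small alpha) yields a strongly skewed profile,
    while a homogeneous partition yields a nearly flat one.
    """
    descending = np.sort(client_dists, axis=1)[:, ::-1]
    return descending.mean(axis=0)

# e.g., reusing the sampler from the previous sketch:
# profile = sorted_average_profile(client_label_distributions(alpha=0.01))
```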
With α increasing, the distribution becomes more uniform among all class labels. §.§ FedAvg Performance Under Different Heterogeneity Degrees We demonstrate how different degrees of heterogeneity would affect FL training in the CIFAR10 dataset as shown in Fig. <ref>. We set α to 10^15 to simulate the homogeneous distribution, where each class exactly accounts for 10% of the samples. All the experiments in Fig. <ref> are based on the FedAvg <cit.> algorithm. The only difference is the label distribution of each client. We conduct each experiment with different random seeds, then average them together to obtain a smoother and more solid test accuracy line. From Fig. <ref>, we can derive that under the Dirichlet distribution with α=1, the test accuracy line is very close to that of a homogeneous distribution. While other research often takes α = 0.1 as the indication of heterogeneity, we show that α = 0.1 is far from heterogeneous enough compared with α = 0.01. In the following part, we may denote the Dirichlet distribution with α=1 as close to homogeneity, α=0.1 as low heterogeneity, and α=0.01 as high heterogeneity. §.§ State-of-the-art FL Algorithms Performance Under Different Heterogeneity Degrees We run several state-of-the-art algorithms under different degrees of heterogeneity. As shown in the experiment results in Table <ref>, FedProx <cit.> could only marginally improve the performance, while MOON is even beaten by FedAvg <cit.>. Scaffold <cit.> performs pretty well under homogeneous and low-heterogeneity distribution but fails to outperform FedAvg <cit.> under high heterogeneity. § CONCLUSION In this paper, we identify a specific constraint of ASNs-based FL compared with other scenarios, which is the battery constraint. We then analyzed the impact of the battery constraint on FL training. We point out that the battery constraint will aggravate the heterogeneity and class imbalance issues from various perspectives, hence necessitating the FL optimization under high heterogeneity. Finally, we demonstrate that current state-of-the-art algorithms can not perform well under high heterogeneity. In future research, we will focus on FL optimization under highly heterogeneous distribution.
http://arxiv.org/abs/2406.18086v1
20240626054653
Quantum corrections to tunnelling amplitudes of neutral scalar fields
[ "Rosemary Zielinski", "Patrick McGlynn", "Cedric Simenel" ]
hep-th
[ "hep-th" ]
APS/123-QED rosemary.zielinski@anu.edu.au cedric.simenel@anu.edu.au Current address: Facility for Rare Isotope Beams, Michigan State University, East Lansing, Michigan 48824, USA ^1Department of Fundamental and Theoretical Physics, The Australian National University ^2Department of Nuclear Physics and Accelerator Applications, The Australian National University § ABSTRACT Though theoretical treatments of quantum tunnelling within single-particle quantum mechanics are well-established, at present, there is no quantum field-theoretic description (QFT) of tunnelling. Due to the single-particle nature of quantum mechanics, many-particle effects arising from quantum field theory are not accounted for. Such many-particle effects, including pair-production, have proved to be essential in resolving the Klein-paradox. This work seeks to determine how quantum corrections affect the tunnelling probability through an external field. We investigate a massive neutral scalar field, which interacts with an external field in accordance with relativistic quantum mechanics. To consider QFT corrections, we include another massive quantised neutral scalar field coupling to the original via a cubic interaction. This study formulates an all-order recursive expression for the loop-corrected scalar propagator, which contains only the class of vertex-corrected Feynman diagrams. This equation applies for general external potentials. Though there is no closed-form analytic solution, we also demonstrate how to approximate the QFT corrections if a perturbative coupling to the quantised field is assumed. Quantum corrections to tunnelling amplitudes of neutral scalar fields Patrick McGlynn^1,2 July 1, 2024 ===================================================================== Notwithstanding the extensive application of quantum tunnelling, ranging from tunnel diodes <cit.>, scanning tunnelling microscopy <cit.>, and its importance to many biological <cit.> and chemical systems, fundamental theoretical questions about tunnelling remain unanswered. Even simple questions, such as the time a particle takes to tunnel, are still disputed <cit.>. Quantum tunnelling through external potentials is typically understood in the framework of single-particle quantum mechanics using the non-relativistic Schrödinger equation or the relativistic Klein-Gordon and Dirac equations. However, these descriptions fail to account for particle number non-conserving processes. Yet, there is no comprehensive description of quantum tunnelling also compatible with quantum field theory (QFT). Because interactions in a QFT framework are described by local couplings to mediator fields, all interactions involve the destruction and creation of virtual particles naturally resulting in a many-particle theory. Many-particle effects including virtual particle mediators and pair production have been shown to have important measurable consequences even in low-energy systems. Notably, QFT predicts the anomalous electron magnetic moment <cit.> through a quantised photon field. Hyperfine electronic structure, such as the Lamb shift <cit.>, is the result of electron self-energy corrections, vertex corrections, and vacuum polarization contributions (i.e. the Uehling potential <cit.>), all of which arise from the quantization of electromagnetic and fermionic fields. 
In the context of quantum tunnelling, these many-particle effects manifest in the Klein Paradox <cit.>, where fermions incident on a step potential above the Schwinger limit (eV>2m) display violations of unitarity. The paradox is resolved by including pair-production at the barrier <cit.>, an intrinsically QFT effect going beyond the physics of relativistic quantum mechanics. Other work has considered the possibility of `tunnelling of the third kind' <cit.> whereby a particle interacting with a barrier may split into a pair of virtual particles which interact only weakly with the barrier, and recombine after a finite distance. Additionally, while quantum tunnelling between field configurations has been well-studied using instanton methods <cit.> and applied to false vacuum decay hypotheses <cit.> this is conceptually distinct from our work which considers quantum tunnelling of a particle through external localised potentials. Merging QFT with a theory of quantum tunnelling is difficult due to the perturbative formalisms which dominate QFT calculations. Many scattering calculations employ the scattering matrix (S-matrix), a unitary time evolution operator with the expression S = 𝒯 [e^-i∫d^4x H_int(x) ]. Here 𝒯 denotes the time-ordering operator, and H_int(x) the interaction Hamiltonian. There are no analytic solutions for the complete interacting S-matrix in four-dimensions for any non-trivial QFT. In practice, the S-matrix is computed perturbatively via a truncation of the following series S = ∑ _n=0^∞(-i)^n/n!∫d^4x_1∫d^4x_2 …∫d^4x_n 𝒯 [H_int(x_1)H_int(x_2)… H_int(x_n) ], where each order in H_int(x) can be represented by a set of Feynman diagrams. Such an approach is fundamentally incompatible with quantum tunnelling, which is a non-perturbative phenomenon (in the interaction Hamiltonian). This warrants an alternative approach to integrating QFT with quantum tunnelling. Previous work has described electron scattering from an external potential, using both the canonical quantisation <cit.> and path integral formalism <cit.> of QFT. However, both studies employed the single-particle relativistic Dirac equation and neglected a quantised photon field, treating the external field classically. While QFT methods were used, the underlying physics is restricted to relativistic quantum mechanics. Additionally, the single-particle scattering calculations presented in <cit.> were exclusively above-barrier, and therefore could not describe tunnelling. Our recent work <cit.> built upon their formalism using a simpler model of a neutral scalar field interacting with an external mass perturbation-like potential. We demonstrated that for simple delta function potentials, tunnelling amplitudes could be obtained via analytic continuation of the S-matrix and an infinite sum of Feynman diagrams. Though this was successful in recovering tunnelling amplitudes consistent with RQM, it too was limited by the single-particle nature of the Klein-Gordon equation. The present work seeks to extend both our previous study and <cit.> to include many-particle effects from an additional quantised field. Our model describes the dynamics of a neutral scalar field interacting both with an external scalar field (acting as an external potential) and a quantised scalar field. A key result is the derivation of a scalar propagator which accounts for all-order interactions with the scalar field and the external field, via a summation of a restricted class of Feynman diagrams. 
This process is conceptually similar to the self-consistent Hartree-Fock diagrammatic expansion <cit.>. We also present a perturbative coupling approximation to the dressed propagator, which still treats the external field to all orders. Though we derive a dressed propagator, the extraction of tunnelling amplitudes remains challenging due to the complexity of the integral equation, and is left for future work. However, this work provides the necessary formalism to integrate QFT corrections to tunnelling, and therefore to consider more physically interesting Lagrangians. §.§ Formalism The primary goal of this work is to develop a formalism that accounts for quantum corrections which affect tunnelling through external localised barriers. In order to do this, we consider the simplest system for which these effects may be probed. To this end, this work considers a massive neutral scalar field ϕ, interacting with an external field u(x), and another massive neutral scalar field Φ. The associated Lagrangian is: ℒ = 1/2(∂_νΦ)^2-1/2m'^2Φ^2+1/2(∂_νϕ)^2 -1/2m^2ϕ^2-1/2eu(x)ϕ^2-μ/2Φϕ^2, where the neutral scalar fields have a cubic interaction via the last term. Note that the scalar field Φ, does not interact with the external field u(x), and that the external field, by definition, has no dynamical term in the Lagrangian. Without the addition of the dynamical field Φ, the Lagrangian would be equivalent to a single-particle theory described by the Klein-Gordon Lagrangian with a mass perturbation. The choice of cubic interaction is somewhat arbitrary — in principle, any other renormalizable interaction would allow an investigation into many-particle effects. However, the cubic term is one of the simpler choices: it produces a super-renormalizable theory, and also includes vertex-corrections to the theory at order μ^2. The effect of the field Φ is to add an interaction vertex, -iμ = < g r a p h i c s > , in addition to the interaction with the external field, -ieũ(p-k) = < g r a p h i c s > where ⊗ denotes the external field, and ũ(p-k) the Fourier transform of the external potential. In the context of tunnelling, we are concerned with single-particle incoming and outgoing states of the field ϕ, characterised with initial momentum p = (p_0, 0 ,0, p_3) and final k = (k_0, k_1, k_2, k_3) respectively. To obtain tunnelling amplitudes, the complete interacting S-matrix element ⟨ k | S | p⟩ must be determined. For a time-independent potential u(x), which is only a function of x_3, the transmission and reflection amplitudes are related to the S-matrix via <cit.> T = ∫_-∞^∞dk_1∫_-∞^∞dk_2∫_0^∞dk_3/(2π)^32E(k)⟨ k | S | p⟩ R = ∫_-∞^∞dk_1∫_-∞^∞dk_2∫_-∞^0dk_3/(2π)^32E(k)⟨ k | S | p⟩. We have the diagrammatic representation of the S-matrix element, ⟨ k |S |p ⟩ = < g r a p h i c s > , where the circle represents the exact, all-order interacting S-matrix. For calculational purposes, we consider instead the two-point correlation function G(q,q'), diagrammatically related to the S-matrix via G(q,q') = < g r a p h i c s > = < g r a p h i c s > . This is simply a dressed propagator, where the legs of the diagram are no longer required to be on mass-shell. More formally, S-matrix elements can be directly extracted from two-point correlation functions via the LSZ reduction formula <cit.>. Note also that momentum is not necessarily conserved — by definition, the presence of an external field requires this. 
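As a point of reference for the later expressions, it is useful to recall what the purely tree-level (RQM) amplitudes look like for the simplest localised barrier. The derivation sketched below is illustrative and not a result quoted from this work: it assumes a static potential u(x) = λ δ(x_3), inserts a plane-wave ansatz into the Klein-Gordon equation that follows from the Lagrangian above, and reads off the transmission and reflection amplitudes that any loop-corrected calculation should reduce to as μ → 0.

```latex
% Sketch: tree-level (RQM) delta-barrier check with u(x) = \lambda\,\delta(x_3).
% Ansatz \phi(x) = e^{-iEt + i p_1 x_1 + i p_2 x_2}\,\psi(x_3) in
% (\Box + m^2 + e\,u(x))\,\phi = 0 reduces the problem to one dimension:
\[
  \psi''(x_3) + p_3^2\,\psi(x_3) = e\lambda\,\delta(x_3)\,\psi(x_3),
  \qquad p_3^2 = E^2 - m^2 - p_1^2 - p_2^2 .
\]
% Matching \psi at x_3 = 0 and the jump \psi'(0^+)-\psi'(0^-) = e\lambda\,\psi(0) gives
\[
  t = \frac{1}{1 + \frac{i e\lambda}{2 p_3}}, \qquad
  r = \frac{-\,\frac{i e\lambda}{2 p_3}}{1 + \frac{i e\lambda}{2 p_3}}, \qquad
  |t|^2 + |r|^2 = 1 ,
\]
% so the RQM transmission probability is |t|^2 = 4 p_3^2/(4 p_3^2 + e^2\lambda^2);
% the dressed propagator constructed below corrects t and r starting at order \mu^2.
```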
Within this propagator formalism, the Feynman rules are as follows: * Each ϕ^2u(x) vertex, with incoming scalar momentum p and outgoing scalar momentum k has a vertex factor of -ieũ(p-k) (links a wavy line to a dashed line). * Each ϕ^2Φ vertex has a factor of -iμ. * Internal scalar (ϕ) lines have a scalar propagator D(q,m) = i/q^2-m^2+iε (dashed lines), while internal scalar (Φ) lines have an associated propagator D(q,m') = i/q^2-m'^2+iε (solid lines). * A dressed line (thick solid line), with incoming momenta p and outgoing momenta k, contributes a factor of G(p,k). * All unspecified momenta of internal lines are individually integrated over with measure ∫d^4q/(2π)^4. It is well-known that there are no exact solutions of interacting theories in four dimensions within QFT, such that a closed form expression for this particular dressed propagator is not known. It therefore remains to find a suitable approximation which can generate meaningful quantum corrections to quantum tunnelling, without requiring a solution to the fully-interacting theory. To do this, we take inspiration from the diagrammatic derivation of the self-consistent Hartree-Fock propagator <cit.>. §.§ Approximated dressed propagator <Ref> encodes an infinite number of Feynman diagrams, to all orders in μ and u(x), with all topologies. One approximation to this dressed propagator restricts the class of Feynman diagrams, such that: < g r a p h i c s > ≈ < g r a p h i c s > + < g r a p h i c s > + < g r a p h i c s > + < g r a p h i c s > = < g r a p h i c s > + < g r a p h i c s > + < g r a p h i c s > + < g r a p h i c s > + < g r a p h i c s > + < g r a p h i c s > + < g r a p h i c s > + < g r a p h i c s > +⋯_self-energy contributions + < g r a p h i c s > + < g r a p h i c s > + < g r a p h i c s > + < g r a p h i c s > +⋯_vertex-corrected diagrams This is analogous to the self-consistent Hartree-Fock approximation, where the second term in <ref> corresponds to the exchange term, while the third diagram to the direct term. We emphasise that the above equation is self-consistent because the dressed propagator is also used for any internal lines on the r.h.s. Additionally, <ref> is still non-perturbative in the couplings μ and eu(x), and still describes an infinite number of Feynman diagrams. However, some diagrams present in <ref> are never generated in the self-consistent expansion. For example, the 1PI diagram < g r a p h i c s > is included in <ref>, but is not generated by <ref>. Despite the approximations made, even this self-consistent dressed propagator is not tractable. We make a further simplification, and neglect the self-energy contributions to the propagator. Neglecting the self-energy terms does not account for the renormalised mass of the field ϕ (it should be noted the Φ self -energy terms are also absent). Such self-energy contributions will not affect the neutral scalar field interaction with the external field, because self-energy type diagrams/subgraphs necessarily conserve momentum, given an S-matrix element between identical free single-particle states. While transmitted particle have the same momentum as the incoming particles (for potentials which tend to zero for x_3 →±∞), the S-matrix elements also include the reflection contribution which does not conserve momentum asymptotically. Thus, any S-matrix element which perfectly conserves momentum cannot describe tunnelling, or a momentum-exchanging interaction with the field. 
Crucially, because the self-energy contributions do not affect the interactions with the external field, they will also not be relevant for tunnelling. The self-consistent propagator can now be redefined, notated by a double-line: < g r a p h i c s > = < g r a p h i c s > + < g r a p h i c s > + < g r a p h i c s > + < g r a p h i c s > . The implications of this approximation are that we only generate terms we call `vertex-corrected diagrams'. Additionally, this expression only includes the interacting terms in the propagator: there is no free-propagator generated in this expansion. Hence, subsequent tunnelling calculations must include the non-interacting contribution post-hoc. The omission of the free propagator in <ref> is necessary to prevent the generation of self-energy terms, which we explicitly are removing. To consider the consequences in more detail, we define a new quantity, G^n, which is the n-th recursion of the vertex-corrected propagator into the equation above. Diagrammatically, < g r a p h i c s > = < g r a p h i c s > + < g r a p h i c s > _a+ < g r a p h i c s > _b+ < g r a p h i c s > _c, Note that G≠∑_n G^(n), rather, G^(∞) = G. We define the zeroth `generation' to be: < g r a p h i c s > = < g r a p h i c s >, which is inserted into G^(1) to yield: < g r a p h i c s > = < g r a p h i c s > + < g r a p h i c s > + < g r a p h i c s > + < g r a p h i c s > , where each additional term corresponds exactly to G^0 inserted into a, b and c. Continuing to the second generation, we have: < g r a p h i c s > = < g r a p h i c s > + < g r a p h i c s > + < g r a p h i c s > + < g r a p h i c s > + < g r a p h i c s > _insertions into a + < g r a p h i c s > + < g r a p h i c s > + < g r a p h i c s > + < g r a p h i c s > _insertions into b insertions into c + < g r a p h i c s > + < g r a p h i c s > + < g r a p h i c s > + < g r a p h i c s > + < g r a p h i c s > + < g r a p h i c s > + < g r a p h i c s > + < g r a p h i c s > + < g r a p h i c s > + < g r a p h i c s > + < g r a p h i c s > + < g r a p h i c s > + < g r a p h i c s > + < g r a p h i c s > + < g r a p h i c s > + < g r a p h i c s > . where (a), (b) and (c) refer to the second, third, and fourth diagrams in the r.h.s of <ref> respectively. For the (n+1)th generation, there will be m_n+1 = (m_n+1)^2 diagrams. Hence the infinite recursion encoded by <ref> describes an infinite number of unique contributing diagrams with a `vertex-corrected' topology. While some diagrams appear in both the second and third generation, at each generation there is only one of each diagram, which is also true of G^(∞). Additionally, for this particular theory, no divergent diagrams are generated in the first-order, and therefore no divergent diagrams are generated for subsequent G^(n) (because no divergent sub-diagrams are generated). This follows from Dyson's power counting theorem <cit.>, and holds if the momentum-space potential decays sufficiently quickly at ±∞. Because this theory is super-renormalisable anyway, the self-energy terms we neglect remove any divergences. The diagrammatic approximation in <ref> can be recast into an integral equation G(p,k) ≈-ieD(k,m)D(p,m)ũ(p-k)_ < g r a p h i c s > -ieD(p,m)qũ(p-q)G(q, k)_ < g r a p h i c s > +(-iμ)^2D(p,m)D(k,m) qD(q,m')G(p-q, k-q)_ < g r a p h i c s > +(-iμ)^2D(p,m)qsD(q,m')G(p-q, s-q)G(s, k)_ < g r a p h i c s > . We note that the first two terms in the integral equation correspond to the relativistic quantum mechanics propagator, in the absence of the additional field Φ. 
It is the final two terms which encode QFT corrections. Though approximate, they include all-orders in μ. Though this expression is still complex, significant gains have been achieved: the fact that an integral equation can now encode the explicit corrections to tunnelling gives a way to answer the questions this paper poses. §.§ Perturbative coupling approximation Though the external field requires a non-perturbative treatment, there is no reason why the coupling μ cannot be perturbative, and still yield results compatible with tunnelling. Thus, the next natural approximation is to consider the regime where the coupling to the external potential is much stronger than the coupling to the scalar field, Φ (i.e. eu(x) ≫μ). Conceptually, this is the same approximation made in the Furry expansion in strong-field QED <cit.>: the external electromagnetic field is treated exactly, while the dynamical quantised field is treated perturbatively. This has the effect of removing self-consistency in <ref>. To first-order in μ^2, <ref> becomes: < g r a p h i c s > = < g r a p h i c s > + < g r a p h i c s > + < g r a p h i c s > = < g r a p h i c s > + < g r a p h i c s > + < g r a p h i c s > + < g r a p h i c s > + < g r a p h i c s > + < g r a p h i c s > + < g r a p h i c s > + < g r a p h i c s > + < g r a p h i c s > + 𝒪(e^4) with the relativistic quantum mechanical solution (i.e. the tree-level solution), < g r a p h i c s > = < g r a p h i c s > + < g r a p h i c s > . However, this overly-simple approximation neglects non-trivial one-loop diagrams. For instance, while <ref> necessarily generates the diagrams < g r a p h i c s > and < g r a p h i c s > , the above insertion of the RQM propagator does not, because it is no longer self-referential. To remedy this, we instead redefine the one-loop propagator with additional terms: < g r a p h i c s > = < g r a p h i c s > + < g r a p h i c s > + < g r a p h i c s > + < g r a p h i c s > + < g r a p h i c s > +𝒪(e^4) = < g r a p h i c s > + < g r a p h i c s > + < g r a p h i c s > , While <ref> is still not self-referential, it does encode all diagrams relevant to tunnelling, at one-loop. It is more concisely expressed via < g r a p h i c s > = < g r a p h i c s > +{ < g r a p h i c s > } < g r a p h i c s > { < g r a p h i c s > }, with the corresponding mathematical expression G^(1-loop)(p,k) = G^RQM(p,k)+(-iμ)^2 qD(q,m'){ D(p,m)D(k,m)G^RQM(p-q, k-q)_ < g r a p h i c s > +D(p,m)sG^RQM(p-q,s-q)G^RQM(s,k)_ < g r a p h i c s > +D(k,m)sG^RQM(p, s)G^RQM(s-q, k-q)_ < g r a p h i c s > +sw G^RQM(p,s)G^RQM(s-q, w-q)G^RQM(w,k)_ < g r a p h i c s > }. <Ref> has a vastly simplified structure compared to <ref>, given that it is no longer an integral equation. Provided G^RQM(p,k) is known, the numerical determination of the one-loop correction should be feasible. However, finding a form for G^RQM(p,k) is not always straightforward: it is the solution of an integral equation, and moreover, must be non-perturbative in the external field to be useful for tunnelling calculations. §.§ Discussions and Conclusions <Ref> provide the groundwork for future work to quantitatively determine how QFT corrections impact tunnelling probabilities. One promising avenue may be numerical methods, particularly for solving <ref>, if the form of G^RQM(p,k) is known (or numerically found). We have found analytic expressions for G^RQM(p,k) using techniques of Feynman diagram resummation, for simple potentials such as a Dirac delta and double-delta potential <cit.>. 
While these analytic methods employ calculation techniques in QFT, in principle, G^RQM can be found within a relativistic quantum mechanical framework if needed. This is analogous to how QED corrections are implemented in the Furry expansion - for instance, a recent paper <cit.> used the analytic form of the Dirac-Coulomb Green's function to then numerically calculate corrections to Delbrück scattering. While Feynman integrals are routinely computed in the context of collider physics, with many packages <cit.> devoted to this, <ref> does not have the mathematical form of a Feynman integral to allow one to exploit these methods. In particular, our tunnelling propagators manifestly lack Lorentz-invariance, which is a property many of these programs require. However, there have been recent developments in the non-relativistic effective field theory space, with modifications to existing packages <cit.> extending semi-automatic symbolic calculations beyond Lorentz-invariance formulations. This may be a promising avenue for continued work, although currently, it is best suited for tree-level or 1-loop amplitudes. Additionally, standard Monte Carlo techniques <cit.> for computing integrals of our form may encounter difficulty in <ref>, due to poor convergence from the presence of poles (despite the integrals being formally finite). Numerical contour integration may be a useful avenue, although this still requires a robust method for locating poles in the integrand <cit.>. In principle, if this propagator could be numerically evaluated, it would provide enough information to conclusively determine how loop corrections impact tunnelling probabilities. One may then probe the conditions which enhance this effect, and thus how it may be measured. The benefit of this work is that the analogy with the self-consistent Hartree-Fock approximation may be readily applied to more physical Lagrangians. For instance, the QED Lagrangian with the addition of a classical vector field would be one candidate, capable of describing corrections to electron tunnelling through electric potentials. A very similar dressed propagator would arise for this theory, given the photon interaction with the electron has a similar cubic structure, and so the same topological diagrams would be retained. However, because this is not a super-renormalisable theory, counter-terms and a relevant renormalisation procedure would be required, as would be the case for most physically interesting Lagrangians.
http://arxiv.org/abs/2406.19170v1
20240627134403
The Illusion of Competence: Evaluating the Effect of Explanations on Users' Mental Models of Visual Question Answering Systems
[ "Judith Sieker", "Simeon Junker", "Ronja Utescher", "Nazia Attari", "Heiko Wersing", "Hendrik Buschmeier", "Sina Zarrieß" ]
cs.CL
[ "cs.CL" ]
Infinite dimensional dynamical maps Ritabrata Sengupta June 2024 =================================== § ABSTRACT We examine how users perceive the limitations of an AI system when it encounters a task that it cannot perform perfectly and whether providing explanations alongside its answers aids users in constructing an appropriate mental model of the system's capabilities and limitations. We employ a visual question answer and explanation task where we control the AI system's limitations by manipulating the visual inputs: during inference, the system either processes full-color or grayscale images. Our goal is to determine whether participants can perceive the limitations of the system. We hypothesize that explanations will make limited AI capabilities more transparent to users. However, our results show that explanations do not have this effect. Instead of allowing users to more accurately assess the limitations of the AI system, explanations generally increase users' perceptions of the system's competence – regardless of its actual performance. § INTRODUCTION Machine learning-based technologies (often called ‘artificial intelligence’, AI) are now commonly being deployed and used in real-world applications, influencing human decision-making (or automating decision-making altogether) with implications for societies, organizations, and individuals. Despite continuous advances and impressive performance on many tasks, these technologies are not always accurate and will likely never be. Machine learning models depend on curation of the data they are trained on, they are optimized according to criteria that may not do justice to the complexity of reality, and the context in which they are used cannot be fully modeled, to name a few reasons for their limitations. In addition, the underlying algorithms themselves have inherent weaknesses. Large language models (LLMs), e.g., are well known to hallucinate, i.e., to make predictions that are inconsistent with facts or themselves <cit.>, or to be highly sensitive to spurious variations in their inputs/prompts <cit.>. Many machine learning models also suffer from their own complexity: consisting of millions, billions, or even trillions of parameters, they are black-boxes, opaque to human understanding. However, in order to reliably use machine learning models and AI systems based on such models, human users must be able to assess their limitations and deficiencies, and to understand the decisions that such systems make and why (codified, for example, as the right “to obtain an explanation of the decision reached” in the legal framework of the General Data Protection Regulation of the European Union; ). Research in Explainable AI (XAI) addresses this need, and recent years have seen an explosion of explainability methods that aim to make the internal knowledge and reasoning of AI systems transparent and explicit, and thus interpretable and accessible to users. Explainability of model predictions is thus seen as a solution, and it is assumed that they enable users to construct functional ‘mental models’ <cit.> of AI systems, i.e., models that closely correspond to the actual capabilities of the systems. Whether this is the case is an active research question and there is evidence that explainability comes with new challenges. Important questions in XAI are what actually makes a good explanation, which criteria it needs to satisfy, and how the quality of explanations can be measured <cit.>. 
Furthermore, recent perspectives emphasize that explanations should be social <cit.> and constructed interactively, taking into account the user's explanation needs <cit.>. <cit.> argue that evaluations of explanations should carefully distinguish plausibility (does it seem plausible to users) and faithfulness (does it reflect the model's internal reasoning) and that non-faithful, but plausible, explanations can be dangerous in that they let users construct faulty, and eventually dysfunctional, mental models that can lead to unwarranted trust <cit.>. In this paper, we investigate the effects of providing natural language explanations on users' mental models of an AI system in terms of its capabilities, and whether these explanations allow them to diagnose system limitations. We present the results of a study in the visual question answering and explanation (VQA/X) domain, artificially inducing a simple limitation by providing two VQA/X systems with images stripped of color information, i.e., in grayscale (see Figure <ref>). Participants, unaware of the manipulation, see the unmanipulated full color image, the question, the system's answer, and its explanation for the answer, and have to judge various system capabilities (including its ability to recognize colors) and its competence. This visual domain does not require participants to understand the internal processes of the system but should still enable them to estimate what it can and cannot do. The comparison of judgments to responses to non-manipulated system input and judgments of responses without explanations sheds light on participants' difficulties in using (natural language) XAI explanations to build accurate mental models, even for such a simple case. This raises the question of how effective explanations can be in real-world applications of XAI technology that involve more complex reasoning and problems. § BACKGROUND Our work is related to previous studies that have examined whether explanations enhance users' trust in AI systems. <cit.>, for example, compared trust in personal (human) versus impersonal (recommender system) recommendation sources and examined the impact of explanation quality on trust. Their results showed that users rated human explanations higher than system-generated ones and that the quality of explanations significantly influenced trust in the recommendation source. <cit.> investigated whether explanations help humans anticipate when an AI system is potentially incorrect. They used scenarios where an AI system helps participants to solve a task (text classification or question answering), providing visual explanations (highlighted words) under certain conditions. Their findings revealed that explanations increased the likelihood of the participants to accept the AI system's recommendations, irrespective of their accuracy. Thus, rather than fostering appropriate reliance on AI systems, explanations tended to foster blind trust. Similarly, <cit.> conducted a large-scale user study for visual explanations, showing that these do not allow users to distinguish correct from incorrect predictions. <cit.> investigated how users develop and regain trust in AI systems in human–AI collaborations. They found that NLP systems that confidently make incorrect predictions harm user trust, and that even a few incorrect instances can damage trust, with slow recovery. 
While these studies evaluate the influence of system explanations on users' trust in the system's output (a proxy for its perceived competence), they do not investigate users' understanding of the systems' reasoning processes and capabilities. In our study, we specifically address this issue and investigate the users' mental model of the systems' capabilities and limitations. While the studies above found that nonverbal explanations can be misleading to users, natural language explanations are assumed to be more transparent or less difficult to interpret <cit.>. Verbal explanations also offer the advantage that they can be collected from humans, which has led to the development of explanation benchmarks, particularly in multimodal domains <cit.>. Thus, the dominant approach to verbal explanation generation currently is to leverage human explanations during model training <cit.>. While <cit.> discuss potential faithfulness issues related to supervising explanation generation with human explanations, we are not aware of work that explicitly tests these supervised models in a user-centered setting similar to ours. § APPROACH We conduct a study to investigate how users of an AI system perceive its limitations when it encounters tasks that it cannot perform perfectly. We aim to investigate whether providing explanations alongside model responses helps users build an appropriate mental model of the AI system's capabilities and limitations. We control the AI system's limitations by systematically manipulating its inputs. We design a questionnaire for users to judge specific aspects of the AI system's capabilities. This allows us to measure whether users can diagnose which capabilities of the AI system have been perturbed through our explicit input manipulations. The design of our study is summarized in Figure <ref> and will be explained in detail below. VQA Task and Abilities We employ a visual question answering and explanation task: the input to the AI system is an image and a question in natural language, and its task is to generate an answer and a natural language explanation that justifies the answer. We select a visual question-answering setting as it is a rather simple task for humans and, at the same time, a task that involves distinguishable semantic-visual reasoning capabilities. This is important for our setting since we want to test whether users can differentiate specific system capabilities, based on generated explanations. Thus, inspired by 's () CLEVR-X benchmark for explainable VQA, we assume that these capabilities involve the abilities to process objects' (i) color, (ii) shape, (iii) material, and (iv) scene composition (e.g., spatial relations, relative size). In our study participants are asked to rate the AI system's capabilities along these four dimensions, next to other, more general criteria for competence and fluency (see Figures <ref> and <ref> in Appendix <ref>). In the CLEVR-X benchmark, these dimensions are given by construction: the visual scenes are synthetically generated and composed of objects defined by attributes for color, material, and shape. The corresponding questions explicitly relate to one or multiple of these dimensions. In real-world image benchmarks, such as VQA-X <cit.>, these abilities are often more implicit, but still highly relevant (see examples in Figure <ref>). We run our study on items from both benchmarks. Color vs. Grayscale Input Our goal is to investigate whether explanations help users in diagnosing system limitations. 
To introduce these limitations in a controlled way, we manipulate the input of the VQA systems. Out of the four VQA capabilities explained above (color, shape, material, and scene), the color dimension lends itself to straightforward manipulation: during inference, systems either receive the image (i) in full color or (ii) in grayscale. This induced limitation resembles a situation where a multimodal AI model was trained on colored images but, at run-time, a camera/visual sensor is broken such that model inputs are perturbed. To make sure that this manipulation induces an incorrect model response, we only include items that are correctly answered with the full color image input but incorrectly answered with the grayscale image input. This item selection accounts for the fact that VQA models can be assumed to have further limitations that we cannot explicitly control for and exclude items (i) where the VQA does not generate the correct ground-truth answer for the colored image, and (ii) where the VQA generates the correct answer for the grayscale image. This gives us a clean set of items where the limitations of the AI system can be attributed to a particular error source. The participants in our study were unaware of the underlying color–grayscale manipulation: they saw images in color, along with the models' answers and explanations. Our goal was to determine whether participants were able perceive the limitations of the model, i.e., whether they could identify the system's lack of color recognition ability. See Figure <ref> for an illustration of this set-up. Experiments A and X To investigate the effect of providing generated explanations alongside the system answers, we conduct two separate studies: In Experiment X, participants were shown both the answer and its explanation, whereas in Experiment A participants were shown only the answer without an explanation. In both studies, we ask participants to rate each item for the system's capabilities (color, shape, material, scene), the overall system competence, answer correctness, the consistency of answer/explanation, the consistency of explanation/image, and the explanation's fluency. Importantly, participants in both Experiments A and X received mixed sets of items from all systems, data sets, and color conditions, and we collected judgments for each item. In this way, we wanted to prevent them from becoming “conditioned” to a particular setting, i.e., getting used to certain ways of answering or explaining and becoming overly sensitive to changes in patterns. If explanations lead users to build more appropriate mental models, participants should, generally speaking, be able to differentiate items where systems processed grayscale vs. full color images. We approached this broad expectation with five hypotheses specific to our set-up (see Table <ref> for a brief summary). First, hypotheses H1A and H1X relate to the differences in competence scores between color and grayscale conditions. Here, we expect that explanations help participants to differentiate between different system capabilities. H1A In Exp.A, competence and all capability scores are lower in the grayscale condition than in the color condition. H1X In Exp.X, competence and color capability scores are lower in the grayscale condition than in the color condition, but other capability scores are more stable. Hypotheses H2A and H2X are concerned with the comparison between individual competence scores in the grayscale condition. 
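As an implementation aside, the item-selection step described above amounts to a simple filter: keep only items the model answers correctly on the full-color image and incorrectly on its grayscale counterpart. The sketch below is illustrative; the function and field names (run_vqa, the item dictionary keys) are assumed and do not come from the released code.

from PIL import Image

def to_grayscale(path):
    # single-channel luminance, converted back to RGB so the model input
    # shape is unchanged
    return Image.open(path).convert("L").convert("RGB")

def select_items(items, run_vqa):
    # items: dicts with 'image', 'question', 'gt_answer'
    # run_vqa(image, question) -> predicted answer string (assumed interface)
    selected = []
    for item in items:
        color_img = Image.open(item["image"]).convert("RGB")
        gray_img = to_grayscale(item["image"])
        ans_color = run_vqa(color_img, item["question"])
        ans_gray = run_vqa(gray_img, item["question"])
        if ans_color == item["gt_answer"] and ans_gray != item["gt_answer"]:
            selected.append(item)
    return selected

Returning to hypotheses H2A and H2X: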
Again, explanations should help users to identify system deficiencies. H2A In the grayscale condition of Exp.A, participants give similar scores for all capabilities. H2X In the grayscale condition of Exp.X, participants rate the color capability lower relative to the other capabilities. Hypothesis H3A/X pertains to the comparison of competence scores between Exp.A and X. If explanations make defects in color processing transparent, grayscale inputs should specifically affect scores for this dimension. H3A/X In Exp.X the overall competence is rated higher than in Exp.A. In Exp.X, color competence is rated lower or the same as in Exp.A. § EXPERIMENTAL SETUP Data We use two datasets in our study: VQA-X <cit.> and CLEVR-X <cit.>. VQA-X is extensively utilized in Visual Question Answering (VQA) tasks, as an extension of the well-established Visual Question Answering v1 <cit.> and v2 <cit.> datasets. The images within VQA-X originate from MSCOCO <cit.>, and the questions are open-ended (see Figure <ref>, top). The style of the ground-truth explanations in VQA-X varies widely, ranging from simple image descriptions to detailed reasoning <cit.>. CLEVR-X expands the synthetic dataset CLEVR <cit.>, incorporating synthetic natural language explanations. Each image in the CLEVR dataset depicts three to ten objects, each possessing distinct properties including size, color, material, and shape (see Figure <ref>, bottom). For each image–question pair in the CLEVR dataset, CLEVR-X contains multiple structured textual explanations. These explanations are constructed from the underlying scene graph, ensuring their accuracy without necessitating additional prior knowledge. Models For each dataset, we used two vision and language models: (i) NLX-GPT <cit.> and PJ-X <cit.> for VQA-X, and (ii) NLX-GPT and Uni-NLX <cit.> for CLEVR-X[ We tried to obtain model outputs from other explainable VQA-X models such as, e.g., OFA-X <cit.>, FME <cit.>, or e-UG <cit.>, but encountered significant reproducibility issues: code was unavailable or not running, authors were unavailable to provide model outputs, etc. ]. We did not use vanilla generative AI systems (such as ChatGPT) in this study, as we wanted to investigate models that were specifically constructed to provide explanations alongside their outputs. NLX-GPT is an encoder–decoder model, which combines CLIP <cit.> as the visual encoder with a distilled GPT-2 model <cit.>. Importantly, this model jointly predicts answers and explanations, i.e., it generates a single response string of the form “the answer is <answer> because <explanation>”, given a question and image. For VQA-X, we use the model from <cit.>, which is pre-trained on image-caption pairs and fine-tuned on the VQA-X data. For CLEVR-X, we use the published pre-trained weights and fine-tune the model on this dataset. Uni-NLX relies on the same architecture as NLX-GPT, but the model is trained on various datasets for natural language explanations (including VQA-X), to leverage shared information across diverse tasks and increase flexibility in both answers and explanations. We take the trained model from <cit.> and fine-tune it on CLEVR-X. While NLX-GPT and Uni-NLX generate answers and explanations simultaneously, the PJ-X model takes a two-step approach. 
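Since NLX-GPT and Uni-NLX emit a single string of the form "the answer is <answer> because <explanation>", splitting the two parts for evaluation is a small parsing step. The regex below is a minimal sketch; its tolerance of casing and whitespace is an assumption, not taken from the model code.

import re

_PATTERN = re.compile(r"the answer is\s+(.*?)\s+because\s+(.*)", re.IGNORECASE | re.DOTALL)

def split_answer_explanation(generated):
    match = _PATTERN.search(generated.strip())
    if match is None:
        # fall back: treat the whole string as the answer
        return generated.strip(), ""
    return match.group(1).strip(), match.group(2).strip()

# e.g. split_answer_explanation("the answer is green because the tree has green leaves")
# -> ("green", "the tree has green leaves")

PJ-X, as noted above, does not produce this joint string.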
It first predicts the answer with an answering model and, subsequently, generates visual and textual explanations based on the question, image, and answer[ We could not replicate 's () PJ-X results on CLEVR-X, and the authors could not provide model outputs. Therefore, we only report PJ-X on VQA-X.]. For each model, we utilize the recommended model weights and fine-tune them on the two datasets. During fine-tuning, we supply each model with the original, i.e., full color images along with the questions, answers, and explanations for both datasets. During inference, images are presented in color alongside the question, or in grayscale. User Study We conducted the study online, using https://www.prolific.co/Prolific, and obtained ratings from 160 participants (80 each in Exp.A and X) who were native English speakers with normal color vision (selected using Prolific's filters). In both experiments, we utilized identical experimental items, differing only in the presence or absence of explanations. All items consisted of instances where the model provided correct answers for colored images and incorrect answers for grayscale images. We selected a total of 128 items, evenly distributed across the datasets and models, comprising 64 for each dataset and 32 for each model, equally split between 16 colored and 16 grayscale items (for NLX-GPT, a total of 64 items were selected, with 32 items from CLEVR-X and 32 items from VQA-X). The items were distributed over four experimental lists, with each participant evaluating 32 individual items. We gathered 2560 judgments per experiment and 5120 overall. We designed the evaluation as a rating task. We informed participants that we are assessing an AI system's ability to answer questions about images (and, for Exp.X, to generate explanations). The image, question, and answer for each item were presented at the top of the page, and, in Exp.X, the generated explanation was displayed below the answer. Each item had several questions and statements for the participants to assess. First, they were asked to evaluate the correctness of the answer. In Exp.X, participants were further asked to assess whether the explanation was (i) consistent with the answer, (ii) consistent with the picture, and (iii) overall fluent. Additionally, participants in both experiments were asked to judge whether they believed that the AI system correctly identifies (iv) shapes, (v) colors, and (vi) materials, as well as whether it (vii) understands the general scene in the image. Finally, (viii) participants judged the overall competence of the system. Participants indicated their agreement on five-point Likert scales, ranging from 1 (‘strongly disagree’) to 5 (‘strongly agree’). For each criterion, we also offered the option of selecting “I don't know”. Before providing ratings, participants received instructions and viewed an example item illustrating the evaluation criteria. They were paid at a rate of £9.00 per hour. See Appendix <ref> for example trials of the experiment. § RESULTS We organize the discussion of results based on the hypotheses outlined in Section <ref>. Since we ask whether explanations help participants determine that the systems could not recognize color, the following discussion concentrates on the grayscale condition and the differences between the grayscale and color conditions (see Appendix <ref> for detailed results of the color condition). 
All systems received high ratings in all competency and capability dimensions when tested in the color condition of Exp.A and X, on both datasets (see Table <ref> in Appendix <ref>). These ratings decreased in very similar ways in the grayscale condition. Therefore, we were able to use all items from all systems to test our hypotheses, generalizing over minor system differences. We discuss differences between datasets and models in Appendix <ref>, since these were not essential for testing our hypotheses. Summaries of hypotheses and results are given in Table <ref>. Hypotheses H1A and H1X state our expectations on distinctions between the grayscale and color conditions in Exp.A and X, respectively. Figure <ref> shows the distribution of participant ratings for the AI system's ability to recognize colors, for the grayscale and color conditions in both experiments (see Figures <ref>, <ref>, <ref>, and <ref> in Appendix <ref> for results on the other capabilities). In Exp.A and X, there is a consistent trend of better assessments when systems have been seen the color images compared to grayscale images, across different systems, datasets, and all capabilities. Most users rate the color capability with the highest rating in the color condition (Figure <ref>a/c) and with the lowest rating in the grayscale condition (Figure <ref>b/d). The same holds for all other capabilities and competency (Figures <ref>, <ref>, <ref>, and <ref>). This confirms hypothesis H1A, i.e., ratings for all capabilities decrease when the system does not see color. However, this does not support H1X, as we expected that only overall competence and capability to recognize colors would be rated lower in the grayscale condition when explanations were given, and not all capabilities. This suggests that the AI's explanations did not help users diagnose the system's limitation in the grayscale condition, as all capability dimensions are similarly affected in Exp.X. Hypotheses H2A and H2X state our expectations for the grayscale condition. Table <ref> presents the human evaluation results in Exp.A and X. Starting with Exp.A, Table <ref> shows that all evaluation criteria in the grayscale condition receive relatively low scores. Interestingly, the manipulated capability, i.e., to recognize colors, does have slightly worse ratings than the other criteria (for most models and datasets). This outcome does not align with our expectation (H2A) as participants in Exp.A solely viewed the answers without access to explanations, making it difficult to discern which specific ability or (limitation) influenced the model's answer. Results from Mann-Whitney U tests (see Table <ref> in Appendix <ref>) show significant differences between the ability to recognize colors and the ability to recognize other criteria for Exp.A (except for the models' overall competence), contradicting hypothesis (H2A). This suggests that users in Exp.A were able to interpret incorrect system answers more than we expected. For Exp.X, the results in Table <ref> suggest a very similar trend to Exp.A: the ability to recognize colors is rated slightly lower than the other capabilities. The Mann-Whitney U tests for Exp.X (reported in the lower part of Table <ref> in Appendix <ref> ), again confirms significant differences between the perceived ability to recognize colors and the other abilities (except the systems' overall competence). 
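For concreteness, a comparison of this kind can be reproduced with standard tooling; the following minimal sketch runs a two-sided Mann-Whitney U test between the ratings of two capabilities in the grayscale condition (the rating lists are made-up placeholders, not study data).

from scipy.stats import mannwhitneyu

color_ratings = [1, 2, 1, 1, 3, 2, 1, 2]   # example Likert ratings (1-5)
shape_ratings = [2, 3, 2, 3, 3, 2, 4, 3]

stat, p_value = mannwhitneyu(color_ratings, shape_ratings, alternative="two-sided")
print(f"U = {stat:.1f}, p = {p_value:.4f}")

The significance pattern reported above refers to tests of this form.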
Looking at Exp.X in isolation, these results seem to speak in favor of our hypothesis H2X: users were indeed able to diagnose the system defect, at least to some extent. However, in light of our findings on H2A, these results have to be interpreted with care: even without model explanations, users rated the color capability lower than others. This trend is a bit stronger in Exp.X but, overall, the differences between perceived capabilities are still rather small. The strongest expected trend in favor of H2X can be found for NLX-GPT on the CLEVR-X data: here, the median if the color rating is 1.0 and 3.0 or 2.0 for the other capabilities. For the other combinations of models and datasets in Exp.X, there is no clear difference in the median ratings for the perceived capabilities. We conclude that there is weak evidence in favor of H2X, as explanations do not substantially improve users' assessments of system capabilities. Hypothesis H3A/X states our expectations regarding the differences between Exp.A and X for overall competency and color recognition ability. Once again, consider Table <ref>. As expected, in Exp.A, i.e., without explanations, the overall competency of the models was rated low (with median values of 1.0 only). In Exp.X, although the values remain low at 2.0, there is a noticeable improvement relative to Exp.A. Thus, despite the answers being incorrect, the addition of the models' explanations enhances the perception of the models' overall competency. This could suggest that the explanations reveal other capabilities of the models, consistent with our hypothesis H3A/X. However, contrary to H3A/X, we also see a general increase in the ratings for the systems' color recognition ability in Exp.X compared to Exp.A. We expected that the explanations would make the color limitation explicit, which would result in color ability being rated worse or at least as poorly as in Exp.A. This also holds for all other model capabilities: all capability ratings are comparatively higher in Exp.X than in Exp.A (even if lower than in the color condition). This observation is supported by the Mann-Whitney U tests (see the upper part of Table <ref> in Appendix <ref>), which show significant differences between Exp.A and X for all evaluation criteria. This suggests that users rate all system capabilities significantly higher when explanations are provided. From this we conclude that, instead of making systems' limitations more transparent, the explanations contribute to an overall more positive perception of the system, regardless of its capabilities. In other words, the AI system's explanations seem to create an illusion of the system's competence that does not correspond to its actual performance. Automatic Evaluation In the VQA-X domain, automatic measures for evaluating similarity or overlap with human ground-truth explanations are commonly used <cit.>. To assess the construct validity of a representative automatic evaluation method, we compute BERTScores, measuring the similarity of ground truth explanations from both datasets to human evaluation scores. Table <ref> reports the results of the BERTscore metric, showing that they do not exhibit any notable differences between the grayscale and color conditions, which clearly contradicts the results of our human investigation. Thus, while user ratings between the grayscale and color condition are located on opposite ends on the Likert scale, BERTscores show marginal differences across the board. 
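For reference, the automatic scores discussed here can be computed per item along the following lines; this sketch assumes the bert-score package, and the candidate/reference strings are illustrative.

from bert_score import score

candidates = ["the tree has green leaves"]                 # generated explanations
references = ["because the leaves of the tree are green"]  # ground-truth explanations

P, R, F1 = score(candidates, references, lang="en", verbose=False)
print(f"BERTScore F1 = {F1.mean().item():.3f}")

Aggregated this way, the scores show little sensitivity to the grayscale manipulation, as noted above.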
Yet, when comparing the two datasets, the BERTScores for the CLEVR-X dataset show improved values (in both the grayscale and color conditions), aligning with the human results from Exp.X (see Table <ref> and <ref> in Appendix <ref>). Summary Table <ref> provides an overview of the validity of our hypotheses. Generally, our results show that explanations do not have a desirable effect on users' assessment of the system's competency and capabilities. They do not help users construct a more accurate mental model of the system and its capabilities and limitations, but simply lead to more positive user assessment overall. Our results are strikingly consistent across models and datasets. Even systems fine-tuned on the CLEVR-X benchmark, where explanations were designed to systematically mention the capabilities we assessed in our study (including color), do not address these limitations. Figure <ref> shows representative examples of why this might be the case: rather than avoiding color words or using incorrect colors, systems seem to be able to guess the correct color from the question or the general context (e.g., green in the context of tree). This behavior is well-known in multimodal language models but should be avoided in explanation tasks since it counteracts transparency and appropriate user assessment. § DISCUSSION OF IMPLICATIONS It is still not well understood how XAI can bridge the gap between highly complex black-box models with largely opaque internal reasoning processes and users' intuitive understanding of these. Generally, our study provides evidence that explanations generated by state-of-the-art systems do not always lead to the expected effects of XAI and that explanations may even further obstruct AIs' reasoning processes and trick users into believing that the AI is more competent than it actually is. This result is particularly noteworthy in light of the fact that the manipulation employed in our study introduced an obvious error that should be easy to spot for users (defects in systems' color recognition). XAI Models Our study underlines the great importance of prioritizing faithfulness over plausibility in explanation methods <cit.>. With today's AI systems and LLMs, users face the challenging situation that these systems present fluent outputs projecting confidence and competence. Yet, this confidence may not be grounded in actual system capabilities and reliability <cit.>. Our findings suggest that this also holds, to some extent, for state-of-the-art approaches to natural language explanation generation. Looking at the architecture of these models, this is by no means surprising. At least within the domain of VQA-X, which we focused on in this paper, explanation generation approaches largely follow common language modeling architectures and prioritize generating fluent, human-like outputs. Despite the fact that the importance of faithfulness in XAI has been recognized for some time and it continues to be a challenge <cit.>. Evaluation of XAI Our study also highlights the importance of evaluating explanation methods in thorough, detailed, and user-centered ways <cit.>. In the domain of VQA-X, automatic, benchmark-based evaluations still seem to be in focus and widely accepted in the community. All systems we tested in our study have been assessed mainly in automatic evaluations <cit.>. This stands in stark contrast to research showing that XAI evaluations often have little construct validity, i.e., do not assess the intended properties of explanations <cit.>. 
Our BERTscore-results lend further support to this argument. § CONCLUSION This paper investigates the effects of providing natural language explanations on users' ability to construct accurate mental models of AI systems' capabilities, and whether these explanations allow them to diagnose system limitations. Results from two experiments show that natural language explanations generated by state-of-the-art VQA-X systems may actually hinder users from accurately reflecting capabilities and limitations of AI systems. Participants who received natural language explanations projected more competence onto the system and rated its limited capabilities higher than those who did not receive explanations. § LIMITATIONS We identify the following limitations in our work: The addition of further models and data sets might have provided additional insights into our experiments. Unfortunately, recently research on generating natural language explanations has not been very active. The best known approaches are models like PJ-X <cit.> or e-UG <cit.>, which have older code bases with reproducibility issues. We have tried to include other models (see Section <ref>, <ref>). For the grayscale condition, we remove color information at the inference level for models trained on colored input. An alternative approach would be altering inputs during model training, possibly leading to deficiencies that are harder to identify for participants. Similarly, other kinds of perturbations such as altering relative object sizes or scene layouts might affect different dimensions of perceived system capabilities than color recognition. Here, we focused on color, as this property is easier to control and less intertwined with other properties than, e.g., object size (which might also change how relative positions are described). § ETHICS STATEMENT Our study focuses on user-centered evaluation of XAI systems and on understanding whether these systems fulfill the promise of making black-box AI systems more transparent for users. Therefore, we believe that our study contributes to understanding and improving the social and ethical implications of recent work in NLP, and Language & Vision. In our study, we collect ratings from Prolific users but, other than that, did not record any personal information on these users. § APPENDIX §.§ Materials Availability Statement We used the following public resources in our work: * Source code for NLX-GPT is available from GitHub at https://github.com/fawazsammani/nlxgpthttps://github.com/fawazsammani/nlxgpt * Source code for Uni-NLX is available from GitHub at https://github.com/fawazsammani/uni-nlx/https://github.com/fawazsammani/uni-nlx/ * Source code for PJ-X and VQA-X data is available from GitHub at https://github.com/Seth-Park/MultimodalExplanationshttps://github.com/Seth-Park/MultimodalExplanations * COCO Images for VQA-X are available here: https://cocodataset.org/https://cocodataset.org/ * CLEVR-X data is available from GitHub at https://github.com/ExplainableML/CLEVR-Xhttps://github.com/ExplainableML/CLEVR-X * CLEVR images for are available here: https://cs.stanford.edu/people/jcjohns/clevr/https://cs.stanford.edu/people/jcjohns/clevr/ Our source code and the data from the human evaluation study will be made available in form of an accompanying data publication. §.§ Statistical Tests Table <ref> shows the results of Mann-Whitney U tests in the grayscale condition. 
The upper half of the table reports the differences in user ratings of system capabilities (color, shape, material, scene) and overall competence between Exp.A and X, all differences are highly statistically significant. The lower half of the Table reports the differences in ratings with Exp.A and X. Table <ref> reports the same tests for the color condition. Here, only the difference between overall competence is statistically significant between Exp.A and X while all system capabilities are rated similarly with or without explanations. This further supports our finding that explanations enhance user's perception of system competence, regardless of the correctness of system answers. §.§ Additional Results Answer Correctness First, recall that we only included cases where the models generated incorrect answers for grayscale images and correct answers for full-color images, according to ground-truth answers in the datasets. Table <ref> displays frequency distributions of correctness ratings in our user study: ‘no’ ratings predominated in the grayscale condition, whereas ‘yes’ ratings were more prevalent in the color condition across both datasets. We also conducted a chi-squared test of independence on this evaluation criterion (χ^2 = 2.3617, df=2, p = 0.67), finding no statistically significant difference between Exp.A and X regarding the evaluation of the answers' correctness. These results replicate and confirm the correctness of ground-truth answers in VQA-X and CLEVR-X. Differences between Datasets and Models If we first look at Exp.A (Table <ref>), only minimal distinctions are evident between datasets or models, particularly concerning the models' ability to recognize colors, materials, and their overall competency. While slight variations exist in the other evaluation criteria, none are notably remarkable. For instance, regarding their understanding of the general scene, the models exhibit slightly better performance with the CLEVR-X dataset. In Exp.X (Table <ref>), on the other hand, the results exhibit some more variation between models and datasets. For example, only for the models' overall competency, do we find the same (median) value across models and datasets. Overall, it also appears that the items based on CLEVR-X data perform slightly better in Exp.X, specifically in terms of the models' ability to recognize shapes and materials, as well as their general scene understanding and overall competence. Table <ref> shows the frequency of questions in the human evaluation study that contain the word “color[s]” or specific color terms like “red” or “blue” etc., categorized by dataset. It is evident that almost all questions in the CLEVR-X dataset contain color terms, with about half explicitly mentioning the word “color”. Conversely, in the VQA-X dataset, only three out of 64 questions include the word “color[s]”. Hence, the observed distinctions between the datasets may be attributed to this contrast. Analysis of the Color Condition Table <ref> shows the human evaluation results for the color condition in Exp. A and X. In contrast to the results of the grayscale condition (Table <ref>), with respect to all the evaluation criteria, the evaluation for both Exp.A and Exp.X is very good. This corresponds to our expectation because only items with correct model answers were included in the color condition. Furthermore, we can see that in both Exp.A and Exp.X, there are no remarkable differences between the ability to recognize colors and the other tested abilities. 
This is also evident from the Mann-Whitney U Test results in Table <ref>, especially when compared to the Mann-Whitney U results for the grayscale condition in Table <ref>. However, it is notable that, with respect to all evaluation criteria, the PJ-X model receives lower ratings in Exp.X compared to Exp.A. In other words, including explanations in Exp.X results in a decline in performance for the PJ-X model. For the other models, we do not observe this difference between the two Experiments; instead, their evaluation remains fairly consistent in the color condition across both experiments. Consequently, the explanations produced by the PJ-X model seem inferior to those of the other models. This discrepancy may be due to the unique architecture of the PJ-X model, which, unlike the other models, generates answers and explanations in two separate steps rather than one. Correlations between BERTscore and human judgements Table <ref> shows Pearson’s correlation coefficients (ρ) between the automatic and human evaluation metrics for the CLEVR-X and VQA-X datasets. Interestingly, we find large differences between the datasets. While all human metrics show statistically significant correlations with BERTScore for the VQA-X dataset, we find no statistically significant correlations for the CLEVR-X dataset. However, one commonality between the two datasets is the lack of differentiation between various criteria. The fact that all skills either correlate or show no correlation suggests that the automatic BERTScore metric is not able to capture the nuanced distinctions that human evaluation can discern. §.§ Online Experiment Figures <ref> and <ref> show screenshots of the study, example items and evaluation criteria.
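For completeness, the correlation analysis between the automatic metric and the human ratings reported in this appendix can be reproduced with a standard Pearson test; the arrays below are placeholders rather than study data.

from scipy.stats import pearsonr

bertscore_f1 = [0.82, 0.91, 0.74, 0.88, 0.79, 0.93]     # per-item BERTScore F1
mean_human_rating = [3.1, 4.2, 2.5, 4.0, 3.0, 4.5]      # per-item mean Likert rating

r, p_value = pearsonr(bertscore_f1, mean_human_rating)
print(f"Pearson r = {r:.2f}, p = {p_value:.3f}")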
http://arxiv.org/abs/2406.18502v1
20240626171044
Studying single-electron traps in newly fabricated Skipper-CCDs for the Oscura experiment using the pocket-pumping technique
[ "S. E. Perez", "B. A. Cervantes-Vergara", "J. Estrada", "S. Holland", "D. Rodrigues", "J. Tiffenberg" ]
physics.ins-det
[ "physics.ins-det", "astro-ph.IM" ]
Pseudo-Dirac Neutrinos and Relic Neutrino Matter Effect on the High-energy Neutrino Flavor Composition Ivan Martínez-Soler July 1, 2024 ======================================================================================================== § INTRODUCTION Since their invention in 1969, Charge-Coupled Devices (CCDs) have been widely adopted in space and ground based astronomical surveys. They possess appealing characteristics such as a spatial resolution as low as a few μm, low readout noise and a low dark-count rate. Recently, the skipper-CCD <cit.>, with enhanced sensitivity to low-energy signals, has become one of the most promising technologies for Dark Matter (DM) and rare-event searches. In these applications, the discovery potential is highly constrained by the one-electron background rate <cit.>. Many background sources of Single-Electron Events (SEEs) in skipper-CCD detectors have been identified and characterized, including temperature fluctuations, radiative processes from external radiation interactions, low-energy photons from the amplifiers and clock-induced charge <cit.>. However, we have recently identified another source of SEEs in the newly fabricated skipper-CCDs for Oscura <cit.>, a multi-kilogram experiment aiming to probe electron recoils from sub-GeV DM. We associate this source to defects/contaminants within the CCD buried-channel that create single-electron traps with release times comparable to consecutive pixel-readout time, causing a “tail” of deferred single-electron depositions after particle tracks. In some cases, this charge can spread within the image, leading to an apparent increase in the exposure-dependent single-electron rate (SER), which might be mistaken for the sensor's intrinsic dark current (DC). In this work, we perform the established trap pumping technique <cit.> to three different Oscura prototype sensors to characterize their buried-channel single-electron traps. We measure the energy and cross section of the main trap species found in the prototype sensors and verify the effect of deferred charge from trap emission on the measured exposure-dependent SER through a Monte Carlo simulation. § CHARGE TRAPPING CHARACTERIZATION IN CCDS §.§ Shockley-Read-Hall theory   Traps associated to intermediate energy levels within the Si bandgap are usually modeled using the Shockley-Read-Hall model for carrier generation and recombination <cit.>. The traps lying within the CCD charge-transfer region could capture charge carriers from charge packets as they are transferred through the device, and release them at a later time. The probability of a trap to capture (c) or emit (e) one charge carrier within the time interval [t_1, t_2] is given by P_c,e=e^-t_1/τ_c,e-e^-t_2/τ_c,e . with τ_c,e the characteristic capture (emission) time constant, which can be expressed as τ_c=1/σ v_th n and τ_e=1/σ v_th N_ce^E_t/k_BT . Here, T is temperature [K], E_t is the trap energy level [eV], σ is the trap cross section [cm^2], v_th is the charge-carrier's thermal velocity [cm/s], n is the charge-carrier concentration in the vicinity of the trap [cm^-3], and N_c is the effective density of states in the conduction band [cm^-3]. v_th and N_c depend on T and on the charge-carrier's effective mass for conductivity m_ cond and for density of states m_ dens calculations as v_th=√(3k_BT/m_ cond) and N_c=2[2π (m_ dens) k_BT/h^2]^3/2 . 
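A direct numerical reading of the expressions for τ_e, v_th and N_c above is often useful when planning a measurement. The sketch below evaluates τ_e(T) for a hole trap, using the effective masses for p-channel devices given immediately below (0.41 m_e for conductivity, 0.94 m_e for density of states); the trap parameters in the example call are illustrative.

import numpy as np

K_B = 1.380649e-23      # J/K
H   = 6.62607015e-34    # J s
M_E = 9.1093837e-31     # kg
EV  = 1.602176634e-19   # J

def tau_e(T, E_t_eV, sigma_cm2, m_cond=0.41 * M_E, m_dens=0.94 * M_E):
    # emission time constant [s] for a hole trap of energy E_t [eV] and
    # cross section sigma [cm^2] at temperature T [K]
    kT = K_B * T
    v_th = np.sqrt(3.0 * kT / m_cond)                        # m/s
    N_c = 2.0 * (2.0 * np.pi * m_dens * kT / H**2) ** 1.5    # m^-3
    sigma_m2 = sigma_cm2 * 1e-4                              # cm^2 -> m^2
    return np.exp(E_t_eV * EV / kT) / (sigma_m2 * v_th * N_c)

for T in (150.0, 170.0, 190.0):
    print(T, tau_e(T, E_t_eV=0.32, sigma_cm2=1e-15))         # illustrative trap parameters

The strong temperature dependence this produces, spanning orders of magnitude between 150 K and 190 K, is what the temperature scan described below relies on.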
In p-channel CCDs, charge carriers are holes for which m^h_ cond≃ 0.41 m_e and m^h_ dens≃ 0.94 m_e between 100K and 200K <cit.>, with m_e the free electron rest mass. §.§ Pocket-pumping technique The technique of pocket pumping <cit.> has proved to be a powerful tool to spatially localize and measure the characteristic parameters of charge traps lying within the CCD charge-transfer region. This method consists of filling the traps by “uniformly” illuminating the active area of the CCD and allowing them to emit the trapped charge into their neighbor pixel multiple times. This is done by repeatedly moving the charge back and forth, between pixel phases, creating “dipole” signals relative to the flat background. The method is illustrated in Fig. <ref>. The sequence of states in this figure is useful to detect traps located below phases ϕ_1 and ϕ_3 in a three-phase device. Within the trap pumping sequence, charge capture occurs during the state in which charge remains under the phase with the trap. Assuming a 100% probability of capture, the emission clock starts running just after charge is moved from the phase with the trap, going through the “transient” phase(s). The state in which trap emission takes place corresponds to the one in which charge from the adjacent pixel to the pixel with the trap lies in the adjacent phase to the phase with the trap. The effective time spent in this state can be considered to be a multiple integer of t_ph, which is the time spent under the “transient” phase(s). Particularly, for the pumping sequence shown in Fig. <ref>, the time interval spent in this state is [t_ph, nt_ph] and the probability of emission is given by Eq. (<ref>) evaluated within this time interval. The emission clock resets after each pumping cycle, when charge passes again through the trap. After completing a given number of pumping cycles N_ pumps, the intensity of the dipole signal, composed of a bright (b) and a dark (d) pixel with S_b and S_d charge carriers, respectively, can be expressed as I_ dip=1/2|S_b-S_d|=N_pumpsD_tP_cP_e , where D_t is the trap depth. Here, the probability of the trap to capture a charge carrier P_c has been incorporated as a linear scaling factor <cit.>. The time spent in the state in which trap emission takes place can be optimized to minimize the total time of the pumping sequence to achieve the maximum dipole intensity 𝒯|_I^ max_ dip. In the case of the three-phase pumping sequence illustrated in Fig. <ref>, 𝒯=2nt_phN_ pumps. From Eq. <ref> and assuming I_ dip∝ P_e, the maximum intensity I^ max_ dip occurs at t_ph|_I^ max_ dip=τ_e lnn/(n-1); note that for higher values of n, I^ max_ dip happens at lower t_ph. Given I^ max_ dip, N_ pumps|_I^ max_ dip∝ n^n/(n-1)/(n-1). Hence, the minimum of 𝒯|_I^ max_ dip is achieved when n=8 <cit.>. Using the optimization described above, for a given t_ph Eq. <ref> takes the form I_ dip=N_pumpsD_tP_c(e^-t_ph/τ_e-e^-8t_ph/τ_e) . By fitting I_ dip as a function of t_ph, the emission constant of an individual trap can be extracted. Furthermore, if data is taken at different temperatures, from the fit of τ_e(T), given by Eq. <ref>, the energy level and cross section of the trap can be obtained. §.§ Effects of charge traps in electron-counting CCDs Typical images from CCDs used for DM and rare-event searches are dark exposures containing tracks of different particles. 
In a sensor containing traps within the sensor charge-transfer region, depending on the ratio of the traps characteristic emission time and the readout time between two consecutive pixels t_pix, trapped charge from these tracks can be emitted: 1) within the pixels of the event, when τ_e/t_pix≪ 1; 2) in a highly localized region in the readout direction next to the event, when τ_e/t_pix≃ 1; or 3) after several pixels, when τ_e/t_pix≫ 1. Because of the dependence of τ_e with T, i.e. Eq. <ref>, the “tail” of deferred charge from trap emission next to an event is expected to span more pixels at lower temperatures. Skipper-CCDs provide a unique tool to resolve unequivocally the spatial distribution of depositions coming from emissions of single-electron traps, due to their sub-electron resolution. This is evident in Fig. <ref>, where dark exposure images at different temperatures from a skipper-CCD with traps within the sensor charge-transfer region are shown. As these images were taken with multiple samples per pixel, achieving sub-electron noise levels, the spatial distribution of the deferred charge next to particle tracks is resolved. The identification and subsequent masking of pixels with deferred charge from trap emission is trivial when τ_e/t_pix≤ 1 as the deferred charge remains near the main event. However, deferred charge from traps with τ_e/t_pix≫1 cannot be easily identified because of the spatial separation of the deferred charge from the original pixel, and taking a conservative masking approach could lead to a significant loss in exposure. To minimize the span of the deferred charge, τ_e can be decreased by going to higher temperatures and/or t_pix can be increased. With skipper-CCDs the latter can be done by increasing the number of samples per pixel. However, these approaches lead to a background increase from other temperature and/or exposure-dependent sources, which is not desirable in some cases. § POCKET-PUMPING MEASUREMENTS ON OSCURA SKIPPER-CCDS §.§ Oscura skipper-CCDs The newly fabricated skipper-CCDs for Oscura are 1.35 MPix p-channel CCDs with 15 μm×15 μm three-phase pixels and a thickness of standard 200-mm silicon wafers (725 μm) <cit.>. During the Oscura R&D phase, two batches of sensors were fabricated using two different extrinsic gettering techniques[Gettering techniques, implemented during CCD fabrication, create trapping sites for mobile impurities to be drawn away from the active regions of the device. Extrinsic gettering processes create these sites on the back side of the wafer.] <cit.>. All wafers from the first batch and one half of the wafers from the second batch underwent a P ion-implantation induced gettering <cit.>. The second half of wafers from the second batch underwent a POCl_3 induced gettering <cit.>. In this work, we characterize single-electron traps from three different Oscura prototype sensors, labeled A, B and C in Table <ref>, from the two fabricated batches and gettering processes. §.§ Data taking   We use the pocket-pumping technique discussed in Section  <ref> to localize and characterize traps in the Oscura prototype skipper-CCDs. First, using a violet LED externally controlled by an Arduino Nano, we illuminate the active area of the sensors, which is loosely covered with a Cu plate to increase uniformity in the illumination profile. The median charge per pixel after illumination lies between 1500 e^- and 2000 e^-. 
Then, we perform a pocket-pumping sequence to probe traps below pixel phases ϕ_1 and ϕ_3, such as the one illustrated in Fig. <ref>, including the 𝒯|_I_ max minimization discussed in Section <ref>. We collected images with N_ pumps≃3000, varying t_ph from 6.6 μs to 1.3 s and T from 150K to 190K. Fig. <ref> shows a section of the images from the pocket-pumping measurements of prototype sensor A at 150K, for two different t_ph. The right image in this figure reveals a higher density of traps with τ_e∼𝒪(ms). We found a uniform spatial distribution of traps through the whole active area of the sensors. §.§ Analysis and results With the most efficient dipole-detection algorithm discussed in Appendix <ref>, we identify dipoles and track their position in each of the images. Using sets of images from the same sensor acquired at a fixed temperature, we compute I_ dip as a function of t_ph for each dipole found. We fit this curve with the function given by Eq. <ref> and extract the trap emission-time constant τ_e associated to that dipole. As a quality selection criteria to the dipole intensities, we require a coefficient of determination greater than 0.7 and a relative error on τ_e below 50%. With this criteria we reject between 2% to 20% of dipoles, depending on the dataset. From now on, we will refer as “detected traps” to those probed below pixel phases ϕ_1 and ϕ_3 that were found with the trap-detection algorithm, were not rejected by the selection criteria, and are not overlapped dipoles. Figure <ref> (left) shows the intensity as a function of t_ph of a detected trap fitted by Eq. <ref>. For each set of images from the same sensor at a given temperature, we build a trap map with the position and the emission-time constant of each detected trap. One of these maps, from a 50 pix × 50 pix region of the active area of Oscura prototype sensor A at 150K, is shown in Fig. <ref> (right). The histograms in Fig. <ref> (left) show the τ_e distributions at 190K of the detected traps for the Oscura prototype sensors A and B, which are from different fabrication batches but underwent the same gettering process (ion implantation). The histograms in Fig. <ref> (right) show the τ_e distributions at different temperatures of the detected traps for the Oscura prototype sensor C, with the POCl_3 gettering. In all the τ_e distributions in Fig. <ref>, a primary peak can be seen, which is associated to the largest population of traps within the sensors' buried-channel region. Also, in Fig. <ref> (right) the peaks in the distributions move towards higher values of τ_e at lower temperatures, which is expected from the dependence of τ_e with T, i.e. Eq. <ref>. Comparing the τ_e distributions from prototype sensors A and B in Fig. <ref> (left), both with the ion-implantation gettering, we see a larger population of traps with τ_e>0.1s for T>170K in the distributions from prototype sensor A, forming a secondary peak. We associate the presence of this peak to the fabrication batch as none of the distributions from sensors from the 2nd batch, i.e. B and C, show a significant trap population at those τ_e. For each detected trap, we plot τ_e as a function of T and fit it with the function in Eq. <ref>. We perform a chi-squared test on the fits and rejected those with a p-value below 0.05. From each of those fits, we extract the energy E_t and cross section σ associated to each trap, shown as dots in the scatter plot in Fig. <ref> (left). 
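The per-dipole fit just described is a standard least-squares problem; the sketch below fits the dipole-intensity model I_dip(t_ph) = A (e^{-t_ph/τ_e} - e^{-8 t_ph/τ_e}), with A absorbing N_pumps D_t P_c, to placeholder data and extracts τ_e with its uncertainty.

import numpy as np
from scipy.optimize import curve_fit

def dipole_model(t_ph, amplitude, tau_e):
    return amplitude * (np.exp(-t_ph / tau_e) - np.exp(-8.0 * t_ph / tau_e))

# illustrative measurement: phase times [s] and dipole intensities [e-]
t_ph = np.array([6.6e-6, 1e-4, 1e-3, 5e-3, 2e-2, 1e-1, 1.3])
i_dip = dipole_model(t_ph, 2500.0, 5e-3) + np.random.default_rng(0).normal(0.0, 30.0, t_ph.size)

popt, pcov = curve_fit(dipole_model, t_ph, i_dip, p0=(3000.0, 1e-3))
tau_fit, tau_err = popt[1], np.sqrt(np.diag(pcov))[1]
print(f"tau_e = {tau_fit:.2e} +/- {tau_err:.1e} s")

Repeating this per temperature and then fitting the resulting τ_e(T) with the expression implemented earlier yields the trap energy E_t and cross section σ for each dipole.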
The distributions of these variables of the detected traps in each of the Oscura prototype sensors are shown in Fig. <ref>. The maximum value of each of these histograms and its associated error, computed as the full width at half maximum, is shown in Table <ref> [Hist. max.]. Moreover, from the τ_e distributions at different temperatures associated to each sensor, i.e. histograms in Fig. <ref>, we plot the emission-time constants associated to the primary peaks τ^ peak_e against T, with an error given by its full width at half maximum, as shown in Fig. <ref> (right). We fit the data points with the function in Eq. <ref> and extract from it the energy E_t and cross section σ associated to the largest population of traps. The value of these variables and its associated error are shown in Table <ref> [τ^ peak_e(T) fit], and plotted as stars in Fig. <ref> (left). As can be seen from Table <ref>, the values of the trap parameters extracted from the primary peaks of the distributions in Fig. <ref> [Hist. max.] and from the fits in Fig. <ref> [τ^ peak_e(T)] are mutually consistent within errors. Furthermore, the parameters from prototype sensors B and C, both from the second fabrication batch, are also mutually consistent. This suggests that the kind of defects/contaminants inducing charge traps is related to the fabrication batch. It is worth noting that while the relative errors for energies are small, below 3%, those for cross sections are significantly higher, ranging from 26% in the best case to 53% in the worst case. The trap energies and cross sections reported in Table <ref> are similar to those reported for hole traps associated to transition metals in p-type silicon <cit.>, which are common materials used in semiconductor processing, for example: palladium (Pd), with E_t=0.31 eV and σ=0.8×10^-15 cm^2, molybdenum (Mo), with E_t=0.31 eV and σ=0.43×10^-15 cm^2, platinum (Pt), with E_t=0.32 eV and σ=1×10^-15 cm^2, and silver (Ag), with E_t=0.34 eV and σ=0.87×10^-15 cm^2. Although gettering techniques are implemented during the fabrication process to capture impurities, the use of the same equipment for productions involving transition metals could lead to unwanted metal contamination in the sensors. § EFFECT OF 1E^- TRAPS ON DC MEASUREMENTS IN SKIPPER-CCDS Dark current (DC) is an irreducible exposure-dependent background for skipper-CCD detectors that originates from the thermal excitation of electrons from the valence band to the conduction band. As it constrains the lowest SER that can be achieved, estimating its value is important in applications where the science reach is limited by the one-electron background rate. Single-electron traps within the skipper-CCD buried-channel constitute a source of SEEs, which can come from: 1) deferred charge from trap emission, see discussion in Section <ref>, and 2) charge carriers generated through excitation processes that are enhanced by intermediate energy levels between the valence and conduction bands (midband states) associated to the traps <cit.>. SEEs coming from deferred charge from trap emission are a background for DC measurements. However, SEEs from carriers generated through midband states contribute to the sensor's DC. 
The generation rate of the latter [carriers cm^-3 s^-1], in a fully-depleted CCD, can be expressed as <cit.> U ∼ σ v_th n_i N_t / [2cosh(|E_t - E_i|/k_BT)], where N_t is the concentration of traps at energy level E_t [cm^-3], E_i is the intrinsic (undoped) Fermi level [eV] and n_i is the intrinsic carrier concentration [cm^-3] <cit.>. E_i and n_i are computed as E_i=1/2[E_g+k_BT ln(N_v/N_c)] and n_i=[N_cN_v exp(-E_g/k_BT)]^1/2, assuming that the silicon band gap depends on temperature as E_g(T)=1.1557-T^2[7.021× 10^-4/(T+1108)] <cit.>. The temperature dependence of N_c(v) is as in Eq. <ref> with m^h(e)_dens≃ 0.94 (1.07)m_e for p-channel CCDs between 100K and 200K <cit.>. Using the energy and cross section associated with the largest population of traps found from the pocket-pumping measurements (Table <ref>), we computed the contribution to DC from the single-electron traps, obtaining 1.05×10^-14 (3.54×10^-10) e^-/pix/day for 130K (150K); these numbers are several orders of magnitude below the expected DC, see discussion in Section <ref>. Here, we assumed N_t=2.15(n_traps/V_bc) with n_traps=8.5 × 10^4 the average number of traps in the buried-channel region of one sensor identified with the detection algorithm in the pocket-pumping measurements before applying the selection criteria, and V_bc=1.095× 10^-4 cm^3 the effective sensor volume that was probed with the pocket-pumping technique; the factor 2.15 accounts for the traps in the second phase that were not probed and for a conservative 30% dipole-detection inefficiency. §.§ DC measurements at surface and underground A typical way to quantify dark current in skipper-CCDs is to acquire dark images with different exposure times, mask events within the images associated with any other source of background, compute the SER as a function of exposure time, and extract the slope, i.e. the dark single-electron rate, which represents an upper limit on the sensor's DC; see discussion in <cit.>. Performing these measurements underground allows us to minimize SEEs generated from external radiation interactions, which constitute a dominant background at the surface. In fact, the lowest single-electron rate ever achieved in a skipper-CCD is 1.6 ×10^-4 e^-/pix/day <cit.>, reported by the SENSEI Collaboration from measurements in their setup at the MINOS cavern in the Fermi National Accelerator Laboratory (FNAL). In Refs. <cit.> we presented DC measurements performed in a dedicated setup with 2 inches of lead shielding at the surface with an Oscura prototype sensor from the same wafer as prototype A; these correspond to the black circles at 140K, 150K and 160K in Fig. <ref> (right). The same setup was moved ∼100 m underground, to the MINOS cavern at FNAL; see Fig. <ref> (left). In that setup, we performed DC measurements with an Oscura prototype sensor from the same wafer as prototype C, following the previously discussed method. We acquired images varying the exposure time from 0 to 150 min, with 324 samples/pix. The exposure-dependent single-electron rates were computed from images acquired at 131K, 138K and 148K; these are shown in Fig. <ref> (right) as blue circles. The lowest value achieved was (1.8± 0.3)×10^-3 e^-/pix/day at 131K. The expected dark current in a CCD as a function of T can be expressed as <cit.> R_DC(T) = [A_pix D_FM^T_0 / (q_e T_0^(3/2) e^(-E_g(T_0)/2k_BT_0))] T^(3/2) e^(-E_g(T)/2k_BT) × 86400 s/day, where A_pix is the pixel surface area [cm^2/pix], q_e is the electron charge [C] and D_FM^T_0 is the “dark current figure of merit” at T_0 [A/cm^2].
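As a quick numerical illustration of this scaling (a sketch, not the analysis code used here), the snippet below evaluates R_DC(T) for a few temperatures. The pixel area is an assumed value, and the figure of merit is the one fitted in the following paragraph; with these inputs the 130K value comes out close to the 5×10^-6 e^-/pix/day quoted there.

```python
import numpy as np

K_B = 8.617333e-5        # Boltzmann constant [eV/K]
Q_E = 1.602176634e-19    # electron charge [C]

def E_g(T):
    """Silicon band gap [eV] vs temperature [K], using the parametrization above."""
    return 1.1557 - T**2 * (7.021e-4 / (T + 1108.0))

def R_DC(T, D_FM_T0, T0=300.0, A_pix=2.25e-6):
    """Expected dark-count rate [e-/pix/day].

    D_FM_T0 : dark-current figure of merit at T0 [A/cm^2]
    A_pix   : pixel area [cm^2/pix]; 2.25e-6 (15 um x 15 um) is an assumption.
    """
    norm = A_pix * D_FM_T0 / (Q_E * T0**1.5 * np.exp(-E_g(T0) / (2 * K_B * T0)))
    return norm * T**1.5 * np.exp(-E_g(T) / (2 * K_B * T)) * 86400.0

for T in (130.0, 140.0, 150.0, 160.0):
    print(f"T = {T:5.1f} K  ->  R_DC = {R_DC(T, D_FM_T0=114e-12):.2e} e-/pix/day")
```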
We fitted the measured DC at 160K with Eq. <ref> and found D_ FM^ 300K=114 pA/cm^2. The expected DC as a function of T assuming this figure of merit is shown as a dashed line in Fig. <ref> (right); at 130K, the expected DC is 5.18×10^-6 e^-/pix/day, three orders of magnitude less than the measured DC with the Oscura prototype sensors. In the images taken underground with the Oscura prototype sensors, the SEEs originating from deferred charge from trap emission constitute a significant background for the DC measurements. To mitigate their impact, we implemented a “bleeding zone” mask for pixels upstream in the horizontal and vertical direction of any event with more than 20 e^-, similar to what is done in skipper-CCD experiments searching for DM to discard events from charge-transfer inefficiencies <cit.>. To minimize the masked area of the images, we found the minimum bleeding-mask lengths in which the “tails” of deferred charge from trap emission did not impact the measured exposure-dependent SER. We measured this rate varying the horizontal (vertical) bleeding-mask length with a fixed vertical (horizontal) bleeding mask of 200 (1250) pixels, see Fig. <ref>. The optimal mask length was determined as the minimum value after which the measured exposure-dependent SER becomes constant, being 1250 (250) pixels in the horizontal (vertical) direction. §.§ Monte Carlo simulations of deferred charge from trap emission We performed a Monte Carlo simulation to estimate the impact of deferred charge from single-electron trap emission on the measured exposure-dependent SER for the Oscura prototype sensors. For the simulation, we assumed that traps in the horizontal register have similar density, energy and cross section distributions as those in the vertical registers measured with the pocket-pumping technique, that the spatial distribution of traps is uniform and that sensors from the same fabrication batch have the same density and kind of traps. We generated trap maps at different temperatures with positions directly taken from the pocket-pumping measurements and emission-time constants computed using Eq. <ref>, considering the trap parameters (E_t, σ) obtained from the measurements. Figure <ref> shows a region of the detected trap maps from prototype sensor A measurements corresponding to two different temperatures. The simulation is based on two sets of images taken with the Oscura prototype sensors: 1) underground, at 131K, with exposure times between 0 and 150 min, and 2) at surface, at 150K, with exposure times from 0 to 15 min. We simulated an “underground” (“at surface”) set of images, with each image containing the events with energy ≥20e^- of the acquired image, a uniformly distributed exposure-independent SER of 1×10^-4 (1×10^-2) e^-/pix and charge from a exposure-dependent SER of 1×10^-4 (5×10^-2) e^-/pix/day, consistent with the exposure time of the acquired image. Using these sets and the trap maps of prototype sensor C, we simulated two new sets accounting for the effects due to traps. For each event, we simulate the shifts of its constituting charge packets towards the readout amplifier. If the packet encounters a trap, a charge carrier is captured and released at a later time with a probability given by Eq. <ref>. For simplicity, we assume the same capture probability for all traps and estimated the carrier density in its vicinity as in Ref. <cit.>. In the simulation, carriers released from trap emission can be recaptured by subsequent traps and re-emitted at a later time. 
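To build intuition for how such deferred charge distributes itself along the readout direction, here is a deliberately simplified toy version of this kind of simulation (not the simulation used in this work): it assumes a single fixed capture probability, exponential emission with one τ_e, uniformly placed traps, and it ignores recapture; all numbers are placeholders.

```python
import numpy as np

rng = np.random.default_rng(7)

T_PIX  = 1e-3    # assumed readout time per pixel [s], comparable to tau_e
TAU_E  = 2e-3    # assumed emission-time constant [s]
P_CAP  = 0.5     # assumed capture probability per packet transfer over a trap
N_PIX  = 1000    # pixels in the stretch of register considered
N_TRAP = 50      # single-electron traps in that stretch

# Each trap the multi-electron packet is shifted across may capture one carrier;
# a carrier released after time t re-joins the charge stream ~floor(t/T_PIX)+1
# transfers later, i.e. it is read out that many pixels behind the event.
trap_pos  = rng.integers(0, N_PIX, N_TRAP)
event_pix = 600                                   # event position in readout order
n_crossed = int(np.sum(trap_pos < event_pix))     # traps the packet passes over
captured  = rng.random(n_crossed) < P_CAP
delays    = rng.exponential(TAU_E, size=int(captured.sum()))
offsets   = (delays // T_PIX).astype(int) + 1     # pixels behind the event

tail = np.bincount(offsets, minlength=11)
print("deferred e- in the 10 pixels trailing the event:", tail[1:11])
```

In the full simulation, the additional possibility of recapture by subsequent traps spreads this deferred charge even further, as described next.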
This causes a larger spread of carriers from trap emission within the image, which is more evident in the horizontal direction. In Fig. <ref> we show a dark exposure image from one of the data sets and its corresponding image generated with the simulation. We extracted the exposure-dependent SER on the simulated sets of images following the recipe outlined in Section <ref>, using the optimal mask length. The extracted exposure-dependent SER in the simulated images without the effects of traps matches the simulated exposure-dependent SER of 1×10^-4 (5×10^-2) e^-/pix/day for the “underground” (“at surface”) set. However, in the simulated images accounting for deferred charge from trap emission, we extracted a exposure-dependent SER of (0.60±0.06) e^-/pix/day for the “at surface” simulated set and (1.5±0.2) × 10^-3 e^-/pix/day for the “underground” simulated set. Both of these values are one order of magnitude larger than the simulated exposure-dependent SER. These results show that, in sensors with traps that have emission times comparable to the readout time of consecutive pixels, depositions from trap emission can occur beyond the masked area, even with a conservative masking approach. Additionally, multi-electron events enhance trap capture. Overall, these factors can significantly impact the measured exposure-dependent SER. In fact, in the DC measurements at surface with Oscura prototype sensors, as presented in Refs. <cit.>, the impact was minimized by increasing the image readout rate and selecting regions free of multi-electron events. § CONCLUSIONS We identified single-electron traps in the newly fabricated skipper-CCDs for Oscura. These traps have emission-time constants similar to the typical readout time of consecutive pixels, producing a “tail” of deferred charge observed in the images next to particle tracks. These “tails” consist mainly of single-electron depositions and can only be spatially resolved due to the sub-electron noise that can be reached with skipper-CCDs. Otherwise, deferred charge would only manifest as an increase in overall charge-transfer inefficiency and dark counts. In this sense, skipper-CCDs continue to provide insights into the understanding of dark-count sources. We studied the buried-channel single-electron traps in three Oscura prototype sensors from two different fabrication batches and two different gettering methods, POCl_3 and ion implantation. The pocket-pumping technique was used to measure the position and emission-time constants of defects/contaminants associated to these traps at different temperatures. The trap characteristic parameters cross section and energy level were measured by fitting the temperature dependence of the emission times associated to each individual trap and to the primary peak of the τ_e distributions. Results from both analyses are consistent. The energy and cross section associated to the largest population of traps in each sensor are shown in Table <ref>. These parameters are consistent within sensors from the same fabrication batch. Moreover, a secondary peak associated with a trap population with τ_e>0.1s for T>170K is only observed in the sensor from the first fabrication batch. These results suggest that the type of defects/contaminants is more closely related to the fabrication process than to the implemented gettering. The exposure-dependent SER was measured for a Oscura prototype sensor at the MINOS cavern at FNAL, yielding (1.8± 0.3)×10^-3 e^-/pix/day at 131K. 
A procedure for finding the optimal bleeding-mask length to minimize the effect of charge traps encountered within the sensors was described. To estimate the impact of deferred charge from trap emission on exposure-dependent SER measurements, a Monte Carlo simulation of the trap capture and emission processes was implemented using the trap parameters found from the pocket-pumping measurements. Results show that, even with a conservative masking approach, deferred charge from these traps can occur beyond the masked area and contribute to the measured exposure-dependent SER. More importantly, it provides an explanation for the rate measured underground with the Oscura prototype sensor. These results also suggest that the exposure-dependent SER of these sensors might be lower in lower-background environments. § DIPOLE-DETECTION ALGORITHMS   Algorithms designed to detect dipole signals against a flat background typically flag pixels with intensities that exceed or fall below a certain threshold established by the flat-field signal. However, detecting dipoles becomes challenging when the dipole density increases or if the background is not flat. In this work, two different algorithms (A and B) were tested. Algorithm A subtracts the median of each row and computes the “local” standard deviation within a small window of pixels. The pixel intensity threshold is defined as a multiple of the local standard deviation. This algorithm flags consecutive pixels if their absolute intensity is above the threshold and if one is positive and the other is negative. Algorithm B subtracts the median of each row and each column. It then requires two consecutive pixels to be one positive and one negative, computes the dipole amplitude, scales it by a factor between 0 and 1 accounting for symmetry, and requires the scaled amplitude to be above a certain threshold. To select detection threshold values that yield the best dipole identification in an image with a high density of traps, a Monte Carlo simulation was performed, generating images with known numbers and positions of dipoles. By comparing the dipoles identified by the algorithms with the simulated ones, we computed the Precision and Recall detection metrics for each algorithm and selected the detection threshold that maximizes their performance. Fig. <ref> shows one of the images with simulated dipoles (left) and the Precision-Recall curve for each algorithm when varying the detection threshold (right). Algorithm B, which accounts for the dipole symmetry, was found to perform better. This work was supported by the resources of the Fermi National Accelerator Laboratory (FNAL), managed by Fermi Research Alliance, LLC (FRA), and the Lawrence Berkeley National Laboratory (LBNL), acting under Contract Nos. DE-AC02-07CH11359 and DE-AC02-05CH11231 with the U.S. Department of Energy, respectively.
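To make the description of Algorithm B in the appendix above concrete, the following is a minimal sketch of one possible implementation; the symmetry factor (here the min/max ratio of the two pixel amplitudes), the adjacency direction, and the threshold are assumptions, since the exact definitions are not spelled out in the text.

```python
import numpy as np

def find_dipoles(img, threshold):
    """Sketch of Algorithm B: subtract row and column medians, flag adjacent
    pixel pairs with opposite signs, and keep pairs whose symmetry-scaled
    amplitude exceeds the threshold. Adjacency is shown along one axis only."""
    x = img - np.median(img, axis=1, keepdims=True)   # row-median subtraction
    x = x - np.median(x, axis=0, keepdims=True)       # column-median subtraction

    a, b = x[:, :-1], x[:, 1:]                        # adjacent pixel pairs
    opposite = (a * b) < 0                            # one positive, one negative
    amplitude = np.abs(a - b)
    symmetry = np.minimum(np.abs(a), np.abs(b)) / (np.maximum(np.abs(a), np.abs(b)) + 1e-12)
    score = amplitude * symmetry                      # scaled by a factor in [0, 1]
    rows, cols = np.where(opposite & (score > threshold))
    return list(zip(rows.tolist(), cols.tolist(), score[rows, cols].tolist()))

# Tiny example: one injected dipole on a noisy background.
rng = np.random.default_rng(0)
img = rng.normal(0.0, 1.0, (50, 50))
img[20, 30] += 15.0
img[20, 31] -= 15.0
print(find_dipoles(img, threshold=10.0)[:3])
```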
http://arxiv.org/abs/2406.18214v1
20240626095755
Trimming the Fat: Efficient Compression of 3D Gaussian Splats through Pruning
[ "Muhammad Salman Ali", "Maryam Qamar", "Sung-Ho Bae", "Enzo Tartaglione" ]
cs.CV
[ "cs.CV" ]
§ ABSTRACT In recent times, the utilization of 3D models has gained traction, owing to the capacity for end-to-end training initially offered by Neural Radiance Fields and more recently by 3D Gaussian Splatting (3DGS) models. The latter holds a significant advantage by inherently easing rapid convergence during training and offering extensive editability. However, despite rapid advancements, the literature is still in its infancy regarding the scalability of these models. In this study, we take some initial steps in addressing this gap, showing an approach that enables both the memory and computational scalability of such models. Specifically, we propose “Trimming the fat”, a post-hoc gradient-informed iterative pruning technique to eliminate redundant information encoded in the model. Our experimental findings on widely acknowledged benchmarks attest to the effectiveness of our approach, revealing that up to 75% of the Gaussians can be removed while maintaining or even improving upon baseline performance. Our approach achieves around 50× compression while preserving performance similar to the baseline model, and is able to speed up rendering to up to 600 FPS. § INTRODUCTION In the last few years, significant advancements have been made in radiance field methodologies for reconstructing 3D scenes using images captured from various viewpoints. The emergence of Neural Radiance Fields (NeRF) techniques has notably influenced the realm of 3D scene modeling and reconstruction <cit.>. The efficient generation of photo-realistic novel views from a given set of training images has become a focal point in computer vision research, with diverse applications <cit.>. NeRF's capability to distill the essence of a 3D object from its 2D representations, while maintaining compactness, underscores its impact and popularity in the literature <cit.>. Despite its success, the traditional NeRF <cit.> suffers from slow training and rendering speeds. To address this challenge, various approaches have been proposed, although they often entail compromises in rendered image quality <cit.>. Recent studies have turned to explicit scene representations, such as voxel-based <cit.> or point-based <cit.> structures, to enhance rendering efficiency. For instance, leveraging 3D voxel grids on GPUs alongside multi-resolution hash encoding of inputs led to consistent reductions in required operations and enabled real-time performance <cit.>. Similarly, the most efficient radiance field solutions to date rely on continuous representations achieved through interpolating values stored in voxel grids <cit.>, hash grids <cit.>, or points <cit.>. While the continuous nature of these methods aids optimization, the stochastic sampling necessary for rendering can incur computational overhead and introduce noise <cit.>. A recent advancement in the field is the introduction of differentiable 3D Gaussian splatting (3DGS), which enables the generation of a sparse adaptive scene representation <cit.>. This representation can be rendered rapidly on the GPU, offering substantial speed improvements.
3DGS combines the best features of existing methods: leveraging a 3D Gaussian representation for scene optimization provides state-of-the-art visual quality and competitive training times, while the tile-based splatting solution ensures real-time rendering at high quality for 1080p resolution across various datasets. Unlike NeRF methods, 3DGS simplifies training and rendering by projecting 3D Gaussians to the 2D image space and combining them with opacity using rasterization, enabling real-time rendering on a single GPU. Furthermore, the explicit storage of scene structure in the parameter space allows for direct editing of the 3D scene. However, some challenges emerge when employing differentiable 3DGS, particularly in optimizing scenes with millions of Gaussians, which may require substantial storage and memory. While specialized pipelines demonstrate real-time performance on high-end GPUs, seamless integration into VR/AR environments or games remains a challenge, particularly when working alongside hardware rasterization of polygon models. (Figure: Vanilla 3DGS-30k vs. our novel pruning approach applied with an end-to-end compression technique <cit.>.) In this paper, we aim to compress Gaussian splatting representations while preserving their rendering speed and quality, facilitating their application across diverse domains such as IoT devices with limited storage or memory. Our primary insight is that the learned 3DGS models exhibit over-fitting to the underlying scene, allowing for the removal or pruning of many Gaussians without sacrificing performance, particularly due to markedly lower opacity values. We start the training process with a pre-trained optimized Gaussian scene, iteratively pruning it based on opacity levels and gradient values, followed by fine-tuning to achieve a superior performance-compression trade-off compared to the baseline optimized scene, as showcased in Fig. <ref>. Our main contributions are the following. * We build on top of the optimized 3DGS as a 3D prior for pruning, enabling the removal of redundant Gaussians while fine-tuning the remaining ones to accurately capture the scene features (Sec. <ref>). * We observe that vanilla pruning is sub-optimal when compared to a gradient-informed approach and that pruning without such a prior fails. Besides, we showcase the compatibility with other compression pipelines, like <cit.> (Sec. <ref>). * With our proposed method, we achieve state-of-the-art performance even after pruning 50% of the Gaussian splats, significantly enhancing the scalability of 3DGS (Sec. <ref>). Our compression pipeline achieves an enhanced balance between scene fidelity and compression, surpassing the baseline (Fig. <ref>). § RELATED WORK In this section we first provide an overview of the most recent methods for novel view synthesis (Sec. <ref>), then we discuss approaches for their compression (Sec. <ref>). §.§ Novel View Synthesis Novel view synthesis has seen significant progress in recent years, with early techniques using CNNs to estimate blending weights or texture-space solutions <cit.>, albeit facing challenges with MVS-based geometry and temporal flickering. Volumetric representations, starting with Soft3D <cit.> and employing deep learning with volumetric ray-marching <cit.>, provided further advancements. Neural Radiance Fields (NeRFs) <cit.> aimed to enhance synthesized views' quality but faced slow processing due to a large Multi-Layer Perceptron (MLP) backbone and dense sampling.
Subsequent methods like Mip-NeRF360 <cit.> focused on balancing quality and speed, while recent advances prioritize faster training and rendering through spatial data structures, encodings, and MLP adjustments <cit.>. Notable methods such as InstantNGP <cit.> leverage hash grids and occupancy grids for accelerated computation, while Plenoxels <cit.> rely on Spherical Harmonics for directional effects without neural networks. Despite these strides, challenges persist in NeRF methods regarding efficient coding for empty space, image quality, and rendering speed. In contrast, 3DGS achieves superior quality and faster rendering without implicit learning. However, its increased storage compared to NeRF methods poses limitations. Our approach aims to maintain the quality and speed of 3DGS while reducing model storage by applying pruning to Gaussian parameters. §.§ 3DGS Compression When compared to NeRFs, 3DGS models lack structure, which presents challenges for compression <cit.>. Consequently, many studies in 3DGS compression introduce structural parameters by replacing vanilla 3DGS parameters to enhance compression <cit.>. Scaffold-GS <cit.>, for instance, utilizes anchor points to distribute local 3D Gaussians and predicts their attributes dynamically based on the viewing direction and distance within the view frustum. On the other hand, the Hash-grid Assisted Context (HAC) <cit.> framework jointly learns a structured compact hash grid and uses it for context modeling of anchor attributes. Niedermayr et al. <cit.> proposed a compression framework that maintains vanilla 3DGS parameters while compressing directional colors and Gaussian parameters. This framework incorporates sensitivity-aware vector clustering and quantization-aware training and achieves compression rates of up to 30× with a marginal decline in performance compared to the baseline 3DGS. Another method, Compact3D <cit.>, introduces a learnable mask strategy to prune the Gaussians and a compact representation of view-dependent colors by employing a grid-based neural field rather than relying on spherical harmonics. It also learns codebooks to compactly represent the geometric attributes of Gaussian by vector quantization. Our proposed pruning method can effectively improve/replace the masking strategies for unimportant Gaussians in the existing works (see Sec. <ref>). However, for the scope of this paper and considering the broad applicability of vanilla 3DGS, we focus solely on the vanilla 3DGS variant. § METHODOLOGY In this section, we first provide an overview of the 3DGS technique for learning and rendering 3D scenes, as introduced by Kerbl et al. <cit.> (Sec. <ref>). Then, we delve into an explanation of our pruning approach (Sec. <ref>). Our methodology, depicted in Fig. <ref>, introduces an effective approach to compress these models using gradient-informed pruning. §.§ Differentiable Gaussian Splatting 3DGS <cit.> represents a scene through a set of 3D Gaussians. By leveraging differentiable Gaussian splatting, which extends EWA volume splatting <cit.>, it facilitates the efficient projection of 3D Gaussian kernels onto the 2D image plane. Additionally, differentiable rendering optimizes the quantity and attributes of the Gaussian kernels employed to characterize the scene. 
Each 3D Gaussian is characterized by its position and covariance matrices within the 3D space, modeled as G(x)=exp[-1/2(x-μ)^T Σ^-1(x-μ)], where x denotes a point in 3D space, μ represents the Gaussian's mean (position), and Σ is the 3D covariance matrix of the Gaussian distribution. Given the requirement for the covariance matrix to be positive definite, it can be parameterized using a rotation matrix R and a scaling matrix S. To facilitate independent optimization of R and S, Kerbl et al. <cit.> introduce a representation of rotation via a quaternion q and scaling through a vector s, both of which can be converted into their corresponding matrices. Additionally, each Gaussian distribution possesses its opacity (α∈ [0, 1]) and a set of spherical harmonics (SH) coefficients essential for reconstructing a view-dependent color. The 2D projection of a 3D Gaussian remains a Gaussian with covariance Σ' = JWΣ W^TJ^T, where W is the view transformation matrix, and J is the Jacobian of the affine approximation of the projective transformation. This setup facilitates the evaluation of the 2D color and opacity footprint of each projected Gaussian. The color C of a pixel is subsequently determined by blending all the N 2D Gaussians contributing to that pixel: C = ∑_i ∈ N c_i α_i ∏_j=1^i-1 (1 - α_j), where c_i and α_i represent the view-dependent color and opacity of a Gaussian, respectively, which are adjusted based on the exponential decay from the center point of the projected Gaussian. Parameters such as the position, rotation q, scaling s, opacity α, and spherical harmonics (SH) coefficients of each 3D Gaussian are optimized to ensure alignment between the rendered 2D Gaussians and the training images. During the training phase, the 3D Gaussian splats are rendered efficiently in a differentiable manner to produce a 2D image. This rendering process involves α-blending of anisotropic splats, sorting them, and utilizing a tile-based rasterizer. At each training iteration, the 3DGS framework renders the training viewpoints and then minimizes the loss between the ground truth and rendered images in the pixel space, where the loss is given by ℒ=(1-λ) ℒ_1+λℒ_D-SSIM, where ℒ_1 is the ℓ_1 loss between the rendered and ground-truth images and ℒ_D-SSIM is their structural dissimilarity. The optimization in 3DGS begins with a point cloud generated through a conventional SfM method <cit.>, and then proceeds iteratively, pruning Gaussians with small opacity parameters and introducing new ones when significant gradients are detected. As demonstrated in the 3DGS paper, this approach enables rapid training and facilitates real-time rendering, all while achieving comparable or superior 3D model quality compared to state-of-the-art NeRF methods. §.§ Gradient Aware Pruning 3DGS typically necessitates several million Gaussians to adequately model a standard scene, each Gaussian entailing 59 parameters. This results in a storage size significantly larger than that of most NeRF methodologies, such as Mip-NeRF360 <cit.>, K-planes <cit.>, and InstantNGP <cit.>. Such requirements render 3DGS inefficient for certain applications, particularly those involving edge devices. Our primary focus is on parameter reduction. In the original training process of 3DGS, Kerbl et al. <cit.> pruned and densified Gaussians up to a specified number of iterations, with pruning based on a predetermined opacity threshold. However, if a Gaussian's opacity is low and so is its gradient, it can be removed with little to no impact on the quality of the rendered scene.
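Before formalizing this pruning rule, the splatting quantities introduced above (the covariance built from quaternion and scale, and front-to-back α-blending) can be summarized in a short sketch; this is a simplified illustration, not the official 3DGS implementation.

```python
import numpy as np

def quat_to_rot(q):
    """Rotation matrix from a unit quaternion q = (w, x, y, z)."""
    w, x, y, z = q / np.linalg.norm(q)
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])

def covariance_3d(q, s):
    """Sigma = R S S^T R^T, positive semi-definite by construction."""
    M = quat_to_rot(q) @ np.diag(s)
    return M @ M.T

def blend(colors, alphas):
    """C = sum_i c_i * alpha_i * prod_{j<i} (1 - alpha_j), sorted front-to-back."""
    C, transmittance = np.zeros(3), 1.0
    for c, a in zip(colors, alphas):
        C += np.asarray(c) * a * transmittance
        transmittance *= (1.0 - a)
    return C

Sigma = covariance_3d(np.array([1.0, 0.1, 0.0, 0.0]), np.array([0.5, 0.2, 0.1]))
pixel = blend(colors=[[1, 0, 0], [0, 1, 0]], alphas=[0.6, 0.8])
print(Sigma, pixel)
```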
As such, we have Σ'_i = Σ_i if [|Σ_i^α| ≥ 𝒬_|Σ^α|(γ_iter)] ∧ [|∇Σ_i| ≥ 𝒬_|∇Σ|(γ_iter)], and Σ'_i = 0 otherwise, where Σ_i^α denotes the α value of the i-th Gaussian, ∇Σ_i denotes the gradient for the i-th Gaussian, 𝒬_|Σ^α|(.) represents the quantile function for the opacity, 𝒬_|∇Σ|(.) is the quantile function for the gradients of the Gaussians, and γ_iter∈ [0, 1] denotes the fraction of Gaussians to be removed. This pruning process, along with periodic fine-tuning, not only improves performance but also results in substantial compression gains. The iterative pruning and fine-tuning approaches enable the removal of redundant Gaussians while refining the remaining ones to better capture scene details compared to the baseline. Prior research in scene rendering has demonstrated that a gradual iterative pruning strategy can yield significantly sparser models while preserving high fidelity <cit.>. However, the impact of such an approach on 3DGS models remains unclear. We speculate that by gradually pruning the model over a specified number of iterations t and aiming for a target sparsity γ_target, we can achieve improved results through a sparsification process applied to the 3DGS model: at every iteration, we apply the sparsification γ_iter = 1 - (1-γ_target)^(1/t). Our approach is based on two key factors. First, during the fine-tuning stage following pruning, the covariance Σ is adjusted to minimize rendering loss, leading to higher values for solid surfaces and lower values for semi-transparent artifacts, which can be subsequently removed in the next iteration (which will be empirically observed in Fig. <ref>c). Second, the gradual iterative process helps prevent the optimization algorithm from converging to sub-optimal local minima (according to empirical evidence discussed in Sec. <ref>). Hence, it is crucial to initiate the process with an overparametrized yet well-performing model. Similar observations can be drawn from traditional deep learning literature <cit.>. In the next section, we will present a quantitative analysis of typical benchmarks employed for 3DGS. § EXPERIMENTS AND RESULTS In this section, we present our empirical findings on commonly recognized benchmarks within the 3DGS community. We begin by detailing the implementation of our approach, followed by an outline of the benchmarked datasets and the evaluation metrics employed (Sec. <ref>). Subsequently, we discuss both qualitative and quantitative results (Sec. <ref>), as well as an ablation study (Sec. <ref>). §.§ Implementation Details In all our experiments, we use the publicly available official code repository of 3DGS <cit.>, adhering to the recommended hyperparameter settings used for training to maintain consistency with the original 3DGS model. We initiate the pruning process with the optimized Gaussians trained for 30,000 iterations, employing pruning with γ_iter∈[0.225,0.6], as described in (<ref>). Iterative pruning with the same γ_iter value is applied after every 500 iterations until reaching 35,000 iterations (commencing from 30,000 iterations), followed by further fine-tuning for 10,000 iterations. For all our experiments, pruning and fine-tuning consistently yield significantly improved compression-performance trade-offs. Additionally, λ= 0.2 is employed consistently across all our experiments.
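To make the pruning rule and the gradual schedule above concrete, here is a minimal sketch operating on placeholder arrays (it stands in for, but is not, the actual implementation, which works on the 3DGS model's opacity and gradient tensors).

```python
import numpy as np

def keep_mask(opacities, grad_norms, gamma_iter):
    """Keep Gaussian i only if both |opacity| and |gradient| are at or above the
    gamma_iter-quantile of their respective distributions (the rule above)."""
    thr_alpha = np.quantile(np.abs(opacities), gamma_iter)
    thr_grad = np.quantile(np.abs(grad_norms), gamma_iter)
    return (np.abs(opacities) >= thr_alpha) & (np.abs(grad_norms) >= thr_grad)

def per_round_fraction(gamma_target, t):
    """gamma_iter = 1 - (1 - gamma_target)^(1/t): per-round removal fraction so
    that t pruning rounds reach the target sparsity."""
    return 1.0 - (1.0 - gamma_target) ** (1.0 / t)

rng = np.random.default_rng(0)
opacities, grads = rng.random(100_000), rng.random(100_000)   # placeholder values
gamma_iter = per_round_fraction(gamma_target=0.75, t=10)
mask = keep_mask(opacities, grads, gamma_iter)
print(f"gamma_iter = {gamma_iter:.3f}, kept {mask.mean():.1%} of Gaussians this round")
```

With γ_target=0.75 and t=10, the per-round fraction mirrors the ten pruning rounds applied every 500 iterations between 30,000 and 35,000 iterations described above.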
Datasets. We assess the efficacy of our pruning approach across diverse scenes, encompassing environments from the Mip-NeRF360 <cit.> indoor and outdoor datasets, alongside two scenes sourced from the Tanks&Temples <cit.> and Deep Blending <cit.> datasets, akin to the scenes examined in the original 3DGS work <cit.>. Evaluation. To ensure a fair comparison, we adhere to the same train-test split utilized in Mip-NeRF360 <cit.> and 3DGS <cit.>. Our evaluation encompasses standard metrics like SSIM, PSNR, and LPIPS, alongside the average memory consumption across all datasets. §.§ Results §.§.§ Quantitative Comparison Trimming the Fat. We conduct a comparative analysis between our method, the 3DGS-30k and 3DGS-7k baselines, and an opacity-based pruning approach that removes the gradient information from (<ref>). As illustrated in Table <ref> and Fig. <ref>, we examine the trade-off in compression performance across benchmark datasets. Across all the datasets, Gaussian splats can be pruned by up to 4×, showcasing improved or similar performance compared to the baseline. Notably, even at significantly high pruning levels, where the average scene size is less than 25MB, our proposed pruning technique maintains comparable or even superior performance to that of the 3DGS-7k variant, achieving compression rates of up to 24× on average. This is realized without the need for any additional end-to-end compression pipeline integration, highlighting the standalone scalability of our proposed approach. Opacity-based pruning exhibits similar performance to gradient-aware pruning at small pruning thresholds. However, the performance difference becomes more pronounced at higher compression rates, as evident from the results in Fig. <ref>. Incorporating gradient information leads to additional performance improvements in pruning. This enhancement arises because certain scene features (sky, glass, etc.) may have low opacity but are still crucial for overall scene rendering. By considering gradient information, we ensure that only Gaussians containing unimportant features are pruned. Our proposal of incorporating gradient information shows its effectiveness most prominently at higher pruning rates. Trimming the Fat with end-to-end compression. Our proposed pruning methodology can act as a plug-and-play component with various end-to-end compression techniques for 3DGS. When integrated with the method proposed by <cit.>, we achieve state-of-the-art compression performance. Niedermayr's approach begins with a pre-trained Gaussian as the foundation of its compression process. We substitute this pre-trained Gaussian with our pruned Gaussian and apply the end-to-end compression procedure. This combination results in 50× compression compared to the baseline, while maintaining comparable performance. Moreover, we achieve 2× compression with improved performance compared to Niedermayr's original approach, as demonstrated in Table <ref>. §.§.§ Qualitative Comparison We present visualizations of a scene from the Tanks&Temples dataset, a scene from the Deep Blending dataset, and a scene from the Mip-NeRF360 dataset, all of which require substantial memory resources on average. In Figs. <ref>, <ref> and <ref>, we illustrate the visualizations of test-set images at various pruning levels indicated by γ_iter. Our "trimming the fat" iterative pruning pipeline achieves noteworthy compression rates while maintaining comparable visual quality. Across all scenes depicted in Figs.
<ref>, <ref> and <ref>, our method compresses the Gaussian splats by approximately 4× with visual quality similar to 3DGS-30K. Furthermore, with γ=0.60, our method achieves an average compression ratio of approximately 22× while preserving visual quality comparable to 3DGS-7K. §.§ Ablation Study FPS Gain with Trimming the Fat. Our novel pruning approach significantly enhances the FPS rate of 3DGS. On the Tanks&Temples dataset, our method achieves a frame rate of over 600 FPS while maintaining SOTA performance, as shown in Fig. <ref>b. The renderings were performed using an RTX-3090, and the final FPS reported were averaged over three separate runs. These findings demonstrate the scalability of our proposed method. Why is a 3D Prior for Pruning important? To assess the significance of a 3D prior in the pruning process, we modified the original training protocol introduced by Kerbl et al. <cit.>. In their methodology, Gaussians are pruned and densified up to a specified iteration count (15k), employing an opacity threshold for pruning. Our modification involved halting the densification phase at the same iteration count (15k) but extending the pruning phase for an additional 10k iterations. Subsequently, the model underwent further fine-tuning for an additional 5k iterations to generate the final scene. The findings are outlined in Fig. <ref>a, indicating that even with a reduced pruning threshold, achieving convergence without a robust 3D prior remains challenging for a 3DGS model. Why is Pruning effective? Our proposed pruning technique achieves compression ratios of up to 4× without compromising performance compared to the 3DGS baseline. The efficacy of our approach lies in its ability to effectively eliminate redundant Gaussians. As depicted in Fig. <ref>c, the opacity distribution before and after pruning changes significantly. For the baseline 3DGS, the majority of opacity values are very low, indicating minimal contribution to scene reconstruction. However, through post-hoc pruning, a significant proportion of opacity values become notably higher, indicating that a more solid geometry is learned by the model. One-Shot Pruning vs Iterative Pruning. We also explored the impact of one-shot pruning in comparison to iterative pruning, in terms of model size, as showcased in Fig. <ref>a. For one-shot pruning, we utilized the pre-trained 3DGS-30k model, performed the pruning process once, and then fine-tuned the model for 30k iterations. (Figure: the performance-size trade-off achieved by our method compared to the pruning approach proposed in Compact3D <cit.> on the Tanks&Temples dataset.) Both gradient-aware and opacity-based iterative pruning consistently outperformed the one-shot pruning method: our results demonstrate that gradual pruning enables the model to better adapt to the scene compared to one-shot pruning. Trimming the Fat vs Compact3D <cit.>. Fig. <ref> depicts a comparison of PSNR and Gaussian counts between our proposed approach and Compact3D using the Tanks&Temples dataset. The results unequivocally highlight the superior performance of our method, demonstrating its capability to significantly reduce the number of Gaussians while maintaining baseline performance levels. These findings emphasize the effectiveness of our pruning technique and its potential to advance or replace existing compression methodologies for 3DGS. Lottery Ticket for the Gaussian Splats?
We investigated the potential presence of a “lottery ticket” phenomenon <cit.> for Gaussian splats. To test this hypothesis, we took an already pruned set of Gaussian splats from the Tanks&Temples dataset and randomly reinitialized all learnable features, including spherical harmonics (SH) features, opacity, scale, and rotation. Subsequently, we attempted to train these Gaussian splats for 30,000 iterations, but they failed to converge. This experiment underscores the necessity of having a learned 3D prior to which redundant information can be pruned. It highlights the difficulty of training Gaussian splats with the minimum number of Gaussians without any prior information from the 3D scene. § CONCLUSION In this work, we presented a gradient-aware iterative pruning technique for 3D Gaussian splats named after “Trimming the fat”. Our method effectively scales down Gaussian splats by a factor 4× without sacrificing generative quality. Particularly at higher pruning levels, our proposed method achieves compression ratios of approximately 25× and achieves up to 600 FPS with minimal impact on generative performance across established benchmark datasets. The resulting highly compressed point clouds can be seamlessly transmitted over networks and utilized on resource-constrained devices, offering potential applications in mobile VR/AR and gaming. Future research directions include investigating the integration of quantization-aware training methods to further improve the compressibility of 3DGS.
http://arxiv.org/abs/2406.18528v1
20240626175629
PrExMe! Large Scale Prompt Exploration of Open Source LLMs for Machine Translation and Summarization Evaluation
[ "Christoph Leiter", "Steffen Eger" ]
cs.CL
[ "cs.CL" ]
§ ABSTRACT Large language models (LLMs) have revolutionized the field of NLP. Notably, their in-context learning capabilities also enable their use as evaluation metrics for natural language generation, making them particularly advantageous in low-resource scenarios and time-restricted applications. In this work, we introduce PrExMe, a large-scale prompt exploration for metrics, where we evaluate more than 720 prompt templates for open-source LLM-based metrics on machine translation (MT) and summarization datasets, totalling over 6.6M evaluations. This extensive comparison (1) serves as a benchmark of the performance of recent open-source LLMs as metrics and (2) explores the stability and variability of different prompting strategies. We discover that, on the one hand, there are scenarios for which prompts are stable. For instance, some LLMs show idiosyncratic preferences and favor grading generated texts with textual labels, while others prefer to return numeric scores. On the other hand, the stability of prompts and model rankings can be susceptible to seemingly innocuous changes. For example, changing the requested output format from “0 to 100” to “-1 to +1” can strongly affect the rankings in our evaluation. Our study contributes to understanding the impact of different prompting approaches on LLM-based metrics for MT and summarization evaluation, highlighting the most stable prompting patterns and potential limitations.[We make our code available: <https://github.com/Gringham/PrExMe>] § INTRODUCTION The recent popularity and success of LLMs have led to a paradigm shift in NLP <cit.>. Instruction-tuning allows LLMs to generate responses to complex task descriptions (prompts) <cit.>, making them useful for conventional NLP tasks. One such task is the automatic evaluation of natural language generation (NLG) models in machine translation (MT) and summarization. Following the current trend, researchers use LLMs as evaluation metrics and achieve remarkable performance, sometimes relying solely on in-context learning <cit.>, i.e., with metrics that are purely based on prompting. Such prompting-based metrics require no or only a few data samples, making them useful for low-resource evaluation scenarios <cit.>. Additionally, they are often more resource-efficient since they do not require fine-tuning. Although many prompting-based metrics have been proposed <cit.>, structured evaluations across different prompting approaches remain scarce, especially for open-source models. In recent work, the Eval4NLP 2023 shared task <cit.> addresses this by (1) restricting the usage to selected open-source LLMs and (2) prohibiting the fine-tuning of these models. While the shared-task submissions provide several interesting findings, they focus on a few distinct prompts only. Notably, the effect and robustness of prompt variations on the same model or across different models remain largely unexplored.
In this work, we introduce a systematic Prompt Exploration for Metrics (PrExMe), which builds upon Eval4NLP 2023, to provide a much larger, template-based, structured evaluation of the effects different input prompts have on an LLM-based metric's correlation with human judgements in MT and summarization evaluation. We formulate the following research questions: RQ1 Can open-source language models evaluate text generation without fine-tuning and how do they differ from each other? RQ2 Can we identify patterns[We define prompting patterns as the template components that constitute a prompt (e.g., zero-shot, one-shot or the output format).] in prompts that lead to a stable performance across different datasets, tasks, and models? RQ3 How should researchers design prompts for new evaluation scenarios? Our prompt exploration constructs hierarchical templates based on approaches such as chain-of-thought (CoT) <cit.>, zero-shot and retrieval-augmented generation (RAG) <cit.>. Each template gets filled with further sub-templates. For example, we vary the requested output formats, such as distinct scores and continuous scores (see <ref>). This setup amounts to more than 720 prompt templates that we evaluate with 7 LLMs. In a second phase, we test the generalizability and performance of the prompts with the best correlations on two further datasets. In summary, our work makes the following key contributions and findings: We perform a large-scale analysis (evaluating over 6.6M prompts) of the effect of different prompting approaches on LLM-based metrics for MT and summarization evaluation. This comprehensive exploration includes various prompting techniques, datasets, tasks, and models, making it, to our knowledge, the most extensive evaluation of its kind. We show that certain prompting patterns are robust and generalizable across different tasks and datasets, with the median performance being a good predictor for new settings. For example, some models show a distinctive preference to return textual labels, while others achieve better results with numeric labels. On the other hand, for some settings, even small changes to the input prompt can strongly affect the performance. Our study tackles prompt-based evaluation with open-source LLMs, targeting scenarios where fine-tuning or access to closed-source LLMs is not possible. Such evaluations are still very scarce but important to make research more accessible, fostering diversity and inclusion. By systematically testing various established prompting approaches, including zero-shot, CoT and RAG, we comprehensively evaluate the performance of recent open-source LLMs for evaluation metrics. Aligning with the recommendations of <cit.>, by evaluating each model with multiple prompts, our LLM comparison is fair because we mitigate the risk of any single prompt disproportionately affecting their performance. We find that the model Platypus2-70B <cit.> achieves the strongest performance among the tested LLMs. § RELATED WORK We first describe related work on prompting-based metrics for MT and summarization. Then, we relate our work to research on prompting techniques and prompt stability. Prompting-based metrics Recent advancements in LLM-based metrics for NLG often rely on in-context learning, directly predicting quality judgments from generated texts. Surveys by <cit.> and <cit.> provide comprehensive overviews of these metrics.
Besides BARTScore <cit.> and PRD <cit.>, the prompt-based approaches surveyed by <cit.> are built upon closed-source models. In contrast, the Eval4NLP 2023 shared task <cit.>, explicitly considers open-source prompt-based metrics, by asking participants to evaluate MT and summarization using only provided models without fine-tuning. The best submissions were able to beat strong baselines such as GEMBA <cit.> for MT and BARTScore for summarization. While the shared task yielded interesting techniques, the participants explored a limited range of prompts, leaving a gap in the comprehensive analysis of prompting patterns and the consistent comparison of LLMs. In this work, we fill this gap and systematically analyze a much larger set of prompts on a comparable grid of experimental settings to (1) study the robustness of prompts across datasets, models and tasks, and to (2) search for rules and patterns that can guide the future construction of prompt-based metrics. Prompting Techniques Many successful prompting techniques have been proposed over the last years <cit.>. Our work mostly relies on established approaches such as Zero-Shot CoT and RAG. Further, <cit.> propose emotion inducing prompts to improve LLM performance. To our best knowledge, we are the first to analyze this technique for evaluation metrics. Inspired by this, we also propose a novel emotion-CoT pattern (see <ref>). Prior evaluation of output formats for prompt-based metrics is done by <cit.>, which we extend by our much broader evaluation. Other works also use hierarchical templates for prompt building <cit.> and tools like LangChain <cit.> and DSPy <cit.> support their implementation. We use hierarchical templates as means for a structured comparison among prompting patterns. Prompting Robustness As we conduct a grid search across different prompts, datasets and tasks, our work builds upon and extends research on how LLMs respond to prompt perturbations. <cit.>, <cit.>, <cit.> and <cit.> find a wide range of performance variation for natural language inference and sentiment classification. As a solution, <cit.> suggest to provide the full range of results across different prompt perturbations. <cit.> and <cit.> suggest that current evaluation benchmarks for LLMs are problematic as they often only provide one prompt template per task. This could be solved by providing multiple templates and evaluating the ensemble. To our best knowledge, we are the first to explore to which degree these robustness problems affect open-source LLM-based metrics and how to select the best prompts for them. Also, by prompting the LLMs with multiple prompts, we follow <cit.> and achieve a stable and fair evaluation of LLMs for this task. § SETUP In this section, we present the templates and prompting techniques we employ for utilizing LLMs as metrics. Additionally, we provide an overview of the datasets and models that we use for testing. We evaluate LLMs in a reference-free setting, i.e., they grade a generated hypothesis based on its source without a reference.[We run experiments using vLLM <cit.> on two clusters with Nvidia A6000, A40 and A100 GPUS. Details on versions, tools and model parameters are in Appendix <ref>.] The evaluated prompt types provide a comprehensive evaluation framework for LLM-based metrics. This range covers basic in-context learning, sophisticated reasoning, emotional context, and varying output structures, ensuring a thorough assessment of robustness and adaptability across tasks and datasets. 
Prompt Templates Our prompts are constructed as hierarchical templates (see Figure <ref>), i.e., one large template is constructed from multiple smaller ones. Each prompt is constructed from: (1) the source text and generated hypothesis text that should be graded, (2) a base prompt, (3) a task description, (4) a format requirement and (5) optionally a one-shot demonstration. Table <ref> presents examples for (2), (3), (4) and (5). The base prompt is the top layer of our prompt hierarchy, incorporating the other components. Specifically, we test three zero-shot (ZS) and corresponding one-shot (OS) base prompts: (1) Plain ZS/OS (PZS/POS), (2) ZS/OS-CoT and (3) ZS/OS-CoT-Emotion (ZS/OS-CoT-EM). PZS plainly presents the newline separated task description, source, hypothesis and format requirement. ZS-CoT <cit.> additionally asks the model to think step by step before returning its output. Lastly, ZS-CoT-EM asks the model to describe its “emotions” before the ZS-CoT prompt. We include CoT as it has improved the prompt-based performance for closed-source metrics like AutoMQM <cit.> and GEMBA <cit.>. ZS-CoT-EM explores the variation of LLM performance when prompted to describe emotions in its output. This is motivated by our exploration of emotional prompts on metric performance (see “task description” below). The OS versions of the templates add a field for demonstrations. To avoid fixating the model on specific reasoning steps, we include a placeholder for OS-CoT where the model should insert its reasoning. The task description is the instruction to grade the generated hypothesis. <cit.> find that LLM instructions that induce certain emotions for humans can cause performance improvements. Inspired by this finding, we explore the usage of “emotional prompts” in the task description. Primarily, this approach offers a simple paraphrasation strategy to increase the scope of our grid search. Additionally, it allows us to study the impact of “emotions” on LLM-based metrics. Besides neutral prompts, we include instructions that are, e.g., polite, threatening and sceptical. We create 11 task descriptions ourselves and 13 further descriptions with ChatGPT <cit.>. The format requirement describes the output format the LLM should adhere to when generating a score. For example, it includes the range in which the output score should be and whether it should be discrete or continuous. Additionally, we include prompts that ask the LLM to return textual quality labels. In total, we define 10 format requirements. Lastly, we construct the optional OS demonstrations with RAG. We extract demonstrations from WMT21 <cit.> for MT and from RoSE for summarization.[Note that RoSE only considers factuality, which is only one aspect of the evaluated datasets.] <cit.>. For each sample in both datasets and for each input sample of our metric, we create sentence embeddings with XLMR-SBERT <cit.>. Thereby, we concatenate the source and hypothesis embeddings. For each input, we select the demonstration with the highest cosine similarity. Due to resource limitations, we only evaluate the 9 best ZS prompts in a OS setting. The selection process is described in the paragraph Datasets and phases below. MQM-based approaches Additionally to hierarchical templates, we test the prompts of GEMBA-MQM <cit.> with the selected open-source LLMs. GEMBA-MQM, which predicts scores based on the number of present errors weighted by severity, normally uses GPT4. We refer to the open-source implementation as LocalGemba. 
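To illustrate how such hierarchical templates can be assembled, here is a simplified sketch; the base prompts, task descriptions and format requirements below are invented for illustration and do not reproduce the exact templates used in this work.

```python
# Minimal sketch of hierarchical prompt assembly (illustrative wording only).
BASE_PZS = ("{task_description}\n\nSource: {source}\nHypothesis: {hypothesis}\n\n"
            "{format_requirement}")
BASE_ZS_COT = BASE_PZS + "\nLet's think step by step."

TASK_DESCRIPTIONS = {
    "neutral": "Judge the quality of the following translation.",
    "polite": "Could you please kindly judge the quality of the following translation?",
}
FORMAT_REQUIREMENTS = {
    "score_0_100": "Return a quality score between 0 and 100.",
    "labels": "Return one of the labels: bad, okay, good.",
}

def build_prompt(base, task_description, format_requirement, source, hypothesis):
    return base.format(task_description=TASK_DESCRIPTIONS[task_description],
                       format_requirement=FORMAT_REQUIREMENTS[format_requirement],
                       source=source, hypothesis=hypothesis)

prompt = build_prompt(BASE_ZS_COT, "polite", "score_0_100",
                      source="Der Hund schläft.", hypothesis="The dog is sleeping.")
print(prompt)
```

One-shot variants additionally insert a retrieved demonstration (source, hypothesis and score) above the sample to be graded.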
Score Extraction & Evaluation We restrict generation to 180 tokens and extract the last regex match of a number/label as the score. When no result is found, we assign the average of the other scores of the same prompt template. For format requirements with text labels, we map the labels to 1, 3 and 5. We evaluate prompt templates on the segment level, like the WMT QE and metrics shared tasks <cit.>. That means, for each metric we compute the correlation between metric scores and ground-truth human judgments without averaging by system or document. As correlation measures, we use the Kendall <cit.>, Pearson and Spearman correlations, as well as tie-calibrated accuracy <cit.>, with Kendall as the main measure. Further, we compute permute-input significance tests (p≤ 0.075) <cit.> for the Kendall correlations presented in our result tables. Often, there is no single significantly best metric. Therefore, we report clusters where each included metric is significantly better than metrics that are not included. Models We select instruction-tuned LLMs with strong performance in Eval4NLP 2023: (1) Platypus2-70B-Instruct-GPTQ, (2) Nous-Hermes-13b [<https://huggingface.co/NousResearch/Nous-Hermes-13b>] and (3) OpenOrca-Platypus2-13B <cit.>. We abbreviate these as Platypus2, Nous and Orca. Additionally, we evaluate more recent models: (4) LLaMA3-8B <cit.>, (5) a GPTQ version of LLaMA3-70B <cit.>, (6) Mixtral-8x7B[Due to high resource consumption and comparatively weak performance in phase 1, we do not evaluate Mixtral in phase 2.] <cit.> and Unbabel-Tower <cit.>, a 13B parameter multilingual instruction-tuned model. Datasets and phases Our experiments are conducted in two phases on different datasets. By doing so, we want to mitigate statistical effects of our large prompt search. Also, it allows us to evaluate selected prompts on full datasets, a task that would otherwise be too resource intensive, and to explore generalizability. In phase 1, we evaluate on the train set of Eval4NLP 2023 <cit.>, and in phase 2, on its dev and test sets.[Although we do not use the datasets to train a model, for conciseness, we will refer to these datasets as train, dev and test sets.] The train and dev sets are (reference-free) splits of the WMT2022 metrics shared task <cit.> and SummEval <cit.>. The test set was newly annotated by <cit.>. As a second test set, we evaluate on the WMT23 MQM annotations for MT <cit.> and Seahorse <cit.> for multilingual summarization. Because OS prompts demonstrate a weak performance on the other datasets, we do not evaluate them on WMT23/Seahorse. More details of the datasets are discussed in Appendix <ref>. In the 1st phase, we evaluate all 720[Considering the different tasks and language pairs, this number could also be considered higher.] combinations of ZS prompts on the train set. As this is resource intensive, for MT we restrict ourselves to the first 500 samples of each language pair. Afterwards, we select the prompt with the highest Kendall correlation for each task+base prompt combination (e.g. en-de+PZS or en-de+ZS-CoT).[Tasks: en-de, zh-en, summarization. In case of duplicates, we choose the second best.] This yields 9 unique prompts for exploration in the phase 2 (see Appendix <ref>). In the 2nd phase, we evaluate the selected prompts of the 1st phase on the full dev and test sets. This further tests the generalizability of prompts between models and for unseen, in-domain data (the train and dev set stem from the same original datasets) and out-domain data (test sets).
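Returning to the score-extraction step described at the beginning of this section, a minimal sketch of the procedure follows (the regex and the label set are illustrative assumptions; the actual patterns may differ).

```python
import re
import numpy as np
from scipy.stats import kendalltau

LABEL_TO_SCORE = {"bad": 1.0, "okay": 3.0, "good": 5.0}   # assumed label set
PATTERN = re.compile(r"(-?\d+(?:\.\d+)?|bad|okay|good)", re.IGNORECASE)

def extract_score(output):
    """Return the last number or quality label found in the LLM output, or None."""
    matches = PATTERN.findall(output)
    if not matches:
        return None
    last = matches[-1].lower()
    return LABEL_TO_SCORE[last] if last in LABEL_TO_SCORE else float(last)

outputs = ["The translation is mostly fine. Score: 78", "I would call this good.", "???"]
scores = [extract_score(o) for o in outputs]
valid = [s for s in scores if s is not None]
filled = [s if s is not None else float(np.mean(valid)) for s in scores]

human = [80.0, 95.0, 20.0]                 # hypothetical human judgements
tau, _ = kendalltau(filled, human)         # segment-level correlation
print(scores, f"Kendall tau = {tau:.2f}")
```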
Baselines For each phase, we also present the correlations of two baseline metrics that use other base models: BARTScore <cit.> and XComet <cit.>. Especially XComet has the benefit of being trained on multilingual datasets. Further, we test the prompts of DSBA <cit.> — that showed a strong performance for summarization in the shared task — with the selected open-source LLMs Platypus2-70B and Orca-13B. § RESULTS In phase 1, we run 6,652,800 ZS prompts (720 prompt templates) and 71,280 OS prompts (9 “best” prompt templates), with no scores extracted in 12.7% resp. 19.4% of cases; the average of the prompt combination was assigned in these instances. Further, in phase 2, we evaluate 5,503,896 ZS and 1,308,690 OS prompts (9 “best” prompt templates for both), with no scores extracted in 22.3% and 19.4% of cases, respectively. Table <ref> presents the Kendall correlations to human scores achieved by each LLM across different tasks and datasets in phase 1 and phase 2. Each cell for hierarchical templates displays the maximum correlation reached by any prompt combination. For the hierarchical templates (table group 1.), Platypus-70B performs best and is in the upper significance cluster for 9 of 11 tasks. Tower-13B follows, with 3 of 11 tasks. Orca-13B has the second-highest average correlation after Platypus2-70B but is only significant for one task. Surprisingly, the newer LLaMA3 models do not outperform the LLaMA2 based models (Orca, Platypus2 and Tower). The separate prompting techniques (table group 2.), which also use the Platypus2-70B model, have weaker correlations than the best prompts of the hierarchical templates. The LocalGemba MQM-based approach is in the best significance cluster for 3 of 11 tasks and is the best prompting based approach for en-de in WMT23. On the other hand, the baseline prompt DSBA is significantly the best on summarization for the Eval4NLP test set where it also won the shared task, but not for other tasks. Regarding the baselines (table group 3.), XComet outperforms our LLM based approaches for MT evaluation by a varying margin. For instance, for en-es in the Eval4NLP test set, the difference is small and XComet is in the same siginificance cluster as Platypus2-70B. On the other hand, for some tasks the performance difference is large, e.g., on en-de in WMT23 XComet performs 0.14 Kendall points better. The strong performance of XComet for MT evaluation is expected as it (1) is based on the multilingual XLMR-XXL model and (2) fine-tuned for MT evaluation. For summarization, prompting approaches significantly outperform BARTScore and XComet. To revisit RQ1, our results show that open-source prompt-based LLMs struggle to reach the performance of the dedicated fine-tuned metric XComet for MT, but generally exhibit a promising performance. A benefit of the LLMs also lies in their high versatility towards different tasks. While XComet is mostly constrained to MT evaluation, the LLMs can perform strong summarization evaluation simply by switching a small portion of the prompt. Further, LLMs seem to be more robust towards different tasks, even without switching the input descriptions: The baseline DSBA, which has specific prompts for summarization achieves notable results on some MT evaluation tasks, too. The prompts used in group 1 are built from hierarchical templates, i.e., each presented correlation can have a different format requirement, base prompt and task description. 
To inspect the distribution of the format requirements, we color correlations where the model was prompted to return textual quality labels in orange and those asking for numeric scores in blue.[Among the 9 best prompts automatically selected for phase 2 and OS experiments based on phase 1 results, the base prompts are evenly distributed, and the format requirements are split 5/4 between labels and numeric formats (see Appendix <ref>). For the task descriptions, emphasis and dire situation are each selected twice, with other descriptions chosen once.] Orca-13B and Platypus2-70B were prompted to return numeric scores for all but one reported correlations. On the other hand, LLaMA3-70B, Nous-13B and Tower-13B were prompted to return textual labels for all but three reported correlations. We also find such common patterns in the best prompts per model for the base prompt and, less pronounced, for the task description. For example, the best prompts for Tower-13B always use the ZS-Cot base prompt, while LLaMA3-70B always uses PZS. Details of the prompts used for each cell, tie-calibrated accuracy scores, Pearson and Spearman correlations, and the scores of the Eval4NLP dev set are shown in Appendix <ref>. Our results indicate that models have idiosyncratic preferences for certain patterns. In <ref>, we further explore these preferences and their robustness. § ANALYSIS In this section, we answer RQ2 and investigate the performance and robustness of the template components in more detail. Best prompting patterns per model and dataset First, we explore the best base prompt, task description and format requirement for each model. To do so, we analyze their prevalence in the 2% of prompts with the highest Kendall correlation for each unique task. We choose this cutoff to represent every task. For example, Figure <ref> shows how the best base prompts differ between OpenOrca and Tower. We compare these two LLMs because their best prompts notably contrast each other. While Orca prefers the PZS prompts, Tower is better with ZS-CoT and ZS-CoT-EM. For the format requirement, Figure <ref> highlights how Orca prefers scores in the range of -100 to 100, while Tower can work better with labels. The pie charts for all models and the comparison between task descriptions are presented in Appendix <ref>. Here, for the base prompts, Tower uses ZS-CoT or ZS-CoT-EM in 86.2%, Nous in 44.9%, and Platypus2 in 23.9% of its best prompts. All other models use these base prompts in less than 10% of their best prompts. Regarding format requirements, LLaMA3-70B uses textual labels in 90.2% of its best prompts, Tower in 80.4%, and Mixtral in 80%. In contrast, Orca only uses them in 8%, and Platypus2 in 21.7% of its best prompts. For LLaMA3-8B and Nous, there is no clear trend. Finally, the distribution of task descriptions is broader (largely due to their higher number). Notably, the “curious” task description is used in over 15% of best prompts for LLaMA3-70B, Nous, and LLaMA3-8B. “Emphasis” is the most used by Platypus2 (17.4%) and “dire warning” is the most used by Tower (21.4%). Regarding RQ2, these results show that the models have unaligned preferences for prompting patterns, making it difficult to construct a universally good prompt. However, model specific patterns can be found[Which patterns are specific to which model also provides global explanations <cit.> of the models.] and models can be grouped based on their best patterns. For example, one group prefers to return numeric scores and the other textual labels. 
This behavior may in parts depend on shared instruction-tuning data. E.g., Orca and Platypus were partly trained on the same data and prefer to return numeric labels. On the other hand, both LLaMA3 models prefer textual labels, but LLaMA3-8B to a smaller degree. To analyze whether the model specific preferences hold across datasets, we also plot a dataset-wise distribution for all MT tasks of the top 2% prompts for each model, separated by ZS vs. OS in Appendix <ref>. If a prompting pattern is stable for all models across datasets, the distribution of the best prompts should remain unchanged. Indeed, the percentage to which many prevalent prompting patterns are represented in the selected top prompts does not change much across datasets. E.g., the PZS base prompt ranges between 66.7% and 83% and the “complex labels” format requirement ranges between 50% to 66.7% for ZS and 66.7% to 83.3% for OS. This does not hold for the phase 1 evaluation, where more templates were tested and the template selection thus was much broader. Also, for some prompt patterns, e.g. the “emphasis” and “collaborative” task descriptions, the occurrence in the top prompts seems to swap between datasets. This experiment shows that prompts are to some degree stable between datasets. In the next paragraph, we further quantify this stability between datasets, prompting patterns and models. Prompt stability Next, we quantify how stable the performance of a prompting pattern A is when the dataset, the model or the other parts of the prompts change. To do so, we compute the rankings of prompts that use A before and after the change and then test the similarity of rankings. For example, we compute the ranking of format requirements on dataset 1. Then, we change the dataset and obtain a second ranking. If the first and second ranking are similar, the performance of different format requirements is stable between the two datasets. We test this similarity with the Kendall correlation. The ranking of a prompting pattern can be computed in several ways, because we evaluate multiple prompts containing the pattern. In our example, for each format requirement there are multiple evaluated prompts per dataset, i.e., for different base prompts, task descriptions and tasks. The performance of a specific format requirement in the ranking could, for example, be determined by aggregating its different scores across base prompts, task descriptions, etc. with the mean or median. We test the following aggregation methods: mean, median, mean of top 10%, max, min and saturation <cit.>. Thereby, we determine that the aggregation with the median leads to the most stable ranking, i.e. the highest Kendall correlation between rankings. Specifically, we test this by comparing every selection of two aggregation measures in a permutation test (e.g. median vs. mean, mean vs. max, etc.); see Appendix <ref>. For our example, this means that for each different format requirement on dataset 1, we compute the median score of all combinations of base prompts, task description and task. Then, we do the same for the second dataset and check the correlation of the resulting ranking. A high correlation of the rankings then indicates that the median performance for all prompts using the format requirement is a good indicator of its relative performance on a new dataset. Figure <ref> shows heatmaps for the stability of the format requirement and task description when the base prompt is changed (Further combinations are plotted in Appendix <ref>). 
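The following sketch shows one way such a stability check could be implemented. The data-frame layout (one row per evaluated prompt, with columns such as "format_requirement", "base_prompt" and "kendall") is a hypothetical structure we introduce purely for illustration and does not correspond to the PrExMe source code; only the idea of median aggregation over the remaining dimensions followed by a Kendall comparison of the two rankings follows the method described above.

```python
import pandas as pd
from scipy.stats import kendalltau

def pattern_stability(results: pd.DataFrame, pattern_col: str, changed_col: str,
                      value_a: str, value_b: str, score_col: str = "kendall") -> float:
    """
    Stability of the ranking of `pattern_col` (e.g. the format requirement) when
    `changed_col` (e.g. the base prompt) switches from `value_a` to `value_b`.
    Each pattern value is aggregated with the median over all other dimensions,
    and the two resulting rankings are compared with the Kendall correlation.
    """
    med_a = results[results[changed_col] == value_a].groupby(pattern_col)[score_col].median()
    med_b = results[results[changed_col] == value_b].groupby(pattern_col)[score_col].median()
    common = med_a.index.intersection(med_b.index)   # patterns evaluated in both settings
    tau, _ = kendalltau(med_a[common], med_b[common])
    return tau                                       # high tau -> stable ranking

# Example with hypothetical column and value names:
# stability = pattern_stability(results, "format_requirement", "base_prompt", "PZS", "ZS-CoT")
```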
The highest stability occurs when changing from PZS to ZS-CoT or vice versa (0.65). That means that, when we choose the format prompt with the highest median correlation, there is a high chance that it will perform well for ZS and ZS-CoT. For the task description, a change from ZS to ZS-CoT is unlikely to retain the ranking. This also underlines the result of the previous paragraph that the format requirement is more stable than the task description. We can also use this method to quantify the stability of the model ranking, when each model is first prompted with pattern A that is then changed to pattern B. With this, we can identify how similar two patterns are. Figure <ref> shows this type of plot for the format requirement. For example, if all models are prompted with “0 to 100” and with “-100 to 100”, the ranking of models will not change much. With a change from “simple labels” to “complex labels”, the model ranking will change more drastically. With respect to RQ2, the heatmaps highlight that even small changes to the input prompt can drastically influence the relative ranking of LLMs and other prompting patterns. This is in line with recent research that has shown the susceptibility of LLMs to single input prompts <cit.>. However, the heatmaps also show that not every change to the input has this effect, and they can be used as indicators of the transferability of new prompting patterns. § RECOMMENDATIONS We now address RQ3 and give recommendations for employing open-source prompt-based metrics. Among the evaluated models, Platypus2-70B demonstrates superior performance. For 13B models, Tower and Orca exhibit the highest correlations in MT and summarization tasks. We recommend utilizing the prompting patterns that most frequently yield top correlations for these models (refer to <ref> and Appendix <ref>). When introducing a new prompting pattern or model, its median performance across the other existing prompting patterns can serve as an indicator of the pattern's efficacy in unknown contexts. Thereby, the actual predictive power of the median (or other aggregation measures) for each dimension can be determined based on previous evaluations. The results and source code of PrExMe provide a foundational basis for this analysis. § CONCLUSION We have introduced PrExMe, a large-scale exploration of prompting templates for prompt-based open-source NLG metrics. We evaluate 720 different templates and over 6.6M prompts and provide recommendations that aim to make future metrics of this type more robust. Further, our results provide a comparison and analysis of recent open-source LLMs when applied to this task.[We used GitHub Copilot (<https://github.com/features/copilot>) for minor code auto-completion tasks and GPT4 as a writing aid for paraphrasing.] § ACKNOWLEDGEMENTS The NLLG group gratefully acknowledges support from the Federal Ministry of Education and Research (BMBF) via the research grant “Metrics4NLG” and the German Research Foundation (DFG) via the Heisenberg Grant EG 375/5-1. Further, we thank Juri Opitz for his implementations of the DSBA and GEMBA prompts, as well as for his feedback during our discussions. The authors also acknowledge support by the state of Baden-Württemberg through bwHPC and the German Research Foundation (DFG) through grant INST 35/1597-1 FUGG.
§ LIMITATIONS One limitation of our work is that even though we evaluate a large variety of possible prompts, there is still a lot of interesting possible variety in prompting approaches that we did not explore for now (e.g., the detail level of task instructions or structured output formats). Especially, our multi-step experiment is currently conducted on a very small scale. Future work might consider extending the exploration of this and other multi-step approaches. A further limitation is that we cannot be sure that the newer LLM models did not see parts of the older datasets in their training data. Also, the selection of the best prompts that are presented in the result tables is currently based on the maximum instead of the median, which was found to highlight the most stable prompts. Generally, by selecting the 9 “best” prompts for phase 2 we are narrowing the search space. Hence, the interplay between prompt patterns might not be fully represented for these phases. Furthermore, our heatmaps only compare one dimension, while another is changed, possibly simplifying the interplay between the others. As another limitation, in rare cases the context size of the models was exceeded. Future work could explore different ways to handle this than cutoff. Further, the heatmaps show many Kendall correlations and may be prone to statistical effects for some values. Lastly, we assume that LocalGemba is performing worse than, e.g., PZS prompts because of its higher prompt complexity, while the original GembaMQM can handle it due to GPT4 being more advanced. However, we did not test PZS prompts with GPT4 to confirm it performs worse than GembaMQM there. § ETHICAL CONSIDERATIONS Evaluating generated texts with prompt-based LLMs might (especially with explanations) be prone to hallucinations. Depending on the use case, this might be dangerous. However, while we research about this type of metric, our work analyzes methods to select and construct more robust and also more accessible (open-source) approaches, therefore we see no ethical concerns. acl_natbib § PROMPT TEMPLATES Tables <ref>, <ref>, <ref>, <ref> and <ref> give an overview of our prompt templates. <ref> § IMPLEMENTATION DETAILS We use the following library versions: torch==2.1.2 transformers==4.39.3 unbabel_comet==2.2.1 vllm==0.4.0.post1 auto_gptq==0.7.1 Further, we use the following models from huggingface: <https://huggingface.co/Open-Orca/OpenOrca-Platypus2-13B/tree/main>, <https://huggingface.co/NousResearch/Nous-Hermes-13b>, <https://huggingface.co/TheBloke/Platypus2-Instruct-GPTQ>, <https://huggingface.co/Unbabel/XCOMET-XXL>, <https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1>, <https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct>, <https://huggingface.co/MaziyarPanahi/Meta-Llama-3-70B-Instruct-GPTQ>, <https://huggingface.co/Unbabel/TowerInstruct-13B-v0.1> and <https://huggingface.co/facebook/bart-large-cnn>. These have 13B, 13B, 70B, 10.7B, 8x7B, 8B, 70B, 13B and 405M parameters respectively. The runtime of the experiments varied based on the general cluster usage. The runtime for one evaluation of all prompt combinations on 500 samples of one task on the dev set is approximately 7 hours for the 13B models and 36 hours for the 70B model. This was only possible through optimizations with vLLM. § DATASET DETAILS Table <ref> shows the distribution of the Eval4NLP 2023 dataset <cit.> (train, dev and test) and our second test set, built from WMT23 <cit.> and Seahorse <cit.>. 
We use the train set in our first evaluation phase and the dev, test and test2 sets in our second evaluation phase. Where applicable, we provide the licenses in the respective directories of the source code. The WMT23 dataset was built with the mt-metrics-eval library.[<https://github.com/google-research/mt-metrics-eval>] In their data, not all sentences had available ground truth annotations; in these cases, we dropped the corresponding rows. For Seahorse, we convert the quality questions into scores. If the first question is negative, the score is 0. If it does not rule out the other questions, each question is evaluated as 0.2, such that the scores lie in a range between 0 and 1. § MODEL ABBREVIATIONS Table <ref> gives an overview of the abbreviations that we use to concisely present our results in the main paper. § PHASE 1 & 2 PERFORMANCE Table <ref> shows the performance of the prompts with the best Kendall performance across the different dimensions. Tables <ref> and <ref> show the performance of selected prompts on the phase 2 datasets. § PROMPT SELECTION Table <ref> contains some of the 9 prompts that were selected for OS and Phase 2 experiments. Table <ref> also gives an overview of the combinations by name. § SIGNIFICANCE MATRICES FOR CORRELATION HEATMAPS To test which aggregation method is best suited to define the ranking of a prompting pattern (inspired by <cit.>), we compare each possible set of two aggregation methods with a permutation test. As main dimensions, we compare the rankings of the format requirement and task description before and after a change. Then we concatenate the scores when changing each of the other dimensions. That is, we get a ranking that indicates the stability of the main dimension when changing all other dimensions. Then, for each aggregation method, we compare the ranking before and after the change. To do so, we randomly swap 50% of the samples of one aggregation method with the other. If the observed difference in their Kendall correlations is larger than the permuted differences in most permutations, one method is significantly better than the other. As a result, the mean and median are significantly better than some of the other methods (for a comparison along the task description pattern). In particular, the median is significantly (p ≤ 0.05) better than the other methods and remains significantly better than saturation and standard deviation after Bonferroni correction. Figure <ref> indicates the significances of aggregation measures when comparing the task descriptions. § PIE CHARTS BETWEEN MODELS FOR EACH PROMPTING PATTERN Figures <ref>, <ref> and <ref> show the distribution of patterns in the best prompts per model across all other dimensions. § PIE CHARTS BETWEEN DATASETS FOR EACH PROMPTING PATTERN Figures <ref>, <ref> and <ref> show the distribution of patterns in the best prompts per dataset across all other prompting patterns. § STABILITY HEATMAPS Figures <ref>, <ref> and <ref> show further heatmaps depicting the stability of a ranking of prompting patterns, models and datasets, when another prompting pattern, the model or the dataset is changed.
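As a small illustration of the Seahorse score conversion described in the dataset appendix above, the snippet below encodes one possible reading of that rule. It assumes the six Seahorse quality questions, with the first question acting as a gate and each of the remaining five positive answers contributing 0.2; this is our interpretation of the description, not the exact implementation used for the experiments.

```python
def seahorse_to_score(answers):
    """
    Map six Seahorse yes/no quality ratings to a single score in [0, 1].
    `answers` is a list of six strings ("Yes"/"No"); the first question gates the rest.
    """
    if answers[0].strip().lower() != "yes":      # first question negative -> score 0
        return 0.0
    # each of the remaining five positive answers contributes 0.2
    return 0.2 * sum(a.strip().lower() == "yes" for a in answers[1:])
```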
http://arxiv.org/abs/2406.18349v1
20240626134835
The Incompressible Magnetohydrodynamic Energy Cascade Rate Upstream of Mars: Effects of the Total Energy and the Cross-Helicity on Solar Wind Turbulence
[ "Norberto Romanelli", "Nahuel Andres", "Gina DiBraccio", "Jaye Verniero", "Jacob Gruesbeck", "Adam Szabo", "Jared Espley", "Jasper Halekas" ]
astro-ph.SR
[ "astro-ph.SR", "astro-ph.EP", "physics.space-ph" ]
Norberto Romanelli. Submitted to ApJ norberto.romanelli@nasa.gov 0000-0001-9210-0284]Norberto Romanelli Department of Astronomy, University of Maryland, College Park, MD, USA Planetary Magnetospheres Laboratory, NASA Goddard Space Flight Center, Greenbelt, MD, USA 0000-0002-1272-2778]Nahuel Andrés CONICET - Universidad de Buenos Aires, Instituto de Física Interdisciplinaria y Aplicada (INFINA), Ciudad Universitaria, 1428 Buenos Aires, Argentina Universidad de Buenos Aires, Facultad de Ciencias Exactas y Naturales, Departamento de Física, Ciudad Universitaria, 1428 Buenos Aires, Argentina 0000-0002-2778-4998]Gina A. DiBraccio Planetary Magnetospheres Laboratory, NASA Goddard Space Flight Center, Greenbelt, MD, USA 0000-0003-1138-652X]Jaye L. Verniero Heliospheric Physics Laboratory, NASA Goddard Space Flight Center, Greenbelt, MD, USA 0000-0002-1215-992X]Jacob R. Gruesbeck Planetary Magnetospheres Laboratory, NASA Goddard Space Flight Center, Greenbelt, MD, USA 0000-0003-3255-9071]Adam Szabo Heliospheric Physics Laboratory, NASA Goddard Space Flight Center, Greenbelt, MD, USA 0000-0002-6371-9683]Jared R. Espley Planetary Magnetospheres Laboratory, NASA Goddard Space Flight Center, Greenbelt, MD, USA 0000-0001-5258-6128]Jasper S. Halekas Department of Physics and Astronomy, University of Iowa, Iowa City, Iowa, USA § ABSTRACT Solar wind turbulence is a dynamical phenomenon that evolves with heliocentric distance. Orbiting Mars since September 2014, Mars Atmosphere and Volatile EvolutioN (MAVEN) offers a unique opportunity to explore some of its main properties beyond ∼ 1.38 au. Here, we analyze solar wind turbulence upstream of Mars’s bow shock, utilizing more than five years of magnetic field and plasma measurements. This analysis is based on two complementary methodologies: 1) the computation of magnetohydrodynamic (MHD) invariants characterizing incompressible fluctuations; 2) the estimation of the incompressible energy cascade rate at MHD scales (i.e., ⟨ε^T⟩_MHD). Our results show the solar wind incompressible fluctuations are primarily in a magnetically dominated regime, with the component travelling away from the Sun having a higher median pseudo-energy. Moreover, turbulent fluctuations have a total energy per mass of up to ∼ 300 km^2 s^-2, a range smaller than reported at 1 au. For these conditions, we determine the probability distribution function of ⟨ε^T⟩_MHD ranges mainly between ∼-1× 10^-16 and ∼1× 10^-16 Jm^-3 s^-1, with a median equal to -1.8× 10^-18 Jm^-3 s^-1, suggesting back-transfer of energy. Our results also suggest that |⟨ε^T⟩_MHD| is correlated with the total energy per mass of fluctuations and that the median of ⟨ε^T⟩_MHD does not vary significantly with the cross-helicity. We find, however, that the medians of the inward and outward pseudo-energy cascade rates vary with the solar wind cross-helicity. Finally, we discuss these results and their implications for future studies that can provide further insight into the factors affecting solar wind energy transfer rate. § INTRODUCTION Turbulence is a ubiquitous, multi-scale, and nonlinear phenomenon occurring in many space plasma environments <cit.>. Solar wind turbulence has been investigated by means of analytical theoretical models, numerical simulations and analysis of spacecraft magnetic field and plasma observations at different heliocentric distances <cit.>. 
At the magnetohydrodynamic (MHD) scales, fully developed solar wind turbulence is partly characterized by the presence of an inertial range, where local energy transfer takes place across several spatial and temporal scales without dissipation. One method to quantify the effects of this phenomenon is to estimate the total energy cascade rate, ε, i.e., the amount of energy per unit volume per unit time cascading across different spatial scales, resulting from nonlinear processes. In the classical hydrodynamic picture, this cascade of energy occurs from large to small scales, i.e., ε>0 <cit.>. In plasma turbulence, the energy cascade rate can be determined by means of exact relations, expressed in terms of plasma and magnetic field increment functions <cit.>. <cit.> derived an exact relation valid for fully developed incompressible MHD turbulence, under statistically homogeneous and isotropic conditions. Numerical and observational results have evaluated and confirmed this relation <cit.>. Moreover, the inclusion of additional effects such as plasma compressibility, ion dynamics at smaller scales, and various thermodynamic closures enabled expressions for ε, valid for other plasma environments <cit.>. Orbiting around Mars since September 2014, Mars Atmosphere and Volatile EvolutioN (MAVEN) offers a unique opportunity to explore the solar wind turbulent state at ∼ 1.5 au <cit.>. <cit.> characterized the magnetic field power spectra upstream and inside the magnetosphere of Mars. The authors identified and characterized variability in the power-law index of magnetic fluctuations as a function of the magnetosphere region and Mars season. The latter aspect was mainly attributed to seasonal variability of the occurrence rate of waves observed at the local proton cyclotron frequency <cit.>. <cit.> computed the normalized solar wind cross-helicity and residual energy, utilizing MAVEN Magnetometer (MAG) and Solar Wind Ion Analyzer (SWIA) data <cit.>. Consistent with previous studies, <cit.> reported that Alfvénic fluctuations upstream of Mars are magnetically dominated, with higher pseudo-energy for the component traveling outwards from the Sun. <cit.> estimated for the first time the absolute value of the incompressible energy cascade rate at MHD scales upstream of Mars (i.e., ⟨|ε^T|⟩_MHD), using about four months of MAVEN observations. The authors observed changes in the probability distribution function (PDF) of ⟨|ε^T|⟩_MHD with the Martian heliocentric distance and/or the presence of proton cyclotron waves (PCWs) <cit.>. Analyzing more than five years of MAVEN data, <cit.> concluded that PCWs do not have a significant effect on ⟨|ε^T|⟩_MHD, and that the observed variability was due to changes in Mars’s heliocentric distance. To the best of our knowledge, so far there has not been a study computing the signed energy cascade rate at MHD scales upstream of Mars, nor one examining its dependence on the total energy per mass of solar wind fluctuations and the solar wind cross-helicity. The main objective of the present paper is to improve the current understanding of the factors affecting the transfer of energy in the inertial range and to determine the PDF of the energy cascade rate beyond ∼1.38 au. These results are also put into context with previous reports in the inner heliosphere and provide added value, in particular, due to the region of the solar wind phase-space nominally present upstream of Mars.
This article is structured as follows: Section 2 presents a brief description of the incompressible MHD equations utilizing the Elsässer variables <cit.> and also shows the exact relation valid in the MHD inertial range used to compute ⟨ε^T⟩_MHD. Section 3 reports the main capabilities of MAVEN MAG and SWIA instruments and the employed selection criteria to identify time intervals of interest and estimate ⟨ε^T⟩_MHD. Section 4 describes our main observational results and Section 5 develops our discussion and conclusions. § INCOMPRESSIBLE MHD TURBULENCE The incompressible MHD equations can be expressed in terms of the Elsässer variables 𝐙^± <cit.>, as follows: ∂𝐙^±/∂ t = -(𝐙^∓·∇)𝐙^± - ∇(P_*) + ν∇^2𝐙^±, where 𝐙^±=𝐕±𝐕_A, 𝐕 and 𝐕_A=𝐁/√(μ_0 ρ_0) are the plasma flow and Alfvén velocity, respectively, P_* is the total pressure and ∇·𝐙^±=0. Moreover, 𝐁, ρ_0 = ⟨ρ⟩ and μ_0 are the magnetic field, the mean plasma mass density and the vacuum permeability, respectively. The Elsässer variables 𝐙^± describe general MHD processes, occurring for instance in the solar wind or planetary magnetosheaths <cit.>. In particular, these terms can also be used to represent low-frequency waves, with 𝐙^- (𝐙^+) associated with waves propagating parallel (antiparallel) to the background mean magnetic field. §.§ Incompressible Solar Wind Fluctuations The incompressible solar wind fluctuations can be characterized by means of several parameters. In particular, the average total fluctuation energy per unit mass, E_T, can be expressed as: E_T=⟨ |Δ𝐕|^2⟩+⟨|Δ𝐕_A|^2⟩/2 = ⟨|Δ𝐙^+|^2⟩+⟨|Δ𝐙^-|^2⟩/4. where Δ𝐕 and Δ𝐕_A are the mean solar wind (proton) and Alfvén velocity fluctuations, respectively. In other words, E_T is the sum of the kinetic and magnetic fluctuation energies averaged over a given time interval, which is also equal to the sum of the pseudo-energies associated with Alfvénic fluctuations propagating anti-parallel (⟨|Δ𝐙^+|^2⟩/4) and parallel (⟨|Δ𝐙^-|^2⟩/4)) to the mean magnetic field. In addition, the normalized solar wind cross-helicity (σ_c) and residual energies (σ_r) are defined as follows, σ_c=2 ⟨Δ𝐕·Δ𝐕_A ⟩/⟨ |Δ𝐕|^2⟩+⟨|Δ𝐕_A|^2⟩ = ⟨|Δ𝐙^+|^2⟩-⟨|Δ𝐙^-|^2⟩/⟨|Δ𝐙^+|^2⟩+⟨|Δ𝐙^-|^2⟩, σ_r=⟨ |Δ𝐕|^2⟩-⟨|Δ𝐕_A|^2⟩/⟨ |Δ𝐕|^2⟩+⟨|Δ𝐕_A|^2⟩, In a few words, σ_c quantifies the cross-correlation between the mean proton and Alfvén velocity fluctuations, or equivalently, the energy balance between anti-parallel and parallel propagating plasma fluctuations. σ_r quantifies the energy balance between kinetic and magnetic field fluctuations. Moreover, by taking into account the interplanetary magnetic field (IMF) orientation for a given time interval, we can redefine the normalized solar wind cross-helicity to provide a measure of the energy balance between fluctuations propagating inwards and outwards from the Sun <cit.>. In this study we compute σ_c sign(⟨IMF Bx⟩), where Bx is the magnetic field component in the Mars Solar Orbital (MSO) coordinate system. Note that the MSO coordinate system is centered at Mars, with its x-axis pointing towards the Sun. The z-axis points northward and normal to the orbital plane, and the y-axis completes the right-handed coordinate system. Thus, the sign of the IMF Bx MSO component provides a very good estimation for the polarity of the radial IMF component. In this regard, the pseudo-energies associated with the components propagating inwards (in) and outwards (out) from the Sun, E_in and E_out, are equal to ⟨|Δ𝐙^in|^2⟩/4 and ⟨|Δ𝐙^out|^2⟩/4, respectively <cit.>. 
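As a rough illustration of how these fluctuation diagnostics could be computed for a single MAVEN-like interval, consider the sketch below. It assumes proton moments and magnetic field data on a common cadence in MSO coordinates and takes fluctuations with respect to the interval mean; the function and variable names are ours and do not correspond to any released analysis pipeline.

```python
import numpy as np

MU0 = 4e-7 * np.pi        # vacuum permeability [T m A^-1]
M_P = 1.6726e-27          # proton mass [kg]

def fluctuation_diagnostics(v_kms, b_nt, n_cc):
    """
    v_kms: (N, 3) proton velocity [km/s]; b_nt: (N, 3) magnetic field [nT] in MSO;
    n_cc: (N,) proton density [cm^-3]. Returns the total fluctuation energy per mass
    [km^2 s^-2], the normalized cross-helicity signed by the mean IMF Bx, and the
    normalized residual energy, with fluctuations taken about the interval mean.
    """
    rho0 = np.mean(n_cc) * 1e6 * M_P                       # mean mass density [kg m^-3]
    v_a = b_nt * 1e-9 / np.sqrt(MU0 * rho0) / 1e3          # Alfven velocity [km/s]
    dv = v_kms - v_kms.mean(axis=0)                        # velocity fluctuations
    dva = v_a - v_a.mean(axis=0)                           # Alfven velocity fluctuations
    e_kin = np.mean(np.sum(dv ** 2, axis=1))               # kinetic fluctuation energy
    e_mag = np.mean(np.sum(dva ** 2, axis=1))              # magnetic fluctuation energy
    e_total = 0.5 * (e_kin + e_mag)                        # total fluctuation energy E_T
    sigma_c = 2.0 * np.mean(np.sum(dv * dva, axis=1)) / (e_kin + e_mag)
    sigma_r = (e_kin - e_mag) / (e_kin + e_mag)
    sigma_c_signed = sigma_c * np.sign(np.mean(b_nt[:, 0]))  # sign of <IMF Bx> (MSO)
    return e_total, sigma_c_signed, sigma_r
```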
§.§ Incompressible MHD Energy Cascade Rate As reported by <cit.>, the following exact relation for fully-developed incompressible three-dimensional MHD turbulence can be derived assuming homogeneous and isotropic conditions: ⟨δ𝐙^∓_R(L) |δ𝐙^±(L)|^2 ⟩ ρ_0= -4/3ε^± L, where δ𝐙(L)= 𝐙(x+L)-𝐙(x) refers to the difference of 𝐙 at two points separated by a distance L along the radial (𝐑) direction, and 𝐙^±_R=𝐙^±·𝐑. Moreover, ε^± corresponds to the energy cascade rates of the pseudo-energies |𝐙^±|^2, and the total nonlinear energy cascade rate is given by ε^T =(ε^++ε^-)/2. The angular bracket ⟨·⟩ refers to an ensemble average, which is computed as a time average assuming ergodicity <cit.>. Under these definitions, positive and negative ε^T values correspond to direct and inverse energy transfer rates, respectively. In the case of single-spacecraft measurements in the solar wind, all lagged separations are along the solar wind flow direction, 𝐑. Since we use the convention of positive differences, δ𝐙^±(τ) = - δ𝐙^±(L), where τ is the timescale of interest, Eq. (<ref>) can be rewritten as: ⟨δ𝐙^∓_R(τ) |δ𝐙^±(τ)|^2 ⟩ ρ_0 = 4/3ε^± V τ where we have made use of Taylor's hypothesis and the fact that the mean solar wind velocity (V) is much larger than typical velocity fluctuations. Same as before, by knowing the IMF orientation for each time interval under analysis, we can define the pseudo-energy cascade rates inwards and outwards from the Sun at MHD scales (⟨ε^in⟩_MHD and ⟨ε^out⟩_MHD, respectively) as ⟨ε^+⟩_MHD or ⟨ε^-⟩_MHD, accordingly. Positive pseudo-energy cascade rates could be associated with a transfer of pseudo-energy from large to small scales within the solar wind. On the other hand, negative rates may indicate the presence of an inverse pseudo-energy cascade. Various mechanisms, such as large-scale shears, anisotropies, or a dominant wave mode, have been proposed to explain observed negative energy transfer rates in space plasmas and neutral fluids. Despite these efforts, further research is needed to better understand their nature. In particular, these analyses would benefit from a comprehensive characterization of small-scale physical processes <cit.>. § MAVEN MAG AND SWIA OBSERVATIONS AND SELECTION CRITERIA To investigate the factors that affect the signed incompressible solar wind energy transfer rate at the MHD scales, we analyze MAVEN MAG and SWIA observations gathered between 10 October 2014 and 31 December 2019. MAVEN MAG measurements have a cadence of 32 Hz and an accuracy of ∼0.25 nT <cit.>. SWIA measures ion flux in the 25 eV/q to 25 keV/q energy range with a field of view of 360^∘ by 90^∘ <cit.>. In this work, we have analyzed solar wind proton density and velocity observations, computed onboard with a sampling frequency of 0.25 Hz <cit.>. The selection criteria and methodology employed in this work are analogous to those used by <cit.>. What follows is a summary of the key steps. We initially identify ∼ 34 min intervals when MAVEN was upstream and magnetically disconnected from Mars' bow shock <cit.>. A sample of this size covers at least one correlation time of solar wind turbulent fluctuations <cit.>. Next, we focus on cases with nearly incompressible solar wind conditions, i.e., |n-n_0|/n_0<0.2, where n is the plasma number density and n_0=⟨ n ⟩. For events satisfying this condition, the 32 Hz MAG data is averaged to derive values at times with available plasma moments (4s time resolution). This step is necessary to compute ⟨ε^T⟩_MHD for every analyzed interval, as shown in Eq.
(<ref>). In addition, to determine a reliable energy cascade rate at MHD scales, ⟨ε^T⟩_MHD, we also restrict our analysis to events where the IMF cone angle (i.e., angle between the IMF and the solar wind velocity) is relatively stationary (variability equal or smaller than 15^∘), without changes of sign of the energy cascade rate throughout the MHD range and with std(ε^T_MHD)/|⟨ε^T⟩_MHD|<1 <cit.>. To that end, we have also checked that the third-order moment displays a linear scaling with timescale, implying a nearly constant value of the energy cascade rates in the analyzed temporal scales. Following <cit.> and <cit.>, the computed values of ⟨ε^T⟩_MHD shown in the next sections are the average of ε^T between τ=500 s and τ=1500 s. However, we report that no significant differences are present when computing ⟨ε^T⟩_MHD based on averages between τ=750 s and τ=1250 s, suggesting consistency across the inertial range <cit.>. We have also investigated the convergence of higher-order moments by assessing the variability of energy transfer rate estimates utilizing time intervals of different sizes. In this regard, it is important to note the constraints imposed by MAVEN's orbital period, the duration the spacecraft spends in the pristine solar wind without magnetic connection to the bow shock, and the requirement for a statistically significant number of events. These factors allow us to estimate the maximum size of the time intervals for computing the energy cascade rate, based on the current MAVEN data set. Taking these constraints into account, we analyze MAVEN observations based on the identification of 1890 intervals of ∼34 min and 376 intervals of ∼68 min satisfying the selection criteria. An analysis of the statistical outcomes obtained from these two data subsets does not show significant differences in the observed trends and associated conclusions. Hereafter, we therefore display results derived from all selected ∼34 min intervals. § RESULTS §.§ Case Study: Turbulent Event on July 15, 2015 An example of an event fulfilling our selection criteria described in Section <ref> is shown in Figure <ref>. It displays MAVEN MAG and SWIA observations obtained on July 15, 2015, from ∼14:21 UT to ∼14:55 UT. The solar wind velocity (panel a) is mostly anti-parallel to the x-MSO axis, and displays an approximately constant value throughout this interval with V_x∼-366 km s^-1. Panel (b) indicates that the Alfvén velocity also remains relatively steady. Panel (c) shows the event took place under nearly incompressible conditions, with n∼ 3.1 cm^-3. Panel (d) displays the total signed incompressible energy cascade rate as a function of τ, with τ ranging from 10^2 s to 2×10^3 s. We estimate ⟨ε^T⟩_MHD is approximately -0.67×10^-18 J m^-3 s^-1, suggesting the presence of back-transfer of energy in this event. §.§ Solar Wind Incompressible Fluctuations Figure <ref> displays the PDF of (a) the total energy per mass of solar wind fluctuations and the pseudo-energies E_in and E_out, (b) the normalized solar wind cross-helicity σ_c times the sign(⟨IMF Bx⟩), (c) the residual energy and (d) the ⟨IMF Bx⟩ MSO component for all analyzed events observed by MAVEN. These variables are computed based on averages over the size of the interval, i.e., ∼ 34 min. Note that hereafter we refer to the total energy per mass as total energy. Our results show that the total energy of the turbulent fluctuations extends up to 300 km^2s^-2, with a median of ∼ 60 km^2s^-2 (green dashed line, Figure <ref> (a)). 
These values are significantly smaller than their reported counterparts at Earth or smaller heliocentric distances. This allows us to investigate solar wind turbulence in a scarcely explored region of phase-space <cit.>. The median pseudo-energy of the outward fluctuation component, E_out, is ∼ 28 km^2s^-2, ∼ 33% larger than that of E_in (∼ 21 km^2s^-2). As a result, σ_c sign(⟨IMF Bx⟩) displays a positive median (∼ 0.15), suggesting the solar wind turbulence state can be understood in terms of a combination of fluctuations travelling towards and away from the Sun, where the latter component is more energetic. Figure <ref> (c) shows most of the events are magnetically dominated (i.e., magnetic fluctuation energy larger than kinetic fluctuation energy), with a median of σ_r ∼ -0.33. Figure <ref> (d) shows a nearly symmetric distribution (skewness ∼ 0.18), with a median equal to 0.03 nT, suggesting MAVEN visited both sides of the heliospheric current sheet in a similar proportion and there are no significant biases associated. The quartiles (Q_1, Q_2, and Q_3) corresponding to the distributions shown in Figure <ref> are provided in Table 1 (see columns associated with ∼ 34 min intervals). §.§ The Solar Wind Incompressible MHD Energy Cascade Rate at Mars Figure <ref> shows the PDFs of ⟨ε^in⟩_MHD, ⟨ε^out⟩_MHD and ⟨ε^T⟩_MHD between -1×10^-16 and 1× 10^-16 Jm^-3 s^-1. The vertical lines correspond to the quartiles of each distribution. Our results show that the three distributions take a wide range of values and have negative medians, within the 95% confidence interval. Specifically, the median for ⟨ε^T⟩_MHD upstream of Mars is -1.8× 10^-18 Jm^-3 s^-1, suggesting the presence of back-transfer of energy at ∼ 1.5 au. Interestingly, this conclusion holds when increasing the size of the analyzed time intervals. Indeed, we also observe a negative median for the MHD energy transfer rate distribution when the time interval duration is doubled. In other words, we obtained similar statistical results from an analogous analysis of MAVEN observations based on 376 intervals of ∼68 min. Moreover, the quartiles associated with the distribution of the pseudo-energy components and total energy cascade rates do not vary significantly with both interval sizes, as shown in Table 1. In addition, it is worth mentioning that negative transfer rates have been found in studies focused on solar wind turbulence at 1 au, but only under certain solar wind conditions <cit.>. §.§.§ Dependence on the Total Energy of Solar Wind Fluctuations Figure <ref> (a) displays a scatter plot of all analyzed events with σ_r as a function of σ_c sign(⟨IMF Bx⟩), and color-coded with the total energy of solar wind fluctuations. As also shown in Figure <ref>, we observe that most of the events are magnetically dominated, and with a slightly larger proportion of events with more pseudo-energy in the component going out of the Sun. In addition, we identify a weak trend in which the events with larger total fluctuation energy appear mostly distributed near the boundary (σ_c^2+σ_r^2 ∼ 1). This can be understood in terms of the phase between the solar wind velocity and magnetic field fluctuations. Indeed, these events present a relatively high correlation between these two fields <cit.>. Figure <ref> (b) shows the total solar wind energy cascade rate at the MHD scales as a function of the total energy of the turbulent fluctuations, in log-log scales. 
Our results display a strong positive correlation (R=0.80) between ⟨ε^T⟩_MHD and E_T, when ⟨ε^T⟩_MHD>0 (grey dots). An analogous dependence is identified between |⟨ε^T⟩_MHD| and E_T, when ⟨ε^T⟩_MHD<0 (orange dots). Moreover, the computed linear fits suggest the presence of a polynomial dependence, with |⟨ε^T⟩_MHD| ∝ E_T^α, where α=1.3±0.1 and α=1.2±0.1, for each data set, respectively. The dispersion observed in both data sets is likely associated with other plasma variables affecting the solar wind turbulent cascade rate, such as, the solar wind cross-helicity. §.§.§ Dependence on the Cross-Helicity of Solar Wind Fluctuations Figure <ref> shows the medians of the energy cascade rates (a-c), the ⟨ε^out⟩_MHD/⟨ε^in⟩_MHD ratio (d-f), and the total and pseudo-energy of fluctuations (g-i) as a function of σ_c sign(⟨IMF Bx⟩), with each column corresponding to a different total energy fluctuation (E_T) range. As can be seen in panel (g) (which includes all analyzed events), the medians of E_out (blue curve), E_in (red curve), and E_T (green curve) exhibit strong variation with the solar wind cross-helicity. Specifically, the median of E_T is higher for highly Alfvénic (|σ_c|∼1) solar wind fluctuation states. Also, as expected from Eq. (<ref>), the median of E_in (E_out) decreases (increases) with σ_c sign(⟨IMF Bx⟩). It is worth emphasizing that these trends affect the observed variability of the solar wind energy cascade rates as a function of cross-helicity, as suggested by Figure <ref> (b). Motivated by these results, we further analyze two data subsets defined in terms of narrower ranges of total energy E_T. By doing this, we can examine if there is a dependence of the energy cascade rate with the cross-helicity. The central column (i.e., Figure <ref> (b, e, h)) considers events with E_T≤ Q_3(E_T) ∼ 120 km^2s^-2, while the right column (i.e., Figure <ref> (c, f, i)) focuses on events with E_T≥ Q_1(E_T)∼ 30 km^2s^-2. Our results suggest that the total energy of solar wind fluctuations (green curve) varies significantly less with solar wind cross-helicity for these two energy ranges (Figure <ref> (h, i)). In addition, we find mostly negative and approximately constant median total energy cascade rates at MHD scales across cross-helicity bins (Figure <ref> (b, c)). Furthermore, the medians of ⟨ε^in⟩_MHD and ⟨ε^out⟩_MHD are positively and negatively correlated with solar wind cross-helicity, respectively, across all explored energy ranges (Figure <ref> (a-c)). As a result, the median for ⟨ε^out⟩_MHD/⟨ε^in⟩_MHD takes small positive values only for nearly non-correlated solar wind velocity and magnetic field fluctuations. These profiles are highly asymmetric, with negative (but small) values for σ_c sign(⟨IMF Bx⟩)<0 and highly negative values for σ_c sign(⟨IMF Bx⟩)>0 (Figure <ref> (d-f)). § DISCUSSION AND CONCLUSIONS Making use of the incompressible MHD exact law and MAVEN magnetic field and plasma observations, we investigated the properties of solar wind Alfvénic fluctuations and determined the solar wind energy cascade rate at the MHD scales, upstream of the Martian bow shock. Solar wind turbulence properties at the Martian heliocentric distances, i.e., ∼ 1.38 -1.67 au, display both similarities and differences with previous reports upstream of Earth's magnetosphere and closer to the Sun. Among the similarities, we find that most of the analyzed events are characterized by negative residual energies, with a positive median for the solar wind cross-helicity <cit.>. 
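Before moving to the discussion, the sketch below illustrates how the per-interval signed cascade rate and the log-log fit described above could be implemented. It is a simplified sketch under stated assumptions (4 s cadence, sampling direction taken along the mean flow, averaging of ε^T between 500 s and 1500 s, inputs in the same units as the diagnostics sketch above) rather than the exact pipeline used here; an ordinary least-squares fit in log-log space stands in for whatever fitting procedure was actually employed.

```python
import numpy as np

MU0 = 4e-7 * np.pi        # vacuum permeability [T m A^-1]
M_P = 1.6726e-27          # proton mass [kg]

def cascade_rate_mhd(v_kms, b_nt, n_cc, dt=4.0, tau_min=500.0, tau_max=1500.0):
    """
    Signed incompressible cascade rate <eps^T>_MHD [J m^-3 s^-1] for one interval,
    from the mixed third-order law with Taylor's hypothesis, averaged over
    timescales tau in [tau_min, tau_max] seconds.
    """
    rho0 = np.mean(n_cc) * 1e6 * M_P                    # mean mass density [kg m^-3]
    v = v_kms * 1e3                                     # velocity [m/s]
    v_a = b_nt * 1e-9 / np.sqrt(MU0 * rho0)             # Alfven velocity [m/s]
    z_p, z_m = v + v_a, v - v_a                         # Elsasser fields
    v_bulk = v.mean(axis=0)
    v_mag = np.linalg.norm(v_bulk)
    r_hat = v_bulk / v_mag                              # sampling (flow) direction
    eps_t = []
    for lag in range(int(tau_min / dt), int(tau_max / dt) + 1):
        tau = lag * dt
        dz_p = z_p[lag:] - z_p[:-lag]
        dz_m = z_m[lag:] - z_m[:-lag]
        y_p = np.mean((dz_m @ r_hat) * np.sum(dz_p ** 2, axis=1))   # <dZ-_R |dZ+|^2>
        y_m = np.mean((dz_p @ r_hat) * np.sum(dz_m ** 2, axis=1))   # <dZ+_R |dZ-|^2>
        eps_p = 3.0 * rho0 * y_p / (4.0 * v_mag * tau)
        eps_m = 3.0 * rho0 * y_m / (4.0 * v_mag * tau)
        eps_t.append(0.5 * (eps_p + eps_m))
    return float(np.mean(eps_t))

def fit_powerlaw(abs_eps, e_total):
    """Least-squares fit of |<eps^T>_MHD| ~ E_T^alpha in log-log space."""
    slope, intercept = np.polyfit(np.log10(e_total), np.log10(abs_eps), 1)
    return slope, 10.0 ** intercept                     # alpha and prefactor
```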
On the other hand, the total energy of solar wind fluctuations takes values significantly smaller than observed upstream of Earth's bow shock and at smaller heliocentric distances <cit.>. We also find that the PDFs of ⟨ε^in⟩_MHD, ⟨ε^out⟩_MHD, and ⟨ε^T⟩_MHD range mainly between ∼-1× 10^-16 and ∼1× 10^-16 Jm^-3 s^-1 and have negative medians, suggesting transfer of energy from the smallest to the largest scales of the system in the studied temporal scales for slightly more than half of the analyzed events (see Figure <ref>). These results appear to be in contrast with some previous studies focused on solar wind turbulence upstream of Earth's bow shock, where a positive energy cascade rate is typically observed <cit.>. However, negative solar wind transfer rates at MHD scales were observed at 1 au, on average, under certain solar wind conditions <cit.>. Moreover, a partial explanation of our results can be provided based on the reports by <cit.> and <cit.>. By applying the exact relation for incompressible MHD turbulence, <cit.> and <cit.> reported a significant back-transfer of solar wind energy from small to large scales for events with large absolute values of cross-helicity, upstream of the terrestrial bow shock. Interestingly, <cit.> also reported that the range of solar wind cross-helicity values where negative energy cascade rates are observed increases in size with decreasing total energy of turbulent fluctuations (see Figure 3 in <cit.>). In this regard, the relatively low total energy of solar wind fluctuations usually seen upstream of Mars provides an explanation for the negative median energy transfer rates observed at MHD scales. Indeed, the lowest total energy level analyzed by <cit.> is four times larger than the maximum value investigated here. As a result, our analysis shows negative median energy cascade rates for the entire cross-helicity range. Previous studies have characterized the evolution of the solar wind normalized cross-helicity, residual energy and energy of fluctuations as a function of heliocentric distance <cit.>. Overall, the solar wind cross-helicity displays a decreasing trend with relatively highly values (∼ 0.6) for small heliocentric distances (∼ 0.2 au), and appears to reach a small asymptotic positive value at approximately 1 au. In other words, the non-linear interaction between solar wind fluctuations is responsible for the evolution from a highly Alfvenic state towards another one with small correlation between solar wind velocity and magnetic field fluctuations. This evolution takes places with magnetic field fluctuations dominating over velocity fluctuations, i.e., a relatively slowly varying but negative residual energy (∼ -0.3). In addition, the energy of fluctuations also decreases with heliocentric distance, with (δ𝐙^+)^2 > (δ𝐙^-)^2. Furthermore, the dependence of the pseudo-energy of these two components varies differently with heliocentric distance <cit.>. Our analysis of MAVEN observations suggest these solar wind properties appear to evolve similarly with heliocentric distance beyond 1 au and, at least, up to Mars's orbital location. Observational studies have also shown that the intensity of the energy transfer rate at MHD scales increases as the heliocentric distance decreases <cit.>. In particular, <cit.> reported that the energy transfer rate around the first Parker Solar Probe (PSP) perihelion is approximately 100 times the typical value at 1 au. 
<cit.> analyzed more than two years of PSP observations and found that the absolute value of the incompressible energy cascade rate is negatively correlated with the heliocentric distance. Similarly, <cit.> observed an increase in the incompressible and compressible energy cascade rates as PSP approached the Sun, which they attributed to an increase in the total energy of solar wind fluctuations. The energy cascade rates observed upstream of Mars in the present work are consistent with this trend and previous results obtained by <cit.>. We also investigated the influence that the total energy of solar wind turbulent fluctuations and the cross-helicity have on the total energy cascade rate and its components, ⟨ε^in⟩_MHD and ⟨ε^out⟩_MHD. Our observational results suggest the median of the total energy cascade rate is not significantly affected by the solar wind cross-helicity at the Martian heliocentric distances, for relatively narrow total energy bins (see Figure <ref> (b-c)). Indeed, most of the observed variability is associated with the dependence of E_T on σ_c sign(⟨IMF Bx⟩) (Figure <ref> (g)). In contrast, we also find that the medians of ⟨ε^in⟩_MHD, ⟨ε^out⟩_MHD, and ⟨ε^out⟩_MHD/⟨ε^in⟩_MHD vary with the solar wind cross-helicity. In particular, highly Alfvénic states are observed under relatively intense (negative) pseudo-energy cascade rates, ⟨ε^in⟩_MHD and ⟨ε^out⟩_MHD <cit.>. Moreover, the ratio ⟨ε^out⟩_MHD/⟨ε^in⟩_MHD displays a non-symmetric profile with respect to its maximum, taking positive values only when σ_c sign(⟨IMF Bx⟩) is small, at least for the energy of fluctuations typically seen upstream of Mars's magnetosphere. This analysis provides added value to previous reports focused on solar wind turbulent conditions upstream of Earth <cit.>. In particular, the observed trends in Figure <ref> (d-f) are similar to what has been reported by <cit.> and <cit.>. However, a detailed comparison is limited, since these authors analyzed the relationship between the pseudo-energy cascade rates and the absolute value of the solar wind cross-helicity and/or considered other cross-helicity ranges. In addition, our observational results show a strong correlation between the energy cascade rate intensity at MHD scales and the total energy of solar wind fluctuations, in agreement with results reported by <cit.>. The implied power-law dependence between |⟨ε^T⟩_MHD| and E_T appears intuitive when ⟨ε^T⟩_MHD is positive. Indeed, the more fluctuation energy available, the larger the energy that can be transferred to the dissipative scales of the system. The fact that we observe a similar dependence for cases with negative energy transfer rates is somewhat more difficult to interpret. The common explanation involves an inverse cascade of energy; however, what would the energy source be in that case? A non-trivial plasma process could be present, acting as a source of energy at these particular scales. Interestingly, <cit.> reported an analogous polynomial fit between the compressible cascade rate and the compressible energy component. This observational result motivates further studies on this matter. In particular, future numerical simulation studies would allow a parametric analysis of how each solar wind variable influences the energy cascade rate. These simulations should take into account the observed differences when analyzing solar wind turbulence upstream of Earth and Mars.
For instance, it is interesting to note that the events with high total energy of fluctuations analyzed in this work do not appear evenly distributed as a function of the solar wind residual energy nor the cross-helicity (see, Figure <ref> (a)). Moreover, additional efforts should be made to determine the effects that the time interval size may have on the computation of the energy cascade rate at MHD scales, and how it impacts the sign <cit.>. In this sense, it is important to note that we have used time intervals larger than at least one correlation timescales at Martian heliocentric distances <cit.>. It is also worth mentioning that solar activity may influence the energy cascade rate <cit.>. Indeed, <cit.> reported that the average pseudo-energy transfer rate is correlated with solar activity, based on Ulysses, high latitude, observations of fast solar wind. In this regard, given that the analyzed MAVEN dataset covers roughly half of the solar cycle, solar activity could partly be responsible for variability in the computed energy transfer rate upstream of Mars bow shock. A comprehensive analysis of these potential effects requires observations spanning at least one complete solar cycle period and is left for a future study. Finally, solar wind turbulence studies around Mars would also benefit from continuous, high cadence, magnetic field and plasma observations in the region upstream of the Martian bow shock. Such data set would allow the determination of typical spatial scales (such us the correlation, Taylor scales and kinetic scales) and the energy cascade rate at different timescales. Furthermore, the continuous sampling of the pristine solar wind would allow one to analyze variability of these quantities with time intervals of different sizes, ranging from a few auto-correlation timescales (on the order of several tens of minutes) to days or even larger and over different phases of the solar cycle. Such observations could be provided by a future Heliophysics mission to the Lagrangian 1 point at Mars <cit.>. The MAVEN project is supported by NASA through the Mars Exploration Program. N.R. is supported through a cooperative agreement with Center for Research and Exploration in Space Sciences & Technology II (CRESST II) between NASA Goddard Space Flight Center and University of Maryland College Park under award number 80GSFC21M0002. MAVEN data are publicly available through the Planetary Data System (<https://pds-ppi.igpp.ucla.edu/index.jsp>). natexlab#1#1 [Alberti et al.(2022)Alberti, Benella, Consolini, Stumpo, & Benzi]Al2022 Alberti, T., Benella, S., Consolini, G., Stumpo, M., & Benzi, R. 2022, The Astrophysical Journal Letters, 940, L13 [Alexakis & Biferale(2018)]Al2018 Alexakis, A., & Biferale, L. 2018, Physics Reports [Alexakis et al.(2024)Alexakis, Marino, Mininni, van Kan, Foldes, & Feraco]A2024 Alexakis, A., Marino, R., Mininni, P. D., et al. 2024, Science, 383, 1005 [Andrés & Banerjee(2019)]A2019 Andrés, N., & Banerjee, S. 2019, Phys. Rev. Fluids, 4, 024603, 10.1103/PhysRevFluids.4.024603 [Andrés et al.(2018)Andrés, Galtier, & Sahraoui]A2018 Andrés, N., Galtier, S., & Sahraoui, F. 2018, Physical Review E, 97, 013204 [Andrés et al.(2020)Andrés, Romanelli, Hadid, Sahraoui, DiBraccio, & Halekas]A2020 Andrés, N., Romanelli, N., Hadid, L. Z., et al. 2020, The Astrophysical Journal, 902, 134, 10.3847/1538-4357/abb5a7 [Andrés & Sahraoui(2017)]A2017b Andrés, N., & Sahraoui, F. 
http://arxiv.org/abs/2406.18895v1
20240627051704
Can we teach language models to gloss endangered languages?
[ "Michael Ginn", "Mans Hulden", "Alexis Palmer" ]
cs.CL
[ "cs.CL" ]
§ ABSTRACT Interlinear glossed text (IGT) is a popular format in language documentation projects, where each morpheme is labeled with a descriptive annotation. Automating the creation of interlinear glossed text can be desirable to reduce annotator effort and maintain consistency across annotated corpora. Prior research <cit.> has explored a number of statistical and neural methods for automatically producing IGT. As large language models (LLMs) have shown promising results across multilingual tasks, even for rare, endangered languages <cit.>, it is natural to wonder whether they can be utilized for the task of generating IGT. We explore whether LLMs can be effective at the task of interlinear glossing with in-context learning, without any traditional training. We propose new approaches for selecting examples to provide in-context, observing that targeted selection can significantly improve performance. We find that LLM-based methods beat standard transformer baselines, despite requiring no training at all. These approaches still underperform state-of-the-art supervised systems for the task, but are highly practical for researchers outside of the NLP community, requiring minimal effort to use. § INTRODUCTION With thousands of endangered languages at risk of extinction, language documentation has become a major area of linguistic research <cit.>, aiming to produce permanent artifacts such as annotated corpora, reference grammars, and dictionaries. Furthermore, research has explored the potential for computational methods to aid in language documentation and revitalization <cit.>. In particular, we study the task of generating interlinear glossed text (IGT), a line-by-line format for annotated text corpora that is commonly used in documentation projects. IGT generation has been studied using statistical <cit.> and neural <cit.> methods. A key challenge when working with endangered languages is that, in nearly all cases,[As <cit.> notes, not all endangered languages are low-resource (and vice versa), and such languages bear different concerns when developing language technology.] there is very little labeled or unlabeled data available. This is particularly challenging for large neural models, which depend on large, representative training data sets. Research has explored methods to overcome this challenge for IGT generation systems, such as crosslingual transfer <cit.> and architectural modifications <cit.>, but these approaches struggle in very low-resource scenarios. In addition, previous approaches generally require expertise in model training, implementation, and deployment, as well as the computational resources needed to serve large neural models. As large language models (LLMs) have demonstrated impressive performance on various natural language tasks, the question arises whether they can benefit language documentation. We seek to evaluate the ability of current LLMs to generate interlinear glossed text, compared with earlier state-of-the-art methods. This research can also shed light on the language-agnostic capabilities of LLMs, requiring the model to learn patterns in very rare languages which are unlikely to have a significant presence in their training data. We study strategies for selecting in-context examples, finding significant impacts on performance.
Our best-performing systems outperform transformer model baselines, despite involving no training whatsoever. They still underperform SOTA systems that induce morphological segmentation, but at the same time hold promise for offering a new approach to interlinear glossing for language documentation practitioners. § BACKGROUND §.§ Interlinear Glossed Text A typical example of IGT is shown in <ref>.
nuhu' tih-'eeneti-3i' heneenei3oobei-3i'
this when.PAST-speak-3PL IC.tell.the.truth-3PL
“When they speak, they tell the truth.” <cit.>
The first line (transcription line) contains the text in the language being documented, and may be segmented into morphemes (as here). The second line (gloss line) provides a gloss for each morpheme in the transcription. Glosses may indicate grammatical function or a translation of the morpheme (for stems). The third line contains a translation into a high-resource language such as English. Producing each of these lines requires knowledge of the language and/or skilled linguistic analysis. Generally, automated IGT systems are trained to predict the gloss line given the transcription line (and sometimes the translation, as in <cit.>). The primary aim of such systems is to assist a human annotator, providing suggestions for common morphemes that are often glossed with the same label. These systems are not intended to replace human annotators, who are vital to the documentation process, annotating novel morphemes and interesting linguistic phenomena, as well as verifying automatically-produced labels. §.§ LLMs for Rare Languages Though LLMs generally have limited understanding of rare and low-resource languages <cit.>, they can often achieve significantly better performance through crosslingual in-context learning (X-ICL), where a number of examples in the target language are provided directly in the prompt to a multilingual model <cit.>. We study X-ICL methods for using LLMs for the task of IGT generation, including complete IGT examples in the prompt. We hypothesize that this approach will leverage both the set of labeled training examples and the robust multilingual knowledge of the language model. In particular, we explore the effects of including an increasing number of examples in context (<ref>) and using different strategies to select relevant examples (<ref>). §.§ Related Work A number of approaches have been used for IGT generation. <cit.> uses a maximum entropy classifier and represents the earliest work describing benefits of using automated glossing systems. A number of papers <cit.> use statistical classifiers such as conditional random fields. Recent research explores neural models such as recurrent neural networks and transformers <cit.>. Other approaches improve glossing performance using crosslingual transfer <cit.>, hard attention <cit.>, and pseudolabeling <cit.>. IGT data is not only useful for preservation and revitalization projects, but also for downstream tasks such as machine translation <cit.>, developing linguistic resources like dictionaries <cit.> and UMR (Uniform Meaning Representation) graphs <cit.>, studying syntax and morphology <cit.>, and dependency parsing <cit.>. Given the cost and difficulty of obtaining IGT data, research has explored methods to scrape it from LaTeX documents <cit.> and even images <cit.>. Finally, work has attempted to standardize IGT conventions and formats, balancing consistency and expressiveness across languages <cit.>. § METHODOLOGY We study the IGT generation task described in <cit.>.
Given a transcription line and translation line, systems must predict the gloss line. We focus on the closed track setting, where the input words are not segmented into morphemes. This task is strictly more difficult than the setting where words are already segmented, as models must jointly learn segmentation and gloss prediction. As reported in <cit.>, the SOTA on this task remains far weaker than the setting with segmented inputs, with up to a 40 point discrepancy in SOTA performance. §.§ Data We use the IGT corpora and splits from the 2023 SIGMORPHON Shared Task <cit.>, allowing us to directly compare to several other systems. We use the languages described in <ref>. We primarily focus on the lower-resource languages from the shared task, where neural methods tended to struggle due to limited training data. We use the data as formatted by <cit.>. §.§ Evaluation We evaluate using the same metrics as the shared task. We primarily report morpheme accuracy, which measures how many morpheme glosses match between the predicted and true glosses. Any predicted glosses beyond the length of the true gloss string are ignored. §.§ Models We run preliminary experiments using Cohere's Command R+ model,[<https://docs.cohere.com/docs/command-r-plus>] a 104B parameter instruction-tuned language model with 128K token context that is designed for multilingual tasks. §.§ Prompting Though the exact prompt varies from experiment to experiment, all runs use the same base prompt, included in <ref>. In the system prompt, we define the IGT generation task and desired output format and provide additional information such as the language and list of possible glosses. In the user prompt, we provide few-shot examples (if any) and the target example to be glossed. We run each experiment three times with temperature 0 and a different random seed, ensuring both the retrieval strategy and model API calls are reproducible. We report the average and standard deviation for performance. § MANY-SHOT PROMPTING Few-shot prompting, where a model is provided with a small number of examples in the context, has proven very effective at a variety of tasks <cit.>. Furthermore, as model context lengths have continued to increase, it has become possible to provide hundreds or even thousands of examples, and performance typically continues to improve <cit.>. On the other hand, increasingly long prompts bear a high cost, and strategies to retrieve relevant examples can often achieve similar performance at a fraction of the cost (see <ref>). §.§ Experimental Settings For all experiments, we run two settings, one with just the base task description, and one where we include a list of possible glosses for functional morphemes. We scrape this list of glosses from all of the seen glosses in the training set. We instruct the model to only use these glosses for functional morphemes (while stem morphemes should still be glossed with their translation). We refer to this setting as [+ Glosslist], with an example gloss list in <ref>. For each language, we experiment with a varying number of examples. For all languages except Gitksan, we run experiments providing no examples (zero-shot) and 1, 2, 3, 5, 10, 30, 50, and 100 examples. Gitksan has fewer than 100 training examples, so we use all 74 for the final setting. For each example in our eval set, we randomly sample examples from the training set to be included in the prompt. In <ref>, we compare this strategy to more intentional retrieval strategies that aim to select relevant examples.
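For concreteness, the morpheme accuracy metric described in the Evaluation subsection above can be sketched in a few lines of Python. The tokenization used below (words split on whitespace, morpheme glosses split on dashes) and the position-wise comparison are our reading of the metric, not the shared task's official scoring script.

def morpheme_accuracy(predicted: str, gold: str) -> float:
    """Fraction of gold morpheme glosses matched by the prediction.
    Glosses within a word are separated by dashes and words by spaces;
    predicted glosses beyond the length of the gold gloss string are ignored."""
    def to_morphemes(gloss_line: str) -> list:
        morphemes = []
        for word in gloss_line.strip().split():
            morphemes.extend(word.split("-"))
        return morphemes

    gold_morphemes = to_morphemes(gold)
    pred_morphemes = to_morphemes(predicted)
    if not gold_morphemes:
        return 0.0
    # Compare position by position; extra predicted glosses are ignored by zip.
    correct = sum(p == g for p, g in zip(pred_morphemes, gold_morphemes))
    return correct / len(gold_morphemes)

# Example with the Arapaho sentence from the Background section:
gold = "this when.PAST-speak-3PL IC.tell.the.truth-3PL"
pred = "this when.PAST-speak-3PL tell.the.truth-3PL"
print(round(morpheme_accuracy(pred, gold), 3))  # 0.833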
§.§ Results We report results for our languages in <ref>, with a full table of results provided in <ref>. Generally, we see that the model has very weak performance in the zero-shot setting, indicating that the model has little knowledge of our chosen languages. In some cases, the zero-shot experiments produce results that are not even in the desired output format. Performance improves drastically for the first few shots added, showing smaller improvements as the number of shots increases. For Gitksan, performance levels off as the number of provided examples approaches the full training set. For the other languages with much larger training sets, performance shows continued improvement even around 100 shots, supporting the findings of <cit.>. We suspect that this trend would continue to some extent, but the cost of providing hundreds of examples quickly becomes infeasible. Relationship between Shots and Accuracy What sort of shape is formed by the curve in <ref> and <ref>? The relationship appears to be roughly logarithmic, starting steep and leveling off. To quantify this relationship, we take the log(# shots+1) for each setting.[Adding 1 so the zero-shot setting is defined.] <ref> shows the transformed curve for Gitksan, which now shows a strong linear relationship. We compute the R^2 value over all settings and report it in <ref>. We observe extremely strong correlation values across all settings. This indicates that the logarithmic model is a good fit for the data, and predicts that maintaining steady performance improvements requires exponentially more examples. Effect of Gloss List We initially hypothesized that providing a complete list of possible glosses in the prompt could help the model better adhere to the desired glossing conventions. We report a summary plot of the difference in accuracy between the two settings across languages in <ref>. The average difference is close to 0, well within a standard deviation in all cases, and thus there is little evidence to suggest that including the gloss list meaningfully affects performance. A possible explanation is that since the model has very limited prior knowledge of these languages, providing a list of glosses without any explanation or examples does not provide any useful information. To investigate whether including a gloss list changes the predictions at all, we measure the adherence percentage. This metric is computed by dividing the number of predicted (functional) glosses that adhere to the gloss list by the total number of predicted glosses. We report the distribution over languages and settings in <ref>. We observe that including the gloss list in the prompt is effective for increasing adherence compared to the base setting. While the experiments without the gloss list vary widely, the experiments with it nearly always use glosses from the list. On the other hand, we have observed no evidence that the gloss list improves performance, suggesting that the model may be predicting glosses from the list randomly. Furthermore, including a gloss list in the prompt carries a fixed cost of several hundred tokens for every prompt (e.g. for Uspanteko, the cost is 124 tokens). Since it provides negligible benefit, we opt to omit the gloss list for future experiments in order to reduce cost. § RETRIEVAL STRATEGIES While including a large number of in-context examples can certainly improve performance, long prompts carry a high cost that may be infeasible for real-world documentation projects.
For example, running prompts with a thousand examples in Uspanteko costs roughly 10 cents per inference call, which can quickly add up over thousands of examples. Many LLMs still have limited context length, particularly among open-source models, and including many examples may not even be possible. Finally, <cit.> suggests that the effectiveness of many-shot prompting is mainly due to the model seeing relevant examples, and ignoring many irrelevant ones. With this in mind, we consider a method inspired by retrieval-augmented generation (RAG; <cit.>). RAG was originally used for knowledge-intensive tasks, using document embeddings to search for relevant documents to a given query and include them in prompt context. We apply a similar strategy in order to search for relevant IGT examples from our training corpus to include in our prompt. §.§ Experimental Settings We consider several strategies for selecting examples that are relevant for the target sentence. Random As a baseline, we use the random strategy from the prior section, which simply samples n examples randomly from the training corpus. Word Recall and Word Precision We hypothesize that a straightforward way to improve performance is by providing examples which have the same morphemes as the target sentence. Since our data is not segmented into morphemes, we instead look for matching words (which will nearly always be composed of the same morphemes). We split each example into words using whitespace, and compute the word recall for a target sentence T and candidate training sentence S. WordRecall = |unique(S) ∩ unique(T)| / |unique(T)| This computes the fraction of unique words in the target sentence that appear in the candidate sentence. We can also compute the word precision with a slightly modified formula: WordPrecision = |S ∩ unique(T)| / |S| This metric rewards examples where the majority of words in the candidate are in the target sentence. Notice that we do not use the unique words of S, instead weighting an example that uses the same word from T several times more heavily. We select the examples with the highest word recall or precision, considering each example independently and breaking ties randomly. Aggregate Word Recall One limitation of the prior approach is that by considering each candidate individually, we can potentially select several redundant examples in few-shot scenarios. Instead, we can compute the aggregate word recall over a candidate sample of n examples. S_agg = ⋃_i=1^n unique(S_i) AggWordRec = |S_agg ∩ unique(T)| / |unique(T)| This metric rewards samples that jointly cover more of the words in the target. This is equivalent to the Maximum Coverage Problem, and as such is NP-Hard <cit.>. We use the greedy algorithm, which runs in polynomial time <cit.>. chrF A limitation of the previous strategies is that, by only considering atomic words, there is no way to select examples that may contain the same morphological units. One way we can attempt to capture morphological similarity is through using substring similarity metrics such as <cit.> and <cit.>. These metrics compute the F-score of character n-gram matches (chrF++ also incorporates word n-grams), and have been shown to correspond more closely to human judgements for machine translation. Morpheme Recall Although we do not have segmented data, much research has explored methods to induce morphological segmentations from data in an unsupervised manner.
In particular, we use Morfessor <cit.>, a popular statistical method that seeks to find a segmentation that maximizes overall the probability of segmented words. We create silver segmentations using Morfessor and compute the recall metric as described earlier, but using morphemes rather than words. We train the segmentation model use the default parameters on the training data and use Viterbi inference to segment test examples. We use the Morfessor 2.0 library <cit.>. §.§ Results We report results across our four languages and six retrieval strategies in <ref>. We run tests using 1, 2, 5, 10, 30, and 50 examples in each prompt. Comparison with Random Retrieval Across all languages, we observe clear and significant improvements over the random selection method described in the prior section (here indicated with a gray line). This is the case both with a small number of fewshot examples and as the number grows large. The only exception is the 50 example setting for Gitksan, at which point the provided examples make up a large fraction of the training corpus. This is an intuitive result, as the IGT generation task requires, at minimum, knowledge about the words of a language and their potential glosses. Even a simple baseline that glosses tokens with their most common gloss from the training set is often fairly effective <cit.>. This is particularly important since the LLM used seems to have very limited prior knowledge of the language, as evidenced by the poor zero-shot performance. Relationship between Shots and Accuracy As before, we generally see consistently improving performance as additional examples are added. However, there are several cases where performance drops going from 30 to 50 shots, as in Gitksan (Word Precision, Max Coverage, and Morpheme Recall) and Lezgi (chrF Score). Both of these languages have fairly small corpora, and it is possible that after a point these strategies run out of beneficial examples, and any additional examples simply contribute noise to the prompt. Effect of Different Granularities Many of the strategies perform very similarly, but there are some observable trends across granularity levels (word, morpheme, and substring). We observe that the chrF strategy is nearly always the most effective, outperforming the word- and morpheme-based strategies in most cases. We hypothesize that this strategy strikes a balance by selecting examples with subword similarity, but not introducing error due to noisy morpheme segmentations. Word Recall vs Morpheme Recall We observe mixed results across the Word Recall and Morpheme Recall strategies. We observe a few settings where there appears to be a significant gap between the two (Gitksan at 30 shots; Lezgi at 50 shots), but generally the strategies are close. It is possible that the words in our evaluation examples often either are monomorphemic, or contain a combination of morphemes already observed in the training data, and thus selecting relevant examples according to morphemes has little benefit. Word Recall vs Word Precision While the Word Recall and Word Precision strategies both seek to quantify the word-level similarity between the target and candidate sentences, they are computed slightly differently and produce different results. The Word Recall strategy prioritizes candidate sentences that contain a large fraction of the word types in the target sentence, ignoring repeated words. 
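The word-level selection strategies defined in the Experimental Settings above reduce to a few set operations over whitespace-tokenized sentences. The sketch below shows one possible implementation of Word Recall, Word Precision, and the greedy approximation to Aggregate Word Recall; the function names, tie-breaking, and stopping behaviour are our own choices rather than the authors' code, and the chrF strategy can be scored analogously with an off-the-shelf chrF implementation.

import random

def word_recall(target: str, candidate: str) -> float:
    t, c = set(target.split()), set(candidate.split())
    return len(c & t) / len(t) if t else 0.0

def word_precision(target: str, candidate: str) -> float:
    t = set(target.split())
    c = candidate.split()  # not deduplicated: repeated matching words count more
    return sum(w in t for w in c) / len(c) if c else 0.0

def greedy_aggregate_recall(target: str, corpus: list, n: int) -> list:
    """Greedy approximation to the (NP-hard) maximum-coverage selection."""
    uncovered = set(target.split())
    chosen, remaining = [], list(corpus)
    for _ in range(min(n, len(remaining))):
        random.shuffle(remaining)  # break ties randomly
        best = max(remaining, key=lambda s: len(set(s.split()) & uncovered))
        chosen.append(best)
        remaining.remove(best)
        uncovered -= set(best.split())
    return chosen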
Meanwhile, the Word Precision strategy selects candidates based on the fraction of words within the candidate that are also in the target. The Word Recall strategy consistently outperforms Word Precision, except for the two largest settings in Gitksan. This indicates that it is more important to provide examples which cover the words in the target than it is to provide several examples for a single word. Word Recall vs Max Word Coverage We experimented with the Max Word Coverage setting, where we consider the recall of the selected set of candidates as whole, rather than individually. We observe minimal benefits, in fact underperforming the Word Recall setting in many cases. § COMPARISON WITH SOTA Finally, we compare our best-performing strategies from the prior section with several previous baseline methods: * The token classification transformer model of <cit.>, which uses an encoder model to predict glosses word-by-word * Tü-CL from <cit.>, which uses hard attention to induce latent segmentations and predict glosses on segmented words For the LLM-based method, we select the chrF strategy and test with 30 examples for Gitksan and 100 examples for the other languages. We make some small prompt optimizations described in <ref>, and raise the temperature to 0.2. We use the following language models: * Cohere's Command R+, which was used for preliminary experiments. * OpenAI's GPT-4o, specifically the checkpoint <cit.> * Google's Gemini 1.5 Pro <cit.> We run evaluation on the held out test set and report results in <ref>. §.§ Discussion We observe that the LLM based glossing strategies outperform a simple transformer in all languages, despite using no training whatsoever and using a small fraction of the training set as examples. Of the LLM models, Gemini performs best on three languages. However, we note that Gemini refuses to produce answers for many examples, which we count as completely wrong. If we omit such examples, Gemini's performance is even higher, achieving 55.9%, 50.8%, and 63.9% accuracy on Lezgi, Natugu, and Uspanteko respectively. On the other hand, the LLM methods typically underperform the SOTA method of <cit.>, except for Gitksan, where the best LLM (Gemini) outperforms by 6.5 points. The <cit.> approach explicitly models segmentation through a learned latent representation, which our strategy does not utilize. Future work with LLM-based methods could explore an analogous process, explicitly prompting the LLM to generate segmentations before producing final glosses. Furthermore, these methods will likely continue to improve as LLMs become more capable for rare (or even completely unseen) languages, as measured by benchmarks such as <cit.>. Most trivially, as LLMs with increasingly long contexts are developed, we can provide more examples in-context, which our results indicate will continue to provide benefits. § CONCLUSION We find that SOTA large language models struggle to produce interlinear glosses for the endangered languages used in our research. However, by selecting relevant examples from a training corpus and providing them as part of the context for each example to be glossed, we can significantly improve performance. We find that the relationship between performance and the number of few-shot examples is roughly logarithmic. Performance improves by a wide margin when we select examples with a high chrF++ score relative to the target sentence. 
Our best systems outperform a standard transformer model, despite involving no explicit training and using a fraction of the training data. However, they still underperform the SOTA system for the glossing task on three out of four languages. Thus, for documentary linguists hoping to use automated glossing solutions, the use of LLMs may not achieve ideal accuracy. At the same time, LLMs may still be a preferable choice for languages with very limited data comparable to Gitksan, and the use of an API is often far more accessible than training and hosting a neural model. Our results encourage further exploration of this approach. § LIMITATIONS While we have selected a small set of languages that we believe give insight into the performance of automated glossing systems, they are certainly not representative of all the world's languages. In particular, LLMs may struggle more with languages that use non-Latin writing scripts <cit.>. We use a single prompt template for the majority of experiments and do not conduct extensive prompt engineering. Frameworks such as DSPy <cit.> have shown that prompt optimization can often greatly improve performance, so it is entirely possible that we could achieve better performance on this problem with the same models and strategies. We evaluate three popular closed-source LLMs, but results may vary across other models. In particular, we have not yet considered open-source, local LLMs due to resource constraints. § ETHICS STATEMENT As our work involves documentation data produced through the combined efforts of documentary linguists and speakers of endangered languages, we strive to respect their desires and avoid treating data as merely a resource to train models with <cit.>. We do not intend for automated glossing systems to replace human annotators, which would drastically impact the quality, novelty, and utility of annotated corpora, but rather to serve as a tool available to support documenters. Finally, we acknowledge that the use of large language models carries a high environmental cost, and make efforts to minimize unnecessary API calls and to track our usage. § PROMPT FORMAT We use the following prompts for our preliminary experiments. The blue placeholders are replaced with the appropriate values. The system prompt is as follows. You are an expert documentary linguist, specializing in (*@$language@*). You are working on a documentation project for (*@$language@*) text, where you are creating annotated text corpora using the interlinear glossed text (IGT) and following the Leipzig glossing conventions. Specifically, you will be provided with a line of text in (*@$language@*) as well as a translation of the text into metalang, in the following format. Transcription: some text in (*@$language@*) Translation: translation of the transcription line in (*@$metalang@*) You are to output the gloss line of IGT. You should gloss stem/lexical morphemes with their translation in (*@$metalang@*), and gloss gram/functional morphemes with a label indicating their function. Please output the gloss line in the following format: Glosses: the gloss line for the transcribed text Glosses should use all caps lettering for functional morphemes and standard lettering for stem translations. Glosses for morphemes in a word should be separated by dashes, and words should be separated by spaces. The main prompt is as follows: Here are some complete glossed examples: (*@$fewshot_examples@*) Please gloss the following example in (*@$metalang@*).
Transcription: (*@$transcription@*) Translation: (*@$translation@*) For zero-shot prompts, we remove the first sentence of the main prompt. Furthermore, from qualitative analysis, we observe that the LLM sometimes pulls words from the translation to use as glosses, resulting in incorrect examples. Thus, for the final test, we omit the translation lines from both prompts. § EXAMPLE GLOSS LIST We provide an example list of glosses for Gitksan. There are some formatting artificats, due to the automatic extraction of glosses. #(PROSP), (#COMP), (#PROSP), 1.I, 1.SG.=, 1PL.II, 1SG, 1SG.II, 2SG, 3.I, 3.II, 3.III, 3PL, 3PL.II, 3PL.INDP, 3SG.II, ANTIP, AX, CAUS1, CAUS2, CCNJ, CN, CNTR, COMP, CONNN, DEM.PROX, DES, DISTR, DM, DWID, EPIS, FOC, FUT, FUT=3, IBM, INCEP, INS, IPFV, IPFV=EPIS=CN, IRR, IRR=3, LOC, LOC=CN, LVB, MANR, NEG, NEG=FOC, NEG=FOC=3, NMLZ, OBL, PART, PASS, PCNJ, PN, PR.EVID, PREP, PREP=CN, PROG=CN, PROG[=CN], PROSP, PROSP=3, PROSP=3.I, REAS, SELF, SG, SPT, SX, T, T=PN, TR, TR=CN, TR=PN, VAL, VER, VERUM, [#(PROSP), [(#COMP), [(PROSP), [PROG=CN, [PROSP § FULL RESULTS We present full results across all of our experimental settings in <ref>.
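To illustrate how the placeholders in the prompt templates above might be filled at inference time, the following sketch uses Python's string.Template. The field names and the call_llm stand-in are illustrative assumptions on our part; the actual experiments were run through the Cohere, OpenAI, and Gemini client libraries.

from string import Template

MAIN_PROMPT = Template(
    "Here are some complete glossed examples:\n$fewshot_examples\n\n"
    "Please gloss the following example in $metalang.\n"
    "Transcription: $transcription\nTranslation: $translation"
)

def format_example(ex: dict) -> str:
    return (f"Transcription: {ex['transcription']}\n"
            f"Translation: {ex['translation']}\n"
            f"Glosses: {ex['glosses']}")

def build_user_prompt(target: dict, fewshot: list, metalang: str = "English") -> str:
    return MAIN_PROMPT.substitute(
        fewshot_examples="\n\n".join(format_example(e) for e in fewshot),
        metalang=metalang,
        transcription=target["transcription"],
        translation=target["translation"],
    )

# The resulting string would be sent, together with the system prompt, to
# whichever chat-completion API is being evaluated (call_llm is a stand-in).
# response = call_llm(system_prompt, build_user_prompt(target, retrieved_examples))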
http://arxiv.org/abs/2406.19299v1
20240627161522
PNeRV: A Polynomial Neural Representation for Videos
[ "Sonam Gupta", "Snehal Singh Tomar", "Grigorios G Chrysos", "Sukhendu Das", "A. N. Rajagopalan" ]
cs.CV
[ "cs.CV" ]
§ ABSTRACT Extracting Implicit Neural Representations (INRs) on video data poses unique challenges due to the additional temporal dimension. In the context of videos, INRs have predominantly relied on a frame-only parameterization, which sacrifices the spatiotemporal continuity observed in pixel-level (spatial) representations. To mitigate this, we introduce Polynomial Neural Representation for Videos (PNeRV), a parameter-wise efficient, patch-wise INR for videos that preserves spatiotemporal continuity. PNeRV leverages the modeling capabilities of Polynomial Neural Networks to perform the modulation of a continuous spatial (patch) signal with a continuous time (frame) signal. We further propose a custom Hierarchical Patch-wise Spatial Sampling Scheme that ensures spatial continuity while retaining parameter efficiency. We also employ a carefully designed Positional Embedding methodology to further enhance PNeRV's performance. Our extensive experimentation demonstrates that PNeRV outperforms the baselines in conventional Implicit Neural Representation tasks like compression along with downstream applications that require spatiotemporal continuity in the underlying representation. PNeRV not only addresses the challenges posed by video data in the realm of INRs but also opens new avenues for advanced video processing and analysis. § INTRODUCTION Implicit Neural Representations (INRs) have become the paradigm of choice for modelling discrete signals such as images and videos using a continuous and differentiable neural network, for instance, a multi-layered perceptron. They facilitate several important applications like super-resolution, inpainting, and denoising <cit.> for images. They offer various important benefits over discrete representations, particularly in terms of being agnostic to resolution. Recent advancements have extended INR to video signals, but early methods relied on utilizing three-dimensional spatiotemporal coordinates (x,y,t) as input and RGB values as outputs. Such straightforward extensions of INRs to videos are inefficient during inference since they need to sample T × H × W times to reconstruct the entire video. For high resolution videos, this behavior becomes more prominent. [Figure: PNeRV when compared to its counterparts: (a) NeRV: An INR for videos with only frame-wise parameterization that leads to loss of spatial continuity. (b) E-NeRV: A step-up over NeRV with a parameterization that employs a fixed Spatial Context (SC). The fixed SC does not support spatial continuity. (c) PNeRV: An efficient INR for videos with a PNN backbone (signified by the usage of the Hadamard Product ⊙) that supports varying SC while retaining spatial continuity.] Also, a simple multi-layered perceptron is unable to model the complex spatio-temporal relationship in video pixels well. To address this issue and maintain parameter efficiency, current state-of-the-art methods in the field use a frame-only parameterization as depicted in Fig. <ref> (a) and (b). These representations take the time index of a frame as input and predict the entire frame as output. Although state-of-the-art INRs on video data exhibit impressive results on tasks such as video denoising and compression, they suffer from two fundamental issues.
Firstly, the lack of spatial parameterization renders the representation less suitable for conventional INR applications such as video super-resolution. Secondly, they are not equipped to capture the information pertaining to pixel-wise auto and cross correlations across time explicitly. Hence, resulting in a suboptimal metric performance to model size ratio. Only recently, <cit.> have attempted to explore a spatiotemporally continuous neural representation based hypernetwork for generating videos. However, their approach and the tasks they enable are fundamentally different[We highlight these differences in section <ref>.]to ours. We utilize the following key insights to build a spatiotemporally continuous Neural Representation while keeping the model size in check: (1) Achieving spatiotemporal continuity doesn't always require dense per-pixel sampling. A well-designed patch-wise sampling approach <cit.> can yield comparable results for downstream tasks while processing less data. (2) To achieve better efficiency in handling higher-dimensional inputs with fewer learnable parameters and maintaining performance, we consider using Polynomial Neural Networks (PNNs) <cit.> as our preferred function approximator. PNNs model the auto and cross correlations within their input feature maps. (3) We also propose a Positional Embedding (PE) methodology to aid the PNN backbone in learning a faithful representation using the sampled inputs. Carefully designed PEs <cit.> are proven to boost the performance of Deep Neural Networks. In this work, we enhance INRs for videos along the following three directions. Firstly, we adopt a temporal as well as spatial parameterization (illustrated in Fig. <ref>(c)) in our light-weight representation. We achieve this by replacing the dense pixel-wise spatial sampling with a carefully designed Hierarchical Patch-wise Spatial Sampling approach. Our scheme (elaborated upon in section <ref>) breaks a video frame into patches and samples coordinates from sub-patches in a recursive fashion across different levels of hierarchy. Secondly, we leverage the properties of PNNs to build a parameter-wise efficient decoder backbone that yields better metric performance. PNeRV also inherits some important properties of PNNs such as robustness to the choice of non-linear activation functions. Finally, we improve the positional embedding of input signals to align well with our PNN backbone and achieve peak metric performance. Our claims are backed by consistent qualitative and quantitative results on video reconstruction and four challenging downstream tasks i.e. Video Compression, Super-Resolution, Frame Interpolation, and Denoising. The key contributions of this paper can be summarized as: * We introduce a Hierarchical Patch-wise Spatial Sampling approach in our formulation which makes PNeRV continuous in space and time while retaining parameter efficiency. * We design a PNN for temporal signals. We build a Higher order Multiplicative Fusion (HMF) module that learns parametric embedding. * We propose a new positional embedding scheme to encode and fuse spatial and temporal signals. The scheme brings together both parametric (learnable) and functional (deterministic) embeddings, a first in Neural Representations for videos. We show that both the embeddings complement each other to align well with the PNN based backbone and attain peak metric performance. § RELATED WORK Implicit Neural Representations. 
INR is a method to convert conventionally discrete signal representations such as images (discrete in space) and videos (discrete in space and time) into continuous representations. Originally motivated as an alternate representation for images <cit.>, INR has been pushing the envelope in terms of performance on a wide array of tasks on images such as denoising and compression <cit.>. INR for videos extends INR for images by a simple reparameterization in terms of video-frame indices as well <cit.>. The approach of choice for such architectures entails learning an embedding for pixels and timestamps, which are passed on to a decoder network. To expedite model training and inference with large video tensors in such INR formulations, state-of-the-art literature in INR for videos <cit.> has introduced parameterization over frame indices only. While such formulations are lighter and faster, they compromise spatial continuity. We aim to bring the best of both these formulations together in this work by employing a parameterization over patches as well as frame indices, with a PNN backbone. Consequently, the spatial continuity achieved while keeping model parameters in check is an essential attribute for a faithful INR and is critical for applications such as super-resolution. <cit.> have recently attempted to build a spatiotemporally continuous INR-based hypernetwork for generating videos. Their proposed method differs from ours in two key aspects. First, theirs is a video generation pipeline and the INR is only a component of their model, whereas ours is a vanilla INR that serves as an alternate representation for videos while enabling interesting downstream tasks. Second, since their model is a hypernetwork, it is not well equipped to tackle high-resolution videos such as the ones found in the UVG dataset <cit.>. The authors attribute this behaviour to the unstable training routines of large hypernetworks. Polynomial Neural Networks (PNNs). PNNs model their outputs as a higher-degree polynomial of the input. A full polynomial expansion can be expressed as follows <cit.>: y = σ(W_1^T z + z^T W_2 z + 𝒲_3 ×_1 z ×_2 z ×_3 z + ... + β), where y, z, σ, and β represent the output, input vector, non-linear activation, and bias. 𝒲_i represents the weight tensor for the i^th order, and ×_i represents the mode-i product[Defined in appendix <ref>.]. The PNN paradigm's elegance lies in the utilization of tensor factorization techniques to prevent an exponential increase in model parameters with an increase in the polynomial order. We examine only the Nested Coupled CP Decomposition (NCP)[Definition adopted from <cit.>.] since our model implementation is based on its sequential polynomial expansion. Considering a 3^rd order polynomial governed by Eq. <ref>, the decomposed forward pass can be expressed as the following recursive relationship: x_n = (A_[n]^T z) ⊙ (S_[n]^T x_n-1 + B_[n]^T b_[n]), for n ∈ {2, 3}. Here y = C x_3 + β is the output of the 3^rd order polynomial, ⊙ represents the Hadamard product, and x_1 = (A_[1]^T z) ⊙ (B_[1]^T b_[1]). The learnable parameters in this setup are C ∈ ℝ^o × k, A_[n] ∈ ℝ^d × k, S_[n] ∈ ℝ^k × k, B_[n] ∈ ℝ^e × k, b_[n] ∈ ℝ^e, and β ∈ ℝ^o. The symbols d, o, e, and k represent the decomposition's input dimensions, output dimensions, implicit dimension, and rank. The rise of PNNs has seen their application to an array of important deep learning regimes such as generative models <cit.>, attention mechanisms <cit.>, and classification models <cit.>.
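To make the NCP recursion above concrete, the following PyTorch sketch implements a third-degree polynomial block in the same spirit. Folding the bias terms into nn.Linear layers and the particular layer sizes are our simplifications, not the factorization used in PNeRV.

import torch
import torch.nn as nn

class NCPPolynomial(nn.Module):
    """Third-degree polynomial of the input via an NCP-style recursion:
    x_n = (A_n z) * (S_n x_{n-1} + b_n), with the output y = C x_N."""

    def __init__(self, in_dim: int, out_dim: int, rank: int, degree: int = 3):
        super().__init__()
        self.A = nn.ModuleList([nn.Linear(in_dim, rank, bias=False) for _ in range(degree)])
        self.S = nn.ModuleList([nn.Linear(rank, rank) for _ in range(degree - 1)])
        self.C = nn.Linear(rank, out_dim)

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        x = self.A[0](z)                  # first-order term
        for A_n, S_n in zip(self.A[1:], self.S):
            x = A_n(z) * S_n(x)           # each Hadamard product raises the degree by one
        return self.C(x)

# Example: map a 160-d fused embedding to a 324-d feature with a degree-3 polynomial.
block = NCPPolynomial(in_dim=160, out_dim=324, rank=64)
print(block(torch.randn(8, 160)).shape)  # torch.Size([8, 324])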
However, their direct application to temporal signals has not emerged, and they have only been used in a single variable setup in unconditional modeling regimes. PNeRV builds along these new directions in its INR decoder and HMF. Rich Positional Embeddings. PEs based on a series of sinusoidal functions much like the Fourier series, have become an integral part of INRs. Several works <cit.> have shown that in the absence of such embeddings, the output of the INR is blurry i.e. misses the high frequency information. Thus, PEs enable INRs to capture fine-details of a signal making them indispensable for image applications <cit.>. INR methods for videos have also sought to capitalize upon the advantages of an efficient PE <cit.>. However, state-of-the-art in the domain <cit.> has only explored functional (deterministic) embeddings in one input variable. In contrast, PNeRV employs both parametric (learnable) and functional embeddings. We also introduce a PNN based fusion strategy to combine the functional and parametric embeddings. § PNERV: POLYNOMIAL NEURAL REPRESENTATION FOR VIDEOS Overview: Let us now introduce our method. The notation and definitions for the various elements used in this section is summarized in Table <ref>. We denote tensors by calligraphic letters, matrices by uppercase boldface letters and vectors by lowercase boldface letters. To enable spatial continuity while keeping the model size in check, we propose a Hierarchical Patch-wise Spatial Sampling approach for the input coordinates. As shown in Fig. <ref>, the PNeRV architecture comprises three key components, namely, a Positional Embedding Module, an Embedding Fusion Block, and the PNN-based INR decoder. Each frame v_t in an input video V = {v_t}_t=1^T is recursively divided into coarse patches and fine sub-patches. Coordinates sampled from both the patch and sub-patch instances along with their respective frame index (t) serve as inputs to the INR decoder. In nutshell, the PNeRV formulation can be represented as: P_ij = F_Θ(Λ_ij, λ_ij, t), where, F_Θ denotes the complete PNeRV model (having parameters Θ). As defined in Table <ref>, Λ_ij denotes a fine coordinate Tensor, λ_ij is a coarse patch coordinate, and t is the frame index. We present a detailed discussion on each of our model's constituent elements in the subsections that follow. r0.5 < g r a p h i c s > Hierarchical Patch-wise Spatial Sampling: (a) A Global coordinate grid 𝒞 with input values normalized to range [0,1] is constructed for each frame. (b) The grid is divided into M × N coarse patches of equal size. For a coarse patch P_ij, its centroid is used as a 2D coordinate λ_ij. (c) Each coarse patch is further divided into K × L fine patches and a collection of the centroids of these smaller patches is used as the fine patch coordinate tensor Λ_ij. §.§ Hierarchical Patch-wise Spatial Sampling State-of-the-art methods <cit.> have drifted away from a spatial parameterization of their representation to ensure faster inference. They resort to a temporal-only parameterization. In contrast, PNeRV uses a spatiotemporal parameterization whilst having fewer parameters by employing our efficient sampling approach (depicted in Fig. <ref>). We observed that a pixel-wise formulation increases the computational complexity manifold. Hence, we opt for a hierarchical patch-wise formulation. A primitive method to sample spatial patch coordinates would be to assign a scalar coordinate to each patch (similar to frame indices). 
However, the pitfalls of such an approach are twofold. Firstly, scalar patch indices lack spatial context. They do not convey any sense of spatial localization. Secondly, PEs obtained from scalars have a lower variance, which is not ideal for training. Our analysis in Table <ref> underscores these pitfalls. We have designed our sampling strategy to enrich the input to our INR decoder with spatial information of the patches. Instead of associating just a scalar index to each patch, we associate each patch P_ij with a coarse 2D index λ_ij and a fine index Λ_ij ∈ ℝ^K × L × 2. The process of computing λ_ij and Λ_ij is illustrated in Fig. <ref>. Like traditional INRs <cit.>, we first build a global coordinate grid 𝒞 of size H × D normalized to the range [0,1] (Fig. <ref> (a)). Next, each frame is divided into M × N coarse patches. The coordinates λ_ij for these coarse patches P_ij are found by computing their centroids (Fig. <ref> (b)). Further, each coarse P_ij is divided into K × L fine sub-patches. The K × L × 2 dimensional tensor formed by the centroids of each of these sub-patches is used as the fine coordinates of P_ij (Fig. <ref> (c)). It is imperative to note that, although we divide a frame into patches, the normalized coordinate values are sampled from 𝒞 in all cases for the computation of centroids. In effect, the manner in which the patch coordinates are sampled in our scheme is hierarchical in nature. This ensures a sense of spatial locality in all patches. Intuitively, the coarse coordinate captures a global context whilst the fine coordinates of a patch capture the local context. Algorithm <ref> in Appendix <ref> summarizes hierarchical patch-wise spatial sampling. §.§ Positional Embedding Module Literature on INRs <cit.> dictates that rich positional embeddings (PEs) are central to the performance of INR methods. Fourier-series-like PEs are positively correlated with the network's ability to capture high frequency information <cit.>. Although the field has witnessed several advances toward the development of optimal functional (fixed) embeddings of signals and their parametric (learnable) fusion, functional fusion and parametric embeddings remain underexplored. In this work, we exploit the combination of functional PEs, parametric PEs, functional PE fusion, and parametric PE fusion to learn a superior INR for videos. [Figure: The HMF architecture at a glance: All linear transformation matrices represent the terms in Eq. <ref>. Here, ⊙ denotes the Hadamard Product, ⊕ represents feature addition, black arrows represent inputs, and blue arrows represent the fused entities.] We propose an embedding scheme wherein we perform a temporal functional embedding in t, a spatial embedding via functional fusion, and a parametric (multiplicative) fusion of all PEs to yield a rich spatiotemporally aware PE. We elaborate upon each of our embeddings and their parametric fusion in the sections that follow. Positional Encoding of Frame Index (FPE) Given a frame index t, normalized between [0,1] as input, we adopt the widely used Fourier-series-based positional encoding scheme similar to the existing methods <cit.>. This embedding is given as: Γ_FPE(t) = [sin(πν^i t), cos(πν^i t), ...]_i=0^l-1, where ν denotes the frequency-governing hyperparameter and l governs the number of sinusoids. Parametric Embedding of Fine Coordinates (PPE) We employ a parametric positional embedding scheme (PPE) to encode the spatial context available in the fine patch coordinates given by the tensors Λ_ij.
The PPE block in Fig. <ref> illustrates the same. First, Eq. <ref> is applied to each element of Λ_ij to map it to ℝ^1 × 2 dimensional vectors. These resultant embeddings are arranged side by side in spatial order to obtain a feature map of size ℝ^K × L × 4. Notice that each value in the K × L grid has a 2D coordinate value corresponding to x and y. Eq. <ref> is applied individually to the x and y coordinates and the resulting vectors are fused across the channel dimensions. Resulting in a channel dimension of 4l. To merge these features we use a Non-Local Block <cit.> followed by a linear layer. This spatially aware attention based fusion mechanism encourages a weighted feature fusion between various spatial regions where the weights are governed by the Non-Local Block. We refer to this parameterized embedding as Γ_PPE(Λ_ij). Time Aware Spatial Embedding (TSE) A video can be seen as time modulated spatial signal. Therefore, ideally, the spatial positional embedding should be dependent on the frame-index (time) as well as patch coordinates. To this end, we design a Time Aware Spatial Embedding which is inspired from Angle modulation. In analog communication, Angle Modulation refers to the technique of varying a carrier signal's phase in accordance with the information content of a modulating signal. The general expression for the same is given by y_c(t) = Amp_c {cos(2 π f_c t) + ϕ (cos(2 π f_m t))}, where, y_c is the modulated signal, Amp_c is the amplitude of the carrier signal, ϕ(.) is the phase governing function. f_c and f_m are the frequencies of the carrier and modulated signals, respectively. We design the embedding to perform functional fusion of λ_ij and t. We model a video as a time (t) modulated spatial signal (λ_ij). The proposed embedding (denoted by Γ_TSE) is governed by Eqs. <ref> and <ref>. Γ_TSE(λ_ij,t) = [cos(Ω_ij^αt) sin(Ω_ij^αt) ...]_α = 0^-1, wherein, Ω_ij^α = 2 πβ^α + sin(2 πλ_xijβ^α)/β^α + sin(2πλ_yijβ^α)/β^α. Our ablations (Table <ref>) substantiate that functional fusion (Γ_TSE) complements parametric fusion of Γ_FPE(t) and Γ_PPE(Λ_ij) to boost performance. §.§ Embedding Fusion Block Effective fusion of all our positional embedding elements is critical to the performance of our method. We opt for a hybrid functional and parametric fusion module to bring together the positional embeddings obtained via the Γ_FPE(.), Γ_PPE(.), and Γ_TSE(.) functions. Our fusion mechanism is split over two stages. First Γ_FPE(t) and Γ_PPE(Λ_ij) are fused using our proposed Higher-order Multiplicative Fusion (HMF) block. Then, Γ_TSE(λ_ij,t) is added to the resulting vector, resulting in new embedding that acts as input to the INR decoder. Higher-order Multiplicative Fusion (HMF) We introduce the HMF which is a Nested-CoPE <cit.> inspired fusion mechanism, to fuse Γ_FPE(t) and Γ_PPE(Λ_ij). As shown in Fig. <ref>, HMF entails additive fusion of the linearly transformed fusion entities to capture first-order correlations. The additive fusion blocks are followed by a Hadamard product operation with the previous additive fusion output in a recursive fashion for three iterations. The recursive structure ensures that cross-correlations are captured well by the fused output. The fusion in effect translates to the following recursive relationship: _n = ((^T_[n,t]Γ_FPE(t) + ^T_[n,Λ_ij]Γ_PPE(Λ_ij)) ⊙_n-1) + _n-1, wherein, _1 = ^T_[1,Λ_ij]Γ_PPE(Λ_ij) + ^T_[1,t]Γ_FPE(t). Here, n ∈{2,3}, _3 represents the fused embedding (output of HMF block), and ⊙ represents Hadamard product. 
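A minimal PyTorch sketch of this multiplicative fusion is given below; the matrix names, the bias-free projections, and the embedding size are our assumptions, chosen to be consistent with the 2l × k projection shapes and the rank k = 160 reported next.

import torch
import torch.nn as nn

class HMF(nn.Module):
    """Higher-order Multiplicative Fusion of a time embedding and a patch embedding:
    f_1 = U_1t fpe + U_1p ppe;  f_n = ((U_nt fpe + U_np ppe) * f_{n-1}) + f_{n-1}."""

    def __init__(self, emb_dim: int, rank: int = 160, order: int = 3):
        super().__init__()
        self.time_proj = nn.ModuleList([nn.Linear(emb_dim, rank, bias=False) for _ in range(order)])
        self.patch_proj = nn.ModuleList([nn.Linear(emb_dim, rank, bias=False) for _ in range(order)])

    def forward(self, fpe: torch.Tensor, ppe: torch.Tensor) -> torch.Tensor:
        # First-order (additive) fusion, then repeated multiplicative refinement.
        fused = self.time_proj[0](fpe) + self.patch_proj[0](ppe)
        for t_proj, p_proj in zip(self.time_proj[1:], self.patch_proj[1:]):
            fused = (t_proj(fpe) + p_proj(ppe)) * fused + fused
        return fused

hmf = HMF(emb_dim=160, rank=160)
print(hmf(torch.randn(4, 160), torch.randn(4, 160)).shape)  # torch.Size([4, 160])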
The learnable parameters in HMF are _[n,T]∈ℝ^ 2l × k and _[n,Λ_ij]∈ℝ^ 2l × k. The rank of the decomposed weight matrices k, is taken to be 160. As highlighted in <cit.>, the adopted approach for fusing the frame-timestamps and patches has an advantage over a standard approach that employs concatenation followed by downsampling. In that, concatenation amounts to the additive format of fusion which fails to capture cross-terms in correlation. That is, multiplicative interactions of order 2 or more are essential for capturing both auto and cross-correlations among the entities to be fused. §.§ INR Decoder The literature on PNNs <cit.> has shown that stacking two or more polynomials in a multiplicative fashion leads to a desired order of the underlying polynomial with much lesser parameters. Such an approach is termed as ProdPoly (Product of Polynomials). As defined by <cit.>, a ProdPoly implementation entails the Hadamard product of outputs of sub-modules in the architecture to obtain a higher order polynomial in the input. Since the order of a polynomial is directly correlated with its modelling capabilities, the ProdPoly approach is suitable for designing our lightweight INR decoder. The proposed INR decoder is a modified derivative of the ProdPoly formulation. In that, we design the INR decoder as a product of three polynomials. Per our formulation, the output of the r^th polynomial is given as input to the (r+1)^th block. The advantage of such a stacking is that it leads to an exponential increase in order of the polynomial. Specifically, we have three ProdPoly blocks in a hierarchy. The first ProdPoly block accepts as the fused embedding as input. The other two ProdPoly blocks take the output feature map from their preceding ProdPoly block, _r-1 as their input (Fig. <ref>). Each ProdPoly block in INR decoder is an adapted implementation of an NCP decomposed PNN variant tailored to our model's requirement. The NCP-polynomial in each ProdPoly block is implemented using two convolutional blocks F. The design of these blocks is inspired by <cit.>. Each F block entails an Adaptive Instance Normalization layer (AdaIn) <cit.>, Convolution, pixel shuffle operation and a GeLU <cit.> activation layer. This operation is denoted as F(.). The AdaIn layer takes as input and normalizes the feature distribution with spatio-temporal context embedded in the input vector . In essence, we adapt Eq. <ref> the following, for our decoder where S and A are implemented as F and Φ : _rm = F_rm(_rm-1) ⊙ (Ψ_[rm]^T_i) ; m ∈{1, 2}, wherein, _r1 =(F_r1(^T_r)) ⊙ (Ψ_[r1]^T), _r = _r2 is the output of r^th ProdPoly block. is a set of three transpose convolutional layers applied only before the first ProdPoly block to obtain a 2D feature map from the input vector . _3 is the final output (i.e. reconstructed patch 𝚙̂_ij) of the INR decoder. Ψ_1m's in the first ProdPoly block are implemented as linear layers. In the remaining blocks, transpose convolution layer is used with appropriate padding and strides. To remove the redundant parameters, similar to <cit.>, we also replace the convolutional kernel in F_1 with two consecutive convolution kernels with small channels. The optimal rank for our resultant polynomial's decomposition per the NCP (Eq. <ref>) was found to be 324. Appendix <ref> presents a detailed study pertaining to the choice of optimal rank for the decomposition, alongside elaborate architecture details. 
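For concreteness, one convolutional block F and the accompanying Hadamard modulation might be sketched as follows; the channel counts, the AdaIN parameterization, and the use of a plain linear layer as a stand-in for Ψ are illustrative assumptions rather than the exact PNeRV configuration.

import torch
import torch.nn as nn

class AdaIN(nn.Module):
    """Instance-normalize the feature map, then scale and shift it from the embedding v."""
    def __init__(self, channels: int, emb_dim: int):
        super().__init__()
        self.norm = nn.InstanceNorm2d(channels, affine=False)
        self.affine = nn.Linear(emb_dim, 2 * channels)

    def forward(self, x, v):
        gamma, beta = self.affine(v).chunk(2, dim=-1)
        return self.norm(x) * (1 + gamma[..., None, None]) + beta[..., None, None]

class DecoderBlockF(nn.Module):
    """AdaIN -> Conv -> PixelShuffle -> GELU, in the spirit of the F blocks of the INR decoder."""
    def __init__(self, in_ch: int, out_ch: int, emb_dim: int, upscale: int = 2):
        super().__init__()
        self.adain = AdaIN(in_ch, emb_dim)
        self.conv = nn.Conv2d(in_ch, out_ch * upscale ** 2, kernel_size=3, padding=1)
        self.shuffle = nn.PixelShuffle(upscale)
        self.act = nn.GELU()

    def forward(self, x, v):
        return self.act(self.shuffle(self.conv(self.adain(x, v))))

# One ProdPoly step: the block output is modulated (Hadamard product) by a
# linearly transformed copy of the fused embedding, mirroring x_rm = F(x) ⊙ (Psi^T z).
block = DecoderBlockF(in_ch=64, out_ch=32, emb_dim=160, upscale=2)
psi = nn.Linear(160, 32)                      # stand-in for Psi_[rm]
x, z = torch.randn(2, 64, 9, 16), torch.randn(2, 160)
out = block(x, z) * psi(z)[..., None, None]   # broadcast over the spatial dimensions
print(out.shape)  # torch.Size([2, 32, 18, 32])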
§.§ Training To train our network, we randomly sample a batch of frame patches P_ij along with their normalized fine coordinates, coarse coordinates, and the time indices (Λ_ij, λ_ij,t). These indices are then given as input to PNeRV to predict the corresponding patches P̂_ij. The model is trained by using a combination of the L1 and SSIM <cit.> losses between the predicted frame patches and ground truth frame patches, governed by Eq. <ref> L(P̂_ij,P_ij) = 1/M× N × T∑_t=1^T∑_p=1^M × Nγ || P̂_ij - P_ij ||_1 + (1 - γ) (1 - SSIM(P̂_ij, P_ij)) where, M × N is the total number of patches per frame, T denotes the total number of frames, and γ is a hyper-parameter to weigh the loss components. We set γ to 0.7. We infer frame patches at all the locations and concatenate them in a consistent manner to reconstruct the original videos. Since the model learns non-overlapping patches independently, the intensity changes near the patch edges may cause the reconstructed frames to have boundary artifacts. We apply Gaussian blur to the reconstructed video to mitigate these subtle artifacts. No further post-processing is required for continuity and coherence in the generated frames. § EXPERIMENTS We split our experimental analysis of PNeRV into (1) evaluation of the representation ability using Video Reconstruction task (2) testing the efficacy on the proposed downstream tasks (3) performing appropriate ablation studies to assess the contributions and salience of individual design elements. The downstream tasks we perform include (i) Video Compression to assess the applicability of PNeRV as an alternate lightweight video representation (ii) Video Super-Resolution to assess the spatial continuity of PNeRV (iii) Video Interpolation to assess the temporal continuity of PNeRV (iv) Video Denoising as an interesting application of PNeRV. We also compare the rate of convergence (during training) of PNeRV vis-à-vis prior art. Experimental Setup: We train and evaluate our model on the widely used UVG dataset <cit.> and the "Big Buck Bunny" (Bunny) video sequence from scikit-video. The UVG dataset comprises 7 videos. Each UVG video is resized to 720×1280 resolution and every 4^th frame is sampled such that the entire video contains 150 frames. All 132 frames of the Bunny sequence are used at a resolution of 720×1280. For all our experiments, we train each model for 300 epochs with a batch size 16 (unless specified otherwise) with up-scale factors set to 5,2,2. The input embeddings Γ_FPE, Γ_TSE, and Γ_PPE are computed with ν=1.25. We set l=80 for Γ_FPE and Γ_TSE. Whereas, Γ_TSE uses α =40. The network is trained using Adam optimizer <cit.> with default hyperparameters, a learning rate of 5e^-4, and a cosine annealing learning rate scheduler <cit.>. Following E-NeRV's evaluation methodology, we use PSNR <cit.> to evaluate the quality of the reconstructed videos. §.§ Video Reconstruction High fidelity video reconstruction assumes utmost importance when it comes to building an INR. We compare PNeRV with several state-of-the-art methods, namely NeRV-L <cit.>, E-NeRV <cit.> and HNeRV <cit.> on videos belonging to the UVG dataset and the Bunny video. The PSNR values obtained for reconstructed videos are reported in Table <ref>. We observe that our model consistently outperforms existing methods on a diverse set of videos, while employing significantly lesser number of learnable parameters shows improvements on videos with slow moving objects like Beauty, Bee, Shake as well as dynamic videos like Bunny, Bosphorus and Yacht. 
Hence, validating that the PNN-backed PNeRV is a lightweight INR that captures the necessary spatiotemporal correlations needed to better represent videos. We present qualitative comparisons with state-of-the-art for the task in Fig. <ref> (Appendix <ref>) (left column). Appendix <ref> presents additional qualitative results. §.§ Downstream Tasks §.§.§ Video Compression r0.48 < g r a p h i c s > Model pruning results on NeRV-L, E-NeRV and PNeRV trained for 300 epochs on "Big Buck Bunny" video. Sparsity represents the ratio of pruned parameters. Recent video compression algorithms follow a hybrid approach where a part of the compression pipeline consists of neural networks while following the traditional compression pipeline <cit.>. An INR encodes a video as the weights of a neural network. This enables the use of standard model compression techniques for video compression. Following <cit.>, we employ model pruning for video compression. We present experimental results for the same on the "Big Buck Bunny" sequence from scikit-video in Figure <ref>. It can be observed that a PNeRV model of 40% sparsity achieves results comparable to the full model, in terms of reconstruction accuracy and perceptual coherence. Fig. <ref> (Appendix <ref>) (middle column) presents qualitative comparisons with state-of-the-art for the task. For sparsity values less than 45%, our model outperforms NeRV and E-NeRV. However, beyond 45% sparsity, PNeRV's performance degrades rapidly. This behaviour can be attributed to the use of multiplicative interactions in PNeRV which cause model performance to increase rapidly with increase in model parameters. We provide additional qualitative results, quantitative results on the UVG dataset, and comparisons with HNeRV in Appendix <ref>. From Fig. <ref>, it can be observed that the frames predicted by HNeRV are blurred , a typical property of autoencoder type of an architecture whereas our method is able to preserve the fine details well. §.§.§ Video Super-Resolution We present qualitative results for ×4 Super-Resolution in Fig. <ref>. As reported in Table <ref>, for Super-Resolution, we compare our results with bicubic interpolation, ZSSR <cit.>, and SIREN <cit.>. PNeRV outperforms these baselines in each case, which confirms that PNeRV is a generic spatiotemporal representation that lends itself well to various downstream tasks that require spatial continuity without the need for task-specific retraining or fine-tuning. We also provide reasons for not comparing our results with VideoINR <cit.>, an important contemporary INR based method in Video Super-Resolution in Appendix <ref>. §.§ Video Frame Interpolation The temporally continuous nature of PNeRV, allows us to perform the task of Video Frame Interpolation. We train and evaluate PNeRV on the "Bunny" and "Beauty" videos for this task. We report the quantitative and qualitative comparisons for the task in Table <ref> and Fig. <ref> (Appendix <ref>) (right column), respectively. We observe that our method achieves better metric performance than prior art, and excellent perceptual quality of the predicted "unseen" (interpolated) frames. Hence, we infer that PNeRV better captures spatiotemporal correlations in videos with respect to prior art. We present additional results for the task in Appendix <ref>. §.§.§ Video Denoising INRs have been shown to be better attuned to filtering out inconsistent pixel intensities i.e. noise and perturbations. 
Hence, making it suitable for denoising videos without being explicitly trained for the task. To test the performance of our representation on noisy videos, we applied white noise and salt and pepper noise separately to the original videos. PNeRV was then trained on these perturbed videos for reconstruction. Comparisons between the reconstructed videos and the original videos reveal that the representation learned by PNeRV is robust to noises. It implicitly learns a regularization objective to filter out noise better than existing methods. Quantitative comparisons with prior art (reported in Table <ref>) assert the superiority of our method. We also provide qualitative results and a detailed analysis of the same in Appendix <ref>. §.§ Ablation Studies §.§.§ Varying the polynomial attributes of the INR Decoder We study the impact of varying the rank and order of the polynomial formed by the PNN-based INR Decoder architecture. Rank of the Polynomial: In NCP-Polynomial formulation, the rank of the polynomial can be varied by modifying the number of channels of the F_rm module in each ProdPoly block. In general, it is expected that a polynomial with a higher-ranked decomposition (i.e. more channels) would perform better due to the increased expressivity of the representation learned by the model. To understand the effect of this, we modify the rank of the first ProdPoly block in the INR-Decoder while keeping the ranks of the second and third ProdPoly block fixed. These results are reported in Tab. <ref>. It can be seen that the rank of the polynomial is positively correlated to the quality of the reconstructed video. Order of the Polynomial: Each ProdPoly block in the proposed architecture has an order of 2. Thus, the effective order of INR-Decoder is 2^R where R is the total number of ProdPoly blocks in the decoder. Hence, we vary the number of ProdPoly blocks to change the order of INR-Decoder polynomial and report our findings in Table <ref>. It can be seen that the performance drops when the order is reduced. Interestingly, the PSNR value decreases when the order is increased beyond a certain range. We also present an analysis of PNeRV's independence to the choice of non-linear activations in Appendix <ref>, a property it inherits from the PNN paradigm. §.§.§ Efficacy of Positional Embeddings We demonstrate the contribution of each Positional Embedding (PE) with respect to its individual contribution toward the reconstruction quality achieved. To this end, we first propose two simple baselines as shown in Table <ref> wherein each patch is assigned a coordinate from 0 to M × N - 1 in a row-wise fashion (row 1) or each patch is assigned its centroid value (row 2). Then Γ_FPE is used to compute the patch embeddings. It is evident that the performance drops considerably in both these settings. Hence, motivating the need of carefully designed positional embeddings. Next, we add the parametric PE (Γ_PPE) (row 3) followed by addition of functional PE (Γ_TSE). The results show that both Γ_PPE and Γ_TSE contribute to the overall network performance. For this ablation study, t is encoded using Γ_FPE and fused with the spatial embedding using the HMF block in all the experiments. Our well-designed PE scheme greatly enhances our model's performance by leveraging the high-frequency information preferred by the PNN paradigm. §.§.§ Varying the Input Patch-Size The patch-wise formulation is the key idea that enables us to model spatial continuity. 
Thus, we delve into PNeRV's performance obtained for different patchs sizes in Table <ref>. We found that a patch size of (𝐇/4 , 𝐃/4) performs the best. This suggests that neither a pixel-wise (dense spatial) nor frame-wise representation (temporal-only) is optimal. We hypothesize that the surge in parameters (over-parameterization) in the pixel-wise approach might be the limiting factor that inhibits learning in such cases. We find this result particularly insightful since we found a sweet-spot between the two parameterization methodologies. Another interesting trend to observe from Table <ref> is that of the relationship between patch size and the number of parameters. We discuss the reasoning behind this in detail in appendix <ref>. §.§.§ HMF versus other fusion strategies r7cm Ablation: Assessing the efficacy of our HMF versus other parametric PE fusion strategies in terms of PSNR (dB) for reconstruction. !9mm PE Fusion Strategy Bunny Beauty Concat + Linear 43.76 39.39 Linear + Elementwise Addition 43.28 39.39 Linear + Hadamard Product 43.06 38.92 Ours 44.9 39.80 We compare the proposed PNN-backed Higher-order Multiplicative Fusion (HMF) of space and time embeddings with other fusion mechanisms as given in Table <ref>. As expected, conventional concatenation, addition, or multiplication operations on features fail to capture the auto and cross-correlations of the inputs. Hence, causing a drop in performance. We observe that the dip in PSNR is more pronounced for the "bunny" video than the "beauty" video. We attribute this observation to the "bunny" video having more temporal variations. The results of this study indicate that the proposed HMF scheme models both the structural and the perceptual video attributes better than the prior art. §.§ On PNeRV's rate of convergence Following E-NeRV's setup, we perform reconstruction experiments with PNeRV models trained for different number of training epochs on the "Bunny" and "Yacht" (UVG dataset) videos and report our findings in Fig. <ref>. It can be seen that training for more number of epochs boosts the performance with upto 4 × faster convergence than baselines. PNeRV's performance surpasses that of the baselines at 600 epochs on the "Bunny" and 1200 epochs on the "Yacht". We also provide comparisons with state-of-the-art with respect to inference time in Appendix <ref>. § CONCLUSION In this work, we propose and validate the efficacy of PNeRV, a light-weight, spatiotemporally continuous, fast, and generic neural representation for videos with a versatile set of practical downstream applications. We do so by building on two principal insights. First, a well-designed patch-wise spatial sampling scheme can perform just as as good as a pixel-wise sampling. Second, replacing popular function approximators by the more efficient PNNs and designing other model components to aid its learning can lead to superior performance. We provide conclusive results to support our claims with analysis on several downstream tasks and consistent ablation studies. We believe our work shall serve as a primer toward building spatiotemporally continuous light-weight INRs for videos. As a future work, it would be interesting to examine PNN based PEs to further improve INR for videos. Please find our broader impact statement in the following subsection. §.§ Broader Impact Statement As one of the most widely consumed modality of data, videos are central to several important tasks in the modern socio-technical context. 
In such a scenario, PNeRV brings in a fresh approach to tackle the ever growing costs involved in handling such massive data by providing a method restore and compress videos efficiently. In effect, PNeRV can potentially have a lasting positive impact on several video streaming, communication, and storage services. As with any nascent technology, the largely positive impact areas are accompanied by a few unforeseeable ones which are beyond the scope of this work. tmlr § APPENDIX §.§ Abbreviations xxxxxxxxxxx x̄xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx INRImplicit Neural Representation PNNPolynomial Neural Network HMFHigher-order Multiplicative Fusion PEPositional Embedding FPEPositional Encoding of Frame Index PPEParametric Embedding of Fine Coordinates TSETime Aware Spatial Embedding §.§ The mode-n product The mode-n (matrix) product of a tensor 𝒳∈ℝ^I_1 × I_2 ×…× I_N with a matrix 𝐔∈ℝ^J × I_n is denoted by 𝒳×_n 𝐔 and is of size I_1 ×… I_n-1× J × I_n+1×…× I_n. Elementwise, we have (𝒳×_n 𝐔)_i_1 … i_n-1 j i_n+1… i_N=∑_i_n=1^I_n x_i_1 i_2 … i_N u_j i_n . Each mode- n fiber [of 𝒳 ] is multiplied by the matrix 𝐔. §.§ The Hierarchical Patch-wise Spatial Sampling Algorithm §.§ The INR Decoder architecture in detail In this section, we provide the finer details of the PNeRV architecture. We then provide more details about the implementation and training of the proposed method. PNeRV consists of three components: the Positional Embedding Module (PE), the Embedding Fusion Block, and the INR-Decoder. Given the coarse patch coordinate λ_𝐢𝐣, fine patch coordinate Λ_ij and the time index, we first compute the positional embeddings Γ_TSE(λ_𝐢𝐣, t), Γ_PPE(Λ_ij) and Γ_FPE(t). The embeddings Γ_PPE(Λ_ij) and Γ_FPE(t) are fused using a Polynomial Neural Networks (PNN) based fusion module HMF. HMF consists of a series of linear transformations followed by Hadamard product and addition, as shown in Fig 4 of the paper. Each linear layer, namely, A_[1,t], A_[2,t], A_[3,t], A_[1,Λ_ij], A_[2,Λ_ij], A_[3,Λ_ij] is of dimension 80 × 160 . The resulting embedding is added elementwise to Γ_TSE(λ_𝐢𝐣, t) to obtain the fused embedding z which is given as input to the INR Decoder. z is a vector of dimension 160. The INR-decoder consists of a stack of 3 prodpoly blocks. Each prodpoly block in turn is a 2^nd order NCP-Polynomial implemented using convolutional blocks F_rm, where r is the index of the prodpoly block and m is the index corresponding to the F-block. The structure of F is illustrated in Fig. <ref>. To limit the increase in the number of parameters of the model, following <cit.>, we employ the following design for F_11 block: Conv(C_1, C_0 × s × s) → pixel-shuffle(s) → Conv(C_0, C_2). Where, C_1 = 324, C_0= 81, C_2 = 324 and s=5. The input vector is mapped to a feature map using a 3rd-order polynomial implemented using transpose convolutional layers as depicted in Fig. <ref>. This is referred to as U in Fig. <ref>. Table. <ref> provides the complete architecture details for INR-decoder. §.§ Qualitative results for Video Reconstruction We provide additional comparisons with state-of-the-art in Fig. <ref> and additional qualitative results for our method illustrated in Fig. <ref> and Fig. <ref>. Owing to the ensemble of design elements, PNeRV outperforms state-of-the-art convincingly on this task. §.§ Qualitative Comparisons Figure <ref> (Appendix <ref>) presents qualitative comparisons with prior art on Reconstruction, Compression, and Interpolation. 
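As a small executable companion to the mode-n product defined in the appendix above (the paper ships no code, so the NumPy routine and names below are purely illustrative):

```python
import numpy as np

def mode_n_product(X, U, n):
    """Mode-n product X x_n U: contracts mode n of X (of size I_n) with the
    columns of U (shape J x I_n) and leaves the new axis of size J in position n,
    matching the elementwise definition given above."""
    # tensordot places the new J axis first; moveaxis restores it to position n.
    return np.moveaxis(np.tensordot(U, X, axes=(1, n)), 0, n)

X = np.random.randn(3, 4, 5)          # I_1 x I_2 x I_3
U = np.random.randn(7, 4)             # J x I_2
print(mode_n_product(X, U, 1).shape)  # (3, 7, 5)
```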
§.§ Additional Results for Video Compression Table <ref> (a) provides a quantitative comparison with state-of-the-art on video compression in different sparsity (denoted by ρ) settings. Our model outperforms prior art convincingly. Fig. <ref> wherein we present qualitative comparisons with state-of-the-art on the task with sparsity ρ = 0.2, further underscores PNeRV's superior performance. §.§ Quantitative Comparison: Inference time per forward pass Table <ref> (b) provides a quantitative comparison with state-of-the-art in terms of time taken (ms) to perform one forward pass of the model on NVIDIA GeFORCE RTX 3090 GPU. Results elucidate that our light-weight model is faster then prior art. §.§ Super-Resolution using PNeRV On the comparison with VideoINR: VideoINR <cit.> has two core differences from our work. Firstly, VideoINR uses ground truth High-Resolution (HR) video frames for training, while ours is a fully unsupervised approach utilizing only the low-resolution video for training. Secondly, our method is a multifunctional INR. In that, it learns to represent a signal (video) as model weights. In contrast, VideoINR is an autoencoder trained specifically for Super-Resolution. Wherein, the claimed INR components function as non-linear transformations in the intermediate feature space. Therefore, we do not compare with VideoINR. Instead, we show qualitative results for Super-Resolution (Fig. <ref>) and quantitative comparison with bicubic interpolation, ZSSR, and SIREN (Table <ref>) which are unsupervised models. §.§ Video Denoising: Qualitative Results Figure <ref> shows the qualitative comparison of the output of our method with the two INR baselines NeRV <cit.> and E-NeRV <cit.>. Notice that E-NeRV fails to reconstruct the honeybee, thus regularizing the video such that the original content is lost. NeRV can generate the honeybee but it lacks clarity. PNeRV preserves all the content of the frames including honeybee and generates superior-quality video. These results confirm that PNeRV learns more robust video representation. §.§ Video Frame Interpolation Following E-NeRV's setup, we divide the training sequence in a 3:1 ("seen:unseen") ratio such that for every four consecutive frames, the fourth frame is not used training. This "unseen" frame is interpolated during inference to quantitatively evaluate the model's performance. Fig. <ref> provides the qualitative results for Video Frame Interpolation task on "bunny" and "beauty" videos. It can be observed that the perceptual quality of the interpolated frame is similar to that of the ground truth for the bunny video. We also assess the quality of the interpolated video using multi-scale structural similarity (MS-SSIM) metric. MS-SSIM accounts for luminance, contrast and structure of each frame and hence better correlates with human perception. Table <ref> reports the MS-SSIM numbers on "beauty" video. PNeRV outperforms SOTA methods on this metric as well suggesting the superior quality of the interpolated video. §.§ Robustness to the choice of activation function Since PNNs <cit.> have built-in non-linearities, they do not rely on the usage of popular hand-crafted non-linear activation functions to yield best performance. To highlight this aspect of our method, we test the effect of training our network and the baselines without any activation functions on "bunny" dataset by removing activation functions from all the network layers except for the output layer. 
The quantitative and qualitative results for this experiment are reported in Table <ref> and Fig. <ref>. It can be observed that the performance of the baselines NeRV (first row) and E-NeRV (second row) drops significantly. In contrast, the performance of our model remains comparable. Notably, NeRV fails to learn high-frequency information such as that in the face of the bunny (highlighted in red boxes), resulting in worse qualitative performance. §.§ Relationship between patch-size and number of learnable parameters Intuitively, one would expect the number of learnable parameters to increase with the patch size. However, as observed in Table <ref>, the number of parameters decreases when transitioning from a patch size of (H/8, W/8) to (H/4, W/4) because the latter uses fewer channels in the convolutional layers of the prodpoly blocks. To be specific, for the (H/8, W/8) configuration, the number of channels in F_22, F_31 and F_32 (see Table <ref> in Appendix <ref> and Figure <ref>) is 384, whereas for the (H/4, W/4) scenario, all of these channels are set to 96. We arrive at this design choice empirically. The F blocks comprise pixel-shuffle layers, which are responsible for upsampling the generated output. In our design, the number of channels is chosen as the minimum of 96 and the number of channels in the previous layer divided by the square of the upsampling factor. For patch sizes greater than or equal to (H/8, W/8), the channels in the F_22, F_31 and F_32 layers become 96. The increase in parameters after these layers is due to the Ψ_rm layers that upsample the input feature map to the appropriate dimensions; this contributes to the extra parameters.
http://arxiv.org/abs/2406.19040v1
20240627094552
On Convex Optimization with Semi-Sensitive Features
[ "Badih Ghazi", "Pritish Kamath", "Ravi Kumar", "Pasin Manurangsi", "Raghu Meka", "Chiyuan Zhang" ]
cs.LG
[ "cs.LG", "cs.CR", "cs.DS" ]
Multiscale Functional Connectivity: Exploring the brain functional connectivity at different timescales. [ July 1, 2024 ======================================================================================================== § ABSTRACT We study the differentially private (DP) empirical risk minimization (ERM) problem under the setting where only some features are sensitive. This generalizes the Label DP setting where only the label is sensitive. We give improved upper and lower bounds on the excess risk for DP-ERM. In particular, we show that the error only scales polylogarithmically in terms of the sensitive domain size, improving upon previous results that scale polynomially in the sensitive domain size <cit.>. Differential Privacy, Semi-sensitive Features, Label Differential Privacy, Convex Optimization § INTRODUCTION In empirical risk minimization (ERM) problem, we are given a dataset D = {x_i}_i ∈ [n]∈^n and a loss function ℓ: ×→ and the goal is to find w ∈ that minimizes the empirical risk (w; D) := 1/n∑_i ∈ [n]ℓ(w; x_i). The excess risk is defined as (w; D) - min_w' ∈(w'; D). Often, the dataset might contain sensitive data, and to provide privacy protection, we will use the notion of differential privacy (DP) <cit.>. ERM is among the most well-studied problems in the DP literature and tight excess risk bounds are known under assumptions such as Lipschitzness, convexity, and strong convexity (e.g., <cit.>). In most of these studies, each x_i is assumed to be sensitive. However, in several applications, such as online advertising, it can be the case that x_i consists of both sensitive and non-sensitive attributes. This can be modeled by letting = ^×^ where ^ is the domain of the non-sensitive features and ^ is the domain of the sensitive features. We will also write each example x as (x^, x^) where x^∈^ and x^∈^. Here, our only aim is to protect x^; in terms of DP, this means that we allow two neighboring datasets to differ only on the sensitive features of a single example. We refer to this DP notion as . (See <Ref> for a more formal definition.) Our definition is identical to the ones considered by <cit.>; a related notion has been considered recently as well <cit.>. To avoid confusion, we refer to the standard DP notion (where a single entire example can be changed in neighboring datasets) as . Throughout, we use k to denote the size of the domain for private features, i.e., k = |^|. The model generalizes the so-called label DP  <cit.> where the only sensitive “feature” is the label.[In our formulation of ERM, there is no distinction between a label and a feature; indeed, the two models are equivalent.] In our language, <cit.> give an -algorithm for convex ERM (under Lipschitzness assumption) that yields an expected excess risk of (k/√(n)); for (, δ)-, they achieve an expected excess risk of (√(k log(1/δ))/√(n)). Complementing these upper bounds, they also provide a lower bound of Ω(1/√(n)) against any (,δ)-. An interesting aspect of these bounds is that they are dimension-independent; meanwhile, for full DP, it is known that the expected excess risk grows (polynomially) with the dimension of  <cit.>. Despite this, the results from <cit.> leave a rather large gap in terms of k: the upper bounds have a polynomial dependence on k that is not captured by the lower bound. §.§ Our Contributions The main contribution of our paper is to (nearly) close this gap. In particular, we show that the dependency on k is polylogarithmic rather than polynomial, as stated below. 
For ≤ O(log 1/δ), there is an (, δ)-algorithm for ERM w.r.t. any G-Lipschitz convex loss function with domain radius R that has expected excess empirical risk (RG ·√(log(1/δ) ·log k)/√( n)). For ≤ O(log k), any (, o(1/k))-algorithm for ERM w.r.t. any G-Lipschitz convex loss function with domain radius R has expected excess empirical risk at least Ω(RG ·min{1, √(log k)/√( n)}). Notice that the dependency on k is essentially tight for δ = 1 / k^1 + Θ(1). It remains an interesting open question to tighten the bound for a wider regime of δ values. When the loss function is further assumed to be strongly convex and smooth, we can improve on the above excess risk and also provide a nearly tight bound in this case. For ≤ O(log 1/δ), there is an (, δ)-algorithm for ERM w.r.t. any G-Lipschitz μ-strongly convex λ-smooth loss function that has expected excess empirical risk (G^2/μ·√(log(1/δ) ·log k)·log(λ / μ)/ n). For ≤ O(log k), any (, o(1/k))-algorithm for ERM w.r.t. any G-Lipschitz μ-strongly convex μ-smooth loss function has expected excess empirical risk at least Ω(G^2/μ·min{1, log k/ n}). Finally, our techniques are sufficient for solving multiple convex ERM problems on the same input dataset, where the error grows only polylogarithmic in the number of ERM problems: For ≤ O(log 1/δ), there is an (, δ)-algorithm for m ERM problems w.r.t. any G-Lipschitz convex loss function with domain radius R that has expected excess empirical risk (RG ·√(log(1/δ) ·log k)√(log m)/√( n)). For ≤ O(log 1/δ), there is an (, δ)-algorithm for m ERM problems w.r.t. any G-Lipschitz μ-strongly convex λ-smooth loss function that has expected excess empirical risk (G^2/μ·√(log(1/δ) ·log k)·log(mλ / μ)/ n). In the full DP setting, this problem was first studied by <cit.>. The error bound was improved by <cit.>, but their bound still depends polylogarithmically on the dimension d. By removing this dependency, our theorem above improves upon the bounds of <cit.>. We note, however, that <cit.> also give bounds for the ℓ_p-bounded setting for any p 2, but, for simplicity, we do not consider this in our work. §.§ Technical Overview In this section, we briefly discuss the techniques used in our work. Answer Linear Vector Queries. The key ingredient of our work is an algorithm that can answer online linear vector queries. Such a query is of the form f: →_2^d(1) where _2^d(1) denotes the (Euclidean) unit ball in d dimensions, and our goal is to approximate f(D) := 1/n∑_i ∈ [n] f(x_i). There can be up to T online queries (i.e., we have to answer the previous query before receiving the next). The case d = 1 is often referred to as linear queries. In this case, in the setting, <cit.> introduced the “Private Multiplicative Weights” algorithm that has error (||, 1/δ) / √( n). We extend their algorithm in two crucial aspects: (i) We adapt the algorithm to the setting and show that we can improve on the error in this setting: the || term (size of the entire input domain) becomes k = |^| (size of the sensitive features domain). (ii) We show a natural way to handle d > 1. In this case, the error is now measured in the ℓ_2-error of the vector. Interestingly, we show that the error remains (roughly) the same in this setting and is in fact dimension-independent. This is crucial for achieving dimension-independent bounds in our theorems. The algorithm of <cit.> works by maintaining a distribution over all the domain . For each query, we (privately) check whether the current distribution is sufficiently accurate to answer the query. 
If so, we answer using the current distribution. Otherwise, we apply a multiplicative weight update (MWU) rule to update the distribution. The MWU rule depends on the privatized true answer and the answer computed using the current distribution, where the former is achieved via, e.g., adding Laplace noise. The crux of the analysis is that the privacy budget is only charged when an update occurs. Finally, a standard analysis of MWU shows that there cannot be too many updates. To achieve (i), our algorithm maintains, for each example x_i, a distribution over x_i^ and applies a multiplicative weight update. For (ii), we modify the update as follows. First, we privatize the true answer using the Gaussian mechanism. Then, we apply the MWU rule based on the dot product of this privatized true answer and the value of each example. Crucially, our analysis shows that, even though the total norm of the noise can be very large (growing with the dimension), it does not interfere too much with the update as only the noise in a few directions is relevant. From Answering Linear Vector Queries to Convex Optimization. By letting each query f be the gradient of the loss function, our aforementioned algorithm allows one to construct an approximate gradient oracle. By leveraging existing results in the optimization literature <cit.>, we immediately arrive at the claimed bounds. Comparison to Previous Work. <cit.> show that approximate gradient oracle can be accomplished via Statistical Queries (SQs). For the purpose of our high-level discussion, one can think of SQs as just linear queries. Using this, they observe that the <cit.> algorithm can be used to solve convex optimization problem(s) with low error. We note that this approach can be used in our setting, too, once we extend the Hardt–Rothblum algorithm to the setting with decreased error (i). However, this alone does not yield a dimension-independent bound since the number of linear queries required still depends on the dimension. As such, we still require (ii) to achieve the results stated here. Finally, we remark that vector versions of MWU have been used in the DP literature before (e.g., <cit.>). However, we are not aware of its study with respect to the effect of Gaussian noise; in particular, to the best of our knowledge, the fact that we still have a dimension-independent bound even after applying noise is novel. Lower Bounds. Suppose for simplicity that = Θ(1). For the lower bound, we first recall the construction from previous work <cit.>, which is a reduction from (vector) mean estimation. Roughly speaking, they let the ith example contribute only to the ith coordinate and let the sensitive feature (which is binary in <cit.>) determine whether this coordinate should be +1 or -1. They argue that any (,δ)-algorithm must make an error in determining the sign of Ω(1) fraction of the coordinates; this results in the Ω(1/√(n)) error for mean estimation, which can then be converted to a lower bound for convex ERM via standard techniques <cit.>. We extend this lower bound by grouping together O(log k) examples and assign k common coordinates for them. The examples in each group share the same sensitive feature, and it determines which of the k coordinates the examples contribute to. In other words, each group is a hard instance of the so-called selection problem. This helps increase the error to Ω(√(log k/n)). § PRELIMINARIES We use _2^d(R) to denote the Euclidean ball of radius R in d dimensions, i.e., {y ∈^d |y_2 ≤ R}. 
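Before turning to the formal preliminaries, the following NumPy sketch is meant only to fix ideas about the per-example multiplicative-weights step outlined in the overview above. It is deliberately not the full mechanism: the sparse-vector accuracy check, the calibration of the Gaussian and Laplace noise, and the actual step size and clipping constant are omitted or replaced by illustrative values, and all names are ours.

```python
import numpy as np

def synthetic_answer(P, f_values):
    """f(P) = (1/n) sum_i sum_y P[i, y] * f(x_i^pub, y)."""
    return np.einsum('ik,ikd->d', P, f_values) / P.shape[0]

def mwu_update(P, f_values, v, f_syn, eta=0.05, c=3.0):
    """One multiplicative-weights step on the per-example beliefs.

    P        : (n, k) beliefs over each example's sensitive value.
    f_values : (n, k, d) with f_values[i, y] = f(x_i^pub, y), each of l2-norm <= 1.
    v        : (d,) noisy true answer, f(D) plus Gaussian noise.
    f_syn    : (d,) answer computed from the current beliefs.
    """
    norm = np.linalg.norm(v - f_syn)          # the real algorithm uses a noisy estimate of this
    phi = (v - f_syn) / max(norm, 1e-12)
    scores = np.clip(f_values @ phi, -c, c)   # clipped dot products <phi, f(x_i^pub, y)>
    P = P * np.exp(eta * scores)              # upweight sensitive values that better explain v
    return P / P.sum(axis=1, keepdims=True)

rng = np.random.default_rng(0)
f_values = rng.normal(size=(100, 5, 16))
f_values /= np.linalg.norm(f_values, axis=-1, keepdims=True)
P = np.full((100, 5), 1 / 5)                  # start from the uniform beliefs
v = synthetic_answer(P, f_values) + rng.normal(scale=0.1, size=16)
P = mwu_update(P, f_values, v, synthetic_answer(P, f_values))
```

Note that the update only inspects dot products with the single direction phi, which mirrors the observation above that only the noise along a few relevant directions enters the update, even though the total noise norm grows with the dimension.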
§.§ Differential Privacy We recall the definition of differential privacy below. For , δ≥ 0, a mechanism is said to be (, δ)-differentially private ((, δ)-DP) with respect to a certain neighboring relationship iff, for every pair D, D' of neighboring datasets and every set S of outputs, we have [(D) ∈ S] ≤ e^·[(D') ∈ S] + δ. In this paper, we consider datasets consisting of examples with a sensitive and non-sensitive part. More precisely, each dataset D is {x_i}_i ∈ [n] where x_i = (x^_i, x^_i) ∈^×^. Two datasets D = {(x^_i, x^_i)}_i ∈ [n], D' = {(x'^_i, x'^_i)}_i ∈ [n] are neighbors if they differ on a single example's sensitive part. I.e., x^_i = x'^_i for all i ∈ [n] and there exists i' ∈ [n] such that x^_i = x'^_i for all i ∈ [n] ∖{i'}. We often use the prefix “semi-sensitive” (e.g., ) to signify that we are working with this neighboring relationship notion. Note that, the lemmas below that are stated without such a prefix, hold for any neighboring relationship. For the purpose of privacy accounting, it will be convenient to work with the zero-concentrated DP (zCDP) notion. For ρ > 0, an algorithm is said to be ρ-zero concentrated DP (ρ-zCDP) with respect to a certain neighboring relationship iff, for every pair D, D' of neighboring datasets and every α > 1, we have D_α((D) A(D')) ≤ρ·α, where D_α(P Q) denotes the α-Renyi divergence between P and Q. We will use the following results from <cit.> in the privacy analysis. (i) For any > 0, any -DP mechanism is (0.5^2)-zCDP. (ii) For any ρ > 0 and δ∈ (0, 1/2), a ρ-zCDP mechanism is (ρ + 2√(ρln(1/δ)), δ)-DP. If is a mechanism is a (possibly adaptive) composition of mechanisms _1, …, _T, where _i is ρ_i-zCDP, then is (ρ_1 + ⋯ + ρ_T)-zCDP. §.§ Assumptions on the Loss Function Throughout this work, we assume that the loss function ℓ is convex and subdifferentiable (in the first parameter). Furthermore, we assume that it is G-Lipschitz; that is, |ℓ(w) - ℓ(w')| ≤ G ·w - w'_2. There are also two additional assumptions that we use in our second result (<Ref>): * μ-strong convexity: ℓ(w) ≥ℓ(w') + <∇ℓ(w'), w- w' > + μ/2w - w'_2^2. * λ-smoothness: ∇ℓ is λ-Lipschitz, implying, ℓ(w) ≤ℓ(w') + <∇ℓ(w'), w- w' > + λ/2w - w'_2^2. §.§ Concentration Bounds We will now prove a lemma with respect to a “clipped” distribution. To do this, let us define the clipping operation as follows. For ϕ∈^d and c ∈_> 0, we let _ϕ,c: ^d →^d be defined as[In other words, u is scaled so that its ϕ-semi-norm is at most c.] _ϕ,c(u) = u ·min{1, c / |<ϕ, u>|} if <ϕ, u> 0 u if <ϕ, u> = 0, For convenience, for c > 0, we also define _c: → to denote[Note that this coincides with _1, c but we keep a separate notation for brevity and clarity.] the function _c(b) := b ·min{1, c/|b|}, i.e., a rescaling of b so that its absolute value is at most c. The desired lemma is stated below. Although it might seem overly specific at the moment, we state it in this form as it is most convenient for our usage in the accuracy analysis later (without specifying too many extra parameters). Its proof is deferred to Appendix <ref>. Let be any distribution over _2^d(1) and μ_ := _U ∼[U]. Let Z be drawn from (μ_Z, σ_Z^2 I_d) for some σ_Z ∈ (0, 1], μ_Z ∈_2^d(2). Then, we have _Z[|<Z, _U ∼[_Z, 3(U)] - μ_>| > 2 exp(-0.1/σ_Z^2)] < 2 exp(-0.1/σ_Z^2). § ANSWERING LINEAR VECTOR QUERIES WITH As mentioned in the introduction, we consider a setting similar to <cit.> but with two main changes: (i) we support and (ii) each query in the family is allowed to be vector-valued (instead of scalar-valued). 
We describe this setting in more detail below. A (bounded ℓ_2-norm) linear vector query is a function f: →_2^d(1), where d ∈. The value of the function on a dataset D = {x_i}_i ∈ [n] is defined as f(D) := 1/n∑_i ∈ [n] f(x_i). Online Linear Vector Query problem. In the Online Linear Vector Query (OLVQ) problem, the interaction proceeds in T rounds. At the beginning, the algorithm receives the dataset D as the input. In round t, the analyzer (aka adversary) selects some linear vector query f_t: →_2^d_t(1). The algorithm has to output an estimate e_t of f_t(D). We say that the algorithm is (α, β)-accurate if, with probability 1 - β, e_t - f_t(D)_2 ≤α for all t ∈ [T]. Finally, we say that the algorithm satisfies (, δ)-iff the transcript of the interaction satisfies (, δ)-. Our Algorithm. The rest of this section is devoted to presenting (and analyzing) our algorithm for OLVQ. The guarantee of the algorithm is stated formally below. For all δ, β∈ (0, 1/2) and ∈ (0, √(ln(1/δ))), there is an (, δ)-algorithm for OLVQ that is (α, β)-accurate for α = O(√(ln k ·ln(1/δ))·√(ln(T n/β) + lnln k + ln√(ln(1/δ))/)/√( n)). As mentioned earlier, it will be slightly more convenient to work with the zCDP definition instead of DP for composition theorems. In zCDP terms, our algorithm gives the following guarantee: For every ρ∈ (0, 1), β∈ (0, 1/2), there is a ρ-algorithm for OLVQ that is α, β-accurate for α = O√(ln k/ρ)·√(ln(T n/β) + lnln k + ln(1/ρ))/√(n). Note that <Ref> follows from <Ref> by setting ρ = 0.1^2/log(1/δ) and applying <Ref>(ii). The presentation below follows that of <cit.> which is based on the original paper of <cit.> and the subsequent work of <cit.>. We use the presentation from the Dwork and Roth's book as it uses a more modern privacy analysis through the sparse vector technique, whereas <cit.> use a more direct privacy analysis. §.§ Linear Vector Query Multiplicative Update First, we present the analysis of the multiplicative weight update (MWU) step for linear vector query. This generalizes the standard analysis for scalar-valued query to a vector-valued one. Note that this subsection does not contain any privacy statements, as those will be handled later. The algorithm takes as input a “synthetic” (belief) distribution of the sensitive features for each of the n examples. We write p^ℓ_i to denote the distribution for x^_i. Furthermore, we write p^ℓ_i(y) to denote the probability that x^_i = y under p^ℓ_i. For ^ℓ = (p^ℓ_i)_i ∈ [n] and a linear vector query f, we write f(^ℓ; D) as a shorthand for 1/n∑_i ∈ [n]∑_y ∈^ p^ℓ_i(y) · f(x^_i, y). We may drop D for brevity when it is clear from the context. The update is based on the difference between the estimated value (which will be set as a noised version of the true answer f(D)) and f(^ℓ). Since the noise can have unbounded value, we “truncate” the dot product when using it to simplify the analysis (recall the notion _c from <Ref>). The full update is stated in <Ref>. We now analyze this update rule. To do so, recall the notion from <Ref>; it will be convenient to also define the following additional notation: f^, ϕ, c(x^, y) := _ϕ, c(f(x^, y)), f^, ϕ, c(D) := 1/n∑_i ∈ [n] f^, ϕ, c(x_i), f^, ϕ, c(^ℓ; D) := 1/n∑_i ∈ [n]∑_y ∈^ p^ℓ_i(y) · f^, ϕ, c(x^_i, y). For readability, we sometimes drop ϕ and c from the notations above when it is clear from context. For convenience, we separate the requirement for the MWU analysis into the following condition. 
The first item states that the error is sufficiently large, the second that the noise added to v is sufficiently small, the next two assert that clipping does not change the function value too much (for the true answer and that evaluated from the synthetic data ^ℓ - 1, respectively), and the remaining two state that is a good estimate for f(D) - f(^ℓ - 1)_2. Suppose that η≤1/c and the following hold: * f(D) - f(^ℓ - 1)_2 ≥ (2c^2 + 7)η, * f(D) - v, f(^ℓ - 1) - f(D)≤η·f(D) - f(^ℓ - 1)_2, * |v - f(^ℓ - 1), f^(D) - f(D)| ≤η^2 , * |v - f(^ℓ - 1), f^(^ℓ - 1) - f(^ℓ - 1)| ≤η^2, * ≥η, * ≤ 2 ·f(D) - f(^ℓ - 1)_2. Under the above conditions, we show that the update cannot be applied too many times: Suppose that _η, c(^ℓ - 1, f, v, ; D) is applied for ℓ = 1, …, L with the initial distribution being the uniform distribution (i.e., p^0_i(y) = 1/k for all i ∈ [n] and y ∈^) such that <Ref> holds for all ℓ∈ [L]. Then, it must be that L < ln k / η^2. Let the potential be Ψ^ℓ := 1/n∑_i ∈ [n]ln1/p_i^ℓx^_i. The main lemma underlying the proof of <Ref> is that the potential always decreases under <Ref>, which immediately implies the proof since the potential satisfies Ψ^0 = ln k and Ψ^L > 0. Assuming that <Ref> holds, then Ψ^ℓ - 1 - Ψ^ℓ≥η^2. To prove <Ref>, we use the following two simple facts. (i) For all x ∈, 1 + x ≤exp(x). (ii) For all x ∈ (-∞, 1], exp(x) ≤ 1 + x + x^2. [<Ref>] From the definition of and , we have _cϕ, f(x^_i, y) = ϕ, f^(x^_i, y). In other words, the update rule can be rewritten as p_i^ℓ(y) p_i^ℓ - 1(y) ·expη·ϕ, f^(x^_i, y)/∑_y' ∈^ p_i^ℓ - 1(y') ·expη·ϕ, f^(x^_i, y'). For brevity, let γ_i^ℓ be the normalization factor ∑_y' ∈^ p_i^ℓ - 1(y') ·expη·ϕ, f^(x^_i, y') for all i ∈ [n]. We have Ψ^ℓ - 1 - Ψ^ℓ = 1/n∑_i ∈ [n]lnp_i^ℓ(x^_i)/p_i^ℓ - 1(x^_i) = 1/n∑_i ∈ [n]η·ϕ, f^(x_i) - lnγ_i^ℓ = η·ϕ, f^(D) - 1/n∑_i ∈ [n]lnγ_i^ℓ. By definition, ϕ, f^(x_i, y')≤ c. Thus, by our assumption that η≤ 1/c, we can bound the normalization factor γ_i^ℓ as follows: γ_i^ℓ = ∑_y' ∈^ p_i^ℓ - 1(y') ·expη·ϕ, f^(x_i, y') (<Ref>(ii)) ≤∑_y' ∈^ p_i^ℓ - 1(y') 1 + (η·ϕ, f^(x_i, y')) + η·ϕ, f^(x_i, y')^2 ≤∑_y' ∈^ p_i^ℓ - 1(y') 1 + (η·ϕ, f^(x_i, y')) + c^2 η^2 = 1 + η·ϕ, ∑_y' ∈^ p_i^ℓ - 1(y') · f^(x_i, y') + c^2 η^2. Applying <Ref>(i), we can then conclude that lnγ_i^ℓ≤η·ϕ, ∑_y' ∈^ p_i^ℓ - 1(y') · f^(x_i, y') + c^2 η^2. Taking the average over all i ∈ [n], we thus have 1/n∑_i ∈ [n]lnγ_i^ℓ≤η·ϕ, f^(^ℓ - 1) + c^2 η^2. Plugging this back into <Ref>, we get Ψ^ℓ - 1 - Ψ^ℓ ≥η·ϕ, f^(D) - f^(^ℓ - 1) - c^2 η^2 = η/·, f^(D) - f(D) + , f(^ℓ - 1) - f^(^ℓ - 1) + , f(D) - f(^ℓ - 1) - c^2 η^2 ()≥η/·, f(D) - f(^ℓ - 1) - (c^2 + 2) η^2 = η/·f(D) - f(^ℓ - 1)_2^2 - f(D) - v, f(D) - f(^ℓ - 1) - (c^2 + 2) η^2 (▪)≥η/·f(D) - f(^ℓ - 1)_2^2 - η·f(D) - f(^ℓ - 1)_2 - (c^2 + 2) η^2 ()≥η/2 ·f(D) - f(^ℓ - 1)_2·(2c^2 + 7)η·f(D) - f(^ℓ - 1)_2 - η·f(D) - f(^ℓ - 1)_2 - (c^2 + 2) η^2 = η^2, where () follows from <Ref><ref>,<ref> and <ref>, (▪) follows from <Ref><ref>, and () follows from <Ref><ref> and <ref>. §.§ The Algorithm We are now ready to describe our algorithm and prove <Ref>. [<Ref>] <Ref> contains the description of our algorithm. Privacy Analysis. For each fixed ℓ, the mechanism is exactly a composition of the AboveThreshold mechanism[See Appendix <ref> for more explanation on the AboveThreshold, Laplace, and Gaussian mechanisms.] <cit.>, the Laplace mechanism with noise multiplier[The sensitivity of f_t(^ℓ - 1) - f_t(D)_2 with respect to is 2/n.] 1/' and the Gaussian mechanism with noise multiplier[The ℓ_2-sensitivity of f(D) with respect to is 2/n.] σ. 
The first is '-<cit.> and, by <Ref>(i) is thus (0.5'^2)-; similarly, the Laplace mechanism is (0.5'^2)-. Meanwhile, the Gaussian mechanism is (2/σ^2)-zCDP <cit.>. Thus, by the composition theorem (<Ref>) for a fixed ℓ, the mechanism is (0.5'^2) + (0.5'^2) + (2/σ^2) = (ρ / )-. Thus, applying the composition theorem (<Ref>) across all iterations, the entire algorithm is ρ-. Utility Analysis. We set the parameters as follows: (i) ζ = 1/2, (ii) c = 3, (iii) η to be the smallest positive real number such that η > 1000 √(ln k/ρ)·√(lnn T ln k/ρβη)/√(n), (iv) τ = 16η, (v) = 1 + ⌊ln k/η^2⌋. Note that we may assume w.l.o.g. that η≤ 0.1 as otherwise the desired guarantee is trivial (i.e., the algorithm can simply outputs zero always). By the tail bound of Laplace noise (<Ref>(i)), for a fixed ℓ∈ [], the following holds with the probability at least 1 - 0.1β/: |χ_ℓ| ≤4/' n·ln20 /β≤η, where the second inequality follows from our setting of parameters. Similarly, for fixed ℓ∈ [], t ∈ [T], the following holds with probability at least 1 - 0.1β/ T: |ν_t, ℓ| ≤8/' n·ln20 T/β≤η, and, for a fixed ℓ∈ [], the following holds with probability at least 1 - 0.1β/ T: |ξ^ℓ - 1| ≤4/' n·ln20 /β≤η. Observe that, for a given ^ℓ - 1, z^ℓ - 1, f_t(^ℓ - 1) - f_t(D) is distributed as (0, (σ')^2) for σ' = 2σ/n·f_t(^ℓ - 1) - f_t(D)_2. Thus, by the Gaussian tail bound (<Ref>(ii)) and our setting of parameters, the following holds with probability 1 - 0.2β/ T for fixed ℓ∈ [], t ∈ [T]: z^ℓ - 1, f_t(^ℓ - 1) - f_t(D)≤σ' ·√(2 ln10 T/β)≤η·f_t(^ℓ - 1) - f_t(D)_2. Observe also that v^ℓ - 1 - f_t(^ℓ - 1) = f_t(D) - f_t(^ℓ - 1) + z^ℓ - 1 is distributed as (f_t(D) - f_t(^ℓ - 1), (σ”)^2 I_d) where σ” = 2σ/n. By our choice of parameters, we have σ”≤ 0.1/log(10 /βη) As a result, we can apply <Ref> with Z = v^ℓ - 1 - f_t(^ℓ - 1) and being the uniform distribution over {f_t(x_1), …, f_t(x_n)}. This allows us to conclude that the following holds with probability at least 1 - 0.2 β/ for every ℓ∈ []: |v^ℓ - 1 - f_t(^ℓ - 1), f_t^(D) - f_t(D)| ≤ 2 exp-0.1/(σ”)^2≤η^2. Similarly, we can apply <Ref> with the same Z but with being the distribution where each (x^_i, y') has probability mass p^ℓ - 1_i(y')/n to conclude that the following holds with probability at least 1 - 0.2 β/ for every ℓ∈ []: |v^ℓ - 1 - f_t(^ℓ - 1), f_t^(^ℓ - 1) - f_t(^ℓ - 1)| ≤ 2 exp-0.1/(σ')^2≤η^2. By a union bound, all of <ref> hold for all ℓ∈ [], t ∈ [T] with probability at least 1 - β. We assume that these inequalities hold throughout the remainder of the analysis. Observe that, for a given ^ℓ - 1, z^ℓ - 1, f_t(p^ℓ - 1) - f_t(D) is distributed as (0, (σ')^2) for σ' = 2σ/n·f_t(p^ℓ - 1) - f_t(D)_2 ≤4σ/n and, for a given x_i, y, z^ℓ - 1, f_t(x_i, y) is distributed as (0, (σ”)^2) for σ” = 2σ/n·f_t(x_i, y)_2 ≤2σ/n. By the standard tail bound of the Gaussian and Laplace noises (<Ref>) together with a union bound, we have that the following all hold simultaneously with probability at least 1 - β: |χ_ℓ| ≤4/' n·ln10 T/β ≤η ∀ℓ∈ [], |ν_t, ℓ| ≤8/' n·ln10 T/β ≤η ∀ℓ∈ [], t ∈ [T] z^ℓ - 1, f_t(p^ℓ - 1) - f_t(D) ≤4σ/n·√(ln10 T/β) ≤η ∀ℓ∈ [], t ∈ [T] |z^ℓ - 1, f_t(x_i, y)| ≤2σ/n·√(ln10 T k/β) ≤ 1 ∀ℓ∈ [], t ∈ [T], i ∈ [n], y ∈ where the second inequality on each line is from our setting of parameters. From <ref>, if we break the loop, we must have f_t(^ℓ - 1) - f_t(D)_2 < τ + χ_ℓ - ν_t, ℓ≤τ + 2η≤ 18η. This means that whenever the algorithm outputs an estimate, it has ℓ_2-error of at most 18η≤ O√(ln k/ρ)·√(ln(T n/β) + lnln k + ln(1/ρ))/√(n) as desired. 
As a result, it suffices to show that the algorithm never outputs “FAIL”. On the other hand, if is called, we must have f_t(^ℓ - 1) - f_t(D)_2 ≥τ + χ_ℓ - ν_t, ℓ≥τ - 2η≥ 14η, implying <Ref><ref> (for v = v^ℓ - 1, f = f_t, c = 3). Moreover, <ref> are exactly equivalent to <Ref><ref><ref><ref> respectively. Furthermore, by <ref>, we also have ^ℓ - 1 = f_t(^ℓ - 1) - f_t(D)_2 + ξ^ℓ - 1≥ 14η - η = 13η, and ^ℓ - 1 = f_t(^ℓ - 1) - f_t(D)_2 + ξ^ℓ - 1≤f_t(^ℓ - 1) - f_t(D)_2 + η < 2 ·f_t(^ℓ - 1) - f_t(D)_2, which mean that <Ref><ref><ref> hold, respectively. In other words, <Ref> holds. From this, we may apply <Ref> to conclude that the number of applications of is less than ln k/η^2≤. Thus, the algorithm never outputs “FAIL”. This completes the proof of the accuracy guarantee. § FROM ONLINE LINEAR VECTOR QUERIES TO CONVEX OPTIMIZATION In this section, we prove our main results for convex optimization with , as formalized below. We note that here we formulate it as the problem of solving m linear queries problems w.r.t. losses ℓ_1, …, ℓ_m. The expected excess risk guarantee is for all of these m problems[We say that the expected excess risk is e if, for all i ∈ [m], we have [_i(w_i; D) - min_w' ∈_i_i(w'; D)] ≤ e for all i ∈ [m] where _i denotes the empirical risk and w_i denotes the output of the algorithm for the ith instance.]. Suppose that the loss functions ℓ_1, …, ℓ_m are G-Lipschitz and _1, …, _m ⊆_2(R). For every δ∈ (0, 1/2) and ∈ (0, ln(1/δ)), there is an (, δ)-algorithm for ERM problems w.r.t. ℓ_1, …, ℓ_m with expected excess risk ORG ·√(ln k ·ln(1/δ))·√(ln(mn) + lnln k + ln√(ln(1/δ))/)/√( n). Suppose that the loss functions ℓ_1, …, ℓ_m are G-Lipschitz, μ-strongly convex, and λ-smooth. For every δ∈ (0, 1/2) and ∈ (0, ln(1/δ)), there is an (, δ)-algorithm for ERM problems w.r.t. ℓ_1, …, ℓ_m with expected excess risk OG^2/μ·√(ln k ·ln(1/δ))·ln(m n λ / μ) + lnln (G/μ) + lnln k + ln√(ln(1/δ))// n. As stated in the Introduction, these results are shown via simple applications of known optimization algorithms with approximate gradients. The two cases use slightly different notion of approximate gradients, which we will explain below. §.§ Convex Case via Approximate Gradient Oracle For the convex case, we use the following definition of approximate gradient oracle. For any convex function F: →, an ξ-approximate gradient oracle of F provides (w) for any queried w ∈ such that the following holds: |(w) - ∇ F(w), y - u| ≤ξ for all y, u ∈. Under the above condition, standard gradient descent achieves a similar excess risk to the exact gradient case except that there is an extra ξ term: For any G-Lipschitz convex function F: → with ⊆_2(D) and any q ∈, there exists an algorithm that makes q queries to an ξ-approximate gradient oracle and achieves an excess risk of ODG/√(q) + ξ. We are now ready to prove <Ref>. [<Ref>] Let q = n^2 and T = m · q. We simply run the algorithm from <Ref> for each i ∈ [m] and, for the tth query to approximate gradient oracle, we invoke algorithm from <Ref> with f_q(i - 1) + t(x) = 1/G∇ℓ_i(w; x) and scale the answer back by a factor of G. From <Ref>, with probability 1 - β, this is an (2RG ·α)-approximate gradient oracle for α = O(√(ln k ·ln(1/δ))·√(ln(T n/β) + lnln k + ln√(ln(1/δ))/)/√( n)). When this occurs, <Ref> implies that the excess risk is at most RG/√(q) + 2RG ·α≤ O(RG ·α). With the remaining probability β, the excess risk is still at most RG. Substituting β = 1/n, we can conclude that the expected excess risk of this algorithm is at most O1/n· RG + RG ·α≤ O(RG ·α). 
Finally, since the algorithm is simply a post-processing of the result from applying <Ref>, it is (, δ)-. §.§ Strongly Convex and Smooth Case via Inexact Oracle For the strongly convex and smooth case, we use the following notion called inexact oracle. For any convex function F: →, a first-order (υ, , )-inexact oracle of F provides ((w), (w)) for any queried w ∈ such that the following holds for all w, w' ∈: /2·w'-w_2^2 ≤ F(w') - ((w) - <(w), w' - w>) ≤/2·w' - w_2^2 + υ. Note that if the gradient and function values are exact (i.e., = F, = ∇), then the above condition holds for υ = 0 when the function F is -strongly convex and -smooth. We use the following relation between the ℓ_2-error of the gradient estimate and inexact oracle. For any μ-strongly convex and λ-smooth F: →, if : →^d is an oracle such that (w) - ∇ g(w)_2 ≤ξ for all w ∈, then there exists : → such that (, ) is an (ξ^2(1/μ + 1/2λ), 2λ, μ/2)-inexact oracle. It should be noted that we do not specify the exact precisely because the optimization algorithm we use does not need this either: For any G-Lipschitz convex function F: → with ⊆_2(D) and any q ∈, α > 0, there exists an algorithm that makes q queries to a first-order (υ, , )-inexact oracle and achieves an excess risk of O( R^2/2·exp(-/· q) + υ) where R denote the distance of the starting point to the optimum. Furthermore, the algorithm only uses the gradient estimate and does not use the function estimate . The proof of <Ref> is almost the same as that of <Ref> except that we now use <Ref> (and <Ref>) instead of <Ref>. [<Ref>] Note that from Lipschitzness and strong convexity, we can assume that the domain _i is contained in _2(R) for R = O(G/μ) for all i ∈ [m]. Let q = ⌈40λ/μ·ln(λ R^2/n) ⌉ and T = m · q. We simply run the algorithm from <Ref> for each i ∈ [m], and for the tth query to approximate gradient oracle, we invoke the algorithm from <Ref> with f_q(i - 1) + t(x) = 1/G∇ℓ_i(w; x) and scale the answer back by a factor of G. From <Ref>, with probability 1 - β, this oracle has ℓ_2-error at most G ·α for α = O(√(ln k ·ln(1/δ))·√(ln(T n/β) + lnln k + ln(√(ln(1/δ))/))/√( n)). By <Ref>, this[Since the algorithm in <Ref> does not use the value from the oracle, we do not need to specify it explicitly.] yields an (υ, 2λ, μ/2)-inexact oracle for υ = O((Gα)^2 ·(1/λ + 1/μ)). When this occurs, <Ref> implies that the excess risk is at most O( R^2/2·exp(-/· T) + υ) ≤ O(υ). With the remaining probability β, the excess risk is still at most DG = O(G^2/μ). Substituting β = 1/n, we can conclude that the expected excess risk of this algorithm is at most O(1/n·G^2/μ + υ) ≤ O(υ). Finally, since the algorithm is simply a post-processing of the result from applying <Ref>, it is (, δ)-. § CONCLUSION AND OPEN QUESTIONS We gave improved bounds for convex ERM with ; crucially they show that the dependency on k is only polylogarithmic instead of polynomial as in previous works. As an intermediate result, we give an algorithm for answering (online) linear vector queries. Given that linear queries are used well beyond convex optimization, we hope that this will find more applications. An obvious open question is to close the gap between the upper and lower bounds. Another interesting question is to come up with a pure-DP algorithm with a similar bound as in <Ref>. In particular, it is open if there is any pure-DP algorithm where the error depends only polylogarithmically on k. 
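As a concluding illustration of the reduction in the previous section, the sketch below runs projected gradient descent with gradients supplied by the private linear-vector-query mechanism. The step-size schedule is a standard but illustrative choice and iterate averaging is omitted, so this is not the exact algorithm from the cited optimization results.

```python
import numpy as np

def dp_erm_via_linear_queries(query_mechanism, grad_fn, G, R, w0, num_steps):
    """Sketch of the reduction: each gradient of the empirical risk is obtained by
    asking the private mechanism the linear vector query x -> grad_fn(w, x) / G,
    rescaling by G, taking a step, and projecting back onto the l2-ball of radius R.

    query_mechanism(f): private estimate of (1/n) sum_i f(x_i)
    grad_fn(w, x)     : per-example (sub)gradient of the loss at w, with norm <= G
    """
    w = np.array(w0, dtype=float)
    for t in range(1, num_steps + 1):
        f_t = lambda x, w=w: grad_fn(w, x) / G   # a linear vector query with ||f_t(x)||_2 <= 1
        g = G * query_mechanism(f_t)             # approximate gradient of the empirical risk
        w = w - (R / (G * np.sqrt(t))) * g       # illustrative step size
        norm = np.linalg.norm(w)
        if norm > R:
            w = w * (R / norm)                   # projection onto the constraint ball
    return w
```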
§ ADDITIONAL PRELIMINARIES In this section, we give some more background on the DP mechanisms from literature that we use as subroutines. We start with sensitivity, the Gaussian mechanism, and the Laplace mechanism. For any query g: ^n →^d and p ≥ 1, its ℓ_p-sensitivity is defined as Δ_p(g) := max_D, D'g(D) - g(D')_p where the maximum is over all neighboring datasets D, D'. The Gaussian mechanism for a function g: ^n →^d simply outputs g(D) + Z on input D where Z ∼(0, σ^2 I_d). The zCDP property of Gaussian mechanism is well known: The Gaussian mechanism is ρ-zCDP for ρ = 0.5Δ_2(g)^2 / σ^2. The Laplace mechanism for a function g: ^n →^d simply outputs g(D) + Z on input D where Z ∼(a)^⊗ d. The Laplace mechanism has been shown to be DP in the original work of <cit.>. The Laplace mechanism is -DP for = Δ_1(g) / a. Another tool we use is the so-called AboveThreshold mechanism, from the Sparse Vector Technique <cit.>. This mechanism is shown in <Ref>, following the presentation in <cit.>. Despite the fact that we handle multiple queries, AboveThreshold only requires a constant amount of noise and satisfies pure-DP: Suppose that each of g_1, …, g_T has sensitivity at most Δ. Then, AboveThreshold (<Ref>) is -DP. § PROOF OF <REF> We will use the following tail bounds for Laplace and Gaussian distributions. (i) _X ∼(a)[|X| ≥ t] ≤ 2 exp(-t/a). (ii) _X ∼(0, σ^2)[|X| ≥ t] ≤ 2exp(-0.5(t/σ)^2). Using the above tail bound, it is relatively simple to show <Ref>. [<Ref>] By Markov's inequality, it suffices to show that _Z[|<Z, _U ∼[_Z, 3(U)] - μ_>|] ≤ 4 exp(-0.2/σ_Z^2). To show this, first observe that _Z[|<Z, _U ∼[_Z, 3(U)] - μ_>|] = _Z[|_U ∼[<Z, _Z, 3(U)> - <Z, U>]|] = _Z[|_U ∼[_3(<Z, U>) - <Z, U>]|] ≤_Z[_U ∼[|_3(<Z, U>) - <Z, U>|]] = _U ∼[_Z[|_3(<Z, U>) - <Z, U>|]] ≤sup_u ∈_2^d(1)_Z[|_3(<Z, u>) - <Z, u>|], where the first inequality follows from Jensen. Let us now fixed u ∈_2^d(1). Observe that _Z[|_3(<Z, u>) - <Z, u>|] ≤_Z[|<Z, u>| ·[|<Z, u>| > 3]] ≤√(_Z[<Z, u>^2] ·_Z[[|<Z, u>| > 3]])≤√(_Z[<Z, u>^2] ·_Z[[|<Z, u>| > 3]]), where the second inequality follows from Cauchy–Schwarz. Notice further that <Z, u> is distributed as (μ', σ') for μ' = <μ_Z, u> and σ' = σ_Z ·u_2 ≤σ_Z. Moreover, since μ_Z ∈_2^d(2), we have |μ'| ≤ 2. As a result, its second moment satisfies _Z[<Z, u>^2] = (μ')^2 + (σ')^2 ≤ 2 + σ_Z^2 ≤ 3. Moreover, applying <Ref>(ii), we can conclude that _Z[|<Z, u>| > 3] ≤ 2exp(-0.5 / σ_Z^2). Plugging these back into (<ref>), we get _Z[|_3(<Z, u>) - <Z, u>|] ≤√(6exp(-0.5 / σ_Z^2))≤ 4 exp(-0.2/σ_Z^2). From this and (<ref>), we can conclude that (<ref>) holds as desired. § EXCESS RISK LOWER BOUND In this section, we prove a nearly-matching lower bound on the excess risk. We will use “group privacy” bound in our lower bound proof. For any neighboring relationship ∼, we use ∼_r to denote the relationship where two datasets D, D' are considered neighbors if there exists a sequence D = D_0, D_1, …, D_r = D' such that D_i-1∼ D_i for all i ∈ [r]. Suppose that is (, δ)-DP w.r.t. ∼, then it is (',δ')-DP w.r.t. ∼_r for ' = r and δ' = e^r-1/e^ - 1·δ. Our lower bounds are stated formally below. For any , δ, R, G > 0 and n, k ∈ such that ≤ln k and δ≤0.4 /k, there exists a G-Lipschitz convex loss function ℓ: ×→, where ⊆_2(R), such that any (, δ)-algorithm for ERM w.r.t on ℓ has expected excess empirical risk at least Ω(DG ·min{1, √(log k)/√( n)}). 
For any , δ, G, μ > 0 and n, k ∈ such that ≤ln k, δ≤0.4 /k, there exists a G-Lipschitz μ-strongly convex μ-smooth loss function ℓ: ×→ such that any (, δ)-algorithm for ERM w.r.t ℓ has expected excess empirical risk at least Ω(G^2/μ·min{1, log k/ n}). §.§ Convex Case To show the convex case (<Ref>), it will in fact be convenient to first prove a (smaller) lower bound that holds even against very large (up to O(ln k)), as stated more formally below. For any , δ, R, G > 0 and n, k ∈ such that e^/e^ + k - 1 + δ < 0.99, there exists a G-Lipschitz convex loss function ℓ: ×→, where ⊆_2(R), such that any (, δ)-algorithm for ERM w.r.t ℓ has expected excess empirical risk at least Ω(RG/√(n)). Let ^ = [n], ^ = [k], d = nk, and = _2^d(R). For x^∈^, y ∈^, we write j(x^, y) as a shorthand for k(x^ - 1) + y. Finally, let ℓ: × (^×^) → be ℓ(w, (x^, y)) = - G ·<w, e_j(x^, y)>, where e_j ∈^d denotes the jth vector in the standard basis for all j ∈ [d]. Let D = {x_i}_i ∈ [n] be the input dataset generated as follows: * Sample y_1, …, y_n ∼ [k] independently and uniformly at random and * Let x_i = (i, y_i) for all i ∈ [n]. Let : (×)^n → denote any (,δ)-algorithm. For i ∈ [n], we write w(i) as a shorthand for (w_(i - 1)k + 1, …, w_ik). We have[Here ties can be broken arbitrarily for .] _D, ∼(D)[<(i), e_y_i> > 1/√(2)(i)_2] ≤_D, ∼(D)[y_i = _y ∈ [k]<(i), e_y> ] (▪)=1/k∑_y' ∈ [k]_D, ∼(D)[y' = _y ∈ [k]<(i), e_y> | y_i = y' ] = 1/k∑_y' ∈ [k](e^/e^ + k - 1·_D, ∼(D)[y' = _y ∈ [k]<(i), e_y> | y_i = y' ] + k - 1/e^ + k - 1·_D, ∼(D)[y' = _y ∈ [k]<(i), e_y> | y_i = y' ] ) ()≤1/k∑_y' ∈ [k](e^/e^ + k - 1·_D, ∼(D)[y' = _y ∈ [k]<(i), e_y> | y_i = y' ] + k - 1/e^ + k - 1(e^·_D, ∼(D)[y' = _y ∈ [k]<(i), e_y> | y_i y' ] + δ) ) ≤1/k∑_y' ∈ [k](e^/e^ + k - 1·_D, ∼(D)[y' = _y ∈ [k]<(i), e_y>] + δ) = e^/e^ + k - 1 + δ, where (▪) follows from y_i ∼ y' and () follows from the (, δ)-guarantee of . Now, let I_, D denote the set {i ∈ [n] |<(i), e_y_i> > 1/√(2)(i)_2}. The above inequality implies that _D, ∼(D)[|I_, D|] ≤(e^/e^ + k - 1 + δ) n ≤ 0.99n, where the inequality is from our assumption on , δ, k. Meanwhile, |I_, D| can be used to bound the loss function as follows. (w; D) = 1/n∑_i ∈ [n]ℓ(w, (i, y_i)) = -G/n∑_i ∈ [n]<w, e_j(i,y_i)> = -G/n∑_i ∈ [n]<w(i), e_y_i> = -G/n[(∑_i ∈ I_w, D<w(i), e_y_i>) + (∑_i ∉ I_w, D<w(i), e_y_i>)] ≥-G/n[(∑_i ∈ I_w, Dw(i)_2) + (∑_i ∉ I_w, D1/√(2)w(i)_2)] (▴)≥-L/n[√((∑_i ∈ I_w, D 1) + (∑_i ∉ I_w, D1/2))·√((∑_i ∈ I_w, Dw(i)^2_2) + (∑_i ∉ I_w, Dw(i)^2_2))] = -G/n·√(n/2 + |I_w, D|/2)·w_2 ≥-RG/n·√(n/2 + |I_w, D|/2), where (▴) follows from Cauchy–Schwarz. Note that, by picking w^* = R/√(n)∑_i ∈ [n] e_j(i,y_i), we have (w^*; D) = -RG/√(n). Thus, the excess risk is (w; D) - (w^*; D) ≥RG/n(√(n) - √(n/2 + |I_w, D|/2)). As a result, the expected excess risk of is _D, ∼(D)[(, D) - (w^*, D)] ≥_D, ∼(D)[RG/n(√(n) - √(n/2 + |I_, D|/2))] ≥RG/n(√(n) - √(n/2 + _D, ∼(D)[|I_, D|]/2)) (<ref>)≥RG/n(√(n) - √(0.995 n)) ≥Ω(RG/√(n)). <Ref> now easily follows from applying the group privacy bound (<Ref>). [<Ref>] Let r = ⌊ln k/⌋. We will henceforth assume that n ≥ r; otherwise, we can instead apply a lower bound for largest k' such that log k'/≤ n instead (which would give a lower bound of Ω(RG) already). Suppose for the sake of contradiction that there is an (, δ)-algorithm that yields o(RG √(log k)/√( n)) excess risk in the aforementioned setting in the theorem statement. We assume w.l.o.g.[Otherwise, we may simply add dummy input points with constant loss functions.] that n is divisible by r; let n' = n/r. 
Let ' be an algorithm that takes in n' points, replicates each input datapoint r times and then runs . From <Ref>, ' is (',δ')-for ' = r and δ' = e^r-1/e^ - 1·δ. The expected excess risk of ' is o(RG√(log k)/√( n)) = o(RG/√(n')). Furthermore, we have e^'/e^' + k - 1 + δ' = e^r/e^r + k - 1 + e^r - 1/e^ - 1·δ≤k/2k - 1 + k/·δ≤ 0.9. This contradicts <Ref>. §.§ Strongly Convex (and Smooth) Case The strongly convex and smooth case proceeds in very much the same way except we use the squared loss instead. For any , δ, G, μ > 0 and n, k ∈ such that e^/e^ + k - 1 + δ < 0.99, there exists an G-Lipschitz μ-strongly convex μ-smooth loss function ℓ: ×→ such that any (, δ)-algorithm for ERM w.r.t ℓ has expected excess empirical risk at least Ω(G^2/μ n). We use the same notation as in the proof of <Ref> with R = 0.5 G/μ, except that we let the loss function be ℓ(w, (x, y)) = μ/2 w - R · e_j(x,y)_2^2. It is simple to see that ℓ is μ-strongly convex, μ-smooth, and G-Lipschitz. Similar to the proof of <Ref>, we can prove (<ref>). From this, we rearrange the excess risk (where w^* := R/n∑_i ∈ [n] e_j(i, y_i)) as follows: (w; D) - (w^*; D) = μ/2w - w^*_2^2 = μ/2∑_i ∈ [n]w(i) - R/n· e_y_i_2^2 ≥μ/2∑_i ∉ I_w, Dw(i) - R/n· e_y_i_2^2 = μ/2∑_i ∉ I_w, D(w(i)^2 - 2R/n<w(i), e_y_i> + R^2/n^2) ()≥μ/2∑_i ∉ I_w, D(w(i)_2^2 - R√(2)/n·w(i)_2 + R^2/n^2) = μ/2∑_i ∉ I_w, D((w(i)_2 - R/n√(2))^2 + R^2/2n^2) ≥μ R^2/4n^2· (n - |I_w, D|), where () follows from the definition of I_w, D. Thus, we have _D, ∼(D)[(, D) - (w^*, D)] ≥μ R^2/4n^2(n - _D, ∼(D)[|I_, D|]) (<ref>)≥Ω(μ R^2/n) ≥Ω(G^2/μ n). Again, <Ref> easily follows via group privacy. [<Ref>] Let r = ⌊ln k/⌋. We will henceforth assume that n ≥ r; otherwise, we can instead apply a lower bound for smallest k' such that log k'/≤ n instead (which gives a lower bound of Ω(G^2/μ) already). Suppose for the sake of contradiction that there is an algorithm that yields o(G^2/μ·log k/ n) excess risk in the aforementioned setting in the theorem statement. We assume w.l.o.g. that n is divisible by r; let n' = n/r. Let ' be an algorithm that takes in n' points, replicates each input data point r times and then runs . From <Ref>, ' is (',δ')-for ' = r and δ' = e^r-1/e^ - 1·δ. The expected excess risk of ' is o(G^2/μ·log k/ n) = o(G^2/μ·1/n'). Similar to the calculation in the proof of <Ref>, we have e^'/e^' + k - 1 + δ' ≤ 0.9. Thus, this contradicts <Ref>.
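To close this appendix, here is a small numerical sketch of the hard instance behind the convex lower bound: labels y_1, ..., y_n are drawn uniformly from [k], each loss rewards the single coordinate j(i, y_i), and the comparator w* places mass R/sqrt(n) on those coordinates, attaining empirical risk -RG/sqrt(n). The 0-indexed coordinate convention and the final sanity check are ours; this is an illustration of the construction, not part of the proof.

```python
import numpy as np

def sample_labels(n, k, seed=0):
    """Hidden labels y_1..y_n drawn uniformly from [k] (0-indexed here)."""
    return np.random.default_rng(seed).integers(k, size=n)

def empirical_risk(w, y, n, k, G=1.0):
    """L(w; D) = -(G/n) * sum_i <w, e_{j(i, y_i)}> with j(i, y) = i*k + y (0-indexed)."""
    return -(G / n) * w[np.arange(n) * k + y].sum()

def comparator(y, n, k, R=1.0):
    """w* = (R / sqrt(n)) * sum_i e_{j(i, y_i)}, which attains risk -R*G/sqrt(n)."""
    w = np.zeros(n * k)
    w[np.arange(n) * k + y] = R / np.sqrt(n)
    return w

n, k = 100, 16
y = sample_labels(n, k)
print(empirical_risk(comparator(y, n, k), y, n, k), -1.0 / np.sqrt(n))  # both equal -0.1
```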
http://arxiv.org/abs/2406.18962v1
20240627074517
Multi-modal Food Recommendation using Clustering and Self-supervised Learning
[ "Yixin Zhang", "Xin Zhou", "Qianwen Meng", "Fanglin Zhu", "Yonghui Xu", "Zhiqi Shen", "Lizhen Cui" ]
cs.IR
[ "cs.IR" ]
Multi-modal Food Recommendation Y. Zhang et al. School of Software, Shandong University, China Joint SDU-NTU Centre for Artificial Intelligence Research (C-FAIR), Shandong University, China Joint NTU-Webank Research Institute on Fintech, Nanyang Technological University, Singapore Joint NTU-UBC Research Centre of Excellence in Active Living for the Elderly, Nanyang Technological University, Singapore College of Computing and Data Science, Nanyang Technological University, Singapore {yixinzhang,mqw_sdu,zfl}@mail.sdu.edu.cn, {xin.zhou,zqshen}@ntu.edu.sg, xu.yonghui@hotmail.com, clz@sdu.edu.cn Multi-modal Food Recommendation using Clustering and Self-supervised Learning Yixin Zhang1,2 Xin Zhou3 Qianwen Meng1,2 Fanglin Zhu1,2 Yonghui Xu2 Zhiqi Shen4,5 Lizhen Cui1,2 July 1, 2024 ===================================================================================================== § ABSTRACT Food recommendation systems serve as pivotal components in the realm of digital lifestyle services, designed to assist users in discovering recipes and food items that resonate with their unique dietary predilections. Typically, multi-modal descriptions offer an exhaustive profile for each recipe, thereby ensuring recommendations that are both personalized and accurate. Our preliminary investigation of two datasets indicates that pre-trained multi-modal dense representations might precipitate a deterioration in performance compared to ID features when encapsulating interactive relationships. This observation implies that ID features possess a relative superiority in modeling interactive collaborative signals. Consequently, contemporary cutting-edge methodologies augment ID features with multi-modal information as supplementary features, overlooking the latent semantic relations between recipes. To rectify this, we present CLUSSL, a novel food recommendation framework that employs CLUstering and Self-Supervised Learning. Specifically, CLUSSL formulates a modality-specific graph tailored to each modality with discrete/continuous features, thereby transforming semantic features into structural representation. Furthermore, CLUSSL procures recipe representations pertinent to different modalities via graph convolutional operations. A self-supervised learning objective is proposed to foster independence between recipe representations derived from different unimodal graphs. Comprehensive experiments on real-world datasets substantiate that CLUSSL consistently surpasses state-of-the-art recommendation benchmarks in performance. § INTRODUCTION Dietary intake is a critical determinant of human health and well-being, impacting both physiological and psychological states. Food recommendation systems (FRSs) aim to leverage user interaction data, multi-modal content, and ingredient information to personalize recipe suggestions that align with individual dietary needs. FRSs have seen notable advancements in recent years, driven by increased research interest and a wider range of lifestyle applications <cit.>. With the pervasive presence of multi-modal data linked to recipes, contemporary advancements have witnessed the assimilation of abundant metadata, encompassing elements such as ingredients and visual attributes, to enhance the precision of recommendations. For example, HAFR <cit.> jointly models interaction data, ingredients, and visual features. Similarly, SCHGN <cit.> further considers higher-order relationships and calorie intake preferences. 
However, these methods predominantly leverage user, recipe, and ingredient IDs to capture collaborative signals, relegating multi-modal information to secondary feature roles. Meanwhile, our preliminary investigation (Table <ref>) demonstrates that directly utilizing continuous representations (embeddings) derived from multi-modal features (e.g., image and text embeddings) for collaborative filtering (CF) tasks may degrade recommendation performance. In contrast, using discrete features such as IDs yields superior results. Interestingly, employing ingredient information, also a discrete feature, as recipe embeddings leads to competitive performance. These findings suggest that discrete features are more effective in capturing the underlying structure of the data. Consequently, multi-modal models like FREEDOM <cit.> propose an item-item graph construction based on similarities between transformed multi-modal features, where item IDs encapsulate the structural relationships derived from these features. While it can represent some level of similarity through item-item graphs, it fails to explicitly model the underlying categorical taxonomy (e.g., desserts, casseroles) that defines a crucial aspect of recipe semantics. This deficiency hinders its capacity to effectively integrate and discriminate between latent thematic categories within the culinary domain. To this end, this work proposes CLUSSL, a novel multi-modal food recommendation model that employs clustering and self-supervised learning. Specifically, CLUSSL utilizes a novel two-stage approach to exploit multi-modal recipe information for recommendation. In the first stage, unsupervised clustering is applied to unimodal data (e.g., image features, text embeddings) with pre-trained continuous representations. These clusters act as “prototype nodes”, summarizing the key semantic features within each modality. Subsequently, for each modality, a modality-specific graph is constructed to capture the relationships between these prototype nodes. This graph construction leverages ID features to encode the inherent structure of the data. Graph convolutional networks are then employed to effectively propagate and aggregate these semantic relationships across the graphs, allowing CLUSSL to exploit the rich relational information embedded within each modality. Furthermore, CLUSSL incorporates a distance correlation constraint within a self-supervised learning framework. This constraint ensures that the learned recipe representations from different modalities (e.g., image-based and text-based) maintain a degree of independence. By combining these techniques, CLUSSL leverages the strengths of both unimodal data and multi-modal relationships, leading to demonstrably improved recommendation accuracy compared to existing baselines, as confirmed by extensive evaluations on real-world datasets. § RELATED WORK In this section, we review the most relevant existing methods in food recommendation and multi-modal recommendation. §.§ Food Recommendation In response to the diversified needs for food, FRSs are committed to providing accurate and personalized recommendation services for users. Recently, leveraging recipe metadata, such as ingredients and images, within the general CF framework has proven effective in mitigating data sparsity challenges and refining user preference modeling. HAFR <cit.> develops a hierarchical attention mechanism to incorporate the feature interaction between metadata for recommendation. 
Moving beyond traditional methods, FGCN <cit.> explores leveraging graph neural networks to capture high-order relationships through graph propagation and aggregation. Furthermore, SCHGN <cit.> captures the user’s preference on food calories with self-supervised heterogeneous graph network. In addition, RecipeRec <cit.> constructs heterogeneous recipe graphs to model the structure. GreenRec <cit.> accommodates health criteria into recommender to promote sustainable lifestyles. §.§ Multi-modal Recommendation Multi-modal recommendation systems (MMRSs) strive to enhance items' representation for better recommendation performance by incorporating features from different modalities, including textual content and visual content <cit.>. Early studies, such as VBPR <cit.>, incorporate multi-modal contents with items' ID embeddings to extend the general CF framework. Inspired by the great success of graph-based recommendation methods <cit.>, MMGCN <cit.> attempts to inject high-order semantics into user/item representations via message passing on modality-specific graphs. From another perspective, FREEDOM <cit.> constructs item-item semantic graph from pre-trained multi-modal features to supplement the user-item interactions. An emerging trend is the integration of self-supervised learning methods into MMRSs, showcasing a notable boost. BM3 <cit.> and LGMRec <cit.> improve user and item representation learning by adopting contrastive learning objective. DRAGON <cit.> utilizes homogeneous graphs based on item multi-modal information and a heterogeneous bipartite user-item graph for effective recommendation. DGVAE <cit.> disentangles the user-item interactions and multi-modal information with graph learning. § PRELIMINARIES In this section, we formulate the problem of food recommendation with multi-modal information and introduce the construction of the modality-specific graph. §.§ Problem Formulation We formally define the problem as follows. Let U denote the set of users and I denote the set of food recipes. The user-recipe binary interaction matrix is denoted by Y∈{0, 1}^|U| × |I|, where |U| and |I| represent the cardinalities of users and recipes, respectively. Each entry Y_u, i = 1 indicates that user u has interacted with recipe i. Associated with each recipe i is the following modality information: 1) modality information with discrete features. We only consider recipe-ingredient relationships, denoted as M_A_i∈{0, 1}^|A|. The collection of ingredients across all recipes forms the complete ingredient set A, with cardinality |A|. 2) modality information with continuous features. In this work, we focus on visual features M_v_i∈R^d_v and textual features M_t_i∈R^d_t. Specifically, M_v_i are obtained from image of recipe i through ResNet <cit.>, while M_t_i are extracted from textual description of recipe i using a pre-trained T-5 model <cit.>. Based on the given notation, our objective is to predict the probability that user u will consume recipe i, taking into account user-recipe interactions and the recipe's multi-modal information. §.§ Construction of Modality-Specific Graphs Although heterogeneous graphs can include multiple types of vertices and edges, which helps to alleviate data sparsity and improve recommendation effectiveness <cit.>, their complex structure poses significant challenges for model training. In this work, we propose constructing modality-specific bipartite graphs based on the discrete or continuous features for each modality. 
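The formal construction of these bipartite graphs follows in the next subsections; as a rough preview of the continuous-feature case, the sketch below clusters pre-trained per-recipe embeddings into prototype nodes with K-means and links every recipe to its top-k nearest prototypes. It assumes scikit-learn and SciPy; the number of prototypes, the feature dimensionality, and the random placeholder features are ours and only illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans
from scipy.sparse import csr_matrix

def build_prototype_graph(features, n_prototypes, top_k, seed=0):
    """Cluster per-recipe features (|I| x d_m) into prototype nodes and connect each
    recipe to its top_k nearest prototypes, i.e. build the 0/1 relation matrix
    R^m of shape (|N_m|, |I|) defining one modality-specific bipartite graph."""
    km = KMeans(n_clusters=n_prototypes, n_init=10, random_state=seed).fit(features)
    dist = km.transform(features)                    # recipe-to-prototype distances
    nearest = np.argsort(dist, axis=1)[:, :top_k]    # indices of the top_k prototypes
    rows = nearest.ravel()                           # prototype node index
    cols = np.repeat(np.arange(features.shape[0]), top_k)   # recipe index
    return csr_matrix((np.ones(rows.size), (rows, cols)),
                      shape=(n_prototypes, features.shape[0]))

# e.g. a visual-modality graph linking each recipe to its 6 nearest prototypes
R_v = build_prototype_graph(np.random.rand(1000, 64), n_prototypes=32, top_k=6)
```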
§.§.§ Construct Bipartite Graph via Discrete Features. Given the ingredient set A= {a} and the recipe set I= {i}, the observed relationships between these sets are captured by the matrix R^A∈{0, 1}^|A| × |I|, where each entry R_a,i^A = 1 if ingredient a belongs to item i, otherwise R_a,i^A = 0. The i-th column denotes the discrete features M_A_i of recipe i regarding the ingredient modality. Based on the relationship matrix R^A, we construct a bipartite graph G_A={(a, R_a,i^A, i)|a ∈A, i ∈I, R_a,i^A=1)}. §.§.§ Construct Bipartite Graph via Continuous Features. According to <cit.>, closely binding item representations with raw modality features learned by pre-trained model can be detrimental. Thus, instead of using raw feature as an individual node representations, we introduce modality-specific cluster centres as “prototype nodes”. Specifically, taking visual features as an example, we utilize K-means to cluster the modality feature vectors into |N_v| cluster centers, i.e., N_v denotes visual prototype nodes. Then we select the top k nearest prototypes to define the visual-specific relationships for each recipe and form the matrix R^v∈{0, 1}^|N_v| × |I|, where each entry R_n,i^v = 1 if prototype node n belongs to top k set of recipe i, otherwise R_n,i^v = 0. Based on the relationship matrix R^v, we construct a bipartite graph G_v={(n, R_n,i^v, i)|n ∈N_v, i ∈I, R_n,i^v=1)}. Similarly, we construct the textual-specific bipartite graph G_t={(n, R_n,i^t, i)|n ∈N_t, i ∈I, R_n,i^t=1)}. § METHODOLOGY This section delves into the individual components of CLUSSL. The overall architecture of the proposed method is illustrated in Fig. <ref>. §.§ Graph Collaborative Filtering Backbone In general, GNN-based CF methods <cit.> produce informative representations for users and items based on the message propagation and aggregation scheme, which exploit higher-order connectivity in the user-item graph and achieve state-of-the-art performance for recommendation. Following <cit.> , we employ an efficient and effective LightGCN <cit.> as backbone to encode the structure of the user-recipe bipartite graph. Formally, we denote this user-recipe graph as G = {(u, Y_u,i, i)|n ∈U, i ∈I, Y_n,i=1)} based on user-recipe interaction matrix Y∈R^|U| × |I|. We obtain the adjacency matrix of the user-item graph as follows: A=[ 0 Y; Y^T 0 ]. Given l-th layer embedding matrix H^l, the simplifed message passing process in LightGCN is defined as: H^l+1 = (D^-1/2AD^-1/2)H^l, where H^0 ∈R^(|U|+|I|) × d is the 0-th layer embedding matrix. User embeddings are randomly initialized and recipes embeddings are learned through different modality-specific graphs (introduced in section <ref>). D is a (|U| + |I|) × (|U| + |I|) diagonal matrix, also called as degree matrix, and the node embeddings of the (l+1)-th layer are only linearly aggregated from the l-th layer with a symmetrically normalized matrix D^-1/2AD^-1/2. Lastly, representations from all hidden layers are aggregated through a readout function to obtain the final embedding matrix used for recommendation: H_u = READOUT(H_u^0, H_u^1, …, H_u^L), H_i = READOUT(H_i^0, H_i^1, …, H_i^L), where H_u and H_i denote the final representations of user u and recipe i, respectively, the READOUT function can be any differentiable function. Common designs include weighted sum, last-layer only, and others. We use the default mean function <cit.> in practice. To generate recipe recommendations for user u, we first predict the interaction scores between the user and candidate recipes. 
Then, we rank candidate recipes based on the predicted scores in descending order and select the top-k recipes as recommendations for the user. The interaction score is calculated as: ŷ_u,i = H_u^T H_i, where ŷ_u,i is the prediction score of user u towards recipe i. A high score suggests that the user prefers the recipe. To capture the collaborative information from implicit feedback, we adopt Bayesian Personalized Ranking (BPR) loss <cit.> in model training. Specifically, BPR loss ensures that the prediction score of the observed interactions higher than sampled unobserved ones. Formally: L_Rec = ∑_(u,i^+,i^-) ∈O -logσ (ŷ_u,i^+-ŷ_u,i^-), where σ is the sigmoid function, O = {(u,i^+,i^-)|Y_u,i^+ = 1, Y_u,i^- = 0} denotes the pairwise training data, and i^- denotes a sampled recipe that user u has not interacted with. §.§ Unimodal Graph Representation Learning Given the modality-specific bipartite graphs G_A, G_v, and G_t towards different modalities, we employ graph convolutional networks to perform information propagation and aggregation. Without losing generality, we use LightGCN as the graph encoder, which similar to Eq. <ref> and Eq. <ref> derived from the corresponding adjacency matrix Eq. <ref>. We transform these discrete features into dense-valued vectors through embedding lookup tables: recipes (E_I∈R^|I| × d), ingredients (E_A∈R^|A| × d), image prototypes (E_v ∈R^|N_v| × d), and text prototypes (E_t ∈R^|N_t| × d), where d is the embedding space dimensionality. Utilizing these embeddings as the initial representations (0-th layer), we perform graph propagation and aggregation on each modality-specific graph to obtain the final representations of recipe i, denoted as E^A_i, E^v_i, and E^t_i, respectively. Based on the embeddings mentioned above, we aggregate these embeddings to define the 0-th embedding of recipe i in Eq. <ref>: H_i^0 = Aggregate(E^A_i, E^v_i, E^t_i). In this work, we simply define Aggregate(·) as vector summation. The embedding H_i^0 encompasses holistic perspective of the recipe's multi-modal profile, which is further used to capture collaborative information from user-recipe interactions. §.§ Cross-modal Self-supervised Learning To ensure the stability of learning recipe representations across different modality-specific graphs, we propose a self-supervised regularization to encourage the recipe representations (i.e., E^A_i, E^v_i, and E^t_i) to preserve sufficient independent information and avoid information redundancy. Although we learn the representations from different modality-specific graphs, there might still be redundancy due to high-order aggregation. To further promote independence among these representations, we adopt the distance correlation <cit.> as a regularization technique. Distance correlation can capture the relationship between representations learned from different modalities while encouraging them to retain informative independence. Formally, we define it as follows: ℒ_Cor =∑_m,n ∈{A, v, t}, m≠ ndCov(E^m_i, E^n_i)/√(dVar(E^m_i) · dVar(E^n_i)), where dCov(·) is the distance covariance between two matrices, and dVar(·) represents its own distance covariance. §.§ Multi-task Learning The proposed CLUSSL can be optimized in an end-to-end manner through a unified objective function, defined as follows: L = L_Rec + λ∑_i ∈ (i^+,i^-)L_Cor + ηΘ_2^2, where λ is trade-off coefficient that balance the contributions of the self-supervised loss. η and Θ represent the L_2 regularization coefficient and model parameters, respectively. 
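A compact PyTorch sketch of the two training terms defined above is given below: the pairwise BPR objective and the distance-correlation regularizer summed over all modality pairs. The L2 weight-decay term is left to the optimizer, and the batching and negative-sampling interface is ours; this is a schematic of the losses, not the authors' training code.

```python
import torch
import torch.nn.functional as F
from itertools import combinations

def bpr_loss(user_emb, pos_emb, neg_emb):
    """Pairwise BPR: an interacted recipe should score above a sampled negative."""
    pos = (user_emb * pos_emb).sum(-1)
    neg = (user_emb * neg_emb).sum(-1)
    return -F.logsigmoid(pos - neg).sum()

def distance_correlation(x, y, eps=1e-9):
    """Distance correlation between two batches of recipe embeddings (B x d)."""
    def centred(a):
        d = torch.cdist(a, a)                        # pairwise Euclidean distances
        return d - d.mean(0, keepdim=True) - d.mean(1, keepdim=True) + d.mean()
    A, B = centred(x), centred(y)
    dcov = torch.sqrt(torch.clamp((A * B).mean(), min=eps))
    dvar_x = torch.sqrt(torch.clamp((A * A).mean(), min=eps))
    dvar_y = torch.sqrt(torch.clamp((B * B).mean(), min=eps))
    return dcov / torch.sqrt(dvar_x * dvar_y)

def clussl_loss(user_emb, pos_emb, neg_emb, modal_embs, lam=0.01):
    """L = L_Rec + lam * L_Cor, with L_Cor summed over all pairs of modalities."""
    l_cor = sum(distance_correlation(modal_embs[m], modal_embs[n])
                for m, n in combinations(modal_embs, 2))
    return bpr_loss(user_emb, pos_emb, neg_emb) + lam * l_cor
```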
§ EXPERIMENTS §.§ Experimental Settings §.§.§ Dataset. We conduct experiments on two datasets collected from real-world platforms https://www.allrecipes.com/www.allrecipes.com and https://www.food.com/www.food.com. Each recipe in the datasets includes its ingredients, texts, images and the corresponding ratings from users, as shown in Fig. <ref>(a). Among them, the ratings in the range of [1,5], we treat each rating as an implicit feedback record. Following <cit.>, we holdout the latest 30% of interaction history to construct the test set, and split the remaining data into training (60%) and validation (10%) sets. The statistics of two datasets after preprocessing are summarized in Table <ref>. §.§.§ Metrics and Evaluation. During training, we sample one negative recipe for each user-recipe pair in the training set. The performance of different methods is assessed by two widely used evaluation metrics: Recall@K and Normalized Discounted Cumulative Gain@K (denoted by NDCG@K), where K is empirically set to 10 and 20. For each metric, we first compute the accuracy for each user on the testing data, and then report the averaged accuracy for all testing users. Following <cit.>, there are 500 sampled negative recipe with popularity bias for one user and her interacted recipes in the test set. §.§.§ Baseline Methods. We compare the proposed CLUSSL with the following baseline methods. General Collaborative Filtering: BPR <cit.>, NeuMF <cit.>, LightGCN <cit.>. Multi-Modal Recommendation: VBPR <cit.>, MMGCN <cit.>, BM3 <cit.>. FREEDOM <cit.>, LGMRec <cit.>. Food Recommendation: HAFR[https://github.com/elisagao122/HAFR] <cit.>, FGCN <cit.>, SCHGN[https://github.com/TAEYOUNG-SYG/SCHGN] <cit.>. §.§.§ Implementation Details. All the baseline methods are implemented by PyTorch [https://pytorch.org/] and evaluated on a NVIDIA TITAN RTX GPU card. For fair comparison, the hyper-parameters of baseline methods are selected following the original paper, and the optimal settings are determined based on the grid-search and validation set. Multi-modal baselines are implemented based on the unified MMRec framework <cit.>. For the proposed CLUSSL, we empirically set batch size to 512, embedding size to 64, learning rates to 0.002 and 0.001 for Allrecipes and Food.com, respectively. Step decay of the learning rate is also adopted. The regularization coefficient η is set to 0.01. The trade-off coefficient λ tuned in {0, 0.001, 0.005, 0.01, 0.05, 0.1, 0.5, 1}. The number of GCN layers, prototype nodes are also searched in grad. The number of top-k nearest prototypes are set to 6 and 10 for Allrecipes and Food.com, respectively. We repeat the experiments five times and report the average results. §.§ Performance Comparison The performance achieved by different recommendation methods on two real-world datasets are summarized in Table <ref>. From the table, we have the following observations. Firstly, the superiority of CLUSSL. The proposed CLUSSL demonstrates significant superiority over general CF models, multi-modal recommendation models and the state-of-the-art food recommendation models across all dataset. This indicates that the proposed CLUSSL is exceptionally well-designed for food recommendation, effectively leveraging both multi-modal information with discrete and continuous features. Secondly, the effectiveness of multi-modal features. By incorporating multi-modal features reasonably, almost all models have achieved better performance. For instance, VBPR outperforms the backbone BPR. 
Multi-modal recommenders based on LightGCN (i.e., BM3, FREEDOM, and LGMRec) basically achieve better results than LightGCN alone. Additionally, HAFR and SCHGN take into account not only visual features but also and ingredients information, demonstrating superior performance over other multi-modal models on Allrecipes. Thirdly, the variability of modal semantics on performance. Almost all multi-modal models showcase improved recommendation results. However, we observe that the performance of BM3 is inferior to LightGCN on Food.com. Moreover, HAFR and SCHGN are designed to deeply integrate multi-modal features with the recipe embeddings for prediction, yet their performance on Food.com is less than satisfactory. One potential reason might be that interactive collaborative signals are more easily captured by deep networks in relatively dense datasets, leading complex multi-modal semantics to contribute adverse effects without well-designed modules. The proposed CLUSSL mitigates this issue by transforming semantic features into structural representation. Finally, the effectiveness of self-supervised learning. SCHGN achieves better performance than HAFR by employing a self-supervised ingredient prediction objective. BM3 and LGMRec leverage contrastive learning as an auxiliary optimization function, achieving notable success. We also partially attribute significant improvements of CLUSSL to self-supervised learning. More in-depth discussions can be found in ablation study and sensitivity analysis. §.§ Ablation Study To study the importance of each component of CLUSSL, we consider the following CLUSSL variants for evaluation: 1) w/o M_v_i, 2) w/o M_t_i, 3) w/o M_A_i. These variants involve removing specific modal features related to recipes, thereby excluding their respective modality-specific graphs. 4) w/o L_Cor: we set λ to 0 in Eq. <ref> to eliminate the self-supervised learning component. Table <ref> summarizes the performance of CLUSSL variants on Allrecipes and Food.com datasets. Several observations are noteworthy: CLUSSL consistently outperforms w/o M_v_i, w/o M_t_i, and w/o M_A_i across both datasets, indicating that removing any multi-modal information leads to decreased performance. This underscores the utility of all modalities in enhancing recommendation accuracy. Additionally, the effectiveness of these modalities varies. Generally, visual features prove more crucial than textual or ingredient features. Textual information often contains redundant data, while ingredient features are confined by predefined sets. In contrast, visual features vividly depict food appearance, color, shape, and composition, crucial for stimulating user interest and appetite, thereby offering significant advantages. Furthermore, experiments w/o L_Cor highlight the role of the self-supervised learning in improving recipe representations for food recommendation. By promoting distinctiveness across modalities, CLUSSL ensures independence among multi-modal information, thereby enhancing recommendation effectiveness. §.§ Parameter Sensitivity Study We also perform experiments to study the impacts of three hyper-parameters: the trade-off coefficient λ, the number of prototypes in graph G_v and G_t, and the top-k nearest prototypes. Fig. <ref> shows the performance of CLUSSL with respect to different settings of these three hyper-parameters on both datasets. As depicted in Fig. 
<ref>, the results of the coefficient λ on the two datasets showcase a consistent trend: the performance first improves, reaches an optimum, and then declines as λ increases. These results suggest that reasonable self-supervised constraints enable recipe representations to retain more comprehensive information, thereby enhancing recommendation performance. Regarding the number of prototypes, the performance trend appears relatively stable. Additionally, the best performance is achieved by setting top-k to 6 and 10 on the Allrecipes and Food.com datasets, respectively. § CONCLUSION In this paper, we propose a novel food recommendation model, namely CLUSSL, which employs clustering to transform modal semantics into a modality-specific graph for each modality. Moreover, CLUSSL leverages cross-modal self-supervised learning to encourage recipe representations induced by different modality-specific graphs to preserve sufficiently independent information. The backbone network further benefits from these independent representations under the self-supervised constraints, which provides superior performance. Experimental results on two real-world datasets verify that CLUSSL consistently outperforms state-of-the-art baseline recommendation models.
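For reference, the two ranking metrics used in the evaluation protocol above can be sketched in a few lines; this is a generic binary-relevance implementation computed per user and then averaged, not the authors' evaluation code, and the toy inputs at the end are ours.

```python
import numpy as np

def recall_ndcg_at_k(ranked_items, relevant_items, k=10):
    """Recall@K and NDCG@K for one user, given a ranked candidate list and the
    set of held-out test recipes."""
    hits = np.isin(ranked_items[:k], list(relevant_items))
    recall = hits.sum() / max(len(relevant_items), 1)
    dcg = (hits / np.log2(np.arange(2, k + 2))).sum()
    idcg = (1.0 / np.log2(np.arange(2, min(len(relevant_items), k) + 2))).sum()
    return recall, (dcg / idcg if idcg > 0 else 0.0)

# e.g. one user with held-out recipes {2, 7, 3} and a ranked top-5 candidate list
print(recall_ndcg_at_k(np.array([5, 2, 9, 1, 7]), {2, 7, 3}, k=5))
```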
http://arxiv.org/abs/2406.18432v1
20240626153015
Cloud structure and young star distribution in the Dragonfish complex
[ "Nestor Sanchez", "Elisa Nespoli", "Marta Gonzalez", "Juan B. Climent" ]
astro-ph.GA
[ "astro-ph.GA" ]
Cloud structure and young star distribution in the Dragonfish complex Sanchez et al. ^1Universidad Internacional de Valencia (VIU), C/Pintor Sorolla 21, E-46002 Valencia, Spain ^2Departament d'Astronomia i Astrofísica, Universitat de València, Burjassot, E-46100, Spain Star formation is a complex process involving several physical mechanisms interacting with each other at different spatial scales. One way to shed some light on this process is to analyse the relationship between the spatial distributions of gas and newly-formed stars. In order to obtain robust results, it is necessary for this comparison to be made using quantitative and consistent descriptors applied to the same star-forming region. Here, we use fractal analysis to characterise and compare in a self-consistent way the structure of the cloud and the distribution of young stellar objects (YSO) in the Dragonfish star-forming complex. Different emission maps of the Dragonfish Nebula were retrieved from the NASA/IPAC Infrared Science and the Planck Legacy archives. Moreover, we used photometric information from the AllWISE catalogue to select a total of 1082 YSOs in the region, for some of which we derived physical properties from their spectral energy distributions (SEDs). For both datasets (cloud images and YSOs), the three-dimensional fractal dimension (D_f) was calculated using previously developed and calibrated algorithms. The fractal dimension of the Dragonfish Nebula (D_f = 2.6-2.7) agrees very well with values previously obtained for the Orion, Ophiuchus, and Perseus clouds. On the other hand, YSOs exhibit on average a significantly smaller value (D_f = 1.9-2.0) that indicates a much more clumpy structure than the material from which they formed. Younger Class I and Class II sources have smaller values (D_f = 1.7 ± 0.1) than more evolved Transition Disk objects (D_f = 2.2 ± 0.1), evidencing a certain evolutionary effect where an initially clumpy structure tends to gradually disappear over time. The Dragonfish complex exhibits a structure similar to that of other molecular clouds in the Galaxy. However, we have found clear and direct evidence that the clustering degree of the newly born stars is significantly higher than that of the parent cloud from which they formed. The physical mechanism behind this behaviour is still not clear. Cloud structure and young star distribution in the Dragonfish complex Nestor Sanchez^1 Elisa Nespoli^1 Marta Gonzalez^1 Juan B. Climent^1,2 Received ...; accepted ... ================================================================================================= § INTRODUCTION Star formation is a complex process that is still not fully understood. There is an accepted general picture in which gas and dust inside giant molecular clouds (GMCs) gravitationally collapse to form groups of protostars. After this, protostellar winds and jets blow away the surrounding clouds leaving behind clusters of newly formed stars <cit.>. However, the details of the process are much more complex than this. The internal structure of GMCs is mainly driven by turbulent motions whose origin is still under debate <cit.>. Turbulence tends to act against the gravitational collapse, but it can also originate shocks and high-density regions promoting the collapse. 
Apart from turbulence and self-gravitation, there are other physical mechanisms such as magnetic fields, thermal pressure and radiation fields that may play important roles at different spatial scales and in different moments of the star formation process <cit.>. Stellar feedback from newly formed stars injects energy into the medium that can either disperse the gas (preventing the formation of other stars) or compress it (triggering the formation of additional stars) depending on many different factors <cit.>. Even if very few physical mechanisms were considered, the interaction among all the involved processes and the interaction among different regions of the GMC at different spatial scales convert star formation in a highly non-linear chaotic process, in the sense of being very sensitive to small variations on the initial and environmental conditions <cit.>. The study of the star formation process may be divided into two steps. Firstly, one needs to address the initial distribution of gas and dust in GMCs, i.e. the initial conditions of the process. Secondly, one can focus on the way and degree in which this initial distribution is transferred or converted into new-born stars, i.e. the star formation process itself. Each one of these parts is a complex research line with many physical processes and many observational problems involved. A way to yield some light into the problem is using a two-sided approach. On the one hand, to study and characterise in detail the structure and properties of GMCs that represent the initial conditions of the star formation process. On the other hand, to analyse the distribution and properties of YSOs. The comparison of these two parts may help to understand the process from which the parental cloud is transformed into newly-formed stars. In order to draw solid conclusions, this comparison should be made for the same star-forming region and using quantitative and consistent descriptors. In the literature, several descriptors and techniques have been considered for the characterisation of the internal structure of interstellar clouds, such as structure tree methods <cit.>, Delta-variance techniques <cit.>, principal component analysis <cit.>, metric space techniques <cit.>, dendrograms <cit.>, convolutional neural networks <cit.>, and fractal <cit.> and multifractal <cit.> analysis, among others. Regarding the structure of the distribution of formed stars and star clusters, commonly used methods include simple kernel density estimators <cit.>, the nearest neighbour distribution and the two-point correlation function <cit.>. The correlation function may be used to directly estimate the fractal correlation dimension of star and cluster distributions <cit.>. A different method was introduced by <cit.> which proposed the use of the so-called Q-parameter, calculated from the minimum spanning tree, to quantify the spatial substructure. This method has the advantage of being able to distinguish between centrally concentrated and fractal-like distributions, and it has been widely used in different star-forming regions <cit.>. Other more recent techniques or variants of already established methods to characterise the internal structure of interstellar clouds and young stars or star clusters include the INDICATE tool <cit.>, the S2D2 procedure <cit.>, the Moran's I statistic <cit.>, and the RJ Plots <cit.>. 
Each method has its advantages, disadvantages, and limitations, and the choice of which to use depends, among other factors, on the scientific goal to be addressed. Fractal analysis is a particularly suitable tool because of the observed hierarchical and self-similar structure of the interstellar medium, which resembles a fractal system <cit.>. In fractal analysis, the degree of spatial heterogeneity can be quantified through a simple parameter: the fractal dimension (D_f). One advantage of this approach is that D_f can be calculated for both continuous and discrete structures, which allows a direct comparison between the distribution of gas and dust in the parental cloud and the distribution of newly-formed stars. It is believed that very young stars and clusters should follow the fractal patterns of the interstellar medium from which they formed but that such patterns could be dissipated on short timescales <cit.>. However, it is not clear whether the wide variety of observed spatial patterns is due to differences in the structure of the original clouds or to evolutionary or environmental effects <cit.>. In this work, we use fractal analysis to examine in a systematic and consistent way the distribution of gas and YSOs in the Dragonfish region. Dragonfish (G298.4-0.4) is a star-forming complex located at (l,b) = (298,0.4) deg first detected by <cit.> as a clump of HII regions at 10 kpc. Some authors suggested that the Dragonfish complex contained a supermassive OB association <cit.>. However, a detailed work by <cit.> showed that such an association does not exist and that the existing young massive clusters and Wolf-Rayet stars can explain most of the observed ionisation. <cit.> estimated that this region is located at the outer edge of the Sagittarius-Carina spiral arm, at a distance of d = 12.4 kpc. However, <cit.> found a much closer distance of 5.2 kpc using data from the Gaia DR2, although they warned that their distance estimation may be inaccurate. Previous studies suggest that Dragonfish is among the largest and most massive cloud complexes in the Milky Way <cit.>, which makes it an interesting region to investigate the star formation process. Section <ref> of this work is dedicated to characterise the distribution of gas and dust in the Dragonfish nebula. In Section <ref> we use photometric information in several bands to search for YSOs, determine their physical properties, and study the spatial and hierarchical clustering of the selected YSOs. A comparison between the distributions of gas and YSOs is given in Section <ref> and, finally, Section <ref> summarises our main conclusions. § GAS AND DUST DISTRIBUTION §.§ Used data We selected a region large enough to cover the entire Dragonfish star-forming complex. The region is defined by the galactic coordinates l = (297.0, 299.5) deg and b = (-1.1, +0.8) deg. By using the NASA/IPAC Infrared Science Archive[<https://irsa.ipac.caltech.edu>], we downloaded images from the Infrared Array Camera (IRAC) of the Spitzer mission <cit.> and created a mosaic of the region using the Montage program[<http://montage.ipac.caltech.edu/>]. We performed this process for the four IRAC channels. The obtained results did not show significant differences among the four channels, so we present here the results using the 8 μm IRAC channel only, namely channel 4. We also searched for additional data in other wavelengths but radio maps usually do not have enough spatial resolution to achieve our science goals. 
A relatively good enough image at microwave wavelengths was obtained from the Planck mission <cit.>. In particular, we downloaded from the Planck Legacy Archive[<https://pla.esac.esa.int>] the map of the HFI 545 GHz channel (550 μm). Figure <ref> displays the maps from Spitzer and Planck used in this study. In general, these maps trace the distribution of gas and dust in the Dragonfish Nebula. The Planck map is basically a thermal dust emission map <cit.> whereas IRAC channel 4 is dominated by polycyclic aromatic hydrocarbon (PAH) emission <cit.>, which tends to spatially correlate with molecular gas <cit.>. §.§ Fractal dimension In this section, we use fractal analysis to study the distribution of gas and dust in the Dragonfish region. This tool uses only one parameter, the fractal dimension D_f, to characterise the manner in which the gas is distributed. A D_f value of 3 indicates a homogeneous three-dimensional spatial distribution, while progressively smaller values of D_f correspond to increasingly irregular distributions with higher degrees of clumpiness <cit.>. Monofractal clouds can be characterised by a single D_f value that is valid across the entire range of spatial scales over which the gas is distributed. Although some evidence of multifractality in the interstellar medium (ISM) has been reported <cit.>, this remains an open issue, and a systematic analysis assuming a nearly monofractal behaviour may still provide valuable insights into the underlying structure of the ISM. In general, interstellar clouds are observed as two-dimensional images projected onto the celestial sphere. Therefore, to study the fractal properties of clouds, many authors use the so-called perimeter-area relation to calculate the dimension of the contours of the projected clouds <cit.>, that we denote as D_per. In general, the relation between 2D and 3D fractal dimension values is not trivial but in principle D_per can only vary between the theoretical limits of D_per=1 for the case of smooth projected contours and D_per=2 for extremely irregular contours <cit.>. In previous works <cit.>, we implemented and optimised an algorithm to estimate D_per in a reliable way in cloud emission maps. The method defines objects as sets of connected pixels with intensity values above a defined threshold. In order to increase the number of objects, the algorithm uses ∼20 brightness levels equally spaced between the minimum and the maximum brightness of the map. Several tests performed with both simulations and real maps showed that the obtained D_per values do not depend on the exact number of brightness levels as long as they are not too few (≲ 5). Then, the perimeter and area of each object in the image are calculated and the best linear fit is determined in a log(perimeter)-log(area) plot, being D_per/2 the slope of the fit <cit.>. The algorithm was optimised to account for problems occurring at the image edges as well as signal-to-noise ratio and resolution effects. More importantly, this algorithm was used to characterise in detail the relationship between D_per and D_f. The relation D_per - D_f was empirically determined by simulating three-dimensional clouds with well-defined fractal dimensions and projecting them onto random planes. In general and as expected, D_per decreases (more convoluted boundaries) as D_f increases (more irregular and fragmented clouds). 
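Before turning to the resolution corrections, the basic perimeter-area measurement can be sketched as follows, assuming SciPy and scikit-image: threshold the map at several brightness levels, measure each connected object, and fit log(perimeter) against log(area), whose slope is D_per/2. The minimum object size and number of levels here are illustrative, and the edge, signal-to-noise, and resolution treatments of the calibrated algorithm are deliberately omitted.

```python
import numpy as np
from scipy import ndimage
from skimage import measure

def perimeter_area_dimension(image, n_levels=20, min_area=10):
    """Estimate D_per of an emission map from the perimeter-area relation."""
    levels = np.linspace(np.nanmin(image), np.nanmax(image), n_levels + 2)[1:-1]
    perims, areas = [], []
    for thr in levels:
        labels, _ = ndimage.label(image > thr)      # objects = connected pixels above thr
        for prop in measure.regionprops(labels):
            if prop.area > min_area:                # skip tiny, poorly resolved objects
                perims.append(prop.perimeter)
                areas.append(prop.area)
    slope, _ = np.polyfit(np.log10(areas), np.log10(perims), 1)
    return 2.0 * slope                              # perimeter ~ area^(D_per / 2)
```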
However, the exact relation is not a simple function and, as the image resolution decreases, there is a tendency of D_per to decrease because the details of the roughness disappear as the pixel size increases. The calculated functional forms relating D_per, D_f and N_pix (the maximum object size in pixel units) were presented in Fig. 8 and Table 1 of <cit.> and are used here to estimate D_f for the Dragonfish Nebula. We applied the previously described algorithm to the maps shown in Fig. <ref>. The obtained perimeter-area relations are shown in Fig. <ref>. The corresponding fractal dimension values are summarised in Table <ref>, where D_f is estimated from D_per based on the simulations of projected clouds performed in <cit.>. The Spitzer map we are using has a pixel size corresponding to the “good resolution” case in <cit.>, i.e. the case with N_pix≥ 400 where N_pix is the maximum object size in pixel units. In contrast, the Planck map has a pixel size corresponding to the case N_pix≃ 200. Thus, the relatively small D_per value of the Planck map in Table <ref> is due to resolution effects that tend to smooth the contours. For this reason, after correcting for resolution effects, the three-dimensional fractal dimensions result it the same value for both maps. The obtained value D_f = 2.6-2.7 for the Dragonfish Nebula agrees with our previous results for emission maps in different molecular lines of the Orion, Ophiuchus, and Perseus clouds, where the fractal dimensions are always in the range 2.6 ≲ D_f ≲ 2.8 <cit.>. These D_f values are significantly higher than the average value D_f ≃ 2.3 commonly assumed for the ISM <cit.>. § YOUNG STELLAR OBJECT CANDIDATES §.§ Candidate selection Within the selected region, we used VizieR[<https://vizier.cds.unistra.fr/>] <cit.> to search for all existing sources in the AllWISE catalogue <cit.>. A total of 110 401 sources were retrieved including their IDs, positions and photometry in the W1-W4 and JHK bands. Then, we applied the multicolour criteria scheme proposed by <cit.> to identify YSO candidates. This selection scheme is based on applying different cuts in colours and magnitudes in the WISE+2MASS bands to remove contaminants (Star-forming galaxies, Active Galactic Nuclei, and Asymptotic Giant Branch stars) and to select YSOs of Classes I, II, and Transition disks. The application of these criteria to our sample yielded a total of 1082 YSOs, of which 139 belong to Class I, 627 to Class II, and 316 are Transition disk sources. Table <ref> (fully available online) presents a list of the selected YSOs, including their properties as derived in this work. An example colour-colour diagram is shown in Fig. <ref>. Of the selected sample, 323 sources had already been reported either as YSOs or YSO candidates by other authors <cit.>. Table <ref> also includes the references for these previously identified YSOs. The remaining YSOs are new candidates, identified for the first time in this work. §.§ Physical properties of the selected YSOs §.§.§ Distances From the 1082 selected YSOs, there are 135 objects that have counterparts in the Gaia DR3 catalogue and therefore have available parallaxes. For these sources, we estimated their distances D from their parallaxes, taking into account the global parallax offset of -0.017 mas reported by <cit.>. In general, the obtained values of D are distributed over a relatively large range of values, due in part to uncertainties in the parallaxes (see Fig. <ref>). The most frequent value is found around D ≃ 4000 pc. 
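In practice the distance estimate itself is straightforward; a minimal sketch is shown below, where the Gaia parallaxes are corrected by the global zero-point offset and then inverted. The relative-error cut is an illustrative placeholder (the exact quality cuts behind the tighter histogram are not reproduced here), and simple inversion is only a rough estimator at these parallax uncertainties.

```python
import numpy as np

def parallax_distance(parallax_mas, parallax_error_mas, zero_point=-0.017, max_rel_err=0.3):
    """Distances in pc from Gaia parallaxes after removing the global zero-point offset."""
    plx = np.asarray(parallax_mas, dtype=float) - zero_point     # corrected parallax (mas)
    err = np.asarray(parallax_error_mas, dtype=float)
    good = (plx > 0) & (err / np.abs(plx) < max_rel_err)         # keep well-measured sources
    dist = np.full(plx.shape, np.nan)
    dist[good] = 1000.0 / plx[good]                              # d[pc] = 1 / parallax[arcsec]
    return dist
```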
If we consider parallaxes with relatively smaller errors (purple histogram in Fig. <ref>), the distribution changes slightly but the mode of the distribution always remains in the range 3000-5000 pc. A distance of D ∼ 4 ± 1 kpc to the Dragonfish Complex is smaller than some previous estimations of ∼ 10-12 kpc <cit.> but consistent with the ∼ 4-5 kpc reported by other authors <cit.>. §.§.§ Spectral energy distributions The SEDs of the 1082 selected YSOs were analysed using the Virtual Observatory SED Analyzer (VOSA) developed by the Spanish Virtual Observatory <cit.>. VOSA is a tool that provides a friendly and flexible environment for finding the theoretical spectral model that best fits the observed photometric data. VOSA allows users to search and expand available photometry, choose from a list of models, and define parameter ranges to search for the best fit. For the fitting procedure, we first requested VOSA to expand the SEDs with all the photometry it could find. VOSA itself handles outlier rejection and, in case of finding different photometric values for the same filters, it calculates and uses an average value for the final SED. We then requested VOSA to fit the SEDs by minimising the reduced chi-square with the latest version of the BT-Settl models, which are based on the CIFIST photospheric solar abundances <cit.>. The effective temperature (T_eff) is left as a free parameter in the fitting process, which in the BT-Settl models ranges from 1200 ≤ T_eff≤ 7000 K. VOSA fits the points of the SED that have not been flagged with possible infrared excess, which it assumes to be around the W1 band. The fitting process in VOSA is not very sensitive to some parameters such as metallicity and log g <cit.>. We made several tests by fixing or constraining these parameters around the expected values and also leaving them completely free, and in general the resulting fits were not significantly affected. Eventually, we left log g as a totally free parameter and fixed the metallicity to the solar value, whereas the visual extinction (A_V) was allowed to vary in the range 0 ≤ A_V ≤ 10 mag. For those sources for which the fitting process did not converge on reasonable solutions we also tried an independent fitting with the ATLAS9 Kurucz ODFNEW/NOVER models <cit.>. For these cases the temperature can vary in the range 3500 ≤ T_eff≤ 50000 K and the rest of the parameters are set to the same conditions employed for the BT-Settl fits. In any case, all 1082 SEDs and their fits were visually examined to verify the adequacy of the fits and solutions found by VOSA. For a total of 399 sources (37%), the corresponding SED was well fitted with either BT-Settl models (89% of the sources) or Kurucz models (11%). The temperatures T_eff and extinctions A_V that provided the best fits are reported in Table <ref>, and a histogram with the distribution of T_eff values is shown in Fig. <ref>. Around ∼ 50% of the sources have photospheric temperatures in the range 3000 ≲ T_eff≲ 5000 K, although there is also a significant population of cool stars (late Ms or brown dwarfs) with T_eff≲ 2000 K). An example SED for the first star belonging to this group in our Table <ref> is shown in Fig. <ref>, where we can see both the fit performed by VOSA using a BT-Settl model as well as the expected infrared excess likely produced by circumstellar dust. We have not detected significant patterns or correlations of T_eff or A_V with the spatial distribution or with any other relevant physical variable. 
This population of cool stars is subject to ongoing investigation. §.§ Spatial distribution The spatial distribution of the selected YSOs, overlaid on the Spitzer map at 8 μm, is shown in Fig. <ref>. At first glance, younger classes (I and II) seem to follow the distribution of gas and dust exhibiting some level of clumpiness, whereas Transition disk sources tend to be more homogeneously spread through the region. In order to objectively quantify the clumpiness we use the so-called correlation dimension (D_c), which is suitable for analysing distributions of point sources. The correlation dimension measures the variation (as r increases) of the probability that two randomly chosen points are separated by a distance smaller than r <cit.>. For homogeneous point distributions in space it is expected that D_c=3, whereas in a plane D_c=2. If the points are distributed following fractal patterns then D_c<3 in the space or D_c<2 in the plane. Here, we use a previously developed and calibrated algorithm that estimates D_c in a precise and accurate manner <cit.>. The algorithm constructs the minimum-area convex polygon to delimit the sample and avoid common problems at large scales (whole sample scale). On the other hand, at spatial scales of the order of the mean distance to the nearest neighbour, the distribution looks like a set of isolated points and the obtained D_c values tend to zero <cit.>. Our algorithm uses suitable criteria to eliminate poorly estimated data (i.e., bad sampling) and thus to avoid these small-scale issues. Additionally, it applies bootstrapping techniques to estimate an uncertainty associated to D_c. The results from applying this algorithm are presented in Table <ref>. The estimation of the corresponding three-dimensional fractal dimension D_f is made based on the simulations and results in <cit.>. The obtained D_f values reveal a certain evolutionary process. Classes I and II exhibit approximately the same value of D_f ≃ 1.7 ± 0.1. The slightly smaller value of D_c (and D_f) for the distribution of Class I objects is likely related to the relatively small number of sources (N=139), because it has been shown that below N ∼ 200 the retrieved value of D_c tends to be smaller than the actual fractal dimension <cit.>. In contrast, the more evolved Transition Disk objects show a significantly larger dimension with D_f ≃ 2.2 ± 0.1. There is evidence for such evolutionary effect, where the initial hierarchical and clumpy structure gradually disappears over time <cit.>. In external galaxies, the clumpy structure in the distribution of star formation sites has been observed to change towards smoother distributions as ages increase <cit.>. At these scales (≳ 10^3 pc), the underlying cause may lie in non-turbulent motions acting at a galactic level, on scales on the order of or larger than the scale height of galactic disks <cit.>. In the case of star clusters, the observed initial substructures also seem to dissipate with age <cit.>, but on those spatial scales (∼ 10 pc) other physical processes may play important roles and the initial fractal structure is expected to be lost rapidly <cit.>, either diluting into more homogeneous distributions in gravitationally unbound clusters or concentrating into radial density distributions in bound clusters. Nevertheless, the associated time scale is not clear and some works suggest that cluster disruption may be a very slow process (≳ 10 Myr) in some cases <cit.>. 
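Returning to the clustering measurement itself, the correlation dimension used throughout this section can be illustrated with a short script: the correlation integral C(r) is the fraction of point pairs separated by less than r, and D_c is the slope of log C(r) versus log r. This bare-bones version is only indicative; the range of scales is an arbitrary choice here, and the minimum-area convex polygon, the small-scale quality criteria, and the bootstrap uncertainties of the calibrated algorithm are not reproduced.

```python
import numpy as np
from scipy.spatial.distance import pdist

def correlation_dimension(positions, n_scales=20):
    """Estimate D_c from the correlation integral of a set of point positions (N x 2)."""
    d = pdist(positions)                              # all pairwise separations
    r = np.logspace(np.log10(np.percentile(d, 2)),    # avoid the nearest-neighbour regime
                    np.log10(np.percentile(d, 50)),   # and the whole-sample scale
                    n_scales)
    C = np.array([(d < ri).mean() for ri in r])       # correlation integral C(r)
    slope, _ = np.polyfit(np.log10(r), np.log10(C), 1)
    return slope
```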
In any case, different young clusters may reflect the initial structure of the different clouds from which they formed, and these conditions do not necessarily have to be the same. Therefore, reported correlations between clumpiness and age for large samples of cluster could be contaminated by differences in initial conditions and not correspond to any evolutionary effect. In the case of the Dragonfish complex, we focus on spatial scales of the order of ∼ 100 pc, where we detect a significant difference between the distributions of younger and more evolved stars, marking an evolutionary effect. The underlying physical process is still not clear but it may be related to the random stellar motion effect discussed by <cit.>, in which the initial clumpy structure disappears as stars age due to random turbulent velocity fields acquired at birth. §.§ Hierarchy of structures The previous analysis points out that YSOs are distributed in a clumpy manner showing some degree of substructure at different spatial scales. In this section, we address this issue by searching for density substructures in the YSO sample. For this, we apply the algorithm OPTICS <cit.> to perform a global analysis of the density structure of YSOs and retrieve a hierarchy of subclusters. OPTICS extends the clustering algorithm DBSCAN <cit.> to provide a global analysis of the density structure within a region and is especially suited for samples with large density variations. DBSCAN groups points into clusters based on a density associated with two parameters: a minimum number of points N_min and a spatial scale ε. For each point, the scale ε defines a neighbourhood, and N_min sets a density requirement for the neighbourhood. DBSCAN identifies clusters as composed of two kinds of points: core points satisfy the density requirement, while border points belong to the ε-neighbourhood of a core point but do not satisfy the density requirement themselves. The rest of the points are labelled as noise. In OPTICS, only the N_min parameter is fixed and the concept of reachability distance is introduced. We can intuitively interpret the reachability distance between a core point and another point as the minimum distance needed for the second point to be in the ε neighbourhood of the core point, fulfilling the density threshold. OPTICS is not strictly a clustering algorithm, but an analytical tool whose main output is the reachability plot. The reachability plot shows the reachability distance of a reordered sample of points in a diagram where consecutive points are close and the clusters appear as dents or valleys. Based on the reachability plot, we can extract clusters either considering a height ε threshold (obtaining a clustering equivalent to DBSCAN with that ε) or with a slope ξ threshold (where clusters have a specific density ratio to their surroundings). In this work, we use the second approach, as it can detect a hierarchy of structures nested within each other. We set a high value N_min=20 to limit the noise in the reachability plot and tested values of ξ∈ [0.01,  0.1], finally choosing ξ=0.035 as a good compromise between the detection and the reliability of the clusters retrieved. Figures <ref> and <ref> show the reachability plot and a map with the convex hulls of the structures retrieved by OPTICS, in both cases labelled and colour-coded. In Table <ref> we display the obtained characteristics of each structure. 
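For orientation, the OPTICS analysis described above can be reproduced in outline with scikit-learn using the same N_min and ξ values; the positions below are random placeholders, the coordinates are treated as plane-of-sky Cartesian values (a reasonable shortcut at these low Galactic latitudes), and the call is a sketch rather than the exact pipeline behind the figures.

```python
import numpy as np
from sklearn.cluster import OPTICS

# YSO positions (l, b) in degrees; random placeholders spanning the studied field
xy = np.random.uniform([297.0, -1.1], [299.5, 0.8], size=(1082, 2))

optics = OPTICS(min_samples=20, cluster_method="xi", xi=0.035).fit(xy)

reachability = optics.reachability_[optics.ordering_]  # values behind the reachability plot
labels = optics.labels_                                 # cluster id per source, -1 = noise
hierarchy = optics.cluster_hierarchy_                   # nested (start, end) ranges of structures
```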
The whole sample is itself detected as a single structure by OPTICS, tagged and displayed in grey as structure 1 in Fig. <ref>, but not shown in Fig. <ref> and Table <ref> for the sake of clarity. The YSO sample exhibits a rich hierarchical structure, as expected from the previous fractal analysis. There are 8 main structures, and amongst them, structures 6 and 9 contain nested substructures. We assign to each structure a level, defined recursively by the number of substructures it contains: structures of level 0 do not contain any other, structures of level 1 contain structures of level 0, structures of level 2 contain structures of level 1, and so on. We find 10 structures of level 0 (namely 2, 3, 4, 5, 6a, 6b, 7, 8, 9a1, and 9b), 2 of level 1 (6 and 9a), 1 of level 2 (9) and 1 of level 3 (structure 1, i.e. the whole sample). By considering a distance of ∼ 4 kpc (see Section <ref>) and the average equivalent sizes of the structures for each level (shown in Table <ref>), we obtain that the spatial scales of the structures are of the order of 15, 25, and 25 pc, typical of star-forming regions. Table <ref> also shows the correspondence between our detected structures and star cluster candidates listed in Table 7 of <cit.>. Out of their 19 candidates, 15 are found within our structures. Our structure 3 includes the location of the H II region RCW 64 <cit.>, which <cit.> consider to be a foreground region. However, there are 4 objects inside structure 3 with reliable parallaxes from Gaia DR3 which place this structure at a distance of ∼ 3.8 ± 0.6 kpc, consistent with our estimation for the whole region (see <ref>). Structures 5, 6, and 9 contain three or more known star cluster candidates, and, in the case of the most complex structure 9, its nested substructure 9a1 contains 5 candidates and displays a clear elongated shape. Even though environmental factors must play some role, the ratio of less evolved Class I sources to the total number of YSOs may yield some hints on the age of the structure <cit.>. In fact, the variations in the spatial distribution of objects of different evolutionary stages have been used to investigate the star formation history within specific regions <cit.>. For the whole sample, the ratio Class I/YSO is 12.9%, which is broadly consistent with the value 14.4% obtained considering only the clustered structures. However, we also identified structures with ratios both larger than 20% and smaller than 5%. Structures 2, 8, and 9a (in particular 9a1) have significantly large Class I/YSO ratios that suggest a higher level of recent star formation, whereas structures 6a and 4 show significantly low ratios, pointing to areas where the star formation activity has declined. These results suggest a complex and varied star formation history in the Dragonfish complex, comprising different events spanning several Myr. § COMPARISON OF GAS AND YSO DISTRIBUTIONS A primary goal of this work is to compare the distribution of gas and dust in the Dragonfish region with the distribution of young stars that were born from this gas. Given that we are using the same characterisation tool (the fractal dimension) for gas and for stars, in principle we expect to obtain nearly the same D_f value for both components. The reason is that, at least in the case of ideal monofractal clouds with dimension D_f, the high-density peaks where star formation preferentially takes place are distributed following patterns with the same underlying dimension D_f.
On the contrary, our results clearly indicate a scenario in which the distribution of younger objects is much more clumpy (D_f ≃ 1.7) than the material from which they are forming (D_f ≃ 2.6-2.7). Evidence supporting or contradicting either of these scenarios is far from being conclusive. There is a lack of works that compare the clustering strength of gas and new-born stars in a direct, quantitative and self-consistent way. At spatial scales smaller than 500 pc, the galaxy M33 is on average more fragmented and irregular than the Milky Way, but its bright young stars are distributed following nearly the same fractal patterns as the molecular gas, both having D_f ≲ 1.9 <cit.>. In contrast, for the galaxy NGC 7793, <cit.> found that on average star clusters are distributed with a stronger clustering degree than giant molecular clouds over the range 40-800 pc; nevertheless, they also found approximately the same degree of clustering when comparing the most massive molecular clouds with the youngest and most massive star clusters. In any case, at spatial scales of the order of the disk scale height, the structure of the interstellar medium generated by turbulent motions may be somehow affected by large-scale galactic dynamics, as discussed in <cit.> and <cit.>. <cit.> calculated the clustering in a sample of young star clusters using the Q-parameter <cit.> and also estimated the perimeter-area-based dimension D_per from visual extinction maps in the direction of such clusters. In general, they found that substructures observed for the clusters were very similar to the fractal characteristics of the clouds, although an accurate comparison of the three-dimensional fractal dimension D_f could not be made because projection effects were not properly taken into account. <cit.> performed a direct comparison of the spatial distributions of stars and gas in numerical simulations of molecular clouds using the Q-parameter for both stars and gas. Interestingly, they found that formed stars follow a distribution highly substructured with a value of Q∼ 0.4-0.7, which corresponds to approximately D_f ∼ 1.8-2.3 <cit.>, whereas the gas from which stars form had Q∼ 0.9, indicating a smooth, concentrated distribution of matter. These results should nevertheless be treated with some caution because, as indicated by <cit.>, the Q-parameter may not be an optimal tool for measuring the spatial distribution of gas, since the pixelated image must be previously converted into a point distribution. Here, we have found direct and self-consistent evidence that the clustering degree of newly born stars in the Dragonfish region is significantly higher than that of the parent cloud from which stars are forming. We have mentioned that resolution issues (relatively big pixel sizes) could make gas maps look smoother than they actually are, an effect that would not occur for the distribution of stars. However, this is likely not the cause of the observed difference between stars and gas because resolution effects and other factors (signal-to-noise ratio, cloud opacity) have already been calibrated and accounted for to accurately infer D_f from D_per <cit.>. If this discrepancy is real, then there could be two possible explanations. On the one hand, the denser gas that is forming stars could be actually clumpier than the distribution of gas throughout the entire region, in concordance with a possible multifractal scenario that has been proposed for the interstellar medium <cit.>. 
The approach used in this work aims to avoid this issue by focusing on spatial scales of the same order for both the cloud structure and the YSO distribution. On the other hand, the degree of clumpiness may somehow increase during the star formation process. Although some simulations <cit.> seem to support this possibility, the physical mechanisms driving this behaviour remain unclear. In simulations of fractal star clusters, <cit.> demonstrated that an initially homogeneous cluster can develop substructures if it is born with some coherence in the initial velocity field. It is not clear, however, whether such processes could operate at the spatial scale of a whole star-forming complex. The relationship between the spatial distributions of gas and formed stars remains an intriguing, and so far unresolved, issue. The problem is non-trivial because the conversion from gas to stars, i.e. the star formation process, involves many physical mechanisms interacting at different spatial scales. Moreover, the formation of stars does not occur synchronously throughout the entire cloud, and the first-formed stars can interact with the surrounding gas, modifying its properties and also affecting the distribution of subsequent stars. § CONCLUSIONS In this paper, we present a systematic and detailed study of the Dragonfish star-forming region. On the one hand, we used different emission maps to characterise the distribution of gas and dust using fractal analysis. The three-dimensional fractal dimension obtained for the Dragonfish Nebula was D_f ≃ 2.6-2.7, a value that agrees very well with previously reported fractal dimensions for other molecular clouds, namely Orion, Ophiuchus and Perseus. On the other hand, we used photometric information from the AllWISE catalogue to select and study a total of 1082 YSOs in this region. From the parallaxes measured by the Gaia mission for 135 of these sources we derived a distance of D ∼ 4 ± 1 kpc to the Dragonfish Complex, and from the SED fitting to theoretical models we also determined photospheric temperatures and visual extinctions for 399 sources. Regarding the spatial distribution of YSOs, we identified a clumpy and hierarchical assembly of structures and substructures and, moreover, we found that the clumpy structure of the younger Class I and Class II sources tends to disappear for the more evolved sources (Transition Disks), suggesting some kind of evolutionary effect. Interestingly, our fractal analysis clearly shows that the distribution of younger objects is much more clumpy (D_f ≃ 1.7) than the distribution of gas from which they formed (D_f ≃ 2.6-2.7). Although some simulations <cit.> seem to support the possibility that newly formed stars exhibit a more clumpy structure than that of their parent cloud, the physical mechanisms behind this behaviour remain unclear. In order to clarify this issue, it would be helpful to use strategies such as the one proposed in this work, in which suitable and well-calibrated tools are used to simultaneously quantify the structure of both gas and stars in a relatively large sample of star-forming complexes. We want to thank the referee for his/her helpful comments, which improved this paper. We acknowledge financial support from Universidad Internacional de Valencia (VIU) through project VIU24003. The work of EN was supported by project VIU24007, funded by the research center ESENCIA of VIU.
JBC was supported by projects PID2020-117404GB-C22, funded by MCIN/AEI, CIPROM/2022/64, funded by the Generalitat Valenciana, and by the Astrophysics and High Energy Physics programme by MCIN, with funding from European Union NextGenerationEU (PRTR-C17.I1) and the Generalitat Valenciana through grant ASFAE/2022/018. This research made use of Montage, which is funded by the National Science Foundation under Grant Number ACI–1440620, and was previously funded by the National Aeronautics and Space Administration's Earth Science Technology Office, Computation Technologies Project, under Cooperative Agreement Number NCC5–626 between NASA and the California Institute of Technology. We have made extensive use of VOSA, developed under the Spanish Virtual Observatory project funded by MCIN/AEI/10.13039/501100011033/ through grant PID2020-112949GB-I00. We have also used the tool TOPCAT <cit.> and NASA’s Astrophysics Data System.
http://arxiv.org/abs/2406.18011v1
20240626014856
Expressive Keypoints for Skeleton-based Action Recognition via Skeleton Transformation
[ "Yijie Yang", "Jinlu Zhang", "Jiaxu Zhang", "Zhigang Tu" ]
cs.CV
[ "cs.CV" ]
§ ABSTRACT In the realm of skeleton-based action recognition, traditional methods, which rely on coarse body keypoints, fall short of capturing subtle human actions. In this work, we propose Expressive Keypoints, which incorporate hand and foot details to form a fine-grained skeletal representation, improving the discriminative ability of existing models in discerning intricate actions. To efficiently model Expressive Keypoints, the Skeleton Transformation strategy is presented to gradually downsample the keypoints and prioritize prominent joints by allocating importance weights. Additionally, a plug-and-play Instance Pooling module is exploited to extend our approach to multi-person scenarios without surging computation costs. Extensive experimental results over seven datasets demonstrate the superiority of our method compared to the state-of-the-art for skeleton-based human action recognition. Code is available at <https://github.com/YijieYang23/SkeleT-GCN>. § INTRODUCTION Skeleton-based action recognition has become a cornerstone for numerous vision applications such as video surveillance <cit.>, human-robot interaction <cit.>, and sports analytics <cit.>, due to its succinct representation and robustness to variations in lighting, scale, and viewpoint. Traditional methods primarily utilize simple body keypoints defined in NTU <cit.> and COCO <cit.> formats, to provide sparse representations of human motion. Despite their utility, these overly concise representations are constrained by missing subtle but critical details involving hand and foot movements. Consequently, existing coarse skeletal representations are limited in effectively distinguishing intricate actions. Recently, some approaches <cit.> have resorted to the point cloud representation to capture the detailed spatial structure of the human surface, thereby enhancing the ability to recognize complex movements. However, it comes with enormously increased computational cost, detracting from the efficiency of the point-based representation. Moreover, several studies <cit.> have aimed to improve the recognition accuracy by introducing object points. However, the generalization of these methods is limited, especially in human-centric scenarios where no interacting object is involved. To address the limitations of prior works, we incorporate richer limb keypoints into body keypoints to propose a fine-grained representation called Expressive Keypoints. It emphasizes nuanced hand interactions and foot movements, which are crucial for discerning subtle actions. As shown in Fig. <ref>, we present various data representations that are commonly utilized. Compared to the representations of RGB images, excessive point cloud data, and coarse body keypoints, the Expressive Keypoints representation stands out for its insensitivity to viewpoints, relatively small data footprint, and ability to represent fine-grained limb details. In practice, Expressive Keypoints can be easily estimated from RGB images based on COCO-Wholebody <cit.> annotations, without relying on depth information from multi-view data or a lab-controlled motion capture system. Experimental results demonstrate that all three baseline methods <cit.> achieve a significant improvement in accuracy (over +6%) when replacing coarse-grained keypoints with Expressive Keypoints.
However, the computational cost of directly taking Expressive Keypoints as input also scales considerably, since nearly three times more joints need to be dealt with. To enhance computational efficiency, we propose the Skeleton Transformation (SkeleT) strategy to gradually downsample the skeletal representation of Expressive Keypoints across multiple stages. This novel strategy uses learnable mapping matrices to refine skeleton features by re-weighting and downsampling the keypoints. These mapping matrices are initialized by semantic partitioning of the human topology, and iteratively optimized during training. By further introducing a variable group design for different skeletal scales, skeleton features are evenly split and transformed independently before concatenation. The SkeleT strategy enables effective downsampling of keypoints and nuanced modeling in groups. It can be effortlessly integrated into most existing GCN-based skeleton action recognition methods, forming our SkeleT-GCN to efficiently process Expressive Keypoints. In experiments over four standard skeleton action recognition datasets <cit.>, SkeleT-GCN achieves comparable or even higher accuracy with much lower GFLOPs (less than half) compared to its baseline GCN method. Moreover, we want to further evaluate our method on general in-the-wild datasets <cit.>, which include multi-person group activity scenarios. However, we find that traditional GCN methods perform feature modelling for each input person individually and conduct feature fusion at a late stage. Consequently, their computational complexity keeps growing as the number of individuals in a wild scene increases. Inspired by <cit.>, we implement a lightweight Instance Pooling module before the GCN models. The key idea is to aggregate the features of multiple persons and project them onto a single skeletal representation at an early stage. By exploiting the plug-and-play Instance Pooling module, the classification of group activities can be supported without surging computation cost. This offers a practical and viable solution for extending GCN-based skeleton action recognition methods (including our SkeleT-GCN) to multi-person scenarios. In extensive experimental evaluations over a total of seven datasets <cit.>, our pipeline consistently achieves the state of the art across all benchmarks (see Fig. <ref>), demonstrating its superior performance and robust generalization. We find that strategically employing fine-grained keypoints enables recognizing intricate human actions at a manageable computational cost. In summary, the main contributions of our work are threefold: * We introduce fine-grained limb details as the Expressive Keypoints representation for skeleton action recognition, boosting the performance in identifying intricate actions. * We propose the Skeleton Transformation strategy to make existing GCN methods highly efficient while preserving accuracy, through dynamic downsampling of keypoints. * We implement a plug-and-play Instance Pooling module to extend GCN methods to multi-person group activity scenarios without surging computation cost. § RELATED WORKS §.§ Point-based action recognition Point-based action recognition methods are more robust against variations in lighting and viewpoint than RGB-based methods <cit.>. Some works <cit.> take point cloud data, which consists of numerous unordered 3D point sets, as input for their methods.
However, point cloud data introduces too much redundant information for learning action patterns, leading to high computation costs. Some works utilize 2D/3D keypoints <cit.> to represent the skeletal structure of the human body. They are also commonly referred to as skeleton-based methods. Among them, GCN models <cit.> have been adopted frequently due to their effective representation of the graph structure <cit.>. Additionally, some models <cit.> attempt to project human body keypoints into multiple 2D pseudo-images to learn useful features, which also achieves notable performance. Nevertheless, existing skeleton-based methods use coarse-grained skeleton representations as input, which makes it challenging to discern complex actions and results in limited performance. To this end, we propose to incorporate hand and foot keypoints into the body part, forming a fine-grained skeletal structure to better distinguish intricate actions. §.§ GCNs for skeleton-based action recognition After STGCN <cit.> first utilized graph convolution for skeleton action recognition, GCN-based methods soon became the mainstream. Different improvements have been made in recent works <cit.>. MS-AAGCN <cit.> proposes to adaptively learn the topology of graphs instead of setting it manually. CTRGCN <cit.> takes a shared topology matrix as the generic prior for network channels to improve performance. PYSKL <cit.> presents an open-source toolbox for skeleton-based action recognition, which benchmarked representative GCN methods with good practices. DGSTGCN <cit.> proposes a lightweight yet powerful model without a predefined graph. However, traditional methods commonly face two limitations: (1) they maintain a static skeleton structure with a fixed number of keypoints, which restricts their ability to capture multi-scale information, and (2) the computational costs linearly increase with each additional person, resulting in the input being cropped to a maximum of two individuals. In this work, we propose a Skeleton Transformation strategy to dynamically modify the skeleton structure and downsample keypoints. Additionally, we introduce an Instance Pooling module to overcome the constraints on the number of input individuals. § PROPOSED PIPELINE The overview of our proposed pipeline is depicted in Fig. <ref>. In Sec. <ref>, we incorporate detailed limb keypoints into coarse-grained body keypoints, forming the representation of Expressive Keypoints. We elaborate on the collection and preprocessing of these keypoints, highlighting the benefits of this approach. In Sec. <ref>, we propose the Skeleton Transformation strategy to efficiently deal with more limb keypoints. We find that implicitly aggregating keypoints in latent space during network processing can significantly reduce computational complexity while maintaining high accuracy. In Sec. <ref>, we discover that the individual modeling and late fusion of instance features in traditional methods limit their scalability with respect to the number of input persons. Therefore, we exploit a plug-and-play Instance Pooling module for multiple instance inputs (in Sec. <ref>), which supports the recognition of group activities without surging computational costs. §.§ Expressive Keypoints representation Data collection. Benefiting from the dense landmarks provided by COCO-WholeBody <cit.>, which encompasses 133 keypoints, including 17 keypoints for the body, 68 for the face, 42 for the hands, and 6 for the feet, we have a base representation for a fine-grained skeleton.
In practice, COCO-WholeBody keypoints can be extracted with a top-down estimator. We first extract human bounding boxes using the ResNet50-based Faster-RCNN <cit.>. Subsequently, the COCO-WholeBody <cit.> keypoints within the specified bounding boxes are obtained through the pre-trained human pose estimator <cit.>. Keypoint selection. We observe that directly using COCO-WholeBody as input not only incurs significant computational costs but also yields lower performance, because numerous redundant keypoints may introduce substantial noise into the model. To alleviate this issue, we select the input 133 keypoints from two perspectives. First, COCO-WholeBody not only includes body and detailed hand keypoints, but also includes face landmarks, which are intuitively not related to human actions. Second, we analyze two statistical metrics on the NTU-120 dataset, Video Variance and Motion Variance, which calculate the variance of keypoint positions across videos and the motion frequency of each keypoint between frames, respectively. More details and results are provided in Sec. <ref>. We find that facial keypoints (23-90th) have higher video variance and lower motion frequency, which indicates a low contribution to action recognition. This observation guides us to manually remove them, resulting in the formation of the final Expressive Keypoints representation. §.§ Skeleton Transformation strategy The representation of Expressive Keypoints provides abundant motion cues for skeleton action recognition. However, directly feeding Expressive Keypoints into existing GCN methods encounters several limitations. (i) Low efficiency: Handling many more limb joints significantly increases computational complexity compared to the coarse-grained representation. (ii) Sub-optimal accuracy: The topology graph of Expressive Keypoints is more complex and has multi-hop connections, which hinder the network from effectively exchanging information among distant nodes. Consequently, it faces a more pronounced long-range dependency problem <cit.>. We claim that the key problem is that traditional methods have a fixed skeleton structure during the forward pass. To this end, we propose a novel Skeleton Transformation (SkeleT) strategy to gradually downsample the Expressive Keypoints throughout the processing stages. The SkeleT strategy can be seamlessly integrated into most GCN methods to create our SkeleT-GCN (e.g., baseline: DGSTGCN <cit.> → ours: SkeleT-DGSTGCN) without modifying the inner implementation of their graph convolution and temporal convolution layers or the high-level architectural design. Instead, we encapsulate the baseline graph convolution layers within the proposed Grouped Mapping framework, where the input keypoint features are divided into groups and multiplied by the mapping matrices before being processed by the graph convolution layers. By strategically exploiting Expressive Keypoints, our SkeleT-GCN can achieve comparable or even higher accuracy with much lower GFLOPs than its baseline GCN method. §.§.§ Preliminary and notations of GCN The skeleton sequence 𝐗∈ℝ^J× T× C is defined by J joints with C feature channels at each joint over T frames. Most existing GCN-based methods share the same architectural design of M spatial-temporal blocks, where each spatial-temporal block ℱ contains a graph convolution layer 𝒢 and a temporal convolution layer 𝒯 to alternately model the spatial and temporal information.
We use 𝔹={1,2,...,M} to denote the index set of spatial-temporal blocks, which has two subsets, 𝔹^n and 𝔹^d, where 𝔹^d contains the indices of downsampling blocks ℱ^d that downsample the temporal length and 𝔹^n contains the indices of the other normal blocks ℱ^n. The adjacency matrix 𝐀∈ℝ^J × J defines the topology links of the human skeleton, where 𝐀_ij=1 if the i-th and j-th joints are physically connected and 0 otherwise. The computation of ℱ can be summarized as: ℱ(𝐗,𝐀) = 𝒯(𝒢(𝐗,𝐀))+𝐗, where the graph convolution internally operates on 𝐀+𝐈, i.e., the skeletal topology graph with self-links added. §.§.§ Grouped Mapping Framework To implement the SkeleT strategy in existing GCN methods, we propose the Grouped Mapping Framework to encapsulate the original graph convolution layers 𝒢 and temporal convolution layers 𝒯 of any GCN method without modifying their inner design. The same high-level architecture 𝔹=𝔹^n∪𝔹^d is also inherited. We denote the Grouped Mapping Framework as ℱ̂; its detailed architecture is depicted in Fig. <ref>. Specifically, we split the channel dimension of the skeleton sequence 𝐗 into K groups, thereby reducing the channel width of each feature group to C/K. Subsequently, each feature group is independently multiplied by a corresponding mapping matrix 𝐌 to adaptively alter the skeleton structure. Next, we parallelize K baseline graph convolution layers {𝒢_1, ..., 𝒢_K } to extract group-specific features that can greatly enrich the motion feature representations across diverse structures. Finally, the K group features are concatenated along the channel dimension and processed by the baseline temporal convolution layer 𝒯 to model the temporal dependency, generating the refined motion feature. The whole processing of our Grouped Mapping Framework ℱ̂ can be formulated as follows: ℱ̂(𝐗,𝐀,𝐌) = 𝒯( σ( {[𝒢_k(𝐌_k𝐗_k,𝐀)]}_k ∈{1,...,K} 𝐖 ) ) + res(𝐗), where 𝐗_k is the k-th split feature, {[·]}_k denotes concatenation of the K group outputs along the channel dimension, and 𝐖 is a learnable weight matrix. σ(·) and res(·) are the activation function and the residual connection, respectively. We elaborate on the mapping matrix 𝐌 below. Mapping matrix. Keypoint downsampling is achieved by multiplication with the mapping matrix 𝐌^d ∈ℝ^J_i× J_i+1, which fuses correlated joints. It maps the original skeleton 𝐗 with J_i joints to a new skeleton 𝐗^' with J_i+1 joints, which can be formulated as follows: 𝐗^'= 𝐌^d 𝐗. Once the skeleton structure is downsampled, the new adjacency matrix can be calculated as follows: 𝐀^'=(𝐌^d)^T𝐀𝐌^d. The downsampling operation is only conducted in the downsampling blocks with indices in 𝔹^d. For the other normal blocks in 𝔹^n, the mapping matrix 𝐌^n ∈ℝ^J_i× J_i is defined as a learnable diagonal matrix that does not downsample the keypoints. It serves to re-weight the skeleton joints, enabling the network to prioritize important joints by allocating weights on the diagonal. Considering the index of ℱ̂ and the type of mapping matrix, Eq.(<ref>) can be detailed as follows:
ℱ̂_(i)(𝐗,𝐀,𝐌) = 𝒯( σ( {[𝒢_k(𝐌^n_k𝐗_k,𝐀)]}_k ∈{1,...,K} 𝐖 ) ) + 𝐗, if i ∈𝔹^n,
ℱ̂_(i)(𝐗,𝐀,𝐌) = 𝒯( σ( {[𝒢_k(𝐌^d_k𝐗_k,𝐀)]}_k ∈{1,...,K} 𝐖 ) ) + 𝐌^d𝐗, if i ∈𝔹^d.
Pre-defined keypoint partition. As shown in Fig. <ref>, the above downsampling mapping matrix 𝐌^d has a weight of shape [J_i, J_i+1] to map J_i keypoints to J_i+1 keypoints, and it needs a good initialization to stabilize the early stage of training. Adjacent keypoints always have similar semantics for human actions; therefore, we use a pre-defined semantic knowledge prior to initialize 𝐌^d_[i, i+1] (a code sketch of this initialization is given below).
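To make the grouped mapping computation concrete, a minimal PyTorch-style sketch is given below. It is not the authors' implementation: the graph and temporal convolutions are generic stand-ins for the baseline layers 𝒢_k and 𝒯 (taken from STGCN++/CTRGCN/DGSTGCN in the actual SkeleT-GCN), the normal-block matrix 𝐌^n is simplified to a full matrix initialized to the identity rather than a constrained diagonal, temporal striding is omitted, a single mapping matrix is reused for the residual branch, and channel counts are assumed to be divisible by K.

```python
import torch
import torch.nn as nn

def init_downsample_mapping(parts, j_in):
    """Build M^d of shape (J_i, J_{i+1}) from a semantic partition: entry (j, k)
    is 1/len(parts[k]) if joint j belongs to part k, and 0 otherwise."""
    m = torch.zeros(j_in, len(parts))
    for k, idxs in enumerate(parts):
        m[list(idxs), k] = 1.0 / len(idxs)
    return m

class StandInGraphConv(nn.Module):
    """Generic stand-in for a baseline graph convolution G_k."""
    def __init__(self, c_in, c_out):
        super().__init__()
        self.proj = nn.Conv2d(c_in, c_out, kernel_size=1)
    def forward(self, x, A):                       # x: (N, C, T, J), A: (J, J)
        return torch.einsum("nctj,jk->nctk", self.proj(x), A)

class GroupedMappingBlock(nn.Module):
    """Split channels into K groups, map joints with M_k, apply G_k per group,
    concatenate, temporal convolution, residual (cf. the equations above)."""
    def __init__(self, c_in, c_out, A, K, parts=None):
        super().__init__()
        j_in = A.size(0)
        self.K, self.downsample = K, parts is not None
        m0 = init_downsample_mapping(parts, j_in) if self.downsample else torch.eye(j_in)
        self.M = nn.ParameterList([nn.Parameter(m0.clone()) for _ in range(K)])
        self.register_buffer("A_hat", A + torch.eye(j_in))   # topology with self-links
        self.gcns = nn.ModuleList([StandInGraphConv(c_in // K, c_out // K) for _ in range(K)])
        self.tcn = nn.Conv2d(c_out, c_out, kernel_size=(9, 1), padding=(4, 0))  # stand-in for T
        self.res = nn.Conv2d(c_in, c_out, 1) if c_in != c_out else nn.Identity()
        self.act = nn.ReLU()

    def forward(self, x):                                     # x: (N, C, T, J_i)
        outs = []
        for k, xk in enumerate(torch.chunk(x, self.K, dim=1)):
            xm = torch.einsum("nctj,jo->ncto", xk, self.M[k])  # joint mapping / re-weighting
            Ak = self.M[k].t() @ self.A_hat @ self.M[k]        # A' = M^T (A + I) M
            outs.append(self.gcns[k](xm, Ak))
        y = self.act(torch.cat(outs, dim=1))
        res = torch.einsum("nctj,jo->ncto", x, self.M[0]) if self.downsample else x
        return self.tcn(y) + self.res(res)

# Toy downsampling block mapping 6 joints onto 3 merged parts with K=2 groups:
# block = GroupedMappingBlock(8, 16, torch.zeros(6, 6), K=2, parts=[(0, 1), (2, 3), (4, 5)])
# out = block(torch.randn(4, 8, 32, 6))   # -> shape (4, 16, 32, 3)
```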
Specifically, the J_i joints can be divided into a part set {P_(i, i+1)^k}, where the k-th joint of the new J_i+1-joint skeleton aggregates the J_i keypoints whose indices are listed in P_(i, i+1)^k. Once the partition is determined, the initialized element in the j-th row and k-th column of 𝐌^d (j ∈ J_i, k ∈ J_i+1) can be formulated as follows:
𝐌^d_(j, k) = 1/len(P_(i, i+1)^k), if j ∈ P_(i, i+1)^k,
𝐌^d_(j, k) = 0, otherwise.
The keypoint partitions are semantically guided. Related joints, such as keypoints in the same finger, are grouped into one part at initialization. §.§ Instance Pooling module The computation of previous GCN-based works scales linearly with the increasing number of persons in the video, making it less efficient for group activity recognition. The key problem is that traditional methods independently model each person's skeleton sequence and then perform feature fusion at a late stage. To tackle this problem, we implement a plug-and-play Instance Pooling (IP) module that performs early feature fusion of the multiple input skeletons before feeding them to the GCN. As illustrated in Fig. <ref>, we obtain the keypoint embedding from the multi-person skeletal sequences using a fully connected layer and a keypoint positional encoding. Subsequently, the Concat Pool Layer 𝒫_c (·) and the Group Pool Layer 𝒫_g (·) proposed by <cit.> are adopted to aggregate the I instance-wise feature vectors. This process can be formulated as: 𝐘'= 𝒫_g(σ(𝒫_c(𝐘)+𝐘)), where 𝐘 = emb( {𝐗_1,𝐗_2,...,𝐗_I}) ∈ℝ^I× J× T× C is the embedding of the multi-person skeletons. 𝐘^'∈ℝ^J× T× C is the aggregated single-person representation, where the instance dimension I has been eliminated. Through early fusion in the lightweight IP module, the computationally burdensome spatial-temporal modeling is conducted only once in the subsequent GCN, regardless of the number of input instances. The IP module serves as a flexible and lightweight extension for any GCN-based method (including our SkeleT-GCN). It offers a practical and efficient solution for extending GCN-based skeleton action recognition to multi-person group activity scenarios without surging computational cost. § EXPERIMENTS We conduct comprehensive experiments to evaluate our proposed pipeline over seven datasets, including NTU-60 <cit.>, NTU-120 <cit.>, PKU-MMD <cit.>, N-UCLA <cit.>, Kinetics-400 <cit.>, UCF-101 <cit.>, and HMDB-51 <cit.>. An overview of the datasets (see Sec. <ref>) and implementation details (see Sec. <ref>) can be found in the appendix. We report Top-1 accuracy to evaluate recognition performance, and report floating point operations (FLOPs) and the number of parameters (Params.) to evaluate model efficiency in terms of computation cost and model size. §.§ Effectiveness of proposed components We evaluate the effectiveness of every component in our proposed pipeline, including the Expressive Keypoints representation, the SkeleT strategy, and the IP module. Expressive Keypoints representation. On NTU-120, we directly feed Expressive Keypoints into three representative GCN methods, which are STGCN++ <cit.>, CTRGCN <cit.>, and DGSTGCN <cit.>. As shown in Tab. <ref>, the Expressive Keypoints representation significantly enhances action recognition performance on all three baseline networks (+7.8%, +8.6%, +6.5%, respectively). Additionally, we further assess the accuracy improvement on the 120 action categories (Fig. <ref>) as well as the top-20 hard cases (Fig. <ref>) when replacing coarse-grained NTU keypoints with fine-grained Expressive Keypoints.
It can be seen that incorporating detailed limb keypoints consistently boosts skeleton action recognition performance, especially for discerning hard actions with nuanced limb movements. SkeleT strategy. We further integrate the proposed SkeleT strategy into the previous baseline GCN methods to form our SkeleT-GCN variants, namely SkeleT-STGCN++, SkeleT-CTRGCN, and SkeleT-DGSTGCN. By gradually downsampling Expressive Keypoints, the three baseline models applying the SkeleT strategy reduce the computational cost by more than half (-4.3G, -5.0G, -3.9G) while achieving comparable or even higher accuracy, as shown in Tab. <ref>. Moreover, we also evaluate the effectiveness of the SkeleT strategy with NTU Keypoints input. As shown in Tab. <ref>, the SkeleT strategy can also greatly reduce the computation cost (from 2.4G∼2.7G to 1.5G) of processing coarse-grained skeletal data while preserving accuracy. It can be observed that a slight accuracy drop occurs in one of the six settings. We consider that this is because the coarse-grained skeletal representation is already very concise, and further downsampling might result in under-represented features. IP module. On HMDB-51, which contains multi-person group activity scenarios, we use SkeleT-DGSTGCN to test the computational cost and accuracy with and without the IP module. The results are presented in Tab. <ref>. We find that incorporating the IP module enhances recognition accuracy while considerably reducing the FLOPs. Moreover, Fig. <ref> illustrates the variation in FLOPs with the number of input persons. Without the IP module, the computational cost escalates rapidly as the number of individuals increases, due to the substantial feature modeling required for each individual in the traditional GCN pipeline. However, with the inclusion of the IP module, the increase in FLOPs is minimal since the features of multiple individuals are aggregated into a single representation by the lightweight IP module before being fed into the subsequent GCN model. §.§ Configuration exploration Input keypoints selection. We extensively explore the selection of initial input keypoints. As shown in Tab. <ref>, experimental results demonstrate that removing facial keypoints from the COCO-WholeBody Keypoints (protocol #1) to form our Expressive Keypoints (protocol #2) is reasonable and aligns with the statistical analysis. Removing redundant points reduces the impact of introduced noise, resulting in higher accuracy with lower computational cost. Based on Expressive Keypoints, we try to further prune some keypoints. It is noticeable that removing the keypoints of limbs in an explicit way can achieve a decrease in FLOPs, but it also incurs an equivalent drop in accuracy (protocol #3∼#5). We argue that explicitly selecting detailed limb keypoints is not practical across the diverse actions of large-scale datasets. That is why we adopt the learning-based SkeleT strategy for implicit selection from Expressive Keypoints (protocol #6), achieving great savings in FLOPs while maintaining high accuracy. Group design. Tab. <ref> presents six configurations in terms of the initial number of groups K_0 and the group expansion factor c. It is noticeable that static group designs (c=1) yield sub-optimal performance. For the expanding group designs, the [1, 2, 4] group configuration provides the best accuracy. We consider that too many groups will result in a small number of features after splitting the channels, limiting the representation ability.
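For orientation, the winning [1, 2, 4] configuration interacts with the 10-block architecture described in the appendix, where the groups double at the two downsampling blocks and the joints are reduced from 65 to 27 and then to 11. The small helper below simply enumerates the resulting per-block settings under those assumptions; the numbers are taken from the paper's description, not from released code.

```python
# Per-block configuration under the [1, 2, 4] group design (K_0 = 1, c = 2).
CHANNELS = [64, 64, 64, 64, 128, 128, 128, 256, 256, 256]   # from the appendix
DOWNSAMPLE_AT = {5, 8}                                       # blocks that halve T and merge joints
JOINTS = [65, 27, 11]                                        # Expressive Keypoints -> merged scales

def stage_layout(k0=1, c=2):
    k, stage = k0, 0
    layout = []
    for block, ch in enumerate(CHANNELS, start=1):
        if block in DOWNSAMPLE_AT:
            k *= c                      # groups expand by factor c
            stage += 1                  # keypoints downsampled by the mapping matrix
        layout.append((block, ch, k, JOINTS[stage]))
    return layout

for block, ch, k, joints in stage_layout():
    print(f"block {block:2d}: {ch:3d} channels, K={k}, {joints} joints")
```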
§.§ Comparison with the state-of-the-art When comparing with the state-of-the-art (SOTA), we choose DGSTGCN <cit.> with Expressive Keypoints input as the baseline method (denoted as Ours: Baseline), and apply the SkeleT strategy to form our SkeleT-DGSTGCN (denoted as Ours: SkeleT). In the experiments, * indicates the use of Expressive Keypoints, and we adopt a 4-stream fusion strategy similar to previous works <cit.>. On NTU-60 and NTU-120, as shown in Tab. <ref>, Expressive Keypoints greatly improves the accuracy for skeleton-based action recognition, even surpassing the SOTA point cloud-based <cit.> and RGB-based methods <cit.>. Upon applying the SkeleT strategy, our method achieves significant savings in the computational cost (25.0G → 9.6G), with comparable or even higher accuracy. On PKU-MMD, Tab. <ref> shows our method outperforming all the previous skeleton-based methods by a noticeable margin, achieving state-of-the-art performance with a top-1 accuracy of 98.4%. On N-UCLA, as shown in Tab. <ref>, our method achieves 97.6% top-1 accuracy, which also surpasses the previous best method <cit.>. It is notable that, among the standard skeleton-based datasets, N-UCLA has the most significant variations in viewpoint and severe occlusions. Despite being limited by the estimated 2D representation, which cannot leverage depth information and 3D spatial augmentations (e.g. 3D random rotation), our approach still reaches very promising performance. We further extend SkeleT-DGSTGCN with the IP module (denoted as Ours: SkeleT+IP), which allows for evaluating our method on more general in-the-wild action recognition datasets <cit.>. Kinetics-400 encompasses many human-object interaction scenarios, such as peeling apples and peeling potatoes, so the accuracy of pure skeleton-based methods on Kinetics-400 is far below that on other datasets, since they cannot capture object information. As a result, SKP <cit.> resorts to incorporating object contours and improves the accuracy of the keypoint-based benchmark to 52.3%. However, as shown in Tab. <ref>, by strategically utilizing Expressive Keypoints, our method achieves SOTA performance (53.1%) on the Kinetics-400 dataset even without object information. This is made possible through our expressive skeletal representation and effective transformation strategy, demonstrating the effectiveness of our pipeline even under these challenging conditions. Moreover, we provide an apples-to-apples comparison on UCF-101 and HMDB-51. As demonstrated in Tab. <ref>, our method consistently surpasses the previous skeleton-based SOTA methods <cit.> regardless of whether pre-training is conducted on the Kinetics-400 dataset or not. § CONCLUSION In this work, we propose the Skeleton Transformation strategy using the Expressive Keypoints representation to achieve high performance in discriminating detailed actions while maintaining high efficiency. Furthermore, we implement an Instance Pooling module, expanding the applicability of GCN-based methods to multi-person scenarios. Comprehensive experiments over seven datasets demonstrate our pipeline's superior performance and robust generalization. § APPENDIX In this appendix, we provide an overview and visualization of the datasets, implementation details, additional experimental results, and the limitations and broader impact of our method to complement the main paper.
§ OVERVIEW OF DATASETS We conduct comprehensive experiments to evaluate our proposed pipeline over seven datasets, which are NTU-60 <cit.>, NTU-120 <cit.>, PKU-MMD <cit.>, N-UCLA <cit.>, Kinetics-400 <cit.>, UCF-101 <cit.>, and HMDB-51 <cit.>. NTU-60 and NTU-120 can be collectively referred to as NTU RGB+D, which is currently the largest dataset for skeleton-based human action recognition. The NTU-60 dataset contains 56,880 videos of 60 human actions. The authors of this dataset recommend two split protocols: cross-subject (CS) and cross-view (CV). The NTU-120 dataset is a superset of NTU-60 and contains a total of 113,945 samples over 120 classes. The authors of this dataset recommend two split protocols: cross-subject (CS) and cross-set (CX). We conduct experiments on NTU-60 and NTU-120 following those recommended protocols. The PKU-MMD dataset was originally proposed for action detection. For the action recognition task, we crop long videos to get short clips based on the temporal annotations following <cit.>. PKU-MMD has nearly 20,000 action instances over 51 classes. We follow the recommended CS split protocol for training and testing. N-UCLA contains 1494 video clips covering 10 action categories, which are performed by 10 different subjects. It has the most significant variations in viewpoint and severe occlusions. We follow the same evaluation protocol as in <cit.>. Kinetics-400, UCF-101, and HMDB-51 are general action recognition datasets collected from the web. With the incorporation of the Instance Pooling module, we have extended our pipeline to these in-the-wild datasets. Kinetics-400 is a large-scale video dataset with 300,000 videos and 400 action classes. The UCF-101 dataset comprises approximately 13,000 videos sourced from YouTube, categorized into 101 action labels. HMDB-51 consists of around 6,700 videos with 51 actions. § VISUALIZATION OF THE EXTRACTED WHOLE-BODY POSES We visualize the extracted poses of the aforementioned seven datasets <cit.>. The NTU RGB+D and PKU-MMD datasets are notable for (1) high resolution and excellent image quality and (2) containing at most two people, free from interference by individuals unrelated to the task. Consequently, the quality of the estimated poses is very high, as shown in Fig. <ref>a and Fig. <ref>b. The N-UCLA dataset is also shot indoors and the image quality is relatively high, resulting in fairly good pose estimations, as depicted in Fig. <ref>c. In contrast to NTU RGB+D and PKU-MMD, N-UCLA does not have dual-person actions and focuses solely on single-person action recognition. Kinetics-400 is a large-scale in-the-wild video action recognition dataset presenting complex scenes with numerous multi-person actions (crowd actions) and frequent appearances of unrelated individuals. We provide some examples where our estimator accurately predicts the human poses in Fig. <ref>d. However, since it is not human-centric, there are some problems that degrade the quality of the extracted skeletons, as visualized in Fig. <ref>a. The UCF-101 and HMDB-51 datasets are also in-the-wild video action recognition datasets, where the locations, scales, and number of persons may vary a lot. Fig. <ref>e demonstrates some extracted poses with relatively good quality. However, due to low video resolution, tiny persons, and significant motion blur, the quality of most extracted poses is quite low, as shown in Fig. <ref>b.
§ STATISTICAL METRICS AND RESULTS We conduct a statistical analysis on the NTU-120 dataset, which involves two specific statistical metrics: (i) Video Variance Var^v_i, which calculates the variance of each keypoint's mean position across all videos. A lower value of Var^v_i is indicative of a keypoint distribution that is more consistent and, consequently, more amenable to modeling: Var^v_i = (1/S)∑_s=1^S(v_i,s - μ_vi)^2, where S represents the number of videos, v_i,s is the mean position of the i-th joint in video s, and μ_vi indicates the mean of all v_i,s. (ii) Motion Variance Var^m_i, which measures the motion frequency and range of each keypoint between frames, where a higher Var^m_i indicates more obvious movement for action recognition: Var^m_i = f_σ( (1/(T-1))∑_t=1^T-1√((p_i,t+1 - p_i,t)^2)/ϵ_i ), where f_σ denotes the standard deviation computed across videos, p_i,t denotes the position of the i-th keypoint in the t-th frame, and ϵ_i is the area-scale coefficient of the corresponding body part, which is used to normalize the motion variance. As illustrated in Fig.<ref>, facial keypoints (23-90th) have higher video variance and lower motion frequency, which indicates a low contribution to action recognition. This observation guides us to manually remove them. § OVERALL ARCHITECTURE OF OUR SKELET-GCN Three representative GCN methods are adopted as our baseline models: STGCN++, CTRGCN, and DGSTGCN. All these models share the same high-level design. We apply the SkeleT strategy to form the corresponding SkeleT-GCN models, namely SkeleT-STGCN++, SkeleT-CTRGCN, and SkeleT-DGSTGCN. The integration of the SkeleT strategy is seamless, so the same overall architecture is inherited. It includes 10 spatial-temporal blocks, and the output channels (number of features) for each block are configured as 64, 64, 64, 64, 128, 128, 128, 256, 256, and 256, respectively. The 5th and 8th blocks are downsampling blocks, while the other blocks are normal blocks. In each downsampling block, the groups expand by a factor of 2, the temporal length is reduced to half, and the number of joints is downsampled from 65 to 27 and further to 11. Through a 2D Avg-Pooling, the temporal and joint dimensions are eliminated and the output is used by the classifier to predict a score vector for video-level action recognition. § IMPLEMENTATION DETAILS §.§ Hyperparameters Following the good practices of PYSKL <cit.>, we use the same hyperparameter setting for all GCN models to ensure a fair comparison. Specifically, we employ Stochastic Gradient Descent with a Nesterov momentum of 0.9 and a weight decay of 0.0005. When training from scratch, the initial learning rate is set to 0.1, and we train all models for 120 epochs with the Cosine Annealing LR scheduler. On the UCF-101 and HMDB-51 datasets, we fine-tune all models based on the Kinetics-400 pretrained weights for 120 epochs with an initial learning rate of 0.01, which decays by a factor of 0.1 at epochs 90 and 110. The hyperparameters of batch size, temporal length, and number of input persons employed for each dataset are listed in Tab. <ref>. We use zero-padding or cropping for each video to satisfy the fixed number of input persons. Our models are implemented with the PyTorch deep learning framework. All the experiments are conducted on a single Linux server with four RTX 3090 GPUs for distributed training and testing.
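A compact NumPy sketch of the two statistics defined above, and of the resulting keypoint selection, is given below. The per-video data layout, the reduction of the x/y components to a single scalar per joint, and the exact zero-based index range for the 23rd-90th (facial) keypoints are assumptions made for illustration; only the formulas themselves follow the definitions in this section.

```python
import numpy as np

def video_variance(videos):
    """Var^v_i: variance across videos of each keypoint's per-video mean position.
    `videos` is a list of arrays of shape (T_s, J, 2); x/y are summed into one value per joint."""
    v = np.stack([vid.mean(axis=0) for vid in videos])        # (S, J, 2)
    return ((v - v.mean(axis=0)) ** 2).mean(axis=0).sum(-1)   # (J,)

def motion_variance(videos, eps):
    """Var^m_i: std across videos of the mean frame-to-frame displacement of each
    keypoint, normalised by the part-dependent area-scale coefficients eps (J,)."""
    per_video = []
    for vid in videos:                                         # vid: (T_s, J, 2)
        step = np.linalg.norm(np.diff(vid, axis=0), axis=-1)   # (T_s - 1, J)
        per_video.append(step.mean(axis=0) / eps)
    return np.std(np.stack(per_video), axis=0)                 # (J,)

# 23rd-90th keypoints (the 68 facial landmarks) are dropped, leaving 65 Expressive Keypoints
FACE_IDX = np.arange(22, 90)              # 0-based indices; adjust if the ordering differs
BODY_HAND_FOOT = np.setdiff1d(np.arange(133), FACE_IDX)

def to_expressive(wholebody):             # wholebody: (..., 133, D)
    return wholebody[..., BODY_HAND_FOOT, :]
```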
§.§ Data augmentation Uniform Sampling <cit.> is adopted as a strong temporal augmentation strategy, which evenly partitions the original skeleton sequence into T splits and randomly extracts one frame from each split to form a clip of length T. On the NTU RGB+D, PKU-MMD, and N-UCLA datasets, no spatial augmentation is utilized for processing 2D Expressive Keypoints. On the Kinetics-400, UCF-101, and HMDB-51 datasets, we employ substantial spatial data augmentations, e.g. random scaling, cropping, and flipping the keypoints. Detailed augmentation for each datasets are listed in Tab. <ref>. § SUPPLEMENTARY EXPERIMENTS §.§ Benchmarking GCN methods on Expressive Keypoints With the fine-grained human body representations provided by Expressive Keypoints, most GCN methods can significantly enhance accuracy by simply adjusting the input keypoints. Our proposed Skeleton Transformation (SkeleT) strategy can be applied to these methods, forming our SkeleT-GCN models, which achieves comparable or even higher accuracy with substantially lower computation cost. We conduct a comprehensive benchmark on the NTU-60 and NTU-120 datasets for three representative GCN methods: STGCN++ <cit.>, CTRGCN <cit.>, and DGSTGCN <cit.> with Expressive Keypoints as input, as well as their SkeleT-GCN counterparts: SkeleT-STGCN++, SkeleT-CTRGCN, and SkeleT-DGSTGCN. We measure the Top-1 accuracy of joint-stream (Joint), bone-stream (Bone), two-stream fusion (2s) <cit.>, and four-stream fusion (4s) <cit.>. As shown in Tab. <ref> and Tab. <ref>, our methods obtain better performance and efficiency than baselines in terms of Top1-accuracy, FLOPs, and number of parameters. §.§ Comparison with the state-of-the-art multi-modality methods Across three benchmarks for skeleton action recognition, including NTU RGB+D <cit.>, PKU-MMD <cit.>, and N-UCLA <cit.>, our method not only surpasses all skeleton-based methods but also achieves the best performance among all single-modality methods (RGB-based, point cloud-based). To further demonstrate the superiority of strategically employing Expressive Keypoints, we compare our method with previous SOTA multi-modality methods. It can be observed that on the NTU-60 and NTU-120 datasets (Tab. <ref>), we achieve comparable performance to the SOTA multi-modality method RGBPoseC3D <cit.> in three out of four evaluation protocols. On the PKU-MMD dataset (Tab. <ref>) and the N-UCLA dataset (Tab. <ref>), we outperform the SOTA multi-modality method <cit.>. The experimental results demonstrate that our method, despite being based on a single-modality skeleton input, achieves comparable or even higher performance with a lightweight computational cost than multi-modality methods. This remarkable result primarily stems from introducing fine-grained limb details to the skeleton and employing a SkeleT strategy for effective feature modeling, providing a promising solution for the community. § LIMITATIONS (i) Compared to 3D keypoints, our method faces challenges when recognizing actions in occluded scenarios due to the inherent lack of depth information. (ii) Although we extend our method to in-the-wild scenarios using the Instance Pooling module, it still struggles to distinguish certain scene-based actions or human-object interactions due to the lack of capturing of objects and scenes. § FAILURE CASES As discussed in Sec.<ref>, this section delineates some notable instances where our methodology encounters limitations, leading to classification errors. 
Specifically, within the N-UCLA dataset, the action labeled as carrying is misclassified because the right hand, which plays a crucial role in the execution of this action, is obstructed by the body, as depicted in Fig. <ref>a. Similarly, Fig. <ref>b shows that picking up with one hand is misclassified as picking up with two hands because the left hand is completely obscured, making it impossible to distinguish whether the object was picked up with one or both hands. Furthermore, on the Kinetics-400 dataset, there are some failure cases shown in Fig. <ref>a and Fig. <ref>b. The misclassification of those actions is due to a deficiency in perceiving objects. Moreover, in Fig. <ref>c, our method cannot distinguish the same action, passing American football, performed in different contexts (in game vs. not in game), owing to a lack of contextual scene information. These failure cases reveal that although 2D Expressive Keypoints can significantly enhance recognition performance by providing detailed representations, they struggle in situations involving occlusion due to the absence of depth information, and they cannot effectively distinguish human-object interactions and scene-based actions. These insights point towards promising directions for future enhancements, including the incorporation of depth information and the partial integration of object and scene contextual data. § SOCIAL IMPACT Our research on skeleton-based human action recognition offers significant positive societal impacts, including advancements in healthcare and rehabilitation, elderly care, human-robot interaction, sports analytics, and security and surveillance. However, some potential negative societal impacts may include: (i) the possibility of misuse in surveillance, leading to privacy concerns if individuals are monitored without their consent, and (ii) the risk of biased decision-making if the model is trained on biased data, potentially resulting in unfair treatment of certain groups. Nevertheless, our model only uses skeletal information, which contains less identifiable appearance information than RGB images and videos. This greatly reduces the likelihood of the aforementioned risks.
http://arxiv.org/abs/2406.18656v1
20240626180011
Fundamentalization of periods for first and second-overtone classical Cepheids
[ "Bogumił Pilecki" ]
astro-ph.SR
[ "astro-ph.SR", "astro-ph.GA" ]
Bogumił Pilecki (pilecki@camk.edu.pl, ORCID 0000-0003-3861-8124), Centrum Astronomiczne im. Mikołaja Kopernika, PAN, Bartycka 18, 00-716 Warsaw, Poland § ABSTRACT Almost half of all classical Cepheids do not pulsate in fundamental mode, and nowadays, the fundamentalization of their higher-mode periods is frequently applied to increase the sample size in astrophysical investigations and allow for comparison with fundamental-mode Cepheids. On the other hand, the relations used to obtain fundamentalized periods are either old or based on small samples that cover narrow period ranges. We used available data of 989 Cepheids pulsating in at least two modes to obtain modern, high-quality empirical fundamentalization relations applicable in a wide range of periods of first- and second-overtone Cepheids for metallicities typical for the Milky Way and Magellanic Clouds. A clear correlation between the features of these relations and metallicity is seen, and periods with lower sensitivity to metallicity are identified. We also compare our results with double-mode Cepheids from the M31 and M33 galaxies. For the former galaxy, this indicates that Cepheids have metallicities from supersolar to typical for the LMC, while for the latter, from solar to typical for the SMC. A general discussion of the usage of different types of fundamentalization relations depending on the scientific problem is included. § INTRODUCTION Classical Cepheids (hereafter also Cepheids) are crucial for various fields of astronomy, including stellar oscillations and the evolution of intermediate and massive stars, and have an enormous influence on modern cosmology <cit.>. Since the discovery of the relationship between their pulsation period and luminosity (the Leavitt Law), the Cepheids have been extensively used to measure distances in the Universe <cit.>. They are radially pulsating evolved intermediate and high-mass giants and supergiants, mostly located in a well-defined position on the helium-burning loop (called the blue loop). About 43% of Cepheids do not pulsate in fundamental (F) mode, being mostly either first-overtone (1O) or second-overtone (2O) pulsators <cit.>. To increase the significance of the analysis, it is therefore often necessary to combine the samples and, for that, to fundamentalize the higher-order mode pulsation periods <cit.>. Period fundamentalization is also used to apply characteristics of fundamental-mode Cepheids to the first-overtone ones <cit.> or to compare the Cepheids pulsating in different modes <cit.>. There are different empirical approaches for the fundamentalization of pulsation periods. A classical method uses double-mode Cepheids (also called beat Cepheids), which pulsate in both the F and 1O mode, to find a relation between the period ratio P_1O/P_F and other parameters, in principle one of the periods and metallicity <cit.>. Such a method should be consistent with the period-mass-radius (PMR) relation <cit.> but not with the period-luminosity one, being mostly insensitive to the temperature change across the instability strip (IS). Consequently, when such fundamentalized periods are used, 1O Cepheids lie slightly above the period-luminosity (P-L) relation for F-mode Cepheids. The difference increases as we move from the central part of the instability strip occupied by double-mode F+1O Cepheids, the so-called OR region <cit.>, towards the blue edge of the IS.
An alternative method that, on average, keeps the luminosity (but does not follow the PMR relation) was developed by <cit.> to look for overbright Cepheids regardless of their mode. This was obtained by minimization of scatter in a joint (F+1O) P-L relation, where 1O periods were fundamentalized using a fitted function. This method corrects for the average difference in luminosity between the F and 1O Cepheids along the line of the same period in the IS. Therefore, the luminosity obtained for 1O Cepheids from P-L relations for F-mode Cepheids (using the fundamentalized period) should be, on average, the same as the measured one. And finally, period ratios that are predicted from theoretical modeling for a given metallicity <cit.> can also be used for fundamentalization of higher-order mode periods. This paper presents modern, high-quality empirical fundamentalization relations based on double-mode Cepheids that are applicable in a wide range of periods for metallicities ([Fe/H]) typical for the Milky Way (hereafter also MW), Large Magellanic Cloud (LMC), and Small Magellanic Cloud (SMC). § FIRST-OVERTONE MODE About 42% of Cepheids pulsate in at least 1O mode, not having the F mode excited. In this section, we derive equations for the fundamentalization of 1O periods. §.§ Data From the OGLE-4 catalogs for the Milky Way <cit.> and Magellanic Clouds <cit.> we retrieved pulsation periods for 231 Cepheids pulsating at least in fundamental and first-overtone modes and calculated for them the corresponding period ratios, P_F/P_1O. The MW sample was extended with 18 Cepheids listed in <cit.>. In total, in the analysis, we used 101 objects from the LMC, 69 from the SMC, and 79 from the MW. For all occurrences, the unit for periods is days. §.§ Relations For Cepheids of each galaxy, we fitted a relation in the form of P_F/P_1O = a + b log P_1O, the same as in equation 1 of <cit.>, that makes the fundamentalization much easier than a typically provided P_1O/P_F = a + b log P_F. However, the used data and the best fit for the Milky Way and Magellanic Clouds are shown in Fig.<ref> in a standard Petersen diagram P_1O/P_F vs. P_F. For comparison, we also show similar relations from <cit.>, metallicity-dependent relations from <cit.>, and those from P24. We note that the latter were obtained differently, using the Wesenheit indices of single-mode 1O and F-mode Cepheids as constraints. The best-fitting relations for double-mode Cepheids for each galaxy are given below: (MW) P_F/P_1O = 1.371 + 0.106 log P_1O, (LMC) P_F/P_1O = 1.367 + 0.079 log P_1O, (SMC) P_F/P_1O = 1.356 + 0.068 log P_1O. These relations can be directly used to fundamentalize periods of 1O Cepheids. The high scatter of period ratios for the Milky Way is probably due to the metallicity spread <cit.>, and a more complex relation would be preferred. However, the metallicity-dependent relation from Sz18 has a slope significantly different from the one measured here. The reason for this is probably a much lower period range of Cepheids with measured metallicity, which does not provide a good constraint for the b log P_1O term. As a result, this relation does not reproduce correct period ratios for the lowest and highest pulsation periods for any of the considered galaxies. We note that the assumed metallicities do not affect this conclusion; they can only shift the relation vertically but not change the slope. 
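For convenience, Eq. 1 can be applied directly in code. The helper below is a plain transcription of the coefficients above (periods in days); the galaxy labels and the example value are illustrative only.

```python
import numpy as np

# Eq. (1): P_F / P_1O = a + b * log10(P_1O), with (a, b) per galaxy
FUND_1O = {"MW": (1.371, 0.106), "LMC": (1.367, 0.079), "SMC": (1.356, 0.068)}

def fundamentalize_1o(p_1o, galaxy="LMC"):
    """Fundamentalized period (days) for a first-overtone period p_1o (days)."""
    a, b = FUND_1O[galaxy]
    return p_1o * (a + b * np.log10(p_1o))

print(fundamentalize_1o(2.5, "LMC"))   # multiply P_1O by the fitted period ratio
```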
We used metallicities [Fe/H] of 0.0 dex, -0.35 dex, and -0.75 dex for MW, LMC, and SMC, respectively, consistent with the homogenous metallicity study of Cepheids by <cit.> and values adopted by <cit.> that considered several different estimates. The LMC relation from A95, based on a larger number of Cepheids, is mostly consistent with our result, but a slight difference is notable at the shortest periods. Data from a wide range of periods are thus crucial to obtaining relations that are universally applicable, regardless of Cepheid properties. As expected, the relations from P24 give longer fundamentalized periods, compensating for the higher temperature and luminosity of 1O Cepheids. The corresponding magnitudes from the two approaches differ by about 0.03 to 0.07 mag, depending on the period. In Fig. <ref>, we compare the relations for all three galaxies. The difference between them most probably comes from a different average metallicity of MW, LMC, and SMC, which is reflected by its correlation with the slope. Interestingly, at the short-period end, these relations either cross (MW with LMC and SMC) or get very close to each other (LMC with SMC). Apparently, the dependence of P_F/P_1O on metallicity is lower for periods shorter than one day, although it is not negligible (the scatter for MW Cepheids is significant there). For longer periods, the difference increases considerably. As position on this diagram may be used to estimate the metallicity of Cepheids, we show here also the position of known beat Cepheids from M31 <cit.> and M33 <cit.> galaxies. For Cepheids in M31, this indicates the metallicity typical for the LMC, and for M33 Cepheids, a spread from solar to typical for the SMC. § SECOND-OVERTONE MODE About 7% Cepheids pulsate in at least 2O mode without having the F mode excited. In this section, we derive equations for transforming 2O periods to their equivalents in 1O and F modes. §.§ Data We selected 740 Cepheids pulsating in at least the first and second overtone modes from the same OGLE-4 catalogs <cit.> as used in the previous section. We then calculated the corresponding period ratios, P_1O/P_2O. In total, in this part of the analysis, we used 329 objects from the LMC, 240 from the SMC, and 171 from the MW. §.§ Relations For Cepheids of each galaxy, we fitted a relation in the form of P_1O/P_2O = a + b log P_2O + c log^2 P_2O. These data are shown in Fig. <ref> together with the best fit for the Milky Way and Magellanic Clouds. For comparison, we also show a similar relation from A95. The best-fitting relations for the MW (2a), LMC (2b), and SMC (2c) galaxies are given below: P_1O/P_2O = 1.247 + 0.028 log P_2O + 0.059 (log P_2O)^2 P_1O/P_2O = 1.247 + 0.032 log P_2O + 0.044 (log P_2O)^2 P_1O/P_2O = 1.249 + 0.048 log P_2O + 0.048 (log P_2O)^2 These relations can be used to convert 2O periods to their 1O equivalents. Similarly to the P_F/P_1O, a high scatter can be seen for the Milky Way. The A95 relation is clearly discrepant for low periods but fits the data reasonably well for the period range used to obtain it, i.e., log P_0 = -0.2 – 0.1. As in the previous section, using data with a wide period range was important to obtain relations applicable regardless of the Cepheid period. Although parabolic functions fit the data well, it seems that a linear function with a break would fit the data better for the LMC. However, we prefer to have the same formula for all relations and keep the number of fitted parameters to a minimum. 
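The same one-line evaluation applies to the 2O-to-1O conversion; the sketch below (again my own naming, not the authors' code) evaluates the quadratic fits given above:

```python
import math

# (a, b, c) coefficients of P_1O/P_2O = a + b*log10(P_2O) + c*log10(P_2O)**2.
RATIO_1O_2O = {
    "MW":  (1.247, 0.028, 0.059),
    "LMC": (1.247, 0.032, 0.044),
    "SMC": (1.249, 0.048, 0.048),
}

def second_to_first_overtone(p_2o: float, galaxy: str = "LMC") -> float:
    """Convert a second-overtone period (days) to its first-overtone equivalent."""
    a, b, c = RATIO_1O_2O[galaxy]
    x = math.log10(p_2o)
    return p_2o * (a + b * x + c * x * x)

# Example: a 0.9-day 2O Cepheid in the LMC -> ~1.12 d in 1O.
print(round(second_to_first_overtone(0.9, "LMC"), 3))
```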
The analysis of these relations shows that the position of the maximum of P_2O/P_1O shifts to shorter periods for decreasing metallicity, being located at log P_M = -0.139, -0.263, and -0.401 for the MW, LMC, and SMC, respectively. Using [Fe/H] values adopted in Section <ref>, this gives a linear relation: [Fe/H] = 0.40 + 2.9 log P_M. In Fig. <ref>, we compare the relations for all three galaxies. They cross at moderate periods and are similar for longer periods, while the discrepancy increases significantly at the shorter-period end. From this comparison, we can infer that for P_1O between 0.7d and 1.4d, P_1O/P_2O have low dependence on metallicity. Actually, at the period of 0.9 days, these ratios are almost completely insensitive to [Fe/H], which is also reflected in a very low scatter for MW Cepheids around this value (see Fig. <ref>). We also overplot in this diagram the position of a known beat M31 Cepheid <cit.>. A comparison with the presented relations indicates that it may have solar or supersolar metallicity. §.§ Fundamentalization A combination of relations given by Eq. 1 and 2 can be used to obtain fundamentalized periods for 2O Cepheids. Below, we provide such a transformation in the same form and order as Eq. 2. P_F/P_2O = 1.723 + 0.171 log P_2O + 0.081 (log P_2O)^2 P_F/P_2O = 1.713 + 0.144 log P_2O + 0.062 (log P_2O)^2 P_F/P_2O = 1.702 + 0.152 log P_2O + 0.068 (log P_2O)^2 To test these relations, we calculated fundamental-mode periods (P_F^rel) using P_2O for the only four Cepheids in our sample that pulsate simultaneously in F and 2O modes (all of them are triple-mode F/1O/2O Cepheids). For OGLE-GD-CEP-1011 and OGLE-GD-CEP-1704 , we obtained a relative difference Δ_rel=(P_F-P_F^rel)/P_F of 0.71% and -0.24%, respectively. For OGLE-LMC-CEP-1378 and OGLE-LMC-CEP-4718, the differences are 0.54% and 0.03%. All four values are below 1σ for Δ_rel obtained for the corresponding relations for 1O Cepheids shown in Fig. <ref>. Please note, however, that 2O Cepheids barely overlap in the IS with those pulsating in the F mode. Therefore, although the relations are well-defined in corresponding period ranges, such a fundamentalization of 2O periods may be considered a significant extrapolation. § FINAL REMARKS Double-mode Cepheids are a subset of all Cepheids; they occupy a limited part of the instability strip, called the OR regions <cit.>. The presented relations are thus only an approximation that may worsen the farther we move from the source data in the parameter space. For example, they do not take the temperature dependence into account. Moreover, we want to highlight that which empirical approach for fundamentalization should be applied depends on the objective for which the resulting data will be used. For example, the fundamentalization presented here should be preferred whenever we are interested in obtaining the expected physical properties of the stars. On the other hand, the transformations based on P-L relations, as in P21 and P24, should be used if we want to unify the samples of higher-order-mode Cepheids with the fundamental-mode ones or, in general, when we want to ignore the average difference in their temperatures and luminosities. An example of using both approaches for the same sample depending on the objective can be found in P24. 
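For completeness, the combined P_F/P_2O relations, the Δ_rel check, and the [Fe/H]–log P_M relation can be evaluated in the same way. The sketch below is my own illustration (names are not from the paper); note the caveat above that fundamentalizing 2O periods may amount to a significant extrapolation:

```python
import math

# (a, b, c) coefficients of P_F/P_2O = a + b*log10(P_2O) + c*log10(P_2O)**2.
RATIO_F_2O = {
    "MW":  (1.723, 0.171, 0.081),
    "LMC": (1.713, 0.144, 0.062),
    "SMC": (1.702, 0.152, 0.068),
}

def fundamentalize_2o(p_2o: float, galaxy: str = "LMC") -> float:
    """Fundamentalize a second-overtone period (days); for 2O Cepheids this
    may be a significant extrapolation (see the caveat above)."""
    a, b, c = RATIO_F_2O[galaxy]
    x = math.log10(p_2o)
    return p_2o * (a + b * x + c * x * x)

def relative_difference(p_f_observed: float, p_f_from_relation: float) -> float:
    """Delta_rel = (P_F - P_F^rel) / P_F, as used for the triple-mode test above."""
    return (p_f_observed - p_f_from_relation) / p_f_observed

def feh_from_maximum(log_p_max: float) -> float:
    """[Fe/H] = 0.40 + 2.9 * log P_M, from the position of the P_2O/P_1O maximum."""
    return 0.40 + 2.9 * log_p_max

# The LMC maximum at log P_M = -0.263 gives [Fe/H] ~ -0.36, close to the adopted -0.35.
print(round(feh_from_maximum(-0.263), 2))
```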
Alternatively, theoretical hydrodynamical models that account for a time-dependent treatment of convective transport <cit.> offer the calculation of periods of different modes even for single-mode Cepheids, making it possible to obtain the fundamental mode period directly. This is equivalent to the first approach with the advantage of avoiding extrapolation. On the downside, to properly perform such fundamentalization, one needs a well-calibrated model and a knowledge of Cepheid's physical properties <cit.>, which are rarely available and, in most cases, are obtained from other theoretical (e.g., evolutionary) models. An interesting possibility would be to use the theoretically predicted periods for mixed-mode Cepheids in the OR regions to obtain similar fundamentalization relations as derived in this paper. Comparison with empirical relations could then be used to calibrate the models. For RR Lyrae variables, metallicity depends significantly on the pulsation period, which intertwines with a dependence of period ratio on metallicity <cit.>. This is not the case for Cepheids, where only a slight trend for a relation between [Fe/H] and period was detected with a low significance by <cit.>. There is, however, a hint in the data they presented that this trend may grow stronger for short periods. Unfortunately, it cannot be traced there because of scarce metallicity measurements for the faint, short-period Cepheids (log P_F < 0.5, log P_1O < 0.35). Our results, which show a different variation of the period ratios along the Cepheid period for each galaxy, can be considered another hint for metallicity trends and a more complex metallicity dependence. We note here that all our 1O/2O-mode Cepheids have log P_1O < 0.35 (see Fig. <ref>). Additionally, the sensitivity to metallicity may depend on other Cepheid's physical properties and, as a result, on the period. A large spread of Cepheid masses and radii may thus be the cause of a change in the value and the sign of the metallicity dependence between the short and long-period Cepheids. A detailed theoretical study could help explain this phenomenon. There is no direct metallicity determination for Cepheids in M31 and M33 galaxies, but indirect estimates exist. For example, <cit.> calculated [O/H] for Cepheids from their location in the disc using a relation of <cit.> and obtained values that are close to solar. For M33 <cit.>, estimated the metallicities of beat Cepheids using metallicity gradients and a comparison with theoretical models and obtained values of Z from about 0.006 to 0.013, which are consistent with the position of these Cepheids in Fig. <ref>. The research leading to these results received funding from the Polish National Science Center grant SONATA BIS 2020/38/E/ST9/00486. This research used NASA's Astrophysics Data System Service. aasjournal
http://arxiv.org/abs/2406.18723v1
20240626194409
The local structure, electronic and optical properties of Pb(Mg$_{1/3}$Nb$_{2/3}$)O$_3$-PbTiO$_3$: first-principles study
[ "M. Kovalenko", "O. Bovgyra", "V. Kapustianyk", "O. Kozachenko" ]
cond-mat.mtrl-sci
[ "cond-mat.mtrl-sci" ]
§ ABSTRACT Pb(Mg_1/3Nb_2/3)O_3-PbTiO_3 perovskite-based crystals attract considerable scientific interest due to their interesting properties and possible use in piezoelectricity and photovoltaics. To understand the local structure and fundamental properties of such materials, in this work we focus on a density functional theory study of the structural, electronic, and optical properties of Pb[(Mg_1/3Nb_2/3)_0.75Ti_0.25]O_3. Using the GGA(PBEsol) approximation for structure optimization gives good agreement with experimental data. By varying the Hubbard U parameters added to the GGA(PBEsol) functional, we obtain a bandgap for Pb[(Mg_1/3Nb_2/3)_0.75Ti_0.25]O_3 that is in good agreement with the experimental results. The study of the bond populations shows that the Mg–O bond demonstrates no covalency, whereas the Ti–O and Nb–O bonds show significant covalent bonding. Such different bonding characteristics must be responsible for the relaxor properties of the Pb[(Mg_1/3Nb_2/3)_0.75Ti_0.25]O_3 compound. In addition, we investigated the optical properties of Pb[(Mg_1/3Nb_2/3)_0.75Ti_0.25]O_3 by adopting the Hubbard U corrections, which correct for the error of the GGA approximation and confirm the electronic-structure analysis.

§ INTRODUCTION Perovskite-based single crystals of the lead magnesium niobate–lead titanate (1-x)Pb(Mg_1/3Nb_2/3)O_3–xPbTiO_3 solid solution (PMN–PT), in which the Mg^2+ and Nb^5+ ions occupying the B-sites are initially disordered, are known as relaxor ferroelectrics. In essence, these dielectrics exhibit a wide, frequency-dependent response with respect to temperature variations. PMN–PT solid solutions form a new generation of piezoelectric materials because of their high piezoelectric coefficients (d_33∼ 2500 pC/N) and electromechanical coupling factors (k_33∼ 94%) <cit.>. Moreover, PMN–PT materials exhibit pronouncedly high dielectric constants and correspondingly low dielectric losses <cit.>. These favorable characteristics significantly improve bandwidth and sensitivity when utilized in electromechanical sensing and power applications <cit.>. The outstanding piezoelectric properties of these materials manifest in compositions located within the morphotropic phase boundary (MPB) region, particularly close to the boundaries between the rhombohedral and monoclinic phases and the monoclinic and tetragonal phases. This is attributed to multiple polarization states or dipole orientations in materials with MPB compositions, which exhibit heightened susceptibility to electric-field-driven switching. Consequently, these materials become more electrically active, significantly enhancing their piezoelectric response <cit.>. Another promising application area of the PMN–PT compound is its use in photovoltaic (PV) devices, such as semiconductor-based PV cells. In the study <cit.>, the PV properties of the PMN–PT crystal and their correlation with its deformation properties were investigated. In addition, several ferroelectric compounds from the PMN–PT family became photovoltaic after WO_3 doping <cit.>, and the PV effect was reported for two compounds with stoichiometry near the MPB region <cit.>. Considering the above, these materials attract considerable scientific interest.
This is confirmed by numerous experimental studies devoted to exploring dielectric dispersion and piezoelectric characteristics <cit.>. However, the precise relationship between composition and relaxor behavior still needs to be understood despite acknowledging the critical role played by hetero-valency or the degree of disorder on the B-site. Drawing from the existing research on relaxor ferroelectrics, there is a compelling proposition that the presence of local structural heterogeneity exerts a profound influence on the piezoelectric properties of these materials. PMN–PT solid solutions are suitable objects for studying the correlation between structure and properties due to the availability of sufficient experimental data. However, to date there are only a few studies carried out within the density functional theory (DFT), which focused on the relationship between the local structure and electronic properties of such compounds <cit.>. Therefore, in this research we studied the influence of the local atomic environment on the electronic and optical properties of PMN–PT solid solutions using the first-principles methods within DFT. § MODEL AND METHODS To study the local structure of PMN–PT, first-principle calculations within DFT were carried out for a 2 × 2 × 2 supercell model system of Pb[(Mg_1/3Nb_2/3)_0.75Ti_0.25]O_3 (0.75PMN–0.25PT) containing 40 atoms which was successfully used in previous studies <cit.>. This composition and B-cation ordering is a highly convenient alternative for first-principles investigations due to its capability of applying a relatively compact supercell. As a result, we adopt this specific system as the primary model for our research calculations. Furthermore, that supercell is set to be large enough to cover all changes in the local structure <cit.>. Periodic calculations within ab initio DFT were performed using the pseudopotential plane wave method implemented in the CASTEP software code <cit.>. The exchange-correlation functional is presented in the generalized gradient approximation (GGA) in the PBE parameterization for solids (PBEsol). The cut-off energy is set at 600 eV for all calculations. Representation of electronic states in the first Brillouin zone using a 2 × 2 × 2 k-points grid was performed according to the Monkhorst–Pack scheme. Equilibrium crystal structures were obtained by geometry optimization in the Broyden–Fletcher–Goldfarb–Shanno (BFGS) minimization algorithm until the forces become less than 0.005 eV·Å^-1. To accurately describe the electronic structure of the 0.75PMN–0.25PT solid solution, the Hubbard U correction method for the GGA(PBEsol) approximation was additionally used. This method is effectively used to describe the electronic structure of various systems of different dimensions <cit.>. § RESULTS AND DISCUSSION Figure <ref> presents the 0.75PMN–0.25PT structure after geometry relaxation. The structure of PMN–PT corresponds to Pb(B'_1/2B”_1/2)O_3 <cit.> for which the arrangement of B cations follows the model of random positions and for Nb atoms occupy the B' cation site, and the B” site Mg and Ti atoms fill equally. The optimized lattice parameters for the 0.75PMN–0.25PT supercell are a = 8.095 Å, b = 7.962 Å, c = 8.177 Å, α = 90.14^∘, β = 89.34^∘, γ = 90.00^∘ and volume V = 527.058 Å^3 corresponding to 65.88 Å^3/f.u. The obtained value for volume per formula unit is in good agreement with the experimental data that gives approximately 64.88 Å^3/f.u. <cit.>. 
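As a quick arithmetic cross-check of the quoted cell volume (a sketch of my own, not part of the paper's workflow), the volume of the optimized triclinic supercell follows from the standard formula V = abc·sqrt(1 − cos²α − cos²β − cos²γ + 2 cosα cosβ cosγ), and dividing by the eight formula units of the 40-atom 2 × 2 × 2 supercell reproduces the per-formula-unit value given above:

```python
import math

def triclinic_volume(a, b, c, alpha, beta, gamma):
    """Cell volume in cubic Angstroms (lengths in Angstrom, angles in degrees)."""
    ca, cb, cg = (math.cos(math.radians(x)) for x in (alpha, beta, gamma))
    return a * b * c * math.sqrt(1 - ca**2 - cb**2 - cg**2 + 2 * ca * cb * cg)

# Optimized 0.75PMN-0.25PT supercell: 40 atoms = 8 formula units.
vol = triclinic_volume(8.095, 7.962, 8.177, 90.14, 89.34, 90.00)
print(f"V = {vol:.1f} A^3, V/f.u. = {vol / 8:.2f} A^3")  # ~527 A^3 and ~65.9 A^3/f.u.
```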
The Pb ions and B-cations displacements, particularly Mg, Nb, and Ti ions inside the corresponding oxygen shells, determine macroscopic properties, especially for polarization in ferroelectric perovskites based on PMN–PT. 6s-electrons of the lone pair of Pb2+ ions cause a significant displacement of Pb ions and the crucial contribution of these ions to the overall polarization <cit.>. The results of DFT calculations for the average displacement values of Pb2+, Mg2+, Nb5+, and Ti4+ cations from the center of oxygen cage in the 0.75PMN–0.25PT solid solution are presented in table <ref>, and are depicted by arrows in figure <ref>. The local structure analysis in the equilibrium state demonstrates that the Pb ions undergo the most significant displacement of 0.446 Å from their initial positions, moving towards the Mg–Nb surface [(001) plane] and bypassing the Ti–Nb surface [(001) plane], because the repulsive force between Pb–Mg atoms is weaker than that between Pb–Nb and Pb–Ti atoms. Nb and Ti ions move in the same direction as Pb, while Mg ions are only slightly shifted. The analysis shows that Nb and Ti atoms move from the center by 0.21–0.25 Å, leading to shorter bonds between Nb–O and Ti–O while making a decisive contribution to macroscopic polarization <cit.>. The calculated local structure parameters agree well with previous theoretical and experimental results (see table <ref>) <cit.>. After structure optimization, the electronic structure of 0.75PMN–0.25PT solid solution was studied. Figure <ref> presents the band structure of 0.75PMN–0.25PT, calculated along the high symmetry points of the first Brillouin zone. The bandgap (E_g) calculated using the GGA(PBEsol) at the Γ point is 2.13 eV. This value is larger than for the PbTiO3 crystal (1.88 eV <cit.>), calculated by the same method. The obtained value of E_g is smaller compared to the experimental one (3.24 eV <cit.>), and such underestimation is a common problem of the GGA exchange-correlation functional. The results of solving this problem using the on-site Hubbard corrections (DFT+U method) will be presented herein below. To establish the genetic origin of the electronic states of the 0.75PMN–0.25PT compound, the distributions of the total and partial density of states (DOS and PDOS, respectively) were calculated and presented in figure <ref>. PDOS shows that deep electronic states from -18 to -15 eV energy range are mainly derived from O 2s-orbitals and Pb 5d-orbitals. Similar behavior was inherent to the PDOS of PbTiO3 perovskites <cit.>. The valence band from -9 to 0 eV is formed by the 2p O states and the 6s Pb states, while the hybridized 3d Ti states, 4d Nb states, and 6p Pb states mainly contribute the conduction band in the energy range from 3.2 to 7.5 eV. It should be noted that the most significant contribution near the top of the valence band is observed from the 2p orbitals of O ions. The bottom of the conduction band is mainly formed by the 3d and 4d states of Ti and Nb atoms, respectively (figure <ref>). In general, the obtained distribution of the density of states corresponds to previous theoretical calculations <cit.>, and we also see a good agreement with the results obtained separately for components of the PMN–PT solid solution <cit.>. The next stage of our investigation was based on analyzing the genesis of the electronic states and their energy position and selecting the Hubbard U parameters to obtain more accurate electronic structures for the 0.75PMN–0.25PT system. 
First, we chose U parameters for the d-orbitals of Ti and Nb ions (U_d,Ti, U_d,Nb), which are considerable for the formation of electronic states near the bottom of the conduction band and did not apply U_d for Pb d-orbitals because it does not influence the variation in bandgap energy due to its deep position in the valence band of the electronic structure of PMN–PT system (see figure <ref>). Calculations showed that taking into account U corrections only for Ti and Nb atoms refines the value of the bandgap by approximately 20% (the largest E_g = 2.56 eV obtained for U_d,Ti = 8 eV, U_d,Nb = 6 eV, U_p,O = 0 eV, see table <ref>). To improve the obtained results, in addition to the U_d,Ti, and U_d,Nb parameters, the non-zero U parameter for oxygen atoms (U_p,O) was taken into account. This approach was used for simple oxide semiconductors (TiO2, ZnO) <cit.>, and for perovskite-type ABO3 oxide crystals, particularly for PbTiO3 <cit.> and BaTiO3 <cit.>. The obtained results for different sets of U parameters are presented in table <ref>. Consideration of the three Hubbard U parameters allows us to achieve the bandgap value of 3.24 eV with the parameters U_d,Ti = 10 eV, U_d,Nb = 8 eV, and U_p,O = 5 eV, which is in excellent agreement with the experimental data <cit.>, and the obtained corresponding electronic spectrum for 0.75PMN–0.25PT is presented in figure <ref>. The results also showed that for the structural properties such as the value of the volume per formula unit, the GGA+U yielded the results (65.07 Å^3/f.u.) close to the experimental values when U_d,Ti = 10 eV, U_d,Nb = 8 eV, and U_p,O = 5 eV. It should be noted that the received set is not unique, we obtained another set that gives an experimental value of the bandgap: U_d,Ti = 9 eV, U_d,Nb = 9 eV, and U_p,O = 6 eV (see table <ref>), but the value of the volume per formula unit is 65.16 Å^3/f.u. Therefore, all further calculations were carried out for U parameters: U_d,Ti = 10 eV, U_d,Nb = 8 eV, and U_p,O = 5 eV. Following the implementation of U parameters, the band structure exhibits an increased dispersion compared to the outcome without their inclusion. The PDOS analysis showed that the bands associated with contributions from d orbitals of Ti and Nb shift up the energy scale, thereby widening the bandgap. To determine the bonding nature of Pb–O, Ti–O, Mg–O, and Nb–O, the bond lengths, bond population, and Mulliken charges for 0.75PMN–0.25PT system were calculated using the GGA(PBEsol)+U method. The results are presented in table <ref>. The bond lengths of Pb–O, Mg–O, Ti–O, Nb–O and O–O are 2.70, 2.11, 2.00, 2.01, and 2.82 Å, respectively, for the 0.75PMN–0.25PT compound. It is necessary to note that the obtained results for bond lengths of Pb–O, Ti–O, and O–O correspond to appropriate bond lengths received theoretically and experimentally in PbTiO3 perovskite <cit.>. The Mulliken charge analysis in a crystal lattice describes the degree of charge transfer between ions and helps to establish the bonding type between ions. Table <ref> shows the calculated values of charge transfer q(e) for all ions in the 0.75PMN–0.25PT compound. Based on the obtained results, we can conclude that Ti and O atoms form covalent bonding as well as between Nb and O atoms. However, there is no observation of covalent bonding between Mg and O atoms, while the Pb–O bond shows a weak covalent bonding. 
Our calculations found that the B-site atoms, particularly Ti, Nb, and Mg, in relaxor ferroelectric 0.75PMN–PT have different bonding characteristics with O atoms: the Ti–O and Nb–O bonds are strongly covalent, while the Mg–O bond remains highly ionic, showing no covalent bonding with O atoms. The obtained results indicate that these different bond characteristics might contribute to the relaxor properties of the PMN–PT solid solution. Such bond behavior confirms the theoretical results obtained earlier for the PMN relaxor <cit.>.

Next, we investigated the optical properties of PMN–PT compounds because these materials are considered promising for photovoltaic applications. In particular, we calculated the real (ε_1) and imaginary (ε_2) parts of the dielectric function and, based on them, the absorption coefficient (α), which is directly related to them. The optical properties of the material can be described using the complex dielectric function ε(ω), which has two components — real ε_1(ω) and imaginary ε_2(ω) <cit.>:

ε(ω) = ε_1(ω) + iε_2(ω).

Usually, the electronic structure is directly related to the imaginary part of the dielectric function [ε_2(ω)], which accounts for all possible transitions from filled to unfilled states. The value of ε_2(ω) was calculated from the expression:

ε_2(ω) = (2e^2π/Ωε_0) ∑_k,v,c |⟨ψ_k^c| u·r |ψ_k^v⟩|^2 δ(E_k^c - E_k^v - E),

where Ω is the unit cell volume, u is the polarization vector, and ψ_k^c and ψ_k^v are the wave functions of the conduction and valence bands, respectively. The real part of the dielectric function ε_1(ω) can be calculated from ε_2(ω) using the Kramers–Kronig relation <cit.>:

ε_1(ω) = 1 + (2/π) 𝒫 ∫_0^∞ [ω^'ε_2(ω^') / (ω^'2 - ω^2)] dω^',

where 𝒫 denotes the principal value of the integral. The absorption coefficient α(ω) is directly related to the dielectric function and can be calculated using the following expression <cit.>:

α(ω) = √(2) ω [√(ε_1^2(ω) + ε_2^2(ω)) - ε_1(ω)]^1/2.

The optical properties of the 0.75PMN–0.25PT system, calculated using the GGA(PBEsol)+U approximation, are presented in figure <ref>. The spectra show a small anisotropy of the optical properties. The spectra of the imaginary part of the dielectric function ε_2 are directly related to the electronic band structure and describe the light absorption of the compound. The prominent peaks of the ε_2 spectra are located at approximately 6.14, 14.53, 18.16, and 21.33 eV (see figure <ref>, right plot). The transition from the 2p O [valence band maximum (VBM)] to the 3d Ti and 4d Nb [conduction band minimum (CBM)] orbitals is mainly represented by the peak near 6.14 eV. By contrast, the transition from the 2p O (VBM) to the 6p Pb (CB) orbitals is indicated by the 14.53 eV peak. In addition, the optical peaks near 18.16 and 21.33 eV are associated with internal electronic excitation transitions from the 5d Pb and 2s O states near the valence band to the semi-core states of the conduction band. It should be noted that not every peak in the ε_2 function corresponds to a single interband transition, because the electronic band structure may include many direct or indirect transitions with the same energy contributing to the same peak. The real part of the dielectric function (ε_1) shows behavior consistent with the imaginary part; the calculated zero-frequency limit [ε_1(0)] for the 0.75PMN–0.25PT compound is 7.94, 8.06, and 7.73 for the different light polarizations (figure <ref>, middle plot) in the case of the GGA(PBEsol)+U method.
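To make the expressions above concrete, here is a small numerical sketch of my own (not from the paper) that obtains ε_1 from a tabulated ε_2 through the Kramers–Kronig relation and then evaluates α(ω); the principal value is handled crudely by skipping the singular grid point, and the single-peak ε_2 is a toy input used purely for illustration:

```python
import numpy as np

def eps1_from_eps2(omega, eps2):
    """eps1(w) = 1 + (2/pi) P∫ w' eps2(w') / (w'^2 - w^2) dw'
    (crude principal value: the singular grid point is excluded from the sum)."""
    omega, eps2 = np.asarray(omega, float), np.asarray(eps2, float)
    dw = np.gradient(omega)
    eps1 = np.ones_like(omega)
    for i, w in enumerate(omega):
        mask = np.arange(omega.size) != i
        eps1[i] += (2.0 / np.pi) * np.sum(
            omega[mask] * eps2[mask] / (omega[mask] ** 2 - w ** 2) * dw[mask]
        )
    return eps1

def absorption(omega, eps1, eps2):
    """alpha(w) = sqrt(2) * w * [sqrt(eps1^2 + eps2^2) - eps1]^(1/2)."""
    root = np.sqrt(eps1 ** 2 + eps2 ** 2) - eps1
    return np.sqrt(2.0) * omega * np.sqrt(np.maximum(root, 0.0))

# Toy example: a single Lorentzian-like eps2 peak near 6 eV.
w = np.linspace(0.05, 40.0, 2000)
eps2 = 5.0 / (1.0 + ((w - 6.14) / 0.8) ** 2)
eps1 = eps1_from_eps2(w, eps2)
alpha = absorption(w, eps1, eps2)
```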
The initial value in the absorption spectrum α(ω) lies near 3.06 eV for 0.75PMN–0.25PT solid solution and reaches its maximum value of 35.22 eV (figure <ref>, left plot). Thus, essential differences in optical parameters in the energy range from 3 to 35 eV make 0.75PMN–0.25PT solid solution very useful for applications in optical devices. § CONCLUSIONS Within the GGA(PBEsol)+U method, the structural, electronic, and optical properties of the 0.75PNM–PT compound were studied. The equilibrium lattice parameters and displacement from the high-symmetry perovskite position of Pb ions and B-cations were established. Using the Hubbard corrections method makes it possible to overcome the usual error in GGA calculations and obtain the band gap value corresponding to the experiment. Different bonding behavior between ions is established in 0.75PNM–PT compound: Ti–O and Nb–O bonds are strongly covalent; Pb–O bonds are weakly covalent, and Mg–O bonds are exclusively ionic. Based on the optical spectra of the real and imaginary parts of the dielectric function of 0.75PNM–PT solid solution, an analysis of interband transitions has been carried out. Accordingly, these results can be used by scientists to expand the targeted area of the PMN–PT compound. § ACKNOWLEDGEMENTS This work supported by the Ministry of Education and Science of Ukraine. 10 Luo2000 Luo H., Xu G., Xu H., Wang P., Yin Z., Jpn. J. Appl. Phys., 2000, 39, 5581, 10.1143/JJAP.39.5581. Kutnjak2006 Kutnjak Z., Petzelt J., Blinc R., Nature, 2006, 441, 956, 10.1038/nature04854. Alguero2006 Algueró M., Moure A., Pardo L., Holc J., Kosec M., Acta Mater., 2006, 54, 501–511, 10.1016/j.actamat.2005.09.020. Bokov2000 Bokov A. A., Ye Z. G., Appl. Phys. Lett., 2000, 77, 1888, 10.1063/1.1310629. Noheda2002 Noheda B., Cox D. E., Shirane G., Gao J., Ye Z. G., Phys. Rev. B, 2002, 66, 054104, 10.1103/PhysRevB.66.054104. Ye2002 Ye Z. G., Curr. Opin. Solid State Mater. Sci., 2002, 6, 35–44, 10.1016/S1359-0286(02)00019-0. Semak2023 Semak S., Kapustianyk V., Eliyashevskyy Yu., Bovgyra O., Kovalenko M., Mostovoi U., Doudin B., Kundys B., J. Phys.: Condens. Matter, 2023, 35, 094001, 10.1088/1361-648X/aca579. Tu2006 Tu C. S., Wang F. T., Chien R. R., Schmidt H. V., Hung C. M., Tseng C. T., Appl. Phys. Lett., 2006, 88, 032902, 10.1063/1.2165278. Liew2022 Liew W. H., Chen Y., Alexe M., Yao K., Small, 2022, 18, 2106275, 10.1002/smll.202106275. Makh2019 Makhort A. S., Schmerber G., Kundys B., Mater. Res. Express, 2019, 6, 066313, 10.1088/2053-1591/ab0758. Makh2018 Makhort A. S., Chevrier F., Kundys D., Doudin B., Kundys B., Phys. Rev. Mater., 2018, 2, 012401, 10.1103/PhysRevMaterials.2.012401. Bokov2002 Bokov A. A., Ye Z. G., Phys. Rev. B, 2002, 66, 094112, 10.1103/PhysRevB.66.094112. Shvar2013 Shvartsman V. V., Kholkin A. L., Raevski I. P., Raevskaya S. I., Savenko F. I., Emelyanov  A. S., J. Appl. Phys., 2013, 113, 187208, 10.1063/1.4801964. Li2019 Li J., Yin R., Su X., Wu H. H., Li J., Qin S., Sun S., Chen J., Su Y., Qiao L., Guo D., Bai Y., Acta Mater., 2020, 182, 250–256, 10.1016/j.actamat.2019.11.017. Li2021 Li J., Li J., Wu H. H., Zhou O., Chen J., Lookman T., Su Y., Qiao L., Bai Y., ACS Appl. Mater. Interfaces, 2021, 13, 38467–38476, 10.1021/acsami.1c07714. Tan2018 Tan T., Takenaka H., Xu C., Duan W., Grinberg I., Rappe A. M., Phys. Rev. B, 2018, 97, 174101, 10.1103/PhysRevB.97.174101. Li2020 Li C., Xu B., Lin D., Zhang S., Bellaiche L., Shrout T. R., Li F., Phys. Rev. B, 2020, 101, 140102(R), 10.1103/PhysRevB.101.140102. Grin2004 Grinberg I., Rappe A. 
M., Phys. Rev. B, 2004, 70, 220101(R), 10.1103/PhysRevB.70.220101. Takenaka2014 Takenaka H., Grinberg I., Shin Y. H., Rappe A. M., Ferroelectrics, 2014, 469, 1–13, 10.1080/00150193.2014.948341. Grin2005 Grinberg I., Suchomel M. R., Davies P. K., Rappe A. M., J. Appl. Phys., 2005, 98, 094111, 10.1063/1.2128049 Grinb2004 Grinberg I., Cooper V. R., Rappe A. M., Phys. Rev. B, 2004, 69, 144118, 10.1103/PhysRevB.69.144118. Clark2005 Clark S. J., Segall M. D., Pickard C. J., Hasnip P. J., Probert M. I. J., Refson K., Payne M. C., Z. Kristallogr., 2005, 220, 567–570, 10.1524/zkri.220.5.567.65075. Bovgyra2015 Bovgyra O. V., Kovalenko M. V., In: Proceedings of the Conference “2015 International Young Scientists Forum on Applied Physics” (Dnipropetrovsk, 2015), IEEE, New York, 2015, 1–4, 10.1109/YSF.2015.7333157. Bovgyra2023 Bovgyra O., Kozachenko O., Kovalenko M., Kapustianyk V., Appl. Nanosci., 2023, 13, 5003–5010, 10.1007/s13204-022-02662-9. Bovgyra2016 Bovgyra O. V., Kovalenko M. V., J. Nano- Electron. Phys., 2016, 8, 02031, 10.21272/jnep.8(2).02031. Kapus2022 Kapustianyk V., Semak S., Chornii Yu., Bovgyra O., Kovalenko M., Physica B, 2022, 639, 413929, 10.1016/j.physb.2022.413929. Davies2000 Davies P. K., Akbas M. A., J. Phys. Chem. Solids, 2000, 61, 159–166, 10.1016/S0022-3697(99)00275-9. Sepl2011 Sepliarsky M., Cohen R. E., J. Phys.: Condens. Matter, 2011, 23, 435902, 10.1088/0953-8984/23/43/435902. Makh2022 Makhort A., Gumeniuk R., Dayen J. F., Dunne P., Burkhardt U., Viret M., Doudin B., Kundys B., Adv. Opt. Mater., 2022, 10, 2102353, 10.1002/adom.202102353. Grinb2007 Grinberg I., Rappe A. M., Phase Transitions, 2007, 80, 351–368, 10.1080/01411590701228505. Zhang2017 Zhang Y., Sun J., Perdew J. P., Wu X., Phys. Rev. B, 2017, 96, 035143, 10.1103/PhysRevB.96.035143. Wan2004 Wan X., Chan H. L. W., Choy C. L., Zhao X., Luo H., J. Appl. Phys., 2004, 96, 1387, 10.1063/1.1767287. Derk2023 Derkaoui I., Achehboune M., Eglitis R. I., Popov A. I., Rezzouk A., Materials, 2023, 16, 4302, 10.3390/ma16124302. Yang2006 Yang K., Wang C. L., Li J. C., Integr. Ferroelectr., 2006, 78, 113–117, 10.1080/10584580600660033. Park2010 Park S. G., Magyari-Köpe B., Nishi Y., Phys. Rev. B, 2010, 82, 115109, 10.1103/PhysRevB.82.115109. Kovalenko2021 Kovalenko M., Bovgyra O., Franiv A., Dzikovskyi V., Mater. Today: Proc., 2021, 35, 604–608, 10.1016/j.matpr.2019.11.274. Bovgyra2019 Bovgyra O., Kovalenko M., Dzikovskyi V., Moroz M., In: Proceedings of the Conference “2019 IEEE 2nd Ukraine Conference on Electrical and Computer Engineering (UKRCON)” (Lviv, 2019), IEEE, 2019, 726–731, 10.1109/UKRCON.2019.8879928. Bovg2019 Bovgyra O., Kovalenko M., Bovhyra R., Dzikovskyi V., J. Phys. Stud., 2019, 23, 4301, 10.30970/jps.23.4301. Derkaoui2023 Derkaoui I., Achehboune M., Boukhoubza I., El Adnani Z., Rezzouk A., Comput. Mater. Sci., 2023, 217, 111913, 10.1016/j.commatsci.2022.111913. Shirane1956 Shirane G., Pepinsky R., Frazer B. C., Acta Crystallogr., 1956, 9, 131–140, 10.1107/S0365110X56000309. Ambr2006 Ambrosch-Draxl C., Sofo J. O., Comput. Phys. Commun., 2006, 175, 1–14, 10.1016/j.cpc.2006.03.005. Adachi2009 Adachi S., Properties of Semiconductor Alloys: Group-IV, III-V and II-VI Semiconductors, John Wiley & Sons, 2009. , Գ , , . 8, 79005 , § ABSTRACT =3000 Pb(Mg_1/3Nb_2/3)O_3-PbTiO_3  . , , Pb[(Mg_1/3Nb_2/3)_0.75Ti_0.25]O_3 . , GGA(PBEsol) . U GGA(PBEsol) Pb[(Mg_1/3Nb_2/3)_0.75Ti_0.25]O_3, . , Mg–O , Ti–O Nb–O. Pb[(Mg_1/3Nb_2/3)_0.75Ti_0.25]O_3. , Pb[(Mg_1/3Nb_2/3)_0.75Ti_0.25]O_3 U GGA . , ,
http://arxiv.org/abs/2406.18678v1
20240626182912
Few-shot Personalization of LLMs with Mis-aligned Responses
[ "Jaehyung Kim", "Yiming Yang" ]
cs.LG
[ "cs.LG", "cs.AI", "cs.CL" ]
§ ABSTRACT As the diversity of users increases, the capability of providing personalized responses by large language models (LLMs) has become increasingly important. Existing approaches have had only limited success in LLM personalization, due to the absence of personalized learning or the reliance on shared personal data. This paper proposes a new approach for few-shot personalization of LLMs with their mis-aligned responses (Fermi). Our key idea is to learn a set of personalized prompts for each user by progressively improving the prompts using LLMs, based on the user profile (e.g., demographic information) and a few examples of previous opinions. During an iterative process of prompt improvement, we incorporate the contexts of mis-aligned responses by LLMs, which are especially crucial for the effective personalization of LLMs. In addition, we develop an effective inference method to further leverage the context of the test query and the personalized prompts. Our experimental results demonstrate that Fermi significantly improves performance across various benchmarks, compared to the best-performing baselines.[The code will be available at <https://github.com/bbuing9/Fermi>.]

§ INTRODUCTION The recent development of large language models (LLMs) has significantly accelerated progress in various NLP tasks and yielded real-world applications used by millions of users, such as coding assistants and chatbots <cit.>. As the use of LLMs by diverse users in real-world applications increases, personalization of LLMs, i.e., steering LLMs' responses towards the unique needs or preferences of individual users, becomes progressively important <cit.>. However, recent studies show that LLMs' responses are often biased toward certain groups but not suited for other diverse groups of users, and such biases cannot be fixed by providing simple instructions <cit.>. To tackle this problem, methods to steer the responses of LLMs have recently been explored, and they can be roughly divided into two categories. One category is prompt engineering, which heuristically incorporates the user's information into the input prompts of LLMs <cit.>. The other category focuses on learning from other users' data <cit.>. However, both categories have limitations: prompt engineering for every user would be too costly and non-trivial, while the learning-based category relies on the unrealistic assumption that personal data can be shared without violating privacy considerations. This paper addresses those limitations by introducing a new approach, namely Few-shot Personalization of LLMs with mis-aligned responses (Fermi). Our high-level idea is to use an LLM to progressively improve its input prompts based on a few examples of previous user opinions and profiles (e.g., demographics) in an iterative process.
In addition to the current prompts' scores measured on given few-shot user opinions <cit.>, incorporates the mis-aligned responses (i.e., LLM’s responses with those prompts, which are inconsistent with given user opinions) as additional context. The contexts of mis-aligned responses include useful learning signals to update prompts such as the types of wrong predictions with the current prompts (see the empirical evidence in Section <ref>). Specifically, the iterative process of consists of three steps: (1) scoring the initial or current prompts with LLM, (2) updating the memory with high-scored prompts in the form of <prompt, score, context> triplets, and (3) generating new improved prompts with LLM based on the updated memory. In addition, we propose Retrieval-or-Prompt, a method to improve the inference on a given test query. Retrieval-or-Prompt selectively uses one of the personalized prompts obtained from the optimization, based on the context of the test query. An overview of is presented in Figure <ref>. We demonstrate the effectiveness of for few-shot personalization of LLMs, through extensive evaluations on various tasks including question-answering (QA), classification, and regression. For example, we observe that exhibited 6.8% and 4.1% average accuracy improvements on two multiple-choice QA datasets, constructed to evaluate the personalization of LLMs, compared to the previous state-of-the-art heuristic and optimization approaches, respectively. We also found that the personalized prompts produced with one LLM are also effective on other LLMs, including both API-based and open-sourced ones, which is crucial for efficient deployment in practice. In addition, our in-depth analyses reveal why is more effective than other prompting methods and what are the important features of prompts for effective personalization of LLMs. We hope our work provides useful insights for the research on LLM personalization, which becomes increasingly emerging and important for the future success of LLMs in real-world applications. § RELATED WORKS Few-shot personalization of LLMs. Few-shot personalization of LLM is to align LLM's responses to a specific user with a limited number of user information such as user profile (e.g., demographic information) or opinions (e.g., previous responses to questions by user). To this end, one line of prior works has explored how to input given user information into LLM in a heuristical manner, i.e., prompt engineering; for example, <cit.> designs three different templates of input prompt. <cit.> leverages the retrieval system <cit.> to use the given user opinions selectively. <cit.> shows that using both user profile and opinions is more effective. On the other hand, another line of prior works has proposed learning from other user's data; <cit.> selects the relevant users using collaborative filtering, then learns the soft-prompt <cit.> from the augmented training data from these users' data. <cit.> proposes to train an independent transformer module via meta-learning on several users' data. However, both approaches have their limitations; prompt engineering incurs the cost of designing the prompt, and could be limited to fully utilizing the user information due to the absence of learning. The learning-based one necessitates other users' data which is hard to obtain in real-world, due to privacy issues. Therefore, we propose to only learn from target user's information and find the optimized (i.e., personalized) prompt for that user. Prompt optimization with LLM. 
As the prior works for prompt-tuning, relying on the gradient-based update <cit.>, become inapplicable to the recent API-based LLM due to their black-box nature, other approaches have been recently explored for gradient-free prompt optimization, such as a progressive improvement using heuristic rules or LLMs <cit.>. For example, <cit.> receives text feedback on how to update the prompts by instructing LLM. Also, after generating initial prompts with LLMs, <cit.> generates a semantically similar variant of the prompts with the highest accuracies. <cit.> iterates evaluation and generation of prompts with two LLMs, to solve the black-box optimization such as prompt optimization; <cit.> incorporates the past generated prompts with their scores to enable the LLM for the optimization to construct new improved prompts. However, only providing the scores on training examples is insufficient to optimize the prompt for few-shot personalization of LLMs, as the context with mis-aligned responses such as the types or patterns within recursively wrong predictions can't be captured in scores. Therefore, we propose an efficient way to incorporate such context during the optimization, along with an additional method to improve the inference by considering the context of the given test query. § : FEW-SHOT PERSONALIZATION OF LLMS WITH MIS-ALIGNED RESPONSES In this section, we present our framework proposed for Few-shot Personalization of LLMs from mis-aligned responses (). We first present our problem setup in Section <ref>. Then, in Section <ref>, we present our core component that optimizes the input prompt with a given user information, by using LLM as a black-box optimizer along with the additional contexts from mis-aligned responses. Lastly, we introduce an efficient inference scheme after optimizing prompts with , by utilizing the context of a test query (Section <ref>). §.§ Problem description We first describe the problem setup of our interest under a question-answering (QA) scenario. Our goal is to steer LLM for a specific user using that user's information, and hence make LLM adaptively answer a given question depending on the user. Formally, let q denote the given test question and ℳ denote the LLM, respectively. Next, for user u, we assume two types of user information: U_ pro and U_ opi. U_ pro indicates explicit profile of u such as demographics information (e.g., region, sex, and age) or ideology (e.g., political affiliation). U_ opi indicates N few-shot previous opinions by u, which has the form of QA pairs, i.e., U_ opi={(q_i,a_i)}_i=1^N where q_i is a previously asked question and a_i is an opinion (answer) by the user. Then, for given test question q, our goal is to predict the answer a, which would be generated by user u, through LLM ℳ by using both U_ pro and U_ opi. The heuristic design of input prompt p to incorporate such user information has been previously explored <cit.>, i.e., prediction a is obtained by conditioning ℳ with p, which is constructed using U_ pro and U_ opi: a(p)=ℳ(q;p). However, heuristically designed prompts could be limited to fully exploit the given user information. For example, compared to using all opinions in U_ opi, appending fewer user opinions can yield better personalization accuracy for LLM <cit.>. Therefore, we tackle this limitation by finding personalized prompts that steer LLM to the user, through direct learning from given user information. 
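To make the setup concrete, a minimal sketch of the objects involved is given below; the names are mine, and `llm` stands for any text-completion callable rather than a specific API:

```python
from dataclasses import dataclass
from typing import Callable, List, Tuple

@dataclass
class UserInfo:
    profile: str                      # U_pro: demographics / ideology, as free text
    opinions: List[Tuple[str, str]]   # U_opi: N previous (question, answer) pairs

def predict(llm: Callable[[str], str], prompt: str, question: str) -> str:
    """a(p) = M(q; p): condition the prediction LLM on the (personalized) prompt p."""
    return llm(f"{prompt}\n\nQuestion: {question}\nAnswer:")
```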
§.§ Prompt optimization using mis-aligned responses by LLM To mitigate the difficulties from the large scale and black-box nature of recent LLMs, we instead optimize input prompts to learn from user information. It is motivated by the recent work <cit.> that uses two LLMs, ℳ and ℳ_ opt, to solve black-box optimization, where ℳ_ opt denotes another LLM used for the optimization. Specifically, our key idea is incorporating the contexts of mis-aligned responses (i.e., QAs in U_ opi that ℳ incorrectly predict with current prompts) during the optimization, instead of only using scores of the prompts (e.g., average accuracy of the prediction by ℳ on U_ opi). As the contexts of mis-aligned responses include useful learning signals such as types or patterns of common wrong predictions, they could be effective in learning how to improve the prompts. We first assume that there is an initial prompt set P^0={p^0}, e.g., heuristically designed prompt <cit.>. Then, at each iteration t, we conduct the following three steps: ∘ 1. Score Prompts: Evaluate prompt based on its accuracy in predicting user's previous answers. ∘ 2. Update Memory: Maintain a memory of the best-performing prompts along with their scores and the contexts of their mis-aligned responses. ∘ 3. Generate New Prompts: Generate new improved prompts with ℳ_ opt and the updated memory. ∘ Step 1: Score Prompts. We first calculate the score s_k of each prompt p_k∈P^t, by obtaining the predictions from ℳ under p_k and evaluating them using the user's previous answers: s_k = ∑_(q_i,a_i) ∼ U_ opis(a_i, a_i(p_k)) / N,  where  a_i(p_k)=ℳ(q_i;p_k). Here, s(·,·) is a specific metric to evaluate the prediction (e.g., accuracy). During this calculation of the score s_k of the prompt p_k, we also collect mis-aligned QA pairs U^k_ opi that the prediction of ℳ under p_k is not aligned with the user's answer: U^k_ opi = {(q_i,a_i)|s(a_i, a_i(p_k))< τ,  (q_i,a_i) ∈ U_ opi}, where τ is a threshold to judge the mis-alignment; for example, we set τ=0.5 when we use the correctness of prediction as the score s(·,·). ∘ Step 2: Update Memory. Next, we construct an optimization memory M^t, which is used for the input of ℳ_ opt to generate new improved prompts, by providing the information of well-performing prompts through the contexts of their mis-aligned responses. To be specific, the optimization memory M^t={(p_l, s_l, c_l)}_l=1^L is constructed by selecting top-L prompts among P^t and M^t-1 (where M^0=∅), according to their scores (Eq. <ref>). Here, we present the triplets in M^t in ascending order, i.e., s_l < s_l^' when l < l^', and provide the varied context c_l depending on l. Specifically, for l=1, we construct c_l by concatenating QAs and mis-aligned responses by ℳ under p_l on U^l_ opi: c_l = {(i,q_i,a_i,a_i(p_l))|(q_i,a_i)∈ U^l_ opi}. In Figure <ref>, the texts corresponding to c_1 are highlighted in blue. For other cases (i.e., l1), instead of the enumeration like c_1, we construct the context c_l with (i) the indices of common mis-aligned QA pairs between p_l and p_1, and (ii) the number of newly mis-aligned QAs by p_l compared to p_1 (see the green texts in Figure <ref> for an example). Through the presented indices in c_l, ℳ_ opt can directly access the mis-aligned QA pairs by referring c_1, and one can avoid unnecessary complexity of c_l and cost from the long input to ℳ_ opt. Additionally, the number of newly mis-aligned ones offers further insight into whether p_l has improved, which can't be captured by the common mis-aligned ones. 
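A compact sketch of Steps 1–2 under the notation above (my own simplification, not the authors' code): the score s is taken as exact-match accuracy with τ = 0.5, and the memory keeps the top-L triplets in ascending score order; the context strings c_l that distinguish l = 1 from l > 1 would be built from the returned mis-aligned records when serializing the memory into the optimizer prompt.

```python
from typing import Callable, List, Tuple

def score_prompt(predict_fn: Callable[[str, str], str],
                 prompt: str,
                 opinions: List[Tuple[str, str]],
                 tau: float = 0.5):
    """Step 1: average score over U_opi plus the mis-aligned (index, q, a, prediction) records."""
    hits, misaligned = 0, []
    for i, (q, a) in enumerate(opinions):
        pred = predict_fn(prompt, q)              # a_i(p_k) = M(q_i; p_k)
        s = float(pred.strip() == a.strip())      # s(., .) as exact-match accuracy
        if s >= tau:
            hits += 1
        else:
            misaligned.append((i, q, a, pred))
    return hits / len(opinions), misaligned

def update_memory(memory, candidates, L: int = 5):
    """Step 2: keep the top-L (prompt, score, mis-aligned context) triplets, ascending by score."""
    return sorted(memory + candidates, key=lambda t: t[1])[-L:]
```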
∘ Step 3: Generate New Prompts. With the updated memory M^t, we generate K new improved prompts P^t+1={p^ new_k}_k=1^K by prompting ℳ_ opt to generate the new and high-scored prompts: p^ new_k = ℳ_ opt(M^t;p_ opt), where p_ opt is a fixed input prompt for ℳ_ opt to generate new prompts, and we use a random sampling with temperature to generate diverse new prompts from ℳ_ opt. Figure <ref> presents the example of the overall input of ℳ_ opt to generate new prompts, which is constructed with M^t and p_ opt. Then, we go back to Step 1 with P^t+1 and iterate these 3 steps for T times. After that, we obtain the optimized (i.e., personalized) prompts P^T={p^T_k}_k=1^K for the user u. We remark that we also use the user's explicit profile U_ pro to construct the initial prompt set P^0 when it is available; thereby we fully utilize the given user information (see more details in Appendix <ref>). §.§ Effective inference by Retrieval-of-Prompt After T iterations of the optimization procedure, outputs K unique personalized prompts P^T={p^T_k}_k=1^K. Therefore, for a given test question q, one needs to determine which prompt to apply. Selecting the prompt with the highest score, i.e., k^*=max_k s_k (Eq. <ref>), would be a straight-forward way. However, our intuition is that better selection is possible if we utilize the context of the test question q as additional information. To this end, we propose to select the input prompt with the highest score on the subset of U_ opi, which only consists of the previous questions highly relevant to q. Formally, we first measure the relevance r between q and previous question q_i: R(q, U_ opi)={r(q, q_i)|q_i∈ U_ opi}. For the relevance r, we use the cosine similarity between the embeddings of questions, extracted by the sentence encoder <cit.>. Then, we select top-Ñ questions according to the calculated relevance and construct the subset U_ opi^q with those questions. Lastly, we choose the input prompt p^*=p^T_k^* based on the score on U_ opi^q, which were already calculated, and use the prediction a(p^*) by ℳ: k^*=max_ks^T_k( U_ opi^q), where s^T_k(U_ opi^q) = ∑_(q_i,a_i) ∼ U_ opi^q s(a_i, a_i(p^T_k)) / Ñ. Figure <ref> illustrates the overview of and Algorithm <ref> summarizes the overall procedure of . We note that a full version of the prompts and examples of personalized prompts are presented in Appendixes <ref> and <ref>, respectively. § EXPERIMENTS In this section, we design our experiments to investigate the following questions: ∘ How does perform compare to other personalization methods? (Tables <ref> and <ref>) ∘ Is the optimized prompt with from one LLM transferable to different LLMs? (Table <ref>) ∘ What is the effect of each component in ? (Table <ref>) ∘ Why optimized prompt by is more effective than other prompts? (Table <ref>) §.§ Setups First, we describe our experimental setups. More details are presented in Appendix <ref>. Datasets. For the experiments, we first use two multiple-choice QA datasets proposed to measure the steerability of LLMs for specific users (or social groups): OpinionQA <cit.> and GlobalOpinionQA <cit.>. For OpinionQA, we use a subsampled split released by <cit.>, which consists of 10.5k and 15.8k training and test QA pairs across 525 users and 15 topics, respectively. For GlobalOpinionQA, since the dataset originally included the answer distribution by multiple respondents in the same country, we converted it to have a single answer by selecting the choice with the highest probability. 
It results in 920 training and 1,317 test QA pairs across 46 countries. We consider each country as a specific user. Next, we use two additional datasets, LaMP_ tag and LaMP_ rate, from a recent benchmark proposed for personalization of LLMs <cit.>. LaMP_ tag is a 15-way classification data where an input is a movie description and a label is a movie tag, and LaMP_ rate is a regression data where an input is a user review and a label is an integer rating (1-5). We construct both datasets by subsampling from their original validation split, which results in 1,000 training and 1,500 test QA pairs across 50 users for each dataset. On average across four datasets, for each user, 20 training QAs as previous opinions and specific profile are given, and then 30 test QAs are used to evaluate. For LaMP_ rate, we report mean absolute error (MAE), a commonly used metric for the regression. For others, we report average test accuracy (Acc). Baselines. We compare against extensive baselines as follows: (1) Uniform: expected performance when the prediction is made uniformly at random. (2) Vanilla: answers the question with LLMs without any user information. (3) Profile: constructing prompt using all available user profiles <cit.> such as demographics or nationality. (4) Few-shot: retrieving relevant previous questions and opinions, then append them to the prompt <cit.>. Following <cit.>, we consider BM25 <cit.> and Contriever <cit.> for the retriever models. The number of retrieved profiles is determined among {3, 8, all} with validation performance. (5) All Info: using both explicit profiles and retrieved previous QAs to construct prompt <cit.>. We use the retrieval with the best performance in Few-shot.[In the case of OpinionQA, we additionally consider the retrieved indices originally included by <cit.>.] (6) Optimization by PROmpting (OPRO; <cit.>): optimizing input prompt using both user profiles and previous opinions using LLMs. Here, all of the previous opinions are utilized during the optimization. In the experiments, the prompt with the best training score is selected for the test. Implementation details. We use three recent state-of-the-art LLMs for the prediction LLM ℳ for the experiments: ChatGPT () <cit.>, GPT-4 () <cit.>, and LLaMA2-chat-70B <cit.>. For ℳ, we use a temperature of 0.0 when calling the API or greedy decoding for LLaMA, to remove the effect of random sampling. For the optimization LLM ℳ_ opt, we always use GPT-4, as the prompt optimization based on the memory (Eq. <ref>) requires complex reasoning capability (See Appendix <ref>), with a temperature of 1.0. For OPRO and , we use fixed values of K=4, L=5, and T=10. Also, with previous user opinions in U_ opi, 80% is used for optimization and 20% is used as few-shot demonstrations in p_ opt. To obtain sentence embeddings for Retrieval-of-Prompt, we use the sentence encoder with MPNet <cit.> showing the best performance.[Following the results in <https://www.sbert.net>] Also, we use a fixed Ñ=3 for Retrieval-of-Prompt. §.§ Main results Table <ref> summarizes the experimental results on two different multiple-choice QA datasets, under ChatGPT. First, it is observed that augmenting the user information into the input prompt is effective in improving the accuracies of LLMs, but the effectiveness could be varied. For example, retrieving relevant user opinions is more effective than using the user profile for OpinionQA (49.8% vs. 48.1%), but it's vice versa in GlobalOpinionQA (61.2% vs. 66.1%). 
It is due to the difference between datasets, as each user is asked multiple questions on the same topic in OpinionQA while GlobalOpinionQA asks the broader topics; this result also reveals the necessity of the learning-based prompt optimization approach. From the results of OPRO and , one can observe that the optimization-based approach is actually effective, and the proposed method significantly improves it. To be specific, exhibits 6.75% average accuracy improvement compared to the previous prompting method. Furthermore, compared to the existing optimization method, exhibits 4.05% accuracy improvement in the average. In Figure <ref>, we additionally present detailed results on OpinionQA, a topic-wise accuracy from four representative baselines selected based on average accuracy. Here, consistently shows better performance than other baselines across all topics, which further demonstrates the effectiveness of for the personalization of LLMs. Next, Table <ref> summarizes the experimental results on LaMP_ tag (classification) and LaMP_ rate (regression), under ChatGPT. We note that these datasets do not include explicit user profiles; hence, we exclude both Profile and All Info for the baselines. Here, it is noteworthy that the effectiveness of OPRO is significantly degraded, as the given task becomes more challenging to solve (e.g., the average number of answer choices: 3.96 for GlobalOpinionQA vs. 15 for LaMP_ tag). Nevertheless, is consistently effective and outperforms the other baselines; for example, exhibits 4.42% and 5.56% relative improvement for both datasets, respectively. §.§ Analyses with In this section, we provide additional analyses of with the experiments on GlobalOpinionQA. We denote that more analyses are also presented in Appendix <ref>. Transferability of the optimized prompt. Here, we provide additional experiments to verify the transferability of the learned prompt with our method. To be specific, we first save the optimized prompts under ChatGPT as LLM for evaluation (Eq. <ref>), which are used in Table <ref>. Then, we directly apply these prompts to two different types of LLMs (LLaMA-2-chat-70B and GPT-4), without additional optimization as same as applying heuristically designed prompts. From Table <ref>, one can observe that the transferred prompts from significantly outperform the baseline prompting methods on both LLMs; for example, it exhibits 3.4% and 7.1% accuracy improvement compared to the best-performing baselines for each LLM, respectively. We remark that the prompts from OPRO are even less effective than the existing baseline, which further shows the advantages of in learning the well-generalized personalized prompt. Also, the effectiveness on LLaMA-2 demonstrates that our method is also applicable to open-sourced LLMs, not only for black-box API LLMs. [13]r0.48 0.45 Ablation study of . Test accuracy of ChatGPT on GlobalOpinionQA with different configurations of the proposed components in . Methods Add_ Mis Add_ Num RoP Acc OPRO 55 55 55 71.1 51 55 55 73.7 51 51 55 74.2 51 51 51 74.8 Ablation study. To validate the effectiveness of the proposed component of in Section <ref>, we perform the ablation experiments by decomposing our framework into three different components: (1) including QAs that have mis-alinged responses with the initial presentation and referring via common indices (Add_ Mis), (2) noting the number of QAs with new mis-aligned responses (Add_ Num), and (3) Retrieval-of-Prompt for a test query (RoP). 
As shown in Table <ref>, all components progressively improve the few-shot personalization of LLMs. In particular, efficiently providing the context of mis-aligned QAs during the optimization is the most crucial factor for the improvement. Next, providing the number of new mis-aligned QAs yields an additional improvement, as it conveys information about the effectiveness of the given prompt that is not captured by the commonly mis-aligned QAs. Lastly, for a test query, retrieving the most relevant prompt is more effective than selecting the prompt with the highest training score, as it successfully utilizes the context of the test query.

Features of good input prompts for personalization. In Table <ref>, we further conduct experiments to answer the following question: what features make good personalized prompts for LLMs? First, we claim that the relevance of the prompt to the test query is crucial; for example, Few-shot_top3, Few-shot_all, and Few-shot_bott3 are different prompting methods that retrieve the 3 most relevant, all 20, and the 3 most irrelevant previous opinions, respectively. Here, it is observable that test accuracy largely degrades as the portion of irrelevant opinions increases. Similarly, when we retrieve the most irrelevant prompt (Fermi_irrel), i.e., taking the minimum in Eq. <ref>, the accuracy of Fermi also decreases. Second, providing the user information in a proper format for LLMs is important. As shown in Figure <ref>, the optimized prompt by Fermi is a detailed instruction consisting of multiple sentences that condense the lessons from the user opinions and the LLM's mis-aligned responses. In contrast, the previous prompt used to incorporate previous opinions is based on a rigid, form-like enumeration, which is harder for LLMs to follow. To verify the importance of the format, we convert the enumeration of all QAs (by Few-shot_all) into an instruction of multiple sentences (denoted by Few-shot_format), by prompting GPT-4 using the optimized prompts by Fermi as reference. Interestingly, this format conversion shows a significant improvement (56.3% → 66.4%), while it still underperforms Fermi.

[Figure: Optimization trajectory. Average training accuracies of OPRO and Fermi on GlobalOpinionQA, across optimization iterations (T=10).]

Lastly, effectively distilling the given user information is important. As shown in Table <ref>, a prompting method with higher accuracy on the previous user opinions U_opi (i.e., training accuracy) also has a higher test accuracy for that user, except for Few-shot_all, which can directly access U_opi. In this aspect, Fermi shows a clear advantage compared to the previous prompt optimization method; as shown in Figure <ref>, Fermi optimizes the prompt more effectively and achieves higher training accuracy than OPRO. These results indicate that finding a proper way to condense and incorporate the user information when designing input prompts is crucial, and Fermi achieves this by using the context of mis-aligned responses. Overall, designing personalized prompts satisfying these three properties (relevance to the test query, proper format, and effective distillation of user information) is challenging, but Fermi effectively accomplishes this goal.

§ CONCLUSION

In this paper, we propose Fermi, a simple yet effective framework for improving the few-shot personalization of LLMs.
Our key idea is to optimize the input prompt by learning from the user information; we propose an efficient way to incorporate the contexts of mis-aligned responses by LLMs during the optimization, and a retrieval approach to select the optimized prompt relevant to the test query. The effectiveness of Fermi is demonstrated by results on various personalization tasks and LLMs. We believe that our framework could be beneficial for improving the experience of personal usage of LLMs, which is becoming increasingly common and important. More discussion of the limitations and the broader impact of this work is presented in Appendix <ref>.

§ LIMITATIONS AND BROADER IMPACT

§.§ Limitations and future work

Although we have conducted comprehensive experiments on various NLP tasks with multiple LLMs, results and analyses on more datasets, tasks, and LLMs would likely draw a more decisive conclusion. For example, the tested benchmarks in Section <ref> are discriminative tasks, i.e., the correctness of the responses by the LLM can be directly evaluated using the ground-truth response from the user, and hence it is easy to find the mis-aligned responses. In contrast, evaluating the correctness of an LLM's response (i.e., finding a proper metric) is challenging for generation tasks and is being continuously discussed <cit.>. Nevertheless, we believe that our framework is still applicable to generation tasks if a proper metric is given. For instance, ROUGE-L <cit.> and MAUVE <cit.> are popular metrics for measuring the quality of machine-generated responses compared to ground-truth human-generated responses. As these metrics range between 0 and 1, one can set a specific threshold (i.e., τ∈ [0,1]) to determine the mis-aligned responses under these metrics (see Step 1 in Section <ref>). In addition, LLM-as-judge <cit.> is another emerging way to evaluate the correctness of generation; in this case, it is more straightforward to apply our framework, as it provides binary outputs, the same as discriminative tasks. However, finding a proper metric for each generation task is itself still a difficult problem, and hence we expect that this direction could be explored in the future. In addition, while we show that the proposed framework can find personalized prompts by learning from the given user information, we also observe that its success highly depends on the capability of the LLM used for the optimization (i.e., generating new prompts from the memory in Eq. <ref>), as shown in Figure <ref>. Since our approach requires a few iterations of optimization to provide high-quality personalized prompts, a certain amount of cost is inevitably required. However, as demonstrated in the experiments, the personalized prompts from our method transfer well to other LLMs that are not used during optimization (Table <ref>), can be continuously updated as the data grows through user interactions (Table <ref>), and are also reusable for converting previous prompts into a proper format for LLMs (Table <ref>). Therefore, we believe that our approach could be an even more efficient way to achieve personalization than heuristic prompt design, once the cost of the initial optimization has been paid.

§.§ Broader impact and ethical implications

We strongly believe that Fermi can provide a strong positive impact in real-world applications that require personalized responses for a given user, e.g., search engines or chatbots.
We expect that our framework would be especially beneficial for the users belonging to under-populated social groups, since LLMs are known to follow the knowledge or opinion of the major population within pre-trained data <cit.>. In contrast, there also exists some potential negative impacts. Since our framework needs to provide personal information to LLMs (mostly through API), it has a potential privacy risk when the provider of LLMs does not follow the safeguard and collects the given information. In addition, as our framework didn't filter out the resulting prompts separately, it can include the prompts that have socially negative impacts, e.g., jailbreak of LLMs <cit.>. We believe that the incorporation of an additional filtering step could be a solution to this problem <cit.>. § MORE ANALYSES WITH In this section, we provide more analyses of in addition to the analyses in Section <ref>. Importance of using strong LLM for optimization ℳ_ opt. As denoted in Section <ref>, we commonly use GPT-4 for LLM ℳ_ opt to generate new prompts from the optimization memory (Eq. <ref>) for all the experiments in Section <ref>. To validate this design choice, we conduct the experiments by substituting GPT-4 with ChatGPT ℳ_ opt in both OPRO and . Figure <ref> is the optimization trajectory in terms of training accuracy (i.e., average accuracy of the prediction by ℳ on previous user opinions). Here, one can observe that both OPRO and suffer in optimizing the prompt when we use ChatGPT as ℳ_ opt, similar to the previous observation <cit.>; it reveals that generating the improved prompts from the optimization memory with previous prompts, scores, and contexts requires complex reasoning capability. Therefore, using a strong LLM such as GPT-4 is necessary. Optimization with stronger LLM for evaluation ℳ. Next, to explore the compatibility of with different configurations of two LLMs during the optimization, we conduct the additional experiments by substituting evaluating LLM ℳ to GPT-4 from ChatGPT; namely, two LLMs ℳ and ℳ_ opt for evaluating and generating are GPT-4. The results on GlobalOpinionQA are presented in Table <ref>. It is observable that one can find further improved personalized prompts in terms of test accuracy, when using stronger LLM ℳ for evaluating (Eq. <ref>). For example, compared to the use of personalized prompts optimized by ChatGPT as ℳ (^*), the optimization only using GPT-4 exhibits 1.9% additional test accuracy improvement. This result clearly shows that the proposed is compatible with different types and capacities of evaluating LLMs. Importance of initial prompts in . For the experiments, we used a fixed initial prompt template across all datasets in our experiments, that maximally incorporates the given user profiles, as it has proven effective in prior studies <cit.>, as described in Section <ref> and Appendix <ref>. Nevertheless, to further provide insights about the impact of initial prompt templates on , we conduct additional experiments by varying the initial prompt set P^0={p_0}. To be specific, on GlobalOpinionQA dataset, we exclude the user profiles for the construction of the initial prompt unlike the original (in Table <ref>), and use the prompt of Vanilla for the initialization. We denote this version as _ vanilla. 
The results, in comparison with the other methods, are shown in Table <ref>, where Fermi (our method) consistently outperforms the baselines under both choices of prompt initialization, while the gain is larger with the better initialization that incorporates the user profile.

Continual optimization of prompts. In the previous experiments, we assumed that a fixed dataset U_opi of questions and the user's opinions is given. However, in the real world, users often interact frequently with LLMs, which means that the dataset could be continuously updated. Therefore, the iterative process of refining prompts might incur significant computational costs if it has to be conducted from scratch at certain intervals (e.g., whenever the amount of new data reaches a threshold). To mitigate this issue, we conduct additional experiments to show that the idea of continual prompt optimization <cit.> can be applied to Fermi, and hence such cost can be drastically reduced. Specifically, we first run Fermi using half of the previous questions and the user's responses U_opi (denoted by Fermi^half). We remark that the other parameters, such as the 10 optimization iterations, are kept the same. Then, with the entire U_opi, we continue running Fermi for a limited number of iterations, initializing the prompt pool with the previously optimized prompts from Fermi^half (i.e., substituting the initialization in line 116). We denote the results of this continual optimization with 1 and 5 iterations as Fermi^cont_iter1 and Fermi^cont_iter5, respectively. The results are presented in Table <ref>. First, it is notable that even with the reduced amount of data for the optimization, Fermi still outperforms the strong baselines that are based on heuristic prompt engineering (Profile) or on optimization by LLMs under full data (OPRO). However, one can also observe that the accuracy under full data is much better (74.8 vs. 73.0), which reveals that data quantity is still important for Fermi. Next, it is also observed that the prompts can be successfully optimized continually when new data is added. Here, we note that the previously optimized prompts in Fermi^half are also re-used for the pool of Retrieval-of-Prompt, to keep the knowledge from previous iterations.[Integrating new prompts into each user's retrieval pool adds minimal computational overhead for calculating their embeddings.] Remarkably, even with only 1 additional iteration of optimization, the accuracy is significantly increased (73.0 → 74.0). Also, when increasing the number of iterations to 5 (i.e., the same amount of computation as the original Fermi), the accuracy increases further and slightly outperforms the original optimization under full data. Such improvement might come from the enlarged pool of Retrieval-of-Prompt, which enables better exploitation of the previous knowledge. These results clearly show that the proposed framework remains effective in the more realistic scenario of continuously updated user data.

§ EXPERIMENTAL DETAILS

This section provides more details about the experimental setups in Section <ref>.

§.§ Datasets

First, we present more detailed descriptions of the used datasets: OpinionQA <cit.>, GlobalOpinionQA <cit.>, LaMP_tag, and LaMP_rate <cit.>. Dataset statistics are presented in Table <ref>. Also, an example from each dataset is presented in Figure <ref>. ∘ OpinionQA is a multiple-choice QA dataset originally constructed from a public opinion survey <cit.> to evaluate the alignment of LMs with 60 US demographic groups over various topics.
As OpinionQA includes the information of each respondent, this dataset has been also used to evaluate the personalization of LLMs <cit.> and we also adopt it. Specifically, we use a subsampled split released by <cit.>, which consists of 10.5k and 15.8k training and test QA pairs across 525 users and 15 topics; namely, each user has 20 training QA pairs and 30 test QA pairs for each topic, on average. Also, the average number of answer choices is 3.2. Then, we use training QA pairs as given previous opinions by user, and use test QA pairs to evaluate. In addition, for the experiments, we use all 12 types of user profiles included in the dataset: {Age, Citizenship, Region, Education, Income, Marital status, Political ideology, Political party, Race, Religion, Frequency of religious attendance, Gender}. ∘ GlobalOpinionQA is a multiple-choice QA dataset constructed from cross-national surveys to capture diverse opinions on global issues across different countries. Since the dataset originally included the answer distribution by multiple respondents in the same country, we converted it to have a single answer by selecting the choice with the highest probability, and treated each country as a specific user. To be specific, we set a threshold (0.8) and selectively use the data when its highest probability is higher than the threshold to guarantee the quality of the converted. It results in 920 training and 1,317 test QA pairs across 46 countries; namely, each user (country) has 20 training QA pairs and 28.6 test QA pairs for each topic, on average. Also, the average number of answer choices is 4.1. Then, we use training QA pairs as given previous opinions by user, and use test QA pairs to evaluate. Also, nationality becomes the only available profile. The full list of countries included in the dataset is presented in Table <ref>. Dataset could be downloaded from <https://huggingface.co/datasets/Anthropic/llm_global_opinions>. ∘ LaMP_ tag is is a 15-way classification data where an input is a movie description and a label is a corresponding movie tag among 15 categories: {Sci-fi, Based on a book, Comedy, Action, Twist ending, Dystopia, Dark comedy, Classic, Psychology, Fantasy, Romance, Thought-provoking, Social commentary, Violence, True story}. Since the original dataset is proposed to consider the scenario of fine-tuning LMs and hence it consists of a large number of examples, we construct our dataset by subsampling from its validation dataset to make it suitable to evaluate LLMs with inference. It results in 1,000 training and 1,500 test QA pairs across 50 users, respectively. ∘ LaMP_ rate is a regression data where an input is a user review and a label is an integer rating (1-5), i.e., 1 is mostly negative and 5 is mostly positive. Under the same motivation with LaMP_ tag, we construct our dataset by subsampling from its validation dataset, which results in 1,000 training and 1,500 test QA pairs across 50 users, respectively. LaMP benchmarks could be downloaded in <https://github.com/LaMP-Benchmark/LaMP>. §.§ Baselines In this section, we present the specific prompts used for the experiments in Section <ref>. Listing <ref>-<ref> are actually used prompts for Vanilla, Profile, Few-shot, and All Info, during the experiments on GlobalOpinionQA. Also, the prompt of OPRO used for the optimization is presented in Figure <ref>, which is the originally used one in <cit.>. 
While we're trying to adapt this prompt similar to ours in Figure <ref>, we observed that it degrades the performance of OPRO; for example, the average test accuracy is reduced to 70.7% from 71.1%. Therefore, we use the original prompt for all the experiments. We remark that each prompt is minimally adjusted to consider the difference between datasets. For example, as OpinionQA includes many available user profiles, we fully incorporate these with the prompt in Listing <ref>, following <cit.>. Also, we present the prompt of Vanilla method on LaMP_ rate dataset in Listing <ref>. In addition, we present the prompt used to convert the format of the input prompt by Few-shot (Table <ref>) in Listing <ref>. §.§ As denoted in Section <ref>, we need to provide an initial input prompt set P^0={p^0}. To this end, we use the heuristically design input prompts, which are presented in <ref>. Specifically, we adopt the prompts used for Profile tuned for each data, when the user profile U_ pro is available (both OpinionQA and GlobalOpinionQA). Since our framework only utilizes a given few-shot previous opinions during the optimization, this way of initial prompting naturally enables us to fully utilize all the user information. When the user profile is not available, we adopt the prompts used for Vaniall. In addition, we present a more detailed version of the prompt p_ opt used to generate new input prompts with ℳ_ opt in Figure <ref>. We remark that p_ opt is minimally adjusted across dataset, to match the different task and user information of each dataset. [!ht] [fontsize=, frame=single, breaklines]python f”' Choose the proper answer to the given question among the given answer choices. Your answer should be a single alphabet among given answer choices: Question: question Answer choices: answer choice Answer: ”' Input prompt used for Vanilla method on GlobalOpinionQA. [!ht] [fontsize=, frame=single, breaklines]python f”' Choose the proper answer to the given question among the given answer choices, as if you currently reside in user profile. Your answer should be a single alphabet among given answer choices: Question: question Answer choices: answer choice Answer: ”' Input prompt used for Profile method on GlobalOpinionQA. [!ht] [fontsize=, frame=single, breaklines]python f”' [1]. Question: question of 1st retrieval among previous opinions Answer choices: answer choice of 1st retrieval among previous opinions Answer: answer of 1st retrieval among previous opinions ... [N]. Question: question of Nth retrieval among previous opinions Answer choices: answer choice of Nth retrieval among previous opinions Answer: answer of Nth retrieval among previous opinions Based on the above previous questions and answers, choose the proper answer to the given question among the given answer choices. Your answer should be a single alphabet among given answer choices: Question: question Answer choices: answer choice Answer: ”' Input prompt used for Few-shot method. [!ht] [fontsize=, frame=single, breaklines]python f”' [1]. Question: question of 1st retrieval among previous opinions Answer choices: answer choice of 1st retrieval among previous opinions Answer: answer of 1st retrieval among previous opinions ... [N]. 
Question: question of Nth retrieval among previous opinions Answer choices: answer choice of Nth retrieval among previous opinions Answer: answer of Nth retrieval among previous opinions Based on the above previous questions and answers, choose the proper answer to the given question among the given answer choices, as if you currently reside in explicit_profile. Your answer should be a single alphabet among given answer choices: Question: question Answer choices: answer choice Answer: ”' Input prompt used for All Info method. [!ht] [fontsize=, frame=single, breaklines]python f”' A person can be described as follows: Age: age in user profile Citizenship in America: citizenship in America in user profile Region: region in user profile Education: education in user profile Income: income in user profile Marital status: marital status in user profile Political ideology: political ideology in user profile Political party: political party in user profile Race: race in user profile Religion: religion in user profile Frequency of religious attendance: frequency of religious attendance in user profile Gender: gender in user profile Based on the demographic information, choose the proper answer to the given question among the given answer choices. Your answer should be a single alphabet among given answer choices: Question: question Answer choices: answer choice Answer: ”' Input prompt used for Profile method on OpinionQA. [!ht] [fontsize=, frame=single, breaklines]python f”' Answer to the given question. Just answer with 1, 2, 3, 4, or 5 without further explanation: Question: question Answer choices: answer choice Answer: ”' Input prompt used for Vanilla method on LaMP_ rate. [!ht] [fontsize=, frame=single, breaklines]python f”' The followings are two different prompts used to answer the question. [Input prompt]: prompt by Few-shot [Target prompt]: prompt optimized by Fermi You need to convert the input prompt to the format of the target prompt while preserving the original contexts in the input prompt. Converted prompt: ”' Prompt used to convert the format of input prompt by Few-shot to be instruction with multiple sentences. § ADDITIONAL QUANTITATIVE RESULTS In this section, we provide additional quantitative results that can't be presented in the main draft due to the limited space. First, in Table <ref>, we present the average and standard deviation of topic-wise accuracy, i.e., the average and standard deviation are calculated across 35 users where each user receives 30 test questions in the same topic. Next, we present the test performance of Few-shot method in Section <ref>, under different numbers of retrieved opinions. Lastly, we present the test performance under a different number of considered training questions Ñ (Eq. <ref>). As one can see in Table <ref>, Ñ=3 which is commonly used in our experiments shows consistent improvements in general, although the optimal values are different across the datasets. § MORE COMPARISON EXAMPLES BETWEEN PERSONALIZED PROMPTS In this section, we present more qualitative comparisons between the prompts from different methods for personalization of LLMs. To be specific, we present the specific test query from each data, and three corresponding prompts from the heuristic design, OPRO, and . Figures <ref>-<ref> are the comparison results on four datasets used in Section <ref>. Somewhat interestingly, one can observe that the personalized prompts by exhibit non-trivial incorporation of user information. 
In addition, we present examples of format-converted versions of the few-shot prompting of previous user opinions (i.e., Few-shot_format in Table <ref>) in Figures <ref> and <ref>. Here, one can observe that the converted prompts have a similar form to the personalized prompts produced by Fermi, which is more natural for LLMs to understand and follow, and hence the conversion significantly improves the performance by up to 10.1%, as shown in Table <ref>. A minimal sketch of this conversion step is given below.
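The sketch below shows how such a format conversion could be scripted around the conversion prompt reproduced in the Baselines listings; it is an illustration under stated assumptions, not the authors' code. In particular, call_llm is a placeholder for whichever chat-completion API is used (GPT-4 in the paper), and fermi_reference_prompt is any Fermi-optimized prompt used as the target format.

CONVERSION_TEMPLATE = (
    "The followings are two different prompts used to answer the question.\n"
    "[Input prompt]: {few_shot_prompt}\n"
    "[Target prompt]: {fermi_prompt}\n"
    "You need to convert the input prompt to the format of the target prompt "
    "while preserving the original contexts in the input prompt.\n"
    "Converted prompt: "
)

def convert_to_instruction(few_shot_prompt, fermi_reference_prompt, call_llm):
    # Rewrite an enumerated few-shot prompt (Few-shot_all) into a multi-sentence
    # instruction (Few-shot_format), mimicking the format of a Fermi-optimized prompt.
    query = CONVERSION_TEMPLATE.format(few_shot_prompt=few_shot_prompt,
                                       fermi_prompt=fermi_reference_prompt)
    return call_llm(query)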
http://arxiv.org/abs/2406.18083v1
20240626054339
Measurements of $K_S^0$-$K_L^0$ asymmetries in the decays $Λ_c^+ \to pK_{L,S}^0$, $pK_{L,S}^0π^+π^-$ and $pK_{L,S}^0π^0$
[ "BESIII Collaboration", "M. Ablikim", "M. N. Achasov", "P. Adlarson", "O. Afedulidis", "X. C. Ai", "R. Aliberti", "A. Amoroso", "Q. An", "Y. Bai", "O. Bakina", "I. Balossino", "Y. Ban", "H. -R. Bao", "V. Batozskaya", "K. Begzsuren", "N. Berger", "M. Berlowski", "M. Bertani", "D. Bettoni", "F. Bianchi", "E. Bianco", "A. Bortone", "I. Boyko", "R. A. Briere", "A. Brueggemann", "H. Cai", "X. Cai", "A. Calcaterra", "G. F. Cao", "N. Cao", "S. A. Cetin", "J. F. Chang", "G. R. Che", "G. Chelkov", "C. Chen", "C. H. Chen", "Chao Chen", "G. Chen", "H. S. Chen", "H. Y. Chen", "M. L. Chen", "S. J. Chen", "S. L. Chen", "S. M. Chen", "T. Chen", "X. R. Chen", "X. T. Chen", "Y. B. Chen", "Y. Q. Chen", "Z. J. Chen", "Z. Y. Chen", "S. K. Choi", "G. Cibinetto", "F. Cossio", "J. J. Cui", "H. L. Dai", "J. P. Dai", "A. Dbeyssi", "R. E. de Boer", "D. Dedovich", "C. Q. Deng", "Z. Y. Deng", "A. Denig", "I. Denysenko", "M. Destefanis", "F. De Mori", "B. Ding", "X. X. Ding", "Y. Ding", "Y. Ding", "J. Dong", "L. Y. Dong", "M. Y. Dong", "X. Dong", "M. C. Du", "S. X. Du", "Y. Y. Duan", "Z. H. Duan", "P. Egorov", "Y. H. Fan", "J. Fang", "J. Fang", "S. S. Fang", "W. X. Fang", "Y. Fang", "Y. Q. Fang", "R. Farinelli", "L. Fava", "F. Feldbauer", "G. Felici", "C. Q. Feng", "J. H. Feng", "Y. T. Feng", "M. Fritsch", "C. D. Fu", "J. L. Fu", "Y. W. Fu", "H. Gao", "X. B. Gao", "Y. N. Gao", "Yang Gao", "S. Garbolino", "I. Garzia", "L. Ge", "P. T. Ge", "Z. W. Ge", "C. Geng", "E. M. Gersabeck", "A. Gilman", "K. Goetzen", "L. Gong", "W. X. Gong", "W. Gradl", "S. Gramigna", "M. Greco", "M. H. Gu", "Y. T. Gu", "C. Y. Guan", "A. Q. Guo", "L. B. Guo", "M. J. Guo", "R. P. Guo", "Y. P. Guo", "A. Guskov", "J. Gutierrez", "K. L. Han", "T. T. Han", "F. Hanisch", "X. Q. Hao", "F. A. Harris", "K. K. He", "K. L. He", "F. H. Heinsius", "C. H. Heinz", "Y. K. Heng", "C. Herold", "T. Holtmann", "P. C. Hong", "G. Y. Hou", "X. T. Hou", "Y. R. Hou", "Z. L. Hou", "B. Y. Hu", "H. M. Hu", "J. F. Hu", "S. L. Hu", "T. Hu", "Y. Hu", "G. S. Huang", "K. X. Huang", "L. Q. Huang", "X. T. Huang", "Y. P. Huang", "Y. S. Huang", "T. Hussain", "F. Hölzken", "N. Hüsken", "N. in der Wiesche", "J. Jackson", "S. Janchiv", "J. H. Jeong", "Q. Ji", "Q. P. Ji", "W. Ji", "X. B. Ji", "X. L. Ji", "Y. Y. Ji", "X. Q. Jia", "Z. K. Jia", "D. Jiang", "H. B. Jiang", "P. C. Jiang", "S. S. Jiang", "T. J. Jiang", "X. S. Jiang", "Y. Jiang", "J. B. Jiao", "J. K. Jiao", "Z. Jiao", "S. Jin", "Y. Jin", "M. Q. Jing", "X. M. Jing", "T. Johansson", "S. Kabana", "N. Kalantar-Nayestanaki", "X. L. Kang", "X. S. Kang", "M. Kavatsyuk", "B. C. Ke", "V. Khachatryan", "A. Khoukaz", "R. Kiuchi", "O. B. Kolcu", "B. Kopf", "M. Kuessner", "X. Kui", "N. Kumar", "A. Kupsc", "W. Kühn", "J. J. Lane", "L. Lavezzi", "T. T. Lei", "Z. H. Lei", "M. Lellmann", "T. Lenz", "C. Li", "C. Li", "C. H. Li", "Cheng Li", "D. M. Li", "F. Li", "G. Li", "H. B. Li", "H. J. Li", "H. N. Li", "Hui Li", "J. R. Li", "J. S. Li", "K. Li", "K. L. Li", "L. J. Li", "L. K. Li", "Lei Li", "M. H. Li", "P. R. Li", "Q. M. Li", "Q. X. Li", "R. Li", "S. X. Li", "T. Li", "W. D. Li", "W. G. Li", "X. Li", "X. H. Li", "X. L. Li", "X. Y. Li", "X. Z. Li", "Y. G. Li", "Z. J. Li", "Z. Y. Li", "C. Liang", "H. Liang", "H. Liang", "Y. F. Liang", "Y. T. Liang", "G. R. Liao", "Y. P. Liao", "J. Libby", "A. Limphirat", "C. C. Lin", "D. X. Lin", "T. Lin", "B. J. Liu", "B. X. Liu", "C. Liu", "C. X. Liu", "F. Liu", "F. H. Liu", "Feng Liu", "G. M. Liu", "H. Liu", "H. B. Liu", "H. H. Liu", "H. M. Liu", "Huihui Liu", "J. B. Liu", "J. Y. Liu", "K. Liu", "K. Y. 
Liu", "Ke Liu", "L. Liu", "L. C. Liu", "Lu Liu", "M. H. Liu", "P. L. Liu", "Q. Liu", "S. B. Liu", "T. Liu", "W. K. Liu", "W. M. Liu", "X. Liu", "X. Liu", "Y. Liu", "Y. Liu", "Y. B. Liu", "Z. A. Liu", "Z. D. Liu", "Z. Q. Liu", "X. C. Lou", "F. X. Lu", "H. J. Lu", "J. G. Lu", "X. L. Lu", "Y. Lu", "Y. P. Lu", "Z. H. Lu", "C. L. Luo", "J. R. Luo", "M. X. Luo", "T. Luo", "X. L. Luo", "X. R. Lyu", "Y. F. Lyu", "F. C. Ma", "H. Ma", "H. L. Ma", "J. L. Ma", "L. L. Ma", "L. R. Ma", "M. M. Ma", "Q. M. Ma", "R. Q. Ma", "T. Ma", "X. T. Ma", "X. Y. Ma", "Y. Ma", "Y. M. Ma", "F. E. Maas", "M. Maggiora", "S. Malde", "Y. J. Mao", "Z. P. Mao", "S. Marcello", "Z. X. Meng", "J. G. Messchendorp", "G. Mezzadri", "H. Miao", "T. J. Min", "R. E. Mitchell", "X. H. Mo", "B. Moses", "N. Yu. Muchnoi", "J. Muskalla", "Y. Nefedov", "F. Nerling", "L. S. Nie", "I. B. Nikolaev", "Z. Ning", "S. Nisar", "Q. L. Niu", "W. D. Niu", "Y. Niu", "S. L. Olsen", "Q. Ouyang", "S. Pacetti", "X. Pan", "Y. Pan", "A. Pathak", "Y. P. Pei", "M. Pelizaeus", "H. P. Peng", "Y. Y. Peng", "K. Peters", "J. L. Ping", "R. G. Ping", "S. Plura", "V. Prasad", "F. Z. Qi", "H. Qi", "H. R. Qi", "M. Qi", "T. Y. Qi", "S. Qian", "W. B. Qian", "C. F. Qiao", "X. K. Qiao", "J. J. Qin", "L. Q. Qin", "L. Y. Qin", "X. P. Qin", "X. S. Qin", "Z. H. Qin", "J. F. Qiu", "Z. H. Qu", "C. F. Redmer", "K. J. Ren", "A. Rivetti", "M. Rolo", "G. Rong", "Ch. Rosner", "S. N. Ruan", "N. Salone", "A. Sarantsev", "Y. Schelhaas", "K. Schoenning", "M. Scodeggio", "K. Y. Shan", "W. Shan", "X. Y. Shan", "Z. J. Shang", "J. F. Shangguan", "L. G. Shao", "M. Shao", "C. P. Shen", "H. F. Shen", "W. H. Shen", "X. Y. Shen", "B. A. Shi", "H. Shi", "H. C. Shi", "J. L. Shi", "J. Y. Shi", "Q. Q. Shi", "S. Y. Shi", "X. Shi", "J. J. Song", "T. Z. Song", "W. M. Song", "Y. J. Song", "Y. X. Song", "S. Sosio", "S. Spataro", "F. Stieler", "S. S Su", "Y. J. Su", "G. B. Sun", "G. X. Sun", "H. Sun", "H. K. Sun", "J. F. Sun", "K. Sun", "L. Sun", "S. S. Sun", "T. Sun", "W. Y. Sun", "Y. Sun", "Y. J. Sun", "Y. Z. Sun", "Z. Q. Sun", "Z. T. Sun", "C. J. Tang", "G. Y. Tang", "J. Tang", "M. Tang", "Y. A. Tang", "L. Y. Tao", "Q. T. Tao", "M. Tat", "J. X. Teng", "V. Thoren", "W. H. Tian", "Y. Tian", "Z. F. Tian", "I. Uman", "Y. Wan", "S. J. Wang", "B. Wang", "B. L. Wang", "Bo Wang", "D. Y. Wang", "F. Wang", "H. J. Wang", "J. J. Wang", "J. P. Wang", "K. Wang", "L. L. Wang", "M. Wang", "N. Y. Wang", "S. Wang", "S. Wang", "T. Wang", "T. J. Wang", "W. Wang", "W. Wang", "W. P. Wang", "X. Wang", "X. F. Wang", "X. J. Wang", "X. L. Wang", "X. N. Wang", "Y. Wang", "Y. D. Wang", "Y. F. Wang", "Y. L. Wang", "Y. N. Wang", "Y. Q. Wang", "Yaqian Wang", "Yi Wang", "Z. Wang", "Z. L. Wang", "Z. Y. Wang", "Ziyi Wang", "D. H. Wei", "F. Weidner", "S. P. Wen", "Y. R. Wen", "U. Wiedner", "G. Wilkinson", "M. Wolke", "L. Wollenberg", "C. Wu", "J. F. Wu", "L. H. Wu", "L. J. Wu", "X. Wu", "X. H. Wu", "Y. Wu", "Y. H. Wu", "Y. J. Wu", "Z. Wu", "L. Xia", "X. M. Xian", "B. H. Xiang", "T. Xiang", "D. Xiao", "G. Y. Xiao", "S. Y. Xiao", "Y. L. Xiao", "Z. J. Xiao", "C. Xie", "X. H. Xie", "Y. Xie", "Y. G. Xie", "Y. H. Xie", "Z. P. Xie", "T. Y. Xing", "C. F. Xu", "C. J. Xu", "G. F. Xu", "H. Y. Xu", "M. Xu", "Q. J. Xu", "Q. N. Xu", "W. Xu", "W. L. Xu", "X. P. Xu", "Y. Xu", "Y. C. Xu", "Z. S. Xu", "F. Yan", "L. Yan", "W. B. Yan", "W. C. Yan", "X. Q. Yan", "H. J. Yang", "H. L. Yang", "H. X. Yang", "T. Yang", "Y. Yang", "Y. F. Yang", "Y. F. Yang", "Y. X. Yang", "Z. W. Yang", "Z. P. Yao", "M. Ye", "M. H. Ye", "J. H. Yin", "Junhao Yin", "Z. Y. You", "B. X. 
Yu", "C. X. Yu", "G. Yu", "J. S. Yu", "M. C. Yu", "T. Yu", "X. D. Yu", "Y. C. Yu", "C. Z. Yuan", "J. Yuan", "J. Yuan", "L. Yuan", "S. C. Yuan", "Y. Yuan", "Z. Y. Yuan", "C. X. Yue", "A. A. Zafar", "F. R. Zeng", "S. H. Zeng", "X. Zeng", "Y. Zeng", "Y. J. Zeng", "Y. J. Zeng", "X. Y. Zhai", "Y. C. Zhai", "Y. H. Zhan", "A. Q. Zhang", "B. L. Zhang", "B. X. Zhang", "D. H. Zhang", "G. Y. Zhang", "H. Zhang", "H. Zhang", "H. C. Zhang", "H. H. Zhang", "H. H. Zhang", "H. Q. Zhang", "H. R. Zhang", "H. Y. Zhang", "J. Zhang", "J. Zhang", "J. J. Zhang", "J. L. Zhang", "J. Q. Zhang", "J. S. Zhang", "J. W. Zhang", "J. X. Zhang", "J. Y. Zhang", "J. Z. Zhang", "Jianyu Zhang", "L. M. Zhang", "Lei Zhang", "P. Zhang", "Q. Y. Zhang", "R. Y. Zhang", "S. H. Zhang", "Shulei Zhang", "X. D. Zhang", "X. M. Zhang", "X. Y Zhang", "X. Y. Zhang", "Y. Zhang", "Y. Zhang", "Y. T. Zhang", "Y. H. Zhang", "Y. M. Zhang", "Yan Zhang", "Z. D. Zhang", "Z. H. Zhang", "Z. L. Zhang", "Z. Y. Zhang", "Z. Y. Zhang", "Z. Z. Zhang", "G. Zhao", "J. Y. Zhao", "J. Z. Zhao", "L. Zhao", "Lei Zhao", "M. G. Zhao", "N. Zhao", "R. P. Zhao", "S. J. Zhao", "Y. B. Zhao", "Y. X. Zhao", "Z. G. Zhao", "A. Zhemchugov", "B. Zheng", "B. M. Zheng", "J. P. Zheng", "W. J. Zheng", "Y. H. Zheng", "B. Zhong", "X. Zhong", "H. Zhou", "J. Y. Zhou", "L. P. Zhou", "S. Zhou", "X. Zhou", "X. K. Zhou", "X. R. Zhou", "X. Y. Zhou", "Y. Z. Zhou", "Z. C. Zhou", "A. N. Zhu", "J. Zhu", "K. Zhu", "K. J. Zhu", "K. S. Zhu", "L. Zhu", "L. X. Zhu", "S. H. Zhu", "T. J. Zhu", "W. D. Zhu", "Y. C. Zhu", "Z. A. Zhu", "J. H. Zou", "J. Zu" ]
hep-ex
[ "hep-ex" ]
[ [ July 1, 2024 ================ § INTRODUCTION The lightest charmed baryon, , provides a unique environment for studying the behavior of light di-quarks in the presence of a heavy quark <cit.>. Its hadronic decays occur only through the weak interaction, and various theoretical models have been proposed. These include the covariant confined quark model <cit.>, the pole model <cit.>, current algebra <cit.>, and SU(3) flavor symmetry approaches <cit.>. Its decay falls into three categories: Cabibbo-favored (CF) decays, singly Cabibbo-suppressed decays, and doubly Cabibbo-suppressed (DCS) decays. The decay amplitudes of the CF and DCS modes are expected to be proportional to the products of the Cabibbo-Kobayashi-Maskawa elements |V_ud^*V_cs| and |V_us^*V_cd|, respectively. The ratio of their decays is approximately of the order of 𝒪(10^-3), resulting in a small branching fraction (BF) for the DCS decay and making it challenging to observe directly in experiments. In addition to direct measurements of DCS decays, the amplitudes of DCS modes can be probed using the - asymmetry in the decays into neutral kaons, which arises from the interference between CF and DCS amplitudes <cit.>. The - asymmetry has been studied in the decays of charmed D mesons, where the asymmetry is defined by R(D, K_S,L^0X) = (D→ X) - (D→ X)/(D→ X)+(D→ X), and X can be , η, η^', ω, ρ^0 or ϕ. A large asymmetry for R(D^0, K_S,L^0) was reported in a previous measurement by the CLEO experiment as R(D^0, K_S,L^0) = 0.108±0.025±0.024 <cit.>, where the first uncertainty is statistical and the second is systematic. The BESIII experiment reported measurements of the - asymmetries R(D^0, K_S,L^0X), where X=ϕ, η, ω <cit.>. Significant asymmetries were observed in D^0→η and D^0 →η^' decays with R(D^0,K_S,L^0η) = 0.080±0.022 and R(D^0,K_S,L^0η^')=0.108±0.035, respectively. In addition, this asymmetry has been investigated for the lightest charmed strange meson, and R(D_s^+, K_S,L^0K^+) was determined to be (-2.1±1.9±1.6)% <cit.>. However, such measurements have not been made for the decays of charmed baryons. Using flavor SU(3) asymmetry <cit.>, theoretical predictions <cit.> for - asymmetries have been made for charmed baryon two-body decays into a light baryon and a neutral kaon. Similar to Equation <ref>, the asymmetry of (→ X) and (→ X) in charmed baryon decays is defined as R(, K_S,L^0 X)=(→ X)-(→ X)/(→ X)+(→ X), where X is p, p or p. Equation <ref> can be further reduced as R(→ K^0_S,LX) ≃ -2r_fcosδ_f, where r_f and δ_f are the relative strength and phase between the DCS (→ K^0X) and CF (→K̅^0X) amplitudes, respectively. The parameter r_f is expected to be proportional to the ratio |V_cd^*V_us/V_cs^*V_ud| ∼λ^2 <cit.>. A non-zero asymmetry value indicates the presence of DCS processes. The asymmetry of → pK_S,L^0 is predicted to be in the range of (-0.010, 0.087) in Ref. <cit.>. The - asymmetry is a promising observable with which to search for the two-body DCS processes of charmed baryons. In this paper, we report the first measurements of the absolute BFs of → p, → p and → p based on annihilation data samples corresponding to a total integrated luminosity of 4.5 collected at the center-of-mass (c.m.) energies √(s) between 4.600 and 4.699 GeV. The luminosities are listed in Table <ref> <cit.>. Using the results of (→ p), (→ p), and (→ p) from the Particle Data Group (PDG) <cit.>, we present the - asymmetries R(, K_S,L^0 X), where X = p, p or p. 
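For reference, the uncertainty on each asymmetry follows from the two branching fractions by standard error propagation; the short derivation below is a textbook sketch under the assumption of uncorrelated uncertainties (the same assumption stated in the summary), not a formula quoted from the analysis. Writing $\mathcal{B}_S \equiv \mathcal{B}(\Lambda_c^+ \to K_S^0 X)$ and $\mathcal{B}_L \equiv \mathcal{B}(\Lambda_c^+ \to K_L^0 X)$,

R(\Lambda_c^+, K_{S,L}^0 X) = \frac{\mathcal{B}_S - \mathcal{B}_L}{\mathcal{B}_S + \mathcal{B}_L}, \qquad
\frac{\partial R}{\partial \mathcal{B}_S} = \frac{2\mathcal{B}_L}{(\mathcal{B}_S + \mathcal{B}_L)^2}, \qquad
\frac{\partial R}{\partial \mathcal{B}_L} = -\frac{2\mathcal{B}_S}{(\mathcal{B}_S + \mathcal{B}_L)^2},

so that, for uncorrelated uncertainties on the two branching fractions,

\sigma_R = \frac{2}{(\mathcal{B}_S + \mathcal{B}_L)^2} \sqrt{\mathcal{B}_L^2\,\sigma_{\mathcal{B}_S}^2 + \mathcal{B}_S^2\,\sigma_{\mathcal{B}_L}^2},

where $\sigma_{\mathcal{B}_S}$ and $\sigma_{\mathcal{B}_L}$ denote the uncertainties assigned to each branching fraction.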
Charge conjugate channels are implied throughout this paper, unless explicitly stated. § BESIII EXPERIMENT AND MONTE CARLO SIMULATION The =0BESIII detector <cit.> records symmetric collisions provided by the =0BEPCII storage ring <cit.>, which operates at c.m. energies ranging from 1.85 to 4.95 GeV, with a peak luminosity of 1.1 × 10^33 cm^-2s^-1 achieved at √(s) = 3.773 GeV. The =0BESIII detector has collected large data samples in this energy region <cit.>. The cylindrical core of the =0BESIII detector covers 93% of the full solid angle and consists of a helium-based multilayer drift chamber (MDC), a plastic scintillator time-of-flight system (TOF), and a CsI(Tl) electromagnetic calorimeter (EMC), which are all enclosed in a superconducting solenoidal magnet providing a 1.0 T magnetic field <cit.>. The solenoid is supported by an octagonal flux-return yoke with resistive plate counter muon identification modules interleaved with steel. The charged-particle momentum resolution at 1 is 0.5%, and the dE/dx resolution is 6% for electrons from Bhabha scattering. The EMC measures photon energies with a resolution of 2.5% (5%) at 1 in the barrel (end cap) region. The time resolution in the TOF barrel region is 68 ps, while that in the end cap region was initially 110 ps. The end cap TOF system was upgraded in 2015 using multi-gap resistive plate chamber technology, providing a time resolution of 60 ps <cit.>. Of the data used in this analysis, 87% was with the upgraded end cap TOF. Simulated samples generated with geant4-based <cit.> Monte Carlo (MC) software, which includes the geometric description of the BESIII detector and the detector response performance <cit.>, are used to determine detection efficiencies and to estimate potential background contributions. The simulation describes the beam energy spread and the initial state radiation (ISR) in the e^+e^- annihilations with the generator kkmc <cit.>. The inclusive MC samples, corresponding to about 40 times the number of events of the data samples, include the production of pairs, open charm processes, the ISR production of vector charmonium(-like) states, and the continuum processes incorporated in kkmc <cit.>. The known decay modes are modeled with evtgen <cit.> using BFs taken from the PDG <cit.>, and the remaining unknown charmonium decays are modeled with lundcharm <cit.>. Final state radiation from charged final state particles is incorporated using photos <cit.>. For the production of e^+e^- → events, the Born cross-section line shape from BESIII measurements is used <cit.>. Exclusive → signal MC samples are generated with decaying to twelve specific tag modes (as described in Section <ref>) and decaying to p, p and p. The angular distribution of the decay → p is modeled with decay asymmetry parameters obtained from Ref. <cit.>. For processes from → p and → p channels, signal models are tunned based on the data. Additional MC samples are generated to estimate contributions from peaking background processes, where decays into tag modes and decays into p, pη, p, and p, with and η decaying inclusively. Each tag mode of the exclusive MC samples is generated with the same number of events. § DATA ANALYSIS Taking advantage of the threshold production of the pair, the double-tag (DT) method <cit.> is employed to study → p, → p and → p, where is reconstructed by the missing-mass technique. A single-tag (ST) event is selected by tagging a baryon with one of the following twelve tag modes: , , , , , , , , , , , and . 
The ST event selection criteria, efficiencies, and yields are described in Ref. <cit.>. The signal decays → p, p, and p are reconstructed using the remaining charged tracks and photons recoiling against the ST candidates, and referred to as DT events. Charged tracks are required to be within |cosθ|<0.93, where θ is the polar angle defined with respect to the z-axis, which is the symmetry axis of the MDC. The distance of closest approach to the interaction point (IP) must be less than 10 cm along the z axis and less than 1 cm in the perpendicular plane. Particle identification (PID) for charged tracks combines measurements of the energy deposited in the MDC (dE/dx) and the flight time in the TOF to form a likelihood value ℒ(h) for each hadron (h) hypothesis, where h = p, K, or π. Charged tracks are identified as protons if the proton hypothesis has the highest likelihood (ℒ(p) > ℒ(K) and ℒ(p) > ℒ(π)), or as pions if ℒ(π) > ℒ(K) is satisfied. Photon candidates are reconstructed from showers that are not associated with any charged tracks in the EMC <cit.>. The deposited energy of each shower in the EMC is required to be greater than 25 MeV in the barrel region (|cosθ| < 0.80), and greater than 50 MeV in the end cap region (0.86 < |cosθ| < 0.92). The EMC time difference from the event start time is required to be less than 700 ns, to exclude electronic noise and showers unrelated to the events. The opening angle between each shower and p̅ must be greater than 20^∘, to suppress the background from annihilation of p̅ with the detector material. The candidates are reconstructed from photon pairs with invariant mass M(γγ) in the range 0.115 < M(γγ) < 0.150 . To improve momentum resolution and exclude background, a kinematic fit is performed to constrain M(γγ) to the known mass <cit.>, and candidates with fit quality χ^2 < 20 are retained for further analysis. The signal candidates of → p and → p are required to have only one charged track with opposite charge to the tagged satisfying the proton PID criteria. For → p decay, the candidate with the highest energy is selected. In the reconstruction of → p, events must have only three remaining charged tracks with correct charges and PID. Candidates with additional charged tracks, whose distances of closest approaches to the IP are within ±20 cm along the beam direction, are excluded. The presence of the is inferred by the kinematic variable , defined as ≡(-E_selected)^2/c^4 - | p⃗_-p⃗_selected|^2/c^2, where is the beam energy and E_selected (p⃗_selected) is the total measured energy (momentum) of the selected particles in the DT signal side, boosted into the c.m. system of . To improve the momentum resolution, the momentum of is determined by p⃗_≡ -p̂_√(^2/c^2 - m^2_c^2), where p̂_ is the direction of the tagged and m_ is the known baryon mass taken from the PDG <cit.>. For all three decays, the distributions are expected to have a peak around the known mass squared of  <cit.>. Based on studies of inclusive MC samples, the dominant background events for the signal mode → p are from processes with Λ→ p and →. They are rejected by vetoing events with M(p) (M()) invariant masses in the interval of 1.11 < M(p) < 1.12 (0.48 <M()< 0.52 ). The combinatorial backgrounds are suppressed by requiring the recoil mass of the proton M_recoil(p) ≡√(E_beam^2 - |p⃗_ - p⃗_p|^2) > 1.0 , which removes only about 3% of the signal. Here p⃗_p is the momentum of the proton. 
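As a numerical illustration of the kinematic variable defined above, the following minimal Python sketch (natural units with c = 1, energies and momenta in GeV, all inputs assumed to be in the e^+e^- centre-of-mass frame) computes the signal-side missing-mass squared from the beam energy, the tagged baryon direction, and the selected signal-side particles. The helper names and inputs are ours for illustration, not part of the BESIII analysis software.

import numpy as np

M_LAMBDA_C = 2.28646  # known Lambda_c^+ mass in GeV (PDG value), quoted here for illustration

def signal_lambdac_momentum(e_beam, p_tag_direction):
    # p(signal Lambda_c^+) = -unit(p_tag) * sqrt(E_beam^2 - m_Lambda_c^2): back-to-back
    # with the tagged anti-Lambda_c, with magnitude fixed by the beam energy.
    unit = np.asarray(p_tag_direction, dtype=float)
    unit = unit / np.linalg.norm(unit)
    return -unit * np.sqrt(e_beam**2 - M_LAMBDA_C**2)

def missing_mass_squared(e_beam, p_lambdac, selected_energies, selected_momenta):
    # M_miss^2 = (E_beam - E_selected)^2 - |p(Lambda_c^+) - p_selected|^2; for signal
    # events this peaks near the known K^0 mass squared (about 0.248 GeV^2).
    e_sel = float(np.sum(selected_energies))
    p_sel = np.sum(np.asarray(selected_momenta, dtype=float), axis=0)
    diff = np.asarray(p_lambdac, dtype=float) - p_sel
    return (e_beam - e_sel) ** 2 - float(np.dot(diff, diff))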
For the → p signal mode, background events of → p(→) and p are excluded by requiring M_recoil(p) > 0.65, which removes less than 1% of signal. Events within the range 1.17 < M(p) < 1.20 are discarded to suppress the background of the Σ^+ → p decay. To improve the momentum resolution, a six constraint (6C) kinematic fit is performed requiring total four-momentum conservation with respect to that of the initial collision and constraining both masses of the tagged and the signal to m_. The is treated as a missing particle, and its four-momentum and mass are free in the kinematic fit. The χ^2 of the kinematic fit for each signal mode is required to be less than the optimized value that maximizes the figure of merit S/√(S+B), where S and B are the numbers of signal and background events from MC simulations, scaled to the data luminosity. The optimized requirements are χ^2 < 60 for → p, χ^2 < 25 for → p, and χ^2 < 20 for → p. The resulting distributions of the DT events are shown in Figure <ref>, which combine all data samples at the seven c.m. energies. Signal events are indicated by the significant peaks around the mass squared. There are peaking backgrounds remaining from → p(→) and → pη(→γγ or 3), → p(→), and → p(→) in the corresponding signal modes. The peaking background events from → X decays N^Bkg_ X are determined by N^Bkg_ X=N^Data_DT X· w_ X, w_ X = ∑_is_i ·_i/_i· N^MC,i_DT X/∑_i_i/_i· N^MC,i_DT X, where i represents the tag mode, and N^Data_DT X denotes the data yields passing the DT selection criteria of → X. Here, the DT selection criteria of → X require a fully reconstructed from combinations, as described in Ref. <cit.>. N^Data_DT X is corrected by the factor w_ X, which is derived from the exclusive MC simulation samples of → X. N_DT X^MC,i and N_DT X^MC,i are the numbers of the X MC events that satisfy the DT selection criteria of → X and → X, respectively. _i and _i are the ST yields and ST efficiencies from Ref. <cit.>. A scale factor s_i is specified for each tag mode, and s_i is set to 2 if both the tag and signal modes are X. Otherwise, it is set to 1. For peaking background events from → pη, the contribution is evaluated based on the corresponding exclusive MC samples using N^Bkg_pη=(→ p η)· w_pη, w_pη =∑_i( _i/_i·N'^MC,i_p/N'^MC,i_tot), with (→ pη)=(1.41±0.11)×10^-3 <cit.>. N'^MC,i_p is the number of surviving DT events for the i-th tag mode, that satisfy the DT selection criteria of → p, and N'^MC,i_tot is the total number of MC events generated for the i-th tag mode. Table <ref> summarizes the contributions arising from each peaking background process. A simultaneous unbinned maximum-likelihood fit is performed on the distributions of the seven c.m. energies. The signal and peaking backgrounds are modeled by individual MC-simulated shapes convolved with Gaussian functions to account for differences between the data and MC simulations. The Gaussian means and widths are free parameters in the fit. The yields of the peaking background events are free with their mean and standard deviation values set to the results listed in Table <ref>. For the signal mode → p, a truth-matching method is employed to obtain the pure signal shape by comparing the two photons from the with their corresponding MC truth information. The opening angle θ_truth between the truth and the reconstructed photons is required to be less than 10^∘. The combinatorial background shape is taken from the inclusive MC samples, including non-signal and continuum hadron production events. 
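The χ² requirements quoted above come from a figure-of-merit optimization; the generic Python sketch below shows one way such a scan could be implemented, where the signal and background χ² arrays, the luminosity scale factors, and the candidate cut values are placeholders rather than the actual BESIII samples or settings.

import numpy as np

def optimize_chi2_cut(sig_chi2, bkg_chi2, sig_scale, bkg_scale, candidate_cuts):
    # Choose the chi^2 requirement that maximizes S/sqrt(S+B), where S and B are the
    # numbers of signal and background MC events passing the cut, scaled to the data luminosity.
    sig_chi2 = np.asarray(sig_chi2, dtype=float)
    bkg_chi2 = np.asarray(bkg_chi2, dtype=float)
    best_cut, best_fom = None, -np.inf
    for cut in candidate_cuts:
        s = sig_scale * np.count_nonzero(sig_chi2 < cut)
        b = bkg_scale * np.count_nonzero(bkg_chi2 < cut)
        if s + b <= 0:
            continue
        fom = s / np.sqrt(s + b)
        if fom > best_fom:
            best_cut, best_fom = cut, fom
    return best_cut, best_fom

# Example usage with placeholder scale factors and a coarse grid of cut values:
# best_cut, best_fom = optimize_chi2_cut(sig_chi2, bkg_chi2, sig_scale=0.1,
#                                        bkg_scale=0.025, candidate_cuts=range(5, 205, 5))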
The BFs of the decays → p, → p, and → p are shared variables for the seven c.m. energies in the simultaneous fit, determined by _sig=/·ε_avg·_int , where ε_avg = (∑_i _i·_i/_i)/ is the average detection efficiency for detecting signal modes in ST events and i represents the i-th ST tag mode. Table <ref> lists the ST events and the average detection efficiencies for each c.m. energy. and _i are the DT yields and corresponding efficiencies, respectively. _int is the intermediate BF of , (→γγ) = (98.823±0.034)% <cit.> for → p decay. Figure <ref> shows the results of fits to the distributions, combining all data samples. From these fits, the BFs are (→ p) = (1.67±0.06)%, (→ p) = (1.69±0.10)%, and (→ p) = (2.02±0.13)%, where the uncertainties are statistical only. The total DT signal yields from all c.m. energies are N^DT_p=1627±56, N^DT_p=648±39, and N^DT_p=652±41, for → p, → p, and → p, respectively. § SYSTEMATIC UNCERTAINTIES In the DT method, most of the systematic uncertainties associated with the ST selections cancel. The major sources of systematic uncertainties in the BFs measurements are described below and are reported relative to the measured BFs. * Tracking and PID efficiencies. The tracking and PID efficiencies of the charged protons and pions are studied using a control sample of → pp̅ <cit.>. The MC simulation samples are weighted by the efficiency ratio between data and MC as function of charged particle momentum and cosθ. The systematic uncertainties of tracking and PID are 0.5% and 0.1% for → p, 1.6% and 0.8% for → p, and 0.7% and 0.4% for → p, respectively. * No extra charged track requirement. The number of good charged tracks is required to be exactly one (three) for p and p (p) DT candidates in the recoil system of the tagged . The difference between data and MC simulation from this selection is studied using a control sample of → pK^+. The systematic uncertainty is 1.9%. * MC statistics. The exclusive MC simulation samples are used to obtain the ST and DT detection efficiencies and to estimate the peaking background events. The systematic uncertainties associated with the limited MC sample sizes are estimated to be 0.1%, 0.5%, and 0.5% for → p, → p, and → p, respectively. * ST yield. The systematic uncertainty arising from the total ST yield is assigned to be 0.2% <cit.>. * Kinematic fit. The model of the MC simulation is much simpler than the real detector performance, resulting in a difference between the data and MC simulation in the track parameters of the charged tracks <cit.>. The helix parameters of the charged tracks are corrected, and the BFs are re-evaluated with the updated MC simulation samples. The differences from the measured BFs are taken as the systematic uncertainties associated with the kinematic fit, which are 0.5%, 1.0%, and 0.5% for → p, → p and → p, respectively. * Angle(γ,p̅) requirement. To estimate the systematic uncertainty of the Angle(γ, p̅) requirement, the difference between the data and MC simulation samples of this requirement is investigated from a control sample of ψ(3686) →π^+π^-, → p p̅π^0. The systematic uncertainty is 0.2% for → p. * reconstruction. The systematic uncertainty due to the reconstruction is determined using the control sample of → p p̅ <cit.>. The MC simulation samples are corrected depending on the momentum. The systematic uncertainty is determined to be 0.5%. * Truth-match method. 
The systematic uncertainty from the truth-match method is determined comparing the measured BFs with and without the truth-match requirements. The resulting systematic uncertainty is taken as 0.2%. * Signal model. For → p, the systematic uncertainty from the signal model is determined varying the decay asymmetry parameters within ± 1σ. The deviation from the measured BF is found to be negligible. For → p and → p, the signal models in the nominal analysis is tunned based on the data. The possible intermediate resonances are considered in the amplitude analysis, composed of Σ^*, Δ^*, N^*, K̅^* and ρ. The nominal amplitude models are then replaced by alternative ones with equivalent descriptions of the data. The alternative model of → p is selected by removing the insignificant intermediate resonances. For → p, the amplitude fit is not stable due to the limited statistics of data. An alternative model is chosen with a similar fit quality to the nominal. The systematic uncertainties are determined to be 1.1% and 0.1% for → p and → p, respectively. * Background shape. To investigate the systematic uncertainty from the background shape, the nominal background shape is replaced with a second-order Chebychev polynomial function in the simultaneous fit. The systematic uncertainties are 0.9%, 0.6%, and 0.4% for → p, → p and → p, respectively. * Fit bias. The systematic uncertainty from the simultaneous fit is studied with 5000 sets of toy MC samples, which are simulated with all parameters from the fit model fixed. The BFs obtained from the toy samples are fitted with a Gaussian function. The deviations between the Gaussian mean value and nominal BFs are assigned as systematic uncertainties. For the decay Λ_c^+ → p K_L π^0, the fit bias is found to be 0.27%, while for the other two signal modes it is negligible. Other sources of systematic uncertainties, such as the BF of →γγ, are neglected due to their negligible effects. Assuming that all sources of systematic uncertainties in the BFs measurements are uncorrelated, the quadratic sums of the different sources are considered as the total systematic uncertainties, which are 2.2%, 3.1%, and 2.3% for → p, → p, and → p, respectively. Table <ref> lists all the systematic uncertainties discussed above. § SUMMARY In summary, we report the BFs of → p, → p and → p for the first time, by analyzing annihilation data samples corresponding to an integrated luminosity of 4.5 collected at c.m. energies between 4.600 and 4.699 . The measured BFs of these decays are (→ p) = (1.67 ± 0.06 ± 0.04)%, (→ p) = (1.69 ± 0.10 ± 0. 05)%, and (→ p) = (2.02 ± 0.13 ± 0.05)%. Combining the BFs measurements in this work with the values of (→ X) <cit.>, the - asymmetries are determined, as summarized in Table <ref>. The uncertainties are derived through the standard error propagation procedure, assuming that the uncertainties of the estimated (→ X) and the quoted (→ X) are uncorrelated. Taking into account the uncertainties, no obvious asymmetry is observed in any of the three decays. The - asymmetry of → pK_S,L^0 R(, pK_S,L^0) = -0.025 ± 0.031 is compatible with the prediction of (-0.010, 0.087) based on SU(3) flavor symmetry <cit.>. Our measurements of the - asymmetries in charmed baryon decays offer the possibility to access the DCS processes involving neutral kaons and provide further constraints on their amplitudes. The BESIII Collaboration thanks the staff of BEPCII and the IHEP computing center for their strong support. 
This work is supported in part by National Key R&D Program of China under Contracts Nos. 2020YFA0406400, 2020YFA0406300, 2023YFA1606000; National Natural Science Foundation of China (NSFC) under Contracts Nos. 11635010, 11735014, 11935015, 11935016, 11935018, 12025502, 12035009, 12035013, 12061131003, 12192260, 12192261, 12192262, 12192263, 12192264, 12192265, 12221005, 12225509, 12235017, 12361141819; the Chinese Academy of Sciences (CAS) Large-Scale Scientific Facility Program; the CAS Center for Excellence in Particle Physics (CCEPP); Joint Large-Scale Scientific Facility Funds of the NSFC and CAS under Contract No. U1832207; 100 Talents Program of CAS; The Institute of Nuclear and Particle Physics (INPAC) and Shanghai Key Laboratory for Particle Physics and Cosmology; German Research Foundation DFG under Contracts Nos. 455635585, FOR5327, GRK 2149; Istituto Nazionale di Fisica Nucleare, Italy; Ministry of Development of Turkey under Contract No. DPT2006K-120470; National Research Foundation of Korea under Contract No. NRF-2022R1A2C1092335; National Science and Technology fund of Mongolia; National Science Research and Innovation Fund (NSRF) via the Program Management Unit for Human Resources & Institutional Development, Research and Innovation of Thailand under Contract No. B16F640076; Polish National Science Centre under Contract No. 2019/35/O/ST2/02907; The Swedish Research Council; U. S. Department of Energy under Contract No. DE-FG02-05ER41374 JHEP § THE BESIII COLLABORATION tocsectionThe BESIII collaboration M. Ablikim^1, M. N. Achasov^4,c, P. Adlarson^76, O. Afedulidis^3, X. C. Ai^81, R. Aliberti^35, A. Amoroso^75A,75C, Q. An^72,58,a, Y. Bai^57, O. Bakina^36, I. Balossino^29A, Y. Ban^46,h, H.-R. Bao^64, V. Batozskaya^1,44, K. Begzsuren^32, N. Berger^35, M. Berlowski^44, M. Bertani^28A, D. Bettoni^29A, F. Bianchi^75A,75C, E. Bianco^75A,75C, A. Bortone^75A,75C, I. Boyko^36, R. A. Briere^5, A. Brueggemann^69, H. Cai^77, X. Cai^1,58, A. Calcaterra^28A, G. F. Cao^1,64, N. Cao^1,64, S. A. Cetin^62A, J. F. Chang^1,58, G. R. Che^43, G. Chelkov^36,b, C. Chen^43, C. H. Chen^9, Chao Chen^55, G. Chen^1, H. S. Chen^1,64, H. Y. Chen^20, M. L. Chen^1,58,64, S. J. Chen^42, S. L. Chen^45, S. M. Chen^61, T. Chen^1,64, X. R. Chen^31,64, X. T. Chen^1,64, Y. B. Chen^1,58, Y. Q. Chen^34, Z. J. Chen^25,i, Z. Y. Chen^1,64, S. K. Choi^10, G. Cibinetto^29A, F. Cossio^75C, J. J. Cui^50, H. L. Dai^1,58, J. P. Dai^79, A. Dbeyssi^18, R.  E. de Boer^3, D. Dedovich^36, C. Q. Deng^73, Z. Y. Deng^1, A. Denig^35, I. Denysenko^36, M. Destefanis^75A,75C, F. De Mori^75A,75C, B. Ding^67,1, X. X. Ding^46,h, Y. Ding^40, Y. Ding^34, J. Dong^1,58, L. Y. Dong^1,64, M. Y. Dong^1,58,64, X. Dong^77, M. C. Du^1, S. X. Du^81, Y. Y. Duan^55, Z. H. Duan^42, P. Egorov^36,b, Y. H. Fan^45, J. Fang^59, J. Fang^1,58, S. S. Fang^1,64, W. X. Fang^1, Y. Fang^1, Y. Q. Fang^1,58, R. Farinelli^29A, L. Fava^75B,75C, F. Feldbauer^3, G. Felici^28A, C. Q. Feng^72,58, J. H. Feng^59, Y. T. Feng^72,58, M. Fritsch^3, C. D. Fu^1, J. L. Fu^64, Y. W. Fu^1,64, H. Gao^64, X. B. Gao^41, Y. N. Gao^46,h, Yang Gao^72,58, S. Garbolino^75C, I. Garzia^29A,29B, L. Ge^81, P. T. Ge^19, Z. W. Ge^42, C. Geng^59, E. M. Gersabeck^68, A. Gilman^70, K. Goetzen^13, L. Gong^40, W. X. Gong^1,58, W. Gradl^35, S. Gramigna^29A,29B, M. Greco^75A,75C, M. H. Gu^1,58, Y. T. Gu^15, C. Y. Guan^1,64, A. Q. Guo^31,64, L. B. Guo^41, M. J. Guo^50, R. P. Guo^49, Y. P. Guo^12,g, A. Guskov^36,b, J. Gutierrez^27, K. L. Han^64, T. T. Han^1, F. Hanisch^3, X. Q. Hao^19, F. A. Harris^66, K. 
K. He^55, K. L. He^1,64, F. H. Heinsius^3, C. H. Heinz^35, Y. K. Heng^1,58,64, C. Herold^60, T. Holtmann^3, P. C. Hong^34, G. Y. Hou^1,64, X. T. Hou^1,64, Y. R. Hou^64, Z. L. Hou^1, B. Y. Hu^59, H. M. Hu^1,64, J. F. Hu^56,j, S. L. Hu^12,g, T. Hu^1,58,64, Y. Hu^1, G. S. Huang^72,58, K. X. Huang^59, L. Q. Huang^31,64, X. T. Huang^50, Y. P. Huang^1, Y. S. Huang^59, T. Hussain^74, F. Hölzken^3, N. Hüsken^35, N. in der Wiesche^69, J. Jackson^27, S. Janchiv^32, J. H. Jeong^10, Q. Ji^1, Q. P. Ji^19, W. Ji^1,64, X. B. Ji^1,64, X. L. Ji^1,58, Y. Y. Ji^50, X. Q. Jia^50, Z. K. Jia^72,58, D. Jiang^1,64, H. B. Jiang^77, P. C. Jiang^46,h, S. S. Jiang^39, T. J. Jiang^16, X. S. Jiang^1,58,64, Y. Jiang^64, J. B. Jiao^50, J. K. Jiao^34, Z. Jiao^23, S. Jin^42, Y. Jin^67, M. Q. Jing^1,64, X. M. Jing^64, T. Johansson^76, S. Kabana^33, N. Kalantar-Nayestanaki^65, X. L. Kang^9, X. S. Kang^40, M. Kavatsyuk^65, B. C. Ke^81, V. Khachatryan^27, A. Khoukaz^69, R. Kiuchi^1, O. B. Kolcu^62A, B. Kopf^3, M. Kuessner^3, X. Kui^1,64, N.  Kumar^26, A. Kupsc^44,76, W. Kühn^37, J. J. Lane^68, L. Lavezzi^75A,75C, T. T. Lei^72,58, Z. H. Lei^72,58, M. Lellmann^35, T. Lenz^35, C. Li^47, C. Li^43, C. H. Li^39, Cheng Li^72,58, D. M. Li^81, F. Li^1,58, G. Li^1, H. B. Li^1,64, H. J. Li^19, H. N. Li^56,j, Hui Li^43, J. R. Li^61, J. S. Li^59, K. Li^1, K. L. Li^19, L. J. Li^1,64, L. K. Li^1, Lei Li^48, M. H. Li^43, P. R. Li^38,k,l, Q. M. Li^1,64, Q. X. Li^50, R. Li^17,31, S. X. Li^12, T.  Li^50, W. D. Li^1,64, W. G. Li^1,a, X. Li^1,64, X. H. Li^72,58, X. L. Li^50, X. Y. Li^1,64, X. Z. Li^59, Y. G. Li^46,h, Z. J. Li^59, Z. Y. Li^79, C. Liang^42, H. Liang^1,64, H. Liang^72,58, Y. F. Liang^54, Y. T. Liang^31,64, G. R. Liao^14, Y. P. Liao^1,64, J. Libby^26, A.  Limphirat^60, C. C. Lin^55, D. X. Lin^31,64, T. Lin^1, B. J. Liu^1, B. X. Liu^77, C. Liu^34, C. X. Liu^1, F. Liu^1, F. H. Liu^53, Feng Liu^6, G. M. Liu^56,j, H. Liu^38,k,l, H. B. Liu^15, H. H. Liu^1, H. M. Liu^1,64, Huihui Liu^21, J. B. Liu^72,58, J. Y. Liu^1,64, K. Liu^38,k,l, K. Y. Liu^40, Ke Liu^22, L. Liu^72,58, L. C. Liu^43, Lu Liu^43, M. H. Liu^12,g, P. L. Liu^1, Q. Liu^64, S. B. Liu^72,58, T. Liu^12,g, W. K. Liu^43, W. M. Liu^72,58, X. Liu^38,k,l, X. Liu^39, Y. Liu^81, Y. Liu^38,k,l, Y. B. Liu^43, Z. A. Liu^1,58,64, Z. D. Liu^9, Z. Q. Liu^50, X. C. Lou^1,58,64, F. X. Lu^59, H. J. Lu^23, J. G. Lu^1,58, X. L. Lu^1, Y. Lu^7, Y. P. Lu^1,58, Z. H. Lu^1,64, C. L. Luo^41, J. R. Luo^59, M. X. Luo^80, T. Luo^12,g, X. L. Luo^1,58, X. R. Lyu^64, Y. F. Lyu^43, F. C. Ma^40, H. Ma^79, H. L. Ma^1, J. L. Ma^1,64, L. L. Ma^50, L. R. Ma^67, M. M. Ma^1,64, Q. M. Ma^1, R. Q. Ma^1,64, T. Ma^72,58, X. T. Ma^1,64, X. Y. Ma^1,58, Y. Ma^46,h, Y. M. Ma^31, F. E. Maas^18, M. Maggiora^75A,75C, S. Malde^70, Y. J. Mao^46,h, Z. P. Mao^1, S. Marcello^75A,75C, Z. X. Meng^67, J. G. Messchendorp^13,65, G. Mezzadri^29A, H. Miao^1,64, T. J. Min^42, R. E. Mitchell^27, X. H. Mo^1,58,64, B. Moses^27, N. Yu. Muchnoi^4,c, J. Muskalla^35, Y. Nefedov^36, F. Nerling^18,e, L. S. Nie^20, I. B. Nikolaev^4,c, Z. Ning^1,58, S. Nisar^11,m, Q. L. Niu^38,k,l, W. D. Niu^55, Y. Niu ^50, S. L. Olsen^64, Q. Ouyang^1,58,64, S. Pacetti^28B,28C, X. Pan^55, Y. Pan^57, A.  Pathak^34, Y. P. Pei^72,58, M. Pelizaeus^3, H. P. Peng^72,58, Y. Y. Peng^38,k,l, K. Peters^13,e, J. L. Ping^41, R. G. Ping^1,64, S. Plura^35, V. Prasad^33, F. Z. Qi^1, H. Qi^72,58, H. R. Qi^61, M. Qi^42, T. Y. Qi^12,g, S. Qian^1,58, W. B. Qian^64, C. F. Qiao^64, X. K. Qiao^81, J. J. Qin^73, L. Q. Qin^14, L. Y. Qin^72,58, X. P. Qin^12,g, X. S. Qin^50, Z. H. Qin^1,58, J. 
F. Qiu^1, Z. H. Qu^73, C. F. Redmer^35, K. J. Ren^39, A. Rivetti^75C, M. Rolo^75C, G. Rong^1,64, Ch. Rosner^18, S. N. Ruan^43, N. Salone^44, A. Sarantsev^36,d, Y. Schelhaas^35, K. Schoenning^76, M. Scodeggio^29A, K. Y. Shan^12,g, W. Shan^24, X. Y. Shan^72,58, Z. J. Shang^38,k,l, J. F. Shangguan^16, L. G. Shao^1,64, M. Shao^72,58, C. P. Shen^12,g, H. F. Shen^1,8, W. H. Shen^64, X. Y. Shen^1,64, B. A. Shi^64, H. Shi^72,58, H. C. Shi^72,58, J. L. Shi^12,g, J. Y. Shi^1, Q. Q. Shi^55, S. Y. Shi^73, X. Shi^1,58, J. J. Song^19, T. Z. Song^59, W. M. Song^34,1, Y.  J. Song^12,g, Y. X. Song^46,h,n, S. Sosio^75A,75C, S. Spataro^75A,75C, F. Stieler^35, S. S Su^40, Y. J. Su^64, G. B. Sun^77, G. X. Sun^1, H. Sun^64, H. K. Sun^1, J. F. Sun^19, K. Sun^61, L. Sun^77, S. S. Sun^1,64, T. Sun^51,f, W. Y. Sun^34, Y. Sun^9, Y. J. Sun^72,58, Y. Z. Sun^1, Z. Q. Sun^1,64, Z. T. Sun^50, C. J. Tang^54, G. Y. Tang^1, J. Tang^59, M. Tang^72,58, Y. A. Tang^77, L. Y. Tao^73, Q. T. Tao^25,i, M. Tat^70, J. X. Teng^72,58, V. Thoren^76, W. H. Tian^59, Y. Tian^31,64, Z. F. Tian^77, I. Uman^62B, Y. Wan^55, S. J. Wang ^50, B. Wang^1, B. L. Wang^64, Bo Wang^72,58, D. Y. Wang^46,h, F. Wang^73, H. J. Wang^38,k,l, J. J. Wang^77, J. P. Wang ^50, K. Wang^1,58, L. L. Wang^1, M. Wang^50, N. Y. Wang^64, S. Wang^12,g, S. Wang^38,k,l, T.  Wang^12,g, T. J. Wang^43, W.  Wang^73, W. Wang^59, W. P. Wang^35,58,72,o, X. Wang^46,h, X. F. Wang^38,k,l, X. J. Wang^39, X. L. Wang^12,g, X. N. Wang^1, Y. Wang^61, Y. D. Wang^45, Y. F. Wang^1,58,64, Y. L. Wang^19, Y. N. Wang^45, Y. Q. Wang^1, Yaqian Wang^17, Yi Wang^61, Z. Wang^1,58, Z. L.  Wang^73, Z. Y. Wang^1,64, Ziyi Wang^64, D. H. Wei^14, F. Weidner^69, S. P. Wen^1, Y. R. Wen^39, U. Wiedner^3, G. Wilkinson^70, M. Wolke^76, L. Wollenberg^3, C. Wu^39, J. F. Wu^1,8, L. H. Wu^1, L. J. Wu^1,64, X. Wu^12,g, X. H. Wu^34, Y. Wu^72,58, Y. H. Wu^55, Y. J. Wu^31, Z. Wu^1,58, L. Xia^72,58, X. M. Xian^39, B. H. Xiang^1,64, T. Xiang^46,h, D. Xiao^38,k,l, G. Y. Xiao^42, S. Y. Xiao^1, Y.  L. Xiao^12,g, Z. J. Xiao^41, C. Xie^42, X. H. Xie^46,h, Y. Xie^50, Y. G. Xie^1,58, Y. H. Xie^6, Z. P. Xie^72,58, T. Y. Xing^1,64, C. F. Xu^1,64, C. J. Xu^59, G. F. Xu^1, H. Y. Xu^67,2,p, M. Xu^72,58, Q. J. Xu^16, Q. N. Xu^30, W. Xu^1, W. L. Xu^67, X. P. Xu^55, Y. Xu^40, Y. C. Xu^78, Z. S. Xu^64, F. Yan^12,g, L. Yan^12,g, W. B. Yan^72,58, W. C. Yan^81, X. Q. Yan^1,64, H. J. Yang^51,f, H. L. Yang^34, H. X. Yang^1, T. Yang^1, Y. Yang^12,g, Y. F. Yang^43, Y. F. Yang^1,64, Y. X. Yang^1,64, Z. W. Yang^38,k,l, Z. P. Yao^50, M. Ye^1,58, M. H. Ye^8, J. H. Yin^1, Junhao Yin^43, Z. Y. You^59, B. X. Yu^1,58,64, C. X. Yu^43, G. Yu^1,64, J. S. Yu^25,i, M. C. Yu^40, T. Yu^73, X. D. Yu^46,h, Y. C. Yu^81, C. Z. Yuan^1,64, J. Yuan^34, J. Yuan^45, L. Yuan^2, S. C. Yuan^1,64, Y. Yuan^1,64, Z. Y. Yuan^59, C. X. Yue^39, A. A. Zafar^74, F. R. Zeng^50, S. H. Zeng^63A,63B,63C,63D, X. Zeng^12,g, Y. Zeng^25,i, Y. J. Zeng^59, Y. J. Zeng^1,64, X. Y. Zhai^34, Y. C. Zhai^50, Y. H. Zhan^59, A. Q. Zhang^1,64, B. L. Zhang^1,64, B. X. Zhang^1, D. H. Zhang^43, G. Y. Zhang^19, H. Zhang^72,58, H. Zhang^81, H. C. Zhang^1,58,64, H. H. Zhang^59, H. H. Zhang^34, H. Q. Zhang^1,58,64, H. R. Zhang^72,58, H. Y. Zhang^1,58, J. Zhang^81, J. Zhang^59, J. J. Zhang^52, J. L. Zhang^20, J. Q. Zhang^41, J. S. Zhang^12,g, J. W. Zhang^1,58,64, J. X. Zhang^38,k,l, J. Y. Zhang^1, J. Z. Zhang^1,64, Jianyu Zhang^64, L. M. Zhang^61, Lei Zhang^42, P. Zhang^1,64, Q. Y. Zhang^34, R. Y. Zhang^38,k,l, S. H. Zhang^1,64, Shulei Zhang^25,i, X. D. Zhang^45, X. M. Zhang^1, X. Y Zhang^40, X. Y. 
Zhang^50, Y.  Zhang^73, Y. Zhang^1, Y.  T. Zhang^81, Y. H. Zhang^1,58, Y. M. Zhang^39, Yan Zhang^72,58, Z. D. Zhang^1, Z. H. Zhang^1, Z. L. Zhang^34, Z. Y. Zhang^77, Z. Y. Zhang^43, Z. Z.  Zhang^45, G. Zhao^1, J. Y. Zhao^1,64, J. Z. Zhao^1,58, L. Zhao^1, Lei Zhao^72,58, M. G. Zhao^43, N. Zhao^79, R. P. Zhao^64, S. J. Zhao^81, Y. B. Zhao^1,58, Y. X. Zhao^31,64, Z. G. Zhao^72,58, A. Zhemchugov^36,b, B. Zheng^73, B. M. Zheng^34, J. P. Zheng^1,58, W. J. Zheng^1,64, Y. H. Zheng^64, B. Zhong^41, X. Zhong^59, H.  Zhou^50, J. Y. Zhou^34, L. P. Zhou^1,64, S.  Zhou^6, X. Zhou^77, X. K. Zhou^6, X. R. Zhou^72,58, X. Y. Zhou^39, Y. Z. Zhou^12,g, Z. C. Zhou^20, A. N. Zhu^64, J. Zhu^43, K. Zhu^1, K. J. Zhu^1,58,64, K. S. Zhu^12,g, L. Zhu^34, L. X. Zhu^64, S. H. Zhu^71, T. J. Zhu^12,g, W. D. Zhu^41, Y. C. Zhu^72,58, Z. A. Zhu^1,64, J. H. Zou^1, J. Zu^72,58 (BESIII Collaboration) ^1 Institute of High Energy Physics, Beijing 100049, People's Republic of China ^2 Beihang University, Beijing 100191, People's Republic of China ^3 Bochum Ruhr-University, D-44780 Bochum, Germany ^4 Budker Institute of Nuclear Physics SB RAS (BINP), Novosibirsk 630090, Russia ^5 Carnegie Mellon University, Pittsburgh, Pennsylvania 15213, USA ^6 Central China Normal University, Wuhan 430079, People's Republic of China ^7 Central South University, Changsha 410083, People's Republic of China ^8 China Center of Advanced Science and Technology, Beijing 100190, People's Republic of China ^9 China University of Geosciences, Wuhan 430074, People's Republic of China ^10 Chung-Ang University, Seoul, 06974, Republic of Korea ^11 COMSATS University Islamabad, Lahore Campus, Defence Road, Off Raiwind Road, 54000 Lahore, Pakistan ^12 Fudan University, Shanghai 200433, People's Republic of China ^13 GSI Helmholtzcentre for Heavy Ion Research GmbH, D-64291 Darmstadt, Germany ^14 Guangxi Normal University, Guilin 541004, People's Republic of China ^15 Guangxi University, Nanning 530004, People's Republic of China ^16 Hangzhou Normal University, Hangzhou 310036, People's Republic of China ^17 Hebei University, Baoding 071002, People's Republic of China ^18 Helmholtz Institute Mainz, Staudinger Weg 18, D-55099 Mainz, Germany ^19 Henan Normal University, Xinxiang 453007, People's Republic of China ^20 Henan University, Kaifeng 475004, People's Republic of China ^21 Henan University of Science and Technology, Luoyang 471003, People's Republic of China ^22 Henan University of Technology, Zhengzhou 450001, People's Republic of China ^23 Huangshan College, Huangshan 245000, People's Republic of China ^24 Hunan Normal University, Changsha 410081, People's Republic of China ^25 Hunan University, Changsha 410082, People's Republic of China ^26 Indian Institute of Technology Madras, Chennai 600036, India ^27 Indiana University, Bloomington, Indiana 47405, USA ^28 INFN Laboratori Nazionali di Frascati , (A)INFN Laboratori Nazionali di Frascati, I-00044, Frascati, Italy; (B)INFN Sezione di Perugia, I-06100, Perugia, Italy; (C)University of Perugia, I-06100, Perugia, Italy ^29 INFN Sezione di Ferrara, (A)INFN Sezione di Ferrara, I-44122, Ferrara, Italy; (B)University of Ferrara, I-44122, Ferrara, Italy ^30 Inner Mongolia University, Hohhot 010021, People's Republic of China ^31 Institute of Modern Physics, Lanzhou 730000, People's Republic of China ^32 Institute of Physics and Technology, Peace Avenue 54B, Ulaanbaatar 13330, Mongolia ^33 Instituto de Alta Investigación, Universidad de Tarapacá, Casilla 7D, Arica 1000000, Chile ^34 Jilin University, Changchun 
130012, People's Republic of China ^35 Johannes Gutenberg University of Mainz, Johann-Joachim-Becher-Weg 45, D-55099 Mainz, Germany ^36 Joint Institute for Nuclear Research, 141980 Dubna, Moscow region, Russia ^37 Justus-Liebig-Universitaet Giessen, II. Physikalisches Institut, Heinrich-Buff-Ring 16, D-35392 Giessen, Germany ^38 Lanzhou University, Lanzhou 730000, People's Republic of China ^39 Liaoning Normal University, Dalian 116029, People's Republic of China ^40 Liaoning University, Shenyang 110036, People's Republic of China ^41 Nanjing Normal University, Nanjing 210023, People's Republic of China ^42 Nanjing University, Nanjing 210093, People's Republic of China ^43 Nankai University, Tianjin 300071, People's Republic of China ^44 National Centre for Nuclear Research, Warsaw 02-093, Poland ^45 North China Electric Power University, Beijing 102206, People's Republic of China ^46 Peking University, Beijing 100871, People's Republic of China ^47 Qufu Normal University, Qufu 273165, People's Republic of China ^48 Renmin University of China, Beijing 100872, People's Republic of China ^49 Shandong Normal University, Jinan 250014, People's Republic of China ^50 Shandong University, Jinan 250100, People's Republic of China ^51 Shanghai Jiao Tong University, Shanghai 200240, People's Republic of China ^52 Shanxi Normal University, Linfen 041004, People's Republic of China ^53 Shanxi University, Taiyuan 030006, People's Republic of China ^54 Sichuan University, Chengdu 610064, People's Republic of China ^55 Soochow University, Suzhou 215006, People's Republic of China ^56 South China Normal University, Guangzhou 510006, People's Republic of China ^57 Southeast University, Nanjing 211100, People's Republic of China ^58 State Key Laboratory of Particle Detection and Electronics, Beijing 100049, Hefei 230026, People's Republic of China ^59 Sun Yat-Sen University, Guangzhou 510275, People's Republic of China ^60 Suranaree University of Technology, University Avenue 111, Nakhon Ratchasima 30000, Thailand ^61 Tsinghua University, Beijing 100084, People's Republic of China ^62 Turkish Accelerator Center Particle Factory Group, (A)Istinye University, 34010, Istanbul, Turkey; (B)Near East University, Nicosia, North Cyprus, 99138, Mersin 10, Turkey ^63 University of Bristol, (A)H H Wills Physics Laboratory; (B)Tyndall Avenue; (C)Bristol; (D)BS8 1TL ^64 University of Chinese Academy of Sciences, Beijing 100049, People's Republic of China ^65 University of Groningen, NL-9747 AA Groningen, The Netherlands ^66 University of Hawaii, Honolulu, Hawaii 96822, USA ^67 University of Jinan, Jinan 250022, People's Republic of China ^68 University of Manchester, Oxford Road, Manchester, M13 9PL, United Kingdom ^69 University of Muenster, Wilhelm-Klemm-Strasse 9, 48149 Muenster, Germany ^70 University of Oxford, Keble Road, Oxford OX13RH, United Kingdom ^71 University of Science and Technology Liaoning, Anshan 114051, People's Republic of China ^72 University of Science and Technology of China, Hefei 230026, People's Republic of China ^73 University of South China, Hengyang 421001, People's Republic of China ^74 University of the Punjab, Lahore-54590, Pakistan ^75 University of Turin and INFN, (A)University of Turin, I-10125, Turin, Italy; (B)University of Eastern Piedmont, I-15121, Alessandria, Italy; (C)INFN, I-10125, Turin, Italy ^76 Uppsala University, Box 516, SE-75120 Uppsala, Sweden ^77 Wuhan University, Wuhan 430072, People's Republic of China ^78 Yantai University, Yantai 264005, People's Republic of China 
^79 Yunnan University, Kunming 650500, People's Republic of China ^80 Zhejiang University, Hangzhou 310027, People's Republic of China ^81 Zhengzhou University, Zhengzhou 450001, People's Republic of China ^a Deceased ^b Also at the Moscow Institute of Physics and Technology, Moscow 141700, Russia ^c Also at the Novosibirsk State University, Novosibirsk, 630090, Russia ^d Also at the NRC "Kurchatov Institute", PNPI, 188300, Gatchina, Russia ^e Also at Goethe University Frankfurt, 60323 Frankfurt am Main, Germany ^f Also at Key Laboratory for Particle Physics, Astrophysics and Cosmology, Ministry of Education; Shanghai Key Laboratory for Particle Physics and Cosmology; Institute of Nuclear and Particle Physics, Shanghai 200240, People's Republic of China ^g Also at Key Laboratory of Nuclear Physics and Ion-beam Application (MOE) and Institute of Modern Physics, Fudan University, Shanghai 200443, People's Republic of China ^h Also at State Key Laboratory of Nuclear Physics and Technology, Peking University, Beijing 100871, People's Republic of China ^i Also at School of Physics and Electronics, Hunan University, Changsha 410082, China ^j Also at Guangdong Provincial Key Laboratory of Nuclear Science, Institute of Quantum Matter, South China Normal University, Guangzhou 510006, China ^k Also at MOE Frontiers Science Center for Rare Isotopes, Lanzhou University, Lanzhou 730000, People's Republic of China ^l Also at Lanzhou Center for Theoretical Physics, Lanzhou University, Lanzhou 730000, People's Republic of China ^m Also at the Department of Mathematical Sciences, IBA, Karachi 75270, Pakistan ^n Also at Ecole Polytechnique Federale de Lausanne (EPFL), CH-1015 Lausanne, Switzerland ^o Also at Helmholtz Institute Mainz, Staudinger Weg 18, D-55099 Mainz, Germany ^p Also at School of Physics, Beihang University, Beijing 100191 , China
http://arxiv.org/abs/2406.18953v1
20240627073604
Spin Hamiltonian with large fourth order terms: Triple well potentials and Bloch sphere visualization
[ "D. S. Lohr Robles", "M. Grether", "E. Lopez Moreno", "P. O. Hess" ]
quant-ph
[ "quant-ph" ]
Spin Hamiltonian with large fourth order terms: Triple well potentials and Bloch sphere visualization ^1Independent researcher, Mexico ^2Facultad de Ciencias, Universidad Nacional Autónoma de México, 04510 Mexico-City, Mexico ^3Instituto de Ciencias Nucleares, Universidad Nacional Autónoma de México, A.P. 70-543, 04510 Mexico-City, Mexico ^4Frankfurt Institute for Advanced Studies, J. W. von Goethe University, Hessen, Germany hess@nucleares.unam.mx § ABSTRACT We present a study of a general spin Hamiltonian with terms up to fourth order. With the coherent states the semiclassical potential is obtained, and with catastrophe theory its parameter space is constructed. When the fourth order parameters are large enough, the parameter space has regions where the semiclassical potential has three wells. By applying an oscillating magnetic field, a trajectory in parameter space crosses the Maxwell set multiple times, resulting in many ground state quantum phase transitions. Using the coherent states, we are able to visualize the localization of the ground state on the Bloch sphere as the magnetic field is varied. Conflict of Interest declaration: The authors declare that they have NO affiliations with or involvement in any organization or entity with any financial interest in the subject matter or materials discussed in this manuscript. Keywords: Quantum phase transitions, algebraic models, semiclassical approximation § INTRODUCTION The use of spin Hamiltonians to model many-body systems has found a wide variety of applications in different areas of physics. By an appropriate fitting of the interaction parameters, the phenomena of interest may be accurately described, assigning physical meaning to each of the terms. For this reason, theoretical work on Hamiltonians with higher order contributing terms can be of use, in particular when the values of the corresponding parameters are large enough to induce a meaningful change, and for understanding how this contribution affects the phase transitions of the system. The physics of single molecule magnets (SMMs) can be accurately described using a spin Hamiltonian <cit.>. The application of SMMs as qubits in new technologies remains a possibility <cit.>. Spin Hamiltonians have also been used in the description of quantum phase transitions (QPTs) and excited-state quantum phase transitions (ESQPTs) as a function of the Hamiltonian parameters <cit.>, and they have also been applied to model systems of very large spin in ferritin proteins <cit.>. As a result, these models are very useful as a way to test new phenomena. The motivation of the present contribution is to study phase transitions in SMMs and to search for a tractable manner of manipulating them. The present paper is structured as follows: In section <ref> the spin Hamiltonian is introduced with terms up to fourth order, the coherent states are defined and the parameter space of the semiclassical potential is obtained. In section <ref> the master equation that defines the population of states as a function of time is introduced, together with the concept of relaxation time of the material. In section <ref> the concept of fidelity and fidelity susceptibility and their relation to QPTs is discussed. A series of results for three different examples are presented in section <ref>. 
In section <ref>, hypothetical examples of systems with large values of the fourth order parameters are considered, which lead to systems with three stability wells in the semiclassical potentials; two of these examples are presented and studied. The concept of the Bloch sphere is introduced as a way to visualize and describe the localization of the eigenstates of the Hamiltonian with the intention to construct and manipulate qubits. Finally, in section <ref> conclusions are drawn and possible future work is discussed. § SPIN HAMILTONIAN We consider the following expression for a spin Hamiltonian, designed to describe magnetic molecules, with operators up to fourth order: H = - g μ_B/SB⃗·S⃗ + 1/S(2S-1)( D ( S^2_z - 1/3 S(S+1)) + E( S^2_x- S^2_y) ) + H_4/S(2S-1)(2S-2)(2S-3) where g is the Landé factor, μ_B is the Bohr magneton, D and E are the second order anisotropy constants of the molecule and H_4 indicates fourth order anisotropy terms. The first order interaction is given by the Zeeman term in (<ref>), with B⃗=(B_x,B_y,B_z) an external magnetic field and S⃗=(S_x,S_y,S_z) the spin vector, with S_i, i=x,y,z, the spin components operators. The explicit form of the fourth order terms is given by: H_4 = B_4^0 O_4^0 + B_4^2 O_4^2 + B_4^3 O_4^3 + B_4^4 O_4^4 = B_4^0 (35 S_z^4 + (25-30S(S+1)) S_z^2 +3S^2(S+1)^2 -6 S(S+1) ) +B_4^2/4((7 S_z^2 - S(S+1)-5)( S_+^2 + S_-^2) +( S_+^2 + S_-^2)(7 S_z^2 - S(S+1)-5) ) +B_4^3/4( S_z ( S_+^3 + S_-^3)+( S_+^3 + S_-^3) S_z ) + B_4^4/2( S_+^4 + S_-^4), where O_q^k are the Steven operators <cit.> and B_q^k the corresponding fourth order anisotropy constants. It is important to notice that the values of these constants are smaller than that of the second order ones, and the non-zero values are dependent on the symmetric properties of the material: tetragonal symmetry (k=0,4), orthorhombic symmetry (k=0,2,4), and trigonal symmetry (k=0,3) <cit.>. To each of the terms in the Hamiltonian we add a factor of the form 2∏_q(2S-q+1)^-1 for the q-th order interaction term. This is to ensure that the semiclassical potential calculated in the next subsection is independent of S. Using the complete basis |SM⟩ of the eigenvectors of the S_z operator, satisfying the eigenvalue equation S_z |SM⟩ = Mħ |SM⟩, the Hamiltonian matrix of (<ref>) is constructed and its eigenvalues E_k, which satisfy H |ψ_k⟩ = E_k |ψ_k⟩ , are obtained by diagonalization. The eigenvectors |ψ_k⟩ are expressed as linear combinations in the |SM⟩ basis as: |ψ_k⟩ = ∑_M=-S^S c_k,M|SM⟩, where the square of the absolute value of the coefficients c_k,M represent the probability of finding the state |ψ_k⟩ with spin projection M, with M=-S,-S+1,…,S-1,S. In the following we will simplify the notation defining |SM⟩≡ |M⟩. The Hamiltonian of the system written in (<ref>) depends on nine parameters: Three free parameters for the interaction with the magnetic field and six anisotropic constants (two second order and four fourth order). The non-free parameters are adjusted to experiment. We want to study the effect of these parameters in the structure of the eigenstates of the Hamiltonian. Using the SU(2) coherent states <cit.> we can obtain the semiclassical potential of the system as the expectation value of the Hamiltonian in the coherent state basis. We will find that by studying the critical points of the semiclassical potential we can obtain information about the behaviour of the eigenvalues and eigenvectors. 
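For concreteness, the diagonalization step just described can be sketched numerically as follows. The snippet below (Python with numpy) keeps only the Zeeman and second-order anisotropy terms for brevity, adopts g = 2 and μ_B/k_B ≈ 0.672 K/T as illustrative unit conventions, and is not the code used for the results of this paper; the example parameter values are placeholders.

```python
import numpy as np

MU_B_OVER_KB = 0.6717  # Bohr magneton over Boltzmann constant, approx., in K/T

def spin_matrices(S):
    """Return S_z, S_+, S_- in the |S,M> basis, M = -S,...,S (hbar = 1)."""
    M = np.arange(-S, S + 1, dtype=float)
    Sz = np.diag(M)
    # <M+1|S_+|M> = sqrt(S(S+1) - M(M+1)) sits at row M+1, column M
    Sp = np.diag(np.sqrt(S * (S + 1) - M[:-1] * (M[:-1] + 1)), k=-1)
    return Sz, Sp, Sp.T.copy()

def hamiltonian(S, D, E, B, g=2.0):
    """Zeeman plus second-order anisotropy part of the Hamiltonian, in Kelvin.

    D and E are given in Kelvin, B = (Bx, By, Bz) in Tesla; the fourth-order
    terms H_4 are omitted here for brevity.
    """
    Sz, Sp, Sm = spin_matrices(S)
    Sx, Sy = (Sp + Sm) / 2.0, (Sp - Sm) / 2.0j
    dim = int(2 * S + 1)
    zeeman = -(g * MU_B_OVER_KB / S) * (B[0] * Sx + B[1] * Sy + B[2] * Sz)
    aniso = (D * (Sz @ Sz - S * (S + 1) / 3.0 * np.eye(dim))
             + E * (Sx @ Sx - Sy @ Sy)) / (S * (2 * S - 1))
    return zeeman + aniso

# Example: low-lying levels of an S = 10 system in a small tilted field
energies, states = np.linalg.eigh(hamiltonian(S=10, D=-0.295, E=0.056, B=(0.02, 0.0, 0.3)))
print(energies[:5])   # E_k; column k of `states` holds the coefficients c_{k,M}
```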
Catastrophe theory is a very useful method <cit.> for the categorization of functions that depend on parameters, as it presents a classification of the stable singularities of potential functions, leading to the construction of separatrices in parameter space that denote the different phases of the system, identifiable as elementary catastrophes: fold, cusp, swallowtail, butterfly, etc., and their respective regions of stability. Catastrophe theory has been applied with great success to nuclear and particle physics <cit.>. §.§ Coherent states and semiclassical potential The semiclassical potential is obtained by calculating the expectation value of the Hamiltonian in the atomic coherent state basis: V(θ,ϕ) = ⟨ζ | H |ζ⟩, where the coherent state |ζ⟩ is defined as a rotation of the lowest weighted state |-S⟩ by an angle θ about an axis n̂=(sinϕ,-cosϕ,0) in angular momentum space <cit.>: |ζ⟩ = R(θ,ϕ) |-S⟩ = (1+ |ζ|^2)^-Se^ζ S_+ |-S⟩ = ∑_M=-S^S ((2S)!/(S+M)!(S-M)!)^1/2(cosθ/2)^S-M(e^-iϕsinθ/2)^S+M |M⟩ with ζ = e^-iϕtan(θ/2). The semiclassical potential is explicitly given by: V(θ,ϕ;D,E,B_i) = D/12(1+3cos 2θ) + E/2cos 2ϕsin^2 θ - gμ_B (-B_z cosθ + B_xcosϕsinθ + B_y sinϕsinθ) +1/8(B_4^0/8(35cos 4θ +20cos 2θ +9) + B_4^2/2(7cos 2θ +5)cos 2ϕsin^2θ. . - B_4^3 cos 3ϕcosθsin^3θ + B_4^4 cos 4ϕsin^4θ) , which is a function of two angular variables (θ,ϕ) and independent of the value of the spin S of the system. Therefore, the parameter spaces obtained in these sections are valid for all values of S. Because our interest lies in how changes of the parameters can lead the system to change from one phase to another, we will find that very telling results, e.g. indications of QPTs, can be obtained even for small values of S. As a starting point we restrict ourselves to the case when the magnetic field is constrained to the xz-plane: B⃗=(B_x,0,B_z), as previously done in <cit.>. We can see in (<ref>) that when B_y=0, the values ϕ_c=0,π are critical points for all values of the parameters. Thus, we are able to substitute these values in (<ref>) and focus on the study of the following one-dimensional potential function: V(θ,ϕ_c;r_i) = r_1 cosϕ_csinθ + r_2 cosθ + r_3 cos 2θ + r_4 cos 4θ + r_5 cosϕ_c (2sin 2θ - sin 4θ), where we defined the new parameters r_i as: r_1 = -gμ_BB_x r_2 = gμ_BB_z r_3 = 1/4 (D-E)+1/16(5B_4^0+B_4^2-B_4^4) r_4 = 1/64(35 B_4^0 -7B_4^2 + B_4^4 ) r_5 = -1/64 B_4^3 . Conversely, once the parameters r_i are known, the magnetic field components and D, E can be deduced. §.§ Bifurcation and Maxwell sets The bifurcation set is a subspace in parameter space where critical points of the potential function begin to emerge. Thus, in its vicinity we can find two phases (regions): in one there exists a stability point θ_c, while in the other there is no such point. The bifurcation set can be found by considering the critical manifold, which is the hypersurface of critical points θ_c satisfying: .d/d θ V(θ,ϕ_c;r_i)|_θ=θ_c =0 spanned by a continuous variation of the parameters r_i. The set of points where the mapping of the critical manifold to the parameter space is singular is defined as the bifurcation set. A straightforward calculation of this singular mapping results in the following set of parametric functions: r_1 (x,ϕ_c;r_3,r_4,r_5) = r_3 cosϕ_c (3sin x - sin 3x) + 2r_4 cosϕ_c (5sin 3x - 3sin 5x) - 6 r_5 (cos x -2cos 3x + cos 5x) r_2 (x,ϕ_c;r_3,r_4,r_5) = -r_3 (3cos x +cos 3x)-2r_4 (5cos 3x + 3cos 5x) + 4 r_5 cosϕ_c sin x(2+7cos 2x +3 cos 4x), where x=θ_c is the critical point and satisfies (<ref>). 
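The mapping to the r_i parameters, the reduced one-dimensional potential, and the parametric functions of the bifurcation set given above translate directly into code. The following sketch is a plain transcription of the expressions quoted above (Python with numpy; energies in Kelvin and fields in Tesla via μ_B/k_B ≈ 0.672 K/T are assumed unit conventions), intended only as an illustration and not as the authors' implementation.

```python
import numpy as np

def r_parameters(D, E, B4_0=0.0, B4_2=0.0, B4_3=0.0, B4_4=0.0,
                 Bx=0.0, Bz=0.0, g=2.0, mu_B_over_kB=0.6717):
    """Map the physical constants to (r_1,...,r_5); energies in Kelvin, fields in Tesla."""
    r1 = -g * mu_B_over_kB * Bx
    r2 = g * mu_B_over_kB * Bz
    r3 = (D - E) / 4.0 + (5.0 * B4_0 + B4_2 - B4_4) / 16.0
    r4 = (35.0 * B4_0 - 7.0 * B4_2 + B4_4) / 64.0
    r5 = -B4_3 / 64.0
    return r1, r2, r3, r4, r5

def potential_1d(theta, r, phi_c=0.0):
    """One-dimensional semiclassical potential V(theta, phi_c; r_i)."""
    r1, r2, r3, r4, r5 = r
    c = np.cos(phi_c)
    return (r1 * c * np.sin(theta) + r2 * np.cos(theta) + r3 * np.cos(2 * theta)
            + r4 * np.cos(4 * theta) + r5 * c * (2 * np.sin(2 * theta) - np.sin(4 * theta)))

def bifurcation_curve(x, r3, r4, r5, phi_c=0.0):
    """Parametric (r_1, r_2) points of the bifurcation set for critical angles x."""
    c = np.cos(phi_c)
    r1 = (r3 * c * (3 * np.sin(x) - np.sin(3 * x))
          + 2 * r4 * c * (5 * np.sin(3 * x) - 3 * np.sin(5 * x))
          - 6 * r5 * (np.cos(x) - 2 * np.cos(3 * x) + np.cos(5 * x)))
    r2 = (-r3 * (3 * np.cos(x) + np.cos(3 * x))
          - 2 * r4 * (5 * np.cos(3 * x) + 3 * np.cos(5 * x))
          + 4 * r5 * c * np.sin(x) * (2 + 7 * np.cos(2 * x) + 3 * np.cos(4 * x)))
    return r1, r2

# Example: sweep the critical angle to trace the separatrix for illustrative (r_3, r_4, r_5)
x = np.linspace(0.0, np.pi, 500)
print(bifurcation_curve(x, r3=-0.135, r4=-1e-4, r5=0.0)[0][:3])
```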
These parametric functions allow us to draw the bifurcation set in the (r_1,r_2) parameter space for a given set of values (r_3,r_4,r_5). One can also solve these equations to obtain the bifurcation set in any other (r_i,r_j) parameter space of interest. The Maxwell set is the subspace in parameter space where two or more extrema (minima or maxima) of the potential function have the same value, i.e. V(θ_1,ϕ_c;r_i)=V(θ_2,ϕ_c;r_i); thus in its vicinity two phases exist, in one V(θ_1)>V(θ_2), while in the other V(θ_1)<V(θ_2), i.e. there is a change of the dominant stability point (phase). Similarly to the bifurcation set, the Maxwell set can be found by considering the singular mapping of a hypersurface to the parameter space. The particular hypersurface is defined as the set of roots θ_i, such that V(θ_i,ϕ_c; r_i)+V_0=0, where V_0 is an arbitrary real number, spanned by a continuous variation of the parameters r_i. Then one has to find the values of r_i such that the above description is true for two values θ_1 and θ_2. Performing the calculation is very involved. Fortunately, using the routines of the software MATHEMATICA we could resolve the problem and we found the following set of functions for the Maxwell set: r_1 (θ_1,θ_2;r_4,r_5) = 8ϕ_c sinθ_1 sinθ_2 (sinθ_1 +sinθ_2) (2r_4 (1-cos(θ_1 + θ_2)) +r_5 cos 3ϕ_c (2(θ_1 + θ_2)-cos 2(θ_1 + θ_2)(θ_1 + θ_2) ) ) r_2 (θ_1,θ_2;r_4,r_5) = 2 cosθ_1 - θ_2/2(32 r_4 cosθ_1 cosθ_2 cos^3 θ_1 + θ_2/2) - r_5 cos 3ϕ_c θ_1 + θ_2/2(2 -2 cos 2θ_1 - 2cos 2θ_2 - cos(θ_1 + θ_2) - 2 cos 2(θ_1 + θ_2) - cos 3(θ_1 + θ_2) - cos(3θ_1 + θ_2)-cos(θ_1 +3θ_2) ) r_3 (θ_1,θ_2;r_4,r_5) = -2 r_4 (3 cos 2θ_1 +3cos 2θ_2 + 4cos(θ_1 + θ_2) ) + r_5/2cos 3ϕ_c θ_1 + θ_2/2θ_1 + θ_2/2(2cos(θ_1 + θ_2) -4cos(2θ_1 +2θ_2) -3 (cos(3θ_1 +θ_2)+cos(θ_1 +3θ_2)) ) and we are able to draw the Maxwell set as a parametric function in the (r_1,r_2) parameter space given the critical points (θ_1,θ_2) that satisfy (<ref>) for a given set of values (r_3,r_4,r_5). In the following applications, examples of phases are shown, and the corresponding separatrices of the bifurcation and Maxwell sets will be illustrated. § MAGNETIZATION AND MASTER EQUATION The labelling of the eigenstates of the Hamiltonian (<ref>) is an important issue to address. In cases when the D parameter is the most relevant, the low energy states have a dominant coefficient c_k,M in the expression (<ref>), while the other coefficients are small. In these cases it is possible to maintain the labelling of the states by the spin projections, which is required when studying transitions between states in the interaction of the system with a magnetic field. In this framework the states labelled M=± S are the lowest states in their respective potential wells. The population p_m of the state labelled m at a time t is described by the master equation <cit.>: d p_m(t)/d t = ∑_m'(γ_m'm p_m'(t) - γ_mm'p_m(t) ) with γ_mm' the transition rates of going from the state m' to m, which are given by γ_mm' = 3/πħ^4 ρ c_s^5(E_m' - E_m)^3/e^(E_m'-E_m)/k_B T-1( |D_1|^2 (|⟨ψ_m'| S_+^2 |ψ_m⟩|^2 +|⟨ψ_m'| S_-^2 |ψ_m⟩|^2 ) + |D_2|^2 (|⟨ψ_m'|{S_+,S_z} |ψ_m⟩|^2 +|⟨ψ_m'|{S_-,S_z} |ψ_m⟩|^2 ) ) where ρ is the density of the material, c_s is the velocity of sound in the material, D_1 and D_2 are the spin-phonon coupling parameters, and {·,·} is the anti-commutator <cit.>. The differential equation in (<ref>) can be written in matrix form as: d p⃗(t)/d t = p⃗(t) G where G is the transition rate matrix, and its respective matrix elements are given by: G_mm' =γ_mm'-δ_mm'∑_kγ_mk . 
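To illustrate how the populations evolve under this master equation, the sketch below builds the rate matrix from a given set of transition rates γ_mm' and propagates p(t) with a matrix exponential. It is written in the standard column-vector convention (equivalent to the row-vector form above), assumes the rates stay constant over the propagation step, and uses arbitrary toy numbers rather than the physical spin-phonon rates defined above.

```python
import numpy as np
from scipy.linalg import expm

def rate_matrix(gamma):
    """Generator W for dp/dt = W p, with gamma[m, m2] the rate from state m2 to m.

    Column-vector convention, equivalent to the row-vector form dp/dt = p G above;
    the diagonal enforces conservation of the total population.
    """
    W = np.array(gamma, dtype=float)
    np.fill_diagonal(W, 0.0)
    return W - np.diag(W.sum(axis=0))

def evolve(p0, gamma, t):
    """Populations after a time t, assuming the rates are constant over t."""
    return expm(rate_matrix(gamma) * t) @ p0

# Toy example with three levels and arbitrary rates (s^-1)
gamma = [[0.0, 2.0, 0.1],
         [0.5, 0.0, 1.0],
         [0.1, 0.3, 0.0]]
p0 = np.array([1.0, 0.0, 0.0])
print(evolve(p0, gamma, t=5.0))   # tends towards the stationary distribution
```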
In order to solve the master equation (<ref>) one has to take into account that the variation of the magnetic field in (<ref>) is time dependent: |B⃗|=ν t + B_0, with ν = dB/dt the sweeping rate of the magnetic field and B_0 its initial value. The master equation has been studied for various SMMs in order to model the magnetization hysteresis loops <cit.>. Monte Carlo methods have proven to be a strong computational technique when dealing with time-dependent transition rates <cit.>. In particular, in <cit.> a kinetic Monte Carlo method with variable step size was used to solve the master equation, which we will follow in the present paper. The magnetization hysteresis loops are obtained using the solutions of the master equation (<ref>) as: M = ∑_m p_m(t) ⟨ψ_m |S_z| ψ_m ⟩ . §.§ Relaxation time and relaxation rate Measurements of the relaxation time and rate as a function of the magnetic field have been of considerable importance in the understanding of SMMs <cit.>. The relaxation time τ of the material as a function of the magnetic field can be obtained from the eigenvalues of the transition rate matrix in (<ref>), as done in <cit.>. It is proven in <cit.> that one of the eigenvalues is equal to zero and corresponds to the equilibrium state, while the others are negative. The relaxation time is then identified as the reciprocal of the smallest non-zero eigenvalue g_min <cit.>: τ = -1/g_min . The relaxation time tells us how long the magnetization due to the alignment of the molecules persists in the presence of the magnetic field. The relaxation rate Γ is in turn the reciprocal of the relaxation time <cit.>: Γ = 1/τ In <cit.> the peaks of the relaxation rate as a function of time are fitted with a Lorentzian function. § FIDELITY AND FIDELITY SUSCEPTIBILITY Quantum fidelity is a measure of how much a quantum state that depends on a parameter, e.g. the magnetic field amplitude, resembles itself under a small variation of that parameter <cit.>. This quantity has proven to be very useful for the identification of QPTs, characterized by a sudden drop in fidelity at the critical point of the parameter, and has been used successfully to detect QPTs in quantum many-body systems <cit.>. In order to calculate the quantum fidelity of a state as a function of the parameter one needs to introduce an arbitrarily small value dB. The fidelity of a state |ψ_k⟩ is defined as <cit.>: F_k= |⟨ψ_k (B_i-dB_i)|ψ_k(B_i+dB_i)⟩|^2 where B_i is the magnetic field parameter and dB_i represents a small increment. Using this expression one is able to calculate the value of the fidelity of the k-th state as a function of a varying magnetic field. The quantum fidelity susceptibility χ_F_k is defined as the second-order coefficient of the Taylor series expansion of the quantum fidelity in (<ref>) about dB=0 <cit.>. It contains all the information of the quantum fidelity and has the advantage that it is independent of the arbitrary value of dB, i.e. it does not depend on any external value, and it is therefore more suitable for calculations. The fidelity susceptibility can be explicitly written as <cit.> χ_F_k = 2 ∑_m≠ k|⟨ψ_m |S_z|ψ_k⟩|^2/(E_m - E_k)^2 where S_z is related to the parameter of the interaction, i.e. the magnetic field. 
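Both quantities can be evaluated directly from the rate matrix and from the eigen-decomposition of the spin Hamiltonian. The sketch below is again only illustrative: it assumes a rate matrix W built as in the previous sketch and eigenpairs (E_k, |ψ_k⟩) from a numerical diagonalization, and it implements τ = -1/g_min together with the fidelity-susceptibility sum quoted above.

```python
import numpy as np

def relaxation_time(W, tol=1e-12):
    """tau = -1/g_min from the non-zero eigenvalues of the rate matrix W."""
    ev = np.linalg.eigvals(W).real     # spectrum here: one zero eigenvalue, the rest negative
    nonzero = ev[np.abs(ev) > tol]
    return -1.0 / nonzero.max()        # the negative eigenvalue closest to zero

def fidelity_susceptibility(energies, states, Sz):
    """chi_F for every eigenstate k, following the sum quoted above.

    energies, states: output of np.linalg.eigh(H); Sz: the S_z matrix in the |M> basis.
    """
    Sz_eig = states.conj().T @ Sz @ states           # <psi_m|S_z|psi_k>
    dE = energies[:, None] - energies[None, :]       # E_m - E_k
    with np.errstate(divide="ignore", invalid="ignore"):
        term = np.abs(Sz_eig) ** 2 / dE ** 2
    np.fill_diagonal(term, 0.0)                      # exclude the m = k term
    return 2.0 * term.sum(axis=0)

# Example reuse of the earlier sketches (commented, as it needs those objects in scope):
# chi = fidelity_susceptibility(energies, states, spin_matrices(10)[0])
```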
§ RESULTS In this section we present results for the physical phenomena described in the previous sections for three different test cases of magnetic molecules: a) the Fe_8 SMM, b) the Fe_4 SMM, c) arbitrary parameters, not related to a specific magnetic molecule, with a large fourth order term. §.§ Fe_8 SMM We start with the parameter values given in the review <cit.> and modify them slightly in order to better fit the hysteresis plots in <cit.>: S=10, D/k_B=-0.295 K, E/k_B=0.056 K, B_4^0/k_B=1.15× 10^-6 K, B_4^2 /k_B =-1.15× 10^-6 K, B_4^4 /k_B =-2.18× 10^-5 K, B_x =0.02 T, B_y=B_4^3=0. In this case we considered the magnetic field to be not completely aligned with the z-axis and the easy magnetization axis of the molecules, but making a small 3^∘ angle with respect to it; this was done to recreate the hysteresis steps in the presence of a very small transverse field. The parameters used in the transition rates γ_mm' in (<ref>) are: 3/(πħ^4 ρ c_s^5) = 3.13× 10^3 K^-5s^-1, D_1=D_2=0.26 K, and T=0.04 K. In figure <ref>(a) we show the bifurcation set, and the horizontal dashed line corresponds to the parameter values of Fe_8. When the bifurcation set is crossed, the avoided level crossings of higher energy levels start to occur, as a consequence of the double well structure of the semiclassical energy surface. At zero magnetic field both potential wells are at the same depth and there is an avoided level crossing of the ground state, which can be seen in figure <ref>(d). In figures <ref>(b) and <ref>(e) we plotted the relaxation time and relaxation rate as functions of the magnetic field, respectively. The drops of the relaxation time, which are peaks in the relaxation rate, occur at the avoided level crossings of the eigenstate with label m=-10, as seen in figure <ref>(d). A similar behaviour occurs with the fidelity and fidelity susceptibility seen in figures <ref>(c) and <ref>(f), respectively. In figure <ref>(g) we plotted the hysteresis loop for two values of the sweeping rate of the magnetic field in the z direction. These results can be compared with the experimental results obtained in figure 2 in <cit.>. The transition rates defined in (<ref>) for the master equation in this model do not adequately reproduce the transitions at zero magnetic field for pure tunneling, i.e. at low temperature, as described in <cit.>; to solve this issue, an additional constant correction term is added to the transition rates γ_mm' for these cases. Following the methodology described in <cit.>, we added the constant correction term γ_t=0.002 s^-1 in order to obtain the hysteresis in figure <ref>(g). The locations of the steps in the hysteresis loops also coincide with the locations of the drops of the relaxation time. §.§ Fe_4 SMM We use the parameters obtained in <cit.>, with the addition of a fourth order term B_4^3, permitted by the tetragonal symmetry of the molecule, to better fit the experimental hysteresis loops. The parameters used for Fe_4 were: S=5, D/k_B=-0.601 K, E/k_B=0.024 K, B_4^0/k_B=2.88× 10^-5 K, B_4^3 /k_B = 0.0004 K, B_x =B_y=B_4^2=B_4^4=0. The parameters used in the transition rates γ_mm' in (<ref>) are: 3/(πħ^4 ρ c_s^5) = 4.1× 10^3 K^-5s^-1, D_1=D_2=0.28 K, and T=0.04 K. For this case we also add a constant correction term γ_t to the transition rate to enhance the tunnelling between the states m=± 5 and m=± 4, as was done in <cit.>. Depending on the magnetic sweeping rate we have: γ_t = 0.1 s^-1 for the sweeping rate 0.001 Ts^-1 and γ_t = 0.6 s^-1 for the sweeping rate 0.017 Ts^-1. 
In figure <ref> we show the same results as in figure <ref> but for the Fe_4 SMM. In this case we can see in figures <ref>(b) and <ref>(e) the consequences of the constant correction term for the transition rate between the states m=-5 and m=5 at zero magnetic field, where the value of the relaxation time is very small in the vicinity of B_z=0, while the relaxation rate has an artificial peak in that vicinity. In figure <ref>(g) we plotted the hysteresis loop for two values of the sweeping rate of the magnetic field in the z direction. These results can be compared with the experimental results obtained in figure 3 in <cit.>. §.§ A case of an arbitrary set of parameters with a large fourth order term A system with arbitrary parameters is considered in order to investigate the effects of large values of the fourth order terms. Here we consider the parameters: S=5, D/k_B=-0.5 K, E/k_B=0 K, B_4^4/k_B=-1.5× 10^-2 K, B_x=0.001 T, B_y=B_4^0=B_4^2=B_4^3=0. The parameters used in the transition rates γ_mm' in (<ref>) are: 3/(πħ^4 ρ c_s^5) = 1.831× 10^3 K^-5s^-1, D_1=D_2=0.5 K, and T=0.1 K. In figure <ref> we show the parameter space, and we can see that in this case the horizontal dashed line corresponding to the parameter values crosses a butterfly catastrophe structure of the bifurcation set. The energy levels are shown in figure <ref>(d), where the ground state energy has strong avoided level crossings for non-zero values of the magnetic field. This is reflected in the relaxation time and relaxation rate in figures <ref>(b) and <ref>(e), respectively, where in the insets we can see the respective drops and peaks at about B_z≈ 5 T and B_z≈ 5.6 T. This structure can also be seen in the fidelity and fidelity susceptibility in figures <ref>(c) and <ref>(f), respectively, where the drops in the fidelity and the peaks of the fidelity susceptibility appear more clearly. § TRIPLE WELL POTENTIALS The spin Hamiltonian under consideration has two free parameters, namely the x and z components of the magnetic field, while the parameters of the higher order terms are fixed by the system of study. However, it should be noted that the methods described above allow for the study of QPTs in parameter spaces involving free parameters of higher order terms in a similar fashion. In this section we perform a theoretical study and explore the complete parameter space for regions of interesting structural stability. Instead of freely varying the Hamiltonian parameters, by first constructing the semiclassical potential and the separatrices in parameter space we can get an idea of the behaviour and structure of the energy levels of the Hamiltonian, and it is in this aspect that the usefulness and advantage of the methodology resides. Two example cases are presented next where triple well potentials are found: Case I: Parameters: S=10, D=-0.5 K, E=0.04 K, B_4^4=-0.007 K, B_4^0=B_4^2=B_4^3=0. In figure <ref>(a) we show the parameter space (r_1,r_2) with the bifurcation set in green and the Maxwell set in red (minima) and dark red (maxima). This particular shape of the bifurcation set results from the overlap of two butterfly catastrophes, one for each of the critical points ϕ_c=0,π. This overlap is a consequence of the large value of the fourth order parameter B_4^4, which results in a large value of r_4. Case II: Parameters: S=10, D=0.4 K, E=-0.03 K, B_4^0=-0.00012 K, B_4^2=B_4^3=B_4^4=0. In this case the parameter space consists of two cusps along the r_1-axis and two butterfly catastrophes along the r_2-axis. 
In figure <ref>(b) we show one of the butterfly separatrices in parameter space (r_1,r_2). At the intersection of two Maxwell set lines we find that the semiclassical potential has three equally deep potential wells. §.§ Magnetic field trajectories The magnetic field components x and z are free, and we can use them to create a trajectory in the parameter space, i.e., we can manipulate the system such that it passes from one phase to another. We will do this with a time-varying magnetic field. A quantum phase transition occurs when the parameters r_1 and r_2 are varied from one region to another, crossing a Maxwell set for the global minimum. We consider the variation of the magnetic field to have the following form: B⃗=(|B|cosω t + B_x,0,0,|B|sinω t + B_z,0). In figures <ref>(a) and <ref>(b) this can be seen as the black circles, with the arrow indicating its direction: For case I we used B⃗=(15.63cosω t ,0,15.63sinω t ), and for case II we used B⃗=(4.47cosω t ,0,4.47sinω t -32.9). The white dots indicate the points where a Maxwell set for the global minimum is crossed. The effect of crossing the Maxwell set can be seen in figures <ref>(c) and <ref>(d), where the energy levels as a function of ω t are plotted. Here we can see how an avoided level crossing of the ground state (dotted line) occurs near the crossing of the Maxwell set (dashed line). In black lines we plotted the maxima and minima of the semiclassical potential as a function of ω t, and at the bottom a schematic picture of the semiclassical potential for each region is shown. An additional trajectory is considered in figure <ref>(a), where B_x is fixed at zero or almost zero, and the component B_z is varied passing through the point 4, which corresponds to the value of B_z for which the semiclassical potential of the system has three equally deep stable wells. In figure <ref>(e) we have plotted the ground state energy for one value of the magnetic field, at ω t=0.4 for case I, as a function of S. As S increases the ground state approaches the value of the global minimum of the semiclassical potential, shown as a horizontal red line. §.§ Bloch sphere visualization As shown in the schematic representation of the semiclassical potential in each of the phases, as the Maxwell set is crossed the deepest potential well changes from one to another. In this subsection we present a way to visualize the localization of the eigenstates of the Hamiltonian across these quantum phase transitions. Using the expression of the eigenvectors of the Hamiltonian in the |M⟩ basis in (<ref>) and the definition of the coherent states in (<ref>) we can define the complex function ⟨ψ_k | ζ⟩ as the projection of the eigenvector on the coherent states: ⟨ψ_k|ζ⟩ = ∑_M=-S^S c_k,M^*((2S)!/(S+M)!(S-M)!)^1/2(cosθ/2)^S-M(e^-iϕsinθ/2)^S+M, where, according to the definition of the coherent states, (θ,ϕ) are the angular coordinates on the Bloch sphere. The latitudes of the Bloch sphere are interpreted as the different spin projections M, with the south pole corresponding to M=S and the north pole to M=-S; for the case when S is an integer, M=0 corresponds to the equator. The projection of the coherent states onto the eigenstates has been treated in <cit.>. To show the information provided by (<ref>) we will apply it to the trajectory shown in figure <ref>(a). In figure <ref> we show |⟨ψ_1|ζ⟩ |^2 for the ground state for various values of ω t, to depict points before, during and after a transition. 
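A direct numerical transcription of this projection is sketched below (illustrative only). The coefficients c_k,M are assumed to be ordered with M ascending, as in the diagonalization sketch earlier, and the square-root binomial factor is taken from scipy; the commented example grid is a placeholder.

```python
import numpy as np
from scipy.special import comb

def coherent_overlap(coeffs, S, theta, phi):
    """|<psi_k|zeta(theta, phi)>|^2 on the Bloch sphere.

    coeffs: the c_{k,M} for M = -S,...,S, ordered with M ascending (e.g. one
    column of the eigenvector matrix from the diagonalization sketch above).
    theta, phi: scalars or broadcastable numpy arrays of angles.
    """
    M = np.arange(-S, S + 1)
    weight = np.sqrt(comb(2 * S, S + M))     # sqrt[(2S)! / ((S+M)!(S-M)!)]
    amp = np.zeros(np.broadcast(theta, phi).shape, dtype=complex)
    for c, m, w in zip(coeffs, M, weight):
        amp = amp + (np.conj(c) * w
                     * np.cos(theta / 2.0) ** (S - m)
                     * (np.exp(-1j * phi) * np.sin(theta / 2.0)) ** (S + m))
    return np.abs(amp) ** 2

# Example grid for a Bloch-sphere map of the ground state (column 0 of `states`):
# th, ph = np.meshgrid(np.linspace(0, np.pi, 91), np.linspace(0, 2 * np.pi, 181))
# husimi = coherent_overlap(states[:, 0], S=10, theta=th, phi=ph)
```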
Note that for some values of ω t the coherent state is localized to a definite area on the Bloch sphere. Thus, by controlling ω t and stopping when a certain value is reached, we can shift the coherent state to a certain area. In other words, we have a switch. To complement this picture, in figure <ref> we also show the semiclassical potential as a function of (θ,ϕ_c). At the start, ω t=0.3, the ground state is localized at the equator, with θ≈π /2 and ϕ=0, or in Cartesian coordinates about (x=1,y=0,z=0). As the magnetic field moves to ω t =1.1 we can see that the localization of the ground state starts to shift to the south pole. We can interpret this as a competition between the two wells close in energy, until, at ω t=1.6, the well at about θ≈π is the deepest. In turn, in figure <ref> for that value, the eigenstate of the ground state is now localized at the south pole. As ω t increases the ground state continues to travel along the Bloch sphere: It starts at the equator at about (x=1,y=0,z=0), then it goes to the south pole, then to the equator at about (x=-1,y=0,z=0), then to the north pole, and finally back to its starting point. Another interesting feature of these example cases is the presence of the triple point 4 in figure <ref>(b), where the semiclassical potential has three equally deep wells. In figure <ref> we show the structure of the eigenstates as a trajectory in parameter space passes through that point. In the top row we plotted the energy levels as a function of r_2, and show a zoomed view of the avoided level crossing at the triple point. At the bottom of the middle plot, schematic pictures of the potential in the different regions are shown. The crossing of the semiclassical triple point is depicted as a dashed vertical line at r_2=-44.3, while the avoided level crossing of the ground state is depicted as a dotted vertical line at r_2≈ -44.80887. The minima and maxima of the semiclassical potential are plotted as a function of r_2 and shown as black lines. In the bottom row the |⟨ψ_k|ζ⟩ |^2 functions are shown on the Bloch sphere, viewed from the top of the north pole, for the first three states. Here we can see how, at the point of the avoided level crossing of the ground state, the ground state distribution has components at the north pole and at opposite sides of the equator at (x=1,y=0,z=0) and (x=-1,y=0,z=0), a consequence of the three wells. The second excited state also shares this behaviour, while the distribution of the first excited state is localized at both sides of the equator at (x=1,y=0,z=0) and (x=-1,y=0,z=0). § CONCLUSIONS In the present work we have discussed and studied many properties of spin Hamiltonians with terms up to fourth order and found how, by the variation of the external magnetic field, the system can go through the different phases in parameter space. We applied the powerful method of catastrophe theory. To provide a real physical system, we studied two examples of single molecule magnets and showed that the sudden changes in fidelity and fidelity susceptibility are related to the drops in the relaxation time and the peaks in the relaxation rate. For each of these examples the parameter space was constructed, and it can be seen that the separatrices determine the structure of the eigenvalues of the Hamiltonian as a function of the magnetic field. We obtained hysteresis loops for two magnetic molecules and demonstrated that the applied model is able to describe the observations. 
When considering large values of the fourth order parameters the separatrices in parameter space obtain a more complicated structure, which permits the inclusion of more phases, and in particular there exists regions where the semiclassical potential has three stability wells. Next, we studied the manner on how to manipulate the position of the coherent state on the Bloch sphere, using a varying magnetic field. An oscillatory external magnetic field allows the system to travel through many of these different phases and the localization of the eigenstates can be visualized on the Bloch sphere, as well as how it changes along the sphere with the varying magnetic field, by performing a projection of the coherent states onto the eigenstates. This result can be viewed as the manipulation of a qubit state by means of a varying magnetic field, choosing the value of ω t in order to reach the desired orientation on the Bloch sphere. In one of the examples we found a point in parameter space which corresponds to the presence of three equally deep semiclassical potential wells. Using the visualization of the distribution of the ground state on the Bloch sphere, we found that at the avoided level crossing point of the ground state with the first excited state, three separated regions on the sphere (north pole and (± 1,0,0)) have non zero contribution which is a consequence of the three competing stability points. We acknowledge the financial support from PAPIIT-DGAPA IN117923, and IN116824. § REFERENCES 84 url<#>1#1urlprefixURL nanomagnets Gatteschi D, Sessoli R and Villain J 2006 Molecular Nanomagnets (Oxford: Oxford University Press) jiang2012a Jiang S, Goss K, Cervetti C and Bogani L 2012 Sci. China Chem. 55 867–882 jenkins2017a Jenkins M D, Duan Y, Diosdado B, García-Ripoll J J, Gaita-Ariño A, Giménez-Saiz C, Alonso P J, Coronado E and Luis F 2017 Phys. Rev. B 95 064423 morenopineda2018a Moreno-Pineda E, Godfrin C, Balestro F, Wernsdorfer W and Ruben M 2018 Chem. Soc. Rev. 47 501–513 gaitaarino2019a Gaita-Ariño A, Luis F, Hill S and Coronado E 2019 Nat. Chem. 11 301–309 morenopineda2021a Moreno-Pineda E and Wernsdorfer W 2021 Nat. Rev. Phys. 3 645–659 zhang2021a Zhang Z, Wang Y, Wang H, Liu H and Dong L 2021 Nanoscale Res. Lett. 16 77 cejnar2021a Cejnar P, Stránský P, Macek M and Kloc M 2021 J. Phys. A: Math. Theor. 54 133001 hagen2024a Hagen W R 2024 Molecules 29 2254 gatteschi2003a Gatteschi D and Sessoli R 2003 Angew. Chem. Int. Ed. 42 268–297 cornia2001a Cornia A, Gatteschi D and Sessoli R 2001 Coord. Chem. Rev. 219-221 573–604 arecchi1972a Arecchi F T, Courtens E, Gilmore R and Thomas H 1972 Phys. Rev. A 6(6) 2211–2237 thom Thom R 1975 Structural Stability and Morphogenesis (Reading: W. A. Benjamin) gilmore Gilmore R 1981 Catastrophe Theory for Scientists and Engineers (New York: Wiley) arnold Arnold V I 1986 Catastrophe Theory (Berlin: Springer-Verlag) lopezmoreno1996a López-Moreno E and Castaños O 1996 Phys. Rev. C 54(5) 2374–2384 lohrrobles2021a Lohr-Robles D S, López-Moreno E and Hess P O 2021 Nucl. Phys. A 1016 122335 lohrrobles2023b Lohr-Robles D S, López-Moreno E and Hess P O 2023 J. Phys. A: Theor. Math. 56 505301 mannini2010a Mannini M, Pineider F, Danieli C, Totti F, Sorace L, Sainctavit Ph, Arrio M A, Otero E, Joly L, Cezar J C, Cornia A and Sessoli R 2010 Nature 468 417–421 fernandez1998a Fernández J F, Bartolomé J and Luis F 1998 J. Appl. Phys. 83 6940–6942 luis1998a Luis F, Bartolomé J and Fernández J F 1998 Phys. Rev. B 57 505 serrano2020a Serrano G, et al. 2020 Nat. Mater. 
19 546–551 fichthorn1991a Fichthorn K A and Weinberg W H 1991 J. Chem. Phys. 95 1090–1096 jansen1995a Jansen A P J 1995 Comput. Phys. Commun. 86 1–12 liu2009a Liu G B and Liu B G 2009 Appl. Phys. Lett. 95 183110 liu2010a Liu G B and Liu B G 2010 Phys. Rev. B 82 134410 cuppen2013a Cuppen H M, Karssemeijer L J and Lamberts T 2013 Chem. Rev. 113 8840–8871 thomas1996a Thomas L, Lionti F, Ballou R, Gatteschi D, Sessoli R and Barbara B 1996 Phys. Rev. B 66 073309 wernsdorfer2000a Wernsdorfer W, Caneschi A, Sessoli R, Gatteschi D, Cornia A, Villar V and Paulsen C 2000 Phys. Rev. Lett. 84 2965 ueda2002a Ueda M, Maegawa S and Kitagawa S 2002 Phys. Rev. B 66 073309 dressel2002a Dressel M, Gorshunov B, Rajagopal K, Vongtragool S and Mukhin A A 2002 Phys. Rev. B 67 060405(R) adams2013a Adams S T, da Silva Neto E H, Datta S, Ware J F, Lampropoulos C, Christou G, Myaesoedov Y, Zeldov E, and Friedman J R 2013 Phys. Rev. Lett. 110 087205 villain1994a Villain J, Hartman-Boutron F, Sessoli R and Rettori A 1994 Europhys. Lett. 27 159 leuenberger1999a Leuenberger M N and Loss D 1999 Europhys. Lett. 46 692 leuenberger2000a Leuenberger M N and Loss D 2000 Phys. Rev. B 61 1286 gu2010a Gu S J 2010 Int. J. Mod. Phys. B 24 4371–4458 zanardi2006a Zanardi P and Paunković N 2006 Phys. Rev. E 74(3) 031123 buonsante2007a Buonsante P and Vezzani A 2007 Phys. Rev. Lett. 98(11) 110601 you2007a You W L, Li Y W and Gu S J 2007 Phys. Rev. E 76(2) 022101 tzeng2008a Tzeng Y C, Hung H H, Chen Y C and Yang M F 2008 Phys. Rev. A 77(6) 062321 tian2011a Tian L J, Zhu C Q, Zhang H B and Qin L G 2011 Chin. Phys. B 20 040302 plotz2011a Plötz P, Lubasch M and Wimberger S 2011 Physica A 390 1363–1369 rams2011a Rams M M and Damski B 2011 Phys. Rev. Lett. 106(5) 055701 wernsdorfer2000b Wernsdorfer W, Sessoli R, Caneschi A, Gatteschi D, Cornia A and Mailly D 2000 J. Appl. Phys. 87 5481–5486 vergnani2012a Vergnani L, Barra A L, Neugebauer P, Rodriguez-Douton M J, Sessoli R, Sorace L, Wernsdorfer W and Cornia A 2012 Chem. Eur. J. 18 3390–3398 lopezmoreno2014a López-Moreno E and Grether M 2014 Quantum Stud.: Math. Found. 1 203–211
http://arxiv.org/abs/2406.18175v1
20240626084951
Galaxy spectroscopy without spectra: Galaxy properties from photometric images with conditional diffusion models
[ "Lars Doorenbos", "Eva Sextl", "Kevin Heng", "Stefano Cavuoti", "Massimo Brescia", "Olena Torbaniuk", "Giuseppe Longo", "Raphael Sznitman", "Pablo Márquez-Neila" ]
astro-ph.GA
[ "astro-ph.GA", "astro-ph.IM", "cs.AI" ]
Lars Doorenbos, Eva Sextl lars.doorenbos@unibe.ch, sextl@usm.lmu.de AIMI, ARTORG Center, University of Bern, Murtenstr. 50, CH-3008 Bern, Switzerland Universitäts-Sternwarte, Fakultät für Physik, Ludwig-Maximilians Universität München, Scheinerstr. 1, 81679 München, Germany Universitäts-Sternwarte, Fakultät für Physik, Ludwig-Maximilians Universität München, Scheinerstr. 1, 81679 München, Germany INAF - Astronomical Observatory of Capodimonte, Salita Moiariello 16, I-80131 Napoli, Italy INFN - Sezione di Napoli, via Cinthia 9, 80126 Napoli, Italy Department of Physics, University Federico II, Strada Vicinale Cupa Cintia, 21, 80126 Napoli, Italy INAF - Astronomical Observatory of Capodimonte, Salita Moiariello 16, I-80131 Napoli, Italy INFN - Sezione di Napoli, via Cinthia 9, 80126 Napoli, Italy Department of Physics and Astronomy `Augusto Righi', University of Bologna, via Piero Gobetti 93/2, 40129 Bologna, Italy Department of Physics, University Federico II, Strada Vicinale Cupa Cintia, 21, 80126 Napoli, Italy AIMI, ARTORG Center, University of Bern, Murtenstr. 50, CH-3008 Bern, Switzerland AIMI, ARTORG Center, University of Bern, Murtenstr. 50, CH-3008 Bern, Switzerland § ABSTRACT Modern spectroscopic surveys can only target a small fraction of the vast amount of photometrically cataloged sources in wide-field surveys. Here, we report the development of a generative AI method capable of predicting optical galaxy spectra from photometric broad-band images alone. This method draws from the latest advances in diffusion models in combination with contrastive networks. We pass multi-band galaxy images into the architecture to obtain optical spectra. From these, robust values for galaxy properties can be derived with any methods in the spectroscopic toolbox, such as standard population synthesis techniques and Lick indices. When trained and tested on 64 × 64-pixel images from the Sloan Digital Sky Survey, the global bimodality of star-forming and quiescent galaxies in photometric space is recovered, as well as a mass-metallicity relation of star-forming galaxies. The comparison between the observed and the artificially created spectra shows good agreement in overall metallicity, age, Dn4000, stellar velocity dispersion, and E(B-V) values. Photometric redshift estimates of our generative algorithm can compete with other current, specialized deep-learning techniques. Moreover, this work is the first attempt in the literature to infer velocity dispersion from photometric images. Additionally, we can predict the presence of an active galactic nucleus up to an accuracy of 82 %. With our method, scientifically interesting galaxy properties, normally requiring spectroscopic inputs, can be obtained in future data sets from large-scale photometric surveys alone. The spectra prediction via AI can further assist in creating realistic mock catalogs. [0]*The first two authors share the first authorship and are in alphabetical order. § INTRODUCTION Astrophysics, like many other sciences, is currently undergoing a significant transformation due to the avalanche of high-quality data from various sky surveys <cit.>. Merely forty years ago, astronomical image data sets were measured in kilo- or megabytes. Twenty years ago, by the beginning of the 21st century, data releases for the first large-scale survey, the Sloan Digital Sky Survey (SDSS), started. Other well-known surveys such as Pan-STARRS <cit.>, DESI <cit.>, and Euclid <cit.> followed. 
DESI alone now has captured more galaxies than 10 years of SDSS <cit.>. The currently built Vera C. Rubin Observatory in Chile is designed to collect 20 terabytes per night over the time span of 10 years <cit.>. Yet, most objects are only captured via photometry or photometric imaging. Quality spectra are only available for a small portion of the captured galaxies due to the extended duration needed for exposure and the limited capacity of spectroscopic instruments to handle multiple measurements simultaneously <cit.>. Furthermore, the magnitude limits for spectroscopy are much brighter, thus preventing spectroscopic observations of faint galaxies. For instance, Legacy Survey of Space and Time (LSST) investigations will likely acquire spectra for less than 1% of the galaxies involved <cit.>. Greatly simplified, the overall goal of photometry is to map observed colors to galaxy properties. When only apparent magnitudes in several broad-band filters are available for a galaxy, performing a panchromatic SED fitting (from UV to IR) has become a popular approach. The best suitable spectral energy distribution is chosen from a large sample of pre-computed templates, which assume, among others, different star formation and metal enrichment histories <cit.>. The use of state-of-the-art publicly available codes is discussed in <cit.>. The SED fitting method allows for the evaluation of critical characteristics like the star formation rate (SFR) and stellar mass of a galaxy, which are crucial for comprehending their formation and evolution. Normally, these methods use solely color information, not images with morphological features (i.e., the brightness distribution across the galaxy). With such reconstructed SEDs, it is not possible to make trustworthy and accurate age or metallicity claims, as was recently shown in <cit.>. Even the use of additional narrow-band filters to given broad-bands does not seem to improve the recovery of galaxy parameters beyond stellar mass and SFR <cit.>. Determining stellar age, stellar metallicity, and dust properties with photometry alone seem, therefore, rather hopeless. With the increasing integration of Artificial Intelligence (AI) in astronomy, new avenues have opened up for deriving parameters from photometric data with machine learning techniques. <cit.> predicted stellar atmospheric parameters like effective temperature from stellar photometric images, and <cit.> succeeded in recognizing active galactic nuclei (AGN) with a deep neural network that takes photometric magnitudes as input. Several classification tasks of celestial objects, which would traditionally require spectroscopic data, will soon be fully automated using only photometric bands <cit.>. In this paper, we combine our proposed generative AI system with large quantities of photometric images to go a step further toward an all-embracing use of the upcoming full-sky surveys. We present a pilot study of predicting optical spectra directly from photometric broad-band images in the SDSS survey. We show that their spectral resolution and overall quality are sufficient to analyze them with common spectroscopic tools. In doing so, we can recover interesting physical parameters such as the population age or mean metallicity. This could be a classical absorption feature analysis (i.e., Lick Indices) or a full-spectral fitting code as part of stellar population synthesis. 
Unlike previous attempts in the literature (e.g., <cit.>, we choose to make a detour over the generation of optical spectra and not predict the physical quantities directly from the images. Therefore, we can detach ourselves from training with specialized model templates and from answering unsolved questions such as which full-spectral fitting code performs best <cit.>. Once the spectrum is created from the generative AI, the choice of the spectroscopic analysis method is left to the user. It must only be ensured that the quality and the information content of the predicted spectra are similar to the real, observed ones. We explore this in the following pages. In section <ref>, we describe the utilized SDSS data and introduce our machine learning pipeline and implementation in section <ref>. We use a diverse toolset to evaluate the predicted spectra, for which we explain the details in section <ref>. Major results are presented in section <ref>. A final summary with an outlook is found in <ref>. § DATA The significance of data in machine learning cannot be overstated. The type and quantity of data provided to an algorithm play a crucial role in its ability to extract information and generate accurate results. Creating artificial spectra in a spectral resolution suitable for follow-up analysis likely requires a large dataset with hundreds of thousands of entries. The natural first choice for this is the Sloan Digital Sky Survey. In its third phase in 2014, it encompassed more than one-third of the entire celestial sphere and is freely accessible[<https://live-sdss4org-dr12.pantheonsite.io/scope/>]. Numerous research groups have already applied a variety of machine learning techniques to SDSS in order to answer scientific questions about galaxies, quasars, and various other celestial objects ( to name a few). We aim to build further upon this work. §.§ Multiband images We utilize pre-processed broadband images from the dataset made available[ <https://deepdip.iap.fr/#item/60ef1e05be2b8ebb048d951d>] by <cit.> for their work on photometric redshift estimation with a deep convolutional neural network. For each broadband in SDSS, the galaxy’s brightness is captured in an image. Their total dataset contains 659 857 galaxies of the 12th Data Release (DR12) of the Sloan Digital Sky Survey <cit.> in a redshift range of 0 < z < 0.7. For each galaxy, astrometrically calibrated imaging data in u, g, r, i, and z filters with 0.396^ ''/pixel sampling is available through the automated data-pipeline of SDSS <cit.>. Further re-sampling and stacking of the obtainable broad-band images by <cit.> leads to a final 64×64×5 data cube centered on each spectroscopic target. For our purposes, a further cut in redshift is made later on. Example images can be found in figure <ref>. It should be emphasized that images with adjacent objects (in either the forefront or background) are not removed from the sample. As a result, together with the size, morphology, and surface brightness, information about the environment is also fed into the generative model <cit.>. Thus, more information is available to break various degeneracies. §.§ Galaxy spectra After obtaining the images, the corresponding optical galaxy spectra and their labels were extracted from the SDSS database. For such large queries, the most efficient way is using CasJobs <cit.>, a flexible, advanced SQL-based interface. 
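To give a flavor of such a query (the complete selection used in this work is listed in the appendix), a simplified sketch follows. The table and column names are taken from the public SDSS schema, but the quality cuts shown here are illustrative only and do not reproduce the full set of criteria described below; the optional astroquery call is likewise only one possible way to submit the query.

# Illustrative CasJobs/SkyServer-style query for assembling the spectroscopic sample.
# NOTE: this is a simplified sketch; the exact selection (quality flags, Petrosian
# radii, surface-brightness fits) used in the paper is given in its appendix.
QUERY = """
SELECT TOP 10 s.specObjID, s.plate, s.mjd, s.fiberID, s.z AS z_spec,
       p.objID, p.petroRad_r
FROM   SpecObj AS s
JOIN   PhotoObj AS p ON s.bestObjID = p.objID
WHERE  s.class = 'GALAXY'
  AND  s.z BETWEEN 0.05 AND 0.15
  AND  s.zWarning = 0
"""

def fetch_sample():
    """Submit the query through astroquery (an assumed, optional dependency);
    alternatively, the string can be pasted into the CasJobs web interface."""
    from astroquery.sdss import SDSS
    return SDSS.query_sql(QUERY)

if __name__ == "__main__":
    print(QUERY)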
We obtain galaxy spectra in the spectroscopic redshift range of 0.05 < z ≤ 0.15, which show no problematic flags[<https://live-sdss4org-dr16.pantheonsite.io/tutorials/flags>] and reasonable Petrosian radii and de Vaucouleurs/exponential surface brightness fits. The complete query code is shown in Appendix <ref>. This final sample contains 270 621 image-spectra pairs spanning a wide range of morphology classes, such as spiral galaxies with bulge and/or bar components, ellipticals, and irregular galaxies at low redshift with their distinct spectral features. We split the data into 268 603 samples for training the algorithm, 512 for validation/fine-tuning the model's performance, and 1 506 as the test set. We set apart these fractions of the data for the validation and test sets due to the large overall size of the dataset and the large number of computationally involved analyses of the generated spectra we performed. Nonetheless, a test set with over 1 500 objects allows us to draw robust conclusions in the following sections, and small ratios for testing and validation are standard practice in deep learning when dealing with big datasets and/or computationally involved evaluations <cit.>. Moreover, in absolute terms, our validation and test sizes match those used in deep learning practice, for instance, in semantic segmentation <cit.>. The test dataset was never used in training; instead, it is used to assess the model's performance on unseen data. Figure <ref> shows some artificial spectra from the test set compared to their observed counterparts. The morphological categories “elliptical”, “spiral”, and “uncertain” for each galaxy in the test set, which are used later on, are determined by the citizen science project Galaxy Zoo <cit.>. Galaxies were labeled as “uncertain” if their images were not clearly voted on as spiral or elliptical. These galaxies are most likely composite bulge-disk systems in which neither the bulge nor disk overshadows the other according to <cit.>. In this context, it should also be mentioned that a clear identification of merger systems via images is difficult as galaxies can appear to be isolated galaxies in the image (lacking visual features like tidal features) but appear to have undergone a recent merger when further investigated <cit.>. Therefore, analysis of the galaxies with respect to the categories merger/no merger was omitted. We reduced the resolution of the spectra from the original R ∼ 2000 to R ∼ 1500 at 5000 due to the high computing capacity needed. Nonetheless, this is still larger than comparable studies <cit.>. These smoothed spectra are used during training; therefore, the predicted spectra also show this reduced spectral resolution. As final preparatory work, the spectra are interpolated and tailored to 1 Angstrom steps in the range of 4000-8499. Each spectrum is then normalized to a value of 1 between 6900-6950 (R-band) rest-frame wavelength. This region does not contain prominent absorption or emission features and is, therefore, suited for the task, as we want the overall continuum to be scaled. Other possible wavelength regions would be 4400-4450 (B-band) or 5500-5550 (V-band). This does not change the results of the upcoming analysis, especially not those of the full-spectral fitting <cit.>. For the spectral fits in section <ref>, the spectrum to be fitted and the model templates always receive such a scaling for numerical stability. 
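As an illustration of these preparatory steps, the short sketch below interpolates a spectrum onto the common 1 Å grid and applies the R-band window normalization. The function and variable names are ours, and whether the mean or the median flux is used inside the window is an assumption; shifting the rest-frame window to the observed frame with the known z_spec is our reading of the description above.

import numpy as np

# Wavelength grid and normalization window quoted in the text.
GRID = np.arange(4000.0, 8500.0, 1.0)          # 1 Angstrom steps, 4000-8499 A
NORM_WINDOW_REST = (6900.0, 6950.0)            # R-band rest-frame window

def preprocess_spectrum(wave_obs, flux_obs, z_spec):
    """Interpolate an observed spectrum onto the common 1 A grid and scale it
    so that the flux in the (redshifted) 6900-6950 A rest-frame window is ~1."""
    flux = np.interp(GRID, wave_obs, flux_obs, left=np.nan, right=np.nan)
    lo, hi = np.array(NORM_WINDOW_REST) * (1.0 + z_spec)
    in_window = (GRID >= lo) & (GRID <= hi)
    scale = np.nanmedian(flux[in_window])       # median in the window (assumption)
    return flux / scale

# Toy example with a flat spectrum: the output is close to 1 everywhere.
wave = np.linspace(3800.0, 9200.0, 5400)
flux = np.full_like(wave, 3.0)
print(preprocess_spectrum(wave, flux, z_spec=0.10)[:5])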
Occasionally occurring sky lines in the training data are not removed, and the spectra are not shifted to the rest frame. We also decided against removing galactic extinction in the images and the spectra, since the overall reddening is rather small: 85% of our sample has E(B-V) values lower than 0.05 <cit.>.

[Figure: An overview of different predicted spectra (orange) and their associated smoothed observed spectra (blue) in the rest frame from the test set. Grey-shaded areas depict the corresponding 1-σ error bars of the smoothed observed spectra. The galaxies show different morphologies (according to the Galaxy Zoo classification: uncertain, elliptical, uncertain, spiral), as visible in the corresponding SDSS tri-color images from the inner three bands on the right. These jpeg images are available on the Skyserver and only serve as an illustration for the reader; the algorithm uses 5-band images. The first example is a possible merger; the third example is classified as a LINER by <cit.>; and the galaxy at the bottom is best described as starbursting. The galaxy names read (from top to bottom): SDSS J205600.32-053137.8, SDSS J150109.86+472039.6, SDSS J105728.22+065954.4, SDSS J025019.52-070223.4.]

§ PROCEDURE We frame the problem of predicting galaxy spectra from photometry as modeling the conditional distribution of spectra given an image[Parts of this section are based on <cit.>, presented at the NeurIPS 2022 workshop on Machine Learning and the Physical Sciences.]. Specifically, we want to approximate the empirical distribution p(spectrum|image), which is defined by the training data, using a neural network. However, directly learning this distribution over high-resolution spectra is both challenging and computationally expensive. Instead, we follow recent works on high-resolution image synthesis (e.g., <cit.>) and decompose the problem into two parts. First, we learn the simpler distribution of low-resolution spectra conditioned on images, p(low-res spectrum|image) ≡ p_lr. Second, we learn the image-conditional distribution over full-resolution spectra, with an additional condition on the corresponding low-resolution spectrum, p(spectrum|image, low-res spectrum) ≡ p_sr. By combining the two, we can generate a spectrum for a given image by drawing a sample from p_lr and then using it as the condition for p_sr. This effectively upsamples the low-resolution spectrum to the original resolution, which is fit for analysis. In practice, we learn both distributions with conditional diffusion models (CDMs), currently the state of the art in generative modeling <cit.>. While CDMs can generate realistic samples that closely mirror the characteristics of the training set, they do not allow for density estimation. Consequently, sampling from the CDMs results in multiple possible spectra for a given object, without information on their likelihood. Nonetheless, we need to select one of these spectra for our subsequent analyses. To decide which spectrum to select for follow-up evaluation, we use multimodal contrastive learning <cit.> as a heuristic to find high-likelihood samples of the learned distribution, which has proven to work well in practice <cit.>. Multimodal contrastive learning maps images and spectra into a shared representation space, in which images and spectra with similar representations are likely to belong to the same object.
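Such a mapping is typically trained with a symmetric, CLIP-style InfoNCE objective over matched image-spectrum pairs. The sketch below is a generic stand-in for this objective; the temperature value and the exact formulation used for our networks are assumptions here.

import torch
import torch.nn.functional as F

def contrastive_loss(img_emb, spec_emb, temperature=0.07):
    """Symmetric InfoNCE loss over a batch of matched image/spectrum
    embeddings of shape [batch, dim]."""
    img_emb = F.normalize(img_emb, dim=-1)
    spec_emb = F.normalize(spec_emb, dim=-1)
    logits = img_emb @ spec_emb.T / temperature      # pairwise similarities
    targets = torch.arange(logits.shape[0], device=logits.device)
    loss_i2s = F.cross_entropy(logits, targets)      # match each image to its spectrum
    loss_s2i = F.cross_entropy(logits.T, targets)    # match each spectrum to its image
    return 0.5 * (loss_i2s + loss_s2i)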
We rank the generated spectra based on the similarities between their representations and that of the original image, then select the best-matching samples. Our full method for predicting spectra from photometry begins by sampling several 563-dimensional spectra for an image with the low-resolution CDM. We select the three best synthetic spectra according to the low-resolution contrastive network. Then, we generate five full-resolution spectra for each of the selected low-resolution spectra. Finally, we select the best-matching spectrum with the full-resolution contrastive network, giving us the final synthetic spectrum for the object. A visualization of our pipeline is provided in Figure <ref>. We provide further technical details in Appendix <ref>. §.§ Implementation details We train all networks using Adam <cit.> with a learning rate of 10^-4, with the contrastive networks using a weight decay of 10^-3. We use a batch size of 512 for the low-resolution experiments and 224/340 for the full-resolution CDM and contrastive network, respectively. The CDM uses 250 timesteps with a cosine variance schedule, exponential moving average with α=0.9999, a ResNet-18 <cit.> for the image encoder τ_θ and a 1D U-net <cit.> for the denoising autoencoder. The contrastive networks use a ResNet-50 <cit.> for the image encoder and a 1D ResNet encoder for the spectra, both with a latent dimensionality of 128. We standardize the images by channel and predict the logarithm of the spectra so that the ranges of values better match the Gaussian noise used by the CDM. We apply data augmentation to the images to artificially increase the size of the training dataset and improve the generalizability of the algorithm, as, for instance, flipping the image of an object should not affect its spectrum. Specifically, we flip images horizontally and vertically with a probability of 0.5 and apply random cropping. We do not apply data augmentation to the spectra. All models are trained with 2 NVIDIA GeForce RTX 3090s until convergence of the Mean Squared Error (MSE) between generated and ground-truth samples on the validation set. We will make our code available upon publication. § EVALUATION METHODS In this section, we present the tools used to evaluate the information content of our artificial spectra. A successful generative model should not only produce galaxy spectra with similar shapes and matching overall features (i.e., a low MSE) compared to their observed counterparts. More importantly, the extracted stellar population properties from the artificial spectra should coincide with those imprinted in the observed spectra. The spectroscopic toolbox we utilize for this contains two stellar population fitting codes (subsection <ref>) capable of recovering the mean age, metallicity, extinction, and stellar mass of a galaxy from its spectrum. It also encompasses measurements of the strength of prominent emission and absorption lines (Lick Indices) and band-head features (such as Dn4000). A detailed explanation and the corresponding definitions are found in subsection <ref>. Finally, we assess whether the generative model has learned to link typical AGN emission to photometric features and evaluate it through popular performance metrics in machine learning (subsection <ref>). 
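To make the selection procedure described above concrete before turning to the individual evaluation tools, the following sketch summarizes it. All callables (the two samplers and the four contrastive encoders) are placeholders for the trained networks, and the number of initial low-resolution draws is an assumption on our part.

import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def predict_spectrum(image,
                     sample_lowres,                   # callable: image -> 563-dim low-res spectrum
                     sample_fullres,                  # callable: (image, low-res) -> full-res spectrum
                     embed_image_lr, embed_spec_lr,   # low-resolution contrastive encoders
                     embed_image_hr, embed_spec_hr,   # full-resolution contrastive encoders
                     n_lowres=16, keep_lowres=3, n_fullres=5):
    """Two-stage sampling-and-ranking procedure; all model callables stand in
    for the trained networks described in the text."""
    # Stage 1: draw low-resolution candidates and rank them against the image.
    lr_candidates = [sample_lowres(image) for _ in range(n_lowres)]
    img_lr = embed_image_lr(image)
    lr_scores = [cosine(embed_spec_lr(s), img_lr) for s in lr_candidates]
    best_lr = [lr_candidates[i] for i in np.argsort(lr_scores)[::-1][:keep_lowres]]

    # Stage 2: upsample each selected candidate several times and rank again.
    hr_candidates = [sample_fullres(image, lr) for lr in best_lr for _ in range(n_fullres)]
    img_hr = embed_image_hr(image)
    hr_scores = [cosine(embed_spec_hr(s), img_hr) for s in hr_candidates]
    return hr_candidates[int(np.argmax(hr_scores))]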
Table: Spectral indices used in this work (wavelength ranges in Å).

Index    Blue side band [Å]     Line [Å]               Red side band [Å]      Reference  Frame
Dn4000   3850.000 – 3950.000                           4000.000 – 4100.000    <cit.>     air
Mg b     5142.625 – 5161.375    5160.125 – 5192.625    5191.375 – 5206.375    <cit.>     air
Fe5270   5233.150 – 5248.150    5245.650 – 5285.650    5285.650 – 5318.150    <cit.>     air
Fe5335   5304.625 – 5315.875    5312.125 – 5352.125    5353.375 – 5363.375    <cit.>     air
Hβ       4827.875 – 4847.875    4847.875 – 4876.625    4876.625 – 4891.625    <cit.>     air

SDSS spectra are provided in vacuum wavelengths, but many indices are measured in air. If so, the SDSS wavelengths are converted using <cit.>; the resulting error would in any case be negligible. Note also that Dn4000, unlike the other spectral indices, is measured using the flux per unit frequency (F_ν), not the flux per unit wavelength (F_λ).

§.§ Full spectral fitting A possible way of evaluating the quality of artificially generated spectra is to run full-spectral fitting codes on them. As the name states, these techniques work on the complete available wavelength range, not only on distinct spectral features such as the Balmer lines. Today, they are a standard procedure to determine galaxy properties, including age, stellar metallicity, stellar mass, and dust extinction of composite or spatially resolved stellar populations. In this work, we used pPXF <cit.> and FIREFLY <cit.>, two well-known non-parametric population synthesis codes available to the astrophysical community. They do not assume a star formation history a priori but try to recover it, along with other properties, from the spectrum, and are therefore quite general (e.g., also with respect to merger events). The user should nevertheless not blindly trust the complete star-formation history (SFH); a reduction to a robust young and an old population can sometimes be the only way to arrive at statistically sound statements <cit.>.

pPXF applies a penalized maximum likelihood approach to fit single-burst stellar population models (SSPs) to spectra. By imposing a penalty on pixels that are not well characterized by the templates, it works to minimize template mismatch. One of the advantages of the code is the possibility of fitting gas emission lines simultaneously with the stellar kinematics (velocity dispersion) and the stellar population. With this software, we are using the included model templates from <cit.>. We aligned our work with the Jupyter Notebook examples[<https://github.com/micappe/ppxf_examples>] available online, which show the use of an additional bootstrapping method <cit.> with pPXF during the fit. As a first step in this procedure, a regularization (a smoothing of the template weights, with the regularization parameter set to 10) is applied. The emerging residuals are stored and then used to bootstrap the spectrum 50 times. This leads to robust average galaxy properties (mean age, mean metallicity, ...) and an estimate of their uncertainties (see the cited examples for more details). When replicating the examples, we strongly recommend using the newer extinction keyword rather than the now obsolete keywords used in the examples.

FIREFLY is a chi-squared minimization fitting code that fits combinations of SSPs to spectroscopic data, following an iterative best-fitting process controlled by the Bayesian information criterion. This approach has been designed to recover galaxy properties reliably, especially in low-S/N regimes, where accurately deriving properties from spectral fits becomes more and more challenging <cit.>.
Extinction due to dust is not fitted in a conventional way: a high-pass filter (HPF) is used to rectify the continuum before fitting, allowing for the removal of large-scale modes of the spectrum associated with dust and/or poor flux calibration. Regions with nebular emission lines are masked out during the process. The package is supplied with the pre-calculated stellar population models of Maraston & Strömbäck (MILES as the stellar library, combined with a Kroupa IMF in a fuel-consumption approach). We also ran tests with the models used in <cit.>, derived with FSPS v3.2 <cit.> and a combination of several stellar catalogs (MILES together with additional templates for post-AGB stars, WR stars, etc.), with a Chabrier IMF <cit.> and MIST isochrones <cit.>. This leads to substantially longer run times of the code but not to improved results in the evaluation metrics presented in this section. The lack of hot stars (T > 9000 K) in the MILES library <cit.> does not seem to play a crucial role in our setup. We therefore remain with the Maraston & Strömbäck SSPs. The corresponding age grid covers SSPs between 6.5 Myr and the age of the Universe, while the sampled metallicities read [Z] = -1.3, -0.3, 0.0, and 0.3. Regions with emission lines, or with absorption features polluted by emission, are masked for a functioning fit. As the extinction law, <cit.> is chosen. For the observed spectra, the velocity dispersion (σ_∗) retrieved from pPXF was used in the FIREFLY input file. These values agree within the error bars with the values for σ_∗ deposited in the SDSS archive. The artificial spectra tend to show velocity dispersions similar to those of their real counterparts (see figure <ref>). As an additional asset, we also integrate the bootstrapping method from above into the Python routines. Due to longer calculation times, the spectra are only bootstrapped 10 times. Yet, FIREFLY is a code already designed to work in a low-S/N regime, and we will see that the artificial spectra perform equally well with both codes.

A key distinction between the codes lies in the treatment of dust: while pPXF assumes a dust reddening law and treats dust as an adjustable fit parameter, FIREFLY determines the effect of dust prior to the main fitting by comparing the large-scale modes of the data and the models <cit.>. We emphasize that the focus of this work does not lie in the comparison between the different codes or SSPs but in the performance of the predicted spectra in contrast to the true observed spectra. Absolute values from full spectral fitting are always affected by systematic differences in the technique itself and in the underlying stellar population models and their ingredients <cit.>.

[Figure: Another four predicted spectra (orange) and their corresponding full-spectrum fits with a population synthesis code. A FIREFLY result is shown in the first two rows; regions with potential emission lines (not necessarily visible in each spectrum) are masked out for the fit and displayed here on a gray background. The third and fourth spectra are fitted with pPXF, with emission and absorption fitted in parallel. The images on the right again show the corresponding SDSS tri-color images for illustration; the algorithm uses 5-band images.]

Figure <ref> shows four full-spectral fit examples with both codes on selected galaxies in the test set.

§.§ Spectral Features Before the widespread use of full-spectral fitting techniques as described above, a limited number of prominent stellar absorption features were examined.
Their strength or equivalent width is not sensitive to flux calibration and is available even at modest resolution and low S/N optical spectra. Extensive research in the 1980/1990s by <cit.> led to the Lick index system, a set of 25 optical absorption-line indices, the most commonly used in absorption-line analyses of old stellar populations. Each index in this system is defined by a central “feature bandpass” and two adjacent windows, blue- and rewards, for defining so-called “pseudo-continua” acting as baselines. Some indices are known to be more age-sensitive (e.g., Hydrogen Balmer lines) or metallicity-sensitive (e.g., Fe, Mg features). The well-known additional dependence on α-enhancement (α / Fe) complicates the picture, and efforts went into constructing combined metallicity indices with a weak α-dependence (see below). In our work, we measure prominent Lick indices (equivalent widths compared to two sidebands) and also Dn4000 as a “bandhead” index (difference of two passbands) based on the recipes in the Mapping Nearby Galaxies at Apache Point Observatory (MaNGA) data pipeline <cit.> with the program <cit.>. This package does not take into account uncertainties in the flux. SDSS galaxies also do not show strong wavelength-dependent noise, which would distort the measured Lick values <cit.>. Table <ref> shows the utilized indices and their pseudo-continua wavelength ranges. As a robust metallicity indicator independent of α-enhancement, the averaged Lick index [MgFe]^' from <cit.> is computed from our measurements: [MgFe]^'=√(Mg b·( 0.72 ·Fe5270 + 0.28 ·Fe5335) ) We do deteriorate the spectral resolution to reach the original wavelength-dependent Lick resolution (FWHM[4000]=11.5 , FWHM[6000]=8.9 ). Lick Indices were originally mainly used for quiescent galaxies without visible emission lines, but star-forming galaxies in our test sample can have strong Hβ-emission. Even the vanishing star formation rates of ellipticals can trigger this emission <cit.>. In order to prevent contamination in this line index, we measure this index not from the original spectra but from the model fits obtained from . The fit only contains stellar light and no emission from gas. Other spectral features are not affected by this issue. Velocity dispersion broadening has another non-negligible effect on absorption index measurements and has to be accounted for. Many artificial spectra show similar velocity dispersions as the real observations (figure <ref> top right), yet some deviate strongly. We apply the method presented in <cit.> to remove the effect of the velocity dispersion altogether, keeping in mind that single SDSS galaxies can show values of σ_∗≈ 400 km/s. The authors assume a simple polynomial relation between spectral indices with and without velocity dispersion broadening and provide suitable coefficients for our used indices: x_0=p_0 + p_1 x^p_2 + p_3 σ^p_4 + p_5 x^p_6σ^p_7 σ is the velocity dispersion measured in km/s and x, x_0 are spectral index values before/after correction. The values for the coefficients p_0 to p_7 for each index can be found in <cit.> table 2. The uncertainties traced through this method are usually below 5 %. In order to use equation <ref>, robust velocity dispersions of our artificial and observed spectra are needed. We measure them with (see <ref>). §.§ Performance on AGN recognition For a typical binary classification task in machine learning, various performance metrics are used to evaluate how accurate the predictions are. 
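Before detailing those metrics, we briefly illustrate the index measurements of the previous subsection in code. The following minimal sketch, with helper names of our own choosing, measures a side-band equivalent width and Dn4000 on a rest-frame spectrum, combines the iron and magnesium indices into [MgFe]^', and applies the velocity-dispersion correction from above; it assumes a uniform wavelength grid and ignores flux uncertainties, as does the measurement package used in the text.

import numpy as np

def pseudo_continuum(wave, flux, blue, red):
    """Linear pseudo-continuum through the mean flux in the blue and red sidebands."""
    def band_mean(lo, hi):
        m = (wave >= lo) & (wave <= hi)
        return np.mean(flux[m]), 0.5 * (lo + hi)
    fb, wb = band_mean(*blue)
    fr, wr = band_mean(*red)
    slope = (fr - fb) / (wr - wb)
    return fb + slope * (wave - wb)

def lick_ew(wave, flux, blue, line, red):
    """Equivalent width (in Angstrom) of an absorption index, following the
    side-band definitions listed in the table above."""
    cont = pseudo_continuum(wave, flux, blue, red)
    m = (wave >= line[0]) & (wave <= line[1])
    return np.trapz(1.0 - flux[m] / cont[m], wave[m])

def dn4000(wave, flux_lambda):
    """Narrow Dn4000 break: ratio of the mean F_nu in 4000-4100 A to 3850-3950 A."""
    f_nu = flux_lambda * wave**2            # F_nu is proportional to lambda^2 * F_lambda
    red = (wave >= 4000) & (wave <= 4100)
    blue = (wave >= 3850) & (wave <= 3950)
    return np.mean(f_nu[red]) / np.mean(f_nu[blue])

def mgfe_prime(mgb, fe5270, fe5335):
    """Combined, alpha-insensitive metallicity index [MgFe]' defined above."""
    return np.sqrt(mgb * (0.72 * fe5270 + 0.28 * fe5335))

def correct_for_sigma(x, sigma, p):
    """Velocity-dispersion correction defined above; p holds the eight published
    coefficients p0..p7 for the index in question."""
    return p[0] + p[1] * x**p[2] + p[3] * sigma**p[4] + p[5] * x**p[6] * sigma**p[7]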
In subsection <ref>, we will ask whether a galaxy harbors an AGN or not according to its position in the Baldwin-Phillips-Terlevich (BPT) diagram <cit.>. For this, either the real observed spectra or the artificial spectra are used as a starting point. If both spectra (the true and the predicted one) lead to the same classification as AGN, we speak of a true positive (TP). If the emission lines in both spectra point towards a pure star-forming galaxy, it is a true negative (TN). FN (false negative) denotes the number of AGNs that do not show up in the artificial spectra, and FP (false positive) the number of galaxies wrongly flagged as AGN by the artificial spectra. The three metrics used in our work are:

Accuracy = (TP + TN) / Total

Precision = Purity = TP / (TP + FP)

Recall = Sensitivity = TP / (TP + FN)

An evaluation based on accuracy should only be used for approximately equally sized groups, as unbalanced group sizes lead to largely overestimated accuracy scores.

§ RESULTS

§.§ Quality assessment First, we mathematically assess the performance of our generative model before discussing the spectroscopic quality of the predicted spectra. One possible way to evaluate the difference between the predicted and observed spectra is to compare them at each wavelength point. To do this, we shift the predicted redshift of the artificial spectra to match the true redshift of the observed spectra. Figure <ref> shows the quality indicator Δ for the whole test set of 1 506 galaxies, split into the different morphological groups defined by Galaxy Zoo. It is defined as

Δ = (1/N) ∑_λ |O_λ - F_λ| / O_λ,

where N is the number of wavelength points and O_λ and F_λ denote the observed and the predicted spectrum, respectively. As such, Δ can be interpreted as a measure of the mean relative deviation between the true and the artificial galaxy spectrum (after matching the redshifts). The median Δ in the test set is 5.5%, and 90% of all spectra have values of less than 10%. An inspection of the worst matches (Δ > 30%) reveals that the generative model heavily under- or overestimated the strength of the Balmer emission lines in these cases. The strong discrepancy in a few wavelength points dominates the sum in equation (<ref>); the same holds true for the MSE error. A further inspection indicates that these poorly forecasted galaxies are mostly noisy low-mass starburst galaxies (EW(Hα) > 35 Å) categorized as "spiral" and "uncertain" in roughly equal parts. When calculating

χ^2 = ∑_λ |O_λ - F_λ|^2 / σ_λ^2,

which incorporates the uncertainty σ_λ of the observed spectra, all three morphological groups perform equally well. This is surprising, since emission lines are usually narrow in comparison to the width of a broad-band filter, and photons from the continuum, not from the emission lines, account for most of the signal. Such a performance can only be achieved by relating emission line fluxes not only to the averaged colors of the galaxy but also to the distribution of colors/magnitudes in the images. How well the generative model recovers line ratios is discussed in section <ref>. We do not have error bars available for the predicted spectra, as we focused on obtaining a single best estimate for the spectrum of an object. However, while CDMs do not allow us to access the learned distribution directly, one can, in principle, sample multiple candidate spectra per image. In future work, we plan to explore how to use these to provide uncertainty estimates for our predicted spectra.

§.§ Redshift estimate Since we do not remove the redshift in our galaxy sample, the generative model predicts not only the spectrum itself but also an accompanying redshift.
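As a brief aside, the quality indicator Δ and the χ^2 comparison from the previous subsection reduce to a few lines of array arithmetic. A minimal sketch, assuming the observed and predicted spectra are sampled on a common wavelength grid, reads:

import numpy as np

def mean_relative_deviation(obs, pred):
    """Quality indicator Delta: mean relative deviation between the observed
    and the predicted spectrum, evaluated per wavelength pixel."""
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    return np.mean(np.abs(obs - pred) / obs)

def chi_square(obs, pred, sigma):
    """Chi-square of the prediction given the observational uncertainties."""
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    return np.sum(np.abs(obs - pred) ** 2 / np.asarray(sigma, float) ** 2)

# Toy example: a ~3% mean deviation.
obs = np.array([1.00, 1.10, 0.90, 1.05])
pred = np.array([1.02, 1.08, 0.95, 1.00])
print(mean_relative_deviation(obs, pred))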
We measure this "photo-z" in the artificial spectra and compare it to the true spectroscopic redshift (z_spec). The term is slightly odd here, as we measure the redshift in a spectrum; however, this underlying artificial spectrum is a prediction based on photometric images alone. As there is no established term for this situation, we refer to these redshift estimates as "predict-z". Commonly used statistics for redshift quality are:

* Δz = (z_pred - z_spec) / (1 + z_spec)

* σ_MAD = 1.4826 · median(|Δz - median(Δz)|)

The MAD value (for Median Absolute Deviation) is a common tool for comparing the quality of predicted redshifts. It is a general measure of dispersion, similar to the standard deviation but more robust to outliers. <cit.>, whose training set we are using, achieved σ_MAD = 0.00912, significantly lower than other machine learning techniques on the same samples. Later work with various ML architectures expanded on this, e.g., <cit.>. On our rather limited redshift sample (0.05 < z < 0.15) we achieve σ_MAD = 0.01177 and obtain 2 outliers with |Δz| > 0.05 out of 1 506 galaxies, see Figure <ref>. Unlike for other methods, this competitive result is only a byproduct of our generative AI, which is not limited to predicting redshift alone. However, we tested here only on a relatively narrow redshift range; the cited literature mostly focuses on redshifts up to z ∼ 1 or beyond. Predicting redshift from images, in general, offers a significant advantage over traditional methods that rely on limited and biased information (i.e., only colors and magnitudes) taken from pre-processed catalogs <cit.>. The user is simply not biased by selecting measured properties obtained from the galaxy image beforehand. The question of which features are most important does not present itself when using images, as basically all imaginable features are present <cit.>. While supervised convolutional neural networks are the natural choice for redshift prediction with photometric images <cit.>, we show that conditional diffusion models are also successful when a small detour over spectra is taken. Improving photometric redshift estimates is one of the most pressing needs of the next generation of photometric surveys to unlock their full potential <cit.>. Further work will show whether our approach can help in this regard.

§.§ Spectral Indices In the previous section, we showed that the observed and predicted spectra agree quantitatively to a high degree. To further quantify the ML output, we measure spectral indices at several key absorption features in all galaxies of the test set. These can be sensitive to metallicity (Fe5270, Fe5335, Mg b, [MgFe]^'), to age (Hβ), or to both (Dn4000). For a proper measurement of the correct spectral regions, all spectra are shifted into the rest frame beforehand. Figure <ref> shows the velocity-dispersion-corrected indices for the predicted and the observed spectra of the uncertain and elliptical morphological groups. For [MgFe]^' and Dn4000, we see a strong correlation between observed and predicted values, which also translates into high Pearson correlation coefficients (ρ=0.71 and ρ=0.91). For Mg b, the values correlate well, ρ=0.60, and the correlation improves markedly, up to ρ=0.78, if only the subset of ellipticals is used. <cit.> argue that the forbidden [N I] line emission at 5199 Å can add to the red Mg b pseudo-continuum and enlarge the index value (and lead to higher metallicities). Since [N I] is generally correlated with [O III] emission, Mg b values in star-forming galaxies are generally unreliable.
A wrong prediction of [N I] from our generative model might, therefore, negatively influence the comparison to the observed spectra. For the two iron indices, a one-to-one relation is no longer clearly identifiable. The correlation coefficients are below 0.5 for all indices; Fe5335 is as low as ρ = 0.2. These two neighboring features are generally less prominent in the galaxy spectra. Fe5270, in particular, testifies of cool main sequence stars (around 4500 K) and MK III giants <cit.>. The fraction of light from such stars in our spectra is small in comparison to the contribution of luminous main-sequence or supergiant stars. §.§ Full-spectral fitting results The metallicity of stars in galaxies is an important indicator of the chemical evolution of a galaxy. Yet, measuring gas or stellar metallicity from photometric data alone presents challenges due to the age-metallicity-dust degeneracy. This means that a galaxy can appear red for various reasons, including the cessation of star formation, high metallicity, or strong attenuation. Additional information like morphological features can help to lift these problems. In the second column of figure <ref>, the differences between metallicity derived for the observed spectra are compared with the values determined by the predicted spectra. The first row shows results, the second row shows the FIREFLY fits. Concerning , 86% of all predicted spectra coincide in metallicity within 0.10 dex. Taking into account the obtained error bars from the bootstrapping procedure, which are of the order of 0.07 dex in the median, 95% of metallicity values derived from the predicted spectra coincide with their real counterparts. As expected, the elliptical galaxies show narrower distributions than spirals as they have an overall narrower range of metallicities (see for instance , figure 5) and easier star-formation histories. For FIREFLY, the scatter is larger. 44% of all galaxies coincide within 0.10 dex and 70% within 0.2 dex. Higher uncertainties in the metallicity (0.11 dex in the median) due to the lower number of bootstrapping cycles do not counterbalance this. The values of the flux-weighted age show similar results. For , the scatter is 0.23 dex in the median, and for , 0.33 dex. For that, the deviations in reddening are notably smaller for FIREFLY independent of the morphological group (0.08 dex vs 0.13 dex in ). This most likely has to do with the different fitting algorithms themselves, which are further explored in the next subsection. The velocity dispersion in the spectra cannot be fitted with FIREFLY (see section <ref>); this is only implemented in . The shown distribution in the top right of figure <ref> has an overall mean of -1.4 km/s and a standard deviation of 40 km/s. The distribution is visibly smaller for ellipticals (in dark green), with a standard deviation of 29 km/s. Whether these numbers are good enough for a scientific application depends on the task in mind. But one should not forget that precise measurement of stellar velocity dispersion provides additional insights into the gravitational potential well that encompasses the stars. Since this potential is primarily influenced by dark matter, the velocity dispersion also indirectly indicates the characteristics of the dark matter halo <cit.>. Even though this quantity is of prime scientific importance, the authors are unaware of attempts in the literature to predict the stellar velocity dispersion from photometry. 
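The agreement fractions and scatter values quoted in this subsection can be reproduced with a small helper of the following form; this is a sketch with names of our own choosing, and the exact statistics used for the figures may differ in detail.

import numpy as np

def agreement_stats(pred, obs, tolerance):
    """Fraction of objects whose predicted value agrees with the observed one
    within `tolerance`, plus the median offset and a robust (MAD-based) scatter."""
    diff = np.asarray(pred, float) - np.asarray(obs, float)
    frac = np.mean(np.abs(diff) <= tolerance)
    scatter = 1.4826 * np.median(np.abs(diff - np.median(diff)))
    return frac, np.median(diff), scatter

# e.g. for metallicities in dex: agreement_stats(z_pred, z_obs, tolerance=0.10)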
In the closing of this chapter, the absolute values of the metallicity can be considered. In figure <ref>, the overall mass-metallicity relationship is shown for the predicted spectra (top with FIREFLY, bottom with ). These coincide overall with the lookback evolution models by <cit.>. Assuming a redshift-dependent relationship between gas mass and stellar mass enabled them to derive numerical models of chemical evolution that are easy to calculate. The all-in-all scatter in the mass-metallicity relation is expected as it also depends on SFR as a third parameter <cit.>. Galaxies with lower SFR tend to have higher metallicities at the same stellar mass and vice versa. There is a slight discrepancy between the values derived from the two different codes and SSPs. Metallicities with tend to be systematically lower by ∼ 0.1 dex. This was also observed in <cit.> fitting spatially resolved passive early-type galaxies from the MaNGA survey. One possible origin might be the spacing of the metallicity grid of the SSPs or the overall spectral library. §.§ Degeneracies Between Age, Metallicity and Reddening It has been long recognized that optical spectra exhibit degeneracies in terms of age, metallicity, and dust properties. This means that different stellar populations with varying ages, metallicities, and dust properties can have nearly identical optical spectra, making it challenging to distinguish between them based solely on their observations <cit.>. With the advent of full-spectral fitting, this problem has become more pressing. The breakdown of the integrated spectrum into a combination of different building blocks (the simple stellar populations) does not necessarily lead to a unique solution, meaning that a different set of SSPs can also create the same flux output. Also, slight fluctuations due to noise can impact the fitting result obtained. There exist intrinsic limitations to the precision to which age and metallicity can be determined without reformulating the problem (i.e., by having a larger wavelength coverage , or using higher S/N data). Fitting the rather low S/N SDSS galaxy spectra is also expected to be plagued by this problem. We, therefore, use , which uses a regularization procedure that is said to be a suitable mathematical tool for such an ill-posed problem. Additionally, we implemented a bootstrapping procedure that tackles the impact of noise in the spectra. Yet, both codes still show degeneracies in age, metallicity, and reddening, which is not surprising. Figure <ref> shows the difference of metallicity between the predicted and observed metallicity on the x-axis and the deviation in log age on the y-axis. The overall color coding marks the discrepancy in the color excess E(B-V), i.e., reddening. FIREFLY (bottom panel) shows a preferred degeneracy in age and metallicity, whereas, for , a degeneracy between age and color excess emerges. This is identical to what <cit.> found using mock-spectra from the magnetohydrodynamical simulation IllustrisTNG[https://www.tng-project.org/]. Yet, we discourage a direct comparison of both codes as they do not use identical SSPs in our case. Nevertheless, the generative AI's predicted spectra can be used with both codes without problems despite incorporating completely different fitting procedures and templates. 
Also, it seems that a not negligible part of the deviation in the physical quantities between the predicted and observed spectra seen in the histograms of Figure <ref> do not necessarily come from a different information content of the observed/predicted spectral pair but from the overall degeneracies which plague full-spectral fitting overall in this S/N regime. As a result, even better predictions of the generative AI might not inevitably lead to better conformity as the limiting factor is the reconstruction of physical quantities from spectra, not the spectra themselves. §.§ Bimodality of Galaxies A further testing ground for the predicted galaxy spectra is the bimodality of galaxies. For decades, a separation of the galaxy population into two distinct groups has been observed in various physical quantities, including color, mass, age, and spectral indices such as Dn4000. This bimodality suggests that galaxies can be broadly classified into two categories: one population dominated by older, more massive, and redder galaxies with lower star formation rates ('the red sequence') and another population consisting of younger, less massive, bluer galaxies with higher star formation rates ('blue cloud') <cit.>. The analysis of the predicted spectra is able to reproduce this two-fold distribution. Figure <ref> shows the relation of physical quantities (reddening, stellar age, σ_∗, Dn4000, u-r) in relation to each other. All of these data points were solely derived from the artificial spectra and (u-r) color from photometry. As most of these relations are already seen partially in photometric colors itself <cit.>, our machine learning algorithm is supposed to pick them up, and this subsection can therefore be seen as a consistency check. The relations in the upper panels and the lower left in figure <ref> were shown in an identical manner for SDSS galaxies in the work of <cit.> using the full-spectral fitting code <cit.>. Their analysis also goes into detail about why one can expect these relations. On this occasion, it should be noted that the depicted log Age (mean stellar age), is actually the mean flux-weighted stellar age, i.e. a quantity biased towards the age of stars that produce most of the flux in a spectrum. These are mostly very young, high-mass stars formed in recent star formation episodes, which die off quickly (on the scale of less than several hundred million years). Dn4000 and colors are also heavily affected by the flux of these types of stars, which drives some of the observed bimodal behavior seen in Figure <ref>. Yet, <cit.> used the original available SDSS spectra for their analysis at that time (DR2). We derived the same relations from purely AI-predicted spectra. This reinforces the idea that the generative AI not only produces spectra that have an overall suitable shape but also captures the nature of the galaxies. Morphology, size, and colors of the galaxy are interlinked with the properties of the stellar populations through complicated relations <cit.>. Our method is capable of taking full advantage of this. The most interesting relation not mentioned yet is the relation in the lower right panel of figure <ref>. With , we are able to retrieve values for the velocity dispersion σ_∗ of the predicted spectra. So, at the end of the day, our generative AI predicts σ_∗ from photometric broad-band images alone. In the literature, σ_∗ is commonly measured with the help of spectra or indirect with the help of scaling relations <cit.>. 
In practice this can mean for instance that the stellar mass of early-type galaxies (ETGs) is measured photometrically and in a second step σ_∗ is inferred from its correlation to stellar mass. Here, we derive the values through the utilization of on the artificial spectra, but other template-fitting techniques are also possible. When taking the complete test set (no split in different groups), the measurements of σ_∗ of the observed and the predicted spectra agree with a negligible median offset of 0.5% with a dispersion of 20%. There seem to be no other attempts (machine-learning or traditional) on the prediction of σ_∗ in the literature from photometric images. As the colors of a galaxy are not affected by a different velocity dispersion given a spectrum <cit.>, the 2D information seems to be the deciding factor here. The generative AI seems to pick up the fundamental plane of elliptical galaxies (relation between the effective radius, average surface brightness, and central velocity dispersion) and is therefore able to predict σ_∗ values for the corresponding galaxies. For late-type galaxies (LTGs) the situation is less clear, but even there relations between σ_∗ and other physical properties can be found <cit.>. But this also explains why the generative AI is doing a better prediction task for elliptical galaxies (see again Figure <ref> top right). §.§ AGN classification As a final quality test, we focus on AGN recognition in emission-line galaxies. As the generative model predicts the complete spectrum of the galaxy, not only stellar light, it suggests itself to make use of the predicted emission lines. We make use of the classic BPT diagnostic diagram <cit.>, which compares the relative strength of collision lines of metals to recombination lines of hydrogen. The primary diagram assesses the line ratio [O III]λ 5007/ Hβ to [N II] λ 6584/ Hα. Due to the pairs of wavelengths being close together, these ratios are not affected by differential extinction. We measure the lines after subtracting the fit of the stellar population from FIREFLY. Figure <ref> top shows the BPT diagram for the observed spectra. The galaxies tend to lie in two well-defined sequences in the BPT diagram, leading to a characteristic “seagull” shape. The left-wing sequence in dark blue is associated with star-forming galaxies, while the right-wing sequence in red is associated (partially) with other ionization mechanisms (mostly AGN). The solid line represents the classification curve from <cit.> defining the upper limit for finding pure SF galaxies. Galaxies were labeled as passive when the equivalent widths (EW) of Hα were smaller than 1 or when a visual examination showed no spectra left after the subtraction of the continuum. This threshold has the advantage of being independent of any S/N ratio, which is not available in predicted spectra in the first place. <cit.> argue that this criterion is independent of data quality and has a better astrophysical meaning. We measure the EW of Hα with the python package . More problematic is the case when one of the four prominent emission lines used in the classical BPT diagram is missing, as this galaxy cannot be placed in figure <ref>. Our simple solution is then the assignment as a passive galaxy even though this is not physically correct: The center of this galaxy can simply be (partially) dust-obscured. Ratios between obscured and un-obscured AGN can even be as large as ∼ 3 <cit.>. 
As this is a general problem of the optical bands we cannot solve, wavelength regimes in the IR provide useful alternatives with less extinction and open up a realm for new line diagnostics <cit.>. The BPT diagram of artificial spectra is shown in the right panel of Figure <ref>. From the 1 506 synthetic spectra, 1 079 also showed measurable emission according to the criteria from above. Table <ref> shows the 3 × 3 confusion matrix, with rows corresponding to actually being AGN, SF, and passive and columns corresponding to the predicted classification from the artificial spectra. In other words, each row of the matrix represents the objects in the actual class (passive, pure SF, AGN), while each column represents the galaxies in the predicted class from the artificial spectra. From these values, the evaluation metrics can be calculated. The accuracy of predicting a possible AGN is 82%; the precision resides at 73%. The recall is on the same level with 74%. Usually, precision (everything flagged AGN should indeed be an AGN) and recall (not missing any AGNs) are in tension; tuning machine learning on one of these quantities usually diminishes the other, and the optimal trade-off depends on the task at hand. As this paper acts as a feasibility study, the obtained performance metrics already show encouraging results, especially as these AGN predictions come for free from the generated spectra. No specific algorithm solely used for this purpose had to be trained. For instance, <cit.> tried to answer the same question (distinguishing AGN and non-AGN with photometry) using various machine-learning methods applied specifically for this task. Instead of images, they used dereddened SDSS colors, the dereddened magnitude in the r band, the fiber magnitude in the r band, and the photometric redshift. A support vector machine (SVM) achieved the best result with an accuracy of 76% with a test set size of 25466 objects. For a proper analysis, the uncertainty in the measurements has to be taken into account. The classifications for our observed spectra will certainly not be completely correct and contain some errors. We follow <cit.> in estimating the error budget: Hβ and [O III] λ 5007 are free from overlaps with other prominent emission lines and, therefore, rather easy to fit with a Gaussian. Uncertainties arise through noise in the flux and mainly systematic calibration errors, with an estimate of 15%. Contrarily, Hα is partially blended with [N II] λλ 6548, 6584, which leads to higher errors around 20% for Hα and 30% of [N II] λ 6584. Measuring the equivalent width of Hα was multiplied with a Gaussian error with a standard deviation of 1 to incorporate uncertainty. Some measured EWs are slightly lower than 1 , and the corresponding galaxy is labeled as passive; with the error budget, they can now find their way in the diagram. All of these errors can be propagated and give a rough probability for the label of the galaxy from the observed spectra (i.e., 95% probability of being an AGN). The same analysis can be done for the artificial galaxy spectra. Now, random number draws can be drawn where each label (true and predicted) is flipped according to the assigned probability <cit.>. Galaxies in the upper right corner of the BPT diagram have low changes of classification other than AGN (the BPT diagram is in log-scale), whereas pure star-forming galaxies in the vicinity of the Kauffmann line can swap more easily towards AGN candidates. 
We run 5000 experiments with a simple random number generator in Python and obtain the following uncertainties for the evaluation metrics (see subsection <ref> again for their definitions) from the slightly changed labels: * Recall = Completeness = 0.736 ± 0.018 * Precision = 0.725 ± 0.016 * Accuracy = 0.818 ± 0.011 As the BPT diagram is on a log scale, uncertainties in the evaluation metrics turn out to be quite small in the end. We still show them here for completeness. More interesting is the reason why the generative AI can predict correct emission lines in the first place. This is at first puzzling since most of the photons in a given broad-band filter come from the stellar continuum and not from an emission line except in extreme cases. Only the broadband photometry of infant massive star clusters is known to be heavily affected by nebular emission, both in lines and continuum <cit.>. Yet, our algorithm produces the correct AGN-emission line features only having the photometric images at hand. On second thought, AGNs do not occur at random. A large-scale bar, easily identifiable in photometric images, can, for instance, fuel infalling gas into the central regions of a galaxy (if gas is available in the first place) and trigger both central star formation and the activation of AGNs. Also, star formation activity is stronger in galaxies with large bars <cit.>. Work on spatially-resolved spectra in the CALIFA survey by <cit.> showed further links between AGNs and their hosts. AGNs seem to be concentrated in the high-mass, high-metallicity regime. Their ages are between those of pure blue cloud and red sequence galaxies, and some morphological types are preferred (Sab-Sb types). Additionally, galaxy color and morphology can be affected by the presence of an AGN <cit.>. All of these relations, some of them stronger than others, help relate the optical emission lines with the photometric images. Therefore, it is no surprise that photometric colors (solely colors) correlate with the equivalent widths of emission lines <cit.>. Yet these authors also found a certain degree of degeneracy between the colors of many passive galaxies and SF ones. The spatial 2D information of colors we provide seems to break this as the distinction between SF and AGN is solid in Table <ref>. Another hint on the feasibility of our approach can be found in <cit.>, who used a neural network to predict the equivalent width of bright optical emission lines from the continuum, having the creation of mock catalogs in mind. That such approaches proved to be a success (also earlier work from with PCA) is not a surprise if one considers how emission lines are produced in the first place. Nebular line emission is created when recombination occurs or when collisionally excited states in atoms or ions decay. This, on the other hand, depends on both the ionizing radiation field (ionization parameter, properties of the stars ionizing the gas) and the metallicity of the gas (metals act as coolants in a nebula) <cit.>. Photoionization codes in combination with stellar population synthesis like in <cit.> were therefore capable of showing how emission line intensities correlate with the stellar population properties (age, metallicity). The latter also heavily affects the continuum shape and the broad-band photometry. 
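For reference, the classification and the evaluation metrics used in this section can be summarized in a short sketch. It assumes the line ratios have already been measured on the continuum-subtracted spectra, uses the standard Kauffmann et al. (2003) demarcation curve (not written out explicitly above), and leaves the treatment of passive galaxies (EW(Hα) < 1 Å) to a separate step, as in the text; the function names and the interpretation of the label-flipping Monte Carlo are ours.

import numpy as np

def bpt_class(oiii_hb, nii_ha):
    """Classify emission-line galaxies in the BPT plane; `oiii_hb` and `nii_ha`
    are the ratios [OIII]5007/Hbeta and [NII]6584/Halpha. Everything above the
    Kauffmann et al. (2003) curve (or right of its asymptote) is counted as AGN."""
    x, y = np.log10(nii_ha), np.log10(oiii_hb)
    with np.errstate(divide="ignore"):
        curve = np.where(x < 0.05, 0.61 / (x - 0.05) + 1.3, -np.inf)
    return np.where(y > curve, "AGN", "SF")

def binary_metrics(true_labels, pred_labels, positive="AGN"):
    """Accuracy, precision and recall for the AGN / non-AGN decision."""
    t = np.asarray(true_labels) == positive
    p = np.asarray(pred_labels) == positive
    tp, fp = np.sum(t & p), np.sum(~t & p)
    fn, tn = np.sum(t & ~p), np.sum(~t & ~p)
    accuracy = (tp + tn) / t.size
    precision = tp / (tp + fp) if (tp + fp) else np.nan
    recall = tp / (tp + fn) if (tp + fn) else np.nan
    return accuracy, precision, recall

def metric_uncertainty(true_p_agn, pred_p_agn, n_draws=5000, seed=None):
    """Monte Carlo estimate of the metric uncertainties: in each draw, the true
    and predicted labels are redrawn according to their classification
    probabilities, as described in the text."""
    rng = np.random.default_rng(seed)
    results = []
    for _ in range(n_draws):
        t = rng.random(true_p_agn.size) < true_p_agn
        p = rng.random(pred_p_agn.size) < pred_p_agn
        results.append(binary_metrics(np.where(t, "AGN", "SF"),
                                      np.where(p, "AGN", "SF")))
    return np.mean(results, axis=0), np.std(results, axis=0)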
§.§ Disentangling color and spatial information Our generative AI learns the distribution of spectra conditioned on 5-band images with a resolution of 64x64, containing spatially resolved information (morphology) and pure color/magnitude information. The question arises: which of both is more important in successfully generating synthetic spectra? To explore this, we train multiple CDMs with the same overall architecture as before but different input images by degrading their resolution and color information. Instead of the 64x64 images, stepwise coarser ones with 32x32, 16x16, 8x8, 4x4, 2x2 and finally 1x1 pixels are used. The overall area covered by the image (the FoV) remains the same, as well as the assignment of objects in the training, validation, and test set. Additionally, cases were considered where not all five bands were used; in the most extreme case, only the g-band alone was given to the generative AI. After training for each case, the spectra of the test set are predicted. Not surprisingly, we find that by reducing the image resolution and thus gradually destroying the spatial information but keeping five bands, the mean squared error (MSE) of the generated spectra increases as listed in the first part of table <ref>. This means that predicted spectra now differ more strongly from the observed ones. Also, having more bands available improves the performance of our method (second part of the table). Figure <ref> illustrates the differences in the generated spectra, showing results for different spatial resolutions and a different number of bands. We show an example of what the images look like after reducing resolution in Figure <ref>. Our experiment shows that both spatial information and magnitudes are important for the successful generation of spectra, although which is more important is inconclusive. Removing the two outermost bands has approximately the same effect as quartering the number of pixels in an image (one step coarser spatial resolution). Incorporating additional bands, especially small-band filters, will likely improve the results even further. Additional higher spatial resolution might not have such an impact. Finally, using only one band is not sufficient enough to provide a usable spectrum, as the SEDs and emission lines strongly deviate from the real observations. § SUMMARY In this work, we have explored the possibility of generating complete galaxy spectra from photometric broadband images alone. We applied a versatile spectroscopic toolkit to evaluate the quality and information content of our “artificial” or “predicted” spectra, leading us to the following conclusions: * The mean relative deviation between the observed and predicted galaxy spectra showed the largest discrepancies in low-mass star bursting spirals due to wrong predictions of the extensive emission lines. A refined analysis incorporating the uncertainties of the observed flux at these data points showed that the galaxy spectra from different morphological groups were predicted equally well. * A comparison between spectral indices measured in predicted and observed spectra showed excellent agreement in Dn4000 and good agreement in the prominent spectral features [MgFe]^', Hbeta, and Mg b. Difficulties for the generative AI arise only in weaker Lick indices such as Fe5270 and Fe5335. * The predicted and observed spectra were evaluated with two stellar population fitting codes ( & ). 
The mean metallicity of the galaxy was recovered from the artificial spectrum; 86% of all predicted spectra coincide in metallicity within 0.10 dex with . performed worse with 70% agreement within 0.2 dex. Overall, the use of delivered metallicities approximately 0.1 dex higher than those of . This is consistent with earlier findings from the literature comparing the performance of these two fitting codes. Values of the mean age of the stellar population and extinction showed good agreement: 0.3 dex scatter in log age and 0.1 dex scatter in E(B-V) as an overall rule of thumb. * With our procedure generating artificial spectra, it becomes possible to predict the central velocity dispersions from photometric images alone. To our knowledge, this is the first attempt in the literature to do so. On our test set, we obtain values that are consistent with values from the observed spectra within 20%. * The presented machine learning algorithm recovers the famous bimodality of galaxy populations in colors, Dn4000, mean stellar age, extinction, and velocity dispersion using solely artificial spectra. It links colors and their 2D distribution (including morphological features) in the photometric images with physical quantities retrievable from spectra. * It is possible to identify AGN candidates from the photometric images with an accuracy of 82%. For this, emission line ratios of the artificial spectra were evaluated in the BPT diagram. In short, our approach can successfully predict various galaxy properties without explicitly training to do so. We show reasonable estimates for quantities such as age, metallicity, dust reddening, and velocity dispersion from photometric images alone by making a detour over artificial spectra, and more properties can easily be derived. We believe the most promising applications of our method are in upcoming all-sky surveys such as Euclid and LSST, which will only have spectroscopic information on a small subset of the objects for which photometric images will be taken. By generating artificial spectra we can, for instance, determine objects that are likely to be interesting and perform more targeted spectroscopic follow-ups. § ACKNOWLEDGMENTS Special thanks go to Rolf-Peter Kudritzki and Luca Tortorelli for useful comments on the course of this work and the manuscript draft. This work was funded by the Swiss National Science Foundation (SNSF), research grant 200021_192285 “Image data validation for AI systems”. ES acknowledges support from the Computational Center for Particle and Astrophysics (C2PAP) as part of the Munich Excellence Cluster Origins funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany’s Excellence Strategy EXC-2094 390783311. SC and MB acknowledge the ASI-INAF TI agreement, 2018-23-HH.0 “Attività scientifica per la missione Euclid - fase D". Funding for SDSS-III has been provided by the Alfred P. Sloan Foundation, the Participating Institutions, the National Science Foundation, and the U.S. Department of Energy Office of Science. The SDSS-III web site is <http://www.sdss3.org/>. 
SDSS-III is managed by the Astrophysical Research Consortium for the Participating Institutions of the SDSS-III Collaboration including the University of Arizona, the Brazilian Participation Group, Brookhaven National Laboratory, Carnegie Mellon University, University of Florida, the French Participation Group, the German Participation Group, Harvard University, the Instituto de Astrofisica de Canarias, the Michigan State/Notre Dame/JINA Participation Group, Johns Hopkins University, Lawrence Berkeley National Laboratory, Max Planck Institute for Astrophysics, Max Planck Institute for Extraterrestrial Physics, New Mexico State University, New York University, Ohio State University, Pennsylvania State University, University of Portsmouth, Princeton University, the Spanish Participation Group, University of Tokyo, University of Utah, Vanderbilt University, University of Virginia, University of Washington, and Yale University. HST(STIS), Swift(XRT and UVOT), AAVSO, CTIO:1.3m, CTIO:1.5m,CXO Astropy <cit.>, Specutils affiliated with Astropy, FIREFLY <cit.>, pPXF <cit.> PYPHOT <cit.>, NumPy <cit.>, Matplotlib <cit.>, SciPy <cit.> aasjournal § TECHNICAL DETAILS In this section, we provide a more detailed description of our method, including technical aspects of diffusion models and contrastive learning, following the workflow in Figure <ref>. In the first step, we learn the conditional distribution over low-resolution spectra p(low-res spectrum|image) with a conditional diffusion model. Diffusion models are generative models that model the probability distribution of observed data and can generate new samples that closely mirror the characteristics of the training set. Diffusion models are trained using a combination of two Markov chains of length T, known as the forward and backward processes <cit.>. The forward process gradually adds Gaussian noise to a sample x_0 with a fixed variance schedule β_1,...,β_T depending on the time step t ∈[ 1,T]. This process transforms the original sample x_0 into Gaussian noise at x_T. The backward denoising process aims to reverse the forward process by learning to predict the denoised version x_0 from x_t with an autoencoder neural network ϵ_θ. In practice, ϵ_θ learns to predict the noise <cit.>, with the simplified objective L_DM = _x_0,t,ϵ∼𝒩(0, 𝐈)(∥ϵ - ϵ_θ(x_t, t)∥^2 ). After training is complete, the backward process describes the data distribution. With it, diffusion models can generate new samples by sampling from an isotropic Gaussian distribution and using the backward process to iteratively denoise it for T timesteps. Vanilla diffusion models generate unconditional samples. In our case, these will be spectra that are similar to the spectra in the training dataset, but they are not related to a specific image. Instead, in order to generate spectra for a specific object, we condition the noise prediction network ϵ_θ on the input image. This allows us to sample from a conditional distribution of spectra given an image. These models are known as conditional diffusion models (CDM). Specifically, the CDM projects the condition image y to a latent space using a learnable mapping τ_θ and introduces it into the network with cross-attention at multiple layers of the autoencoder <cit.>. This gives the conditional loss function for a sample x, L_CDM = _x_0,y,t,ϵ∼𝒩(0, 𝐈)(∥ϵ - ϵ_θ(x_t, t, τ_θ(y))∥^2 ), i.e., it learns to denoise a spectrum given the corresponding image. 
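As an illustration of how this objective is optimized in practice, the following is a minimal PyTorch sketch of a single evaluation of L_CDM. The conditional noise-prediction network eps_model (in our setup a U-Net with cross-attention on τ_θ(y)) is left as an assumed callable, the linear β schedule with T = 1000 steps is the common DDPM default rather than the exact schedule of our runs, and the tensor shapes correspond to a batch of 563-bin low-resolution spectra conditioned on 5-band 64×64 images.

```python
import torch
import torch.nn.functional as F

# Fixed variance schedule beta_1..beta_T and the cumulative products
# \bar{alpha}_t = prod_{s<=t} (1 - beta_s) used by the forward process.
T = 1000
betas = torch.linspace(1e-4, 0.02, T)
alphas_bar = torch.cumprod(1.0 - betas, dim=0)

def cdm_loss(eps_model, x0, cond):
    """One stochastic estimate of L_CDM: add noise to the clean spectrum x0
    at a random timestep t and ask the network to predict that noise,
    conditioned on the photometric image `cond` (network assumed given)."""
    b = x0.shape[0]
    t = torch.randint(0, T, (b,), device=x0.device)
    eps = torch.randn_like(x0)
    a_bar = alphas_bar.to(x0.device)[t].view(b, *([1] * (x0.dim() - 1)))
    x_t = torch.sqrt(a_bar) * x0 + torch.sqrt(1.0 - a_bar) * eps
    return F.mse_loss(eps_model(x_t, t, cond), eps)

# Shape check with a dummy "network" that ignores its inputs:
x0 = torch.randn(8, 1, 563)        # batch of low-resolution spectra
cond = torch.randn(8, 5, 64, 64)   # corresponding 5-band 64x64 images
print(cdm_loss(lambda x, t, c: torch.zeros_like(x), x0, cond))
```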
Thus, the first step of our method is sampling multiple possible spectra for an object from the low-resolution CDM, CDMlr, given its image. However, as described in the main text, CDMs do not allow for density estimation and sampling from the CDMs results in multiple possible spectra for a given object without information on their likelihood. To select the most promising spectra for follow-up evaluation, we use multimodal contrastive learning <cit.> as a heuristic to find high-likelihood samples of the learned distribution. Contrastive learning is a method for self-supervised learning. Unlike traditional supervised learning, which relies on manual annotations, in self-supervised learning, the supervision is automatically generated from unlabelled input data. For contrastive learning, the underlying idea is that two views of a sample, such as two pictures of an object taken at different angles, should have a similar representation. Formally, contrastive learning optimizes a neural network to minimize the distance between the features of two views of the same object while maximizing the distances to the features of other samples. This contrastive loss for a batch of size N is given by <cit.> ℓ_i, j = -logexp (sim(z_i, z_j) / τ)/∑^2N_k=11_[k≠ i]exp (sim(z_i, z_k) / τ), for views i and j of an object, where z represents their feature representation, τ the temperature, and sim(·,·) a similarity measure, typically the cosine similarity. In our case, the two views of an object are given by its image and spectrum. When the different views come from different modalities, this technique is called multimodal contrastive learning. In our case, we learn to map images and spectra into a shared representation space, where images and spectra with similar representations are likely to belong to the same object. The second step of our algorithm involves using the contrastive network trained on the low-resolution spectra and corresponding images to rank the generated spectra by CDMlr based on the similarities between their representations and that of the original image. We then select the best-matching samples to continue with the next steps. At this point, we have generated a handful of high-likelihood but low-resolution spectra for an object. For step three, we train a second CDM to generate full-resolution spectra using the same process as in step one, with the only difference being an extra condition on the corresponding low-resolution spectrum. We do this by stacking the low-resolution spectrum with the original one channel-wise into x_t^comb, which is then used to train the full-resolution CDM: L_CDM_sr = _x_0,y,t,ϵ∼𝒩(0, 𝐈)(∥ϵ - ϵ_θ(x_t^comb, t, τ_θ(y))∥^2 ). In short, this model learns to denoise a full-resolution spectrum, given the corresponding low-resolution spectrum and image, and is used to generate full-resolution versions of the most artificial spectra from step two. The fourth step uses a second contrastive network to find the best matching full-resolution spectrum for an image. This contrastive network is trained as in step two, with the only difference being that it uses full-resolution instead of low-resolution images. In summary, our full method samples several 563-dimensional spectra for an image with CDMlr. We select the three best synthetic spectra according to the low-resolution contrastive network. Then, we generate five full-resolution spectra for each of the selected low-resolution spectra. 
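Step two (and, with the full-resolution encoders, step four described next) therefore reduces to a simple ranking by cosine similarity in the shared embedding space. A minimal sketch, where img_encoder and spec_encoder stand for the two trained contrastive towers (illustrative names; identity functions are used in the toy check):

```python
import torch
import torch.nn.functional as F

def rank_candidates(img_encoder, spec_encoder, image, candidates, k=3):
    """Embed one image and its N candidate spectra with the contrastive
    towers, score them by cosine similarity, and keep the k best."""
    with torch.no_grad():
        z_img = F.normalize(img_encoder(image.unsqueeze(0)), dim=-1)   # (1, d)
        z_spec = F.normalize(spec_encoder(candidates), dim=-1)         # (N, d)
        sims = (z_spec @ z_img.t()).squeeze(-1)                        # (N,)
        best = torch.topk(sims, k=k).indices
    return candidates[best], sims[best]

# Toy check with identity "encoders" acting on random feature vectors:
image_feat = torch.randn(128)
cand_specs = torch.randn(30, 128)   # e.g. 30 samples drawn from CDM_lr
top3, scores = rank_candidates(lambda x: x, lambda x: x, image_feat, cand_specs)
print(top3.shape, scores)
```

With the actual encoders, the same routine selects the three best low-resolution samples in step two and the single best full-resolution spectrum in the final step.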
Finally, we select the best-matching spectrum with the full-resolution contrastive network, giving us the final synthetic spectrum for the object. § SQL QUERY
arXiv:2406.17919v1 [cond-mat.mes-hall]
Majorana representation for topological edge states of massless Dirac fermion with non-quantized Berry phase
F. R. Pratama and Takeshi Nakanishi
pratama.fr@aist.go.jp t.nakanishi@aist.go.jp ^1Mathematics for Advanced Materials-OIL, AIST, 2-1-1 Katahira, Aoba, 980-8577 Sendai, Japan ^2Advanced Institute for Materials Research, Tohoku University, 2-1-1 Katahira, Aoba, 980-8577 Sendai, Japan § ABSTRACT We study the bulk-boundary correspondences for zigzag ribbons (ZRs) of massless Dirac fermion in two-dimensional α-T_3 lattice. By tuning the hopping parameter α∈[0,1], the α-T_3 lattice interpolates between pseudospin S=1/2 (graphene) and S=1 (T_3 or dice lattice), for α=0 and 1, respectively, which is followed by continuous change of the Berry phase from π to 0. The range of existence for edge states in the momentum space is determined by solving tight-binding equations at the boundaries of the ZRs. We find that the transitions of in-gap bands from bulk to edge states in the momentum space do not only occur at the positions of the Dirac cones but also at additional points depending on α∉{0,1}. The α-T_3 ZRs are mapped into stub Su-Schrieffer-Heeger chains by performing unitary transforms of the bulk Hamiltonian. The non-trivial topology of the bulk bands is revealed by the Majorana representation of the eigenstates, where the ℤ_2 topological invariant is manifested by the winding numbers on the complex plane and the Bloch sphere. 72.20.Pa,72.10.-d,73.50.Lw Majorana representation for topological edge states of massless Dirac fermion with non-quantized Berry phase Takeshi Nakanishi^1 July 1, 2024 ============================================================================================================ § INTRODUCTION   Bulk-boundary correspondence (BBC) is the relation between the number of robust edge states and the bulk topological invariant. This concept leads to the paradigm <cit.> to classify phases of condensed-matters based on the topology of bulk bands. The pioneering work of Thouless et. al. <cit.> identified the Chern number 𝒞 of the occupied Landau levels as the ℤ invariant for the quantized Hall conductivity <cit.> in a non-interacting, two-dimensional (2D) electron gas. In systems that preserve time-reversal symmetry, ℤ_2 invariants <cit.> differentiate between the ordinary and quantum spin Hall insulators. The Berry phase <cit.> in 1D Brillouin zone (BZ), also known as the Zak phase <cit.>, provides the ℤ_2 invariant for the Dirac matters with both time-reversal and inversion symmetries <cit.>. The quantized value of Zak phase to π or 0 is utilized to predict the presence or absence of edge states in the Su-Schrieffer-Heeger (SSH) chain of polyacetylene <cit.> and in various types of graphene ribbons <cit.>, insofar the number of unit cells is commensurate with the bulk lattice <cit.>. The spinless model for these systems is described by the generic Hamiltonian ℋ=d·σ, where σ=(σ_x,σ_y,σ_z) is the vector of Pauli matrices. The bivalued Zak phase corresponds to the two possibilities whether the path of d=(Re[d],-Im[d],0) encloses the origin of the complex plane or not <cit.>. ℋ is also adapted to explain the edge states in phosphorene ribbons <cit.>. Experimentally, the topology of SSH chain is studied by using arrays of cold atoms <cit.>, photonic waveguides <cit.>, and electrical circuits <cit.>. Apart from the observations <cit.> by scanning-tunneling microscopy, the properties of edge states in graphene ribbons have been investigated using artificial honeycomb lattices <cit.>, which encompass the photonic <cit.>, phononic <cit.>, plasmonic <cit.>, and polaritonic <cit.> analogs of graphene. 
The α-T_3 lattice <cit.> [Fig. <ref>(a)] interpolates between pseudospin S=1/2 (graphene <cit.>) and S=1 (T_3 or dice lattice <cit.>) by continuously varying the hopping parameter α from 0 to 1, respectively, while the energy dispersion [Fig. <ref>(b)] remains the same. On the other hand, the Berry phase smoothly changes from π to 0, which can be seen in the gradual transition of diamagnetic to paramagnetic orbital susceptibilities <cit.>, among others. Motivated by the unconventional physics phenomena that arise from the variable geometric phase and the existence of flat band—e.g. enhanced Klein tunneling and supercollimation <cit.>—there have been extensive studies on the properties of the T_3 and α-T_3 lattices, including the optical <cit.>, magnetic <cit.>, pseudomagnetic <cit.>, magneto-optical <cit.>, and transport <cit.> properties. The T_3 lattice is predicted to occur in several perovskite-based heterostructures <cit.> and strained blue-phosphorene oxide <cit.>. Moreover, the Hamiltonian of the α-T_3 lattice can be employed to reproduce the absorption spectra <cit.> of 2D Hg_1-xCd_xTe at the critical doping x≈ 0.17 <cit.>. In the context of topology, the T_3 lattice shows quantum anomalous Hall effect with |𝒞|=2 <cit.> by including the Haldane term <cit.> in the Hamiltonian. Similarly, the α-T_3 ribbon with spin-orbit interactions shows the quantum spin Hall effect <cit.>. However, the BBC for the α-T_3 lattice without opening non-trivial gaps is yet to be formulated, even though it is natural to investigate whether the topological edge states remain without the quantized Berry phase. Furthermore, it is shown in a recent work <cit.> that the T_3 zigzag ribbon (ZR) hosts edge states. Our study shows that the edge states are topologically non-trivial despite the zero Berry phase. Our paper is organized as follows. In Sec. <ref>, we discuss the BBC for the stub SSH chain <cit.>, which is constructed by connecting an additional site to one of the sites in the SSH dimer as depicted in Fig. <ref>. The results of this section are useful for understanding and will be applied to uncover the BBC of the α-T_3 ZRs. In the Majorana representation of the eigenstates <cit.>, we analytically prove that the presence or absence of edge states are topologically characterized by winding numbers on the complex plane and the Bloch sphere. Sec. <ref> is devoted to a brief review of the relevant bulk properties of the α-T_3 lattice. In Sec. <ref>, first we describe the configurations of α-T_3 ZR. For two types of ZRs, tight-binding equations (TBEs) at the boundaries are solved to determine the range of existence for edge states in momentum space. In Sec. <ref>, we map the α-T_3 ZRs into stub SSH chains by unitary transforms of the Hamiltonian. Here, momentum in the direction parallel to the α-T_3 ZRs is transformed into hopping parameters of the corresponding stub SSH chains, and thus the dimension is reduced from 2D to 1D. We discuss the topological phase diagram for each ZR. Conclusion is given in Sec. <ref>. This paper serves as a technical companion to Ref. <cit.> by providing detailed derivations of the results. § STUB SSH CHAIN   §.§ Bulk Hamiltonian Fig. <ref> illustrates the stub SSH chain. In each unit cell (dashed rectangle), the intracell hopping parameter between the B and A (C) sites is V_A≥ 0 (V_C). The intercell hopping parameter is V_A^'. 
For a chain consisting J unit cells, the Hamiltonian Ĥ is given by Ĥ = -∑_j=1^J[ V_A ( â_j^†b̂_j + b̂_j^†â_j ) + V_A^'( â_j^†b̂_j+1 + b̂_j+1^†â_j ) +V_C ( b̂_j^†ĉ_j + ĉ_j^†b̂_j ) ], where â_j^†, b̂_j^†, and ĉ_j^† (â_j, b̂_j, and ĉ_j) are the creation (annihilation) operators for the A, B, and C sites, respectively, in the j-th unit cell. By performing the Fourier transforms for the creation and annihilation operators, the bulk Hamiltonian in the momentum space is given by Ĥ(k) = ∑_kΨ̂_k^† H(k) Ψ̂_k, where Ψ̂_k = [ a_k b_k c_k ]^T is the field operator, and H(k) = - [ 0 F^*(k) 0; F(k) 0 V_C; 0 V_C 0 ]. Here, we define F(k) = |F(k)| e^-iΦ(k)≡ V_A + V_A^' e^-ika_0. a_0 ≡ R_j+1-R_j is the lattice constant [The positions of all sites in each unit cell are regarded as identical. This treatment is equivalent to fixing a gauge in the Hamiltonian such that the intracell hopping is real <cit.>.]. Φ(k)∈ (-π, π] is given by [Z=Arg(X+iY)=atan2(Y,X), where atan2 is two-argument arctangent function. Particularly, Z=0 for Y=0, X>0 and Z=π for Y=0, X<0.]: Φ(k) = Arg[V_A/V_A^'+cos(ka_0)+isin(ka_0)]. Without lost of generality, we only consider V_A^'> 0, and the 1D BZ is defined for k∈ [-π/a_0,π/a_0] [For V_A^'<0, the BZ is defined for k∈[0,2π/a_0], because F(k)=V_A-|V_A^'| e^-ika_0 = V_A+|V_A^'| e^-i(k+π/a_0)a_0]. It is noted that F(-k)=F^*(k) and Φ(-k) = -Φ(k). The energy eigenvalues of Eq. (<ref>) are given by E_s(k) = s√(V_A^2+V_A^'^2+V_C^2+2V_AV_A^'cos(ka_0)), E_0 = 0, where E_s indicates the valence (conduction) band for s=-1 (+1). E_0 is the flat band at zero energy whose origin is understood from the Lieb theorem <cit.>: due to the absence of interactions between the A and C sites, each unit cell can be partitioned into two sublattices: one consists of the A and C sites, the other consists of the B site. The numerical imbalance between the sublattices gives rise to E_0. For V_C ≠ 0, the bulk bands do not touch at |k| = π/a_0, unlike in the SSH chain where the gap closing indicates a topological phase transition when V_A/V_A^' = 1. Nevertheless, regardless the value of V_C, Φ(π/a_0) = π for V_A/V_A^'<1, 0 for V_A/V_A^'>1, and indeterminate for V_A/V_A^' =1. Eq. (<ref>) will be used to calculate the topological index. The eigenvectors for the band c∈{s,0} are |Ψ_s(k) ⟩ = 1/√(2)[ cosΘ(k) e^iΦ (k); s; sinΘ(k) ] for the dispersive bands, and |Ψ_0(k) ⟩ = [ sinΘ(k) e^iΦ(k); 0; -cosΘ(k) ] for the flat band, where Θ(k) ≡tan^-1 ( V_C/|F(k)| ). §.§ Open boundary conditions We derive the quantization condition of k to enumerate the number of bulk states by imposing the boundary conditions on the wavefunction Ψ_s = [ Ψ_s^A Ψ_s^B Ψ_s^C ]^T. §.§.§ Missing bulk states As illustrated by Fig. <ref>, the missing B site at R_J+1 and A site at R_0 necessitate Ψ_s^B(R_J+1) = 0, Ψ_s^A(R_0)= 0. Ψ_s(R_j) is constructed by a linear combination of the Bloch states with opposite momenta as follows <cit.>: Ψ_s(R_j) ≡𝒜_+ e^i k R_j|Ψ_s(k)⟩ + 𝒜_- e^-i k R_j|Ψ_s(-k)⟩. By letting R_j = (j-J-1)a_0, Eq. (<ref>) implies 𝒜_-=-𝒜_+. Thus, Eq. (<ref>) yields e^-i(J+1)ka_0F^*(k) = e^i(J+1)ka_0F(k) or sin[(J+1)ka_0-Φ(k)] = 0. Equivalently, the quantization condition of k for a finite J is given by (J+1)ka_0 - Φ(k) = nπ, for n=1, 2,…,J. In Fig. <ref>(a), we plot Φ(k) as a function of k∈ [0, π/a_0] for V_A/V_A^'=0.5, 0.98, and 1.5. The straight lines correspond to l_n(k) ≡ (J+1)ka_0 - nπ, for J=20. The number of bulk states is equal to the number of solutions of Eq. 
(<ref>), which are given by the intersections of Φ(k) and l_n(k) along k∈ (0, π/a_0). Here, Ψ_s(R_j) vanishes identically at k∈{0,π/a_0}. There are J solutions for V_A/V_A^' = 0.98 and 1.5. On the other hand, one solution is missing for V_A/V_A^'=0.5. The existence of J-1 solutions requires Φ(π/a_0)=π and ∂_kΦ(k)|_π/a_0<∂_k l_J(k)|_π/a_0, or V_A /V_A^' <1 - 1/(J+1). We shall show that edge states emerge from the missing bulk states. The ratio (V_A/V_A^')_J≡ 1-1/(J+1) is interpreted as the critical value of V_A/V_A^' at which a bulk state for each E<0 and E>0 become edge states. In the limit J→∞, (V_A/V_A^')_∞∼ 1. §.§.§ Edge states By substituting k = π/a_0 + iκ into Eqs. (<ref>) and (<ref>), e^(J+1)κ a_0F̃(-κ) = e^-(J+1)κ a_0F̃(κ), where we define F̃(κ) ≡ V_A - V_A^'e^κ a_0. Here, 1/κ>0 is the localization length of edge states. By rearranging Eq. (<ref>), κ satisfies Δ(κ) ≡V_A/V_A^' - sinh[Jκ a_0]/sinh[(J+1) κ a_0] = 0. In Fig. <ref>(b), we plot Δ(κ) for V_A/ V_A^'= 0.5, 0.98 and 1.5. Eq. (<ref>) is satisfied for V_A/ V_A^'= 0.5, where Δ(κ)=0 at κ≈ 0.693/a_0 (vertical dashed line). Thus, the edge states exist for V_A/V_A^' = sinh[J κ a_0]/sinh[(J+1)κ a_0]<1. By inserting k = π/a_0 + iκ into Eq. (<ref>), the energies of edge states are given by Ẽ_s(κ) =s√(V_A^2+V_A^'^2+V_C^2-2V_AV_A^'cosh(κ a_0)). By combining Eqs. (<ref>) and (<ref>), we get Ẽ_s(κ) ≡ s √(V_A^'^2sinh^2(κ a _0)/sinh^2[(J+1)κ a_0] + V_C^2 ). We can see that for J κ a_0≫1, Ẽ_s(κ) becomes independent of J and converges to Ẽ_s ∼ s |V_C|. Fig. <ref> shows the numerical calculation of energy spectra as a function of V_A/V_A^' for V_C =0.5 V_A^' and J=20. Since each unit cell consists of three sites, there exist 3J bands in total: J pairs of dispersive bands and a flat band with J-fold degeneracy at E=0 (blue line). The edge states are depicted by the bold red lines. The transition from bulk to edge states is marked by the vertical dashed line at (V_A/V_A^')_20≈ 0.9524, where the in-gap bands become constant at |E|∼ |V_C|. Therefore, the emergence of edge states is indicated by the flattening of the in-gap bands when V_A/V_A^' < 1. Our theoretical result is consistent with a recent experiment <cit.> that realized stub SSH chain using a photonic lattice. §.§ Topological invariant We have demonstrated that the emergence of edge states in the stub SSH chain is irrespective of V_C and controlled only by V_A/V_A^', similar to the SSH chain. Nevertheless, the presence of the coupled C sites breaks the inversion symmetry <cit.>, and as a consequence, the Zak phase 𝒵_c≡ i∫_BZ dk ⟨Ψ_c (k) | ∂_k |Ψ_c(k)⟩ is no longer quantized to π and 0 (mod 2π). By using |Ψ_c⟩ in Eqs. (<ref>) and (<ref>), 𝒵_c's are analytically calculated as follows: 𝒵_s = -∫_0^π/a_0 dk cos^2Θ(k) ∂Φ(k)/∂ k= -π/2 [ 1 - V_A^2 - V_A^'^2 + V_C^2/√((V_A - V_A^')^2 +V_C^2)√((V_A + V_A^')^2 +V_C^2)], 𝒵_0 = -2∫_0^π/a_0 dk sin^2Θ(k) ∂Φ(k)/∂ k= -π [ 1 - sgn(V_A^2 - V_A^'^2) ] -2𝒵_s. The case V_C = 0 corresponds to the SSH chain with an additional flat band originating from the uncoupled C sites, where 𝒵_s = -π and 0 for V_A/V_A^'<1 and V_A/V_A^'>1, respectively, and hence |𝒵_s|= Φ(π/a_0). This relation does not hold for V_C≠ 0. On the other hand, it was shown <cit.> that the edge states in the stub SSH chain are robust against disorders that perturb V_A and V_A^', which suggests the topologically non-trivial character of the bulk bands. By using the Majorana representation of the eigenstates, Bartlet et al. 
<cit.> found that the existence of the edge states can be predicted from the azimuthal winding number W_c on the Bloch sphere. We complement the finding with an analytical proof that |W_c|=Φ(π/a_0). The Majorana representation is a method to visualize the eigenstate |Ψ⟩ of an N× N Hamiltonian on the Bloch sphere by a set pseudospinors for S=1/2. Here, |Ψ⟩ is treated as a pseudospinor for S=(N-1)/2. According to the Schwinger theory <cit.> of angular momentum, |Ψ⟩ can be constructed by the bosonic creation operators â_↑,↓^† acting on the vacuum state |∅⟩, as follows: |Ψ⟩ = ∑_μ = -S ^S𝒞_μ( â_↑^†)^S+μ/√((S+μ)!)( â_↓^†)^S-μ/√((S-μ)!) | ∅⟩, where 𝒞_μ's are the basis vector. Earlier, Majorana <cit.> discovered that |Ψ⟩ is given by a product of the Pauli spinors |ζ_μ⟩ = [ cos(η_μ/2) sin(η_μ/2)e^iξ_μ ]^T as follows: |Ψ⟩ = 1/𝒦∏_μ=1^2S[ cos( η_μ/2) â_↑^† + sin( η_μ/2) e^iξ_μâ_↓^†]| ∅⟩ = 1/𝒦∏_μ=1^2S|ζ_μ⟩ , where 𝒦 is the normalization constant. η and ξ are the polar and azimuthal angles. The trajectory of each Majorana 'star' ζ_μ≡⟨ζ_μ | σ | ζ_μ⟩ thus represents the evolution of |Ψ⟩. Alternatively, Eq. (<ref>) is expressed as |Ψ⟩ = 𝒞_S/√((2S)!)∏_μ=1^2S[ â_↑^† + λ_μâ_↓^† ]| ∅⟩, where λ_μ = tan (η_μ/2)e^iξ_μ. Suppose that λ_μ's are the roots of ∏_μ=1^2S(λ - λ_μ)=0. By comparing the coefficients of Eqs. (<ref>) and (<ref>), λ_μ are given by the Majorana polynomial as follows: ∑_μ=0^2S(-1)^μ𝒞_S-μ/√((2S-μ)!μ !)λ^2S-μ = 0. Particularly for the stub SSH chain, we insert S=1, where |Ψ⟩ = [ 𝒞_1 𝒞_0 𝒞_-1 ]^T is given by Eqs. (<ref>) or (<ref>). For the dispersive band, Eq. (<ref>) is reduced to cosΘ(k)e^iΦ(k)λ^2 -√(2)sλ+sinΘ(k) = 0. As for the flat band, we get sinΘ(k)e^iΦ(k)λ^2 -cosΘ(k) = 0. The winding number Ω_c around the origin of complex plane is calculated by the Cauchy integral formula. Let λ_s^∓ (k) and λ_0^∓(k) be the roots of Eqs. (<ref>) and (<ref>), respectively. From the coefficients of the quadratic equations, it is noted that Λ_s(k) ≡∏_ν=∓λ_s^ν(k)=tanΘ(k)e^-iΦ (k), Λ_0(k) ≡∏_ν=∓λ_0^ν(k) =-Θ(k) e^-iΦ(k). By regarding Λ_c(k) as a curve parameterized by k, Ω_c≡1/2π i∫_-π/a_0^π/a_0d k/Λ_c(k)∂Λ_c (k) /∂ k . Here, dlnΛ_s(k) = d ln |tanΘ(k)| - idΦ(k) and dlnΛ_0(k) = d ln |Θ(k)| - idΦ(k). It is noted that Λ_c(k) is a closed curve because Θ(-π/a_0) = Θ(π/a_0), thus the total changes of ln |tanΘ(k)| and ln |Θ(k)| are zero. Therefore, Ω_c is identical for each c as follows: Ω_c = -1/2π∫_-π/a_0^π/a_0 dk ∂Φ(k)/∂ k = -1/πΦ( π/a_0). The azimuthal winding number W_c counts for the total number of times ζ_c^ν(k)'s travel around the z axis of the Bloch sphere. W_c is defined by <cit.>: W_c ≡1/2π∫_-π/a_0^π/a_0 dk∂/∂ k∑_ν=∓ξ_c^ν(k). Since λ_c^ν(k) = tan[η_c^ν(k)/2] e^iξ_c^ν(k) by definition, -∑_ν=∓ξ_c^ν(k) = Φ(k). By substituting Eq. (<ref>) into Eq. (<ref>), it is inferred that W_c = Ω_c, and accordingly: |W_c| = 1 for V_A/V_A^'<1, 0 for V_A/V_A^'>1. Therefore, the trajectories of ζ_c^ν(k)'s enclose the z axis once only if Φ(π/a_0) = π, as k is traversing the BZ. Fig. <ref> shows the trajectories of ζ_c^ν's on the Bloch sphere (left panels) and their projections on the xy plane (right panels) for V_C = 0.5 V_A^', and (a) V_A/V_A^' =0.5, (b) 1, (c) 1.5. ζ_c^ν's associated with c=-1, 0, and 1 are depicted by the red/yellow, light-green/dark-green, and blue/purple curves, respectively. In Fig. <ref>(a), the red, purple, and connected green loops wind around the z axis, so |W_c|=1. In Fig. <ref>(b), |W_c| is ill-defined because the same loops intersect the poles. 
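The winding numbers quoted here are easy to verify numerically without constructing the Majorana stars explicitly: by the relations derived above, |W_c| equals the number of times F(k) = V_A + V_A' e^{-ika_0} encircles the origin as k traverses the Brillouin zone. A minimal numerical sketch (the function name and grid size are illustrative):

```python
import numpy as np

def winding(VA, VAp, a0=1.0, nk=4001):
    """|W_c| from the winding of F(k) = VA + VAp*exp(-i k a0) about the
    origin over the Brillouin zone; it is ill-defined at VA = VAp, where
    F(pi/a0) = 0 and the loop passes through the origin."""
    k = np.linspace(-np.pi / a0, np.pi / a0, nk)
    Fk = VA + VAp * np.exp(-1j * k * a0)
    phase = np.unwrap(np.angle(Fk))
    return abs(round((phase[-1] - phase[0]) / (2.0 * np.pi)))

for ratio in (0.5, 0.98, 1.5):
    print(f"V_A/V_A' = {ratio:4.2f}  ->  |W_c| = {winding(ratio, 1.0)}")
```

Note that V_C does not enter, consistent with the emergence of edge states being controlled by V_A/V_A' alone, and that the invariant becomes ill-defined exactly at V_A = V_A'.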
This is an example of topological phase transition without gap closing, which can occur provided that the topological invariant at the transition point becomes ill-defined <cit.>. In Fig. <ref>(c), none of the curves enclose the z axis and thus |W_c| = 0. It is noted that Eqs. (<ref>) does not hold for all of the bases of |Ψ_c⟩. Hereafter, the basis in Eqs. (<ref>) or (<ref>) is referred to as (abc) for simplicity. There are six permutations of basis in total: (abc), (bca), (cab), (cba), (acb), and (bac). In general, the ℤ_2 invariant is given by the parity P of the total winding numbers defined by <cit.>: P≡ (-1)^∑_c|W_c|, where P=-1 and +1 indicate the non-trivial and trivial phases, respectively. We prove this statement by calculating the Majorana polynomials for all the bases, which are classified into three cases as follows: 1. For (abc) and (cba), Λ_c∝ e^- iΦ and e^ iΦ, respectively. 2. For (bca) and (acb), Λ_s∝ e^ iΦ and e^- iΦ, respectively, while a single solution of the Majorana polynomial for the flat band is λ_0∝ e^ iΦ and e^- iΦ, respectively. 3. For (cab) and (bac), Λ_s does not depend on Φ, and λ_0∝ e^ iΦ and e^-iΦ, respectively, for the single solution. By calculating W_c, ∑_c|W_c|=0 for all cases when V_A/V_A^'>1. In contrast, ∑_c|W_c| is either 3 (cases 1 and 2) or 1 (case 3) when V_A/V_A^'<1. Therefore, P= -1 for V_A/V_A^'<1, +1 for V_A/V_A^'>1. § Α-T_3 LATTICE Consider a honeycomb lattice composed of A and B sites. The α-T_3 lattice is constructed by connecting the additional C sites at the center of each hexagon to the nearest B sites, as illustrated by Fig. <ref>(a). The nearest-neighbour interactions between the B and A (C) sites are denoted by the hopping parameter t_A>0 (t_C), where t_C/t_A=α∈ [0, 1]. The dashed hexagons represent three choices to define the unit cell. The vectors b_1 = a(-1/2, -1/2√(3) ), b_2 = a(1/2, -1/2√(3) ) and b_3 = a(0, 1/√(3) ) connect the B site to the nearest A and C sites, where a is the length of primitive vectors a_1 = a(1,0) and a_2 = a(1/2,√(3)/2). The bulk Hamiltonian h(k) in the momentum space k = (k_x, k_y) is given by h(k) = - [ 0 t_A f(k) 0; t_A f^*(k) 0 t_C f(k); 0 t_C f^*(k) 0 ], where we define f(k) ≡∑_n=1^3 e^-ik·b_n = |f(k)|e^-iφ(k). By defining t ≡√(t_A^2 + t_C^2) and ϑ≡tan^-1α, h(k) = -t [ 0 cosϑ f(k) 0; cosϑ f^*(k) 0 sinϑ f(k); 0 sinϑ f^*(k) 0 ]. For α = 0, the α-T_3 lattice reduces to graphene with a zero-energy flat band that originates from the uncoupled C sites. Since h(k) is scaled by t, the energy eigenvalues of Eq. (<ref>) are independent of α as follows: ε_s(k) = st √(1+β^2(k_x) + 2β(k_x) cos(√(3)/2k_y a)), ε_0 = 0, where β(k_x) ≡ 2 cos (k_x a/2)≥ 0 for k_x∈[-π/a,π/a]. ε_s indicates the energy of the valence (conduction) band for s=-1 (+1). In addition, ε_0 appears due to the sublattice imbalance. In Fig. <ref>(b), the Dirac cones touch ε_0 at the K and K^' points (labeled by τ=+1 and -1, respectively) in the corners of hexagonal BZ. The eigenvector of ε_s is given by |ψ_s(k) ⟩ = 1/√(2)[ cosϑ e^-iφ (k); s; sinϑ e^iφ (k) ]. As for ε_0, we get |ψ_0(k) ⟩ = [ sinϑ e^-iφ (k); 0; -cosϑ e^iφ (k) ]. The Berry phase Γ_c ≡ i∮ dk·⟨ψ_c (k)|∇_k | ψ_c (k) ⟩ can be analytically evaluated by contour integration along a path enclosing each Dirac point, as follows <cit.>: Γ_s,τ = πτcos(2ϑ), Γ_0,τ = -2πτcos(2ϑ). Thus, Γ_c≠π or 0 (mod 2π) for α≠ 0 or 1. Furthermore, Γ_c's possess opposite signs for the K and K^' valleys except for α=0 and 1. § Α-T_3 ZRS §.§ Types of ZR Fig. 
<ref> illustrates commensurate α-T_3 ZRs that we call (a) ABC, (b) BCA, and (c) CAB ZRs. The ZRs are respectively constructed by translating the bottom, middle, and top unit cells in Fig. <ref>(a) along the x direction with the periodicity a_1. The width of the ZR is characterized by a positive integer J, which is the number of trimers in the supercell (dashed rectangle). Starting from the bottom edge, the indices m=1,2,…,3J and j=1,2,…,J specify the positions of sites and trimers, respectively, in the y axis. Additionally, we can construct two non-commensurate ZRs from each commensurate ZR. First, by eliminating the sites at m=3J, and second, by eliminating the sites at m=3J and m=3J-1. Therefore, there are nine types of α-T_3 ZRS: three commensurate and six non-commensurate ZRs. Here, we will discuss the BBCs for the BCA and CAB ZRs. The existence of edge states in the ABC ZR is discussed in Appendix <ref>. §.§ Open boundary conditions for ZRs In an infinite system, the TBEs with nearest-neighbor interactions at the A, B, C sites are given by εψ_s^A(r)=-t_A∑_n=1^3ψ_s^B(r-b_n), εψ_s^B(r)=-∑_n=1^3[t_Aψ_s^A(r+b_n)+t_Cψ_s^C(r-b_n)], εψ_s^C(r)=-t_C∑_n=1^3ψ_s^B(r+b_n), respectively. The missing terms of TBEs on the edges of ZRs constitute the boundary conditions. The wavefunction ψ_s=[ ψ_s^A ψ_s^B ψ_s^C ]^T is given by a linear combination of the Bloch states <cit.>: ψ_s(r) ≡𝒜e^ik·r|ψ_s(k)⟩+𝒜^'e^ik^'·r|ψ_s(k^')⟩. The amplitudes 𝒜 and 𝒜^', and momenta k and k^' are determined to satisfy the boundary conditions. §.§.§ BCA ZR In Fig. <ref>(a), the vector L=(L_x,L_y) connects an empty A site at m=0 to an empty B site at m=3J+1 chosen as the origin O=(0,0). L_x=0 and -a/2 for odd and even J, respectively, while L_y = ρ-a/√(3), with ρ≡√(3)/2(J+1)a. Let us define L_l≡L+la and O_l≡O+la, for l∈ℤ. Thus, L_0=L and O_0=O. Consider the TBEs at m=1 and m=3J where r=-L_l-b_2 and r=-O_l+b_1, respectively, as follows: εψ_s^B(-L_l-b_2)= -t_Cψ_s^C(-L_l+b_3) -t_Cψ_s^C(-L_l+1+b_3) -t_Aψ_s^A(-L_l+1+a_2), εψ_s^A(-O_l+b_1)= -t_Aψ_s^B(-O_l-a_2). The boundary conditions are thus given by ψ_s^A(-L-la_1) + ψ_s^A(-L-[l+1]a_1) +αψ_s^C(-L-la_1+b_1) =0, ψ_s^B(-la_1) = 0. §.§.§ CBA ZR Fig. <ref>(b) shows that O is assigned to an empty B site at m=0. The TBEs at m=1 and m=3J are εψ_s^C(O_l-b_2)= -t_Cψ_s^B(O_l-1+a_2), εψ_s^B(L_l+b_1)= -t_Aψ_s^A(L_l-b_3) -t_Aψ_s^A(L_l-1-b_3) -t_Cψ_s^C(L_l-a_2). The boundary conditions given as follows: ψ_s^B(la_1) = 0, α{ψ_s^C(L+la_1) + ψ_s^C(L+[l-1]a_1) } + ψ_s^A(L+la_1-b_2) =0. §.§ The existence of edge states in ZRs §.§.§ BCA ZR In Appendix <ref>, we show in detail that by combining Eqs. (<ref>) and (<ref>), we obtain e^-ik_yρf_1^* (k) = e^ik_yρf_1 (k). Here, f_1 (k) = | f_1 (k)|e^-iϕ_1(k) is defined by f_1 (k) ≡ v_A(k_x) + v_A1^'(k_x) e^-i√(3)/2k_ya, where v_A(k_x) and v_A1^'(k_x) depend on β(k_x) as follows: v_A≡β (1 + α^2), v_A1^'≡α^2 + β^2. It is clear that ϕ_1(k_x,2π/√(3)a) = π and 0 for v_A/v_A1^'<1 and v_A/v_A1^'>1, respectively. The quantization condition of the bulk states is therefore given by √(3)/2 (J + 1) k_y a-ϕ_1(k) = nπ, for n=1,…,J. The intersections of ϕ_1(k) and l^'_n(k_y)≡√(3)(J+1)k_ya/2 - nπ along k_y∈(0,2π/√(3)a) gives the solutions of Eq. (<ref>). As discussed in Sec. <ref>, a pair of bulk states become edge states when (v_A/v_A1^')_J < 1- 1/(J+1), or in terms of β: β^2 - 1+α^2/1-(J+1)^-1β + α^2 > 0. For α≠ 0, Eq. (<ref>) possesses two solutions: β_J^+<β≤ 2 or 0≤β<β_J^-, where β_J^±≡ 2cos(χ_J^± a/2). 
Thus, χ_J^± correspond to |k_x|'s at which the transitions from bulk to edge states occur. The edge states exist in the range 0≤ |k_x|<χ_J^+ and χ_J^-<|k_x| ≤π/a. In the limit J→∞, χ_J^± converge to χ_∞^+ ∼ (2/a)cos^-1( 1/2 ) = 2π/3a, χ_∞^- ∼ (2/a)cos^-1( α^2/2 ). Thus, apart from the Dirac point χ_∞^+, the bulk states undergo transition to edge states at χ_∞^- for α∈(0,1). Let 1/κ_y be the localization length of edge states. By substituting k_y = 2π/√(3)a + iκ_y into Eqs. (<ref>) and (<ref>), e^κ_ yρf̃_1(-κ_y) = e^-κ_ yρf̃_1(κ_y), where we define f̃_1(κ_y) ≡ v_A(k_x) - v_A1^'(k_x) e^√(3)/2κ_ya. Therefore, κ_y is computed by solving δ_1(κ_y) ≡v_A/v_A1^' - sinh( √(3)Jκ_y a/2 )/sinh( ρκ_y ) = 0. Analytically, the energies of edge states are calculated by inserting k_y = 2π/√(3)a + iκ_y into Eq. (<ref>) as follows: ε̃_s(κ_y) = st √(1+β^2 - 2βcosh(√(3)/2κ_y a)). By substituting Eq. (<ref>) into (<ref>), ε̃_s for Jκ_y a≫ 1 is ε̃_s = st α|1-β^2|/√(1+α^2)√(α^2+β^2). Since β depends on k_x, ε̃_s is dispersive. In particular, we get ε̃_s ≈ 0 and ε̃_s= st_A at |k_x|≈ 2π/3a and |k_x|=π/a, respectively. Thus, the ZR is gapless for J→∞. Fig. <ref>(a) depicts the numerical calculation of energy spectra as function of k_x for α=0.8 and J=20. By solving Eq. (<ref>), we get χ_20^+ ≈ 0.5985π/a and χ_20^-≈ 0.8251π/a, which are indicated by the vertical dashed and dotted lines, respectively (we check that χ_J^+ is undefined for J<3 ). The bold red lines depict the edge states, which are pinned at |ε| = t_A≈ 0.7809t at |k_x|= π/a, in agreement with Eq. (<ref>). Due to the finite size effect, band gaps open at |k_x|=2π/3a. In Figs. <ref>(b)-(e), we plot the electronic probability density |ψ|^2 for the in-gap states per site m at several k_x's. It is noted that |ψ|^2's are identical for both signs of k_x and ε. The grey, brown, and yellow bars in the histogram correspond to the A, B, and C sites, respectively. The distributions of |ψ|^2 in Figs. <ref>(b) and (c) exhibit the profiles of asymmetric edge states. Here, |ψ|^2's are concentrated around the bottom edge of the ZR and decay exponentially towards the top edge. Figs. <ref>(d) and (e) indicate the bulk state because |ψ|^2's are extended throughout the ZR. In Fig. <ref>(e), |ψ|^2 is high around the bottom edge because k_x is close to χ_20^-. The inset shows the re-emergence of the edge state at a larger k_x. §.§.§ CAB ZR In Appendix <ref>, we show in detail that Eqs. (<ref>) and (<ref>) lead to: e^ik_yρf_2 (k) = e^-ik_yρf_2^* (k), where f_2 (k) = | f_2 (k)|e^-iϕ_2(k) is defined by f_2 (k) ≡ v_A(k_x) + v_A2^'(k_x)e^-i√(3)/2k_ya, and v_A2^'(k_x) is given as follows: v_A2^'≡ (1+α^2β^2). The quantization condition of k_y is given by replacing ϕ_1 in Eq. (<ref>) with ϕ_2. Likewise, edge states appear in the gap when (v_A/v_A2^')_J < 1- 1/(J+1), or α^2 β^2 - 1+α^2/1-(J+1)^-1β + 1 > 0. By denoting the solutions of Eq. (<ref>) as β_J^+<β≤ 2 or 0≤β<β_J^-, the edge states for α≠ 0 exist in the range 0≤ |k_x|<χ_J^+ and χ_J^-<|k_x| ≤π/a. Note that χ_J^± in Eq. (<ref>) are not equal to those in Eq. (<ref>) except for α=1. Here, in the limit J→∞, χ_J^± become χ_∞^+ ∼ (2/a)cos^-1( 1/2α^2 ), χ_∞^- ∼ (2/a)cos^-1( 1/2 ) = 2π/3a. It is noticed that χ_∞^+ is undefined for α<1/√(2). To determine κ_y, we define δ_2 by replacing v_A1^' in Eq. (<ref>) with v_A2^'. ε̃_s for Jκ_y a≫ 1 is given by ε̃_s = st α|1-β^2|/√(1+α^2)√(1+α^2β^2). At |k_x|≈ 2π/3a (|k_x|=π/a), ε̃_s ≈ 0 (ε̃_s=st_C). Fig. 
<ref>(a) shows the numerical calculation of energy spactra as a function of k_x for α=0.8 and J=20. By solving Eq. (<ref>), χ_20^+ ≈ 0.2542π/a and χ_20^- ≈ 0.7213π/a, respectively marked by the vertical dashed and dotted lines (χ_J^+ is undefined for J<12). The bold red lines indicate the edge states, where |ε| = t_C≈ 0.6247t at |k_x|=π/a, in accordance with Eq. (<ref>). Figs. <ref>(b)-(e) show |ψ|^2's for the in-gap states per site m at several k_x's. Figs. <ref>(b) and (e) exhibit the profiles of asymmetric edge states where |ψ|^2's are high around the top edge of ZR and decay exponentially toward the bottom edge. In contrast, Figs. <ref>(c) and (d) indicate the bulk states due to the extended |ψ|^2's. § TOPOLOGICAL INVARIANT OF Α-T_3 ZRS   §.§ Unitary transforms of bulk Hamiltonian In Section <ref>, we found that for each k_x, the α-T_3 ZRs are isomorphic to the stub SSH chain along the y direction. By unitary transforms of h(k) in Eq. (<ref>), the BCA and CAB ZRs are mapped into stub SSH chains with distinct unit cells and hopping parameters. As a result, the ℤ_2 invariant for the α-T_3 ZRs is given by |W_c|. §.§.§ BCA ZR To transform h(k), we define a unitary matrix U(u_A,u_C) ≡[ e^ik·u_A 0 0; 0 1 0; 0 0 e^ik·u_C ]. h_1(k)≡ U(b_3,-b_1-a_1/2)h(k)U^†(b_3,-b_1-a_1/2) is h_1(k) =-[ 0 t_A p^*(k) 0; t_A p(k) 0 t_C q(k); 0 t_C q^*(k) 0 ], where we define p(k) ≡ 1 + β(k_x) e^-i√(3)/2k_ya, q(k) ≡β(k_x) + e^-i√(3)/2k_ya. h_1(k) is the Hamiltonian of a rhombic or diamond chain <cit.> illustrated in the middle panel of Fig. <ref>. Here, the A and C sites are connected to the B sites by the intracell (intercell) hopping parameters t_A and β t_C (β t_A and t_C), respectively. Experimental realizations of the rhombic chain have been achieved using photonic lattices <cit.>. The present method is therefore applicable for topological characterization of edge states <cit.> in the rhombic chains. Now, consider a rotation matrix Υ (γ_1)≡[ cosγ_1 0 sinγ_1; 0 1 0; -sinγ_1 0 cosγ_1 ]. By setting γ_1= tan^-1(α/β), we transform h_1 into ℋ_1(k)=Υ(γ_1) h_1(k) Υ^†(γ_1) as follows: ℋ_1(k) =-t_1[ 0 f_1^*(k) 0; f_1(k) 0 -v_C; 0 -v_C 0 ], where t_1≡ t_Acosγ_1/β, v_C ≡α(1-β^2), and f_1(k) is defined by Eq. (<ref>). ℋ_1(k) constitutes the Hamiltonian of stub SSH chain illustrated in the right panel of Fig. <ref>. The mapping of h(k) into ℋ_1(k) explains the emergence edge states with energies ϵ̃_s = s|t_1 v_C|, identical to Eq. (<ref>), when v_A/v_A1^' <1. The eigenstates of ℋ_1(k) are derived by replacing Φ(k) and Θ(k) in see Eqs. (<ref>), (<ref>) with ϕ_1(k) and θ_1(k)≡tan^-1(-v_C(k_x)/|f(k)|), respectively. Here, the BZ is k_y∈[-2π/√(3)a, 2π/√(3)a]. ζ_c^ν and |W_c| are calculated by using the method discussed in Sec. <ref>. Fig. <ref> shows the trajectories of ζ_c^ν's for α=0.8 at (a) k_x = 0.5π/a, (b) χ_∞^+=2π/3a [K point, see Eq. (<ref>)], (c) 0.7π/a, (d) χ_∞^-≈ 0.7926π/a [see Eq. (<ref>)], and (e) 0.8π/a. The red/yellow, light-green/dark-green, and blue/purple curves correspond to c=-1, 0, and 1, respectively. In Figs. <ref>(a) and (e), |W_c|=1 because 0<k_x<χ_∞^+ and χ_∞^-<k_x<π/a, respectively. In Fig. <ref>(c), |W_c|=0 because χ_∞^+<k_x<χ_∞^-. W_c's are ill-defined at χ_∞^± for different reasons. In Fig. <ref>(b), the trajectories of ζ_c^ν's become discontinuous due to the gap closing at |k_y|= 2π/√(3)a. Here, the system is reduced to the metallic SSH chain because β=1 and consequently v_C=0, v_A=v_A1^'. In Fig. <ref>(d), the red, purple, and connected green curves intersect the poles of the Bloch sphere. 
Accordingly, Figs. <ref>(b) and (d) indicate topological phase transitions with and without gap closing. To show that the unitary transforms preserve the topological properties of the α-T_3 ZR, we numerically compute the accumulated Berry phase Γ_c from the adiabatic evolution of ζ_c^ν's. For S=1, Γ_c is given by <cit.>: Γ_c = Γ_c^' + Γ_c^'', where Γ_c^' ≡-1/2∑_ν=∓∫_BZ (1-cosη_c^ν)dξ_c^ν, Γ_c^'' ≡-1/2∫_BZζ_c^-×ζ_c^+/3+ζ_c^-·ζ_c^+· d(ζ_c^–ζ_c^+). Γ_c^' corresponds to the sum of solid angles subtended by ζ_c^- and ζ_c^+, while Γ_c^'' is interpreted as the correlation term between ζ_c^- and ζ_c^+ due to their relative motions. Fig. <ref>(f) shows Γ_s, Γ_s^', and Γ_s^'' as a function of k_x, where the vertical dashed and dotted lines mark k_x=χ_∞^+ and χ_∞^-, respectively. It is noted that Γ_s^', Γ_s^'' and hence Γ_s are identical for s=±1. For 0≤ k_x <χ_∞^+, Γ_s^' includes the large portion of the Bloch sphere bounded by the red (or purple) loop that encloses the z axis [see Fig. <ref>(a) for instance], where Γ_s≈ 1.390244π. The trajectories of ζ_s^ν's become disconnected at χ_∞^+ [Fig. <ref>(b)] and reconnect for χ_∞^+< k_x ≤χ_∞^-, where Γ_s≈-0.390244π. The abrupt decrease of Γ_s is because Γ_s^' comprises the small portion of the Bloch sphere as the red (or purple) loop does not enclose the z axis [Figs. <ref>(c) and (d) for examples]. For χ_∞^-<k_x≤π/a, Γ_s increases by 2π to Γ_s≈1.609756π as the red (or purple) loop re-encloses the z axis [see Figs. <ref>(e)]. Since Γ_s is defined modulo 2π, Γ_s can not distinguish between the trivial and non-trivial phases of the ZR for χ_∞^+< k_x < χ_∞^- and χ_∞^-<k_x≤π/a, respectively. It is noted that Γ_s for k_x∈(χ_∞^-, π/a] and k_x∈[0,χ_∞^+) differs by πcos(2ϑ)≈0.219512π (mod 2π), consistent with Eq. (<ref>) for ϑ=tan^-1(0.8) and τ=1. Fig. <ref>(g) shows Γ_0, Γ_0^', and Γ_0^'' as a function of k_x. Γ_0^' either corresponds to a portion of the Bloch sphere bounded by a single loop [as in Figs. <ref>(a) and (e)], or two loops [as in Figs. <ref>(c) and (d)]. The jump discontinuity from Γ_0≈ 1.219512π to Γ_0≈ 0.780488π occurs at k_x=χ_∞^+. Therefore, Γ_0 for k_x∈(χ_∞^+, π/a] and k_x∈[0,χ_∞^+) differs by -2πcos(2ϑ)≈-0.439024π, in agreement with Eq. (<ref>) for ϑ=tan^-1(0.8) and τ=1. §.§.§ CAB ZR By using Eq. (<ref>), we can transform h(k) into h_2(k)≡ U(b_2-a_1/2,-b_3)h(k)U^†(b_2-a_1/2,-b_3) as follows: h_2(k) =-[ 0 t_A q(k) 0; t_A q^*(k) 0 t_C p^*(k); 0 t_C p(k) 0 ]. The middle panel of Fig. <ref> shows the system described by Eq. (<ref>), which is a rhombic chain with a distinct choice of unit cell compared with the one in Fig. <ref>. By using Υ in Eq. (<ref>), h_2(k) is transformed into ℋ_2(k)=Υ(γ_2) h_2(k) Υ^†(γ_2) as follows: ℋ_2(k) =-t_2[ 0 f_2(k) 0; f_2^*(k) 0 v_C; 0 v_C 0 ], where γ_2≡tan^-1(αβ), t_2≡ t_Acosγ_2, and f_2(k) is defined by Eq. (<ref>). The right panel of Fig. <ref> depicts the stub SSH chain represented by Eq. (<ref>). Thus, edge states with energies ϵ̃_s = s|t_2 v_C| [identical to Eq. (<ref>)] appear when for v_A/v_A2^' <1. Similarly, we derive the eigenstates of ℋ_2(k) by replacing Φ(k) and Θ(k) in Eqs. (<ref>), (<ref>) with ϕ_2(k) and θ_2(k)=-θ_1(k), respectively. §.§ Topological phase diagrams of ZRs Having shown that the distinction between the trivial and nontrivial phases of the α-T_3 ZRs is indicated by |W_c|, we discuss the topological phase diagrams as a function |k_x|∈[0,π/a] and α∈[0,1]. §.§.§ BCA ZR Fig. <ref>(a) depicts the topological phase diagram of the BCA ZR. 
The light and dark regions indicate the trivial and nontrivial phases, respectively. The pink and red lines mark the phase transition with and without gap closing. For α =0, the edge states exist in the range |k_x|∈[0, 2π/3a). This result is expected because removing the C sites reduces the BCA ZR to bearded graphene ribbons <cit.>, i.e. ZR with dangling bonds or the Klein defects. For α =1 (T_3 ZR), the edge states exist for |k_x|∈[0, π/a] except at |k_x|= 2π/3a, even though the Berry phase Γ_s=0. In the Majorana representation of the corresponding stub SSH chain, |W_c|=1 before and after gap closing because χ_∞^+=χ_∞^-=2π/3a. Thus, when we plot Γ_s as a function of k_x, the value of Γ_s remains the same for |k_x|∈[0, 2π/3a) and (2π/3a, π/a]. §.§.§ CAB ZR Fig. <ref>(b) shows the topological phase diagram of the CAB ZR. For α<1/√(2), the edge states appear in the range |k_x|∈(2π/3a,π/a], similar to those in the graphene ZR <cit.>. § CONCLUSION   In the first part of this paper, we investigate the bulk-boundary correspondence for the stub SSH chain. The edge states appear when the intracell hopping parameter is smaller than the intercell one, identical to the SSH chain. By using the Majorana representation of the eigenstates on the Bloch sphere, we prove that the ℤ_2 topological invariant is given by the winding number instead of the Zak phase, which is not quantized to π or 0 due to the broken inversion symmetry. In the second part, we demonstrate the isomorphism between the α-T_3 ZRs and the stub SSH chains from the boundary conditions of the former. The equivalence between the two systems is revealed by the unitary transforms of the bulk Hamiltonian, which maps the α-T_3 ZRs into stub SSH chains. As a result, the edge states in the α-T_3 ZRs possess the same topological origin and characterization as those in the stub SSH chains. Therefore, the α-T_3 lattice is identified as a topologically non-trivial, massless Dirac fermion in two dimensions without a quantized Berry phase. The T_3 lattice particularly exemplifies a topological system with zero Berry phase. § ACKNOWLEDGMENT We thank R. Saito, N. T. Hung, S. Hayashi, and E. H. Hasdeo for helpful discussions. This work is partly supported by Grant-in-Aid for Scientific Research, JSPS KAKENHI (23K03293), and JST-CREST (JPMJCR18T1). § ABC ZR Here, we derive the quantization condition of the bulk states and prove the existence of two dispersive edge states for each ε<0 and ε>0. The topological characterization of the edge states is reserved for future work. The boundary conditions are derived by expressing the nearest-neighbor interactions for the B sites in terms of the next-nearest-neighbor interactions. First, we multiply Eq. (<ref>) by ε, then we substitute ψ_s^A and ψ_s^C by the right-hand sides of Eqs. (<ref>) and (<ref>), respectively. As a result, Eq. (<ref>) becomes ε^2ψ_s^B(r) =  t^2[ ∑_ν=±{ψ_s^B(r+νa_1) + ψ_s^B(r+νa_2) +ψ_s^B(r+ν[a_2-a_1])} +3ψ_s^B(r) ]. In Fig. <ref>(a), we define a vector ℒ=(ℒ_x,ℒ_y), where ℒ_x=0 and -a/2 for odd and even J, respectively, while ℒ_y = √(3)(J-1)a/2. For each l∈ℤ, the TBEs at m=2 and m=3J-1 are accordingly given as follows: εψ_s^B(O_l)= -t_A∑_n=1^3ψ_s^A(O_l+b_n)-t_C∑_n=1^2ψ_s^C(O_l-b_n), εψ_s^B(ℒ_l)= -t_A∑_n=1^2ψ_s^A(ℒ_l+b_n)-t_C∑_n=1^3ψ_s^C(ℒ_l-b_n). We define ℒ_l≡ℒ+la_1 and O_l≡O+la_1, thus ℒ_0=ℒ and O_0=O. By expressing ψ_s^A and ψ_s^C in terms of ψ_s^B, Eqs. 
(<ref>) and (<ref>) are respectively given by ε^2ψ_s^B(O_l)=  t_A^2ψ_s^B(O_l) + t^2[2ψ_s^B(O_l)+ψ_s^B(O_l+a_2) +ψ_s^B(O_l+a_2-a_1) +∑_ν=±ψ_s^B(O_l+νa_1)], ε^2ψ_s^B(ℒ_l)=  t_C^2ψ_s^B(ℒ_l) + t^2[2ψ_s^B(ℒ_l)+ψ_s^B(ℒ_l-a_2) +ψ_s^B(ℒ_l-a_2+a_1) +∑_ν=±ψ_s^B(ℒ_l-νa_1)]. The missing terms of the TBEs at O_l and ℒ_l constitute the boundary conditions as follows: t_C^2ψ_s^B(O_l)+t^2[ψ_s^B(O_l-a_2) +ψ_s^B(O_l-a_2+a_1)]=0, t_A^2ψ_s^B(ℒ_l)+t^2[ψ_s^B(ℒ_l+a_2) +ψ_s^B(ℒ_l+a_2-a_1)]=0. Therefore, the boundary conditions are not indeterminate as suggested by Ref. <cit.>. By adopting ψ_s(r) in Eq. (<ref>), and assuming k^'=(k_x,-k_y) because momentum is conserved in the x direction, Eqs. (<ref>) and (<ref>) respectively reduce to 𝒜f_C(k)+𝒜^' f_C(k^') =0, 𝒜e^ik·ℒf_A(k^')+𝒜^' e^ik^'·ℒf_A(k) =0, where f_λ(k)=|f_λ (k)|e^-iϕ_λ(k), λ=A,C are defined by f_C (k)≡sin^2ϑ + β(k_x) e^-i√(3)k_ya/2, f_A (k)≡cos^2ϑ + β(k_x) e^-i√(3)k_ya/2. By combining Eqs. (<ref>) and (<ref>), we get e^ik_yℒ_yf_A(k^')f_C(k^')+e^-ik_yℒ_yf_A(k)f_C(k) = 0. Thus, the quantization condition is derived as follows: √(3)/2(J-1)k_ya + ϕ_0(k) = nπ, for n=1,…, J, where we define ϕ_0(k)= ϕ_A(k)+ ϕ_C(k). Note that Eq. (<ref>) is not mathematically analogous to those of the BCA and CAB ZRs [see Eq. (<ref>)]. By taking derivative of Eq. (<ref>) with respect to k_y and setting k_y = 2π/√(3)a, we obtain J+1/J-1β^2 - J/J-1β + 1/4sin^2(2ϑ)=0, for J≥ 2. Similarly, the solutions of Eq. (<ref>) are denoted by β_J^±=2cos(χ_J^±/2). χ_J^± converge to χ_∞^+ ∼ (2/ a)cos^-1(cos^2ϑ/ 2), χ_∞^- ∼ (2/ a)cos^-1(sin^2ϑ/ 2). Except for α=0, the transitions from bulk to edge states do not occur at |k_x|=2π/3a, unlike the BCA and CAB ZRs. Fig. <ref>(b) shows χ_J^± as a function of J=2,…, 60 for α=0.8. The solid and dashed lines indicate χ_∞^+≈ 0.80277π/a and χ_∞^-≈ 0.87498π/a, respectively. Fig. <ref>(a) shows ϕ_0(k) for several k_x's and -ℓ_n(k_y), where ℓ_n(k_y)≡√(3)(J-1)k_ya/2 - nπ for J=20. We can see that along k_y∈(0,2π/√(3)a), ϕ_0 for k_x=0.83π/a [0.91π/a] does not intersect -ℓ_J [-ℓ_J and -ℓ_J-1], thus two [four] bulk states are missing in the dispersive bands. To demonstrate that the edge states arise from the missing missing bulk states, we substitute k_y=2π/√(3)a+iκ_y into Eq. (<ref>). The existence of edge states requires δ_0(κ_y) ≡β^2 - sinh(√(3)Jκ_ya/2)/sinh(ρκ_y)β+sinh(ℒ_y κ_y)/sinh(ρκ_y)sin^2(2ϑ)/4 =0. Fig. <ref>(b) shows δ_0(κ_y) for several k_x's. The number of zero-crossings for k_x=0.83π/a [0.91π/a] is one [two], which indicates the existence of two [four] edge states. For Jκ_y a≫ 1, Eq. (<ref>) is rearranged into e^2κ_yβ^2 - e^κ_yβ + sin^2(2ϑ)/4=0. It is noted that Eqs. (<ref>) and (<ref>) become identical in the limits κ_y→ 0 and J→∞, respectively. The solutions of Eq. (<ref>) are given by κ_y^+ = ln(cos^2ϑ/β), κ_y^- = ln(sin^2ϑ/β). By substituting κ_y^± into Eq. (<ref>), the energies of edge states are given as follows: ε̃_s^+ =st_C√(1-(β/cosϑ)^2), ε̃_s^- =st_A√(1-(β/sinϑ)^2). Since κ_y^±>0, the edge states associated with ε̃_s^± appear in the range χ_∞^±<|k_x|≤π/a, where t_Csinϑ < |ε̃_s^+| ≤ t_C and t_Acosϑ < |ε̃_s^-| ≤ t_A. Note that for α=0, the edge states with ε̃_s^- vanish. The remaining edge states with ε̃_s^+=0 exist for |k_x|∈ (2π/3a,π/a] because the ABC ZR reduces to the graphene ZR by removing the C sites. Fig. <ref>(a) shows the numerical calculation of energy spectra as a function of k_x. The vertical dashed and dotted lines accordingly mark χ_20^+≈ 0.8116π/a and χ_20^-≈ 0.8818π/a. 
The bold red lines in the bands labeled by "+" and "-" depict the edge states with energies ε̃_s^+ and ε̃_s^-, respectively. At |k_x| =π/a, the edge states are pinned at ε=t_C≈ 0.6247t and ε=t_A≈ 0.7809t, consistent with Eqs. (<ref>) and (<ref>). Figs. <ref>(b) and (c) [(d) and (e)] depict the probability density |ψ|^2 per site m for the "-" ["+"] bands at k_x=0.83π/a and 0.91π/a, respectively. Since 0.83π/a<χ_20^-<0.91π/a, Figs. <ref>(b) and (c) accordingly show the profile of bulk and edge states. On the other hand, both Figs. <ref>(d) and (e) exhibit the profile of edge states because χ_20^+<0.83π/a<0.91π/a. § DERIVATIONS OF EQS. (<REF>), (<REF>) First, let us determine 𝒜^' and k^' in ψ_s(r) [see Eq. (<ref>)]. Eqs. (<ref>) or (<ref>) imply 𝒜^' = - 𝒜 and k·a_1 = k^'·a_1, due to the conservation of momentum in the x direction. It can be shown that the conservation of energy ε (k) = ε (k^') gives k·a_1 = k^'·a_1=(k + k^')·a_2 [see Eq. (<ref>)]. By using the vector components of a_1 and a_2, we get k = (k_x, k_y), and k^' = (k_x, -k_y). Next, we write Eq. (<ref>) as f(k)=e^-ik·b_ng_n(k), where g_1(k) = 1 + e^-ik·a_1 + e^-ik·a_2 = q(k)e^-ik·a_1/2, g_2(k) = 1 + e^ik·a_1 + e^ik·(a_1-a_2) = q(k)e^ik·a_1/2, g_3(k) = 1 + e^ik·a_2 + e^ik·(a_2-a_1) = p^*(k). Note that |g_n(k)|=|f(k)|. p(k) and q(k) are defined in Eqs. (<ref>) and (<ref>), respectively. §.§ Derivation of Eq. (<ref>) Let us express ψ_s(r) in terms of |ψ_s(k)⟩ [see Eq. (<ref>)]. Multiplying Eq. (<ref>) by |f(k)| yields e^-ik·Lf(k)-e^-ik^'·Lf(k^')+ e^-ik·(L+a_1)f(k)-e^-ik^'·(L+a_1)f(k^')+ α^2{e^-ik·(L-b_1)f^*(k)-e^-ik^'·(L-b_1)f^*(k^')} =0. In the middle unit cell in Fig. <ref>(a), the A and C sites are connected to the B site by b_3 and b_1, respectively. By expressing f (f^*) in terms of g_3 (g_1^*), Eq. (<ref>) becomes {1+e^-ik_xa}{e^-ik_yρg_3(k)-e^ik_yρg_3(k^') }+ α^2e^-ik_xa{e^-ik_yρg_1^*(k) - e^ik_yρg_1^*(k^') } =0. Multiplying Eq. (<ref>) by e^ik_xa/2 and grouping the resulting terms gives Eq. (<ref>). §.§ Derivation of Eq. (<ref>) By following the same procedure, we multiply Eq. (<ref>) by |f(k)| as follows: e^ik·(L-b_2)f(k)-e^ik^'·(L-b_2)f(k^')+ α^2{e^ik·Lf^*(k)-e^ik^'·Lf^*(k^')}+ α^2{e^ik·(L-a_1)f^*(k)-e^ik^'·(L-a_1)f^*(k^')} =0. In the top unit cell in Fig. <ref>(a), the A (C) site is connected the B site by b_2 (b_3). Hence, f and f^* in Eq. (<ref>) are expressed in terms of g_2 and g_3^*, respectively, e^-ik_xa{e^ik_yρg_2(k)-e^-ik_yρg_2(k^')}+ α^2{1+e^-ik_xa}{e^ik_yρg_3^*(k) - e^-ik_yρg_3^*(k^')} =0. Multiplying Eq. (<ref>) by e^ik_xa/2 and grouping the resulting terms leads to Eq. (<ref>). pratrev 112 fxundefined [1] ifx#1 fnum [1] #1firstoftwo secondoftwo fx [1] #1firstoftwo secondoftwo noop [0]secondoftwo ref[1]@startlink#1@href href[1]#1@endlink anitize@url [0]` 12`$12`&12`#12`1̂2`_12`%12 startlink[1] endlink[0] rl [1]href #1 @bib@innerbibempty [Hasan and Kane(2010)]hasan2010 author author M. Z. Hasan and author C. L. Kane, title title Colloquium: topological insulators, @noop journal journal Rev. Mod. Phys. volume 82, pages 3045 (year 2010)NoStop [Cooper et al.(2019)Cooper, Dalibard, and Spielman]cooper2019 author author N. Cooper, author J. Dalibard, and author I. Spielman, title title Topological bands for ultracold atoms, @noop journal journal Rev. Mod. Phys. volume 91, pages 015005 (year 2019)NoStop [Cayssol and Fuchs(2021)]cayssol2021 author author J. Cayssol and author J.-N. Fuchs, title title Topological and geometrical aspects of band theory, @noop journal journal J. Phys. Mater. 
http://arxiv.org/abs/2406.19153v1
20240627131342
A Generalized Theory for Optical Cooling of a Trapped Atom with Spin
[ "Saumitra S. Phatak", "Karl N. Blodgett", "David Peana", "Meng Raymond Chen", "Jonathan D. Hood" ]
physics.atom-ph
[ "physics.atom-ph", "quant-ph" ]
hoodjd@purdue.edu

§ ABSTRACT

Cooling atoms to the ground-state of optical tweezers is becoming increasingly important for high-fidelity imaging, cooling, and molecular assembly. While extensive theoretical work has been conducted on cooling in free space, fewer studies have focused on cooling in bound states. In this work, we present a unified formalism for optical cooling mechanisms in neutral atom tweezers, including resolved and unresolved sideband cooling with different trapping potentials, polarization gradient cooling, gray molasses cooling, Λ-enhanced gray molasses cooling, and Raman sideband cooling. We perform simulations and demonstrate good agreement with a simplified spin model. We derive and discuss the fundamental limits of each cooling mechanism and propose new strategies for achieving ground-state cooling in optical tweezers. Our findings provide valuable insights into optimizing cooling schemes for neutral atoms in optical tweezers, paving the way for minimizing thermal decoherence in Rydberg and molecular gates and improving efficiencies of molecular assembly.

A Generalized Theory for Optical Cooling of a Trapped Atom with Spin
Jonathan D. Hood
July 1, 2024
====================================================================

§ INTRODUCTION

Laser cooling of bound neutral atoms has recently gained renewed significance in experiments involving tweezer arrays of atoms and molecules <cit.>. In systems of Rydberg atoms <cit.> or molecules <cit.> with dipolar coupling, thermal motion limits the coherence of interactions. Molecular assembly <cit.> and few-body physics <cit.> studies require high-fidelity ground-state preparation of multiple atoms. A wide range of optical cooling techniques have been developed for trapped atoms, illustrated in Fig. <ref>. Single-photon cooling schemes, such as sideband cooling, have been used for atoms with narrow lines for high-fidelity imaging and ground-state preparation <cit.>. Two-photon schemes, including polarization gradient (PG) <cit.>, gray molasses (GM) and Λ-enhanced gray molasses (Λ-GM) <cit.>, and Raman-sideband cooling (RSB) <cit.>, have been successfully applied for imaging and cooling of atoms without narrow lines. While extensive theoretical work has been developed for sideband cooling <cit.>, many of the other cooling schemes for bound atoms are still interpreted through their free-space pictures, for example in Sisyphus cooling where an atom moves through a changing polarization <cit.>. However, this picture breaks down for a tightly trapped atom in the Lamb-Dicke regime, where the atom wavefunction is smaller than the wavelength.

In this work, we develop a generalized optical cooling model for bound atoms with spin in a single or counter-propagating beam configuration with different polarizations. We validate these models using master equation simulations that include the ground and excited spins and the harmonic oscillator states. Our analysis reveals a consistent picture across various cooling schemes, shown in Fig. <ref>. Cooling occurs when |n ⟩ and |n-1 ⟩ are Raman coupled for different spin states and brought into resonance using a combination of light shifts (PG, GM, Λ-GM) and two-photon detunings (RSB). Sideband cooling has been extensively studied in previous works <cit.>, but cooling of a bound atom with spin has been much more limited.
Previous work on cooling bound atoms with spin has primarily focused on specific cases, such as J=1/2 to J'=3/2 transitions in counter-propagating orthogonal linear light <cit.> or linear light with standing waves <cit.>. We develop a new formalism for arbitrary spin and polarization, with a particular emphasis on the experimentally relevant σ_+ - σ_- configuration. Notably, while these previous theoretical studies found a fundamental limit of ⟨ n ⟩≈ 1, we demonstrate that schemes like GM and Λ-GM can be used for high ground-state preparation when considering different polarization and spin configurations. A recent experimental result achieved a record-high imaging fidelity of 99.96% for lithium using Λ-enhanced GM cooling with significant ground-state preparation, despite lithium having the largest recoil heating <cit.>. This raises the question of whether spin cooling techniques can be further optimized for ground-state preparation. The unified model also allows us to explore novel cooling schemes that go beyond any single conventional cooling technique.

The paper is structured as follows. Section <ref> develops the general master equation for spin cooling and derives effective ground-state spin operators by adiabatically eliminating the excited state. Section <ref> applies this formalism to sideband cooling, including cases with different excited state trapping frequencies. Section <ref> develops a formalism for spin cooling and applies it to PG, GM, Λ-GM, and RSB cooling. Each section compares exact simulations to simplified models and contrasts various cooling techniques, with the goal of creating a unified picture that will lead to more coherent interactions and higher-fidelity ground-state preparation.

§ THEORY: LASER COOLING IN A HARMONIC POTENTIAL

§.§ Master equation for laser cooling

In this section, we develop a master equation for an atom confined in a tight harmonic trap and interacting with a semi-classical electric field. In a semi-classical approximation, we take the expectation of the electric field but keep the atom position x̂ as a quantum operator <cit.>. The electric field for a single laser frequency ω_L is E(x̂ , t) = E_0 ϵ( x̂ ) e^- i ω_L t + h.c. . E_0 is a real field amplitude and the last term is the Hermitian conjugate. The complex polarization and phase are contained within ϵ(x̂) and can describe a traveling wave or a standing wave with various polarization configurations. The atom has ground hyperfine spin states denoted as |F_g m_g⟩ and excited hyperfine spin states denoted as |F_e m_e⟩. The excited- and ground-state projection operators are P_e = ∑_m_e |F_e m_e⟩⟨ F_e m_e| and P_g = ∑_m_g |F_g m_g⟩⟨ F_g m_g|. For an optically trapped atom, the ground and excited states can generally experience different potentials due to their differing polarizabilities. This difference will be addressed later, but for now, we will assume that the ground and excited states share the same harmonic trapping frequency ν. The total Hamiltonian for the bound atom interacting with the electric field in the rotating frame of the laser is <cit.> Ĥ = νâ^†â + ( Δ - i Γ/2) P_e - Ω/2( ϵ(x̂ ) ·D̂^† + h.c. ). Δ = ω_A - ω_L is the detuning of the atom from the laser. The Rabi frequency Ω is defined in terms of the reduced matrix element times the electric field amplitude Ω = ⟨ J_e || d || J_g ⟩ E_0/ħ. The atomic operator D̂^† is a raising operator for all the hyperfine levels in the ground-state, and D̂ is the corresponding lowering operator.
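As a reproducibility aid, the Hamiltonian above can be represented in a few lines of QuTiP. The sketch below is a minimal version for a two-level atom with no hyperfine structure, with illustrative parameter values of our own choosing (not taken from the paper); the non-Hermitian -iΓ/2 term is omitted here because, in the Lindblad form used by QuTiP, it is generated automatically by the collapse operators introduced below.

# Minimal QuTiP sketch of the trapped, driven two-level atom (spin structure omitted).
# All parameter values are illustrative only.
import qutip as qt

N     = 15            # truncated harmonic-oscillator basis
Gamma = 1.0           # excited-state linewidth (frequency unit)
nu    = 0.2 * Gamma   # trap frequency
Omega = 0.05 * Gamma  # Rabi frequency (low saturation)
eta   = 0.1           # Lamb-Dicke parameter
Delta = nu            # example value: the excited state sits nu above the ground state in
                      # this rotating frame, i.e. the laser is one trap quantum below resonance

a  = qt.tensor(qt.destroy(N), qt.qeye(2))        # motional lowering operator
g, e = qt.basis(2, 0), qt.basis(2, 1)
sm = qt.tensor(qt.qeye(N), g * e.dag())          # D = |g><e|
Pe = qt.tensor(qt.qeye(N), e * e.dag())          # excited-state projector

recoil = (1j * eta * (a + a.dag())).expm()       # e^{i k x}, with x in units of x_0
H = (nu * a.dag() * a + Delta * Pe
     - 0.5 * Omega * (recoil.dag() * sm.dag() + recoil * sm))   # traveling-wave drive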
They are defined in terms of the dipole operator d̂ normalized by the reduced matrix element, D̂ = P_gd̂ P_e/⟨ J_e || d || J_g⟩, D̂^† = P_ed̂ P_g/⟨ J_e || d || J_g⟩. The Hamiltonian in Eq. <ref> also contains a non-Hermitian imaginary term with the decay rate Γ of the excited state due to spontaneous emission. In the Wigner-Weisskopf model, the excited state decays due to spontaneous emission without repopulating the ground-state. This leads to a reduction in the wavefunction normalization. The master equation for the density matrix contains refeeding terms with collapse operators L_i that repopulate the ground-state and preserve the density matrix normalization: d ρ̂/dt = -i ( Ĥρ̂ - ρ̂Ĥ^† ) + ∑_iL̂_i ρ̂L̂^†_i. The collapse operators contain both the lowering of the excited state and the momentum recoil from emitting a photon: L̂_n,q = √(Γ) R̂_n,qD̂_q. The collapse operators have two effects when operating on the density matrix. First, they lower the atom back to the ground-state. D̂_q is the lowering operator in the spherical vector basis D̂_q = ê^*_q ·D̂, where ê_q is a spherical basis vector. Second, the recoil operator R̂_n,q imparts a momentum kick due to the photon recoil after the emission of a photon with polarization q along direction n. In general, the recoil operator is averaged over the angular emission profile for each polarization. If we assume that the photon is emitted along either direction of the x-axis, then the recoil operators are R_±,q = e^± ikx, and we have a total of six collapse operators for the two emission directions and three polarizations L̂_±,q = √(Γ) e^± i k x̂D̂_q. In 1D, the photon recoil operator R_± = e^± i k x̂ can be understood as a momentum translation operator that induces a momentum kick due to the recoil from an emitted photon, with e^± i k x̂ |ψ(p) ⟩ = |ψ(p ±ħ k) ⟩.

We can express the position operator in terms of the harmonic oscillator creation and annihilation operators x̂ = x_0 ( â + â^† ), where x_0 = √(ħ/ 2 m ν) is the width of the ground-state wavefunction. The recoil operator becomes e^i k x̂ = e^i η ( â + â^†), where the Lamb-Dicke (LD) parameter η = k x_0 is the ratio of the ground-state wavefunction width to the wavelength. The square of the LD parameter also happens to be equal to the ratio of the photon recoil energy E_recoil = ħ^2 k^2/ (2m) to the harmonic energy ħν, or η^2 = E_recoil/(ħν). When η≪ 1, then the recoil from photon emission or absorption is unlikely to change n, and the ground-state wavefunction is smaller than the cooling laser's wavelength. In this regime, we can also approximate the recoil operator as e^ i k x̂≈ 1 + i η (â + â^†) . The result of a recoil is then e^ i k x̂ |n ⟩≈ |n ⟩ + iη√(n) |n-1 ⟩ + iη√(n+1)|n+1 ⟩. The zeroth order term is from n → n, and the first order LD term is from n → n ± 1. The LD regime is defined as when η^2 (2n+1) ≪ 1. One common misconception is that in the LD regime, heating is suppressed. Although the probability of n changing decreases, the overall heating rate is still given by the scattering rate times the recoil energy. More generally, the matrix element can be expressed in terms of the generalized Laguerre polynomial as <cit.> ⟨ n' | e^i k x̂ | n ⟩ = e^-η^2/2(n_<!/(n_< + Δ n)!)^1/2 (i η )^Δ n L_n_<^Δ n(η^2) . Here, n_< is the lesser of n and n', and Δ n = |n'-n|. This functional form becomes important when cooling outside the LD regime. The n → n-1 transition goes to zero for higher n, creating dark motional states where the population gets stuck.
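The Lamb-Dicke expansion and the closed-form Laguerre expression can be checked against each other numerically. The sketch below builds e^{ikx̂} in a truncated number basis and compares a few matrix elements with the generalized-Laguerre formula quoted above; the basis size and the value of η are arbitrary test choices, not parameters from the paper.

# Compare <n'| e^{i k x} |n> from a truncated-basis matrix exponential
# with the closed-form generalized-Laguerre expression quoted above.
import numpy as np
from math import factorial
from scipy.linalg import expm
from scipy.special import eval_genlaguerre

N_basis = 80          # truncation (large enough that low-n elements are converged)
eta = 0.3             # Lamb-Dicke parameter (arbitrary test value)

a = np.diag(np.sqrt(np.arange(1, N_basis)), k=1)   # annihilation operator in the number basis
U = expm(1j * eta * (a + a.T))                     # e^{i k x}, x = a + a^dag in units of x_0

def laguerre_element(n_prime, n, eta):
    n_lt, dn = min(n_prime, n), abs(n_prime - n)
    return (np.exp(-eta**2 / 2)
            * np.sqrt(factorial(n_lt) / factorial(n_lt + dn))
            * (1j * eta) ** dn
            * eval_genlaguerre(n_lt, dn, eta**2))

for n_prime, n in [(0, 0), (0, 1), (1, 0), (2, 1), (3, 5)]:
    assert np.isclose(U[n_prime, n], laguerre_element(n_prime, n, eta), atol=1e-8)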
Cooling can still occur by addressing higher order sidebands, as shown experimentally in Ref. <cit.>. The master equation here is simulated throughout this paper for different cooling schemes using the steady-state solver in QuTiP <cit.>. We include up to 15 harmonic states, which for the larger spin systems in this paper, such as F=1 to F=2, takes approximately one minute per simulation.

§.§ Effective master equation within ground-state subspace

Most optical cooling processes take place when there is low saturation, meaning that only a small amount of the steady-state population is in the excited state. The excited state can then be reasonably approximated from the ground state, and can be adiabatically eliminated. In this section, we will derive the effective ground-state master equation. In situations where the excited state quickly reaches an equilibrium, the excited state can be solved in terms of the ground-states and then eliminated from the time evolution equations. The elimination results in an effective Hamiltonian and collapse operators that give the dynamics within the ground-state. These effective operators can be derived by performing these operations for a given Hamiltonian <cit.>, but Ref. <cit.> provides a compact general formalism that we use here. This first form is valid even when the cooling light is close to resonance, such as in sideband cooling. The Hamiltonian and collapse operators within the ground-state subspace are H_eff = ν a^† a -1/2[ ∑_m_g,m_eP_g H |m_e ⟩⟨ m_e| H|m_g ⟩⟨ m_g | /( Δ_m_e, m_g - i Γ/2 ) + h.c. ] L_eff^i = L_i ∑_m_g, m_e |m_e ⟩⟨ m_e| H | m_g ⟩⟨ m_g | /Δ_m_e, m_g - i Γ/2 . Here the laser detuning Δ_m_g,m_e = ω_L - (E_m_e - E_m_g ) appears in the denominator. For the case of an atom with spin interacting with a single frequency laser field, as in the Hamiltonian in Eq. <ref>, the effective ground-state Hamiltonian is H_eff = νâ^†â - Ω^2/8[ ∑_m_g,m_e (ϵ (x̂) ·D) |m_e ⟩⟨ m_e | (ϵ^*(x̂) ·D^†) |m_g ⟩⟨ m_g |/Δ_m_g,m_e - i Γ/2 + h.c.] and the collapse operator for the emission of a photon with polarization q is: L_eff^q, n = - √(Γ) Ω/2∑_m_g, m_e (ℛ_ne_q^* ·D)|m_e ⟩⟨ m_e| (ϵ(x̂) ·D^†)| m_g ⟩⟨ m_g |/Δ_m_g,m_e - i Γ/2 . The effective Hamiltonian includes light shifts, two-photon Raman coupling, and decay due to spontaneous emission. The collapse operators repopulate the ground-state.

Two-photon cooling schemes operate in the large detuning limit. In the large detuning limit, the state dependence of the detuning in the denominator vanishes, Δ_m_g, m_e≈Δ. The sum over the excited states in the numerator becomes the identity. The effective ground-state operators simplify to: H_eff = ν a^† a -1/2 Δ( P_g H P_e H P_g ) L_eff^q = L_q P_e H P_g /Δ - i Γ/2. For the single frequency Hamiltonian in Eq. <ref>, it becomes H_eff = νâ^†â - Ω^2/4 Δ[ (ϵ^*(x̂) ·D) (ϵ(x̂) ·D^†)] and the ground-state collapse operator in the large detuning limit is L_eff^q,n = - √(Γ) Ω/2[ ( ℛ_nê_q^* ·D) (ϵ(x̂) ·D^†) /Δ - i Γ/2]. The interaction part of the ground-state Hamiltonian describes two-photon processes such as light shifts and Raman transitions. The right term in the numerator ϵ(x̂) ·D̂^† excites the atom due to absorption of a photon from the cooling light. The left term ϵ^*(x̂) ·D̂ returns the atom to the ground-state with the emission of a photon into the cooling light. The ground-state collapse operator describes the redistribution of population within the ground-state due to spontaneous emission. It has a similar form to the interaction Hamiltonian.
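The full steady-state simulation mentioned above is straightforward to set up. The sketch below is our own minimal version for a two-level atom (no hyperfine structure) with 1D emission and illustrative parameters; the laser is placed one trap quantum below the atomic resonance so that the red sideband is driven. Passing the Hermitian Hamiltonian together with the recoil collapse operators to QuTiP is equivalent to the non-Hermitian-Hamiltonian-plus-refeeding form written above, since the Lindblad anticommutator reproduces the -iΓ/2 P_e term.

# Steady state of the full master equation for sideband cooling of a trapped two-level atom,
# using QuTiP's steady-state solver (1D emission, illustrative parameters).
import qutip as qt

N     = 15
Gamma = 1.0
nu    = 5.0 * Gamma    # resolved-sideband regime (nu >> Gamma)
Omega = 0.05 * Gamma
eta   = 0.05

a  = qt.tensor(qt.destroy(N), qt.qeye(2))
g, e = qt.basis(2, 0), qt.basis(2, 1)
sm = qt.tensor(qt.qeye(N), g * e.dag())
Pe = qt.tensor(qt.qeye(N), e * e.dag())
recoil = (1j * eta * (a + a.dag())).expm()

delta_e = nu           # excited-state energy in the rotating frame: the laser sits one trap
                       # quantum below the atomic resonance (red sideband)
H = nu * a.dag() * a + delta_e * Pe - 0.5 * Omega * (recoil.dag() * sm.dag() + recoil * sm)

# Collapse operators: spin lowering plus photon recoil along +/- x.  In Lindblad form these
# also generate the -i Gamma/2 P_e term of the non-Hermitian Hamiltonian.
c_ops = [Gamma**0.5 * recoil * sm, Gamma**0.5 * recoil.dag() * sm]

rho_ss = qt.steadystate(H, c_ops)
n_ss = qt.expect(a.dag() * a, rho_ss)
print(f"steady-state <n> = {n_ss:.4f}")   # compare with (Gamma/nu)^2 (alpha + 1/4)/4, alpha = 1 in 1D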
The right term ϵ(x̂) ·D̂^† excites the atom due to the absorption of a photon from the cooling light. The left term ℛ_ne_q^* ·D̂ returns the atom to the ground-state by emitting a photon into free-space and gives the atom a momentum kick through the recoil operator ℛ_n. This momentum kick is the key difference between the two operators. After adiabatic elimination, both the effective Hamiltonian and the collapse operators take the form of scalar products of the dipole operators with the field polarization, H ∝ (A^* ·D̂ ) (D̂^†·B) = (A^* ⊗B) · (D̂⊗D̂^†) . These operators act within the ground-state spin manifold just like the hyperfine spin operators F̂_i, F_±. The right-hand side is the scalar product of two rank-2 tensors, so using the Wigner-Eckart theorem and properties of spherical vectors, we can express these in terms of spin operators. The spin Hamiltonian is <cit.> H_eff = νâ^†â - Ω^2/4 Δ[ C^(0) | ϵ(x̂)|^2 + C^(1) i (ϵ^*(x̂) ×ϵ(x̂)) + C^(2)( ( (ϵ^*(x̂) ·F̂) (ϵ(x̂) ·F̂) + c.c. )/2 - 1/3F̂^2 |ϵ(x̂)|^2 ) ] and the collapse operators which give the redistribution of population in the ground-state due to spontaneous emission are L_eff = -√(Γ)Ω/2 Δ[ C^(0) (ℛ_nê_q^*) ·ϵ(x̂) + C^(1) i ((ℛ_nê_q^*) ×ϵ(x̂)) + C^(2)/2( ( (ℛ_nê_q^*) ·F̂) (ϵ(x̂) ·F̂) + c.c. ) - 1/3F̂^2 |(ℛ_nê_q^*) ·ϵ(x̂)| ] . The effective spin operators are valid for any ground and excited spin states. The excited state only affects the C^(i) coefficients. The ground-state collapse operators in Eqs. <ref> (far detuned limit) and <ref> (near-resonant limit) describe the redistribution of the population within the ground-state. With the effective ground-state Hamiltonian, a master equation incorporating these operators describes the ground-state dynamics. The Hamiltonian accounts for energy shifts and coherent transfer between states, while the collapse operators account for incoherent redistribution due to spontaneous emission.

Next, we will derive an approximate population rate equation by working in the basis where H_eff is diagonal. We define an interaction picture by applying the unitary transformation à = U A U^† and ρ̃ = U ρ U^† with U = e^-i H_eff t, so that the coherent evolution generated by H_eff is absorbed into the transformed operators and density matrix. In the interaction picture, the master equation is: dρ̃/dt = ∑_q [ L̃_eff,q ρ̃ L̃_eff,q^† - 1/2{L̃_eff,q^†L̃_eff,q , ρ̃}]. Finally, we derive rate equations for the populations by multiplying by ⟨ n | on the left and | n ⟩ on the right and defining the populations P(n) = ρ̃_nn = ρ_nn. The resulting equation contains coherence terms ⟨ n | ρ | m ⟩ between ground-states. If we work in the basis where the ground-state Hamiltonian is diagonal, these coherences are less important and can often be ignored. By setting all these coherences to zero, we obtain a population rate equation: dP(n)/dt = - P(n) ∑_mγ_n → m + ∑_mγ_m → n P(m), where the transfer rate between ground-states n and m is: γ_n → m = ∑_q |⟨ m | L_eff,q | n ⟩|^2. Note that in this last equation, L_eff can be taken out of the interaction picture because as long as n and m are eigenstates of H_eff, the time-dependence cancels upon taking the magnitude. However, it is important to remember that the eigenstates must be those of H_eff. The steady-state conditions can then be found by solving the matrix equation for dP(n)/dt = 0.
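The population rate equation is a linear system, so its steady state can be obtained from a null-space (or least-squares) solve with a normalization constraint. The helper below is a generic sketch of this step: it takes a matrix of transfer rates γ_{n→m} (computed, for example, from the effective collapse operators) and returns the normalized steady-state populations. The function name, the rate convention, and the toy rates are ours, not the paper's.

# Generic steady-state solver for the population rate equation dP/dt = 0.
# Convention used here: gamma[m, n] holds the transfer rate n -> m.
import numpy as np

def steady_state_populations(gamma):
    n_states = gamma.shape[0]
    M = gamma.astype(float).copy()
    np.fill_diagonal(M, 0.0)
    M -= np.diag(M.sum(axis=0))           # dP_n/dt = sum_m M[n, m] P_m
    A = np.vstack([M, np.ones(n_states)]) # append normalization sum_n P_n = 1
    b = np.zeros(n_states + 1); b[-1] = 1.0
    P, *_ = np.linalg.lstsq(A, b, rcond=None)
    return P

# Toy example with nearest-neighbour rates of the Lamb-Dicke form used in the next section:
# gamma_{n -> n-1} = n * A_minus and gamma_{n -> n+1} = (n+1) * A_plus, with A_plus < A_minus.
n_max, A_plus, A_minus = 10, 0.2, 1.0
gamma = np.zeros((n_max, n_max))
for n in range(n_max):
    if n > 0:
        gamma[n - 1, n] = n * A_minus
    if n < n_max - 1:
        gamma[n + 1, n] = (n + 1) * A_plus
P = steady_state_populations(gamma)
print("steady-state <n> =", (np.arange(n_max) * P).sum())   # ~ A_plus/(A_minus - A_plus) = 0.25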
In this paper, we derive analytic solutions for different cooling schemes by picking an appropriate basis where the coherences between eigenstates are least important and solving this rate equation. The energy or temperature of a single atom confined in a 1D harmonic potential is given by E = ħν(⟨ n⟩ + 1/2), where ν is the trapping frequency, and ⟨ n⟩ is the average motional quantum number. However, it is important to note that the energy of a single atom in an optical tweezer can be manipulated by adiabatically varying the optical trap power <cit.>, where both the trapping frequency and the energy scale with the square root of the optical trap power. As a result, the motional quantum number, ⟨ n⟩, becomes a more relevant quantity to consider, as it remains conserved during adiabatic changes in the trap depth. Therefore, we choose to use ⟨ n⟩ as the cooling metric for comparing all the techniques in the following discussions.

§ SIDEBAND COOLING

For an atom confined in a harmonic trap within the Lamb-Dicke (LD) regime, Doppler cooling operates differently than in free space, as illustrated in Fig. <ref>(a). In this scenario, the cooling laser drives transitions from the motional state n to n-1, while spontaneous emission predominantly preserves the motional state, returning the atom to the ground-state in n-1. This process leads to net cooling because, in the LD regime, the recoil energy from photon emission is smaller than the energy spacing between trap levels (ħν). Doppler cooling of trapped atoms can be classified into two regimes based on the relationship between the trapping frequency ν and the atomic transition linewidth Γ:

* Resolved sideband cooling: when ν > Γ
* Unresolved sideband cooling: when ν < Γ

These regimes exhibit distinct cooling dynamics and efficiencies, which we will explore in subsequent sections. Sideband cooling was extensively studied theoretically in the 1980s by Refs. <cit.>. It has been demonstrated experimentally in ions <cit.>, neutral atoms <cit.>, and nanomechanical oscillators <cit.>. Here we present these results in our formalism. We then look at the case of different trapping frequencies and derive an analytic expression for the steady-state temperature.

Doppler cooling for a bound atom in the LD regime only requires a single traveling wave ϵ(x̂) = e^-i k x̂. For a two-level system, the dipole operators are D̂ = σ̂ = |g ⟩⟨ e|. The Hamiltonian is H = νâ^†â + Δ |e ⟩⟨ e| - Ω/2( e^-i k x̂D̂^† + h.c. ), and the collapse operators for emission into the positive and negative x-direction are L_± = √(Γ) e^± i k xD̂ . The resonant and spontaneous emission processes are shown in Fig. <ref>(b). To estimate the final population distribution, we can use the effective ground-state collapse operator in Eq. <ref> and the population transfer rates from Eq. <ref>. From these equations, the transfer rate between the ground-state motional states n_1 and n_2, through the excited harmonic states n_e, is γ_n_1 → n_2 = Γ Ω^2 ∑_±| ∑_n_e⟨ n_2 | e^± i k x̂ | n_e ⟩⟨ n_e| e^i k x̂ | n_1 ⟩/Δ + ν (n_1 - n_e ) - i Γ/2 |^2 . The sum inside the magnitude is over the excited motional states n_e, and pathways through each excited motional state can interfere. The sum on the outside is over the emission direction. Emission into each direction results in a different phase, and the average over all directions averages out the coherences between the different pathways.
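A direct numerical implementation of this transfer rate is sketched below, using the Laguerre expression for the recoil matrix elements. The parameter values are illustrative, and Δ is used in the sign convention for which the red (cooling) sideband corresponds to Δ = -ν, as in the results that follow.

# Sideband-cooling transfer rates gamma_{n1 -> n2}: coherent sum over the excited motional
# states inside the magnitude, incoherent sum over the two emission directions outside.
import numpy as np
from math import factorial
from scipy.special import eval_genlaguerre

def recoil_element(n_prime, n, eta):
    """<n'| e^{i k x} |n> from the generalized-Laguerre formula (eta may be signed)."""
    n_lt, dn = min(n_prime, n), abs(n_prime - n)
    return (np.exp(-eta**2 / 2) * np.sqrt(factorial(n_lt) / factorial(n_lt + dn))
            * (1j * eta) ** dn * eval_genlaguerre(n_lt, dn, eta**2))

def gamma_rate(n1, n2, Delta, nu, Gamma, Omega, eta, n_e_max=20):
    total = 0.0
    for sign in (+1, -1):                    # emission along +x or -x
        amp = 0.0 + 0.0j
        for n_e in range(n_e_max):           # coherent sum over excited motional states
            amp += (recoil_element(n2, n_e, sign * eta)
                    * recoil_element(n_e, n1, eta)
                    / (Delta + nu * (n1 - n_e) - 1j * Gamma / 2))
        total += abs(amp) ** 2
    return Gamma * Omega**2 * total

# Illustrative resolved-sideband numbers: the cooling rate (1 -> 0) dominates heating (0 -> 1).
Gamma, nu, Omega, eta, Delta = 1.0, 5.0, 0.05, 0.05, -5.0
print(gamma_rate(1, 0, Delta, nu, Gamma, Omega, eta),
      gamma_rate(0, 1, Delta, nu, Gamma, Omega, eta))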
As a result, we can take the sum over the excited states outside of the magnitude and omit the average over directions, γ_n_1 → n_2 = Γ Ω^2 ∑_n_e| ⟨ n_2 | e^ i k x̂ | n_e ⟩⟨ n_e| e^i k x̂ | n_1 ⟩/Δ + ν (n_1 - n_e ) - i Γ/2 |^2 . This rate describes two processes. The first term in the numerator |⟨ n_2 | e^ikx̂ | n_e⟩|^2 describes the process of recoil due to spontaneous emission. The rest of the expression gives the rate of absorption and recoil of photons from the cooling light. Together, these two terms give the total rate of photon absorption and emission into motional states. In the LD regime, the allowed motional state transition rates are γ_n → n+1 = η^2 (n+1) A_ + γ_n → n-1 = η^2 (n) A_ - where A_ + and A_ - are A_ + = R(Δ - ν) + α R(Δ ) A_ - = R(Δ + ν) + α R(Δ ). The function R(Δ) is the low-saturation two-level system scattering rate R( Δ ) = (Γ Ω^2/4)/(Δ^2+(Γ/2)^2) . The constant α is related to the averaging over emission directions. Assuming emission into 1D gives α=1, but averaging over emission into all directions leads to α = 1/3 <cit.>. In the LD regime, transfer only occurs between neighboring states, in which case the rate equation from Eq. <ref> is Ṗ(n) = η^2 A_ - (n+1) P(n+1) + η^2 A_ + n P(n-1) - η^2 A_ + (n+1) P(n) - η^2 A_ - n P(n) . The cooling laser (Fig. <ref>(b), blue) drives the n+1 and n-1 sidebands with effective Rabi frequencies Ωη√(n+1) and Ωη√(n). Decay (Fig. <ref>(b), red) of the excited state to a sideband occurs at a rate Γη^2 n.

Because the population only transfers between neighboring n in the LD regime, we can calculate the steady-state population by assuming no net flow between neighboring states: P(n+1) γ_n+1 → n = P(n) γ_n → n+1. Solving this equation leads to the steady-state condition P(n+1)/P(n) = A_ +/A_ -. We can set this equal to the Boltzmann factor to get the steady-state temperature: P(n+1)/P(n) = exp(-ħν / k_B T) . This represents a geometric distribution. Defining this ratio as s, the normalized solution is P(n) = (1-s) s^n, and the expectation and variance of n are ⟨ n⟩ = s/(1-s) and ⟨ (Δ n)^2 ⟩ = s/(1-s)^2. Multiplying by n and summing over n gives the time rate equation for ⟨ n ⟩: d⟨ n ⟩/dt = η^2 [ A_ + (⟨ n ⟩ + 1) - A_ -⟨ n ⟩]. The cooling rate is W_A = η^2 (A_ - - A_ +), which is balanced by the constant heating from spontaneous emission η^2 A_ +. The steady-state solution is ⟨ n ⟩ = A_ +/( A_ - - A_ +) = (R(Δ- ν) + α R(Δ))/( R(Δ+ν) - R(Δ - ν)) . In the resolved-sideband regime (Γ≪ν), the lowest temperature occurs at the cooling sideband Δ = - ν, at which the steady-state is ⟨ n ⟩ = 1/4(Γ/ν)^2 (α + 1/4) . In the unresolved-sideband regime (Γ≫ν), the steady state is <cit.> ⟨ n ⟩ = (1 + α)/8( Γ/(2 Δ) + 2 Δ/Γ) - 1/2.

In Fig. <ref> we simulate the exact solution using the master equation from Eq. <ref>. Figs. <ref>(c)-(d) are in the unresolved and resolved sideband regimes for η = 0.05 and in the low power limit Ω = 0.05 Γ. In the resolved regime in (d), best cooling occurs at Δ = -ν and agrees closely with Eq. <ref>. The unresolved regime in (e) is a zoomed plot of the lower left corner of (d). Theoretically, the lowest temperature occurs for Δ = -Γ/2, although the exact solution shows that it is broad and not much better than the resolved criterion Δ = -Γ. In Fig. <ref>(e), we plot steady-state population versus the trapping frequency for various LD parameters η. The population converges for η < 0.1.
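The closed-form results above are quick to evaluate numerically. The short script below computes ⟨n⟩ = A_+/(A_- - A_+) at the optimum detuning Δ = -ν and compares it with the resolved-sideband limit, using α = 1/3 for emission into all directions; the parameter values are illustrative.

# Analytic steady-state <n> for sideband cooling, evaluated at Delta = -nu,
# compared with the resolved-sideband limit (Gamma/nu)^2 (alpha + 1/4)/4.
import numpy as np

Gamma, Omega, alpha = 1.0, 0.05, 1/3

def R(Delta):
    return Gamma * Omega**2 / 4 / (Delta**2 + (Gamma / 2)**2)

def n_steady(Delta, nu):
    A_plus  = R(Delta - nu) + alpha * R(Delta)
    A_minus = R(Delta + nu) + alpha * R(Delta)
    return A_plus / (A_minus - A_plus)

for nu in [2.0, 5.0, 10.0, 20.0]:        # increasingly resolved sidebands
    exact = n_steady(-nu, nu)
    limit = (Gamma / nu)**2 * (alpha + 0.25) / 4
    print(f"nu/Gamma = {nu:5.1f}:  <n> = {exact:.5f}  (resolved limit {limit:.5f})")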
The performance of sideband cooling can be improved by adding a second counter-propagating beam with the same polarization to form a standing wave and placing the atom at the node of the field <cit.>. Although the atom now sees zero intensity at the equilibrium position, the sidebands are still driven. This can be seen by expanding the atom-field interaction as a series to first order in terms of the LD parameter and field gradient H_I = -Ω/2[ ( ϵ(0) + x̂dϵ(0)/d x̂) ·D^† +h.c. ]. The gradient can be either a phase or amplitude gradient, both of which drive the sideband. Intensity at the position of the atom does not lead to cooling and, in fact, only leads to extra heating. If we place the atom in a standing wave with linear polarization, the field is ϵ(x̂) = sin(k x̂ ). Fig. <ref>(f) simulates the steady-state population versus atom position with ν = -Δ. Theoretically, this eliminates the process in Fig. <ref>(b) of driving n to n followed by spontaneous-emission heating to the sideband. The steady-state solution is the same except with α = 0. Cooling can be further enhanced by placing the atom in a cavity that enhances the red sideband <cit.>.

§.§ Excited state with different trapping frequency

The Doppler cooling section assumed that the electronic ground and excited states have the same trapping frequency. However, the ground and excited states of neutral atoms typically have different polarizabilities except at "magic wavelengths." The different potentials lead to heating due to the abruptly changing dipole force during absorption or emission, shown in the schematics in Fig. <ref>. In the extreme case, the excited state is anti-trapped, resulting in loss of the atom if the excited-state lifetime is long compared to 1/ν. Because magic wavelengths are not often convenient for optical trapping, neutral atoms more often use two-photon cooling schemes, such as polarization gradient cooling and Raman sideband cooling. However, alkaline-earth atoms contain narrow linewidth transitions that have been used for imaging and cooling <cit.>, and even ground-state cooling in Ref. <cit.>. Alkali atoms also contain narrow lines from forbidden transitions, such as the S to D quadrupole transition <cit.> or (n)S to (n+1)P transitions <cit.>. While sideband cooling with different trapping frequencies has been investigated through simulations <cit.>, in this section we derive a novel analytical formula that captures the effects of different trapping frequencies and also reveals additional opportunities for cooling on higher-order sidebands.

We now derive the equations for an atom with a ground-state frequency ν and an excited state frequency ν_e. The excited state harmonic eigenstates are stretched or compressed relative to the ground-state eigenstates. The squeezing operator from quantum optics can therefore be utilized to relate the wavefunctions in the two potentials: <cit.> |m_e ⟩ = Ŝ^†(r) | n_e ⟩, where |n_e ⟩ is the ground-state basis and |m_e ⟩ is the excited-state basis. The squeezing operator Ŝ(r) is Ŝ(r) = exp( r/2 (â^2 - â^† 2) ) . The squeezing parameter r is related to the trapping frequencies by r = 1/2ln( ν_e/ν) . The excited state annihilation operator is related to the ground-state annihilation operator by â_e = Ŝ^†(r) â Ŝ(r) . The harmonic Hamiltonian is H_ h =ν ( â^†â )P_g + ν_e( â_e^†â_e) P_e = ν ( â^†â )P_g + ν_e Ŝ^†(r) (â^†â )Ŝ(r) P_e. The transfer rates between the ground-states are similar to Eq.
<ref>, where now the squeezing operators are used to modify the excited harmonic states: γ_n_1 → n_2 = Γ Ω^2 ×∑_±| ∑_n_e⟨ n_2 | e^± i k x̂Ŝ^†(r) | n_e ⟩⟨ n_e| Ŝ(r) e^i k x̂ | n_1 ⟩/ ( Δ + ν n_1 - ν_e n_e ) - i Γ/2 |^2 . The sum over the excited states cannot generally be taken out of the magnitude now because each term does not necessarily have the directional emission terms. The transition matrix elements now contain the squeezing operator. To first order in r and η, the matrix elements are ⟨ n_e | Ŝ(r) e^ik x̂ | n_1 ⟩ = ⟨ n_e | ( 1 + i η (â + â^†) +r/2 (â^2 -(â^†)^2)) | n_1 ⟩ . The first order LD terms couple n → n ± 1. In contrast, the first-order squeezing terms couple n → n ± 2, preserving the symmetry of the wavefunction. The resonance condition for the light absorption is now determined by the detuning Δ + ν n_1 - ν_e n_e, which we can re-express in terms of the difference trapping frequency can be rewritten as Δ + ν n_1 - ν_e n_e = (Δ + n_e Δν) + ν (ν_1 - ν_e), where the trapping frequency difference is Δν = ν_e - ν. For increasing n, the detuning between n_1 =n_e changes by ν n. We define the resonance condition for the n'th harmonic state as Δ_n = Δ + n Δν. The n → n ± 1 transition are similar to before γ_n → n+1 = η^2 (n+1) A_ + γ_n → n-1 = η^2 n A_ - . The rates are a function of the n dependent detunings A_ + = R(Δ_n - ν ) + α R(Δ_n ) A_ - = R(Δ_n + ν ) + α R(Δ_n ). In this context, the rates add incoherently due to the randomized phase acquired during emission in multiple directions. As illustrated in Fig. <ref>(a), the n-dependent detuning results in the first-order cooling sideband being resonant only for a specific vibrational quantum number n. For a less tightly trapped excited state (left panel), the cooling light shifts towards heating sidebands for higher n values. Conversely, lower n values gradually shift towards cooling of higher-order sidebands. This effect leads to instability when ν_e/ν < 1, where higher-energy populations heat out of the trap. When ν_e/ν > 1, a type of “cap" emerges, with heating occurring below a certain n value and cooling above it. Consequently, this cap effectively traps population below it. The squeezing operator now also allows n → n ± 2 transitions to first order in r: γ_n → n+2 = r^2 (n+2) (n+1) B_ + γ_n → n-2 = r^2 n (n-1) B_ - . The new rates B are a sum over different excited states, but the coherence/interference now matters because the coupling does not involve spontaneous emission to first order, B_ -= | T( Δ_n) - T(Δ_n + 2 ν ) |^2 B_ + = | T(Δ_n ) - T(Δ_n - 2 ν ) |^2 . The complex scattering rates are T(Δ) = √(Γ)Ω/2/Δ - i Γ/2. and are related to the R rates by |T(Δ)|^2 = R(Δ). Interestingly, the destructive interference between the two excited pathways in the n → n ± 2 transition results in a transfer rate that cancels for large detuning, leading to much lower heating rates at large detuning. Next we estimate the final temperature as a function of Δν. We construct a rate equation for ⟨ n ⟩ by multiplying the rate equation in Eq. <ref> by n and summing over all n. This leads to the rate equation d ⟨ n ⟩/dt = η^2 [ A_ + (⟨ n ⟩ + 1) - A_ -⟨ n ⟩] + 2 r^2 [ B_ +⟨ (n+2)(n+1)⟩ - B_ -⟨ n (n-1)⟩] . Although we would need a separate expression for ⟨ n^2 ⟩ to solve this exactly, we can find an approximate solution by assuming the population follows a geometric series with a Boltzmann factor, as was the case in the Doppler section. 
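The n dependence of these rates can be made concrete with a short numerical sketch; it evaluates A_± and B_± at Δ_n = Δ + n Δν exactly as written above (the sign convention of Δ_n is taken as given) and prints, for each n, the difference between the total downward (n → n-1, n → n-2) and upward (n → n+1, n → n+2) transition rates. All parameter values are chosen only for illustration.

```python
# Sketch: n-dependent sideband (A_+/-) and squeezing (B_+/-) rates for nu_e != nu.
import numpy as np

Gamma, Omega, alpha = 1.0, 0.05, 1.0 / 3.0
eta = 0.1
nu, nu_e = 2.0, 2.2                 # ground / excited trap frequencies (units of Gamma)
dnu = nu_e - nu
r = 0.5 * np.log(nu_e / nu)         # squeezing parameter
Delta = -nu                         # drive the first red sideband at n = 0

def R(d):
    return Gamma * Omega**2 / 4.0 / (d**2 + (Gamma / 2.0)**2)

def T(d):
    return np.sqrt(Gamma) * (Omega / 2.0) / (d - 1j * Gamma / 2.0)

for n in range(0, 26, 5):
    Dn = Delta + n * dnu            # n-dependent detuning, convention as in the text
    A_p, A_m = R(Dn - nu) + alpha * R(Dn), R(Dn + nu) + alpha * R(Dn)
    B_p = abs(T(Dn) - T(Dn - 2 * nu)) ** 2
    B_m = abs(T(Dn) - T(Dn + 2 * nu)) ** 2
    down = eta**2 * n * A_m + r**2 * n * (n - 1) * B_m
    up = eta**2 * (n + 1) * A_p + r**2 * (n + 2) * (n + 1) * B_p
    print(f"n = {n:2d}   down - up transition rate = {down - up:+.2e}")
```

The sign change of the printed quantity marks the vibrational level at which the first-order sideband drifts from cooling to heating for the chosen Δ and ν_e/ν.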
Then we can use the property ⟨ n^2 ⟩ = 2 ⟨ n ⟩^2 + ⟨ n ⟩ for geometric distributions and solve the equation numerically to estimate the steady-state ⟨ n ⟩, which gives the equation 0 = η^2 [ ⟨ n ⟩ (A_ + -A_ - ) + A_ +] + 4 r^2 [ ⟨ n ⟩^2 (B_ + -B_ - ) + B_ + (2 ⟨ n ⟩ + 1) ] . B_± and A_± are also n dependent through the detuning Δ_n. This solution agrees well for small r and η, with the main assumption that the distribution follows a geometric series. This will break down for high populations as the cooling and heating become n dependent. Cooling is a balance between the sideband and squeezing terms. Cooling due to sidebands is W_A ≈η^2 (A_- - A_+), while squeezing cooling is W_B ≈ 2 r^2 (B_- - B_+), with the squeezing dependence being quadratic. Additionally, there is heating from both that goes as η^2 A_+ and 2r^2 B_+, respectively. Interestingly, the squeezing results in a coupling between different motional states and can be used for cooling, as shown below. Fig. <ref>(b) is a master equation simulation of steady-state population versus the excited to ground-state trapping frequency ratio ν_e/ν. The magic condition is ν_e/ν=1. Cooling performance drops on both sides, with much more dramatic for a weaker excited state (ν_e/ν < 1). Fig. <ref>(c) is a detuning scan for η=0.1, ν = 2 Γ, and various ν_e/ν. There is a resonance at Δ = - ν due to sideband cooling, but also a second resonance at Δ = -2ν due to the squeezing coupling. For cooling the atom in a `non-magic' trap, a technique known as “Sisyphus Cooling", has been recently demonstrated in tweezers <cit.>. In situations where the polarizability of the excited state is greater (or less) than that of the ground-state, the atom experiences a deeper (or shallower) trap upon excitation by the cooling laser. This difference enables the implementation of `attractive' (or `repulsive') Sisyphus cooling. In the repulsive Sisyphus cooling method, there is a limit known as the Sisyphus cap to how much the atom can be cooled. However, in the attractive Sisyphus regime, significantly lower temperatures can be achieved within the trap. Ref. <cit.> proposes that sweeping the cooling laser's frequency adiabatically could achieve temperatures well below the Sisyphus cap. § SUB-DOPPLER COOLING WITH SPIN Doppler cooling alone is often insufficient for achieving ground-state cooling of neutral atoms in optical traps due to two main issues. First, the trapping frequencies in optical traps are typically less than 100 kHz, while the line widths of the D1 and D2 transitions are around 5 MHz, placing them far into the unresolved regime. Second, the ground and excited states have different optical polarizabilities, which can lead to additional heating and atom loss due to far-off-resonant trapping light. To overcome these limitations, sub-Doppler cooling schemes such as polarization gradient (PG) <cit.>, gray molasses (GM) and Λ-enhanced GM <cit.>, electromagnetically induced transparency (EIT) cooling <cit.>, and Raman sideband (RSB) <cit.> employ two-photon transitions between ground-states, as illustrated in Fig. <ref>(a). By detuning the cooling laser far from the excited states, these techniques mitigate the issues associated with the unresolved regime and differential polarizabilities. Furthermore, the ground hyperfine states in neutral atoms generally experience the same light shifts, resulting in similar trapping frequencies for different spin states. 
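A rough numerical sketch of this mismatch, using the representative numbers quoted above (a 100 kHz trap and a 5 MHz linewidth, both illustrative) together with the rate formulas of the previous section, shows how far Doppler cooling alone remains from the ground state:

```python
# Sketch: Doppler-limited <n> for optical-trap parameters deep in the
# unresolved regime. Numbers are representative, not tied to a specific atom.
import numpy as np

Gamma = 2 * np.pi * 5e6      # linewidth (rad/s), D1/D2-like
nu = 2 * np.pi * 100e3       # trap frequency (rad/s)
alpha = 1.0 / 3.0
Delta = -Gamma / 2.0         # near-optimal detuning in the unresolved regime

R = lambda d: 1.0 / (d**2 + (Gamma / 2) ** 2)   # scattering rate, up to a constant
A_plus = R(Delta - nu) + alpha * R(Delta)
A_minus = R(Delta + nu) + alpha * R(Delta)

print("Gamma/nu =", Gamma / nu, " (far into the unresolved regime)")
print("Doppler-limited <n> ~", A_plus / (A_minus - A_plus))
```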
Two-photon cooling schemes rely on coupling two spin states with different motional quantum numbers, n, and energies. The quantum model for PG and GM cooling is depicted in Fig. <ref>(d,f). In the first stage, a two-photon coherent transition (green) changes the spin and motional states, where the motional state decreases from n to n-1. In the second stage, spontaneous emission (red) from the cooling light or a separate beam then re-pumps the atom back to the original spin state, allowing the cooling cycle to repeat. As long as the trapping frequency energy, ħν is larger than the recoil energy, E_recoil, net cooling can be achieved, as the average heating from spontaneous emission is limited to one photon recoil. In this section, we present a unified spin cooling formalism that encompasses PG, GM, EIT, Λ-GM, and RSB cooling. We show that all these schemes exhibit similar forms of the effective Hamiltonian and collapse operators in the LD regime and rely on the same principles. PG and GM cooling use a single counter-propagating laser frequency to create light shifts similar to the trapping frequency, as well as the Raman coupling and optical pumping. EIT and Λ-GM cooling incorporate a second coherent laser to address other spin states and exploit the dark states of a three-level Λ system to create darker reservoir states. RSB cooling separates all three processes, using two lasers to couple different ground spin states and a third frequency and beam for optical pumping that uses angular momentum selection rules to keep the cooling state dark. §.§ Polarization Gradient Cooling and Gray Molasses PG cooling was described theoretically by Cohen-Tannoudji in the late 1980's <cit.> and resulted in sub-doppler temperatures in the first magneto-optical traps <cit.>. Two different PG models were presented in free space for the case of lin⊥lin and polarizations, shown in Fig. <ref>(b). The lin⊥lin configuration comes from counter-propagating orthogonal linear polarization beams, resulting in a polarization that changes from linear to circular. This model works on a F=1/2 to F'=3/2 atom, and the free-space picture is shown in Fig. <ref>(c). The circular polarization results in a vector shift Δ E ∝ m_J that modulates every wavelength. PG cooling works by having an optical pumping that pumps the atom to the lower potential. As the atom travels, it goes up the potential hill, loses energy, and then is pumped back to the bottom. The cycle repeats until the atom reaches the recoil temperature E_recoil, which is typically a few μK. This style of cooling is also called Sisyphus cooling, named after the Greek mythological figure who was condemned to eternally push a boulder up a hill. The model is applicable when counter-propagating laser beams with opposite circular polarizations are used. As shown in Fig. <ref>(b), the resulting polarization is linear at every point in space but rotates along the propagation direction with a period equal to the wavelength. The model relies on the tensor shift and therefore requires a ground-state spin of at least F=1. This version is more common because it is also the polarization configuration used by a magneto-optical trap (MOT). Both forms of PG cooling work by using red-detuned light with a J → J+1 transition, which is the same configuration used by a MOT for its stretch state cycling condition. Gray Molasses (GM) is a closely related cooling scheme that operates with blue-detuned light with a F → F or F → F-1 transition. 
GM has achieved even colder temperatures than PG cooling <cit.> by storing cold atoms in dark states that do not interact with the cooling light due to polarization and angular momentum selection rules. For example in Fig. <ref>(e), certain ground-states are not coupled to the excited states. GM works with blue detuned light so that the bright states are higher in energy. Atoms that fall into these dark states are effectively shelved, reducing their interaction with the cooling light. The residual velocity couples the atom back into a bright state, where they can undergo Sisyphus cooling again. However, these free-space models break down for a trapped atom in the Lamb-Dicke (LD) regime, where the atom's wavefunction is smaller than the cooling wavelength. While PG and GM have been implemented for atoms in optical lattices, tweezers <cit.>, and ions <cit.>, they have not been described theoretically. The most related works have been Wineland et al. in Ref. <cit.>, which studied a bound atom trapped in a linearly polarized standing wave, and Cirac et al. in Ref. <cit.> which investigated a J=1/2 → J'=3/2 atom in the lin⊥lin configuration. Both of these studies found a lower limit for the mean vibrational quantum number, ⟨ n ⟩≈ 1. Here we develop a generalized spin cooling model that applies to all spins and polarizations and shows that certain configurations can lead to ground-state cooling. §.§ Spin cooling theory First we look at the case of two counter-propagating beams with opposite-handed circular polarized light (). The complex polarization is ϵ_(ẑ) = i/√(2)( e^i k ẑê_1 + e^- i k ẑê_-1) = sin(k ẑ) x̂ + cos(k ẑ) ŷ ≈ŷ + (k ẑ) x̂ where ê_1,0,-1 are the spherical unit vectors ê_±1 = ∓1/√(2) (ê_x ± i ê_y) and ê_0 = ê_z, and also The field is Taylor expanded in the last line. A fixed atom sees only linear light.. The intensity is constant everywhere. The polarization is linear but rotates every wavelength, as shown in Fig. <ref>(b). As it oscillates back and forth in the trap, it sees a small rotation of the field. For convenience, we choose the quantization axis ẑ to be aligned with linear polarization at equilibrium. The cooling is however independent of the atom position because the light is linear everywhere. The Hamiltonian and collapse operator in the LD regime are H_σ_ +-σ_ - =νâ^†â + (Δ - i Γ/2) P_e + Ω/√(2)[ D̂^†_z - i η (â + â^†) D̂_x^† + h.c. ] L̂_q,± = √(Γ) (1 ± i k x̂) D̂_q The Cartesian dipole operators are defined as D_i = î·D̂. The PG cooling processes are illustrated in Fig. <ref>(d) for a F=1 atom. The position-independent interaction D̂_z leads to a tensor shift and optical pumping. For the case of F=1 ground-state, the population is pumped (red) to the m=0 state. The position-dependent term in the interaction η(â + â^†) D̂_x drives an orthogonal polarization transition m → m ± 1 while also changing the motional state. This drive is reduced by the LD parameter and produces heating due to spontaneous emission. The last ingredient for cooling is a two-photon coherent Raman transition (green). One branch of this transition comes from the position-independent term, while the other branch comes from the position-dependent term. As shown in the illustration, together these terms produce a coherent transition from |m=0, n ⟩ to |m=±1, n-1 ⟩. As drawn, the resonance for this two-photon condition is satisfied when the differential light shift between the m=0 and m=±1 states are equal to the trapping frequency. This process then repeats, cooling the atom. 
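As a quick consistency check of the σ^+-σ^- field decomposition used above, the sketch below verifies numerically that the two counter-propagating circular beams produce a linear polarization that rotates with kz; the spherical unit vectors follow the convention given in the text.

```python
# Sketch: check that i/sqrt(2) * (e^{ikz} e_{+1} + e^{-ikz} e_{-1})
#         equals sin(kz) x + cos(kz) y at every position.
import numpy as np

x, y = np.array([1, 0, 0]), np.array([0, 1, 0])
e_p1 = -(x + 1j * y) / np.sqrt(2)     # spherical unit vector e_{+1}
e_m1 = +(x - 1j * y) / np.sqrt(2)     # spherical unit vector e_{-1}

for kz in np.linspace(0, 2 * np.pi, 7):
    eps = 1j / np.sqrt(2) * (np.exp(1j * kz) * e_p1 + np.exp(-1j * kz) * e_m1)
    target = np.sin(kz) * x + np.cos(kz) * y
    assert np.allclose(eps, target)
print("sigma+/sigma- superposition = sin(kz) x + cos(kz) y : verified")
```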
These processes can be seen more clearly by looking at the effective ground-state operators from Eq. <ref>. The effective ground-state Hamiltonian is H^eff_σ_+-σ_- = νâ^†â - Ω^2/(4Δ) [ D̂_z D̂_z^† + k x̂ ( D̂_z D̂_x^† + D̂_x D̂_z^† ) ] . Converting it to spin operators using Eq. <ref> gives an effective spin model H^eff_σ_+-σ_- = νâ^†â + Ω^2/Δ [ C^(0) + C^(2) (F̂_z^2 - F̂^2/3 ) ] + k x̂ Ω^2/Δ C^(2) (F̂_x F̂_z + F̂_z F̂_x ) . The collapse operators are L_q,±^eff = √(Γ) (Ω/2)/(Δ - i Γ/2) [ D̂_q ( D̂_z^† ± i k x̂ (D̂_z^† + D̂_x^†) ) ] . A spin version of this operator can also be found using Eq. <ref>. This spin Hamiltonian is valid for all ground and excited spin combinations. While the ground-state spin determines the spin operators F_i, the excited-state spin only affects the C^(i) coefficients. The Hamiltonian and collapse operator both have a position-independent and a position-dependent term. This expansion can be performed for other polarizations as well to obtain a similar form. We can also look at the lin⊥lin case, where the polarization changes from circular to linear depending on the atom position, ϵ_lin⊥lin(x̂, ϕ) = 1/2 ( e^i k x̂x̂ + e^i ϕ e^- i k x̂ŷ ) . For circular polarization, the interaction is H^I_lin⊥lin(ϕ=π/2) = Ω/2[ D̂_1^† + η(â+â^†) D̂_-1^† + h.c. ], and for linear polarization, H^I_lin⊥lin(ϕ=0) = Ω/2[ D̂_z^† + η(â+â^†) D̂_x^† + h.c. ]. Before performing the full master equation simulations, our next goal is to develop a simple analytical model for a system with a Hamiltonian and collapse operator in these forms. We derive an approximate temperature for a simplified model that captures both regimes. We use two spin states |↑, n ⟩ and |↓, n ⟩ and an effective Hamiltonian and collapse operators H = νâ^†â + V_0 ( F̂_z + 1/2 ) + Ω_R k x̂ F̂_x , L_1 = √(γ_+) (F̂_+ + i k x̂ F̂_x ) , L_-1 = √(γ_-) (F̂_- + i k x̂ F̂_x ) . V_0 is a spin-dependent light shift. The position-independent parts of the collapse operators, with pumping rates γ_+ and γ_-, describe the optical pumping between the spin states. The Ω_R k x̂ F̂_x term produces Raman transitions between |↑, n ⟩ and |↓, n-1 ⟩, with Ω_R = (Ω^2/Δ) C^(2). The optimal light shift for cooling makes these two states degenerate, which is achieved by setting V_0 = ν. The |↓, n ⟩ state also couples to |↑, n-1 ⟩, but off-resonantly, so we can ignore it. To find an approximate expression for the steady-state temperature, we first use the master equation to determine the steady-state coherence between the Raman-coupled states: ⟨↑, n |ρ|↓, n-1 ⟩ = i √(n) η Ω_R/(Δ_r - i (γ_- + γ_+)/2) [ P_↑(n) - P_↓(n-1) ] . By substituting this coherence into the density-matrix equations and ignoring off-resonant coherences, we obtain the transfer rate Γ_R between |↑, n ⟩ and |↓, n-1 ⟩: Γ_R = 4 η^2 Ω_R^2/(γ_+ + γ_-) · 1/(1 + 4 Δ_r^2/(γ_+ + γ_-)^2) , where Δ_r is the relative detuning between the two states. Optimal cooling occurs when the light shifts bring these states into resonance, i.e., Δ_r = 0. Although there are off-resonant transitions, such as |↓, n ⟩ → |↑, n-1 ⟩, we ignore them in this simple model. By neglecting all off-resonant coherences, we arrive at the population rate equations: dP_↓(n)/dt = Γ_R (n+1) [ -P_↓(n) + P_↑(n+1) ] + γ_- P_↑(n) - γ_+ P_↓(n) + η^2 γ_h [ (n+1) P_↓(n+1) + n P_↓(n-1) - (2n+1) P_↓(n) ] and dP_↑(n)/dt = n Γ_R [ P_↓(n-1) - P_↑(n) ] - γ_- P_↑(n) + γ_+ P_↓(n) + η^2 γ_h [ (n+1) P_↑(n+1) + n P_↑(n-1) - (2n+1) P_↑(n) ] . Here γ_h = γ_+ + γ_- is a scattering rate related to spontaneous-emission heating. Next, we calculate the steady-state solution for ⟨ n ⟩. The derivation is as follows. First, we sum the two equations in Eq. 
<ref>, yielding a single equation. Next, we multiply the distribution equations by n and sum from n=0 to ∞, resulting in two equations containing the higher-order moments ⟨ n^2 ⟩_↑ and ⟨ n^2 ⟩_↓. In these four expressions, we have six unknowns: ⟨ n^2 ⟩_↑, ⟨ n^2 ⟩_↓, ⟨ n ⟩_↑, ⟨ n ⟩_↓, and the fractions of population in either spin state, P_↑ = ∑_n P_↑(n) and P_↓ = ∑_n P_↓(n). We make these equations solvable by simplifying the higher-order moments, assuming that both P_↑(n) and P_↓(n) follow a geometric distribution. We can then express ⟨ n^2 ⟩_↑ as ⟨ n^2 ⟩_↑ = 2 ⟨ n ⟩_↑^2/P_↑ + ⟨ n ⟩_↑ (and similarly for ↓), where P_↑ and P_↓ appear because the spin-resolved distributions are not normalized. We then solve the system of equations for ⟨ n ⟩_↑ + ⟨ n ⟩_↓ = ⟨ n ⟩. We then take the limit of large detuning to obtain a simpler analytical expression compared to the complete solution. In the large-detuning limit, the cooling rate Γ_R = 4 Ω_R^2 η^2/(γ_+ + γ_-) dominates. The cooling rate Γ_R is independent of detuning because both Ω_R^2 and the scattering rate scale as 1/Δ^2. Consequently, for large detuning, Γ_R becomes the dominant rate, and the population quickly distributes between the two Raman-coupled states. The asymmetry of the optical pumping then determines the cooling efficiency and the final temperature. In this limit of large detuning, where the Raman coupling rate is faster than any scattering rate, the population simplifies to ⟨ n ⟩ = s/(1-s) , with s = (4 η^2 γ_h + γ_+)/(η^2 γ_h + γ_-) . We note that a similar expression can also be derived from the no-flow condition, solving for s = P(n+1)/P(n) without taking expectation values, although other approximations have to be made. This expression is the final result of the simple model and provides intuition for the cooling performance of PG and GM. It agrees very well with the exact spin-1/2 solution for η < 0.1. For other spin models, it also serves as a fitting function. The ratio γ_+/γ_- is then an effective asymmetry of the optical pumping to the lower-energy state. This asymmetry is close to unity for PG cooling, and much smaller for cooling with a dark state, i.e., GM and Λ-GM. The ratio γ_h/γ_+ is an effective scattering rate of the lower-energy state. Together with the Lamb-Dicke parameter η, these three parameters determine the lowest achievable population. In Fig. <ref>, we simulate the full master equation with ground and excited states for PG and GM for an F=1 ground-state in σ^+-σ^- light, meaning that the light is linear at the atom equilibrium position. The PG case is F=1 to F'=2, and the Clebsch-Gordan (CG) coefficients are shown in (a). In the σ^+-σ^- configuration, the m=0 state has the largest CG and, consequently, the largest Stark shift. In linear light, however, the population still pumps to the m=0 state because of strong diagonal CG coefficients from |F'=2, m=±1 ⟩. Cooling requires that the population is pumped to the lowest-energy state. Therefore, cooling requires the red-detuned configuration, where the m=0 state, which is the most Stark-shifted, is also the lowest in energy, as shown in Fig. <ref>(d). In the PG simulation in Fig. <ref>(b), cooling is optimal along the line where the Raman transition is resonant, i.e., where the optical power of the cooling light is such that the energy difference between m=0 and m=±1 equals the trapping frequency. The lowest population converges to ⟨ n ⟩ = 0.9 in the limit of large detuning Δ and small LD parameter η. The optimal value is plotted in red in Fig. <ref>(e). The light red line is the simple model from Eq. 
<ref> with pumping asymmetry γ_-/γ_+ = 0.45, and heating scattering γ_h/γ_+= 27. There is not a large asymmetry in the optical pumping, and γ_ +∼γ_ -, which are both larger than η^2 γ_s in the LD regime. In this case, the final population is s ≈γ_ + / γ_ -. In this situation, the cooling reaches a fundamental limit for large detuning and small η. Next we look at GM for F=1 to F'=1. The CG coefficients are in Fig. <ref>(c). One optical selection rule is that the |F, m=0 ⟩ to |F', m=0 ⟩ is forbidden, due to the photon needing to add one unit of angular momentum. Because of this dark state, all the population pumps to m=0. But in contrast to PG, this state is now dark. For that reason GM requires blue-detuned light (Δ>0) to make this state the lowest energy. The simulation in Fig. <ref>(d) again shows that optimal cooling occurs when the light shift equals the trapping frequency. The population however drops close to ground-state. The optimal cooling is plotted in Fig. <ref>(e), along with the simple model with effective parameters γ_-/γ_+ = 7.5 × 10^-4 and heating scattering γ_h/γ_+= 1.5. For this large optical pumping asymmetry where the reservoir state is dark, such as found in GM and EIT cooling, γ_ - = 0, and s ≈γ_ +/η^2 γ_h. In this dark cooling limit, the final population is instead limited by η. In Fig. <ref>, we also investigate the case of J=1/2 to J'=3/2 for lin⊥lin, which was studied in Ref. <cit.>. There is no cooling for , which requires at least three ground spins due to the tensor shift. This configuration is not relevant for neutral atoms, but it is for ions. Because the polarization in lin⊥lin changes with position, we plot the cooling as a function of atom position, with ϕ = 0 linear and ϕ = π/2 circular. Similar to the other PG model, the population converges to ⟨ n ⟩≈ 1. Interestingly though, this occurs for when the polarization is linear, not circular. This is because circular polarization cooling also requires three spins because the Raman transition supplies two angular momenta. The cooling still occurs for when the light shift is equal to the trapping frequency <cit.>. However, we do find that for larger spin states, circular polarization is the best cooling, in agreement with our model. §.§ Lambda-GM and EIT Cooling The simple models reveal a crucial insight: lower populations are achieved when pumping to darker states. Gray molasses (GM) cooling generates dark states through angular momentum selection rules. Electromagnetically induced transparency (EIT) cooling and Λ-type gray molasses (Λ-GM) create additional dark states by incorporating multiple ground-states addressed with a second laser frequency in an EIT configuration, as illustrated in Fig. <ref>(a). These dark states arise from the interference of two lasers coupling the ground-states to a common excited state. Historically, the term “EIT cooling" has been primarily associated with trapped ions <cit.>, where the two ground-states are typically different Zeeman sublevels. In contrast, “Λ-GM" is commonly used in the context of neutral atoms <cit.>, where the two ground-states belong to different hyperfine manifolds of the ground-state. Cooling to the motional ground-state using EIT has been theoretically predicted and achieved experimentally in Ca^+ ion using Zeeman sublevels of the D1 line <cit.>. Λ-enhanced GM cooling on the other hand is proven effective not only for bound atoms <cit.> but also for cooling and imaging molecules in optical traps <cit.>. 
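Before turning to the EIT formalism in detail, the simple rate model of the previous subsection can be sketched numerically: the snippet below builds the rate matrix for resonant Raman transfer Γ_R, optical pumping γ_±, and recoil diffusion η^2 γ_h, and solves for the steady-state ⟨ n ⟩. The two parameter sets loosely mimic a PG-like (weak pumping asymmetry) and a GM-like (nearly dark reservoir) situation; all values are illustrative rather than fits to the simulations above.

```python
# Sketch: steady state of the two-state rate model (|down, n>, |up, n>).
# "up" denotes the state emptied by the Raman step, as in the text.
import numpy as np

def steady_nbar(eta, Gamma_R, g_plus, g_minus, N=60):
    """Steady-state <n> of the two-state rate model, truncated at N levels."""
    g_h = g_plus + g_minus              # recoil-heating scattering rate
    M = np.zeros((2 * N, 2 * N))
    dn = lambda n: n                    # index of |down, n>
    up = lambda n: N + n                # index of |up, n>

    def flow(i, j, rate):               # population flow j -> i at "rate"
        M[i, j] += rate
        M[j, j] -= rate

    for n in range(N):
        flow(up(n), dn(n), g_plus)      # optical pumping down -> up
        flow(dn(n), up(n), g_minus)     # leak up -> down
        if n >= 1:                      # Raman transfer |up,n> <-> |down,n-1>
            flow(dn(n - 1), up(n), n * Gamma_R)
            flow(up(n), dn(n - 1), n * Gamma_R)
        for spin in (dn, up):           # recoil diffusion within each spin
            if n + 1 < N:
                flow(spin(n + 1), spin(n), eta**2 * g_h * (n + 1))
            if n >= 1:
                flow(spin(n - 1), spin(n), eta**2 * g_h * n)

    A = M.copy()
    A[0, :] = 1.0                       # replace one equation by normalization
    b = np.zeros(2 * N)
    b[0] = 1.0
    p = np.linalg.solve(A, b)
    return sum(n * (p[dn(n)] + p[up(n)]) for n in range(N))

print("PG-like :", steady_nbar(eta=0.1, Gamma_R=20.0, g_plus=1.0, g_minus=0.5))
print("GM-like :", steady_nbar(eta=0.1, Gamma_R=20.0, g_plus=1.0, g_minus=1e-3))
```

The nearly dark reservoir reaches a far lower ⟨ n ⟩ than the weakly asymmetric case, in line with the discussion above.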
EIT addresses two ground-states |g_1 ⟩ and |g_2 ⟩ with two different Rabi frequencies Ω_1 and Ω_2. These two states are typically from different hyperfine ground-states. Each ground-state is addressed by a laser frequency to a single excited state with corresponding detunings Δ_1 and Δ_2. The Hamiltonian is written with g_1 in the rotating frame of laser one and g_2 in the rotating frame of laser two, H_0 = ν a^† a + Δ_1 |g_1 ⟩⟨ g_1 | + Δ_2 |g_2 ⟩⟨ g_2 |. We have two dipole operators, D̂_1 = |g_1 ⟩⟨ e| and D̂_2 = |g_2 ⟩⟨ e|. The interaction Hamiltonian is H_I = Ω_1/2 (D̂_1 ^† e^i k_1 x̂ + h.c. ) + Ω_2/2 (D̂_2 ^† e^i k_2 x̂ + h.c. ). The two collapse operators represent the spontaneous emission of the single excited state to both ground-states, L_1, ± = √(Γ) e^± ik_1x̂D̂_1, L_2, ± = √(Γ) e^± ik_2x̂D̂_2. Next, we perform a unitary transformation of the two ground-states into two superpositions so that only one of them is coupled to the two lasers, and the other is the EIT dark state, shown in Fig. <ref>(b). The dark state is dark due to the destructive interference of the population in the excited state from the two ground-states. The basis transformation is |g_ B⟩ = 1/Ω_rms(Ω_1 |g_1 ⟩ + Ω_2 |g_2 ⟩) |g_ D⟩ =1/Ω_rms(-Ω_2 |g_1 ⟩ + Ω_1 |g_2 ⟩) |e ⟩ = |e ⟩ where Ω_rms = √(Ω_1^2 + Ω_2^2). Under this transformation, the bare Hamiltonian is H_0 = ν a^† a + Δ_ B |g_ B⟩⟨ g_ B | + Δ_ D |g_ D⟩⟨ g_ D | +Ω_ BD( | g_ B⟩⟨ g_ D| +| g_ D⟩⟨ g_ B| ), with the energies of the bright and dark states as Δ_ B =( Ω_1^2 Δ_1 + Ω_2^2 Δ_2 )/Ω_rms^2 Δ_ D = ( Ω_1^2 Δ_2 + Ω_1^2 Δ_2 )/ Ω_rms^2, and a coupling between the dark and bright states Ω_ BD = (Δ_2 - Δ_1) Ω_1 Ω_2/Ω_1^2 + Ω_2^2. The transformed atom-field interaction is H_I =1/2 Ω_rms[ D_B^† ( Ω_1^2 e^i k_1 x̂ + Ω_2^2 e^i k_2 x̂) . . + D_D^†Ω_1 Ω_2 ( e^i k_1 x̂ - e^i k_2 x̂) + h.c. ]. In the case where the two lasers are two-photon resonant, the Rabi frequencies are equal Ω_1 = Ω_2, and the two lasers are counter-propagating k_1 = -k_2, the expressions takes a simple form H_0 =ν a^† a+ Δ( |g_ B⟩⟨ g_ B | + |g_ D⟩⟨ g_ D| ) H_I = Ω( cos(kx̂) D_B^† + sin(k x̂) D_D^†) ≈Ω( D_B^† + k x̂ D_D^†) . In the LD regime, only the bright state is coupled to the excited state. To the zeroth order, the dark state is not coupled. But the dark state can be driven directly only after changing the motional state. This Hamiltonian now looks similar to the spin cooling case. An interesting case is if, instead, the lasers are co-propagating with k_1 = k_2, then H_I = Ω D_B^† e^ikx, and the dark state is dark even to n changing transitions. Co-propagating light cannot change the momentum, prohibiting cooling. Next, we can adiabatically eliminate the excited state using Eq. <ref> and <ref> to get the effective ground-state dynamics. In the original basis, the effective ground-state Hamiltonian is H_eff = νâ^†â (Δ_1 -Ω_1^2/Δ) |g_1 ⟩⟨ g_1| +(Δ_2 - Ω_2^2/Δ )|g_2 ⟩⟨ g_2| -Ω_1 Ω_2/Δ( |g_1 ⟩⟨ g_2| e^i(k_2 - k_1) x̂ + h.c. ) . In the bright and dark basis with Ω_1 = Ω_2 and k_2 = - k_1, the effective ground-state Hamiltonian is H_eff =ν a^† a + Ω_1^2 + Ω_2^2/2Δ |g_ B⟩⟨ g_ B| + ( i k x̂Ω_1 Ω_2/Δ |g_ B⟩⟨ g_ D| + h.c. ) and the collapse operators are L_eff, ± ,i = √(Γ) Ω_rms/Δ- i Γ/2 D̂_i( D̂_B± ikx̂ ( D̂_B + 2 Ω_1 Ω_2/Ω^2_rms D̂_D )) . We can now see that the Hamiltonian and collapse operator take a similar form as before. The position-independent Hamiltonian results in a light shift for only the bright state. 
Therefore, just like GM, EIT also works in the blue-detuned regime, so that the dark state is the lower-energy state. The position-dependent Hamiltonian results in coupling between neighboring motional states. The collapse operator also has a position-independent term, which optically pumps from the bright state to the dark state, and a position-dependent term from recoil heating. At the two-photon resonance condition (the Λ-enhancement) in GM, the formalism and the effectiveness of Λ-GM are therefore identical to EIT. In Fig. <ref>(c), we plot the cooling performance as a function of detuning. In the main plot, the light shift is equal to the trapping frequency, so the optimal cooling occurs when Δ_1 = Δ_2. The inset shows the cooling performance as the cooling power is decreased. The relative detuning can be adjusted to bring the system back into resonance and recover the cooling efficiency. Fig. <ref>(d) compares the optimal cooling performance for the F=1/2, 3/2 to F'=3/2 system, with and without the Λ-enhancement (i.e., with and without the F=1/2 state). The inclusion of the F=1/2 state in the Λ-configuration significantly improves the cooling performance, demonstrating the power of EIT in enhancing sub-Doppler cooling. The simulation results are also fitted to the theoretical model to extract the effective cooling parameters. The effects of EIT on the cooling performance can now be clearly seen. By creating darker reservoir states, EIT decreases the effective asymmetry between the pumping rates, γ_-/γ_+. As a result, the fundamental limit for EIT cooling is similar to that of GM cooling, but reduced by the factor by which the dark states are darker. The cooling efficiency still depends on the inverse square of the Lamb-Dicke parameter, 1/η^2. §.§ Raman-sideband cooling Raman sideband (RSB) cooling has achieved the highest ground-state populations in neutral atoms <cit.>, ions <cit.>, optical tweezers <cit.>, and molecules <cit.>. The working principle of RSB cooling is illustrated in Fig. <ref>(a). In RSB cooling, two coherent, far-detuned lasers drive the transition from |g_1, n ⟩ to |g_2, n-1⟩, where |g_1⟩ and |g_2⟩ are two ground-states and n is the motional quantum number. The light shifts and scattering rates from these cooling lasers are typically negligible due to the large detuning. A third laser, tuned closer to resonance, performs optical pumping and exploits angular-momentum selection rules to ensure that the dark state remains dark. The effective Raman coupling strength between the two ground-states is given by Ω_1 Ω_2/Δ, where Ω_1 and Ω_2 are the Rabi frequencies of the two cooling lasers and Δ is the detuning from the excited state. The effective ground-state Hamiltonian for RSB cooling is ⟨ g_1, n_1 | H_eff | g_2, n_2 ⟩ = -Ω_1 Ω_2/Δ⟨ n_1| e^i(k_1 - k_2) x̂ | n_2 ⟩ , where x̂ is the position operator and k_1 and k_2 are the wave vectors of the two cooling lasers. By carefully tuning the laser frequencies and polarizations, this Raman coupling can be engineered to drive transitions that reduce the motional quantum number, leading to cooling. 
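A minimal sketch of this matrix element is given below: it builds ⟨ n_1 | e^i(k_1-k_2)x̂ | n_2 ⟩ in a truncated Fock basis and shows the carrier-versus-sideband scaling (≈ 1 versus ≈ η_eff √n) used throughout this section; the value of the two-photon Lamb-Dicke parameter η_eff is an arbitrary illustrative choice, assuming counter-propagating beams.

```python
# Sketch: Raman matrix elements <n1| exp(i dk x) |n2> in a truncated Fock basis.
import numpy as np
from scipy.linalg import expm

N = 30                                   # Fock-space truncation
a = np.diag(np.sqrt(np.arange(1, N)), 1) # annihilation operator
x = a + a.T                              # position in units of the zero-point length

eta_eff = 0.15                           # (k_1 - k_2) * x_zpf, assumed value
U = expm(1j * eta_eff * x)               # matrix of <n1| e^{i (k1 - k2) x} |n2>

n = 4
print("carrier  |<n|U|n>|   =", abs(U[n, n]))
print("red sb   |<n-1|U|n>| =", abs(U[n - 1, n]),
      "  vs eta_eff*sqrt(n) =", eta_eff * np.sqrt(n))
```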
In 3D, the Raman coupling term becomes e^i (k_1 -k_2)·r̂≈ 1 + i x̂ (k_1 -k_2) ·x̂, which is related to the projection of the difference wave-vector onto the cooling axis. Multiple laser beams are typically used so that pairs of beams have projections on all three axes, enabling 3D cooling. RSB cooling improves upon EIT and Λ-GM cooling techniques, which introduced dark states in an electromagnetically induced transparency (EIT) configuration. While EIT and Λ-GM achieved lower temperatures, they were limited by spontaneous emission from the dark state, parameterized by the ratio η^2 γ_h/γ_+, where η is the Lamb-Dicke parameter, γ_h is the heating rate, and γ_+ is the pumping rate. RSB cooling addresses this limitation by increasing the detuning of the cooling lasers until the collapse operator L becomes negligible, effectively eliminating spontaneous emission and repumping from the cooling lasers. The additional repump beam, optimized using angular momentum selection rules, ensures that the state |g_1 ⟩ remains dark to it, making the dark state immune to position-dependent spontaneous emission. An effective spin model for RSB cooling can be created, where the light shift from the cooling light is ignored, and the two-photon detuning is adjusted to bring the |g_1, n ⟩ to |g_2, n-1 ⟩ transition into resonance: H = νâ^†â + (Δ_1 - Δ_2) |g_2 ⟩⟨ g_2| + Ω_R k x̂ F̂_x L_+ =√(γ_ +) F̂_+( 1 + i k x̂). While RSB cooling offers significant advantages, its fundamental limitations are often technical rather than theoretical. The coherence time between the two ground-states is typically limited to a few milliseconds due to factors such as magnetic field fluctuations or laser decoherence. For efficient cooling, the Raman Rabi frequency Ω_R should exceed 10 kHz. However, the Rabi frequency for the n to n transition is larger than that of the n to n-1 transition by a factor of 1/η, resulting in both transitions being driven simultaneously. Further analysis is needed to determine the cooling limit in the presence of these technical constraints. While RSB cooling offers a pathway to overcome the limitations of EIT and Λ-GM cooling schemes, its implementation is more complex experimentally. However, it potentially enables more efficient ground-state cooling in a wider range of atomic and molecular systems. § ANALYSIS The formalism in this paper reveals that all two-photon cooling schemes share a fundamental similarity: they involve a Raman transition between neighboring motional states with different spin states. Close-to-resonance cooling schemes like PG and GM utilize light shifts to bring the states into resonance, while further detuned cooling methods, such as Raman sideband (RSB) cooling, use the two-photon detuning of two frequencies and two far detuned ground-states. Similarly, techniques like Λ-GM cool by establishing dark states states through destructive interference of two frequencies and two far detuned ground-states. PG cooling is fundamentally limited to the optical pumping asymmetry, which for F to F+1 is ⟨ n ⟩≈ 1. GM and Λ-GM lower the population to ⟨ n ⟩≈η^2. But, spontaneous emission in these schemes still drives the population out of the dark state through either spontaneous emission from the first-order LD parameter or in the effective ground-state formalism, the position dependent component of the collapse operator. The dark n=0 state is thus not completely dark, as it still can be driven through the n=1 excited state. 
However, when using the Raman light in an optical pump, this is unpreventable, as the light requires a position-dependent term to produce the Raman coupling between emotional states. RSB cooling represents the ultimate limit of cooling, detuning so far from the excited state that the spontaneous emission of the Raman coupling light is negligible. The re-pumping is achieved through a separate spontaneous emission beam optimized to make the dark state truly dark. In practical laboratory settings, PG, GM, and Λ-GM are more convenient due to their operational simplicity and similarity to the MOT configuration. But RSB is still employed for high ground-state preparation despite its significant experimental overhead. This paper demonstrates that Grey molasses cooling combined with the Λ-enhancement can achieve significant ground-state populations. This raises the question of how schemes resembling MOT configurations can be adapted for near-ground-state cooling. Likely, this adaptation involves adding an additional beam specialized for optical pumping. For example adding an additional circularly polarized beam close-to-resonance beam for optical pumping (not counter-propagating). This should, from this model, provide the optical pumping without driving sidebands of the dark state. Then in the limit of large detuning for the GM beams, it would effectively create a L_+ = √(γ_+) e^ikx̂F̂_+ from the RSB model. This will be the investigation of future experimental and theoretical work. § CONCLUSION This study presents a comprehensive theoretical framework for major cooling mechanisms in neutral atom tweezers, uncovering shared principles across diverse techniques. Our approach combines detailed full-level structure simulations with a simplified spin model, offering novel insights for optimizing cooling schemes. By extending previous research to encompass arbitrary spins and polarizations, we show that gray molasses cooling may potentially reach the ground-state, challenging conventional limits and opening up new avenues for exploration. Future research will investigate polarization gradients in three-dimensional beam configurations and examine deviations from the idealized one-dimensional cases presented here. We also plan to conduct simulations for specific alkali atoms, including all relevant ground and excited states, to assess the impact of multiple excited states and cross-coupling between lasers addressing different hyperfine ground-states. These simulations will help establish fundamental cooling limits for each atomic species for various techniques. Future research will explore innovative cooling approaches that blur the boundaries between established techniques. Rapid, robust, and efficient ground-state optical cooling will increase the coherence and fidelity of experiments throughout the ultracold field. Reducing thermal motion in Rydberg atoms will pave the way for higher-fidelity quantum gates. Rapid optical cooling techniques could also reduce the long times associated with thermal evaporation for producing degenerate gases. § ACKNOWLEDGEMENTS This work was supported by the NSF Career Award (Award No. 0543784). We express our gratitude to Yichao Yu for valuable early discussions regarding the relationship between spin cooling and Raman-sideband cooling. * § APPENDIX A §.§ Dipole spin operator For the atom spin, we will use the hyperfine spin states F = J + I. Note that setting I=0 recovers the J formalism for comparison. 
The excited states are |F_e, m_e ⟩ and ground states are |F_g, m_g ⟩. The dipole operator in Eq. <ref> is in this spin basis and using the Wigner-Eckart theorem is D̂_q = ∑_m_g m_e O^J_g J_e_I F_g F_e ⟨ F_g m_g|F_e m_e;1q⟩ |F_g m_g⟩⟨ F_e m_e|. The oscillator strength coefficients are given by O^J_g J_e_ I F_g F_e = (-1)^F_e + J_g + I + 1√((2F_e + 1)(2 J_g + 1)) J_e J_g 1 F_g F_e I , which are on the order of unity and give the strength of a F_g to F_e transition relative to the J_g to J_e transition with ∑_F_e |O^J_g J_e_I F_g F_e|^2 = 1 §.§ Spin tensor coefficients The coefficients are determined by the Wigner-Eckart reduced matrix in the ground and excited spins. For the case of J,F in the ground and J',F' in the excited, the coefficients are <cit.>: C^(0) = (-1)^3F-F'√(1/3)2F' + 1/√(2F + 1) ×[ F 1 F'; 1 F 0 ]| 1 F F' 0 1/2 F |^2, C^(1) = (-1)^3F-F'√(3/2)2F' + 1/√(F(F + 1)(2F + 1)) ×[ F 1 F'; 1 F 1 ]| 1 F F' 0 1/2 F |^2, C^(2) = (-1)^3F-F'√(30(2F' + 1)/F(F + 1)(2F + 1)(2F - 1)(2F + 3)) ×[ F 1 F'; 2 F 0 ]| 1 F F' 0 1/2 F |^2. apsrev4-2
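These coefficients are straightforward to evaluate symbolically; the sketch below uses SymPy's Wigner 6j routine to compute O^J_g J_e_I F_g F_e for an illustrative D2-like case (J_g=1/2, J_e=3/2, I=3/2, F_g=2; these example numbers are assumptions, not tied to a specific result above) and checks the sum rule ∑_F_e |O|^2 = 1.

```python
# Sketch: oscillator-strength coefficients and their sum rule, evaluated exactly.
from sympy import Integer, Rational, sqrt, simplify
from sympy.physics.wigner import wigner_6j

Jg, Je, I = Rational(1, 2), Rational(3, 2), Rational(3, 2)

def O(Fg, Fe):
    """Coefficient O^{Jg Je}_{I Fg Fe} as defined above."""
    return (Integer(-1) ** (Fe + Jg + I + 1)
            * sqrt((2 * Fe + 1) * (2 * Jg + 1))
            * wigner_6j(Je, Jg, 1, Fg, Fe, I))

Fg = 2
total = sum(O(Fg, Fe) ** 2 for Fe in (1, 2, 3))   # F_e values allowed by the triangle rules
print("sum over F_e of |O|^2 =", simplify(total))  # expect 1
```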