---
abstract: 'Separation kernels are fundamental software of safety and security-critical systems, providing their hosted applications with spatial and temporal separation as well as controlled information flows among partitions. The application of separation kernels in critical domains demands that the correctness of the kernel be established by formal verification. To the best of our knowledge, there is no survey paper on this topic. This paper presents an overview of formal specification and verification of separation kernels. We first present the background, including the concept of the separation kernel and comparisons among different kernels. Then, we survey the state of the art on this topic since 2000. Finally, we summarize the research work through detailed comparison and discussion.'
address: |
    National Key Laboratory of Software Development Environment (NLSDE)\
    School of Computer Science and Engineering, Beihang University, Beijing, China
author:
- Yongwang Zhao
title: A survey on formal specification and verification of separation kernels
---

real-time operating systems, separation kernel, survey, formal specification, formal verification

Introduction {#sec:intro}
============

The concept of the "Separation Kernel" was introduced by John Rushby in 1981 [@Rushby81] to create a secure environment by providing temporal and spatial separation of applications, and by ensuring that there are no unintended channels for information flow between partitions other than those explicitly provided. Separation kernels decouple the verification of the trusted functions in the separated components from the verification of the kernels themselves. They are often sufficiently small and simple to allow formal verification of their correctness. The concept of the separation kernel gave rise to the concept of Multiple Independent Levels of Security/Safety (MILS) [@Alves06].
MILS is a high-assurance security architecture based on the concepts of separation [@Rushby81] and controlled information flow [@Denning76]. MILS provides the means to have several strongly separated partitions on the same physical computer/device, and enables components of different security/safety levels to coexist in the same system. The MILS architecture is particularly well suited to embedded systems that must provide guaranteed safety or security properties. An MILS system employs the separation mechanism to maintain assured data and process separation, and supports enforced security/safety policies by authorizing information flows between system components.

The MILS architecture is layered and consists of separation kernels, middleware and applications. The MILS separation kernels are small pieces of software that divide the system into separate partitions in which the middleware and applications are located, as shown in Fig. \[fig:mils\_arch\]. The middleware provides an interface to applications, or a virtual machine enabling operating systems to be executed within partitions. The strong separation between partitions both prevents information leakage from one partition to another and provides fault containment by preventing a fault in one partition from affecting another. MILS also enables communication channels (unidirectional or bidirectional) to be selectively configured between partitions.

![The MILS architecture. Notation: Unclassified (U), confidential (C), secret (S), top secret (TS), single level (SL), multi level security (MLS) [@gjertsen08][]{data-label="fig:mils_arch"}](FCS-14226-fig1.pdf){width="3.4in"}

Separation kernels were first applied in embedded systems. For instance, they have been accepted in the avionics community and are required by ARINC 653 [@ARINC653] compliant systems.
Many implementations of separation kernels for safety and security-critical systems have been developed, such as VxWorks MILS [@vxworksmils13], INTEGRITY-178B [@integrity], LynxSecure [@LynxSecure], LynxOS-178 [@lynxos178], and PikeOS [@pikeos], as well as open-source implementations such as POK [@Delange11] and XtratuM [@Masmano09].

In safety and security-critical domains, the correctness of separation kernels is significant for the whole system. Formal verification is a rigorous approach to proving or disproving the correctness of a system w.r.t. a certain formal specification or property. The work in [@Woodcock09] presents 62 industrial projects using formal methods over 20 years, and the effects of formal methods on the time, cost and quality of systems. The successful applications of formal methods in software development are increasing in academia and industry.

Security and safety are traditionally governed by well-established standards. (1) In the security domain, verified security is achieved by Common Criteria (CC) [@CC] evaluation, where EAL 7 is the highest assurance level. EAL 7 certification demands that formal methods be applied to the requirements, the functional specification, and the high-level design. The low-level design may be treated semi-formally, and the correspondence between the low-level design and the implementation is usually confirmed informally. For the purpose of fully formal verification, however, the verification chain should reach the implementation level. In 2007, the Information Assurance Directorate of the U.S. National Security Agency (NSA) published the Separation Kernel Protection Profile (SKPP) [@SKPP07] within the framework established by the CC [@CC]. The SKPP is a security requirements specification for separation kernels. It mandates the application of formal methods to demonstrate the correspondence between the security policies and the functional specification of separation kernels.
(2) In the safety domain, the safety of software deployed in airborne systems is governed by RTCA DO-178B [@DO178B], where Level A is the highest level. The new version, DO-178C [@DO178C], was published in 2011 to replace DO-178B. The technology supplements of DO-178C recommend the application of formal methods to complement testing.

Although most commercial separation kernel products have been certified to DO-178B Level A and under the CC, we find only two CC EAL 7 certified separation kernels, i.e., LynxSecure and the AAMP7G microprocessor [@Wilding10] (a separation kernel implemented in hardware). Without full verification, the correctness of separation kernels cannot be fully assured. Much effort has been devoted to achieving verified separation kernels in the last decade, such as the formal verification of SYSGO PikeOS [@Baumann09; @Baumann09b; @Baumann10; @Baumann11], the INTEGRITY-178B kernel [@Richards10], the ED (Embedded Devices) separation kernel of the Naval Research Laboratory [@Heitmeyer06; @Heitmeyer08], and Honeywell DEOS [@Penix00; @Penix05; @Ha04].

Using logic reduction to create highly dependable and safety-critical software was one of the 10 breakthrough technologies selected by MIT Technology Review in 2011 [@Bulk11], which reported the L4.verified project at NICTA (National ICT Australia). The seL4 (secure embedded L4) micro-kernel, which comprises 8,700 lines of C code and 600 lines of assembler code, has been fully formally verified with the Isabelle/HOL theorem prover [@Klein09; @Klein10]. The project found 160 bugs in the C code in total, 16 of which were found during testing and 144 during the C verification phase. This work provides successful experience for the formal verification of separation kernels and demonstrates the feasibility of fully formal verification of small kernels.
We could find a survey on formal verification of micro-kernels of general purpose operating systems [@Klein09b], but a survey of separation kernel verification for safety and security-critical systems does not exist in the literature to date. Considering that the correctness of separation kernels is crucial for safety and security-critical systems, this survey covers the research work on formal specification and verification of separation kernels since 2000. We outline this work at a high level, covering formal specifications, models, and verification approaches. By comparing and discussing the research work in detail, this survey aims at providing a useful reference for separation kernel verification projects.

In the next section, we first introduce the concept of separation kernels and compare it to other types of kernels to clarify their relationships. In Section \[sec:verify\], the literature on formal specification and verification of separation kernels is surveyed in three categories: formalization of security policies and properties, formal specification and models of separation kernels, and formal verification of separation kernels. In Section \[sec:summary\], we summarize the research work by detailed comparison and discussion. Finally, we conclude this paper in Section \[sec:conclude\].

Background {#sec:bg}
==========

This section first introduces the concept of the separation kernel, and then compares it with other kernels such as security kernels, partitioning kernels and hypervisors.

What Is a Separation Kernel
---------------------------

A separation kernel is a type of security kernel [@Ames83] that simulates a distributed environment.
Separation kernels were proposed as a solution to the problem of developing and verifying the large and complex security kernels that are intended to "provide multilevel secure operation on general-purpose multi-user systems."

"The task of a separation kernel is to create an environment which is indistinguishable from that provided by a physically distributed system: it must appear as if each regime is a separate, isolated machine and that information can only flow from one machine to another along known external communication lines. One of the properties we must prove of a separation kernel, therefore, is that there are no channels for information flow between regimes other than those explicitly provided. [@Rushby81]"

With separation kernels, system security is achieved partly through the physical separation of individual components and the mediation of trusted functions performed within some components. Separation kernels decouple the verification of components from that of the kernels themselves. Separation kernels provide their hosted software applications with high-assurance partitioning and controlled information flow that are both tamperproof and non-bypassable [@Van05; @pkpp]. Untrusted software in one partition may contain malicious code that attacks other partitions and the separation kernel itself. Kernels of general purpose operating systems usually can neither represent these security policies nor provide adequate protection against such attacks. In 2007, the Information Assurance Directorate of the U.S. National Security Agency (NSA) published the SKPP [@SKPP07] to describe, in CC [@CC] parlance, a class of modern products that provide the foundational properties of Rushby's conceptual separation kernel.
The SKPP defines separation kernels as "hardware and/or firmware and/or software mechanisms whose primary function is to establish, isolate and separate multiple partitions and control information flow between the subjects and exported resources allocated to those partitions." Unlike traditional operating system services such as device drivers, file systems, network stacks, etc., separation kernels provide very specific functionality, including enforcing data separation and information flow control within a single microprocessor and providing both time and space partitioning. The security properties that must be enforced by separation kernels are relatively simple. The security requirements for MILS include four foundational security properties [@Van05]:

**Data Separation**: each partition is implemented as a separate resource. Applications in one partition can neither change applications or private data of other partitions nor command the private devices or actuators of other partitions. This property is also known as "Data Isolation".

**Information Flow Security**: information flows from one partition to others are from an authenticated source to authenticated recipients, and the source of information is authenticated to the recipients. This property is also known as "Control of Information Flow".

**Temporal Separation**: different components may share the same physical resource in different time slices. A resource is dedicated to one component for a period, then scrubbed clean and allocated to another component, and so on. The services that applications in one partition receive from shared resources cannot be affected by other partitions. This property is also known as "Periods Processing".

**Fault Isolation**: damage is limited by preventing a failure in one partition from cascading to any other partition.

The properties of data separation, information flow security and fault isolation are all spatial properties.
They are collectively called "spatial separation" properties. Data separation requires that the memory address spaces/objects of a partition be completely independent of those of other partitions. Information flow security is a modification of data separation: pure data separation is not practical, and separation kernels define authorized communication channels between partitions for inter-partition communication. Pure data isolation may be violated only through these channels. The consequences of a fault or security breach in one partition are limited by the data separation mechanisms. A faulty process in one partition does not affect processes in other partitions because the address spaces of partitions are separated.

Separation kernels allow partitions to cause information flows, each of which comprises a flow between partitions. The allowed inter-partition information flows can be modeled as a "partition flow matrix" whose entries indicate the mode of the flow, such as read and write. The "flow" rules are passed to separation kernels in the form of configuration data interpreted during kernel initialization. For instance, a notional set of allowable information flows between partitions is illustrated in Fig. \[fig:mils\_arch\].

*NEAT* is a well-known set of properties considered for separation kernels. NEAT is the abbreviation of Non-bypassable, Evaluatable, Always invoked and Tamper proof [@Van05; @pkpp]:

**Non-bypassable**: security functions cannot be circumvented. It means that a component cannot use another communication path, including lower-level mechanisms, to bypass the security monitor.

**Evaluatable**: security functions are small and simple enough to enable rigorous proof of correctness through mathematical verification. It means that components are modular, well designed, well specified, well implemented, small, and of low complexity.

**Always-invoked**: security functions are always invoked.
It means that each access/message is checked by the appropriate security monitors, which check not only the first access but also all subsequent accesses/messages.

**Tamper proof**: the system controls "modify" rights to the security monitor code, configuration and data. It prevents unauthorized changes, whether by subversive or by poorly written code.

These concepts, although intuitive, are not necessarily easy to formalise and prove directly. Separation kernels are usually verified by proving the properties of data separation, temporal separation, information flow security and fault isolation.

The concern of the original separation kernel proposed by John Rushby [@Rushby81] is security. The reason the concept was first applied in embedded systems, in particular avionic systems, is the acceptance of Integrated Modular Avionics (IMA) [@Parr99] in the 1990s. IMA is the integration of physically separated functions on common hardware platforms. The integration is further supported by the trend towards more powerful multicore computers. IMA can decrease the weight and power consumption of currently implemented systems while creating space for new functional components such as on-board entertainment. Current embedded systems in avionics are already built in an IMA fashion. A major foundation of the IMA concept for operating systems and computing platforms is the separation of computer system resources into isolated computation compartments, called *partitions*. Computations in partitions have to run concurrently in such a way that any unintended interference and interaction between them is impossible. Thus, a partition can be considered a process with guaranteed processing performance and system resources. This is very similar to the partitioning provided by separation kernels. Therefore, the concept of the separation kernel has been adopted in avionics as the kernel of partitioning operating systems for IMA.
Separation kernels in this community are also called "partitioning kernels" [@pkpp]. The ARINC 653 standard [@ARINC653] defines the standard interfaces of partitioning kernels. Besides security, partitioning kernels are also concerned with safety, which means that a failure in one partition must not propagate to cause a failure in another partition.

Comparison of Different Kernels
-------------------------------

A set of kernel concepts similar to the separation kernel needs to be clarified here: the security kernel, the partitioning kernel, and the hypervisor.

- Security Kernel [@Ames83]

Security kernels manage hardware resources, from which they create, export and protect abstractions (e.g., subjects/processes and memory objects) and related operations. Security kernels bind internal sensitivity labels to exported resources and mediate access by subjects to other resources according to a partial ordering of the labels defined in an internal policy module. Separation kernels extend security kernels with *partitions*: they map the set of exported resources into partitions, and resources in a given partition are treated equivalently w.r.t. the inter-partition flow policy. Subjects in one partition may be allowed to access resources in another partition. Separation kernels enforce the separation of partitions and allow (subjects in those) partitions to cause flows, each of which, when projected to the partition space, comprises a flow between partitions [@Levin07].

- Partitioning Kernel [@Rushby00; @pkpp; @Leiner07]

Partitioning kernels are concerned with safety separation, largely based on an ARINC 653-style separation scheme. Besides information flow control, partitioning kernels concentrate on spatial and temporal partitioning. They provide a reliable protection mechanism for the integration of different application subsystems: they split a system into execution spaces that prohibit unintended interference between different application subsystems.
Reliable protection in both the spatial and the temporal domain is particularly relevant for systems in which the co-existence of safety-critical and non-safety-critical application subsystems is to be supported. Partitioning at the node level enforces fault containment, and thereby enables simplified replacement/update and increases the reusability of software components. In order to provide an execution environment that allows the execution of software components without unintended interference, temporal and spatial partitioning for both computational and communication resources is required. Spatial partitioning ensures that a software component cannot alter the code or private data of other software components. Temporal partitioning ensures that a software component cannot affect the ability of other software components to access shared resources.

For the purpose of spatial partitioning, system memory is divided among partitions in a fixed manner. The idea is to make one processor behave like several processors by completely isolating the subsystems. Hard partitions are set up for each part of the system, and each partition has a certain amount of memory allocated to it. Each partition is forever limited to its initial fixed memory allocation, which can neither be increased nor decreased after system initialization. For the purpose of temporal partitioning, partitioning kernels run in a static fashion. They typically support a static table-driven scheduling approach [@Ramam94] that is very well suited for safety-critical and hard real-time systems, since its static nature makes it possible to check the feasibility of the schedule in advance.

Typical partitioning kernels are Wind River VxWorks 653, Green Hills INTEGRITY-178B, LynxOS-178, and PikeOS. All these products are compliant with ARINC 653. In the following sections, the notion "separation kernel" covers both the original concept [@Rushby81] and the concept of the partitioning kernel.
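As an illustration of such static table-driven scheduling, a partition schedule can be sketched as a fixed table over a repeating major frame. This is a minimal sketch: the partition names, window lengths and the 100 ms frame are invented for illustration, and real kernels read such a table from configuration data at initialization.

```python
# Sketch of an ARINC 653-style static, table-driven partition schedule.
MAJOR_FRAME_MS = 100  # the schedule repeats with this fixed period

# Static schedule table: (partition, offset_ms, duration_ms), fixed offline.
SCHEDULE = [("P1", 0, 40), ("P2", 40, 30), ("P3", 70, 30)]

def is_feasible(schedule, frame_ms):
    """Static feasibility check: windows must tile the major frame
    with no gaps and no overlaps (checkable before the system runs)."""
    t = 0
    for _, offset, duration in sorted(schedule, key=lambda w: w[1]):
        if offset != t:
            return False
        t = offset + duration
    return t == frame_ms

def partition_at(t_ms):
    """Return the partition that owns the processor at absolute time t_ms."""
    phase = t_ms % MAJOR_FRAME_MS
    for name, offset, duration in SCHEDULE:
        if offset <= phase < offset + duration:
            return name
    raise ValueError("schedule does not cover the major frame")
```

Because the table is fixed at configuration time, feasibility (`is_feasible`) can be established once, offline, which is precisely what makes this scheduling style attractive for safety-critical certification.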
- Hypervisor [@Popek74]

Hypervisors or virtual machine monitors (VMMs) provide a software virtualization environment in which other software, including operating systems, can run with the appearance of full access to the underlying system hardware, whereas in fact such access is under the complete control of the hypervisor. In general, hypervisors are classified into two types [@Popek74]: Type 1 (native, bare-metal) hypervisors and Type 2 (hosted) hypervisors. Hypervisors virtualize the hardware (processor, memory, devices, etc.) for the hosted operating systems; therefore, general purpose operating systems can run on top of hypervisors directly.

Similar to Type 1 hypervisors, separation kernels achieve the isolation of resources in different partitions by virtualization of shared resources, such that each partition is assigned a set of resources that appear to be entirely its own. But traditional hypervisors are not specifically designed for secure separation, and typically do not provide services for explicit memory sharing. Moreover, traditional hypervisors support interprocess communication only via emulated communication devices. Hypervisors permit the deployment of legacy applications (within a VM) and new applications on the same platform, whilst separation kernels typically support only specific APIs (e.g., ARINC 653) for hosted applications.

Hypervisors have also been introduced into embedded systems; in IMA systems they are called embedded hypervisors. The application of embedded hypervisors is increasing. PikeOS, Wind River Hypervisor and LynuxWorks's LynxSecure are typical embedded hypervisors for safety and security-critical systems. Because of the overlapping functionality of separation kernels and hypervisors, we also survey typical verification work on embedded hypervisors in this paper.
The State of the Art {#sec:verify}
====================

Due to the importance of security policies in MILS architectures, we highlight typical definitions of the security policies supported by separation kernels. In this section, we first survey formalizations of security policies and properties. Then, we present formal specifications and models of separation kernels. Finally, we survey the formal verification of separation kernels.

First, we distinguish the concepts of "security policy", "security property" and "security model". Security policies and properties define the security requirements of separation kernels. Separation kernels are represented by security models [@Goguen82], which are abstractions of concrete kernel implementations; thus, the security models of separation kernels serve as their formal models. Security policies and properties are formulas expressed in first- or higher-order logic. Their preservation by the security models establishes the security of the separation kernels.

Formalization of Security Policies and Properties
-------------------------------------------------

This subsection presents formalizations of security policies (e.g., MILS and SKPP) and security properties (e.g., data separation, information flow security, and temporal separation).

### MILS and SKPP Security Policies

A formal specification of what the system allows, needs and guards against is called a formal policy. Two typical security policies for the MILS architecture based on separation kernels are the inter-partition flow policy (IPFP) and the partitioned information flow policy (PIFP). The inter-partition flow policy is a security policy of the original separation kernel [@Rushby81] in MILS. Separation kernels map the set of exported resources into partitions: $resource\_map: resource \rightarrow partition$.
The inter-partition flow policy [@Levin07] can be expressed abstractly as a partition flow matrix whose entries indicate the mode of the flow, $partition\_flow: partition \times partition \rightarrow mode$. The mode indicates the direction of the flow; for instance, $partition\_flow(P_1,P_2) = W$ means that partition $P_1$ is allowed to write to any resource in $P_2$. Resources in a given partition are treated equivalently w.r.t. the inter-partition flow policy.

"SKPP specifies the security functional and assurance requirements for a class of separation kernels. Unlike those traditional security kernels which perform all trusted functions for a secure operating system, a separation kernel's primary security function is to partition (viz. separate) the subjects and resources of a system into security policy-equivalence classes, and to enforce the rules for authorized information flows between and within partitions. [@SKPP07]" The SKPP mainly addresses security evaluations of separation kernels at EAL 6 and EAL 7 of the CC. It enforces the PIFP with requirements at the gross partition level as well as at the granularity of individual subjects and resources. A subset of the exported resources are active and are commonly referred to as *subjects*. Flows occur between a subject and a resource, and between the subject's partition and the resource's partition, in a direction defined by a *mode*. In *read* mode, the subject is the destination of the flow; in *write* mode, the subject is the source of the flow.

Fig. \[fig:skpp\] illustrates an allocation of TOE (Target of Evaluation, a concept in the CC) resources. The resources inside each rectangle are bound to that partition. Allowed information flows are indicated by the directed arrows. For instance, Subject 2 is allowed to write Resource 6 and Subject 3 is allowed to read Resource 9. By this policy abstraction, subjects in a partition can have different access rights to resources in another partition.
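The resource map and partition flow matrix described above can be rendered as a small executable check. This is a hedged sketch: the partitions, resources and modes are invented, and intra-partition access is simply taken to be allowed.

```python
# Sketch of the inter-partition flow policy (IPFP): resource_map sends each
# exported resource to its partition, and partition_flow authorizes flow
# modes between partitions. All concrete names are illustrative.
RESOURCE_MAP = {"r1": "P1", "r2": "P2", "r3": "P2"}

# (source partition, target partition) -> set of allowed modes;
# a missing entry means no flow is authorized in any mode.
PARTITION_FLOW = {("P1", "P2"): {"W"}, ("P2", "P1"): {"R"}}

def flow_allowed(subject_partition, resource, mode):
    """A subject in subject_partition may access `resource` in `mode` iff
    the flow matrix authorizes that mode between the two partitions
    (access to the subject's own partition is allowed in this sketch)."""
    target = RESOURCE_MAP[resource]
    if target == subject_partition:
        return True
    return mode in PARTITION_FLOW.get((subject_partition, target), set())
```

For example, `flow_allowed("P1", "r2", "W")` holds because $partition\_flow(P_1,P_2) = W$, while a read of the same resource from $P_1$ is rejected since no read mode is configured.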
Resources 7, 8 and 10 illustrate this finer-grained control of information flows.

![Allocation of TOE Resources [@SKPP07][]{data-label="fig:skpp"}](FCS-14226-fig2.pdf){width="3.0in"}

The SKPP defines a partition-to-partition policy ($P2P_p$) and a subject-to-resource policy ($S2R_p$, also known as a *least privilege policy*). Flow rules, P2P and S2R, are associated with each policy:
$$\begin{aligned}
&S2R: [s:subject, r:resource, m:mode] \\
&P2P: [sub\_p: partition, res\_p: partition, m: mode]
\end{aligned}$$
The PIFP of the SKPP is:
$$\begin{aligned}
&ALLOWED([s: subject, r: resource, m: mode]) = \\
&\quad (S2R_p \in sys.policy \rightarrow ( \\
&\quad\quad S2R(s,r).m=allow \ \vee \\
&\quad\quad (S2R(s,r).m=null \wedge P2P(s.p,r.p).m=allow) \\
&\quad )) \ \wedge \\
&\quad (P2P_p \in sys.policy \rightarrow P2P(s.p,r.p).m=allow)
\end{aligned}$$
where $sys.policy$ indicates which policies are configured to be active. Under the $S2R_p$ policy, the $S2R$ rules override the $P2P$ rules everywhere except where there is a null entry in the $S2R$ rule set. Note that the $S2R_p$ policy is defined to reference the $P2P$ values regardless of whether the $P2P_p$ policy itself is active. The separation kernels of VxWorks MILS, LynxSecure, INTEGRITY-178B and PikeOS meet the security functional and security assurance requirements of the SKPP.

### Data Separation Properties

Data separation requires that the resources of a partition be completely independent of other partitions.

- MASK Separation Properties

The U.S. DoD set out in 1997 to formally construct a separation kernel, the Mathematically Analyzed Separation Kernel (MASK) [@Martin00; @Martin02], which has been used by Motorola on its smart cards. MASK regulates communication between processes based on separation policies. The separation policies of MASK comprise two separation axioms: the *communication policy* and a second, unnamed policy. In the abstraction of the MASK separation kernel, a *Multiple Cell Abstraction (MCA)* describes the system.
The *Init* and *Next* operations evolve the system. A *cell*, described by a *Single Cell Abstraction (SCA)*, is a domain of execution or a context, which consists of a collection of strands. Each strand is a stream of instructions to be executed when a message is input to a strand of a cell. The communication policy is as follows:
$$\label{eq:mask_comm}
\begin{aligned}
Fiber_y(MCA) \neq Fiber_y(Next_x(MCA)) \\
\Rightarrow Communicates(x,y)
\end{aligned}$$
where $Fiber_y$ determines the SCA corresponding to the CellID $y$ in the subscript, and $Next_x$ advances the system state by advancing the cell indicated by the subscript $x$. The policy states that if the fiber of cell $y$ changes as the result of advancing the state of cell $x$, it must be the case that $x$ is permitted to communicate with $y$. The second separation constraint upon cells is as follows:
$$\label{eq:mask_comm2}
\begin{aligned}
Fiber_x(MCA_1) = Fiber_x(MCA_2) \Rightarrow \\
Fiber_y(MCA_1) = Fiber_y(MCA_2) \Rightarrow \\
Fiber_y(Next_x(MCA_1)) = Fiber_y(Next_x(MCA_2))
\end{aligned}$$
This policy states that if an action by cell $x$ is going to change the state of cell $y$, the change in the state of $y$ depends only on the states of $x$ and $y$. In other words, the new state of $y$ is a function of the previous states of $x$ and $y$.

- ED Data Separation Properties

To provide evidence for a CC evaluation that the ED (Embedded Devices) separation kernel enforces data separation, five subproperties, namely No-Exfiltration, No-Infiltration, Temporal Separation, Separation of Control, and Kernel Integrity, were proposed to verify the kernel [@Heitmeyer06; @Heitmeyer08]. The Top-Level Specification (TLS) is used to provide a precise and understandable description of the allowed security-relevant external behavior, and to make explicit the assumptions on which the TLS is based. The TLS also provides a formal context and precise vocabulary for defining the data separation properties.
In the TLS, the state machine representing the kernel behavior is defined in terms of an input alphabet, a set of states, an initial state and a transform relation describing the allowed state transitions. The input alphabet contains internal events (which cause the kernel to invoke some process) and external events (performed by an external host). A state consists of the id of the partition processing data, the values of the partitions' memory areas and a flag indicating the sanitization of each memory area.

The No-Exfiltration Property states that data processing in any partition cannot influence data stored outside the partition, which is formulated as follows:
$$\begin{aligned}
& s,s' \in S \wedge s' = T(s,e) \; \wedge \\
& e \in P_j \cup E^{In}_j \cup E^{Out}_j \; \wedge \\
& a \in \mathcal{M} \wedge a_s \neq a_{s'} \\
& \Rightarrow a \in A_j
\end{aligned}$$
where $s$ and $s'$ are states and $s'$ is the successor of $s$ under an event $e$ of partition $j$. $P_j$ is the internal event set of partition $j$; $E^{In}_j$ is the set of external events writing into or clearing the input buffers of partition $j$; $E^{Out}_j$ is the set of external events reading from or clearing the output buffers of partition $j$. For any memory area $a$ of the system ($\mathcal{M}$), if the values of $a$ in states $s$ and $s'$ are not equal, then $a$ must be a memory area of partition $j$ ($A_j$).

The No-Infiltration Property states that data processing in a partition is not influenced by data outside that partition, which is formulated as follows.
$$\begin{aligned}
& s_1,s_2,s'_1,s'_2 \in S \wedge s'_1 = T(s_1,e) \wedge \\
& s'_2 = T(s_2,e) \wedge (\forall a \in A_i) \; a_{s_1} = a_{s_2} \\
& \Rightarrow (\forall a \in A_i) \; a_{s'_1} = a_{s'_2}
\end{aligned}$$

The Separation of Control Property states that when data processing is in progress in one partition, no data is processed in any other partition until processing in the first partition terminates, which is formulated as follows:
$$\begin{aligned}
& s,s' \in S \wedge s' = T(s,e) \; \wedge \\
& c_s \neq j \wedge c_{s'} \neq j \\
& \Rightarrow (\forall a \in A_j) \; a_{s} = a_{s'}
\end{aligned}$$
where $c_s$ is the id of the partition that is processing data in state $s$.

The Kernel Integrity Property states that when data processing is in progress in a partition, the data stored in the shared memory area does not change, which is formulated as follows:
$$\begin{aligned}
s,s' \in S \wedge s' = T(s,e) \wedge e \in P_i \\
\Rightarrow G_s = G_{s'}
\end{aligned}$$
where $G$ is the single shared memory area, containing all programs and data not residing in any partition's memory area, and $P_i$ is the internal event set of partition $i$.

### Information Flow Security Properties

In the domain of operating systems, state-event based information flow security properties are often applied [@Murray12]. We present two major categories of information flow security properties: the GWV policy and noninterference.

- GWV Policy

Greve, Wilding and Vanfleet proposed the GWV security policy in [@Greve03] to model separation kernels. The separation axiom of this policy is as follows:
$$\label{eq:gwv}
\begin{aligned}
& selectlist(segs,st1) = selectlist(segs,st2) \; \wedge \\
& current(st1) = current(st2) \; \wedge \\
& select(seg,st1) = select(seg,st2) \\
& \Rightarrow \\
& select(seg,next(st1)) = select(seg,next(st2))
\end{aligned}$$
where $segs = dia(seg) \cap segsofpartition(current(st1))$.
The security policy requires that the effect on an arbitrary memory segment $seg$ of the execution of one machine step is a function of the set of memory segments that are both allowed to interact with $seg$ and are associated with the current partition. In this formula, the function $select$ extracts the values in a machine state that are associated with a memory segment. The function $selectlist$ takes a list of segments and returns a list of segment values in a machine state. The function $current$ calculates the current partition given a machine state. The function $next$ models one step of computation of the machine: it takes a machine state as the argument and returns a machine state that represents the effect of the single step. The function $dia$ (“direct interaction allowed”) takes a memory segment name as the argument and returns the list of memory segments that are allowed to affect it. The function $segsofpartition$ returns the names of the memory segments associated with a particular partition. Detailed information about the meaning of a machine state and the $next$ function of states is given in [@Alves04]. The GWV security policy is well known and has been accepted in industry [@integrity08; @Greve04; @Greve10]. A PVS formalization of the GWV policy has been provided by Rushby [@Rushby04]. The GWV policy has been modified and extended in [@Alves04; @Tverdy11]. The $dia$ function is weakened in [@Alves04] by allowing communication between segments of the same partition, as follows. $$\begin{aligned} seg \in segsofpartition(p) \Rightarrow \\ segsofpartition(p) \subseteq dia(seg) \end{aligned}$$ The $dia$ function is extended in [@Tverdy11] by a restriction considering partition names, $diaStrong(seg,p) \subset dia(seg)$. In addition, the GWV policy is extended with *subjects*. A subject is an active entity which operates on segments of a GWV partition. The extended GWV policy is as follows. 
$$\label{eq:gwv_pikeos} \begin{aligned} & current(st1) = current(st2) \; \wedge \\ & currentsubject(st1) = currentsubject(st2) \; \wedge \\ & select(seg,st1) = select(seg,st2) \; \wedge \\ & selectlist(segs,st1) = selectlist(segs,st2) \\ & \Rightarrow \\ & select(seg,next(st1)) = select(seg,next(st2)) \end{aligned}$$ where $segs = diastrong(seg,current(st1)) \cap segsofpartition(current(st1))$. The extended GWV policy has been applied to formally specify PikeOS [@Tverdy11]. The GWV policy is only applicable to a class of systems in which strict temporal partitioning is utilized and the kernel state cannot be influenced by the execution of code within partitions. The GWV theorem has been shown to hold for the AAMP7G’s hardware-based separation kernel [@Wilding10], whose scheduler is strictly static. The GWV policy is sound but not complete [@Grev05]. In GWV, the $dia$ function only expresses the direct interaction between segments. GWVr1 [@Grev05] extends it with multiple active “agents”, such that moving data from one segment to another is under the control of an agent; GWVr1 is similar to the $diaStrong$ function in [@Tverdy11]. For more dynamic models, a more general GWV theorem, GWVr2 [@Grev05], uses a more general notion of influence between segments, the information flow graph, to specify the formal security policy. The information flow graph enables system analysis and can be used as a foundation for application-level policies. GWVr2 is used in the formal analysis of the INTEGRITY-178B separation kernel [@Richards10]. A more theoretical discussion of GWVr1 and GWVr2 can be found in [@Greve10]. - Noninterference The concept of noninterference is introduced in [@Goguen82] to provide a formal foundation for the specification and analysis of security policies and the mechanisms to enforce them. 
The intuitive meaning of noninterference is that a security domain $u$ cannot interfere with a domain $v$ if no action performed by $u$ can influence subsequent outputs seen by $v$. The system is divided into a number of *domains*, and the allowed information flows between domains are specified by means of an information flow policy $\rightsquigarrow$, such that $u \rightsquigarrow v$ if information is allowed to flow from a domain $u$ to a domain $v$. The standard notion of noninterference is too strong and cannot model channel-control policies. Thus, intransitive noninterference is introduced, which uses a $sources(\alpha,u)$ function to identify those actions in an action sequence $\alpha$ whose domains may influence the domain $u$. Rushby [@rushby92] gives a standard definition of intransitive noninterference as follows. $$\label{eq:nonitf} \begin{aligned} noninterference \equiv \forall \alpha \ u . (s_0 \lhd \alpha \stackrel{u}{\bumpeq} s_0 \lhd ipurge(\alpha,u)) \end{aligned}$$ where $ipurge(\alpha,u)$, defined in terms of $sources(\alpha,u)$, removes from the action sequence $\alpha$ those actions whose domains cannot interfere with $u$ directly or indirectly. A system is secure for the policy $\rightsquigarrow$ if, for each domain $u$ and each action sequence $\alpha$, the final states of executing $\alpha$ and $\alpha'$ (where $\alpha'$ is the result of removing the actions whose domains cannot influence $u$) from the initial state $s_0$ are observationally equivalent for $u$. Intransitive noninterference is usually chosen to formally verify information flow security of general purpose operating systems or separation kernels [@Murray12]. Classical noninterference is concerned with the secrets that events introduce into the system state and that are possibly observed via outputs [@von04]. 
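To make these definitions concrete, the $sources$ and $ipurge$ constructions can be executed on a toy machine and intransitive noninterference checked by brute force. The following Python sketch is purely illustrative: the three-domain machine, its actions, and all names are our own assumptions and are not taken from [@rushby92] or from any surveyed system.

```python
from itertools import product

# Toy deterministic machine with three domains and a chained flow policy
# A ~> B ~> C (plus reflexivity); A may influence C only via B.
DOMAINS = ["A", "B", "C"]
ACTIONS = {"a": "A", "b": "B", "c": "C"}   # action -> its domain
FLOWS = {("A", "B"), ("B", "C")}           # the policy ~>

def interferes(u, v):
    return u == v or (u, v) in FLOWS

def step(state, act):
    s = dict(state)
    if act == "a": s["A"] += 1           # local computation in A
    elif act == "b": s["B"] += s["A"]    # B reads A (allowed: A ~> B)
    elif act == "c": s["C"] += s["B"]    # C reads B (allowed: B ~> C)
    return s

def run(state, alpha):
    for act in alpha:
        state = step(state, act)
    return state

def sources(alpha, u):
    # Rushby-style sources: domains whose actions may influence u along alpha
    if not alpha:
        return {u}
    rest = sources(alpha[1:], u)
    d = ACTIONS[alpha[0]]
    return rest | {d} if any(interferes(d, v) for v in rest) else rest

def ipurge(alpha, u):
    # Drop actions whose domain cannot (even indirectly) reach u
    if not alpha:
        return []
    head, tail = alpha[0], alpha[1:]
    kept = ipurge(tail, u)
    return [head] + kept if ACTIONS[head] in sources(alpha, u) else kept

def secure(s0, max_len=3):
    # Intransitive noninterference, checked by brute force over short runs
    return all(run(s0, list(alpha))[u] == run(s0, ipurge(list(alpha), u))[u]
               for n in range(max_len + 1)
               for alpha in product(ACTIONS, repeat=n)
               for u in DOMAINS)
```

Here `secure` holds because the only flow from A to C is mediated by B, as the policy demands; redefining the action `c` to read A’s variable directly would make the check fail, since $A \rightsquigarrow C$ is not in the policy.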
Although noninterference is adequate for some sorts of applications, many others are concerned with preventing secret information from leaking out of the domains to which it is intended to be confined. Language-based information flow security typically considers information leakage and has two domains: *High* and *Low*. It is generalized to arbitrary multi-domain policies in [@von04] as a new notion, *nonleakage*. As pointed out in [@Mantel01], it is important to combine language-based and state-event based security; a new notion, *noninfluence*, which is a combination of nonleakage and traditional noninterference [@rushby92], is therefore proposed in [@von04]. A system is nonleaking if and only if for any states $s$ and $t$ and a domain $u$, the final states after executing any action sequence $\alpha$ in $s$ and $t$ are indistinguishable for $u$ whenever $s$ and $t$ are indistinguishable for all domains ($sources(\alpha,u)$) that may interfere with $u$ directly or indirectly during the execution of $\alpha$. Nonleakage is defined as follows. $$\label{eq:nonlk} \begin{aligned} nonleakage \equiv \forall \alpha \ s \ u \ t . s \stackrel{sources(\alpha,u)}{\approx} t \longrightarrow \\ s \lhd \alpha \stackrel{u}{\bumpeq} t \lhd \alpha \end{aligned}$$ Combining noninterference and nonleakage yields the notion of *noninfluence*, as follows. $$\label{eq:noninfl} \begin{aligned} noninfluence \equiv \forall \alpha \ s \ u \ t . s \stackrel{sources(\alpha,u)}{\approx} t \longrightarrow \\ s \lhd \alpha \stackrel{u}{\bumpeq} t \lhd ipurge(\alpha,u) \end{aligned}$$ *Nonleakage* and *noninfluence* have been applied in the formal verification of the seL4 separation kernel [@Murray13]. ### Temporal Separation Properties Temporal separation usually concerns sanitization/period processing. 
A sanitization property (called Temporal Separation) on the ED separation kernel is defined in [@Heitmeyer08] as follows, to guarantee that the data areas in a partition are clear when the system is not processing data in that partition. $$\begin{aligned} (\forall s \in S, 1 \leq i \leq n) \; c_s = 0 \\ \Rightarrow D_{i,s}^1 = 0 \wedge ... \wedge D_{i,s}^k = 0 \end{aligned}$$ where $c_s$ is the id of the partition that is processing data in state $s$. When $c_s$ is 0, no data processing in any partition is in progress. $D_{i,s}^1 = 0, ..., D_{i,s}^k = 0$ means that all data areas in the partition $i$ are clear. Satisfaction of this property implies that no data stored in the partition during one configuration of the partition can remain in any memory area of a later configuration. ### Formal Comparison of Policies and Properties As presented in the previous subsections, security policies and properties for separation kernels have been studied extensively in the literature. They are formalized in different specification and verification systems, such as ACL2, Isabelle/HOL, and PVS. Formally comparing them to clarify their relationships establishes a substantial foundation for the formal specification and verification of separation kernels. In [@von04], the notions of noninterference, nonleakage, and noninfluence are defined over the same state machine and formally compared. The author states that noninfluence is semantically equal to the conjunction of noninterference and nonleakage. In [@Bond14], the GWV policy and Rushby’s noninterference are formally compared in detail. The authors present a mapping between the objects and relations of the two models. The conclusion is that GWV is stronger than Rushby’s noninterference, i.e., all systems satisfying GWV’s separation also satisfy Rushby’s noninterference. 
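To make such policy formulations concrete, the GWV separation axiom can be instantiated on a small machine and checked exhaustively. The following Python sketch is purely illustrative (our own toy two-partition machine with a one-way channel, not any surveyed model): it enumerates all state pairs and checks the separation axiom for every segment.

```python
from itertools import product

# Toy two-partition machine: P1 owns s1, P2 owns s2, ch is a one-way
# channel from P1 to P2. All names and the machine itself are illustrative.
SEGS = ["s1", "s2", "ch"]
DIA = {"s1": {"s1"}, "ch": {"s1", "ch"}, "s2": {"ch", "s2"}}  # allowed influences
SEGSOF = {1: {"s1", "ch"}, 2: {"s2", "ch"}}                   # segments per partition

def nxt(st):
    cur, mem = st
    mem = dict(mem)
    if cur == 1:
        mem["ch"] = mem["s1"]                 # P1 writes the channel
        mem["s1"] = (mem["s1"] + 1) % 2       # local computation in P1
        return (2, mem)
    mem["s2"] = (mem["s2"] + mem["ch"]) % 2   # P2 reads the channel
    return (1, mem)

def gwv_holds():
    states = [(c, dict(zip(SEGS, vs)))
              for c in (1, 2) for vs in product((0, 1), repeat=3)]
    for st1, st2 in product(states, repeat=2):
        if st1[0] != st2[0]:
            continue                          # hypothesis: current(st1) = current(st2)
        for seg in SEGS:
            segs = DIA[seg] & SEGSOF[st1[0]]
            if (all(st1[1][s] == st2[1][s] for s in segs)
                    and st1[1][seg] == st2[1][seg]
                    and nxt(st1)[1][seg] != nxt(st2)[1][seg]):
                return False                  # seg changed for an unexplained reason
    return True
```

The check succeeds for this machine; removing `ch` from `DIA["s2"]` would make it fail, since `s2` would then change in a way not explained by its allowed influences.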
Formal Specification and Models of Separation Kernels ----------------------------------------------------- The formal specification and models of separation kernels present a significant contribution to formal verification. Here, we only discuss the models used for formally developing separation kernels. Models targeted at formal verification are surveyed in the next subsection. In formal development, the specification may be used as a guide while the concrete implementation is developed during the design process. We present typical specifications and models of separation kernels in turn. - Craig’s Z model of separation kernel Following his earlier book on modeling operating system kernels [@Craig06], which shows that it is possible and relatively easy to specify small kernels and refine them to running code, Craig [@Craig07] is concerned entirely with the specification, design and refinement in Z [@Abrial80] to executable code of operating system kernels, one of which is a separation kernel, to demonstrate that the refinement of formal specifications of kernels is possible and quite tractable. Craig provides a substantial piece of work on a formal separation kernel model which delivers the majority of separation kernel requirements and functionalities [@Velykis10], such as (1) a process table for basic process management; (2) process spatial separation in terms of non-overlapping address space allocation; (3) communication channels by means of an asynchronous kernel-based messaging system; and (4) process temporal separation using a non-preemptive scheduler and the messaging system. The formal specification is relatively complete and the refinements reach the level at which executable code in a language such as C or Ada can be read off from the Z specification. Separation kernels frequently need threads/tasks inside each partition. Craig’s model makes no mention of threads; it is considered that threads can be included by simple modifications to the specification. 
Hardware is not the emphasis of their work: the Intel IA32/64 hardware operations are specified in the model at a level of detail adequate for the production of the tiny amounts of assembly code required to complete the kernel. Finally, all of the work in the book is done by hand, including the specification and proofs. - Z model of separation kernel in Verified Software project Formalization of separation kernels [@Velykis09; @Velykis10] is part of a pilot project on modeling OS kernels within the international Grand Challenge (GC) in Verified Software [@Jones06; @Woodcock09]. The objective is to provide proofs of the correctness of a formal specification and design of separation kernels. The authors start from Craig’s formal model [@Craig07] and take into account the separation kernel requirements in [@Rushby81] and SKPP [@SKPP07]. Craig’s original model is typeset by hand and includes several manual proofs. The specification is augmented in [@Velykis09; @Velykis10] using the Z notation [@Wood96] by mechanising it in the Z/Eves theorem prover. All proofs in [@Craig07] are also declared and proved using the Z/Eves prover. As a result, syntax errors in Craig’s specification are eliminated, model feasibility and API robustness are verified, and missing invariants and new security properties to guarantee correct operations are found. The upgraded formal model is fully proved mechanically. It focuses on the core data structures within a separation kernel, such as the process table, queue and scheduler. Craig’s scheduler model is significantly improved: certain properties of the scheduler (e.g., scheduler deadlock analysis) can be formulated and proved by translating verbal requirements into mathematical invariants and improving the design of the specification. 
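Craig’s spatial-separation requirement of non-overlapping address spaces can be illustrated by a small sketch in which, mirroring a Z-style proof obligation, an operation commits an update only if the state invariant is preserved. The class and all names below are our own illustration, not Craig’s actual Z model.

```python
from dataclasses import dataclass, field
from typing import Dict, Tuple

# Illustrative sketch (not Craig's Z model): a process table whose invariant
# mirrors the spatial-separation condition that the address ranges of
# distinct processes never overlap.
@dataclass
class ProcessTable:
    regions: Dict[int, Tuple[int, int]] = field(default_factory=dict)  # pid -> [base, end)

    def invariant(self) -> bool:
        rs = list(self.regions.values())
        return all(a1 >= b2 or a2 >= b1          # pairwise-disjoint intervals
                   for i, (a1, b1) in enumerate(rs)
                   for (a2, b2) in rs[i + 1:])

    def allocate(self, pid: int, base: int, size: int) -> bool:
        # Operation refinement in miniature: only commit updates that
        # preserve the invariant, mirroring the Z/Eves proof obligation.
        candidate = dict(self.regions)
        candidate[pid] = (base, base + size)
        if ProcessTable(candidate).invariant():
            self.regions = candidate
            return True
        return False
```

For example, `allocate(2, 50, 100)` is refused after `allocate(1, 0, 100)` because the ranges $[50,150)$ and $[0,100)$ overlap.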
- B model of a secure partitioning kernel The B Method [@abrial96] has been used for the formal development of a secure partitioning kernel (SPK) at the Critical Software company [@Andre09]. The novelty of this work for the formal methods community is the challenge of applying the B Method outside its usual application domains (railway and automotive). Initially, a complete development of a high-level model of the SPK is carried out. The high-level model constitutes a complete architectural design of the system, and is animated and validated with ProB [@Leus03]. The high-level abstract model of the SPK covers memory management, scheduling, kernel communication, the flow policy, and the clock. The validated high-level model can then be refined into a complete, formally developed SPK. As a first step, the PIFP policy, which is part of the SPK, is refined to a level from which C code can be automatically generated. The refinement process that leads to the implementation of the PIFP is carried out with the assistance of Atelier B. Finally, an open source microkernel, PREX, is adopted to integrate the proposed PIFP. The work demonstrates the feasibility of applying formal methods to only parts of a system. - B model of OS-K separation kernel A separation kernel based operating system, OS-K [@Kawamorita10], has been designed for use in secure embedded systems by applying formal methods. The separation kernel layer and the additional OS services on top of it are prototyped on the Intel IA-32 architecture. The separation kernel is designed using two formal methods: the B method, adopted for the formal design, and the Spin model checker, used for verification via model checking. The separation kernel layer provides several functions: partition management, inter-partition communication, access control for inter-partition communication, memory management, timer management, processor scheduling, I/O interrupt synchronization for device driver operation, and interrupt handling. 
The separation kernel provides the access-control function for inter-partition communication, which provides the only linkage between separated partitions. In the IA-32 based implementation, two memory-protection features of the IA-32 architecture are utilized: the ring protection feature is used to protect the memory area of the separation kernel against access by the processes and the partition OSs, and each partition is assigned a local descriptor table in which the partition segments are registered, isolating the partition memory spaces. The B models are also refined to an implementation by converting the non-deterministic sections to sequential processing. Proof obligations of the B model are generated and verified using the B4free tools: there are more than 2,700 proof obligations, almost all of which are proved automatically. - Event-B model of ARINC 653 The kernel interface defines the operating system services provided to applications. Formalization of the kernel interface supports formal modeling and verification of the application software on top of separation kernels. ARINC 653 [@ARINC653] aims at providing a standardized interface between separation kernels and application software, as well as a set of functionalities to improve the safety and certification process of safety-critical systems. Therefore, formalization and verification of ARINC 653 have been considered in recent years. In [@zhao15], the system functionality and all of the 57 services specified in ARINC 653 Part 1 are formalized using Event-B [@Abrial07]. The authors use the refinement structure in Event-B to formalize ARINC 653 in a stepwise manner, with a semi-automatic translation from the service requirements of ARINC 653 into the low-level specification. The Event-B specification has 2,700 LOC. A set of safety properties are defined as invariants in Event-B and verified on the specification. 
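The channel-based communication discipline standardized by ARINC 653 can be sketched as follows: messages move between partitions only along channels fixed in a static configuration. This is an illustrative simplification with names of our own choosing, not the Event-B model of [@zhao15].

```python
from collections import deque

# Illustrative sketch of ARINC 653-style channel-based communication:
# messages may move between partitions only along statically configured
# channels; there is no other inter-partition flow.
class Channels:
    def __init__(self, config):
        # config: set of (src_partition, dst_partition) pairs fixed at build time
        self.config = set(config)
        self.queues = {c: deque() for c in self.config}

    def send(self, src, dst, msg):
        if (src, dst) not in self.config:     # no configured channel -> no flow
            raise PermissionError(f"no channel {src} -> {dst}")
        self.queues[(src, dst)].append(msg)

    def receive(self, src, dst):
        q = self.queues.get((src, dst))
        return q.popleft() if q else None     # None when empty or unconfigured
```

A send over an unconfigured channel is rejected, which corresponds to the kind of safety invariant one would state in Event-B: reachable states never contain a message outside a configured channel.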
- Formal API Specification of PikeOS separation kernel Aiming at a precise model of PikeOS and a precise formulation of the PikeOS security policy, the EURO-MILS project[^1] releases a new generic specification of separation kernels – Controlled Interruptible Separation Kernel (CISK) [@Verb14]. This specification contains several facets that are useful for implementing separation kernels, such as interrupts, context switches between domains, and control. The initial specification is close to a Mealy machine. The second-level specification adds the notion of separation and the security policy. At the third level, interruptibility is introduced and calls to the kernel are no longer considered atomic. The final-level specification provides an interpretation of control that allows atomic kernel actions to be aborted or delayed. The specification is rich in detail, making it suitable for the formal verification of realistic, industrial systems. The specification and proofs have been formalized in Isabelle/HOL. Based on the CISK specification, a formal API specification of the PikeOS separation kernel has been provided, aiming at the certification of PikeOS up to CC EAL7 [@verb15]. The formal API specification covers IPC, memory, the file provider, ports, events, etc. Formal Verification of Separation Kernels ----------------------------------------- As introduced in [[[Section]{}]{}]{} \[sec:bg\], the typical properties of separation kernels are data separation, information flow security, fault isolation, and temporal separation. The first three properties are collectively called “spatial separation” properties. Therefore, we categorize formal verification work on separation kernels into spatial and temporal separation verification in this subsection. ### Spatial Separation Verification Most related work on formally verifying separation kernels considers both data separation and information flow security. 
Here, we present significant research work on spatial separation verification. Due to the importance of the data separation and information flow security properties for separation kernels, we finally highlight a general verification approach for these properties. - ED Separation Kernel A novel and practical approach to verifying the security of separation kernel code, which substantially reduces the cost of verification, is presented in [@Heitmeyer06; @Heitmeyer08]. The objective of this project is to provide evidence for a CC evaluation of the ED (Embedded Devices) separation kernel, which enforces data separation. The ED separation kernel contains 3,000 lines of C and assembly code. The code verification process consists of five steps: (1) producing a Top-Level Specification (TLS) using a state machine model; (2) formally expressing the security property as the data separation property of the state machine model; (3) formally verifying that the TLS enforces data separation in TAME (Timed Automata Modeling Environment), a front end to the PVS theorem prover; (4) partitioning the code into three categories, identifying as “Other Code” the code that does not correspond to any behavior defined by the TLS; “Other Code” is ignored in the verification, which greatly simplifies the process; and (5) demonstrating that the kernel code conforms to the TLS. Two mapping functions are defined to establish the correspondence between the TLS and the kernel code. One establishes the correspondence between concrete states in the code and abstract states in the TLS. The other maps the preconditions and postconditions of the TLS events to the preconditions and postconditions that annotate the corresponding Event Code. The authors adopt a natural language representation of the TLS, and the size of the TLS is very small, taking only 15 pages. 
This simplifies communicating with the other stakeholders, changing the specification when the kernel behavior changes, translating the specification into TAME, and proving that the TLS enforces data separation. They spent 2.5 weeks formulating the TLS and the data separation property, 3.5 weeks producing the TAME model and formally verifying that the TLS enforces data separation, and 5 weeks establishing conformance between the code and the TLS. The cost of this formal verification is much lower than the verification effort on the seL4 kernel [@Klein09; @Klein10], where almost all of the source code was translated into an Isabelle/HOL description. - AAMP7G microprocessor The AAMP7G and its predecessor, the AAMP7 microprocessor of Rockwell Collins, are hardware implementations of separation kernels. The AAMP7 and AAMP7G designs are mathematically proved to achieve MILS using formal methods techniques as specified by EAL 7 of CC [@Greve04; @Wilding10]. The AAMP7G provides a novel architectural feature, *intrinsic partitioning*, that enables the microprocessor to enforce an explicit communication policy between applications. Rockwell Collins has performed a formal verification of the AAMP7G partitioning system using the ACL2 theorem prover. They first establish a formal security specification, the AAMP7G GWV theorem, which is the intrinsic partitioning separation theorem [@Greve04]. This theorem is an instantiation of the GWV policy [@Greve03]. Then, they produce an abstract model of the AAMP7G’s partitioning system and a low-level model that directly corresponds to the AAMP7G microcode. In the low-level model, each line of microcode is modeled by how it updates the state of the partition-relevant machine. The entire AAMP7G model is approximately 3,000 lines of ACL2 definitions. The AAMP7G GWV theorem is proved using ACL2. 
The proofs are decomposed into three main pieces: proofs to validate the correctness theorem, proofs to show that the abstract model meets the security specification, and proofs to show that the low-level model corresponds with the abstract model. The AAMP7G GWV theorem is shown as follows. The theorem involves abstract and functional (low-level) models of the AAMP7G. The theorem is about the behavior of the functional model, but they express the theorem about an abstract model of the AAMP7G that has been “lifted” from a functional model. In this way, the expression of the theorem is simplified. Moreover, the behavior of the most concrete model of the AAMP7G is also presented to ensure that the theorem is about the “real” AAMP7G. $$\begin{aligned} & secure\_config(spex) \; \wedge \\ & spex\_hyp(spex,fun\_st1) \; \wedge \\ & spex\_hyp(spex,fun\_st2) \; \wedge \\ & raw\_selectlist(segs,abs\_st1) = raw\_selectlist(segs,abs\_st2) \; \wedge \\ & current(abs\_st1) = current(abs\_st2) \; \wedge \\ & raw\_select(seg,abs\_st1) = raw\_select(seg,abs\_st2) \\ & \Rightarrow \\ & raw\_select(seg, lift\_raw(spex, next(spex,fun\_st1))) \\ & \quad = raw\_select(seg, lift\_raw(spex, next(spex,fun\_st2))) \end{aligned}$$ where $abs\_st1=lift\_raw(spex,fun\_st1)$, $abs\_st2=lift\_raw(spex,fun\_st2)$ and $segs=dia\_fs(seg,abs\_st1) \cap segs\_fs(current(abs\_st1),abs\_st1)$. - PikeOS PikeOS [@Kaiser07] is a powerful and efficient para-virtualization real-time operating system based on a separation microkernel. The Verisoft XT [^2] project has an Avionics subproject [@Baumann09; @Baumann09b; @Baumann09c] to prove the functional correctness of all system calls of PikeOS at the source code level using the VCC verification tool [^3]. They propose a simulation theorem between a top-level abstract model and the system consisting of the kernel and user programs running in alternation on the real machine. 
They identify the correctness properties of all components in the trace that are needed for the overall correctness proofs of the microkernel. Memory separation of the PikeOS separation kernel has been formally verified at the source code level [@Baumann11], also using VCC. The desired memory separation property is easy to describe informally but infeasible to define directly in the specification language. Therefore, they break down the high-level, non-functional requirement into functional memory manager properties that can be expressed as a set of assertions for function contracts. The GWV property has been applied to verify the PikeOS separation kernel in [@Tverdy11]. They extend the GWV property with *subjects* to resolve the problem that the same current partition can have different active tasks. They present a modular way to apply the GWV property to the two layers of PikeOS. In the micro-kernel model, the major abstractions are tasks and threads, which correspond to subjects and partitions, respectively, in the extended GWV theorem. The *segment* is instantiated as the physical addresses in memory. In the separation kernel model, they add “partitions” and separate the tasks of the micro-kernel model and the physical addresses of memory into different partitions. The modular and reusable application of the security policy reduces the number of formal models and hence the number of artefacts to certify. All models are formalised in Isabelle/HOL. - INTEGRITY-178B The INTEGRITY-178B separation kernel of Green Hills Software was formally analysed and obtained a CC Certificate at the EAL 6+ level on September 1, 2008 [@Richards10]. 
The INTEGRITY-178B evaluation requirements for EAL 6+ specify five elements that are either formal or semi-formal: (1) the Security Policy Model, a formal specification of the relevant security properties of the system; (2) the Functional Specification, a formal representation of the functional interfaces of the system; (3) the High-Level Design, a semi-formal and abstract representation of the system; (4) the Low-Level Design, a semi-formal but detailed representation of the system; and (5) the Representation Correspondence, which demonstrates the correspondence between pairs of the above four elements. Considering that the original GWV theorem [@Greve03] is only applicable to strictly static kernels, they adopt the GWVr2 [@Greve10] theorem as the Security Policy Model, because INTEGRITY-178B’s scheduling model is much more dynamic. The GWVr2 theorem is $system(state) = system^*(graph,state)$. This theorem means that $system$ and $system^*$ produce identical results for all inputs of interest; it implies that the graph used by $system^*$ completely captures the information flows of the system. The system is modeled as a state transition system that receives the current state of the system, as well as any external inputs, and produces a new system state, as well as any external outputs. This state transition is expressed as $state' = system(state)$, where the external inputs and outputs are also contained in the system state structure. The hardware-independent portion of the INTEGRITY-178B kernel is implemented in C code and formally modeled in ACL2 with a one-to-one correspondence to the C source code. This simplifies the “code-to-spec” review during CC certification. The hardware-dependent code is not modeled and is subjected to a rigorous by-hand review. In order to prove the GWVr2 theorem on the ACL2 model, they first prove two lemmas w.r.t. each function in the model. 
The *Workhorse* Lemma states that the function’s graph sufficiently captures the dependencies in the data flows of the function. The *ClearP* Lemma states that all of the changes to the state performed by a function are captured by the function’s graph. Once these two lemmas are proved, it is straightforward to prove the GWVr2 theorem. - PROSPER separation kernel The information flow security of a simple ARM-based separation kernel, PROSPER, has been formally verified by proving a bisimulation between the abstract specification and the kernel binary code, where communication between partitions is explicit and information flow is analyzed in the presence of such communication channels [@Dam13]. The PROSPER kernel consists of 150 lines of assembly code and 600 lines of C code. The system model only considers two partitions that are respectively executed on two separate special ARMv7 machines communicating via asynchronous message passing, a logical component, and a shared timer. The goal of the verification is to show that there is no way for the partitions to affect each other, directly or indirectly, except through the communication channel. This is assured by guaranteeing that a partition cannot read or write the memory or register contents of the other partition, and that communication is realized only by explicit usage of the intended channel. The isolation theorem of the kernel is as follows. $$\begin{aligned} tr_{g,r}(mem_1,mem_2) = tr_{g,i}(mem_1,mem_2) \end{aligned}$$ where $g \in \{1,2\}$ indicates the partition, and $mem_1$ and $mem_2$ are the initial memories of the two partitions, respectively. $r$ (real system) indicates the implementation and $i$ (ideal system) the abstract model. The theorem means that the traces of each partition at the abstraction and implementation layers are equivalent. The theorem is reduced to subsidiary properties: the isolation lemmas of ARM and of User/Handler. 
The three ARM lemmas concerning the ARM instruction set architecture assure that (1) if an ARM machine executes in user mode in a memory-protected configuration, the behavior of the active partition is influenced only by those resources allowed to do so; (2) the non-accessible resources not allocated to the active partition in user mode are not modified by the execution of this partition; and (3) if an ARM machine switches from a state in user mode to another in privileged mode, the conditions for the execution of the handler are prepared properly. The models are built on top of the Cambridge ARM HOL4 model, which is extended with a simple MMU unit. The isolation lemmas of ARM are proved using the ARM-prover, developed for this purpose in HOL4. The model of the ideal system, the formalization of the verification procedure, and the proofs of the theorems consist of 21k lines of HOL4 code. During the verification process, several bugs were identified and fixed; e.g., the registers were not sanitized after the bootstrap, and some of the execution flags were not correctly restored during the context switch. They verify the entire kernel at the machine code level and avoid reliance on a C compiler. This approach can transparently verify code that mixes C and assembly. - seL4 separation kernel The seL4 microkernel, which is fully and formally verified at NICTA [@Klein09; @Klein10], is extended into a separation kernel for security-critical domains in [@Murray13]. The information flow security property is formally proved [@Murray12; @Murray13] based on the results of verifying the seL4 kernel [@Klein09; @Klein10]. To provide a separation kernel, they minimally extend seL4 by adding a static partition-based scheduler, and they require that seL4 be configured to prevent asynchronous interrupt delivery to user-space partitions, which would introduce an information channel. 
The priority-based scheduling is changed to a partition scheduling scheme that follows a static round-robin schedule between partitions, with fixed-length time slices per partition, while doing dynamic priority-based round-robin scheduling of threads within each partition. For information flow security, they adopt an extension of von Oheimb’s notion of *nonleakage* [@von04], which is a variant of intransitive noninterference [@Murray12]. Nonleakage is defined as follows. $$\label{eq:sel4} \begin{aligned} \mathbf{nonleakage} \equiv \forall n \; s \; t \; p . \mathbf{reachable} \; s \wedge \mathbf{reachable} \; t \; \wedge \\ s \stackrel{PSched}{\sim} t \wedge s \stackrel{\mathbf{sources} \; n \; s \; p}{\approx} t \longrightarrow s \stackrel{p}{\sim}_n t \end{aligned}$$ It states that for two arbitrary reachable states $s$ and $t$, if the two states agree on the private state of the separation kernel scheduler ($s \stackrel{PSched}{\sim} t$), and each entity in the extent of any partition in the partition set $\mathbf{sources} \; n \; s \; p$ has an identical state in $s$ and $t$ ($s \stackrel{\mathbf{sources} \; n \; s \; p}{\approx} t$), then after performing $n$ transitions from $s$ and $t$, the entities of the partition $p$ in the two resulting states are identical ($s \stackrel{p}{\sim}_n t $). The partition set ($\mathbf{sources} \; n \; s \; p$) includes the partitions that are permitted to send information to a specific partition $p$ when a sequence of $n$ transitions occurs from the state $s$. The security property assures that seL4’s C implementation enforces information flow security (Formula \[eq:sel4\]). Because information flow security is preserved by refinement, it suffices to prove information flow security on seL4’s abstract specification and then conclude that it holds for seL4’s C implementation by the refinement relation between the abstract specification and the implementation proved in [@Klein09; @Klein10]. 
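The two-level scheduling scheme described above (static round-robin over fixed-length partition slices, dynamic priority-based round-robin within a partition) can be sketched as follows. This is an illustration of the scheme only, not seL4’s implementation, and all names are our own.

```python
from collections import deque

# Illustrative two-level scheduler: static round-robin over fixed-length
# partition slices, priority round-robin over threads inside the current
# partition. Not seL4's actual scheduler.
class PartitionScheduler:
    def __init__(self, partitions, slice_ticks):
        # partitions: list of {priority: deque of thread ids}
        self.partitions = partitions
        self.slice_ticks = slice_ticks
        self.cur = 0
        self.remaining = slice_ticks

    def tick(self):
        # Partition switches depend only on elapsed ticks, never on thread
        # activity, so the partition schedule itself leaks no information.
        if self.remaining == 0:
            self.cur = (self.cur + 1) % len(self.partitions)
            self.remaining = self.slice_ticks
        self.remaining -= 1
        runq = self.partitions[self.cur]
        for prio in sorted(runq, reverse=True):   # highest priority first
            if runq[prio]:
                t = runq[prio].popleft()
                runq[prio].append(t)              # round-robin within priority
                return t
        return "idle"                             # no runnable thread here
```

Because partition switches are driven purely by the tick count, a partition cannot modulate when its neighbours run, which is what denies partitions a scheduling channel.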
When proving information flow security on the abstract specification, they simplify the proofs of **nonleakage** by discharging *unwinding conditions*, proof obligations that examine individual execution steps. The unwinding condition, called **confidentiality-u** and given below, is equivalent to **nonleakage**. $$\begin{aligned} \mathbf{confidentiality-u} \equiv \forall p \; s \; t . \mathbf{reachable} \; s \wedge \mathbf{reachable} \; t \;\wedge \\ s \stackrel{p}{\sim} t \; \wedge \; s \stackrel{PSched}{\sim} t \wedge \; (\mathbf{part} \; s \rightsquigarrow p \longrightarrow s \stackrel{\mathbf{part} \; s}{\sim} t ) \longrightarrow s \stackrel{p}{\sim}_1 t \end{aligned}$$ It means that the contents of each partition $p$ after each step depend only on the contents of the following partitions before the step: $p$, $\mathbf{PSched}$ and the currently running partition $\mathbf{part} \; s$ when it is allowed to send information to $p$. In other words, information may flow to $p$ only from $\mathbf{PSched}$ and the current partition in accordance with the information flow policy $\rightsquigarrow$. The information flow policy $p_1 \rightsquigarrow p_2$ holds if the access control policy allows the partition $p_1$ to affect any subject in $p_2$’s extent. This condition has been proven for the execution steps of their transition system in the abstract specification. They state that it is the first complete, formal, machine-checked verification of information flow security for the implementation of a general-purpose microkernel. Unlike previous proofs of information flow security for separation kernels, their verification applies to the actual 8,830 lines of C code of seL4, and thus rules out the possibility of invalidation by implementation errors in this code. The proofs of information flow security are done in Isabelle/HOL in 27,756 lines of proof, and took a total effort of roughly 51 person-months.
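The unwinding-condition style of proof used above can be illustrated on a toy finite system. The following sketch is illustrative only: the two partitions, the actions and the policy are invented, not taken from seL4's model, and on a finite state space the conditions can simply be checked by exhaustive enumeration rather than by deductive proof.

```python
# Toy two-partition system: partition A may send to B, but not vice versa.
# We exhaustively check Rushby-style unwinding conditions over all states.
STATES = [(a, b) for a in range(3) for b in range(3)]  # (A's value, B's value)
ACTIONS = {"a_send": "A", "b_inc": "B"}                # action -> its domain
FLOWS = {("A", "A"), ("B", "B"), ("A", "B")}           # policy: A ~> B only

def step(act, s):
    a, b = s
    if act == "a_send":        # A writes its value into B (allowed flow)
        return (a, a)
    return (a, (b + 1) % 3)    # b_inc: B computes locally, never reads A

def view(u, s):
    # each partition observes only its own value
    return s[0] if u == "A" else s[1]

def locally_respects():
    # an action whose domain may not flow to u leaves u's view unchanged
    return all(view(u, s) == view(u, step(act, s))
               for act, d in ACTIONS.items() for u in ("A", "B")
               if (d, u) not in FLOWS for s in STATES)

def weakly_step_consistent():
    # if u's and dom(a)'s views agree before a step, u's views agree after
    return all(view(u, step(act, s)) == view(u, step(act, t))
               for act, d in ACTIONS.items() for u in ("A", "B")
               if (d, u) in FLOWS
               for s in STATES for t in STATES
               if view(d, s) == view(d, t) and view(u, s) == view(u, t))

print(locally_respects() and weakly_step_consistent())  # prints: True
```

Here output consistency is trivial because the observation function is the view itself. On a real kernel model the same conditions become per-event proof obligations discharged in a theorem prover, since the state space is far too large to enumerate.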
The proofs precisely describe how the general-purpose kernel should be configured to enforce isolation and mandatory information flow control.

- ARINC 653 compliant separation kernels

A trend is to integrate safety and security functionalities into one separation kernel. In order to develop ARINC 653 compliant secure separation kernels, it is necessary to assure the security of the functionalities defined in ARINC 653. In [@zhao16], the authors present a formal specification and security proofs of separation kernels with ARINC 653 channel-based communication in Isabelle/HOL. They provide a mechanically checked formal specification which comprises a generic execution model for separation kernels and an event specification which models all IPC services defined in ARINC 653. A set of information flow security properties and an inference framework sketching out their implications are provided. Finally, they find covert channels that leak information in the ARINC 653 standard and in two open-source ARINC 653 compliant separation kernels, i.e. XtratuM and POK.

- Spatial separation of hypervisors

Hypervisors provide a software virtualization environment in which operating systems can run with the appearance of full access to the underlying system hardware, while in fact such access is under the complete control of the hypervisor. Hypervisors support COTS operating systems and legacy/diverse applications on specific operating systems. Hypervisors for safety and security-critical systems have been widely discussed [@heiser08; @mcder12]. For instance, XtratuM [@crespo10] is a typical hypervisor for safety-critical embedded systems. Similar to separation kernels, hypervisors mainly provide memory separation for hosted operating systems. Address separation protects the memory regions of one execution context by preventing other contexts from accessing these regions.
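A minimal sketch of the kind of check an address translation subsystem must enforce is given below. It is purely illustrative: the constants, names and the two validity rules (no frame inside a protected region, no frame shared between partitions) are hypothetical and not taken from any of the surveyed systems.

```python
PAGE = 0x1000
HV_PROTECTED_BASE = 0x8000_0000  # hypothetical start of hypervisor-only memory

class AddressSpace:
    """Toy per-partition address translation with a separation check:
    guest-reachable frames must stay below the protected region, and the
    destination address spaces of distinct partitions must stay disjoint."""

    def __init__(self):
        self.tables = {}  # partition -> {virtual page: physical frame}

    def map_page(self, part, vpage, pframe):
        # refuse frames that fall inside hypervisor-protected memory
        if pframe * PAGE >= HV_PROTECTED_BASE:
            return False
        # refuse frames already mapped by a different partition
        for other, table in self.tables.items():
            if other != part and pframe in table.values():
                return False
        self.tables.setdefault(part, {})[vpage] = pframe
        return True

    def separated(self):
        # the invariant: disjoint destinations, nothing in protected memory
        seen = set()
        for table in self.tables.values():
            frames = set(table.values())
            if frames & seen or any(f * PAGE >= HV_PROTECTED_BASE for f in frames):
                return False
            seen |= frames
        return True

asid = AddressSpace()
assert asid.map_page("P1", 0, 0x10)         # ordinary mapping accepted
assert not asid.map_page("P2", 0, 0x10)     # frame already owned by P1
assert not asid.map_page("P1", 1, 0x90000)  # frame inside protected memory
assert asid.separated()
```

The two assertions at the end mirror the structure of the properties verified over the surveyed models: the initial state satisfies separation, and every accepted update preserves it.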
Address separation is a crucial property, in essence requiring that disjoint source address spaces be mapped to disjoint destination address spaces. Separation is achieved by an address translation subsystem, and sophisticated address translation schemes use multi-level page tables. Separation kernels can employ shadow paging to isolate critical memory regions from an untrusted guest OS. The kernel maintains its own trusted version of the guest’s page table, called the shadow page table. The guest is allowed to modify its page table; however, the kernel interposes on such modifications and checks that the guest’s modifications do not violate memory separation. A parametric verification technique [@Franklin10; @Franklin12] is able to handle separation mechanisms operating over multi-level data structures of arbitrary size and with an arbitrary number of levels. The authors develop a parametric guarded command language ($PGCL^+$) for modeling hypervisors and a parametric specification formalism, $PTSL^+$, for expressing security policies of separation mechanisms modeled in $PGCL^+$. The separation property states that the physical addresses accessible by the guest OS must be less than the lowest address of the hypervisor-protected memory. Models of Xen and ShadowVisor are created in C and two properties are verified using CBMC (a model checker for C): (1) the initial state of the system ensures separation; (2) if the system starts in a state that ensures separation, executing any of the guarded commands also preserves separation. Hypervisors allow multiple guest operating systems to run on shared hardware and offer a compelling means of improving the security and the flexibility of software systems. In [@Barthe11], the strong isolation properties ensure that an operating system can only read and modify its own memory and that its behavior is independent of the state of other operating systems. The read isolation captures the intuition that no OS can read memory that does not belong to it.
The write isolation captures the intuition that an OS cannot modify memory that it does not own. The OS isolation captures the intuition that the behavior of any OS does not depend on the states of other OSs. They formalize in the Coq proof assistant an idealized model of a hypervisor and formally establish that the hypervisor ensures the strong isolation properties. Xenon [@mcder08] is a high-assurance separation hypervisor built by the Naval Research Laboratory by re-engineering the Xen open-source hypervisor. Information flow security has been proposed for the Xenon hypervisor [@mcdermott08] as the basis for formal policy-to-code modeling and as evidence for a CC security evaluation. Their security policy is an independence policy [@roscoe94], which is preserved by refinement. Considering that the original independence policy is defined in a purely event-based formalism that does not directly support refinement into state-rich implementations like hypervisor internals, they use the $\mathsf{Circus}$ language to formalize the security policy. The Xenon security policy defines separation between *Low* and *High* as the independence of *Low*’s view from anything *High* might do. *Low* and *High* are domains that contain the guest operating systems hosted by Xenon. *High* guest operating systems can not only perform all possible sequences of *High* events, including event sequences a well-behaved user would not generate, but can also arbitrarily refuse to perform any of them. If this kind of arbitrary behavior by the *High* part of the system cannot cause the *Low* part of the system to behave in a non-deterministic way, *High* cannot influence what *Low* sees and there are no information flows from *High* to *Low*. The formal security policy model is in heuristic use for re-engineering the internal design of Xen into the internal design of Xenon.
Mechanical proofs of the refinement between the $\mathsf{Circus}$ security policy model and the Xenon implementation have not been constructed.

- A General Verification Approach for Spatial Separation Properties

From the literature on spatial separation verification, we can see that spatial separation properties are mostly verified formally using the theorem proving technique. The data separation properties and the GWV policy are formulated on individual execution steps of the system, observing the pre- and post-conditions of each step. They use the $next$ function (see Equations \[eq:mask\_comm\], \[eq:mask\_comm2\] and \[eq:gwv\]) to represent one individual execution step. Properties of noninterference, nonleakage and noninfluence are expressed in terms of sequences of actions and state transitions. In order to verify the security of systems, the standard proofs of information flow security properties are discharged by proving a set of unwinding conditions [@rushby92] that examine individual execution steps of the system. The unwinding theorem [@rushby92] for security policies says that if the system is *output consistent*, *weakly step consistent* and *locally respects* $\rightsquigarrow$, then the system is secure for the policy $\rightsquigarrow$. These three conditions are called *unwinding conditions*. The unwinding theorem simplifies the security proofs by decomposing the global properties into unwinding conditions on each execution step. The three unwinding conditions are as follows, and the unwinding theorem states that $output\_consistent \wedge weakly\_step\_consistent \wedge locally\_respect \longrightarrow noninterference$.
$$\begin{aligned} output\_consistent \equiv s \stackrel{u}{\sim} t \longrightarrow s \stackrel{u}{\bumpeq} t \end{aligned}$$ $$\begin{aligned} weakly\_step\_consistent \equiv dom(a) \rightsquigarrow u \wedge s \stackrel{dom(a)}{\sim} t \\ \wedge s \stackrel{u}{\sim} t \longrightarrow step(a,s) \stackrel{u}{\sim} step(a,t) \end{aligned}$$ $$\begin{aligned} locally\_respect \equiv \neg (dom(a) \rightsquigarrow u) \longrightarrow s \stackrel{u}{\sim} step(a,s) \end{aligned}$$ The general proofs of information flow security properties and unwinding conditions are available in [@rushby92; @von04], and an application of them to a concrete separation kernel is available in [@zhao16].

### Temporal Separation Verification

Temporal separation ensures that the services provided by shared resources to applications in a partition cannot be affected by applications in other partitions. This includes the performance of the resources concerned, as well as the rate, latency, jitter, and duration of scheduled access to them [@Rushby00]. Temporal separation becomes critical when applied in safety-critical systems. The scheduler of a separation kernel implements temporal separation, since it is responsible for assigning processor time to partitions. Temporal separation requires a two-level scheduler, at the partition level and the process level, according to the ARINC 653 standard. The literature mainly deals with two issues for temporal separation: schedulability analysis of two-level scheduling and correct implementation of the scheduler. The first usually uses a compositional approach to formally specify and analyze the schedulability of real-time applications running under two-level scheduling; recent work is discussed in [@Carnevali11; @Carnevali13]. It considers the applications but not the separation kernels. Since our survey concerns the verification of separation kernels, only the second issue is discussed here.
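The two-level scheduling scheme just described can be sketched in a few lines. The following model is illustrative only: the partition names, slice length and priority rule are invented and do not correspond to any surveyed kernel.

```python
from collections import deque

class TwoLevelScheduler:
    """Partition-level static round robin with fixed time slices;
    priority-based selection of threads inside the running partition."""

    def __init__(self, partitions, slice_ticks):
        # partitions: {name: [(thread_name, priority), ...]} -- static config
        self.order = deque(partitions)   # fixed cyclic partition order
        self.partitions = partitions
        self.slice_ticks = slice_ticks
        self.remaining = slice_ticks

    def tick(self):
        # global level: switch partition when the fixed slice expires
        if self.remaining == 0:
            self.order.rotate(-1)
            self.remaining = self.slice_ticks
        self.remaining -= 1
        part = self.order[0]
        threads = self.partitions[part]
        if not threads:
            return (part, None)  # idle inside this partition's window
        # local level: highest-priority runnable thread of that partition
        name, _prio = max(threads, key=lambda t: t[1])
        return (part, name)

sched = TwoLevelScheduler({"P1": [("t1", 5), ("t2", 9)],
                           "P2": [("t3", 1)]}, slice_ticks=2)
print([sched.tick() for _ in range(4)])
# [('P1', 't2'), ('P1', 't2'), ('P2', 't3'), ('P2', 't3')]
```

Note that which thread runs inside P1 has no effect on when P2's window begins: the partition-level schedule is fixed by configuration, which is the essence of temporal separation. The verification problem surveyed below is showing that a kernel's actual scheduler code conforms to such a model.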
- Honeywell DEOS scheduler

The Honeywell Dynamic Enforcement Operating System (DEOS) is a microkernel-based real-time operating system that supports flexible IMA applications by providing both space partitioning at the process level and time partitioning at the thread level. Model checking and theorem proving approaches have been applied to the DEOS scheduler to analyze the temporal separation property [@Penix00; @Penix05; @Ha04]. A core slice of the DEOS scheduling kernel, containing 10 classes and over 1,000 lines of actual code, is first translated without abstraction from C++ into Promela, the input language of the Spin model checker. The temporal partitioning property of the DEOS scheduler is that each thread in the kernel is guaranteed access to its complete CPU budget during each scheduling period. They use two approaches to analyze the time partitioning properties in the DEOS kernel. The first is to place assertions over program variables to identify potential errors. The second is to use a liveness property, *Idle Execution*, expressed in LTL. The liveness property, specified as $ \mathrm{[\;](beginperiod -> (! \; endperiod \; U \; idle))}$, states that if there is slack in the system (i.e., the main thread does not have 100% CPU utilization), the idle thread should run during every longest period. This is a necessary condition of time partitioning. The size and complexity of this system limit them to analyzing only one configuration at a time. To overcome this limitation and generalize the analysis to arbitrary configurations, they turned to the theorem proving approach and used the PVS theorem prover to analyze the DEOS scheduler [@Ha04]. They model the operations of the scheduler in PVS and the execution timeline of DEOS using a discrete-time state-transition system. Properties of time partitioning (TP) are formulated as predicates on the set of states and proved to hold for all reachable states.
The corresponding PVS proofs consist of the base step and the inductive step as follows. $$init\_invariant: init(s) \longrightarrow TP(s)$$ $$transition\_invariant: TP(ps) \wedge transition(ps, s) \longrightarrow TP(s)$$ The $TP$ predicate is defined as follows. $$\begin{aligned} good&Commitment(s,period) \equiv \\ & commitment(s,period) \leq remainTime(s,period)\end{aligned}$$ $$\begin{aligned} TP(s,period) \equiv & goodCommitment(s,period) \ \vee \\ & \forall t. \ period \ (threadWithId(s,t)) \leq period \\ & \longrightarrow satisfied(s,t)\end{aligned}$$ $$TP(s) \equiv \forall period. \ TP(s,period)$$ The entire collection of theories has a total of 1,648 lines of PVS code and 212 lemmas. In addition to the inductive proofs of the time partitioning invariants, they use a feature-based technique to model state-transition systems and formulate inductive invariants. This technique facilitates an incremental approach to theorem proving that scales well to models of increasing complexity.

- A two-level scheduler for VxWorks kernel

In [@Asberg11], a hierarchical scheduler executing in the Wind River VxWorks kernel is modeled using task automata and model checked using the Times tool. The two-level hierarchical scheduler uses periodic/polling servers (PS) and fixed-priority preemptive scheduling (FPPS) of periodic tasks for integrating real-time applications. In their framework, the *Global scheduler* is responsible for distributing the CPU capacity to the servers (the schedulable entities of a subsystem). Servers are allocated a defined time (budget) in every predefined period. Each server comprises a *Local scheduler* which schedules the workload inside it, i.e. its tasks, when the server is selected for execution by the global scheduler. They use task automata [@Fersman07] (timed automata with tasks), supported by the Times tool, to model the global scheduler, the event handler, and each local scheduler for partitions.
The event handler decouples the global scheduler from the variability of the partition count. They specify 5 and 4 properties in TCTL (Timed Computation Tree Logic) for the global and local schedulers, respectively.

- An ARINC653 scheduler modeled in AADL

In [@Singhoff07], AADL (Architecture Analysis and Design Language) is used to model an ARINC 653 hierarchical scheduler for critical systems, and Cheddar is used to perform scheduling simulation analysis on AADL specifications with hierarchical schedulers. AADL is a textual and graphical language supporting model-based engineering of embedded real-time systems that has been approved and published as an SAE standard. Cheddar is a set of Ada packages which aim at performing analysis of real-time applications. The Cheddar language allows the designer to define new schedulers in the Cheddar framework. In their ARINC 653 two-level hierarchical scheduling, the first-level static scheduling is fixed at design time, and the second scheduling level is related to task scheduling, where tasks of a given partition are scheduled with a fixed-priority scheduler. In the AADL model, the ARINC 653 kernel, partitions, and tasks are modeled as a processor, processes, and threads, respectively. The specific Cheddar properties are added to the AADL model in order to describe the behavior of each AADL component in the Cheddar language and to apply real-time scheduling analysis tools. The behavior of each scheduler is modeled as a timed automaton in the Cheddar language. With the meta-CASE tool Platypus, they have designed a meta-model of Ada 95 for Cheddar and a model of the Cheddar language. From these models, they generate Ada packages which are part of the Cheddar scheduling simulation engine. These Ada packages implement a Cheddar program compiler and interpreter. Scheduling simulation analysis is then performed on AADL specifications with hierarchical schedulers.
- A two-level scheduler for RTSJ

The Real-Time Specification for Java (RTSJ) is a set of interfaces and behavioral specifications that allow for real-time programming in the Java programming language. It is modified to allow applications to implement a two-level scheduling mechanism where the first level is the RTSJ priority scheduler and the second level is under application control [@Zerzelidis06b; @Zerzelidis10]. They also verify the two-level scheduler for RTSJ using timed automata in the UPPAAL tool [@Zerzelidis06]. The *Thread*, *BaseScheduler* (global scheduler), *EDFScheduler* (local scheduler) and other components are represented by timed automata. Five properties are verified on their model. Three of them check the correctness of the model: (1) a thread’s priority never takes an invalid value, (2) no thread can block due to locking after it starts, and (3) the system always selects a thread to run with a higher absolute preemption level (APL) than the system ceiling, unless the selected thread is either currently locking a resource with a higher ceiling than its APL or a thread that has just been released. The other two are liveness and deadlock-freedom properties stating that the system is livelock-free and can never deadlock.

Summary {#sec:summary}
=======

Comparison of Related Work
--------------------------

We summarize the research work on formal specification and verification of separation kernels in Table \[tbl:comparison\_tab\]. In this table, “$\divideontimes$” means that the evidence for the data is not available, and empty cells mean that the feature is not considered in the work. We compare seven features. The column “Target Kernel” is the object specified or verified in each work. The “Objective” shows the concerns of each work, in which *Specification* indicates that the work concentrates on formally specifying/developing/modeling separation kernels and *Verification* on formally verifying them.
Some work aims at both aspects together. The “Property” indicates the policies or properties specified or verified in each work. The “Formal Language” indicates the formal language used when specifying or verifying the separation kernels. The “Approach” indicates the formal specification or verification approaches used. The “Size” shows the scale of the formal specification or verification proofs. The “Tools” shows the software tools used in each work.

| Related Work | Target Kernel | Objective | Property | Formal Language | Approach | Size | Tools |
|---|---|---|---|---|---|---|---|
| Department of Defense [@Martin00; @Martin02] | MASK separation kernel | Specification, Verification | Data separation | SPECWARE | Refinement, Theorem proving | $\divideontimes$ | SPECWARE environment |
| GWV, GWVr2 and extensions [@Greve03; @Rushby04; @Alves04; @Tverdy11; @Richards10] | Applicable to generic separation kernels | Specification | Information flow security | ACL2, PVS | Theorem proving | $\divideontimes$ | ACL2, PVS theorem prover |
| Naval Research Lab [@Heitmeyer06; @Heitmeyer08] | ED separation kernel | Specification, Verification | Data separation | TAME | Refinement, Theorem proving | 368 LOC of TAME spec. | TAME, PVS theorem prover |
| Craig’s book [@Craig06; @Craig07] | A separation kernel | Specification | | Z notation | Refinement | $\approx$100 pages | By hand |
| Verified software project [@Velykis09; @Velykis10] | A separation kernel | Specification, Verification | PIFP | Z notation | Refinement, Theorem proving | $\approx$50 pages | Z/Eves prover |
| Critical Software company [@Andre09] | A secure partitioning kernel | Specification | PIFP | B | Refinement | $\divideontimes$ | Atelier B |
| OS-K [@Kawamorita10] | A separation-kernel-based OS | Specification, Verification | | B, Promela | Refinement, Theorem proving, Model checking | 2,700 proof obligations | B4free, Spin |
| ARINC 653 [@zhao15] | ARINC 653 standard | Specification | Safety | Event-B | Refinement, Theorem proving | 2,700 LOC of Event-B spec. | RODIN |
| EURO-MILS [@verb15] | PikeOS separation kernel | Specification | PIFP | HOL | Theorem proving | >4,000 LOC of spec. | Isabelle/HOL |
| Rockwell Collins [@Greve04; @Wilding10] | AAMP7G microprocessor | Verification | GWV | ACL2 | Theorem proving | 3,000 LOC of ACL2 definitions | ACL2 theorem prover |
| Verisoft XT project [@Baumann11] | PikeOS separation kernel | Verification | Data separation | Annotated C code | Theorem proving | $\divideontimes$ | VCC tool |
| SYSGO AG [@Tverdy11] | PikeOS separation kernel | Verification | GWV | HOL | Theorem proving | $\divideontimes$ | Isabelle/HOL theorem prover |
| Green Hills [@Richards10] | INTEGRITY-178B separation kernel | Verification | GWVr2 | ACL2 | Theorem proving | $\divideontimes$ | ACL2 theorem prover |
| PROSPER [@Dam13] | A simple ARM-based separation kernel | Verification | Information flow security | HOL4 | Theorem proving | 21,000 LOC | HOL4 |
| NICTA [@Murray12; @Murray13] | seL4 separation kernel | Verification | Information flow security | HOL | Theorem proving | 4,970 LOC of spec., 27,756 LOC of proof | Isabelle/HOL theorem prover |
| CMU [@Franklin10; @Franklin12] | Hypervisor (Xen and ShadowVisor) | Specification, Verification | Data separation | $PGCL^+$, $PTSL^+$ | Model checking | $\divideontimes$ | CBMC |
| VirtualCert project [@Barthe11] | A simple hypervisor (from Microsoft Hyper-V) | Verification | Data separation | Coq | Theorem proving | 21,000 LOC | Coq proof assistant |
| Securify project [@zhao16] | ARINC 653 separation kernel | Specification, Verification | Information flow security | HOL | Theorem proving | 1,000 LOC of spec., 7,000 LOC of proof | Isabelle/HOL |
| Xenon project [@mcdermott08] | Xenon hypervisor | Specification | PIFP | $\mathsf{Circus}$ language | Refinement, Theorem proving | $\approx$4,500 pages | CZT $\mathsf{Circus}$ tools |
| Honeywell [@Penix00; @Penix05] | DEOS scheduler | Verification | Temporal separation | Promela | Model checking | $\divideontimes$ | Spin |
| Honeywell [@Ha04] | DEOS scheduler | Verification | Temporal separation | PVS | Theorem proving | 1,648 LOC of PVS, 212 lemmas | PVS theorem prover |
| Malardalen Univ. (Sweden) [@Asberg11] | A two-level scheduler for VxWorks kernel | Verification | Temporal separation | Task automata, TCTL | Model checking | $\divideontimes$ | Times tool |
| Brest Univ. (France) [@Singhoff07] | An ARINC653 scheduler | Verification | Temporal separation | AADL, Cheddar language | Simulation analysis | $\divideontimes$ | Cheddar |
| York Univ. (UK) [@Zerzelidis06] | A two-level scheduler for RTSJ | Verification | Temporal separation | Timed automata | Model checking | $\divideontimes$ | UPPAAL tool |

\[tbl:comparison\_tab\]

Discussion and Issues
---------------------

### Relationship of Security Properties

We have classified the properties of separation kernels into four categories: data separation, information flow security, temporal separation and fault isolation. The relationship among these properties is very important for the formal specification and verification of separation kernels, and we discuss it here. The separation security properties infiltration, mediation and exfiltration [@Alves11] can be represented by the GWV separation axiom [@Greve03]. Exfiltration specifies that an executing partition cannot write to or modify the private data of other partitions. Mediation specifies that an executing partition cannot use the private data of one partition to modify the private data of other partitions. Infiltration specifies that an executing partition cannot read the private data of other partitions. The GWV policy implies the basic separation axioms of MASK [@Martin00; @Martin02]. The MASK data separation properties consider the dependency of data in different partitions indirectly: they are based on a shared memory through which partitions influence each other via external events. The MASK data separation properties can be represented by the GWV policy, except for the Temporal Separation Property.
The No-Exfiltration property is a special case of the exfiltration theorem in [@Greve03] without the $dia$ function. The No-Infiltration property is equivalent to the infiltration theorem in [@Greve03] on different abstract models. The Separation of Control property means that one execution step in a partition cannot affect data in other partitions. Its external event may affect the shared memory, but not memory areas in other partitions. It is a special case of the exfiltration theorem in [@Greve03] in the situation where partitions exchange data indirectly through the shared memory. For the Kernel Integrity property, the shared memory is the data area of a special partition; then one internal execution step of other partitions cannot affect this shared memory. This is a special case of the exfiltration theorem in [@Greve03]. Noninfluence is semantically equal to the conjunction of noninterference and nonleakage [@von04]. GWV is stronger than noninterference [@Bond14]. Finally, the shared resources and communication channels among partitions can affect the scheduling in separation kernels, but the relationship between spatial separation and temporal separation is complicated and not yet clear; it needs further study.

### Information Flow Security Policy

The GWV policy proposed by Rockwell Collins has been considered as the security policy to provide evidence for CC evaluation and has been used in the verification of industrial separation kernels, such as the AAMP7G microprocessor, the INTEGRITY-178B separation kernel and the PikeOS separation kernel. The separation security policies infiltration, mediation and exfiltration [@Alves11] can be represented by the GWV separation axiom [@Greve03]. GWV is stronger than noninterference [@Goguen82] and supports intransitive noninterference [@rushby92], as proved in [@Alves04].
As an industrially applicable and practically proven security policy, the GWV policy is a useful property for verifying separation kernels, and proving the policy can be considered a trusted way toward certification.

### Theorem Proving vs. Model Checking Separation Kernels

From Table \[tbl:comparison\_tab\], we can see that most verification work on spatial separation uses the theorem proving approach. The reasons are: (1) separation kernels for safety and security-critical systems need fully formal verification, while the model checking approach is not competent because of its state space explosion problem; (2) separation kernels are usually small, with only thousands of lines of source code, which makes full verification possible and allows the theorem proving approach to be applied without too much cost; (3) it is difficult to represent separation properties of separation kernels in the property languages of model checking, such as LTL and CTL; (4) the theorem proving approach to verifying operating system kernels has exhibited good results. For instance, more than 140 bugs were found in the project of verifying the seL4 kernel. In contrast to the theorem proving approach used for spatial separation, verifying temporal separation usually uses the model checking approach. The reason is that it is difficult to express *time* in the logics of theorem provers, whereas *time* can be conveniently represented in model checkers, for example by the timed automata of the UPPAAL tool. The problem with model checking temporal separation is that the size and complexity of separation kernels limit the approach to analyzing only one configuration at a time. The global scheduler is verified together with the local scheduler, and the verification result relies on the number of partitions. Honeywell faced this problem and used the PVS theorem prover to analyze the DEOS scheduler [@Ha04].
Our opinion is that verifying temporal separation needs more study of the theorem proving approach in the future. The capability and automation of specification and verification systems play key roles in enforcing the security of separation kernels. Theorem provers, such as Isabelle/HOL, HOL4 and PVS, have been applied in the formal verification of spatial separation properties. The expressiveness of the formal notations in these provers is sufficient for spatial separation; a shortcoming is the low degree of verification automation. In the model checking approach, efforts have been made toward automatic formal verification of spatial separation properties of secure systems. Security policies are classified in [@Clarkson10]: information flow security properties are not trace properties, but hyperproperties. A prototype model checker for hyperproperties has been developed in [@Clarkson14] using the OCaml programming language. The prototype is very preliminary and currently does not scale beyond 1,000 states; it is not yet applicable even to the formal verification of abstract specifications of separation kernels. Thus, automatic formal verification of separation kernels is an attractive direction for the future.

### Correctness of Separation Kernels

As studied in [@Baumann10], correctness properties of the PikeOS kernel are formulated as a simulation relation between the concrete system and an abstract model. As well as the functional properties, correctness properties of address translation and memory separation, techniques to handle assembly code, and assumptions on various components such as the compiler, hardware and implementation policies are identified as ingredients of operating system kernel correctness.
For separation kernels, the paper [@Velykis09] has summarized the separation kernel requirements according to the original definition [@Rushby81] and the SKPP extensions [@SKPP07], which include functionality and security, separation and information flow, configuration, the principle of least privilege, memory management, execution and scheduling, and platform considerations. We consider that properties of security, separation, information flow, memory, scheduling, etc., are typical and important correctness properties of separation kernels, and there are still other correctness properties to be taken into account.

### Developing Correct Separation Kernels

The two primary approaches to developing correct separation kernels are (1) formal development from a top-level specification to a low-level implementation by refinement and (2) formal verification of a low-level implementation against its specification. In formal methods, refinement is the verifiable transformation of a high-level formal specification into a low-level implementation and then into executable code. B [@abrial96] and Z [@Wood96] are typical formal development methods for software. Correctness of the models at different abstraction levels and correspondence between the models of two neighboring levels assure the correctness of the design. Certifiable code generation guarantees the correspondence between the low-level implementation and the source code. The work in [@Andre09; @Velykis09; @Velykis10; @Craig07] employs this approach to develop correct separation kernels. Due to successful applications in industrial projects [@Woodcock09; @Abrial06], formal development of separation kernels by refinement into the low-level implementation can alleviate the manual review between the design and the implementation in safety and security certification. In formal verification of separation kernels, EAL 7 of the CC does not enforce formal verification at the source code level.
Therefore, many verification efforts on separation kernels are carried out on abstract or low-level models. Correspondence between the model and the source code of the implementation is then typically established by code review and is not formally assured, as in the CC evaluation of the INTEGRITY-178B separation kernel [@Richards10]. Other work, such as [@Heitmeyer06; @Heitmeyer08; @Baumann11], annotates the source code of separation kernels for formal verification. The work in [@Murray12; @Murray13; @Ha04] translates the source code manually or automatically into the formal languages of theorem provers for reasoning, while [@Greve04; @Wilding10; @Dam13] verify separation kernels at the binary or assembly code level. As illustrated by the project verifying the seL4 kernel, fully formal verification yields better results and lower certification cost (for example for EAL 7 certification) [@Klein09]. Given this feasibility and these successful experiences, we recommend fully formal verification at the source code level, and formal development based on B, Z and other formal methods for building new separation kernels.

Conclusion {#sec:conclude}
==========

In this paper, we surveyed the research work on formal specification and verification of separation kernels, covering the concepts, security policies, properties, formal specification and formal verification approaches. We aimed to present the framework and focus of related work rather than its details. Future work includes the formal comparison of correctness properties, a formal model for separation kernels and efforts toward fully formal verification.

[10]{} Rushby J. Design and verification of secure systems. , 1981, 15(5):12–21. Alves-Foss J, Oman PW, Taylor C, and Harrison WS. The mils architecture for high-assurance embedded systems. , 2006, 2(3-4):239–247. Denning DE. A lattice model of secure information flow. , 1976, 19(5):236–243. Gjertsen T and Nordbotten NA.
Multiple independent levels of security (mils)–a high assurance architecture for handling information of different classification levels. Technical report, FFI report, 2008. Airlines Electronic Engineering Committee et al. Arinc 653 - avionics application software standard interface. 2003. Wind river vxworks mils platform. Technical report, 2013. Safety-critical products: Integrity-178b real-time operating system. Technical report, Green Hills Software, Inc., 2005. Lynxsecure: software security driven by an embedded hypervisor. Technical report, LynuxWorks, Inc., 2012. Lynxos-se: Time- and space-partitioned rtos with open-standards apis. Technical report, LynuxWorks, Inc., 2008. Kaiser R and Wagner S. The pikeos concept – history and design. Technical report, SYSGO AG, 2007. Delange J and Lec L. Pok, an arinc653-compliant operating system released under the bsd license. In: [*Proceedings of 13th Real-Time Linux Workshop*]{}, 2011. Masmano M, Ripoll I, Crespo A, and Metge J. Xtratum: a hypervisor for safety critical embedded systems. In: [*Proceedings of 11th Real-Time Linux Workshop*]{}, 2009. Woodcock J, Larsen PG, Bicarregui J, and Fitzgerald J. Formal methods: Practice and experience. , 2009, 41(4):19:1–19:36. National Security Agency. , 3.1 r4 edition, 2012. U.S. government protection profile for separation kernels in environments requiring high robustness. Technical report, National Security Agency, June 2007. Federal Aviation Authority. Software considerations in airborne systems and equipment certification, 1992. document no. Technical report, RTCA/DO-178B, RTCA, Inc. Federal Aviation Authority. Software considerations in airborne systems and equipment certification, 2011. document no. Technical report, RTCA/DO-178C, RTCA, Inc. Wilding MM, Greve DA, Richards RJ, and Hardin DS. Formal verification of partition management for the aamp7g microprocessor.
In: [*Design and Verification of Microprocessor Systems for High-Assurance Applications*]{}, Springer, 2010, 175–191. Baumann C, Beckert B, Blasum H, and Bormer T. Formal verification of a microkernel used in dependable software systems. In: [*Computer Safety, Reliability, and Security*]{} Springer, 2009, 187–200. Baumann C and Bormer T. Verifying the pikeos microkernel: first results in the verisoft xt avionics project. In: [*Proceedings of Doctoral Symposium on Systems Software Verification*]{}, 2009, 20–20. Baumann C, Beckert B, Blasum H, and Bormer T. Ingredients of operating system correctness. In: [*Proceedings of Embedded World Conference*]{}, 2010. Baumann C, Bormer T, Blasum H, and Tverdyshev S. Proving memory separation in a microkernel by code level verification. In: [*Proceedings of 14th IEEE International Symposium on Object/Component/Service-Oriented Real-Time Distributed Computing Workshops*]{}, 2011, 25–32. Richards RJ. Modeling and security analysis of a commercial real-time operating system kernel. In: [*Design and Verification of Microprocessor Systems for High-Assurance Applications*]{}, Springer, 2010, 301–322. Heitmeyer CL, Archer M, Leonard EI, and McLean J. Formal specification and verification of data separation in a separation kernel for an embedded system. In: [*Proceedings of the 13th ACM conference on Computer and communications security*]{}, 2006, 346–355. Heitmeyer CL, Archer M, Leonard EI, and McLean J. Applying formal methods to a certifiably secure software system. , 2008, 34(1):82–98. Penix J, Visser W, Engstrom E, Larson A, and Weininger N. Verification of time partitioning in the deos scheduler kernel. In: [*Proceedings of the 22nd international conference on Software engineering*]{}, 2000, 488–497. Penix J, Visser W, Park S, Pasareanu C, Engstrom E, Larson A, and Weininger N. Verifying time partitioning in the deos scheduling kernel. , 2005, 26(2):103–135. Ha V, Rangarajan M, Cofer D, Rues H, and Dutertre B. 
Feature-based decomposition of inductive proofs applied to real-time avionics software: An experience report. In: [*Proceedings of the 26th International Conference on Software Engineering*]{}, 2004, 304–313. Bulkeley W. Crash-proof code. , 2011, 114(3):53–54. Klein G, Elphinstone K, Heiser G, Andronick J, Cock D, Derrin P, Elkaduwe D, Engelhardt K, Kolanski R, Norrish M, et al. sel4: Formal verification of an os kernel. In: [*Proceedings of the ACM SIGOPS 22nd symposium on Operating systems principles*]{}, 2009, 207–220. Klein G, Andronick J, Elphinstone K, Heiser G, Cock D, Derrin P, Elkaduwe D, Engelhardt K, Kolanski R, Norrish M, et al. sel4: formal verification of an operating-system kernel. , 2010, 53(6):107–115. Klein G. Operating system verification - an overview. , 2009, 34(1):27–69. Ames SR, Gasser M, and Schell RR. Security kernel design and implementation: An introduction. , 1983, 16(7):14–22. Mark V, William B, Ben C, Jahn L, Carol T, and Gordon U. Mils:architecture for high-assurance embedded computing. , 2005, 12–16. Protection profile for partitioning kernels in environments requiring augmented high robustness. Technical report, The Open Group, 2003. Parr GR and Edwards R. Integrated modular avionics. , 1999, 1(2):72 – 75. Levin TE, Irvine CE, Weissman C, and Nguyen TD. Analysis of three multilevel security architectures. In: [*Proceedings of the 2007 ACM workshop on Computer security architecture*]{}, 2007, 37–46. Rushby J. Partitioning in avionics architectures: Requirements, mechanisms, and assurance. Technical report, DTIC Document, 2000. Leiner B, Schlager M, Obermaisser R, and Huber B. A comparison of partitioning operating systems for integrated systems. In: [*Computer Safety, Reliability, and Security*]{}, 2007, 342–355. Ramamritham K and Stankovic JA. Scheduling algorithms and operating systems support for real-time systems. , 1994, 82(1):55–67. Popek GJ and Goldberg RP. Formal requirements for virtualizable third generation architectures. 
, 1974, 17(7):412–421. Goguen JA and Meseguer J. Security policies and security models. In: [*Proceedings of IEEE Symposium on Security and privacy*]{}, 1982. Martin W, White P, Taylor FS, and Goldberg A. Formal construction of the mathematically analyzed separation kernel. In: [*Proceedings of IEEE International Conference on Automated Software Engineering*]{}, 2000, 133–141. Martin WB, White PD, and Taylor FS. Creating high confidence in a separation kernel. , 2002, 9(3):263–284. Murray T, Matichuk D, Brassil M, Gammie P, and Klein G. Noninterference for operating system kernels. In: [*Certified Programs and Proofs*]{}, Springer, 2012, 126–142. Greve D and Wilding M. A separation kernel formal security policy. In: [*Proceedings of the ACL2 Workshop*]{}, 2003. Alves-foss J and Taylor C. An analysis of the gwv security policy. In: [*Proceedings of the ACL2 Workshop*]{}, 2004. Integrity-178b separation kernel security target. Technical report, Green Hills Software, 2008. Greve D, Richards R, and Wilding M. A summary of intrinsic partitioning verification. In: [*Proceedings of the ACL2 Workshop*]{}, 2004. Greve D. Information security modeling and analysis. In: [*Design and Verification of Microprocessor Systems for High-Assurance Applications*]{}, Springer, 2010, 249–299. Rushby J. A separation kernel formal security policy in pvs. Technical report, CSL Technical Note, SRI International, 2004. Tverdyshev S. Extending the gwv security policy and its modular application to a separation kernel. In: [*NASA Formal Methods*]{}, Springer, 2011, 391–405. Greve D, Wilding M, and Vanfleet WM. High assurance formal security policy modeling. In: [*Proceedings of the 17th Systems and Software Technology Conference*]{}, 2005. Rushby J. Noninterference, transitivity, and channel-control security policies. Technical report, SRI International, Computer Science Laboratory, 1992. Oheimb D. Information flow control revisited: Noninfluence= noninterference+ nonleakage. 
In: [*Proceedings of 9th European Symposium on Research Computer Security*]{}, 2004, 225–243. Mantel H and Sabelfeld A. A generic approach to the security of multi-threaded programs. In: [*Proceedings of the 14th IEEE Workshop on Computer Security Foundations*]{}, 2001, 126–142. Murray T, Matichuk D, Brassil M, Gammie P, Bourke T, Seefried S, Lewis C, Gao X, and Klein G. sel4: from general purpose to a proof of information flow enforcement. In: [*Proceedings of 34th IEEE Symposium on Security and Privacy*]{}, 2013, 415-429. Ramirez A, Schmaltz J, Verbeek F, Langenstein B, and Blasum H. On two models of noninterference: Rushby and greve, wilding, and vanfleet. In: [*Proceedings of Computer Safety, Reliability, and Security*]{}, 2014, 246–261. Craig I. . Springer, 2006. Craig I. . Springer, 2007. Abrial JR, Schuman S, and Meyer B. Specification language. In: [*On the Construction of Programs*]{}, Cambridge University Press, 1980, 343–410. Velykis A and Freitas L. Formal modelling of separation kernel components. In: [*Proceedings of the 7th International colloquium conference on Theoretical aspects of computing*]{}, 2010, 230–244. Velykis A. Formal modelling of separation kernels. Master’s thesis, Department of Computer Science, University of York, 2009. Jones C, O’Hearn P, and Woodcock J. Verified software: A grand challenge. , 2006, 39(4):93–95. Woodcock J and Davies J. . Prentice-Hall, 1996. Abrial JR. . Cambridge University, 1996. André P. Assessing the formal development of a secure partitioning kernel with the b method. In: [*Proceedings of ESA Workshop on Avionics Data, Control and Software Systems*]{}, 2009. Leuschel M and Butler M. Prob: A model checker for b. In: [*Proceedings of International Symposium of Formal Methods Europe*]{}, 2003, 855–874. Kawamorita K, Kasahara R, Mochizuki Y, and Noguchi K. Application of formal methods for designing a separation kernel for embedded systems. , 2010, 506–514. Zhao Y, Yang Z, Sanan D, and Liu Y. 
Event-based formalization of safety-critical operating system standards: An experience report on arinc 653 using event-b. In: [*Proceedings of IEEE 26th International Symposium on Software Reliability Engineering*]{}, 2015, 281–292. Abrial JR and Hallerstede S. Refinement, decomposition, and instantiation of discrete models: Application to event-b. , 2007, 77(1-2):1–28. Verbeek F, Tverdyshev S, Havle O, Blasum H, Langenstein B, et al. Formal specification of a generic separation kernel. , 2014. Verbeek F, Havle O, Schmaltz J, Tverdyshev S, Blasum H, et al. Formal api specification of the pikeos separation kernel. In: [*NASA Formal Methods*]{}, Springer, 2015, 375–389. Kaiser R and Wagner S. Evolution of the pikeos microkernel. In: [*Proceedings of First International Workshop on Microkernels for Embedded Systems*]{}, 2007, 50–50. Baumann C, Beckert B, Blasum H, and Bormer T. Better avionics software reliability by code verification. In: [*Proceedings of Embedded World Conference*]{}, 2009. Dam M, Guanciale R, Khakpour N, Nemati H, and Schwarz O. Formal verification of information flow security for a simple arm-based separation kernel. In: [*Proceedings of the ACM SIGSAC Conference on Computer & Communications Security*]{}, 2013, 223–234. Zhao Y, Sanan D, Zhang F, and Liu Y. Reasoning about information flow security of separation kernels with channel-based communication. In: [*Proceedings of the 22nd International Conference on Tools and Algorithms for the Construction and Analysis of Systems*]{}, 2016, 791–810. Heiser G. The role of virtualization in embedded systems. In: [*Proceedings of the 1st workshop on Isolation and integration in embedded systems*]{}, 2008, 11–16. McDermott J, Montrose B, Li M, Kirby J, and Kang M. Separation virtual machine monitors. In: [*Proceedings of the 28th Annual Computer Security Applications Conference*]{}, 2012, 419–428. Crespo A, Ripoll I, and Masmano M. Partitioned embedded architecture based on hypervisor: the xtratum approach. 
In: [*Proceedings of European Dependable Computing Conference (EDCC)*]{}, 2010, 67–72. Franklin J, Chaki S, Datta A, and Seshadri A. Scalable parametric verification of secure systems: How to verify reference monitors without worrying about data structure size. In: [*Proceedings of the 2010 IEEE Symposium on Security and Privacy*]{}, 2010, 365–379. Franklin J, Chaki S, Datta A, McCune J, and Vasudevan A. Parametric verification of address space separation. In: [*Principles of Security and Trust*]{}, Springer, 2012, 51–68. Barthe G, Betarte G, Campo JD, and Luna C. Formally verifying isolation and availability in an idealized model of virtualization. In: [*Proceedings of International Symposium on Formal Methods*]{}, 2011, 231–245. McDermott J, Kirby J, Montrose B, Johnson T, and Kang M. Re-engineering xen internals for higher-assurance security. , 2008, 13(1):17 – 24. McDermott J and Freitas L. A formal security policy for xenon. In: [*Proceedings of the 6th ACM workshop on Formal methods in security engineering*]{}, 2008, 43–52. Roscoe A, Woodcock J, and Wulf L. Non-interference through determinism. In: [*Proceedings of the Third European Symposium on Research in Computer Security*]{}, 1994, 33–53. Carnevali L, Lipari G, Pinzuti A, and Vicario E. A formal approach to design and verification of two-level hierarchical scheduling systems. In: [*Proceedings of the 16th Ada-Europe international conference on Reliable software technologies*]{}, 2011, 118–131. Carnevali L, Pinzuti A, and Vicario E. Compositional verification for hierarchical scheduling of real-time systems. , 2013, 39(5):638–657. Asberg M, Pettersson P, and Nolte T. Modelling, verification and synthesis of two-tier hierarchical fixed-priority preemptive scheduling. In: [*Proceedings of 23rd Euromicro Conference on Real-Time Systems (ECRTS), 2011* ]{}, 2011, 172–181. Fersman E, Krcal P, Pettersson P, and Wang Y. Task automata: Schedulability, decidability and undecidability. , 2007, 205(8):1149–1172. 
Singhoff F and Plantec A. Aadl modeling and analysis of hierarchical schedulers. , 2007, 27(3):41–50. Zerzelidis A and Wellings A. Getting more flexible scheduling in the rtsj. In: [*Proceedings of the 9th IEEE International Symposium on Object and Component-Oriented Real-Time Distributed Computing*]{}, 2006, 8–8. Zerzelidis A and Wellings A. A framework for flexible scheduling in the rtsj. , 2010, 10(1):3:1–3:44. Zerzelidis A and Wellings A. Model-based verification of a framework for flexible scheduling in the real-time specification for java. In: [*Proceedings of the 4th international workshop on Java technologies for real-time and embedded systems*]{}, 2006, 20–29. Alves-Foss J. Multiple independent levels of security. In: [ *Encyclopedia of Cryptography and Security*]{}, Springer US, 2011, 815–818. Clarkson M and Schneider F. Hyperproperties. , 2010, 18(6):1157–1210. Clarkson M, Finkbeiner B, Koleini M, Micinski K, Rabe M, and S[á]{}nchez C. Temporal logics for hyperproperties. In: [*Proceedings of International Conference on Principles of Security and Trust*]{}, 2014, 265–284. Abrial JR. Formal methods in industry: Achievements, problems, future. In: [*Proceedings of the 28th International Conference on Software Engineering*]{}, 2006, 761–768. [^1]: http://www.euromils.eu/ [^2]: http://www.verisoftxt.de/StartPage.html [^3]: http://research.microsoft.com/en-us/projects/vcc/
---
author:
- 'C. Argiroffi'
- 'A. Maggio'
- 'G. Peres'
- 'J. J. Drake'
- 'J. López-Santiago'
- 'S. Sciortino'
- 'B. Stelzer'
bibliography:
- 'mpmus.bib'
date: 'Received 1 July 2009 / Accepted 25 August 2009'
title: 'X-ray optical depth diagnostics of T Tauri accretion shocks'
---

Introduction
============

Classical T Tauri stars (CTTS) are young low-mass stars, still surrounded by a circumstellar disk from which they accrete material. According to a widely accepted model, they have a strong magnetic field that regulates the accretion process, disrupting the circumstellar disk, loading material from the inner part of the disk, and guiding it in free fall along its flux tubes toward the central star [@UchidaShibata1984; @BertoutBasri1988; @Koenigl1991]. A characteristic feature of young stars is strong X-ray emission, traditionally ascribed to magnetic activity in their coronae. Similar to their more evolved siblings, the diskless weak-line T Tauri stars (WTTS), CTTS display high X-ray luminosities and frequent flaring activity. The typical temperature of their coronal plasma is $\sim10-20$MK, or even higher during strong flares [e.g. $50-100$MK, @GetmanFeigelson2008]. From a theoretical point of view, the accretion process can also produce significant X-ray emission from CTTS. Material accreting from the circumstellar disk reaches velocities of $\sim300-500\,{\rm km\,s^{-1}}$. A shock forms at the base of the accretion column because of the impact with the stellar atmosphere. This shock heats up the accreting material to a maximum temperature $T_{\rm max}=3 \mu m_{\rm H} v_{0}^2 / ( 16 k )$, where $v_{0}$ is the infall velocity. Because of the high pre-shock velocity, the infalling material reaches temperatures of a few MK, and hence it emits X-rays. Typical values of the mass accretion rate for CTTS indicate that the accretion-driven X-ray luminosity should be comparable to the coronal one [@Gullbring1994].
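The maximum post-shock temperature formula above is easy to evaluate numerically; the following sketch assumes a mean molecular weight $\mu \approx 0.6$, a typical value for fully ionized plasma of solar composition (our assumption, since the text leaves $\mu$ generic).

```python
# T_max = 3 mu m_H v0^2 / (16 k) for a strong accretion shock.
M_H = 1.6726e-27   # hydrogen mass [kg]
K_B = 1.3807e-23   # Boltzmann constant [J/K]

def t_max(v0_km_s, mu=0.6):
    """Post-shock temperature in K; mu = 0.6 is an assumed mean
    molecular weight for fully ionized plasma."""
    v0 = v0_km_s * 1.0e3          # km/s -> m/s
    return 3.0 * mu * M_H * v0 ** 2 / (16.0 * K_B)

for v in (300, 400, 500):
    print(f"v0 = {v} km/s  ->  T_max = {t_max(v) / 1e6:.1f} MK")
```

For infall velocities of $300-500\,{\rm km\,s^{-1}}$ this yields roughly $1-3$ MK, the "few MK" quoted in the text.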
Considering the typical inferred stream cross-sectional area [$\la5\%$ of the stellar surface, e.g. @CalvetGullbring1998], velocity ($\sim300-500\,{\rm km\,s^{-1}}$), and mass accretion rate [$\sim10^{-9}-10^{-7}\,{\rm M_{\sun}\,yr^{-1}}$, e.g. @GullbringHartmann1998], the plasma heated in the accretion shock should have densities $n_{\rm e}\ga10^{11}\,{\rm cm^{-3}}$, i.e. at least one order of magnitude higher than typical coronal plasma densities. Hence, in principle, the accretion process can produce plasma with high $L_{\rm X}$, high density, and temperatures of a few MK. To summarize, X-ray emission from CTTS can originate from two different plasma components: plasma heated in the accretion shock and coronal plasma. The former, because of its lower temperatures, should dominate the softer X-ray band [e.g. $E\le1$keV in the case of the CTTS TW Hya, @GuntherSchmitt2007], while the harder X-ray emission, $E\ge1$keV, should be produced almost entirely by coronal plasma. Recently, high-resolution X-ray spectra of a few CTTS enabled measurement of individual emission lines sensitive to plasma density (i.e. He-like triplets), and hence searches for evidence of accretion-driven X-rays. The density of the plasma at $T\sim2-4$MK can be inferred from the O VII and Ne IX triplet lines (at $E\approx0.6$ and 0.9keV, respectively). All but one of the CTTS for which these triplet lines were detected showed cool plasma with high density, $n_{\rm e}>10^{11}\,{\rm cm^{-3}}$ [@KastnerHuenemoerder2002; @StelzerSchmitt2004; @SchmittRobrade2005; @GuntherLiefke2006; @ArgiroffiMaggio2007; @GuedelSkinner2007; @RobradeSchmit2007]. In contrast, the cool quiescent plasma of active stellar coronae is always dominated by low densities [$n_{\rm e}\la10^{10}\,{\rm cm^{-3}}$, @NessSchmitt2002; @TestaDrake2004a]. This basic difference suggests that the high-density cool plasma in CTTS is not coronal plasma but plasma heated in accretion shocks.
One complication to this argument is that mass accretion rates derived assuming a very high efficiency of conversion of accretion energy into X-rays tend to be an order of magnitude or so lower than rates derived using other methods [e.g. @Drake2005; @SchmittRobrade2005; @GuntherSchmitt2007]. The idea of accretion-driven X-rays from CTTS is superficially supported by a soft X-ray excess found in high-resolution X-ray spectra of CTTS with respect to similar spectra of WTTS by @TelleschiGuedel2007 and @GuedelTelleschi2007. However, @GuedelSkinner2007 and @GuedelTelleschi2007 noted that this soft X-ray excess is significantly lower than that predicted by simple models of X-ray emission from accretion shocks. Moreover, the soft excess scales with the total stellar X-ray luminosity, and hence is at least partially related to the stellar magnetic activity. @GuedelSkinner2007 and @GuedelTelleschi2007 suggested that the CTTS soft X-rays could be produced by infalling material loaded into coronal structures. The properties of the X-ray emitting plasma in CTTS and in WTTS have also been investigated using CCD X-ray spectra of large stellar samples. These studies, however, commonly covered the $0.5-8.0$keV energy band in which the coronal component dominates. The main results are that CTTS are on average less luminous in the X-ray band than WTTS [e.g. @FlaccomioMicela2003], and that the X-ray emitting plasma of CTTS is on average hotter than that of WTTS [@NeuhaeuserSterzik1995; @PreibischKim2005]. CTTS and WTTS therefore do have different coronal characteristics, suggesting that the accretion process can affect coronal properties to some extent.
Numerical simulations have confirmed that the accretion process can produce significant X-rays: @GuntherSchmitt2007 derived stationary 1-D models of the shock in an accretion column; @SaccoArgiroffi2008 improved those results by performing 1-D hydrodynamical (HD) simulations of the accretion shock, including the stellar atmosphere and taking into account time variability. Assuming optically thin emission, @SaccoArgiroffi2008 showed that, even for low accretion rates, the amount of X-rays produced in the accretion shock is comparable to the typical X-ray luminosity of CTTS ($L_{\rm X}\sim10^{30}{\rm erg\,s^{-1}}$ for $\dot{M}\sim10^{-10}\,{\rm M_{\odot}\,yr^{-1}}$). Several aspects of the nature of the high-density cool plasma component observed in CTTS are still debated. In particular, definitive evidence that it is material heated in the accretion shock is still lacking. Moreover, while the simple “photospheric burial” model of @Drake2005 suggests that under some circumstances a large fraction of the shock X-rays can be absorbed and reprocessed by the photosphere, there are currently no detailed quantitative models explaining why the X-ray luminosities, predicted on the basis of 1-D HD simulation results and mass accretion rates inferred from observations at other wavelengths, are universally much higher than observed. Understanding the link between accretion and X-rays would also allow more accurate characterization of the coronal component of the X-ray emission from CTTS. This could help in understanding how accretion changes coronal activity, and which other parameters determine the coronal activity level in PMS stars, whose X-ray luminosity cannot simply be explained in terms of a Rossby dynamo number [@PreibischKim2005] as is largely the case for active main sequence stars [e.g. @PizzolatoMaggio2003]. To address the above issues, we performed a detailed study of the high-resolution X-ray spectra of two nearby CTTS: TW Hya and MP Mus. 
In particular we investigated:

- optical depth effects in their soft X-ray emission;

- the emission measure distribution [*(EMD)*]{} of the X-ray emitting plasma.

Optical depth effects probe the nature of the high-density cool plasma component: we show that, if the emitting plasma is located in the accretion shock, some emission lines should have non-negligible optical depth; in contrast, these lines should be optically thin if the plasma is located in coronal structures. We also investigate how the [*EMD*]{} can help in distinguishing coronal and accretion plasma components, which should have different average temperatures. We compare the [*EMD*]{} of accreting and non-accreting young stars, and compare these observed [*EMD*]{} with that predicted on the basis of the HD shock model of [@SaccoArgiroffi2008].

Targets
=======

![image](figure1.ps){width="17cm"}

We selected the two CTTS MP Mus and TW Hya for this study. These stars suffer moderate interstellar absorption ($N_{\rm H}<10^{21}\,{\rm cm^{-2}}$), and good $S/N$ spectra, gathered with [*XMM-Newton*]{}/RGS (high spectral resolution and large effective area in the soft X-ray band), are available. While high-resolution X-ray spectra of TW Hya have also been obtained by [*Chandra*]{}, we chose to analyze the [*XMM-Newton*]{}/RGS spectra so as to have more uniform data for the two stars, enabling more ready comparison of the derived results. TW Hya is a $0.7\,{\rm M_{\sun}}$ CTTS, located at 56pc. From its membership in the eponymous TW Hya Association (TWA), its age is estimated to be $\sim10$Myr [@KastnerZuckerman1997]. @MuzerolleCalvet2000 estimated a mass accretion rate of $\sim5\times10^{-10}\,{\rm M_{\sun}\,yr^{-1}}$ based on its H$\alpha$ profile and UV flux. @AlencarBatalha2002 and @BatalhaBatalha2002 derived a higher value, $1-5\times10^{-9}\,{\rm M_{\sun}\,yr^{-1}}$, based on UV data, in agreement with the mass accretion rate inferred from the H$\alpha$ line width by @JayawardhanaCoffey2006.
The X-ray emission of TW Hya shows clear evidence of high-density plasma ($n_{\rm e}\sim 10^{13}\,{\rm cm^{-3}}$) at low temperatures [@KastnerHuenemoerder2002; @StelzerSchmitt2004]. MP Mus is a CTTS with spectral type K1IVe. @MamajekMeyer2002 identified it as a member of the Lower Centaurus Crux (LCC) Association, and determined a distance of 86pc using the moving cluster method and the LCC convergent point. They inferred for MP Mus a mass of $1.1-1.2\,{\rm M_{\sun}}$ from photometry and isochrone fitting to different evolutionary tracks. Recently @TorresQuast2008 suggested that MP Mus is likely a member of the younger $\epsilon$ Cha Association. The authors, using their convergent point method iteratively with their list of candidate members of the association [@TorresQuast2006], determined a distance to the star of 103pc. Note that both these distance estimates, 86 and 103pc, depend on correct identification of the parent stellar association. In this work we assume a distance for MP Mus of 86pc. Adopting the longer distance would lead to X-ray luminosities higher by $\sim$40% and would be of no consequence for the purpose of this study. Like TW Hya, MP Mus also exhibits evidence of cool plasma at high density [$n_{\rm e}\approx 5\times10^{11}\,{\rm cm^{-3}}$, @ArgiroffiMaggio2007]. While it is certain that MP Mus is a CTTS [the H$\alpha$ equivalent width is $\sim47$Å, @GregorioHetemLepine1992], its mass accretion rate is not known. The accretion rate is fundamental for studying the relation between accretion and soft X-ray emission, and we evaluate it in Sect. \[optobs\] using an optical spectrum of MP Mus taken with FEROS at La Silla Observatory. We compare the results obtained for TW Hya and MP Mus with those obtained for TWA 5, a member of the TWA with age similar to TW Hya. TWA 5 is a quadruple system with three similar components of $0.5\,{\rm M_{\odot}}$ and a brown dwarf.
Its H$\alpha$ equivalent width [$\sim10$Å, @GregorioHetemLepine1992] indicates that TWA 5 is a WTTS, even if some marginal accretion is still present in at least one of the components [@MohantyJayawardhana2003]. We selected TWA 5 for the comparison because: 1) its [*EMD*]{} was derived from an [*XMM-Newton*]{}/RGS observation [@ArgiroffiMaggio2005] with the same method we adopted in this work for MP Mus and TW Hya; 2) low plasma density was inferred from its He-like triplet diagnostics [@ArgiroffiMaggio2005], suggesting that its X-ray emission is entirely produced by the stellar corona.

RGS data analysis
=================

TW Hya was observed in July 2001 for 30ks, and MP Mus in August 2006 for 110ks. Details of the data processing can be found in @StelzerSchmitt2004 and @ArgiroffiMaggio2007, respectively. In Table \[tab:log\] we report the log of the [*XMM-Newton*]{}/RGS observations of the two stars. For MP Mus we discarded time segments affected by high background count rates, obtaining a net exposure of $\sim100$ks over the selected good time intervals. Background-subtracted spectra of MP Mus and TW Hya contain $\sim5800$ and $\sim6500$ net counts, respectively. To increase the $S/N$ ratio of the data we added the RGS1 and RGS2 spectra and rebinned by a factor of 2. The resulting RGS spectra are shown in Fig. \[fig:rgsspec\]. We based the spectral analysis (line identification, line emissivity functions $G(T)$, line oscillator strengths $f$) on the Astrophysical Plasma Emission Database [APED V1.3, @SmithBrickhouse2001], as implemented in the PINTofALE V2.0 software package [@KashyapDrake2000]. Spectral line fluxes were measured within PINTofALE by fitting the observed lines with Lorentzian profiles. The results are listed in Table \[tab:lines\].
EMD reconstruction {#emd}
------------------

Having defined a temperature grid, $T_{i}$, with bin size $\Delta T$, the [*EMD*]{} of the X-ray emitting plasma is computed as the amount of emission measure, $n_{\rm e}\,n_{\rm H}\,\Delta V$, of plasma with temperature ranging between $T_{i}-\Delta T /2$ and $T_{i}+\Delta T /2$. We derived the [*EMD*]{} from the line flux measurements using the Markov-Chain Monte Carlo (MCMC) method of @KashyapDrake1998. We considered all the measured lines whose intensity depends only on the plasma temperature, and not on the plasma density: hence, we discarded the intercombination and forbidden lines of the O VII and Ne IX He-like triplets. The lines used for the [*EMD*]{} reconstruction are marked in Table \[tab:lines\]. We reconstructed the plasma [*EMD*]{} over a logarithmic temperature grid ranging between $\log T {\rm (K)}=6.0$ and 7.2, with a bin size of 0.2. This choice was guided by the set of formation temperatures of the measured lines (see $\log T_{\rm max}$ in Table \[tab:lines\]). This range includes the expected temperature of plasma heated in an accretion shock. On the other hand, coronal plasma may also have temperatures higher than $\log T =7.2$, so the derived [*EMD*]{} does not take very hot coronal components into account. The derived [*EMD*]{} for MP Mus and TW Hya are reported in Table \[tab:emd\]. Simultaneously with the [*EMD*]{} reconstruction, we also derived the relative abundances of the elements represented by the selected lines. The absolute abundances, also reported in Table \[tab:emd\], were fixed for both stars by matching the predicted and observed spectra.
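The forward relation underlying such an [*EMD*]{} reconstruction can be sketched as follows. The emissivity curves and [*EMD*]{} values below are invented toy numbers, not APED data; the snippet only illustrates how each optically thin line flux is a sum of $G(T_i)$ weighted by the emission measure per bin, which an inversion method such as the MCMC of @KashyapDrake1998 then inverts from many measured fluxes.

```python
import math

# Temperature grid used in the text: log T = 6.0 ... 7.2, bins of 0.2 dex.
LOGT_GRID = [6.0, 6.2, 6.4, 6.6, 6.8, 7.0, 7.2]

def toy_emissivity(logt_peak, width=0.15):
    """Invented Gaussian-shaped G(T), peaked at the line's formation
    temperature; real emissivities come from a database such as APED."""
    return [math.exp(-((lt - logt_peak) / width) ** 2) for lt in LOGT_GRID]

def line_flux(emd, g_of_t, abundance=1.0):
    """Optically thin line flux: abundance * sum_i G(T_i) * EM_i."""
    return abundance * sum(em * g for em, g in zip(emd, g_of_t))

# Toy EMD (emission measure per bin, arbitrary units), peaked near log T = 6.4.
emd = [1.0, 2.0, 4.0, 3.0, 2.0, 1.0, 0.5]

flux_cool = line_flux(emd, toy_emissivity(6.3))   # line formed near 2 MK
flux_hot = line_flux(emd, toy_emissivity(7.0))    # line formed near 10 MK
print(flux_cool > flux_hot)                       # cool-peaked EMD favors the cool line
```

Because each line samples the [*EMD*]{} around its own formation temperature, a set of lines spanning $\log T = 6.0-7.2$ constrains the emission measure bin by bin.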
Optical depth effects {#optdepth} --------------------- Assuming that the X-ray emitting plasma has a geometrical depth $l$ (in cm) along the line of sight, and free electrons with density $n_{\rm e}$ and temperature $T_{\rm e}$ (in K), the optical depth of an emission line is given by [@Acton1978]: $$\tau = 1.16\times10^{-14}\, \lambda\,f\, \left( \frac{m_{Z}}{T_{\rm e}} \right)^{1/2} \, \frac{n_{Z, i}}{n_{Z}}\, \frac{n_{Z}}{n_{\rm H}}\, \frac{n_{\rm H}}{n_{\rm e}}\, n_{\rm e}\,l \label{eq:tau}$$ where $\lambda$ is the line wavelength (in Å), $f$ is the line oscillator strength, $m_{Z}$ is the atomic mass of the element (in amu), $n_{Z, i}$ is the density of element $Z$ in ionization stage $i$, $n_{Z}$ is the density of element $Z$, and $n_{\rm H}$ is the hydrogen density (all densities are in units of ${\rm cm^{-3}}$). For increasing density $n_{\rm e}$, and/or increasing source dimension $l$, $\tau$ increases. Because of resonance scattering, a non-negligible optical depth ($\tau\sim 1$) first occurs in lines with large oscillator strengths $f$ and with lower levels in the ground state. Photons produced in these transitions are absorbed again by ions of the same species in the ground state. These photons are then re-emitted in different, random directions. When the optical depth is of the order of unity, resonance scattering only affects the emergent spectrum if the source does not have spherical symmetry. In the scenario of accretion-driven X-ray emission, the material heated in the accretion shock is entirely located in a small compact volume at the footpoint of the accretion stream. The high-density cool plasma component observed in CTTS has densities of $10^{11}-10^{13}\,{\rm cm^{-3}}$ and linear dimensions of $l\sim10^{9}-10^{10}$cm [inferred from measured values of $n_{\rm e}$ and [*EM*]{}, @KastnerHuenemoerder2002; @StelzerSchmitt2004; @ArgiroffiMaggio2007; @RobradeSchmit2007]. 
Considering these values for $n_{\rm e}$ and $l$, the optical depth of the strongest emission lines produced by this shock-heated plasma ($T\sim1-3$MK) should be non-negligible. As an example, the line center optical depth, $\tau$, of the resonance line at 21.60Å  is $\tau\sim10$ for a density $n_{\rm e}=1\times10^{11}\,{\rm cm^{-3}}$, $n_{Z,i}/n_{Z}=0.5$ (corresponding to a temperature $T=2$MK), $n_{Z}/n_{\rm H}=8.5\times10^{-4}$, $n_{\rm H}/n_{\rm e}=0.83$, $m_{Z}=16.0$amu, and $l=10^{9}$cm. On the other hand, if this high-density plasma component observed in CTTS is contained in many separate coronal structures, the average length $l$ covered by photons before escaping from the plasma should be significantly smaller, and hence the optical depth negligible. In fact, extensive investigation of coronal spectra for optical depth effects has shown coronae to be effectively optically-thin in almost all cases [e.g. @NessSchmitt2003; @TestaDrake2004b; @TestaDrake2007], with only a few notable exceptions (see below). Optical depth effects in a given line can be investigated by considering, as a reference, another line produced by the same element in the same ionization stage, but with a smaller oscillator strength [@NessSchmitt2003; @TestaDrake2004b; @MatrangaMathioudakis2005; @TestaDrake2007]. In the optically-thin case, the ratio between two such lines is dictated purely by atomic parameters and has only a weak dependence on the plasma temperature. Instead, when the gas is not optically-thin, the line with larger oscillator strength can be quenched relative to the weaker line, causing a discrepancy between the observed and predicted optically-thin ratios. In the high-resolution X-ray spectra gathered with [*Chandra*]{} and [*XMM-Newton*]{}, the strongest emission lines with the largest oscillator strengths are the Ly$\alpha$ lines of and , the resonance lines of and , and the line at 15.01Å. 
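As a check, plugging the quoted numbers into eq. (\[eq:tau\]) reproduces the stated order of magnitude (the exact product is $\tau\approx17$, i.e. $\tau\sim10$). A minimal sketch:

```python
import math

def line_optical_depth(lam_A, f, m_Z, T_e, ion_frac, abund, nH_ne, n_e, l_cm):
    """Line-center optical depth, eq. (1) of the text (Acton 1978 form)."""
    return (1.16e-14 * lam_A * f * math.sqrt(m_Z / T_e)
            * ion_frac * abund * nH_ne * n_e * l_cm)

# Values quoted in the text for the resonance line at 21.60 A
tau = line_optical_depth(lam_A=21.60, f=0.69, m_Z=16.0, T_e=2.0e6,
                         ion_frac=0.5,    # n_{Z,i}/n_Z at T = 2 MK
                         abund=8.5e-4,    # n_Z/n_H
                         nH_ne=0.83,      # n_H/n_e
                         n_e=1.0e11, l_cm=1.0e9)
# tau is of order 10 for these inputs; it drops linearly with smaller l or n_e,
# which is why fragmenting the plasma into many small structures makes it thin.
```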
Considering [*Chandra*]{}/HETGS data, @TestaDrake2004b and @TestaDrake2007 found significant resonance scattering in the Ly$\alpha$/Ly$\beta$ line ratios of and of the active stars II Peg and IM Peg. @MatrangaMathioudakis2005 and @RoseMatranga2008, analyzing RGS data of AB Dor and EV Lac, found that the ratio between the lines at 15.01 and 16.78Å changes between the flaring and quiescent phases, indicating opacity in the 15.01Å line. We searched for optical depth effects in the X-ray emission of MP Mus and TW Hya, using TWA 5 as a non-accreting comparison. We investigated three line ratios: - the ratio of the lines at 21.60Å and 18.63Å, produced by transitions to the ground level ($n=1$) from the shells $n=2$ and $n=3$, respectively; the resonance line at 21.60Å has a large oscillator strength ($f=0.69$), while the line at 18.63Å has a small oscillator strength ($f=0.15$); - the ratio between the Ly$\alpha$ at 18.97Å (a doublet with oscillator strengths $f=0.54$ and $f=0.27$) and the Ly$\beta$ at 16.01Å (a doublet with $f=0.10$ and $f=0.05$); - the ratio between the lines at 15.01Å (large oscillator strength, $f=2.73$) and 16.78Å (small oscillator strength, $f=0.11$). This line set allowed us to probe the optical depth of the plasma at temperatures ranging from 2 to 5MK. We did not consider the Ly$\alpha$ and Ly$\beta$ line ratio since, for MP Mus, we detected only the Ly$\alpha$ line. The observed ratios of MP Mus, TW Hya, and TWA 5 are listed in Table \[tab:linerat\] and plotted in Fig. \[fig:fluxratio\]. These ratios were obtained by correcting the observed fluxes (Table \[tab:lines\]) for the interstellar absorption, assuming the $N_{\rm H}$ values derived from the analysis of EPIC spectra [@ArgiroffiMaggio2007; @StelzerSchmitt2004; @ArgiroffiMaggio2005]. The correction factors due to interstellar absorption range between 0.94 and 1.16. We associated each flux ratio of each star with an average temperature. 
That average temperature was obtained assuming, as a weighting function, the product of the [*EMD*]{} and the emissivity function, $G(T)$, of the line in the numerator. The predicted flux ratios as a function of plasma isothermal temperature are also illustrated in Fig. \[fig:fluxratio\]. All the lines involved in the analyzed ratios are free from significant blending, except for the Ly$\beta$ line, located at 16.01Å. This line is blended with two lines (16.00 and 16.07Å), whose contributions are small but non-negligible [see, e.g. @TestaDrake2007 for a detailed discussion and treatment]: the [*EMD*]{} and abundances derived for the X-ray emitting plasma of MP Mus and TW Hya (see Sect. \[emd\]) indicate that the contributions to the observed fluxes are about $20\%$ and $6\%$, respectively. To take this into account, we included these contributions in the calculation of the predicted flux ratios. The predicted ratio of the Ly$\alpha$ and Ly$\beta$ lines is then weakly dependent on the model abundances and hence differs slightly for different stars. For comparison, we also considered other coronal sources for which line fluxes and $N_{\rm H}$ values, obtained from [*XMM-Newton*]{}/RGS data, were published. These stars and their and line flux ratios, corrected for the interstellar absorption, are listed in Table \[tab:linerat\] and are included in Fig. \[fig:fluxratio\]. We did not consider the line ratio since its analysis requires knowledge of model abundances for each star. MP Mus shows in all cases an observed ratio significantly lower than the predicted one, over the entire range of plausible temperatures. The discrepancies between the observed and predicted ratios for MP Mus are 2.8$\sigma$, 4.7$\sigma$, and 2.3$\sigma$ for the , , and ratios, respectively. Conversely, the observed ratios for TW Hya are perfectly compatible with those predicted for optically-thin emission. 
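The temperature assigned to each ratio is thus the weighted mean of the grid temperatures, with weights $EMD(T_i)\times G(T_i)$ of the numerator line. A sketch with a hypothetical [*EMD*]{} and emissivity function (in the actual analysis these come from the reconstruction of Sect. \[emd\] and from APED):

```python
import numpy as np

def weighted_mean_logT(logT, emd, G_num):
    """Mean log T weighted by EMD x G(T) of the line in the numerator."""
    w = emd * G_num
    return np.sum(logT * w) / np.sum(w)

# Hypothetical grid, EMD, and numerator-line emissivity (illustrative only)
logT = np.arange(6.0, 7.21, 0.2)
emd = np.array([1e52, 5e52, 2e53, 8e52, 5e52, 3e53, 2e53])
G = 1e-16 * np.exp(-0.5 * ((logT - 6.3) / 0.15) ** 2)

logT_avg = weighted_mean_logT(logT, emd, G)
# The average lands near the emissivity peak, shifted by the shape of the EMD
```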
The and ratios for all the other stars considered are compatible with the predicted optically-thin values. Conversely some stars, other than MP Mus, show a line ratio discrepant with the optically thin case. The interpretation of the ratio is more controversial: while coronal plasma might have non-negligible optical depth in the 15.01Å line [i.e. the Sun and AB Dor, @SabaSchmelz1999; @MatrangaMathioudakis2005], @NessSchmitt2003 found that several stars, in a large sample, exhibit ratios somewhat discrepant from the predicted optically-thin value, as in our case, but they concluded that the interpretation of this result in terms of opacity effects is not convincing. Summarizing, we detected a significant intensity deficit in lines with the largest oscillator strength in the spectrum of MP Mus: the resonance line at 21.60Å, the Ly$\alpha$ line at 18.98Å, and the line at 15.01Å. These lines are mainly formed by the plasma at a temperature of $2-8$MK. We conclude that this emission is not optically-thin and that resonance scattering has quenched the strongest lines. We explored whether the anomalous oxygen line ratios observed in MP Mus could be explained by an absorbing column $N_{\rm H}$ higher than that assumed (that however could not explain the line ratio). To explain the observed and line ratios for MP Mus, a hydrogen column $\ga 5\times10^{21}\,{\rm cm^{-2}}$ is required. This value is higher, by a factor 10, than the value constrained from the analysis of the EPIC spectra by @ArgiroffiMaggio2007, and adopted for the line ratio analysis. In Fig. \[fig:mpmusepicspec\] we show the PN and MOS1 spectra of MP Mus (MOS2 is omitted for clarity). We also plotted the 3$-T$ model with the inferred absorption $N_{\rm H}=(5 \pm 1.8)\times10^{20}\,{\rm cm^{-2}}$ [@ArgiroffiMaggio2007], and the 3$-T$ model obtained by fitting the EPIC spectra with an absorbing column fixed at $5\times10^{21}\,{\rm cm^{-2}}$. 
The soft part of the EPIC spectra, which constrains the $N_{\rm H}$ value, is dominated by continuum emission. Therefore, if soft continuum and soft RGS lines, both contained in the $0.3-1.0$keV energy range, are produced by the same plasma component, then the observed line ratios of MP Mus cannot be explained in terms of photoelectric absorption. Instead if soft continuum and soft lines originated from different plasma components, affected by different absorbing columns, then the observed line ratios could be explained by photoelectric absorption, rather than opacity effects in the emitting plasma. However, different absorbing columns would indicate that the highly absorbed component (the one producing the and lines) is located under the accretion stream or buried down in the stellar atmosphere, suggesting again that this plasma component originates in the accretion shock. Optical spectra and their analysis {#optobs} ================================== To gather information on the accretion status of MP Mus, we analyzed its optical spectrum taken from the public archive of La Silla Observatory (ESO). The observation was performed on March 18, 2005 using the echelle spectrograph FEROS at the 2.2m telescope. The spectra cover approximately 3850–9200 Å with a resolving power $R = 48000$ ($\Delta\lambda \approx 0.15$ Å, in the region of H$\alpha$, measured in the calibration lamp spectrum). This range includes all the chromospheric activity indicators, ranging from H & K to the calcium infrared triplet, as well as the lithium line at 6708 Å. The signal-to-noise ratio at 6550 Å is $S/N \sim 75$. For the reduction, we used the standard procedures in the IRAF[^1] package (bias subtraction, extraction of the scattered light produced by the optical system, division by a normalized flat-field, and wavelength calibration). 
After reduction, the spectrum of MP Mus was normalized to the continuum order by order by fitting a polynomial function to remove the general shape from the aperture spectra. We did not perform a flux calibration since it is not necessary for our study. H$\alpha$ line and accretion rate --------------------------------- We measured the equivalent width of the H$\alpha$ line using an IDL procedure developed by us that enables estimation of the error in the measurements using both the signal-to-noise ratio and the spectral resolution. We obtained $EW({\rm H\alpha})=-41.05\pm0.09$Å, which is similar to the value reported by @MamajekMeyer2002, but lower than that measured by @GregorioHetemLepine1992, who found $EW({\rm H\alpha})=-47$Å. This variation of 6Å in observations obtained $\sim10$yr apart is not very high if compared e.g. with variations observed by @SaccoFranciosini2008 in $\sigma$ Ori and $\lambda$ Ori. However, there is not enough time coverage to study the H$\alpha$ variability in detail. We also measured the width at 10% of the peak of the H$\alpha$ line (see Fig. \[fig:halpha\]), which is indicative of the mass accretion rate for widths above $200\,{\rm km\,s^{-1}}$ [@NattaTesti2004]. We obtained $W({\rm H\alpha})=446\pm5$ kms$^{-1}$, corresponding to a mass accretion rate of $\approx3\times10^{-9}\,{\rm M_{\odot}\,yr^{-1}}$. Discussion ========== Soft X-rays from shock-heated plasma: optical depth effects ----------------------------------------------------------- It appears that in most CTTS, significant soft X-rays are produced by a high-density plasma component. This result has been interpreted as a strong indication that this plasma, at a temperature of a few MK, is formed in the shock at the base of the accretion stream near the stellar surface. Hence this high-density plasma component should be contained in a small volume at the base of the accretion stream, instead of being located in extended structures like coronal plasma. We showed in Sect. 
\[optdepth\] that, considering the characteristic volumes and densities for the high-density cool plasma component of CTTS, the optical depth $\tau$ (eq. \[eq:tau\]) in some emission lines with large oscillator strength should be significantly larger than 1, producing detectable opacity effects. We found clear evidence of opacity effects in all the examined line ratios of the soft X-ray spectrum of MP Mus, produced by the high-density plasma at temperatures of a few MK. These lines are: the resonance lines produced by $n=2$ to $n=1$ shell transitions in the and ions (at $21.60$Å and 18.97Å, respectively), and the line at 15.01Å. All these lines are significantly weaker than their predicted optically-thin intensities, a result which can be readily explained in terms of resonance scattering. We did not find evidence of optical depth effects in the X-ray emission from TW Hya. We explored whether the observed opacity could be produced by coronal plasmas. Studies on large stellar surveys showed that optical depth effects from cool coronal plasma exist but are extremely rare [@NessSchmitt2003; @TestaDrake2007]. Quiescent plasma confined within coronal magnetic structures can hardly explain the observed non-negligible optical depths in CTTS. In fact the @RosnerTucker1978 model indicates loop semi-length of $\sim10^{6}-2\times10^{7}$cm, for $T_{\rm max}=3$MK and $n_{\rm e}=5\times10^{11}-10^{13}\,{\rm cm^{-3}}$. Such scale lengths are unrealistically small if compared to photospheric and chromospheric scale heights, and not large enough to account for the observed optical depth ($\tau$ would be $\sim0.03$, in the center of the line at 21.60Å, assuming that the loop cross section has a radius smaller by a factor 10 than the loop semi-length). In principle flaring plasma, instead of quiescent plasma, could favor opacity effects. In fact standard models suggest that plasma density increases during flares, raising the opacity. 
Opacity effects have been searched for in flaring coronal sources [e.g. @GuedelAudard2004], but they have been detected only a couple of times [@MatrangaMathioudakis2005; @RoseMatranga2008]. In those cases opacity affected lines formed at higher temperatures than the and lines. Moreover the soft X-ray light curve of MP Mus indicates that this emission does not originate from flaring plasma [@ArgiroffiMaggio2007]. We cannot reject altogether the hypothesis that the high-density cool plasma of MP Mus, showing opacity effects, originates from coronal plasma. However the alternative explanation, based on the hypothesis that this plasma is located in the post-shock region at the base of the accretion stream, naturally explains the observed opacity effects. Therefore opacity by itself is not a proof of the nature of this plasma component, but it strongly supports the shock-heated plasma hypothesis. We find evidence of optical depth effects for MP Mus, and not for TW Hya, although optical depth effects were expected for both stars. Moreover the higher densities of the plasma of TW Hya, in comparison to MP Mus, should favor opacity effects, considering that the characteristic dimension of the plasma volume $l$ scales with $EM^{1/3}\,n_{\rm e}^{-2/3}$, and hence $\tau\propto l\,n_{\rm e}=EM^{1/3}\,n_{\rm e}^{1/3}$. A tentative explanation of the lack of opacity effects from TW Hya could be related to the fact that resonance scattering depends on the geometry of the source and on the inclination angle under which the source is observed. In particular, the post-shock accretion hot-spot is likely to have quite different vertical and horizontal dimensions. The volume occupied by the plasma heated in the accretion shock is defined by the stream cross-section $A$ and by the thickness of the hot post-shock region $L$ (see Fig. \[fig:accretion\]). $L$ is mainly determined by the post-shock velocity and by the plasma cooling time, and hence by the plasma density [e.g. @SaccoArgiroffi2008]. 
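The scaling invoked above follows from assuming a single emitting volume, $l=(EM/n_{\rm e}^{2})^{1/3}$, so that $\tau\propto l\,n_{\rm e}=EM^{1/3}n_{\rm e}^{1/3}$. The densities below are illustrative only, chosen to show that at fixed $EM$ a factor of 10 in density changes the expected optical depth by just $10^{1/3}\approx2.2$:

```python
def relative_tau(EM, n_e):
    """Relative line-center optical depth for a single emitting volume:
    l = (EM / n_e**2)**(1/3), and tau is proportional to l * n_e."""
    l = (EM / n_e ** 2) ** (1.0 / 3.0)
    return l * n_e

# Fixed EM ~ 1e53 cm^-3; illustrative densities for the two stars
tau_lo = relative_tau(1e53, 1e11)   # MP Mus-like density
tau_hi = relative_tau(1e53, 1e12)   # TW Hya-like (denser) plasma

ratio = tau_hi / tau_lo             # = 10**(1/3): a modest increase with density
```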
Estimates of both $A$ and $L$ can be made from the soft X-ray emission of CTTS: observed electron densities range from $10^{11}\,{\rm cm^{-3}}$ to $10^{13}\,{\rm cm^{-3}}$, while a typical value of the $EM=n_{\rm e}^2 \times A \times L$ responsible for the soft X-ray emission is $\sim10^{53}\,{\rm cm^{-3}}$. Assuming an infall velocity of $400\,{\rm km\,s^{-1}}$ and assuming that all the shock-heated plasma is contained in one shock area, we obtain: $L\sim10^{7}-10^{9}\,{\rm cm}$ and $A\sim10^{20}-10^{22}\,{\rm cm^{2}}$. The difference between the horizontal and vertical dimensions does not depend on assuming a single shock area [@RomanovaUstyugova2004] instead of a few ($\la10$) separate accretion footpoints [@DonatiJardine2007]. Therefore, the vertical dimension $L$ of the post-shock region is significantly smaller than the other two dimensions. In Fig. \[fig:accretion\] we display a schematic view of the footpoint of an accretion stream and the post-shock region. The resonance scattering effect on the X-ray spectrum should be different depending on whether we observe the emitting plasma from the top or from the side (inclination angle $\theta_{1}$ or $\theta_{2}$, respectively). The latter is the one that favors optical depth effects, while in the former case these effects could even be negligible. The different opacity effects observed for MP Mus and TW Hya could be explained assuming that for the two stars the angle $\theta$ of the accretion footpoints differs significantly. TW Hya and MP Mus are viewed under quite different inclinations $i$: TW Hya is almost pole-on [$i\la10\degr$, e.g. @QiHo2004], while MP Mus should have $i\ga60\degr$, considering that its optical spectrum indicates $v\sin i\sim13\,{\rm km\,s^{-1}}$ and that its rotational period is $\sim5-6$d [@BatalhaQuast1998]. Assuming that the footpoints of the accretion streams are located at high latitudes on both stars [i.e. 
the magnetic dipole field is almost aligned with the rotational axis, @RomanovaUstyugova2004], the orientation of TW Hya should be similar to the top view ($\theta_1$ in Fig. \[fig:accretion\]), while that of MP Mus to the side view ($\theta_2$). We suggest that this important difference in viewing angle $\theta$ might explain the different optical depths obtained from the X-ray spectra of the two accreting stars. We cannot exclude that, although the inclination angles $i$ of the two stars differ significantly, the two viewing angles $\theta$ of the post-shock region are similar (e.g. if the accretion footpoint latitude is $\sim50-60\degr$). If that were the case, a different explanation would be required. Soft X-rays from shock-heated plasma: EMD ----------------------------------------- The above results are consistent with the current prevailing view that accreting stars have two distinct plasma components: coronal plasma and shock-heated plasma, both contributing to different extents to the observed X-ray emission. To study the properties of these two plasma components it is necessary to identify and disentangle them with the help of both density and temperature diagnostics. The [*EMD*]{} can also provide information on the nature of the X-ray emitting plasma. In Fig. \[fig:allemd\] we show the [*EMD*]{} of the two CTTS MP Mus and TW Hya. To understand their [*EMD*]{} in terms of the coronal and accretion-driven hot plasma components, we compared them with the [*EMD*]{} of TWA 5 [@ArgiroffiMaggio2005], rebinned on the same $\log T$ grid used for the [*EMD*]{} of MP Mus and TW Hya. The [*EMD*]{} reconstruction is based on the assumption that the emission is optically thin. We know that this hypothesis is not true in the case of MP Mus. The affected lines are the resonance lines of and , which constrain the cooler part of the [*EMD*]{}. 
Hence, it must be noted that the true [*EMD*]{} of MP Mus at low temperatures ($\log T \la 6.5$) is likely to be higher than that inferred, by a factor of 2 or so. Inspecting Fig. \[fig:allemd\] we found three main results: 1. MP Mus and TW Hya show a peak in their $EMD$ at low temperature ($\log T\sim6.4$); this peak is not present in the $EMD$ of TWA 5; 2. MP Mus and TWA 5 have a strong $EMD$ peak at high temperature ($\log T\sim7.0-7.2$), while for TW Hya that peak, even if marginally detected, is significantly lower compared to those of the other two stars; 3. the relative strengths of the cool and hot $EMD$ peaks in MP Mus and TW Hya are significantly different. @SaccoArgiroffi2008 performed hydrodynamic simulations, tuned to the MP Mus case, to study the shock formed by an accretion stream impacting on a chromosphere, and the resulting X-ray emission. In Fig. \[fig:emdandmodel\] we show the $EMD$ of MP Mus together with the time-averaged $EMD$ of the simulation by @SaccoArgiroffi2008. The normalization of the model [*EMD*]{} was derived assuming that the soft X-ray emission is entirely produced by the shock-heated plasma component. This is reasonable, considering the line triplets: any contribution from coronal plasma, assuming that it has low density, cannot exceed 20% [@ArgiroffiMaggio2007]. The important result is that the model [*EMD*]{} has a pronounced peak at $T\sim3$MK, which is exactly the feature that we observe in MP Mus and TW Hya, while no cool peak is present in the [*EMD*]{} of the WTTS TWA 5. The cool peak that we found in the [*EMD*]{} of the two CTTS TW Hya and MP Mus explains, in terms of [*EMD*]{}, the soft flux excess observed in the X-ray spectra of CTTS in the XEST survey [@GuedelTelleschi2007; @TelleschiGuedel2007]. 
These results suggest that the shape of the [*EMD*]{} in CTTS can be used to determine whether soft emission can be produced by accretion shocks, even in those cases in which the electron density provided by the triplet is compatible with coronal plasma [i.e. the case of T Tau, @GuedelSkinner2007], or when the vital diagnostic intercombination and forbidden lines are of poor quality. Soft X-rays from shock-heated plasma: mass accretion rate --------------------------------------------------------- The comparison of the [*EMD*]{} of MP Mus with that of the WTTS TWA 5, together with the results of the hydrodynamical simulations performed by @SaccoArgiroffi2008, shows that the soft X-ray emission of MP Mus can be entirely explained in terms of the plasma heated in the accretion shock. The mass accretion rate provided by the simulations, $8\times10^{-11}\,{\rm M_{\odot}\,yr^{-1}}$, agrees with that derived on the basis of a simplified model by @ArgiroffiMaggio2007. However the $\dot{M}$ value derived from the H$\alpha$ line width exceeds that obtained from X-rays, $\dot{M}_{\rm X}$, by more than one order of magnitude. This discrepancy was also noted by @Drake2005 and @GuntherSchmitt2007 for TW Hya. A similar situation arises for BP Tau, for which mass accretion rates were derived from X-ray data by @SchmittRobrade2005. We list in Table \[tab:accrate\] mass accretion rates derived from X-ray data for these three CTTS, compared to those derived from other accretion indicators (i.e. UV or H$\alpha$). In all three cases the $\dot{M}_{\rm X}$ values are lower by a factor of 10 (or even more) than the corresponding values derived from UV or optical data. We note that the observed fluxes of the strongest lines were used to derive $\dot{M}_{\rm X}$, assuming optically thin emission. 
Since we found that the X-ray emission produced by the shock-heated plasma in MP Mus is not optically thin, the accretion rate of MP Mus derived from X-rays, [$5-8\times10^{-11}\,{\rm M_{\odot}\,yr^{-1}},$ derived from the Ly$\alpha$ and resonance lines of and , @ArgiroffiMaggio2007; @SaccoArgiroffi2008], is likely underestimated by a factor of 2 or so—much too little to reconcile the discrepancy between the $\dot{M}$ values. The accretion flow is likely composed of several funnels, each isolated from the adjacent one because the strong magnetic field inhibits thermal conduction and mass motion perpendicular to its field lines. Each funnel is characterized by a given density and infall velocity. Both the density and velocity determine the amount of observable X-rays produced by the accretion shock of each funnel. As explained below, a distribution of densities and velocities might explain the observed $\dot{M}$ discrepancy. @Drake2005 argued on the basis of accretion stream ram pressure and shock stand-off height that denser streams would form smaller, less extended shocks buried too deeply in the photosphere to be observed from most inclination angles. X-rays would then be reprocessed to lower energy by the surrounding photospheric gas. The observed accretion-shocked plasma of MP Mus should have a stand-off height of $10^8-10^9$cm or so [e.g. @ArgiroffiMaggio2007; @SaccoArgiroffi2008]—sufficient to lie well above the photosphere and be visible with minimum absorption from the stellar atmosphere. It is possible that there are other accretion streams, carrying the majority of the inflowing mass flux, that are much denser and that are obscured from the line of sight by absorption. X-ray emission from TW Hya apparently does not fit in this scenario: it is produced by a very high-density plasma, but it suffers low absorption. 
To solve this issue, @Drake2005 suggested that the plasma density of TW Hya could be slightly overestimated, because of UV radiation and its influence on the He-like triplets. Another effect might also contribute to the observed discrepancy. @RomanovaUstyugova2004 and @GregoryJardine2006 inferred that accreting material arrives at the base of the accretion stream with a large distribution of infall velocities. Therefore, assuming that different funnels (or different parts of the same funnel) may have different velocities, each funnel will produce a post-shock region with a temperature depending only on the relevant velocity. In such a case, funnels with high velocity would produce hot post-shock plasma and, hence, X-ray emission, while funnels with reduced velocity might produce post-shock plasma of insufficient temperature to contribute to the X-ray emission. Coronal X-ray luminosity in CTTS -------------------------------- The presence of coronal plasma for both TW Hya and MP Mus is indicated by their observed flaring activity and by the presence of a hot plasma component detected in their EPIC spectra, and inferred from the presence of emission lines produced by hot plasma [$T\sim10$MK, @KastnerHuenemoerder2002; @StelzerSchmitt2004; @ArgiroffiMaggio2007]. Assuming that the cool peak in the [*EMD*]{} of MP Mus and TW Hya is entirely due to accretion, their coronal component can be evaluated by considering only the hottest part of their [*EMD*]{}. If we take into account only $\log T (K) \ge6.7$, the coronal X-ray luminosities of MP Mus and TW Hya are $1.4\times10^{30}$ and $1.4\times10^{29}\,{\rm erg\,s^{-1}}$, respectively, in the $0.5-8.0$keV band. The coronal luminosity of TW Hya is significantly reduced compared to its total X-ray luminosity, $8.1\times10^{29}\,{\rm erg\,s^{-1}}$, while for MP Mus the corona emits $\sim80$% of the total X-ray luminosity. 
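The luminosity bookkeeping in this paragraph can be made explicit. The total X-ray luminosity of MP Mus is not quoted here, so in the sketch below it is inferred from the stated $\sim80\%$ coronal fraction (an assumption of this illustration, not a value from the text):

```python
# Coronal (log T >= 6.7) luminosities from the text, 0.5-8.0 keV band (erg/s)
L_cor_twhya = 1.4e29
L_tot_twhya = 8.1e29            # total X-ray luminosity of TW Hya (from the text)
L_cor_mpmus = 1.4e30

frac_twhya = L_cor_twhya / L_tot_twhya   # ~0.17: less than 20% of TW Hya's L_X is coronal
L_tot_mpmus = L_cor_mpmus / 0.8          # ~1.8e30 erg/s implied by the ~80% fraction
```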
With these new estimates for the coronal $L_{\rm X}$, MP Mus and TW Hya have $\log (L_{\rm X}/L_{\rm bol})=-3.4$ and $-3.8$, respectively. Both of these values, but especially that of TW Hya, are significantly lower than previous estimates of their coronal-to-bolometric luminosity ratios [@MamajekMeyer2002; @KastnerZuckerman1997], and place the two CTTS below the saturation level $\log (L_{\rm X}/L_{\rm bol})=-3$, as is usual for CTTS [e.g. @PreibischKim2005]. This suggests that, when the X-ray luminosity is to be used as an activity indicator, it would be preferable to compute it by excluding the plasma components at $T\le5$MK. Conclusions =========== In this work we presented an analysis of the high-resolution X-ray spectra, obtained with [*XMM-Newton*]{}/RGS, of the two CTTS TW Hya and MP Mus. For MP Mus we detected significant resonance scattering in the Ly$\alpha$ and resonance lines, which are produced by the high-density plasma at temperatures of a few MK. No resonance scattering was detected in the spectrum of TW Hya. This result strongly supports the hypothesis that this plasma is formed by the shock at the base of the accretion column, which is likely viewed obliquely. The different optical depths observed for TW Hya and MP Mus could be explained in terms of the different viewing angles of their accretion shocks. We also derived the [*EMD*]{} for TW Hya and MP Mus, finding that they both show a peak at $T\sim3-4$MK, in addition to the hot peak at $10-20$MK typical of coronal plasma on magnetically active stars. The cool peak is well described by plasma heated in an accretion shock [@SaccoArgiroffi2008]. The same peak is not present in the [*EMD*]{} of the non-accreting young star TWA 5. The identification of this [*EMD*]{} peak as due to shock-heated plasma allows us to assess the characteristics of the true coronal plasma and its luminosity. In particular, the coronal X-ray luminosity of TW Hya is less than 20% of its total X-ray luminosity in the $0.5-8.0$keV band. 
Soft X-ray emission can be used to compute the mass accretion rate. We compared the mass accretion rates derived from X-ray data, $\dot{M}_{\rm X}$, with those obtained from UV and optical indicators, $\dot{M}$, finding that $\dot{M}_{\rm X}$ is underestimated by at least a factor of 10. Two possible explanations for this are that the densest accretion streams form shocks that are located too deep in the stellar atmosphere to be observed, and that some plasma in the accretion streams might not attain the full free-fall velocity required to form X-ray emitting shocks. The authors thank the referee, M. G[ü]{}del, for comments that improved the paper. CA, AM, GP, SS, and BS acknowledge partial support for this work from contract ASI-INAF I/023/05/0 and from the Ministero dell’Università e della Ricerca. JJD was supported by NASA contract NAS8-39073 to the [*Chandra X-ray Center*]{} during the course of this research, and thanks the director, H. Tananbaum, and the CXC science team for advice and support. JLS acknowledges financial support by the projects S-0505/ESP-0237 (ASTROCAM) of the Comunidad Autonoma de Madrid and AYA2008-06423-C03-03 of the Spanish Ministerio de Ciencia y Tecnología. Based on observations obtained with [*XMM-Newton*]{}, an ESA science mission with instruments and contributions directly funded by ESA Member States and NASA. [^1]: IRAF is distributed by the National Optical Astronomy Observatory, which is operated by the Association of Universities for Research in Astronomy, Inc., under contract with the National Science Foundation.
--- abstract: 'We investigate the characteristics of the thick disk in the Canada – France – Hawaii – Telescope Legacy Survey (CFHTLS) fields, complemented at bright magnitudes with Sloan Digital Sky Survey (SDSS) data. The (\[Fe/H\], Z) distributions are derived in the W1 and W3 fields, and compared with simulated maps produced using the Besançon model. It is shown that the thick disk, represented in star-count models by a distinct component, is not an adequate description of the observed (\[Fe/H\], Z) distributions in these fields.' address: - 'GEPI, Observatoire de Paris, CNRS, Université Paris Diderot; 5 Place Jules Janssen, 92190 Meudon, France' - 'Observatoire de Besançon; 41 bis, avenue de l’Observatoire, 25000 Besançon, France' author: - 'M. Guittet' - 'M. Haywood$^1$' - 'M. Schultheis' bibliography: - 'guittet.bib' title: The Milky Way stellar populations in CFHTLS fields --- the Galaxy, the thick disk, \[Fe/H\] abundance. Introduction {#intro} ============ Our knowledge of the characteristics of the thick disk remains limited in practically every aspect. Its structure on large scales ($>$kpc), whether clumpy or smooth, is not well defined, and its connections with the collapsed part of the halo or the old thin disk are essentially not understood. The spectrum of possible scenarios proposed to explain its formation is still very wide, and truly discriminating constraints are rare. The SDSS photometric survey has provided a wealth of new information on the thick disk, see in particular Ivezić et al. (2008), Bond et al. (2010) and Lee et al. (2011). However, the data have barely been directly confronted with star-count models, and little insight has been gained into how well the thick disk in these models really represents the survey data. In the present work, we initiate such a confrontation by comparing the Besançon model with metallicity and distance information in the W1 and W3 CFHTLS fields, and provide a brief discussion of our results. 
Data description ================ Among the four fields that make up the Wide Survey, W1 and W3 cover larger angular surfaces (72 and 49 square degrees) than W2 and W4 (both having 25 square degrees). They point towards higher latitudes (–61.24${{\mathrm{^\circ}}}$ and 58.39${{\mathrm{^\circ}}}$, respectively) and are consequently less affected by dust extinction, and contain a larger relative proportion of thick disk stars. We will therefore focus on W1 and W3. CFHTLS photometry starts at a substantially fainter magnitude than the SDSS, missing a large part of the thick disk. We therefore complemented the CFHTLS catalogue at the bright end with SDSS stars not present in the CFHTLS catalogues. In the final catalogues, W1 contains $\sim$ 139 000 stars, with 16$\%$ from the SDSS, while $\sim$ 132 000 stars are found in the W3 field, with 31$\%$ coming from the Sloan.\ W1 and W3 are at large distances above the galactic plane, and dust extinction is very small at these latitudes. For example, the Schlegel map [@schlegel98] gives for W1 an absorption Av of 0.087, while @Jones11 give Av=0.113. The extinction models of @Arenou92 and @Hakkila97 estimate Av at 0.1 and 0.054, respectively. We briefly discuss the effect of extinction on distance determination and metallicities in Sect. 4.1. Comparisons between the Besançon model and CFHTLS/SDSS data: Hess diagrams ========================================================================== The Besançon model ------------------ Simulations were made using the online version of the Besançon model (@Robin03, @Haywood97, @Bienayme87). The model includes four populations: the bulge, the thin disk, the thick disk and the halo. 
The metallicities of the thick disk and the halo in the online version of the model (–0.78 and –1.78 dex respectively) were shifted (to –0.6 dex and –1.5 dex) to comply with more generally accepted values, and in particular with values derived from the Sloan data (see @Lee11, who show that the thick disk has a metallicity \[Fe/H\] = –0.6 dex roughly independent of vertical distance, and @ivezic08, @Bond10, @Sesar11, @Carollo10 or @Dejong10 for the inner halo metallicity, estimated to be about –1.5 dex). The thick disk has a scale height of 800 pc and a local stellar density $\rho_0$ of 6.8 $\%$ of the local thin disk, while the stellar halo is described by a power law with a flattening and a local density of 0.6%. Simulations were made assuming photometric errors as described in the SDSS. Hess diagrams ------------- The distributions of CFHTLS/SDSS and model stars in the g versus u–g color magnitude diagram (CMD) are shown in Fig. \[fig1\]. For both diagrams, faint blue stars (u–g $\sim$ 0.9, g$>$18) are clearly discernible and correspond to the galactic halo. The concentration of stars at g$<$18, u–g $\sim$1.1, corresponds to disk stars and in particular thick disk stars. Because of the SDSS saturation at g=14, which prevents a representative sample of thin disk stars, our data sample is mainly composed of thick disk and halo stars. The Besançon model shows a distinct separation between thin disk stars (u–g$\sim$1.3, g$<$14-15) and thick disk stars (u–g$\sim$1.1, 15$<$g$<$18) which cannot be checked with the present data.\ Comparisons between the Besançon model and CFHTLS/SDSS data: (\[Fe/H\], Z) distributions ======================================================================================== Metallicity and photometric distance determinations --------------------------------------------------- @Juric08 and @ivezic08 have published calibrations of the metallicity and photometric parallax as a function of ugri magnitudes. 
The metallicity calibration has been revised in @Bond10: $$\begin{aligned} \label{feh} \mathrm{[Fe/H]} & = & \mathrm{A + Bx + Cy + Dxy + Ex^2 + Fy^2 + Gx^2y + Hxy^2 + Ix^3 + Jy^3 }\end{aligned}$$ where x = $u-g$, y = $g-r$ and (A–J) = (–13.13, 14.09, 28.04, –5.51, –5.90, –58.58, 9.14, –20.61, 0.0, 58.20).\ This relation has been determined for F and G stars and is consequently applicable in the range 0.2 $< g-r <$ 0.6 and –0.25 + 0.5($u-g$) $< g-r <$ 0.05 + 0.5($u-g$). This calibration only extends up to –0.2 dex. Observed vertical distances $Z$ have been calculated using $Z = D\,\sin(b)$, b being the latitude of the star. Photometric distances $D$, defined by $m_{r} - M_{r} = 5\log(D) - 5$, were determined using the absolute magnitude calibration of @ivezic08, which depends on the metallicity and on $g-i$ colours.\ For the highest extinction values given by @Jones11, the impact on metallicities, as estimated using Eq. \[feh\] and the absolute magnitude relation of @ivezic08, is at most 0.15 dex near g–r=0.5 at solar metallicities and 0.1 dex at \[Fe/H\] = –1 dex. Distances will be affected at most by about 20% at solar metallicities and 15% at \[Fe/H\] = –1 dex at g–r near 0.40-0.45.\ (\[Fe/H\], Z) distributions ---------------------------- We generated catalogues with the model in the direction of W1 and W3, deriving the Z height above the plane from simulated distances, and metallicities from the assumed metallicity distributions of each population. In Fig. \[fig2\] we present (\[Fe/H\], Z) distributions for both the data and the model. The dotted line is the median metallicity per bin of 0.5 kpc. The continuous line is the median metallicity for disk stars as given by @Bond10 and follows the disk distribution in our data rather well. We find results similar to those of @Bond10: the halo dominates the star counts above 3 kpc with a mean metallicity of about –1.5 dex. 
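The calibration chain just described can be sketched in code. The coefficients are those quoted above from @Bond10; the absolute magnitude $M_r$ must come from the @ivezic08 calibration, which is not reproduced in the text, so it is left as an input here.

```python
import numpy as np

# Coefficients (A-J) of the Bond et al. (2010) photometric metallicity
# calibration, as quoted in the text above.
COEFF = (-13.13, 14.09, 28.04, -5.51, -5.90, -58.58, 9.14, -20.61, 0.0, 58.20)

def feh_bond2010(u_g, g_r):
    """[Fe/H] from u-g and g-r colours (valid for F/G stars only)."""
    A, B, C, D, E, F, G, H, I, J = COEFF
    x, y = u_g, g_r
    return (A + B * x + C * y + D * x * y + E * x**2 + F * y**2
            + G * x**2 * y + H * x * y**2 + I * x**3 + J * y**3)

def photometric_distance_pc(m_r, M_r):
    """Distance D (in pc) from the modulus m_r - M_r = 5 log10(D) - 5."""
    return 10.0 ** ((m_r - M_r + 5.0) / 5.0)

def z_height_pc(D_pc, b_deg):
    """Height above the Galactic plane, Z = D sin(b)."""
    return D_pc * np.sin(np.radians(b_deg))
```

Note that the polynomial should only be evaluated inside the stated colour ranges (0.2 $<$ g–r $<$ 0.6, etc.); outside them the fit is unconstrained.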
@Sesar11 studied the four CFHTLS Wide fields but with magnitudes corrected for ISM extinction. They found the mean halo metallicity in the range between –1.4 and –1.6 dex. Our estimate of the extinction effect would shift metallicities by about 0.15 dex at most, and shows that our mean halo metallicity is in good agreement with their estimates. The point most worthy of notice is the conspicuous, distinct pattern that represents the thick disk in the model and which is clearly absent from the data. As expected, the standard thick disk model dominates the counts between 1 and 4 kpc, while in the data, the thick disk seems to be less extended, and does not appear as a distinct component between the thin disk and the halo. The vertical resolution of the observed distribution prevents any clear statement concerning the transition from the thin to the thick disk, although it is apparent that the model is at variance with the data. This result raises the interesting question of the connections (or lack thereof) between the thin and thick disks. Almost since its discovery, it has been suggested that the thick disk is more akin to an extended thin disk [@Norris87]. Our knowledge of the thick disk more than twenty years later does not permit us to draw any firm conclusion on that point. Conclusion ========== Investigation of the (\[Fe/H\], Z) distribution in the CFHTLS Wide fields does not seem to show a thick disk component as prominent and distinct as predicted by standard star-count models. The mean halo metallicity, found to be –1.5 dex, is in agreement with previous studies (e.g. @Bond10, @Sesar11). The behavior of models must be studied on more extensive data sets in order to assess the necessary adjustments and to better characterise the thick disk.
--- abstract: 'The principal portfolios of the standard Capital Asset Pricing Model (CAPM) are analyzed and found to have remarkable hedging and leveraging properties. Principal portfolios implement a recasting of any *correlated* asset set of $N$ risky securities into an equivalent but *uncorrelated* set when short sales are allowed. While a determination of principal portfolios in general requires a detailed knowledge of the covariance matrix for the asset set, the rather simple structure of CAPM permits an accurate solution for any reasonably large asset set that reveals interesting universal properties. Thus for an asset set of size $N$, we find a *market-aligned* portfolio, corresponding to the *market* portfolio of CAPM, as well as $N-1$ *market-orthogonal* portfolios which are market neutral and strongly leveraged. These results provide new insight into the return-volatility structure of CAPM, and demonstrate the effect of unbridled leveraging on volatility.' author: - 'M. Hossein Partovi' title: 'Hedging and Leveraging: Principal Portfolios of the Capital Asset Pricing Model' --- 1. Introduction {#introduction .unnumbered} =============== Modern investment theory dates back to the mean-variance analysis of Markowitz (1952, 1959), which is expected to hold if asset prices are normally distributed or the investor preferences are quadratic. Undoubtedly, the most consequential fruit of Markowitz’ seminal work was the introduction of the capital asset pricing model (CAPM) by Sharpe (1964), Lintner (1965), and Mossin (1966). The key ideas of this model are that investors are mean-variance optimizers facing a frictionless market with full agreement on the distribution of security returns and unrestricted access to borrowing and lending at the riskless rate. As an asset pricing model, CAPM is an equilibrium model valid for a given investment horizon, which is taken to be the same for all investors. 
Indeed investors are solely distinguished by their level of risk aversion. Principal portfolio analysis, on the other hand, simplifies asset allocation by recasting the asset set into uncorrelated portfolios when short sales are allowed (Partovi and Caputo 2004). Stated otherwise, the original problem of stock selection from a set of *correlated assets* is transformed into the much simpler problem of choosing from a set of *uncorrelated portfolios*. The details of this transformation are given in Partovi and Caputo (2004), where the results are summarized as follows: *Every investment environment ${ \{ {s}_{i}, {r}_{i}, {\sf \sigma}_{ij} \} }_{i,j=1}^{N}$ which allows short sales can be recast as a principal portfolio environment ${ \{ {S}_{\mu}, {R}_{\mu}, {\sf V}_{\mu \nu} \} }_{\mu, \nu =1}^{N}$ where the principal covariance matrix ${\sf V}$ is diagonal. The weighted mean of the principal variances equals the mean variance of the original environment. In general, a typical principal portfolio is hedged and leveraged.* Here $s_i$ ($ {S}_{\mu}$), $r_i$ (${R}_{\mu}$), and ${\sigma}_{ij}$ (${\sf V}_{\mu \nu} $) represent the assets, the expected returns, and the covariance matrix of the original (recast) set, while $N$ is the size of the asset set. It was further shown in Partovi and Caputo (2004) that the efficient frontier in the presence of a riskless asset has a simple allocation rule which requires that each principal portfolio be included in inverse proportion to its variance. Practical applications of principal portfolios have already been considered by several authors, for example, Poddig and Unger (2012) and Kind (2013). In this paper we present a perturbative calculation of the principal portfolios of the single-index CAPM in the large $N$ limit. The results of this calculation are in general expected to entail a relative error of the order of $1/{N}^2$. 
However, since any application of the single-index CAPM is most likely to involve a large asset set, the stated error is normally quite small and in any case majorized by modelling errors. Thus the results to be reported here are accurate implications of the underlying model. The principal portfolio analysis of the single-index model and an exactly solvable version of it presented in §3 highlight the volatility structure of principal portfolios in a practical and familiar context. A remarkable result of the analysis is the bifurcation of the set of principal portfolios into a [*market-aligned*]{} portfolio, which is unleveraged and behaves rather like a total-market index fund, and $N-1$ *market-orthogonal* portfolios, which are hedged and leveraged,[^1] and nearly free of market-driven fluctuations. This equivalency between the original asset set and two classes of principal portfolios is reminiscent of, but fundamentally different from, Merton’s (1972) two mutual fund theorems. The market-orthogonal portfolios, on the other hand, provide a vivid demonstration of the effect of leveraging on the volatility level of a portfolio. 2. Principal Portfolios of the Single-Index Model {#principal-portfolios-of-the-single-index-model .unnumbered} ================================================= Here we shall analyze the standard single-index model as well as an exactly solvable special case of it with respect to their principal portfolio structure. Remarkably, our analysis will uncover interesting and hitherto unnoticed properties of well-diversified and arbitrarily leveraged portfolios within the single-index model. 
Consider a set of $N$ assets $ \{ {s}_{i} \}$, $1 \le i \le N$, whose rates of return are normally distributed random variables given by $${\rho}_{i}\stackrel{\rm def}{=}{\alpha}_{i}+{\beta}_{i}{\rho}_{mkt}, \label{431}$$ where ${\alpha}_{i}$ and ${\rho}_{mkt}$ are uncorrelated, normally distributed random variables with expected values and variances equal to ${\bar{\alpha}}_{i}$, ${\bar{\rho}}_{mkt}$ and ${\bar{{\alpha}_{i}^{2}}}$, ${\bar{{\rho}^{2}}}_{mkt}$, respectively. The quantity ${\beta}_{i}$ associated with asset ${s}_{i}$ is a constant which measures the degree to which ${s}_{i}$ is coupled to the overall market variations. Thus the attributes of a given asset are assumed to consist of a [*market-driven*]{} (or [*systematic*]{}) part described by $({\beta}_{i}{\rho}_{mkt},{\beta}_{i}^{2}{\bar{{\rho}^{2}}}_{mkt})$ and a [*residual*]{} (or [*specific*]{}) part described by $({\alpha}_{i},{\bar{{\alpha}_{i}^{2}}})$, with the two parts being uncorrelated. The expected value of Eq. (\[431\]) is given by $${\bar{{\rho}_{i}}}\stackrel{\rm def}{=}{r}_{i}={\bar{{\alpha}_{i}}}+{\beta}_{i}{\bar{\rho}}_{mkt}. \label{4311}$$ The covariance matrix which results from Eq. (\[431\]) is similarly a superposition of the specific and market-driven contributions, as would be expected of the sum of two uncorrelated variables. It can be written as $${\sf \sigma}_{ij}={\bar{{\alpha}_{i}^{2}}} {\delta}_{ij}+ {\beta}_{i}{\beta}_{j} {\bar{{\rho}^{2}}}_{mkt}. \label{432}$$ Note that ${\sf \sigma}$ is a [*positive definite*]{} matrix, since we have excluded riskless assets from the asset set for the time being. 
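The covariance structure of Eq. (\[432\]), a diagonal matrix of residual variances plus a rank-1 market term, is straightforward to realize numerically. All parameter values below are illustrative placeholders, not values from the text:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200
beta = rng.normal(1.0, 0.3, size=N)         # illustrative asset betas
res_var = rng.uniform(0.01, 0.04, size=N)   # residual variances  bar{alpha_i^2}
mkt_var = 0.03                              # market variance     bar{rho^2}_mkt

# Eq. (432): sigma_ij = bar{alpha_i^2} delta_ij + beta_i beta_j bar{rho^2}_mkt
sigma = np.diag(res_var) + mkt_var * np.outer(beta, beta)

# With all residual variances strictly positive, sigma is positive definite:
# the rank-1 market term is positive semidefinite and the diagonal part is
# bounded below by min(res_var) > 0.
eigvals = np.linalg.eigvalsh(sigma)
```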
We shall assume here that the number of assets $N$ is appropriately large, as is in fact implicit in the formulation of all index models, so that the condition ${\bar{{\alpha}_{i}^{2}}}/ N {b}^{2} {\bar{{\rho}^{2}}}_{mkt} \ll 1$ is satisfied; here $b\stackrel{\rm def}{=} {({\sum}_{i=1}^{N}{\beta}_{i}^{2}/N)}^{1 \over 2}$ is the square root of the average value of ${\beta}_{i}^{2}$, typically of the order of unity. These assumptions are not essential to our discussion, but they do simplify the presentation and, more importantly, they are usually well satisfied for appropriately large values of $N$ and guarantee that our perturbative results below are accurate for practical applications. Under the above assumptions it is appropriate to rescale the covariance matrix as in ${\sf \sigma}_{ij}= N {b}^{2} {\bar{{\rho}^{2}}}_{mkt} {\tilde{\sf \sigma}}_{ij}$, where $${\tilde{\sf \sigma}}_{ij}\stackrel{\rm def}{=}{\gamma}_{i}^{2} {\delta}_{ij}+ {\hat{\beta}}_{i}{\hat{\beta}}_{j} \label{433}$$ is a dimensionless matrix. Here ${\hat{\beta}}_{i} \stackrel{\rm def}{=}{{\beta}}_{i}/ {({\sum}_{i=1}^{N}{\beta}_{i}^{2})}^{1 \over 2}$, so that $\hat{{\bm { \beta}}}=({\hat{\beta}}_{1},{\hat{\beta}}_{2}, \ldots, {\hat{\beta}}_{N})$ is a unit vector, and ${\gamma}_{i}^{2}\stackrel{\rm def}{=}{\bar{{\alpha}_{i}^{2}}}/ N {b}^{2} {\bar{{\rho}^{2}}}_{mkt} \ll 1$ as concluded above. The above representation of the covariance matrix for the single-index model is quite suitable for revealing the structure of its eigenvalues and eigenvectors, these being closely related to the rescaled principal variances and the principal portfolios we are seeking. The structure in question is actually discernible on the basis of simple, qualitative considerations of the spectrum of ${\tilde{\sf \sigma}}$. 
To see this structure, let us first note that the sum of the eigenvalues of ${\tilde{\sf \sigma}}$, which is given by ${\rm tr} ({\tilde{\sf \sigma}})\stackrel{\rm def}{=} {\sum}_{i=1}^{N}{\tilde{\sf \sigma}}_{ii}$, equals $1+{\sum}_{i=1}^{N} {\gamma}_{i}^{2}$. We will show below that the largest eigenvalue of ${\tilde{\sf \sigma}}$ is approximately equal to unity, so that the remaining $N-1$ eigenvalues have an average value approximately equal to the average of $\{ {\gamma}_{i}^{2} \}$, which was shown above to be much smaller than unity as a consequence of the large $N$ assumption. Thus, barring a strongly skewed distribution of the latter, which is all but ruled out for any of the customary asset classes, we find that the spectrum of ${\tilde{\sf \sigma}}$ consists of a “major” eigenvalue close to unity, and $N-1$ “minor” eigenvalues each much smaller than unity. Stated in terms of the spectrum of ${\sf V}$, this implies that the principal portfolios separate into two classes of quite different properties, namely (i) a single [*market-aligned*]{} portfolio with a variance of magnitude approximately equal to $N{b}^{2}{\bar{{\rho}^{2}}}_{mkt}/{{W}_{N}}^{2}$, and (ii) $N-1$ [*market-orthogonal*]{} portfolios whose variances have a weighted average approximately equal to the average of the residual variance of the original asset set. As one might suspect, these two categories are characterized by sharply different values of portfolio beta,[^2] the former with a value typical of the asset set (i.e., of the order of unity) and the remaining $N-1$ portfolios with much smaller (possibly vanishing; cf. §3) values. To see the quantitative details of the foregoing qualitative analysis, we now turn to a perturbative treatment of the spectrum of ${\tilde{\sf \sigma}}$. 
The eigenvalue equation for ${\tilde{\sf \sigma}}$ reads $${({\tilde{\sf \sigma}}{\mathbf{e}}^{\mu})}_{i}={\gamma}_{i}^{2} {e}^{\mu}_{i}+ {\hat{\bm{\beta}}}\cdot {\mathbf{{e}^{\mu}}} {\hat{\beta}}_{i} ={\tilde{v}}_{\mu}^{2}{e}^{\mu}_{i}, \label{434}$$ where ${\bm{e}}^{\mu}$ is the $\mu$th eigenvector, ${e}^{\mu}_{i}$ is the $i$th component of that eigenvector, and ${\tilde{v}}_{\mu}^{2}$ is the corresponding eigenvalue, all quantities as defined earlier. Because of its simple structure, Eq. (\[434\]) can be implicitly solved for the components of the eigenvectors to yield $${e}^{\mu}_{i}=[{\hat{\bm{\beta}}}\cdot {\mathbf{{e}^{\mu}}} / ({\tilde{v}}_{\mu}^{2}-{\gamma}_{i}^{2})]{\hat{\beta}}_{i}. \label{435}$$ Upon multiplying this equation by ${\hat{\beta}}_{i}$ and summing over $i$, we find the characteristic equation for the eigenvalues. It reads $$1={\sum}_{i=1}^{N}[{\hat{\beta}}_{i}^{2} / ({\tilde{v}}_{\mu}^{2}- {\gamma}_{i}^{2})]. \label{436}$$ This equation can be rearranged as an $N$th-order polynomial equation in the variable ${\tilde{v}}_{\mu}^{2}$, the $\mu$th eigenvalue of ${{\sf \sigma}}$ divided by $N{b}^{2}{\bar{{\rho}^{2}}}_{mkt}$, and is guaranteed to have $N$ real, positive roots (with multiple roots counted according to their multiplicity). Once these roots are determined, they can be used in Eq. (\[435\]) to find the eigenvectors in the usual manner. As mentioned earlier, the structure of ${\tilde{\sf \sigma}}$ allows an approximate determination of its largest eigenvalue when $N$ is suitably large, say a hundred or more. This is of course a significant advantage in any numerical solution of the equations described in the preceding paragraph. As one can see from Eq. (\[433\]), the matrix in question, ${\tilde{\sf \sigma}}$, is the sum of two parts, one is diagonal with elements ${\gamma}_{i}^{2}$ which are much smaller than unity, and the other a rank-1 matrix with eigenvalue equal to unity. 
This implies that the eigenvector of the latter matrix is an approximate eigenvector of ${\tilde{\sf \sigma}}$ with eigenvalue approximately equal to unity. This is the eigenvalue we designated as [*major*]{} in our qualitative discussion. Let this be the $N$th eigenvalue, so that ${\tilde{v}}_{N}^{2}\stackrel{\rm def}{=}1+{\epsilon}_{N}$, with $\vert {\epsilon}_{N} \vert \ll 1$. Substituting this expression for ${\tilde{v}}_{N}^{2}$ in Eq. (\[436\]), and treating the resulting equation to first order in ${\gamma}_{i}^{2}$,[^3] we find $${\tilde{v}}_{N}^{2} \simeq 1+{\sum}_{i=1}^{N} {\gamma}_{i}^{2}{\hat{\beta}}_{i}^{2}, \label{437}$$ which identifies ${\epsilon}_{N} $ as equal to ${\sum}_{i=1}^{N} {\gamma}_{i}^{2}{\hat{\beta}}_{i}^{2}$ to first order, thus verifying the condition $\vert {\epsilon}_{N} \vert \ll 1$. The corresponding eigenvector can now be found from Eq. (\[435\]); to first order, it is given by $${e}^{N}_{i} \simeq (1+ {\gamma}_{i}^{2}-{\sum}_{j=1}^{N} {\gamma}_{j}^{2}{\hat{\beta}}_{j}^{2}){\hat{\beta}}_{i}, \label{438}$$ where the conditions of unit length and non-negative relative weight stipulated earlier have already been imposed within the stated order of approximation. Equation (\[438\]) specifies the (relative) composition of the market-aligned portfolio. The relative weight ${W}_{N}$ of this portfolio, on the other hand, is expected to be of the order of ${N}^{1 \over 2}$, since this portfolio consists entirely of purchased assets (recall our estimate of the relative weights earlier in §2). Indeed one can see from Eq. (\[438\]) that ${W}_{N} \simeq {\sum}_{i=1}^{N}{\hat{\beta}}_{i}$ in the leading order of approximation,[^4] which confirms the above-stated estimate (recall that the average of the ${\hat{\beta}}_{i}^{2}$ equals ${N}^{-1}$). Equations (\[437\]) and (\[438\]) provide approximate expressions for the major eigenvalue and eigenvector of the covariance matrix of the single-index model. Rescaling Eqs. 
(\[437\]) and (\[438\]) back to original variables, we find, for the variance and the composition of the market-aligned principal portfolio, the expressions $${{V}_{N}}^{2} \simeq [1+3 {\sum}_{i=1}^{N} {\gamma}_{i}^{2}{\hat{\beta}}_{i}^{2}-{({\sum}_{i=1}^{N} {\hat{\beta}}_{i})}^{-1}{\sum}_{i=1}^{N} {\gamma}_{i}^{2}{\hat{\beta}}_{i}]{({\bm{\beta}} \cdot {\bm{\beta}})} {\bar{{\rho}^{2}}}_{mkt} /{({\sum}_{i=1}^{N}{\hat{\beta}}_{i})}^{2}, \label{439}$$ $${e}^{N}_{i}/{W}_{N} \simeq [1+ {\gamma}_{i}^{2}-{({\sum}_{i=1}^{N} {\hat{\beta}}_{i})}^{-1}{\sum}_{i=1}^{N} {\gamma}_{i}^{2}{\hat{\beta}}_{i}]{\hat{\beta}}_{i}/({\sum}_{j=1}^{N} {\hat{\beta}}_{j}), \label{440}$$ where we have left the small correction terms in dimensionless form. It is clear from Eq. (\[440\]) that the market-aligned portfolio is basically composed by investing in each asset in proportion to how strongly it is correlated with the overall market fluctuations, i.e., in proportion to the value of its beta; cf. Eq. (\[431\]). Consequently, it is expected to be strongly susceptible to market-driven fluctuations. Indeed as one can see from Eq. (\[439\]), the variance of this principal portfolio in the leading order is given by $(N{b}^{2}/{{W}_{N}}^{2}){\bar{{\rho}^{2}}}_{mkt}$, which is of the same order of magnitude as ${\bar{{\rho}^{2}}}_{mkt}$ (recall that $b$ is of the same approximate magnitude as a typical $\beta$ and that ${W}_{N}$ is of the order of ${N}^{1 \over 2}$). The market-aligned portfolio is therefore seen to be that principal portfolio which approximately reflects the volatility profile of the market as a whole. Moreover, since it is entirely composed of purchased assets, it is neither hedged nor leveraged. By contrast, the remaining $N-1$ market-orthogonal principal portfolios are in general hedged and leveraged, and they are quite immune to overall market fluctuations[^5]. 
In fact, since ${\sum}_{\mu=1}^{N} {{v}}_{\mu}^{2}={\rm tr}({{\sf \sigma}})={\bm{\beta}} \cdot {\bm{\beta}} {\bar{{\rho}^{2}}}_{mkt}(1+{\sum}_{i=1}^{N} {\gamma}_{i}^{2})$, and ${{v}}_{N}^{2} \simeq {\bm{\beta}} \cdot {\bm{\beta}} {\bar{{\rho}^{2}}}_{mkt} + {\sum}_{i=1}^{N} {\hat{\beta}}_{i}^{2} {\bar{{\alpha}_{i}^{2}}}$, we find for the average value of the $N-1$ minor eigenvalues $${(N-1)}^{-1}{\sum}_{\mu=1}^{N-1} {{v}}_{\mu}^{2}={(N-1)}^{-1}{\sum}_{\mu=1}^{N-1} {{W}_{\mu}}^{2}{{V}_{\mu}}^{2} \simeq {(N-1)}^{-1}{\sum}_{i=1}^{N}(1- {\hat{\beta}}_{i}^{2}) {\bar{{\alpha}_{i}^{2}}}. \label{441}$$ Thus the weighted average of principal variances for market-orthogonal portfolios is approximately equal to (and in fact less than) the average of the residual variances of the original asset set. Therefore, these $N-1$ market-orthogonal principal portfolios are free not only of mutual correlations with other portfolios but in general also of the volatility induced by overall market fluctuations. This feat is possible in part because of the very special structure of the single-index model which makes it possible to isolate essentially all of the systematic market volatility in one portfolio, leaving the remaining $N-1$ portfolios almost totally immune to systematic market fluctuations. There is an important caveat with respect to the foregoing statement. Recall that there is an inverse relationship between ${{V}}_{\mu}$, defined as the positive square root of ${{V}}_{\mu}^{2}$, and ${W}_{\mu}$, so that for highly leveraged portfolios which are characterized by the condition ${W}_{\mu} \ll 1$, the above argument would imply a principal variance far exceeding the original ones. 
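Equation (\[441\]) can be verified in the same way. With illustrative inputs (not values from the text), the average of the $N-1$ minor eigenvalues of ${\sf \sigma}$ matches the beta-weighted residual-variance expression closely:

```python
import numpy as np

rng = np.random.default_rng(5)
N = 300
beta = rng.normal(1.0, 0.3, size=N)         # illustrative betas
res_var = rng.uniform(0.01, 0.04, size=N)   # residual variances bar{alpha_i^2}
mkt_var = 0.03                              # market variance    bar{rho^2}_mkt

sigma = np.diag(res_var) + mkt_var * np.outer(beta, beta)   # Eq. (432)
evals = np.sort(np.linalg.eigvalsh(sigma))

minor_avg = evals[:-1].mean()               # average of the N-1 minor eigenvalues

# Right-hand side of Eq. (441): (N-1)^{-1} sum_i (1 - bhat_i^2) bar{alpha_i^2}
bhat2 = beta**2 / np.sum(beta**2)
approx = np.sum((1.0 - bhat2) * res_var) / (N - 1)
```

The single major eigenvalue (dominated by the market term) towers over the minor ones, which is the spectral bifurcation described qualitatively above.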
Of course the condition ${W}_{\mu} \ll 1$ that implies such large variances also implies large expected returns, so that a more sensible comparative measure under such conditions is ${\check{V}}_{\mu}\stackrel{\rm def}{=}{V}_{\mu}/{R}_{\mu}={v}_{\mu}/{\sum}_{i=1}^{N}{e}^{\mu}_ {i}{r}_{i}$, which may be called [*return-adjusted volatility*]{} of the principal portfolio. As expected, the relative weight ${W}_{\mu}$ is no longer present in this adjusted version of the volatility. The return-adjusted volatility for the market-aligned portfolio, on the other hand, can be calculated from Eqs. (\[431\]), (\[439\]), and (\[440\]). It is given by $${\check{V}}_{N} \simeq \{ 1- [{({\bar{{\rho}^{2}}}_{mkt})}^{1 \over 2}/{\bar{\rho}}_{mkt}]{\sum}_{i=1}^{N} {\gamma}_{i}{\hat{\beta}}_{i} \} {({\bar{{\rho}^{2}}}_{mkt})}^{1 \over 2}/{\bar{\rho}}_{mkt}, \label{442}$$ which is approximately equal to ${({\bar{{\rho}^{2}}}_{mkt})}^{1 \over 2}/{\bar{\rho}}_{mkt}$. This ratio is of course precisely what one would expect for the approximate value of the return-adjusted volatility of a portfolio which is aligned with the overall market price movements. It is appropriate at this point to summarize the properties of the principal portfolios for the single-index model. [**Proposition 1.**]{} [*The principal portfolios of the single-index model consist of a market-aligned portfolio, which is unleveraged and has a return-adjusted volatility $\simeq {({\bar{{\rho}^{2}}}_{mkt})}^{1 \over 2}/{\bar{\rho}}_{mkt}$ characteristic of market-driven price movements, and $N-1$ market-orthogonal portfolios which are hedged and leveraged, and nearly free of systematic market fluctuations. Equations (\[439\])-(\[442\]) provide approximate expressions valid to first order in $1/N$ for the properties of these portfolios.*]{} 3. 
Single-Index Model with Constant Residual Variance {#single-index-model-with-constant-residual-variance .unnumbered} ===================================================== To provide an explicit illustration of the principal portfolio structure within the single-index model described in the preceding section, we now turn to an exactly solvable, albeit oversimplified, version of that model. This model is defined by the assumption that the residual variance of the $i$th asset in the original set, ${\bar{{\alpha}_{i}^{2}}}$, is equal to ${\bar{{\alpha}^{2}}}$ for all assets. Observe that this assumption does not affect the expected rate of return for the $i$th asset, which is given by ${r}_{i}={\bar{{\alpha}_{i}}}+{\beta}_{i} {\bar{\rho}}_{mkt}$ as before. This simplification will allow us to derive an exact solution for the model and illustrate the concepts and methods of the previous section in more explicit terms. The price for this simplification is of course the unrealistic assumption of constant residual variance which defines the model. The covariance matrix with the above simplification appears as $${\sf \sigma}_{ij}^{crv}={\bar{{\alpha}^{2}}} {\delta}_{ij}+ {\beta}_{i}{\beta}_{j} {\bar{{\rho}^{2}}}_{mkt}, \label{443}$$ whose rescaled version is $${\tilde{\sf \sigma}}_{ij}^{crv}={\gamma}^{2} {\delta}_{ij}+ {\hat{\beta}}_{i}{\hat{\beta}}_{j}. \label{444}$$ These equations are of course specialized versions of Eqs. (\[432\]) and (\[433\]). Referring to the results of the previous section, one can readily see that the spectrum of ${\tilde{\sf \sigma}}^{crv}$ consists of a major eigenvalue (exactly) equal to $1+{\gamma}^{2}$ \[cf. Eq. (\[437\])\], and $N-1$ minor eigenvalues, all equal to ${\gamma}^{2}$. Recall that these eigenvalues respectively correspond to the market-aligned and market-orthogonal portfolios introduced in §2. Not surprisingly, the spectrum of ${\tilde{\sf \sigma}}_{ij}^{crv}$ is found to be highly degenerate. 
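The claimed spectrum, a single major eigenvalue $1+{\gamma}^{2}$ and $N-1$ minor eigenvalues ${\gamma}^{2}$, is easy to confirm numerically (the betas below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
N = 50
beta = rng.normal(1.0, 0.3, size=N)   # illustrative betas
bhat = beta / np.linalg.norm(beta)    # unit vector hat{beta}
gamma2 = 0.005                        # constant gamma^2, illustrative

# Rescaled covariance of the constant-residual-variance model, Eq. (444):
# a scalar multiple of the identity plus the rank-1 projector onto hat{beta}
sigma_crv = gamma2 * np.eye(N) + np.outer(bhat, bhat)
evals, evecs = np.linalg.eigh(sigma_crv)   # eigenvalues in ascending order

e_major = evecs[:, -1]
e_major *= np.sign(e_major @ bhat)    # fix the arbitrary sign of eigh's output
```

Because the identity part shifts every eigenvalue uniformly, the eigenvectors are exactly those of the rank-1 projector: $\hat{\bm{\beta}}$ itself, and any orthonormal basis of its orthogonal complement.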
The eigenvector ${{\bf e}^{crv}}^{N}$ corresponding to the major eigenvalue is (exactly) equal to ${\hat{\beta}}_{i}$ \[cf. Eq. (\[438\])\], while the remaining $N-1$ minor eigenvectors are not uniquely determined[^6] and may be arbitrarily chosen to be any orthonormal set of $N-1$ vectors orthogonal to the major eigenvector ${\hat{\beta}}_{i}$. The expected return and volatility features of the $N-1$ market-orthogonal portfolios defined by this arbitrary choice, on the other hand, do depend on that choice, as the following analysis will show. Since our main objective is the determination of the efficient frontier, we shall choose the remaining $N-1$ eigenvectors with respect to their volatility level, which, it may be recalled from §2, is given by ${{V}_{\mu}}^{2}={v}_{\mu}^{2}/{{W}_{\mu}}^{2}$. For the present case, minimizing ${V}_{\mu}$ amounts to maximizing ${W}_{\mu}$. Therefore, we will look for a unit vector $\bf e$ that is orthogonal to $\bm{{\hat{\beta}}}$ as stipulated above [*and*]{} maximizes ${\sum}_{i=1}^{N}{e}_{i}$. In terms of rescaled quantities, this problem appears as $${\max \;}_{\bf e} {\;} {\bf e} \cdot {\hat{{\bf u}}} {\:} {\;} s.t. \; {{\bf e} \cdot {\bf e}}=1, {\:} {\:} {\bf e} \cdot {\hat{\bm{\beta}}}=0, \label{445}$$ where ${\hat{{\bf u}}}_{i}\stackrel{\rm def}{=}{N}^{-{1 \over 2}}(1,1, \ldots,1)$ is an $N$-dimensional unit vector all of whose components are equal. The solution to Eq. (\[445\]) may be found by standard methods provided that ${\hat{{\bf u}}}$ and ${\hat{\bm{\beta}}}$ are not parallel, a condition whose violation is very improbable and will henceforth be assumed to hold. On the other hand, it is clear from geometric considerations that the solution must be that linear combination of ${\hat{{\bf u}}}$ and ${\hat{\bm{\beta}}}$ which is orthogonal to ${\hat{\bm{\beta}}}$. 
Designating the solution vector as ${{\bf e}^{crv}}^{1}$, we find $${{\bf e}^{crv}_{i}}^{1}= [{\hat{u}}_{i}-\cos (\theta) {\hat{{\beta}}}_{i}]/ \sin(\theta), \label{446}$$ where $\theta$ is the angle formed by the unit vectors ${\hat{{\bf u}}}$ and ${\hat{\bm{\beta}}}$, constrained by the condition $0 < \theta \leq \pi /2 $ under our assumptions. Indeed a little algebra shows that $$\tan(\theta) = \delta \beta / {\bar {\beta}}, \label{447}$$ where ${\bar{\beta}}$ and $\delta \beta$ respectively denote the mean and the standard deviation of the $\beta$’s, i.e., ${N}^{-1}{\sum}_{i=1}^{N} {\beta}_{i}$ and ${[{N}^{-1}{\sum}_{i=1}^{N} {({\beta}_{i}-{\bar {\beta}})}^{2}]}^{1 \over 2}$. Equation (\[447\]) clearly shows that the angle $\theta$ represents the degree of scatter among the betas, vanishing when all betas are equal and increasing as they are made more unequal. Note that the condition $\theta >0$ stipulated above excludes the (improbable) case of uniform betas. Note also that the condition ${\bf e} \cdot {\hat{\bm{\beta}}}=0$ in Eq. (\[445\]) is equivalent to the vanishing of the portfolio beta for ${{\bf e}^{crv}_{i}}^{1}$, in contrast to the same quantity for the market-aligned portfolio which is found to be ${\bar {\beta}}/{\cos}^{2}(\theta)$. Thus far we have determined two eigenvectors, the market-aligned ${{\bf e}^{crv}}^{N}$, and the minimum-volatility, market-orthogonal eigenvector ${{\bf e}^{crv}}^{1}$. The remaining $N-2$ market-orthogonal eigenvectors will of course have to be orthogonal to these, which immediately implies that they will all be orthogonal to ${\hat{{\bf u}}}$. But orthogonality to ${\hat{{\bf u}}}$ implies a vanishing weight, thus implying that these portfolios require zero initial investment. Stated differently, these $N- 2$ portfolios are *critically* leveraged, with the short-sold assets precisely balancing the purchased ones in each portfolio. 
Under these circumstances, any volatility in portfolio return would imply an infinite variance for that principal portfolio because of the vanishing initial investment. Note that the return-adjusted volatility for these portfolios, on the other hand, need not (and in typical cases will not) diverge at all. As stated earlier, the efficient frontier in the presence of a riskless asset has a simple allocation rule which requires that each principal portfolio be included in inverse proportion to its variance. For the current case, this rule clearly excludes the $N-2$ portfolios described above from the efficient frontier, leaving the first two principal portfolios and the riskless asset as the only constituents. Thus for the special case of constant residual variance, a knowledge of the two distinguished principal portfolios determined above is all that is needed to specify the efficient frontier when a riskless asset is present. For this reason, we will not continue with the explicit construction of the remaining $N-2$ eigenvectors. At this point we can determine the expected value and the variance of the two principal portfolios determined above according to the definitions and formulae given in §2. Straightforward algebra leads to $${R}^{crv}_{N}={{\sum}_{i=1}^{N}{\hat{{\beta}}}_{i}({\bar{{\alpha}_{i}}} +{\beta}_{i}{\bar{\rho}}_{mkt}) \over {N}^{{1 \over 2}} \cos (\theta)}, \;\; {({V}_{N}^{crv})}^{2}={ {\bar{{\alpha}^{2}}} + {\bm{\beta}} \cdot {\bm{\beta}} {\bar{{\rho}^{2}}}_{mkt} \over N {\cos}^{2} (\theta)}, \label{448}$$ for the market-aligned portfolio, and $${R}^{crv}_{1}={{r}^{av} \over {\sin}^{2} (\theta)} -{\cot}^{2} (\theta){R}^{crv}_{N} , \;\; {({V}_{1}^{crv})}^{2}={ {\bar{{\alpha}^{2}}} \over N {\sin}^{2} (\theta)}, \label{449}$$ for the market-orthogonal, minimum volatility principal portfolio. 
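Since the portfolio variance is simply ${V}^{2}={\bf e} \cdot {\sf \sigma} {\bf e}/{W}^{2}$ with $W={\sum}_{i}{e}_{i}$, the variance formulas of Eqs. (\[448\]) and (\[449\]), together with Eq. (\[447\]), can be verified directly against the covariance matrix of Eq. (\[443\]), with no rescaling required. A NumPy sketch (the parameter values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
N, alpha2, rho2_mkt = 300, 0.04, 0.01
beta = rng.normal(1.0, 0.5, N)

sigma = alpha2 * np.eye(N) + rho2_mkt * np.outer(beta, beta)   # Eq. (443)

u_hat = np.ones(N) / np.sqrt(N)
b_hat = beta / np.linalg.norm(beta)
cos_t = u_hat @ b_hat
sin_t = np.sqrt(1 - cos_t**2)

# Eq. (447): tan(theta) equals (std of the betas) / (mean of the betas)
assert np.isclose(sin_t / cos_t, beta.std() / beta.mean())

e1 = (u_hat - cos_t * b_hat) / sin_t          # market-orthogonal, Eq. (446)
eN = b_hat                                    # market-aligned

V2 = lambda e: (e @ sigma @ e) / e.sum()**2   # V^2 = v^2 / W^2

# variance formulas of Eqs. (448) and (449)
assert np.isclose(V2(eN), (alpha2 + rho2_mkt * beta @ beta) / (N * cos_t**2))
assert np.isclose(V2(e1), alpha2 / (N * sin_t**2))
```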
In order to facilitate comparison with the perturbative results of §2 for the general single-index model, we also record here the return-adjusted volatilities of these portfolios; $${\check{V}}^{crv}_{N} ={{({\bar{{\alpha}^{2}}} + {\bm{\beta}} \cdot {\bm{\beta}} {\bar{{\rho}^{2}}}_{mkt})}^{1 \over 2} \over {\sum}_{i=1}^{N}{\hat{{\beta}}}_{i} ({\bar{{\alpha}_{i}}}+{\beta}_{i}{\bar{\rho}}_{mkt})}, \label{450}$$ $${\check{V}}^{crv}_{1} = {{[{\bar{{\alpha}^{2}}} {\sin}^{2} (\theta)]}^{1 \over 2} \over {N}^{-{1 \over 2}}[{r}^{av} -{\cos}^{2} (\theta){R}^{crv}_{N}] }. \label{451}$$ The results just derived demonstrate the powerful volatility reduction effect of diversification coupled with short sales for the market-orthogonal portfolio ${{\bf e}^{crv}}^{1}$. To see this, let us assume a typical value for $\tan (\theta)$ of the order of unity \[for reasonably large $N$; cf. Eq. (\[447\])\]. We then find from the above results $${\lim}_{N \rightarrow \infty} \; {{V}^{crv}_{N}}={[{{\bm{\beta}} \cdot {\bm{\beta}} \over N {\cos}^{2} (\theta)} {\bar{{\rho}^{2}}}_{mkt}]}^{1 \over 2}, \; {\lim}_{N \rightarrow \infty}\;{{V}^{crv}_{1}}=0. \label{452}$$ Note that the quantity ${\bm{\beta}} \cdot {\bm{\beta}}$ in general grows in proportion to $N$, and therefore that ${\bm{\beta}} \cdot {\bm{\beta}} / N {\cos}^{2}(\theta)$ is typically of the order of unity for large $N$. Thus the variance of the market-aligned portfolio will be of the order of ${\bar{{\rho}^{2}}}_{mkt}$ for large $N$, as would be expected. The variance of the market-orthogonal portfolio, on the other hand, vanishes altogether in proportion to ${N}^{-1}$ in the same limit of large $N$. These conclusions echo our results in §2, Eq. (\[442\]) et seq. 
Note that the vanishing of the market risk for the market-orthogonal portfolio, which is in addition to the vanishing of the “diversifiable” (or specific) risk expected for large $N$ (Elton and Gruber 1991), is a specific result of leveraging coupled with hedging (or diversification). Similarly, the infinite volatility and expected return levels of the $N-2$ remaining portfolios of this model underscore the dramatic levels of volatility as well as return that can be expected of highly leveraged portfolios. We are now in a position to determine the composition of the efficient frontier for the constant residual variance case. As stated above, we find from the allocation rule of the efficient frontier that ${{X}^{crv}}_{\mu}=0 $ for $2 \leq \mu \leq N-1$, since the corresponding inverse variances ${{Z}^{crv}}_{\mu}$ all vanish. The three components of the efficient frontier are the riskless asset together with ${{\bf e}^{crv}}^{1}$ and ${{\bf e}^{crv}}^{N}$. Furthermore, the latter portfolio will be strongly disfavored relative to the former for large $N$ since its variance grows in proportion to $N$ relative to that of the former; cf. Eq. (\[452\]) et seq. Indeed for reasonably large $N$, the efficient frontier is essentially a combination of ${{\bf e}^{crv}}^{1}$ and the riskless asset; $${X}^{crv}_{0} {\rightarrow} { {R}^{crv}_{1}-{\cal R} \over {R}^{crv}_{1}- {R}_{0}}, \; {X}^{crv}_{1}{\rightarrow}{{\cal R}-{R}_{0} \over {R}^{crv}_{1}-{R}_{0} }, \; \;{\rm as}\; {N \rightarrow \infty}, \label{4840}$$ while $${X}^{crv}_{N}{\rightarrow} 0, \; {V}_{eff}=\vert {{\cal R}-{R}_{0} \over {R}_{1}^{crv}-{R}_{0} } \vert {[{ {\bar{{\alpha}^{2}}} \over N {\sin}^{2} (\theta)}]}^{1 \over 2} {\rightarrow}0 \;{\rm as}\; N \rightarrow \infty. 
\label{485}$$ This last property, i.e., the vanishing of the efficient portfolio volatility (i.e., the market as well as the specific risk) in proportion to ${N}^{-{1 \over 2}}$ in the limit as $N\rightarrow \infty$, also holds for the general single-index model, as can be discerned from the results of §2. As discussed earlier, this total vanishing of the portfolio volatility is a specific consequence of leveraging. We close this section by summarizing the results established above. [**Proposition 2.**]{} [*The principal portfolios of the constant residual variance single-index model consist of a market-aligned portfolio ${{\bf e}^{crv}}^{N}$, a minimum-volatility, market-orthogonal portfolio ${{\bf e}^{crv}}^{1}$, and $N-2$ critically leveraged market-orthogonal portfolios with infinite volatility and expected return, as given in Eqs. (\[446\])-(\[452\]). Furthermore, as $N \rightarrow \infty$, the efficient portfolio reduces to a combination of ${{\bf e}^{crv}}^{1}$ and the riskless asset with a vanishing total volatility, as given in Eqs. (\[4840\])-(\[485\]).*]{} Concluding Remarks {#concluding-remarks .unnumbered} ===================== The main objective of this work, namely recasting the efficient portfolio problem in terms of principal portfolios, whereby the selection is made from an uncorrelated set of portfolios instead of the original asset set, has been implemented in detail. As we have emphasized throughout, principal portfolios are the natural instruments for analyzing the efficient frontier when short sales are allowed. More generally, they are the natural instruments for any stock selection process based on the mean-variance formulation. In effect, the analysis is transformed from the original, correlated environment of individual assets to one of uncorrelated portfolios, thus simplifying both the conceptual framework and the practical procedures involved.
This is of course another example of the golden rule in applied analysis, which teaches us that when the basic object in the problem involves a quadratic form, it is often advantageous to treat it in the principal axes basis. In order to illustrate the concepts and methods of this paper, we have analyzed the single-index model as well as the constant residual variance version of it in considerable detail. Indeed, our perturbative treatment of the general single-index model for large asset sets has revealed interesting new spectral features for that model. In particular, the bifurcation into market-aligned and market-orthogonal portfolios found in §2 is an important observation on the volatility structure of the model. This is particularly so in view of the fact that for sufficiently large asset sets the market-aligned portfolio is all but excluded from the efficient frontier, thereby eliminating the component of volatility commonly referred to as market risk. The constant residual variance version of the model, while admittedly oversimplified, brings out the above-mentioned bifurcation as well as the elimination of the market risk in a clear and explicit manner. The virtual elimination of efficient portfolio volatility for reasonably large asset sets, as well as the occurrence of principal portfolios with large volatility and expected return, demonstrates the combined effects of leveraging and diversification. As an example, consider the market-aligned portfolio in the single-index model. This portfolio does not involve any short positions and is unleveraged. Consequently, its volatility is characteristic of the market volatility and does not vanish for large asset sets. The market-orthogonal portfolios, on the other hand, are leveraged to varying degrees, a feature that (together with hedging) largely immunizes them against the market volatility.
These features are amplified and explicitly demonstrated by the two distinguished principal portfolios of the constant residual variance model. The remaining portfolios of this model, it may be recalled, are critically leveraged and as such have infinite volatility and expected return. We conclude by observing that the mean-variance description of risky asset prices, whereby short-term price variations are taken to be random fluctuations, has been a remarkably fruitful idea for describing the dynamics of financial markets, its well-known limitations notwithstanding. We have attempted in this work to extend the utility of that idea by providing a new analytical tool for its implementation.\ \ [**REFERENCES**]{}\ [Bachelier, L.]{} (1900) “Théorie de la Speculation,” [*Annales Scientifiques de l’École Normale Supérieure*]{} **III-17**, 21-86.\ [Black, Fischer and Scholes, Myron S.]{} (1973) “The pricing of options and corporate liabilities,” [*Journal of Political Economy*]{} **81**, 637-654.\ [Elton, Edwin J. and Gruber, Martin J.]{} (1995) [*Modern Portfolio Theory and Investment Analysis*]{}, 5th Ed., Wiley: New York.\ [Fama, Eugene F. and Miller, Merton H.]{} (1972) [*The Theory of Finance*]{}, Holt Rinehart & Winston.\ [Ingersoll, J.
E.]{} (1987) [*Theory of Financial Decision Making*]{}, Totowa, New Jersey: Rowman and Littlefield.\ [Lintner, John]{} (1965) “The valuation of risk assets and the selection of risky investments in stock portfolios and capital budgets,” [*Review of Economics and Statistics*]{} **47**, 13-37.\ [Markowitz, Harry]{} (1952) “Portfolio selection,” [*Journal of Finance*]{} **7**, 77-91.\ [Markowitz, Harry]{} (1959) [*Portfolio Selection: Efficient Diversification of Investments*]{}, New York: John Wiley & Sons.\ [Merton, Robert]{} (1972) “An Analytic Derivation of the Efficient Portfolio Frontier,” [*Journal of Financial and Quantitative Analysis*]{} **7**, 1851-1872.\ [Mossin, Jan]{} (1966) “Equilibrium in a capital asset market,” [*Econometrica*]{} **34**, 768-783.\ [Ross, S. A.]{} (1976) “The Arbitrage Theory of Capital Asset Pricing,” [*Journal of Economic Theory*]{} **13**, 341-360.\ [Sharpe, William F.]{} (1963) “A simplified model for portfolio analysis,” [*Management Science*]{} **11**, 277-293.\ [Sharpe, William F.]{} (1964) “Capital asset prices: A theory of market equilibrium under conditions of risk,” [*Journal of Finance*]{} **19**, 425-442.\ [Tobin, James]{} (1958) “Liquidity preference as behavior towards risk,” [*Review of Economic Studies*]{} **25**, 65-86.\ [Partovi, M. Hossein and Caputo, Michael R.]{} (2004) “Principal Portfolios: Recasting the Efficient Frontier,” *Economics Bulletin* **7**, 1-10.\ [Poddig, Thorsten and Unger, Albina]{} (2012) “On the robustness of risk-based asset allocations,” *Financial Markets and Portfolio Management* **26**, 369-401.\ [Kind, Christoph]{} (2013) “Risk-Based Allocation of Principal Portfolios,” SSRN: http://ssrn.com/abstract=2240842.\ [^1]: We use the term “leveraged” here to imply that the portfolio contains borrowed assets, e.g., short-sold positions.
[^2]: Here the portfolio beta is defined, as in the single-index model literature, to be the weighted mean of the betas. [^3]: This is the approximation in which any contribution to ${\tilde{v}}_{N}^{2}$ whose ratio to ${\gamma}_{i}^{2}$ vanishes in the $N \rightarrow \infty$ limit will be neglected. [^4]: This is the approximation in which any contribution to ${W}_{N}$ whose ratio to ${\hat{\beta}}_{i}$ vanishes in the $N \rightarrow \infty$ limit will be neglected. [^5]: These market-orthogonal portfolios essentially eliminate what is referred to as “market risk” in the single-index model jargon.
--- abstract: 'We unify two recent results concerning equilibration in quantum theory. We first generalise a proof of Reimann \[PRL 101, 190403 (2008)\], that the expectation value of ‘realistic’ quantum observables will equilibrate under very general conditions, and discuss its implications for the equilibration of quantum systems. We then use this to re-derive an independent result of Linden et al. \[PRE 79, 061103 (2009)\], showing that small subsystems generically evolve to an approximately static equilibrium state. Finally, we consider subspaces in which all initial states effectively equilibrate to the same state.' author: - 'Anthony J. Short' title: Equilibration of quantum systems and subsystems --- Introduction ============ Recently there has been significant progress in understanding the foundations of statistical mechanics, based on fundamentally quantum arguments [@Mahler; @Goldstein1; @Goldstein2; @PopescuShortWinter; @Tasaki; @reimann1; @reimann2; @us1; @us2; @gogolin1; @gogolin2]. In particular, Reimann [@reimann1; @reimann2] has shown that the expectation value of any ‘realistic’ quantum observable will equilibrate to an approximately static value, given very weak assumptions about the Hamiltonian and initial state. Interestingly, the same assumptions were used independently by Linden *et al.* [@us1; @us2] to prove that any small quantum subsystem will evolve to an approximately static equilibrium state (such that even ‘unrealistic’ observables on the subsystem equilibrate). In this paper we unify these two results, by deriving the central result of Linden *et al.* [@us1] from a generalisation of Reimann’s result. We also offer a further discussion and extension of Reimann’s results, showing that systems will appear to equilibrate with respect to all reasonable experimental capabilities. Finally, we identify subspaces of initial states which equilibrate to the same state. Equilibration of expectation values.
==================================== We prove below a generalisation of Reimann’s result that the expectation value of any operator will almost always be close to that of the equilibrium state [@reimann1]. We extend his results to include non-Hermitian operators (which we will use later to prove equilibration of subsystems), correct a subtle mistake made in [@reimann2] when considering degenerate Hamiltonians, and improve the bound obtained by a factor of 4. As in [@reimann2; @us2], we make one assumption in the proof, which is that the Hamiltonian has *non-degenerate energy gaps*. This means that given any four energy eigenvalues $E_k, E_l, E_m$ and $E_n$, $$\label{eq:non-degen} E_k - E_l = E_m - E_n \Rightarrow \begin{array}{c} (E_k = E_l \; \textrm{and}\; E_m = E_n) \\ \textrm{or} \\ (E_k = E_m \; \textrm{and}\; E_l = E_n). \end{array}$$ Note that this definition allows degenerate energy levels, which may arise due to symmetries. However, it ensures that all subsystems physically interact with each other. In particular, given any decomposition of the system into two subsystems ${\mathcal{H}}= {\mathcal{H}}_A \otimes{\mathcal{H}}_B$, equation (\[eq:non-degen\]) will not be satisfied by any Hamiltonian of the form $H=H_A \otimes I_B + I_A \otimes H_B$ (unless either $H_A$ or $H_B$ is proportional to the identity) [^1]. Consider a $d$-dimensional quantum system evolving under a Hamiltonian $H=\sum_n E_n P_n$, where $P_n$ is the projector onto the eigenspace with energy $E_n$. Denote the system’s density operator by $\rho(t)$, and its time-averaged state by $\omega \equiv {\left\langle \rho(t) \right\rangle_t}$. 
If $H$ has non-degenerate energy gaps, then for any operator $A$, $$\label{eq:theorem} \sigma_A^2 \equiv {\left\langle \left| {\operatorname{tr}}\left(A \rho\left(t\right) \right) - {\operatorname{tr}}\left( A \omega\right) \right|^2 \right\rangle_t} \leq \frac{\Delta(A)^2 }{4 d_{{\rm eff}}} \leq \frac{\|A\|^2}{d_{{\rm eff}}}$$ where $\|A\|$ is the standard operator norm [^2], $$\Delta(A) \equiv 2 \min_{c \in \mathbb{C}} \| A- c I \|,$$ and $$d_{{\rm eff}} \equiv \frac{1}{\sum_n \big( {\operatorname{tr}}(P_n \rho(0)) \big)^2}.$$ This bound will be most significant when the number of different energies incorporated in the state, characterised by the effective dimension $ d_{{\rm eff}}$, is very large. Note that $1 \leq d_{{\rm eff}} \leq d$, and that $d_{{\rm eff}}=N$ when a measurement of $H$ would yield $N$ different energies with equal probability. For pure states $d_{{\rm eff}} = {\operatorname{tr}}(\omega^2)^{-1}$ as in [@us1; @us2], but it may be smaller for mixed states when the Hamiltonian is degenerate. The quantity $\Delta(A) $ gives the range of eigenvalues when $A$ is Hermitian, and gives a slightly tighter bound than the operator norm. Following [@reimann2], we could improve the bound further by replacing $\Delta(A) $ with a state- and Hamiltonian-dependent term [^3], however we omit this step here for simplicity. **Proof:** To avoid some difficulties which arise when considering degenerate Hamiltonians, we initially consider a pure state $\rho(t) = {{| \psi(t) \rangle}\!{\langle \psi(t) |}}$, then extend the results to mixed states via purification. We can always choose an energy eigenbasis such that ${| \psi(t) \rangle}$ has non-zero overlap with only a single energy eigenstate ${| n \rangle}$ of each distinct energy, by including states ${| n \rangle} = P_n {| \psi(0) \rangle}/\sqrt{{\langle \psi(0) |} P_n {| \psi(0) \rangle}}$ whenever ${\langle \psi(0) |} P_n {| \psi(0) \rangle}>0$. 
The state at time $t$ is then given by $${| \psi(t) \rangle} = \sum_{n} c_n e^{-i E_n t/\hbar} {| n \rangle},$$ where $c_n = {\left\langle n| \psi(0) \right\rangle}$. This state will evolve in the subspace spanned by $\{{| n \rangle}\}$ as if it were acted on by the non-degenerate Hamiltonian $H'=\sum_n E_n {{| n \rangle}\!{\langle n |}}$. For any operator $A$, it follows that $$\begin{aligned} \sigma_A^2\!\!\! &=& {\left\langle |{\operatorname{tr}}(A [\rho(t) - \omega] )|^2 \right\rangle_t} \nonumber \\ &=& {\left\langle \left|\sum_{n \neq m} c_n c_m^* e^{i(E_m-E_n)t/\hbar} {\langle m |} A {| n \rangle} \right|^2 \right\rangle_t} \nonumber \\ &=& \!\!\!\! \sum_{\scriptsize \begin{array}{c} n \neq m \\ k\neq l \end{array}} \!\!\! \! c_n c_m^* c_k c_l^*{\left\langle e^{i(E_m-E_n + E_l - E_k)t/\hbar} \right\rangle_t}{\langle m |} A { | n \rangle \! \langle l |} A^{\dag} {| k \rangle} \nonumber \\ &=& \sum_{n,m} |c_n|^2 |c_m|^2 {\langle m |} A { | n \rangle \! \langle n |} A^{\dag} {| m \rangle} - \sum_{n} |c_n|^4 |{\langle n |}A {| n \rangle}|^2 \nonumber \\ & \leq &{\operatorname{tr}}( A \omega A^{\dag} \omega ) \nonumber \\ & \leq & \sqrt{{\operatorname{tr}}(A^{\dag}\!A\, \omega^2) {\operatorname{tr}}(A A^{\dag} \omega^2)} \nonumber \\ &\leq& \| A \|^2 {\operatorname{tr}}(\omega^2) \label{eq:pure_theorem} \nonumber \\ &=& \| A \|^2{\operatorname{tr}}\left[ \left(\sum_n |c_n|^2 {{| n \rangle}\!{\langle n |}}\right)^2\right] \nonumber \\ &=& \| A \|^2 \sum_n \big( {\operatorname{tr}}(P_n \rho(0)) \big)^2 \nonumber \\ &=& \frac{\| A \|^2}{d_{{\rm eff}}}. 
\end{aligned}$$ In the fourth line, we have used the assumption that the Hamiltonian has non-degenerate energy gaps, in the sixth line we have used the Cauchy-Schwarz inequality for operators with scalar product ${\operatorname{tr}}(A^{\dag} B)$ and the cyclic symmetry of the trace, and in the seventh line we have used the fact that for positive operators $P$ and $Q$, ${\operatorname{tr}}(PQ) \leq \|P\| {\operatorname{tr}}(Q)$. This gives the weaker bound in the theorem. To obtain the tighter bound, we note that $\sigma_A$ is invariant if $A$ is replaced by $\tilde{A} = A- c I$ for any complex $c$. Performing this substitution with $c$ chosen so as to minimize $\|\tilde{A}\|$, we can replace $\|A\|$ with $\|\tilde{A}\|=\Delta(A)/2$. An extension to mixed states can be obtained via purification, following the approach discussed in [@us2]. Given any initial state $\rho(0)$ on ${\mathcal{H}}$, we can always define a pure state ${| \phi(0) \rangle}$ on ${\mathcal{H}}\otimes {\mathcal{H}}$ such that the reduced state of the first system is $\rho(0)$. By evolving ${| \phi(t) \rangle}$ under the joint Hamiltonian $H'=H \otimes I$, we will recover the correct evolution $\rho(t)$ of the first system, and $H'$ will have non-degenerate energy gaps whenever $H$ does. The expectation value of any operator $A$ for $\rho(t)$ will be the same as the expectation value of $A'=A \otimes I$ on the total system, and we also obtain $\Delta(A')=\Delta(A)$, $\|A\|=\|A'\|$, and $d_{{\rm eff}}'=d_{{\rm eff}}$. However, note that ${\operatorname{tr}}(\omega'^2)$ does not equal ${\operatorname{tr}}(\omega^2)$. Using the result for pure states, we can obtain (\[eq:theorem\]) in the mixed state case from $$\sigma_A^2 = \sigma_{A'}^2 \leq \frac{\Delta(A')^2 }{4 d_{{\rm eff}}'} =\frac{\Delta(A)^2 }{4 d_{{\rm eff}}}.$$ This completes the proof.
$\square$ In [@reimann1], Reimann proves that $\sigma_A^2 \leq \Delta(A)^2 {\operatorname{tr}}(\omega^2)$ when $A$ is Hermitian and the Hamiltonian has non-degenerate levels as well as non-degenerate gaps. However, it appears that there is a subtle mistake in [@reimann2] when extending this proof to degenerate Hamiltonians. Specifically, the step from equation (D.11) to (D.12) in [@reimann2] does not follow if the state has support on more than one energy eigenstate in a degenerate subspace. A counterexample is provided by the mixed state $\rho(0) = \frac{1}{k} {{| 0 \rangle}\!{\langle 0 |}} \otimes {I}$, of a qubit and a $k$-dimensional system, with $H=({ | 0 \rangle \! \langle 1 |} + { | 1 \rangle \! \langle 0 |}) \otimes {I}$ and $A = ({{| 0 \rangle}\!{\langle 0 |}}-{{| 1 \rangle}\!{\langle 1 |}}) \otimes {I}$. In this case $\sigma^2_A = \frac{1}{2}$, $\Delta(A)=2$ and ${\operatorname{tr}}(\omega^2) = \frac{1}{2k}$, giving $\sigma_A^2 > \Delta(A)^2 {\operatorname{tr}}(\omega^2)$ when $k>4$. However, subsequently in [@reimann2], $ {\operatorname{tr}}(\omega^2)$ is replaced by an upper bound of $\max_n {\operatorname{tr}}(\rho(0) P_n)$, and this also upper bounds $d_{{\rm eff}}^{-1}$, so later results are unaffected. Note that the bound given by Theorem 1 for the same example is satisfied tightly for all $k$, as $ d_{{\rm eff}}=2$ and thus $\sigma_A^2 = \frac{1}{2} = \frac{\Delta(A)^2}{4 d_{{\rm eff}}}$. Distinguishability ================== When $A$ represents a physical observable and $\rho(0)$ a realistic initial state, it is argued in [@reimann1] that the difference between ${\operatorname{tr}}(A \rho(t))$ and ${\operatorname{tr}}(A \omega)$ will almost always be less than realistic experimental precision. This is then taken to imply that $\rho(t)$ will be indistinguishable from $\omega$ for the overwhelming majority of times. 
However, the fact that two states yield the same expectation value for a measurement does not necessarily imply that they cannot be distinguished by it. For example, a measurement yielding an equal mixture of $+1$ and $-1$ outcomes for one state and always yielding $0$ for a second state clearly can distinguish the two states, despite the expectation values in the two cases being identical. Furthermore, even though any particular realistic measurement cannot distinguish $\rho(t)$ from $\omega$ for almost all times, this does not imply that for almost all times, no realistic measurement can distinguish $\rho(t)$ from $\omega$. This is because the optimal measurement to distinguish the two states may change over time. Finally, the measurement precision is not easy to define for measurements with discrete outcomes. To address these issues, we first note that the most general quantum measurement is not described by a Hermitian operator, but by a positive operator valued measure (POVM). For simplicity, we consider POVMs with a finite set of outcomes, which is reasonable for realistic measurements, as even continuous outputs such as pointer position cannot be determined or recorded with infinite precision [^4]. A general measurement $M$ is described by giving a positive operator $M_r$ for each possible measurement result $r$, satisfying $\sum_r M_r= I$. The probability of obtaining result $r$ when measuring $M$ on $\rho$ is given by ${\operatorname{tr}}(M_r \rho)$. Suppose you are given an unknown quantum state, which is either $\rho_1$ or $\rho_2$ with equal probability. 
Your maximum success probability in guessing which state you were given after performing the measurement $M$ is $$p^{\textrm{succ}}_{M} = \frac{1}{2} ( 1 + D_{M} (\rho_1, \rho_2))$$ where $$D_{M} (\rho_1, \rho_2) \equiv \frac{1}{2} \sum_{r} | {\operatorname{tr}}(M_r \rho_1) - {\operatorname{tr}}(M_r \rho_2) |.$$ We refer to $D_{M} (\rho_1, \rho_2)$ as the distinguishability of $\rho_1$ and $\rho_2$ using the measurement $M$. Similarly, the distinguishability of two states using any measurement from a set ${\mathcal{M}}$ is given by $$D_{{\mathcal{M}}} (\rho_1, \rho_2) \equiv \max_{M \in {\mathcal{M}}} D_{M} (\rho_1, \rho_2).$$ Note that $$0 \leq D_{{\mathcal{M}}} (\rho_1, \rho_2) \leq D(\rho_1, \rho_2) \leq 1,$$ where $D(\rho_1, \rho_2) = \frac{1}{2} {\operatorname{tr}}|\rho_1 - \rho_2|$ is the trace-distance, which is equal to $D_{{\mathcal{M}}} (\rho_1, \rho_2)$ when $\mathcal{M}$ includes all measurements. Effective equilibration of large systems ======================================== For typical macroscopic systems, the dimension of ${\mathcal{H}}$ will be incredibly large (e.g. for Avogadro’s number $N_A$ of spin-$\frac{1}{2}$ particles, we would have $d >10^{10^{23}}$), and it is unrealistic to be able to perform any measurement with this many outcomes, let alone all such measurements. For practical purposes, we are therefore restricted to some set of realistic physical measurements ${\mathcal{M}}$. In this case, we would expect ${\mathcal{M}}$ to be a finite set, as all realistic experimental setups (including all settings of variable parameters) will be describable within a finite number of pages of text. We say that a state *effectively equilibrates* if $${\left\langle D_{{\mathcal{M}}} (\rho(t), \omega) \right\rangle_t} \ll 1.$$ This means that for almost all times, it is almost impossible to distinguish the true state $\rho(t)$ from the equilibrium state $\omega$ using any achievable measurement.
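To make the measurement-dependence of $D_M$ concrete, consider the following minimal qubit sketch (the states and POVMs are chosen purely for illustration): a computational-basis ($z$) measurement cannot distinguish the maximally mixed state from ${{| + \rangle}\!{\langle + |}}$ at all, whereas an $x$-basis measurement attains the full trace-distance $D(\rho_1,\rho_2)=\frac{1}{2}$.

```python
import numpy as np

def D_M(povm, r1, r2):
    # distinguishability of r1, r2 using the measurement {M_r}
    return 0.5 * sum(abs(np.trace(M @ (r1 - r2)).real) for M in povm)

rho1 = np.eye(2) / 2                         # maximally mixed qubit
plus = np.array([1.0, 1.0]) / np.sqrt(2)
rho2 = np.outer(plus, plus)                  # |+><+|

Z = [np.diag([1.0, 0.0]), np.diag([0.0, 1.0])]   # computational-basis POVM
X = [rho2, np.eye(2) - rho2]                     # x-basis POVM

# the z measurement is blind to the difference; the x measurement achieves
# the trace-distance D(rho1, rho2) = 1/2
assert np.isclose(D_M(Z, rho1, rho2), 0.0)
assert np.isclose(D_M(X, rho1, rho2), 0.5)
```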
We can obtain an upper bound on the average distinguishability as a corollary of theorem 1. Consider a quantum system evolving under a Hamiltonian with non-degenerate energy gaps. The average distinguishability of the system’s state $\rho(t)$ from $\omega$, given a finite set of measurements ${\mathcal{M}}$, satisfies $${\left\langle D_{{\mathcal{M}}} (\rho(t), \omega) \right\rangle_t} \leq \frac{\sum_{M \in {\mathcal{M}}} \sum_{r } \Delta(M_r)}{4\sqrt{d_{{\rm eff}}}} \leq \frac{N({\mathcal{M}})}{4\sqrt{d_{{\rm eff}}}}, \label{eq:effective_equilibration}$$ where $N({\mathcal{M}})$ is the total number of outcomes for all measurements in ${\mathcal{M}}$. The first bound will be tighter when measurements are imprecise, as each outcome is weighted by $\Delta(M_r) \in [0,1]$, reflecting its usefulness in distinguishing states [^5]. **Proof:** $$\begin{aligned} {\left\langle D_{{\mathcal{M}}} (\rho(t), \omega) \right\rangle_t} &=& {\left\langle \max_{M(t) \in {\mathcal{M}}} D_{M(t)} (\rho(t), \omega) \right\rangle_t} \nonumber\\ &\leq& \sum_{M \in {\mathcal{M}}} {\left\langle D_M (\rho(t), \omega) \right\rangle_t} \nonumber\\ &=&\frac{1}{2} \sum_{M \in {\mathcal{M}}} \sum_{r } {\left\langle | {\operatorname{tr}}(M_r \rho(t) ) - {\operatorname{tr}}(M_r \omega) | \right\rangle_t} \nonumber\\ &\leq&\frac{1}{2} \sum_{M \in {\mathcal{M}}} \sum_{r } \sqrt{ \sigma_{M_r}^2 } \nonumber\\ &\leq & \frac{ \sum_{M \in {\mathcal{M}}} \sum_{r } \Delta(M_r)}{4\sqrt{d_{{\rm eff}}}} \nonumber\\ &\leq & \frac{N({\mathcal{M}})}{4\sqrt{d_{{\rm eff}}}}. \label{eq:distinguishability} \qquad \square\end{aligned}$$ In realistic experiments, we would expect the bound on the right of (\[eq:effective\_equilibration\]) to be much smaller than 1, implying that the state of the system effectively equilibrates to $\omega$. Consider again our system of $N_A$ spins. 
If $d_{{\rm eff}}\geq~d^{\,0.1} $, even if we take ${\mathcal{M}}$ to include any experiment whose description could be written in $10^{19}$ words, each of which generates up to $10^{21}$ bytes of data, we would still obtain ${\left\langle D_{{\mathcal{M}}} (\rho(t), \omega) \right\rangle_t} \leq 1/(10^{10^{22}})$. Equilibration of small subsystems ================================= Now consider that the system can be decomposed into two parts, a small subsystem of interest $S$, and the remainder of the system which we refer to as the bath $B$. Then ${\mathcal{H}}= {\mathcal{H}}_S \otimes {\mathcal{H}}_B$, where ${\mathcal{H}}_{S/B}$ has dimension $d_{S/B}$. It is helpful to define the reduced states of the subsystem $\rho_S(t) = {\operatorname{tr}}_B(\rho(t))$ and $\omega_S = {\operatorname{tr}}_B(\omega)$. In such cases, it was shown in [@us1; @us2] that for sufficiently large $d_{{\rm eff}}$ the subsystem’s state fully equilibrates, such that for almost all times, no measurement on the subsystem (even ‘unrealistic’ ones) can distinguish $\rho(t)$ from $\omega$. In particular, when $\rho(t)$ is pure and the Hamiltonian has non-degenerate energy levels as well as non-degenerate energy gaps, it is proven in [@us1] that $$\label{eq:oureqn} {\left\langle D(\rho_S(t), \omega_S) \right\rangle_t} \leq \frac{1}{2} \sqrt{\frac{d_S^2}{d_{{\rm eff}}}}.$$ Extending this result to degenerate Hamiltonians and initially mixed states is discussed in [@us2]. We cannot recover this bound directly from (\[eq:effective\_equilibration\]) by considering the set of all measurements on the subsystem, because this set contains an infinite number of measurements. 
However, we can derive (\[eq:oureqn\]) from Theorem 1 by considering an orthonormal operator basis for the subsystem, given by the $d_S^2$ operators [@schwinger] $$F_{(d_Sk_0 + k_1)} = \frac{1}{\sqrt{d_S}} \sum_{l} e^{\frac{2 \pi i l k_0}{d_S}} {| (l+k_1)\, \textrm{mod}\, d_S \rangle} {\langle l |}$$ where $k_0,k_1 \in \{0,1,\ldots d_S-1 \}$ and the states ${| l \rangle}$ are an arbitrary orthonormal basis for the subsystem. Then writing $(\rho_S(t) - \omega_S) = \sum_{k} \lambda_k(t) F_k$ we have $$\begin{aligned} {\left\langle D(\rho_S(t), \omega_S) \right\rangle_t}\!\! &=&\! \frac{1}{2} {\left\langle {\operatorname{tr}}\big|\sum_k \lambda_k(t) F_k \big| \right\rangle_t} \nonumber \\ &\leq&\! \frac{1}{2} {\left\langle \sqrt{ d_S {\operatorname{tr}}\big(\sum_{kl} \lambda_k(t) \lambda^*_l(t) F_l^{\dag} F_k \big)} \right\rangle_t} \nonumber \\ &\leq&\! \frac{1}{2} \sqrt{ d_S \sum_{kl} {\left\langle \lambda_k(t) \lambda^*_l(t) \right\rangle_t} {\operatorname{tr}}(F_l^{\dag} F_k)} \nonumber \\ &=&\! \frac{1}{2} \sqrt{ d_S \sum_k {\left\langle |\lambda_k(t)|^2 \right\rangle_t}} \nonumber \\ &=&\! \frac{1}{2} \sqrt{ d_S \sum_k {\left\langle \big|{\operatorname{tr}}\big((\rho(t) - \omega)F^{\dag}_k\! \otimes I \big)\big|^2 \right\rangle_t}} \nonumber \\ &\leq&\! \frac{1}{2} \sqrt{ d_S \sum_k \frac{\|F^{\dag}_k\! \otimes I\|^2 }{ d_{{\rm eff}}}} \nonumber \\ &\leq&\! \frac{1}{2} \sqrt{ \frac{d_S^2}{d_{{\rm eff}}}}.\end{aligned}$$ In the second line we have used a standard relation between the 1- and 2-norm, and in the sixth line we have used Theorem 1 for the non-Hermitian operator $F^{\dag}_k\! \otimes I$. Note that $\sqrt{d_S} F_k$ is unitary, and thus $\|F_k^{\dag} \otimes I\| = \frac{1}{\sqrt{d_S}}$. Universality of equilibrium states ================================== We have so far been concerned with when states equilibrate, rather than the nature of their equilibrium state. 
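A quick numerical check of the Schwinger basis used in the derivation above (a sketch for illustration; the small dimension $d_S=3$ is an arbitrary choice) confirms the two properties relied upon: orthonormality, ${\operatorname{tr}}(F_l^{\dag} F_k)=\delta_{lk}$, and unitarity of $\sqrt{d_S}\,F_k$, which gives $\|F_k^{\dag} \otimes I\| = 1/\sqrt{d_S}$.

```python
import cmath

d_S = 3  # an arbitrary small subsystem dimension for the check

def schwinger(k0, k1):
    """F_{d_S*k0+k1} = (1/sqrt(d_S)) sum_l e^{2 pi i l k0 / d_S} |(l+k1) mod d_S><l|."""
    M = [[0j] * d_S for _ in range(d_S)]
    for l in range(d_S):
        M[(l + k1) % d_S][l] = cmath.exp(2j * cmath.pi * l * k0 / d_S) / d_S ** 0.5
    return M

def dagger(A):
    return [[A[j][i].conjugate() for j in range(d_S)] for i in range(d_S)]

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(d_S)) for j in range(d_S)]
            for i in range(d_S)]

basis = [schwinger(k0, k1) for k0 in range(d_S) for k1 in range(d_S)]

# Orthonormality: tr(F_a^dag F_b) = delta_{ab}
for a, Fa in enumerate(basis):
    for b, Fb in enumerate(basis):
        t = sum(mat_mul(dagger(Fa), Fb)[i][i] for i in range(d_S))
        assert abs(t - (1.0 if a == b else 0.0)) < 1e-12

# sqrt(d_S) F_k is unitary: F^dag F = I / d_S, hence ||F_k|| = 1/sqrt(d_S)
for F in basis:
    P = mat_mul(dagger(F), F)
    for i in range(d_S):
        for j in range(d_S):
            assert abs(d_S * P[i][j] - (1.0 if i == j else 0.0)) < 1e-12
```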
However, one of the notable properties of equilibration is that many initial states effectively equilibrate to the same state, determined only by macroscopic properties such as temperature. Given a particular Hamiltonian and a set of realistic measurements ${\mathcal{M}}$, we can construct a partition of the Hilbert space into a direct sum of subspaces ${\mathcal{H}}= \bigoplus_k {\mathcal{H}}_k$, such that all states within ${\mathcal{H}}_k$ with large enough $d_{{\rm eff}}$ effectively equilibrate to the same state $\Omega_k$. One way to achieve this is to choose the subspaces such that each projector $\Pi_k$ onto ${\mathcal{H}}_k$ commutes with the Hamiltonian, and such that any two energy eigenstates in ${\mathcal{H}}_k$ are hard to distinguish. i.e. For some fixed $\epsilon$ satisfying $0 < \epsilon \ll 1$, and all normalised energy eigenstates ${| i \rangle}, {| j \rangle} \in {\mathcal{H}}_k$ $$D_{{\mathcal{M}}} ({{| i \rangle}\!{\langle i |}}, {{| j \rangle}\!{\langle j |}}) \leq \epsilon.$$ When $d_{{\rm eff}}$ is sufficiently large, it follows that all states in ${\mathcal{H}}_k$ effectively equilibrate to $\Omega_k = \Pi_k/ {\operatorname{tr}}(\Pi_k)$, as $$\begin{aligned} {\left\langle D_{{\mathcal{M}}}(\rho(t), \Omega_k) \right\rangle_t}\! &\leq& {\left\langle D_{{\mathcal{M}}}(\rho(t), \omega) \right\rangle_t} + {\left\langle D_{{\mathcal{M}}}(\omega, \Omega_k) \right\rangle_t} \nonumber \\ \! &\leq&\!\! \frac{N({\mathcal{M}})}{4\sqrt{d_{{\rm eff}}}} + \sum_{i,j} \frac{{\langle i |} \omega {| i \rangle}}{{\operatorname{tr}}(\Pi_k)} D_{{\mathcal{M}}}({{| i \rangle}\!{\langle i |}}, {{| j \rangle}\!{\langle j |}}) \nonumber \\ \! 
&\leq& \!\!\frac{N({\mathcal{M}})}{4\sqrt{d_{{\rm eff}}}} + \epsilon.\end{aligned}$$ where the sums in the second line are over an eigenbasis of $\omega$ (which is also a basis of ${\mathcal{H}}_k$), and we have used the fact that $D_{{\mathcal{M}}} (\rho, \sigma)$ satisfies the triangle inequality ($D_{{\mathcal{M}}} (\rho, \sigma) \leq D_{{\mathcal{M}}} (\rho, \tau) + D_{{\mathcal{M}}} (\tau, \sigma)$) and convexity, $$D_{{\mathcal{M}}}\left( \sum_i p_i \rho_i, \sigma \right) \leq \sum_i p_i D_{{\mathcal{M}}}(\rho_i, \sigma),$$ where $p_i \geq 0$ and $\sum_i p_i= 1$. When ${\mathcal{H}}_k$ can be chosen to be a small band of energies, the equilibrium state $\Omega_k$ will be the usual microcanonical state. Conclusions =========== To summarise, we have shown that two key results of [@us1; @reimann1] about the equilibration of large systems can be derived from very weak assumptions (non-degenerate energy gaps, and sufficiently large $d_{{\rm eff}}$), and a single theorem (Theorem 1). In particular, for almost all times, the state of an isolated quantum system will be indistinguishable from its equilibrium state $\omega$ using any *realistic* experiment, and the state of a small subsystem will be indistinguishable from $\omega_S$ using any experiment. Although the first result has a similar flavour to the classical equilibration of coarse-grained observables such as density and pressure, it is really much stronger, as it encompasses any measurement you could describe and record the data from in a reasonable length of text, including microscopic measurements. The second result has no classical analogue, as it yields an essentially static description of the true micro-state of a subsystem, rather than the rapidly fluctuating dynamical equilibrium of particles in classical statistical mechanics. Given the difficulty of proving similar results in the classical case, it seems that quantum theory offers a firmer foundation for statistical mechanics.
*Acknowledgments.* The author is supported by the Royal Society. [10]{} P. Reimann, Phys. Rev. Lett. [**101**]{},190403 (2008). P. Reimann, New J. Phys. [**12**]{}, 055027 (2010). N. Linden, S. Popescu, A. Short and A. Winter, Phys. Rev. E [**79**]{}:061103 (2009). N. Linden, S. Popescu, A. Short and A. Winter, New J. Phys. [**12**]{}, 055021 (2010). J. Gemmer, M. Michel, G. Mahler, *Quantum Thermodynamics*, Springer Verlag, LNP 657, Berlin (2004). S. Goldstein, J. L. Lebowitz, R. Tumulka, N. Zanghì, Phys. Rev. Lett. [**96**]{}:050403 (2006). S. Goldstein, J. L. Lebowitz, C. Mastrodonato, R. Tumulka, N. Zanghi, Phys. Rev. E [**81**]{}: 011109 (2010). S. Popescu, A. J. Short, A. Winter, Nature Physics [**2**]{}(11):754-758 (2006). H. Tasaki, Phys. Rev. Lett. [**80**]{}(7):1373-1376 (1998). C. Gogolin, arXiv:1003.5058 (2010). C.Gogolin, Phys. Rev. E 81, 051127 (2010). J.Schwinger, Proc Natl Acad Sci U S A., [**46**]{}, 570 (1960). [^1]: To see this, consider the four energy eigenstates ${| k \rangle}={| 0 \rangle}{| 0 \rangle}$, ${| l \rangle}={| 0 \rangle}{| 1 \rangle}, {| m \rangle}= {| 1 \rangle}{| 0 \rangle}, {| n \rangle}={| 1 \rangle}{| 1 \rangle}$ which are products of eigenstates of $H_A$ and $H_B$. [^2]: $ \|A\|=\sup \{ \sqrt{{\langle v |} A^{\dag} A {| v \rangle}} : {| v \rangle} \in {\mathcal{H}}\,\textrm{with}\, {\left\langle v| v \right\rangle}=1\}$, or equivalently $\|A\|$ is the largest singular value of $A$. [^3]: In particular we could replace $\Delta(A)$ with $\Delta''(A)=\min_{\tilde{A}} 2\|\tilde{A}\|$, where the operators $\tilde{A}$ are obtained by subtracting any function of $H$ from $A$ and projecting onto the support of $\omega$. [^4]: However, our results could be extended to continuous output sets using measure theory if desired [^5]: Note that $\Delta(M_r)$ is the maximum difference in probability of that result occurring for any two states
--- abstract: 'Three types of explicit estimators are proposed here to estimate the loss rates of the links in a network of the tree topology. All of them are derived by the maximum likelihood principle and proved to be either asymptotically unbiased or unbiased. In addition, a set of formulae are derived to compute the efficiencies and variances of the estimators that also cover some of the estimators proposed previously. The formulae unveil that the variance of the estimates obtained by a maximum likelihood estimator for the pass rate of the root link of a multicast tree is equal to the variance of the pass rate of the multicast tree divided by the pass rate of the tree connected to the root link. Using the formulae, we are able to evaluate the estimators proposed so far and select an estimator for a data set.' author: - 'Weiping Zhu [^1]' bibliography: - '../globcom06/congestion.bib' title: Statistical Properties of Loss Rate Estimators in Tree Topology --- Correlation, Efficiency, Explicit Estimator, Loss Tomography, Maximum Likelihood, Variance. Introduction {#section1} ============ Network characteristics, such as link-level loss rate, delay distribution, available bandwidth, etc., are valuable information for network operation, development, and research. Therefore, considerable attention has been given to network measurement, in particular to large networks that cross a number of autonomous systems, where security concerns, commercial interests, and administrative boundaries make direct measurement impossible. To overcome the security and administrative obstacles, network tomography was proposed in [@YV96], where the author suggests the use of end-to-end measurement and statistical inference to estimate the characteristics of interest.
Since then, many works have been carried out to estimate various characteristics, covering loss tomography [@CDHT99; @CDMT99; @CDMT99a; @CN00; @XGN06; @BDPT02; @ADV07; @DHPT06; @ZG05; @GW03], delay tomography [@LY03; @TCN03; @PDHT02; @SH03; @LGN06], loss pattern tomography [@ADV07], and so on. Despite the enthusiasm in loss tomography, there has been little work studying the statistical properties of an estimator with a finite sample size, although some asymptotic properties are presented in the literature [@CDHT99; @DHPT06]. The finite sample properties, such as efficiency and variance, differ from the asymptotic ones and are critical to the performance evaluation of an estimator since each of them unveils the quality and effectiveness of an estimator in a specific aspect. Apart from that, the finite sample properties can be used to select a better estimator, if not the best, from a group for a data set obtained in a specific circumstance. To fill the gap, we in this paper propose a number of maximum likelihood estimators (MLEs) that can be solved explicitly for a network of the tree topology and provide their statistical properties. The statistical properties are further extended to cover the MLEs proposed previously. One of the most important discoveries is a set of formulae to compute the efficiency and variance of the estimates obtained by an estimator. The approach proposed in [@YV96] requires us to send probing packets, called probes, from some end-nodes called sources to the receivers located on the other side of the network, where the paths connecting the sources to the receivers cover the links of interest. To make the received probes informative in statistical inference, multicast or unicast-based multicast proposed in [@HBB00; @CN00] is used to send probes from a source to a number of receivers, via a number of intermediate nodes that replicate the arriving probes and then forward them to their descendants.
This process continues until the probes either reach their destinations or are lost, which makes the observations of any two receivers correlated to some degree, where the degree varies depending on the interconnection between the receivers. Given the network topology used for sending probes and the observations obtained at receivers, we are able to create a likelihood function to connect the observation to the process described above. Since the number of correlations created by multicasting is proportional to the number of descendants attached to a node, the likelihood equation obtained for a node having many descendants is a high-degree polynomial that requires an iterative procedure, such as the expectation-maximization (EM) or the Newton-Raphson algorithm, to approximate the solution. Using an iterative procedure to solve a polynomial has been widely criticised for its computational complexity, which increases with the number of descendants attached to the link or path to be estimated [@CN00]. There has been a persistent effort in the research community to search for explicit estimators that are comparable in terms of accuracy to the estimators using an iterative approach. To achieve this, we must have the statistical properties of the estimates obtained by an estimator, such as unbiasedness, efficiency, and variance. Unfortunately, there has been little work in a general form for the properties, and the asymptotic properties obtained in [@CDHT99; @DHPT06] have little use in this circumstance. To overcome the problems stated above, we have undertaken a thorough and systematic investigation of the estimators proposed for loss tomography that aims at identifying the statistical principles and strategies that have been used or can be used in the tree topology.
A number of findings are obtained in the investigation showing that all of the estimators proposed previously rely on observed correlations to infer the loss/pass rates and most of them use all of the correlations available in estimation, such as the MLE proposed in [@CDHT99]. However, the qualities of the correlations, measured by the fitness between a correlation and the corresponding observation, are largely ignored. Rather than using all of the correlations available in estimation, we propose here to use a small portion of high-quality ones and expect the estimates obtained by such an estimator to be comparable to those considering all of the correlations. The investigation further leads to a number of findings that contribute to loss tomography in four aspects. - A large number of explicit estimators are proposed on the basis of composite likelihood [@Lindsay88] that are divided into three groups: the block-wise estimators (BWE), the reduced-scale estimators (RSE), and the individual-based estimators (IBE). - The estimators in BWE and IBE are proved to be unbiased and those in RSE are proved to be asymptotically unbiased as that proved in [@DHPT06]. A set of formulae are derived for the efficiency and variances of the estimators in RSE and IBE, plus the MLE proposed in [@CDHT99]. The formulae show that the variance of the estimates obtained by an MLE can be exactly expressed by the pass rate of the path of interest and the pass rates of the subtrees connected to the path. The formulae also show the weakness of the result obtained in [@DHPT06]. - The efficiencies of the estimators in IBE are compared with each other on the basis of the Fisher information, which shows that an estimator considering a correlation involving a few observers can be more efficient than one considering more, and that the estimator proposed in [@DHPT06] is the least efficient. A similar conclusion is obtained for the estimators in BWE.
- Using the formulae, we are able to identify an efficient estimator by examining the end-to-end observation, which makes model selection not only possible but also feasible. A number of simulations are conducted to verify this feature, which also show the connection between the efficiency and the robustness of an estimator. The rest of the paper is organised as follows. In Section \[related work\], we briefly introduce the previous works related to explicit loss rate estimators and point out their weaknesses. In Section \[section2\], we introduce the loss model, the notations, and the statistics used in this paper. Using the model and statistics, we derive an MLE that considers all available correlations for a network of the tree topology in Section \[section3\]. We then decompose the MLE into a number of components according to correlations and derive a number of likelihood equations for the components in Section \[section 4\]. A statistical analysis of the proposed estimators is presented in Section \[section5\] that details the statistical properties of the proposed estimators, including the formulae to calculate the variances of various estimators. A simulation study is presented in Section \[section 6\] that compares the performance of five estimators and shows the feasibility of selecting an estimator for a data set. Section \[section7\] is devoted to concluding remarks. Related Works {#related work} ============= Multicast Inference of Network Characteristics (MINC) is the pioneer in putting the ideas proposed in [@YV96] into practice, where a Bernoulli model is used to model the loss behaviour of a path. Using this model, the authors of [@CDHT99] derive an estimator in the form of a polynomial whose degree is one less than the number of descendants connected to the end node of the path of interest [@CDHT99; @CDMT99; @CDMT99a].
Apart from that, the authors obtain a number of results from asymptotic theory, such as the large-sample behaviour of the estimator and the dependency of the estimator variance on topology. Unfortunately, the results only hold if the sample size $n$ grows indefinitely. In addition, if $n\rightarrow \infty$, almost all of the estimators proposed previously must have the same results and no one can tell the difference between them. In order to evaluate the performance of an estimator, experiments and simulations have been widely used but lead to little result since there are too many random factors affecting the results obtained from experiments and simulations. To overcome the stated problem, simple and explicit estimators, such as that proposed in [@DHPT06], are investigated, aiming at reducing the complexity of an estimator and hopefully finding theoretical support for further development since a simple estimator may be easy to analyse. Using this strategy, the authors of [@DHPT06] propose an explicit estimator that only considers a single correlation, i.e. the correlation involving all descendants, and claim the same asymptotic variance for the estimates obtained by the estimator as that obtained by the estimator proposed in [@CDHT99] to first order. The claim is obtained by applying the central limit theorem (CLT) to one of the results acquired by the asymptotic theory in [@CDHT99], where the covariance between two descendants attached to the path of interest is obtained by assuming the loss rate of a link is very small, and then the delta method is used to compute the asymptotic variance on the covariance matrix obtained by the asymptotic theory. The repeated use of the CLT makes the claim questionable and expensive to use in practice since the result only holds if $n \rightarrow \infty$. Apart from that, some sensitive parameters are cancelled out in the process.
It is easy to prove that under the same condition, most of the estimators proposed so far can achieve the same result, if not better, as that proposed in [@DHPT06]. In contrast to [@DHPT06], [@ADV07; @Zhu11a] propose an estimator that converts a general tree into a binary one and subsequently turns the likelihood equation into a quadratic equation of $A_k$ that is solvable analytically. Experiments show the estimator performs better than that in [@DHPT06] since the estimator uses more information in estimation. Apart from experimental results, there is little statistical analysis to demonstrate why it is better than that proposed in [@DHPT06] and how to improve from there. Although the author of [@Zhu11a] proves the estimator is an MLE, there is a lack of other statistical properties, such as whether the MLE proposed in [@Zhu11a] is the same as that proposed in [@CDHT99] and, if not, how much they differ. To be able to evaluate the performance of an estimator, we need to have the statistical properties of the estimator, such as unbiasedness, efficiency, variance, and so on, which differ from the asymptotic ones by showing the quality of an estimator for a finite sample. To distinguish the properties from the asymptotic ones, we call them finite sample properties; there has been a lack of results for the finite sample properties. This paper aims to fill the gap and provide the properties. Assumption, Notation and Sufficient Statistics {#section2} ============================================== To make the following statistical analysis clear and rigorous, we need to use a large number of symbols that may overwhelm readers who are not familiar with loss tomography. To assist them, the symbols will be gradually introduced throughout the paper, where the frequently used symbols are introduced in the following two sections and the others will be brought up when needed.
In addition, the most frequently used symbols and their meanings are presented in Table \[Frequently used symbols and description\] for quick reference. Assumption ---------- We assume the probes multicast from the source to receivers are independent and network traffic remains statistically stable during the measurement. In addition, the observation obtained at receivers is considered to be independent and identically distributed ($i.i.d.$). Further, the losses occurring at a node or on a link are assumed to be $i.i.d.$ as well. Notation {#treenotation} -------- As stated, a network of the tree topology is considered in this paper and denoted by $T=(V, E)$ that multicasts probes from the source to a number of receivers, where $V=\{v_0, v_1, ... v_m\}$ is a set of nodes and $E=\{e_1,..., e_m\}$ is a set of directed links that connect the nodes in $V$. In addition, $v_k, k \in \{1,\cdot\cdot, m\}$ is often called node $k$ and $e_k$ called link $k$ in the following discussion. By default, node $0$ is the root node of the multicast tree to which the source is attached. Apart from being the root that does not have a parent, node $0$ differs from the others by having a single descendant, $v_1$, that is connected by $e_1$. Among the nodes in $V$, a number of them, called leaf nodes, do not have any descendant; instead, a receiver is attached to each leaf node. Because of this, we do not distinguish between a leaf node and a receiver and we use $R, R \subset V$ to denote them. Since there are $m$ links to connect $m+1$ nodes in $T$, the links and nodes are organised in such a way that if $f(i)$ is used to denote the parent of node $i$, $e_i$ is the link connecting $v_{f(i)}$ to $v_i$. Figure \[tree example\] is an example of a multicast binary tree that is named and connected according to the rules.
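As a small illustration of the naming rules (a sketch; the helper names are ours, not from the paper), the binary tree of Figure \[tree example\] can be encoded by its parent function $f(i)$ alone, from which the descendant sets and receiver sets follow directly.

```python
# Binary multicast tree of Figure [tree example]: f maps node i to its
# parent, so link e_i connects v_f(i) to v_i (node 0 is the root/source).
f = {1: 0, 2: 1, 3: 1, 4: 2, 5: 2, 6: 3, 7: 3,
     8: 4, 9: 4, 10: 5, 11: 5, 12: 6, 13: 6, 14: 7, 15: 7}

children = {}
for i, p in f.items():
    children.setdefault(p, []).append(i)

def d(k):
    """Descendants d_k of node k (empty for leaf nodes)."""
    return children.get(k, [])

def R(k):
    """Receivers R(k): the leaf nodes of the subtree rooted at node k."""
    return [k] if not d(k) else [r for j in d(k) for r in R(j)]

# Matches the examples in the text: R(v_2) = {v_8,...,v_11}, d_{v_2} = {v_4, v_5}
assert R(2) == [8, 9, 10, 11] and d(2) == [4, 5] and len(d(2)) == 2
```

With this bookkeeping, the statistics used later reduce to simple OR/AND operations over the receiver sets $R(j)$.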
A multicast tree, as a tree, can be decomposed into a number of multicast subtrees at node $i, i \in V\setminus ( v_0 \lor R)$, where $T(i)$ denotes the multicast subtree that has $v_{f(i)}$ as its root, $e_i$ as its root link, and $R(i)$ as the receivers attached to $T(i)$. In addition, we use $d_i$ to denote the descendants attached to node $i$, which is a nonempty set if $i \notin R$. If $x$ is a set, $|x|$ is used to denote the number of elements in $x$. Thus, $|d_i|$ denotes the number of descendants in $d_i$. Using the symbols on Figure \[tree example\], we have $R=\{v_8, v_9, \cdot\cdot, v_{15}\}$, $R(v_2)=\{v_8, v_9, v_{10}, v_{11}\}$, $d_{v_2}=\{v_4, v_5\}$, and $|d_{v_2}|=2$. If $n$ probes are sent from $v_0$ to $R$ in an experiment, each of them gives rise to an independent realisation of the passing (loss) process $X$. Let $X^{(i)}, i=1,...., n$ denote the $i$-th process, where $x_k^i=1, k\in V$ if probe $i$ reaches $v_k$; otherwise $x_k^i=0$. The sample $Y=(x_j^{(i)})^{i \in \{1,..,n\}}_{j \in R}$ is the observation obtained in an experiment that can be divided into a number of sections according to $R(k)$, where $Y_k, k \in V$ denotes the part of $Y$ obtained by $R(k)$. In addition, each of the sections can be further divided into subsections $Y_x, x \subset d_k$, which is the part of the observation obtained by $R(j), j \in x \land x \subset d_k$. Obviously, $Y_x \subset Y_k$. If we use $y_j^i$ to denote the observation of receiver $j$ for probe $i$, we have $y_j^i=1$ if probe $i$ is observed by receiver $j$; otherwise, $y_j^i= 0$. Although loss tomography aims at estimating the loss rate of a link, the pass rate of the path connecting $v_0$ to $v_k, k \in V$ is often used as the parameter to be estimated. Let $A_k$ be the pass rate of the path connecting $v_0$ to $v_k$, which is defined as the percentage of probes arriving at node $k$ among the probes sent by the source.
Given $A_k, k \in V\setminus v_0$, we are able to compute the pass rates of all links in $E$ since there is a bijection from the pass rates of the paths to the pass rates of the links in a network of the tree topology. If $\alpha_k$ denotes the pass rate of link $k$, we have $$\alpha_k=\dfrac{A_k}{A_{f(k)}}.$$ Given $\alpha_k$, we are able to compute the loss rate of link $k$, which is equal to $\bar{\alpha}_k=1-\alpha_k$. Statistics {#mlesection} ---------- To estimate $A_k$ from $Y$, we need a likelihood function to connect the [*i.i.d.*]{} model defined previously to $Y$. To support the initiative of using a part of the available correlations to estimate $A_k$, a function, $n_k(x), x \subseteq d_k$, defined as follows is used to return the statistic for the likelihood function: $$n_k(x)=\sum_{i=1}^n \bigvee_{\substack{j \in R(z)\\ z \in x}} y_j^i. \label{nk2}$$ Obviously, $$n_k(d_k)=\sum_{i=1}^n \bigvee_{\substack{j \in R(k)}} y_j^i. \label{nk1}$$ $n_k(x)$ is the number of probes, confirmed by the observation of $R(i), i \in x$, reaching node $k$. If $N_k$ is used to denote the number of probes reaching node $k$, we have $N_k \geq n_k(d_k)\geq n_k(x), x \subset d_k$, where $n_k(d_k)$ and $n_k(x)$ are statistics that can be used to estimate $A_k$. To write a likelihood function of $A_k$ with $n_k(d_k)$, $\beta_k$ and $\gamma_k$ are introduced to denote the pass rate of the subtrees rooted at node $k$ and the pass rate of the special multicast tree that connects $v_0$ to node $k$ and then to $R(k)$. Clearly $\gamma_k=A_k\cdot\beta_k, k \in V$ and $\hat\gamma_k=\dfrac{n_k(d_k)}{n}$, which is the empirical value of $\gamma_k$. Note that $\hat\gamma_j=\dfrac{n_j(j)}{n}, j \in R$ is the empirical pass rate of the path from the root to node $j$. Given the assumptions and definitions, the likelihood function of $A_k$ for observation $n_k(d_k)$ is written as follows: $${\cal L}(A_k, n_k(d_k))=(A_k\beta_k)^{n_k(d_k)}(1-A_k\beta_k)^{n-n_k(d_k)}. 
\label{likelihood function}$$ We can then prove $n_k(d_k)$ is a sufficient statistic with respect to ([*wrt.*]{}) the passing process of $A_k$ for the observation obtained by $R(k)$. Rather than using the well-known factorisation theorem in the proof, we directly use the mathematical definition of a sufficient statistic (see Definition 7.18 in [@RM96]) to achieve this. The definition [*wrt.*]{} the statistical model defined for the passing process is presented as a theorem here: [Figure \[tree example\]: a binary multicast tree with root $v_0$, internal nodes $v_1, \ldots, v_7$, and receivers $v_8, \ldots, v_{15}$.] \[complete minimal sufficient statistics\] Let $Y_k=\{X^{(1)},....,X^{(n)}\}$ be an i.i.d. random sample, governed by ${\cal L}(A_k|Y_k)$. The statistic $n_k(d_k)$ is minimal sufficient for $A_k$ with respect to the observation of $Y_k$. According to the definition of sufficiency, we need to prove $${\cal L}(A_k|n_k(d_k)=t)=\dfrac{{\cal L}(A_k, n_k(d_k)=t)}{{\cal L}(n_k(d_k)=t)} \label{suff-condition}$$ is independent of $A_k$. Given (\[likelihood function\]), the passing process with observation of $n_k(d_k)=t$ is a random process that yields the binomial distribution as follows $${\cal L}(n_k(d_k)=t)=\binom{n}{t}(A_k\beta_k)^{t}(1-A_k\beta_k)^{n-t}.$$ Then, we have $$\begin{aligned} {\cal L}(A_k|n_k(d_k)=t)&=&\dfrac{(A_k\beta_k)^{t}(1-A_k\beta_k)^{n-t}}{\binom{n}{t}(A_k\beta_k)^{t}(1-A_k\beta_k)^{n-t}
} \nonumber \\ &=&\dfrac{1}{\binom{n}{t}},\end{aligned}$$ which is independent of $A_k$. Then, $n_k(d_k)$ is a sufficient statistic. Apart from the sufficiency, $n_k(d_k)$, as defined in (\[nk1\]), is a count of the probes reaching $R(k)$ that counts each probe once and once only regardless of how many receivers observe the probe. Therefore, $n_k(d_k)$ is a minimal sufficient statistic in regard to the observation of $R(k)$. Statistics considering a part of observation {#mlestatistics} -------------------------------------------- Instead of using $n_k(d_k)$ to estimate $A_k$, we can use $n_k(x), x \subset d_k \land |x|\geq 2$, defined in Section \[mlesection\] to estimate $A_k$. The difference between them is the number of correlations considered in estimation, where the latter is smaller than the former. As in (\[likelihood function\]), $\beta_k(x), x \subset d_k$ is needed to express the pass rate of the subtrees consisting of $T(j), j \in x$. Given $n_k(x)$ and $\beta_k(x)$, we can also write a likelihood function of $A_k$ and use the same procedure as that in Section \[mlesection\] to prove that $n_k(x)$ is a sufficient statistic in the context of the observation obtained by $R(j), j \in x$. Further, an estimator on the observation of $R(j), j \in x$ can be created, which will be discussed in Section \[section 4\]. Symbol Description ---------------------- --------------------------------------------------------------------------------- $T(k)$ the subtree rooted at link $k$. $d_k $ the descendants attached to node $k$. $R(k)$ the receivers attached to $T(k)$. $A_k$ the pass rate of the path from $v_0$ to $v_k$. $\beta_k$ the pass rate of the subtree rooted at node $k$. $\beta_k(x)$ the pass rate of the subtree consisting of $T(j), j \in x \land x \subset d_k$. $\gamma_k$ $A_k*\beta_k$, pass rate from $v_0$ to $R(k)$. $N_k$ the number of probes reaching node $k$. $x_k^i$ the state of $v_k$ for probe $i$. $\Sigma_k$ the $\sigma$-algebra created from $d_k$. 
$n$ the number of probes sent in an experiment. $n_k(d_k)$ the number of probes reaching $R(k)$. $n_k(x)$ the number of probes reaching the receivers attached to $T(j), j \in x$. $I_k(x)$ the number of probes observed by the members of $x$. $Y$ the observation obtained in an experiment. $Y_k, k \in V$ the part of $Y$ obtained by $R(k)$. $Y_x, x \subset d_k$ the part of $Y$ obtained by $R(j), j \in x$. : Frequently used symbols and description \[Frequently used symbols and description\] Estimator Analysis {#section3} ================== This section is dedicated to the analysis of the MLE that considers all of the correlations available in observation. By the analysis, we are able to identify all of the correlations in observation and find the connections among them, which will set up the foundation for various explicit estimators. Maximum Likelihood Estimator based on $n_k(d_k)$ {#2.a} ------------------------------------------------ Turning the likelihood function presented in (\[likelihood function\]) into a log-likelihood function, we have $$\log {\cal L}(A_k|Y_k)=n_k(d_k)\log (A_k\beta_k)+(n-n_k(d_k))\log(1-A_k\beta_k). \label{likelihood}$$ Differentiating (\[likelihood\]) [*wrt.*]{} $A_k$ and setting the derivative to 0, we have $$\begin{aligned} \dfrac{n_k(d_k)}{A_k}-\dfrac{(n-n_k(d_k))\beta_k}{1-A_k\beta_k}=0, \label{likelihood equation}\end{aligned}$$ and then $$\begin{aligned} A_k\beta_k&=&\frac{n_k(d_k)}{n}. \label{AkBk}\end{aligned}$$ Since neither $A_k$ nor $\beta_k$ can be solved from (\[AkBk\]) alone, we need to consider other correlations and then derive an MLE. Given the [*i.i.d.*]{} model assumed previously and the multicast used in probing, we have the following equation to link the observation of $R(k)$ to $\beta_k$ $$1-\beta_k=\prod_{j \in d_k} (1-\dfrac{\gamma_j}{A_k}). 
\label{beta-k}$$ Solving $\beta_k$ from (\[beta-k\]) and using it in (\[likelihood equation\]), we have an MLE as $$1-\dfrac{n_k(d_k)}{n \cdot A_k}=\prod_{j \in d_k} (1-\dfrac{\gamma_j}{A_k}). \label{realmle1}$$ Using $\gamma_k$ to replace $\dfrac{n_k(d_k)}{n}$, since the latter is the empirical value of the former, we have a likelihood equation as follows: $$1-\dfrac{\gamma_k}{A_k}=\prod_{j \in d_k} (1-\dfrac{\gamma_j}{A_k}) \label{minc}$$ which is identical to the estimator proposed in [@CDHT99].

Predictor and Observation
-------------------------

To make the correlations involved in (\[realmle1\]) visible, we expand the left hand side (LHS) and the right hand side (RHS) of (\[realmle1\]), where the terms obtained from the LHS are called observations and the terms from the RHS are called correlations. A correlation is also called a [*predictor*]{} since it predicts the observation received in an experiment. For instance, $\gamma_i\cdot \gamma_j/A_k, i, j \in d_k \land i \neq j$ is the predictor of the probes simultaneously observed by the receivers attached to subtree $i$ and subtree $j$, i.e. there is at least one observing receiver in each subtree. To represent the correlations involved in (\[realmle1\]), a $\sigma$-algebra, $S_k$, is created over $d_k$, and we let $\Sigma_k=S_k \setminus \emptyset$ be the non-empty sets in $S_k$. Each member in $\Sigma_k$ corresponds to a pair of a predictor and its observation. If the number of elements in a member of $\Sigma_k$ is defined as the degree of the correlation, $\Sigma_k$ can be divided into $|d_k|$ exclusive groups, one for each degree of correlation, varying from degree 1 to degree $|d_k|$. Let $S_k(i), i \in \{1,\cdot\cdot,|d_k|\}$ denote the group that considers the degree-$i$ correlations. For example, if $d_k=\{i,j,k,l\}$, $S_k(2)=\{(i,j),(i,k),(i,l),(j,k),(j,l),(k,l)\}$ consists of the pairwise correlations in $d_k$, and $S_k(3)=\{(i,j,k),(i,j,l),(i,k,l),(j,k,l)\}$ contains all of the triplet-wise correlations.
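Before moving on, it is worth noting that the likelihood equation (\[minc\]) is a one-dimensional root-finding problem once the empirical pass rates are plugged in. The Python sketch below solves it by bisection; it is a minimal illustration, not the implementation used in the paper, and the empirical rates are made-up values constructed so that the exact solution is $A_k=0.9$ (a path of pass rate $0.9$ with two leaf subtrees of pass rate $0.8$ each).

```python
from functools import reduce

def minc_solve(gamma_k, gammas, tol=1e-12):
    """Solve 1 - gamma_k/A = prod_j (1 - gamma_j/A) for A by bisection.

    gamma_k : empirical pass rate from the root to R(k), i.e. n_k(d_k)/n
    gammas  : empirical pass rates gamma_j of the subtrees j in d_k
    Assumes the rates are consistent with the model, so that the two
    sides of the equation cross on the interval (gamma_k, 1].
    """
    def f(a):
        prod = reduce(lambda p, g: p * (1.0 - g / a), gammas, 1.0)
        return (1.0 - gamma_k / a) - prod

    lo, hi = gamma_k + tol, 1.0   # f < 0 near gamma_k, f >= 0 at 1
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# gamma_k = 0.9 * (1 - 0.2**2) = 0.864, gamma_j = 0.9 * 0.8 = 0.72
A_hat = minc_solve(0.864, [0.72, 0.72])
```

Bisection is used instead of a polynomial solver because it works for any $|d_k|$, including the cases where the polynomial form of (\[realmle1\]) has no closed-form roots.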
Given $\Sigma_k$, $n_k(d_k)$ can be decomposed according to the probes that are observed simultaneously by the members of $\Sigma_k$, where, for $x \in \Sigma_k$ with $|x|>1$, a probe is observed by $x$ if and only if at least one receiver attached to each subtree $j, j \in x$ observes the probe. We call such an observation a simultaneous observation. To explicitly express $n_k(d_k)$ by $n_j(d_j), j \in d_k$, $I_k(x), x \in \Sigma_k$ is introduced to return the number of probes observed simultaneously by $x$ in an experiment. Let $u_j^i$ be the observation of $R(j)$ for probe $i$, defined as: $$u_j^i=\bigvee_{k \in R(j)} y_k^i,$$ then $$I_k(x)=\sum_{i=1}^n \bigwedge_{j \in x} u_j^i, \mbox{\vspace{1cm} } x \in \Sigma_k. \label{I-k x}$$ If $x=(j)$, $$I_k(x)=n_j(d_j), j \in d_k.$$ Given the above, $n_k(d_k)$ can be decomposed as: $$n_k(d_k)=\sum_{i=1}^{|d_k|}(-1)^{i-1}\sum_{x \in S_k(i)} I_k(x). \label{n_k value}$$ (\[n\_k value\]) states that $n_k(d_k)$ is equal to a series of $I_k(x), x \in S_k(i)$ that overlap each other. To ensure that each probe observed by $R(k)$ is counted once and once only in $n_k(d_k)$, we need to use the alternating addition and subtraction operations to eliminate duplication.

Correspondence between Predictors and Observations
--------------------------------------------------

Given (\[n\_k value\]), we are able to prove that the MLE proposed in [@CDHT99] considers all of the correlations in $\Sigma_k$, and we have the following theorem. \[minctheorem\]

1.  (\[realmle1\]) is a full likelihood estimator that considers all of the correlations in $\Sigma_k$;

2.  (\[realmle1\]) consists of observed values and their predictors, one for each member of $\Sigma_k$; and

3.  the estimate obtained from (\[realmle1\]) is a fit that minimises the alternating differences between observed values and corresponding predictors.

(\[realmle1\]) is a full likelihood estimator that considers all of the correlations in $d_k$.
To prove 2) and 3), we expand both sides of (\[realmle1\]) to pair the observed values with their predictors according to $S_k$. We take three steps to achieve the goal.

1.  If we use (\[n\_k value\]) to replace $n_k(d_k)$ on the LHS of (\[realmle1\]), the LHS becomes: $$1-\dfrac{n_k(d_k)}{n\cdot A_k}= 1-\dfrac{1}{n\cdot A_k}\big [\sum_{i=1}^{|d_k|}(-1)^{i-1}\sum_{x \in S_k(i)}I_k(x)]. \label{nkexpansion}$$

2.  If we expand the product term located on the RHS of (\[realmle1\]), we have: $$\prod_{j \in d_k}(1-\dfrac{\gamma_j}{A_k})=1-\sum_{i=1}^{|d_k|}(-1)^{i-1}\sum_{x \in S_k(i)}\dfrac{\prod_{j \in x} \gamma_j}{A_k^i} \label{prodexpansion}$$ where the alternating addition and subtraction operations intend to remove the impact of redundant observation.

3.  Deducting 1 from both (\[nkexpansion\]) and (\[prodexpansion\]) and then multiplying the results by $A_k$, (\[realmle1\]) turns into $$\begin{aligned} \sum_{i=1}^{|d_k|}(-1)^{i}\sum_{x \in S_k(i)}\dfrac{I_k(x)}{n} =\sum_{i=1}^{|d_k|}(-1)^{i}\sum_{x \in S_k(i)}\dfrac{\prod_{j \in x} \gamma_j}{A_k^{i-1}}. \label{statequal}\end{aligned}$$

It is clear that there is a correspondence between the terms across the equal sign, where the terms on the LHS are the observed values and the terms on the RHS are the predictors. If we rewrite (\[statequal\]) as $$\sum_{i=1}^{|d_k|}(-1)^{i}\sum_{x \in S_k(i)}\Big(\dfrac{I_k(x)}{n} -\dfrac{\prod_{j \in x} \gamma_j}{A_k^{i-1}}\Big)=0, \label{correspondence}$$ the correspondence between correlations and observed values becomes obvious. To distinguish this MLE from those proposed in this paper, we call it the original MLE in the rest of the paper.

Explicit Estimators based on Composite Likelihood {#section 4}
=================================================

(\[correspondence\]) shows that the original MLE takes into account all of the correlations in $\Sigma_k$.
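The decomposition (\[n\_k value\]) underlying the correspondence is an inclusion-exclusion identity over the groups $S_k(i)$, and it can be checked numerically. The sketch below generates hypothetical per-subtree observations $u_j^i$, counts $n_k(d_k)$ directly as the number of probes seen by at least one subtree, and verifies that the alternating sum over all groups gives the same count:

```python
import itertools
import random

random.seed(1)
n, d_k = 1000, ['a', 'b', 'c', 'd']   # probes sent; four subtrees rooted at node k

# u[j][i] = True iff some receiver of subtree j observes probe i (hypothetical data)
u = {j: [random.random() < 0.7 for _ in range(n)] for j in d_k}

def I_k(x):
    """Number of probes observed simultaneously by every subtree in x."""
    return sum(all(u[j][i] for j in x) for i in range(n))

# Direct count: probes observed by at least one receiver in R(k).
n_k = sum(any(u[j][i] for j in d_k) for i in range(n))

# Inclusion-exclusion over S_k(i), i = 1, ..., |d_k|, as in the decomposition.
total = sum((-1) ** (i - 1) *
            sum(I_k(x) for x in itertools.combinations(d_k, i))
            for i in range(1, len(d_k) + 1))
```

The identity holds for any observation matrix, which is why the alternating signs in (\[statequal\]) and (\[correspondence\]) remove every duplicated count exactly.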
If the number of subtrees rooted at node $k$ is larger than 6, the likelihood equation is a high-degree polynomial that cannot be solved analytically. To have an explicit estimator in those circumstances, we need to reduce the number of correlations considered in estimation, and there are a number of strategies to achieve this. We here propose three of them and use composite likelihood, which is also called pseudo-likelihood by Besag [@Besay74], to create likelihood functions for the strategies. The three strategies are named reduce scaled, block-wised, and individual based, respectively. The reduce scaled strategy, as its name suggests, is a scaled-down version of the original MLE that selectively removes a number of subtrees rooted at node $k$ from consideration and then uses the maximum likelihood principle on the rest to estimate $A_k$. The block-wised strategy differs from the reduce scaled one by dividing all of the correlations considered by the original MLE into a number of blocks, one for each degree of correlations, from pairwise to $d_k$-wise. The individual based one, in contrast to the other two, considers one correlation at a time, which leads to a large number of estimators.

Reduce Scaled Estimator (RSE)
-----------------------------

Rather than considering all of the correlations in $\Sigma_k$, the correlations can be divided into groups according to the subtrees rooted at node $k$. Let $x, x \subset d_k$ be the group to be considered by an estimator in RSE. The log-likelihood function considering the correlations in $x$ is as follows: $$\log L(A_k|Y_x)=n_k(x)\log (A_k\beta_k(x))+(n-n_k(x))\log(1-A_k\beta_k(x)) \label{likelihood RSE}$$ where $n_k(x)$, as defined in Section \[mlestatistics\], is the number of probes reaching node $k$ confirmed from the observations of the receivers attached to $T(j), j \in x$, and $\beta_k(x)$ is the pass rate of $T(j), j \in x$, which can be expressed as $$1-\beta_k(x)=\prod_{j \in x} (1-\dfrac{\gamma_j}{A_k}).
\label{beta-k1}$$ Then, a likelihood equation similar to (\[realmle1\]) is obtained and presented as follows: $$1-\dfrac{n_k(x)}{n \cdot A_k}=\prod_{j \in x} (1-\dfrac{\gamma_j}{A_k}). \label{estimator MLEPC1}$$ If $|x|<5$, the equation is solvable analytically. The estimators in RSE are denoted by $Am_k(x), x \subset d_k$.

Block-wised Estimator (BWE)
---------------------------

(\[correspondence\]) shows that the correlations involved in the original MLE can be divided into $|d_k|-1$ blocks, from pairwise to $|d_k|$-wise. Each of them can be written as a likelihood function. In order to use a unique likelihood function for all of them, we let the likelihood function considering a single correlation be 1. Then, the $i$-wise likelihood function, denoted as $L_c(i; A_k; y)$, can be expressed uniformly. \[recursive corollary\] There are a number of composite likelihood functions, one for each degree of correlations, varying from pairwise to $|d_k|$-wise. The composite likelihood function $L_c(i; A_k; y), i \in \{2,\cdot\cdot, |d_k|\}$ has a form as follows: $$\begin{aligned} L_c(i; A_k; y) &=&\dfrac{\prod_{x \in S(i)} (A_k\beta_k(x))^{n_k(x)}(1-A_k\beta_k(x))^{n-n_k(x)}}{\prod_{x' \in S(i-1)}(A_k\beta_k(x'))^{n_k(x')}(1-A_k\beta_k(x'))^{n-n_k(x')}}. \nonumber \\ && i \in \{2,\cdot\cdot,|d_k|\} \label{recursive form}\end{aligned}$$ Let $A_k(i)$ be the estimator derived from $L_c(i; A_k; y)$. Then, we have the following theorem. \[all explicit\] Each of the composite likelihood equations obtained from (\[recursive form\]) is an explicit estimator of $A_k$, as follows: $$A_k(i)=\Big (\dfrac{\sum_{\substack{ x \in S_k(i)}} \prod_{j \in x} \gamma_j}{\sum_{x \in S_k(i)}\dfrac{I_k(x)}{n}}{\Big )} ^{\frac{1}{i-1}}, i \in \{2,.., |d_k|\}.
\label{approximateestimator}$$ Firstly, we can write (\[recursive form\]) as a log-likelihood function and differentiate the log-likelihood function [*wrt*]{} $A_k$. As with (\[AkBk\]), we cannot solve $A_k$ or $\beta_k(x)$ directly from the derivative, and we need to consider other correlations as in (\[beta-k\]). We then have an equation as $$\frac{\partial\log L_c(i,A_k; y)}{\partial A_k}=\sum_{x \in S(i)} \Big [1-\dfrac{\gamma_k(x)}{A_k}-\prod_{q\in x}(1-\dfrac{\gamma_q}{A_k})\Big ]-\sum_{x' \in S(i-1)} \Big [1-\dfrac{\gamma_k(x')}{A_k}-\prod_{q\in x'}(1-\dfrac{\gamma_q}{A_k})\Big ] \label{pairwise equation}$$ The two summations can be expanded as in (\[realmle1\]), and only the terms related to the $i$-wise correlations remain, since all other terms in the first summation are canceled by the terms of the second summation. The likelihood equation (\[approximateestimator\]) follows. In the rest of the paper, $A_k(i)$ is used to refer to the $i$-wise estimator and $\widehat A_k(i)$ refers to the estimate obtained by $A_k(i)$.

Individual based Estimator (IBE)
--------------------------------

Instead of considering a block of correlations together, we can consider one correlation at a time and have a large number of estimators. Each of them has a likelihood function similar to (\[likelihood RSE\]), where $\beta_k(x)$ and $n_k(x)$ are replaced by $\psi_k(x)$ and $I_k(x)$, respectively. $\psi_k(x)=\prod_{j \in x} \alpha_j\beta_j, x \subseteq d_k$, is the pass rate of $T(j), j \in x$. If $\Sigma_k'=\Sigma_k \setminus S_k(1)$ denotes the correlations considered by IBE, the log-likelihood function for $A_k$ given observation $I_k(x)$ is equal to $$\begin{aligned} L(A_k|I_k(x))=I_k(x)\log (A_k\psi_k(x))+(n-I_k(x))\log(1-A_k\psi_k(x)), \mbox{ } x \in \Sigma_k'. \label{Al likelihood1} \end{aligned}$$ We then have the following theorem. \[local estimator\] Given (\[Al likelihood1\]), $A_k\psi_k(x)$ is a Bernoulli process.
The MLE for $A_k$ given $I_k(x)$ equals $$Al_k(x)=\Big(\dfrac{\prod_{j\in x} \gamma_j}{\dfrac{I_k(x)}{n}}\Big)^{\frac{1}{|x|-1}}. \mbox{ } x \in \Sigma_k' \label{local estimator1}$$ Using the same procedure as that used in Section \[2.a\], we have the theorem. Comparing (\[approximateestimator\]) with (\[local estimator1\]), we can find that $\widehat Al_k(x)$, where $|x|=i$, is a type of geometric mean and $\widehat A_k(i)$ is the arithmetic mean of $\widehat Al_k(x), x \in S_k(i)$. Therefore, $A_k(i)$ is more robust than $Al_k(x)$. Using and combining the strategies presented here, we can have various explicit estimators that cover those proposed previously. For instance, the estimator proposed in [@ADV07; @Zhu11a] is one of them that divides $d_k$ into two groups and only considers the pairwise correlations between the members of the two groups. Therefore, although the estimator proposed in [@ADV07; @Zhu11a] is an MLE in terms of the observation used in estimation, it is not the same as (\[minc\]).

Properties of the Estimators {#section5}
============================

It is known that if an MLE is a function of the sufficient statistic, it is asymptotically unbiased, consistent, and asymptotically efficient. Thus, the original MLE and all of the estimators proposed in this paper have these properties. Beyond this, we are interested in whether some of the estimators have further properties, such as unbiasedness, uniqueness, variance, and efficiency, that can be used to evaluate the estimators. This section presents these properties in a number of theorems and corollaries.

Unbiasedness and Uniqueness of $Al(x)$ and $A_k(i)$
---------------------------------------------------

This subsection is focused on the unbiasedness of the estimators in IBE and BWE, although the statistic used by the latter is not minimal sufficient. For $Al_k(x), x \in \Sigma_k'$, we have the following theorem. \[local maximum\] $Al_k(x)$ is an unbiased estimator.
Let $z_j, j \in d_k$ be the pass rate of $T(j)$ and let $\overline{A_k}=\frac{N_k}{n}$ be the sample mean of $A_k$. Note that $z_j$ and $z_l, j, l \in d_k$ are independent of each other if $ j \neq l$. In addition, $z_j, j \in d_k$ is independent of $A_k$. Because of this, $x_k^i\prod_{j \in x} z_j$ is used to replace $\bigwedge_{j \in x} y_j^i$ in the following derivation, since the latter is equal to $\prod_{j\in x} y_j^i$, which is equal to $x_k^i\prod_{j \in x} z_j$. We then have $$\begin{aligned} E(\widehat Al_k(x))&=&E\Big( \big (\frac{\prod_{j\in x} \hat\gamma_j}{\frac{I_k(x)}{n}}\big)^{\frac{1}{|x|-1}}\Big) \nonumber \\ &=& E\Big(\big(\frac{\prod_{j\in x} \frac{n_j(d_j)}{n}}{\frac{\sum_{i=1}^n \bigwedge_{j \in x} y^i_j}{n}}\big)^{\frac{1}{|x|-1}} \Big )\nonumber \\ &=& E\Big(\big(\frac{(\dfrac{N_k}{n})^{|x|}\prod_{j\in x} \frac{n_j(d_j)}{N_k}}{\frac{N_k}{n} \frac{\sum_{i=1}^{N_k}\prod_{j \in x} z_j}{N_k}}\big)^{\frac{1}{|x|-1}} \Big ) \nonumber \\ &=& E\Big(\frac{N_k}{n}\Big) E\Big(\big(\frac{\prod_{j\in x} \frac{1}{N_k}\sum_{i=1}^{N_k}{z_j}}{\sum_{i=1}^{N_k} \frac{1}{N_k}\prod_{j \in x} {z_j}}\big)^{\frac{1}{|x|-1}}\Big ) \nonumber \\ &=& E\Big (\overline{A_k}\Big)\end{aligned}$$ The theorem follows. Given Theorem \[local maximum\], we have the following corollary. \[global expect\] $A_k(i)$ is an unbiased estimator. According to Theorem \[local maximum\], we have $$\begin{aligned} E(\widehat A_k(i))&=&E\Big(\overline{A_k}\Big)E\Big(\big(\frac{\sum_{x \in S(i)}\prod_{j\in x} \frac{1}{N_k}\sum_{i=1}^{N_k}{z_j}}{\sum_{x \in S(i)}\sum_{i=1}^{N_k} \frac{1}{N_k}\prod_{j \in x} {z_j}}\big)\big)^{\frac{1}{i-1}} \Big)\nonumber \\ &=&E\Big (\overline{A_k}\Big) \label{global estimate}\end{aligned}$$ Given that $Al_k(x), x \in \Sigma_k'$, and $A_k(i)$ are unbiased estimators, we can prove the uniqueness of $A_k(i)$.
If $$\sum_{\substack{ x \in S_k(i)}} \prod_{j \in x} \hat\gamma_j < \sum_{x \in S_k(i)}\dfrac{I_k(x)}{n},$$ there is only one solution in $(0,1)$ for $\widehat A_k(i), 2 \leq i \leq |d_k|$. Since the support of $A_k$ is in $(0,1)$, we can reach this conclusion from (\[approximateestimator\]).

Efficiency of $Al_k(x)$, $Am_k(x)$, and the original MLE
--------------------------------------------------------

Apart from the asymptotic efficiency stated previously for the MLEs using sufficient statistics, we are interested in the efficiency of the estimators proposed in this paper. Given (\[Al likelihood1\]), we have the following theorem for the Fisher information of an observation, $y$, for the estimators in IBE, i.e. $Al_k(x), x \in \Sigma_k'$. \[Al fisher\] The Fisher information of $y$ on $Al_k(x), x \subset d_k$ is equal to $ \dfrac{\psi_k(x)}{A_k (1-A_k \psi_k(x))}$. Considering that $I_k(x)=y$ is the observation of the receivers attached to $x$, we have the following as the log-likelihood function of the observation: $$L(A_k|y)=y\log (A_k\psi_k(x))+(1-y)\log(1-A_k\psi_k(x)). \label{Al likelihood for single}$$ Differentiating (\[Al likelihood for single\]) [*wrt*]{} $A_k$, we have $$\begin{aligned} \dfrac{\partial L(A_k|y)}{\partial A_k}=\dfrac{y}{A_k}-\dfrac{(1-y)\psi_k(x)}{1-A_k\psi_k(x)}\end{aligned}$$ We then have $$\begin{aligned} \dfrac{\partial^2 L(A_k|y)}{\partial A_k^2}&=& -\dfrac{y}{A_k^2}-\dfrac{(1-y)\psi_k(x)^2}{(1-A_k\psi_k(x))^2}\end{aligned}$$ If ${\cal I}(Al_k(x)|y)$ is used to denote the Fisher information of observation $y$ for $A_k$ in $Al_k(x)$, we have $$\begin{aligned} \label{fisher} {\cal I}(Al_k(x)|y)&=&-E(\dfrac{\partial^2 L(A_k|y)}{\partial A_k^2}) \nonumber \\ &=&\dfrac{E(y)}{A_k^2}+\dfrac{E(1-y)\psi_k(x)^2}{(1-A_k\psi_k(x))^2} \nonumber \\ &=&\dfrac{\psi_k(x)}{A_k (1-A_k \psi_k(x))}\end{aligned}$$ which is the information provided by $y$ for $A_k$.
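The closed form in Theorem \[Al fisher\] can be sanity-checked numerically: with $P(y{=}1)=A_k\psi_k(x)$, the expectation $E[-\partial^2 L/\partial A_k^2]$ is a sum over the two outcomes $y\in\{0,1\}$. The sketch below compares the two expressions; the values of $A_k$ and $\psi_k(x)$ are arbitrary.

```python
def fisher_info(A_k, psi):
    """Closed form from Theorem [Al fisher]: psi / (A_k * (1 - A_k * psi))."""
    return psi / (A_k * (1.0 - A_k * psi))

def fisher_info_by_expectation(A_k, psi):
    """E[-d^2 L / dA_k^2] over y in {0,1} with P(y = 1) = A_k * psi."""
    p = A_k * psi
    # y = 1 contributes 1/A_k^2; y = 0 contributes psi^2 / (1 - A_k psi)^2
    return p * (1.0 / A_k ** 2) + (1.0 - p) * (psi ** 2 / (1.0 - p) ** 2)

a, s = 0.9, 0.8   # arbitrary A_k and psi_k(x) in (0, 1)
```

Both routes collapse to the same value, since $\psi/A_k + \psi^2/(1-A_k\psi) = \psi/(A_k(1-A_k\psi))$.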
Given (\[fisher\]), we are able to obtain a formula for the Fisher information of the original MLE and the estimators in RSE. In order to achieve this, let $\beta_k(d_k)=\beta_k$. Then, we have the following corollary. \[MLE fisher\] The Fisher information of observation $y$ for $A_k$ in the original MLE and $Am_k(x), x \subseteq d_k$ is equal to $$\dfrac{\beta_k(x)}{A_k (1-A_k \beta_k(x))}, \mbox{ } x \subseteq d_k. \label{MLE fisher equ}$$ Replacing $n_k(d_k)$ or $n_k(x)$ by $y$ and replacing $n-n_k(d_k)$ or $n-n_k(x)$ by $1-y$ in (\[likelihood\]) and (\[likelihood RSE\]), respectively, and then using the same procedure as that used in the proof of Theorem \[Al fisher\], the corollary follows. Because of the similarity between (\[fisher\]) and (\[MLE fisher equ\]), the two equations have the same features in terms of support, singularity, and maxima. After eliminating the singular points, the support of $A_k$ is in $(0,1)$ and the support of $\beta_k(x)$ (or $\psi_k(x)$) is in $[0, 1]$. Both (\[fisher\]) and (\[MLE fisher equ\]) are convex functions on the support and reach their maxima at the points $A_k \rightarrow 1, \beta_k(x) =1$ (or $\psi_k(x)=1$) and $A_k\rightarrow 0, \beta_k(x)=1$ (or $\psi_k(x)=1$). Given $A_k$, (\[MLE fisher equ\]) is a monotonically increasing function of $\beta_k(x)$, whereas (\[fisher\]) is a monotonically increasing function of $\psi_k(x)$. Despite the similarity between (\[fisher\]) and (\[MLE fisher equ\]), $Am_k(x)$ and $Al_k(x)$ react differently, in terms of efficiency, if $x$ is replaced by $y, x \subset y$, which leads to two corollaries, one for each of them. $Am_k(y)$ is more efficient than $Am_k(x)$ if $x \subset y$. If $x \subset y$, $\beta_k(x) \leq \beta_k(y)$, and the corollary follows.
For $Al_k(x)$, we have The efficiency of $Al_k(x), x \in \Sigma_k'$ forms a partial order that is identical to that formed on the inclusion of the members in $\Sigma_k'$, where the most efficient estimator must be one of the $Al_k(x), x \in S_k(2)$ and the least efficient one must be $Al_k(d_k)$. \[Al corollary\] According to Theorem \[Al fisher\], the efficiency of $Al_k(x)$ is determined by $\psi_k(x)$, where $\psi_k(x)=\prod_{j \in x} \alpha_j \beta_j$. If $x \subset y$, we have $$\begin{aligned} \psi_k(y)&=&\prod_{j \in y} \alpha_j \beta_j \nonumber \\ &=&\psi_k(x)\prod_{j \in (y \setminus x)} \alpha_j \beta_j \nonumber \\ &<& \psi_k(x) \end{aligned}$$ Therefore, the order of the efficiency of $Al_k(x), x \in \Sigma_k'$ shares that of the inclusion in $\Sigma_k'$, where $x, x \in S_k(2)$ are the members of $\Sigma_k'$ that have the minimal number of elements. In contrast to $\psi_k(x), x \in S_k(2)$, $\psi_k(d_k)\leq \psi_k(x)$ since $\forall x, x \in \Sigma_k', x \subseteq d_k$. Then, the corollary follows.

Variance of $Al_k(x)$, $Am_k(x)$, and the original MLE
------------------------------------------------------

The estimator specified by (\[minc\]), $Am_k(x)$, and $Al_k(x)$ are all MLEs that focus on different parts of the observations obtained. Despite the differences between them, they share a number of features, including the forms of their likelihood functions and likelihood equations. In addition, their variances are expressed by a general function showing the connection between $A_k$ and the pass rate of the subtree considered in estimation. Let $mle$ denote all of them. Then, we have a theorem for the variances of the estimators in $mle$. \[Al variance\] The variances of the estimators in $mle$ are equal to $$var(mle)=\dfrac{A_k (1-A_k\delta_k(x) )}{\delta_k(x)}, \mbox{ } x \subseteq d_k \label{Al variance1}$$ where $$\begin{aligned} \delta_k(x)=\begin{cases} \beta_k(x), & \mbox{For original MLE and } Am_k(x); \\ \psi_k(x), & \mbox{For } Al_k(x).
\end{cases} \nonumber\end{aligned}$$ The passing process described by (\[Al likelihood1\]) is a Bernoulli process that falls into the exponential family and satisfies the regularity conditions presented in [@Joshi76]. Thus, the variance of an estimator in $mle$ reaches the Cramér-Rao bound, which is the reciprocal of the Fisher information. (\[Al variance1\]) can be written as $$\begin{aligned} \frac{A_k}{\delta_k(x)}-A_k^2\end{aligned}$$ which shows:

1.  the estimates obtained by an estimator spread out more widely than those obtained by direct measurement. The spread is determined by $\delta_k(x)$, the pass rate of the subtrees connecting node $k$ to the observers. If $\delta_k(x)=1$, there is no further spread-out than that obtained by direct measurement. Otherwise, the variance increases as $\delta_k(x)$ decreases, and in a super-linear fashion;

2.  the variance of the estimates obtained by an estimator increases monotonically with the depth of the subtree rooted at node $k$, since the pass rate of a subtree decreases with its depth, i.e., the pass rate of an $i$-level tree is larger than that of the $(i+1)$-level tree extended from it;

3.  the variance of the estimates obtained by an estimator in $mle$ is a monotonically decreasing function of $\delta_k(x)$.

The three points confirm some of the experimental results reported previously, such as the dependency of variance on topology reported in [@CDHT99]. Note that despite 3), the variance of $Am_k(x)$ can be the same as that of $Am_k(y), x \subset y$ if $\beta_k(x)=\beta_k(y)$. The same holds for $Al_k(x)$. In other words, if the probes observed by $R(j), j \in (y\setminus x)$ are included in those observed by $R(i), i \in x$, the estimate obtained by $Am_k(x)$ is the same as that obtained by $Am_k(y)$.
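As a numerical illustration of Theorem \[Al variance\], the sketch below simulates the Bernoulli process of (\[Al likelihood1\]) for a pairwise $x$ on a hypothetical topology with two leaf subtrees (so $\beta_j=1$ and $\psi_k(x)=\alpha_1\alpha_2$), and, in the simplified setting where $\psi_k(x)$ is treated as known, checks that the empirical variance of the estimates matches (\[Al variance1\]) divided by the number of probes $n$. All pass rates are made-up values.

```python
import random

random.seed(7)
A_k = 0.9                   # hypothetical pass rate of the path to node k
alpha = (0.95, 0.95)        # hypothetical pass rates of the two leaf subtrees
psi = alpha[0] * alpha[1]   # psi_k(x) for the pair of leaf subtrees
n, reps = 1000, 2000        # probes per experiment, number of experiments

estimates = []
for _ in range(reps):
    # I_k(x): probes that pass the path AND are observed by both receivers;
    # by independence each probe succeeds with probability A_k * psi.
    I = sum(random.random() < A_k * psi for _ in range(n))
    estimates.append(I / (n * psi))   # estimate of A_k with psi known

mean = sum(estimates) / reps
var = sum((e - mean) ** 2 for e in estimates) / reps
predicted = A_k * (1 - A_k * psi) / psi / n   # (Al variance1) divided by n
```

The run confirms the super-linear effect noted in point 1): lowering either $\alpha_j$ shrinks $\psi$ and inflates both `var` and `predicted` together.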
Efficiency and Variance of BWE
------------------------------

As stated, the estimate obtained by $A_k(i)$ is a type of arithmetic mean of $Al_k(x), x \in S_k(i)$, with the same advantages and disadvantages as the arithmetic mean. Thus, $A_k(i)$ is more robust and efficient than $Al_k(x), x \in S_k(i)$, since the former considers more probes than the latter in estimation, although some of the probes may be considered more than once. Because of this, (\[fisher\]) cannot be used to evaluate the efficiency of an estimator in BWE. Despite this, we can put a range on the information obtained by $A_k(i)$, which is $$\begin{aligned} \frac{\psi_k(x)}{A_k(1-A_k\psi_k(x))} \leq {\cal I}(A_k(i)|y) \leq {d_k \choose i} \frac{\psi_k(x)}{A_k(1-A_k\psi_k(x))}. \mbox{ } |x|=i. \end{aligned}$$ In addition, $A_k(i)$ is at least as efficient as $A_k(i+1)$, and the variance of $A_k(i)$ is at least as small as that of $A_k(i+1)$, since $\sum_{x \in S_k(i)} I_k(x) \leq \sum_{x \in S_k(i+1)} I_k(x)$.

Example
-------

We conclude this section with an example that illustrates the differences among the variances of four estimators. The four estimators are: direct measurement, the original MLE, $Al_k(x), |x|=2$, and $Al_k(d_k)$. The setting used here is identical to that presented in [@DHPT06], where node $k$ has three children with a pass rate of $\alpha, 0 < \alpha \leq 1$, and the pass rate from the root to node $k$ is also equal to $\alpha$. Using (\[Al variance1\]), we have the variances of the four estimators:

1.  $\alpha-\alpha^2$,

2.  $\frac{1}{3(1-\alpha)+\alpha^2}-\alpha^2$,

3.  $\frac{1}{\alpha}-\alpha^2$, and

4.  $\frac{1}{\alpha^2}-\alpha^2$.

The difference between them becomes obvious as $\alpha$ decreases from $1$ to $0.99$, where the variances of the four estimators change from 0 to 0.01, 0.01, 0.03, and 0.04, respectively.
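The four expressions above are easy to evaluate directly; the sketch below reproduces the numbers quoted for $\alpha=0.99$ (which match to two decimal places), using $\beta_k=1-(1-\alpha)^3=\alpha(3(1-\alpha)+\alpha^2)$ for the original MLE.

```python
def example_variances(alpha):
    """Per-probe variances of the four estimators in the three-child example:
    direct measurement, original MLE, Al_k(x) with |x| = 2, and Al_k(d_k)."""
    beta_k = 1 - (1 - alpha) ** 3      # equals alpha * (3*(1-alpha) + alpha**2)
    return (alpha - alpha ** 2,        # direct measurement
            alpha / beta_k - alpha ** 2,       # original MLE
            1 / alpha - alpha ** 2,            # Al_k(x), |x| = 2 (psi = alpha^2)
            1 / alpha ** 2 - alpha ** 2)       # Al_k(d_k)       (psi = alpha^3)

v = example_variances(0.99)   # -> approximately (0.0099, 0.0099, 0.0300, 0.0402)
```

The last value is roughly four times the second, which is the ratio discussed in the next paragraph.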
The variance of $Al_k(d_k)$ is 4 times that of the original MLE, which is significantly different from the result obtained in [@DHPT06]. Although the variances decrease as the number of probes multicast increases, the ratio between them remains.

Model Selection and Simulation {#section 6}
==============================

The large number of estimators in IBE, RSE and BWE, plus the original MLE, makes model selection possible. However, finding the most suitable one in terms of efficiency and computational complexity is a hard task, since the two goals conflict with each other. Although one is able to identify the most suitable estimator by computing the Kullback-Leibler divergence or the composite Kullback-Leibler divergence of the estimators, the cost of computing the Akaike information criterion (AIC) for each of the estimators makes this approach prohibitive. Nevertheless, the derivation of (\[MLE fisher equ\]) solves the problem to some degree, since (\[MLE fisher equ\]) shows that the most suitable estimator should have a bigger $\beta_k(x)$, which can be identified from end-to-end observation since $\beta_k(x) \propto \prod_{j \in x} \gamma_j$.
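The selection rule suggested by (\[MLE fisher equ\]) — prefer the group $x$ with the largest $\beta_k(x)$, i.e. the largest $\prod_{j\in x}\gamma_j$ — can be sketched directly on empirical pass rates. The $\gamma$ values below are hypothetical, with two subtrees made noticeably lossier than the others so that the rule visibly excludes them.

```python
import itertools

# Hypothetical empirical pass rates gamma_j of the 8 subtrees rooted at node k;
# subtrees 7 and 8 are noticeably lossier than the rest.
gamma = {1: 0.97, 2: 0.96, 3: 0.97, 4: 0.95, 5: 0.96, 6: 0.97, 7: 0.90, 8: 0.89}

def prod(vals):
    p = 1.0
    for v in vals:
        p *= v
    return p

def best_group(i):
    """The i-subtree group x maximizing prod_j gamma_j, a proxy for beta_k(x)."""
    return max(itertools.combinations(gamma, i),
               key=lambda x: prod(gamma[j] for j in x))

pair, triple = best_group(2), best_group(3)   # groups for |x| = 2 and |x| = 3
```

A full selection procedure would go on to compute the chosen $Al_k(x)$ or $Am_k(x)$ on the selected group; the point here is only that the ranking needs nothing beyond the end-to-end $\gamma_j$.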
  Estimators       OMLE                $A_k(2)$            $A_k(3)$            $Al_k(x), |x|=2$    $Al_k(x), |x|=3$
  ------------ -------- ---------- -------- ---------- -------- ---------- -------- ---------- -------- ----------
  samples          Mean        Var     Mean        Var     Mean        Var     Mean        Var     Mean        Var
  300            0.0088   1.59E-05   0.0088   1.59E-05   0.0088   1.64E-05   0.0087   1.59E-05   0.0087   1.61E-05
  900            0.0092   7.76E-06   0.0092   7.82E-06   0.0091   7.84E-06   0.0092   7.90E-06   0.0092   8.15E-06
  1500           0.0096   4.55E-06   0.0096   4.55E-06   0.0096   4.80E-06   0.0096   4.78E-06   0.0096   4.33E-06
  2100           0.0097   3.14E-06   0.0097   3.11E-06   0.0097   3.14E-06   0.0097   3.02E-06   0.0097   3.08E-06
  2700           0.0100   1.72E-06   0.0100   1.72E-06   0.0100   1.74E-06   0.0100   1.81E-06   0.0100   1.83E-06

  : Means and variances of the estimates in the first round of simulations, where the loss rates of the path and of all subtrees equal $1\%$ \[Tab2\]

  Estimators       OMLE                $A_k(2)$            $A_k(3)$            $Al_k(x), |x|=2$    $Al_k(x), |x|=3$
  ------------ -------- ---------- -------- ---------- -------- ---------- -------- ---------- -------- ----------
  samples          Mean        Var     Mean        Var     Mean        Var     Mean        Var     Mean        Var
  300            0.0088   1.59E-05   0.0089   1.64E-05   0.0089   1.68E-05   0.0091   2.36E-05   0.0088   1.95E-05
  900            0.0091   7.76E-06   0.0091   7.80E-06   0.0091   7.83E-06   0.0092   9.74E-06   0.0091   8.67E-06
  1500           0.0096   4.55E-06   0.0096   4.72E-06   0.0096   4.81E-06   0.0097   4.36E-06   0.0096   4.45E-06
  2100           0.0097   3.14E-06   0.0097   3.11E-06   0.0097   3.11E-06   0.0098   3.39E-06   0.0097   3.04E-06
  2700           0.0100   1.72E-06   0.0100   1.69E-06   0.0100   1.67E-06   0.0101   2.11E-06   0.0100   1.90E-06

  : Means and variances of the estimates in the second round of simulations, where six subtrees have a loss rate of $1\%$ and the other two $5\%$ \[Tab3\]

  Estimators       OMLE                $A_k(2)$            $A_k(3)$            $Al_k(x), |x|=2$    $Al_k(x), |x|=3$
  ------------ -------- ---------- -------- ---------- -------- ---------- -------- ---------- -------- ----------
  samples          Mean        Var     Mean        Var     Mean        Var     Mean        Var     Mean        Var
  300            0.0503   2.15E-04   0.0504   2.15E-04   0.0505   2.14E-04   0.0508   2.18E-04   0.0505   2.16E-04
  900            0.0511   5.85E-05   0.0511   5.81E-05   0.0511   5.79E-05   0.0512   5.79E-05   0.0512   5.88E-05
  1500           0.0502   2.24E-05   0.0502   2.24E-05   0.0502   2.23E-05   0.0503   2.33E-05   0.0502   2.32E-05
  2100           0.0507   1.16E-05   0.0507   1.19E-05   0.0507   1.20E-05   0.0507   1.09E-05   0.0507   1.13E-05
  2700           0.0507   1.31E-05   0.0507   1.34E-05   0.0507   1.35E-05   0.0508   1.35E-05   0.0507   1.34E-05

  : Means and variances of the estimates in the third round of simulations, where the path loss rate is $5\%$, four subtrees have a loss rate of $5\%$, and the other four $1\%$ \[Tab4\]

Simulation
----------

To compare the effectiveness, robustness, and sensitivity of the original MLE, $A_k(i)$, and $Al_k(x)$, three rounds of simulations are conducted in three
settings. The multicast tree used in the simulations has a path/link from the root to node $k$, which in turn has 8 subtrees connecting to the receivers. Five estimators: the original MLE (OMLE), $A_k(2), A_k(3), Al_k(x), |x|=2$, and $Al_k(x), |x|=3$, are compared against each other in the simulations. The number of samples used in the simulations varies from 300 to 2700 in steps of 600. For each sample size, 20 experiments with different initial seeds are carried out, and the means and variances of the estimates obtained by the five estimators are presented in three tables, from Table \[Tab2\] to Table \[Tab4\]. Table \[Tab2\] presents the results obtained in the first round, in which the loss rate of each subtree is set to 1%, as is the loss rate of the path from the root to node $k$. The results show that when the sample size is small, the estimates obtained by all estimators drift away from the true value, indicating that the data obtained are not sufficient. With the increase of the sample size, the estimates gradually approach the true value and all of the estimators achieve a similar outcome. As expected, the variances decrease with the sample size, in agreement with (\[Al variance1\]), and there is no significant difference among the estimators since all of the subtrees connected to node $k$ have the same loss rates. Despite this, the variance of $Al_k(x), |x|=2$ is slightly better than that of $Al_k(x), |x|=3$, as specified by Theorem \[Al variance\]. To compare the sensitivity and robustness, another round of simulation is carried out on the same network. The difference between this round and the previous one is the loss rates of the subtrees connected to node $k$, where 6 of the 8 subtrees have loss rates equal to $1\%$ and the other two have loss rates equal to $5\%$.
The two subtrees considered by $Al_k(x), |x|=2$ have loss rates equal to $1\%$ and $5\%$, respectively, whereas two of the three subtrees considered by $Al_k(x), |x|=3$ have loss rates equal to $1\%$ and the other has a loss rate equal to $5\%$. The results are presented in Table \[Tab3\]. Comparing Table \[Tab3\] with Table \[Tab2\], there is no change for the original MLE and there are slight changes for the estimates obtained by $A_k(2)$ and $A_k(3)$. This confirms the robustness of the original MLE and $A_k(i)$ over $Al_k(x)$. In contrast to the original MLE and $A_k(i)$, the variances of the estimates obtained by $Al_k(x), |x|=2$ and $Al_k(x), |x|=3$ show noticeable differences from their counterparts, in particular when the sample size is small, because each of these estimators considers a descendant with a higher loss rate than that used in the first round. The advantage of this shows in the mean obtained by $Al_k(x), |x|=2$, which approaches the true value more quickly than in the first round and than that of $Al_k(x), |x|=3$. This is because one of the two descendants considered by $Al_k(x), |x|=2$ has a higher loss rate than the other, which increases the probability of matching the predictor to its observation. In contrast to $Al_k(x), |x|=2$, the mean of $Al_k(x), |x|=3$ changes little from that obtained in the first round. This reflects the tradeoff between efficiency and robustness among $Al_k(x)$: the larger $|x|$ is, the more robust $Al_k(x)$ is to fluctuations of the loss rates in $x$. To obtain a result similar to the original MLE, we should select the subtrees with loss rates equal to $1\%$ for $Al_k(x), |x| = 2$ or $3$. Then, a result similar to that presented in Table \[Tab2\] should be obtained.
To verify the claim made at the end of the last paragraph, we conduct another round of simulation, where the loss rate of the path of interest is increased from $1\%$ to $5\%$, and the loss rates of the eight subtrees rooted at node $k$ are divided into two groups: four of them are set to $5\%$ and the other four to $1\%$. The two estimators from IBE, i.e. $Al_k(x), |x|=2$ and $3$, consider the observations obtained from the subtrees that have loss rates equal to $1\%$. The results are presented in Table \[Tab4\], which confirms that the estimates of $Al_k(x)$ can be as good as those of the OMLE. In comparison with Table \[Tab2\], there are two noticeable differences in Table \[Tab4\]:

-   the means of the estimates approach the true value more quickly; and

-   the variances are an order of magnitude higher.

The first can be derived from Theorem \[Al fisher\] and Corollary \[MLE fisher\], since the efficiency of an estimator is inversely proportional to $A_k$; whereas the second follows from Theorem \[Al variance\], which states that a smaller $\delta_k(x)$ results in a bigger variance. The simulations show that the original MLE is undoubtedly the most robust estimator, fitting all three situations well, although it reacts more slowly than some of the estimators proposed in this paper to variations in the observation. In contrast, in each of the situations there is always an estimator that has a performance similar to that of the MLE. The findings of this paper make it possible to identify a suitable estimator according to end-to-end observation.

Conclusion {#section7}
==========

This paper starts from seeking insights that can lead to efficient explicit estimators for loss tomography and ends with a large number of unbiased or asymptotically unbiased and consistent explicit estimators, plus a number of theorems and corollaries that establish the statistical properties of the estimators.
One of the most important findings is the formulae to compute the variances of $A_k$ estimated by the estimators in RSE, IBE and the original MLE. Apart from clearly expressing the connection between the path to be estimated and the subtrees connecting the path to the observers of interest, the formulae potentially have many applications in network tomography, some of which have been identified in this paper. For instance, using the formulae, we have ranked the MLEs proposed so far, including those proposed in this paper. In addition, the formulae make model selection possible in loss tomography; the multicast used in end-to-end measurement is then no longer only for creating various correlations but also for identifying the subtrees that can be used in estimation. The effectiveness of this strategy has been verified in a simulation study. Beyond these, there are other potential uses of the formulae and the findings that require further exploration.

[^1]: Weiping Zhu is with University of New South Wales, Australia, email w.zhu@adfa.edu.au
---
abstract: 'We implement the contractor-renormalization method to study the checkerboard Hubbard model on various finite-size clusters as a function of the inter-plaquette hopping $t''$ and the on-site repulsion $U$ at low hole doping. We find that the pair-binding energy and the spin gap exhibit a pronounced maximum at intermediate values of $t''$ and $U$, thus indicating that moderate inhomogeneity of the type considered here substantially enhances the formation of hole pairs. The rise of the pair-binding energy for $t''<t''_{\rm max}$ is kinetic-energy driven and reflects the strong resonating valence bond correlations in the ground state that facilitate the motion of bound pairs as compared to single holes. Conversely, as $t''$ is increased beyond $t''_{\rm max}$, antiferromagnetic magnons proliferate and reduce the potential energy of unpaired holes and with it the pairing strength. For the periodic clusters that we study the estimated phase-ordering temperature at $t''=t''_{\rm max}$ is a factor of 2–6 smaller than the pairing temperature.'
author:
- Shirit Baruch and Dror Orgad
title: 'A contractor-renormalization study of Hubbard plaquette clusters'
---

Introduction {#intro}
============

It is by now generally accepted that spatial inhomogeneity may emerge either as a static or as a fluctuating effect in strongly-coupled models of the high-temperature superconductors, and indeed in many of the real materials.[@ourreview] What is far from being settled is the issue of whether such inhomogeneity is [*essential*]{} to the mechanism of high-temperature superconductivity from repulsive interactions.
While most researchers would probably answer this question in the negative, one should bear in mind the absence of conclusive evidence that the single-band two-dimensional Hubbard model, widely believed to be the “standard model” of high-temperature superconductivity, actually supports superconductivity with a high transition temperature.[@aimi] On the other hand, when examined on small clusters the same model and its strong-coupling descendant, the $t-J$ model, exhibit robust signs of incipient superconductivity in the form of a spin-gap and pair binding.[@ourreview] This fact points to the possibility that the strong susceptibility towards pairing is a consequence of the confining geometry itself. This line of thought has been pursued in the past by considering the extreme limit where the electronic density modulation is so strong that the system consists of weakly coupled Hubbard ladders[@optimal-ladder; @optimal-AFK] or plaquettes[@wf-steve]. Beyond the questionable applicability of such models to the physical systems, which are at most only moderately modulated, it is clear that strong inhomogeneity, even if beneficial to pairing, is detrimental to the establishment of phase coherence and consequently to superconductivity. On both counts it is, therefore, desirable to extend the analysis to the regime of intermediate inhomogeneity. Recently, the checkerboard Hubbard model, constructed from 4-site plaquettes with nearest-neighbor hopping $t$ and on-site repulsion $U$, was studied as a function of the inter-plaquette hopping $t'$ (see Fig. \[model-fig\]). Tsai [@steve-exact] diagonalized exactly the $4\times 4$ site cluster ($2\times 2$ plaquettes) and found that the pair-binding energy, as defined by Eq. (\[pb-def\]) below, exhibits a substantial maximum at $t'\approx t/2$ for $U\approx 8t$ and low hole concentration.
Doluweera [@DMFT-cluster], on the other hand, used the dynamical cluster approximation in the range $0.8\le t'/t\le 1$ and obtained a monotonic increase in both the strength of the $d$-wave pairing interaction and the superconducting transition temperature, $T_c$, towards a maximum that occurs in the homogeneous model. In this paper, we use the contractor-renormalization (CORE) method[@CORE] to derive an effective low-energy Hamiltonian for the checkerboard Hubbard model, which we then diagonalize numerically on various finite-size clusters. We begin by establishing the region of applicability of the CORE approximation by contrasting its predictions with the exact results of Ref. for $2\times 2$ plaquettes. Our findings indicate that at low concentrations of doped holes the two approaches agree reasonably well unless $t'$ is larger than a value, which increases with $U$. Deviations also appear for small $t'$ when $U$ is large. We identify probable sources of these discrepancies. Based on the lessons gained from the small system we go on to study larger clusters of up to 10 plaquettes. These include the periodic $6\times 6$ sites cluster and 2-leg and 4-leg ladders with periodic boundary conditions along their length. Within the region where CORE is expected to provide reliable results the pair-binding energy continues to exhibit a non-monotonic behavior with a pronounced maximum at intermediate values of $t'$ and $U$. The precise location of the maximum depends on the cluster geometry but it typically occurs in the range $t'_{\rm max}\approx 0.5-0.7t$ and $U_{\rm max}\approx 5-8t$. The spin gap of the doped system follows a similar trend, often reaching the maximum slightly before the pair-binding energy. These findings demonstrate that moderate inhomogeneity, of the type considered here, can substantially enhance the binding of holes into pairs. 
In an effort to elucidate the source of the maximum we have looked into the content of the ground state and calculated the contributions of various couplings in the effective Hamiltonian to its energy. Our results indicate that for $t'<t'_{\rm max}$ the doped holes move in a background, which is composed predominantly of plaquettes that are in their half-filled ground state. This background possesses strong intra-plaquette singlet resonating valence bond (RVB) correlations, which facilitate the propagation of pairs relative to independent holes. The rise in the pair-binding energy while $t'$ grows towards $t'_{\rm max}$ is a result of a faster decrease of the pair kinetic energy in comparison to that of unpaired fermions. As $t'$ crosses $t'_{\rm max}$ and approaches the uniform limit the ground state contains a growing number of plaquettes that support antiferromagnetic (AFM) magnons. In this regime of increasing AFM correlations the kinetic energy changes relatively little with $t'$, and the decrease of the pair-binding energy for $t'>t'_{\rm max}$ is caused by the lowering of the energy of single holes due to their interactions with the magnons. Interestingly, we find that the maximum in the pair-binding energy of the periodic clusters is accompanied by a change in the crystal momentum of the single-hole ground state from the $\Gamma-{\rm M}$ and symmetry related directions at $t'<t'_{\rm max}$ to the Brillouin-zone diagonals at $t'>t'_{\rm max}$. A similar correlation was also found for the 3-hole ground state of the $6\times 6$ sites cluster. While the pair-binding energy sets a pairing scale, $T_p$, a phase-ordering scale, $T_\theta$, is provided by the phase stiffness. The latter was evaluated from the second derivative of the ground state energy with respect to a phase twist introduced by threading the system with an Aharonov-Bohm flux. 
We have found that as the twist is taken to zero, the CORE energy curvature typically converges towards a limiting value only when $t'<t'_{\rm max}$. Within this region the phase stiffness increases monotonically with $t'$. Our results indicate that for the lightly doped periodic clusters that we have considered phase fluctuations dominate over pairing, specifically, $T_p\approx 2-6 T_\theta$ at $t'=t'_{\rm max}$. The limitations of the present study make it difficult to draw conclusions regarding the behavior of $T_c$ in the two-dimensional thermodynamic limit. We have also calculated the pair-field correlations between Cooper-pairs that reside on the most distant bonds allowed by our finite clusters. As expected, these correlations are consistent with $d$-wave pairing. However, in contrast to the pair-binding energy and the phase stiffness the correlations change little with $t'$ and are small in magnitude. This discrepancy might be resolved in light of our finding that only few holes are tightly bound into pairs that reside within a single plaquette. Moreover, we obtain that the number of such pairs changes relatively little with $t'$ with no apparent correlation to the substantial maximum in the pair-binding energy. Taken together these findings suggest that the correlation function which we and others often use to identify and quantify pairing in the Hubbard model may be ill-constructed to take account of the more extended and structured nature of pairing in this model. ![The checkerboard Hubbard model. Shown here are two of the clusters that we studied. 
The bonds labeled $ab$, $cd$, and $ef$ specify locations used in calculating the pairing correlations.[]{data-label="model-fig"}](plaquette.eps){width="\linewidth"}

Model and Method {#models}
================

The Hamiltonian of the checkerboard Hubbard model, which we have studied, is given by $$H=-\sum_{\langle i,j\rangle,\sigma}\left( t_{ij}\, c_{i,\sigma}^\dagger c_{j,\sigma} +{\rm H.c.}\right)+U\sum_i n_{i,\uparrow} n_{i,\downarrow}, \label{H}$$ where $c_{i,\sigma}^\dagger$ creates an electron with spin $\sigma=\uparrow,\downarrow$ at site $i$ of a two-dimensional square lattice. Here $n_{i,\sigma}=c_{i,\sigma}^\dagger c_{i,\sigma}$, and $\langle i,j\rangle$ denotes nearest-neighbor sites. The hopping amplitude is $t_{ij}=t$ for $i$ and $j$ on the same plaquette, while $t_{ij}=t'$ when they belong to neighboring plaquettes, as shown in Fig. \[model-fig\]. The first step in obtaining the CORE effective Hamiltonian for the above model is the exact diagonalization of a four-site plaquette. Out of the full spectrum, the $M$ lowest-energy states are retained. The reduced Hilbert space, in which the effective Hamiltonian operates, is spanned by the tensor products of these states on different plaquettes. Next, the Hamiltonian (\[H\]) is diagonalized on $N$ connected plaquettes and the $M^N$ lowest-energy states are projected onto the reduced Hilbert space and Gram-Schmidt orthonormalized. Finally, after replacing the exact eigenstates by their projections, the $N$-plaquette Hamiltonian can be represented as one for $M$ types of hard-core particles coupled via $N$-body interactions. The CORE approximation consists of applying the resulting effective Hamiltonian to the study of larger clusters. By construction, the spectrum of the CORE Hamiltonian coincides with the low-energy spectrum of the exact problem on $N$ plaquettes.
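The first CORE step (exact diagonalization of one four-site Hubbard plaquette) is small enough to sketch directly. The following is an illustrative, self-contained implementation, not the authors' code: the bit encoding of the Fock space, the bond list, and the sanity check against the one-particle ring spectrum are our own choices.

```python
import numpy as np

t, U = 1.0, 8.0
NSITES = 4
BONDS = [(0, 1), (1, 2), (2, 3), (3, 0)]   # the four edges of one plaquette
DIM = 1 << (2 * NSITES)                    # 256 Fock states: bits 0-3 up, 4-7 down

def hop(state, a, b):
    """Apply c†_a c_b (a, b are bit positions).
    Returns (fermionic sign, new state), or None if the result vanishes."""
    if not (state >> b) & 1 or (state >> a) & 1:
        return None
    s1 = state ^ (1 << b)                  # annihilate at b
    lo, hi = (a, b) if a < b else (b, a)
    between = ((1 << hi) - 1) ^ ((1 << (lo + 1)) - 1)
    sign = -1 if bin(s1 & between).count("1") % 2 else 1
    return sign, s1 | (1 << a)             # create at a

H = np.zeros((DIM, DIM))
for s in range(DIM):
    for i in range(NSITES):                # on-site repulsion U n_up n_dn
        if (s >> i) & 1 and (s >> (i + NSITES)) & 1:
            H[s, s] += U
    for i, j in BONDS:                     # -t hopping, both spins, both directions
        for off in (0, NSITES):
            for a, b in ((i + off, j + off), (j + off, i + off)):
                res = hop(s, a, b)
                if res is not None:
                    sign, s2 = res
                    H[s2, s] += -t * sign

evals = np.linalg.eigvalsh(H)
```

The plaquette eigenstates would then be sorted into particle-number, spin and momentum sectors and the $M=9$ lowest states retained; that bookkeeping is omitted here.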
We note, however, that this ceases to be the case if one or more of the exact low-energy states have zero projection on the reduced Hilbert space, or, if some of them are projected onto the same tensor-product state. In the following we demonstrate that such a problem arises in certain parameter regions of the model (\[H\]). We concentrate on relatively low hole densities as measured from the half-filled system. The simplest truncation used to describe this regime is to retain the ground state of the half-filled plaquette \[a total spin singlet $S=0$ with plaquette momentum ${\bf q}=(0,0)$\], its $S=1$, ${\bf q}=(\pi,\pi)$ triplet of lowest lying AFM magnon excitations, and the $S=0$, ${\bf q}=(0,0)$ hole pair ground state.[@AAcore] The inclusion of the magnon excitations is essential for retrieving the correct magnetic behavior at low hole doping.[@core-res] Below we show that they also play an important role in the physics of hole binding. One can improve the approximation by including in the CORE plaquette basis also the two degenerate doublets $S_z=\pm 1/2$, ${\bf q}=(0,\pi),(\pi,0)$, comprising the single hole ground state.[@core-res; @core-tj] Moreover, the inclusion of these states is mandatory for the purpose of calculating the pair binding energy, which is one of the goals of the present work. Consequently, our CORE scheme consists of keeping the above mentioned $M=9$ states. We have considered only range-2 interactions, $N=2$. The resulting effective Hamiltonian includes all possible couplings, which respect the symmetries of the 2-plaquettes problem. These include the conservation of number of holes $N_h$, invariance under SU(2) spin rotations and under reflections about the central bonds of the cluster in the $x$ and $y$ directions. The latter, together with the conservation of $N_h$, imply that within our reduced Hilbert space, as defined above, the total plaquette momentum ${\bf q_1+q_2}$ is also conserved (modulo $2\pi$). 
We will not list here the 45 couplings which are allowed by the symmetries. Instead, we will describe the most important ones in the appropriate context and refer the reader to the Appendix for a detailed description of the Hamiltonian. Many of the results reported in the following are derived from the spectrum of the effective Hamiltonian as obtained by exact diagonalization. We also calculate various ground-state correlations. To this end we project the appropriate operators on the reduced Hilbert space[@CORE] before evaluating their ground-state correlation function.

Results
=======

Although the size of the Hilbert space is massively reduced by the CORE approximation, it still grows exponentially with the size of the system. Therefore, even the largest clusters that we are able to diagonalize using this method are too small for a direct calculation of $T_c$. Instead we calculate various properties of the system which are indicative of the two necessary ingredients for superconductivity: pairing and phase stiffness. We begin with the former and study its behavior as a function of $t'$ and $U$ on various geometries. These include the $4\times 4$ and $6\times 6$ periodic clusters, seen in Fig. \[model-fig\], as well as 2-leg and 4-leg ladders with periodic boundary conditions along their length, which extends up to 20 sites. ![The pair-binding energy in a periodic $4\times 4$ cluster at 1/16 hole doping as obtained by (a) CORE, and (b) exact diagonalization (Ref. ). CORE projects out low-energy states from the effective Hilbert space in the region above the dashed line. The crystal momentum of the degenerate single-hole ground state is $(0,\pi)$ and $(\pi,0)$ below the solid line and $(0,0)$ and $(\pi,\pi)$ above it.[]{data-label="bindingCont1"}](2x2BindingContours.eps){width="\linewidth"} ![The pair-binding energy in a periodic $4\times 4$ cluster at 1/16 doping for various values of the interaction strength (a) $U=4t$, (b) $U=8t$, and (c) $U=10t$.
Triangles depict the CORE results and circles correspond to the exact diagonalization results of Ref. .[]{data-label="binding1"}](binding1.eps){width="\linewidth"}

Pair-binding energy and spin-gap
--------------------------------

The pair-binding energy is defined by $$\Delta_{pb}(M/N)=2E_0(M)-\left[E_0(M+1)+E_0(M-1)\right], \label{pb-def}$$ where $E_0(M)$ is the ground-state energy of the system with $M$ holes doped into the $N$-site half-filled cluster. Consider two identical clusters each with $M$ holes. If holes tend to pair and $M$ is odd, it should be energetically favorable to move an electron from one cluster to another in order to obtain a fully-paired state in both. On the other hand, such a redistribution should be unfavorable if $M$ is even. In this sense, a positive $\Delta_{pb}$ for odd $M$ and a negative $\Delta_{pb}$ for even $M$ signifies an effective attraction between holes. Recently, Tsai [@steve-exact] have found by exact diagonalization of the periodic $4\times 4$ cluster that the pair-binding energy exhibits a pronounced maximum both as a function of $t'$ and $U$. Their results allow for a critical evaluation of the validity of the CORE method in a range of parameters. To this end we present in Figs. \[bindingCont1\] and \[binding1\] a comparison between the CORE and the exact results for $\Delta_{pb}(1/16)$. It is clear that CORE introduces substantial errors in two specific regimes: small $U$ and large $t'$ \[Fig. \[binding1\](a)\], and large $U$ and small $t'$ \[Fig. \[binding1\](c)\], while it is in reasonable agreement with the exact results in the intermediate parameter regime. An obvious source for the discrepancies is the fact that our CORE approximation includes only range-2 couplings. Longer-range interactions are expected to become more important as the system becomes more homogeneous when $t'\rightarrow t$.
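As a minimal numerical companion to Eq. (\[pb-def\]), the sketch below evaluates $\Delta_{pb}$ from a table of ground-state energies. The energies are invented solely to illustrate the sign alternation $(-1)^{M+1}$ discussed above; they are not data from the paper.

```python
def pair_binding(E0, M):
    """Delta_pb(M/N) = 2 E0(M) - [E0(M+1) + E0(M-1)], Eq. (pb-def).

    E0 maps the number of doped holes to the ground-state energy
    of the half-filled cluster doped with that many holes."""
    return 2.0 * E0[M] - (E0[M + 1] + E0[M - 1])

# Hypothetical energies for a system with attractive hole pairing:
# odd-M values lie above the even-M interpolation, so the sign alternates.
E0 = {0: 0.0, 1: -1.0, 2: -2.5, 3: -3.3, 4: -4.6}
deltas = [pair_binding(E0, M) for M in (1, 2, 3)]   # signs: +, -, +
```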
We believe that the deviations between the CORE predictions and the exact results in this limit, especially for small $U$ where the pair size is expected to be large, are mainly due to insufficient range of the effective interactions. A related problem may emerge at large $U$ where the extent of magnetic correlations grows. However, we did not confirm these conjectures by explicit calculations. A more subtle source of errors, which we have mentioned already in the previous Section, is the fact that low-energy states may be projected out from the CORE effective Hilbert space in the process of generating the effective Hamiltonian. This happens when a low-lying state of a connected cluster has zero overlap with the tensor-product states of the effective Hilbert space or when two or more low-lying states are mapped onto the same state in the effective space. (Note, however, that spin-rotation symmetry is preserved in the sense that spin multiplets are either kept or projected out as a whole.) Fig. \[p-out\] depicts for each of the sectors in which such a problem arises the excitation energy of the lowest projected-out state in units of the bandwidth of the kept states in the sector. We also indicate in Figs. \[bindingCont1\] and \[bindingCont2\] the parameter region where the problem occurs. ![The excitation energy of the lowest energy state that is projected-out by CORE in units of the bandwidth of the kept states in its sector.[]{data-label="p-out"}](projOut4.eps){width="\linewidth"} The overlap issue is responsible for the failure of CORE in the regime of small $t'$ and large $U$. When $U>7.858t$ and for $t'=0$ the $N_h=1$, $S=3/2$ double-plaquette (eight-fold degenerate) ground state $|N_h=1,S=3/2\rangle_2$ consists of one plaquette in its half-filled ground state and a second plaquette in a fully-polarized $S=3/2$ single-hole state. The latter resides outside the effective Hilbert space and therefore $|N_h=1,S=3/2\rangle_2$ is projected out.
This ceases to be the case once $t'$ is turned on as a result of a component which appears in $|N_h=1,S=3/2\rangle_2$ and corresponds to a system with a magnon on one plaquette and a plaquette-fermion on the other. However, the amplitude of this component diminishes with increasing $U$. This leads CORE to misidentify the nature of $|N_h=1,S=3/2\rangle_2$ and induces an abrupt increase in the magnon-fermion interaction \[$V_{ft}^{3/2,\nu,{\bf q}}$ in Eq. (\[Vt\])\] for small $t'$. As a result, CORE underestimates the energy of the two-hole ground state of the $4\times 4$ cluster and consequently predicts an erroneously large pair-binding energy, see Fig. \[binding1\]. Nevertheless, it appears that away from this region of parameters the projected-out states are high enough in energy as to not cause qualitative errors. Based on the comparison of $\Delta_{pb}$ depicted in Figs. \[bindingCont1\],\[binding1\] and similar plots presented below for the spin-gap \[Fig. \[spinGap\](a)\] and pair-field correlations \[Fig. \[pairingCorr\](a)\] we conclude that CORE agrees semi-quantitatively with the exact results provided $U/50 \lesssim t'\lesssim U/8$. Within this region, and across all geometries studied, we found the pair-binding energy to exhibit the same qualitative behavior consisting of a broad peak both as function of $t'$ and $U$. This conclusion holds true also when one varies the doping level (at least in the low-doping regime which we have considered) as can be seen from the results for the $6\times 6$ cluster presented in Fig. \[bindingCont2\]. In addition, the same figure suggests that the above mentioned problems with the CORE method become less severe as the size of the system increases. ![The pair-binding energy in a periodic $6\times 6$ cluster at (a) 1/36, and (b) 3/36 hole doping. CORE projects out low energy states from the effective Hilbert space in the region above the dashed line. 
In (a) the crystal momentum of the degenerate single-hole ground state is $(0,\pm2\pi/3)$ and $(\pm2\pi/3,0)$ below the solid line and $(\pm2\pi/3,\pm2\pi/3)$ above it. In (b) the crystal momentum of the degenerate 3-hole ground state is $(\pm2\pi/3,\pm2\pi/3)$ between the solid lines and $(0,\pm2\pi/3)$ and $(\pm2\pi/3,0)$ elsewhere.[]{data-label="bindingCont2"}](3x3BindingContours.eps){width="\linewidth"} ![The spin gap of undoped and two-hole doped systems at $U=8t$. (a) The $4\times 4$ periodic cluster - comparison between CORE and exact diagonalization results. (b) CORE results for the spin-gap $\Delta_{\text {\small{\it s}}}$ and the pair-binding energy $\Delta_{pb}(1/36)$ of the $6\times 6$ periodic cluster.[]{data-label="spinGap"}](spinGap3.eps){width="\linewidth"} The association of positive pair-binding energy with Cooper pairing may be contested on the grounds that it can also be taken as evidence for a tendency of the system to phase separate. We believe that this is not the case for the model studied here for the following reasons. First, in accordance with the interpretation discussed above of $\Delta_{pb}$ as an indication of hole pairing, we have found its sign to change according to $(-1)^{M+1}$ for all the clusters and doping levels which we have considered. Second, while the appropriate criteria for identifying regimes of phase separation from finite-size studies include the Maxwell construction[@Hellberg] and measurements of the surface tension in the presence of boundary conditions that force phase coexistence, a crude way of identifying phase separation is by calculating the inverse compressibility $\kappa^{-1}=n^2 \partial\mu/\partial n$, where $\mu$ is the chemical potential and $n$ the electronic density. For numerical purposes a discrete version is used, which in our case reads $$\kappa^{-1}\propto E_0(M+2)+E_0(M-2)-2E_0(M). \label{comp-def}$$ Negative inverse compressibility indicates instability towards phase separation. We always find $\kappa^{-1}>0$.
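The discrete test of Eq. (\[comp-def\]) is equally short; with any convex set of ground-state energies (the numbers below are again hypothetical) it comes out positive, i.e. no tendency to phase separate.

```python
def inv_compressibility(E0, M):
    """Discrete inverse compressibility, Eq. (comp-def):
    kappa^{-1} ~ E0(M+2) + E0(M-2) - 2 E0(M).
    A negative value would signal an instability towards phase separation."""
    return E0[M + 2] + E0[M - 2] - 2.0 * E0[M]

# Made-up convex energy sequence (same illustrative numbers as above):
E0 = {0: 0.0, 1: -1.0, 2: -2.5, 3: -3.3, 4: -4.6}
kinv = inv_compressibility(E0, 2)   # positive for a convex E0(M)
```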
Finally, whenever the ground state is a spin singlet, one can define the spin-gap as the energy gap to the lowest $S=1$ excitation. We have calculated the spin gap for the two-hole doped systems and found that in all cases it follows the pair-binding energy in the regime of small to moderate $t'$, see Figs. \[spinGap\] and \[binding2\]. This coincidence strongly suggests that in this regime the lowest $S=1$ excitation is a result of a dissociation of a hole pair into two separate holes. It is interesting to note that we always observe that the spin-gap reaches a maximum and starts to drop before the pair-binding energy does so. This may be an indication that moderate inhomogeneity supports the formation of a bound $S=1$ magnon–hole-pair state.[@core-res; @spingap2leg] ![The spin-gap $\Delta_{\text {\small{\it s}}}$ and pair-binding energy $\Delta_{pb}$ at $U=8t$ for two-hole doped (a) 2-leg ladders, (b) 4-leg ladders.[]{data-label="binding2"}](gapsLadders.eps){width="\linewidth"} Consequently, our findings and the above arguments lead us to conclude that inhomogeneity of the type included in the checkerboard Hubbard model substantially enhances hole-pairing. The precise position of the point of optimal inhomogeneity in the sense of strongest pairing depends on the cluster geometry and interaction strength. Nevertheless, it typically occurs in the range $t'_{\rm max}\approx 0.5-0.7t$ and $U_{\rm max}\approx 5-8t$. We note that this fact implies that the physics behind the large pairing scale of the model necessarily involves inter-plaquette couplings since the single plaquette does not support hole-pairing beyond $U_c\approx 4.6t$.[@AAcore]

Energetics and structure of the ground state
--------------------------------------------

What drives the enhancement of hole-pairing and what is the reason for its maximum as a function of $t'$?
In an attempt to gain insights into these questions, we have taken advantage of the fact that CORE provides us with an effective Hamiltonian whose various couplings can be classified and analyzed. To this end we have divided the 45 different couplings into four groups, as described in the Appendix. They include: fermion and hole-pair “bare” kinetic terms (including fermion and pair hopping as well as Andreev-like pair creation and disintegration), magnon-assisted fermion and pair hopping, fermion and pair interactions and finally, interactions involving magnons. Fig. \[coupling1\] depicts the contribution of each group to the ground-state energy of the $N_h=0,1,2$ doped $6\times 6$ periodic cluster and to its pair-binding energy $\Delta_{pb}(1/36)$ at $U=8t$. Fig. \[coupling1\] makes it clear that the increase in the pair-binding energy from $t'=0$ to $t'_{\rm max}$ is dominated by a faster decrease of the kinetic energy of hole pairs as compared to unpaired holes. Furthermore, in this region the pair-binding energy is largely determined by the “bare” kinetic terms while the (negative) contribution of hopping processes that involve magnons is much smaller. The small contributions of the various interactions approximately cancel out. Looking more closely at the way charges propagate in this range of $t'$, we found that the main channel for single holes is a direct hop between neighboring plaquettes but that this process is virtually non-existent for hole pairs. Instead, a pair propagates predominantly by Andreev-like dissociation into single holes on adjacent plaquettes and recombination of these holes into a pair one register away from its original position \[as described by the last term in Eq. (\[Kbf\])\]. For $t'>t'_{\rm max}$ the behavior changes qualitatively and rather abruptly. The gain in kinetic energy of the pair relative to that of unpaired holes ceases to increase.
While pairs continue to propagate mainly via a series of dissociation and recombination events, single holes move almost exclusively by hopping processes involving magnons \[the second and third terms in Eq. (\[Kbft\])\]. The decrease in the pair-binding energy in this regime is induced by a sharp decrease of the potential energy of the unpaired holes owing to their interactions with the magnons. On the other hand, the contribution of interactions not involving the magnons to the pair-binding energy does not show a significant change as $t'$ is driven through $t'_{\rm max}$. ![Ground state expectation values of various effective couplings for the $6\times 6$ periodic cluster at $U=8t$: (a) fermion and pair hopping; (b) fermion and pair magnon-assisted hopping; (c) fermion and pair interactions; (d) interactions involving magnons; (e) the full Hamiltonian. The insets show the contribution of each group of couplings to the pair-binding energy. The full binding energy reaches a maximum at $t'=0.6t$ as indicated by the dotted line.[]{data-label="coupling1"}](coupling3.eps){width="\linewidth"} The above results suggest that the AFM magnons play an important role in inducing the change in the behavior of the pair-binding energy. To further test this conclusion we have looked at the evolution of the ground-state content with $t'$. Fig. \[eigens2\] shows the average number of magnons, fermions and pairs in the $N_h=0-4$ ground states of the $6\times 6$ periodic cluster. Evidently, the magnons begin to proliferate slightly before the maximum in the pair-binding energy is reached. Concomitantly, there is an increase of AFM correlations in the system as can be seen from Fig. \[magnetization\], which depicts the staggered magnetization $m_{(\pi,\pi)}$ defined by $$m_{(\pi,\pi)}^2=\left\langle\Big[\frac{1}{N}\sum_j e^{i{\bf Q}\cdot{\bf r_j}}\,{\bf S_j}\Big]^2\right\rangle, \label{stag}$$ where ${\bf Q}=(\pi,\pi)$ and ${\bf S_j}$ is the electronic spin operator on site $j$ at position ${\bf r_j}$.
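For classical spin vectors, the staggered magnetization of Eq. (\[stag\]) reduces to a Fourier component of the spin pattern. The sketch below uses that simplification purely for illustration (in the paper the ${\bf S_j}$ are operators and Eq. (\[stag\]) is a ground-state average); it recovers $m^2=S^2=1/4$ for a perfect Néel state of spin-$1/2$ moments.

```python
import numpy as np

def staggered_m2(spins, coords):
    """m_Q^2 = |(1/N) sum_j exp(i Q.r_j) S_j|^2 at Q = (pi, pi),
    evaluated for classical spin vectors (illustration only)."""
    Q = np.array([np.pi, np.pi])
    phases = np.exp(1j * coords @ Q)          # (-1)^(x+y) on integer sites
    mvec = (phases[:, None] * spins).sum(axis=0) / len(spins)
    return float(np.real(np.vdot(mvec, mvec)))

# Perfect Neel order on a 4x4 lattice: S_j = (-1)^(x+y) * (0, 0, 1/2)
coords = np.array([(x, y) for x in range(4) for y in range(4)], dtype=float)
spins = np.array([[0.0, 0.0, 0.5 * (-1) ** int(x + y)] for x, y in coords])
m2 = staggered_m2(spins, coords)              # = 0.25 for this configuration
```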
In contrast to the behavior of the magnons, the fermions-to-pairs ratio does not change considerably at moderate values of $t'$. Note that at $t'=0$ all the holes appear as single fermions. This is a manifestation of the absence of pair-binding on the single plaquette at $U=8t$. ![ The average number of (a) magnons, (b) fermions, and (c) hole pairs in the ground state of the $6\times 6$ periodic cluster at $U=8t$. The position of the pair-binding energy maximum is indicated by the dotted line.[]{data-label="eigens2"}](eigens5.eps){width="\linewidth"} We have found that the same behavior, both in terms of energetics and structure of the ground state, persists across the entire range of geometries and doping levels which we have studied. Therefore, we conclude that the initial rise of the pair-binding energy for $t'<t'_{\rm max}$ is kinetic-energy driven. In this range most of the plaquettes are in their half-filled, RVB-correlated ground-state. This type of background facilitates the motion of bound pairs as compared to single holes. When $t'$ approaches $t'_{\rm max}$ the undoped background changes its nature and becomes more AFM. The gain in kinetic energy associated with hole-pairing saturates and instead a gain in the potential energy of unpaired holes sets in due to their interactions with the AFM magnons. This leads to the decrease of the pair-binding energy. Another correlation that we were able to establish is between the maximum of the pair-binding energy and the position of the single-hole ground state in momentum space. In both the $4\times 4$ and $6\times 6$ periodic clusters the ground state shifts from the $\Gamma-{\rm M}$ and symmetry related directions of the Brillouin-zone to the zone-diagonals as $t'$ is increased through $t'_{\rm max}$, see Figs. \[bindingCont1\] and \[bindingCont2\]. 
Specifically, exact diagonalization[@steve-exact-sup] of the $4\times 4$ cluster shows that the crystal momentum changes from $(0,\pi)$ and $(\pi,0)$ to $(0,0)$ and $(\pi,\pi)$ \[CORE finds a similar transition to $(\pi,\pi)$ but misses the $(0,0)$ state.\] In the $6\times 6$ cluster the shift is from $(0,\pm2\pi/3)$ and $(\pm2\pi/3,0)$ to $(\pm2\pi/3,\pm2\pi/3)$ \[except for $U=1-3t$ where in a narrow region above $t'_{\rm max}$ the ground state is at $(0,0)$.\] It is known from quantum Monte-Carlo simulations that the single-hole ground state of the [*homogeneous*]{} two-dimensional $t-J$ model resides at $(\pm\pi/2,\pm\pi/2)$.[@Assaad] One may speculate whether this state is adiabatically connected to the ground state of the inhomogeneous model for $t'>t'_{\rm max}$. The answer to this question is beyond the present study as it requires the diagonalization of larger clusters and the addition of higher-energy plaquette fermions with plaquette momentum $(0,0)$ and $(\pi,\pi)$ to the effective Hilbert space. Regardless of this point, it seems that the transition in the ground state momentum is a possible consequence of the maximum in $\Delta_{pb}$ rather than its cause. We arrive at this conclusion based on the fact that in the $4\times 4$ cluster $\Delta_{pb}(3/16)$ exhibits a maximum of similar magnitude to that of $\Delta_{pb}(1/16)$ while the 3-hole ground state is located at $(0,0)$ and $(\pi,\pi)$ over the entire parameter range.[@steve-exact-sup] In the $6\times 6$ cluster, on the other hand, the maximum in $\Delta_{pb}(3/36)$ is accompanied by a change in the 3-hole ground state momentum, as depicted in Fig. \[bindingCont2\].
![The staggered magnetization in the two-hole and four-hole ground states of the $6\times 6$ periodic cluster at $U=8t$.[]{data-label="magnetization"}](magnetizationN4.eps){width="\linewidth"} Phase stiffness --------------- In the thermodynamic limit of a $d$-wave superconductor the pair-binding energy vanishes as $\Delta_{pb}\sim 2\Delta_0 N^{-1/2}$, where $\Delta_0$ is the maximal value of the superconducting gap.[@steve-exact] In our rather small clusters we can therefore roughly estimate $\Delta_0\approx \Delta_{pb}/2$, which together with the $d$-wave BCS gap relation $T_c=\Delta_0/2.14$, gives $$T_p=\frac{\Delta_{pb}}{4.28}, \label{Tp}$$ as a characteristic temperature at which pairs fall apart. The actual $T_c$ may be smaller than $T_p$ if phase fluctuations are important. To obtain an estimate for the phase-ordering temperature $T_\theta$ we calculate the ground-state phase stiffness defined as $$\rho_s=\left.\frac{\partial^2 (E/A)}{\partial\phi^2}\right|_{\phi=0}. \label{rhos}$$ Here $E/A$ is the ground-state energy per unit area and $\phi$ is a phase twist per bond in the $x$ direction.[@SWZ] Neglecting the suppression of the stiffness due to thermal excitation of gapless nodal quasiparticles and using the relation $T_c=0.89\rho_s$ for the two-dimensional $XY$ model we obtain the estimator $$T_\theta=0.89\,\rho_s. \label{Ttheta}$$ ![The pairing scale $T_p$ and the phase coherence scale $T_\theta$ in the two-hole and four-hole ground states of the $6\times 6$ periodic cluster at $U=8t$. $T_\theta$ is shown for the cases where the phase twist is introduced at the bond level (solid lines) and at the plaquette level (dashed lines). The inset depicts $T_\theta$ of the two-hole system as the phase twist per bond $\phi$ is varied from $\pi/9$ (upper curve) to $\pi/72$ (lower curve).[]{data-label="stiff3x3"}](stiffness3x3.eps){width="\linewidth"} We have calculated $\rho_s$ in two ways. In the first the phase twist was introduced into the Hamiltonian (\[H\]) by changing $t_{ij}\rightarrow t_{ij}e^{i\phi/2}$ for two nearest-neighbor sites in the $x$ direction. 
The effective CORE Hamiltonian for the twisted system was then derived and diagonalized to obtain the $\phi$ dependence of the ground state energy. In the second way the twist was introduced on the plaquette level by modifying the couplings in the effective CORE Hamiltonian for the untwisted model (\[H\]). This was achieved via multiplication of a coupling between two neighboring plaquettes in the $x$ direction that changes the number of holes on the right plaquette by $\Delta n$, by $e^{i\phi\Delta n}$. The phase-ordering temperature of the periodic $6\times 6$ cluster with two and four holes is depicted in Fig. \[stiff3x3\]. The two methods yield similar results and they both encounter problems in the region $t'>t'_{\rm max}$. The nature of the difficulty is demonstrated by the inset of Fig. \[stiff3x3\], showing $\rho_s$ as calculated from a discrete derivative of the ground state energy with respect to a twist introduced at the bond level. When the derivative is calculated for increasingly smaller values of $\phi$ the result does not converge for $t'>t'_{\rm max}$. Rather, it becomes negative and diverges, indicating that the CORE ground-state energy develops a cusp as function of $\phi$. A similar behavior is also found in the $4\times 4$ periodic cluster and in the ladder systems. It occurs at lower values of $t'$ for systems with odd number of holes. We take these findings as an indication that CORE is unable to produce a reliable approximation for $\rho_s$ in the region beyond the maximum in the pairing scale. In the range $t'<t'_{\max}$ the estimated phase-ordering temperature increases monotonically with $t'$, but is consistently below the pairing scale. At $t'=t'_{\rm max}$ we find for the two-hole system $T_p/T_\theta\approx 6$. Increasing the doping to four holes decreases the maximal $T_p$ slightly and increases $T_\theta$ by about 70% leading to $T_p/T_\theta(t'=t'_{\rm max})\approx 3$. 
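The finite-difference estimate of $\rho_s$ and the resulting temperature scales can be sketched as follows (Python; the quadratic toy energy and the value of $\Delta_{pb}$ are purely illustrative stand-ins, not our cluster data):

```python
from math import pi

def phase_stiffness(E, phi, area):
    """Central-difference estimate of rho_s = d^2(E/A)/d phi^2 at phi = 0."""
    return (E(phi) - 2.0 * E(0.0) + E(-phi)) / (area * phi ** 2)

# toy ground-state energy with a quadratic twist dependence (illustrative only)
area, rho, E0 = 36.0, 0.12, -40.0
E = lambda p: E0 + 0.5 * area * rho * p ** 2

rho_s = phase_stiffness(E, pi / 36, area)   # recovers rho exactly for quadratic E
T_theta = 0.89 * rho_s                      # 2D XY estimate of the ordering scale

delta_pb = 0.6                              # illustrative pair-binding energy
T_p = delta_pb / 4.28                       # pairing scale from the BCS gap relation
```

For $t'>t'_{\rm max}$ the text notes that this discrete derivative fails to converge as $\phi\to 0$; the sketch assumes the benign quadratic regime below the maximum.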
The same holds true for the $4\times 4$ cluster with two holes, which has a similar hole density and $T_p/T_\theta(t'=t'_{\rm max})\approx 2$. Such a behavior suggests that superconductivity in the lightly doped two-dimensional checkerboard Hubbard model is governed by phase fluctuations. In ladders our definition Eq. (\[rhos\]) is equivalent to the phase stiffness along the ladder ($v_cK_c$ in the effective Luttinger liquid description of the system) divided by its width. As shown by Fig. \[stiffladder\] it is larger than the corresponding stiffness in the periodic clusters and grows with doping. However, since the one-dimensional system cannot order it does not provide a phase ordering temperature similar to Eq. (\[Ttheta\]). ![The pairing scale $T_p$ and the phase stiffness of a $14\times 2$ ladder in the two-hole and four-hole ground states at $U=8t$. $T_p$ is essentially the same for the two hole doping levels. The solid (dashed) $\rho_s$ lines were calculated by introducing the phase twist at the bond (plaquette) level.[]{data-label="stiffladder"}](stiffnessL17.eps){width="\linewidth"} Pairing correlations -------------------- Another diagnostic tool for the presence of superconductivity is the pair-field correlation function. We have calculated the following equal-time correlator $$D_{\overline{ij},\overline{kl}}=\left\langle \Delta_{ij}^\dagger \Delta_{kl} \right\rangle, \label{delta-corr}$$ where $\overline{ij}$ denotes the bond between the nearest-neighbor sites $i$ and $j$, and where the pair field on that bond is given by $$\Delta_{ij}^\dagger=\frac{1}{\sqrt{2}} \left(c_{i\uparrow}^\dagger c_{j\downarrow}^\dagger+c_{j\uparrow}^\dagger c_{i\downarrow}^\dagger\right). \label{delta-def}$$ Fig. \[pairingCorr\] shows the results for the pair-field correlations between the two most distant parallel $(D_\parallel)$ and perpendicular $(D_\perp)$ bonds on the periodic clusters with $N_h=2$ and $N_h=4$. Similar results were also obtained for the ladder systems. We find that $D_\parallel$ is positive and $D_\perp$ is negative, consistent with $d$-wave pairing. 
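The bond pair-field of Eq. (\[delta-def\]) can be realized explicitly in a small Hilbert space; a sketch using a Jordan–Wigner representation (the mode ordering and NumPy construction are our choices, not the paper's):

```python
import numpy as np

# Jordan-Wigner representation of fermion creation operators for M modes.
I2 = np.eye(2)
Z = np.diag([1.0, -1.0])
CR = np.array([[0.0, 0.0], [1.0, 0.0]])   # creation on a single mode: |0> -> |1>

def cdag(m, M):
    """c†_m = Z ⊗ ... ⊗ Z ⊗ CR ⊗ 1 ⊗ ... ⊗ 1 (JW string on the first m modes)."""
    ops = [Z] * m + [CR] + [I2] * (M - m - 1)
    out = ops[0]
    for o in ops[1:]:
        out = np.kron(out, o)
    return out

# mode ordering (our choice): 0 = (i, up), 1 = (i, down), 2 = (j, up), 3 = (j, down)
M = 4
c_iu, c_id, c_ju, c_jd = (cdag(m, M) for m in range(M))
vac = np.zeros(2 ** M)
vac[0] = 1.0

# pair-field creation operator on the bond ij, as in Eq. (delta-def)
pair = (c_iu @ c_jd + c_ju @ c_id) / np.sqrt(2.0)
psi = pair @ vac   # a normalized spin-singlet of two electrons on the bond
```

One checks that $\psi$ has unit norm and is the two-site singlet, i.e. the state that the pair-field assumes to be strongly localized in space.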
The pairing correlations diminish in the limits $t'/t\rightarrow 0$ and $t'/t\rightarrow 1$ but unlike the pair-binding energy and the phase stiffness they are nearly independent of $t'$ in the range of moderate inhomogeneity (from $t'=0.1t$ to $t'=0.6t$, $\Delta_{pb}$, $\rho_s$, and $D$ change by factors of 7.5, 4.5, and 1.5, respectively.) The magnitude of the correlations is small and comparable to results of previous studies of Hubbard ladders[@2leg] and Hubbard[@aimi] and $t-J$ periodic clusters.[@tj2d] ![Pair-field correlations at $U=8t$ in the ground state of (a) the 2-hole doped $4\times 4$ periodic cluster, including a comparison to the exact results of Ref. , and (b) the 2-hole and 4-hole doped $6\times 6$ periodic cluster. $D_\perp$ and $D_\parallel$ are the correlations between the pair-field on bond $\overline{ab}$ and the pair field on bonds $\overline{cd}$ and $\overline{ef}$, respectively, as defined by Fig. \[model-fig\].[]{data-label="pairingCorr"}](pair_corr.eps){width="\linewidth"} The behavior of $D$ suggests that pairing is very weak in the systems that were studied. This conclusion is in apparent contradiction with the large pair-binding energy found in the same clusters. In addition, as we already noted, the $t'$-dependence of the two quantities is very different. We believe that the fault may lie in the specific form of the pair-field, Eq. (\[delta-def\]), that was used for calculating the pairing correlations. It assumes a pair-wavefunction which is strongly localized in space. This may be wrong, as suggested by our results for the structure of the ground-state. Fig. \[eigens2\] clearly shows that most holes are not bound into pairs on a single plaquette. This is expected since for $U=8t$ the plaquette does not provide a positive pair-binding energy. It seems, therefore, that thinking about Cooper-pairing in such systems in terms of real-space pairs occupying single bonds is a misleading oversimplification. 
Most likely, the phenomenon is more complicated and the pair wavefunction, while being much more localized than its counterpart in conventional superconductors, still possesses a non-trivial real-space structure. Conclusions =========== This study had a dual motivation. First, to explore the utility of the CORE approximation as a method to investigate fermionic strongly correlated systems, and secondly to shed additional light on the role of inhomogeneity in the physics of high-temperature superconductivity. As far as CORE is concerned, it is difficult to carry out the original scheme of Morningstar and Weinstein[@CORE] who iteratively applied the CORE method to obtain and analyze a fixed point Hamiltonian. In the case of the Hubbard model there are simply too many couplings that are generated at each step. One is, therefore, forced to apply CORE once and investigate the resulting effective Hamiltonian either by means of a mean-field approximation[@AAcore], or via numerical diagonalization of finite clusters. The latter approach was previously implemented in the study of spin systems[@Piekarewicz; @capponi-spin] and the $t-J$ model[@core-res; @core-tj], and is the one which we pursued. As expected, when applied to the checkerboard Hubbard model range-2 CORE provides results which are in good agreement with the available exact diagonalization results in the limit of small $t'$. In the moderate $t'$ regime the method may be considered as semiquantitative and its validity in the uniform limit is questionable, particularly in the case of small $U$. More precisely, this statement depends on the property that one tries to calculate using the method. It seems that pairing is moderately local such that range-2 CORE is able to capture its salient features already in small systems. 
The establishment of phase coherence, on the other hand, is a more extended phenomenon, for which the inclusion of longer range effective interactions and diagonalization of larger clusters are needed. In this context we would like to note that signatures associated with nodal quasiparticles of the putative $d$-wave Hubbard superconductor, such as the suppression of the phase stiffness at low temperatures, are particularly difficult to capture using range-2 CORE.[@AAcore] Regarding the effects of inhomogeneity, our results demonstrate that plaquettization of the Hubbard model may lead to a substantial enhancement of pairing. Optimal pairing is achieved at an intermediate scale of inhomogeneity, which marks a crossover from a region with pronounced RVB characteristics to one with stronger local AFM correlations. The interactions of the doped holes with the spin background are the driving force of the pairing process. One should bear in mind, however, that the Hubbard plaquette, the building block of our model, is a special system. Its undoped ground state is a quintessential RVB state and it provides a positive pair-binding energy in a wide range of interaction strengths. Hence, it is interesting to ask whether a similar enhancement occurs for other planar patterns, especially those constructed from elementary clusters that do not exhibit pair binding. The possibility of such an outcome gains support from the fact that in the checkerboard model maximal pairing occurs at an interaction strength for which the pair-binding energy on each individual plaquette is negative. In the lightly doped clusters that we have studied superconductivity appears to be controlled by phase fluctuations. Owing to the reasons outlined above and our inability to carry out significant finite-size scaling it is difficult to estimate the phase ordering temperature in the two-dimensional limit and determine whether $T_c$ indeed achieves a maximum at an intermediate value of $t'$. 
$T_c$ enhancement due to inhomogeneous pairing interaction was found in the attractive Hubbard model[@optimal-BCS; @optimal-s; @optimal-QMC; @optimal-BCS-dis; @Mishra] and the phase-ordering transition temperature is raised in the classical two-dimensional $XY$ model with certain “framework” modulations of the phase couplings.[@optimal-erica] We find it interesting to conclude by noting that Fig. \[stiff3x3\] hints at the possibility that a related inhomogeneity-induced enhancement occurs in the model considered here as well. The CORE Hamiltonian {#app:models} ==================== The full CORE Hamiltonian includes all possible terms that satisfy the symmetries of the problem, as detailed in Section \[models\]. The resulting 45 effective couplings may be grouped in the following way $$H=K_{bf}+K_{bf+t}+V_{bf}+V_{t}.$$ The kinetic energy of the fermionic holes and the bosonic pairs is given by the first two terms. $K_{bf}$ contains the contribution of hopping processes involving only the charged degrees of freedom while $K_{bf+t}$ contains similar processes in which the triplet of AFM magnons also participate. The interactions among the fermions and pairs comprise $V_{bf}$. Their remaining interactions with the magnon triplet, as well as couplings involving only the triplets, form the last group $V_t$. In the following, $b_i^\dagger$, $t_{\sigma i}^\dagger$ and $f_{\textbf{q} \sigma i}^\dagger$ create a hole pair, a magnon with spin component $S_z=\sigma$ and a fermion with spin component $S_z=\sigma$ and plaquette momentum ${\bf q}$ at site $i$, respectively. Our choice to use a basis where the two fermions have a definite plaquette momentum ${\bf q}=(0,\pi)$ or ${\bf q}=(\pi,0)$ results in different interaction strengths between nearest neighbors in the $x$ direction compared to the $y$ direction. 
The notation $\langle i,j\rangle_\nu$ in the Hamiltonian below stands for nearest neighbors in the $\nu=x,y$ direction and $(A_iB_j)_{S,\sigma}$ signifies that the operators $A_i$ and $B_j$ are coupled into an operator of total spin $S$ and spin component $S_z=\sigma$. Finally, summation over $S$ ,$\sigma$, ${\bf q}$, and $\nu$ indices is implied. The 7 “bare” kinetic couplings include fermion and pair hopping, as well as pair-fermion exchange and Andreev-like pair creation and disintegration. $$\begin{aligned} \label{Kbf} \hspace{-0.9cm} K_{bf}&=& J_{b} \sum_{\langle i,j\rangle}b_{i}^\dagger b_{j}\nonumber\\ &+& J_{f}^{\nu,{\bf q}} \sum_{\langle i,j\rangle _{\nu}}f_{{\bf q}\sigma i}^\dagger f_{{\bf q}\sigma j}\nonumber\\ &+& J_{bf}^{\nu,{\bf q}} \sum_{\langle i,j\rangle_\nu}b_i^\dagger f_{{\bf q}\sigma j}^\dagger b_j f_{{\bf q}\sigma i}\nonumber\\ &+&J_{bff}^{\nu,{\bf q}} \sum_{\langle i,j\rangle_{\nu}}\left[b_i^\dagger f_{{\bf q}\uparrow i} f_{{\bf q}\downarrow j}+ b_i^\dagger f_{{\bf q}\uparrow j} f_{{\bf q}\downarrow i}+{\rm H.c.}\right].\end{aligned}$$ Note that since the Hamiltonian is symmetric under rotations and reflections some of the couplings are related. For example, $J_{f}^{x,{\bf q}}=J_{f}^{y,\overline{{\bf q}}}$, where $\overline{{\bf q}}={\bf q}+(\pi,\pi) \mod 2\pi$. These symmetries and the $d$-wave symmetry of the plaquette hole-pair state also imply $J_{bff}^{x,{\bf q}}=-J_{bff}^{y,\overline{{\bf q}}}$. 
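The momentum shift entering these relations, $\overline{\mathbf{q}}={\mathbf q}+(\pi,\pi) \bmod 2\pi$, is an involution that exchanges the two plaquette momenta $(0,\pi)$ and $(\pi,0)$; a one-line Python sketch (the function name is ours):

```python
from math import pi

def qbar(q):
    """qbar = q + (pi, pi), componentwise mod 2*pi."""
    return tuple((qi + pi) % (2 * pi) for qi in q)

# the map exchanges (0, pi) and (pi, 0), and applying it twice is the identity
assert qbar((0.0, pi)) == (pi, 0.0)
assert qbar(qbar((0.0, pi))) == (0.0, pi)
```

This is the bookkeeping behind relations such as $J_{f}^{x,{\bf q}}=J_{f}^{y,\overline{{\bf q}}}$ above.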
The remaining 9 kinetic couplings are associated with magnon-assisted hopping processes $$\begin{aligned} \label{Kbft} \hspace{-0.0cm}K_{bf+t}&=& J_{bt} \sum_{\langle i,j\rangle}b_i^\dagger t_{\sigma j}^\dagger b_j t_{\sigma i} \nonumber\\ &+&J_{ft}^{S,\nu,{\bf q}} \sum_{\langle i,j\rangle_{\nu}}(t_i^\dagger f_{{\bf q} j}^\dagger )_{S,\sigma} (t_j f_{{\bf q} i})_{S,\sigma}\nonumber\\ &+& J_{fft}^{\nu,{\bf q}} \sum_{\langle i,j\rangle_{\nu}}\left[(t_i^\dagger f_{{\bf q} j} ^\dagger )_{\frac{1}{2},\sigma} f_{\overline{{\bf q}}\sigma i}+{\rm H.c.}\right]\nonumber\\ &+& J_{bft}^{\nu,{\bf q}} \sum_{\langle i,j\rangle_{\nu}}\left[b_i^\dagger t_{\sigma j}^\dagger (f_{{\bf q} i} f_{\overline{{\bf q}} j})_{1,\sigma}+{\rm H.c.}\right].\end{aligned}$$ The 16 fermion and pair on-site energies and interactions are $$\begin{aligned} V_{bf}&=& {\epsilon_{f}}_{\bf q}\sum_{i} f_{{\bf q}\sigma i}^\dagger f_{{\bf q}\sigma i}+ \epsilon_b\sum_{i} b_{i}^\dagger b_{i}\nonumber\\ &+& V_{b} \sum_{\langle i,j\rangle}b_{i}^\dagger b_{j}^\dagger b_j b_i + V_{bf}^{\nu,{\bf q}} \sum_{\langle i,j\rangle_\nu}b_i^\dagger f_{{\bf q}\sigma j}^\dagger f_{{\bf q}\sigma j} b_i\nonumber\\ &+& V1_{ff}^{S,\nu,{\bf q}} \sum_{\langle i,j\rangle_\nu}(f_{{\bf q} j}^\dagger f_{{\bf q} i}^\dagger )_{S,\sigma} (f_{{\bf q} i}f_{{\bf q} j})_{S,\sigma}\nonumber\\ &+& V2_{ff}^{S} \sum_{\langle i,j\rangle_\nu}(f_{{\bf q} j}^\dagger f_{{\bf q} i}^\dagger )_{S,\sigma} (f_{\overline{{\bf q}} i} f_{\overline{{\bf q}} j})_{S,\sigma}\nonumber\\ &+& V3_{ff}^{S} \sum_{\langle i,j\rangle_\nu}(f_{{\bf q} j}^\dagger f_{\overline{{\bf q}} i}^\dagger )_{S,\sigma} (f_{{\bf q} i} f_{\overline{{\bf q}} j})_{S,\sigma}\nonumber\\ &+& V4_{ff}^{S} \sum_{\langle i,j\rangle_\nu}(f_{{\bf q} j}^\dagger f_{\overline{{\bf q}} i}^\dagger )_{S,\sigma} (f_{\overline{{\bf q}} i}f_{{\bf q} j})_{S,\sigma}.\end{aligned}$$ The fermion on-site energies depend on ${\bf q}$ in ladders where the symmetry between the $x$ and $y$ directions is broken. 
The on-site energies on a plaquette depend on the number of its nearest neighbors. Therefore, they may be position dependent in finite clusters without periodic boundary conditions. This does not happen for the clusters that we have investigated. The last group consists of 13 couplings involving the magnons. They include their on-site energy, excitation amplitude from the vacuum, hopping matrix element and the strength of their mutual interaction together with their interaction couplings to the fermions and bosons. We find the coupling to the bosons to be very small. $$\begin{aligned} \label{Vt} V_{t}&=& \epsilon_t \sum_{i} t_{\sigma i}^\dagger t_{\sigma i} +J_{tt} \sum_{\langle i,j\rangle}\left[(t_{i}^\dagger t_{j}^\dagger)_0+{\rm H.c.}\right]\nonumber\\ &+& J_{t} \sum_{\langle i,j\rangle}t_{\sigma i}^\dagger t_{\sigma j} + V_{tt}^{S} \sum_{\langle i,j\rangle}(t_i^\dagger t_{j}^\dagger )_{S,\sigma} (t_{j} t_i)_{S,\sigma} \nonumber\\ &+& V_{bt} \sum_{\langle i,j\rangle}b_i^\dagger t_{\sigma j}^\dagger t_{\sigma j} b_i \nonumber\\ &+& V_{ft}^{S,\nu,{\bf q}} \sum_{\langle i,j\rangle_{\nu}}(t_i^\dagger f_{{\bf q} j}^\dagger) _{S,\sigma}(f_{{\bf q} j} t_i)_{S,\sigma}\nonumber\\ &+& V_{ft}^{\nu,{\bf q}}\sum_{\langle i,j\rangle_{\nu}}\left[(t_i^\dagger f_{{\bf q} j}^\dagger ) _{\frac{1}{2},\sigma} f_{\overline{{\bf q}}\sigma j}+{\rm H.c.}\right].\end{aligned}$$ [999]{} For a review see E. W. Carlson, V. J. Emery, S. A. Kivelson, and D. Orgad, in [*“Superconductivity: Novel Superconductors”*]{}, Vol 2, p. 1225 , edited by K. H. Bennemann and J. B. Ketterson (Springer-Verlag 2008). T. Aimi and M. Imada, J. Phys. Soc. Jpn. [**76**]{}, 113708 (2007). E. Arrigoni and S. A. Kivelson, , 180503(R) (2003). E. Arrigoni, E. Fradkin, and S. A. Kivelson, , 214519 (2004). W.-F. Tsai and S. A. Kivelson, , 214510 (2006), [*ibid*]{} [**76**]{}, 139902 (2007). W.-F. Tsai, H. Yao, A. L${\rm \ddot{a}}$uchli, and S. A. Kivelson, , 214502 (2008). D. G. S. P. Doluweera, A. Macridin, T. A. 
Maier, M. Jarrell, and Th. Pruschke, , 020504(R) (2008). C. J. Morningstar and M. Weinstein, Phys. Rev. D. [**54**]{}, 4131 (1996). E. Altman and A. Auerbach, , 104508 (2002). D. Poilblanc, E. Orignac, S. R. White, and S. Capponi, , 220406(R) (2004). S. Capponi and D. Poilblanc, , 180503 (2002). C. S. Hellberg and E. Manousakis, , 4609 (1997). D. Poilblanc, O. Chiappa, J. Riera, S. R. White, and D. J. Scalapino, , R14633 (2000). W.-F. Tsai (private communication). M. Brunner, F. F. Assaad, and A. Muramatsu, , 15480 (2000). D. J. Scalapino, S. R. White, and S. Zhang, , 7795 (1993). R. M. Noack, S. R. White, and D. J. Scalapino, , 882 (1994). S. Sorella, G. B. Martins, F. Becca, C. Gazza, L. Capriotti, A. Parola, and E. Dagotto, , 117002 (2002). J. Piekarewicz and J. R. Shepard, , 10260 (1998). S. Capponi, A. L${\rm \ddot{a}}$uchli, and M. Mambrini, , 104424 (2004). I. Martin, D. Podolsky, and S. A. Kivelson, , 060502(R) (2005). K. Aryanpour, E. R. Dagotto, M. Mayr, T. Paiva, W. E. Pickett, and R. T. Scalettar, , 104518 (2006). K. Aryanpour, T. Paiva, W. E. Pickett, and R. T. Scalettar, , 184521 (2007). Y. Zou, I. Klich, and G. Refael, , 144523 (2008). V. Mishra, P. J. Hirschfeld, and Y. S. Barash, , 134525 (2008). Y. L. Loh and E. W. Carlson, , 132506 (2007).
--- bibliography: - 'References.bib' title: '**[On the Perturbative Expansion around a Lifshitz Point]{}**' --- IPMU-09-0105 <span style="font-variant:small-caps;">Abstract</span>\ The quantum Lifshitz model provides an effective description of a quantum critical point. It has been shown that even though non–Lorentz invariant, the action admits a natural supersymmetrization. In this note we introduce a perturbative framework and show how the supersymmetric structure can be used to greatly simplify the Feynman rules and thus the study of the model. Introduction {#sec:intro} ============ In 1941, Lifshitz [@Lifshitz] introduced models with anisotropic scaling between space and time in the context of tri–critical models. Since then, such models have been studied in the context of solid state physics. Materials with strongly correlated electrons, such as copper oxides, show this type of critical behaviour, and the smectic phase of liquid crystals, for example, can also be described in this way. Our treatment is based on quantum Lifshitz models as studied in [@Ardonne:2003p1613]. Quantum Lifshitz points are especially interesting, since they are *quantum critical points* [@Sachdev], *i.e.* points at which a continuous phase transition happens at $T=0$ which is driven by zero point quantum fluctuations. A quantum Lifshitz point is characterized by the vanishing of the term $(\nabla \phi)^2$ in the effective Hamiltonian. While scale invariance is preserved, this gives rise to an anisotropy between space and time. This anisotropy is quantified by the *dynamical critical exponent* $z$, $$t\to \lambda^zt,\ \ x\to \lambda x.$$ For models in $2+1$ dimensions at a Lifshitz point, $z=2$, as opposed to the Lorentz invariant $z=1$. Models at a Lifshitz point have recently met with a large amount of interest beyond their original field of application[^1]. A $3+1$ dimensional theory of gravity with $z=3$ put forward by Hořava [@Horava:2009uw] has attracted considerable attention. 
But also in the context of the AdS/CFT correspondence, interest in gravity duals of non–Lorentz invariant models has arisen, see *e.g.* [@Balasubramanian:2008dm; @Son:2008ye; @Kachru:2008yh; @Volovich:2009yh]. In [@Kachru:2008yh] in particular, a gravity dual for a Lifshitz type model with $z=2$ was proposed. As discussed recently in [@Li:2009pf], it seems difficult to find string theory embeddings for gravity duals of Lifshitz–type points. While often, calculations are easier to do on the gravity side of the correspondence, we are able to perform a number of calculations directly on the field theory side, which are presented in this article. Apart from being of interest directly for statistical physics, our results can serve as a point of reference for comparison to results derived on the gravity side. In [@Dijkgraaf:2009gr] it was shown that systems of Lifshitz type in $\left( d + 1 \right)$ dimensions admit a natural supersymmetrization, a property which results from their relation to $d$–dimensional models via a Langevin equation. The quantum Lifshitz model in [@Ardonne:2003p1613], described by the action $$\begin{aligned} S [ \phi] = \int \diff \left[ \dot \phi^2 + (\partial_i \partial^i \phi )^2 \right], && \mathbf{x} = x_i, \, i = 1,2 \, ,\end{aligned}$$ can be thought of as descending from a free boson in two dimensions with action $$W[\phi] = \int \di \mathbf{x} \, \left[ \partial_i \phi \partial^i \phi \right] \, .$$ This formulation allows the generalization of the quantum Lifshitz model to massive and interacting cases. It becomes possible to consider the class of models satisfying the detailed balance condition whose (bosonic part of the) action takes the form $$S[\phi] = \int \diff \left[ \dot \phi^2 + \left( \frac{\delta W [\phi]}{\delta \phi }\right)^2 \right] \, ,$$ where $W[\phi]$ is a local functional of the field $\phi (t,\mathbf{x})$. 
The structure due to the Langevin equation implies supersymmetry in the time direction, so that the complete action includes also a fermionic field. It is given by $$\label{eq:SuperStochasticAction} S[\phi, \psi, \bar \psi] = \int \diff \left[ \dot \phi^2 + \left( \frac{\delta W [\phi]}{\delta \phi }\right)^2 - \bar \psi \left( \frac{\di}{\di t} + \frac{\delta^2 W[\phi]}{\delta \phi^2} \right) \psi \right] \, .$$ This is the supersymmetric theory we focus on in this work. A major advantage of models with this structure is that they can be studied very efficiently by using a perturbative expansion of the underlying Langevin equation, as proposed in [@Cecotti:1983up]. In this way, the cancellation of bosonic and fermionic terms in the perturbative expansion becomes automatic. In consequence, there is a *great simplification of the Feynman diagrams* of the theory in $\left( d+ 1 \right)$ dimensions which are reformulated in terms of those of the $d$–dimensional system described by $W[\phi]$, plus a set of additional rules. If we consider only $n$–point functions for the bosonic field $\phi$, all the fermionic contributions are automatically accounted for, so that it is not even necessary to introduce a fermionic propagator.\ For relativistic theories, this construction is possible only for $d=0$ and $d=1$. Giving up Lorentz invariance, we concentrate on $d=2$, which – as we show in the following – is the critical case. The generalization to any $d$ is however clear. In the following we derive - the expression for the propagator of the free Lifshitz scalar (Sec. \[sec:propagator\]); - the Feynman rules for the simplest generalization to the interacting case (Sec. \[sec:interaction\]); - a scheme for UV regularization (Sec. \[sec:regularization\]). As examples, the three–point function (Sec. \[sec:three-point-function\]) and the one–loop propagator (Sec. \[sec:one-loop-propagator\]) are discussed. 
The Langevin equation and the Nicolai map {#sec:stoch-quant} ========================================= Having chosen to study the supersymmetric extension of the quantum Lifshitz model, we can make use of the Nicolai map [@Nicolai:1980jc]. In a supersymmetric field theory, a Nicolai map is a transformation of the bosonic fields $$\label{eq:Nicolai-map} \phi (t, \mathbf{x} ) \mapsto \eta(t, \mathbf{x} ) \, ,$$ such that the bosonic part of the Lagrangian is quadratic in $\eta$ and the Jacobian for the transformation is given by the determinant of the fermionic part: $$\begin{gathered} \label{eq:Nicolai-boson} S_B = \int \diff \left[ \frac{1}{2} \eta(t, \mathbf{x} )^2 \right] \, ,\\ \label{eq:Nicolai-fermion} \det \left[ \frac{\delta \eta}{\delta \phi} \right] = \int \mathcal{D}\psi \mathcal{D} \bar \psi \, \exp[-S_F] \, .\end{gathered}$$ Following [@Cecotti:1983up], we would like to interpret the mapping in Eq. (\[eq:Nicolai-map\]) as a Langevin equation for the field $\phi(t, \mathbf{x})$ with noise $\eta(t,\mathbf{x})$. More precisely, we want to show the equivalence of the action in Eq. (\[eq:SuperStochasticAction\]) to the Langevin equation $$\label{eq:langevin_st} \frac{\partial\,\phi(t,\mathbf{x})}{\partial t} = -\frac{\delta W}{\delta \phi}+\eta(t, \mathbf{x}).$$ The correlations of $\eta$, which is a white Gaussian noise (as in Eq. (\[eq:Nicolai-boson\])), are given by $$\begin{aligned} \label{eq:corr_eta} \braket{\eta (t,\mathbf{x})} = 0 \, , && \braket{ \eta (t, \mathbf{x})) \eta(t^\prime, \mathbf{x}^\prime)} = 2\, \delta ( t - t^\prime) \delta ( \mathbf{x} - \mathbf{x}^\prime ) \, .\end{aligned}$$ A stochastic equation of this type, where the dissipation term depends on the gradient of a function of the field is said to satisfy the *detailed balance condition*. Equation (\[eq:langevin\_st\]) has to be solved given an initial condition, leading to an $\eta$–dependent solution $\phi_\eta(t,\mathbf{x})$. 
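The Langevin dynamics of Eq. (\[eq:langevin\_st\]) with the noise normalization of Eq. (\[eq:corr\_eta\]) can be integrated numerically. For a single real mode with $W=\tfrac{1}{2}\omega\phi^2$ the stationary distribution $\propto e^{-W}$ implies $\langle\phi^2\rangle=1/\omega$; a minimal Euler–Maruyama sketch (parameters and seed are arbitrary choices of ours):

```python
import numpy as np

# d phi/dt = -omega * phi + eta,  with  <eta(t) eta(t')> = 2 delta(t - t')
rng = np.random.default_rng(0)
omega, dt, n_steps = 2.0, 0.01, 200_000

phi = 0.0
samples = np.empty(n_steps)
xi = rng.standard_normal(n_steps)
for n in range(n_steps):
    phi += -omega * phi * dt + np.sqrt(2.0 * dt) * xi[n]
    samples[n] = phi

var = samples[n_steps // 10:].var()   # discard the initial transient
# var approaches the equilibrium value <phi^2> = 1/omega = 0.5
```

(The $O(\mathrm{d}t)$ discretization bias of the Euler–Maruyama scheme shifts the variance to $1/[\omega(1-\omega\,\mathrm{d}t/2)]\approx 0.505$ here.)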
As a consequence, $\phi_\eta(t,\mathbf{x})$ becomes a stochastic variable. Its correlation functions are defined by $$\braket{\phi_\eta(t_1, \mathbf{x}_1)\ldots\phi_\eta(t_k, \mathbf{x}_k)}_\eta = \frac{\int\mathcal{D} \eta \, \exp \left[ -\tfrac{1}{4} \int \diff \eta^2(t,\mathbf{x}) \right] \phi_\eta(t_1,\mathbf{x}_1) \ldots \phi_\eta(t_k, \mathbf{x}_k)} {\int\mathcal{D} \eta \, \exp\left[-\tfrac{1}{4}\int \diff \eta^2(t,\mathbf{x})\right]} \, .$$ A necessary condition for this approach to work is that thermal equilibrium is reached for $t\to \infty$, and that $$\label{eq:Equilibrium} \lim_{t\to\infty} \braket{\phi_\eta(t,\mathbf{x}_1) \ldots \phi_\eta(t,\mathbf{x}_k)}_\eta = \braket{\phi(\mathbf{x}_1) \ldots \phi(\mathbf{x}_k)},$$ *i.e.* that the equal time correlators for $\phi_\eta$ tend to the corresponding quantum Green’s functions for the $d$–dimensional theory described by $W[\phi]$. Since $\phi (t, \mathbf{x})$ is a stochastic variable, the expectation value of any functional $F[\phi]$ is obtained by averaging over the noise: $$\braket{F[\phi]}_\eta = \frac{1}{\mathcal{Z}} \int \mathcal{D} \eta \, F[\phi] e^{-\frac{1}{2} \int \diff \eta (t,\mathbf{x})^2} \, ,$$ where the partition function is defined by $$\mathcal{Z} = \int \mathcal{D} \eta \, e^{-\frac{1}{2} \int \diff \eta (t,\mathbf{x})^2} \, .$$ It is convenient to change the integration variable from $\eta $ to $\phi$. The expression becomes $$\mathcal{Z} = \int \mathcal{D} \phi \, \left. \det \left[ \frac{\delta \eta}{\delta \phi} \right] \right|_{ \eta = \dot \phi + \frac{\delta W}{\delta \phi}} \exp \left[ -\frac{1}{2} \int \diff \left[ \left(\dot \phi + \frac{\delta W}{\delta \phi} \right)^2 \right] \right] \, .$$ The Jacobian can be expressed by introducing two fermionic fields $\psi (t,\mathbf{x})$ and $\bar \psi(t,\mathbf{x})$ such that (as in Eq. (\[eq:Nicolai-fermion\])) $$\left. 
\det \left[ \frac{\delta \eta}{\delta \phi} \right] \right|_{ \eta = \dot \phi + \frac{\delta W}{\delta \phi}} = \int \mathcal{D} \psi \mathcal{D} \bar \psi \, \exp \left[ - \int \diff \left[\bar \psi(t,\mathbf{x}) \left( \partial_t + \frac{\delta^2 W}{\delta \phi^2}\right) \psi(t,\mathbf{x}) \right] \right] \, .$$ In this way we can directly read off the $\left( d + 1 \right)$–dimensional action that, up to a boundary term, reproduces Eq. (\[eq:SuperStochasticAction\]): $$S[\phi, \psi, \bar \psi] = \int \diff \left[ \dot \phi^2 + \left( \frac{\delta W [\phi]}{\delta \phi }\right)^2 - \bar \psi \left( \frac{\di}{\di t} + \frac{\delta^2 W[\phi]}{\delta \phi^2} \right) \psi \right] \, .$$ Finally, one can show [@Dijkgraaf:2009gr] that in a Hamiltonian formulation, the supersymmetric ground state is the bosonic state $$\ket{\Psi_0} = e^{-W[\phi]} \, ,$$ as is already the case in the standard quantum Lifshitz model. This construction bears an obvious resemblance with the stochastic quantization of the $d$–dimensional theory described by $W[\phi]$. We would however like to stress a fundamental difference. In the case of stochastic quantization, one is only interested in the $t \to \infty$ limit and hence in the ground state. This means that the action in Eq. (\[eq:SuperStochasticAction\]) is seen as a topological theory. Here, on the other hand, we wish to study the finite–time behaviour of the system, and the same action is taken to describe a conventional supersymmetric model. In the following, we will work in $d=2$, but in principle, all calculations are equally valid for general $d$ and $z=2$. The case $d=2$ is arguably the most interesting, since it corresponds to the Lifshitz point with its quantum critical behaviour. Perturbative Solution of the Langevin equation {#sec:pert-solut-lang} ============================================== In the following, we will show how to perturbatively solve the Langevin equation (\[eq:langevin\_st\]), which gives rise to the dynamics of the Lifshitz model. 
As we will see, the main advantage of this approach is that the perturbative expansion is realized in terms of the Feynman diagrams for the theory in $d$ dimensions, which do not include fermionic contributions. To solve a transport equation like the Langevin equation in Eq. (\[eq:langevin\_st\]), it is convenient to consider an integral transform. The choice of transform depends on the choice of boundary conditions. In space, the natural choice is given by requiring the field to vanish at infinity, $$\phi (t, \mathbf{x} ) \xrightarrow[\abs{\mathbf{x}}\to \infty]{} 0 \, .$$ This means that the Fourier transform is well defined: $$\phi( t, \mathbf{k} ) = \int \di \mathbf{x} \, \left[ e^{- \imath \mathbf{k} \cdot \mathbf{x}} \phi (t, \mathbf{x} ) \right] \, .$$ For the time direction, we have two possible choices: - (a) The field vanishes at *negative infinity*. If we impose $\phi (t, \mathbf{x} ) \xrightarrow[t \to - \infty]{} 0 $, we can define a *Fourier* transform in time and use $$\phi ( \omega, \mathbf{k} ) = \int_{-\infty}^\infty \di t \, \left[e^{- \imath \omega t } \phi (t, \mathbf{k}) \right] \, .$$ - (b) Initial condition at $t = 0$. If we impose $\phi( 0, \mathbf{x} ) = \phi_0 (\mathbf{x})$, it is convenient to define a *Laplace* transform in the time direction: $$\phi ( s, \mathbf{k} ) = \int_0^\infty \di t \, \left[ e^{- s t } \phi (t,\mathbf{k}) \right] \, .$$ Note that the first choice preserves time–translation invariance which in case (b) is broken by an extra mode that describes the evolution of the initial condition. On the other hand, if the kernel of the Langevin equation $\frac{\delta W}{\delta \phi}$ is *positive definite* (as it is for the cases we are considering here), this extra mode decays exponentially, and the large–time behaviours of both choices coincide. In other words, one can without loss of generality choose to impose the initial condition (b) and then take the large–time limit to recover the finite–time Fourier transform behaviour of case (a)[^2]. 
From now on, we will consider the Fourier–Laplace transform (*i.e.* a Fourier transform in space and Laplace transform in the time direction): $$\phi (s, \mathbf{k} ) = \int_0^\infty \di t \int \di \mathbf{x} \left[ e^{- \imath \mathbf{k}\cdot \mathbf{x} - s t} \phi (t, \mathbf{x} ) \right] \, .$$ Free propagator {#sec:propagator} --------------- As a first application, let us consider the action obtained by adding a relevant perturbation to the quantum Lifshitz model, described by $$S[ \phi, \psi, \bar \psi] = \int \diff \bigg[ \frac{1}{2} \dot \phi^2 + ( \partial_i \partial^i \phi )^2 + m^2 \partial_i \phi\, \partial^i \phi + m^4 \phi^2 +\text{fermions} \bigg] \, .$$ According to the argument in Sec. \[sec:stoch-quant\], this is equivalent to the Langevin equation corresponding to the massive boson in $2$ dimensions described by the functional $$W [\phi] = \int \di \mathbf{x} \left[ \frac{1}{2}\partial_i \phi\, \partial^i \phi + \frac{1}{2} m^2 \phi^2 \right] \, .$$ After the integral transform, the Langevin equation (\[eq:langevin\_st\]) takes the form $$s\, \phi (s, \mathbf{k} ) - \phi_0 (\mathbf{k} ) = - \Omega^2 \phi ( s, \mathbf{k} ) + \eta (s, \mathbf{k}) \, ,$$ where we introduced $\Omega^2 = \left( \mathbf{k}^2 + m^2 \right) $. 
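Since the transformed Langevin equation is purely algebraic, its solution can be checked symbolically. The short sympy sketch below (symbol names are ours) solves for $\phi(s,\mathbf{k})$ and confirms that the kernel $1/(s+\Omega^2)$ is the Laplace transform of a decaying exponential in time, so the initial-condition mode dies off exponentially, as argued earlier.

```python
import sympy as sp

s, t = sp.symbols('s t', positive=True)
Omega2 = sp.symbols('Omega2', positive=True)  # stands for Omega^2 = k^2 + m^2
phi, phi0, eta = sp.symbols('phi phi0 eta')

# Solve the transformed equation  s*phi - phi0 = -Omega2*phi + eta  for phi:
phi_s = sp.solve(sp.Eq(s*phi - phi0, -Omega2*phi + eta), phi)[0]
# Both the noise and the initial condition propagate with 1/(s + Omega2).
assert sp.simplify(phi_s - (eta + phi0)/(s + Omega2)) == 0

# That kernel is the Laplace transform of a decaying exponential in time.
G_s = sp.laplace_transform(sp.exp(-Omega2*t), t, s, noconds=True)
assert sp.simplify(G_s - 1/(s + Omega2)) == 0
```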
The Gaussian noise $\eta(s,\mathbf{k})$ has the two–point function $$\braket{\eta (s,\mathbf{k}) \eta(s^\prime, \mathbf{k}^\prime) } = \frac{2 \left(2 \pi \right)^2 \delta (\mathbf{k} + \mathbf{k}^\prime)}{s + s^\prime} \, .$$ The retarded Green’s function for this problem is the solution to the equation $$s\, G (s, \mathbf{k} ) = - \Omega^2 G( s, \mathbf{k} ) + 1.$$ It follows that the solution to the Langevin equation is $$\phi (s, \mathbf{k} ) = G(s, \mathbf{k} ) \eta(s, \mathbf{k}) + G (s, \mathbf{k}) \phi_0 (\mathbf{k}) \, ,$$ and since the Laplace transform exchanges point–wise products and convolution products, one finds that the field $\phi$ as a function of time can be written as $$\phi(t, \mathbf{k}) = G(t, \mathbf{k}) \ast \eta( t, \mathbf{k}) + \phi_0 (\mathbf{k} ) G(t, \mathbf{k}) = \int_{0}^t \di \tau \, \left[ G( t - \tau, \mathbf{k}) \eta(\tau, \mathbf{k}) \right] + \phi_0 (\mathbf{k} ) G(t, \mathbf{k}) \, .$$ More explicitly, using the fact that $$G(t, \mathbf{k} ) = e^{-t \Omega^2 } \, ,$$ we find the solution $$\phi(t, \mathbf{k}) = \int_{0}^t \di \tau \, \left[e^{- \left( t - \tau \right) \Omega^2 } \eta(\tau, \mathbf{k}) \right] + \phi_0 (\mathbf{k} ) e^{-t \Omega^2 } \, .$$ Having expressed $\phi$ as a function of the noise, we are now in a position to evaluate the two–point function: $$\begin{gathered} \label{eq:Propagator} D( s, \mathbf{k} ; s^\prime, \mathbf{k}^\prime ) = \braket{ \phi( s, \mathbf{k} ) \phi (s^\prime,\mathbf{k}^\prime) } = G( s, \mathbf{k} ) G( s^\prime, \mathbf{k}^\prime) \braket{ \eta( s, \mathbf{k} ) \eta (s^\prime,\mathbf{k}^\prime) } \\ = \frac{2 \left(2 \pi \right)^2 \delta( \mathbf{k} + \mathbf{k}^\prime)}{\left( s + \Omega^2 \right) \left( s^\prime + \Omega^2 \right) \left( s + s^\prime \right)} \, .\end{gathered}$$ Taking the (bidimensional) inverse Laplace transform, this becomes $$\begin{gathered} D( t, \mathbf{k}; t^\prime, \mathbf{k}^\prime ) = \int_{\sigma - \imath \infty}^{\sigma + \imath \infty} \frac{\di 
s}{2 \pi \imath} \int_{\sigma - \imath \infty}^{\sigma + \imath \infty} \frac{\di s^\prime}{2 \pi \imath} \, \left[ e^{s t + s^\prime t^\prime}D( s, \mathbf{k} ; s^\prime, \mathbf{k}^\prime ) \right] \\ = \left( 2 \pi \right)^2 \frac{e^{- \Omega^2 \abs{t - t^\prime}} - e^{- \Omega^2 \left( t + t^\prime\right)}}{ \Omega^2} \delta ( \mathbf{k} + \mathbf{k}^\prime ) \, .\end{gathered}$$ Two limits are interesting: - For $t, t^\prime \to \infty$, the second exponential is suppressed and the two–point function becomes $$D( t, \mathbf{k}; t^\prime, \mathbf{k}^\prime ) \xrightarrow[t, t^\prime \to \infty]{} \left(2 \pi \right)^2 \frac{e^{- \Omega^2 \abs{t - t^\prime}}}{ \Omega^2} \delta ( \mathbf{k} + \mathbf{k}^\prime ) \, .$$ Its Fourier transform is given by $$D ( \omega, \mathbf{k}; \omega^\prime, \mathbf{k}^\prime ) = \frac{\delta (\omega + \omega^\prime) \delta( \mathbf{k} + \mathbf{k}^\prime)}{\omega^2 + \Omega^4} \, ,$$ as found in [@Ardonne:2003p1613]. - For $t = t^\prime \to \infty$, the two–point function reproduces the usual bosonic propagator in $d$ dimensions (as expected from the general structure of the Langevin equation and shown in Eq. (\[eq:Equilibrium\])): $$D( t, \mathbf{k}; t, \mathbf{k}^\prime ) \xrightarrow[t \to \infty]{} \left(2 \pi \right)^2 \frac{\delta (\mathbf{k} + \mathbf{k}^\prime)}{ \Omega^2} \, .$$ Note that for $m = 0$, this means that for large times, the propagator will behave polynomially and the theory is critical, just like in the case of a two–dimensional bosonic field. Interacting theory {#sec:interaction} ------------------ As an example of an interacting theory, let us consider the action descending from the theory of a massive boson with a $\phi^3 $ interaction. This is described by a Langevin equation with functional $$W [\phi] = \int \di \mathbf{x} \left[ \frac{1}{2}\partial_i \phi\, \partial^i \phi + \frac{1}{2} m^2 \phi^2 + \frac{1}{3} g^2 \phi^3 \right] .$$ The action in $\left(2 + 1 \right)$ dimensions in Eq.
(\[eq:SuperStochasticAction\]) is immediately found to be $$\begin{gathered} \label{eq:InteractingMassiveAction} S[ \phi, \psi, \bar \psi] = \int \diff \bigg[ \frac{1}{2} \dot \phi^2 + ( \partial_i \partial^i \phi )^2 + m^2 \partial_i \phi\, \partial^i \phi + g^2 \phi \,\partial_i \phi\, \partial^i \phi + m^4 \phi^2 + g^2 m^2 \phi^3 + g^4 \phi^4 + \\ + \text{fermions} \bigg] \, ,\end{gathered}$$ and in particular, for the critical $m=0$ case, $$S[ \phi] = \int \diff \left[ \frac{1}{2} \dot \phi^2 + ( \partial_i \partial^i \phi )^2 + g^2 \phi\, \partial_i \phi \,\partial^i \phi + g^4 \phi^4 + \text{fermions} \right] \, .$$ Note that the coefficients of the three– and four–point interactions are not independent, but fixed by the detailed balance condition. It is in fact the detailed balance, which here plays the role of a symmetry the theory has to fulfill, that keeps terms other than those given in Eq. (\[eq:InteractingMassiveAction\]) from appearing. This relation between the different coupling constants is a property which is accessible to experimental checks in materials which are described by a Lifshitz point effective action and thus is a testable prediction of this framework. The most effective way of making use of this symmetry consists once more in starting from the corresponding Langevin equation, which takes the simple form $$\partial_t \phi (t, \mathbf{x} ) = - \left( - \nabla^2 \phi (t,\mathbf{x}) + m^2 \phi(t,\mathbf{x}) + g^2 \phi^2(t,\mathbf{x}) \right) + \eta (t,\mathbf{x}) \, ,$$ where $\eta (t, \mathbf{x})$ is the same Gaussian noise as in the previous example.
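The way detailed balance ties the couplings together can be made explicit by squaring the drift term of this Langevin equation. The sympy sketch below keeps only the non-derivative (potential) part of $\delta W/\delta\phi$ and ignores overall normalization conventions, so combinatorial factors may differ from those displayed above:

```python
import sympy as sp

phi, m, g = sp.symbols('phi m g', positive=True)

# Non-derivative part of the drift delta W/delta phi for the phi^3 functional:
drift = m**2*phi + g**2*phi**2

# The bosonic potential of the (d+1)-dimensional action is (delta W/delta phi)^2:
potential = sp.expand(drift**2)
assert potential == m**4*phi**2 + 2*g**2*m**2*phi**3 + g**4*phi**4

# Detailed balance locks the ratios: with c_n the coefficient of phi^n,
# c3**2/(c2*c4) is a pure number, independent of m and g.
c2, c3, c4 = (potential.coeff(phi, n) for n in (2, 3, 4))
assert sp.simplify(c3**2 / (c2*c4)) == 4
```

A generic $(2+1)$-dimensional action would have three independent couplings here; descent from a single functional $W[\phi]$ leaves only one.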
Taking the Fourier–Laplace transform as above, one has to deal with the quadratic term that can be recast in the following form: $$\begin{gathered} -g ^2 \int_0^\infty \di t \int \di\mathbf{x} \, e^{-\imath \mathbf{k} \cdot \mathbf{x} - s t} \phi ( t, \mathbf{x} )^2 =\\ = - g^2\int \diff \frac{\di \mathbf{k}_1}{\left(2 \pi \right)^2} \frac{\di \mathbf{k}_2}{\left(2 \pi \right)^2} \frac{\di s_1}{2 \pi \imath}\frac{\di s_2}{2 \pi \imath} \, \left[ e^{-\imath \left( \mathbf{k}- \mathbf{k}_1 - \mathbf{k}_2 \right) \cdot \mathbf{x} - \left( s - s_1 - s_2 \right) t} \phi ( s_1, \mathbf{k}_1 ) \phi ( s_2, \mathbf{k}_2 ) \right] = \\ = - g^2\int d[\mathbf{k_1},\mathbf{k}_2,s_1,s_2] \, \left[ \frac{\delta(\mathbf{k} - \mathbf{k}_1 - \mathbf{k}_2)}{s - s_1 - s_2} \phi ( s_1, \mathbf{k}_1 ) \phi ( s_2, \mathbf{k}_2 ) \right]\, .\end{gathered}$$ The Langevin equation becomes an integral equation: $$\begin{gathered} \phi (s, \mathbf{k} ) = G(s, \mathbf{k} ) \eta(s, \mathbf{k}) - g^2 G(s, \mathbf{k} ) \int \di [\mathbf{k}_1, \mathbf{k}_2, s_1, s_2] \left[ \frac{\delta(\mathbf{k} - \mathbf{k}_1 - \mathbf{k}_2)}{s - s_1 - s_2} \phi ( s_1, \mathbf{k}_1 ) \phi ( s_2, \mathbf{k}_2 ) \right] \\ + G (s, \mathbf{k})\, \phi_0 (\mathbf{k})\, .\end{gathered}$$ This type of equation can be solved perturbatively in $g^2$, using the usual Feynman diagram techniques by denoting $G (s, \mathbf{k})$ with an arrow and the noise $\eta( s, \mathbf{ k})$ with a cross. In particular, the field $\phi$ can be expanded as $$\label{eq:FieldExpansion} \phi =\includegraphics[scale=.7]{StochGraph-Expansion01-crop} + \begin{minipage}{.20\linewidth} \includegraphics[scale=.7]{StochGraph-Expansion02-crop} \end{minipage} + \begin{minipage}{.25\linewidth} \includegraphics[scale=.7]{StochGraph-Expansion03-crop} \end{minipage} + \dots$$ The Feynman rules for this model are summarized in Table \[tab:FeynmanRules\]. 
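The iterative origin of the expansion in Eq. (\[eq:FieldExpansion\]) can be illustrated by a zero-dimensional caricature of the integral equation, in which the propagator and the noise are plain symbols and all momentum and frequency integrals are dropped (purely schematic, not the actual diagrammatic rules):

```python
import sympy as sp

g2, G, eta = sp.symbols('g2 G eta')

# Zero-dimensional stand-in for the integral equation  phi = G*eta - g2*G*phi**2.
# Iterating the map generates the tree expansion order by order in g2.
phi = G*eta
for _ in range(3):
    phi = sp.expand(G*eta - g2*G*phi**2)

assert phi.coeff(g2, 0) == G*eta           # propagator ending on a noise cross
assert phi.coeff(g2, 1) == -G**3*eta**2    # one vertex, two noise-ended legs
assert phi.coeff(g2, 2) == 2*G**5*eta**3   # two trees at second order
```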
[c@cc]{} element & Fourier-Laplace & Fourier\ (20,5)(,) (22,10)[$s$]{} (22,-7)[$\mathbf{k}$]{} ![image](StochGraph-GreenFunction-crop) & $G(s, \mathbf{k}) = \displaystyle{\frac{1}{s + \Omega^2}}$ & $G(\omega, \mathbf{k}) = \displaystyle{\frac{1}{\imath \omega + \Omega^2}}$\ (20,5)(,) (2,10)[$s$]{} (2,-7)[$\mathbf{k}$]{} (54,10)[$s^\prime$]{} (54,-7)[$\mathbf{k}^\prime$]{} ![image](StochGraph-Propagator-crop) & $ \displaystyle{\frac{2 \left(2 \pi \right)^2 \delta( \mathbf{k} + \mathbf{k}^\prime)}{\left( s + \Omega^2 \right) \left( s^\prime + \Omega^2 \right) \left( s + s^\prime \right)}} $ & $\displaystyle{\frac{\left(2 \pi \right)^{2+1} \delta (\mathbf{k} + \mathbf{k}^\prime) \delta( \omega + \omega^\prime)}{\omega^2 + \Omega^4}}$\ (20,30)(,) (0,20)[$s_1$]{} (0,2)[$\mathbf{k}_1$]{} (30,30)[$s_2 \hspace{.6em}\mathbf{k}_2$]{} (30,-4)[$s_3 \hspace{.6em}\mathbf{k}_3$]{} ![image](StochGraph-Vertex-crop) & $g^2 \displaystyle{\frac{\delta( \mathbf{k}_1 - \mathbf{k}_2 - \mathbf{k}_3)}{s_1 - s_2 - s_3}} $ & $g^2 \delta( \mathbf{k}_1 - \mathbf{k}_2 - \mathbf{k}_3) \delta(\omega_1 - \omega_2 - \omega_3)$\ Note that even if the action in Eq.  has two cubic and a quartic interaction, the Feynman diagrams obtained from the Langevin equation only have one cubic vertex. This might at first seem surprising, but is again a simplifying consequence of the detailed balance condition. Examples -------- ### Three–point function {#sec:three-point-function} As a first example, let us consider the three–point function $\braket{\phi(s_1, \mathbf{k}_1)\phi(s_2, \mathbf{k}_2)\phi(s_3, \mathbf{k}_3)}$ at tree level. Expanding the field as in Eq. 
, we find that at tree level the function is given by the sum of three contributions: $$\braket{\phi(s_1, \mathbf{k}_1)\phi(s_2, \mathbf{k}_2)\phi(s_3, \mathbf{k}_3)} \hspace{.5em} = \hspace{1em} \begin{small} \begin{minipage}{.08\linewidth} \begin{picture}(20,37)(,) \put(0,26){$s_1$} \put(0,8){$\mathbf{k}_1$} \put(18,40){$s_2 \hspace{.6em}\mathbf{k}_2$} \put(18,-4){$s_3 \hspace{.6em}\mathbf{k}_3$} \includegraphics[scale=.5]{StochGraph-3pt-01-crop} \end{picture} \end{minipage} \hspace{1em} + \hspace{1em} \begin{minipage}{.08\linewidth} \begin{picture}(20,38)(,) \put(0,26){$s_1$} \put(0,8){$\mathbf{k}_1$} \put(22,40){$s_2 \hspace{.6em}\mathbf{k}_2$} \put(22,-4){$s_3 \hspace{.6em}\mathbf{k}_3$} \includegraphics[scale=.5]{StochGraph-3pt-02-crop} \end{picture} \end{minipage} \hspace{1em} + \hspace{1em} \begin{minipage}{.08\linewidth} \begin{picture}(20,30)(,) \put(0,24){$s_1$} \put(0,6){$\mathbf{k}_1$} \put(22,40){$s_2 \hspace{.6em}\mathbf{k}_2$} \put(22,-4){$s_3 \hspace{.6em}\mathbf{k}_3$} \includegraphics[scale=.5]{StochGraph-3pt-03-crop} \end{picture} \end{minipage} \end{small}$$ Using the Feynman rules in Table \[tab:FeynmanRules\], it is immediate to find that each diagram gives a contribution of $$-\int \frac{\di s_a \di s_b}{4\pi^2} \left[ \textstyle{\frac{4 g^2}{\left( s_i + \Omega_i^2 \right) \left( s_j + \Omega_j^2 \right) \left( s_a + \Omega_j^2 \right) \left( s_j + s_a \right) \left( s_k + \Omega_k^2 \right) \left( s_b + \Omega_k^2 \right) \left( s_k + s_b \right) \left( s_i - s_a - s_b\right)}} \right] \, ,$$ where $\set{i,j,k}$ take the values $\set{1,2,3}$ and their cyclic permutations. 
After an inverse Laplace transform in the time direction and taking the large time limit (which together is equivalent to taking the Fourier transform), the three–point function becomes: $$\begin{gathered} \braket{\phi(t_1, \mathbf{k}_1)\phi(t_2, \mathbf{k}_2)\phi(t_3, \mathbf{k}_3)} \\ = g^2\tfrac{e^{-\Omega_2^2 t_{21}- \Omega_3^2 t_{31}} \Omega_1^2 \left( \Omega_1^2+ \Omega_2^2 - \Omega_3^2 \right) + e^{-\Omega_1^2 t_{31} - \Omega_2^2 t_{32}} \Omega_3^2 \left( - \Omega_1^2 + \Omega_2^2 + \Omega_3^2 \right) - e^{-\Omega_1^2 t_{21} - \Omega_3^2 t_{32}} \Omega_2^2 \left( \Omega_1^2 + \Omega_2^2 + \Omega_3^2 \right)}{\Omega_1^2 \Omega_2^2 \Omega_3^2 \left( \left( \Omega_1^2 - \Omega_3^2 \right)^2 - \Omega_2^4 \right)} ,\end{gathered}$$ where $t_{ij} = t_i - t_j$ and $t_1 \le t_2 \le t_3$. A consistency check for this expression is obtained by considering the equal time case $t_1 = t_2 = t_3 \to \infty$, which reproduces, as expected, the usual bosonic result (as in Eq. (\[eq:Equilibrium\])): $$\braket{\phi(t, \mathbf{k}_1)\phi(t, \mathbf{k}_2)\phi(t, \mathbf{k}_3)} \xrightarrow[t \to \infty]{} \frac{g^2}{\Omega_1^2 \Omega_2^2 \Omega_3^2}\,.$$ ### One–loop propagator {#sec:one-loop-propagator} As a next example, let us consider the one–loop correction to the two–point function $\braket{\phi(s_1, \mathbf{k}_1) \phi( s_2, \mathbf{k}_2 )}$. Using the expansion in Eq.
, we find that $$\braket{\phi(s_1, \mathbf{k}_1) \phi(s_2, \mathbf{k}) } = \begin{small} \begin{minipage}{.14\linewidth} \vspace{3em} \begin{picture}(20,5)(,) \put(2,10){$s_1$} \put(2,-7){$\mathbf{k}$} \put(47,10){$s_2$} \put(47,-7){$\mathbf{k}$} \put(33,-22){(a)} \includegraphics[scale=.7]{StochGraph-Propagator-crop} \end{picture} \vspace{3em} \end{minipage} + \begin{minipage}{.17\linewidth} \begin{picture}(20,30)(,) \put(2,22){$s_1$} \put(2,5){$\mathbf{k}$} \put(66,22){$s_2$} \put(66,5){$\mathbf{k}$} \put(18,30){$s_a$} \put(18,0){$s_b$} \put(48,30){$s_d$} \put(48,0){$s_c$} \put(33,35){$\mathbf{k}_1$} \put(33,-8){$\mathbf{k}_2$} \put(33,-22){(b)} \includegraphics[scale=.5]{StochGraph-2pt1loop-01-crop} \end{picture} \end{minipage} + \begin{minipage}{.17\linewidth} \begin{picture}(20,30)(,) \put(0,22){$s_1$} \put(0,5){$\mathbf{k}$} \put(66,22){$s_2$} \put(66,5){$\mathbf{k}$} \put(12,22){$s_a$} \put(18,30){$s_b$} \put(18,0){$s_c$} \put(48,30){$s_d$} \put(33,35){$\mathbf{k}_1$} \put(33,-8){$\mathbf{k}_2$} \put(33,-22){(c)} \includegraphics[scale=.5]{StochGraph-2pt1loop-02-crop} \end{picture} \end{minipage} + \begin{minipage}{.17\linewidth} \begin{picture}(20,30)(,) \put(0,22){$s_1$} \put(0,5){$\mathbf{k}$} \put(66,22){$s_2$} \put(66,5){$\mathbf{k}$} \put(52,22){$s_d$} \put(18,30){$s_a$} \put(18,0){$s_b$} \put(46,30){$s_c$} \put(33,35){$\mathbf{k}_1$} \put(33,-8){$\mathbf{k}_2$} \put(33,-22){(d)} \includegraphics[scale=.5]{StochGraph-2pt1loop-03-crop} \end{picture} \end{minipage} \end{small}$$ The first term is just the usual propagator $D( s_1, \mathbf{k}_1; s_2, \mathbf{k}_2)$. 
The contributions from the diagrams (b), (c), (d) are as follows: $$\begin{aligned} \text{(b)} &= g^4\frac{G(s_1, \mathbf{k}) D(s_a,\mathbf{k}_1; s_d, -\mathbf{k}_1 ) D( s_b, \mathbf{k}_2; s_c, -\mathbf{k}_2) G( s_2, \mathbf{k})}{\left( s_1-s_a-s_b \right) \left( s_2 - s_d - s_c \right)} \, , \\ \text{(c)} &= g^4\frac{D( s_1, \mathbf{k}; s_a, - \mathbf{k} ) D( s_b, \mathbf{k}_1; s_d, -\mathbf{k}_1) G (s_c, \mathbf{k}_2 ) G( s_2, \mathbf{k} )}{\left( s_c - s_a - s_b \right) \left( s_2 - s_c - s_d \right)} \, ,\\ \text{(d)} &= g^4\frac{G(s_1, \mathbf{k}) D(s_a,\mathbf{k}_1; s_c, -\mathbf{k}_1) G(s_b,\mathbf{k}_2) D(s_d, \mathbf{k}; s_2, -\mathbf{k}) }{\left( s_1 - s_a - s_b \right) \left( s_b - s_c - s_d \right)} \, .\end{aligned}$$ The two–point function is obtained by summing up all the contributions and integrating over the internal momenta. Once more, the result is more transparent if we take the inverse Laplace transform and consider the large time limit $t_1, t_2 \to \infty$: $$\begin{gathered} \braket{\phi(t_1, \mathbf{k}) \phi(t_2, \mathbf{k}) } \xrightarrow[t_1, t_2 \to \infty]{} \frac{e^{-\Omega^2 t_{21}}}{\Omega^2} \\ + g^4 \int \frac{\di \mathbf{k}_1 \di \mathbf{k}_2}{\left(2 \pi \right)^{4}} \left[ \frac{\Omega^2 e^{- \left( \Omega_1^2 + \Omega_2^2 \right) t_{21}} - \left( \Omega_1^2 + \Omega_2^2 \right) e^{-\Omega^2 t_{21}}}{\Omega^4 \Omega_1^2 \Omega_2^2 \left( \Omega^2 - \Omega_1^2 - \Omega_2^2 \right)} \delta ( \mathbf{k} - \mathbf{k}_1 - \mathbf{k}_2 ) \right] \, ,\end{gathered}$$ where $t_{21} = \abs{t_2 - t_1}$. Taking the $t_{21} \to 0$ limit, one reproduces the usual bosonic two–point function at one loop (as in Eq.
(\[eq:Equilibrium\])): $$\braket{\phi(t, \mathbf{k}) \phi(t, \mathbf{k}) } \xrightarrow[t\to \infty]{} \frac{1}{\Omega^2} + g^4 \int \frac{\di \mathbf{k}_1 \di \mathbf{k}_2}{\left(2 \pi \right)^{4}} \frac{ \delta (\mathbf{k} - \mathbf{k}_1 - \mathbf{k}_2) }{\Omega^4 \Omega_1^2 \Omega_2^2}\, .$$ UV Regularization {#sec:regularization} ----------------- The two–point function we derived above suffers from an ultraviolet divergence. In order to regularize it, one can either change the Langevin equation (Eq. (\[eq:langevin\_st\])), or the noise correlation function in Eq. (\[eq:corr\_eta\]). Here, we follow the latter approach and show how smearing out the delta function *in time* by introducing a cut–off $\Lambda$ results in an ultraviolet regularization *in space* which, in the large time limit, reproduces the usual Pauli–Villars result. Consider the noise function $\eta_\Lambda(t, \mathbf{x})$ with the following two–point function: $$\braket{\eta_\Lambda (t, \mathbf{x}) \eta_\Lambda (t^\prime, \mathbf{x}^\prime)} = \delta (\mathbf{x} - \mathbf{x}^\prime ) \Lambda^2 e^{-\Lambda^2 \abs{t - t^\prime}} \, .$$ For $\Lambda \to \infty$, it converges to the two–point function of the usual noise: $$\begin{gathered} \braket{\eta_\Lambda (t, \mathbf{x}) \eta_\Lambda (t^\prime, \mathbf{x}^\prime)} \xrightarrow[\Lambda \to \infty]{} 2 \delta (\mathbf{x} - \mathbf{x}^\prime ) \delta (t - t^\prime ) \,,\\ \eta_{\Lambda} (t, \mathbf{x} ) \xrightarrow[\Lambda \to \infty]{} \eta (t, \mathbf{x}) \, .\end{gathered}$$ Applying the Fourier–Laplace transform, we get $$\braket{\eta_\Lambda (s, \mathbf{k}) \eta_\Lambda (s^\prime, \mathbf{k}^\prime)} = \frac{ \left( 2\pi \right)^2 \delta (\mathbf{k} + \mathbf{k}^\prime )}{ s + s^\prime} \frac{ \Lambda^2 \left( s + s^\prime + 2 \Lambda^2 \right) }{\left( s + \Lambda^2 \right) \left( s^\prime + \Lambda^2 \right)} \, .$$ In this scheme, the Langevin equation remains unchanged, which implies that the retarded Green’s function remains the same,
$$G ( s, \mathbf{k} ) = \frac{1}{s + \Omega^2} \, ,$$ while the field receives a $\Lambda^2 $ correction. In particular, for the free case: $$\phi_\Lambda (s, \mathbf{k} ) = G( s, \mathbf{k} ) \eta_{\Lambda} (s, \mathbf{k}) + G( s, \mathbf{k}) \phi_0 (\mathbf{k}) \, .$$ We are now in a position to calculate the $\Lambda^2 $ correction to the propagator in Eq. (\[eq:Propagator\]): $$D_{\Lambda} (s, \mathbf{k}; s^\prime, \mathbf{k}^\prime ) = \braket{\phi_{\Lambda}(s, \mathbf{k}) \phi_{\Lambda}( s^\prime, \mathbf{k}^\prime) } = \frac{ \left(2 \pi \right)^2 \delta( \mathbf{k} + \mathbf{k}^\prime)}{\left( s + \Omega^2 \right) \left( s^\prime + \Omega^2 \right) \left( s + s^\prime \right)} \frac{ \Lambda^2 \left( s + s^\prime + 2 \Lambda^2 \right) }{\left( s + \Lambda^2 \right) \left( s^\prime + \Lambda^2 \right)} \, .$$ To see how this corresponds to an ultraviolet regularization, let us perform the inverse Laplace transform: $$\begin{gathered} D_{\Lambda} (t, \mathbf{k}; t^\prime, \mathbf{k}^\prime ) = \frac{ \left( 2\pi \right)^2 \delta (\mathbf{k} + \mathbf{k}^\prime ) \Lambda^2}{\Omega^2 \left( \Lambda^4 - \Omega^4 \right)} \left( \Lambda^2 \ e^{- \Omega^2 \abs{t - t^\prime}} - \Omega^2 \ e^{- \Lambda^2 \abs{t - t^\prime}} \right. 
\\ \left.- \left( \Lambda^2 + \Omega^2 \right) e^{-\Omega^2 \left( t + t^\prime \right)} + \Omega^2 e^{-\Omega^2 t - \Lambda^2 t^\prime} + \Omega^2 e^{-\Omega^2 t^\prime - \Lambda^2 t} \right) \, .\end{gathered}$$ For large times, only the first two terms in the sum contribute, and for $t = t^\prime \to \infty$, we find the usual Pauli–Villars propagator for the $2$–dimensional boson: $$D( t, \mathbf{k}; t, \mathbf{k}^\prime ) \xrightarrow[t \to \infty]{} \left(2 \pi \right)^2 \delta (\mathbf{k} + \mathbf{k}^\prime)\frac{\Lambda^2}{\Omega^2 \left( \Lambda^2 + \Omega^2 \right)} \, .$$ Acknowledgements {#acknowledgements .unnumbered} ---------------- We are indebted to Luciano Girardello for enlightening discussions, and Tadashi Takayanagi for illuminating discussions and comments on the manuscript. We would also like to thank Masahito Yamazaki for numerous discussions at an early stage of the project. Finally, we would like to thank the workshop “New Perspectives in String Theory” at the Galileo Galilei Institute for Theoretical Physics in Arcetri, Firenze, for hospitality, where this research was initiated. The research of the authors was supported by the World Premier International Research Center Initiative (WPI Initiative), MEXT, Japan. [^1]: We cannot give an extensive account of the existing literature here and thus content ourselves with mentioning only a few main examples. [^2]: An analogous problem was solved by Landau in [@Landau:1946jc] in the study of oscillations in plasma.
--- abstract: 'We present [*macroscopic*]{} experimental evidence for field-induced [*microscopic*]{} quantum fluctuations in different hole- and electron-type cuprate superconductors with varying doping levels and numbers of CuO$_2$ layers per unit cell. The significant suppression of the zero-temperature in-plane magnetic irreversibility field relative to the paramagnetic field in all cuprate superconductors suggests strong quantum fluctuations due to the proximity of the cuprates to quantum criticality.' author: - 'A. D. Beyer,$^1$ V. S. Zapf,$^2$ H. Yang,$^1$, F. Fabris,$^2$ M. S. Park,$^3$ K. H. Kim,$^3$ S.-I. Lee,$^3$ and N.-C. Yeh' title: | Macroscopic evidence for quantum criticality and field-induced\ quantum fluctuations in cuprate superconductors --- High-temperature superconducting cuprates are extreme type-II superconductors that exhibit strong thermal, disorder and quantum fluctuations in their vortex states. [@FisherDS91; @Blatter94; @Yeh93; @Balents94; @Giamarchi95; @Kotliar96; @Kierfeld04; @Zapf05; @Yeh05a] While much research has focused on the [*macroscopic*]{} vortex dynamics of cuprate superconductors with phenomenological descriptions, [@FisherDS91; @Blatter94; @Yeh93; @Balents94; @Giamarchi95; @Kierfeld04] little effort has been made to address the [*microscopic*]{} physical origin of their extreme type-II nature. [@Yeh05a] Given that competing orders (CO) can exist in the ground state of these doped Mott insulators besides superconductivity (SC), [@Yeh05a; @Zhang97; @Demler01; @Chakravarty01; @Sachdev03; @Kivelson03; @LeePA06] the occurrence of quantum criticality may be expected. [@Demler01; @Sachdev03; @Onufrieva04] The proximity to quantum criticality and the existence of CO can significantly affect the low-energy excitations of the cuprates due to strong quantum fluctuations [@Zapf05; @Yeh05a] and the redistribution of quasiparticle spectral weight among SC and CO. 
[@Yeh05a; @ChenCT07; @Beyer06] Indeed, empirically the low-energy excitations of cuprate superconductors appear to be unconventional, exhibiting intriguing properties unaccounted for by conventional Bogoliubov quasiparticles. [@Yeh05a; @ChenCT07; @Beyer06; @ChenCT03] Moreover, external variables such as temperature ($T$) and applied magnetic field ($H$) can vary the interplay of SC and CO, such as inducing or enhancing the CO [@ChenHY05a; @Lake01] at the price of more rapid suppression of SC, thereby leading to weakened superconducting stiffness and strong thermal and field-induced fluctuations. [@FisherDS91; @Blatter94; @Yeh93] On the other hand, the quasi two-dimensional nature of the cuprates can also lead to quantum criticality in the limit of decoupling of CuO$_2$ planes. In this work we demonstrate experimental evidence from [*macroscopic*]{} magnetization measurements for field-induced quantum fluctuations among a wide variety of cuprate superconductors with different [*microscopic*]{} variables such as the doping level ($\delta$) of holes or electrons, and the number of CuO$_2$ layers per unit cell ($n$). [@Chakravarty04] We suggest that the manifestation of strong field-induced quantum fluctuations is consistent with a scenario that all cuprates are in close proximity to a quantum critical point (QCP). [@Kotliar96] To investigate the effect of quantum fluctuations on the vortex dynamics of cuprate superconductors, our strategy involves studying the vortex phase diagram at $T \to 0$ to minimize the effect of thermal fluctuations, and applying magnetic field [*parallel*]{} to the CuO$_2$ planes ($H \parallel ab$) to minimize the effect of random point disorder. 
The rationale for having $H \parallel ab$ is that the intrinsic pinning effect of layered CuO$_2$ planes generally dominates over the pinning effects of random point disorder, so that the commonly observed glassy vortex phases associated with point disorder for $H \parallel c$ ([*e.g.*]{} vortex glass and Bragg glass) [@FisherDS91; @Giamarchi95; @Kierfeld04] can be prevented. In the [*absence*]{} of quantum fluctuations, random point disorder can cooperate with the intrinsic pinning effect to stabilize the low-temperature vortex smectic and vortex solid phases, [@Balents94] so that the vortex phase diagram for $H \parallel ab$ would resemble that of the vortex-glass and vortex-liquid phases observed for $H \parallel c$ with a glass transition $H_G (T = 0)$ approaching $H_{c2} (T = 0)$. On the other hand, when [*field-induced quantum fluctuations*]{} are dominant, the vortex phase diagram for $H \parallel ab$ will deviate substantially from predictions solely based on thermal fluctuations and intrinsic pinning, and we expect strong suppression of the magnetic irreversibility field $H_{irr} ^{ab}$ relative to the upper critical field $H_{c2} ^{ab}$ at $T \to 0$, because the induced persistent current circulating along both the c-axis and the ab-plane can no longer be sustained if field-induced quantum fluctuations become too strong to maintain the c-axis superconducting phase coherence. In this communication we present experimental results that are consistent with the notion that all cuprate superconductors exhibit significant field-induced quantum fluctuations as manifested by a characteristic field $H_{irr} ^{ab} (T \to 0) \equiv H^{\ast} \ll H_{c2} ^{ab} (T \to 0)$. Moreover, we find that we can express the degree of quantum fluctuations for each cuprate in terms of a reduced field $h^{\ast} \equiv H^{\ast}/H_{c2}^{ab}(0)$, with $h^{\ast} \to 0$ indicating strong quantum fluctuations and $h^{\ast} \to 1$ referring to the mean-field limit. 
Most important, the $h^{\ast}$ values of all cuprates appear to follow a trend on a $h^{\ast} (\alpha)$-vs.-$\alpha$ plot, where $\alpha$ is a material parameter for a given cuprate that reflects its doping level, electronic anisotropy, and charge imbalance if the number of CuO$_2$ layers per unit cell $n$ satisfies $n \ge 3$. [@Kotegawa01a; @Kotegawa01b] In the event that $H_{c2} ^{ab} (0)$ exceeds the paramagnetic field $H_p \equiv \Delta _{\rm SC} (0)/(\sqrt{2} \mu _B)$ for highly anisotropic cuprates, where $\Delta _{\rm SC} (0)$ denotes the superconducting gap at $T = 0$, $h^{\ast}$ is defined by $(H^{\ast}/H_p)$ because $H_p$ becomes the maximum critical field for superconductivity. To find $h^{\ast}$, we need to determine both the upper critical field $H_{c2} ^{ab} (T)$ and the irreversibility field $H_{irr} ^{ab} (T)$ to as low temperature as possible. Empirically, $H_{c2} ^{ab} (T)$ can be derived from measuring the magnetic penetration depth in pulsed fields, with $H_{c2}^{ab}(0)$ extrapolated from $H_{c2} ^{ab} (T)$ values obtained at finite temperatures. The experiments involve measuring the frequency shift $\Delta f$ of a tunnel diode oscillator (TDO) resonant tank circuit with the sample contained in one of the component inductors. Details of the measurement techniques have been given in Ref. . In general we find that the condition $H_{c2} ^{ab} (0) > H_p$ is satisfied among all samples investigated so that we define $h^{\ast} \equiv (H^{\ast}/H_p)$ hereafter. On the other hand, determination of $H_{c2}^{ab} (0)$ and $H_{c2}^c (0)$ is still useful because it provides the electronic anisotropy $\gamma \equiv (\xi _{ab}/\xi _c) = \lbrack H_{c2}^{ab}(0)/H_{c2}^c(0) \rbrack$, where $\xi _{ab} (\xi _c)$ refers to the in-plane (c-axis) superconducting coherence length. 
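For concreteness, the paramagnetic field $H_p = \Delta_{\rm SC}(0)/(\sqrt{2}\mu_B)$ and the reduced field $h^{\ast} = H^{\ast}/H_p$ defined above can be evaluated numerically. The gap and irreversibility-field inputs in this sketch are hypothetical placeholders, not values taken from Table 1:

```python
import math

MU_B_MEV_PER_T = 5.788e-2  # Bohr magneton in meV/T

def paramagnetic_field(delta_sc_mev: float) -> float:
    """H_p in tesla for a zero-temperature gap Delta_SC(0) given in meV."""
    return delta_sc_mev / (math.sqrt(2.0) * MU_B_MEV_PER_T)

def reduced_field(h_star_tesla: float, delta_sc_mev: float) -> float:
    """h* = H*/H_p; h* -> 0 signals strong field-induced quantum fluctuations."""
    return h_star_tesla / paramagnetic_field(delta_sc_mev)

# Hypothetical example: a 23 meV gap gives H_p of roughly 281 T, so an
# in-plane irreversibility field H* = 23 T corresponds to h* of about 0.08.
Hp = paramagnetic_field(23.0)
h_star = reduced_field(23.0, 23.0)
```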
![Representative measurements of the in-plane irreversibility fields $H_{irr}^{ab} (T)$ in cuprate superconductors: (a) Hg-1223 (polycrystalline and grain-aligned), (b) Hg-1234 (polycrystalline), (c) Hg-1245 (grain-aligned), and (d) La-112 (polycrystalline and grain-aligned). Insets: (a) Consistent $T_{irr}^{ab}(H)$ obtained from maximum irreversibility of a polycrystalline sample and from irreversibility of a grain-aligned sample with $H \parallel ab$; (b) representative $\chi_3$ data taken using AC Hall probe techniques; (c) details of the $M$-vs.-$T$ curves, showing an anomalous upturn at $T < \tilde T$; (d) exemplifying determination of $H_{c2}^{ab}$ in La-112 using TDO to measure $\Delta f$.[@Zapf05][]{data-label="fig1"}](Fig1){width="3.375in"} To determine $H_{irr} ^{ab} (T)$, we employed different experimental techniques, including DC measurements of the magnetization $M(T,H)$ with the use of a SQUID magnetometer or a homemade Hall probe magnetometer for lower fields (up to 9 Tesla), a DC magnetometer (up to 14 Tesla) at the National High Magnetic Field Laboratory (NHMFL) in Los Alamos (LANL-PPMS), a cantilever magnetometer at the NHMFL in Tallahassee for higher fields (up to 33 Tesla DC fields in a $^3$He refrigerator), [@Naughton97] and a compensated coil for magnetization measurements in the pulsed-field (PF) facilities at LANL for even higher fields (up to 65 Tesla pulsed fields in a $^3$He refrigerator). [@Zapf05] In addition, AC measurements of the third harmonic magnetic susceptibility ($\chi _3$) as a function of temperature in a constant field are employed to determine the onset of non-linearity in the low-excitation limit. [@ReedDS95] Examples of the measurements of $H_{irr} ^{ab} (T)$ for $\rm HgBa_2Ca_2Cu_3O_x$ (Hg-1223, $T_c = 133$ K), $\rm HgBa_2Ca_3Cu_4O_x$ (Hg-1234, $T_c = 125$ K), $\rm HgBa_2Ca_4Cu_5O_x$ (Hg-1245, $T_c = 108$ K) and $\rm Sr_{0.9}La_{0.1}CuO_2$ (La-112, $T_c = 43$ K), are shown in Fig.
1 (a) - (d), and the consistency among $H_{irr} ^{ab} (T)$ deduced from different techniques has been verified, as summarized in the $H$-vs.-$T$ phase diagrams ($H \parallel ab$) in Fig. 2(a) – (d). The Hg-based cuprates are in either polycrystalline or grain-aligned forms, and the quality of these samples is confirmed with x-ray diffraction and magnetization measurements to ensure single phase and nearly 100% volume superconductivity. [@KimMS98; @IyoA06] We have also verified that $H_{irr}^{ab} (T)$ obtained from the polycrystalline samples are consistent with those derived from the grain-aligned samples with $H \parallel ab$, because the measured irreversibility in a polycrystalline sample is manifested by its maximum irreversibility $H_{irr}^{ab}$ among grains of varying orientation relative to the applied field. Examples of this consistency have been shown in Ref.  and also in the main panel and the inset of Fig. 1(a). In addition to the four different cuprates considered in this work, we compare measurements of $H_{irr} ^{ab} (T)$ on other cuprate superconducting single crystals, including underdoped $\rm YBa_2Cu_3O_{7-\delta}$ (Y-123, $T_c$ = 87 K), [@OBrien00] optimally doped $\rm Nd_{1.85}Ce_{0.15}CuO_4$ (NCCO, $T_c$ = 23 K) [@Yeh92] and overdoped $\rm Bi_2Sr_2CaCu_2O_x$ (Bi-2212, $T_c$ = 60 K). [@Krusin-Elbaum04] The irreversibility fields for these cuprates normalized to their corresponding paramagnetic fields $H_p$ are summarized in Fig. 3(a) as a function of the reduced temperature $(T/T_c)$, clearly demonstrating strong suppression of $H^{\ast}$ relative to $H_p$ and $H_{c2} ^{ab} (0)$ in all cuprates and implying significant field-induced quantum fluctuations. ![Determination of $H^{\ast}$ using various magnetic measurements of $H_{irr}^{ab} (T)$: (a) Hg-1223, (b) Hg-1234, (c) Hg-1245, and (d) La-112. In (b) and (d) dashed lines indicate $H_{c2}^{ab} (T)$ from TDO measurements. In (d) we also illustrate $H_{c2}^{c}$ for comparison.
We note reasonable consistency among different experimental techniques, indicating strong suppression of $H^{\ast} \equiv H_{irr} ^{ab} (0)$ relative to $H_{c2} ^{ab} (0)$ (or $H_p$) in all cuprates.[]{data-label="fig2"}](Fig2){width="3.45in"} --------- --------- -------- -------- ------------------------- ------- -------- -------- -------- -------- ------------ ------- -------- -------- Hg-1245 $0.15$ $1.30$ $0.80$ $55$ [@Hg1245comment] $25$ $0.06$ $0.3$ $23.0$ $5.0$ $-[278]$ $40$ $0.08$ $0.02$ Hg-1223 $0.15$ $1.04$ $0.92$ $52$ [@Zech96] $18$ $0.26$ $0.9$ $48.5$ $6.5$ $-[347]$ $50$ $0.14$ $0.02$ Hg-1234 $0.15$ $1.20$ $0.80$ $52$ [@Zech96] $10$ $0.13$ $0.2 $ $75.0$ $10.0$ $-[320]$ $46$ $0.23$ $0.02$ La-112 $0.10$ $1.00$ $1.00$ $13$ [@Zapf05] $4.0$ $0.77$ $2.4 $ $46.0$ $4.0$ $160[110]$ $10$ $0.42$ $0.04$ Bi-2212 $0.225$ $1.00$ $1.00$ $11$ [@Krusin-Elbaum04] $8.0$ $2.05$ $15 $ $65.0$ $10$ $100[155]$ $22$ $0.42$ $0.06$ NCCO $0.15$ $1.00$ $1.00$ $13$ [@Yeh92] $5.0$ $1.15$ $4.4 $ $40.0$ $5.0$ $77[59]$ $8.0$ $0.68$ $0.12$ Y-123 $0.13$ $1.00$ $1.00$ $7.0$ [@Zech96] $2.0$ $1.86$ $5.3 $ $210$ $50$ $600[239]$ $25$ $0.88$ $0.23$ --------- --------- -------- -------- ------------------------- ------- -------- -------- -------- -------- ------------ ------- -------- -------- \[table1\] The physical significance of $h^{\ast}$ may be better understood by considering how the magnetic irreversibility for $H \parallel ab$ occurs. For sufficiently low $T$ and small $H$, a supercurrent circulating both along and perpendicular to the CuO$_2$ planes with a coherent superconducting phase can be induced and sustained, leading to magnetic irreversibility. On the other hand, strong thermal or quantum fluctuations due to large anisotropy and competing orders in the cuprates can reduce the phase coherence of supercurrents, particularly the coherence perpendicular to the CuO$_2$ planes, thereby diminishing the magnetic irreversibility. 
Thus, we expect that the degree of the in-plane magnetic irreversibility is dependent on the nominal doping level $\delta$, the electronic anisotropy $\gamma$, the number of CuO$_2$ layers per unit cell $n$, and the ratio of charge imbalance $(\delta _o/\delta _i)$ [@Kotegawa01a; @Kotegawa01b] between the doping level of the outer layers ($\delta _o$) and that of the inner layer(s) ($\delta _i$) in multi-layer cuprates with $n \ge 3$. In other words, $h^{\ast}$ for each cuprate superconductor may be expressed in terms of a material parameter $\alpha$ that depends on $\delta$, $\gamma$, $n$ and $(\delta _o / \delta _i)$, and the simplest assumption for a linearized dependence of $\alpha$ on these variables gives: $$\begin{aligned} \alpha &\equiv \gamma ^{-1} \delta (\delta _o/\delta _i)^{-(n-2)}, \qquad (n \ge 3); \\ \alpha &\equiv \gamma ^{-1} \delta , \qquad \qquad \qquad \qquad (n \le 2) . \label{eq:alpha}\end{aligned}$$ If the suppression of the in-plane magnetic irreversibility is associated with field-induced quantum fluctuations and the proximity to a quantum critical point $\alpha _c$, [@Demler01] $h^{\ast} (\alpha)$ should be a function of $|\alpha - \alpha _c|$. Indeed, we find that using the empirically determined values for different cuprates tabulated in Table 1 and the definition of $\alpha$ given above, the $h^{\ast}$-vs.-$\alpha$ data for a wide variety of cuprates appear to follow a trend, as shown in Fig. 3(b). For comparison, we include in Fig. 3(b) theoretical curves predicted for field-induced static spin density waves (SDW) in cuprate superconductors in the limit of $H_{c1} (0) \ll H \ll H_{c2} (0)$, where $h^{\ast}$ (above which static ![(a) Reduced in-plane fields $(H_{irr}^{ab}/H_p)$ and $(H_{c2}^{ab}/H_p)$ vs. $(T/T_c)$ for various cuprates. 
In the $T \to 0$ limit where $H_{irr}^{ab} \to H^{\ast}$, the reduced fields $h^{\ast} \equiv (H^{\ast}/H_p) < 1$ for all cuprates are listed in Table I for Y-123, NCCO, Bi-2212, La-112, Hg-1234, Hg-1223, and Hg-1245 (in descending order). (b) $h^{\ast}$ vs. $\alpha$ in logarithmic plot for different cuprates, with decreasing $\alpha$ representing increasing quantum fluctuations. The lines given by $-400 |\alpha - \alpha _c| / \ln | \alpha - \alpha _c |$ represent the field-induced SDW scenario [@Demler01] in Eq. (3) with different $\alpha _c$ = 0, $10^{-4}$ and $2 \times 10^{-4}$ from left to right. Inset: The linear plot of the main panel. (c) The same data as in (b) are compared with the power-law dependence (solid lines) given by $5(\alpha - \alpha _c)^{1/2}$, using different $\alpha _c$ = 0, $10^{-4}$ and $2 \times 10^{-4}$ from left to right. Inset: The linear plot of the main panel. (d) The $H$-vs.-$T$ diagram of Hg-1245. (See text for details).[]{data-label="fig3"}](Fig3){width="3.45in"} SDW coexists with SC) satisfies the relation: [@Demler01] $$h^{\ast} (\alpha) \propto |\alpha - \alpha _c|/\lbrack \ln |\alpha - \alpha _c| \rbrack. \label{eq:hstar}$$ Here $\alpha _c$ is a non-universal critical point, [@Demler01] $h^{\ast} (\alpha) \to 0$ for $\alpha \to \alpha _c$, and we have shown theoretical curves associated with three different $\alpha_c$ values for comparison with data. On the other hand, a simple scaling argument would assert a power-law dependence: $$h^{\ast} (\alpha) \propto |\alpha - \alpha _c|^{a}. \qquad \qquad (a > 0) \label{eq:hpower}$$ Using $a \sim 0.5$ in Eq. (4), we compare the power-law dependence with experimental data in Fig. 3(c). This dependence appears to agree better with experimental data than the SDW/SC formalism in Eq. (3). Although our available data cannot accurately determine $\alpha _c$, we further examine Hg-1245 (which has the smallest $h^{\ast}$) for additional clues associated with the nature of the QCP. 
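For concreteness, the material parameter of Eqs. (1)–(2) and the two candidate scalings compared in Fig. 3(b) and 3(c) can be written as small numerical helpers. This is only an illustrative sketch: the prefactors ($-400$ and $5$) are the ones quoted in the captions of Fig. 3(b) and 3(c), while the function names and any sample values of $\gamma$, $\delta$, $n$ and $\alpha_c$ used with them are placeholders, not the fitted parameters of Table I.

```python
import math

def alpha(gamma, delta, n, imbalance=1.0):
    """Material parameter of Eqs. (1)-(2): alpha = delta/gamma for
    n <= 2, with an extra (delta_o/delta_i)^-(n-2) factor for the
    charge-imbalanced multi-layer cuprates with n >= 3."""
    a = delta / gamma
    if n >= 3:
        a *= imbalance ** (-(n - 2))
    return a

def h_star_sdw(a, a_c):
    """Field-induced SDW form of Eq. (3) with the prefactor -400 of
    Fig. 3(b); note ln|a - a_c| < 0 for |a - a_c| < 1, so h* > 0."""
    x = abs(a - a_c)
    return -400.0 * x / math.log(x)

def h_star_power(a, a_c, exponent=0.5):
    """Power-law form of Eq. (4) with exponent a ~ 0.5 and the
    prefactor 5 of Fig. 3(c)."""
    return 5.0 * abs(a - a_c) ** exponent
```

Both forms vanish as $\alpha \to \alpha_c$ and grow with $|\alpha - \alpha_c|$, which is one reason the available data alone cannot pin down $\alpha_c$ accurately.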
We find that the magnetization $M$ of Hg-1245 always exhibits an anomalous increase for $T < \tilde T (H)$ (see the inset of Fig. 1(c)), indicating a field-induced reentry of magnetic ordering below $\tilde T (H)$. This magnetic reentry line $\tilde H (T)$ is shown together with $H_{irr}^{ab} (T)$ in Fig. 3(d). We suggest that the regime below both $H_{irr}^{ab}$ and $\tilde H$ corresponds to a coherent SC state ($c$-SC), and that bounded by $H_{irr}^{ab}$ and $\tilde H$ is associated with a coherent phase of coexisting SC and magnetic CO ($c$-SC/CO), whereas that above $H_{irr}^{ab}$ is an incoherent SC phase ($i$-SC and $i$-SC/CO) with strong fluctuations. Our conjecture of a field-induced magnetic CO in Hg-1245 contributing to quantum fluctuations may be further corroborated by considering the $h^{\ast}$-vs.-$\alpha$ dependence in the multi-layered cuprates Hg-1223, Hg-1234 and Hg-1245. While these cuprate superconductors have the highest $T_c$ and $H_{c2}$ values, as shown in Table I, they also exhibit the smallest $h^{\ast}$ and $\alpha$ values, suggesting maximum quantum fluctuations. These strong quantum fluctuations can be attributed to both their extreme two dimensionality (i.e., large $\gamma$) [@KimMS98; @KimMS01] and significant charge imbalance that leads to strong CO in the inner layers. [@Kotegawa01a; @Kotegawa01b] Indeed, muon spin resonance ($\mu$SR) experiments [@Tokiwa03] have revealed increasing antiferromagnetic ordering in the inner layers of the multi-layer cuprates with $n \ge 3$. Given that the $\gamma$ values of all Hg-based multi-layer cuprates are comparable (Table I), the finding of larger quantum fluctuations (i.e. smaller $h^{\ast}$) in Hg-1245 is suggestive of increasing quantum fluctuations with stronger competing order.
However, further investigation of the $h^{\ast}$ and $\gamma$ values of other multi-layer cuprates will be necessary to confirm whether competing orders in addition to large anisotropy contribute to quantum fluctuations. In summary, our investigation of the [*in-plane*]{} magnetic irreversibility in a wide variety of cuprate superconductors reveals strong field-induced quantum fluctuations. The [*macroscopic*]{} irreversibility field exhibits dependences on such [*microscopic*]{} material parameters as the doping level, the charge imbalance in multi-layered cuprates, and the electronic anisotropy. Our finding is consistent with the notion that cuprate superconductors are in close proximity to quantum criticality. Research at Caltech was supported by NSF Grant DMR-0405088 and through the NHMFL. The SQUID data were taken at the Beckman Institute at Caltech. Work at Pohang University was supported by the Ministry of Science and Technology of Korea. The authors gratefully acknowledge Dr. Kazuyasu Tokiwa and Dr. Tsuneo Watanabe at the Tokyo University of Science for providing the $\rm HgBa_2Ca_4Cu_5O_x$ (Hg-1245) samples.
--- abstract: 'The paper deals with the program of determining the complexity of various homeomorphism relations. The homeomorphism relation on compact Polish spaces is known to be reducible to an orbit equivalence relation of a continuous Polish group action (Kechris-Solecki). It is shown that this result extends to locally compact Polish spaces, but does not hold for spaces in which local compactness fails at only one point. In fact the result fails for those subsets of ${\mathbb{R}}^3$ which are unions of an open set and a point. At the end, a list of open problems in this area of research is given.' author: - 'Vadim Kulikov[^1]' bibliography: - 'ref1.bib' title: 'Classification and Non-classification of Homeomorphism Relations' --- **MSC2010:** 03E15, 57N10, 57M25, 57N65. #### Acknowledgments. {#acknowledgments. .unnumbered} I am grateful to Professors Alexander Kechris and Su Gao for providing useful answers to my e-mails which helped and encouraged me to proceed with this work. During the preparation of this work I also had several valuable discussions on this topic with my Ph.D. supervisor Tapani Hyttinen and my friends and colleagues Rami Luisto, Pekka Pankka and Marcin Sabok. Last but not least I would like to thank a woman dear to me (whose identity is concealed) for the infinite inspiration that she has given me during the time of this work. This research was supported by the Austrian Science Fund (FWF) under project number P24654. Introduction {#sec:Intro} ============ It is known that the homeomorphism relation on Polish spaces is $\Sigma^1_2$ [@Gao] and ${{\Sigma_1^1}}$-hard [@FerLouRos Thm 22]. On the other hand, it is known that restricted to compact spaces this homeomorphism relation is reducible to an orbit equivalence relation induced by a continuous action of a Polish group, which is known to be strictly below ${{\Sigma_1^1}}$-complete. This result is extended to locally compact spaces in Theorem \[thm:locCom\].
The main result of this paper is that this “nice” property of locally compact spaces breaks when just one point is added to them: the homeomorphism relation on the ${\sigma}$-compact spaces of the form $V\cup \{x\}$, where $x\in {\mathbb{R}}^3$ is fixed and $V\subset {\mathbb{R}}^3$ is open, falls somewhere in between: it is ${{\Sigma_1^1}}$, and the equivalence relation known as $E_1$ is continuously reducible to it (Theorem \[thm:NonClass\]). This implies that this homeomorphism relation is not classifiable in a Borel way by any orbit equivalence relation arising from a Borel action of a Polish group. The proof relies on known results in knot theory and low dimensional topology. We hope that these methods can be helpful in approaching Question \[open:Main\] and other questions listed in Section \[sec:Further\]. Sections \[sec:KnotTheory\] and \[sec:BkgrndDST\] are devoted to the required preliminaries. In Section \[sec:NonClass\] we prove the main non-classification result. In the final sections the research topic of classifying homeomorphism relations is looked at in more detail: in Section \[sec:Other\] we review the positive results on the classification of homeomorphism relations, and in Section \[sec:Further\] we give a list of open questions in the area. Preliminaries in Topology and Knot Theory {#sec:KnotTheory} ========================================= In this section we go through those definitions and lemmas in knot theory and topology that we need in the proofs later. We assume that the reader is familiar with the notion of the first homology group $H_1(X)$ of a topological space $X$. The standard definitions can be found for example in [@Hatcher]. We denote by ${\mathbb{R}}^n$ the $n$-dimensional Euclidean space and by ${\mathbb{S}}^n$ the one-point compactification of it, i.e.
${\mathbb{S}}^n={\mathbb{R}}^n\cup\{\infty\}$ and the neighborhoods of $\infty$ are the sets of the form $\{\infty\}\cup({\mathbb{R}}^n\setminus C)$ where $C$ is compact. By ${{\operatorname{int}}}A$ we denote the topological interior of $A$ and by $\bar A$ the closure. Hausdorff Metric and Path Connected Subspaces --------------------------------------------- \[def:HausdorffMetric\] Let $X$ be a compact metric space. The space of all non-empty compact subsets of $X$ is denoted by $K(X)$. We equip $K(X)$ with the Hausdorff-metric: An *${\varepsilon}$-collar* of a set $C\subset X$ is the set $$C_{\varepsilon}=\{x\mid d(x,C)< {\varepsilon}\}$$ and the Hausdorff-distance between two sets in $K(X)$ is determined by $$d_{K(X)}(C,C')=\max\{\inf\{{\varepsilon}\mid C\subset C'_{\varepsilon}\},\inf\{{\varepsilon}\mid C'\subset C_{\varepsilon}\}\}.$$ The following facts are standard to verify. \[fact:Hausdorffmetric\] Let $X$ be a compact metric space. Then $K(X)$ is compact and if $(C_i)_{i\in{\mathbb{N}}}$ is a converging sequence in $K(X)$ and $C_*$ is its limit, then 1. for every $x_*$ we have $x_*\in C_*$ if and only if there is a sequence $x_i$ converging to $x_*$ with $x_i\in C_i$ for all $i\in{\mathbb{N}}$. \[fact:Haus1\] 2. if every $C_i$ is connected, then $C_*$ is connected. \[fact:Haus2\] \[fact:Haus3\] \[def:DenselyPathConnected\] A subset $A\subset {\mathbb{R}}^n$ is *path metric* if the distance between two points is given by $$d_E(x,y)=\inf\{L({\gamma})\mid {\gamma}\subset A\text{ is a path joining }x\text{ and }y\}$$ where $d_E$ is the Euclidean distance and $L({\gamma})$ is the length of the path. Equivalently $A$ is path metric if and only if for every two points $x,y\in A$ and ${\varepsilon}>0$ there is a path ${\gamma}\subset A$ connecting $x$ to $y$ and $L({\gamma})<(1+{\varepsilon})d_E(x,y)$. \[lem:HDPathMetric\] If the Hausdorff dimension of a closed $A\subset{\mathbb{R}}^n$ is less than $n-1$, then ${\mathbb{R}}^n\setminus A$ is path metric. 
Let $D_0$ be the $(n-1)$-dimensional unit disc $$D_0=\{(x_1,\dots,x_{n-1})\mid x_1^2+\dots+x_{n-1}^2<1\}.$$ and let $C_0$ be the cylinder $$D_0\times [0,1]\subset {\mathbb{R}}^n.$$ For $x,y\in{\mathbb{R}}^n$ denote by $[x,y]$ the straight line segment connecting $x$ and $y$. Suppose $A_0\subset C_0$ and assume that for every $(x_1,\dots,x_{n-1})\in D_0$ the set $$A_0\cap [(x_1,\dots,x_{n-1},0),(x_1,\dots,x_{n-1},1)]$$ is non-empty. Then $A_0$ must have Hausdorff dimension at least $n-1$: $A_0$ can be projected onto $D_0$ with the Lipschitz map $$(x_1,\dots,x_{n-1},x_n)\mapsto (x_1,\dots,x_{n-1},0),$$ the latter has Hausdorff dimension $n-1$ and the Hausdorff dimension cannot increase in a Lipschitz map. Therefore we have the following claim: If $A_0\subset C_0$ has Hausdorff dimension less than $n-1$, then there is $(x_1,\dots,x_{n-1})\in D_0$ such that $[(x_1,\dots,x_{n-1},0),(x_1,\dots,x_{n-1},1)]\cap A_0={\varnothing}.$ Let $x,y\in {\mathbb{R}}^n\setminus A$ and let ${\varepsilon}>0$. Since $A$ is closed there is ${\delta}<{\varepsilon}/2$ such that $\bar B(x,{\delta})\cap A=\bar B(y,{\delta})\cap A={\varnothing}$. Let $P_x$ and $P_y$ be $(n-1)$-dimensional affine hyperplanes passing through $x$ and $y$ respectively and which are orthogonal to $x-y$. Then there is an affine map $f\colon {\mathbb{R}}^n\to{\mathbb{R}}^n$ such that $f[P_x]={\mathbb{R}}^{n-1}\times\{0\}$, $f[P_y]={\mathbb{R}}^{n-1}\times \{1\}$ and $f[P_x\cap \bar B(x,{\delta})]=D_0\times\{0\}$. Since $\dim_H(A)<n-1$, also $\dim_H(f[A])<n-1$ (because $f$ is Lipschitz) and so by the claim above there is a line segment $s$ passing from $f[\bar B(x,{\delta})\cap P_x]$ to $f[\bar B(y,{\delta})\cap P_y]$ outside $f[A]$ which is orthogonal to ${\mathbb{R}}^{n-1}\times\{0\}$. By applying $f^{-1}$ to $s$, we obtain a straight line segment passing from $\bar B(x,\delta)\cap P_x$ to $\bar B(y,\delta)\cap P_y$ orthogonal to $P_x$. 
Now by connecting the endpoints of $f^{-1}[s]$ to $x$ and $y$ we obtain a path outside $A$ of length at most $d(x,y)+2{\delta}<d(x,y)+{\varepsilon}$ connecting these two points. \[lemma:CauchyComponent\] Suppose $X,X'\subset {\mathbb{R}}^n$ are such that $X$ is a path metric space and there is a homeomorphism $h\colon X\to X'$. If $(x_i)_{i\in{\mathbb{N}}}$ is a Cauchy sequence in $X$ converging in ${\mathbb{R}}^n$ to some point $x\in {\mathbb{R}}^n\setminus X$, then all the accumulation points of $(h(x_i))_{i\in{\mathbb{N}}}$ lie in the same component of ${\mathbb{R}}^n\setminus X'$. In particular, if this component is a singleton, then $(h(x_i))_{i\in{\mathbb{N}}}$ is also a Cauchy sequence. Let $(x_i)_{i\in{\mathbb{N}}}$ be as in the statement. Suppose for a contradiction that $y^1$ and $y^2$ are two points in two different components of ${\mathbb{R}}^n\setminus X'$ that are accumulation points of $(h(x_i))_{i\in{\mathbb{N}}}$ and let $(x^1_{i})_{i\in{\mathbb{N}}}$ and $(x^2_{i})_{i\in{\mathbb{N}}}$ be subsequences of $(x_i)_{i\in{\mathbb{N}}}$ such that $(h(x^1_i))_{i\in{\mathbb{N}}}$ and $(h(x^2_i))_{i\in{\mathbb{N}}}$ converge to $y^1$ and $y^2$ respectively. For $k\in\{1,2\}$ and $i\in{\mathbb{N}}$ denote $y^k_i=h(x^k_i)$. For each $i\in {\mathbb{N}}$ let ${\gamma}_i$ be a path in $X$ connecting $x^1_i$ to $x^2_{i}$ such that $L({\gamma}_i)<(1+2^{-i})d(x^1_i,x^2_{i})$. We think of the paths as compact subsets of ${\mathbb{S}}^n$. The sequence $({\gamma}_i)_{i\in{\mathbb{N}}}$ converges in $K({\mathbb{S}}^n)$ to $\{x\}$. Consider the sequence $(h[\gamma_i])_{i\in{\mathbb{N}}}$. It is a sequence of compact subsets of ${\mathbb{S}}^n$, so it is a sequence of elements of $K({\mathbb{S}}^n)$. The latter is compact, so there is a converging subsequence: $(h[\gamma_{i(k)}])_{k\in{\mathbb{N}}}$. Denote by $\gamma$ the limit of that sequence.
By Fact \[fact:Hausdorffmetric\].\[fact:Haus1\], we have $y^1,y^2\in {\gamma}$ since $y^1_i,y^2_i\in h[\gamma_i]$ for all $i\in{\mathbb{N}}$ and additionally, since every element in the sequence is connected, ${\gamma}$ is also connected by Fact \[fact:Hausdorffmetric\].\[fact:Haus2\]. Since $y^1$ and $y^2$ lie in different components of ${\mathbb{R}}^n\setminus X'$, there must be a point $z$ in ${\gamma}$ which is in $X'$. Now, by Fact \[fact:Hausdorffmetric\].\[fact:Haus1\] we can find a sequence $z_k\in h[{\gamma}_{i(k)}]$, $k\in{\mathbb{N}}$, such that $(z_k)_{k\in{\mathbb{N}}}$ converges to $z$. But $h^{-1}(z_k)$ lies in ${\gamma}_{i(k)}$ and so $(h^{-1}(z_k))_{k\in{\mathbb{N}}}$ converges to $x$. This is a contradiction, because $x\notin {\operatorname{dom}}h=X$, but $z\in {\operatorname{ran}}h= X'$. Separation Theorems {#sec:Sep} ------------------- Here we state, for the sake of completeness, two known results from finite dimensional topology that we will need. \[thm:JordanBrouwer\] Let $h\colon {\mathbb{S}}^{n-1}\to {\mathbb{S}}^n$ be an embedding. Then ${\mathbb{S}}^n\setminus h[{\mathbb{S}}^{n-1}]$ consists of two open connected components. (A Generalization of the Schönflies Theorem by M. Brown [@Brown])\[thm:Brown\] Let $h\colon {\mathbb{S}}^{n-1}\times [0,1]\to {\mathbb{S}}^n$ be an embedding. Then the closures of the complementary domains of $h[{\mathbb{S}}^{n-1}\times \{\frac{1}{2}\}]$ are topological $n$-cells, i.e. homeomorphic to closed balls. In particular there is a self-homeomorphism of ${\mathbb{S}}^n$ which takes $h[{\mathbb{S}}^{n-1}\times \{\frac{1}{2}\}]$ to the standard ${\mathbb{S}}^{n-1}$. Knot Theory ----------- We present the basics of knot theory here as neatly as possible and account only for the facts necessary for the present paper. Unless a specific reference is given below, the reader is referred to the classical textbooks on knot theory [@BurZie; @Kauf; @Mur] for the details and omitted proofs.
A *knot* is an embedding $K\colon {\mathbb{S}}^1\to {\mathbb{S}}^3$. We often identify a knot with its image, ${\operatorname{ran}}K$. This is in particular justified by the following equivalence relation on knots: Two knots $K_0,K_1\colon {\mathbb{S}}^1\to {\mathbb{S}}^3$ are equivalent, if there is a homeomorphism $h\colon {\mathbb{S}}^3\to {\mathbb{S}}^3$ with $$K_0=h\circ K_1.$$ In the literature this homeomorphism is often required to be orientation preserving in which case this equivalence relation coincides with the so-called *ambient isotopy*, but we do not require $h$ to be orientation preserving. A knot is *trivial* if it is equivalent to the standard embedding ${\mathbb{S}}^1{\hookrightarrow}{\mathbb{S}}^3$. A knot is *tame* if it is equivalent to a smooth or a piecewise linear knot. As usual in knot theory, we consider only tame knots. The following is a basic fact of knot theory: There are infinitely many non-equivalent knots. A not so basic fact is the following theorem: ([@GorLue])\[thm:GordonLuecke\] If two knots have homeomorphic complements, then they are equivalent. Let $K$ be a knot in ${\mathbb{R}}^3$. A *Seifert surface* $S\subset {\mathbb{R}}^3$ of $K$ is a compact orientable connected $2$-manifold with boundary whose interior lies in ${\mathbb{R}}^3\setminus K$ and whose boundary is exactly $K$. \[fact:ExistsSeifertS\] For every open ball $B$ containing $K$ there exists a Seifert surface $S\subset B$ of $K$. ([@Rolfsen 5.D])\[fact:LinkingSeifert\] Let $K$ be a knot and $w$ a closed curve in ${\mathbb{R}}^3\setminus K$. The following are equivalent: - $w\cap S\ne{\varnothing}$ for every Seifert surface $S$ of $K$, - $w$ represents a non-trivial element in $H_1({\mathbb{R}}^3\setminus K)$. \[fact:NonTrHomology\] For every knot $K$ we have $H_1({\mathbb{S}}^3\setminus K)\cong {\mathbb{Z}}$.
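Fact \[fact:LinkingSeifert\] ties intersections with Seifert surfaces to non-trivial classes in $H_1({\mathbb{R}}^3\setminus K)$; the classical quantitative invariant behind this is the Gauss linking number of two disjoint closed curves. As an illustrative aside (not used in the proofs of this paper), the Gauss linking integral can be approximated numerically by a midpoint Riemann sum over both parameters; the two parametrized circles below form a Hopf link, whose linking number is $\pm 1$. All names here are ours, for illustration only.

```python
import math

def gauss_linking_number(curve_a, curve_b, n=200):
    """Approximate the Gauss linking integral
    Lk = (1/4pi) * double integral of ((t1 x t2) . (p1 - p2)) / |p1 - p2|^3
    for two closed curves given as maps s in [0, 2*pi] -> (point, tangent),
    using an n x n midpoint rule."""
    h = 2 * math.pi / n
    total = 0.0
    for i in range(n):
        p1, t1 = curve_a((i + 0.5) * h)
        for j in range(n):
            p2, t2 = curve_b((j + 0.5) * h)
            d = (p1[0] - p2[0], p1[1] - p2[1], p1[2] - p2[2])
            r3 = (d[0] ** 2 + d[1] ** 2 + d[2] ** 2) ** 1.5
            # triple product (t1 x t2) . d
            cx = t1[1] * t2[2] - t1[2] * t2[1]
            cy = t1[2] * t2[0] - t1[0] * t2[2]
            cz = t1[0] * t2[1] - t1[1] * t2[0]
            total += (cx * d[0] + cy * d[1] + cz * d[2]) / r3
    return total * h * h / (4 * math.pi)

# Unit circle in the xy-plane and a unit circle in the xz-plane
# through its interior: a Hopf link.
circle_a = lambda s: ((math.cos(s), math.sin(s), 0.0),
                      (-math.sin(s), math.cos(s), 0.0))
circle_b = lambda t: ((1.0 + math.cos(t), 0.0, math.sin(t)),
                      (-math.sin(t), 0.0, math.cos(t)))
```

Since the integrand is smooth and the curves stay at distance $\ge 1$ from each other, the periodic midpoint rule converges quickly, and the computed value is close to an integer.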
Preserving Knot Types --------------------- The goal of this section is to prove Lemma \[lemma:FindPoints\] which says that if we carve out infinitely many knots from ${\mathbb{R}}^3$ in a certain way, then a self-homeomorphism of the left-over space will, in an approximate way, respect the knot-types of the carved knots. \[def:Properties\] Let $(B_n)_{n\in{\mathbb{N}}}$ be a sequence of closed balls in ${\mathbb{R}}^3$, $(K_n)_{n\in{\mathbb{N}}}$ a sequence of knots, $Q\subset {\mathbb{R}}^3$ and $P\subset {\mathbb{R}}^3$. Here we list some properties for these sets which we will later refer to. - (B1) All the balls are disjoint from each other and are contained in a bounded region, i.e. there is $r$ such that ${\bigcup}_{n\in{\mathbb{N}}}B_n\subset B(0,r)$. - (B2) If $x$ is a limit of a sequence $(x_i)_{i\in{\mathbb{N}}}$ such that for all $i\in{\mathbb{N}}$ the point $x_i$ is in the ball $B_{n_i}$ and for all $i<j$, $n_i\ne n_j$, then $x$ is not in any of the balls. $Q$ is the set of such points $x$. - (B3) $P\supset Q$, every connected component of $P$ contains a point in $Q$ and for all $n$ there is ${\varepsilon}>0$ such that $P\cap (B_n)_{{\varepsilon}}={\varnothing}$. (Recall the definition of ${\varepsilon}$-collar, Definition \[def:HausdorffMetric\].) - (B4) $K_n\subset {{\operatorname{int}}}B_n$. - (B5) $X={\mathbb{R}}^3\setminus (P\cup{\bigcup}_{n\in{\mathbb{N}}}K_n)$ is path metric (Definition \[def:DenselyPathConnected\]). \[lemma:FindPoints\] Suppose $(B_n)_{n\in{\mathbb{N}}}$, $(K_n)_{n\in{\mathbb{N}}}$, $Q$ and $P$ as well as $(B_n')_{n\in{\mathbb{N}}}$, $(K_n')_{n\in{\mathbb{N}}}$, $Q'$ and $P'$ satisfy the properties B1 – B5. Let $$X={\mathbb{S}}^3\setminus (P\cup{\bigcup}_{n\in{\mathbb{N}}}K_n)$$ and $$X'={\mathbb{S}}^3\setminus (P'\cup{\bigcup}_{n\in{\mathbb{N}}}K'_n).$$ Suppose further that $X$ and $X'$ are homeomorphic and $h$ is the homeomorphism.
Then there is a bijection $\rho\colon{\mathbb{N}}\to{\mathbb{N}}$ such that for all $n\in{\mathbb{N}}$ we have that $K_n$ and $K_{\rho(n)}'$ have the same knot-type and for some $z\in B_n\setminus K_n$ we have $h(z)\in B_{\rho(n)}'\setminus K_{\rho(n)}'$. Fix $n\in {\mathbb{N}}$. By the Jordan-Brouwer separation theorem (Theorem \[thm:JordanBrouwer\]) the complement of $h[\partial B_n]$ in ${\mathbb{S}}^3$ consists of two open connected components, say $Y_1$ and $Y_2$. In this case, however, we can prove even more, namely that $Y_1$ and $Y_2$ are homeomorphic to open balls and that there is a self-homeomorphism of ${\mathbb{S}}^3$ which takes $h[\partial B_n]$ to ${\mathbb{S}}^2$. Let ${\varepsilon}$ be small enough so that $(\partial B_n)_{\varepsilon}\cap B_k={\varnothing}$ for all $k\ne n$, $(\partial B_n)_{\varepsilon}\cap P={\varnothing}$ and $(\partial B_n)_{{\varepsilon}}\cap K_n={\varnothing}$. This is possible by B2, B3 and B4. Let $$f\colon {\mathbb{S}}^2\times [0,1]\to (\partial B_n)_{{\varepsilon}}$$ be a homeomorphism such that $f[{\mathbb{S}}^2\times \{\frac{1}{2}\}]=\partial B_n$. We can think of $h\circ f$ as an embedding of ${\mathbb{S}}^2\times [0,1]$ into ${\mathbb{S}}^3$. Now apply the generalized Schönflies theorem (Theorem \[thm:Brown\]) to $h\circ f$. Since $\partial B_n$ divides $X$ into two disjoint components, and likewise $h[\partial B_n]$ divides $X'$, the homeomorphism $h$ takes these components to one another. Assume without loss of generality that $h[{{\operatorname{int}}}B_n\setminus K_n]=Y_1\cap X'$. \[claim:ConnectedThing\] The space $Y_1\setminus X'$ is connected. For this we need a slight modification of the argument used to prove Lemma \[lemma:CauchyComponent\]. (Note that $B_n\setminus K_n$ is path metric.)
Suppose there were two components $A$ and $B$ of $Y_1\setminus X'$ and let $(x_1,y_1,x_2,y_2,\dots)$ be a sequence such that $(x_i)_{i\in{\mathbb{N}}}$ converges (in ${\mathbb{S}}^3$) to a point in $A$ and $(y_i)_{i\in{\mathbb{N}}}$ converges to a point in $B$. Now $(h^{-1}(x_1),h^{-1}(y_1),\dots)$ can only have accumulation points in $K_n$ (because the accumulation points cannot be in $X$). Pick Cauchy subsequences from both $(h^{-1}(x_i))_{i\in{\mathbb{N}}}$ and $(h^{-1}(y_i))_{i\in{\mathbb{N}}}$ and denote them by $(z_i)_{i\in{\mathbb{N}}}$ and $(w_i)_{i\in{\mathbb{N}}}$. Since $z=\lim_{i\to\infty}z_i$ and $w=\lim_{i\to\infty}w_i$ both lie in the knot, using the fact that $B_n\setminus K_n$ is path metric, it is possible to connect $z_i$ to $w_i$ by a curve ${\gamma}_i$ lying in $B_n\setminus K_n$ such that the sequence $({\gamma}_i)_{i\in{\mathbb{N}}}$ converges in $K({\mathbb{S}}^3)$ to a subset of $K_n$. Now pick (in $K({\mathbb{S}}^3)$) a converging subsequence $(\xi_{j})_{j\in{\mathbb{N}}}$ of $(h[{\gamma}_i])_{i\in{\mathbb{N}}}$. These are connected sets containing $h(z_i)$ and $h(w_i)$. Therefore the limit in $K({\mathbb{S}}^3)$ must intersect both $A$ and $B$ and since it is connected, it must contain a point $p$ in $Y_1\cap X'= h[{{\operatorname{int}}}B_n]$. By Fact \[fact:Hausdorffmetric\] there is a Cauchy sequence $(p_j)_{j\in{\mathbb{N}}}$ with $p_j\in \xi_{j}$ converging to $p$, but $(h^{-1}(p_j))_{j\in{\mathbb{N}}}$ does not have accumulation points in $B_n\setminus K_n$. This is a contradiction. Thus, $Y_1\setminus X'$ is a connected component of ${\mathbb{S}}^3\setminus X'$. Note that this component must be in the interior of $\bar Y_1$, so it cannot be a subset of $P$, by B2, B3 and B4. Thus, it is $K_m'$ for some $m$.
Since $h$ is a homeomorphism we have that $${{\operatorname{int}}}B_n\setminus K_n\approx Y_1\setminus K_m'.$$ Since $Y_1\approx {{\operatorname{int}}}B_n\approx {\mathbb{R}}^3$, we can conclude from Theorem \[thm:GordonLuecke\] that $K_n$ and $K_m'$ have the same knot-type. By symmetry arguments using the fact that $h$ is a homeomorphism, this establishes a map $n\mapsto m$ which is actually bijective, so denote this bijection by $\rho$. Let ${\gamma}\subset B_n\setminus K_n$ be a closed curve representing a non-trivial cycle in $H_1(B_n\setminus K_n)$ (such exists by Fact \[fact:NonTrHomology\]). Then $h[{\gamma}]$ will be a non-trivial cycle in $h[B_n]$. We would like to show that $h[{\gamma}]$ is also non-trivial in $({\mathbb{S}}^3\setminus Y_1)\cup h[B_n]$. But since we established a homeomorphism of ${\mathbb{S}}^3$ to itself taking $h[\partial B_n]$ to ${\mathbb{S}}^2$, we know that if $h[{\gamma}]$ bounds a disk $D\subset ({\mathbb{S}}^3\setminus Y_1)\cup h[B_n]$, this disk can be isotoped to a disk $D'\subset Y_1$ keeping $Y_1\cap D$ fixed. Let $S$ be a Seifert surface of $K_m'$ contained in $B_m'$ (see Fact \[fact:ExistsSeifertS\]). Then by Fact \[fact:LinkingSeifert\] there is a point $z'\in h[{\gamma}]\cap S$. Let $z=h^{-1}(z')$. This completes the proof, since $z'\in B_m'$. Preliminaries in Descriptive Set Theory {#sec:BkgrndDST} ======================================= A *Polish space* is a separable topological space which is homeomorphic to a complete metric space. The most common examples of Polish spaces are ${\mathbb{R}}$, ${\mathbb{C}}$ and ${\mathbb{N}}^{\mathbb{N}}$ in the Tychonov product topology. Less common examples include the space of all homeomorphisms ${{\operatorname{Hom}}}(X)$ of a compact Polish space $X$ in the $\sup$-metric (see Fact \[fact:HomPolish\]) and the space of compact subsets of a compact space $X$ in the Hausdorff metric denoted $K(X)$ (see Fact \[fact:Hausdorffmetric\]).
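To make the Hausdorff metric of Definition \[def:HausdorffMetric\] concrete, note that for *finite* subsets of a metric space the infima and suprema in the definition of $d_{K(X)}$ become minima and maxima, so the distance is directly computable. The following sketch (our own, purely illustrative) computes it for finite sets of reals with the Euclidean metric.

```python
def hausdorff_distance(c1, c2, d):
    """Hausdorff distance between two nonempty finite sets c1, c2,
    given a metric d: the larger of the two directed distances
    max_{x in c1} min_{y in c2} d(x, y) and the same with the
    roles of c1 and c2 swapped."""
    def directed(a, b):
        return max(min(d(x, y) for y in b) for x in a)
    return max(directed(c1, c2), directed(c2, c1))

# Finite "compacta" in the unit interval with the Euclidean metric:
euclid = lambda x, y: abs(x - y)
```

For instance, $\{0,1\}$ lies inside every ${\varepsilon}$-collar of $\{0,\frac{1}{2},1\}$, but not conversely: the point $\frac{1}{2}$ is at distance $\frac{1}{2}$ from $\{0,1\}$, so the Hausdorff distance between the two sets is $\frac{1}{2}$.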
([@Kechris])\[fact:Gdelta\] A subset of a Polish space is Polish in the subspace topology if and only if it is a $G_\delta$ subset. ([@Kechris Theorem 3.11 and Example 9B(8)]) \[fact:HomPolish\] For a compact Polish space $X$ equipped with the metric $d_X$, the space ${{\operatorname{Hom}}}(X)$ of homeomorphisms of $X$ in the $\sup$-metric, $\delta(h,g)=\sup\{d_X(h(x),g(x))\mid x\in X\}$ is a Polish space. \[def:Reduction\] Suppose $E$ and $E'$ are equivalence relations on Borel subsets $A$ and $A'$ of Polish spaces $X$ and $X'$ respectively. The equivalence relation $E$ is *Borel reducible to* $E'$, denoted $E{\leqslant}_B E'$, if there is a Borel map $f\colon A\to A'$ such that $$\forall x,y\in A\big((x,y)\in E\iff (f(x),f(y))\in E'\big).$$ We say that an equivalence relation $E$ is *universal* among a set $X$ of equivalence relations, if $E\in X$ and for all $E'\in X$ we have $E'{\leqslant}_B E$. A lot is known about the partial order ${\leqslant}_B$ on analytic equivalence relations which are defined on standard Borel spaces. A thorough treatment can be found in [@Gao]. Preface in [@Hjorth] gives a good glimpse of available applications. Here is an example of an equivalence relation which we will need: \[def:E1\] Let $(2^{\mathbb{N}})^{\mathbb{N}}$ be the space of sequences of elements of $2^{\mathbb{N}}$ (the Cantor space). The topology on both $2^{\mathbb{N}}$ and $(2^{\mathbb{N}})^{{\mathbb{N}}}$ is given by the Tychonov product topology. Let $E_1$ be the equivalence relation given by: $$((r_n)_{n\in {\mathbb{N}}},(s_n)_{n\in {\mathbb{N}}})\in E_1\iff \exists m\forall k>m(r_k=s_k).$$ Another wide class of equivalence relations is given by Polish group actions: Let $G$ be a Polish group acting in a Borel way on a Polish space $X$. Let $E^X_G$ be the equivalence relation where $x,y\in X$ are equivalent if and only if there exists $g\in G$ such that $y=gx$. This is called the *orbit equivalence relation* induced by this (Borel) action of a Polish group. 
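As a finite illustration of Definition \[def:E1\] (our own sketch, not part of the paper's machinery): if the first finitely many columns of two sequences are represented as equally long lists of tuples, the least witness $m$ for eventual agreement is simply the last index at which the columns differ. On genuinely infinite sequences this is of course only a check of an initial segment, not a decision procedure for $E_1$.

```python
def e1_witness(r, s):
    """Least m such that r[k] == s[k] for all k > m, for two equally
    long finite lists of columns (each column a tuple standing for an
    element of the Cantor space truncated to finitely many
    coordinates). Returns 0 when the lists agree everywhere, since
    the condition in the definition of E_1 only constrains k > m."""
    assert len(r) == len(s)
    diffs = [k for k in range(len(r)) if r[k] != s[k]]
    return max(diffs, default=0)
```

Any two sequences differing in only finitely many columns are $E_1$-equivalent, and the minimal witness is the index of the last disagreement.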
Many natural equivalence relations, in particular the isomorphism on countable structures (see the end of this section), can be viewed as orbit equivalence relations induced by Polish group actions. A proof of the following can be found in [@Gao Theorem 10.6.1]. \[thm:KecLou\](Kechris-Louveau [@KecLou]) Let $E$ be any orbit equivalence relation induced by a Borel action of a Polish group. Then $E_1\not{\leqslant}_B E$. \[def:StrangeSpaces\] Let $X$ be a compact Polish space. For a fixed closed (and hence compact) subset $F\subset X$, let $$K^{F}(X)=\{A\in K(X)\mid F\subset A\}.$$ (See Definition \[def:HausdorffMetric\] for the definition of $K(X)$.) Then $K^{F}(X)$ is a closed subspace of $K(X)$ and so Polish itself by Fact \[fact:Gdelta\]. Let $$K^{F}_*(X)=\{(X\setminus A)\cup F\mid A\in K^F(X)\}.$$ The Polish topology on $K^{F}_*(X)$ is induced by the bijection $K^F(X)\to K^F_*(X)$ given by $A\mapsto (X\setminus A)\cup F$. Let $F\subset X$ be closed. Then elements of $K^F_*(X)$ are of the form $U\cup F$ where $U$ is an open set disjoint from $F$. Therefore elements of this space are $\sigma$-compact $G_\delta$-subsets. Using Fact \[fact:Gdelta\] we obtain: \[fact:TheyreAllPolish\] For a fixed closed $F\subset X$ and $X$ compact $K^F_*(X)$ consists of $\sigma$-compact Polish spaces. \[def:Relations\] For a fixed closed $F\subset {\mathbb{S}}^3$, let $\approx^F$ be the homeomorphism relation on the space $K^F_*({\mathbb{S}}^3)$. The main result of this paper (Theorem \[thm:NonClass\]) can be now stated: for a fixed $x\in {\mathbb{S}}^3$, $E_1{\leqslant}_B\ \approx^{\{x\}}$. A countable model in a fixed vocabulary with universe ${\mathbb{N}}$ can be coded as an element of $2^{{\mathbb{N}}}$ in such a way that each $\eta\in 2^{\mathbb{N}}$ in fact represents some model. There are many nice ways to do this, see for example [@Gao]. Let $\cong$ be the equivalence relation of isomorphism. 
It is well known that given a vocabulary and any collection of countable models in this vocabulary whose set of codes is Borel, $\cong$ is reducible to $\cong_G$ where $\cong_G$ is the isomorphism of graphs, i.e. the vocabulary consists of one binary relation symbol and the models are infinite graphs with domain ${\mathbb{N}}$. This equivalence relation is induced by the action of the infinite symmetric group $S_\infty$ (which is Polish in the standard product topology). A corollary to Theorem \[thm:NonClass\] which follows from Theorem \[thm:KecLou\] is that $\approx^{\{x\}}$ is not reducible to $\cong_G$, although for the homeomorphism relation on compact spaces this was already proved by Hjorth [@Hjorth]. The original motivation of this research was the following, stronger, question: \[open:Main\] Is $\approx^{{\varnothing}}$ reducible to $\cong_G$? Note that $\approx^{{\varnothing}}$ is just the homeomorphism relation on open subsets of ${\mathbb{S}}^3$. See Section \[sec:Further\] for a discussion on this and other open questions. Parametrization {#ssec:Par} --------------- As was pointed out, the space $K^F_*(X)$ consists of ${\sigma}$-compact Polish spaces (Fact \[fact:TheyreAllPolish\]). However, not *all* ${\sigma}$-compact Polish spaces are found in $K^F_*(X)$. There are different ways to parametrize different classes of Polish spaces such as compact, locally compact, ${\sigma}$-compact, $n$-manifolds and so on. In this section we will present these different ways and show that it does not matter which one we choose, all of them being equivalent in a suitable sense. Additionally in this section we introduce many new notations for various homeomorphism relations. A helpful list of notations can be found in Section \[sec:Conclusion\]. In [@HjoKec1] Hjorth and Kechris give a simple parametrization of all Polish spaces.
Their parametrization, let us call it the *Hjorth-Kechris parametrization*, consists of two-fold sequences $\eta\in {\mathbb{R}}^{{\mathbb{N}}\times{\mathbb{N}}}$ which satisfy the requirements for a metric on ${\mathbb{N}}$. The set of such $\eta$ is easily seen to be Borel. Then the space $X(\eta)$ is obtained as a completion of this countable metric space. Another way to parametrize all Polish spaces is to view them as closed subsets of the Urysohn universal space $U$. Denote the space of all closed subsets of $U$ by $F(U)$. It can be equipped with a standard Borel structure which is inherited from $K(\bar U)$ where $\bar U$ is a compactification of $U$ (see [@Kechris Thm 12.6]). The Borel sets of $F(U)$ are generated by the sets of the form $$\{F\in F(U)\mid F\cap O\ne {\varnothing}\}$$ for some open $O\subset U$. This Borel structure is also generated by the *Fell topology* generated by the sets of the form $$\label{eq:StandBor} \{F\in F(U)\mid F\cap K={\varnothing}\land F\cap O_1\ne{\varnothing}\land\dots\land F\cap O_n\ne{\varnothing}\},$$ where $K$ varies over $K(U)$ and $O_i$ are open sets in $U$ [@Kechris Exercise 12.7]. Let us show that these parametrizations are essentially equivalent. The universality property of $U$ is that given any finite metric space $H$ and $x\in H$, every isometric embedding of $H\setminus \{x\}$ into $U$ extends to an isometric embedding of $H$ into $U$. Thus, given a countable metric space as defined by $\eta$ as above, it can be isometrically embedded into $U$. The closure of the image will then be homeomorphic (and even isometric) to $X(\eta)$. We want to show that there are Borel reductions reducing the homeomorphism of Polish spaces in one parametrization to the other. To show this, let us define an “intermediate” parametrization. Let $U^{\mathbb{N}}$ be the set of all countable sequences in $U$. Each such sequence $\xi$ corresponds to the Polish space $Y(\xi)$ obtained as its closure taken in $U$. 
Let $f_1\colon U^{\mathbb{N}}\to {\mathbb{R}}^{{\mathbb{N}}\times{\mathbb{N}}}$ be defined by $f_1(\xi)=\eta$ where $\eta(n,m)=d_U(\xi(n),\xi(m))$. Obviously $X(\eta)$ and $Y(\xi)$ are isometric and $f_1$ is continuous. Let $f_2\colon U^{\mathbb{N}}\to F(U)$ be the map which takes $\xi$ to the closure of $\{\xi(n)\mid n\in{\mathbb{N}}\}$ in $U$. There are Borel functions $g_1\colon {\operatorname{ran}}(f_1)\to U^{\mathbb{N}}$ and $g_2\colon F(U)\to U^{\mathbb{N}}$ such that $f_1\circ g_1={\operatorname{id}}$ and $f_2\circ g_2={\operatorname{id}}$. For $g_2$ we will use [@Sri Cor 5.4] which says that if $f\colon X\to Y$ is a Borel function between Polish spaces $X$ and $Y$ such that $f[V]$ is open for all open $V\subset X$, $f^{-1}[V]$ is $F_{\sigma}$ for all open $V\subset Y$ and $f^{-1}\{y\}$ is $G_\delta$ for all $y\in Y$, then there is $g\colon Y\to X$ such that $f\circ g={\operatorname{id}}_Y$. It is easy to see that the inverse image under $f_2$ of a set of the form \[eq:StandBor\] is $F_{\sigma}$ in $U^{\mathbb{N}}$, so in particular $f_2$ is Borel. Additionally, given a closed set $C\in F(U)$, the inverse image of the singleton $f_2^{-1}\{C\}$ is $G_\delta$. To see this, let $Q=\{q_n\mid n\in{\mathbb{N}}\}$ be a dense countable subset of $C$. Then $f_2^{-1}\{C\}$ is the intersection of $\{\xi\mid {\operatorname{ran}}(\xi)\subset C\}$ and the sets $$O(k,m)=\{\xi\mid \exists n\in{\mathbb{N}}(\xi(n)\in B(q_k,1/m))\}.$$ The former is closed and the latter are open, so the intersection is $G_{\delta}$. Let $V\subset U^{\mathbb{N}}$ be a basic open set. It is of the form $$O_0\times\cdots\times O_n\times U^{{\mathbb{N}}\setminus \{0,\dots,n\}}.\label{eq:openset}$$ To see that $f_2[V]$ is open in $F(U)$ note that it can be represented in the form \[eq:StandBor\] with $K={\varnothing}$ and $O_i$ as in \[eq:openset\]; thus $f_2$ is an open map. By [@Sri Cor 5.4] there is a Borel $g_2\colon F(U)\to U^{\mathbb{N}}$ such that $f_2\circ g_2={\operatorname{id}}$. Now consider $f_1$.
It is continuous and the inverse image of a singleton is closed. By [@Kechris Thm 35.46] (or again by [@Sri Cor 5.4]) there is $g_1\colon {\operatorname{ran}}(f_1)\to U^{\mathbb{N}}$ such that $f_1\circ g_1={\operatorname{id}}$. Note that ${\operatorname{ran}}(f_1)$ is merely the Borel subset of ${\mathbb{R}}^{{\mathbb{N}}\times{\mathbb{N}}}$ on which the operation $\eta\mapsto X(\eta)$ is well defined and produces a Polish space. Let $\approx_P$ be the equivalence relation on ${\operatorname{ran}}(f_1)\subset{\mathbb{R}}^{{\mathbb{N}}\times{\mathbb{N}}}$ where $\eta$ and $\eta'$ are equivalent if and only if $X(\eta)$ and $X(\eta')$ are homeomorphic, let $\approx_P'$ be the equivalence relation on $U^{\mathbb{N}}$ where two sequences $\xi$ and $\xi'$ are equivalent if and only if $Y(\xi)$ and $Y(\xi')$ are homeomorphic, and let $\approx''_P$ be the equivalence relation on $F(U)$ where two closed sets $C$ and $C'$ are equivalent if and only if they are homeomorphic. Then $f_1$, $g_1$, $f_2$ and $g_2$ witness that these three equivalence relations, $\approx_P$, $\approx'_P$ and $\approx''_P$, are all Borel reducible to each other. Hjorth and Kechris showed that the set of those $\eta$ for which $X(\eta)$ is compact and the set of those for which it is locally compact are both Borel subsets of ${\mathbb{R}}^{{\mathbb{N}}\times{\mathbb{N}}}$. Taking Borel inverse images under $f_1$ and $g_2$ we obtain the same conclusion for the other parametrizations. In [@HjoKec1] it is shown that the set of complex $n$-manifolds is Borel. One has to replace "biholomorphic" by "homeomorphic" in order to relax from complex manifolds to (conventional) manifolds. But as also proved in [@HjoKec1], in the case of locally compact spaces, the property of a function being a homeomorphism can be expressed in a Borel way. Thus, the set of those $\eta$ for which $X(\eta)$ is an $n$-manifold is Borel.
Using the functions $f_1$, $g_1$, $f_2$ and $g_2$ we finally obtain that the sets of $n$-manifolds in all the other parametrizations are also Borel. Denote by $\approx_P$ the homeomorphism relation on all Polish spaces and by $\approx_{loc}$, $\approx_c$ and $\approx_n$ the same relation restricted to the sets of locally compact spaces, compact Polish spaces and $n$-manifolds respectively. From what is shown above it follows that the chosen parametrization is irrelevant. Recall also the notation $\approx^{\{x\}}$ from Definition \[def:Relations\]. All of these equivalence relations are defined on Borel subsets of Polish spaces. The following easily follows from [@Kechris Exercise (27.9)]: The set of those $C\in F(U)$ which are ${\sigma}$-compact is ${{\Pi_1^1}}$-complete. Because, as is customary, we require in Definition \[def:Reduction\] that the domains of equivalence relations are Borel subsets of Polish spaces, we do not talk directly about the homeomorphism relation restricted to the ${\sigma}$-compact spaces. However, by removing that requirement and relaxing from Borel sets to *relatively Borel* sets one could also talk about the Borel reducibility of $\approx_{\sigma}$, the homeomorphism relation on ${\sigma}$-compact spaces, to other equivalence relations. From our results it would follow in particular that $E_1{\leqslant}_B\ \approx_{\sigma}$ and that $\approx_{\sigma}$ is not classifiable by any equivalence relation induced by a Borel group action. Yet another way to parametrize compact and locally compact spaces is to view them as subsets of the Hilbert cube $I^{\mathbb{N}}$ where $I$ is the unit interval. It is known that for every compact Polish space there is a homeomorphic copy as a subset of $I^{\mathbb{N}}$. For locally compact spaces we also obtain a parametrization: By [@Kechris Theorem 5.3], the one-point compactification of every locally compact Polish space is a compact Polish space.
Now fix a point $x\in I^{\mathbb{N}}$ and for each $\xi\in (I^{\mathbb{N}}\setminus\{x\})^{\mathbb{N}}$ let $Z(\xi)$ be the space $\overline{\{\xi(n)\mid n\in{\mathbb{N}}\}}\setminus \{x\}$. If $P$ is any locally compact space, then let $\bar P=P\cup \{\infty\}$ be its one-point compactification. There is an embedding of $\bar P$ into $I^{\mathbb{N}}$ and since $I^{\mathbb{N}}$ is homogeneous (see e.g. [@Fort]), there is an embedding such that $\infty$ is mapped to $x$. Thus, in the notation of Definition \[def:StrangeSpaces\], $K^{\{x\}}(I^{\mathbb{N}})$ is a space parametrizing all locally compact spaces. By using the fact that $I^{\mathbb{N}}$ can be also isometrically embedded into the Urysohn space $U$, one can use the methods from above to conclude that this parametrization is in our sense equivalent to all the other parametrizations (i.e. the homeomorphism relation is Borel bireducible with the corresponding relation in other parametrizations and the relevant subsets such as $n$-manifolds are Borel subsets). #### Summary. When proving a classification or a non-classification result for any of $\approx_P$, $\approx^{\{x\}}$, $\approx_{loc}$, $\approx_{c}$, $\approx_n$ it is irrelevant which of the parametrizations is used. Additionally the sets of locally compact and compact spaces, of $n$-manifolds and of the spaces in $K^{\{x\}}_*({\mathbb{S}}^3)$ are Borel no matter which parametrization is used. Non-classification of $\approx^{\{x\}}$ {#sec:NonClass} ======================================= This section is devoted to proving the main result: \[thm:NonClass\] The equivalence relation $E_1$ (Definition \[def:E1\]) is continuously reducible to the homeomorphism relation on $K^{\{x\}}_*({\mathbb{S}}^3)$ for any fixed $x\in{\mathbb{S}}^3$. As before, we parametrize ${\mathbb{S}}^3$ as ${\mathbb{R}}^3\cup \{\infty\}$. Obviously the choice of $x$ does not matter. In our case $x=(1,1,\frac{1}{2})\in {\mathbb{R}}^3$ as will be seen below.
For every $n\in{\mathbb{N}}$, $k\in{\mathbb{N}}$ and $l\in \{0,1\}$, let $B_{n,k,l}\subset {\mathbb{R}}^3$ be a closed ball with the center at $(1-2^{-n},1-2^{-k},l)$ and radius $2^{-4(n+1)(k+1)}$. Define $Q$, $P'$ and $P$ as follows: $$\begin{aligned} Q&=&\{(1-2^{-n},1,l)\mid n\in{\mathbb{N}},l\in\{0,1\}\} \cup\{(1,1-2^{-k},l)\mid k\in{\mathbb{N}},l\in\{0,1\}\},\\ P'&=&Q\cup {\bigcup}_{n\in{\mathbb{N}}}\{(1-2^{-n},1,t)\mid t\in [0,1]\},\\ P&=&P'\cup \{(1,1,t)\mid t\in [0,1]\}\setminus \{(1,1,\frac{1}{2})\}. \end{aligned}$$ Thus, $(B_{n,k,l})$, $Q$ and $P$ satisfy the assumptions B1, B2 and B3 from Definition \[def:Properties\]. Let $\{P_{n,k,l}\mid n\in{\mathbb{N}},k\in{\mathbb{N}},l\in\{0,1\}\}$ be a set of mutually different knot types indexed by the set ${\mathbb{N}}\times{\mathbb{N}}\times \{0,1\}$. Let $\bar r=(r_n)_{n\in{\mathbb{N}}}\in (2^{{\mathbb{N}}})^{{\mathbb{N}}}$ be a sequence of elements of $2^{\mathbb{N}}$. For each $(n,k,l)\in {\mathbb{N}}\times{\mathbb{N}}\times\{0,1\}$, let $K^{\bar r}_{n,k,l}$ be a (piecewise linear) knot inside the interior of $B_{n,k,l}$. The knot-type of $K^{\bar r}_{n,k,l}$ is determined as follows: - If $n$ is odd, then it is $P_{n,k,l}$, - If $n$ is even and $r_{n/2}(k)=0$, then it is $P_{n,k,l}$, - If $n$ is even and $r_{n/2}(k)=1$, then it is $P_{n,k,1-l}$. Let $R(\bar r)$ be ${\mathbb{S}}^3\setminus (P\cup{\bigcup}_{n,k,l}K^{\bar r}_{n,k,l}).$ Note that $R(\bar r)$ corresponds to $X$ in Definition \[def:Properties\] and properties B4 and B5 are now also satisfied (B5 follows easily from Lemma \[lem:HDPathMetric\] and the fact that ${\mathbb{S}}^3\setminus X$ is a countable union of piecewise linear curves and points). Notice also that $R(\bar r)\setminus \{(1,1,\frac{1}{2})\}$ is an open set, so $R(\bar r)\in K^{\{(1,1,\frac{1}{2})\}}_*({\mathbb{S}}^3)$.
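To illustrate the assignment of knot-types (a worked instance with a hypothetical $\bar r$, added here and not part of the original construction): suppose $r_1(5)=1$ and $r_1(6)=0$. Then for the even index $n=2$ (so $n/2=1$), the knots in the two balls over $k=5$ are swapped while those over $k=6$ are not:

$$K^{\bar r}_{2,5,l}\ \text{has knot-type}\ P_{2,5,1-l},\qquad K^{\bar r}_{2,6,l}\ \text{has knot-type}\ P_{2,6,l},\qquad l\in\{0,1\}.$$

For every odd $n$ the assignment is independent of $\bar r$, so the sequence $\bar r$ is recorded only in the even layers; this is what later allows $R(\bar r)$ and $R(\bar r')$ to be compared ball by ball.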
In the following three claims we will show that the map $F\colon \bar r\mapsto R(\bar r)$ is a continuous reduction: $\bar r$ and $\bar r'$ are $E_1$-equivalent if and only if $R(\bar r)$ and $R(\bar r')$ are homeomorphic. Suppose $\bar r$ and $\bar r'$ are $E_1$-equivalent. Then $R(\bar r)$ and $R(\bar r')$ are homeomorphic. For every $(n,k)\in {\mathbb{N}}\times{\mathbb{N}}$ let $C_{n,k}$ be the convex hull of $B_{n,k,0}\cup B_{n,k,1}$, a “capsule” containing $B_{n,k,0}$ and $B_{n,k,1}$ disjoint from all other balls and from $P$. Denote for simplicity $X=R(\bar r)$ and $X'=R(\bar r')$. Now $C_{n,k}\cap X$ and $C_{n,k}\cap X'$ are homeomorphic because both are complements of two knots of types $P_{n,k,0}$ and $P_{n,k,1}$. If $n$ is odd, or $n$ is even and $r_{n/2}(k)=r'_{n/2}(k)$, then the identity on $C_{n,k}$ witnesses this. Otherwise there is a homeomorphism $g_{n,k}$ of ${\mathbb{S}}^3$ fixing ${\mathbb{S}}^3\setminus C_{n,k}$ and taking $C_{n,k}\cap X$ to $C_{n,k}\cap X'$. For each $(n,k)$, if $n$ is even and $r_{n/2}(k)\ne r'_{n/2}(k)$, let $h_{n,k}=g_{n,k}$. Otherwise let $h_{n,k}$ be the identity on ${\mathbb{S}}^3$. Let $\pi\colon {\mathbb{N}}\to {\mathbb{N}}\times{\mathbb{N}}$ be a bijection and define a sequence of functions $(t_m)_{m\in{\mathbb{N}}}$ by induction as follows: $$\begin{aligned} t_0&=&h_{\pi(0)}\\ t_{m+1}&=&h_{\pi(m+1)}\circ t_m. \end{aligned}$$ We claim that for every $x\in R(\bar r)$ the limit $t(x)=\lim_{m\to\infty}t_m(x)$ exists and defines a homeomorphism $t$ from $R(\bar r)$ to $R(\bar r')$. Let us define the *support* of a homeomorphism $h$ to be the set ${\operatorname{sprt}}h=\{x\in{\operatorname{dom}}h\mid h(x)\ne x\}$. Now obviously for $m\ne m'$, the supports of $h_{\pi(m)}$ and $h_{\pi(m')}$ are disjoint, so the existence of the limit follows easily. In fact if $x\in C_{n,k}$ for some $n,k\in{\mathbb{N}}$, then $t(x)=h_{n,k}(x)$ and $t(x)=x$ otherwise. The same argument shows that $t$ is bijective. Let $(x,y,z)\in X$ and let us show that $t$ is continuous at $(x,y,z)$.
If $y\ne 1$ and $x\ne 1$, then $(x,y,z)$ has a neighborhood intersecting only finitely many $C_{n,k}$, so $t$ is determined by a finite composition of continuous functions in this neighborhood. If $y=1$ and $x\notin \{1\}\cup\{1-2^{-n}\mid n\in{\mathbb{N}}\}$, then the same holds again, and likewise in the reverse case: if $x=1$ and $y\notin \{1\}\cup\{1-2^{-n}\mid n\in{\mathbb{N}}\}$. If $y=1$ and $x\in \{1-2^{-n}\mid n\in{\mathbb{N}}\}$, then $(x,y,z)\in X$ only if $z\notin [0,1]$ (by the definition of $P$) and in this case $(x,y,z)$ has again an open neighborhood intersecting only finitely many $C_{n,k}$. If $x=1$ and $y\in \{1\}\cup\{1-2^{-n}\mid n\in{\mathbb{N}}\}$, then every neighborhood intersects infinitely many $C_{n,k}$. Let $n_*$ be such that for all $n>n_*$ and all $k$ we have $r_{n}(k)=r'_{n}(k)$, which exists because $\bar r$ and $\bar r'$ are $E_1$-equivalent, and let $U$ be a neighborhood of $(x,y,z)$ of radius $2^{-2n_*}$. Then $U$ intersects only those $C_{n,k}$ for which $n/2>n_*$, and so by the definition of $h_{n,k}$ it is the identity on $C_{n,k}$ for all such $n$. Thus, $t_{m}$ is the identity on $U$ for all $m$ and so $t$ is continuous. Now we should check that the inverse is also continuous. But with just a little care in the definition of $g_{n,k}$ we can assume that $g_{n,k}=g_{n,k}^{-1}$ and so $t=t^{-1}$. Thus by symmetry, $t^{-1}$ is also continuous. Suppose $\bar r$ and $\bar r'$ are not $E_1$-equivalent. Then $R(\bar r)$ and $R(\bar r')$ are not homeomorphic. Denote again $X=R(\bar r)$ and $X'=R(\bar r')$ and assume on the contrary that there is a homeomorphism $h\colon X\to X'$. Since $\bar r$ and $\bar r'$ are not $E_1$-equivalent, there is a sequence $(n_i,k_i)_{i\in{\mathbb{N}}}$ such that $(n_i)_{i\in{\mathbb{N}}}$ is increasing and unbounded in ${\mathbb{N}}$ and for all $i$, $r_{n_i}(k_i)\ne r'_{n_i}(k_i)$. Suppose first that $(k_i)_{i\in{\mathbb{N}}}$ is bounded in ${\mathbb{N}}$.
Then there exists a subsequence $(n_{i(j)},k_{i(j)})_{j\in{\mathbb{N}}}$ such that $k_{i(j)}=k_*$ for all $j$ for some fixed $k_*$. By the construction each knot-type appears exactly once in either of the sets $$\{K^{\bar r}_{n,k,l}\mid (n,k,l)\in{\mathbb{N}}\times{\mathbb{N}}\times\{0,1\}\}$$ and $$\{K^{\bar r'}_{n,k,l}\mid (n,k,l)\in{\mathbb{N}}\times{\mathbb{N}}\times\{0,1\}\}.$$ For each $m\in{\mathbb{N}}$ define the point $x_{m}$ as follows: Let $x_{m}$ be the point in $B_{m,k_*,0}\setminus K^{\bar r}_{m,k_*,0}$ given by Lemma \[lemma:FindPoints\]. We know that if $m$ is odd, then $K^{\bar r}_{m,k_*,0}$ has the same knot-type as $K^{\bar r'}_{m,k_*,0}$, and if $m/2=n_{i(j)}$ for some $j$, then $K^{\bar r}_{m,k_*,0}$ has the same knot-type as $K^{\bar r'}_{m,k_*,1}$. Thus there are infinitely many $m$ such that $h(x_m)\in B_{m,k_*,0}$ and infinitely many $m$ such that $h(x_m)\in B_{m,k_*,1}$. Thus, both points $(1,1-2^{-k_*},0)$ and $(1,1-2^{-k_*},1)$ are accumulation points of $(h(x_m))_{m\in{\mathbb{N}}}$. But only $(1,1-2^{-k_*},0)$ is an accumulation point of $(x_m)_{m\in{\mathbb{N}}}$, which is a contradiction with Lemma \[lemma:CauchyComponent\], because both $\{(1,1-2^{-k_*},0)\}$ and $\{(1,1-2^{-k_*},1)\}$ are connected components of both ${\mathbb{S}}^3\setminus X$ and ${\mathbb{S}}^3\setminus X'$. Suppose now that $(k_i)_{i\in {\mathbb{N}}}$ is unbounded in ${\mathbb{N}}$. Now pick a subsequence $(n_{i(j)},k_{i(j)})_{j\in{\mathbb{N}}}$ such that not only $n_{i(j)}$ is strictly increasing, but also $k_{i(j)}$ is. For all $j$, let $x_{2j}$ be the point in $B_{2n_{i(j)},k_{i(j)},0}$ given by Lemma \[lemma:FindPoints\]. By an argument similar to the one above we know that $h(x_{2j})\in B_{2n_{i(j)},k_{i(j)},1}$. Now again for all $j$, define the point $x_{2j+1}$ to be a point in $B_{2j+1,k_{i(j)},0}$ given again by Lemma \[lemma:FindPoints\]. By the construction we know that $h(x_{2j+1})$ is in $B_{2j+1,k_{i(j)},0}$ too.
Thus $(x_m)_{m\in{\mathbb{N}}}$ is now a Cauchy sequence converging to $(1,1,0)$ and $(h(x_m))_{m\in{\mathbb{N}}}$ is a sequence with two accumulation points $(1,1,0)$ and $(1,1,1)$. The first of these points belongs to the connected component $\{(1,1,t)\mid 0{\leqslant}t<\frac{1}{2}\}$ of both ${\mathbb{S}}^3\setminus X$ and ${\mathbb{S}}^3\setminus X'$ (by the definition of $P$) and the second belongs to the other connected component $\{(1,1,t)\mid \frac{1}{2}<t{\leqslant}1\}$. Thus, we obtain a contradiction with Lemma \[lemma:CauchyComponent\] again. $F$ is continuous. The inverse image of an ${\varepsilon}$-neighborhood of $R(\bar r)$ consists of all $\bar r'$ such that $R(\bar r')$ lies inside the ${\varepsilon}$-collar of $R(\bar r)$ and $R(\bar r)$ lies inside the ${\varepsilon}$-collar of $R(\bar r')$. It is evident that only finitely many of the knot-types are determined by the ${\varepsilon}$-collar, since the ${\varepsilon}$-collar of $Q$ (or $P$) “swallows” all but finitely many knots. By Theorem \[thm:KecLou\] we have: The homeomorphism relation $\approx^{\{x\}}$ is not Borel reducible to any orbit equivalence relation induced by a Borel action of a Polish group. There is a collection $D$ of Polish spaces homeomorphic to subsets of ${\mathbb{S}}^3$ of the form $V\cup \{x\}$ where $V$ is open and $x\in {\mathbb{R}}^3$ such that all elements of $D$ have the same fundamental group, but are not classifiable up to homeomorphism by any equivalence relation arising from a Borel action of a Polish group. The fundamental group of $R(\bar r)$ is the same for all $\bar r$ – the free product of the knot groups – as can be witnessed by the Seifert-van Kampen theorem by considering $R(\bar r)$ as the union of its open subsets $A_{n}={\mathbb{S}}^3\setminus (P\cup K_n\cup {\bigcup}_{k\ne n}B_k)$ (here we fall back to the easier enumeration of the balls by just one index used in Definition \[def:Properties\]).
Positive Classification Results {#sec:Other} =============================== The results in this section are either known or follow easily from what is known. We give some of the proofs for the sake of completeness. Since the main result of the paper deals with $\sigma$-compact spaces, we begin this section with the following relatively simple observation: The homeomorphism relation on any Borel collection of ${\sigma}$-compact spaces, such as $K^{\{x\}}_{*}({\mathbb{S}}^3)$, is ${{\Sigma_1^1}}$. Two ${\sigma}$-compact spaces $X$ and $X'$ are homeomorphic if and only if there exist sequences of compact sets $(C_n)_{n\in{\mathbb{N}}}$ and $(C_n')_{n\in{\mathbb{N}}}$ and homeomorphisms $h_n\colon C_n\to C_n'$ for each $n$ such that 1. $C_n\subset C_{n+1}$ and $C'_n\subset C'_{n+1}$ for all $n$, 2. $h_n\subset h_{n+1}$ for all $n$, 3. $X={\bigcup}_{n\in{\mathbb{N}}}C_n$ and $X'={\bigcup}_{n\in{\mathbb{N}}}C'_n$. To see this, suppose $h\colon X\to X'$ is a homeomorphism. Since $X$ is ${\sigma}$-compact, there is a sequence of compact sets $(C_n)_{n\in{\mathbb{N}}}$ which satisfies the first parts of (1) and (3). Let $C'_n=h[C_n]$. Then $(C_n')_{n\in{\mathbb{N}}}$ and $(h_n)_{n\in{\mathbb{N}}}$ where $h_n=h{\!\restriction\!}C_n$ satisfy all the rest. On the other hand suppose that such sequences $(C_n)_{n\in{\mathbb{N}}}$, $(C_n')_{n\in{\mathbb{N}}}$ and $(h_n)_{n\in{\mathbb{N}}}$ exist. Then obviously ${\bigcup}_{n\in{\mathbb{N}}}h_n$ is a homeomorphism $X\to X'$. Consider the space $F(U)$ defined in Section \[ssec:Par\] parametrizing all Polish spaces. Then, according to the above, the homeomorphism relation restricted to ${\sigma}$-compact spaces can be defined by saying that $C$ and $C'$ are equivalent if there exist sequences satisfying (1), (2) and (3). But these properties are all Borel properties. Also, the condition that $h_n$ is a homeomorphism is Borel, because the domains $C_n$ are compact. This shows that the equivalence relation is ${{\Sigma_1^1}}$.
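Schematically (a restatement for illustration, not verbatim from the original proof), the characterization just given exhibits the relation in the required form, namely an existential quantifier over a Polish space of sequences followed by a Borel condition:

$$C\approx C'\iff \exists (C_n)_{n},(C'_n)_{n},(h_n)_{n}\ \Big[\forall n\,\big(C_n\subseteq C_{n+1}\wedge C'_n\subseteq C'_{n+1}\wedge h_n\subseteq h_{n+1}\big)\wedge C={\bigcup}_{n}C_n\wedge C'={\bigcup}_{n}C'_n\Big],$$

and a projection of a Borel set along a Polish space is, by definition, ${{\Sigma_1^1}}$ (analytic).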
Now we turn to compact and locally compact spaces. Let $I^{{\mathbb{N}}}$ be the Hilbert cube. A set of *infinite deficiency* $A\subset I^{{\mathbb{N}}}$ is a closed set whose projection onto infinitely many interval-coordinates is a singleton. A closed set $A$ is a *$Z$-set*, if for every open $U\subset I^{\mathbb{N}}$ with all homotopy groups vanishing, all the homotopy groups of $U\setminus A$ vanish too. Anderson proved in [@Anderson] the following: ([@Anderson])\[Anderson\] 1. If a set is of an infinite deficiency, then it is a $Z$-set. 2. Each homeomorphism between two closed $Z$-subsets of $I^{\mathbb{N}}$ can be extended to a homeomorphism of $I^{\mathbb{N}}$ onto itself. From this it is not difficult to obtain the following theorem: \[thm:KechSol\] The homeomorphism relation on compact Polish spaces is continuously reducible to an orbit equivalence relation induced by a Polish group action. Let $h_1\colon I^{\mathbb{N}}\to I^{\mathbb{N}}$ be an embedding defined as follows: $$h_1((x_i)_{i\in{\mathbb{N}}})=(y_i)_{i\in{\mathbb{N}}}$$ where for all $n$, $y_{2n}=x_n$ and $y_{2n+1}=0$. Let $X={\operatorname{ran}}(h_1)$. Then $X$ is homeomorphic to $I^{\mathbb{N}}$. Let $\approx^*_{c}$ be the equivalence relation on $K(I^{\mathbb{N}})$ where two compact sets $C$ and $C'$ are equivalent if there exists a homeomorphism $h\colon I^{\mathbb{N}}\to I^{\mathbb{N}}$ taking $C$ onto $C'$. By Fact \[fact:HomPolish\] this equivalence relation is induced by a Polish group action, and it is standard to verify that this action is continuous. Let $\approx_{c}$ be the homeomorphism relation on $K(I^{\mathbb{N}})$ where $C$ and $C'$ are equivalent if they are homeomorphic. Thus, it is sufficient to find a reduction of this into $\approx^*_{c}$. For each $C\in K(I^{\mathbb{N}})$ let $F(C)=h_1[C]$. Now, if $C\approx_{c} C'$, then there is a homeomorphism between $F(C)$ and $F(C')$. 
But these are of infinite deficiency, so by Theorem \[Anderson\] there is a homeomorphism $h\colon I^{\mathbb{N}}\to I^{\mathbb{N}}$ taking $F(C)$ onto $F(C')$ and so $F(C)\approx^*_{c} F(C')$. If $C$ and $C'$ are not homeomorphic, then neither are $F(C)$ and $F(C')$, so no such homeomorphism can exist. Arguments along the same lines give us a stronger result: \[thm:locCom\] The homeomorphism relation on all locally compact Polish spaces is Borel reducible to an orbit equivalence relation induced by a continuous Polish group action. Let ${{\operatorname{Hom}}}^{\{x\}}(I^{\mathbb{N}})$ be the subgroup of ${{\operatorname{Hom}}}(I^{\mathbb{N}})$ which consists of those homeomorphisms $h$ such that $h(x)=x$. As a closed subgroup of ${{\operatorname{Hom}}}(I^{\mathbb{N}})$ it is also Polish and acts continuously on $K^{\{x\}}(I^{\mathbb{N}})$ (see Definition \[def:StrangeSpaces\]). As shown in Section \[ssec:Par\], the space of locally compact spaces can be parametrized as $K^{\{x\}}(I^{\mathbb{N}})$, each locally compact space being homeomorphic to $C\setminus \{x\}$ for some $C\in K^{\{x\}}(I^{\mathbb{N}})$. Applying the homeomorphism $h_1$ from the proof of Theorem \[thm:KechSol\], we may as well assume that this $C$ is of infinite deficiency. Now, if the two spaces $C\setminus \{x\}$ and $C'\setminus \{x\}$ are homeomorphic, the homeomorphism extends to their one-point compactifications and thus to $x$. Further, since $C$ and $C'$ are $Z$-sets, the homeomorphism extends to an element of ${{\operatorname{Hom}}}^{\{x\}}(I^{\mathbb{N}})$. On the other hand, it is obvious that if $C\setminus \{x\}$ and $C'\setminus \{x\}$ are not homeomorphic, then no such element of ${{\operatorname{Hom}}}^{\{x\}}(I^{\mathbb{N}})$ can exist. By combining these results with Theorem \[thm:NonClass\] we can conclude that “not locally compact at one point” is in a sense the strongest requirement for Polish spaces to be non-classifiable by such an orbit equivalence relation.
This is also reflected in the following Corollary: \[cor:locsig\] $\approx^{\{x\}}\ \not{\leqslant}_B\ \approx_{loc}$. We would like to apply Theorem \[thm:locCom\] to the homeomorphism relation on $n$-manifolds. It was discussed in Section \[ssec:Par\] that the set $M_n$ of $n$-manifolds is Borel as a subset of the space of all Polish spaces (in any of the parametrizations). As before, denote the homeomorphism relation on $M_n$ by $\approx_n$. Since manifolds are locally compact, the inclusion into the locally compact spaces is a reduction $\approx_n\ {\leqslant}_B\ \approx_{loc}$. By applying Theorem \[thm:locCom\] we get the following: $\approx_n$ is reducible to an orbit equivalence relation induced by a Polish group action. More is known in the case $n=2$. There is a classification of $\approx_2$ by algebraic structures using cohomology groups, due to Goldman [@Goldman]. It is probably routine to verify that this gives a Borel reduction into the isomorphism on countable structures, but for now I leave it open in the form of a conjecture: \[con:firstCon\] The classification in [@Goldman] is a Borel reduction into $\cong_G$, thus $\approx_2\ {\leqslant}_B\ \cong_G$. If the conjecture holds, we obtain a consequence which follows from Theorem \[thm:graphstomanifolds\] below: \[conj:2to3\] $\approx_2\ {\leqslant}_B\ \approx_3$. The converse to Conjecture \[con:firstCon\] is known to hold: $\cong_G\ {\leqslant}_B\ \approx_2$. In fact $\cong_G\ {\leqslant}_B \ \approx_n$ for all $n{\geqslant}2$. We sketch two proofs of this fact – one is based on results by Camerlo and Gao and extension theorems from topology – the other one, for $n=3$, is based on the methods used in this paper, just to illustrate how these methods can be used. \[thm:graphstomanifolds\] For all $n{\geqslant}2$ we have $\cong_G\ {\leqslant}_B\ \approx_n$. I would like to thank Clinton Conley who came up with this proof at `mathoverflow.net`.
It is sufficient to find a reduction into the homeomorphism relation on open subsets of ${\mathbb{S}}^n$ or ${\mathbb{R}}^n$. As shown in [@CamGao], $\cong_G$ is Borel reducible to the homeomorphism relation on $K(2^{\mathbb{N}})$. On one hand it is known that every homeomorphism of a totally disconnected compact subset of the plane extends to the whole plane ([@Moise Ch. 13, Thm 7]). On the other hand, by an application of Lemma \[lemma:CauchyComponent\], every homeomorphism of ${\mathbb{R}}^2\setminus C$ where $C$ is compact and totally disconnected, induces a homeomorphism of $C$. Thus, we can define a reduction from the homeomorphism relation on $K(2^{\mathbb{N}})$ to $\approx_2$: let $f\colon 2^{\mathbb{N}}\to {\mathbb{R}}^2$ be the standard embedding (the Cantor set) and with $C\subset 2^{\mathbb{N}}$ associate the open set ${\mathbb{R}}^2\setminus f[C]$. Of course these homeomorphisms extend to ${\mathbb{R}}^n$ for every $n>2$ as well, so in fact we have $\cong_G\ {\leqslant}_B\ \approx_n$ for all $n$. Again, we consider only open subsets of ${\mathbb{R}}^3$. It was proved by H. Friedman and L. Stanley in [@FrSt] that $\cong_G$ is reducible to the isomorphism relation on countable linear orders. Given a countable linear order $L$ with domain $\{x_n\mid n\in{\mathbb{N}}\}$, construct first a set of disjoint open intervals $U_n\subset [0,1]$ such that ${\bigcup}_{n\in{\mathbb{N}}}U_n$ is open and dense in $[0,1]$, $\sup U_n{\leqslant}\inf U_m$ if and only if $x_n<_Lx_m$ and if $x_m$ is an immediate successor of $x_n$ then $\sup U_n=\inf U_m$. Then, by considering $[0,1]$ as a subset of ${\mathbb{R}}^3$ in a canonical way, replace each open interval with a copy of the chain depicted on Figure \[fig:VadimsSuperLink\]. 
Let $C(L)$ be the closure of the union of all these chains in ${\mathbb{R}}^3$. By using methods similar to those above, one can show that two linear orders $L$ and $L'$ are isomorphic if and only if the complements of $C(L)$ and $C(L')$ are homeomorphic. ![The singular link.[]{data-label="fig:VadimsSuperLink"}](Singular_link_trefoil.mps){width="70.00000%"} The idea is that the knot-types fix the orientation within the chain, and the set $Q$ – in this case, the set $[0,1]\setminus {\bigcup}_{n\in{\mathbb{N}}}U_n$ – is totally disconnected and the homeomorphism of the complement extends to it. Moreover it extends to it in an order preserving way and also preserves end-points of the chains. On the other hand all these chains are similar to one another, so any isomorphism of $L$ can be realized as a homeomorphism of the complement of $C(L)$. Further Research {#sec:Further} ================ Let $O_n({\mathbb{S}}^n)$ be the space of all open subsets of ${\mathbb{S}}^n$ and let $\approx^o_n$ be the homeomorphism relation on this space. As before, $\approx_n$ is the homeomorphism relation on general non-compact $n$-manifolds without boundary. Also as before, let $\approx_{P}$ be the homeomorphism relation on all Polish spaces, $\approx_{loc}$ the one on locally compact spaces, $\approx_{c}$ the one on compact Polish spaces and $\approx^{\{x\}}$ as in Definition \[def:Relations\]. An open-ended research direction is to establish the places of these and other topological equivalence relations in the hierarchy of analytic equivalence relations. Positive and negative, new and old results have been reviewed in this paper. We already stated the main open question: Is $\approx_3\ {\leqslant}_B\ \cong_G$? If not, is it universal among orbit equivalence relations induced by a Polish group action? And further one can ask: \[q:whichnm\] For which $n$ and $m$ do we have $\approx_n\ {\leqslant}_B\ \approx_m$? The same for $\approx_n^o$ and $\approx_{m}^o$.
For which $n\in{\mathbb{N}}$ and known equivalence relations $E$ do we have $\approx_n\ {\leqslant}_B\ E$ or $E\ {\leqslant}_B\ \approx_n$, and the same for $\approx^o_n$? What about open subsets of the separable Hilbert space $\ell_2$? By the results of Henderson [@Hend] this covers all reasonable concepts of infinite-dimensional manifolds. \[q:S12\] What is the exact complexity of $\approx_P$? It is known that it is $\Sigma^1_2$ [@Gao] and ${{\Sigma_1^1}}$-hard [@FerLouRos]. What are the precise locations of $\approx_{loc}$ and $\approx_{c}$, and in particular are they bireducible? Hjorth has shown using turbulence theory [@Hjorth] that $\cong_G\ <_B\ \approx_c$ (notice the strict inequality) and by the results above, $\approx_c$ as well as $\approx_{loc}$ are below the universal equivalence relation induced by a Polish group action. The following question has been asked already in [@FaToTo2]: Is $\approx_c$ universal among all equivalence relations that are reducible to an orbit equivalence relation induced by a Polish group action? And of course the conjectures from the end of the previous section: \[conj:2to3dubl\] The classification in [@Goldman] is a Borel reduction into $\cong_G$, thus $\approx_2\ {\leqslant}_B\ \cong_G$. In particular $\approx_2\ {\leqslant}_B\ \approx_3$. Concerning Question \[q:whichnm\] and Conjectures \[conj:2to3\] and \[conj:2to3dubl\]: at first it might seem that $\approx_n\ {\leqslant}_B\ \approx_{n+1}$ should hold. However, the obvious candidate for a reduction, $M\mapsto M\times{\mathbb{R}}$, does not work: as shown in [@McMillan] there are open subsets $O$ of ${\mathbb{R}}^3$ which are not homeomorphic with ${\mathbb{R}}^3$, yet $O\times {\mathbb{R}}\approx {\mathbb{R}}^4$. There are no such manifolds in dimension $2$ [@Daverman], but it is still unclear to the author whether the general map $M\mapsto M\times{\mathbb{R}}$ from $2$- to $3$-manifolds provides a reduction between the homeomorphism relations.
Conclusion {#sec:Conclusion} ========== As a conclusion we provide a diagram of all the relevant equivalence relations and which relations are known between them. We omit some obvious arrows that follow e.g. from transitivity. In the diagram we use the following notation: $$\begin{aligned} \xymatrix{E\ar[r] & \ E'} & & E {\leqslant}_B E',\\ \xymatrix{E\ar@{.>}[r]|? & E'} & & \text{Not known whether or not } E {\leqslant}_B E',\\ \xymatrix{E\ar@{.>}[r]|C & E'} & & \text{Conjectured in this paper that } E {\leqslant}_B E',\\ \xymatrix{E\ar@{-|}[r] &\ E'} & & E \not{\leqslant}_B E',\\ E_1 && \text{Definition \ref{def:E1},}\\ E_0 && \text{Two sequences $\eta,\xi\in {\mathbb{N}}^{\mathbb{N}}$ are equivalent if $\exists n\forall (m>n)(\eta(m)=\xi(m))$,}\\ E_{Gr} && \text{The universal equivalence relation induced by a Borel Polish group action,}\\ E_{{{\Sigma_1^1}}} && \text{The universal ${{\Sigma_1^1}}$ equivalence relation,}\\ \approx^{\{x\}}&& \text{See Definition~\ref{def:Relations},}\\ \approx_{loc}&& \text{Homeomorphism on locally compact Polish spaces,}\\ \approx_c&& \text{Homeomorphism on compact Polish spaces,}\\ \approx_P&& \text{Homeomorphism on all Polish spaces,}\\ \approx_n && \text{Homeomorphism on $n$-manifolds,}\\ \cong_G && \text{Isomorphism on countable graphs.}\end{aligned}$$ $$\xymatrix{ & \approx_P \ar@<1ex>@{.>}[d]|?& \\ & E_{{{\Sigma_1^1}}}\ar[u] \ar@{.>}[dl]|? &\\ \approx^{\{x\}}\ar@<1ex>[ur] &\approx_{loc}\ar@{|->}[u]\ar@<1ex>@{.>}[d]|? \ar@{|->}[l]\ar@<1ex>[r]& E_{Gr}\ar@{|->}[ul]\ar@{.>}[l]|? \\ & \approx_c \ar[u] & \approx_n \ar@<1ex>@{.>}[dl]|?\ar[u]\ar[ul]\ar@<1ex>@{.>}[d]|?&\approx_m\ar@{<.>}[l]|?\\ E_1 \ar[uu]& \cong_G\ar@{|->}[u]\ar[ur]\ar@<-1ex>[r]& \approx_2\ar@{.>}[l]|C\ar@<.5ex>@{.>}[u]|C \ar@/^/[uul]\\ & E_0 \ar@{|->}[ul]\ar@{|->}[u]\ar@{|->}[ur]& }$$ [^1]: Affiliation: Kurt Gödel Research Center, University of Vienna, vadim.kulikov@iki.fi, phone: 00436804461218
--- abstract: 'This is a note on a recent paper of De Simoi - Kaloshin - Wei [@DKW]. We show that using their results combined with wave trace invariants of Guillemin-Melrose [@GM2] and the heat trace invariants of Zayed [@Za] for the Laplacian with Robin boundary conditions, one can extend the Dirichlet/Neumann spectral rigidity results of [@DKW] to the case of Robin boundary conditions. We will consider the same generic subset as in [@DKW] of smooth strictly convex ${{\mathbb Z}}_2$-symmetric planar domains sufficiently close to a circle, however we pair them with arbitrary ${{\mathbb Z}}_2$-symmetric smooth Robin functions on the boundary and of course allow deformations of Robin functions as well.' address: 'Department of Mathematics, UC Irvine, Irvine, CA 92617, USA' author: - Hamid Hezari title: Robin spectral Rigidity of strictly convex domains with a reflectional symmetry --- Introduction ============ In [@DKW], it is shown that for a generic class $\mathcal C$ of smooth strictly convex ${{\mathbb Z}}_2$-symmetric planar domains sufficiently close to a circle, endowed with Dirichlet or Neumann boundary conditions, one has Laplace spectral rigidity within $\mathcal C$. This means that given any $\Omega_0 \in \mathcal C$ and any $C^1$-deformation $\{\Omega_s\}_{s \in [0, 1]} $ of $\Omega_0$ in $\mathcal C$ with $\text{Spec}(\Delta_{\Omega_s}) = \text{Spec}(\Delta_{\Omega_0})$ for all $s \in [0, 1]$, one can find isometries $\{ \mathcal I_s\}_{s \in [0, 1]}$ of ${{\mathbb R}}^2$ such that $ \mathcal I_s( \Omega_0) = \Omega_s$. Here $\text{Spec}(\Delta_{\Omega})$ is the spectrum of the euclidean Laplacian $\Delta= \frac{\partial^2}{\partial x^2}+\frac{\partial^2}{\partial y^2}$ with Dirichlet (or Neumann) boundary condition on $\Omega$. 
In this paper we are concerned with the generalization of this problem for $\text{Spec}(\Delta_{\Omega, K})$ i.e., the spectrum of the euclidean Laplacian with Robin boundary condition $\partial_n u = Ku$ on $\partial \Omega$, for a given function $K \in C^\infty (\partial \Omega)$, where $\partial_n$ is the inward normal differentiation. In particular, by this notation $\Delta_{\Omega, 0}$ is the Laplacian on $\Omega$ with Neumann boundary condition on $\partial \Omega$. We show that: \[main\] Let $\delta >0$ and $\mathcal S_\delta$ be the class of smooth strictly convex ${{\mathbb Z}}_2$-symmetric[^1] planar domains that are $\delta$-close [^2] to a circle. Then there exists $\delta>0$ and a generic subset $\mathcal C$ of $\mathcal S_\delta$ such that given any $\Omega_0 \in \mathcal C$ and $K_0 \in C^\infty_{{{\mathbb Z}}_2} (\partial \Omega)$, and any $C^1$-deformation $\{\Omega_s\}_{s \in [0, 1]}$ of $\Omega_0$ in $\mathcal C$ and $C^0$-deformation $\{K_s\}_{s \in [0, 1]}$ of $K_0$ in $C_{{{\mathbb Z}}_2}^\infty (\partial \Omega)$ satisfying $\text{Spec}(\Delta_{\Omega_s, K_s}) = \text{Spec}(\Delta_{\Omega_0, K_0})$ for all $s \in [0, 1]$, one can find isometries $\{ \mathcal I_s\}_{s \in [0, 1]}$ of ${{\mathbb R}}^2$ such that $ \mathcal I_s( \Omega_0) = \Omega_s$ and $K_s ( \mathcal I_s (b))= K_0(b)$ for all $b \in \partial \Omega_0$. Here, $C^\infty_{{{\mathbb Z}}_2} (\partial \Omega)$ is the space of smooth functions on $\partial \Omega$ that are invariant under the imposed ${{\mathbb Z}}_2$-symmetry on $\Omega$. Also, in fact the generic class $\mathcal C$ consists of $\Omega \in \mathcal S_\delta$ that satisfy: - Up to the reflection symmetry, all distinct periodic billiard orbits in $\Omega$ have distinct lengths. - All (transversal) periodic billiard orbits in $\Omega$ are non-degenerate, i.e., the linearized Poincaré map associated to each orbit does not have $1$ as an eigenvalue. 
Using the results of [@PSgeneric] one sees that $\mathcal C$ is generic[^3] in $\mathcal S_\delta$. Moreover, for every $\Omega \in \mathcal C$, the spectrum of $\Delta$ with Dirichlet, Neumann, or Robin boundary conditions determines the length spectrum $\text{LS}(\Omega)$, which is the set of lengths of periodic billiard trajectories and their iterations, also including the length of the boundary and its positive integer multiples. Such determination is shown through the so-called *Poisson relation* proved by [@AnMe; @PS], which asserts that if the boundary of $\Omega$ is smooth then $$\text{SingSupp} \left ( \text{Tr} \; \cos{ t \sqrt{-\Delta^B_\Omega} } \right ) \subset \{0 \} \cup \pm \text{LS}(\Omega),$$ where $\Delta^B_\Omega$ is the Euclidean Laplacian with Dirichlet, Neumann, or Robin boundary conditions. One can see ([@PS; @PSgeneric]) that under the generic conditions (1) and (2) above, the containment in the Poisson relation is an equality, hence $\text{LS}(\Omega)$ is a spectral invariant. On the other hand, the length spectral rigidity result of [@DKW] shows that if $\Omega_s \in \mathcal S_\delta$, and if $\text{LS}(\Omega_s) =\text{LS}(\Omega_0)$, then there exist isometries $\{ \mathcal I_s\}_{s \in [0, 1]}$ of ${{\mathbb R}}^2$ such that $ \mathcal I_s( \Omega_0) = \Omega_s$. Hence Theorem \[main\] follows from the second part of the following theorem which concerns a fixed domain. To present the statement it is convenient to fix the axis of symmetry and also a marked point as in [@DKW]; we assume that each $\Omega \in \mathcal S_\delta$ is invariant under the reflection about the $x$-axis, that $\Omega \subset \{ (x, y); x \geq 0 \}$, and that $0=(0, 0) \in \partial \Omega$, which will be called the marked point. \[Robin\] Let $\mathcal C \subset \mathcal S_\delta$ be defined as above.
There exists $\delta>0$ such that - If $\Omega \in \mathcal C$, $K_1, K_2 \in C^\infty_{{{\mathbb Z}}_2}(\partial \Omega)$, $K_1(0)=K_2(0)$, and $\text{Spec}(\Delta_{\Omega, K_1}) = \text{Spec}(\Delta_{\Omega, K_2})$, then $K_1=K_2$. - If $\Omega \in \mathcal C$ and if there are three functions $K_1, K_2, K_3$ in $C^\infty_{{{\mathbb Z}}_2}(\partial \Omega)$ such that $$\text{Spec}(\Delta_{\Omega, K_1})= \text{Spec}(\Delta_{\Omega, K_2})= \text{Spec}(\Delta_{\Omega, K_3}),$$ then at least two of them are identical. One can see that if we add the assumption that $\Omega$ has two perpendicular reflectional symmetries, and $K_1$ and $K_2$ are preserved under both symmetries, then $K_1(0)=K_2(0)$, hence $K_1=K_2$ by part (a). As a result one gets the following extension of the inverse spectral result of Guillemin-Melrose [@GM1] obtained on ellipses. \[2symmetries\] Let $\mathcal S_{2, \delta}$ be the subclass of $\mathcal S_{\delta}$ consisting of domains with two reflectional symmetries whose axes are perpendicular to each other. Let $\mathcal C_2 \subset \mathcal S_{2, \delta}$ be the class of domains satisfying the generic properties (1) and (2) above. If $\Omega \in \mathcal C_2$, $K_1, K_2 \in C^\infty_{{{\mathbb Z}}_2 \times {{\mathbb Z}}_2}(\partial \Omega)$, and $\text{Spec}(\Delta_{\Omega, K_1}) = \text{Spec}(\Delta_{\Omega, K_2})$, then $K_1=K_2$. To prove Theorem \[Robin\] we will use some technical results from [@DKW]. To be able to do so we will need a sufficient number of spectral invariants, which we will obtain from a Poisson summation formula of Guillemin-Melrose [@GM2] and from heat trace formulas of Zayed [@Za] for the Robin Laplacian. In fact, to our knowledge these are the only Robin spectral invariants that are explicitly given in the literature. We will review these trace invariants in the next section. **Historical background.** There is a huge literature on inverse spectral problems.
Here we shall only mention the positive results concerning smooth euclidean domains and refer the reader to the surveys [@Z; @MeSurvey; @DaHe] for further historical background on positive results; for negative results (counterexamples) we refer to [@GWW] and the surveys [@Go] and [@GPS]. Kac [@Ka] proved that disks in ${{\mathbb R}}^n$ are spectrally determined among all other domains. Marvizi-Melrose [@MM] showed that there exists a two-parameter family of smooth strictly convex domains in ${{\mathbb R}}^2$ with the symmetries of the ellipse that are spectrally isolated in an open dense class of smooth strictly convex domains. Melrose [@Me] and Osgood-Phillips-Sarnak [@OPS] established compactness of isospectral sets of smooth planar domains. Colin de Verdière [@CdV] proved that real analytic planar domains with the symmetries of the ellipse are spectrally rigid (i.e. all isospectral deformations are trivial) among themselves. Zelditch [@Ze] proved that generic real analytic planar domains with one reflectional symmetry are spectrally distinguishable from one another. In [@HeZe], it was shown that real analytic domains in ${{\mathbb R}}^n$ with reflectional symmetries about all coordinate axes are spectrally determined among the same domains. In [@HeZe], ellipses were shown to be infinitesimally spectrally rigid among smooth domains with the symmetries of the ellipse (see [@PT1; @PT2; @PT3] for results in the context of completely integrable tables other than ellipses). Guillemin-Melrose [@GM1] showed Robin spectral rigidity of ellipses when the Robin functions preserve both reflectional symmetries. To our knowledge, Theorem \[main\] is the first inverse spectral result that allows both the boundary and the Robin function to vary. The new feature is that besides the underlying domain one can also determine additional data (namely $K$) from the trace invariants.
Trace invariants for the Robin Laplacian ======================================== Wave trace invariants --------------------- The following is a more precise form of a result of [@GM2]. \[GM\] Let $\Omega$ be a smooth strictly convex planar domain. Let $\gamma$ be a periodic billiard trajectory of length $T$, and $\{b_j \}_{j=1}^q$ be the points of reflections of $\gamma$ on $\partial \Omega$ and $\{{\varphi}_j\}_{j=1}^q$ in $(0, \frac \pi2]$ be the angles of reflections with respect to the tangent lines at $\{b_j \}_{j=1}^q$ . Assume no[^4] periodic orbit in $\Omega$ other than $\gamma$ has length $T$, and that the linearized Poincaré map $\mathcal P_\gamma$ associated to $\gamma$ does not have $1$ as an eigenvalue. Then for any $K \in C^\infty(\partial \Omega)$ we have the following singularity expansion near $t=T$ $$\small Tr \left ( \cos\big( t \sqrt{-\Delta_{\Omega, K}}\big) -\cos\big( t \sqrt{-\Delta_{\Omega, 0}}\big) \right ) \sim \text{Re} \; \left ( \log (t -T+ i 0^+ ) \sum_{k=0}^\infty c_k (t-T)^k \right ),$$ where $$c_0= C_\gamma \sum_{j=1}^q \frac{K(b_j)}{\sin({\varphi}_j)},$$ for a certain constant $C_\gamma$ that depends only on $\gamma$ and is independent of $K$. Here, $\log (t -T+ i 0^+ )$ is the distribution defined by $ \lim_{\varepsilon \to 0^+ }\log (t -T+ i \varepsilon)$. Heat trace invariants --------------------- The following heat trace formula is the main result of [@Za]. \[Za\] Let $\Omega$ be a smooth simply connected planar domain and $K \in C^\infty(\partial \Omega)$. Let $\sigma$ be an arc-length parametrization of $\partial \Omega$ in the counter-clockwise direction and $\kappa(\sigma)$ be the curvature of $\partial \Omega$ at $\sigma$. 
Then as $t \to 0^+$ $$\label{Zinvariants} \small Tr \left ( e^{t\Delta_{\Omega, K}} - e^{t\Delta_{\Omega, 0}} \right ) = \frac{1}{2 \pi} \int_{\partial \Omega} K(\sigma) d\sigma + \frac{\sqrt{t}}{ 8 \sqrt{\pi}} \int_{\partial \Omega} \left ( K(\sigma) \kappa(\sigma)+2 K^2(\sigma) \right )d\sigma+ O(t).$$ Proofs of Theorems \[main\], \[Robin\], and \[2symmetries\] =========================================================== As we discussed in the introduction, using the length spectral rigidity result of [@DKW], Theorem \[main\] reduces to part (b) of Theorem \[Robin\], hence we only need to prove Theorem \[Robin\]. Since $\Omega \in \mathcal C$, all periodic billiard orbits $\gamma$ satisfy the required conditions of Theorem \[GM\]. Therefore, if we let $K=K_1-K_2$ we have $$\label{K} \sum_{j=1}^q \frac{K(b_j(\gamma))}{\sin({\varphi}_j(\gamma))} =0,$$ for all periodic billiard orbits. This equation is very similar to (but not the same as) the equations below, which were studied in [@DKW]. Let us first review the ingredients we need from their article. The method of De Simoi-Kaloshin-Wei {#the-method-of-de-simoi-kaloshin-wei .unnumbered} ----------------------------------- We recall from the introduction that we have assumed that each $\Omega \in \mathcal S_\delta$ is invariant under the reflection about the $x$-axis, $\Omega \subset \{ (x, y); x \geq 0 \}$, and $0=(0, 0) \in \partial \Omega$, which is called the marked point. At the first step, the authors show that for any $q \geq 2$, there exists a ${{\mathbb Z}}_2$-symmetric $q$-periodic orbit of rotation number $\frac{1}{q}$ passing through the marked point and having maximal length among $q$-periodic orbits of rotation number $\frac{1}{q}$ passing through the marked point. They call such an orbit a *marked symmetric maximal $q$-periodic orbit* and denote its length by $\Delta_q$.
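As a toy illustration of this maximality (not part of the argument in [@DKW]): for the unit circle, the marked symmetric maximal $q$-periodic orbit is the regular $q$-gon through the marked point, of length $2q\sin(\pi/q)$. The following Python sketch (ours) checks numerically that fixing the marked vertex and perturbing the remaining reflection points can only decrease the length:

```python
import math, random

def perimeter(thetas):
    """Length of the closed inscribed polygon in the unit circle with
    vertices at angles thetas (rotation number 1/q); the chord between
    angles a and b has length 2|sin((b - a)/2)|."""
    q = len(thetas)
    return sum(2.0 * abs(math.sin((thetas[(k + 1) % q] - thetas[k]) / 2.0))
               for k in range(q))

q = 7
regular = [2.0 * math.pi * k / q for k in range(q)]  # marked point at angle 0
L_max = perimeter(regular)
assert abs(L_max - 2.0 * q * math.sin(math.pi / q)) < 1e-12

random.seed(0)
for _ in range(200):
    # perturb every vertex except the marked one (theta_0 = 0 stays fixed)
    pert = [0.0] + [t + random.uniform(-1e-2, 1e-2) for t in regular[1:]]
    assert perimeter(pert) <= L_max + 1e-12
```

The maximality here is just the strict concavity of $\sin$ on $[0,\pi]$: among inscribed polygons, equal angular increments maximize the perimeter.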
Next, suppose $\{\Omega_s \}$ is a $C^1$ deformation of $\Omega_0$ in $\mathcal S_\delta$ that fixes the length spectrum i.e., $\text{LS}(\Omega_s)=\text{LS}(\Omega_0)$ for all $s \in [-1, 1]$. For simplicity we represent $\Omega_s$ using a $C^1$ family $\rho_s \in C_{{{\mathbb Z}}_2}^\infty$ so that $$\partial \Omega_s = \{ b+ \rho_s(b) n(b) \, ; \, b \in \partial \Omega_0 \},$$ where $n(b)$ is the unit inward normal at $b$. Then by taking the variation of $\Delta_q(s)$, the length of a marked symmetric maximal $q$-periodic orbit in $\Omega_s$, they show that $$\label{variation} \ell_q( \dot \rho) =0, \qquad q\geq 2,$$ where $\dot \rho =\frac{d}{ds}|_{s=0} (\rho_s)$, and the functional $\ell_q$ is defined by $$\label{lq} \ell_q(u) =\sum_{j=1}^q u(b_j)\sin({\varphi}_j), \qquad q \geq 2,$$ where $\{b_j \}_{j=1}^q$ are the points of reflections of a marked symmetric maximal $q$-periodic orbit in $\Omega_0$ and $\{{\varphi}_j\}_{j=1}^q \subset (0, \frac \pi2]$ are the corresponding angles of reflections. Then the authors define the map $$\begin{cases} \mathcal T: C^\infty_{{{\mathbb Z}}_2} (\partial \Omega_0) \to \ell^\infty, \\ {\mathcal T}(u) = ( {\ell}_q(u))_{q=0}^\infty, \end{cases}$$ where $\ell_q$ for $q \geq 2$ are defined above and for $q=0$ and $q=1$ are defined by $${\ell}_0(u) = \int_{\partial \Omega_0} \frac{u(\sigma)}{ \kappa(\sigma)} d\sigma,$$ $${\ell}_1(u) = \mu(0)u(0).$$ We recall that $\sigma$ is the arc-length parametrization of $\partial \Omega_0$, identifying $\sigma=0$ with the marked point, and $\kappa(\sigma)$ is its curvature. Also, $\mu(0)$ is the value of the Lazutkin weight $\mu$ (to be defined later) at the marked point. Since the marked point is fixed through the deformation we know that $\ell_1( \dot \rho)=0$. In addition, by taking the variation of the length of the boundary $\partial \Omega_s$ the authors show that $\ell_0 ( \dot \rho)=0$. 
Hence, by the identities above, we have $$\mathcal T (\dot \rho) =0.$$ The main result of [@DKW] is to show that $\mathcal T$ is injective. To do this they take advantage of the Lazutkin coordinate in which they provide a precise description of periodic orbits creeping along the boundary. The Lazutkin coordinate is defined in terms of $\sigma$ by ($\sigma =0$ being the marked point) $$x( \sigma) = C_L \int_0^ \sigma \kappa(\sigma)^{-2/3} d \sigma, \qquad C_L= \left ( \int_{\partial \Omega_0} \kappa(\sigma)^{-2/3} d \sigma \right ) ^{-1}.$$ In this coordinate, the Lazutkin weight is defined by $$\mu(x)= \frac{\kappa(x)^{-1/3}}{2 C_L}.$$ We note that for every ${\varepsilon}>0$ there exists $\delta >0$ so that for every $\Omega \in \mathcal S_\delta^r$ we can make sure that $\|\mu(x) -\pi \|_{C^0} < {\varepsilon}$ and $\| \mu^{(m)} \|_{C^0} < {\varepsilon}$ for $1 \leq m \leq r$ for any fixed $r \in \mathbb N$. One can easily check that for the unit circle $\mu(x)=\pi$. The following lemma is a crucial ingredient in [@DKW], which is also a key for our proof. \[MainLemma\] Assume $r \geq 8$. For any ${\varepsilon}>0$ sufficiently small, there exists $\delta >0$ so that for any $\Omega \in \mathcal S_\delta$ there exist $C^{r-4}$ real-valued functions $\alpha(x)$ odd and $\beta(x)$ even so that for any maximal marked symmetric $q$-periodic orbit $\gamma$: $$x_q^k =\frac{k}{q}+ \frac{\alpha(k/q)}{q^2} + \varepsilon O(q^{-4}),$$ $${\varphi}_q^k =\frac{\mu(x_q^k)}{q} \left (1+ \frac{\beta(k/q)}{q^2} + \varepsilon O(q^{-4}) \right ),$$ where $\| \alpha \|_{C^{r-4}} =O_r({\varepsilon})$ and $\| \beta \|_{C^{r-4}} =O_r({\varepsilon})$. Here $\{x^k_q \}_{k=1}^q$ are the points of reflections of $\gamma$ on $\partial \Omega$ in the Lazutkin coordinate and $\{{\varphi}^k_q\}_{k=1}^q$ in $(0, \frac \pi2]$ are the corresponding angles of reflections.
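For the unit circle ($\kappa\equiv 1$) the lemma is exact with $\alpha=\beta=0$: $x_q^k=k/q$ and ${\varphi}_q^k=\pi/q=\mu(x_q^k)/q$. The claim $\mu\equiv\pi$ on the circle, and the $C^0$-closeness of $\mu$ to $\pi$ for near-circular domains, can be checked numerically from the defining formulas. The sketch below is our own illustration (an ellipse stands in for a $\delta$-close domain; `lazutkin_weight` is not from the paper):

```python
import math

def lazutkin_weight(a, b, N=4000):
    """Lazutkin weight mu at N sample points of the ellipse
    x = a cos t, y = b sin t, computed from the curvature via
    C_L = (int kappa^(-2/3) d sigma)^(-1) and mu = kappa^(-1/3) / (2 C_L)."""
    ts = [2.0 * math.pi * k / N for k in range(N)]
    speed = [math.sqrt(a*a*math.sin(t)**2 + b*b*math.cos(t)**2) for t in ts]
    kappa = [a * b / s**3 for s in speed]  # curvature of the ellipse
    # trapezoidal rule on the smooth periodic integrand kappa^(-2/3) |r'(t)|
    integral = sum(k**(-2.0/3.0) * s for k, s in zip(kappa, speed)) * (2.0*math.pi/N)
    C_L = 1.0 / integral
    return [k**(-1.0/3.0) / (2.0 * C_L) for k in kappa]

# unit circle: mu is identically pi
mu_circle = lazutkin_weight(1.0, 1.0)
assert max(abs(m - math.pi) for m in mu_circle) < 1e-9

# a nearly circular ellipse: mu stays uniformly close to pi
mu_ell = lazutkin_weight(1.0, 1.05)
assert max(abs(m - math.pi) for m in mu_ell) < 0.3
```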
To prove their theorem, the authors use this lemma to show that the operator $\widetilde {\mathcal T}$ defined by $\widetilde {\mathcal T} (u) = \mathcal T (\frac{u}{\mu})$ is injective, which is clearly equivalent to the injectivity of $\mathcal T$. We will follow the same approach except that in our case the factor $\sin({\varphi}_q^k)$ will be in the denominator. Another difference is that in our case we do not know the vanishing of the operators $\ell_0$ and $\ell_1$ at $K_1-K_2$. However, as we shall see, we can obtain the vanishing of $\ell_0$ by taking the limit of $\ell_q$ as $q \to \infty$. We will overcome the lack of $\ell_1$ by using a heat trace invariant of [@Za]. Proof of part (a) of Theorem \[Robin\] -------------------------------------- Throughout this section we assume that $K_1(0)=K_2(0)$ and we denote $K=K_1-K_2$. Hence by this notation $K(0)=0$. First we define the operator $ {\mathscr{T}}(u) = ({{\mathscr{L}}}_q(u))_{q=0}^\infty$, where $$q \geq 2: \qquad {{\mathscr{L}}}_q(u) = \sum_{k=0}^{q-1} u(x_q^k) \frac{\mu(x_q^k)}{q^2\sin {\varphi}_q^k},$$ $${\mathscr{L}}_0(u) = \int_0^1 u(x) dx,$$ $${\mathscr{L}}_1(u) = u(0).$$ Then it is clear from the wave trace identity above that $$q\geq 2: \qquad {\mathscr{L}}_q \left ( \frac{K}{\mu} \right )=0.$$ Also since by our assumption $K(0)=0$, we have $${\mathscr{L}}_1 \left ( \frac{K}{\mu} \right ) =0.$$ On the other hand using Lemma \[MainLemma\], the mean value theorem, the approximation $\sin x =x +O(x^3)$, and the Riemann sum definition of integrals, we have $$\lim_{q \to \infty} {\mathscr{L}}_q (u) = {\mathscr{L}}_0( u),$$ which implies that $ {\mathscr{L}}_0 \left ( \frac{K}{\mu} \right ) =0.$ As a result we have $${\mathscr{T}}\left ( \frac{K}{\mu} \right ) =0.$$ To conclude part (a) of Theorem \[Robin\] we need to show that ${\mathscr{T}}$ is injective. We first simplify $\mu(x^k_q)$ using the asymptotics of $x^k_q$.
By the mean value theorem and using the fact that $\mu(x)$ has a uniform positive lower bound, $$\label{mu} \mu(x_q^k) = \mu \left (\frac{k}{q}+ \frac{\alpha(k/q)}{q^2} \right ) \left ( 1 + \varepsilon O(q^{-4}) \right ).$$ Plugging this into the expression for ${\varphi}^k_q$ we get $${\varphi}_q^k = \frac{1}{q} \mu \left (\frac{k}{q}+ \frac{\alpha(k/q)}{q^2} \right )\left (1+ \frac{\beta(k/q)}{q^2} + \varepsilon O(q^{-4}) \right ).$$ Next, we take $\sin$ of both sides, use the mean value theorem again and the lower bound $\sin(\frac{\mu(x)}{q}) \geq \frac{C}{q}$ to get $$\sin {\varphi}_q^k = \sin \left ( \frac{1}{q} \mu \left (\frac{k}{q}+ \frac{\alpha(k/q)}{q^2} \right )\left (1+ \frac{\beta(k/q)}{q^2} \right ) \right ) \left ( 1+ \varepsilon O_r(q^{-4}) \right ).$$ Combining this with the expansion of $\mu(x_q^k)$ above, we obtain $$\frac{\mu(x^k_q)} { q^2 \sin {\varphi}_q^k} = \frac { \frac{1}{q} \mu \left (\frac{k}{q}+ \frac{\alpha(k/q)}{q^2} \right )} { q \sin \left ( \frac{1}{q} \mu \left (\frac{k}{q}+ \frac{\alpha(k/q)}{q^2} \right ) \left (1+ \frac{\beta(k/q)}{q^2} \right ) \right )} \left ( 1+ \varepsilon O_r(q^{-4}) \right ).$$ Since $$\left (1 + \frac{\beta(k/q)}{q^2} \right )^{-1} = 1 - \frac{\beta(k/q)}{q^2} + \varepsilon O_r(q^{-4}),$$ we can rewrite the above expression as $$\frac{\mu(x^k_q)} { q^2 \sin {\varphi}_q^k} = \frac { \frac{1}{q} \mu \left (\frac{k}{q}+ \frac{\alpha(k/q)}{q^2} \right ) \left (1 + \frac{\beta(k/q)}{q^2} \right ) } { q \sin \left ( \frac{1}{q} \mu \left (\frac{k}{q}+ \frac{\alpha(k/q)}{q^2} \right ) \left (1+ \frac{\beta(k/q)}{q^2} \right ) \right )} \left ( 1 - \frac{\beta(k/q)}{q^2} + \varepsilon O_r(q^{-4}) \right ).$$ Applying the mean value theorem to the principal term, using $ \| \alpha \|_{C^0}, \|\beta \|_{C^0} \leq C \varepsilon$, and $$\frac{x}{\sin x}= 1 + \frac{x^2}{6} + O(|x|^4), \qquad |x| < \frac{\pi+\varepsilon}{2},$$ we get $$\frac {\mu(x^k_q)} { q^2 \sin {\varphi}_q^k}= \frac{1}{q} \left (\frac { \frac{1}{q} \mu \left (\frac{k}{q} \right )}{ \sin
\left ( \frac{1}{q} \mu \left (\frac{k}{q} \right ) \right )} - \frac{\beta(k/q)}{q^2} + \varepsilon O_r(q^{-4}) \right ).$$ We shall write this expression in the form $$\label{Sq} \frac {\mu(x^k_q)} {q^2 \sin {\varphi}_q^k} = \frac{1}{q} \left (1- \frac{\beta(k/q)}{q^2} + \varepsilon O_r(q^{-4}) \right ) + \frac{1}{q} S_q(\frac{k}{q}),$$ where $$\label{S} S_q(x) =\frac { \frac{1}{q} \mu \left ( x \right )} { \sin \left ( \frac{1}{q} \mu \left (x \right ) \right )} -1.$$ In [@DKW], the function $S_q$ is given by $\frac { \sin \left ( \frac{1}{q} \mu \left (x \right ) \right )}{ \frac{1}{q} \mu \left ( x \right )} -1$. To show that the operator $ {\mathscr{T}}(u) = ({{\mathscr{L}}}_q(u))_{q=0}^\infty$ is injective we first choose a basis for $L^2_{{{\mathbb Z}}_2}( \partial \Omega)$. A convenient basis is $\{ \cos(2 \pi j x ) \}_{j=0}^\infty$ where $x$ is the Lazutkin parameter. We shall denote $e_j=\cos(2 \pi j x )$. Next, by some computation as performed in Lemma 5.2. of [@DKW], we get \[F\] For all $q \geq 2$ and all $ j \geq 1$, $${\mathscr{L}}_q(e_j) = (1- \frac{\beta_0}{q^2}) \delta_{q|j} - \frac{ \beta_j +2 \pi i j \alpha_j }{q^2}+ \mathcal S_q(e_j) + \mathcal R _q(e_j),$$ where the operator $\mathcal S_q$ is defined by $$\mathcal S _q (e_j) = \frac{1}{q} \sum_{k=0}^{q-1} S_q\left (\frac{k}{q} \right ) e_j \left (\frac{k}{q} + \frac{\alpha(\frac{k}{q})}{q^2} \right ),$$ and $\mathcal R_q$ is a remainder operator that is given by $$\mathcal R _q(e_j)= \frac{1}{q^2} \sum_{s \in {{\mathbb Z}}, s \neq 0, sq \neq j } \left (- \beta_{sq-j} + 2\pi ij \alpha_{sq-j} \right ) + {\varepsilon}O(\frac{j^2}{q^4} ).$$ The symbol $\delta_{q | j} =1$ if $q|j$ and it is zero otherwise. We now analyze $\mathcal S_q(e_j)$. First let us record some properties of the function $S_q$. 
Since in the interval $|x| < \frac{\pi+{\varepsilon}}{2}$ we have $$\left |\frac{x}{\sin x} -1 \right |= \left |\frac{x-\sin x}{\sin x} \right | \leq \frac{x^3/6}{(\frac{2 \cos {\varepsilon}}{\pi+{\varepsilon}}) x }= \frac{(\pi+{\varepsilon}) |x|^2}{12 \cos {\varepsilon}},$$ we obtain the following supnorm estimates on $S_q$: $$\label{supnormS} | S_q(x) | \leq \frac{ (\pi +{\varepsilon}) \mu^2(x)}{12q^2 \cos {\varepsilon}} \leq \frac{ (\pi +{\varepsilon})^3}{12q^2 \cos {\varepsilon}}.$$ Also since $| \mu'(x) | < {\varepsilon}$ we have $$\label{derivativesS}| S^{(r)}_q(x) | = {\varepsilon}O_r( \frac{1}{q^2} ), \quad r \geq 1.$$ Now we write $$\mathcal S_q (e_j) = \frac{1}{q} \sum_{k=0}^{q-1} \cos \left ( 2 \pi j \left ( \frac{k}{q} + \frac{\alpha(k/q)}{q^2} \right )\right ) S_q \left ( \frac{k}{q} \right).$$ By the mean value theorem and the supnorm estimate above we get $$\mathcal S_q (e_j) = \frac{1}{q} \sum_{k=0}^{q-1} \cos \left ( \frac{2 \pi j k}{q}\right ) S_q \left ( \frac{k}{q} \right) + {\varepsilon}O(\frac{j}{q^4} ).$$ We then plug in the Fourier series $$S_q(x) = \sum_{p \in \mathbb Z} \sigma_p(q) e^{2 \pi i p x},$$ of $S_q(x)$ and obtain $$\mathcal S_q (e_j) = \sum_{s \in {{\mathbb Z}}} \sigma_{sq-j}(q) + {\varepsilon}O(\frac{j^2}{q^4} ) = \sigma_0(q) \delta_{q|j} + \sigma_{j}(q) + \sum_{s \in {{\mathbb Z}}, s \neq 0, sq \neq j } \sigma_{sq-j}(q) + {\varepsilon}O(\frac{j^2}{q^4} ).$$ Therefore, Lemma \[F\] takes the following form: \[F2\] For all $q \geq 2$ and $j \geq 1$, one has $${{\mathscr{L}}}_q(e_j) = (1+ \sigma_0(q) - \frac{\beta_0}{q^2}) \delta_{q|j} + \frac{{\mathscr{L}}^{*}(e_j)}{q^2} + \mathcal R _q(e_j),$$ where $${\mathscr{L}}^*(e_j) = q^2\sigma_j(q) - \beta_j -2 \pi j \alpha_j ,$$ and the remainder operator is given by $$\label{R} \mathcal R _q(e_j)= \frac{1}{q^2} \sum_{s \in {{\mathbb Z}}, s \neq 0, sq \neq j } q^2 \sigma_{sq-j}(q) + \alpha_{sq-j}(q) - 2\pi ij \beta_{sq-j}(q) + {\varepsilon}O(\frac{j^2}{q^4} ).$$ However, this is not a convenient way of writing the
operator $ {\mathscr{L}}^* $, since $q^2\sigma_j(q)$, and hence ${\mathscr{L}}^*$, depends on $q$. To resolve this we note that $$\begin{aligned} \sigma_j (q) = \int_0^1 S_q(x) e^{2 \pi i j x} dx & = \int_0^1 \left ( \frac { \frac{1}{q} \mu \left ( x \right )} { \sin \left ( \frac{1}{q} \mu \left (x \right ) \right )} -1 \right ) e^{2 \pi i j x} dx \\ & = \int_0^1 \frac{ \mu^2(x)}{6q^2} e^{2 \pi i j x} dx + \int_0^1 R \left ( \frac{\mu(x)}{q} \right ) e^{2 \pi i j x}dx, \end{aligned}$$ where $R$ is defined by $ \frac{y}{\sin(y)} - 1 = \frac{y^2}{6} +R(y).$ Since $R(y)=O(y^4)$ and $R'(y)=O(y^3)$, by applying integration by parts once to the second integral and using the fact $| \mu'(x) | < {\varepsilon}$, we get $$\int_0^1 R \left ( \frac{\mu(x)}{q} \right ) e^{2 \pi i j x} dx = O( \frac{{\varepsilon}}{jq^4}).$$ Therefore we can absorb this term in the remainder term $\mathcal R_q(e_j)$. The conclusion is that we can write $$\label{Lq} {\mathscr{L}}_q(e_j) = \left (1+ \sigma_0(q) - \frac{\beta_0}{q^2} \right ) \delta_{q|j} + \frac{{\mathscr{L}}^{**}(e_j)}{q^2} + \mathcal R _q(e_j),$$ where $${\mathscr{L}}^{**}(e_j) = \widetilde {\sigma}_j - \beta_j -2 \pi j \alpha_j ,$$ with $$\widetilde {\sigma}_j = \int_0^1 \frac{ \mu^2(x)}{6} e^{2 \pi i j x} dx.$$ We now observe that by the properties of $S_q(x)$ established above we have $$\label{sigma0} |\sigma_0(q)| < \frac{ (\pi +{\varepsilon})^3}{12q^2 \cos {\varepsilon}},$$ $$| \sigma_p(q) | = {\varepsilon}O_r( \frac{1}{p^{r}q^2} ), \quad p \neq 0.$$ Note that the second equation follows from integration by parts. This shows that for $p \neq 0$, $q^2\sigma_p(q)$ behaves similarly to $\alpha_p$ and $\beta_p$. Now assume ${\mathscr{T}}(u)=0$. We recall that ${\mathscr{T}}= \{ {\mathscr{L}}_q \}_{q=0}^\infty$.
Then in particular $${\mathscr{L}}_0(u)= \int_0^1 u(x) dx=0.$$ Thus by and we can write $$\label{decomposition} {\mathscr{T}}(u) = {\mathscr{L}}^{**}(u) b_*+ {\mathscr{T}}_{*, R}(u),$$ where $(b_*)_q = \frac{1}{q^2}$ for $q \geq 2$ and $(b_*)_0=(b_*)_1=0$, and ${\mathscr{T}}_{*, R}$ is defined on the basis $\{ e_j \}_{j=0}^\infty$ by $$j \geq 1: \quad {\mathscr{T}}_{*, R}(e_j) = \left (1+ \sigma_0(q) - \frac{\beta_0}{q^2} \right ) \delta_{q|j} + \mathcal R _q(e_j),$$ $${\mathscr{T}}_{*, R}(e_0)=0.$$ For $ 3 < \gamma < 4$ we denote $$X_\gamma = \{ u \in L^1(\mathbb T): u(x)=u(-x), \; \hat u_0=0, \; \hat u_j =o(j^{- \gamma})\}, \qquad || u ||_{X_\gamma} = \max_{ j\geq 1} j^\gamma |\hat u_j| \; ,$$ $$\ell_\gamma= \{b \in \ell^\infty: \; b_0=0, \; b_q = o( q^{-\gamma}) \}, \qquad || b ||_{\ell_\gamma} = \max_{ j\geq 1} j^\gamma |b_j| \; .$$ Then one can easily see that ${\mathscr{T}}_{*, R}$ maps $X_\gamma$ to $\ell_\gamma$ (in fact it is invertible, as we shall see below). If ${\mathscr{T}}(u)=0$, then since for us $u = \dot \rho \in C^\infty( \mathbb T) \subset X_\gamma$, by , $${\mathscr{L}}^{**}(u) b_* \in \text{Range} ( {\mathscr{T}}_{*, R} ) \subset \ell_{\gamma}.$$ However, since $\gamma >3$, this is impossible unless $ {\mathscr{L}}^{**}(u)=0$, which by implies that $${{\mathscr{T}}}_{*, R} (u) =0.$$ We then show that ${\mathscr{T}}_{*, R}: X_\gamma \to \ell_\gamma$ is invertible by showing that $||{\mathscr{T}}_{*, R} - \text{Id} ||_\gamma < 1$, where $||. ||_\gamma$ is the operator norm from $X_\gamma$ to $\ell_\gamma$. Here the identity operator $\text{Id}:X_\gamma \to \ell_\gamma$ is defined by $\text{Id}(e_j)= e'_j$ for all $j \geq 1$, where as before $\{e_j\}_{j=1}^\infty = \{ \cos (2 \pi jx) \}_{j=1}^\infty$ and $\{e'_q\}_{q=1}^\infty$ is the standard basis for $\ell_\gamma$. 
For any operator $T: X_\gamma \to \ell_\gamma$ with the matrix representation $[T_{qj}]$, the operator norm is given by $$|| T ||_\gamma = \sup_{ q \geq 1} \sum_{ j\geq 1} q^\gamma j^{-\gamma} |T_{qj}|.$$ Let $\Delta: X_\gamma \to \ell_\gamma$ be the operator with the matrix $[ \delta_{q |j} ]$. Then $$({\mathscr{T}}_{*, R})_{1j} = (\Delta)_{1j}=1$$ $$q \geq 2: \qquad ({{\mathscr{T}}}_{*, R})_{qj} = \Delta_{qj} + \left (\sigma_0(q) - \frac{\beta_0}{q^2} \right ) \Delta_{qj} + \mathcal R_{qj}.$$ By a simple estimate (see [@DKW]) one gets $$|| \Delta-I ||_\gamma< \zeta(3) -1$$ In particular $||\Delta||_\gamma < \zeta(3)$. This shows that if $\Delta'$ is the operator defined by $$(\Delta')_{1j} =0$$ $$q \geq 2: \quad (\Delta')_{qj} =\left (\sigma_0(q) - \frac{\beta_0}{q^2} \right ) \Delta_{qj},$$ then by the estimate on $\sigma_0(q)$ $$|| \Delta'||_\gamma \leq \left ( \frac{(\pi +{\varepsilon})^3}{48 \cos {\varepsilon}} + \frac{C{\varepsilon}}{4} \right ) \zeta(3).$$ On the other hand by the computations of [@DKW] we know that $|| \mathcal R ||_\gamma \leq C {\varepsilon}$. Combining these estimates together we obtain $$\begin{aligned} ||{\mathscr{T}}_{*, R} - \text{Id} ||_\gamma & = ||\Delta+ \Delta' + \mathcal R - \text{Id} ||_\gamma \\ & \leq ||\Delta - \text{Id} ||_\gamma + || \Delta'||_\gamma + ||\mathcal R||_\gamma \\ & \leq \zeta(3)-1 + \left ( \frac{(\pi +{\varepsilon})^3}{48 \cos {\varepsilon}} + \frac{C{\varepsilon}}{4} \right ) \zeta(3) + C {\varepsilon}. \end{aligned}$$ At ${\varepsilon}=0$, the last expression is less than $\frac{979}{1000}$, hence by choosing ${\varepsilon}>0$ sufficiently small we can guarantee that it is less than one. Proof of part (b) of Theorem \[Robin\] -------------------------------------- If two of $K_1(0), K_2(0),K_3(0)$ agree then by part (a) of Theorem \[Robin\] we are done. 
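As a side check on the estimate $|| \Delta-I ||_\gamma< \zeta(3) -1$ above (this aside concerns the norm computation, not the Robin theorem): row $q$ of $\Delta - \text{Id}$ has nonzero entries exactly at $j = mq$ with $m \geq 2$, so its weighted row sum is $\sum_{m \geq 2} q^\gamma (mq)^{-\gamma} = \zeta(\gamma) - 1$, independent of $q$, and this lies below $\zeta(3)-1$ whenever $\gamma > 3$. A small numerical illustration of this (ours, not from [@DKW]; the truncation depths are arbitrary):

```python
def row_sum_delta_minus_id(q, gamma, jmax=200000):
    """Weighted row-q sum of Delta - Id: sum of q^gamma * j^(-gamma)
    over the columns j = 2q, 3q, ... up to jmax."""
    return sum(q ** gamma * (m * q) ** (-gamma) for m in range(2, jmax // q + 1))

def zeta_minus_one(s, nmax=200000):
    """zeta(s) - 1 by direct truncated summation."""
    return sum(n ** (-s) for n in range(2, nmax + 1))

gamma = 3.5
print(row_sum_delta_minus_id(1, gamma))   # ~ zeta(3.5) - 1, independent of q
print(row_sum_delta_minus_id(7, gamma))   # same value again
print(zeta_minus_one(3.0))                # the uniform upper bound zeta(3) - 1
```

The row sums collapse to $\zeta(\gamma)-1$ for every $q$, which is why the supremum over $q$ in the operator norm causes no loss.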
So assume $K_1(0), K_2(0),K_3(0)$ are three distinct numbers and define the function $f \in C^\infty_{{{\mathbb Z}}_2}(\partial \Omega)$ by $$f(b) = \frac {K_2(b) - K_3(b)} {K_2(0)- K_3(0)} \; , \quad b \in \partial \Omega .$$ Then we consider the functions $$K_{12} = (K_1 -K_2) - (K_1(0) - K_2(0)) f \;,$$ $$K_{13} = (K_1 -K_3) - (K_1(0) - K_3(0)) f \; .$$ Obviously $K_{12}, K_{13} \in C^\infty_{{{\mathbb Z}}_2}(\partial \Omega)$ and $K_{12}(0)=K_{13}(0)=0$ because $f(0)=1$. Since $\text{Spec}(\Delta_{\Omega, K_1})$ $= \text{Spec}(\Delta_{\Omega, K_2})= \text{Spec}(\Delta_{\Omega, K_3})$, by the notations and the discussion at the beginning of the proof of part (a) we have $${\mathscr{T}}\left ( \frac{K_{12}}{\mu} \right )={\mathscr{T}}\left ( \frac{K_{13}}{\mu} \right )=0.$$ However, we showed in the previous section that the operator ${\mathscr{T}}$ is injective. Thus $K_{12}=K_{13}=0$, or equivalently $$\label{K12} K_1 -K_2 = (K_1(0) - K_2(0)) f ,$$ $$\label{K13} K_1 -K_3 = (K_1(0) - K_3(0)) f .$$ On the other hand, by the heat trace formula , we know that $$\int_{\partial \Omega} K_1 \kappa +2 K_1^2 = \int_{\partial \Omega} K_2 \kappa +2 K_2^2 =\int_{\partial \Omega} K_3 \kappa +2 K_3^2.$$ These imply that $$\int_{\partial \Omega} (K_1 -K_2) \big ( \kappa + 2 (K_1+K_2) \big ) =0,$$ $$\int_{\partial \Omega} (K_1 -K_3)\big ( \kappa + 2 (K_1+K_3) \big ) =0.$$ By plugging and into these identities and dividing by $K_1(0) - K_2(0)$ and $K_1(0) - K_3(0)$ respectively we get $$\int_{\partial \Omega} \big ( \kappa + 2 (K_1+K_2) \big ) f =0,$$ $$\int_{\partial \Omega} \big ( \kappa + 2 (K_1+K_3) \big ) f =0.$$ By subtracting these two equations we obtain $$\int_{\partial \Omega} \big ( K_2 - K_3 \big ) f =0.$$ Recalling the definition of $f$ we get $\int_{\partial \Omega} f^2 =0.$ Since $f$ is continuous and real-valued, this implies that $f=0$. However, this contradicts $f(0)=1$. 
Proof of Theorem \[2symmetries\] -------------------------------- Let $0'$ be the point on the axis of symmetry other than the marked point $0$, and consider the $2$-orbit bouncing between $0$ and $0'$. Then by we have $ K_1(0) + K_1(0')= K_2(0) + K_2(0')$. On the other hand, since $K_1$ and $K_2$ are ${{\mathbb Z}}_2 \times {{\mathbb Z}}_2$-symmetric we have $K_1(0)=K_1(0')$ and $K_2(0)=K_2(0')$. These together show that $K_1(0)=K_2(0)$, and hence by part (a) of Theorem \[Robin\] we must have $K_1=K_2$. [HHHH]{} K. G. Andersson and R. B. Melrose, *The propagation of singularities along gliding rays*. Invent. Math. **41** (1977), no. 3, 197–232. Y. Colin de Verdière, *Sur les longueurs des trajectoires périodiques d’un billard*, Géométrie symplectique et de contact, Sem. Sud-Rhod. Géom. (1984), 122–139. K. Datchev and H. Hezari, *Inverse problems in spectral geometry*, Inverse problems and applications: inside out. II, 455–485, Math. Sci. Res. Inst. Publ., **60**, Cambridge Univ. Press, Cambridge, 2013. J. De Simoi, V. Kaloshin, Q. Wei, *Dynamical spectral rigidity among ${{\mathbb Z}}_2$-symmetric strictly convex domains close to a circle*, 2016, arXiv:1606.00230. C. S. Gordon. *Survey of isospectral manifolds*, Handbook of differential geometry 1, 747–778, 2000. C.S. Gordon, P. Perry, and D. Schueth, *Isospectral and isoscattering manifolds: a survey of techniques and examples*, In Geometry, spectral theory, groups, and dynamics. Contemp. Math. **387**, 157–179, 2005. C.S. Gordon, D. Webb, and S. Wolpert, *Isospectral plane domains and surfaces via Riemannian orbifolds*, Invent. Math. **110**:1, 1–22, 1992. V. Guillemin and R.B. Melrose, *An inverse spectral result for elliptical regions in $\Bbb{R}^2$*. Adv. Math. **32** (1979), 128–148. V. Guillemin and R. B. Melrose, *The Poisson summation formula for manifolds with boundary*. Adv. in Math. **32** (1979), no. 3, 204–232. H. Hezari and S. 
Zelditch, *Inverse spectral problem for analytic ${{{\mathbb Z}}}_2^n$-symmetric domains in ${{{\mathbb R}}}^n$*, Geom. Funct. Anal. **20** (2010), 160–191. H. Hezari and S. Zelditch, *$C^\infty$ spectral rigidity of the ellipse*, Anal. PDE **5** (2012), no. 5, 1105–1132. M. Kac, *Can one hear the shape of a drum?* Amer. Math. Monthly **73** 1966 no. 4, part II, 1–23. S. Marvizi and R. B. Melrose, *Spectral invariants of convex planar regions*. J. Differential Geom. **17**(1982), no. 3, 475–502. R. B. Melrose, *Isospectral sets of drumheads are compact in $C^{\infty}$*, unpublished, MSRI preprint, 1984. R. B. Melrose, *The inverse spectral problem for planar domains*. Instructional Workshop on Analysis and Geometry, Part I (Canberra, 1995), 137–160, Proc. Centre Math. Appl. Austral. Nat. Univ., **34**, Austral. Nat. Univ., Canberra, 1996. B. Osgood, R. Phillips, and P. Sarnak, *Compact isospectral sets of plane domains*, Proc. Nat. Acad. Sci. U.S.A. **85** (1988), no. 15, 5359–5361. V. M. Petkov and L. Stoyanov, *Periods of multiple reflecting geodesics and inverse spectral results.* Amer. J. Math. **109** (1987), no. 4, 619–668. V. M. Petkov and L. Stoyanov, *Geometry of reflecting rays and inverse spectral problems.* Pure and Applied Mathematics (New York). John Wiley $\&$ Sons, Ltd., Chichester, 1992. G. Popov and P. Topalov, *Liouville billiard tables and an inverse spectral result.* Ergodic Theory Dynam. Systems **23** (2003), no. 1, 225–248. G. Popov and P. Topalov, *Invariants of Isospectral Deformations and Spectral Rigidity.* Communications in Partial Differential Equations, **37**(3), 369–446, 2012. G. Popov and P. Topalov, *From K.A.M. Tori to Isospectral Invariants and Spectral Rigidity of Billiard Tables*, 2016, arXiv: 1602.03155. E. M. E. Zayed, *Short-time asymptotics of the heat kernel of the Laplacian of a bounded domain with Robin boundary conditions*. Houston J. Math. **24** (1998), no. 2, 377–385. S. Zelditch, *The inverse spectral problem. 
With an appendix by Johannes Sjöstrand and Maciej Zworski*. Surv. Differ. Geom., IX, Surveys in differential geometry. Vol. IX, 401–467, Int. Press, Somerville, MA, 2004. S. Zelditch, *Inverse spectral problem for analytic domains. II. $\Bbb Z_2$-symmetric domains*. Ann. of Math. (2) **170** (2009), no. 1, 205–269. [^1]: It means that there exists a reflection across a line in ${{\mathbb R}}^2$ that preserves the domain. [^2]: See definition 2.9 of [@DKW]. [^3]: Countable intersection of open dense subsets with respect to Whitney $C^\infty$ topology. See [@PSgeneric]. [^4]: Up to reflectional symmetries when the domain is ${{\mathbb Z}}_2$ or ${{\mathbb Z}}_2 \times {{\mathbb Z}}_2$ symmetric.
--- abstract: | The flavour degree of freedom in non-charged $q\bar q$ mesons is discussed in a generalisation of quantum electrodynamics including scalar coupling of gauge bosons, which leads to an understanding of the confinement potential in mesons. The known “flavour states” $\sigma$, $\omega$, $\Phi$, $J/\Psi$ and $\Upsilon$ can be described as fundamental states of the $q\bar q$ meson system, if a potential sum rule is applied, which is related to the structure of the vacuum. This indicates a quantisation in fundamental two-boson fields, connected directly to the flavour degree of freedom.\ In comparison with potential models, additional states are predicted, which explain the large continuum of scalar mesons in the low mass spectrum and new states recently detected in the charm region. PACS/ keywords: 11.15.-q, 12.40.-y, 14.40.Cs, 14.40.Gx/ Generalisation of quantum electrodynamics with massless elementary fermions (quantons, $q$) and scalar two-boson coupling. Confinement potential. Flavour degree of freedom of mesons described by fundamental $q^+q^-$ states. Masses of $\sigma$, $\omega$, $\Phi$, $J/\Psi$ and $\Upsilon$. --- version 30.3.2011 [Two-boson field quantisation and flavour in $q^+q^-$ mesons]{} H.P. Morsch[^1]\ Institute for Nuclear Studies, Pl-00681 Warsaw, Poland The flavour degree of freedom has been observed in hadrons, but also in charged and neutral leptons, see e.g. ref. [@PDG]. It is described in the Standard Model of particle physics by elementary fermions of different flavour quantum number. The fact that flavour is found in both strong and electroweak interactions could point to a supersymmetry between these fundamental forces, which should give rise to a variety of supersymmetric particles; in spite of extensive searches, however, such particles have not been observed. 
A very different interpretation of the flavour degree of freedom is obtained in an extension of quantum electrodynamics, in which the property of confinement of mesons as well as their masses are well described. This is based on a Lagrangian [@Moinc], which includes a scalar coupling of two vector bosons $$\label{eq:Lagra} {\cal L}=\frac{1}{\tilde m^{2}} \bar \Psi\ i\gamma_{\mu}D^{\mu}( D_{\nu}D^{\nu})\Psi\ -\ \frac{1}{4} F_{\mu\nu}F^{\mu\nu}~,$$ where $\Psi$ is a massless elementary fermion (quanton, q) field, $D_{\mu}=\partial_{\mu}-i{g_e} A_{\mu}$ the covariant derivative with vector boson field $A_{\mu}$ and coupling $g_e$, and $F^{\mu\nu}=\partial^{\mu}A^{\nu}-\partial^{\nu}A^{\mu}$ the field strength tensor. Since our Lagrangian is an extension of quantum electrodynamics, the coupling $g_e$ corresponds to a generalized charge coupling $g_e\geq e$ between charged quantons $q^+$ and $q^-$. Inserting the explicit form of $D^{\mu}$ and $D_{\nu}D^{\nu}$ in eq. (\[eq:Lagra\]) leads to the following two contributions with 2- and 3-boson ($2g$ and $3g$) coupling, if Lorentz gauge $\partial_\mu A^\mu =0$ and current conservation are applied $$\label{eq:L2} {\cal L}_{2g} =\frac{-ig_e^{2}}{\tilde m^{2}} \ \bar \Psi \gamma_{\mu} \partial^{\mu} (A_\nu A^\nu) \Psi $$ and $$\label{eq:L3} {\cal L}_{3g} =\frac{ -g_e^{3}\ }{\tilde m^{2}} \ \bar \Psi \gamma_{\mu} A^\mu (A_\nu A^\nu)\Psi \ .$$ Requiring that $A_\nu A^\nu$ corresponds to a background field, ${\cal L}_{2g}$ and ${\cal L}_{3g}$ give rise to two first-order $q^+q^-$ matrix elements $$\label{eq:P2} {\cal M}_{2g} =\frac{-\alpha_e^{2}}{\tilde m^{3}} \bar \psi(\tilde p') \gamma_\mu~\partial^\mu \partial^\rho w(q)g_{\mu\rho}~\gamma_\rho \psi(\tilde p)$$ and $$\label{eq:P3} {\cal M}_{3g} = \frac{-\alpha_e^{3}}{\tilde m} \bar \psi(\tilde p')\gamma_{\mu} ~w(q)\frac{g_{\mu\rho} f(p_i)}{p_i^{~2}} w(q)~ \gamma_{\rho} \psi(\tilde p)~,$$ in which $\alpha_e={g_e^2}/{4\pi}$ and $\psi(\tilde p)$ is a two-fermion wave 
function $\psi(\tilde p)=\frac{1}{\tilde m^3} \Psi(p)\Psi(k)$. The momenta have to respect the condition $\tilde p'-\tilde p=q+p_i=P$. Further, $w(q)$ is the two-boson momentum distribution and $f(p_i)$ the probability to combine $q$ and $P$ to $-p_i$. Since $f(p_i)\to 0$ for $\Delta p\to 0$ and $\infty$, there are no divergencies in ${\cal M}_3$. By contracting the $\gamma$ matrices by $\gamma_\mu\gamma_\rho+ \gamma_\rho\gamma_\mu=2g_{\mu\rho}$, reducing eqs. (\[eq:P2\]) and (\[eq:P3\]) to three dimensions, and making a transformation to r-space (details are given in ref. [@Moinc]), the following two potentials are obtained, which are given in spherical coordinates by $$V_{2g}(r)= \frac{\alpha_e^2\hbar^2 \tilde E^2}{\tilde m^3}\ \Big (\frac{d^2 w(r)}{dr^2} + \frac{2}{r}\frac{d w(r)}{dr}\Big )\frac{1}{\ w(r)}\ , \label{eq:vb}$$ where $\tilde E=<E^2>^{1/2}$ is the mean energy of scalar states of the system, and $$\label{eq:vqq} V^{(1^-)}_{3g}(r)= \frac{\hbar}{\tilde m} \int dr'\rho(r')\ V_{g}(r-r')~,$$ in which $w(r)$ and $\rho(r)$ are two-boson wave function and density (with dimension $fm^{-2}$), respectively, related by $\rho(r)=w^2(r)$. Further, $V_{g}(r-r')$ is an effective boson-exchange interaction $V_{g}(r)=-\alpha_e^3\hbar \frac{f(r)}{r}$. Since the quanton-antiquanton parity is negative, the potential (\[eq:vqq\]) corresponds to a binding potential for vector states (with $J^\pi =1^-$). For scalar states angular momentum L=1 is needed, requiring a p-wave density, which is related to $\rho(r)$ by $$\label{eq:spur} \rho^{ p}(\vec r)=\rho^{ p}(r)\ Y_{1,m}(\theta,\Phi) = (1+\beta R\ d/dr) \rho(r)\ Y_{1,m}(\theta,\Phi)\ .$$ $\beta R$ is determined from the condition $<r_{\rho^p}>\ =\int d\tau\ r \rho^p(r)=0$ (elimination of spurious motion). 
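The elimination-of-spurious-motion condition $<r_{\rho^p}>\ =\int d\tau\ r \rho^p(r)=0$ fixes $\beta R$ explicitly: reading the measure as $d\tau \propto r^2 dr$ (our assumption) and integrating by parts, $\int_0^\infty r^3 \rho'(r)\,dr = -3\int_0^\infty r^2 \rho(r)\,dr$, so $\beta R = \int r^3\rho\,dr \,/\, (3\int r^2\rho\,dr)$. A numerical sketch of this, using for $\rho(r)$ the exponential form adopted later in the text with illustrative slope parameters (the specific $b$, $\kappa$ values here are ours, not fitted):

```python
import math

def trap(f, a, b, n=20000):
    """Simple trapezoid rule on [a, b]."""
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b))
    for i in range(1, n):
        s += f(a + i * h)
    return s * h

b, kappa = 0.534, 1.46  # illustrative slope parameters (fm, dimensionless)
rho  = lambda r: math.exp(-2.0 * (r / b) ** kappa)                 # two-boson density shape
drho = lambda r: -2.0 * kappa / b * (r / b) ** (kappa - 1.0) * rho(r)  # its derivative

# beta*R from the zero-first-moment condition (via integration by parts)
betaR = trap(lambda r: r ** 3 * rho(r), 0.0, 10.0) / (
        3.0 * trap(lambda r: r ** 2 * rho(r), 0.0, 10.0))

# check: the first moment of rho^p = (1 + betaR d/dr) rho indeed vanishes
first_moment = trap(lambda r: r ** 3 * (rho(r) + betaR * drho(r)), 0.0, 10.0)
print(betaR, first_moment)
```

The residual first moment is at the level of the quadrature error, confirming that this choice of $\beta R$ removes the spurious motion.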
This yields a boson-exchange potential given by $$\label{eq:vqq0} V^{(0^+)}_{3g}(r)= \frac{\hbar}{\tilde m} \int d\vec r\ '\rho^{ p}(\vec r\ ')\ Y_{1,m}(\theta',\Phi')\ V_{g}(\vec r-\vec r') = 4\pi \frac{\hbar}{\tilde m} \int dr'\rho^{ p}(r')\ V_{g}(r-r')~.$$ We require a matching of $V^{(0^+)}_{3g}(r)$ and $\rho(r)$ $$\label{eq:con1} V^{(0^+)}_{3g}(r)=c_{pot} \ \rho(r)\ ,$$ where $c_{pot}$ is an arbitrary proportionality factor. Eq. (\[eq:con1\]) is a consequence of the fact that $V_{g}(r)$ should be finite for all values of $r$. This can be achieved by using a form $$\label{eq:veff} V_{g}(r)=f_{as}(r) (-\alpha_e^3 \hbar /r)\ e^{-cr}$$ with $f_{as}(r)=(e^{(ar)^{\sigma}}-1)/(e^{(ar)^{\sigma}}+1)$, where the parameters $c$, $a$ and $\sigma$ are determined from the condition (\[eq:con1\]). Self-consistent two-boson densities are obtained assuming a form $$\label{eq:wf} \rho(r)=\rho_o\ [exp\{-(r/b)^{\kappa}\} ]^2\ \ with\ \ \kappa \simeq 1.5\ .$$ The matching condition (\[eq:con1\]) is rather strict (see fig. 1) and determines quite well the parameter $\kappa$ of $\rho(r)$: using a pure exponential form ($\kappa$=1) a very steep rise of $\rho(r)$ is obtained for $r\to 0$, but an almost negligible and flat boson-exchange potential, which cannot satisfy eq. (\[eq:con1\]). Also for a Gaussian form ($\kappa$=2) no consistency is obtained: the deduced potential falls off more rapidly towards larger radii than the density $\rho(r)$. The agreement between $<r^2_{\rho}>$ and $<r^2_{V_{3g}}(r)>$ cannot be enforced by using a different parametrisation for $f_{as}(r)$. Only with a density with $\kappa\simeq 1.5$ is a satisfactory solution obtained. For our solution (\[eq:wf\]) it is important to verify that $V_{g}(r)$ is quite similar in shape to $\rho^{p}(r)$ required from the modification of the boson-exchange propagator. This is indeed the case, as shown in the upper part of fig. 2, which displays solution 4 in the tables. 
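Because $f_{as}(r)=(e^{(ar)^{\sigma}}-1)/(e^{(ar)^{\sigma}}+1)$ is algebraically $\tanh((ar)^{\sigma}/2)$, it vanishes at the origin and saturates to 1 at large $r$, which is what keeps $V_g(r)$ finite everywhere. A quick sketch of these limits (the parameter values $a$, $\sigma$ are borrowed from solution 4 of table 2 purely for illustration):

```python
import math

def f_as(r, a=50.7, sigma=0.83):
    """Low-radius cut-off f_as(r) = (e^{(ar)^sigma} - 1) / (e^{(ar)^sigma} + 1)."""
    x = (a * r) ** sigma
    return (math.exp(x) - 1.0) / (math.exp(x) + 1.0)

print(f_as(0.0))   # vanishes at the origin
print(f_as(1.0))   # saturates to 1 at large r
# identical to tanh((a r)^sigma / 2):
print(f_as(0.05) - math.tanh((50.7 * 0.05) ** 0.83 / 2.0))
```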
Further, the low-radius cut-off function $f_{as}(r)$ is shown by the dashed line, which falls off to zero for $r\to 0$. A transformation to momentum ($Q$) space leads to $f_{as}(Q)\to 0$ for $Q\to \infty$. Interestingly, this decrease of $f_{as}(Q)$ for large momenta is quite similar to the behaviour of quantum chromodynamics, a slowly falling coupling strength $\alpha(Q)$ related to the property of asymptotic freedom [@GWP]. In the two lower parts of fig. 2 the resulting two-boson density and the boson-exchange potential (\[eq:vqq0\]) are shown in r- and Q-space[^2] for solution 4 in the tables, both in very good agreement. In the Fourier transformation to Q-space the process $gg\rightarrow q\bar q$ is elastic and consequently the created $q\bar q$-pair has no mass. However, if we take a finite mass of the created fermions of 1.4 GeV (such a mass has been assumed for a comparable system in potential models [@qq]), a boson-exchange potential is obtained (given by the dashed line in the lower part of fig. 2), which cannot be consistent with the density $\rho(r)$. Thus, our solutions require [**massless**]{} fermions. This allows us to relate the generated system to the absolute vacuum of fluctuating boson fields with energy $E_{vac}=0$. The mass of the system is given by $$\label{eq:mass} M^m_{n}=-E_{3g}^{~m}+E_{2g}^{~n} \ ,$$ where $E_{3g}^{~m}$ and $E_{2g}^{~n}$ are binding energies in $V_{3g}(r)$ and $V_{2g}(r)$, respectively, calculated by using a mass parameter $\tilde m=1/4~\tilde M =1/4 <Q^2_\rho>^{1/2}$, where $\tilde M$ is the average mass generated, and $\tilde E$ given in table 2. The coupling constant $\alpha_e$ is determined by the matching of the binding energies to the mass, see eq. (\[eq:mass\]). The boson-exchange potential is attractive and has negative binding energies, with the strongest bound state having the largest mass and excited states having smaller masses. 
These energies do not increase the mean energy $E_{vac}$ of the vacuum: writing the energy-momentum relation $E_{vac}=0=\sqrt{<Q^2_\rho>}+E_{3g}$, this relation is conserved, if $E_{3g}$ is compensated by the root mean square momentum of the deduced density $<Q^2_\rho>^{1/2}$. Differently, the binding energy in the self-induced two-boson potential (\[eq:vb\]), which does not appear in normal gauge theory applications (see ref. [@PDG]), is positive and corresponds to a real mass generation by increasing the total energy by $E_{2g}$. Therefore, this potential allows a creation of stationary $(q\bar q)^n$ states out of the absolute vacuum of fluctuating boson fields, if two rapidly fluctuating boson fields overlap and cause a quantum fluctuation with energy $E_{2g}$. The two-boson potential $V_{2g}(r)$ (with density parameters from solution 4 in the tables) is compared to the confinement potential from lattice gauge calculations [@Bali] in the upper part of fig. 3, which shows good agreement. The corresponding potentials obtained from the other solutions are very similar, if a small decrease of $\kappa$ is assumed for solutions of stronger binding (as given in table 2). Solution (meson) $M^1_1$ $M^1_{2}$ $M^1_{3}$ $M_1^{exp}$ $M_{2}^{exp}$ $M_{3}^{exp}$ ------------ ------------ --------- ----------- ----------- -------------- --------------- --------------- 1   scalar $\sigma$ 0.55 1.28 1.88 0.60$\pm$0.2 1.35$\pm$0.2 2   scalar $ f_o $ 1.38 2.25 2.9 1.35$\pm$0.2     vector $\omega$ 0.78 1.65 2.3 0.78 1.65$\pm$0.02 3   scalar $ f_o $ 2.68 3.34 3.9 —     vector $\Phi$ 1.02 1.68 2.23 1.02 1.68$\pm$0.02 4   scalar not seen 11.7 12.3 12.8 —     vector $J/\Psi$ 3.10 3.69 4.16 3.097 3.686 (4.160) 5   scalar not seen 40.5 41.0 41.4 —     vector $\Upsilon$ 9.46 9.98 10.38 9.46 10.023 10.355 : Deduced masses (in GeV) of scalar and vector $q^+q^-$ states in comparison with known $0^{++}$ and $1^{--}$ mesons [@PDG] (for $V_{3g}(r)$ only the lowest bound state is given). 
We have seen in fig. 1 that the functional shape of the two-boson density (\[eq:wf\]) (given by the parameter $\kappa$) is quite well determined. By contrast, we find that the slope parameter $b$ (which governs the radial extent $<r^2_\rho>$) is not constrained by the different conditions applied. This allows a continuum of solutions with different radii. However, on the fundamental level of overlapping boson fields quantum effects are inherent and should give rise to discrete solutions. Such a (new) quantisation can only arise from an additional constraint originating from the structure of the vacuum. This may be formulated in the form of a vacuum potential sum rule. Sol. $\kappa$ $b$ $\alpha_e$ $c$ $a$ $\sigma$ $<Q^2_{\rho}>^{1/2}$ $\tilde E$ $<r^2_{\rho}>$ $<r^2>_{exp}$ ------ ---------- ------- ------------ ------ ------- ---------- ---------------------- ------------ ---------------- --------------- 1  1.50 0.77 0.26 2.4  6.4 0.86 0.59 0.9 0.65  –  2  1.46 0.534 0.385 3.3 12.0 0.86 0.81 1.0 0.33  0.33  3  1.44 0.327 0.44 5.35 16.4 0.85 1.44 1.3 0.13  0.21  4  1.40 0.125 0.58 13.6 50.7 0.83 3.50 1.6 0.02  0.04  5  1.37 0.042 0.635 46.0 132.6 0.82 10.46 2.3 0.002   –  : Parameters and deduced values of $<Q^2_{\rho}>^{1/2}$, $\tilde E$ in GeV and $<r^2>$ in $fm^2$ for the different solutions. $b$ is given in $fm$, $c$ and $a$ in $fm^{-1}$. The values of $<r^2>_{exp}$ are taken from ref. [@Mo]. We assume the existence of a global boson-exchange interaction in the vacuum $V_{vac}(r)$, which has a radial dependence similar to the boson-exchange interaction (\[eq:veff\]) discussed above, but with an additional $1/r$ fall-off, which leads to $V_{vac}(r)\sim 1/r^2$. 
Further, we require that the different potentials $V^i_{3g}(r)$ (where $i$ are discrete solutions) sum up to $V_{vac}(r)$ $$\label{eq:sum} \sum_i V^i_{3g}(r)=V_{vac}(r)= \tilde f_{as}(r) (-\tilde \alpha_e^3 \hbar\ r_o/{r^2})\ e^{-\tilde cr} \ ,$$ where $\tilde f_{as}(r)$ and $e^{-\tilde cr}$ are cut-off functions as in eq. (\[eq:veff\]). Actually, we expect that the cut-off functions should be close to those for the state with the lowest mass. Interestingly, the radial forms of $V_g(r)$ and $V_{vac}(r)$ are the only two forms which lead to equally simple forms in Q-space: $1/r\to1/Q^2$ and $1/r^2\to1/Q$. This supports our assumption. If we assume that the new quantisation is related to the flavour degree of freedom, the different “flavour states” of mesons $\omega$, $\Phi$, $J/\Psi$, and $\Upsilon$ should correspond to eigenstates of the sum rule (\[eq:sum\]). Indeed, we find that the sum of the boson-exchange potentials with g.s. masses of 0.78, 1.02, 3.1 and 9.4 GeV adds up to a potential which is in reasonable agreement with the sum (\[eq:sum\]). However, the needed cut-off parameters $a$, $\sigma$, and $c$ correspond to those for the $\sigma(600)$ solution (see ref. [@Moinc]). This can be regarded as strong evidence for the $\sigma(600)$ being the lowest flavour state. By including this solution as well, a good agreement with the sum rule (\[eq:sum\]) is obtained. This is shown in the lower part of fig. 3, where the different potentials are given by dashed and dot-dashed lines with their sum given by the solid line. The resulting masses of scalar and vector states together with their excited states in $V_{2g}(r)$ are given in table 1, which are in good agreement with experiment for the known states. The corresponding density parameters are given in table 2 with mean square radii in reasonable agreement with the meson radii extracted from experimental data (see ref. [@Mo]). 
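The correspondence $1/r\to 1/Q^2$ and $1/r^2\to 1/Q$ invoked here can be checked with the radial Fourier transform $F(Q)=(4\pi/Q)\int_0^\infty r\sin(Qr)f(r)\,dr$. The damping factor $e^{-\varepsilon r}$ below is our own regularization (not in the text, added only to make the integrals numerically convergent); it gives the analytic values $4\pi/(Q^2+\varepsilon^2)$ for $1/r$ and $(4\pi/Q)\arctan(Q/\varepsilon)\to 2\pi^2/Q$ for $1/r^2$. A rough numerical check:

```python
import math

def radial_ft(f, Q, eps=0.5, rmax=60.0, n=60000):
    """3D Fourier transform of a radial f(r), regularized by e^{-eps*r}:
    F(Q) = (4*pi/Q) * Integral_0^rmax r*sin(Q*r)*f(r)*e^{-eps*r} dr (Riemann sum)."""
    h = rmax / n
    acc = 0.0
    for i in range(1, n + 1):
        r = i * h
        acc += r * math.sin(Q * r) * f(r) * math.exp(-eps * r) * h
    return 4.0 * math.pi / Q * acc

Q, eps = 2.0, 0.5
# 1/r  ->  4*pi/(Q^2 + eps^2), i.e. ~ 1/Q^2 as eps -> 0
print(radial_ft(lambda r: 1.0 / r, Q), 4 * math.pi / (Q * Q + eps * eps))
# 1/r^2  ->  (4*pi/Q)*atan(Q/eps), i.e. ~ 2*pi^2/Q as eps -> 0
print(radial_ft(lambda r: 1.0 / r ** 2, Q), (4 * math.pi / Q) * math.atan(Q / eps))
```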
It is evident that in this multi-parameter fit there are ambiguities, which can be reduced only by detailed studies of the contributions of the different states to the average mass $\tilde E$ and its relation to $<Q^2_{\rho}>^{1/2}$. However, the reasonable account of the experimental masses and the quantitative fit of the sum rule (\[eq:sum\]) in fig. 3 indicates that our results are quite correct. As compared to potential models using finite fermion (quark) masses (see e.g. ref. [@qq]), we obtain significantly more states, bound states in $V_{2g}(r)$ and in $V_{3g}(r)$. The solutions in table 1 correspond only to the 1s level in $V_{3g}(r)$, in addition there are Ns levels with N=2, 3, ... Most of these states, however, have a relatively small mass far below 3 GeV. As the boson-exchange potential is Coulomb like, it creates a continuum of Ns levels with masses, which range down to the threshold region. This is consistent with the average energy $\tilde E$ of scalar excitations in table 2, which increases much less for heavier systems as compared to the energy of the 1s-state. These low energy states give rise to large phase shifts at low energies, in particular large scalar phase shifts. Concerning masses above 3 GeV, solution 5 yields additional scalar 2s and 3s states at masses of about 12 and 8.8 GeV, respectively, whereas an extra vector 2s state is obtained (between the most likely $\Psi$(3s) and $\Psi$(4s) states at 4.160 GeV and 4.415 GeV) at a mass of about 4.2 GeV. This state may be identified with the recently discovered X(4260), see ref. [@PDG]. Corresponding excited states in the confinement potential (\[eq:vb\]) should be found at masses of 4.9, 5.3 and 5.5 GeV with uncertainties of 0.2-0.3 GeV. In summary, the present model based on an extension of electrodynamics leads to a good understanding of the confinement and the masses of fundamental $q\bar q$ mesons. 
The flavour degree of freedom is described by stationary states of different radial extent, whose potentials exhaust a vacuum potential sum rule. In a forthcoming paper a similar description will be discussed for neutrinos, which supports our conclusion that the flavour degree of freedom is related to the structure of overlapping boson fields in the vacuum. Fruitful discussions and valuable comments from P. Decowski, M. Dillig (deceased), B. Loiseau and P. Zupranski among many other colleagues are appreciated. [99]{} Review of particle properties, C. Amsler et al., Phys. Lett. B 667, 1 (2008);\ http://pdg.lbl.gov/ and refs. therein H.P. Morsch, “Inclusion of scalar boson coupling in fundamental gauge theory Lagrangians”, to be published D.J. Gross and F. Wilczek, Phys. Rev. Lett. 30, 1343 (1973); H.D. Politzer, Phys. Rev. Lett. 30, 1346 (1973) R. Barbieri, R. Kögerler, Z. Kunszt, and R. Gatto, Nucl. Phys. B 105, 125 (1976); E. Eichten, K. Gottfried, T. Kinoshita, K.D. Lane, and T.M. Yan, Phys. Rev. D 17, 3090 (1978); S. Godfrey and N. Isgur, Phys. Rev. D 32, 189 (1985); D. Ebert, R.N. Faustov, and V.O. Galkin, Phys. Rev. D 67, 014027 (2003); and refs. therein G.S. Bali, K. Schilling, and A. Wachter, Phys. Rev. D 56, 2566 (1997);\ G.S. Bali, B. Bolder, N. Eicker, T. Lippert, B. Orth, K. Schilling, and T. Struckmann, Phys. Rev. D 62, 054503 (2000) H.P. Morsch, Z. Phys. A 350, 61 (1994) M. Ablikim, et al., hep-ex/0406038 (2004); see also D.V. Bugg, hep-ex/0510014 (2005) and refs. therein ![Comparison of two-boson density for $<r^2_{\rho}>$=0.2 fm$^2$ (dot-dashed lines) and boson-exchange potential $|V_{3g}(r)/c_{pot}|$ (solid lines) for $\kappa$=1.5, 1 and 2, respectively.[]{data-label="fig:f1"}](g1ex2sneu.eps){height="16cm"} \[ht\] ![Self-consistent solution with $<r^2_{\rho}>$=0.013 fm$^2$. 
Low radius cut-off function $f_{as}(r)$, shape of interaction (\[eq:veff\]) and $\rho^{p}(r)$ given by dashed, solid and dot-dashed lines, respectively. Two-boson density and boson-exchange potential $|V_{3g}/c_{pot}|$ in r- and Q-space, for the latter multiplied by $Q^2$, given by the overlapping dot-dashed and solid lines, respectively. The dashed line in the lower part corresponds to a calculation assuming fermion masses of 1.4 GeV.[]{data-label="fig:simdens"}](g1ex2c.eps "fig:"){height="18cm"} ![ Deduced confinement potential $V_{2g}(r)$ (\[eq:vb\]) taken from solution 4 (solid line) in comparison with lattice gauge calculations [@Bali] (solid points) . Boson-exchange potentials for the different solutions in the tables (given by dot-dashed and dashed lines) and sum given by solid line. This is compared to the vacuum sum rule (\[eq:sum\]) given by the dot-dashed line overlapping with the solid line. A pure potential $V=-const/r^2$ is shown also by the lower dot-dashed line.[]{data-label="fig:confine"}](confinesum.eps){height="18cm"} [^1]: postal address: Institut für Kernphysik, Forschungszentrum Jülich, D-52425 Jülich, Germany\ E-mail: h.p.morsch@gmx.de [^2]: In Q-space multiplied by $Q^2$.
--- abstract: 'The segmentation of large scale power grids into zones is crucial for control room operators when managing the grid complexity near real time. In this paper we propose a new method in two steps which is able to automatically do this segmentation, while taking into account the real time context, in order to help them handle shifting dynamics. Our method relies on a “guided” machine learning approach. As a first step, we define and compute a task specific “Influence Graph” in a guided manner. We indeed simulate, on a given grid state, chosen interventions representative of our task of interest (managing active power flows in our case). For visualization and interpretation, we then build a higher representation of the grid relevant to this task by applying the graph community detection algorithm *Infomap* on this Influence Graph. To illustrate our method and demonstrate its practical interest, we apply it to commonly used systems, the IEEE-14 and IEEE-118. We show promising and original interpretable results, especially on the previously well studied RTS-96 system for grid segmentation. We finally share an initial investigation and results on a large-scale system, the French power grid, whose segmentation bears a surprising resemblance to RTE’s historical partitioning.' author: - 'A. Marot, S. Tazi, B. Donnot, P. Panciatici (RTE R&D)$^{1}$ [^1]' bibliography: - 'bibliography.bib' title: '**Guided Machine Learning for power grid segmentation** ' --- INTRODUCTION ============ Well-established power systems such as the French power grid are starting to experience a transition with a steep rise in complexity. This is due in part to the changing nature of the grid, with an end to the ever increasing total consumption. This shifts the way we traditionally develop the grid. While we used to expand it by building new power lines with heavy investments that rely on growth in revenues, we should now optimize the existing one with all the flexibilities at our disposal. 
We also notice a revival of DC current technology, hybridizing the current AC grid with new dynamics. In addition, this new complexity also comes from other external factors such as the changing energy mix with a massive integration of renewables, as well as an ever more fragmented set of actors, at a more granular level with prosumers, or at the supranational level with an interconnected European grid for instance. This new complexity will bring new dynamics such as dynamically varying flow amplitudes and directions. This contrasts with the past, when centralized production from large power plants “pushed” the flows to the loads in a very hierarchical and descendant way. New distributed controls are being implemented, taking advantage of new communication and software technologies. This pushes us towards an ever more entangled cyber-physical system whose topology is no longer just the physical grid topology, which used to be convenient for studying the grid. Its topology will also be one induced by long-distance communications and controls. Therefore, rethinking the way we operate the grid has become a necessity. To handle the current complexity, our control room operators have built over time, and over many studies with the simulators at their disposal, their own mental representations of the grid. They segment the grid into static zones, redefined every year, to study the grid efficiently near real time. They are thus able to quickly identify remedial actions given the security risks around them, which helps them make the best trade-off between exploration and exploitation. However, we anticipate that these yearly static views will be less and less relevant for operating the grid in the future, with fuzzier electrical “frontiers” that can shift even over the course of a single day in this dynamic context. Nevertheless, a zonal segmentation should still be relevant for operating the grid by efficiently representing this complexity so as to act on it. 
Offering such context awareness will help our dispatchers in their decision-making process. That is why an assisted segmentation, built in a dynamic fashion to fit the specific context of a situation, is needed. Hence, how can we build such a contextual segmentation for a given task? Previous works on segmentation have relied, on the one hand, on gathering proper dynamical phasor measurements on the grid to compute disturbance-based coherency in the time domain and find similarities between electrical nodes [@Kamwa2007Automatic; @Wang1994Novel; @Juarez2011Characterization]. This implies a massive deployment of PMUs or very accurate large-scale dynamic simulations. On the other hand, other analytical approaches have investigated simplified modeling of the grid, relying on the linearized DC approximation, to partition it along buses for the purpose of studying cascading failures: [@Blumsack2009Defining] (hierarchical clustering), [@Sanchez2014Hierarchical] (spectral clustering) and [@Cotilla2013Multi] (hybrid K-means/evolutionary algorithm). This gave interesting results at a much lower cost. However, as our system becomes more cyber-physical, with distributed regulations relying on advanced embedded software and fast communications, these approaches are limited in the system complexity they can handle; for instance, they cannot produce clusters that are not connected in the actual grid topology. In addition, given their objective of identifying overall weak components for cascading failures, those methods were not particularly grid-state specific. We would like to address these two points for our near-real-time applications. In contrast to analytical methods, which have been more extensively explored in the field of power systems, our approach relies on machine learning, following our previous work [@IntroducingML] and responding to the call for new grid proxies in reliability management [@Proxies]. 
We propose in this paper a new method that relies on a guided use of existing power grid simulators to teach the machine an expected system response in the context of our task. We will speak of “guided machine learning”, a form of unsupervised learning over carefully generated inputs, guided by human expertise. For a more extended form of it, the reader can refer to [@GuidedML]. By simulating systematically chosen interventions on a grid state, we build an Influence Graph (IG) to define a similarity between our components given our operational task. Interestingly, the IG connectivity goes beyond the actual grid topology, which can further lead to non-topologically-connected components within a cluster, an idea expressed by [@hines2015InfluenceG] and reminiscent of [@roy2001InfluenceModel] when studying cascading failures. This kind of phenomenon will certainly become more prominent in a cyber-physical system and should be captured. Our machine can then learn a useful interpretable representation, a proxy, from that complex IG representation by running a suitable clustering algorithm. The *Infomap* algorithm from the field of community detection was our top candidate given some intrinsic properties, and it has proven to work well on our IG. The paper is organized as follows. Section \[sec:method\] is dedicated to the method, where we describe the IG, justify its relevance compared to more classical distance matrices, and discuss the suitability of the *Infomap* clustering algorithm. In section \[sec:results\] we present the results on a commonly used system, namely the IEEE-14, for illustration and interpretation of our method. To compare our method to others, we use the well-studied RTS-96 system for grid segmentation, which can serve as a benchmark. We eventually give some insights on the usefulness of our method on large-scale power grids such as the French power grid. Finally, section \[sec:conclusions\] provides conclusions and future directions for this work. 
METHOD {#sec:method} ====== A proxy, a simplified model of a complex system, can only be relevant for a certain range of tasks, as we are “neglecting” details that matter for other phenomena. It is useful as it reduces the dimension and exploration of a problem related to our task while preserving the relevant information. Such a representation can be judged along three axes: interpretability (helping someone apprehend a situation), synthesis (limiting someone’s exploration of the problem) and efficiency (containing solutions to the actual problem). Clustering methods already apply to a wide diversity of problems. Our main issue here is to provide a clustering algorithm with data representative of our task for a given grid state. Measurements are not enough, as our grid state is evolving. The dynamics around this state are the result of multiple entangled phenomena whose contributions are hard to assess. We cannot invasively influence those dynamics on the real system for the sake of our method. Rather, we need to rely on the proper use of simulators. Building on top of existing simulators has the advantage of relying on the complexity of their system modeling. This avoids the need to redefine a specific analytical model that captures the grid complexity for our task. Furthermore, rather than analytically and explicitly modeling our task, we use the simulator as an oracle to show the system response under some representative experiments that we call interventions. We then let the machine learn from it a proper synthetic representation for this implicit task. The combination of the simulator (Sim) and our set of interventions peculiar to our task can be seen as a teacher for our machine: we will call it a guided simulator (GSim). 
An Influence Matrix: a grid state under simulated small perturbations --------------------------------------------------------------------- In this section, we suppose that we have at our disposal a simulator $Sim$ that, given a grid state $(Inj,Topo)$ representing respectively the injections vector and the topology, returns a state denoted by $x$: $$x=Sim(Inj,Topo)$$ Our state is $x = (\bm{z}, \omega)$, with $\bm{z} = (z_1, \dots, z_j, \dots, z_{n_z})$ being our variables of interest relevant to our task, $\omega$ being the other variables we discard, and $(Inj,Topo)$ $\subset{x}$. In our case, $\bm{z} = $ “all the active power flows on each line of the power grid”, while neglecting voltage amplitudes for example. We want our clustering to best represent the complex interactions between these variables $\bm{z}$. To reveal this complexity, we can use a set of small perturbations $\mathcal{P} = \{ p_i \}_{1 \leq i \leq n_p}$ on either $Inj$ or $Topo$, around our grid state: $\forall p_i \in \mathcal{P} , p_i = \delta_{Inj}(x) \text{ or } \delta_{Topo}(x)$. For example, $p_1$ can be “redispatching the production of the first generator by 1 MW”, and $p_2$ “disconnecting the $3^{\text{rd}}$ line of the power grid”. We then run our simulator to assess the global effect $\Delta x_i$ of the perturbation $i$ on all the variables $x$: $$\Delta x_i=|Sim( (Inj,Topo) \odot p_i)-Sim(Inj,Topo)|$$ $\Delta x_i$ is the absolute difference between the state before the perturbation and the state after it, which can be seen as the influence of $p_i$. Recall that we are only interested in the subset $\Delta \bm{z}_i$. 
By stacking all the $\Delta \bm{z}_i$, we can define an Influence Matrix “IM” dedicated to our task (in our case studying the active flows $\bm{z}$): $$IM_{i,j}=(\Delta \bm{z}_i)[j]_{{1 \leq i \leq n_p},{1 \leq j \leq n_z} }$$ where $(\Delta \bm{z}_i)[j]$ denotes the $j^{\text{th}}$ value of the perturbed vector $\Delta \bm{z}_i$. To compute IM, we could imagine simulating any kind of perturbation, naively simulating for instance all possible injection redispatching actions and all topological changes, and observing their effect on our $\bm{z}$ variables. We could then apply a clustering method on this entire cloud of observations. However, computing all these perturbations is computationally too intensive in practice and not always meaningful. We should instead choose targeted and relatively non-invasive small perturbations that lead to an overall system response representative of our task. That is where the “guided” simulator comes into play. From (2), with a carefully chosen set of perturbations $\mathcal{I}$, from now on denoted “interventions”, we define a guided simulator, given $(Inj,Topo) \subset{x}$: $$\begin{split} GSim(x,\mathcal{I})=|Sim((Inj, Topo)\odot \mathcal{I})-Sim((Inj, Topo))| \end{split}$$ An Influence Graph: guided simulations with systematic interventions --------------------------------------------------------------------- A class of interventions we will use, likely to match the above-mentioned prerequisites, consists of direct, independent and anaesthetizing interventions on each $z_i$. By counterfactual reasoning, we can then observe how the system would have behaved without $z_i$ playing its role. This gives us relevant information on the role of $z_i$ within the system, and highlights the close interactions with other $z_k$ that play a similar role. 
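As an illustration only, the IM/GSim machinery above can be sketched in a few lines of Python. Here `sim` is a stand-in for the real power-flow simulator $Sim(Inj,Topo)$ (a toy linear model), and the interventions simply zero out one injection at a time; the grid data and intervention set are invented for the example, not taken from the paper.

```python
# Toy sketch of the Influence Matrix (IM) / guided simulator (GSim) definitions.
# `sim` is a hypothetical stand-in for the real simulator Sim(Inj, Topo).

def sim(inj, topo):
    """Toy simulator: each observed variable z_j sums the injections listed in topo[j]."""
    return [sum(inj[k] for k in lines) for lines in topo]

def guided_sim(inj, topo, interventions, apply):
    """GSim: |Sim(state ∘ p) - Sim(state)| for each intervention p, stacked as IM."""
    base = sim(inj, topo)
    im = []
    for p in interventions:
        new_inj, new_topo = apply(inj, topo, p)
        pert = sim(new_inj, new_topo)
        im.append([abs(a - b) for a, b in zip(pert, base)])
    return im  # rows: interventions p_i, columns: variables z_j

# Example: 3 injections, 2 observed variables; intervention i zeroes injection i.
inj = [1.0, 2.0, 3.0]
topo = [[0, 1], [1, 2]]
zero_out = lambda inj, topo, i: ([v if k != i else 0.0 for k, v in enumerate(inj)], topo)
IM = guided_sim(inj, topo, range(3), zero_out)  # → [[1.0, 0.0], [2.0, 2.0], [0.0, 3.0]]
```

Each row of `IM` is one $\Delta \bm{z}_i$; stacking them gives the IM of the definition above.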
Ideally we would like to decipher the causal influence of some injections and topologies on our $z_i$, which could help us explicitly gather $z_i$’s with common influencing factors. This relates to the seminal work of [@Pearl] on graphical models and causality: he theorized the need for interventions, in addition to observations, to properly identify causal links and build proper graphical models. For this class of interventions $I_z$, we now have card($I_z$) $= n_z$. The related IM is now nothing more than the square adjacency matrix of a directed graph. We define it as an Influence Graph IG. The nodes represent our variables $z$, the edges the interactions between $z_i$’s, and the origin of an edge is given by the source $z_k$ on which the intervention $I_k$ occurred. To compute our contextual $IG_z$ for our task, we run our guided simulator, keeping the subset of the results related to $z$: $$IG_z(x)=GSim_{x \in z}(x,I_z)$$ Let us now apply this framework to our specific task of interest, which matters for our operators today: managing the active power flows. The Influence Graph for managing active power flow -------------------------------------------------- For the task of managing active power flows, our $z$=$L^{pf}$ are the power flows on the $n_L$ power lines. The intervention we choose, [$I_{z_i}=Off(L^{pf}_i)$]{}, is the disconnection of a given line, preventing power from flowing through it and hence setting it to 0. We thereby observe how the system responds when redistributing the existing power flow through other pathways. This is a targeted and relatively non-invasive intervention with regard to the active system variables, injections and topology. Indeed, within a meshed transmission grid, it avoids power generation redispatching, maintains the initial injection plan, and only modifies the topology locally and smoothly. 
As a simulator, we use a static AC power flow $Sim=ACpf$ and hence compute our influence graph over all independent line disconnections $Off(z_i)$. Nodes in this graph correspond to our $z$, the power lines. An edge $e_{ij}$ has a weight: $$w_{ij}=GACpf_{x \in L^{pf}}(x,Off(L^{pf}_i))_j, e_{ij}\in IG_{L^{pf}} .$$ A sensible threshold is set to remove numerical noise, the equivalent of 1 MW on the French power grid, which is often considered a numerical tolerance. Let us now visualize on Figure \[fig:InfluenceGraph\] an $IG_{L^{pf}}$ over a relatively small system, the IEEE-14, and compare it to other well-known representations of a grid, namely the grid topology and the grid power flows. ![3 representations of the grid. The topological one on the left is the simplest. The power-flow one in the center adds the localization of injections, production in red and load in cyan, as well as the direction of flow. The blue Influence Graph for power flows on the right eventually highlights the wide complexity of interactions between these flows through lines. An arrow goes from a line on which we intervene to one it influenced.[]{data-label="fig:InfluenceGraph"}](IEEE14_topologie.png "fig:"){width="3" height="3.5cm"}![3 representations of the grid. The topological one on the left is the simplest. The power-flow one in the center adds the localization of injections, production in red and load in cyan, as well as the direction of flow. The blue Influence Graph for power flows on the right eventually highlights the wide complexity of interactions between these flows through lines. An arrow goes from a line on which we intervene to one it influenced.[]{data-label="fig:InfluenceGraph"}](IEEE14_powerflow.png "fig:"){width="3" height="3.5cm"}![3 representations of the grid. The topological one on the left is the simplest. The power-flow one in the center adds the localization of injections, production in red and load in cyan, as well as the direction of flow. 
The blue Influence Graph for power flows on the right eventually highlights the wide complexity of interactions between these flows through lines. An arrow goes from a line on which we intervene to one it influenced.[]{data-label="fig:InfluenceGraph"}](InfluenceGraphIEEE14.png "fig:"){width="2.8cm" height="3.5cm"} For the topological representation, we might want to segment the grid into 3 zones along the vertical axis when looking for cliques, that is, coherently meshed zones. It misses however the localization of injections. Considering them might actually lead to a different number of relevant zones: if productions and loads were distributed over all the nodes, there might not be distinguishable sub-zones; since productions are concentrated in one part and loads in another, as is the case here, 2 zones could make more sense. Focusing now on the power-flow representation, it is more informative on that point. However, it can be misleading, as we will naturally follow the path of a given flow along its direction like a water flow, shadowing the superposition of interactions as stated by the superposition theorem. A power flow is indeed the residual of multiple flows in both directions. For flows not belonging to the same “water flow path”, we would hence create independent zones even though they could be strongly interacting. This difficulty does not arise clearly here, as the production is quite concentrated and localized, hence pushing the flows along one path. We might want here to create 2 zones: a localized production zone with a clique of 3 in the bottom left, and a diffuse consumption zone for the remaining grid. Now studying our $IG_{L^{pf}}$, it represents an additional level of complexity, highlighting the superposition of interactions, some over long distances. It shows the centrality of line 4-5 for our grid state, as it is a bottleneck for our influence flows. 
From that observation, we could expect to see our grid segmented into 2 electrically coherent zones with line 4-5 as the border line. If we can make that analysis on such a small grid, interpreting a wider IG might become quite impossible on a larger grid. To synthesize this information and gain interpretability, we apply unsupervised machine learning to it to help us better represent it. We hence choose and run an appropriate hierarchical community detection algorithm, *Infomap*, on our Influence Graph, resulting in various levels of representation for our task. Infomap: an information theory based community detection algorithm ------------------------------------------------------------------ There are several algorithms for graph segmentation, known in the literature as community detection algorithms [@Newman2002Finding; @Guimera2004Modularity; @Blondel2008Fast]. One can refer to the following article for a review of community detection algorithms [@Fortunato2010Community]. The algorithm developed by Rosvall *et al.* [@Rosvall2009map], known as “*Infomap*”, has the advantage of being particularly suitable for directed, weighted graphs, and able to identify flow patterns inside the graph. It is recursively hierarchical [@Rosvall2011Multilevel] and can automatically find the proper number of hierarchical levels and clusters. In addition, “*Infomap*” can handle overlapping communities [@Viamontes2011Compression], which could be of interest for future works: electrical frontiers are indeed fuzzy, and it could make sense for lines to interconnect some clusters. We will here briefly describe the main ideas of this method; the reader can refer to the original article for a complete description. The idea is to use the duality between the problem of how to best partition the network with respect to flow and the problem of minimizing the description length of the places visited along a path over the influence graph. 
The goal is hence to compress the information by finding the best encoding, a codeword, to name each of our variables. This all goes back to Shannon’s source coding theorem from information theory, which establishes that for $n$ codewords describing $n$ states of a random variable $X$ that occur with frequencies $p_i$, the average length of a codeword can be no less than: $H(X) = - \sum ^{n} _{1} p_i log(p_i)$. The more you use a codeword, the shorter the encoding you should give it. To further minimize the average encoding length of codewords, one can take advantage of the graph’s regional cycling structure, which highlights modules $M$, and define a “module codebook” for each area that contains the codewords of all the nodes of this area. It is thereby possible to reuse the same codeword for different nodes, since we can specify which module codebook to use. We then need an “index codebook” containing a codeword for each “module codebook”. Going from one node to another in the same region, one only needs a short codeword to identify it, knowing the region codebook. An easy analogy is the case of maps with streets, cities and countries. In different cities you will find the same street names, and you can do so because you can also name the city, to better identify this street in a country. But being in a given city, you do not even need to name the city again to refer to a street in it: you compressed the information while still being able to communicate it. To eventually minimize the description length of our variables, we can recursively apply Shannon’s theorem to codewords and codebooks, which leads to the map equation: $$L(M) = q_{\curvearrowright} H(Q) + \sum _{i=1} ^m p_{\circlearrowright} ^{i} H(P^i)$$ with $H(Q)$ the weighted average length of codewords in the index codebook and $H(P^i)$ the weighted average length of codewords in the module codebook $i$. 
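To make the map equation concrete, here is a minimal numeric sketch of its evaluation for a given module assignment. The rate values fed to it are hypothetical illustrations, not quantities computed from any grid; in the actual algorithm they come from the random walker.

```python
import math

def H(rates):
    """Shannon entropy of the normalized rate distribution (codeword-length bound)."""
    total = sum(rates)
    if total == 0:
        return 0.0
    ps = [r / total for r in rates if r > 0]
    return -sum(p * math.log2(p) for p in ps)

def map_equation(exit_rates, module_visits):
    """L(M) = q H(Q) + sum_i p_i H(P^i) for a module assignment M.

    exit_rates[i]    -- rate at which the walker exits module i (hypothetical values)
    module_visits[i] -- node visit rates inside module i (hypothetical values)
    """
    q = sum(exit_rates)                       # total module-switch rate q
    index_term = q * H(exit_rates)            # q * H(Q): index codebook cost
    module_term = 0.0
    for qi, visits in zip(exit_rates, module_visits):
        usage = list(visits) + [qi]           # codewords used inside module i (+ exit)
        module_term += sum(usage) * H(usage)  # p_i * H(P^i)
    return index_term + module_term
```

A single module with no exits reduces `L(M)` to the plain entropy of the node visit rates, as expected; adding modules only pays off when the walker rarely crosses module borders.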
The index codebook is used at frequency $q_{\curvearrowright}$, the probability of changing module at every step of the random walk. The module codebook $i$ is used at frequency $p_{\circlearrowright} ^{i}$, the fraction of moves spent within module $i$ plus the probability of leaving it. In practice, the frequencies are computed using a random walker over the graph. Results {#sec:results} ======= We applied our method on systems of different sizes to demonstrate its genericity. First, we illustrate it on the reduced IEEE-14. We then benchmark our method on the RTS-96 system, which has been studied in the past as an interesting baseline for segmentation purposes. We further share results on the 118-bus system, which is a realistic and readable middle-sized one. We finally show a segmentation of the large-scale French power grid to demonstrate how the method scales, while being able to give initial interpretations. IEEE-14 ------- The IEEE-14 is an appropriate system to illustrate our method. From figure 2, we see that the system has a production zone and a distribution one. Our flows are highly influenced by one or the other. We expect our method to segment this system into 2 zones, a cluster for productions and another for loads. This is what we observe in our results, with the line “4\_5” being the border. Our representation is hence interpretable, since it helps in understanding the structure of the power system. ![On the left, our IG for the IEEE 14 system. On the right, our segmentation into 2 zones using InfoMap on our IG](InfluenceGraphIEEE14.png "fig:"){width="3" height="3cm"}![On the left, our IG for the IEEE 14 system. On the right, our segmentation into 2 zones using InfoMap on our IG](Infomap_IEEE14.png "fig:"){width="3" height="3cm"} \[fig:ieee14\] IEEE-RTS-96 ----------- To benchmark our method, we used the reliability test system 1996, from which we obtained the clustering shown in figure \[fig:ieee96\]. 
It highlights one level and 7 clusters, 6 agreeing with the power grid connectivity and 1 not. We argue that this surprising non-connected cluster comes from the system itself and is not an artifact of our method; this will be discussed below. As for the 6 others, since they represent the same IEEE-24 case, it is consistent to segment them in the same way. We compare the results of our method to other works in figure \[fig:ieee96compa\]: [@Cotilla2013Multi], who use an electrical distance, and [@Kamwa2007Automatic], who use time-domain measurements. Overall, the clusterings are very similar while being computed by 3 different methods, which might indicate we are close to an interesting and useful clustering. ![Comparison of our IEEE-RTS-96 segmentation (line colored) with two other methods (node colored)[@Cotilla2013Multi][@Kamwa2007Automatic]. Recurrent discrepancies highlighted on 1 subgrid, plus the non-connected cluster[]{data-label="fig:ieee96compa"}](96-RTS_ComparisonClustering_Contour.png){width="5cm" height="3cm"} There are however slight differences we can comment on. We can notice 2 differences, as circled on figure \[fig:ieee96compa\], besides the fact that we cluster lines while they cluster nodes. Topologically speaking, we argue that our method properly circumscribes the upper cluster to a meshed clique, whereas the other clusterings have slightly less obvious unmeshed extensions. As for the non-connected cluster, it gathers the high-voltage interconnecting power lines, close to productions, that link the same three sub-grids, but leaves aside the low-voltage interconnection close to loads, which can be understood. We rediscovered that these lines were artificial additions for the purpose of this synthetic system, and not the result of a coherent grid development. Cutting one of those power lines leads to significant changes in flows over the whole grid, as illustrated in the Influence Graph heatmap in figure \[fig:heatMap AS\]. 
Hence these interconnecting lines play the same role in the power grid, even if they are not connected, and it then makes sense to cluster them together. Beyond the grid topological graph --------------------------------- Very few works on grid segmentation have tried to use representations of the grid that go beyond its connectivity, a hard constraint which seems at first natural and intuitive. Nevertheless, this might overlook that a power system is a very entangled one with complex, sometimes counter-intuitive interactions, as [@hines2015InfluenceG] explained. Here we use an influence graph whose connectivity differs from the grid’s, as shown in figure \[fig:heatMap AS\] for the RTS-96 system [@Grigg1999IEEE]: it is actually more connected. But in the segmentation process, some links will appear strong and relevant while others will be weak and ignored. As a consequence, the results will most generally lead to clusters whose elements are connected from the grid topological graph perspective. But some might not be connected, which could highlight complex interactions between flows and potential areas where the grid needs to be reinforced. This is one of the interesting interpretations we retrieve when applying our method. ![HeatMaps to illustrate graph connectivity given different representations for the RTS-96 system: a) Influence Graph b) topological graph. The bottom rows of those graph matrices are filled quite differently and represent the non topologically connected cluster in the Influence Graph.[]{data-label="fig:heatMap AS"}](heatMapofSecurityAnalysis.pdf "fig:"){width="4.5" height="3cm"}![HeatMaps to illustrate graph connectivity given different representations for the RTS-96 system: a) Influence Graph b) topological graph. 
The bottom rows of those graph matrices are filled quite differently and represent the non topologically connected cluster in the Influence Graph.[]{data-label="fig:heatMap AS"}](heatMapofEdgeConnexity.pdf "fig:"){width="4cm" height="3cm"} To confirm this primary analysis of the graphs, we ran Infomap on the classical topological representations of the grid, which both resulted in a 3-zone segmentation, missing our interconnection-line cluster and not representative of the localization of injections. This is an example of one possible other application of our method: identifying weakly meshed interconnecting areas that interact strongly over long distances. We can hence capture clusters that are not connected in the actual grid topology. Middle-sized grid: IEEE-118 --------------------------- The IEEE-118 bus test case is a reduced model of the Midwestern US power grid in 1962 [@PSTCA2001Power]. In figure \[fig:ieee118\], one can see the IEEE-118 case segmentation. We can distinguish 9 clusters at the top level, 8 agreeing with the grid connectivity and 1 which does not. This cluster 7, non-connected in the grid topology, plays the same role as its counterpart in the IEEE-RTS-96 case: important weakly-meshed interconnections between well-meshed East and West grids, with some imbalance here, the East grid having excess production and the West excess consumption. The remaining connected clusters seem reasonable, with a proper clique segmentation and localized injections. We could actually expect some overlapping for some lines in between clusters, so that each cluster has proper cliques. This is something we actually observe when running *Infomap* with that option, and it will be further studied in future works. Large grids: the French power grid ---------------------------------- Finally, we discuss our segmentation of the French power grid, composed of $6~000$ nodes, $10~000$ lines and $10~000$ injections, on a snapshot taken on the 19th of January 2012 at 7pm. 
Running our method, we find 8 clusters at level 1 of the hierarchical clustering. Here 8 is an output of Infomap, not a prior input parameter. It is remarkably close to 7, the number of regions in RTE’s historical segmentation. We compared those 2 segmentations and found them to be quite close, as can be seen visually on figure \[fig:reseauRTE\]. The historical RTE partition is not a purely electrical one: human resources, workload, maintenance teams and their localization were also taken into account at the time. Nonetheless, this preliminary result is very encouraging, as we can give it such an interpretation and it has been positively commented on by our dispatchers. Further validation of the quality of lower-level clusters, such as level 2, whose synthetic sizes are close to the areas usually drawn by our dispatchers, could lead to redesigning some of our study tools, offering them context awareness in a more dynamical cyber-physical system. ![Comparison of a) our French power grid segmentation with b) historical RTE regional segmentation. []{data-label="fig:reseauRTE"}](ComparaisonReseauFrance1.png "fig:"){width="4" height="3cm"}![Comparison of a) our French power grid segmentation with b) historical RTE regional segmentation. []{data-label="fig:reseauRTE"}](ComparaisonReseauFrance2.png "fig:"){width="4cm" height="3cm"} CONCLUSIONS {#sec:conclusions} =========== In this paper we derived a new method to efficiently segment power grids. The method relies on a guided machine learning approach built on top of existing physical simulators. We applied it to the task of studying power flows on a grid state, and the resulting synthetic segmentations led to a successful benchmarking and meaningful interpretations. In particular, it highlights clusters that are not connected in the grid topology, illustrating the grid complexity. It also finds an appropriate number of clusters by itself. 
We believe that our approach could generalize to other cyber-physical systems and could be extended to create other meaningful representations for other tasks of interest. Future works will define more quantitative measures, besides our interesting analytical results, to further validate our unsupervised method. New analyses will also be conducted on lower-level clusters, on overlapping, and on clustering evolution over time, to eventually assess its efficiency and the importance of the grid state context. Our method could then become a building block for new contextual visualizations of the power system or for targeted control applications with reduced computation. [^1]: $^{1}$Réseau de Transport d’Électricité (French TSO)
--- abstract: 'In this paper we present the design, manufacturing, characterization and analysis of the coupling ratio spectral response for Multimode Interference (MMI) couplers in Silicon-on-Insulator (SOI) technology. The couplers were designed using a Si rib waveguide with SiO$_2$ cladding, on a regular 220 nm film and 2 $\mu$m buried oxide SOI wafer. A set of eight different designs, three canonical and five using a widened/narrowed coupler body, have been the subject of study, with coupling ratios 50:50, 85:15 and 72:28 for the former, and 95:05, 85:15, 75:25, 65:35 and 55:45 for the latter. Two wafers of devices were fabricated, using two different etch depths for the rib waveguides. A set of six dies, three per wafer, whose line metrology matched the design, were retained for characterization. The coupling ratios obtained in the experimental results match, with small deviations, the design targets for a wavelength range between 1525 and 1575 nm, as inferred from spectral measurements and statistical analyses. Excess loss for all the devices is conservatively estimated to be less than approximately 2 dB. All the design parameters, body width and length, input/output positions and widths, and taper dimensions are disclosed for reference.' author: - | José David Doménech$^1$, Javier S. Fandiño$^2$, Bernardo Gargallo$^2$ and Pascual Muñoz$^{1,2}$\ \ $^1$VLC Photonics S.L., C/ Camino de Vera s/n,\ Valencia 46022, Spain e-mail: david.domenech@vlcphotonics.com\ $^2$Optical and Quantum Communications Group, iTEAM Research Institute,\ Universitat Politècnica de València, C/ Camino de Vera s/n,\ Valencia 46022, Spain e-mail: pmunoz@iteam.upv.es. 
bibliography: - 'mmi.bib' title: 'Arbitrary coupling ratio multimode interference couplers in Silicon-on-Insulator' ---

Introduction
============

Optical couplers are perhaps one of the most basic and most used among the building blocks for photonic integrated circuits (PICs) in all currently available technology platforms [@munoz_icton2013]. Different integrated implementations exist (see [@hunsperger]), and they are usually compared according to their coupling constant and operational wavelength range. Among all of them, the Multimode Interference (MMI) coupler is the most used in high index contrast PIC technologies, such as III-V and group IV materials, since it is in general more compact and preserves the coupling constant over a wide wavelength range. Since its inception by Ulrich in 1975 [@ulrich1975], and the demonstrations of MMIs as we know them today carried out by Pennings and co-workers [@pennings1991], a multitude of papers have studied the different aspects of these very versatile devices: fundamental theory and design rules for the so-called canonical MMIs, by Soldano [@soldano] and Bachmann [@bachmann94; @bachmann95]; design rules and experimental demonstrations of widened/narrowed body MMIs for arbitrary coupling ratios at a single wavelength by Besse [@besse96], with reconfiguration using thermal tuning by Leuthold [@leuthold01]; tolerance analysis by Besse [@besse94]; design optimizations for different technologies by Halir [@halir]; and a library of experimentally demonstrated 50:50 Silicon-on-Insulator couplers [@zhou], to name a few. There are other means of implementing couplers with arbitrary ratio that make use of additional structures, for instance the combination of two MMIs in a Mach-Zehnder Interferometer (MZI) like structure recently proposed by Cherchi [@art:cherchi14].
In this paper we report on the design and experimental demonstration of arbitrary coupling ratio MMIs following the design rules by Besse and co-workers [@besse96], supported by Beam Propagation Method (BPM) commercial software optimizations [@phoenix], on a Silicon-on-Insulator (SOI) platform. Complementary to previous works available in the literature, this paper presents: a) all the design parameters required to obtain broadband (1525-1575 nm) coupling ratio, with modest excess loss, for canonical 50:50, 85:15 and 72:28 MMIs, as well as for widened/narrowed 95:05, 85:15, 75:25, 65:35 and 55:45 MMIs; b) spectral traces demonstrating the otherwise well known theoretically broadband operation of these devices; c) statistics for the coupling ratio variations in the operational wavelength range, which may be of use to perform variational analysis of more complex on-chip devices, circuits and networks based on these MMIs; d) an explanation of how measurement deviations, due to variations in the in/out coupling to/from the chip, can bias the coupling ratio results; and e) measurements to infer the reproducibility, die to die and wafer to wafer, of the responses. These reference designs, experimentally validated, together with the statistical variations and reproducibility information, can be used as a starting point for other designers and researchers of these devices, and of more complex chip networks employing them, on SOI platforms.

Design
======

The design of all the MMIs was carried out in three steps: i) cross-section analysis and 2D reduction, ii) analytic approach and iii) numerical BPM optimization. The cross section consists of a buried oxide layer of 2 microns height, capped with a 220 nm Si layer and a SiO$_2$ over-cladding. Rib waveguides, with 130 nm etch depth from the top of the Si layer, were used in the design stage.
For the same lithographic resolution, rib waveguides provide more robust MMIs than strip waveguides, owing to the fact that wider waveguides are required to support the same number of modes [@soldano]. This comes at the cost of increased footprint, and some additional design refinements are required to minimize the MMI imbalance and excess loss [@halir][@hill], besides the complexity of the two mask level fabrication described in [@thomson2010low]. The latter trade-off is common in applications where the coupling constant needs to be set very precisely, for instance in very small free spectral range Mach-Zehnder interferometers (MZI), to compensate for the significantly larger loss difference between the long and short interferometer arms [@bogaerts2010silicon]. Moreover, it is determinant for on-chip reflectors based on Sagnac interferometers, where the reflectivity is solely determined by the coupling ratio of the coupler in the interferometer [@munoz2011sagnac]. Firstly, for the cross-section analysis a film-mode matching mode solver was used [@phoenix]. The wavelength dependence of the refractive indices was included in the solver (see the Appendix). For a given MMI width, the first and second mode propagation constants, $\beta_0$ and $\beta_1$ respectively, were found for a wavelength of 1.55 $\mu$m for TE polarization, and the beat length $L_{\pi} = \pi / (\beta_0-\beta_1)$ was computed from these. For all the MMIs subject to design, the body width was set to 10 $\mu$m. The effective indices for the first and second mode given by the solver are n$_{eff,0}$=2.84849 and n$_{eff,1}$=2.84548. Therefore the beat length results in L$_{\pi}$=257.61 $\mu$m. In order to later use a 2D BPM method, the cross-section was reduced vertically to a 1D waveguide using the effective index method (EIM) [@buus].
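As a quick numerical check, the beat length follows directly from the two quoted effective indices; a minimal sketch (the small difference from the quoted 257.61 $\mu$m comes from the rounding of the effective indices):

```python
import math

# Effective indices of the first two TE modes of the 10 um wide MMI
# body, as quoted in the text (film-mode matching solver output).
n_eff_0 = 2.84849
n_eff_1 = 2.84548
wavelength_um = 1.55

# Propagation constants beta = 2*pi*n_eff/lambda, and beat length
# L_pi = pi/(beta_0 - beta_1) = lambda / (2*(n_eff_0 - n_eff_1)).
beta_0 = 2 * math.pi * n_eff_0 / wavelength_um
beta_1 = 2 * math.pi * n_eff_1 / wavelength_um
L_pi = math.pi / (beta_0 - beta_1)

print(f"L_pi = {L_pi:.2f} um")  # close to the 257.61 um quoted in the text
```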
EIM was firstly used to derive the 1D effective index for the core region, and then the effective index left/right of the core was calculated by numerically solving (with a bisection method) for the 1D modes of the reduced structure to match the previously calculated $L_{\pi}$ of the 2D cross-section. Secondly, analytic design rules for canonical [@soldano] and arbitrary coupling ratio [@besse96] MMIs were used. These rules provide, for a given MMI width, an analytic approximation for the MMI body length, named L$^0$, from the previously calculated $L_{\pi}$, and for the case of arbitrary ratio, the width variation and body geometry (named type A, B, C and D in [@besse96]). For completeness, the analytic approximations for the MMI lengths are reproduced here: $$\begin{aligned} L^0_A &=& \delta_W^A \frac{1}{2} \left(3 L_\pi \right) \label{eq:La}\\ L^0_{B,Bsym} &=& \delta_W \frac{1}{3} \left(3 L_\pi \right) \label{eq:Lb}\\ L^0_C &=& \delta_W \frac{1}{4} \left(3 L_\pi \right) \label{eq:Lc}\\ L^0_D &=& \delta_W \frac{1}{5} \left(3 L_\pi \right) \label{eq:Ld}\end{aligned}$$ where 3$L_\pi$ is the distance for the first direct (not mirrored) image [@soldano], $\delta_W^A = 1-\Delta W/W$ and $\delta_W = 1-2\Delta W/W$. Note that in the case of a rectangular body, $\Delta W$=0 and the last two expressions are equal to 1. Up to this stage, only the MMI body width and a first guess for the length are set. The final step consists of using BPM for an MMI having input/output tapered waveguides. Tapers are required to minimize the MMI excess loss, imbalance and reflections, as described in [@halir][@hill]. Hence, BPM is used to find iteratively both the MMI length and the input/output taper width. The optimization process has as its target to minimize the coupler imbalance, i.e. that the ratios at both outputs match the target, and to maximize the overall optical power with respect to the input, i.e. to minimize the excess loss.
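As a sanity check, the first-guess lengths for the three rectangular-body designs ($\Delta W = 0$, so $\delta_W = 1$) can be computed from the $L_\pi$ fractions listed in Table \[tab:mmi\_param\]; a minimal sketch (lengths for the widened/narrowed bodies additionally depend on Besse's design curves and are not reproduced here):

```python
# First-guess body lengths L0 for the rectangular-body designs,
# using the L_pi fractions from the parameter table:
# 1/2 for MMI #1, 3/4 for #2 and 3/5 for #8.
L_pi = 257.61  # um, beat length from the cross-section analysis

designs = {
    "#1 (50:50)": 1.0 / 2.0,
    "#2 (85:15)": 3.0 / 4.0,
    "#8 (72:28)": 3.0 / 5.0,
}

L0 = {name: frac * L_pi for name, frac in designs.items()}
for name, length in L0.items():
    print(f"{name}: L0 = {length:.2f} um")
# ~128.81, ~193.21 and ~154.57 um, matching the table; the
# BPM-optimized lengths L* in the table are a few percent shorter.
```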
To do so, the BPM is equipped with mode overlap monitors at the output waveguides. They provide the amplitude and phase for the overlap of the output field with the fundamental mode of the waveguide. The optimization process starts with a fixed set of taper width and MMI length. The starting taper width was set to 3 $\mu$m. The taper length was set to 50.0 $\mu$m, which is sufficiently large for adiabatic linear tapering as per [@art:Fu14]. The MMI length was set to the values obtained through the aforementioned analytic formulas. They provide an MMI length that does not account for the tapering of the inputs/outputs, which in turn modifies the propagation conditions in the multimode waveguide. Therefore, for the initial guess of taper width, the length of the MMI is solved numerically in a first step. Next, the width of the taper is varied. Both parameters are iteratively changed following standard numerical update and minimization methods, until a solution is found for the coupling ratios, having as stop condition a tolerance of 0.01. The optimization was performed firstly for $\lambda$ = 1.55 $\mu$m, and finally cross-checked for the design wavelength interval, 1.525-1.575 $\mu$m. The body shapes and parameters for the MMIs are given in Fig. \[fig:mmi\_sketch\] and Table \[tab:mmi\_param\]. The parameters subject to numerical optimization are marked in the table with the $^*$ symbol. Note the optimization process yields shorter MMI body lengths than those provided by the analytic expressions in Eqs. (\[eq:La\])-(\[eq:Ld\]). This can be explained in terms of the underlying physics as follows. The analytic expressions provide the length for a perfectly rectangular body. Including input/output tapers perturbs the rectangular body shape in the longitudinal dimension, producing multimode propagation over a length larger than the canonical rectangular length. The effect is similar to having a perfectly rectangular body, but with increased length.
Hence, to compensate for this extra propagation length in the tapers, the body length needs to be reduced.

| Id | \#1 | \#2 | \#3 | \#4 |
|---|---|---|---|---|
| Ratio | 50:50 | 85:15 | 95:05 | 85:15 |
| Type | A | C | B Sym | B Sym |
| L$^0$ | $\delta_W^A$(1/2)L$_\pi$ | $\delta_W$(3/4)L$_\pi$ | $\delta_W$(3/3)L$_\pi$ | $\delta_W$(3/3)L$_\pi$ |
| | 128.81 | 193.21 | 220.40 | 192.75 |
| L$^*$ | 122.96 | 184.95 | 211.95 | 184.55 |
| W | 10.00 | 10.00 | 10.00 | 10.00 |
| $\Delta$W | 0 | 0 | 0.72 | 1.26 |
| d$_{i0}$,d$_{i1}$ | 1.90 | 0.83 | 1.97 | 1.97 |
| d$_{o0}$,d$_{o1}$ | 1.90 | 0.83 | 1.97 | 1.97 |
| l$_t$ | 50.00 | 50.00 | 50.00 | 50.00 |
| w$_t$ | 0.45 | 0.45 | 0.45 | 0.45 |
| W$_t^0$ | 3.00 | 3.00 | 3.00 | 3.00 |
| W$_t$$^*$ | 2.75 | 3.35 | 2.7 | 2.7 |
| Body | Single | Single | Double | Double |

| Id | \#5 | \#6 | \#7 | \#8 |
|---|---|---|---|---|
| Ratio | 75:25 | 65:35 | 55:45 | 72:28 |
| Type | C | D | D | D |
| L$^0$ | $\delta_W$(3/4)L$_\pi$ | $\delta_W$(3/5)L$_\pi$ | $\delta_W$(3/5)L$_\pi$ | $\delta_W$(3/5)L$_\pi$ |
| | 257.49 | 177.66 | 208.55 | 154.57 |
| L$^*$ | 247.18 | 170.36 | 200.04 | 147.76 |
| W | 10.00 | 10.00 | 10.00 | 10.00 |
| $\Delta$W | 3.26 | 1.5 | 3.34 | 0 |
| d$_{i0}$,d$_{i1}$ | 1.11 | 0.60,2.60 | 0.61,2.61 | 0.75,2.75 |
| d$_{o0}$,d$_{o1}$ | 1.11 | 2.60,0.60 | 2.61,0.61 | 2.75,0.75 |
| l$_t$ | 50.00 | 50.00 | 50.00 | 50.00 |
| w$_t$ | 0.45 | 0.45 | 0.45 | 0.45 |
| W$_t^0$ | 3.00 | 3.00 | 3.00 | 3.00 |
| W$_t$$^*$ | 2.83 | 2.83 | 2.80 | 2.50 |
| Body | Single | Single | Single | Single |

Experimental results
====================

Fabrication
-----------

The designs were fabricated on two different wafers, named A and B from now onwards, using a 248 nm CMOS line Multi-Project Wafer at the Institute of Micro-Electronics, Singapore. The difference between wafers A and B was the etch depth for the rib waveguides, 130 nm (as per design) and 160 nm from the top of the Si film, respectively. From the dies delivered by the fab, those exhibiting metrology (grating line width, grating space width, waveguide width) close to target were retained for measurements.
The target grating line and space were both 315 nm. Metrology shows grating lines in \[321,333\] nm, and grating spaces in \[296,312\] nm. The target waveguide width for the process calibrations at the fab was 500 nm, and metrology shows widths in \[520,541\] nm. The number of dies with metrology in these ranges amounted to 3 dies per wafer, namely A1, A2 and A3 for wafer A, and B1, B2, B3 for wafer B. A picture of the fabricated devices is shown in Fig. \[fig:mmi\_chip\]. All the layouts included focusing grating couplers (FGC) to couple light vertically into/out of the chips [@fgc]. Both the FGCs and the waveguides connected to the MMIs supported only TE polarization.

Characterization setup
----------------------

The characterization setup consists of a set of three motorized positioners. Two of them are used for holding the fibers vertically at the right angle to couple light into the FGCs (10$^{\circ}$ from the normal to the chip surface), whereas the third one holds the sample on top of a thermally controlled (25 $^{\circ}$C) vacuum chuck. A CCD camera vision system is also mounted on a motorized stage, and a LED lamp is used for illumination. For the measurements, the fibers are aligned manually in two steps. Firstly, the fibers are brought close to the FGC locations by visual inspection using the live images from the camera. The approximate height can be determined from when the fiber and its shadow overlap. Secondly, a broadband light source is connected to one of the fibers, whereas a power meter is connected at the end of the other. Then, the positions of the input and output fibers are optimized with the motorized stages to obtain the maximum power. A further detailed description of this setup and procedure can be found in [@domenech_phd]. After the fiber alignments are optimized, an Optical Spectrum Analyzer (OSA) is used to record the spectra with a resolution of 10 pm.
Test structures and stability
-----------------------------

Prior to measuring and processing the target devices, straight waveguides were measured in order to gather information on the different features observed in preliminarily recorded traces. Referring to Fig. \[fig:8sws\], a single straight waveguide was measured repeatedly, and a set of 8 consecutive traces was obtained. These are shown as black dots in the figure, with the average as a marine blue solid line. Some Fabry-Pérot (FP) like ripples were observed, with a separation between peaks of 0.26 nm. These are attributed to reflections occurring elsewhere, which are otherwise not present in the spectrum of the optical source. Thus a moving average of 71 points (traces were recorded with a spectral resolution of 10 pm, so this corresponds to 0.71 nm, roughly 2.7 times $\Delta \lambda$) was applied to the 8 traces, in order to clean the FP peaks. The results are shown in Fig. \[fig:8sws\] with red dots for the 8 traces, whereas their average is shown as a dashed green line. Hence, all spectral traces recorded for the MMIs are smoothed as described, before using them in the calculations of coupling ratios described in the next subsection. As a final remark on stability, each trace involved a sweep in the OSA of 20 seconds. From Fig. \[fig:8sws\] good setup stability can be inferred, i.e. very little power variation due to mechanical issues (fiber holders, translation stages, tabletop and others) during the time it took to acquire these 8 traces (close to 3 minutes). Though not shown, minor drifts in the alignments started to happen right after the time for the 8 traces. For the MMIs a single trace is collected per output; therefore, it is not subject to the mechanical drifts of the measurement setup.
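The smoothing step described above is a plain moving average; the sketch below applies it to a synthetic trace with the quoted 0.26 nm ripple on a 10 pm grid (the trace itself is illustrative, not measured data):

```python
import numpy as np

# Synthetic spectral trace on a 10 pm grid: a flat level plus a
# Fabry-Perot-like ripple with 0.26 nm peak separation (illustrative).
d_lambda_nm = 0.01
wavelengths = np.arange(1525.0, 1575.0, d_lambda_nm)
trace = 1.0 + 0.1 * np.sin(2 * np.pi * wavelengths / 0.26)

# 71-point moving average (~0.71 nm window), as applied to the
# measured traces before computing coupling ratios.
window = np.ones(71) / 71.0
smoothed = np.convolve(trace, window, mode="valid")

# The window spans ~2.7 ripple periods, so the ripple is strongly
# attenuated while the underlying level is preserved.
print(np.std(trace), np.std(smoothed))
```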
MMIs Coupling ratio
-------------------

The coupling ratios for the MMIs were derived from the spectral traces measured at outputs ’m’, from input ’l’, termed $S_{l,m}(\lambda)$, as: $$\label{eq:K} K_{l,m}(\lambda) = \frac{S_{l,m}(\lambda)}{S_{l,0}(\lambda)+S_{l,1}(\lambda)}$$ where l=0,1 and m=0,1 label the input and output waveguides respectively, and with the spectral traces in linear units. $K_{l,m}(\lambda)$ traces for MMI\#2 and MMI\#3 on die A1 are plotted in Fig. \[fig:mmi\_spectra\]. The results show good agreement with the target coupling ratios, with deviations approximately in the range of $\pm$0.01. For the rest of the dies, similar spectral traces and deviations were obtained from both wafers, despite the etch depth difference between wafers A and B. Note the additional etch depth of 30 nm in wafer B did not change the results significantly, which is in good agreement with the sensitivity analysis reported in [@halir]. The results for all the couplers per die and wafer are compiled in the summary graphs of Fig. \[fig:mmi\_wafer\_die\_1\] and Fig. \[fig:mmi\_wafer\_die\_2\], where the average coupling ratios and standard deviations in the wavelength range of the measurements are shown. MMI \#1 50:50 samples exhibited coupling constants around 0.5 with deviations in the whole wavelength range of less than $\pm$0.01 for all dies, except A1 and B3, where some imbalance is clearly appreciable. For MMIs \#2 to \#4 the graphs are given with broken axes, but with the same interval around the target coupling ratio. Comparing MMIs \#2 and \#4, which both target an 85:15 splitting ratio, the performance of the first proved to be best for all dies. One might be tempted to attribute this to the fact that device \#2 is a canonical (rectangular body) design, whereas \#4 follows the widened body geometry shown in Fig. \[fig:mmi\_sketch\]-(b), i.e. Type B Symmetrized. However MMI \#3 shown in Fig.
\[fig:mmi\_wafer\_die\_1\]-(c) exhibits very good performance, which might be misleadingly interpreted as MMI \#4 being a sub-optimal design. Therefore additional insight is provided in the following. Note the spectral traces $S_{l,m}(\lambda)$ are recorded at the two different outputs of the MMI, each one equipped with a FGC. Ideally both FGCs should have very similar performance. If this is not the case, a minor difference in the average power delivered from the FGCs changes Eq. (\[eq:K\]) into: $$\label{eq:Ks} K_{l,m}^{\pm}(\lambda) = \frac{(1 \pm \frac{\Delta}{2} ) S_{l,m}(\lambda)}{(1 \mp \frac{\Delta}{2} )S_{l,0}(\lambda)+(1 \pm \frac{\Delta}{2} ) S_{l,1}(\lambda)}$$ where $\Delta$ represents the difference between the average power delivered by each FGC. Hence, resorting to Fig. \[fig:Ks\], the calculated coupling ratios, for a given difference in the performance of the FGCs, are more sensitive in the case of coupling ratios closer to 0.5. On the contrary, sensitivity to this issue is lower for more asymmetric couplers. This is clearly noticeable for MMI\#2 B1 in Fig. \[fig:mmi\_wafer\_die\_1\]-(b), as well as for MMI\#7 A3 in Fig. \[fig:mmi\_wafer\_die\_2\]-(c) and MMI\#8 A3 in Fig. \[fig:mmi\_wafer\_die\_2\]-(d). Therefore the efficiency of the FGCs, which may vary from one to another, not only between different dies, but inside the same die too [@hochberg_fgc], is the most likely cause of the cases that fall outside the general trends. Similar conclusions can be inferred for MMIs \#5 to \#8 from Fig. \[fig:mmi\_wafer\_die\_2\].

Excess loss
-----------

An estimation of the excess loss, EL, is derived by combining the MMI measured spectra, $S_{l,m}(\lambda)$, with the spectra of reference straight waveguides, $S_{sw}(\lambda)$, as: $$EL_l(\lambda) = 10\log_{10}\left[\frac{S_{sw}(\lambda)}{S_{l,0}(\lambda)+S_{l,1}(\lambda)}\right]$$ with all the magnitudes in linear units. The spectra of four straight waveguides per die, spanning the same length as the MMIs with inputs/outputs, were measured.
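The bias described by Eq. (\[eq:Ks\]) can be evaluated numerically; the sketch below assumes an illustrative FGC power imbalance $\Delta$ (not a measured figure) and compares a 50:50 coupler with a 95:05 one:

```python
def biased_ratio(k_true, delta):
    """Coupling ratio inferred from output 1 when the two output FGCs
    deliver powers differing by a relative amount delta (Eq. K+-)."""
    s1 = k_true          # ideal power at output 1
    s0 = 1.0 - k_true    # ideal power at output 0
    return ((1 + delta / 2) * s1) / ((1 - delta / 2) * s0 + (1 + delta / 2) * s1)

delta = 0.1  # illustrative 10% FGC power imbalance (assumed, not measured)
bias_50 = abs(biased_ratio(0.50, delta) - 0.50)
bias_95 = abs(biased_ratio(0.95, delta) - 0.95)
print(bias_50, bias_95)
# The bias is largest for ratios near 0.5 and much smaller for
# asymmetric couplers, as argued in the text.
```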
The maximum value at each wavelength was calculated to obtain a single trace $S_{sw}(\lambda)$ in each die for normalization. The average excess loss over all wavelengths was then calculated. The result needed to be corrected by adding 0.4 dB, meaning the deviations due to fiber alignments and differences in FGC performance are at least 0.4 dB. Consequently, the EL values obtained are a best case. The actual value is likely to be 0.2-0.4 dB larger (i.e. despite considering the FGCs to have equal performance, the fibers measuring the straight waveguides might be slightly misaligned). Under these conditions, the numerical values for the average excess loss for all wafers, dies and MMIs were calculated. Consequently, one cannot derive an absolutely accurate value for the EL from these measurements. Otherwise, a full in-depth statistical analysis of a larger number of samples would be required, which is out of the scope of this paper (see [@phd:dumon], [@hochberg_fgc] and [@bogaerts_challenges] for reproducibility issues). From these calculations, the interval in which the EL lies can be estimated at the sight of Fig. \[fig:EL\]. Panel (a) in the figure shows the estimated EL per MMI, i.e. all the dies for each MMI. Panel (b) shows the estimated EL for all the MMIs in a die. Except for MMI\#8 at die A3, Fig. \[fig:EL\]-(a), which had excess loss close to 2 dB, all the other devices/dies had losses on average around 0.6 dB, although die A3 exhibited comparatively higher excess loss for all the MMIs, as can be inferred from Fig. \[fig:EL\]-(b).

Conclusion and outlook
======================

In this paper the design, fabrication and measurement of MMIs with arbitrary coupling ratio in Silicon-on-Insulator technology have been reported. The design methodology consisted of a combination of a theoretical first guess and numerical optimization, using the Beam Propagation Method.
The devices were fabricated on two different wafers, where the waveguides had different etch depths. A very good match between the design and experimental results was obtained in terms of the coupling ratio for the devices. All the coupling ratios were attained within the design wavelength range of 1525-1575 nm with deviations as low as $\pm$0.02. Minor deviations were attributed to the difference in the performance of the focusing grating couplers. Except for one die, the estimated average excess loss for the MMIs is around 0.6 dB. The statistical and reproducibility information in this paper can be readily incorporated by others into device, circuit and on-chip network simulation and design tools, in order to assess more complex photonic chip circuits based on these MMIs.

Acknowledgment {#acknowledgment .unnumbered}
==============

The authors acknowledge financial support by the Spanish CDTI NEOTEC start-up programme, the Spanish MICINN project TEC2010-21337, acronym ATOMIC, the Spanish MINECO project TEC2013-42332-P, acronym PIC4ESP, project FEDER UPVOV 10-3E-492 and project FEDER UPVOV 08-3E-008. B. Gargallo acknowledges financial support through FPI grant BES-2011-046100. J.S. Fandiño acknowledges financial support through FPU grant AP2010-1595. The following wavelength dependence for the refractive indices of the materials, given by the Sellmeier equation [@weber], was included in the solver: $$n^2\left(\lambda\right)= 1+\sum_{i=0}^N \frac{B_i\lambda^2}{\lambda^2-C_i} $$ For Si and $\lambda \in \left[1.36-11\right]\mu$m the coefficients are B$_i$={10.66842933, 0.003043475, 1.54133408} and C$_i$={0.3015116485$^2$, 1.13475115$^2$, 1104.0$^2$}$\mu$m$^2$. For SiO$_2$ and $\lambda \in \left[0.21-3.71\right]\mu$m the coefficients are B$_i$={0.6961663, 0.4079426, 0.8974794} and C$_i$={0.0684043$^2$, 0.1162414$^2$, 9.896161$^2$}$\mu$m$^2$.
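The Sellmeier expression above is easy to evaluate; a minimal sketch at $\lambda$ = 1.55 $\mu$m, using the Si coefficients as listed and the standard Malitson fused-silica set for SiO$_2$ (third $B$ coefficient 0.8974794):

```python
import math

def sellmeier(wavelength_um, B, C_um2):
    """Refractive index from n^2 = 1 + sum_i B_i*l^2/(l^2 - C_i)."""
    l2 = wavelength_um ** 2
    return math.sqrt(1.0 + sum(b * l2 / (l2 - c) for b, c in zip(B, C_um2)))

# Si coefficients (valid 1.36-11 um), as listed in the appendix.
B_si = [10.66842933, 0.003043475, 1.54133408]
C_si = [0.3015116485**2, 1.13475115**2, 1104.0**2]

# SiO2 (fused silica, Malitson) coefficients, valid 0.21-3.71 um.
B_sio2 = [0.6961663, 0.4079426, 0.8974794]
C_sio2 = [0.0684043**2, 0.1162414**2, 9.896161**2]

n_si = sellmeier(1.55, B_si, C_si)
n_sio2 = sellmeier(1.55, B_sio2, C_sio2)
print(f"n_Si(1.55um) = {n_si:.4f}, n_SiO2(1.55um) = {n_sio2:.4f}")
```

Both values land close to the commonly quoted indices at 1.55 $\mu$m (about 3.478 for Si and 1.444 for SiO$_2$), which is a quick way to check a transcription of the coefficients.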
--- abstract: 'In order to maximize the sensitivity of pulsar timing arrays to a stochastic gravitational wave background, we present computational techniques to optimize observing schedules. The techniques are applicable to both single and multi-telescope experiments. The observing schedule is optimized for each telescope by adjusting the observing time allocated to each pulsar while keeping the total amount of observing time constant. The optimized schedule depends on the timing noise characteristics of each individual pulsar as well as the performance of instrumentation. Several examples are given to illustrate the effects of different types of noise. A method to select the most suitable pulsars to be included in a pulsar timing array project is also presented.' author: - | K. J. Lee $^{1,2}$[^1], C. G. Bassa$^{2}$, G. H. Janssen$^{2}$, R. Karuppusamy$^{1,2}$, M. Kramer$^{1,2}$, R. Smits$^{2,3}$ and B. W. Stappers$^{2}$\ $^1$[Max-Planck-Institut für Radioastronomie, Auf dem Hügel 69, D-53121 Bonn, Germany]{}\ $^2$[Jodrell Bank Centre for Astrophysics, University of Manchester, Manchester M13 9PL, UK]{}\ $^3$[Stichting ASTRON, Postbus 2, 7990 AA Dwingeloo, The Netherlands]{}\ title: The optimal schedule for pulsar timing array observations --- \[firstpage\] [pulsar: general — gravitational wave]{} Introduction ============ Millisecond pulsars (MSPs) are stable celestial clocks, so that the timing residuals, the differences between the observed and the predicted time of arrival (TOA) of their pulses, are usually minute compared to the total length of the data span. A stochastic gravitational wave (GW) background leaves angular dependent correlations in the timing residuals of widely separated pulsars (for general relativity see @HD83, for alternative gravity theories see @LJR08 [@LJR10]), i.e. the correlation coefficient between timing residuals of a pulsar pair is a function of the angular distance between the two pulsars. 
Such a spatial correlation in pulsar timing signals makes it possible to directly detect GWs using pulsar timing arrays (PTAs; @HD83 [@FB90]). Previous analyses [@JHLM05] have calculated the PTA sensitivity to a stochastic GW background generated by super massive black hole (SMBH) binaries at cosmological distances [@JB03; @SHMV04]. They have shown that a positive detection of the GW background is feasible, if one uses state of the art pulsar timing technologies. Such encouraging results triggered subsequent observational efforts. At present, several groups are trying to detect GWs using PTAs: i) the European Pulsar Timing Array (EPTA; @EPTA06 [@EPTA10; @FHB10; @VLJ11]) with a sub-project, the Large European Array for Pulsars (LEAP, @KS10 [@FHB10]), combining data from the Lovell telescope, the Westerbork Synthesis Radio Telescope, the Effelsberg 100-m Radio Telescope, the Nançay Decimetric Radio Telescope, and the Sardinia Radio Telescope[^2], ii) the Parkes Pulsar Timing Array (PPTA; @Man08 [@Hob09; @PPTA10]) using observations with the Parkes radio telescope augmented by public domain observations from the Arecibo Observatory, iii) the North-American Nanohertz Observatory for Gravitational waves (NANOGrav, @Jenet09) using data from the Green Bank Telescope and the Arecibo Observatory, and iv) Kalyazin Radio Astronomical Observatory timing [@Rodin01]. Besides these on-going projects, international cooperative efforts, e.g. the International Pulsar Timing Array (IPTA, @Ipta10), and future telescopes with better sensitivity, e.g. the Five-hundred-meter Aperture Spherical Radio Telescope (FAST, @NWZZJG04 [@SLKMSJN09]) and the Square Kilometre Array (SKA, @KS10 [@SKSL09]), are planned to join the PTA projects to increase the chances of detecting GWs. Operational questions arise naturally from such PTA campaigns, e.g. how should the observing schedule be arranged to maximize our opportunity to detect the GW signal? How much will we benefit from such optimization?
In this paper, we try to answer these questions. The paper is organized as follows: In Section \[sec:decs\], we extend the formalism of [@JHLM05] to calculate the GW detection significance as a function of the observing schedule, i.e. the telescope time allocated to each pulsar. Then we describe the technique to maximize the GW detection significance in Section \[sec:optintro\]. Frameworks of the optimization problem are described in Section \[sec:optbak\], and the algorithms to optimize a single and multiple telescope array are given in Section \[sec:optsig\] and Section \[sec:optmul\] respectively. The results are presented in Section \[sec:res\] and we discuss related issues in Section \[sec:con\].

Analytical Calculation For GW Detection Significance {#sec:decs}
====================================================

In this section, we calculate the statistical significance $S$ for detecting the stochastic GW background using PTAs. We consider TOAs from multiple pulsars, where each set may be collected from different telescopes or data acquisition systems. To detect the GW background, one correlates the TOAs between pulsar pairs and checks if the GW-induced correlation is significant. [@JHLM05] have calculated the GW detection significance for the case where the noise in the TOAs has a white spectrum with an equal root-mean-square (RMS) level for all pulsars. To investigate the optimal observing schedule, we have to generalize the calculation, such that we can explicitly check the dependence of the GW detection significance on the noise properties of each individual pulsar. Under the influence of a stochastic gravitational wave background, the pulsar timing residual $R$ from a standard pulsar timing pipeline contains two components, the GW-induced signal ${ {s} }$ and noise from other contributions ${ {n} }$.
In this section, we determine the statistical properties of ${ {s} }$ and ${ {n} }$ first, and then calculate the GW detection significance. Statistics for GW-induced pulsar timing signal {#sec:psr} ---------------------------------------------- The spectrum of the stochastic GW background is usually assumed to be a power-law, in which the characteristic strain ($h_{\rm c}$) of the GW background is $h_{\rm c}=A_{0} (f/f_{0})^{\alpha}$. Here, $A_{0}$ is the dimensionless amplitude for the background at $f_{0}=1\, \textrm{yr}^{-1}$, and $\alpha$ is the spectral index. Under the influence of such a GW background, the power spectrum $S_{\rm s}(f)$ of the GW-induced pulsar timing residual ${ {s} }$ is [@JHLM05] $$S_{\rm s}(f)= \frac{A_{0}^2 f^{2\alpha-3}}{12 \pi^2 f_{0}^{2\alpha}}\,. \label{eq:powsgw}$$ GWs perturb the space-time metric at the Earth. This introduces a correlation in the timing signal of two pulsars. The correlation coefficient $H(\theta)$ between the GW-induced signals of two pulsars with an angular separation of $\theta$ is called the Hellings and Downs function [@HD83] given as $$H(\theta)=\left\{\begin{array}{l} \frac{3+\cos\theta}{8}-\frac{3(\cos\theta-1)}{2} \ln \left[\sin\left(\frac{\theta}{2}\right)\right] \textrm{, if }\theta\neq 0\,, \\ 1 \textrm{, if }\theta=0\,. \end{array}\right. \label{eq:hdfun}$$ The spectral properties, [equation (\[eq:powsgw\])]{}, together with the spatial correlation, [equation (\[eq:hdfun\])]{}, fully characterize the statistical properties of the GW-induced signals. For an isotropic GW background, the correlations between the GW-induced signals are $$\langle { {\,^{i}\negthinspace}}s_{k}{ {\,^{j}\negthinspace}}s_{k'}\rangle=\sigma_{\rm g}^2 H({ {\,^{i}\negthinspace}}{ {\,^{j}\negthinspace}}\theta) \gamma_{kk'}\,. \label{eq:corgws}$$ Here, we follow the notation that the subscript on the right is the index of sampling and the superscript on the left is the index for the pulsar. 
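The Hellings and Downs function of equation (\[eq:hdfun\]) can be evaluated directly; a minimal sketch:

```python
import math

def hellings_downs(theta):
    """Correlation coefficient H(theta) between the GW-induced timing
    signals of two pulsars separated by an angle theta (radians)."""
    if theta == 0.0:
        return 1.0
    return (3.0 + math.cos(theta)) / 8.0 \
        - 1.5 * (math.cos(theta) - 1.0) * math.log(math.sin(theta / 2.0))

# Reference values: co-located pulsars are fully correlated, pulsars
# 90 degrees apart are slightly anti-correlated, and antipodal
# pulsars have H = 0.25.
print(hellings_downs(0.0))          # 1.0
print(hellings_downs(math.pi / 2))  # ~ -0.1449
print(hellings_downs(math.pi))      # 0.25
```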
For example, we denote the $k$-th measurement of a timing residual of the $i$-th pulsar as ${ {\,^{i}\negthinspace}}R_{k}$, the GW-induced signal as ${ {\,^{i}\negthinspace}}s_{k}$ and other noise contributions as ${ {\,^{i}\negthinspace}}n_{k}$. $\sigma_{\rm g}$ is the RMS level of the GW-induced signal, ${ {\,^{i}\negthinspace}}{ {\,^{j}\negthinspace}}\theta$ is the angular distance between the $i$-th and $j$-th pulsars, and $\gamma_{kk'}$ is the temporal correlation coefficient between the $k$-th and $k'$-th samplings. $\sigma_{\rm g}$ and $\gamma_{kk'}$ are numerically calculated from the GW spectrum as shown in Appendix \[sec:appcor\].

Statistics of noise components from other contributions
-------------------------------------------------------

A purely theoretical modeling of the noise part ${ {n} }$ is complex, because it depends [@FB90; @SC10; @CS10] on the properties of each individual pulsar, the instrumentation, and radio frequency interference. We therefore model the noise phenomenologically using observational knowledge. In this paper, the noise part of a pulsar timing residual is modeled as a superposition of a white noise component and a red noise component, where the white noise is attributed to the measurement uncertainty of the TOA due to radiometer noise and pulse jitter noise [@LKL12]. Timing residuals of several millisecond pulsars show clear evidence of temporally correlated noise, although its origin is not yet clear [@VBC09]. The red noise components are used to empirically model such effects. We further assume that the noise components are not correlated between any two different pulsars[^3]. For each pulsar, three parameters are used to characterize the noise spectrum. These parameters are the RMS level of the white noise ${ {\,^{i}\negthinspace}}\sigma_{\rm w}$, the RMS level of the red noise ${ {\,^{i}\negthinspace}}\sigma_{\rm r}$, and the spectral index of the red noise ${ {\,^{i}\negthinspace}}\beta$.
The white noise spectrum is $${ {\,^{i}\negthinspace}}S_{\rm w}(f)=\frac{{ {\,^{i}\negthinspace}}\sigma_{\rm w}}{\widetilde{\sigma_{\rm w}}}\,, \label{eq:whitespec}$$ and the red noise spectrum is $${ {\,^{i}\negthinspace}}S_{\rm r}(f)=\frac{{ {\,^{i}\negthinspace}}\sigma_{\rm r} f^{{ {\,^{i}\negthinspace}}\beta}}{ \widetilde{\sigma_{\rm r}}}\,, \label{eq:redspec}$$ where $\widetilde{\sigma_{\rm w}}$ and $\widetilde{\sigma_{\rm r}}$ are normalization constants. From the spectrum, one can derive the correlation between noise components $$\langle { {\,^{i}\negthinspace}}n_{k} { {\,^{j}\negthinspace}}n_{k'}\rangle=\left({ {\,^{i}\negthinspace}}\sigma_{\rm w}^2\delta_{kk'} +{ {\,^{i}\negthinspace}}\sigma_{\rm r}^2 g_{kk'}\right)\delta_{ij}\,. \label{eq:cornoi}$$ Following our conventions, ${ {\,^{i}\negthinspace}}n_{k}$ is the noise for the $k$-th sampling of the $i$-th pulsar. $\delta_{ij}$ is the Kronecker delta, i.e. $\delta_{ij}=1$ if $i=j$, and $\delta_{ij}=0$ otherwise. The parameters $\widetilde{\sigma_{\rm w}}$, $\widetilde{\sigma_{\rm r}}$, and the correlation coefficients $g_{kk'}$ are calculated using a numerical simulation shown in [Appendix \[sec:appcor\]]{}.

GW background detection significance {#sec:gwdecs}
------------------------------------

Following [@JHLM05], we calculate the cross power ${ {\,^{i}\negthinspace}}{ {\,^{j}\negthinspace}}c$ between timing residuals and then compare it with the predicted correlation coefficient $H(\theta)$ to check whether the GW signal is significant. The cross power ${ {\,^{i}\negthinspace}}{ {\,^{j}\negthinspace}}c$ is $${ {\,^{i}\negthinspace}}{ {\,^{j}\negthinspace}}c=\frac{1}{m} {{\sum_{k=1}^{m}}}{{\,^{i}\negthinspace R}}_{k} {{\,^{j}\negthinspace R}}_k, \label{eq:defcorr}$$ where ${{\,^{i}\negthinspace R}}_{k}, {{\,^{j}\negthinspace R}}_{k}$ are the timing residuals of the $i$-th and the $j$-th pulsars for the $k$-th timing session, and $m$ is the number of data points for a given pulsar.
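The cross power of [equation (\[eq:defcorr\])]{} transcribes directly into code; a sketch (the function name and the array layout, one residual series per pulsar, are our assumptions):

```python
import numpy as np

def cross_power(Ri, Rj):
    """Cross power between the residual series of two pulsars,
    c_ij = (1/m) * sum_k Ri[k] * Rj[k] (eq. defcorr)."""
    Ri = np.asarray(Ri, dtype=float)
    Rj = np.asarray(Rj, dtype=float)
    return np.mean(Ri * Rj)

# For identical residual series, the cross power equals the mean-square
# amplitude of the series.
c = cross_power([1.0, -1.0, 2.0], [1.0, -1.0, 2.0])
```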
It is assumed that the data from different pulsars overlap with each other and that the number of data points is identical for all the pulsars, similar to the discussion in [@YCH11]. These assumptions are good approximations in calculating the GW detection sensitivity. In the case where the TOAs are mis-aligned or the numbers of data points differ, one can combine data points so that the numbers of data points become identical. This operation retains most of the GW detection sensitivity because: i) the RMS error decreases when data points are combined, and this reduction compensates for the smaller number of data points; ii) the spectrum of the GW-induced timing signal is steep, so most of the GW-induced signal resides in the low-frequency components, which are preserved by the combining operation. The comparison between the ${ {\,^{i}\negthinspace}}{ {\,^{j}\negthinspace}}c$ and the Hellings-Downs function is carried out by computing a second correlation, which gives the GW detection significance $S$ as $$S=\sqrt{M} \frac{\sum_{{ i-j\,\textrm{pairs}}} ({ {\,^{i}\negthinspace}}{ {\,^{j}\negthinspace}}c-\overline{c}) (H({ {\,^{i}\negthinspace}}{ {\,^{j}\negthinspace}}\theta)-\overline{H}) }{\sqrt{\sum_{ { i-j\,\textrm{pairs}}}({ {\,^{i}\negthinspace}}{ {\,^{j}\negthinspace}}c-\overline{c})^2 \sum_{{ i-j\,\textrm{pairs}}} (H({ {\,^{i}\negthinspace}}{ {\,^{j}\negthinspace}}\theta)-\overline{H})^2}}, \label{eq:decs}$$ where the summation $\sum_{{ i-j\,\textrm{pairs}}}$ sums over all independent pulsar pairs except the case where $i=j$, i.e.
$$\sum_{{ i-j\,\textrm{pairs}}}\equiv\sum_{i=1}^{N}\sum_{j=1}^{i-1}\,,$$ and $$\begin{aligned} \overline{ c}&=&\frac{1}{M}\sum_{{ i-j\,\textrm{pairs}}} { {\,^{i}\negthinspace}}{ {\,^{j}\negthinspace}}c\,, \\ \overline{H}&=&\frac{1}{M}\sum_{{ i-j\,\textrm{pairs}}} H({ {\,^{i}\negthinspace}}{ {\,^{j}\negthinspace}}\theta)\,.\end{aligned}$$ Given $N$ pulsars, the sum $M=\sum_{{ i-j\,\textrm{pairs}}} 1=N(N-1)/2$ is the number of independent pulsar pairs. To evaluate the quality of the detector, we need the expectation for the detection significance $\langle S \rangle$, which is $$\langle S \rangle \simeq \frac{\sum_{ { i-j\,\textrm{pairs}}} \langle ({ {\,^{i}\negthinspace}}{ {\,^{j}\negthinspace}}c-\overline{\langle c\rangle}) (H({ {\,^{i}\negthinspace}}{ {\,^{j}\negthinspace}}\theta)-\overline{H}) \rangle }{\sqrt{M}\Sigma_c \Sigma_H} \label{eq:exps},$$ where $$\begin{aligned} \Sigma_{\rm H}&=&\sqrt{\frac{1}{M}\sum_{ { i-j\,\textrm{pairs}}} [H({ {\,^{i}\negthinspace}}{ {\,^{j}\negthinspace}}\theta)-\overline{H}]^2}\,, \\ \Sigma_{c}&=&\sqrt{\left \langle { \frac{1}{M}\sum_{ { i-j\,\textrm{pairs}}} [c({ {\,^{i}\negthinspace}}{ {\,^{j}\negthinspace}}\theta)-\overline{\langle c\rangle}]^2} \right\rangle}\,, \end{aligned}$$ and the $\langle \cdot \rangle$ denotes the ensemble average. 
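In code, [equation (\[eq:decs\])]{} is $\sqrt{M}$ times the Pearson correlation between the measured cross powers and the Hellings–Downs values over all pulsar pairs. A minimal sketch (the function name and the convention that both inputs list the $M$ pairs in the same order are our assumptions):

```python
import numpy as np

def detection_significance(c_pairs, H_pairs):
    """S = sqrt(M) * Pearson correlation of the cross powers c_ij against
    the Hellings-Downs values H(theta_ij), over the M pulsar pairs."""
    c = np.asarray(c_pairs, dtype=float)
    H = np.asarray(H_pairs, dtype=float)
    M = len(c)
    dc = c - c.mean()
    dH = H - H.mean()
    return np.sqrt(M) * np.sum(dc * dH) / np.sqrt(np.sum(dc**2) * np.sum(dH**2))

# If the cross powers follow H exactly (up to an offset and scale), the
# correlation is 1 and S attains its maximum value sqrt(M).
H = np.array([0.25, -0.10, 0.05])
S = detection_significance(2.0 * H + 5.0, H)
```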
As we show in Appendix \[sec:appS\], the expected GW detection significance is $$\langle S\rangle \simeq \sqrt{M} \left[{1+\frac{ \sum_{{ i-j\,\textrm{pairs}}}\left( { {\,^{i}\negthinspace}}{ {\,^{j}\negthinspace}}A+ { {\,^{i}\negthinspace}}{ {\,^{j}\negthinspace}}B\right) }{M \Sigma_{\rm H}^2 }}\right]^{-\frac{1}{2}}\,, \label{eq:decse}$$ where $$\begin{aligned} { {\,^{i}\negthinspace}}{ {\,^{j}\negthinspace}}A&=&\frac{1}{m^2}\sum_{kk'}\left[\left(1+H({ {\,^{i}\negthinspace}}{ {\,^{j}\negthinspace}}\theta)\right)\gamma_{kk'}^2 \nonumber \right.\\ &&\left.+\left({ {\,^{i}\negthinspace}}\eta_{ {\rm r}, kk'} + { {\,^{j}\negthinspace}}\eta_{ {\rm r}, kk'}\right){\gamma_{kk'}}+{ {\,^{i}\negthinspace}}\eta_{ {\rm r}, kk'} { {\,^{j}\negthinspace}}\eta_{ {\rm r}, kk'}\right] \,,\\ { {\,^{i}\negthinspace}}{ {\,^{j}\negthinspace}}B&=& \frac{1}{m}\left({ {\,^{i}\negthinspace}}\eta_{ \rm w} +{ {\,^{j}\negthinspace}}\eta_{\rm w}+{ {\,^{i}\negthinspace}}\eta_{\rm w}{ {\,^{j}\negthinspace}}\eta_{\rm w} +{ {\,^{i}\negthinspace}}\eta_{\rm r} { {\,^{j}\negthinspace}}\eta_{\rm w} +{ {\,^{j}\negthinspace}}\eta_{\rm r}{ {\,^{i}\negthinspace}}\eta_{\rm w}\right)\,, \label{eq:AB}\end{aligned}$$ and $\eta$ denotes the ratio between the power of noise components and the power of the GW-induced signal, which are $$\begin{aligned} { {\,^{i}\negthinspace}}\eta_{ {\rm r}, kk'}&=& \frac{{ {\,^{i}\negthinspace}}\sigma_{\rm r}^2 { {\,^{i}\negthinspace}}g_{kk'}}{\sigma_{\rm g}^2}\, \\ { {\,^{i}\negthinspace}}\eta_{\rm r}&=& \frac{{ {\,^{i}\negthinspace}}\sigma_{\rm r}^2}{\sigma_{\rm g}^2}\, \\ { {\,^{i}\negthinspace}}\eta_{\rm w}&=& \frac{{ {\,^{i}\negthinspace}}\sigma_{\rm w}^2}{\sigma_{\rm g}^2}\,.\end{aligned}$$ If the noise level ${ {\,^{i}\negthinspace}}\sigma_{\rm n}$ is identical for all pulsars and there is no red noise component (${ {\,^{i}\negthinspace}}\sigma_{\rm w}={ {\,^{j}\negthinspace}}\sigma_{\rm w}$ and ${ {\,^{i}\negthinspace}}\sigma_{\rm r}=0$ for all the $i,j=1\ldots N$), 
[equation (\[eq:decse\])]{} reduces to the result found by [@JHLM05] and [@VBC09]. The ${ {\,^{i}\negthinspace}}{ {\,^{j}\negthinspace}}A$ terms are independent of telescope integration time, because they only contain the RMS level of the GW-induced signal and the red noise, both of which are independent of telescope integration time. The observing schedule, the plan of allocating telescope time to each pulsar, only changes the ${ {\,^{i}\negthinspace}}{ {\,^{j}\negthinspace}}B$ terms, which depend on the ratio between signal and noise amplitude. By optimizing the observing schedule, we can reduce the summation of ${ {\,^{i}\negthinspace}}{ {\,^{j}\negthinspace}}B$, such that the GW detection significance is maximized. We present the technique to maximize the GW detection significance in the next section.

Optimization Of The GW Detection Significance Under Observing Constraints {#sec:optintro}
=========================================================================

Pulsar timing observations are usually conducted as a series of successive observing sessions. In each session, multiple pulsars are observed using either one or multiple telescopes. There are basically two ways to use multiple telescopes. The simple way is to use each telescope independently, combine the TOA data from each telescope, remove the time jumps between the data sets, and form a single TOA data set [@JSKN08]. The other way, as in the LEAP project, is to use the telescopes simultaneously to form a phased array and then calculate the TOAs from the phased-array data. Because of these different ways of using multiple telescopes, the corresponding optimization techniques also differ. We answer the following questions in this section: i) What are the variables to optimize? ii) What are the constraints on the optimization? iii) How do we perform the optimization?
The objective, variables, and constraints of the optimization {#sec:optbak}
-------------------------------------------------------------

Given a fixed amount of telescope time, we can adjust the amount of observing time allocated to each pulsar. Increasing the observing time for one pulsar will reduce its timing measurement error, but will increase the timing errors of the other pulsars. Naturally, for the purpose of detecting GWs, the optimization objective is to maximize the expected GW detection significance $\langle S \rangle$, while the constraint is the total amount of telescope time for the project. Generally, for a timing project using ${ N_{\rm tel}}$ telescopes to observe $N$ pulsars, we need $2{ N_{\rm tel}}+3N$ input parameters to characterize the whole timing project. $2{ N_{\rm tel}}$ parameters are used to characterize the ${ N_{\rm tel}}$ telescopes, where each telescope is quantified by its *gain* (${ {\cal G}}$) and its *total available telescope time* ($\tau$). $3N$ parameters are used to characterize the timing behavior of the $N$ pulsars. The red noise level $\sigma_{\rm r}$ and spectral index $\beta$ are assumed to be pulsar intrinsic. The observed white noise RMS level $\sigma_{\rm w}$ depends on the telescope gain, the telescope time allocated to the pulsar, and other parameters intrinsic to the pulsar (e.g. flux, pulse width, and so on).
For single telescope cases, we can encapsulate the dependence of the white noise level on the schedule into the normalized white noise level $\sigma_{\rm 0}$ and pulse jitter noise level $\sigma_{\rm J}$ [@FB90; @SC10; @CS10] $${ {\,^{i}\negthinspace}}\sigma_{\rm w}=\left({ {\,^{i}\negthinspace}}\sigma_{0}^2 { {\cal G}}^{-2} +{ {\,^{i}\negthinspace}}\sigma_{\rm J}^2\right)^{1/2}\left(\frac{{ {\,^{i}\negthinspace}}\tau}{1 \textrm{hr}}\right)^{-1/2} , \label{eq:scale}$$ where ${ {\,^{i}\negthinspace}}\sigma_{\rm w}$ is the measured RMS level for the white noise component of the $i$-th pulsar, ${ {\,^{i}\negthinspace}}\tau$ is the telescope time being used for the $i$-th pulsar per observing session, and ${ {\cal G}}$ is the telescope gain. The normalized noise level ${ {\,^{i}\negthinspace}}\sigma_{0}$ and the ${ {\,^{i}\negthinspace}}\sigma_{\rm J}$ are used to characterize the radiometer noise and the pulse jitter noise of the pulsar. On the one hand, if there is no pulse jitter noise, $\sigma_{\rm 0}$ will be the observed RMS level of the white noise for a 1 hour observation using a telescope with unit gain ${ {\cal G}}=1$[^4]. On the other hand, if we have a telescope with infinite gain, the pulsar timing accuracy will be limited by the pulse jitter, and $\sigma_{\rm J}$ will be the observed RMS level of white noise for a 1 hour observation. If phased array observations are performed using multiple telescopes (as done in the LEAP project), the situation is identical to the case of a single telescope, and we use the effective gain for the array to determine the RMS level of noise. For multiple incoherent telescopes, the gain of ${ N_{\rm tel}}$ telescopes can be summarized by a vector ${ {\cal G}}_{\nu}$, where $\nu=1\ldots{ N_{\rm tel}}$ and the $\nu$-th component ${ {\cal G}}_{\nu}$ is the gain for the $\nu$-th telescope. In a similar fashion, the available telescope time of each telescope is summarized by the vector $\tau_{\nu}$. 
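For concreteness, the scaling of [equation (\[eq:scale\])]{} can be written as a small helper (a sketch; the function name is ours, and $\tau$ is taken in hours as in the equation):

```python
import numpy as np

def white_noise_rms(sigma0, sigmaJ, gain, tau_hours):
    """Observed white-noise RMS of eq. scale: the radiometer term sigma0/G
    and the jitter term sigmaJ add in quadrature, and the result scales as
    tau^(-1/2) with the telescope time per session (in hours)."""
    return np.sqrt((sigma0 / gain)**2 + sigmaJ**2) / np.sqrt(tau_hours)

# Doubling the gain halves the radiometer-limited noise; quadrupling the
# integration time halves it again.
```

Note that for a jitter-dominated pulsar (`sigmaJ` large), increasing the gain no longer helps, which is what drives the ‘case D’ behavior discussed later.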
The definition of the observing schedule becomes more complex, since we need to specify the observation time for each pulsar using each telescope. Furthermore, the schedule should also include information on the telescope availability, e.g. certain pulsars may not be visible to some telescopes for geographical reasons. In this paper, we use the *resource allocation matrix* ${\rm \bf O}$ to describe the telescope availability, where the $i$-th row, $\nu$-th column element ${ {\,^{i}\negthinspace}}O_{\nu}$ indicates whether we use the $\nu$-th telescope to observe the $i$-th pulsar, i.e. ${ {\,^{i}\negthinspace}}O_{\nu}=1$ if the $\nu$-th telescope observes the $i$-th pulsar, and ${ {\,^{i}\negthinspace}}O_{\nu}=0$ otherwise. The telescope time allocation is described by another matrix, the *schedule matrix* ${\rm \bf P}$, where the $i$-th row, $\nu$-th column element ${ {\,^{i}\negthinspace}}P_{ \nu}$ is the time allocated to the $\nu$-th telescope for observing the $i$-th pulsar. With the schedule matrix ${\rm \bf P}$, the equivalent RMS level of the white noise in the combined data is $${ {\,^{i}\negthinspace}}\sigma_{\rm w}=\left[{ {\,^{i}\negthinspace}}\sigma_{0}^2 \left(\sum_{\nu=1}^{{ N_{\rm tel}}} { {\cal G}}_{\nu}^2{ {\,^{i}\negthinspace}}P_{\nu} { {\,^{i}\negthinspace}}O_{\nu} \right)^{-1}+{ {\,^{i}\negthinspace}}\sigma_{\rm J}^2 \left(\sum_{\nu=1}^{{ N_{\rm tel}}}{ {\,^{i}\negthinspace}}P_{\nu} { {\,^{i}\negthinspace}}O_{\nu} \right)^{-1}\right]^{1/2}\,. \label{eq:incadd}$$ The main reason to introduce the resource allocation matrix ${ {\,^{i}\negthinspace}}O_{\nu}$ is to handle the complexity of telescope availability, so that the telescope time can be treated in an identical way independent of the availability. For a single telescope or a phased array, the optimization constraint is $$\tau=\sum_{i=1}^{N} { {\,^{i}\negthinspace}}\tau\,, \label{eq:cons1}$$ i.e. the total telescope time is pre-fixed to be $\tau$.
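The combined white-noise level of [equation (\[eq:incadd\])]{} can be sketched per pulsar as follows (the function name and the row-wise matrix layout are our assumptions):

```python
import numpy as np

def combined_white_noise(sigma0, sigmaJ, gains, P_row, O_row):
    """Equivalent white-noise RMS for one pulsar observed with several
    incoherent telescopes (eq. incadd). P_row and O_row are that pulsar's
    rows of the schedule matrix P and the resource allocation matrix O."""
    gains = np.asarray(gains, dtype=float)
    used = np.asarray(P_row, dtype=float) * np.asarray(O_row, dtype=float)
    radiometer = sigma0**2 / np.sum(gains**2 * used)  # gain-weighted time
    jitter = sigmaJ**2 / np.sum(used)                 # plain total time
    return np.sqrt(radiometer + jitter)

# With a single unit-gain telescope and 1 hour, this reduces to eq. scale.
sw = combined_white_noise(1e-7, 0.0, [1.0], [1.0], [1.0])
```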
For multiple incoherent telescopes, the above constraint is applied to each telescope individually, i.e. the constraints specify the available observation time $\tau_{\nu}$ for each telescope, which gives $$\tau_{\nu}=\sum_{i=1}^{N}{ {\,^{i}\negthinspace}}P_{\nu} { {\,^{i}\negthinspace}}O_{\nu}\,. \label{eq:cons3}$$ With the observing schedule (i.e. the vector ${ {\,^{i}\negthinspace}}\tau$ for a single telescope, or the matrices ${ {\,^{i}\negthinspace}}O_{\nu}$ and ${ {\,^{i}\negthinspace}}P_{\nu}$ for incoherent telescopes), one can use [equation (\[eq:scale\])]{} and (\[eq:incadd\]) to determine the white noise level, and then determine the GW detection significance as explained in [Section \[sec:gwdecs\]]{}. The optimization of the observing schedule means that we choose an appropriate ${ {\,^{i}\negthinspace}}\tau$ or ${ {\,^{i}\negthinspace}}P_{\nu}$ such that the expected GW-detection significance $\langle S \rangle$ is maximized under the constraint of [equation (\[eq:cons1\])]{} or (\[eq:cons3\]). From [equation (\[eq:decse\])]{}, the maximization of $\langle S \rangle$ is equivalent to the minimization of the term $\sum_{{ i-j\,\textrm{pairs}}} ({ {\,^{i}\negthinspace}}{ {\,^{j}\negthinspace}}A+{ {\,^{i}\negthinspace}}{ {\,^{j}\negthinspace}}B)$. Because the ${ {\,^{i}\negthinspace}}{ {\,^{j}\negthinspace}}A$ terms are independent of the telescope time, the maximization of $\langle S \rangle$ by adjusting the observing schedule is thus equivalent to minimizing the objective function ${{\cal L}}$ defined as $${{\cal L}}=\sum_{{ i-j\,\textrm{pairs}}} { {\,^{i}\negthinspace}}{ {\,^{j}\negthinspace}}B\,. \label{eq:lag}$$

Optimizing a single telescope {#sec:optsig}
-----------------------------

Our technique to optimize the single telescope schedule takes two steps. The first step is to convert the constrained optimization problem to a constraint-free version, and the second step is to solve the constraint-free optimization.
For comparison, we have also developed an alternative semi-analytical iterative technique in Appendix \[sec:appopt\]. To remove the constraints, we transform the variables ${ {\,^{i}\negthinspace}}\tau$ to a new set of variables $\theta_\mu$, where $\mu$ is $1\ldots N-1$, and the transformation is $$\left(\begin{array}{c} \theta_1 \\ \theta_2 \\ \theta_3\\ \ldots \\ \theta_{N-2} \\ \theta_{N-1} \end{array}\right)=\left(\begin{array}{c} \arccos \left(\frac{\sqrt{\,^{1}\negthinspace \tau }}{\sqrt{\tau }}\right) \\ \arccos \left(\frac{\sqrt{\,^{2}\negthinspace \tau }}{\sqrt{\tau} \sin\theta_1}\right) \\ \arccos \left(\frac{\sqrt{\,^{3}\negthinspace \tau }}{\sqrt{ \tau}\sin\theta_1\sin\theta_2 }\right)\\ \ldots \\ \arccos \left(\frac{\sqrt{\,^{N-2}\negthinspace \tau }}{\sqrt{\tau} \prod_{i=1}^{N-3}\sin\theta_i }\right) \\ \arccos \left(\frac{\sqrt{\,^{N-1}\negthinspace \tau }}{\sqrt{\tau} \prod_{i=1}^{N-2}\sin\theta_i }\right) \end{array}\right)\,, \label{eq:transforma}$$ of which the inverse transformation is $$\left(\begin{array}{c} {\,^{1}\negthinspace \tau } \\ {\,^{2}\negthinspace \tau } \\ {\,^{3}\negthinspace \tau } \\ \ldots \\ {\,^{N-1} \tau } \\ {\,^{N} \tau } \end{array}\right)= \tau \left(\begin{array}{c} \cos^2 \theta_1 \\ \sin^2 \theta_1 \cos^2 \theta_2 \\ \sin^2 \theta_1 \sin^2 \theta_2 \cos^2 \theta_3\\ \ldots \\ \prod_{\mu=1}^{N-2}\sin^2\theta_\mu \cos^2 \theta_{N-1} \\ \prod_{\mu=1}^{N-1}\sin^2\theta_\mu\end{array}\right)\,. \label{eq:transform}$$ Here it is implicitly assumed that ${ {\,^{i}\negthinspace}}\tau \ge 0$; $\prod_i$ and $\prod_\mu$ denote products over the indices $i$ and $\mu$, respectively. Equations (\[eq:transforma\]) and (\[eq:transform\]) are, in fact, the transformation between an $N$-dimensional Cartesian coordinate system and its corresponding hyperspherical coordinate system.
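The inverse transformation of [equation (\[eq:transform\])]{} is easy to implement, and by construction any set of angles yields non-negative times summing to $\tau$. The sketch below minimizes a toy stand-in objective, $\sum_i Q_i/{ {\,^{i}\negthinspace}}\tau$ (our illustrative choice, not the paper's full ${{\cal L}}$), with SciPy's Nelder–Mead simplex:

```python
import numpy as np
from scipy.optimize import minimize

def angles_to_times(theta, tau_total):
    """Inverse hyperspherical transform (eq. transform): N-1 free angles
    map to N non-negative times that automatically sum to tau_total."""
    n = len(theta) + 1
    tau = np.empty(n)
    prod = 1.0
    for mu, th in enumerate(theta):
        tau[mu] = tau_total * prod * np.cos(th)**2
        prod *= np.sin(th)**2
    tau[-1] = tau_total * prod
    return tau

# Toy objective: per-pulsar noise power Q_i / tau_i, summed (illustrative).
Q = np.array([1.0, 4.0])
objective = lambda th: np.sum(Q / angles_to_times(th, 1.0))

# Downhill simplex on the single free angle; the time constraint needs no
# explicit handling because the transform enforces it.
res = minimize(objective, x0=np.array([np.pi / 4]), method='Nelder-Mead')
tau_opt = angles_to_times(res.x, 1.0)  # analytic optimum here is [1/3, 2/3]
```

For this toy objective, the analytic optimum allocates time proportional to $\sqrt{Q_i}$, which is also the rule of thumb derived later in the paper.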
The constraint in the new spherical coordinate system corresponds to fixing the radius of the hypersphere (fixing $\tau$), and all angular coordinates $\theta_{\mu}$ are free variables to which one can assign any value without breaking the constraint, i.e. after the above coordinate transformation, the objective ${{\cal L}}$ becomes a function of the new variables $\theta_\mu$, and the constraint [equation (\[eq:cons1\])]{} is automatically satisfied for any $\theta_{\mu}$. We can then find the minimum of ${{\cal L}}$ using numerical methods for constraint-free problems. In this paper, the downhill simplex method [@NM65] is adopted, which usually has better global convergence behavior than other methods [@KWJ94]. Once the optimal $\theta_\mu$ are found, we transform them back to ${ {\,^{i}\negthinspace}}\tau$ using [equation (\[eq:transform\])]{}, which yields the optimal single telescope schedule. In practical situations, one also needs to add telescope slewing time, observing time for calibration sources, and other necessary auxiliary time on top of this telescope time schedule to obtain the final schedule.

Optimizing multiple incoherent telescopes {#sec:optmul}
-----------------------------------------

Similarly to Section \[sec:optsig\], the optimal observing schedule for multiple incoherent telescopes is calculated by minimizing ${{\cal L}}$, although the constraints are slightly more complex here. In this section, we present the optimization technique and later discuss the relation between the optimization of multiple telescopes and single telescope optimization. From the multiple telescope constraint [equation (\[eq:cons3\])]{}, we can see that the constraints are applied to each telescope individually. Thus, the generalization of the method presented in the previous section is straightforward: we apply the transformation [equation (\[eq:transforma\])]{} to each telescope separately. Take telescope 1 as an example.
The first column of matrix $\rm \bf P$, the $\,^{1} \negthinspace P_{\nu}$, is the observing schedule for the first telescope. We can transform those components of $\,^{1} P_{\nu}$ indicated by $\,^{1} \negthinspace O_{\nu}=1$ using [equation (\[eq:transforma\])]{} to remove the constraint of the first telescope. Similarly, by applying the transformation to the other columns of matrix $\rm \bf P$ successively, one can remove all the constraints. With the new constraint-free variables, we use the downhill simplex method to find the optimum. We then transform back to ${ {\,^{i}\negthinspace}}P_{\nu}$, which yields the optimal schedule. In the optimization algorithm, we treat the resource allocation matrix $\rm \bf O$ as input knowledge. A question naturally arises as to whether one can find a better observing schedule for the same telescopes with the same amounts of telescope time but with a different $\rm \bf O$, i.e. whether one can add pulsars to or remove pulsars from the schedules of certain telescopes to increase the detection significance. The configuration with all ${ {\,^{i}\negthinspace}}O_{\nu}=1$ allows one to use any telescope to observe any pulsar, i.e. allows one to adjust the schedule with the maximal number of degrees of freedom. In this way, the optimal schedule for the configuration with all ${ {\,^{i}\negthinspace}}O_{\nu}=1$ leads to the highest GW detection significance. If any schedule has the same detection significance as the optimal schedule with all ${ {\,^{i}\negthinspace}}O_{\nu}=1$, we call such a schedule ‘globally optimal’. To determine whether the global optimum is achievable, we first investigate the case where the pulse jitter noise can be ignored, and then discuss situations where the pulse jitter noise becomes important. When the pulse jitter noise is neglected, there is a close relation between the single telescope optimization and the multiple telescope optimization.
In fact, under certain conditions, the optimization for multiple telescopes is equivalent to the single telescope optimization. To see this, we replace the variables ${ {\,^{i}\negthinspace}}P_{\nu}$ and ${ {\,^{i}\negthinspace}}O_{\nu}$ in [equation (\[eq:incadd\])]{} by the *effective telescope time* ${ {\,^{i}\negthinspace}}\tau_{\rm e}$ as follows $${ {\,^{i}\negthinspace}}\tau_{\rm e}=\sum_{\nu=1}^{N_{\rm tel}} { {\cal G}}_{\nu}^2 { {\,^{i}\negthinspace}}P_{\nu} { {\,^{i}\negthinspace}}O_{\nu}\,. \label{eq:taueff}$$ After neglecting $\sigma_{\rm J}$, [equation (\[eq:incadd\])]{} becomes $${ {\,^{i}\negthinspace}}\sigma_{\rm w}={ {\,^{i}\negthinspace}}\sigma_{0} { {\,^{i}\negthinspace}}\tau_{\rm e}^{-1/2} \,, \label{eq:effopta}$$ and the constraints [equation (\[eq:cons3\])]{} reduce to a single constraint $$\sum_{i=1}^{N} { {\,^{i}\negthinspace}}\tau_{\rm e} = \tau_{\rm e}\equiv \sum_{\nu=1}^{N_{\rm tel}} \tau_{\nu} { {\cal G}}_{\nu}^2 \,. \label{eq:effopt}$$ By comparing equations (\[eq:effopta\]) and (\[eq:effopt\]) with [equation (\[eq:cons1\])]{} and (\[eq:scale\]), one can see that the optimization for multiple telescopes is very similar to the single telescope optimization. In fact, if there exists a unique solution of ${ {\,^{i}\negthinspace}}P_{\nu}$ to [equation (\[eq:taueff\])]{} that satisfies each individual constraint of [equation (\[eq:cons3\])]{}, the multiple telescope and single telescope optimizations are mathematically identical. The differences between multiple telescope and single telescope optimization lie in the difference between the constraints [equation (\[eq:cons3\])]{} and (\[eq:effopt\]). For the single telescope case, only one constraint (equation \[eq:cons1\]) is involved. This is very different from the case of multiple telescopes, where $N_{\rm tel}$ constraints (equation \[eq:cons3\]) are present.
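The substitution of [equation (\[eq:taueff\])]{} is a one-liner in code; a small sketch (the function name is ours):

```python
import numpy as np

def effective_time(gains, P_row, O_row):
    """Effective telescope time of one pulsar (eq. taueff):
    tau_e = sum_nu G_nu^2 * P_nu * O_nu over all telescopes."""
    gains = np.asarray(gains, dtype=float)
    return np.sum(gains**2 * np.asarray(P_row, dtype=float)
                  * np.asarray(O_row, dtype=float))

# A gain-2 telescope contributes four times as much effective time per
# hour as a unit-gain telescope: 2^2 * 0.5 + 1^2 * 1 = 3.
te = effective_time([2.0, 1.0], [0.5, 1.0], [1, 1])
```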
The variable substitution in [equation (\[eq:taueff\])]{} combines all $N_{\rm tel}$ constraints (equation \[eq:cons3\]) and forms a single constraint (equation \[eq:effopt\]), in which the constraint of each individual telescope is ignored. Whether the global optimum is achievable is, now, equivalent to whether one can find a solution ${ {\,^{i}\negthinspace}}P_{\nu}$ to [equation (\[eq:taueff\])]{} that satisfies all the individual constraints (equation \[eq:cons3\]). Generally, when solving for ${ {\,^{i}\negthinspace}}P_{\nu}$ from [equation (\[eq:taueff\])]{} and (\[eq:cons3\]), one encounters three situations: a unique solution, multiple solutions, and no solution. In the case of multiple solutions, there are multiple choices for the optimal schedule. All these configurations are identical in the sense that they give the same GW detection significance. The case of no solution can only arise when the constraints for some telescopes cannot be met, i.e. some of the pulsars need more time than the telescopes can give, while some of the pulsars have more telescope time than should be assigned. Take the case with two telescopes and two pulsars as an example, where the gains of the two telescopes are the same, the two pulsars have identical $\sigma_{0}$, and ${ {\,^{i}\negthinspace}}O_{\nu}=\left[\begin{array}{c c}1& 1 \\0& 1\end{array}\right]$, i.e. only the second telescope can observe the second pulsar. Since the two pulsars have the same $\sigma_{0}$, the total effective telescope time should then also be the same for the optimal schedule (i.e. $\,^{1} \tau_{\rm e}= \,^{2} \tau_{\rm e}$). However, if the first telescope does not have enough time, one gets $\,^{1} \tau_{\rm e}< \,^{2} \tau_{\rm e}$ and the globally optimal schedule is not achievable for such configurations. The case in which there is no solution to [equation (\[eq:taueff\])]{} is due to an improper choice of telescopes. Most of the time the matrix $\rm \bf O$ is determined by the sky coverage of the telescopes.
Thus, if no solution can be found, one needs to seek telescopes with the appropriate sky coverage or extend the time on the telescopes with the appropriate sky coverage. In order to identify such a no-solution situation, we show in [Appendix \[app:exist\]]{} that a solution to [equation (\[eq:taueff\])]{} exists if the following conditions are satisfied $$\begin{aligned} { {\,^{i}\negthinspace}}\tau_{\rm e}\le \sum_{\nu=1}^{N_{\rm tel}} { {\cal G}}_{\nu}^2 \tau_{\nu} { {\,^{i}\negthinspace}}O_{\nu}\,, \textrm{ for any index of pulsar }i. \label{eq:existsol}\end{aligned}$$ One can identify the pulsars for which [equation (\[eq:existsol\])]{} fails; these failures indicate which elements of the resource allocation matrix need to be adjusted. We now consider the situation where the pulse jitter noise becomes important. Clearly, if we cannot ignore the pulse jitter noise, the effective telescope time is no longer linearly dependent on the telescope time ${ {\,^{i}\negthinspace}}P_{\nu}$ as in [equation (\[eq:taueff\])]{}, and we do not have a simple method to check whether the global optimum is achieved. However, we can still set all the ${ {\,^{i}\negthinspace}}O_{\nu}=1$, find the globally optimal strategy, and compare it with the optimal strategy for the input ${ {\,^{i}\negthinspace}}O_{\nu}$ to check whether the global optimum is achieved. In [Figure \[fig:mulexam\]]{}, we give four examples for the optimization of multiple telescopes. In these examples, two telescopes are used to observe two pulsars, where the pulsar noise parameters and the telescope parameters are specified in [Table \[tab:ex4\]]{}. These four examples are given as follows. i) ‘Case A’, two identical pulsars are observed with two identical telescopes. The globally optimal schedule is achievable and one can exchange telescope time between the two telescopes.
As indicated in [Figure \[fig:mulexam\]]{}, as long as we assign the same amount of effective telescope time ${ {\,^{i}\negthinspace}}\tau_{\rm e}$ to the two pulsars, the schedule is optimal. ii) ‘Case B’, telescope 1 has twice the gain of telescope 2. Similar to ‘case A’, the globally optimal schedule is achievable and one can exchange telescope time. But due to the gain difference, the telescope time exchange should be weighted by the gain, such that the same amount of effective telescope time is assigned to the two identical pulsars. iii) ‘Case C’, where telescope 1 has more time (1.5 hr) available than telescope 2 (0.5 hr) and only telescope 2 can be used to observe the 2nd pulsar (as indicated by the matrix ${ {\,^{i}\negthinspace}}O_{\nu}$). In this case, the total telescope time is the same as in ‘case A’, but we do not achieve the same low level of ${{\cal L}}$ as in ‘case A’, i.e. the global optimum is not achievable. This is due to the constraint from the matrix ${ {\,^{i}\negthinspace}}O_{\nu}$, which prevents us from reaching the global optimum as discussed above. iv) ‘Case D’, where pulsar 1 is affected by pulse jitter noise. In this case, one does not have the freedom to exchange telescope time, and the optimization suggests that the low-gain telescope (telescope 2) should spend more time on the jitter-affected pulsar (pulsar 1).
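The existence condition of [equation (\[eq:existsol\])]{} can be checked numerically. Below is a sketch applied to the numbers of ‘case C’, where the two identical pulsars would each want half of the total effective time (the helper name is ours):

```python
import numpy as np

def feasible(tau_e_required, gains, tau_available, O):
    """Check eq. existsol for every pulsar: the required effective time must
    not exceed the total effective time of the telescopes allowed to see it."""
    gains = np.asarray(gains, dtype=float)
    # Capacity per pulsar: sum_nu G_nu^2 * tau_nu * O_i_nu
    caps = O @ (gains**2 * np.asarray(tau_available, dtype=float))
    return np.asarray(tau_e_required, dtype=float) <= caps

# 'Case C': unit gains, tau = (1.5 hr, 0.5 hr), and only telescope 2 may
# observe pulsar 2; equal pulsars would each need 1 hr of effective time.
O = np.array([[1.0, 1.0],
              [0.0, 1.0]])
ok = feasible([1.0, 1.0], [1.0, 1.0], [1.5, 0.5], O)
# Pulsar 2 can draw at most 0.5 hr, so the global optimum is unreachable.
```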
![image](fexbw)

  ---------------------------- ---------------------------------------------------------- ----------------------------------------------------------
                               case A:                                                    case B:
  Radiometer noise             $\,^{1} \sigma_{0}=10^{-7}$, $\,^{2} \sigma_{0}=10^{-7}$   $\,^{1} \sigma_{0}=10^{-7}$, $\,^{2} \sigma_{0}=10^{-7}$
  Jitter noise                 $\,^{1} \sigma_{\rm J}=0$, $\,^{2} \sigma_{\rm J}=0$       $\,^{1} \sigma_{\rm J}=0$, $\,^{2} \sigma_{\rm J}=0$
  Observation time             $\tau_{1}=1$, $\tau_{2}=1$                                 $\tau_{1}=1$, $\tau_{2}=1$
  Telescope gain               ${ {\cal G}}_{1}=1$, ${ {\cal G}}_{2}=1$                   ${ {\cal G}}_{1}=2$, ${ {\cal G}}_{2}=1$
  Resource allocation matrix   ${ {\,^{i}\negthinspace}}O_{\nu}=\left[\begin{array}{c c}1& 1 \\ 1& 1\end{array}\right]$   ${ {\,^{i}\negthinspace}}O_{\nu}=\left[\begin{array}{c c}1& 1 \\ 1& 1\end{array}\right]$

                               case C:                                                    case D:
  Radiometer noise             $\,^{1} \sigma_{0}=10^{-7}$, $\,^{2} \sigma_{0}=10^{-7}$   $\,^{1} \sigma_{0}=10^{-7}$, $\,^{2} \sigma_{0}=10^{-7}$
  Jitter noise                 $\,^{1} \sigma_{\rm J}=0$, $\,^{2} \sigma_{\rm J}=0$       $\,^{1} \sigma_{\rm J}=10^{-7}$, $\,^{2} \sigma_{\rm J}=0$
  Observation time             $\tau_{1}=1.5$, $\tau_{2}=0.5$                             $\tau_{1}=1$, $\tau_{2}=1$
  Telescope gain               ${ {\cal G}}_{1}=1$, ${ {\cal G}}_{2}=1$                   ${ {\cal G}}_{1}=2$, ${ {\cal G}}_{2}=1$
  Resource allocation matrix   ${ {\,^{i}\negthinspace}}O_{\nu}=\left[\begin{array}{c c}1& 1 \\ 0& 1\end{array}\right]$   ${ {\,^{i}\negthinspace}}O_{\nu}=\left[\begin{array}{c c}1& 1 \\ 1& 1\end{array}\right]$
  ---------------------------- ---------------------------------------------------------- ----------------------------------------------------------

Results {#sec:res}
=======

As examples, we use pulsar properties measured from data of the PPTA, EPTA and NANOGrav to show the potential benefit of optimizing the observing schedule. The parameters of the pulsars are given in [Table \[tab:psrp\]]{}.
-------------- ------- -------- ------- ---------------------- ------------------- ------------------- -----------------------
PSR J           P       $P_b$    S1400   Array                  PPTA $\sigma_{0}$   EPTA $\sigma_{0}$   NANOGrav $\sigma_{0}$
                (ms)    (d)      (mJy)                          ($\mu$s)            ($\mu$s)            ($\mu$s)
J0030$+$0451    4.87    -        0.6     EPTA, NANOGrav         -                   0.54                0.31
J0218$+$4232    2.32    2.03     0.9     NANOGrav               -                   -                   4.81
J0437$-$4715    5.76    5.74     142.0   PPTA                   0.03                -                   -
J0613$-$0200    3.06    1.20     1.4     PPTA, EPTA, NANOGrav   0.71                0.45                0.50
J0621$+$1002    28.85   8.32     1.9     EPTA                   -                   9.58                -
J0711$-$6830    5.49    -        1.6     PPTA                   1.32                -                   -
J0751$+$1807    3.48    0.3      3.2     EPTA                   -                   0.78                -
J0900$-$3144    11.1    18.7     3.8     EPTA                   -                   1.55                -
J1012$+$5307    5.26    0.60     3.0     EPTA, NANOGrav         -                   0.32                0.61
J1022$+$1001    16.45   7.81     3.0     PPTA, EPTA             0.37                0.48                -
J1024$-$0719    5.16    -        0.7     PPTA, EPTA             0.43                0.25                -
J1045$-$4509    7.47    4.08     3.0     PPTA                   2.68                -                   -
J1455$-$3330    7.99    76.17    1.2     EPTA, NANOGrav         -                   3.83                1.60
J1600$-$3053    3.60    14.35    3.2     EPTA, PPTA             0.32                0.23                -
J1603$-$7202    14.84   6.31     3.0     PPTA                   0.70                -                   -
J1640$+$2224    3.16    175.46   2.0     EPTA, NANOGrav         -                   0.45                0.19
J1643$-$1224    4.62    147.02   4.8     PPTA, EPTA, NANOGrav   0.57                0.56                0.53
J1713$+$0747    4.57    67.83    8.0     PPTA, EPTA, NANOGrav   0.15                0.07                0.04
J1730$-$2304    8.12    -        4.0     PPTA, EPTA             0.83                1.01                -
J1732$-$5049    5.31    5.26     -       PPTA                   1.74                -                   -
J1738$+$0333    5.85    0.35     -       NANOGrav               -                   -                   0.24
J1741$+$1351    3.75    16.34    -       NANOGrav               -                   -                   0.19
J1744$-$1134    4.08    -        3.0     PPTA, EPTA, NANOGrav   0.21                0.14                0.14
J1751$-$2857    3.91    110.7    0.06    EPTA                   -                   0.90                -
J1824$-$2452    3.05    -        0.2     PPTA, EPTA             0.39                0.24                -
J1853$+$1303    4.09    115.65   0.4     NANOGrav               -                   -                   0.17
J1857$+$0943    5.37    12.33    5.0     PPTA, EPTA, NANOGrav   0.82                0.44                0.25
J1909$-$3744    2.95    1.53     3.0     PPTA, EPTA, NANOGrav   0.19                0.04                0.15
J1910$+$1256    4.98    58.47    0.5     EPTA, NANOGrav         -                   0.99                0.17
J1918$-$0642    7.65    10.91    -       EPTA, NANOGrav         -                   0.87                1.08
J1939$+$2134    1.56    -        10.0    PPTA, EPTA, NANOGrav   0.11                0.02                0.03
J1955$+$2908    6.13    117.35   1.1     NANOGrav               -                   -                   0.18
J2019$+$2425    3.94    76.51    -       NANOGrav               -                   -                   0.66
J2124$-$3358    4.93    -        1.6     PPTA                   1.52                -                   -
J2129$-$5721    3.73    6.63     1.4     PPTA                   0.87                -                   -
J2145$-$0750    16.05   6.84     8.0     PPTA, EPTA, NANOGrav   0.86                0.40                1.37
J2317$+$1439    3.44    2.46     4.0     NANOGrav               -                   0.81                0.25
-------------- ------- -------- ------- ---------------------- ------------------- ------------------- -----------------------

Figure \[fig:stw\] shows the comparison between the GW detection significance $\langle S \rangle$ for an unoptimized and an optimized PTA with the same parameters. We can see that the optimal observing strategy increases the GW detection significance. Evaluated from the rising edge of the curve, the optimized arrays are able to detect a GW background 2-3 times weaker than the unoptimized arrays, depending on the pulsar population. Because the radiometer noise $\sigma_{0}{ {\cal G}}^{-1}$ dominates over the pulse jitter noise for most pulsars using current timing techniques, we ignore the pulse jitter noise in these examples. We also note that the larger the red noise level is, the less we gain from optimizing the observing schedule. When the amplitude of the intrinsic red noise is large, the ${ {\,^{i}\negthinspace}}{ {\,^{j}\negthinspace}}A$ terms dominate the detection significance as shown in [equation (\[eq:decse\])]{}, and thus the detection is no longer sensitive to the schedule optimization, which only affects the ${ {\,^{i}\negthinspace}}{ {\,^{j}\negthinspace}}B$ terms. In fact, the optimization is effective only if the pulsar noise level is sensitive to the observing schedule (i.e. when red noise does not dominate). This conclusion applies to any type of GW detector.

![image](f1)

In the optimization process, one always uses a numerical technique to determine the optimal observation plan. It is, nevertheless, worth finding a rule of thumb to determine an ‘approximate’ optimal strategy.
We prove in [Appendix \[sec:appopt\]]{} that, for strong GW cases, the optimal observing schedule depends only weakly on the amplitude of the GW background, and a good approximation for the optimal schedule is $${ {\,^{i}\negthinspace}}\tau=\tau\left(\frac{\sqrt{ { {\,^{i}\negthinspace}}Q} }{\sum_{j=1}^{N} { \sqrt{{ {\,^{j}\negthinspace}}Q} } }\right)\,, \label{eq:optapp}$$ where ${ {\,^{i}\negthinspace}}Q$ is defined as $${ {\,^{i}\negthinspace}}Q={ {\,^{i}\negthinspace}}\sigma_{0}^2 { {\cal G}}^{-2} +\sigma_{\rm J}^2\,.$$ Here the noise parameter ${ {\,^{i}\negthinspace}}Q$ quantifies the noisiness of the $i$-th pulsar observed using a telescope of gain ${ {\cal G}}$. By comparing the numerical optimization and the result from [equation (\[eq:optapp\])]{} in [Figure \[fig:tauvssig\]]{}, we show that the optimal schedule is insensitive to the GW amplitude and is well approximated by [equation (\[eq:optapp\])]{} when the amplitude of the GW is large. For the case of a weak GW background, the optimal schedule for most of the pulsars is still close to [equation (\[eq:optapp\])]{}, but the optimal schedule for noisy pulsars (pulsars with larger ${ {\,^{i}\negthinspace}}Q$) starts to deviate from the analytic approximation. ![The optimal telescope time for each pulsar as a function of its white noise level. The x-axis shows the one-hour pulsar noise level ${ {\,^{i}\negthinspace}}\sigma_{0}$. The y-axis shows the optimal telescope time for the pulsar per observing session. The EPTA pulsars are used in this demonstration, where the total telescope time is 24 hours, i.e. the average telescope time for each pulsar is 1 hour. We indicate ${ {\,^{i}\negthinspace}}\sigma_0$ for each pulsar using a vertical line at the top of the figure. The dotted-dashed line, dashed line, dotted line and solid line are for optimization results with GW amplitudes of $A_{0}=10^{-12}, 10^{-13}, 2\times 10^{-13}$, and $10^{-14}$ respectively.
The result of [equation (\[eq:optapp\])]{} overlaps with the dotted-dashed line. For GWs with amplitude between $10^{-12}$ and $10^{-13}$, the optimal schedules are very close to each other. For the weaker GW cases, e.g. $A_{0}=10^{-14}$, the optimal schedule starts to deviate from the approximation [equation (\[eq:optapp\])]{}. Such a deviation is mainly due to the pulsars with a high noise level. Since these noisy pulsars do not contribute significantly to the GW signal detection, the optimal algorithm starts to reduce their observing time. For most of the pulsars, the optimal schedule is still close to the result from [equation (\[eq:optapp\])]{}. \[fig:tauvssig\] ](f2) There are a few more general conclusions on improving the timing accuracy independent of GW detection algorithms. In order to improve timing accuracy, one can use telescopes with higher effective gain[^5] or increase telescope time. Increasing the telescope gain reduces $\sigma_{\rm w}$, and increasing telescope time reduces both $\sigma_{\rm w}$ and $\sigma_{\rm J}$. The example of case ‘D’ in [Figure \[fig:mulexam\]]{} shows that one should use a low gain telescope on pulsars with a larger jitter noise level, and a high gain telescope on pulsars with larger radiometer noise. This is further supported by the results in [Figure \[fig:svsn\]]{}, which show that increasing the telescope time is more effective than using a high gain telescope for pulse-jitter-noise dominated pulsars. Besides providing the optimal schedule to detect a GW background, our technique answers the question about the optimal *number* of pulsars one should observe in a PTA with a given amount of telescope time. The number of pulsars in a PTA has two effects on the GW detection significance. Firstly, from [equation (\[eq:decse\])]{}, one can see that the significance increases as $\langle S\rangle \propto \sqrt{M}\sim N$.
Secondly, since the telescope time is limited, observing more pulsars increases the TOA noise level, i.e. $\sigma_{\rm w}^2\propto N^{-1}$ given a fixed amount of telescope time. When $N$ becomes large, the two effects mentioned above cancel each other out, and the detection significance becomes insensitive to $N$. In general, when the number of pulsars ($N$) is small, the detection significance increases with $N$. If all pulsars have the same noise level, the detection significance saturates for large $N$, where the saturation level is mainly determined by the available telescope time. In practice, when trying to include more pulsars in a timing array, pulsars with a higher noise level will inevitably be included, so that the detection significance will eventually decrease for any GW detection algorithm. In this way, observing more pulsars does not necessarily help detect the GW background, unless one gets more telescope time. Given the telescope time, the number of pulsars at which the GW detection significance achieves its maximum is the optimal number of pulsars one should use in the PTA. We propose the following algorithm to determine the best sample of pulsars to observe.

1.  From a group of to-be-observed pulsars, choose the two pulsars with the smallest noise levels, then optimize the schedule and compute the GW detection significance.

2.  Include one extra pulsar, optimize the schedule, and compute the GW detection significance; loop over the rest of the pulsars and find the new pulsar leading to the largest detection significance.

3.  If the GW detection significance increases by including the new pulsar, add this pulsar to the list and repeat the second step for the rest of the pulsars; otherwise the optimal set of pulsars has already been achieved.
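The greedy construction above can be sketched in a few lines of Python. Here `toy_significance` is a hypothetical stand-in for the full "optimize the schedule and compute $\langle S\rangle$" step: it allocates time with the strong-signal rule $\tau_i\propto\sqrt{Q_i}$ and scores each pair with a simplified variance of the same form as the $B$ terms (all names, numbers and units are illustrative, not the paper's actual pipeline):

```python
from itertools import combinations
from math import sqrt

def toy_significance(noise, total_time):
    """Toy stand-in for 'optimize the schedule and compute <S>'.

    noise is a list of per-pulsar white-noise parameters Q_i (arbitrary
    units).  Time is allocated as tau_i ~ sqrt(Q_i); each pulsar pair
    contributes a variance 1 + r_i + r_j + r_i*r_j with r = Q/tau,
    mimicking the B-type terms (kappa and sigma_g set to 1).
    """
    norm = sum(sqrt(q) for q in noise)
    tau = [total_time * sqrt(q) / norm for q in noise]
    r = [q / t for q, t in zip(noise, tau)]          # post-allocation noise power
    pairs = list(combinations(range(len(noise)), 2))
    var = sum(1.0 + r[i] + r[j] + r[i] * r[j] for i, j in pairs)
    return len(pairs) / sqrt(var)

def greedy_select(noise, total_time):
    """Greedy pulsar selection following steps 1-3 above."""
    order = sorted(range(len(noise)), key=lambda i: noise[i])
    chosen, rest = order[:2], order[2:]              # step 1: two quietest pulsars
    best = toy_significance([noise[i] for i in chosen], total_time)
    while rest:
        # step 2: trial-add every remaining pulsar, keep the best candidate
        s_new, k_best = max((toy_significance([noise[i] for i in chosen + [k]],
                                              total_time), k) for k in rest)
        if s_new <= best:                            # step 3: stop when <S> drops
            break
        chosen.append(k_best)
        rest.remove(k_best)
        best = s_new
    return chosen, best
```

With a few quiet pulsars and one very noisy one, the loop keeps the quiet pulsars and rejects the noisy one, mirroring the saturation argument above.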
When pulsar surveys discover a new pulsar, we can check whether it is worth including it in the PTA by using the above algorithm, although, to start with, sufficient observations are still necessary to measure the noise properties of this pulsar. ![The GW detection significance $\langle S \rangle$ (y-axis) as a function of GW amplitude $\log(A_0)$ (x-axis), given different telescope gains and amounts of telescope time. In the calculation, we assume that the PTA is composed of 20 pulsars, all of which have $\sigma_{0}=100$ns and $\sigma_{\rm J}=100$ns. We plot three telescope configurations here, each of which is labeled with the numerical value of total telescope time ${\tau}$ and telescope gain ${ {\cal G}}$. Here the telescope time is in units of hours, and the gain takes an arbitrary fiducial unit as in [equation (\[eq:scale\])]{}. For the following two PTA configurations, $\{{ {\cal G}}=10, \tau=20\}$ and $\{{ {\cal G}}=1, \tau=20\times10^2\}$, all pulsars have the same level of radiometer noise, which is 10 times smaller than in the case where $\{{ {\cal G}}=1, \tau=1\}$. However, because of the pulse jitter noise effect, the configurations with more telescope time perform better than the cases with higher system gain. Furthermore, by comparing the $\{{ {\cal G}}=10, \tau=20\}$ case with the $\{{ {\cal G}}=1, \tau=20\times 10^2\}$ case, we conclude that it is important, even for a high gain telescope, to acquire sufficient telescope time in order to improve the GW detection significance when the pulsars become pulse-jitter-noise dominated. \[fig:svsn\] ](f3)

Discussions and Conclusions {#sec:con}
===========================

In this paper, we have investigated a technique to optimize the allocation of telescope time in a pulsar timing array to maximize the GW detection significance given a fixed amount of telescope time. This is done in two steps.
First, the GW detection significance using a correlation detector is calculated analytically as a function of the white noise and red noise level of each pulsar, i.e. [equation (\[eq:decse\])]{}. Second, the GW detection significance is optimized under the constraint that the total allocated telescope time is fixed. The constrained optimization is converted to a corresponding constraint-free optimization using the coordinate transformations of [equation (\[eq:transforma\])]{} and (\[eq:transform\]). Finally, the optimization is solved numerically using the downhill simplex algorithm. For characteristic PTAs, the optimized arrays are able to detect a GW background 2-3 times weaker than unoptimized arrays, depending on the pulsar population. Besides the single telescope case, we also derive the optimization algorithm for multiple telescopes, and we examine the links between the multiple and single telescope optimization. We investigate the optimal number of pulsars to observe for a given PTA, where the algorithm to construct the optimal group of pulsars from candidates is also included. In our optimization, the total telescope times $\tau$ and $\tau_{\nu}$ are input parameters that need to be determined before optimizing the schedule. When defining a PTA project, one can start with a reasonable amount of telescope time, say 20 hours per session per telescope, optimize the schedule, calculate the detection significance $\langle S\rangle$, check whether the detection is sensitive enough to the predicted GW background, and then adjust the input total telescope time accordingly, i.e. increase the total telescope time if the PTA is not sensitive enough. In practice, the detailed numerical values for the optimal observing strategy are still to be determined, due to the lack of measurements of the necessary pulsar noise parameters. To determine the realistic optimization strategy, these parameters are critical.
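The constraint-elimination step described above can be illustrated with a small sketch. The normalized-square mapping below is an illustrative stand-in for the paper's actual transformations (its equations transforma/transform, not reproduced in this excerpt); it has the property that matters, namely that any point of the unconstrained search space maps to a feasible schedule, so an off-the-shelf unconstrained optimizer such as the downhill simplex can be applied directly:

```python
def to_schedule(x, total_time):
    """Map unconstrained optimizer variables x_i to telescope times tau_i.

    By construction tau_i >= 0 and sum(tau_i) == total_time, so the
    constrained search over schedules becomes a constraint-free search
    over x.  NOTE: this particular mapping is an illustrative choice,
    not the transformation used in the paper itself.
    """
    s = sum(xi * xi for xi in x)
    return [total_time * xi * xi / s for xi in x]

# any unconstrained point maps to a feasible schedule
tau = to_schedule([0.5, -1.5, 2.0, 0.1], 24.0)
print(abs(sum(tau) - 24.0) < 1e-9)  # True
```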
A detailed investigation of the individual timing properties of potential PTA pulsars is therefore highly necessary, and further observations will benefit from it. One may encounter the situation in which the optimal strategy requires a longer observing time for certain pulsars per session than their transit time. If this happens, one has to split one observing session into several successive sessions. The GW detection significance is not significantly impaired by session splitting, since the GW induced timing signal has a very red spectrum and we are detecting its low frequency component. However, session splitting can lead to inconveniences in practical arrangements. A better way to avoid such a situation is to construct the PTA using telescopes with enough geographical coverage, which can be one of the driving reasons for the IPTA project. The optimization in this paper is designed to maximize the GW detection significance. Our optimization is built for the correlation detector proposed by [@JHLM05]. As shown in [Appendix \[sec:appopt\]]{}, our optimization, which is very close to minimizing the total PTA noise power, can be different from the optimization for other types of GW detectors, e.g. frequentist detectors [@VBC09; @YCH11] or the Bayesian detector [@VLM09; @VLJ11]. In fact, as already shown by [@BLF10], one can get a different optimal observation schedule when focusing on single-source detection. We have ignored the information of historical non-overlapping data in our algorithms, because the non-overlapping data make very limited contributions to the cross-power of GW signals, although these data can be important to constrain the upper limit of the GW amplitude [@VLJ11]. Furthermore, the PTA offers an opportunity to investigate much broader topics, e.g. interstellar medium effects, time metrology, and so on. This paper, thus, by no means claims that our objective function is the only one we should pursue.
However, our basic framework of optimization, constraints and related numerical techniques will be the same for other detectors, and it is straightforward to generalize our method to them. For Bayesian detectors, there is no analytical expression for the detection significance at present. Bayesian detectors are usually computationally expensive, which makes the optimization difficult at this stage. Figure \[fig:stw\] shows that increasing the levels of red noise does not decrease the saturation level of the detection significance, i.e. the detection significance at large GW amplitude. This seems to contradict the conclusions of [@JHLM05] and [@VBC09], where the red spectrum of the GW limits the saturation level. As shown in [equation (\[eq:decse\])]{}, the term limiting the saturation level is the term $(1+H({ {\,^{i}\negthinspace}}{ {\,^{j}\negthinspace}}\theta))\gamma_{kk'}^2$ in ${ {\,^{i}\negthinspace}}{ {\,^{j}\negthinspace}}A$, which is the non-zero correlation between the GW signals of two different pulsars. Since the intrinsic red noise of pulsars, unlike the GW induced signal, is uncorrelated between pulsars, it is natural that it does not limit the saturation level. In this paper, we use a phenomenological model to describe the noise component, which is a superposition of a telescope-time dependent white noise component and a red noise component independent of telescope time. The reason for using such a phenomenological model is to use the observational information and to introduce minimal theoretical assumptions. Our white noise term contains both the radiometer noise and the pulse jitter noise. Although the radiometer noise is the current bottleneck in the timing accuracy of most MSPs ($\sigma_{0} { {\cal G}}^{-1}\gg \sigma_{\rm J}$), the pulse jitter noise [@LK11] can be a potential limitation for future single dish high gain telescopes. Similarly, red noise can be another limitation for the long term timing accuracy.
Detailed studies of the pulse jitter noise and red noise properties, and of related mitigation algorithms, will be useful for the future prospects of GW detection. We assumed that the noise sources are not correlated among pulsars. This may be valid for all the noise of astrophysical origin, although the clock error can be an identical noise among all pulsars [@HR84; @FB90; @Man94; @Tinto11]. The clock error may also introduce correlations between TOAs from different telescopes, since observatory clocks are usually synchronized. However, thanks to the red spectrum of most clock errors [@Riehle04], one can completely remove this noise using simultaneous differential measurements [@Tinto11]. Furthermore, as we argue in [Appendix \[sec:cnm\]]{}, such a common noise source can be significantly suppressed by post processing, if each session is compact within a time scale of days. In this way, our assumption that the different noise sources are not correlated among pulsars is justified. Our optimal observation strategy allocates telescope time among pulsars. This only affects the white-noise-related part (the ${ {\,^{i}\negthinspace}}{ {\,^{j}\negthinspace}}B$ terms) of the GW detection significance ([equation (\[eq:decse\])]{}). In fact, one could also specify the epoch of each session to minimize the effect of the red-noise-related part (the ${ {\,^{i}\negthinspace}}{ {\,^{j}\negthinspace}}A$ terms). Since these ${ {\,^{i}\negthinspace}}{ {\,^{j}\negthinspace}}A$ terms can be more effectively reduced using whitening techniques [@JHLM05; @JH06], we do not optimize the epoch of each session, in order to avoid sampling artifacts and to reduce the complexity of observation management. We omit a discussion of the frequency of observation sessions, since it is not bounded in terms of the optimization, i.e. observing sessions more frequently simply increases the sensitivity.
Acknowledgments {#acknowledgments .unnumbered}
===============

We gratefully acknowledge support from the ERC Advanced Grant ‘LEAP’, Grant Agreement Number 227947 (PI Michael Kramer). We thank Norbert Wex for reading through the manuscript and for his useful discussions. We also thank the anonymous referee for the important and helpful comments.

Burt B. J., [Lommen]{} A. N., [Finn]{} L. S., 2011, ApJ, 730, 17
Cordes J. M., [Shannon]{} R. M., 2010, preprint (arXiv:1010.3785)
Ferdman R. D., [van Haasteren]{} R., [Bassa]{} C. G., [et al.]{} 2010, Classical and Quantum Gravity, 27, 084014
Foster R. S., [Backer]{} D. C., 1990, ApJ, 361, 300
Hellings R. W., [Downs]{} G. S., 1983, ApJL, 265, L39
Hobbs G., [et al.]{} 2010, Classical and Quantum Gravity, 27, 084013
Hobbs G. B., et al., 2009, PASA, 26, 103
Hogan C. J., [Rees]{} M. J., 1984, Nature, 311, 109
Jaffe A. H., [Backer]{} D. C., 2003, ApJ, 583, 616
Janssen G. H., et al., 2008, A&A, 490, 753
Jenet F., [Finn]{} L. S., [Lazio]{} J., [et al.]{} 2009, ArXiv e-prints
Jenet F. A., et al., 2006, ApJ, 653, 1571
Jenet F. A., [Hobbs]{} G. B., [Lee]{} K. J., [Manchester]{} R. N., 2005, ApJL, 625, L123
Kramer M., [Stappers]{} B., 2010, in ISKAF2010 Science Meeting, June 10-14, 2010, Assen, the Netherlands, [LOFAR, LEAP and beyond: Using next generation telescopes for pulsar astrophysics]{}, p. 10
Kramer M., [Wielebinski]{} R., [Jessner]{} A., [Gil]{} J. A., [Seiradakis]{} J. H., 1994, A&AS, 107, 515
K., 2010, preprint (astro-ph/1002.0737)
Lee K., [Jenet]{} F. A., [Price]{} R. H., [Wex]{} N., [Kramer]{} M., 2010, ApJ, 722, 1589
Lee K. J., [Jenet]{} F. A., [Price]{} R. H., 2008, ApJ, 685, 1304
Liu K., [Keane]{} E. F., [Lee]{} K. J., [Kramer]{} M., [Cordes]{} J. M., [Purver]{} M. B., 2012, MNRAS, 420, 361
Liu K., [Verbiest]{} J. P. W., [Kramer]{} M., [Stappers]{} B. W., [van Straten]{} W., [Cordes]{} J. M., 2011, submitted
Manchester R. N., 1994, Proceedings of the Astronomical Society of Australia, 11, 1
Manchester R. N., 2008, in [C. Bassa, Z. Wang, A. Cumming, & V. M. Kaspi]{} ed., 40 Years of Pulsars: Millisecond Pulsars, Magnetars and More, Vol. 983 of American Institute of Physics Conference Series, [The Parkes Pulsar Timing Array Project]{}, pp 584-592
Nan R., [Wang]{} Q., [Zhu]{} L., [Zhu]{} W., [Jin]{} C., [Gan]{} H., 2006, Chinese Journal of Astronomy and Astrophysics Supplement, 6, 020000
Nelder J. A., [Mead]{} R., 1965, The Computer Journal, 7, 303
Riehle F., 2004, [Frequency Standards: Basics and Applications]{}. Wiley-VCH
Rodin A. E., 2011, Astronomy Reports, 55, 132
Sesana A., [Haardt]{} F., [Madau]{} P., [Volonteri]{} M., 2004, ApJ, 611, 623
Shannon R. M., [Cordes]{} J. M., 2010, ApJ, 725, 1607
Smits R., [Kramer]{} M., [Stappers]{} B., [Lorimer]{} D. R., [Cordes]{} J., [Faulkner]{} A., 2009, A&A, 493, 1161
Smits R., [Lorimer]{} D. R., [Kramer]{} M., [Manchester]{} R., [Stappers]{} B., [Jin]{} C. J., [Nan]{} R. D., [Li]{} D., 2009, A&A, 505, 919
Stappers B. W., [Kramer]{} M., [Lyne]{} A. G., [D’Amico]{} N., [Jessner]{} A., 2006, Chinese Journal of Astronomy and Astrophysics Supplement, 6, 020000
Tinto M., 2011, Phys. Rev. Lett., 106, 191101
van Haasteren R., [Levin]{} Y., 2012, preprint (arXiv:1202.5932)
van Haasteren R., [Levin]{} Y., [Janssen]{} G. H., et al., 2011, MNRAS, 414, 3117
van Haasteren R., [Levin]{} Y., [McDonald]{} P., [Lu]{} T., 2009, MNRAS, 395, 1005
Verbiest J. P. W., et al., 2009, MNRAS, 400, 951
Verbiest J. P. W., et al., 2010, Classical and Quantum Gravity, 27, 084015
Yardley D. R. B., et al., 2011, MNRAS, 414, 1777
Zee A., 2010, [Quantum Field Theory in a Nutshell: Second Edition]{}. Princeton University Press, 2nd ed

Pulsar timing correlation {#sec:appcor}
=========================

In this section, we calculate the cross correlation for a Gaussian random signal with a power-law spectrum. We discuss the two main issues arising in the calculation: the effect of polynomial fitting on the cross correlation and the effect of spectral leakage.
For a stationary continuous-in-time random signal $s(t)$, which has a prescribed power spectrum of $S_{\rm s}(f)$, the well-known Wiener-Khinchin theorem states that the autocorrelation $\langle s(t_{i}) s(t_{j}) \rangle$ is the Fourier transform of the power spectral density, i.e. $$\langle s(t_{i}) s(t_{j}) \rangle=\int_{0}^{\infty}S_{\rm s}(f) e^{2\pi i f (t_{i}-t_{j})}\, df\,. \label{eq:wctheor}$$ However, due to the fitting of polynomials, the direct application of the above theorem to the pulsar timing problem needs revision. In a practical pulsar timing data reduction pipeline, one usually uses a least-squares polynomial fit to extract the pulsar parameters. Such a fit is *not* a stationary process, which prevents us from calculating the correlation directly using [equation (\[eq:wctheor\])]{}. In this paper, correlations are calculated via numerical simulations, which take the following steps: i) generate a series of the sampled signal using individual frequency components, as described in [@LJR08]; ii) fit the signal with a polynomial to simulate the effects of fitting the pulsar rotation frequency and its derivative; iii) calculate the required cross correlation; iv) repeat steps (i), (ii), (iii) and average the cross correlation, until the required precision is attained. In this paper, the correlations are calculated to a relative error of 1%. As the numerical results in [Figure \[fig:coreff\]]{} show, the polynomial fitting clearly breaks the stationarity of signals. ![ The correlation coefficient for 128 regular samples of a power-law red noise. For illustration purposes the spectral index is -2 and we removed the fitted parabolic term from the signal $s(t_{i})$. The x-axis and y-axis show the indices $i$ and $j$, respectively. The colors indicate values of the correlation coefficients. The diagonal elements of the plot are not equal to each other. This clearly shows that the polynomial fitting breaks the stationarity of the red noise.
\[fig:coreff\]](fa1) The other issue is spectral leakage. A red noise signal with a steep spectrum introduces leakage from low frequency components into high frequency components. For a red noise signal with a spectral density of $S_{\rm s}(f)\sim f^{-\beta}$, the amplitude of the signal components within the frequency range $[f, f+\delta f]$ is $f^{-\beta/2} \delta f^{1/2}$. The waveform of this component is sinusoidal, i.e. $s(t)\simeq f^{-\beta/2}\delta f^{1/2} \exp(2\pi i f t)$. At the low frequency limit, where $f t \ll 1$, $s(t)$ can be approximated using a Taylor series $$s(t)\sim f^{-\beta/2} \delta f^{1/2} \exp(2\pi i f t)\simeq \delta f^{1/2} \sum_{l=0}^{\infty} \frac{(2\pi i t)^{l} f^{l-\beta/2}}{l!}.$$ If we fit the waveform with a degree-$n$ polynomial, we effectively remove the leading terms up to $f^{n+3/2-\beta/2}$. Clearly, if $n+3/2-\beta/2\le0$, the signal amplitude goes to infinity when $f\rightarrow 0$. Thus there will be no low frequency cut-off in the spectrum, even for data with a finite length. In this case, the low frequency components are always dominant. To guarantee that the low frequency cut-off arises naturally (i.e. to regularize the signal), we need to fit the time series with a polynomial of degree $n > \beta/2-3/2$. For example, the SMBH GW background introduces a pulsar timing signal with a spectral index of $\beta=13/3$, hence we need at least $n=1$, i.e. fitting with a first order polynomial (a straight line), although the spectral leakage from low frequency components is dominant. We now take a closer look at the effects of the fitting on the RMS level of a signal with a power-law spectrum given by $$S_{\rm s}(f)=S_0 f^{-\beta}, \textrm{with }f\in[f_{\rm L}, \infty)\,,$$ where $f_{\rm L}$ is the lowest frequency cut-off and $S_0$ is the power spectrum value at the unit frequency.
Using the data length ($T$) as the temporal unit, the RMS level of such a signal is $$\sigma^2= S_0 T^{\beta-1}\frac{f_{\rm 0,L}^{1-\beta}}{ (\beta-1)}\,, \label{eq:refz}$$ where $f_{\rm 0, L}=T f_{\rm L}$ is the dimensionless lowest cut-off frequency. The polynomial fitting to regularly sampled data using an $\ell^2$-norm is equivalent to the following *linear time-variant* filter $$h(t_1,t_2)=\sum_{l=0}^{n}(2l+1)P_l(2t_1-1)P_l(2t_2-1)\,, \label{eq:polfit}$$ where $t_1,t_2 \in [0,1]$ are dimensionless times, $n$ is the order of the fitting polynomial, and $P_{l}(\cdot)$ is the $l$-th order Legendre polynomial. Denoting the signal as $s(t)$ and the polynomial fit of such a signal as $s_0(t)$, we have $$s_0(t)=\int_0^1 h(t,t_2) s(t_2)\,{\mathrm{d}}t_2\,,$$ and the residual is $s'(t)=s(t)-s_0(t)$. The average RMS level of the post-fitting residual becomes $$\sigma'^2=\int_{0}^{1} {\mathrm{d}}t s'(t)^2 =\int_{0}^1\int_0^1\int_0^1 h(t,t_1)h(t,t_2) s(t_1)s(t_2)\, {\mathrm{d}}t {\mathrm{d}}t_1 {\mathrm{d}}t_2\,.$$ One can show that $$\sigma'^2=\sigma^2- \int_0^1\int_0^1 \sum_{l=0}^{n} (2l+1) P_l(2t_1-1)P_l(2t_2-1) C(t_1-t_2)\, {\mathrm{d}}t_1 {\mathrm{d}}t_2\,, \label{eq:int}$$ where $C(t_1-t_2)=\langle s(t_1)s(t_2)\rangle$. For the case of a power-law spectrum, we have $$\begin{aligned} C(t)&=&S_0 T^{\beta-1}\frac{f_{\rm 0, L}^{1-\beta}}{\beta-1} \,_1F_2\left(\frac{1-\beta}{2}; \frac{1}{2}, \frac{3-\beta}{2}; -\pi^2 f_{\rm 0, L}^2 t^2\right)\nonumber \\ &&+ S_0 T^{\beta-1}(2\pi)^{\beta-1} \Gamma(1-\beta) \sin\left(\frac{\pi\beta}{2}\right)|t|^{\beta-1}\,,\end{aligned}$$ where $t$ is dimensionless time in units of $T$, $\,_1F_2(\cdot)$ is the generalized hypergeometric function (see also [@VLJ11] for the series representation), and $\Gamma(\cdot)$ is the gamma function.
Integrating [equation (\[eq:int\])]{}, one obtains $$\begin{aligned} \sigma'^2&=&S_0 T^{\beta-1}\left[\frac{}{} \right.\nonumber \\ &-&\sum_{k=n+1}^{\infty} \frac{(2\pi)^{2k} (-1)^{k+n} f_{\rm 0,L}^{2k+1-\beta}(1+n) \Gamma(k) \sin(\pi \beta) }{(1+2k-\beta)\Gamma(2+2k)\Gamma(k-n) \Gamma(2+k+n)} \nonumber \\ &+&\left.2^{\beta-2}(1+n)\pi^{\beta-1} \frac{\Gamma(\frac{3+2n-\beta}{2}) \Gamma(\frac{\beta-1}{2})\Gamma(\frac{1+\beta}{2})}{ \Gamma(\frac{3+2n+\beta}{2}) \Gamma(1+\beta)}\right]\,.\end{aligned}$$ Clearly, if $2n+3\ge\beta$, we have $$\lim_{f_{\rm 0, L}\rightarrow0} \sigma'^2=S_0 T^{\beta-1}2^{\beta-2}(1+n)\pi^{\beta-1} \frac{\Gamma(\frac{3+2n-\beta}{2}) \Gamma(\frac{\beta-1}{2})\Gamma(\frac{1+\beta}{2})}{ \Gamma(\frac{3+2n+\beta}{2}) \Gamma(1+\beta)}\,,$$ i.e. the RMS $\sigma'^2$ is regularized to a finite value for $f_{\rm 0,L} \rightarrow 0$, which confirms our previous estimate. Usually the pulsar spin and spin-down are subtracted from the data. This corresponds to the case of $n=2$. For the GW induced signal, we have $\beta=13/3$ and $S_0=h_{\rm c}^2(\textrm{yr}^{-1})(12\pi^2)^{-1} \textrm{yr}^{-4/3}$, which gives $$\begin{aligned} \sigma'^{2}&=&h_{\rm c}^2(\textrm{yr}^{-1})\frac{3^6 (2\pi)^{4/3}T^{10/3} \Gamma\left(\frac{2}{3}\right)}{2^7 \cdot 5\cdot 7^2\cdot 11\cdot 13} \, \textrm{yr}^{-4/3}\\ &\simeq&2.55\times 10^{-3} h_{\rm c}^2T^{10/3}\, \textrm{yr}^{-4/3}\,.\end{aligned}$$ This is identical to the results of [@VL12]. It should also be noted that for such a case ($\beta=13/3$ and $S_0=h_{\rm c}^2(\textrm{yr}^{-1})(12\pi^2)^{-1} \,\textrm{yr}^{-4/3}$) [equation (\[eq:refz\])]{} becomes $$\sigma^2\simeq2.53\times 10^{-3} h_{\rm c}^2T^{10/3} f_{\rm 0, L}^{-10/3}\, \textrm{yr}^{-4/3}\,.$$ In this way, estimating the RMS value of the fitted signal using [equation (\[eq:refz\])]{} is accurate enough for practical purposes, provided the inverse of the data length is adopted as the *effective cut-off frequency*, i.e. $f_{\rm 0,L}=1$.
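The numerical coefficient quoted above can be checked directly: the closed-form expression with the prime-factor denominator should agree with the general limit formula evaluated at $n=2$, $\beta=13/3$ and $S_0=(12\pi^2)^{-1}$ (i.e. $h_{\rm c}(\textrm{yr}^{-1})=1$). A quick check using Python's `math.gamma`:

```python
from math import gamma, pi

# closed-form coefficient in sigma'^2 = C * h_c^2 * T^{10/3} yr^{-4/3}
C = 3**6 * (2 * pi) ** (4 / 3) * gamma(2 / 3) / (2**7 * 5 * 7**2 * 11 * 13)

# the same number from the general limit formula with n = 2, beta = 13/3
beta, n = 13 / 3, 2
S0 = 1 / (12 * pi**2)
C_gen = (S0 * 2 ** (beta - 2) * (1 + n) * pi ** (beta - 1)
         * gamma((3 + 2 * n - beta) / 2) * gamma((beta - 1) / 2)
         * gamma((1 + beta) / 2)
         / (gamma((3 + 2 * n + beta) / 2) * gamma(1 + beta)))

print(f"C = {C:.4e}, C_gen = {C_gen:.4e}")  # both close to 2.55e-3
```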
GW detection significance {#sec:appS}
=========================

In this section, we calculate the GW detection significance. The expected value for the detection significance $\langle S \rangle$ depends on $\Sigma_{\rm c}$ and $\langle { {\,^{i}\negthinspace}}{ {\,^{j}\negthinspace}}c \rangle$, as shown in [equation (\[eq:exps\])]{}. We determine $\langle { {\,^{i}\negthinspace}}{ {\,^{j}\negthinspace}}c \rangle$ first. From [equation (\[eq:defcorr\])]{}, we have $$\langle { {\,^{i}\negthinspace}}{ {\,^{j}\negthinspace}}c \rangle= {\frac{1}{m}{{\sum_{k=1}^{m}}}\langle {{\,^{i}\negthinspace R}}_{k} {{\,^{j}\negthinspace R}}_k \rangle }= {\sigma_{\rm g}^2 H({ {\,^{i}\negthinspace}}{ {\,^{j}\negthinspace}}\theta) }\,. \label{eq:expc}$$ To determine $\Sigma_{\rm c}$, we need $\langle { {\,^{i}\negthinspace}}{ {\,^{j}\negthinspace}}c^2\rangle$, which is calculated in a similar fashion such that $$\langle { {\,^{i}\negthinspace}}{ {\,^{j}\negthinspace}}c^2\rangle={\frac{1}{m^2}\left\langle \sum_{k=1}^{m} \sum_{k'=1}^{m} { {\,^{i}\negthinspace}}R_{k} { {\,^{j}\negthinspace}}R_{k} { {\,^{i}\negthinspace}}R_{k'} { {\,^{j}\negthinspace}}R_{k'}\right\rangle }\,. \label{eq:avrc2}$$ After using the correlation relations [equation (\[eq:corgws\])]{} and [equation (\[eq:cornoi\])]{}, and performing the Wick expansion [@Zee10] to calculate the higher moments, we have $$\left \langle \frac{1}{m^2}\sum_{k=1}^{m} \sum_{k'=1}^{m} { {\,^{i}\negthinspace}}R_{k} { {\,^{j}\negthinspace}}R_{k} { {\,^{i}\negthinspace}}R_{k'} { {\,^{j}\negthinspace}}R_{k'}\right \rangle = \sigma_{\rm g}^4 \left( { {\,^{i}\negthinspace}}{ {\,^{j}\negthinspace}}A+{ {\,^{i}\negthinspace}}{ {\,^{j}\negthinspace}}B+H({ {\,^{i}\negthinspace}}{ {\,^{j}\negthinspace}}\theta)^2 \right ) \,.
\label{eq:rrrr}$$ The $\Sigma_{\rm c}$ is then $$\begin{aligned} \Sigma_{\rm c}&=&\sqrt{\left \langle {\frac{1}{M}\sum_{{ i-j\,\textrm{pairs}}} \left({ {\,^{i}\negthinspace}}{ {\,^{j}\negthinspace}}c-\overline{\langle c\rangle}\right)^2} \right \rangle}\nonumber \\ &=& \sqrt{\frac{1}{M}\sum_{{ i-j\,\textrm{pairs}}} \left(\langle { {\,^{i}\negthinspace}}{ {\,^{j}\negthinspace}}c ^2 \rangle -\overline{\langle c\rangle}^2\right)} \nonumber \\ &=&{\sigma_{\rm g}^2}\sqrt{\Sigma_{H}^2+\frac{1}{M}\sum_{{ i-j\,\textrm{pairs}}} \left( { {\,^{i}\negthinspace}}{ {\,^{j}\negthinspace}}A+{ {\,^{i}\negthinspace}}{ {\,^{j}\negthinspace}}B\right)}\,, \end{aligned}$$ with which one can derive [equation (\[eq:decse\])]{}. Optimization using Lagrangian multiplier {#sec:appopt} ======================================== In section \[sec:optintro\], we describe the optimization technique using variable transformation and numerical optimization. In this section, we introduce another method solving the constrained optimization problem directly. As an example, we present the technique for the single telescope situation, which is also readily generalized. The optimization problem for the single telescope case is to search the minimal value of ${{\cal L}}=\sum_{{ i-j\,\textrm{pairs}}} { {\,^{ij}\negthinspace}}B$ under the constraint that $\tau=\sum_{i=1}^{n} { {\,^{i}\negthinspace}}\tau$. 
Using a Lagrangian multiplier $\lambda$, we can re-cast the optimization problem to optimize the ${{\cal L}}'$, where $$\begin{aligned} {{\cal L}}'&=&{{\cal L}}+\lambda \left (\tau-\sum_{i=1}^{n} { {\,^{i}\negthinspace}}\tau\right) \\ &=&\sum_{{ i-j\,\textrm{pairs}}} \frac{{ {\,^{i}\negthinspace}}\kappa { {\,^{j}\negthinspace}}q}{{ {\,^{j}\negthinspace}}\tau} + \frac{{ {\,^{j}\negthinspace}}\kappa { {\,^{i}\negthinspace}}q}{{ {\,^{i}\negthinspace}}\tau}+ \frac{{ {\,^{i}\negthinspace}}q { {\,^{j}\negthinspace}}q}{{ {\,^{i}\negthinspace}}\tau { {\,^{j}\negthinspace}}\tau} +\lambda \left (\tau-\sum_{i=1}^{n} { {\,^{i}\negthinspace}}\tau\right)\,, \label{eq:lagequiv}\end{aligned}$$ where ${ {\,^{i}\negthinspace}}q=({ {\,^{i}\negthinspace}}\sigma_{0}^2{ {\cal G}}^{-2} + { {\,^{i}\negthinspace}}\sigma_{\rm J}^2)\sigma_{\rm g}^{-2}$ and ${ {\,^{i}\negthinspace}}\kappa =1+{ {\,^{i}\negthinspace}}\sigma_{\rm r}^2 \sigma_{\rm g}^{-2}$. The minimization of ${{\cal L}}'$ can be found by $$\begin{aligned} \frac{\partial {{\cal L}}'}{\partial { {\,^{i}\negthinspace}}\tau}&=&0\,, \label{eq:diffe1}\\ \frac{\partial {{\cal L}}'}{\partial \lambda}&=&0\,, \label{eq:diffe2}\end{aligned}$$ which give $$\frac{{ {\,^{i}\negthinspace}}q}{{ {\,^{i}\negthinspace}}\tau^2}\sum_{j=1, \neq i}^{n}{ {\,^{j}\negthinspace}}\kappa+\frac{{ {\,^{i}\negthinspace}}q}{{ {\,^{i}\negthinspace}}\tau^2}\sum_{j=1, \neq i}^{n} \frac{{ {\,^{j}\negthinspace}}q}{{ {\,^{j}\negthinspace}}\tau}-\frac{\lambda}{2} =0 \,,\\ \label{eq:difopt1}$$ and $$\sum_{i=1}^{n} { {\,^{i}\negthinspace}}\tau-\tau =0\,. \label{eq:difopt2}$$ It is easy to check that [equation (\[eq:difopt1\])]{} and  (\[eq:difopt2\]) can be solved using the following recipe: 1\. Guess an initial value for ${ {\,^{i}\negthinspace}}\tau$. 2\. 
Update ${ {\,^{i}\negthinspace}}\tau$ with a new value using $$\tau \frac{\sqrt{ { {\,^{i}\negthinspace}}q\sum_{j=1,\neq i}^{n}\left({ {\,^{j}\negthinspace}}\kappa+{{ {\,^{j}\negthinspace}}q}/{{ {\,^{j}\negthinspace}}\tau}\right)}} { \sum_{i=1}^{n} \sqrt{{ {\,^{i}\negthinspace}}q\sum_{j=1,\neq i}^{n}\left({ {\,^{j}\negthinspace}}\kappa+{{ {\,^{j}\negthinspace}}q}/{{ {\,^{j}\negthinspace}}\tau}\right)}} \rightarrow { {\,^{i}\negthinspace}}\tau \label{eq:iteration}$$ 3\. Repeat step 2, until the required precision is achieved. One can monitor the change of ${ {\,^{i}\negthinspace}}\tau$ for each iteration until it converges to the necessary precision. The initial value for the iteration is determined from the strong-signal limit, i.e. ${ {\,^{i}\negthinspace}}q\rightarrow 0$, where the iteration [equation (\[eq:iteration\])]{} reduces to a solution $${ {\,^{i}\negthinspace}}\tau=\tau \frac{\sqrt{{ {\,^{i}\negthinspace}}q \sum_{j=1,\neq i}^{n} { {\,^{j}\negthinspace}}\kappa}}{\sum_{i=1}^{n} \sqrt{ { {\,^{i}\negthinspace}}q \sum_{j=1,\neq i}^{n} { {\,^{j}\negthinspace}}\kappa}}+{\cal O}(q)\,. \label{eq:initialv}$$ It is worth noting that, in fact, the iteration process ([equation (\[eq:iteration\])]{}) will not change the results very much from the initial value [equation (\[eq:initialv\])]{}.
This is due to $$\begin{aligned} &&\tau \frac{\sqrt{ { {\,^{i}\negthinspace}}q\sum_{j=1,\neq i}^{n}\left({ {\,^{j}\negthinspace}}\kappa+{{ {\,^{j}\negthinspace}}q}/{{ {\,^{j}\negthinspace}}\tau}\right)}} { \sum_{i=1}^{n} \sqrt{{ {\,^{i}\negthinspace}}q\sum_{j=1,\neq i}^{n}\left({ {\,^{j}\negthinspace}}\kappa+{{ {\,^{j}\negthinspace}}q}/{{ {\,^{j}\negthinspace}}\tau}\right)}} \nonumber \\ &&\simeq \tau \frac{\sqrt{ { {\,^{i}\negthinspace}}q\sum_{j=1}^{n}\left({ {\,^{j}\negthinspace}}\kappa+{{ {\,^{j}\negthinspace}}q}/{{ {\,^{j}\negthinspace}}\tau}\right)}} { \sum_{i=1}^{n} \sqrt{{ {\,^{i}\negthinspace}}q\sum_{j=1}^{n}\left({ {\,^{j}\negthinspace}}\kappa+{{ {\,^{j}\negthinspace}}q}/{{ {\,^{j}\negthinspace}}\tau}\right)}}+{\cal O}\left(\frac{1}{n}\right) \nonumber \\ &&=\tau \frac{\sqrt{{ {\,^{i}\negthinspace}}q}}{\sum_{i=1}^{n} \sqrt{{ {\,^{i}\negthinspace}}q}} \nonumber \\ &&= \tau \frac{\sqrt{{ {\,^{i}\negthinspace}}Q}}{\sum_{i=1}^{n} \sqrt{{ {\,^{i}\negthinspace}}Q}} \,,\end{aligned}$$ where ${ {\,^{i}\negthinspace}}Q={ {\,^{i}\negthinspace}}\sigma_{0}^2 { {\cal G}}^{-2} +\sigma_{\rm J}^2$. Clearly, the initial value [equation (\[eq:initialv\])]{} we use is already a good approximation to the optimal observation strategy. 
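The three-step recipe above can be sketched in a few lines of Python. This is a minimal illustration, not code from the paper: the arrays `q` and `kappa` stand for ${ {\,^{i}\negthinspace}}q$ and ${ {\,^{i}\negthinspace}}\kappa$, and `tau_total` for the total time $\tau$; all names and the sample inputs are assumptions for illustration.

```python
import numpy as np

def optimal_allocation(q, kappa, tau_total, tol=1e-12, max_iter=1000):
    """Fixed-point iteration for the per-pulsar integration times tau_i
    under the constraint sum(tau_i) = tau_total, following the
    Lagrange-multiplier recipe (eqs. iteration / initialv)."""
    q = np.asarray(q, dtype=float)
    kappa = np.asarray(kappa, dtype=float)
    # Strong-signal initial guess: tau_i proportional to
    # sqrt(q_i * sum_{j != i} kappa_j).
    s_kappa = kappa.sum() - kappa          # sum over j != i
    w = np.sqrt(q * s_kappa)
    tau = tau_total * w / w.sum()
    for _ in range(max_iter):
        # sum_{j != i} (kappa_j + q_j / tau_j)
        term = kappa + q / tau
        s = term.sum() - term
        w = np.sqrt(q * s)
        tau_new = tau_total * w / w.sum()
        if np.max(np.abs(tau_new - tau)) < tol:
            return tau_new
        tau = tau_new
    return tau
```

In practice the iteration stays close to the initial guess, consistent with the argument above that [equation (\[eq:initialv\])]{} is already a good approximation.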
The condition for the existence of solutions to ${ {\,^{i}\negthinspace}}P_{\nu}$ {#app:exist} ======================================================================== In this section, we investigate the conditions under which the following equations have a solution ${ {\,^{i}\negthinspace}}P_{\nu}$ for given ${ {\,^{i}\negthinspace}}\tau_{\rm e}, { {\cal G}}_{\nu}, { {\,^{i}\negthinspace}}O_{\nu}$, and $\tau_{\nu}$, $$\begin{aligned} { {\,^{i}\negthinspace}}\tau_{\rm e}&=&\sum_{\nu=1}^{N_{\rm tel}} { {\cal G}}_{\nu}^2 { {\,^{i}\negthinspace}}P_{\nu} { {\,^{i}\negthinspace}}O_{\nu}\,, \label{eq:a1}\\ \tau_{\nu} &=&\sum_{i=1}^{N} { {\,^{i}\negthinspace}}P_{\nu} { {\,^{i}\negthinspace}}O_{\nu}\,, \label{eq:a2}\\ { {\,^{i}\negthinspace}}P_{\nu}&\ge& 0\,, \label{eq:a3}\end{aligned}$$ where $i$ is $1\ldots N$, $\nu$ is $1\ldots N_{\rm tel}$, and ${ {\,^{i}\negthinspace}}\tau_{\rm e}$ satisfies [equation (\[eq:taueff\])]{}. We want to prove that [equation (\[eq:a1\])]{}, (\[eq:a2\]), and (\[eq:a3\]) have solutions ${ {\,^{i}\negthinspace}}P_{\nu}$, if and only if, for any $i$, the following condition is satisfied, $${ {\,^{i}\negthinspace}}\tau_{\rm e} \le \sum_{\nu=1}^{N_{\rm tel}} { {\cal G}}_{\nu}^2 \tau_{\nu} { {\,^{i}\negthinspace}}O_{\nu}\,. \label{eq:cond}$$ The proof contains two steps as follows: i) The ‘only if’ part. If ${ {\,^{i}\negthinspace}}P_{\nu}$ is the solution, then $${ {\,^{i}\negthinspace}}\tau_{\rm e}=\sum_{\nu=1}^{N_{\rm tel}} { {\cal G}}^2_{\nu} { {\,^{i}\negthinspace}}P_{\nu} { {\,^{i}\negthinspace}}O_{\nu}\le \sum_{\nu=1}^{N_{\rm tel}} { {\cal G}}^2_{\nu} \tau_{\nu} { {\,^{i}\negthinspace}}O_{\nu}\,, \label{eq:provof}$$ where the inequality follows from the constraints of [equation (\[eq:a2\])]{} and (\[eq:a3\]). ii) The ‘if’ part. Note that, under the constraints of [equation (\[eq:a2\])]{}, any linear combination of the column vectors, e.g. $\beta_i=\sum_{\nu} c_{\nu} { {\,^{i}\negthinspace}}P_{\nu} { {\,^{i}\negthinspace}}O_{\nu}$, belongs to a hyperplane $\cal S$, because $\sum_{i=1}^{N} \beta_i=\sum_{\nu} \tau_{\nu} c_{\nu}$.
Due to [equation (\[eq:a3\])]{}, only part of the hyperplane $\cal S$ is accessible to the vector $\beta_{i}$, i.e. $\beta_{i}$ is constrained by $0\le \beta_{i}\le \sum_{\nu} c_{\nu} \tau_{\nu} { {\,^{i}\negthinspace}}O_{\nu}$ in $\cal S$. Denote this accessible region as $\cal A$, and let $c_{\nu}={ {\cal G}}^{2}_{\nu}$. Clearly, if the vector ${ {\,^{i}\negthinspace}}\tau_{\rm e}$ is in the accessible region $\cal A$, there exists a solution to [equation (\[eq:a1\])]{}, which satisfies both [equation (\[eq:a2\])]{} and (\[eq:a3\]). From [equation (\[eq:taueff\])]{}, we know $0\le{ {\,^{i}\negthinspace}}\tau_{\rm e} \le \sum_{\nu=1}^{N_{\rm tel}} { {\cal G}}_{\nu}^2 \tau_{\nu} { {\,^{i}\negthinspace}}O_{\nu}$, thus the vector ${ {\,^{i}\negthinspace}}\tau_{\rm e}$ belongs to the accessible region $\cal A$, and the solution exists. A graphical illustration of the condition is given in [Figure \[fig:illu\]]{}, where we choose ${ {\,^{i}\negthinspace}}O_{\nu}=\left[\begin{array}{c c}1 & 1\\1& 1 \\ 1& 0\end{array}\right]$. ![The illustration for the accessible region of the summation of vectors. Here $X_1, X_2, X_3$ are the coordinates. Because of the ${ {\,^{i}\negthinspace}}O_{\nu}$ we choose, the vector $v_1$ is constrained to the region $A_1$ (the plane constrained to the shaded triangle) and the vector $v_2$ is constrained to the region $A_2$ (the line segment). The accessible region for the summation ($v_3=v_1+v_2$) of the two vectors is clearly $A_3$, i.e. a plane with extra constraints. \[fig:illu\]](f4) Common noise mitigation {#sec:cnm} ======================= Clock errors and other common noise sources are harmful to PTA observations. [@Tinto11] has shown that a simultaneous differential measurement of a pulsar TOA completely removes the clock error.
Instead of requiring simultaneous observations of multiple pulsars, we argue in this section that one can still remove most of the clock error in post-processing without simultaneity, if each observation session is compact enough. Suppose the pulsar timing signal contains an identical (clock) noise $n_{\rm c}(t)$ with a red spectrum. A subtraction between the timing signals of two different pulsars at two close epochs will remove most of the power of the identical noise component. One can check this by looking at the residuals (i.e. $n_{\rm c}(t)-n_{\rm c}(t+\Delta)$) of the identical noise after the subtraction. It is easy to show that the power spectrum of the residual $S_{\rm n}(f)$ becomes $$S_{\rm n}(f)=S_{\rm c}(f) [1-\cos(2\pi f\Delta)]\,, \label{eq:sigres}$$ where $S_{\rm c}(f)$ is the noise spectrum of $n_{\rm c}$, and $\Delta$ is the time difference between the two epochs, i.e. roughly the time span of one observing session. The clock noise $S_{\rm c}(f)$ dominates at low frequencies with time scales of ten years, thus we can check the residual at such frequencies, where the residual power spectrum becomes $$S_{\rm n}(f)\simeq \left[ 1.5\times 10^{-6} \left(\frac{f}{\rm 10\, yr^{-1}}\right)^2 \left(\frac{\Delta}{\rm 1 \, day}\right)^2 \right]S_{\rm c} (f)\,.$$ Thus, if we keep each observation session compact within a few days, the clock error can still be significantly (by a factor of $\sim 10^{6}$) suppressed in post-processing, and we do not need to worry about such common noise when planning an observing schedule at this stage. \[lastpage\] [^1]: Email: kjlee@mpifr-bonn.mpg.de [^2]: The Sardinia Radio telescope is in the commissioning phase at the time of writing this paper. [^3]: The correlated noise such as clock noise is discussed in [Section \[sec:con\]]{} and [Appendix \[sec:cnm\]]{}. [^4]: In this paper, we use a fiducial unit for the gain, thus one can take any telescope as unit 1 and scale other telescopes accordingly.
[^5]: Here, telescopes with higher effective gain ${ {\cal G}}$ can be telescopes with larger collection area, lower system temperature, wider bandwidth and so on.
--- abstract: | We show that, with indivisible goods, the existence of competitive equilibrium fundamentally depends on agents’ substitution effects, not their income effects. Our Equilibrium Existence Duality allows us to transport results on the existence of competitive equilibrium from settings with transferable utility to settings with income effects. One consequence is that net substitutability—which is a strictly weaker condition than gross substitutability—is sufficient for the existence of competitive equilibrium. We also extend the “demand types” classification of valuations to settings with income effects and give necessary and sufficient conditions for a pattern of substitution effects to guarantee the existence of competitive equilibrium. JEL Codes: C62, D11, D44 author: - Elizabeth Baldwin - Omer Edhan - Ravi Jagadeesan - Paul Klemperer - Alexander Teytelboym bibliography: - 'bib.bib' date: 17th June 2020 title: | The Equilibrium Existence Duality:\ Equilibrium with Indivisibilities & Income Effects --- Introduction ============ This paper shows that, when goods are indivisible and there are income effects, the existence of competitive equilibrium fundamentally depends on agents’ substitution effects—i.e., the effects of compensated price changes on agents’ demands. We provide general existence results that do not depend on income effects. In contrast to the case of divisible goods, competitive equilibrium does not generally exist in settings with indivisible goods [@henry1970indivisibilites]. 
Moreover, most previous results about when equilibrium does exist with indivisible goods assume that utility is transferable—ruling out income effects but allowing tractable characterizations of (Pareto-)efficient allocations and aggregate demand that can be exploited to analyze competitive equilibrium.[^1] But understanding the role of income effects is important for economies with indivisible goods, as these goods may comprise large fractions of agents’ budgets. Furthermore, in the presence of income effects, the distribution of wealth among agents affects both Pareto efficiency and aggregate demand, making it necessary to develop new methods to analyze competitive equilibrium with indivisible goods. The cornerstone of our analysis is an application of the relationship between Marshallian and Hicksian demand. As in classical demand theory, Hicksian demand is defined by fixing a utility level and minimizing the expenditure of obtaining it. We combine Hicksian demands to construct a family of “Hicksian economies” in which prices vary, but agents’ utilities—rather than their endowments—are held constant. Our key result, which we call the Equilibrium Existence Duality, states that competitive equilibria exist for all endowment allocations if and only if competitive equilibria exist in the Hicksian economies for all utility levels. Preferences in each Hicksian economy reflect agents’ substitution effects. Therefore, by the Equilibrium Existence Duality, the existence of competitive equilibrium fundamentally depends on substitution effects. Moreover, as fixing a utility level precludes income effects, agents’ preferences are quasilinear in each Hicksian economy. 
Hence, the Equilibrium Existence Duality allows us to transport (and so generalize) *any* necessary or sufficient condition for equilibrium existence from settings with transferable utility to settings with income effects.[^2] In particular, our most general existence result gives a necessary and sufficient condition for a pattern of agents’ substitution effects to guarantee the existence of competitive equilibrium in the presence of income effects. Consider, for example, the case of substitutable goods in which each agent demands at most one unit of each good. With transferable utility, substitutability is sufficient for the existence of competitive equilibrium [@KeCr:82] and defines a maximal domain for existence [@GuSt:99]. With income effects, @FlJaJaTe:19 showed that competitive equilibrium exists under gross substitutability. The Equilibrium Existence Duality tells us that, with income effects, competitive equilibrium in fact exists under *net* substitutability and that net substitutability defines a maximal domain for existence. Moreover, we show that gross substitutability implies net substitutability; the reverse direction is not true in the presence of income effects. An implication of our results is that it is unfortunate that [@KeCr:82], and much of the subsequent literature, used the term “gross substitutes” to refer to a condition on quasilinear preferences. Indeed, gross and net substitutability are equivalent without income effects, and our work shows that it is net substitutability, not gross substitutability, that is critical to the existence of competitive equilibrium with substitutes.[^3] To appreciate the distinction between gross and net substitutability, suppose that Martine owns a house and is thinking about selling her house and buying one of two different other houses: a spartan one and a luxurious one [@quinzii1984core]. 
If the price of her own house increases, she may wish to buy the luxurious house instead of the spartan one—exposing a gross complementarity between her existing house and the spartan one. However, Martine regards the houses as net substitutes: the complementarity emerges entirely due to an income effect. Competitive equilibrium is therefore guaranteed to exist in economies with Martine if all other agents see the goods as net substitutes, despite the presence of gross complementarities. Our most general equilibrium existence theorem characterizes the combinations of substitution effects that guarantee the existence of competitive equilibrium. It is based on a classification of valuations into “demand types.” A demand type is defined by the set of vectors that summarize the possible ways in which demand can change in response to a small generic price change. For example, the set of all substitutes valuations forms a demand type, as does the set of all complements valuations, etc. Applying @BaKl:19’s taxonomy to changes in Hicksian demands, we see that their definition easily extends to general utility functions, capturing agents’ substitution effects. Examples of demand types in our setting with income effects, therefore, include the set of all net substitutes preferences, the set of all net complements preferences, etc. The Equilibrium Existence Duality then makes it straightforward that the Unimodularity Theorem[^4]—which encompasses many standard results on the existence of competitive equilibrium as special cases[^5]—is unaffected by income effects. Therefore, as with the case of substitutes, conditions on complementarities and substitutabilities that guarantee the existence of competitive equilibrium in settings with transferable utility translate to conditions on net complementarities and substitutabilities that guarantee the existence of competitive equilibrium in settings with income effects.
In particular, there are patterns of net complementarities that are compatible with the existence of competitive equilibrium. Our results may have significant implications for the design of auctions that seek competitive equilibrium outcomes, and in which bidders face financing constraints. For example, they suggest that versions of the Product-Mix Auction [@klemperer2008new], used by the Bank of England since the Global Financial Crisis, may work well in this context. Several other papers have considered the existence of competitive equilibrium in the presence of indivisibilities and income effects. [@quinzii1984core], [@gale1984equilibrium], and [@svensson1984competitive] showed the existence of competitive equilibrium in a housing market economy in which agents have unit demand and endowments. Building on those results, [@kaneko1986existence], [@van1997existence; @van2002existence], and [@yang2000equilibrium] analyzed settings with multiple goods, but restricted attention to separable preferences. By contrast, our results—even for the case of substitutes—allow for interactions between the demand for different goods. We also clarify the role of net substitutability for the existence of competitive equilibrium. In a different direction, [@DaKoMu:01] proved a version of the sufficiency direction of the Unimodularity Theorem for settings with income effects. [@DaKoMu:01] also defined domains of preferences using an optimization problem that turns out to be equivalent to the expenditure minimization problem. However, they did not note the connection to the expenditure minimization problem or Hicksian demand, and, as a result, did not interpret their sufficient conditions in terms of substitution effects or establish the role of substitution effects in determining the existence of equilibrium. We proceed as follows. Section \[sec:setting\] describes our setting—an exchange economy with indivisible goods and money. 
Section \[sec:EED\] develops the Equilibrium Existence Duality. Since the existing literature has focused mostly on the case in which indivisible goods are substitutes, we consider that case in Section \[sec:subst\]. Section \[sec:demTypes\] develops demand types for settings with income effects and states our Unimodularity Theorem with Income Effects. Section \[sec:auctions\] remarks on implications for auction design, and Section \[sec:conclusion\] is a conclusion. Appendix \[app:EEDproof\] proves the Equilibrium Existence Duality. Appendix \[app:grossToNet\] proves the connection between gross and net substitutability. Appendices \[app:dualDemPrefs\] and \[app:maxDomain\] adapt the proofs of results from the literature to our setting. The Setting {#sec:setting} =========== We work with a model of exchange economies with indivisibilities—adapted to allow for income effects. There is a finite set $J$ of agents, a finite set $I$ of indivisible goods, and a divisible numéraire that we call “money.” We allow goods to be undesirable, i.e., to be “bads.” We fix a *total endowment* $\tot \in \mathbb{Z}^I$ of goods in the economy.[^6] Preferences and Marshallian Demand ---------------------------------- Each agent $j \in J$ has a finite set $\Feas{j} \subseteq \Z^I$ of *feasible bundles* of indivisible goods and a lower bound $\feas{j} \ge -\infty$ on her consumption of money. As bundles that specify negative consumption of some goods can be feasible, our setting implicitly allows for production.[^7] The principal cases of $\feas{j}$ are $\feas{j} = -\infty$, in which case all levels of consumption of money are feasible, and $\feas{j} = 0,$ in which case the consumption of money must be positive. Hence, the set of feasible consumption bundles for agent $j$ is $\Feans{j} = (\feas{j},\infty) \times \Feas{j}$. Given a bundle $\bunn \in \Feans{j},$ we let $\numer$ denote the amount of money in $\bunn$ and $\bun$ denote the bundle of goods specified by $\bunn,$ so $\bunn = (\defbunn)$.
The utility levels of agent $j$ lie in the range $\feasUtil{j}$, where $-\infty \le \minu{j} < \maxu{j} \le \infty.$ Furthermore, each agent $j$ has a *utility function* $\utilFn{j}: \Feans{j} \to \feasUtil{j}$ that we assume to be continuous and strictly increasing in $\numer$, and to satisfy $$\label{eq:ulimits} \lim_{\numer \to (\feas{j})^+} \util{j}{\defbunn} = \minu{j} \quad \text{and} \quad \lim_{\numer \to \infty} \util{j}{\defbunn} = \maxu{j}$$ for all $\bun \in \Feas{j}.$ Condition (\[eq:ulimits\]) requires that some consumption of money above the minimum level $\feas{j}$ be essential to agent $j$.[^8] We let $\pnumer = 1$. Given an endowment $\bunndow = (\defbunndow) \in \Feans{j}$ of a feasible consumption bundle and a price vector $\p \in \mathbb{R}^I,$ agent $j$’s *Marshallian demand* for goods is $$\dM{j}{\p}{\bunndow} = \left\{\bun^* \lgiv \bunn^* \in \argmax_{\bunn \in X^j \mid \pall \cdot \bunn \le \pall \cdot \bunndow} \util{j}{\bunn}\lgivend\right\}.$$ As usual, Marshallian demand is given by the set of bundles of goods that maximize an agent’s utility, subject to a budget constraint, given a price vector and an endowment. An *income effect* is a change in an agent’s Marshallian demand induced by a change in her money endowment, holding prices fixed.[^9] Our setup is flexible enough to capture a wide range of preferences with and without income effects, as the following two examples illustrate. \[eg:quasilin\] Given a *valuation* $\valFn{j}: \Feas{j} \to \mathbb{R},$ letting $\feas{j} = \minu{j}= -\infty$ and $\maxu{j} =\infty,$ one obtains a quasilinear utility function given by $$\util{j}{\defbunn} = \numer + \val{j}{\bun}.$$ When agents’ utility functions are quasilinear, they do not experience income effects. When all agents have quasilinear utility functions, we say that utility is *transferable*.
\[eg:quasilog\] Given a function $\quasivalFn{j}: \Feas{j} \to (-\infty,0)$, which we call a *quasivaluation*,[^10] and letting $\minu{j}= -\infty$, $\maxu{j} =\infty,$ and $\feas{j} = 0,$ there is a *quasilogarithmic* utility function given by $$\util{j}{\bunn} = \log \numer - \log(- \quasival{j}{\bun}).$$ Unlike with quasilinear utility functions, agents with quasilogarithmic utility functions exhibit income effects. Hicksian Demand, Hicksian Valuations, and the Hicksian Economies ---------------------------------------------------------------- The concept of Hicksian demand from consumer theory plays a key role in our analysis. Given a utility level $\ub \in \feasUtil{j}$ and a price vector $\p,$ agent $j$’s *Hicksian demand* for goods is $$\label{eq:costMin} \dH{j}{\p}{\ub} = \left\{\bun^* \lgiv \bunn^* \in \argmin_{\bunn \in \Feans{j} \mid \util{j}{\bunn} \ge \ub} \pall \cdot \bunn\lgivend\right\}.$$ As in the standard case with divisible goods, Hicksian demand is given by the set of bundles of goods that minimize the expenditure of obtaining a utility level given a price vector. A *substitution effect* is a change in an agent’s Hicksian demand induced by a change in prices, holding her utility level fixed. As in classical demand theory, Marshallian and Hicksian demand are related by the duality between the utility maximization and expenditure minimization problems. Specifically, a bundle of goods is expenditure-minimizing if and only if it is utility-maximizing.[^11] \[fac:dualDem\] Let $\p$ be a price vector. 1. For all endowments $\bunndow$, we have that $\dM{j}{\p}{\bunndow} = \dH{j}{\p}{\ub},$ where $$\ub = \max_{\bunn \in \Feans{j} \mid \pall \cdot \bunn \le \pall \cdot \bunndow} \util{j}{\bunn}.$$ 2.
For all utility levels $\ub$ and endowments $\bunndow$ with $$\pall \cdot \bunndow = \min_{\bunn \in \Feans{j} \mid \util{j}{\bunn} \ge \ub} \pall \cdot \bunn,$$ we have that $\dH{j}{\p}{\ub} = \dM{j}{\p}{\bunndow}.$ If an agent has a quasilinear utility function, then, as she experiences no income effects, her Marshallian and Hicksian demands coincide and do not depend on endowments or utility levels. Under quasilinearity, we therefore refer to both Marshallian and Hicksian demand simply as *demand*, which we denote by $\dQL{j}{\p}$. Formally, if $j$ has quasilinear utility with valuation $\valFn{j},$ defining $\dQL{j}{\p}$ as the solution to the quasilinear maximization problem $$\label{eqn:quasilin} \dQL{j}{\p} = \argmax_{\bun \in \Feas{j}} \{\val{j}{\bun} - \p \cdot \bun\},$$ we have that $\dM{j}{\p}{\bunndow} = \dQL{j}{\p}$ for all endowments $\bunndow$ and that $\dH{j}{\p}{\ub} = \dQL{j}{\p}$ for all utility levels $\ub$. We next show that the interpretation of the expenditure minimization problem as a quasilinear maximization problem persists in the presence of income effects. Specifically, we can rewrite the expenditure minimization problem of Equation (\[eq:costMin\]) as a quasilinear optimization problem by using the constraint to solve for $\numer$ as a function of $\bun$. Formally, for a bundle $\bun \in \Feas{j}$ of goods and a utility level $\ub \in \feasUtil{j},$ we let $\cf{j}{\bun}{\ub} = \util{j}{\cdot,\bun}^{-1}(\ub)$ denote the level of consumption of money (or *s*avings) needed to obtain utility level $\ub$ given $\bun.$[^12] By construction, we have that $$\dH{j}{\p}{\ub} = \argmin_{\bun \in \Feas{j}} \left\{\cf{j}{\bun}{\ub} + \p \cdot \bun\right\}.$$ It follows that agent $j$’s expenditure minimization problem at utility level $\ub$ can be written as a quasilinear maximization problem for the valuation $-\cf{j}{\cdot}{\ub}$, which we therefore call the Hicksian valuation. 
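This change of variables can be made concrete numerically. The sketch below is an illustration under stated assumptions, not code from the paper: it recovers the money requirement $\cf{j}{\bun}{\ub}$ by bisection on the money coordinate of an arbitrary utility function, and then minimizes $\cf{j}{\bun}{\ub} + \p \cdot \bun$ over bundles; the toy quasilogarithmic utility and all names are hypothetical.

```python
import math

def savings(u_fn, x, ubar, lo=1e-9, hi=1e9, tol=1e-10):
    """Invert u_fn(m, x) = ubar in the money coordinate m by bisection;
    this is the savings function s(x; ubar) of the text (assumes the
    root lies in [lo, hi])."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if u_fn(mid, x) < ubar:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def hicksian_demand(u_fn, bundles, p, ubar):
    """Expenditure-minimising bundles: argmin_x { s(x; ubar) + p.x }."""
    cost = {x: savings(u_fn, x, ubar) + sum(pi * xi for pi, xi in zip(p, x))
            for x in bundles}
    best = min(cost.values())
    return [x for x, c in cost.items() if abs(c - best) < 1e-6]

# Toy quasilogarithmic utility with quasivaluation V(x) < 0, one good:
V = {(0,): -2.0, (1,): -1.0}
u = lambda m, x: math.log(m) - math.log(-V[x])
```

Raising the utility level rescales the expenditure-minimization objective and can flip the demanded bundle at fixed prices, which is how the income effect shows up in this construction.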
The *Hicksian valuation* of agent $j$ at utility level $\ub$ is $\valHDef{j} = -\cf{j}{\cdot}{\ub}$. Note that $\cf{j}{\cdot}{\ub}$ is continuous and strictly increasing in $\ub,$ and hence $\valHDef{j}$ is continuous and strictly decreasing in $\ub$. The following lemma formally states that agent $j$’s Hicksian demand at utility level $\ub$ is the demand correspondence of an agent with valuation $\valHDef{j}$. \[lem:dHvalH\] For all price vectors $\p$ and utility levels $\ub$, we have that $$\dH{j}{\p}{\ub} = \argmin_{\bun \in \Feas{j}} \left\{\cf{j}{\bun}{\ub} + \p \cdot \bun\right\} = \argmax_{\bun \in \Feas{j}} \left\{\valH{j}{\bun}{\ub} - \p \cdot \bun\right\}.$$ As $\utilFn{j}(\bunn)$ is strictly increasing in $\numer,$ we have that $$\dH{j}{\p}{\ub} = \left\{\bun^* \lgiv \bunn^* \in \argmin_{\bunn \in \Feans{j} \mid \util{j}{\bunn} = \ub} \pall \cdot \bunn\lgivend\right\}.$$ Applying the substitution $\numer = \cf{j}{\bun}{\ub}=-\valH{j}{\bun}{\ub}$ to remove the constraint from the minimization problem yields the lemma. It follows from Lemma \[lem:dHvalH\] that an agent’s Hicksian valuation at a utility level gives rise to a quasilinear utility function that reflects the agent’s substitution effects at that utility level. Lemma \[lem:dHvalH\] also yields a relationship between the family of Hicksian valuations and income effects. Indeed, by Fact \[fac:dualDem\], an agent’s income effects correspond to changes in her Hicksian demand induced by changes in her utility level, holding prices fixed. By Lemma \[lem:dHvalH\], these changes in Hicksian demand reflect the changes in the Hicksian valuation that are induced by the changes in utility levels. Hence, the Hicksian valuations at each utility level determine an agent’s substitution effects, while the variation of the Hicksian valuations with the utility level captures her income effects. To illustrate how an agent’s family of Hicksian valuations reflects her income effects, we consider the cases of quasilinear and quasilogarithmic utility.
With quasilinear utility, the Hicksian valuation at utility level $\ub$ is $\valH{j}{\bun}{\ub} = \val{j}{\bun} - \ub.$ Changes in $\ub$ do not affect the relative values of bundles under $\valH{j}{\cdot}{\ub}$, so changes in the utility level do not affect Hicksian demand. Indeed, there are no income effects. By construction, a utility function $\util{j}{\bunn}$ is quasilinear in $\numer$ if and only if $\cf{j}{\bun}{\ub}$ is quasilinear in $\ub$—or, equivalently, $\valH{j}{\bun}{\ub}$ is quasilinear in $-\ub$. In general, it follows from Fact \[fac:dualDem\] and Lemma \[lem:dHvalH\] that agent $j$’s preferences exhibit income effects if and only if $\cf{j}{\bun}{\ub}$—or, equivalently, $\valH{j}{\bun}{\ub}$—is not additively separable between $\bun$ and $\ub$. \[eg:quasilogDualVal\] With quasilogarithmic utility, the Hicksian valuation at utility level $\ub$ is $\valH{j}{\bun}{\ub} = e^{\ub} \quasival{j}{\bun}.$ In this case, each Hicksian valuation is a positive linear transformation of $\quasivalFn{j}$. Income effects are reflected by the fact that $\valH{j}{\bun}{\ub}$ is not additively separable between $\bun$ and $\ub$. We use Lemma \[lem:dHvalH\] to convert preferences with income effects into families of valuations. It turns out that each continuously decreasing family of valuations is the family of Hicksian valuations of a utility function, so a utility function can be represented equivalently by a family of Hicksian valuations. \[fac:dualPrefs\] Let $F: \Feas{j} \times \feasUtil{j} \to (-\infty,-\feas{j})$ be a function.
There exists a utility function $\utilFn{j}: \Feans{j} \to \feasUtil{j}$ whose Hicksian valuation at each utility level $\ub$ is $F(\cdot,\ub)$ if and only if for each $\bun \in \Feas{j},$ the function $F(\bun,\cdot)$ is continuous, strictly decreasing, and satisfies[^13][^14] $$\label{eq:vlimits} \lim_{\ub \to (\minu{j})^+} F(\bun,\ub) = -\feas{j} \quad \text{and} \quad \lim_{\ub \to (\maxu{j})^-} F(\bun,\ub) = -\infty.$$ Finally, we combine the families of Hicksian valuations to form a family of Hicksian economies, in each of which utility is transferable and agents choose consumption bundles to minimize the expenditure of obtaining given utility levels. The *Hicksian economy for a profile of utility levels $(\ubj)_{j \in J}$* is the transferable utility economy in which agent $j$’s valuation is $\valH{j}{\cdot}{\ubj}$. The family of Hicksian economies consists of the “duals” of the original economy, in which income effects have been removed and valuations are given by substitution effects. Like the construction of the Hicksian valuations, the construction of the Hicksian economies allows us to convert economies with income effects to families of economies with transferable utility and is a key step of our analysis. The Equilibrium Existence Duality {#sec:EED} ================================= We now turn to the analysis of competitive equilibrium in exchange economies. An *endowment allocation* consists of an endowment $\bunndowj \in \Feans{j}$ for each agent $j$ such that $\sum_{j \in J} \bundowj = \tot,$ where $\tot$ is the total endowment. Given an endowment allocation, a competitive equilibrium specifies a price vector such that markets for goods clear when agents maximize utility. By Walras’s Law, it follows that the market for money clears as well. Given an endowment allocation $(\bunndowj)_{j \in J}$, a *competitive equilibrium* consists of a price vector $\p$ and a bundle $\bunj \in \dM{j}{\p}{\bunndowj}$ for each agent such that $\sum_{j \in J} \bunj = \tot$. In transferable utility economies, a competitive equilibrium consists of a price vector $\p$ and a bundle $\bunj \in \dQL{j}{\p}$ for each agent such that $\sum_{j \in J} \bunj = \tot$. In this case, the endowment allocation does not affect competitive equilibria because endowments do not affect (Marshallian) demand.
We therefore omit the endowment allocation when considering competitive equilibria in transferable utility economies in which an endowment allocation exists—i.e., $\tot \in \sum_{j \in J} \Feas{j}$. On the other hand, the total endowment $\tot$ affects competitive equilibria even when utility is transferable. Recall that utility is transferable in the Hicksian economies. Furthermore, by Lemma \[lem:dHvalH\], a competitive equilibrium in the Hicksian economy for a profile $(\ubj)_{j \in J}$ of utility levels consists of a price vector $\p$ and a bundle $\bunj \in \dH{j}{\p}{\ubj}$ for each agent such that $\sum_{j \in J} \bunj = \tot$. Thus, agents act as if they minimize expenditure in competitive equilibria in the Hicksian economies.[^15] Building on Fact \[fac:dualDem\] and Lemma \[lem:dHvalH\], our Equilibrium Existence Duality connects the equilibrium existence problems in the original economy (which can feature income effects) and the Hicksian economies (in which utility is transferable). Specifically, we show that competitive equilibrium always exists in the original economy if and only if it always exists in the Hicksian economies. Here, we hold agents’ preferences and the total endowment (of goods) fixed but allow the endowment allocation to vary. \[thm:existDualExchange\] Suppose that the total endowment and the sets of feasible bundles are such that an endowment allocation exists. Competitive equilibria exist for all endowment allocations if and only if competitive equilibria exist in the Hicksian economies for all profiles of utility levels. By Lemma \[lem:dHvalH\], agents’ substitution effects determine their preferences in each Hicksian economy. Therefore, Theorem \[thm:existDualExchange\] tells us that *any* condition that ensures the existence of competitive equilibrium can be written as a condition on substitution effects alone. That is, substitution effects fundamentally determine whether competitive equilibrium exists. Both directions of Theorem \[thm:existDualExchange\] also have novel implications for the analysis of competitive equilibrium in economies with indivisibilities.
As demands in the Hicksian economies are given by Hicksian demand in the original economy (Lemma \[lem:dHvalH\]), the “if” direction of Theorem \[thm:existDualExchange\] implies that every condition on demand $\dQLFn{j}$ that guarantees the existence of competitive equilibrium in settings with transferable utility translates into a condition on *Hicksian* demand $\dHFn{j}$ that guarantees the existence of competitive equilibrium in settings with income effects. In Sections \[sec:subst\] and \[sec:demTypes\], we use the “if” direction of Theorem \[thm:existDualExchange\] to obtain new domains for the existence of competitive equilibrium with income effects from previous results on the existence of competitive equilibrium in settings with transferable utility [@KeCr:82; @BaKl:19]. Conversely, the “only if” direction of Theorem \[thm:existDualExchange\] shows that if a condition on demand defines a maximal domain for the existence of competitive equilibrium in settings with transferable utility, then the translated condition on Hicksian demand defines a maximal domain for the existence of competitive equilibrium in settings with income effects. In Sections \[sec:subst\] and \[sec:demTypes\], we also use this implication to derive new maximal domain results for settings with income effects. To prove the “only if” direction of Theorem \[thm:existDualExchange\], we exploit a version of the Second Fundamental Theorem of Welfare Economics for settings with indivisibilities. To understand the connection to the existence problem for the Hicksian economies, note that the existence of competitive equilibrium in the Hicksian economies is equivalent to the conclusion of the Second Welfare Theorem—i.e., that each Pareto-efficient allocation can be supported in an equilibrium with endowment transfers—as the following lemma shows.[^16] \[lem:dualEconSWT\] Suppose that the total endowment and the sets of feasible bundles are such that an endowment allocation exists. Competitive equilibria exist in the Hicksian economies for all profiles of utility levels if and only if, for each Pareto-efficient allocation $(\bunnj)_{j \in J}$ with $\sum_{j \in J} \bunj = \tot$, there exists a price vector $\p$ such that $\bunnj \in \dM{j}{\p}{\bunnj}$ for all agents $j$.
We prove Lemma \[lem:dualEconSWT\] in Appendix \[app:EEDproof\]. Intuitively, as utility is transferable in the , variation in utility levels between  plays the same role as endowment transfers in the Second Welfare Theorem. It is well-known that the conclusion of the Second Welfare Theorem holds whenever  exist for all  [@maskin2008fundamental].[^17] It follows that  always exists in the  whenever it always exists in the original economy, which is the “only if" direction of Theorem \[thm:existDualExchange\]. We use a different argument to prove the “if" direction. Our strategy is to show that there exists a profile of utility levels and a  in the corresponding  in which all agents’ expenditures equal their budgets in the original economy. To do so, we apply a topological fixed-point argument that is similar in spirit to standard proofs of the existence of . Specifically, we consider an auctioneer who, for a given profile of candidate equilibrium utility levels, evaluates agents’ expenditures over all  in the  and adjusts candidate equilibrium utility levels upwards (resp. downwards) for agents who under- (resp. over-) spend their budgets.[^18] The existence of  in the  ensures that the process is nonempty-valued, and the transferability of utility in the  ensures that the process is convex-valued. Kakutani’s Fixed Point Theorem implies the existence of a fixed-point utility profile. By construction, there exists a  in the corresponding  at which agents’ expenditures equal the values of their endowments. By Lemma \[lem:dHvalH\], agents must be maximizing utility given their endowments at this equilibrium, and hence one obtains a  in the original economy. The details of the argument are in Appendix \[app:EEDproof\]. Examples -------- We next illustrate the power of Theorem \[thm:existDualExchange\] using two examples.
Our first example is a “housing market” in which agents have unit-demand preferences, may be endowed with a house, and can experience arbitrary income effects. We can use Theorem \[thm:existDualExchange\] to reduce the existence problem to the assignment game of [@koopmans1957assignment]—reproving a result originally due to [@quinzii1984core]. \[eg:house\] For each agent $j,$ let $\Feas{j} \subseteq \{\zero\} \cup \{\e{i} \mid i \in I\}$ be nonempty. In this case, in , utility is transferable and agents have unit demand for the goods. As the  does not affect  when utility is transferable, the results of [@koopmans1957assignment] imply that  exist in the  for all profiles of utility levels (provided that  exists). Hence, Theorem \[thm:existDualExchange\] implies that  exist for all endowment allocations—even in the presence of income effects. In the second example, we revisit the quasilogarithmic utility functions from Example \[eg:quasilog\]. We provide sufficient conditions on agents’ quasivaluations for  to exist. These conditions are related to, but not in general implied by, the conditions developed in Sections \[sec:subst\] and \[sec:demTypes\]. \[eg:quasilogExist\] For each agent $j,$ let $\quasivalFn{j}: \Feas{j} \to (-\infty,0)$ be a quasivaluation. Let agent $j$’s utility function be quasilogarithmic for the quasivaluation $\quasivalFn{j}$, as in Example \[eg:quasilog\]. In this case, agent $j$’s  at each utility level is a positive linear transformation of $\quasivalFn{j}$ (Example \[eg:quasilogDualVal\]). Hence, by Theorem \[thm:existDualExchange\],  exist for all  as long as  exists when utility is transferable and each agent $j$’s valuation is an (agent-dependent) positive linear transformation of $\quasivalFn{j}$—e.g., if the quasivaluations $\quasivalFn{j}$ are all  valuations [@MiSt:09], or all valuations of a unimodular demand type [@BaKl:19]. 
Additionally, in the case in which one unit of each good is available in total (i.e., $\totComp{i} = 1$ for all goods $i$), [@CaOzPa:15] showed that  exists when utility is transferable and all agents have sign-consistent tree valuations. Hence, if one unit of each good is available in total, then Theorem \[thm:existDualExchange\] implies that  exist with quasilogarithmic utility for all  if all agents’ quasivaluations are sign-consistent tree valuations. In the remainder of the paper, we use Theorem \[thm:existDualExchange\] to develop novel conditions on preferences that ensure the existence of . The Case of Substitutes {#sec:subst} ======================= In this section, we apply the Equilibrium Existence Duality (Theorem \[thm:existDualExchange\]) to prove a new result regarding the existence of  with substitutable indivisible goods and income effects: we show that a form of *net*  is sufficient for, and in fact defines a maximal domain for, the existence of . We begin by reviewing previous results on the existence of  under (gross) . We then derive our existence theorem for  and relate it to the previous results. In this section, we focus on the case in which each agent demands at most one unit of each good. Formally, we say that an agent $j$ *demands at most one unit of each good* if $\Feas{j} \subseteq \{0,1\}^I$. We extend to the case in which agents can demand multiple units of some goods in Section \[sec:ssub\].  and the Existence of {#sec:subOld} ---------------------- We recall a notion of  for preferences over indivisible goods from [@FlJaJaTe:19], which extends the  condition from classical demand theory. It requires that *uncompensated* increases in the price of a good weakly raise demand for all other goods. With quasilinear utility, the modifier “gross” can be dropped—as in classical demand theory (see also Footnote 1 in [@KeCr:82]). \[def:gsub\] Suppose that agent $j$ demands at most one unit of each good. 1. 
\[part:gsub\] A utility function $\utilFn{j}$ is a * utility function at endowment $\bundow \in \Feas{j}$ of goods* if for all money endowments $\numerdow > \feas{j}$, price vectors $\p$, and $\lambda > 0,$ whenever $\dM{j}{\p}{\bunndow} = \{\bun\}$ and $\dM{j}{\p + \lambda \e{i}}{\bunndow} = \{\bunpr\},$ we have that $\bunprComp{k} \ge \bunComp{k}$ for all goods $k \not= i$.[^19] 2. A * valuation* is a valuation for which the corresponding quasilinear utility function is a  utility function.[^20] Technically, Definition \[def:gsub\] imposes a  condition on the locus of prices at which Marshallian demand is single-valued—following [@AuMi:02], [@HaKoNiOsWe:11], [@BaKl:19], and [@FlJaJaTe:19].[^21] It is well-known that when utility is transferable,  exists under . \[fac:subExist\] Suppose that utility is transferable and that  exists. If each agent demands at most one unit of each good and has a  valuation, then  exists.[^22] Moreover, the class of  valuations forms a maximal domain for the existence of  in transferable utility economies. Specifically, if an agent has a non- valuation, then  may not exist when the other agents have  valuations. Technically, we require that one unit of each good be present among agents’ endowments (i.e., that $\totComp{i} = 1$ for all goods $i$) as complementarities between goods that are not present are irrelevant for the existence of . \[fac:subMaxDomain\] Suppose that $\totComp{i} = 1$ for all goods $i$. 
If $|J| \ge 2$, agent $j$ demands at most one unit of each good, and $\valFn{j}$ is not a  valuation, then there exist sets $\Feas{k} \subseteq \{0,1\}^I$ of feasible bundles and  valuations $\valFn{k}: \Feas{k} \to \mathbb{R}$ for agents $k \not= j$, for which there exists  but no .[^23] While Fact \[fac:subMaxDomain\] shows that there is no domain strictly containing the domain of  valuations for which the existence of  can be guaranteed in transferable utility economies, it does not rule out the existence of other domains for which the existence of  can be guaranteed. For example, [@SuYa:06], [@CaOzPa:15], and [@BaKl:19] gave examples of domains other than  for which the existence of  is guaranteed. Generalizing Fact \[fac:subExist\] to settings with income effects, [@FlJaJaTe:19] showed that  exists for  $(\bunndowj)_{j \in J}$ if each agent $j$’s utility function is a  utility function at her endowment $\bundowj$ of goods.[^24] However, [@FlJaJaTe:19] did not offer a maximal domain result for . In the next section, we show that  does not actually drive existence of  with substitutable indivisible goods.  and the Existence of {#sec:nsub} ---------------------- In light of Theorem \[thm:existDualExchange\] and Fact \[fac:subExist\],  exists if agents’ Hicksian demands satisfy an appropriate  condition—i.e., if preferences satisfy a *net* analogue of . We build on Definition \[def:gsub\] to define a concept of  for settings with indivisibilities.  is a version of the  condition from classical consumer theory. It requires that *compensated* increases in the price of a good (i.e., price increases that are offset by compensating transfers) weakly raise demand for all other goods. \[def:nsub\] Suppose that agent $j$ demands at most one unit of each good. 
A utility function $\utilFn{j}$ is a * utility function* if for all utility levels $\ub$, price vectors $\p$, and $\lambda > 0,$ whenever $\dH{j}{\p}{\ub} = \{\bun\}$ and $\dH{j}{\p + \lambda \e{i}}{\ub} = \{\bunpr\}$, we have that $\bunprComp{k} \ge \bunComp{k}$ for all goods $k \not= i$. For quasilinear utility functions,  coincides with (gross) . More generally,  can be expressed as a condition on . \[rem:netSubDual\] By Lemma \[lem:dHvalH\], if an agent demands at most one unit of each good, then she has a  utility function if and only if her  at all utility levels are  valuations. We can apply Fact \[fac:dualPrefs\] and Remark \[rem:netSubDual\] to construct large classes of  preferences with income effects from families of  valuations. There are several rich families of  valuations, including endowed assignment valuations [@HaMi:05] and matroid-based valuations [@ostrovsky2015gross]. This leads to a large class of quasilogarithmic  utility functions. \[eg:quasilogSubs\] A quasilogarithmic utility function $\utilFn{j}$ is a  utility function if and only if the quasivaluation $\quasivalFn{j}$ is a  valuation.[^25] More generally, in light of Fact \[fac:dualPrefs\] and Remark \[rem:netSubDual\], each family of  valuations leads to a class of  utility functions with income effects consisting of the utility functions whose  all belong to the family. These classes are defined by conditions on substitution effects and do not restrict income effects. By contrast,  places substantial restrictions on the form of income effects.[^26] To understand the difference between gross and , we compare the conditions in a setting in which agents have unit demand for goods. \[eg:houseGrossNet\] Consider an agent, Martine, who owns a house $i_1$ and is considering selling it to purchase (at most) one of houses $i_2$ and $i_3$. If Martine experiences income effects, then her choice between $i_2$ and $i_3$ generally depends on the price she is able to procure for her house $i_1$. 
For example, if $i_3$ is a more luxurious house than $i_2,$ then Martine may only demand $i_3$ if the value of her endowment is sufficiently large—i.e., if the price of her house $i_1$ is sufficiently high. As a result, when Martine is endowed with $i_1$, she does not generally have  preferences: increases in the price of $i_1$ can lower Martine’s demand for $i_2$. That is, Martine can regard $i_2$ as a gross complement for $i_1$. In contrast, Martine has  preferences—no compensated increase in the price of $i_1$ could make Martine stop demanding $i_2$—a condition that holds generally in the housing market economy.[^27] Note also that, unlike ,  generally depends on endowments: if Martine were not endowed with a house, she would have  preferences [@kaneko1982central; @kaneko1983housing; @DeGa:85]. While Example \[eg:houseGrossNet\] shows that  does not imply , it turns out that  implies . \[prop:ssubst\] If agent $j$ demands at most one unit of each good and there exists an endowment $\bundow$ of goods at which $\utilFn{j}$ is a  utility function, then $\utilFn{j}$ is a  utility function. Proposition \[prop:ssubst\] and Example \[eg:houseGrossNet\] show that  (at any one endowment of goods) implies  but places additional restrictions on income effects. Nevertheless, the restrictions on substitution effects alone, entailed by , are sufficient for the existence of . \[thm:netSubExist\] If all agents demand at most one unit of each good and have  utility functions, then  exist for all . Theorem \[thm:netSubExist\] is an immediate consequence of the Equilibrium Existence Duality and the existence of  in transferable utility economies under . Remark \[rem:netSubDual\] implies that the agents’  at all utility levels are  valuations. Hence, Fact \[fac:subExist\] implies that  exist in the  for all profiles of utility levels if  exists. The theorem follows by the “if” direction of Theorem \[thm:existDualExchange\].
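To make the gross-versus-net distinction of Example \[eg:houseGrossNet\] concrete, the following minimal sketch works through a hypothetical numerical instance. It takes utility to be quasilogarithmic, $\log x_0 - \log(-q(\bun))$ for money $x_0 > 0$ and a quasivaluation $q < 0$; the quasivaluation numbers and prices below are invented for illustration and are not part of the example.

```python
import math

# Hypothetical instance of the housing example: Martine owns house i1 and may
# instead buy i2 or the more luxurious i3. Assumed utility (illustrative):
# u(x0, x) = log(x0) - log(-q(x)), with money holdings x0 > 0.
HOUSES = ("none", "i1", "i2", "i3")
q = {"none": -9, "i1": -6, "i2": -4, "i3": -1}   # invented quasivaluation

def marshallian(prices, money=1):
    """Martine's uncompensated demand, given her endowment of i1 and money."""
    wealth = money + prices["i1"]                 # i1 can be sold at its price
    utils = {h: math.log(wealth - prices[h]) - math.log(-q[h])
             for h in HOUSES if wealth - prices[h] > 0}
    return max(utils, key=utils.get)

def hicksian(prices, ubar):
    """Cheapest house delivering utility ubar (expenditure minimization)."""
    expend = {h: prices[h] + math.exp(ubar) * (-q[h]) for h in HOUSES}
    return min(expend, key=expend.get)

p_low  = {"none": 0, "i1": 3, "i2": 2, "i3": 6}
p_high = {"none": 0, "i1": 7, "i2": 2, "i3": 6}   # only i1's price rises

# Gross complementarity: a higher price for i1 raises Martine's wealth, so her
# uncompensated demand switches away from i2, toward the luxury house i3.
print(marshallian(p_low), marshallian(p_high))    # i2 i3

# Net substitutability: after a compensated price change, demand for i2 does
# not fall when i1's price rises.
ubar = math.log(1 / 2)
print(hicksian(p_low, ubar), hicksian(p_high, ubar))   # i2 i2
```

Only the price of $i_1$ changes between the two price vectors, so the switch of uncompensated demand from $i_2$ to $i_3$ is driven purely by the wealth effect of selling $i_1$ at a higher price.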
As  implies  (Proposition \[prop:ssubst\]), the existence of  under  is a special case of Theorem \[thm:netSubExist\]. But Theorem \[thm:netSubExist\] is more general: as Example \[eg:houseGrossNet\] shows,  allows for forms of gross complementarities between goods, in addition to . The following example illustrates how the distinction between  and  relates to the existence of  when agents can demand multiple goods. \[eg:grossVersusNetNumerical\] There are two goods and the total endowment is $\tot = (1,1)$. There are two agents, which we call $j$ and $k,$ and $j$’s feasible set of consumption bundles of goods is $\Feas{j} = \{0,1\}^2$. We consider the price vectors $\p = (2,2)$ and $\ppr = (4,2)$ and consider two examples in which agent $j$’s Marshallian demand changes from $(1,1)$ to $(0,0)$ as prices change from $\p$ to $\ppr$—a . But the consequences for the existence of  are different across the two cases. In Case \[eg:numericalComp\], the  reflects a net complementarity for $j$, and  may not exist if $k$ sees goods as net substitutes. In Case \[eg:numericalLog\], the  reflects only an income effect for $j$, as in Example \[eg:houseGrossNet\], so  is guaranteed to exist if $k$ sees goods as net substitutes. 1. \[eg:numericalComp\] Suppose that $j$ has a quasilinear utility function with valuation given by $$\val{j}{\bun} = \begin{cases} 0 & \text{if } \bun = (0,0),(0,1),(1,0)\\ 5 & \text{if } \bun = (1,1). \end{cases}$$ Here, $\valFn{j}$ is not a  valuation because $\dQL{j}{\p} = \{(1,1)\}$ while $\dQL{j}{\ppr} = \{(0,0)\}$: i.e., increasing the price of the first good can lower $j$’s demand for the second good. If $\Feas{k} = \{(0,0),(0,1),(1,0)\}$ and agent $k$ has a quasilinear utility function with a  valuation given by $$\label{eq:kUnitDem} \val{k}{\bun} = \begin{cases} 0 & \text{if } \bun = (0,0)\\ 4 & \text{if } \bun = (1,0)\\ 3 & \text{if } \bun = (0,1), \end{cases}$$ then no  exists.[^28] 2.
\[eg:numericalLog\] Suppose instead that $\utilFn{j}$ is quasilogarithmic (as defined in Example \[eg:quasilog\]) with quasivaluation given by $$\quasival{j}{\bun} = \begin{cases} -11 & \text{if } \bun = (0,0)\\ -7 & \text{if } \bun = (0,1)\\ -4 & \text{if } \bun = (1,0)\\ -1 & \text{if } \bun = (1,1). \end{cases}$$ At the endowment $\bundowj = (0,1)$ of goods, $\utilFn{j}$ is not a  utility function as, letting $\numerdowj = 3,$ we have that $\dM{j}{\p}{\bunndowj} = \{(1,1)\}$ while $\dM{j}{\ppr}{\bunndowj} = \{(0,0)\}$.[^29] That is, increasing the price of the first good can lower $j$’s Marshallian demand for the second good. By contrast, as $\quasivalFn{j}$ is a  valuation, Example \[eg:quasilogSubs\] implies that $\utilFn{j}$ is a  utility function: the  is entirely due to an income effect. For example, at the utility level $$\ub = \max_{\bunn \in \Feans{j} \mid \pprall \cdot \bunn \le \pprall \cdot \bunndowj} \util{j}{\bunn} = \log \frac{5}{11},$$ we have that $\dH{j}{\p}{\ub} = \{(1,0)\}$ and that $\dH{j}{\ppr}{\ub} = \{(0,0)\}$,[^30] so the decrease in the Marshallian demand for the second good as prices change from $\p$ to $\ppr$ at the endowment $\bunndowj$ reflects an income effect. By Theorem \[thm:netSubExist\],  exists whenever $k$ has a  utility function. For example, if $k$ has a quasilinear utility function with a  valuation given by Equation (\[eq:kUnitDem\]), then for the  defined by $\bundowj = (0,1)$, $\bundowag{k} = (1,0)$, and $\numerdowj = \numerdowag{k} = 3,$ the price vector $(3,2)$ and the allocation of goods defined by $\bunj = (1,0)$ and $\bunag{k} = (0,1)$ comprise a .[^31] In Case \[eg:numericalLog\], agent $j$ has  preferences—leading to the guaranteed existence of  when agent $k$ has  preferences. By contrast, in Case \[eg:numericalComp\], agent $j$ does not have  preferences—and  may not exist when $k$ has  preferences. In general, net substitutability forms a maximal domain for the existence of .
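Both cases of Example \[eg:grossVersusNetNumerical\] can be checked by direct enumeration. The sketch below is a minimal numerical check: it takes the quasilogarithmic utility of Case \[eg:numericalLog\] to be $\log x_0 - \log(-q(\bun))$ (money $x_0 > 0$, quasivaluation $q$), a reconstruction that reproduces the demand sets and the utility level $\log\frac{5}{11}$ reported above but is stated here as an assumption. The grid search illustrates, without proving, the nonexistence claim of Case \[eg:numericalComp\].

```python
import math
from itertools import product

feas = list(product((0, 1), repeat=2))
dot = lambda p, x: sum(pi * xi for pi, xi in zip(p, x))

# --- Case 1: quasilinear (transferable) utility. ---
V_j = {(0, 0): 0, (0, 1): 0, (1, 0): 0, (1, 1): 5}      # complements for j
feas_k = [(0, 0), (1, 0), (0, 1)]
V_k = {(0, 0): 0, (1, 0): 4, (0, 1): 3}                 # substitutes for k

def ql_demand(V, F, p):
    s = {x: V[x] - dot(p, x) for x in F}
    m = max(s.values())
    return {x for x, v in s.items() if abs(v - m) < 1e-9}

print(ql_demand(V_j, feas, (2, 2)), ql_demand(V_j, feas, (4, 2)))
# {(1, 1)} {(0, 0)} : a net complementarity for j

# No price on a fine nonnegative grid clears the total endowment (1, 1):
def clears(p):
    return any(tuple(a + b for a, b in zip(xj, xk)) == (1, 1)
               for xj in ql_demand(V_j, feas, p)
               for xk in ql_demand(V_k, feas_k, p))
grid = [i / 10 for i in range(81)]
print(any(clears((p1, p2)) for p1 in grid for p2 in grid))   # False

# --- Case 2: assumed quasilogarithmic form u(x0, x) = log(x0) - log(-q(x)). ---
q = {(0, 0): -11, (0, 1): -7, (1, 0): -4, (1, 1): -1}

def marshallian(p, w_goods, w_money):
    wealth = w_money + dot(p, w_goods)
    u = {x: math.log(wealth - dot(p, x)) - math.log(-q[x])
         for x in feas if wealth - dot(p, x) > 0}
    m = max(u.values())
    return {x for x, v in u.items() if abs(v - m) < 1e-9}

def hicksian(p, ubar):
    e = {x: dot(p, x) + math.exp(ubar) * (-q[x]) for x in feas}
    m = min(e.values())
    return {x for x, v in e.items() if abs(v - m) < 1e-9}

print(marshallian((2, 2), (0, 1), 3), marshallian((4, 2), (0, 1), 3))
# {(1, 1)} {(0, 0)} : the same uncompensated demand change as in Case 1 ...
ubar = math.log(5 / 11)
print(hicksian((2, 2), ubar), hicksian((4, 2), ubar))
# {(1, 0)} {(0, 0)} : ... but compensated demand for good 2 never falls
```

The contrast is exactly the one discussed above: the same uncompensated demand change reflects a net complementarity in Case 1 and a pure income effect in Case 2.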
Specifically, if an agent does not have  preferences, then  may not exist when the other agents have  quasilinear preferences. \[prop:netSubstMaxDomain\] Suppose that $\totComp{i} = 1$ for all goods $i$. If $|J| \ge 2$, agent $j$ demands at most one unit of each good, and $\utilFn{j}$ is not a  utility function, then there exist sets $\Feas{k} \subseteq \{0,1\}^I$ of feasible bundles and  valuations $\valFn{k}: \Feas{k} \to \mathbb{R}$ for agents $k \not= j$, and  for which no  exists. Proposition \[prop:netSubstMaxDomain\] is an immediate consequence of the Equilibrium Existence Duality and the fact that  defines a maximal domain for the existence of  with transferable utility. By Remark \[rem:netSubDual\], there exists a utility level $\ub$ at which agent $j$’s  $\valHDef{j}$ is not a  valuation. Fact \[fac:subMaxDomain\] implies that there exist feasible sets $\Feas{k} \subseteq \{0,1\}^I$ and  valuations $\valFn{k}$ for agents $k \not= j$, for which  exists but no  would exist with transferable utility if agent $j$’s valuation were $\valHDef{j}$. With those sets $\Feas{k}$ of feasible bundles and valuations $\valFn{k}$ for agents $k \not= j,$ the “only if” direction of Theorem \[thm:existDualExchange\] implies that there exists  for which no  exists. Proposition \[prop:netSubstMaxDomain\] entails that any domain of preferences that contains all  quasilinear preferences and guarantees the existence of  must lie within the domain of  preferences. Therefore, Proposition \[prop:netSubstMaxDomain\] and Theorem \[thm:netSubExist\] suggest that  is the most general way to incorporate income effects into a  condition to ensure the existence of . By contrast, the relationship between the nonexistence of  and failures of  depends on why  fails.  can fail due to substitution effects that reflect net complementarities, as in Example \[eg:grossVersusNetNumerical\]\[eg:numericalComp\], or due to income effects, as in Example \[eg:grossVersusNetNumerical\]\[eg:numericalLog\]. 
If the failure of  reflects a net complementarity, then Proposition \[prop:netSubstMaxDomain\] tells us that  may not exist if the other agents have  quasilinear preferences, as in Example \[eg:grossVersusNetNumerical\]\[eg:numericalComp\]. On the other hand, if the failure of  is only due to income effects, then Theorem \[thm:netSubExist\] tells us that  exists if the other agents have  preferences (e.g.,  quasilinear preferences), as in Example \[eg:grossVersusNetNumerical\]\[eg:numericalLog\]. Demand Types and the Unimodularity Theorem {#sec:demTypes} ========================================== In this section, we characterize exactly what conditions on patterns of substitution effects guarantee the existence of . Specifically, we consider the classification of valuations into “demand types” based on sets of vectors that summarize the possible ways in which demand can change in response to a small generic price change. We first review the definition of demand types from [@BaKl:19]. We then extend the concept of demand types to settings with income effects, and develop a version of the Unimodularity Theorem that allows for income effects and characterizes which demand types guarantee the existence of  (see also [@DaKoMu:01]). A special case of the Unimodularity Theorem with Income Effects extends Theorem \[thm:netSubExist\] to settings in which agents can demand multiple units of some goods. Demand Types and the Unimodularity Theorem with Transferable Utility {#sec:demTypesTU} -------------------------------------------------------------------- We first review the concept of demand types for quasilinear settings, as developed by @BaKl:19. An integer vector is *primitive* if the greatest common divisor of its components is 1. By focusing on the directions of demand changes, we can restrict to primitive demand change vectors. A ** is a set $\mathcal{D} \subseteq \mathbb{Z}^I$ of primitive  such that if $\dvec \in\mathcal{D}$ then $- \dvec \in\mathcal{D}$.
\[def:demTypeTU\] Let $\valFn{j}$ be a valuation. 1. A bundle $\bun$ is *uniquely demanded by agent $j$* if there exists a price vector $\p$ such that $\dQL{j}{\p} = \{\bun\}.$ 2. A pair $\{\bun,\bunpr\}$ of uniquely demanded bundles are *adjacently demanded by agent $j$* if there exists a price vector $\p$ such that $\dQL{j}{\p}$ contains $\bun$ and $\bunpr$ but no other bundle that is uniquely demanded by agent $j$. 3. If $\mathcal{D}$ is a , then $\valFn{j}$ is *of demand type $\mathcal{D}$* if for all pairs $\{\bun,\bunpr\}$ that are adjacently demanded by agent $j$, the difference $\bunpr - \bun$ is a multiple of an element of $\mathcal{D}$.[^32] For intuition, suppose that a small price change causes a change in demand. Then, generically, demand changes between adjacently demanded bundles. Thus, the demand type vectors represent the possible directions of changes in demand in response to small generic price changes (see Proposition 3.3 in [@BaKl:19] for a formal statement). To illustrate Definition \[def:demTypeTU\], we consider an example. \[eg:demTypeDef\] Suppose that there are two goods and let $$\Feas{j} = \{0,1,2,3\}^2 \ssm \{(2,3),(3,2),(3,3)\}.$$ Consider the valuation defined by $\val{j}{\bun} = \bunComp{1} + \bunComp{2}$. As Figure \[fig:demTypeDef\] illustrates, the uniquely demanded bundles are $(0,0)$, $(0,3),$ $(1,3),$ $(3,0)$, and $(3,1)$. When $1 = \pComp{1} < \pComp{2},$ agent $j$’s demand is $\dQL{j}{\p} = \{(0,0),(1,0),(2,0),(3,0)\}$. Hence, as the bundles $(1,0)$ and $(2,0)$ are not uniquely demanded, the bundles $(0,0)$ and $(3,0)$ are adjacently demanded. As a result, for $\valFn{j}$ to be of demand type $\mathcal{D}$, the set $\mathcal{D}$ must contain the vector $(1,0)$, which is the primitive  proportional to the demand change $(3,0) - (0,0) = (3,0)$. Similarly, the bundles $(0,0)$ and $(0,3)$ are adjacently demanded, and any  $\mathcal{D}$ such that $\valFn{j}$ is of demand type $\mathcal{D}$ must contain the vector $(0,1)$. 
When $\pComp{1} < \pComp{2} = 1,$ demand is $\dQL{j}{\p} = \{(3,0),(3,1)\}$. Hence, the bundles $(3,0)$ and $(3,1)$ are adjacently demanded. Similarly, the bundles $(0,3)$ and $(1,3)$ are adjacently demanded. These facts respectively imply, again, that $(0,1)$ and $(1,0)$ are in any  $\mathcal{D}$ such that $\valFn{j}$ is of demand type $\mathcal{D}$. Last, when $\pComp{1} = \pComp{2} < 1,$ agent $j$’s demand is $\dQL{j}{\p} = \{(1,3),(2,2),(3,1)\}$. Hence, as the bundle $(2,2)$ is not uniquely demanded, the bundles $(1,3)$ and $(3,1)$ are adjacently demanded. As a result, for $\valFn{j}$ to be of demand type $\mathcal{D}$, the set $\mathcal{D}$ must contain the vector $(1,-1)$, which is the primitive  proportional to the demand change $(3,1) - (1,3) = (2,-2)$. By contrast, the bundles $(0,0)$ and $(3,1)$ are not adjacently demanded: the only price vector at which agent $j$ demands them both is $\p = (1,1),$ but $\dQL{j}{1,1}$ also contains the uniquely demanded bundles $(0,3)$, $(1,3),$ and $(3,0)$. Similarly, the bundles $(0,0)$ and $(1,3)$ are not adjacently demanded. Hence, $$\mathcal{D} = \pm \left\{\begin{bmatrix}1 \\ 0\end{bmatrix}, \begin{bmatrix}0 \\ 1\end{bmatrix}, \begin{bmatrix} 1 \\ -1\end{bmatrix}\right\}$$ is the minimal  $\mathcal{D}$ such that $\valFn{j}$ is of demand type $\mathcal{D}$. Consider any valuation of the same demand type $\mathcal{D}$ as in Example \[eg:demTypeDef\], and a change in price from $\p$ to $\ppr=\p+\lambda \e{1}$ for some $\lambda>0$. For generic choices of $\p$ and $\lambda$, the demand at any price on the straight line from $\p$ to $\ppr$ either is unique, or demonstrates the adjacency of two bundles uniquely demanded at prices on this line. The change in demand between such bundles must therefore be a multiple of an element of $\mathcal{D}$ (by Definition \[def:demTypeTU\]).
Moreover, since only the price of good 1 is changing and that price is increasing, the law of demand entails that demand for good 1 must strictly decrease upon any change in demand.[^33] Thus, the change in demand between the two consecutive uniquely demanded bundles must be a positive multiple of either $(-1,0)$ or $(-1,1)$. Therefore, demand for good 2 must (weakly) increase, reflecting  between the goods. This two-good example is a special case of an important class of demand types. \[eg:ssubDemType\] The *strong substitutes* consists of all vectors in $\Z^I$ with at most one $+1$ component, at most one $-1$ component, and no other nonzero components. As illustrated in Example \[eg:demTypeDef\], this  captures one-to-one substitution between goods through demand type vectors with one component of $1$ and one component of $-1$. Furthermore, if an agent $k$ demands at most one unit of each good, then $\valFn{k}$ is a  valuation if and only if it is of the strong substitutes demand type (see Theorems 2.1 and 2.4 in [@fujishige2003note]). In settings in which agents can demand multiple units of each good, a form of concavity is needed to ensure the existence of . A valuation is concave if, under that valuation, each bundle of goods that is a convex combination of feasible bundles of goods is demanded at some price vector. For the formal definition, we let $\conv(T)$ denote the *convex hull* of a set $T \subseteq \mathbb{R}^I.$ \[def:concave\] A valuation $\valFn{j}$ is *concave* if for each bundle $\bun \in \conv(\Feas{j}) \cap \mathbb{Z}^I,$ there exists a price vector $\p$ such that $\bun \in \dQL{j}{\p}$. In Section \[sec:subOld\], we discussed that  guarantees the existence of  in transferable utility economies when agents demand at most one unit of each good. Generalizing that result, @BaKl:19 identified a necessary and sufficient condition for the concave valuations of a demand type to form a domain for the guaranteed existence of .
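The demand sets asserted in Example \[eg:demTypeDef\] can be reproduced by direct enumeration. In the sketch below, the three price vectors are arbitrary representatives of the regimes $1 = \pComp{1} < \pComp{2}$, $\pComp{1} < \pComp{2} = 1$, and $\pComp{1} = \pComp{2} < 1$ discussed above.

```python
from itertools import product

# Quasilinear demand for V(x) = x1 + x2 on the truncated grid of the example:
# {0,...,3}^2 without (2,3), (3,2), (3,3).
feas = [x for x in product(range(4), repeat=2)
        if x not in {(2, 3), (3, 2), (3, 3)}]

def demand(p):
    s = {x: (x[0] + x[1]) - p[0] * x[0] - p[1] * x[1] for x in feas}
    m = max(s.values())
    return {x for x, v in s.items() if abs(v - m) < 1e-9}

print(demand((1, 1.5)))    # {(0,0), (1,0), (2,0), (3,0)}: (0,0),(3,0) adjacent
print(demand((0.5, 1)))    # {(3,0), (3,1)}: adjacent, demand change (0,1)
print(demand((0.8, 0.8)))  # {(1,3), (2,2), (3,1)}: (1,3),(3,1) adjacent, (1,-1)
```

In the third regime the intermediate bundle $(2,2)$ is demanded but never uniquely demanded, which is why $(1,3)$ and $(3,1)$ are the adjacently demanded pair and $(1,-1)$ enters the demand type.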
A set of vectors in $\Z^I$ is *unimodular* if every linearly independent subset can be extended, using vectors of the set, to a basis for $\R^I$ such that any square matrix whose columns form such a basis has determinant $\pm 1$. For example, the  in Example \[eg:demTypeDef\] is unimodular, while the   $$\label{eq:subsComp} \pm \left\{\begin{bmatrix} 1 \\ -1\end{bmatrix}, \begin{bmatrix}1 \\ 1\end{bmatrix}\right\}$$ is not unimodular, because $$\left|\begin{matrix} 1 & 1\\ -1 & 1 \end{matrix}\right| = 2.$$ The  in (\[eq:subsComp\]) represents that the two goods can be substitutable or complementary for agents—a possibility that can cause  to fail to exist, as in Example \[eg:grossVersusNetNumerical\]\[eg:numericalComp\]. [@BaKl:19] showed that the unimodularity of a  is precisely the condition for the corresponding demand type to guarantee the existence of . \[fac:unimod\] Let $\mathcal{D}$ be a .  exist for all finite sets $J$ of agents with concave valuations of demand type $\mathcal{D}$ and for all total endowments for which  exist if and only if $\mathcal{D}$ is unimodular.[^34] [@DaKoMu:01] used conditions on the ranges of agents’ demand correspondences to describe classes of concave valuations, which correspond to the concave valuations of unimodular demand types;[^35] they formulated a version of the “if" direction of Fact \[fac:unimod\] with those conditions.[^36] As [@poincare1900second] showed, the strong substitutes  is unimodular. Therefore, in light of Example \[eg:ssubDemType\], the existence of  in transferable utility economies in which agents demand at most one unit of each good and have substitutes valuations (Fact \[fac:subExist\]) is a special case of Fact \[fac:unimod\].
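The determinant computations above, and the unimodularity claims for the two-good sets here and for the five-good demand type of Example \[eg:5D\] presented next, can be checked mechanically. The sketch below relies on the fact that a set of integer vectors spanning $\mathbb{R}^I$ is unimodular exactly when every subset of size $|I|$, viewed as a square matrix, has determinant $0$ or $\pm 1$; one representative per $\pm$ pair suffices, since negating a column only flips the determinant's sign.

```python
from fractions import Fraction
from itertools import combinations

def det(cols):
    """Exact determinant of a square integer matrix given as a list of columns."""
    m = [[Fraction(col[i]) for col in cols] for i in range(len(cols))]
    n, d = len(m), Fraction(1)
    for i in range(n):
        pivot = next((r for r in range(i, n) if m[r][i]), None)
        if pivot is None:
            return Fraction(0)
        if pivot != i:
            m[i], m[pivot] = m[pivot], m[i]
            d = -d
        d *= m[i][i]
        for r in range(i + 1, n):
            f = m[r][i] / m[i][i]
            for c in range(i, n):
                m[r][c] -= f * m[i][c]
    return d

def is_unimodular(vecs, n):
    # For a set spanning R^n: unimodular iff every n-subset has det in {0, 1, -1}.
    return all(det(list(sub)) in (0, 1, -1) for sub in combinations(vecs, n))

print(is_unimodular([(1, 0), (0, 1), (1, -1)], 2))   # True: Example [eg:demTypeDef]
print(is_unimodular([(1, -1), (1, 1)], 2))           # False: determinant 2

# Five-good demand type of Example [eg:5D]: unit vectors plus cyclic vectors.
e = [tuple(int(k == i) for k in range(5)) for i in range(5)]
cyc = [(1, -1, 1, 0, 0), (0, 1, -1, 1, 0), (0, 0, 1, -1, 1),
       (1, 0, 0, 1, -1), (-1, 1, 0, 0, 1)]
print(is_unimodular(e + cyc, 5))                     # True
```

Exact rational arithmetic via `fractions.Fraction` avoids any floating-point ambiguity in the determinant test.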
Moreover, Fact \[fac:unimod\] is strictly more general: as [@BaKl:19] showed, there are unimodular for which the existence of  cannot be deduced from the corresponding result for strong substitutes by applying a change of basis to the space of bundles of goods.[^37] To illustrate the additional generality, we discuss an example of such a demand type.[^38] \[eg:5D\] There are five goods. Consider the $$\mathcal{D} = \pm\left\{\begin{bmatrix}1 \\ 0 \\ 0 \\ 0 \\ 0\end{bmatrix}, \begin{bmatrix}0 \\ 1 \\ 0 \\ 0 \\ 0\end{bmatrix}, \begin{bmatrix}0 \\ 0 \\ 1 \\ 0 \\ 0\end{bmatrix}, \begin{bmatrix}0 \\ 0 \\ 0 \\ 1 \\ 0\end{bmatrix}, \begin{bmatrix}0 \\ 0 \\ 0 \\ 0 \\ 1\end{bmatrix}, \begin{bmatrix}1 \\ -1 \\ 1 \\ 0 \\ 0\end{bmatrix}, \begin{bmatrix}0 \\ 1 \\ -1 \\ 1 \\ 0\end{bmatrix}, \begin{bmatrix}0 \\ 0 \\ 1 \\ -1 \\ 1\end{bmatrix}, \begin{bmatrix}1 \\ 0 \\ 0 \\ 1 \\ -1\end{bmatrix}, \begin{bmatrix}-1 \\ 1 \\ 0 \\ 0 \\ 1\end{bmatrix}\right\}.$$ Intuitively, this  allows for independent changes in the demand for each good (through the first five vectors), as well as for substitution from a good to the bundle consisting of its two neighbors if the goods are arranged in a circle (through the last five vectors). This  is unimodular, and cannot be obtained from the strong substitutes  by a change of basis of the space of integer bundles of goods (see, e.g., Section 19.4 of [@schrijver1998theory]). Moreover, the demand types defined by maximal, unimodular turn out to define maximal domains for the existence of  in settings with transferable utility. Here, we say that a unimodular  is *maximal* if it is not strictly contained in another unimodular . \[fac:unimodMaxDomain\] Let $\mathcal{D}$ be a maximal unimodular . 
If $|J| \ge 2$ and $\valFn{j}$ is non-concave or not of demand type $\mathcal{D}$, then there exist sets $\Feas{k}$ of feasible bundles and concave valuations $\valFn{k}: \Feas{k} \to \mathbb{R}$ of demand type $\mathcal{D}$ for agents $k \not= j$, as well as a total endowment, for which there exists  but no .[^39] While Fact \[fac:unimod\] shows that there exist valuations in each non-unimodular demand type for which  does not exist, Fact \[fac:unimodMaxDomain\] shows that for *every* valuation outside a maximal unimodular demand type, there exist concave valuations within the demand type that lead to non-existence. Hence, the necessity direction of Fact \[fac:unimod\], together with Fact \[fac:unimodMaxDomain\], provide complementary perspectives on the way in which  can fail to exist outside the context of unimodular demand types. Demand Types and the Unimodularity Theorem with Income Effects -------------------------------------------------------------- We now use Fact \[fac:dualPrefs\] to extend the demand types framework to settings with income effects. \[def:demTypeIncEff\] An agent’s preferences are *of demand type $\mathcal{D}$* if her  at all utility levels are of demand type $\mathcal{D}$. Lemma \[lem:dHvalH\] leads to an economic interpretation of Definition \[def:demTypeIncEff\]: a utility function is of demand type $\mathcal{D}$ if $\mathcal{D}$ summarizes the possible ways in which Hicksian demand can change in response to a small generic price change. In particular, Definition \[def:demTypeIncEff\] extends the concept of demand types to settings with income effects by placing conditions on substitution effects. Indeed, Definition \[def:demTypeIncEff\] considers only the properties of  at each utility level (which, by Lemma \[lem:dHvalH\], reflect substitution effects), and not how an agent’s  vary with her utility level (which, by Fact \[fac:dualDem\] and Lemma \[lem:dHvalH\], reflects income effects). 
[@DaKoMu:01] translated their conditions on the ranges of agents’ demand correspondences from quasilinear settings to settings with income effects by using Fact \[fac:dualPrefs\] in an analogous manner (see Assumption 3$'$ in [@DaKoMu:01]). However, the economic interpretation in terms of substitution effects that Lemma \[lem:dHvalH\] leads to was not clear from their formulation. As with the case of transferable utility, a concavity condition is needed to ensure the existence of . With income effects, the relevant condition is a version of the quasiconcavity condition from classical demand theory for settings with indivisible goods. We define quasiconcavity based on concavity and duality.[^40] \[def:quasiConc\] An agent’s utility function is *quasiconcave* if her  at all utility levels are concave. As with the case of transferable utility, unimodularity is a necessary and sufficient condition for the existence of  to be guaranteed for all quasiconcave preferences of a demand type when income effects are present. \[thm:unimod\] Let $\mathcal{D}$ be a .  exist for all finite sets $J$ of agents with quasiconcave utility functions of demand type $\mathcal{D}$, for all total endowments, and for all  if and only if $\mathcal{D}$ is unimodular. The “only if” direction of Theorem \[thm:unimod\] is a special case of the Unimodularity Theorem with Transferable Utility (Fact \[fac:unimod\]). The “if” direction of Theorem \[thm:unimod\] is an immediate consequence of the Equilibrium Existence Duality and Fact \[fac:unimod\]. Consider a finite set $J$ of agents with quasiconcave preferences of demand type $\mathcal{D}$ and a total endowment for which  exists. By definition, the agents’ at all utility levels are concave and of demand type $\mathcal{D}$. Hence,  exist in the  for all profiles of utility levels by the “if” direction of Fact \[fac:unimod\]. By the “if” direction of Theorem \[thm:existDualExchange\],  must therefore exist in the original economy for all .
[@DaKoMu:01] proved a version of the “if” direction of Theorem \[thm:unimod\] under the assumptions that utility functions are monotone in goods, that consumption of goods is nonnegative, and that the total endowment is strictly positive (see Theorems 2 and 4 in [@DaKoMu:01]).[^41] Note that they formulated their result in terms of Fact \[fac:dualPrefs\] and a condition on the ranges of demand correspondences (see their Assumption 3$'$) instead of in terms of unimodular demand types. Their approach was to show the existence of  in a convexified economy and that, under unimodularity,  in the convexified economy give rise to  in the original economy. In contrast, our approach of using the Equilibrium Existence Duality illuminates the role of substitution effects in ensuring the existence of . Moreover, it yields a maximal domain result for unimodular demand types with income effects. \[prop:netUnimodMaxDomain\] Let $\mathcal{D}$ be a maximal unimodular . If $|J| \ge 2$ and $\utilFn{j}$ is not quasiconcave or not of demand type $\mathcal{D}$, then there exist sets $\Feas{k}$ of feasible bundles and concave valuations $\valFn{k}: \Feas{k} \to \mathbb{R}$ of demand type $\mathcal{D}$ for agents $k \not= j$, as well as a total endowment and , for which no  exists. Proposition \[prop:netUnimodMaxDomain\] is an immediate consequence of the Equilibrium Existence Duality and the maximal domain result for unimodular demand types under the transferability of utility. By definition, there exists a utility level $\ub$ at which agent $j$’s  $\valHDef{j}$ is non-concave or not of demand type $\mathcal{D}$. In either case, Fact \[fac:unimodMaxDomain\] implies that there exist sets $\Feas{k}$ of feasible bundles and concave valuations $\valFn{k}: \Feas{k} \to \mathbb{R}$ of demand type $\mathcal{D}$ for agents $k \not= j$, and a total endowment for which  exists but no  would exist with transferable utility if agent $j$’s valuation were $\valHDef{j}$.
With those sets $\Feas{k}$ of feasible bundles and valuations $\valFn{k}$ for agents $k \not= j$ and that total endowment, the “only if” direction of Theorem \[thm:existDualExchange\] implies that there exists  for which no  exists. Intuitively, Proposition \[prop:netUnimodMaxDomain\] and Theorem \[thm:unimod\] suggest that Definition \[def:demTypeIncEff\] is the most general way to incorporate income effects into unimodular demand types from the quasilinear setting and ensure the existence of . Indeed, Proposition \[prop:netUnimodMaxDomain\] entails that any domain of preferences that contains all concave quasilinear preferences of a maximal, unimodular demand type and guarantees the existence of  must lie within the corresponding demand type constructed in Definition \[def:demTypeIncEff\]. The Strong Substitutes Demand Type and Net  with Multiple Units {#sec:ssub} --------------------------------------------------------------- We now use the case of Theorem \[thm:unimod\] for the strong substitutes demand type to extend Theorem \[thm:netSubExist\] to settings in which agents can demand multiple units of some goods. In such settings, if utility is transferable, the  condition needed to ensure the existence of  is **strong substitutability**—the condition requiring that agents see units of goods as substitutes [@MiSt:09]. As [@shioura2015gross] and [@BaKl:19] showed, there is a close relationship between strong (net)  and the strong substitutes demand type.[^42] 1. A valuation is a * valuation* if it corresponds to a  valuation when each unit of each good is regarded as a separate good. 2. A utility function is a * utility function* if it corresponds to a  utility function when each unit of each good is regarded as a separate good. \[fac:ssubDemTypeConc\] A valuation (resp. utility function) is a strong (net)  valuation (resp. utility function) if and only if it is concave (resp.
quasiconcave) and of the strong substitutes demand type.[^43][^44] As the strong substitutes demand type is unimodular [@poincare1900second], the existence of  under  is therefore a special case of the Unimodularity Theorem with Income Effects. \[cor:snsubExist\] If all agents have  utility functions, then  exist for all . Corollary \[cor:snsubExist\] can also be proven directly using the Equilibrium Existence Duality and the existence of  under  in transferable utility economies [@MiSt:09; @ikebe2015stability]. Theorem \[thm:netSubExist\] is the special case of Corollary \[cor:snsubExist\] for settings in which agents demand at most one unit of each good. As there are unimodular demand types unrelated to the strong substitutes demand type (such as the one in Example \[eg:5D\]), Theorem \[thm:unimod\] is strictly more general than Corollary \[cor:snsubExist\] (and hence Theorem \[thm:netSubExist\]). In particular, Theorem \[thm:unimod\] also illustrates that certain patterns of net complementarities can also be compatible with the existence of . As the strong substitutes demand type is maximal as a unimodular demand type (see, e.g., Example 9 in [@danilov2004discrete]), Proposition \[prop:netUnimodMaxDomain\] yields a maximal domain result for . \[cor:snsubMax\] If $|J| \ge 2$ and $\utilFn{j}$ is not a  utility function, then there exist  valuations $\valFn{k}$ for agents $k \not= j$, as well as a total endowment and , for which no  exists. Auction Design {#sec:auctions} ============== Our work has several implications for auction design. First, our perspective of analyzing preferences by using the expenditure-minimization problem may yield new approaches for extending auction bidding languages to allow for income effects. Second, our equilibrium existence results suggest that some auctions with competitive equilibrium pricing may work well for indivisible goods even in the presence of financing constraints.
One set of examples is Product-Mix Auctions, such as the one implemented by the Bank of England[^45]—these implement competitive equilibrium allocations assuming that the submitted sealed bids represent bidders’ actual preferences, since truth-telling is a reasonable approximation in these auctions when there are sufficiently many bidders. However, while we have shown that gross complementarities do not lead to the nonexistence of competitive equilibrium, they do create problems for dynamic auctions. When agents see goods as gross substitutes, iteratively increasing the prices of over-demanded goods leads to a competitive equilibrium [@KeCr:82; @FlJaJaTe:19]. In contrast, when there are gross complementarities between goods, increases in the price of an over-demanded good can lead to other goods being under-demanded due to an income effect. So, even though competitive equilibrium always exists when agents see goods as (strong) net substitutes, it may not be possible to find a competitive equilibrium using a monotone, dynamic auction. In particular, simple “activity rules” that require bidders to bid on a smaller total number of units of goods as prices increase may result in inefficient outcomes. So, the Product-Mix Auction approach of finding competitive equilibrium based on a single round of sealed bids seems especially useful in the presence of income effects. Conclusion {#sec:conclusion} ========== The Equilibrium Existence Duality is a useful tool for analyzing economies with indivisible goods. It is based on the relationship between Marshallian and Hicksian demands, and on an interpretation of Hicksian demand in terms of a quasilinear maximization problem. The Equilibrium Existence Duality shows that competitive equilibrium exists (for all endowment allocations) if and only if competitive equilibrium exists in each of a family of Hicksian economies.
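The under-demand problem described above is easy to exhibit numerically. In the sketch below (the log-money utility and all numbers are our own illustrative choices), the agent's values for the two goods are additively separable, so her compensated demands show no complementarity; yet raising the price of good 1 makes her drop good 2, purely through the income effect of being made poorer.

```python
import math

BUNDLES = [(0, 0), (1, 0), (0, 1), (1, 1)]
VALUE = {(0, 0): 0.0, (1, 0): 5.0, (0, 1): 0.8, (1, 1): 5.8}  # additively separable

def utility(m, x):
    """Log money gives diminishing marginal utility of money: an income effect."""
    return math.log(m) + VALUE[x] if m > 0 else float("-inf")

def marshallian(p, m0):
    """Utility-maximizing bundle at prices p with money endowment m0."""
    return max(BUNDLES, key=lambda x: utility(m0 - p[0]*x[0] - p[1]*x[1], x))

print(marshallian((1.0, 1.0), 10.0))  # (1, 1): both goods bought
print(marshallian((8.5, 1.0), 10.0))  # (1, 0): good 2 dropped, though its price is unchanged
```

A monotone price-adjustment process that raises the price of good 1 here would leave good 2 under-demanded, which is the difficulty for dynamic auctions noted in the text.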
An application is that it is net substitutability, not gross substitutability, that is relevant to the existence of equilibrium. And extending the demand types classification of valuations [@BaKl:19] allows us to state a Unimodularity Theorem with Income Effects that gives conditions on the patterns of substitution effects that guarantee the existence of competitive equilibrium. In short, with income effects, just as without them, existence does not depend on agents seeing goods as substitutes; rather, substitution effects are fundamental to the existence of competitive equilibrium. Our results point to a number of potential directions for future work. First, it would be interesting to investigate applications of the Equilibrium Existence Duality to other results on the existence of equilibrium with transferable utility—such as those of [@BiMa:97], [@Ma:98], and [@CaOzPa:15]. Second, our results could be used to further develop auction designs that find competitive equilibrium outcomes given the submitted bids, such as the Product-Mix Auction. More broadly, our approach may lead to new results about the properties of economies with indivisibilities and income effects. Proof of Theorem \[thm:existDualExchange\] and Lemma \[lem:dualEconSWT\] {#app:EEDproof} ======================================================================== We prove the following result, which combines Theorem \[thm:existDualExchange\] and Lemma \[lem:dualEconSWT\]. \[thm:existDualExchangeSWT\] Suppose that the total endowment and the sets of feasible bundles are such that  exists. The following are equivalent. 1. \[cond:marshall\]  exist for all . 2. \[cond:SWT\] For each Pareto-efficient allocation $(\bunnj)_{j \in J}$ with $\sum_{j \in J} \bunj = \tot$, there exists a price vector $\p$ such that $\bunnj \in \dM{j}{\p}{\bunnj}$ for all agents $j$. 3.
\[cond:hicks\]  exist in the  for all profiles of utility levels. The remainder of this appendix is devoted to the proof of Theorem \[thm:existDualExchangeSWT\]. Proof of the \[cond:marshall\] implies \[cond:SWT\] Implication in Theorem \[thm:existDualExchangeSWT\] ------------------------------------------------------------------------------------------------------- The proof of this implication is essentially identical to the proof of Theorem 3 in [@maskin2008fundamental]. Consider a Pareto-efficient allocation $(\bunnj)_{j \in J}$ with $\sum_{j \in J} \bunnj = \tot.$ Let agent $j$’s endowment be $\bunndowj = \bunnj$. By Statement \[cond:marshall\] in the theorem, there exists a , say consisting of the price vector $\p$ and the allocation $(\hbunj)_{j \in J}$ of goods. By the definition of , we have that $\hbunj \in \dM{j}{\p}{\bunj}$ for all agents $j$. In particular, letting $\hnumerj = \numerj - \p \cdot (\hbunj - \bunj)$ for each agent $j,$ we have that $\sum_{j \in J} \hbunnj = \sum_{j \in J} \bunnj$ and that $\util{j}{\hbunnj} \ge \util{j}{\bunnj}$ for all agents $j$. As the allocation $(\bunnj)_{j \in J}$ is Pareto-efficient, we must have that $\util{j}{\hbunnj} = \util{j}{\bunnj}$ for all agents $j$. It follows that $\bunj \in \dM{j}{\p}{\bunnj}$ for all agents $j$—as desired. Proof of the \[cond:SWT\] implies \[cond:hicks\] Implication in Theorem \[thm:existDualExchangeSWT\] ---------------------------------------------------------------------------------------------------- Let $(\ubj)_{j \in J}$ be a profile of utility levels. Consider any allocation $(\bunj)_{j \in J} \in \bigtimes_{j \in J} \Feas{j}$ of goods with $\sum_{j \in J} \bunj = \tot$ that minimizes $$\sum_{j \in J} \cf{j}{\bunj}{\ubj}$$ over all allocations $(\hbunj)_{j \in J} \in \bigtimes_{j \in J} \Feas{j}$ of goods with $\sum_{j \in J} \hbunj = \tot.$ Such an allocation exists because each set $\Feas{j}$ is finite and  exists. 
For each agent $j,$ let $\numerj = \cf{j}{\bunj}{\ubj}$—so $\util{j}{\bunnj} = \ubj$. \[cl:dualPE\] The allocation $(\bunnj)_{j \in J}$ is Pareto-efficient. Consider any allocation $(\hbunnj)_{j \in J} \in \bigtimes_{j \in J} \Feans{j}$ with $\sum_{j \in J} \hbunj = \tot$, and $\util{j}{\hbunnj} \ge \util{j}{\bunnj} = \ubj$ for all agents $j$ with strict inequality for some $j = j_1$. As $\cf{j}{\hbunj}{\cdot}$ is strictly increasing for each agent $j$, we must have that $$\hnumerj = \cf{j}{\hbunj}{\util{j}{\hbunnj}} \ge \cf{j}{\hbunj}{\ubj}$$ for all agents $j$ with strict inequality for $j = j_1$. Hence, we must have that $$\sum_{j \in J} \hnumerj > \sum_{j \in J} \cf{j}{\hbunj}{\ubj} \ge \sum_{j \in J} \cf{j}{\bunj}{\ubj} = \sum_{j \in J} \numerj,$$ where the second inequality follows from the definition of $(\bunj)_{j \in J}$, so the allocation $(\bunnj)_{j \in J}$ cannot be Pareto-dominated. By Claim \[cl:dualPE\] and Statement \[cond:SWT\] in the theorem, there exists a price vector $\p$ such that $\bunj \in \dM{j}{\p}{\bunnj}$ for all agents $j$. Fact \[fac:dualDem\] implies that $\bunj \in \dH{j}{\p}{\ubj}$ for all agents $j$. By Lemma \[lem:dHvalH\], it follows that the price vector $\p$ and the allocation $\left(\bunj\right)_{j \in J}$ of goods comprise a  in the  for the profile $(\ubj)_{j \in J}$ of utility levels. Proof of the \[cond:hicks\] implies \[cond:marshall\] Implication in Theorem \[thm:existDualExchangeSWT\] --------------------------------------------------------------------------------------------------------- Let $(\bunndowj)_{j \in J}$ be an endowment allocation. For each agent $j,$ we define a utility level $\umin{j} = \util{j}{\bunndowj}$ and let $$\begin{aligned} K^j &= \numerdowj - \min_{\bun \in \Feas{j}} \cf{j}{\bun}{\umin{j}},\end{aligned}$$ which is non-negative by construction. 
Furthermore, let $K = 1 + \sum_{j \in J} K^j$ and let $$\umax{j} = \max_{\bun \in \Feas{j}} \util{j}{\numerdowj + K, \bun}.$$ Given a profile $\ubvec = (\ubj)_{j \in J}$ of utility levels, let $$\payoffs{\ubvec} = \left\{\left(\begin{array}{l} \cf{j}{\bunj}{\ubj} - \numerdowj\\ + \p \cdot (\bunj - \bundowj) \end{array}\right)_{j \in J} \lgiv \begin{array}{l} \left(\p,(\bunj)_{j \in J}\right) \text{ is a competitive}\\ \text{equilibrium in the \dualecon}\\ \text{for the profile } (\ubj)_{j \in J} \text{ of utility levels} \end{array}\lgivend \right\}$$ denote the set of profiles of net expenditures over all  in the  for the profile $(\ubj)_{j \in J}$ of utility levels. As discussed in Section \[sec:EED\], the strategy of the proof is to solve for a profile $\ubvec = (\ubj)_{j \in J}$ of utility levels such that $\zero \in \payoffs{\ubvec}$. We first show that the correspondence $\payoffFn: \bigtimes_{j \in J} [\umin{j},\umax{j}] \toto \mathbb{R}^J$ is upper hemicontinuous and has compact, convex values. We then apply a topological fixed point argument to show that there exists a profile $\ubvec = (\ubj)_{j \in J} \in \bigtimes_{j \in J} [\umin{j},\umax{j}]$ of utility levels such that $\zero \in \payoffs{\ubvec}$. We conclude the proof by constructing a  for the endowment allocation $(\bunndowj)_{j \in J}$ in the original economy from a  in the  for the profile $(\ubj)_{j \in J}$ of utility levels. ### Proof of the Regularity Conditions for $\payoffFn$. {#proof-of-the-regularity-conditions-for-payofffn. .unnumbered} We begin by proving that the correspondence $\payoffFn: \bigtimes_{j \in J} [\umin{j},\umax{j}] \toto \mathbb{R}^J$ is upper hemicontinuous and has compact, convex values. We actually give explicit bounds for the range of $\payoffFn$. 
Let $$\overline{M} = \max_{j \in J} \left\{\cf{j}{\bundowj}{\umax{j}} - \numerdowj\right\}$$ and let $$\underline{M} = \sum_{j \in J} \left(\min_{\bun \in \Feas{j}} \left\{\cf{j}{\bun}{\umin{j}}\right\} - \numerdowj\right) - (|J|-1) \overline{M}.$$ \[cl:regT\] The correspondence $\payoffFn: \bigtimes_{j \in J} [\umin{j},\umax{j}] \toto \mathbb{R}^J$ is upper hemicontinuous and has compact, convex values and range contained in $[\underline{M},\overline{M}]^J$. The proof of Claim \[cl:regT\] uses the following technical description of $\payoffFn$. \[cl:hicksEqHelp\] Let $\ubvec = (\ubj)_{j \in J} \in \bigtimes_{j \in J} [\umin{j},\umax{j}]$ be a profile of utility levels and let $(\bunj)_{j \in J} \in \bigtimes_{j \in J} \Feas{j}$ be an allocation of goods with $\sum_{j \in J} \bunj = \tot$. If $(\bunj)_{j \in J}$ minimizes $$\sum_{j \in J} \cf{j}{\hbunj}{\ubj}$$ over all allocations $(\hbunj)_{j \in J} \in \bigtimes_{j \in J} \Feas{j}$ of goods with $\sum_{j \in J} \hbunj = \tot$, then we have that $$\payoffs{\ubvec} = \left\{\rgivend \left(\cf{j}{\bunj}{\ubj} - \numerdowj + \p \cdot (\bunj - \bundowj)\right)_{j \in J} \rgiv \p \in \mathcal{P}\right\},$$ where $$\mathcal{P} = \left\{\p \lgiv \cf{j}{\bunj}{\ubj} + \p \cdot \bunj \le \cf{j}{\bunpr}{\ubj} + \p \cdot \bunpr \text{ for all } j \in J \text{ and } \bunpr \in \Feas{j} \lgivend\right\}.$$ By construction, we have that $$\mathcal{P} = \left\{\p \lgiv \begin{array}{l} \left(\p,(\bunnj)_{j \in J}\right) \text{ is a \ce\ in the}\\ \text{ \dualecon\ for the profile } (\ubj)_{j \in J} \text{ of utility levels} \end{array}\lgivend \right\}.$$ A standard lemma regarding  in transferable utility economies shows that in the  for the profile $(\ubj)_{j \in J}$ of utility levels, if $\left(\p,(\hbunnj)_{j \in J}\right)$ is a , then so is $\left(\p,(\bunj)_{j \in J}\right)$.[^46] In this case, we have that $$\cf{j}{\bunj}{\ubj} + \p \cdot \bunj = \cf{j}{\hbunj}{\ubj} + \p \cdot \hbunj,$$ and hence that 
$$\cf{j}{\bunj}{\ubj} - \numerdowj + \p \cdot (\bunj - \bundowj) = \cf{j}{\hbunj}{\ubj} - \numerdowj + \p \cdot (\hbunj - \bundowj),$$ for all agents $j$. The claim follows. It suffices to show that $\payoffFn$ has convex values, range contained in $[\underline{M},\overline{M}]^J$, and a closed graph. We first show that $\payoffs{\ubvec}$ is convex for all $\ubvec \in \bigtimes_{j \in J} [\umin{j},\umax{j}]$. We use the notation of Claim \[cl:hicksEqHelp\] to prove this assertion. Note that $\mathcal{P}$ is the set of solutions to a set of linear inequalities, and is hence convex. Claim \[cl:hicksEqHelp\] implies that $\payoffs{\ubvec}$ is the set of values of a linear function on $\mathcal{P}$—so it follows that $\payoffs{\ubvec}$ is convex as well. We next show that $\payoffs{\ubvec} \subseteq [\underline{M},\overline{M}]^J$ holds for all $\ubvec \in \bigtimes_{j \in J} [\umin{j},\umax{j}].$ We again use the notation of Claim \[cl:hicksEqHelp\]. Let $\ubvec \in \bigtimes_{j \in J} [\umin{j},\umax{j}]$ and $\payoffvec \in \payoffs{\ubvec}$ be arbitrary. By Claim \[cl:hicksEqHelp\], there exists $\p \in \mathcal{P}$ such that $$\payoff{j} = \cf{j}{\bunj}{\ubj} - \numerdowj + \p \cdot (\bunj - \bundowj)$$ for all agents $j$. Note that for all agents $j,$ we must have that $$\payoff{j} \le \cf{j}{\bundowj}{\ubj} - \numerdowj \le \cf{j}{\bundowj}{\umax{j}} - \numerdowj \le \overline{M},$$ where the first inequality holds due to the definition of $\mathcal{P}$, the second inequality holds because $\cf{j}{\bundowj}{\cdot}$ is strictly increasing, and the third inequality holds due to the definition of $\overline{M}$. 
Furthermore, as $\sum_{j \in J} \bunj = \tot = \sum_{j \in J} \bundowj$, we have that $$\begin{aligned} \sum_{j \in J} \payoff{j} &= \sum_{j \in J} (\cf{j}{\bunj}{\ubj} - \numerdowj).\end{aligned}$$ It follows that $$\begin{aligned} \payoff{j} &= \sum_{k \in J} (\cf{k}{\bunag{k}}{\ubag{k}} - \numerdowag{k}) - \sum_{k \in J \ssm \{j\}} \payoff{k}\\ &\ge \sum_{k \in J} (\cf{k}{\bunag{k}}{\umin{k}} - \numerdowag{k}) - \sum_{k \in J \ssm \{j\}} \payoff{k}\\ &\ge \sum_{k \in J} (\cf{k}{\bunag{k}}{\umin{k}} - \numerdowag{k}) - (|J|-1) \overline{M}\\ &\ge \underline{M}\end{aligned}$$ for all agents $j$, where the first inequality holds because $\cf{k}{\bunag{k}}{\cdot}$ is increasing for each agent $k$, the second inequality holds because $\payoff{k} \le \overline{M}$ for all agents $k$, and the third inequality holds due to the definition of $\underline{M}$. Last, we show that $\payoffFn$ has a closed graph. Our argument uses the following version of Farkas’s Lemma. \[fac:farkas\] Let $L_1,L_2$ be disjoint, finite sets and, for each $\ell \in L_1 \cup L_2,$ let $\v^\ell \in \R^I$ be a vector and let $\alpha_\ell$ be a scalar. There exist scalars $\lambda_\ell$ for $\ell \in L_1 \cup L_2$ with $\lambda_\ell \ge 0$ for $\ell \in L_2$ such that $$\sum_{\ell \in L_1 \cup L_2} \lambda_\ell \v^\ell = \zero \quad \text{and} \quad \sum_{\ell \in L_1 \cup L_2} \lambda_\ell \alpha_\ell < 0$$ if and only if there does not exist a vector $\p \in \R^I$ such that $\v^\ell \cdot \p \le \alpha_\ell$ for all $\ell \in L_1 \cup L_2$ with equality for all $\ell \in L_1.$ Consider a sequence ${\ubvec_{(1)}},{\ubvec_{(2)}},\ldots \in \bigtimes_{j \in J} [\umin{j},\umax{j}]$ of profiles of utility levels. For each $m$, let ${\payoffvec_{(m)}} \in \payoffs{{\ubvec_{(m)}}}$. Suppose that ${\ubvec_{(m)}} \to \ubvec$ and ${\payoffvec_{(m)}} \to \payoffvec$ as $m \to \infty$. We need to show that $\payoffvec \in \payoffs{\ubvec}$.
As each set $\Feas{j}$ is finite and  exists, by passing to a subsequence we can assume that there exists an allocation $(\bunj)_{j \in J} \in \bigtimes_{j \in J} \Feas{j}$ of goods with $\sum_{j \in J} \bunj = \tot$ that, for each $m,$ minimizes $$\sum_{j \in J} \cf{j}{\hbunj}{{\ubj_{(m)}}}$$ over all allocations $(\hbunj)_{j \in J} \in \bigtimes_{j \in J} \Feas{j}$ of goods with $\sum_{j \in J} \hbunj = \tot.$ By the continuity of $\cf{j}{\hbunj}{\ub}$ in $\ub$ for each agent $j$, the allocation $(\bunj)_{j \in J}$ minimizes $$\sum_{j \in J} \cf{j}{\hbunj}{\ubj}$$ over all allocations $(\hbunj)_{j \in J} \in \bigtimes_{j \in J} \Feas{j}$ of goods with $\sum_{j \in J} \hbunj = \tot.$ Suppose for sake of deriving a contradiction that $\payoffvec \notin \payoffs{\ubvec}$. Let $L_1 = J$ and let $L_2 = \bigcup_{j \in J} \{j\} \times \Feas{j}.$ Define vectors $\v^\ell \in \mathbb{R}^I$ for $\ell \in L_1 \cup L_2$ by $$\v^\ell = \begin{cases} \bunj - \bundowj & \text{ for } \ell = j \in L_1\\ \bunj - \bunpr & \text{ for } \ell = (j,\bunpr) \in L_2\\ \end{cases}$$ and scalars $\alpha_\ell$ for $\ell \in L_1 \cup L_2$ by $$\alpha_\ell = \begin{cases} \cf{j}{\bunj}{\ubj} - \numerdowj - \payoff{j} & \text{ for } \ell = j \in L_1\\ \cf{j}{\bunpr}{\ubj} - \cf{j}{\bunj}{\ubj} & \text{ for } \ell = (j,\bunpr) \in L_2. 
\end{cases}$$ By Claim \[cl:hicksEqHelp\], there does not exist a price vector $\p$ such that $\v^\ell \cdot \p \le \alpha_\ell$ for all $\ell \in L_1 \cup L_2$ with equality for all $\ell \in L_1.$ The “if” direction of Fact \[fac:farkas\] therefore guarantees that there exist scalars $\lambda_\ell$ for $\ell \in L_1 \cup L_2$ with $\lambda_\ell \ge 0$ for all $\ell \in L_2$ such that $$\sum_{\ell \in L_1 \cup L_2} \lambda_\ell \v^\ell = \zero \quad \text{and} \quad \sum_{\ell \in L_1 \cup L_2} \lambda_\ell \alpha_\ell < 0.$$ By the definition of the scalars $\alpha_\ell,$ we have that $$\sum_{j \in J} \lambda_j \left(\cf{j}{\bunj}{\ubj} - \numerdowj - \payoff{j}\right) + \sum_{j \in J} \sum_{\bunpr \in \Feas{j}} \lambda_{j,\bunpr} \left(\cf{j}{\bunpr}{\ubj} - \cf{j}{\bunj}{\ubj}\right) < 0.$$ Due to the continuity of $\cf{j}{\hbunj}{\ub}$ in $\ub$ for each agent $j$ and because ${\ubvec_{(m)}} \to \ubvec$ and ${\payoffvec_{(m)}} \to \payoffvec$ as $m \to \infty$, there must exist $m$ such that $$\sum_{j \in J} \lambda_j \left(\cf{j}{\bunj}{{\ubj_{(m)}}} - \numerdowj - {\payoff{j}_{(m)}}\right) + \sum_{j \in J} \sum_{\bunpr \in \Feas{j}} \lambda_{j,\bunpr} \left(\cf{j}{\bunpr}{{\ubj_{(m)}}} - \cf{j}{\bunj}{{\ubj_{(m)}}}\right) < 0.$$ Defining scalars $\alpha'_\ell$ for $\ell \in L_1 \cup L_2$ by $$\begin{aligned} \alpha'_\ell &= \begin{cases} \cf{j}{\bunj}{{\ubj_{(m)}}} - \numerdowj - {\payoff{j}_{(m)}} & \text{ for } \ell = j \in L_1\\ \cf{j}{\bunpr}{{\ubj_{(m)}}} - \cf{j}{\bunj}{{\ubj_{(m)}}} & \text{ for } \ell = (j,\bunpr) \in L_2, \end{cases}\end{aligned}$$ we have that $$\sum_{\ell \in L_1 \cup L_2} \lambda_\ell \v^\ell = \zero \quad \text{and that} \quad \sum_{\ell \in L_1 \cup L_2} \lambda_\ell \alpha'_\ell < 0.$$ The “only if” implication of Fact \[fac:farkas\] therefore guarantees that there does not exist a price vector $\p$ such that $\v^\ell \cdot \p \le \alpha'_\ell$ for all $\ell \in L_1 \cup L_2$ with equality for all $\ell \in L_1.$ By Claim
\[cl:hicksEqHelp\], it follows that ${\payoffvec_{(m)}} \notin \payoffs{{\ubvec_{(m)}}}$—a contradiction. Hence, we can conclude that $\payoffvec \in \payoffs{\ubvec}$—as desired. ### Completion of the Proof of the \[cond:hicks\] implies \[cond:marshall\] Implication in Theorem \[thm:existDualExchangeSWT\]. {#completion-of-the-proof-of-the-condmarshall-implies-condswt-implication-in-theoremthmexistdualexchangeswt. .unnumbered} We first solve for a profile $\ubvec = (\ubj)_{j \in J}$ of utility levels such that $\zero \in \payoffs{\ubvec}.$ \[cl:fpEq\] Under Statement \[cond:hicks\] in Theorem \[thm:existDualExchangeSWT\], there exists a profile $\ubvec = (\ubj)_{j \in J}$ of utility levels such that $\zero \in \payoffs{\ubvec}$. To prove Claim \[cl:fpEq\], we apply a topological fixed point argument. Consider the compact, convex set $$Z = [\underline{M},\overline{M}]^J \times \bigtimes_{j \in J} [\umin{j},\umax{j}].$$ As $\payoffs{\ubvec} \subseteq [\underline{M},\overline{M}]^J$ for all $\ubvec \in \bigtimes_{j \in J} [\umin{j},\umax{j}],$ we can define a correspondence $\Phi: Z \toto Z$ by $$\Phi(\payoffvec,\ubvec) = \payoffs{\ubvec} \times \argmin_{\hubvec \in \bigtimes_{j \in J} [\umin{j},\umax{j}]} \left\{\sum_{j \in J} \payoff{j} \hubj\right\}.$$ Claim \[cl:regT\] guarantees that $\payoffFn: \bigtimes_{j \in J} [\umin{j},\umax{j}] \toto \mathbb{R}^J$ is upper hemicontinuous and has compact, convex values. Statement \[cond:hicks\] in Theorem \[thm:existDualExchangeSWT\] ensures that the correspondence $\payoffFn$ has non-empty values. Because $\bigtimes_{j \in J} [\umin{j},\umax{j}]$ is compact and convex, it follows that the correspondence $\Phi$ is upper hemicontinuous and has non-empty, compact, convex values as well. Hence, Kakutani’s Fixed Point Theorem guarantees that $\Phi$ has a fixed point $(\payoffvec,\ubvec)$.
By construction, we have that $\payoffvec \in \payoffs{\ubvec}$ and that $$\label{eq:fpUtil} \ubj \in \argmin_{\hubj \in [\umin{j},\umax{j}]} \payoff{j} \hubj$$ for all agents $j$. It suffices to prove that $\payoffvec = \zero$. Let $\left(\p,(\bunnj)_{j \in J}\right)$ be a  in the  for the profile $(\ubj)_{j \in J}$ of utility levels with $$\label{eq:fpPayoff} \cf{j}{\bunj}{\ubj} - \numerdowj + \p \cdot (\bunj - \bundowj) = \payoff{j}$$ for all agents $j$. As $\ubj \ge \umin{j}$ and $\cf{j}{\bunj}{\cdot}$ is increasing for each agent $j$, it follows from Equation (\[eq:fpPayoff\]) and the definition of $K^j$ that $$\begin{aligned} \payoff{j} &= \cf{j}{\bunj}{\ubj} - \numerdowj + \p \cdot (\bunj - \bundowj) \nonumber\\ &\ge \cf{j}{\bunj}{\umin{j}} - \numerdowj + \p \cdot (\bunj - \bundowj) \nonumber\\ &\ge \p \cdot (\bunj-\bundowj) - K^j \label{eq:tlb}\end{aligned}$$ for all agents $j$. Next, we claim that $\payoff{j} \le 0$ for all agents $j$. If $\payoff{j} > 0,$ then Equation (\[eq:fpUtil\]) would imply that $\ubj = \umin{j}$. But as $\payoffvec \in \payoffs{\ubvec},$ it would follow that $$\payoff{j} \le \cf{j}{\bundowj}{\umin{j}} - \numerdowj + \p \cdot (\bundowj - \bundowj) = \cf{j}{\bundowj}{\umin{j}} - \numerdowj = 0,$$ where the last equality holds due to the definitions of $\cfFn{j}$ and $\umin{j},$ so we must have that $\payoff{j} \le 0$ for all agents $j$. As $(\bunj)_{j \in J}$ is the allocation of goods in a , we have that $\sum_{j \in J} \bunj = \tot = \sum_{j \in J} \bundowj$ and hence that $$\sum_{j \in J} \p \cdot (\bunj - \bundowj) = 0 \ge \sum_{j \in J} \payoff{j},$$ where the inequality holds because $\payoff{j} \le 0$ for all agents $j$. 
It follows that for all agents $j,$ we have that $$\payoff{j} - \p \cdot (\bunj - \bundowj) \le \sum_{k \in J \ssm \{j\}} (\p \cdot (\bunag{k} - \bundowag{k}) - \payoff{k}) \le \sum_{k \in J \ssm \{j\}} K^k \le \sum_{k \in J} K^k < K,$$ where the second inequality follows from Equation (\[eq:tlb\]), the third inequality holds because $K^j \ge 0,$ and the fourth inequality holds due to the definition of $K$. Hence, by Equation (\[eq:fpPayoff\]), we have that $$\cf{j}{\bunj}{\ubj} = \numerdowj + \payoff{j} - \p \cdot (\bunj - \bundowj) < \numerdowj + K$$ for all agents $j$. Since utility is strictly increasing in the consumption of money, it follows that $$\ubj = \util{j}{\cf{j}{\bunj}{\ubj},\bunj} < \util{j}{\numerdowj + K,\bunj} \le \umax{j},$$ where the equality holds due to the definition of $\cfFn{j}$ and the second inequality holds due to the definition of $\umax{j}$. Equation (\[eq:fpUtil\]) then implies that $\payoff{j} \ge 0$ for all agents $j$, so we must have that $\payoff{j} = 0$ for all agents $j$. By Claim \[cl:fpEq\], there exists a profile $\ubvec = (\ubj)_{j \in J}$ of utility levels and a  $(\p,(\bunj)_{j \in J})$ in the corresponding  with $$\label{eq:tFpFinal} \numerdowj = \cf{j}{\bunj}{\ubj} + \p \cdot (\bunj - \bundowj)$$ for all agents $j$. Lemma \[lem:dHvalH\] implies that $\bunj \in \dH{j}{\p}{\ubj}$ for all agents $j$, and we have that $\util{j}{\numerdowj - \p \cdot (\bunj - \bundowj),\bunj} = \ubj$ for all agents $j$ by Equation (\[eq:tFpFinal\]) and the definition of $\cfFn{j}$. It follows from Fact \[fac:dualDem\] that $\bunj \in \dM{j}{\p}{\bunndowj}$ for all agents $j,$ so the price vector $\p$ and the allocation $(\bunj)_{j \in J}$ of goods comprise a  in the original economy for the endowment allocation $(\bunndowj)_{j \in J}$. Proof of Proposition \[prop:ssubst\] {#app:grossToNet} ==================================== We actually prove a stronger statement.
\[cl:marshallNetWeak\] Suppose that agent $j$ demands at most one unit of each good and let $\bundow \in \Feas{j}$. A utility function $\utilFn{j}$ is a  utility function if for all money endowments $\numerdow > \feas{j},$ price vectors $\p,$ and $0 < \mu < \lambda$, whenever 1. \[cond:start\] $\dM{j}{\p}{\bunndow} = \{\bun\}$, 2. \[cond:end\] $\dM{j}{\p + \lambda \e{i}}{\bunndow} = \{\bunpr\}$, 3. \[cond:middle\] $\{\bun,\bunpr\} \subseteq \dM{j}{\p + \mu \e{i}}{\bunndow}$, and 4. \[cond:ineq\] $\bunprComp{i} < \bunComp{i}$, we have that $\bunprComp{k} \ge \bunComp{k}$ for all goods $k \not= i.$ To complete the proof of the proposition from Claim \[cl:marshallNetWeak\], we work in the setting of Claim \[cl:marshallNetWeak\]. Note that, for the endowment $\bundow$ of goods, $\utilFn{j}$ is a  utility function when $\bunprComp{k} \ge \bunComp{k}$ holds for all goods $k \not= i$ under Conditions \[cond:start\] and \[cond:end\]. This property clearly implies that $\bunprComp{k} \ge \bunComp{k}$ holds for all goods $k \not= i$ under Conditions \[cond:start\], \[cond:end\], \[cond:middle\], and \[cond:ineq\], and hence that $\utilFn{j}$ is a  utility function by Claim \[cl:marshallNetWeak\]. The proposition therefore follows from Claim \[cl:marshallNetWeak\]. It remains to prove Claim \[cl:marshallNetWeak\]. In the argument, we use the following characterization of  valuations. \[fac:subsDemCplx\] Suppose that agent $j$ demands at most one unit of each good. A valuation $\valFn{j}$ is a  valuation if and only if for all price vectors $\p$ with $|\dQL{j}{\p}| = 2,$ writing $\dQL{j}{\p} = \{\bun,\bunpr\},$ the difference $\bunpr - \bun$ is a vector with at most one positive component and at most one negative component. We prove the contrapositive. Suppose that $\utilFn{j}$ is not a  utility function.
We show that there exists a money endowment $\numerdow,$ a price vector $\p,$ price increments $0 < \mu < \lambda$, and goods $i \not= k$ such that Conditions \[cond:start\], \[cond:end\], \[cond:middle\], and \[cond:ineq\] from the statement hold but $\bunprComp{k} < \bunComp{k}.$ By Remark \[rem:netSubDual\], there exists a utility level $\ub$ such that $\valHDef{j}$ is not a  valuation. Hence, by Lemma \[lem:dHvalH\] and the “if" direction of Fact \[fac:subsDemCplx\] for $\valFn{j} = \valHDef{j}$, there exists a price vector $\hp$ such that $|\dH{j}{\hp}{\ub}| = 2$, and writing $\dH{j}{\hp}{\ub} = \{\bun,\bunpr\},$ the difference $\bunpr - \bun$ has at least two positive components or at least two negative components. Without loss of generality, we can assume that the difference $\bunpr - \bun$ has at least two negative components. Suppose that $\bunprComp{i} < \bunComp{i}$ (so Condition \[cond:ineq\] holds) and that $\bunprComp{k} < \bunComp{k},$ where $i,k \in I$ are distinct goods. Define a money endowment $\numerdow$ by $$\numerdow = \cf{j}{\bun}{\ub} + \hp \cdot (\bun - \bundow) = \cf{j}{\bunpr}{\ub} + \hp \cdot (\bunpr - \bundow);$$ Fact \[fac:dualDem\] implies that $\dM{j}{\hp}{\bunndow} = \{\bun,\bunpr\}$. Let $\mu$ be such that $$\dM{j}{\hp - \mu \e{i}}{\bunndow}, \dM{j}{\hp + \mu \e{i}}{\bunndow} \subseteq \{\bun,\bunpr\};$$ such a $\mu$ exists due to the upper hemicontinuity of $\dMFn{j}$. Let $\p = \hp - \mu \e{i},$ let $\lambda = 2 \mu,$ and let $\ppr = \p + \lambda \e{i} = \hp + \mu \e{i}$. By construction, we have that $\{\bun,\bunpr\} \subseteq \dM{j}{\p + \mu \e{i}}{\bunndow} = \dM{j}{\hp}{\bunndow}$, so Condition \[cond:middle\] holds. It remains to show that $\dM{j}{\p}{\bunndow} = \{\bun\}$ and that $\dM{j}{\ppr}{\bunndow} = \{\bunpr\}$. 
As $j$ demands at most one unit of each good, we must have that $\bunComp{i} = 1$ and that $\bunprComp{i} = 0.$ We divide into cases based on the value of $\bundowComp{i}$ to show that $$\label{eq:subsIneqs} \begin{aligned} \util{j}{\numerdow - \p \cdot (\bun - \bundow),\bun} &> \util{j}{\numerdow - \p \cdot (\bunpr - \bundow),\bunpr}\\ \util{j}{\numerdow - \ppr \cdot (\bunpr - \bundow),\bunpr} &> \util{j}{\numerdow - \ppr \cdot (\bun - \bundow),\bun}. \end{aligned}$$ $\bundowComp{i} = 0$. In this case, we have that $$\begin{aligned} \util{j}{\numerdow - \p \cdot (\bun - \bundow),\bun} &> \util{j}{\numerdow - \hp \cdot (\bun - \bundow),\bun}\\ &= \util{j}{\numerdow - \hp \cdot (\bunpr - \bundow),\bunpr}\\ &=\util{j}{\numerdow - \p \cdot (\bunpr - \bundow),\bunpr},\end{aligned}$$ where the inequality holds because $\pComp{i} < \hpComp{i}$ and $\bunComp{i} > \bundowComp{i}$, the first equality holds because $\{\bun,\bunpr\} \subseteq \dM{j}{\hp}{\bunndow},$ and the second equality holds because $\bunprComp{i} = \bundowComp{i}.$ Similarly, we have that $$\begin{aligned} \util{j}{\numerdow - \ppr \cdot (\bun - \bundow),\bun} &< \util{j}{\numerdow - \hp \cdot (\bun - \bundow),\bun}\\ &= \util{j}{\numerdow - \hp \cdot (\bunpr - \bundow),\bunpr}\\ &= \util{j}{\numerdow - \ppr \cdot (\bunpr - \bundow),\bunpr},\end{aligned}$$ where the inequality holds because $\pprComp{i} > \hpComp{i}$ and $\bunComp{i} > \bundowComp{i}$, the first equality holds because $\{\bun,\bunpr\} \subseteq \dM{j}{\hp}{\bunndow},$ and the second equality holds because $\bunprComp{i} = \bundowComp{i}.$ $\bundowComp{i} = 1$.
In this case, we have that $$\begin{aligned} \util{j}{\numerdow - \p \cdot (\bunpr - \bundow),\bunpr} &< \util{j}{\numerdow - \hp \cdot (\bunpr - \bundow),\bunpr}\\ &= \util{j}{\numerdow - \hp \cdot (\bun - \bundow),\bun}\\ &= \util{j}{\numerdow - \p \cdot (\bun - \bundow),\bun}\end{aligned}$$ where the inequality holds because $\pComp{i} < \hpComp{i}$ and $\bunprComp{i} < \bundowComp{i}$, the first equality holds because $\{\bun,\bunpr\} \subseteq \dM{j}{\hp}{\bunndow},$ and the second equality holds because $\bunComp{i} = \bundowComp{i}.$ Similarly, we have that $$\begin{aligned} \util{j}{\numerdow - \ppr \cdot (\bunpr - \bundow),\bunpr} &> \util{j}{\numerdow - \hp \cdot (\bunpr - \bundow),\bunpr}\\ &= \util{j}{\numerdow - \hp \cdot (\bun - \bundow),\bun}\\ &= \util{j}{\numerdow - \ppr \cdot (\bun - \bundow),\bun},\end{aligned}$$ where the inequality holds because $\pprComp{i} > \hpComp{i}$ and $\bunprComp{i} < \bundowComp{i}$, the first equality holds because $\{\bun,\bunpr\} \subseteq \dM{j}{\hp}{\bunndow},$ and the second equality holds because $\bunComp{i} = \bundowComp{i}.$ As $\bundow \in \Feas{j} \subseteq \{0,1\}^I,$ the cases exhaust all possibilities. Hence, we have proven that Equation (\[eq:subsIneqs\]) must hold. As $\dM{j}{\p}{\bunndow},\dM{j}{\ppr}{\bunndow} \subseteq \{\bun,\bunpr\},$ we must have that $\dM{j}{\p}{\bunndow} = \{\bun\}$ and that $\dM{j}{\ppr}{\bunndow} = \{\bunpr\}$—so Conditions \[cond:start\] and \[cond:end\] hold, as desired. **FOR ONLINE PUBLICATION** Proofs of Facts \[fac:dualDem\] and \[fac:dualPrefs\] {#app:dualDemPrefs} ===================================================== Proof of Fact \[fac:dualDem\] ----------------------------- We begin by proving two technical claims. \[cl:dualDemHelpMtoH\] Let $\bunndow \in \Feans{j}$ be an endowment and let $\ub$ be a utility level.
If $$\label{eq:marshallForProofOfDual1} \ub = \max_{\bunn \in \Feans{j} \mid \pall \cdot \bunn \le \pall \cdot \bunndow} \util{j}{\bunn},$$ then we have that $$\pall \cdot \bunndow = \min_{\bunn \in \Feans{j} \mid \util{j}{\bunn} \ge \ub} \pall \cdot \bunn$$ and that $\dM{j}{\p}{\bunndow} \subseteq \dH{j}{\p}{\ub}$. Letting $\bunpr \in \dM{j}{\p}{\bunndow}$ be arbitrary and $\numerpr = \numerdow - \p \cdot (\bunpr - \bundow),$ we have that $\util{j}{\bunnpr} = \ub$ and that $\pall \cdot \bunnpr \le \pall \cdot \bunndow$ by construction. It follows that $$\pall \cdot \bunndow \ge \min_{\bunn \in \Feans{j} \mid \util{j}{\bunn} \ge \ub} \pall \cdot \bunn.$$ Suppose for the sake of deriving a contradiction that there exists $\bunndpr \in \Feans{j}$ with $\pall \cdot \bunndpr < \pall \cdot \bunndow$ and $\util{j}{\bunndpr} \ge \ub.$ Then, we have that $\numerdpr < \numerdow - \p \cdot (\bundpr - \bundow)$; write $\numertpr = \numerdow - \p \cdot (\bundpr - \bundow),$ so $\numertpr > \numerdpr.$ Since $\utilFn{j}$ is strictly increasing in consumption of money, it follows that $\util{j}{\numertpr,\bundpr} > \ub$—contradicting Equation (\[eq:marshallForProofOfDual1\]) as $\numertpr + \p \cdot \bundpr = \pall \cdot \bunndow.$ Hence, we can conclude that $$\pall \cdot \bunndow = \min_{\bunn \in \Feans{j} \mid \util{j}{\bunn} \ge \ub} \pall \cdot \bunn.$$ Since $\util{j}{\bunnpr} = \ub$ and $\pall \cdot \bunnpr = \pall \cdot \bunndow,$ it follows that $\bunpr \in \dH{j}{\p}{\ub}$. Since $\bunpr \in \dM{j}{\p}{\bunndow}$ was arbitrary, we can conclude that $\dM{j}{\p}{\bunndow} \subseteq \dH{j}{\p}{\ub}$. \[cl:dualDemHelpHtoM\] Let $\bunndow \in \Feans{j}$ be an endowment and let $\ub$ be a utility level.
If $$\label{eq:hicksForProofOfDual1} \pall \cdot \bunndow = \min_{\bunn \in \Feans{j} \mid \util{j}{\bunn} \ge \ub} \pall \cdot \bunn,$$ then we have that $$\ub = \max_{\bunn \in \Feans{j} \mid \pall \cdot \bunn \le \pall \cdot \bunndow} \util{j}{\bunn}$$ and that $\dH{j}{\p}{\ub} \subseteq \dM{j}{\p}{\bunndow}$. Let $\bunpr \in \dH{j}{\p}{\ub}$ be arbitrary and $\numerpr = \cf{j}{\bunpr}{\ub}$. We have that $\util{j}{\bunnpr} \ge \ub$ and that $\pall \cdot \bunnpr = \pall \cdot \bunndow$ by construction. It follows that $$\ub \le \max_{\bunn \in \Feans{j} \mid \pall \cdot \bunn \le \pall \cdot \bunndow} \util{j}{\bunn}.$$ We next show that $$\ub = \max_{\bunn \in \Feans{j} \mid \pall \cdot \bunn \le \pall \cdot \bunndow} \util{j}{\bunn}.$$ Suppose for the sake of deriving a contradiction that there exists $\bunndpr \in \Feans{j}$ with $\pall \cdot \bunndpr \le \pall \cdot \bunndow$ and $\util{j}{\bunndpr} > \ub.$ By definition of $\cfFn{j}$, we know that $\numerdpr > \cf{j}{\bundpr}{\ub}$. Letting $\numertpr = \cf{j}{\bundpr}{\ub}$, we have that $\numertpr + \p \cdot \bundpr < \pall \cdot \bunndow$, which contradicts Equation (\[eq:hicksForProofOfDual1\]) as $\util{j}{\numertpr,\bundpr} = \ub.$ Hence, we can conclude that $$\ub = \max_{\bunn \in \Feans{j} \mid \pall \cdot \bunn \le \pall \cdot \bunndow} \util{j}{\bunn}.$$ Since $\util{j}{\bunnpr} = \ub$ and $\pall \cdot \bunnpr = \pall \cdot \bunndow,$ it follows that $\bunpr \in \dM{j}{\p}{\bunndow}$. Since $\bunpr \in \dH{j}{\p}{\ub}$ was arbitrary, we can conclude that $\dH{j}{\p}{\ub} \subseteq \dM{j}{\p}{\bunndow}$. Let $\bunndow \in \Feans{j}$ be an endowment and let $\ub$ be a utility level. By Claims \[cl:dualDemHelpMtoH\] and \[cl:dualDemHelpHtoM\], Conditions (\[eq:marshallForProofOfDual1\]) and (\[eq:hicksForProofOfDual1\]) are equivalent, and under these equivalent conditions, we have that $\dM{j}{\p}{\bunndow} \subseteq \dH{j}{\p}{\ub}$ and that $\dH{j}{\p}{\ub} \subseteq \dM{j}{\p}{\bunndow}$.
Hence, we must have that $\dM{j}{\p}{\bunndow} = \dH{j}{\p}{\ub}$ under the equivalent Conditions (\[eq:marshallForProofOfDual1\]) and (\[eq:hicksForProofOfDual1\])—as desired. Proof of Fact \[fac:dualPrefs\] ------------------------------- We prove the “if” and “only if” directions separately. ### Proof of the “If” Direction. {#proof-of-the-if-direction. .unnumbered} We define a utility function $\utilFn{j}$ implicitly by $$\util{j}{\bunn} = F(\bun,\cdot)^{-1}(-\numer),$$ which is well-defined, continuous, and strictly increasing in $\numer$ by the Inverse Function Theorem because $F(\bun,\cdot)$ is continuous, strictly decreasing, and satisfies Condition (\[eq:vlimits\]). Condition (\[eq:ulimits\]) holds because $F$ is defined over the entirety of $\Feas{j} \times \feasUtil{j}.$ ### Proof of the “Only If” Direction. {#proof-of-the-only-if-direction. .unnumbered} We define $F:\Feas{j} \times (\minu{j},\maxu{j}) \to (-\infty,-\feas{j})$ implicitly by $$F(\bun,\ub) = -\util{j}{\cdot,\bun}^{-1}(\ub),$$ which is well-defined, continuous, and strictly decreasing in $\ub$ by the Inverse Function Theorem because $\util{j}{\cdot,\bun}$ is continuous, strictly increasing, and satisfies Condition (\[eq:ulimits\]). Condition (\[eq:vlimits\]) holds because $\utilFn{j}$ is defined over the entirety of $\Feans{j}$. Proofs of the Maximal Domain and Necessity Results for Settings with Transferable Utility {#app:maxDomain} ========================================================================================= In this appendix, we supply proofs of Facts \[fac:subMaxDomain\] and \[fac:unimodMaxDomain\], as well as the “only if” direction of Fact \[fac:unimod\]. Utility is transferable throughout this appendix. We use the concept of a pseudo-equilibrium price vector. Suppose that utility is transferable.
A *pseudo-equilibrium price vector* is a price vector $\p$ such that $$\tot\in \conv\left(\sum_{j \in J} D^j(\p)\right).$$ There is a connection between pseudo-equilibrium price vectors, , and the existence problem. \[fac:pseudoEquil\] If utility is transferable and the total endowment is such that a competitive equilibrium exists, then, for each pseudo-equilibrium price vector $\p$, there exists an allocation $(\bunj)_{j \in J}$ such that $\p$ and $(\bunj)_{j \in J}$ comprise a . The nonexistence of  may therefore be demonstrated by using the contrapositive of Fact \[fac:pseudoEquil\]. Our arguments use valuations that are *linear on their domain*. That is, let $\mathbf{t}_I\in\R^I$, let $X^j_I\subseteq\Z^I$ be finite, and let $V^j=V^{j,\mathbf{t}_I}:X^j_I\rightarrow\R$ be given by $V^{j,\mathbf{t}_I}(\mathbf{x}_I)\coloneq\mathbf{t}_I\cdot\mathbf{x}_I$ for all $\mathbf{x}_I\in X^j_I$. Recalling Equation (\[eqn:quasilin\]) for demand sets in the quasilinear case, we observe that, for each $\mathbf{s}_I\in\R^I$, we have that $$\label{eqn:faces} D^j(\mathbf{t}_I-\mathbf{s}_I)=\argmax_{\mathbf{x}_I\in X^j_I}\left(\mathbf{t}_I\cdot\mathbf{x}_I-(\mathbf{t}_I-\mathbf{s}_I)\cdot\mathbf{x}_I\right)=\argmax_{\mathbf{x}_I\in X^j_I}\mathbf{s}_I\cdot\mathbf{x}_I.$$ \[lem:linConv\] If $\conv(X^j_I)\cap\Z^I=X^j_I$, then $V^{j,\mathbf{t}_I}$ is concave for all $\mathbf{t}_I\in\R^I$. Observe by Equation (\[eqn:faces\]) that $D^j(\mathbf{t}_I)=\argmax_{\mathbf{x}_I\in X^j_I}\mathbf{0}\cdot\mathbf{x}_I=X^j_I$. So, if $\mathbf{x}_I\in\conv(X^j_I)\cap\Z^I=X^j_I$ then $\mathbf{x}_I\in D^j(\mathbf{t}_I)$. By Definition \[def:concave\], we know $V^{j,\mathbf{t}_I}$ is concave. We will also make use of an alternative characterization of concavity. \[fac:conc\] A valuation $\valFn{j}$ is concave if and only if $\conv\left(\dQL{j}{\p}\right) \cap \Z^I = \dQL{j}{\p}$ for all price vectors $\p$.
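Equation (\[eqn:faces\]) can be checked by brute force on a toy example. In the following sketch, the domain $X^j_I$, the vector $\mathbf{t}_I$, and the perturbation $\mathbf{s}_I$ are illustrative choices (not taken from the text); it verifies that, for a valuation that is linear on its domain, quasilinear demand at price $\mathbf{t}_I-\mathbf{s}_I$ is exactly the set of maximizers of $\mathbf{s}_I\cdot\mathbf{x}_I$ over $X^j_I$, and that $D^j(\mathbf{t}_I)=X^j_I$, as used in the proof of Lemma \[lem:linConv\].

```python
from itertools import product

def demand(X, t, p):
    """Quasilinear demand: bundles in X maximizing the payoff t.x - p.x."""
    net = lambda x: sum((ti - pi) * xi for ti, pi, xi in zip(t, p, x))
    best = max(net(x) for x in X)
    return {x for x in X if net(x) == best}

# Illustrative domain: the unit square in Z^2, with t = (3, 5).
X = set(product((0, 1), repeat=2))
t = (3, 5)

# Equation (eqn:faces): demand at price t - s maximizes s.x over the domain.
s = (1, -2)
p = tuple(ti - si for ti, si in zip(t, s))
lhs = demand(X, t, p)
best = max(s[0] * x[0] + s[1] * x[1] for x in X)
rhs = {x for x in X if s[0] * x[0] + s[1] * x[1] == best}
assert lhs == rhs == {(1, 0)}

# At p = t the whole domain is demanded, as in the proof of Lemma (lem:linConv).
assert demand(X, t, t) == X
```

All quantities are integers, so the comparisons are exact.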
Additional Facts regarding Unimodularity and Demand Types --------------------------------------------------------- The following results are especially useful in the proof of the “only if” direction of Fact \[fac:unimod\], and the proof of Fact \[fac:unimodMaxDomain\]. We seek to construct pseudo-equilibrium price vectors (the total endowment is in the convex hull of aggregate demand) that are not  price vectors (the total endowment is not demanded on aggregate). Failure of unimodularity allows such constructions because of the following property. \[fac:unimodSet\] A demand type vector set $\mathcal{D}$ is unimodular if and only if there is no linearly independent subset $\{\mathbf{d}^1,\ldots,\mathbf{d}^r\}$ of $\mathcal{D}$ such that there exists $\mathbf{z}=\sum_{\ell=1}^r \alpha_\ell \dvec^\ell\in\Z^I$ with $\alpha_\ell\in(0,1)$ for $\ell=1,\ldots,r$. To see the connection to Fact \[fac:pseudoEquil\] and to existence of competitive equilibrium, suppose that $\{\mathbf{d}^1,\ldots,\mathbf{d}^r\}$ and $\mathbf{z}$ are as in Fact \[fac:unimodSet\]. If $\tot=\mathbf{z}$ and if $D^j_M(\p,\bunndowj)=\{\mathbf{0},\mathbf{d}^j\}$ for $j=1,\dots,r$, then $\p$ is a pseudo-equilibrium price vector but there is no competitive equilibrium at $\p$. [@BaKl:19] generalized Fact \[fac:subsDemCplx\] to the general case of transferable utility. \[fac:demTypeCplx\] Let $\valFn{j}$ be a valuation of demand type $\mathcal{D}$. For any price $\ppr$, if $\conv(\dQL{j}{\ppr})$ has an edge $E$, then the difference between the extreme points of $E$ is proportional to a demand type vector, and there exists a price $\p$ such that $\conv(\dQL{j}{\p})=E$. 
Moreover, if $\dvec$ is in the minimal  $\mathcal{D}$ such that $\valFn{j}$ is of demand type $\mathcal{D}$, then there exists a price vector $\p$ such that $\conv(\dQL{j}{\p})$ is a line segment, the difference between whose endpoints is proportional to $\dvec.$ We now demonstrate the following useful corollary of Fact \[fac:demTypeCplx\]. \[cor:edges\] Let $V^j=V^{j,\mathbf{t}_I}$ for some $\mathbf{t}_I\in\R^I$, and let $\mathcal{D}$ be the minimal demand type vector set such that $V^j$ is of demand type $\mathcal{D}$. Then $\mathcal{D}$ consists of the primitive integer vectors in the directions of the edges of the polytope $\conv(X^j_I)$. Observe that $D^j(\mathbf{t}_I)=X^j_I$ and so, by Fact \[fac:demTypeCplx\], each edge of $\conv(X^j_I)$ is proportional to a vector in $\mathcal{D}$. Conversely, if $\mathbf{d}\in\mathcal{D}$ then, by Fact \[fac:demTypeCplx\], there exists a price $\p$ such that $\conv(D^j(\p))$ is a line segment, the difference between whose endpoints is proportional to $\mathbf{d}$. But writing $\mathbf{s}_I=\mathbf{t}_I-\p$, we see from Equation (\[eqn:faces\]) that $D^j(\p)=\argmax_{\mathbf{x}_I\in X^j_I} \mathbf{s}_I\cdot\mathbf{x}_I$, which tells us (cf., e.g., @Gruenbaum1967, Section 2.4) that the segment $\conv(D^j(\p))$ is an edge of $\conv(X^j_I)$. Our proofs of Facts \[fac:subMaxDomain\] and \[fac:unimodMaxDomain\], and the “only if” direction of Fact \[fac:unimod\], now follow the same structure. Within each argument, we address a demand type which is not unimodular. Observe by Fact \[fac:unimodSet\] that, when unimodularity fails for a set of vectors $\mathcal{D}$, there exist polytopes, with integer vertices and whose edge directions are in $\mathcal{D}$, that contain a non-vertex integer vector, $\mathbf{z}$.
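The failure described by Fact \[fac:unimodSet\] can be made concrete. In the following sketch, the two vectors are hypothetical (chosen purely for illustration): the pair $\{(1,0),(1,2)\}$ is linearly independent, yet the integer point $(1,1)$ is a combination of the two vectors with both coefficients strictly between $0$ and $1$, so any demand type vector set containing this pair is not unimodular.

```python
from fractions import Fraction

# Hypothetical demand type vectors (for illustration only).
d1, d2 = (1, 0), (1, 2)

# The pair is linearly independent with |det| = 2, so it is not unimodular.
det = d1[0] * d2[1] - d1[1] * d2[0]
assert abs(det) == 2

# Solve z = a*d1 + b*d2 exactly for the integer point z = (1, 1) (Cramer's rule).
z = (1, 1)
a = Fraction(z[0] * d2[1] - z[1] * d2[0], det)
b = Fraction(d1[0] * z[1] - d1[1] * z[0], det)
assert (a * d1[0] + b * d2[0], a * d1[1] + b * d2[1]) == z

# Both coefficients lie strictly inside (0, 1): z is a non-vertex integer
# point of the parallelogram spanned by d1 and d2.
assert 0 < a < 1 and 0 < b < 1
```

Exact rational arithmetic (`Fraction`) avoids any floating-point ambiguity in the coefficient check.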
We use Corollary \[cor:edges\] to construct valuations of the appropriate demand type such that, at some price $\p$, the convex hull of the aggregate demand set is a polytope with these properties; and such that there exists a feasible endowment allocation whose total endowment is the non-vertex integer vector $\mathbf{z}$. Thus $\p$ is a pseudo-equilibrium price vector. Moreover, we design our individual valuations so that there is no competitive equilibrium at $\p$. The contrapositive of Fact \[fac:pseudoEquil\] can then be applied to show the non-existence of competitive equilibrium. Proof of Fact \[fac:subMaxDomain\] ---------------------------------- By Fact \[fac:subsDemCplx\], there exists a price vector $\p$ such that $\dQL{j}{\p} = \{\bunpr,\bunpr+\norm\}$, where $\norm$ has at least two positive components or at least two negative components. Identify $I$ with $\{1,\ldots,|I|\}$ and without loss of generality assume that $g_1,g_2<0$. Because agent $j$ demands at most one unit of each good, we know that $\bunpr,\bunpr+\norm\in\{0,1\}^{|I|}$ and so $\norm\in \{-1,0,1\}^{|I|}$. We conclude both that $g_1=g_2=-1$ and that $x'_1=x'_2=1$. Let $k \in J \ssm \{j\}$ be arbitrary. For agents $j' \in J \ssm \{j,k\},$ let $\Feas{j'} = \{\zero\}$, let $\valFn{j'}$ be arbitrary, and let $\bundowag{j'} = \zero$. Let $X^k_I\coloneq\{\rgivend\mathbf{x}_I\in\{0,1\}^{|I|}\rgiv x_1+x_2\leq 1\}$ and let $\mathbf{t}_I\coloneq\p+\mathbf{e}^1+\mathbf{e}^2$. Let $V^k = V^{k,{\bf t}_I},$ which is a  valuation by Example 11 and Corollary D.2, because each edge of $\conv(X^k_I)$ is proportional to either ${\bf e}^1 - {\bf e}^2$ or to ${\bf e}^\ell$ for some $\ell \in I$; or, alternatively, by Theorem 4 in Hatfield et al. (2019) because, like assignment valuations, it is the supremal convolution of $|I|-1$ unit-demand valuations. Set $\mathbf{w}^j_I\coloneq\bunpr\in X^j_I$ and set $\mathbf{w}^k_I\coloneq\tot-\bunpr$.
Since $\bunpr\in\{0,1\}^{|I|}$ it follows that $\mathbf{w}^k_I\in\{0,1\}^{|I|}$, and moreover since $x'_1=x'_2=1$ we know $w^k_1=w^k_2=0$; thus $\mathbf{w}^k_I\in X^k_I$. Now $(\mathbf{w}^{j'}_I)_{j'\in J}$ is clearly an endowment allocation. By Equation (\[eqn:faces\]) we know that $$D^k(\mathbf{p}_I)=\argmax_{\mathbf{x}_I\in X^k_I}(\mathbf{e}^1+\mathbf{e}^2)\cdot\mathbf{x}_I=\{\mathbf{x}_I\in\{0,1\}^{|I|}|x_1+x_2=1\}.$$ Observing that $\mathbf{e}^2\in D^k(\mathbf{p}_I)$ and considering the vectors from $\mathbf{e}^2$ to other elements of the demand set, we can write $D^k(\mathbf{p}_I)$ as $$D^k(\p) = \e{2} + \left\{\rgivend \alpha_2 (\e{1} - \e{2}) + \sum_{\ell = 3}^{|I|} \alpha_\ell \e{\ell} \rgiv \alpha_\ell \in \{0,1\} \text{ for } 2 \le \ell \le |I|\right\}.$$ Combining this with agent $j$, and recalling other agents’ demand sets are identically zero, we conclude that $$\sum_{j'\in J} D^{j'}(\mathbf{p}_I)=\bunpr+\mathbf{e}^2+\left\{\rgivend\alpha_1\norm+\alpha_2(\mathbf{e}^1-\mathbf{e}^2)+\sum_{\ell=3}^{|I|}\alpha_\ell\mathbf{e}^\ell\rgiv\alpha_\ell\in\{0,1\}\text{ for }1\leq \ell\leq |I|\right\}.$$ The convex hull of this set can be expressed very similarly, but the weights $\alpha_\ell$ are allowed to lie in $[0,1]$. Since $\bunpr, \bunpr+\mathbf{g}\in\{0,1\}^{|I|},$ we have that if $g_i=1$ (resp. $g_i = -1$), then $x'_i=0$ (resp. $x'_i = 1$). 
Taking $$\alpha_\ell = \begin{cases} \dfrac{|\normComp{\ell}|}{2} & \text{ if } \normComp{\ell} \not= 0\\ 1 - \bunprComp{\ell} & \text{if } \normComp{\ell} = 0 \end{cases}$$ for $1 \le \ell \le |I|$, we have that $$\begin{aligned} \bunprComp{i} + \alpha_1 \normComp{i} + \alpha_i &= 1 - \frac{1}{2} + \frac{1}{2} = 1 & \text{ for all } i \in I \text{ with } \normComp{i} = -1\\ \bunprComp{i} + \alpha_1 \normComp{i} + \alpha_i &= \bunprComp{i} + 0 + (1 - \bunprComp{i}) = 1 & \text{ for all } i \in I \text{ with } \normComp{i} = 0\\ \bunprComp{i} + \alpha_1 \normComp{i} + \alpha_i &= 0 + \frac{1}{2} + \frac{1}{2} = 1 & \text{ for all } i \in I \text{ with } \normComp{i} = 1.\end{aligned}$$ As $\bunprComp{1} = \bunprComp{2} = 1$ and $\normComp{1} = \normComp{2} = -1,$ it follows that $$\bunpr + \e{2} + \alpha_1\norm+\alpha_2(\mathbf{e}^1-\mathbf{e}^2)+\sum_{\ell=3}^{|I|}\alpha_\ell\mathbf{e}^\ell = \tot.$$ As $\alpha_\ell \in [0,1]$ for all $1 \le \ell \le |I|$, we therefore have that $\tot \in \conv\left(\sum_{j'\in J} D^{j'}(\mathbf{p}_I)\right)$, so $\p$ is a pseudo-equilibrium price vector. But as $\alpha_1 \in (0,1)$ and the vectors $\mathbf{g},\mathbf{e}^1-\mathbf{e}^2,\mathbf{e}^3,\ldots,\mathbf{e}^{|I|}$ are linearly independent, we have that $\tot \notin \sum_{j'\in J} D^{j'}(\mathbf{p}_I),$ so there is no  at $\p$.[^47] Therefore, by the contrapositive of Fact \[fac:pseudoEquil\], no  can exist. Proof of the “Only If” Direction of Fact \[fac:unimod\] ------------------------------------------------------- Let $\mathcal{D}$ be a  that is not unimodular. We need to show that there exists a finite set $J$ of agents with concave valuations of demand type $\mathcal{D}$, as well as a total endowment, for which there exists  but no . We will use $J=\{j,k\}$. Let $\linset = \{\dvec^1,\ldots,\dvec^n\} \subseteq \mathcal{D}$ be a minimal non-unimodular subset. By construction, $\linset$ is linearly independent, and $\{\dvec^1,\ldots,\dvec^{n-1}\}$ is unimodular. 
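A concrete (hypothetical) instance of such a minimal non-unimodular subset $\linset$ in two dimensions: each of $(1,1)$ and $(1,-1)$ is a primitive integer vector, so every proper subset of the pair is unimodular, while the pair itself has determinant of absolute value $2$ and hence is not. The sketch below checks these properties and exhibits the integer witness from Fact \[fac:unimodSet\].

```python
from math import gcd

# A hypothetical minimal non-unimodular subset L in Z^2 (illustration only).
L = [(1, 1), (1, -1)]

# Every proper (singleton) subset is unimodular: each vector is primitive.
assert all(gcd(abs(v[0]), abs(v[1])) == 1 for v in L)

# The pair itself is linearly independent but has |det| = 2, so L is not unimodular.
det = L[0][0] * L[1][1] - L[0][1] * L[1][0]
assert det != 0 and abs(det) == 2

# The witness of Fact (fac:unimodSet): (1, 0) = 0.5*(1,1) + 0.5*(1,-1) is an
# integer point with both coefficients strictly inside (0, 1).
a = b = 0.5
assert (a * L[0][0] + b * L[1][0], a * L[0][1] + b * L[1][1]) == (1.0, 0.0)
```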
Let $$\mathcal{P} = \left\{\rgivend\sum_{\ell=1}^n \alpha_\ell \dvec^\ell \rgiv 0 \le \alpha_\ell \le 1\text{ for }\ell=1,\ldots,n\right\}$$ denote the parallelepiped spanned by $\linset$. By Fact \[fac:unimodSet\], there exists $\mathbf{z} = \sum_{\ell=1}^n \beta_\ell \dvec^\ell \in \mathcal{P} \cap \mathbb{Z}^I$ with $\beta_\ell \in (0,1)$ for all $\ell=1,\ldots,n$. Let $X^j_I\coloneq \mathcal{P}\cap\Z^I$ and let $V^j\coloneq V^{j,\mathbf{0}}$ be the linear valuation which is identically zero on its domain. Recall Equation (\[eqn:faces\]): we know $D^j(\mathbf{0})=X^j_I$. Observe that $\mathbf{z}\in X^j_I$. Clearly $\conv(X^j_I)\cap\Z^I=X^j_I$ and so $V^j$ is concave by Lemma \[lem:linConv\]. Let $\mathbf{s}_I$ satisfy $\mathbf{s}_I\cdot \dvec^\ell= 0$ for $\ell=1,\ldots,n-1$ and $\mathbf{s}_I\cdot\dvec^n>0$. (Such an $\mathbf{s}_I$ exists as $\linset$ is linearly independent.) Then, for $\mathbf{x}_I=\sum_{\ell=1}^n \alpha_\ell \dvec^\ell\in X^j_I$, we have $$\mathbf{s}_I\cdot\mathbf{x}_I=\sum_{\ell=1}^n \alpha_\ell \mathbf{s}_I\cdot\dvec^\ell=\alpha_n \mathbf{s}_I\cdot\dvec^n.$$ We assumed that $\mathbf{s}_I\cdot\dvec^n>0$, so $\mathbf{s}_I\cdot\mathbf{x}_I$ is minimized when $\alpha_n=0$; equivalently $-\mathbf{s}_I\cdot\mathbf{x}_I$ is maximized when $\alpha_n=0$. So, by Equation (\[eqn:faces\]), we know that $$D^j(\mathbf{s}_I)=\argmax_{\mathbf{x}_I\in X^j_I}-\mathbf{s}_I\cdot\mathbf{x}_I=\left\{\rgivend\sum_{\ell=1}^{n-1} \alpha_\ell \dvec^\ell \rgiv 0 \le \alpha_\ell \le 1 \text{ for }\ell=1,\ldots,n-1\right\}\cap\Z^I.$$ Now set $X^k_I\coloneq\{\mathbf{0},\dvec^n\}$ and let $V^k\coloneq V^{k,\mathbf{s}_I}$. By Equation (\[eqn:faces\]) again, we know that $D^k(\mathbf{s}_I)=X^k_I$. As $\mathbf{d}^n\in\mathcal{D}$, which is a , we know that $\mathbf{d}^n$ is a primitive integer vector, from which it follows that $\conv(X^k_I)\cap\Z^I=X^k_I$. Thus, by Lemma \[lem:linConv\], we know that $V^k$ is concave.
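The two-agent construction above can be instantiated numerically. The sketch below uses hypothetical data with $n=2$, $\dvec^1=(1,0)$, $\dvec^2=(1,2)$, and $\mathbf{z}=(1,1)$: it enumerates $X^j_I=\mathcal{P}\cap\Z^I$, takes $\mathbf{s}_I=(0,1)$ (so that $\mathbf{s}_I\cdot\dvec^1=0$ and $\mathbf{s}_I\cdot\dvec^2>0$), and checks that $\mathbf{z}$ lies in $\conv(D^j(\mathbf{s}_I)+D^k(\mathbf{s}_I))=\mathcal{P}$ but not in the sum itself.

```python
from itertools import product
from fractions import Fraction

# Hypothetical instantiation with n = 2: a non-unimodular pair and z = (1, 1).
d1, d2 = (1, 0), (1, 2)
det = d1[0] * d2[1] - d1[1] * d2[0]

def coords(x):
    """Exact coordinates (a, b) with x = a*d1 + b*d2, via Cramer's rule."""
    a = Fraction(x[0] * d2[1] - x[1] * d2[0], det)
    b = Fraction(d1[0] * x[1] - d1[1] * x[0], det)
    return a, b

def in_P(x):
    a, b = coords(x)
    return 0 <= a <= 1 and 0 <= b <= 1

# X^j = P ∩ Z^2, where P is the parallelogram spanned by d1 and d2.
Xj = sorted(x for x in product(range(-1, 4), repeat=2) if in_P(x))
assert Xj == [(0, 0), (1, 0), (1, 1), (1, 2), (2, 2)]

# s with s.d1 = 0 and s.d2 > 0; since V^j is identically zero on X^j,
# D^j(s) consists of the bundles in X^j minimizing s.x: the face with b = 0.
s = (0, 1)
cheapest = min(s[0] * x[0] + s[1] * x[1] for x in Xj)
Dj = {x for x in Xj if s[0] * x[0] + s[1] * x[1] == cheapest}
assert Dj == {(0, 0), (1, 0)}

# X^k = {0, d2}; with the linear valuation V^k(x) = s.x, both bundles give
# zero net payoff at price s, so D^k(s) = X^k.
Dk = {(0, 0), d2}

# z lies in conv(D^j(s) + D^k(s)) = P but not in the sum itself: s is a
# pseudo-equilibrium price vector at which no competitive equilibrium exists.
total = {(x[0] + y[0], x[1] + y[1]) for x in Dj for y in Dk}
z = (1, 1)
assert z not in total and in_P(z)
```

The grid `range(-1, 4)` is just a bounding box wide enough to contain the parallelogram.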
Observe that $$D^j(\mathbf{s}_I)+D^k(\mathbf{s}_I)=\left\{\rgivend\sum_{\ell=1}^n \alpha_\ell \dvec^\ell \rgiv 0 \le \alpha_\ell \le 1 \text{ for }\ell=1,\ldots,n-1 \text{ and }\alpha_n\in\{0,1\}\right\} \cap \Z^I.$$ So $\conv(D^j(\mathbf{s}_I)+D^k(\mathbf{s}_I))=\mathcal{P}$. Let the total endowment $\tot$ be $\mathbf{z}$. Set $\mathbf{w}^j_I\coloneq\mathbf{z}\in X^j_I$, and set $\mathbf{w}^k_I\coloneq\mathbf{0}\in X^k_I$. This is clearly an endowment allocation. Since $\tot\in\mathcal{P}=\conv(D^j(\mathbf{s}_I)+D^k(\mathbf{s}_I))$, we see $\mathbf{s}_I$ is a pseudo-equilibrium price vector. But, since $\linset$ is linearly independent and since $0<\beta_n<1$, we know $\tot\notin D^j(\mathbf{s}_I)+D^k(\mathbf{s}_I)$, so there is no competitive equilibrium at $\mathbf{s}_I$. It follows, by the contrapositive of Fact \[fac:pseudoEquil\], that no  can exist. Proof of Fact \[fac:unimodMaxDomain\] ------------------------------------- We will use the following claim. \[cla:maximal\] If $\mathcal{D}$ is a maximal unimodular  then $\mathcal{D}$ spans $\mathbb{R}^I$. Let $\linset \subseteq \mathcal{D}$ be a maximal, linearly independent set. As $\mathcal{D}$ is unimodular, there exists a set $T$ of  such that $\linset \cap T = \emptyset$ and $\linset \cup T$ is a basis of $\mathbb{R}^I$ with determinant $\pm 1.$ We claim that $\mathcal{D}_0 = \mathcal{D} \cup T \cup -T$ is unimodular. To see why, let $L' \subseteq \mathcal{D} \cup T\cup -T$ be a maximal linearly independent set. As $\mathcal{D}_0$ spans $\mathbb{R}^I$ by construction, $L'$ must span $\mathbb{R}^I$. Due to the maximality of $\linset$, we must have that $|L' \cap (T \cup -T)| = |T|.$ It follows that $L' \cap \mathcal{D}$ is a basis for the span of $\mathcal{D}$. As $\mathcal{D}$ is unimodular, $L' \cap \mathcal{D}$ must be the image of $\linset$ under a unimodular change of basis of the span of $\mathcal{D}$. 
It follows that $L'$ is a basis for $\mathbb{R}^I$ with determinant $\pm 1$—so $\mathcal{D}_0$ is unimodular. Due to the maximality of $\mathcal{D},$ we must have that $T = \emptyset$, and hence $\mathcal{D}$ must span $\mathbb{R}^I$. As $\mathcal{D}$ is unimodular, it follows that $\mathcal{D}$ must integrally span $\mathbb{Z}^I$. We next divide into cases based on whether $\valFn{j}$ is non-concave and of demand type $\mathcal{D}$, or not of demand type $\mathcal{D}$, to construct concave valuations $\valFn{k}$ of demand type $\mathcal{D}$ for agents $k \not= j$ and a total endowment for which no  exists. $\valFn{j}$ is not concave but is of demand type $\mathcal{D}$. By Fact \[fac:conc\], there exists a price vector $\p$ such that $\dQL{j}{\p} \not= \conv\left(\dQL{j}{\p}\right) \cap \Z^I.$ Let $\bunpr \in \dQL{j}{\p}$ be an extreme point of $\conv\left(\dQL{j}{\p}\right)$, so there exists $\mathbf{s}_I\in\R^I$ satisfying $$\label{eqn:xprime} \{\bunpr\}=\argmax_{\mathbf{y}_I\in \dQL{j}{\p}}\mathbf{s}_I\cdot\mathbf{y}_I.$$ Let $\bundpr \in \left(\conv\left(\dQL{j}{\p}\right) \cap \Z^I\right) \ssm \dQL{j}{\p}$ be arbitrary. Let $k \in J \ssm \{j\}$ be arbitrary. Let $X^k_I=(\conv(D^j(\p))\cap\Z^I)+\{-\bunpr\}$. Since $V^j$ is of demand type $\mathcal{D}$, it follows by Fact \[fac:demTypeCplx\] that every edge of $\conv(D^j(\p))$ is a multiple of a vector in $\mathcal{D}$, and so the same is true of $\conv(X^k_I)$. Moreover, by definition of $X^k_I$ it is clear that $\conv(X^k_I)\cap\Z^I=X^k_I$. Fix $\mathbf{t}_I\coloneq\p+\mathbf{s}_I$ and let $V^k\coloneq V^{k,\mathbf{t}_I}$, which is concave by Lemma \[lem:linConv\] and of demand type $\mathcal{D}$ by Corollary \[cor:edges\]. By Equation (\[eqn:faces\]) we know $D^k(\p)=\argmax_{\mathbf{x}_I\in X^k_I}\mathbf{s}_I\cdot\mathbf{x}_I$, and so by Equation (\[eqn:xprime\]) and the definition of $X^k_I$, it follows that $D^k(\p)=\{\bunpr-\bunpr\}=\{\mathbf{0}\}$.
Let the total endowment $\tot$ be $\mathbf{x}''_I$, let $\bundowj \coloneq \bunpr\in X^j_I$, and let $\mathbf{w}^k_I\coloneq\bundpr-\bunpr\in X^k_I$. For agents $j' \in J \ssm \{j,k\}$, let $\Feas{j'} = \{\zero\}$, let $\valFn{j'}$ be arbitrary, and let $\bundowag{j'} = \zero.$ Thus $(\mathbf{w}^{j'}_I)_{j'\in J}$ is an endowment allocation. Moreover, $$\sum_{j'\in J}D^{j'}(\p)=D^j(\p).$$ Thus $\tot=\bundpr\in \conv\left(\sum_{j'\in J}D^{j'}(\p)\right)$ and so $\p$ is a pseudo-equilibrium price vector. But $\tot=\bundpr\notin \sum_{j'\in J}D^{j'}(\p)=D^j(\p)$ by definition of $\bundpr$, and so there is no  at $\p$. Therefore, by the contrapositive of Fact \[fac:pseudoEquil\], no  can exist. $V^j$ is not of demand type $\mathcal{D}$. By Fact \[fac:demTypeCplx\] there exists a primitive integer vector $\norm\notin\mathcal{D}$ and a price vector $\p\in\R^I$ such that $D^j(\p)\subseteq\{\bunpr+\alpha\norm\ | \ \alpha=0,\ldots,r\}$ where $r\geq 1$ and $\bunpr,\bunpr+r\norm\in D^j(\p)$. As $\mathcal{D}$ is not strictly contained in any unimodular , and as $\norm\notin\mathcal{D}$, the set $\mathcal{D}\cup\{\norm\}$ is not unimodular. Let $\{\mathbf{d}^1,\ldots,\mathbf{d}^m,\norm\}$ be a minimal non-unimodular subset of $\mathcal{D}\cup\{\norm\}$. Thus the set $\{\mathbf{d}^1,\ldots,\mathbf{d}^m,\norm\}$ is linearly independent and, by Fact \[fac:unimodSet\], there exists $$\label{eqn:zinP} \mathbf{z}=\beta_0\norm+\sum_{\ell=1}^m\beta_\ell \mathbf{d}^\ell\in\Z^I\text{ with }0<\beta_\ell<1\text{ for }\ell=0,\ldots,m.$$ By Claim \[cla:maximal\], we know that $\mathcal{D}$ spans $\R^I$.
Since $\mathcal{D}$ is also unimodular, by Fact \[fac:unimodSet\] there exist $\mathbf{d}^{m+1},\ldots,\mathbf{d}^n\in\mathcal{D}$ for some $n\geq m$ such that $\mathbf{d}^1,\ldots,\mathbf{d}^n$ are linearly independent and $$\mathbf{z}=\sum_{\ell=1}^n\gamma_\ell \mathbf{d}^\ell \text{ with }\gamma_\ell \in\Z \text{ for all }\ell=1,\ldots,n.$$ Moreover, by replacing $\mathbf{d}^{m+1},\ldots,\mathbf{d}^n$ with their negations if necessary, we can assume that $\gamma_{m+1},\ldots,\gamma_n \geq 0$. Let $k \in J \ssm \{j\}$ be arbitrary. Let $X^k_I=Y^k_I+Z^k_I$, where $$\begin{aligned} Y^k_I &=\left\{\rgivend\sum_{\ell=1}^m\alpha_\ell\mathbf{d}^\ell\rgiv -|\gamma_\ell|\leq\alpha_\ell\leq |\gamma_\ell|+1 \text{ for }\ell=1,\ldots,m\right\}\cap\Z^I\\ Z^k_I &=\left\{\rgivend\sum_{\ell=m+1}^n\alpha_\ell\mathbf{d}^\ell\rgiv 0\leq\alpha_\ell\leq \gamma_\ell \text{ for }\ell=m+1,\ldots,n\right\}\cap\Z^I.\end{aligned}$$ Observe that $\mathbf{z}\in X^k_I$. Moreover, $\conv(X^k_I)\cap\Z^I=X^k_I$, as we may see by writing $X^k_I=\{\sum_{\ell=1}^n\alpha_\ell \mathbf{d}^\ell|c_\ell\leq\alpha_\ell\leq d_\ell \text{ for }\ell=1,\ldots,n\}\cap\Z^I$ for suitably chosen $c_\ell$ and $d_\ell$. Choose $\mathbf{s}_I$ such that $\mathbf{s}_I\cdot\mathbf{d}^\ell=0$ for $\ell=1,\ldots,m$ and $\mathbf{s}_I\cdot\mathbf{d}^\ell<0$ for $\ell=m+1,\ldots,n$. (Such an $\mathbf{s}_I$ exists because $\mathbf{d}^1,\ldots,\mathbf{d}^n$ are linearly independent.) Set $\mathbf{t}_I\coloneq\p+\mathbf{s}_I$ and set $V^k\coloneq V^{k,\mathbf{t}_I}$. Then $V^k$ is concave by Lemma \[lem:linConv\]. By Equation (\[eqn:faces\]) and the definition of $X^k_I$, we deduce that $$D^k(\p)=\argmax_{\mathbf{x}_I\in X^k_I}\mathbf{s}_I\cdot\mathbf{x}_I =Y^k_I.$$ Moreover, the edges of $\conv(X^k_I)$ are parallel to $\mathbf{d}^1,\ldots,\mathbf{d}^n$ and so by Corollary \[cor:edges\], the valuation $V^k$ is of demand type $\mathcal{D}$.
For agents $j' \in J \ssm \{j,k\},$ let $\Feas{j'} = \{\zero\}$, let $\valFn{j'}$ be arbitrary, and let $\bundowag{j'} = \mathbf{0}$. Let the total endowment $\tot$ be $\mathbf{x}'_I + \mathbf{z}$. Set $\mathbf{w}^j_I \coloneq \mathbf{x}'_I \in X^j_I$ and $\mathbf{w}^k_I \coloneq \mathbf{z} \in X^k_I,$ so $(\mathbf{w}^{j'}_I)_{j' \in J}$ is . Now observe that $$\label{eqn:demSet} \sum_{j'\in J}D^{j'}(\p)\subseteq\{\bunpr+\alpha\norm\ | \ \alpha=0,\ldots,r\}+Y^k_I$$ while, since $\bunpr+r\norm\in D^j(\p)$, we have the equality $$\conv\left(\sum_{j'\in J}D^{j'}(\p)\right)=\{\bunpr+\alpha\norm\ | \ 0\leq\alpha\leq r\}+\conv(Y^k_I).$$ Recalling Equation (\[eqn:zinP\]), we conclude that $\tot = \bunpr+\mathbf{z}\in \conv\left(\sum_{j'\in J}D^{j'}(\p)\right)$, so $\p$ is a pseudo-equilibrium price vector. But, since $0<\beta_0<1$ in Equation (\[eqn:zinP\]) and since the set $\{\mathbf{d}^1,\ldots,\mathbf{d}^m,\mathbf{g}\}$ is linearly independent, we conclude from Equation (\[eqn:demSet\]) that $\tot = \bunpr+\mathbf{z}\notin \sum_{j'\in J}D^{j'}(\p)$, so there is no  at $\p$. Therefore, by the contrapositive of Fact \[fac:pseudoEquil\], no  can exist. As the cases exhaust all possibilities, we have proven the fact. [^1]: For example, methods based on integer programming (see, e.g., [@koopmans1957assignment], [@BiMa:97], [@Ma:98], [@CaOzPa:15], and [@tran2019product]) rely on characterizations of the set of Pareto-efficient allocations as the solutions to a welfare maximization problem, while methods based on convex programming (see, e.g., [@Muro:2003], [@ikebe2015stability], and [-@CaEpVo:17]) and tropical geometry [@BaKl:14; @BaKl:19] rely on representing aggregate demand as the demand of a representative agent.
[^2]: Outside the case of substitutes (which we describe in detail), [@BiMa:97] and [@Ma:98] gave necessary and sufficient conditions on profiles of valuations, and [@CaOzPa:15] gave sufficient conditions on agents’ individual valuations, for the existence of competitive equilibrium in transferable utility economies. [^3]: [@KeCr:82] were aware of the equivalence between gross and net substitutability in their setting (see their Footnote 1) but used the term “gross substitutes” due to an analogy of their arguments for existence with tâtonnement from general equilibrium theory. [^4]: See Theorem 4.3 of [@BaKl:19]; an earlier version was given by [@DaKoMu:01]. [^5]: It generalizes the quasilinear case of [@KeCr:82], and results of [@SuYa:06], [@MiSt:09], [@HaKoNiOsWe:11], and [@Teyt:14]. [^6]: In particular, we allow for multiple units of some goods to be present in the aggregate, unlike [@GuSt:99] and [@CaOzPa:15]. [^7]: Technological constraints on production (in the sense of [@HaKoNiOsWe:11] and [@FlJaJaTe:19]) can be represented by the possibility that some bundles of goods are infeasible for an agent to consume (see Example 2.15 in [@BaKl:14]). [^8]: @henry1970indivisibilites [pages 543–544], @mas1977indivisible [Theorem 1(i)], and @DeGa:85 [Equation (3.1)] made similar assumptions. If consuming money is inessential but consumption of money must be nonnegative, then it is known that  may not exist [@mas1977indivisible]—even in settings in which agents have unit demand for goods (see, e.g., [@herings2019competitive]). However, the existence of  can be guaranteed when the agents trade lotteries over goods [@Gul.etal2020]. [^9]: Note that income effects also correspond to changes in an agent’s Marshallian demand induced by changes in the value of her endowment, holding prices fixed. 
[^10]: Here, we call $\quasivalFn{j}$ a quasivaluation, and denote it by $\quasivalFn{j}$ instead of $\valFn{j}$, to distinguish it from the valuation of an agent with quasilinear preferences. [^11]: Although Fact \[fac:dualDem\] is usually stated with divisible goods (see, e.g., Proposition 3.E.1 and Equation (3.E.4) in [@MaWhGr:95]), the standard proof applies with multiple indivisible goods and money under Condition (\[eq:ulimits\]). For sake of completeness, we give a proof of Fact \[fac:dualDem\] in Appendix \[app:dualDemPrefs\]. [^12]: The function $\cfFn{j}$ is the *compensation function* of [@DeGa:85] (see also [@DaKoMu:01]). [^13]: A version of Fact \[fac:dualPrefs\] for the function $\cfFn{j}$ in a setting in which utility is increasing in goods is proved in Lemma 1 in [@DaKoMu:01]. For sake of completeness, we give a proof of Fact \[fac:dualPrefs\] in Appendix \[app:dualDemPrefs\]. Fact \[fac:dualPrefs\] is also similar in spirit to the duality between utility functions and expenditure functions (see, e.g., Propositions 3.E.2 and 3.H.1 in [@MaWhGr:95]). However, the arguments of the expenditure function (at each utility level) are prices, while the arguments of the  (at each utility level) are quantities. [^14]: Condition (\[eq:vlimits\]) is analogous to Condition (\[eq:ulimits\]) and ensures that the corresponding utility function is defined everywhere on $\Feans{j}$. Note that Condition (\[eq:vlimits\]) is essentially automatic in the context of [@DaKoMu:01] and therefore does not appear explicitly in their result (Lemma 1 in [@DaKoMu:01]). [^15]: \[fn:quasiEquil\]As a result,  in the  coincide with *quasiequilibria with transfers* from the modern treatment of the Second Fundamental Theorem of Welfare Economics (see, e.g., Definition 16.D.1 in [@MaWhGr:95]). As the set of feasible levels of money consumption is open, agents can always reduce their money consumption slightly from a feasible bundle to obtain a strictly cheaper feasible bundle.
Hence, quasiequilibria with transfers coincide with equilibria with transfers in the original economy (see, e.g., Proposition 16.D.2 in [@MaWhGr:95] for the case of divisible goods). If the endowments of money were fixed in the , this concept would coincide with the concept of *compensated equilibrium* of [@arrow1971general] and the concept of *quasiequilibrium* introduced by [@debreu1962new]. [^16]: Recall that an allocation $(\bunnj)_{j \in J} \in \bigtimes_{j \in J} \Feans{j}$ is *Pareto-efficient* if there does not exist an allocation $(\hbunnj)_{j \in J} \in \bigtimes_{j \in J} \Feans{j}$ such that $$\sum_{j \in J} \hbunnj = \sum_{j \in J} \bunnj,$$ and $\util{j}{\hbunnj} \ge \util{j}{\bunnj}$ for all agents $j$ with strict inequality for some agent. [^17]: While [@maskin2008fundamental] assumed that goods are divisible, their arguments apply even in the presence of indivisibilities—as we show in Appendix \[app:EEDproof\]. [^18]: This approach is similar in spirit to the proof of the existence of  with divisible goods. [@negishi1960welfare] instead applied an adjustment process to the inverses of agents’ marginal utilities of money. However, that approach does not generally yield a convex-valued adjustment process in the presence of indivisibilities. [^19]: Our definition of  holds the endowment of goods fixed, but, unlike [@FlJaJaTe:19], imposes a condition at every feasible endowment of money. Imposing the “full substitutability in demand language” condition from Assumption D.1 in Supplemental Appendix D of [@FlJaJaTe:19] at every money endowment is equivalent to our  condition. [^20]: Note that  is independent of the endowment of goods as endowments do not affect the demands of agents with quasilinear utility functions. Our definition of  coincides with the definition of [@DaKoLa:2003]. [^21]: By contrast, [@KeCr:82] imposed a  condition at all price vectors.
Imposing this condition at every money endowment leads to a strictly stronger condition than Definition \[def:gsub\]\[part:gsub\] in the presence of income effects [@schlegel2018trading]. [^22]: Fact \[fac:subExist\] is a version of Theorem 1 in [@HaKoNiOsWe:11] for exchange economies and follows from Proposition 4.6 in [@BaKl:19]. See [@KeCr:82] and [@GuSt:99] for earlier versions that assume that valuations are monotone. [^23]: Fact \[fac:subMaxDomain\] is a version of Theorem 2 in [@GuSt:99] and Theorem 4 in [@yang2017maximal] that applies when $\Feas{k}$ can be strictly contained in $\{0,1\}^I$, as well as a version of Theorem 7 in [@HaKoNiOsWe:11] for exchange economies. For sake of completeness, we give a proof of Fact \[fac:subMaxDomain\] in Appendix \[app:maxDomain\]. The proof shows that the statement would hold if $|J| \geq |I|$ and agents $k \not= i$ were restricted to unit-demand valuations—as in Theorem 2 in [@GuSt:99]. [^24]: [@FlJaJaTe:19] worked with a matching model and considered equilibrium with personalized pricing, but their arguments also apply in exchange economies without personalized pricing. However, [@FlJaJaTe:19] only required that each agent sees goods as  for a fixed endowment of goods and money. Our notion of  considers a fixed endowment of goods but a variable endowment of money, and therefore the existence result of [@FlJaJaTe:19] is not strictly a special case of Theorem \[thm:netSubExist\]. Moreover, [@FlJaJaTe:19] also allowed for frictions such as transaction taxes and commissions in their existence result. [^25]: Indeed, recall that Example \[eg:quasilogDualVal\] tells us that agent $j$’s  at each utility level is a positive linear transformation of $\quasivalFn{j}$. The conclusion follows by Remark \[rem:netSubDual\]. [^26]: See Remark E.1 in the Supplemental Material of [@FlJaJaTe:19].
[^27]: @DaKoMu:01 [Example 2] also showed the connection between the housing market economy and a  condition, but formulated their discussion in terms of the shape of the convex hull at domains at which demand is multi-valued instead of . Their discussion is equivalent to ours by Corollary 5 in [-@DaKoLa:2003] and Remark \[rem:netSubDual\]. [^28]: The existence of a feasible set of bundles of goods and a  valuation for $k$ for which no  exists follows from Fact \[fac:subMaxDomain\]. To check that $\valFn{k}$ is an example of such a valuation, suppose, for sake of deriving a contradiction, that $(\bunj,\bunag{k})$ is the allocation of goods in a . The First Welfare Theorem implies that $\bunj = (1,1)$ and that $\bunag{k} = (0,0)$. But for agent $j$ to demand $(1,1),$ the equilibrium prices would have to sum to at most 5, while for agent $k$ to demand $(0,0)$, the equilibrium prices would both have to be at least 3—a contradiction. Hence, we can conclude that no  exists. [^29]: \[fn:evalU\]To show this, note that $\numerdowj - \ppr \cdot ((1,1) - \bundowj) = -1,$ so it would violate $j$’s budget constraint to demand $(1,1)$ at the price vector $\ppr$. For the other bundles, note that $$\begin{array}{c|c|c|c|c} \bun & (0,0) & (0,1) & (1,0) & (1,1)\\ \hline \util{j}{\numerdowj - \p \cdot (\bun - \bundowj),\bun} & \log \frac{5}{11} & \log \frac{3}{7} & \log \frac{3}{4} & \log 1\\ \hline \util{j}{\numerdowj - \ppr \cdot (\bun - \bundowj),\bun} & \log \frac{5}{11} & \log \frac{3}{7} & \log \frac{1}{4} & \text{undef.,} \end{array}$$ so $\dM{j}{\p}{\bunndowj} = \{(1,1)\}$ and $\dM{j}{\ppr}{\bunndowj} = \{(0,0)\}$. [^30]: The expressions for $\dH{j}{\p}{\ub}$ and $\dH{j}{\ppr}{\ub}$ hold because agent $j$’s Hicksian valuation at utility level $\ub$ is $\frac{5}{11}$ times the quasivaluation $\quasivalFn{j}$ (by Example \[eg:quasilogDualVal\]). [^31]: To show this, let $\hp = (3,2)$. It is clear that $(0,1) \in \dQL{k}{\hp}$.
It remains to show that $(1,0) \in \dM{j}{\hp}{\bunndowj}$. Note that $\numerdowj - \hp \cdot ((1,1) - \bundowj) = 0,$ so it would violate $j$’s budget constraint to demand $(1,1)$ at the price vector $\hp$. For the other bundles, note that $$\util{j}{\numerdowj - \hp \cdot (\bun - \bundowj),\bun} = \begin{cases} \log \frac{5}{11} & \text{if } \bun = (0,0)\\ \log \frac{3}{7} & \text{if } \bun = (0,1)\\ \log \frac{1}{2} & \text{if } \bun = (1,0), \end{cases}$$ so $\dM{j}{\hp}{\bunndowj} = \{(1,0)\}$. [^32]: Definition \[def:demTypeTU\](c) coincides with Definition 3.1 in [@BaKl:19] by Proposition 2.20 in [@BaKl:19]. [^33]: As there are no income effects here, the compensated law of demand (see, e.g., Proposition 3.E.4 in [@MaWhGr:95]) reduces to the law of demand. [^34]: \[fn:classDiscConv\]The “if” direction of Fact \[fac:unimod\] is a case of the “if" direction of Theorem 4.3 in [@BaKl:19]. The “only if” direction of Fact \[fac:unimod\], which we prove in Appendix \[app:maxDomain\], is a mild strengthening of the “only if" direction of Theorem 4.3 in [@BaKl:19] that applies in exchange economies. [^35]: To understand the correspondence, let $\mathcal{D}$ be a unimodular . In the terminology of [@DaKoMu:01], a valuation $\valFn{j}$ is *$\mathscr{D}(\mathscr{P}t(\mathcal{D},\mathbb{Z}))$-concave* if, for each price vector $\p$, we have that $\dQL{j}{\p} = \conv(\dQL{j}{\p}) \cap \Z^I$ and each edge of $\conv(\dQL{j}{\p})$ is parallel to an element of $\mathcal{D}$ (see Definition 4 and pages 264–265 in [@DaKoMu:01]). It follows from Lemma 2.11 and Proposition 2.16 in [@BaKl:19] that a valuation is $\mathscr{D}(\mathscr{P}t(\mathcal{D},\mathbb{Z}))$-concave if and only if it is concave and of demand type $\mathcal{D}$. [^36]: See Definition 4, Theorem 3, and pages 264–265 in [@DaKoMu:01]. [^37]: By contrast, the existence results of [@SuYa:06] and [@Teyt:14] can be deduced from Fact \[fac:subExist\] applying an appropriate change of basis. 
Those results are also special cases of Fact \[fac:unimod\]. [^38]: Section 6.1 in [@BaKl:19] provides another example that includes only complements valuations. [^39]: Fact \[fac:unimodMaxDomain\] is related to Proposition 6.10 in [@BaKl:14], which connects failures of unimodularity to the non-existence of  in specific economies. We supply a proof of Fact \[fac:unimodMaxDomain\] in Appendix \[app:maxDomain\]. [^40]: It is equivalent to define quasiconcavity in terms of the convexity of the upper contour sets, but Definition \[def:quasiConc\] is more immediately applicable for us. [^41]: existence result is not formally a special case of ours because they allowed for unbounded sets $\Feas{j}$ of feasible bundles of goods. [^42]: Requiring that different goods, rather than different units of goods, be  leads to a condition called . However,  does not ensure the existence of  when agents can demand multiple units of some goods [@DaKoLa:2003; @MiSt:09; @BaKl:19].  in turn corresponds to an “ordinary substitutes” demand type (see Definitions 3.4 and 3.5 and Proposition 3.6 in [@BaKl:19]). [^43]: The quasilinear case of this fact is part of Theorem 4.1(i) in [@shioura2015gross] (see also Proposition 3.10 in [@BaKl:19]). The general case follows from the quasilinear case by Lemma \[lem:dHvalH\] and Remark \[rem:netSubDual\]. [^44]: In particular, if agent $j$ demands at most one unit of each good, then $\utilFn{j}$ is a  utility function if and only if it is of the strong substitutes demand type. [^45]: See [@klemperer2008new; @klemperer2010product; @klemperer2018product] and . Iceland planned a Product-Mix Auction for bidders with budget constraints [@klemperer2018product], but that auction was for a setting with divisible goods. [^46]: The lemma is due to @shapley1964values [page 3]; see also [@BiMa:97] and [@HaKoNiOsWe:11]. @jagadeesan2020lone [Lemma 1] proved the lemma in a setting with multiple units that allows for non-monotone valuations.
[^47]: The existence of an integer vector that is in $\conv\left(\sum_{j'\in J} D^{j'}(\mathbf{p}_I)\right)$ but not $\sum_{j' \in J} D^{j'}(\mathbf{p}_I)$ follows from Fact \[fac:unimodSet\] as the vectors $\mathbf{g},\mathbf{e}^1-\mathbf{e}^2,\mathbf{e}^3,\ldots,\mathbf{e}^{|I|}$ do not comprise a unimodular set.
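The non-unimodularity claim in the last footnote can be checked mechanically in small instances. Below is a minimal sketch (Python); the concrete vectors are hypothetical stand-ins (we take $|I| = 2$ and $\mathbf{g} = (1,1)$ purely for illustration; the paper's $\mathbf{g}$ is defined in the surrounding text). It uses the standard criterion that a finite set of integer vectors in $\mathbb{Z}^n$ can be unimodular only if every $n$-element subset has determinant in $\{0, \pm 1\}$, so a single bad subset certifies failure.

```python
import numpy as np
from itertools import combinations

def violates_unimodularity(vectors):
    """Return a maximal subset whose determinant lies outside {0, +1, -1},
    certifying that `vectors` is not a unimodular set; return None otherwise."""
    n = len(vectors[0])
    for cols in combinations(vectors, n):
        d = round(np.linalg.det(np.array(cols, dtype=float).T))
        if abs(d) > 1:
            return cols
    return None

# Hypothetical instance with |I| = 2: g = (1, 1) and e^1 - e^2 = (1, -1)
bad = violates_unimodularity([(1, 1), (1, -1)])        # det = -2
ok = violates_unimodularity([(1, 0), (0, 1), (1, 1)])  # all 2x2 minors in {0, ±1}
```

Here the pair $\{(1,1),(1,-1)\}$ spans an index-2 sublattice of $\mathbb{Z}^2$, which is exactly the kind of failure that Fact \[fac:unimodSet\] exploits.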
--- author: - Aaron Clauset - Kristian Skrede Gleditsch title: The developmental dynamics of terrorist organizations --- [**Traditional studies of terrorist group behavior [@terrorism:definition] focus on questions of political motivation, strategic choices, organizational structure, and material support [@cordes:etal:1985; @hoffman:1998; @enders:sandler:2002; @pape:2003; @sageman:2004; @li:2005], but say little about the basic laws that govern how the frequency and severity (number of deaths) [@clauset:young:gleditsch:2007] of their attacks change over time. Here we study 3,143 fatal attacks carried out worldwide from 1968–2008 by 381 terrorist groups [@mipt:2008], and show that the frequency of a group’s attacks accelerates along a universal trajectory, in which the time between attacks decreases according to a power law in the group’s total experience; in contrast, attack severity is independent of organizational experience and organizational size. We show that the acceleration can be explained by organizational growth, and suggest that terrorist organizations may be best understood as firms whose primary product is political violence. These results are independent of many commonly studied social and political factors, suggesting a fundamental law for the dynamics of terrorism and a new approach to understanding political conflicts.** ]{} High-quality empirical data on terrorist groups, such as their recruitment, fundraising, decision making, and organizational structure, are scarce, and the available sources are not typically amenable to scientific analysis [@jackson:etal:2005]. However, good-quality data on the frequency and severity of their attacks do exist, and their systematic analysis can shed new light on which facets of terrorist group behavior are predictable and which are inherently contingent. 
Each record in our worldwide database of 3,143 fatal attacks [@mipt:2008] includes its calendar date $t$, its severity $x$, and the name of the associated organization, if known (see Supplementary Information). For each group, we quantify the changes in the frequency and severity of its attacks over its lifetime using a *development curve*. This curve maps a group’s behavior onto a common quantitative scale and facilitates the direct comparison of different groups at similar points in their life histories. To construct this, we plot the behavioral variable, such as the time (days) between consecutive attacks [$\Delta t$]{} or the severity of an attack $x$, as a function of the group’s maturity or experience $k$, indexed here by the cumulative number of fatal attacks (Fig. \[fig:individual:devcurves\]). Combining the developmental curves of many groups produces an aggregate picture of their behavioral dynamics, and allows us to extract the typical developmental trajectory of a terrorist group. Constructing a combined development curve for the 381 organizations in our database, we find that the time between consecutive attacks [$\Delta t$]{} changes in a highly regular way (Fig. \[fig:development\]a), while the severity of these attacks $x$ is independent of organizational experience (Fig. \[fig:development\]c). ![image](devcurves_4groups_freq_sev.eps) Empirically, the time between attacks decreases quickly as a group gains experience. For example, the mean delay between the first and second fatal attacks is almost six months, $\langle{\ensuremath{\Delta t}}\rangle=168.6\pm0.6$ days, while after 13 attacks, the mean delay is only $\langle{\ensuremath{\Delta t}}\rangle=27\pm1$ days. More generally, the envelope or distribution of delays $p({\ensuremath{\Delta t}},k)$ can be characterized as a truncated log-normal distribution with constant variance $\sigma^{2}$ and a characteristic delay between attacks $\mu$ that decreases systematically with experience $k$.
Mathematically: $$\begin{aligned} p(\log{\ensuremath{\Delta t}},\log k) & \propto {\rm exp}\!\left[\frac{-(\log \Delta t+\beta \log k-\mu)^{2}}{2\sigma^{2} }\right] \enspace , \label{eq:model}\end{aligned}$$ where $\beta$ controls the trajectory of the distribution toward the natural cutoff at ${\ensuremath{\Delta t}}=1$ day. For small $k$, i.e., during a group’s early development, this model predicts a mean delay between attacks that decays like a power law ${\ensuremath{\Delta t}}\approx\mu\,\!k^{-\beta}$; however, as $k$ increases, this trend is attenuated as the mean delay asymptotes to ${\ensuremath{\Delta t}}=1$ (see Supplementary Information). Under this model, $\beta=1$ would indicate a simple linear feedback between a group’s attack rate and its experience. However, we find $\hat{\beta}=1.10\pm0.02$, indicating a faster-than-linear feedback between the accumulation of experience and the rate of future attacks. This model successfully predicts that the distributions of normalized delays ${\ensuremath{\Delta t}}\,k^{\hat{\beta}}$ will collapse onto a single log-normal distribution with parameters $\hat{\mu}$ and $\hat{\sigma}$ (Fig. \[fig:development\]b). However, individual attacks cannot be considered fully independent ($p=0.00\pm0.03$; see Supplementary Information), indicating that significant temporal or inter-group correlations may exist in the timing of a group’s future attacks [@clauset:etal:2009:b] (Fig. \[fig:individual:devcurves\]). In contrast, the severity of an attack $x$ is independent of group experience $k$ (Fig. \[fig:development\]c; $r=-0.024$, t-test, $p=0.17$), as illustrated by the collapse of the severity distributions $p(x\,|\,k)$ onto a single invariant heavy-tailed distribution [@clauset:young:gleditsch:2007; @clauset:etal:2009] (Fig. \[fig:development\]d; see Supplementary Information). 
For example, the mean severity of a group’s first attack $\langle x\rangle=6.7\pm0.9$ is only slightly larger than the mean severity of all attacks by very experienced groups ($k>100$) $\langle x\rangle=5.1\pm0.6$. Thus, contrary to common assumptions, young and old groups are equally likely to produce extremely severe events. Older groups, however, remain significantly more lethal overall [@asal:rethemeyer:2008] because they attack much more frequently than small groups, not because their individual attacks are more deadly. Here, we consider four explanations for the acceleration in the frequency of attacks: (i) organizational learning [@jackson:etal:2005], (ii) organizational growth, (iii) sampling artifacts, and (iv) averaging artifacts [@gallistel:etal:2004]. Organizational learning—commonly studied in manufacturing [@dutton:thomas:1984; @argote:1993] and called “learning by doing”—implies that terrorist groups are born clumsy and increase their attack rate primarily because their existing members learn to be more efficient, e.g., better planning, coördination, and execution. In contrast, organizational growth implies that groups are born small and increase their attack rate primarily by recruiting new, replaceable, relatively independent members, e.g., adding new terrorist cells. Straightforward tests of the data can eliminate both artifactual explanations (see Supplementary Information), indicating that the acceleration is real, even at the level of an individual group (Fig. \[fig:individual:devcurves\]). The relative importance of organizational learning and organizational growth cannot be estimated using frequency and severity data alone. Untangling their effects requires data both on event planning and execution, and on a group’s size and recruitment at various points in its life history. To our knowledge, systematic data on event planning and execution do not exist. 
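Although the frequency and severity data alone cannot separate the two mechanisms, the growth explanation admits a simple in-silico illustration. The following toy construction is ours, not a model from the paper: if each attack recruits one new, independent cell and each cell generates attacks at a fixed Poisson rate (the rate below is illustrative), then with $k$ active cells the expected wait for the next attack falls off as $1/k$, reproducing the $\beta = 1$ "linear feedback" benchmark discussed above (slightly slower than the estimated $\hat{\beta} = 1.10$).

```python
import numpy as np

rng = np.random.default_rng(3)
lam = 1.0 / 170.0                  # per-cell attack rate (illustrative)
n_groups, k_max = 500, 50

# Toy growth model: after its k-th attack a group has k independent cells,
# so the waiting time for attack k+1 is exponential with rate lam * k.
ks = np.arange(1, k_max + 1)
delays = rng.exponential(1.0 / (lam * ks), size=(n_groups, k_max))

# Mean delay at each experience level; its log-log slope should be near -1
mean_delay = delays.mean(axis=0)
slope, _ = np.polyfit(np.log(ks), np.log(mean_delay), 1)
```

Organizational learning would instead act through the per-cell rate `lam` itself; producing $\hat{\beta} > 1$ in this sketch requires some such additional per-member gain, which is one way to read the faster-than-linear feedback reported above.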
The best available data on group sizes, taken from an expert survey [@asal:rethemeyer:2008], are coarse (roughly order of magnitude) estimates of the maximum size achieved by each of the 381 groups over the 1998–2005 period; of these 161 conducted at least one fatal attack, and 80 conducted at least two. ![image](delays_trend_final.eps) ![image](delays_collapse_final.eps)\ ![image](sev_trend_final_c.eps) ![image](sev_collapse_final_d.eps) The growth hypothesis predicts that a group’s maximum size will be inversely related to the minimum delay between its attacks over the 1998–2005 period. An analysis of variance indicates that the average minimum delays in the four size categories are significantly different (Fig. \[fig:group:sizes\]a; $n$-way ANOVA, $F=9.98$, $p<0.000013$), and further that larger organizational size is a highly significant predictor of increased attack frequency ($r=-0.49$, t-test, $p<10^{-5}$). In contrast, size, like experience (Fig. \[fig:development\]c), is not a significant predictor of attack severity (Fig. \[fig:group:sizes\]b; see Supplementary Information). Although operational, organizational, and political circumstances vary widely across terrorist groups, the systematic nature of our results suggests several general conclusions. The strong dependence of attack frequency on experience (Fig. \[fig:development\]a) suggests that the timing of events is governed primarily by organizationally internal factors, like growth and learning, related to group development, e.g., recruitment, personnel turnover, and internal coördination. Our analysis of group sizes lends significant support to the growth hypothesis, but without additional data, we cannot eliminate the possibility that these groups are also learning. Even so, our results implicate these internal factors as leverage points for decreasing the incidence rate of future attacks. In contrast, internal factors seem to play a marginal role in the severity of any particular attack (Figs. 
\[fig:development\]c and \[fig:group:sizes\]b), implying that the lethality of larger and more mature groups [@asal:rethemeyer:2008] is explained by their more frequent activity rather than more deadly activity. Further, we note that curtailing the frequency of a group’s attacks, perhaps by limiting growth, would reduce the [*cumulative risk*]{} of very severe attacks. Learning and growth may also constrain other behaviors, such as fundraising, resource availability, political motivation [@sageman:2004; @krueger:2007], strategic choices in timing [@kydd:walter:2002] and targets [@enders:sandler:2002], and the use of tactics like suicide bombs [@pape:2003]. However, most groups never achieve a high level of experience ($k>100$), and it is unclear to what degree this leaky pipeline is caused by group death, e.g., from counter-terrorism activities or internal conflicts [@cronin:2006; @jones:libicki:2008], versus shifts away from violence. Similarly, it is unclear whether or how these dynamics change when a group does become highly experienced. Once a group is large enough to execute almost daily attacks, it may be more like a social institution, and thus face different constraints and incentives than smaller, still developing groups. It is also unclear what mechanism explains the power-law distribution in the severity of terrorist attacks [@clauset:young:gleditsch:2007; @clauset:etal:2009], and its independence of organizational experience and size. Two possibilities are (i) the advantages and disadvantages of youth (small size and the element of surprise vs. poor resources and clumsy attacks) balance those of maturity (more resources and better planning vs. risk aversion and hostile attention from governments), yielding apparent independence; and (ii) severity is inherently random, governed by contingent details associated with the particular attack, the particular group, etc. 
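The cumulative-risk remark follows from the invariant severity distribution by elementary probability. A minimal sketch (Python; the tail parameters $\alpha = 2.4$ and $x_{\min} = 10$ are illustrative values of the same order as the estimates in the Supplementary Information): under a power-law tail $P(X \ge x) = (x/x_{\min})^{-(\alpha - 1)}$, each attack is a fresh draw regardless of experience, so a group's chance of ever producing a catastrophic attack grows only with how many attacks it carries out.

```python
# Per-attack probability of a very severe event under a power-law tail
alpha, xmin, threshold = 2.4, 10.0, 100.0
p_severe = (threshold / xmin) ** (-(alpha - 1.0))   # ~0.04 per attack

def cumulative_risk(n_attacks: int) -> float:
    """P(at least one attack of >= `threshold` deaths in n independent attacks)."""
    return 1.0 - (1.0 - p_severe) ** n_attacks

risks = {n: cumulative_risk(n) for n in (1, 10, 100)}
```

Because the per-attack tail probability is flat in $k$, the cumulative risk is governed almost entirely by attack frequency, which is why limiting growth, and hence frequency, reduces the chance of very severe events.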
---------------------------------------------  -----------------------------------------------
![image](groupsize_minfreq_1998-2005_a.eps)    ![image](groupsize_mediansev_1998-2005_b.eps)
---------------------------------------------  -----------------------------------------------

The development curves shown here are similar in form to production curves found in manufacturing [@dutton:thomas:1984], in which per-item production costs tend to decrease like a power law in the cumulative number of items produced. In this light, terrorist groups may best be understood as firms whose principal product is political violence, and whose production rates depend largely on organizational growth and the availability of low-skill labor. Thus, studies of event-driven, non-terrorist organizations, e.g., some non-profits, political activism groups, and commercial firms, could shed additional light on the dynamics of terrorist groups. A better understanding of how these groups are like, and unlike, non-terrorist human social groups may indicate which counter-terrorism policies, e.g., limiting growth by stifling recruitment or by forcing organizational turnover, are likely to be effective. Finally, we note that our results are independent of many commonly studied factors, including geographic location, time period, ideological motivations (religious, separatist, reactionary, etc.), and political context. Some aspects of terrorism are thus not nearly as contingent as is widely assumed, and quantifying dynamical patterns in political conflict can serve the broader goal of understanding what regularities exist, why they exist, and what they imply for long-term social and political stability. [**Acknowledgments**]{} The authors thank L.-E. Cederman, K. Drakos, B. Karrer, J. McNerney, J. H. Miller, M. E. J. Newman, A. Ruggeri, D. Sornette, C. R. Shalizi, and M. Young for helpful conversations.
This work was supported in part by the Santa Fe Institute, the Economic and Social Research Council (RES-062-23-0259), and the Research Council of Norway (180441/V10).

- Appendix \[sec:terrorism:data\]: Terrorism data
    - \[sec:data:events\]: Events
    - \[sec:data:organizations\]: Organizations
    - \[sec:unattributed\]: Attributed vs. unattributed events
    - \[sec:domestic:transnational\]: Domestic vs. transnational events
- Appendix \[sec:dev:curve:model\]: Attack frequency development curve model
- Appendix \[sec:tests:of:delay:curve\]: Tests of the attack frequency development curve
    - \[sec:average:sampling\]: Averaging and sampling artifacts
    - \[sec:p:value\]: Calculation of statistical significance; $p$-value
- Appendix \[sec:severity:by:experience\]: Analysis of attack severity development curve
- Appendix \[sec:group:sizes\]: Additional analyses of group size

Terrorism Data {#sec:terrorism:data}
==============

Events {#sec:data:events}
------

Many organizations track terrorist events worldwide, but few provide their data in a form amenable to scientific analysis. The most popular source of information on terrorist events in the political science literature is the ITERATE data set [@s:mickolus:etal:2004], which focuses exclusively on transnational terrorist events, i.e., events involving actors from at least two countries. For the analysis of terrorist groups, however, domestic events, i.e., events involving actors from only one country, are equally relevant.
Thus, we use the data contained in the National Memorial Institute for the Prevention of Terrorism [@s:mipt:2008] database, which largely overlaps with the ITERATE data but also includes fully domestic terrorist events since at least 1998. The MIPT database combines the RAND Terrorism Chronology 1968-1997, the RAND-MIPT Terrorism Incident database (1998-2008), the Terrorism Indictment database (University of Arkansas and University of Oklahoma), and DFI International’s research on terrorist organizations. Since we last collected event data [@s:mipt:2008], the MIPT’s mandate was changed by the U.S. Department of Homeland Security such that maintaining this data and making it publicly available was no longer a priority. Our understanding is that the MIPT data will eventually be merged with the Global Terrorism Database (GTD) managed by the START program, a U.S. Department of Homeland Security Center of Excellence, at the University of Maryland, College Park. Records in the MIPT database are derived by a team of specialized scholars from original sources, largely reports in newspapers worldwide, accessible via the LexisNexis service and other means. This human element presents the possibility that some records, particularly those corresponding to non-lethal events, could be missing or corrupted by poor reporting [@s:danzger:1975]. In order to mitigate the effect of such biases, we focus our analysis on the portion of the database that is most trustworthy [@s:clauset:young:gleditsch:2007], namely those events that resulted in at least one fatality, since these events are typically reported more reliably in the media [@s:snyder:kelly:1977]. The MIPT database contains 35,688 recorded terrorist attacks worldwide from January 1968 to January 2008; these events occurred in almost 6,000 cities in over 180 nations. 13,274 (37.2%) of the events resulted in at least one fatality while 10,085 (28.3%) were attributed to one of 910 organizations.
The intersection of these criteria, i.e., events that were both fatal and attributed, includes 3,143 events attributed to 381 organizations. (106 such events were attributed to more than one organization; in our analysis, we give all groups credit for such an event.) This set of organizations includes most common ideological motivations, such as national-separatist, reactionary, religious, and revolutionary ideologies [@s:miller:2007], and spans a wide variety of political contexts. In some cases, groups carried out multiple attacks on the same day. For our analysis of the delay between consecutive attacks, these few cases present a difficulty by inducing a delay variable of ${\ensuremath{\Delta t}}=0$. For simplicity, we combine such concurrent attacks into a single record with a severity equal to the sum of the fatalities of its component attacks.

Organizations {#sec:data:organizations}
-------------

There are very few sources of systematic data on terrorist organizations worldwide. The START program at the University of Maryland currently hosts some data under their “Terrorist Organization Profiles” (TOPs) program; this data was originally developed by Detica, Inc., a British defense contractor, and was accessible from the MIPT from c.1998 to March 2008. Estimates of group sizes in this database are relatively few, and the methodology used to derive them is opaque. ![The severity distributions for attributed and unattributed events, along with maximum likelihood power-law models for their tails. See Table \[table:unattributed\] for details.
[]{data-label="fig:unattributed"}](severity_attribution.eps)

                   $n$      $\langle x \rangle$   $\sigma$   $x_{\max}$   $\hat{x}_{\min}$   $\hat{\alpha}$   $p$
  -------------- -------- --------------------- ---------- ------------ ------------------ ---------------- ---------------
  Attributed     3,143    6.6                   52.1       2749         $10\pm3$           $2.24\pm0.12$    $0.78\pm0.03$
  Unattributed   10,131   3.5                   10.5       500          $14\pm3$           $2.55\pm0.12$    $0.19\pm0.03$
  All events     13,274   4.2                   51.3       2749         $10\pm3$           $2.39\pm0.09$    $0.39\pm0.03$

Jones and Libicki, working at the RAND Corporation, compiled a database of information on 649 terrorist groups. This database includes estimates of the group’s peak size over its entire lifetime [@s:jones:libicki:2008]. Unfortunately, the breadth of time over which the size estimate holds makes it difficult to tightly relate size with behavior. Finally, Asal and Rethemeyer extracted data from the MIPT list of organizations and augmented it with data drawn from public sources and estimates from a panel of experts at the Monterey Institute of International Studies (MIIS) [@s:asal:rethemeyer:2008]. This data provides coarse (roughly order of magnitude) estimates of maximum size over the 1998–2005 period. Given the care taken in constructing the Asal and Rethemeyer data, it appears to be the most accurate data on terrorist group sizes currently available. Of the groups they consider, 161 conducted at least one fatal attack over the 1998–2005 period, and 80 conducted at least two. For our analyses of group size (Appendix \[sec:group:sizes\]), we consider the former set for questions of event severity, and the latter set for questions of event frequency.

Attributed vs. unattributed events {#sec:unattributed}
----------------------------------

Because a large fraction of the MIPT events are not attributed to any particular terrorist organization, an important question is whether the unattributed events differ from the attributed events in important ways.
For example, are attributed events systematically more severe than unattributed events? This might be the case if there were a systematic bias or incentive for attacking organizations to be associated with their more severe events, perhaps to gain greater media attention. We can test for such a bias by separately considering the severity distributions for attributed and unattributed events in the MIPT database. Fig. \[fig:unattributed\] shows these distributions, along with their corresponding power-law fits; Table \[table:unattributed\] summarizes these data and models. The results show that the attributed- and unattributed-event severity distributions are extremely similar. Their averages and standard deviations differ primarily because these are high-variance distributions, and the presence of a few very large attributed events drives outsized differences in these summary statistics. There are slight differences in the bodies (small $x$, see Fig. \[fig:unattributed\]), and the upper tails are almost identical [@s:clauset:etal:2009]: both are plausibly distributed according to power laws and span largely indistinguishable ranges of severity, but have slightly different scaling exponents (Table \[table:unattributed\]). Although we find some evidence for a very weak relationship between severity and attribution, our results imply that the severities of attributed and unattributed events do not differ in qualitatively important ways. ![The attack frequency development curves for groups whose first attack was [**a**]{}, in 1968–1997 or [**b**]{}, in 1998–2008, illustrating that the development curve progression for attack frequency is not merely a transnational or domestic phenomenon, and is robust to the change in the definition of the MIPT database in 1998. The expected mean delay shown here is taken from the main text, and thus serves as a reference point.
[]{data-label="fig:1998"}](frequency_devcurve_1968-1997_a.eps "fig:") ![The attack frequency development curves for groups whose first attack was [**a**]{}, in 1968–1997 or [**b**]{}, in 1998–2008, illustrating that the development curve progression for attack frequency is not merely a transnational or domestic phenomenon, and is robust to the change in the definition of the MIPT database in 1998. The expected mean delay shown here is taken from the main text, and thus serves as a reference point. []{data-label="fig:1998"}](frequency_devcurve_1998-2008_b.eps "fig:") There is no clear theoretical reason to expect that the delay between events ${\ensuremath{\Delta t}}$ should be systematically related to attribution. However, because the development curve analysis described in Appendix \[sec:dev:curve:model\] omits all of the events carried out by a group but which are not attributed to it, the empirical delay between consecutive attacks is at best an upper-bound on the true delay. (Unfortunately, it is unknown how the observed delay between attributed events relates to the true delay between consecutive events.) This implies that our estimate of the acceleration of attack rates (characterized by $\beta$) may overestimate its true value. The systematic patterns shown in Fig. \[fig:development:freq\] and Fig. \[fig:development:sev\] suggest that, unless they are highly pathologically distributed as a function of $k$, incorporating correctly labeled unattributed events is unlikely to fundamentally change our results. Domestic vs. transnational events {#sec:domestic:transnational} --------------------------------- The MIPT data has an important bias, due to its particular maintenance history. From 1968–1997, the database was maintained by the RAND Corporation as part of its project on transnational terrorism. 
As a result, almost no domestic terrorist attacks are included before 1998, after which the scope of the database was significantly expanded (in part due to the Oklahoma City bombing in 1995) to include purely domestic events worldwide. Thus, an important check on the generality of our results is to ask whether the attack frequency development curves depend on mixing data across the critical 1998 date. To test for this sensitivity, we construct combined development curves from events by groups whose first attack was before 1 January 1998 (mainly transnational groups), and separately from events by groups whose first attack was on or after 1 January 1998 (transnational and domestic groups). Groups in the MIPT database are not coded as being transnational or domestic, and adding such data is a highly non-trivial task for a database of this size. Figure \[fig:1998\] shows that the development curve phenomenon is robust to this division of data. That being said, one difference is worth noting. The development curve for the 1968–1997 data shows a much slower overall rate of acceleration than does the 1998–2008 curve. The origin of this difference may be related to the rise of religiously motivated terrorism in the 1990s and beyond; however, sorting out its cause is beyond the scope of both this test and this study. For our purposes, it suffices to demonstrate that the development curve’s overall form is robust to the MIPT’s coverage bias. ![[**a**]{}, The mean delay $\langle\log{\ensuremath{\Delta t}}\rangle$ between attacks by a terrorist group, with $25^{\rm th}$ and $75^{\rm th}$ percentile isoclines, as a function of group experience $k$. The solid line shows the expected mean delay, from the model described in the text. 
[**b**]{}, The distributions of normalized delays $p({\ensuremath{\Delta t}}\,k^{\hat{\beta}})$, showing the predicted data collapse onto an underlying log-normal distribution.[]{data-label="fig:development:freq"}](delays_trend_final.eps "fig:") ![[**a**]{}, The mean delay $\langle\log{\ensuremath{\Delta t}}\rangle$ between attacks by a terrorist group, with $25^{\rm th}$ and $75^{\rm th}$ percentile isoclines, as a function of group experience $k$. The solid line shows the expected mean delay, from the model described in the text. [**b**]{}, The distributions of normalized delays $p({\ensuremath{\Delta t}}\,k^{\hat{\beta}})$, showing the predicted data collapse onto an underlying log-normal distribution.[]{data-label="fig:development:freq"}](delays_collapse_final.eps "fig:") Attack frequency development curve model {#sec:dev:curve:model} ======================================== This model comes from the observation that the distributions of delays $p({\ensuremath{\Delta t}},k)$ for different values of $k$ collapse onto a single universal curve (Fig. \[fig:development:freq\]b) of the form $$\begin{aligned} p(\log{\ensuremath{\Delta t}},\log k) & = C\,{\rm exp}\!\left[\frac{-(\log {\ensuremath{\Delta t}}+\beta\log k-\mu)^{2}}{2\sigma^{2} }\right] \label{eq:model2} \\ C & = \frac{\sqrt{2/\pi}}{ \sigma \left(1 - {\rm Erf}\!\left[\frac{\beta \log k-\mu}{\sigma\sqrt{2}}\right]\right)} \enspace , \nonumber\end{aligned}$$ ![image](devcurves_8groups_frequency.eps) where ${\rm Erf}$ is the error function and $C$ is the normalization constant. In words, the logarithm of the delay is normally distributed $\mathcal{N}(\mu,\sigma)$ (or equivalently, the delay is log-normally distributed), but has a natural lower cutoff at ${\ensuremath{\Delta t}}=1$ day (which comes from the resolution of the recorded data) and a $\mu$ parameter that decreases systematically as $k$ increases. 
The parameter $\mu$ denotes the characteristic delay between attacks, and in particular the delay between the first and second attacks, while $\sigma^{2}$ denotes the variance in the expected delay. Due to the breadth of the log-normal distribution, a non-trivial value of $\sigma$ implies a wide degree of variability in the waiting time, even given the characteristic delay $\mu$. By integrating, we can derive the expected mean delay from Eq. , which has the form $$\begin{aligned} {\rm E}[\log{\ensuremath{\Delta t}}] = \mu - \beta \log k + \left(\frac{ {\rm exp}\!\left[ \frac{-(\beta \log k - \mu)^{2}}{2\sigma^{2}} \right] \sqrt{2/\pi} }{\sigma^{-1}\left(1-{\rm Erf}\!\left[ \frac{\beta \log k - \mu}{\sigma\sqrt{2}} \right]\right)} \right)\enspace . \label{eq:expected:delay}\end{aligned}$$ For small values of $k$, the expected delay is dominated by the first two terms, and thus decays according to a power law ${\ensuremath{\Delta t}}\approx \mu k^{-\beta}$, where $\mu$ represents the initial rate of attack for a group. At larger values of $k$, the expected delay is dominated by the third term in Eq. , which makes the expected delay approach ${\ensuremath{\Delta t}}=1$ more slowly than a power law. The model’s parameters $\mu$, $\sigma$ and $\beta$ can be estimated directly from the data using standard numerical procedures (the simplex or Nelder-Mead method [@s:nelder:mead:1965], in this case) to maximize the likelihood of the data. Standard error estimates for the parameters can then be numerically estimated using the method of Fisher information to approximate the width of the log-likelihood function in the vicinity of the maximum [@s:barndoff-nielsen:cox:1995]. Applying these methods to our data yields $\hat{\mu}=5.67\pm0.05$, $\hat{\sigma}=2.12\pm0.02$, and $\hat{\beta}=1.10\pm0.02$.
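A minimal sketch of this fitting procedure, assuming the delays `dt` (in days) and experience values `k` are available as arrays; the function names are illustrative, not code from the study:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def neg_log_likelihood(params, log_dt, log_k):
    """Negative log-likelihood of the log-normal delay model, truncated
    below at dt = 1 day (log dt >= 0), with location mu - beta * log k."""
    mu, sigma, beta = params
    if sigma <= 0:
        return np.inf
    z = (log_dt + beta * log_k - mu) / sigma
    # normalizing constant: mass of the untruncated normal above log dt = 0
    log_surv = norm.logsf((beta * log_k - mu) / sigma)
    return -np.sum(norm.logpdf(z) - np.log(sigma) - log_surv)

def fit_development_curve(dt, k, start=(5.0, 2.0, 1.0)):
    """Maximize the likelihood with the Nelder-Mead simplex method."""
    res = minimize(neg_log_likelihood, start,
                   args=(np.log(dt), np.log(k)), method="Nelder-Mead")
    return res.x  # (mu_hat, sigma_hat, beta_hat)
```

The truncation term `log_surv` is what distinguishes this from an ordinary log-normal fit: each observation is weighted by the probability that the untruncated normal exceeds $\log{\ensuremath{\Delta t}}=0$ at its value of $k$.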
The estimated value of $\beta$ has particular significance: $\beta=1$ indicates a simple linear feedback between the gain in experience and the change in attack rate, with deviations above and below indicating super- and sub-linear feedback. The estimated $\hat{\beta}>1$ thus indicates super-linear feedback, which implies an explosive acceleration in the attack rate in finite time. However, this explosive growth is strongly attenuated by the third term in Eq. , which prevents the actual attack rate from reaching extremely high levels. Although desirable, estimates of individual group trajectories are difficult to obtain using this procedure, due to a severe $O(k_{*}^{-1/4})$ finite-size bias in the estimation of $\mu$ (where $k_{*}$ denotes the total number of attacks by the group), and slightly less severe biases in estimating $\sigma$ and $\beta$. On the other hand, combining data from many groups yields a less severe $O(n^{-1/2})$ bias in each parameter (where $n$ is the total number of delays combined). For the combined development curve, we have $n\approx2500$, and the finite-size bias is negligible (less than 0.1 for each parameter). Finally, we point out that very few groups (e.g., Hamas, Fatah, LTTE, FARC) manage to become highly experienced ($k\gtrsim100$). The sparsity of the data for large $k$ means that the fit in this region is primarily controlled by the delays at much smaller values of $k$, where the vast majority of data lie (Fig. \[fig:development:freq\]a). This explains the slight misfit to the delays for highly experienced groups, relative to the empirical observations, and highlights the fact that the behavior of inexperienced groups is predictive of the behavior of more mature groups.
Tests of Attack Delay Development Curve {#sec:tests:of:delay:curve}
=======================================

Averaging and sampling artifacts {#sec:average:sampling}
--------------------------------

As described in the main text, there are several possible explanations for the observed universal development curve for the frequency of terrorist attacks by a group. These are (i) organizational learning [@s:argote:etal:1995], (ii) organizational growth, (iii) sampling artifacts, and (iv) averaging artifacts [@s:gallistel:etal:2004]. Here we show that the sampling and averaging explanations can be eliminated using the available empirical data.

In the sampling scenario, groups are born with some unique, but fixed, attack rate $\mu_{i}$. As each group progresses through its lifetime, groups with slower attack rates (larger $\mu_{i}$) die out first. This implies that the maximum experience $k_{*}$ achieved by a group should be inversely proportional to its attack rate $\mu_{i}$. The net result of sampling groups in this way is to leave progressively fewer groups with large $\mu_{i}$ values at larger experiences $k$, which gives the illusion of a smooth trend toward faster attack rates.

In the averaging scenario, groups can exhibit one of two attack frequencies: daily attacks (fast attackers) or attacks at some rate $\mu$ (slow attackers). Initially, all groups are slow attackers; however, at a group-specific experience threshold $k_{\circ,i}$, they switch behaviors and become fast attackers. The individual development curve for such a group would be a step function. By combining many such step functions, each with a different step location $k_{\circ,i}$, i.e., by averaging across the different thresholds, the combined development curve shows a smooth trend toward faster attack rates even though no individual group behaves that way.
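The averaging scenario just described is easy to simulate; the rates, threshold distribution, and group count below are arbitrary illustrative choices:

```python
import numpy as np

def averaged_step_curve(n_groups=1000, slow=30.0, fast=1.0, k_max=50, seed=1):
    """Each group attacks with delay `slow` until a random experience
    threshold k0, then switches to delay `fast`.  No individual curve is
    smooth, yet the population-average delay declines smoothly with k."""
    rng = np.random.default_rng(seed)
    k0 = rng.integers(1, k_max + 1, size=n_groups)  # group-specific thresholds
    ks = np.arange(1, k_max + 1)
    return np.array([np.where(k < k0, slow, fast).mean() for k in ks])
```

Plotting the returned curve shows a smooth, monotone decline in the mean delay with experience, even though every simulated group follows a two-level step function.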
(As an aside: this averaging across thresholds model has been used to explain the apparently progressive learning curves for certain behaviors in animals [@s:gallistel:etal:2004] and is thought to underlie some evolved regulatory behaviors among eusocial insects [@s:jones:etal:2004].) It is instructive to again use the factory analogy. In the case of sampling artifacts, different factories have different intrinsic production rates, but slower factories go out of business sooner, leaving a progressively larger fraction of fast factories at later times. Thus, although no factory exhibits a progressive trend toward faster production, as time passes, a larger fraction of the slow factories have died off, and the average rate of the remaining factories progressively increases. In the case of averaging artifacts, all factories begin with a slow production rate, but at a randomly chosen time in the future, each switches from slow to fast production (due to a sudden insight or a sudden windfall profit). Thus, although no individual factory exhibits a progressive trend toward faster production, as time passes, a larger fraction of factories have switched to the faster rate, and the average rate of all factories progressively increases. Both the sampling and averaging explanations can be tested using the available event frequency data. The sampling explanation predicts an inverse relationship between a group’s characteristic attack rate $\mu_{i}$ and its maturity $k_{*,i}$ (total number of attacks), i.e., groups with larger $\mu_{i}$ (slower attack rate) die out more quickly and thus should exhibit a smaller number of total events. Unfortunately, the finite-size bias mentioned in Appendix \[sec:dev:curve:model\] prevents us from testing this prediction directly, by first estimating each group’s $\mu_{i}$ via maximum likelihood and then testing if $\mu_{i}$ varies inversely with $k_{*,i}$. Instead, we must use a more indirect test.
Here, we use the first delay variable for a group ${\ensuremath{\Delta t}}^{(1)}_{i}$ as a proxy for the group’s attack rate $\mu_{i}$. Regressing each group’s initial delay against its total experience yields no significant relationship ($r=-0.057$, t-test, $p=0.46$), and similarly for the logarithm of delay and experience ($r=-0.114$, t-test, $p=0.14$). (Notably, the empirical $k_{*,i}$ underestimates the total number of attacks by a group for all groups still active today; however, this particular kind of bias is unlikely to transform an inverse relationship between ${\ensuremath{\Delta t}}^{(1)}_{i}$ and $k_{*,i}$ into the null relationship we observe here.) A strong test of the sampling explanation, which also tests the averaging hypothesis, is to plot the individual attack frequency development curves for the groups with the most fatal attacks. Fig. \[fig:averaging:test\] shows these curves for the eight most experienced groups in our database. Notably, none of these curves exhibits the switching behavior (from low to high frequency) predicted by the averaging hypothesis, and very few (Islamic State of Iraq, and possibly al Qaeda in Iraq) exhibit extremely short initial delays; instead, each generally exhibits a progressive increase in attack rate, and similar results hold for less experienced groups. Thus, although the combined development curve may not represent precisely the development of any individual organization—and indeed, individual groups do deviate from the population trend (Fig. \[fig:averaging:test\])—its general shape is an accurate description of the tendencies of each group. ![The numerically estimated distribution of $p$-values (4000 reps.) for data drawn iid from the generative model described in Eq. , using the procedure described in Appendix \[sec:p:value\], shown as [**a**]{}, the histogram with standard error estimates and [**b**]{}, the cumulative distribution function.
The dashed line shows the null-model of a uniform distribution, and the estimated distribution’s uniformity indicates that the Monte Carlo estimation of the $p$-value is unbiased. []{data-label="fig:pvalue:test"}](pvalue_unbiased.eps "fig:") ![The numerically estimated distribution of $p$-values (4000 reps.) for data drawn iid from the generative model described in Eq. , using the procedure described in Appendix \[sec:p:value\], shown as [**a**]{}, the histogram with standard error estimates and [**b**]{}, the cumulative distribution function. The dashed line shows the null-model of a uniform distribution, and the estimated distribution’s uniformity indicates that the Monte Carlo estimation of the $p$-value is unbiased. []{data-label="fig:pvalue:test"}](pvalue_unbiased_cdf.eps "fig:") ![image](devcurves_8groups_severity_b.eps) Calculation of statistical significance; $p$-value {#sec:p:value} -------------------------------------------------- We numerically estimate the statistical significance of the model given in Eq.  using a semiparametric Monte Carlo procedure. Under this procedure, we use the empirical experience data $\{k_{i}\}$, but generate new delay variables $\{{\ensuremath{\Delta t}}_{i}\}$ by drawing them iid from the estimated model, conditioned on their corresponding $k_{i}$ value (see Appendix \[sec:dev:curve:model\]). As the test statistic, we use the Kolmogorov-Smirnov distance [@s:press:etal:1992] between the empirical data and the truncated log-normal distribution, averaged over experience variables: $$\begin{aligned} \label{eq:test:statistic} D^{*} & = \frac{1}{k_{\max}} \,\sum_{k=1}^{k_{\max}} \,\max_{i} \left| S_{k}(i) - P_{k}(i) \right| \enspace ,\end{aligned}$$ where $S_{k}(i)$ is the empirical distribution function for experience $k$ and $P_{k}(i)$ is the corresponding theoretical cumulative distribution function. 
We average the KS distances, rather than, say, take their maximum, because the number of delays $n_{k}$ observed at each value of $k$ is roughly inversely proportional to $k$, and when $n_{k}$ is small, the KS distance will be large, regardless of how well (or poorly) the model fits the data. The average KS distance avoids this problem.

To calculate the $p$-value, we use the following procedure:

1.  estimate $\mu$, $\sigma$ and $\beta$ for the empirical $\{{\ensuremath{\Delta t}}_{i}\}$ data using maximum likelihood \[list:1\];

2.  compute the empirical value of the test statistic $D^{*}$ (Eq. ), relative to this model;

3.  for $j=1\dots N$ repetitions (for $N\gg1$), do the following:

    1.  for each event delay $k_{i}$, draw a new delay variable ${\ensuremath{\Delta t}}_{i}$ independently from $p({\ensuremath{\Delta t}}\,|\,k_{i})$ (the model estimated in step \[list:1\]) \[list:2\];

    2.  estimate model parameters $\mu_{j}$, $\sigma_{j}$, and $\beta_{j}$ from the data generated in step \[list:2\] \[list:3\];

    3.  compute the test statistic $D^{*}_{j}$ for this same data (from step \[list:2\]), relative to its own estimated model with parameters $\mu_{j}$, $\sigma_{j}$, and $\beta_{j}$ (from step \[list:3\]);

4.  the $p$-value is defined as the fraction of the test statistics $D^{*}_{j}$ that are at least as large as the empirical test statistic, i.e., $D^{*}_{j} \geq D^{*}$.

To check that this choice of test statistic yields an unbiased $p$-value, we conducted a numerical experiment to measure the distribution of $p$-values for data truly drawn iid from the model. To be unbiased, i.e., to have the correct interpretation as a probability that the data were in fact drawn from the null model, this distribution should be uniform on the unit interval. Fig. \[fig:pvalue:test\] shows the results of this test, confirming that the test statistic $D^{*}$ is unbiased. When applied to the empirical delays, we estimate $p=0.00\pm0.03$, indicating that the null model can be rejected.
Because the normalized delay distributions $p(k^{\hat{\beta}}\,{\ensuremath{\Delta t}})$ collapse onto the underlying log-normal distribution, however, this rejection implicates the assumption of independence as being invalid, rather than the general form of the model; i.e., there are significant correlations in the timing of attacks.

Analysis of attack severity development curve {#sec:severity:by:experience}
=============================================

Fig. \[fig:8groups:severity\] shows the individual attack severity development curves for the same eight groups as in Fig. \[fig:averaging:test\], along with the population-level trend described below. Most notably, these individual development curves largely reflect the combined development curve result that experience and severity are independent, even at the level of individual groups. Fig. \[fig:development:sev\]a repeats the attack severity developmental curve from the main text. The slope of a simple trend line (regressing $\log k$ against $\log x$) is not distinguishable from zero ($r=-0.0238$, t-test, $p=0.17$), and the solid line shows the zero-slope null hypothesis. Further, if we consider the distribution of severities at a particular value of $k$, we see that they do not seem to depend on $k$ (Fig. \[fig:development:sev\]b), and that they resemble power-law distributions [@s:clauset:young:gleditsch:2007]. Taking this similarity at face value, we can test whether the severity distribution $p(x\,|\,k)$ depends strongly on the group’s experience $k$. Fig. \[fig:sev:statistics\] (upper) shows the results of fitting each of the distributions $p(x\,|\,k)$ for the first 40 values of $k$ with a power-law distribution with $x_{\min}=1$ [@s:clauset:etal:2009]. ![[**a**]{}, The mean severity $\langle\log x\rangle$ of fatal attacks by a terrorist group, with $25^{\rm th}$ and $75^{\rm th}$ percentile isoclines, as a function of group experience $k$.
The solid line (with slope zero) shows the expected mean severity, from a simple regression model. [**b**]{}, The distributions of event severities $\Pr(X\geq x)$, showing the data collapse onto an underlying heavy-tailed distribution.[]{data-label="fig:development:sev"}](sev_trend_final_a.eps "fig:") ![[**a**]{}, The mean severity $\langle\log x\rangle$ of fatal attacks by a terrorist group, with $25^{\rm th}$ and $75^{\rm th}$ percentile isoclines, as a function of group experience $k$. The solid line (with slope zero) shows the expected mean severity, from a simple regression model. [**b**]{}, The distributions of event severities $\Pr(X\geq x)$, showing the data collapse onto an underlying heavy-tailed distribution.[]{data-label="fig:development:sev"}](sev_collapse_final_b.eps "fig:") The estimated power-law exponents are very stable: the correlation between $\hat{\alpha}_{k}$ and $k$ is negligible ($r=0.0274$, t-test, $p=0.87$), and the average estimated value $\langle \hat{\alpha}_{k} \rangle = 1.74$ is indistinguishable from the estimated value for all the data together (i.e., ignoring $k$), $\hat{\alpha}_{\rm all} = 1.74\pm0.01$. Thus, the distribution of attack severities appears to be independent of organizational experience, i.e., the probability that an attack will cause $x$ fatalities does not depend on the number of attacks $k$ a group has carried out in the past. ![Power-law analysis of event severities, for $k\leq40$: (upper) the maximum likelihood exponent $\hat{\alpha}$ for a power-law distribution with $x_{\min}=1$, and (lower), the corresponding $p$-value of the fit. The slope of the trend line in the upper panel is not statistically different from zero. []{data-label="fig:sev:statistics"}](sev_analysis.eps) This result has several implications for understanding the origin of the power-law form of the severities distribution for terrorist attacks worldwide [@s:clauset:young:gleditsch:2007; @s:clauset:etal:2009].
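The per-$k$ power-law fits described above can be sketched with the continuous maximum-likelihood estimator $\hat{\alpha} = 1 + n\big/\sum_i \ln(x_i/x_{\min})$; treating the integer fatality counts with the continuous estimator is a simplification relative to the discrete fits of Clauset et al.:

```python
import numpy as np

def power_law_alpha(x, x_min=1.0):
    """Continuous maximum-likelihood estimate of the power-law exponent
    alpha for data x >= x_min:  alpha = 1 + n / sum(log(x / x_min))."""
    x = np.asarray(x, dtype=float)
    x = x[x >= x_min]
    return 1.0 + x.size / np.sum(np.log(x / x_min))
```

Applying this estimator separately to the severities at each experience level $k$ and regressing the resulting $\hat{\alpha}_{k}$ against $k$ reproduces the kind of stability check reported above.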
In so far as terrorist attacks are characteristic of the more general category of asymmetric or guerrilla warfare, a model proposed by Johnson et al. suggests that the severity power law comes from self-organized critical behavior in the internal dynamics of terrorist organizations [@s:johnson:etal:2005; @s:johnson:etal:2006]. Their model assumes that all groups are composed of a number of cells, and that these cells merge (pairwise) or fall apart (into individuals) according to a Markov process. Under these dynamics, the steady-state distribution of cell sizes can be shown to follow a power law under relatively general conditions [@s:ruszczycki:etal:2008; @s:clauset:wiegel:2009], and by assuming that cells attack independently, at roughly equal rates, and induce fatalities in proportion to their size, this model yields a power-law distribution in the severity of attacks. The independence of event severity $x$ and organizational experience $k$, however, suggests that this explanation requires additional assumptions to explain terrorist attack severities. For instance, we might assume that groups are born with a power-law distribution of cell sizes, as the model otherwise predicts an initial transient period of non-power-law behavior while the cells self-organize away from their initial state toward their steady state. This transient period should appear in our data as non-power-law behavior in the severity of events at small $k$, with the distribution at $k=1$ reflecting the distribution of initial cell sizes. However, Figs. \[fig:development:sev\]a,b and \[fig:sev:statistics\] show that even for $k=1$, we see power-law-like behavior. 
This suggests that (i) terrorist groups are not internally self-organized critical, (ii) groups converge to their steady-state distribution of cell sizes before making any attacks, or (iii) other assumptions (e.g., about the behavior of cells) conspire in a complicated way to nevertheless produce power-law distributions for small $k$. Common sense and historical evidence suggest that the second possibility is probably not the case: most terrorist groups start out small and do not wait very long before beginning their attacks [@s:hoffman:1998; @s:sageman:2004]. An important caveat to our stability analysis is that more of the 40 distributions tested above have $p<0.1$ than we would expect from an iid situation (37% vs. 10%; Fig. \[fig:sev:statistics\], lower). This implies that there is more structure (or more variance) in the severity data than the simple iid power-law hypothesis would lead us to expect. That is, there are likely significant correlations in the severity of subsequent attacks by the same group. One place this structure could be hiding is in the extreme right tail. ![image](groupsize_meanfreq_1998-2005_a.eps) ![image](groupsize_medianfreq_1998-2005_b.eps) ![image](groupsize_minfreq_1998-2005_c.eps)

Additional analyses of group size {#sec:group:sizes}
=================================

On the question of how the frequency of attack varies with organizational size, in the main text, we argue that the minimum delay $\min{\ensuremath{\Delta t}}$ is the appropriate dependent variable to consider because the size estimate corresponds to the maximum size over the 1998–2005 period. For completeness, we additionally considered whether the mean and median delays vary with group size, and found similar results (Fig. \[fig:groups:freq\]).
In all cases, the categorical means were significantly different ($n$-way ANOVA, $F=3.78$ and $p=0.0139$ for mean delay; $F=7.44$ and $p=0.0002$ for median delay), and the trends point in the same direction as for the minimum delay. That is, in all cases, larger organizational size is indicative of greater attack frequency. Similarly, we considered how the mean, median and maximum severity of attacks varies with organizational size. Here, the mean and median severities are not related to organizational size ($n$-way ANOVA, $F=1.14$ and $p=0.3352$ for mean severity; $F=0.59$ and $p=0.6219$ for median severity). Only in the case of maximum severity do we find a significant relationship ($n$-way ANOVA, $F=4.53$ and $p=0.0045$); however, such a relationship is expected, given that larger groups attack much more frequently. That is, consider a situation in which the severity $x$ of an attack is an iid random variable drawn from some distribution $P$. It is well known from extreme value theory that the expected maximum observed severity will increase monotonically with the number of draws from $P$ [@s:dehann:ferreria:2006]. Thus, because larger groups attack more often, i.e., they have many more chances to produce a severe attack, a positive relationship between the maximum severity and group size is expected. ![image](groupsize_meansev_1998-2005_a.eps) ![image](groupsize_mediansev_1998-2005_b.eps) ![image](groupsize_maxsev_1998-2005_c.eps)
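The extreme-value argument can be checked with a quick simulation; the power-law severity model and the parameter values here are illustrative assumptions, not fits to the data:

```python
import numpy as np

def expected_max_severity(n_attacks, alpha=2.4, n_trials=2000, seed=7):
    """Monte Carlo estimate of E[max of n_attacks iid power-law severities].
    More attacks mean more draws from P, hence a larger expected maximum,
    even though the per-attack severity distribution is identical."""
    rng = np.random.default_rng(seed)
    u = rng.random((n_trials, n_attacks))
    x = (1.0 - u) ** (-1.0 / (alpha - 1.0))  # Pareto draws with x_min = 1
    return x.max(axis=1).mean()
```

The expected maximum grows monotonically with the number of attacks, which is exactly the mechanism invoked to explain the maximum-severity result without any per-attack size effect.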
---
abstract: |
    This paper deals with the problem of estimating predictive densities of a matrix-variate normal distribution with known covariance matrix. Our main aim is to establish some Bayesian predictive densities related to matricial shrinkage estimators of the normal mean matrix. The Kullback-Leibler loss is used for evaluating decision-theoretic optimality of predictive densities. It is shown that a proper hierarchical prior yields an admissible and minimax predictive density. Attention is also paid to superharmonicity of prior densities in order to find a minimax predictive density with good numerical performance.

    [*AMS 2010 subject classifications:*]{} Primary 62C15, 62C20; secondary 62C10.

    [*Key words and phrases:*]{} Admissibility, Gauss’ divergence theorem, generalized Bayes estimator, inadmissibility, Kullback-Leibler loss, minimaxity, shrinkage estimator, statistical decision theory.
author:
- 'Hisayuki Tsukuma[^1]  and Tatsuya Kubokawa[^2]'
title: 'Proper Bayes and Minimax Predictive Densities for a Matrix-variate Normal Distribution'
---

Introduction {#sec:intro}
============

The problem of predicting a density function for a future observation is an important field in practical applications of statistical methodology. Since predictive density estimation has been shown to parallel shrinkage estimation of location parameters, it has been studied extensively in the literature. In particular, Bayesian prediction for a multivariate (vector-valued) normal distribution has been developed by Komaki (2001), George et al. (2006) and Brown et al. (2008). See George et al. (2012) for a broad survey, including a clear explanation of the parallelism between density prediction and shrinkage estimation. This paper addresses Bayesian predictive density estimation for a matrix-variate normal distribution.
Denote by $\Nc_{a\times b}(M,\Psi\otimes\Si)$ the $a\times b$ matrix-variate normal distribution with mean matrix $M$ and positive definite covariance matrix $\Psi\otimes\Si$, where $M$, $\Psi$ and $\Si$ are, respectively, $a\times b$, $a\times a$ and $b\times b$ matrices of parameters and $\Psi\otimes\Si$ represents the Kronecker product of the positive definite matrices $\Psi$ and $\Si$. Let $A^\top$ be the transpose of a matrix $A$ and let $\tr A$ and $|A|$ be, respectively, the trace and the determinant of a square matrix $A$. Also, let $A^{-1}$ be the inverse of a nonsingular matrix $A$. If an $a\times b$ random matrix $Z$ is distributed as $\Nc_{a\times b}(M,\Psi\otimes\Si)$, then $Z$ has density of the form $$(2\pi)^{-ab/2}|\Psi|^{-b/2}|\Si|^{-a/2}\exp[-2^{-1}\tr\{\Psi^{-1}(Z-M)\Si^{-1}(Z-M)^\top\}].$$ For more details of the matrix-variate normal distribution, see Muirhead (1982) and Gupta and Nagar (1999). It is assumed in this paper that the covariance matrix of a matrix-variate normal distribution is known. Then the prediction problem is more precisely formulated as follows: Let $X|\Th\sim\Nc_{r\times q}(\Th, v_xI_r\otimes I_q)$ and $Y|\Th\sim\Nc_{r\times q}(\Th, v_yI_r\otimes I_q)$, where $\Th$ is a common $r\times q$ matrix of unknown parameters, $v_x$ and $v_y$ are known positive values and $I_r$ stands for the identity matrix of order $r$. Assume that $q\geq r$ and $X$ and $Y$ are independent. Let $p(X\mid \Th)$ and $p(Y\mid \Th)$ be the densities of $X$ and $Y$, respectively. Consider here the problem of estimating $p(Y\mid \Th)$ based only on the observed $X$. Denote by $\ph=\ph(Y\mid X)$ an estimated density for $p(Y\mid \Th)$ and hereinafter $\ph$ is referred to as a predictive density of $Y$.
Define the Kullback-Leibler (KL) loss as $$\begin{aligned} \label{eqn:loss} L_{KL}(\Th,\ph) &=\Er^{Y|\Th}\bigg[\log {p(Y\mid\Th)\over\ph(Y\mid X)}\bigg] \non\\ &=\int_{\Re^{r\times q}} p(Y\mid \Th)\log {p(Y\mid\Th)\over\ph(Y\mid X)}\dd Y.\end{aligned}$$ The performance of a predictive density $\ph$ is evaluated by the risk function with respect to the KL loss (\[eqn:loss\]), $$\begin{aligned} R_{KL}(\Th,\ph)&=\Er^{X|\Th}[L_{KL}(\Th,\ph)]\\ &=\int_{\Re^{r\times q}}\int_{\Re^{r\times q}}p(X\mid\Th)p(Y\mid\Th)\log {p(Y\mid\Th)\over\ph(Y\mid X)}\dd Y\dd X.\end{aligned}$$ Let $\pi(\Th)$ be a proper/improper density of prior distribution for $\Th$, where we assume that the marginal density of $X$, $$m_\pi(X;v_x)=\int_{\Re^{r\times q}} p(X\mid \Th)\pi(\Th)\dd\Th,$$ is finite for all $X\in\Re^{r\times q}$. Denote the Frobenius norm of a matrix $A$ by $\Vert A\Vert=\sqrt{\tr AA^\top}$. Let $$p_\pi(X,Y)=\int_{\Re^{r\times q}} p(X\mid \Th) p(Y\mid \Th)\pi(\Th)\dd\Th.$$ Note that $p_\pi(X,Y)$ is finite if $m_\pi(X;v_x)$ is finite. Here $p_\pi(X,Y)$ can be rewritten as $$\begin{aligned} p_\pi(X,Y)&={1\over (2\pi v_s)^{qr/2}}e^{-\Vert Y-X\Vert^2/2v_s} \times \int_{\Re^{r\times q}} {1\over (2\pi v_w)^{qr/2}}e^{-\Vert W-\Th\Vert^2/2v_w}\pi(\Th)\dd\Th \\ &\equiv \ph_U(Y\mid X) \times m_\pi(W;v_w),\end{aligned}$$ where $v_s=v_x+v_y$ and $$W=v_w(X/v_x+Y/v_y)\mid \Th \sim\Nc_{r\times q}(\Th, v_wI_r\otimes I_q)$$ with $v_w=(1/v_x+1/v_y)^{-1}$. From Aitchison (1975), a Bayesian predictive density relative to the KL loss (\[eqn:loss\]) is given by $$\label{eqn:BPD} \ph_\pi(Y\mid X)={p_\pi(X,Y) \over m_\pi(X; v_x)} ={m_\pi(W;v_w)\over m_\pi(X;v_x)}\,\ph_U(Y\mid X).$$ See George et al. (2006, Lemma 2) for the multivariate (vector-valued) normal case. It is noted that $\ph_U(Y\mid X)$ is the Bayesian predictive density with respect to the uniform prior $\pi_U(\Th)=1$. 
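For a concrete check of the ratio form (\[eqn:BPD\]), take the entrywise conjugate prior $\Th\sim\Nc_{r\times q}(0_{r\times q}, v_0 I_r\otimes I_q)$ (a hypothetical choice used only for illustration, with made-up values of $v_x$, $v_y$ and $v_0$); then $m_\pi(Z;v)$ is a normal density with entrywise variance $v+v_0$, and the Bayesian predictive density can also be computed directly as a posterior-predictive normal density. The sketch below verifies numerically that the two computations agree:

```python
import numpy as np

def gauss_density(Z, mean, var):
    # product of independent N(mean_ij, var) densities over all entries of Z
    Z, mean = np.asarray(Z, float), np.asarray(mean, float)
    return (2*np.pi*var)**(-Z.size/2) * np.exp(-np.sum((Z - mean)**2)/(2*var))

vx, vy, v0 = 1.0, 2.0, 3.0                   # illustrative variances
vw = 1.0/(1.0/vx + 1.0/vy)                   # v_w = (1/v_x + 1/v_y)^{-1}
vs = vx + vy
rng = np.random.default_rng(1)
X = rng.standard_normal((2, 4))              # r = 2, q = 4
Y = rng.standard_normal((2, 4))

# marginal m_pi(Z; v) under Theta ~ N(0, v0 I x I): each entry is N(0, v + v0)
m = lambda Z, v: gauss_density(Z, 0.0, v + v0)

W = vw*(X/vx + Y/vy)
phi_U = gauss_density(Y, X, vs)              # best invariant predictive density
phi_pi = m(W, vw)/m(X, vx) * phi_U           # Aitchison's ratio form (eqn:BPD)

# direct computation: Y | X ~ N(v0 X/(v0+vx), (vy + v0 vx/(v0+vx)) I x I)
phi_direct = gauss_density(Y, v0*X/(v0 + vx), vy + v0*vx/(v0 + vx))
assert np.isclose(phi_pi, phi_direct)
```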
Under the predictive density estimation problem relative to the KL loss (\[eqn:loss\]), $\ph_U(Y\mid X)$ is the best invariant predictive density with respect to a location group. Using the same arguments as in George et al. (2006, Corollary 1) gives that, for any $r$ and $q$, $\ph_U(Y\mid X)$ is minimax relative to the KL loss (\[eqn:loss\]) and has a constant risk. Recently, Matsuda and Komaki (2015) constructed a Bayesian predictive density improving on $\ph_U(Y\mid X)$ by using a prior density of the form $$\label{eqn:pr_em} \pi_{EM}(\Th)=|\Th\Th^\top|^{-\al^{EM}/2},\quad \al^{EM}=q-r-1.$$ The prior (\[eqn:pr\_em\]) is interpreted as an extension of Stein’s (1973, 1981) harmonic prior $$\label{eqn:pr_js} \pi_{JS}(\Th)=\Vert\Th\Vert^{-\be^{JS}}=\{\tr(\Th\Th^\top)\}^{-\be^{JS}/2},\quad \be^{JS}=qr-2.$$ In the context of Bayesian estimation of a mean matrix, (\[eqn:pr\_em\]) yields a matricial shrinkage estimator, while (\[eqn:pr\_js\]) yields a scalar shrinkage one. Note that, when $X\sim\Nc_{r\times q}(\Th, v_x I_r\otimes I_q)$, typical examples of the matricial and the scalar shrinkage estimators for $\Th$ are, respectively, the Efron-Morris (1972) estimator $$\label{eqn:EM} \Thh_{EM}=\{I_r-\al^{EM}v_x(XX^\top)^{-1}\}X \quad \textup{for $\al^{EM}\geq 1$ (i.e., $q\geq r+2$)}$$ and the James-Stein (1961)-type estimator $$\label{eqn:JS} \Thh_{JS}=\Big\{1-\frac{\be^{JS}v_x}{\tr(XX^\top)}\Big\}X \quad \textup{for $\be^{JS}\geq 1$ (i.e., $qr\geq 3$)}.$$ The two estimators $\Thh_{EM}$ and $\Thh_{JS}$ are minimax relative to a quadratic loss. Also, $\Thh_{EM}$ and $\Thh_{JS}$ are characterized as empirical Bayes estimators, but they are not generalized Bayes estimators which minimize the posterior expected quadratic loss. The purposes of this paper are to construct some Bayesian predictive densities with priors different from (\[eqn:pr\_em\]) and (\[eqn:pr\_js\]) and to discuss their decision-theoretic properties such as admissibility and minimaxity. 
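The domination of the shrinkage estimators (\[eqn:EM\]) and (\[eqn:JS\]) over the unbiased estimator $X$ is easy to see by simulation; the sketch below (illustrative $r=3$, $q=8$, $v_x=1$, and $\Th=0_{r\times q}$, where the shrinkage gains are largest) compares average quadratic losses:

```python
import numpy as np

r, q, vx = 3, 8, 1.0
alpha_EM = q - r - 1                         # Efron-Morris constant
beta_JS = q*r - 2                            # James-Stein constant
Theta = np.zeros((r, q))                     # shrinkage gains are largest at Theta = 0

rng = np.random.default_rng(2)
loss_X, loss_EM, loss_JS = 0.0, 0.0, 0.0
reps = 2000
for _ in range(reps):
    X = Theta + np.sqrt(vx)*rng.standard_normal((r, q))
    S_inv = np.linalg.inv(X @ X.T)
    est_EM = X - alpha_EM*vx*(S_inv @ X)     # {I - alpha vx (XX^T)^{-1}} X
    est_JS = (1 - beta_JS*vx/np.trace(X @ X.T))*X
    loss_X += np.sum((X - Theta)**2)
    loss_EM += np.sum((est_EM - Theta)**2)
    loss_JS += np.sum((est_JS - Theta)**2)

# both minimax estimators beat X here; the risk of X itself is q*r*vx = 24
assert loss_EM < loss_X and loss_JS < loss_X
```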
Section \[sec:preliminaries\] first lists some results on the Kullback-Leibler risk and the differentiation operators. Section \[sec:properminimax\] applies an extended Faith’s (1978) prior to our predictive density estimation problem and provides sufficient conditions for minimaxity of the resulting Bayesian predictive densities. Also, an admissible and minimax predictive density is obtained by considering a proper hierarchical prior. In Section \[sec:superharmonic\], we utilize Stein’s (1973, 1981) ideas for deriving some minimax predictive densities with superharmonic priors. Section \[sec:MCstudies\] investigates numerical performance in risk of some Bayesian minimax predictive densities. Preliminaries {#sec:preliminaries} ============= The Kullback-Leibler risk ------------------------- First, we state some useful lemmas in terms of the Kullback-Leibler (KL) risk. The lemmas are based on Stein (1973, 1981), George et al. (2006) and Brown et al. (2008) and play important roles in studying decision-theoretic properties of a Bayesian predictive density. From George et al. (2006, Lemma 3), we observe that $m_\pi(W;v_w)<\infi$ for all $W\in\Re^{r\times q}$ if $m_\pi(X;v_x)<\infi$ for all $X\in\Re^{r\times q}$. Note also that $\int_{\Re^{r\times q}}\ph_\pi(Y\mid X)\dd Y=1$ and $$\int_{\Re^{r\times q}}Y\ph_\pi(Y\mid X)\dd Y =\frac{\int_{\Re^{r\times q}} \Th p(X\mid \Th)\pi(\Th)\dd\Th}{\int_{\Re^{r\times q}} p(X\mid \Th)\pi(\Th)\dd\Th},$$ namely, the mean of a predictive distribution for $Y$ is the same as the posterior mean of $\Th$ given $X$ or, equivalently, the generalized Bayes estimator relative to a quadratic loss for a mean of $X$. Hereafter denote by $p(W|\Th)$ a density of $W|\Th\sim\Nc_{r\times q}(\Th, vI_r\otimes I_q)$ with a positive value $v$. 
In order to prove minimaxity of a Bayesian predictive density, we require the following lemma, which implies that our Bayesian prediction problem can be reduced to the Bayesian estimation problem for the normal mean matrix relative to a quadratic loss. \[lem:identity\] The KL risk difference between $\ph_U(Y\mid X)$ and $\ph_\pi(Y\mid X)$ can be written as $$R_{KL}(\Th,\ph_U) - R_{KL}(\Th,\ph_\pi) =\frac{1}{2}\int_{v_w}^{v_x}\frac{1}{v^2}\{\Er^{W|\Th}[\Vert W-\Th\Vert^2]-\Er^{W|\Th}[\Vert\Thh_\pi-\Th\Vert^2]\}\dd v,$$ where $\Er^{W|\Th}$ stands for expectation with respect to $W$ and $$\Thh_\pi=\Thh_\pi(W)=\frac{\int_{\Re^{r\times q}} \Th p(W\mid \Th)\pi(\Th)\dd\Th}{\int_{\Re^{r\times q}} p(W\mid \Th)\pi(\Th)\dd\Th}.$$ [**Proof.**]{}   This is verified by the same arguments as in Brown et al. (2008, Theorem 1 and its proof). $\Box$ Let $\nabla_W=(\partial/\partial w_{ij})$ be an $r\times q$ matrix of differentiation operators with respect to an $r\times q$ matrix $W=(w_{ij})$ of full row rank. For a scalar function $g(W)$ of $W$, the operation $\nabla_W g(W)$ is defined as an $r\times q$ matrix whose $(i,j)$-th element is $\partial g(W)/\partial w_{ij}$. Also, for a $q\times a$ matrix-valued function $G(W)=(g_{ij})$ of $W$, the operation $\nabla_WG(W)$ is defined as an $r\times a$ matrix whose $(i,j)$-th element is $\sum_{k=1}^q \partial g_{kj}/\partial w_{ik}$. Stein (1973) showed that for a $q\times r$ matrix $G(W)$ $$\int_{\Re^{r\times q}} \tr\{(W-\Th)G(W)\}p(W|\Th)\dd W = v \int_{\Re^{r\times q}}\tr\{\nabla_WG(W)\}p(W|\Th)\dd W,$$ namely, $$\Er^{W|\Th}[\tr\{(W-\Th)G(W)\}] = v \Er^{W|\Th}[\tr\{\nabla_WG(W)\}].$$ This identity is referred to as the Stein identity in the literature. Using the Stein identity, we can easily obtain the following lemma. \[lem:identity2\] Use the same notation as in Lemma \[lem:identity\]. 
Then we obtain $$\begin{aligned} &R_{KL}(\Th,\ph_U) - R_{KL}(\Th,\ph_\pi)\\ &=-\int_{v_w}^{v_x}\Er^{W|\Th}\bigg[2\frac{\tr[\nabla_W\nabla_W^\top m_\pi(W;v)]}{m_\pi(W;v)}-\frac{\Vert\nabla_W m_\pi(W;v)\Vert^2}{\{m_\pi(W;v)\}^2}\bigg]\dd v.\end{aligned}$$ [**Proof.**]{}   This lemma can be shown by the same arguments as in Stein (1973, 1981). We provide only an outline of the proof. Note from Brown (1971) that $\Thh_\pi$, given in Lemma \[lem:identity\], can be represented as $$\Thh_\pi=W+v\frac{\nabla_W m_\pi(W;v)}{m_\pi(W;v)}=W+v\nabla_W\log m_\pi(W;v).$$ By some manipulation after using the Stein identity, we have $$\begin{aligned} &\frac{1}{v}\{\Er^{W|\Th}[\Vert W-\Th\Vert^2]-\Er^{W|\Th}[\Vert\Thh_\pi-\Th\Vert^2]\} \\ &=-v\Er^{W|\Th}\bigg[2\frac{\tr[\nabla_W\nabla_W^\top m_\pi(W;v)]}{m_\pi(W;v)}-\frac{\Vert\nabla_W m_\pi(W;v)\Vert^2}{\{m_\pi(W;v)\}^2}\bigg].\end{aligned}$$ Combining this identity and Lemma \[lem:identity\] completes the proof. $\Box$ Using Lemma \[lem:identity2\] immediately establishes the following proposition. \[prp:cond\_mini\] $\ph_\pi(Y|X)$ is minimax relative to the KL loss (\[eqn:loss\]) if $$2\tr[\nabla_W\nabla_W^\top m_\pi(W;v)]-\frac{\Vert\nabla_W m_\pi(W;v)\Vert^2}{m_\pi(W;v)}\leq 0$$ for $v_w\leq v\leq v_x$. Differentiation of matrix-valued functions ------------------------------------------ Next, some useful formulae are listed for differentiation with respect to a symmetric matrix. The formulae are applied to the evaluation of the Kullback-Leibler risks of our Bayesian predictive densities. Let $S=(s_{ij})$ be an $r\times r$ symmetric matrix of full rank. Let $\Dc_S$ be an $r\times r$ symmetric matrix of differentiation operators with respect to $S$, where the $(i,j)$-th element of $\Dc_S$ is $$\{\Dc_S\}_{ij}=\frac{1+\de_{ij}}{2}\frac{\partial}{\partial s_{ij}}$$ with the Kronecker delta $\de_{ij}$. Let $g(S)$ be a scalar-valued and differentiable function of $S=(s_{ij})$. 
Also let $G(S)=(g_{ij}(S))$ be an $r\times r$ matrix, where all the elements $g_{ij}(S)$ are differentiable functions of $S$. The operations $\Dc_S g(S)$ and $\Dc_S G(S)$ are, respectively, $r\times r$ matrices, where the $(i,j)$-th elements of $\Dc_S g(S)$ and $\Dc_S G(S)$ are defined as, respectively, $$\{\Dc_S g(S)\}_{ij}=\frac{1+\de_{ij}}{2}\frac{\partial g(S)}{\partial s_{ij}},\quad \{\Dc_S G(S)\}_{ij}=\sum_{k=1}^r\frac{1+\de_{ik}}{2}\frac{\partial g_{kj}(S)}{\partial s_{ik}}.$$ First, the product rule in terms of $\Dc_S$ is expressed in the following lemma due to Haff (1982). \[lem:diff1\] Let $G_1$ and $G_2$ be $r\times r$ matrices such that all the elements of $G_1$ and $G_2$ are differentiable functions of $S$. Then we have $$\Dc_S (G_1G_2)=(\Dc_S G_1)G_2+(G_1^\top \Dc_S)^\top G_2.$$ In particular, for differentiable scalar-valued functions $g_1(S)$ and $g_2(S)$, $$\Dc_S \{g_1(S)g_2(S)\}=g_2(S)\Dc_S g_1(S)+g_1(S) \Dc_S g_2(S).$$ Denote by $S=HLH^\top$ the eigenvalue decomposition of $S$, where $H=(h_{ij})$ is an orthogonal matrix of order $r$ and $L=\diag(\ell_1,\ldots,\ell_r)$ is a diagonal matrix of order $r$ with $\ell_1\geq \cdots\geq \ell_r$. The following lemma is provided by Stein (1973). \[lem:diff2\] Define $\Psi(L)=\diag(\psi_1,\ldots,\psi_r)$, whose diagonal elements are differentiable functions of $L$. Then we obtain 1. $\{\Dc_S\}_{ij} \ell_k=h_{ik}h_{jk}$  $(k=1,\ldots,r)$, 2. $\Dc_S H\Psi(L)H^\top=H\Psi^*(L)H^\top$, where $\Psi^*(L)=\diag(\psi_1^*,\ldots,\psi_r^*)$ with $$\psi_i^*=\frac{\partial \psi_i}{\partial\ell_i}+\frac{1}{2}\sum_{j\ne i}^r\frac{\psi_i-\psi_j}{\ell_i-\ell_j}.$$ \[lem:diff3\] Let $a$ and $b$ be constants and let $C$ be a symmetric constant matrix. Then it holds that 1. $\Dc_S \tr(S C)=C$, 2. $\displaystyle \Dc_S S=\frac{r+1}{2}I_r$, 3. $\displaystyle \Dc_S S^2=\frac{r+2}{2}S+\frac{1}{2}(\tr S)I_r$, 4. $\Dc_S |aI_r+bS|=b|aI_r+bS|(aI_r+bS)^{-1}$ if $aI_r+bS$ is nonsingular. 
[**Proof.**]{}   For proofs of Parts (i), (ii) and (iii), see Haff (1982) and Magnus and Neudecker (1999). Using (i) of Lemma \[lem:diff2\] gives that $$\begin{aligned} \{\Dc_S |aI_r+bS|\}_{ij} &= \{\Dc_S\}_{ij} \prod_{k=1}^r(a+b\ell_k) \\ &=b\sum_{c=1}^r h_{ic}h_{jc}\prod_{k\ne c}^r(a+b\ell_k) \\ &=b|aI_r+bS|\sum_{c=1}^r h_{ic}h_{jc}(a+b\ell_c)^{-1} \\ &=b|aI_r+bS|\{(aI_r+bS)^{-1}\}_{ij},\end{aligned}$$ which implies Part (iv). $\Box$ Let $\nabla_W$ be the same $r\times q$ differentiation operator matrix as in the preceding subsection. If $S=WW^\top$, then we have the following lemma, whose proof can be found in Konno (1992). \[lem:diff4\] Let $G$ be an $r\times r$ symmetric matrix, where all the elements of $G$ are differentiable functions of $S=WW^\top$. Then it holds that 1. $\nabla_W^\top G=2W^\top \Dc_S G$, 2. $\tr(\nabla_W W^\top G)=(q-r-1)\tr G+2\tr(\Dc_S SG)$. Admissible and minimax predictive densities {#sec:properminimax} =========================================== In this section, we consider a class of hierarchical priors inspired by Faith (1978) and derive a sufficient condition for minimaxity of the resulting Bayesian predictive density. Also, a proper Bayes and minimax predictive density is provided. A class of hierarchical prior distributions ------------------------------------------- Let $\Sc_r$ be the set of $r\times r$ symmetric matrices. For $A$ and $B\in\Sc_r$, write $A\prec(\preceq) B$ or $B\succ(\succeq) A$ if $B-A$ is a positive (semi-)definite matrix. The set $\Rc_r$ is defined as $$\Rc_r=\{ \La\in\Sc_r \mid 0_{r\times r}\prec \La \prec I_r\},$$ where $0_{r\times r}$ is the $r\times r$ zero matrix. Denote the boundary of $\Rc_r$ by $\partial\Rc_r$. It is noted that if $\Om\in\partial\Rc_r$ then $0_{r\times r}\preceq \Om \preceq I_r$ and, in addition, $|\Om|=0$ or $|I_r-\Om|=0$. 
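As an aside, the differentiation formulae above are easy to spot-check numerically before they are put to work; for instance, Part (iv) of Lemma \[lem:diff3\] can be verified by tied central differences respecting the factor $(1+\de_{ij})/2$ in $\Dc_S$. A sketch with illustrative values ($r=3$, $a=1.5$, $b=0.7$):

```python
import numpy as np

rng = np.random.default_rng(3)
r, a, b = 3, 1.5, 0.7
Q = rng.standard_normal((r, r))
S = Q @ Q.T + r*np.eye(r)                       # symmetric positive definite
f = lambda T: np.linalg.det(a*np.eye(r) + b*T)

# numerical D_S f via central differences with the tied-entry convention:
# perturbing s_ij also moves s_ji, and {D_S}_ij carries the (1+delta_ij)/2 factor
h, num = 1e-6, np.zeros((r, r))
for i in range(r):
    for j in range(r):
        D = np.zeros((r, r)); D[i, j] = 1.0; D[j, i] = 1.0   # = E_ii on the diagonal
        scale = 1.0 if i == j else 0.5
        num[i, j] = scale*(f(S + h*D) - f(S - h*D))/(2*h)

C = a*np.eye(r) + b*S
analytic = b*np.linalg.det(C)*np.linalg.inv(C)  # Lemma diff3 (iv)
assert np.allclose(num, analytic, rtol=1e-4, atol=1e-5)
```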
Consider a proper/improper hierarchical prior $$\pi_H(\Th)=\int_{\Rc_r}\pi_1(\Th|\Om)\pi_2(\Om)\dd\Om.$$ The priors $\pi_1(\Th|\Om)$ and $\pi_2(\Om)$ are specified as follows: Assume that a prior distribution of $\Th$ given $\Om$ is $\Nc_{r\times q}(0_{r\times q},v_0\Om^{-1}(I_r-\Om)\otimes I_q)$, where $v_0$ is a known constant satisfying $$v_0\geq v_x.$$ Then the first-stage prior density $\pi_1(\Th|\Om)$ can be written as $$\label{eqn:pr_Th} \pi_1(\Th|\Om)=(2\pi v_0)^{-qr/2}|\Om(I_r-\Om)^{-1}|^{q/2}\exp\Big[-\frac{1}{2v_0}\tr\{\Om(I_r-\Om)^{-1}\Th\Th^\top\}\Big].$$ Assume also that $\pi_2(\Om)$, a second-stage prior density for $\Om$, is a differentiable function on $\Rc_r$. Denote by $\ph_H=\ph_H(Y|X)$ the resulting Bayesian predictive density with respect to the hierarchical prior $\pi_H(\Th)$. Assume that a marginal density of $W$ with respect to $\pi_H(\Th)$ is finite when $v=v_x$. The marginal density is given by $$\begin{aligned} \label{eqn:m(W)} m(W)&=\int_{\Re^{r\times q}} p(W|\Th)\pi_H(\Th)\dd\Th \non\\ &=\int_{\Rc_r}\int_{\Re^{r\times q}} \pi(\Th|\Om,W) \dd\Th \pi_2(\Om)\dd\Om,\end{aligned}$$ where $\pi(\Th|\Om,W)=p(W|\Th)\pi_1(\Th|\Om)$ is a posterior density of $\Th$ given $\Om$ and $W$. To make it easy to derive sufficient conditions that $\ph_H$ is minimax, we show the following lemma. \[lem:alter\_m(W)\] The marginal density $m(W)$ can alternatively be represented as $$m(W)=\int_{\Rc_r}f_\pi(\La;W)\dd\La,$$ where $$f_\pi(\La;W)=(2\pi v)^{-qr/2}|\La|^{q/2}\pi_2^J(\La)\exp\Big[-\frac{1}{2v}\tr(\La WW^\top)\Big]$$ with $$\pi_2^J(\La) =v_1^{r(r+1)/2}|v_1I_r+(1-v_1)\La|^{-r-1}\pi_2[\La\{v_1I_r+(1-v_1)\La\}^{-1}].$$ [**Proof.**]{}   Let $$\La(I_r-\La)^{-1}=v_1\Om(I_r-\Om)^{-1},\quad v_1=\frac{v}{v_0},$$ where $0_{r\times r}\prec\La\prec I_r$. 
Since $v^{-1}(I_r-\La)^{-1}=v^{-1}I_r+v_0^{-1}\Om(I_r-\Om)^{-1}$, we observe that $$\begin{aligned} &\frac{1}{v}\Vert W-\Th\Vert^2+\frac{1}{v_0}\tr\{\Om(I_r-\Om)^{-1}\Th\Th^\top\}\\ &=\frac{1}{v}\tr\Big[(I_r-\La)^{-1}\{\Th-(I_r-\La)W\}\{\Th-(I_r-\La)W\}^\top\Big] +\frac{1}{v}\tr(\La WW^\top),\end{aligned}$$ so that $$\pi(\Th|\Om,W) \propto \exp\Big[-\frac{1}{2v}\tr\Big[(I_r-\La)^{-1}\{\Th-(I_r-\La)W\}\{\Th-(I_r-\La)W\}^\top\Big]\Big],$$ namely, $\Th|\Om,W\sim\Nc_{r\times q}((I_r-\La)W,v(I_r-\La)\otimes I_q)$. Integrating out (\[eqn:m(W)\]) with respect to $\Th$ gives that $$\label{eqn:m(W)-1} m(W)=(2\pi v)^{-qr/2}\int_{\Rc_r} |\La|^{q/2}\pi_2(\Om)\exp\Big[-\frac{1}{2v}\tr(\La WW^\top)\Big]\dd\Om.$$ Note that $\Om=\La\{v_1I_r+(1-v_1)\La\}^{-1}$ and the Jacobian of the transformation from $\Om$ to $\La$ is given by $$J[\Om\to \La]=v_1^{r(r+1)/2}|v_1I_r+(1-v_1)\La|^{-r-1}.$$ Hence making the transformation from $\Om$ to $\La$ for (\[eqn:m(W)-1\]) completes the proof. $\Box$ Let $\Dc_\La$ be an $r\times r$ symmetric matrix of differentiation operators with respect to $\La=(\la_{ij})$, where the $(i,j)$-th element of $\Dc_\La$ is $$\{\Dc_\La\}_{ij}=\frac{1+\de_{ij}}{2}\frac{\partial}{\partial\la_{ij}}.$$ Proposition \[prp:cond\_mini\] and Lemma \[lem:alter\_m(W)\] are utilized to get sufficient conditions for minimaxity of $\ph_H$. \[thm:faith\] Let $f_\pi(\La;W)$ and $\pi_2^J(\La)$ be defined as in Lemma \[lem:alter\_m(W)\]. 
Let $$M=M(W)=\int_{\Rc_r}\La f_\pi(\La;W)\dd\La.$$ Assume that $$f_\pi(\La;W)=0 \quad \textup{for all $\La\in\partial\Rc_r$}.$$ Then $\ph_H$ is minimax relative to the KL loss $(\ref{eqn:loss})$ if $\De(W;\pi_2^J)\leq 0$, where $$\De(W;\pi_2^J)=\De_1(W;\pi_2^J)-\De_2(W;\pi_2^J)-(q-3r-3)\tr M$$ with $$\begin{aligned} \De_1(W;\pi_2^J)&=4\int_{\Rc_r}\frac{1}{\pi_2^J(\La)}\tr\{\La^2\Dc_\La \pi_2^J(\La)\}f_\pi(\La;W)\dd\La,\\ \De_2(W;\pi_2^J)&=\frac{2}{m(W)}\int_{\Rc_r}\frac{1}{\pi_2^J(\La)}\tr\{\La M\Dc_\La \pi_2^J(\La)\}f_\pi(\La;W)\dd\La,\end{aligned}$$ provided all the integrals are finite. [**Proof.**]{}   From Proposition \[prp:cond\_mini\], $\ph_H$ is minimax when $$\De=2\tr[\nabla_W\nabla_W^\top m(W)]-\frac{\Vert\nabla_W m(W)\Vert^2}{m(W)}\leq 0.$$ It is seen from Lemma \[lem:alter\_m(W)\] that $$\nabla_W f_\pi(\La;W)=-\frac{1}{v}\La W f_\pi(\La;W)$$ and $$\nabla_W\nabla_W^\top f_\pi(\La;W)=\Big(\frac{1}{v^2}\La WW^\top\La-\frac{q}{v}\La\Big)f_\pi(\La;W).$$ Hence we obtain $$\label{eqn:d2_mw1} \De=\frac{1}{v}[2E_1(W)-\{m(W)\}^{-1}E_2(W)],$$ where $$\begin{aligned} E_1(W)&=\int_{\Rc_r}\Big[\frac{1}{v}\tr(WW^\top\La^2)-q\tr\La\Big]f_\pi(\La;W)\dd\La,\\ E_2(W)&=\frac{1}{v}\tr\bigg[WW^\top\bigg\{\int_{\Rc_r}\La f_\pi(\La;W)\dd\La\bigg\}^2\bigg] \\ &=\frac{1}{v}\int_{\Rc_r}\tr(MWW^\top\La)f_\pi(\La;W)\dd\La.\end{aligned}$$ Using Lemmas \[lem:diff1\] and \[lem:diff3\] yields that $$\Dc_\La f_\pi(\La;W)=\frac{1}{2}\Big[q\La^{-1}+\frac{2}{\pi_2^J(\La)}\Dc_\La \pi_2^J(\La)-\frac{1}{v}WW^\top\Big]f_\pi(\La;W),$$ so that $$\begin{aligned} \tr[\Dc_\La\{f_\pi(\La;W)\La^2\}]&=\tr[\La^2\Dc_\La f_\pi(\La;W)]+f_\pi(\La;W)\tr[\Dc_\La\La^2]\\ &=\frac{1}{2}\Big[\frac{2}{\pi_2^J(\La)}\tr[\La^2\Dc_\La \pi_2^J(\La)]-\Big\{\frac{1}{v}\tr(WW^\top\La^2)-q\tr\La\Big\} \\ &\qquad +2(r+1)\tr\La \Big]f_\pi(\La;W).\end{aligned}$$ Thus $E_1(W)$ can be expressed as $$\begin{aligned} \label{eqn:E1} E_1(W) &=2(r+1)\tr M+\frac{1}{2}\De_1(W;\pi_2^J) 
-2\int_{\Rc_r}\tr[\Dc_\La\{f_\pi(\La;W)\La^2\}]\dd\La.\end{aligned}$$ Similarly, we observe that from Lemmas \[lem:diff1\] and \[lem:diff3\] $$\begin{aligned} \tr[\Dc_\La\{f_\pi(\La;W)\La\}M] &=\tr[\La M\Dc_\La f_\pi(\La;W)]+f_\pi(\La;W)\tr[M\Dc_\La\La]\\ &=\frac{1}{2}\Big[(q+r+1)\tr M+\frac{2}{\pi_2^J(\La)}\tr[\La M\Dc_\La \pi_2^J(\La)] \\ &\qquad -\frac{1}{v}\tr(MWW^\top\La)\Big]f_\pi(\La;W),\end{aligned}$$ which leads to $$\begin{aligned} \label{eqn:E2} E_2(W) &=(q+r+1)m(W)\tr M +m(W)\De_2(W;\pi_2^J) \non\\ &\qquad -2\int_{\Rc_r}\tr[\Dc_\La\{f_\pi(\La;W)\La\}M]\dd\La .\end{aligned}$$ Combining (\[eqn:d2\_mw1\]), (\[eqn:E1\]) and (\[eqn:E2\]) gives that $$\begin{aligned} \label{eqn:d2_mw2} \De &= \frac{\De(W;\pi_2^J)}{v} -\frac{4}{v}\int_{\Rc_r}\tr[\Dc_\La\{f_\pi(\La;W)\La^2\}]\dd\La \non\\ &\qquad +\frac{2}{vm(W)}\int_{\Rc_r}\tr[\Dc_\La\{f_\pi(\La;W)\La\}M]\dd\La.\end{aligned}$$ If we can show that two integrals in (\[eqn:d2\_mw2\]) are, respectively, equal to zero, then the proof is complete. Let $G=(g_{ij})$ be an $r\times r$ symmetric matrix such that all the elements of $G$ are differentiable functions of $\La\in\Rc_r$. Denote $$\vec(G)=(g_{11},g_{12},\ldots,g_{1r},g_{22},g_{23},\ldots,g_{r-1,r-1},g_{r-1,r},g_{rr})^\top,$$ which is a $\{2^{-1}r(r+1)\}$-dimensional column vector. Denote an outward unit normal vector at a point $\La$ on $\partial\Rc_r$ by $$\nu=\nu(\La)=(\nu_{11},\nu_{12},\ldots,\nu_{1r},\nu_{22},\nu_{23},\ldots,\nu_{r-1,r-1},\nu_{r-1,r},\nu_{rr})^\top.$$ If $\tr(\Dc_\La G)$ is integrable on $\Rc_r$ then it is seen that $$\int_{\Rc_r}\tr(\Dc_\La G)\dd\La =\int_{\Rc_r}\sum_{i=1}^r\sum_{j=1}^r\frac{1+\de_{ij}}{2}\frac{\partial g_{ji}}{\partial\la_{ij}}\dd\La =\int_{\Rc_r}\sum_{i=1}^r\sum_{j=i}^r\frac{\partial g_{ij}}{\partial\la_{ij}}\dd\La$$ by symmetry of $\La$ and $G$. 
From the Gauss divergence theorem, we obtain $$\int_{\Rc_r}\tr(\Dc_\La G)\dd\La=\int_{\partial\Rc_r}\sum_{i=1}^r\sum_{j=i}^r\nu_{ij} g_{ij}\dd\si=\int_{\partial\Rc_r}\nu^\top\vec(G)\dd\si,$$ where $\si$ stands for Lebesgue measure on $\partial\Rc_r$. Note that $$\begin{aligned} \tr[\Dc_\La\{f_\pi(\La;W)\La\}M]=\tr[\Dc_\La\{f_\pi(\La;W)\La M\}]=\tr[\Dc_\La\{f_\pi(\La;W)M\La\}]\end{aligned}$$ because $M=M(W)$ is symmetric and does not depend on $\La$. It is observed that $\La^2$ and $\La M+M\La$ are symmetric for $\La\in\Rc_r$, so that $$\label{eqn:gdt1} \int_{\Rc_r}\tr[\Dc_\La\{f_\pi(\La;W)\La^2\}]\dd\La=\int_{\partial\Rc_r}\nu^\top\vec(\La^2)f_\pi(\La;W)\dd\si,$$ and $$\begin{aligned} &\int_{\Rc_r}\tr[\Dc_\La\{f_\pi(\La;W)\La\}M]\dd\La \non\\ &=\frac{1}{2}\int_{\Rc_r}\tr[\Dc_\La\{f_\pi(\La;W)\La M+f_\pi(\La;W)M\La\}]\dd\La \non\\ &=\frac{1}{2}\int_{\partial\Rc_r}\nu^\top \vec(\La M+M\La)f_\pi(\La;W)\dd\si. \label{eqn:gdt2}\end{aligned}$$ Recall that $M$ is finite and $0_{r\times r} \preceq \La \preceq I_r$ for $\La\in\partial\Rc_r$, so that $\nu^\top\vec(\La^2)$ and $\nu^\top \vec(\La M+M\La)$ are bounded. Since $f_\pi(\La;W)=0$ for any $\La\in\partial\Rc_r$, (\[eqn:gdt1\]) and (\[eqn:gdt2\]) are, respectively, equal to zero, which completes the proof. $\Box$ Proper Bayes and minimax predictive densities --------------------------------------------- Define a second-stage prior density for $\Om$ as $$\label{eqn:pr_GB} \pi_{GB}(\Om)=K_{a,b}|\Om|^{a/2-1}|I_r-\Om|^{b/2-1},\qquad 0_{r\times r}\prec \Om\prec I_r,$$ where $a$ and $b$ are constants and $K_{a,b}$ is a normalizing constant. The hierarchical prior (\[eqn:pr\_Th\]) with (\[eqn:pr\_GB\]) is a generalization of Faith (1978) in Bayesian minimax estimation of a normal mean vector. Faith’s (1978) prior has also been discussed in detail by Maruyama (1998). When $a>0$ and $b>0$, $\pi_{GB}(\Om)$ is proper and the distribution of $\Om$ is often called the matrix-variate beta distribution. 
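The matrix-variate beta distribution can be simulated from two independent Wishart matrices: with $S_1=Z_1Z_1^\top$ for an $r\times(a+r-1)$ standard normal matrix $Z_1$ and $S_2=Z_2Z_2^\top$ for an $r\times(b+r-1)$ standard normal matrix $Z_2$, the matrix $\Om=(S_1+S_2)^{-1/2}S_1(S_1+S_2)^{-1/2}$ has density proportional to $|\Om|^{a/2-1}|I_r-\Om|^{b/2-1}$. A Monte Carlo sketch (illustrative integer-friendly values $a=3$, $b=5$, $r=2$) checking the first moment $\{(a+r-1)/(a+b+2r-2)\}I_r$:

```python
import numpy as np

def inv_sqrt(C):
    # symmetric inverse square root of a positive definite matrix
    w, V = np.linalg.eigh(C)
    return V @ np.diag(w**-0.5) @ V.T

r, a, b = 2, 3, 5
n1, n2 = a + r - 1, b + r - 1                # Wishart degrees of freedom: 4 and 6
rng = np.random.default_rng(4)

N = 20000
mean = np.zeros((r, r))
for _ in range(N):
    Z1 = rng.standard_normal((r, n1)); S1 = Z1 @ Z1.T
    Z2 = rng.standard_normal((r, n2)); S2 = Z2 @ Z2.T
    R = inv_sqrt(S1 + S2)
    mean += R @ S1 @ R / N                   # Omega = (S1+S2)^{-1/2} S1 (S1+S2)^{-1/2}

# first-moment formula: E[Omega] = (a+r-1)/(a+b+2r-2) I_r = 0.4 I_2 here
assert np.allclose(mean, 0.4*np.eye(r), atol=0.02)
```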
Konno (1988) showed that $$\begin{aligned} &\int_{\Rc_r} \Om \pi_{GB}(\Om)\dd\Om=\frac{a+r-1}{a+b+2r-2}I_r \qquad \textup{for $a>0$ and $b>0$},\\ &\int_{\Rc_r} \Om(I_r-\Om)^{-1} \pi_{GB}(\Om)\dd\Om=\frac{a+r-1}{b-2}I_r \qquad \textup{for $a>0$ and $b>2$}.\end{aligned}$$ For other properties of the matrix-variate beta distribution, see Muirhead (1982) and Gupta and Nagar (1999). Let $\ph_{GB}(Y\mid X)$ be the generalized Bayesian predictive density with respect to (\[eqn:pr\_Th\]) and (\[eqn:pr\_GB\]). A sufficient condition for minimaxity of $\ph_{GB}(Y\mid X)$ is given as follows. \[prp:mini\_GB\] Assume that $q-r-1>0$. Then $\ph_{GB}(Y\mid X)$ is minimax relative to the KL loss [(\[eqn:loss\])]{} if $$\label{eqn:upper0} a>-q+2,\quad b>2,\quad a+b \leq (q-r-1)/(2-v_w/v_0)-2r+2.$$ There exist constants $a$ and $b$ satisfying (\[eqn:upper0\]) if $ q-r-1+(2-v_w/v_0)(q-2r-2)>0. $ Recall that $\ph_U(Y\mid X)$ is minimax and has a constant risk relative to the KL loss (\[eqn:loss\]). Hence if $\ph_{GB}(Y\mid X)$ is proper Bayes then it is admissible. Assume that $q-r-1>0$. Then $\ph_{GB}(Y\mid X)$ is admissible and minimax relative to the KL loss [(\[eqn:loss\])]{} if $$\label{eqn:upper1} a>0,\quad b>2, \quad a+b \leq (q-r-1)/(2-v_w/v_0)-2r+2.$$ Thus, there exist constants $a$ and $b$ satisfying (\[eqn:upper1\]) if $(-q+5r+1)/(2r)<v_w/v_0 <1$. Since $0<v_w<v_x\leq v_0$, it is observed that $$(q-r-1)/(2-v_w/v_0)-2r+2>(q-r-1)/2-2r+2=(q-5r+3)/2.$$ We also obtain the following corollary. \[cor:proper2\] Assume that $q-5r-1>0$. Then, for any $v_x$, $v_y$ and $v_0\ (\geq v_x)$, $\ph_{GB}(Y\mid X)$ is admissible and minimax relative to the KL loss [(\[eqn:loss\])]{} if $$a>0,\quad b>2, \quad a+b \leq (q-5r+3)/2.$$ ![Sufficient conditions on $(a,b)$ of $\ph_{GB}(Y|X)$ for admissibility and minimaxity](graph.eps) [**Proof of Proposition \[prp:mini\_GB\].**]{}   Using Theorem \[thm:faith\], we will derive a sufficient condition for minimaxity of $\ph_{GB}(Y\mid X)$. 
Denote $$c=a+b+2r-2.$$ Let $$\begin{aligned} \pi_{GB}^J(\La)&=v_1^{r(r+1)/2}|v_1I_r+(1-v_1)\La|^{-r-1}\pi_{GB}[\La\{v_1I_r+(1-v_1)\La\}^{-1}]\\ &=K_{a,b}v_1^{r(r+b-1)/2}|\La|^{a/2-1}|I_r-\La|^{b/2-1}|v_1I_r+(1-v_1)\La|^{-c/2}.\end{aligned}$$ When $$\label{eqn:bound-cond} q+a>2 \quad\textup{and}\quad b>2,$$ it follows that for any $\La\in\partial\Rc_r$ $$f_{GB}(\La;W)=(2\pi v)^{-qr/2}|\La|^{q/2}\pi_{GB}^J(\La)\exp\Big[-\frac{1}{2v}\tr(\La WW^\top)\Big]=0.$$ Define $$\begin{aligned} m_{GB}&=m_{GB}(W)=\int_{\Rc_r} f_{GB}(\La;W)\dd\La, \\ M_{GB}&=M_{GB}(W)=\int_{\Rc_r}\La f_{GB}(\La;W)\dd\La.\end{aligned}$$ Since $0<v_1\leq 1$ and $\La\in\Rc_r$, it holds that $$|v_1I_r+(1-v_1)\La|^{-c/2}\leq \max\big(1,v_1^{-rc/2}\big),$$ which implies that $$f_{GB}(\La;W)\leq \textup{const.}\times |\La|^{(q+a)/2-1}|I_r-\La|^{b/2-1}.$$ Thus if $q+a>0$ and $b>0$ then $m_{GB}$ and $M_{GB}$ are finite. It is seen from Lemmas \[lem:diff1\] and \[lem:diff3\] that $$\frac{\Dc_\La \pi_{GB}^J(\La)}{\pi_{GB}^J(\La)} =\frac{1}{2}\big[(a-2)\La^{-1}-(b-2)(I_r-\La)^{-1}-(1-v_1)c\{v_1I_r+(1-v_1)\La\}^{-1}\Big],$$ so that $$\begin{aligned} \De_1(W;\pi_{GB}^J) &=4\int_{\Rc_r}\frac{1}{\pi_{GB}^J(\La)}\tr[\La^2\Dc_\La \pi_{GB}^J(\La)]f_{GB}(\La;W)\dd\La\\ &=2(a-2)\tr M_{GB}-2(b-2)\int_{\Rc_r}\tr[(I_r-\La)^{-1}\La^2]f_{GB}(\La;W)\dd\La\\ &\quad -2(1-v_1)c\int_{\Rc_r}\tr[\{v_1I_r+(1-v_1)\La\}^{-1}\La^2]f_{GB}(\La;W)\dd\La.\end{aligned}$$ Note that $$\tr[(I_r-\La)^{-1}\La^2]=-\tr\La+\tr[(I_r-\La)^{-1}\La],$$ which leads to $$\begin{aligned} \label{eqn:De1} &\De_1(W;\pi_{GB}^J) \non\\ &=2(a+b-4)\tr M_{GB}-2(b-2)\int_{\Rc_r}\tr[(I_r-\La)^{-1}\La]f_{GB}(\La;W)\dd\La \non\\ &\quad -2(1-v_1)c\int_{\Rc_r}\tr[\{v_1I_r+(1-v_1)\La\}^{-1}\La^2]f_{GB}(\La;W)\dd\La.\end{aligned}$$ Similarly, Lemmas \[lem:diff1\] and \[lem:diff3\] are used to see that $$\begin{aligned} \frac{2\tr[\La M_{GB}\Dc_\La \pi_{GB}^J(\La)]}{\pi_{GB}^J(\La)} &=(a-2)\tr M_{GB}-(b-2)\tr[M_{GB}(I_r-\La)^{-1}\La] \\ &\quad 
-(1-v_1)c\tr[M_{GB}\{v_1I_r+(1-v_1)\La\}^{-1}\La],\end{aligned}$$ which yields that $$\begin{aligned} \label{eqn:De2} \De_2(W;\pi_{GB}^J) &= \frac{2}{m_{GB}}\int_{\Rc_r}\frac{\tr[\La M_{GB}\Dc_\La \pi_{GB}^J(\La)]}{\pi_{GB}^J(\La)}f_{GB}(\La;W)\dd\La \non\\ &= (a-2)\tr M_{GB} -\frac{b-2}{m_{GB}}\int_{\Rc_r}\tr[M_{GB}(I_r-\La)^{-1}\La]f_{GB}(\La;W)\dd\La \non\\ &\quad -\frac{(1-v_1)c}{m_{GB}}\int_{\Rc_r}\tr[M_{GB}\{v_1I_r+(1-v_1)\La\}^{-1}\La]f_{GB}(\La;W)\dd\La.\end{aligned}$$ Hence combining (\[eqn:De1\]) and (\[eqn:De2\]) gives that $$\begin{aligned} \label{eqn:De0} \De(W;\pi_{GB}^J)&=\De_1(W;\pi_{GB}^J)-\De_2(W;\pi_{GB}^J)-(q-3r-3)\tr M_{GB} \non\\ &= -(q-3r+3-a-2b)\tr M_{GB} +\De_3+\De_4,\end{aligned}$$ where $$\begin{aligned} \De_3 &= -2(b-2)\int_{\Rc_r}\tr[(I_r-\La)^{-1}\La]f_{GB}(\La;W)\dd\La \\ &\qquad +\frac{b-2}{m_{GB}}\int_{\Rc_r}\tr[M_{GB}(I_r-\La)^{-1}\La]f_{GB}(\La;W)\dd\La, \\ \De_4 &=-2(1-v_1)c\int_{\Rc_r}\tr[\{v_1I_r+(1-v_1)\La\}^{-1}\La^2]f_{GB}(\La;W)\dd\La \\ &\qquad +\frac{(1-v_1)c}{m_{GB}}\int_{\Rc_r}\tr[M_{GB}\{v_1I_r+(1-v_1)\La\}^{-1}\La]f_{GB}(\La;W)\dd\La.\end{aligned}$$ Here, it can easily be verified that $\De(W;\pi_{GB}^J)$ is finite for $q+a>0$ and $b>2$. For notational simplicity, we use the notation $$\Er_\La[g(\La)]=\int_{\Rc_r}g(\La)f_{GB}(\La;W)\dd\La \Big/\int_{\Rc_r}f_{GB}(\La;W)\dd\La$$ for an integrable function $g(\La)$. Then from (\[eqn:De0\]), $$\begin{aligned} \label{eqn:De00} {\De(W;\pi_{GB}^J)\over m_{GB}} =& (c-q+r+b-1)\tr \Er_\La(\La) \non\\ &+ (b-2)\Big[ \tr\big[ \Er_\La(\La)\Er_\La\{(I_r-\La)^{-1}\La\} \big] - 2\tr\big[ \Er_\La\{(I_r-\La)^{-1}\La\}\big]\Big]\non\\ &+(1-v_1)c\Big[ \tr\big[ \Er_\La(\La)\Er_\La\{(v_1I_r+(1-v_1)\La)^{-1}\La\}\big] \non\\ &\qquad\qquad\qquad -2\tr\big[ \Er_\La\{(v_1I_r+(1-v_1)\La)^{-1}\La^2\}\big]\Big]\end{aligned}$$ for $c=a+b+2r-2$. Note that $0_{r\times r} \preceq \La \preceq I_r$ and $I_r \preceq (I_r-\La)^{-1}$. 
Since $\Er_\La(\La) \preceq I_r$ and $\tr\big[ (I_r-\La)^{-1}\La \big] \geq \tr\La$, the second term in the r.h.s. of (\[eqn:De00\]) is evaluated as $$\begin{aligned} \tr\big[ &\Er_\La(\La)\Er_\La\{(I_r-\La)^{-1}\La\}\big] - 2\tr\big[ \Er_\La\{(I_r-\La)^{-1}\La\}\big] \\ &\leq - \tr\big[ \Er_\La\{(I_r-\La)^{-1}\La\}\big] \leq - \tr \Er_\La(\La).\end{aligned}$$ Since $b>2$, we have $$\begin{aligned} \label{eqn:De01} {\De(W;\pi_{GB}^J)\over m_{GB}} \leq & (c-q+r+1)\tr \Er_\La(\La) \non\\ &+(1-v_1)c\Big[ \tr\big[ \Er_\La(\La)\Er_\La\{(v_1I_r+(1-v_1)\La)^{-1}\La\}\big] \non\\ &\qquad\qquad\qquad -2\tr\big[ \Er_\La\{(v_1I_r+(1-v_1)\La)^{-1}\La^2\}\big]\Big].\end{aligned}$$ It is here observed that $$\begin{aligned} &(1-v_1)\{v_1I_r+(1-v_1)\La\}^{-1}\La \\ &\qquad = \{v_1I_r+(1-v_1)\La\}^{-1} \{ v_1I_r+(1-v_1)\La - v_1I_r\}\\ &\qquad= I_r - v_1 \{v_1I_r+(1-v_1)\La\}^{-1}, \\ &(1-v_1)\{v_1I_r+(1-v_1)\La\}^{-1}\La^2 \\ &\qquad= \La - v_1 \{v_1I_r+(1-v_1)\La\}^{-1}\La,\end{aligned}$$ which is used to get $$\begin{aligned} &(1-v_1)c\Big[ \tr\big[ \Er_\La(\La)\Er_\La\{(v_1I_r+(1-v_1)\La)^{-1}\La\}\big] \\ &\qquad\qquad -2\tr\big[ \Er_\La\{(v_1I_r+(1-v_1)\La)^{-1}\La^2\}\big]\Big] \\ &\qquad =-c\tr \Er_\La(\La)\\ &\qquad\qquad + cv_1 \Big[ 2 \tr\big[ \Er_\La\{(v_1I_r+(1-v_1)\La)^{-1}\La\}\big] \\ &\qquad\qquad\qquad-\tr\big[ \Er_\La(\La) \Er_\La\{(v_1I_r+(1-v_1)\La)^{-1}\}\big]\Big].\end{aligned}$$ Substituting this quantity into (\[eqn:De01\]) gives $$\begin{aligned} \label{eqn:De02} {\De(W;\pi_{GB}^J)\over m_{GB}} \leq & -(q-r-1)\tr \Er_\La(\La) \non\\ &+ cv_1 \Big[ 2 \tr\big[ \Er_\La\{(v_1I_r+(1-v_1)\La)^{-1}\La\}\big] \non\\ &\qquad\qquad -\tr\big[ \Er_\La(\La) \Er_\La\{(v_1I_r+(1-v_1)\La)^{-1}\}\big]\Big].\end{aligned}$$ To evaluate the second term in the r.h.s. of (\[eqn:De02\]), note that $$I_r \preceq \{v_1I_r+(1-v_1)\La\}^{-1} \preceq v_1^{-1}I_r. 
\label{eqn:inq}$$ In the case of $c\geq 0$, it is seen from (\[eqn:inq\]) that $$\begin{aligned} cv_1 &\Big[ 2 \tr\big[ \Er_\La\{(v_1I_r+(1-v_1)\La)^{-1}\La\}\big]-\tr\big[ \Er_\La(\La) \Er_\La\{(v_1I_r+(1-v_1)\La)^{-1}\}\big]\Big] \\ &\leq cv_1 \Big\{ {2\over v_1}\tr \Er_\La(\La) - \tr \Er_\La(\La)\Big\} = c (2-v_1) \tr \Er_\La(\La),\end{aligned}$$ which implies that $$\De(W;\pi_{GB}^J)/m_{GB} \leq \{ - (q-r-1) + c(2-v_w/v_0)\}\tr \Er_\La(\La),$$ because $1>v_1=v/v_0\geq v_w/v_0>0$. It is noted that $c=a+b+2r-2>-q + 2r +2=- (q-r-1) + r+1$ because $a>2-q$ and $b>2$. Thus, one gets a sufficient condition given by $$\max \{0, - (q-r-1) + r+1\} \leq c \leq (q-r-1)/(2-v_w/v_0). \label{eqn:sc1}$$ In the case of $c\leq 0$, it is seen from (\[eqn:inq\]) that $$\begin{aligned} cv_1 &\Big[ 2 \tr\big[ \Er_\La\{(v_1I_r+(1-v_1)\La)^{-1}\La\}\big]-\tr\big[ \Er_\La(\La) \Er_\La(v_1I_r+(1-v_1)\La)^{-1}\big]\Big] \\ &\leq cv_1 \Big\{ 2\tr \Er_\La(\La) - {1\over v_1}\tr \Er_\La(\La)\Big\} = c (2v_1-1) \tr \Er_\La(\La),\end{aligned}$$ which implies that $$\De(W;\pi_{GB}^J)/m_{GB} \leq \{ - (q-r-1) + c(2v_w/v_0-1)\}\tr \Er_\La(\La).$$ Hence, it holds true that $ - (q-r-1) + c(2v_w/v_0-1)\leq 0$ if $$\min \{0, - (q-r-1) + r+1\} \leq c \leq 0. \label{eqn:sc2}$$ Combining (\[eqn:sc1\]) and (\[eqn:sc2\]) yields the condition $- (q-r-1) + r+1 \leq c \leq (q-r-1)/(2-v_w/v_0)$, namely, $-q+4\leq a+b\leq (q-r-1)/(2-v_w/v_0)-2r+2$. From (\[eqn:bound-cond\]), the sufficient conditions on $(a,b)$ for minimaxity can be written as $a>2-q$, $b>2$ and $a+b\leq (q-r-1)/(2-v_w/v_0)-2r+2$ if $$\begin{aligned} &\{(q-r-1)/(2-v_w/v_0)-2r+2\}-\{-q+4\}\\ &=\{q-r-1+(2-v_w/v_0)(q-2r-2)\}/(2-v_w/v_0)>0.\end{aligned}$$ Thus the proof is complete. $\Box$ Take $v_x=v_w=v_0=1$. Let $X|\Th\sim\Nc_{r\times q}(\Th,I_r\otimes I_q)$. Consider the problem of estimating the mean matrix $\Th$ under the squared Frobenius norm loss $\Vert\Thh-\Th\Vert^2$. 
The Bayesian estimator with respect to (\[eqn:pr\_Th\]) and (\[eqn:pr\_GB\]) is expressed as $$\Thh_{GB}=\bigg[I_r-\frac{\int_{\Rc_r}\Om|\Om|^{(q+a)/2-1}|I_r-\Om|^{b/2-1}\exp[-\tr(\Om XX^\top)/2]\dd\Om}{\int_{\Rc_r}|\Om|^{(q+a)/2-1}|I_r-\Om|^{b/2-1}\exp[-\tr(\Om XX^\top)/2]\dd\Om}\bigg]X.$$ Then the same arguments as in this section yield that $\Thh_{GB}$ is proper Bayes and minimax if $a>0$, $b>2$, $q>3r+1$ and $ 2 < a+b \leq q-3r+1$. $\Box$ Superharmonic priors for minimaxity {#sec:superharmonic} =================================== In estimation of the normal mean vector, Stein (1973, 1981) discovered an interesting relationship between superharmonicity of prior density and minimaxity of the resulting generalized Bayes estimator. The relationship is very important and useful in Bayesian predictive density estimation. In this section we derive some Bayesian minimax predictive densities with superharmonic priors. Let $\ph_\pi=\ph_\pi(Y|X)$ be a Bayesian predictive density with respect to a prior $\pi(\Th)$, where $\pi(\Th)$ is twice differentiable and the marginal density $m_\pi(X;v_x)$ is finite. All the results in this section are based on the following key lemma. \[lem:superharmonic\] Denote by $\nabla_\Th=(\partial/\partial\th_{ij})$ the $r\times q$ differentiation operator matrix with respect to $\Th$. Then $\ph_\pi$ is minimax relative to the KL loss $(\ref{eqn:loss})$ if $\pi(\Th)$ is superharmonic, namely, $$\tr[\nabla_\Th\nabla_\Th^\top \pi(\Th)]=\sum_{i=1}^r\sum_{j=1}^q\frac{\partial^2 \pi(\Th)}{\partial \th_{ij}^2}\leq 0.$$ [**Proof.**]{}   This lemma can be proved along the same arguments as in Stein (1981). See also George et al. (2006) and Brown et al. (2008). $\Box$ Define a class of prior densities as $$\pi(\Th)=g(\Si),\quad \Si=\Th\Th^\top,$$ where $g$ is twice differentiable with respect to $\Si$. 
Let $\Dc_\Si$ be an $r\times r$ matrix of differential operator with respect to $\Si=(\si_{ij})$ such that the $(i,j)$ element of $\Dc_\Si$ is $$\{\Dc_\Si\}_{ij}=\frac{1+\de_{ij}}{2}\frac{\partial}{\partial \si_{ij}},$$ where $\de_{ij}$ stands for the Kronecker delta. Let $$G=(g_{ij})=G(\Si)=\Dc_\Si g(\Si),$$ namely, $G$ is an $r\times r$ symmetric matrix such that $g_{ij}=\{\Dc_\Si\}_{ij} g(\Si)$. \[lem:condition1\] $\ph_\pi$ with respect to $\pi(\Th)=g(\Si)$ is minimax relative to the KL loss $(\ref{eqn:loss})$ if $$\tr[\nabla_\Th\nabla_\Th^\top\pi(\Th)]=2[(q-r-1)\tr(G)+2\tr(\Dc_\Si \Si G)]\leq 0,$$ where $G=\Dc_\Si g(\Si)$. [**Proof.**]{}   Using (i) and (ii) in Lemma \[lem:diff4\] gives that $$\begin{aligned} \tr[\nabla_\Th\nabla_\Th^\top \pi(\Th)] &=2\tr(\nabla_\Th \Th^\top \Dc_\Si g(\Si))=2\tr(\nabla_\Th \Th^\top G)\\ &=2\big[(q-r-1)\tr(G)+2\tr(\Dc_\Si \Si G)\big].\end{aligned}$$ From Lemma \[lem:superharmonic\], the proof is complete. Let $\la_1,\ldots,\la_r$ be ordered eigenvalues of $\Si=\Th\Th^\top$, where $\la_1\geq\cdots\geq\la_r$, and let $\La=\diag(\la_1,\ldots,\la_r)$. Denote by $\Ga=(\ga_{ij})$ an $r\times r$ orthogonal matrix such that $\Ga^\top\Si \Ga=\La$. Assume that $g(\Si)$ is orthogonally invariant, namely, $g(\Si)=g(P\Si P^\top)$ for any orthogonal matrix $P$. Then, we can assume that $g(\Si)=g(\La)$ without loss of generality. \[prp:condition2\] Assume that $g(\Si)=g(\La)$ and $g(\La)$ is a twice differentiable function of $\La$. Then $\ph_\pi$ with $\pi(\Th)=g(\La)$ is minimax relative to the KL loss $(\ref{eqn:loss})$ if $$\begin{aligned} &\tr[\nabla_\Th\nabla_\Th^\top\pi(\Th)]\\ &=2\sum_{i=1}^r\bigg\{(q-r+1)\phi_i(\La)+\sum_{j\ne i}^r\frac{\la_i\phi_i(\La)-\la_j\phi_j(\La)}{\la_i-\la_j}+2\la_i\frac{\partial\phi_i(\La)}{\partial\la_i}\bigg\}\leq 0,\end{aligned}$$ where $\phi_i(\La)=\partial g(\La)/\partial\la_i$. 
[**Proof.**]{}   Since from (i) of Lemma \[lem:diff2\] $$\{\Dc_\Si\}_{ij}\la_k=\ga_{ik}\ga_{jk},$$ it is observed that by the chain rule $$\{\Dc_\Si\}_{ij} g(\La)=\sum_{k=1}^r \frac{\partial g(\La)}{\partial\la_k}\{\Dc_\Si\}_{ij}\la_k =\{\Ga\Phi(\La)\Ga^\top\}_{ij},$$ where $\Phi(\La)=\diag(\phi_1(\La),\ldots,\phi_r(\La))$. Using Lemma \[lem:condition1\] and (ii) of Lemma \[lem:diff2\] gives that $$\begin{aligned} &\tr[\nabla_\Th\nabla_\Th^\top\pi(\Th)] \\ &=2[(q-r-1)\tr\{\Ga\Phi(\La)\Ga^\top\}+2\tr\{\Dc_\Si \Ga\La\Phi(\La)\Ga^\top\}] \\ &=2\sum_{i=1}^r\bigg[(q-r-1)\phi_i(\La)+\sum_{j\ne i}^r\frac{\la_i\phi_i(\La)-\la_j\phi_j(\La)}{\la_i-\la_j}+2\frac{\partial}{\partial\la_i}\{\la_i\phi_i(\La)\}\bigg]\\ &=2\sum_{i=1}^r\bigg\{(q-r+1)\phi_i(\La)+\sum_{j\ne i}^r\frac{\la_i\phi_i(\La)-\la_j\phi_j(\La)}{\la_i-\la_j}+2\la_i\frac{\partial\phi_i(\La)}{\partial\la_i}\bigg\}.\end{aligned}$$ Hence the proof is complete. Using Proposition \[prp:condition2\], we give some examples of Bayesian predictive densities with respect to superharmonic priors. Consider a class of shrinkage prior densities, $$\pi_{SH}(\Th) = \{\tr(\Th\Th^\top)\}^{-\be/2}\prod_{i=1}^r \la_i^{-\al_i/2} =\bigg\{\sum_{i=1}^r \la_i\bigg\}^{-\be/2}\prod_{i=1}^r \la_i^{-\al_i/2},$$ where $\al_1,\ldots,\al_r$ and $\be$ are nonnegative constants. The class $\pi_{SH}(\Th)$ includes both harmonic priors $\pi_{EM}(\Th)$ and $\pi_{JS}(\Th)$, which are given in (\[eqn:pr\_em\]) and (\[eqn:pr\_js\]), respectively. Indeed, $\pi_{SH}(\Th)$ is the same as $\pi_{EM}(\Th)$ if $\al_1=\cdots=\al_r=\al^{EM}$ and $\be=0$ and as $\pi_{JS}(\Th)$ if $\al_1=\cdots=\al_r=0$ and $\be=\be^{JS}$. 
It is noted that $$\begin{aligned} \frac{\partial}{\partial \la_k}\pi_{SH}(\Th) &= -\frac{1}{2}\Big(\frac{\al_k}{\la_k}+\frac{\be}{\sum_{i=1}^r \la_i}\Big)\pi_{SH}(\Th), \label{eqn:d_pi_g} \\ \frac{\partial^2}{\partial \la_k^2}\pi_{SH}(\Th) &=\frac{1}{2}\bigg\{\Big(\frac{\al_k}{\la_k^2}+\frac{\be}{(\sum_{i=1}^r \la_i)^2}\Big)+\frac{1}{2}\Big(\frac{\al_k}{\la_k}+\frac{\be}{\sum_{i=1}^r \la_i}\Big)^2\bigg\}\pi_{SH}(\Th) . \label{eqn:dd_pi_g}\end{aligned}$$ Combining (\[eqn:d\_pi\_g\]), (\[eqn:dd\_pi\_g\]) and Proposition \[prp:condition2\], we obtain $$\begin{aligned} &\tr[\nabla_\Th\nabla_\Th^\top \pi_{SH}(\Th)] \non\\ &=\pi_{SH}(\Th)\sum_{i=1}^r\bigg[\{\al_i^2-(q-r-1)\al_i\}\frac{1}{\la_i}-2\sum_{j>i}^r\frac{\al_i-\al_j}{\la_i-\la_j}+\frac{2\al_i\be}{\tr(\Th\Th^\top)}\bigg] \non\\ &\qquad +\pi_{SH}(\Th)\frac{\be^2-(qr-2)\be}{\tr(\Th\Th^\top)}. \label{eqn:dd-pi_g}\end{aligned}$$ \[exm:1\] Let $$\pi_{ST}(\Th)=\prod_{i=1}^r \la_i^{-\al_i/2},$$ where $\al_1,\ldots,\al_r$ are nonnegative constants. Assume that $\al_1\geq\cdots\geq\al_r$. Note that $$\begin{aligned} \sum_{i=1}^r\sum_{j>i}^r\frac{\al_i-\al_j}{\la_i-\la_j} &=\sum_{i=1}^r\sum_{j>i}^r\frac{1}{\la_i}\frac{\la_i-\la_j+\la_j}{\la_i-\la_j}(\al_i-\al_j)\\ &=\sum_{i=1}^r(r-i)\frac{\al_i}{\la_i}-\sum_{i=1}^r\frac{1}{\la_i}\sum_{j>i}^r\al_j+\sum_{i=1}^r\sum_{j>i}^r\frac{\la_j}{\la_i}\frac{\al_i-\al_j}{\la_i-\la_j}\\ &\geq \sum_{i=1}^r(r-i)\frac{\al_i}{\la_i}-\sum_{i=1}^r\frac{1}{\la_i}\sum_{j>i}^r\al_j.\end{aligned}$$ From (\[eqn:dd-pi\_g\]), it is seen that $$\begin{aligned} &\tr[\nabla_\Th\nabla_\Th^\top \pi_{ST}(\Th)] \\ &\leq\pi_{ST}(\Th)\sum_{i=1}^r\bigg\{\al_i^2-(q-r-1)\al_i-2(r-i)\al_i+2\sum_{j>i}^r\al_j\bigg\}\frac{1}{\la_i}\\ &=\pi_{ST}(\Th)\sum_{i=1}^r\bigg\{\al_i^2-(q+r-2i-1)\al_i+2\sum_{j>i}^r\al_j\bigg\}\frac{1}{\la_i}.\end{aligned}$$ Here, assume additionally that $\al_i\leq\al_i^{ST}/2$ with $\al_i^{ST}=q+r-2i-1$ for $i=1,\ldots,r$. 
For each $i$ we observe that $$\begin{aligned} &\al_i^2-(q+r-2i-1)\al_i+2\sum_{j>i}^r\al_j \\ &\leq \al_{i+1}^2-(q+r-2i-1)\al_{i+1}+2\sum_{j>i}^r\al_j\\ &= \al_{i+1}^2-\{q+r-2(i+1)-1\}\al_{i+1}+2\sum_{j>i+1}^r\al_j\\ &\leq \cdots\\ &\leq \al_r^2-(q-r-1)\al_r\leq0,\end{aligned}$$ which implies that $\tr[\nabla_\Th\nabla_\Th^\top \pi_{ST}(\Th)]\leq 0$ if $\al_1\geq\cdots\geq\al_r$ and $\al_i\leq\al_i^{ST}/2$ for each $i$. Then the resulting Bayesian predictive density is minimax under the KL loss (\[eqn:loss\]).  $\Box$ \[exm:2\] Consider a prior density of the form $$\label{eqn:pr_MS1} \pi_{MS1}(\Th)=\{\tr(\Th\Th^\top)\}^{-\be^{MS}/2}\prod_{i=1}^r \la_i^{-\al_i^{ST}/4},$$ where $\be^{MS}=2(r-1)$. Combining Example \[exm:1\] and (\[eqn:dd-pi\_g\]) gives that $$\begin{aligned} &\tr[\nabla_\Th\nabla_\Th^\top \pi_{MS1}(\Th)]\\ &\leq\pi_{MS1}(\Th)\sum_{i=1}^r\frac{\al_i^{ST}\be^{MS}}{\tr(\Th\Th^\top)} +\pi_{MS1}(\Th)\frac{(\be^{MS})^2-(qr-2)\be^{MS}}{\tr(\Th\Th^\top)}=0.\end{aligned}$$ Hence the Bayesian predictive density with respect to $\pi_{MS1}(\Th)$ is minimax relative to the KL loss (\[eqn:loss\]).  $\Box$ In the literature, many shrinkage estimators have been developed in estimation of a normal mean matrix. It is worth pointing out that the Bayesian predictive densities with superharmonic prior $\pi_{SH}(\Th)$ correspond to such shrinkage estimators. Let $X|\Th\sim\Nc_{r\times q}(\Th, v_x I_r\otimes I_q)$ and denote an estimator of $\Th$ by $\Thh$. Consider the problem of estimating the mean matrix $\Th$ relative to quadratic loss $L_Q(\Thh,\Th)=\Vert\Thh-\Th\Vert^2$. Then the generalized Bayes estimator of $\Th$ with the prior density $\pi_{SH}$ is expressed as $$\begin{aligned} \Thh_{SH} &=\frac{\int_{\Re^{r\times q}} \Th \exp(-\Vert X-\Th \Vert^2/(2v_x))\pi_{SH}(\Th)\dd\Th}{\int_{\Re^{r\times q}} \exp(-\Vert X-\Th \Vert^2/(2v_x))\pi_{SH}(\Th)\dd\Th}.\end{aligned}$$ If $\pi_{SH}$ is superharmonic then $\Thh_{SH}$ is minimax relative to the quadratic loss $L_Q$. 
Since $v_x\nabla_\Th \exp(-\Vert X-\Th \Vert^2/(2v_x))=-(\Th-X)\exp(-\Vert X-\Th \Vert^2/(2v_x))$, the integration by parts gives that $$\begin{aligned} \Thh_{SH} &=X-v_x\frac{\int_{\Re^{r\times q}} [\nabla_\Th \exp(-\Vert X-\Th \Vert^2/(2v_x))]\pi_{SH}(\Th)\dd\Th}{\int_{\Re^{r\times q}} \exp(-\Vert X-\Th \Vert^2/(2v_x))\pi_{SH}(\Th)\dd\Th} \\ &=X+v_x \frac{\int_{\Re^{r\times q}} \exp(-\Vert X-\Th \Vert^2/(2v_x))[\nabla_\Th\pi_{SH}(\Th)]\dd\Th}{\int_{\Re^{r\times q}} \exp(-\Vert X-\Th \Vert^2/(2v_x))\pi_{SH}(\Th)\dd\Th}.\end{aligned}$$ Here using (i) of Lemma \[lem:diff4\] and (i) of Lemma \[lem:diff2\] gives that $$\begin{aligned} \nabla_\Th^\top \pi_{SH}(\Th) &= 2\Th^\top \Dc_\Si \pi_{SH}(\Th) \non\\ &=- \Th^\top \bigg\{\Ga\diag\Big(\frac{\al_1}{\la_1},\ldots,\frac{\al_r}{\la_r}\Big)\Ga^\top+\frac{\be}{\tr(\Si)}I_r \bigg\} \pi_{SH}(\Th),\end{aligned}$$ which leads to $$\label{eqn:Th_MS} \Thh_{SH} =X-v_x\Er^{\Th|X}\bigg[ \bigg\{\Ga\diag\Big(\frac{\al_1}{\la_1},\ldots,\frac{\al_r}{\la_r}\Big)\Ga^\top+\frac{\be}{\tr(\Si)}I_r \bigg\} \Th\bigg],$$ where $\Er^{\Th|X}$ stands for the posterior expectation with respect to a density proportional to $\exp(-\Vert \Th-X \Vert^2/(2v_x))\pi_{SH}(\Th)$. Denote by $XX^\top=HLH^\top$ the eigenvalue decomposition of $XX^\top$, where $H=(h_{ij})$ is an orthogonal matrix of order $r$ and $L=\diag(\ell_1,\ldots,\ell_r)$ is a diagonal matrix of order $r$ with $\ell_1\geq \cdots\geq \ell_r$. Substituting $(X,H,L)$ for $(\Th,\Ga,\La)$ in the second term of the r.h.s. of (\[eqn:Th\_MS\]), we obtain an empirical Bayes shrinkage estimator $$\Thh_{MS}=X-v_x\bigg\{H\diag\Big(\frac{\al_1}{\ell_1},\ldots,\frac{\al_r}{\ell_r}\Big)H^\top+\frac{\be}{\tr(XX^\top)}I_r \bigg\}X.$$ The shrinkage estimator $\Thh_{MS}$ is equivalent to $\Thh_{JS}$, given in (\[eqn:JS\]), when $\al_1=\cdots=\al_r=0$ and $\be=\be^{JS}$, and to $\Thh_{EM}$, given in (\[eqn:EM\]), when $\al_1=\cdots=\al_r=\al^{EM}$ and $\be=0$. 
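The closed form of $\Thh_{MS}$ above is straightforward to compute numerically. The following is a minimal numpy sketch (function and variable names are ours, not from the paper), taking the Stein-type constants $\al_i^{ST}=q+r-2i-1$ as the default choice of $\al_1,\ldots,\al_r$:

```python
import numpy as np

def shrinkage_estimate(X, vx=1.0, alpha=None, beta=0.0):
    """Empirical Bayes shrinkage estimate
    X - vx * { H diag(alpha_i/ell_i) H^T + beta/tr(XX^T) I } X,
    where X X^T = H diag(ell_1 >= ... >= ell_r) H^T.

    alpha defaults to the Stein-type constants alpha_i^ST = q + r - 2i - 1.
    """
    r, q = X.shape
    if alpha is None:
        alpha = np.array([q + r - 2 * (i + 1) - 1 for i in range(r)], dtype=float)
    S = X @ X.T
    w, V = np.linalg.eigh(S)            # eigenvalues returned in ascending order
    order = np.argsort(w)[::-1]         # re-sort so that ell_1 >= ... >= ell_r
    ell, H = w[order], V[:, order]
    M = H @ np.diag(alpha / ell) @ H.T + (beta / np.trace(S)) * np.eye(r)
    return X - vx * (M @ X)
```

Setting `alpha` to zeros leaves only the trace-based (JS-type) term, and `beta = 0` only the eigenvalue-based (EM-type) term, matching the two limiting cases noted above.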
In estimation of the normal mean matrix relative to the quadratic loss $L_Q$, $\Thh_{JS}$ and $\Thh_{EM}$ are minimax. If $\Thh_{MS}$ with certain specified $\al_1,\ldots,\al_r$ and $\be$ has good performance, the prior density $\pi_{SH}$ with the same $\al_1,\ldots,\al_r$ and $\be$ would produce a good Bayesian predictive density. From Tsukuma (2008), $\Thh_{MS}$ is a minimax estimator dominating $\Thh_{EM}$ when $\al_i=\al_i^{ST}$ for $i=1,\ldots,r$ and $0\leq \be\leq 4(r-1)$. A reasonable choice for $\be$ is $\be^{MS}=2(r-1)$ and this suggests that we should consider a prior density of the form $$\label{eqn:pr_MS2} \pi_{MS2}(\Th) =\{\tr(\Th\Th^\top)\}^{-\be^{MS}/2}\prod_{i=1}^r \la_i^{-\al_i^{ST}/2} =\pi_{MS1}(\Th)\prod_{i=1}^r \la_i^{-\al_i^{ST}/4}.$$ The prior density $\pi_{MS2}(\Th)$ is not superharmonic, and it is not known whether the resulting Bayesian predictive density is minimax or not. In the next section, we verify risk behavior of the Bayesian predictive density with respect to $\pi_{MS2}(\Th)$ through Monte Carlo simulations. Monte Carlo studies {#sec:MCstudies} =================== This section briefly reports some numerical results so as to compare performance in risk of some Bayesian predictive densities for $r=2$ and $q=15$. First we investigate risk behavior of generalized Bayes predictive densities $\ph_{GB}(Y|X)$ with $v_0=1$ in the following six cases: $$(a,\, b)=(-11,\, 3),\ (-11,\, 9),\ (-11,\, 15),\ (-5,\, 3),\ (-5,\, 9),\ (1,\, 3)$$ for the second-stage prior (\[eqn:pr\_GB\]). When $r=2$ and $q=15$, $\ph_{GB}(Y|X)$ with the above six cases are minimax and, in particular, $\ph_{GB}(Y|X)$ with $(a,\, b)=(1,\,3)$ is proper Bayes for any $v_x$ and $v_y$ (see Corollary \[cor:proper2\]). The risk has been simulated by 100,000 independent replications of $X$ and $Y$, where $X|\Th\sim\Nc_{r\times q}(\Th, v_xI_r\otimes I_q)$ and $Y|\Th\sim\Nc_{r\times q}(\Th, v_yI_r\otimes I_q)$ with $(v_x,v_y)=(0.1,\, 1),\ (1,\, 1)$ and $(1,\, 0.1)$. 
It has been assumed that a pair of the maximum and the minimum eigenvalues of $\Th\Th^\top$ is $(0,\, 0),\ (24,\, 0)$ or $(24,\, 24)$. Note that the best invariant predictive density $\ph_U(Y|X)$ has a constant risk and its risk is approximately given by $$R(\ph_U,\Th)=\frac{rq}{2}\log\frac{v_s}{v_y}\approx\begin{cases} 1.42 & \textup{for $(v_x,v_y)=(0.1,\ 1)$},\\ 10.4 & \textup{for $(v_x,v_y)=(1,\ 1)$},\\ 36.0 & \textup{for $(v_x,v_y)=(1,\ 0.1)$}, \end{cases}$$ when $r=2$ and $q=15$. Denote by $\Bc(a,b)$ the matrix-variate beta distribution having the density (\[eqn:pr\_GB\]). Using (\[eqn:m(W)-1\]) with $\La=v_1\Om\{I_r-(1-v_1)\Om\}^{-1}$ and $v_1=v/v_0$, we can rewrite $\ph_{GB}(Y|X)$ as $$\ph_{GB}(Y|X)=\frac{\Er^{\Om}[g_{v_w}(\Om|W)]}{\Er^{\Om}[g_{v_x}(\Om|X)]}\ph_U(Y|X),$$ where $\Er^{\Om}$ indicates expectation with respect to $\Om\sim \Bc(a+q,b)$ and $$g_{v}(\Om|Z)=\Big|I_r-\Big(1-\frac{v}{v_0}\Big)\Om\Big|^{-q/2}\exp\Big[-\frac{1}{2v_0}\tr\Big[\Om\Big\{I_r-\Big(1-\frac{v}{v_0}\Big)\Om\Big\}^{-1}ZZ^\top\Big]\Big]$$ for an $r\times q$ matrix $Z$. Hence in our simulations, the expectation $\Er^{\Om}[g_{v}(\Om|Z)]$ was estimated by $j_0^{-1}\sum_{j=1}^{j_0} g_{v}(\Om_j|Z)$, where $j_0=100,000$ and the $\Om_j$ are independent replications from $\Bc(a+q,b)$. 
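This sampling scheme can be sketched in numpy as follows. The Wishart-ratio construction of the matrix-variate beta draws — with degrees of freedom $a+q+r-1$ and $b+r-1$, chosen to match the exponents of the $\Bc(a+q,b)$ density and assumed to be integers here — is our assumption, not taken from the paper; for $r=1$ it reduces to the ordinary Beta$((a+q)/2,\,b/2)$ distribution:

```python
import numpy as np

def sample_matrix_beta(n1, n2, r, rng):
    """Draw Omega with density prop. to |O|^{(n1-r-1)/2} |I-O|^{(n2-r-1)/2}
    via a ratio of two Wishart matrices (integer degrees of freedom n1, n2 >= r).
    The choice of n1, n2 to match Bc(a+q, b) is an assumption of this sketch."""
    G = rng.standard_normal((r, n1))
    K = rng.standard_normal((r, n2))
    A, B = G @ G.T, K @ K.T
    w, V = np.linalg.eigh(A + B)
    C = V @ np.diag(w ** -0.5) @ V.T    # symmetric (A+B)^{-1/2}
    return C @ A @ C

def g(Om, Z, v, v0=1.0):
    """g_v(Omega | Z) from the text."""
    r, q = Z.shape
    M = np.eye(r) - (1.0 - v / v0) * Om
    return np.linalg.det(M) ** (-q / 2.0) * np.exp(
        -np.trace(np.linalg.inv(M) @ Om @ Z @ Z.T) / (2.0 * v0))

def mc_expectation(Z, v, a, b, n_draws=1000, seed=0):
    """Monte Carlo estimate of E^Omega[g_v(Omega | Z)], Omega ~ Bc(a+q, b)."""
    r, q = Z.shape
    rng = np.random.default_rng(seed)
    vals = [g(sample_matrix_beta(a + q + r - 1, b + r - 1, r, rng), Z, v)
            for _ in range(n_draws)]
    return float(np.mean(vals))
```

For the proper Bayes case $(a,b)=(1,3)$ with $r=2$, $q=15$, the degrees of freedom are $17$ and $4$, both integral, so the construction applies directly.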
$ \begin{array}{ccccccccc} \hline (v_x, v_y) & {\rm Eigenvalues}&{\rm Minimax}&\multicolumn{6}{c}{(a,b)}\\ \cline{4-9} & {\rm of}\ \Th\Th^\top &{\rm risk}&(-11,3)&(-11,9)&(-11,15)&(-5,3)&(-5,9)&(1,3) \\ \hline (0.1,1) &(\ 0,\ 0)&1.42& 0.47 & 0.96 & 1.20 & 0.38 & 0.80 & 0.33 \\ &(24,\ 0) & & 0.91 & 1.17 & 1.30 & 0.87 & 1.09 & 0.84 \\ &(24,24) & & 1.39 & 1.39 & 1.40 & 1.37 & 1.37 & 1.39 \\ [6pt] (1, 1) &(\ 0,\ 0)&10.4& 5.3 & 6.9 & 7.7 & 2.8 & 4.6 & 1.6 \\ &(24,\ 0) & & 6.9 & 8.0 & 8.4 & 5.3 & 6.3 & 4.6 \\ &(24,24) & & 8.7 & 9.0 & 9.2 & 7.9 & 8.0 & 8.4 \\ [6pt] (1,0.1)&(\ 0,\ 0)&36.0& 15.2 & 24.0 & 28.2 & 9.6 & 17.7 & 6.8 \\ &(24,\ 0) & & 23.6 & 28.5 & 30.9 & 20.1 & 24.5 & 18.7 \\ &(24,24) & & 32.7 & 33.2 & 33.5 & 31.0 & 31.4 & 32.4 \\ \hline \end{array} $ The simulated results for risk of $\ph_{GB}(Y|X)$ are given in Table \[tab:1\]. When the pair of eigenvalues of $\Th\Th^\top$ is $(0,\, 0)$, our simulations suggest that the risk of $\ph_{GB}(Y|X)$ decreases as $a$ increases under which $b$ is fixed or under which $a+b$ is fixed and also that the risk of $\ph_{GB}(Y|X)$ increases as $b$ increases under which $a$ is fixed. It is observed that $\ph_{GB}(Y|X)$ with $(a, b)=(1, 3)$ is superior to others. When the pair of eigenvalues of $\Th\Th^\top$ is $(24,\, 24)$, $\ph_{GB}(Y|X)$ with $(a, b)=(-5, 3)$ or $(-5, 9)$ is best, but the improvement over $\ph_U(Y|X)$ is little. When the pair of eigenvalues of $\Th\Th^\top$ is $(24,\, 0)$, $\ph_{GB}(Y|X)$ with $(a, b)=(1, 3)$ is best. Next, we investigate the risk of Bayesian predictive densities based on superharmonic priors when $r=2$ and $q=15$. 
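Each superharmonic-prior density is evaluated in these simulations as $\ph_U(Y|X)$ multiplied by a ratio of Monte Carlo averages of the prior under $\Nc_{r\times q}(W, v_w I_r\otimes I_q)$ and $\Nc_{r\times q}(X, v_x I_r\otimes I_q)$. A minimal numpy sketch of that reweighting factor follows; the trace-power prior shown is purely illustrative (its exponent is arbitrary, not the $\be^{JS}$ of the text):

```python
import numpy as np

def ratio_factor(pi, X, W, vx, vw, n=10000, seed=0):
    """Monte Carlo estimate of E^{Theta|W}[pi(Theta)] / E^{Theta|X}[pi(Theta)],
    the factor multiplying the best invariant density phi_U(Y|X)."""
    rng = np.random.default_rng(seed)
    r, q = X.shape
    num = np.mean([pi(W + np.sqrt(vw) * rng.standard_normal((r, q)))
                   for _ in range(n)])
    den = np.mean([pi(X + np.sqrt(vx) * rng.standard_normal((r, q)))
                   for _ in range(n)])
    return num / den

# An illustrative trace-power shrinkage prior; the exponent 2.0 is arbitrary,
# not the beta^JS of the text.
def pi_power(Th, beta=2.0):
    return np.trace(Th @ Th.T) ** (-beta / 2.0)
```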
If $\pi_s(\Th)$ is a superharmonic prior, then the Bayesian predictive density (\[eqn:BPD\]) can be expressed as $$\ph_{\pi_s}(Y|X)=\frac{\Er^{\Th|W}[\pi_s(\Th)]}{\Er^{\Th|X}[\pi_s(\Th)]}\ph_U(Y|X),$$ where $\Er^{\Th|W}$ and $\Er^{\Th|X}$ stand, respectively, for expectations with respect to $\Th|W\sim\Nc_{r\times q}(W, v_w I_r\otimes I_q)$ and $\Th|X \sim\Nc_{r\times q}(X, v_x I_r\otimes I_q)$. In our simulations, $\ph_{\pi_s}(Y|X)$ was estimated by means of $$\ph_{\pi_s}(Y|X)\approx \frac{\sum_{i=1}^{i_0}\pi_s(\Th_i)}{\sum_{j=1}^{i_0}\pi_s(\Th_j)}\ph_U(Y|X),$$ where $i_0=100,000$ and the $\Th_i$ and the $\Th_j$ are, respectively, independent replications from $\Nc_{r\times q}(W, v_w I_r\otimes I_q)$ and $\Nc_{r\times q}(X, v_x I_r\otimes I_q)$.
$ \begin{array}{cccc@{\hspace{20pt}}ccccc}
\hline
v_x & v_y & {\rm Eigenvalues} &{\rm Minimax}& {\rm GB} & {\rm JS} & {\rm EM} & {\rm MS1} & {\rm MS2} \\
&& {\rm of}\ \Th\Th^\top &{\rm risk}&&&&&\\
\hline
0.1& 1 &(\ 0,\ 0)&1.42& 0.33 & 0.09 & 0.28 & 0.71 & 0.09 \\
& &(24,\ 0) & & 0.84 & 1.30 & 0.83 & 1.11 & 0.82 \\
& &(24,\ 4) & & 1.27 & 1.31 & 1.27 & 1.30 & 1.26 \\
& &(24,\ 8) & & 1.32 & 1.33 & 1.33 & 1.34 & 1.32 \\
& &(24, 12) & & 1.35 & 1.34 & 1.35 & 1.36 & 1.35 \\
& &(24, 24) & & 1.39 & 1.36 & 1.37 & 1.38 & 1.37 \\ [6pt]
1 & 1 &(\ 0,\ 0)&10.4& 1.6 & 0.7 & 2.1 & 5.2 & 0.7 \\
& &(24,\ 0) & & 4.6 & 5.5 & 4.9 & 7.0 & 4.4 \\
& &(24,\ 4) & & 5.7 & 5.9 & 5.9 & 7.4 & 5.4 \\
& &(24,\ 8) & & 6.6 & 6.3 & 6.6 & 7.7 & 6.2 \\
& &(24, 12) & & 7.2 & 6.6 & 7.1 & 8.0 & 6.7 \\
& &(24, 24) & & 8.4 & 7.3 & 7.9 & 8.4 & 7.6 \\ [6pt]
1 &0.1&(\ 0,\ 0)&36.0& 6.8 & 2.4 & 7.2 & 18.0 & 2.4 \\
& &(24,\ 0) & & 18.7 & 25.8 & 18.9 & 26.1 & 18.2 \\
& &(24,\ 4) & & 25.5 & 26.8 & 25.7 & 28.9 & 25.0 \\
& &(24,\ 8) & & 28.1 & 27.7 & 28.2 & 30.2 & 27.6 \\
& &(24, 12) & & 29.7 & 28.4 & 29.5 & 30.9 & 28.9 \\
& &(24, 24) & & 32.4 & 30.0 & 31.3 & 32.0 & 30.8 \\
\hline
\end{array} $
The risk is based on 100,000 independent replications of $X$ and $Y$ for some pairs of two
eigenvalues of $\Th\Th^\top$. The simulation results are provided in Table \[tab:2\], where GB, JS, EM, MS1 and MS2 are the Bayesian predictive densities with the following priors: GB, (\[eqn:pr\_Th\]) and (\[eqn:pr\_GB\]) with $a=1$, $b=3$ and $v_0=1$; JS, (\[eqn:pr\_js\]); EM, (\[eqn:pr\_em\]); MS1, (\[eqn:pr\_MS1\]); MS2, (\[eqn:pr\_MS2\]). Note that GB, JS, EM and MS1 are minimax, while MS2 has not been shown to be minimax. When the pair of eigenvalues of $\Th\Th^\top$ is $(0,\, 0)$, JS and MS2 are superior. When the pair of eigenvalues of $\Th\Th^\top$ is $(24,\, 24)$, JS has nice performance but it is bad if the two eigenvalues of $\Th\Th^\top$ are much different. Our simulations suggest that MS2 is better than EM and MS1. When the two eigenvalues of $\Th\Th^\top$ are much different, namely they are $(24,\, 0)$ and $(24,\, 4)$, MS2 is best and GB or EM is second-best. [**Acknowledgments.**]{}   The research of the first author was supported by Grant-in-Aid for Scientific Research (15K00055) from Japan Society for the Promotion of Science (JSPS). The research of the second author was supported in part by Grant-in-Aid for Scientific Research (15H01943 and 26330036) from JSPS. Aitchison, J. (1975). Goodness of prediction fit, [*Biometrika*]{}, [**62**]{}, 547–554. Brown, L.D. (1971). Admissible estimators, recurrent diffusions, and insoluble boundary value problems, [*Ann. Math. Statist.*]{}, [**42**]{}, 855–903. Brown, L.D., George, E.I. and Xu, X. (2008). Admissible predictive density estimation, [*Ann. Statist.*]{}, [**36**]{}, 1156–1170. Efron, B. and Morris, C. (1972). Empirical Bayes on vector observations: An extension of Stein’s method, [*Biometrika*]{}, [**59**]{}, 335–347. Faith, R.E. (1978). Minimax Bayes estimators of a multivariate normal mean, [*J. Multivariate Anal.*]{}, [**8**]{}, 372–379. George, E.I., Liang, F. and Xu, X. (2006). Improved minimax predictive densities under Kullback-Leibler loss, [*Ann. Statist.*]{}, [**34**]{}, 78–91.
George, E.I., Liang, F. and Xu, X. (2012). From minimax shrinkage estimation to minimax shrinkage prediction, [*Statist. Sci.*]{}, [**27**]{}, 82–94. Gupta, A.K. and Nagar, D.K. (1999). [*Matrix Variate Distributions*]{}, Chapman & Hall/CRC, New York. Haff, L.R. (1982). Identities for the inverse Wishart distribution with computational results in linear and quadratic discrimination. [*Sankhyā*]{}, [**44**]{}, Series B, 245–258. James, W. and Stein, C. (1961). Estimation with quadratic loss, In [*Proc. Fourth Berkeley Symp. Math. Statist. Probab.*]{}, [**1**]{}, 361–379, Univ. of California Press, Berkeley. Komaki, F. (2001). A shrinkage predictive distribution for multivariate normal observables, [*Biometrika*]{}, [**88**]{}, 859–864. Konno, Y. (1988). Exact moments of the multivariate F and beta distributions, [*J. Japan Statist. Soc.*]{}, [**18**]{}, 123–130. Konno, Y. (1992). Improved estimation of matrix of normal mean and eigenvalues in the multivariate $F$-distribution, Doctoral dissertation, Institute of Mathematics, University of Tsukuba. This is downloadable from his website (http://mcm-www.jwu.ac.jp/\~konno/). Magnus, J.R. and Neudecker, H. (1999). [*Matrix differential calculus with applications in statistics and econometrics, 2nd ed.*]{}, Wiley, New York. Maruyama, Y. (1998). A unified and broadened class of admissible minimax estimators of a multivariate normal mean [*J. Multivariate Anal.*]{}, [**64**]{}, 196–205. Matsuda, T. and Komaki, F. (2015). Singular value shrinkage priors for Bayesian prediction, [*Biometrika*]{}, [**102**]{}, 843–854. Muirhead, R.J. (1982). [*Aspects of Multivariate Statistical Theory*]{}, John Wiley & Sons, New York. Stein, C. (1973). Estimation of the mean of a multivariate normal distribution, In: [*Proc. Prague Symp. Asymptotic Statist.*]{}, 345-381. Stein, C. (1981). Estimation of the mean of a multivariate normal distribution, [*Ann. Statist.*]{}, [**9**]{}, 1135–1151. Tsukuma, H. (2008). 
Admissibility and minimaxity of Bayes estimators for a normal mean matrix, [*J. Multivariate Anal.*]{}, [**99**]{}, 2251–2264. [^1]: Faculty of Medicine, Toho University, 5-21-16 Omori-nishi, Ota-ku, Tokyo 143-8540, Japan, E-Mail: tsukuma@med.toho-u.ac.jp [^2]: Faculty of Economics, University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-0033, Japan, E-Mail: tatsuya@e.u-tokyo.ac.jp
--- abstract: | We present sensitive (T$_R^*\ \approx\ $0.1K), large-scale (47$^{\prime}$ $\times$ 7$^{\prime}$–corresponding to 4 pc $\times$ 0.6 pc at the source) maps of the CO J=1$\rightarrow$0 emission of the L1448 dark cloud at 55$^{\prime\prime}$ resolution. These maps were acquired using the On-The-Fly (OTF) capability of the NRAO 12-meter telescope atop Kitt Peak in Arizona. CO outflow activity is seen in L1448 on parsec-scales for the first time. Careful comparison of the spatial and velocity distribution of our high-velocity CO maps with previously published optical and near-infrared images and spectra has led to the identification of six distinct CO outflows. Three of these are powered by the Class 0 protostars, L1448C, L1448N(A), and L1448N(B). L1448 IRS 2 is the source of two more outflows, one of which is newly identified from our maps. The sixth newly discovered outflow is powered by an as yet unidentified source outside of our map boundaries. We show the direct link between the heretofore unknown, giant, highly-collimated, protostellar molecular outflows and their previously discovered, distant optical manifestations. The outflows traced by our CO mapping generally reach the projected cloud boundaries. Integrated intensity maps over narrow velocity intervals indicate there is significant overlap of blue- and redshifted gas, suggesting the outflows are highly inclined with respect to the line-of-sight, although the individual outflow position angles are significantly different. The velocity channel maps also show that the outflows dominate the CO line cores as well as the high-velocity wings. 
The magnitude of the combined flow momenta, as well as the combined kinetic energy of the flows, are sufficient to disperse the 50 M$_{\odot}$ NH$_3$ cores in which the protostars are currently forming, although some question remains as to the exact processes involved in redirecting the directionality of the outflow momenta to effect the complete dispersal of the parent cloud. author: - 'Grace A. Wolf-Chase, Mary Barsony, and JoAnn O’Linger' title: Giant Molecular Outflows Powered by Protostars in L1448 --- Introduction ============ It has long been an open question whether young stars could be the agents of dispersal of their parent molecular clouds through the combined effects of their outflows [@nor79; @ber96]. The answer to this question depends on whether the outflows have the requisite kinetic energy to overcome the gravitational binding energy of the cloud, as well as the efficiency with which outflows can transfer momentum, in both magnitude and direction, to the surrounding cloud. For considerations of molecular cloud dispersal, addressing the question of the adequacy of outflow momenta has historically lagged behind determinations of outflow energetics. This is because evaluation of the available energy sources needed to account for the observed spectral linewidths in a cloud is adequate for quantitative estimates of outflow energies. However, in order to address whether the requisite momentum for cloud dispersal exists in a given case requires well-sampled, sensitive, large-scale mapping of sufficiently large areas to encompass entire molecular clouds. Such observing capability has been beyond reach until the last few years, with the implementation of “rapid” or “On-The-Fly” mapping capabilities at large-aperture millimeter telescopes. 
The fact that many outflows powered by young stellar objects actually extend well beyond their parent molecular cloud boundaries has been recognized only recently, with the advent of large-scale, narrowband optical imaging surveys that have revealed shock-excited Herbig-Haro objects at parsec-scale separations from their exciting sources [@ba96a; @ba96b; @bal97; @dev97; @eis97; @wil97; @gom97; @gom98; @rei98] and from equally large-area, sensitive, millimeter line maps that show parsec-scale molecular outflows [@den95; @lad96; @ben96; @ben98; @oli99]. The millimeter line maps of parsec-scale flows have been almost exclusively confined to instances of single, well-isolated cases, due to the tremendous confusion of multiple outflows in regions of clustered star formation, such as are found in NGC 1333 (Sandell & Knee 1998; Knee & Sandell 2000), $\rho$ Oph, Serpens [@whi95], or Circinus [@bal99]. The L1448 dark cloud, with a mass of 100 M$_{\odot}$ over its $\sim$ 1.3 pc $\times$ 0.7 pc extent as traced by C$^{18}$O emission, [@ba86a], is part of the much more extensive (10 pc $\times$ 33 pc) Perseus molecular cloud complex, which contains $\approx$ 1.7 $\times$ 10$^4$ M$_{\odot}$, at a distance of 300 pc [@ba86b]. The two dense ammonia cores within L1448 contain 50 M$_{\odot}$ distributed over a 1 pc $\times$ 0.5 pc area [@ba86a; @ang89]. The core at V$_{LSR}$ = 4.2 km s$^{-1}$ contains the Class 0 protostar L1448 IRS 2, while the other core, at V$_{LSR}$ = 4.7 km s$^{-1}$, harbors four Class 0 protostars: L1448C, L1448N(A), L1448N(B), and L1448NW. [@bar98; @oli99; @eis00]. The Class I source, L1448 IRS 1, lies close to the western boundary of the cloud, just outside the lowest NH$_3$ contours in the maps of [@ba86b]. 
High-velocity molecular gas in L1448 was discovered a decade ago via CO J$=$2$\rightarrow$1 and CO J$=$1$\rightarrow$0 mapping of a $\sim$ 2$^{\prime}$ $\times$ 6$^{\prime}$ area centered on L1448C, acquired with 12$^{\prime\prime}$ and 20$^{\prime\prime}$ angular resolutions, respectively [@bac90]. Due to its brightness, high-velocity extent ($\pm$ 70 km s$^{-1}$), and symmetrically spaced CO bullets, the L1448C molecular outflow has been the object of much study, unlike the flows from its neighbors, the 7$^{\prime\prime}$ (in projected separation) protobinary, L1448N(A) & (B), just 1.2$^{\prime}$ to the north, or L1448 IRS 2, 3.7$^{\prime}$ to the northwest (e.g., [@cur90; @gui92; @bal93; @bac94; @dav94; @bac95; @dut97]). Although outflow activity in the vicinity of the protobinary had been reported previously, the H$_2$ and CO flows, driven by L1448N(A) and L1448N(B), respectively [@bac90; @dav95], were not recognized as distinct until recently [@bar98]. Identification of these flows was aided by noting the position angle of the low-excitation H$_2$ flow, centered on L1448N(A), to be distinct from the position angle of the CO flow from L1448N(B), defined by the direction of the line joining L1448N(B) with the newly discovered Herbig-Haro object, HH 196 [@bal97]. Recent, wide-angle ($\sim$ 70$^{\prime}$ field-of-view), narrowband optical imaging of the entire extent of the L1448 cloud has resulted in the discovery of several systems of Herbig-Haro objects, some displaced several parsecs from any exciting source [@bal97]. In order to investigate the link between high-velocity molecular gas and the newly discovered Herbig-Haro objects, as well as to study the possibility of cloud dispersal via outflows, we acquired new, sensitive, large-scale CO J$=$1$\rightarrow$0 maps of a substantial portion of the L1448 cloud. 
These new molecular line maps were acquired with the On-The-Fly (OTF) mapping technique as implemented at NRAO’s 12-meter millimeter telescope atop Kitt Peak, Arizona. Observations and Data Reduction =============================== The CO J$=$1$\rightarrow$0 maps of L1448 presented in this paper were acquired using the spectral-line On-The-Fly (OTF) mapping mode of the NRAO’s[^1] 12-meter telescope on 23 June 1997, UT 13$^h$53$^m$ $-$ UT 19$^h$25$^m$. We stress that the OTF technique allows the acquisition of large-area, high-sensitivity, spectral line maps with unprecedented speed and pointing accuracy. For comparison, it would have taken eight times the amount of telescope time, or nearly a week in practice, to acquire this same map using conventional, point-by-point mapping. Although OTF mapping is not a new concept, given the rigor of the position encoding that allows precise and accurate gridding of the data, the fast data recording rates that allow rapid scanning without beam smearing, and the analysis tools that are available, the 12-meter implementation is the most ambitious effort at OTF imaging yet. To produce our CO maps of L1448, we observed a 47$^{\prime}$ $\times$ 7$^{\prime}$ field along a position angle P.A. $=$ 135$^{\circ}$, (measured East from North), centered on the coordinates of L1448 IRS 2 ($\alpha_{1950}=$ 3$^h$ 22$^m$ 17.9$^s$, $\delta_{1950}=$ 30$^{\circ}$ 34$^{\prime}$ 41$^{\prime\prime}$). The 115 GHz beamwidth was $\approx$ 55$^{\prime\prime}$. We scanned a total of 33 rows at a rate of 50$^{\prime\prime}$/s, along P.A. $=$ 135$^{\circ}$, with a row spacing of 12.7$^{\prime\prime}$. (The row spacing is determined by the optimum spatial sampling and by the scanning position angle.) We calibrated and integrated on an absolute off position ($\alpha_{1950}=$ 3$^h$ 20$^m$ 00.0$^s$, $\delta_{1950}=$ 31$^{\circ}$ 00$^{\prime}$ 00$^{\prime\prime}$) at the start of every row. Each row took approximately one minute to scan. 
Each map coverage took 41 $-$ 56 minutes to complete. We performed six map coverages to attain an RMS in each OTF spectrum of T$_R^*$ $\approx$ 0.11 K. A dual-channel, single-sideband SIS receiver was used for all observations. The backend consisted of 250 kHz and 500 kHz resolution filterbanks, yielding velocity resolutions of 0.65 km s$^{-1}$ and 1.3 km s$^{-1}$, respectively. The filterbanks were used in parallel mode, each of the two receiver polarization channels using 256 filterbank channels. The polarization channels were subsequently averaged together to improve signal-to-noise. Only the 250 kHz resolution data were used to produce the maps presented here. Line temperatures at the 12-meter are on the T$_R^*$ scale, and must be divided by the corrected main beam efficiency, $\eta_m^*$, to convert to the main-beam brightness temperature scale. For our very extended source, $\eta_m^*$ is approximately 1.0. Since the corrected main-beam efficiency is the fraction of the forward power in the main diffraction beam relative to the total forward power in the main beam plus error beam, contribution from the error beam can make $\eta_m^*$ $>$ 1.0. At 115 GHz, the theoretical error beam width is $\approx$ 17$^{\prime}$, but the ratio of the error beam amplitude to the main beam amplitude is only 6 $\times$ 10$^{-4}$, suggesting contribution from the error beam can be ignored. We used the NRAO standard source, B5 ($\alpha_{1950}=$ 3$^h$ 44$^m$ 52.4$^s$, $\delta_{1950}=$ 32$^{\circ}$ 44$^{\prime}$ 28$^{\prime\prime}$), to check absolute line temperatures. The OTF data were reduced with the Astronomical Image Processing Software (AIPS), Version 15JUL95. 
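As a quick consistency check, the quoted velocity resolutions follow from the filterbank channel widths via $\Delta v = c\,\Delta\nu/\nu$ at the CO J$=$1$\rightarrow$0 rest frequency of 115.2712 GHz:

```python
c_kms = 299792.458                # speed of light, km/s
nu = 115.2712e9                   # CO J=1->0 rest frequency, Hz
for dnu in (250e3, 500e3):        # filterbank channel widths, Hz
    print(f"{dnu / 1e3:.0f} kHz -> {c_kms * dnu / nu:.2f} km/s")
```

This prints 0.65 and 1.30 km s$^{-1}$, the two velocity resolutions quoted above.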
AIPS tasks specific to OTF data are ‘OTFUV’, which converts a single 12-meter OTF map (in UniPOPS SDD format) to UV (single-dish) format, and ‘SDGRD’, which selects random position single-dish data in AIPS UV format in a specified field of view about a specified position and projects the coordinates onto the specified coordinate system. The data are then convolved onto a grid. OTF data maps were first combined, then gridded into a data cube and baseline-subtracted. Channel maps as well as individual spectra were inspected to ensure good baseline removal and to check for scanning artifacts. Only the first and last rows scanned contained corrupted spectra and were rejected. Results ======= The Extent of High-Velocity CO in L1448 --------------------------------------- Figure 1a shows the extent of high-velocity blue- and redshifted CO J$=$1$\rightarrow$0 emission found within our OTF map boundaries, outlined by the zig-zag lines. The red contours represent high-velocity redshifted emission integrated over the velocity interval 8.1 km s$^{-1}$ $\le$ V$_{LSR}$ $\le$ 17.8 km s$^{-1}$, whereas the blue contours represent high-velocity blueshifted emission integrated over $-$12.1 km s$^{-1}$ $\le$ V$_{LSR}$ $\le$ $-$1 km s$^{-1}$. The rest velocities of the two NH$_3$ cores are 4.2 km s$^{-1}$ and 4.7 km s$^{-1}$. The mapped area comprises 47$^{\prime}$ $\times$ 7$^{\prime}$, corresponding to 4 pc $\times$ 0.6 pc at the source, at a distance of 300 pc. Stars indicate the positions of the known Class 0 sources in L1448, which include L1448C, L1448N (the unresolved, 7$^{\prime\prime}$ separation protobinary L1448N(A) & L1448N(B)), L1448NW (20$^{\prime\prime}$ northwest of the protobinary), and L1448 IRS 2, at map center. A filled square indicates the position of the Class I Young Stellar Object (YSO), L1448 IRS 1 (Cohen 1980; Eislöffel 2000). 
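The linear dimensions quoted for the mapped field follow from the small-angle relation between angular size and the adopted 300 pc distance; a short sketch:

```python
import math

def arcmin_to_pc(theta_arcmin, d_pc):
    """Linear size subtended by a small angle theta at distance d (small-angle approx.)."""
    return d_pc * math.radians(theta_arcmin / 60.0)

d = 300.0                                       # adopted distance to L1448, pc
print(arcmin_to_pc(47, d), arcmin_to_pc(7, d))  # map extent: ~4.1 x 0.61 pc
print(arcmin_to_pc(55 / 60.0, d))               # 55" beam: ~0.08 pc
```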
Figure 1b, the inset to Figure 1a, shows the scale of the previously mapped region of high-velocity CO, for comparison (from [@bac90]). This earlier map includes only a small portion of the high-velocity gas associated with Class 0 protostars in L1448. Comparison of Figures 1a and 1b highlights the truly spectacular spatial extent of outflow activity in L1448. Figure 2 indicates the positions of all the known Herbig-Haro objects (crosses) which are found within our map boundaries, superimposed on the CO map of Figure 1a. Several striking features are evident in Figure 2: (1) There is a $\sim$15$^{\prime}$ long blueshifted filament that is connected to both the L1448N/L1448C region and to L1448 IRS 2 in a wishbone-shaped structure which contains HH 197, HH 195A$-$D, HH 196, and culminates in HH 193; (2) There is extended redshifted outflow emission which suggests structure along three separate axes (at P.A.’s$\sim$129$^{\circ}$, 150$^{\circ}$, & 180$^{\circ}$) directly to the southeast of L1448N/L1448C; (3) In the immediate vicinity of IRS 2, the blueshifted gas peaks on HH 195E, whereas the center of the red peak is along a line drawn through IRS 2 and HH 195A$-$D; (4) There is redshifted emission that peaks $\sim$9$^{\prime}$ to the southeast of IRS 2, which lies on a line connecting IRS 2 and HH 193 at P.A.$\sim$152$^{\circ}$; (5) There is redshifted emission that peaks on HH 277, which appears to be oriented nearly perpendicular to the long axis of our map; (6) There is blueshifted CO emission associated with the HH 267 knots, which lie at the northwestern edge of our map. In addition to the OTF map, we also acquired single-point spectra of the CO J$=$1$\rightarrow$0 and $^{13}$CO J$=$1$\rightarrow$0 transitions at four positions, as indicated in Figure 3. These spectra were used to help determine the appropriate velocity integration limits for the high-velocity CO emission shown in Figures 1a and 2. 
The mapped linewing emission velocities are indicated by the horizontal arrows for the blue-shifted emission in Figures 3a & 3c, and for the red-shifted emission in Figures 3b & 3d. These spectra were also used to determine velocity-intervals free of line emission for baseline-subtraction of the gridded OTF data, and to check CO optical depths in the line wings. The spectra shown in Figure 3a were obtained just off the northwest corner of our map, near the HH 267 knots, whereas the spectra of Figures 3b, c, & d were obtained at positions of strong outflow emission within our map boundary. The spectra in Figure 3a show a separate velocity feature at 0 km s$^{-1}$ in both CO and $^{13}$CO. The integration limits for the high-velocity gas were chosen conservatively, avoiding emission from the velocity feature at V$_{LSR}=$0 km s$^{-1}$. It is interesting to note that this velocity feature corresponds to the V$_{LSR}$ of three HH objects in this cloud: HH 196A, HH 196B, & HH 267B.

Background: Individual Outflow Sources
--------------------------------------

### The L1448C/L1448N Core

The CO outflow from L1448C has been studied extensively since its discovery, at which time it was recognized as a unique source due to its high collimation factor (approaching 10:1) and its extremely high velocities ($\pm$70 km s$^{-1}$ $-$ [@bac90; @bac95]). Interferometric observations of the outflow, acquired with a 3$^{\prime\prime}$ $\times$ 2.5$^{\prime\prime}$ synthesized beam, were required to resolve the limb-brightened CO cavities at “low” velocities ($-$12 $\le$ V$_{LSR}$ $\le$ $+$16 km s$^{-1}$) [@bac95; @dut97]. By modelling the interferometric CO channel maps, the outflow inclination angle was found to be $i\ =\ 70^{\circ}$, implying actual jet velocities in excess of 200 km s$^{-1}$. The initial conical outflow opening half-angle was found to be $\phi$/2 $=$ 22.5$^{\circ}$ [@bac95].
The outflow cavity walls become parallel, however, $\approx$ 1$^{\prime}$ ($=$ 0.08 pc) downstream from the driving source, with a width of $\sim$ 20$^{\prime\prime}$. Therefore, in our OTF map, the L1448C outflow remains unresolved along its width. The redshifted lobe of the L1448C outflow is deflected by $\sim$ 20$^{\circ}$ from its initial direction, from an initial position angle of $+$160$^{\circ}$ to a final position angle of $+$180$^{\circ}$ [@dut97; @eis00]. This change in direction of the redshifted flow axis occurs abruptly near the position of the CO “bullet” known as R3 [@bac90; @dut97]. A string of H$_2$ emission knots along P.A. $= 180^{\circ}$ extends for nearly two arcminutes, beginning about an arcminute southeast of L1448C (Eislöffel 2000). Similarly, the blueshifted CO outflow lobe powered by L1448C starts out at a position angle P.A. $=$ $+$159$^{\circ}$ [@bac95], before being deflected through a total angle of $\sim$ 32$^{\circ}$ by the time it arrives an arcminute downstream (Davis & Smith 1995; Eislöffel 2000). This deflection is due to the collision of the blueshifted gas driven by L1448C with the ammonia core containing the protobinary L1448N(A) & (B) [@cur99]. The strongly radiative shock emission at this interaction region is evident through various tracers: enhanced CO millimeter line emission at the site of the CO bullet “B3” of Bachiller et al. (1990), a far-infrared continuum emission peak [@bar98], high-excitation, shocked molecular hydrogen emission [@dav94; @dav95], and the optically visible shock-excited gas of HH 197 [@bal97]. Extrapolating from the vicinity of HH 197 along P.A. $\sim$ 127$^{\circ}$, which is the axis of the blueshifted L1448C CO outflow after its deflection near HH 197 through knots I, R, and S (Davis & Smith 1995; Eislöffel 2000), leads directly to HH 267. 
The measured radial velocities of the HH 267 knots (HH 267A: V$_{LSR} = -50$ km s$^{-1}$; HH 267B: V$_{LSR} = 0$ km s$^{-1}$; HH 267C: V$_{LSR} = -63$ km s$^{-1}$ $-$ Bally et al. 1997) and the velocity extent of the blueshifted CO observed by Bachiller et al. (1990) agree well. The reported terminal velocity for the L1448C flow is 70 km s$^{-1}$ (Bachiller et al. 1990, 1995). These two facts led to the suggestion that HH 267 may be powered by L1448C [@bar98]. L1448N(A) and L1448N(B) form a close (7$^{\prime\prime}$ separation) protobinary (Terebey & Padgett 1997), which is unresolved in our OTF map. The redshifted portion of the L1448N(A) flow was first detected via its associated low-excitation molecular hydrogen emission [@dav95]. Its exciting source was first correctly identified by Barsony et al. (1998). The corresponding blueshifted lobe has been detected only in an extended, conical reflection nebulosity whose peak is estimated to lie $1.^{\prime\prime}5$ west and 6$^{\prime\prime}$ north of L1448N(A) (Bally et al. 1993). The position angle of $\sim 150^{\circ}$ for this flow is determined by the symmetry axis of the U-shaped shocked molecular hydrogen emission (Barsony et al. 1998; see also Eislöffel 2000) and from the axis of the highly-collimated redshifted CO jet driven by L1448N(A) as seen in the V$_{LSR}$ $=$ $+$8 km s$^{-1}$ outflow channel map of Bachiller et al. (1995). Redshifted gas associated with the L1448N(B) flow first appears in the paper reporting the discovery of the L1448C outflow (Bachiller et al. 1990). The corresponding blueshifted outflow lobe from L1448N(B) was partially mapped by Bontemps et al. (1996). Barsony et al. (1998) noted that HH 196, a series of blueshifted optical emission knots (HH 196A: V$_{LSR} = 0$ km s$^{-1}$; HH 196B: V$_{LSR} = 0$ km s$^{-1}$; HH 196C: V$_{LSR} = -35$ km s$^{-1}$; HH 196D: V$_{LSR} = -37$ km s$^{-1}$ $-$ Bally et al. 1997) lie along the L1448N(B) outflow axis. 
The position angle of the L1448N(B) outflow is P.A. $\sim$129$^{\circ}$. About 20$^{\prime\prime}$ northwest of L1448N(A) lies L1448NW. Recent observations suggest that L1448NW drives a small-scale H$_2$ outflow along an east-west direction (Eislöffel 2000). Although we do not detect a CO outflow associated with L1448NW with our spatial resolution, unpublished interferometric observations do indicate the presence of an E-W flow centered on L1448NW (Terebey 1998).

### The L1448 IRS 2 Core

L1448 IRS 2 was confirmed as a Class 0 protostar by O’Linger et al. (1999), who reported a CO outflow associated with this source. The outflow’s symmetry axis along P.A. $\sim$ 133$^{\circ}$ and full opening angle, $\phi\ =$27$^{\circ}$, were initially inferred from (1) the locations of HH 195A$-$D, (2) a fan-shaped reflection nebulosity emanating from IRS 2 in the K$^{\prime}$ images of Hodapp (1994), (3) the positions of CO “bullets”, detected at the 3$\sigma$ level along the outflow axis about 10$^{\prime}$ to the northwest of IRS 2, and (4) the apparent V-shaped morphology of the blueshifted CO emission. A more recent H$_2$ image of this region has led to a refinement of the IRS 2 outflow axis determination to P.A. $\sim$ 138$^{\circ}$ (Eislöffel 2000).

Velocity Maps
-------------

In order to elucidate the velocity structure of the CO emission, we present Figures 4, 5, & 6. All three figures are presented in rotated coordinates, such that the major axis of our map along P.A. $=$ 135$^{\circ}$ now lies horizontally. Figure 4 is meant to be used as a key to identify the HH objects (crosses) and Young Stellar Objects (YSO’s–open stars for Class 0 protostars, filled square for the Class I protostar, L1448 IRS 1) indicated by the same symbols in the CO velocity channel maps of Figures 5 & 6. Our beamsize is indicated in the lower right-hand corner of each panel.
Figure 5 shows the contoured greyscale images of the blueshifted integrated CO intensities for five velocity intervals blueward of the ambient cloud velocity, proceeding top down from the highest velocities in the top panel, to the lowest, cloud-core velocities in the bottom panel. Figure 6 shows the same for the redshifted emission. In order to obtain good signal-to-noise, our highest velocity channel maps have been integrated over a 4 km s$^{-1}$ velocity interval. The other channel maps have been integrated over 2 km s$^{-1}$ intervals. These maps shed more light on the emission features enumerated in §3.1, and uncover additional information that is not obvious from the outflow map of Figure 2. It is immediately apparent from these maps that much of the emission within the line core delineates gas that has been entrained in the outflows. In particular, the $\sim 15^{\prime}$ long blueshifted feature, which extends from IRS 2 to HH 193, is seen to some degree in all of the maps depicting emission blueward of the ambient cloud velocity (Figures 5a$-$e), as well as in the core gas redward of the ambient cloud velocity (Figures 6d & e). Redshifted, as well as blueshifted, CO emission surrounds HH 193. This is not surprising, since the radial velocities and linewidths of HH 193A, B, and C are $-$18 & 40 km s$^{-1}$, 10 & 70 km s$^{-1}$, and $-$10 & 50 km s$^{-1}$, respectively (Bally et al. 1997). This significant overlap of blueshifted and redshifted emission strongly suggests that the blueshifted feature is oriented close to the plane of the sky. The blueshifted feature is part of a longer feature, which is bipolar about IRS 2 at P.A. $\sim$ 152$^{\circ}$. The blueshifted emission is visible as a well-defined structure from V$_{LSR} = -8$ to 0 km s$^{-1}$. Similarly, redshifted emission is visible as a well-defined structure from V$_{LSR} = 8$ to 16 km s$^{-1}$.
At the highest velocities (Figures 5a & 6a), the blue- and redshifted emission is highly collimated along P.A. $\sim 152^{\circ}$, centered on L1448 IRS 2. However, the blueshifted emission intersects a second blueshifted feature emanating from the L1448N/L1448C region along P.A. $\sim$ 129$^{\circ}$ (Figure 5a). The intersection occurs $\sim$2$^{\prime}$ downstream of the HH 196 knots. A third blueshifted feature branches off to the west $\sim$3$^{\prime}$ downstream from the HH 196 knots (Figure 5b). The redshifted emission along P.A. $\sim$ 152$^{\circ}$ from IRS 2 extends $\sim 9^{\prime}$ southeast of IRS 2 (Figures 6a$-$c), where it shows a prominent peak in Figure 6b, as well as in the CO total integrated intensity outflow map in Figure 2. The blue- and redshifted peaks adjacent to L1448 IRS 2 are most prominent in the lowest-velocity outflow emission (Figures 5c & 6c). The blueshifted peak is spatially coincident with HH 195E. At higher velocities (Figures 5a & b), the blueshifted emission extends along a P.A. $\sim$ 125$^{\circ}$ from IRS 2, past IRS 1, and may continue past HH 194, which shows a local peak in blueshifted emission (Figures 5b & c). Curiously, the HH 194 knots are all [*redshifted*]{}, with very large linewidths (HH 194A: V$_{LSR} =$66 km s$^{-1}$, $\Delta$V$=$110 km s$^{-1}$; HH 194B: V$_{LSR} =$66 km s$^{-1}$, $\Delta$V$=$150 km s$^{-1}$; HH 194C: V$_{LSR} =$110 km s$^{-1}$, $\Delta$V$=$60 km s$^{-1}$). HH 194C has been associated with redshifted outflow emission from IRS 1 (Bally et al. 1997). Although there is a prominent local peak in the blueshifted CO emission at the position of HH 194, there is no evidence of redshifted CO emission (Figures 6a$-$c) at this position. The extended redshifted emission directly to the southeast of L1448N/L1448C is also clearly present at core velocities, very strongly so in Figure 6d, and even in Figure 5e. 
The previously suggested structure along three separate axes is also seen in Figures 6a$-$c; most strikingly in Figure 6b. These three axes, along P.A. $\sim$ 180$^{\circ}$, 150$^{\circ}$, & 129$^{\circ}$, correspond to the P.A.’s of the redshifted lobes of the outflows associated with L1448C, L1448N(A), & L1448N(B), respectively. The redshifted feature that peaks on HH 277 is most prominent as a separate velocity feature in Figure 6c. In the next section, we consider these features in connection with what is known about the individual outflows in order to interpret the outflow morphology in L1448.

DISCUSSION
==========

Interpretation of Outflow Structure
-----------------------------------

Figures 7$-$9 present the various outflow extents and position angles. Figure 7 is relevant for the discussion of alternative interpretations of the outflow emission centered on the L1448 IRS 2 ammonia core. Figure 8 is used for the discussion of the outflows originating from sources embedded in the ammonia core associated with L1448C and L1448N. For both Figures 7 and 8, the outflow axes and extents are superposed on our integrated high-velocity CO linewing map of the L1448 cloud. Figure 9 shows the same outflow axes superposed on a much higher spatial-resolution ($\sim$ 1$^{\prime\prime}$ vs. $\sim$ 55$^{\prime\prime}$) H$_2$ image of the L1448 region from Eislöffel (2000). (This is the only figure using J2000 coordinates.) In all cases (except for the two newly-identified outflow features seen in our CO maps associated with L1448 IRS 2 and the unidentified source outside our map boundaries), outflow position angles were determined from previously-published, arcsecond-scale outflow data. For all of the outflows, we find good agreement between large-scale CO features seen in our maps and flow axes that have been determined from the previous, higher-resolution observations.
In Figures 7$-$9, [*solid*]{} colored lines denote well-established flow position angles and extents, derived from our own CO data and the published literature, whereas [*dashed*]{} lines denote outflow position angles and extents that are consistent with our new CO data.

### Outflow Emission from the L1448 IRS 2 Ammonia Core

High-velocity CO outflow activity centered on L1448 IRS 2 was discovered from low-spatial resolution ($\approx$ 55$^{\prime\prime}$) mapping (O’Linger et al. 1999). These authors suggested that L1448 IRS 2 was the source of a single outflow, with a constant opening angle, as depicted in Figure 7a. The previous outflow symmetry axis along P.A. $\sim$ 133$^{\circ}$ and opening angle, $\phi \sim 27^{\circ}$, were derived from the high spatial resolution K$^{\prime}$ images of Hodapp (1994). We derive a new outflow symmetry axis along P.A. $\sim$ 138$^{\circ}$ from the more recent H$_2$ images of Eislöffel (2000). Therefore, we have been able to determine more accurately the position angles for the proposed outflow cavity walls, which should lie along P.A. $\sim$ 152$^{\circ}$ and P.A. $\sim$ 125$^{\circ}$, if the IRS 2 outflow retains its initial opening angle out to large distances. This model explains the presence of high-velocity blue- and redshifted CO emission seen along these position angles at large distances from IRS 2, notably the V-shaped morphology of the blueshifted gas and the presence of several CO “bullets” located along the proposed outflow axis, well beyond HH 195A$-$D. In a constant opening angle outflow scenario, HH 193 lies along one arm of this V, at P.A. $\sim$ 152$^{\circ}$. The blueshifted gas along the other arm of the V (P.A. $\sim$ 125$^{\circ}$) would be confused with emission from the E-W outflow in this vicinity, but since the blueshifted emission extends [*past*]{} IRS 1 towards the [*redshifted*]{} HH 194 knots, at least part of this emission could be due to IRS 2.
Although there is redshifted CO emission along P.A. $\sim$ 138$^{\circ}$ and along P.A. $\sim$ 152$^{\circ}$, there is little evidence for redshifted emission along P.A. $\sim$ 125$^{\circ}$. However, this may be due to confusion with the three redshifted lobes associated with L1448C, L1448N(A), & L1448N(B). Figure 9 clearly shows an outflow associated with IRS 2 along a P.A. $\sim$ 138$^{\circ}$. The redshifted gas along the outflow axis is prominent in the H$_2$ emission, but is not clearly apparent in our CO maps beyond the redshifted peak about 1$^{\prime}$ southeast of IRS 2. Hints of more extended emission along P.A. $\sim$ 138$^{\circ}$ may be seen, however, in Figures 6b & c, and curiously, in [*blueshifted*]{} emission extended along this axis to the southeast of IRS 2 in Figure 5d. Such overlap of blueshifted gas along the redshifted outflow axis is expected for outflows oriented nearly in the plane of the sky. Indications of extended blueshifted emission along P.A. $\sim$ 138$^{\circ}$, at least a few arcminutes downstream of HH 195A$-$D, are seen in Figures 5b & c. However, there are a few problems with the single outflow, constant opening angle model: (1) Figure 9 indicates that although the initial opening angle of the IRS 2 outflow is $\phi \sim 27^{\circ}$, the two strands of H$_2$ defining this opening angle join in a bow shock structure in the vicinity of HH 195A-D; (2) The CO data show little evidence for emission along the cavity wall at P.A. $\sim$ 125$^{\circ}$, although we note that there is much confusion from high-velocity gas associated with other outflows along this position angle to both sides of IRS 2; (3) HH 193 lies precisely at the end of the outflow wall in this model, an unlikely location for a shock; (4) The velocity dispersion of the blueshifted feature along P.A. 
$\sim$ 152$^{\circ}$ is high, since it is prominent at both ambient cloud velocities and at highly blueshifted velocities (Figures 5, 6d & e); (5) The highest-velocity outflow emission should converge towards the outflow axis (P.A. $\sim$ 138$^{\circ}$) due to projection effects, but the highest-velocity outflow emission lies along P.A. $\sim$ 152$^{\circ}$, the proposed outflow wall. Our maps are not sensitive enough to have picked up the highest-velocity outflow emission, however, which is severely diluted in our large ($\sim$1$^{\prime}$) beam. Nevertheless, the absence of CO emission along P.A. $\sim$ 138$^{\circ}$ in the highest velocities suggests that the feature along P.A. $\sim$ 152$^{\circ}$ defines a separate outflow axis. This has led to an alternate interpretation of the high-velocity CO associated with L1448 IRS 2 in which the presence of [*two*]{} outflows is required, as depicted in Figure 7b, with one outflow along P.A. $\sim$ 138$^{\circ}$, and a new, second outflow along P.A. $\sim$ 152$^{\circ}$, so prominent in the CO data. In this scenario, the new IRS 2 outflow would be responsible for exciting HH 193. Two outflows, along distinctly different position angles, would also suggest that IRS 2 is a binary system. Although available continuum data obtained with the Submillimetre Common User Bolometer Array (SCUBA) at the JCMT on Mauna Kea, Hawaii (O’Linger et al. 1999) show no evidence that this source is binary, it is possible that IRS 2 is a compact binary on a scale smaller than 7$^{\prime\prime}$ (the resolution of the SCUBA 450 $\mu$m data). Recent work indicates a high incidence of binarity among young stellar systems (e.g., Ghez, Neugebauer, & Matthews 1993; Looney, Mundy, & Welch 2000). Only arcsec/sub-arcsec imaging at either millimeter or centimeter wavelengths could test the binary hypothesis further.
Evidence of other outflow activity near L1448 IRS 2 is found by noting that the very confined outflow, whose lobes peak only $\sim$1$^{\prime}$ on either side of IRS 2, has its redshifted peak well-aligned along the P.A. $=$ 138$^{\circ}$ outflow that excites HH 195A$-$D, but the corresponding blue peak closest to IRS 2 is skewed at a somewhat shallower position angle closer to P.A. $\sim$ 125$^{\circ}$. No CO emission (Figure 2, Figures 6a$-$c) is seen along P.A. $\sim$ 125$^{\circ}$ on the opposite side from IRS 2, which would be expected if there were an outflow along this direction. The blue peak is, however, spatially coincident with HH 195E, the only HH 195 knot which is off the P.A. $=$ 138$^{\circ}$ axis of the IRS 2 outflow. It has been argued that IRS 1 drives an east-west oriented outflow and is the most probable driving source of HH 195E (Bally et al. 1997; Eislöffel 2000). Although the presence of such an E-W oriented outflow in this region is undisputed, it is possible that an as yet undiscovered source, other than IRS 1, may be the responsible agent. Thus, the positioning of the blue peak so close to IRS 2 may be coincidental, and due primarily to local heating associated with HH 195E, and overlapping outflows associated with blueshifted emission from IRS 2 and IRS 1. This picture is supported by the L1448 H$_2$ mosaic of Eislöffel (2000), shown in Figure 9, which shows the H$_2$ emission in the vicinity of HH 195E pointing toward IRS 1, not IRS 2.

### Outflow Emission from the L1448C/L1448N Ammonia Core

The position angles of the blue- and red-shifted outflowing gas powered by L1448C are indicated by the green lines in both Figures 8 & 9. Solid green lines indicate the L1448C outflow’s direction and extent as determined by previous workers (see the caption of Figure 8 for references). The dashed green line on the blue-shifted side represents the continuation of the L1448C outflow proposed by Barsony et al. (1998).
The blueshifted L1448C outflow suffers a large deflection from its original direction, as can be seen clearly in Figure 9, which shows the blueshifted L1448C outflow axis passing directly through H$_2$ emission knots I & S, and about 20$^{\prime\prime}$ north of emission knot R. Emission knot R lies within more extended H$_2$ emission that appears to form a U-shaped or bow-shock structure which opens toward the southeast, bisected by the L1448C outflow axis. The sides of the U are separated by $\sim$ 40$^{\prime\prime}$, with the brighter side (including knot R) lying to the south of the L1448C outflow axis. This final, deflected, blue-shifted outflow axis lies along P.A. $\sim$ 127$^{\circ}$, where a long finger of high-velocity blueshifted CO emission is found. Blueshifted CO emission surrounds the HH 267 complex in a horseshoe shape about the proposed extension of the L1448C outflow, suggesting the outflow may be responsible for this emission. The dashed green line on the redshifted side in Figure 8 indicates the possible continuation of the L1448C outflow to the south. The L1448N(A) molecular outflow axis and extent are depicted by the purple line in Figures 8 & 9. Only redshifted molecular gas associated with the L1448N(A) outflow, along P.A. $\sim$ 150$^{\circ}$, is detected in our CO maps, with a total length of at least 0.7 pc, as seen in Figure 8. The U-shaped molecular hydrogen emission that traces part of the redshifted outflow cavity wall from the L1448N(A) outflow, as seen in Figure 9, is unresolved in our CO maps. Knots of H$_2$ emission which trace the outflow wall are $\le$30$^{\prime\prime}$ apart as far as 3$^{\prime}$ to the southeast of L1448N(A) (Barsony et al. 1998; Eislöffel 2000). Taken together with the observed length of the CO redshifted lobe, this suggests a collimation factor of at least 16:1. 
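The $\ge$16:1 collimation factor quoted above follows from the $\sim$0.7 pc ($\approx$8$^{\prime}$ at 300 pc) length of the redshifted CO lobe and the $\le$30$^{\prime\prime}$ width traced by the H$_2$ knots. A sketch of the arithmetic:

```python
# Collimation factor = lobe length / lobe width, both as angles here.
# ~0.7 pc at 300 pc corresponds to roughly 8 arcmin (small-angle relation).
length_arcsec = 8.0 * 60.0   # redshifted CO lobe length
width_arcsec = 30.0          # H2 knot separation tracing the cavity wall
print(round(length_arcsec / width_arcsec))  # -> 16, i.e. >= 16:1
```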
The opening half-angle of the L1448N(A) outflow was estimated to be $\phi/2\ \approx\ 25^{\circ}$, from the morphology of the near-infrared reflection nebulosity to the north of L1448N(A), associated with what would be the blueshifted outflow lobe (Bally et al. 1993). The blueshifted flow powered by L1448N(A) is not apparent in our CO maps, judging by the drop in the blueshifted, high-velocity CO contour levels along the symmetry axis of its NIR reflection nebula. The lack of blueshifted CO emission from the L1448N(A) outflow is most likely accounted for by the likelihood that the cloud boundary has been reached in this direction, and that the flow has broken out of the molecular cloud. The L1448N(B) molecular outflow axis and extent are depicted by the mustard-colored lines in Figures 8 & 9. The position angle, P.A. $\sim$ 129$^{\circ}$, of the L1448N(B) outflow was determined by the orientation of the redshifted CO outflow driven by L1448N(B) from Figure 1b and noting that this CO flow symmetry axis intersects HH 196 (Barsony et al. 1998). The true spatial extent of the L1448N(B) CO outflow, however, is demonstrated here for the first time. On the scale of our map, the L1448N(B) outflow remains unresolved along its width. The optical emission knots of HH 196 are, indeed, found to lie right along the L1448N(B) CO outflow axis, confirming the identification of L1448N(B) as their exciting source (Figure 8). Approximately 2$^{\prime}$ downstream of HH 196, the L1448N(B) flow becomes confused in projection with the P.A. $\sim$ 152$^{\circ}$ outflow from IRS 2. If the outflow continues along P.A. $\sim$ 129$^{\circ}$, it could account for the bulges in the blueshifted emission to the south and southwest of HH 193 (Figure 2, Figure 5c, Figure 8), and might even be responsible for the “C”–shaped blueshifted emission structure east of the HH 267 system, $\sim$2.5 parsecs from L1448N(B). 
The redshifted lobe associated with this source appears to terminate $\sim$2$^{\prime}$ northwest of HH 277, although this could be due to confusion with the high-velocity CO emission surrounding HH 277, which seems to be part of an outflow driven by an unidentified source outside of the boundaries of our map. Using the most conservative length for the L1448N(B) outflow, the major axis taken from the HH 196 knots through the end of the redshifted lobe ($\sim$12$^{\prime}$), the derived lower limit for the collimation factor is $\ge$12:1. Estimates of the initial opening angle and width of the L1448N(B) outflow await higher spatial-resolution interferometric imaging. A dark blue dashed line in Figure 8 indicates a possible outflow axis passing through an otherwise unexplained, high-velocity redshifted feature associated with HH 277, in the southeast quadrant of our CO map. This redshifted CO velocity feature is most prominently seen in Figure 6c. The orientation of this structure is almost perpendicular to the general orientation of our map. Thus, HH 277 is probably driven by a source off the edge of our map. Finally, although the origin of the HH 267 knots cannot definitively be resolved based on the CO data we present here, our observations do constrain the driving source. L1448N(A) & (B) can be ruled out as possible driving sources of HH 267, since the P.A.’s of their associated outflows are along completely different directions from the lines linking them with HH 267. Furthermore, blue-shifted molecular gas has yet to be detected from L1448N(A). Terminal velocities for the L1448 IRS 2 outflows cannot be determined from our data, but the P.A.’s of both of these outflows also miss the HH 267 complex completely. However, the P.A. of the deflected blueshifted lobe of the L1448C flow goes right through HH 267, and the reported terminal velocity for this outflow (70 km s$^{-1}$: Bachiller et al.
1990, 1995) is in good agreement with the measured HH 267 velocities (Bally et al. 1997), as suggested by Barsony et al. (1998).

Cloud Dispersal by Giant Protostellar Flows?
--------------------------------------------

The most dramatic evidence for the direct effects of the outflows on the L1448 molecular cloud is seen in the distortions of the cloud contours at all velocities in Figures 5 & 6. Although we cannot estimate the masses and energetics of each individual outflow in our maps due to confusion in space and velocity, we can, nevertheless, estimate the [*total*]{} contribution of the outflows to the cloud’s energetics. The Local Thermodynamic Equilibrium (LTE) analysis used to estimate the combined mass of the outflows (M$_{tot}\approx 0.7$ M$_{\odot}$) is discussed in O’Linger et al. (1999). Optically thin high-velocity CO emission was assumed, given the lack of observed high-velocity $^{13}$CO emission in the velocity intervals outside the cloud core velocities (see Figure 3). Therefore, the resultant derived mass is a strict lower limit, since no attempt was made to correct for the considerable mass expected to be masked by the line core emission. For highly inclined outflows ($i>70^{\circ}$), the characteristic velocity which is used to calculate outflow energetic parameters is best chosen as the geometrical mean between the highest observed velocities, $V_{CO}$, and the inclination-corrected velocity, $V_{CO}/\cos(i-\phi/2)$, where $\phi/2$ is the half-opening angle of the outflow (Cabrit & Bertout 1992). Assuming outflow inclinations of 70$^{\circ}$ $\le$ $i$ $\le$ 90$^{\circ}$ yields V$_{char} = 22 - 34$ km s$^{-1}$ for the L1448 outflows, so the total momentum in all the flows is computed to be 16 M$_{\odot}$ km s$^{-1}$ $\le$ M$_{tot}$V$_{char}$ $\le$ 24 M$_{\odot}$ km s$^{-1}$.
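The V$_{char}$ range and total momentum quoted above can be reproduced from the Cabrit & Bertout (1992) prescription. In the sketch below, the highest observed CO velocity ($V_{CO} = 16$ km s$^{-1}$) and the half-opening angle ($\phi/2 = 13.5^{\circ}$) are assumptions chosen to land near the quoted 22$-$34 km s$^{-1}$ range, not values stated explicitly in the text:

```python
import math

MSUN_G = 1.989e33   # solar mass [g]
KM_S_CM = 1.0e5     # km/s in cm/s

def v_char(v_co, incl_deg, half_open_deg):
    """Characteristic outflow velocity for a highly inclined outflow:
    geometric mean of the observed velocity and the inclination-corrected
    velocity V_CO / cos(i - phi/2) (Cabrit & Bertout 1992)."""
    corrected = v_co / math.cos(math.radians(incl_deg - half_open_deg))
    return math.sqrt(v_co * corrected)

M_TOT = 0.7   # total LTE outflow mass [Msun], a strict lower limit
V_CO = 16.0   # assumed highest observed CO velocity [km/s]

for i in (70.0, 90.0):
    v = v_char(V_CO, i, 13.5)                        # assumed phi/2 = 13.5 deg
    p = M_TOT * v                                    # momentum [Msun km/s]
    e = 0.5 * M_TOT * MSUN_G * (v * KM_S_CM) ** 2    # kinetic energy [erg]
    print(f"i={i:.0f} deg: V_char={v:.0f} km/s, P={p:.0f} Msun km/s, E={e:.1e} erg")
```

With M$_{tot} = 0.7$ M$_{\odot}$ this gives momenta of roughly 15$-$23 M$_{\odot}$ km s$^{-1}$ and kinetic energies of a few $\times 10^{45}$ ergs, consistent with the ranges quoted in the text.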
This range of values is nearly equivalent to the momentum content of the quiescent NH$_3$ cores, assuming 50 M$_{\odot}$ total cores and $v_{turb}$ $\sim$ 0.5 km s$^{-1}$ [@ba86a]. Even more striking, the total kinetic energy in all the flows, 2$\times 10^{45}$ ergs $\le$ $\frac{1}{2}$M$_{tot}$V$_{char}^2$ $\le$ 8$\times 10^{45}$ ergs, exceeds the gravitational binding energy ($\sim\ GM^2/R\ \approx$ 5 $\times$ 10$^{44}$ ergs) of the NH$_3$ cores by an order of magnitude, and the gravitational binding energy of the 100 M$_{\odot}$ C$^{18}$O cloud (9 $\times$ 10$^{44}$ ergs), contained within a 1.3 pc $\times$ 0.7 pc region [@ba86b], by a factor of five. The total outflow momentum quoted above is, in fact, a lower limit, since these outflows are still gaining momentum from the force provided by the central driving engines, and the total outflow mass may be grossly underestimated. The magnitude of both the total energy and momenta of the outflows suggests these outflows are capable of dispersing the NH$_3$ cores, with the caveat that it is unclear, both from the outflow and ambient cloud morphology, how the outflow momenta can be adequately transferred to the surrounding core. Possibly, this can be accomplished as the individual outflow opening angles increase with time.

Summary
=======

$\bullet$ Spectral-line “on-the-fly” mapping was used at the NRAO 12-meter millimeter telescope to produce a large-scale (47$^{\prime}$ $\times$ 7$^{\prime}$) CO (J$=$1$\rightarrow$0) map of the L1448 dark cloud, sensitive enough (1 $\sigma$ $=$ 0.1 K) to enable the detection of outflow activity on parsec scales and the identification of six distinct molecular outflows. Large-scale, high-spatial resolution optical and near-infrared images of shocked gas emission regions associated with each outflow were crucial for identifying the CO counterparts of these outflows. Three of the outflows are associated with the Class 0 protostars, L1448C, L1448N(A), & L1448N(B).
Two outflows are associated with the Class 0 protostar, L1448 IRS 2. A sixth outflow, apparently associated with HH 277, is probably driven by an unidentified source located outside of our map. $\bullet$ For all of the outflows which have previously been identified through high-resolution interferometric observations or small-scale shocked gas emission, we find good agreement between large-scale CO features and flow axes that have been determined from these higher-resolution observations. $\bullet$ We find evidence of two distinct outflows emanating from the recently-confirmed Class 0 protostar, L1448 IRS 2 (O’Linger et al. 1999), suggesting that IRS 2 is an unresolved binary system. One of these outflows lies along P.A. $\sim$ 138$^{\circ}$, and is apparent both in H$_2$ emission (Eislöffel 2000) and, to a lesser degree, in our CO data. The second outflow lies along P.A. $\sim$ 152$^{\circ}$ and is seen as a highly-collimated jet in our CO data, culminating, on the blueshifted side, in HH 193. $\bullet$ The ambient cloud emission contours are severely disturbed by the outflows, suggesting a large fraction of the ambient cloud in the mapped region has been entrained in, or stirred up by, the outflows. $\bullet$ The total outflow kinetic energy ($>$ 2 $\times$ 10$^{45}$ ergs) and combined outflow momenta ($>$ 16 M$_{\odot}$ km s$^{-1}$) indicate that the outflows are energetically, and probably dynamically, capable of dispersing the dense ammonia cores out of which the protostellar outflow driving sources of five of the six identified flows, L1448C, L1448N(A), L1448N(B), and L1448 IRS 2, are currently forming. However, it is unclear, from both the outflow and ambient cloud morphology, how the outflow momenta can be adequately transferred to the surrounding cores. ACKNOWLEDGEMENTS: We thank Dr. Darrel Emerson, Dr. Eric Greisen, and Dr. 
Jeff Mangum of NRAO for the development, implementation, and improvement of the spectral-line On-The-Fly mapping capability of the 12-meter telescope. GWC, JO, and MB gratefully acknowledge financial support from NSF grant AST-0096087 while part of this work was carried out. Part of this work was performed while GWC held a President’s Fellowship from the University of California. MB’s NSF POWRE Visiting Professorship at Harvey Mudd College, NSF AST-9731797, provided the necessary time to bring this work to completion. JO acknowledges financial support by the NASA Grant to the Wide-Field Infrared Explorer Project at the Jet Propulsion Laboratory, California Institute of Technology. We would like to thank our referee, John Bally, for his many helpful suggestions which greatly improved this paper.

Anglada, G., Rodríguez, L. F., Torrelles, J. M., Estalella, R., Ho, P. T. P., Cantó, J., López, R., & Verdes-Montenegro, L. 1989, , 31, 208
Bachiller, R. & Cernicharo, J. 1986a, , 168, 262
Bachiller, R. & Cernicharo, J. 1986b, , 166, 283
Bachiller, R., Cernicharo, J., Martin-Pintado, J., Tafalla, M., & Lazareff, B. 1990, , 231, 174
Bachiller, R., Terebey, S., Jarrett, T., Martin-Pintado, J., Beichman, C.A., & Van Buren, D. 1994, , 437, 296
Bachiller, R., Guilloteau, S., Dutrey, A., Planesas, P., & Martin-Pintado, J. 1995, , 299, 857
Bally, J., Lada, E.A., & Lane, A.P. 1993, , 418, 322
Bally, J., Devine, D., & Reipurth, B. 1996a, , 473, L49
Bally, J., Devine, D., & Alten, V. 1996b, , 473, 921
Bally, J., Devine, D., Alten, V., & Sutherland, R.S. 1997, , 478, 603
Bally, J., Reipurth, B., Lada, C.J., & Billawala, Y. 1999, , 117, 410
Barsony, M., Ward-Thompson, D., André, P., & O’Linger, J. 1998, , 509, 733
Bence, S.J., Richer, J.S., & Padman, R. 1996, , 279, 866
Bence, S.J., Padman, R., Isaak, K.G., Wiedner, M.C., & Wright, G.S. 1998, , 299, 965
Bertoldi, F. & McKee, C.F. 1996, in Amazing Light: A Volume Dedicated to C.H. Townes on his 80th Birthday, ed. R.Y. Chiao, New York: Springer
Bontemps, S., André, P., Terebey, S., & Cabrit, S. 1996, , 311, 858
Cabrit, S. & Bertout, C. 1992, , 261, 274
Cohen, M. 1980, , 85, 29
Curiel, S., Raymond, J. C., Rodríguez, L. F., Cantó, J., & Moran, J. M. 1990, , 365, L85
Curiel, S., Torrelles, J. M., Rodríguez, L. F., Gomez, J. F., & Anglada, G. 1999, , 527, 310
Davis, C. J., Dent, W. R. F., Matthews, H. E., Aspin, C., & Lightfoot, J. F. 1994, , 266, 933
Davis, C. J., & Smith, M. D. 1995, , 443, L41
Dent, W.R.F., Matthews, H.E., & Walther, D.M. 1995, , 277, 193
Devine, D., Bally, J., Reipurth, B., & Heathcote, S. 1997, , 114, 2095
Dutrey, A., Guilloteau, S., & Bachiller, R. 1997, , 325, 758
Eislöffel, J. 2000, , 354, 236
Eislöffel, J. & Mundt, R. 1997, , 114, 280
Ghez, A.M., Neugebauer, G., & Matthews, K. 1993, , 106, 2005
Gomez, M., Whitney, B., & Kenyon, S.J. 1997, , 114, 1138
Gomez, M., Whitney, B., & Wood, K. 1998, , 115, 2018
Guilloteau, S., Bachiller, R., Fuente, A., & Lucas, R. 1992, , 265, L49
Hodapp, K.-W. 1994, , 94, 615
Knee, L. B. G., & Sandell, G. 2000, to appear in
Lada, C.J. & Fich, M. 1996, , 459, 638
Looney, L.W., Mundy, L.G., & Welch, W.J. 2000, , 529, 477
Norman, C. & Silk, J. 1979, , 234, 86
O’Linger, J., Wolf-Chase, G.A., Barsony, M., & Ward-Thompson, D. 1999, , 515, 698
Reipurth, B., Devine, D., & Bally, J. 1998, , 116, 1396
Sandell, G., & Knee, L.B.G. 1998, J. R. Astron. Soc. Ca., 92, 32
Terebey, S. 1998, private communication
Terebey, S. & Padgett, D. 1997, in IAU Symposium 182, Herbig-Haro Flows and the Birth of Low-Mass Stars, eds. B. Reipurth & C. Bertout (Dordrecht: Kluwer)
White, G.J., Casali, M.M., & Eiroa, C. 1995, , 298, 594
Wilking, B.A., Schwartz, R.D., Fanetti, T.M., & Friel, E.D.
1997, PASP, 109, 549 Blue contours indicate high-velocity blueshifted ($-12.1\ \le\ V_{LSR}\ \le\ -1$ km s$^{-1}$) emission and red contours indicate high-velocity redshifted ($+$8.1 $\le\ V_{LSR}\ \le\ $+$17.8$ km s$^{-1}$) CO J$=$1$\rightarrow$0 emission in the 47$^{\prime}\ \times\ 7^{\prime}$ region we mapped with the NRAO 12-meter telescope. Contour levels start at 2 K km s$^{-1}$ ($\approx$ 3 $\sigma$), and increase in 1.5 K km s$^{-1}$ intervals. For comparison, the dashed rectangle shows the approximate area previously mapped (in CO J$=$2$\rightarrow$1), shown in the inset. Stars indicate the positions of L1448C, L1448N, L1448NW, and L1448 IRS 2. All of these are Class 0 sources; L1448N is a 7$^{\prime\prime}$ separation protobinary, consisting of L1448N(A) and L1448N(B). L1448NW lies $\sim$ 20$^{\prime\prime}$ to the northwest of the protobinary. The filled box indicates the position of the Class I source, L1448 IRS 1. The 55$^{\prime\prime}$ FWHM NRAO beamsize is indicated in the lower right-hand corner. [**b.**]{} The previous CO J$=$2$\rightarrow$1 IRAM 30-meter map of high-velocity gas in L1448 (from [@bac90]): Solid contours indicate high-velocity, blueshifted ($-$ 55 km s$^{-1}$ $\le$ V$_{LSR}$ $\le$ 0 km s$^{-1}$) gas, and dotted contours indicate high-velocity, redshifted ($+$ 10 km s$^{-1}$ $\le$ V$_{LSR}$ $\le$ $+$ 65 km s$^{-1}$) gas. First contour and contour intervals are at 10 K km s$^{-1}$. In addition to the famous protostellar outflow powered by L1448C, the weaker outflow, powered by L1448N(B), is also detected . The 12$^{\prime\prime}$ FWHM IRAM beamsize is indicated in the lower right-hand corner. Names and positions of all the Herbig-Haro objects (black crosses) are shown, as well as the five Class 0 sources (black stars) and the Class I source, L1448 IRS 1 (solid black box). Velocity intervals and contour levels are the same as in Figure 1a. 
$^{12}$CO (thin solid line) & $^{13}$CO (thick solid line) J=1$\rightarrow$0 spectra obtained at the four positions on our map whose B1950 coordinates are indicated: (a) CO spectra near the HH 267 knots (just off the northwest corner of our map) clearly show two separate velocity components: a brighter component at the same velocity as HH 267B, V$_{LSR}=$0 km s$^{-1}$ (thin vertical dashed line), and a dimmer component at V$_{LSR}=$4.25 km s$^{-1}$ (thick vertical dashed line). (b) $-$ (d) CO spectra show strong CO self-absorption at the ambient cloud velocity, V$_{LSR}=$4.25 km s$^{-1}$. Arrows indicate the velocity ranges used to determine integrated intensities, masses, and energetics for the outflow emission. The positions of all the Herbig-Haro objects are indicated by tilted black crosses; the Class I source, L1448 IRS 1, is shown by a filled red box; the Class 0 protostars, L1448 IRS 2, L1448C, L1448N(A) & (B), and L1448NW are indicated by red stars. The separation between L1448C & L1448N is $\sim$ 80$^{\prime\prime}$. Contoured greyscale images of the integrated CO intensity over velocity intervals blueward of the ambient cloud emission. Velocity intervals, lowest contour levels, and contour intervals, are, respectively: (a) -8 to -4 km s$^{-1}$, 1 K km s$^{-1}$, 0.5 K km s$^{-1}$; (b) -4 to -2 km s$^{-1}$, 1 K km s$^{-1}$, 0.5 K km s$^{-1}$; (c) -2 to 0 km s$^{-1}$, 1.5 K km s$^{-1}$, 1 K km s$^{-1}$; (d) 0 to 2 km s$^{-1}$, 2 K km s$^{-1}$, 2 K km s$^{-1}$; and (e) 2 to 4 km s$^{-1}$, 2 K km s$^{-1}$, 2 K km s$^{-1}$. Positions of Herbig-Haro objects (tilted yellow crosses), IRS 1 (filled red box), and Class 0 objects (red stars) are indicated. Contoured greyscale images of the integrated CO intensity over velocity intervals redward of the ambient cloud emission. 
Velocity intervals, lowest contour levels, and contour intervals, are, respectively: (a) 12 to 16 km s$^{-1}$, 1 K km s$^{-1}$, 1 K km s$^{-1}$; (b) 10 to 12 km s$^{-1}$, 1 K km s$^{-1}$, 1 K km s$^{-1}$; (c) 8 to 10 km s$^{-1}$, 2 K km s$^{-1}$, 1 K km s$^{-1}$; (d) 6 to 8 km s$^{-1}$, 2 K km s$^{-1}$, 2 K km s$^{-1}$; and (e) 4 to 6 km s$^{-1}$, 2 K km s$^{-1}$, 2 K km s$^{-1}$. Positions of Herbig-Haro objects (tilted yellow crosses), IRS 1 (filled red box), and Class 0 objects (red stars) are indicated. One outflow constant-opening angle model (O’Linger et al. 1999). The outflow axis (P.A. $\sim$ 138$^{\circ}$) and corresponding extent of the H$_2$ emission are denoted by the solid black line. The dashed black line indicates possible extension of this outflow and is based on observed CO bullets along this axis. The dotted lines indicate the outflow walls, determined from a fan-shaped reflection nebulosity ($\phi \sim 27^{\circ}$) seen in K$^{\prime}$ emission (Hodapp 1994), as well as two strands of H$_2$ emission originating from IRS 2 which lie along these axes (Eislöffel 2000). [**b.**]{} Two outflows model. Solid and dashed black lines as in Figure 6a. The dashed orange line indicates the axis of the highly-collimated bipolar CO emission which is evident in our CO data. Positions of all the Herbig-Haro objects (black crosses) are shown, as well as the five Class 0 sources (black stars) and the Class I source, IRS 1 (solid black box). Individual outflow position angles and extents superimposed on the CO outflow integrated intensity map. Positions of all the Herbig-Haro objects (black crosses) are shown, as well as the five Class 0 sources (black stars) and the Class I source, IRS 1 (solid black box). Position angles are indicated for the outflows associated with L1448C (green $-$ Bachiller et al. 1995; Davis & Smith 1995; Dutrey et al. 1997; Eislöffel 2000), L1448N(B) (mustard $-$ Bachiller et al. 1990; Bontemps et al. 1996; Barsony et al. 
1998), L1448N(A) (purple $-$ Davis & Smith 1995; Barsony et al. 1998), L1448 IRS 2 (black $-$ O’Linger et al. 1999; orange $-$ this work), and high velocity gas of unknown origin associated with HH 277 (dark blue). The extents of the outflow lobes that are well-established from the literature and our data are indicated with solid lines. Features that are seen only in our CO data, and extrapolations that are consistent with our data, are shown with dashed lines. Individual outflow position angles superimposed on an H$_2$ mosaic of L1448 (Eislöffel 2000). Note that the axes of this figure are in J2000 coordinates. The positions of Herbig-Haro objects (orange crosses), Class 0 sources (red stars), and IRS 1 (open red box), are indicated. Position angles of outflows are indicated as in Figure 8. Note that after its bend at emission knot I, the position angle of the L1448C outflow passes directly through emission knot S. Although emission knot R lies about 20$^{\prime\prime}$ south of the L1448C outflow axis, this knot appears to be part of extended emission that forms a U-shaped structure which opens toward the southeast, bisected by the outflow axis. The sides of the U are separated by $\sim$ 40$^{\prime\prime}$, with the brighter side (including knot R) lying to the south of the L1448C outflow axis. [^1]: The National Radio Astronomy Observatory is a facility of the National Science Foundation, operated under cooperative agreement by Associated Universities, Inc.
--- abstract: 'This paper leverages heterogeneous auxiliary information to address the data sparsity problem of recommender systems. We propose a model that learns a shared feature space from heterogeneous data, such as item descriptions, product tags and online purchase history, to obtain better predictions. Our model consists of autoencoders, not only for numerical and categorical data, but also for sequential data, which enables capturing user tastes, item characteristics and the recent dynamics of user preference. We learn the autoencoder architecture for each data source independently in order to better model their statistical properties. Our evaluation on two [*MovieLens*]{} datasets and an e-commerce dataset shows that mean average precision and recall improve over state-of-the-art methods.' author: - title: | Deep Heterogeneous Autoencoders\ for Collaborative Filtering\ --- Deep Autoencoder, Heterogeneous Data, Shared Representation, Sequential Data Modeling, Collaborative Filtering Introduction ============ Although Collaborative Filtering (CF) techniques achieve good performance in many recommender systems [@Hu2008], their performance degrades significantly when historical data is sparse. In order to alleviate this problem, features from auxiliary data sources that reflect user preference have been extracted [@Oord2013; @Porteous2010], as shown in Fig. \[fig:auxiliary\_usage\]. How to represent data from different sources is still a research problem, and it has been shown that the representation itself substantially impacts performance [@Loyola2017; @Goodfellow2016]. Recently, representation learning that automatically discovers hidden factors from raw data has become a popular approach to remedy the data sparsity issue of recommender systems [@wangweiran2015; @Zheng2017]. Many online shopping platforms gather not only user profiles and item descriptions, but various other types of data, such as product reviews, tags and images. 
Recent research has added textual and visual information to recommender systems [@Fuzheng2016; @Oramas2017]. However, in many cases sequential data, such as user purchase and browsing history, which carries information about trends in user tastes, has largely been neglected in CF-based recommender systems. ![[**Auxiliary information usage in recommender systems.**]{} *Item descriptions and user profiles are typically used for feature extraction to alleviate the data sparsity problem. Our proposal also leverages sequential data, such as purchase histories, to reflect user preferences.*[]{data-label="fig:auxiliary_usage"}](heterogeneous2recommender.png){width="\columnwidth"} In this paper we propose Deep Heterogeneous Autoencoders (DHA) for Collaborative Filtering to combine information from multiple domains. We use Stacked Denoising Autoencoders (SDAE) to extract latent features from non-sequential data, and Recurrent Neural Network Encoder-Decoders (RNNED) to extract features from sequential data. The model is able to capture both user preferences and potential shifts of interest over time. Each data source is modeled using an independent encoder-decoder mechanism. Different encoders can have different numbers of hidden layers and an arbitrary number of hidden units in order to deal with the intrinsic differences of the data sources. For instance, user demographic data and item content are typically categorical, while user comments or item tags are textual. After pre-processing, such as one-hot encoding, bag-of-words and word2vec computation, representation vectors are on different levels of abstraction. Owing to its flexible structure, our model is able to learn suitable latent feature vectors for each component. These local representations from each data source are joined to form a shared feature space, which couples the joint learning of the representation from heterogeneous data and the collaborative filtering of user-item relationships.
The contributions of this paper are summarized as follows: 1. A method for modeling both static and sequential data in a consistent way for recommender systems in order to capture the trend in user tastes, and 2. Adaptation of the autoencoder architecture to accurately model each data source by considering their distinct abstraction levels. We show improvements in terms of mean average precision and recall on three different datasets. Related work ============ Incorporating side information into recommender systems ------------------------------------------------------- In order to improve recommendation performance, research has been focusing on using side information, such as user profiles and reviews [@Fuzheng2016; @Porteous2010]. In particular, deep learning models have been widely studied [@He2017; @Wu2017]. AutoRec first proposed the use of autoencoders for recommender systems [@Sedhain2015]. In more recent work, representations are learned via stacked autoencoders (SAE), and fed into conventional CF models, either loosely or tightly coupled [@szhang2017; @hwang2015]. Deep models that integrate autoencoders into collaborative filtering have shown state-of-the-art performance. Recurrent Neural Network Encoder-Decoder ---------------------------------------- Recurrent neural networks (RNNs) process sequential data one element at each step to capture temporal dynamics. The encoder-decoder mechanism was initially applied to RNN for machine translation [@Cho2014]. Recently, RNN encoder-decoders (RNNED) have been used to learn features from a series of actions and have successfully been applied in other areas. It was shown that Long Short-Term Memory (LSTM) networks have the ability to learn on data with long range temporal dependencies, and we adopt LSTMs for modeling sequential data. 
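To make the encoder half of this mechanism concrete, the following numpy sketch shows how an LSTM reads a sequence of embeddings one step at a time and compresses it into a single final hidden state $h_T$, which serves as the context vector. The 100-d inputs, 64-d hidden state, and weight initialization are illustrative choices, not the paper's configuration:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class LSTMEncoder:
    """Minimal LSTM that reads a sequence of embeddings and returns the
    final hidden state h_T, used as a fixed-length context vector."""
    def __init__(self, input_dim, hidden_dim, seed=0):
        rng = np.random.default_rng(seed)
        d = input_dim + hidden_dim
        # one weight matrix per gate: input (i), forget (f), output (o), cell (g)
        self.W = {k: rng.normal(0, 0.1, (hidden_dim, d)) for k in "ifog"}
        self.b = {k: np.zeros(hidden_dim) for k in "ifog"}
        self.hidden_dim = hidden_dim

    def encode(self, sequence):
        h = np.zeros(self.hidden_dim)
        c = np.zeros(self.hidden_dim)
        for x_t in sequence:                 # one embedding per time step
            z = np.concatenate([x_t, h])
            i = sigmoid(self.W["i"] @ z + self.b["i"])
            f = sigmoid(self.W["f"] @ z + self.b["f"])
            o = sigmoid(self.W["o"] @ z + self.b["o"])
            g = np.tanh(self.W["g"] @ z + self.b["g"])
            c = f * c + i * g                # cell state carries long-range memory
            h = o * np.tanh(c)
        return h                             # context vector summarizing the sequence

# e.g. T=10 purchase-genre embeddings of dimension 100
enc = LSTMEncoder(input_dim=100, hidden_dim=64)
context = enc.encode(np.random.default_rng(1).normal(size=(10, 100)))
```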
![[**Deep Heterogeneous Autoencoders and the integration with collaborative filtering.**]{} *The proposed model extracts a shared feature space from multiple sources of auxiliary information. It models non-sequential and sequential data to capture user preferences, item properties as well as temporal dynamics. It adopts independent encoder-decoder architectures for different data sources in order to better model their statistical properties. The product of $U \in \mathbb{R}^{m \times d}$ and $V \in \mathbb{R}^{n \times d}$ approximates the user-item interaction matrix.*[]{data-label="fig:DHA"}](DHA_0606.png){width="45.00000%"} Deep Heterogeneous Autoencoders for Collaborative Filtering =========================================================== Overview -------- We propose a model that learns a joint representation from heterogeneous auxiliary information to mitigate the data sparsity problem of recommender systems. SDAEs are applied to numerical and categorical data for modeling the static tastes of users for items. We use RNNEDs to extract features from sequential data to reveal interest shifts over time. The model adopts an independent autoencoder architecture for each data source since the inputs are generally on a different level of abstraction, see Fig. \[fig:DHA\] for an overview. In order to discover the distinct statistical properties of every data source, our model takes the existing disparity of input abstraction levels into consideration, and applies autoencoders to each source independently by allowing distinct hidden layer numbers and arbitrary hidden units at every layer. Deep Heterogeneous Autoencoders ------------------------------- We define each source of auxiliary data as a component indexed by $c \in \{1,...,C\}$. $S_c$ denotes the input of component $c$. We pre-process non-sequential data like textual item descriptions by generating fixed-length embedding vectors. 
For sequential data, an embedding vector is learned for every time step after tokenization. We separately describe the encoding-decoding outputs of the above two types of embedding vectors. As shown in Fig. \[fig:DHA\], SDAE is applied to fixed-length embedding vectors. Each component encoder takes the input $S_c$, generates a corrupted version of it, $\hat{S_c}$, and the first layer maps it to a hidden representation $h_c$, which captures the main factors of variation in the input data distribution [@vincent2008; @vincent2010]. More importantly, the numbers of hidden layers of the components in our model can differ from each other. The architecture is unique for each data source, where the number of layers of component $c$ is denoted as $L_c$. The representation at every layer is $S_{c,l}$. For the encoder of each component, given $l_c \in \{1, ..., L_c/2 \}$ and $c \in C$, the hidden representation $h_{c,l}$ is derived as: $$h_{c, l} = f \left(W_{c,l}h_{c,l-1} + b_{c,l} \right).$$ The decoder reconstructs the data at layer $L$ as follows: $$\bar{S}_c = g\left( W'_c h_{c,L} + b'_c \right).$$ The proposed model leverages sequential data by using two LSTMs for encoding and decoding one sequential data source. Specifically, the encoder reads a sequence with $T$ time steps. At the last time step, the hidden state $h_T$ is mapped to a context vector $c$, as a summary of the whole input sequence [@Cho2014]. The decoder generates the output sequence by predicting the next action $y_t$ given $h_t$. Both $y_t$ and $h_t$ are also conditioned on $y_{t-1}$ and the context vector $c$. To combine them, as shown in Fig. \[fig:DHA\], the first part of our model encodes all components to generate hidden representations $S_{c, L_c/2}$ of non-sequential data and $h_T$ of sequential data across all sources. These are merged to generate a joint latent representation, denoted as $h_{+,0}$.
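The per-component encoding and the merge into $h_{+,0}$ can be sketched in numpy as follows. This is a hedged illustration: the layer sizes, the relu activation, and the 0.3 masking level are placeholders rather than the tuned values from the experiments:

```python
import numpy as np

rng = np.random.default_rng(0)
relu = lambda x: np.maximum(x, 0.0)

def corrupt(S, level=0.3):
    """Masking noise: randomly zero a fraction `level` of the inputs."""
    return S * (rng.random(S.shape) > level)

def encode_component(S_c, weights, biases):
    """Run one component's encoder half: h_l = f(W_l h_{l-1} + b_l)."""
    h = corrupt(S_c)
    for W, b in zip(weights, biases):
        h = relu(h @ W + b)
    return h

def make_layers(dims):
    Ws = [rng.normal(0, 0.01, (m, n)) for m, n in zip(dims, dims[1:])]
    bs = [np.zeros(n) for n in dims[1:]]
    return Ws, bs

# Two illustrative components with different input sizes and depths,
# e.g. item content (2482-d, 2 encoder layers) and tags (500-d, 1 layer).
content = rng.random((8, 2482))        # batch of 8 items
tags = rng.random((8, 500))
W1, b1 = make_layers([2482, 512, 128])
W2, b2 = make_layers([500, 128])

h1 = encode_component(content, W1, b1)
h2 = encode_component(tags, W2, b2)

# Fusion layer: h_{+,0} = f(sum_c W_{c,+} h_c + b_{+,0})
Wp1 = rng.normal(0, 0.01, (128, 64))
Wp2 = rng.normal(0, 0.01, (128, 64))
b_plus = np.zeros(64)
h_plus0 = relu(h1 @ Wp1 + h2 @ Wp2 + b_plus)
```

Because each component keeps its own weight list, the two encoders are free to have different depths and widths, which is the property the text emphasizes.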
Analogous to the hidden layers of each component, the fusion model can have multiple hidden layers, the total number denoted as $L_+$. The representation of the first fusion hidden layer is $$h_{+,0} = f \left(\sum_{c\in C} W_{c, +}h_{c,L_c/2} + b_{+,0} \right).$$ The first hidden layer $h_{+,0}$ of the fused model is fed into the collaborative filtering model. After joint training, $h_{+,0}$ is the latent vector used to generate recommendation results.

DHA-based Collaborative Filtering
---------------------------------

All data is fed into two DHAs for users and items, respectively. Fig. \[fig:DHA\] shows the process for items, and it is analogous for user data. Let $R \in \mathbb{R}^{m \times n}$ denote the rating matrix of users to items, $S_c^{\left(u \right)}$ being the component $c$ input for users and $S_c^{\left(v \right)}$ that for items. Then, $h_{+,0}^{\left(u \right)}$ and $h_{+,0}^{\left(v \right)}$ are the latent factors. The loss function of the proposed DHA-based collaborative filtering is defined as: $$\begin{aligned}
L = &\sum_{i,j} c_{i,j} \left(r_{i,j} - u_i v_j \right)^2 + \lambda_f \Big(\sum_i \|u_i\|^2 + \sum_j \|v_j\|^2 \Big)\\
&+ \lambda_{m} \sum_{c\in C_u} \mathit{loss}(S_c^{(u)}, \bar{S}_c^{(u)}) + \lambda_{n} \sum_{c\in C_i} \mathit{loss}(S_c^{(v)}, \bar{S}_c^{(v)})\\
&+ \lambda_u \sum_i \| u_i - h_{+, 0}^{u_i} \|^2 + \lambda_v \sum_j \| v_j - h_{+, 0}^{v_j} \|^2\\
&+ \lambda_w \Big(\sum_c\sum_l \big(\|W_{c,l}^{(u)}\|^2 + \|b_{c,l}^{(u)}\|^2 \big) + \sum_c\sum_l \big(\|W_{c,l}^{(v)}\|^2 + \|b_{c,l}^{(v)}\|^2 \big)\Big).\end{aligned}$$ The loss function includes the reconstruction costs of the user and item information sets, the error in predicting $r_{i,j}$, and the approximation error between the latent factor vectors of feature learning and collaborative filtering. The loss function is minimized to obtain the parameters for the DHAs and the CF model. The mean squared error and the negative log-likelihood are used as cost functions for non-sequential and sequential data, respectively.
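With the network outputs held fixed, the terms of the loss that involve a single $u_i$ are quadratic, so the coordinate-descent step described in the Parameter learning subsection has a closed form. A numpy sketch, with all dimensions illustrative:

```python
import numpy as np

def update_user_factor(V, C_i, R_i, h_i, lam_f, lam_u):
    """Closed-form minimizer of the loss terms involving u_i:
       sum_j c_ij (r_ij - u_i . v_j)^2 + lam_f ||u_i||^2 + lam_u ||u_i - h_i||^2
       => u_i = (V^T C_i V + (lam_f + lam_u) I)^{-1} (V^T C_i R_i + lam_u h_i)
    """
    d = V.shape[1]
    A = V.T @ (C_i[:, None] * V) + (lam_f + lam_u) * np.eye(d)
    rhs = V.T @ (C_i * R_i) + lam_u * h_i
    return np.linalg.solve(A, rhs)

rng = np.random.default_rng(0)
n, d = 20, 5
V = rng.normal(size=(n, d))           # item latent factors
C_i = rng.uniform(1, 2, size=n)       # confidence weights c_ij for user i
R_i = rng.normal(size=n)              # ratings of user i
h_i = rng.normal(size=d)              # fused feature vector h_{+,0}^{u_i}
u_i = update_user_factor(V, C_i, R_i, h_i, lam_f=0.1, lam_u=1.0)

# At the minimizer, the gradient of the quadratic objective vanishes:
grad = -2 * V.T @ (C_i * (R_i - V @ u_i)) + 2 * 0.1 * u_i + 2 * 1.0 * (u_i - h_i)
```

The item-factor update is symmetric, with $U$, $C_j$, $R_j$, and $\lambda_v$ in place of their user-side counterparts.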
We use $\lambda_{m}$, $\lambda_{n}$, $\lambda_u$ and $\lambda_v$ to balance the losses between users and items, and $\lambda_f$ and $\lambda_w$ to regularize the weight matrices and bias vectors.

Parameter learning
------------------

We apply coordinate descent to alternate the optimization between representation learning of heterogeneous data and the user-item interaction, similar to [@hwang2015; @cwang2011]. Given the $W$s and $b$s, the gradients of the loss function $L$ with respect to $u_i$ and $v_j$ are computed and set to 0, leading to the following updates: $$\begin{aligned}
u_i &\leftarrow (V^TC_i V +\lambda_f I + \lambda_u I)^{-1}(V^TC_i R_i + \lambda_u h_{+,0}^{u_i}),\\
v_j &\leftarrow (U^TC_j U +\lambda_f I + \lambda_v I)^{-1}(U^TC_j R_j + \lambda_v h_{+,0}^{v_j}),\end{aligned}$$ where $U \in \mathbb{R}^{m \times d}$ and $V \in \mathbb{R}^{n \times d}$ contain the user and item latent factor vectors, and $d$ is the vector dimensionality. Given $U$ and $V$, the weight matrices and bias vectors of every layer are learned by backpropagation with stochastic gradient descent (SGD). The gradients of $W$ and $b$ are calculated as follows: $$\begin{aligned}
\frac{\partial L}{\partial W^{u}} =& \lambda_w W^{u} + \lambda_{m} \sum_c \frac{\partial}{\partial W^{u}} \mathit{loss}(S_c^{u}, \bar{S}_c^{u}) + \lambda_u \frac{\partial}{\partial W^{u}} \|U - h_{+,0}^{u}\|^2,\\
\frac{\partial L}{\partial b^{u}} =& \lambda_w b^{u} + \lambda_{m} \sum_c \frac{\partial}{\partial b^{u}} \mathit{loss}(S_c^{u}, \bar{S}_c^{u}) + \lambda_u \frac{\partial}{\partial b^{u}} \|U - h_{+,0}^{u}\|^2.\end{aligned}$$ A learning rate $\alpha$ is adopted to update all parameters using the calculated gradients.

Experiments
===========

Experiments are conducted on three real-world datasets: MovieLens-100k ([*ml-100k*]{}), MovieLens-10M ([*ml-10m*]{}), and one dataset from an e-commerce company (OfflinePay). We first investigate whether the flexible autoencoder architecture of our model can generate more accurate latent representations on non-sequential data. Experiments on OfflinePay evaluate the effectiveness of sequential data modeling.

Datasets and preprocessing
--------------------------

The first dataset, [*ml-100k*]{}, contains ratings from 943 users on 1,682 movies. It has demographic data for users and descriptions for movies.
The second dataset, [*ml-10m*]{}, contains 10,000,054 ratings and 95,580 tags from 71,567 users for 10,681 movies. It contains item content information, but no demographic data. We employ user-added tags as an information source for users as well as for movies. OfflinePay is a dataset of user purchases in (offline) shops, paying with a plastic e-money card. The dataset contains a total of 67M transaction records from a four-month period. The goal of using the OfflinePay dataset is to recommend new shop genres to users, not individual products. After aggregating all transaction data into the format of (user $i$, shop genre $j$, number of transactions $r_{ij}$), and removing shoppers who used only one shop genre, the number of $r_{ij}$ values is 7,150,833 with 961,992 unique users and 105 shop genres. The auxiliary data sources include user registered information and shop genre textual descriptions. Additionally, we collect user purchase history on an e-commerce platform during the same time period. The sequence data contains the genres of purchased items online. The datasets are preprocessed to fixed-length embeddings for non-sequential data, and sequences of embedding vectors for sequential data, respectively. For [*ml-100k*]{}, we discretize continuous features like age to discrete values, compute a bag-of-words vector for each user and item. The vector dimensions are 821 for users, 2,482 for movies, respectively. For [*ml-10m*]{}, movie content description and tags that users give to items are textual information. We first tokenize texts, then train Doc2vec vectors for every data source with the embedding vector length set to 500. To generate shop genre embedding vectors for the OfflinePay dataset, all shop names that belong to same genre are grouped together and Doc2vec is applied to generate a 300-dimensional vector for each shop genre. User registered information is preprocessed the same way as [*ml-100k*]{}, and the vector length is 189. 
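The fixed-length encodings described above can be sketched in a few lines. The vocabularies and categorical fields below are illustrative stand-ins; the real vectors are the 821-d/2,482-d bag-of-words and 189-d registered-information inputs stated in the text:

```python
import numpy as np

def bag_of_words(doc_tokens, vocab):
    """Fixed-length count vector over a fixed vocabulary."""
    index = {w: i for i, w in enumerate(vocab)}
    v = np.zeros(len(vocab))
    for tok in doc_tokens:
        if tok in index:
            v[index[tok]] += 1
    return v

def one_hot(value, categories):
    """One-hot encoding for a categorical field such as an age bucket."""
    v = np.zeros(len(categories))
    v[categories.index(value)] = 1.0
    return v

# Illustrative vectors: a tiny tag vocabulary and two demographic fields
vocab = ["action", "comedy", "drama", "thriller"]
bow = bag_of_words(["drama", "drama", "action"], vocab)
demo = np.concatenate([one_hot("F", ["M", "F"]),
                       one_hot("25-34", ["<25", "25-34", "35+"])])
```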
For the sequence of genre purchase history, Word2vec is adopted to build 100-d embedding vectors after tokenization. Genres in each sequence are mapped to the corresponding embedding vectors. In the experiments, we rank the predicted ratings of candidate items and recommend the top $M$ to each user. Mean average precision (MAP) and recall are used as evaluation metrics.

[0.33]{}![image](ml100k_f50_recall_0824.png){width="\linewidth"} [0.33]{}![image](ml100k_f100_recall_0824.png){width="\linewidth"} [0.33]{}![image](ml100k_f150_recall_0824.png){width="\linewidth"} [0.33]{}![image](ml10m_f50_recall_0824.png){width="\linewidth"} [0.33]{}![image](ml10m_f100_recall_0824.png){width="\linewidth"} [0.33]{}![image](ml10m_f150_recall_0824.png){width="\linewidth"}

Experimental setting
--------------------

The number of hidden layers of each model is optimized on a validation dataset. The first fusion hidden layer of DHA is used to bridge the joint training between feature space learning and collaborative filtering. For other models, if the total number of hidden layers is $L$, we connect layer $L/2$ for joint training. The number of units in each hidden layer is incremented by $K$ from the middle of the autoencoder to both sides. For sequential data modeling, the most recent $T$ purchases are used, and values $T \in \{5, 10\}$ are evaluated in our experiments. The mini-batch size is set to 50 and 1,000 for [*ml-100k*]{} and [*ml-10m*]{}, respectively. For the OfflinePay dataset, since the numbers of unique users and items differ significantly, it is set to 20 for items and 10,000 for users, respectively. The model is implemented using the Theano library.

Experiments on MovieLens datasets
---------------------------------

We compare our model with the following algorithms. Note that experiments on [*MovieLens*]{} do not include sequential data.

- AutoRec [@Sedhain2015]: I-AutoRec takes a partial item feedback vector as input and reconstructs it at the output layer.
- CDL [@hwang2015]: a hierarchical Bayesian model that jointly performs deep representation learning for content information and collaborative filtering for the ratings matrix.
- DCF [@Lis2015]: a model that combines matrix factorization with marginalized denoising stacked autoencoders. We concatenate side information as input to DCF.
- aSDAE [@Dong2017]: a hybrid model that integrates side information by an additional denoising autoencoder into the matrix factorization model.
- DHA: the proposed model, which applies an independent autoencoder architecture to each heterogeneous data source.

To compare the different models, we repeat 80-20 splits of the data 5 times, run 5-fold cross validation and report average performance. Grid search is applied to find optimal hyperparameters for all models. We search the learning rate of SGD, $\alpha \in \{ 0.1, 0.05, 0.01, 0.001 \} $, the regularization of learned parameters, $\lambda_f$ and $\lambda_w$ of our model $ \in \{ 2.0, 0.1, 0.01, 0.001 \} $, the corruption level of masking noise $\in \{ 0.1, 0.3 \} $, the activation function $\in \{sigmoid, relu\} $, and the number of fusion hidden layers $\in \{ 1, 2 \} $. The parameters used to balance the loss between users and items, $\lambda_{m}, \lambda_{n}, \lambda_{u}, \lambda_{v}$, are set to 1. For CDL, DCF and aSDAE, we search the number of hidden layers between 4 and 6. The joint training is alternated 5 times, and we run 5 epochs for learning features in each alternation. Before the joint training, layer-wise pretraining is conducted to initialize network weights. For the experiment on [*ml-100k*]{}, we input rating vectors, item content information and user demographic data to DHA and aSDAE. Rating vectors are not used in DCF, and only item content information is used in CDL. I-AutoRec leverages no side information. After grid search, the adopted hidden layer number of CDL, DCF and aSDAE is 4. The number of fusion hidden layers is set to 1 for DHA.
The parameter for regularizing learned parameters is set to 0.01 in DHA, 0.001 in CDL and aSDAE, and 0.1 in DCF. The optimal performance is found when the learning rate is set to 0.001 for CDL and DCF, 0.01 for aSDAE and DHA, and 0.1 for I-AutoRec.

\[tab: map@100 for ml-100k, ml-10m\]

  ----------- -- -------- -------- -------- -- -------- -------- --------
                      [*ml-100k*]{}                  [*ml-10m*]{}
  Model          d=50     d=100    d=150      d=50     d=100    d=150
  I-AutoRec      0.0573   0.0568   0.0572     0.0325   0.0323   0.0326
  CDL            0.1896   0.1825   0.1685     0.1458   0.1532   0.1612
  DCF            0.2012   0.2028   0.2069     0.1591   0.1620   0.1566
  aSDAE          0.2161   0.2228   0.2142     0.1602   0.1560   0.1642
  DHA            0.2236   0.2304   0.2258     0.1793   0.1774   0.1824
  ----------- -- -------- -------- -------- -- -------- -------- --------

  : [**MAP@100 comparison on [*ml-100k*]{} and [*ml-10m*]{} datasets.**]{} *Results are shown for three different settings of user and item latent factor vectors, $d$=50, 100, and 150.*

As shown in Fig. \[fig:recall\_comparison\_ml-100k\_and\_ml-10m\], all models achieve better recall than I-AutoRec, showing the advantage of using side information. DHA and aSDAE perform better than CDL, which only incorporates the item content description. DHA outperforms aSDAE, which integrates raw side information at every hidden layer. The MAP comparison in Table \[tab: map@100 for ml-100k, ml-10m\] shows that our model obtains more precise results for all dimension settings.

There are five sets of available inputs for the experiment on the [*ml-10m*]{} dataset. Users and movies have rating and tag vectors, and movies also have content vectors. For CDL, DCF and aSDAE, the different information vectors are concatenated as input. Our model uses all components, [*i.e.*]{} two components for users and three components for movies. In the experiment, the best performance is obtained when the number of hidden layers is set to 4 for CDL and aSDAE and to 6 for DCF. In our model, we use 2 fusion hidden layers and different layer numbers for the components.
The number of hidden layers, $L_c$, is set to 4 for user and movie rating vectors and to 2 for tag and content vectors. As shown in Fig. \[fig:recall\_comparison\_ml-100k\_and\_ml-10m\], DHA obtains better recall performance compared to the other algorithms. aSDAE is competitive and outperforms both DCF and CDL in all three dimension settings. The MAP comparison in Table \[tab: map@100 for ml-100k, ml-10m\] indicates that in addition to producing recommendations with better recall, our model also achieves better precision.

![Recall comparison on the OfflinePay dataset for latent dimensions $d$ = 50, 100, and 150.[]{data-label="fig:recall_comparison_offlinepay"}](edy_f50_recall_0910.png){width="\linewidth"}

Experiments on OfflinePay dataset
---------------------------------

Since the OfflinePay dataset involves user online purchase histories, we use the first 3 months of data for training, the following half month as validation set to find optimal parameters, and the remaining half month as test set. We compare the following algorithms:

- implicit-cf [@Hu2008]: a matrix factorization model for implicit feedback datasets.

- CDL [@hwang2015]: a Bayesian model that learns a feature space from item information and is jointly trained with CF.

- DCF [@Lis2015]: a model that incorporates side information by marginalized denoising stacked autoencoders with a matrix factorization model.

- DHA-RNNED-s10: our model that learns a latent representation only from the sequence of online purchases. The number of time steps in each sequence is 10.

- DHA-RNNED-s5: our model with the same modeling process as DHA-RNNED-s10, but using 5 time steps in each sequence.

- DHA-RNNED-item: our model that extracts features from sequential online purchases on the user side and from shop genre descriptions on the item side.

- DHA-all: the proposed model that leverages the non-sequential side information sets and sequential online purchase activities simultaneously.
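All of these models are compared by recall and MAP over top-$M$ recommendation lists. A minimal sketch of those two metrics follows, using the standard top-$M$ definitions (the paper does not spell out its exact MAP variant, so the `min(|relevant|, M)` normalization below is an assumption):

```python
def recall_at_m(ranked, relevant, m):
    """Fraction of a user's held-out relevant items found in the top-M list."""
    if not relevant:
        return 0.0
    hits = sum(1 for item in ranked[:m] if item in relevant)
    return hits / len(relevant)

def average_precision_at_m(ranked, relevant, m):
    """AP@M for one user: precision accumulated at the rank of each hit."""
    hits, score = 0, 0.0
    for rank, item in enumerate(ranked[:m], start=1):
        if item in relevant:
            hits += 1
            score += hits / rank
    denom = min(len(relevant), m)
    return score / denom if denom else 0.0

def mean_average_precision(rankings, relevants, m=100):
    """MAP@M: mean of AP@M over all users."""
    aps = [average_precision_at_m(r, rel, m)
           for r, rel in zip(rankings, relevants)]
    return sum(aps) / len(aps)
```

For example, a user with held-out items $\{2, 4\}$ and ranked list $[1, 2, 3, 4]$ gets recall@4 of 1.0 and AP@4 of 0.5, since the two hits sit at ranks 2 and 4.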
For DHA-all, the number of time steps in the purchase sequence is 10. In the experiment, the joint learning is alternated 3 times, and we run 3 epochs for feature extraction in each alternation. The number of hidden layers for CDL, DCF and our model is set to 4, and 1 fusion hidden layer is used in the DHA models. For the sequential modeling, we set the number of hidden units of the LSTMs equal to the dimension of the user and item latent factor vectors. The SGD learning rate and regularization parameters for each model are found by grid search on the validation set. We set the learning rate to 0.1 for implicit-cf and CDL, to 0.001 for DCF, and to 0.01 for the other models. The parameter regularizing the learned parameters is set to 2.0 for CDL, and to 0.1 for DCF and DHA-all. There is no training alternation for implicit-cf, but we run 25 iterations to learn the user and item latent factor vectors. DCF integrates both user registration information and shop genre descriptions, while CDL uses only the latter. DHA-RNNED-s10 and DHA-RNNED-s5 do not include any side information except user online purchases. DHA-RNNED-item adopts sequential data and shop genre descriptions, and DHA-all utilizes all of the data. Note that since ratings are not used in any of the models, aSDAE is not applied on the OfflinePay dataset.

From Fig. \[fig:recall\_comparison\_offlinepay\], we observe that models taking advantage of side information have better recall than the baseline implicit-cf. CDL outperforms DCF which, in fact, uses more information sets. This may be due to the fact that many user registration records have outdated or missing values, making the feature extraction less accurate. Compared to CDL and DCF, the proposed models with sequential data modeling achieve better recall. This is due to the fact that the offline shop genres in the dataset are included in the online purchased genres.
This also indicates that latent features can be extracted accurately from recent online purchases, reflect the trends of user interests, and thus lead to better recommendations for offline products as well. The MAP comparison in Table \[tab: map@100 for offlinepay\] shows that the models involving sequential modeling achieve higher precision. This consistently shows that the modeling of online purchases helps with offline product recommendation.

The recall comparison in Fig. \[fig:recall\_comparison\_offlinepay\] shows that DHA-RNNED-s10 and DHA-RNNED-s5 have a similar trend as the number of recommended items $M$ increases. These two models use only the sequence of purchased genres from an online e-commerce platform, but with different numbers of time steps in the sequence. It is also shown that DHA-all and DHA-RNNED-item have similar recalls. The difference between these two models is that the latter does not include user registration data. Linking this to the previous observation that CDL outperforms DCF, user data does not contribute significantly to the recommendation results.

In order to compare the effect of purchase recency in the input sequence, we apply DHA-RNNED-s10 and DHA-RNNED-s5 to encode the recent ten and five purchases, respectively. Our hypothesis is that more recent online purchases are more representative of current user interests. Although the difference is not large, the recall and MAP comparisons support our hypothesis. The experiments demonstrate that with the independent autoencoder structure for user and item side information and the modeling of user online activities, our model is able to achieve competitive recall and MAP results.

Conclusions
===========

We proposed a model that incorporates multiple sources of heterogeneous auxiliary information in a consistent way to alleviate the data sparsity problem of recommender systems.
It takes static and sequential data as input and captures both the inherent tastes of users and the dynamics of user preferences. The model uses a flexible autoencoder structure for integrating different data sources, leading to significant performance gains.

[00]{}

Y. Hu, Y. Koren, and C. Volinsky, “Collaborative filtering for implicit feedback datasets,” In Proc. Eighth IEEE ICDM, pp. 263–272, 2008.

A. V. D. Oord, S. Dieleman, and B. Schrauwen, “Deep content-based music recommendation,” In Proc. 26th International Conference on NIPS, Vol. 2, pp. 2643–2651, 2013.

F. Zhang, N. J. Yuan, D. Lian, X. Xie, and W. Y. Ma, “Collaborative knowledge base embedding for recommender systems,” In Proc. 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 353–362, 2016.

S. Oramas, O. Nieto, M. Sordo, and X. Serra, “A deep multimodal approach for cold-start music recommendation,” In Proc. 2nd Workshop on Deep Learning for Recommender Systems, 2017.

I. Porteous, A. Asuncion, and M. Welling, “Bayesian matrix factorization with side information and Dirichlet process mixtures,” In Proc. 24th AAAI, pp. 563–568, 2010.

P. Loyola, C. Liu, and Y. Hirate, “Modeling user session and intent with an attention-based encoder-decoder architecture,” In Proc. 11th ACM Conference on Recommender Systems, pp. 147–151, 2017.

H. Wang, N. Wang, and D. Y. Yeung, “Collaborative deep learning for recommender systems,” In Proc. 21st ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 1235–1244, 2015.

P. Vincent, H. Larochelle, Y. Bengio, and P. A. Manzagol, “Extracting and composing robust features with denoising autoencoders,” In Proc. 25th ICML, pp. 1096–1103, 2008.

P. Vincent, H. Larochelle, I. Lajoie, Y. Bengio, and P. A. Manzagol, “Stacked denoising autoencoders: learning useful representations in a deep network with a local denoising criterion,” JMLR, Vol. 11, pp. 3371–3408, 2010.

W. Wang, R. Arora, K. Livescu, and J. Bilmes, “On deep multi-view representation learning,” In Proc. 32nd ICML, Vol. 37, pp. 1083–1092, 2015.

K. Cho, B. van Merrienboer, C. Gulcehre, et al., “Learning phrase representations using RNN encoder–decoder for statistical machine translation,” In Proc. 2014 Conference on Empirical Methods in Natural Language Processing, pp. 1724–1734, 2014.

S. Li, J. Kawale, and Y. Fu, “Deep collaborative filtering via marginalized denoising auto-encoder,” In Proc. 24th ACM CIKM, pp. 811–820, 2015.

X. He, L. Liao, H. Zhang, L. Nie, X. Hu, and T. S. Chua, “Neural collaborative filtering,” In Proc. 26th International Conference on WWW, pp. 173–182, 2017.

L. Zheng, V. Noroozi, and P. S. Yu, “Joint deep modeling of users and items using reviews for recommendation,” In Proc. Tenth ACM International Conference on Web Search and Data Mining (WSDM), pp. 425–434, 2017.

C. Y. Wu, A. Ahmed, A. Beutel, A. J. Smola, and H. Jing, “Recurrent recommender networks,” In Proc. Tenth ACM International Conference on Web Search and Data Mining (WSDM), pp. 495–503, 2017.

C. Wang and D. M. Blei, “Collaborative topic modeling for recommending scientific articles,” In Proc. 17th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 448–456, 2011.

S. Sedhain, A. K. Menon, S. Sanner, and L. Xie, “AutoRec: autoencoders meet collaborative filtering,” In Proc. 24th International Conference on WWW, pp. 111–112, 2015.

S. Zhang, L. Yao, and X. Xu, “AutoSVD++: an efficient hybrid collaborative filtering model via contractive autoencoders,” In Proc. 40th International ACM SIGIR Conference on Research and Development in Information Retrieval, pp. 957–960, 2017.

X. Dong, L. Yu, Z. Wu, Y. Sun, L. Yuan, and F. Zhang, “A hybrid collaborative filtering model with deep structure for recommender systems,” AAAI, 2017.

I. Goodfellow, Y. Bengio, and A. Courville, “Deep Learning,” MIT Press, 2016.
--- abstract: | Monolayers and multilayers of semiconducting transition metal dichalcogenides (TMDCs) offer an ideal platform to explore valley-selective physics with promising applications in valleytronics and information processing. Here we manipulate the energetic degeneracy of the $\mathrm{K}^+$ and $\mathrm{K}^-$ valleys in few-layer TMDCs. We perform high-field magneto-reflectance spectroscopy on WSe$_2$, MoSe$_2$, and MoTe$_2$ crystals of thickness from monolayer to the bulk limit under magnetic fields up to 30 T applied perpendicular to the sample plane. Because of a strong spin-layer locking, the ground state A excitons exhibit a monolayer-like valley Zeeman splitting with a negative $g$-factor, whose magnitude increases monotonically when thinning the crystal down from bulk to a monolayer. Using $\mathbf{k\cdot p}$ calculations, we demonstrate that the observed evolution of $g$-factors for the different materials is well accounted for by hybridization of electronic states in the $\mathrm{K}^+$ and $\mathrm{K}^-$ valleys. The mixing of the valence and conduction band states induced by the interlayer interaction decreases the $g$-factor magnitude with an increasing layer number. The effect is the largest for MoTe$_2$, followed by MoSe$_2$, and smallest for WSe$_2$. Keywords: MoSe$_2$, WSe$_2$, MoTe$_2$, valley Zeeman splitting, transition metal dichalcogenides, excitons, magneto optics. author: - Ashish Arora - Maciej Koperski - Artur Slobodeniuk - Karol Nogajewski - Robert Schmidt - Robert Schneider - 'Maciej R. Molas' - Steffen Michaelis de Vasconcellos - Rudolf Bratschitsch - Marek Potemski title: | Zeeman spectroscopy of excitons and hybridization of electronic states\ in few-layer WSe$_2$, MoSe$_2$ and MoTe$_2$ --- Hybridization of electronic states in van der Waals-coupled layers of semiconducting transition metal dichalcogenides (TMDCs) significantly affects their energy bands and optical properties.
Most striking is a dramatic change in the quasiparticle band gap character, from a direct bandgap at the $\mathrm{K}$-point of the Brillouin zone in monolayers to an indirect $\Gamma-\Lambda$ band gap in multilayers and bulk crystals [@1; @2]. In contrast, the energy of the optical band gap, which is due to $\mathrm{K}$-point excitons in any mono-, multi- and bulk crystals, depends rather weakly on the number of layers in TMDC stacks [@1]. This effect is due to both the hybridization of electronic states at the $\mathrm{K}$-points [@3] and the change in the dielectric environment with the number of layers [@4]. While the hybridization of the electronic states leads to (often unresolved) multiplets of intralayer (electron and hole within the same layer) and spatially separated interlayer excitons (electron and hole confined to different layers), the dielectric environment largely determines the excitonic binding energy and the optical band gap. The hybridization of electronic states in TMDC multilayers is also encoded in the magnitudes of the effective Landé $g$-factors of the coupled states. However, in contrast to the energetic positions of electronic resonances, $g$-factors are less sensitive to the effects of Coulomb interaction (dielectric environment) [@5]. In TMDC monolayers, the band structure at the $\mathrm{K}$-point consists of energetically degenerate states at the $\mathrm{K}^+$ and $\mathrm{K}^-$ valleys. However, the two valleys possess opposite magnetic moments, and can be individually addressed using $\sigma^+$ and $\sigma^-$-polarized light [@1]. An externally applied magnetic field in the Faraday geometry lifts the valley degeneracy, resulting in a so-called valley Zeeman splitting [@1]. Therefore, the $g$-factors of the excitons can be measured using helicity-resolved spectroscopy under magnetic fields [@6; @7; @8; @9; @10; @11; @12; @13; @14; @15; @16; @17; @18; @19].
In multilayer and bulk TMDCs, it has been found that the spin orientation of the carriers is strongly coupled to the valleys within the individual layers (“spin-layer locking”) [@17; @19; @20; @21]. Therefore, many salient features of monolayer physics are preserved in multilayers. As a consequence, intralayer excitons form with their characteristic negative $g$-factors [@17; @21]. Moreover, spin-layer locking effects have recently enabled the unambiguous identification of interlayer excitons in bulk TMDCs with positive $g$-factors [@17; @19]. However, a systematic investigation of the effect of layer number and the hybridization of electronic states on the valley Zeeman effect has not been reported so far. Here, we perform circular polarization-resolved micro-reflectance contrast ($\mu$RC) spectroscopy on 2H-WSe$_2$, 2H-MoSe$_2$ and 2H-MoTe$_2$ crystals of variable thickness (from monolayer to bulk) under high magnetic fields of up to $B=30$ T and at a temperature of $T=4$ K. We measure the layer thickness-dependent valley Zeeman splittings of the ground state A excitons ($X_A^{1s}$) and compare the observed trends with the $\mathbf{k\cdot p}$ theory. The model takes into account the interlayer admixture of valence and conduction bands and corrections from the higher and lower bands of adjacent layers at the $\mathrm{K}$-point of the Brillouin zone. We find that the hybridization of the electronic states at the band extrema has profound effects on the $g$-factors of the excitons. Overall, the magnitude of the exciton $g$-factor decreases with increasing layer thickness, where the extent of this reduction depends on the magnitude of the interlayer interaction in the TMDCs.

Experiment
==========

Monolayer and few-layer flakes of TMDCs are mechanically exfoliated [@22] onto SiO$_2$(80nm)/Si substrates.
The layer number in the MoSe$_2$ and WSe$_2$ crystals is determined by the optical contrast, Raman spectroscopy and the low-temperature (liquid helium) micro-photoluminescence [@23; @24; @25]. For MoTe$_2$, the thickness characterization was performed using ultra-low frequency Raman spectroscopy [@26; @27; @28], in addition to the reflectance contrast and atomic force microscopy (AFM) measurements (see Fig. \[MOKEvsB\] in Appendix \[Sec:experiment\]). Magneto-reflectance measurements are performed using a fiber-based low-temperature probe inserted inside a resistive magnet with 50 mm bore diameter, where magnetic fields up to 30 T are generated in the center of the magnet. Light from a tungsten halogen lamp is routed inside the cryostat using an optical fiber of 50 $\mu$m diameter and focused on the sample to a spot of about 10 $\mu$m diameter with an aspheric lens of focal length 3.1 mm (numerical aperture NA=0.68). The sample is displaced by $x-y-z$ nano-positioners. The reflected light from the sample is circularly polarized using the combination of a quarter wave plate (QWP) and a polarizer. The emitted polarized light is collected using an optical fiber of 200 $\mu $m diameter, dispersed with a monochromator and detected using a liquid nitrogen cooled Si CCD (WSe$_2$ and MoSe$_2$) or InGaAs array (MoTe$_2$). During the measurements, the configuration of QWP-polarizer assembly is kept fixed, producing one state of circular polarization, whereas the effect corresponding to the other polarization state can be measured by reversing the direction of magnetic field, as a result of the time reversal symmetry [@11; @29]. We define the reflectance contrast $C(\lambda)$ at a given wavelength $\lambda$ as $C(\lambda)=[R(\lambda)-R_0 (\lambda)]/[R(\lambda)+R_0(\lambda)]$, where $R_0(\lambda)$ is the reflectance spectrum of the SiO$_2$/Si substrate and $R(\lambda)$ is the one of the TMDC flake kept on the substrate. 
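The contrast defined above is computed point by point over the measured spectra; a one-line numerical sketch (function name is ours):

```python
import numpy as np

def reflectance_contrast(R, R0):
    """C = (R - R0) / (R + R0), element-wise over wavelength, for a
    flake spectrum R and the bare SiO2/Si substrate spectrum R0."""
    R, R0 = np.asarray(R, dtype=float), np.asarray(R0, dtype=float)
    return (R - R0) / (R + R0)
```

Because the substrate reference enters both numerator and denominator, $C(\lambda)$ is insensitive to the lamp's spectral shape and bounded between $-1$ and $1$.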
$C(\lambda)$ spectral line shapes are modeled using a transfer matrix method-based approach to obtain the transition energies [@30]. The excitonic contribution to the dielectric response function is assumed to follow a Lorentz oscillator-like model [@5; @31] $$\epsilon(E)=(n_b+ik_b)^2+\sum_{j}\frac{A_j}{E_{0j}^2-E^2-i\gamma_jE},$$ where $n_b+ik_b$ is the background complex refractive index of the TMDC being investigated, which excludes excitonic effects and is kept equal to that of the bulk material (WSe$_2$ [@32], MoSe$_2$ [@33], or MoTe$_2$ [@33] in the respective cases). $E_0$, $A$ and $\gamma$ are the transition energy, the oscillator strength parameter, and the full width at half maximum (FWHM) linewidth parameter, whereas the index $j$ runs over the excitons. ![(a)-(d) Helicity-resolved microreflectance contrast spectra of the ground state A excitons ($X_A^{1s}$) in 1L, 2L, 3L and bulklike WSe$_2$ crystals, respectively, measured at a temperature of T = 4.2 K under magnetic fields of 0 T, 15 T, and 30 T. Orange and blue spheres represent the experimental data for the $\sigma^+$ and $\sigma^-$ polarizations, respectively, whereas solid lines are the modeled spectra. The curves for $B>0$ T are shifted vertically with respect to the $B=0$ T measurement for clarity. (e)-(h) Excitonic transition energies derived for the two circular polarizations from the modeled spectra in (a)-(d), respectively, as a function of magnetic field from 0 to 30 T. (i)-(l) Green circles represent the Zeeman splittings for the corresponding cases in (e)-(h), respectively, whereas solid lines are linear fits to the data.[]{data-label="figure1"}](Fig1.png){width="8"} Figures 1(a)-(d) depict $\sigma^+$ (orange) and $\sigma^-$ (blue) components of the $\mu$RC spectra of the ground state A exciton ($X_A^{1s}$) in 1L, 2L, 3L and bulklike WSe$_2$ crystals kept at a liquid He temperature of 4.2 K under magnetic fields of 0 T, 15 T and 30 T.
With increasing magnetic field, one clearly observes an energetic splitting between the two circular components of the excitonic features, indicating the Zeeman effect. The spectra are modeled (solid black lines) using the transfer matrix method as described before. The derived $X_A^{1s}$ transition energies for the two circular polarizations for the four cases are displayed in Figs. 1(e)-(h). The excitonic Zeeman splittings are defined as $\Delta E_X=E_{\sigma^+}-E_{\sigma^-}=g_X\mu_B B$, where $E_{\sigma^+}$ and $E_{\sigma^-}$ are the transition energies for the two circular polarizations, $g_X$ is the exciton’s effective $g$-factor and $\mu_B$ is the Bohr magneton (0.05788 meV/T). The Zeeman splittings calculated from Figs. 1(e)-(h) are shown in Figs. 1(i)-(l) respectively (green circles), as a function of magnetic field. Fig. 2 displays the corresponding data for the 1L to 3L thick MoSe$_2$, while Fig. 3 shows the plots for 2L, 3L and 4L MoTe$_2$. The magnitude of the splitting increases linearly with rising magnetic field in all cases. The excitonic $g$-factors obtained from the above analysis are summarized in Table 1 and plotted in Fig. 4. The 1L and 40 nm thick bulklike MoTe$_2$ crystals, whose $g$-factors ($-4.8\pm0.2$ [@11] and $-2.4\pm0.1$ [@17], respectively) are also marked in Fig. 4, are obtained from the same single crystalline source material on SiO$_2$/Si substrates as the 2L, 3L, and 4L samples. For bulk MoSe$_2$, the $g$-factor was measured for a 30 nm thick flake exfoliated on a sapphire substrate [@19]. Interestingly, the absolute value of the $g$-factor for $X_A^{1s}$ clearly decreases monotonically with increasing layer thickness and approaches the limiting bulk value of $-3.4\pm0.1$, $-2.7\pm0.1$, and $-2.4\pm0.1$ for WSe$_2$, MoSe$_2$ [@19], and MoTe$_2$ [@17], respectively.
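Extracting $g_X$ from the measured splittings amounts to a linear fit of $\Delta E_X$ versus $B$ and dividing the slope by $\mu_B$. A minimal sketch of this step (function name is ours, not from the analysis code of the paper):

```python
import numpy as np

MU_B = 0.05788  # Bohr magneton in meV/T, as quoted in the text

def exciton_g_factor(B, delta_E):
    """Effective exciton g-factor from Delta E = g * mu_B * B.

    B       : magnetic fields in T
    delta_E : E(sigma+) - E(sigma-) in meV
    Returns the slope of a first-order polynomial fit divided by mu_B.
    """
    slope, _intercept = np.polyfit(np.asarray(B), np.asarray(delta_E), 1)
    return slope / MU_B
```

For the 1L WSe$_2$ data, for example, a slope of about $-0.22$ meV over 0 to 30 T corresponds to $g_X \approx -3.8$.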
![(a)-(c) Helicity-resolved microreflectance contrast spectra of the ground state A excitons ($X_A^{1s}$) in 1L, 2L and 3L MoSe$_2$ crystals, respectively, at a temperature of T = 4.2 K under magnetic fields of 0 T, 15 T, and 30 T. Orange and blue spheres represent the experimental data for the $\sigma^+$ and $\sigma^-$ polarizations respectively, whereas solid lines are the modeled spectra. The curves for $B>0$ T are shifted vertically with respect to the $B=0$ T measurement for clarity. (d)-(f) Excitonic transition energies derived for the two circular polarizations from the modeled spectra in (a)-(c), respectively, as a function of magnetic field from 0 to 30 T. (g)-(i) Green circles represent the Zeeman splittings for the corresponding cases in (d)-(f), respectively, whereas solid lines are linear fits to the data.[]{data-label="figure2"}](Fig2.png){width="8"} ![(a)-(c) Helicity-resolved microreflectance contrast spectra of the ground state A excitons ($X_A^{1s}$) in 2L, 3L and 4L MoTe$_2$ crystals, respectively, at a temperature of T = 4.2 K under different magnetic fields. Orange and blue spheres represent the experimental data for the $\sigma^+$ and $\sigma^-$ polarizations respectively, whereas solid lines are the modeled spectra. The curves for $B>0$ T are shifted vertically with respect to the $B=0$ T measurement for clarity. (d)-(f) The excitonic transition energies obtained from modeling the reflectance contrast spectra in (a)-(c), respectively. 
(g)-(i) Green circles represent the Zeeman splittings for the corresponding cases in (d)-(f), respectively, whereas solid lines are linear fits to the data.[]{data-label="figure3"}](Fig3.png){width="8"}

  Layer thickness   WSe$_2$        MoSe$_2$       MoTe$_2$
  ----------------- -------------- -------------- ----------------
  1L                $-3.8\pm0.1$   $-4.2\pm0.1$   $-4.8\pm0.2$
  2L                $-3.7\pm0.1$   $-3.6\pm0.1$   $-4.12\pm0.05$
  3L                $-3.5\pm0.1$   $-3.6\pm0.1$   $-3.5\pm0.1$
  4L                -              -              $-3.0\pm0.1$
  Bulk              $-3.4\pm0.1$   $-2.7\pm0.1$   $-2.4\pm0.1$
  ----------------- -------------- -------------- ----------------

  : Effective $g$-factors of the ground state A exciton ($X_A^{1s}$) for different layer thicknesses.

Theory
======

The experimental results presented above demonstrate a significant deviation of the $A$-exciton $g$-factor in multilayer TMDCs from the one found in the monolayer. Moreover, the absolute value of the $g$-factor decreases monotonically with the number of layers $N$. Although weak, the hybridization of the $\mathrm{K}$-point electronic states is likely at the origin of this effect. In order to confirm this hypothesis, we develop a $\mathbf{k\cdot p}$ theory [@13; @34; @35; @36] both for mono- and multilayers and derive the exciton $g$-factors in this framework. Namely, we consider the properties of the quasiparticles in the corners of the first Brillouin zone (where the studied optical transitions take place). In the following, we focus on the $\mathrm{K}^+$ point for brevity. Let us first consider a TMDC monolayer situated in the $xy$ plane. Electronic excitations at the $\mathrm{K}^+$ point of such a system are described by a set of Bloch states $\{|\Psi_n,s\rangle\}$ with energies $\{E_{ns}\}$. The subscript $n$ enumerates the bands, and the index $s=\uparrow,\downarrow$ determines their spin degrees of freedom.
According to the $\mathbf{k\cdot p}$ method, the quasiparticles with momentum $\mathbf{k}=(k_x,k_y)$ near the $\mathrm{K}^+$ point are described by the matrix elements $\langle\Psi_n,s|\widehat{H}^{(1)}|\Psi_{n'},s'\rangle$ of the one-particle Hamiltonian $$\begin{aligned} \widehat{H}^{(1)}(\boldsymbol{\rho},z)=& \frac{\mathbf{\widehat{p}}^2}{2m_0}+ U(\boldsymbol{\rho},z) + \nonumber \\ +& \frac{\hbar}{4m_0^2c^2}\big[\nabla U(\boldsymbol{\rho},z),\mathbf{\widehat{p}}\big]\boldsymbol{\sigma} + \frac{\hbar}{m_0}\mathbf{k\widehat{p}}.\end{aligned}$$ Here $m_0$ is the free electron mass, $c$ is the speed of light, $\hbar$ is the reduced Planck constant, and $\boldsymbol{\sigma}=(\sigma_x, \sigma_y, \sigma_z)$ are the Pauli matrices. We also introduced the in-plane coordinate $\boldsymbol{\rho}=(x,y)$, the momentum operator $\mathbf{\widehat{p}}=-i\hbar\nabla$ and the crystal field of a monolayer $U(\boldsymbol{\rho},z)$. The first two terms of the Hamiltonian define the energies $E_n$ of the bands, which are doubly degenerate in spin. The next part describes the spin-orbit interaction. It lifts the spin degeneracy of the $n$-th band by the value $\Delta_n$ ([*i.e.*]{} in total, one has $E_n\pm \Delta_n/2$ for the $s=\uparrow,\downarrow$ states, respectively). The last $\mathbf{k\widehat{p}}$ term couples different Bloch states of the monolayer. This coupling gives rise to an additional energy of the $n$-th band, $\delta E_n = g_n\mu_B B$, in the presence of a magnetic field $\mathbf{B}=B\mathbf{e}_z$. Here $\mu_B$ is the Bohr magneton. According to the Roth formula [@37], the spin-independent $g$-factor of the $n$-th band is $$g_n\!=\!\frac{1}{2m_0}\!\!\sum_{n'\neq n}\frac{|\langle\Psi_n,s|\widehat{p}_+|\Psi_{n'},s\rangle|^2- |\langle\Psi_n,s|\widehat{p}_-|\Psi_{n'},s\rangle|^2}{E_n-E_{n'}},$$ where $\widehat{p}_\pm=\widehat{p}_x\pm i\widehat{p}_y$. The interaction of the electron’s magnetic moment with the magnetic field gives the spin correction $\delta E_s=\sigma_s\mu_BB$, where $\sigma_s=+1(-1)$ for $s=\uparrow(\downarrow)$.
Finally, the energy of the $n$-th band at the $\mathrm{K}^+$ point is $E_{ns}(B)=E_n+\sigma_s\Delta_n/2 + g_n\mu_BB + \sigma_s\mu_BB$. In this picture, the experimentally measured $A$-exciton $g$-factor is the doubled difference $g_{exc}=2(g_c-g_v)$ between the $g$-factors of the conduction ($g_c$) and valence ($g_v$) bands. We take this result as a reference point for the calculations that follow. The $N$-layer TMDC crystal with $2\text{H}$ stacking order can be represented as a pile of monolayers separated by a distance $l$. Each successive layer of such a crystal is rotated by $180^\circ$ with respect to the previous one. The one-particle Hamiltonian for this system has the form $$\begin{aligned} \widehat{H}^{(N)}&(\mathbf{\boldsymbol{\rho}},z)=\frac{\mathbf{\widehat{p}}^2}{2m_0}+ \sum_{m=1}^{N}U\big((-1)^{m+1}\mathbf{\boldsymbol{\rho}},z-z_m\big) + \nonumber \\ +& \sum_{m=1}^{N}\frac{\hbar}{4m_0^2c^2} \big[\nabla U\big((-1)^{m+1}\boldsymbol{\rho},z-z_m\big),\mathbf{\widehat{p}}\big]\boldsymbol{\sigma} +\frac{\hbar}{m_0}\mathbf{k\widehat{p}}\end{aligned}$$ It contains a sum of the potentials of all layers, with coordinates $z_m=(m-1)l$. The potential of each even stratum has the form $U(-\boldsymbol{\rho},z-z_{2m})$. The sign “$-$” before the two-dimensional coordinate $\boldsymbol{\rho}$ represents the $180^\circ$ rotation. Note that the orientation of the first layer of the system does not depend on $N$. Hence, it is convenient to match the $\mathrm{K}^+$ point of any multilayer with the $\mathrm{K}^+$ point of its lowest layer. This uniquely determines the form of the unperturbed Bloch states in each $m$-th stratum of the system. We consider the set of such states at the $\mathrm{K}^+$ point as a new basis $\{|\Psi_n^{(m)},s\rangle\}$ of the multilayer. The states $|\Psi_n^{(1)},s\rangle$ belong to the lowest (first) stratum and are equal to $|\Psi_n,s\rangle$ by definition.
The other part of the basis can be derived from $|\Psi_n^{(1)},s\rangle$ with the help of crystal symmetry operations (see Appendix \[Sec:theory\]). We assume the orthogonality of states from different layers, $\langle\Psi_n^{(m)},s|\Psi_{n'}^{(m')},s'\rangle=\delta_{nn'}\delta_{mm'}\delta_{ss'}$. According to our choice of basis, the bands of the multilayer are $N$-fold degenerate (in leading approximation). The Roth formula is not applicable in this case. To solve this problem, we apply the Löwdin partitioning technique [@37] to the matrix $\langle\Psi_n^{(m)},s|\widehat{H}^{(N)}|\Psi_{n'}^{(m')},s'\rangle$ and derive the effective conduction-band $H^{(N)}_{cs}$ and valence-band $H^{(N)}_{vs}$ Hamiltonians. They act in the spaces spanned by the $\{|\Psi_c^{(m)},s\rangle\}$ and $\{|\Psi_v^{(m)},s\rangle\}$ basis states, respectively, and can be represented as $N\times N$ matrices. Their eigenvalues determine the multilayer $g$-factors. The effective Hamiltonians have the form of pentadiagonal matrices. Their main diagonal contains the $E_n\pm\Delta_n/2$ terms, the spin term $\delta E_s$ and a $B$-dependent correction $\delta E_n^{(m)}=g^{(m)}_n\mu_BB$ to the energies of quasiparticles from the $m$-th layer. Here and further in the text, $n$ takes the values $c$ or $v$. The corresponding $g$-factor of the $m$-th layer originates from the $\mathbf{k\widehat{p}}$ term and reads $$\begin{aligned} g^{(m)}_n=&\frac{1}{2m_0}\sum_{n'\neq n}\sum_{\eta=\pm}\frac{\eta|\langle\Psi^{(m)}_n,s|\widehat{p}_\eta |\Psi^{(m)}_{n'},s\rangle|^2}{E_n-E_{n'}} + \nonumber \\ +&\frac{1}{2m_0}\sum_{\langle\!\langle m',m\rangle\!\rangle}\sum_{n'\neq n}\sum_{\eta=\pm} \frac{\eta|\langle\Psi^{(m)}_n,s|\widehat{p}_\eta|\Psi^{(m')}_{n'},s\rangle|^2}{E_n-E_{n'}}.\end{aligned}$$ The result is a sum of intralayer and interlayer contributions. The intralayer part is nothing but the monolayer $g$-factor considered above. The interlayer part determines the deviation $\propto\delta g_n$ from this $g$-factor.
The symbol $\langle\!\langle m',m\rangle\!\rangle$ denotes the nearest neighbours of the $m$-th stratum, namely $\langle\!\langle m',1\rangle\!\rangle \rightarrow m'=2$; $\langle\!\langle m',N\rangle\!\rangle\rightarrow m'=N-1$ and $\langle\!\langle m',m\rangle\!\rangle\rightarrow m'=m-1, m+1$ for $m=2,3,\dots N-1$. We restrict the summation in this way because the next-nearest-neighbour terms are suppressed by the distance between the layers; we therefore omit them from our study. The sub- and superdiagonal matrix elements of the considered Hamiltonians describe the admixing of Bloch states between neighbouring layers. For the conduction bands, they originate from the $\mathbf{k\widehat{p}}$ terms and depend linearly on $k_x\pm ik_y$. For the valence bands, they appear from the crystal field of the neighbouring layers and are proportional to a material-dependent constant $t\sim 40\dots 70\,\text{meV}$ [@38]. The next-nearest sub- and superdiagonal matrix elements are linear in the magnetic field, $\propto \bar{g}_n\mu_BB$, and also originate from the $\mathbf{k\widehat{p}}$ terms. In our model, we made several simplifications: [*i*]{}) only the intralayer matrix elements of the spin-orbit interaction are taken into account in $\widehat{H}^{(N)}(\mathbf{\boldsymbol{\rho}},z)$; the interlayer terms are beyond the accuracy of our approximation. [*ii*]{}) The interlayer crystal field corrections to the band energy positions are supposed to be small and are omitted from our study. [*iii*]{}) The $\mathbf{k}$-dependent part of the spin-orbit interaction $\propto\big[\nabla U(\boldsymbol{\rho},z),\mathbf{k}\big]\boldsymbol{\sigma}$ is neglected. This term produces a small correction to the spin $g$-factor, which is beyond the scope of this paper. [*iv*]{}) The effective Hamiltonians are considered up to the terms linear in $\mathbf{k}$. The higher-order corrections give zero contribution to the band energy at the $\mathrm{K}^+$ point and are therefore not important in this study.
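The qualitative effect of the nearest-neighbour interlayer coupling can be illustrated with a toy numerical model: a tridiagonal $N\times N$ matrix with a single on-site energy and hopping $t$ on the first off-diagonals (a deliberate simplification of the pentadiagonal Hamiltonians above, which also carry spin, $B$-dependent and next-nearest terms). All names and numbers here are illustrative, not parameters of the paper:

```python
import numpy as np

def hybridized_levels(E0, t, N):
    """Eigenvalues of a toy N-layer Hamiltonian: on-site energy E0 on
    the diagonal, nearest-neighbour coupling t on the off-diagonals."""
    H = (np.diag([E0] * N)
         + np.diag([t] * (N - 1), 1)
         + np.diag([t] * (N - 1), -1))
    return np.linalg.eigvalsh(H)  # sorted ascending
```

For a bilayer the two levels split by exactly $2|t|$; for larger $N$ the levels fan out into a band of width approaching $4|t|$, which is the sense in which interlayer hopping spreads the degenerate monolayer bands.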
The diagonalization of the $H^{(N)}_{cs}$ and $H^{(N)}_{vs}$ matrices provides a new set of energy bands with corresponding eigenstates. Hence, we expect a series of exciton lines instead of a single $A$-exciton line. It is well known that the exciton lines in the optical spectra of TMDCs as-exfoliated on substrates such as SiO$_2$ or sapphire have a significant inhomogeneous linewidth broadening compared to the homogeneous linewidths [@39; @40; @41]. In principle, it is possible to approach the homogeneous linewidth with hBN encapsulation, which might make it possible to resolve the close-lying individual exciton lines in multilayers [@39; @40; @41]. However, in the present case, we calculate the average $g$-factor from all lines and compare it with the experiment. The corresponding observable (see the detailed $\mathbf{k\cdot p}$ analysis for each multilayer in Appendix \[Sec:theory\]) as a function of the number of layers $N$ has the following form $$g^{(N)}_{exc}= g_{exc} + 4\Big(1-\frac{1}{N}\Big)\big[\delta g_v -\delta g_c - g_u\big] + O\Big(\frac{t^2}{\Delta_v^2}\Big).$$ The parameters $\delta g_c$ and $\delta g_v$ are the interlayer corrections to the conduction- and valence-band $g$-factors, while $g_u$ and $t$ arise from the interlayer admixture of the conduction and valence band states, respectively. This formula reproduces the measured dependence of the exciton $g$-factor, provided that $\delta g_v-\delta g_c-g_u>0$. ![Effective $g$-factors of the ground state A excitons ($X_A^{1s}$) in WSe$_2$, MoSe$_2$, and MoTe$_2$ as a function of layer thickness from monolayer to the bulk limit. The $g$-factors for 1L and bulk MoTe$_2$ are taken from Refs. [@11] and [@19], respectively, and were measured on a flake obtained from the same crystal as used in the present work, and under the same experimental conditions. Solid lines represent the theoretical model as described in the main text.[]{data-label="figure4"}](figure4.png){width="5.5"} Using Eq. 6, we fit the experimental data in Fig. 
4 to the first order (solid lines). Here, we fix $g_{exc}$ to the experimentally measured $g$-factor of the monolayer. The deviation of the fits from the experimental data could be explained by the neglected second-order term $O(t^2/\Delta_v^2)$. Apart from this, our model predicts the correct qualitative trends of the $g$-factors observed in the experiment as a function of layer thickness. The fitting parameter $[\delta g_v-\delta g_c-g_u]$ is found to be equal to 0.1, 0.35 and 0.55 for WSe$_2$, MoSe$_2$ and MoTe$_2$, respectively. A larger value of this parameter in MoSe$_2$ and MoTe$_2$ points towards a stronger interlayer interaction in these materials compared to WSe$_2$. This is in agreement with the [*ab-initio*]{} calculations, where the spin-valley coupling of holes to a particular layer was found to be significantly larger than (comparable to) the interlayer hopping in W-based (Mo-based) compounds [@38]. Furthermore, an increased interlayer coupling has been reported when the chalcogen atom changes from Se to Te [@42]. The recent observation of spatially indirect (“interlayer”) excitons in bulklike MoTe$_2$ [@17] and MoSe$_2$ [@19], where a large interlayer interaction results in a significant oscillator strength of interlayer excitons [@3; @43], supports our conclusions as well. Indeed, we find that the strength of interlayer excitons is much smaller in W-based TMDCs, which leads to an absence of their signature in the optical spectra of WS$_2$ and WSe$_2$ [@19]. In summary, we have measured the Zeeman effect of intralayer A excitons in semiconducting WSe$_2$, MoSe$_2$, and MoTe$_2$ crystals of variable thickness from monolayer to the bulk limit, using helicity-resolved magneto-reflectance contrast spectroscopy under high magnetic fields up to 30 T. The magnitude of the negative $g$-factors of the A excitons displays a monotonic decrease as the layer thickness is increased from monolayer to a bulklike crystal. 
The effect is qualitatively explained with a model considering thickness-dependent interlayer interactions and band-mixing effects. Our results represent the first report devoted to the effect of band hybridization on the magneto-optics of multilayer TMDCs, and will contribute towards a better understanding of TMDCs along with future device-based applications. Acknowledgements ================ The authors acknowledge the financial support from Alexander von Humboldt foundation, German Research Foundation (DFG project no. AR 1128/1-1), European Research Council (MOMB project no. 320590), the EC Graphene Flagship project (no. 604391) and the ATOMOPTO project (TEAM programme of the Foundation for Polish Science cofinanced by the EU within the ERDFund). The authors declare no competing financial interest. Characterization of MoTe$_2$ crystals of different thickness {#Sec:experiment} ============================================================ Supplementary Fig. \[MOKEvsB\](a) shows Raman spectra of MoTe$_2$ crystals with thicknesses ranging from $1$L to $6$L and a $40$ nm thick bulklike material. The Raman-active modes A$_{1g}$, E$_{2g}^1$, B$_{2g}^1$ as well as low-frequency shear modes are clearly visible [@27; @28]. For initial characterization, $\mu$RC measurements on the flakes with layer thicknesses from $1$L to $4$L and the bulklike crystal are performed at low temperatures in the absence of magnetic field (**B**=$0$). Fig. \[MOKEvsB\](b) displays the $\mu$RC spectra as a function of layer thickness. Features corresponding to the neutral (ground state $1$s) A exciton resonance $\rm X_A^{1s}$ and a broad B exciton resonance $\rm X_B^{1s}$ are identified in the spectra. A weak shoulder at the high-energy side of $\rm X_A^{1s}$ is associated with the excited state $\rm X_A^{2s}$ exciton resonance. An additional feature at $1.183$ eV for the bulklike flake arises due to the optically active interlayer $\rm X_{IL}$ exciton [@17]. 
The derived excitonic transition energies for the various observed features are shown in Fig. \[MOKEvsB\](c). The $\rm X_A^{1s}$ resonance undergoes a red shift from $1.196 \pm 0.001$ eV to $1.1307 \pm 0.001$ eV as the layer thickness is increased from $1$L to bulk, as previously observed in WSe$_2$ [@25], MoSe$_2$ [@23], WS$_2$ [@3] and MoTe$_2$ [@26; @Ruppert2014]. At the same time, the energy difference between the $\rm X_A^{1s}$ and $\rm X_A^{2s}$ resonances decreases from $127$ meV to $25$ meV as the layer thickness is increased from $2$L to bulk ($\rm X_A^{2s}$ is not observed for the $1$L flake). This behavior points towards a reduction of the excitonic binding energy with increasing crystal thickness. It has largely been associated with an increasing dielectric constant as the layer thickness increases [@Cheiw2012], and has also been observed previously in WSe$_2$ [@25] and MoSe$_2$ [@23]. It is worth mentioning that the binding energies of the A excitons in $1$L and bulklike MoTe$_2$ have been calculated to be $710$ meV [@Ramasu2012] and $150$ meV [@17], respectively. ![image](Arora_SI_Fig1.pdf){width="13"} Theory {#Sec:theory} ====== Our purpose is to calculate the $g$-factors of $A$-excitons in the $\mathrm{K}^\pm$ valleys of a multilayer TMDC. To this end, we extend the 7-band $\mathbf{k\cdot p}$ model [@34; @35; @13] to the $N$-layer case, derive the effective Hamiltonians and then calculate the positions of the energy bands as a function of magnetic field. Namely, we use the monolayer Bloch functions to construct the basis states of a multilayer. Then, we compute the first-order $\mathbf{k\cdot p}$ and spin-orbit corrections to the Hamiltonian of the system. Finally, we derive the effective valence and conduction band Hamiltonians as a function of external magnetic field, diagonalize them and find the band $g$-factors. In what follows, we consider the $\mathrm{K}^+$ point for brevity. 
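The Löwdin partitioning step used throughout the derivation below can be illustrated with a minimal numerical sketch (Python/NumPy; the function name and the test matrix are ours, chosen purely for illustration). At second order, the effective Hamiltonian in a subspace $A$ that is energetically well separated from the remote subspace $B$ is $H_{AA}+H_{AB}\,(E_{\mathrm{ref}}-H_{BB})^{-1}H_{BA}$:

```python
import numpy as np

def lowdin_effective(H, A_idx, E_ref):
    """Second-order Löwdin partitioning onto the subspace spanned by A_idx.

    Valid when the A block is well separated in energy from the remote
    B block; E_ref is a reference energy inside the A block.
    """
    n = H.shape[0]
    A = np.asarray(A_idx)
    B = np.setdiff1d(np.arange(n), A)
    H_AA = H[np.ix_(A, A)]
    H_AB = H[np.ix_(A, B)]
    H_BA = H[np.ix_(B, A)]
    H_BB = H[np.ix_(B, B)]
    return H_AA + H_AB @ np.linalg.inv(E_ref * np.eye(len(B)) - H_BB) @ H_BA

# Two-level check: coupling v to a remote band at energy -Delta shifts the
# near band by +v**2/Delta at second order.
Delta, v = 1.0, 0.1
H = np.array([[0.0, v], [v, -Delta]])
H_eff = lowdin_effective(H, [0], E_ref=0.0)
```

Applied band by band, this is the step that folds the full multilayer matrix down to the $N\times N$ conduction- and valence-band blocks used in the following subsections.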
Monolayer --------- The 7-band model contains 3 additional bands below the valence band and 2 bands above the conduction one [@34; @35; @13]. The Bloch states at the $\mathrm{K}^+$ point of the monolayer are $|\Psi_{v-3},s\rangle, \,\, |\Psi_{v-2},s\rangle, \,\, |\Psi_{v-1},s\rangle,\,\, |\Psi_v,s\rangle,\,\, |\Psi_c,s\rangle,\,\, |\Psi_{c+1},s\rangle, \,\, |\Psi_{c+2},s\rangle$. The lower index $n=v-3, v-2,\dots c+2$ indicates the band, and $s=\uparrow,\downarrow$ is the spin degree of freedom. The basis vectors are defined as a decomposition $|\Psi_n,s\rangle=|\Psi_n\rangle|s\rangle$. They can be classified according to the irreducible representations of the symmetry group of the crystal [@34; @35]. All the group transformations are based on the in-plane $2\pi/3$ rotation $C_3$ and the mirror reflection $\sigma_h$ in the layer plane. The states $|\Psi_{v-3},s\rangle, \,\, |\Psi_v,s\rangle, \,\, |\Psi_c,s\rangle, \,\, |\Psi_{c+2},s\rangle$ are even under the mirror transformation, while $|\Psi_{v-2},s\rangle, \,\, |\Psi_{v-1},s\rangle, \,\, |\Psi_{c+1},s\rangle$ are odd. The $\mathbf{k\cdot p}$ perturbation terms couple only the states with the same parity. Therefore the odd states do not affect the $g$-factors of the monolayer and can be excluded in this particular case. Taking into account the transformation properties of the remaining states [@13] $C_3|\Psi_v,s\rangle=|\Psi_v,s\rangle$, $C_3|\Psi_c,s\rangle=\omega^*|\Psi_c,s\rangle$, $C_3|\Psi_{v-3},s\rangle=\omega|\Psi_{v-3},s\rangle$, $C_3|\Psi_{c+2},s\rangle=\omega|\Psi_{c+2},s\rangle$ with $\omega=e^{2i\pi/3}$, one obtains the $\mathbf{k\cdot p}$ matrix elements presented in Table \[tab:monolayer\] [@34; @35; @13; @Kormanyos2014]. 
The $\mathbf{k\cdot p}$ matrix elements (Table \[tab:monolayer\]) are $$\begin{array}{l|cccc} H_{\mathbf{kp}} & |\Psi_v,s\rangle & |\Psi_c,s\rangle & |\Psi_{v-3},s\rangle & |\Psi_{c+2},s\rangle \\ \hline |\Psi_v,s\rangle & E_v & \gamma_3k_+ & \gamma_2k_- & \gamma_4k_- \\ |\Psi_c,s\rangle & \gamma^*_3k_- & E_c & \gamma_5k_+ & \gamma_6k_+ \\ |\Psi_{v-3},s\rangle & \gamma^*_2k_+ & \gamma^*_5k_- & E_{v-3} & 0 \\ |\Psi_{c+2},s\rangle & \gamma^*_4k_+ & \gamma^*_6k_- & 0 & E_{c+2} \end{array}$$ Here we introduced the notation $k_\pm=k_x\pm ik_y$ and a set of energies $\{E_n\}$ at the $\mathrm{K}^+$ point for clarity. The spin-orbit interaction, considered as a perturbation, gives the correction $\sigma_s\Delta_n/2$ to the diagonal elements of the table, with $\sigma_s=+1(-1)$ for $\uparrow(\downarrow)$ states. Applying the Löwdin procedure we calculate the energies of the $c$ and $v$ bands at the $\mathrm{K}^+$ point $$E_{cs}(B)=E_c+\sigma_s\Delta_c/2+g_c\mu_B B + \sigma_s\mu_BB, \quad E_{vs}(B)=E_v+\sigma_s\Delta_v/2+g_v\mu_B B + \sigma_s\mu_BB.$$ Here $\mu_B$ is the Bohr magneton, $B$ is the strength of the magnetic field $\mathbf{B}=B\mathbf{e}_z$ and $$\begin{aligned} g_v&=&\frac{2m_0}{\hbar^2}\left[\frac{|\gamma_3|^2}{E_c-E_v}+\frac{|\gamma_2|^2}{E_v-E_{v-3}} +\frac{|\gamma_4|^2}{E_v-E_{c+2}}\right], \\ g_c&=&\frac{2m_0}{\hbar^2}\left[\frac{|\gamma_3|^2}{E_c-E_v}-\frac{|\gamma_5|^2}{E_c-E_{v-3}} -\frac{|\gamma_6|^2}{E_c-E_{c+2}}\right].\end{aligned}$$ The last term in $E_{cs}(B)$ and $E_{vs}(B)$ is the free-electron Zeeman energy. In our study, we assume that the spin-orbit corrections to the electron's magnetic moment are small [@1]. Note that the $A$-exciton transitions at the $\mathrm{K}^+$ point are possible only in $\sigma^+$ circularly polarized light. In magnetic field their energy shifts by $\delta E_+=(g_c-g_v)\mu_B B$. 
At the $\mathrm{K}^-$ point, transitions are active only in $\sigma^-$ polarization and are characterised by the shift $\delta E_-=-\delta E_+$, which is a consequence of the time-reversal symmetry of the system. Therefore the measurable exciton $g$-factor is $$\begin{aligned} g_{exc}=2(g_c-g_v)=-\frac{4m_0}{\hbar^2}\left[\frac{|\gamma_5|^2}{E_c-E_{v-3}} +\frac{|\gamma_6|^2}{E_c-E_{c+2}}+\frac{|\gamma_2|^2}{E_v-E_{v-3}} +\frac{|\gamma_4|^2}{E_v-E_{c+2}}\right].\end{aligned}$$ We use this result as a reference point for our next calculations. Bilayer ------- A bilayer TMDC crystal with 2$H$ stacking order can be presented as two monolayers separated by a distance $l$, with the second (upper) layer rotated by $180^\circ$ relative to the first (lower) one. It is convenient to arrange them in the $z=-l/2$ and $z=l/2$ planes, respectively. In this presentation the crystal has the inversion symmetry $I$, with the inversion center placed in the middle between the monolayers. There are two subsets of basis Bloch states at the $\mathrm{K}^+$ point of the bilayer – from the lower and upper layers. The first part $\{|\Psi^{(1)}_n,s\rangle\}$ coincides with the Bloch states of the monolayer $\{|\Psi_n,s\rangle\}$, located in the $z=-l/2$ plane. The second part $\{|\Psi^{(2)}_n,s\rangle\}$ can be derived as $|\Psi^{(2)}_n,s\rangle=p_nK_0I|\Psi^{(1)}_n,s\rangle$. Here $K_0$ is the complex conjugation operator and $p_n=\pm1$ is the parity of $|\Psi^{(1)}_n\rangle$. As a result, the upper states transform as the complex conjugates of the lower ones. This leads to opposite optical selection rules for such states. Namely, at the $\mathrm{K}^+$ point of the bilayer the first (second) layer absorbs only $\sigma^+$ ($\sigma^-$) polarized light. Hence, the bilayer does not possess any optical dichroism, which is nothing but a manifestation of the inversion symmetry of the crystal. In contrast to the monolayer case, the odd states of the bilayer give non-zero $\mathbf{k\cdot p}$ contributions. 
Therefore, taking into account their rotational properties $C_3|\Psi_{v-2},s\rangle=\omega^*|\Psi_{v-2},s\rangle$, $C_3|\Psi_{v-1},s\rangle=\omega|\Psi_{v-1},s\rangle$, $C_3|\Psi_{c+1},s\rangle=\omega|\Psi_{c+1},s\rangle$ and their inversion properties, we derive Table \[tab:bilayer\_even\] and Table \[tab:bilayer\_odd\]. The matrix elements between the even states (Table \[tab:bilayer\_even\]) are $$\begin{array}{l|cccccccc} H_{\mathbf{kp}} & |\Psi^{(1)}_v,s\rangle & |\Psi^{(2)}_v,s\rangle & |\Psi^{(1)}_c,s\rangle & |\Psi^{(2)}_c,s\rangle & |\Psi^{(1)}_{v-3},s\rangle & |\Psi^{(1)}_{c+2},s\rangle & |\Psi^{(2)}_{v-3},s\rangle & |\Psi^{(2)}_{c+2},s\rangle \\ \hline |\Psi^{(1)}_v,s\rangle & E_v & t & \gamma_3k_+ & rk_- & \gamma_2k_- & \gamma_4k_- & ak_+ & bk_+ \\ |\Psi^{(2)}_v,s\rangle & t & E_v & rk_+ & \gamma_3k_- & ak_- & bk_- & \gamma_2k_+ & \gamma_4k_+ \\ |\Psi^{(1)}_c,s\rangle & \gamma^*_3k_- & r^*k_- & E_c & uk_+ & \gamma_5k_+ & \gamma_6k_+ & 0 & 0 \\ |\Psi^{(2)}_c,s\rangle & r^*k_+ & \gamma^*_3k_+ & uk_- & E_c & 0 & 0 & \gamma_5k_- & \gamma_6k_- \end{array}$$ Note that the diagonal matrix elements in the case of bi- and other multilayers should contain small corrections $\delta E_n$, which arise from the crystal field of the adjacent layers. However, according to our rough estimate, such diagonal terms produce less than a 5% deviation in the $g$-factors of the multilayers considered here. Therefore, for clarity, we set $\delta E_n=0$ in this particular study, keeping in mind, however, that these terms can give non-negligible corrections in other cases. 
The matrix elements between the even and odd states (Table \[tab:bilayer\_odd\]) are $$\begin{array}{l|cccccc} H_{\mathbf{kp}} & |\Psi^{(1)}_{v-2},s\rangle & |\Psi^{(1)}_{v-1},s\rangle & |\Psi^{(1)}_{c+1},s\rangle & |\Psi^{(2)}_{v-2},s\rangle & |\Psi^{(2)}_{v-1},s\rangle & |\Psi^{(2)}_{c+1},s\rangle \\ \hline |\Psi^{(1)}_v,s\rangle & 0 & 0 & 0 & ck_- & dk_+ & 0 \\ |\Psi^{(2)}_v,s\rangle & -ck_+ & -dk_- & 0 & 0 & 0 & 0 \\ |\Psi^{(1)}_c,s\rangle & 0 & 0 & 0 & fk_+ & 0 & jk_- \\ |\Psi^{(2)}_c,s\rangle & -fk_- & 0 & -jk_+ & 0 & 0 & 0 \end{array}$$ We also introduced the admixing parameter $t$ between the valence bands of the first and second layers. We then add the spin-orbit interaction, apply the Löwdin partitioning to the corresponding matrix elements and derive the effective valence and conduction band Hamiltonians. The valence band Hamiltonian, written in the basis $\{|\Psi^{(1)}_v,s\rangle,|\Psi^{(2)}_v,s\rangle\}$, reads $$H^{(2)}_{vs}=\left[ \begin{array}{cc} E_v+\sigma_s\frac{\Delta_v}{2} & t \\ t & E_v-\sigma_s\frac{\Delta_v}{2} \\ \end{array} \right] + \left[ \begin{array}{cc} g_v-\delta g_v +\sigma_s & 0 \\ 0 & -g_v+\delta g_v+\sigma_s \\ \end{array} \right]\mu_BB.$$ The conduction band Hamiltonian, written in the basis $\{|\Psi^{(1)}_c,s\rangle,|\Psi^{(2)}_c,s\rangle\}$, is $$H^{(2)}_{cs}=\left[ \begin{array}{cc} E_c+\sigma_s\frac{\Delta_c}{2} & uk_+ \\ uk_- & E_c-\sigma_s\frac{\Delta_c}{2} \\ \end{array} \right]+ \left[ \begin{array}{cc} g_c-\delta g_c +\sigma_s & 0 \\ 0 & -g_c+\delta g_c +\sigma_s \\ \end{array} \right]\mu_BB.$$ Note that the spin-up and spin-down states can be considered separately. 
The parameters $\delta g_v$ and $\delta g_c$ are the corrections to the monolayer $g$-factors of the valence and conduction bands $$\begin{aligned} \delta g_v=\frac{2m_0}{\hbar^2}\left[\frac{|a|^2}{E_v-E_{v-3}}-\frac{|b|^2}{E_{c+2}-E_v} -\frac{|c|^2}{E_v-E_{v-2}}+\frac{|d|^2}{E_v-E_{v-1}}+\frac{|r|^2}{E_{c}-E_v}\right],\end{aligned}$$ $$\begin{aligned} \delta g_c=\frac{2m_0}{\hbar^2}\left[\frac{|f|^2}{E_c-E_{v-2}}+\frac{|j|^2}{E_{c+1}-E_c} -\frac{|r|^2}{E_c-E_v}\right].\end{aligned}$$ Technically, these corrections originate from the additional non-zero $\mathbf{k\cdot p}$ matrix elements between the states of the bilayer allowed by the symmetry. The expressions for the valence-band energies up to $O(B)$ terms are $$\begin{aligned} E^\text{I}_{vs}=E_v+\sigma_s\sqrt{\frac{\Delta_v^2}{4}+t^2}+\frac{(g_v-\delta g_v)\Delta_v}{\sqrt{\Delta_v^2+4t^2}}\mu_B B+\sigma_s\mu_B B, \\ E^\text{II}_{vs}=E_v-\sigma_s\sqrt{\frac{\Delta_v^2}{4}+t^2}-\frac{(g_v-\delta g_v)\Delta_v}{\sqrt{\Delta_v^2+4t^2}}\mu_B B+\sigma_s\mu_B B.\end{aligned}$$ The corresponding eigenstates are $$\begin{aligned} &|\Phi^\text{I}_{vs}\rangle=\cos(\theta/2)|\Psi^{(1)}_v,s\rangle + \sigma_s\sin(\theta/2)|\Psi^{(2)}_v,s\rangle, \\ &|\Phi^\text{II}_{vs}\rangle=-\sigma_s\sin(\theta/2)|\Psi^{(1)}_v,s\rangle +\cos(\theta/2)|\Psi^{(2)}_v,s\rangle,\end{aligned}$$ where we introduced $\{\cos\theta,\sin\theta\}=\{\Delta_v/\sqrt{\Delta_v^2+4t^2}, 2t/\sqrt{\Delta_v^2+4t^2}\}$. The first state corresponds mostly to the optical transitions in $\sigma^+$ polarized light, while the second one is active predominantly in $\sigma^-$ polarization. The intensity of emitted light at the $\mathrm{K}^+$ point is the same in both polarizations, which reflects the inversion symmetry of the bilayer crystal. 
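The linear-in-$B$ slopes of $E^\text{I,II}_{vs}$ can be cross-checked by diagonalizing $H^{(2)}_{vs}$ numerically. The sketch below (Python/NumPy; parameter values are illustrative, not fitted, with $\mu_B$ in eV/T) compares finite-difference slopes against the analytic factor $(g_v-\delta g_v)\Delta_v/\sqrt{\Delta_v^2+4t^2}$:

```python
import numpy as np

# Illustrative parameters (eV); not fitted material values
Ev, Dv, t = 0.0, 0.43, 0.05
gv, dgv = 4.0, 0.2
muB = 5.788e-5          # Bohr magneton in eV/T
sigma = +1              # spin-up block

def H2_vs(B):
    """Bilayer valence-band Hamiltonian H^(2)_vs for one spin."""
    g = gv - dgv
    return np.array([
        [Ev + sigma * Dv / 2 + ( g + sigma) * muB * B, t],
        [t, Ev - sigma * Dv / 2 + (-g + sigma) * muB * B],
    ])

dB = 1.0  # Tesla; linear regime, since muB*dB << Dv
slopes = (np.linalg.eigvalsh(H2_vs(dB)) - np.linalg.eigvalsh(H2_vs(0.0))) / (muB * dB)
analytic = (gv - dgv) * Dv / np.sqrt(Dv**2 + 4 * t**2)
# eigvalsh sorts ascending, so slopes[1] is the upper branch E^I
```

The hopping $t$ reduces the effective valence $g$-factor by the factor $\Delta_v/\sqrt{\Delta_v^2+4t^2}<1$, which is the mechanism behind the thickness dependence discussed in the main text.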
The new conduction band energies are $$\begin{aligned} E^\text{I}_{cs}=E_c+\sigma_s\frac{\Delta_c}{2}-\sigma_sg_u\mu_BB+(g_c-\delta g_c)\mu_BB+\sigma_s\mu_BB,\\ E^\text{II}_{cs}=E_c-\sigma_s\frac{\Delta_c}{2}-\sigma_sg_u\mu_BB-(g_c-\delta g_c)\mu_BB+\sigma_s\mu_BB,\end{aligned}$$ where $g_u=2m_0u^2/\hbar^2\Delta_c$. The conduction band eigenstates with the same energies coincide with $|\Psi^{(1)}_{c},s\rangle$ and $|\Psi^{(2)}_{c},s\rangle$, up to $O(\mathbf{k}^2)$ order. An analysis of the new possible interband transitions demonstrates two $A$-exciton lines at the $\mathrm{K}^+$ point of the bilayer. They are active in $\sigma^+$ and $\sigma^-$ polarizations, respectively, and have opposite energy shifts in magnetic field $\delta E_+=-\delta E_-=g^{(2)}_{exc}\mu_B B/2$. Here $$g_{exc}^{(2)}=-2g_u+2(g_c-\delta g_c) -2(g_v-\delta g_v)\frac{\Delta_v}{\sqrt{\Delta_v^2+4t^2}}$$ is the $A$-exciton $g$-factor of the bilayer. We rewrite this result up to $O(t^2/\Delta_v^2)$ order $$g^{(2)}_{exc}=g_{exc} + 2\big[\delta g_v-\delta g_c-g_u\big] + \frac{4t^2}{\Delta_v^2}\big(g_v-\delta g_v\big),$$ where $g_{exc}$ is the $A$-exciton $g$-factor of the monolayer. A quantitative estimate of the $\delta g_c$ and $\delta g_v$ corrections requires numerical simulations and is beyond the scope of this study. However, we will use the experimental fact that for a bilayer $\delta g_v-\delta g_c-g_u>0$ and demonstrate the self-consistency of the other multilayer $g$-factors with this assumption. Trilayer -------- We calculate the $g$-factors of a trilayer in the same way as for the bilayer. We introduce the three sets of basis states $\{|\Psi^{(1)}_n\rangle\}$, $\{|\Psi^{(2)}_n\rangle\}$, $\{|\Psi^{(3)}_n\rangle\}$, which belong to the layers $z=-l$, $z=0$ and $z=l$, respectively. In this case the crystal has mirror symmetry, with the mirror plane $z=0$. 
This helps us to determine the following symmetry relations between the states $$|\Psi^{(3)}_n\rangle=p_n\sigma_h|\Psi^{(1)}_n\rangle,\quad |\Psi^{(2)}_n\rangle=p_n\sigma_h|\Psi^{(2)}_n\rangle.$$ The states from the 1st and 3rd layers have the same rotational properties as in the monolayer. The rotational properties of the states from the 2nd layer are complex conjugate to the previous ones. Therefore the $\mathbf{k\cdot p}$ matrix elements can be restored from the known results $$\begin{aligned} \langle\Psi^{(2)}_n|H_{\mathbf{kp}}|\Psi^{(3)}_m\rangle= p_np_m\langle\Psi^{(2)}_n|H_{\mathbf{kp}}|\Psi^{(1)}_m\rangle, \\ \langle\Psi^{(3)}_n|H_{\mathbf{kp}}|\Psi^{(3)}_m\rangle= p_np_m\langle\Psi^{(1)}_n|H_{\mathbf{kp}}|\Psi^{(1)}_m\rangle.\end{aligned}$$ We also assume $\langle\Psi^{(1)}_n|H_{\mathbf{kp}}|\Psi^{(3)}_m\rangle=0$ because of the large distance $2l$ between the 1st and 3rd layers. Note that at the $\mathrm{K}^+$ point the 1st and 3rd layers absorb predominantly $\sigma^+$ polarized light, while the 2nd layer absorbs $\sigma^-$ polarized light. The Hamiltonian of the trilayer TMDC can be written separately for the spin-up and spin-down states. 
The Hamiltonian for valence bands, written in the basis $\{|\Psi^{(1)}_v,s\rangle,|\Psi^{(2)}_v,s\rangle,|\Psi^{(3)}_v,s\rangle\}$, is $$H^{(3)}_{vs}=\left[ \begin{array}{ccc} E_v+\sigma_s\frac{\Delta_v}{2} & t & 0 \\ t & E_v-\sigma_s\frac{\Delta_v}{2} & t \\ 0 & t & E_v+\sigma_s\frac{\Delta_v}{2} \\ \end{array} \right] + \left[ \begin{array}{ccc} g_v-\delta g_v+\sigma_s & 0 & \bar{g}_v\\ 0 & -g_v+2\delta g_v+\sigma_s & 0 \\ \bar{g}_v & 0 & g_v-\delta g_v+\sigma_s \\ \end{array} \right]\mu_B B.$$ The Hamiltonian for conduction bands, written in the basis $\{|\Psi^{(1)}_c,s\rangle,|\Psi^{(2)}_c,s\rangle,|\Psi^{(3)}_c,s\rangle\}$, reads $$H^{(3)}_{cs}=\left[ \begin{array}{ccc} E_c+\sigma_s\frac{\Delta_c}{2} & uk_+ & 0 \\ uk_- & E_c-\sigma_s\frac{\Delta_c}{2} & uk_- \\ 0& uk_+& E_c+\sigma_s\frac{\Delta_c}{2}\\ \end{array} \right]+ \left[ \begin{array}{ccc} g_c-\delta g_c+\sigma_s & 0 & \bar{g}_c \\ 0 & -g_c+2\delta g_c+\sigma_s & 0 \\ \bar{g}_c & 0 & g_c-\delta g_c+\sigma_s \\ \end{array} \right]\mu_B B.$$ Here we introduced $$\begin{aligned} \bar{g}_c=\frac{2m_0}{\hbar^2}\left[\frac{|f|^2}{E_c-E_{v-2}}+\frac{|j|^2}{E_{c+1}-E_c} +\frac{|r|^2}{E_c-E_v}\right],\end{aligned}$$ $$\begin{aligned} \bar{g}_v=\frac{2m_0}{\hbar^2}\left[\frac{|r|^2}{E_v-E_c}-\frac{|a|^2}{E_v-E_{v-3}} -\frac{|b|^2}{E_v-E_{c+2}}-\frac{|c|^2}{E_v-E_{v-2}}+\frac{|d|^2}{E_v-E_{v-1}}\right].\end{aligned}$$ The new energy of conduction bands and corresponding eigenstates of such systems are $$\begin{aligned} &E^{\text{I}}_{cs}=E_c-\sigma_s\frac{\Delta_c}{2}+ (-g_c+2\delta g_c+\sigma_s-2g_u\sigma_s)\mu_B B, &|\Phi^\text{I}_{cs}\rangle&=|\Psi^{(2)}_{c},s\rangle; \\ &E^{\text{II}}_{cs}=E_c+\sigma_s\frac{\Delta_c}{2}+ (g_c-\delta g_c+\sigma_s-\bar{g}_c)\mu_B B, &|\Phi^\text{II}_{cs}\rangle&=\frac{1}{\sqrt{2}}\Big(|\Psi^{(1)}_{c},s\rangle-|\Psi^{(3)}_{c},s\rangle\Big);\\ &E^{\text{III}}_{cs}=E_c+\sigma_s\frac{\Delta_c}{2}+ (g_c-\delta g_c+\sigma_s-2g_u\sigma_s+\bar{g}_c)\mu_B B, 
&|\Phi^\text{III}_{cs}\rangle&=\frac{1}{\sqrt{2}}\Big(|\Psi^{(1)}_{c},s\rangle+|\Psi^{(3)}_{c},s\rangle\Big).\end{aligned}$$ The energies and normalised eigenstates of the valence bands up to $O(t^2/\Delta_v^2)$ are $$\begin{aligned} E^{\text{I}}_{vs}&=E_v-\sigma_s\frac{\Delta_v}{2}-2\frac{\sigma_st^2}{\Delta_v}+ (-g_v+2\delta g_v+\sigma_s)\mu_B B, &|\Phi^\text{I}_{vs}\rangle&=\Big(\frac{\sigma_st}{\Delta_v}|\Psi^{(1)}_{v},s\rangle-|\Psi^{(2)}_{v},s\rangle +\frac{\sigma_st}{\Delta_v}|\Psi^{(3)}_{v},s\rangle\Big)\frac{\Delta_v}{\sqrt{\Delta_v^2+2t^2}}; \\ E^{\text{II}}_{vs}&=E_v+\sigma_s\frac{\Delta_v}{2}+ (g_v-\delta g_v+\sigma_s-\bar{g}_v)\mu_B B, &|\Phi^\text{II}_{vs}\rangle&=\frac{1}{\sqrt{2}}\Big(|\Psi^{(1)}_{v},s\rangle-|\Psi^{(3)}_{v},s\rangle\Big);\\ E^{\text{III}}_{vs}&=E_v+\sigma_s\frac{\Delta_v}{2}+2\frac{\sigma_st^2}{\Delta_v} +(g_v-\delta g_v+\sigma_s+\bar{g}_v)\mu_B B, &|\Phi^\text{III}_{vs}\rangle&=\Big(|\Psi^{(1)}_{v},s\rangle+2\frac{\sigma_st}{\Delta_v}|\Psi^{(2)}_{v},s\rangle+ |\Psi^{(3)}_{v},s\rangle\Big)\frac{\Delta_v}{\sqrt{2\Delta_v^2+4t^2}}.\end{aligned}$$ The lowest-energy transitions at the $\mathrm{K}^+$ point occur between the new states with the same upper index. The first transition is in $\sigma^-$ polarization, while the two others are in $\sigma^+$. 
The $g$-factors and normalised intensities of these transitions are $$\begin{aligned} &g^\text{I}=\frac{1}{\mu_B}\frac{d}{dB}(E^\text{I}_{c\downarrow}-E^\text{I}_{v\downarrow})=-(g_c-g_v+2\delta g_v -2\delta g_c-2g_u), &J^\text{I}&=\frac{\Delta^2_v}{\Delta_v^2+2t^2};\\ &g^\text{II}=\frac{1}{\mu_B}\frac{d}{dB}(E^\text{II}_{c\uparrow}-E^\text{II}_{v\uparrow})=g_c-g_v+\delta g_v -\delta g_c +\bar{g}_v-\bar{g}_c, &J^\text{II}&=1;\\ &g^\text{III}=\frac{1}{\mu_B}\frac{d}{dB}(E^\text{III}_{c\uparrow}-E^\text{III}_{v\uparrow})=g_c-g_v+\delta g_v -\delta g_c-2g_u -\bar{g}_v+\bar{g}_c, &J^\text{III}&=\frac{\Delta_v^2}{\Delta_v^2+2t^2}.\end{aligned}$$ Therefore, since these three lines cannot be resolved, we introduce the average $A$-exciton $g$-factor $$g^{(3)}_{exc}= g_{exc} + \frac{8}{3}\big[\delta g_v -\delta g_c - g_u\big] + \frac{4t^2}{9\Delta_v^2}\big(\delta g_c -\delta g_v + 4g_u + 3\bar{g}_v-3\bar{g}_c\big).$$ Quadrolayer ----------- We introduce four sets of Bloch states $\{|\Psi^{(1)}_n,s\rangle\}$,$\{|\Psi^{(2)}_n,s\rangle\}$,$\{|\Psi^{(3)}_n,s\rangle\}$,$\{|\Psi^{(4)}_n,s\rangle\}$ at the $\mathrm{K}^+$ point of a quadrolayer. They correspond to the $z=-3l/2$, $z=-l/2$, $z=l/2$ and $z=3l/2$ planes, respectively. 
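Before proceeding, the intensity-weighted averaging behind $g^{(3)}_{exc}$ above can be made explicit with a short sketch (Python; the sign convention follows the monolayer definition $g_{exc}=2(g_c-g_v)$, so the $\sigma^-$ line enters with a flipped sign; the numerical parameter values in the check are arbitrary, chosen only to exercise the formula):

```python
import numpy as np

def trilayer_average_g(gc, gv, dgc, dgv, gu, gbc, gbv, t, Dv):
    """Average A-exciton g-factor of the trilayer from its three lines.

    gI is the sigma- transition and contributes with the opposite sign;
    the overall factor 2 matches g_exc = 2(g_c - g_v) of the monolayer.
    """
    J = Dv**2 / (Dv**2 + 2 * t**2)       # intensities of lines I and III
    gI   = -(gc - gv + 2 * dgv - 2 * dgc - 2 * gu)
    gII  =  gc - gv + dgv - dgc + gbv - gbc
    gIII =  gc - gv + dgv - dgc - 2 * gu - gbv + gbc
    return 2 * (-gI * J + gII * 1.0 + gIII * J) / (J + 1.0 + J)

# t = 0 limit: the bar-g terms cancel and the leading-order result
# g_exc + (8/3)[dgv - dgc - gu] remains
g_avg = trilayer_average_g(gc=1.0, gv=3.0, dgc=0.1, dgv=0.3,
                           gu=0.05, gbc=0.1, gbv=0.2, t=0.0, Dv=0.43)
```

The same weighted averaging, applied to the $N\times N$ spectra, yields the $g^{(N)}_{exc}$ expressions quoted for the thicker stacks.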
The quadrolayer possesses inversion symmetry, which results in the following relations between the Bloch states $$\begin{aligned} |\Psi^{(4)}_n\rangle=p_nK_0I|\Psi^{(1)}_n\rangle,\quad |\Psi^{(3)}_n\rangle=p_nK_0I|\Psi^{(2)}_n\rangle.\end{aligned}$$ Such relations allow us to calculate the $\mathbf{k\cdot p}$ matrix elements $$\begin{aligned} \langle\Psi^{(4)}_n|H_{\mathbf{kp}}|\Psi^{(3)}_m\rangle= p_np_m\langle\Psi^{(1)}_n|H_{\mathbf{kp}}|\Psi^{(2)}_m\rangle, \\ \langle\Psi^{(4)}_n|H_{\mathbf{kp}}|\Psi^{(4)}_m\rangle= p_np_m\langle\Psi^{(1)}_n|H_{\mathbf{kp}}|\Psi^{(1)}_m\rangle.\end{aligned}$$ We also suppose that $\langle\Psi^{(4)}_n|H_{\mathbf{kp}}|\Psi^{(2)}_m\rangle= \langle\Psi^{(4)}_n|H_{\mathbf{kp}}|\Psi^{(1)}_m\rangle=\langle\Psi^{(3)}_n|H_{\mathbf{kp}}|\Psi^{(1)}_m\rangle=0$ because of the large distance between the layers. The Hamiltonian for the valence bands, written in the basis $\{|\Psi^{(1)}_v,s\rangle,|\Psi^{(2)}_v,s\rangle,|\Psi^{(3)}_v,s\rangle,|\Psi^{(4)}_v,s\rangle\}$, can be presented as a sum of non-magnetic and magnetic parts $H^{(4)}_{vs}=\mathcal{H}^{(4)}_{vs}+ \mathcal{M}^{(4)}_{vs}\mu_B B$, with $$\mathcal{H}^{(4)}_{vs}=\left[ \begin{array}{cccc} E_v+\sigma_s\frac{\Delta_v}{2} & t & 0 & 0 \\ t & E_v-\sigma_s\frac{\Delta_v}{2} & t & 0 \\ 0 & t & E_v+\sigma_s\frac{\Delta_v}{2} & t \\ 0 & 0 & t & E_v-\sigma_s\frac{\Delta_v}{2} \\ \end{array} \right]$$ and $$\mathcal{M}^{(4)}_{vs}= \left[ \begin{array}{cccc} g_v-\delta g_v+\sigma_s & 0 & \bar{g}_v & 0 \\ 0 & -g_v+2\delta g_v+\sigma_s & 0 & -\bar{g}_v \\ \bar{g}_v & 0 & g_v-2\delta g_v+\sigma_s & 0 \\ 0 & - \bar{g}_v & 0 & -g_v+\delta g_v+\sigma_s \\ \end{array} \right].$$ The Hamiltonian for the conduction bands, written in the basis $\{|\Psi^{(1)}_c,s\rangle,|\Psi^{(2)}_c,s\rangle,|\Psi^{(3)}_c,s\rangle, |\Psi^{(4)}_c,s\rangle\}$, also has the structure $H^{(4)}_{cs}=\mathcal{H}^{(4)}_{cs}+\mathcal{M}^{(4)}_{cs}\mu_B B$. 
The corresponding matrices are $$\mathcal{H}^{(4)}_{cs}=\left[ \begin{array}{cccc} E_c+\sigma_s\frac{\Delta_c}{2} & uk_+ & 0 & 0\\ uk_- & E_c-\sigma_s\frac{\Delta_c}{2} & uk_- & 0 \\ 0 & uk_+& E_c+\sigma_s\frac{\Delta_c}{2} & uk_+ \\ 0& 0 & uk_-& E_c-\sigma_s\frac{\Delta_c}{2} \end{array} \right],$$ $$\mathcal{M}^{(4)}_{cs}= \left[ \begin{array}{cccc} g_c-\delta g_c+\sigma_s & 0 & \bar{g}_c & 0 \\ 0 & -g_c+2\delta g_c+\sigma_s & 0 & -\bar{g}_c \\ \bar{g}_c & 0 & g_c-2\delta g_c+\sigma_s & 0 \\ 0& -\bar{g}_c & 0 & -g_c+\delta g_c+\sigma_s \\ \end{array} \right].$$ Note that the valence and conduction band Hamiltonians for $N>4$ multilayers have the same pentadiagonal matrix structure. No additional parameters appear for larger TMDC crystals. The $A$-exciton $g$-factor is derived in an analogous way to the bi- and trilayer cases. Since the expressions for the eigenvalues and eigenstates are quite lengthy, we present only the final result $$g^{(4)}_{exc}=g_{exc}+3\big[\delta g_v -\delta g_c -g_u\big]+ O\Big(\frac{t^2}{\Delta_v^2}\Big).$$ Bulk ---- The effective Hamiltonian of the bulk can be constructed in the same way as for the bi-, tri- and quadrolayer. As in the previous cases, the spin-up and spin-down states can be considered separately. 
The effective Hamiltonian for the valence band, written in the infinite basis $\{\dots|\Psi^{(j-1)}_v,s\rangle,|\Psi^{(j)}_v,s\rangle,|\Psi^{(j+1)}_v,s\rangle\dots\}$, has the matrix elements $$\begin{aligned} &\Big[H^{(\infty)}_{vs}\Big]_{j,j}=E_v+(-1)^{j+1}\Big[\sigma_s\frac{\Delta_v}{2}+(g_v-2\delta g_v)\mu_BB\Big] + \sigma_s\mu_BB, \\ &\Big[H^{(\infty)}_{vs}\Big]_{j,j+1}=\Big[H^{(\infty)}_{vs}\Big]_{j+1,j}=t, \\ &\Big[H^{(\infty)}_{vs}\Big]_{j,j+2}=\Big[H^{(\infty)}_{vs}\Big]_{j+2,j}=(-1)^{j+1}\bar{g}_v\mu_B B.\end{aligned}$$ The Hamiltonian for the conduction band, written in the basis $\{\dots |\Psi^{(j-1)}_c,s\rangle,|\Psi^{(j)}_c,s\rangle,|\Psi^{(j+1)}_c,s\rangle,\dots\}$, has the matrix elements $$\begin{aligned} &\Big[H^{(\infty)}_{cs}\Big]_{j,j}=E_c+(-1)^{j+1}\Big[\sigma_s\frac{\Delta_c}{2} + (g_c-2\delta g_c)\mu_BB\Big]+\sigma_s\mu_BB, \\ &\Big[H^{(\infty)}_{cs}\Big]_{2j\pm1,2j}=uk_+, \quad \Big[H^{(\infty)}_{cs}\Big]_{2j,2j\pm1}=uk_-, \\ &\Big[H^{(\infty)}_{cs}\Big]_{j,j+2}=\Big[H^{(\infty)}_{cs}\Big]_{j+2,j}=(-1)^{j+1}\bar{g}_c\mu_B B.\end{aligned}$$ We solve the eigenvalue problem for the bulk in the following way. Let us consider a finite-size $N=2M$ multilayer with periodic boundary conditions. In this case, all the eigenstates of the crystal can be parameterised by a wave-vector $k_n=\pi n/Ml$. Hereafter, we omit the subscript $n$ for brevity and write $k$ instead of $k_n$. 
We are looking for the valence band solutions in the form $$|\Phi_{vs}^{k}\rangle=\frac{1}{\sqrt{M}}\sum_{m=1}^M e^{2ikml}\big[A_{vs}(k)|\Psi^{(2m-1)}_{v},s\rangle+ B_{vs}(k)|\Psi^{(2m)}_{v},s\rangle\big].$$ This ansatz reduces the eigenvalue problem to $$\begin{aligned} &E(k)A_{vs}(k)=\Big\{E_v+\sigma_s\frac{\Delta_v}{2}+\big[g_v-2\delta g_v +\sigma_s+2\bar{g}_v\cos(2kl)\big]\mu_BB\Big\}A_{vs}(k) + 2te^{-ikl}\cos(kl)B_{vs}(k),\\ &E(k)B_{vs}(k)=\Big\{E_v-\sigma_s\frac{\Delta_v}{2} -\big[g_v-2\delta g_v -\sigma_s+2\bar{g}_v\cos(2kl)\big]\mu_BB\Big\}B_{vs}(k) +2te^{ikl}\cos(kl)A_{vs}(k).\end{aligned}$$ The spectrum of the system up to $O(B)$ order is $$E_{vs}^{\pm}(k)=E_v+\sigma_s\mu_BB\pm\frac12\sqrt{\Delta_v^2+16t^2\cos^2(kl)}\pm\sigma_s \frac{\Delta_v[g_v-2\delta g_v+2\bar{g}_v\cos(2kl)]}{\sqrt{\Delta_v^2+16t^2\cos^2(kl)}}\mu_BB.$$ Since we are interested in $A$-exciton transitions, we consider only the high-energy bands. The corresponding eigenstates up to zeroth order in magnetic field have the form $$\left[ \begin{array}{c} A^+_{v\uparrow}(k) \\ B^+_{v\uparrow}(k) \\ \end{array} \right]=\left[ \begin{array}{c} \cos\theta_k \\ e^{i\frac{kc}{2}}\sin\theta_k \\ \end{array} \right], \quad \left[ \begin{array}{c} A^+_{v\downarrow}(k) \\ B^+_{v\downarrow}(k) \\ \end{array} \right]=\left[ \begin{array}{c} e^{-i\frac{kc}{2}}\sin\theta_k \\ \cos\theta_k \\ \end{array} \right],$$ where $\{\cos(2\theta_k),\sin(2\theta_k)\}=\{\Delta_v/\sqrt{\Delta_v^2+16t^2\cos^2(kl)}, 4t\cos(kl)/\sqrt{\Delta_v^2+16t^2\cos^2(kl)}\}$ and $c=2l$ is the period of the bulk crystal along $z$. 
The solutions for the conduction band states can be written as $$\begin{aligned} |\Phi_{cs}^{k},+\rangle=\frac{1}{\sqrt{M}}\sum_{m=1}^M e^{2ikml}|\Psi^{(2m-1)}_{c},s\rangle, \quad |\Phi_{cs}^{k},-\rangle=\frac{1}{\sqrt{M}}\sum_{m=1}^M e^{2ikml}|\Psi^{(2m)}_{c},s\rangle.\end{aligned}$$ Their energy spectrum is $$E^\pm_{cs}(k)=E_c\pm\sigma_s\frac{\Delta_c}{2}\pm(g_c-2\delta g_c+\sigma_s)\mu_BB-4\sigma_sg_u\mu_BB\cos^2(kl)\pm2\bar{g}_c\mu_BB\cos(2kl).$$ A direct calculation demonstrates that $A$-exciton optical transitions are possible only between the $\{|\Phi_{v\uparrow}^{k}\rangle, |\Phi_{c\uparrow}^{k},+\rangle\}$ and $\{|\Phi_{v\downarrow}^{k}\rangle, |\Phi_{c\downarrow}^{k},-\rangle\}$ pairs of states with the same wave-vector $k$. The corresponding transitions are active in $\sigma^+$ and $\sigma^-$ polarisations, respectively, and have the same intensities $$J(k)=|A^+_{v\uparrow}(k)|^2=|B^+_{v\downarrow}(k)|^2= \frac{1}{2}\Big(1+\frac{\Delta_v}{\sqrt{\Delta_v^2+16t^2\cos^2(kl)}}\Big).$$ The $g$-factors of these transitions have opposite signs $$g_+(k)=-g_-(k)=g_c-2\delta g_c-4g_u\cos^2(kl)+2\bar{g}_c\cos(2kl)-\frac{\Delta_v[g_v-2\delta g_v+2\bar{g}_v\cos(2kl)]}{\sqrt{\Delta_v^2+16t^2\cos^2(kl)}}.$$ Finally, averaging the $g$-factor with the corresponding weights gives $$g^{(\infty)}_{exc}=g_{exc}+4[\delta g_v-\delta g_c-g_u]+\frac{4t^2}{\Delta_v^2}[2g_v+g_u-4\delta g_v -\bar{g}_c+3\bar{g}_v].$$
--- abstract: 'In order to study the spin density wave transition temperature ($T_{\rm SDW}$) in $\mathrm{(TMTSF)_2PF_6}$ as a function of magnetic field, we measured the magnetoresistance $R_{zz}$ in fields up to 19 T. Measurements were performed for three field orientations $\mathbf{B}\|\mathbf{a}, \mathbf{b''}$ and $\mathbf{c^*}$ at ambient pressure and at $P=5$ kbar, which is close to the critical pressure. For the $\mathbf{B\|c^*}$ orientation we observed a quadratic field dependence of $T_{\rm SDW}$ in agreement with theory and with previous experiments. For $\mathbf{B\|b''}$ and $\mathbf{B\|a}$ orientations we have found no shift in $T_{\rm SDW}$ within 0.05 K, both at $P=0$ and $P=5$ kbar. This result is also consistent with theoretical predictions.' author: - 'Ya.A. Gerasimenko' - 'V.A. Prudkoglyad' - 'A.V. Kornilov' - 'V.M. Pudalov' - 'V.N. Zverev' - 'A.-K. Klehe' - 'J.S. Qualls' title: 'Anisotropy of the Spin Density Wave Onset for (TMTSF)$_2$PF$_6$ in Magnetic Field' --- Introduction ============ $\mathrm{(TMTSF)_2PF_6}$ is a layered organic compound that demonstrates a complex phase diagram containing phases characteristic of one-, two- and three-dimensional systems. Transport properties of this material are highly anisotropic (typical ratio of the conductivity tensor components is $\sigma_{xx}:\sigma_{yy}:\sigma_{zz}\sim 10^5:10^3:1$ at $T=100$K [@Review:Lebed_Yamaji; @notations]). At ambient pressure and zero magnetic field the carrier system undergoes a transition to the antiferromagnetically ordered spin density wave (SDW) state [@Review:Lebed_Yamaji] with a transition temperature $T_{\rm SDW}\approx12$K. When an external hydrostatic pressure is applied, $T_{\rm SDW}$ gradually decreases and vanishes at the critical pressure of $\sim6\,$kbar[@critical-pressure]. For higher pressures, $P>6$kbar, the SDW state is completely suppressed.
Application of a sufficiently high magnetic field along the least conducting direction $\mathbf{c^*}$ restores the spin ordering. This occurs via a cascade of the field induced SDW states (FISDW) [@FISDW]. The conventional model for the electronic spectrum is [@Gorkov_Lebed; @Review:Lebed_Yamaji]: $$\mathcal{E}_0(\mathbf{k})=\pm \hbar v_F(k_x \mp k_F)-2t_b \cos(k_y b')-2t_b' \cos(2k_y b')- 2t_c \cos (k_z c^*), \label{eqn:dlaw}$$ where $t_b,\ t_c$ are the nearest neighbor transfer integrals along the $\mathbf{b^\prime}$ and $\mathbf{c^*}$ directions respectively, and $t_b'$ is the transfer integral involving next-to-nearest (second order) neighbors. In the ideal one-dimensional case, $t_b=t_c=t_b'=0$, and the Fermi surface consists of two parallel flat sheets. This surface satisfies the so-called ideal nesting condition: there exists a vector $\mathbf{Q}_0$ which couples all states across the Fermi surface. In the quasi-one-dimensional case, when $t_b$ and $t_c$ are non-zero, the Fermi sheets become slightly corrugated. Nevertheless, one can still find a vector that couples all states across the Fermi surface, therefore the ideal nesting property also holds in this case. This means that the magnetic susceptibility $\chi(\mathbf{q})$ of the system diverges at $\mathbf{q}=\mathbf{Q}_0$ and the system is unstable against formation of SDW [@Review:Lebed_Yamaji]. When $t_b'$ is non-zero, the situation changes drastically: no vector can couple all states on both sides of the Fermi surface, though $\mathbf{Q}_0$ still couples a large number of states. This situation, called “imperfect nesting”, is sketched in Fig. \[surface\]a. ![(a) Schematic view of the Fermi surface in the imperfect nesting model. Dashed and solid lines show FS with and without the $t_b^\prime$ term in Eq. (\[eqn:dlaw\]) respectively (the $t_b^\prime$ value is magnified for clarity). $\mathbf{Q}$ denotes the nesting vector. (b) Schematic 3D-view of the Fermi surface. Dashed and solid lines are the orbits of an electron when magnetic field $\mathbf{B\|c^*}$ and $\mathbf{B\|b^\prime}$ respectively.[]{data-label="surface"}](fig1n.eps "fig:"){width="37.00000%"} ![](3dfs2.eps "fig:"){width="37.00000%"} Despite the complex behavior of the system, theory[@Montambaux; @Maki] successfully describes the effects of pressure and magnetic field on the SDW transition in terms of the single parameter $t_b'$. According to the theory, $t_b'$ increases with external pressure, and conditions for nesting deteriorate. Therefore, under pressure, deviations of the system from the ideal 1D-model become more pronounced and, as a consequence, $T_{\rm SDW}$ decreases. When $t_b'$ reaches a critical value $t_b^*$, the SDW transition vanishes. The application of a magnetic field normal to the $\mathbf{a}$ direction restricts electron motion in the $\mathbf{b\mathrm{-}c}$ plane, making the system effectively more one-dimensional. The theory of Ref.  predicts that the transition temperature increases in weak fields $\mathbf{B\|c^*}$ as $$\nonumber \Delta T_{\rm SDW}(B)=T_{\mathrm{SDW}}(B)-T_{\mathrm{SDW}}(0)=\alpha B^2,$$ and further saturates in high fields; here $\alpha=\alpha(P)$ is a function of pressure. A number of experiments[@critical-pressure; @Chaikin_abc; @Tsdw-Biskup; @Tsdw-highfields] have been performed to examine the predictions of the theory for the $\mathbf{B\|c^*}$ case. All these studies confirmed the quadratic field dependence of the transition temperature.
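Returning to the spectrum Eq. (\[eqn:dlaw\]), the imperfect-nesting property itself is easy to verify numerically. In the sketch below (not from the paper; units $\hbar v_F=k_F=b^\prime=c^*=1$ and illustrative transfer integrals), the mismatch $\mathcal{E}_0(\mathbf{k}+\mathbf{Q}_0)+\mathcal{E}_0(\mathbf{k})$ between the two sheets vanishes identically only when $t_b^\prime=0$:

```python
import numpy as np

# Nesting check for Eq. (1), in units hbar*v_F = k_F = b' = c* = 1.
# Perfect nesting: eps_+(k + Q0) + eps_-(k) = 0 for all k on the left sheet.
tb, tc = 0.1, 0.03   # illustrative transfer integrals

def eps(branch, kx, ky, kz, tbp):
    # branch = +1 / -1 selects the right / left Fermi sheet
    return (branch * (kx - branch * 1.0) - 2 * tb * np.cos(ky)
            - 2 * tbp * np.cos(2 * ky) - 2 * tc * np.cos(kz))

Q = (2.0, np.pi, np.pi)                # Q0 = (2k_F, pi/b', pi/c*)
ky = np.linspace(-np.pi, np.pi, 101)
kz = 0.4
results = {}
for tbp in (0.0, 0.02):
    mismatch = (eps(+1, -1.0 + Q[0], ky + Q[1], kz + Q[2], tbp)
                + eps(-1, -1.0, ky, kz, tbp))
    results[tbp] = np.max(np.abs(mismatch))
    print(tbp, results[tbp])   # 0 for t_b' = 0, about 4*t_b' otherwise
```

The $t_b$ and $t_c$ harmonics cancel under the shift by $\mathbf{Q}_0$, while the $\cos(2k_yb^\prime)$ harmonic does not, leaving a residual of order $4t_b^\prime$.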
Nevertheless, the predicted saturation has not been observed so far. Furthermore, Murata et al. [@uniaxial-1; @uniaxial-2; @uniaxial-3; @uniaxial-4] reported an unexpected anisotropy of $T_{\rm SDW}$ in $\mathrm{(TMTSF)_2PF_6}$ under uniaxial stress, a result that seems to disagree with the theory. According to theory [@Montambaux; @Maki], the only relevant parameter is $t^\prime_b$; therefore, one might expect the uniaxial stress along $\mathbf{b}^\prime$ to affect $T_{\rm SDW}$ more strongly than the stress in other directions. Murata et al.[@uniaxial-1; @uniaxial-2; @uniaxial-3; @uniaxial-4], however, showed experimentally that the uniaxial stress applied along the $\mathbf{a}$ direction changed $T_{\rm SDW}$ more strongly than the stress in the $\mathbf{b}^\prime$ direction. The results mentioned above demonstrate that the consistency between the theoretical description and experiment is incomplete. Whereas there is a considerable amount of experimental data for the magnetic field $\mathbf{B\|c^*}$, for $\mathbf{B\|a}$ and $\mathbf{B\|b'}$ only one experiment[@Chaikin_abc] has been done so far at ambient pressure, and none at elevated pressure. Danner et al.[@Chaikin_abc] observed no field dependence for $\mathbf{B\|a}$ and $\mathbf{B\|b'}$ at ambient pressure. The absence of a field dependence, however, cannot be considered as a crucial test of the theory, because the effect of the magnetic field might be small at ambient pressure. Indeed, according to the theory, elevated pressure enhances any imperfections of nesting, and the effect of magnetic field is expected to become stronger. As a result, the strongest effect should take place at pressures close to the critical value. The aim of the present work, therefore, is to determine experimentally the $T_{\rm SDW}(B)$ dependence for $\mathbf{B\|a}$ and $\mathbf{B\|b'}$ near the critical pressure.
We report here our measurements of the magnetic field induced shift in $T_{\rm SDW}$ made at $P=0$ and 5kbar for the three orientations $\mathbf{B\|a, B\|b',\mbox{ and } B\|c^*}$. Our main result is that for $\mathbf{B\|a}$ and $\mathbf{B\|b^\prime}$ there is no distinct shift of the transition temperature within our measurement uncertainty of 0.05K at pressure up to 5kbar and in fields up to 19T. At the same time, we found quadratic $T_{\rm SDW}(B)$ dependences for $\mathbf{B\|c^*}$ both at zero and non-zero pressures, a result in agreement with previous studies by other groups [@critical-pressure; @Chaikin_abc; @Tsdw-Biskup; @Tsdw-highfields]. We suggest an explanation of our experimental data, based on the mean-field theory, and show that the latter correctly describes the effect of the magnetic field on $T_{\rm SDW}$. Experimental ============ Single-crystal samples of $\mathrm{(TMTSF)_2PF_6}$ were grown by the conventional electrochemical technique. Measurements were made on three samples from the same batch (the typical sample size is $3\times0.25\times0.1\ \mathrm{mm}^3$ along the $\mathbf{a},\mathbf{b^\prime}$ and $\mathbf{c^*}$ directions respectively). Eight fine wires (10$\mu m$ Au wires or 25$\mu m$ Pt wires) were attached to the sample with conductive graphite paste. Two groups of four contacts were made on the two opposite a-b$^\prime$ faces of the sample along the a-axis. All measurements were made by a four-probe ac lock-in technique at 10-120Hz frequencies. The out-of-phase component of the contact resistance was negligible. The resistance along the c$^*$-axis, $R_{zz}$, was measured using two pairs of contacts on the top and bottom faces, normal to the c$^*$-axis. For measurements under pressure the sample and a manganin pressure gauge were inserted into a miniature nonmagnetic spherical pressure cell[@PressureCell] with an outer diameter of 15mm. The cell was filled with a Si-organic pressure transmitting liquid[@Si-organic_liquid] (PES-1).
The pressure was applied and fixed at room temperature. The pressure values quoted throughout this paper refer to those determined at helium temperature. After pressure was applied, the cell was mounted in a two-axis rotation stage placed in liquid $^4$He in a bore of a 21T superconducting magnet at Oxford University. The rotating system enabled rotation of the pressure cell around the main axis by $200^\circ$ (with uncertainty of $\sim 0.1^\circ$) and around the auxiliary axis by $360^\circ$ (with uncertainty of $\sim 1^\circ$); this allowed us to set the sample at any desired orientation with respect to the magnetic field direction. Measurements at ambient pressure were performed using a simpler rotating system, which allowed rotation around only one axis (perpendicular to the field direction) by $\sim200^\circ$ with $\sim 0.1^\circ$ uncertainty. This system was mounted in a bore of a 17T superconducting magnet at ISSP. ![Temperature dependences of $R_{zz}$ at ambient pressure for six values of magnetic field (as shown on panel (b)) aligned with the least conduction direction, $\mathbf{B\|c^*}$. (a) $R_{zz}(T)$ for the set of magnetic fields; (b) logarithmic derivatives of the same data. Inset to panel (a) demonstrates a typical dependence of $R_{zz}$ versus $T$. Inset to (b) shows a linear fit for transition temperatures, obtained from the derivative plots, vs $B^2$.[]{data-label="ambient"}](figure2.eps){width="47.00000%"} ![Temperature dependences of $R_{zz}$ at an elevated pressure of 5kbar for a field orientation $\mathbf{B\|c^*}$. (a) $R_{zz}(T)$ for the set of magnetic fields. The inset shows $\Delta T_{SDW}$ versus $B^2$ for $P=5\,$kbar (filled dots) and for $P=0$ (empty circles). (b) logarithmic derivatives of the same data.
The inset demonstrates that the $T_{SDW}$ shift in the magnetic field is much more pronounced at $P=5$kbar than that at ambient pressure.[]{data-label="pressure"}](figure3.eps){width="47.00000%"} ![Temperature dependences of $R_{zz}$ at ambient pressure in magnetic field 9T compared for $\mathbf{B\|c^*}$ and $\mathbf{B\|b'}$ orientations. (a) panel shows $R_{zz}(T)$, (b) panel shows logarithmic derivatives of these dependences. Derivative graphs in a larger scale are shown on the inset.[]{data-label="b-ambient"}](figure4.eps){width="47.00000%"} ![Temperature dependences of $R_{zz}$ under pressure $P=5$kbar in magnetic field 19T aligned with $\mathbf{B\|a}$ or $\mathbf{B\|b'}$ compared with orientation $\mathbf{B\|c^*}$; (a) and (b) panels show $R_{zz}(T)$ and their logarithmic derivatives respectively. These results are corrected due to magnetoresistance of the RuO$_2$ thermometer. Panel (c) zooms in on a 0.5K interval near the transition, solid lines are cubic polynomial fits of experimental points.[]{data-label="orientation"}](figure5.eps){width="45.00000%"} Samples were cooled very slowly, at the rate of $0.2\div0.3$K/min to avoid microcracks. Nevertheless, some samples cooled down at ambient pressure experienced 1-2 microcracks, seen as irreversible jumps (a few percent) in the sample resistance. No cracks were observed during cooling of a sample in the pressure cell. During measurements under pressure the temperature of the cell was determined by a RuO$_2$ thermometer, and during measurements at ambient pressure – by a Cu-Fe-Cu thermocouple and a RuO$_2$ thermometer. The temperature was varied slowly in order to ensure that the sample and the thermometer were in thermal equilibrium. The thermal equilibrium condition was verified by the absence of a hysteresis in $R_{zz}(T)$ between cooling and heating cycles.
Results and discussion ====================== Measurements were performed on three samples from the same batch and the results were in qualitative agreement with each other. The most detailed data, taken for two samples, are presented in this section. $\mathbf{B\|c^*}$ ----------------- Fig. \[ambient\] shows the temperature dependence of $R_{zz}$ at ambient pressure and different magnetic fields. $R_{zz}(B=0)$ at ambient pressure is shown in the inset to Fig. \[ambient\]a over a large temperature range: as temperature decreases, the resistance decreases monotonically, then exhibits a sharp jump and further increases in a temperature-activated manner. The jump at 12K indicates the transition to the low-temperature spin-density wave state. Throughout the paper we define the transition temperature according to the peak in $d\ln R_{zz}/d(1/T)$, the logarithmic derivative of resistance *vs* inverse temperature. As the magnetic field applied along the $\mathbf{c^*}$-axis grows, $T_{\rm SDW}$ is shifted progressively to higher temperatures (Fig. \[ambient\]b). The shift increases quadratically with field, as shown in the inset to Fig. \[ambient\]b. Application of pressure $P=5\,$kbar lowers the zero-field transition temperature down to 6.75K (Fig. \[pressure\]a, b). The pressure dependence $T_{\rm SDW}(P)$ is known to be strongly nonlinear [@chaikin93]: its slope is small at low pressures and sharply increases in the vicinity of the critical pressure value. Therefore, the factor of two decrease in $T_{\rm SDW}(P)$ (from 12K to 6.75K, compare Figs. 2 and 3) demonstrates that the pressure is close to the critical value. At a pressure of 5kbar and in the presence of a magnetic field $\mathbf{B\|c^*}$, the transition temperature $T_{\rm SDW}$ grows nearly quadratically with field, $\Delta T_{\rm SDW}\propto B^2$ (see inset to Fig. \[pressure\]a).
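Extracting the coefficient $\alpha$ from such data amounts to a linear fit of $\Delta T_{\rm SDW}$ against $B^2$; a minimal sketch with synthetic numbers (not the measured data) is:

```python
import numpy as np

# Extracting alpha from DT_SDW = alpha * B^2 by a linear fit versus B^2.
# The numbers are synthetic, for illustration only.
B = np.array([0.0, 5.0, 9.0, 13.0, 17.0, 19.0])   # field, tesla
alpha_true = 0.0028                                # K/T^2, made-up value
dT = alpha_true * B**2                             # shift of T_SDW, K

alpha_fit = np.polyfit(B**2, dT, 1)[0]             # slope of DT vs B^2
print(alpha_fit)
```

Plotting $\Delta T_{\rm SDW}$ versus $B^2$, as in the insets, turns the quadratic law into a straight line whose slope is $\alpha(P)$.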
This growth is qualitatively similar to that at zero pressure; however, it is much more pronounced compared to the former case (see Fig. \[pressure\]b and the inset to Fig. \[pressure\]a). Application of a magnetic field also increases the resistance in the SDW state (cf. Figs. \[ambient\]a, \[pressure\]a). In principle, the resistance growth should be related to the increase of $T_{\rm SDW}$, e.g. due to the increase of the SDW gap in magnetic field[@Maki]. However, the data in Fig. \[pressure\] as well as previous observations (for example, Ref. ) indicate that $R_{zz}(T)$ cannot be described by a temperature-activated behavior, both in zero and non-zero magnetic fields. Apparently, the observed $R_{zz}(T,B)$ dependence is governed by both the increase of the SDW intensity in magnetic field and the magnetoresistance. Therefore, without an adequate model for $R_{zz}(T,B)$ the two contributions cannot be separated, even though our data clearly indicate the correlation between the resistance growth and the increase of $T_{\rm SDW}$ in magnetic field. The observed $T_{\rm SDW}(B)$ dependence (Fig. \[pressure\]) for our samples in magnetic field along $\mathbf{c^*}$ is qualitatively consistent with theory[@Montambaux; @Maki] and with earlier observations by other groups [@critical-pressure; @Chaikin_abc; @Tsdw-Biskup; @Tsdw-highfields]. According to the theory, pressure deteriorates nesting conditions, enhancing the $t_b^\prime$ term in the energy spectrum Eq. (1). Therefore, under pressure the number of unnested electrons increases as compared to the ambient pressure case. In contrast to the action of pressure, application of a magnetic field $\mathbf{B\|c^*}$ improves the nesting conditions, both at elevated and at zero pressure, although the number of unnested electrons is larger in the former case. This is predicted to lead to an enhancement of the field dependence of $T_{\rm SDW}$ under pressure. Our data (see inset to Fig.
\[pressure\]a) confirm the theoretically predicted enhancement of the $T_{\rm SDW}(\mathbf{B\|c^*})$ dependence at pressures close to the critical value. We therefore anticipate that, if the $T_{\rm SDW}(B)$ dependence existed for other field orientations, it would be enhanced at elevated pressures. Correspondingly, we performed an experimental search for this dependence at $P$ close to the critical value $P_c$. $\mathbf{B\|a,b^\prime}$ ------------------------ When the magnetic field is applied in the a-b plane, its effect on the SDW transition temperature is either absent or, at least, much weaker than for $\mathbf{B\|c^*}$. Figures 4a,b illustrate this result for one of the orientations, $\mathbf{B\|b^\prime}$. Even though the shape of the $R_{zz}(T)$ curves slightly changes with field, the temperature of the transition remains unchanged within our measurement uncertainty of $\sim0.03\,$K. For comparison, in the same figures we also present the $R_{zz}(T)$ data for the $\mathbf{B\|c^*}$ orientation, demonstrating that the shift of $T_{\rm SDW}$ in the same field of 9T is an order of magnitude larger for $\mathbf{B\|c^*}$. In line with the experimental situation for $\mathbf{B\|c^*}$, one might expect the shift in $T_{\rm SDW}$ (if any) to be enhanced under pressure. Figures \[orientation\]a,b summarize the main result of our paper — the $R_{zz}(T)$ dependences across the transition measured for all three field orientations ($\mathbf{B\|a}$, $\mathbf{b^\prime}$, and $\mathbf{c^*}$) at $P= 5$kbar, close to the critical pressure. At $P=5$kbar and at $B=19$T, the shift $\Delta T_{\rm SDW} (B)$ is as large as 1K for $\mathbf{B\|c^*}$, whereas for $\mathbf{B\|a,b^\prime}$ the shift is either missing or vanishingly small, at least a factor of 20 smaller than for $\mathbf{B\|c^*}$ (see Fig. 5b). Zooming in on the data in Fig. 5c (on the left panel), one can notice that the $R_{zz}(T)$ curves for $\mathbf{B\|a,b^\prime}$ are slightly shifted from the $\mathbf{B}=0$ one.
However, our measurement uncertainty is comparable with this difference; for this reason, the sources of this uncertainty are analyzed below. There are two possible sources of uncertainties: (i) the calibration error of the RuO$_2$ thermometer in magnetic fields, and (ii) an uncertainty of the procedure used to determine the transition temperature. The latter contribution was determined by the width of the transition and was estimated to be about 0.02 - 0.03K. As for the former one, in all measurements we used a RuO$_2$ resistance thermometer whose magnetoresistance was calibrated at 4.2K. Possible changes of the RuO$_2$ magnetoresistance between 4.2K and 6.7K are the major source of our uncertainty and are estimated to be 0.04K. Thus, we can only quantify changes $\Delta T_{\rm SDW}(B)$ that are larger than 50mK. If the transition temperature changes with $\mathbf{B\|a}$ or $\mathbf{B\|b^\prime}$, the changes must be smaller than the above value. Discussion ---------- \(i) In theory [@Montambaux; @Maki], the changes of the transition temperature result from imperfect nesting. The energy spectrum Eq. (1) contains only one term, $2t_b^\prime\cos(2k_yb^\prime)$, responsible for the nesting imperfection. A magnetic field parallel to the $\mathbf{c^*}$ direction eliminates the electron dispersion in the $\mathbf{b^\prime}$ direction from the system Hamiltonian [@Chaikin_q1d]. This effect is somewhat similar to the quasiclassical action of the Lorentz force on the electrons. The force is directed along the $\mathbf{b^\prime}$ axis because the Fermi velocity $\mathbf{v}_F$ in (TMTSF)$_2$PF$_6$ is, on average, along the $\mathbf{a}$-axis. It makes electrons on the Fermi surface cross the Brillouin zone in the $k_y$ direction (see Fig. \[surface\]a). Such a motion averages the electron’s energy over all $k_y$ states [@Chaikin_q1d], and all the terms that contain $\cos(k_yb^\prime)$ in the electron spectrum Eq. (\[eqn:dlaw\]) vanish.
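This $k_y$-averaging, and its absence for the other field orientation, can be checked directly on the hopping part of Eq. (\[eqn:dlaw\]); the sketch below uses illustrative transfer integrals and is not from the paper:

```python
import numpy as np

# Hopping part of Eq. (1) as a function of (k_y b', k_z c*);
# transfer integrals are illustrative.
tb, tbp, tc = 0.10, 0.01, 0.03

def hop(ky, kz):
    return (-2 * tb * np.cos(ky) - 2 * tbp * np.cos(2 * ky)
            - 2 * tc * np.cos(kz))

grid = np.linspace(-np.pi, np.pi, 400, endpoint=False)

# B||c*: orbits sweep k_y, so every k_y harmonic averages to zero
# and the antinesting t_b' term is eliminated
ok_c = np.allclose(hop(grid, 0.7).mean(), -2 * tc * np.cos(0.7))
# B||b': orbits sweep k_z instead, and the t_b' term survives the average
ok_b = np.allclose(hop(0.5, grid).mean(),
                   -2 * tb * np.cos(0.5) - 2 * tbp * np.cos(2 * 0.5))
print(ok_c, ok_b)   # True True
```

The average over a full Brillouin-zone period kills every harmonic of the swept momentum, which is the quantitative content of the averaging argument.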
Since the $t_b^\prime$ term, responsible for the nesting imperfection, also vanishes, the magnetic field $\mathbf{B\|c^*}$ improves nesting conditions; this results in a growth of the transition temperature. This effect is described by the mean-field theory [@Montambaux; @Maki]. In contrast, a magnetic field $\mathbf{B\|b^\prime}$ has no effect on $t_b^\prime$; therefore, no shift in $T_{\rm SDW}$ should occur in this field orientation. Our result that for $\mathbf{B\|b^\prime}$ the shift in $T_{\rm SDW}$ is much smaller than for $\mathbf{B\|c^*}$ does not contradict this prediction. \(ii) In principle, the magnetic field $\mathbf{B\|b^\prime}$ can still affect the electron dispersion in the $\mathbf{c^*}$ direction. In theory [@Montambaux] such a dispersion is neglected and $t_b^\prime$ is assumed to be the only term responsible for imperfect nesting. In general, besides $t_b^\prime$ there are other antinesting terms that can affect $T_{\rm SDW}$ in a field $\mathbf{B\|b^\prime}$. Studies of the $T_{\rm SDW}$ anisotropy for different field directions may in principle provide information on the $t_b/t_c$ ratio. In what follows, we estimate the $t_b/t_c$ ratio from our experimental data. In order to do this, we expand the energy spectrum Eq. (\[eqn:dlaw\]): $$\label{eqn:dlaw_ext} \mathcal{E}_1(\mathbf{k})=\mathcal{E}_0(\mathbf{k})- 2t_{bc}^\prime \cos(k_y b^\prime)\cos(k_z c^*)-2t_c^\prime \cos(2k_zc^*),$$ where $t_{bc}^\prime$ and $t_c^\prime$ are the next-to-nearest hopping integrals. In the model Eq. (\[eqn:dlaw\_ext\]) and for the magnetic field direction $\mathbf{B\|c^*}$, the correction to $\Delta T_{\rm SDW}$ from the $t_{bc}^\prime$ term is considerably smaller than that from $t_b^\prime$ for $t_b/t_c\gg1$. Therefore, the $T_{\rm SDW}(\mathbf{B\|c^*})$ dependence is almost unchanged when $t_{bc}^\prime$ is taken into account. However, for $\mathbf{B\|b^\prime}$ the situation is essentially different.
The electrons now experience the Lorentz force along the $\mathbf{c^*}$ axis, and the corresponding motion along $k_z$ averages out all $\cos(k_z c^*)$ terms in the electron spectrum Eq. (\[eqn:dlaw\_ext\]). Therefore, the contribution of $t_{bc}^\prime$ to the $T_{\rm SDW}(B)$ dependence becomes dominant. \(iii) When the magnetic field is applied along the $\mathbf{a}$-axis, it is not expected to alter the electron motion, because the Lorentz force is zero on average. Correspondingly, there are no terms in the electron spectrum which may be affected by the magnetic field in this orientation, and the transition temperature is not expected to depend on the field $\mathbf{B\|a}$. From the above discussion we conclude that $T_{\rm SDW}$ in principle might be affected by the field $\mathbf{B\|b^\prime}$. Bjeli$\mathrm{\check s}$ and Maki in Ref.  took the $t_{bc}^\prime$ term into account and derived an expression for the transition temperature in tilted magnetic field. Based on this result, one can show (see Appendix) that in high magnetic fields the anisotropy of $\Delta T_{\rm SDW}(\mathbf{B})$ is related to the $t_b/t_c$ ratio: $$\label{tdif2} \frac{T_{\rm SDW}(B\|c^*)-T_{\rm SDW}(0)}{T_{\rm SDW}(B\|b^\prime)-T_{\rm SDW}(0)}\approx\beta\frac{1}{4}\left(\frac{t_b}{t_c}\right)^2\left(\frac{\omega_c}{\omega_b}\right)^2,$$ where $\beta\approx1$ is a numeric factor. Since the above relationship is dominated by $(t_b/t_c)^2$, the shift in $T_{\rm SDW}$ for $\mathbf{B\|b^\prime}$ is expected to be considerably weaker than for $\mathbf{B\|c^*}$. Fig. \[orientation\] shows the experimental data for the SDW transition in fields $\mathbf{B}\|\mathbf{a},\,\mathbf{b^\prime},\,\mathbf{c^*}$. These data enable us to estimate the $t_b/t_c$ ratio using Eq. (\[tdif2\])[@scattering]. However, such a straightforward comparison of $T_{\rm SDW}$ in $B=19$T and $B=0$ includes a large uncertainty related to the thermometer magnetoresistance.
In order to overcome this problem, we ramped the temperature slowly in a fixed magnetic field of 19T and measured $R_{zz}(T)$. We repeated the procedure for the three field orientations ($\mathbf{B}\|\mathbf{a},\,\mathbf{b^\prime},\,\mathbf{c^*}$) by rotating in situ the pressure cell with the sample with respect to the magnetic field direction. The $T_{\rm SDW}(\mathbf{B})$ data measured this way were used to calculate $t_b/t_c$ with Eq. (\[tdif2\]). In the calculations we substituted $T_{\rm SDW}(B\|a)$ for $T_{\rm SDW}(0)$ because, as discussed in (iii), magnetic field $\mathbf{B\|a}$ does not affect $T_{\rm SDW}$. Such a procedure enabled us to eliminate the error related to the magnetoresistance of the RuO$_2$ thermometer. Although the difference $\Delta T_{ab}=T_{\rm SDW}({B\|b^\prime})- T_{\rm SDW}({B\|a})$ was within the experimental error bar of $0.02$K, its upper bound $\Delta T_{ab}=0.02$K corresponds, according to Eq. (\[tdif2\]), to a lower bound of $t_b/t_c\approx7$. This estimate agrees with the earlier result $t_b/t_c\approx6$ obtained from angle-dependent magnetoresistance studies in the metallic state at 7kbar[@Naughton]. The $t_b/t_c$ estimates indicate that the contribution of the $t_{bc}^\prime$ term to $T_{\rm SDW}$ is negligible, a factor of 50 smaller than that of $t_b^\prime$. Conclusion. =========== In conclusion, we have measured the magnetic field effect on the transition temperature $T_{\rm SDW}$ of the spin density wave state in (TMTSF)$_2$PF$_6$ in fields up to $\mathbf{B}=19$T for three orientations $\mathbf{B\|a, b', c^*}$, and at pressures up to 5kbar. Measurements for $\mathbf{B\|c^*}$ are in qualitative agreement with the mean field theory [@Montambaux; @Maki] and with results of other groups[@critical-pressure; @Chaikin_abc; @Tsdw-Biskup; @Tsdw-highfields]. Our data confirm that the field dependence of $T_{\rm SDW}$ is enhanced as pressure increases and approaches the critical value.
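The arithmetic behind the $t_b/t_c$ bound can be reproduced from Eq. (\[tdif2\]) with the numbers quoted in the text ($\Delta T_{\rm SDW}\approx1$K for $\mathbf{B\|c^*}$, $\Delta T_{ab}\le0.02$K, $\omega_c/\omega_b=2$, $\beta\approx1$):

```python
import numpy as np

# Numbers quoted in the text:
dT_c = 1.0          # K: shift for B||c* at 19 T and 5 kbar
dT_b_max = 0.02     # K: upper bound on the shift for B||b'
omega_ratio = 2.0   # omega_c / omega_b (see Appendix)
beta = 1.0          # numeric factor of order unity

# Invert Eq. (tdif2): dT_c/dT_b = beta*(1/4)*(t_b/t_c)^2*(omega_c/omega_b)^2
tb_over_tc = np.sqrt(4.0 * dT_c / (beta * dT_b_max)) / omega_ratio
print(tb_over_tc)   # about 7.07, i.e. the lower bound t_b/t_c ~ 7
```

A ratio of shifts of at least 50 thus translates directly into $t_b/t_c\gtrsim7$, and the factor-of-50 suppression of the $t_{bc}^\prime$ contribution follows from the same numbers.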
Measurements of $T_{\rm SDW}$ for $\mathbf{B\|a,b^\prime}$ under pressure are presented for the first time. The main result of our paper is that the magnetic field dependence of $T_{\rm SDW}$ for $\mathbf{B\|a}$ and for $\mathbf{B\|b^\prime}$ is either absent or vanishingly small (at least a factor of 20 smaller than for $\mathbf{B\|c^*}$) even near the critical pressure and at $B=19$T. This shows that the influence of other imperfect nesting terms on $T_{\rm SDW}$ is negligibly small. This result confirms the assumption of the theory that $T_{\rm SDW}$ is determined by the antinesting terms, with the largest contribution from the $t^\prime_b$ term in the electron spectrum.

Acknowledgements
================

We are grateful to P.D. Grigoriev, A.G. Lebed and A. Ardavan for their valuable suggestions and discussion of our results. The work was partially supported by the Programs of the Russian Academy of Sciences, RFBR (08-02-01047, 09-02-12206), the Russian Ministry of Education and Science, the State Program of support of leading scientific schools (1160.2008.2), EPSRC, and the Royal Society.

Appendix: derivation of Eq. (\[tdif2\])
=======================================

Bjeli$\mathrm{\check s}$ and Maki in Ref.  took the $t_{bc}^\prime$ term into account and derived an expression for the transition temperature for a magnetic field $\mathbf{B}$ in the $\mathbf{b^\prime}-\mathbf{c^*}$ plane. The general form of this expression involves a series of products of Bessel functions. In high magnetic fields the expression can be simplified by retaining only the leading term of the series.
Namely, for $\mathbf{B\|c^*}$: $$\ln\left[\frac{T_\mathrm{SDW}(B\|c^*)}{T_\mathrm{SDW}}\right]\approx J_1^2\left(\frac{t_{b}^\prime}{\omega_b}\right)J_0^2\left(\frac{t_{bc}^\prime}{\omega_b}\right)\left[\mathrm{Re}\Psi\left(\frac{1}{2}+\frac{i2\omega_b}{4\pi T_\mathrm{SDW}}\right)-\Psi\left(\frac{1}{2}\right)\right]\label{eqn:tcc}$$ and for $\mathbf{B\|b^\prime}$: $$\ln\left[\frac{T_\mathrm{SDW}(B\|b^\prime)}{T_\mathrm{SDW}}\right]\approx J_1^2\left(\frac{t_{bc}^\prime}{\omega_c}\right)\left[\mathrm{Re}\Psi\left(\frac{1}{2}+\frac{i\omega_c}{4\pi T_\mathrm{SDW}}\right)-\Psi\left(\frac{1}{2}\right)\right]\label{eqn:tcb}$$ Here $T_\mathrm{SDW}=T_\mathrm{SDW}(B=0)$, $J_{0,1}$ are Bessel functions and $\Psi$ is the digamma function. When $\mathbf{B\|c^*}$, the Lorentz force pushes the electrons to cross the Brillouin zone in the $k_y$ direction with the characteristic frequency $\omega_b=ev_FBb$ (see Fig.\[surface\]b). When $\mathbf{B\|b^\prime}$, the frequency is $\omega_c=ev_FBc$, a factor of 2 larger than $\omega_b$. Substitution of the lattice parameters and $v_F\sim1.1\cdot10^5$\[m/sec\][@Tsdw-highfields] gives $\omega_b\approx0.985\cdot B$\[K\]. Therefore, in high fields $t_b^\prime/\omega_b$ and $t_{bc}^\prime/\omega_b$ become small, leaving only the terms with the lowest-order Bessel functions in the original series. Consequently, we arrive at Eqs. (\[eqn:tcc\]) and (\[eqn:tcb\]). One can show, by expanding the exponents in a series, that $$\label{tdif} \frac{T_{\rm SDW}(B\|c^*)-T_{\rm SDW}(0)}{T_{\rm SDW}(B\|b^\prime)-T_{\rm SDW}(0)}\approx\beta\frac{J_1^2\left(\frac{t_b^\prime}{\omega_b}\right)J_0^2\left(\frac{t_{bc}^\prime}{\omega_b}\right)}{J_1^2\left(\frac{t_{bc}^\prime}{\omega_c}\right)},$$ where $\beta\approx1$ is a numeric factor.
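The numerical value $\omega_b\approx0.985\cdot B$ [K] quoted above can be checked directly from $\omega_b=ev_FBb$. The following sketch is ours; it uses $v_F\sim1.1\cdot10^5$ m/s from the text, while the lattice parameter $b\approx7.7$ Å is an assumed literature-typical value for (TMTSF)$_2$PF$_6$ that is not quoted in the text.

```python
e = 1.602e-19    # elementary charge, C
k_B = 1.381e-23  # Boltzmann constant, J/K
v_F = 1.1e5      # Fermi velocity, m/s (from the text)
b = 7.7e-10      # lattice parameter along b', m (assumed value)

def omega_b_kelvin(B):
    """Characteristic frequency omega_b = e * v_F * B * b, expressed in kelvin."""
    return e * v_F * B * b / k_B

print(omega_b_kelvin(1.0))   # ~0.98 K per tesla, consistent with 0.985*B [K]
print(omega_b_kelvin(19.0))  # ~18.7 K at the maximum field of 19 T
```

At 19 T the frequency scale is thus well above the transition temperatures involved, which is why the small-argument Bessel limit below applies.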
By substituting the asymptotic forms of the Bessel functions for small arguments, one can obtain $$\label{tdif22} \frac{T_{\rm SDW}(B\|c^*)-T_{\rm SDW}(0)}{T_{\rm SDW}(B\|b^\prime)-T_{\rm SDW}(0)}\approx\beta\frac{1}{4}\left(\frac{t_b}{t_c}\right)^2\left(\frac{\omega_c}{\omega_b}\right)^2.$$ Therefore, by measuring the above differences of the transition temperature one can determine the ratio of the transfer integrals $t_b/t_c$. [99]{} For a review see *The Physics of Organic Superconductors and Conductors*, edited by A.G. Lebed (Springer, Berlin, 2008); T. Ishiguro, K. Yamaji, G. Saito, *Organic Superconductors*, 2nd edn (Springer, Berlin, 1998). Throughout this paper we use the approximation of an orthorhombic elementary cell with the basic vectors $\mathbf{a}$, $\mathbf{b^\prime}$ and $\mathbf{c^*}$ and the coordinates $(x,y,z)$ in the corresponding directions. J.F. Kwak, J.E. Schirber, P.M. Chaikin, J.M. Williams, H.-H. Wang, L.Y. Chiang, Phys. Rev. Lett. **56**, 972 (1986). J.F. Kwak, J.E. Schirber, R.L. Greene, E.M. Engler, Phys. Rev. Lett. **46**, 1296 (1981). L.P. Gor’kov, A.G. Lebed, J. Physique Lett. **45**, L-433 (1984). G. Montambaux, Phys. Rev. B **38**, 4788 (1988). K. Maki, Phys. Rev. B **47**, 11506 (1993). A. Bjelis, K. Maki, Phys. Rev. B **45**, 12887 (1992); G.M. Danner, P.M. Chaikin, S.T. Hannahs, Phys. Rev. B **53**, 2727 (1996). W. Kang, S.T. Hannahs, P.M. Chaikin, Phys. Rev. Lett. **70**, 3091 (1993). F.Z. Guo, K. Murata, A. Oda, H. Yoshino, J. Phys. Soc. Jpn. **69**, 2164 (2000). K. Murata, Y. Mizuno, F.Z. Guo, S. Shodai, A. Oda, H. Yoshino, Synth. Met. **120**, 1071 (2001). K. Murata, K. Iwashita, Y. Mizuno, F.Z. Guo, S. Shodai, H. Yoshino, J.S. Brooks, L. Balicas, D. Graf, K. Storr, I. Rutel, S. Uji, C. Terakura, Y. Imanaka, Synth. Met. **63**, 1263–1265 (2002). K. Murata, K. Iwashita, Y. Mizuno, F.Z. Guo, S. Shodai, H. Yoshino, J.S. Brooks, L. Balicas, D. Graf, K. Storr, I. Rutel, S. Uji, C. Terakura, Y.
Imanaka, Synth. Met. **133**–**134**, 51–53 (2003). N. Matsunaga, K. Yamashita, H. Kotani, K. Nomura, T. Sasaki, T. Hanajiri, J. Yamada, S. Nakatsuji, H. Anzai, Phys. Rev. B **64**, 052405 (2001). N. Biskup, S. Tomic, D. Jerome, Phys. Rev. B **51**, 17972 (1995). A.V. Kornilov and V.M. Pudalov, Instrum. Exp. Tech. **42**, 127 (1999). A.S. Kirichenko, A.V. Kornilov, and V.M. Pudalov, Instrum. Exp. Tech. **48**, 813 (2005). P.M. Chaikin, Phys. Rev. B **31**, 4770 (1985). Strictly speaking, the theoretical models[@Montambaux; @Maki] for $T_{\rm SDW}(B)$ do not take the finite scattering time $\tau$ into account; however, we expect Eq. (\[tdif2\]) to be valid in the high-field limit, $\omega_c\tau\gg1$. For the studied sample we observed the FISDW transitions starting from the field $B=7$T (for $T=2$K and $P=10$kbar). This result provides evidence that $\omega_c\tau\gg1$ for our data taken at 19T and thus ensures the applicability of Eq. (\[tdif2\]). I.J. Lee and M.J. Naughton, Phys. Rev. B **58**, R13343 (1998).
---
abstract: 'We investigate the behavior of a mixture of asymmetric colloidal dumbbells and emulsion droplets by means of kinetic Monte Carlo simulations. The evaporation of the droplets and the competition between droplet-colloid attraction and colloid-colloid interactions lead to the formation of clusters built up of colloid aggregates with both closed and open structures. We find that stable packings and hence complex colloidal structures can be obtained by changing the relative size of the colloidal spheres and/or their interfacial tension with the droplets.'
author:
- Hai Pham Van
- Andrea Fortini
- Matthias Schmidt
bibliography:
- 'refs.bib'
title: Assembly of open clusters of colloidal dumbbells via droplet evaporation
---

Introduction {#intr}
============

Complex colloids characterized by heterogeneous surface properties are an active field of research due to their diverse potential applications as interface stabilizers, catalysts, and building blocks for nanostructured materials. Janus particles are colloidal spheres with different properties on the two hemispheres. They have recently attracted significant attention due to their novel morphologies [@Pawar2010]. Colloidal dumbbells, correspondingly, consist of two colloidal spheres of different sizes or of dissimilar materials [@Claudia2013]. Many studies have investigated the self-assembly of colloidal dumbbells into more complex structures, including micelles, vesicles [@Sciortino2009; @Liang2008], bilayers [@Munao2013; @Whitelam2010; @Avvisati2015] and dumbbell crystals  [@Mock2007; @Marechal2008; @Ahmet2010]. In particular, open clusters of colloidal dumbbells with syndiotactic, chiral [@Zerrouki2008; @Bo2013] and stringlike structures [@Smallenburg2012] are of special interest because they can be regarded as colloidal molecules [@Blaaderen2003; @Duguet2011] that exhibit unique magnetic, optical and rheological properties [@Edwards2007].
However, the control of the cluster stability and of the particular geometric structure are two major challenges, which have yet to be solved. Several self-assembly techniques have been used to control the aggregation of colloidal particles. Velev [*et al*.]{} developed a method to obtain so-called colloidosomes from colloidal particles by evaporating droplets [@Velev1996a; @Velev1996b; @Velev1997]. Based on this technique, Manoharan [*et al*.]{} [@Manoharan2003] successfully prepared micrometer-sized clusters and found that the structures of the particle packings seem to minimize the second moment of the mass distribution. Wittemann [*et al*.]{} also produced clusters, but with a considerably smaller size of about 200nm [@Wittemann2008; @Wittemann2009; @Wittemann2010]. Cho [*et al*.]{} prepared binary clusters with different sizes or species from phase-inverted water-in-oil [@Cho2005] and oil-in-water emulsion droplets [@Cho2008]. These authors found that the interparticle interaction and the wettability of the constituent spheres play an important role in the surface coverage of the smaller particles. In addition, for oil-in-water emulsions the minimization of the second moment of the mass distribution ($M2$) only applies if the size ratio is less than 3. More recently, Peng [*et al*.]{} [@Bo2013] reported both experimental and simulation work on the cluster formation of dumbbell-shaped colloids. These authors proved that the minimization of the second moment of the mass distribution is not generally true for anisotropic colloidal dumbbell self-assembly. However, they predicted cluster structures without considering the different wettabilities of the constituent colloidal spheres. In previous work, Schwarz [*et al*.]{} [@Ingmar2011] studied cluster formation via droplet evaporation using Monte Carlo (MC) simulations with shrinking droplets.
It was shown that a short-ranged attraction between colloidal particles can produce $M2$-nonminimal isomers and that the fraction of isomers varied for each number of constituent particles. In addition, supercluster structures with complex morphologies were found, starting from a mixture of tetrahedral clusters and droplets. Fortini [@Fortini2012] modeled cluster formation in hard sphere-droplet mixtures, without shrinking droplets, and observed a transition from clusters to a percolated network that is in good agreement with experimental results. In the current paper, we extend the model of Ref. [@Ingmar2011] in order to investigate the dynamic pathways of cluster formation in a mixture of colloidal dumbbells and emulsion droplets. By varying the size ratio or the hydrophilicity of the colloidal dumbbells, we find a variety of complex cluster structures that have not been observed in clusters of monodisperse colloidal spheres. In particular, we find open clusters with a compact core, which determines the overall symmetry, and protruding arms. These clusters could serve as building blocks for novel self-assembled materials. This paper is organized as follows. We introduce the model and simulation method in Sec. \[s:model-method\]. We analyze the cluster formation, structures and size distributions for dumbbells with asymmetric wetting properties in Sec. \[s:fluid1\]. In Sec. \[s:fluid2\] we present the results for dumbbells with asymmetric sizes. Conclusions are given in Sec. \[s:conc\].

Model and Methods {#s:model-method}
=================

We simulate a ternary mixture of $N_\textrm{d}$ droplets of diameter $\sigma_{\textrm{d}}$ and $N_\textrm{c}$ colloidal dumbbells formed by two spherical colloids, labeled colloidal species 1 and colloidal species 2, of diameters $\sigma_{1}$ and $\sigma_{2}$ ($\sigma_{1}\geq \sigma_{2} $), respectively. A sketch of the model is shown in Fig. \[fig:skt\].
![Sketch of the model of colloidal dumbbells (bright yellow and dark red spheres) and droplets (white spheres). Shown are the diameters of colloidal species 1, $\sigma_{1}$, colloidal species 2, $\sigma_{2}$, and droplet $\sigma_{d}$. (a) In the initial stages the droplet captures the colloidal dumbbells. (b) The droplet has shrunk and has pulled the dumbbells into a cluster. The competition between Yukawa repulsion and surface adsorption energies can lead to open cluster structures. []{data-label="fig:skt"}](fig1){width="9cm"} The colloids in each dumbbell are separated from each other by a distance $l$ that fluctuates in the range of $\lambda\leq l\leq \lambda+\Delta $, where $\lambda=(\sigma_{1}+\sigma_{2})/2$. The total interaction energy is given by $$\begin{aligned} \dfrac{U}{k_\textrm{B}T} &=&\sum_{i<j}^{N_{c}} \phi_{11}\left ( \left | \mathbf{r}_{1i}-\mathbf{r}_{1j} \right | \right )+\sum_{i<j}^{N_{c}} \phi_{22}\left ( \left | \mathbf{r}_{2i}-\mathbf{r}_{2j} \right | \right )\nonumber \\ &&+\sum_{i,j}^{N_{c}} \phi_{12}\left ( \left | \mathbf{r}_{1i}-\mathbf{r}_{2j} \right | \right )+\sum_{i}^{N_c} \sum_{j}^{N_{d}} \Phi _{1\textrm{d}}\left ( \left | \mathbf{r}_{1i}-\mathbf{R}_{j} \right | \right )\nonumber \\ &&+\sum_{i}^{N_c} \sum_{j}^{N_{d}} \Phi_{2\textrm{d}}\left ( \left | \mathbf{r}_{2i}-\mathbf{R}_{j} \right | \right )\nonumber \\ &&+\sum_{i<j}^{N_{d}} \Phi_{\textrm{dd}}\left ( \left |\mathbf{R}_{i}-\mathbf{R}_{j} \right | \right ), \label{eqn:total-energy}\end{aligned}$$ where $k_\textrm{B}$ is the Boltzmann constant; $T$ is the temperature; $\mathbf{r}_{1i}$ and $\mathbf{r}_{2i}$ are the center-of-mass coordinates of colloid 1 and colloid 2 in dumbbell $i$, respectively; $\mathbf{R}_{j}$ is the center-of-mass coordinate of droplet $j$; $\phi_{11}, \phi_{12}$ and $\phi_{22}$ are the colloid 1-colloid 1, colloid 1-colloid 2, and colloid 2-colloid 2 pair interactions, respectively; $\Phi_{1\textrm{d}}$ and $\Phi_{2\textrm{d}}$ are the colloid 1-droplet, 
colloid 2-droplet pair interactions, respectively; and $\Phi_{\textrm{dd}}$ is the droplet-droplet pair interaction. The colloid-colloid pair interaction is composed of a short-ranged attractive square well and a longer-ranged repulsive Yukawa potential, $$\phi_{11}(r)=\left \{ \begin{array}{ll} \infty & r< \sigma_{1} \\ - \epsilon_{\mathrm{SW}} & \sigma_{1} <r< \sigma_{1}+\Delta \\ \epsilon_{\mathrm{Y}} \sigma_{1}\dfrac{e^{-\kappa \left ( r-\sigma_{1} \right )}}{r} & \textrm{otherwise,} \end{array} \right . \label{eqn:phic1c1}$$ $$\phi_{22}(r)=\left \{ \begin{array}{ll} \infty & r< \sigma_{2} \\ - \epsilon_{\mathrm{SW}} & \sigma_{2} <r< \sigma_{2}+\Delta \\ \epsilon_{\mathrm{Y}}\sigma _{2}\dfrac{e^{-\kappa \left ( r-\sigma_{2} \right )}}{r} & \textrm{otherwise,} \end{array} \right . \label{eqn:phic2c2}$$ and $$\phi_{12}(r)=\left \{ \begin{array}{ll} \infty & r< \lambda \\ - \epsilon_{\mathrm{SW}} & \lambda <r< \lambda +\Delta \\ \epsilon_{\mathrm{Y}}\lambda\dfrac{e^{-\kappa \left ( r-\lambda \right )}}{r} & \textrm{otherwise,} \end{array} \right . \label{eqn:phic1c2}$$ where $r$ is the center-center distance between the particles. ![Sketch of the pair interactions. (a) Potentials between two colloidal particles with ${\epsilon_{\mathrm{SW}}=9k_{\textrm{B}}T}$, ${\Delta=0.09\sigma _{2}}$, $\epsilon_{\textrm{Y}}=24.6k_{\textrm{B}}T$ and (b) the colloid-droplet potential at ${\sigma_{d}\left(t\right)=4\sigma_{2}}$, with $\sigma_1=1.2\sigma_2$.[]{data-label="fig:pot"}](fig2a "fig:"){width="4.25cm"} ![Sketch of the pair interactions. (a) Potentials between two colloidal particles with ${\epsilon_{\mathrm{SW}}=9k_{\textrm{B}}T}$, ${\Delta=0.09\sigma _{2}}$, $\epsilon_{\textrm{Y}}=24.6k_{\textrm{B}}T$ and (b) the colloid-droplet potential at ${\sigma_{d}\left(t\right)=4\sigma_{2}}$, with $\sigma_1=1.2\sigma_2$.[]{data-label="fig:pot"}](fig2b "fig:"){width="4.25cm"} In Fig.
\[fig:pot\](a), the colloid-colloid interaction potentials are plotted against the separation for a given set of parameters used in the simulations. The parameters ${\epsilon_{\mathrm{SW}}=9k_{\textrm{B}}T}$ and ${\Delta=0.09\sigma _{2}}$ are the depth and the width of the short-ranged attractive square well, respectively, while the parameter $\epsilon_{\textrm{Y}}=24.6k_{\textrm{B}}T$ controls the strength of the long-ranged repulsive Yukawa interaction with inverse Debye length $\kappa\sigma_{2}=10$. A comparison between experimental quantities and simulation parameters can be found in Ref. [@Mani2010]. In principle, the strength of the attractive interaction is chosen so that physical bonds between colloids at the end of the evaporation are irreversible. At the same time, the repulsive barrier is chosen to be large enough to hinder spontaneous clustering. A wide range of simulation parameters satisfies the above conditions without qualitatively affecting the final results. The potential shape depicted in Fig. \[fig:pot\](a) is similar to that employed by Mani [*et al*.]{} [@Mani2010]. However, in contrast to their systematic investigation of the effect of the repulsive parameters ($\epsilon_{\textrm{Y}}$, $\kappa$) on the stability of colloidal shells, we restrict our consideration to fixed values of both the attractive and repulsive parameters between colloids but vary the colloid-droplet energies in order to address the competing interactions. The droplet-droplet pair interaction is a hard-sphere potential, $$\Phi_{\textrm{dd}}(r)=\left \{ \begin{array}{ll} \infty & r< \sigma_{\textrm{d}}+\sigma_{1}\\ 0 & \text{otherwise,} \end{array} \right . \label{eq:phidd}$$ where the hard core droplet diameter $\sigma_{\textrm{d}}$ is added to the colloid diameter $\sigma _{1}$ such that no two droplets can share the same colloid (recall that $\sigma_{1}\geq\sigma_{2}$). The colloid-droplet interaction is taken to model the Pickering effect [@Pieranski1980].
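The piecewise form of Eq. (\[eqn:phic1c1\]) translates directly into code. The following minimal Python sketch is our own illustration (energies in units of $k_{\textrm{B}}T$, lengths in units of $\sigma_2$; parameter values as quoted above):

```python
import math

def phi_cc(r, sigma, eps_sw=9.0, delta=0.09, eps_y=24.6, kappa=10.0):
    """Colloid-colloid pair potential of Eq. (phic1c1): hard core,
    short-ranged square well, and longer-ranged Yukawa repulsion."""
    if r < sigma:
        return float('inf')              # hard core
    if r < sigma + delta:
        return -eps_sw                   # square-well attraction (bonding range)
    return eps_y * sigma * math.exp(-kappa * (r - sigma)) / r  # Yukawa barrier

sigma = 1.0
print(phi_cc(1.05, sigma))  # inside the well: -9.0
print(phi_cc(1.2, sigma))   # on the repulsive barrier, a few k_B T
```

The same function with $\sigma\to\sigma_2$ or $\sigma\to\lambda$ reproduces Eqs. (\[eqn:phic2c2\]) and (\[eqn:phic1c2\]).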
Since the droplets shrink, their diameter is a function of time, $\sigma_{\textrm{d}}(t)$, and may be larger or smaller than that of the colloids. Hence, if $\sigma_{\textrm{d}} > \sigma_{i}$, the colloid-droplet adsorption energy is [@Ingmar2011] $$\Phi_{i\textrm{d}}(r)= \left \{ \begin{array}{ll} - \gamma_{i} \pi \sigma_{\textrm{d}} h & \dfrac{\sigma_{\textrm{d}}-\sigma_{i}}{2}<r< \dfrac{\sigma_{\textrm{d}}+\sigma_{i} }{2}\\ 0 & \textrm{otherwise}, \end{array} \right . \label{eqn:phicd1}$$ and when $\sigma_{\textrm{d}} < \sigma_{i}$, $$\Phi_{i\textrm{d}}(r)= \left \{ \begin{array}{ll} - \gamma_{i} \pi \sigma_{\textrm{d}}^{2} & r< \dfrac{\sigma_{i}-\sigma_{\textrm{d}}}{2} \\ - \gamma_{i} \pi \sigma_{\textrm{d}} h & \dfrac{\sigma_{i}-\sigma_{\textrm{d}}}{2}<r< \dfrac{\sigma_{i}+\sigma_{\textrm{d}} }{2}\\ 0 & \textrm{otherwise,} \end{array} \right . \label{eqn:phicd2}$$ where $i=1,2$ labels the two colloidal species in each dumbbell, ${h=(\sigma_{i}/2-\sigma_{\textrm{d}}/2+r)(\sigma_{i}/2+\sigma_{\textrm{d}}/2-r)/(2r)}$ is the height of the spherical cap that results from the colloid-droplet intersection [@Ingmar2011], and the parameter $\gamma_{i}$ is the droplet-solvent interfacial tension used to control the strength of the colloid-droplet interaction. \[See Fig. \[fig:pot\](b) for an illustration of the colloid-droplet pair potential.\] We introduce the energy ratio $k$ defined by $$k=\frac{\gamma _{2}}{\gamma _{1}}, \label{eqn:k}$$ which characterizes the dissimilarity of the surface properties of the two colloidal species. We define a bond between two colloidal spheres of type $i$ and $j$ when their distance is smaller than or equal to ${(\sigma_{i}+\sigma_{j})/2+\Delta}$, with $i,j=1,2$. A cluster is a group of colloidal particles connected with each other by a sequence of bonds. Hence, each cluster is characterized by both the number of bonds $n_b$ and the number of colloidal particles $n_c$ belonging to the cluster.
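Likewise, the adsorption energies of Eqs. (\[eqn:phicd1\]) and (\[eqn:phicd2\]) can be sketched as follows. This is our own illustration of the two cases, with $\gamma$ in units of $k_{\textrm{B}}T/\sigma^2$:

```python
import math

def phi_cd(r, sigma_c, sigma_d, gamma):
    """Colloid-droplet adsorption energy, Eqs. (phicd1)/(phicd2):
    -gamma times the droplet-colloid contact area, with the spherical-cap
    height h as defined in the text."""
    def cap_height(rr):
        return ((sigma_c/2 - sigma_d/2 + rr) * (sigma_c/2 + sigma_d/2 - rr)) / (2*rr)
    if sigma_d > sigma_c:                               # Eq. (phicd1)
        if (sigma_d - sigma_c) / 2 < r < (sigma_d + sigma_c) / 2:
            return -gamma * math.pi * sigma_d * cap_height(r)
        return 0.0
    if r < (sigma_c - sigma_d) / 2:                     # Eq. (phicd2), engulfed droplet
        return -gamma * math.pi * sigma_d**2
    if r < (sigma_c + sigma_d) / 2:
        return -gamma * math.pi * sigma_d * cap_height(r)
    return 0.0

# Colloid of diameter sigma on the surface of a droplet of diameter 4*sigma
# (gamma = 100 k_B T / sigma^2, as in the text): a deep adsorption well.
print(phi_cd(2.0, 1.0, 4.0, 100.0))  # → -25*pi ≈ -78.5
```

The depth of this well, of order $10^2\,k_{\textrm{B}}T$, is what makes desorption of colloid-1 spheres practically impossible in the simulations described below.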
A single dumbbell can be considered as a trivial cluster structure with $n_b=1, n_c=2$. These trivial clusters are neglected in the following analysis. We carry out Metropolis MC simulations in the *NVT* ensemble. For a fixed set of parameters, statistical data are collected by running 30 independent simulations. In each run, a maximum displacement step of the colloids ${d_{c}=0.01\sigma _{2}}$ and of the droplets ${d_{d}=d_{c}\sqrt{\sigma _{2}/\sigma _{\textrm{d}}}}$ ensures that the Monte Carlo simulations are approximately equivalent to Brownian dynamics simulations [@Sanz2010]. The total number of MC cycles per particle is $10^{6}$, with $5\times 10^{5}$ MC cycles used to shrink the droplets at a fixed rate. This shrinking rate is chosen such that the droplet diameter vanishes after $5\times 10^{5}$ MC steps. Another $5\times 10^{5}$ MC cycles are used to equilibrate the cluster configurations. As a test, for $k=0.1$ (open clusters), $k=0.5$ (intermediate clusters) and $k=1$ (closed clusters), we monitored the total energy and the obtained number of clusters $N_{n_{c}}$ (composed of $n_c$ colloids and $n_b$ bonds) for an additional $10^{6}$ cycles and found no changes in the results. Our kinetic MC simulation uses sequential moves of individual particles and neglects the collective motion of particles in the cluster, i.e., collective translational and rotational cluster moves are absent. Such collective modes of motion only play a role in dense colloidal suspensions of strongly interacting overdamped particles [@Stephen2009; @Stephen2011]. We did not attempt to reproduce the correct experimental time scale of the droplet evaporation, and the influence of colloid adsorption on the evaporation rate is neglected.
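The bond and cluster definitions given above amount to finding connected components of a bond graph. A minimal sketch of this analysis step (our own illustration; function names are ours, and periodic boundaries are ignored for brevity):

```python
import math
from collections import deque

def clusters(positions, sigmas, delta):
    """Group colloids into clusters: spheres i, j are bonded when their
    distance is <= (sigma_i + sigma_j)/2 + delta; clusters are the
    connected components of the bond graph, found here by BFS."""
    n = len(positions)
    def bonded(i, j):
        return math.dist(positions[i], positions[j]) <= (sigmas[i] + sigmas[j]) / 2 + delta
    seen, result = set(), []
    for start in range(n):
        if start in seen:
            continue
        comp, queue = [], deque([start])
        seen.add(start)
        while queue:
            i = queue.popleft()
            comp.append(i)
            for j in range(n):
                if j not in seen and bonded(i, j):
                    seen.add(j)
                    queue.append(j)
        result.append(sorted(comp))
    return result

# Three touching unit spheres in a row plus one distant sphere:
pos = [(0, 0, 0), (1.0, 0, 0), (2.0, 0, 0), (10, 0, 0)]
print(clusters(pos, [1.0] * 4, 0.09))  # → [[0, 1, 2], [3]]
```

The bond number $n_b$ of a cluster is then simply the count of bonded pairs within one component.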
The physical time corresponding to the MC time scale can be roughly estimated via the translational diffusion coefficient of the clusters, $D_{\textrm{cls}}$, defined by the Einstein relationship [@Huitema1999], $$\lim_{n\to\infty}\frac{\left\langle \triangle r_{\textrm{cls}}^{2}(n)\right\rangle }{n}=6D_{\textrm{cls}}\tau,$$ where $n$ is the number of MC cycles and $\tau$ is the physical time per MC cycle. The Stokes-Einstein equation for the diffusion of spherical particles is ${D_{\textrm{cls}}=\frac{k_{B}T}{3\pi\eta\sigma_{\textrm{cls}}}}$, with $\eta$ the viscosity of the solvent. Here $\left\langle \triangle r_{\textrm{cls}}^{2}(n)\right\rangle $ is the mean square displacement of the clusters after $n$ cycles, defined as $$\left\langle \triangle r_{\textrm{cls}}^{2}(n)\right\rangle =\dfrac{1}{N_{n_{c}}}{ \sum_{i=1}^{N_{n_{c}}}\triangle\mathbf{r}_{\textrm{cls},i}(n)\cdot\triangle\mathbf{r}_{\textrm{cls},i}(n)},$$ where $N_{n_{c}}$ is the number of clusters with $n_c$ colloids and $\triangle\mathbf{r}_{\textrm{cls},i}(n)$ is the center-of-mass displacement of a cluster with $n_c$ colloids after $n$ cycles. In addition, the time required for a cluster to diffuse over its diameter $\sigma_{\textrm{cls}}$ is the so-called Brownian time scale $\tau_{B}$, given by ${\tau_{B}=\sigma_{\textrm{cls}}^{2}/D_{\textrm{cls}}}$, with the assumption that the diameter of the spherical cluster is ${\sigma_{\textrm{cls}}=\sqrt[3]{n_{c}}\sigma_2}$. Hence, we have $$\dfrac{n\tau}{\tau_{B}}\simeq\frac{\left\langle \triangle r_{\textrm{cls}}^{2}(n)\right\rangle }{6\sqrt[3]{n_{c}^{2}}\sigma_{2}^{2}}. \label{eqn:time}$$ From Eq. (\[eqn:time\]) we derive an MC simulation time of about $10$–$20\,\tau_B$, depending on the number of colloids in the cluster. As an example, for clusters composed of ten colloids with a diameter of $154\ \textrm{nm}$ in water ($\eta=1\ \textrm{mPa\ s}$) and at room temperature, we obtain a Brownian time $\tau_{B}\sim0.85\ \textrm{s}$.
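The Stokes-Einstein estimate can be reproduced numerically. The sketch below is ours and assumes $T=298$ K; note that the resulting $\tau_B$ is quite sensitive to the convention adopted for the cluster's hydrodynamic diameter in the Stokes-Einstein relation, so it should be read as an order-of-magnitude estimate only.

```python
import math

k_B = 1.381e-23      # Boltzmann constant, J/K
T = 298.0            # assumed room temperature, K
eta = 1e-3           # viscosity of water, Pa s
sigma2 = 154e-9      # colloid diameter, m

n_c = 10
sigma_cls = n_c ** (1 / 3) * sigma2                     # effective cluster diameter
D_cls = k_B * T / (3 * math.pi * eta * sigma_cls)       # Stokes-Einstein
tau_B = sigma_cls ** 2 / D_cls                          # Brownian time

print(f"D_cls = {D_cls:.2e} m^2/s, tau_B = {tau_B:.3f} s")
```

With this particular choice of conventions the Brownian time comes out on the sub-second scale, orders of magnitude below experimental evaporation times.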
Compared to the time scales of experiments, which typically last tens of minutes [@Wittemann2008; @Ingmar2011], our MC simulation time scales are much smaller. However, the validity of a similar model for a binary mixture of single colloidal particles and droplets has been demonstrated by the qualitative and quantitative agreement between experimental and simulation results [@Ingmar2011]. The simulations are performed in a cubic box with $N_c=250$ colloidal dumbbells at packing fraction $\eta_c=0.01$ and $N_d=10-44$ droplets at packing fraction $\eta_d=0.1$. To initialize our simulation, we start by randomly distributing the colloidal dumbbells in the simulation box with random orientations. The initial distance between colloid 1 and colloid 2 in a dumbbell is set smaller than $\lambda+\Delta$. In contrast, the initial distance between any two colloidal spheres that belong to different dumbbells is larger than ${\sigma _{1}+\Delta}$. In this way, no two colloidal dumbbells bind together in the initial stage of the simulation. In addition, all colloids are located outside of the droplets. The initial droplet diameter is set to $8\sigma _{2}$ and shrunk at a constant rate.

Results and Discussion
======================

Asymmetric wetting properties and symmetric sizes {#s:fluid1}
-------------------------------------------------

![Snapshots of the simulation for colloidal dumbbells with symmetric sizes and droplets at the energy ratio $k=0.1$. Results are shown at two different stages of the time evolution: (a) after $2.5\times10^{5}$ MC cycles several colloidal dumbbells (bright yellow and dark red spheres) are trapped at the surface of the droplets (gray spheres), and (b) after $10^{6}$ MC cycles the stable clusters that are formed due to the droplets are composed of differently colored colloids, that is, blue and green spheres represent colloidal species 1 and colloidal species 2, respectively.
Open cluster structures with a compact core formed by colloid 1 and protruding arms formed by colloid 2 can be observed.[]{data-label="fig:snap"}](fig3){width="9cm"} ![Radial distribution functions, $g_{i\textrm{d}}(r)$ ($i=1,2$), for colloid 1-droplet (solid lines) and colloid 2-droplet (dashed lines) as a function of the scaled distance $r/\sigma$ at energy ratio $k=0.1$. Shown are results at different stages of the computer simulation. (See the notation in Table \[tab:peak-position\].)[]{data-label="fig:fkt1"}](fig4){width="9cm"}

  ------- ---------------------- ---------------------------------------------- ------------------------------------------------- ---------------------------------
  $t_i$   MC cycles ($10^{5}$)   Peak of $g_{1\textrm{d}}(r)$ $(r/\sigma)$      Peak(s) of $g_{2\textrm{d}}(r)$ $(r/\sigma)$       $\sigma_{\textrm{d}}(t)/\sigma$
  $1$     $2.0$                  $2.26$                                         $2.27$                                             $4.5$
  $2$     $2.4$                  $1.92$                                         $1.95$                                             $3.9$
  $3$     $2.8$                  $1.60$                                         $1.62$                                             $3.2$
  $4$     $3.2$                  $1.26$                                         $1.29$                                             $2.5$
  $5$     $3.6$                  $0.90$                                         $0.97$                                             $1.8$
  $6$     $4.0$                  $0.55$[^1]                                     $0.60\quad1.00$[^2]                                $1.1$
  $7$     $5.0$                  $\parallel$                                    $\parallel$[^3]                                    $0.0$
  ------- ---------------------- ---------------------------------------------- ------------------------------------------------- ---------------------------------

  : Peak positions of the radial distribution functions $g_{1\textrm{d}}(r)$ and $g_{2\textrm{d}}(r)$, and the instantaneous droplet diameter, at different stages of the time evolution for energy ratio $k=0.1$. \[tab:peak-position\]

We first study dumbbells built of colloids with equal diameter $\sigma_{1}=\sigma_{2}\equiv\sigma$ and different wetting properties. The parameter $\gamma _{1}$ is fixed to $100k_{\textrm{B}}T/\sigma^{2}$, while the parameter $\gamma _{2}$ is varied from $10k_{\textrm{B}}T/\sigma^{2}$ to $100k_{\textrm{B}}T/\sigma^{2}$. As a consequence, the energy ratio, Eq. (\[eqn:k\]), ranges from $k=0.1$ to $1$. In the special case of $k=1$, colloids 1 and 2 are identical. Figure \[fig:snap\] shows snapshots at two different stages of the simulation for the energy ratio $k=0.1$. After $2.5\times10^{5}$ MC cycles \[see Fig. \[fig:snap\](a)\], colloidal dumbbells are captured at the droplet surface. Figure \[fig:snap\](b) shows the final cluster configurations obtained after $10^{6}$ cycles.
Only clusters that are stable against thermal fluctuations survived and are considered for the analysis. We analyze how the colloidal dumbbells are captured by the droplet surface by means of the radial distribution functions of colloid 1-droplet, $g_{1\textrm{d}}(r)$, and colloid 2-droplet, $g_{2\textrm{d}}(r)$, defined explicitly as $g_{i\textrm{d}}(r)=\frac{dn_{i\textrm{d}}(r)}{4\pi r^{2}dr\rho_{\textrm{d}}}$ with $dn_{i\textrm{d}}(r)$ the number of droplets between distances $r$ and $r+dr$ from a colloid of species $i$ ($i=1,2$) and $\rho_{\textrm{d}}$ the average number density of droplets. In Fig. \[fig:fkt1\], we consider different stages of the time evolution, ranging from $t_1$ to $t_7$ (see Table \[tab:peak-position\] for an explanation of the symbols). Between times $t_1$ and $t_6$ the function $g_{1\textrm{d}}(r)$ (solid lines) shows only a single peak. For example, at $t_1=2\times10^{5}$ MC cycles, $g_{1\textrm{d}}(r)$ has a peak at $r\simeq 2.25\sigma$, corresponding to the instantaneous droplet radius $\sigma_{\textrm{d}}(t)/2$. The peak is due to colloid-1 spheres trapped at the droplet surface. The droplet radius decreases continuously during the modeled evaporation. As a result, the peak position at $\sigma_{\textrm{d}}(t)/2$ shifts continuously towards smaller distances. Moreover, since the number of type-1 colloids trapped on the droplet surface can increase as the particles move, the peak height of $g_{1\textrm{d}}\left ( r \right )$ increases with MC time. Finally, after $t=t_7$ ($5\times10^{5}$ MC cycles) the droplets vanish completely \[$\sigma _{\textrm{d}}(t)=0$\] and, as a result, $g_{1\textrm{d}}(r)$ stops changing. A similar trend can be observed in the radial distribution function $g_{2\textrm{d}}(r)$ (dashed lines in Fig. \[fig:fkt1\]). However, $g_{2\textrm{d}}(r)$ has two distinct peaks at $t_6=4\times10^{5}$ MC cycles.
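The estimator $g_{i\textrm{d}}(r)=dn_{i\textrm{d}}(r)/(4\pi r^{2}dr\rho_{\textrm{d}})$ amounts to a normalized histogram of colloid-droplet distances. A minimal sketch of such an estimator (our own illustration; periodic boundary conditions are omitted for brevity):

```python
import math

def rdf(colloids, droplets, volume, r_max, nbins):
    """Colloid-droplet radial distribution function:
    g(r) = dn(r) / (4 pi r^2 dr rho_d), estimated by binning distances."""
    rho_d = len(droplets) / volume
    dr = r_max / nbins
    hist = [0.0] * nbins
    for c in colloids:
        for d in droplets:
            r = math.dist(c, d)
            if r < r_max:
                hist[int(r / dr)] += 1
    g = []
    for b, count in enumerate(hist):
        r = (b + 0.5) * dr                      # bin midpoint
        shell = 4 * math.pi * r**2 * dr         # shell volume
        g.append(count / (len(colloids) * shell * rho_d))
    return g
```

In the simulations the histogram is additionally averaged over configurations and over the independent runs.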
Table \[tab:peak-position\] lists the peak positions of $g_{1\textrm{d}}(r)$ and $g_{2\textrm{d}}(r)$ and the instantaneous droplet diameter $\sigma_{\textrm{d}}(t)$ with respect to the time evolution of the system for $k=0.1$. In addition, for $k=0.1$ the peak height of $g_{1\textrm{d}}\left ( r \right )$ is always much larger than that of $g_{2\textrm{d}}\left ( r \right )$ at the same time. This means that there is a higher probability of finding type-1 colloids than type-2 colloids on the droplet surface. Figure \[fig:fkt2\] shows results for $g_{1\textrm{d}}(r)$ and $g_{2\textrm{d}}(r)$ at different energy ratios $k$ after $4.0\times 10^{5}$ MC cycles. As shown in Fig. \[fig:fkt2\](a), $g_{1\textrm{d}}(r)$ has a peak at $r\simeq \sigma _{\textrm{d}}(t)/2$ that is independent of the value of $k$. At the same time, $g_{2\textrm{d}}(r)$ \[Fig. \[fig:fkt2\](b)\] exhibits two distinct peaks, the first peak at a position coinciding with the peak of $g_{1\textrm{d}}(r)$, and the second peak (marked by an asterisk), which shifts towards the first peak with increasing $k$. Colloids trapped on the droplet surface feel the repulsive Yukawa interaction, thermal fluctuations and the colloid-droplet adsorption interaction $\Phi _{i\textrm{d}}$, $i=1,2$. For the given colloid 1-droplet interaction $\gamma _{1}=100k_{\textrm{B}}T/\sigma^{2}$, whose magnitude is much larger than the Yukawa repulsion and the thermal energy, the colloid-1 spheres cannot overcome the energy barrier to escape from the droplet surface. Meanwhile, for $k=0.1$ ($\gamma _{2}=10k_{\textrm{B}}T/\sigma^{2}$) the interaction between a trapped colloid-2 sphere and the droplet may be comparable to the Yukawa repulsion and the thermal energy. This leads some colloid-2 spheres to become separated from each other and/or released from the droplet surface, forming the second peak at a distance larger than $\sigma _{\textrm{d}}(t)/2$.
When the energy ratio $k$ increases, the binding energy between trapped colloid-2 spheres and the droplets becomes stronger, which increases the probability of finding colloid-2 spheres at a shorter radial distance from the droplet. Finally, for $k=1$, all trapped colloid-1 and colloid-2 spheres are strongly localized on the droplet surface, signalled by a single peak with a broader width (see Fig. \[fig:fkt2\]). ![Colloid 1-droplet (a) and colloid 2-droplet (b) radial distribution functions, $g_{1\textrm{d}}(r)$ and $g_{2\textrm{d}}(r)$, respectively, as a function of the scaled distance $r/\sigma$ after $t=t_6$ ($4.0\times 10^{5}$ MC cycles). Results are shown for different energy ratios $k$. An asterisk is used as a guide to the eyes to trace the shift of the second peak. Curves are shifted upwards by 40 units for clarity.[]{data-label="fig:fkt2"}](fig5a "fig:"){width="4.25cm"} ![Colloid 1-droplet (a) and colloid 2-droplet (b) radial distribution functions, $g_{1\textrm{d}}(r)$ and $g_{2\textrm{d}}(r)$, respectively, as a function of the scaled distance $r/\sigma$ after $t=t_6$ ($4.0\times 10^{5}$ MC cycles). Results are shown for different energy ratios $k$. An asterisk is used as a guide to the eyes to trace the shift of the second peak. Curves are shifted upwards by 40 units for clarity.[]{data-label="fig:fkt2"}](fig5b "fig:"){width="4.25cm"} Examples of the obtained cluster structures are shown in Fig. \[fig:cluster\](a) for $k=0.1$, Fig. \[fig:cluster\](b) for $k=0.5$ and Fig. \[fig:cluster\](c) for $k=1$. Clusters with colloid numbers between $n_c=4$ and $n_c=10$ are found. For the same number of constituent colloids $n_c$, clusters can have several distinct structures (isomers) [@Ingmar2011; @Bo2013]. It is convenient to use the bond number $n_b$ as an indicator of the compactness of the clusters. For a given value of $n_c$, the smaller the bond number $n_b$, the more open the structure. As shown in Fig.
\[fig:cluster\](a), open structures are obtained for $k=0.1$. In these isomers, the colloids of type 1 arrange themselves into symmetric structures, i.e., doublet, triplet, tetrahedron and triangular dipyramid. Increasing the energy ratio $k$, a larger number of isomers with different bond numbers $n_b$ are found. For example, for $n_c=4$ \[Fig. \[fig:cluster\](b)\] we find four different isomers with $n_b$ ranging from $3$ to $6$, corresponding to a transition from string-like clusters to more compact structures. Finally, for the special case $k=1$ the two colloidal species are identical and we find compact isomers with the largest $n_b$ \[Fig. \[fig:cluster\](c)\] such as $n_c=4,n_b=6$ (tetrahedron); $n_c=6,n_b=12$ (octahedron); $n_c=8,n_b=18$ (snub disphenoid); $n_c=10,n_b=22$ (gyroelongated square dipyramid). These structures are similar to the one-component structures that minimize the second moment of the mass distribution [@Manoharan2003; @Wittemann2010]. ![Typical cluster structures found in simulations for (a) $k=0.1$, (b) $k=0.5$ and (c) $k=1$ at the final stage of the simulations. The red and yellow colored spheres represent colloid-1 and colloid-2 spheres in each dumbbell, respectively. For each cluster with the same number of constituent colloids the bond number $n_b$ is used to distinguish whether a cluster has an open or a closed structure. The wireframe connecting the colloid centers represents the bond skeleton.[]{data-label="fig:cluster"}](fig6a "fig:"){width="8.5cm"} ![Typical cluster structures found in simulations for (a) $k=0.1$, (b) $k=0.5$ and (c) $k=1$ at the final stage of the simulations. The red and yellow colored spheres represent colloid-1 and colloid-2 spheres in each dumbbell, respectively. For each cluster with the same number of constituent colloids the bond number $n_b$ is used to distinguish whether a cluster has an open or a closed structure.
The wireframe connecting the colloid centers represents the bond skeleton.[]{data-label="fig:cluster"}](fig6b "fig:"){width="8.5cm"} ![Typical cluster structures found in simulations for (a) $k=0.1$, (b) $k=0.5$ and (c) $k=1$ at the final stage of the simulations. The red and yellow colored spheres represent colloid-1 and colloid-2 spheres in each dumbbell, respectively. For each cluster with the same number of constituent colloids the bond number $n_b$ is used to distinguish whether a cluster has an open or a closed structure. The wireframe connecting the colloid centers represents the bond skeleton.[]{data-label="fig:cluster"}](fig6c "fig:"){width="8.5cm"} Figure \[fig:hist1\] shows a stacked histogram of the number of clusters $N_{n_{c}}$ with $n_c$ colloids. The height of each differently colored bar is proportional to the number of clusters with the bond number $n_b$. For small values of $k$, a large fraction of clusters has an open structure, while for $k=1$ almost all clusters have closed structures and for $k=0.5$ a variety of intermediate structures can be found. These observations are in good agreement with our results for the colloid-droplet radial distribution functions discussed above. ![image](fig7a){width="5.9cm"} ![image](fig7b){width="5.9cm"} ![image](fig7c){width="5.9cm"}

Symmetric wetting properties and asymmetric sizes {#s:fluid2}
-------------------------------------------------

We next investigate the cluster formation of colloidal dumbbells built from spheres of different diameters, ${\sigma_{1}=1.5\sigma_{2}}$ and $\sigma_{1}=2.0\sigma_{2}$, but equal wetting properties, which are obtained by setting the interfacial tensions $\gamma_1=\gamma_2\equiv\gamma$. We investigate the values $\gamma=10$, $40$ and $100k_{\textrm{B}}T/\sigma _{2}^{2}$. We note that a size asymmetry between the colloids forming the dumbbells causes an asymmetry in the colloid-droplet adsorption energies \[Eqs. (\[eqn:phicd1\]) and (\[eqn:phicd2\])\].
For this reason, the structures found in this case are the same as those shown in Fig. \[fig:cluster\] for asymmetric wetting properties. We analyze the size distribution of the clusters. Figure \[fig:hist2\] shows stacked histograms of the number of clusters $N_{n_{c}}$ with $n_c$ colloids for different values of $\gamma$ and two different size ratios. For the case $\sigma_{1}=1.5\sigma_{2}$ and $\gamma=10k_{\textrm{B}}T/\sigma _{2}^{2}$ \[Fig. \[fig:hist2\](a) and Fig. \[fig:hist2\](d)\], all clusters have open structures with $n_b=3$. In addition, we do not find clusters with high $n_c$ \[Fig. \[fig:hist2\](a) and (d)\] because the Yukawa repulsion and the thermal fluctuations dominate over the colloid-droplet adsorption energy that would keep the colloids in a compact arrangement. On the other hand, in the case of $\gamma=40k_{\textrm{B}}T/\sigma _{2}^{2}$ \[Fig. \[fig:hist2\](b)\] we observe many clusters with bond numbers $n_b$ in the range $3-6$, corresponding to intermediate structures. Finally, when $\gamma=100k_{\textrm{B}}T/\sigma _{2}^{2}$, the adsorption energy between colloids and droplets is much larger than the total repulsive energy. Therefore, we observe mostly closed structures \[Fig. \[fig:hist2\](c)\]. At a larger size asymmetry of $\sigma_{1}=2.0\sigma_{2}$, but at the same interfacial tension $\gamma=40, 100\, k_{\textrm{B}}T/\sigma _{2}^{2}$ \[Figs. \[fig:hist2\](b),(e) and \[fig:hist2\](c),(f)\] we observe a decrease of the number of large clusters, while the yield of smaller clusters increases.
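The cluster-size and bond-number bookkeeping behind these histograms can be sketched as follows. This is a minimal reconstruction, assuming a simple center-to-center distance cutoff defines a bond; the authors' actual bond criterion may differ.

```python
import numpy as np
from collections import defaultdict

def cluster_statistics(positions, bond_cutoff):
    """Group colloids into clusters via their bond network and return a
    sorted list of (n_c, n_b) pairs: colloid number and bond number per
    cluster, the quantities used to classify open vs. compact isomers."""
    n = len(positions)
    # A bond i-j exists whenever the center-center distance is below the cutoff.
    bonds = [(i, j) for i in range(n) for j in range(i + 1, n)
             if np.linalg.norm(positions[i] - positions[j]) < bond_cutoff]
    # Union-find over the bond network.
    parent = list(range(n))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i
    for i, j in bonds:
        parent[find(i)] = find(j)
    members, n_bonds = defaultdict(int), defaultdict(int)
    for i in range(n):
        members[find(i)] += 1
    for i, j in bonds:
        n_bonds[find(i)] += 1
    roots = {find(i) for i in range(n)}
    return sorted((members[r], n_bonds[r]) for r in roots)
```

For example, four colloids at the vertices of a tetrahedron give $(n_c, n_b) = (4, 6)$, the compact isomer, while a three-colloid chain gives $(3, 2)$, an open one.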
-------------------------------- -------------------------------- -------------------------------- ![image](fig8a){width="5.9cm"} ![image](fig8b){width="5.9cm"} ![image](fig8c){width="5.9cm"} ![image](fig8d){width="5.9cm"} ![image](fig8e){width="5.9cm"} ![image](fig8f){width="5.9cm"} -------------------------------- -------------------------------- --------------------------------

Summary and Conclusions {#s:conc}
=======================

We investigated the cluster formation process of a mixture of colloidal dumbbells and droplets via emulsion droplet evaporation using Metropolis-based kinetic Monte Carlo simulations. The short-ranged attraction between colloids has a potential well depth of $9k_{\textrm{B}}T$ in order to ensure that neither dumbbells nor clusters are likely to break apart due to thermal fluctuations. In addition, the height of the repulsive barrier between colloids is about $9k_{\textrm{B}}T$, which is large enough to prevent the spontaneous formation of clusters. The droplet-droplet interaction is a hard-sphere repulsion with an effective hard-sphere diameter chosen so that two droplets cannot merge. The adsorption interaction between colloids and droplets has a minimum at the droplet surface to model the Pickering effect. In experiments, this energy has values up to millions of $k_{\textrm{B}}T$, depending on the contact angle, interfacial tension and particle size [@Aveyard2003]. In our simulations, however, we limited the colloid-droplet adsorption energy to below $100k_{\textrm{B}}T$, and the contact angle at a planar interface is $90^\circ$. In the dumbbell system with symmetric sizes, the colloid 1-droplet adsorption energy is kept at a fixed value of nearly $100k_{\textrm{B}}T$, while the colloid 2-droplet adsorption energy is controlled by changing the interfacial tension. Droplet-colloid radial distribution functions indicate that both colloid-1 and colloid-2 spheres can be captured by, and freely diffuse on, the droplet surface.
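The Metropolis acceptance rule underlying the kinetic Monte Carlo moves summarized above can be sketched generically. The energy function here is a placeholder supplied by the caller; in the simulations it would sum the Yukawa repulsion, the short-ranged attraction and the colloid-droplet adsorption terms described in the text.

```python
import math
import random

def metropolis_step(energy_fn, state, propose, beta=1.0, rng=None):
    """One Metropolis step: propose a trial configuration and accept it
    with probability min(1, exp(-beta * dE)).

    energy_fn : callable returning the total energy of a configuration
    propose   : callable (state, rng) -> trial configuration
    """
    rng = rng or random.Random()
    trial = propose(state, rng)
    d_e = energy_fn(trial) - energy_fn(state)
    if d_e <= 0 or rng.random() < math.exp(-beta * d_e):
        return trial, True   # move accepted
    return state, False      # move rejected; keep the old configuration
```

Downhill moves are always accepted, while uphill moves are accepted with the Boltzmann probability, which is what lets thermal fluctuations occasionally detach weakly bound colloids from the droplet surface.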
Choosing a smaller colloid 2-droplet energy leads to an increase of the probability of colloid-2 detachment from the droplet surface. Consistent with the typical cluster structures found in the final stage of the simulations, clusters with the same number of constituent colloids can produce a variety of different isomers. The bond number was used to assess whether an isomer is open or closed. Histograms show that a larger fraction of open isomers can be obtained by decreasing the colloid 2-droplet adsorption energy. Similar results were obtained in the asymmetric dumbbell system. Whether open, intermediate or closed structures are formed strongly depends on the interfacial tensions of both colloid 1 and colloid 2 and on their relative sizes. This results from the competition between the Yukawa repulsion, the colloid-droplet adsorption interactions and thermal fluctuations. Moreover, choosing a larger size of colloid 1 compared to colloid 2 leads to a decrease in the number of large clusters. Although closed structures have been reported in many studies of the assembly of single component spheres [@Manoharan2003; @Wittemann2010; @Ingmar2011], the open and intermediate structures found here have not yet been observed in experiments. The repulsive energy between the colloids can be controlled experimentally by tuning pH, concentration of salt, and composition of the solution [@Mani2010], while the adsorption energy can be controlled by colloid diameter and wettability [@Bernard2008]. Therefore, our results could be useful for guiding experimental work on preparing increasingly complex building blocks for the assembly of nanostructured materials.

This research was supported by a PhD grant of the Vietnamese Government Scholarship Program (Project 911).

[^1]: Peak 1

[^2]: Peak 2

[^3]: Undefined value
--- abstract: 'Using a sample of $1.31\times10^{9} ~J/\psi$ events collected with the BESIII detector, we perform a study of $J/\psi\to\gamma K\bar{K}\eta''$. The $X(2370)$ is observed in the $K\bar{K}\eta''$ invariant-mass distribution with a statistical significance of 8.3$\sigma$. Its resonance parameters are measured to be $M=2341.6\pm 6.5\text{(stat.)}\pm5.7\text{(syst.)}$ MeV/$c^{2}$ and $\Gamma = 117\pm10\text{(stat.)}\pm8\text{(syst.)}$ MeV. The product branching fractions for ${J/\psi}\to \gamma X(2370),X(2370)\to {K^{+}K^{-}}\eta''$ and ${J/\psi}\to \gamma X(2370),X(2370)\to {K_{S}^{0}K_{S}^{0}}\eta''$ are determined to be $(1.79\pm0.23\text{(stat.)}\pm0.65\text{(syst.)})\times10^{-5}$ and $(1.18\pm0.32\text{(stat.)}\pm0.39\text{(syst.)})\times10^{-5}$, respectively. No evident signal for the $X(2120)$ is observed in the $K\bar{K}\eta''$ invariant-mass distribution. The upper limits for the product branching fractions of $\mathcal{B}({J/\psi}\to\gamma X(2120)\to\gamma K^{+} K^{-} \eta'')$ and $\mathcal{B}({J/\psi}\to\gamma X(2120)\to\gamma K_{S}^{0} K_{S}^{0} \eta'')$ are determined to be $1.49\times10^{-5}$ and $6.38\times10^{-6}$ at the 90% confidence level, respectively.' author: - | M. Ablikim$^{1}$, M. N. Achasov$^{10,e}$, P. Adlarson$^{59}$, S.  Ahmed$^{15}$, M. Albrecht$^{4}$, M. Alekseev$^{58A,58C}$, A. Amoroso$^{58A,58C}$, Q. An$^{55,43}$,  Anita$^{21}$, Y. Bai$^{42}$, O. Bakina$^{27}$, R. Baldini Ferroli$^{23A}$, I. Balossino$^{24A}$, Y. Ban$^{35,l}$, K. Begzsuren$^{25}$, J. V. Bennett$^{5}$, N. Berger$^{26}$, M. Bertani$^{23A}$, D. Bettoni$^{24A}$, F. Bianchi$^{58A,58C}$, J Biernat$^{59}$, J. Bloms$^{52}$, I. Boyko$^{27}$, R. A. Briere$^{5}$, H. Cai$^{60}$, X. Cai$^{1,43}$, A. Calcaterra$^{23A}$, G. F. Cao$^{1,47}$, N. Cao$^{1,47}$, S. A. Cetin$^{46B}$, J. Chai$^{58C}$, J. F. Chang$^{1,43}$, W. L. Chang$^{1,47}$, G. Chelkov$^{27,c,d}$, D. Y. Chen$^{6}$, G. Chen$^{1}$, H. S. Chen$^{1,47}$, J. C. Chen$^{1}$, M. L. Chen$^{1,43}$, S. J. 
Chen$^{33}$, Y. B. Chen$^{1,43}$, W. Cheng$^{58C}$, G. Cibinetto$^{24A}$, F. Cossio$^{58C}$, X. F. Cui$^{34}$, H. L. Dai$^{1,43}$, J. P. Dai$^{38,i}$, X. C. Dai$^{1,47}$, A. Dbeyssi$^{15}$, D. Dedovich$^{27}$, Z. Y. Deng$^{1}$, A. Denig$^{26}$, I. Denysenko$^{27}$, M. Destefanis$^{58A,58C}$, F. De Mori$^{58A,58C}$, Y. Ding$^{31}$, C. Dong$^{34}$, J. Dong$^{1,43}$, L. Y. Dong$^{1,47}$, M. Y. Dong$^{1,43,47}$, Z. L. Dou$^{33}$, S. X. Du$^{63}$, J. Z. Fan$^{45}$, J. Fang$^{1,43}$, S. S. Fang$^{1,47}$, Y. Fang$^{1}$, R. Farinelli$^{24A,24B}$, L. Fava$^{58B,58C}$, F. Feldbauer$^{4}$, G. Felici$^{23A}$, C. Q. Feng$^{55,43}$, M. Fritsch$^{4}$, C. D. Fu$^{1}$, Y. Fu$^{1}$, X. L. Gao$^{55,43}$, Y. Gao$^{56}$, Y. Gao$^{35,l}$, Y. G. Gao$^{6}$, Z. Gao$^{55,43}$, I. Garzia$^{24A,24B}$, E. M. Gersabeck$^{50}$, A. Gilman$^{51}$, K. Goetzen$^{11}$, L. Gong$^{34}$, W. X. Gong$^{1,43}$, W. Gradl$^{26}$, M. Greco$^{58A,58C}$, L. M. Gu$^{33}$, M. H. Gu$^{1,43}$, S. Gu$^{2}$, Y. T. Gu$^{13}$, A. Q. Guo$^{22}$, L. B. Guo$^{32}$, R. P. Guo$^{36}$, Y. P. Guo$^{26}$, Y. P. Guo$^{9,j}$, A. Guskov$^{27}$, S. Han$^{60}$, X. Q. Hao$^{16}$, F. A. Harris$^{48}$, K. L. He$^{1,47}$, F. H. Heinsius$^{4}$, T. Held$^{4}$, Y. K. Heng$^{1,43,47}$, M. Himmelreich$^{11,h}$, T. Holtmann$^{4}$, Y. R. Hou$^{47}$, Z. L. Hou$^{1}$, H. M. Hu$^{1,47}$, J. F. Hu$^{38,i}$, T. Hu$^{1,43,47}$, Y. Hu$^{1}$, G. S. Huang$^{55,43}$, J. S. Huang$^{16}$, X. T. Huang$^{37}$, X. Z. Huang$^{33}$, N. Huesken$^{52}$, T. Hussain$^{57}$, W. Ikegami Andersson$^{59}$, W. Imoehl$^{22}$, M. Irshad$^{55,43}$, S. Jaeger$^{4}$, Q. Ji$^{1}$, Q. P. Ji$^{16}$, X. B. Ji$^{1,47}$, X. L. Ji$^{1,43}$, H. B. Jiang$^{37}$, X. S. Jiang$^{1,43,47}$, X. Y. Jiang$^{34}$, J. B. Jiao$^{37}$, Z. Jiao$^{18}$, D. P. Jin$^{1,43,47}$, S. Jin$^{33}$, Y. Jin$^{49}$, T. Johansson$^{59}$, N. Kalantar-Nayestanaki$^{29}$, X. S. Kang$^{31}$, R. Kappert$^{29}$, M. Kavatsyuk$^{29}$, B. C. Ke$^{1}$, I. K. Keshk$^{4}$, A. Khoukaz$^{52}$, P.  Kiese$^{26}$, R. 
Kiuchi$^{1}$, R. Kliemt$^{11}$, L. Koch$^{28}$, O. B. Kolcu$^{46B,g}$, B. Kopf$^{4}$, M. Kuemmel$^{4}$, M. Kuessner$^{4}$, A. Kupsc$^{59}$, M.  G. Kurth$^{1,47}$, W. Kühn$^{28}$, J. S. Lange$^{28}$, P.  Larin$^{15}$, L. Lavezzi$^{58C}$, H. Leithoff$^{26}$, T. Lenz$^{26}$, C. Li$^{59}$, Cheng Li$^{55,43}$, D. M. Li$^{63}$, F. Li$^{1,43}$, G. Li$^{1}$, H. B. Li$^{1,47}$, H. J. Li$^{9,j}$, J. C. Li$^{1}$, J. W. Li$^{41}$, Ke Li$^{1}$, L. K. Li$^{1}$, Lei Li$^{3}$, P. L. Li$^{55,43}$, P. R. Li$^{30}$, Q. Y. Li$^{37}$, S. Y. Li$^{45}$, W. D. Li$^{1,47}$, W. G. Li$^{1}$, X. H. Li$^{55,43}$, X. L. Li$^{37}$, X. N. Li$^{1,43}$, Z. B. Li$^{44}$, Z. Y. Li$^{44}$, H. Liang$^{55,43}$, H. Liang$^{1,47}$, Y. F. Liang$^{40}$, Y. T. Liang$^{28}$, G. R. Liao$^{12}$, L. Z. Liao$^{1,47}$, J. Libby$^{21}$, C. X. Lin$^{44}$, D. X. Lin$^{15}$, Y. J. Lin$^{13}$, B. Liu$^{38,i}$, B. J. Liu$^{1}$, C. X. Liu$^{1}$, D. Liu$^{55,43}$, D. Y. Liu$^{38,i}$, F. H. Liu$^{39}$, Fang Liu$^{1}$, Feng Liu$^{6}$, H. B. Liu$^{13}$, H. M. Liu$^{1,47}$, Huanhuan Liu$^{1}$, Huihui Liu$^{17}$, J. B. Liu$^{55,43}$, J. Y. Liu$^{1,47}$, K. Liu$^{1}$, K. Y. Liu$^{31}$, Ke Liu$^{6}$, L. Liu$^{55,43}$, L. Y. Liu$^{13}$, Q. Liu$^{47}$, S. B. Liu$^{55,43}$, T. Liu$^{1,47}$, X. Liu$^{30}$, X. Y. Liu$^{1,47}$, Y. B. Liu$^{34}$, Z. A. Liu$^{1,43,47}$, Z. Q. Liu$^{37}$, Y.  F. Long$^{35,l}$, X. C. Lou$^{1,43,47}$, H. J. Lu$^{18}$, J. D. Lu$^{1,47}$, J. G. Lu$^{1,43}$, Y. Lu$^{1}$, Y. P. Lu$^{1,43}$, C. L. Luo$^{32}$, M. X. Luo$^{62}$, P. W. Luo$^{44}$, T. Luo$^{9,j}$, X. L. Luo$^{1,43}$, S. Lusso$^{58C}$, X. R. Lyu$^{47}$, F. C. Ma$^{31}$, H. L. Ma$^{1}$, L. L.  Ma$^{37}$, M. M. Ma$^{1,47}$, Q. M. Ma$^{1}$, X. N. Ma$^{34}$, X. X. Ma$^{1,47}$, X. Y. Ma$^{1,43}$, Y. M. Ma$^{37}$, F. E. Maas$^{15}$, M. Maggiora$^{58A,58C}$, S. Maldaner$^{26}$, S. Malde$^{53}$, Q. A. Malik$^{57}$, A. Mangoni$^{23B}$, Y. J. Mao$^{35,l}$, Z. P. Mao$^{1}$, S. Marcello$^{58A,58C}$, Z. X. Meng$^{49}$, J. G. Messchendorp$^{29}$, G. 
Mezzadri$^{24A}$, J. Min$^{1,43}$, T. J. Min$^{33}$, R. E. Mitchell$^{22}$, X. H. Mo$^{1,43,47}$, Y. J. Mo$^{6}$, C. Morales Morales$^{15}$, N. Yu. Muchnoi$^{10,e}$, H. Muramatsu$^{51}$, A. Mustafa$^{4}$, S. Nakhoul$^{11,h}$, Y. Nefedov$^{27}$, F. Nerling$^{11,h}$, I. B. Nikolaev$^{10,e}$, Z. Ning$^{1,43}$, S. Nisar$^{8,k}$, S. L. Olsen$^{47}$, Q. Ouyang$^{1,43,47}$, S. Pacetti$^{23B}$, Y. Pan$^{55,43}$, M. Papenbrock$^{59}$, P. Patteri$^{23A}$, M. Pelizaeus$^{4}$, H. P. Peng$^{55,43}$, K. Peters$^{11,h}$, J. Pettersson$^{59}$, J. L. Ping$^{32}$, R. G. Ping$^{1,47}$, A. Pitka$^{4}$, R. Poling$^{51}$, V. Prasad$^{55,43}$, H. R. Qi$^{2}$, H. R. Qi$^{45}$, M. Qi$^{33}$, T. Y. Qi$^{2}$, S. Qian$^{1,43}$, C. F. Qiao$^{47}$, N. Qin$^{60}$, X. P. Qin$^{13}$, X. S. Qin$^{4}$, Z. H. Qin$^{1,43}$, J. F. Qiu$^{1}$, S. Q. Qu$^{34}$, K. H. Rashid$^{57}$, K. Ravindran$^{21}$, C. F. Redmer$^{26}$, M. Richter$^{4}$, A. Rivetti$^{58C}$, V. Rodin$^{29}$, M. Rolo$^{58C}$, G. Rong$^{1,47}$, Ch. Rosner$^{15}$, M. Rump$^{52}$, A. Sarantsev$^{27,f}$, M. Savrié$^{24B}$, Y. Schelhaas$^{26}$, C. Schnier$^{4}$, K. Schoenning$^{59}$, W. Shan$^{19}$, X. Y. Shan$^{55,43}$, M. Shao$^{55,43}$, C. P. Shen$^{2}$, P. X. Shen$^{34}$, X. Y. Shen$^{1,47}$, H. Y. Sheng$^{1}$, X. Shi$^{1,43}$, X. D Shi$^{55,43}$, J. J. Song$^{37}$, Q. Q. Song$^{55,43}$, X. Y. Song$^{1}$, S. Sosio$^{58A,58C}$, C. Sowa$^{4}$, S. Spataro$^{58A,58C}$, F. F.  Sui$^{37}$, G. X. Sun$^{1}$, J. F. Sun$^{16}$, L. Sun$^{60}$, S. S. Sun$^{1,47}$, Y. J. Sun$^{55,43}$, Y. K Sun$^{55,43}$, Y. Z. Sun$^{1}$, Z. J. Sun$^{1,43}$, Z. T. Sun$^{1}$, Y. X. Tan$^{55,43}$, C. J. Tang$^{40}$, G. Y. Tang$^{1}$, X. Tang$^{1}$, V. Thoren$^{59}$, B. Tsednee$^{25}$, I. Uman$^{46D}$, B. Wang$^{1}$, B. L. Wang$^{47}$, C. W. Wang$^{33}$, D. Y. Wang$^{35,l}$, K. Wang$^{1,43}$, L. L. Wang$^{1}$, L. S. Wang$^{1}$, M. Wang$^{37}$, M. Z. Wang$^{35,l}$, Meng Wang$^{1,47}$, P. L. Wang$^{1}$, W. P. Wang$^{55,43}$, X. Wang$^{35,l}$, X. F. Wang$^{1}$, X. L. 
Wang$^{9,j}$, Y. Wang$^{55,43}$, Y. Wang$^{44}$, Y. D. Wang$^{15}$, Y. F. Wang$^{1,43,47}$, Y. Q. Wang$^{1}$, Z. Wang$^{1,43}$, Z. G. Wang$^{1,43}$, Z. Y. Wang$^{1}$, Zongyuan Wang$^{1,47}$, T. Weber$^{4}$, D. H. Wei$^{12}$, P. Weidenkaff$^{26}$, F. Weidner$^{52}$, H. W. Wen$^{32,a}$, S. P. Wen$^{1}$, U. Wiedner$^{4}$, G. Wilkinson$^{53}$, M. Wolke$^{59}$, L. H. Wu$^{1}$, L. J. Wu$^{1,47}$, Z. Wu$^{1,43}$, L. Xia$^{55,43}$, S. Y. Xiao$^{1}$, Y. J. Xiao$^{1,47}$, Z. J. Xiao$^{32}$, Y. G. Xie$^{1,43}$, Y. H. Xie$^{6}$, T. Y. Xing$^{1,47}$, X. A. Xiong$^{1,47}$, G. F. Xu$^{1}$, J. J. Xu$^{33}$, Q. J. Xu$^{14}$, W. Xu$^{1,47}$, X. P. Xu$^{41}$, F. Yan$^{56}$, L. Yan$^{58A,58C}$, L. Yan$^{9,j}$, W. B. Yan$^{55,43}$, W. C. Yan$^{2}$, W. C. Yan$^{63}$, H. J. Yang$^{38,i}$, H. X. Yang$^{1}$, L. Yang$^{60}$, R. X. Yang$^{55,43}$, S. L. Yang$^{1,47}$, Y. H. Yang$^{33}$, Y. X. Yang$^{12}$, Yifan Yang$^{1,47}$, M. Ye$^{1,43}$, M. H. Ye$^{7}$, J. H. Yin$^{1}$, Z. Y. You$^{44}$, B. X. Yu$^{1,43,47}$, C. X. Yu$^{34}$, J. S. Yu$^{20}$, T. Yu$^{56}$, C. Z. Yuan$^{1,47}$, X. Q. Yuan$^{35,l}$, Y. Yuan$^{1}$, A. Yuncu$^{46B,b}$, A. A. Zafar$^{57}$, Y. Zeng$^{20}$, B. X. Zhang$^{1}$, B. Y. Zhang$^{1,43}$, C. C. Zhang$^{1}$, D. H. Zhang$^{1}$, H. H. Zhang$^{44}$, H. Y. Zhang$^{1,43}$, J. Zhang$^{1,47}$, J. L. Zhang$^{61}$, J. Q. Zhang$^{4}$, J. W. Zhang$^{1,43,47}$, J. Y. Zhang$^{1}$, J. Z. Zhang$^{1,47}$, K. Zhang$^{1,47}$, L. Zhang$^{1}$, Lei Zhang$^{33}$, S. F. Zhang$^{33}$, T. J. Zhang$^{38,i}$, X. Y. Zhang$^{37}$, Y. H. Zhang$^{1,43}$, Y. T. Zhang$^{55,43}$, Yan Zhang$^{55,43}$, Yao Zhang$^{1}$, Yi Zhang$^{9,j}$, Yu Zhang$^{47}$, Z. H. Zhang$^{6}$, Z. P. Zhang$^{55}$, Z. Y. Zhang$^{60}$, G. Zhao$^{1}$, J. W. Zhao$^{1,43}$, J. Y. Zhao$^{1,47}$, J. Z. Zhao$^{1,43}$, Lei Zhao$^{55,43}$, Ling Zhao$^{1}$, M. G. Zhao$^{34}$, Q. Zhao$^{1}$, S. J. Zhao$^{63}$, T. C. Zhao$^{1}$, Y. B. Zhao$^{1,43}$, Z. G. Zhao$^{55,43}$, A. Zhemchugov$^{27,c}$, B. Zheng$^{56}$, J. P. Zheng$^{1,43}$, Y. 
Zheng$^{35,l}$, Y. H. Zheng$^{47}$, B. Zhong$^{32}$, L. Zhou$^{1,43}$, L. P. Zhou$^{1,47}$, Q. Zhou$^{1,47}$, X. Zhou$^{60}$, X. K. Zhou$^{47}$, X. R. Zhou$^{55,43}$, A. N. Zhu$^{1,47}$, J. Zhu$^{34}$, K. Zhu$^{1}$, K. J. Zhu$^{1,43,47}$, S. H. Zhu$^{54}$, W. J. Zhu$^{34}$, X. L. Zhu$^{45}$, Y. C. Zhu$^{55,43}$, Y. S. Zhu$^{1,47}$, Z. A. Zhu$^{1,47}$, J. Zhuang$^{1,43}$, B. S. Zou$^{1}$, J. H. Zou$^{1}$\ (BESIII Collaboration)\ title: 'Observation of the $X(2370)$ and search for the $X(2120)$ in $J/\psi\to\gamma K\bar{K} \eta''$'
---

INTRODUCTION
============

Quantum chromodynamics (QCD), a non-Abelian gauge field theory, predicts the existence of new types of hadrons with explicit gluonic degrees of freedom (e.g., glueballs, hybrids) [@bibg1; @bibg2; @bibg3]. The search for glueballs is an important field of research in hadron physics. It is, however, challenging, since possible mixing of pure glueball states with nearby $q\bar{q}$ nonet mesons makes the identification of glueballs difficult in both experiment and theory. Lattice QCD (LQCD) predicts the lowest-lying glueballs to be scalar (mass 1.5$-$1.7 GeV/$c^2$), tensor (mass 2.3$-$2.4 GeV/$c^2$), and pseudoscalar (mass 2.3$-$2.6 GeV/$c^2$) [@bib3]. Radiative $J/\psi$ decay is a gluon-rich process and is therefore regarded as one of the most promising hunting grounds for glueballs [@bibjpsi1; @bibjpsi2]. Recently, three states, the $X(1835)$, $X(2120)$ and $X(2370)$, were observed by the BESIII experiment in the $\pi^{+}\pi^{-}\eta'$ invariant-mass distribution in the decay $J/\psi\to\gamma\pi^{+}\pi^{-}\eta'$ with statistical significances larger than 20$\sigma$, 7.2$\sigma$ and 6.4$\sigma$, respectively [@PRL1]. The measured mass of the $X(2370)$ is consistent with the pseudoscalar glueball candidate predicted by LQCD calculations [@bib3].
In the case of a pseudoscalar glueball, the branching fractions of the $X(2370)$ decaying into $KK\eta'$ and $\pi\pi\eta'$ are predicted to be 0.011 and 0.090 [@PRD1], respectively, based on calculations using the chiral effective Lagrangian. Studying the decays of these glueball-candidate $X$ states to $K\bar{K}\eta'$ helps to identify their nature. In this paper, the $X(2370)$ as well as the $X(2120)$ are studied via the decays $J/\psi\to\gamma K^{+}K^{-}\eta'$ and $J/\psi\to\gamma K_{S}^{0}K_{S}^{0}\eta'$ ($K_{S}^{0}\to\pi^{+}\pi^{-}$) using (1310.6$\pm$7.0)$\times$10$^6$ $J/\psi$ decays [@jpsinumber] collected with the BESIII detector in 2009 and 2012. Two $\eta'$ decay modes are used, namely $\eta'\to\gamma\rho^{0}(\rho^{0}\to\pi^{+}\pi^{-})$ and $\eta'\to\pi^{+}\pi^{-}\eta(\eta\to\gamma\gamma)$.

DETECTOR AND MONTE CARLO SIMULATIONS
====================================

The BESIII detector is a magnetic spectrometer [@Ablikim:2009aa] located at the Beijing Electron Positron Collider II (BEPCII) [@Yu:IPAC2016-TUYA01]. The cylindrical core of the BESIII detector consists of a helium-based multilayer drift chamber (MDC), a plastic scintillator time-of-flight system (TOF), and a CsI(Tl) electromagnetic calorimeter (EMC), which are all enclosed in a superconducting solenoidal magnet providing a 1.0 T (0.9 T in 2012) magnetic field. The solenoid is supported by an octagonal flux-return yoke with resistive plate counter muon identifier modules interleaved with steel. The acceptance for charged particles and photons is 93% of the $4\pi$ solid angle. The charged-particle momentum resolution at $1~{\rm GeV}/c$ is $0.5\%$, and the $dE/dx$ resolution is $6\%$ for electrons from Bhabha scattering. The EMC measures photon energies with a resolution of $2.5\%$ ($5\%$) at $1$ GeV in the barrel (end cap) region. The time resolution of the TOF barrel part is 68 ps, while that of the end cap part is 110 ps.
Simulated samples produced with the [geant4]{}-based [@geant4] Monte Carlo (MC) package, which includes the geometric description of the BESIII detector and the detector response, are used to determine the detection efficiency and to estimate the backgrounds. The simulation includes the beam energy spread and initial-state radiation (ISR) in the $e^+e^-$ annihilations modeled with the generator [kkmc]{} [@ref:kkmc]. The inclusive MC sample consists of the production of the $J/\psi$ resonance and the continuum processes incorporated in [kkmc]{} [@ref:kkmc]. The known decay modes are modeled with [evtgen]{} [@ref:evtgen] using branching fractions taken from the Particle Data Group [@pdg], and the remaining unknown decays from the charmonium states are generated with [lundcharm]{} [@ref:lundcharm]. Final-state radiation (FSR) from charged final-state particles is incorporated with the [photos]{} package [@photos]. Background is studied using a sample of $1.2\times 10^{9}$ simulated ${J/\psi}$ events. Phase-space (PHSP) MC samples of ${J/\psi}\to\gamma {K^{+}K^{-}}\eta'$ and ${J/\psi}\to\gamma {K_{S}^{0}K_{S}^{0}}\eta'$ are generated to describe the nonresonant contribution. To estimate the selection efficiency and to optimize the selection criteria, signal MC events are generated for the ${J/\psi}\to\gamma X(2120)/X(2370) \to\gamma{K^{+}K^{-}}\eta'$ and ${J/\psi}\to\gamma X(2120)/X(2370)\to\gamma{K_{S}^{0}K_{S}^{0}}\eta'$ channels, respectively. The polar angle of the photon in the ${J/\psi}$ center-of-mass system, $\theta_{\gamma}$, follows a $1+ \mathrm{cos}^{2}\theta_{\gamma}$ distribution. For the process $\eta'\to\gamma\rho^{0}, \rho^{0}\to\pi^{+}\pi^{-}$, a generator taking into account both the $\rho - \omega$ interference and the box anomaly is used [@gammapipiDIY]. The analysis is performed in the framework of the BESIII offline software system (BOSS) [@ref:boss], which incorporates the detector calibration, event reconstruction and data storage.
EVENT SELECTION
===============

Charged-particle tracks in the polar angle range $|\cos\theta| < 0.93$ are reconstructed from hits in the MDC. Tracks (excluding those from $K_{S}^{0}$ decays) are required to extrapolate to within 10 cm of the interaction point along the beam direction and 1 cm in the plane perpendicular to the beam. The combined information from energy-loss ($dE/dx$) measurements in the MDC and time in the TOF is used to obtain confidence levels for particle identification (PID) under the $\pi$, $K$ and $p$ hypotheses. For the ${J/\psi}\to\gamma{K^{+}K^{-}}\eta'$ decay, each track is assigned the particle type corresponding to the highest confidence level; candidate events are required to have four charged tracks with zero net charge, with two oppositely charged tracks identified as kaons and the other two identified as pions. For the ${J/\psi}\to\gamma{K_{S}^{0}K_{S}^{0}}\eta'$ decay, each track is assumed to be a pion and no PID restrictions are applied; candidate events are required to have six charged tracks with zero net charge. ${K_{S}^{0}}$ candidates are reconstructed from a secondary vertex fit to all ${\pi^{+}\pi^{-}}$ pairs, and each ${K_{S}^{0}}$ candidate is required to satisfy $|M_{{\pi^{+}}{\pi^{-}}}-m_{{K_{S}^{0}}}|<$ 9 MeV/$c^{2}$, where $m_{{K_{S}^{0}}}$ is the nominal mass of the ${K_{S}^{0}}$ [@pdg]. The reconstructed ${K_{S}^{0}}$ candidates are used as input for the subsequent kinematic fit. Photon candidates are required to have an energy deposition above 25 MeV in the barrel region ($|\cos\theta|<0.80$) and 50 MeV in the end cap ($0.86<|\cos\theta|<0.92$). To exclude showers from charged tracks, the angle between the shower position and the charged tracks extrapolated to the EMC must be greater than $5^{\circ}$. A timing requirement in the EMC is used to suppress electronic noise and energy deposits unrelated to the event.
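Selection requirements of this kind reduce, in code, to a sequence of window cuts and vetoes on invariant masses. The sketch below illustrates this for the $\gamma\rho^{0}$ channel, using the cut values quoted in this section; it is a schematic reconstruction, not the analysis code, and the rounded PDG masses are our own inputs.

```python
# Nominal masses in GeV/c^2 (rounded PDG values; illustrative).
M_PI0, M_ETA, M_ETAP = 0.1350, 0.5479, 0.9578

def passes_selection(m_gamma_gamma, m_pipi, m_gamma_pipi,
                     chi2_4c, chi2_cut=25.0):
    """Schematic gamma-rho^0 channel cuts: 4C-fit chi^2 requirement,
    pi0/eta vetoes on the photon pair, rho^0 window on the pion pair
    and eta' window on the gamma-pi+pi- system (masses in GeV/c^2)."""
    if chi2_4c >= chi2_cut:
        return False
    if abs(m_gamma_gamma - M_PI0) < 0.030:    # pi0 veto
        return False
    if abs(m_gamma_gamma - M_ETA) < 0.030:    # eta veto
        return False
    if not (0.55 < m_pipi < 0.85):            # rho^0 window
        return False
    return abs(m_gamma_pipi - M_ETAP) < 0.020  # eta' window
```

For instance, a candidate with $M_{\gamma\gamma}=0.40$ GeV/$c^{2}$, $M_{\pi^{+}\pi^{-}}=0.77$ GeV/$c^{2}$, $M_{\gamma\pi^{+}\pi^{-}}=0.958$ GeV/$c^{2}$ and $\chi^{2}_{4C}=10$ passes, while the same candidate with $M_{\gamma\gamma}$ near the $\pi^{0}$ mass is vetoed.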
At least two (three) photons are required for the $\eta'\to\gamma\rho^{0}$ ($\eta'\to{\pi^{+}\pi^{-}\eta}$) mode. For the ${J/\psi}\to\gamma{K^{+}K^{-}}\eta' (\eta'\to\gamma\rho^{0})$ channel, a four-constraint (4C) kinematic fit is performed, requiring the total energy and each momentum component to be conserved under the hypothesis of ${J/\psi}\to\gamma\gamma{K^{+}K^{-}}{\pi^{+}}{\pi^{-}}$. For events with more than two photon candidates, the combination with the minimum $\chi_{4C}^{2}$ is selected, and $\chi_{4C}^{2}<$ 25 is required. Events with $|M_{\gamma\gamma} - m_{\pi^{0}}| <$ 30 MeV/$c^{2}$ or $|M_{\gamma\gamma} - m_{\eta}| <$ 30 MeV/$c^{2}$ are rejected to suppress background containing a $\pi^{0}$ or $\eta$, where $m_{\pi^{0}}$ and $m_{\eta}$ are the nominal masses of the $\pi^{0}$ and $\eta$ [@pdg]. A clear $\eta'$ signal is observed in the invariant-mass distribution of ${\gamma\pi^{+}\pi^{-}}$ ($M_{\gamma\pi^{+}\pi^{-}}$), as shown in Fig. \[selectkketap\](a). Candidates for the $\rho$ and $\eta'$ are reconstructed from the ${\pi^{+}\pi^{-}}$ and ${\gamma\pi^{+}\pi^{-}}$ combinations with 0.55 GeV/$c^{2} < M_{\pi^{+}\pi^{-}} < $ 0.85 GeV/$c^{2}$ and $|M_{\gamma\pi^{+}\pi^{-}} - m_{\eta'}| <$ 20 MeV/$c^{2}$, where $m_{\eta'}$ is the nominal mass of the $\eta'$ [@pdg], respectively. If more than one combination satisfies the selection criteria, the combination with $M_{\gamma\pi^{+}\pi^{-}}$ closest to $m_{\eta'}$ is selected. After applying the above requirements, we obtain the invariant-mass distribution of ${K^{+}K^{-}}\eta'$ ($M_{{K^{+}K^{-}}\eta'}$) shown in Fig. \[selectkketap\](b). ![Invariant-mass distributions for the selected candidates of ${J/\psi}\to\gamma{K^{+}K^{-}}\eta'$.
Plots (a) and (b) are invariant-mass distributions of ${\gamma\pi^{+}\pi^{-}}$ and ${K^{+}K^{-}}\eta'$ for $\eta'\to\gamma\rho^{0}$, $\rho^{0}\to\pi^{+}\pi^{-}$, respectively; plots (c) and (d) are the invariant-mass distributions of ${\pi^{+}\pi^{-}\eta}$ and ${K^{+}K^{-}}\eta'$ for $\eta'\to{\pi^{+}\pi^{-}\eta}$, $\eta\to\gamma\gamma$, respectively. The dots with error bars correspond to data and the histograms are the results of PHSP MC simulations (arbitrary normalization).[]{data-label="selectkketap"}](Fig01a.eps "fig:"){width="24.00000%"}(-30,85)[[(a)]{}]{} ![Invariant-mass distributions for the selected candidates of ${J/\psi}\to\gamma{K^{+}K^{-}}\eta'$. Plots (a) and (b) are invariant-mass distributions of ${\gamma\pi^{+}\pi^{-}}$ and ${K^{+}K^{-}}\eta'$ for $\eta'\to\gamma\rho^{0}$, $\rho^{0}\to\pi^{+}\pi^{-}$, respectively; plots (c) and (d) are the invariant-mass distributions of ${\pi^{+}\pi^{-}\eta}$ and ${K^{+}K^{-}}\eta'$ for $\eta'\to{\pi^{+}\pi^{-}\eta}$, $\eta\to\gamma\gamma$, respectively. The dots with error bars correspond to data and the histograms are the results of PHSP MC simulations (arbitrary normalization).[]{data-label="selectkketap"}](Fig01b.eps "fig:"){width="24.00000%"}(-30,85)[[(b)]{}]{} -0.03cm ![Invariant-mass distributions for the selected candidates of ${J/\psi}\to\gamma{K^{+}K^{-}}\eta'$. Plots (a) and (b) are invariant-mass distributions of ${\gamma\pi^{+}\pi^{-}}$ and ${K^{+}K^{-}}\eta'$ for $\eta'\to\gamma\rho^{0}$, $\rho^{0}\to\pi^{+}\pi^{-}$, respectively; plots (c) and (d) are the invariant-mass distributions of ${\pi^{+}\pi^{-}\eta}$ and ${K^{+}K^{-}}\eta'$ for $\eta'\to{\pi^{+}\pi^{-}\eta}$, $\eta\to\gamma\gamma$, respectively. 
The dots with error bars correspond to data and the histograms are the results of PHSP MC simulations (arbitrary normalization).[]{data-label="selectkketap"}](Fig01c.eps "fig:"){width="24.00000%"}(-30,85)[[(c)]{}]{} ![Invariant-mass distributions for the selected candidates of ${J/\psi}\to\gamma{K^{+}K^{-}}\eta'$. Plots (a) and (b) are invariant-mass distributions of ${\gamma\pi^{+}\pi^{-}}$ and ${K^{+}K^{-}}\eta'$ for $\eta'\to\gamma\rho^{0}$, $\rho^{0}\to\pi^{+}\pi^{-}$, respectively; plots (c) and (d) are the invariant-mass distributions of ${\pi^{+}\pi^{-}\eta}$ and ${K^{+}K^{-}}\eta'$ for $\eta'\to{\pi^{+}\pi^{-}\eta}$, $\eta\to\gamma\gamma$, respectively. The dots with error bars correspond to data and the histograms are the results of PHSP MC simulations (arbitrary normalization).[]{data-label="selectkketap"}](Fig01d.eps "fig:"){width="24.00000%"}(-30,85)[[(d)]{}]{} To reduce background and to improve the mass resolution of the ${J/\psi}\to\gamma{K^{+}K^{-}}\eta' (\eta'\to{\pi^{+}\pi^{-}\eta})$ channel, a five-constraint (5C) kinematic fit is performed, whereby the total four-momentum of the final-state particles is constrained to the total initial four-momentum of the colliding beams and the invariant mass of the two photons from the decay of the $\eta$ is constrained to the nominal $\eta$ mass. If there are more than three photon candidates, the combination with the minimum $\chi_{5C}^{2}$ is retained, and $\chi_{5C}^{2} < $ 45 is required. To suppress background from $\pi^{0}\to\gamma\gamma$, $|M_{\gamma\gamma} - m_{\pi^{0}}| >$ 30 MeV/$c^{2}$ is required for all photon pairs. The $\eta'$ candidates are formed from the ${\pi^{+}\pi^{-}\eta}$ combination satisfying $|M_{\pi^{+}\pi^{-}\eta} - m_{\eta'}| <$ 15 MeV/$c^{2}$, where $M_{\pi^{+}\pi^{-}\eta}$ is the invariant mass of $\pi^{+}\pi^{-}\eta$, as shown in Fig. \[selectkketap\](c).
After applying the mass restrictions, we obtain the invariant-mass distribution of ${K^{+}K^{-}}\eta'$($\eta'\to{\pi^{+}\pi^{-}\eta}$) as shown in Fig. \[selectkketap\](d).

![Invariant-mass distributions for the selected ${J/\psi}\to\gamma{K_{S}^{0}K_{S}^{0}}\eta'$ candidate events. (a) and (b) are the invariant-mass distributions of ${\gamma\pi^{+}\pi^{-}}$ and ${K_{S}^{0}K_{S}^{0}}\eta'$ for $\eta'\to\gamma\rho^{0}$, $\rho^{0}\to\pi^{+}\pi^{-}$, respectively; (c) and (d) are the invariant-mass distributions of ${\pi^{+}\pi^{-}\eta}$ and ${K_{S}^{0}K_{S}^{0}}\eta'$ for $\eta'\to{\pi^{+}\pi^{-}\eta}$, $\eta\to\gamma\gamma$, respectively. The dots with error bars represent the data and the histograms are the results of PHSP MC simulations (arbitrary normalization). Panels (a)-(d) correspond to Fig02a.eps-Fig02d.eps.[]{data-label="selectksksetap"}](Fig02a.eps){width="24.00000%"}

![image](Fig03a.eps){width="45.00000%"}(-30,145)[[(a)]{}]{} ![image](Fig03b.eps){width="45.00000%"}(-30,145)[[(b)]{}]{}
![image](Fig03c.eps){width="45.00000%"}(-30,145)[[(c)]{}]{} ![image](Fig03d.eps){width="45.00000%"}(-30,145)[[(d)]{}]{}

For the ${J/\psi}\to\gamma{K_{S}^{0}K_{S}^{0}}\eta' (\eta'\to\gamma\rho^{0})$ channel, the $\gamma \gamma {K_{S}^{0}K_{S}^{0}}\pi^+\pi^-$ candidates are subjected to a 4C kinematic fit. For events with more than two photons or more than two $K^{0}_{S}$ candidates, the combination with the smallest $\chi^{2}_{4C}$ is retained, and $\chi^{2}_{4C} < 45$ is required.
To suppress background events containing a $\pi^{0}$ or $\eta$, events with $|M_{\gamma\gamma} - m_{\pi^{0}}| <$ 30 MeV/$c^{2}$ or $|M_{\gamma\gamma} - m_{\eta}| <$ 30 MeV/$c^{2}$ are rejected. The ${\pi^{+}\pi^{-}}$ invariant mass is required to be in the $\rho$ mass region, 0.55 GeV/$c^{2} < M_{\pi^{+}\pi^{-}} < $ 0.85 GeV/$c^{2}$, and $|M_{\gamma\pi^{+}\pi^{-}} - m_{\eta'}| <$ 20 MeV/$c^{2}$ is applied to select the $\eta'$ signal. If more than one $\gamma\pi^{+}\pi^{-}$ combination is obtained, the combination with $M_{\gamma\pi^{+}\pi^{-}}$ closest to $m_{\eta'}$ is selected, as shown in Fig. \[selectksksetap\](a). After applying the above requirements, we obtain the ${K_{S}^{0}K_{S}^{0}}\eta'$($\eta'\to\gamma\rho^{0}$) invariant-mass spectrum as illustrated in Fig. \[selectksksetap\](b). Candidate events of the ${J/\psi}\to\gamma{K_{S}^{0}K_{S}^{0}}\eta'$ $(\eta'\to{\pi^{+}\pi^{-}\eta})$ channel are subjected to a 5C kinematic fit, similar to that for the ${J/\psi}\to\gamma{K^{+}K^{-}}\eta'$ $(\eta'\to{\pi^{+}\pi^{-}\eta})$ mode. If there are more than three photons or more than two $K^{0}_{S}$ candidates, only the combination with the minimum $\chi_{5C}^{2}$ is selected and $\chi_{5C}^{2} < 50$ is required. To reduce the combinatorial background from $\pi^{0}\to\gamma\gamma$ events, $|M_{\gamma\gamma} - m_{\pi^{0}}| >$ 30 MeV/$c^{2}$ is required for all photon pairs. To select the $\eta'$ signal, the ${\pi^{+}\pi^{-}\eta}$ combination satisfying $|M_{\pi^{+}\pi^{-}\eta} - m_{\eta'}| <$ 15 MeV/$c^{2}$ is required, as shown in Fig. \[selectksksetap\](c). After applying the above selection criteria, we obtain the invariant-mass distribution of ${K_{S}^{0}K_{S}^{0}}\eta'$$(\eta'\to{\pi^{+}\pi^{-}\eta})$ events as shown in Fig. \[selectksksetap\](d).

SIGNAL EXTRACTION
=================

Potential backgrounds are studied using an inclusive MC sample of $1.2\times10^{9}$ ${J/\psi}$ decays.
No significant peaking background is identified in the invariant-mass distributions of ${K^{+}K^{-}}\eta'$ and ${K_{S}^{0}K_{S}^{0}}\eta'$. Non-$\eta'$ processes are studied using the $\eta'$ mass sidebands. The major background in the decay ${J/\psi}\to\gamma{K^{+}K^{-}}\eta'$ stems from ${J/\psi}\to K^{*+} K^{-}\eta' (K^{*+}\to K^{+}\pi^{0}) + c.c.$. Its contribution is estimated from the background-subtracted ${K^{+}K^{-}}\eta'$ spectrum of ${J/\psi}\to K^{*+} K^{-}\eta' (K^{*+}\to K^{+}\pi^{0}) + c.c.$ events selected from data. The spectrum is reweighted according to the ratio of the efficiencies of ${J/\psi}\to\gamma{K^{+}K^{-}}\eta'$ and ${J/\psi}\to K^{*+} K^{-}\eta' (K^{*+}\to K^{+}\pi^{0}) + c.c.$. For the ${J/\psi}\to\gamma{K_{S}^{0}K_{S}^{0}}\eta'$ case, background from the process ${J/\psi}\to{\pi^{0}}{K_{S}^{0}K_{S}^{0}}\eta'$ is negligible, as it is forbidden by charge-conjugation invariance. A structure near 2.34 GeV/$c^{2}$ is observed in the invariant-mass distributions of ${K^{+}K^{-}}\eta'$ and ${K_{S}^{0}K_{S}^{0}}\eta'$. We perform a simultaneous unbinned maximum-likelihood fit to the ${K^{+}K^{-}}\eta'$ and ${K_{S}^{0}K_{S}^{0}}\eta'$ invariant-mass distributions between 2.0 and 2.7 GeV/$c^{2}$, as shown in Fig. \[fit2370\]. The signal is represented by an efficiency-weighted non-relativistic Breit-Wigner (BW) function convolved with a double Gaussian function to account for the mass resolution. The mass and width of the BW function are left free in the fit, while the parameters of the double Gaussian function are fixed to the results obtained from fits to signal MC samples generated with zero width.
The non-$\eta'$ background events are described with $\eta'$ sideband data and the yields from these sources are fixed; the ${J/\psi}\to K^{*+}K^{-}\eta' + c.c.$ contribution in the ${J/\psi}\to\gamma{K^{+}K^{-}}\eta'$ decay channel is studied as discussed above and its shape and yield are fixed in the fit; the contribution from nonresonant $\gamma K\bar{K}\eta'$ production is described by the shape obtained from the PHSP MC sample of $J/\psi\to\gamma K\bar{K} \eta'$ and its yield is a free parameter in the fit; the remaining background is described by a second-order Chebychev polynomial function with free parameters. In the simultaneous fit, the resonance parameters are free and constrained to be the same for all four channels. The ratio of the signal yields for the two $\eta'$ decay modes is fixed to the value calculated from their branching fractions and efficiencies. The ratio of the signal yields of ${J/\psi}\to\gamma X(2370)\to\gamma{K^{+}K^{-}}\eta'$ and ${J/\psi}\to\gamma X(2370)\to\gamma{K_{S}^{0}K_{S}^{0}}\eta'$ is a free parameter in the fit. The obtained mass, width and number of signal events for the $X(2370)$ are listed in Table \[fitresult2370\]. A variety of fits with different fit ranges, $\eta'$ sideband regions and background shapes are performed, and the smallest statistical significance among these fits is found to be 8.3$\sigma$. With the detection efficiencies listed in Table \[effi\_all\], the product branching fractions for ${J/\psi}\to \gamma X(2370),X(2370)\to {K^{+}K^{-}}\eta'$ and ${J/\psi}\to \gamma X(2370),X(2370)\to {K_{S}^{0}K_{S}^{0}}\eta'$ are determined to be $(1.79\pm0.23)\times10^{-5}$ and $(1.18\pm0.32)\times10^{-5}$, respectively, where the uncertainties are statistical only.
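The signal line shape used in the fit, a non-relativistic Breit-Wigner convolved with a double-Gaussian resolution function, can be sketched numerically. The parameter values below (resolution widths, Gaussian fraction) are illustrative placeholders, not the fitted values:

```python
import numpy as np

def breit_wigner(m, m0, gamma):
    """Non-relativistic Breit-Wigner line shape (unnormalized)."""
    return 1.0 / ((m - m0) ** 2 + gamma ** 2 / 4.0)

def double_gauss(x, f1, s1, s2):
    """Resolution model: sum of two zero-mean Gaussians."""
    g = lambda s: np.exp(-0.5 * (x / s) ** 2) / (s * np.sqrt(2 * np.pi))
    return f1 * g(s1) + (1.0 - f1) * g(s2)

def signal_shape(m, m0, gamma, f1, s1, s2):
    """BW convolved numerically with the double-Gaussian resolution."""
    dx = m[1] - m[0]
    kern_x = np.arange(-50, 51) * dx
    kern = double_gauss(kern_x, f1, s1, s2)
    kern /= kern.sum()
    return np.convolve(breit_wigner(m, m0, gamma), kern, mode="same")

# fit window 2.0-2.7 GeV/c^2; resolution widths of 5-10 MeV are assumed
m = np.linspace(2.0, 2.7, 701)
pdf = signal_shape(m, m0=2.3416, gamma=0.117, f1=0.7, s1=0.005, s2=0.010)
peak = m[np.argmax(pdf)]
```

In the actual analysis the resolution parameters are fixed from zero-width signal MC, and the shape is additionally weighted by the mass-dependent efficiency; the sketch shows only the convolution step.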
  ------------------------------------ -------------------------- ------------------------------
                                       $\eta'\to\gamma\rho^{0}$   $\eta'\to\pi^{+}\pi^{-}\eta$
  $M_{X(2370)}$ (MeV/$c^{2}$)                    $2341.6\pm6.5$ (common to all channels)
  $\Gamma_{X(2370)}$ (MeV)                       $117\pm10$ (common to all channels)
  $N(J/\psi \to \gamma X(2370)^{a})$   $882\pm112$                $320\pm40$
  $N(J/\psi \to \gamma X(2370)^{b})$   $174\pm47$                 $55\pm15$
  $N(J/\psi \to \gamma X(2120)^{a})$   $<553.5$                   $<187.3$
  $N(J/\psi \to \gamma X(2120)^{b})$   $<88.7$                    $<30.0$
  ------------------------------------ -------------------------- ------------------------------

  : Fit results for the structures around 2.34 GeV/$c^{2}$ and 2.12 GeV/$c^{2}$. The superscripts $a$ and $b$ represent the decay modes $X\to K^{+}K^{-}\eta'$ and $X\to K_{S}^{0}K_{S}^{0}\eta'$, respectively. The uncertainties are statistical only.[]{data-label="fitresult2370"}

  Decay modes                       $\varepsilon_{\eta'\to\gamma\rho^{0}}$   $\varepsilon_{\eta'\to\pi^{+}\pi^{-}\eta}$
  --------------------------------- ---------------------------------------- --------------------------------------------
  $J/\psi \to \gamma X(2370)^{a}$   12.9 %                                   8.0 %
  $J/\psi \to \gamma X(2370)^{b}$   8.1 %                                    4.4 %
  $J/\psi \to \gamma X(2120)^{a}$   10.3 %                                   6.0 %
  $J/\psi \to \gamma X(2120)^{b}$   7.9 %                                    4.6 %

  : Summary of the MC detection efficiencies of the signal yields for the two $\eta'$ modes, where the $K\bar{K}\eta'$ invariant mass is constrained to the fitting range between 2.0 and 2.7 GeV/$c^{2}$. The superscripts $a$ and $b$ represent the decay modes $X\to K^{+}K^{-}\eta'$ and $X\to K_{S}^{0}K_{S}^{0}\eta'$, respectively.[]{data-label="effi_all"}

No obvious signal of the $X(2120)$ is found in the $K\bar{K}\eta'$ invariant-mass distribution. We perform a simultaneous unbinned maximum-likelihood fit to the $K\bar{K}\eta'$ invariant-mass distributions in the range \[2.0, 2.7\] GeV/$c^2$. The signal, $X(2120)$, is described with an efficiency-weighted BW function convolved with a double Gaussian function. The mass and width of the BW function are fixed to the previously published BESIII results [@PRL1].
The backgrounds are modeled with the same components as used in the fit of the $X(2370)$ described above. The contribution from the $X(2370)$ is included in the fit, with its mass, width and number of events left free. The distribution of normalized likelihood values for a series of input signal event yields is taken as the probability density function (PDF) for the expected number of events. The number of events at which the integral of the PDF from zero reaches 90$\%$ is defined as the upper limit, $N^{UL}$, at the 90$\%$ confidence level (C.L.). We repeat this procedure with different signal shape parameters of the $X(2120)$ (varying the mass and width by 1$\sigma$ of the uncertainties quoted in [@PRL1]), fit ranges, $\eta'$ sideband regions and background shapes, and the maximum upper limit among these cases is selected. The statistical significance of the $X(2120)$ is determined to be 2.2$\sigma$. To calculate $N^{UL}$ for the ${J/\psi}\to\gamma X(2120)\to\gamma{K^{+}K^{-}}\eta'$ (${J/\psi}\to\gamma X(2120)\to\gamma{K_{S}^{0}K_{S}^{0}}\eta'$) channel, the number of signal events for the ${J/\psi}\to\gamma X(2120)\to\gamma{K_{S}^{0}K_{S}^{0}}\eta'$ (${J/\psi}\to\gamma X(2120)\to\gamma{K^{+}K^{-}}\eta'$) channel is left free. The obtained upper limits on the signal yields are listed in Table \[fitresult2370\], and the upper limits on the product branching fractions are calculated to be $\mathcal{B}({J/\psi}\to\gamma X(2120)\to\gamma K^{+} K^{-} \eta') < 1.41\times10^{-5}$ and $\mathcal{B}({J/\psi}\to\gamma X(2120)\to\gamma K_{S}^{0} K_{S}^{0} \eta') < 6.15\times10^{-6}$, respectively.
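The upper-limit prescription described above — treating the normalized likelihood versus signal yield as a PDF and integrating from zero until 90% of the total is reached — can be illustrated with a toy likelihood curve (the Gaussian shape and its parameters are purely illustrative):

```python
import numpy as np

def upper_limit(n_grid, likelihood, cl=0.90):
    """Treat the normalized likelihood vs. signal yield as a PDF and
    return the yield at which the integral from zero reaches `cl`."""
    pdf = np.clip(likelihood, 0.0, None)
    cdf = np.cumsum(pdf)
    cdf /= cdf[-1]
    return float(np.interp(cl, cdf, n_grid))

# toy likelihood curve: a Gaussian in the yield, truncated at zero
n = np.linspace(0.0, 200.0, 2001)
like = np.exp(-0.5 * ((n - 40.0) / 25.0) ** 2)
n_ul = upper_limit(n, like)   # ~73 events for this toy curve
```

In the analysis this scan is repeated for each fit variant, and the largest resulting limit is quoted as the conservative choice.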
  ----------------------------------- ----- ---------- ----- ---------- ----- ---------- ----- ----------
                                      ${K^{+}K^{-}}\eta'$ (I)    ${K^{+}K^{-}}\eta'$ (II)   ${K_{S}^{0}K_{S}^{0}}\eta'$ (I)   ${K_{S}^{0}K_{S}^{0}}\eta'$ (II)
                                      $M$   $\Gamma$   $M$   $\Gamma$   $M$   $\Gamma$   $M$   $\Gamma$
  Veto $\pi^{0}$                      0.0   1          0.3   1          0.2   1          0.2   1
  Veto $\eta$                         0.2   1          –     –          0.2   1          –     –
  Fit range                           0.1   3          0.1   3          0.1   3          0.1   3
  Sideband region                     0.1   2          0.1   2          0.2   1          0.1   1
  Chebychev function                  0.2   3          0.1   3          0.2   1          0.1   3
  $J/\psi\to K^{*+}K^{-}\eta'+c.c.$   0.2   5          0.2   5          0.2   5          0.2   5
  X(2120)\*                           5.7   7          5.7   7          5.7   7          5.7   7
  Total                               5.7   10         5.7   10         5.7   9          5.7   10
  ----------------------------------- ----- ---------- ----- ---------- ----- ---------- ----- ----------

  : Systematic uncertainties in the determination of the mass $M$ (in MeV/$c^{2}$) and width $\Gamma$ (in MeV) of the $X(2370)$. The item with \* is a common uncertainty of both $\eta'$ decay modes. I and II represent the decay modes $\eta'\to\gamma\rho^{0}$, $\rho^{0}\to\pi^{+}\pi^{-}$ and $\eta'\to{\pi^{+}\pi^{-}\eta}$, $\eta\to\gamma\gamma$, respectively.[]{data-label="sumsys2370"}

SYSTEMATIC UNCERTAINTIES {#sec::sys}
========================

Several sources of systematic uncertainties are considered for the determination of the mass and width of the $X(2370)$ and the product branching fractions. These include the efficiency differences between data and MC simulation in the MDC tracking, PID, photon detection, ${K_{S}^{0}}$ reconstruction, the kinematic fit, and the mass-window requirements of $\pi^{0}$, $\eta$, $\rho$ and $\eta'$. Furthermore, uncertainties associated with the fit ranges, the background shapes, the sideband regions, the signal shape parameters of the $X(2120)$, the intermediate-decay branching fractions and the total number of ${J/\psi}$ events are considered.
  ----------------------------------------------------------------- ------ ------ ------ ------
                                                                    ${K^{+}K^{-}}\eta'$   ${K_{S}^{0}K_{S}^{0}}\eta'$
                                                                    I      II     I      II
  MDC tracking\*                                                    4.0    4.0    2.0    2.0
  Photon detection\*                                                2.0    3.0    2.0    3.0
  $K_{S}^{0}$ reconstruction\*                                      –      –      3.0    3.0
  PID\*                                                             4.0    4.0    –      –
  Kinematic fit                                                     1.7    1.0    3.8    2.2
  $\rho$ mass window                                                0.2    –      0.3    –
  $\eta'$ mass window                                               0.1    0.4    0.1    0.3
  Veto $\pi^{0}$                                                    1.2    1.6    1.7    0.6
  Veto $\eta$                                                       1.0    –      0.6    –
  Fit range                                                         2.4    2.4    1.7    1.7
  Sideband region                                                   5.4    2.8    2.8    1.2
  Chebychev function                                                4.9    5.5    1.7    1.7
  $J/\psi\to K^{*+}K^{-}\eta'+c.c.$                                 4.0    4.0    2.2    2.2
  $\mathcal{B}(\eta'\to \gamma\rho^{0}\to\gamma{\pi^{+}\pi^{-}})$   1.7    –      1.7    –
  $\mathcal{B}(\eta'\to\eta{\pi^{+}\pi^{-}})$                       –      1.6    –      1.6
  $\mathcal{B}(\eta\to\gamma\gamma)$                                –      0.5    –      0.5
  $\mathcal{B}(K_{S}^{0}\to{\pi^{+}\pi^{-}})$\*                     –      –      0.1    0.1
  Number of ${J/\psi}$ events\*                                     0.5    0.5    0.5    0.5
  Quantum number of $X$                                             16.7   13.6   16.0   19.0
  X(2120)\*                                                         33.7   33.7   30.5   30.5
  Total                                                             39.2   37.7   35.3   36.5
  ----------------------------------------------------------------- ------ ------ ------ ------

  : Systematic uncertainties for the determination of the branching fractions of ${J/\psi}\to\gamma X(2370)\to\gamma K \bar{K} \eta'$ (in %). The items with \* are common uncertainties of both $\eta'$ decay modes. I and II represent the decay modes $\eta'\to\gamma\rho^{0}$, $\rho^{0}\to\pi^{+}\pi^{-}$ and $\eta'\to{\pi^{+}\pi^{-}\eta}$, $\eta\to\gamma\gamma$, respectively.
[]{data-label="sumsysBR2370"}

  ----------------------------------------------------------------- ------ ------ ------ ------
                                                                    ${K^{+}K^{-}}\eta'$   ${K_{S}^{0}K_{S}^{0}}\eta'$
                                                                    I      II     I      II
  MDC tracking\*                                                    4.0    4.0    2.0    2.0
  Photon detection\*                                                2.0    3.0    2.0    3.0
  $K_{S}^{0}$ reconstruction\*                                      –      –      3.0    3.0
  PID\*                                                             4.0    4.0    –      –
  Kinematic fit                                                     1.7    0.8    4.0    3.5
  $\rho$ mass window                                                0.2    –      0.3    –
  $\eta'$ mass window                                               0.1    0.1    0.2    0.2
  Veto $\pi^{0}$                                                    0.8    1.0    1.4    1.5
  Veto $\eta$                                                       0.8    –      1.4    –
  $\mathcal{B}(\eta'\to \gamma\rho^{0}\to\gamma{\pi^{+}\pi^{-}})$   1.7    –      1.7    –
  $\mathcal{B}(\eta'\to\eta{\pi^{+}\pi^{-}})$                       –      1.6    –      1.6
  $\mathcal{B}(\eta\to\gamma\gamma)$                                –      0.5    –      0.5
  $\mathcal{B}(K_{S}^{0}\to{\pi^{+}\pi^{-}})$\*                     –      –      0.1    0.1
  Number of ${J/\psi}$ events\*                                     0.5    0.5    0.5    0.5
  Quantum number of $X$                                             18.2   16.4   20.9   19.3
  Total                                                             19.3   17.6   21.8   20.2
  ----------------------------------------------------------------- ------ ------ ------ ------

  : Systematic uncertainties for the determination of the upper limits on the branching fractions of ${J/\psi}\to\gamma X(2120)\to\gamma K \bar{K} \eta'$ (in %). The items with \* are common uncertainties of both $\eta'$ decay modes. I and II represent the decay modes $\eta'\to\gamma\rho^{0}$, $\rho^{0}\to\pi^{+}\pi^{-}$ and $\eta'\to{\pi^{+}\pi^{-}\eta}$, $\eta\to\gamma\gamma$, respectively.[]{data-label="sumsysBR2120"}

Efficiency estimation
---------------------

The MDC tracking efficiencies of charged pions and kaons are investigated using nearly background-free (clean) control samples of ${J/\psi}\to p\bar{p}{\pi^{+}\pi^{-}}$ and ${J/\psi}\to{K_{S}^{0}}K^{\pm}\pi^{\mp}$ [@MDCpi; @MDC], respectively. The difference in tracking efficiencies between data and MC simulation is 1.0% for each charged pion and kaon. The photon detection efficiency is studied with a clean sample of $J/\psi\to\rho^{0}\pi^{0}$ [@Photon], and the result shows that the difference in photon detection efficiency between data and MC simulation is 1.0% for each photon.
The systematic uncertainty from $K_{S}^{0}$ reconstruction is determined from the control samples of ${J/\psi}\to K^{*\pm}K^{\mp}$ and ${J/\psi}\to\phi{K_{S}^{0}}K^{\pm}\pi^{\mp}$, which indicate that the efficiency difference between data and MC simulation is less than 1.5$\%$ for each ${K_{S}^{0}}$. Therefore, 3.0$\%$ is taken as the systematic uncertainty for the two ${K_{S}^{0}}$ in the ${J/\psi}\to\gamma{K_{S}^{0}K_{S}^{0}}\eta'$ channel. For the decay channel $J/\psi\to\gamma K^{+}K^{-}\eta'$, PID has been used to identify the kaons and pions. Using a clean sample of $J/\psi\to p\bar{p}\pi^{+}\pi^{-}$, the PID efficiency of $\pi^{+}/\pi^{-}$ has been studied, which indicates that the $\pi^{+}/\pi^{-}$ PID efficiency for data agrees with MC simulation within 1%. The PID efficiency for the kaon is measured with a clean sample of $J/\psi\rightarrow K^+ K^-\eta$. The difference in the PID efficiency between data and MC simulation is less than 1% for each kaon. Hence, since in this analysis four charged tracks are required to be identified as two pions and two kaons, 4% is taken as the systematic uncertainty associated with the PID. The systematic uncertainties associated with the kinematic fit are studied with the track helix parameter correction method, as described in Ref. [@4cError]. The differences with respect to the results without corrections are taken as systematic uncertainties. Due to the difference in the mass resolution between data and MC simulation, uncertainties related to the $\rho^{0}$ and $\eta'$ mass-window requirements are investigated by smearing the MC simulation to improve the consistency between data and MC simulation. The differences in the detection efficiency before and after smearing are assigned as systematic uncertainties for the $\rho^{0}$ and $\eta'$ mass-window requirements.
The uncertainties from the $\pi^{0}$ and $\eta$ mass-window requirements are estimated by varying the mass windows of the $\pi^{0}$ and $\eta$, and the differences in the resulting branching fractions are assigned as the corresponding systematic uncertainties. Furthermore, we consider the effects arising from different quantum numbers of the $X(2120)$ and $X(2370)$. We generate ${J/\psi}\rightarrow\gamma X(2120)$ and ${J/\psi}\rightarrow\gamma X(2370)$ decays following a $\mathrm{sin}^2\theta_{\gamma}$ angular distribution. The resulting differences in efficiency with respect to the nominal values are taken as systematic uncertainties.

Fit to the signal
-----------------

To study the uncertainties from the fit range and the $\eta'$ sideband region, the fits are repeated with different fit ranges and sideband regions, and the largest differences among the signal yields are taken as the respective systematic uncertainties. To estimate the uncertainties in the description of the various background contributions, we perform alternative fits with third-order Chebychev polynomials modeling the background of the ${K^{+}K^{-}}\eta'$ and ${K_{S}^{0}K_{S}^{0}}\eta'$ channels. The maximum differences in the signal yields with respect to the nominal fit are taken as systematic uncertainties. The uncertainties from the background of $J/\psi\to K^{*+}K^{-}\eta' + c.c.$ are estimated by absorbing this component into the Chebychev polynomial function, and the differences obtained by using the description with or without the $J/\psi\to K^{*+}K^{-}\eta' + c.c.$ background component are taken as systematic uncertainties. The impact of the $X(2120)$ is also considered as a systematic uncertainty in the study of the $X(2370)$. The difference between fits with and without an $X(2120)$ contribution is taken as the systematic uncertainty associated with this item.
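The take-the-largest-deviation prescription used above for the fit-variation systematics can be sketched as follows (the yield numbers are hypothetical):

```python
def variation_systematic(nominal_yield, alt_yields):
    """Largest deviation of the alternative-fit yields from the nominal
    yield, taken as the systematic uncertainty."""
    return max(abs(y - nominal_yield) for y in alt_yields)

# hypothetical yields from the nominal fit and three variations
# (e.g. different fit range, sideband region, background order)
nominal = 882.0
alternatives = [901.0, 868.0, 925.0]
sys_abs = variation_systematic(nominal, alternatives)   # 43.0 events
sys_rel = 100.0 * sys_abs / nominal                     # ~4.9 %
```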
Others
------

Since no evident structures are observed in the invariant-mass distributions of $M(K\eta')$, $M(\bar{K}\eta')$ and $M(K\bar{K})$ for events with a $K\bar{K}\eta'$ invariant mass within the $X(2370)$ mass region (2.2 GeV/$c^{2} < M_{K\bar{K}\eta'} <$ 2.5 GeV/$c^{2}$), the systematic uncertainties of the reconstruction efficiency due to possible intermediate states in the $K\eta'$, $\bar{K}\eta'$ and $K\bar{K}$ mass spectra are neglected. The uncertainties on the intermediate-decay branching fractions of $\eta'\to\gamma\rho^{0}\to\gamma\pi^{+}\pi^{-}$, $\eta'\to\pi^{+}\pi^{-}\eta$, $\eta\to\gamma\gamma$ and $K_{S}^{0}\to\pi^{+}\pi^{-}$ are taken from the world average values [@pdg], and are $1.7\%$, $1.6\%$, $0.5\%$ and $0.1\%$, respectively. The systematic uncertainty due to the number of ${J/\psi}$ events is 0.5$\%$, as determined in Ref. [@jpsinumber]. A summary of all the uncertainties is shown in Tables \[sumsys2370\], \[sumsysBR2370\] and \[sumsysBR2120\]. The total systematic uncertainties are obtained by adding all individual uncertainties in quadrature, assuming all sources to be independent. The $X(2120)$ and $X(2370)$ are studied via ${J/\psi}\to\gamma {K^{+}K^{-}}\eta'$ and $J/\psi\to\gamma {K_{S}^{0}K_{S}^{0}}\eta'$, each with two $\eta'$ decay modes. The measurements from the two $\eta'$ decay modes are therefore combined, taking into account the differences in the uncertainties of the two measurements. The combined systematic uncertainties are calculated with the weighted least-squares method [@combinepaper] and the results are shown in Table \[combinesystable\].
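The two combination steps described above — adding independent uncertainties in quadrature and combining two measurements with an inverse-variance (weighted least-squares) average — can be sketched as follows; the numerical inputs are illustrative only:

```python
import numpy as np

def total_in_quadrature(components):
    """Combine independent (relative) uncertainties in quadrature."""
    return float(np.sqrt(np.sum(np.square(components))))

def weighted_combination(values, sigmas):
    """Inverse-variance (weighted least-squares) combination of
    independent measurements; returns (mean, uncertainty)."""
    w = 1.0 / np.square(np.asarray(sigmas, dtype=float))
    mean = float(np.sum(w * np.asarray(values, dtype=float)) / np.sum(w))
    return mean, float(1.0 / np.sqrt(np.sum(w)))

# illustrative inputs: two measurements of one quantity (arbitrary units)
comb, comb_err = weighted_combination([1.79, 1.18], [0.23, 0.32])
total = total_in_quadrature([3.0, 4.0])   # 5.0
```

The full combination in the analysis additionally distinguishes common from mode-specific uncertainties; the sketch shows only the basic machinery.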
  ---------------------------------------------------------------------------- --------------------------------------------------------------------
  $M_{X(2370)}$ (MeV/$c^{2}$)                                                  $2341.6\pm 6.5\text{(stat.)}\pm 5.7\text{(syst.)}$
  $\Gamma_{X(2370)}$ (MeV)                                                     $117\pm10\text{(stat.)}\pm 8\text{(syst.)}$
  $\mathcal{B}(J/\psi \to \gamma X(2370)\to \gamma K^{+}K^{-}\eta')$           $(1.79\pm0.23~\text{(stat.)}\pm 0.65~\text{(syst.)})\times10^{-5}$
  $\mathcal{B}(J/\psi \to \gamma X(2370)\to \gamma K_{S}^{0}K_{S}^{0}\eta')$   $(1.18\pm0.32~\text{(stat.)}\pm 0.39~\text{(syst.)})\times10^{-5}$
  $\mathcal{B}(J/\psi \to \gamma X(2120)\to \gamma K^{+}K^{-}\eta')$           $<1.49\times10^{-5}$
  $\mathcal{B}(J/\psi \to \gamma X(2120)\to \gamma K_{S}^{0}K_{S}^{0}\eta')$   $<6.38\times10^{-6}$
  ---------------------------------------------------------------------------- --------------------------------------------------------------------

  : Combined results for the $X(2370)$ and $X(2120)$.[]{data-label="combinesystable"}

RESULTS AND SUMMARY
===================

Using a sample of $1.31\times10^{9}$ $J/\psi$ events collected with the BESIII detector, the decays $J/\psi\to\gamma {K^{+}K^{-}}\eta'$ and $J/\psi\to\gamma {K_{S}^{0}K_{S}^{0}}\eta'$ are investigated using the two $\eta'$ decay modes, $\eta'\to\gamma\rho^{0}(\rho^{0}\to\pi^{+}\pi^{-})$ and $\eta'\to\pi^{+}\pi^{-}\eta(\eta\to\gamma\gamma)$. The $X(2370)$ is observed in the $K\bar{K}\eta'$ invariant-mass distribution with a statistical significance of 8.3$\sigma$. The mass and width are determined to be $M_{X(2370)} = 2341.6\pm 6.5\text{(stat.)}\pm 5.7\text{(syst.)}$ MeV/$c^{2}$ and $\Gamma_{X(2370)} = 117\pm10\text{(stat.)}\pm 8\text{(syst.)}$ MeV, which are consistent with those of the $X(2370)$ observed in the previous BESIII results [@PRL1]. The product branching fractions $\mathcal{B}({J/\psi}\to\gamma X(2370)\to\gamma K^{+} K^{-} \eta')$ and $\mathcal{B}({J/\psi}\to\gamma X(2370)\to\gamma K_{S}^{0} K_{S}^{0} \eta')$ are measured to be $(1.79\pm0.23~\text{(stat.)}\pm0.65~\text{(syst.)})\times10^{-5}$ and $(1.18\pm0.32~\text{(stat.)}\pm0.39~\text{(syst.)})\times10^{-5}$, respectively.
No evident signal for the $X(2120)$ is observed in the $K\bar{K}\eta'$ invariant-mass distribution. For a conservative estimate of the upper limits on the product branching fractions of $J/\psi\to\gamma X(2120)\to{K^{+}K^{-}}\eta'$ and $J/\psi\to\gamma X(2120)\to{K_{S}^{0}K_{S}^{0}}\eta'$, the multiplicative uncertainties are considered by convolving the normalized likelihood function with a Gaussian function. The upper limits on the product branching fractions at the 90% C.L. are determined to be $\mathcal{B}({J/\psi}\to\gamma X(2120)\to\gamma K^{+} K^{-} \eta') < 1.49\times10^{-5}$ and $\mathcal{B}({J/\psi}\to\gamma X(2120)\to\gamma K_{S}^{0} K_{S}^{0} \eta') < 6.38\times10^{-6}$. To understand the nature of the $X(2120)$ and $X(2370)$, it is critical to measure their spin and parity and to search for them in more decay modes. A partial-wave analysis is needed to measure their masses and widths more precisely, and to determine their spin and parity. This might become possible in the future with the foreseen higher statistics of ${J/\psi}$ data samples.

The BESIII collaboration thanks the staff of BEPCII and the IHEP computing center for their strong support. This work is supported in part by National Key Basic Research Program of China under Contract No. 2015CB856700; National Natural Science Foundation of China (NSFC) under Contracts Nos. 11625523, 11635010, 1332201, 11735014, 11565006; National Natural Science Foundation of China (NSFC) under Contract No. 11835012; the Chinese Academy of Sciences (CAS) Large-Scale Scientific Facility Program; Joint Large-Scale Scientific Facility Funds of the NSFC and CAS under Contracts Nos. U1532257, U1532258, U1732263, U1832207; CAS Key Research Program of Frontier Sciences under Contracts Nos. QYZDJ-SSW-SLH003, QYZDJ-SSW-SLH040; 100 Talents Program of CAS; INPAC and Shanghai Key Laboratory for Particle Physics and Cosmology; German Research Foundation DFG under Contract No.
Collaborative Research Center CRC 1044; Istituto Nazionale di Fisica Nucleare, Italy; Koninklijke Nederlandse Akademie van Wetenschappen (KNAW) under Contract No. 530-4CDP03; Ministry of Development of Turkey under Contract No. DPT2006K-120470; National Science and Technology fund; The Knut and Alice Wallenberg Foundation (Sweden) under Contract No. 2016.0157; The Royal Society, UK under Contract No. DH160214; The Swedish Research Council; U. S. Department of Energy under Contracts Nos. DE-FG02-05ER41374, DE-SC-0010118, DE-SC-0012069; University of Groningen (RuG) and the Helmholtzzentrum fuer Schwerionenforschung GmbH (GSI), Darmstadt. [99]{} C. Amsler and N. A. Tornqvist, Phys. Rep. [**389**]{}, 61 (2004). E. Klempt and A. Zaitsev, Phys. Rep. [**454**]{}, 1 (2007). V. Crede and C. A. Meyer, Prog. Part. Nucl. Phys. [**63**]{}, 74 (2009). Y. Chen [*et al*]{}., Phys. Rev. D [**73**]{}, 014516 (2006). M. B. Cakir and G. R. Farrar, Phys. Rev. D [**50**]{}, 3268 (1994). F. E. Close, G. R. Farrar and Z. P. Li, Phys. Rev. D [**55**]{}, 5749 (1997). M. Ablikim [*et al.*]{} (BESIII Collaboration), Phys. Rev. Lett. [**106**]{}, 072002 (2011). W. I. Eshraim, S. Janowski, F. Giacosa and D. H. Rischke, Phys. Rev. D [**87**]{}, 054036 (2013). M. Ablikim [*et al.*]{} (BESIII Collaboration), Chin. Phys. C[**41**]{}, 013001 (2017). M. Ablikim [*et al.*]{} (BESIII Collaboration), Nucl. Instrum. Methods Phys. Res., Sect. A [**614**]{}, 345 (2010). C. H. Yu [*et al.*]{}, Proceedings of IPAC2016, Busan, Korea, 2016, doi:10.18429/JACoW-IPAC2016-TUYA01. S. Agostinelli [*et al.*]{} (GEANT4 Collaboration), Nucl. Instrum. Methods Phys. Res., Sect. A [**506**]{}, 250 (2003). S. Jadach, B. F. L. Ward and Z. Was, Phys. Rev. D [**63**]{}, 113009 (2001); Comput. Phys. Commun.  [**130**]{}, 260 (2000). D. J. Lange, Nucl. Instrum. Methods Phys. Res., Sect. A [**462**]{}, 152 (2001); R. G. Ping, Chin. Phys. C [**32**]{}, 599 (2008). C. Patrignani [*et al.*]{} (Particle Data Group), Chin. Phys. 
C [**40**]{}, 100001 (2016). J. C. Chen, G. S. Huang, X. R. Qi, D. H. Zhang and Y. S. Zhu, Phys. Rev. D [**62**]{}, 034003 (2000); R. L. Yang, R. G. Ping and H. Chen, Chin. Phys. Lett. [**31**]{}, 061301 (2014). E. Richter-Was, Phys. Lett. B [**303**]{}, 163 (1993). M. Ablikim [*et al.*]{} (BESIII Collaboration), Phys. Rev. Lett. [**120**]{}, 242003 (2018). W. D. Li [*et al.*]{}, in Proceedings of CHEP06, Mumbai, India, 2006, edited by Sunanda Banerjee (Tata Institute of Fundamental Research, Mumbai, 2006). M. Ablikim [*et al.*]{} (BESIII Collaboration), Phys. Rev. D [**85**]{}, 092012 (2012). M. Ablikim [*et al.*]{} (BESIII Collaboration), Phys. Rev. D [**83**]{}, 112005 (2011). M. Ablikim [*et al.*]{} (BESIII Collaboration), Phys. Rev. D [**81**]{}, 052005 (2010). M. Ablikim [*et al.*]{} (BESIII Collaboration), Phys. Rev. D [**87**]{}, 012002 (2013). G. D’Agostini, Nucl. Instrum. Methods Phys. Res., Sect. A [**346**]{}, 306 (1994).
--- abstract: 'Changes in stoichiometric NiTi allotropes induced by hydrostatic pressure have been studied employing density functional theory. By modelling the pressure-induced transitions in a way that imitates quasi-static pressure changes, we show that the experimentally observed B19$''$ phase is (in its bulk form) unstable with respect to another monoclinic phase, B19$''''$. The lower symmetry of the B19$''''$ phase leads to unique atomic trajectories of Ti and Ni atoms (that do not share a single crystallographic plane) during the pressure-induced phase transition. This uniqueness of atomic trajectories is considered a necessary condition for the shape memory ability. The forward and reverse pressure-induced transition B19$''$[$\leftrightarrow$]{}B19$''''$ exhibits a hysteresis that is shown to originate from hitherto unexpected complexity of the Born-Oppenheimer energy surface.' author: - David Holec - Martin Friák - Antonín Dlouhý - Jörg Neugebauer title: 'Ab initio study of pressure stabilised NiTi allotropes: pressure-induced transformations and hysteresis loops' --- Introduction ============ Nickel-titanium alloys belong to the important class of shape-memory materials [@Hornbogen1991; @Saburi1998; @Van-Humbeeck1999; @Duerig1999]. Their properties include super-elasticity, excellent mechanical strength and ductility, good corrosion resistance and bio-compatibility (important for example in medical applications), and high specific electric resistance (allowing the material to be easily heated by an electric current). The shape memory effect is governed by a martensitic transformation from a high-temperature austenitic phase (cubic B2, CsCl-structure) into a low-temperature martensitic phase. X-ray experiments on single crystals[@Kudoh1985; @Michal1981] and neutron measurements on powder samples[@Buehrer1983] revealed the low temperature phase to be a monoclinic B19$'$ structure (see Fig. 
\[fig:B19”\], $\gamma\approx97.8{\ensuremath{^\circ}}$) with P$2_1$/m space group. In addition, a rhombohedral R-phase[@Hara1997] with P3 space group was found during multi-step martensitic transformations[@Khalil-Allafi2002; @Dlouhy2003; @Khalil-Allafi2004; @Bojda2005; @Michutta2006] under the following conditions: (i) off-stoichiometric composition, (ii) presence of substitutional or interstitial impurities, and/or (iii) formation of precipitate phases. ![Atomic geometry of the investigated B19$'$-like phases. The various structures considered in this study alternate in the lattice parameters $a$, $b$, $c$, monoclinic angle $\gamma$, and internal positions (see text for details). Larger blue spheres correspond to Ni, smaller gray spheres to Ti atoms. The highlighted planes are used to characterize the structural ability to accommodate the shape-memory effect (see Sec. \[sec:shapeMemory\]). The picture was generated using the VESTA package[@Momma2008].[]{data-label="fig:B19”"}](fig1a.eps){width="0.9\columnwidth"} Several theoretical studies on the low temperature martensitic phase of stoichiometric NiTi alloys have been performed. The intense search has been motivated in part by the fact that theoretically predicted structures do not unambiguously agree with those detected experimentally. For example @Huang2003 concluded that the B19$'$ structure is unstable with respect to a higher-symmetry base-centered orthorhombic (BCO, in some studies also termed B33) structure (see Fig. \[fig:B19”\], $\gamma\approx107{\ensuremath{^\circ}}$). These conclusions were based on systematically cross-checking several distinct DFT methods, functionals, and implementations (FLAPW, PAW, USPP, GGA, LDA, ABINIT, VASP, etc.). The analysis also considered a carefully selected shear transformation path connecting all three structures B2 ($\gamma=90{\ensuremath{^\circ}}$), B19$'$, and BCO, since they are characterized by a specific value of the crystallographic angle $\gamma$. 
Very similar results were reported by @Wagner2008 and by @GudaVishnu2010. The latter authors[@GudaVishnu2010] also predicted a new phase (B19$''$) characterized by $\gamma\approx102.5{\ensuremath{^\circ}}$ and with practically identical energy to the BCO phase. Finally, a barrier-less transformation path between the B2 and the BCO phases as a sequence of several special deformation modes was demonstrated in Ref. . Various explanations of the discrepancy between (i) the apparent stability of the B19$'$ phase as observed in low-temperature experiments and (ii) the instability of the B19$'$ phase predicted by theoretical calculations (for $T=0\,\mathrm{K}$) have been proposed: Recent theoretical works of @Sestak2011 and @Zhong2011 suggest that the B19$'$ phase may be stabilized by the presence of (nano)twins that are often experimentally observed[@Wagner2008]. As another possibility, @Huang2003 suggested that the B19$'$ structure could be stabilized by residual stresses that are frequently present in experimental samples. Since the equilibrium volume is predicted to be smaller for the B19$'$ structure than for the BCO phase[@Huang2003], one may expect the BCO structure to transform into the B19$'$ phase under compressive loads. Considering this variety of mechanisms active in NiTi, and in order to understand how external strains affect the stability of the various phases, we systematically explore the potential energy surface (PES). To complement previous studies, we focus solely on martensitic phase transformations induced by volumetric changes, i.e., hydrostatic pressure. Our choice is motivated by the fact that (i) stress/strain fields in NiTi alter process parameters of the martensitic transformations (such as the transition temperature) and (ii) the actual stresses and strains in experimental samples are difficult to measure and are often not known. Focusing on volumetric changes, we show an unexpectedly complex PES.
This complexity results in transformation mechanisms that exhibit hysteresis effects not reported in previous studies. From a methodological point of view, we also show that it is difficult, yet necessary, to include internal variables explicitly in the PES since they are responsible for the metastability and the newly discovered hysteresis processes. Computational Details ===================== The calculations were performed using density functional theory (DFT)[@Hohenberg1964; @Kohn1965] in the generalized gradient approximation (GGA-PBE’96)[@Perdew1996] as implemented in the Vienna Ab-initio Simulation Package (VASP)[@Kresse1993; @Kresse1996]. All monoclinic structures were studied using four-atom cells with different external and internal parameters, while a two-atom cell was used for the B2 phase. As the total energy differences among the different phases are rather small, it was necessary to ensure convergence of the energy below $1\,\mathrm{meV}$ per formula unit (f.u.), i.e., one Ni and one Ti atom. Therefore, the plane-wave cutoff energy was set to $400\,\mathrm{eV}$ and a $24\times16\times18$ $\bm{k}$-point Monkhorst-Pack mesh was used to sample the Brillouin zone of the monoclinic allotropes studied. Computational Methodology: Quasi-Static Volumetric Changes ---------------------------------------------------------- The computational approach usually employed for studying the effect of hydrostatic pressure is based on determining the total energy as a function of volume. The hydrostatic pressure in the system is obtained by fitting the equation of state[@Murnaghan1944] to the calculated energy–volume data points. Because the B19$'$ and the BCO phases are structurally similar and differ only slightly in a few internal (atomic coordinates) and external (lattice constants and the angle $\gamma$) parameters, the multi-dimensional Born-Oppenheimer potential energy surface (PES) is expected to be quite complex, exhibiting many local minima.
In order to explore the impact of hydrostatic pressure on phase stability and martensitic phase transformations among different NiTi allotropes, we determined the PES as a function of (i) the atomic volume, (ii) the Ni atom $x$-axis internal coordinate, and (iii) the monoclinic angle $\gamma$ (see details below). In order to systematically map the complex PES, we adopted a quasi-static (QS) approach, within which the volume is increased/decreased gradually in an adiabatic-like manner (see detailed explanation in Appendix \[app-QS\]). This rarely used approach allows for more realistic simulations of gradually increasing/decreasing pressures since it closely imitates experimental conditions. Results and discussion ====================== The monoclinic allotropes under hydrostatic load ------------------------------------------------ [(a)]{} ![image](fig2.eps){width="0.9\columnwidth"} [(b)]{} ![image](fig2a.eps){width="0.9\columnwidth"} The QS simulations were initiated using the previously identified ground states for each phase (B19$'$ and BCO). Subsequently, both structures were allowed to evolve quasi-statically under applied volumetric changes. Fig. \[fig:gamma.and.xNi\] summarizes results from four separate simulations. From the initial configuration (either B19$'$ or BCO), we first gradually increased the volume to the maximum value studied here (blue circles and dashed lines in Fig. \[fig:gamma.and.xNi\]), and subsequently decreased the volume in the QS manner to the lowest calculated value (blue circles and solid lines in Fig. \[fig:gamma.and.xNi\]). Similarly, we proceeded in the opposite direction: from the initial state we first decreased the volume down to the minimum investigated value (red triangles and dashed lines in Fig. \[fig:gamma.and.xNi\]), and then increased it to the maximum (red triangles and solid lines in Fig. \[fig:gamma.and.xNi\]). We applied these two forward-and-backward runs to both the B19$^\prime$ and BCO starting configurations.
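The forward-and-backward QS runs can be sketched as a simple driver loop. This is a schematic only, not the scripts used in this work; `relax_at_volume` is a hypothetical callback that would wrap a fixed-volume DFT relaxation and return the relaxed energy and structural parameters:

```python
def qs_sweep(volumes, start_params, relax_at_volume):
    """Quasi-static sweep: relax the structure at each volume, feeding the
    relaxed state in as the starting guess for the next volume so that the
    system evolves adiabatically from one local minimum to the next."""
    params = start_params
    path = []
    for v in volumes:
        energy, params = relax_at_volume(v, params)
        path.append((v, energy, params))
    return path

# A forward-and-backward run (expansion, then compression from the end point):
#   up   = qs_sweep(volumes, ground_state, relax_at_volume)
#   down = qs_sweep(list(reversed(volumes)), up[-1][2], relax_at_volume)
```

The essential point is that each step inherits the previous relaxed structure, unlike a non-QS scan where every volume starts from the same configuration.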
When starting the QS volumetric changes with the BCO phase, the angle $\gamma$ ranges between $105^\circ$ for $20\,\mathrm{\mbox{\AA}^3/f.u.}$ ($\approx60\,\mathrm{GPa}$, compression) and $108^\circ$ for $33\,\mathrm{\mbox{\AA}^3/f.u.}$ ($\approx-20\,\mathrm{GPa}$, expansion). The internal coordinate ${\ensuremath{x_{\mathrm{Ni}}}}$ remains almost constant at the value $\approx0.915$. In contrast to what was suggested by @Huang2003, no transition to the B19$'$ phase is observed within this fairly broad range of hydrostatic pressures. A very different behavior is obtained for the B19$'$ starting configuration. The application of positive hydrostatic pressures (red dashed path in Fig. \[fig:gamma.and.xNi\]) first changes the starting angle $\gamma$ abruptly from $\approx100{\ensuremath{^\circ}}$ to $\approx94{\ensuremath{^\circ}}$. Further decreasing the volume results in only small changes of the angle $\gamma$. Again, no transition to the BCO structure is predicted. Surprisingly, when negative pressures are applied (volumetric increase, see the dashed blue path in Fig. \[fig:gamma.and.xNi\]), the angle $\gamma$ changes to approximately $103{\ensuremath{^\circ}}$. The resulting unit cell geometry and the internal coordinates no longer correspond to values typical for either the B19$'$ or the BCO state. A similar behavior is demonstrated in Fig. \[fig:gamma.and.xNi\]b for the volumetric dependence of the internal coordinate ${\ensuremath{x_{\mathrm{Ni}}}}$. The structural parameters of this state are very similar to the B19$''$ phase described by @GudaVishnu2010. Our results allow us to disregard early suggestions of a B19$'$[$\leftrightarrow$]{}BCO transition induced by hydrostatic pressure. Rather, we conclude that hydrostatic pressure, similar to shear deformations[@GudaVishnu2010], transforms B19$'$ into B19$''$. In contrast to monoclinic shear[@GudaVishnu2010], hydrostatic strain does not drive a transition towards BCO.
Finally, we find that the BCO phase is stable with respect to the hydrostatic deformations and does not transform to B19$'$ (or B19$''$). Origin of the B19$'$[$\leftrightarrow$]{}B19$''$ hysteresis ----------------------------------------------------------- A closer look at the reaction pathways in Fig. \[fig:gamma.and.xNi\] reveals the presence of a narrow hysteresis loop. To explain its origin, we have analyzed the PES along both transition paths (resulting from increasing and decreasing volume). We expressed the total energy as a function of a single external parameter, volume $V$, and one selected internal parameter, here the $x^\mathrm{Ni}$ position. We chose the latter parameter because, unlike the angle $\gamma$, it can easily be kept constant in available DFT implementations and it provides clear ranges defining the two phases, B19$'$ and B19$''$ (see Fig. \[fig:gamma.and.xNi\] and Table \[tab:ground-state\]). [(a)]{} ![image](fig3a.eps){width="0.9\columnwidth"} [(b)]{} ![image](fig3b.eps){width="0.9\columnwidth"} [(c)]{} ![image](fig4.eps){width="0.9\columnwidth"} Using these two parameters we have calculated the total potential energy surface $E^{\mathrm{PES}}(x^{\mathrm{Ni}}, V)$ (Fig. \[fig:energy\_landscape\]a). As expected, the B19$''$ structure is a stable phase (a global minimum at $V\approx27.5\,\mathrm{\mbox{\AA}^3/f.u.}$ and ${\ensuremath{x_{\mathrm{Ni}}}}\approx0.94$), while B19$'$ is not associated with any minimum, indicating that in a fully relaxed environment (hydrostatic pressure $p = 0$) this phase is unstable. To investigate the influence of external strain we consider the enthalpy $H (x^{\mathrm{Ni}}, V, p) = E^{\mathrm{PES}}(x^{\mathrm{Ni}}, V) + pV$ with $p$ being the externally applied (hydrostatic) pressure. Increasing the pressure $p$ shifts the equilibrium volume of the B19$''$ phase towards smaller values (Fig. \[fig:energy\_landscape\]b).
In addition, at sufficiently high pressures, a new minimum occurs which, at $p=5.22$ GPa, represents the B19$'$ phase. Fig. \[fig:energy\_landscape\]b also neatly explains the occurrence of the hysteresis. To go from one phase to the other, even at the critical pressure ($p=5.22$ GPa) where both phases have identical enthalpy, a barrier exists along the constant-volume paths. Since in an adiabatic transformation only the nearest local minimum is reachable, the trajectory follows the original path even though this minimum is no longer the energetically most favorable one. To demonstrate this further, we plot in Fig. \[fig:energy\_landscape\]c the energy profiles at fixed volumes (vertical profiles corresponding to the PES in Fig. \[fig:energy\_landscape\]a). The figure clearly shows that upon increasing the volume (i.e., following the red triangles), the structure is trapped in a local energy valley, and transforms to B19$''$ only when the energy barrier completely flattens. A similar mechanism also operates in the opposite direction (i.e., following the blue circles). To further confirm this hypothesis, we performed an additional test. We started from a B19$''$-like structure, but from a volume ($\approx27.3\,\mathrm{\mbox{\AA}^3/f.u.}$) only slightly larger than that at which the B19$'$[$\to$]{}B19$''$ transition occurs ($\approx26.9\,\mathrm{\mbox{\AA}^3/f.u.}$). This pathway is marked by purple stars in Fig. \[fig:energy\_landscape\]. This pathway also crosses the B19$''$ minimum and eventually joins the B19$''$[$\to$]{}B19$'$ branch of the original hysteresis, i.e., the one corresponding to volume compression (“blue circle” data points). From Fig. \[fig:energy\_landscape\]b we can further deduce that on changing the transformation coordinate from volume to $x^{\mathrm{Ni}}$ (which is closely related to the monoclinic angle $\gamma$) qualitatively different paths result.
In this scenario only a single minimum is obtained for a fixed value of the transformation coordinate $x^{\mathrm{Ni}}$ (the horizontal cuts of the PES), instead of the two minima for the vertical cuts shown in Fig. \[fig:energy\_landscape\]c. Consequently, in this case, which corresponds to a shear-mode transformation along the angle $\gamma$, no hysteresis occurs. We thus conclude that the structural complexity of the B19$'$ and B19$''$ phases and (related to it) the multi-minimum character of the PES are the origin of the transformation hysteresis under hydrostatic loading. This is in contrast to the structurally much more distinct phases, B2 and B19, as shown by @Kibey2009. In contrast to what may be expected, the volume-increasing (red triangles) and volume-decreasing (blue circles) data points in Fig. \[fig:energy\_landscape\] do not coincide in the region away from the hysteresis loop. The reason is that the energy difference at a constant volume between the two states (expanding (red triangles) and shrinking (blue circles) volume) is of the order of (or smaller than) $1\,\mathrm{meV/f.u.}$. This value is below the numerical accuracy of the present calculations. The apparent discrepancy is thus simply a consequence of the extremely flat valleys of the PES corresponding to the B19$'$ and, in particular, B19$''$ phases. Increasing the calculation accuracy (albeit at significant CPU costs) is expected to result in a closer correspondence of the two pathways.
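The trapping mechanism can be illustrated with a one-dimensional toy model; the energy function and all numbers below are invented for illustration and are not the NiTi PES. Adiabatically following the nearest local minimum, as the QS procedure does, yields two distinct branches on expansion and compression, i.e., a hysteresis loop:

```python
def local_min(f, x0, lr=2e-3, steps=20000, eps=1e-6):
    """Naive gradient descent with a numerical derivative; it stays in the
    basin of attraction of x0, mimicking the adiabatic QS evolution."""
    x = x0
    for _ in range(steps):
        g = (f(x + eps) - f(x - eps)) / (2.0 * eps)
        x -= lr * g
    return x

def toy_energy(x, v):
    """Toy double-well energy; the linear tilt decides which well is the
    global minimum as the 'volume' v changes (illustrative numbers only)."""
    return x**4 - x**2 + 0.6 * (v - 27.0) * x

vols = [26.0 + 0.1 * i for i in range(21)]   # 26.0 ... 28.0

x, up = 0.75, []                             # the only minimum at v = 26.0
for v in vols:                               # expansion branch
    x = local_min(lambda t, v=v: toy_energy(t, v), x)
    up.append(x)

down = []
for v in reversed(vols):                     # compression branch
    x = local_min(lambda t, v=v: toy_energy(t, v), x)
    down.append(x)
down.reverse()
# In the bistable window the two branches sit in different wells
# (up[i] > 0 while down[i] < 0), i.e., a hysteresis loop; the jump to the
# other well occurs only once the barrier has completely flattened.
```

The jump happens at different volumes on the two sweeps for exactly the reason stated in the text: only the nearest local minimum is reachable, so each branch stays in its valley until that valley disappears.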
Structural parameters {#sec:structures} ---------------------

  phase     $V_{\rm eq}$   $B_0$    $B_0^\prime$   $\Delta E$    $a$     $b$     $c$     $\gamma$   $x^\mathrm{Ni}$   $y^\mathrm{Ni}$   $x^\mathrm{Ti}$   $y^\mathrm{Ti}$
            [Å³/f.u.]      [GPa]                   [meV/f.u.]    [Å]     [Å]     [Å]
  B2        27.19          160      4.00           84            3.007   4.253   4.253   90.0°      1.0               0.75              0.5               0.25
            27.24                                  100           3.009   4.255   4.255   90.0°      1.0               0.75              0.5               0.25
                                                   92            3.014   4.262   4.262   90.0°      1.0               0.75              0.5               0.25
  B19$'$    26.96          153      3.67           17            2.732   4.672   4.234   95.3°      0.980             0.823             0.564             0.289
            27.52                                  16            2.929   4.686   4.048   97.8°      0.953             0.825             0.588             0.283
                                                   11            2.933   4.678   4.067   98.3°      0.955             0.826             0.589             0.283
  BCO       27.56          149      3.76           0             2.914   4.927   4.021   107.3°     0.915             0.829             0.643             0.286
            27.74                                  0             2.940   4.936   3.997   107.0°     0.914             0.827             0.642             0.286
                                                   0             2.928   4.923   4.017   106.6°     0.918             0.829             0.640             0.286
  B19$''$   27.43          147      5.03           $<1.0$        2.917   4.780   4.047   100.0°     0.945             0.828             0.602             0.284
                                                   5             2.923   4.801   4.042   102.4°     0.936             0.829             0.615             0.237

![The calculated $E(V)$ curves for the B2, B19$'$, BCO and B19$''$ phases close to the equilibrium.[]{data-label="fig:murn_fit"}](fig5.eps){width="0.9\columnwidth"} The potential energy surface shown in Fig. \[fig:energy\_landscape\]a provides sets of quasi-static energy–volume data.
These data sets can be individually analyzed using the Murnaghan equation of state[@Murnaghan1944]. Following this approach we get the true ground state properties of all phases and can assign a pressure value to each data point. Part of these results are presented in Fig. \[fig:murn\_fit\]. In this graph all data points are plotted, i.e., from both branches of the hysteresis in Fig. \[fig:gamma.and.xNi\]. As a criterion for separating the B19$'$ and B19$''$ phases we used the internal coordinate $x^{\mathrm{Ni}}$: a structure with $x^{\mathrm{Ni}}>0.97$ is B19$'$-like, otherwise it corresponds to the B19$''$ phase (see Fig. \[fig:gamma.and.xNi\]b). Focusing on the most interesting region close to the equilibrium volumes (Fig. \[fig:murn\_fit\]), we could have easily mistaken the BCO and B19$''$ states for a single phase if we had not performed a thorough analysis of internal coordinates and lattice parameters. The bulk moduli and their pressure derivatives, as well as the equilibrium volumes and the structure energy differences from the Murnaghan equations of state are summarized in Table \[tab:ground-state\], together with all the equilibrium structural parameters. As can be seen, the energy of the B19$''$ state is equal to that of the previously predicted BCO state within the numerical accuracy of our calculations. A comparison of the structural parameters of our B19$''$ phase with those obtained by @GudaVishnu2010 reveals some differences, the largest being in the monoclinic angle $\gamma$ ($100.0{\ensuremath{^\circ}}$ predicted here vs. $102.4{\ensuremath{^\circ}}$ calculated by @GudaVishnu2010). These are likely to be consequences of using different deformation modes (hydrostatic vs. shear). Despite these small differences we regard these two structures as “flavors” of the same phase and thus use the same name B19$''$ for both of them.
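Such an equation-of-state fit can be sketched as follows; the energy–volume points below are synthetic stand-ins (in eV and Å³, with 1 eV/Å³ ≈ 160 GPa), not the data of this work:

```python
import numpy as np
from scipy.optimize import curve_fit

def murnaghan(V, E0, V0, B0, B0p):
    """Murnaghan equation of state E(V); B0 carries the units of E/V."""
    return (E0 + B0 * V / B0p * ((V0 / V) ** B0p / (B0p - 1.0) + 1.0)
            - B0 * V0 / (B0p - 1.0))

# Hypothetical quasi-static E(V) points for one branch (illustrative only).
V = np.linspace(25.0, 30.0, 11)
E = murnaghan(V, -12.0, 27.5, 0.93, 3.8)       # 0.93 eV/A^3 ~ 149 GPa

popt, _ = curve_fit(murnaghan, V, E, p0=(-12.0, 27.0, 1.0, 4.0))
E0, V0, B0, B0p = popt
# Each point can then be assigned a pressure p = -dE/dV, and classified by an
# internal coordinate (e.g. x_Ni > 0.97 -> B19'-like, per the criterion above).
```

The fit returns the equilibrium volume $V_0$, bulk modulus $B_0$, and its pressure derivative $B_0'$, i.e., exactly the quantities tabulated above.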
Finally, the differences between our structural parameters for the B19$'$ phase and those reported in the literature[@Huang2003; @GudaVishnu2010] stem from the fact that in the earlier studies the monoclinic angle $\gamma$ was fixed to the experimental value ($\approx98{\ensuremath{^\circ}}$) while we allowed for a full structural relaxation. As mentioned in the previous section, performing a full relaxation reveals that at ambient pressure the B19$'$ phase is unstable with respect to the B19$''$ phase. We note that because some phases are stable only in a certain volume (pressure) range, the $E(V)$ data points could not be computed over the whole volumetric range. For example, the B19$'$ phase data points are only available for volumes smaller than $\approx27\,\mathrm{\mbox{\AA}^3/f.u.}$. The predicted properties of the ground states should nevertheless be reasonably accurate as the number of data points obtained is sufficient to perform a numerically robust fitting to the equation of state. An advantage of the QS approach is that the energy–volume data points are less scattered (i.e., less influenced by the complexity of the NiTi PES) and their numerical analysis is therefore more robust. Consequently, the initial states (minima of the non-QS energy–volume curves) differ from the final states (minima of the quasi-static $E(V)$ curves). An example is shown in Fig. \[fig:gamma.and.xNi\] and illustrates these differences: the non-QS value for the monoclinic angle of the B19$'$ structure is $\gamma\approx100.5{\ensuremath{^\circ}}$ while the QS analysis gives $\gamma\approx94.5{\ensuremath{^\circ}}$. Since we consider the QS calculations (which mimic the experimental pressure increase or decrease) to be more accurate, the discrepancy between the QS and non-QS ground states demonstrates the necessity to compute the energy–volume dependence quasi-statically.
In order to determine the critical pressure needed for the B19$'$[$\leftrightarrow$]{}B19$''$ phase transition, we calculated the enthalpies $H$ of both phases (Fig. \[fig:entalpy\]a). Since the enthalpy–pressure data calculated for different phases are similar, an analytical formula for the enthalpy function[@Holec2010] was used for the fitting. [(a)]{} ![(a) The theoretically predicted enthalpy of the B19$'$ and B19$''$ phases over the whole range of studied pressures. (b) The differences with respect to the phase that minimizes the enthalpy for a given pressure.[]{data-label="fig:entalpy"}](fig6a.eps){width="0.9\columnwidth"} [(b)]{} ![(a) The theoretically predicted enthalpy of the B19$'$ and B19$''$ phases over the whole range of studied pressures. (b) The differences with respect to the phase that minimizes the enthalpy for a given pressure.[]{data-label="fig:entalpy"}](fig6b.eps){width="0.9\columnwidth"} Fig. \[fig:entalpy\]b shows the enthalpy difference between a given phase and the phase with the lower enthalpy at a specific pressure. The corresponding transition pressure from B19$''$ to B19$'$ is $5.22\,\mathrm{GPa}$. We note that this pressure is dramatically reduced when the analytical enthalpy formula is replaced by a linear ($0.47\,\mathrm{GPa}$) or quadratic ($2.01\,\mathrm{GPa}$) fit. This finding clearly demonstrates the necessity of using the analytical expression from Ref.  based on the Murnaghan equation of state[@Murnaghan1944]. It should be further noted that this single-value transition pressure neglects kinetic effects. The value of the critical hydrostatic pressure, $\approx5\,\mathrm{GPa}$, above which B19$'$ becomes stable may be compared with the value of $1\, \mathrm{GPa}$ when applying shear stress[@Wagner2008]. Exploring the complexity of possible mechanisms active in NiTi alloys, our study and that by @Wagner2008 also complement recent work by @Sestak2011 proposing a twinning mechanism for the stabilization.
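The enthalpy construction can also be carried out numerically: $H(p) = \min_V [E(V) + pV]$ for each phase, with the transition pressure given by the crossing of the two curves. The equation-of-state parameters below are invented for illustration (eV and Å³), not the values fitted in this work:

```python
import numpy as np

def murnaghan(V, E0, V0, B0, B0p):
    return (E0 + B0 * V / B0p * ((V0 / V) ** B0p / (B0p - 1.0) + 1.0)
            - B0 * V0 / (B0p - 1.0))

VGRID = np.linspace(20.0, 34.0, 4001)

def enthalpy(p, eos):
    """H(p) = min_V [E(V) + p V], evaluated on a dense volume grid."""
    return float(np.min(murnaghan(VGRID, *eos) + p * VGRID))

# Two competing phases (illustrative parameters only):
phase_hi_V = (0.000, 27.4, 0.92, 5.0)   # lower-energy phase, larger V0
phase_lo_V = (0.010, 27.0, 0.95, 3.7)   # slightly higher energy, smaller V0

def dH(p):
    return enthalpy(p, phase_lo_V) - enthalpy(p, phase_hi_V)

# Bisection for the pressure at which the enthalpies cross.
lo, hi = 0.0, 0.2                       # bracket in eV/A^3 (~0-32 GPa)
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if dH(mid) > 0.0:
        lo = mid                        # smaller-V0 phase still disfavored
    else:
        hi = mid
p_transition = 0.5 * (lo + hi)
```

Because $\mathrm{d}H/\mathrm{d}p = V$, the phase with the smaller equilibrium volume is favored at high pressure, which is why its enthalpy curve eventually crosses below the other.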
Ability of the monoclinic allotropes to show a shape memory effect {#sec:shapeMemory} ------------------------------------------------------------------ In contrast to the orthorhombic BCO phase (whose symmetry is too high to account for the shape memory effect[@Huang2003]), both the B19$'$ and B19$''$ structures possess a lower (monoclinic) symmetry. The lower symmetry guarantees that the atomic austenite–martensite transition pathway within the unit cell is unique. Thus, the structural phase can, in principle, store the shape information since all the atoms remain situated in the Ericksen-Pitteri neighborhood of their austenite counterparts[@Bhattacharya2004]. In order to recover the BCO lattice from the B19$''$ structure, the Ni atom above the base center has to move into the plane defined by two Ni atoms from the unit cell basal plane and one of the neighboring Ti atoms (see Fig. \[fig:B19”\]). The internal atomic positions then fulfill a specific geometric relation that can be quantified by a structural parameter $\delta$. Employing the internal structural parameters $\xi_{\rm Ti}$, $\xi_{\rm Ni}$, $\zeta_{\rm Ti}$, and $\zeta_{\rm Ni}$ as defined in Fig. \[fig:B19”\], the parameter $\delta$ is given as: $$\delta = \frac{\xi_{\rm Ti}}{\zeta_{\rm Ti}} {\Big /} \frac{\xi_{\rm Ni}}{\zeta_{\rm Ni}}\ .$$ When $\delta = 1$, the corresponding Ni and Ti atoms are located within a single plane and the structure is too symmetric to keep the atom-to-atom relationship with the B2 lattice necessary for the shape-memory effect. As the $\delta$ parameter distinguishes whether these Ni and Ti atoms are, or are not, in a planar arrangement, it will be termed the “planarity” parameter: a deviation from $\delta = 1$ is a prerequisite to store the shape information. ![The planarity parameter, $\delta$, describing the internal symmetry of the phases as a function of volume. 
The value of 1 found for the BCO phase indicates that the Ti atoms are located within the same atomic plane as the Ni atoms (see Fig. \[fig:B19”\]) and the internal geometry of the phase is too symmetrical. The B19$''$ values deviating from $\delta=1$ indicate an ability to store the shape information.[]{data-label="fig:planeness"}](fig7.eps){width="0.9\columnwidth"} The planarity parameter $\delta$ for both the BCO and B19$''$ phases is shown in Fig. \[fig:planeness\]. Due to the high-symmetry environment, the Ti atoms in the BCO phase (full circles in Fig. \[fig:planeness\]) remain very stable in their locations in spite of high hydrostatic pressures (small volumes). $\delta$ is a constant function of volume for the BCO phase, indicating that the high symmetry is preserved during the hydrostatic loading. In contrast, the volume dependence of the planarity parameter of B19$''$ (empty circles in Fig. \[fig:planeness\]) deviates from $\delta=1$ over the whole range of volumes studied. This is in agreement with the analysis of the B19$''$ structure by @GudaVishnu2010. Within the theory of symmetry-dictated extrema [@Craievich1994; @Sob1997; @Einarsdotter1997; @Friak2001; @Cerny2005; @Friak2008], the BCO phase represents a structure with a symmetry-dictated energy minimum. Conclusions =========== We report on first-principles calculations of pressure-induced transitions in stoichiometric NiTi allotropes. Complementing previous studies that focused on shear strains and twinning mechanisms, we have systematically explored the complex potential energy surface of NiTi under well-defined generic volumetric changes. We kept the volume constant at each simulation step of the pressure-induced transitions and relaxed all other structural degrees of freedom with respect to the total energy. By repeating these steps in a quasi-static manner, we closely mimicked experimental conditions.
In contrast to previous theoretical studies of shear deformations[@Huang2003; @Wagner2008; @GudaVishnu2010], the BCO phase does not transform into the experimentally observed B19$'$ phase when applying hydrostatic pressures. We ascribe this stability of the BCO allotrope to the high symmetry of this structure. In contrast, the B19$'$ structure distorts under pressure into another, newly identified, monoclinic phase, B19$''$. This phase is located structurally in between the B19$'$ and BCO phases. We find that the B19$''$ phase has an energy comparable to that of the BCO phase. The complexity of the Born-Oppenheimer potential energy surface results in a pressure-induced B19$'$[$\leftrightarrow$]{}B19$''$ transition that exhibits a previously unreported hysteresis. The latter could be related to the inherent multi-dimensional nature of the potential energy surface in NiTi. The B19$''$ structure has a lower symmetry than the BCO phase. As a consequence, the B19$''$ structure can be the basis of the shape memory effect. Acknowledgments =============== We thank Dr. Chris Race from the Computational Materials Design department at the Max-Planck Institut für Eisenforschung GmbH in Düsseldorf for carefully reading the manuscript and providing us with helpful comments. The quasi-static approach {#app-QS} ========================= In the following we explain the essential difference between quasi-static (QS) and non-QS calculations. The more common non-QS approach can be regarded as a series of energy–volume data points obtained by (i) starting with identical internal atomic coordinates and overall cell shape and (ii) changing only the overall volume of the unit cell.
A so-called relaxed state, i.e., a set of external, $\{\xi_i^n\}$, and internal parameters, $\{\zeta_i^n\}$, for a given volume, $V_n$, is then found by minimizing the total energy, $E$, as a function of both internal and external parameters (except for the volume): $$\mbox{non-QS:}\quad\min_{\xi_i=\xi_i^0, \zeta_i=\zeta_i^0} E(V=V_n,\xi_i, \zeta_i) \rightarrow \{\xi_i^n,\zeta_i^n\}$$ where $\xi_i=\xi_i^0, \zeta_i=\zeta_i^0$ reflects the fact that the starting configuration for all $E(V)$ data points (labeled with $n$) is the same. The relaxed states found in the non-QS approach can exhibit phases different from the starting one if pressure-induced structural transitions occur in the studied system. The non-QS computational approach may fail to properly simulate experimental conditions as the non-QS states obtained by discontinuously changing the volume may differ from those found in experiments (in which volume and/or strain are always varied continuously). The advantage of the non-QS simulation is that all the calculations can be performed independently in a parallel manner, i.e., calculations for different volumes can be distributed over all available computational units (processors). In contrast to the non-quasi-static simulations, the quasi-static (QS) simulations cannot be parallelized as they proceed by a subsequent series of calculations consisting of the following steps. First, all parameters are energy-relaxed for a certain starting state that frequently corresponds to the equilibrium conditions, i.e., zero hydrostatic pressure. Then, a small change of the volume is applied and new sets of internal and external parameters are obtained by the total energy minimization with respect to the structural parameters (volume being fixed). With these new relaxed parameters (i) a small volumetric change and (ii) subsequent structural optimization are repeated.
The two steps are then repeated so as to cover the whole range of volumes: $$\mbox{QS:}\quad\min_{\xi_i=\xi_i^{n-1}, \zeta_i=\zeta_i^{n-1}} E(V=V_n,\xi_i, \zeta_i) \rightarrow \{\xi_i^n,\zeta_i^n\}\ .$$ The QS simulation mimics a compression of the structure in case of negative volumetric changes and a decompression in case of positive ones. The denser the mesh of calculated volumes, the better the correspondence that should be achieved with the experimental compression/decompression processes. The QS procedure ensures that the system evolves smoothly from one local minimum into another, and the pressure-induced transitions, including the phase transition path in a complex configurational space, can be studied. In contrast, a non-QS search algorithm may result in (non-physically) discontinuous jumps in the atomic trajectories.
--- abstract: 'The magnetic properties of various iron pnictides are investigated using first-principles pseudopotential calculations. We consider three different families, LaFePnO, BaFe$_2$Pn$_2$, and LiFePn with Pn=As and Sb, and find that the Fe local spin moment and the stability of the stripe-type antiferromagnetic phase increase from As to Sb for all three families, with a partial gap formed at the Fermi energy. Meanwhile, the Fermi-surface nesting is found to be enhanced from Pn=As to Sb for LaFePnO, but not for BaFe$_2$Pn$_2$ and LiFePn. These results indicate that it is not the Fermi surface nesting but the local moment interaction that determines the stability of the magnetic phase in these materials, and that the partial gap is a feature induced by a specific magnetic order.' author: - 'Chang-Youn Moon, Se Young Park, and Hyoung Joon Choi' title: 'Dominant role of local-moment interactions in the magnetism in iron pnictides: comparative study of arsenides and antimonides from first-principles ' --- The iron pnictide superconductors and their fascinating physical properties have become central issues in many fields since their recent discoveries [@Kamihara2006; @Kamihara2008; @Takahashi]. The prototype materials are REFeAsO with various RE (rare-earth) elements, and the superconducting transition temperature ($T_c$) is as high as 55 K in doped SmFeAsO [@Ren2053]. Other compounds with various types of insulating layers are also superconducting when doped, such as K-doped BaFe$_2$As$_2$ [@0805.4021; @0805.4630] and SrFe$_2$As$_2$ [@0806.1043; @0806.1209] with $T_c$ of 38 K, and LiFeAs with $T_c$ of 16 K [@Pitcher] or 18 K [@Wang; @Tapp]. Without doping, these materials exhibit a peculiar magnetic structure of a stripe-type antiferromagnetic (AFM) spin configuration coupled to an orthorhombic atomic structure, and either hole or electron doping destroys the AFM order and the superconductivity emerges subsequently.
Hence the magnetism is considered to be closely related to the superconductivity in these materials [@Giovannetti; @Singh; @Haule; @Xu; @Mazin], and spin-fluctuation-mediated superconductivity is assumed in many theoretical works [@Mazin2; @Kuroki; @Korshunov]. Understanding the nature of magnetism in these materials is thus of crucial importance, but it is still under debate. On one hand, many theoretical [@Mazin; @Cvetkovic; @Yin; @Dong] and experimental [@Dong; @Lorenz; @Cruz; @Klauss; @Hu] works emphasize the itinerant nature of the magnetism of the spin density wave (SDW) type, since the electron and hole Fermi surfaces (FS) are separated by a commensurate nesting vector in iron pnictides, which is further supported by the reduced magnetic moment of about 0.3 $\mu_B$ [@Cruz; @Klauss] and the energy gap near the Fermi energy ($E_F$). On the other hand, there are also interpretations based on the Heisenberg-type interaction between localized spin moments [@Si; @Yang2; @Yildirim]. In this localized-moment picture, the observed stripe-type AFM ordering results from the frustrated spin configuration with the next-nearest-neighbor exchange interaction ($J_2$) larger than half of the nearest-neighbor (NN) interaction ($J_1$). The itinerant and the local-moment pictures are based on different assumptions about electron itinerancy, but a more comprehensive mechanism might be discovered by combining the two pictures [@Wu; @Kou]. Recently, motivated by the great success of the As substitution for P in LaFePO in raising $T_c$, hypothetical iron antimonide compounds have been studied as candidates for a higher-$T_c$ superconductor by first-principles calculations [@Moon; @Zhang]. In these works, Sb substitution for As is found to modify the FS nesting and the magnetic stability significantly. 
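The $J_2 > J_1/2$ frustration criterion of the localized-moment picture can be checked with a few lines of classical spin algebra. The following sketch (an illustrative toy model, not part of the calculations reported here) compares the classical energies per site of the checkerboard and stripe Ising patterns on a periodic square lattice:

```python
import numpy as np

def energy_per_site(spins, J1, J2):
    """Classical energy per site of an Ising pattern on a periodic square
    lattice: E = J1 * sum_<NN> s_i s_j + J2 * sum_<<NNN>> s_i s_j (J > 0 = AFM)."""
    e = 0.0
    for shift in [(1, 0), (0, 1)]:      # nearest-neighbour bonds, counted once
        e += J1 * np.sum(spins * np.roll(spins, shift, axis=(0, 1)))
    for shift in [(1, 1), (1, -1)]:     # next-nearest (diagonal) bonds
        e += J2 * np.sum(spins * np.roll(spins, shift, axis=(0, 1)))
    return e / spins.size

L = 8
x, y = np.indices((L, L))
neel = (-1.0) ** (x + y)   # checkerboard AFM: all NN antiparallel
stripe = (-1.0) ** x       # stripe AFM: antiparallel along x, parallel along y

# Per site: E_neel = -2*J1 + 2*J2 and E_stripe = -2*J2, so the stripe
# pattern wins exactly when J2 > J1 / 2.
print(energy_per_site(neel, 1.0, 0.6), energy_per_site(stripe, 1.0, 0.6))
```

For $J_2 = 0.6\,J_1$ the stripe pattern is lower in energy ($-1.2$ vs $-0.8$ per site), while for $J_2 = 0.4\,J_1$ the checkerboard wins, reproducing the $J_2 > J_1/2$ threshold quoted in the text.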
Thus, with more variation of compounds including antimonides, a more comprehensive understanding of the nature of magnetism in iron pnictides would be possible through a systematic comparative study dealing with many different types of compounds altogether. In this study, we present our density-functional pseudopotential calculations of the electronic and magnetic properties of various iron arsenides and antimonides: LaFePnO, BaFe$_2$Pn$_2$, and LiFePn (Pn=As and Sb). We find that there is no systematic trend of the FS nesting feature between arsenides and antimonides, whereas the stability and the local Fe spin moment of the magnetic phase increase from arsenides to antimonides for all three types of compounds. This finding is consistent with the Heisenberg-type interaction picture: the local Fe moment is larger for antimonides because of the enhanced Hund’s rule coupling due to their larger lattice constants. We also find that the FS reconstruction and the subsequent formation of a partial gap in the density of states (DOS) at $E_F$ can be regarded as a secondary effect caused by the magnetic ordering of local moments. Our first-principles calculations are based on density-functional theory (DFT) within the generalized gradient approximation (GGA) for the exchange-correlation energy functional [@PBE] and [*ab-initio*]{} norm-conserving pseudopotentials as implemented in the SIESTA code [@SIESTA]. Semicore pseudopotentials are used for Fe, La, and Ba, and electronic wave functions are expanded with localized pseudoatomic orbitals (double-zeta polarization basis set), with a real-space mesh cutoff energy of 500 Ry. Brillouin zone integration is performed by the Monkhorst-Pack scheme [@Monkhorst] with a 12 $\times$ 12 $\times$ 6 k-point grid. First, we obtain the optimized cell parameters and atomic coordinates of the compounds by total energy minimization, as listed in Table I. 
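The k-point sampling just described can be made concrete. A minimal sketch of the Monkhorst-Pack construction, using the standard formula $u_r = (2r - q - 1)/2q$ along each reciprocal axis (the function name is ours, and this is an illustration rather than the SIESTA implementation):

```python
import numpy as np

def monkhorst_pack(q1, q2, q3):
    """Fractional coordinates of a Monkhorst-Pack k-point grid:
    along each axis, u_r = (2r - q - 1) / (2q) for r = 1..q."""
    axes = [np.array([(2 * r - q - 1) / (2 * q) for r in range(1, q + 1)])
            for q in (q1, q2, q3)]
    # Cartesian product of the three one-dimensional meshes
    return np.array([[u, v, w] for u in axes[0] for v in axes[1] for w in axes[2]])

kpts = monkhorst_pack(12, 12, 6)   # the 12 x 12 x 6 grid used in the text
print(len(kpts))  # 864 points, symmetric about the zone center
```

Note that for even subdivisions the grid avoids the $\Gamma$ point and is symmetric about it, so the mesh averages to zero.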
For the non-magnetic (NM) phase, tetragonal structures are obtained, while the stripe-type AFM phase prefers the orthorhombic structure of the approximate $\sqrt{2} \times \sqrt{2}$ supercell, in agreement with experiments. The lowering of the total energy per Fe atom in the stripe-type AFM phase in the optimized orthorhombic structure relative to the NM phase in the optimized tetragonal unit cell is 354 and 706 meV for LaFeAsO and LaFeSbO, 297 and 745 meV for BaFe$_2$As$_2$ and BaFe$_2$Sb$_2$, and 153 and 523 meV for LiFeAs and LiFeSb, respectively. Along with the local magnetic moments on Fe atoms displayed in Table I, this result implies the existence of a universal trend that the magnetism is stronger for antimonides than for arsenides irrespective of the detailed material properties. Figure 1 shows the calculated FSs on the $k_z=0$ plane. To facilitate the investigation of the nesting feature, the electron and hole surfaces are drawn together in the reduced Brillouin zone for the $\sqrt{2} \times \sqrt{2}$ supercell. LaFeSbO shows enhanced nesting between the electron and hole surfaces, which coincide with each other almost isotropically with nearly circular shapes, compared with LaFeAsO [@Moon]. For BaFe$_2$Pn$_2$, the arsenide exhibits a moderate nesting feature, while nesting is poor for the antimonide because the hole surfaces present in the arsenide are missing, so that the electron surfaces have no nearby hole surfaces to couple with. LiFeSb also shows less efficient nesting than LiFeAs, with some hole surfaces missing around the $\Gamma$ point. The nesting feature can be estimated more quantitatively by evaluating the Pauli susceptibility $\chi_0({\bf q})$ as a function of the momentum ${\bf q}$ in the static limit with matrix elements ignored. The result is displayed in Fig. 2. 
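The static susceptibility used here is presumably the standard Lindhard (bare bubble) function; with matrix elements ignored it reads

$$\chi_0(\mathbf{q}) \;=\; \frac{1}{N}\sum_{\mathbf{k},\,n,\,m} \frac{f(\varepsilon_{n\mathbf{k}}) - f(\varepsilon_{m\mathbf{k}+\mathbf{q}})}{\varepsilon_{m\mathbf{k}+\mathbf{q}} - \varepsilon_{n\mathbf{k}}},$$

where $f$ is the Fermi occupation and $n,m$ are band indices. When occupied and empty states are connected by a common wave vector (nesting), many small-denominator terms accumulate, so a pronounced maximum of $\chi_0$ at $\mathbf{q}=(\pi,\pi)$ signals strong nesting at that vector.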
For LaFePnO, $\chi_0$ is larger for LaFeSbO over the entire range of ${\bf q}$, especially at the nesting vector ${\bf q}=(\pi,\pi)$ where the pronounced peak is located. This peak indicates the enhanced FS nesting for LaFeSbO, consistent with the FS topology in Fig. 1. For BaFe$_2$Pn$_2$, the situation is drastically different. Although the susceptibility for BaFe$_2$As$_2$ has a similar ${\bf q}$ dependence to those for LaFePnO, the susceptibility for BaFe$_2$Sb$_2$ is larger only over part of the ${\bf q}$ range, with very weak ${\bf q}$ dependence, and moreover there is no peak at ${\bf q}=(\pi,\pi)$. This feature clearly reflects the poor FS nesting in BaFe$_2$Sb$_2$ due to the lack of hole surfaces, as shown in Fig. 1. Finally, LiFeSb also has smaller $\chi_0({\bf q})$ than LiFeAs near $(\pi,\pi)$, hence LiFeSb has less effective FS nesting at $(\pi,\pi)$ than LiFeAs. Many previous studies suggest itinerant magnetism in iron pnictides, in which the stripe-type AFM is of the SDW type driven by FS nesting, but our results contradict this picture. As we have just discussed, the FS nesting for ${\bf q}=(\pi,\pi)$, at which the stripe-type AFM occurs, is more pronounced for LaFeSbO than LaFeAsO, while BaFe$_2$As$_2$ and LiFeAs have a more effective nesting feature than BaFe$_2$Sb$_2$ and LiFeSb, respectively. Thus, there is no universal trend in the FS nesting feature between arsenides and antimonides. This contrasts with the result that magnetism is stronger for antimonides than for the respective arsenides for all three types of iron pnictides, with larger energy differences between the AFM and NM states and greater Fe local magnetic moments for the antimonides. This implies that the contribution of itinerant electrons to the magnetic energy and moment is relatively small. 
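How nesting shows up as a peak in $\chi_0$ can be illustrated on a toy band: a half-filled square-lattice tight-binding model with perfect nesting at $(\pi,\pi)$. This is only a schematic stand-in for the multiband DFT susceptibilities discussed above:

```python
import numpy as np

t, T, N = 1.0, 0.05, 64                     # hopping, temperature, grid size
k = 2 * np.pi * (np.arange(N) + 0.5) / N
kx, ky = np.meshgrid(k, k, indexing="ij")
eps = -2 * t * (np.cos(kx) + np.cos(ky))    # half-filled band, mu = 0

def fermi(e):
    return 0.5 * (1.0 - np.tanh(e / (2 * T)))  # numerically stable Fermi function

def chi0(qx, qy):
    """Static Lindhard susceptibility per site, matrix elements ignored."""
    eps_q = -2 * t * (np.cos(kx + qx) + np.cos(ky + qy))
    de = eps_q - eps
    # degenerate limit: (f(e) - f(e')) / (e' - e) -> -f'(e) as e' -> e
    deriv = fermi(eps) * (1 - fermi(eps)) / T
    with np.errstate(divide="ignore", invalid="ignore"):
        terms = np.where(np.abs(de) < 1e-9, deriv, (fermi(eps) - fermi(eps_q)) / de)
    return terms.mean()

chi_nest = chi0(np.pi, np.pi)       # at the nesting vector (pi, pi)
chi_generic = chi0(np.pi / 2, 0.0)  # a generic momentum for comparison
print(chi_nest > chi_generic)       # nesting-enhanced peak at (pi, pi)
```

Because $\varepsilon_{\mathbf{k}+(\pi,\pi)} = -\varepsilon_{\mathbf{k}}$ for this band, the whole Fermi surface contributes small denominators at $(\pi,\pi)$, and $\chi_0$ there clearly exceeds its value at a generic momentum.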
In order to obtain a deeper insight into the nature of magnetism in these compounds, we consider another type of AFM ordering to examine how the relative stability and magnetic moments are affected by the type of AFM ordering. The additional ordering considered is a ‘checkerboard’ type AFM ordering in which the four NN Fe atoms have the opposite spin direction to the Fe atom they surround. This AFM ordering is denoted by AFM1 in this paper, and the stripe-type AFM ordering by AFM2. In Table II, the relative energy of each AFM type and the magnetic moment on an Fe atom are listed for all six compounds. For each compound, atomic structures optimized in the NM phase are used for all magnetic phases, to isolate the purely electronic contribution to the total energy differences among magnetic phases from structural relaxation effects. As shown in Table II, AFM1 is more stable than the NM phase for all of the compounds, and the stability and the Fe local magnetic moment are larger for the antimonides than their respective arsenides. Since the AFM1 ordering is certainly not related to FS nesting, there must be a mechanism other than simple itinerant magnetism to explain the stability of AFM1 and its enhancement in antimonides. Furthermore, we find that the energetic stability of AFM2 relative to AFM1 and the magnetic moment in the AFM2 phase are enhanced in all antimonides compared with the respective arsenides, as shown in Table II. This again contradicts the FS nesting features associated with itinerant magnetism. Therefore, the Heisenberg-type magnetic interaction naturally arises as a more appropriate description of the magnetism in these materials. As the lattice parameters are larger for antimonides than their corresponding arsenides, the Fe 3$d$ orbitals are more localized, as is evident from the reduced band width around $E_F$ [@Moon]. Thus the Hund’s rule coupling becomes stronger and the local magnetic moment is larger for antimonides, as shown in Table II. 
The generally larger Fe magnetic moments can explain the enhanced stability of AFM1 with respect to the NM phase, and of AFM2 with respect to AFM1, for antimonides compared with arsenides within the Heisenberg interaction with $J_2 > J_1/2$ [@Si; @Yang2; @Yildirim]. Meanwhile, there is a clear difference in the DOS between AFM2 and the other phases calculated with the same structural parameters optimized for the NM phase for each compound, as displayed in Fig. 3. The NM phase has a finite DOS at $E_F$, and AFM1 ordering does not reduce the DOS at $E_F$, while it is greatly reduced for the AFM2 ordering. This feature indicates that the AFM2 phase involves an ordering-induced FS reconstruction through the coupling between the electron and hole surfaces, in contrast to the AFM1 phase where only the local magnetic interaction is involved. Our result qualitatively agrees with the recently suggested model [@Kou] in which the itinerant electrons couple to the local magnetic moments which are AFM ordered. Even in the case of BaFe$_2$Sb$_2$, where the FS nesting is very ineffective as in Figs. 1 and 2, the AFM2 ordering produces a strong perturbing potential that hybridizes the electron and hole bands, resulting in the partial gap in the DOS at $E_F$, as shown in Fig. 3 (d). The other compounds exhibit a similar feature in the DOS at $E_F$ among the different magnetic phases, indicating that the presence of the partial gap is not sensitive to the detailed FS nesting characteristics, since it is induced by coupling to the more robust underlying magnetism of the local-moment interaction. In summary, we investigate the magnetic properties of known and hypothetical iron pnictides by total-energy calculations. We find that our calculated FS nesting feature in the NM phase is not consistent with the trend of magnetic stability, namely that the AFM phases are more stable in antimonides than in arsenides. 
A Heisenberg-type local-moment interaction is more appropriate for explaining our results when we consider the larger Fe spin moment found in antimonides. Thus our results indicate that the experimentally observed stripe-type AFM in iron pnictides is mainly driven by the local-moment interaction, while the SDW of the itinerant electrons and the partial gap at $E_F$ emerge as an order induced by coupling to the local moments. This work was supported by the KRF (KRF-2007-314-C00075) and by the KOSEF Grant No. R01-2007-000-20922-0. Computational resources have been provided by KISTI Supercomputing Center (KSC-2008-S02-0004).

Optimized structural parameters, for the NM phase (tetragonal cells) and for the stripe-type AFM phase (orthorhombic $\sqrt{2} \times \sqrt{2}$ supercells):

NM (tetragonal):

  Compound   $a$ (Å)   $b$ (Å)   $c$ (Å)   $z_1$   $z_2$   $N(E_f)$
  ---------- --------- --------- --------- ------- ------- ----------
  LFAO       3.999     3.999     8.706     0.145   0.640   1.7
  LFSO       4.106     4.106     9.311     0.130   0.659   2.9
  BFA        3.935     3.935     6.314     0       0.696   1.9
  BFS        4.324     4.324     6.315     0       0.708   1.8
  LFA        3.767     3.767     5.967     0.173   0.734   2.1
  LFS        3.995     3.995     6.266     0.211   0.756   2.6

Stripe-type AFM (orthorhombic):

  Compound   $a$ (Å)   $b$ (Å)   $c$ (Å)   $z_1$   $z_2$   $m(\mu_B)$
  ---------- --------- --------- --------- ------- ------- ------------
  LFAO       5.780     5.693     8.875     0.139   0.654   2.83
  LFSO       5.955     5.844     9.542     0.124   0.673   3.13
  BFA        5.756     5.590     6.520     0       0.712   2.78
  BFS        6.231     5.937     7.246     0       0.722   3.22
  LFA        5.482     5.285     6.190     0.171   0.745   2.54
  LFS        5.830     5.593     6.528     0.199   0.768   2.95

\[table I\]

  Compound         $E_1$   $m$(AFM1)   $E_2$   $m$(AFM2)
  ---------------- ------- ----------- ------- -----------
  LaFeAsO          -123    2.23        -109    2.35
  LaFeSbO          -387    2.88        -136    2.83
  BaFe$_2$As$_2$   -108    2.09        -64     2.20
  BaFe$_2$Sb$_2$   -426    2.80        -75     2.78
  LiFeAs           -45     1.83        -99     1.96
  LiFeSb           -269    2.54        -118    2.63

  : Stability of magnetic phases and Fe magnetic moments $m$ in $\mu_B$ for iron pnictides. For each compound, calculations are done in the optimized structure for the NM phase. $E_1$ is the energy of AFM1 relative to the NM phase and $E_2$ is the energy of AFM2 relative to the AFM1 phase, in meV per Fe atom. \[table II\]
--- abstract: 'Let $G$ be a finite group and $R$ be a commutative ring. The Mackey algebra $\mu_{R}(G)$ shares a lot of properties with the group algebra $RG$; however, there are some differences. For example, the group algebra is a symmetric algebra and this is not always the case for the Mackey algebra. In this paper we present a systematic approach to the question of the symmetry of the Mackey algebra, by producing symmetric associative bilinear forms for the Mackey algebra. Using the fact that the category of Mackey functors is a closed symmetric monoidal category, we prove that the Mackey algebra $\mu_{R}(G)$ is a symmetric algebra if and only if the family of Burnside algebras $(RB(H))_{H\leqslant G}$ is a family of symmetric algebras with a compatibility condition. As a corollary, we recover the well-known fact that over a field of characteristic zero, the Mackey algebra is always symmetric. Over the ring of integers, the Mackey algebra of $G$ is symmetric if and only if the order of $G$ is square-free. Finally, if $(K,\mathcal{O},k)$ is a $p$-modular system for $G$, we show that the Mackey algebras $\mu_{\mathcal{O}}(G)$ and $\mu_{k}(G)$ are symmetric if and only if the Sylow $p$-subgroups of $G$ are of order $1$ or $p$.' author: - Baptiste Rognerud title: 'Trace maps for Mackey algebras.' --- \[section\] \[section\] \[theo\][Proposition]{} \[theo\][Lemma]{} \[theo\][Corollary]{} \[theo\][Question]{} \[theo\][Notations]{} \[theo\][Definition]{} \[theox\][Definition]{} \[theo\][Example]{} \[theo\][Remark]{} \[theo\][Remarks]{} [*[Key words: Finite group. Mackey functor. Symmetric Algebra. Symmetric monoidal category. Burnside Ring.]{}*]{} [*[A.M.S. subject classification: 19A22, 20C05, 18D10, 16W99.]{}*]{} Trace maps for Mackey algebras. =============================== Introduction. ------------- Let $R$ be a unital commutative ring and $G$ be a finite group. The notion of Mackey functor was introduced by Green in 1971. 
For him, a Mackey functor is an axiomatisation of the behaviour of the representations of a finite group. There are now several possible definitions of Mackey functors; in this paper we use the point of view of Dress, who defined Mackey functors as particular bivariant functors, and we use the Mackey algebra introduced by Thévenaz and Webb. In [@tw] they proved that a Mackey functor is nothing but a module over the so-called Mackey algebra. Numerous properties of this algebra are known, and it shares a lot of properties with the group algebra. For example, the Mackey algebra is a free $R$-module, and its $R$-rank does not depend on the ring $R$. If we work with a $p$-modular system which is “large enough”, there is a decomposition theory; in particular, the Cartan matrix of this algebra is symmetric. However, there are some differences: over a field of characteristic $p>0$, where $p\mid |G|$, the determinant of the Cartan matrix is not a power of the prime number $p$ in general, and as shown in [@tw] the Mackey algebra is seldom a self-injective algebra. One may wonder about a stronger property for the Mackey algebra: when is the Mackey algebra a symmetric algebra? The answer to this question depends on the ring $R$. When $R$ is a field of characteristic $0$ or of characteristic coprime to $|G|$, the Mackey algebra is semi-simple (see [@tw_simple]), so it is clearly a symmetric algebra. Over a “large enough” field of characteristic $p>0$, where $p\mid |G|$, Jacques Thévenaz and Peter Webb proved that the so-called $p$-local Mackey algebra (see [@bouc_resolution]) is self-injective if and only if the Sylow $p$-subgroups of $G$ are of order $p$. However, in the same article, they proved that the $p$-local Mackey algebra is a product of matrix algebras and Brauer tree algebras. 
Since a Brauer tree algebra is derived equivalent to a symmetric Nakayama algebra, by [@rickard_derived] or, for a more general result, [@zimmermann_tilted_orders], all Brauer tree algebras are symmetric algebras. So the $p$-local Mackey algebra over a field of characteristic $p$ is symmetric if and only if the Sylow $p$-subgroups are of order $1$ or $p$. Now the Mackey algebra of the group $G$ is Morita equivalent to a direct product of $p$-local Mackey algebras for some sub-quotients of the group $G$ (Theorem 10.1 [@tw]), so if $p^2 \nmid |G|$, the Mackey algebra of $G$ is symmetric. However, if $(K,\mathcal{O},k)$ is a $p$-modular system for the group $G$, it is not so clear that the previous argument can be used for the valuation ring $\mathcal{O}$. In particular, the Mackey algebras over valuation rings are rather complicated objects (see Section $6.3$ of [@these]). An $R$-algebra is a symmetric algebra if it is a projective $R$-module and if there exists a non-degenerate, symmetric, associative bilinear form on this algebra. One may think that the previous argument for the symmetry of the Mackey algebra is somewhat elaborate for something as elementary as the existence of a bilinear form on this algebra. However, for the Mackey algebra it is not obvious to specify such a bilinear form even in the semi-simple case. In this paper we propose a systematic approach to this question: by using the so-called Burnside Trace, introduced by Serge Bouc ([@bouc_burnside_dim]), we reduce the question of the existence of such a bilinear form on the Mackey algebra to the question of the existence of a family of symmetric, associative, non-degenerate bilinear forms on Burnside algebras with an extra property. Here we denote by $RB(H)$ the usual Burnside algebra of the group $H$. Let $G$ be a finite group and $R$ be a commutative ring. Let $\phi=(\phi_{H})_{H\leqslant G}$ be a family of linear maps such that $\phi_{H}$ is a linear form on $RB(H)$. 
Let $b_{\phi_{H}}$ be the bilinear form on $RB(H)$ defined by $b_{\phi_{H}}(X,Y):= \phi_{H}(XY)$ for $X,Y\in RB(H)$. 1. The family $\phi$ is stable under induction if for every subgroup $H$ of $G$ and every finite $H$-set $X$ we have $\phi_{G}(Ind_{H}^{G}(X)) = \phi_{H}(X)$. 2. The family $\big(RB(H)\big)_{H\leqslant G}$ is a stable by induction family of symmetric algebras if there exists a stable by induction family of linear forms $\phi=(\phi_{H})_{H\leqslant G}$ such that the bilinear form $b_{\phi_{H}}$ on $RB(H)$ is non-degenerate for all $H\leqslant G$. The main result of the paper is the following theorem: Let $G$ be a finite group and $R$ be a commutative ring. Then the Mackey algebra $\mu_{R}(G)$ is a symmetric algebra if and only if $\big(RB(H)\big)_{H\leqslant G}$ is a stable by induction family of symmetric algebras. As a corollary, we produce various symmetric associative bilinear forms on the Mackey algebra which generalize the usual bilinear form for the group algebra. Using these forms we give direct and elementary proofs of the symmetry of the Mackey algebra in the following cases: - Over the ring of integers $\mathbb{Z}$, the Mackey algebra of a finite group $G$ is symmetric if and only if the order of $G$ is square-free. - Over a field $k$ of characteristic $0$, the Mackey algebra of $G$ is symmetric. - Over a field $k$ of characteristic $p>0$, the Mackey algebra of $G$ is symmetric if and only if $p^2\nmid |G|$. - Let $p$ be a prime number such that $p\mid |G|$. Let $R$ be a ring in which all prime divisors of $|G|$, except $p$, are invertible. Then the Mackey algebra $\mu_{R}(G)$ is symmetric if and only if $p^2 \nmid |G|$. In particular, if $(K,\mathcal{O},k)$ is a $p$-modular system for $G$, then the Mackey algebras $\mu_{k}(G)$ and $\mu_{\mathcal{O}}(G)$ are symmetric if and only if $p^2 \nmid |G|$. We use the following notations: - Let $G$ be a finite group. 
Then $[s(G)]$ denotes a set of representatives of the conjugacy classes of subgroups of $G$. - Let $X$ be a finite $G$-set. We still denote by $X$ the isomorphism class of $X$ in the Burnside ring $B(G)$. - All $G$-sets are supposed to be finite. - Let $p$ be a prime number. Then $O^{p}(G)$ is the smallest normal subgroup of $G$ such that $G/O^{p}(G)$ is a $p$-group. A finite group $G$ is $p$-perfect if $O^{p}(G)=G$. - Let $H$ and $K$ be two subgroups of $G$. We use the notation $H=_{G} K$ if $H$ and $K$ are conjugate in $G$. Symmetric algebras ------------------ Let $R$ be a commutative ring with unit. Let $A$ be an $R$-algebra. Then $A$ is a symmetric algebra if: 1. $A$ is a finitely generated projective $R$-module. 2. There exists a non-degenerate, associative, symmetric bilinear form $b$ on $A$. That is, a bilinear form $b$ such that: - for $x$, $y$, $z\in A$ we have $b(xy,z)=b(x,yz)$. - For $x$ and $y$ in $A$, we have $b(x,y)=b(y,x)$. - The map from $A$ to $Hom_{R}(A,R)$ defined by $x\mapsto b(x,-)$ is an isomorphism of $R$-modules. Let $A$ be an $R$-algebra which is a finitely generated projective $R$-module. Then $A$ is a symmetric algebra if and only if $A$ is isomorphic to $Hom_{R}(A,R)$ as an $A$-$A$-bimodule. We have the following elementary result: Let $A$ be an $R$-algebra which is free of finite rank over $R$. Let $b$ be a bilinear form on $A$. Let $e:=(e_{1},\cdots,e_{n})$ be an $R$-basis of $A$. Then $b$ is non-degenerate if and only if the matrix of $b$ in the basis $e$ is invertible. Lemma $2.2$ [@symmetric_bilinear_forms]. Mackey functors. ---------------- There are several possible definitions for the notion of Mackey functor for $G$ over $R$. In this paper we use two of them. The first definition is due to Dress in [@dress]. 
\[Dress\] A bivariant functor $M~=~(M^{*},M_{*})$ from $G$-$\rm{set}$ to $R$-$\rm{Mod}$ is a pair of functors from $G$-$\rm{set}$ to $R$-$\rm{Mod}$ such that $M^{*}$ is a contravariant functor and $M_{*}$ is a covariant functor. If $X$ is a $G$-set, then the images of $X$ under the covariant and the contravariant parts coincide; we denote this common image by $M(X)$. A Mackey functor for $G$ over $R$ is a bivariant functor from $G$-set to $R$-Mod such that: - Let $X$ and $Y$ be two finite $G$-sets, and let $i_{X}$ and $i_{Y}$ be the canonical injections of $X$ (resp. $Y$) into $X\sqcup Y$; then $(M^{*}(i_X),M^{*}(i_Y))$ and $(M_{*}(i_X),M_{*}(i_Y))$ are inverse isomorphisms: $$M(X)\oplus M(Y)\cong M(X\sqcup Y).$$ - If $$\xymatrix{ X\ar[r]^{a}\ar[d]^{b}& Y\ar[d]^{c} \\ Z\ar[r]^{d} & T }$$ is a pullback diagram of $G$-sets, then the diagram $$\xymatrix{ M(X)\ar[d]_{M_{*}(b)} & M(Y)\ar[l]_{M^{*}(a)}\ar[d]^{M_{*}(c)}\\ M(Z) & M(T)\ar[l]_{M^{*}(d)} }$$ is commutative. A morphism between two Mackey functors is a natural transformation of bivariant functors. Let us denote by $Mack_{R}(G)$ the category of Mackey functors for $G$ over $R$. Let us first recall an important example of a Mackey functor: [@bouc_green]\[burnside\] If $X$ is a finite $G$-set, then the category of $G$-sets over $X$ is the category whose objects are pairs $(Y,\phi)$ where $Y$ is a finite $G$-set and $\phi$ is a morphism from $Y$ to $X$. A morphism $f$ from $(Y,\phi)$ to $(Z,\psi)$ is a morphism of $G$-sets $f:Y\to Z$ such that $\psi\circ f=\phi$. The Burnside functor at $X$ is the Grothendieck group of the category of $G$-sets over $X$, for the relations given by disjoint unions. This is a Mackey functor for $G$ over $R$ after extending scalars from $\mathbb{Z}$ to $R$; we denote by $RB$ the functor after scalar extension. If $X$ is a $G$-set, the Burnside module $RB(X^2)$ has an $R$-algebra structure. 
The product of (the isomorphism classes of) $(X\overset{\alpha}{\leftarrow} Y \overset{\beta}{\rightarrow} X)$ and $(X\overset{\gamma}{\leftarrow} Z \overset{\delta}{\rightarrow} X)$ is given by (the isomorphism class of) the pullback along $\beta$ and $\gamma$. $$\xymatrix{ & & P\ar@{..>}[dr]\ar@{..>}[dl] & & \\ & Y\ar[dl]_{\alpha}\ar[dr]^{\beta} & & Z\ar[dl]_{\gamma}\ar[dr]^{\delta} & \\ X & & X & &X }$$ The identity of this $R$-algebra is (the isomorphism class of) $\xymatrix{&X\ar@{=}[rd]\ar@{=}[dl]&\\X& &X }$ The usual Burnside algebra of a finite group $H$, previously denoted by $RB(H)$, is isomorphic to the Burnside functor evaluated at the $H$-set $H/H$. In the language of Mackey functors, the first notation corresponds to Green’s notation and the second one to Dress’ notation. In the rest of the paper the notation $RB(H)$ will always be used for the usual Burnside algebra of the group $H$. If we want to speak about the Burnside functor evaluated at the $H$-set $H/1$, we will write $RB(H/1)$. Another definition of Mackey functors was given by Thévenaz and Webb in [@tw]. The Mackey algebra $\mu_{R}(G)$ for $G$ over $R$ is the unital associative algebra with generators $t_{H}^{K}$, $r^{K}_{H}$ and $c_{g,H}$ for $H\leqslant K\leqslant G$ and $g\in G$, with the following relations: - $\sum_{H\leqslant G}t^{H}_{H}=1_{\mu_{R}(G)}$. - $t^{H}_{H}=r^{H}_{H}=c_{h,H}$ for $H\leqslant G$ and $h\in H$. - $t^{L}_{K}t_{H}^{K}=t^{L}_{H}$, $r^{K}_{H}r^{L}_{K}=r^{L}_{H}$ for $H\subseteq K\subseteq L$. - $c_{g',{^{g}H}}c_{g,H}=c_{g'g,H}$, for $H\leqslant G$ and $g,g'\in G$. - $t^{{^{g}K}}_{{^{g}H}}c_{g,H}=c_{g,K}t^{K}_{H}$ and $r^{{^{g}K}}_{{^{g}H}}c_{g,K}=c_{g,H}r^{K}_{H}$, $H\leqslant K$, $g\in G$. - $r^{H}_{L}t^{H}_{K}=\sum_{h\in [L\backslash H / K]} t^{L}_{L\cap {^{h} K}} c_{h, L^{h} \cap H} r^{K}_{L^{h}\cap H}$ for $L\leqslant H \geqslant K$. - All the other products of generators are zero. 
\[basis\] The Mackey algebra is a free $R$-module, of finite rank independent of $R$. The set of elements $t^{H}_{K}xr^{L}_{K^{x}}$, where $H$ and $L$ are subgroups of $G$, where $x\in [H\backslash G/L]$, and $K$ is a subgroup of $H{\cap~{\ ^{x}L}}$ up to $(H\cap {\ ^{x}L})$-conjugacy, is an $R$-basis of $\mu_{R}(G)$. Section $3$ of [@tw]. \[prop\_b\] The Mackey algebra $\mu_{R}(G)$ is isomorphic to $RB(\Omega_{G}^{2})$, where $\Omega_{G}$ is the $G$-set $\sqcup_{L\leqslant G} G/L$. The proof can be found in Proposition $4.5.1$ of [@bouc_green]. Let us recall that an explicit isomorphism $\beta$ can be defined on the generators of $\mu_{R}(G)$ by $\beta(t^{K}_{H}):=$ $$\xymatrix{ & G/H\ar[dl]_{\pi_{H}^{K}}\ar@{=}[dr]& \\ G/K & & G/H }$$ where $\pi_{H}^{K} : G/H\to G/K$ is the canonical map. Similarly, we define $\beta(r^{K}_{H}):=$ $$\xymatrix{ & G/H\ar[dr]^{\pi_{H}^{K}}\ar@{=}[dl]& \\ G/H & & G/K }$$ and $\beta(c_{g,H}):=$ $$\xymatrix{ & G/{^{g}H}\ar[dr]^{\gamma_{g,H}}\ar@{=}[dl]& \\ G/{^{g}H} & & G/H }$$ where $\gamma_{g,H}(x{\ ^{g}H})=xgH$. One can check that this gives an isomorphism of algebras. There is an equivalence of categories $Mack_{R}(G)\cong \mu_{R}(G)$-Mod. Burnside Trace. --------------- There is a tensor product in the category of Mackey functors (see, e.g., [@bouc_green]). With this tensor product, the category is a closed symmetric monoidal category with the Burnside functor as unit. So, using the formalism of May ([@may_trace]), in which the dualizable Mackey functors are exactly the finitely generated projective Mackey functors, Bouc defined the notions of Burnside dimension and Burnside trace for these Mackey functors ([@bouc_burnside_dim]). Let $M$ be a finitely generated projective Mackey functor. The Burnside trace, denoted by $Btr$, is a map from $End_{Mack_{R}(G)}(M)$ to $RB(G)$. Let $RB_{X}$ be the Dress construction of the Burnside functor at the finite $G$-set $X$ (see [@dress] or [@bouc_green]). 
It is well known that $RB_{X}$ is a finitely generated projective Mackey functor. By an adjunction property, we have an isomorphism of $R$-algebras $End_{Mack_{R}(G)}(RB_{X})\cong RB(X^2)$, where the product on this ring is defined as in Example \[burnside\]. Using these identifications, the Burnside trace on this Mackey functor is in fact a map from $RB(X^2)$ to $RB(G)$. Here we use Green’s notation for $RB(G)$. Let $X$ and $Z$ be finite $G$-sets, and let $a$ and $b$ be maps of $G$-sets from $Z$ to $X$. Let $$f=\xymatrix{ & Z \ar[dl]_{b}\ar[dr]^{a}&\\ X && X }$$ The Burnside trace $Btr : RB(X^2)\to RB(G)$ is defined on $f$ by: $$Btr(f):=\{z\in Z\ | a(z)=b(z)\}\in RB(G).$$ Corollary 2.7 [@bouc_burnside_dim]. By composing the Burnside trace with any $R$-linear map $RB(G)\to R$ we obtain a linear form on $RB(X^2)$.\ \[gene\] Let $R$ be a commutative ring. Let $f$ be a linear map from $RB(G)$ to $R$ such that $f(G/1)=1$. The trace map $f\circ Btr$ generalizes the usual trace map for the group ring $RG$ in the following way. The Burnside algebra $RB(G/1\times G/1)$ is isomorphic to $RG$. The isomorphism is defined as follows: a transitive $G$-set over $G/1\times G/1$ is isomorphic to $$f_{g} = \xymatrix{ & G/1\ar[rd]^{g}\ar@{=}[ld] &,\\ G/1 && G/1 }$$ for some $g\in G$. The element $f_{g}$ is sent to $g\in RG$. Now, the Burnside trace of the element $f_{g}$ is $\delta_{g,1}G/1$. Using the fact that the Mackey algebra $\mu_{R}(G)$ is isomorphic to $RB(\Omega_{G}^{2})$, the Burnside trace gives a linear map from $\mu_{R}(G)$ to $RB(G)$. Using Proposition \[prop\_b\] we have as an immediate corollary: The Burnside Trace $Btr$ on the Mackey algebra is defined on a basis element by $$Btr(t^{K}_{H}xr^{L}_{H^{x}}) =\left\{\begin{array}{c}G/H \hbox{ if $K=L$ and $x\in L$} \\0 \hbox{ if not.}\end{array}\right.$$ \[calc\] Let $t^{H}_{K}xr^{L}_{K^{x}}$ and $t^{L}_{Q}yr^{H}_{Q^{y}}$ be two basis elements of $\mu_{R}(G)$. 
Then $$Btr\big(t^{H}_{K}xr^{L}_{K^{x}}t^{L}_{Q}yr^{H}_{Q^{y}}\big)=\sum_{\alpha\in[K^{x}\backslash L /Q]} \delta_{x\alpha y, H} G/(K\cap \ ^{x\alpha}Q),$$ where $\delta_{x\alpha y, H} =1$ if $x\alpha y\in H$ and $0$ otherwise. This follows from the computation of the product $t^{H}_{K}xr^{L}_{K^{x}}t^{L}_{Q}yr^{H}_{Q^{y}}$ using the Mackey formula: $$Btr(t^{H}_{K}xr^{L}_{K^{x}}t^{L}_{Q}yr^{H}_{Q^{y}})=\sum_{\alpha\in[K^{x}\backslash L / Q] } Btr(t^{H}_{K\cap^{x\alpha}Q}\ x\alpha y\ r^{H}_{Q^{y}\cap K^{x\alpha y}}).$$ Let $(-,-)_{B}$ be the bilinear map $\mu_{R}(G)\times \mu_{R}(G) \to RB(G)$ defined by $(x,y)_{B}:=Btr(xy)$ for $x,y\in \mu_{R}(G)$. \[bl\] In the basis of Proposition \[basis\], the matrix $M$ of the bilinear form $(-,-)_{B}$ is a block-permutation matrix. The possibly non-zero blocks can be labelled by $(H,L,x,y)$, where $H$ and $L$ are subgroups of $G$, the element $x$ is a representative of a double coset in $H\backslash G/L$, and $y$ is a representative of a double coset in $L\backslash G/H$ such that $HxL=Hy^{-1}L$. In the basis of Proposition \[basis\], it is easy to see that the matrix $M$ of $(-,-)_{B}$ is a block matrix, where the blocks are indexed by two pairs of subgroups of $G$. Indeed, the block indexed by $(H,L)$ and $(M,N)$ is the sub-matrix of $M$ whose columns are indexed by the basis elements of the form $t^{H}_{K}xr^{L}_{K^{x}}$ and whose rows are indexed by the basis elements of the form $t^{M}_{P}yr^{N}_{P^{y}}$. Now the product $t^{H}_{K}xr^{L}_{K^{x}}t^{M}_{P}yr^{N}_{P^{y}}$ is zero unless $L=M$, and $Btr\big(t^{H}_{K}xr^{L}_{K^{x}}t^{M}_{P}yr^{N}_{P^{y}}\big)=0$ unless $H=N$. So the non-zero blocks are exactly the blocks indexed by the pairs of subgroups $(H,L)$ and $(L,H)$. Let $Bl$ be the block of $M$ indexed by $(H,L)$ and $(L,H)$. Then the matrix $Bl$ is again a block matrix whose blocks are indexed by elements $x\in [H\backslash G / L]$ and $y\in [L\backslash G /H]$. Let us denote by $Bl_{H,L,x,y}$ the corresponding block. 
If $HxL \neq Hy^{-1} L$ then $Bl_{H,L,x,y} = 0$. Indeed if the restriction of $(-,-)_{B}$ to the block $Bl_{H,L,x,y}$ is non-zero, then there are subgroups $K \leqslant H\cap {\ ^{x}L}$ and $Q\leqslant L\cap {\ ^{y}H}$ and an element $\alpha\in [K^{x}\backslash L / Q]$ such that $x\alpha y \in H$. Then there exists $h\in H$ such that $x = hy^{-1}\alpha^{-1}$, so $HxL =Hy^{-1}L$. Let $\phi_{G}$ be a linear map from $RB(G)$ to $R$. - We denote by $tr_{\phi_{G}}$ the composite $\phi_{G}\circ Btr : \mu_{R}(G)\to R$. - We denote by $(-,-)_{\phi_{G}}$ the bilinear form on $\mu_{R}(G)$ defined by $(x,y)_{\phi_{G}}=tr_{\phi_{G}}(xy),$ for $x,y\in\mu_{R}(G)$. - We denote by $b_{\phi_{G}}$ the bilinear form on $RB(G)$ defined by $b_{\phi_{G}}(X,Y):=\phi_{G}(XY)$. The map $tr_{\phi_{G}}$ is a central linear form on the Mackey algebra $\mu_{R}(G)$. This follows from the fact that the Burnside trace is central. Let $G$ be a finite group and $\phi=(\phi_{H})_{H\leqslant G}$ be a family of linear maps such that $\phi_{H}$ is a linear form on $RB(H)$. The family $\phi$ is stable by induction if for every subgroup $H$ of $G$ and every finite $H$-set $X$ we have $\phi_{G}(Ind_{H}^{G}(X)) = \phi_{H}(X)$. \[red1\] Let $\phi=(\phi_{H})_{H\leqslant G}$ be a stable by induction family of linear forms on $\big(RB(H)\big)_{H\leqslant G}$. In the usual basis of $\mu_{R}(G)$, the matrix of $(-,-)_{\phi_{G}}$ is a permutation by block matrix. A non-zero block indexed by $(H,L,x,y)$ of this matrix is equal, up to permutation of the rows and the columns, to the block $(\Theta,\Theta,1,1)$ of the matrix of $(-,-)_{\phi_{\Theta}}$ for $\Theta = L\cap H^{x}$. Let $Bl_{H,L,x,y}$ be a non-zero block of the matrix of $(-,-)_{\phi_{G}}$. That is, $H$ and $L$ are subgroups of $G$, the element $x$ is a representative of the double coset $H\backslash G/L$ and the element $y$ is a representative of $L\backslash G/H$. Since the block is non-zero, the double cosets $HxL$ and $Hy^{-1}L$ are equal.
Let $h\in H$ and $l\in L$ such that $$y=lx^{-1}h.$$ Now the basis elements which appear for this block are: for the rows $t^{H}_{K}xr^{L}_{K^{x}}$ for $K\leqslant H\cap\ ^{x}L$ up to conjugacy in $H\cap\ ^{x}L$, and for the columns $t^{L}_{Q}yr^{H}_{Q^{y}}$ where $Q\leqslant L\cap\ ^{y} H$ up to conjugacy in $ L\cap\ ^{y} H$. By Lemma \[calc\], the entry indexed by these two basis elements is: $$\sum_{\alpha\in[K^{x}\backslash L /Q]} \delta_{x\alpha y, H} \phi_{G}\big(G/(K\cap \ ^{x\alpha}Q)\big).$$ \[le1\] The map $f$ defined by $f(\alpha)=\alpha l$ induces a bijection between the set $$\{ \alpha \in [K^{x}\backslash L /Q]\ ; x\alpha y \in H\},$$ and the set $$\{w\in [K^{x}\backslash L\cap H^{x}/ Q^{l}]\}.$$ - Let $\alpha\in L$ such that $x\alpha y \in H$. Since $y=lx^{-1}h$ we have: $$\begin{aligned} x\alpha y \in H &\Leftrightarrow x\alpha l x^{-1} h \in H\\ &\Leftrightarrow \alpha l \in H^{x},\end{aligned}$$ so $\alpha l \in L\cap H^{x}$. - The map $f$ is well-defined: if $\alpha$ and $\alpha'$ are in the same double coset, there are $k\in K$ and $q\in Q$ such that $\alpha'=x^{-1}kx\alpha q$, and $$f(\alpha')=x^{-1}kx\alpha q l = x^{-1}kx\alpha l l^{-1}q l,$$ so $f(\alpha)$ and $f(\alpha')$ are in the same double coset. - The map $f$ is injective: if $f(\alpha)=f(\alpha')$ then there are $k\in K$ and $q\in Q$ such that $\alpha l = x^{-1}kx \alpha' l l^{-1}q l = x^{-1}kx \alpha' q l$, so $\alpha$ and $\alpha'$ are in the same double coset. - The map $f$ is surjective: let $w\in L\cap H^{x}$, then $wl^{-1}\in L$ and $f(wl^{-1}) = w$.
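Lemma \[le1\] can also be checked by brute force in a small group. The following Python sketch (not part of the paper's formalism) enumerates, in $S_{3}$, all tuples $(H,L,x,y,K,Q)$ satisfying the hypotheses above, using the conventions $K^{x}=x^{-1}Kx$ and $^{x}K=xKx^{-1}$, and compares the cardinalities of the two sets in the lemma.

```python
from itertools import permutations

# G = S_3 as permutation tuples acting on {0, 1, 2}.
G = list(permutations(range(3)))
mul = lambda g, h: tuple(g[h[i]] for i in range(3))

def inv(g):
    r = [0] * 3
    for i, v in enumerate(g):
        r[v] = i
    return tuple(r)

def closure(gens):
    # subgroup generated by gens (every subgroup of S_3 is 2-generated)
    S = {(0, 1, 2)} | set(gens)
    while True:
        P = {mul(a, b) for a in S for b in S}
        if P <= S:
            return frozenset(S)
        S |= P

subgroups = {closure([g, h]) for g in G for h in G}

def dcosets(A, S, B):
    """Representatives of the double cosets A\\S/B inside the set S."""
    reps, left = [], set(S)
    while left:
        s = left.pop()
        reps.append(s)
        left -= {mul(a, mul(s, b)) for a in A for b in B}
    return reps

checked = 0
for H in subgroups:
    for L in subgroups:
        for x in G:
            xi = inv(x)
            Hx = frozenset(mul(mul(xi, h), x) for h in H)   # H^x
            xL = frozenset(mul(mul(x, a), xi) for a in L)   # ^xL
            for y in G:
                yi = inv(y)
                if {mul(h, mul(x, a)) for h in H for a in L} != \
                   {mul(h, mul(yi, a)) for h in H for a in L}:
                    continue                                # need HxL = Hy^-1 L
                yH = frozenset(mul(mul(y, h), yi) for h in H)  # ^yH
                # pick l in L, h in H with y = l x^-1 h (exists by the above)
                l = next(a for a in L for h in H if y == mul(a, mul(xi, h)))
                li = inv(l)
                for K in subgroups:
                    if not K <= H & xL:
                        continue
                    Kx = frozenset(mul(mul(xi, k), x) for k in K)  # K^x
                    for Q in subgroups:
                        if not Q <= L & yH:
                            continue
                        Ql = frozenset(mul(mul(li, q), l) for q in Q)  # Q^l
                        lhs = sum(1 for a in dcosets(Kx, L, Q)
                                  if mul(x, mul(a, y)) in H)
                        rhs = len(dcosets(Kx, L & Hx, Ql))
                        assert lhs == rhs      # the bijection of Lemma [le1]
                        checked += 1
assert checked > 0
```

The membership condition $x\alpha y\in H$ is independent of the chosen double-coset representative, which is why testing a single representative per coset suffices.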
So, we have: $$\begin{aligned} tr_{\phi_{G}}(t^{H}_{K}xr^{L}_{K^{x}}t^{L}_{Q}yr^{H}_{Q^{y}})&=\sum_{\alpha\in[K^{x}\backslash L /Q]} \delta_{x\alpha y, H} \phi_{G}\big(G/(K\cap \ ^{x\alpha}Q)\big)\\ &=\sum_{w\in [K^{x}\backslash L\cap H^{x}/Q^{l}]} \phi_{G}(G/K\cap\ ^{xw}(Q^{l}))\\ &=\sum_{w\in [K^{x}\backslash L\cap H^{x}/Q^{l}]} \phi_{G}(G/K^{x}\cap\ ^{w}(Q^{l}))\\ &=\sum_{w\in [K^{x}\backslash L\cap H^{x}/Q^{l}]} \phi_{G}(Ind_{L\cap H^{x}}^{G}(L\cap H^{x}/K^{x}\cap\ ^{w}(Q^{l})))\\ &=\sum_{w\in [K^{x}\backslash L\cap H^{x}/Q^{l}]}\phi_{L\cap H^{x}}(L\cap H^{x}/K^{x}\cap\ ^{w}(Q^{l})). \end{aligned}$$ Let $\Theta= L\cap H^{x}$. The basis elements which appear for the block $Bl_{\Theta,\Theta,1,1}$ of the matrix of $(-,-)_{\phi_{\Theta}}$ are the $t^{\Theta}_{A}r^{\Theta}_{A}$ for $A\leqslant \Theta$ up to conjugacy. Let $A$ and $B$ be subgroups of $\Theta$, the entry corresponding to $t^{\Theta}_{A}r^{\Theta}_{A}$ and $t^{\Theta}_{B}r^{\Theta}_{B}$ is: $$\sum_{w\in [A\backslash \Theta / B]} \phi_{\Theta}(\Theta/ A\cap\ ^{w} B).$$ So the blocks $Bl_{H,L,x,y}$ and $Bl_{\Theta,\Theta,1,1}$ are equal up to permutation of the rows and the columns. In particular, these two matrices have the same determinant, up to a sign. \[red2\] Let $\Theta$ be a finite group, and $\mu'$ the sub-algebra of $\mu_{R}(\Theta)$ generated by the elements of the form $t^{\Theta}_{A}r^{\Theta}_{A}$ for $A\leqslant \Theta$. Then the restriction of the Burnside trace to $\mu'$ is an isomorphism of $R$-algebras between $\mu'$ and $RB(\Theta)$, sending the basis of Proposition \[basis\] to the usual basis of $RB(\Theta)$ consisting of isomorphism classes of transitive $\Theta$-sets. It is clear that the restriction of the Burnside trace to $\mu'$ is an $R$-linear isomorphism since we have $Btr(t^{\Theta}_{A}r^{\Theta}_{A})= \Theta/A \in RB(\Theta)$.
Moreover this is an isomorphism of algebras, since: $$\begin{aligned} Btr(t^{\Theta}_{A}r^{\Theta}_{A}t^{\Theta}_{B}r^{\Theta}_{B})&=\sum_{\theta\in [A\backslash \Theta / B]} \Theta/(A\cap B^{\theta})\\ &=\Theta/A\times \Theta/B\in RB(\Theta). \end{aligned}$$ We have: \[meta\] Let $G$ be a finite group. Let $\phi=(\phi_{H})_{H\leqslant G}$ be a stable by induction family of linear forms on $\big(RB(H)\big)_{H\leqslant G}$. Then the bilinear form $(-,-)_{\phi_{G}}$ on the Mackey algebra $\mu_{R}(G)$ is non-degenerate if and only if the bilinear form $b_{\phi_{H}}$ on $RB(H)$ is non-degenerate for every subgroup $H$ of $G$. If $\phi$ is such a family of linear forms, by Lemma \[bl\] the matrix of the bilinear form $(-,-)_{\phi_{G}}$ in the usual basis of $\mu_{R}(G)$ is a permutation by block matrix. So the determinant of this matrix is (up to a sign) the product of the determinants of the non-zero blocks. By Lemma \[red1\] and Lemma \[red2\] the determinant of the block indexed by $(H,L,x,y)$ is equal to the determinant of the matrix of the bilinear form $b_{\phi_{L\cap H^{x}}}$ in the usual basis of $RB(L\cap H^{x})$. So the determinant of $(-,-)_{\phi_{G}}$ is invertible in $R$ if and only if the determinant of the form $b_{\phi_H}$ on $RB(H)$ is invertible in $R$ for every subgroup $H$ of $G$. Let $G$ be a finite group. The family $\big(RB(H)\big)_{H\leqslant G}$ is a stable by induction family of symmetric algebras if there exists a stable by induction family of linear forms $\phi=(\phi_{H})_{H\leqslant G}$ such that the bilinear form $b_{\phi_{H}}$ on $RB(H)$ is non-degenerate for every $H\leqslant G$. \[main\] Let $G$ be a finite group. Then the Mackey algebra is a symmetric algebra if and only if $\big(RB(H)\big)_{H\leqslant G}$ is a stable by induction family of symmetric algebras.
Only for this proof, we use Green’s definition of Mackey functors since it is much more convenient for understanding the action of the induction and restriction maps (see Section $2$ of [@tw]). If $\big(RB(H)\big)_{H\leqslant G}$ is a stable by induction family of symmetric algebras, then by Theorem \[meta\], the Mackey algebra is symmetric. Conversely, if the Mackey algebra is symmetric, then the Mackey algebra is isomorphic to its $R$-linear dual as bimodule. Using the usual equivalence of categories, the modules over the Mackey algebras are the Mackey functors. In particular the Burnside functor $RB$ corresponds to a direct summand of the free module of rank $1$ over the Mackey algebra. Since the Mackey algebra is symmetric, the Burnside functor is isomorphic to its $R$-linear dual, that is, there exists an isomorphism of Mackey functors $f: RB\to Hom_{R}(RB,R)$. For the Mackey functor structure of $Hom_{R}(RB,R)$, see Section $4$ of [@tw]. This isomorphism allows us to build an associative non-degenerate bilinear form ${<}-,-{>} : RB\times RB\to R$, i.e. a family of bilinear forms ${<}-,-{>}_{K}$, one for each subgroup $K$ of $G$, defined in the following way: let $K$ be a subgroup of $G$ and $X$ and $Y$ be two elements of $RB(K)$, then $${<}X,Y{>}_{K}:=f_{K}(X)(Y).$$ The fact that $f$ is a Mackey functor morphism implies in particular the following properties: let $H\leqslant K$ be subgroups of $G$, let $X$ be an $H$-set and let $Y$ be a $K$-set; then $${<}Ind_{H}^{K}X,Y{>}_{K} = {<}X,Res^{K}_{H} Y{>}_{H},$$ and $${<}Res_{H}^{K}Y,X{>}_{H} = {<}Y,Ind^{K}_{H} X{>}_{K}.$$ So we have a family of linear forms $(\phi_{H})_{H\leqslant G}$ on the Burnside algebras $(RB(H))_{H\leqslant G}$ defined by: let $X\in RB(H)$, then $\phi_{H}(X):={<}X,H/H{>}_{H}$.
Let $H\leqslant K$ and $X\in RB(H)$, then $$\begin{aligned} \phi_{K}(Ind_{H}^{K}X)&={<}Ind_{H}^{K}(X),K/K{>}_{K}\\ &={<}X,Res^{K}_{H}K/K{>}_{H}\\ &={<}X,H/H{>}_{H}\\ &=\phi_{H}(X).\end{aligned}$$ The family $\big(\phi_{H}\big)_{H\leqslant G}$ is a stable by induction family of linear forms on the Burnside algebras $\big(RB(H)\big)_{H\leqslant G}$, and the bilinear forms $b_{\phi_{H}}$ are the bilinear forms ${<}-,-{>}_{H}$ so by definition they are non-degenerate. If the Mackey algebra is symmetric, it is always possible to choose a stable by induction family of linear maps $(\phi_{H})_{H\leqslant G}$ on $(RB(H))_{H\leqslant G}$ which generalize the trace maps on $\big(RH\big)_{H\leqslant G}$ in the sense of Remark \[gene\], i.e. such that $\phi_{H}(H/1)=1$. Indeed, since the family is stable by induction, for every subgroup $H$ of $G$, we have $\phi_{H}(H/1)=\phi_{1}(1/1)$. Let us denote by $a$ the value $\phi_{H}(H/1)$. Now in the usual basis of $RB(H)$, the matrix of the bilinear form $b_{\phi_{H}}$ has a column divisible by $a$, and since this bilinear form is non-degenerate, we have $a\in R^{\times}$, so one can normalize the linear forms $\phi_{H}$. Symmetry in the semi-simple case. ===================================== Let $G$ be a finite group and $k$ a field of characteristic zero, or characteristic $p>0$ which does not divide the order of $G$, then it is well known that the Mackey algebra $\mu_{k}(G)$ is semi-simple, so it is clearly a symmetric algebra. One can specify a trace map for this algebra by using the previous section. Let us consider the linear form $\phi_{G}$ on $kB(G)$ defined by $$\phi_{G}(X) = \sum_{H \in [s(G)]} \frac{1}{|N_{G}(H)|} |X^{H}|,$$ where $X\in kB(G)$ and $[s(G)]$ is a system of representatives of conjugacy classes of subgroups of $G$. In this situation the set of the primitive orthogonal idempotents of $kB(G)$ is well known. These idempotents are in bijection with the conjugacy classes of subgroups of $G$.
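The two defining properties of this trace map, $\phi_{G}(G/1)=1$ and stability by induction, are easy to test numerically. The following Python sketch (an illustration, not part of the paper) does so for cyclic groups $G=C_{n}$, where subgroups correspond to divisors, $N_{G}(H)=G$, $|(C_{n}/C_{k})^{C_{d}}|=n/k$ when $d\mid k$ and $0$ otherwise, and $Ind_{H}^{G}(H/K)=G/K$.

```python
from fractions import Fraction

def divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

def phi(n, k):
    # phi_{C_n}(C_n/C_k) = sum over subgroups C_d of |(C_n/C_k)^{C_d}| / |N(C_d)|
    return sum(Fraction(n, k) / n for d in divisors(n) if k % d == 0)

for n in divisors(360):
    assert phi(n, 1) == 1                   # phi_G(G/1) = 1
    for m in divisors(n):                   # H = C_m <= G = C_n
        for k in divisors(m):               # K = C_k <= H
            # Ind_H^G(H/K) = G/K, so stability says phi_G(G/K) = phi_H(H/K)
            assert phi(n, k) == phi(m, k)
```

In this cyclic case one finds $\phi_{C_{n}}(C_{n}/C_{k})=\tau(k)/k$ (with $\tau$ the number-of-divisors function), which is independent of $n$; this is exactly why stability holds.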
If $H$ is a subgroup of $G$, let us denote by $e^{G}_{H}$ the idempotent corresponding to the conjugacy class of $H$. For more details, see [@yoshida_idempotent], [@gluck_idempotent], or [@bouc_burnside] for a summary. Let us recall some important results about these idempotents: Let $G$ be a finite group. 1. Let $H$ and $K$ be subgroups of $G$, then $|(e_{H}^{G})^{K}| = 1$ if $H$ is conjugate to $K$ and $0$ otherwise. 2. Let $X$ be a $G$-set and $H\leqslant G$, then $X.e_{H}^{G} = |X^{H}|e^{G}_{H}$. 3. Let $H\leqslant K$ be subgroups of $G$, then $Ind_{K}^{G}(e_{H}^{K})=\frac{|N_{G}(H)|}{|N_{K}(H)|}e_{H}^{G}.$ 4. Let $H$ be a subgroup of $G$, then $$e_{H}^{G}=\frac{1}{|N_{G}(H)|} \sum_{K\leqslant H} |K|\mu(K,H) G/K.$$ \[lee1\] 1. Let $G$ be a finite group, then $\phi_{G}$ is a linear form. 2. The family $(\phi_{G})_{G}$ is stable by induction. 3. Let $G$ be a finite group, then $\phi_{G}(G/1)=1$. The only non-obvious assertion is the second. Since the map is linear it is enough to check this assertion on the basis elements of $kB(G)$. We use the basis consisting of the primitive orthogonal idempotents. Let $H\leqslant K\leqslant G$, then $$\begin{aligned} \phi_{G}(Ind_{K}^{G}(e_{H}^{K}))&=\frac{|N_{G}(H)|}{|N_{K}(H)|}\phi_{G}(e^{G}_{H})\\ &=\frac{|N_{G}(H)|}{|N_{K}(H)|}\frac{1}{|N_{G}(H)|}\\ &=\frac{1}{|N_{K}(H)|}.\end{aligned}$$ On the other hand, $$\begin{aligned} \phi_{K}(e_{H}^{K})=\frac{1}{|N_{K}(H)|}. \end{aligned}$$ \[prop1\] The determinant of the bilinear form $b_{\phi_{G}}$ in the basis consisting of the transitive $G$-sets is: $$det(b_{\phi})=\prod_{H\in [s(G)]} \frac{|N_{G}(H)|}{|H|^{2}}.$$ If $G$ is abelian, this determinant is equal to $1$. We first compute the determinant of this bilinear form in the basis consisting of the orthogonal primitive idempotents of $kB(G)$, then we apply a change of basis. Since the idempotents are orthogonal, this matrix is diagonal. The diagonal terms are $\phi_{G}(e_{H}^{G}) = \frac{1}{|N_{G}(H)|}$.
So in this basis, the determinant of the matrix is $\prod_{H\in [s(G)]}\frac{1}{|N_{G}(H)|}$. The change of basis matrix from the basis of transitive $G$-sets to the basis of the primitive idempotents is an upper triangular matrix, whose diagonal terms are $\frac{|N_{G}(H)|}{|H|}$. So in the basis of transitive $G$-sets, we have $$det(b_{\phi})=\prod_{H\in [s(G)]} \frac{|N_{G}(H)|}{|H|^{2}}.$$ If $G$ is abelian, this determinant is equal to $\frac{\prod_{H\leqslant G}\frac{|G|}{|H|}}{\prod_{H\leqslant G}|H|}$, which is equal to $1$ since a finite abelian group is isomorphic to its dual. There exist non-abelian groups for which this determinant is equal to $1$. The smallest counterexample is for $G=(C_{4}\times C_{2})\rtimes C_{4}$. A quick run in GAP with the group $G:=SmallGroup(32,2)$ shows that the determinant of $b_{\phi_{G}}$ is $1$. This determinant is most of the time of the form $\frac{1}{n}$, where $n\in \mathbb{N}$, but this is not always true. The first counterexamples are for two groups of order $64$: $H=(C_{8}\times C_{2})\rtimes C_{4}$ and $K=C_{2}\times ((C_{4}\times C_{2})\rtimes C_{4})$. In these two cases the determinant is $4$ and $16$, respectively. \[cor\] Let $G$ be a finite group and $k$ be a field of characteristic zero, or $p>0$ which does not divide the order of $G$, then the Mackey algebra $\mu_{k}(G)$ is symmetric. By Lemma \[lee1\] and Proposition \[prop1\], the family $(kB(H))_{H\leqslant G}$ is a stable by induction family of symmetric algebras. The result is now clear by Theorem \[meta\]. Symmetry of the Mackey algebra over the ring of integers. ========================================================= The trace map defined in the previous section is not defined over the ring of integers. In this section we consider the map $\phi_{G} : B(G) \to \mathbb{Z}$ defined on the usual basis by $\phi_{G}(G/H)= 1$ if $H=\{1\}$ and $\phi_{G}(G/H)=0$ otherwise. We have the following lemma: Let $G$ be a finite group. 1. $\phi_{G}$ is a linear form on $B(G)$. 2.
$\phi=(\phi_{H})_{H\leqslant G}$ is a stable by induction family. 3. $\phi_{G}(G/1)=1$. Let $G$ be a finite group. We denote by $\pi(G)$ the set of the prime divisors of $|G|$. Recall that for $\pi\subseteq \pi(G)$, a Hall-$\pi$-subgroup of $G$ (or a $S_{\pi}$-subgroup of $G$) is a $\pi$-subgroup $H$ such that $|H|$ and $|G/H|$ are coprime. The notion of $S_{\pi}$-group is a generalization of the notion of Sylow $p$-subgroup. In the case of a solvable group, there is a Sylow theorem for $S_{\pi}$-groups: The group $G$ is solvable if and only if $G$ has an $S_{\pi}$-subgroup for every set $\pi$ of prime divisors of $|G|$. In this case, 1. Two $S_{\pi}$-subgroups are conjugate in $G$. 2. Each $\pi$-subgroup of $G$ is contained in an $S_{\pi}$-subgroup. The proof can be found in Part I.6 of [@gorenstein]. The finite group $G$ is a square-free group if $p^2$ does not divide the order of $G$ for any prime number $p$. Let us recall the well-known fact: A square-free group $G$ is solvable. The group $G$ is in fact a super-solvable group. This is well known, but we were not able to find a reference. Let $p$ be the smallest prime divisor of $|G|$. Let $P$ be a Sylow $p$-subgroup of $G$. Then $N_{G}(P)/C_{G}(P)\hookrightarrow Aut(P)$. But $|Aut(P)|=p-1$ and the order of $N_{G}(P)/C_{G}(P)$ is a product of prime numbers bigger than $p$. So $N_{G}(P)=C_{G}(P)$, and by Burnside’s Theorem, the set of all the $p'$-elements of $G$ is a normal subgroup of $G$. By induction this proves that $G$ is (super-)solvable. Let $n$ be the size of $\pi(G)$. Then there are $2^n$ conjugacy classes of subgroups of $G$, one for each divisor of $|G|$. Let $\pi$ be a set of prime divisors of $G$. Since $G$ is solvable, there is an $S_{\pi}$-subgroup of $G$. Now since $G$ is a square-free order group, each subgroup of $G$ is an $S_{\pi}$-subgroup for some set of primes $\pi$. So two subgroups are conjugate in $G$ if and only if they have the same order. \[ordre\] Let $\mathcal{P}$ be the set of divisors of $|G|$.
Let us consider the following order on this set: let $p_1,p_2,\cdots, p_n$ be the prime divisors of $|G|$ such that $p_1 < p_2<\cdots <p_n$. Then $p_1<p_2<\cdots< p_n<p_1p_2<p_1p_3<\cdots <p_1p_n <p_2 p_3< \cdots <p_{n-1}p_{n} < p_1p_2p_3 <\cdots$. Let $[H]$ and $[K]$ be two conjugacy classes of subgroups of $G$. Then $[H] \leqslant [K]$ if and only if $|H|<|K|$ for this order or $|H|=|K|$. \[det1\] Let $G$ be a square-free group. The determinant of the bilinear form $b_{\phi_{G}}$ is $\pm 1$. We will work with the basis of $B(G)$ consisting of transitive $G$-sets. Let $H$ and $K$ be subgroups of $G$, then $$b_{\phi}(G/H,G/K) = Card(\{g\in [H\backslash G/ K]\ ; \ H\cap K^{g} = 1\}).$$ - If $\pi(H)\sqcup \pi(K) = \pi(G)$ and $\pi(H)\cap \pi(K)= \emptyset$, then $b_{\phi}(G/H,G/K)=1$. Indeed, for cardinality reasons, for all $g\in G$, we have $H\cap K^{g}=1$, so $$b_{\phi}(G/H,G/K)=Card\{g\in [H\backslash G/K]\}=1,$$ since there is only one double coset in this situation. - If $H \leqslant G$ and $K\leqslant G$ such that $$\prod_{p_i\in \pi(H)}p_i \times \prod_{p_{j}\in\pi(K)}p_j>|G|,$$ then $b_{\phi}(G/H,G/K)=0$, since $H\cap K^{g}\neq \{1\}$ for all $g\in G$. We order the basis elements using the total order of Remark \[ordre\] on the subgroups of $G$. The anti-diagonal coefficients of the matrix correspond to subgroups $H$ and $K$ such that $\pi(H)\cap \pi(K) =\emptyset$ and $\pi(H)\sqcup \pi(K)= \pi(G)$. So the anti-diagonal coefficients of the matrix are $1$. The coefficients under the anti-diagonal correspond to subgroups $H$ and $K$ such that $\prod_{p_i\in \pi(H)}p_i \times \prod_{p_{j}\in\pi(K)}p_j>|G|$. So these coefficients are zero. The matrix of $b_{\phi}$ in this basis is an upper anti-triangular matrix with $1$ on the anti-diagonal, so its determinant is $\pm 1$. \[sym\] The Mackey algebra $\mu_{\mathbb{Z}}(G)$ is a symmetric algebra if and only if $G$ is a square-free group. Let $G$ be a square-free group.
Then by Theorem \[meta\] and the result of Proposition \[det1\], the determinant of the matrix of the bilinear form $(-,-)_{\phi} : \mu_{\mathbb{Z}}(G)\times \mu_{\mathbb{Z}}(G)\to \mathbb{Z}$ is $\pm 1$. There exists a non-degenerate associative symmetric bilinear form on $\mu_{\mathbb{Z}}(G)$, so this algebra is symmetric. Conversely, let $G$ be a finite group and $p$ be a prime number such that $p^2 \mid |G|$, then $G$ has a $p$-subgroup $P$ of order $p^2$. We prove that every associative symmetric bilinear form ${<}-,-{>}$ on $RB(P)$ is degenerate. - Suppose that $P = C_{p^2}$, let $B$ be the Burnside functor of $Mack_{\mathbb{Z}}(P)$, then there are $a,b,c\in \mathbb{Z}$ such that the matrix $M$ of ${<}-,-{>}$ in the usual basis of $B(G)$ is: $$M=\left(\begin{array}{ccc}a & b & c \\b & pb & pc \\c & pc & p^2c\end{array}\right),$$ If we reduce this matrix modulo $p$, it is clear that the last two columns are proportional. So $det(M)$ is divisible by $p$, so $B$ is not isomorphic to its $\mathbb{Z}$-linear dual $B^{*}$. - Suppose that $P=C_{p} \times C_{p}$. Let $B$ be the Burnside functor of $Mack_{\mathbb{Z}}(P)$. There are elements $a,b_1,b_2,\cdots,b_{p+1},c\in \mathbb{Z}$ such that the matrix $M$ of ${<}-,-{>}$ in the usual basis of $B(G)$ is: $$M:=\left(\begin{array}{cccccc}a & b_1 & \cdots & b_{p} & b_{p+1} & c \\b_{1} & pb_{1} & c & \cdots & c & pc \\\vdots & c & pb_{2} & \ddots & \vdots & \\b_{p} & \vdots & \ddots & \ddots & c & \vdots \\b_{p+1} & c & \cdots & c & pb_{p+1} & pc \\c & pc & \cdots & pc & pc & p^2c\end{array}\right)$$ By reducing this matrix modulo $p$ it is enough to look at the following $(p+1)\times (p+1)$ matrix: $$\left(\begin{array}{cccc}0 & c & \cdots & c \\c & 0 & \ddots & \vdots \\\vdots & \ddots & \ddots & c \\c & \cdots & c & 0\end{array}\right)$$ The sum of the rows is zero modulo $p$, so $det(M)$ is divisible by $p$. Let $G$ be a finite group and let $p$ be a prime number such that $p^2 \mid |G|$.
The proof of Theorem \[sym\] shows that if $p$ is not invertible in a commutative ring $R$, then the Mackey algebra $\mu_{R}(G)$ is not symmetric. The $p$-local case. ================= Let $G$ be a finite group. Let $p$ be a prime number such that $p \mid |G|$. Let $R$ be a commutative ring with unit in which all the prime divisors of $|G|$ except $p$ are invertible. The ring $R$ can be a field $k$ of characteristic $p>0$. If $(K,\mathcal{O},k)$ is a $p$-modular system, the ring $R$ can be either the valuation ring or the residue field. Finally $R$ can be the localization of $\mathbb{Z}$ at the prime $p$. Even for the field $k$, the symmetry of the Mackey algebra does not directly follow from Theorem \[sym\], since the determinant of the bilinear forms $(-,-)_{\phi}$ and $b_{\phi}$ can be zero. For example, the matrix of $b_{\phi_{C_{p^2}}}$ is: $\left(\begin{array}{ccc}p^2 & p & 1 \\p & 0 & 0 \\1 & 0 & 0\end{array}\right)$. So in characteristic $3$, for $G=C_{3}\times C_{4}$ the determinant of $b_{\phi_{G}}$ is zero. Using Theorem \[main\], the symmetry of the Mackey algebra $\mu_{k}(G)$ follows from the symmetry of the modular Burnside algebras $(kB(H))_{H\leqslant G}$. In [@deiml], Markus Deiml proved that the Burnside algebra of a finite group $G$ is symmetric if and only if $p^2\nmid |G|$. For our purpose, we need to check that the stability by induction condition holds. So, following Deiml’s proof, we specify a symmetric associative non-degenerate bilinear form on the Burnside algebra, then we check the stability condition. Almost all of Deiml’s arguments can be used for the ring $R$; when that is not the case, we sketch the proof. Let us recall that the primitive idempotents of the Burnside algebra are in bijection with the conjugacy classes of $p$-perfect subgroups of $G$, denoted by $[s(G)]_{perf}$. If $J$ is $p$-perfect, then we denote by $f_{J}^{G}$ the corresponding idempotent of $RB(G)$. \[form\] Let $J$ be a $p$-perfect subgroup of $G$.
Then, $f_{J}^{G}=\sum_{K} e_{K}^{G},$ where $K$ runs through the conjugacy classes of subgroups of $G$ such that $O^{p}(K)=_{G}J$. Let $G$ be a finite group and $J$ be a $p$-perfect subgroup of $G$. If $p\mid \frac{|N_{G}(J)|}{|J|}$ and $p^{2}\nmid \frac{|N_{G}(J)|}{|J|}$, then there are exactly two conjugacy classes of subgroups $L$ of $G$ such that $O^{p}(L)=J$. Let $S_{J}\leqslant N_{G}(J)$ such that $S_{J}/J$ is a Sylow $p$-subgroup of $N_{G}(J)/J$, then $O^{p}(S_{J})=J$. Conversely if $H$ is a subgroup of $G$ such that $O^{p}(H)$ is conjugate to $J$, then replacing $H$ by one of its conjugates, one can assume that $O^{p}(H)=J$ and $H\leqslant N_{G}(J)$. Now $H/J$ is a $p$-subgroup of $N_{G}(J)/J$, so there are two possibilities: either $H=J$ or $H/J$ is a Sylow $p$-subgroup of $N_{G}(J)/J$, i.e. $H$ is conjugate to $S_{J}$. \[basis1\] Let $J$ be a $p$-perfect subgroup of $G$. Let us denote by $\mathcal{S}_{J}$ a set of representatives of conjugacy classes of subgroups $L$ of $G$ such that $O^{p}(L)=J$. Then the set of $G/I f_{J}^{G}$ where $I\in \mathcal{S}_{J}$ and $J\in [s(G)]_{perf}$ is a basis of $RB(G)$. Here, the proof of Deiml does not work for a general ring $R$, since it uses a dimension argument. However by Lemma 5 of [@deiml], we know that the family $\big(G/I f_{J}^{G}\big)$ is linearly independent, so we just need to check that it is a generating family. Let $K$ be a subgroup of $G$. It is enough to check that $G/K$ is an $R$-linear combination of elements of the form $G/I f_{J}^{G}$ where $O^{p}(I)=J$. If $|K|=1$, then $G/1 = G/1 f_{1}^{G}$. By induction on $|K|$, in $RB(G)$, we have: $$\begin{aligned} G/K = G/K \times 1 &= \sum_{J\in [s(G)]_{perf}} G/K \times f_{J}^{G} \\ &= G/K f_{O^{p}(K)}^{G} + \sum_{O^{p}(K)\neq J\in [s(G)]_{perf}} G/K\times f_{J}^{G}.\end{aligned}$$ Now $G/K f_{J}^{G}$ is zero unless $J$ is conjugate to a subgroup of $K$.
If that is the case, we have: $$\begin{aligned} G/K f_{J}^{G} = \sum_{L\in [s(G)]\ ;\ O^{p}(L)=_{G}J} |(G/K)^{L}|e_{L}^{G}.\end{aligned}$$ Now $|(G/K)^{L}|$ is zero unless $L$ is conjugate to a subgroup of $K$. Moreover, since $O^{p}(L) =_{G} J \neq O^{p}(K)$, the group $L$ cannot be equal to $K$. So $G/K f_{J}^{G}$ is an $R$-linear combination of transitive $G$-sets $G/L'$ where $|L'|<|K|$. By induction, $G/K$ is an $R$-linear combination of elements of the form $G/I f_{J}^{G}$. Following [@deiml], let us consider the linear form $\phi_{G}$ on $RB(G)$ defined on a basis element by: $$\phi_{G}\big(G/I f_{J}^{G}\big) = \left\{\begin{array}{c}1 \hbox{ if $I=J$,} \\0 \hbox{ if $I\neq J$.}\end{array}\right.$$ \[re1\] - If $R=k$ is a field of characteristic $p$, and if $p\nmid |G|$, then the idempotents $f_{J}^{G}$ are the idempotents $e_{J}^{G}$ so it is easy to check that $\phi_{G}(X) = \sum_{H\in [s(G)]}\frac{|H|}{|N_{G}(H)|} |X^{H}|$, for $X\in kB(G)$. - If $p\mid |G|$, it seems rather difficult to compute the value of $\phi_{G}$ on a transitive $G$-set. \[indu\] Let $H\leqslant G$ and $J$ be a $p$-perfect subgroup of $H$. Then: 1. $Ind_{H}^{G}(H/J f_{J}^{H})=G/Jf_{J}^{G}$. 2. Moreover if $p\mid |N_{H}(J)/J|$ and $p^2\nmid |N_{H}(J)/J|$, let $S_{J}$ be a subgroup of $H$ such that $J\subset S_{J}$ and $O^{p}(S_{J})=J$. Then: $$Ind_{H}^{G}(H/S_{J}f_{J}^{H})=G/S_{J} f_{J}^{G}.$$ Using Lemma \[form\], we have: $$\begin{aligned} Ind_{H}^{G}\big(H/J f_{J}^{H}\big)&= \frac{|N_{H}(J)|}{|J|}Ind_{H}^{G}(e_{J}^{H})\\ &=\frac{|N_{H}(J)|}{|J|} \frac{|N_{G}(J)|}{|N_{H}(J)|} e_{J}^{G}\\ &=G/J f_{J}^{G}.\end{aligned}$$ For the second part, by Lemma $3.5$ of [@yoshida_idempotent], we have $Res^{G}_{H}(f^{G}_{J})=\sum_{J'} f^{H}_{J'}$ where $J'$ runs through the subgroups of $H$, up to $H$-conjugacy, such that $J'$ is conjugate to $J$ in $G$.
So, we have: $$\begin{aligned} H/S_{J} Res^{G}_{H}(f^{G}_{J})&=\sum_{J'} H/S_{J} f^{H}_{J'},\end{aligned}$$ but we have: $$\begin{aligned} H/S_{J}f_{J'}^{H}= \sum_{\underset{O^{p}(K)=J'}{K\leqslant H\hbox{ {\footnotesize up to $H$-conjugacy}}}} |(H/S_{J})^K|e_{K}^{H}. \end{aligned}$$ But $|(H/S_{J})^{K}|=0$ unless $K$ is $H$-conjugate to a subgroup of $S_{J}$. Without loss of generality one can assume $K\subseteq S_{J}$. So the only non-zero terms are for $J'\leqslant K \leqslant S_{J}$ and since $|S_{J}|/|J'|=p$ either $K=J'$ or $K=S_{J}$. If $K=S_{J}$, then $O^{p}(K)=J'$ is $H$-conjugate to $O^{p}(S_{J})=J$, that is $J'$ is $H$-conjugate to $J$. If $K=J'$ and $J\neq J'$, then we have the following situation: $$\xymatrix{ & S_{J} &\\ J\ar@{=}[ur]^{p} & & J'\ar@{-}[ul]_{p}\\ & J\cap J'\ar@{-}[ul]\ar@{=}[ur] & }$$ The two subgroups $J$ and $J'$ are of index $p$ in $S_{J}$. We have $JJ'=S_{J}$. Since $J$ is normal in $S_{J}$, the intersection $J\cap J'$ is normal in $J'$. Then by the second isomorphism theorem, we have $|J'|/|J'\cap J|=p$. This implies that $p^{2}\mid |S_{J}|$, which is not possible by hypothesis. So we have $H/S_{J}Res^{G}_{H}(f_{J}^{G})=H/S_{J} f_{J}^{H}$. Using the Frobenius identity (see Proposition $3.13$ of [@bouc_burnside]), we have: $$\begin{aligned} Ind_{H}^{G}(H/S_{J} f_{J}^{H})&=Ind_{H}^{G}(H/S_{J}Res^{G}_{H}f^{G}_{J})\\ &= G/S_{J}f^{G}_{J}. \end{aligned}$$ \[mod1\] Let $G$ be a finite group. 1. $\phi_{G}(G/1)=1$. 2. If $p\mid |G|$ and $p^2 \nmid |G|$, then the family $\big(\phi_{H}\big)_{H\leqslant G}$ is stable by induction. 1. The first part is obvious since $G/1f_{1}^{G}=|G|e^{G}_{1}=G/1$. 2. The second part follows from Lemma \[indu\]. \[mod2\] Let $G$ be a finite group such that $p\mid |G|$ and $p^2\nmid |G|$. Then the Burnside algebra $RB(G)$ is a symmetric algebra. In the basis of Lemma \[basis1\], the matrix of $b_{\phi_{G}}$ is a block-diagonal matrix.
The blocks are indexed by the conjugacy classes of $p$-perfect subgroups of $G$. If $J$ is a $p$-perfect subgroup such that $p\nmid |N_{G}(J)/J|$, then there is only one conjugacy class of subgroups $L$ of $G$ such that $O^{p}(L)=J$, so the block indexed by $J$ is of size $1$. The entry in this block is: $$\begin{aligned} b_{\phi_{G}}(G/Jf_{J}^{G},G/Jf_{J}^{G}) &= \phi_{G}(G/Jf_{J}^{G}\times G/Jf_{J}^{G})\\ &= \sum_{g\in [J\backslash G/J]} \phi_{G}(G/(J\cap J^{g})f_{J}^{G})\\ &=\sum_{g\in [N_{G}(J)/J]}\phi_{G}(G/Jf_{J}^{G})\\ &=\frac{|N_{G}(J)|}{|J|} \in R^{\times}. \end{aligned}$$ If $J$ is a $p$-perfect subgroup of $G$ such that $p\mid |N_{G}(J)/J|$, then there are two conjugacy classes of subgroups $L$ of $G$ such that $O^{p}(L)=J$. We denote by $S_{J}$ a subgroup of $G$ such that $J\subset S_{J}$ and $O^{p}(S_{J})=J$. The block indexed by $J$ is of size $2$. The first diagonal entry is: $$\begin{aligned} b_{\phi_{G}}(G/Jf_{J}^{G},G/Jf_{J}^{G})=\frac{|N_{G}(J)|}{|J|}. \end{aligned}$$ The anti-diagonal entries are: $$\begin{aligned} b_{\phi_{G}}(G/Jf_{J}^{G},G/S_{J} f_{J}^{G})&=\sum_{g\in [S_{J}\backslash G/J]} \phi_{G}(G/(S_{J}\cap J^{g}) f_{J}^{G})\\ &=\sum_{g\in [S_{J}\backslash N_{G}(J)/J]} 1\\ &=\frac{|N_{G}(J)|}{|S_{J}|}.\end{aligned}$$ Finally, the second diagonal element is: $$\begin{aligned} a:=b_{\phi_{G}}(G/S_{J}f_{J}^{G},G/S_{J}f_{J}^{G})= \sum_{g\in [S_{J}\backslash G/S_J]} \phi_{G}(G/(S_{J}\cap S_{J}^{g}) f_{J}^{G}).\end{aligned}$$ Now, if $g\notin N_{G}(J)$ we have $G/(S_{J}\cap S_{J}^{g}) f_{J}^{G}=0$ and if $g\in N_{G}(S_{J})$, we have $$\phi_{G}(G/S_{J}f_{J}^{G})=0.$$ For the computation of $a$, we work in $\mathbb{Q}$.
Then we have: $$\begin{aligned} a&=\sum_{g\in [S_{J}\backslash G/S_J]} \phi_{G}(G/(S_{J}\cap S_{J}^{g}) f_{J}^{G})\\ &= \sum_{g\in N_{G}(J)\backslash N_{G}(S_{J})} \frac{|S_{J}\cap S_{J}^{g}|}{|S_{J}|^{2}} \phi_{G}(G/J f_{J}^{G})\\ &=\sum_{g\in N_{G}(J)\backslash N_{G}(S_{J})} \frac{|J|}{|S_{J}|^2}\\ &= \frac{|J|}{|S_{J}|^2}\big(|N_{G}(J)|-|N_{G}(S_{J})|\big).\end{aligned}$$ The determinant of each of these blocks is: $$\begin{aligned} &\frac{|N_{G}(J)|}{|J|}\times \Big(\frac{|J|}{|S_{J}|^2}\big(|N_{G}(J)|-|N_{G}(S_{J})|\big)\Big)- \frac{|N_{G}(J)|^{2}}{|S_{J}|^{2}}\\ & = -\frac{|N_{G}(J)|\times |N_{G}(S_{J})|}{|S_{J}|^{2}}\in R^{\times}.\end{aligned}$$ This determinant is invertible in $R$, so the bilinear form $b_{\phi_{G}}$ is non-degenerate. Let $G$ be a finite group. Then the Mackey algebra $\mu_{R}(G)$ is a symmetric algebra if and only if $p^2\nmid |G|$. If $p^2\nmid |G|$, the fact that $\mu_{R}(G)$ is a symmetric algebra follows from Theorem \[meta\], Proposition \[mod2\] and Lemma \[mod1\]. If $p^2\mid |G|$, we saw in the proof of Theorem \[sym\] that every associative bilinear form on $RB(P)$ is degenerate if $|P|=p^2$, so the Mackey algebra $\mu_{R}(G)$ is not a symmetric algebra. #### Acknowledgements The author would like to thank the foundation FEDER and the CNRS for their financial support and the foundation ECOS and CONACYT for the financial support in the project M10M01. Thanks also go to Serge Bouc for his suggestions. [10]{} S. Bouc. , volume 1671 of [*Lecture Notes in Mathematics*]{}. Springer, 1997. S. Bouc. R[é]{}solutions de foncteurs de [M]{}ackey. , pages 31–83, 1998. S. Bouc. Burnside rings. , 2:739–803, 2000. S. Bouc. The Burnside dimension of projective Mackey functors. In [*Proceedings of the [S]{}ymposium “[A]{}lgebraic [C]{}ombinatorics"*]{}, pages 107–120, Kyoto, 2004. RIMS. M. Brou[é]{}. Higman’s criterion revisited. , 58(1):125–179, 05 2009. M. Deiml. The symmetry of the modular Burnside ring. , 228(2):397–405, 2000. A. Dress.
[Baptiste Rognerud\ EPFL / SB / MATHGEOM / CTG\ Station 8\ CH-1015 Lausanne\ Switzerland\ e-mail: baptiste.rognerud@epfl.ch]{}
--- abstract: 'The Gromov-Hausdorff distance $(d_{GH})$ proves to be a useful distance measure between shapes. In order to approximate $d_{GH}$ for compact subsets $X,Y\subset\R^d$, we look into its relationship with $d_{H,iso}$, the infimum Hausdorff distance under Euclidean isometries. As already known for dimension $d\geq 2$, the $d_{H,iso}$ cannot be bounded above by a constant factor times $d_{GH}$. For $d=1$, however, we prove that $d_{H,iso}\leq\frac{5}{4}d_{GH}$. We also show that the bound is tight. In effect, this gives rise to an $O(n\log{n})$-time algorithm to approximate $d_{GH}$ with an approximation factor of $\left(1+\frac{1}{4}\right)$.' author: - 'Sushovan Majhi[^1][^2]' - Jeffrey Vitter - 'Carola Wenk$^\dag$' bibliography: - 'main.bib' title: 'Approximating Gromov-Hausdorff Distance in Euclidean Space' --- Introduction {#sec:intro} ============ This paper grew out of our effort to compute the Gromov-Hausdorff distance between Euclidean subsets. The Gromov-Hausdorff distance between two abstract metric spaces was first introduced by M. Gromov in ICM 1979 (see Berger [@berger_encounter_2000]). The notion, although it emerged in the context of Riemannian metrics, proves to be a natural distance measure between any two (compact) metric spaces. Only in the last decade has the Gromov-Hausdorff distance received much attention from researchers in more applied fields. In shape recognition and comparison, shapes are regarded as metric spaces that are deformable under a class of transformations. Depending on the application in question, a suitable class of transformations is chosen; the dissimilarity between the shapes is then defined by a suitable notion of *distance measure or error* that is invariant under the desired class of transformations.
For comparing Euclidean shapes under Euclidean isometry, the use of the Gromov-Hausdorff distance is proposed and discussed in [@memoli_theoretical_2005; @memoli_use_nodate; @memoli_gromov-hausdorff_2008; @memoli_properties_2012]. In this paper, we are primarily motivated by the questions pertaining to the computation of the Gromov-Hausdorff distance, particularly between Euclidean subsets. Although the distance measure puts Euclidean shape matching on a robust theoretical foundation [@memoli_theoretical_2005; @memoli_use_nodate], the question of computing the Gromov-Hausdorff distance, or even an approximation thereof, still remains elusive. In recent years, some efforts have been made to address such computational aspects. Most notably, the authors of [@agarwal_computing_2015] show an NP-hardness result for approximating the Gromov-Hausdorff distance between metric trees. For Euclidean subsets, however, the question of a polynomial-time algorithm is still open. In [@memoli_properties_2012], the author shows that computing the distance is related to various NP-hard problems and studies a variant of the Gromov-Hausdorff distance. #### Background and Related Work The notion of Gromov-Hausdorff distance is closely related to the notion of Hausdorff distance. Let $(Z,d_Z)$ be any metric space. We first give a formal definition of the directed Hausdorff distance between any two subsets of $Z$. \[def:dh\] For any two compact subsets $X,Y$ of a metric space $(Z,d_Z)$, the *directed Hausdorff distance* from $X$ to $Y$, denoted $\overrightarrow{d}_H^Z(X,Y)$, is defined by $$\sup_{x\in X}\inf_{y\in Y}d_Z(x,y).$$ Unfortunately, the directed Hausdorff distance is not symmetric.
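This asymmetry is easy to see on finite point sets, where the suprema and infima become maxima and minima; a minimal Python sketch with hypothetical one-dimensional inputs:

```python
def directed_hausdorff(X, Y):
    """Directed Hausdorff distance from X to Y: max over x of min over y of |x - y|."""
    return max(min(abs(x - y) for y in Y) for x in X)

# Asymmetry: every point of X is near Y, but not conversely.
X = [0.0]
Y = [0.0, 10.0]
print(directed_hausdorff(X, Y))  # 0.0
print(directed_hausdorff(Y, X))  # 10.0
```

The point $10.0$ of $Y$ has no nearby point in $X$, so swapping the arguments changes the value.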
To retain symmetry, the *Hausdorff distance* is defined in the following way: \[def:h\] For any two compact subsets $X,Y$ of a metric space $(Z,d_Z)$, their *Hausdorff distance*, denoted $d_H^Z(X,Y)$, is defined by $$\max\left\{\overrightarrow{d}_H^Z(X,Y),\overrightarrow{d}_H^Z(Y,X)\right\}.$$ To keep our notations simple, we drop the superscript when it is understood that $Z$ is taken to be $\R^d$ and $X,Y$ are Euclidean subsets equipped with the standard Euclidean metric $\mod{\cdot}$. The distance $d_{H}$ can be computed in $O(n\log{n})$-time for finite point sets with at most $n$ points; see [@Alt1995]. We are now in a position to define the Gromov-Hausdorff distance formally. Unlike the Hausdorff distance, the Gromov-Hausdorff distance can be defined between two abstract metric spaces $(X,d_X)$ and $(Y,d_Y)$ that may not share a common ambient space. We start with the following formal definition: \[def:gh\] The *Gromov-Hausdorff distance*, denoted $d_{GH}(X,Y)$, between two metric spaces $(X,d_X)$ and $(Y,d_Y)$ is defined to be $$d_{GH}(X,Y)=\inf_{\substack{f:X\to Z \\g:Y\to Z\\ Z}}d_H^Z(f(X),g(Y)),$$ where the infimum is taken over all metric spaces $(Z,d_Z)$ and isometric embeddings $f:X\to Z$, $g:Y\to Z$. In order to present an equivalent definition of the Gromov-Hausdorff distance that is computationally viable, we first define the notion of a correspondence. \[def:cor\] A *correspondence* $\C$ between any two (non-empty) sets $X$ and $Y$ is defined to be a subset $\C\subseteq X\times Y$ with the following two properties: i) for any $x\in X$, there exists a $y\in Y$ such that $(x,y)\in\C$, and ii) for any $y\in Y$, there exists an $x\in X$ such that $(x,y)\in\C$. A correspondence $\C$ is a special *relation* that assigns to every point of both $X$ and $Y$ a corresponding point. If the sets $X$ and $Y$ in the above definition are equipped with metrics $d_X$ and $d_Y$ respectively, we can also define the distortion of the correspondence $\C$.
Let $\C$ be a correspondence between two metric spaces $(X,d_X)$ and $(Y,d_Y)$, then its *distortion*, denoted $Dist(\C)$, is defined to be $$\sup_{(x_1,y_1),(x_2,y_2)\in\C}\mod{d_X(x_1,x_2)-d_Y(y_1,y_2)}.$$ The distortion $Dist(\C)$ is sometimes called the *additive* distortion as opposed to the *multiplicative* distortion; see [@kenyon_low_2010] for a definition. For non-empty sets $X,Y$, we denote by $\C(X,Y)$ the set of all correspondences between $X$ and $Y$. We note the following relation, which can be used to give an equivalent definition of the Gromov-Hausdorff distance via correspondences. For a proof of the following, the readers are encouraged to see [@burago_course_2001]. For any two compact metric spaces $(X,d_X)$ and $(Y,d_Y)$, the following relation holds: $$d_{GH}(X,Y)=\frac{1}{2}\inf\limits_{\C\in\C(X,Y)} Dist(\C).$$ This work is primarily motivated by the question of approximating the Gromov-Hausdorff distance between compact sets $X,Y\subset\R^d$. In [@memoli_gromov-hausdorff_2008], the authors use a related notion $d_{H,iso}$ in an effort to bound the Gromov-Hausdorff distance in the Euclidean case. For any $d\geq1$, a Euclidean isometry $T:\R^d\to\R^d$ is defined to be a map that preserves the distance, i.e., $\mod{T(a)-T(b)}=\mod{a-b}~\forall a,b\in\R^d$. When $d=1$, $T$ can only be a translation or a reflection (flip). For $d=2$, a Euclidean isometry is characterized by a combination of a translation, a rotation by an angle, or a mirror-reflection. For more about Euclidean isometries, see [@artin2011algebra]. We denote by $\E(\R^d)$ the set of all isometries of $\R^d$. \[def:hiso\] For any two compact subsets $X,Y$ of $\R^d$, we define $d_{H,iso}(X,Y)$ to be $$\inf\limits_{T\in\mathcal{E}(\R^d)} d_H(X,T(Y)).$$ The authors show in [@memoli_gromov-hausdorff_2008] the following bounds, relating $d_{H,iso}$ and $d_{GH}$ between two compact subsets $X,Y$ of $\R^d$.
$$\label{eqn:memoli} d_{GH}(X,Y)\leq d_{H,iso}(X,Y)\leq c'_d(M)^\frac{1}{2}\sqrt{d_{GH}(X,Y)},$$ where $M=\max\big\{\mbox{diameter}(X),\mbox{diameter}(Y)\big\}$ and $c'_d$ is a constant that depends only on the dimension $d$. In the inequality above, note that the upper bound depends on the diameter of the input sets $X$ and $Y$. For $d\geq 2$, such a dependence is unavoidable. For $d=1$, however, we show that such a dependence disappears, giving rise to an algorithm that approximates $d_{GH}$ with $d_{H,iso}$. #### Our Contribution Our main contribution in this work is to provide a satisfactory answer to the question of understanding the relation between $d_{H,iso}$ and $d_{GH}$ when $X,Y$ are compact subsets of $\R^1$ equipped with the standard Euclidean metric. In our main theorem, we show that $d_{H,iso}(X,Y)\leq\frac{5}{4}d_{GH}(X,Y)$ for any compact $X,Y\subset\R^1$. For subsets of the real line, it was believed for a long time that $d_{GH}=d_{H,iso}$. We show that this is, in fact, not true by showing that the bound $\frac{5}{4}$ is tight. Since $d_{H,iso}(X,Y)$ can be computed in $O(n\log{n})$-time ([@ROTE1991123]), we provide an $O(n\log{n})$-time algorithm to approximate $d_{GH}(X,Y)$ for finite $X,Y\subset\R^1$ with an approximation factor of $(1+\frac{1}{4})$. Approximating Gromov-Hausdorff Distance in $\R^1$ {#sec:1d} ================================================= This section is devoted to our main result on approximating the Gromov-Hausdorff distance between subsets of the real line. Unless stated otherwise, in this section we always assume that $X,Y$ are compact subsets of $\R^1$ and both are equipped with the standard Euclidean metric denoted by $\mod{\cdot}$.\ In $\R^1$ it often helps to visualize $X\times Y$ on the disjoint union of two real lines in $\R^2$ and a correspondence $\C\in\C(X,Y)$ by edges between the corresponding points; see the figure below. Such a two-dimensional visualization comes in handy for the proofs.
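As a sanity check on the correspondence-based formula $d_{GH}=\frac{1}{2}\inf_{\C}Dist(\C)$, the following brute-force Python sketch enumerates all correspondences between two tiny finite subsets of $\R^1$; it is exponential in $|X\times Y|$ and meant only as an illustration on hypothetical inputs, not as an algorithm:

```python
from itertools import product, combinations

def distortion(C):
    """Additive distortion of a correspondence C between subsets of R^1."""
    return max(abs(abs(x1 - x2) - abs(y1 - y2))
               for (x1, y1), (x2, y2) in product(C, repeat=2))

def gromov_hausdorff_1d(X, Y):
    """d_GH = (1/2) * minimal distortion, by enumerating all correspondences."""
    pairs = list(product(X, Y))
    best = float("inf")
    for r in range(1, len(pairs) + 1):
        for C in combinations(pairs, r):
            # A correspondence must cover every point of X and every point of Y.
            if {x for x, _ in C} == set(X) and {y for _, y in C} == set(Y):
                best = min(best, distortion(C))
    return best / 2

print(gromov_hausdorff_1d([0, 1], [0, 1]))  # 0.0
```

For $X=\{0,2\}$ and $Y=\{0,1\}$ the minimal distortion is $1$, so the sketch returns $0.5$, in line with the factor $\frac{1}{2}$ in the formula.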
For a correspondence $\C\in\C(X,Y)$, we say a pair of edges $(x_1,y_1)$ and $(x_2,y_2)$ are *crossing* if they cross in the usual sense, i.e., either $x_1<x_2$ but $y_1>y_2$, or $x_1>x_2$ but $y_1<y_2$; see the figure below. ![On the left, the (sorted) $X=\{x_1,x_2\}$ and $Y=\{y_1,y_2,y_3\}$ are identified as subsets of the top and the bottom lines respectively. The points of $X$ are shown in green, and the points of $Y$ are shown in yellow. We visualize the correspondence $\C=\{(x_1,y_1),(x_2,y_2),(x_2,y_3)\}$ by the red edges between the respective points. Also, the edges $(x_1,y_1)$ and $(x_2,y_2)$ are crossing. On the right, the distortion $D$ of a correspondence is attained by the pairs $(x',y')$ and $(x,y)$.[]{data-label="fig:visual"}](visual.pdf "fig:") ![](standard-1.pdf "fig:") For a correspondence $\C\in\C(X,Y)$ between two compact sets with $Dist(\C)=D$, there exists a pair of edges $(x',y'),(x,y)\in\C$ such that $\bigmod{\mod{x-x'}-\mod{y-y'}}=D$. We can further assume, without loss of generality, that $x\geq x'$ and $(x-x')\leq\mod{y-y'}$. Then, there exists an isometry $T\in\E(\R^1)$ such that the edges $(x',T(y'))$ and $(x,T(y))$ do not cross and $\big(x'-T(y')\big)=\big(T(y)-x\big)=\frac{D}{2}$; see the figure. From now on, we always assume this standard configuration for any given compact $X,Y\subset\R^1$ and a correspondence $\C$ between them.\ Now, we present the main result of this section.
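The crossing test used throughout the proofs amounts to comparing the order of the endpoints on the two lines; a minimal sketch with hypothetical edge coordinates:

```python
def crossing(e1, e2):
    """Edges e1=(x1,y1), e2=(x2,y2) of a correspondence cross iff the order
    of their X-endpoints is opposite to the order of their Y-endpoints."""
    (x1, y1), (x2, y2) = e1, e2
    return (x1 < x2 and y1 > y2) or (x1 > x2 and y1 < y2)

print(crossing((0, 2), (1, 0)))  # True: X-order 0 < 1, Y-order 2 > 0
print(crossing((0, 0), (1, 2)))  # False: both orders agree
```

Note that the test is symmetric in the two edges, as the definition requires.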
We also know that $d_{GH}(X,Y)\leq d_{H,iso}(X,Y)$ for any compact $X,Y\subset\R^d$; see [@memoli_gromov-hausdorff_2008]. Together with this, our main theorem thus gives us the approximation algorithm for $d_{GH}$ with an approximation factor of $\left(1+\frac{1}{4}\right)$. Later, we also show that the upper bound is tight. \[thm:gh\] For any two compact $X,Y\subset\R^1$ we have $$d_{H,iso}(X,Y)\leq\frac{5}{4}d_{GH}(X,Y).$$ In order to prove the result, it suffices to show that for any correspondence $\C\in\C(X,Y)$ with $Dist(\C)=D$, there exists a Euclidean isometry $T\in\mathcal{E}(\R^1)$ such that $$d_H(X,T(Y))\leq\frac{5D}{8}.$$ Depending on the crossing behavior, we classify a given correspondence into three main types: no double crossing, wide crossing, and no wide crossing, and we treat each type separately below. We start with the definition of a double crossing edge. An edge in a correspondence $\C\in\C(X,Y)$ is said to be *double crossing* if it crosses both the (designated) edges $(x',y')$ and $(x,y)$. In the following, we consider the case when there is no double crossing edge in $\C$. \[thm:no-cross\] For a correspondence $\C\in\C(X,Y)$ without any double crossing, there exists a value $\Delta\in\R$ such that $$d_H(X,Y+\Delta)\leq\frac{5}{8}D,$$ where $D=Dist(\C)$. In the trivial case, when $d_H(X,Y)\leq\frac{5}{8}D$, we take $\Delta=0$. So, we assume the non-trivial case that $d_H(X,Y)>\frac{5}{8}D$. Therefore, there exists either i) $a_0\in X$ with $\min\limits_{b\in Y}\mod{a_0-b}>\frac{D}{2}$ or ii) $b_0\in Y$ with $\min\limits_{a\in X}\mod{a-b_0}>\frac{D}{2}$, or both. We first note that such an $a_0$ cannot belong to $[x',x]$, where $Dist(\C)=\bigmod{\mod{y-y'}-\mod{x-x'}}$. If it did, then for any edge $(a_0,t)\in\C$, we would have $t\in[y',y]$ and $\mod{a_0-t}\leq\frac{D}{2}$ because then either $\bigmod{\mod{a_0-x'}-\mod{t-y'}}\geq\frac{D}{2}$ or $\bigmod{\mod{a_0-x}-\mod{t-y}}\geq\frac{D}{2}$.
In fact, $a_0$ has to belong to $A$ or $A'$ as defined below (see the figure below): $$\begin{aligned} A&=\{p\in X\cap(x+D,\infty)\mid\mbox{ there exists }q\in Y\cap[y',y]\mbox{ with }(p,q)\in\C\},\\ A'&=\{p\in X\cap(-\infty,x'-D)\mid\mbox{ there exists }q\in Y\cap[y',y]\mbox{ with }(p,q)\in\C\}. \end{aligned}$$ Similarly if $b_0$ exists, it has to belong to either $B$ or $B'$: $$\begin{aligned} B&=\{q\in Y\cap(y',y-D)\mid\mbox{ there exists }p\in X\cap[x,\infty)\mbox{ with }(p,q)\in\C\},\\ B'&=\{q\in Y\cap(y'+D,y)\mid\mbox{ there exists }p\in X\cap[x,\infty)\mbox{ with }(p,q)\in\C\}. \end{aligned}$$ ![The no double crossing case is shown. The sets $A$ and $B$ are subsets of the top and bottom thick, blue intervals respectively.[]{data-label="fig:ub"}](ub.pdf) We now argue that it suffices to study only the following three cases. If either $A\neq\emptyset$ or $A'\neq\emptyset$, we can assume, without loss of generality, that $A\neq\emptyset$ and use Case (1) and Case (2). Now if we have $A=A'=\emptyset$ and either $B\neq\emptyset$ or $B'\neq\emptyset$, we can assume, without loss of generality, that $B\neq\emptyset$ and use Case (3). For each of these cases, we will choose a positive $\Delta\leq\frac{3D}{8}$ and show that $d_H(X,Y+\Delta)\leq\frac{D}{2}+\frac{\Delta}{3}$. As a result, $d_H(X,Y+\Delta)\leq\frac{D}{2}+\frac{D}{8}$. We denote $p_0=\max{A}$ and $\eps=p_0-x-D$. We also let $q_0\in Y\cap[y',y]$ such that $(p_0,q_0)\in\C$; let $\eps'=(y-q_0)$. In this case, we choose $\Delta=\frac{3}{4}\eps$.\ We first observe that $(p_0-x')\geq(q_0-y')$. From the distortion of the pair $(x',y')$ and $(p_0,q_0)$, and noting that $(p_0-x')\geq(q_0-y')$, we get $$\begin{aligned} D\geq\mod{(p_0-x')-(q_0-y')}=(p_0-x')-(q_0-y')=\eps+\eps'. \end{aligned}$$ In particular, $\eps'\leq D$. Now from the distortion of the pair $(x,y)$ and $(p_0,q_0)$, we also get $$D\geq\mod{D+\eps-\eps'}=D+\eps-\eps'.$$ This implies that $\eps'\geq\eps$.
Combining this with $\eps+\eps'\leq D$, we obtain $\eps\leq\frac{D}{2}$.\ In order to show $\overrightarrow{d}_H(X,Y+\Delta)\leq\frac{D}{2}+\frac{\Delta}{3}$, we consider the following partition of the real line into intervals: $$\begin{aligned} \I_1 &= \left(-\infty,q_0+\Delta-\frac{D}{2}\right],\; \I_2 = \left[q_0+\Delta-\frac{D}{2},q_0+\Delta+\frac{D}{2}\right],\\ \I_3 &= \left[q_0+\Delta+\frac{D}{2},p_0\right], \mbox{ and } \I_4 = [p_0,\infty). \end{aligned}$$ For an arbitrary point $a\in X$ from any of the above intervals, we show that there exists a point $b\in Y$ such that $\mod{a-(b+\Delta)}\leq\left(\frac{D}{2}+\frac{\Delta}{3}\right)$.\ Let $a\in\I_1\cap X$ and $b\in Y$ such that $(a,b)\in\C$. From $\Delta<\eps\leq\eps'$, it follows that $a<x$. We first note that $(a,b)$ cannot cross $(x,y)$. If $a\in[x',x]$, then $(a,b)$ does not cross $(x,y)$ because of its distortion bound with the edge $(x',y')$. Now if $a<x'$, then $(a,b)$ does not cross $(x,y)$, otherwise $(a,b)$ would be a double crossing edge. Now, we argue that $(a,b)$ cannot cross $(p_0,q_0)$ either. We assume the contrary that $(a,b)$ crosses $(p_0,q_0)$. Since $(a,b)$ does not cross $(x,y)$, we have $(b-q_0)\leq\eps'$. So, we get the following contradiction: $$\begin{aligned} D&\geq\bigmod{\mod{p_0-a}-\mod{q_0-b}}=\bigmod{\left[q_0+\eps'+\frac{D}{2}+\eps-a\right]-(b-q_0)} \geq\left(\frac{D}{2}-\Delta\right)+\eps'+\frac{D}{2}+\eps-\eps'\\ &=D+\eps-\Delta>D. \end{aligned}$$ As a consequence if $b\leq a$, then it follows from the distortion of the pair $(a,b)$ and $(x,y)$ that $(a-b)\leq\frac{D}{2}$. If $b>a$, from the distortion of the pair $(a,b)$ and $(p_0,q_0)$ we get $b-a\leq\frac{D}{2}-(\eps+\eps')$. In either case, we conclude that $\mod{a-(b+\Delta)}\leq\frac{D}{2}$, since $\Delta<\eps$.\ For $a\in \I_2\cap X$ we have $\mod{a-(q_0+\Delta)}\leq\frac{D}{2}$.\ For $a\in \I_3\cap X$, the distance $\mod{a-(y+\Delta)}$ is maximized when $a$ is the right endpoint of the interval $\I_3$. 
Therefore, $$\mod{a-(y+\Delta)}\leq\bigmod{p_0-(y+\Delta)}=\max\left\{\bigmod{\frac{D}{2}-\eps'},\frac{D}{2}+\eps-\Delta\right\}=\frac{D}{2}+\frac{\Delta}{3}.$$\ If $(a,b)\in\C$ with $a\in \I_4\cap X$, we have $a>p_0=\max{A}$. Therefore, $(a,b)$ does not cross $(x,y)$. Also, it cannot cross $(x',y')$ because of our assumption of no double crossing. By an argument similar to $\I_1$, we conclude $\mod{a-(b+\Delta)}\leq\frac{D}{2}$.\ In order to show $\overrightarrow{d_H}(Y+\Delta,X)\leq\frac{D}{2}+\frac{\Delta}{3}$, we consider the following partition of the real line into intervals: $$\begin{aligned} \J_1 &= \left(-\infty,x-\frac{D}{2}-\Delta\right], \J_2 = \left[x-\frac{D}{2}-\Delta,y-\frac{\eps}{2}\right], \J_3 = \left[y-\frac{\eps}{2},y\right],\\ \J_4 &= [y,p_0+\frac{D}{2}-\Delta],\mbox{ and } \J_5 = (p_0+\frac{D}{2}-\Delta,\infty). \end{aligned}$$ For an arbitrary point $b\in Y$ from any of the above intervals, we now show that there exists a point $a$ in $X$ such that $\mod{a-(b+\Delta)}\leq\left(\frac{D}{2}+\frac{\Delta}{3}\right)$.\ Since $B=\emptyset$, for any $b\in \J_1\cap Y$ with edge $(a,b)\in\C$, the edge cannot cross $(x,y)$ or $(p_0,q_0)$. Therefore, $\mod{a-(b+\Delta)}\leq\frac{D}{2}$ as before.\ For $b\in \J_2\cap Y$, the distance $\mod{x-(b+\Delta)}$ is maximized at the endpoints of $\J_2$. Therefore, $$\begin{aligned} \mod{x-(b+\Delta)} \leq\max\left\{\frac{D}{2},\bigmod{x-y+\frac{\eps}{2}-\Delta}\right\} &=\max\left\{\frac{D}{2},\bigmod{-\frac{D}{2}+\frac{2\Delta}{3}-\Delta}\right\}\\ &=\frac{D}{2}+\frac{\Delta}{3}. \end{aligned}$$\ Now, let $b\in \J_3\cap Y$ and $(a,b)\in\C$. Because of the distortion bound $D$ with the edges $(x,y)$ and $(p_0,q_0)$, a moment’s reflection reveals that $a\in\left(p_0-D-\frac{\eps}{2},x+D+\frac{\eps}{2}\right)=\left(x+\frac{\eps}{2},p_0-\frac{\eps}{2}\right)$.
So, the distance $\mod{a-(b+\Delta)}$ is maximized when $b$ is at one of the endpoints of $\J_3$ and $a$ at the other endpoint of the interval $\left(x+\frac{\eps}{2},p_0-\frac{\eps}{2}\right)$. Therefore, $$\begin{aligned} \mod{a-(b+\Delta)} &\leq \max\left\{\bigmod{\left(x+\frac{\eps}{2}\right)-\left(y-\frac{\eps}{2}+\Delta\right)},\bigmod{\left(p_0-\frac{\eps}{2}\right)-(y+\Delta)}\right\}\\ &=\max\left\{\frac{D}{2}-\frac{\Delta}{3},\frac{D}{2}+\frac{\Delta}{3}\right\} =\frac{D}{2}+\frac{\Delta}{3}. \end{aligned}$$\ For $b\in\J_4\cap Y$, the distance $\mod{p_0-(b+\Delta)}$ is maximized when $b$ is one of the endpoints of the interval $\J_4$. So, $$\mod{p_0-(b+\Delta)}\leq\max\left\{\bigmod{p_0-\left(y+\Delta\right)},\frac{D}{2}\right\}=\max\left\{\frac{D}{2}+\eps-\Delta,\frac{D}{2}\right\}=\frac{D}{2}+\frac{\Delta}{3}.$$\ Similarly, for $b\in\J_5\cap Y$, an edge $(a,b)\in\C$ cannot cross $(p_0,q_0)$ because of the distortion bound. Following the argument for $\I_1$, we conclude $\mod{a-(b+\Delta)}\leq\frac{D}{2}+\frac{\Delta}{3}$. We denote $q_1=\min{B}$ and $\eta=y-D-q_1$. We also let $p_1\in X\cap(x,\infty)$ be such that $(p_1,q_1)\in\C$, and set $\eta'=p_1-x$. In this case, we choose $\Delta=\frac{3}{4}\eta$ and show that $d_H(X,Y+\Delta)\leq\frac{D}{2}+\frac{\Delta}{3}$.\ We first observe that $(p_1-x')\geq(q_1-y')$. Therefore, from the distortion of the pair $(x',y')$ and $(p_1,q_1)$ we get $$\begin{aligned} D&\geq\mod{(p_1-x')-(q_1-y')}=(p_1-x')-(q_1-y'),\mbox{ since }(p_1-x')\geq(q_1-y')\\ &=\left[(x-x')+\eta'\right]-\left[\frac{D}{2}+(x-x')-\frac{D}{2}-\eta\right]=\eta'+\eta. \end{aligned}$$ In particular, $\eta'\leq D$. Now from the distortion of the pair $(x,y)$ and $(p_1,q_1)$, we also get $$D\geq\mod{D+\eta-\eta'}=D+\eta-\eta'.$$ This implies that $\eta'\geq\eta$.
Combining this with $\eta+\eta'\leq D$, we get $\eta\leq\frac{D}{2}$.\ In order to show $\overrightarrow{d_H}(X,Y+\Delta)\leq\frac{D}{2}+\frac{\Delta}{3}$, we consider the following intervals of the real line: $$\begin{aligned} \I_1 &=\left(-\infty,q_1+\Delta-\frac{D}{2}\right], \I_2 =\left[q_1+\Delta-\frac{D}{2},x\right], \I_3 =\left[x,x+\frac{\eta}{2}\right],\\ \I_4 &=\left[x+\frac{\eta}{2},y+\Delta-\frac{D}{2}\right],\mbox{ and } \I_5 =\left[y+\Delta-\frac{D}{2},\infty\right). \end{aligned}$$ By the symmetry of the problem, we follow the arguments presented in Case (1) for $\J_5,\J_4,\J_3,\J_2,\J_1$ to conclude the same about the nearest neighbor distances for $\I_1,\I_2,\I_3,\I_4,\I_5$ respectively.\ Now, in order to show $\overrightarrow{d_H}(Y+\Delta,X)\leq\frac{D}{2}+\frac{\Delta}{3}$, we consider the following intervals: $$\begin{aligned} \J_1 &= \left(-\infty,q_1\right), \J_2 = \left[q_1,p_1-\frac{D}{2}-\Delta\right],\\ \J_3 &= \left[p_1-\frac{D}{2}-\Delta,p_1+\frac{D}{2}-\Delta\right],\mbox{ and } \J_4 = \left[p_1+\frac{D}{2}-\Delta,\infty\right).\\ \end{aligned}$$ Again by the symmetry of the problem, we follow the arguments presented in Case (1) for $\I_4,\I_3,\I_2,\I_1$ to conclude the same about the nearest neighbor distances for $\J_1,\J_2,\J_3,\J_4$ respectively. In this case, we take $\Delta=\frac{3}{4}\max\{\eps,\eta\}$ and consider all the intervals from Case (1) and Case (2) to conclude that $d_H(X,Y+\Delta)\leq\frac{D}{2}+\frac{\Delta}{3}$. Now, we undertake the task of finding a suitable isometry/alignment when there is a double crossing in $\C$. In this case, we may have to consider flipping $Y$ to construct such an isometry. We always flip $Y$ about the midpoint of $x$ and $x'$ and denote the image by $\widetilde{Y}$. We first present two technical lemmas. \[lem:cross\] Let $(p,q)\in\C$ be a double crossing; see the figure below.
If we denote $h=(x-x')$, $\eps_1=(p-x)$, and $\eps_2=(y'-q)$, then we have the following: i) $\eps_1-\eps_2\geq h$, ii) $\eps_1-\eps_2\leq D-h$, iii) $h\leq\frac{D}{2}$, and iv) $\mod{p-\widetilde q}\leq\frac{D}{2}-h$, where $\widetilde{q}$ denotes the reflection of $q$ about the midpoint of $x$ and $x'$. ![A double crossing $(p,q)$ is shown.[]{data-label="fig:cross"}](cross.pdf) i) Let us assume the contrary, i.e., $\eps_1<\eps_2+h$. Then, the distortion for the pairs $(x,y)$ and $(p,q)$ becomes $$\mod{\eps_2+D+h-\eps_1}=\eps_2+h+D-\eps_1> D.$$ This contradicts the fact that the distortion of $\C$ is $D$. Therefore, we conclude that $\eps_1-\eps_2\geq h$. ii) Since from (i) we have $\eps_1\geq\eps_2$, from the distortion for the pairs $(p,q)$ and $(x',y')$, we have $$h+\eps_1-\eps_2\leq D.$$ So, $\eps_1-\eps_2\leq D-h$. iii) From (ii) we have $\eps_2+D\geq\eps_1$. Hence, the distortion for the pairs $(p,q)$ and $(x,y)$ implies $$\eps_2+D+h-\eps_1\leq D.$$ Adding the displayed inequalities in (ii) and (iii), we get $2h\leq D$. Hence, $h\leq\frac{D}{2}$. iv) If $p>\widetilde q$, then $$p-\widetilde q=\eps_1-\frac{D}{2}-\eps_2\leq(D-h)-\frac{D}{2}-\eps_2\leq\frac{D}{2}-h.$$ Otherwise, $$\widetilde q-p=\frac{D}{2}+\eps_2-\eps_1\leq\frac{D}{2}-(\eps_1-\eps_2)\leq\frac{D}{2}-h.$$ Therefore, $\mod{p-\widetilde q}\leq\frac{D}{2}-h$. In our pursuit of constructing the right isometry, we first define a wide (double) crossing. We show below that we need to flip $Y$ in the presence of such a wide crossing. A crossing edge $(p,q)\in\C$ is called a *wide crossing* if either $p$ or $q$ lies outside $(x'-D,x+D)$; see the figure below. We first make an important observation in the following technical lemma. \[lem:wide-cross\] Let there be a wide crossing $(p,q)\in\C$ and an edge $(p_0,q_0)\in\C$ such that $p_0>x+D$ and $y'<q_0<y$. If we denote $\eps=p_0-x-D$, $\eps'=y-q_0$ and $h=x-x'$, then we have $\eps'\geq h$. We prove by contradiction. Let us assume that $\eps'<h$.
In the figure below, we show two possible positions of $p$. In each of the following cases, we arrive at a contradiction. ![A wide crossing $(p,q)$ is shown. Both the cases are shown in bright red.[]{data-label="fig:wide-cross"}](wide-cross.pdf) From the distortion of the pair $(p,q)$ and $(p_0,q_0)$, we have $$\mod{(\eps_1+h+D+\eps)-(\eps'+\eps_2)}\leq D.$$ Since by assumption $h>\eps'$, and from part (i) of the previous lemma $\eps_1-\eps_2\geq h$, we get $$\eps'\geq h+(\eps_1-\eps_2)+\eps\geq2h+\eps>h.$$ From the distortion of the pair $(p,q)$ and $(p_0,q_0)$, we get $$\mod{(\eps_2+D+h-\eps')-(\eps_1-D-\eps)}\leq D.$$ Since $D-(\eps_1-\eps_2)\geq h$ and $h>\eps'$, we have $$\eps'\geq D+h-(\eps_1-\eps_2)+\eps\geq2h+\eps>h.$$ Again from the distortion of the pair $(p,q)$ and $(p_0,q_0)$, we get $$\eps_2+\frac{D}{2}+h+\left(\frac{D}{2}-h\right)-\eps\leq D.$$ We get $\eps_2\leq\eps$. So, $\eps'\geq\eps\geq\eps_2\geq\eps_1+h-D\geq h$. Therefore, $\eps'\geq h$. \[thm:wide-cross\] Let $\C$ be a correspondence between two compact sets $X,Y\subseteq\R^1$ with distortion $D$. If there is a wide crossing $(p,q)\in\C$, then there exists a value $\Delta\in\R$ such that $$d_{H}(X,\widetilde{Y}+\Delta)\leq\frac{5}{8}D,$$ where $\widetilde{Y}$ denotes the reflection of $Y$ about the midpoint of $x$ and $x'$. We first note from the previous lemmas that $h=x-x'\leq\frac{D}{2}$ and $\mod{p-\widetilde q}\leq\left(\frac{D}{2}-h\right)$. Let us define $$A=\{p\in X\cap(x+D,\infty)\mid\mbox{ there exists }q\in Y\cap[y',y]\mbox{ with }(p,q)\in\C\},$$ and $$A'=\{p\in X\cap(-\infty,x'-D)\mid\mbox{ there exists }q\in Y\cap[y',y]\mbox{ with }(p,q)\in\C\}.$$ ![The correspondence with a wide crossing is shown. The sets $A$ and $B$ are subsets of the thick, dark blue regions on top and bottom respectively. In the bottom, we show the configuration when $Y$ is flipped about the midpoint of $x$ and $x'$.[]{data-label="fig:ub-1"}](ub-1.pdf "fig:") ![The correspondence with a wide crossing is shown.
The sets $A$ and $B$ are subsets of the thick, dark blue regions on top and bottom respectively. In the bottom, we show the configuration when $Y$ is flipped about the midpoint of $x$ and $x'$.[]{data-label="fig:ub-1"}](ub-1-flip.pdf "fig:") Let $p_0=\max A$ and $\eps=p_0-x-D$. We now define $$B=\{q\in Y\cap[y,\infty)\mid\mbox{ there exists }p\in X\cap[x,\infty)\mbox{ with }(p,q)\in\C\}.$$ Let us also define $q_1=\max B$, $\eta=q_1-y$, and let $(p_1,q_1)\in\C$ be an edge with $\eta'=p_1-x$. Comparing with the edge $(x,y)$, we get $\eta'\geq\eta$. Because of the distortion bound with the wide crossing edge, we must have $\eta\leq\frac{D}{2}$. From the previous lemma, we also have $\eps'=(y-q_0)\geq h$. If we take $\Delta=\frac{3}{4}\max\{\eps,\eta\}$, we argue that $$d_H(X,\widetilde{Y}+\Delta)\leq\frac{D}{2}+\frac{\Delta}{3}.$$ In order to show that $\overrightarrow{d}_H(X,\widetilde{Y}+\Delta)\leq\frac{D}{2}+\frac{\Delta}{3}$, we define the following intervals: $$\begin{aligned} \I_1 &= \left(-\infty,\widetilde{q_1}+\Delta-\frac{D}{2}\right], \I_2 = \left[\widetilde{q_1}+\Delta-\frac{D}{2},x'\right], \I_3 = \left[x',\widetilde{q_0}+\Delta+\frac{D}{2}\right],\\ \I_4 &= \left[\widetilde{q_0}+\Delta+\frac{D}{2},p_0\right],\mbox{ and } \I_5 = [p_0,\infty). \end{aligned}$$ For $a\in(\I_1\cup\I_5)\cap X$ and an edge $(a,b)\in\C$, the edge has to be a double crossing edge because of the distortion bound with the wide crossing edge. So after the flip, the edge $(a,\widetilde{b})$ does not cross $(p_0,\widetilde{q_0})$ or $(p_1,\widetilde{q_1})$. As a result, when $a\geq\widetilde{b}$, we have $(a-\widetilde{b})\leq\left(\frac{D}{2}-h\right)$ as shown in part (iv) of the earlier lemma, and $(\widetilde{b}-a)\leq\left(\frac{D}{2}-\max\{\eps,\eta\}\right)$ otherwise.
Therefore, $\mod{a-(\widetilde{b}+\Delta)}\leq\frac{D}{2}$.\ For $a\in\I_2\cap X$ we have $$\mod{a-(\widetilde{q_1}+\Delta)}\leq\max\left\{\frac{D}{2},\mod{x'-\widetilde{q_1}-\Delta}\right\}=\max\left\{\frac{D}{2},\bigmod{\frac{D}{2}+\eta-\Delta}\right\}\leq\frac{D}{2}+\frac{\Delta}{3}.$$ For $a\in\I_3\cap X$ we have $$\mod{a-(\widetilde{q_0}+\Delta)} \leq\max\left\{\mod{x'-\widetilde{q_0}-\Delta},\frac{D}{2}\right\} \leq\max\left\{\bigmod{\eps'-\frac{D}{2}},\frac{D}{2}\right\}\leq\frac{D}{2}.$$ For $a\in\I_4\cap X$ we have $$\begin{aligned} \mod{a-(\widetilde{y'}+\Delta)} &\leq\max\left\{\bigmod{\widetilde{q_0}+\frac{D}{2}-\widetilde{y'}},\bigmod{p_0-\widetilde{y'}-\Delta}\right\} =\max\left\{\bigmod{\frac{D}{2}-(D+h-\eps')},\bigmod{\frac{D}{2}+\eps-\Delta}\right\}\\ &=\max\left\{\bigmod{\frac{D}{2}-(\eps'-h)},\bigmod{\frac{D}{2}+\frac{\Delta}{3}}\right\} \leq\frac{D}{2}+\frac{\Delta}{3}. \end{aligned}$$ In order to show that $\overrightarrow{d}_H(\widetilde{Y}+\Delta,X)\leq\frac{D}{2}+\frac{\Delta}{3}$, we define the following intervals: $$\begin{aligned} \J_1 &= \left(-\infty,\widetilde{q_1}\right],\; \J_2 = \left[\widetilde{q_1},x'+\frac{D}{2}-\Delta\right],\; \J_3 = \left[x'+\frac{D}{2}-\Delta,\widetilde{y'}-\frac{2\Delta}{3}\right],\\ \J_4 &= \left[\widetilde{y'}-\frac{2\Delta}{3},\widetilde{y'}\right],\; \J_5 = \left[\widetilde{y'},p_0+\frac{D}{2}-\Delta\right],\mbox{ and } \J_6 = \left[p_0+\frac{D}{2}-\Delta,\infty\right). \end{aligned}$$ For $\J_1$, $\J_2$, $\J_4$, $\J_5$, and $\J_6$ we use the routine arguments from Case (1) of the earlier lemma. As a new situation, we only consider $\J_3$ here. For $b\in\J_3\cap Y$ we have $$\begin{aligned} \mod{x-(b+\Delta)}\leq\max\left\{\bigmod{x-x'-\frac{D}{2}},\bigmod{x-\widetilde{y'}+\frac{2\Delta}{3}-\Delta}\right\} \leq\max\left\{\bigmod{\frac{D}{2}-h},\bigmod{\frac{D}{2}+\frac{\Delta}{3}}\right\}\leq\frac{D}{2}+\frac{\Delta}{3}. \end{aligned}$$ In this case, we assume, without loss of generality, that $\eta_1\leq\eta_2$; see the figure below.
When considering $\widetilde{Y}$, we first note from the distortion bound with $(p,\widetilde{q})$ that $\eta_1,\eta_2\leq\left(\frac{D}{2}-h\right)$.\ If $h>\frac{3D}{8}$, then $\eta_1>\frac{D}{8}$. We take $\Delta=\left(\eta_1-\eta_2'-\frac{D}{8}\right)$, and we argue that $$d_H(X,\widetilde{Y}+\Delta)\leq\frac{D}{2}+\frac{D}{8}.$$ ![Wide crossing exists, and both $A=\emptyset$, $A'=\emptyset$[]{data-label="fig:ub1"}](ub-2.pdf "fig:") ![](ub-2-flip.pdf "fig:") In order to show that $\overrightarrow{d}_H(X,\widetilde{Y}+\Delta)\leq\frac{D}{2}+\frac{D}{8}$, we define the following intervals: $$\begin{aligned} \I_1 &= \left(-\infty,\widetilde{q_1}+\Delta-\frac{D}{2}\right], \I_2 = \left[\widetilde{q_1}+\Delta-\frac{D}{2},x'\right], \I_3 = \left[x',x'+\frac{D}{8}+\Delta\right],\\ \I_4 &= \left[x'+\frac{D}{8}+\Delta,x-\frac{D}{8}+\Delta\right], \I_5 = \left[x-\frac{D}{8}+\Delta,\widetilde{y'}-\frac{5D}{8}+\Delta\right],\mbox{ and } \I_6 = \left[\widetilde{y'}+\Delta+\frac{D}{2},\infty\right). \end{aligned}$$\ For $\I_1$ and $\I_6$, we use the arguments from Case (1).\ For $a\in\I_2\cap X$, we get $$\begin{aligned} \mod{a-(\widetilde{q_1}+\Delta)}\leq\max\left\{\frac{D}{2},x'-\widetilde{q_1}-\Delta\right\} =\max\left\{\frac{D}{2},\frac{D}{2}+\eta_1-\Delta\right\} &=\max\left\{\frac{D}{2},\frac{D}{2}+\eta_1-\eta_1+\frac{D}{8}\right\}\\ &=\frac{D}{2}+\frac{D}{8}. \end{aligned}$$ A similar argument holds also for $\I_3,\I_5$.\ For $a\in\I_4\cap X$, let $b\in[y',y]\cap Y$ such that $(a,b)\in\C$. If $a\geq b$, then by the distortion bound with $(x,y)$ we have $(a-x')\leq(b-y')$. So, $$\begin{aligned} \mod{(\widetilde{b}+\Delta)-a}&=\Delta+\frac{D}{2}+h-(a-x')-(b-y') \leq\Delta+\frac{D}{2}+h-2(a-x')\\ &\leq\Delta+\frac{D}{2}+h-2\left(\frac{D}{8}+\Delta\right) =\frac{D}{2}+h-\frac{D}{4}-\Delta =\frac{D}{2}+h-\frac{D}{4}\\ &\leq\frac{D}{2}+\frac{3D}{8}-\frac{D}{4} =\frac{D}{2}+\frac{D}{8}.
\end{aligned}$$ If $a\leq b$, we argue by symmetry that $\mod{(\widetilde{b}+\Delta)-a}\leq\frac{D}{2}+\frac{D}{8}$.\ In order to show that $\overrightarrow{d}_H(\widetilde{Y}+\Delta,X)\leq\frac{D}{2}+\frac{D}{8}$, we define the following intervals: $$\begin{aligned} \J_1 &= \left(-\infty,\widetilde{q_1}\right], \J_2 = \left[\widetilde{q_1},x'\right], \J_3 = \left[x',x'+\frac{D}{2}-\Delta\right],\\ \J_4 &= \left[x'+\frac{D}{2}-\Delta,x+\frac{D}{2}-\Delta\right], \J_5 = \left[x+\frac{D}{2}-\Delta,\widetilde{q_2}\right],\mbox{ and } \J_6 = \left[\widetilde{q_2},\infty\right). \end{aligned}$$ The analysis for $\J_1,\J_2,\J_3,\J_4$ is similar to Case (1). We note for $\J_5$ that $\mod{(\widetilde{q_2}+\Delta)-p_1} =\frac{D}{2}+\eta_2-\eta_1'+\Delta\leq\frac{D}{2}+\eta_2'-\eta_1+\Delta =\frac{D}{2}+\eta_2'-\eta_1+\left(\eta_1-\eta_2'-\frac{D}{8}\right)\leq\frac{D}{2}$.\ If $h\leq\frac{3D}{8}$, then $\eta_1\geq\frac{D}{8}$. We take $\Delta=\left(\frac{D}{8}-\eta_1\right)$, and we argue that $$d_H(X,\widetilde{Y}+\Delta)\leq\frac{D}{2}+\frac{D}{8}.$$ We use the same intervals as Case (1). With this new $\Delta$, the only changes in the calculations appear in $\I_3$. We show $\I_3$ here.\ $$\begin{aligned} \mod{(\widetilde{b}+\Delta)-a}&=\Delta+\frac{D}{2}+h-(a-x')-(b-y') \leq\Delta+\frac{D}{2}+h-2(a-x')\\ &\leq\Delta+\frac{D}{2}+h-2\left(\frac{D}{8}+\Delta\right) =\frac{D}{2}+h-\frac{D}{4}-\Delta =\frac{D}{2}+h-\frac{D}{4}-\left(\frac{D}{8}-\eta_1\right)\\ &\leq\frac{D}{2}+h-\frac{3D}{8}+\eta_1 \leq\frac{D}{2}+h-\frac{3D}{8}+\left(\frac{D}{2}-h\right) =\frac{D}{2}+\frac{D}{8}. \end{aligned}$$ This completes the proof for wide crossing. In order to complete our analysis of various types of correspondences, we show now that a flip is not required if there is no wide crossing in $\C$. \[thm:no-wide\] Let $\C$ be a correspondence between two compact sets $X,Y\subset\R^1$ with distortion $D$. 
If there are double crossings but not wide, then there exists a value $\Delta\in\R$ such that $$d_H(X,Y+\Delta)\leq\frac{D}{2}+\frac{D}{8}.$$ We assume that there are double crossings in $\C$, but none of them are wide; see . ![No wide crossing exists[]{data-label="fig:cross-1"}](cross-1.pdf) This case is similar to Case (1) and Case (3) of . If $\eta_1\geq\eps$, then we note that $d_H(X,Y)\leq\frac{D}{2}$. This is the trivial case. So, we assume that $\eta_1<\eps$. From the distortion of the pair $(p_0,q_0)$ and $(p_2,q_2)$, we have $$D\geq(\eta_2+h+D+\eps)-(\eta_1+\eps').$$ So, we get $\eta_2+\eps+h\leq\eta_1+\eps'$. We take $\Delta=\eps-\eta_1$. We first consider the following intervals: $$\begin{aligned} \I_1 &= \left(-\infty,y'+\Delta-\frac{D}{2}\right], \I_2 = \left[y'+\Delta-\frac{D}{2},x'\right], \I_3 = [x',x],\\ \I_4 &= \left[x,q_0+\frac{D}{2}+\Delta\right], \I_5 = \left[q_0+\frac{D}{2}+\Delta,p_0\right], \mbox{ and }\I_6 = [p_0,\infty).\end{aligned}$$ The intervals similar to $\I_2,\I_3,\I_4,\I_5$ are considered already in . We show that if $p_2\in\I_1\cap X$, then any edge $(p_2,q_2)\in\C$ cannot cross $(x',y')$, consequently $\mod{p_2-(q_2+\Delta)}\leq\frac{D}{2}$. If we assume the contrary, then the edge has to be a double crossing; see . Since $p_2$ is assumed to be in $\I_1$, we have $\left(\eta_2-\frac{D}{2}\right)>\left(\frac{D}{2}-\Delta\right)$. This would imply $$\eta_2>D-\Delta=D-(\eps-\eta_1)\geq D+\eta_2+h-\eps'=h+\eta_2+(D-\eps')\geq h+\eta_2.$$ This is a contradiction. Therefore, $\overrightarrow{d}_H(X,Y+\Delta)\leq\frac{D}{2}$. For $\overrightarrow{d}_H(Y+\Delta,X)$, the arguments are similar to . In this case, we choose $\Delta=\frac{3}{4}\max\{\eta_1,\eps_2\}$ and conclude the result using arguments similar to Case (2) in . This concludes the proof. 
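The case analysis above consists entirely of bounds on directed Hausdorff distances of translated sets. For finite subsets of $\R^1$ these quantities are directly computable; the following Python sketch (ours, purely illustrative and not part of the proofs) implements the definitions used throughout:

```python
def directed_hausdorff(X, Y):
    """sup over a in X of inf over b in Y of |a - b|, for finite X, Y in R^1."""
    return max(min(abs(a - b) for b in Y) for a in X)

def hausdorff(X, Y):
    """d_H(X, Y): the larger of the two directed Hausdorff distances."""
    return max(directed_hausdorff(X, Y), directed_hausdorff(Y, X))

def translate(Y, delta):
    """The translated set Y + delta."""
    return [b + delta for b in Y]
```

For instance, `hausdorff([0, 1], [0, 2])` evaluates to `1`, realized by the points $1$ and $2$.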
We conclude this section by showing in that the bound of is a tight upper bound in the following sense: \[thm:lb\] For any $0<\eps<\frac{1}{4}$ and $\delta>0$, there exist compact $X,Y\subset\R$ with $d_{GH}(X,Y)=\delta$ and $$d_{H,iso}(X,Y)=\left(\frac{5}{4}-\eps\right)\delta.$$ It suffices to assume that $\eps=\frac{1}{4(2k+1)}$ for some $k\in\mathbb{N}$. We now take (sorted) $$X=\{x',x,x_k,x_{k-1},\cdots,x_1,x_0\}\mbox{ and }Y=\{y',y_0,y_1,\cdots,y_{k-1},y_k,y\},$$ with distances as shown in . As a result, we also have $(y_i-x) =4i\eps\delta$ and $(x_k-y_i)=2\delta+4(k-i+1)\eps\delta$, $\forall i\in\{0,1,2,\cdots,k\}$. ![This picture demonstrates the configuration of $X$ and $Y$. The correspondence $\C$ is shown using the (red) edges. In the bottom, $X$ and $\widetilde{Y}$, the reflection of $Y$ about the midpoint of $x$ and $x'$, are shown, along with the correspondence $\C$ by the red edges.[]{data-label="fig:lb"}](lb.pdf "fig:") ![This picture demonstrates the configuration of $X$ and $Y$. The correspondence $\C$ is shown using the (red) edges. In the bottom, $X$ and $\widetilde{Y}$, the reflection of $Y$ about the midpoint of $x$ and $x'$, are shown, along with the correspondence $\C$ by the red edges.[]{data-label="fig:lb"}](lb-flip.pdf "fig:") To prove our claim that $d_{H,iso}(X,Y)=\left(\frac{5}{4}-\eps\right)\delta$, we consider translating both $Y$ and $\widetilde{Y}$, the reflection of $Y$ about the midpoint of $x$ and $x'$. When translating $\widetilde{Y}$, we note that the smallest Hausdorff distance of $\frac{3\delta}{2}$ is achieved for a translation of $\widetilde{Y}$ by an amount of $\frac{\delta}{2}$ to the right. For this amount of translation, $\widetilde{y'}$ becomes the midpoint of $x$ and $x_0$, where $\widetilde{y'}$ is the reflection of $y'$ about the midpoint of $x$ and $x'$. And, all the other points of $\widetilde{Y}$ are at distance at least $50\delta$ from $x$. Now, we consider translating $Y$ by an amount $\Delta\in\R$. 
We first observe that $d_H(X,Y)=2\delta$, and the distance is attained by $x_0$ and $y$. Now, a translation of $Y$ to the left is only going to increase the Hausdorff distance $d_H(X,Y+\Delta)$. Taking this argument one step further we get the following analysis as we vary $\Delta$: If $\Delta\in\left(-\infty,\frac{3\delta}{4}+\eps\delta\right)$, then the pair $(x_0,y)$ gives $$d_H(X,Y+\Delta)=2\delta-\Delta>2\delta-\frac{3\delta}{4}-\eps\delta=\left(\frac{5}{4}-\eps\right)\delta.$$ For $\Delta=\frac{3\delta}{4}+\eps\delta$, we get $x_0-(y+\Delta)=\left(\frac{5}{4}-\eps\right)\delta$. Also, $y_k+\Delta-x=\left(\frac{5}{4}-\eps\right)\delta$ and $x_k-(y_k+\Delta)=\left(\frac{5}{4}-\eps\right)\delta+4\eps\delta$. So, $d_H(X,Y+\Delta)=\left(\frac{5}{4}-\eps\right)\delta$, which is attained by $\mod{x_0-y_k}$. Following this pattern, we conclude that $d_H(X,Y+\Delta)>\left(\frac{5}{4}-\eps\right)\delta$, except for $\Delta=\frac{3\delta}{4}+\eps\delta+4i\eps\delta$ for $i\in\{0,1,2,\cdots,k\}$. Therefore, $d_{H,iso}(X,Y)=\left(\frac{5}{4}-\eps\right)\delta$. We summarize our analysis in . 
$\Delta$ $\overrightarrow{d}_H(X,Y+\Delta)$ $\overrightarrow{d}_H(Y+\Delta,X)$ $d_H(X,Y+\Delta)$ -------------------------------------------------------------------------------------------------------- ------------------------------------ ------------------------------------ ---------------------------------------- $\left(-\infty,\frac{3\delta}{4}+\eps\delta\right)$ $(x_0,y)$ – $>\left(\frac{5}{4}-\eps\right)\delta$ $\frac{3\delta}{4}+\eps\delta$ $(x_0,y)$ $(y_k,x),(y,x_k)$ $\left(\frac{5}{4}-\eps\right)\delta$ $\left(\frac{3\delta}{4}+\eps\delta,\frac{3\delta}{4}+\eps\delta+4\eps\delta\right)$ – $(y_k,x),(y_k,x_k)$ $>\left(\frac{5}{4}-\eps\right)\delta$ $\frac{3\delta}{4}+\eps\delta+4\eps\delta$ – $(y_{k-1},x),(y_k,x_k)$ $\left(\frac{5}{4}-\eps\right)\delta$ $\cdots$ $\cdots$ $\cdots$ $\cdots$ $\left(\frac{3\delta}{4}+\eps\delta+4i\eps\delta,\frac{3\delta}{4}+\eps\delta+4(i+1)\eps\delta\right)$ – $(y_{k-i},x),(y_{k-i},x_k)$ $>\left(\frac{5}{4}-\eps\right)\delta$ $\frac{3\delta}{4}+\eps\delta+4(i+1)\eps\delta$ – $(y_{k-i-1},x)$, $(y_{k-i},x_k)$ $\left(\frac{5}{4}-\eps\right)\delta$ $\cdots$ $\cdots$ $\cdots$ $\cdots$ $\frac{3\delta}{4}+\eps\delta+4k\eps\delta$ $(x,y_0)$ $(y_0,x)$ $(\frac{5}{4}-\eps)\delta$ $\left(\frac{3\delta}{4}+\eps\delta+4k\eps\delta,\infty\right)$ $(x,y_0)$ – $>\left(\frac{5}{4}-\eps\right)\delta$ : A summary of $d_H(X,Y+\Delta)$ is recorded for $\Delta\in\R$. In the second and third columns, the directed Hausdorff distances are achieved for the shown pairs of points. The other columns are self-explanatory.[]{data-label="tab:lb"} With the $d_{H,iso}(X,Y)$ computed, we now define the following correspondence $\C$ between $X$ and $Y$: $$\C=\big\{(x_i,y_i)\mid i\in\{0,1,\cdots,k\}\big\}\cup\big\{(x',y'),(x,y)\big\}.$$ The distortion of $\C$ is evidently $2\delta$. Moreover, we observe that $\C$ is an optimal correspondence. Therefore, $d_{GH}(X,Y)=\delta$. 
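The translation analysis carried out above by hand can also be replayed numerically for finite sets: a brute-force grid search over the shift $\Delta$ approximates $\min_\Delta d_H(X,Y+\Delta)$. The sketch below is our own and only illustrative (a uniform grid, reflections not searched); it is not an algorithm from this paper:

```python
def best_translation(X, Y, steps=2001):
    """Approximate min over Delta of d_H(X, Y + Delta) for finite X, Y in R^1,
    by evaluating a uniform grid of translations (reflections not considered)."""
    def hausdorff(A, B):
        directed = lambda P, Q: max(min(abs(p - q) for q in Q) for p in P)
        return max(directed(A, B), directed(B, A))
    # any optimal shift lies between these two extreme alignments
    lo, hi = min(X) - max(Y), max(X) - min(Y)
    best = float('inf')
    for i in range(steps):
        delta = lo + (hi - lo) * i / (steps - 1)
        best = min(best, hausdorff(X, [y + delta for y in Y]))
    return best
```

For $X=\{0,4\}$ and $Y=\{0,2\}$ the search returns $1$, matching the general lower bound $\frac{1}{2}\bigl|\operatorname{diam}X-\operatorname{diam}Y\bigr|$ for translations.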
Conclusions and Future Work =========================== In this work, we focus on approximating the Gromov-Hausdorff distance by the Hausdorff distance for subsets of $\R^1$. We believe that the problem of computing the Gromov-Hausdorff distance in the $\R^1$ case is NP-hard. The question of a polynomial time approximation algorithm for subsets of $\R^d$ is still open for $d\geq2$. Acknowledgments =============== The authors would like to thank Yusu Wang and Facundo Mémoli for hosting Sushovan Majhi at OSU and for discussions on the project during the visit. We also thank Helmut Alt for his valuable feedback during his visit at Tulane University in the Spring of 2019. A Weak Upper Bound of 2 ======================= \[thm:2-ub\] For any two compact subsets $X,Y$ of $\R^1$, we have the following $$d_{H,iso}(X,Y)\leq 2d_{GH}(X,Y).$$ Let $\C$ be any correspondence between two compact subsets $X,Y$ of $\R^1$. There exists a pair of relatives $(x,y),(x',y')\in\C$ such that $\bigmod{\mod{x-x'}-\mod{y-y'}}=D$, where $D$ is the distortion of $\C$. Without loss of generality, we assume that $x\leq x'$ and $\mod{x-x'}\leq\mod{y-y'}$. Then, there exists an $\R^1$-isometry such that, when applied on $Y$, the pairs look like . From now on, we assume this configuration for any given correspondence $\C$. It suffices to show that for any correspondence $\C\subseteq X\times Y$ with distortion $D$, there exists a Euclidean isometry $T\in\mathcal{E}(\R^1)$ such that $$d_H(X,T(Y))\leq D.$$ ![This standard alignment is assumed for $\C$ in this proof. We may need to apply a Euclidean isometry on $Y$ so that $(x_L,y_1)$ and $(x_R,y_2)$ do not cross and $x_L=y_1$.[]{data-label="fig:standard"}](standard.pdf) Let us take an arbitrary correspondence $\C$ between $X$ and $Y$ with distortion $D$. Let us denote $x_L=\min{X}$ and $x_R=\max{X}$. We also assume that $(x_L,y_1),(x_R,y_2)\in\C$ for some $y_1,y_2\in Y$. 
Without loss of generality, we can assume that the edges $(x_L,y_1),(x_R,y_2)$ do not cross, i.e., $y_1\leq y_2$. This may require flipping $Y$ once, but applying such an isometry is distortion-safe. We can further assume that $x_L=y_1$, which may require an additional translation. See . In this case, we claim that $d_H(X,Y)\leq D$. To see the claim, consider any edge $(p,q)\in\C$. Since the edge does not cross $(x_L,y_1)$, we must have $\mod{p-q}\leq D$. So, $d_H(X,Y)\leq D$. Now suppose that there is at least one edge that crosses the edge $(x_L,y_1)$. We now let $$\eps_1=\max\{(x-x_L)\mid\text{the edge }(x,y)\in\C\text{ crosses }(x_L,y_1)\text{ for some }y\in Y\},$$ and $$\eps_2=\max\{(y_1-y)\mid\text{the edge }(x,y)\in\C\text{ crosses }(x_L,y_1)\text{ for some }x\in X\}.$$ We first observe that $\eps_1,\eps_2>0$. We first consider the case $\eps_1,\eps_2\leq D$. We claim that $d_H(X,Y)\leq D$. To see that $\overrightarrow{d_H}(X,Y)\leq D$, consider an $x\in X$. If $x\leq x_L+\eps_1$, then we have $\mod{x-y_1}\leq D$. If $x>x_L+\eps_1$, then for $(x,y)\in\C$ the edge $(x,y)$ cannot cross $(x_L,y_1)$, hence we have $\mod{x-y}\leq D$. Now to show that $\overrightarrow{d_H}(Y,X)\leq D$, we take a $y\in Y$. If $y<y_1$, then we have $\mod{y-x_L}\leq D$. If $y>y_1$, we consider $x\in X$ such that $(x,y)\in\C$. Then, $(x,y)$ does not cross $(x_L,y_1)$. Therefore, $\mod{x-y}\leq D$. This proves the claim. The last case deals with the scenario of having at least one edge crossing the edge $(x_L,y_1)$ and either $\eps_1>D$ or $\eps_2>D$. In this case, we pick $T$ to be the translation of $\R^1$ to the right by $D$ and argue that $d_H(X,T(Y))\leq D$. Since the edge $(x_R,y_2)$ does not cross $(x_L,y_1)$, we first note that $\mod{x_R-y_2}\leq D$. Therefore, $\eps_1+\eps_2\leq2D$. To see that $\overrightarrow{d_H}(X,T(Y))\leq D$, consider an $x\in X$. If $x\leq x_L+\eps_1$, then we still have $\mod{x-(y_1+D)}\leq D$. If $x>x_L+\eps_1$, then consider $(x,y)\in\C$. 
The edge $(x,y)$ does not cross $(x_L,y_1)$, hence we have $\mod{x-y}\leq D$. Since $\eps_1+\eps_2>D$, we have $y\leq x$. Hence, $\mod{(y+D)-x}\leq D$. Now to show that $\overrightarrow{d_H}(T(Y),X)\leq D$, we take a $y\in Y$. If $y<y_1$, then we have $\mod{y-x_L}\leq D$. If $y>y_1$, we consider $x\in X$ such that $(x,y)\in\C$. Then, $(x,y)$ does not cross $(x_L,y_1)$. Therefore, $\mod{x-y}\leq D$. This proves the claim. [^1]: supported by the National Science Foundation grant CCF-1618469 [^2]: corresponding author `[email:smajhi@tulane.edu]`
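The bound just proved can be probed numerically: for a correspondence already in the standard alignment used in the proof (edges $(x_L,y_1)$ and $(x_R,y_2)$ non-crossing, $x_L=y_1$), either the identity or the shift by $D$ should bring the Hausdorff distance within the distortion $D$. The following Python sketch is our own and purely illustrative:

```python
def distortion(C):
    """Distortion of a correspondence C = [(x, y), ...] between finite subsets
    of R^1: sup over pairs of edges of | |x - x'| - |y - y'| |."""
    return max(abs(abs(x1 - x2) - abs(y1 - y2))
               for (x1, y1) in C for (x2, y2) in C)

def hausdorff(X, Y):
    directed = lambda A, B: max(min(abs(a - b) for b in B) for a in A)
    return max(directed(X, Y), directed(Y, X))

def within_distortion(C):
    """Check d_H(X, Y + Delta) <= dis(C) for Delta in {0, dis(C)}, mirroring
    the case analysis of the proof (C assumed in standard alignment)."""
    X = sorted({p for p, _ in C})
    Y = sorted({q for _, q in C})
    D = distortion(C)
    return min(hausdorff(X, Y), hausdorff(X, [y + D for y in Y])) <= D
```

For example, the correspondence $\{(0,0),(3,1)\}$ has distortion $2$, and the untranslated pair of sets already satisfies $d_H\leq 2$.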
--- abstract: 'This paper addresses the old but still unresolved problem of energy in General Relativity. We evaluate energy and momentum densities for static and axisymmetric solutions. We specialize to two metrics belonging to the Weyl class, namely the Erez-Rosen and the gamma metrics. We apply four well-known prescriptions of Einstein, Landau-Lifshitz, Papapetrou and M$\ddot{o}$ller to compute the energy-momentum density components. We find that these prescriptions do not provide the same energy density, although the momentum is constant in each case. The results can be matched under particular boundary conditions.' author: - | M. Sharif [^1] and Tasnim Fatima\ Department of Mathematics, University of the Punjab,\ Quaid-e-Azam Campus, Lahore-54590, Pakistan. title: '**Energy Distribution associated with Static Axisymmetric Solutions**' --- [**Keywords:**]{} Energy-momentum, axisymmetric spacetimes. Introduction ============ The problem of the energy-momentum of a gravitational field has always been an attractive issue in the theory of General Relativity (GR). The notion of energy-momentum for asymptotically flat spacetimes is unanimously accepted, but serious difficulties arise in GR in connection with this notion. The energy-momentum of a gravitational field can always be made to vanish locally: one is always able to find a frame in which the energy-momentum of the gravitational field is zero, while in other frames it is nonzero. Noether’s theorem and translation invariance lead to the canonical energy-momentum density tensor, $T_a^b$, which is conserved. $$T^b_{a;b}=0,\quad (a,b=0,1,2,3).$$ In order to obtain a meaningful expression for energy-momentum, a large number of definitions for the gravitational energy-momentum in GR have been proposed. The first attempt was made by Einstein, who suggested an expression for the energy-momentum density \[1\]. 
After this, many physicists including Landau-Lifshitz \[2\], Papapetrou \[3\], Tolman \[4\], Bergmann \[5\] and Weinberg \[6\] proposed different expressions for the energy-momentum distribution. These definitions of energy-momentum complexes give meaningful results when the calculations are performed in Cartesian coordinates. However, the expressions given by M$\ddot{o}$ller \[7,8\] and Komar \[9\] allow one to compute the energy-momentum densities in any spatial coordinate system. An alternative concept of energy, called quasi-local energy, does not restrict one to a particular coordinate system. A large number of definitions of quasi-local masses have been proposed by Penrose \[10\] and many others \[11,12\]. Chang et al. \[13\] showed that every energy-momentum complex can be associated with a distinct boundary term which gives the quasi-local energy-momentum. There is controversy about the importance of non-tensorial energy-momentum complexes, whose physical interpretation has been a problem for scientists. There is also the concern that different energy-momentum complexes could give different results for a given spacetime. Many researchers considered different energy-momentum complexes and obtained encouraging results. Virbhadra et al. \[14-18\] investigated several examples of spacetimes and showed that different energy-momentum complexes could provide exactly the same results for a given spacetime. They also evaluated the energy-momentum distribution for asymptotically non-flat spacetimes and found results contradicting those obtained for asymptotically flat spacetimes. Xulu \[19,20\] evaluated the energy-momentum distribution using the M$\ddot{o}$ller definition for the most general non-static spherically symmetric metric. He found that the result is in general different from those obtained using Einstein’s prescription. Aguirregabiria et al. 
\[21\] proved the consistency of the results obtained by using the different energy-momentum complexes for any Kerr-Schild class metric. On the contrary, one of the authors (MS) considered the class of gravitational waves, the G$\ddot{o}$del universe and homogeneous G$\ddot{o}$del-type metrics \[22-24\] and used the four definitions of the energy-momentum complexes. He concluded that the four prescriptions differ in general for these spacetimes. Ragab \[25,26\] obtained contradictory results for G$\ddot{o}$del-type metrics and for the Curzon metric, which is a special solution of the Weyl metrics. Patashnick \[27\] showed that different prescriptions give mutually contradictory results for a regular MMaS-class black hole. In recent papers, we extended this procedure to the non-null Einstein-Maxwell solutions, the electromagnetic generalization of the G$\ddot{o}$del solution, the singularity-free cosmological model and the Weyl metrics \[28-30\]. We applied four definitions and concluded that none of the definitions provides consistent results for these models. This paper continues the investigation of the energy-momentum distribution for the family of Weyl metrics by using the four prescriptions of the energy-momentum complexes. In particular, we explore the energy-momentum for the Erez-Rosen and gamma metrics. The paper is organized as follows. In the next section, we shall describe the Weyl metrics and two members of this family, the Erez-Rosen and gamma metrics. Section 3 is devoted to the evaluation of the energy-momentum densities for the Erez-Rosen metric by using the prescriptions of Einstein, Landau-Lifshitz, Papapetrou and M$\ddot{o}$ller. In section 4, we shall calculate the energy-momentum density components for the gamma metric. The last section contains a discussion and summary of the results. 
The Weyl Metrics ================ Static axisymmetric solutions to the Einstein field equations are given by the Weyl metric \[31,32\] $$ds^2=e^{2\psi}dt^2-e^{-2\psi}[e^{2\gamma}(d\rho^2+dz^2) +\rho^2d\phi^2]$$ in the cylindrical coordinates $(\rho,~\phi,~z)$. Here $\psi$ and $\gamma$ are functions of coordinates $\rho$ and $z$. The metric functions satisfy the following differential equations $$\begin{aligned} \psi_{\rho\rho}+\frac{1}{\rho}\psi_{\rho}+\psi_{zz}=0,\\ \gamma_{\rho}=\rho(\psi^2_{\rho}-\psi^2_{z}),\quad \gamma_{z}=2\rho\psi_{\rho}\psi_{z}.\end{aligned}$$ It is obvious that Eq.(3) represents the Laplace equation for $\psi$. Its general solution, yielding an asymptotically flat behaviour, will be $$\psi=\sum^\infty_{n=0}\frac{a_n}{r^{n+1}}P_n(\cos\theta),$$ where $r=\sqrt{\rho^2+z^2},~\cos\theta=z/r$ are Weyl spherical coordinates and $P_n(\cos\theta)$ are Legendre Polynomials. The coefficients $a_n$ are arbitrary real constants which are called [*Weyl moments*]{}. It is mentioned here that if we take $$\begin{aligned} \psi=-\frac{m}{r},\quad\gamma=-\frac{m^2\rho^2}{2r^4},\quad r=\sqrt{\rho^2+z^2}\end{aligned}$$ then the Weyl metric reduces to special solution of Curzon metric \[33\]. There are more interesting members of the Weyl family, namely the Erez-Rosen and the gamma metric whose properties have been extensively studied in the literature \[32,34\]. The Erez-Rosen metric \[32\] is defined by considering the special value of the metric function $$2\psi=ln(\frac{x-1}{x+1})+q_2(3y^2-1)[\frac{1}{4}(3x^2-1) ln(\frac{x-1}{x+1})+\frac{3}{2}x],$$ where $q_2$ is a constant. Energy and Momentum for the Erez-Rosen Metric ============================================= In this section, we shall evaluate the energy and momentum density components for the Erez-Rosen metric by using different prescriptions. 
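Eq.(3) is the axisymmetric Laplace equation for $\psi$. As a quick numerical sanity check (ours, not part of the paper), one can verify by finite differences that the Curzon choice $\psi=-m/r$ of Eq.(6) is harmonic away from the origin:

```python
import math

def psi(rho, z, m=1.0):
    """Curzon potential psi = -m/r, with r = sqrt(rho^2 + z^2) (Eq.(6))."""
    return -m / math.sqrt(rho * rho + z * z)

def laplace_residual(rho, z, h=1e-3):
    """psi_{,rho rho} + psi_{,rho}/rho + psi_{,zz} via central differences;
    by Eq.(3) this should vanish away from the origin."""
    p_rr = (psi(rho + h, z) - 2 * psi(rho, z) + psi(rho - h, z)) / h ** 2
    p_r = (psi(rho + h, z) - psi(rho - h, z)) / (2 * h)
    p_zz = (psi(rho, z + h) - 2 * psi(rho, z) + psi(rho, z - h)) / h ** 2
    return p_rr + p_r / rho + p_zz
```

At, e.g., $(\rho,z)=(1,0.5)$ the residual is of the order of the discretization error only.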
To obtain meaningful results in the prescriptions of Einstein, Landau-Lifshitz and Papapetrou, it is required to transform the metric into Cartesian coordinates. This can be done by using the transformation equations $$x=\rho\cos\theta,\quad y=\rho\sin\theta.$$ The resulting metric in these coordinates will become $$ds^2=e^{2\psi}dt^2-\frac{e^{2(\gamma-\psi)}}{\rho^2}(xdx+ydy)^2\nonumber\\ -\frac{e^{-2\psi}}{\rho^2}(xdy-ydx)^2-e^{2(\gamma-\psi)}dz^2.$$ Energy and Momentum in Einstein’s Prescription ---------------------------------------------- The energy-momentum complex of Einstein \[1\] is given by $$\Theta^b_a= \frac{1}{16 \pi}H^{bc}_{a,c},$$ where $$H^{bc}_a=\frac{g_{ad}}{\sqrt{-g}}[-g(g^{bd}g^{ce} -g^{be}g^{cd})]_{,e},\quad a,b,c,d,e = 0,1,2,3.$$ Here $\Theta^0_{0}$ is the energy density, $\Theta^i_{0}~ (i=1,2,3)$ are the momentum density components and $\Theta^0_{i}$ are the energy current density components. The Einstein energy-momentum satisfies the local conservation laws $$\frac{\partial \Theta^b_a}{\partial x^{b}}=0.$$ The required components of $H_a^{bc}$ are the following $$\begin{aligned} H^{01}_{0}&=&\frac{4y}{\rho^2}e^{2\gamma}(y\psi_{,x}-x\psi_{,y}) +\frac{4x}{\rho^2}(x\psi_{,x}+y\psi_{,y})\nonumber\\ &-&\frac{x}{\rho^2}-2x\psi^2_{,\rho}+\frac{x}{\rho^2}e^{2\gamma},\\ H^{02}_{0}&=&\frac{4x}{\rho^2}e^{2\gamma}(x\psi_{,y}-y\psi_{,x}) +\frac{4y}{\rho^2}(x\psi_{,x}+y\psi_{,y})\nonumber\\ &-&\frac{y}{\rho^2}-2y\psi^2_{,\rho}+\frac{y}{\rho^2}e^{2\gamma}.\end{aligned}$$ Using Eqs.(13)-(14) in Eq.(10), we obtain the energy and momentum densities in Einstein’s prescription $$\begin{aligned} \Theta^0_{0}&=&\frac{1}{8\pi \rho^2}[e^{2\gamma}\{\rho^2\psi^2_{,\rho}+2(x^2\psi_{,yy}+y^2\psi_{,xx} -x\psi_{,x}-y\psi_{,y})\}\nonumber\\ &+&2\{x^2\psi_{,xx}+y^2\psi_{,yy}+x\psi_{,x}+y\psi_{,y} -\rho^2\psi_{,\rho}(\psi_{,\rho}+\rho \psi_{,\rho\rho})\}].\end{aligned}$$ All the momentum density components turn out to be zero and hence the momentum becomes constant. 
Energy and Momentum in Landau-Lifshitz’s Prescription ----------------------------------------------------- The Landau-Lifshitz \[2\] energy-momentum complex can be written as $$L^{ab}= \frac{1}{16 \pi}\ell^{acbd}_{,cd},$$ where $$\ell^{acbd}= -g(g^{ab}g^{cd}-g^{ad}g^{cb}).$$ $L^{ab}$ is symmetric with respect to its indices. $L^{00}$ is the energy density and $L^{0i}$ are the momentum (energy current) density components. $\ell^{abcd}$ has the symmetries of the Riemann curvature tensor. The local conservation laws for the Landau-Lifshitz energy-momentum complex turn out to be $$\frac{\partial L^{ab}}{\partial x^{b}}=0.$$ The required non-vanishing components of $\ell^{acbd}$ are $$\begin{aligned} \ell^{0101}&=&-\frac{y^2}{\rho^2}e^{4\gamma-4\psi} -\frac{x^2}{\rho^2}e^{2\gamma-4\psi},\\ \ell^{0202}&=&-\frac{x^2}{\rho^2}e^{4\gamma-4\psi} -\frac{y^2}{\rho^2}e^{2\gamma-4\psi},\\ \ell^{0102}&=&\frac{xy}{\rho^2}e^{4\gamma-4\psi} -\frac{xy}{\rho^2}e^{2\gamma-4\psi}.\end{aligned}$$ Using Eqs.(19)-(21) in Eq.(16), we get $$\begin{aligned} L^{00}&=&\frac{e^{2\gamma-4\psi}}{8\pi \rho^2}[e^{2\gamma}\{2\rho^2\psi^2_{,\rho}-8(y^2\psi^2_{,x} +x^2\psi^2_{,y})+2(x^2\psi_{,xx}+y^2\psi_{,yy}\nonumber\\ &-&x\psi_{,x}-y\psi_{,y})+16xy\psi_{,x}\psi_{,y}-4xy\psi_{,xy}\}\nonumber\\ &-&\rho^2\psi_{,\rho}(3\psi_{,\rho} +2\rho^2\psi^3_{,\rho}+2\rho \psi_{,\rho\rho})-8\rho^2\psi^2_{,\rho}(x\psi_{,x}+y\psi_{,y})\nonumber\\ &-&8(x^2\psi^2_{,x}+y^2\psi^2_{,y})+2(x^2\psi_{,xx}+y^2\psi_{,yy}\nonumber\\ &+&x\psi_{,x}+y\psi_{,y})-16xy\psi_{,x}\psi_{,y}+4xy\psi_{,xy}].\end{aligned}$$ The momentum density vanishes and hence the momentum becomes constant. 
Energy and Momentum in Papapetrou’s Prescription ------------------------------------------------ The energy-momentum distribution in the prescription of Papapetrou \[3\] can be written in the following way $$\Omega^{ab}=\frac{1}{16\pi}N^{abcd}_{,cd},$$ where $$N^{abcd}=\sqrt{-g}(g^{ab}\eta^{cd}-g^{ac}\eta^{bd} +g^{cd}\eta^{ab}-g^{bd}\eta^{ac}),$$ and $\eta^{ab}$ is the Minkowski metric. It follows that the energy-momentum complex satisfies the following local conservation laws $$\frac{\partial \Omega^{ab}}{\partial x^b}=0.$$ $\Omega^{00}$ and $\Omega^{0i}$ represent the energy and momentum (energy current) density components respectively. The required components of $N^{abcd}$ are $$\begin{aligned} N^{0011}&=&-\frac{y^2}{\rho^2}e^{2\gamma}-\frac{x^2}{\rho^2} -e^{2\gamma-4\psi},\\ N^{0022}&=&-\frac{x^2}{\rho^2}e^{2\gamma}-\frac{y^2}{\rho^2} -e^{2\gamma-4\psi},\\ N^{0012}&=&-\frac{xy}{\rho^2}e^{2\gamma}-\frac{xy}{\rho^2}.\end{aligned}$$ Substituting Eqs.(26)-(28) in Eq.(23), we obtain the following energy density $$\begin{aligned} \Omega^{00}&=&\frac{e^{2\gamma}}{8\pi}[\psi^2_{,\rho} -e^{-4\psi}\{\psi^2_{,\rho}+2\rho^2\psi^4_{,\rho} +2\rho \psi_{,\rho}\psi_{,\rho\rho}\nonumber\\ &-&8\psi^2_{,\rho}(x\psi_{,x}+y\psi_{,y})+ 8(\psi^2_{,x}+\psi^2_{,y})-2(\psi_{,xx}+\psi_{,yy})\}].\end{aligned}$$ The momentum density vanishes. Energy and Momentum in Möller’s Prescription -------------------------------------------- The energy-momentum density components in Möller’s prescription \[7,8\] are given as $$M^b_a= \frac{1}{8\pi}K^{bc}_{a,c},$$ where $$K_a^{bc}= \sqrt{-g}(g_{ad,e}-g_{ae,d})g^{be}g^{cd}.$$ Here $K^{bc}_{ a}$ is antisymmetric with respect to the indices $b$ and $c$. $M^0_{0}$ is the energy density, $M^i_{0}$ are the momentum density components, and $M^0_{i}$ are the components of the energy current density. 
The Möller energy-momentum satisfies the following local conservation laws $$\frac{\partial M^b_a}{\partial x^b}=0.$$ Notice that Möller’s energy-momentum complex is independent of coordinates. The required component of $K^{bc}_a$ for the Erez-Rosen metric is the following $$\begin{aligned} K^{01}_0&=&2\rho \psi_{,\rho}.\end{aligned}$$ Substituting Eq.(33) in Eq.(30), we obtain $$\begin{aligned} M^0_0&=&\frac{1}{4\pi}[\psi_{,\rho}+\rho\psi_{,\rho\rho}].\end{aligned}$$ Again, the momentum density vanishes, so the momentum is constant. The partial derivatives of the function $\psi$ are given by $$\begin{aligned} \psi_{,x}&=&\frac{1}{x^2-1}+\frac{q_2}{4}(3y^2-1)[3x ln(\frac{x-1}{x+1})+\frac{3x^2-1}{x^2-1}+3],\\ \psi_{,y}&=&\frac{3yq_2}{4}[(3x^2-1) ln(\frac{x-1}{x+1})+6x],\\ \psi_{,xx}&=&\frac{-2x}{(x^2-1)^2}+\frac{q_2}{4}(3y^2-1)[3 ln(\frac{x-1}{x+1})+2x\frac{3x^2-5}{(x^2-1)^2}],\\ \psi_{,yy}&=&\frac{3q_2}{4}[(3x^2-1) ln(\frac{x-1}{x+1})+6x],\\ \psi_{,xy}&=& \psi_{,yx}=\frac{3yq_2}{4}[3x ln(\frac{x-1}{x+1})+2\frac{3x^2-2}{x^2-1}],\\ \psi_{,\rho}&=& \frac{\rho}{x(x^2-1)}+\frac{\rho q_2}{4x}[3x(3\rho^2-2) ln(\frac{x-1}{x+1})\nonumber\\ &+&2\frac{(3x^2-1)(3y^2-1)}{x^2-1}+18x^2],\end{aligned}$$ $$\begin{aligned} \psi_{,\rho\rho}&=&\frac{1}{x(x^2-1)}-\frac{2\rho^2}{x(x^2-1)^2} +\frac{q_2}{4x^2}(3y^2-1)[3(\rho^2+x^2) ln(\frac{x-1}{x+1})\nonumber\\ &+&\frac{2x}{x^2-1}(3x^2-2+\frac{\rho^2(3x^2-5)}{x^2-1})] +\frac{3\rho q_2}{4}(1+\frac{\rho}{y^2})\nonumber\\ &\times&[(3x^2-1)ln(\frac{x-1}{x+1})+6x]+\frac{3\rho^2q_2}{x}[3 ln(\frac{x-1}{x+1})+2\frac{3x^2-1}{x^2-1}].\end{aligned}$$ Energy and Momentum for the Gamma Metric ======================================== A static and asymptotically flat exact solution to the Einstein vacuum equations is known as the gamma metric. 
This is given by the metric \[34\] $$ds^2=(1-\frac{2m}{r})^{\gamma}dt^2-(1-\frac{2m}{r})^{-\gamma} [(\frac{\Delta}{\Sigma})^{\gamma^2-1}dr^2+\frac{\Delta^{\gamma^2}} {\Sigma^{\gamma^2-1}}d\theta^2+\Delta\sin^2\theta d\phi^2],$$ where $$\begin{aligned} \Delta &=& r^2-2mr,\\ \Sigma &=& r^2-2mr+m^2\sin^2\theta,\end{aligned}$$ $m$ and $\gamma$ are constant parameters. $m=0$ or $\gamma=0$ gives the flat spacetime. For $|\gamma|=1$ the metric is spherically symmetric and for $|\gamma|\neq1$, it is axially symmetric. $\gamma=1$ gives the Schwarzschild spacetime in the Schwarzschild coordinates. $\gamma=-1$ gives the Schwarzschild spacetime with negative mass, as putting $m=-M~(M>0)$ and carrying out a non-singular coordinate transformation $(r\rightarrow R=r+2M)$ one gets the Schwarzschild spacetime (with positive mass) in the Schwarzschild coordinates $(t,R,\theta,\Phi)$. In order to have meaningful results in the prescriptions of Einstein, Landau-Lifshitz and Papapetrou, it is necessary to transform the metric into Cartesian coordinates. We transform this metric into Cartesian coordinates by using $$x=r\sin\theta\cos\phi,\quad y=r\sin\theta\sin\phi,\quad z=r\cos\theta.$$ The resulting metric in these coordinates will become $$\begin{aligned} ds^2&=&(1-\frac{2m}{r})^{\gamma}dt^2-(1-\frac{2m}{r})^{-\gamma} [(\frac{\Delta}{\Sigma})^{\gamma^2-1}\frac{1}{r^2} \{xdx+ydy+zdz\}^2\nonumber\\ &+&\frac{\Delta^{\gamma^2}}{\Sigma^{\gamma^2-1}} \{\frac{xzdx+yzdy-(x^2+y^2)dz}{r^2\sqrt{x^2+y^2}}\}^2 +\frac{\Delta(xdy-ydx)^2}{r^2(x^2+y^2)}].\end{aligned}$$ Now we calculate the energy-momentum densities using the different prescriptions given below. 
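The reduction to Schwarzschild for $\gamma=1$ noted above is easy to confirm numerically from the diagonal components of Eq.(42). The helper below is our own illustrative sketch (the function name and signature are assumptions, not from the paper):

```python
import math

def gamma_metric_diag(r, theta, m, gam):
    """Diagonal components (g_tt, g_rr, g_theta_theta, g_phi_phi) of the
    gamma metric, Eq.(42), in (t, r, theta, phi) coordinates."""
    f = 1 - 2 * m / r
    delta = r * r - 2 * m * r
    sigma = r * r - 2 * m * r + (m * math.sin(theta)) ** 2
    g_tt = f ** gam
    g_rr = -(f ** -gam) * (delta / sigma) ** (gam * gam - 1)
    g_thth = -(f ** -gam) * delta ** (gam * gam) / sigma ** (gam * gam - 1)
    g_phph = -(f ** -gam) * delta * math.sin(theta) ** 2
    return g_tt, g_rr, g_thth, g_phph
```

For `gam = 1` this returns $(1-\frac{2m}{r},\,-(1-\frac{2m}{r})^{-1},\,-r^2,\,-r^2\sin^2\theta)$, the Schwarzschild values.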
Energy and Momentum in Einstein’s Prescription ---------------------------------------------- The required non-vanishing components of $H^{bc}_{a}$ are $$\begin{aligned} H^{01}_{0}&=&4\gamma m \frac{x}{r^3}+(\frac{\Delta} {\Sigma})^{\gamma^2-1}\frac{x}{x^2+y^2}-(\gamma^2+1) (1-\frac{m}{r})\frac{2x}{r^2}\nonumber\\ &+& (\gamma^2-1)(1-\frac{m}{r})\frac{2\Delta x}{\Sigma r^2}+\frac{2\Delta x}{r^4} +(\gamma^2-1)\frac{2m^2xz^2}{\Sigma r^4}\nonumber\\ &-&\frac{xz^2}{r^2(x^2+y^2)}+\frac{x}{r^2},\\ H^{02}_{0}&=&4\gamma m \frac{y}{r^3}+(\frac{\Delta} {\Sigma})^{\gamma^2-1}\frac{y}{x^2+y^2}-(\gamma^2+1) (1-\frac{m}{r})\frac{2y}{r^2}\nonumber\\ &+& (\gamma^2-1)(1-\frac{m}{r})\frac{2\Delta y}{\Sigma r^2}+\frac{2\Delta y}{r^4} +(\gamma^2-1)\frac{2m^2yz^2}{\Sigma r^4}\nonumber\\ &-&\frac{xyz^2}{r^2(x^2+y^2)}+\frac{y}{r^2},\\ H^{03}_{0}&=&4\gamma m \frac{z}{r^3}-(\gamma^2+1)(1-\frac{m}{r})\frac{2z}{r^2}+ (\gamma^2-1)(1-\frac{m}{r})\frac{2\Delta z}{\Sigma r^2}\nonumber\\&+&\frac{2\Delta z}{r^4} -(\gamma^2-1)(x^2+y^2)\frac{2m^2z}{\Sigma r^4}+\frac{2z}{r^2}.\end{aligned}$$ Using Eqs.(47)-(49) in Eq.(10), we obtain non-vanishing energy density in Einstein’s prescription given as $$\begin{aligned} \Theta^0_{0}&=&\frac{1}{8\pi \Sigma^{\gamma^2}r^6}[(\gamma^2-1)\Sigma \Delta^{\gamma^2-2}r^5(r-m)-(\gamma^2-1)\Delta^{\gamma^2-1}r^2\nonumber\\ &\times&\{r^4-mr^3+m^2r^2-m^2(x^2+y^2)\} -(\gamma^2+1)\Sigma^{\gamma^2}r^4\nonumber\\ &+&2(\gamma^2-1)\Sigma^{\gamma^2-1} r^4(r-m)^2 +(\gamma^2-1)m\Delta\Sigma^{\gamma^2-1}r^2\nonumber\\ &-&(\gamma^2-1)\Delta^{\gamma^2-2}\Delta r^4(r-m)^2+(\gamma^2-1)\Delta^{\gamma^2-1}\nonumber\\&\times& \Delta r^5(r-m)+2\Sigma^{\gamma^2}r^3(r-m)-\Sigma^{\gamma^2}\Delta r^2+\Sigma^{\gamma^2}r^4\nonumber\\ &+&3(\gamma^2-1)\Sigma^{\gamma^2}r^2m^2z^2 -(\gamma^2-1)\Sigma^{\gamma^2-1}r^4m^2\nonumber\\ &-&2(\gamma^2-1) \Sigma^{\gamma^2-2}m^4z^2(x^2+y^2)].\end{aligned}$$ The momentum density components become zero and consequently momentum is constant. 
Energy and Momentum in Landau-Lifshitz’s Prescription ----------------------------------------------------- The required non-vanishing components of $\ell^{acbd}$ are $$\begin{aligned} \ell^{0101}&=&-(1-\frac{2m}{r})^{-2\gamma}[\frac{y^2\Delta^{2\gamma^2-1}} {r^2(x^2+y^2)\Sigma^{2(\gamma^2-1)}} +\frac{x^2\Delta^{\gamma^2+1}}{r^6\Sigma^{\gamma^2-1}}\nonumber\\& +&\frac{\Delta^{\gamma^2}x^2z^2} {r^4(x^2+y^2)\Sigma^{\gamma^2-1}}],\\ \ell^{0202}&=&-(1-\frac{2m}{r})^{-2\gamma}[\frac{x^2\Delta^{2\gamma^2-1}} {r^2(x^2+y^2)\Sigma^{2(\gamma^2-1)}} +\frac{y^2\Delta^{\gamma^2+1}}{r^6\Sigma^{\gamma^2-1}}\nonumber\\& +&\frac{\Delta^{\gamma^2}y^2z^2} {r^4(x^2+y^2)\Sigma^{\gamma^2-1}}],\\ \ell^{0303}&=&-(1-\frac{2m}{r})^{-2\gamma}[\frac{z^2\Delta^{\gamma^2+1}} {r^6\Sigma^{\gamma^2-1}}+\frac{(x^2+y^2)\Delta^{\gamma^2}} {r^4\Sigma^{\gamma^2-1}}],\\ \ell^{0102}&=&(1-\frac{2m}{r})^{-2\gamma}[\frac{xy\Delta^{2\gamma^2-1}} {r^2(x^2+y^2)\Sigma^{2(\gamma^2-1)}} -\frac{xy\Delta^{\gamma^2+1}}{r^6\Sigma^{\gamma^2-1}}\nonumber\\& -&\frac{\Delta^{\gamma^2}xyz^2} {r^4(x^2+y^2)\Sigma^{\gamma^2-1}}],\\ \ell^{0103}&=&-(1-\frac{2m}{r})^{-2\gamma}[\frac{xz\Delta^{\gamma^2+1}} {r^6\Sigma^{\gamma^2-1}}-\frac{xz\Delta^{\gamma^2}} {r^4\Sigma^{\gamma^2-1}}],\\ \ell^{0203}&=&-(1-\frac{2m}{r})^{-2\gamma}[\frac{yz\Delta^{\gamma^2+1}} {r^6\Sigma^{\gamma^2-1}}-\frac{yz\Delta^{\gamma^2}} {r^4\Sigma^{\gamma^2-1}}].\end{aligned}$$ When we substitute these values in Eq.(16), it follows that the energy density remains non-zero while momentum density components vanish. 
The energy density is given by $$\begin{aligned} L^{00}&=&\frac{(1-\frac{2m}{r})^{-2\gamma}}{8\pi }[-\frac{4\gamma m^2}{r^4}(\frac{\Delta}{\Sigma})^{\gamma^2-1}(2\gamma+1)+\frac{2\gamma m}{r^7}\{-(\frac{\Delta}{\Sigma})^{2(\gamma^2-1)}r^4\nonumber\\ &+&4(\gamma^2+1)(1-\frac{m}{r})(\frac{\Delta}{\Sigma})^{\gamma^2-1}r^4 -4(\gamma^2-1)(1-\frac{m}{r})(\frac{\Delta}{\Sigma})^{\gamma^2}r^4\nonumber\\ &-&\frac{\Delta^{\gamma^2}}{\Sigma^{\gamma^2-1}}r^2- (\frac{\Delta}{\Sigma})^{\gamma^2-1}r^4+4({\gamma^2-1}) \frac{\Delta^{\gamma^2-1}}{\Sigma^{\gamma^2}}m^2z^2(x^2+y^2)\}\nonumber\\ &+&\frac{1}{r^2}(2\gamma^2-1)(1-\frac{m}{r})(\frac{\Delta} {\Sigma})^{2(\gamma^2-1)}-\frac{2}{r^2}(\gamma^2-1)(1-\frac{m}{r}\nonumber\\ &+&\frac{m^2}{r^2}-\frac{m^2(x^2+y^2)}{r^4})(\frac{\Delta} {\Sigma})^{2\gamma^2-1}-\frac{\Delta^{2\gamma^2-1}} {r^4\Sigma^{2(\gamma^2-1)}}\nonumber\\ &-&\frac{2\gamma^2}{r^2}(\gamma^2+1)(1-\frac{m}{r})^2(\frac{\Delta} {\Sigma})^{\gamma^2-1}+\frac{4}{r^2}({\gamma^4-1}) (1-\frac{m}{r})^2(\frac{\Delta}{\Sigma})^{\gamma^2}\nonumber\\ &-&\frac{2}{r^2}({\gamma^2-1})(1-\frac{m}{r})^2(\frac{\Delta} {\Sigma})^{\gamma^2+1}-2({\gamma^2-1})(1-\frac{m}{r}+\frac{m^2}{r^2}\nonumber\\ &-&\frac{m^2(x^2+y^2)}{r^4})\frac{\Delta^{\gamma^2+1}}{\Sigma^{\gamma^2}}- (\gamma^2+1)\frac{m\Delta^{\gamma^2}}{r^5\Sigma^{\gamma^2-1}}\nonumber\\ &+&3(\gamma^2+1)\frac{\Delta^{\gamma^2}}{r^4\Sigma^{\gamma^2-1}}(1-\frac{m}{r}) +(\gamma^2-1)\frac{m\Delta^{\gamma^2+1}}{r^5\Sigma^{\gamma^2}}(1-\frac{2m}{r}\nonumber\\ &-& \frac{m^2(x^2+y^2)}{r^3}) -2(\gamma^2-1)(x^2+y^2) \frac{m^2z^2\Delta^{\gamma^2+1}}{r^{10}\Sigma^{\gamma^2}}\nonumber\\&-&\frac{3\Delta^{\gamma^2+1}}{r^6\Sigma^{\gamma^2-1}}+\gamma^2(\frac{\Delta} {\Sigma})^{\gamma^2-1}(1-\frac{m}{r})(\frac{1}{r^2}+\frac{2z^2}{r^4}) -2\gamma^2(\gamma^2-1)\nonumber\\&\times&(x^2+y^2)\frac{m^4z^2\Delta^{\gamma^2}} {r^8\Sigma^{\gamma^2+1}} +2(\gamma^2-1)(x^2+y^2)(\frac{\Delta} {\Sigma})^{\gamma^2}\frac{m^2z^2}{r^8}\nonumber\\&-&(\gamma^2-1)(\frac{\Delta} 
{\Sigma})^{\gamma^2}(1-\frac{m}{r}+\frac{m^2}{r^2})\frac{1}{r^2}.\end{aligned}$$ Since the momentum density vanishes, the momentum is constant. Energy and Momentum in Papapetrou’s Prescription ------------------------------------------------ The required non-vanishing components of $N^{abcd}$ are given by $$\begin{aligned} N^{0011}&=&-(\frac{\Delta}{\Sigma})^{\gamma^2-1}\frac{y^2}{x^2+y^2} -\frac{\Delta x^2}{r^4}-\frac{x^2z^2}{r^2(x^2+y^2)}\nonumber\\ &-&(1-\frac{2m}{r})^{-2\gamma} \frac{\Delta^{\gamma^2}}{r^2\Sigma^{\gamma^{2}-1}},\\ N^{0022}&=&-(\frac{\Delta}{\Sigma})^{\gamma^2-1}\frac{x^2}{x^2+y^2} -\frac{\Delta y^2}{r^4}-\frac{y^2z^2}{r^2(x^2+y^2)}\nonumber\\ &-&(1-\frac{2m}{r})^{-2\gamma} \frac{\Delta^{\gamma^2}}{r^2\Sigma^{\gamma^{2}-1}},\\ N^{0033}&=&-\frac{\Delta z^2}{r^4}-\frac{x^2+y^2}{r^2} -(1-\frac{2m}{r})^{-2\gamma} \frac{\Delta^{\gamma^2}}{r^2\Sigma^{\gamma^{2}-1}},\\ N^{0012}&=&(\frac{\Delta}{\Sigma})^{\gamma^2-1}\frac{xy}{x^2+y^2} -\frac{\Delta xy}{r^4}-\frac{xy}{r^2(x^2+y^2)}\nonumber\\ &-&(1-\frac{2m}{r})^{-2\gamma} \frac{\Delta^{\gamma^2}}{r^2\Sigma^{\gamma^{2}-1}},\\ N^{0013}&=&-\frac{\Delta xz}{r^4}+\frac{xz}{r^2},\\ N^{0023}&=&-\frac{\Delta yz}{r^4}+\frac{yz}{r^2}.\end{aligned}$$ Substituting Eqs.(58)-(63) in Eq.(23), we obtain the following energy density and momentum density components $$\begin{aligned} \Omega^{00}&=&\frac{(1-\frac{2m}{r})^{-2\gamma}}{8\pi}[-4\gamma m^2(2\gamma+1)\frac{\Delta^{\gamma^2-2}} {r\Sigma^{\gamma^2-1}}+8\gamma m\{\frac{\Delta^{\gamma^2-2}} {r\Sigma^{\gamma^2-1}}(1-\frac{m}{r})\nonumber\\ &-&(\gamma^2-1)(1-\frac{m}{r})\frac{\Delta^{\gamma^2-1}} {r\Sigma^{\gamma^2}}-(\frac{\Delta}{\Sigma})^{\gamma^2}\frac{1}{r^3}\}\nonumber\\ &-&2\gamma^2(\gamma^2-1)(1-\frac{m}{r})^2\frac{\Delta^{\gamma^2-2}} {\Sigma^{\gamma^2-1}}+4\gamma^2(\gamma^2-1)(1-\frac{m}{r})^2 \frac{\Delta^{\gamma^2-1}}{\Sigma^{\gamma^2}}\nonumber\\ &+&(1-\frac{m}{r})\frac{\gamma^2}{r^2} (\frac{\Delta}{\Sigma})^{\gamma^2-1}-2\gamma^2(\gamma^2-1) 
\frac{(x^2+y^2)\Delta^{\gamma^2}}{r^2\Sigma^{\gamma^2+1}}(1-\frac{m}{r}\nonumber\\ &+&\frac{m^2}{r^2}-\frac{m^2(x^2+y^2)}{r^4})^2-2\gamma^2(\gamma^2-1) \frac{z^2\Delta^{\gamma^2}}{r^2\Sigma^{\gamma^2+1}} (1-\frac{m}{r}\nonumber\\&-&\frac{m^2(x^2+y^2)}{r^4})^2- \frac{\gamma^2-1}{r^2}(\frac{\Delta}{\Sigma})^{\gamma^2} (1-\frac{m}{r}-\frac{2m^2}{r^2}-\frac{3m^2(x^2+y^2)}{r^4})\nonumber\\&-& \frac{\Delta^{\gamma^2}}{r^4\Sigma^{\gamma^2-1}}]+\frac{1}{8\pi}[ (\gamma^2-1)(1-\frac{m}{r})\{\frac{2x^2}{x^2+y^2}-1\} \frac{\Delta^{\gamma^2-2}}{\Sigma^{\gamma^2-1}}\nonumber\\ &+&(\gamma^2-1)\frac{\Delta^{\gamma^2-1}}{\Sigma^{\gamma^2}} (1-\frac{m}{r}+\frac{m^2}{r^2}-\frac{m^2(x^2+y^2)}{r^4}) \{1-\frac{2x^2}{x^2+y^2}\}\nonumber\\ &+&(\frac{\Delta}{\Sigma})^{\gamma^2-1}\{\frac{1}{x^2+y^2} -\frac{2x^2}{(x^2+y^2)^2}\}-\frac{4}{r^2}(1-\frac{m}{r})+\frac{6\Delta}{r^4}].\end{aligned}$$ Energy and Momentum in Möller’s Prescription -------------------------------------------- For the gamma metric, we obtain the following non-vanishing component of $K^{bc}_a$: $$K^{01}_0=-2m\gamma \sin\theta.$$ When we make use of Eq.(65) in Eq.(30), the energy and momentum density components turn out to be $$M^0_0=0$$ and $$M^i_0=0=M^0_i.$$ This shows that the energy and momentum turn out to be constant. Conclusion ========== Energy-momentum complexes provide the same acceptable energy-momentum distribution for some systems. However, for some systems \[22-30\], these prescriptions disagree. The debate on the localization of energy-momentum is an interesting and controversial problem. According to Misner et al. \[35\], energy can only be localized for spherical systems. In a series of papers \[36\], Cooperstock et al. have presented a hypothesis which says that, in a curved spacetime, energy and momentum are confined to the regions of non-vanishing energy-momentum tensor $T_a^b$ of the matter and all non-gravitational fields. 
The results of Xulu \[19,20\] and the recent results of Bringley \[37\] support this hypothesis. Also, in recent work, Virbhadra and his collaborators \[14-18\] have shown that different energy-momentum complexes can provide meaningful results. Keeping these points in mind, we have explored some of the interesting members of the Weyl class for the energy-momentum distribution. In this paper, we evaluate energy-momentum densities for two solutions of the Weyl metric, i.e., the Erez-Rosen and the gamma metrics. We achieve this by using the four well-known prescriptions of Einstein, Landau-Lifshitz, Papapetrou and Möller. From Eqs.(15), (22), (29), (34), (50), (57), (64) and (67), it can be seen that the energy-momentum densities are finite and well defined. We also note that the energy density is different for the four prescriptions. However, the momentum density components turn out to be zero in all the prescriptions, and consequently we obtain constant momentum for these solutions. The results of this paper also support Cooperstock’s hypothesis \[36\] that energy is localized to the region where the energy-momentum tensor is non-vanishing. We would like to mention here that the differing results for the energy-momentum distribution in different prescriptions are not surprising; rather, they confirm that the energy-momentum complexes, being pseudo-tensors, are not covariant objects. This is in accordance with the equivalence principle \[35\], which implies that the gravitational field cannot be detected at a point. These examples indicate that the idea of localization does not follow the lines of pseudo-tensorial construction but instead follows from the energy-momentum tensor itself. This supports the well-defined proposal developed by Cooperstock \[36\] and verified by many authors \[22-30\]. In GR, many energy-momentum expressions (reference-frame-dependent pseudo-tensors) have been proposed. There is no consensus as to which is the best. 
The Hamiltonian approach helps to resolve this enigma. Each expression has a geometrically and physically clear significance associated with the boundary conditions. [**Acknowledgment**]{} We would like to thank the anonymous referee for useful comments. [**References**]{} [\[1\]]{} Trautman, A.: [*Gravitation: An Introduction to Current Research*]{} ed. Witten, L. (Wiley, New York, 1962)169. [\[2\]]{} Landau, L.D. and Lifshitz, E.M.: [*The Classical Theory of Fields*]{} (Addison-Wesley Press, 1962). [\[3\]]{} Papapetrou, A.: [*Proc. R. Irish Acad*]{} [**A52**]{}(1948)11. [\[4\]]{} Tolman, R.C.: [*Relativity, Thermodynamics and Cosmology*]{} (Oxford University Press, Oxford, 1934)227. [\[5\]]{} Bergmann, P.G. and Thomson, R.: Phys. Rev. [**89**]{}(1953)400. [\[6\]]{} Weinberg, S.: [*Gravitation and Cosmology*]{} (Wiley, New York, 1972). [\[7\]]{} Möller, C.: Ann. Phys. (NY) [**4**]{}(1958)347. [\[8\]]{} Möller, C.: Ann. Phys. (NY) [**12**]{}(1961)118. [\[9\]]{} Komar, A.: Phys. Rev. [**113**]{}(1959)934. [\[10\]]{} Penrose, R.: [*Proc. Roy. Soc.*]{} London [**A388**]{}(1982)457;\ [*GR10 Conference*]{}, eds. Bertotti, B., de Felice, F. and Pascolini, A. Padova [**1**]{} (1983)607. [\[11\]]{} Brown, J.D. and York, J.W.: Phys. Rev. [**D47**]{}(1993)1407. [\[12\]]{} Hayward, S.A.: Phys. Rev. [**D49**]{}(1994)831. [\[13\]]{} Chang, C.C., Nester, J.M. and Chen, C.: Phys. Rev. Lett. [**83**]{}(1999)1897. [\[14\]]{} Virbhadra, K.S.: Phys. Rev. [**D42**]{}(1990)2919. [\[15\]]{} Virbhadra, K.S.: Phys. Rev. [**D60**]{}(1999)104041. [\[16\]]{} Rosen, N. and Virbhadra, K.S.: Gen. Relativ. Grav. [**25**]{}(1993)429. [\[17\]]{} Virbhadra, K.S. and Parikh, J.C.: Phys. Lett. [**B317**]{}(1993)312. [\[18\]]{} Virbhadra, K.S. and Parikh, J.C.: Phys. Lett. [**B331**]{}(1994)302. [\[19\]]{} Xulu, S.S.: Int. J. of Mod. Phys. [**A15**]{}(2000)2979; Mod. Phys. Lett. [**A15**]{}(2000)1151 and references therein. [\[20\]]{} Xulu, S.S.: Astrophys. Space Sci. [**283**]{}(2003)23. 
[\[21\]]{} Aguirregabiria, J.M., Chamorro, A. and Virbhadra, K.S.: Gen. Relativ. Grav. [**28**]{}(1996)1393. [\[22\]]{} Sharif, M.: Int. J. of Mod. Phys. [**A17**]{}(2002)1175. [\[23\]]{} Sharif, M.: Int. J. of Mod. Phys. [**A18**]{}(2003)4361; Errata [**A19**]{}(2004)1495. [\[24\]]{} Sharif, M.: Int. J. of Mod. Phys. [**D13**]{}(2004)1019. [\[25\]]{} Gad, R.M.: Astrophys. Space Sci. [**293**]{}(2004)453. [\[26\]]{} Gad, R.M.: Mod. Phys. Lett. [**A19**]{}(2004)1847. [\[27\]]{} Patashnick, O.: Int. J. of Mod. Phys. [**D**]{}(2005) (gr-qc/0408086). [\[28\]]{} Sharif, M. and Fatima, Tasnim: Int. J. of Mod. Phys. [**A20**]{}(2005)4309. [\[29\]]{} Sharif, M. and Fatima, Tasnim: Nuovo Cimento [**B**]{}(2005). [\[30\]]{} Fatima, Tasnim: M.Phil. Thesis (University of the Punjab, Lahore, 2004). [\[31\]]{} Weyl, H.: Ann. Phys. (Leipzig) [**54**]{}(1917)117; [**59**]{}(1919)185;\ Levi-Civita, T.: Atti. Acad. Naz. Lincei Rend. Classe Sci. Fis. Mat. e Nat. [**28**]{}(1919)101;\ Synge, J.L.: [*Relativity, the General Theory*]{} (North-Holland Pub. Co. Amsterdam, 1960). [\[32\]]{} Kramer, D., Stephani, H., MacCallum, M.A.H. and Herlt, E.: [*Exact Solutions of Einstein’s Field Equations*]{} (Cambridge University Press, 2003). [\[33\]]{} Curzon, H.E.J.: [*Proc. Math. Soc.*]{} London [**23**]{}(1924)477. [\[34\]]{} Esposito, F. and Witten, L.: Phys. Lett. [**B58**]{}(1975)357;\ Virbhadra, K.S.: gr-qc/9606004;\ Herrera, L., Di Prisco, A. and Fuenmayor, E.: Class. Quant. Grav. [**20**]{}(2003)1125. [\[35\]]{} Misner, C.W., Thorne, K.S. and Wheeler, J.A.: [*Gravitation*]{} (W.H. Freeman, New York, 1973)603. [\[36\]]{} Cooperstock, F.I. and Sarracino, R.S.: [*J. Phys. A: Math. Gen.*]{} [**11**]{}(1978)877.\ Cooperstock, F.I.: in [*Topics on Quantum Gravity and Beyond*]{}, Essays in honour of Witten, L. on his retirement, ed. Mansouri, F. and Scanio, J.J. (World Scientific, Singapore, 1993); Mod. Phys. Lett. [**A14**]{}(1999)1531; Annals of Phys. 
[**282**]{}(2000)115;\ Cooperstock, F.I. and Tieu, S.: Found. Phys. [**33**]{}(2003)1033. [\[37\]]{} Bringley, T.: Mod. Phys. Lett. [**A17**]{}(2002)157. [^1]: e-mail: msharif@math.pu.edu.pk
--- abstract: 'For some estimation and prediction problems, we solve minimization problems with asymmetric loss functions. Usually, we estimate the regression coefficients for these problems. In this paper, we do not make such an estimation, but rather give a solution by correcting any prediction under the assumption that the prediction error follows a generalized normal distribution. In our method, we can not only minimize the expected value of the asymmetric loss, but also lower the variance of the loss.' author: - 'Naoya Yamaguchi, Yuka Yamaguchi, and Ryuei Nishii' bibliography: - 'reference.bib' title: Minimizing the expected value of the asymmetric loss and an inequality of the variance of the loss --- Introduction {#S1} ============ For some estimation and prediction problems, we solve minimization problems with loss functions, as follows: Let $\{ (x_{i}, y_{i}) \mid 1 \leq i \leq n \}$ be a data set, where $x_{i}$ are $1 \times p$ vectors and $y_{i} \in \mathbb{R}$. We assume that the data relate to a linear model, $$y = X \beta + \varepsilon,$$ where $y = {}^{t}(y_{1}, \ldots, y_{n})$, $\varepsilon = {}^{t}(\varepsilon_{1}, \ldots, \varepsilon_{n})$, and $X$ is the $n \times p$ matrix having $x_{i}$ as the $i$th row. Let $L$ be a loss function and let $r_{i}(\beta) := y_{i} - x_{i} \beta$. Then we estimate the value: $$\begin{aligned} \hat{\beta} := \arg\min_{\beta} \left\{ \sum_{i = 1}^{n} L(r_{i}(\beta)) \right\}. \end{aligned}$$ The case of $L(r_{i}(\beta)) = r_{i}(\beta)^{2}$ is well-known (see, e.g., Refs. [@doi:10.1111/j.1751-5823.1998.tb00406.x], [@legendre1805nouvelles], and [@stigler1981]). In the case of an asymmetric loss function, we refer the reader to, e.g., Refs. [@10.2307/2336317], [@10.2307/24303995], [@10.2307/1913643], and [@10.2307/2289234]. These studies estimate the parameter $\hat{\beta}$. 
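For the familiar squared-loss case above, $\hat{\beta}$ is given in closed form by the normal equations ${}^{t}X X \beta = {}^{t}X y$. The following minimal sketch is ours, with made-up data (not from the references), for $p = 2$ with an intercept and one slope:

```python
# Ordinary least squares for y = X beta + eps with p = 2 (intercept and slope),
# solving the 2x2 normal equations X^T X beta = X^T y directly.
xs = [0.0, 1.0, 2.0, 3.0]        # made-up covariates
ys = [1.0, 3.1, 4.9, 7.2]        # made-up observations

n = len(xs)
sx, sxx = sum(xs), sum(x * x for x in xs)
sy, sxy = sum(ys), sum(x * y for x, y in zip(xs, ys))

det = n * sxx - sx * sx          # determinant of X^T X
intercept = (sxx * sy - sx * sxy) / det
slope = (n * sxy - sx * sy) / det
print(intercept, slope)          # coefficients minimizing the squared loss
```

With these data the fit is intercept $0.99$ and slope $2.04$.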
In this paper, however, we do not make such an estimation, but instead give a solution to the minimization problems by correcting any prediction under the assumption that the prediction error follows a generalized normal distribution. In our method, we can not only minimize the expected value of the asymmetric loss, but also lower the variance of the loss. Let $y$ be an observation value, and let $\hat{y}$ be a predicted value of $y$. We derive the optimized predicted value $y^{*} = \hat{y} + C$ minimizing the expected value of the loss under the following assumptions: 1. The prediction error $z := \hat{y} - y$ is the realized value of a random variable $Z$, whose density function is a generalized Gaussian distribution function (see, e.g., Refs. [@Dytso2018], [@doi:10.1080/02664760500079464], and [@Sub23]) with mean zero $$\begin{aligned} f_{Z}(z) := \frac{1}{2 a b \G(a)} \exp{\left( - \left\lvert \frac{z}{b} \right\rvert^{\frac{1}{a}} \right)}, \end{aligned}$$ where $\G(a)$ is the gamma function and $a$, $b \in \mathbb{R}_{> 0}$. 2. Let $k_{1}$, $k_{2} \in \mathbb{R}_{> 0}$. If there is a mismatch between $y$ and $\hat{y}$, then we suffer a loss, $$\begin{aligned} \Pe(z) := \begin{cases} k_{1} z, & z \geq 0, \\ - k_{2} z, & z < 0. \end{cases}\end{aligned}$$ That is, the solution to the minimization problem is $$\begin{aligned} C = \arg\min_{c} \left\{ \operatorname{{E}}\left[ \Pe(Z + c) \right] \right\}. \end{aligned}$$ The motivation of our research is as follows: (1) Predictions usually cause prediction errors. Therefore, it is necessary to use predictions in consideration of prediction errors. Indeed, because of prediction errors, it is sometimes best not to act exactly as predicted. 
For example, the paper [@Yamaguchi2018] formulates a method for minimizing the expected value of the procurement cost of electricity in two popular spot markets: [*day-ahead*]{} and [*intra-day*]{}, under the assumption that the expected value of the unit prices and the distributions of the prediction errors for the electricity demand traded in the two markets are known. That paper showed that, in some cases, increasing or decreasing the procurement relative to the prediction reduces the expected procurement cost. (2) In recent years, prediction methods have become black boxes owing to big data and machine learning (see, e.g., Ref. [@10.1145/3236009]). The day will soon come when we must minimize an objective function by using predictions obtained by such black-box methods. In our method, even if we do not know how the prediction $\hat{y}$ was produced, we can determine the parameter $C$ if we know the prediction error distribution $f$ and the asymmetric loss function $L$. To obtain $y^{*}$, we derive $\operatorname{{E}}[\Pe(Z + c)]$ for any $c \in \mathbb{R}$. Let $\G(a, x)$ and $\g(a, x)$ be the upper and the lower incomplete gamma functions, respectively (see, e.g., Ref. [@doi:10.1142/0653]). 
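The effect of such a correction can be illustrated numerically before any derivation. The sketch below (an illustration of ours with arbitrarily chosen constants, not part of the paper) draws Laplace-distributed prediction errors, i.e. the $a = 1$ case of the generalized Gaussian family, and compares the average asymmetric loss before and after shifting every prediction by the closed-form constant $C$ derived in Section $3$:

```python
import math
import random

def laplace_sample(b, rng):
    # The difference of two independent Exp(1) draws is Laplace(0, 1); scale by b.
    return b * (rng.expovariate(1.0) - rng.expovariate(1.0))

def loss(z, k1, k2):
    # Asymmetric piecewise-linear loss L(z) = k1 z (z >= 0), -k2 z (z < 0).
    return k1 * z if z >= 0 else -k2 * z

k1, k2, b = 50.0, 10.0, 1.0      # arbitrary illustrative constants
rng = random.Random(0)
errors = [laplace_sample(b, rng) for _ in range(200_000)]

# Closed-form optimal shift for the Laplace case (Section 3):
# C = -sgn(k2 - k1) * b * log(1 - |k2 - k1| / (k1 + k2)).
C = -math.copysign(1.0, k2 - k1) * b * math.log(1.0 - abs(k2 - k1) / (k1 + k2))

mean_loss_0 = sum(loss(z, k1, k2) for z in errors) / len(errors)
mean_loss_C = sum(loss(z + C, k1, k2) for z in errors) / len(errors)
print(mean_loss_0, mean_loss_C)  # the shifted prediction has a smaller average loss
```

With $k_{1} > k_{2}$ the correction is negative (over-prediction is penalized more, so the prediction is lowered), and the sample mean loss drops from about $30$ to about $21$, matching the closed forms of Section $3$.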
The expected value and the variance of $\Pe(Z + c)$ are as follows: \[lem:1.1\] For any $c \in \mathbb{R}$, we have $$\begin{aligned} (1)\quad \operatorname{{E}}[\Pe(Z + c)] &= \frac{(k_{1} - k_{2}) c}{2} + \frac{(k_{1} + k_{2}) \lvert c \rvert}{2 \G(a)} \g\left(a, \left\lvert \frac{c}{b} \right\rvert^{\frac{1}{a}} \right) + \frac{(k_{1} + k_{2}) b}{2 \G(a)} \G\left(2a, \left\lvert \frac{c}{b} \right\rvert^{\frac{1}{a}} \right), \\ (2)\quad \operatorname{{V}}[\Pe(Z + c)] &= \frac{(k_{1} + k_{2})^{2} c^{2}}{4} + \frac{(k_{1}^{2} - k_{2}^{2}) b c}{2 \G(a)} \G\left(2a, \left\lvert \frac{c}{b} \right\rvert^{\frac{1}{a}} \right) \nonumber \\ &\quad - \frac{(k_{1} + k_{2})^{2} b \lvert c \rvert}{2 \G(a)^{2}} \g\left(a, \left\lvert \frac{c}{b} \right\rvert^{\frac{1}{a}} \right) \G\left(2a, \left\lvert \frac{c}{b} \right\rvert^{\frac{1}{a}} \right) \nonumber \\ &\quad - \frac{(k_{1} + k_{2})^{2} c^{2}}{4 \G(a)^{2}} \g\left(a, \left\lvert \frac{c}{b} \right\rvert^{\frac{1}{a}} \right)^{2} - \frac{(k_{1} + k_{2})^{2} b^{2}}{4 \G(a)^{2}} \G\left(2a, \left\lvert \frac{c}{b} \right\rvert^{\frac{1}{a}} \right)^{2} \nonumber \\ &\quad + \frac{(k_{1}^{2} + k_{2}^{2}) b^{2} \G(3a)}{2 \G(a)} + \operatorname{{sgn}}(c) \frac{(k_{1}^{2} - k_{2}^{2}) b^{2}}{2 \G(a)} \g\left(3a, \left\lvert \frac{c}{b} \right\rvert^{\frac{1}{a}} \right). \nonumber\end{aligned}$$ We write the value of $c$ satisfying $\frac{d}{dc} \operatorname{{E}}[\Pe(Z + c)] = 0$ as $C$. Then, we find that $\operatorname{{E}}[\Pe(Z + c)]$ has a minimum value at $c = C$. Also, it follows from $$\begin{aligned} \g\left(a, \left\lvert \frac{C}{b} \right\rvert^{\frac{1}{a}} \right) = \operatorname{{sgn}}(C) \frac{k_{2} - k_{1}}{k_{1} + k_{2}} \G(a) \end{aligned}$$ that $\operatorname{{sgn}}(C) = \operatorname{{sgn}}(k_{2} - k_{1})$, where $\operatorname{{sgn}}(c) := 1 \: (c \geq 0); -1 \: (c < 0)$, and $C = 0$ only when $k_{1} = k_{2}$. 
This equation implies that the ratio of $\G(a)$ and $\g\left(a, \left\lvert \frac{C}{b} \right\rvert^{\frac{1}{a}} \right)$ is $1 : \frac{\lvert k_{2} - k_{1}\rvert}{k_{1} + k_{2}}$. That is, the vertical line $t = \left\lvert \frac{C}{b} \right\rvert^{\frac{1}{a}}$ divides the area between $t^{a - 1} e^{- t}$ and the $t$-axis into $\frac{\lvert k_{2} - k_{1}\rvert}{k_{1} + k_{2}} : 1- \frac{\lvert k_{2} - k_{1}\rvert}{k_{1} + k_{2}}$. Substituting $c = C$ into equation $(1)$ of Lemma $\ref{lem:1.1}$ and using equation $(3)$, we have $$\begin{aligned} \operatorname{{E}}[\Pe(Z + C)] = \frac{(k_{1} + k_{2}) b}{2 \G(a)} \G\left(2a, \left\lvert \frac{C}{b} \right\rvert^{\frac{1}{a}} \right). \end{aligned}$$ This is the minimum value of $\operatorname{{E}}[\Pe(Z + c)]$. From this and the $c = 0$ case of equation $(1)$ of Lemma $\ref{lem:1.1}$, we have the following corollary: \[cor:1.2\] We have $$\begin{aligned} \operatorname{{E}}[\Pe(Z)] - \operatorname{{E}}[\Pe(Z + C)] &= \frac{(k_{1} + k_{2}) b}{2 \G(a)} \g\left(2a, \left\lvert \frac{C}{b} \right\rvert^{\frac{1}{a}} \right), \\ \frac{\operatorname{{E}}[\Pe(Z + C)]}{\operatorname{{E}}[\Pe(Z)]} &= \frac{1}{\G(2a)} \G\left(2a, \left\lvert \frac{C}{b} \right\rvert^{\frac{1}{a}} \right). \end{aligned}$$ This corollary asserts that the expected value of the loss is reduced by correcting a predicted value $\hat{y}$ to the optimized predicted value $y^{*}$. Moreover, the following holds: \[thm:1.3\] We have $$\begin{aligned} \operatorname{{V}}[\Pe(Z + C)] \leq \operatorname{{V}}[\Pe(Z)], \end{aligned}$$ where equality holds only when $C = 0$; that is, when $k_{1} = k_{2}$. This theorem asserts that the variance of the loss is reduced by correcting the predicted value $\hat{y}$ to the optimized predicted value $y^{*}$. To prove this theorem, we use the following lemma: \[lem:1.4\] For $a > 0$ and $x > 0$, we have $$\begin{aligned} x^{a} \g(a, x)^{2} - x^{a} \G(a)^{2} + 2 \g(a, x) \G(2a, x) > 0. 
\end{aligned}$$ To prove Lemma $\ref{lem:1.4}$, we use the following lemmas: \[lem:1.5\] For $a > 0$, we have $$\begin{aligned} 2 \G(2a) - a \G(a)^{2} > 0. \end{aligned}$$ \[lem:1.6\] For $a > 0$, we have $$\begin{aligned} 4^{a} \G\left(a + \frac{1}{2} \right) > \sqrt{\pi} \G(a + 1). \end{aligned}$$ The remainder of this paper is organized as follows. In Section $2$, we set up the problem. In Section $3$, we introduce the expected value and the variance of $\Pe(Z + c)$, and we determine the value of $c = C$ that gives the minimum value of $\operatorname{{E}}[\Pe(Z + c)]$. In addition, we give a geometrical interpretation of the parameter $C$, and give the minimized expected value $\operatorname{{E}}[\Pe(Z + C)]$. In Section $4$, we prove Theorem $\ref{thm:1.3}$. In Section $5$, we give some inequalities for the gamma and the incomplete gamma functions, which are used to derive the inequality for the variance of the loss in Theorem $\ref{thm:1.3}$. In Section $6$, we present the calculation of the expected value and the variance of the loss $\Pe(Z + c)$ for $c \in \mathbb{R}$. Problem statement ================= In this section, we set up the problem. Let $y$ be an observation value, let $\hat{y}$ be a predicted value of $y$, and let $\G(a)$ be the gamma function (see, e.g., Ref. [@doi:10.1142/0653 p. 93]) defined by $$\begin{aligned} \G(a) := \int_{0}^{+\infty} t^{a - 1} e^{- t} dt, \quad \text{Re}(a) > 0. \end{aligned}$$ We assume the following: 1. The prediction error $z := \hat{y} - y$ is the realized value of a random variable $Z$, whose density function is a generalized Gaussian distribution function (see, e.g., Refs. [@Dytso2018], [@doi:10.1080/02664760500079464], and [@Sub23]) with mean zero $$\begin{aligned} f_{Z}(z) := \frac{1}{2 a b \G(a)} \exp{\left( - \left\lvert \frac{z}{b} \right\rvert^{\frac{1}{a}} \right)}, \end{aligned}$$ where $a$, $b \in \mathbb{R}_{> 0}$. 2. Let $k_{1}$, $k_{2} \in \mathbb{R}_{> 0}$. 
If there is a mismatch between $y$ and $\hat{y}$, then we suffer a loss, $$\begin{aligned} \Pe(z) := \begin{cases} k_{1} z, & z \geq 0, \\ - k_{2} z, & z < 0. \end{cases}\end{aligned}$$ We derive the optimized predicted value $y^{*} = \hat{y} + C$ minimizing $\operatorname{{E}}[\Pe(Z + c)]$. For this purpose, we derive $\operatorname{{E}}[\Pe(Z + c)]$ for any $c \in \mathbb{R}$ in the next section. Expected value and variance of the loss ======================================= Here, we introduce the expected value and the variance of $\Pe(Z + c)$, and determine the value of $c = C$ that gives the minimum value of $\operatorname{{E}}[\Pe(Z + c)]$. In addition, we give a geometrical interpretation of the parameter $C$ and give the minimized expected value $\operatorname{{E}}[\Pe(Z + C)]$. Expected value and variance of the loss --------------------------------------- Let $\G(a, x)$ and $\g(a, x)$ be the upper and the lower incomplete gamma functions, respectively, defined by $$\begin{aligned} \G(a, x) := \int_{x}^{+\infty} t^{a - 1} e^{-t} dt, \qquad \g(a, x) := \int_{0}^{x} t^{a - 1} e^{-t} dt, \end{aligned}$$ where $\text{Re}(a) > 0$ and $x \geq 0$. These functions have the following properties: \[lem:3.0\] For ${\rm Re}(a) > 0$ and $x \geq 0$, $$\begin{aligned} &(1)\quad \G(a, x) + \g(a, x) = \G(a);\\ &(2)\quad \lim_{x \to +\infty} \g(a, x) = \G(a);\\ &(3)\quad \G(a, 0) = \G(a);\\ &(4)\quad \frac{d}{dx} \g(a, x) = x^{a - 1} e^{-x};\\ &(5)\quad \frac{d}{dx} \G(a, x) = - x^{a - 1} e^{-x}.\end{aligned}$$ Also, for $c \in \mathbb{R}$, let $\operatorname{{sgn}}(c) := 1 \: (c \geq 0); -1 \: (c < 0)$. 
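The identities in the lemma above are easy to confirm numerically. The sketch below (ours, for illustration; test points and tolerances are arbitrary) implements $\g(a, x)$ by its standard power series and $\G(a, x)$ by Simpson quadrature, then checks properties $(1)$ and $(2)$ together with the elementary special case $\g(1, x) = 1 - e^{-x}$:

```python
import math

def lower_gamma(a, x, tol=1e-14):
    # gamma(a, x) = x^a e^{-x} * sum_{n>=0} x^n / (a (a+1) ... (a+n)).
    if x == 0.0:
        return 0.0
    term = 1.0 / a
    total = term
    n = 0
    while term > tol * total:
        n += 1
        term *= x / (a + n)
        total += term
    return math.exp(a * math.log(x) - x) * total

def upper_gamma(a, x, cutoff=60.0, steps=4000):
    # Gamma(a, x) by composite Simpson quadrature on [x, x + cutoff];
    # the integrand t^{a-1} e^{-t} is negligible beyond that.
    f = lambda t: t ** (a - 1.0) * math.exp(-t)
    h = cutoff / steps
    s = f(x) + f(x + cutoff)
    for i in range(1, steps):
        s += f(x + i * h) * (4.0 if i % 2 else 2.0)
    return s * h / 3.0

a, x = 1.5, 2.0
print(abs(lower_gamma(a, x) + upper_gamma(a, x) - math.gamma(a)))  # property (1)
print(abs(lower_gamma(a, 50.0) - math.gamma(a)))                   # property (2)
print(abs(lower_gamma(1.0, x) - (1.0 - math.exp(-x))))             # gamma(1, x) = 1 - e^{-x}
```

All three printed residuals are at the level of the quadrature and series tolerances.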
Then, the expected value and the variance of $\Pe(Z + c)$ are as follows: \[lem:3.1\] For any $c \in \mathbb{R}$, we have $$\begin{aligned} (1)\quad \operatorname{{E}}[\Pe(Z + c)] &= \frac{(k_{1} - k_{2}) c}{2} + \frac{(k_{1} + k_{2}) \lvert c \rvert}{2 \G(a)} \g\left(a, \left\lvert \frac{c}{b} \right\rvert^{\frac{1}{a}} \right) + \frac{(k_{1} + k_{2}) b}{2 \G(a)} \G\left(2a, \left\lvert \frac{c}{b} \right\rvert^{\frac{1}{a}} \right), \\ (2)\quad \operatorname{{V}}[\Pe(Z + c)] &= \frac{(k_{1} + k_{2})^{2} c^{2}}{4} + \frac{(k_{1}^{2} - k_{2}^{2}) b c}{2 \G(a)} \G\left(2a, \left\lvert \frac{c}{b} \right\rvert^{\frac{1}{a}} \right) \nonumber \\ &\quad - \frac{(k_{1} + k_{2})^{2} b \lvert c \rvert}{2 \G(a)^{2}} \g\left(a, \left\lvert \frac{c}{b} \right\rvert^{\frac{1}{a}} \right) \G\left(2a, \left\lvert \frac{c}{b} \right\rvert^{\frac{1}{a}} \right) \nonumber \\ &\quad - \frac{(k_{1} + k_{2})^{2} c^{2}}{4 \G(a)^{2}} \g\left(a, \left\lvert \frac{c}{b} \right\rvert^{\frac{1}{a}} \right)^{2} - \frac{(k_{1} + k_{2})^{2} b^{2}}{4 \G(a)^{2}} \G\left(2a, \left\lvert \frac{c}{b} \right\rvert^{\frac{1}{a}} \right)^{2} \nonumber \\ &\quad + \frac{(k_{1}^{2} + k_{2}^{2}) b^{2} \G(3a)}{2 \G(a)} + \operatorname{{sgn}}(c) \frac{(k_{1}^{2} - k_{2}^{2}) b^{2}}{2 \G(a)} \g\left(3a, \left\lvert \frac{c}{b} \right\rvert^{\frac{1}{a}} \right). \nonumber\end{aligned}$$ See the last two sections for the proof of Lemma $\ref{lem:3.1}$. From Lemma $\ref{lem:3.1}$, we have the following: $$\begin{aligned} \operatorname{{E}}[\Pe(Z)] &= \frac{(k_{1} + k_{2}) b}{2 \G(a)} \G\left( 2a \right), \label{E[L(Z)]} \\ \operatorname{{V}}[\Pe(Z)] &= \frac{(k_{1}^{2} + k_{2}^{2}) b^{2} \G(3a)}{2 \G(a)} - \frac{(k_{1} + k_{2})^{2} b^{2} \G(2a)^{2}}{4 \G(a)^{2}}. \label{V[L(Z)]}\end{aligned}$$ Let $\operatorname{{erf}}(x)$ be the error function defined by $$\begin{aligned} \operatorname{{erf}}(x) := \frac{2}{\sqrt{\pi}} \int_{0}^{x} \exp{\left( - t^{2} \right)} dt \end{aligned}$$ for any $x \in \mathbb{R}$. 
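Equation $(\ref{E[L(Z)]})$ can be sanity-checked by simulation. In the Gaussian case $a = \frac{1}{2}$ it reduces to $\operatorname{{E}}[\Pe(Z)] = \frac{(k_{1} + k_{2}) b}{2\sqrt{\pi}}$, since $\G(1) = 1$ and $\G(\frac{1}{2}) = \sqrt{\pi}$. A Monte Carlo sketch with arbitrarily chosen constants (ours, for illustration):

```python
import math
import random

k1, k2, b = 50.0, 10.0, 1.0      # arbitrary constants for illustration
sigma = b / math.sqrt(2.0)       # a = 1/2 gives Z ~ N(0, b^2 / 2)

def loss(z):
    return k1 * z if z >= 0 else -k2 * z

rng = random.Random(1)
n = 400_000
mc = sum(loss(rng.gauss(0.0, sigma)) for _ in range(n)) / n

# E[L(Z)] = (k1 + k2) b Gamma(2a) / (2 Gamma(a)) with a = 1/2.
closed_form = (k1 + k2) * b * math.gamma(1.0) / (2.0 * math.gamma(0.5))
print(mc, closed_form)           # the Monte Carlo estimate is close to the closed form
```

With these constants the closed form is $60 / (2\sqrt{\pi}) \approx 16.93$, and the simulated mean agrees to within Monte Carlo error.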
We give two examples of $\operatorname{{E}}[\Pe(Z + c)]$ and $\operatorname{{V}}[\Pe(Z + c)]$. \[rei:3.2\] In the case of ${\rm Laplace}(0, b)$, since $a = 1$, we have $$\begin{aligned} \operatorname{{E}}[\Pe(Z + c)] &= \Pe(c) + \frac{(k_{1} + k_{2}) b}{2} \exp{\left(- \left\lvert \frac{c}{b} \right\rvert \right)}, \\ \operatorname{{V}}[\Pe(Z + c)] &= \left\{ k_{1}^{2} + k_{2}^{2} + \operatorname{{sgn}}(c) (k_{1}^{2} - k_{2}^{2} ) \right\} b^{2} \\ &\quad - \operatorname{{sgn}}(c) (k_{1} + k_{2}) \left\{ \Pe(c) + b (k_{1} - k_{2}) \right\} b \exp{\left(- \left\lvert \frac{c}{b} \right\rvert \right)} \\ &\quad - \frac{(k_{1} + k_{2})^{2} b^{2}}{4} \exp{\left(- 2 \left\lvert \frac{c}{b} \right\rvert \right)}. \end{aligned}$$ In the case of $\mathcal{N}(0, \frac{1}{2} b^{2})$, since $a = \frac{1}{2}$, we have $$\begin{aligned} \operatorname{{E}}[\Pe(Z + c)] &= \frac{(k_{1} - k_{2}) c}{2} + \frac{(k_{1} + k_{2}) c}{2} \operatorname{{erf}}{\left(\frac{c}{b} \right)} + \frac{(k_{1} + k_{2}) b}{2 \sqrt{\pi}} \exp{\left(- \frac{c^{2}}{b^{2}} \right)}, \\ \operatorname{{V}}[\Pe(Z + c)] &= \frac{(k_{1}^{2} + k_{2}^{2}) b^{2} }{4} + \frac{(k_{1} + k_{2})^{2} c^{2} }{4} + \frac{(k_{1}^{2} - k_{2}^{2}) b^{2} }{4} \operatorname{{erf}}{\left( \frac{c}{b} \right)} - \frac{(k_{1} + k_{2})^{2} c^{2} }{4} \operatorname{{erf}}^{2}{\left( \frac{c}{b} \right)} \\ &\quad - \frac{(k_{1} + k_{2})^{2} b c}{2 \sqrt{\pi}} \operatorname{{erf}}{\left(\frac{c}{b} \right)} \exp{\left( - \frac{c^{2}}{b^{2}} \right)} - \frac{(k_{1} + k_{2})^{2} b^{2} }{4 \pi} \exp{\left( - \frac{2 c^{2}}{b^{2}} \right)}. 
\end{aligned}$$ With the conditions fixed as $k_{1} = 50$ and $b = 1$, we can plot $\operatorname{{E}}[\Pe(Z + c)]$ and $\operatorname{{V}}[\Pe(Z + c)]$ for the Laplace and the Gauss distributions as follows: [cc]{} ![Plots for the Laplace distribution](Laplace-Ex1.pdf "fig:"){height="4cm"} \[fig:winter\] ![Plots for the Laplace distribution](Laplace-Va1.pdf "fig:"){height="4cm"} \[fig:fall\] [cc]{} ![Plots for the Gauss distribution](Gauss-Ex1.pdf "fig:"){height="4cm"} \[fig:winter\] ![Plots for the Gauss distribution](Gauss-Va1.pdf "fig:"){height="4cm"} \[fig:fall\] Parameter value minimizing the expected value --------------------------------------------- Here, we determine the value of $c = C$ that gives the minimum value of $\operatorname{{E}}[\Pe(Z + c)]$. Since $$\begin{aligned} \frac{d}{dc} \G\left(2a, \left\lvert \frac{c}{b} \right\rvert^{\frac{1}{a}} \right) &= - \frac{c}{a b^{2}} \exp{\left(- \left\lvert \frac{c}{b} \right\rvert^{\frac{1}{a}} \right)}; \\ \frac{d}{dc} \g\left(a, \left\lvert \frac{c}{b} \right\rvert^{\frac{1}{a}} \right) &= \operatorname{{sgn}}(c) \frac{1}{a b} \exp{\left(- \left\lvert \frac{c}{b} \right\rvert^{\frac{1}{a}} \right)}, \end{aligned}$$ we have $$\begin{aligned} \frac{d}{dc} \operatorname{{E}}[\Pe(Z + c)] = \frac{k_{1} - k_{2}}{2} + \operatorname{{sgn}}(c) \frac{k_{1} + k_{2}}{2 \G(a)} \g\left(a, \left\lvert \frac{c}{b} \right\rvert^{\frac{1}{a}} \right). \end{aligned}$$ We will denote the value of $c$ satisfying $\frac{d}{dc} \operatorname{{E}}[\Pe(Z + c)] = 0$ as $C$. Then, from the first derivative test, we find that $\operatorname{{E}}[\Pe(Z + c)]$ has a minimum value at $c = C$. 
  $c$                                             Less than $C$         $C$   More than $C$
  ----------------------------------------------- --------------------- ----- ---------------------
  $\frac{d}{dc} \operatorname{{E}}[\Pe(Z + c)]$   Negative              $0$   Positive
  $\operatorname{{E}}[\Pe(Z + c)]$                Strictly decreasing         Strictly increasing

Also, it follows from $$\begin{aligned} \label{C} \g\left(a, \left\lvert \frac{C}{b} \right\rvert^{\frac{1}{a}} \right) = \operatorname{{sgn}}(C) \frac{k_{2} - k_{1}}{k_{1} + k_{2}} \G(a) \end{aligned}$$ that $\operatorname{{sgn}}(C) = \operatorname{{sgn}}(k_{2} - k_{1})$ and $C = 0$ only when $k_{1} = k_{2}$. Moreover, equation $(\ref{C})$ implies that the ratio of $\G(a)$ and $\g\left(a, \left\lvert \frac{C}{b} \right\rvert^{\frac{1}{a}} \right)$ is $1 : \frac{\lvert k_{2} - k_{1}\rvert}{k_{1} + k_{2}}$. That is, the vertical line $t = \left\lvert \frac{C}{b} \right\rvert^{\frac{1}{a}}$ divides the area between $t^{a - 1} e^{- t}$ and the $t$-axis into $\frac{\lvert k_{2} - k_{1}\rvert}{k_{1} + k_{2}} : 1- \frac{\lvert k_{2} - k_{1}\rvert}{k_{1} + k_{2}}$. ![Plot of area ratio](mathplot.pdf){width="80mm"} Let $\operatorname{{erf}}^{-1}(x)$ be the inverse error function. We give two examples of $C$. \[rei:\] In the case of ${\rm Laplace}(0, b)$, since $a = 1$, we have $$\begin{aligned} C = - \operatorname{{sgn}}(k_{2} - k_{1}) b \log{\left( 1 - \left\lvert \frac{k_{2} - k_{1}}{k_{1} + k_{2}} \right\rvert \right)}. \end{aligned}$$ In the case of $\mathcal{N}(0, \frac{1}{2} b^{2})$, since $a = \frac{1}{2}$, we have $$\begin{aligned} C = \operatorname{{sgn}}(k_{2} - k_{1}) b \operatorname{{erf}}^{-1}\left( \left\lvert \frac{k_{2} - k_{1}}{k_{1} + k_{2}} \right\rvert \right). 
\end{aligned}$$ Fixing the conditions as $k_{1} = 50$ and $b = 1$, we can plot $C$ for the Laplace and the Gauss distributions as follows: [cc]{} ![Plots of $C$ for the Laplace and the Gauss distributions](Laplace-C.pdf "fig:"){height="4cm"} \[fig:winter\] ![Plots of $C$ for the Laplace and the Gauss distributions](Gauss-C.pdf "fig:"){height="4cm"} \[fig:fall\] Minimized expected value of the loss ------------------------------------ We give the minimum value of $\operatorname{{E}}[\Pe(Z + c)]$. Substituting $c = C$ into equation $(1)$ of Lemma $\ref{lem:3.1}$ and using equation $(\ref{C})$, we have $$\begin{aligned} \operatorname{{E}}[\Pe(Z + C)] = \frac{(k_{1} + k_{2}) b}{2 \G(a)} \G\left(2a, \left\lvert \frac{C}{b} \right\rvert^{\frac{1}{a}} \right). \end{aligned}$$ This is the minimum value of $\operatorname{{E}}[\Pe(Z + c)]$. From this and equation $(\ref{E[L(Z)]})$, we have the following corollary: \[cor:3.3\] We have $$\begin{aligned} \operatorname{{E}}[\Pe(Z)] - \operatorname{{E}}[\Pe(Z + C)] &= \frac{(k_{1} + k_{2}) b}{2 \G(a)} \g\left(2a, \left\lvert \frac{C}{b} \right\rvert^{\frac{1}{a}} \right), \\ \frac{\operatorname{{E}}[\Pe(Z + C)]}{\operatorname{{E}}[\Pe(Z)]} &= \frac{1}{\G(2a)} \G\left(2a, \left\lvert \frac{C}{b} \right\rvert^{\frac{1}{a}} \right). \end{aligned}$$ Fixing the conditions as $k_{1} = 50$ and $b = 1$, we plot $\operatorname{{E}}[\Pe(Z)] - \operatorname{{E}}[\Pe(Z + C)]$ for the Laplace and the Gauss distributions as follows: [cc]{} ![Plots of $\operatorname{{E}}[\Pe(Z)] - \operatorname{{E}}[\Pe(Z + C)]$ for the Laplace and the Gauss distributions](Laplace-DE.pdf "fig:"){height="4cm"} \[fig:winter\] ![Plots of $\operatorname{{E}}[\Pe(Z)] - \operatorname{{E}}[\Pe(Z + C)]$ for the Laplace and the Gauss distributions](Gauss-DE.pdf "fig:"){height="4cm"} \[fig:fall\] An inequality for the variance of the loss ========================================== In this section, we derive an inequality for the variance of $\Pe(Z + c)$. 
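Both the reduction in Corollary $\ref{cor:3.3}$ and the variance inequality derived in this section can be previewed numerically in the Laplace case $a = 1$, where every quantity has an elementary closed form (the example in Section $3$). The sketch below is ours, with arbitrarily chosen constants:

```python
import math

k1, k2, b = 50.0, 10.0, 1.0  # arbitrary illustrative constants

def sgn(c):
    return 1.0 if c >= 0 else -1.0

def expected_loss(c):
    # Laplace case (a = 1): E[L(Z + c)] = L(c) + (k1 + k2) b exp(-|c|/b) / 2.
    L_c = k1 * c if c >= 0 else -k2 * c
    return L_c + (k1 + k2) * b / 2.0 * math.exp(-abs(c) / b)

def variance_loss(c):
    # Laplace case closed form for V[L(Z + c)] from the example in Section 3.
    L_c = k1 * c if c >= 0 else -k2 * c
    e = math.exp(-abs(c) / b)
    return ((k1 ** 2 + k2 ** 2 + sgn(c) * (k1 ** 2 - k2 ** 2)) * b ** 2
            - sgn(c) * (k1 + k2) * (L_c + b * (k1 - k2)) * b * e
            - (k1 + k2) ** 2 * b ** 2 / 4.0 * e ** 2)

# Optimal shift for a = 1: C = -sgn(k2 - k1) b log(1 - |k2 - k1| / (k1 + k2)).
C = -sgn(k2 - k1) * b * math.log(1.0 - abs(k2 - k1) / (k1 + k2))

# Minimized expectation: E[L(Z + C)] = (k1 + k2) b Gamma(2, |C|/b) / (2 Gamma(1)),
# with Gamma(2, x) = (1 + x) e^{-x}.
x = abs(C) / b
minimized = (k1 + k2) * b / 2.0 * (1.0 + x) * math.exp(-x)

print(abs(expected_loss(C) - minimized))       # the two expressions agree
print(minimized / expected_loss(0.0))          # reduction ratio Gamma(2, x) / Gamma(2) < 1
print(variance_loss(C) <= variance_loss(0.0))  # the variance inequality of this section
```

Here $\G(2, x) = (1 + x) e^{-x}$, so the minimized expectation needs no special functions; with these constants the expected loss drops from $30$ to about $21$ and the variance from $1700$ to about $1120$.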
Let $C$ be the value of $c$ giving the minimum value of $\operatorname{{E}}[\Pe(Z + c)]$. Then, the following holds: \[thm:3.4\] We have $$\begin{aligned} \operatorname{{V}}[\Pe(Z + C)] \leq \operatorname{{V}}[\Pe(Z)], \end{aligned}$$ where equality holds only when $C = 0$; that is, when $k_{1} = k_{2}$. Fixing the conditions as $k_{1} = 50$ and $b = 1$, we can plot $\operatorname{{V}}[\Pe(Z)] - \operatorname{{V}}[\Pe(Z + C)]$ for the Laplace and the Gauss distributions as follows: [cc]{} ![Plots of $\operatorname{{V}}[\Pe(Z)] - \operatorname{{V}}[\Pe(Z + C)]$ for the Laplace and the Gauss distributions](Laplace-DV.pdf "fig:"){height="4cm"} \[fig:winter\] ![Plots of $\operatorname{{V}}[\Pe(Z)] - \operatorname{{V}}[\Pe(Z + C)]$ for the Laplace and the Gauss distributions](Gauss-DV.pdf "fig:"){height="4cm"} \[fig:fall\] To prove Theorem $\ref{thm:3.4}$, we use the following lemma: \[lem:3.5\] For $a > 0$ and $x > 0$, we have $$\begin{aligned} x^{a} \g(a, x)^{2} - x^{a} \G(a)^{2} + 2 \g(a, x) \G(2a, x) > 0. \end{aligned}$$ The proof of Lemma $\ref{lem:3.5}$ is presented in Section $5.2$. Now we can prove Theorem $\ref{thm:3.4}$. 
It follows from the equation $(\ref{C})$ that $$\begin{aligned} \frac{(k_{1}^{2} - k_{2}^{2}) b C}{2 \G(a)} \G\left(2a, \left\lvert \frac{C}{b} \right\rvert^{\frac{1}{a}} \right) &= - \operatorname{{sgn}}(C) \frac{k_{2} - k_{1}}{k_{1} + k_{2}} \frac{(k_{1} + k_{2})^{2} b \lvert C \rvert}{2 \G(a)} \G\left(2a, \left\lvert \frac{C}{b} \right\rvert^{\frac{1}{a}} \right) \\ &= - \frac{(k_{1} + k_{2})^{2} b \lvert C \rvert}{2 \G(a)^{2}} \g\left(a, \left\lvert \frac{C}{b} \right\rvert^{\frac{1}{a}} \right) \G\left(2a, \left\lvert \frac{C}{b} \right\rvert^{\frac{1}{a}} \right), \allowdisplaybreaks \\ \operatorname{{sgn}}(C) \frac{(k_{1}^{2} - k_{2}^{2}) b^{2}}{2 \G(a)} \g\left(3a, \left\lvert \frac{C}{b} \right\rvert^{\frac{1}{a}} \right) &= - \operatorname{{sgn}}(C) \frac{k_{2} - k_{1}}{k_{1} + k_{2}} \frac{(k_{1} + k_{2})^{2} b^{2}}{2 \G(a)} \g\left(3a, \left\lvert \frac{C}{b} \right\rvert^{\frac{1}{a}} \right) \\ &= - \frac{(k_{1} + k_{2})^{2} b^{2}}{2 \G(a)^{2}} \g\left(a, \left\lvert \frac{C}{b} \right\rvert^{\frac{1}{a}} \right) \g\left(3a, \left\lvert \frac{C}{b} \right\rvert^{\frac{1}{a}} \right). 
\end{aligned}$$ Hence, substituting $c = C$ in equation $(2)$ of Lemma $\ref{lem:3.1}$, we have $$\begin{aligned} \operatorname{{V}}[\Pe(Z + C)] &= \frac{(k_{1} + k_{2})^{2} C^{2}}{4} - \frac{(k_{1} + k_{2})^{2} b \lvert C \rvert}{\G(a)^{2}} \g\left(a, \left\lvert \frac{C}{b} \right\rvert^{\frac{1}{a}} \right) \G\left(2a, \left\lvert \frac{C}{b} \right\rvert^{\frac{1}{a}} \right) \\ &\quad - \frac{(k_{1} + k_{2})^{2} C^{2}}{4 \G(a)^{2}} \g\left(a, \left\lvert \frac{C}{b} \right\rvert^{\frac{1}{a}} \right)^{2} - \frac{(k_{1} + k_{2})^{2} b^{2}}{4 \G(a)^{2}} \G\left(2a, \left\lvert \frac{C}{b} \right\rvert^{\frac{1}{a}} \right)^{2} \\ &\quad + \frac{(k_{1}^{2} + k_{2}^{2}) b^{2} \G(3a)}{2 \G(a)} - \frac{(k_{1} + k_{2})^{2} b^{2}}{2 \G(a)^{2}} \g\left(a, \left\lvert \frac{C}{b} \right\rvert^{\frac{1}{a}} \right) \g\left(3a, \left\lvert \frac{C}{b} \right\rvert^{\frac{1}{a}} \right). \end{aligned}$$ From this and equation $(\ref{V[L(Z)]})$, we obtain $$\begin{aligned} &\!\!\!\!\operatorname{{V}}[\Pe(Z)] - \operatorname{{V}}[\Pe(Z + C)] \\ &= - \frac{(k_{1} + k_{2})^{2} C^{2}}{4} + \frac{(k_{1} + k_{2})^{2} b \lvert C \rvert}{\G(a)^{2}} \g\left(a, \left\lvert \frac{C}{b} \right\rvert^{\frac{1}{a}} \right) \G\left(2a, \left\lvert \frac{C}{b} \right\rvert^{\frac{1}{a}} \right) \\ &\quad + \frac{(k_{1} + k_{2})^{2} C^{2}}{4 \G(a)^{2}} \g\left(a, \left\lvert \frac{C}{b} \right\rvert^{\frac{1}{a}} \right)^{2} + \frac{(k_{1} + k_{2})^{2} b^{2}}{4 \G(a)^{2}} \G\left(2a, \left\lvert \frac{C}{b} \right\rvert^{\frac{1}{a}} \right)^{2} \\ &\quad - \frac{(k_{1} + k_{2})^{2} b^{2} \G(2a)^{2}}{4 \G(a)^{2}} + \frac{(k_{1} + k_{2})^{2} b^{2}}{2 \G(a)^{2}} \g\left(a, \left\lvert \frac{C}{b} \right\rvert^{\frac{1}{a}} \right) \g\left(3a, \left\lvert \frac{C}{b} \right\rvert^{\frac{1}{a}} \right) \\ &= \frac{(k_{1} + k_{2})^{2} b^{2}}{4 \G(a)^{2}} f \left(a, \left\lvert \frac{C}{b} \right\rvert^{\frac{1}{a}} \right), \end{aligned}$$ where, for $a > 0$ and $x \geq 0$, $f(a, x)$ is defined 
as $$\begin{aligned} f(a, x) &:= x^{2a} \g(a, x)^{2} - x^{2a} \G(a)^{2} + 4 x^{a} \g(a, x) \G(2a, x) \\ &\quad + \G(2a, x)^{2} - \G(2a)^{2} + 2 \g(a, x) \g(3a, x). \end{aligned}$$ Here, since $$\begin{aligned} \frac{d}{dx} f(a, x) &= 2 a x^{a - 1} \left\{x^{a} \g(a, x)^{2} - x^{a} \G(a)^{2} + 2\g(a, x) \G(2a, x) \right\} \\ &\quad + 2 x^{a - 1} e^{-x} \g(3a, x) + 2 x^{2a - 1} e^{-x} \G(2a, x), \end{aligned}$$ from Lemma $\ref{lem:3.5}$, we have $\frac{d}{dx} f(a, x) > 0$ ($a > 0$, $x > 0$). Also, $f(a, 0) = 0$ holds for $a > 0$. Therefore, we obtain $$\begin{aligned} \operatorname{{V}}[\Pe(Z)] - \operatorname{{V}}[\Pe(Z + C)] \geq 0, \end{aligned}$$ where equality holds only when $C = 0$. Moreover, from equation $(\ref{C})$, we find that $C = 0$ holds only when $k_{1} = k_{2}$. Inequalities for the gamma and the incomplete gamma functions ============================================================= In this section, we give some inequalities for the gamma and the incomplete gamma functions, which we used to derive the inequality for the variance of the loss in Theorem $\ref{thm:3.4}$. Inequalities for the gamma function ----------------------------------- To prove Lemma $\ref{lem:3.5}$, we use the following: \[lem:4-1-3\] For $a > 0$, we have $$\begin{aligned} 2 \G(2a) - a \G(a)^{2} > 0. \end{aligned}$$ Next, to prove Lemma $\ref{lem:4-1-3}$, we use the following: \[lem:4-1-1\] For $a > 0$, we have $$\begin{aligned} 4^{a} \G\left(a + \frac{1}{2} \right) > \sqrt{\pi} \G(a + 1). \end{aligned}$$ Furthermore, to prove Lemma $\ref{lem:4-1-1}$, we need another lemma: \[lem:4-1-2\] We have $$\begin{aligned} \sum_{n = 1}^{\infty} \frac{1}{n (2n - 1)} = 2 \log{2}. \end{aligned}$$ Let $S_{n} := \sum_{k = 1}^{n} \frac{1}{k (2k - 1)}$. 
Accordingly, we have $$\begin{aligned} S_{n} &= \sum_{k = 1}^{n} \left(\frac{2}{2k - 1} - \frac{1}{k} \right) \allowdisplaybreaks \\ &= 2 \sum_{k = 1}^{n} \frac{1}{2k - 1} - \sum_{k = 1}^{n} \frac{1}{k} \allowdisplaybreaks \\ &= 2 \sum_{k = 1}^{n} \frac{1}{2k - 1} + \left(2 \sum_{k = 1}^{n} \frac{1}{2 k} - 2 \sum_{k = 1}^{n} \frac{1}{2 k} \right) - \sum_{k = 1}^{n} \frac{1}{k} \allowdisplaybreaks \\ &= 2 \left(\sum_{k = 1}^{n} \frac{1}{2 k - 1} + \sum_{k = 1}^{n} \frac{1}{2 k} \right) - 2 \sum_{k = 1}^{n} \frac{1}{k} \allowdisplaybreaks \\ &= 2 \sum_{k = 1}^{2n} \frac{1}{k} - 2 \sum_{k = 1}^{n} \frac{1}{k} \allowdisplaybreaks \\ &= 2 \sum_{k = n + 1}^{2n} \frac{1}{k} \allowdisplaybreaks \\ &= 2 \sum_{k = 1}^{n} \frac{1}{k + n}. \end{aligned}$$ Therefore, we find $$\begin{aligned} \lim_{n \rightarrow \infty} S_{n} &= 2 \lim_{n \to \infty} \sum_{k = 1}^{n} \frac{1}{k + n} \allowdisplaybreaks \\ &= 2 \lim_{n \to \infty} \frac{1}{n} \sum_{k = 1}^{n} \frac{1}{1 + \frac{k}{n}} \allowdisplaybreaks \\ &= \int_{0}^{1} \frac{1}{1 + x} dx \allowdisplaybreaks \\ &= 2\log{2}.\end{aligned}$$ The lemma is thus proved. Now we can prove Lemma $\ref{lem:4-1-1}$. Let $$\begin{aligned} g(a) := \frac{4^{a} \G\left(a + \frac{1}{2} \right)}{\sqrt{\pi} \G(a + 1)}. \end{aligned}$$ To prove $g(a) > 1$ for $a > 0$, we use the following formula [@andrews_askey_roy_1999 p.13, Theorem 1.2.5]: $$\begin{aligned} \frac{d}{dx} \log{\G(x)} = \frac{\G^{'}(x)}{\G(x)} = - \g_{0} + \sum_{n = 1}^{\infty} \left(\frac{1}{n} - \frac{1}{x + n - 1} \right), \end{aligned}$$ where $\g_{0}$ is Euler’s constant given by $$\begin{aligned} \g_{0} := \lim_{n \to \infty} \left(\sum_{k = 1}^{n} \frac{1}{k} - \log{n} \right). 
\end{aligned}$$ Taking the logarithmic derivative of $g(a)$, from the above formula, we have $$\begin{aligned} \frac{d}{da} \log{g(a)} &= 2\log{2} + \frac{d}{da} \log{\G\left(a + \frac{1}{2} \right)} - \frac{d}{da} \log{\G(a + 1)} \allowdisplaybreaks \\ &= 2\log{2} + \sum_{n = 1}^{\infty} \left(\frac{1}{n} - \frac{1}{a - \frac{1}{2} + n} \right) - \sum_{n = 1}^{\infty} \left(\frac{1}{n} - \frac{1}{a + n} \right) \allowdisplaybreaks \\ &= 2\log{2} - \frac{1}{2} \sum_{n = 1}^{\infty} \frac{1}{(a + n) \left(a - \frac{1}{2} + n \right)} \allowdisplaybreaks \\ &> 2\log{2} - \frac{1}{2} \sum_{n = 1}^{\infty} \frac{1}{n \left(n - \frac{1}{2} \right)} \allowdisplaybreaks \\ &= 2\log{2} - \sum_{n = 1}^{\infty} \frac{1}{n (2n - 1)}\end{aligned}$$ for $a > 0$. Moreover, using Lemma $\ref{lem:4-1-2}$, we obtain $\frac{d}{da} \log{g(a)} > 0$ for $a > 0$. This leads to $\frac{d}{da} g(a) > 0$ for $a > 0$. The lemma follows from this and $g(0) = 1$. Now, we can prove Lemma $\ref{lem:4-1-3}$. We use the following formula [@andrews_askey_roy_1999 p.22, Theorem 6.5.1]: $$\begin{aligned} \G(2a) = \frac{2^{2a - 1}}{\sqrt{\pi}} \G(a) \G\left(a + \frac{1}{2} \right). \end{aligned}$$ From this and Lemma $\ref{lem:4-1-1}$, we have $$\begin{aligned} 2 \G(2a) - a \G(a)^{2} &= \frac{2^{2a}}{\sqrt{\pi}} \G(a) \G\left(a + \frac{1}{2} \right) - \G(a) \G(a + 1) \allowdisplaybreaks \\ &= \frac{1}{\sqrt{\pi}} \G(a) \left\{4^{a} \G\left(a + \frac{1}{2} \right) - \sqrt{\pi} \G(a + 1) \right\} \\ &> 0. \end{aligned}$$ The lemma is thus proved. Inequalities for the incomplete gamma functions ----------------------------------------------- We will prove the following lemma: \[lem:4-2-1\] For $a > 0$ and $x > 0$, we have $$\begin{aligned} x^{a} \g(a, x)^{2} - x^{a} \G(a)^{2} + 2 \g(a, x) \G(2a, x) > 0. \end{aligned}$$ To prove Lemma $\ref{lem:4-2-1}$, we need to prove two other lemmas: \[lem:4-2-2\] For $a > 0$ and $x \geq 0$, we have $$\begin{aligned} a \g(a, x) \geq x^{a} e^{-x}. 
\end{aligned}$$ For $a > 0$ and $x \geq 0$, we define $$\begin{aligned} u(a, x) := a \g(a, x) - x^{a} e^{-x}. \end{aligned}$$ Then, we have $$\begin{aligned} \frac{d}{dx} u(a, x) = x^{a} e^{-x} \geq 0. \end{aligned}$$ The lemma follows from this and $u(a, 0) = 0$. \[lem:4-2-3\] For $a > 0$ and $b \in \mathbb{R}$, we have $$\begin{aligned} \lim_{x \to +\infty} x^{b} \G(a, x) = 0. \end{aligned}$$ When $b \leq 0$, it is easily obtained from the definition of $\G(a, x)$. When $b > 0$, using the L’Hôpital’s rule, we obtain $$\begin{aligned} \lim_{x \to +\infty} \frac{\G(a, x)}{x^{-b}} &= \lim_{x \to +\infty} \frac{x^{a - 1} e^{-x}}{b x^{- b - 1}} \allowdisplaybreaks \\ &= \lim_{x \to +\infty} \frac{x^{a + b} e^{-x}}{b} \\ &= 0. \end{aligned}$$ Now, we can prove Lemma $\ref{lem:4-2-1}$. For $a > 0$ and $x \geq 0$, we define $$\begin{aligned} y_{1} (a, x) := x^{a} \g(a, x)^{2} - x^{a} \G(a)^{2} + 2 \g(a, x) \G(2a, x). \end{aligned}$$ Let us prove $y_{1} (a, x) > 0$ ($a > 0$, $x > 0$). For $a > 0$ and $x \geq 0$, we define $$\begin{aligned} y_{2} (a, x) &:= a \g(a, x)^{2} - a \G(a)^{2} + 2 e^{-x} \G(2a, x); \\ y_{3} (a, x) &:= a x^{a - 1} \g(a, x) - \G(2a, x) - x^{2a - 1} e^{-x}; \\ y_{4} (a, x) &:= a (a - 1) \g(a, x) + x^{a} e^{-x} (2x + 1 - a). \end{aligned}$$ Then, we have $$\begin{aligned} \frac{d}{dx} y_{1} (a, x) &= x^{a - 1} y_{2} (a, x); \\ \frac{d}{dx} y_{2} (a, x) &= 2 e^{-x} y_{3} (a, x); \\ \frac{d}{dx} y_{3} (a, x) &= x^{a - 2} y_{4} (a, x); \\ \frac{d}{dx} y_{4} (a, x) &= x^{a} e^{-x} (3a + 1 - 2x). \end{aligned}$$ From these relations, we find that the (positive or negative) signs of $\frac{d}{dx} y_{i}(a, x)$ and $y_{i + 1}(a, x)$ ($i = 1, 2, 3$) are equal to each other for $a > 0$ and $x > 0$. Let $p_{i} (a)$ ($i = 2, 3, 4$) be the value of $x$ satisfying $y_{i}(a, x) = 0$. 
It is easily verified that $\lim_{x \to 0+} \frac{d}{dx}y_{4}(a, x) = \lim_{x \to +\infty}\frac{d}{dx}y_{4}(a, x) = \lim_{x \to 0+} y_{4}(a, x) = 0$ and $\lim_{x \to +\infty}y_{4}(a, x) = a (a - 1) \G(a)$ for $a > 0$. Therefore, from the first derivative test, we obtain Tables $1$ and $2$. Moreover, using Lemmas $\ref{lem:4-2-2}$, $\ref{lem:4-2-3}$, and L’Hôpital’s rule, we obtain $$\begin{aligned} &\lim_{x \to 0+} \frac{d}{dx} y_{3}(a, x) = \begin{cases} \infty & (0 < a < 1), \\ 0 & (a \geq 1), \end{cases} & &\lim_{x \to +\infty} \frac{d}{dx} y_{3}(a, x) = \begin{cases} 0 & (0 < a < 2), \\ 2 & (a = 2), \\ \infty & (a > 2), \end{cases} \allowdisplaybreaks \\ &\lim_{x \to 0+} y_{3}(a, x) = - \G(2a) \quad (a > 0), & &\lim_{x \to +\infty} y_{3}(a, x) = \begin{cases} 0 & (0 < a < 1), \\ 1 & (a = 1), \\ \infty & (a > 1), \end{cases} \allowdisplaybreaks \\ &\lim_{x \to 0+} \frac{d}{dx} y_{2}(a, x) = - 2\G(2a) \quad (a > 0), & &\lim_{x \to +\infty} \frac{d}{dx} y_{2}(a, x) = 0 \quad (a > 0), \allowdisplaybreaks \\ &\lim_{x \to 0+} y_{2}(a, x) = 2 \G(2a) - a\G(a)^{2} \quad (a > 0), & &\lim_{x \to +\infty} y_{2}(a, x) = 0 \quad (a > 0), \allowdisplaybreaks \\ &\lim_{x \to 0+} \frac{d}{dx} y_{1}(a, x) = \begin{cases} \infty & (0 < a < 1), \\ 1 & (a = 1), \\ 0 & (a > 1), \end{cases} & &\lim_{x \to +\infty} \frac{d}{dx} y_{1}(a, x) = 0 \quad (a > 0), \allowdisplaybreaks \\ &\lim_{x \to 0+} y_{1}(a, x) = 0 \quad (a > 0), & &\lim_{x \to +\infty} y_{1}(a, x) = 0 \quad (a > 0). \end{aligned}$$ From these results, Lemma $\ref{lem:4-1-3}$, and the fact that the signs of $\frac{d}{dx} y_{i}(a, x)$ and $y_{i + 1}(a, x)$ ($i = 1, 2, 3$) are equal to each other for $a > 0$ and $x > 0$, we obtain Tables $3$ and $4$. From Tables $3$ and $4$, we can verify that $y_{1}(a, x) > 0$ holds for $a > 0$ and $x > 0$. This completes the proof of the lemma. 
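The inequalities above lend themselves to a quick numerical spot-check. The sketch below uses the standard series representation $\g(a, x) = x^{a} e^{-x} \sum_{n \geq 0} x^{n} / (a (a + 1) \cdots (a + n))$ of the lower incomplete gamma function (plain Python, no external dependencies) to verify Lemma $\ref{lem:4-1-3}$, Lemma $\ref{lem:4-2-1}$, and Lemma $\ref{lem:4-2-2}$ on a small grid of $(a, x)$; the grid itself is arbitrary:

```python
import math

def lower_gamma(a, x):
    # Series gamma(a, x) = x^a e^{-x} * sum_{n>=0} x^n / (a (a+1) ... (a+n)),
    # convergent for all x >= 0 and accurate for the moderate x used here.
    term = 1.0 / a
    total = term
    n = 0
    while True:
        n += 1
        term *= x / (a + n)
        total += term
        if term < 1e-16 * total:
            return x**a * math.exp(-x) * total

def y1(a, x):
    # Left-hand side of Lemma [lem:4-2-1]:
    # x^a g(a,x)^2 - x^a G(a)^2 + 2 g(a,x) G(2a,x), with G(2a,x) upper incomplete.
    g = lower_gamma(a, x)
    upper_2a = math.gamma(2 * a) - lower_gamma(2 * a, x)
    return x**a * g**2 - x**a * math.gamma(a)**2 + 2 * g * upper_2a

for a in (0.25, 0.5, 1.0, 1.7, 3.0):
    assert 2 * math.gamma(2 * a) - a * math.gamma(a)**2 > 0   # Lemma [lem:4-1-3]
    for x in (0.1, 0.5, 1.0, 2.0, 4.0):
        assert y1(a, x) > 0                                   # Lemma [lem:4-2-1]
        assert a * lower_gamma(a, x) > x**a * math.exp(-x)    # Lemma [lem:4-2-2]
```

Note that $y_{1}(a, x)$ involves a near-cancellation for large $x$ (both sides decay like $e^{-x}$), so the check is restricted to $x \leq 4$.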
  $x$                          $\;0\;$   $\cdots$   $\;\frac{3a + 1}{2}\;$   $\cdots$   $\;p_{4}(a)\;$   $\cdots$   $+\infty$
  ---------------------------- --------- ---------- ------------------------ ---------- ---------------- ---------- -----------
  $\frac{d}{dx} y_{4}(a, x)$   $0$       $+$        $0$                      $-$        $-$              $-$        $0$
  $y_{4}(a, x)$                $0$       $+$        $+$                      $+$        $0$              $-$        $-$

  : Case of $0 < a < 1$

  $x$                          $\;0\;$   $\cdots$   $\;\frac{3a + 1}{2}\;$   $\cdots$   $\;+\infty\;$
  ---------------------------- --------- ---------- ------------------------ ---------- -------------------------------------------------------
  $\frac{d}{dx} y_{4}(a, x)$   $0$       $+$        $0$                      $-$        $0$
  $y_{4}(a, x)$                $0$       $+$        $+$                      $+$        $\begin{matrix}0\;\;(a = 1)\\ +\;(a > 1)\end{matrix}$

  : Case of $a \geq 1$

  $x$                          $\;0\;$     $\cdots$   $\;p_{2}(a)\;$   $\cdots$   $\;p_{3}(a)\;$   $\cdots$   $\;p_{4}(a)\;$   $\cdots$   $\;+\infty\;$
  ---------------------------- ----------- ---------- ---------------- ---------- ---------------- ---------- ---------------- ---------- ---------------
  $\frac{d}{dx} y_{3}(a, x)$   $+\infty$   $+$        $+$              $+$        $+$              $+$        $0$              $-$        $0$
  $y_{3}(a, x)$                $-$         $-$        $-$              $-$        $0$              $+$        $+$              $+$        $0$
  $\frac{d}{dx} y_{2}(a, x)$   $-$         $-$        $-$              $-$        $0$              $+$        $+$              $+$        $0$
  $y_{2}(a, x)$                $+$         $+$        $0$              $-$        $-$              $-$        $-$              $-$        $0$
  $\frac{d}{dx} y_{1}(a, x)$   $+$         $+$        $0$              $-$        $-$              $-$        $-$              $-$        $0$
  $y_{1}(a, x)$                $0$         $+$        $+$              $+$        $+$              $+$        $+$              $+$        $0$

  : Case of $0 < a < 1$

  $x$                          $\;0\;$   $\cdots$   $\;p_{2}(a)\;$   $\cdots$   $\;p_{3}(a)\;$   $\cdots$   $\;+\infty\;$
  ---------------------------- --------- ---------- ---------------- ---------- ---------------- ---------- ------------------------------------------------------------------------
  $\frac{d}{dx} y_{3}(a, x)$   $0$       $+$        $+$              $+$        $+$              $+$        $\begin{matrix}0\;(a < 2)\\ +\;(a = 2)\\ +\infty\;(a > 2)\end{matrix}$
  $y_{3}(a, x)$                $-$       $-$        $-$              $-$        $0$              $+$        $0$
  $\frac{d}{dx} y_{2}(a, x)$   $-$       $-$        $-$              $-$        $0$              $+$        $0$
  $y_{2}(a, x)$                $+$       $+$        $0$              $-$        $-$              $-$        $0$
  $\frac{d}{dx} y_{1}(a, x)$   $+$       $+$        $0$              $-$        $-$              $-$        $0$
  $y_{1}(a, x)$                $0$       $+$        $+$              $+$        $+$              $+$        $0$

  : Case of $a \geq 1$

Calculation of the expected value and
the variance of the loss ============================================================== Here, we calculate the expected value and the variance of the loss $\Pe(Z + c)$ for $c \in \mathbb{R}$. **[Expected value of the loss]{}** ---------------------------------- Here, let us put $\beta := (2 a b \G(a))^{-1}$; then, we have $$\begin{aligned} \operatorname{{E}}[\Pe(Z + c)] &= \int_{- \infty}^{+\infty} \Pe(z + c) f_{Z}(z) dz \\ &= k_{2} \beta \int_{- \infty}^{- c} (- z - c) \exp{\left( - \left\lvert \frac{z}{b} \right\rvert^{\frac{1}{a}} \right)} dz + k_{1} \beta \int_{- c}^{+\infty} (z + c) \exp{\left( - \left\lvert \frac{z}{b} \right\rvert^{\frac{1}{a}} \right)} dz. \end{aligned}$$ Replace $z$ with $b z$ to get $$\begin{aligned} \operatorname{{E}}[\Pe(Z + c)] = k_{2} b \beta \int_{- \infty}^{- c / b} (- b z - c) \exp{\left( - \lvert z \rvert^{\frac{1}{a}} \right)} dz + k_{1} b \beta \int_{- c / b}^{+\infty} (b z + c) \exp{\left( - \lvert z \rvert^{\frac{1}{a}} \right)} dz. \end{aligned}$$ When $c \geq 0$, we have $$\begin{aligned} \operatorname{{E}}[\Pe(Z + c)] &= k_{2} b \beta \int_{- \infty}^{- c / b} (- b z - c) \exp{\left( - (-z)^{\frac{1}{a}} \right)} dz \\ &\quad + k_{1} b \beta \int_{- c / b}^{0} (b z + c) \exp{\left( - (-z)^{\frac{1}{a}} \right)} dz + k_{1} b \beta \int_{0}^{+\infty} (b z + c) \exp{\left( - z^{\frac{1}{a}} \right)} dz \allowdisplaybreaks \\ &= k_{2} b \beta \int_{c / b}^{+\infty} (b z - c) \exp{\left( - z^{\frac{1}{a}} \right)} dz \\ &\quad + k_{1} b \beta \int_{0}^{c / b} (- b z + c) \exp{\left( - z^{\frac{1}{a}} \right)} dz + k_{1} b \beta \int_{0}^{+\infty} (b z + c) \exp{\left( - z^{\frac{1}{a}} \right)} dz \allowdisplaybreaks \\ &= (k_{1} + k_{2}) b^{2} \beta \int_{c / b}^{+\infty} z \exp{\left( - z^{\frac{1}{a}} \right)} dz \\ &\quad + (k_{1} - k_{2}) b c \beta \int_{0}^{+\infty} \exp{\left( - z^{\frac{1}{a}} \right)} dz + (k_{1} + k_{2}) b c \beta \int_{0}^{c / b} \exp{\left( - z^{\frac{1}{a}} \right)} dz. 
\end{aligned}$$ When $c < 0$, we have $$\begin{aligned} \operatorname{{E}}[\Pe(Z + c)] &= k_{2} b \beta \int_{- \infty}^{0} (- b z - c) \exp{\left( - (-z)^{\frac{1}{a}} \right)} dz + k_{2} b \beta \int_{0}^{- c / b} (- b z - c) \exp{\left( - z^{\frac{1}{a}} \right)} dz \\ &\quad + k_{1} b \beta \int_{- c / b}^{+\infty} (b z + c) \exp{\left( - z^{\frac{1}{a}} \right)} dz \allowdisplaybreaks \\ &= k_{2} b \beta \int_{0}^{+\infty} (b z - c) \exp{\left( - z^{\frac{1}{a}} \right)} dz + k_{2} b \beta \int_{0}^{- c / b} (- b z - c) \exp{\left( - z^{\frac{1}{a}} \right)} dz \\ &\quad + k_{1} b \beta \int_{- c / b}^{+\infty} (b z + c) \exp{\left( - z^{\frac{1}{a}} \right)} dz \allowdisplaybreaks \\ &= (k_{1} + k_{2}) b^{2} \beta \int_{- c / b}^{+\infty} z \exp{\left( - z^{\frac{1}{a}} \right)} dz \\ &\quad + (k_{1} - k_{2}) b c \beta \int_{0}^{+\infty} \exp{\left( - z^{\frac{1}{a}} \right)} dz - (k_{1} + k_{2}) b c \beta \int_{0}^{- c / b} \exp{\left( - z^{\frac{1}{a}} \right)} dz. \end{aligned}$$ From the above, for any $c \in \mathbb{R}$, we have $$\begin{aligned} \operatorname{{E}}[\Pe(Z + c)] &= (k_{1} + k_{2}) b^{2} \beta \int_{\lvert c / b \rvert}^{+\infty} z \exp{\left( - z^{\frac{1}{a}} \right)} dz \\ &\quad + (k_{1} - k_{2}) b c \beta \int_{0}^{+\infty} \exp{\left( - z^{\frac{1}{a}} \right)} dz + (k_{1} + k_{2}) b \lvert c \rvert \beta \int_{0}^{\lvert c / b \rvert} \exp{\left( - z^{\frac{1}{a}} \right)} dz. \end{aligned}$$ Now set $t := z^{\frac{1}{a}}$ to get $$\begin{aligned} \operatorname{{E}}[\Pe(Z + c)] &= (k_{1} + k_{2}) a b^{2} \beta \int_{c'}^{+\infty} t^{2a - 1} e^{-t} dt \\ &\quad + (k_{1} - k_{2}) a b c \beta \int_{0}^{+\infty} t^{a - 1} e^{-t} dt + (k_{1} + k_{2}) a b \lvert c \rvert \beta \int_{0}^{c'} t^{a - 1} e^{-t} dt \allowdisplaybreaks \\ &= (k_{1} + k_{2}) a b^{2} \beta \G(2a, c') + (k_{1} - k_{2}) a b c \beta \G(a) + (k_{1} + k_{2}) a b \lvert c \rvert \beta \g(a, c'), \end{aligned}$$ where $c' := \lvert c / b \rvert^{\frac{1}{a}}$. 
Therefore, for any $c \in \mathbb{R}$, we have $$\begin{aligned} \label{E[Pe(Z+c)]-2} \operatorname{{E}}[\Pe(Z + c)] = \frac{(k_{1} - k_{2}) c}{2} + \frac{(k_{1} + k_{2}) \lvert c \rvert}{2 \G(a)} \g\left(a, \left\lvert \frac{c}{b} \right\rvert^{\frac{1}{a}} \right) + \frac{(k_{1} + k_{2}) b}{2 \G(a)} \G\left(2a, \left\lvert \frac{c}{b} \right\rvert^{\frac{1}{a}} \right). \end{aligned}$$ **[Variance of the loss]{}** ---------------------------- Now let us calculate the variance of the loss $\Pe(Z + c)$ for $c \in \mathbb{R}$. Put $\beta := (2 a b \G(a))^{-1}$; then, we have $$\begin{aligned} \operatorname{{E}}[\Pe(Z + c)^{2}] &= \int_{- \infty}^{+\infty} \Pe(z + c)^{2} f_{Z}(z) dz \allowdisplaybreaks \\ &= k_{2}^{2} \beta \int_{- \infty}^{- c} (z + c)^{2} \exp{\left( - \left\lvert \frac{z}{b} \right\rvert^{\frac{1}{a}} \right)} dz + k_{1}^{2} \beta \int_{- c}^{+\infty} (z + c)^{2} \exp{\left( - \left\lvert \frac{z}{b} \right\rvert^{\frac{1}{a}} \right)} dz. \end{aligned}$$ Replace $z$ with $b z$ to get $$\begin{aligned} \operatorname{{E}}[\Pe(Z + c)^{2}] = k_{2}^{2} b \beta \int_{- \infty}^{- c / b} (b z + c)^{2} \exp{\left( - \lvert z \rvert^{\frac{1}{a}} \right)} dz + k_{1}^{2} b \beta \int_{- c / b}^{+\infty} (b z + c)^{2} \exp{\left( - \lvert z \rvert^{\frac{1}{a}} \right)} dz. 
\end{aligned}$$ When $c \geq 0$, we have $$\begin{aligned} \operatorname{{E}}[\Pe(Z + c)^{2}] &= k_{2}^{2} b \beta \int_{- \infty}^{- c / b} (b z + c)^{2} \exp{\left( - (- z)^{\frac{1}{a}} \right)} dz \\ &\quad + k_{1}^{2} b \beta \int_{- c / b}^{0} (b z + c)^{2} \exp{\left( - (- z)^{\frac{1}{a}} \right)} dz + k_{1}^{2} b \beta \int_{0}^{+\infty} (b z + c)^{2} \exp{\left( - z^{\frac{1}{a}} \right)} dz \allowdisplaybreaks \\ &= k_{2}^{2} b \beta \int_{c / b}^{+\infty} (- b z + c)^{2} \exp{\left( - z^{\frac{1}{a}} \right)} dz \\ &\quad + k_{1}^{2} b \beta \int_{0}^{c / b} (- b z + c)^{2} \exp{\left( - z^{\frac{1}{a}} \right)} dz + k_{1}^{2} b \beta \int_{0}^{+\infty} (b z + c)^{2} \exp{\left( - z^{\frac{1}{a}} \right)} dz \allowdisplaybreaks \\ &= (k_{1}^{2} + k_{2}^{2}) b^{3} \beta \int_{0}^{+\infty} z^{2} \exp{\left( - z^{\frac{1}{a}} \right)} dz + (k_{1}^{2} - k_{2}^{2}) b^{3} \beta \int_{0}^{c / b} z^{2} \exp{\left( - z^{\frac{1}{a}} \right)} dz \\ &\quad + 2 (k_{1}^{2} - k_{2}^{2}) b^{2} c \beta \int_{c / b}^{+\infty} z \exp{\left( - z^{\frac{1}{a}} \right)} dz \\ &\quad + (k_{1}^{2} + k_{2}^{2}) b c^{2} \beta \int_{0}^{+\infty} \exp{\left( - z^{\frac{1}{a}} \right)} dz + (k_{1}^{2} - k_{2}^{2}) b c^{2} \beta \int_{0}^{c / b} \exp{\left( - z^{\frac{1}{a}} \right)} dz. 
\end{aligned}$$ When $c < 0$, we have $$\begin{aligned} \operatorname{{E}}[\Pe(Z + c)^{2}] &= k_{2}^{2} b \beta \int_{- \infty}^{0} (b z + c)^{2} \exp{\left( - (- z)^{\frac{1}{a}} \right)} dz + k_{2}^{2} b \beta \int_{0}^{- c / b} (b z + c)^{2} \exp{\left( - z^{\frac{1}{a}} \right)} dz \\ &\quad + k_{1}^{2} b \beta \int_{- c / b}^{+\infty} (b z + c)^{2} \exp{\left( - z^{\frac{1}{a}} \right)} dz \allowdisplaybreaks \\ &= k_{2}^{2} b \beta \int_{0}^{+\infty} (- b z + c)^{2} \exp{\left( - z^{\frac{1}{a}} \right)} dz + k_{2}^{2} b \beta \int_{0}^{- c / b} (b z + c)^{2} \exp{\left( - z^{\frac{1}{a}} \right)} dz \\ &\quad + k_{1}^{2} b \beta \int_{- c / b}^{+\infty} (b z + c)^{2} \exp{\left( - z^{\frac{1}{a}} \right)} dz \allowdisplaybreaks \\ &= (k_{1}^{2} + k_{2}^{2}) b^{3} \beta \int_{0}^{+\infty} z^{2} \exp{\left( - z^{\frac{1}{a}} \right)} dz - (k_{1}^{2} - k_{2}^{2}) b^{3} \beta \int_{0}^{- c / b} z^{2} \exp{\left( - z^{\frac{1}{a}} \right)} dz \\ &\quad + 2 (k_{1}^{2} - k_{2}^{2}) b^{2} c \beta \int_{- c / b}^{+\infty} z \exp{\left( - z^{\frac{1}{a}} \right)} dz \\ &\quad + (k_{1}^{2} + k_{2}^{2}) b c^{2} \beta \int_{0}^{+\infty} \exp{\left( - z^{\frac{1}{a}} \right)} dz - (k_{1}^{2} - k_{2}^{2}) b c^{2} \beta \int_{0}^{- c / b} \exp{\left( - z^{\frac{1}{a}} \right)} dz. 
\end{aligned}$$ From the above, for any $c \in \mathbb{R}$, we have $$\begin{aligned} \operatorname{{E}}[\Pe(Z + c)^{2}] &= (k_{1}^{2} + k_{2}^{2}) b^{3} \beta \int_{0}^{+\infty} z^{2} \exp{\left( - z^{\frac{1}{a}} \right)} dz \\ &\quad + \operatorname{{sgn}}(c) (k_{1}^{2} - k_{2}^{2}) b^{3} \beta \int_{0}^{\lvert c / b \rvert} z^{2} \exp{\left( - z^{\frac{1}{a}} \right)} dz \\ &\quad + 2 (k_{1}^{2} - k_{2}^{2}) b^{2} c \beta \int_{\lvert c / b \rvert}^{+\infty} z \exp{\left( - z^{\frac{1}{a}} \right)} dz \\ &\quad + (k_{1}^{2} + k_{2}^{2}) b c^{2} \beta \int_{0}^{+\infty} \exp{\left( - z^{\frac{1}{a}} \right)} dz \\ &\quad + \operatorname{{sgn}}(c) (k_{1}^{2} - k_{2}^{2}) b c^{2} \beta \int_{0}^{\lvert c / b \rvert} \exp{\left( - z^{\frac{1}{a}} \right)} dz. \end{aligned}$$ Now set $t := z^{\frac{1}{a}}$ to get $$\begin{aligned} \operatorname{{E}}[\Pe(Z + c)^{2}] &= (k_{1}^{2} + k_{2}^{2}) a b^{3} \beta \int_{0}^{+\infty} t^{3a - 1} e^{-t} dt + \operatorname{{sgn}}(c) (k_{1}^{2} - k_{2}^{2}) a b^{3} \beta \int_{0}^{c'} t^{3a - 1} e^{-t} dt \\ &\quad + 2(k_{1}^{2} - k_{2}^{2}) a b^{2} c \beta \int_{c'}^{+\infty} t^{2a - 1} e^{-t} dt \\ &\quad + (k_{1}^{2} + k_{2}^{2}) a b c^{2} \beta \int_{0}^{+\infty} t^{a - 1} e^{-t} dt + \operatorname{{sgn}}(c) (k_{1}^{2} - k_{2}^{2}) a b c^{2} \beta \int_{0}^{c'} t^{a - 1} e^{-t} dt \allowdisplaybreaks \\ &= (k_{1}^{2} + k_{2}^{2}) a b^{3} \beta \G(3a) + \operatorname{{sgn}}(c) (k_{1}^{2} - k_{2}^{2}) a b^{3} \beta \g(3a, c') \\ &\quad + 2(k_{1}^{2} - k_{2}^{2}) a b^{2} c \beta \G(2a, c') \\ &\quad + (k_{1}^{2} + k_{2}^{2}) a b c^{2} \beta \G(a) + \operatorname{{sgn}}(c) (k_{1}^{2} - k_{2}^{2}) a b c^{2} \beta \g(a, c'), \end{aligned}$$ where $c' := \lvert c / b \rvert^{\frac{1}{a}}$. 
Therefore, for any $c \in \mathbb{R}$, we have $$\begin{aligned} \operatorname{{E}}[\Pe(Z + c)^{2}] &= \frac{(k_{1}^{2} + k_{2}^{2}) c^{2}}{2} + \operatorname{{sgn}}(c) \frac{(k_{1}^{2} - k_{2}^{2}) c^{2}}{2 \G(a)} \g\left(a, \left\lvert \frac{c}{b} \right\rvert^{\frac{1}{a}} \right) + \frac{(k_{1}^{2} - k_{2}^{2}) b c}{\G(a)} \G\left(2a, \left\lvert \frac{c}{b} \right\rvert^{\frac{1}{a}} \right) \\ &\quad + \frac{(k_{1}^{2} + k_{2}^{2}) b^{2} \G(3a)}{2 \G(a)} + \operatorname{{sgn}}(c) \frac{(k_{1}^{2} - k_{2}^{2}) b^{2}}{2 \G(a)} \g\left(3a, \left\lvert \frac{c}{b} \right\rvert^{\frac{1}{a}} \right). \end{aligned}$$ Also, from $(\ref{E[Pe(Z+c)]-2})$, we have $$\begin{aligned} \operatorname{{E}}[\Pe(Z + c)]^{2} &= \frac{(k_{1} - k_{2})^{2} c^{2}}{4} + \frac{(k_{1}^{2} - k_{2}^{2}) c \lvert c \rvert}{2 \G(a)} \g\left(a, \left\lvert \frac{c}{b} \right\rvert^{\frac{1}{a}} \right) + \frac{(k_{1}^{2} - k_{2}^{2}) b c}{2 \G(a)} \G\left(2a, \left\lvert \frac{c}{b} \right\rvert^{\frac{1}{a}} \right) \\ &\quad + \frac{(k_{1} + k_{2})^{2} b \lvert c \rvert}{2 \G(a)^{2}} \g\left(a, \left\lvert \frac{c}{b} \right\rvert^{\frac{1}{a}} \right) \G\left(2a, \left\lvert \frac{c}{b} \right\rvert^{\frac{1}{a}} \right) \\ &\quad + \frac{(k_{1} + k_{2})^{2} c^{2}}{4 \G(a)^{2}} \g\left(a, \left\lvert \frac{c}{b} \right\rvert^{\frac{1}{a}} \right)^{2} + \frac{(k_{1} + k_{2})^{2} b^{2}}{4 \G(a)^{2}} \G\left(2a, \left\lvert \frac{c}{b} \right\rvert^{\frac{1}{a}} \right)^{2}. 
\end{aligned}$$ Therefore, for any $c \in \mathbb{R}$, we have $$\begin{aligned} \operatorname{{V}}[\Pe(Z + c)] &= \operatorname{{E}}[\Pe(Z + c)^{2}] - \operatorname{{E}}[\Pe(Z + c)]^{2} \\ &= \frac{(k_{1} + k_{2})^{2} c^{2}}{4} + \frac{(k_{1}^{2} - k_{2}^{2}) b c}{2 \G(a)} \G\left(2a, \left\lvert \frac{c}{b} \right\rvert^{\frac{1}{a}} \right) \nonumber \\ &\quad - \frac{(k_{1} + k_{2})^{2} b \lvert c \rvert}{2 \G(a)^{2}} \g\left(a, \left\lvert \frac{c}{b} \right\rvert^{\frac{1}{a}} \right) \G\left(2a, \left\lvert \frac{c}{b} \right\rvert^{\frac{1}{a}} \right) \nonumber \\ &\quad - \frac{(k_{1} + k_{2})^{2} c^{2}}{4 \G(a)^{2}} \g\left(a, \left\lvert \frac{c}{b} \right\rvert^{\frac{1}{a}} \right)^{2} - \frac{(k_{1} + k_{2})^{2} b^{2}}{4 \G(a)^{2}} \G\left(2a, \left\lvert \frac{c}{b} \right\rvert^{\frac{1}{a}} \right)^{2} \nonumber \\ &\quad + \frac{(k_{1}^{2} + k_{2}^{2}) b^{2} \G(3a)}{2 \G(a)} + \operatorname{{sgn}}(c) \frac{(k_{1}^{2} - k_{2}^{2}) b^{2}}{2 \G(a)} \g\left(3a, \left\lvert \frac{c}{b} \right\rvert^{\frac{1}{a}} \right). \nonumber\end{aligned}$$ Naoya Yamaguchi\ Office for Establishment of an Information-related School\ Nagasaki University\ 1-14 Bunkyo, Nagasaki City 852-8521\ Japan\ yamaguchi@nagasaki-u.ac.jp Yuka Yamaguchi\ Office for Establishment of an Information-related School\ Nagasaki University\ 1-14 Bunkyo, Nagasaki City 852-8521\ Japan\ yamaguchiyuka@nagasaki-u.ac.jp Ryuei Nishi\ Office for Establishment of an Information-related School\ Nagasaki University\ 1-14 Bunkyo, Nagasaki City 852-8521\ Japan\ nishii.ryuei@nagasaki-u.ac.jp
Introduction
============

In the theory of random Hermitian matrices [@Guhr98], two robust types of statistics are found in the limit of infinite matrix size (denoted here as the ’thermodynamic limit’). The first is the Wigner-Dyson statistics, describing systems that become ergodic in the thermodynamic limit and have an incompressible, correlated spectrum and Gaussian distributed, uncorrelated amplitudes of the corresponding eigenstates (see Fig. 1a). Since we do not consider further symmetry constraints, we focus on the matrix ensembles denoted as class A in the classification of [@AZ96]. A convenient representative of its ergodic limiting ensemble is the Gaussian unitary matrix ensemble (GUE). The second robust statistics is the Poisson statistics, with eigenstates localized on certain basis states (sites) and with a compressible, uncorrelated spectrum (see Fig. 1b).

Realistic complex quantum systems, represented by random Hermitian matrices, can show a crossover between Wigner-Dyson and Poisson statistics or, in some cases, a true quantum phase transition with novel ’critical’ statistics. A well-known example is the 3D Anderson model (see Fig. 2), describing the motion of independent electrons on a 3D lattice with random uncorrelated on-site disorder. Below a certain critical value of disorder, in the thermodynamic limit, all states at the energy band center are infinitely extended (delocalized) in space, while for larger disorder all states are spatially localized. Instead of changing the disorder, one can change the energy within the energy band, keeping the disorder fixed at low values. Again, a transition from localization (band tails) to delocalization (band center) occurs. It is worth mentioning that the average density of states (DOS) is non-critical, i.e. it stays smooth across the localization-delocalization (LD) transition.
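The two limiting statistics are easy to exhibit numerically. A commonly used diagnostic is the spacing-ratio statistic $r_n = \min(s_n, s_{n+1})/\max(s_n, s_{n+1})$ of consecutive level spacings $s_n$, whose ensemble average is $\approx 0.60$ for the GUE and $2\ln 2 - 1 \approx 0.39$ for a Poisson spectrum. A small sketch (assumes NumPy; matrix sizes and ensemble counts are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)

def mean_spacing_ratio(levels):
    s = np.diff(np.sort(levels))
    return np.mean(np.minimum(s[1:], s[:-1]) / np.maximum(s[1:], s[:-1]))

def gue_levels(n):
    a = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    h = (a + a.conj().T) / 2          # Hermitian; class A, no further symmetries
    return np.linalg.eigvalsh(h)

# Average over an ensemble; keep only the central half of each spectrum,
# where the GUE density of states is roughly flat.
r_gue = np.mean([mean_spacing_ratio(gue_levels(200)[50:150]) for _ in range(40)])
r_poi = np.mean([mean_spacing_ratio(rng.uniform(size=200)) for _ in range(40)])

print(round(r_gue, 2), round(r_poi, 2))   # close to 0.60 (GUE) and 0.39 (Poisson)
```

Unlike the spacing distribution itself, the ratio statistic needs no unfolding of the smooth DOS, which is convenient for quick checks.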
Although these statements are substantiated by analytical as well as numerical work (for reviews see [@MK93; @J98]), the special structure of this matrix ensemble (composed of a sparse, but deterministic matrix and a random diagonal matrix) has so far prevented a rigorous proof. Another well-known system with a transition from localized to critical states is the two-dimensional (2D) quantum Hall system (for reviews see [@H94; @JVFH94]), which we will describe briefly later. Furthermore, several matrix ensembles modeling the motion of 2D disordered electrons undergoing (time-reversal symmetric) spin-orbit interactions are known to display an LD transition (see e.g. [@MJH98]). In all of these realistic matrix ensembles the statistics at criticality represents an unstable fixed point under increasing system size (i.e. matrix dimension), which means that any slight shift away from the critical value of the energy, say, will drive the system into one of the stable matrix ensembles: Wigner-Dyson for the delocalized states and Poisson for localized states. The critical ensembles are characterized by correlated spectra, but with a finite compressibility. Furthermore, critical eigenstates are multifractal, and the multifractal exponents are related to the compressibility of the spectrum (for a review see [@J98]). It is desirable to study matrix ensembles with simple construction rules and to ask for the necessary ingredients for an LD transition. Interest in crossover ensembles has also grown in quantum chaos [@C99]. In that context the Rosenzweig-Porter model [@RP60] was studied as a toy model for the crossover. It is defined as a simple superposition of a Poissonian and a Wigner-Dyson matrix.
It has been shown rigorously that, by choosing the superposition in an appropriate way, novel critical ensembles emerge, but the spectral compressibility is identical to that of the Poisson ensemble and states are not multifractal (see [@JVS98] and references therein). Another well studied matrix ensemble is that of random band matrices (RBM) with uncorrelated elements. The band width $B$ describes the number of diagonals with non-vanishing elements. For $B\sim N^s$, with $s > 1/2$, one recovers the Wigner-Dyson statistics. Such band matrix models have been discussed in the context of the ’quantum kicked rotor’ problem [@I90] and have been studied extensively in a series of papers by Mirlin, Fyodorov and others (for review see [@MF94; @M99]). It turned out that, in particular for $B\gg 1$, all states are localized with a localization length (in index space) $\xi\sim B^2$. For fixed $B$ one therefore has a crossover from Wigner-Dyson to Poisson statistics as $N$ is taken from values much smaller than $B^2$ to values much larger than $B^2$, and $B^2/N$ is the relevant parameter for a scaling analysis of data. Superpositions of such random band matrices with random diagonal matrices have been studied in the context of the ’two-interacting particle’ problem (see e.g. [@Sh94; @Fr98]); however, these ensembles do not show novel critical behavior as compared to the Rosenzweig-Porter model. In fact, only a few simply designed matrix ensembles are known to become critical with multifractal critical states (see [@Mir97; @Kr98]), for example ’power law’ band matrix ensembles, where the strength of (uncorrelated) matrix elements falls off in a power-law fashion in the direction perpendicular to the central diagonal. The critical cases occur for the power-law behavior $\sim x^{-1}$ of the typical absolute values of matrix elements [@Mir97; @BV].
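The crossover scale $\xi \sim B^{2}$ for ordinary (uncorrelated) RBM can be illustrated with a few lines of numerics: the quantity $N \cdot \mathrm{IPR}$, where $\mathrm{IPR} = \sum_{n} \lvert\psi_{n}\rvert^{4}$ is the inverse participation ratio of an eigenstate, stays of order one for extended states but grows like $N/\xi$ once $N \gg B^{2}$. A rough sketch (assumes NumPy; sizes and sample counts are arbitrary, and no careful scaling analysis is intended):

```python
import numpy as np

rng = np.random.default_rng(1)

def rbm(n, b):
    # Hermitian random band matrix: uncorrelated Gaussian elements on the
    # diagonals |i - j| <= b, zero outside the band.
    a = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    h = (a + a.conj().T) / 2
    i, j = np.indices((n, n))
    h[np.abs(i - j) > b] = 0.0
    return h

def mean_npr(n, b, samples=10):
    # N * IPR averaged over mid-spectrum states: ~O(1) for extended states,
    # ~N/xi for states localized on a scale xi << N.
    vals = []
    for _ in range(samples):
        w, v = np.linalg.eigh(rbm(n, b))
        mid = v[:, n // 4 : 3 * n // 4]
        vals.append(n * np.mean(np.sum(np.abs(mid) ** 4, axis=0)))
    return np.mean(vals)

small = mean_npr(24, 3)    # N comparable to B^2: effectively extended
large = mean_npr(600, 3)   # N >> B^2: localized, so N * IPR is much larger

assert large > 2 * small
```

The contrast sharpens with growing $N$ at fixed $B$, in line with $B^{2}/N$ being the relevant scaling parameter.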
It is, however, important to notice a significant difference from realistic critical ensembles: there is no LD transition within the spectrum; if parameters are fixed to critical values, all states are critical. In this paper we study correlated random band matrix (CRBM) ensembles and, with the assistance of numerical calculations, argue that these ensembles can lead to a LD transition within the spectrum. A new parameter $C(N)\sim N^t$ describing the correlation of certain matrix elements is introduced for random band matrices. For $B(N)\sim \sqrt{N}$ states are localized outside of the energy band center and a LD transition at the energy band center occurs provided $1/2 \leq t \leq 1$. A major motivation for studying these ensembles originates from the theory of the integer quantum Hall effect [@JVFH94]. The plateau-to-plateau transition in the quantum Hall effect can be captured in models of non-interacting 2D electrons in a strong magnetic field and a random potential, referred to as quantum Hall system (QHS). In the one-band Landau representation the Hamiltonian is represented as a random matrix with two characteristic features. (i) The matrix elements decay perpendicular to the main diagonal in a Gaussian way. (ii) No correlations exist between elements on distinct ’nebendiagonal’ lines, but Gaussian correlations exist along each of the nebendiagonals. These features led to the introduction of the ’random Landau model’ (RLM) to study critical properties of QHSs (see e.g. [@H94]). The original purpose of constructing the RLM was to avoid explicit calculations of matrix elements starting from a randomly chosen disorder potential and to directly generate the matrix elements as random numbers that fulfill the statistical properties (i) and (ii).
As will be explained in more detail below, the corresponding CRBM model simplifies the RLM further, inasmuch as a sharp band width is introduced and correlations along nebendiagonals are idealized and cut off after a finite length. Recently, matrix ensembles with correlated matrix elements attracted some interest [@FK99] in the context of the metal-insulator experiments in 2D [@Khv94ff], for which strong Coulomb interaction is believed to be a necessary ingredient. It is very interesting that in [@Izr98] 1D models with correlated disorder potentials could, to a large extent, be solved analytically and that certain correlated disorder potentials were shown to cause LD transitions in 1D. It is not obvious how to extend the method of [@Izr98] to the case of CRBM models. Usually, correlations in matrix ensembles lead to serious complications in analytical attacks. For example, in the field theoretic treatment (see e.g. [@EfeB97]) of random matrix ensembles the absence of long ranged correlations is essential to find appropriate field degrees of freedom that depend smoothly on a single site variable. In our CRBM models correlations are introduced by constraints (a number of matrix elements are taken to be identical). This may help to reduce complications in constructing a field theoretic approach for CRBM models. In Sec. 2 we give a detailed definition of the CRBM and discuss three alternative interpretations. The investigation of the LD transition is carried out in Sec. 3 by a multifractal analysis of states for an ensemble that is expected to fall into the quantum Hall universality class. Our results are in favor of this expectation. The analysis is carried over to ensembles where correlations are taken to extreme limits in Sec. 4. In Sec. 5 we present our conclusions.
Correlated Random Band Matrix Model
===================================

Let the elements of a $N\times N$ Hermitian matrix $H$ be written as $$\begin{aligned}
H_{kl} &=& x_{kl}+i\,y_{kl}\,, \quad l > k \,,\\
H_{kk} &=& \sqrt{2}\,x_{kk} \,,\end{aligned}$$ where all non-vanishing real numbers $x_{kl}, y_{kl}$ are taken from the same distribution ${\cal P}$ with vanishing mean and finite variance $\sigma^2$. We take the symmetric and uniform distribution on $[-1,1]$ ($\sigma^2=1/3$). With $B,C$ being two integer numbers, called the ’band width’ and the ’correlation parameter’, respectively, the correlated band matrix ensembles are defined by the following algorithm (I)–(III) and are visualized in Fig. 3.

(I) Begin with the main diagonal of $H$ and draw a random integer number $N_1\leq C$ and the random number $x_{11}$ from the distribution ${\cal P}$. Take the first $N_1$ values on the main diagonal equal to $\sqrt{2}\, x_{11}$. Then, draw $x_{N_1+1,N_1+1}$ from the distribution ${\cal P}$ and take successively $C$ elements on the main diagonal equal to $\sqrt{2}\,x_{N_1+1,N_1+1}$. Now, draw $x_{N_1+C+1,N_1+C+1}$ from the distribution ${\cal P}$ and take successively $C$ elements on the main diagonal equal to $\sqrt{2}\,x_{N_1+C+1,N_1+C+1}$. Continue with this procedure until the main diagonal is filled up.

(II) Now consider the next ’nebendiagonal’. Its first random element $H_{12}=x_{12}+iy_{12}$ and a random integer number $N_{2}\leq C$ are drawn. Put the first $N_2$ elements of this nebendiagonal equal to $H_{12}$. The next $C$ elements on the same nebendiagonal are taken equal to the random value $H_{N_2+1,N_2+2}$, and so on.

(III) The procedure terminates after the $B$th nebendiagonal ($l-k=B$) is filled up. All other matrix elements ($l-k > B$) are set to zero. Finally, Hermiticity is installed by taking $H_{lk}=H^*_{kl}$ for $l>k$.

For $C=1$ the usual band matrix models (band width $B$) of uncorrelated matrix elements are recovered.
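The construction steps (I)–(III) can be sketched in a few lines of NumPy. This is our own illustrative implementation, not code from the paper; the function name `crbm` and all parameter choices are ours:

```python
import numpy as np

def crbm(N, B, C, seed=None):
    """Sketch of the CRBM construction, steps (I)-(III).

    Along each of the B+1 (neben)diagonals, the first run of identical
    elements has random length <= C; all later runs have length C.
    Real and imaginary parts are uniform on [-1, 1]; the main diagonal
    carries sqrt(2)*x so that it is real.
    """
    rng = np.random.default_rng(seed)
    H = np.zeros((N, N), dtype=complex)
    for m in range(B + 1):                 # m = 0 is the main diagonal
        length = N - m
        vals = []
        first_run = int(rng.integers(1, C + 1))   # N_m <= C
        while len(vals) < length:
            if m == 0:
                v = np.sqrt(2) * rng.uniform(-1, 1)
            else:
                v = rng.uniform(-1, 1) + 1j * rng.uniform(-1, 1)
            run = first_run if not vals else C
            vals.extend([v] * run)
        idx = np.arange(length)
        H[idx, idx + m] = vals[:length]
    # step (III): install Hermiticity via the conjugate lower triangle
    return H + np.triu(H, 1).conj().T

H = crbm(64, B=8, C=4, seed=0)
```

For `C=1` every run has length one, so the usual uncorrelated band matrix is recovered.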
$C$ and $B$ form the relevant parameters of the CRBM, while the value of $\sigma$ is not significant – it just defines the energy units. For finite $C>1$ the correlation along (neben)diagonals is a triangular function of range $C$ and half-width $C/2$. Thus, $C/2$ is a typical distance over which elements are correlated along (neben)diagonals. The spectrum is always distributed in a symmetric way around the center $E=0$ as a consequence of the symmetry of the distribution ${\cal P}$. In the following, we are going to discuss three possible physical interpretations. The most obvious interpretation relies on the site representation $\mid l \KET = (0,\ldots,0,1,0,\ldots,0)$, with the $1$ at position $l$. In this representation, $H$ describes hopping of particles on a 1D chain of length $L=N$ (lattice spacing $=1$) with a maximum distance of hopping equal to $B$. The average hopping probability in one instant of time $t$ is ($\hbar=1$, $\BRA \ldots \KET$ denotes the ensemble average) $$\BRA\, |\langle k(t)\mid l(t+1)\rangle|^2 \,\KET= 2\sigma^2 \,, \qquad 0 \leq |l-k| \leq B \,.$$ The correlation between matrix elements means that two hopping amplitudes are identical, if the hopping distance is equal, and provided the hopping starts at sites the distance between which is less than typically $C/2$ (see Fig. 4). An alternative interpretation results when the $N$ sites are arranged in a quasi-1D (Q1D) geometry with $N_c=B+1$ parallel ’channels’ in a ’wire’ of length $L'=N/(B+1)$. The lattice spacing [*along*]{} the wire, $\hbar$, and the unit of time are taken to be $1$. Transitions in one unit of time are possible between channels (coupling) and along the wire (hopping). Sites are labeled (see Fig. 5) such that hopping is possible at most over one lattice spacing along the wire. Again, hopping (coupling) is correlated over typically $\sim C/2$ states, provided the difference between labels is identical. A less obvious interpretation of CRBM models arises when quantum Hall systems are studied in a one band Landau representation.
A quantum Hall system is described by a one-particle Hamiltonian of 2D electrons (charge $-e$, mass $m$) in the presence of a strong perpendicular magnetic field ${\cal B}$ and a random potential $V(x,y)$. In Landau gauge the Hamiltonian reads $$H=\frac{1}{2m}\,p_x^2 + \frac{1}{2m}\left(p_y + e{\cal B}\,x\right)^2+ V(x,y) \,,$$ where $(p_x,p_y)$ is the canonical momentum with respect to the Cartesian coordinates $(x,y)$. For periodic boundary conditions in $y$-direction (length $L_y$) the kinetic energy forms a highly degenerate harmonic oscillator problem ($p_y$ is conserved) that is diagonalized by Landau states [@Lan29] $\mid n,l\KET$. Here $n=0,1,2,3,\ldots$ labels the Landau energies $E_n=\hbar \omega_c (n+1/2)$ ($\omega_c=e{\cal B}/m$ the cyclotron frequency), and $l=0,1,2, \ldots$ labels center coordinates of the degenerate Landau states. The Landau states separate into plane waves in $y$-direction with quantized momentum $q_l$ and oscillator wave functions centered at $X_l=-\lambda^2 q_l$, where $\lambda=\sqrt{\hbar/(e{\cal B})}$ is the characteristic ’magnetic length’. As long as the typical values of the random potential are much smaller than the cyclotron energy, one can study the full eigenvalue problem of the low lying ’Landau bands’ approximately by restricting the Hilbert space to separate Landau levels $n$. In particular, for the lowest Landau level the Landau states read $$\BRA x,y\mid l\KET= \frac{1}{\sqrt{\sqrt{\pi}\,\lambda L_y}}\; e^{i q_l y}\; e^{-(x-X_l)^2/2\lambda^2} \,. \[2.1\]$$ A convenient recipe to study finite systems of length $L$ in $x$-direction is to use ’Landau counting’ of states, that is to take only those Landau states into account for which the center coordinates $X_l$ fall into the interval $[0,L]$. The total number of states, for an aspect ratio $a=L_y/L$, is $$N=\frac{a}{2\pi}\left(\frac{L}{\lambda}\right)^2 \,.$$
By shifting the lowest Landau energy to zero, the eigenvalue problem is defined by the matrix $H_{kl}=\BRA k |V| l\KET$ which reads $$H_{kl}= \frac{e^{-\Delta X^2/4\lambda^2}}{\sqrt{\pi}\,\lambda}\,\int_{-\infty}^{\infty} dx\; \bar{V}(x; \Delta X)\; e^{-(x-X)^2/\lambda^2} \,, \qquad
\bar{V}(x;\Delta X) \equiv L^{-1}_y\int_{-L_y/2}^{L_y/2} dy\; V(x,y)\; e^{i\, y\,\Delta X/\lambda^2 } \,,\[HKL\]$$ where $X=(X_k+X_l)/2$ and $\Delta X= X_k-X_l$. These matrix elements form a random $N\times N$ matrix, the elements of which are composed of a Fourier transformation of the random potential in $y$-direction and a Gaussian weighted averaging of the potential over a magnetic length in $x$-direction. Crucial for the structure of the matrix is the fact that, within the distance of one magnetic length $\lambda$, a number of $N_\lambda \sim N(\lambda/L)$ different Landau states can be situated (see Fig. 6). Thus, for a constant aspect ratio, this number increases as $N_\lambda \sim \sqrt{N}$ in the thermodynamic limit. This leads to correlations between matrix elements along nebendiagonals over a range of $N_\lambda$ states. This range increases when the disorder potential is correlated in real space over distances exceeding the magnetic length. The correlation between matrix elements on distinct (neben)diagonals is negligible, since the correlator contains the superposition of $\sim N_\lambda$ random phase factors. The matrix elements decay perpendicular to the main diagonal due to the Gaussian decay of the Landau states. For disorder potentials with a spatial correlation length $d\gg \lambda$ the decay sets in earlier, because the Landau states are orthogonal. A quantitative analysis shows (see [@H94]) that the Gaussian band-width is $$\tilde{B} = \frac{1}{\beta}\,\sqrt{\frac{aN}{\pi}} \,,\[bandwidth\]$$ and the Gaussian correlation length along (neben)diagonals is $$\tilde{C} = \beta\,\sqrt{\frac{aN}{\pi}} \,.\[corrlenght\]$$ Here $\beta \geq 1$ is a parameter that is controlled by the potential correlation length $d$, $$\beta=\sqrt{1+\frac{d^2}{\lambda^2}} \,.\[beta\]$$ For finite and fixed values $d,a$, in the thermodynamic limit, $\tilde{B}(N)$ and $\tilde{C}(N)$ increase as $\sqrt{N}$ to infinity.
A matrix for QHSs with the statistical properties described above is denoted as ’random Landau matrix’ (RLM) (see [@H94]). Our CRBM simulates the RLM inasmuch as it has the same qualitative features of a finite band width $B$ and a correlation length, being typically $C/2$. The quantitative differences are that, in the CRBM, the matrices have a sharp band width $B$, instead of a Gaussian band width $\tilde{B}$, and that the correlation function is a triangle with a half-width $C/2$, instead of a Gaussian with a half-width $\tilde{C}$. In the thermodynamic limit, however, we expect these differences to be insignificant for the statistics of eigenvalues and eigenvectors, provided the parameters $B(N)$ and $C(N)$ scale in the same way with $N$ as the parameters $\tilde{B}(N)$ and $\tilde{C}(N)$, respectively. Note, however, that in the RLM the ratio $\tilde{C}(N)/\tilde{B}(N)= \beta^2$ is bounded from below by 1, while in the CRBM model we are free to choose any values for $B(N), C(N)$.

Multifractality and Spatial Correlations
========================================

To study the LD properties of matrix ensembles numerically, one can follow a number of different strategies. The most efficient is to analyze only the eigenvalue statistics. Although the eigenvalues encode most of the relevant information about LD properties, the statistics of wavefunction amplitudes is more direct. Localization, delocalization and even criticality of states can be qualitatively distinguished already by inspection of plots of the squared amplitudes of wavefunctions (see e.g. Fig. 7). Critical states $\psi(\r)$ are characterized by a multifractal distribution of their squared amplitudes ${\rm prob}(\r)\equiv |\psi(\r)|^2$. This multifractal spectrum becomes independent of system size and is universal for all of the critical states in the thermodynamic limit (for a review see [@JanR94]) or, more generally, follows a universal distribution (see [@Par99]).
In particular, the geometric mean is a convenient measure of a typical probability and scales as $${\rm prob}_{\rm typ} = \exp\left(\BRA \ln {\rm prob}(\r)\KET\right)\sim L^{-\alpha_0} \,,$$ where the deviation $\alpha_0-d\geq 0$ of the fractal dimension $\alpha_0$ from the Euclidean dimension $d$ signals multifractality and is the most sensitive critical exponent of the LD transition. Although a critical state is extended all over the system, it fluctuates strongly and has large regions of low probability, which results in the stronger decay of typical amplitudes as compared to homogeneously extended states. A quantity that is closely related to $\alpha_0$ is the exponent $\eta$ of long ranged spatial correlations $\BRA {\rm prob}(r)\, {\rm prob}(0)\KET \sim r^{-\eta}$ [@Weg80; @ChaDa88]. It fulfills scaling relations to the fractal dimension of the second moment of ${\rm prob}(\r)$ [@PoJa], and also to the compressibility of the eigenvalue spectrum [@KraLerCha] (see also [@Poly]). As a crude estimate (based on a log-normal approximation [@PoJa] to the distribution of ${\rm prob}$) one has $\eta\approx 2(\alpha_0-d)$. In this work we focus on wavefunction statistics and the determination of $\alpha_0$. One should, however, be careful when drawing conclusions from the calculation of fractal exponents for finite matrix sizes. Such calculations should be assisted by inspection of states, and one should study the dependence on the matrix size $N$. For example, states with localization lengths that are small, but not very small compared to the system size, tend to produce larger values of $\alpha_0$ because parts of the wave function have low amplitudes. This can be seen easily in plots of the corresponding squared amplitudes. From the linear regression procedure that allows one to determine $\alpha_0$, one cannot distinguish such behavior from true multifractality, as long as the system sizes cannot be made much larger than the localization length.
Furthermore, one should distinguish between spatial, energy, and ensemble averages. In practice, we first perform the spatial average for a fixed wavefunction to determine the exponent $\alpha_0$ for a finite matrix size $N$ by the box-counting method (see e.g. [@PoJa]). This can be done for many states in the critical region (which has finite width for finite $N$) and we average over these states. Finally, an average over different realizations (ensemble average) can be performed. It is not obvious that the order of these different procedures commutes, since the exponent fluctuates from state to state. The question about the self-averaging of the fractal exponents was recently addressed in [@Par99], and it was claimed that exponents follow a universal scale independent distribution function in the thermodynamic limit, rather than being self-averaging. We do not investigate this question here. For our finite systems the exponents are fluctuating anyway, and we consider averages (over typically 100 states) as explained above. For a number of different models of QHSs the exponent $\alpha_0$ has been determined, e.g. $\alpha_0=2.28 \pm 0.02$ in [@Pra96]. Other authors (e.g. [@ALFA0]) find values between $2.27\pm 0.1$ and $\alpha_0=2.29 \pm 0.02$. The largest system sizes studied were about $L\approx 200 \lambda$, leading to matrix dimensions $N$ of about $10^4$. For the sake of a direct comparison of wavefunction plots between our CRBM models and the RLM, we took the 2D Landau representation of Eq. (\[2.1\]) with a Landau counting for an aspect ratio $a=1$. The eigenvalues and eigenstates were calculated numerically, exploiting the band structure of the matrices. To be as close as possible to a real QHS with an aspect ratio $a\approx 1$ and a short ranged random potential, we took $B(N)=\tilde{B}(N,a=1,\beta=1)= \sqrt{N/\pi}$ and $C(N)=2\tilde{C}(N,a=1,\beta=1)=2\sqrt{N/\pi}$ (e.g. the largest matrices had parameters $N=6400$, $B=45$ and $C=90$).
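The box-counting determination of $\alpha_0$ described above can be sketched as follows. This is a minimal sketch of our own, not the authors' code; it assumes a normalized $|\psi|^2$ on a square grid with nonzero box probabilities:

```python
import numpy as np

def alpha0_box_counting(prob, sizes):
    """Estimate alpha_0 for a normalized |psi|^2 on an L x L grid.

    For each box size l, coarse-grain prob into l x l boxes, take the
    mean of ln(mu_box) (the 'typical' box probability), and fit
    <ln mu> ~ alpha_0 * ln(l/L) by linear regression.
    """
    L = prob.shape[0]
    xs, ys = [], []
    for l in sizes:
        n = L // l
        boxed = prob[:n * l, :n * l].reshape(n, l, n, l).sum(axis=(1, 3))
        xs.append(np.log(l / L))
        ys.append(np.mean(np.log(boxed)))
    slope, _ = np.polyfit(xs, ys, 1)
    return slope

# Sanity check on a homogeneously extended state: alpha_0 -> d = 2.
L = 128
uniform = np.full((L, L), 1.0 / L**2)
a0 = alpha0_box_counting(uniform, sizes=[2, 4, 8, 16, 32])  # close to 2
```

For a multifractal state the same fit yields $\alpha_0 > 2$; for the uniform state above it returns the Euclidean dimension exactly.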
We refer to this choice of parameters as the ”standard quantum Hall case”. Our findings for the standard quantum Hall case of the CRBM model can be summarized as follows. Almost all states are localized (see Fig. 7a). Only those around the energy band center $E=0$ are multifractally extended (see Fig. 7b). For finite $N$ there is a small energy band of extended states with localization length $\xi \geq L$. The energy band width of extended states, $\Delta_c$, shrinks with increasing $N$ with a critical exponent related to the divergence of the localization length at the band center. To determine this critical exponent it would be sufficient to determine $\Delta_c$ as a function of system size. However, $\Delta_c$ can be defined only by considering an appropriate scaling variable, e.g. the participation ratio, that increases above some threshold value when the states become extended. In finite systems scaling variables are strongly fluctuating and one has to consider distribution functions and/or appropriate typical values. We did not try to calculate this exponent precisely, but only convinced ourselves that the typical number of clearly extended states was comparable to that of realistic quantum Hall systems with $\tilde{B}(N)=B(N)$, $\tilde{C}(N)=C(N)/2$. The fluctuation of $\alpha_0$ determined by the box-counting for individual states can be seen in Fig. 10. The average over 100 different extended states of a $N=6400$ standard quantum Hall case yields $\alpha_0=2.26\pm 0.02$. This value is close to the value $\alpha_0=2.28\pm 0.02$ obtained by the same averaging procedure for an original quantum Hall system in [@Pra96]. We therefore conclude that the CRBM shows indeed a LD transition reminiscent of quantum Hall systems.
It seems likely that, in the thermodynamic limit, the critical behavior of CRBMs in the standard quantum Hall class is actually identical to that of the quantum Hall universality class, because the essential $N$ dependence of the relevant parameters, band width $B(N)\sim \sqrt{N}$ and correlation parameter $C(N)\sim \sqrt{N}$, is identical to that of realistic QHSs in Landau representation. So far the comparison of multifractal exponents was based on the wave function statistics without any reference to spatial correlations. Therefore, a more ambitious comparison between the CRBM and a QHS concerns the critical exponents of spatial correlations and their scaling relations to the multifractal exponents. In a multifractal state the $q$-correlator $\BRA {\rm prob}^q(r)\, {\rm prob}^q(0)\KET \sim r^{-x(q)}$ has the fractal dimension $$x(q)=2\Delta(q)-\Delta(2q) \,,\[scalrel\]$$ where $\Delta(q)$ are the usual fractal exponents of the $q$-moments $\BRA {\rm prob}^q(\r)\KET \sim L^{-\Delta(q)}$ (for a review see [@J98]). We find for the spatial correlations of the standard quantum Hall case the scaling exponents shown in Fig. 8. They are compared with the data that follow from the spectrum $\Delta(q)$ and Eq. (\[scalrel\]). The spectrum $\Delta(q)$ was calculated by the box-counting method. They satisfy, within the numerical uncertainties, the scaling relation Eq. (\[scalrel\]). Furthermore, the exponents are close to their values for critical states in realistic quantum Hall systems [@Pra96], e.g. the exponent of ’anomalous diffusion’ [@ChaDa88] is $x(1)=0.4\pm 0.1$. We summarize this section by stating that our multifractal analysis gives a number of clear indications that correlated random band matrices, in the ”standard quantum Hall” case, are true representatives of the quantum Hall universality class.
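As a side remark of ours (using only quantities already defined above), Eq. (\[scalrel\]) connects directly to the correlation exponent $\eta$ of the previous section. Normalization of ${\rm prob}(\r)$ over the $L^d$ sites gives $\BRA {\rm prob}(\r)\KET \sim L^{-d}$, i.e. $\Delta(1)=d$, while $\BRA {\rm prob}^2(\r)\KET \sim L^{-d-D_2}$ defines the correlation dimension $D_2$, i.e. $\Delta(2)=d+D_2$. Hence $$x(1)=2\Delta(1)-\Delta(2)=d-D_2=\eta \,,$$ so the ’anomalous diffusion’ exponent $x(1)=0.4\pm 0.1$ quoted above is the correlation exponent $\eta$ itself, compatible within the error bars with the crude log-normal estimate $\eta\approx 2(\alpha_0-d)\approx 0.5$.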
We close this section with an observation not directly relevant for the questions addressed in this paper, but which may be relevant for those readers who like to perform numerical calculations with the CRBM model. In realistic QHSs with an aspect ratio $a\not=1$ one observes that a number of the multifractal states tend to have an orientation along the direction of smaller width. This orientation effect has no influence on the asymptotic statistical properties of the wave function amplitudes as $N$ scales to $\infty$. Only corrections to scaling due to finite system sizes can be different for different aspect ratios. However, an aspect ratio $a\not=1$ will influence scaling variables like the Thouless sensitivity $g_{\rm Th}=\delta \varepsilon/\Delta$. It measures the change in energy $\delta \varepsilon$ due to a change from periodic to antiperiodic boundary conditions in a given direction, relative to the mean level spacing $\Delta$. It is exponentially small for localized states and typically of order 1 for critically extended states. We calculated this quantity, which shows strong mesoscopic fluctuations (variance $\sim$ mean), for a realistic quantum Hall system and found that the unique maximum of its typical values at the band center, for $a=1$, splits up into two maxima, for $a\not=1$, symmetric around the band center. This behavior is related to the fact that those wave functions that start to be extended in the direction of smaller width are more sensitive to changes in the boundary condition than those that have already huge localization lengths and are uniformly extended in both directions. In the CRBM model we observe a similar phenomenon. For the standard quantum Hall case we actually found a tendency for an orientation into the $y$-direction (with Landau states as defined in Eq. (\[2.1\]) and Landau counting for $a=1$).
This behavior changes to an orientation in the opposite direction upon increasing $C(N)$ by a factor of ${\cal O}(1)$, keeping $B(N)$ fixed. In contrast to a realistic QHS, where we can calculate $\tilde{B}, \tilde{C}$ for a truly symmetric situation, $a=1$, in the CRBM model we do not know a priori if the choice $C/2=B$ is appropriate to simulate a truly symmetric situation, $a=1$. Because the identification of the half-width value of a triangular correlation function with the half-width of a Gaussian is not strict, taking a factor of order unity between them is equally well justified. The same ambiguity is present in the identification of the band width $B$ with $\tilde{B}$. Any change in $B$, $C$ by a factor of order unity can therefore lead to the orientation effect. As in realistic quantum Hall systems, we also found the splitting of the maximum in Thouless sensitivities in the standard quantum Hall case. This splitting effect may be further investigated.

Tuning the Correlation Parameter $C(N)$
=======================================

When the correlation parameter is fixed to a constant $C(N)\equiv C_0$, the CRBM will behave like an ordinary random band matrix when $N\gg C_0$. This case is very well understood (see [@MF94; @M99]) and one knows that a crossover from localization to Wigner-Dyson delocalization takes place as the band width $B$ is varied. The localization length in the 1D interpretation is $\xi_{\rm 1D}= c B^2$ ($c$ a constant of order unity). Except for the far energy band tails, where states are more strongly localized, the localization behavior is almost uniform over the energy band. It is worth noting that the amplitudes of wavefunctions within the central region (where the amplitudes are not exponentially small) are strongly fluctuating, but they are [*not*]{} multifractal in the limit of large $B(N)$. The entire distribution of amplitudes is, asymptotically in $B$, fixed by the value of the ratio $g=\xi_{\rm 1D}/N=c B^2(N)/N$.
This ratio is the relevant scaling parameter and is denoted as ”conductance”. As we have seen in the previous section, the behavior of the CRBM changes drastically when the correlation parameter increases sufficiently fast with $N$. In the standard quantum Hall case, a LD transition takes place within the energy band. To get more insight into the role of the correlation parameter we therefore studied two extreme cases: (A) $C(N)=1$, and (B) $C(N)=N$. In both cases we kept $B(N)\sim \sqrt{N}$ as in the standard quantum Hall case. Situation (A) corresponds to the usual uncorrelated random band matrix models with a large localization length of $\xi_{\rm 1D}\sim B^2 \sim L$ in the 1D interpretation ($\xi_{\rm Q1D}\sim B \sim L'$ in the Q1D interpretation) and a constant ’conductance’ of order $1$, $g=\xi_{\rm 1D}/L=\xi_{\rm Q1D}/L'={\rm const}$. This model has [*no*]{} interpretation as a QHS, since the ratio $C(N)/B(N)\sim N^{-1/2} \ll 1$. For better comparison we used the same Landau representation as before and performed a multifractal analysis of the extended states by the same box-counting method as in the standard quantum Hall case. Our findings in situation (A) can be summarized as follows. All states behave similarly within the band (except for those in the far tails). The states are not uniformly extended, but are confined to strips with a width of about half the system size (see Fig. 9). Within that strip the states are extended and they fluctuate strongly in a ’grassy’ way. They do not show the self-similar regions of low amplitude like typical multifractal states. This behavior is compatible with the 1D (or quasi-1D) interpretation of the uncorrelated random band matrix with a localization length of the order of $L$ ($L'$) and a conductance of order unity. We calculated, for $N=6400, B=45$, the exponent $\alpha_0^{[2D/C=1]} \approx 2.14$ (see Fig. 10). This value must be taken with care, as the states were not extended over the full system.
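The role of the scaling parameter $g=cB^2/N$ in the uncorrelated $C=1$ case can be illustrated with participation ratios. This is a rough sketch of our own (not the paper's code) using a real symmetric variant for simplicity; the function name and sizes are ours:

```python
import numpy as np

def mean_pr(N, B, seed=0, k=20):
    """Mean participation ratio PR = 1/sum |psi|^4 of the k
    band-center eigenstates of an uncorrelated (C = 1) real
    symmetric random band matrix with band width B."""
    rng = np.random.default_rng(seed)
    H = np.zeros((N, N))
    for m in range(B + 1):                      # fill the 2B+1 diagonals
        d = rng.uniform(-1, 1, N - m)
        H += np.diag(d, m) + (np.diag(d, -m) if m else 0)
    E, V = np.linalg.eigh(H)
    mid = np.argsort(np.abs(E))[:k]             # states closest to E = 0
    p = V[:, mid] ** 2
    return float(np.mean(1.0 / np.sum(p ** 2, axis=0)))

# g = B^2/N small -> localized (small PR); g large -> extended (large PR)
pr_loc = mean_pr(512, B=3)    # xi ~ B^2 << N
pr_ext = mean_pr(512, B=45)   # xi ~ B^2 >> N
```

For fixed $N$, sweeping $B$ traces the crossover from Poisson-like localized states to Wigner-Dyson-like extended states as a function of $B^2/N$.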
They are localized to an area of about half the system size. Thus, the regions of exponentially small amplitudes outside the localization center lead to values $\alpha_0> 2$. Taking amplitudes from only the localization center reduces the average of $\alpha_0$, but fluctuations from state to state are strong. We therefore expect that $\alpha_0$, measured in the localization center, will slowly converge to $\alpha_0=2$ as $B(N)$ increases further with $N$. Situation (B) deviates strongly from the usual uncorrelated random band matrix models. The elements on each (neben)diagonal are constant, but uncorrelated for distinct (neben)diagonals. We denote them by $h_m$, where $m=0$ labels the main diagonal and positive (negative) $m$ label the upper right (lower left) lying nebendiagonals. Had we set $N=\infty$ in the first place, we could solve the eigenvalue problem by Fourier transformation. Each hopping event over a fixed distance would be translational invariant. Thus, the eigenstates, for $N=\infty$, are plane waves $\psi_q(l)=e^{iql}$, where $q$ is a quantum wave number that can take any real value. The corresponding eigenvalue is $$E_q=\sum_{m=-\infty}^{\infty} h_m\, e^{iqm}=h_0 + 2\,{\rm Re}\sum_{m=1}^B h_m\, e^{iqm} \,.\[5.5\]$$ In Landau representation the plane waves $\psi_q(l)$ transform into wave functions $\psi_q(x,y)$ that are plane waves in $x$ direction, centered at a center coordinate $Y_q=-\lambda q$, and have a width of a magnetic length in $y$ direction. For any finite $N$, however, such a solution is not possible, unless periodic boundary conditions are implemented in the site representation. To implement them into our band matrix models we have to add $\sim B^2$ matrix elements in the upper right (and lower left) corners of the matrix. This would violate the band structure. We see that, for any finite $N$, the correlated band matrix breaks the translational invariance of hopping events, and it is not obvious that the states restore this symmetry when $N$ goes to infinity.
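Without periodic boundary conditions the matrix is not circulant, which is precisely the point made above. As an illustration of our own (names and sizes hypothetical), the dispersion Eq. (\[5.5\]) can be verified numerically in the periodic case, where adding the $\sim B^2$ corner elements makes the matrix an exactly solvable Hermitian circulant:

```python
import numpy as np

def dispersion(h, N):
    """E_q of Eq. (5.5) for q = 2*pi*j/N, j = 0..N-1, with constant
    (neben)diagonal amplitudes h[0..B]."""
    B = len(h) - 1
    q = 2 * np.pi * np.arange(N) / N
    E = np.full(N, float(h[0].real))
    for m in range(1, B + 1):
        E += 2 * (h[m] * np.exp(1j * q * m)).real
    return E

rng = np.random.default_rng(7)
B, N = 4, 32
h = rng.uniform(-1, 1, B + 1) + 1j * rng.uniform(-1, 1, B + 1)
h[0] = h[0].real                      # the diagonal must be real

# Hermitian circulant: same hoppings, plus the corner wrap-around elements
A = np.zeros((N, N), dtype=complex)
for m in range(1, B + 1):
    A[np.arange(N), (np.arange(N) + m) % N] = h[m]
H = A + A.conj().T + h[0].real * np.eye(N)

exact = np.sort(np.linalg.eigvalsh(H))
formula = np.sort(dispersion(h, N))   # agrees with `exact`
```

The agreement holds because a Hermitian circulant is diagonalized exactly by the plane waves $\psi_q(l)=e^{iql}$; omitting the corner elements destroys this, as discussed above.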
Actually, our finite $N$ results indicate that the states will not be plane waves in the center of the band (see Fig. 11). Furthermore, a simple perturbative treatment shows that the omission of the $\sim B^2$ elements in the corners cannot be neglected in the limit $N\to \infty$. The CRBM in situation (B) also allows for an interpretation as a quantum Hall system, since $C(N)/B(N)\sim \sqrt{N} > 1$. As follows from Eqs. (\[bandwidth\])–(\[beta\]), the potential correlation length is $d \sim N^{1/4}$ and the aspect ratio is large, $a\sim \sqrt{N}$. This translates to the scaling with system size $L$ as $$d\sim L \sim \lambda\, N^{1/4} \,, \qquad L_y=aL \sim L^3/\lambda^2 \,.$$ The CRBM in situation (B) thus represents a long quantum Hall strip where $L_y/L \sim (L/\lambda)^2$ and the random potential can be thought of as being smooth over a distance of the width $L$. With periodic boundary conditions in $x$-direction one would again conclude that eigenstates are plane waves in $x$-direction, labeled by $N$ different center coordinates $Y_q$ ($q$ is an integer times $2\pi/L$) in $y$-direction, and the eigenvalues $E_q$ would be determined by the value of the random potential at center coordinate $Y_q$. This scenario is also consistent with Eq. (\[5.5\]), because the Fourier transform of the random potential $V$ at $Y_q$ yields the matrix elements $h_m$ (see also Eq. (\[HKL\])). In the absence of periodic boundary conditions the situation changes. For energies far from the band center one expects that the corresponding eigenstates are localized on equipotential contours of the random potential and are centered at some value $Y_q$. However, close to the energy band center eigenstates become extended, and one typical eigenstate is shown in Fig. 11. Although this state has a preferred orientation in $x$-direction, it is by no means localized to a small region in $y$-direction. It fluctuates strongly, it has non-vanishing values all over the system, and it also shows large areas of low probability.
Therefore, the multifractal exponent $\alpha_0$ is larger than in the standard quantum Hall situation. Let us try to give a heuristic argument for estimating the value of $\alpha_0$. For that purpose we recall that, quite generally, $\xi_{\rm Q1D}$ is of the order of the number of transverse modes $N_c$ times the relevant scattering length $l$ (for a discussion see e.g. [@J98]). In our situation $N_c=N$ and $l\approx d\sim L$. Therefore, the quasi-1D localization length is estimated to be $\sim L_y^{5/3}$, and it is much larger than $L_y$ [@Note]. We may thus assume that the state is critical and has, in the strip-representation, a value $\alpha_0\approx 2.26$ when the fractal analysis is restricted to sizes much larger than $d\sim L$. Recall that we have chosen the Landau representation corresponding to an aspect ratio $a=1$. Therefore, the value of $\alpha_0$ found by box counting in that representation must be different. The box counting method uses squares of size $l^2$ in the 2D Landau representation with $a=1$. This corresponds to [*rectangular boxes*]{} in the strip-representation, where the length in $y$ direction scales as the third power of the length in $x$-direction. Thus, the ”effective volume” is $l^4$. By this reasoning, $\alpha_0 (C=N) -2$ will be, in the $a=1$ representation, two times larger than in the strip-representation, and we expect to find $\alpha_0^{[C=N]}\approx 2.54$ by box-counting in the 2D Landau representation with $a=1$. Indeed, this estimate is compatible with our findings as displayed in Fig. 10.

Conclusions
===========

We have studied a novel type of matrix model, the correlated random band matrices. We used numerical diagonalization and performed a multifractal analysis to analyze the localization-delocalization properties of such matrix models in the thermodynamic limit of infinite matrix size. The parameters of correlated band matrices are the band width $B$ and the correlation parameter $C$.
We offered three interpretations: (i) independent quantum particles on a one dimensional chain with correlated hopping, (ii) independent quantum particles on a quasi-one-dimensional strip with correlated coupling of channels, and (iii) in some range of its parameters the models resemble two dimensional quantum Hall systems. For $B\sim C\sim \sqrt{N}$ a transition from localized to critical states in the band center occurs, and the corresponding critical exponents are close to those of real quantum Hall systems. Furthermore, we found the following qualitative behavior when keeping the band width $\sim \sqrt{N}$ constant: A reduction of correlations suppresses multifractality (i.e. criticality) at the band center and finally, for $C=1$, the ordinary non-critical random band matrix ensemble is reached which shows localization lengths $\xi\sim B^2$. Increasing correlations beyond $C\sim \sqrt{N}$, the transition to critical states in the band center remains, however their multifractality seems to be more pronounced. The fractal critical exponent for extreme correlations, $C(N)=N$, turned out to be compatible with a heuristic estimate. Therefore, our numerical results suggest that the correlated band matrix models show transitions from localization to critical delocalization on approaching the energy band center, provided the band width scales like $B(N)\sim \sqrt{N}$ and the correlation parameter scales like $C(N)\sim N^t$ with $1/2 \leq t \leq 1$. It should be pointed out that correlations lead to stronger localization off the band center, while they lead to critical delocalization at the band center. We hope that our work initiates more studies on the ensemble of correlated random band matrices with a general behavior of $B(N)\sim N^s$, $C(N)\sim N^t$ where $s,t$ may vary between $0$ and $1$, and to reach solid statements about the localization behavior in the thermodynamic limit. 
We would also like to point out that the “standard quantum Hall case” of the correlated random band matrix models is not only a simple matrix realization for quantum Hall systems, but has a very interesting distinction from other representative models for the quantum Hall universality class (for an overview of such models see [@Zirn99]). The correlated random band matrix does not incorporate any handedness related to the magnetic field. This handedness is essential in all other representative models that allow for the existence of extended states. In the correlated random band matrix model, however, the connection to a quantum Hall system goes via the Landau representation, which takes the handedness into account. Fortunately, the question of localization and delocalization is not restricted to that representation. In the correlated band matrix model the correlation of matrix elements is the key to the quantum Hall transition. It would be very interesting to construct a manageable field theoretic formulation for the correlated random band matrix model. This may be possible when taking advantage of the fact that the correlations are given by constraints which may be included via Lagrange multipliers. [**Acknowledgment.**]{} MJ thanks B. Shapiro for previous collaboration on matrix models related to the quantum Hall effect and for the central idea to construct the ensembles of correlated random band matrices. We thank J. Hajdu, B. Huckestein, F. Izrailev and I. Varga for useful discussions. This research was supported in part by the Sonderforschungsbereich 341 of the DFG and by the MINERVA foundation. [99]{} T. Guhr, A. Mueller-Groeling, and H.A. Weidenmüller, Phys. Rep. [**299**]{}, 189 (1998) M.R. Zirnbauer, J. Math. Phys. [**37**]{}, 4986 (1996); A. Altland, and M.R. Zirnbauer, Phys. Rev. B [**55**]{}, 1142 (1997) B. Kramer, and A. MacKinnon, Rep. Prog. Phys. [**56**]{}, 1469 (1993) M. Janssen, Phys. Reports [**295**]{}, 1 (1998) B. Huckestein, Rev. Mod. Phys.
[**67**]{}, 357 (1995) M. Janssen, O. Viehweger, U. Fastenrath, and J. Hajdu, [*Introduction to the Theory of the Integer Quantum Hall Effect*]{}, VCH-Verlag, Weinheim, 1994 R. Merkt, M. Janssen, and B. Huckestein, Phys. Rev. B [**58**]{}, 4394 (1998) E.B. Bogomolny, C. Gerland, and C. Schmit, Phys. Rev. E [**59**]{}, R1315 (1999), and references therein N. Rosenzweig, and C.E. Porter, Phys. Rev. [**120**]{}, 1698 (1960) M. Janssen, B. Shapiro, and I. Varga, Phys. Rev. Lett. [**81**]{}, C3048 (1998) F.M. Izrailev, Phys. Reports [**196**]{}, 299 (1990) Y.V. Fyodorov, and A.D. Mirlin, Int. J. Mod. Phys. B [**8**]{}, 3795 (1994) A.D. Mirlin, cond-mat/9907126 D.L. Shepelyansky, Phys. Rev. Lett. [**73**]{}, 2607 (1994) K.M. Frahm, Eur. Phys. J. B [**10**]{}, 371 (1999) A.D. Mirlin, Y.V. Fyodorov, F.-M. Dittes, J. Quezada, and T.H. Seligman, Phys. Rev. E [**54**]{}, 3221 (1996) V.E. Kravtsov, and K.A. Muttalib, Phys. Rev. Lett. [**79**]{}, 1913 (1997) I. Varga, and D. Braun, cond-mat/9909285 V.V. Flambaum, and V.V. Sokolov, cond-mat/9904013 S.V. Kravchenko, W.E. Mason, G.E. Bowker, J.E. Furneaux, V.M. Pudalov, and M. D'Iorio, Phys. Rev. B [**51**]{}, 7038 (1995) F.M. Izrailev, and A.A. Krokhin, cond-mat/9812209 K.B. Efetov, [*Supersymmetry in Disorder and Chaos*]{}, Cambridge University Press, New York, 1997 L. Landau, Z. Physik [**64**]{}, 629 (1930) M. Janssen, Int. J. Mod. Phys. [**8**]{}, 943 (1994) D.A. Parshin, and H.R. Schober, cond-mat/9907067 F. Wegner, Z. Physik B [**36**]{}, 209 (1980) J.T. Chalker, and G.J. Daniell, Phys. Rev. Lett. [**61**]{}, 593 (1988) W. Pook, and M. Janssen, Z. Physik B [**82**]{}, 295 (1991) J.T. Chalker, V.E. Kravtsov, and I.V. Lerner, JETP Lett. [**64**]{}, 386 (1996) D.G. Polyakov, Phys. Rev. Lett. [**81**]{}, 4696 (1998) K. Pracz, M. Janssen, and P. Freche, J. Phys. Condensed Matter [**8**]{}, 7147 (1996) R. Klesse, and M. Metzler, Europhys. Lett. [**32**]{}, 229 (1995); B. Huckestein, L. Schweitzer, and B.
Kramer, Surface Science [**263**]{}, 125 (1992) This is in contrast to a quantum Hall strip of width $L$ and much larger length $L_y$, for which the potential correlation length $d < \lambda$. In that case one knows from many numerical simulations that the quasi-1D localization length $\xi_{\rm Q1D}(L)$ is of the order of $L$ for critical states (see e.g. [@H94]). M.R. Zirnbauer, cond-mat/9905054
--- abstract: 'Representation learning focused on disentangling the underlying factors of variation in given data has become an important area of research in machine learning. However, most of the studies in this area have relied on datasets from the computer vision domain and thus, have not been readily extended to music. In this paper, we present a new symbolic music dataset that will help researchers working on disentanglement problems demonstrate the efficacy of their algorithms on diverse domains. This will also provide a means for evaluating algorithms specifically designed for music. To this end, we create a dataset comprising 2-bar monophonic melodies where each melody is the result of a unique combination of nine latent factors that span ordinal, categorical, and binary types. The dataset is large enough ($\approx$ 1.3 million data points) to train and test deep networks for disentanglement learning. In addition, we present benchmarking experiments using popular unsupervised disentanglement algorithms on this dataset and compare the results with those obtained on an image-based dataset.' bibliography: - '2020-ISMIR-dMelodies.bib' title: 'dMelodies: A Music Dataset for Disentanglement Learning' --- Introduction {#sec:intro} ============ Representation learning deals with extracting the underlying factors of variation in a given observation [@bengio_representation_2013]. Learning compact and *disentangled* representations (see   for an illustration) from given data, where important factors of variation are clearly separated, is considered useful for generative modeling and for improving performance on downstream tasks (such as speech recognition, speech synthesis, vision and language generation [@hsu2017unsupervised; @hsu2019disentangling; @kexin2018neural]). Disentangled representations allow a greater degree of interpretability and controllability, especially for content generation, be it language, speech, or music.
In the context of Music Information Retrieval (MIR) and generative music models, learning some form of disentangled representation has been the central idea for a wide variety of tasks such as genre transfer [@brunner_midi-vae_2018], rhythm transfer [@yang2019deep; @jiang2020transformer], timbre synthesis [@luo2019learning], instrument rearrangement [@hung2019musical], manipulating musical attributes [@hadjeres_glsr-vae_2017; @pati19latent-reg], and learning music similarity [@lee2020disentangled]. Consequently, there exists a large body of research in the machine learning community focused on developing algorithms for learning disentangled representations. These span unsupervised [@higgins_beta-vae_2017; @chen_isolating_2018; @kim_disentangling_2018; @kumar_variational_2017], semi-supervised [@kingma2014semi; @siddharth2017learning; @Locatello2020Disentangling] and supervised [@lample_fader_2017; @hadjeres_glsr-vae_2017; @kulkarni_deep_2015; @donahue_semantically_2018] methods. However, a vast majority of these algorithms are designed, developed, tested, and evaluated using data from the image or computer vision domain. The availability of standard image-based datasets such as dSprites [@matthey_dsprites_2017], 3D-Shapes [@burges_3d-shapes_2020], and 3D-Chairs [@aubry_seeing_2014] among others has fostered disentanglement studies in vision. Additionally, having well-defined factors of variation (for instance, size and orientation in dSprites [@matthey_dsprites_2017], pitch and elevation in Cars3D [@reed_deep_2015]) has allowed systematic studies and easy comparison of different algorithms. However, this restricted focus on a single domain raises concerns about the generalization of these methods [@locatello_challenging_2019] and prevents easy adoption into other domains such as music. Research on disentanglement learning in music has often been application-oriented with researchers using their own problem-specific datasets. 
The factors of variation have also been chosen accordingly. To the best of our knowledge, there is no standard dataset for disentanglement learning in music. This has prevented systematic research on understanding disentanglement in the context of music. In this paper, we introduce *dMelodies*, a new dataset of monophonic melodies, specifically intended for disentanglement studies. The dataset is created algorithmically and is based on a simple and yet diverse set of independent latent factors spanning ordinal, categorical and binary attributes. The full dataset contains $\approx 1.3$ million data points which matches the scale of image datasets and should be sufficient to train deep networks. We consider this dataset as the primary contribution of this paper. In addition, we also conduct benchmarking experiments using three popular unsupervised methods for disentanglement learning and present a comparison of the results with the dSprites dataset [@matthey_dsprites_2017]. Our experiments show that disentanglement learning methods do not directly translate between the image and music domains and having a music-focused dataset will be extremely useful to ascertain the generalizability of such methods. The dataset is available online[^1] along with the code to reproduce our benchmarking experiments.[^2] Motivation {#sec:motivation} ========== In representation learning, given an observation $\mathbf{x}$, the task is to learn a representation $r(\mathbf{x})$ which “makes it easier to extract useful information when building classifiers or other predictors” [@bengio_representation_2013]. The fundamental assumption is that any high-dimensional observation $\mathbf{x} \in \mathcal{X}$ (where $\mathcal{X}$ is the data-space) can be decomposed into a semantically meaningful low dimensional latent variable $\mathbf{z} \in \mathcal{Z}$ (where $\mathcal{Z}$ is referred to as the latent space). 
Given a large number of observations in $\mathcal{X}$, the task of disentanglement learning is to estimate this low dimensional latent space $\mathcal{Z}$ by separating out the distinct factors of variation [@bengio_representation_2013]. An ideal disentanglement method ensures that a change to a single underlying factor of variation in the data changes only a single factor in its representation [@locatello_challenging_2019]. From a generative modeling perspective, it is also important to learn the mapping from $\mathcal{Z}$ to $\mathcal{X}$ to enable better control over the generative process. Lack of diversity in disentanglement learning --------------------------------------------- Most state-of-the-art methods for unsupervised disentanglement learning are based on the Variational Auto-Encoder (VAE) [@kingma_auto-encoding_2014] framework. The key idea behind these methods is that factorizing the latent representation to have an aggregated posterior should lead to better disentanglement [@locatello_challenging_2019]. This is achieved using different means, e.g., imposing constraints on the information capacity of the latent space [@higgins_beta-vae_2017; @burgess_understanding_2018; @rubenstein_learning_2018], maximizing the mutual information between a subset of the latent code and the observations [@chen_infogan_2016], and maximizing the independence between the latent variables [@chen_isolating_2018; @kim_disentangling_2018]. However, unsupervised methods for disentanglement learning are sensitive to inductive biases (such as network architectures, hyperparameters, and random seeds) and consequently there is a need to properly evaluate such methods by using datasets from diverse domains [@locatello_challenging_2019].
Apart from unsupervised methods for disentanglement learning, there has also been some research on semi-supervised [@siddharth2017learning; @Locatello2020Disentangling] and supervised [@kulkarni_deep_2015; @lample_fader_2017; @connor2019representing; @engel_latent_2017] learning techniques to manipulate specific attributes in the context of generative models. In these paradigms, a labeled loss is used in addition to the unsupervised loss. Available labels can be utilized in various ways. They can help with disentangling known factors (e.g., digit class in MNIST) from latent factors (e.g., handwriting style) [@bouchacourt_multi-level_2018], or supervising specific latent dimensions to map to specific attributes [@hadjeres_glsr-vae_2017]. However, most of these approaches are evaluated using image domain datasets. Tremendous interest from the machine learning community has led to the creation of benchmarking datasets (albeit image-based) specifically targeted towards disentanglement learning such as dSprites [@matthey_dsprites_2017], 3D-Shapes [@burges_3d-shapes_2020], 3D-chairs [@aubry_seeing_2014], MPI3D [@gondal2019transfer], most of which are artificially generated and have simple factors of variation. While one can argue that artificial datasets do not reflect real-world scenarios, the relative simplicity of these datasets is often desirable since they enable rapid prototyping. Lack of consistency in music-based studies ------------------------------------------ Representation learning has also been explored in the field of MIR. Much like images, learning better representations has been shown to work well for MIR tasks such as composer classification [@bretan15learning; @gururani2019comparison], music tagging [@choi2017transfer], and audio-to-score alignment [@lattner2019learning]. 
The idea of disentanglement has been particularly gaining traction in the context of interactive music generation models [@engel_latent_2017; @brunner_midi-vae_2018; @yang2019deep; @pati19latent-reg]. Disentangling semantically meaningful factors can significantly improve the usefulness of music generation tools. Many researchers have independently tried to tackle the problem of disentanglement in the context of symbolic music by using different musically meaningful attributes such as genre [@brunner_midi-vae_2018], note density [@hadjeres_glsr-vae_2017], rhythm [@yang2019deep], and timbre [@luo2019learning]. However, these methods and techniques have all been evaluated using different datasets, which makes a direct comparison impossible. Part of the reason behind this lack of consistency is the difference in the problems that these methods were looking to address. However, the availability of a common dataset allowing researchers to easily compare algorithms and test their hypotheses will surely aid systematic research. dMelodies Dataset {#sec:design} ================ The primary objective of this work is to create a simple dataset for music disentanglement that can alleviate some of the shortcomings mentioned in : first, researchers interested in disentanglement will have access to more diverse data to evaluate their methods, and second, research on music disentanglement will have the means for conducting systematic, comparable evaluation. This section describes the design choices and the methodology used for creating the proposed *dMelodies* dataset. While core MIR tasks such as music transcription or tagging focus more on the analysis of audio signals, research on generative models for music has focused more on the symbolic domain. Considering most of the interest in disentanglement learning stems from research on generative models, we decided to create this dataset using symbolic music representations.
Design Principles {#sec:design_principles} ----------------- To enable objective evaluation of disentanglement algorithms, one needs to either know the ground-truth values of the underlying factors of variation for each data point, or be able to synthesize the data points based on the attribute values. The dSprites dataset [@matthey_dsprites_2017], for instance, consists of single images of different 2-dimensional shapes with simple attributes specifying the position, scale and orientation of these shapes against a black background. The design of our dataset is loosely based on the dSprites dataset. The following principles were used to finalize other design choices:

- The dataset should have a simple construction with homogeneous data points and intuitive factors of variation. It should allow for easy differentiation between data points and have clearly distinguishable latent factors.
- The factors of variation should be independent, i.e., changing any one factor should not cause changes to other factors. While this is not always true for real-world data, it enables consistent objective evaluation.
- There should be a clear one-to-one mapping between the latent factors and the individual data points. In other words, each unique combination of the factors should result in a unique data point.
- The factors of variation should be diverse. In addition, it would be ideal to have the factors span different types such as discrete, ordinal, categorical and binary.
- Finally, the different combinations of factors should result in a dataset large enough to train deep neural networks. Based on the size of the different image-based datasets [@matthey_dsprites_2017; @liu_deep_2015], we would require a dataset of the order of at least a few hundred thousand data points.

Dataset Construction -------------------- Considering the design principles outlined above, we decided to focus on monophonic pitch sequences.
While there are other options such as polyphonic or multi-instrumental music, the choice of monophonic melodies was to ensure simplicity. Monophonic melodies are a simple form of music uniquely defined by the pitch and duration of their note sequences. The pitches are typically based on the key or scale in which the melody is being played and the rhythm is defined by the onset positions of the notes. Since the set of all possible monophonic melodies is very large and heterogeneous, the following additional constraints were imposed on the melody in order to enforce homogeneity and satisfy the other design principles:

(a) Each melody is based on a scale selected from a finite set of allowed scales. This choice of scale also serves as one of the factors of variation. The melody will also be uniquely defined by the pitch class of the tonic (root pitch) and the octave number.

(b) In order to constrain the space of all possible pitch patterns within a scale, we restrict each melody to be an arpeggio over the standard I-IV-V-I cadence chord pattern. Consequently, each melody consists of 12 notes (3 notes for each of the 4 chords). In order to vary the pitch patterns, the direction of arpeggiation of each chord, i.e. up or down, is used as a latent factor. This choice adds a few binary factors of variation to the dataset.

(c) The melodies are fixed to 2-bar sequences with the 8th note as the minimum note duration. This makes the dataset uniform in terms of sequence lengths of the data points and also helps reduce the complexity of the sequences. 2-bar sequences have been used in other music generation studies as well [@hadjeres_glsr-vae_2017; @roberts_hierarchical_2018].

(d) We use a tokenized data representation such that each melody is a sequence of length 16. If we consider the space of all possible unique rhythms, the number of options will explode to $16 \choose 12$, which will be significantly larger than for the other factors of variation.
Hence, we choose to break the latent factor for rhythm into 2 independent factors: rhythm for bar 1 and bar 2. The rhythm of a melody is based on the metrical onset position of the notes [@toussaint_mathematical_2002]. Consequently, rhythm is dependent on the number of notes. In order to keep rhythm independent from other factors, we constrain each bar to have 6 notes (play 2 chords), thereby obtaining $8 \choose 6$ options for each bar. Based on the above design choices, the dMelodies dataset consists of 2-bar monophonic melodies with 9 factors of variation listed in . The factors of variation were chosen to satisfy the design principles listed in . For instance, while melodic transformations such as repetition, inversion, retrograde would have made more musical sense, they did not allow creation of a large-enough dataset with independent factors of variation. The resulting dataset thus contains simple melodies which do not adequately reflect real-world musical data. A side-effect of this choice of factors is that some of them (such as arpeggiation direction and rhythm) affect only a specific part of the data. Since each unique combination of these factors results in a unique data point we get 1,354,752 unique melodies. shows one such melody from the dataset and its corresponding latent factors.
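The factor cardinalities described above multiply out exactly to the quoted dataset size. As a quick sanity check, the combinatorics can be verified in a few lines of Python (a hypothetical sketch, not taken from the dMelodies codebase):

```python
from math import comb

# Cardinalities of the nine dMelodies factors of variation.
factor_sizes = {
    "tonic": 12,
    "octave": 3,
    "scale": 3,
    "rhythm_bar1": comb(8, 6),  # 28 ways to place 6 note onsets in 8 positions
    "rhythm_bar2": comb(8, 6),  # 28
    "arp_chord1": 2,            # up/down arpeggiation, one factor per chord
    "arp_chord2": 2,
    "arp_chord3": 2,
    "arp_chord4": 2,
}

n_melodies = 1
for size in factor_sizes.values():
    n_melodies *= size
print(n_melodies)  # → 1354752, i.e. the ~1.3 million data points

# Treating rhythm as a single 2-bar factor would instead give
# comb(16, 12) = 1820 options, dwarfing every other factor.
print(comb(16, 12))  # → 1820
```

The second print illustrates why the rhythm factor was split per bar: a single 2-bar rhythm factor would dominate the factor space.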
The dataset is generated using the *music21* [@cuthbert_music21_2010] python package.

| **Factor** | **# Options** | **Notes** |
|---|---|---|
| *Tonic* | 12 | C, C$\#$, D, through B |
| *Octave* | 3 | Octave 4, 5 and 6 |
| *Scale* | 3 | major, harmonic minor, and blues |
| *Rhythm Bar 1* | 28 | $8 \choose 6$, based on onset locations of 6 notes |
| *Rhythm Bar 2* | 28 | $8 \choose 6$, based on onset locations of 6 notes |
| *Arp Chord 1* | 2 | up/down, for Chord 1 |
| *Arp Chord 2* | 2 | up/down, for Chord 2 |
| *Arp Chord 3* | 2 | up/down, for Chord 3 |
| *Arp Chord 4* | 2 | up/down, for Chord 4 |

Benchmarking Experiments {#sec:exp} ======================== In this section, we present benchmarking experiments to demonstrate the performance of some of the existing unsupervised disentanglement algorithms on the proposed dMelodies dataset and contrast the results with those obtained on the image-based dSprites dataset. Experimental Setup ------------------ We consider 3 different disentanglement learning methods: $\beta$-VAE [@higgins_beta-vae_2017], Annealed-VAE [@burgess_understanding_2018], and FactorVAE [@kim_disentangling_2018]. All these methods are based on different regularization terms applied to the VAE loss function. ### Data Representation We use a tokenized data representation [@hadjeres2017deepbach] with the 8th-note as the smallest note duration. Each 8th-note position is encoded with a token corresponding to the note name which starts at that position. A special continuation symbol (‘\_\_’) is used which denotes that the previous note is held. A special token is used for rests. ### Model Architectures Two different VAE architectures are chosen to conduct these experiments. The first architecture (dMelodies-CNN) is based on Convolutional Neural Networks (CNNs) and is similar to those used for several image-based VAEs, except that we use 1-D convolutions.
The second architecture (dMelodies-RNN) is based on a hierarchical recurrent model [@roberts_hierarchical_2018; @pati_learning_2019]. Details of the model architectures are provided in the supplementary material. ### Hyperparameters Each learning method has its own regularizing hyperparameter. For $\beta$-VAE, we use three different values of $\beta \in \left\{ 0.2, 1.0, 4.0 \right\}$. This choice is loosely based on the notion of normalized-$\beta$ [@higgins_beta-vae_2017]. In addition, we force the KL-regularization only when the KL-divergence exceeds a fixed threshold $\tau=50$ [@kingma_improved_2016; @roberts_hierarchical_2018]. For Annealed-VAE, we fix $\gamma=1.0$ and use three different values of capacity, $C \in \left\{ 25.0, 50.0, 75.0 \right\}$. For FactorVAE, we use the Annealed-VAE loss function with a fixed capacity ($C = 50$), and choose three different values for $\gamma \in \left\{ 1, 10, 50 \right\}$. ### Training Specifications For each of the above methods, model, and hyperparameter combination, we train 3 models with different random seeds. To ensure consistency across training, all models are trained with a batch-size of $512$ for $100$ epochs. The ADAM optimizer [@kingma_adam_2015] is used with a fixed learning rate of $10^{-4}$, $\beta _{1}=0.9$, $\beta _{2}=0.999$, and $\epsilon = 10^{-8}$. For $\beta$-VAE and Annealed-VAE, we use 10 warm-up epochs where $\beta=0.0$. After warm-up, the regularization hyperparameter ($\beta$ for $\beta$-VAE and $C$ for Annealed-VAE) is annealed exponentially from $0.0$ to their target values over $100000$ iterations. For FactorVAE, we stick to the original implementation and do not anneal any of the parameters in the loss function. The VAE optimizer is the same as mentioned earlier. The FactorVAE discriminator is optimized using ADAM with a fixed learning rate of $10^{-4}$, $\beta _{1}=0.8$, $\beta _{2}=0.9$, and $\epsilon = 10^{-8}$.
We found that utilizing the original hyperparameters [@kim_disentangling_2018] for this optimizer led to unstable training on dMelodies. For comparison with dSprites, we present the results for all three methods using a CNN-based VAE architecture. The set of hyperparameters and other training configurations was kept the same for the dSprites dataset, except for the FactorVAE, where we use the originally proposed loss function and discriminator optimizer hyperparameters, as the model does not converge otherwise. ### Disentanglement Metrics The following objective metrics for measuring disentanglement are used: *Mutual Information Gap (MIG)* [@chen_isolating_2018], which measures the difference of mutual information between a given latent factor and the top two dimensions of the latent space which share maximum mutual information with the factor, *Modularity* [@ridgeway_learning_2018], which measures if each dimension of the latent space depends on only one latent factor, and *Separated Attribute Predictability (SAP)* [@kumar_variational_2017], which measures the difference in the prediction error of the two most predictive dimensions of the latent space for a given factor. For each metric, the mean across all latent factors is used for aggregation. For consistency, standard implementations of the different metrics are used [@locatello_challenging_2019].
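To make the first of these metrics concrete, MIG can be computed as follows for discrete latent codes and factors (a simplified illustrative sketch; the experiments themselves use the standard implementations cited above):

```python
import math

def entropy(labels):
    """Shannon entropy (nats) of a discrete label sequence."""
    n = len(labels)
    counts = {}
    for v in labels:
        counts[v] = counts.get(v, 0) + 1
    return -sum((c / n) * math.log(c / n) for c in counts.values())

def mutual_information(xs, ys):
    """I(X;Y) = H(X) + H(Y) - H(X,Y) for discrete sequences."""
    return entropy(xs) + entropy(ys) - entropy(list(zip(xs, ys)))

def mig(latents, factors):
    """Mutual Information Gap: for each ground-truth factor, the gap in MI
    between the two most informative latent dimensions, normalized by the
    factor's entropy, averaged over all factors.
    `latents`: list of discrete sequences, one per latent dimension.
    `factors`: list of discrete sequences, one per factor of variation."""
    gaps = []
    for f in factors:
        mis = sorted((mutual_information(z, f) for z in latents), reverse=True)
        gaps.append((mis[0] - mis[1]) / entropy(f))
    return sum(gaps) / len(gaps)

# Toy example: latent dim 0 copies the factor, dim 1 is independent of it.
factor = [0, 1, 0, 1, 0, 1, 0, 1]
z0 = factor[:]                 # perfectly informative dimension
z1 = [0, 0, 1, 1, 0, 0, 1, 1]  # carries no information about the factor
print(mig([z0, z1], [factor]))  # → 1.0
```

A MIG near 1 thus indicates that exactly one latent dimension captures the factor; a MIG near 0 indicates that the information is spread over several dimensions.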
Experimental Results -------------------- ### Disentanglement

![(a) *MIG*](figs/disent_results_MIG.pdf){width="33.00000%"}
![(b) *Modularity*](figs/disent_results_Modularity.pdf){width="33.00000%"}
![(c) *SAP Score*](figs/disent_results_SAP.pdf){width="33.00000%"}

In this experiment, we present the comparative disentanglement performance of the different methods on dMelodies. The result for each method is aggregated across the different hyperparameters and random seeds. shows the results for all three disentanglement metrics. We group the trained models based on the architecture. The results for the dSprites dataset are also shown for comparison. First, we compare the performance of different methods on dMelodies. Annealed-VAE shows better performance for MIG and SAP. These metrics indicate the ability of a method to ensure that each factor of variation is mapped to a single latent dimension. The performance in terms of Modularity is similar across the different methods. High Modularity indicates that each dimension of the latent space maps to only a single factor of variation. For dSprites, FactorVAE seems to be the best method overall across metrics. However, the high variance in the results shows that the choice of random seeds and hyperparameters is probably more important than the disentanglement method itself. This is in line with observations in previous studies [@locatello_challenging_2019]. Second, we observe no significant impact of model architecture on the disentanglement performance.
For both the CNN and the hierarchical RNN-based VAE, the performance of all the different methods on dMelodies is comparable. This might be due to the relatively short sequence lengths used in dMelodies which do not fully utilize the capabilities of the hierarchical-RNN architecture (which has been shown to work well in learning long-term dependencies [@roberts_hierarchical_2018]). On the positive side, this indicates that the dMelodies dataset might be agnostic to the VAE-architecture. Finally, we compare differences in the performance between the two datasets. In terms of MIG and SAP, the performance for dSprites is slightly better (especially for Factor-VAE), while for Modularity, performance across both datasets is comparable. However, once again, the differences are not significant. Looking at the disentanglement metrics alone, one might be tempted to conclude that the different methods are domain invariant. However, as the next experiments will show, there are significant differences. ### Reconstruction Fidelity From a generative modeling standpoint, it is important that along with better disentanglement performance we also retain good reconstruction fidelity. This is measured using the reconstruction accuracy shown in . It is clear that all three methods fail to achieve a consistently good reconstruction accuracy on dMelodies. $\beta$-VAE gets an accuracy $\geq 90\%$ for some hyperparameter values (more on this in ). However, both Annealed-VAE and Factor-VAE struggle to cross a median-accuracy of $40\%$ (which would be unusable from a generative modeling perspective). The performance of the hierarchical RNN-based VAE is slightly better than the CNN-based architecture. In comparison, for dSprites, all three methods are able to consistently achieve better reconstruction accuracies. 
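Reconstruction accuracy here is measured at the token level on the melody sequences. A minimal sketch of such a computation (hypothetical, assuming the tokenized representation described earlier; not the benchmark's actual code):

```python
def reconstruction_accuracy(targets, predictions):
    """Fraction of sequence positions where the predicted token matches the
    target token, averaged over the whole batch.
    `targets`, `predictions`: lists of equal-length token sequences."""
    total = correct = 0
    for t_seq, p_seq in zip(targets, predictions):
        for t, p in zip(t_seq, p_seq):
            total += 1
            correct += (t == p)
    return correct / total

# Toy example with 16-token melodies ('__' is the hold/continuation token).
target = ["C4", "__", "E4", "__", "G4", "__", "rest", "__"] * 2
pred   = ["C4", "__", "E4", "__", "G4", "G4", "rest", "__"] * 2
print(reconstruction_accuracy([target], [pred]))  # → 0.875 (14/16 tokens correct)
```

Under this measure, confusing a held note ('\_\_') with a repeated onset counts as an error, which is why melodies with dense rhythms are harder to reconstruct exactly.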
### Sensitivity to Hyperparameters {#sec:hyper_param}

![(a) $\beta$-VAE: Varying $\beta$](figs/hyperparam_results_beta-VAE.pdf){width="33.00000%"}
![(b) Annealed-VAE: Varying $C$](figs/hyperparam_results_Annealed-VAE.pdf){width="33.00000%"}
![(c) Factor-VAE: Varying $\gamma$](figs/hyperparam_results_Factor-VAE.pdf){width="33.00000%"}

The previous experiments presented aggregated results over the different hyperparameter values for each method. Next, we take a closer look at the individual impact of those hyperparameters, i.e., the effect of changing the hyperparameters on the disentanglement performance (MIG) and the reconstruction accuracy. shows this in the form of scatter plots. The ideal models should lie in the top right corner of the plots (with high values of both reconstruction accuracy and MIG). Models trained on dMelodies are very sensitive to hyperparameter adjustments. This is especially true for reconstruction accuracy. For instance, increasing $\beta$ for the $\beta$-VAE model improves MIG but severely reduces reconstruction performance. For Annealed-VAE and Factor-VAE there is a wider spread in the scatter plots. For Annealed-VAE, having a high capacity $C$ seems to marginally improve reconstruction (especially for the recurrent VAE). For FactorVAE, increasing $\gamma$ leads to a drop in both disentanglement and reconstruction. Contrast this with the scatter plots for dSprites. For all three methods, the hyperparameters seem to only significantly affect the disentanglement performance.
For instance, increasing $\beta$ and $\gamma$ (for $\beta$-VAE and Factor-VAE, respectively) results in a clear improvement in MIG. More importantly, however, there is no adverse impact on the reconstruction accuracy.

### Factor-wise Disentanglement

We also looked at how the individual factors of variation are disentangled. We consider the $\beta$-VAE model for this since it has the highest reconstruction accuracy. shows the factor-wise *MIG* for both the CNN and RNN-based models. Factors corresponding to octave and rhythm are disentangled better. This is consistent with some recent research on disentangling rhythm [@yang2019deep; @jiang2020transformer]. In contrast, the factors corresponding to the arpeggiation direction perform the worst. This might be due to their binary type. Similar analysis for the dSprites dataset reveals better disentanglement for the scale- and position-based factors. Additional results are provided in the supplementary material.

Discussion {#sec:results}
==========

As mentioned in , disentanglement techniques have been shown to be sensitive to the choice of hyperparameters and random seeds [@locatello_challenging_2019]. The results obtained in our benchmarking experiments in the previous section using dMelodies seem to confirm this even further. We find that methods which work well for image-based datasets do not extend directly to the music domain. When moving between domains, not only do we have to tune hyperparameters separately, but the model behavior may also vary significantly when hyperparameters are changed. For instance, reconstruction fidelity is hardly affected by hyperparameter choice in the case of dSprites, while for dMelodies it varies significantly. While sensitivity to hyperparameters is expected in neural networks, this is also one of the main reasons for evaluating methods on more than one dataset, preferably from multiple domains.
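As a reminder of what the swept hyperparameters control, the three objectives differ only in how they constrain the KL term. A schematic sketch with scalar stand-ins for the per-batch loss terms (in practice the Factor-VAE total-correlation term is estimated adversarially with a discriminator; the function and argument names here are illustrative):

```python
def beta_vae_loss(recon, kl, beta):
    # Higgins et al.: beta > 1 strengthens the KL penalty; larger beta tends to
    # improve disentanglement at the cost of reconstruction quality.
    return recon + beta * kl

def annealed_vae_loss(recon, kl, gamma, capacity):
    # Burgess et al. (Annealed-VAE): the KL is pulled toward a target capacity C,
    # which is annealed upward during training.
    return recon + gamma * abs(kl - capacity)

def factor_vae_loss(recon, kl, gamma, total_correlation):
    # Kim & Mnih (Factor-VAE): gamma weights a total-correlation penalty that
    # encourages a factorized aggregate posterior.
    return recon + kl + gamma * total_correlation
```

Sweeping $\beta$, $C$, and $\gamma$ thus trades reconstruction quality against disentanglement, which is exactly the behavior visible in the scatter plots.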
Some aspects of the dataset design, especially the nature of the factors of variation, might have affected our experimental results. While the factors of variation in dSprites are continuous (except the shape attribute), those for dMelodies span different data types (categorical, ordinal and binary). This might make other types of models (such as VQ-VAEs [@oord2017vqvae]) more suitable. Another consideration is that some factors of variation (such as the arpeggiation direction and rhythm) affect only a part of the data. However, the effect of this on the disentanglement performance needs further investigation, since we get good performance for rhythm but poor performance for arpeggiation direction. Unsupervised methods for disentanglement learning have their own limitations, and some degree of supervision might actually be essential [@locatello_challenging_2019]. It is still unclear if it is possible to develop general domain-invariant disentanglement methods. Consequently, supervised and semi-supervised methods have been garnering more attention [@pati19latent-reg; @bouchacourt_multi-level_2018; @hadjeres_glsr-vae_2017; @Locatello2020Disentangling]. The dMelodies dataset can also be used to explore such methods for music-based tasks. There has been some work recently on disentangling musical attributes such as rhythm and melodic contours, which are considered important from an interactive music generation perspective [@pati19latent-reg; @akama_controlling_2019; @yang2019deep]. Apart from the designed latent factors of variation, other low-level musical attributes such as rhythmic complexity and contours can also be computationally extracted using this dataset to meet task-specific requirements.

Conclusion {#sec:conclusion}
==========

This paper addresses the need for more diverse modes of data for studying disentangled representation learning by introducing a new music dataset for the task.
The *dMelodies* dataset comprises more than 1 million data points of 2-bar melodies. The dataset is constructed based on fixed rules that maintain independence between the different factors of variation, thus enabling researchers to use it for studying disentanglement learning. Benchmarking experiments conducted using popular disentanglement learning methods show that existing methods do not achieve performance comparable to those obtained on an analogous image-based dataset. This showcases the need for further research on domain-invariant algorithms for disentanglement learning.

Acknowledgment
==============

The authors would like to thank Nvidia Corporation for their donation of a Titan V awarded as part of the GPU (Graphics Processing Unit) grant program, which was used for running several experiments pertaining to this research.

[^1]: https://github.com/ashispati/dmelodies\_dataset

[^2]: https://github.com/ashispati/dmelodies\_benchmarking
---
author:
- 'Hiroshi <span style="font-variant:small-caps;">Kunitomo</span>[^1]'
title: |
  Space-time supersymmetry in\
  WZW-like open superstring field theory
---

Introduction
============

Construction of a complete action including both the Neveu-Schwarz (NS) sector representing space-time bosons and the Ramond sector representing space-time fermions is a long-standing problem in superstring field theory. While the action for the NS sector was constructed based on two different formulations, the WZW-like formulation[@Berkovits:1995ab] and the homotopy-algebra-based formulation,[@Erler:2013xta] it had been difficult to incorporate the Ramond sector in a Lorentz-covariant way. Only recently, however, a complete action has been constructed for the WZW-like formulation[@Kunitomo:2015usa], and soon afterwards for the homotopy-algebra-based formulation.[@Erler:2016ybs] Interestingly enough, in these complete actions, the string field in each sector appears quite asymmetrically. In the WZW-like formulation, for example, the string field $\Phi$ in the NS sector is in the large Hilbert space, characterizing the WZW-like formulation, but the string field $\Psi$ in the Ramond sector is in the restricted small Hilbert space defined using the picture-changing operators. Then the question is how space-time supersymmetry is realized between these two apparently asymmetric sectors. The purpose of this paper is to answer this question by explicitly constructing the space-time supersymmetry transformation in the WZW-like formulation.[^2] In the first-quantized formulation, space-time supersymmetry is generated by the supercharge obtained by using the covariant fermion emission vertex,[@Friedan:1985ge] which interchanges each physical state in the NS sector with that in the Ramond sector.
Therefore, it is natural to expect first that the space-time supersymmetry transformation in superstring field theory is realized as a linear transformation using this first-quantized supercharge.[@Witten:1986qs] We will see, however, that this expectation is true only for the free theory, while the action including the interaction terms is not invariant under this linear transformation. We modify it so as to be a symmetry of the complete action, and then verify whether the constructed nonlinear transformation satisfies the supersymmetry algebra. We find that the supersymmetry algebra holds, up to the equations of motion and gauge transformation, except for an extra nonlinear transformation. It is shown, however, that this extra transformation can also be absorbed into the gauge transformation up to the equations of motion at the linearized level. Under the assumption that the asymptotic condition holds also for the string field theory, this implies, at least perturbatively, that the constructed transformation acts as space-time supersymmetry on the physical states defined by the asymptotic string fields. This guarantees that supersymmetry is realized on the physical S-matrix.[^3] The rest of the paper is organized as follows. In section 2, we summarize the known results on the complete action for the WZW-like open superstring field theory. In addition, restricting the background to the flat space-time, we introduce the GSO projection operator, which is essential to make the physical spectrum supersymmetric. For later use, some basic ingredients, such as the Maurer-Cartan equations and the covariant derivatives, are extended to those based on general derivations of the string product which can be noncommutative. After this preparation, the space-time supersymmetry transformation is constructed in section 3. Using the first-quantized supercharge, a linear transformation is first defined so as to be consistent with the restriction in the Ramond sector.
Since this transformation is only a symmetry of the free theory, we first construct the nonlinear transformation perturbatively by requiring it to keep the complete action invariant. Based on some lower-order results, we suppose the full nonlinear transformation $\delta_{\mathcal{S}}$ in a closed form, and prove that it is actually a symmetry of the action. In section 4, the commutator of two transformations is calculated explicitly. We show that it provides the space-time translation $\delta_p$, up to the equations of motion and gauge transformation, except for a nonlinear transformation $\delta_{\tilde{p}}$ that can be absorbed into the gauge transformation only at the linearized level. Thus the supersymmetry algebra holds only on the physical states, and hence the physical S-matrix, defined by the asymptotic string fields under appropriate assumptions on asymptotic properties of the string fields. Although this extra symmetry is unphysical in this sense, it is nontrivial in the total Hilbert space including unphysical degrees of freedom. It produces further unphysical symmetries by taking commutators with supersymmetries or themselves successively. We have a sequence of unphysical symmetries corresponding to the first-quantized charges obtained by taking successive commutators of the supercharge and the unconventional translation charge with picture number $p=-1$. Section 5 is devoted to summary and discussion, and two appendices are added. In Appendix A, we summarize the conventions for the $SO(1,9)$ spinor and the Ramond ground states, which are needed to identify the physical spectrum although they do not appear in this paper explicitly. The triviality of the extra transformation in the Ramond sector, which remains to be shown, is given in Appendix B. Further nonlinear transformations obtained by taking the commutator of two unphysical transformations, $[\delta_{\tilde{p}_1},\delta_{\tilde{p}_2}]$ are also discussed. 
All the extra symmetries obtained by taking commutators with $\delta_{\mathcal{S}}$ or $\delta_{\tilde{p}}$ repeatedly are shown to be unphysical. Complete gauge-invariant action =============================== On the basis of the Ramond-Neveu-Schwarz (RNS) formulation of superstring theory, an open superstring field is a state in the conformal field theory (CFT) consisting of the matter sector, the reparametrization ghost sector, and the superconformal ghost sector. We assume in this paper that the background space-time is ten-dimensional Minkowski space, for which the matter sector is described by string coordinates $X^\mu(z)$ and their partners $\psi^\mu(z)$ $(\mu=0,1,\cdots,9)$. The reparametrization ghost sector and superconformal ghost sector are described by a fermion pair $(b(z),c(z))$ and a boson pair $(\beta(z),\gamma(z))$, respectively. The superconformal ghost sector has another description by a fermion pair ($\xi(z)$, $\eta(z)$) and a chiral boson $\phi(z)$ [@Friedan:1985ge]. The two descriptions are related through the bosonization relation: $$\beta(z)\ =\ \partial\xi(z) e^{-\phi(z)}\,,\qquad \gamma(z)\ =\ e^{\phi(z)} \eta(z)\,.$$ The Hilbert space for the $\beta\gamma$ system is called the small Hilbert space and that for the $\xi\eta\phi$ system is called the large Hilbert space. The theory has two sectors depending on the boundary condition on the world-sheet fermions $\psi^\mu$, $\beta$, and $\gamma$. The sector in which the world-sheet fermion obeys an antiperiodic boundary condition is known as the Neveu-Schwarz (NS) sector, and describes the space-time bosons. The other sector in which the world-sheet fermion obeys a periodic boundary condition is known as the Ramond (R) sector, and describes the space-time fermions. We can obtain the space-time supersymmetric theory by suitably combining two sectors[@Gliozzi:1976qd]. 
String fields and constraints ----------------------------- In the WZW-like open superstring field theory, we use the string field $\Phi$ in the large Hilbert space for the NS sector. It is Grassmann even, and has ghost number 0 and picture number 0. Here we further impose the BRST-invariant GSO projection[^4] $$\Phi\ =\ \frac{1}{2}(1+(-1)^{G_{NS}})\, \Phi\,, $$ where $G_{NS}$ is defined by $$\begin{aligned} G_{NS}\ =&\ \sum_{r>0}(\psi^\mu_{-r}\psi_{r\mu}-\gamma_{-r}\beta_r+\beta_{-r}\gamma_r) - 1 \nonumber\\ \equiv&\ \sum_{r>0}\psi^\mu_{-r}\psi_{r\mu} + p_\phi\qquad (\textrm{mod}\ 2)\,,\end{aligned}$$ with $p_\phi=-\oint\frac{dz}{2\pi i}\partial\phi(z)$. This is necessary to remove the tachyon and makes the spectrum supersymmetric[@Gliozzi:1976qd]. For the Ramond sector, we use the string field $\Psi$ constrained on the restricted small Hilbert space satisfying the conditions[@Kunitomo:2015usa] $$\eta\Psi\ =\ 0\,,\qquad XY\Psi\ =\ \Psi\,, \label{R constraints}$$ where $X$ and $Y$ are the picture-changing operator and its inverse acting on the states in the small Hilbert space with picture numbers $-3/2$ and $-1/2$, respectively. They are defined by $$X\ =\ -\delta(\beta_0)G_0 + \delta'(\beta_0)b_0\,,\qquad Y\ =\ -c_0\delta'(\gamma_0)\,, \label{PCO}$$and satisfy $$XYX\ =\ X\,,\qquad YXY\ =\ Y\,, \qquad [Q,\,X]\ =\ 0\,. \label{xyx}$$ The string field $\Psi$ is Grassmann odd, and has ghost number $1$ and picture number $-1/2$. The picture-changing operator $X$ is BRST exact in the large Hilbert space, and can be written using the Heaviside step function as $ X=\{Q,\Theta(\beta_0)\}$. Here, instead of $\Theta(\beta_0)$, we introduce $$\Xi\ =\ \xi_0 + (\Theta(\beta_0)\eta\xi_0 - \xi_0)P_{-3/2} + (\xi_0\eta\Theta(\beta_0) - \xi_0)P_{-1/2}\,,$$ and anew define $$X\ =\ \{Q,\ \Xi\}\,. 
\label{X in Ramond}$$ This is identical to the one defined in (\[PCO\]) when it acts on the states in the small Hilbert space with picture number $-3/2$, but can act on the states in the large Hilbert space without the restriction on the picture number.[@Erler:2016ybs] The operator $\Xi$ is nilpotent ($\Xi^2=0$) and satisfies $\{\eta, \Xi\}=1$ [@Erler:2016ybs], from which, with $\{Q,\eta\}=0$, we can conclude $$\begin{aligned} [\eta, X]\ =&\ [\eta,\{Q,\Xi\}] \nonumber\\ =&\ -[Q,\{\Xi,\eta\}]-[\Xi,\{\eta,Q\}]\ =\ 0\,.\end{aligned}$$ We impose the BRST-invariant GSO projection as $$\Psi\ =\ \frac{1}{2}(1+\hat{\Gamma}_{11}(-1)^{G_R})\,\Psi\,, \label{GSO Ramond} $$ where $G_R$ is given by $$\begin{aligned} G_R\ =&\ \sum_{n>0}(\psi^\mu_{-n}\psi_{n\mu}-\gamma_{-n}\beta_n+\beta_{-n}\gamma_n) - \gamma_0\beta_0 \nonumber\\ \equiv&\ \sum_{n>0}\psi^\mu_{-n}\psi_{n\mu} + p_\phi + \frac{1}{2}\qquad (\textrm{mod}\ 2)\,.\end{aligned}$$ The gamma matrix $\hat{\Gamma}_{11}$ is defined by using the zero-modes of the world-sheet fermion $\psi^\mu(z)$ as $$\hat{\Gamma}_{11}\ =\ 2^5\,\psi^{0}_0\psi^{1}_0\cdots\psi^{9}_0\,. 
\label{gamma11}$$ We summarize the convention on how the zero modes $\psi^\mu_0$ act on the Ramond ground states in Appendix \[convention\].[^5] Complete gauge-invariant action ------------------------------- By use of the string fields introduced in the previous subsection, the complete action for the WZW-like open superstring field theory is given by[@Kunitomo:2015usa] $$S\ =\ -\frac{1}{2}{\langle\!\langle}\Psi, YQ\Psi{\rangle\!\rangle}-\int_0^1 dt \langle A_t(t), QA_\eta(t)+(F(t)\Psi)^2\rangle\,, \label{complete action}$$ and is invariant under the gauge transformations \[full gauge\] $$\begin{aligned} A_{\delta_g}\ =&\ D_\eta\Omega + Q\Lambda + \{F\Psi,F\Xi\{F\Psi,\Lambda\}\} - \{F\Psi,F\Xi\lambda\}\,, \label{gauge tf ns}\\ \delta_g\Psi\ =&\ -X\eta F\Xi[F\Psi, D_\eta\Lambda] + Q\lambda + X\eta F\lambda\,, \label{gauge tf r}\end{aligned}$$ where we have introduced the one parameter extension $\Phi(t)$ of $\Phi$ $(t\in[0,1])$ satisfying the boundary condition $\Phi(1)=\Phi$ and $\Phi(0)=0$, and defined $$A_{\mathcal{O}}(t)\ =\ (\mathcal{O} e^{\Phi(t)})e^{-\Phi(t)}\,,$$ with $\mathcal{O}=\partial_t, \eta,$ or $\delta$, which are analogs of (components) of the right-invariant one form, satisfying the Maurer-Cartan-like equation $$\mathcal{O}_1A_{\mathcal{O}_2}(t) -(-1)^{\mathcal{O}_1\mathcal{O}_2}\mathcal{O}_2A_{\mathcal{O}_1}(t) -[\![A_{\mathcal{O}_1}(t),\,A_{\mathcal{O}_2}(t)]\!] =\ 0\,,\label{MC}$$ where $[\![A_1,A_2]\!]$ is the graded commutator of the two string field $A_1$ and $A_2$: $[\![A_1,A_2]\!]=A_1A_2-(-1)^{A_1A_2}A_2A_1$. Using $A_\eta(t)$, the covariant derivative $D_\eta(t)$ is defined by the operator acting on the string field $A$ as $$D_\eta(t) A\ =\ \eta A - [\![A_\eta,\, A]\!]\,, $$ which is nilpotent: $(D_\eta(t))^2=0$. Then the linear map $F(t)$ on a general string field $\Psi$ in the Ramond sector is defined by $$\begin{aligned} F(t)\Psi\ =&\ \frac{1}{1+\Xi(D_\eta(t)-\eta)}\,\Psi \nonumber\\ =&\ \Psi + \Xi[\![A_\eta(t)\,, \Psi]\!] 
+ \Xi[\![A_\eta(t),\Xi[\![A_\eta(t), \Psi]\!] ]\!]+\cdots\,. \label{def F}\end{aligned}$$ The map $F(t)$ has a property that changes $D_\eta(t)$ into $\eta$: $$D_\eta(t)F(t)\ =\ F(t)\eta\,. \label{important property}$$ Using $F(t)$, we can define a homotopy operator for $D_\eta(t)$ as $F(t)\Xi$ satisfying[@Kunitomo:2015usa] $$\{D_\eta(t), F(t)\Xi\}\ =\ 1\,, \label{homotopy relation}$$ which trivializes the $D_\eta$-cohomology as well as the $\eta$-cohomology in the large Hilbert space. From the definition (\[def F\]), we can show that the homotopy operator $F\Xi$ is BPZ even $$\langle F\Xi \Psi_1, \Psi_2\rangle\ =\ (-1)^{\Psi_1}\langle \Psi_1, F\Xi \Psi_2\rangle\,, \label{BPZ homotopy R}$$ and satisfies $$ \{Q, F\Xi\}A\ =\ FXF\Xi D_\eta A + FX\eta F\Xi A-F\Xi[QA_\eta, F\Xi A]\,, \label{Q and FXi}$$ for a string field $A$. It is useful to note that we can define the projection operators $$\mathcal{P}_R\ =\ D_\eta F\Xi\,,\qquad \mathcal{P}_R^{\perp} =\ F\Xi D_\eta\,, \label{proj ramond}$$ onto the Ramond string field annihilated by $D_\eta$ and its orthogonal complement, respectively. The BPZ inner product in the small Hilbert space ${\langle\!\langle}\cdot,\cdot{\rangle\!\rangle}$ is related to that in the large Hilbert space $\langle\cdot,\cdot\rangle$ as $$\begin{aligned} {\langle\!\langle}A\,, B{\rangle\!\rangle}\ =&\ \langle\Xi A\,, B\rangle\ =\ (-1)^A\langle A\,, \Xi B\rangle \nonumber\\ =&\ \langle\xi_0 A\,, B\rangle\ =\ (-1)^A\langle A\,, \xi_0 B\rangle\,, \label{small to large}\end{aligned}$$ where $A$ and $B$ are in the small Hilbert space, and also in the Ramond sector for the equations in the first line. 
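The property (\[important property\]) and the homotopy relation (\[homotopy relation\]) can both be verified in a few lines directly from the definition (\[def F\]), using only $D_\eta(t)^2=\eta^2=0$ and $\{\eta,\Xi\}=1$; we record the short check here (suppressing the $t$-dependence), since it is not spelled out in the text:

```latex
% D_\eta F = F\eta is equivalent to F^{-1} D_\eta = \eta F^{-1},
% with F^{-1} = 1 + \Xi(D_\eta - \eta):
$$\bigl(1+\Xi(D_\eta-\eta)\bigr)D_\eta\ =\ D_\eta-\Xi\eta D_\eta\
=\ \eta+(1-\Xi\eta)(D_\eta-\eta)\ =\ \eta\bigl(1+\Xi(D_\eta-\eta)\bigr)\,.$$
% The homotopy relation then follows,
% using F\Xi(D_\eta-\eta) = F(F^{-1}-1) = 1-F:
$$\{D_\eta,\,F\Xi\}\ =\ F\eta\Xi+F\Xi D_\eta\ =\ F(1-\Xi\eta)+F\Xi D_\eta\
=\ F+F\Xi(D_\eta-\eta)\ =\ 1\,.$$
```

In particular, the projections (\[proj ramond\]) are then idempotent: $\mathcal{P}_R^2=D_\eta F\Xi D_\eta F\Xi=D_\eta(1-D_\eta F\Xi)F\Xi=\mathcal{P}_R$.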
Using a general variation of the map $F(t)$ on a string field $A$, $$(\delta F(t))A\ =\ -F(t)(\delta F^{-1}(t))F(t)A\ =\ F\Xi[\![\delta A_\eta(t)\,, F(t)A]\!]\,, \label{variation F}$$ a general variation of the action (\[complete action\]) can be calculated as[@Kunitomo:2015usa] $$\delta S\ =\ - \langle A_\delta, QA_\eta+(F\Psi)^2\rangle - {\langle\!\langle}\delta\Psi, Y(Q\Psi+X\eta F\Psi){\rangle\!\rangle}\,, \label{general variation}$$ from which we find the equations of motion, $$QA_\eta + (F\Psi)^2\ =\ 0\,,\qquad Q\Psi + X\eta F\Psi\ =\ 0\,. \label{equations of motion}$$ Before closing this section, we generalize several ingredients for later use. We can define $A_{\mathcal{O}}(t)$ not only for $\mathcal{O}=\partial_t, \eta,$ or $ \delta$, but also for any other derivations of the string product. Although such general $\mathcal{O}$’s are not in general commutative, we assume that they satisfy a closed algebra with respect to the graded commutator of derivations, $\{\mathcal{O}_1,\mathcal{O}_2]\ =\ \mathcal{O}_1\mathcal{O}_2 -(-1)^{\mathcal{O}_1\mathcal{O}_2}\mathcal{O}_2\mathcal{O}_1$. The generalized $A_{\mathcal{O}}(t)$’s satisfy the equation $$\begin{aligned} \mathcal{O}_1A_{\mathcal{O}_2}(t) -&(-1)^{\mathcal{O}_1\mathcal{O}_2}\mathcal{O}_2A_{\mathcal{O}_1}(t) - [\![A_{\mathcal{O}_1}(t)\,, A_{\mathcal{O}_2}(t)]\!] =\ A_{\{\mathcal{O}_1,\mathcal{O}_2]}(t)\,,\label{gen MC}\end{aligned}$$ which reduces to the Maurer-Cartan-like equation (\[MC\]) when $\{\mathcal{O}_1,\mathcal{O}_2]=0$. Using $A_{\mathcal{O}}(t)$, we can define the covariant derivative $D_{\mathcal{O}}(t)$ on a string field $A$ by $$D_{\mathcal{O}}(t) A\ =\ \mathcal{O} A - [\![A_{\mathcal{O}}(t)\,, A]\!]\,. $$ From (\[gen MC\]), we can show that $$[\![D_{\mathcal{O}_1}(t)\,, D_{\mathcal{O}_2}(t)]\!]\ =\ D_{\{\mathcal{O}_1, \mathcal{O}_2]}(t)\,. 
\label{generalized D}$$ As an analog of the linear map $F(t)$ in the Ramond sector, we can also define the linear map $f(t)$ on a general string field $\Phi$ in the NS sector by $$\begin{aligned} f(t)\Phi\ =&\ \frac{1}{1+\xi_0(D_\eta(t)-\eta)}\,\Phi \nonumber\\ =&\ \Phi + \xi_0 [\![A_\eta(t), \Phi]\!] + \xi_0 [\![A_\eta(t),\,\xi_0[\![A_\eta(t), \Phi]\!] ]\!]\cdots\,. \label{f ns}\end{aligned}$$ A homotopy operator for $D_\eta(t)$ in the NS sector is given by the BPZ even operator $f(t)\xi_0$: $$\{D_\eta(t),\, f(t)\xi_0\}\ =\ 1\,,\qquad \langle f\xi_0 \Phi_1, \Phi_2\rangle\ =\ (-1)^{\Phi_1}\langle \Phi_1, f\xi_0 \Phi_2\rangle\,. \label{BPZ homotopy NS}$$ We can define the projection operators $$\mathcal{P}_{NS}\ =\ D_\eta f\xi_0\,,\qquad \mathcal{P}_{NS}^\perp\ =\ f\xi_0 D_\eta\,,\qquad \label{proj ns}$$ onto the NS string field annihilated by $D_\eta$ and its orthogonal complement, respectively. Space-time supersymmetry ======================== Now let us discuss how space-time supersymmetry is realized in the WZW-like formulation. Starting from a natural linearized transformation exchanging the NS string field $\Phi$ and the Ramond string field $\Psi$, we construct a nonlinear transformation that is a symmetry of the complete action (\[complete action\]). We show that the transformation satisfies the supersymmetry algebra, up to the equations of motion and gauge transformation, except for an unphysical symmetry. 
Space-time supersymmetry transformation
---------------------------------------

At the linearized level, a natural space-time supersymmetry transformation of string fields in the small Hilbert space, $\eta\Phi$ and $\Psi$, is given by $$\delta^{(0)}_{{\mathcal{S}}(\epsilon)} \eta\Phi\ =\ {\mathcal{S}}(\epsilon)\Psi,\qquad \delta^{(0)}_{{\mathcal{S}}(\epsilon)} \Psi\ =\ X{\mathcal{S}}(\epsilon)\eta\Phi\,, \label{restricted linear} $$ where $${\mathcal{S}}(\epsilon)\ =\ \epsilon_\alpha q^\alpha\ =\ \epsilon_\alpha \oint\frac{dz}{2\pi i}S^\alpha(z) e^{-\phi(z)/2}\, \label{supercharge}$$ is the first-quantized space-time supersymmetry charge with the parameter $\epsilon_\alpha$. The spin operator $S^\alpha(z)$ in the matter sector can be constructed from $\psi^\mu(z)$ using the bosonization technique [@Friedan:1985ge]. This ${\mathcal{S}}(\epsilon)$ is a (Grassmann-even) derivation of the string product, and is commutative with $Q$, $\eta$ and $\xi_0$: $[Q,{\mathcal{S}}(\epsilon)]=[\eta,{\mathcal{S}}(\epsilon)]=[\xi_0, {\mathcal{S}}(\epsilon)]=0$. It satisfies the algebra, $$\begin{aligned} [{\mathcal{S}}(\epsilon_1), {\mathcal{S}}(\epsilon_2)]\ =&\ \tilde{p}(v_{12})\,, \label{1st quantized alg}\end{aligned}$$ with $v_{12}^\mu=(\epsilon_1C\bar{\gamma}^\mu\epsilon_2)/\sqrt{2}$, where $\tilde{p}(v)$ is the operator with picture number $p=-1$ defined by $$\tilde{p}(v)\ =\ v_\mu\tilde{p}^\mu\ =\ - v_\mu\oint\frac{dz}{2\pi i}\psi^\mu(z) e^{-\phi(z)}\,. \label{p with -1}$$ This is equivalent to the space-time translation operator $p(v)=v_\mu\oint\frac{dz}{2\pi i}i\partial X^\mu(z)$ (center-of-mass momentum of the string) in the sense that, for example,[@Witten:1986qs] $$(p(v)-X_0\tilde{p}(v))\ =\ \{Q, M(v)\}\,, \label{p tilde p}$$ with $$M(v)\ =\ v^\mu\oint\frac{dz}{2\pi i}(\xi(z)-\xi_0)\psi_\mu(z)e^{-\phi(z)}\,. \label{kernel M}$$ Note that $M(v)$ does not include $\xi_0$, and so is in the small Hilbert space: $\{\eta, M(v)\}=0$.
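A small check worth recording: because ${\mathcal{S}}(\epsilon)$ commutes with $Q$, $\eta$, and $\xi_0$, the operator $\tilde{p}(v)$ obtained through (\[1st quantized alg\]) inherits the same commutativity from the graded Jacobi identity for derivations:

```latex
$$[Q,\,\tilde{p}(v_{12})]\ =\ [Q,\,[{\mathcal{S}}_1,\,{\mathcal{S}}_2]]\
=\ [[Q,\,{\mathcal{S}}_1],\,{\mathcal{S}}_2]+[{\mathcal{S}}_1,\,[Q,\,{\mathcal{S}}_2]]\ =\ 0\,,$$
% and identically with \eta or \xi_0 in place of Q.
```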
The algebra (\[1st quantized alg\]) and the Jacobi identity imply that $[Q, \tilde{p}(v)]=[\eta, \tilde{p}(v)]=[\xi_0, \tilde{p}(v)]=0$. We frequently omit specifying the parameters explicitly and denote, for example, ${\mathcal{S}}(\epsilon_1)$ by ${\mathcal{S}}_1$. Since $\eta\Phi$ and $\Psi$ are in the small Hilbert space containing the physical spectrum, (\[restricted linear\]) is the transformation law given in Ref.  except that the local picture-changing operator at the midpoint is replaced by the $X$ in (\[PCO\]) so that the transformation is closed in the restricted space. As a transformation of $\Phi$ in the large Hilbert space, we adopt here that $$\delta^{(0)}_{{\mathcal{S}}(\epsilon)}\Phi\ =\ {\mathcal{S}}(\epsilon)\Xi\Psi\,.\label{linear tf phi}$$ This is consistent with (\[restricted linear\]) but is not unique. A different choice, however, can be obtained by combining (\[linear tf phi\]) and an $\Omega$-gauge transformation, for example, $$\begin{aligned} \tilde{\delta}_{{\mathcal{S}}(\epsilon)}^{(0)}\Phi\ =&\ \xi_0{\mathcal{S}}(\epsilon)\Psi \nonumber\\ =&\ \delta_{{\mathcal{S}}(\epsilon)}^{(0)}\Phi - \eta(\xi_0{\mathcal{S}}(\epsilon)\Xi\Psi)\,.\end{aligned}$$ Using the fact that ${\mathcal{S}}$ is BPZ odd, $$\langle {\mathcal{S}}A, B\rangle\ =\ -\langle A, {\mathcal{S}}B\rangle\,, \label{BPZ S}$$ it is easy to see that the quadratic terms of the action (\[complete action\]), $$S^{(0)}\ =\ - \frac{1}{2} \langle\Phi, Q\eta\Phi\rangle - \frac{1}{2} {\langle\!\langle}\Psi, YQ\Psi{\rangle\!\rangle}\,, \label{kinetic}$$ are invariant under the transformation $$\delta_{\mathcal{S}}^{(0)}\Phi\ =\ {\mathcal{S}}\Xi\Psi\,,\qquad \delta_{\mathcal{S}}^{(0)}\Psi\ =\ X{\mathcal{S}}\eta\Phi\,. 
\label{linearized tf}$$ However, the action at the next order, $$S^{(1)}\ =\ -\frac{1}{6}\langle\Phi, Q[\Phi, \eta\Phi]\rangle - \langle\Phi, \Psi^2\rangle\,,$$ is not invariant under $\delta^{(0)}_{\mathcal{S}}$ but is transformed as $$\begin{aligned} \delta^{(0)}_{\mathcal{S}}S^{(1)}\ =\ \langle\left(\frac{1}{2}[\Phi, {\mathcal{S}}\Xi\Psi] -{\mathcal{S}}\Xi[\Phi, \Psi] +\{\Psi, \Xi{\mathcal{S}}\Phi\}\right),Q\eta\Phi\rangle \nonumber\\ +\, {\langle\!\langle}\left(-\frac{1}{2}X\eta[\Phi,{\mathcal{S}}\Phi] +X\eta[\Phi,\Xi{\mathcal{S}}\eta\Phi]\right), YQ\Psi{\rangle\!\rangle}\,. \label{var one}\end{aligned}$$ We have thus to modify the transformation by adding $$\begin{aligned} \delta^{(1)}_{\mathcal{S}}\Phi\ =&\ \frac{1}{2}[\Phi, {\mathcal{S}}\Xi\Psi] -{\mathcal{S}}\Xi[\Phi,\Psi]+\{\Psi,\Xi {\mathcal{S}}\Phi\},\\ \delta^{(1)}_{\mathcal{S}}\Psi\ =&\ -\frac{1}{2}X\eta[\Phi,{\mathcal{S}}\Phi] +X\eta[\Phi,\Xi {\mathcal{S}}\eta\Phi]\,,\end{aligned}$$ under which the kinetic terms (\[kinetic\]) are transformed so as to cancel the contribution (\[var one\]): $\delta_{\mathcal{S}}^{(1)}S^{(0)}+\delta_{\mathcal{S}}^{(0)}S^{(1)}=0$. 
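For completeness, the invariance of the kinetic terms (\[kinetic\]) under the linearized transformation (\[linearized tf\]) can be checked directly. Writing $\delta^{(0)}_{\mathcal{S}}S^{(0)}=-\langle\delta^{(0)}_{\mathcal{S}}\Phi, Q\eta\Phi\rangle-\langle\!\langle\delta^{(0)}_{\mathcal{S}}\Psi, YQ\Psi\rangle\!\rangle$ and using (\[BPZ S\]), (\[small to large\]), $[Q,{\mathcal{S}}]=0$, the BPZ evenness of $X$, and $XYQ\Psi=Q\Psi$ on the restricted space, the two pieces reduce to (a sketch; the final signs follow from the BPZ conventions of the text):

```latex
$$-\langle {\mathcal{S}}\Xi\Psi,\, Q\eta\Phi\rangle\ =\ \langle \Xi\Psi,\, Q{\mathcal{S}}\eta\Phi\rangle\
=\ \langle\!\langle \Psi,\, Q{\mathcal{S}}\eta\Phi\rangle\!\rangle\,,$$
$$-\langle\!\langle X{\mathcal{S}}\eta\Phi,\, YQ\Psi\rangle\!\rangle\
=\ -\langle\!\langle {\mathcal{S}}\eta\Phi,\, Q\Psi\rangle\!\rangle\
=\ \langle\!\langle \eta\Phi,\, Q{\mathcal{S}}\Psi\rangle\!\rangle\
=\ -\langle\!\langle \Psi,\, Q{\mathcal{S}}\eta\Phi\rangle\!\rangle\,,$$
```

so that $\delta^{(0)}_{\mathcal{S}}S^{(0)}=0$, as stated above.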
Then at the next order we have two contributions, $\delta_{\mathcal{S}}^{(1)}S^{(1)}$ and $\delta_{\mathcal{S}}^{(0)}S^{(2)}$, which are again nonzero and require us to add $$\begin{aligned} \delta^{(2)}_{\mathcal{S}}\Phi\ =&\ \frac{1}{12}[\Phi,[\Phi,{\mathcal{S}}\Xi\Psi]] +\frac{1}{2}\{[\Phi,\Psi],\Xi {\mathcal{S}}\Phi\} +\frac{1}{2}[\Xi[\Phi,\Psi],{\mathcal{S}}\Phi] \nonumber\\ & +\frac{1}{2}\{\Psi,\Xi\{\eta\Phi,\Xi {\mathcal{S}}\Phi\}\} +\frac{1}{2}\{\Psi,\Xi[\Phi,\Xi {\mathcal{S}}\eta\Phi]\} -[\Xi[\Phi,\Psi],\Xi {\mathcal{S}}\eta\Phi] \nonumber\\ & -\frac{1}{2}{\mathcal{S}}\Xi[\Phi,\Xi\{\eta\Phi,\Psi\}] -\frac{1}{2}{\mathcal{S}}\Xi[\eta\Phi,\Xi[\Phi,\Psi]],\\ \delta^{(2)}_{\mathcal{S}}\Psi\ =&\ \frac{1}{6}X\eta[\Phi,[\Phi,{\mathcal{S}}\Phi]] +\frac{1}{2}X\eta[\Phi,\Xi[{\mathcal{S}}\Phi,\eta\Phi]] +\frac{1}{2}X\eta\{\eta\Phi,\Xi[\Phi,\Xi {\mathcal{S}}\eta\Phi]\} \nonumber\\ & +\frac{1}{2}X\eta[\Phi,\Xi[\eta\Phi,\Xi {\mathcal{S}}\eta\Phi]]\,,\end{aligned}$$ to cancel them by $\delta_{\mathcal{S}}^{(2)}S^{(0)}$: $\delta_{\mathcal{S}}^{(2)}S^{(0)}+\delta_{\mathcal{S}}^{(1)}S^{(1)}+\delta_{\mathcal{S}}^{(0)}S^{(2)}=0$. The procedure does not terminate, so we suppose a full transformation consistent with these results, and then show that it is in fact a symmetry of the complete action.

Complete space-time supersymmetry transformation
------------------------------------------------

Here we suppose that the complete transformation is given by \[complete transformation\] $$\begin{aligned} A_{\delta_{\mathcal{S}}}\ =&\ e^\Phi({\mathcal{S}}\Xi(e^{-\Phi}F\Psi e^\Phi))e^{-\Phi} + \{F\Psi,F\Xi A_{\mathcal{S}}\}, \label{complete tf ns}\\ \delta_{\mathcal{S}}\Psi\ =&\ X\eta F\Xi D_\eta A_{\mathcal{S}}\ =\ X\eta F\Xi {\mathcal{S}}A_\eta\,, \label{complete tf r}\end{aligned}$$ and show that the complete action (\[complete action\]) is invariant under this transformation.
From the formula of the general variation of the action (\[general variation\]), we have $$\begin{aligned} \delta_{\mathcal{S}}S\ =&\ -\langle e^\Phi({\mathcal{S}}\Xi(e^{-\Phi}F\Psi e^\Phi))e^{-\Phi} ,QA_\eta+(F\Psi)^2\rangle -\langle \{F\Psi,F\Xi A_{\mathcal{S}}\}, QA_\eta+(F\Psi)^2\rangle \nonumber\\ &\ - {\langle\!\langle}X\eta F\Xi D_\eta A_{\mathcal{S}}\,,Y(Q\Psi+X\eta F\Psi){\rangle\!\rangle}\,. \label{var S}\end{aligned}$$ We calculate each of these three terms, which we denote (I), (II), and (III), separately. First, using $(\ref{BPZ homotopy R})$ and the cyclicity of the inner product, the second term is calculated as $$\textrm{(II)} =\ \langle A_{\mathcal{S}}\,, F\Xi[QA_\eta+(F\Psi)^2, F\Psi]\rangle\,. \label{II} $$ For the third term, we find $$\begin{aligned} \textrm{(III)} =&\ - {\langle\!\langle}\eta F\Xi D_\eta A_{\mathcal{S}}\,, Q\Psi+X\eta F\Psi{\rangle\!\rangle}\nonumber\\ =&\ - \langle A_{\mathcal{S}}\,, D_\eta F\Xi (Q\Psi+X\eta F\Psi) \rangle \nonumber\\ =&\ - \langle A_{\mathcal{S}}\,, F (Q\Psi+X\eta F\Psi) \rangle\,, \label{III}\end{aligned}$$ where we have used $(\ref{BPZ homotopy R})$, $(\ref{important property})$, and the fact that $X$ is BPZ even with respect to the inner product in the small Hilbert space, ${\langle\!\langle}XA\,, B{\rangle\!\rangle}={\langle\!\langle}A\,, XB{\rangle\!\rangle}$, and $Q\Psi+X\eta F\Psi$ is in the restricted small Hilbert space. In order to calculate the first term (I), some consideration is necessary. In addition to the cyclicity, we need the following relation for two graded commutative derivations of the string product, $\mathcal{O}_1$ and $\mathcal{O}_2$ satisfying $\{\mathcal{O}_1\,,\mathcal{O}_2]=0$. 
$$\begin{aligned} e^{-\Phi}(\mathcal{O}_1A_{\mathcal{O}_2})e^\Phi\ =&\ \mathcal{O}_1\widetilde{A}_{\mathcal{O}_2}+ \widetilde{A}_{\mathcal{O}_1}\widetilde{A}_{\mathcal{O}_2} -(-1)^{\mathcal{O}_1\mathcal{O}_2} \widetilde{A}_{\mathcal{O}_2} \widetilde{A}_{\mathcal{O}_1} \nonumber\\ =&\ (-1)^{\mathcal{O}_1\mathcal{O}_2}\mathcal{O}_2\widetilde{A}_{\mathcal{O}_1}\,, \label{dual relation}\end{aligned}$$ where $\widetilde{A}_{\mathcal{O}}$ is an analog of the left-invariant current: $\widetilde{A}_{\mathcal{O}}=e^{-\Phi}(\mathcal{O}e^\Phi)$. If we use this relation for $(\mathcal{O}_1,\mathcal{O}_2)=(Q,\eta)$, we find $$\begin{aligned} \textrm{(I)}\ =&\ -\langle {\mathcal{S}}\Xi(e^{-\Phi}F\Psi e^\Phi), e^{-\Phi}(QA_\eta+(F\Psi)^2)e^\Phi\rangle \nonumber\\ =&\ \langle {\mathcal{S}}\Xi(e^{-\Phi}F\Psi e^\Phi), \eta\widetilde{A}_Q\rangle -\langle {\mathcal{S}}\Xi(e^{-\Phi}F\Psi e^\Phi), (e^{-\Phi}F\Psi e^\Phi)^2\rangle\,. \label{I-1}\end{aligned}$$ Here the second term vanishes owing to (\[BPZ S\]) and (\[small to large\]): $$\begin{aligned} - \langle {\mathcal{S}}\Xi(e^{-\Phi}F\Psi e^\Phi), (e^{-\Phi}F\Psi e^\Phi)^2\rangle\ =&\ {\langle\!\langle}(e^{-\Phi}F\Psi e^\Phi), \{(e^{-\Phi}F\Psi e^\Phi), {\mathcal{S}}(e^{-\Phi}F\Psi e^\Phi)\}{\rangle\!\rangle}\nonumber\\ =&\ \frac{2}{3}\Big( {\langle\!\langle}{\mathcal{S}}(e^{-\Phi}F\Psi e^\Phi), (e^{-\Phi}F\Psi e^\Phi)^2{\rangle\!\rangle}\nonumber\\ &\hspace{10mm} + {\langle\!\langle}(e^{-\Phi}F\Psi e^\Phi), \{(e^{-\Phi}F\Psi e^\Phi), {\mathcal{S}}(e^{-\Phi}F\Psi e^\Phi)\}{\rangle\!\rangle}\Big) \nonumber\\ =&\ 0\,.\end{aligned}$$ The first term in (\[I-1\]) can further be calculated as $$\begin{aligned} \textrm{(I)} =&\ - \langle {\mathcal{S}}(e^{-\Phi}F\Psi e^\Phi), \widetilde{A}_Q\rangle\ =\ \langle F\Psi, e^\Phi({\mathcal{S}}\widetilde{A}_Q)e^{-\Phi}\rangle \nonumber\\ =&\ \langle F\Psi, QA_{\mathcal{S}}\rangle\ =\ \langle A_{\mathcal{S}}, QF\Psi\rangle\,, \label{I}\end{aligned}$$ where we have used the relation (\[dual 
relation\]) with $(\mathcal{O}_1,\mathcal{O}_2)=(Q,{\mathcal{S}})$, and the identity $$\begin{aligned} \eta(e^{-\Phi}F\Psi e^\Phi)\ =&\ e^{-\Phi} (D_\eta F\Psi) e^\Phi\ =\ 0\,. $$ Summing (\[II\]), (\[III\]), and (\[I\]), the variation of the action under the space-time supersymmetry transformation finally becomes $$\delta_{\mathcal{S}}S\ =\ \langle A_{\mathcal{S}}, \left(QF\Psi - F(Q\Psi+X\eta F\Psi) + F\Xi[QA_\eta+(F\Psi)^2, F\Psi]\right)\rangle\,,$$ which vanishes due to the identity (4.89) in Ref.: $\delta_{\mathcal{S}}S=0$. Hence the complete action (\[complete action\]) is invariant under the transformation (\[complete transformation\]). Algebra of transformation {#sec algebra} ========================= Starting from a natural linear transformation (\[linearized tf\]), we have constructed the nonlinear transformation (\[complete transformation\]) as a symmetry of the complete action (\[complete action\]). If this is in fact space-time supersymmetry, the commutator of two transformations should satisfy the supersymmetry algebra $$[\delta_{{\mathcal{S}}_1},\,\delta_{{\mathcal{S}}_2}]\ =^{\hspace{-2mm} ?}\ \delta_{p(v_{12})}\,, \label{susy alg}$$ up to the equations of motion (\[equations of motion\]) and gauge transformation (\[full gauge\]) generated by some field-dependent parameters, where $\delta_{p(v_{12})}$ is the space-time translation defined by $$\delta_{p(v)}A_\eta\ =\ - p(v)A_\eta\,, \qquad \delta_{p(v)}\Psi\ =\ - p(v)\Psi\,, \label{translation}$$ with the parameter $v_{12}$ in (\[1st quantized alg\]). In this section, we show that the algebra (\[susy alg\]) is slightly modified, but still the transformation (\[complete transformation\]) can be identified with space-time supersymmetry. 
Preparation ----------- As preparation, note that the relations \[large small\] $$\begin{aligned} \delta A_\eta\ =&\ D_\eta A_\delta\,, \label{delta eta}\\ A_\delta\ =&\ f\xi_0 \delta A_\eta + D_\eta \Omega_\delta\,, \label{A delta}\end{aligned}$$ hold with $\Omega_\delta=f\xi_0A_\delta$, for general variation of the NS string field $A_\delta$. The former, (\[delta eta\]), is the case of $(\mathcal{O}_1,\mathcal{O}_2)=(\delta,\eta)$ in (\[MC\]), and the latter, (\[A delta\]), is obtained by decomposing $A_\delta$ by the projection operators (\[proj ns\]) and using (\[delta eta\]). These relations (\[large small\]) show that two variations $A_\delta$ and $\delta A_\eta$ are in one-to-one correspondence up to the $\Omega$-gauge transformation. Since any transformation of the string field is a special case of the general variation, (\[large small\]) holds for any symmetry transformation $\delta_I$, \[delta I\] $$\begin{aligned} \delta_I A_\eta\ =&\ D_\eta A_{\delta_I}\,, \label{delta I 1}\\ A_{\delta_I}\ =&\ f\xi_0\delta_IA_\eta + D_\eta\Omega_I\,. \label{delta I 2}\end{aligned}$$ This is the case even for the commutator of the two transformations $[\delta_I, \delta_J]$, \[delta I delta J\] $$\begin{aligned} [\delta_I, \delta_J] A_\eta\ =&\ D_\eta A_{[\delta_I, \delta_J]}\,, \label{delta I delta J 1}\\ A_{[\delta_I, \delta_J]}\ =&\ f\xi_0[\delta_I, \delta_J] A_\eta + D_\eta\Omega_{IJ}\,, \label{delta I delta J 2}\end{aligned}$$ with $$\begin{aligned} \Omega_{IJ}\ =&\ -f\xi_0[f\xi_0\delta_I A_\eta,\, f\xi_0\delta_J A_\eta] \nonumber\\ &\ +\delta_I \Omega_{J} -[f\xi_0\delta_I A_\eta,\, \Omega_{J}] -\delta_J \Omega_{I} +[f\xi_0\delta_J A_\eta,\, \Omega_{I}] -[\Omega_{I},\, D_\eta\Omega_{J}]\,, \label{Omega IJ}\end{aligned}$$ which can be shown by explicit calculation using (\[gen MC\]) and (\[f ns\]) if we assume (\[delta I\]) with some field-dependent $\Omega_I$. 
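The relations above are all consequences of the Maurer–Cartan structure of the currents $A_{\mathcal{O}}$ and $\widetilde{A}_{\mathcal{O}}$. As a sanity check, the purely bosonic case of the identity (\[dual relation\]), $e^{-\Phi}(\partial_1 A_{\partial_2})e^{\Phi}=\partial_2\widetilde{A}_{\partial_1}$ for commuting derivations, can be verified numerically in a finite-dimensional matrix analog. The toy model below (ordinary matrix exponentials and derivatives in two parameters $t_1,t_2$, replacing the string-field products) is our own illustration, not part of the construction:

```python
import numpy as np

def expm(M, terms=40):
    """Matrix exponential via its Taylor series (fine for small norms)."""
    out, term = np.eye(M.shape[0]), np.eye(M.shape[0])
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

rng = np.random.default_rng(0)
X, Y, Z = (0.2 * rng.standard_normal((3, 3)) for _ in range(3))

def g(t1, t2):  # group element e^{Phi(t1, t2)}
    return expm(t1 * X + t2 * Y + t1 * t2 * Z)

def A2(t1, t2, h=1e-5):  # "right-invariant" current (d_2 e^Phi) e^{-Phi}
    d = (g(t1, t2 + h) - g(t1, t2 - h)) / (2 * h)
    return d @ np.linalg.inv(g(t1, t2))

def At1(t1, t2, h=1e-5):  # "left-invariant" current e^{-Phi} d_1 e^Phi
    d = (g(t1 + h, t2) - g(t1 - h, t2)) / (2 * h)
    return np.linalg.inv(g(t1, t2)) @ d

t1, t2, h = 0.3, -0.2, 1e-4
gi, gg = np.linalg.inv(g(t1, t2)), g(t1, t2)
# e^{-Phi} (d_1 A_2) e^{Phi}  versus  d_2 Atilde_1
lhs = gi @ ((A2(t1 + h, t2) - A2(t1 - h, t2)) / (2 * h)) @ gg
rhs = (At1(t1, t2 + h) - At1(t1, t2 - h)) / (2 * h)
assert np.allclose(lhs, rhs, atol=1e-5)
print("dual relation (bosonic matrix analog) verified")
```

The agreement holds to the accuracy of the finite differences; the graded signs in (\[dual relation\]) are of course invisible in this commuting-derivation analog.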
Therefore if the algebra of the transformation is closed on $A_\eta$, $$[\delta_I,\,\delta_J]A_\eta\ =\ \sum_{K\ne\Omega}\delta_K A_\eta\,, \label{alg A_eta}$$ we have $$A_{[\delta_I,\delta_J]}\ =\ \sum_{K\ne\Omega} A_{\delta_K} + D_\eta\Omega_{IJ}\ =\ \sum_K A_{\delta_K}\,, \label{alg AIJ}$$ with some field-dependent $\Omega_{IJ}$, or equivalently, the algebra is also closed on $e^\Phi$: $$[\delta_I,\,\delta_J]e^\Phi\ =\ \sum_K\delta_K e^\Phi\,.$$ Here in (\[alg A\_eta\]) we used the fact that $A_\eta$ is invariant under the $\Omega$-gauge transformation, $A_{\delta_\Omega}=D_\eta\Omega$, as seen from (\[delta I 1\]). $[\delta_{{\mathcal{S}}_1},\delta_{{\mathcal{S}}_2}]$ ----------------------------------------------------- Now let us explicitly calculate the supersymmetry algebra on $A_\eta$ and $\Psi$. Owing to their $\Omega$-gauge invariance, this is easier than calculating the algebra on the fundamental string fields $\Phi$ (or $e^\Phi$) and $\Psi$, and, as shown in the previous subsection, it is sufficient to determine the algebra on the fundamental string fields. From (\[complete transformation\]) we find $$\begin{aligned} A_{\delta_{\mathcal{S}}}\ =&\ f\xi_0\delta_{\mathcal{S}}A_\eta + D_\eta\Omega_{\mathcal{S}}\,,\\ \delta_{\mathcal{S}}\Psi\ =&\ X\eta F\Xi{\mathcal{S}}A_\eta\,, \label{susy A_eta} \end{aligned}$$ with $$\begin{aligned} \delta_{\mathcal{S}}A_\eta\ =&\ {\mathcal{S}}F\Psi + [F\Psi,F\Xi {\mathcal{S}}A_\eta]\ =\ D_{\mathcal{S}}F\Psi - [F\Psi, D_\eta F\Xi A_{\mathcal{S}}]\,,\\ \Omega_{{\mathcal{S}}}\ =&\ f\xi_0\left(e^\Phi({\mathcal{S}}\Xi(e^{-\Phi}F\Psi e^\Phi))e^{-\Phi} +\{F\Psi, F\Xi A_{{\mathcal{S}}}\}\right)\,.\end{aligned}$$ Here we used the relations $$D_\eta(e^\Phi A e^{-\Phi})\ =\ e^\Phi(\eta A)e^{-\Phi}\,,\qquad \eta(e^{-\Phi} A e^\Phi)\ =\ e^{-\Phi}(D_\eta A)e^\Phi\,,$$ which hold for a general string field $A$.
The commutator of two transformations on $\Psi$, $$[\delta_{{\mathcal{S}}_1}, \delta_{{\mathcal{S}}_2}]\,\Psi\ =\ \delta_{{\mathcal{S}}_1}(X\eta F\Xi {\mathcal{S}}_2 A_\eta) - (1\leftrightarrow 2)\,,$$ which is the more straightforward of the two, can be calculated as follows. Using (\[variation F\]), (\[homotopy relation\]) and (\[MC\]) with $(\mathcal{O}_1,\mathcal{O}_2)=({\mathcal{S}},\eta)$ and $(\delta, \eta)$, we find $$\begin{aligned} \delta_{{\mathcal{S}}_1}(X\eta F\Xi {\mathcal{S}}_2 A_\eta) =&\ X\eta F\Xi[\delta_{{\mathcal{S}}_1}A_\eta,\,F\Xi{\mathcal{S}}_2A_\eta] + X\eta F\Xi{\mathcal{S}}_2(\delta_{{\mathcal{S}}_1}A_\eta) \nonumber\\ =&\ X\eta F\Xi D_{{\mathcal{S}}_2}(\delta_{{\mathcal{S}}_1}A_\eta) + X\eta F\Xi [D_\eta F\Xi A_{{\mathcal{S}}_2}\,,\delta_{{\mathcal{S}}_1}A_\eta]\,.\end{aligned}$$ Then, using $[D_\eta, D_{\mathcal{S}}]=0$, $$\begin{aligned} [\delta_{{\mathcal{S}}_1}, \delta_{{\mathcal{S}}_2}]\,\Psi\ =&\ \Big( X\eta F\Xi D_{{\mathcal{S}}_2}D_{{\mathcal{S}}_1}F\Psi - X\eta F\Xi [F\Psi, D_{{\mathcal{S}}_2}D_\eta F\Xi A_{{\mathcal{S}}_1}] \nonumber\\ &\ - X\eta F\Xi [D_\eta F\Xi A_{{\mathcal{S}}_2}\,,[F\Psi, D_\eta F\Xi A_{{\mathcal{S}}_1}]] \Big) - (1\leftrightarrow 2) \nonumber\\ =&\ - X\eta F\Xi D_{\tilde{p}_{12}}F\Psi \nonumber\\ &\ + X\eta F\Xi [F\Psi, D_\eta\big(D_{{\mathcal{S}}_1}F\Xi A_{{\mathcal{S}}_2} - D_{{\mathcal{S}}_2}F\Xi A_{{\mathcal{S}}_1} + [F\Xi A_{{\mathcal{S}}_1}, D_\eta F\Xi A_{{\mathcal{S}}_2}]\big)]\,, \label{ss on Psi}\end{aligned}$$ where we have used (\[generalized D\]) and (\[1st quantized alg\]), and denoted $\tilde{p}(v_{12})=\tilde{p}_{12}$.
Comparing with (\[gauge tf r\]), we find that the second line has the form of the gauge transformation with the parameter $$\begin{aligned} D_\eta\Lambda_{{\mathcal{S}}_1{\mathcal{S}}_2}\ =&\ - D_\eta\Big(D_{{\mathcal{S}}_1}F\Xi A_{{\mathcal{S}}_2}-D_{{\mathcal{S}}_2}F\Xi A_{{\mathcal{S}}_1} + [ F\Xi A_{{\mathcal{S}}_1},\, D_\eta F\Xi A_{{\mathcal{S}}_2}]\Big) \nonumber\\ =&\ - A_{\tilde{p}_{12}} + ({\mathcal{S}}_1F\Xi{\mathcal{S}}_2-{\mathcal{S}}_1F\Xi{\mathcal{S}}_1)A_\eta -[F\Xi{\mathcal{S}}_1 A_\eta,\, F\Xi{\mathcal{S}}_2A_\eta]\,. \label{Lambda ss}\end{aligned}$$ The second form can be obtained using (\[gen MC\]), and will be used below. In order to calculate the algebra on $A_\eta$, we first calculate the transformation of $F\Psi$ using (\[variation F\]): $$\begin{aligned} \delta_{\mathcal{S}}F\Psi\ =&\ F\Xi\{\delta_{\mathcal{S}}A_\eta, F\Psi\} + F\delta_{\mathcal{S}}\Psi \nonumber\\ =&\ FX\eta F\Xi{\mathcal{S}}A_\eta + F\Xi{\mathcal{S}}(F\Psi)^2 + F\Xi[(F\Psi)^2, F\Xi{\mathcal{S}}A_\eta] \nonumber\\ =&\ QF\Xi {\mathcal{S}}A_\eta + F\Xi {\mathcal{S}}\left(QA_\eta + (F\Psi)^2\right) +F\Xi[QA_\eta + (F\Psi)^2, F\Xi {\mathcal{S}}A_\eta] \nonumber\\ \cong&\ QF\Xi {\mathcal{S}}A_\eta\,, \label{susy on F psi} \end{aligned}$$ where the third equality follows from (\[Q and FXi\]), and the symbol $\cong$ denotes an equation which holds up to the equations of motion. Then the commutator of two transformations on $A_\eta$ $$[\delta_{{\mathcal{S}}_1}, \delta_{{\mathcal{S}}_2}]\,A_\eta\ =\ \delta_{{\mathcal{S}}_1}\big({\mathcal{S}}_2 F\Psi+[F\Psi, F\Xi {\mathcal{S}}_2 A_\eta]\big) - (1\leftrightarrow2)\,,$$ can be calculated similarly to that on $\Psi$. 
Since the first term can be calculated as $$\begin{aligned} \delta_{{\mathcal{S}}_1}\big({\mathcal{S}}_2 F\Psi+[F\Psi, F\Xi {\mathcal{S}}_2 A_\eta]\big) =&\ {\mathcal{S}}_2(\delta_{{\mathcal{S}}_1}F\Psi)+[(\delta_{{\mathcal{S}}_1}F\Psi), F\Xi{\mathcal{S}}_2A_\eta] \nonumber\\ &\ + [F\Psi, F\Xi D_{{\mathcal{S}}_2}(\delta_{{\mathcal{S}}_1}A_\eta)] +[F\Psi, F\Xi[D_\eta F\Xi A_{{\mathcal{S}}_2}, (\delta_{{\mathcal{S}}_1}A_\eta)]] \nonumber\\ \cong&\ {\mathcal{S}}_2QF\Xi {\mathcal{S}}_1 A_\eta + [QF\Xi{\mathcal{S}}_1 A_\eta, F\Xi{\mathcal{S}}_2 A_\eta] \nonumber\\ &\ +[F\Psi, F\Xi D_{{\mathcal{S}}_2}D_{{\mathcal{S}}_1}F\Psi] - [F\Psi, F\Xi D_{{\mathcal{S}}_2}[F\Psi\,, D_\eta F\Xi A_{{\mathcal{S}}_1}]] \nonumber\\ &\ +[F\Psi, F\Xi[D_\eta F\Xi A_{{\mathcal{S}}_2}, D_{{\mathcal{S}}_1}F\Psi]] \nonumber\\ &\hspace{10mm} - [F\Psi, F\Xi[D_\eta F\Xi A_{{\mathcal{S}}_2}, [F\Psi\,, D_\eta F\Xi A_{{\mathcal{S}}_1}]]]\,,\end{aligned}$$ we find $$\begin{aligned} [\delta_{{\mathcal{S}}_1}, \delta_{{\mathcal{S}}_2}]\,A_\eta\ \cong&\ - Q\Big(({\mathcal{S}}_1 F\Xi {\mathcal{S}}_2 - {\mathcal{S}}_2 F\Xi {\mathcal{S}}_1)A_\eta - [F\Xi{\mathcal{S}}_1A_\eta, F\Xi{\mathcal{S}}_2A_\eta]\Big) \nonumber\\ &\ - [F\Psi, F\Xi[D_{{\mathcal{S}}_1}, D_{{\mathcal{S}}_2}]F\Psi] - [F\Psi, F\Xi[F\Psi, D_\eta\Lambda_{{\mathcal{S}}_1{\mathcal{S}}_2}]] \nonumber\\ =&\ -QA_{\tilde{p}_{12}}-[F\Psi,\,F\Xi D_{\tilde{p}_{12}}F\Psi] \nonumber\\ &\ -QD_\eta\Lambda_{{\mathcal{S}}_1{\mathcal{S}}_2} - [F\Psi,\,F\Xi[F\Psi,\,D_\eta\Lambda_{{\mathcal{S}}_1{\mathcal{S}}_2}]]\,, \label{ss on Aeta}\end{aligned}$$ using two expressions in (\[Lambda ss\]). 
From (\[ss on Psi\]), (\[ss on Aeta\]) and (\[alg AIJ\]) we can conclude that the commutator of two space-time supersymmetry transformations satisfies the algebra $$[\delta_{{\mathcal{S}}_1},\delta_{{\mathcal{S}}_2}]\ \cong\ \delta_{p(v_{12})} + \delta_{g(\Lambda_{{\mathcal{S}}_1{\mathcal{S}}_2},\Omega_{{\mathcal{S}}_1{\mathcal{S}}_2})} + \delta_{\tilde{p}(v_{12})}\,, \label{susy alg2}$$ with the gauge parameters given in (\[Lambda ss\]) and (\[Omega IJ\]). The last term, absent in (\[susy alg\]), is a new symmetry defined by \[p tilde\] $$\begin{aligned} A_{\delta_{\tilde{p}(v)}}\ =&\ A_{p(v)} -f\xi_0\big(QA_{\tilde{p}(v)} + [F\Psi,\,F\Xi D_{\tilde{p}(v)}F\Psi]\big)\,, \label{p tilde A}\\ \delta_{\tilde{p}(v)}\,\Psi\ =&\ p(v)\Psi - X\eta F\Xi D_{\tilde{p}(v)}F\Psi\,, \label{p tilde Psi}\end{aligned}$$ where the former is determined so as to induce $$\begin{aligned} \delta_{\tilde{p}(v)}A_\eta\ =&\ D_\eta\Big( A_{p(v)}- f\xi_0\big(QA_{\tilde{p}(v)} + [F\Psi,\,F\Xi D_{\tilde{p}(v)}F\Psi]\big)\Big) \nonumber\\ \cong&\ p(v)A_\eta - QA_{\tilde{p}(v)} - [F\Psi,\,F\Xi D_{\tilde{p}(v)}F\Psi]\,.\end{aligned}$$ This extra contribution can be absorbed into the gauge transformation, up to the equations of motion, at the linearized level, as we will see shortly. Let us consider the transformation (\[p tilde\]) at the linearized level: \[translation tilde\] $$\begin{aligned} \delta_{\tilde{p}}^{(0)}\Phi\ =&\ p(v)\Phi - \xi_0 Q\tilde{p}(v)\Phi\ =\ \big(p(v)- X_0\tilde{p}(v)\big)\Phi +Q(\xi_0\tilde{p}(v)\Phi)\,, \label{trans tilde ns}\\ \delta_{\tilde{p}}^{(0)}\Psi\ =&\ \big(p(v)-X\tilde{p}(v)\big)\Psi\,. \label{trans tilde ramond}\end{aligned}$$ Thanks to (\[p tilde p\]), the transformation of $\Phi$ (\[trans tilde ns\]) takes the form of a gauge transformation up to the equation of motion at the linearized level: $$\delta_{\tilde{p}}^{(0)}\Phi\ =\ Q \big((M(v) + \xi_0\tilde{p}(v))\Phi\big) + \eta \big(\xi_0 M(v) Q\Phi\big) + \xi_0M(v)Q\eta\Phi\,.
\label{trans tilde ns 2}$$ We can similarly show that the transformation of $\Psi$ in (\[trans tilde ramond\]) can also be written as a gauge transformation up to the equation of motion at the linearized level, as shown in Appendix \[app B\]. Here we assume that the asymptotic condition[@Lehmann:1954rq] holds for string field theory as well as in conventional (particle) field theory. Then, at least perturbatively, the transformation (\[translation tilde\]), or equivalently (\[trans tilde ns 2\]) and (\[B4\]), can be interpreted, with appropriate (finite) renormalization, as a transformation of the asymptotic string fields. If we further assume asymptotic completeness, this implies that the extra transformation (\[translation tilde\]) acts trivially on the on-shell physical states defined by these asymptotic string fields, and thus on the physical S-matrix. Hence the supersymmetry algebra is realized on the physical S-matrix, and we can identify the transformation (\[complete transformation\]) with space-time supersymmetry. Extra unphysical symmetries {#extra symm} --------------------------- We have shown that the supersymmetry algebra is realized on the physical S-matrix, but this is not the end of the story. The extra transformation $\delta_{\tilde{p}}$ produces another extra transformation if we consider the nested commutator $[\delta_{{\mathcal{S}}_1},[\delta_{{\mathcal{S}}_2},\delta_{{\mathcal{S}}_3}]]$. The extra contribution comes from the commutator $[\delta_{\mathcal{S}},\delta_{\tilde{p}}]$, which is non-trivial because the first-quantized charges ${\mathcal{S}}$ and $\tilde{p}$ are not commutative: $[{\mathcal{S}},\tilde{p}]\ne0$.
In fact, we can show that the algebra $$[\delta_{\mathcal{S}},\, \delta_{\tilde{p}}]\ \cong\ \delta_g + \delta_{[{\mathcal{S}},\tilde{p}]}\,, \label{alg sp}$$ holds with the gauge parameters, $$\begin{aligned} \Lambda_{{\mathcal{S}}\tilde{p}}\ =&\ f\xi_0\big(D_{\tilde{p}}f\xi_0D_{\mathcal{S}}- D_{\mathcal{S}}F\Xi D_{\tilde{p}}\big) F\Psi - [F\Psi, F\Xi D_{\tilde{p}}F\Xi A_{\mathcal{S}}] \nonumber\\ &\ - [F\Xi A_{\mathcal{S}}, F\Xi D_{\tilde{p}}F\Psi] - D_{\tilde{p}}f\xi_0\{F\Psi, F\Xi A_{\mathcal{S}}\}\,,\\ \lambda_{{\mathcal{S}}\tilde{p}}\ =&\ X\eta F\Xi D_\eta D_{\tilde{p}} F\Xi A_{\mathcal{S}}\,,\end{aligned}$$ and $\Omega_{{\mathcal{S}}\tilde{p}}$ in (\[Omega IJ\]). The new transformation $\delta_{[{\mathcal{S}},\tilde{p}]}$ is defined by \[tf sp\] $$\begin{aligned} A_{\delta_{[{\mathcal{S}},\tilde{p}]}}\ =&\ f\xi_0\Big(Qf\xi_0D_{[{\mathcal{S}},\tilde{p}]}F\Psi + [F\Psi,\,F\Xi\big(QA_{[{\mathcal{S}},\tilde{p}]}+[F\Psi,\,f\xi_0D_{[{\mathcal{S}},\tilde{p}]}F\Psi]\big)]\Big)\,, \label{tf sp ns}\\ \delta_{[{\mathcal{S}},\tilde{p}]}\Psi\ =&\ X\eta F\Xi\big( QA_{[{\mathcal{S}},\tilde{p}]}+[F\Psi,\,f\xi_0D_{[{\mathcal{S}},\tilde{p}]}F\Psi]\big)\,, \label{tf sp ramond}\end{aligned}$$ where $[{\mathcal{S}},\tilde{p}]$ denotes the first-quantized charge defined by the commutator $[q^\alpha,\tilde{p}^\mu]$ with the parameter $\zeta_{\mu\alpha}$, $$[{\mathcal{S}},\tilde{p}]\ =\ \zeta_{\mu\alpha} [q^\alpha,\tilde{p}^\mu]\,,$$ and in particular $\zeta_{\mu\alpha}=\epsilon_\alpha v_\mu$ on the right-hand side of (\[alg sp\]). This new symmetry is also unphysical in a similar sense to $\delta_{\tilde{p}}$. 
At the linearized level, the transformation (\[tf sp\]) becomes[^6] $$\begin{aligned} \delta_{[{\mathcal{S}},\tilde{p}]}\Phi\ =&\ \xi_0 Q \xi_0 [{\mathcal{S}},\tilde{p}]\Psi\ =\ \xi_0 X_0[{\mathcal{S}},\tilde{p}]\Psi\,, \label{tf sp ns at linear}\\ \delta_{[{\mathcal{S}},\tilde{p}]}\Psi\ =&\ X\eta\Xi Q[{\mathcal{S}},\tilde{p}]\Phi\ \cong\ XQ[{\mathcal{S}},\tilde{p}]\Phi\,, \label{tf sp ramond at linear} \end{aligned}$$ where we have used the fact that ${\mathcal{S}}$, $\tilde{p}$, and thus $[{\mathcal{S}}, \tilde{p}]$ are commutative with $Q$ and $\eta$. If we note that $[{\mathcal{S}},p]=0$ and $$[{\mathcal{S}},X_0]\ =\ [{\mathcal{S}},\{Q,\xi_0\}]\ =\ \{Q,[{\mathcal{S}},\xi_0]\}+\{\xi_0,[{\mathcal{S}},Q]\}\ =\ 0\,,$$ the transformation of $\Phi$, (\[tf sp ns\]), can further be rewritten in the form of a linearized gauge transformation: $$\begin{aligned} \delta_{[{\mathcal{S}},\tilde{p}]}\Phi\ =&\ - \xi_0[{\mathcal{S}}, (p-X_0\tilde{p})]\,\Psi\ =\ -\xi_0[{\mathcal{S}}, \{Q, M\}]\,\Psi \nonumber\\ \cong&\ -\xi_0 Q[{\mathcal{S}}, M]\,\Psi \nonumber\\ =&\ Q(\xi_0[{\mathcal{S}}, M]\Psi)-\eta(\xi_0X_0[{\mathcal{S}},M]\Psi)\,.\end{aligned}$$ Similarly the transformation of $\Psi$ (\[tf sp ramond\]) can also be written as $$\begin{aligned} \delta_{[{\mathcal{S}},\tilde{p}]}\Psi\ \cong&\ X\eta\xi_0Q[{\mathcal{S}},\tilde{p}]\Phi \nonumber\\ =&\ Q(X\eta\xi_0[{\mathcal{S}},\tilde{p}]\Phi) + X\eta X_0[{\mathcal{S}},\tilde{p}]\Phi \nonumber\\ \cong&\ Q(X\eta\xi_0[{\mathcal{S}},\tilde{p}]\Phi + X\eta[{\mathcal{S}},M]\Phi)\,. $$ It should be noted that the gauge parameter in this form, $\lambda_{{\mathcal{S}}\tilde{p}}=X\eta\xi_0[{\mathcal{S}},\tilde{p}]\Phi + X\eta[{\mathcal{S}},M]\Phi$, is in the restricted small Hilbert space: $\eta\lambda_{{\mathcal{S}}\tilde{p}}=0$ and $XY\lambda_{{\mathcal{S}}\tilde{p}}=\lambda_{{\mathcal{S}}\tilde{p}}$. 
In addition, a further extra transformation is produced by considering the commutator between $\delta_{\tilde{p}_1}$ and $\delta_{\tilde{p}_2}$, and this sequence of extra transformations does not terminate as long as the nested commutators, $[{\mathcal{O}},[{\mathcal{O}},{\mathcal{O}}]]$, $[{\mathcal{O}},[{\mathcal{O}},[{\mathcal{O}},{\mathcal{O}}]]]$, $\cdots$, with $\mathcal{O}=$ ${\mathcal{S}}$ or $\tilde{p}$, do not vanish. This complicates the structure of the algebra, but we can similarly show that all of these extra transformations act trivially on the physical S-matrix, as shown in Appendix \[app B\]. Summary and discussion ====================== In this paper, we have explicitly constructed a space-time supersymmetry transformation of the WZW-like open superstring field theory in flat ten-dimensional space-time. Under the GSO projections, we have extended a linear transformation expected from space-time supersymmetry in the first-quantized theory to a nonlinear transformation so as to be a symmetry of the complete action (\[complete action\]). We have also shown that the transformation satisfies the supersymmetry algebra up to gauge transformation, the equations of motion, and a transformation $\delta_{\tilde{p}}$ acting trivially on the asymptotic physical states defined by the asymptotic string fields. This unphysical transformation produces a series of transformations $\delta_{[{\mathcal{S}},\tilde{p}]},\, \delta_{[\tilde{p},\tilde{p}]},\,\cdots$ by taking commutators with $\delta_{\mathcal{S}}$ or $\delta_{\tilde{p}}$ repeatedly. All of these symmetries also act trivially on the asymptotic physical states, and thus are unphysical, but it would be interesting to clarify their complete structure, which is nontrivial in the total Hilbert space including unphysical degrees of freedom.
In any case, except for such unphysical complications, we have now understood how space-time supersymmetry is realized in superstring field theory, and therefore are ready to study various consequences of space-time supersymmetry[@Kishimoto:2005bs] on a firm basis. We have to (re)analyze them precisely using the techniques developed in conventional quantum field theory.[^7] We hope to report on them in the near future. Acknowledgements {#acknowledgements .unnumbered} ================ The author would like to give special thanks to Ted Erler for helpful discussions, which were essential for clarifying the structure of the supersymmetry algebra in the large Hilbert space. The main part of the work was completed at the workshop on “String Field Theory and Related Aspects VIII” held at ICTP, SAIFR in São Paulo, Brazil. The author also thanks the organizers, particularly Nathan Berkovits, for their hospitality and for providing a stimulating atmosphere. Spinor conventions and Ramond ground states {#convention} =========================================== In this paper, although it is left mostly implicit, we adopt the chiral representation for the $SO(1,9)$ gamma matrices $\Gamma^\mu$, in which $\Gamma^\mu$ is given by $$\Gamma^\mu\ =\ \begin{pmatrix} 0 & (\gamma^\mu)_{\alpha\dot{\beta}}\\ (\bar{\gamma}^\mu)^{\dot{\alpha}\beta} & 0 \end{pmatrix}\,,$$ where $\gamma^\mu$ and $\bar{\gamma}^\mu$ satisfy $${(\gamma^\mu\bar{\gamma}^\nu + \gamma^\nu\bar{\gamma}^\mu)_\alpha}^\beta =\ 2\eta^{\mu\nu}{\delta_\alpha}^\beta\,,\qquad {(\bar{\gamma}^\mu\gamma^\nu + \bar{\gamma}^\nu\gamma^\mu)^{\dot{\alpha}}}_{\dot{\beta}} =\ 2\eta^{\mu\nu}{\delta^{\dot{\alpha}}}_{\dot{\beta}}\,.
$$ The charge conjugation matrix $\mathcal{C}$ satisfies the relations $$(\Gamma^\mu)^T\ =\ -\mathcal{C}\Gamma^\mu\mathcal{C}^{-1},\qquad \mathcal{C}^T\ =\ -\mathcal{C}\,,$$ and is given in the chiral representation by $$\mathcal{C}\ =\ \begin{pmatrix} 0 & {C^\alpha}_{\dot{\beta}}\\ -{(C^T)_{\dot{\alpha}}}^\beta & 0 \end{pmatrix}\,.$$ The matrices $\mathcal{C}\Gamma^\mu$ are symmetric, or equivalently $$(C\bar{\gamma}^\mu)^{\alpha\beta}\ =\ (C\bar{\gamma}^\mu)^{\beta\alpha}\,,\qquad (C^T\gamma^\mu)_{\dot{\alpha}\dot{\beta}}\ =\ (C^T\gamma^\mu)_{\dot{\beta}\dot{\alpha}}\,.$$ The world-sheet fermion $\psi^\mu(z)$ in the Ramond sector has zero-modes that satisfy the $SO(1,9)$ Clifford algebra $$\{\psi^\mu_0, \psi^\nu_0\}\ =\ \eta^{\mu\nu}\,.$$ The degenerate ground states therefore form a space-time spinor, on which $\psi^\mu_0$ act as space-time gamma matrices. We summarize the related conventions here. We denote the ground state spinor as $\left(\begin{matrix}|{}^\alpha\rangle\\ |{}_{\dot{\alpha}}\rangle\end{matrix} \right)$, on which $\psi^\mu_0$ acts as $$\psi^\mu_0|{}^\alpha\rangle\ =\ |{}_{\dot{\alpha}}\rangle\frac{1}{\sqrt{2}}(\bar{\gamma}^\mu)^{\dot{\alpha}\alpha}\,,\qquad \psi^\mu_0|{}_{\dot{\alpha}}\rangle\ =\ |{}^\alpha\rangle\frac{1}{\sqrt{2}}(\gamma^\mu)_{\alpha\dot{\alpha}}\,.$$ Then $\hat{\Gamma}_{11}$ defined by (\[gamma11\]) acts on the ground states as $$\hat{\Gamma}_{11}|{}^\alpha\rangle\ =\ |{}^\alpha\rangle\,,\qquad \hat{\Gamma}_{11}|{}_{\dot{\alpha}}\rangle\ =\ -|{}_{\dot{\alpha}}\rangle\,,$$ by which the definition of the GSO projection (\[GSO Ramond\]) is supplemented.
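The defining relation for the chiral blocks, $\gamma^\mu\bar{\gamma}^\nu+\gamma^\nu\bar{\gamma}^\mu=2\eta^{\mu\nu}$, can be checked explicitly in a lower-dimensional analog. The snippet below verifies the $SO(1,3)$ version with Pauli-matrix blocks; the mostly-plus metric and the particular blocks are our illustrative choices, and the $SO(1,9)$ case is structurally identical with $16\times16$ blocks:

```python
import numpy as np

# Pauli matrices and the 2x2 identity.
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

# Chiral (Weyl) blocks gamma^mu, gammabar^mu for SO(1,3),
# with the mostly-plus metric eta = diag(-1, +1, +1, +1).
gamma = [I2, s1, s2, s3]
gammabar = [-I2, s1, s2, s3]
eta = np.diag([-1.0, 1.0, 1.0, 1.0])

# Check gamma^mu gammabar^nu + gamma^nu gammabar^mu = 2 eta^{mu nu} 1.
for mu in range(4):
    for nu in range(4):
        lhs = gamma[mu] @ gammabar[nu] + gamma[nu] @ gammabar[mu]
        assert np.allclose(lhs, 2 * eta[mu, nu] * I2)
print("chiral Clifford relations verified")
```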
Similarly, the BPZ conjugate of the ground state spinor $(\langle{}^\alpha|,\langle{}_{\dot{\alpha}}|)$ satisfies $$\langle{}^\alpha|\psi^\mu_0\ =\ \frac{i}{\sqrt{2}}(\bar{\gamma}^\mu)^{\dot{\alpha}\alpha}\langle{}_{\dot{\alpha}}|\,,\qquad \langle{}_{\dot{\alpha}}|\psi^\mu_0\ =\ - \frac{i}{\sqrt{2}}(\gamma^\mu)_{\alpha\dot{\alpha}}\langle{}^\alpha|\,,$$ with the normalization $$\langle{}^\alpha|{}_{\dot{\alpha}}\rangle\ =\ {C^\alpha}_{\dot{\alpha}}\,,\qquad \langle{}_{\dot{\alpha}}|{}^\alpha\rangle\ =\ -i{C^\alpha}_{\dot{\alpha}}\,.$$ The nontrivial matrix elements of $\psi^\mu_0$ are then given by $$\langle{}^\alpha|\psi^\mu_0|{}^\beta\rangle\ =\ \frac{1}{\sqrt{2}}(C\bar{\gamma}^\mu)^{\alpha\beta}\,,\qquad \langle{}_{\dot{\alpha}}|\psi^\mu_0|{}_{\dot{\beta}}\rangle\ =\ -\frac{i}{\sqrt{2}}(C^T\gamma^\mu)_{\dot{\alpha}\dot{\beta}}\,.$$ Triviality of the extra unphysical symmetries at the linearized level {#app B} ===================================================================== First, in order to show the triviality of (\[trans tilde ramond\]), it is useful to introduce the local inverse picture-changing operator $$Y(z_0)\ =\ -c(z_0)\delta'(\gamma(z_0))\,,$$ which also satisfies $$XY(z_0)X\ =\ X\,,\label{xyx local}$$ and in addition is commutative with $Q$: $[Q, Y(z_0)]=0$. The point $z_0$ can be chosen to be any point on the string, for example, the midpoint $z_0=i$. 
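Incidentally, the relation (\[xyx local\]) already guarantees that $XY(z_0)$ squares to itself, a one-line consequence worth making explicit:

```latex
\big(XY(z_0)\big)^2\ =\ \big(XY(z_0)X\big)Y(z_0)\ =\ XY(z_0)\,.
```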
Due to (\[xyx local\]), we can define another projection operator $XY(z_0)$ that is commutative with $Q$, and acts identically to $XY$ in the restricted small Hilbert space: $$[Q, XY(z_0)]\ =\ 0\,, $$ and if $XY\Psi=\Psi$ then $$XY(z_0)\Psi\ =\ XY(z_0)XY\Psi\ =\ XY\Psi\,.$$ Using this projection operator, the linearized transformation (\[trans tilde ramond\]) can be written as a linearized gauge transformation, $$\begin{aligned} \delta_{\tilde{p}}^{(0)}\Psi\ =&\ XY(z_0)\left(p(v) - X \tilde{p}(v)\right)\Psi\ =\ XY(z_0)\{Q, \tilde{M}(v)\}\,\Psi\ \nonumber\\ \cong&\ Q(XY(z_0)\tilde{M}(v)\Psi)\,, \label{B4}\end{aligned}$$ up to the linearized equation of motion, $Q\Psi=0$, with $$\tilde{M}(v)\ =\ v^\mu \oint\frac{dz}{2\pi i}(\xi(z)-\Xi)\psi_\mu(z)e^{-\phi(z)}\,.$$ We can see that the gauge parameter in (\[B4\]), $$\lambda_{\tilde{p}}\ =\ XY(z_0)\tilde{M}(v)\Psi\,,$$ is in the restricted small Hilbert space, $$\eta \lambda_{\tilde{p}}\ =\ 0\,,\qquad XY \lambda_{\tilde{p}}\ =\ \lambda_{\tilde{p}}\,,$$ if we note that $\{\eta, \tilde{M}\}=0$.
As was mentioned in Section \[extra symm\], the commutator $[\delta_{\tilde{p}_1},\delta_{\tilde{p}_2}]$ produces another unphysical transformation $\delta_{[\tilde{p},\tilde{p}]}$: $$[\delta_{\tilde{p}_1},\, \delta_{\tilde{p}_2}]\ \cong\ \delta_g + \delta_{[\tilde{p},\tilde{p}]_{12}}\,, \label{alg pp}$$ where the field-dependent parameters are given by $$\begin{aligned} \Lambda_{\tilde{p}_1\tilde{p}_2}\ =&\ f\xi_0\Big((D_{\tilde{p}_1}f\xi_0 D_{\tilde{p}_2} - D_{\tilde{p}_2}f\xi_0 D_{\tilde{p}_1})A_Q +D_{\tilde{p}_1}f\xi_0[F\Psi, F\Xi D_{\tilde{p}_2}F\Psi] \nonumber\\ &\ -D_{\tilde{p}_2}f\xi_0[F\Psi, F\Xi D_{\tilde{p}_1}F\Psi] +\{F\Psi, F\Xi(D_{\tilde{p}_1}F\Xi D_{\tilde{p}_2} - D_{\tilde{p}_2}F\Xi D_{\tilde{p}_1})F\Psi\} \nonumber\\ &\ -[F\Xi D_{\tilde{p}_1}F\Psi, F\Xi D_{\tilde{p}_2}F\Psi]\Big)\,,\\ \lambda_{\tilde{p}_1\tilde{p}_2}\ =&\ -X\eta F\Xi(D_{\tilde{p}_1}F\Xi D_{\tilde{p}_2} - D_{\tilde{p}_2}F\Xi D_{\tilde{p}_1})F\Psi\,,\end{aligned}$$ and $\Omega_{\tilde{p}_1\tilde{p}_2}$ in (\[Omega IJ\]). The unphysical transformation $\delta_{[\tilde{p},\tilde{p}]}$ is defined by \[tf pp\] $$\begin{aligned} A_{\delta_{[\tilde{p},\tilde{p}]}}\ =&\ - f\xi_0\Bigg(Qf\xi_0\big( QA_{[\tilde{p},\tilde{p}]}+[F\Psi,\,F\Xi D_{[\tilde{p},\tilde{p}]}F\Psi]\big) \nonumber\\ &\ + [F\Psi,\,F\Xi\Big( QF\Xi D_{[\tilde{p},\tilde{p}]}F\Psi +[F\Psi,f\xi_0\big( QA_{[\tilde{p},\tilde{p}]}+[F\Psi, F\Xi D_{[\tilde{p},\tilde{p}]}F\Psi]\big)] \Big)]\Bigg)\,, \label{tf pp ns}\\ \delta_{[\tilde{p},\tilde{p}]}\Psi\ =&\ -X\eta F\Xi\Big( QF\Xi D_{[\tilde{p},\tilde{p}]}F\Psi +[F\Psi,\,f\xi_0\big( QA_{[\tilde{p},\tilde{p}]}+[F\Psi, \,F\Xi D_{[\tilde{p},\tilde{p}]}F\Psi]\big)] \Big)\,.
\label{tf pp ramond}\end{aligned}$$ The first-quantized charge $[\tilde{p},\tilde{p}]$ is defined by $$[\tilde{p},\tilde{p}]\ =\ w_{\mu\nu}[\tilde{p}^\mu,\tilde{p}^\nu]\,,$$ with the parameter $w_{\mu\nu}\,(=-w_{\nu\mu})$, and $[\tilde{p},\tilde{p}]_{12} =[\tilde{p},\tilde{p}](w_{12}=(v_1v_2-v_2v_1)/2)$ in (\[alg pp\]). At the linearized level, the transformation (\[tf pp\]) becomes $$\begin{aligned} \delta_{[\tilde{p},\tilde{p}]}\ \Phi\ =&\ -\xi_0Q\xi_0Q[\tilde{p},\tilde{p}]\Phi\ =\ -\xi_0QX_0[\tilde{p},\tilde{p}]\Phi\,,\\ \delta_{[\tilde{p},\tilde{p}]}\ \Psi\ =&\ -X\eta\Xi Q\Xi[\tilde{p},\tilde{p}]\Psi\ =\ -X\eta\Xi X[\tilde{p},\tilde{p}]\Psi\,, \end{aligned}$$ and can further be rewritten in the form of a linearized gauge transformation: $$\begin{aligned} \delta_{[\tilde{p},\tilde{p}]}\ \Phi\ =&\ \xi_0 Q [\tilde{p},\{Q, M\}]\Phi\ =\ \xi_0 Q [\tilde{p}, M]Q\Phi \nonumber\\ \cong&\ -Q(\xi_0[\tilde{p}, M]Q\Phi) + \eta(\xi_0X_0[\tilde{p}, M]Q\Phi)\,,\end{aligned}$$ and $$\begin{aligned} \delta_{[\tilde{p},\tilde{p}]}\ \Psi\ =&\ X\eta\Xi[\tilde{p},\{Q, \tilde{M}\}]\Psi\ \cong\ X\eta\Xi Q[\tilde{p}, \tilde{M}]\Psi \nonumber\\ =&\ Q(X\eta\Xi[\tilde{p},\tilde{M}]\Psi)\,,\end{aligned}$$ up to the linearized equations of motion. The parameter $\lambda_{\tilde{p}\tilde{p}}=X\eta\Xi[\tilde{p},\tilde{M}]\Psi$ is in the restricted small Hilbert space: $\eta\lambda_{\tilde{p}\tilde{p}}=0$ and $XY\lambda_{\tilde{p}\tilde{p}}=\lambda_{\tilde{p}\tilde{p}}$. Finally we show that all the extra symmetries obtained from the repeated commutators of $\delta_{\mathcal{S}}$’s and $\delta_{\tilde{p}}$’s act trivially on the physical states defined by the asymptotic string fields. For this purpose, it is enough to consider the transformations of $\eta\Phi$ and $\Psi$ at the linearized level for a similar reason to that discussed in Section \[sec algebra\]. 
Using the linearized form of (\[large small\]) for general variation, $$\delta\Phi\ =\ \xi_0\delta\eta\Phi + \eta(\xi_0\delta\Phi)\,,$$ we can show that if the transformation of $\eta\Phi$ has the form of a gauge transformation, $\delta\eta\Phi= - Q\eta\Lambda$, with some field-dependent parameter $\Lambda$, then the transformation of $\Phi$ also has the form of a gauge transformation: $$\begin{aligned} \delta\Phi\ =&\ - \xi_0 Q\eta\Lambda + \eta\Omega \nonumber\\ =&\ Q\Lambda + \eta(\Omega-\xi_0Q\Lambda)\,, \end{aligned}$$ with some field-dependent $\Omega$. Starting from the linearized transformations $$\begin{aligned} {3} \delta_{\mathcal{S}}\eta\Phi\ =&\ {\mathcal{S}}\Psi\,,\qquad & \delta_{\mathcal{S}}\Psi\ =&\ X{\mathcal{S}}\eta\Phi\,,\\ \delta_{\tilde{p}}\eta\Phi\ =&\ (p-X_0\tilde{p})\eta\Phi\,,\qquad& \delta_{\tilde{p}}\Psi\ =&\ (p-X\tilde{p})\Psi\,,\end{aligned}$$ extra symmetries can be read from repeated commutators, $[\delta_{\mathcal{O}_1},[\delta_{\mathcal{O}_2},\cdots, [\delta_{\mathcal{O}_n},\delta_{\tilde{p}}]\cdots]]$, where $\mathcal{O}_i={\mathcal{S}}$ or $\tilde{p}$. 
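The operator algebra underlying this decomposition, $Q^2=\eta^2=0$, $\{Q,\eta\}=0$, $\{\eta,\xi_0\}=1$, and the resulting completeness relation $\Phi=\xi_0(\eta\Phi)+\eta(\xi_0\Phi)$, can be illustrated in a finite-dimensional toy model with two fermionic modes in a Jordan–Wigner representation. The matrices below are stand-ins of our own choosing, not the actual worldsheet operators:

```python
import numpy as np

# Pauli raising/lowering matrices
sm = np.array([[0, 0], [1, 0]], dtype=complex)   # sigma^-
sp = np.array([[0, 1], [0, 0]], dtype=complex)   # sigma^+
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

# Two fermionic modes via Jordan-Wigner; toy stand-ins for Q, eta, xi0.
Q   = np.kron(sm, I2)   # nilpotent: Q^2 = 0
eta = np.kron(sz, sm)   # nilpotent, anticommutes with Q
xi0 = np.kron(sz, sp)   # {eta, xi0} = 1

anti = lambda a, b: a @ b + b @ a
Z4 = np.zeros((4, 4))
assert np.allclose(Q @ Q, Z4) and np.allclose(eta @ eta, Z4)
assert np.allclose(anti(Q, eta), Z4)
assert np.allclose(anti(eta, xi0), np.eye(4))

# Decomposition Phi = xi0 (eta Phi) + eta (xi0 Phi) for any state Phi.
rng = np.random.default_rng(1)
Phi = rng.standard_normal(4) + 1j * rng.standard_normal(4)
assert np.allclose(xi0 @ (eta @ Phi) + eta @ (xi0 @ Phi), Phi)
print("toy Q/eta/xi0 algebra verified")
```

The decomposition assertion is nothing but $\{\eta,\xi_0\}=1$ applied to $\Phi$, which is exactly how the linearized relation above splits a variation into its $\xi_0\delta\eta\Phi$ and gauge parts.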
For example, we can read $\delta_{[{\mathcal{S}},\tilde{p}]}$ from $[\delta_{\mathcal{S}},\delta_{\tilde{p}}]$, $$\begin{aligned} [\delta_{\mathcal{S}}, \delta_{\tilde{p}}]\,\eta\Phi\ =&\ (p-X_0\tilde{p}){\mathcal{S}}\Psi - {\mathcal{S}}(p-X\tilde{p})\Psi \nonumber\\ =&\ - X_0\tilde{p}{\mathcal{S}}\Psi + {\mathcal{S}}X\tilde{p}\Psi \nonumber\\ \cong&\ -X_0\tilde{p}{\mathcal{S}}\Psi + Q{\mathcal{S}}\{\xi_0,\eta\}\Xi\tilde{p}\Psi \nonumber\\ =&\ [{\mathcal{S}}, X_0\tilde{p}]\Psi + Q\eta({\mathcal{S}}\xi_0\Xi\tilde{p}\Psi) \nonumber\\ =&\ - [{\mathcal{S}}, (p-X_0\tilde{p})]\Psi + Q\eta({\mathcal{S}}\xi_0\Xi\tilde{p}\Psi)\,, \label{sp on etaphi}\end{aligned}$$ and $$\begin{aligned} [\delta_{\mathcal{S}}, \delta_{\tilde{p}}]\,\Psi\ =&\ (p-X\tilde{p})X{\mathcal{S}}\eta\Phi - X{\mathcal{S}}(p-X_0\tilde{p})\eta\Phi \nonumber\\ =&\ -X\tilde{p}X{\mathcal{S}}\eta\Phi + X{\mathcal{S}}X_0\tilde{p}\eta\Phi \nonumber\\ \cong&\ -QX\{\xi_0,\eta\}\tilde{p}\Xi{\mathcal{S}}\eta\Phi + X{\mathcal{S}}X_0\tilde{p}\eta\Phi \nonumber\\ =&\ X[{\mathcal{S}}, X_0\tilde{p}]\eta\Phi - Q\eta(X\xi_0\tilde{p}\Xi{\mathcal{S}}\eta\Phi) \nonumber\\ =&\ -X[{\mathcal{S}}, (p-X_0\tilde{p})]\eta\Phi - Q\eta(X\xi_0\tilde{p}\Xi{\mathcal{S}}\eta\Phi)\,, \label{sp on psi}\end{aligned}$$ as $$\begin{aligned} \delta_{[{\mathcal{S}},\tilde{p}]}\eta\Phi\ =&\ - [{\mathcal{S}}, (p-X_0\tilde{p})]\Psi\,,\\ \delta_{[{\mathcal{S}},\tilde{p}]}\Psi\ =&\ - X[{\mathcal{S}}, (p-X_0\tilde{p})]\eta\Phi\,,\end{aligned}$$ up to the equations of motion and gauge transformation. 
Similarly we can find that general extra symmetries have the form \[general odd\] $$\begin{aligned} \delta_{[\mathcal{O}_1,[\mathcal{O}_2,\cdots,[\mathcal{O}_{2k+l-1},\tilde{p}]]]}\,\eta\Phi\ =&\ - (-1)^l(X_0)^{k+l-1}[\mathcal{O}_1,[\mathcal{O}_2,\cdots,[\mathcal{O}_{2k+l-1},(p-X_0\tilde{p})]]]\,\Psi\,,\\ \delta_{[\mathcal{O}_1,[\mathcal{O}_2,\cdots,[\mathcal{O}_{2k+l-1},\tilde{p}]]]}\,\Psi\ =&\ - (-1)^l(X)^{k+l}[\mathcal{O}_1,[\mathcal{O}_2,\cdots,[\mathcal{O}_{2k+l-1},(p-X_0\tilde{p})]]]\,\eta\Phi\,,\end{aligned}$$ or \[general even\] $$\begin{aligned} \delta_{[\mathcal{O}_1,[\mathcal{O}_2,\cdots,[\mathcal{O}_{2k+l},\tilde{p}]]]}\,\eta\Phi\ =&\ (-1)^l(X_0)^{k+l}[\mathcal{O}_1,[\mathcal{O}_2,\cdots,[\mathcal{O}_{2k+l},(p-X_0\tilde{p})]]]\,\eta\Phi\,,\\ \delta_{[\mathcal{O}_1,[\mathcal{O}_2,\cdots,[\mathcal{O}_{2k+l},\tilde{p}]]]}\,\Psi\ =&\ (-1)^l(X)^{k+l}[\mathcal{O}_1,[\mathcal{O}_2,\cdots,[\mathcal{O}_{2k+l},(p-X_0\tilde{p})]]]\,\Psi\,,\end{aligned}$$ with $k=1,2,\cdots$ and $l=0,1,\cdots$, up to the equations of motion and gauge transformation. Here $2k-1$ $(l)$ of the $\mathcal{O}$’s are ${\mathcal{S}}$ ($\tilde{p}$) in (\[general odd\]) and $2k$ $(l)$ of the $\mathcal{O}$’s are ${\mathcal{S}}$ ($\tilde{p}$) in (\[general even\]). All the picture-changing operators, except for the last one, can be collected in front of the right-hand side, aligned as powers of $X_0$ or $X$, which is always possible in a manner similar to (\[sp on etaphi\]) or (\[sp on psi\]).
If an $X$ is in front of some $\mathcal{O}_{i_0}$, we can move it to the top, for example, $$\begin{aligned} & (X_0)^p[\mathcal{O}_1,[\mathcal{O}_2,\cdots, [X\mathcal{O}_{i_0},\cdots,[\mathcal{O}_n,(p-X_0\tilde{p})]]]]\eta\Phi \nonumber\\ &\hspace{20mm} \cong\ Q\{\xi_0,\eta\}(X_0)^p[\mathcal{O}_1,[\mathcal{O}_2,\cdots, [\Xi\mathcal{O}_{i_0},\cdots,[\mathcal{O}_n,(p-X_0\tilde{p})]]]]\eta\Phi\ \nonumber\\ &\hspace{20mm} =\ Q\xi_0(X_0)^p[\mathcal{O}_1,[\mathcal{O}_2,\cdots, [\mathcal{O}_{i_0},\cdots,[\mathcal{O}_n,(p-X_0\tilde{p})]]]]\eta\Phi\ \nonumber\\ &\hspace{25mm} +Q\eta(\xi_0(X_0)^p[\mathcal{O}_1,[\mathcal{O}_2,\cdots, [\mathcal{O}_{i_0},\cdots,[\mathcal{O}_n,(p-X_0\tilde{p})]]]]\eta\Phi)\,. \nonumber\\ &\hspace{20mm} \cong\ (X_0)^{p+1}[\mathcal{O}_1,[\mathcal{O}_2,\cdots, [\mathcal{O}_{i_0},\cdots,[\mathcal{O}_n,(p-X_0\tilde{p})]]]]\eta\Phi\ \nonumber\\ &\hspace{25mm} +Q\eta(\xi_0(X_0)^p[\mathcal{O}_1,[\mathcal{O}_2,\cdots, [\mathcal{O}_{i_0},\cdots,[\mathcal{O}_n,(p-X_0\tilde{p})]]]]\eta\Phi)\,.\end{aligned}$$ Using (\[p tilde p\]), it is easy to show that the transformations (\[general odd\]) or (\[general even\]) can further be written in the form of a gauge transformation as $$\begin{aligned} \delta_{[\mathcal{O}_1,[\mathcal{O}_2,\cdots,[\mathcal{O}_{2k+l-1},\tilde{p}]]]}\eta\Phi \cong& -(-1)^l Q\eta( (X_0)^{k+l-1}\xi_0[\mathcal{O}_1,[\mathcal{O}_2,\cdots,[\mathcal{O}_{2k+l-1}, M]\cdots]]\Psi),\\ \delta_{[\mathcal{O}_1,[\mathcal{O}_2,\cdots,[\mathcal{O}_{2k+l-1},\tilde{p}]]]}\,\Psi \cong& -(-1)^l Q((X)^{k+l}\eta\xi_0[\mathcal{O}_1,[\mathcal{O}_2,\cdots,[\mathcal{O}_{2k+l-1}, M]\cdots]]\,\eta\Phi)\,,\end{aligned}$$ or $$\begin{aligned} \delta_{[\mathcal{O}_1,[\mathcal{O}_2,\cdots,[\mathcal{O}_{2k+l},\tilde{p}]]]}\,\eta\Phi\ \cong&\ (-1)^l Q \eta ((X_0)^{k+l}\xi_0[\mathcal{O}_1,[\mathcal{O}_2,\cdots,[\mathcal{O}_{2k+l}, M]\cdots]]\,\eta\Phi)\,,\\ \delta_{[\mathcal{O}_1,[\mathcal{O}_2,\cdots,[\mathcal{O}_{2k+l},\tilde{p}]]]}\,\Psi\ \cong&\ (-1)^l Q ( 
(X)^{k+l}\eta\xi_0[\mathcal{O}_1,[\mathcal{O}_2,\cdots,[\mathcal{O}_{2k+l}, M]\cdots]]\,\Psi)\,,\end{aligned}$$ respectively. Hence all the extra symmetries obtained as repeated commutators of $\delta_{\mathcal{S}}$’s and $\delta_{\tilde{p}}$’s act trivially on the on-shell physical states, and thus the physical S-matrix, defined by the asymptotic string fields. [99]{} N. Berkovits, “SuperPoincare invariant superstring field theory,” Nucl. Phys. B [**450**]{} (1995) 90 \[Erratum-ibid. B [**459**]{} (1996) 439\] \[hep-th/9503099\]. T. Erler, S. Konopka and I. Sachs, “Resolving Witten’s superstring field theory,” JHEP [**1404**]{}, 150 (2014) doi:10.1007/JHEP04(2014)150 \[arXiv:1312.2948 \[hep-th\]\]. H. Kunitomo and Y. Okawa, “Complete action for open superstring field theory,” PTEP [**2016**]{}, no. 2, 023B01 (2016) doi:10.1093/ptep/ptv189 \[arXiv:1508.00366 \[hep-th\]\]. T. Erler, Y. Okawa and T. Takezaki, “Complete Action for Open Superstring Field Theory with Cyclic $A_\infty$ Structure,” arXiv:1602.02582 \[hep-th\]. T. Erler, “Supersymmetry in Open Superstring Field Theory,” arXiv:1610.03251 \[hep-th\]. D. Friedan, E. J. Martinec and S. H. Shenker, “Conformal Invariance, Supersymmetry and String Theory,” Nucl. Phys. B [**271**]{} (1986) 93. E. Witten, “Interacting Field Theory of Open Superstrings,” Nucl. Phys. B [**276**]{}, 291 (1986). doi:10.1016/0550-3213(86)90298-1 H. Terao and S. Uehara, “Covariant Second Quantization of Free Superstring,” Phys. Lett. B [**168**]{}, 70 (1986). doi:10.1016/0370-2693(86)91462-0 F. Gliozzi, J. Scherk and D. I. Olive, “Supersymmetry, Supergravity Theories and the Dual Spinor Model,” Nucl. Phys. B [**122**]{}, 253 (1977). doi:10.1016/0550-3213(77)90206-1 H. Lehmann, K. Symanzik and W. Zimmermann, Nuovo Cim.  [**1**]{}, 205 (1955). doi:10.1007/BF02731765 I. Kishimoto and T. 
Takahashi, “Marginal deformations and classical solutions in open superstring field theory,” JHEP [**0511**]{}, 051 (2005) doi:10.1088/1126-6708/2005/11/051 \[hep-th/0506240\]. T. Erler, “Analytic solution for tachyon condensation in Berkovits open superstring field theory,” JHEP [**1311**]{}, 007 (2013) doi:10.1007/JHEP11(2013)007 \[arXiv:1308.4400 \[hep-th\]\]. N. Berkovits and E. Witten, “Supersymmetry Breaking Effects using the Pure Spinor Formalism of the Superstring,” JHEP [**1406**]{}, 127 (2014) doi:10.1007/JHEP06(2014)127 \[arXiv:1404.5346 \[hep-th\]\]. A. Sen, “Supersymmetry Restoration in Superstring Perturbation Theory,” JHEP [**1512**]{}, 075 (2015) doi:10.1007/JHEP12(2015)075 \[arXiv:1508.02481 \[hep-th\]\]. R. Pius and A. Sen, “Cutkosky Rules for Superstring Field Theory,” JHEP [**1610**]{}, 024 (2016) doi:10.1007/JHEP10(2016)024 \[arXiv:1604.01783 \[hep-th\]\]. A. Sen, “Reality of Superstring Field Theory Action,” JHEP [**1611**]{}, 014 (2016) doi:10.1007/JHEP11(2016)014 \[arXiv:1606.03455 \[hep-th\]\]. A. Sen, “Unitarity of Superstring Field Theory,” arXiv:1607.08244 \[hep-th\]. A. Sen, “Wilsonian Effective Action of Superstring Theory,” arXiv:1609.00459 \[hep-th\]. A. Sen, “Equivalence of Two Contour Prescriptions in Superstring Perturbation Theory,” arXiv:1610.00443 \[hep-th\]. N. Ishibashi, “Light-cone gauge superstring field theory in linear dilaton background,” arXiv:1605.04666 \[hep-th\]. N. Ishibashi and K. Murakami, “Multiloop Amplitudes of Light-cone Gauge NSR String Field Theory in Noncritical Dimensions,” arXiv:1611.06340 \[hep-th\]. [^1]: E-mail:  [kunitomo@yukawa.kyoto-u.ac.jp]{} [^2]: Space-time supersymmetry in the homotopy-algebra-based formulation has recently been studied by Erler.[@Erler:2016rxg] [^3]: We further assume asymptotic completeness in this paper. [^4]: This BRST-invariant GSO projection and that for the Ramond sector to be introduced shortly were first given in Ref. . 
The operators $G_{NS}$ and $G_R$ are none other than world-sheet fermion number operators in the total Hilbert space including the ghost sectors. [^5]: In the context of string field theory, the GSO projections are also needed to make the Grassmann properties of string fields $\Phi$ and $\Psi$ consistent with those of the coefficient space-time fields. [^6]: In this subsection, the symbol $\cong$ denotes an equation that holds up to the linearized equations of motion, $Q\eta\Phi=Q\Psi=0$. [^7]: For such analyses of superstring field theory, see, for example, Refs. -.
--- abstract: 'The near-infrared spectral region is becoming a very useful wavelength range to detect and quantify the stellar population of galaxies. Models are being developed to predict the contribution of TP-AGB stars, which should dominate the NIR spectra of populations 0.3 to 2 Gyr old. When present in a given stellar population, these stars leave unique signatures that can be used to detect them unambiguously. However, these models have to be tested in a homogeneous database of star-forming galaxies, to check if the results are consistent with what is found from different wavelength ranges. In this work we performed stellar population synthesis on the nuclear and extended regions of 23 star-forming galaxies to understand how the star-formation tracers in the near-infrared can be used in practice. The stellar population synthesis shows that for the galaxies with strong emission in the NIR, there is an important fraction of young/intermediate population contributing to the spectra, which is probably the ionisation source in these galaxies. Galaxies that had no emission lines measured in the NIR were found to have older average ages and less contribution of young populations. Although the stellar population synthesis method proved to be very effective at finding the young ionising population in these galaxies, no clear correlation between these results and the NIR spectral indexes was found. Thus, we believe that, in practice, the use of these indexes is still very limited due to observational limitations.' author: - | Lucimara P. Martins$^{1}$[^1], Alberto Rodríguez-Ardila$^2$, Suzi Diniz$^{1,3}$, Rogério Riffel$^{3}$ and Ronaldo de Souza$^{4}$\ $^{1}$NAT - Universidade Cruzeiro do Sul, Rua Galvao Bueno, 868, São Paulo, SP, Brazil\ $^{2}$Laboratório Nacional de Astrofísica/MCT, Rua dos Estados Unidos 154, CEP 37501-064. 
Itajubá, MG, Brazil\ $^{3}$Universidade Federal do Rio Grande do Sul - IF, Departamento de Astronomia, CP 15051, 91501-970, Porto Alegre, RS, Brasil\ $^{4}$Instituto Astronômico e Geofísico - USP, Rua do Matão, 1226, São Paulo, SP date: 'Accepted ? December ? Received ? December ?; in original form ? October ?' title: 'Spectral Synthesis of Star-forming Galaxies in the Near-Infrared' --- \[firstpage\] Stars: AGB and post-AGB, Galaxies: starburst, Galaxies: stellar content, Infrared: galaxies Introduction ============ The integrated spectrum of galaxies is sensitive to the mass, age, metallicity, dust and star formation history of their dominant stellar populations. Disentangling these stellar populations is important to the understanding of their formation and evolution and the enhancement of star formation in the universe. Star formation tracers in the optical region are nowadays considerably well known and studied, and have been a fundamental tool to identify star formation in galaxies [@kennicutt88; @kennicutt92; @worthey+97; @balogh+97; @gu+06]. However, the use of this knowledge is not always possible in the case of very dusty galaxies or due to the presence of a luminous AGN. Because of these setbacks, tracers in other wavelength regions have been sought. In this sense, the near-infrared region (NIR hereafter) offers an alternative to tackle this problem. It conveys specific information that adds important constraints to stellar population studies. Except for extreme cases such as ultraluminous IRAS galaxies (Goldader et al. 1995, Lançon et al. 1996), the dominant continuum source is still stellar. The $K$-band light of stellar populations with ages between 0.3 and 2 Gyr is dominated by one single component, namely, the thermally pulsating stars on the asymptotic giant branch (TP-AGB) [@maraston05; @marigo+08]. For populations with age larger than 3 Gyr the NIR light is dominated by stars on the red giant branch (RGB) [@origlia+93]. 
Their contribution stays approximately constant over large time scales [@maraston05]. By isolating the signature of these stellar evolutionary phases, one expects to gain a better understanding of the properties of the integrated stellar populations. This knowledge is of paramount importance, for example, in the study of high redshift galaxies, at the epoch when the major star formation occurred. Population synthesis models are beginning to account for these stars in a fully consistent way. As a result, they predict prominent molecular bandheads in the NIR. The spectral features of highest relevance to extragalactic studies are the ones located redward of 1 $\mu$m, which should be detectable in the integrated spectra of populations a few times 10$^8$ years old. The detection of these bandheads would be a safe indication of the presence of TP-AGB stars. The most massive of these TP-AGB stars can be very luminous in the NIR, exceeding the luminosity of the tip of the red giant branch by several magnitudes (Melbourne et al. 2012). Models that neglect TP-AGB stars have been shown to over-estimate the masses of distant galaxies by factors of two or more in comparison to models that include them [@ilbert+10]. Models from @maraston05 show that the combination of metallic indexes using these bands can quantify age, metallicity and even separate populations with single bursts from the ones with continuous star formation. However, for nearby objects, some of these bands are located in, or very close to, strong telluric absorption bands, rendering their predictions useless or strongly dependent on the S/N or observing conditions of the spectra. As soon as good quality single stellar population (SSP) models in the NIR became available, stellar population synthesis emerged as a powerful tool to study galaxies of many different types. 
For example, by fitting combinations of stellar population models of various ages and metallicities, @riffel+08 studied the stellar populations of the inner few hundred parsecs of starburst galaxies in the NIR. The observed spectra were best explained by stellar populations containing a sizable amount (20-56% by mass) of $\sim$1 Gyr-old stars in the thermally pulsing asymptotic giant branch phase. @riffel+09 applied spectral synthesis to study the differences between the stellar populations of Seyfert 1 (Sy1) and 2 (Sy2) galaxies in the NIR. They found that the central few hundred parsecs of the studied galaxies contain a substantial fraction of intermediate-age SPs with a mean metallicity near solar. They also found that the contribution of the featureless continuum and young components tends to be higher in Sy 1 than in Sy 2. @martins+10 also used stellar population synthesis to investigate the NIR extended spectra of NGC 1068. They found an important contribution of a young stellar population at $\sim$ 100 pc south of the nucleus, which might be associated with regions where the jet encounters dense clouds, possibly inducing star formation. However, if on one hand we now have sophisticated models that predict the spectra of integrated populations in the NIR, on the other hand, few attempts have been made to fit them to observations in a consistent way, in order to calibrate or test these predictions. Much of the work in the NIR has focused on unusual objects with either active galactic nuclei [@larkin+98; @alonso-herrero+00; @ivanov+2000; @reunanen+02; @reunanen+03; @riffel+09; @martins+10; @riffel+10; @riffel+11; @storchi-bergmann+12] or very strong star formation [@goldader+97; @burston+01; @dannerbauer+05; @engelbracht+98; @vanzi+97; @coziol+01; @reunanen+07; @riffel+08]. @kotilainen+12 recently published NIR long-slit spectroscopy for a sample of nearby inactive spiral galaxies to study the composition of their NIR stellar populations. 
With these galaxies they created NIR HK-band template spectra for low redshift spiral galaxies along the Hubble sequence. They found a dependence between the strength of the absorption lines and the luminosity and/or temperature of the stars, implying that NIR spectral indices can be used to trace the stellar population of galaxies. Moreover, evolved red stars completely dominate the NIR spectra of their sample, meaning that the contribution from hot young stars is insignificant in this spectral region, although such ages play an important role in other spectral regions [@riffelRogerio+11]. However, to identify and quantify tracers of star formation in the NIR, we need galaxies known to have a significant fraction of star formation. With this in mind, we used the NIR spectral sample of star-forming galaxies of @martins+13, which are known to have star formation from their optical observations. Our objective is to test the predictions of stellar population models in this wavelength range and verify the diagnostics that can, in practice, be used as proxies in stellar population studies. In order to do this we fit the underlying continuum between 0.8 and 2.4 $\micron$ with the stellar population synthesis technique, using the same method described in @riffel+09 and @martins+10. In §2 we present the details of our observations and reduction process; in §3 we briefly describe the stellar population synthesis method; in §4 we present our results and discussions and in §5 we show our conclusions. The Data ======== The sample used here was presented in @martins+13 and is a subset of the one presented in the magnitude-limited optical spectroscopic survey of nearby bright galaxies of Ho et al. (1995, hereafter HO95). These galaxies are sources defined by Ho et al. (1997, hereafter HO97) as those composed of “nuclei dominated by emission lines from regions of active star formation (H[ii]{} or starburst nuclei)". 
In addition, five galaxies classified as non-star-forming in the optical, dominated by old stellar populations and with no detected emission lines, were included as a control sample. All spectra were obtained at the NASA 3m Infrared Telescope Facility (IRTF) in two observing runs (2007 and 2008) - the same data from @martins+13. The SpeX spectrograph [@rayner+03] was used in the short cross-dispersed mode (SXD, 0.8-2.4 $\mu$m). The detector consists of a 1024x1024 ALADDIN 3 InSb array with a spatial scale of 0.15“/pixel. A 0.8”x 15" slit oriented in the north-south direction was used, providing a spectral resolution of 360 km s$^{-1}$. This value was determined both from the arc lamp and the sky line spectra and was found to be constant with wavelength along the observed spectra. The seeing varied from night to night but, on average, most objects were observed under 1$\arcsec$ seeing conditions. The spectral reduction, extraction and wavelength calibration procedures were performed using SPEXTOOL, the in-house software developed and provided by the SpeX team for the IRTF community [@cushing+04]. Telluric feature removal and flux calibration were done using XTELLCOR [@vacca+03], another piece of software provided by the SpeX team. The spectra were corrected for Galactic extinction using the @cardelli+89 law and the @schlafly+11 extinction map. The final sample is composed of 28 galaxies, of which 23 are classified as starbursts or star-forming and 5 are non-star-forming galaxies for comparison. A different number of apertures was extracted for each galaxy, depending on the size of the extended emission across the slit. The average extraction size is 1". More details about the observations and reduction process can be found in @martins+13. Additional information about these galaxies is presented in Table \[sample\]. 
The G band measurements and the H$_\alpha$ flux are presented because they can be used to characterize the old and young stellar populations, respectively, in the optical. The wavelength and flux-calibrated spectra were de-redshifted using the $z$ values listed in column 2 of Table \[sample\]. Then, they were normalised to unity at 1.233$\mu$m (as defined in @riffel+08). This position was set because no emission or absorption features are located in or around it. ---------- --------- ----------- ------- ------------------- ----------------------- -- -- -- -- -- Galaxy z Apertures Class W[(G band)]{}$^*$ log F(H$_\alpha$)$^*$ (a) (b) (c) (d) NGC 0221 -0.0007 nuc + 3 N 4.54 - NGC 0278 0.0021 nuc + 3 H 0.50 -13.45 NGC 0514 0.0082 nuc + 2 H 3.86 -14.64 NGC 0674 0.0104 nuc + 2 H 4.38 -14.31 NGC 0783 0.0173 nuc + 2 H 1.98 -13.54 NGC 0864 0.0052 nuc + 2 H 1.03 -12.88 NGC 1174 0.0091 nuc + 2 H 3.04 -13.07 NGC 1232 0.0053 nuc N - - NGC 1482 0.0064 nuc +2 H - - NGC 2339 0.0074 nuc + 2 H 0.00 -13.07 NGC 2342 0.0176 nuc H 0.67 -12.83 NGC 2903 0.0019 nuc + 7 H 0.57 -12.63 NGC 2950 0.0045 nuc + 4 N 4.98 - NGC 2964 0.0044 nuc + 2 H 1.03 -12.75 NGC 3184 0.0020 nuc + 2 H 1.37 -13.12 NGC 4102 0.0028 nuc + 2 H 1.94 -12.52 NGC 4179 0.0042 nuc + 2 N 4.89 -14.30 NGC 4303 0.0052 nuc + 6 H 3.60 -12.84 NGC 4461 0.0064 nuc + 2 N 4.67 - NGC 4845 0.0041 nuc + 6 H 4.18 -13.61 NGC 5457 0.0008 nuc + 2 H 1.03 -13.33 NGC 5905 0.0113 nuc + 2 H 2.15 -13.13 NGC 6181 0.0079 nuc + 2 H 4.43 -13.57 NGC 6946 0.0195 nuc H 0.32 -13.01 NGC 7080 0.0161 nuc + 2 H - -13.49 NGC 7448 0.0073 nuc + 2 H 0.42 -14.01 NGC 7798 0.0080 nuc + 2 H 0.04 -13.16 NGC 7817 0.0077 nuc + 2 H 3.57 -13.25 ---------- --------- ----------- ------- ------------------- ----------------------- -- -- -- -- -- : Sample details \ References: (1) @ho+95 (2) @kennicutt88 (3) @coziol+98\ (\*)Adopted from @ho+97.// Redshifts obtained from NED (http://ned.ipac.caltech.edu/) (a) Number of apertures extracted from each galaxy. 
(b) H means star-forming and N means non-star-forming galaxy. (c) Measured G band. (d) Log of the measured H$\alpha$ flux (in ergs/cm$^2$/s) \[sample\] Spectral Synthesis Base Set and Method ====================================== The spectral synthesis is done using the code STARLIGHT [^2] [@cid+04; @cid+05a; @mateus+06; @asari+07; @cid+09], which mixes computational techniques originally developed for semi-empirical population synthesis with ingredients from evolutionary synthesis models. The code fits an observed spectrum O$_\lambda$ with a combination, in different proportions, of a number of SSPs, to obtain the final model that fits the spectrum, M$_\lambda$. Because the @maraston05 models include the effect of the TP-AGB phase, we use these SSP models as the base set for STARLIGHT. The SSPs used in this work cover 14 ages, t = 0.001, 0.005, 0.01, 0.03, 0.05, 0.1, 0.2, 0.5, 0.7, 1, 2, 5, 9 and 13 Gyr, and 4 metallicities, Z = 0.02, 0.5, 1 and 2 z$_{\sun}$, summing up to 56 SSPs. It is important to mention that the spectral resolution of the models in this spectral region ranges from R $\sim$100 (K-band) to R $\sim$ 250 (z-band), while that of the observed data is significantly higher, R $\sim$750 [@rayner+03]. The observations were then degraded to the model’s resolution. For this reason, STARLIGHT’s fit will depend more strongly on the continuum shape than on the absorption features. This comes directly from the way the synthesis works: the best models are chosen based on the $\chi^2$ values. This means that the synthesis does a point-by-point subtraction between models and observations. When the resolution is very low, the “distance” between models and observations becomes smaller in the absorption bands, and therefore they become “less important”. Recently, @maraston+11 published a new set of NIR models with R $\sim$500 (K-band). However, they are only available for solar metallicity. 
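As a deliberately simplified illustration of this fitting scheme, the sketch below first degrades a spectrum from the observed resolving power to that of the models, and then fits it as a non-negative combination of SSP spectra. All function names here are ours, and the non-negative least-squares solver is only a stand-in for STARLIGHT's actual minimisation, which additionally fits extinction, kinematics and the black-body components.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d
from scipy.optimize import nnls

C_KMS = 299792.458  # speed of light, km/s


def degrade(flux, dv_kms, r_obs, r_model):
    """Smooth a spectrum sampled on a log-uniform wavelength grid
    (constant dv_kms km/s per pixel) from resolving power r_obs down
    to r_model.  The Gaussian kernel FWHM is the quadrature difference
    of the two line-spread functions, c*sqrt(1/R_mod^2 - 1/R_obs^2)."""
    flux = np.asarray(flux, float)
    if r_model >= r_obs:
        return flux  # already at (or below) the target resolution
    fwhm_kms = C_KMS * np.sqrt(1.0 / r_model**2 - 1.0 / r_obs**2)
    sigma_pix = fwhm_kms / 2.3548 / dv_kms
    return gaussian_filter1d(flux, sigma=sigma_pix)


def fit_population(obs, ssp_grid, mask):
    """Fit obs ~ sum_j w_j * ssp_j with w_j >= 0 (a toy STARLIGHT).

    ssp_grid has shape (n_ssp, n_pix), on the same grid as obs and
    normalised at the same pivot wavelength; mask is True on pixels
    kept in the fit (emission lines and spurious regions set False).
    Returns the light-fraction population vector (in %) and the model."""
    w, _ = nnls(ssp_grid[:, mask].T, obs[mask])
    return 100.0 * w / w.sum(), w @ ssp_grid
```

Masking is what lets a pure-continuum fitter ignore the emission lines listed below: the solver simply never sees those pixels.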
In order to test possible differences introduced in the results due to the spectral resolution, these latter models were used to construct a base using the same ages as for the low resolution base. In addition, we include in our spectral base 8 Planck distributions (black body, BB), with T ranging from 700 to 1400 K, in steps of 100 K, to account for a possible contribution of hot dust to the continuum (for more details see @riffel+09). Although dust at such high temperatures is not common in starburst galaxies, its existence would indicate the presence of a hidden active galactic nucleus, for instance. Extinction is modelled by STARLIGHT as due to foreground dust, and parametrised by the V-band extinction A$_V$. We use the @calzetti+00 extinction law for this purpose because it is more appropriate for star-forming galaxies [@Fishera+03]. Velocity dispersion is also a free parameter for STARLIGHT. This means that the code broadens the SSPs in order to better fit the absorption lines in the observed spectra. In our case the velocity dispersion results lack significance because of the low resolution of the models compared to that of the observations. Emission lines were masked, since STARLIGHT only fits the stellar population continuum. Lines masked in this work were \[SIII\]$\lambda$ 0.907 $\micron$, \[SIII\]$\lambda$ 0.953 $\micron$, HeI 1.083 $\micron$, Pa$\gamma$ $\lambda$ 1.094 $\micron$, \[FeII\]$\lambda$ 1.257 $\micron$, Pa$\beta$ $\lambda$ 1.282 $\micron$, \[FeII\]$\lambda$ 1.320 $\micron$, \[FeII\]$\lambda$ 1.644 $\micron$, Pa$\alpha$ $\lambda$ 1.875 $\micron$ and HI $\lambda$ 1.945 $\micron$. Spurious data (such as regions with bad telluric correction) were also masked out. Results ======= The results of the spectral synthesis fitting procedure for the nuclear apertures are presented in Figures \[specfit1\] to \[specfit7\]. For each galaxy, the top panel shows the observed and modeled spectra normalised to unity at 1.233 $\mu$m. 
The middle panel shows the observed spectrum after subtraction of the stellar contribution (O$_\lambda$ - M$_\lambda$). This residual can be understood as the gas emission component. The bottom panel shows the contribution of each SSP to the continuum found by the synthesis. This last panel can be understood in terms of the star formation history of the galaxy. ![image](starlighfit_1.ps){width="180mm"} ![image](starlighfit_2.ps){width="180mm"} ![image](starlighfit_3.ps){width="180mm"} ![image](starlighfit_4.ps){width="180mm"} ![image](starlighfit_5.ps){width="180mm"} ![image](starlighfit_6.ps){width="180mm"} ![image](starlighfit_7.ps){width="180mm"} Figures \[specfit1\] to \[specfit7\] show that overall, the spectral synthesis reproduces well the continuum shape and the most conspicuous absorption signatures, like the CaT and CO. Examples of good results are NGC 0864 and NGC 4102. However, in many galaxies some strong absorption signatures are not reproduced by the models. Clear examples are the features that appear around 0.94 $\mu$m (TiO) and 1.1 $\mu$m (CN) in many galaxies. The worst cases seem to be NGC 2950, NGC 4179 and NGC 6181. It is important to mention that we tested many different weights for different regions in the synthesis process, but the features were still not reproduced. Many things might be playing a role here, from telluric contamination around these regions to lack of precision from the method due to the low resolution of the models, or even the strength of these signatures in the models. The quality of the fits is measured by the reduced $\chi^2$ and the [*adev*]{} parameter, which is the percentage mean deviation over all fitted pixels, $|$O$_\lambda$ - M$_\lambda$$|$/O$_\lambda$. Following @cid+05b, we present our results using a condensed population vector to take into account noise effects that dump small differences between similar spectral components. 
This is obtained by binning the population vector x into young (x$_Y$: t $\leq$ 5 $\times$ 10$^7$ yr), intermediate-age (x$_I$: 1 $\times$ 10$^8$ $\leq$ t $\leq$ 2 $\times$ 10$^9$ yr) and old (x$_O$: t $>$ 2 $\times$ 10$^9$ yr) components, using the flux distributions. Results for the condensed vectors for each galaxy (nuclear and off-nuclear apertures) are presented in Table \[synthesis\]. Additional results from the synthesis, namely the extinction value A$_v$, the mean age $<$log(t$_{\textrm{av}}$)$>$ and mean metallicity $<$z$_{\textrm{av}}$$>$ of the stellar population, weighted by the light fraction, are also shown. These quantities are defined as in @cid+05b: $$\langle {\rm log} t_{ av} \rangle_{L} = \displaystyle \sum^{N_{\star}}_{j=1} x_j {\rm log}t_j,$$ and $$\langle Z_{ av} \rangle_{L} = \displaystyle \sum^{N_{\star}}_{j=1} x_j Z_j,$$ where N$^*$ is the total number of SSPs used in the base. Table \[synthesis\] also presents the current mass of the galaxy which is in the form of stars (M\*), the mass that was processed in stars during the galaxy lifetime (M$_{ini}$) and the star formation rate in the last 100 Myrs (SFR$_{100}$). We also present the reduced $\chi^2$ and the [*adev*]{} values for each fit. The last column of this table shows the distance of the aperture (in arcseconds) to the nucleus. More information about these quantities can be obtained in the STARLIGHT manual. @cid+05b tested the effects of the S/N in the optical on the stellar population synthesis results, concluding that for observed spectra with S/N $\ge$ 10 in the continuum, the results are robust. For that reason we only applied the synthesis method to apertures meeting this criterion. Average metallicities tend to be between 1 and 2 times the solar value. No hot dust contribution to the continuum was found from the synthesis for any galaxy. 
This result rules out the presence of a hidden AGN in our sample and agrees with previous observations in other wavelength bands, where no mention of the possible presence of an AGN, even a low-luminosity one, is made for any of the targets studied. Moreover, we found a significant contribution of young stellar population (x$_Y$ $\geq$ 10%) in 17 out of the 23 galaxies classified as star-forming galaxies. In contrast, four of the five galaxies classified as non-star-forming have zero contribution of young stars, being dominated by an old stellar population whose contribution is larger than 75%. This agrees with the work of @kotilainen+12 who also found negligible contribution of young stars in their sample of non-active galaxies by means of $H-$ and $K$-band spectroscopy. NGC 4461, the only non-star-forming galaxy with an anomalous result, has a young population fraction of x$_Y$ $\sim$ 48% in the nuclear region. This is surprising, as no emission lines were detected in that object. Moreover, its NIR spectrum is almost featureless, with no apparent CaII or CO 2.3$\mu$m absorption features, leading us to consider this result with caution. Indeed, the remaining non-star-forming galaxies of our sample, as well as those of @kotilainen+12, display these two systems of absorption lines/bands. We therefore believe that our fits failed because no constraint could be set either from the continuum or from absorption features. Ideally one would like to compare these results with stellar population synthesis results from the optical. However, as shown by @martins+13, the apertures from the optical observations and from the NIR are very different, which leads to the sampling of different stellar populations (the optical spectra were obtained through a slit with 5 times the area of the NIR slit). Despite this, we believe our results can be trusted as consistent with the optical emission line tracers for the stellar population, which is further detailed in section 4.3. 
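The condensed vectors, light-weighted means and the [*adev*]{} statistic reported in Table \[synthesis\] follow directly from the output population vector. A minimal sketch, with the age-bin edges and definitions taken from the text above and all function names our own (the young/intermediate boundary uses the gap between the 0.05 and 0.1 Gyr base ages):

```python
import numpy as np


def condense(ages_yr, x):
    """Bin a light-fraction population vector into young (t <= 5e7 yr),
    intermediate (1e8 <= t <= 2e9 yr) and old (t > 2e9 yr) components."""
    a = np.asarray(ages_yr, float)
    x = np.asarray(x, float)
    return (x[a <= 5e7].sum(),
            x[(a > 5e7) & (a <= 2e9)].sum(),
            x[a > 2e9].sum())


def light_weighted_means(ages_yr, z, x):
    """<log t>_L and <Z>_L, weighted by normalised light fractions x_j,
    as defined in Cid Fernandes et al. (2005)."""
    x = np.asarray(x, float) / np.sum(x)
    return (float(np.sum(x * np.log10(ages_yr))),
            float(np.sum(x * np.asarray(z, float))))


def adev(obs, model):
    """Percentage mean deviation |O - M| / O over the fitted pixels."""
    obs = np.asarray(obs, float)
    model = np.asarray(model, float)
    return 100.0 * np.mean(np.abs(obs - model) / obs)
```

For example, a vector with 20% light in a 10$^7$ yr population, 30% at 5 $\times$ 10$^8$ yr and 50% at 10$^{10}$ yr condenses to (x$_Y$, x$_I$, x$_O$) = (20, 30, 50) with $\langle$log t$\rangle_L$ $\approx$ 9.01, comparable to the nuclear entries in the table.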
Figure \[gradpop\] shows the gradient of the stellar populations for galaxies with five or more apertures. In this plot, negative distances represent the north direction and positive distances the south. One of these galaxies is a non-star-forming galaxy (NGC 2950, top right plot), and no variation of the stellar population is seen along the galaxy. For the three remaining galaxies (NGC 2903, NGC 4303 and NGC 4845), the fraction of younger populations is clearly rising in the outer apertures, coinciding with where the emission lines are stronger.

![image](starlight_grad_1.ps){width="180mm"}

[cccccccccccccc]{}
(column headers lost in extraction)\
NGC0221 & nuc & 0 & 30 & 70 & 0.476 & 9.57 & 0.026 & 2.10E+07 & 3.27E+07 & 0.00E+00 & 2.21 & 1.71 & 0.00\
& S1 & 0 & 24 & 76 & -0.421 & 9.60 & 0.027 & 6.58E+06 & 1.03E+07 & 0.00E+00 & 2.25 & 2.48 & 2.00\
& S2 & 0 & 5 & 95 & -0.428 & 9.76 & 0.023 & 2.32E+06 & 3.70E+06 & 0.00E+00 & 2.31 & 2.24 & 4.00\
& N1 & 0 & 25 & 75 & 0.174 & 9.65 & 0.027 & 8.15E+06 & 1.29E+07 & 0.00E+00 & 2.80 & 1.22 & -2.00\
NGC0278 & nuc & 0 & 33 & 67 & 0.066 & 9.56 & 0.036 & 7.26E+07 & 1.14E+08 & 0.00E+00 & 1.64 & 2.78 & 0.00\
& S1 & 0 & 33 & 67 & -0.180 & 9.56 & 0.030 & 3.24E+07 & 5.08E+07 & 0.00E+00 & 1.30 & 5.06 & 2.00\
& S2 & 9 & 25 & 65 & 0.210 & 9.56 & 0.032 & 1.50E+07 & 2.33E+07 & 3.37E-03 & 1.17 & 12.31 & 4.00\
& N1 & 0 & 20 & 80 & 0.369 & 9.62 & 0.033 & 3.18E+07 & 5.01E+07 & 0.00E+00 & 1.44 & 7.09 & -2.00\
NGC0514 & nuc & 5 & 34 & 61 & 0.386 & 9.55 & 0.027 & 1.49E+08 & 2.33E+08 & 2.09E-03 & 0.96 & 5.68 & 0.00\
& S1 & 0 & 49 & 51 & 0.658 & 9.45 & 0.023 & 3.69E+07 & 5.73E+07 & 0.00E+00 & 0.86 & 27.34 & 2.00\
NGC0674 & nuc & 16 & 10 & 74 & 1.641 & 9.59 & 0.023 & 1.07E+09 & 1.69E+09 & 4.25E-02 & 1.27 & 4.47 & 0.00\
NGC0783 & nuc & 11 & 48 & 42 & 2.001 & 9.41 & 0.024 & 2.05E+09 & 3.14E+09 & 7.39E-02 & 1.13 & 5.29 & 0.00\
& S1 & 0 & 100 & 0 & 1.769 & 8.87 & 0.035 & 3.56E+08 & 4.87E+08 & 0.00E+00 & 0.86 & 11.31 & 2.00\
& N1 & 16 & 64 & 19 & 1.922 & 9.17 & 0.033 & 3.21E+08 & 4.69E+08 & 1.74E-01 & 0.85 & 19.00 & -2.00\
NGC0864 & nuc & 14 & 75 & 10 & 1.210 & 8.92 & 0.033 & 3.98E+07 & 5.61E+07 & 1.85E-02 & 0.60 & 6.48 & 0.00\
& S1 & 0 & 55 & 45 & 0.700 & 9.77 & 0.017 & 3.48E+07 & 5.70E+07 & 0.00E+00 & 0.90 & 18.73 & 2.00\
& N1 & 0 & 46 & 54 & 0.926 & 9.48 & 0.020 & 2.11E+07 & 3.32E+07 & 0.00E+00 & 0.95 & 30.46 & -2.00\
NGC1174 & nuc & 12 & 88 & 0 & 2.775 & 8.52 & 0.038 & 4.28E+08 & 5.57E+08 & 5.81E-02 & 1.16 & 3.71 & 0.00\
& S1 & 0 & 42 & 58 & 0.357 & 9.57 & 0.026 & 1.64E+08 & 2.55E+08 & 0.00E+00 & 0.68 & 20.17 & 3.00\
& N1 & 0 & 69 & 31 & 1.338 & 9.34 & 0.033 & 1.05E+08 & 1.59E+08 & 0.00E+00 & 0.62 & 45.96 & -3.00\
NGC1232 & nuc & 0 & 2 & 98 & 0.671 & 9.69 & 0.023 & 1.41E+08 & 2.23E+08 & 0.00E+00 & 0.96 & 18.83 & 0.00\
NGC1482 & nuc & 30 & 70 & 0 & 3.303 & 8.55 & 0.038 & 1.03E+09 & 1.39E+09 & 4.76E-01 & 1.42 & 3.56 & 0.00\
& S1 & 4 & 59 & 37 & 1.518 & 9.71 & 0.039 & 1.46E+09 & 2.39E+09 & 1.28E-02 & 11.05 & 11.05 & -3.00\
NGC2339 & nuc & 48 & 52 & 0 & 3.589 & 8.33 & 0.035 & 7.71E+08 & 9.79E+08 & 1.23E+00 & 1.05 & 3.11 & 0.00\
& S1 & 0 & 67 & 33 & 2.455 & 9.31 & 0.037 & 4.03E+08 & 6.08E+08 & 0.00E+00 & 0.81 & 7.82 & 2.40\
& N1 & 42 & 58 & 0 & 3.218 & 8.51 & 0.035 & 1.13E+08 & 1.48E+08 & 1.40E-01 & 0.83 & 10.38 & -2.40\
NGC2342 & nuc & 26 & 74 & 0 & 2.432 & 8.56 & 0.036 & 1.28E+09 & 1.68E+09 & 4.18E-01 & 1.14 & 4.39 & 0.00\
NGC2903 & nuc & 0 & 0 & 100 & 1.598 & 9.70 & 0.032 & 2.30E+08 & 3.63E+08 & 1.67E-08 & 2.23 & 2.72 & 0.00\
& S1 & 6 & 0 & 94 & 1.361 & 9.67 & 0.037 & 9.91E+07 & 1.57E+08 & 1.25E-03 & 1.95 & 3.27 & 1.66\
& S2 & 30 & 3 & 67 & 0.805 & 9.53 & 0.032 & 8.99E+07 & 1.42E+08 & 7.98E-03 & 2.18 & 2.97 & 3.58\
& S3 & 19 & 0 & 81 & 0.551 & 9.61 & 0.034 & 3.38E+07 & 5.36E+07 & 1.64E-03 & 1.47 & 4.68 & 4.98\
& N1 & 2 & 3 & 95 & 1.090 & 9.68 & 0.037 & 9.32E+07 & 1.48E+08 & 4.15E-04 & 2.08 & 3.67 & -1.74\
& N2 & 56 & 8 & 36 & 0.895 & 9.27 & 0.023 & 4.34E+07 & 6.78E+07 & 1.27E-02 & 1.90 & 3.16 & -2.40\
& N3 & 45 & 55 & 0 & 1.873 & 8.03 & 0.036 & 1.22E+07 & 1.53E+07 & 9.41E-03 & 1.38 & 4.13 & -4.63\
& N4 & 45 & 1 & 54 & 1.357 & 9.43 & 0.040 & 2.72E+07 & 4.29E+07 & 4.59E-03 & 1.34 & 5.67 & -6.13\
NGC2950 & nuc & 0 & 23 & 77 & 0.579 & 9.60 & 0.031 & 2.22E+09 & 3.49E+09 & 0.00E+00 & 2.41 & 2.27 & 0.00\
& S1 & 0 & 21 & 79 & -0.427 & 9.61 & 0.027 & 3.89E+08 & 6.11E+08 & 0.00E+00 & 2.16 & 3.43 & 1.50\
& S2 & 0 & 22 & 78 & -0.510 & 9.61 & 0.026 & 1.66E+08 & 2.59E+08 & 0.00E+00 & 1.14 & 5.06 & 2.50\
& N1 & 0 & 24 & 76 & 0.500 & 9.60 & 0.028 & 4.45E+08 & 6.97E+08 & 0.00E+00 & 2.23 & 3.26 & -1.50\
& N2 & 0 & 18 & 82 & 0.329 & 9.62 & 0.025 & 1.94E+08 & 3.04E+08 & 0.00E+00 & 1.27 & 5.49 & -2.50\
NGC2964 & nuc & 12 & 88 & 0 & 2.155 & 8.39 & 0.036 & 3.23E+08 & 4.15E+08 & 4.65E-02 & 0.97 & 5.41 & 4.98\
& S1 & 8 & 92 & 0 & 1.730 & 8.28 & 0.038 & 8.66E+07 & 1.08E+08 & 5.91E-02 & 1.08 & 21.17 & 0.00\
NGC3184 & nuc & 21 & 79 & 0 & 1.471 & 8.27 & 0.039 & 8.97E+06 & 1.13E+07 & 1.41E-02 & 0.85 & 7.01 & 0.00\
& S1 & 18 & 82 & 0 & 1.155 & 7.93 & 0.040 & 3.22E+06 & 3.99E+06 & 1.22E-03 & 0.74 & 18.95 & 2.00\
& N1 & 3 & 97 & 0 & 1.675 & 7.99 & 0.040 & 4.06E+06 & 5.05E+06 & 1.07E-04 & 0.75 & 24.56 & -2.00\
NGC4102 & nuc & 52 & 48 & 0 & 3.251 & 8.58 & 0.020 & 8.02E+08 & 1.05E+09 & 2.02E+00 & 1.00 & 2.72 & 0.00\
& S1 & 58 & 42 & 0 & 2.528 & 8.42 & 0.018 & 9.61E+07 & 1.29E+08 & 1.33E-01 & 1.04 & 3.58 & 2.00\
& N1 & 38 & 62 & 0 & 3.446 & 8.54 & 0.032 & 3.14E+08 & 3.97E+08 & 8.26E-01 & 1.37 & 3.30 & -2.00\
NGC4179 & nuc & 0 & 4 & 96 & 0.951 & 9.69 & 0.026 & 2.17E+09 & 3.42E+09 & 0.00E+00 & 1.96 & 4.24 & 0.00\
& S1 & 0 & 2 & 98 & 0.587 & 9.69 & 0.024 & 7.18E+08 & 1.13E+09 & 0.00E+00 & 1.45 & 6.41 & 2.00\
NGC4303 & nuc & 0 & 55 & 45 & 0.557 & 9.48 & 0.033 & 3.12E+08 & 4.81E+08 & 0.00E+00 & 0.87 & 2.98 & 0.00\
& S1 & 0 & 31 & 69 & 0.357 & 9.56 & 0.034 & 7.76E+07 & 1.22E+08 & 0.00E+00 & 1.77 & 3.45 & 1.50\
& S2 & 34 & 30 & 36 & 0.575 & 9.30 & 0.039 & 2.91E+07 & 4.45E+07 & 2.69E-02 & 1.14 & 4.40 & 2.50\
& S3 & 17 & 24 & 59 & 0.703 & 9.49 & 0.035 & 2.14E+07 & 3.36E+07 & 1.15E-03 & 0.70 & 6.29 & 3.50\
& N1 & 0 & 33 & 66 & 0.431 & 9.55 & 0.035 & 7.09E+07 & 1.11E+08 & 1.52E-05 & 1.60 & 3.63 & -1.50\
& N2 & 23 & 42 & 35 & 1.085 & 9.30 & 0.036 & 2.76E+07 & 4.21E+07 & 1.49E-02 & 1.05 & 4.87 & -2.50\
& N3 & 33 & 25 & 42 & 1.017 & 9.36 & 0.035 & 1.47E+07 & 2.27E+07 & 7.81E-03 & 0.81 & 9.51 & -3.50\
NGC4461 & nuc & 48 & 28 & 24 & 1.875 & 9.51 & 0.014 & 7.15E+08 & 1.15E+09 & 7.55E-01 & 0.53 & 4.91 & 0.00\
& S1 & 24 & 60 & 16 & 1.609 & 9.36 & 0.033 & 3.18E+08 & 4.82E+08 & 3.05E-01 & 0.94 & 8.11 & 2.00\
& N1 & 0 & 79 & 21 & 0.546 & 9.22 & 0.030 & 1.80E+08 & 2.69E+08 & 0.00E+00 & 0.95 & 9.06 & -2.00\
NGC4845 & nuc & 35 & 65 & 1 & 4.000 & 8.55 & 0.029 & 5.96E+08 & 7.93E+08 & 4.63E-01 & 1.92 & 2.41 & 0.00\
& S1 & 9 & 91 & 0 & 2.555 & 8.41 & 0.033 & 1.29E+08 & 1.67E+08 & 1.22E-02 & 1.24 & 3.60 & 1.50\
& S2 & 25 & 75 & 0 & 3.039 & 8.37 & 0.024 & 6.54E+07 & 8.16E+07 & 1.19E-01 & 0.97 & 6.52 & 2.50\
& N1 & 51 & 49 & 0 & 4.000 & 8.43 & 0.032 & 8.87E+07 & 1.12E+08 & 2.67E-01 & 1.08 & 4.42 & -1.50\
& N2 & 50 & 50 & 0 & 4.000 & 8.54 & 0.030 & 3.92E+07 & 4.90E+07 & 1.40E-01 & 0.94 & 8.52 & -2.50\
NGC5457 & nuc & 64 & 28 & 7 & 1.182 & 8.74 & 0.016 & 1.13E+07 & 1.60E+07 & 3.82E-02 & 0.72 & 5.83 & 0.00\
& S1 & 54 & 46 & 0 & 1.925 & 8.13 & 0.031 & 5.13E+06 & 5.97E+06 & 2.36E-02 & 0.84 & 15.96 & 2.00\
& N1 & 41 & 59 & 0 & 2.074 & 8.29 & 0.033 & 4.57E+06 & 5.82E+06 & 9.48E-03 & 0.87 & 20.81 & -2.00\
NGC5905 & nuc & 32 & 20 & 49 & 1.393 & 9.41 & 0.023 & 1.35E+09 & 2.12E+09 & 1.61E-01 & 1.32 & 3.31 & 0.00\
& S1 & 52 & 23 & 25 & 1.537 & 9.15 & 0.038 & 2.96E+08 & 4.42E+08 & 5.77E-01 & 0.87 & 6.97 & 2.00\
& N1 & 46 & 42 & 12 & 1.633 & 9.06 & 0.031 & 2.73E+08 & 3.97E+08 & 4.40E-01 & 0.92 & 7.57 & -2.00\
NGC6181 & nuc & 38 & 33 & 30 & 1.195 & 9.27 & 0.030 & 6.95E+08 & 1.06E+09 & 3.40E-01 & 1.10 & 3.50 & 0.00\
& S1 & 5 & 27 & 67 & 0.292 & 9.59 & 0.034 & 4.72E+08 & 7.39E+08 & 6.38E-03 & 0.93 & 5.07 & 2.00\
NGC6946 & nuc & 56 & 44 & 0 & 3.295 & 8.41 & 0.033 & 6.29E+07 & 8.29E+07 & 9.61E-02 & 2.83 & 1.33 & 0.00\
NGC7080 & nuc & 16 & 67 & 17 & 1.725 & 9.17 & 0.029 & 2.32E+09 & 3.44E+09 & 2.95E-01 & 1.28 & 2.54 & 0.00\
& S1 & 0 & 31 & 69 & 0.890 & 9.59 & 0.036 & 1.22E+09 & 1.91E+09 & 1.49E-06 & 1.01 & 5.83 & 2.00\
& N1 & 0 & 56 & 44 & 1.358 & 9.39 & 0.036 & 7.80E+08 & 1.20E+09 & 0.00E+00 & 0.98 & 7.09 & -2.00\
NGC7448 & nuc & 3 & 43 & 55 & 0.196 & 9.44 & 0.036 & 1.66E+08 & 2.56E+08 & 1.45E-03 & 0.92 & 5.41 & 0.00\
& S1 & 0 & 37 & 63 & 0.298 & 9.50 & 0.036 & 1.06E+08 & 1.64E+08 & 0.00E+00 & 0.75 & 8.84 & 2.00\
& N1 & 0 & 29 & 71 & 0.085 & 9.55 & 0.033 & 1.06E+08 & 1.66E+08 & 0.00E+00 & 0.83 & 10.22 & -2.00\
NGC7798 & nuc & 10 & 25 & 65 & 1.473 & 9.53 & 0.025 & 1.20E+09 & 1.88E+09 & 3.53E-02 & 1.47 & 3.55 & 0.00\
& S1 & 0 & 10 & 90 & 1.195 & 9.75 & 0.026 & 2.96E+08 & 4.77E+08 & 0.00E+00 & 0.70 & 10.28 & 2.00\
& N1 & 0 & 37 & 63 & 1.828 & 9.58 & 0.011 & 2.22E+08 & 3.51E+08 & 0.00E+00 & 0.66 & 22.11 & -2.00\
NGC7817 & nuc & 0 & 15 & 85 & 2.041 & 9.65 & 0.031 & 5.78E+08 & 9.09E+08 & 0.00E+00 & 1.03 & 5.35 & 0.00\
& N1 & 0 & 1 & 99 & 1.538 & 9.70 & 0.022 & 1.84E+08 & 2.90E+08 & 0.00E+00 & 0.80 & 18.17 & -2.00\

\[specfitresult\]

Comparison with higher resolution models
----------------------------------------

Because higher resolution stellar population models are available in the literature for the wavelength range covered by our observations, it is important to test them and compare their output with that of the lower resolution models. In principle, one would expect higher resolution models to produce more accurate output, since the signatures we are looking for should be more conspicuous and clear. The main limitation, however, is that the @maraston+11 models are available only for solar metallicity. They were constructed using the same recipe as the ones from @maraston05, but using different observables.
The set of models that extend to the NIR are the ones that use the @pickles98 stellar spectral library, and above $\sim$ 1 $\mu$m nearly half of the spectra from that library lack spectroscopic observations. To overcome this problem, the author constructed a smooth energy distribution from broadband photometry. This implies that some NIR absorption features may not be well resolved, even in these higher resolution models. We applied the stellar population fitting method to our galaxies using these high resolution SSPs as the base, with the same ages as described in section 3, and compared the results with those from the low resolution models. However, to be certain that any differences were due only to the models themselves and not to metallicity, we repeated the stellar population synthesis with the low resolution models using a base of SSPs containing only solar metallicity. This ensures that any differences found between the results are not due to an age-metallicity degeneracy effect. The comparison between the results of the low and high resolution SSPs can be seen in Figure \[comp\_lowhires\]. Note that only nuclear apertures are shown in the plot because of their higher S/N, which reduces the uncertainties in the fit. The dotted line in Figure \[comp\_lowhires\] marks the one-to-one relationship. The size of each point in these plots is inversely proportional to the $adev$ value found by the fitting procedure, meaning that the larger the point, the smaller the error. It can be seen from Figure \[comp\_lowhires\] that the fraction of old population found for each galaxy does not change significantly when using the high resolution models instead of the low resolution ones (bottom left panel). The same can be said about the reddening (bottom right panel).
Regarding the fraction of intermediate age stellar populations (top right panel), discrepancies are clearly observed, but most points are still distributed along the one-to-one relationship. The test fails, as expected, for the fraction of young stellar population (top left panel). Indeed, @maraston+11 found that the largest discrepancies between the old models and the new ones are in the NIR, for ages around 10 $-$ 20 Myr, where the red supergiants contribute strongly. Due to the short duration of their phase, a large scatter is expected for models that try to predict their contribution to the continuum. Besides, since the higher resolution models do not have the features expected for this wavelength region, they tend to be bluer than the low resolution models. This would explain why the synthesis with these models finds a smaller contribution of x$_Y$ to the spectra. Given the limitations of the spectral library used for the high resolution models in the NIR, it is not clear that they bring an improvement to the method that compensates for the fact that they are available only for solar metallicity. For this reason, we consider that the low resolution models of @maraston05 are still the best SSP option for this wavelength range.

![image](compare_lowhires.ps){width="162mm"}

Spectral Synthesis and the NIR Indexes
--------------------------------------

Since the development of stellar population models that take the TP-AGB phase into account more rigorously, the CN molecular band has been advertised as unambiguous evidence of an intermediate-age stellar population [@maraston05; @riffel+07; @riffel+09]. This issue, however, has not yet been fully assessed on observational grounds, mostly because the results gathered up to now are based on samples containing only a few star-forming galaxies or active galactic nuclei.
@riffel+09, for example, presented a histogram comparing the fraction of intermediate age stellar population of Seyfert galaxies (obtained by stellar population synthesis) with the presence or absence of the CN band. They found a slight tendency for galaxies that display CN to show a higher fraction of intermediate-age stellar population. Note, however, that they also report galaxies with no CN and high fractions of x$_I$, as well as galaxies with CN and low fractions of x$_I$. Later, @zibetti+12 claimed that the two main signatures of the presence of TP-AGB stars predicted by the @maraston05 models, CN at 1.41$\mu$m and C$_2$ at 1.77$\mu$m, are not detected in a sample of 16 post-starburst galaxies. Figure \[CN\_comp\] shows the relation between the presence of the CN band and the intermediate age population for our galaxies. Only nuclear apertures were considered. The values of the CN index (equivalent width as defined by @riffel+07) were obtained from @martins+13, with an average value of 18.3 Å. We divided our sample into two groups: galaxies with low values of CN (values below the average) and galaxies with high values of CN (values above the average). ![Histogram comparing the intermediate age component (x$_I$) for the nuclear aperture of the galaxies with low values of CN (crosshatch) and high values of CN (shaded in gray). []{data-label="CN_comp"}](cn_histogram.ps){width="86mm"} From the histogram, there is no clear trend between the CN index and the fraction of intermediate-age stellar population within a galaxy. We found galaxies with a very high fraction of intermediate-age stellar population and no measured CN, and galaxies where the CN is very strong but only a small fraction of intermediate-age stars was found. As pointed out by @riffel+09, this result does not necessarily imply that the CN is not a suitable tracer of intermediate-age stellar populations. The proximity of strong permitted lines to the CN band may, for instance, hinder its observation.
This is the case of galaxies where He[i]{} 1.081$\mu$m and Pa$\gamma$ in emission are strong. The red wings of these two lines will partially or totally fill up the CN band. As a result, the derived CN index will be small. This problem is particularly relevant in AGNs with broad permitted lines, which is not our case. Another possibility is that for galaxies with low $z$ ($z<$0.01), the CN band falls in a region of strong telluric absorption. Residuals left after the division by the telluric standard star may affect the S/N around 1.1 $\mu$m. In this case, the measurement of the CN band may be subject to large uncertainties. For our sample, we estimate that low S/N around 1.1 $\mu$m can be an issue in NGC1232 (x$_{\rm i}$=97.5%), NGC2964 (x$_{\rm i}$=87.5%), NGC3184 (x$_{\rm i}$=78.8%), NGC4461 (x$_{\rm i}$=27.8%) and NGC7817 (x$_{\rm i}$=15%). Note, however, that NGC2964 and NGC3184 display emission lines in their NIR spectra, giving additional support to the result of a large fraction of intermediate-age stars in their nuclei (see next section). We found that although the CN band is a potential tracer of the intermediate age population, in practice its usefulness can be limited by the proximity of emission lines and by residuals left after the division by the telluric standard.

Stellar population synthesis results and the presence of emission lines
-----------------------------------------------------------------------

Although no correlation was found between the stellar population synthesis and the NIR indexes, we were able to identify a relationship between the stellar population and the presence of emission lines in the galaxy sample. Galaxies that display no emission lines in the NIR were found to have, in general, very old stellar populations. Assuming that the spectra recorded for these sources are representative of the bulk of their stellar populations, these objects are either dominated by x$_O$ or x$_O$ represents a large fraction of the emitted light.
The weak nebular component found in the optical region contributes little to the NIR spectrum, and/or the regions that emit these lines were not present inside the slit. Most of the important young population contributions were found in apertures with strong emission lines. Only a few were found in off-nuclear apertures with no signs of emission lines, but this happens when the S/N of the spectrum is very low, which means larger uncertainties in the fit. This is reflected in the high [*adev*]{} values of these fits. @martins+13 compared emission line properties from the optical observations and from the NIR. They found that for a subsample of galaxies the emission lines in the NIR were much weaker than in the optical, sometimes even absent. They concluded that this was mainly due to the differences in aperture between these two sets of data: the optical aperture covers five times the area of the NIR one. In many of these galaxies the star formation is probably not nuclear, but circumnuclear, or located in hot spots outside the nucleus. Based on this comparison, they classified the galaxy sample into four classes:

- Weak emission lines in the optical, no emission lines in the NIR, either at the nucleus or in the extended region (class 1): NGC514, NGC674, NGC6181, NGC7448.

- Strong emission lines in the optical, no emission lines in the NIR, either at the nucleus or in the extended region (class 2): NGC278, NGC7817.

- Strong emission lines in the optical, evidence of weak to moderate-intensity lines in the NIR (nucleus and/or extended region; class 3): NGC783, NGC864, NGC1174, NGC2964, NGC3184, NGC4303, NGC4845, NGC5457, NGC5905, NGC7080, NGC7798.

- Strong emission lines in the optical, moderate to strong emission lines in the NIR (class 4): NGC2339, NGC2342, NGC2903, NGC4102, NGC6946.

Figure \[synperclass\] shows the synthesis results for each of these classes. Only results for which the [*adev*]{} value was smaller than 5.0 were included in these plots.
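As a minimal sketch of this selection, the grouping can be expressed as follows. The class assignments follow the list above, but the $adev$ and x$_Y$ values here are illustrative placeholders, not the measured ones:

```python
from collections import defaultdict

# (galaxy, aperture, emission-line class, adev, x_Y in %) -- placeholder values
fits = [
    ("NGC514",  "nuc", 1, 5.7,  5.0),
    ("NGC278",  "nuc", 2, 1.6,  0.0),
    ("NGC4303", "nuc", 3, 2.98, 0.0),
    ("NGC2339", "nuc", 4, 3.11, 48.0),
    ("NGC2903", "nuc", 4, 2.72, 0.0),
]

# Keep only fits with adev < 5.0, then group the young fraction x_Y by class.
by_class = defaultdict(list)
for galaxy, aperture, cls, adev, x_young in fits:
    if adev < 5.0:
        by_class[cls].append(x_young)

for cls in sorted(by_class):
    values = by_class[cls]
    print(f"class {cls}: mean x_Y = {sum(values) / len(values):.1f}%")
# -> class 4 line reads: class 4: mean x_Y = 24.0%
```

With these placeholder numbers the class 1 fit is excluded by the $adev$ cut, mimicking the unreliable low-S/N fits discussed above.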
This figure shows the distributions of x$_Y$, x$_I$, x$_O$, the average age (t$_{av}$) and average metallicity (Z$_{av}$) of the stellar population, and the star formation rate in the last 100 Myr (SFR$_{100}$), as found by the synthesis. All class 1 and class 2 objects have very old average ages. For all of these objects the synthesis found a very old stellar population in the nucleus. The lack of emission lines in the NIR was interpreted by @martins+13 as a consequence of the difference between the slit sizes of the optical observations and those in the NIR. Since the slit used for the NIR observations was much smaller, the ionisation source was missed by these observations, which agrees with the old ages found by the synthesis. NGC 6181 seems to be a particular exception, since a contribution of x$_Y$ to the nucleus larger than 30$\%$ was found. However, its average age is still high. For the off-nuclear apertures of NGC 514 and NGC 674 the spectra were very noisy and the synthesis results cannot really be trusted. Class 3 and 4 objects have in general much lower average ages. All class 4 and most class 3 objects have a significant contribution of a young stellar population. It is also very interesting that this contribution tends to be higher in the apertures where the emission lines are stronger. This young stellar population should be responsible for the gas ionisation. We also found higher star formation rates in the last 100 Myr in objects from classes 3 and 4. There are no significant differences in metallicity between the classes. However, the non-star-forming galaxies seem to have lower metallicities than the galaxies classified as starburst or star-forming. It is also interesting to comment on the results found for the non-star-forming galaxies NGC 221, NGC 1232, NGC 2950, NGC 4179 and NGC 4461. The first four galaxies, as expected, are dominated by x$_O$. The lack of young stars able to ionise the gas should explain the absence of emission lines in their spectra.
However, NGC 4461 has an important contribution of x$_Y$. The result for this galaxy is somewhat puzzling because the quality of the fit, as measured by the [*adev*]{} value, is not bad (at least for the nuclear aperture). We do not expect this galaxy to have any contribution of a young, ionising stellar population to its continuum; otherwise, strong emission lines should be seen. One possibility is that this young population is indeed real and the galaxy has no gas (or not enough to produce strong emission lines). Taking all this together, we find that overall the stellar population synthesis recovers what would be expected of star-forming galaxies. However, individual results have to be carefully verified.

![image](syn_per_class.ps){width="178mm"}

Stellar Extinction
------------------

@martins+13 calculated the gas extinction in the NIR using the hydrogen ratio Pa$\beta$/Br$\gamma$, and found it to be generally larger than the gas extinction calculated from optical emission lines. This result agreed with previous results from the literature, and is explained by the fact that the NIR probes larger optical depths than the optical range. One of the outputs of the stellar population synthesis is A$_V$, the extinction derived from the continuum. Figure \[extinction\] shows the comparison of the values we obtained with the gas extinction from @martins+13. Filled circles represent the nuclear apertures and open circles the off-nuclear ones. The extinction we found from the stars in the NIR traces the extinction from the gas in the optical. This is somewhat expected. The extinction obtained from the gas in the optical is higher than the one derived from the stars in this same wavelength region because the gas emission comes from a dustier region than the stars.
However, the NIR emission from the stars should also probe dustier regions than the optical emission from the stars, which means that the emission from the gas in the optical and the continuum emission from the stars in the NIR should probe similar optical depths. One has to keep in mind, however, that they do not have to be associated with the same spatial positions, as they differ from the extinction from the stars in the optical for different reasons: the gas emission intrinsically arises from dustier regions, while the stars in the NIR simply probe higher optical depths.

![image](extinction_complete_calzetti.ps){width="182mm"}

@martins+13 also showed that the continua of the galaxies in the NIR have a diversity of shapes and steepness. In Figure 6 of their paper, they show the nuclear spectra of the galaxies, organised by their steepness. They suggest that the shapes are related to the presence or absence of dust and of a young stellar population. Here we present in Figure \[ordercont\] the average age and the stellar reddening obtained by the synthesis for the galaxies, now organised in steepness order, from the steepest continua (left) to the flatter ones (right). From this figure it is clear that the continuum shape is closely related to the stellar extinction (bottom panel), and that the presence of younger stellar populations makes the continuum flatter. ![Relation between the average age (top panel) and the continuum extinction in the NIR (bottom panel) with the continuum steepness. The abscissa shows the galaxies from the sample in order of continuum steepness, from the steepest ones (left) to the flatter ones (right).[]{data-label="ordercont"}](order_continuum.ps){width="86mm"}

Comparison with Far-Infrared Tracers
-------------------------------------

Galaxies with active star formation present a peak of emission around 60 $\mu$m, which corresponds to the temperature of the dust heated by this process.
The young OB stars that dominate the starburst radiate primarily in the optical and ultraviolet, but the surrounding gas and dust reprocess this radiation and thus radiate strongly at thermal wavelengths in the far-infrared. The far-IR luminosity (LIR) is thus indicative of the magnitude of recent star formation activity [@telesco88; @lonsdale+84]. We can then try to correlate LIR with our results. Again, one has to be careful with the aperture sizes of each set of observations, to be sure the results are consistent. We obtained IRAS (Infrared Astronomical Satellite) fluxes from @surace+04, and calculated LIR as defined in @sanders+96. As expected, no direct correlation was found between these values and the fractions of x$_Y$ or x$_I$ found by the synthesis. This is directly related to the fact that for many of the galaxies in our sample star formation is not a nuclear phenomenon. Figure \[lirdistrib\] shows the LIR distribution separated by the classes defined by @martins+13. From this figure it is clear that classes 3 and 4, which have the strongest emission lines and higher fractions of younger populations, also present higher LIR values in comparison with classes 1 and 2. Non-star-forming galaxies clearly have the lowest LIR luminosities, as expected.

![image](LIR_per_class_new.ps){width="172mm"}

Summary and conclusions
=======================

We performed stellar population synthesis on a sample of long-slit NIR spectra of 23 star-forming and 5 non-star-forming galaxies to test the predictions of the stellar population models available in this wavelength region. The stellar population synthesis code used for this work was STARLIGHT. The SSP models chosen as a base for STARLIGHT were the ones from @maraston05, because they include the effect of the TP-AGB phase, which is crucial to model the stellar population in this wavelength region.
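For reference, the far-infrared luminosity used in the previous subsection can be computed from the four IRAS bands following the @sanders+96 prescription. The fluxes below are illustrative values, not the @surace+04 measurements:

```python
import math

def l_ir(f12, f25, f60, f100, d_mpc):
    """LIR (8-1000 um, in L_sun) from IRAS fluxes in Jy and a luminosity
    distance in Mpc, following the Sanders & Mirabel (1996) prescription."""
    # Broadband far-IR flux in W m^-2 from the four IRAS bands:
    f_ir = 1.8e-14 * (13.48 * f12 + 5.16 * f25 + 2.58 * f60 + f100)
    d_m = d_mpc * 3.0857e22                   # Mpc -> m
    l_watts = 4.0 * math.pi * d_m ** 2 * f_ir  # W
    return l_watts / 3.828e26                  # W -> L_sun

# Illustrative fluxes (Jy) and distance (Mpc); gives on the order of 1e10 L_sun.
print(f"LIR = {l_ir(1.0, 2.0, 20.0, 40.0, 15.0):.2e} L_sun")
```

The 60 $\mu$m band dominates the sum for typical star-forming galaxies, consistent with the emission peak discussed above.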
So far, these (and their most recent version in @maraston+11) are the only published models that take this effect into account. We compared the synthesis results from the older version of the models [@maraston05] with the new version [@maraston+11]. We found discrepancies in the fraction of young stellar populations, which were mentioned by the authors themselves and are due to the uncertainties in the treatment of the red supergiants. Given the limitations of these models, we believe that the lower resolution models from @maraston05 are still the best set of models for stellar population synthesis in the NIR. No hot dust contribution was found for any galaxy, ruling out the presence of a possible hidden AGN in any of them. We found no correlation between the synthesis results and the NIR indexes measured by @martins+13. Although some of the signatures, like the CN band, can be potential tracers of the intermediate age population, we believe that their use in practice is still complicated, mostly because of observational limitations. The results from the synthesis seem to be in very good agreement with the emission line measurements. For galaxies with no emission lines detected in the NIR, no significant young stellar population was found. They have their continuum dominated by old stellar populations. In addition, the apertures with strong emission lines have a large contribution of a young stellar population. This becomes more obvious when comparing the results from the synthesis separated by the classes defined by @martins+13. The classes were created based on the strength of the emission lines in the NIR, with class 1 objects presenting no detected emission lines in the NIR and class 4 having the strongest emission lines measured. Class 1 and 2 objects have a higher contribution of older stellar populations and higher average ages. Class 3 and 4 objects have a higher contribution of young stellar populations and lower average ages.
From the synthesis we also obtained the extinction from the continuum in the NIR, which we found to trace the extinction from the gas in the optical. We also find that the continuum steepness is related to this extinction, in the sense that the steepest continua tend to have smaller extinction values than the flatter ones.

Acknowledgments {#acknowledgments .unnumbered}
===============

The authors thank the reviewer of the paper for his valuable comments. This research has been partially supported by the Brazilian agency FAPESP (2011/00171-4). ARA thanks CNPq for financial support through grant 307403/2012-2. RR thanks FAPERGS (ARD 11/1758-5) and CNPq (304796/2011-5) for financial support.

Alonso-Herrero A., Rieke M. J., Rieke G. H., Shields J. C., 2000, ApJ, 530, 688
Asari N. V., Cid Fernandes R., Stasińska G., Torres-Papaqui J. P., Mateus A., Sodré L., Schoenell W., Gomes J. M., 2007, MNRAS, 381, 263
Balogh M. L., Morris S. L., Yee H. K. C., Carlberg R. G., Ellingson E., 1997, ApJ, 488, L75
A. J., Ward M. J., Davies R. I., 2001, 326, 403
Calzetti D., Armus L., Bohlin R. C., Kinney A. L., Koornneef J., Storchi-Bergmann T., 2000, ApJ, 533, 682
Cardelli J. A., Clayton G. C., Mathis J. S., 1989, ApJ, 345, 245
Cid Fernandes R. et al., 2004, ApJ, 605, 105
Cid Fernandes R., Mateus A., Sodré L., Stasińska G., Gomes J. M., 2005, MNRAS, 358, 363
Cid Fernandes R. et al., 2009, in Revista Mexicana de Astronomia y Astrofisica Conference Series, Vol. 35, pp. 127-132
Coziol R., Doyon R., Demers S., 2001, MNRAS, 325, 1081
Coziol R., Torres C. A. O., Quast G. R., Contini T., Davoust E., 1998, ApJS, 119, 239
Cushing M. C., Vacca W. D., Rayner J. T., 2004, PASP, 116, 362
H., Rigopoulou D., Lutz D., Genzel R., Sturm E., Moorwood A. F. M., 2005, 441, 999
Engelbracht C. W., Rieke M. J., Rieke G. H., Kelly D. M., Achtermann J. M., 1998, ApJ, 505, 639
Fischera J., Dopita M. A., Sutherland R. S., 2003, ApJ, 599, L21
Goldader J. D., Goldader D. L., Joseph R. D., Doyon R., Sanders D. B., 1997, AJ, 113, 1569
Gu Q., Melnick J., Cid Fernandes R., Kunth D., Terlevich E., Terlevich R., 2006, MNRAS, 366, 480
Ho L. C., Filippenko A. V., Sargent W. L., 1995, ApJS, 98, 477
Ho L. C., Filippenko A. V., Sargent W. L. W., 1997, ApJS, 112, 315
Ilbert O. et al., 2010, ApJ, 709, 644
Ivanov V. D., Rieke G. H., Groppi C. E., Alonso-Herrero A., Rieke M. J., Engelbracht C. W., 2000, ApJ, 545, 190
Kennicutt R. C., 1988, ApJ, 334, 144
Kennicutt R. C., 1992, ApJ, 388, 310
Kotilainen J. K., Hyvönen T., Reunanen J., Ivanov V. D., 2012, MNRAS, 425, 1057
Larkin J. E., Armus L., Knop R. A., Soifer B. T., Matthews K., 1998, ApJS, 114, 59
Lonsdale C. J., Persson S. E., Matthews K., 1984, ApJ, 287, 95
Maraston C., 2005, MNRAS, 362, 799
Maraston C., Strömbäck G., 2011, MNRAS, 418, 2785
Marigo P., Girardi L., Bressan A., Groenewegen M. A. T., Silva L., Granato G. L., 2008, A&A, 482, 883
Martins L. P., Riffel R., Rodríguez-Ardila A., Gruenwald R., de Souza R., 2010, MNRAS, 406, 2185
Martins L. P., Rodríguez-Ardila A., Diniz S., Gruenwald R., de Souza R., 2013, MNRAS
Mateus A., Sodré L., Cid Fernandes R., Stasińska G., Schoenell W., Gomes J. M., 2006, MNRAS, 370, 721
Origlia L., Moorwood A. F. M., Oliva E., 1993, A&A, 280, 536
Pickles A. J., 1998, PASP, 110, 863
Rayner J. T., Toomey D. W., Onaka P. M., Denault A. J., Stahlberger W. E., Vacca W. D., Cushing M. C., Wang S., 2003, PASP, 115, 362
Reunanen J., Kotilainen J. K., Prieto M. A., 2002, MNRAS, 331, 154
Reunanen J., Kotilainen J. K., Prieto M. A., 2003, MNRAS, 343, 192
Reunanen J., Tacconi-Garman L. E., Ivanov V. D., 2007, MNRAS, 382, 951
Riffel R., Bonatto C., Cid Fernandes R., Pastoriza M. G., Balbinot E., 2011, MNRAS, 411, 1897
Riffel R., Pastoriza M. G., Rodríguez-Ardila A., Bonatto C., 2009, MNRAS, 400, 273
Riffel R., Pastoriza M. G., Rodríguez-Ardila A., Maraston C., 2007, ApJ, 659, L103
Riffel R., Pastoriza M. G., Rodríguez-Ardila A., Maraston C., 2008, MNRAS, 388, 803
Riffel R., Riffel R. A., Ferrari F., Storchi-Bergmann T., 2011, MNRAS, 416, 493
Riffel R. A., Storchi-Bergmann T., Riffel R., Pastoriza M. G., 2010, ApJ, 713, 469
Sanders D. B., Mirabel I. F., 1996, ARA&A, 34, 749
Schlafly E. F., Finkbeiner D. P., 2011, ApJ, 737, 103
Storchi-Bergmann T., Riffel R. A., Riffel R., Diniz M. R., Borges Vale T., McGregor P. J., 2012, ApJ, 755, 87
Surace J. A., Sanders D. B., Mazzarella J. M., 2004, AJ, 127, 3235
Telesco C. M., 1988, ARA&A, 26, 343
Vacca W. D., Cushing M. C., Rayner J. T., 2003, PASP, 115, 389
Vanzi L., Rieke G. H., 1997, ApJ, 479, 694
Worthey G., Ottaviani D. L., 1997, ApJS, 111, 377
Zibetti S., Gallazzi A., Charlot S., Pierini D., Pasquali A., 2013, MNRAS, 428, 1479

\[lastpage\]

[^1]: E-mail: lucimara.martins@cruzeirodosul.edu.br

[^2]: The code and its manual can be downloaded from http://astro.ufsc.br/starlight/node/1
--- abstract: 'We study the quantum effects induced by bulk scalar fields in a model with a de Sitter (dS) brane in a flat bulk (the Vilenkin-Ipser-Sikivie model) in more than four dimensions. In ordinary dS space, it is well known that the stress tensor in the dS invariant vacuum for an effectively massless scalar ($m_{{\rm eff}}^2=m^2+\xi {\cal R}=0$ with ${\cal R}$ the Ricci scalar) is infrared divergent except for the minimally coupled case. The usual procedure to tame this divergence is to replace the dS invariant vacuum by the Allen-Folacci (AF) vacuum. The resulting stress tensor breaks dS symmetry but is regular. Similarly, in the brane world context, we find that the dS invariant vacuum generates a ${{\langle}T_{\mu\nu}{\rangle}}$ that is divergent everywhere when the lowest lying mode becomes massless, except for the massless minimally coupled case. A simple extension of the AF vacuum to the present case avoids this global divergence, but ${{\langle}T_{\mu\nu}{\rangle}}$ remains divergent along a timelike axis in the bulk. In this case, singularities also appear along the light cone emanating from the origin in the bulk, although they are so mild that ${{\langle}T_{\mu\nu}{\rangle}}$ stays finite except for non-minimally coupled cases in four or six dimensions. We discuss implications of these results for bulk inflaton models. We also study the evolution of the field perturbations in the dS brane world. We find that perturbations grow linearly with time on the brane, as in the case of ordinary dS space. In the bulk, they are asymptotically bounded.' 
address: - '$^1$ Yukawa Institute for Theoretical Physics, Kyoto University, Kyoto 606-8502, Japan' - '$^2$ Department of Physics, Kyoto University, Kyoto 606-8502, Japan' author: - 'Oriol Pujol[à]{}s$^1$[^1] and Takahiro Tanaka$^{2}$[^2]' title: | Massless scalar fields and infrared divergences\ in the inflationary brane world --- Introduction ============ The brane world (BW) scenario [@rsI; @rsII] has been intensively studied in recent years. Little is known yet concerning the quantum effects from bulk fields in cosmological models [@wade; @nojiri; @kks; @hs]. Quite generically, one expects that local quantities like ${{\langle}T_{\mu\nu}{\rangle}}$ or ${{\langle}\phi^2 {\rangle}}$ can be large close to the branes, due to the well-known divergences appearing in Casimir energy density computations. This has been confirmed, for example, in [@knapman; @romeosaharian] for flat branes. These divergences are of ultraviolet (UV) nature and do not contribute to the force. Hence, they are ignored in Casimir force computations. However, they are relevant to the BW scenario since they may induce large backreaction, and are worthy of investigation. In this article, we shall shed light on another aspect of objects like ${{\langle}T_{\mu\nu}{\rangle}}$ in the BW. We shall point out that they can suffer from infrared (IR) divergences as well. These divergences arise when there is a zero mode in the spectrum of bulk fields in brane models of the RSII type with a dS brane [@rsII; @gasa]. The situation is analogous to the case in dS space without a brane. It is well known that light scalars in dS develop an IR divergence in the dS invariant vacuum. The main purpose of this article is to explore the effects of scalar fields with light modes in a BW cosmological setup of the RSII type [@rsII].
Considering the massless limit of a scalar field in an inflating BW is especially well motivated in the context of ‘bulk inflaton’ models [@kks; @hs; @shs; @hts; @koyama], in which the dynamics of a bulk scalar drives inflation on the brane. In the simplest realizations, the brane geometry is close to dS and the bulk scalar is nearly massless. Let us recall what happens in the usual dS case [@bida]. For light scalars with $m_{{\rm eff}}\ll H$ (where $H$ is the Hubble constant) in dS, ${\langle}\phi^2{\rangle}$ and ${{\langle}T_{\mu\nu}{\rangle}}$ in the dS invariant vacuum develop a global IR divergence $\sim1/m_{{\rm eff}}^2$. To be precise, this depends on whether the field is minimally coupled or not. What we have in mind is a generic situation in which the effective mass $m_{{\rm eff}}^2=m^2+\xi {\cal R}$ is small, and $\xi\neq0$. In these cases ${{\langle}T_{\mu\nu}{\rangle}}$ diverges as mentioned. The point is that in the generic massless limit, another vacuum must be chosen to avoid the global IR divergence. This process breaks dS invariance [@af], but this need not bother us. The simplest choice is the Allen Follaci (AF) vacuum, in which the stress tensor is globally finite and everywhere regular. The massless minimally coupled case is special [@gaki], and it admits a different treatment which gives finite ${{\langle}T_{\mu\nu}{\rangle}}$ without violating dS invariance. In the BW scenario [@rsII], the bulk scalar is decomposed into a continuum of KK modes and bound states. Here we consider the case in which there is a unique bound state with mass $m_d$. If $m_d$ is light, ${\langle}\phi^2{\rangle}$ and ${{\langle}T_{\mu\nu}{\rangle}}$ for the dS invariant vacuum will also diverge like $1/m_d^2$. In this case, again, one will be forced to take another vacuum state like the AF vacuum. A natural question is then how the stress tensor behaves in such a vacuum in the BW.
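For orientation, the coefficient of this global IR divergence is explicit in the familiar four-dimensional case; we quote the standard result here for reference (conventions may differ by factors of order unity in the literature): for a light field in the dS invariant vacuum, $${\langle}\phi^2 {\rangle}\simeq {3H^4\over 8\pi^2\, m_{{\rm eff}}^2}~,\qquad m_{{\rm eff}}^2=m^2+\xi{\cal R}\ll H^2~,$$ which blows up as $m_{{\rm eff}}\to0$. In ${{\langle}T_{\mu\nu}{\rangle}}$ this divergent piece enters multiplied by $m^2$ and $\xi$, which is why the stress tensor stays finite in the exactly minimally coupled case but diverges for generic small $m_{{\rm eff}}$ with $\xi\neq0$.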
Also, one might expect singularities on the light cone emanating from the center (the fixed point under the action of the dS group) if we recall that the field perturbations for a massless scalar in dS grow like ${{\langle}\phi^2 {\rangle}}\sim\chi$, where $\chi$ is the proper time in dS [@vilenkinford; @linde; @starobinsky; @vilenkin] (see also [@hawkingmoss]). The light cone in the RSII model corresponds to $\chi\to\infty$. Before we start our discussion, we should mention a previous calculation given in Ref. [@xavi]. In that paper the stress tensor for a massless minimally coupled scalar was obtained in four dimensions, in the context of open inflation. Montes showed that ${{\langle}T_{\mu\nu}{\rangle}}$ can be regular everywhere except on the bubble. As we will see, these properties hold in other dimensions as well, but only for massless minimally coupled fields. For simplicity, we consider an extremal case of the RSII model [@rsII] in which the bulk curvature, and hence the bulk cosmological constant, is negligible. We take into account the gravitational field of the brane by imposing Israel’s matching conditions. The resulting spacetime can be constructed by the ‘cut-and-paste’ technique. Imposing mirror symmetry, one cuts the interior of a dS brane in Minkowski space and pastes it to a copy of itself (see Fig. \[fig:mink\]). Such a model was introduced in the context of bubble nucleation by Vilenkin [@v] and by Ipser and Sikivie [@is], and we shall refer to it as ‘the VIS model’. This article is organized as follows. In Section \[sec:vis\], we describe the VIS model and introduce a bulk scalar field with generic bulk and brane couplings. The Green’s function is obtained first for the case when the bound state is massive, $m_d>0$. The form of ${{\langle}T_{\mu\nu}{\rangle}}$ in the limit $m_d\to0$ is also obtained. In Section \[sec:massless\], we consider an exactly massless bound state, $m_d=0$, and we present the divergences of the AF vacuum.
The case when the bulk mass vanishes is technically simpler and explicit expressions for ${{\langle}T_{\mu\nu}{\rangle}}$ can be obtained. This is done in Section \[sec:M=0\]. With this, we describe the evolution of the field perturbations in Section \[sec:pert\], and conclude in Section \[sec:concl\]. $$\begin{array}{ccc} \includegraphics[width=5cm]{minkowski4.eps} &\qquad &\includegraphics[width=3.7cm]{VIS3.eps} \nonumber\\[-5mm] {\rm (a)}&\qquad&{\rm (b)} \end{array}$$ Scalar fields in the VIS Model {#sec:vis} ============================== In this Section we consider a generic scalar field propagating in the VIS model, describing a gravitating brane in an otherwise flat space [@v; @is]. Specifically, the space time consists of two copies of the interior of the brane glued at the brane location, as illustrated in Fig. \[fig:mink\]. In the usual Minkowski spherical coordinates the metric is $ds^2=-d{T}^2+d{R}^2+{R}^2 d\Omega_{(n)}^2$, where $d\Omega_{(n)}^2$ stands for the line element on a unit $n$ sphere. From the symmetry, it is convenient to introduce another set of coordinates. The Rindler coordinates, defined by $$\begin{array}{ll} {R}= r \cosh \chi~, \\ {T}= r \sinh \chi~, \end{array}$$ cover the exterior of the light cone emanating from ${R}={T}=0$. In terms of them the brane location is simply $r=r_0$, and the metric looks like $$ds^2=dr^2+r^2 dS_{({n}+1)}^2~, $$ where $dS_{({n}+1)}^2$ is the line element of de Sitter (dS) space of unit curvature radius. Thus, the Hubble constant on the brane is $H=1/r_0$. In order to cover the interior of the light cone, we introduce the ‘Milne’ coordinates according to $$\begin{array}{ll} {R}= {\tau}\sinh \psi~, \\ {T}= {\tau}\cosh \psi ~. \end{array}$$ In these coordinates, the metric is $ds^2=-d{\tau}^2+{\tau}^2[d\psi^2+\sinh^2\psi\,d\Omega_{(n)}^2]$. Note that we can go from the Rindler to the expanding (contracting) Milne regions making the continuation $r=\pm i {\tau}$ and $\chi= \psi \mp (\pm)i\pi /2$. 
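As a quick consistency check of these coordinate patches, one can verify symbolically that the Rindler map turns the $({T},{R})$ part of the flat metric into $dr^2-r^2d\chi^2$; the angular factor ${R}^2=r^2\cosh^2\chi$ then completes $r^2\,dS_{({n}+1)}^2$. A minimal sympy sketch (the variable names are ours):

```python
import sympy as sp

r, chi = sp.symbols('r chi', positive=True)

# Rindler patch of the (T, R) plane: R = r cosh(chi), T = r sinh(chi)
T = r * sp.sinh(chi)
R = r * sp.cosh(chi)

# Pull back ds^2 = -dT^2 + dR^2 onto the (r, chi) coordinates
g_rr = -sp.diff(T, r)**2 + sp.diff(R, r)**2
g_cc = -sp.diff(T, chi)**2 + sp.diff(R, chi)**2
g_rc = -sp.diff(T, r) * sp.diff(T, chi) + sp.diff(R, r) * sp.diff(R, chi)

assert sp.simplify(g_rr) == 1          # ds^2 = dr^2 ...
assert sp.simplify(g_cc + r**2) == 0   # ... - r^2 dchi^2 (dS part, H = 1/r)
assert sp.simplify(g_rc) == 0          # no cross term
```

The brane sits at $r=r_0$, so its induced metric is $r_0^2$ times the unit-radius dS line element, consistent with $H=1/r_0$.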
Here upper and lower signatures correspond to $+i\epsilon$ and $-i\epsilon$ prescriptions, respectively. We consider a scalar field even under $Z_2$ symmetry, with generic couplings described by the action $$\label{action} S=-{1\over2}\int_{\rm Bulk} \left[(\partial \phi)^2 + \left( M^2 +\xi {\cal R}\right) \phi^2\right] -\int_{\rm Brane} \left[{\mu}+2\xi {\rm tr} \,{\cal K}\right] \,\phi^2~,$$ where $M$ and $\mu$ are the bulk and brane masses, ${\cal R}$ is the Ricci scalar and ${\rm tr} \,{\cal K}$ is the trace of the extrinsic curvature. The latter arises because the curvature scalar contains $\delta$ function contributions on the brane, and the factor 2 in front of it is due to the $Z_2$ symmetry. The stress tensor for a classical field configuration can also be split into bulk and surface parts as [@bida; @saharian] $$\begin{aligned} \label{tmnclass} T_{\mu\nu}^{bulk}&=&\left(1-2\xi\right)\,\partial_\mu\phi\partial_\nu\phi-{1-4\xi\over2}\; \left[\left(\partial\phi\right)^2+\left(M^2+\xi {\cal R}\right)\phi^2 \right]g_{\mu\nu} -2\xi{\cal R}_{\mu\nu}\phi^2 -2\xi\phi\nabla_\mu\nabla_\nu\phi \cr T_{ij}^{brane}&=&\left[(4\xi-1)\,{\mu}_{{\rm eff}}\,h_{ij}-2\xi\, {\cal K}_{ij}\right]\,\phi^2\,\delta(r-r_0)~,\end{aligned}$$ where $h_{ij}$ is the induced metric on the brane, ${\cal R}_{\mu\nu}$ the Ricci tensor, and the equation of motion has been used.[^3] Here we have introduced an [*effective brane mass*]{} as $$\label{meff} {\mu}_{{\rm eff}}\equiv {\mu}+2(n+1)\xi H~,$$ where $H=1/r_0$ is the Hubble constant on the brane. Then, the v.e.v.
${{\langle}T_{\mu\nu}{\rangle}}$ in point splitting regularization is computed as[^4] $$\begin{aligned} \label{splitting} {\langle}T_{\mu\nu}{\rangle}^{bulk}&=&{1\over2} {\cal T}_{\mu\nu} \left[G^{(1)}(x,x')\right]~,\end{aligned}$$ with $$\begin{aligned} \label{splitting1} {\cal T}_{\mu\nu} &\equiv&\lim_{x'\to x}\left\{(1-2\xi)\partial_\mu\partial'_\nu-{1-4\xi\over2}\; g_{\mu\nu}\left(g^{\lambda\sigma}\partial_\lambda\partial'_\sigma+M^2\right) -2\xi \nabla_\mu\nabla_\nu \right\},\end{aligned}$$ and $$\begin{aligned} \label{splitting2} {\langle}T_{ij}{\rangle}^{brane}&=&{1\over2}\Big[(4\xi-1)\,{\mu}_{{\rm eff}}\,h_{ij}-2\xi\, {\cal K}_{ij}\Big]\,\delta(r-r_0)\,G^{(1)}(x,x')\big|_{x'=x} ~,\end{aligned}$$ where $\partial'_\mu=\partial/\partial x'^\mu$. This expression is extended to the case with a nonzero bulk cosmological constant by replacing $M^2$ with $M^2+\xi{\cal R}$ and recovering the Ricci tensor term in Eq. (\[tmnclass\]). Spectrum {#sec:spectrum} -------- The Klein-Gordon equation following from the action (\[action\]) is separable into radial and dS parts, so we introduce the mode decomposition $\phi=\sum\int~{{\cal U}}_{p}(r)\,{\cal Y}_{p\ell m}(\chi,\Omega)$, where $m$ is a multi-index. The radial equation is $$\label{radial} \left[\partial_r^2+{{n}+1\over r}\partial_r+{1\over r^2} \left(p^2+{{n}^2\over 4}\right) -M^2 \right]{{\cal U}}_p(r)=0~,$$ while the brane terms can be encoded in the boundary condition $$\label{boundcond} \left. \left(\partial_r+{\mu}_{{\rm eff}}\right){{\cal U}}_p\right\vert_{r=r_0}=0, $$ where $Z_2$ symmetry has been imposed and the effective brane mass ${\mu}_{{\rm eff}}$ is given in Eq. (\[meff\]). The de Sitter part satisfies $$\left[\Box_{n+1} - \left(n/2\right)^2 -p^2 \right] {\cal Y}_{p\ell m}=0~.$$ Thus one obtains a tower of modes ${\cal Y}_{p\ell m}$ in dS with masses $m_{\textsc{kk}}^2=(n/2)^2+p^2$ in units of $H$. The mass spectrum is determined by the Schrödinger problem defined by Eqs. (\[radial\]) and (\[boundcond\]).
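As a cross-check of Eq. (\[radial\]), one can verify numerically that functions of the form $r^{-n/2}I_{\lambda}(Mr)$, with $p=i\lambda$ so that $p^2=-\lambda^2$, solve the radial equation. A sketch with arbitrary sample parameters (the variable names are ours):

```python
import numpy as np
from scipy.special import iv, ivp

n, M, lam = 3, 0.7, 1.2     # sample values; p = i*lam, hence p^2 = -lam^2

def residual(r):
    # Left-hand side of Eq. (radial) evaluated on U(r) = r^(-n/2) I_lam(M r)
    a = -n / 2.0
    z = M * r
    U = r**a * iv(lam, z)
    dU = a * r**(a - 1) * iv(lam, z) + r**a * M * ivp(lam, z, 1)
    d2U = (a * (a - 1) * r**(a - 2) * iv(lam, z)
           + 2 * a * r**(a - 1) * M * ivp(lam, z, 1)
           + r**a * M**2 * ivp(lam, z, 2))
    return d2U + (n + 1) / r * dU + ((-lam**2 + n**2 / 4.0) / r**2 - M**2) * U

for r in (0.3, 1.0, 2.5):
    assert abs(residual(r)) < 1e-10   # vanishes up to rounding error
```

The modified Bessel equation for $I_\lambda$ makes the residual vanish identically, which is what the assertions confirm at a few sample radii.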
It consists of a bound state plus a continuum of KK states with $p\geq0$ ($m_{KK}\geq n/2$). The radial part for the KK modes is of the form $$\label{kkwavefunct} {{\cal U}}^{\textsc{kk}}_p(r)=r^{-{n}/2}\left[A_p I_{ip}(Mr) + B_{p} I_{-ip}(Mr)\right],$$ with $A_p$ and $B_p$ determined by the boundary condition (\[boundcond\]) and continuum normalization, $2\int dr r^{n-1} {{\cal U}}^{\textsc{kk}}_p(r) {{\cal U}}^{\textsc{kk}}_{p'}(r) = \delta(p-p') $. ![ The shaded area corresponds to the values of ${\mu}_{{\rm eff}}$ and $M^2$ for which the bound state is normalizable ($m_d<n/2$) and non tachyonic, $m_d\geq0$. The thick (red) line corresponds to the massless case. The plot is for $n=3$. []{data-label="meffvsM2"}](meffvsM2_3.eps "fig:"){width="8cm"}\ The mass of the discrete spectrum is $$m_d^2=(n/2)^2+p_d^2 < (n/2)^2~,$$ and hence $p_d$ is pure imaginary. The normalizability implies that its wave function is $$\label{wavefunct} {{\cal U}}^{{bs}}(r)=N_d\; r^{-{n}/2} I_{-ip_d}(Mr)~,$$ with $-ip_d>0$. The boundary condition (\[boundcond\]) determines $p_d$ in terms of $M$ and ${\mu}_{{\rm eff}}$ according to $$\label{BSmassM} {\nu}\; I_{-ip_d}(M r_0)- M r_0 I_{-ip_d}'(M r_0)=0~, $$ where we introduced the combination $${\nu}\equiv{n\over2}-{{\mu}_{{\rm eff}}\over2H}.\label{mu}$$ In the limit $M r_0\ll1$ and ${\mu}_{{\rm eff}}r_0\ll1$, Eq. (\[BSmassM\]) implies that the mass of the bound state is $$\label{BSmassMsmall} (H m_d)^2= n H\,{{\mu}_{{\rm eff}}} + {n\over n+2}\; M^2 +{\cal O}\left( {\mu}_{{\rm eff}}^2, {\mu}_{{\rm eff}}M^2, M^4\right),$$ which agrees with the results of Ref. [@shs; @lasa]. Figure \[meffvsM2\] shows the values of $M^2$ and $\mu_{{\rm eff}}$ for which there exists a non-tachyonic ($-ip_d\leq n/2$) bound state. In this paper, we are mostly interested in the situation when the bound state is massless. This happens whenever Eq. 
(\[BSmassM\]) with $-ip_d$ replaced by $n/2$ holds, that is, when ${\nu}$ reaches the ‘critical’ value $$\label{muc} {\nu}_c= {Mr_0 I'_{{n}/2}(Mr_0)\over I_{{n}/2}(Mr_0)}~.$$ Green’s function {#sec:massiveg} ---------------- The renormalized $D$-dimensional Green’s function can be split into the bound state and KK contributions, $$G_{(ren)}^{(1)} = G^{{\textsc{kk}}}+G^{bs}~, \label{greg}$$ with $$\begin{aligned} G^{bs}&=&{{\cal U}}^{{bs}}(r){{\cal U}}^{{bs}}(r')G_{p_d\,(dS)}^{(1)}, \label{greg1}\\ G^{{\textsc{kk}}}&=&\displaystyle\int_0^\infty dp \left[{{\cal U}}^{\textsc{kk}}_p(r ) {{\cal U}}^{\textsc{kk}}_p(r')\right]^{ren}G_{p\,(dS)}^{(1)}~, \label{greg2}\end{aligned}$$ where $G_{p\,(dS)}^{(1)}$ denotes the Green’s function of a field with mass squared $(n/2)^2+p^2$ in $(n+1)$-dimensional dS space with $H=1$. It depends on $x$ and $x'$ through the invariant distance in dS, which we call $\zeta$. Its precise form is given in Appendix \[sec:GdS\]. The ‘renormalized’ product of the KK mode functions in Eq. (\[greg2\]) is $$\begin{aligned} \label{uumass} \left[{{\cal U}}^{\textsc{kk}}_p(r) {{\cal U}}^{\textsc{kk}}_p(r')\right]^{ren} &\equiv& {{\cal U}}^{\textsc{kk}}_p(r ) {{\cal U}}^{\textsc{kk}}_p(r')-{{\cal U}}^{Mink}_p(r ) {{\cal U}}^{Mink}_p(r')\cr &=& { i\,p \over 2\pi (r r')^{{n}/2}} {{\nu}K_{-ip}(Mr_0) -Mr_0 K'_{-ip}(Mr_0) \over {\nu}I_{-ip}(Mr_0) -Mr_0 I'_{-ip}(Mr_0)} I_{-ip}(Mr) I_{-ip}(Mr')+ (p\to -p)~.\end{aligned}$$ Here, ${{\cal U}}^{Mink}_p(r )\propto K_{ip}(Mr)$ is the Minkowski counterpart of (\[kkwavefunct\]). This effectively removes the UV divergent contribution to the Green’s function and guarantees that the renormalized Green’s function (\[greg\]) is finite in the coincidence limit. Since Eq. (\[uumass\]) is even in $p$, Eq.
(\[greg2\]) can be cast as $G^{{\textsc{kk}}}=\int_{-\infty}^{\infty}dp \,\left[{{\cal U}}_p^{\textsc{kk}}(r){{\cal U}}_p^{\textsc{kk}}(r')\right]^{ren}_1\big|_{p_d} \,G_{p\,(dS)}^{(1)}$ , where $\left[{{\cal U}}_p^{\textsc{kk}}(r){{\cal U}}_p^{\textsc{kk}}(r')\right]^{ren}_1$ stands for the first term in Eq. (\[uumass\]) only. This can be evaluated summing the residues. From Eq. (\[gds\]), the poles in $G_{p(dS)}^{(1)}$ in the upper $p$ plane are at $p=i(q+ n/2)$, with $q=0,1,2\dots$ (see Fig. \[contour\]). From Eqs. (\[uumass\]) and (\[BSmassM\]), we see that the KK radial part has a pole at the value of $p$ corresponding to the bound state, $p=p_d$. We shall now show that the residue is related to the bound state wave function as $$\label{resuu} 2\pi i\; Res \left[{{\cal U}}_p^{\textsc{kk}}(r){{\cal U}}_p^{\textsc{kk}}(r')\right]^{ren}_1\big|_{p_d} =-\,{{\cal U}}^{{bs}}(r) {{\cal U}}^{{bs}}(r')~.$$ Using the Wronskian relation $K_\lambda^{}(z) I_\lambda'(z)-K_\lambda'(z) I_\lambda^{}(z)=1/z$ and Eq. (\[BSmassM\]), it is straightforward to show that $$2\pi i\; Res \left[{{\cal U}}_p^{\textsc{kk}}(r){{\cal U}}_p^{\textsc{kk}}(r')\right]^{ren}_1 \big|_{p_d} = - { p_d / I_{-ip_d}(Mr_0) \over \left.\partial_p \left({\nu}I_{-ip}(Mr_0) -M r_0I'_{-ip}(Mr_0) \right)\right\vert_{p=p_d}}\;{ I_{-ip_d}(Mr) I_{-ip_d}(Mr')\over (r r')^{{n}/2}}.$$ The overall constant in the r.h.s. is nothing but the normalization constant in the bound state wave function (\[uumass\]), up to the sign. Using the Schrödinger equation (\[radial\]) and integrating by parts, we have $$(p^2-p_d^2) \int_0^{r_0} {dr\over r} I_{-ip_d}(Mr)I_{-ip}(Mr) = I_{-ip}(Mr_0) Mr_0 I_{-ip_d}'(Mr_0) -I_{-ip_d}(Mr_0) Mr_0 I_{-ip}'(Mr_0).$$ Setting $p=p_d$ after differentiation with respect to $p$, we find $$\label{radialnorm} {1\over N_d^2} = 2 \int_0^{r_0} {dr\over r} \left[I_{-ip_d}(Mr) \right]^2 ={1\over p_d} I_{-ip_d}(Mr_0) \left. 
\partial_{p} \left({\nu}I_{-ip}(Mr_0)-Mr_0 I'_{-ip}(Mr_0) \right)\right\vert_{p=p_d},$$ where we used Eq. (\[BSmassM\]). From this and the form of the bound state wave function (\[wavefunct\]), it is clear that Eq. (\[resuu\]) holds. Equation (\[resuu\]) implies that no term of the form ${{\cal U}}^{}_{{bs}}(r){{\cal U}}^{{bs}}(r') G_{p_d}^{(dS)}$ survives in the result. This is ‘fortunate’ because close to $r=0$, ${{\cal U}}^{{bs}}\sim r^{-n/2-ip_d}$, which is divergent. Thus, Eq. (\[resuu\]) guarantees that ${{\langle}T_{\mu\nu}{\rangle}}$ is regular on the light cone. Since only the poles from $G_{p\,(dS)}^{(1)}$ contribute to $G^{(ren)}$, it can be written as the integral over a contour ${\cal C}$ that runs above $p_d$ (see Fig. \[contour\]). Equation (\[gds\]) leads to an expression appropriate for taking the coincidence limit, $$\begin{aligned} \label{polesmass} G^{(ren)} &=& \int_{\cal C}dp \, \left[{{\cal U}}_p^{\textsc{kk}}(r){{\cal U}}_p^{\textsc{kk}}(r')\right]^{ren}_1 \,G_{p\,(dS)}^{(1)}\cr &=&- { S_{({n})}^{-1} \over 2^{{n}-1}\Gamma\left({{n}+1\over 2}\right)} \sum_{{k}=0}^{\infty} \sum_{j=0}^{\infty} {{\nu}K_{{n}/2+{k}+j}(Mr_0) -Mr_0 K'_{{n}/2+{k}+j}(Mr_0) \over {\nu}I_{{n}/2+{k}+j}(Mr_0) -Mr_0 I'_{{n}/2+{k}+j}(Mr_0)} \cr &&\qquad {\left({{n}\over 2}+{k}+j\right) I_{{n}/2+{k}+j}(Mr) I_{{n}/2+{k}+j}(Mr')\over (rr')^{{n}/2}} {(-1)^{k}\Gamma\left({n}+2{k}+j\right) \over j!\,{k}!\, \Gamma\left({{n}+1\over 2}+{k}\right)} \left({1-\cos\zeta\over 2}\right)^{k}~,\end{aligned}$$ and we recall that $\zeta$ is the invariant distance in dS. Each term comes from the pole at $p=i((k+j)+n/2)$. Setting $k=0$, we find that the terms with large $j$ are unsuppressed at $r=r'=r_0$ for $\zeta=0$. Hence the Green’s function in the coincidence limit is divergent on the brane. This is the usual UV ‘Casimir’ divergence near the boundary. Since we are interested in the IR behavior, we shall not comment further on this UV divergence. The term with ${k}=j=0$ in Eq.
(\[polesmass\]) renders the Green’s function [*globally*]{} IR divergent in the limit when the bound state is massless, ${\nu}\to{\nu}_c$ (see Eq. (\[muc\])). One can show that this term comes from the homogeneous ($\ell=0$) mode of the bound state. Using Eqs. (\[cancelation\]) and (\[taylor\]), the leading behavior of Eq. (\[polesmass\]) in the massless limit $m_d\to0$ can be written as $$\label{irdivg} G^{(ren)(1)}={2\over S_{(n+1)}}\, {H^2\over m_d^2} \, {{\cal U}}^{{bs}}_0(r){{\cal U}}^{{bs}}_0(r')+{\cal O}(m_d^0)~,$$ where ${{\cal U}}^{{bs}}_0(r)=N_0 I_{n/2}(Mr)/r^{n/2}$ is the wave function of the bound state (\[wavefunct\]) for the exactly massless case. The divergence (\[irdivg\]) appears because in the massless limit, the wave function of the homogeneous mode of the bound state in the dS invariant vacuum broadens without bound [@gaki]. It can be removed by considering another vacuum with finite width, which implies breaking dS symmetry [@af]. Later, we shall take the Allen Follaci vacuum [@af]. We will find in Section \[sec:massless\] that in the brane world context, this process removes the global IR divergence, but a localized singularity within the bulk remains. Let us now examine the behavior of ${{\langle}T_{\mu\nu}{\rangle}}$ in the massless limit, $m_d\to0$. In this limit, the stress tensor is given by $$\begin{aligned} \label{IRdivtmn} {{\langle}T_{\mu\nu}{\rangle}}^{bulk}&\simeq &{H^2\over S_{(n+1)}m_d^2} \;{\cal T}_{\mu\nu} \left[{{\cal U}}^{{bs}}_0(r) {{\cal U}}^{{bs}}_0(r')\right]~, \\ \label{IRdivtmn2} {\langle}T_{ij}{\rangle}^{brane}&\simeq& {H^2\over S_{(n+1)}m_d^2}\, \left[(4\xi-1){\mu}_{{\rm eff}}-2H\xi\right] \; ({{{\cal U}}^{{bs}}_0}|_{r_0})^2 \; \delta(r-r_0) \,h_{ij} ~,$$ where ${\cal T}_{\mu\nu}$ is the differential operator given in Eq. (\[splitting1\]) and $h_{ij}$ is the induced metric on the brane. For $M\neq0$, ${{\cal U}}_0^{{bs}}$ is not constant. Therefore Eq. 
(\[IRdivtmn\]) explicitly shows the presence of a global IR divergence in ${{\langle}T_{\mu\nu}{\rangle}}$ for the dS invariant vacuum in the $m_d\to0$ limit. For $M=0$, ${{\cal U}}_0^{{bs}}$ is constant. Hence, the bulk part is finite. However, if $\xi\neq0$ then the surface term (\[IRdivtmn2\]) diverges.[^5] Thus, in the limit $m_d=0$ we are forced to consider another quantum state. This is the subject of Section \[sec:massless\]. We shall mention that there is a possibility to avoid this IR divergence without modifying the choice of dS invariant vacuum state. In the present case the divergence is constant and is localized on the brane. Hence, it can be absorbed by changing the brane tension. We may therefore have a model in which this singular term is appropriately renormalized so as not to diverge in the $m_d\to 0$ limit. Of course, such a model is a completely different model from the original one without this IR renormalization. The massless minimally coupled limit, $M={\mu}_{{\rm eff}}=\xi=0$, is exceptional. Both bulk and brane parts of stress tensor (\[IRdivtmn\]) and (\[IRdivtmn2\]) are finite in this limit, though here there is a slight subtlety. The limiting values depend on how we fix the ratios among $M, {\mu}_{{\rm eff}}$ and $\xi$. For example, using Eq. (\[BSmassMsmall\]), the surface term is given by $$\label{masminlim} \lim_{M,{\mu}_{{\rm eff}},\xi\to0} {\langle}T_{ij}{\rangle}^{brane}\simeq -{n+2\over{}n}{H^2\,({{{\cal U}}^{{bs}}_0}|_{r_0})^2\over S_{(n+1)}} \; {{\mu}_{{\rm eff}}+2\xi H \over (n+2) H {\mu}_{{\rm eff}}+ M^2 } \;\delta(r-r_0)~,$$ where we used the approximate mass of the bound state, $$m_d^2\simeq n \,{\mu}\,H + 2n(n+1)\,\xi\, H^2 + {n\over n+2}\; M^2 ~,$$ which is valid in the massless minimal coupling limit (see Eq. (\[BSmassMsmall\])). 
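As a numerical check, the leading small-$M$ behavior quoted in Eq. (\[BSmassMsmall\]) can be compared against a direct solution of the quantization condition (\[BSmassM\]). The sketch below uses our own notation; we set $H=r_0=1$ and ${\mu}_{{\rm eff}}=0$ (so ${\nu}=n/2$) in order to isolate the $M^2$ term, solve for $\lambda=-ip_d$, and compare $m_d^2=(n/2)^2-\lambda^2$ with $nM^2/(n+2)$:

```python
import numpy as np
from scipy.optimize import brentq
from scipy.special import iv, ivp

n, r0, M = 3, 1.0, 0.1      # five-dimensional bulk (n + 2 = 5), H = 1/r0 = 1
nu = n / 2.0                # mu_eff = 0, so nu = n/2 - mu_eff/(2H) = n/2

def quantization(lam):
    # Eq. (BSmassM) with lam = -i p_d real and positive (normalizable state)
    z = M * r0
    return nu * iv(lam, z) - z * ivp(lam, z)

lam = brentq(quantization, 1.0, n / 2.0)   # root sits just below n/2 for small M
md2 = (n / 2.0)**2 - lam**2                # m_d^2 = (n/2)^2 + p_d^2, p_d = i lam

md2_approx = n / (n + 2.0) * M**2          # M^2 term of Eq. (BSmassMsmall)
assert abs(md2 / md2_approx - 1.0) < 0.01  # agree up to O(M^4) corrections
```

For $M r_0 = 0.1$ the exact and approximate $m_d^2$ agree to better than a percent, as the assertion verifies.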
Hence, in the absence of any fine tuning ($m_d^2\approx \max(M^2, \xi H^2, H {\mu})$), it is clear that the contribution (\[IRdivtmn2\]) is not large, even though the Green’s function (\[irdivg\]) is. Thus, only in the case when the parameters are ‘fine tuned’ according to Eq. (\[muc\]) the stress tensor (\[IRdivtmn2\]) is large. From Eq. (\[tmnclass\]), if the Green’s function is free from IR divergence, it is clear that the brane stress tensor must be zero in the massless minimally coupled case. The direction that reproduces this result is the one along which ${\langle}T_{ij}{\rangle}^{brane}$ already vanishes (in the massive case), that is ${\mu}_{{\rm eff}}=-2\xi H$. Note that this feature is analogous to what happens in dS space [@gaki]. The bulk stress tensor (\[IRdivtmn\]) also has a similar but a bit more complicated feature. The operator (\[splitting1\]) has terms which do not manifestly involve a small quantity, such as $M^2$, $\xi$ or ${\mu}$. However, these terms are associated with derivative operators. In the limit $M^2\to 0$, we can expand the $r$-dependence of the term with $k=j=0$ in Eq. (\[polesmass\]) as $${I_{n/2}(Mr)\over r^{n/2}} \approx {1\over 2^{n/2}\Gamma({n\over 2}+1)} \left(1+{M^2 r^2\over 2(n+2)}+\cdots\right).$$ Then, the leading term in the above expansion, which is not suppressed by a factor $M^2$, vanishes in (\[IRdivtmn\]). The remaining terms are finite unless the parameters are fine tuned, as in the case of the brane stress tensor. Finally, Eq. (\[IRdivtmn\]) also shows that ${{\langle}T_{\mu\nu}{\rangle}}$ is perfectly regular on the light cone for $m_d\neq0$. As mentioned before, this happens thanks to the KK modes. Note as well that for $M\gtrsim H$, Eq. (\[IRdivtmn\]) is exponentially localized on the brane, because the bound state is localized in this case. ![Integration contour in Eq. 
(\[polesmass\]).[]{data-label="contour"}](integration2.eps "fig:"){width="7cm"}\ Exactly Massless bound state {#sec:massless} ============================ In the preceding section, we have seen that the de Sitter (dS) invariant vacuum causes a divergence in the limit when the bound state is massless. The divergence is caused by the $\ell=0$ homogeneous mode of the bound state. In this section we consider a different choice of vacuum state for this mode, aiming to resolve the divergence, following the standard methods used in dS space [@gaki; @af]. For simplicity, we concentrate on the case of an exactly massless bound state, $m_d=0$, that is, $p_d=n i/2$, although more general cases could be treated in a similar way. The case with $m_d=0$ includes not only a massless minimally coupled scalar, $M={\mu}=\xi=0$, but also other fine tuned cases. The Green’s function {#sec:masslessG} -------------------- Here, we should split the Green’s function into KK and bound state contributions, $G^{(ren)(1)}=G^{{\textsc{kk}}(1)}+G^{bs (1)}$. We leave the quantum state for the KK contribution untouched, and change only the contribution from the bound state. In the integral representation for the Green’s function in (\[polesmass\]), we have used the integration contour given in Fig. \[contour\]. This choice of contour automatically takes the bound state contribution into account. In the present case, we consider the contour that runs below the pole at $p=p_d$ to exclude the bound state contribution. For $m_d=0$, the integrand has a double pole at $p=ni/2$ because the pole in the radial modes coincides with one of the poles in the dS Green’s function. Hence the integral with the contour given in Fig. \[contour\], which runs through these merging poles, is not well defined.
But the integral with the new contour, which picks up the contribution from the KK modes only, is well behaved, and it can be cast as $$\begin{aligned} G^{{\textsc{kk}}(1)}&=&2\pi i \Biggl\{ \sum_{simple~poles} Res\Big( \left[{{\cal U}}_p^{\textsc{kk}}(r){{\cal U}}_p^{\textsc{kk}}(r')\right]^{ren}_1 G_p^{(dS)(1)}\Big) + Res\Big(\left[{{\cal U}}_p^{\textsc{kk}}(r){{\cal U}}_p^{\textsc{kk}}(r')\right]^{ren}_1 \Big)_{p_d}\, \partial_p\left[ (p-p_d) G_p^{(dS)(1)}\right]\big|_{p_d} \cr &&\qquad + Res\Big( G_p^{(dS)(1)}\Big)_{p_d} \partial_p \Big( (p-p_d) \left[{{\cal U}}_p^{\textsc{kk}}(r){{\cal U}}_p^{\textsc{kk}}(r')\right]^{ren}_1 \Big)\big|_{p_d} \Biggr\}. \label{gcont}\end{aligned}$$ As before, $\left[{{\cal U}}_p^{\textsc{kk}}(r){{\cal U}}_p^{\textsc{kk}}(r')\right]^{ren}_1$ denotes the first term in Eq. (\[uumass\]). In the first term the ‘simple poles’ mean poles at $p=i(q+n/2)$ with $q=1,2,\dots$ (see Fig. \[contour\]). Namely, it is obtained by removing the term with $k=j=0$ from Eq. (\[polesmass\]). The last two terms are contributions from the double pole at $p=ni/2$. Next, we consider the contribution from bound state. A massless bound state behaves as a massless scalar from the viewpoint of $n+1$ dimensional dS space. In dS space it is well known that the dS invariant Green’s function diverges in the massless limit because of the $\ell=0$ homogeneous mode [@af; @gaki]. The usual procedure is to treat separately the $\ell=0$ mode from the rest. 
It is easy to show that (see Appendix \[sec:GdSmassless\]) $$\begin{aligned} \label{gdisc} G^{bs (1)} &=& {{\cal U}}^{{bs}}(r) {{\cal U}}^{{bs}}(r')\Big\{\sum_{\ell>0} {\cal Y}_{p\ell m}(\chi) {\cal Y}^*_{p\ell m}(\chi')+{\widetilde{\cal Y}}_{AF}(\chi) {\widetilde{\cal Y}}^*_{AF}(\chi')\Big\} +{\rm c.c.} \\[1mm] &=&{{\cal U}}^{{bs}}(r) {{\cal U}}^{{bs}}(r') \left\{ \partial_p\left[ (p-p_d) G_p^{(dS)(+)}\right]\big|_{p_d} - \partial_p \left[ (p-p_d) {\cal Y}_{p00}(\chi) {\cal Y}^*_{p00}(\chi') \right] \big|_{p_d} + {\widetilde{\cal Y}}_{AF}(\chi) {\widetilde{\cal Y}}^*_{AF}(\chi') \right\} +{\rm c.c.}~,\nonumber\end{aligned}$$ where ${\cal Y}_{p\ell m}(\chi)$ are the positive frequency dS invariant vacuum modes with mass $p^2+(n/2)^2$. The last term in Eq. (\[gdisc\]) is the contribution from the homogeneous mode in the appropriate state, which we shall take to be the Allen Follaci (AF) vacuum [@af] (see Eq. (\[yaf\])). At this stage, we note that the second term of Eq. (\[gcont\]) cancels the first term of Eq. (\[gdisc\]), due to Eq. (\[resuu\]). This cancelation resembles the one that occurred in the previous case between the KK modes and the bound state. In that case, it guaranteed the absence of light cone divergences. In the present case, the terms that cancel in Eqs. (\[gcont\]) and (\[gdisc\]) are already regular. We show below (see discussion after Eq. (\[g0\])) that instead the last term in Eq. (\[gcont\]) and the second term in Eq. (\[gdisc\]) diverge on the light cone. However, when added up, they render $G^{ren}$ finite on the light cone in odd dimension. (In even dimensions, $G^{ren}$ is finite but its derivatives are not.) The fact that the dS invariant Green’s function diverges because of the homogeneous mode implies that $Res\big(G_p^{(dS)(1)}\big)_{p_d}=\lim_{p \to p_d } (p-p_d){\cal Y}_{p00} {\cal Y}'^*_{p00}+{\rm c.c.}$. Using this and Eq. 
(\[resuu\]), we can rewrite the total Green’s function in the more convenient form $$\label{GmasslesBS} G^{(ren)(1)} = G_{simple}^{(1)} + G_{LC} + {\widetilde G}_{AF} ~,$$ with $$\begin{aligned} \label{GmasslesBS2} G_{LC} &\equiv& 2\pi i \, \partial_p\Big( (p-p_d) {\cal Y}_{p00} {\cal Y}'^*_{p00} \;\;(p-p_d) \left[{{\cal U}}_p^{\textsc{kk}}(r){{\cal U}}_p^{\textsc{kk}}(r')\right]^{ren}_1 \Big) \big|_{p_d}~+{\rm c.c.}~, \cr {\widetilde G}_{AF} &\equiv& {{\cal U}}^{{bs}}(r) {{{\cal U}}^{{bs}}}(r') \,{\widetilde {\cal Y}}_{AF} {{\widetilde {\cal Y}}'^*}_{AF} ~+{\rm c.c.}~,\end{aligned}$$ and $G_{simple}^{(1)}$ is the contribution from the first term in Eq. (\[gcont\]). As is manifest from the expression (\[polesmass\]) with the $k=j=0$ term removed, $G_{simple}^{(1)}$ is regular in the coincidence limit, everywhere except for the ‘Casimir’ divergences on the brane, and depends on the points $x$ and $x'$ through $r$, $r'$ and $\zeta(x,x')$ (the invariant distance in dS between the projections of points $x$ and $x'$), and hence it is dS invariant. The second term, $G_{LC}$, contains the two contributions in Eqs. (\[gcont\]) and (\[gdisc\]) that are separately divergent on the light cone mentioned in the previous paragraph. The last term, ${\widetilde G}_{AF}$, encodes the choice of vacuum for the zero mode of the bound state.  \ Let us examine the contribution potentially divergent on the light cone $G_{LC}$. The derivative of the $\chi$ dependent part is obtained from Eqs. (\[cpsmall\]) and (\[yp00\]). The derivative of the radial part follows from Eq. (\[uumass\]) and $\partial_\lambda I_\lambda(z)= I_\lambda(z)\log z+f(z)$, where $f(z)$ is a regular function. 
It is easy to see that $G_{LC}$ takes the form $$\begin{aligned} \label{g0} G_{LC}^{}(x,x')&=& {2\over nS_{(n+1)}}\;{{\cal U}}^{{bs}}(r){{\cal U}}^{{bs}}(r') \;{\rm Re}\,\big[ F(x)+F(x') \big] + {\rm regular},\end{aligned}$$ with $$\begin{aligned} \label{g02} F(x)&=&\log r+n\int_0^\chi {d\chi_{{}_1}\over \cosh^{n}\chi_{{}_1}} \int_{-{i\pi\over 2}}^{\chi_{{}_1}} d\chi_{{}_2} \cosh^{{n}} \chi_{{}_2} ~,\end{aligned}$$ where the indicated regular term depends on $r$ and $r'$ only. The double integral in (\[g02\]) grows linearly with $\chi$ for large $\chi$ and eventually blows up on the light cone (The first integral asymptotically grows like $e^{n\chi_1}/n$, and therefore the integrand in the second integral goes to a constant). This is the expected behavior from the massless bound state. On the other hand, the KK modes contribute the $\log r$ term, which cancels the light cone divergence. To see this, we integrate by parts to obtain $$\label{doubleint} F(x)=\log r+ \log\sinh\chi+ \int_0^\chi {d\chi_{{}_1}\over \cosh^{n}\chi_{{}_1}} \int_{-{i\pi\over2}}^{\chi_{{}_1}} d\chi_{{}_2} {\cosh^{{n}}\chi_{{}_2}\over\sinh^2\chi_{{}_2}}~,$$ and now the double integral is bounded. The first two terms are simply $\log\,{T}$. Thus, the leading divergence in $G_{LC}$ on the light cone cancels between contributions from bound state and from the KK modes, although it still diverges logarithmically at infinity. This statement has to be qualified for even dimension. In this case, the derivatives of $G_{LC}$ diverge on the light cone because of the last term in Eq. (\[doubleint\]). To see this, note that the integrand in Eq. (\[doubleint\]) can be expanded in exponentials. Then, the integral is a sum of exponential terms except for one, of the form $\chi e^{-n \chi}$, if $n$ is even. In terms of the null coordinates $U$ and $V$, this is $\sim \left({V/U}\right)^{n/2} \,\log\left({V/ U}\right)$ for $\chi\to+\infty$ (for $\chi\to-\infty$ replace $U\leftrightarrow V$). 
Even though the Green’s function is regular at $V=0$, the stress tensor develops a singularity which behaves like $\sim 1/V$ or $1/U$ on the light cone in four dimensions $(n=2)$ and like $\sim \log V$ or $\log U$ in six dimensions $(n=4)$ if $\xi\neq0$. For reference, we show the explicit form of $F(x)$ for dimensions 4 and 5, $$\label{F234} F(x)= \left\{\begin{array}{ll} \displaystyle {V\log |V| - U \log U \over 2{R}} & \hbox{for}\qquad n=2 \ ,\\[4mm] \displaystyle \log\left({T}+{\tau}\right)-{({T}-{\tau}){\tau}\over{R}^2} & \hbox{for}\qquad n=3 \ , \end{array}\right.$$ where ${\tau}^2={{T}}^2-{R}^2$. Note that despite appearances, Eq. (\[F234\]) is regular at ${R}=0$. This is guaranteed since the $\chi$ dependent part of $F(x)$ is related to the dS invariant vacuum modes (see Eq. (\[yp00\])), which are regular at $R=0$ ($\chi=-\pi i/2$) by construction. The expressions (\[F234\]) are appropriate in the Milne region, and we have already taken the real part, which is the relevant part for $G^{(1)}$.  \ The last term $\tilde G_{AF}$ in Eq. (\[GmasslesBS\]) corresponds to the choice of vacuum for the $\ell=0$ mode. This mode is peculiar because it behaves like a free particle rather than an oscillator. The eigenstates of the Hamiltonian, and in particular the ground state, are plane waves in field space. However, such states are not normalizable. One can construct well defined states as wave packets. The simplest option is a Gaussian packet. This is the Allen Follaci vacuum [@af], which in fact is a two-parameter family of vacua. Its mode function is given by [@af] $$\label{yaf} {\widetilde {\cal Y}}_{AF}(\chi)={1\over \sqrt{S_{(n)}}}\left[{1\over 2\alpha} + i \beta -i\alpha \int_0^\chi {d\chi'\over \cosh^{n}\chi'}\right]~,$$ where $\alpha>0$ and $\beta$ are the mentioned free (real) parameters. We shall impose that the vacuum is time reversal symmetric. This translates into ${\widetilde {\cal Y}}_{AF}(-\chi)={\widetilde {\cal Y}}_{AF}^*(\chi)$, and implies $\beta=0$. 
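The last implication can be read off directly from Eq. (\[yaf\]): since $\cosh^{-n}\chi'$ is even, $\int_0^{-\chi}d\chi'\cosh^{-n}\chi'=-\int_0^{\chi}d\chi'\cosh^{-n}\chi'$, so $${\widetilde {\cal Y}}_{AF}(-\chi)={1\over \sqrt{S_{(n)}}}\left[{1\over 2\alpha} + i \beta +i\alpha \int_0^\chi {d\chi'\over \cosh^{n}\chi'}\right],\qquad {\widetilde {\cal Y}}_{AF}^*(\chi)={1\over \sqrt{S_{(n)}}}\left[{1\over 2\alpha} - i \beta +i\alpha \int_0^\chi {d\chi'\over \cosh^{n}\chi'}\right],$$ and the two agree for all $\chi$ only if $\beta=0$.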
Because of the time dependence, it breaks dS symmetry. For this vacuum, $$\label{gaf} {\widetilde G}_{AF}(x,x')={{{\cal U}}^{{bs}}(r) {{\cal U}}^{{bs}}(r')\over S_{(n)}} \left\{ {1\over 2\alpha^2}+2\alpha^2 \textrm{Re}\left[F_{AF}(x) F_{AF}^*(x')\right] - \textrm{Im}\left[F_{AF}(x)+ F_{AF}(x')\right] \right\}$$ with $$\begin{aligned} F_{AF}=\int d\chi\,\cosh^{-n}\chi.\end{aligned}$$ In order to obtain the analytic continuation of $F_{AF}$ to the Milne region, it is better to write it as an integral over a contour with constant $T$. It is straightforward to see that $$F_{AF}(x)=-T\int^{{R}} {d{R}' \over {R}'^n} \left({R}'^2-T^2\right)^{{n\over2}-1}~.$$ It is transparent now that $F_{AF}$ behaves like $1/{R}^{n-1}$ near ${R}=0$. It is also clear that it is regular on the light cone and is bounded at infinity. Moreover, it gets an imaginary part in the Milne region for odd dimensions (in the Rindler region it is always real). More specifically, we have $$\label{FAF234} F_{AF}(x)= \left\{\begin{array}{ll} \displaystyle {{T}\over{R}}~, \qquad & \hbox{for}\qquad n=2, \\[2mm] \displaystyle {i\over2} \left[ {{T}\,{\tau}\over{R}^2} -\ln\left( i {{T}+{\tau}\over{R}} \right) \right] \qquad &\hbox{for}\qquad n=3~.\end{array}\right.$$ Divergent stress tensor {#sec:masslesstens} ----------------------- Now we discuss the form of the expectation value of the stress tensor for the possible values of $M$, ${\mu}_{{\rm eff}}$ and $\xi$ in which the bound state is massless. The bulk part of the stress tensor is most conveniently separated into $$\begin{aligned} {{\langle}T_{\mu\nu}{\rangle}}&=&{{\langle}T_{\mu\nu}{\rangle}}_0+{{\langle}T_{\mu\nu}{\rangle}}_{simple}\end{aligned}$$ where ${{\langle}T_{\mu\nu}{\rangle}}_0$ contains the contributions from $G_{LC}$ and ${\widetilde G}_{AF}$, whereas ${{\langle}T_{\mu\nu}{\rangle}}_{simple}$ is the contribution from $G_{simple}^{(1)}$. All the IR irregularities are contained in ${{\langle}T_{\mu\nu}{\rangle}}^{bulk}_0$. From Eqs. 
(\[g0\]) and (\[gaf\]), we obtain the bulk part of ${{\langle}T_{\mu\nu}{\rangle}}_0$ as $$\begin{aligned} \label{t0} {{\langle}T_{\mu\nu}{\rangle}}^{bulk}_0 & = & {1 \over 2S_{(n)}} \; {\cal{}T}_{\mu\nu}\,\Big[{{\cal U}}_{bs}(r) {{\cal U}}_{bs}(r') \Big\{{1\over 2\alpha^2}+ 2\alpha^2 \textrm{Re}\big[F_{AF}(x)F_{AF}^*(x')\big]\Big\} \Big]\cr && -2\xi\;\;\nabla_\mu\nabla_\nu \;{{\cal U}}_{bs}(r) {{\cal U}}_{bs}(r') \left[ {{\rm Re}\, F(x)\over n S_{(n+1)}}- {\textrm{Im} \,F_{AF}(x) \over 2 S_{(n)}} \right]~.\end{aligned}$$ The term proportional to $\alpha^2$ diverges at $R=0$ like $1/R^{2n}$ for any value of $\xi$. $R=0$ corresponds to a timelike axis in the bulk passing through the center of symmetry (see Fig. \[diagram\]). The last term also diverges like $1/R^{n+1}$ for odd dimensions, but vanishes for even dimensions. The piece involving ${\rm Re}\,F$ diverges on the light cone for $n=2$ or 4. Let us begin with the most general case with $M\neq0$. In this case ${{\cal U}}^{{bs}}(r)$ is not constant, and the contribution from the term inversely proportional to $\alpha^2$ in Eq. (\[t0\]) does not vanish. This term is analogous to Eq. (\[IRdivtmn\]), and it diverges globally in the $\alpha\to 0$ limit. Hence, this state cannot be taken, and one has to be content with the AF vacuum with some nonzero $\alpha$. Thus, the term proportional to $\alpha^2$ is unavoidable. But this is problematic, since it contains a singularity at $R=0$ of the form $\sim \alpha^2/{R}^{2n}$, present even for minimal coupling. The main point is that in the presence of a bulk mass, ${{\langle}T_{\mu\nu}{\rangle}}$ contains a quite severe bulk singularity even after we get rid of the global IR divergence. There is of course the possibility that a different choice of vacua for the KK modes could cancel it out. In this case, it seems that the vacuum choice should not be dS invariant, otherwise the singular zero mode contribution that is not dS invariant could not be compensated. 
We leave this issue for future research. Now we turn to the case with $M=0$. Let us first consider the nonminimal case $\xi\neq0$. Since $M=0$, ${{\cal U}}^{{bs}}(r)$ is constant and the bulk stress tensor does not diverge globally in the $\alpha\to 0$ limit. Divergences in the other two $\alpha$-independent terms in (\[t0\]) also vanish in even dimensions with $n\geq 6$. However, we cannot avoid the divergence in the brane stress tensor in the $\alpha\to 0$ limit. From Eq. (\[splitting2\]) and taking into account that ${\mu}_{{\rm eff}}=0$ (see Eq. (\[BSmassMsmall\])), we have $$\begin{aligned} \label{tbrane} {\langle}T_{ij}{\rangle}_0^{brane} &=&-2\xi {\langle}\phi^2{\rangle}_{AF}\;{\cal K}_{ij}\;\delta(r-r_0)\nonumber\\[2mm]&=& -{\xi\,n\over r_0^{n+1}} \left({(2\alpha)^{-2}+\alpha^2 F_{AF}^2(x)\over S_{(n)}}+ {2{\rm Re}\,F(x)\over S_{(n+1)}} \right)\,h_{ij}\;\delta(r-r_0) ~,\end{aligned}$$ where we used that ${{\cal U}}_{{bs}}^2=n/(2r_0^n)$. Thus, one is forced to take the AF vacuum again. As a result, the bulk stress tensor develops the same singularity at $R=0$ as before in the $M\ne 0$ case. The globally divergent ($1/\alpha^2$) term in Eq. (\[tbrane\]) is proportional to the induced metric. One might wonder whether this effect is physical or not, since it could be simply absorbed in the brane tension as before. We think that such a procedure is not justified here because $\alpha$ is a state dependent parameter, and renormalization should be done independently of the choice of the quantum state. Before we examine the stress tensor in the massless minimal coupling case, let us comment on the relation between the AF vacuum and the Garriga Kirsten (GK) vacuum. The latter was introduced in [@gaki] for a massless minimally coupled scalar in dS. It corresponds to the plane wave state with zero momentum in field space. This is intrinsically ill defined, giving an ‘infinite constant’ contribution to the Green’s function. 
However, the point is that, since it is an eigenstate of the Hamiltonian, it does not depend on $\chi$, so it is dS invariant. Divergences in the Green’s function can be accepted in the massless minimally coupled case. Our reasoning is as follows. In this case, the action has the symmetry $\phi\to\phi+$ constant. If we consider it as a ‘gauge’ symmetry, all the observables are to be constructed from derivatives (or differences) of the field. In fact the stress tensor operator (\[splitting1\]) only contains derivatives of the type $\partial_\mu\partial'_\nu G^{(1)}(x,x')$ in this case. Then we will find that the constant contributions in the Green’s function are irrelevant.[^6] We need a practical way to compute quantities in this vacuum. If we follow the argument by Kirsten and Garriga, it will be given by the limit of the AF vacuum $$|0{\rangle}_{GK}\equiv\lim_{\alpha\to0} |0{\rangle}_{AF}~.$$ From Eq. (\[tbrane\]), it follows that the brane term ${\langle}T_{ij}{\rangle}^{brane}$ vanishes in the GK vacuum. The first term in Eq. (\[t0\]), which is responsible for the ‘infinite constant’, vanishes for $M=0$. The second term, proportional to $\alpha^2$, also vanishes in the $\alpha\to 0$ limit. We shall note that the last two terms in Eq. (\[t0\]), which are independent of $\alpha$, are also zero for $\xi=0$. Hence, we are left with ${{\langle}T_{\mu\nu}{\rangle}}_{simple}$, which is manifestly dS invariant, and is finite (aside from the ‘Casimir’ divergences on the brane). Thus, the total stress tensor in the GK vacuum is given by a simple formula presented below in Eq. (\[tmnM=m=0\]) with $\xi=0$. In contrast to the massless minimal coupling limit discussed in the preceding section, here we do not have any ambiguity. Zero bulk mass {#sec:M=0} ============== In the absence of bulk mass $M$, the Green’s function and the stress tensor can be obtained explicitly. We shall now discuss this case. 
Generic mass of the bound state ------------------------------- First we discuss the case when the bound state is not massless. The other case is postponed until Section \[sec:M=m=0\]. For the bound state, the boundary condition (\[boundcond\]) fixes $p_d=i{\nu}$ and $m_d$ is given by Eq. (\[BSmassMsmall\]) with $M=0$. Its wave function is proportional to $r^{-ip_d-n/2}=r^{-{\mu}_{{\rm eff}}/H}$, which is constant if $m_d=0$ (see Eq. (\[BSmassMsmall\])). From the discussion in Section \[sec:spectrum\], the bound state is normalizable and non-tachyonic for $n/2\geq{\nu}> 0$ ([*i.e.*]{} $0\leq {\mu}_{{\rm eff}}<(n/2)H$), see Fig. \[meffvsM2\]. The normalized radial KK modes are $${{\cal U}}^{\textsc{kk}}_p(r)=\sqrt{2\over \pi r^{{n}} (1+({\nu}/p)^2)}\left( \cos \left( p{\,{\ln r}}\right) +{{\nu}\over p}\sin \left( p{\,{\ln r}}\right)\right)~,$$ and the renormalized product of mode functions in Eq. (\[greg2\]) is $$\label{uumassless} \left[{{\cal U}}_p^{\textsc{kk}}(r){{\cal U}}_p^{\textsc{kk}}(r')\right]^{ren} ={1\over 2\pi r^{{{n}/ 2}}} \left[{p+i{\nu}\over p-i{\nu}}\; (rr')^{-ip} +{p-i{\nu}\over p+i{\nu}} \;(rr')^{ip}\right]~.$$ Proceeding as in Eq. (\[polesmass\]), we can explicitly perform the integration over the KK modes and in this case, we obtain a simpler expression, $$\label{polesmassless} D_{(ren)}^{(1)}(x,x') = {1 \over (2r_0)^{{n}} S_{({n})}\Gamma\left({{n}+1\over 2}\right)} \sum_{{k}=0}^{\infty} \sum_{j=0}^{\infty} {{{n}\over 2}+{\nu}+{k}+j \over {{n}\over 2}-{\nu}+{k}+j} ~{(-1)^{k}\Gamma\left({n}+2{k}+j\right) \over j!\,{k}!\, \Gamma\left({{n}+1\over 2}+{k}\right)} \left({rr'\over r_0^2}\right)^{{k}+j} \left({1-\cos\zeta\over 2}\right)^{k}~,$$ where as usual, we denote the (bulk-) massless Green’s function by $D_{(ren)}^{(1)}(x,x')$. From Eq. (\[polesmassless\]), it is manifest that the Green’s function in the dS invariant vacuum state is regular at $r=0$ in the coincidence limit. As before, in the massless limit ${\nu}\to {n}/2$, it is singular. 
However, since the divergent term $k=j=0$ is constant in the present case, it will not affect the bulk ${{\langle}T_{\mu\nu}{\rangle}}$. In order to compute the stress tensor, it is convenient to rewrite Eq. (\[polesmassless\]) in a more compact form. For a conformally coupled field, ${\nu}$ vanishes, and the Green’s function actually takes the simple form[^7] $$\label{mu=0} D^{(ren)(1)}_{{\nu}=0}(x,x') = {1\over n S_{({n}+1)}} \left({1\over r_0^2 + (r r'/r_0)^2 - 2 r r' \cos\zeta} \right)^{{n}/2}.$$ This expression can be obtained by the method of images. It corresponds to the potential induced by a source of unit charge at $x'$ together with an image source located at $r'_I=r_0^2/r'$ with a charge $q'_I=(r'_I/r_0)^n$. From the form of (\[polesmassless\]), the Green’s function for ${\nu}\neq0$ can be obtained from Eq. (\[mu=0\]) by applying the integral operator $$\label{integral} D^{(ren)(1)} = \left[1 + 4{\nu}r_0^{-2{\nu}} \int_{r_0}^\infty {d \tilde r_0\over \tilde r_0} \tilde r_0^{2{\nu}} \right] D^{(ren)(1)}_{{\nu}=0}~.$$ Borrowing the intuition from the method of images, Eq. (\[integral\]) can be interpreted as the potential induced by the image charge mentioned in the previous paragraph together with a string stretching from $r_0^2/r'$ to infinity along the radial direction defined by $x'$ with a charge line density given by $\lambda(r)=4{\nu}(r/r_0)^{2{\nu}-1}/r'$. To obtain the stress tensor, we can first compute the case ${\nu}=0$ and then apply the same operator as in Eq. (\[integral\]). 
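The resummation leading to Eq. (\[mu=0\]) can be cross-checked numerically: at ${\nu}=0$ the double sum (\[polesmassless\]) should reproduce the image-charge closed form. A minimal sketch in Python (helper names are ours, not from the text; log-gammas are used so that no individual term overflows):

```python
import math

def sphere_area(n):
    # area S_(n) of the unit n-sphere
    return 2 * math.pi ** ((n + 1) / 2) / math.gamma((n + 1) / 2)

def D_series_nu0(n, r, rp, zeta, r0=1.0, kmax=60, jmax=200):
    # double sum (polesmassless) at nu = 0; each term assembled from log-gammas
    la = math.log(r * rp / r0 ** 2)
    lb = math.log((1.0 - math.cos(zeta)) / 2.0)
    total = 0.0
    for k in range(kmax):
        for j in range(jmax):
            lt = (math.lgamma(n + 2 * k + j) - math.lgamma(j + 1)
                  - math.lgamma(k + 1) - math.lgamma((n + 1) / 2 + k)
                  + (k + j) * la + k * lb)
            total += (-1) ** k * math.exp(lt)
    return total / ((2 * r0) ** n * sphere_area(n) * math.gamma((n + 1) / 2))

def D_image(n, r, rp, zeta, r0=1.0):
    # closed form (mu=0): image source at r0^2/r' with charge (r'_I/r0)^n
    denom = r0 ** 2 + (r * rp / r0) ** 2 - 2 * r * rp * math.cos(zeta)
    return denom ** (-n / 2) / (n * sphere_area(n + 1))
```

The two expressions agree to machine precision for points inside the brane ($rr'<r_0^2$), where the double sum converges.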
The general result with $M=0$ and $m_d\neq0$ is $$\begin{aligned} \label{tmassless} {}^{(m_d\neq0)}{\langle}T^r_{~r}{\rangle}= (\xi-\xi_c) { (-1)^n ({n}+1)\over S_{({n}+1)} r_0^{{n}+2}} &\Biggl\{& {r_0^{2{n}+2} \over (r_0^2-r^2)^{{n}+1} } + {4{\nu}\over {n}+2-2{\nu}} ~ {}_2F_1\left({n}+1, {{n}\over2} +1-{\nu};{{n}\over2} +2-{\nu};{r^2\over r_0^2}\right) ~\Biggr\}~, \\ &~&\cr {}^{(m_d\neq0)}{\langle}T^i_{~j}{\rangle}= (\xi-\xi_c){ (-1)^n ({n}+1)\over S_{({n}+1)} r_0^{{n}+2} } &\Biggl\{& {r_0^{2{n}+2} (r_0^2+r^2) \over (r_0^2-r^2)^{{n}+2}} + {4{\nu}\over {n}+2-2{\nu}} ~ {}_2F_1\left({n}+1, {{n}\over2} +1-{\nu};{{n}\over2} +2-{\nu};{r^2\over r_0^2}\right) \cr &&\qquad\qquad\qquad + {8{\nu}\over {n}+4-2{\nu}} {r^2\over r_0^2} ~ {}_2F_1\left({n}+2,{{n}\over2} +2-{\nu};{{n}\over2} +3-{\nu};{r^2\over r_0^2}\right) ~\Biggr\}\delta^i_{~j}~,\nonumber\end{aligned}$$ where $\xi_c=n/[4(n+1)]$. As an aside, we note that for conformal coupling, ${{\langle}T_{\mu\nu}{\rangle}}^{bulk}=0$ even with a nonzero boundary mass ${\mu}$. This is a consequence of conservation and tracelessness of $T_{\mu\nu}$. Massless bound state {#sec:M=m=0} -------------------- We consider now the case $m_d=0$, that is ${\nu}=n/2$. We can proceed as in Eq. (\[GmasslesBS\]) and decompose $D^{ren}=D_{simple}+G_{LC}+{\widetilde G_{AF}}$, where $D_{simple}$ is the contribution from the simple poles $p=i(n/2+k)$ with $k=1,2,\dots$ in Eq. (\[gcont\]). Thus, it is given by the terms in Eq. (\[polesmassless\]) with $k$ and $j$ not both vanishing. The integral representation analogous to Eq. (\[integral\]) is now $$\label{dintegral} D_{simple}^{(1)} =\left[1 + {2 n \over r_0^n} \int_{r_0}^\infty {d \tilde r_0\over \tilde r_0} \tilde r_0^n \right] \left( D^{(ren)}_{{\nu}=0} - {1\over n S_{(n+1)} \tilde r_0^n} \right)~,$$ where we subtract the constant to remove the $j=k=0$ term. The explicit expression in four dimensions was obtained in [@xavi][^8], and we shall not reproduce it here. 
The case of main interest for us is $n=3$, and we find, up to a finite constant, $$\begin{aligned} \label{gsimple} D_{simple}^{(1)}(x,x') ={\rm Re}\,\Biggl[{1\over 8 \pi^2} {1\over r_0^3\Delta^{3/2}} +{3\over 8\pi^2 r_0^3} \left\{ {\cos2\zeta\over\sin^2\zeta}-{\cos2\zeta -(rr'/r_0^2)\cos\zeta \over \sin^2\zeta \;\Delta^{1/2}} -\log\left[1 -{rr'\over r_0^2}\cos\zeta +\Delta^{1/2} \right] \right\}\Biggr]~,\end{aligned}$$ where $\Delta=1+(rr'/r_0^2)^2-2 (rr'/r_0^2) \cos\zeta $. As mentioned above, in the coincidence limit this contribution is regular except on the brane. We shall note as well that it grows logarithmically at infinity.  \ The contribution ${{\langle}T_{\mu\nu}{\rangle}}^{bulk}_{simple}$ is easily found by exploiting the integral representation (\[dintegral\]), as before. The only difference between (\[dintegral\]) and (\[integral\]) in the limit ${\nu}=n/2$ ([*i.e.*]{} $m_d=0$) is a constant, which does not affect ${{\langle}T_{\mu\nu}{\rangle}}$ in this case. Thus, ${{\langle}T_{\mu\nu}{\rangle}}^{bulk}_{simple}$ reduces to (\[tmassless\]) with ${\nu}=n/2$. This gives a simple expression in terms of elementary functions. In four dimensions it is given in [@xavi] (for $\xi=0$). In five dimensions, we obtain $$\begin{aligned} \label{tmnM=m=0} {\langle}T^r_{~r}{\rangle}^{bulk}_{simple} &=& -{9\over32\pi^2}\;{\xi-\xi_c\over r_0^3}\;\left\{ {r_0^{6}\over (r_0^2-r^2)^{4} } +{r^4-3r_0^2 r^2+3r_0^4 \over (r_0^2-r^2)^{3} } \right\}~,\nonumber\\[2mm] {\langle}T^i_{~j}{\rangle}^{bulk}_{simple} &=& -{9\over32\pi^2}\;{\xi-\xi_c\over r_0^3}\;\left\{ r_0^{6}{r^2+r_0^2 \over (r_0^2-r^2)^{5} } +{r^2\over2} {r^4-4r_0^2 r^2+6r_0^4 \over (r_0^2-r^2)^{4} } +{r^4-3r_0^2 r^2+3r_0^4 \over (r_0^2-r^2)^{3} } \right\}\delta^i_{~j}~,\end{aligned}$$ which is dS invariant and regular everywhere except on the brane. 
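As a consistency check, the bulk components above can be tested numerically: in the flat bulk with metric $dr^2+r^2 ds^2_{dS}$, conservation requires $\partial_r T^r_{~r}+{(n+1)\over r}\left(T^r_{~r}-T\right)=0$, writing ${\langle}T^i_{~j}{\rangle}=T\,\delta^i_{~j}$, and the common $r$-independent prefactors drop out of this relation. A minimal sketch in Python (helper names are ours; we set $r_0=1$, sum the Gauss series directly, and also check the ${\nu}=3/2$, $n=3$ reduction in the five-dimensional brackets):

```python
def hyp2f1(a, b, c, z, terms=400):
    # Gauss hypergeometric series sum_k (a)_k (b)_k / (c)_k z^k / k!, |z| < 1
    total, coef = 0.0, 1.0
    for k in range(terms):
        total += coef
        coef *= (a + k) * (b + k) * z / ((c + k) * (k + 1))
    return total

def brace_rr(n, nu, u):
    # curly bracket of <T^r_r> in the general M = 0 result, u = (r/r0)^2, r0 = 1
    return (1 - u) ** (-(n + 1)) + 4 * nu / (n + 2 - 2 * nu) * hyp2f1(
        n + 1, n / 2 + 1 - nu, n / 2 + 2 - nu, u)

def brace_ij(n, nu, u):
    # curly bracket of <T^i_j>; the two hypergeometric terms enter with '+'
    return ((1 + u) * (1 - u) ** (-(n + 2))
            + 4 * nu / (n + 2 - 2 * nu)
            * hyp2f1(n + 1, n / 2 + 1 - nu, n / 2 + 2 - nu, u)
            + 8 * nu * u / (n + 4 - 2 * nu)
            * hyp2f1(n + 2, n / 2 + 2 - nu, n / 2 + 3 - nu, u))

def residual(n, nu, r, h=1e-5):
    # conservation residual d/dr T^r_r + (n+1)/r (T^r_r - T); should vanish
    drr = (brace_rr(n, nu, (r + h) ** 2) - brace_rr(n, nu, (r - h) ** 2)) / (2 * h)
    return drr + (n + 1) / r * (brace_rr(n, nu, r ** 2) - brace_ij(n, nu, r ** 2))

def brace_rr_5d(u):
    # curly bracket of the five-dimensional <T^r_r> (n = 3, nu = 3/2, r0 = 1)
    return 1 / (1 - u) ** 4 + (u ** 2 - 3 * u + 3) / (1 - u) ** 3

def brace_ij_5d(u):
    # curly bracket of the five-dimensional <T^i_j>
    return ((1 + u) / (1 - u) ** 5
            + u / 2 * (u ** 2 - 4 * u + 6) / (1 - u) ** 4
            + (u ** 2 - 3 * u + 3) / (1 - u) ** 3)
```

The finite-difference residual vanishes to numerical accuracy for generic $({n},{\nu})$, and at ${\nu}=n/2=3/2$ the hypergeometric brackets reduce to the rational five-dimensional ones.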
Perturbations {#sec:pert} ============= We shall now discuss the form of the field perturbations, focusing on the bulk massless case with $n=3$, since we have obtained a closed form expression for the Green’s function and the stress tensor in the preceding section. We begin with the case of generic coupling, and we consider ${\langle}\phi^2(x){\rangle}$ in the Allen Follaci vacuum. The massless minimally coupled case in the de Sitter (dS) invariant Garriga Kirsten vacuum requires a different treatment, which will be discussed in Section \[sub:mmc\]. Generic coupling ---------------- The renormalized expectation value of $\phi^2(x)$ is given by $${\langle}\phi^2(x){\rangle}^{ren}_{AF}={1\over2}\,G_{ren}^{(1)}(x,x)~,$$ with $G_{ren}^{(1)}$ given by (\[GmasslesBS\]). From Eqs. (\[gaf\]) and (\[FAF234\]), it diverges at ${R}=0$ because of ${\widetilde G}_{AF}$. It is also clear that this contribution is bounded at (null) infinity. From Eq. (\[g0\]), the $G_{LC}$ contribution is regular on the light cone. Equation (\[gsimple\]) shows that the KK contribution $D_{simple}$ diverges on the brane. In the bulk, both $D_{simple}$ and $G_{LC}$ grow logarithmically at infinity. However, the growing terms cancel out. Indeed, Eq. (\[gsimple\]) shows that as long as $x$ is not on the brane, $D_{simple}$ at coincident points is dominated by the logarithmic term. Therefore we have $$\label{gsimpleinf} {1\over2}D_{simple}^{(1)}(x,x)\sim -{3\over 16\pi^2r_0^3}\,\log \Big|1-{r^2\over r_0^2}\Big|~,$$ in the limit $x\to\infty$. As for $G_{LC}$, since the wave function of the bound state in the $M=0$ case is ${{\cal U}}_{{bs}}^2=n/(2r_0^n)$, we find $$\label{g0inf} {1\over2}G_{LC}^{}(x,x)={3\over 8\pi^2r_0^3}\,{\rm Re}\big[\,\log\left({T}-ir\right)\big]+{\cal O}(1)~,$$ where we used Eq. (\[F234\]). It is clear that the logarithmic terms cancel, and as a result ${\langle}\phi^2(x){\rangle}^{ren}_{AF}$ is bounded at infinity. This is expected, because the bulk is flat. 
Intuitively, in four dimensional dS space, the perturbations grow because when the modes are stretched to a super-horizon scale, they freeze out. Since modes of ever smaller scales are continuously being stretched, they pile up at a constant rate [@vilenkin]. Since this effect is due to the local curvature of the spacetime, it should not happen in a flat bulk. Accordingly, on the brane we recover the same behavior as in de Sitter space. Indeed, restricting (\[g0inf\]) to the brane, we obtain $G_{LC}^{(1)} \sim \chi$. We have mentioned before that $D_{simple}(x,x)$ is UV divergent on the brane. This happens because the point splitting regularization used here does not operate on the brane, so this object needs UV regularization and renormalization. This can be done in a variety of ways, [*e.g.*]{} with dimensional regularization [^9], introducing a finite brane thickness, smearing the field, etc. The point is that the renormalized value must be a constant simply because $D_{simple}(x,x)$ is dS invariant and is a function of $x$ only. Thus, Eqs. (\[gsimpleinf\]) and (\[g0inf\]) imply that on the brane, ${\langle}\phi^2{\rangle}$ grows in time, as in dS space. Massless minimal coupling {#sub:mmc} ------------------------- Now, we shall see that essentially the same features arise for a massless minimally coupled scalar in the GK vacuum, in which case everything can be obtained in a dS invariant way [@gaki]. Because of the shift symmetry $\phi\to\phi+$const., ${{\langle}\phi^2 {\rangle}}$ is not an observable in this case. Still, it is possible to define a ‘shift-invariant’ notion for the field perturbations. Following [@gaki], one introduces the correlator $$\label{phixy} {\cal G}(x,y)\equiv\big\langle\left[\phi(x) -\phi(y)\right]^2\big\rangle_{GK},$$ which can be thought of as the combination $\big[G^{(1)}(x,x)+G^{(1)}(y,y)-2G^{(1)}(x,y)\big]/2$. 
Since the first two terms are UV divergent, we shall rather consider $$\label{phixyren} {\cal G}^{ren}(x,y)= {1\over2}\left[G^{ren(1)}(x,x)+G^{ren(1)}(y,y)-2G^{ren(1)}(x,y)\right].$$ The main point is that all the terms of the form $f(x)+f(y)$ in $G^{ren (1)}$ cancel out in the combination ${\cal G}^{ren}(x,y)$. All the terms in Eqs. (\[g0\]) and (\[gaf\]) that do not vanish in the limit $\alpha\to0$ are of this form because ${{\cal U}}^{{bs}}$ is constant in the massless minimally coupled case. Thus, this correlator is well defined for the GK vacuum [@gaki] and we can readily write $$\label{regsimple} {\cal G}^{ren}(x,y)={1\over2}\left[ D^{(1)}_{simple}(x,x)+D^{(1)}_{simple}(y,y)-2D^{(1)}_{simple}(x,y)\right]~,$$ with $D_{simple}(x,y)$ given by (\[gsimple\]).[^10] The behavior of the perturbations in the GK vacuum for distant $x$ and $y$ is thus described by the asymptotic behavior of $D_{simple}(x,x)$ and $D_{simple}(x,y)$. The former is summarized in Eq. (\[gsimpleinf\]). As mentioned before, for finite $x$, $D_{simple}(x,x)$ is regular everywhere except on the brane (in which case $x_I=x$). Before describing the asymptotic form of $D_{simple}(x,y)$, we need to discuss the singularities that it contains. The combination that we called $\Delta$ in Eq. (\[gsimple\]) is $$\Delta(x,y)=|x-y_I|^2/|y_I|^2~,$$ where $|x|^2=\eta_{\mu\nu}x^\mu x^\nu$ and $y_I^\mu=(r_0^2/|y|^2) y^\mu$ is the image of $y$ (see comments around Eq. (\[integral\]) and Fig. \[diagram\]). Note that when $y$ is in one of the Milne regions, then $y_I$ is in the other one. Rather than an ‘image charge’, it represents the point where the light cone focuses (see Fig. \[rays\] (b)). For $x\neq y$ and both finite, $D^{(1)}_{simple}(x,y)$ has singularities in two types of situations. One is when $x$ is on the light cone of $y_I$ (then, $\Delta=0$). The other case arises when the argument of the logarithmic term in Eq. (\[gsimple\]) vanishes. 
It is convenient to rewrite this term as $$\label{arglog} -{3\over 8\pi^2 r_0^3}\; \log\Big|{\left(|x-y_I|+|y_I|\right)^2-|x|^2\over 2|y_I|^2}\Big| ~.$$ The argument vanishes when $x$ and $y_I$ are aligned with respect to the origin *and* $|x|^2>|y_I|^2$. This condition defines an ‘image string’ stretching from $y_I$ to infinity. Hence, this singularity occurs only when the two points are in different Milne regions. This is consistent with the interpretation of Eq. (\[integral\]), that the Green’s function can be constructed with a mirror image and a linear charge distribution over such a string, see Fig. \[rays\].[^11] These two situations correspond to the coincidence of $x$ with the image of $y$ or its ‘light cone’. Therefore all of them are of UV nature.[^12] $$\begin{array}{cccc} \includegraphics[width=4.2cm]{confDiagram6space2.eps} &\includegraphics[width=4.2cm]{confDiagram6null2.eps} &\includegraphics[width=4.2cm]{confDiagram6time2.eps} &\includegraphics[width=4.2cm]{confDiagram6infty2.eps}\nonumber\\[-1cm] {\rm (a)}& {\rm (b)}& {\rm (c)}& {\rm (d)} \end{array}$$ The asymptotic form of $D_{simple}(x,y)$ for distant points follows from Eq. (\[arglog\]), because the inverse powers of $\Delta$ in (\[gsimple\]) then remain finite. Taking $y$ fixed and $x\to\infty$ (with $x$ not on the light cone from $y_I$ nor on the image string nor on its light wedge), one obtains $$\label{simplexy} D^{(1)}_{simple}(x,y)\simeq -{3\over 8\pi^2 r_0^3}\; \log\big|x\big| ~.$$ However, since $D_{simple}(x,x)$ grows twice as fast (see Eq. (\[gsimpleinf\])), the combination (\[phixyren\]) is bounded in the bulk. We can also consider both $x$ and $y$ approaching null infinity in the bulk. In this case, the image $y_I$ approaches the light cone, as illustrated in Fig. \[rays\] (d). Then the logarithm in Eq. (\[arglog\]) is $\sim\log\big(|x-y_I|/|y_I|^2\big) \simeq \log\big(|x|\,|y|\big)$, hence the combination ${\cal G}^{ren}$ is bounded again. On the brane, the situation is very different. 
As mentioned before, $D_{simple}(x,x)$ is divergent but when properly renormalized, it is simply a constant because of dS symmetry. Then, ${\cal G}^{ren}(x,y)$ behaves like $-D_{simple}(x,y)$, and from Eq. (\[arglog\]) for large separation and $|x|=|y|=r_0$, one obtains $${\cal G}^{ren}(x,y)\simeq {3\over 4\pi^2 r_0^3}\; \log\big|x-y\big| ~.$$ Since $|x-y|^2=2r_0^2(1-\cos\zeta)$, this corresponds to linear growth with the invariant dS time interval $\zeta(x,y)$ between $x$ and $y$ [@gaki]. Finally, we shall mention that the generalization of Eq. (\[phixy\]), $$\label{calgmass} \Big\langle\left[{\phi(x)\over{{\cal U}}^{{bs}}(x)} -{\phi(y)\over{{\cal U}}^{{bs}}(y)}\right]^2\Big\rangle^{ren}$$ is completely regular in the GK vacuum when $M\neq0$, and its asymptotic behaviour at infinity parallels that of (\[phixy\]). ![ Divergences in ${\langle}\phi^2{\rangle}^{ren}=G(x,x)^{ren}$ and ${{\langle}T_{\mu\nu}{\rangle}}^{ren}$ when the bound state is massless. For the AF vacuum, both the Green’s function and the stress tensor diverge at ${R}=0$, indicated with a thick dashed line. In addition, if $\xi\neq0$ the stress tensor diverges on the light cone like $1/U$ or logarithmically in four and six dimensions respectively (thin dashed line). Besides, we also have the ultraviolet ‘Casimir’ divergence on the brane, represented by the plain thick line. In the Garriga Kirsten vacuum (for the massless minimally coupled scalar), ${{\langle}T_{\mu\nu}{\rangle}}^{ren}$ presents only this Casimir-type divergence on the brane.[]{data-label="diagram"}](confDiagram32.eps){width="5cm"} Conclusions {#sec:concl} =========== Our main results can be summarized as follows. In analogy with what happens in de Sitter (dS) space [@af; @gaki], scalar fields with a massless bound state in the spectrum do not have a well defined dS invariant vacuum, except for the massless minimally coupled case. (The case of vanishing bulk mass with non-vanishing curvature coupling has a little subtlety, though.) 
The Green’s function and the v.e.v. of the energy momentum tensor diverge everywhere. The simplest alternative, by analogy with the dS case, is to take the Allen Follaci vacuum. However, in this vacuum, divergences in the stress tensor are not removed completely within the bulk. Figure \[diagram\] illustrates the location of the IR singularities in the Green’s function and the stress tensor for the AF vacuum. It remains to clarify whether or not it is possible to avoid these singularities by choosing a vacuum for the KK modes other than the dS invariant vacuum. When the bound state is very light (but not exactly massless) because $M$, ${\mu}$ and/or $\xi$ are fine-tuned according to Eq. (\[muc\]), the stress tensor in the dS invariant vacuum takes the form (\[IRdivtmn\]). The stress tensor in this case is smooth, but it becomes very large. Hence, even when the bound state mass is not exactly zero, the dS invariant vacuum looks problematic because of large backreaction. Note that the situation here is different from the usual dS case in two respects. In dS space the large v.e.v. in the stress tensor for the dS invariant vacuum is a constant proportional to the metric. Hence, it might be absorbed by IR renormalization of the cosmological constant. In our case, the stress tensor given by Eq. (\[IRdivtmn\]) is a nonlocal expression and cannot be ‘renormalized away’. On the other hand, if one does not want to make any IR renormalization, in the dS case one can take the AF vacuum, and ${{\langle}T_{\mu\nu}{\rangle}}$ stays regular. In the brane world, we do not know a prescription for removing this large v.e.v. by changing the vacuum state. Choosing a non-dS-invariant vacuum will lead not only to the mentioned divergence in the bulk, but also to a new singularity on the light cone when the bound state mass $m_d$ is not exactly zero. In this case, the radial function for the bound state behaves like $\propto r^{-m_d^2/n}$ near the light cone at $r=0$. 
Hence, if one singles out the contribution to the Green’s function from the bound state, it is singular and its derivatives diverge at $r=0$. Hence, as long as we restrict the change of quantum state to the bound state, we will not be able to remove the large v.e.v. of the stress tensor without spoiling its regularity. A light bound state is compatible with a well behaved and not overly large stress tensor only in a situation ‘close’ to the massless minimally coupled case. More precisely, here we consider the case in which all of the bulk mass $M$, the brane mass ${\mu}$ and the nonminimal coupling $\xi$ are small (see Eq. (\[BSmassMsmall\])). This corresponds to having a light bound state without accidental cancellations. Namely, the squared bound state mass $m_d^2$ is of the order of the largest among $M^2$, $H {\mu}$ and $H^2 \xi$, where $H$ is the Hubble constant on the brane. In this case, the dangerous terms proportional to $m_d^{-2}$ are always associated with some small factor $M^2$, ${\mu}$ or $\xi$, and therefore none of them becomes large. The case with $M^2\approx H{\mu}\ll H^2\xi$ has a little subtlety. In this case, the large v.e.v. in ${{\langle}T_{\mu\nu}{\rangle}}$ appears only in the brane part, and it is a constant proportional to the induced metric. Hence, we might be able to consider a model with an appropriate IR renormalization. Only in such a modified model can the stress tensor in the dS invariant vacuum escape the appearance of a large v.e.v. Application to bulk inflaton type models [@kks; @hs] is part of the motivation for the present study. In these models there must be a light bound state of a bulk scalar field. In order to explain the smallness of the bound state mass it will be natural to assume that it is due to the smallness of all the bulk and brane parameters, without fine tuning. Therefore we will not have to seriously worry about the backreaction of the inflaton in the context of bulk inflaton type models. 
We have a few words to add on the massless minimally coupled case. If we approach it as a limit of nearby cases, the results depend on how we fix the ratios among $M$, $H {\mu}$ and $H^2 \xi$, and hence there remains an ambiguity. However, this limiting case has the shift symmetry $\phi\to\phi+$constant. If this shift symmetry is one of the symmetries that are to be gauged, there is no ambiguity because the problematic homogeneous mode does not exist in the theory from the beginning. In this setup, undifferentiated $\phi$ is not an observable. In fact, ${{\langle}T_{\mu\nu}{\rangle}}$ automatically does not contain undifferentiated $\phi$. Hence, ${{\langle}T_{\mu\nu}{\rangle}}$ is unambiguously defined although the Green’s function is not well defined. We can compute ${{\langle}T_{\mu\nu}{\rangle}}$ in this model by applying the idea of the Garriga Kirsten vacuum (equivalent to a limiting case of the AF vacuum), and we confirmed that it is dS invariant and regular, as expected. We have also discussed the form of the field perturbations ${{\langle}\phi^2 {\rangle}}$ when the bound state is massless. The main point is that on the brane ${{\langle}\phi^2 {\rangle}}$ grows linearly with dS time $\chi$ while in the bulk it is bounded, as expected since the bulk is flat. Aside from this, we derived ${{\langle}T_{\mu\nu}{\rangle}}$ in closed form for a generic field with zero bulk mass, see Eq. (\[tmassless\]). The same discussion applies in the RSII model with small modifications, which mainly come from the fact that the Ricci tensor term in the bulk stress tensor $\sim \xi{{\langle}\phi^2 {\rangle}}{\cal R}_{\mu\nu}$ is not zero in the RSII model. Thus, the bulk part of ${{\langle}T_{\mu\nu}{\rangle}}$ is finite in the limit of a massless bound state only for massless minimal coupling. We are grateful to Jaume Garriga, Misao Sasaki, Wade Naylor, Alan Knapman and Luc Blanchet for useful discussions. T.T. 
acknowledges support from Monbukagakusho Grant-in-Aid No. 16740141 and Inamori Foundation, and O.P. from JSPS Fellowship number P03193. Massive Green’s function in de Sitter {#sec:GdS} ===================================== In this Appendix we obtain the form of the Green’s function for a massive field propagating in $D-1=n+1$ dimensional de Sitter (dS) space. The metrics for dS space and its Euclidean version are $$dS_{(n+1)}^2 = -d\chi ^2+ \cosh^2\chi d\Omega_{({n})}^2 = d{\chi_{{}_{E}}}^2+\sin^2{\chi_{{}_{E}}}d\Omega_{({n})}^2,$$ where $d\Omega_{(n)}^2$ is the metric of a unit $n$ dimensional sphere. The Euclidean time is given by ${\chi_{{}_{E}}}= i\chi + \pi/2$. The Euclidean version of the Hadamard function in dS space can be found from the equation $$\left[\partial_{{\chi_{{}_{E}}}}^2+{n}\cot{\chi_{{}_{E}}}\partial_{{\chi_{{}_{E}}}} - \left(p^2+{{n}^2\over 4}\right)\right]G^{(dS)}_p(x,x') =-\delta^{({n}+1)}(x-x').$$ From the symmetry, we can choose $x'$ to be at the pole so that $G^{(dS)}_p$ depends on ${\chi_{{}_{E}}}$ only. In terms of $F=(\sin({\chi_{{}_{E}}}-\pi))^{({n}-1)/2} G^{(dS)}_p(x,x')$, this becomes $$\left[(1-w^2)\partial_w^2-2w\partial_w -\left(p^2+{1\over 4}\right) -{({n}-1)^2\over 4 (1-w^2)}\right]F=0,$$ where $w=\cos({\chi_{{}_{E}}}-\pi)$, and this is solved to give $$F=N e^{({n}-1)\pi i\over 2}\Gamma\left({{n}+1\over 2}\right) P^{-{{n}-1\over 2}}_{ip-{1\over 2}} (\cos({\chi_{{}_{E}}}-\pi)).$$ Hence, the Green’s function will be given by the replacement of ${\chi_{{}_{E}}}$ by the proper distance between the points $x$ and $x'$ in $dS$ space, which we call $\zeta(x,x')$. $G^{(dS)}_p$ is guaranteed to be regular at $\zeta \to \pi$. 
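The reduction from the equation for $G^{(dS)}_p$ to the Legendre form above can be spelled out as follows (our sketch, in the same notation; only the homogeneous region away from the source is considered):

```latex
% Sketch of the reduction.  With u = \chi_E - \pi (so that \cot\chi_E = \cot u)
% and G = (\sin u)^{-(n-1)/2} F, the homogeneous radial equation
\partial_u^2 G + n \cot u \,\partial_u G - \Big( p^2 + \frac{n^2}{4} \Big) G = 0
% becomes, once the prefactor is absorbed,
\partial_u^2 F + \cot u \,\partial_u F
  - \Big( p^2 + \frac{1}{4} \Big) F - \frac{(n-1)^2}{4 \sin^2 u}\, F = 0 .
% The change of variable w = \cos u then yields the associated Legendre
% equation with order \mu = (n-1)/2 and degree \nu = ip - 1/2, since
\nu(\nu+1) = \Big( ip - \tfrac{1}{2} \Big)\Big( ip + \tfrac{1}{2} \Big)
           = -\Big( p^2 + \tfrac{1}{4} \Big),
% which is why the solution is P^{-(n-1)/2}_{ip-1/2}(\cos(\chi_E - \pi)).
```

The phase factor $e^{(n-1)\pi i/2}$ in the normalization then accounts for $\sin(\chi_E-\pi) = -\sin\chi_E$.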
To see the behaviour in the $\zeta \to 0$ limit, the alternative expression $$\begin{aligned} \label{gds} G^{(dS)(1)}_p&=&{\tilde N\over (1-\cos\zeta)^{{n}-1\over 2}} \Biggl\{F\left(-ip+{1\over 2},ip+{1\over 2},{-{n}+3\over 2}; {1-\cos\zeta\over 2}\right)\cr &&+ {\Gamma\left({{n}\over 2}-ip\right)\Gamma\left({{n}\over 2}+ip\right) \Gamma\left(-{{n}-1\over 2}\right) \over \Gamma\left({1\over 2}-ip\right)\Gamma\left({1\over 2}+ip\right) \Gamma\left({{n}-1\over 2}\right) }\left({1-\cos\zeta\over 2}\right)^{{n}-1\over 2} F\left(-ip+{{n}\over 2},ip+{{n}\over 2},{{n}+1\over 2}; {1-\cos\zeta\over 2}\right) \Biggr\}\end{aligned}$$ is relevant, where $$\tilde N= {\Gamma\left({{n}+1\over 2}\right)\Gamma\left({{n}-1\over 2}\right) \over \Gamma\left({{n}\over 2}-ip\right)\Gamma\left({{n}\over 2}+ip\right)}N~.$$ In the coincidence limit, the Green’s function must behave like $G_{(n+1)}^{(1)}\approx 1/\left[(n-1) S_{({n})}\zeta^{{n}-1}\right]$, where $S_{({n})}$ is the area of an ${n}$-dimensional unit sphere. The limiting behaviour is controlled by the first term. Hence we have $\tilde N=2^{-({n}-1)/2}/\left[(n-1)S_{({n})}\right]$. Massless Green’s function in dS {#sec:GdSmassless} =============================== Here, we compute the Green’s function for the massless scalar in dS. In the massless limit ($p\to in/2$), the de Sitter (dS) invariant Green’s function diverges because of the contribution from the $\ell=0$ mode, ${\cal Y}_{p00}$. The idea is to construct a modified Green’s function by replacing this mode with another one that remains finite in the limit $p\to in/2$. In other words, $$G_{(m=0)}^{(dS)(+)}=G_{(\ell>0)}^{(+)} + {\widetilde {\cal Y}}_{}^{}{\widetilde {\cal Y}_{}}'^{*},$$ where ${\widetilde {\cal Y}}$ corresponds to the $\ell=0$ mode and $$G_{(\ell>0)}^{(+)}=\sum_{\ell>0} {\cal Y}_{p\ell m}(\chi) {\cal Y}^*_{p\ell m}(\chi')~,$$ where ${\cal Y}_{p\ell m}$ are the positive frequency dS invariant vacuum modes. The latter can be obtained as follows. 
Since $G^{(dS)}_p$ diverges because of the $\ell=0$ mode, the divergent terms in the Laurent expansions of both $G^{(dS)}_p$ and ${\cal Y}_{p00}{\cal Y}_{p00}'^*$ coincide. We show below that this series contains a simple pole at $p=in/2$. Hence, we can write $$\label{l>0} G_{(\ell>0)}^{(+)}=\lim_{p\to{in\over2}} \left[ G^{(dS)(+)}_p - {\cal Y}_{p00}^{~} {\cal Y}_{p00}^{*}\right] =\partial_{p}\left[ \left(p-i{n\over2}\right) G^{(dS)(+)}_p \right]_{p=i{n\over2}} - \partial_{p}\left[\left(p-i{n\over2}\right) {\cal Y}_{p00}^{~} {\cal Y}_{p00}'^{*} \right]_{p=i{n\over2}} ~.$$ The explicit expression for the first term on the r.h.s. is unnecessary because it is canceled by the KK contribution (\[gcont\]). The dS invariant vacuum mode for $\ell=0$ with $-ip<n/2$ is $$\label{y0} {\cal Y}_{p00}=C_p \; { 2^{{n}-1\over2}\Gamma\left({{n}+1\over 2}\right)\over \left(\cosh\chi\right)^{{n}-1\over2}} \;P^{-{{n}-1\over 2}}_{-ip-{1\over 2}}(i\sinh \chi)~,$$ and the normalization constant has been chosen so that $\lim_{p\to in/2}\, {\cal Y}_{p00}/C_p=1$. The Klein–Gordon normalization requires $$\label{cp} |C_p|^2= {\Gamma\left(ip +{{n}\over 2}\right) \Gamma\left(-ip +{{n}\over 2}\right) \over 2^{n}\Gamma\left({{n}+1\over 2}\right)^2 S_{(n)}}~.$$ Expanding around $p=in/2$ we find $$\label{cpsmall} |C_p|^2={1\over i n S_{(n+1)} } {1\over p-in/2} + {\cal O}\left[(p-in/2)^0\right]~,$$ where $S_{(n+1)}$ is the area of an $n+1$ dimensional sphere of unit radius. 
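The residue in Eq. (\[cpsmall\]) follows from Eq. (\[cp\]) by using $\Gamma(ip+n/2)\simeq 1/[i(p-in/2)]$ and $\Gamma(-ip+n/2)\to \Gamma(n)$ near $p=in/2$. The resulting prefactor identity can be checked numerically with the Python standard library alone, assuming the usual area formula $S_{(n)} = 2\pi^{(n+1)/2}/\Gamma\left(\frac{n+1}{2}\right)$ (an input we supply here, not stated in the text):

```python
import math

def sphere_area(n: int) -> float:
    """Area S_(n) of the unit n-sphere: 2 pi^((n+1)/2) / Gamma((n+1)/2)."""
    return 2.0 * math.pi ** ((n + 1) / 2) / math.gamma((n + 1) / 2)

def residue_prefactor(n: int) -> float:
    """Prefactor of the pole of |C_p|^2 obtained from Eq. (cp) by using
    Gamma(ip + n/2) ~ 1/(i(p - in/2)) and Gamma(-ip + n/2) -> Gamma(n)."""
    return math.gamma(n) / (2 ** n * math.gamma((n + 1) / 2) ** 2 * sphere_area(n))

# Eq. (cpsmall) states that this prefactor equals 1/(n S_(n+1)):
for n in (1, 2, 3, 4):
    assert abs(residue_prefactor(n) - 1.0 / (n * sphere_area(n + 1))) < 1e-12
```

The agreement rests on Legendre's duplication formula $\Gamma(n) = 2^{n-1}\pi^{-1/2}\Gamma(\frac{n}{2})\Gamma(\frac{n+1}{2})$, which reduces both sides to $\Gamma(\frac{n}{2})/(4\pi^{n/2+1})$.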
The behaviour of ${\cal Y}_{p00}$ near $p=in/2$ is most easily found using the equation that it solves, $$\left[\partial_\chi^2+{n}\tanh \chi\partial_\chi + \left(p^2+{{n}^2\over 4}\right)\right] {\cal Y}_{p00} =0~.$$ Bearing in mind that the positive frequency function for the dS invariant vacuum is determined by regularity when the function is continued to the Euclidean region on the side that contains $\chi=-\pi i/2$, one finds $$\label{yp00} {\cal Y}_{p00}(\chi)=C_p\;\left[1-\left(p^2+{{n}^2\over 4}\right) \int^\chi d\chi_{{}_1}{1\over \cosh^{n}\chi_{{}_1}} \int_{-\pi i/2}^{\chi_{{}_1}} d\chi_{{}_2} \cosh^{n}\chi_{{}_2} + {\cal O}\left( \left(p-i n/ 2\right)^2\right) \,\right]~.$$ From this result, the last term in Eq. (\[l>0\]) can be readily evaluated, and Eq. (\[g0\]) follows. Divergence in the Green’s function and the $\ell=0$ mode {#sec:cancelation} ======================================================== Here, we show that the IR divergence in the Green’s function (\[polesmass\]) or (\[polesmassless\]) is due to the homogeneous $\ell=0$ mode of the bound state. Using Eqs. (\[wavefunct\]), (\[radialnorm\]), (\[cpsmall\]) and (\[yp00\]), the contribution from the $\ell=0$ mode of the bound state for $p_d$ close to $in/2$ is found to be $$\begin{aligned} \label{zerozero} G_{\ell=0,p_d}^{(1)}&=& {{\cal U}}^{{bs}}(r){{\cal U}}^{{bs}}(r')\; {\cal Y}_{p_d00}(\chi) {\cal Y}^*_{p_d00}(\chi') +{\rm c.c.} \simeq 2{{\cal U}}^{{bs}}(r){{\cal U}}^{{bs}}(r')\;|C_{p_d}|^2 \left[ 1+ {\cal O}\left( p_d-in/2\right) \right] \cr &\simeq& {1\over S_{(n+1)} } \;{1\over p_d-in/2}\; { I_{n/2}(Mr) I_{n/2}(Mr')\, /\,(r r')^{{n}/2} \over I_{n/2}(Mr_0) \left.\partial_p \left({\nu}_c I_{-ip}(Mr_0) -M r_0I'_{-ip}(Mr_0) \right)\right\vert_{p=in/2}} +\dots \end{aligned}$$ where the dots denote higher-order terms in $p_d-in/2$. In the same limit, ${\nu}$ is close to ${\nu}_c$. 
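The evaluation that follows relies on the modified-Bessel Wronskian $K_{n/2}(z) I_{n/2}'(z)-K_{n/2}'(z) I_{n/2}(z)=1/z$. As a quick stdlib-only sanity check (our addition), it can be verified at order $1/2$, where $I$ and $K$ have elementary closed forms — a special case chosen for convenience, since Python's standard library provides no general Bessel functions:

```python
import math

def I_half(z: float) -> float:
    """I_{1/2}(z) = sqrt(2/(pi z)) sinh z  (standard closed form)."""
    return math.sqrt(2.0 / (math.pi * z)) * math.sinh(z)

def K_half(z: float) -> float:
    """K_{1/2}(z) = sqrt(pi/(2 z)) exp(-z)  (standard closed form)."""
    return math.sqrt(math.pi / (2.0 * z)) * math.exp(-z)

def wronskian(z: float, h: float = 1e-6) -> float:
    """K I' - K' I, derivatives taken by central differences."""
    dI = (I_half(z + h) - I_half(z - h)) / (2.0 * h)
    dK = (K_half(z + h) - K_half(z - h)) / (2.0 * h)
    return K_half(z) * dI - dK * I_half(z)

# The identity K I' - K' I = 1/z holds for any order nu; check nu = 1/2.
for z in (0.3, 1.0, 2.5):
    assert abs(wronskian(z) - 1.0 / z) < 1e-7
```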
The $j=k=0$ term in (\[polesmass\]) is $$\begin{aligned} \label{cancelation} G_{j=0,k=0}^{(ren)(1)} &=& -{ \,n \Gamma\left({n}\right) \over 2^{{n}}\Gamma\left({{n}+1\over 2}\right)^2 S_{({n})}} \;{{\nu}K_{{n}/2}(Mr_0) -Mr_0 K'_{{n}/2}(Mr_0) \over {\nu}I_{{n}/2}(Mr_0) -Mr_0 I'_{{n}/2}(Mr_0)} \;{I_{{n}/2}(Mr) I_{{n}/2}(Mr')\over (rr')^{{n}/2}}\cr &=& -{ 1 \over S_{({n}+1)}} \;{{\nu}_c K_{{n}/2}(Mr_0) -Mr_0 K'_{{n}/2}(Mr_0) + \left({\nu}-{\nu}_c\right) K_{{n}/2}(Mr_0) \over I_{{n}/2}(Mr_0)\left({\nu}-{\nu}_c\right) } \;{I_{{n}/2}(Mr) I_{{n}/2}(Mr')\over (rr')^{{n}/2}}\cr &\simeq& - { 1 \over S_{({n}+1)}} \;{ I_{{n}/2}(Mr) I_{{n}/2}(Mr')/ (rr')^{{n}/2} \over I^2_{n/2}(Mr_0) \left({\nu}-{\nu}_c \right)} + {\cal O}\left[({\nu}-{\nu}_c)^0\right]\end{aligned}$$ where in the second line we used Eq. (\[muc\]) and the Wronskian relation $K_{n/2}^{}(z) I_{n/2}'(z)-K_{n/2}'(z) I_{n/2}^{}(z)=1/z$. Using Eq. (\[BSmassM\]), it is easy to see that $$\begin{aligned} \label{taylor} {\nu}-{\nu}_c = \partial_{p_d} {\nu}\big|_{p_d=in/2}\, \left(p_d-i{n\over2}\right) + \dots = -{\partial_{p}\left[ {\nu}_c I_{-ip}(Mr_0) -Mr_0 I_{-ip}'(Mr_0) \right]\big|_{p=in/2}\over I_{{n}/2}(Mr_0) }\; \left(p_d-i{n\over2}\right) + \dots~,\end{aligned}$$ so (\[zerozero\]) and (\[cancelation\]) agree in the limit $p_d\to in/2$. Note that, since in this limit this is the dominant contribution and $m_d^2\simeq in(p_d-in/2)$, the total Green’s function (\[polesmass\]) can be rewritten in the simple form $$\label{glimit} G^{(1)}_{(ren)}(x,x')\simeq{2\over S_{(n+1)}}\;{{{\cal U}}_0^{{bs}}(r){{\cal U}}_0^{{bs}}(r')\over r_0^2\, m_d^2}+{\cal O}\left(m_d^0\right)~,$$ where ${{\cal U}}_0^{{bs}}(r)=N_0 I_{n/2}(Mr)/r^{n/2}$ is the wave function of the bound state in the exactly massless case. [99]{} L. Randall and R. Sundrum, Phys. Rev. Lett.  [**83**]{}, 4690 (1999) \[arXiv:hep-th/9906064\]. L. Randall and R. Sundrum, Phys. Rev. Lett.  [**83**]{}, 3370 (1999) \[arXiv:hep-ph/9905221\]. W. Naylor and M. Sasaki, Phys. Lett. 
B [**542**]{}, 289 (2002) \[arXiv:hep-th/0205277\]; E. Elizalde, S. Nojiri, S. D. Odintsov and S. Ogushi, Phys. Rev. D [**67**]{}, 063515 (2003) \[arXiv:hep-th/0209242\]; I. G. Moss, W. Naylor, W. Santiago-German and M. Sasaki, Phys. Rev. D [**67**]{}, 125010 (2003) \[arXiv:hep-th/0302143\]; I. Brevik, K. A. Milton, S. Nojiri and S. D. Odintsov, Nucl. Phys. B [**599**]{}, 305 (2001) \[arXiv:hep-th/0010205\]. S. Nojiri and S. D. Odintsov, JCAP [**0306**]{}, 004 (2003) \[arXiv:hep-th/0303011\]. S. Kobayashi, K. Koyama and J. Soda, Phys. Lett. B [**501**]{}, 157 (2001) \[arXiv:hep-th/0009160\]. Y. Himemoto and M. Sasaki, Phys. Rev. D [**63**]{}, 044015 (2001) \[arXiv:gr-qc/0010035\]. A. Romeo and A. A. Saharian, J. Phys. A [**35**]{}, 1297 (2002) \[arXiv:hep-th/0007242\]. A. Knapman and D. J. Toms, Phys. Rev. D [**69**]{}, 044023 (2004) \[arXiv:hep-th/0309176\]. J. Garriga and M. Sasaki, Phys. Rev. D [**62**]{}, 043523 (2000) \[arXiv:hep-th/9912118\]. N. Sago, Y. Himemoto and M. Sasaki, Phys. Rev. D [**65**]{}, 024014 (2002) \[arXiv:gr-qc/0104033\]. Y. Himemoto, T. Tanaka and M. Sasaki, Phys. Rev. D [**65**]{}, 104020 (2002) \[arXiv:gr-qc/0112027\]. K. Koyama and K. Takahashi, Phys. Rev. D [**67**]{}, 103503 (2003) \[arXiv:hep-th/0301165\];\ [*ibid.*]{} D [**68**]{}, 103512 (2003) \[arXiv:hep-th/0307073\]. N. D. Birrell and P. C. W. Davies, “Quantum Fields In Curved Space,” Cambridge University Press. B. Allen and A. Folacci, Phys. Rev. D [**35**]{}, 3771 (1987);\ A. Folacci, J. Math. Phys.  [**32**]{}, 2828 (1991) \[Erratum-ibid.  [**33**]{}, 1932 (1992)\]. K. Kirsten and J. Garriga, Phys. Rev. D [**48**]{}, 567 (1993) \[arXiv:gr-qc/9305013\]. A. Vilenkin and L. H. Ford, Phys. Rev. D [**26**]{}, 1231 (1982). A. D. Linde, Phys. Lett. B [**116**]{}, 335 (1982). A. A. Starobinsky, Phys. Lett. B [**117**]{}, 175 (1982). A. Vilenkin, Nucl. Phys. B [**226**]{}, 527 (1983). S. W. Hawking and I. G. Moss, Nucl. Phys. B [**224**]{}, 180 (1983). X. Montes, Int. J. Theor. Phys.  
[**38**]{}, 3091 (1999) \[arXiv:gr-qc/9904022\]. A. Vilenkin, Phys. Lett. B [**133**]{} (1983) 177. J. Ipser and P. Sikivie, Phys. Rev. D [**30**]{}, 712 (1984). A. A. Saharian, Phys. Rev. D [**69**]{}, 085005 (2004) \[arXiv:hep-th/0308108\]. G. Kennedy, R. Critchley and J. S. Dowker, Annals Phys.  [**125**]{}, 346 (1980). D. Deutsch and P. Candelas, Phys. Rev. D [**20**]{}, 3063 (1979). S. A. Fulling, J. Phys. A [**36**]{}, 6529 (2003) \[arXiv:quant-ph/0302117\]. D. Langlois and M. Sasaki, Phys. Rev. D [**68**]{}, 064012 (2003) \[arXiv:hep-th/0302069\]. J. P. Gazeau, J. Renaud and M. V. Takook, Class. Quant. Grav.  [**17**]{}, 1415 (2000) \[arXiv:gr-qc/9904023\]. [^1]: pujolas@yukawa.kyoto-u.ac.jp [^2]: tama@scphys.kyoto-u.ac.jp [^3]: Generically, surface terms are irrelevant for the Casimir force, but are essential to relate the vacuum energy density and the Casimir energy, see [@kcd; @dc; @fulling; @saharian; @romeosaharian]. [^4]: We omit the anomaly term since it vanishes in the bulk for odd dimension, and on the brane it can be absorbed in a renormalization of the brane tension. [^5]: \[footn:mu\] For $M=0$, $\;{\mu}_{{\rm eff}}\,{\langle}\phi^2{\rangle}$ is finite because ${\langle}\phi^2{\rangle}\sim 1/{\mu}_{{\rm eff}}$, see Eq. (\[BSmassMsmall\]). [^6]: Taking this vacuum is analogous to performing a Gupta-Bleuler quantization [@renaud]. [^7]: From Eq. (\[polesmassless\]), the expression for a Dirichlet scalar (${\nu}\to\infty$) is the same with opposite sign. [^8]: In Ref. $Z_2$ reflection symmetry is not assumed for the field. [^9]: It is illustrative to consider the massless conformal case. The Green’s function (\[mu=0\]) for $x$ and $x'$ on the brane is simply $D^{(ren)}_{{\nu}=0}(x,x')\sim 1/|x-x'|^{n/2}$. If $n$ is negative, this is clearly zero in the coincidence limit, and hence the continuation to $n=3$ is zero as well. 
In the non-conformal case, we can use the integral form (\[integral\]) to compute ${{\langle}\phi^2 {\rangle}}\delta(r-r_0)$ in arbitrary dimension. Upon continuation to $n=3$, one obtains a pole $\sim1/(n-3)$ and a nonzero finite part. The pole can be absorbed in the brane tension and ${\langle}\phi^2(r_0){\rangle}^{ren}$ is a finite constant. [^10]: Note that (\[regsimple\]) is completely regular on the light cone. [^11]: In four dimensions [@xavi], the logarithmic term is $\log \big||x-y_I|^2/|y_I|^2\big|$. Thus, this string singularity does not appear, although the same interpretation of Eq. (\[integral\]) holds. [^12]: As an aside, let us comment on the singularities present in the bound state contribution, specifically in the term $\displaystyle \partial_p \left[(p-p_d) G_p^{D-1}\right]$ present in Eq. (\[gdisc\]). We recall that this term is canceled by the KK contribution (\[gcont\]); hence, it plays no role in the Green’s function. From [@gaki; @af] we know that when $x$ and $y$ are timelike related and far apart, $\zeta$ is large and pure imaginary, and the term behaves like $$\partial_p \left[(p-p_d) G_p^{D-1}\right]\sim \log\big|1-\cos\zeta\big| = \log\Big|{(|x|-|x_I|)^2-|x-x_I|^2\over2|x||x_I|} \Big|.$$ This exhibits divergences on the light cone and whenever $x$ and $y_I$ (or equivalently, $y$) are aligned. This condition reduces to the coincidence limit on the brane, but in the bulk one does not expect singularities to appear at all these points. The above-mentioned cancelation guarantees that these unphysical singularities are not present in $G^{ren}(x,y)$ once we include the KK contribution.
--- abstract: 'We propose a novel reinforcement learning algorithm, QD-RL, that incorporates the strengths of off-policy RL algorithms into Quality Diversity (QD) approaches. Quality-Diversity methods contribute structural biases by decoupling the search for diversity from the search for high return, resulting in efficient management of the exploration-exploitation trade-off. However, these approaches generally suffer from sample inefficiency as they call upon evolutionary techniques. QD-RL removes this limitation by relying on off-policy RL algorithms. More precisely, we train a population of off-policy deep RL agents to simultaneously maximize diversity inside the population and the return of each agent. QD-RL selects agents from the diversity-return Pareto front, resulting in stable and efficient population updates. Our experiments show that QD-RL can solve challenging exploration and control problems with deceptive rewards while being more than 15 times more sample efficient than its evolutionary counterparts.' author: - | Geoffrey Cideron\ InstaDeep\ `g.cideron@instadeep.com`\ Thomas Pierrot\ InstaDeep\ `t.pierrot@instadeep.com`\ Nicolas Perrin\ CNRS, Sorbonne Université\ `perrin@isir.upmc.fr`\ Karim Beguir\ InstaDeep\ `kb@instadeep.com`\ Olivier Sigaud\ Sorbonne Université\ `olivier.sigaud@upmc.fr`\ bibliography: - 'ms.bib' title: 'QD-RL: Efficient Mixing of Quality and Diversity in Reinforcement Learning' --- Introduction ============ Despite outstanding successes in specific domains such as games [@silver2017mastering; @jaderberg2019human] and robotics [@tobin2018domain; @akkaya2019solving], Reinforcement Learning (RL) algorithms are still far from being immediately applicable to complex sequential decision problems. Among the issues, a remaining burden is the need to find the right balance between exploitation and exploration. On the one hand, algorithms which do not explore enough can easily get stuck in poor local optima. 
On the other hand, exploring too much hinders sample efficiency and can even prevent users from applying RL to large real-world problems. Dealing with this exploration-exploitation trade-off has been the focus of many RL papers [@tang2016exploration; @bellemare2016unifying; @fortunato2017noisy; @plappert2017parameter]. Among other things, having a population of agents working in parallel in the same environment is now a common recipe to stabilize learning and improve exploration, as these parallel agents collect a more diverse set of samples. This has led to two approaches, namely [*distributed RL*]{}, where the agents are identical, and [*population-based training*]{}, where diversity between agents further favors exploration [@jung2020population; @parker2020effective]. However, such methods certainly do not make the most efficient use of available computational resources, as the agents may collect highly redundant information. Besides, the focus on sparse or deceptive reward problems led to the realization that looking for diversity independently from maximizing rewards might be a good exploration strategy [@lehman2011abandoning; @eysenbach2018diversity; @colas2018gep]. More recently, it was established that if one can define a [*behavior space*]{} or [*outcome space*]{} corresponding to the smaller space that matters to decide whether a behavior is successful or not, maximizing diversity in this space might be the optimal strategy to find the sparse reward source [@doncieux2019novelty]. When the reward signal is not sparse, though, one can do better than just looking for diversity. Trying to simultaneously maximize diversity and rewards has been formalized into the Quality-Diversity (QD) framework [@pugh2016quality; @cully2017quality]. The corresponding algorithms try to populate the outcome space as widely as possible with an [*archive*]{} of past solutions which are both diverse and reward efficient. To do so, they generally rely on evolutionary algorithms. 
Selecting diverse and reward efficient solutions is then performed using the Pareto front of the *diversity* $\times$ *reward efficiency* landscape, or by populating a grid of outcome cells with reward efficient solutions, as in the algorithm of [@mouret2015illuminating]. In principle, the QD approach offers a great way to deal with the exploration-exploitation trade-off, as it simultaneously ensures pressure towards both a wide covering of the outcome space and high return efficiency. However, these methods suffer from their reliance on evolutionary techniques. Though the latter have been shown to be competitive with deep RL approaches provided enough computational power [@salimans2017evolution; @colas2020scaling], they do not take advantage of the gradient’s analytical form, and thus have to sample to estimate gradients, resulting in far worse sample efficiency than their deep RL counterparts [@sigaud2019policy]. On the other hand, deep RL methods which leverage policy gradients have far better sample efficiency, but they struggle on problems that require strong exploration and are sensitive to poorly conditioned reward signals such as deceptive rewards [@colas2018gep]. This is in part because they explore in the action space, the state-action space or the policy space rather than in an outcome space. In this work, we combine the general QD framework with policy gradient methods and capitalize on the strengths of both approaches. Our algorithm explores in an outcome space and thus can solve problems that simultaneously require complex exploration and high-dimensional control capabilities. We investigate the properties of QD-RL by first controlling a low-dimensional agent in a maze, and then addressing a larger benchmark. 
We compare QD-RL to several recent algorithms which also combine a diversity objective and a return maximization method, namely the family of methods from [@conti2017improving], which mixes evolution strategies with novelty search, and the algorithm of [@colas2020scaling], which maintains a diverse and high-performing population. The latter has been shown to scale well enough to also address large benchmarks, but we show that QD-RL is several orders of magnitude more sample efficient than these competitors. Related Work {#sec:related} ============ We consider the general context of a fully observable Markov Decision Problem (MDP) $\left( \mathcal{S}, \mathcal{A}, \mathcal{T}, \mathcal{R}, \gamma \right)$ where $\mathcal{S}$ is the state space, $\mathcal{A}$ is the action space, $\mathcal{T}: \mathcal{S} \times \mathcal{A} \rightarrow \mathcal{S}$ is the transition function, $\mathcal{R}: \mathcal{S} \times \mathcal{A} \rightarrow \mathbb{R}$ is the reward function and $\gamma$ is a discount factor. The exploration-exploitation trade-off being central in RL, the search for efficient exploration methods is ubiquitous in the domain. We focus on the relationship between our work and two families of methods: those which introduce explicit diversity into a multi-actor deep RL approach, and those which combine distinct mechanisms for exploration and exploitation. #### Diversity in multi-actor RL Managing several actors is now a well-established method to improve wall-clock time and stabilize learning [@jaderberg2017population]. But including an explicit diversity criterion is a more recent trend. The algorithm of [@doan2019attraction] uses a combination of attraction and repulsion mechanisms between good agents and poor agents to ensure diversity in a population of agents trained in parallel, and shows improved performance on large continuous action benchmarks and their sparse reward variants. 
But diversity is defined in the space of policy performance, so the drive towards novel behaviors could be strengthened. The algorithm of [@jung2020population] is an instance of population-based training where the parameters of the best actor are softly distilled into the rest of the population. To prevent the whole population from collapsing into a single agent, a simple diversity criterion is enforced so as to maintain a minimum distance between all agents. The algorithm shows good performance over a large set of continuous action benchmarks, including “delayed” variants where the reward is obtained only every $K$ time steps. However, the diversity criterion it uses is far from guaranteeing efficient exploration of the outcome space, particularly in the absence of reward, and it seems that the algorithm mostly benefits from the higher stability of population-based training. Going further, the algorithm of [@parker2020effective] proposes a population-wide diversity criterion which consists in maximizing the volume spanned by the parameters of the agents in a latent space. This criterion better limits redundancy between the considered agents. Like our work, all these methods use a population of deep RL agents and explicitly look for diversity among these agents. However, none of them addresses deceptive reward environments such as the mazes we consider in our work. Furthermore, none of them clearly separates the quality and diversity components, nor searches for diversity in the outcome space as QD-RL does. #### Separated exploration and exploitation mechanisms One extreme case of the separation between exploration and exploitation is “exploration-only” methods. 
The efficiency of this approach was first put forward within the evolutionary optimization literature [@lehman2011abandoning; @doncieux2019novelty] and then imported into the reinforcement learning literature with works such as [@eysenbach2018diversity], which gave rise to several recent follow-ups [@pong2019skew; @lee2019efficient; @islam2019marginalized]. These methods have proven useful in the sparse reward case, but they are inherently limited when some reward signal can be used and maximized during exploration. A second approach is sequential combination. Similarly to us, the algorithm of [@colas2018gep] combines a diversity-seeking component, namely [*Goal Exploration Processes*]{} [@forestier2017intrinsically], with a deep RL algorithm [@lillicrap2015continuous], and shows that combining them sequentially can overcome a deceptive gradient issue. This sequential combination of exploration-then-exploitation is also present in [@ecoffet2019go], which explores first and then memorizes the sequence to look for a high reward policy in games, and in [@matheron2020pbcs], which does the same in a continuous action domain. Again, this approach is limited when the reward signal can help drive the exploration process towards a satisfactory solution. Removing the sequentiality limitation, some approaches use a population of agents with various exploration rates [@badia2020agent57]. Along a different line, the algorithm of [@pourchot2018cem] combines an evolutionary algorithm [@deboer05tutorial] and a deep RL algorithm [@fujimoto2018addressing] in such a way that each component takes the lead when it is the most appropriate in the current situation. Doing so, it benefits from the better sample efficiency of deep RL and from the higher stability of evolutionary methods. But the evolutionary part is not truly a diversity-seeking component and, being still an evolutionary method, it is not as sample efficient as its deep RL counterpart. 
A common feature between that approach and our work is that the reward-seeking agents benefit from the findings of the other agents by sharing their replay buffer. Closer to our quality-diversity inspired approach, [@conti2017improving] proposes algorithms mixing novelty search with evolution strategies. But, as outlined in [@colas2020scaling], these approaches are not sample efficient, and the diversity and environment reward functions are mixed in a less efficient way. The most closely related work w.r.t. ours is [@colas2020scaling]. That algorithm also optimizes both diversity and reward efficiency, using an archive and two ES populations. Instead of using a Pareto front, it follows an approach where the outcome space is split into cells that the algorithm has to cover. Using such a distributional ES approach has been shown to be critically more efficient than population-based GA methods [@salimans2017evolution], but our results show that it is still less sample efficient than off-policy deep RL methods, as it does not leverage direct access to the policy gradient. QD-RL ===== We present QD-RL, a quality-diversity optimization method designed to address hard exploration problems where sample efficiency matters. As depicted in Fig. \[fig:gen\_schema\], QD-RL optimizes a population of agents for both environment reward and diversity using off-policy policy gradient methods, which are known to be more sample efficient than traditional genetic algorithms or evolution strategies [@salimans2017evolution; @such2017deep]. In this study, we chose to rely on the TD3 agent (see Supplementary Section \[sec:td3\]), but any other off-policy agent, such as that of [@haarnoja2018soft], could be used instead. With respect to the standard MDP framework, QD-RL introduces an extra outcome space $O$ and a behavior characterization function $\bfunc: \mathcal{S} \rightarrow \mathcal{O}$ that extracts the outcome $o$ for a state $s$. The outcome of a behavior characterizes what matters about this behavior. 
As it often corresponds to what is needed to determine whether the behavior was successful or not, this outcome space can be equivalent to a goal space such as those introduced in [@schaul2015universal; @andrychowicz2017hindsight]. For example, when working in a maze environment, the outcome may represent the coordinates $(x, y)$ at the end of the trajectory of the agent, which may also be its goal. However, in contrast to these approaches, we do not condition the policy on the outcome $o$. In this work, the behavior characterization function is given, as is also the case in [@schaul2015universal; @andrychowicz2017hindsight], and we consider outcomes computed as a function of a single state. The more general case where it is learned or computed as a function of the whole trajectory is left for future work. Like any QD system, QD-RL manages a population of policies through an archive which contains all previously trained actors. The first generation contains a population of $N$ neural actors $\pi_{\theta_i}: \mathcal{S} \rightarrow \mathcal{A}$ with parameters $\theta_i, i \in [1, N ]$. While all these actors share the same neural architecture, their weights are initialized differently. At each iteration, a selection mechanism based on the Pareto front selects the $N$ best actors from the archive containing all past agents, according to two criteria: the environment reward and a measure of the novelty of the actor. To better stick to the QD framework, hereafter the former is called “quality” and the latter “diversity”. If the Pareto front contains fewer than $N$ actors, the whole Pareto front is selected and computed again over the remaining actors. Additional actors are sampled from the new Pareto front, and so on until $N$ actors are sampled. These selected actors form a new generation. Then, half of the selected actors are trained to optimize quality while the others optimize diversity. These actors with updated weights are then evaluated in the environment and added to the archive. 
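The iterated Pareto-front selection described above can be sketched in a few lines of Python (an illustration of the selection rule only; the function names and the flat (quality, diversity) score representation are ours):

```python
from typing import List, Tuple

Score = Tuple[float, float]  # (quality, diversity) of an archived actor

def dominated(a: Score, b: Score) -> bool:
    """True if b Pareto-dominates a on both quality and diversity."""
    return b[0] >= a[0] and b[1] >= a[1] and b != a

def pareto_front(scores: List[Score]) -> List[Score]:
    """Non-dominated subset of the given scores."""
    return [s for s in scores if not any(dominated(s, o) for o in scores)]

def select_population(scores: List[Score], n: int) -> List[Score]:
    """Peel successive Pareto fronts off the archive until n actors are kept."""
    remaining, selected = list(scores), []
    while len(selected) < n and remaining:
        front = pareto_front(remaining)
        selected += front[: n - len(selected)]
        for s in front:
            remaining.remove(s)
    return selected

scores = [(1.0, 0.2), (0.5, 0.9), (0.4, 0.1), (0.9, 0.3), (0.2, 0.2)]
# First front: (1.0, 0.2), (0.5, 0.9) and (0.9, 0.3); the rest are dominated,
# so selecting 4 actors requires peeling a second front.
population = select_population(scores, 4)
```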
In more detail, training is performed as follows. ![Architecture of the QD-RL algorithm. []{data-label="fig:gen_schema"}](images/QDRL-3.png){width="89.00000%"} First, as in any standard RL method, QD-RL optimizes actors so as to maximize quality. More formally, it updates the actor weights to maximize the objective function $J_{\text{quality}}(\theta_i) = \Esp_{\tau_i}{\left[ \sum_t \gamma^t r_t^{Q} \right] }$ where $\tau_i$ is a trajectory obtained by following the policy $\pi_{\theta_i}$ and $r_t^{Q}=r$ is the environment reward function. Second, QD-RL also optimizes actors to increase diversity in the outcome space. To evaluate the diversity of an outcome $o$, we search for the $k$ nearest neighbors of outcome $o$ in the archive and compute a novelty score $NS(o, A)$ as the mean of the squared Euclidean distances between $o$ and its $k$ neighbors, as in [@lehman2011abandoning; @conti2017improving]. More formally, QD-RL maximizes the objective function $J_{\text{diversity}}(\theta_i) = \Esp_{\tau_i} \left[ \sum_t \gamma^t NS(o_t, A) \right]$ where $o_t = \bfunc(s_t)$ is the outcome discovered by policy $i$ at time step $t$, $NS()$ is the novelty score function and $A$ is the archive containing already discovered outcomes. The $J_{\text{quality}}$ and $J_{\text{diversity}}$ functions have the same structure, as we can rewrite $J_{\text{diversity}}(\theta_i) = \Esp_{\tau_i}{\left[ \sum_t \gamma^t r^D_t \right] }$, where $r^D = NS(o, A)$ is a non-stationary reward function corresponding to novelty scores. Thus all the mechanisms introduced in the deep RL literature to optimize $J_{\text{quality}}$ can also be applied to optimize $J_{\text{diversity}}$. Notably, we can introduce Q-value functions $Q_Q$ and $Q_D$ dealing with quality and diversity, and we can define two randomly initialized critic neural networks $Q_{\theta^Q}$ and $Q_{\theta^D}$, with parameters $\theta^Q$ and $\theta^D$, to approximate these functions. These critics are shared by all the trained actors. 
Therefore, they capture the average population performance rather than the performance of individual actors, which has both an information sharing effect and a smoothing effect. We found that training individual critics is harder in practice and left this analysis for future work. The quality and diversity updates of the actor weights are performed according to Equations (\[eq:global\_Q\_update\]) and (\[eq:global\_pi\_update\]). An update consists of sampling a batch of transitions from the replay buffer and optimizing the weights of both critics so as to better estimate quality and diversity. Then, we optimize the parameters of half of the policies so as to maximize $J_{\text{quality}}$ and those of the other half to maximize $J_{\text{diversity}}$. Therefore, the global update can be written $$\label{eq:global_Q_update} \left\{ \begin{array}{ll} \theta^Q \leftarrow \theta^Q - \frac{2\alpha}{N} \nabla_{\theta^Q} \sum\limits_{batch} \sum\limits_{i=1}^{\nicefrac{N}{2}} \left( Q_{\theta^Q}(s_t, a_t) - (r^Q_t + \gamma Q_{\theta^{Q\prime}}(s_{t+1}, \pi_{\theta_i}(s_{t+1}))) \right)^2 \\ \theta^D \leftarrow \theta^D - \frac{2\alpha}{N} \nabla_{\theta^D} \sum\limits_{batch} \sum\limits_{i=\nicefrac{N}{2} + 1}^{N} \left( Q_{\theta^D}(s_t, a_t) - (r^D_t + \gamma Q_{\theta^{D\prime}}(s_{t+1}, \pi_{\theta_i}(s_{t+1}))) \right)^2\\ \end{array} \right.$$ $$\label{eq:global_pi_update} \left\{ \begin{array}{ll} \theta_{i} \leftarrow \theta_{i} + \alpha \nabla_{\theta_i} \sum\limits_{batch} Q_{\theta^Q} (s_t, \pi_{\theta_i}(s_t)), \ \ \forall i \leq \nicefrac{N}{2} \hspace{0.2cm} \textit{// Quality update of half actors}\\ \theta_{i} \leftarrow \theta_{i} + \alpha \nabla_{\theta_i} \sum\limits_{batch} Q_{\theta^D} (s_t, \pi_{\theta_i}(s_t)), \ \ \forall i > \nicefrac{N}{2} \hspace{0.2cm} \textit{// Diversity update of half actors} \end{array} \right.$$ where ${\theta^{Q}}^\prime$ and ${\theta^{D}}^\prime$ correspond to the parameters of the target critic networks. 
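The non-stationary diversity reward $r^D = NS(o, A)$ entering the second critic update can be sketched with a brute-force nearest-neighbour search (an illustrative sketch, not the authors' implementation; outcomes are taken as 2-D points as in the maze tasks):

```python
from typing import List, Sequence

def novelty_score(o: Sequence[float], archive: List[Sequence[float]], k: int = 3) -> float:
    """Mean of the squared Euclidean distances from outcome o to its
    k nearest neighbours in the archive.  Recomputed on the fly, since
    the score of a fixed outcome changes as the archive grows."""
    sq_dists = sorted(sum((a - b) ** 2 for a, b in zip(o, other)) for other in archive)
    nearest = sq_dists[:k]
    return sum(nearest) / len(nearest)

archive = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (5.0, 5.0)]
# An outcome near already-archived ones scores low; a far-away one scores high.
low = novelty_score((0.1, 0.0), archive, k=2)
high = novelty_score((3.0, 3.0), archive, k=2)
assert low < high
```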
To keep notations simple, updates of the extra critic networks introduced in to reduce the value estimation bias do not appear in , but we use them in practice. Once updates have been performed, trajectories are collected in parallel from all policies. These trajectories are stored into a common replay buffer and the tuple (final outcome $o_{T}$, return, parameters) is stored into the archive. Since the novelty score of an outcome varies through time as the archive grows, instead of storing novelty scores, we store raw outcomes and compute fresh novelty scores every time a batch is sampled from the replay buffer. A short version of the algorithm is presented in Algorithm \[alg:short\]. Other implementation details are presented in Supplementary Section \[sec:details\]. Experiments =========== In this section, we demonstrate the capability of to solve challenging exploration problems. We implement it with the algorithm and refer to this implementation as the algorithm. Hyper-parameters are described in Section \[sec:details\] of the supplementary document. We first analyse each component of and demonstrate their usefulness on a toy example. Then we show that can solve a more challenging control and exploration problem such as navigating the Ant into a large maze with a better sample complexity than its standard evolutionary competitors. [.5]{} ![Experimental environments. Though they may look similar, the environment state and action spaces are two-dimensional, whereas in , the state space has 29 dimensions and the action space 8.[]{data-label="fig:envs"}](images/al-toy_maze_vierge.png "fig:"){width="0.9\linewidth"} [.5]{} ![Experimental environments.
Though they may look similar, the environment state and action spaces are two-dimensional, whereas in , the state space has 29 dimensions and the action space 8.[]{data-label="fig:envs"}](images/antobstacle.png "fig:"){width="0.9\linewidth"} Point Maze: Move a point in a Maze ---------------------------------- We first consider the environment in which the agent controls a 2D material point which must exit a three-corridor maze depicted in  \[fig:toy\_maze\]. The observation corresponds to the agent coordinates $(x_t, y_t) \in [-1,1]^2$ at time $t$. The two continuous actions $(\delta x, \delta y) \in [-0.1,0.1]^2$ correspond to position increments along the $x$ and $y$ axes. The outcome space is the final state $(x_f,y_f)$ of the agent, as in [@conti2017improving]. The initial position of the agent is sampled uniformly in $[-0.1, 0.1]\times[-1, -0.7]$. This zone is located at the bottom right of the maze. The exit area is a square centered at $(x_{\text{goal}}=-0.5, y_{\text{goal}}=0.8)$ of width $0.1$. Once this exit square is reached, the episode ends. The maximum length of an episode is 200 time steps. The reward is computed as $r_t = - (x_t - x_{\text{goal}})^2 - (y_t - y_{\text{goal}})^2$. This reward leads to a deceptive gradient signal: following it would leave the agent stuck at the second wall of the maze, as shown in  \[fig:grad\_maps\] of Supplementary Section \[sec:analysis\_supp\]. In order to exit the maze, the agent must find the right balance between exploitation and exploration, that is, at a certain point it must ignore the policy gradient and only explore the maze. Thus, though this example may look simple due to its low dimension, it remains very challenging for standard deep reinforcement learning agents such as . performs three main operations: (i) it optimizes half of the agents to maximize quality; (ii) it optimizes the other half to maximize diversity; (iii) it uses a quality-diversity Pareto front as a population selection mechanism.
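The point-maze environment described above is simple enough to sketch in a few lines. This is a minimal gym-style sketch under stated assumptions, not the authors' code: walls are omitted, and we take the exit square of width $0.1$ to mean a half-width of $0.05$ around the goal.

```python
import numpy as np

class PointMaze:
    """Minimal sketch of the point maze: 2D position observation,
    clipped position-increment actions, deceptive distance reward,
    episode ends on reaching the exit square or after 200 steps."""

    GOAL = np.array([-0.5, 0.8])

    def reset(self):
        # start zone at the bottom of the maze
        self.pos = np.array([np.random.uniform(-0.1, 0.1),
                             np.random.uniform(-1.0, -0.7)])
        self.t = 0
        return self.pos.copy()

    def step(self, action):
        action = np.clip(action, -0.1, 0.1)              # position increments
        self.pos = np.clip(self.pos + action, -1.0, 1.0)
        self.t += 1
        reward = -np.sum((self.pos - self.GOAL) ** 2)    # deceptive distance reward
        in_exit = np.all(np.abs(self.pos - self.GOAL) <= 0.05)
        done = bool(in_exit or self.t >= 200)
        return self.pos.copy(), reward, done, {}
```

The three operations listed above (quality optimization, diversity optimization, Pareto selection) are exactly what the following ablation study switches on and off in this environment.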
We investigate the impact of each of these components separately through an ablation study. For all experiments, we use 4 actors. Results are aggregated in  \[fig:aggcirclezone\]. First, we measure performance when training the 4 actors to maximize quality only. We call the resulting agent , but this is simply a multi-actor . As depicted in  \[fig:coverage\_map\_pt\_maze\] of Supplementary Section \[sec:analysis\_supp\], the population finds a way to the second maze wall but is stuck there due to the deceptive nature of the gradient. This experiment shows clearly enough that a quality-only strategy has no chance of solving hard exploration problems with a deceptive reward signal such as or . Then, we evaluate the agent performance when training the 4 actors to maximize diversity only. We call the resulting agent . We show that sometimes finds how to get past the second wall, but with a large variance and without finding the optimal trajectory. We then consider a + agent that optimizes only for diversity but performs agent selection from the archive with a Pareto front, that is, it selects 4 actors for the next generation based on both their quality and diversity, but without optimizing the former. Interestingly, adding the Pareto front selection mechanism significantly improves performance and stability. Finally, optimizes half of the actors for quality, the other half for diversity, and it selects them with the Pareto front mechanism. We observe that outperforms all ablations, even if the improvement over + is smaller, which means that optimizing for quality is less critical in this environment, as good enough solutions are found just by maximizing diversity.  \[tab:recap\] summarises all the ablations we performed. [.5]{} ![Learning curves of versus ablations on and several other agents on .
The performance corresponds to the highest return on and to the highest negative distance to the goal area on .[]{data-label="fig:results"}](images/agg_clean_circle_maze.pdf "fig:"){width="95.00000%"} [.5]{} ![Learning curves of versus ablations on and several other agents on . The performance corresponds to the highest return on and to the highest negative distance to the goal area on .[]{data-label="fig:results"}](images/agg_ant_maze_new_colors.pdf "fig:"){width="95.00000%"}

       Opt. Quality   Opt. Diversity   Pareto selection   Episode return ($\pm$ std)
  ---- -------------- ---------------- ------------------ ----------------------------
       $-29 \: (\pm 1)$
  + X  $-35 \:(\pm 3)$
  X X  $-111 \:(\pm 59)$
  + X  $-128 \:(\pm 0)$
  X X  $-128 \:(\pm 0)$

  : Summary of compared ablations.\[tab:recap\]

Ant Maze: Control an articulated Ant to solve a Maze ---------------------------------------------------- We then test on a challenging environment modified from OpenAI Gym [@brockman2016openai] based on and also used in [@colas2020scaling; @frans2017meta]. We refer to this environment as the environment. In , a four-legged “ant” robot has to reach a goal zone located at $[35, -25]$ which corresponds to the lower right part of the maze (colored in green in ). The initial position of the ant is sampled from a small circle of radius $0.1$ around the initial point $[-25, -25]$ situated in the extreme bottom right of the maze. Maze walls are organized so that following the gradient of distance to the goal drives the ant into a dead-end. As in the , the reward is expressed as minus the distance between the center of gravity of the ant and the center of the goal zone, thus leading to a strongly deceptive gradient. This environment is more complex than as the agent must learn to control a body with 8 degrees of freedom in all directions to explore the maze and solve it. Therefore, this problem is much harder than the standard gym environment in which the ant only learns to go straight forward.
The observation space contains the positions, angles, velocities and angular velocities of most ant articulations and of the center of gravity, and has 29 dimensions. The action space is $[-1,1]^8$ where an action corresponds to the choice of 8 continuous torque intensities to apply to the 8 ant articulations. The episodes have a fixed length of 3000 time steps. As previously, the outcome is computed as the final position $(x_f,y_f)$ of the center of gravity of the ant. We compare the performance of on this benchmark to 4 state-of-the-art methods: , , and . While and -optimize only for diversity, and -optimizes only for quality, , and -- optimize for both. To ensure fair comparison, we did not implement our own versions of these algorithms but reused results from the paper [@colas2020scaling]. We also ensured that the environment we used was rigorously the same. All the baselines were run for 5 seeds. In , each seed corresponds to 20 actors distributed on 20 cores. As in [@colas2020scaling], we compute the average and standard deviation between the seeds of the minimum distance to the goal reached at the end of an episode by one of the agents and report the results in  \[fig:qdrlantmaze\]. As explained in [@colas2020scaling], and -obtain a score around -26, which means that they get stuck in the dead-end, similarly to . By contrast, all other algorithms manage to avoid it. More importantly, the algorithm achieves a similar score to these better exploring agents, but with more than **15 times fewer samples** than its evolutionary competitors, as shown in Table \[tab:recapant\].
            **Final Perf.** ($\pm$ std)   **Sampled steps**   **Steps to -10**   **Ratio to QD-TD3**
  --------- ----------------------------- ------------------- ------------------ ---------------------
  (Ours)    $-10 \: (\pm 10.1)$           6e8                 5e8                1
            $-9 \: (\pm 5.6)$             1e10                8e9                16
            $-27 \: (\pm 0)$              1e10                $\infty$           $\infty$
            $-10 \: (\pm 11)$             1e10                8e9                16
  -         $-10 \: (\pm 6)$              1e10                9e9                18
  -         $-11.7 \: (\pm 6.9)$          1e10                9e9                18
  -         $-25.8 \: (\pm 0)$            1e10                $\infty$           $\infty$
            $-26 \: (\pm 2)$              3e8                 $\infty$           $\infty$

  : Summary of compared algorithms. The mean performance is computed as the average over $5$ seeds of the minimum distance to the goal reached at the end of an episode by the population. “Steps to -10” corresponds to the number of steps needed to reach an average performance of -10. - [ee]{} stands for - -.\[tab:recapant\]

These results show that leveraged the sample efficiency brought by off-policy policy gradient to learn to efficiently explore the maze. We also emphasize the low cost of compared to its evolutionary counterparts. To solve the , [**requires only 2 days of training on 20 cores**]{} with no while evolutionary algorithms are usually run on much larger infrastructures. For instance, needs to sample 10,000 different sets of parameters per iteration and evaluate them all to compute a diversity gradient with [@colas2020scaling]. Besides, the failure of the ablation into unsurprisingly shows that a pure RL approach without a diversity component fails in these deceptive gradient benchmarks. Conclusion ========== In this paper, we proposed a novel way to deal with the exploration-exploitation trade-off by combining a reward-seeking component, a diversity-seeking component and a selection component inspired from the Quality-Diversity approach. Crucially, we showed that quality and diversity could be optimized with off-policy reinforcement learning algorithms, resulting in a significantly improved sample efficiency.
We showed experimentally the effectiveness of the resulting framework, which can solve in two days with 20 problems which were previously out of reach without a much larger infrastructure. Key components of are selection through a Pareto front and the search for diversity in an outcome space. Admittedly, the outcome space needed to compute the diversity reward is hard coded. There are attempts to automatically obtain the outcome space through unsupervised learning methods [@pere2018unsupervised; @paolo2019unsupervised], but defining such a space is often a trivial decision which helps a lot, and can alleviate the need to carefully design reward functions. In the future, we first want to address the case where the outcome depends on the whole trajectory. Next we plan to further study the versatility of our approach to exploration compared to other deep reinforcement learning exploration approaches. Besides, we intend to show that our approach could be extended to problems where the environment reward function can itself be decomposed into several loosely dependent components, such as standing, moving forward and manipulating objects for a humanoid or solving multiagent reinforcement learning problems. In such environments, we could replace the maximization of the sum of reward contributions with a multi-criteria selection from a Pareto front where diversity would be only one of the considered criteria. Broader Impact {#broader-impact .unnumbered} ============== Our paper presents a novel approach to the combination of diversity-driven exploration and modern reinforcement learning techniques. It results in more stable learning with respect to standard reinforcement learning, and more sample efficient learning with respect to standard evolutionary approaches to diversity. We believe this has a positive impact in making reinforcement learning techniques more accessible and feasible towards real world applications. 
Besides, our work may help cast a much-needed bridge between the reinforcement learning and evolutionary optimization research communities. Finally, by releasing our code, we believe that we help efforts in reproducible science and allow the wider community to build upon and extend our work in the future.
--- abstract: 'An improved version of the “optical bar” intracavity readout scheme for gravitational-wave antennae is considered. We propose to call this scheme “optical lever” because it can provide a significant gain in the signal displacement of the local mirror, similar to the gain which can be obtained using an ordinary mechanical lever with unequal arms. In this scheme the displacement of the local mirror can be close to the signal displacement of the end mirrors of a hypothetical gravitational-wave antenna with arm lengths equal to the half-wavelength of the gravitational wave.' author: - | F.Ya.Khalili[^1]\ [*Dept. of Physics, Moscow State University*]{},\ [*Moscow 199899, Russia*]{} title: ' The “optical lever” intracavity readout scheme for gravitational-wave antennae ' --- Introduction ============ All contemporary large-scale gravitational-wave antennae are based on a common principle: they convert the phase shift of the optical pumping field into the intensity modulation of the output light beam registered by a photodetector [@Abramovici1992]. This principle allows one to obtain the sensitivity necessary to detect gravitational waves from astrophysical sources. However, its use in the next generations of gravitational-wave antennae, where substantially higher sensitivity is required, encounters serious problems. An excessively high value of the optical pumping power, which also depends sharply on the required sensitivity, is likely to be the most important one. For example, at stage II of the LIGO project the light power circulating in the interferometer arms will be increased to about 1 MWatt, in comparison with about 10 KWatt currently used [@WhitePaper1999]. In particular, such high values of the optical power can produce undesirable non-linear effects in the large-scale Fabry-Perot cavities [@BSV_Instab2001]. This dependence of the pumping power on the sensitivity can be explained easily using the Heisenberg uncertainty relation.
Indeed, in order to detect a displacement $\Delta x$ of a test mass $M$ it is necessary to provide a perturbation of its momentum $ \Delta p \ge \hbar/2\Delta x$. The only source of this perturbation in the interferometric gravitational-wave antennae is the uncertainty of the optical pumping energy $\Delta\mathcal{E}$. Hence, the following condition has to be fulfilled: $\Delta\mathcal{E}\propto (\Delta x)^{-1}$. If the pumping field is in a coherent quantum state then $\Delta\mathcal{E}\propto\sqrt\mathcal{E}$, and therefore $\mathcal{E}\propto (\Delta x)^{-2}$. Rigorous analysis (see [@Amaldi1999]) shows that the pumping energy stored in the interferometer has to be larger than $$\label{E_SQL} \mathcal{E} = \frac{ML^2\Omega^2\Delta\Omega}{4\omega_p\xi^2}\,,$$ where $\Omega$ is the signal frequency, $\Delta\Omega<\Omega$ is the bandwidth where the necessary sensitivity is provided, $\omega_p$ is the pumping frequency, $L=c\tau$ is the length of the interferometer arms, and $\xi<1$ is the ratio of the amplitude of the signal which can be detected to the amplitude corresponding to the Standard Quantum Limit. This problem can be alleviated by using an optical pumping field in a squeezed quantum state [@Caves1981], but it cannot be solved completely, because only modest values of the squeezing factor have been obtained experimentally so far. Estimates show that usage of squeezed states allows one to decrease $\xi$ by a factor of $\simeq 3$ for the same value of the pumping energy (see [@KLMTV2002]), and the energy still remains proportional to $\xi^{-2}$. In the article [@NonLin1996] a new principle of [*intracavity*]{} readout for gravitational-wave antennae has been considered. It has been proposed to directly register the redistribution of the optical field [*inside*]{} the optical cavities using a Quantum Non-Demolition (QND) measurement instead of monitoring the output light beam.
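To get a feeling for the scaling in Eq. (\[E_SQL\]), one can evaluate it numerically. The sketch below uses rough, illustrative LIGO-like numbers chosen by us (mirror mass, arm length, signal band), not values quoted in this article; only the $\mathcal{E}\propto\xi^{-2}$ scaling matters here.

```python
import math

def pump_energy(M, L, Omega, dOmega, omega_p, xi):
    """Required stored optical energy, Eq. (E_SQL):
    E = M L^2 Omega^2 dOmega / (4 omega_p xi^2)."""
    return M * L**2 * Omega**2 * dOmega / (4 * omega_p * xi**2)

# Illustrative LIGO-like parameters (our assumptions):
c = 3.0e8
omega_p = 2 * math.pi * c / 1.064e-6          # Nd:YAG pumping frequency
E = pump_energy(M=10.0, L=4.0e3, Omega=2 * math.pi * 100,
                dOmega=math.pi * 100, omega_p=omega_p, xi=1.0)
P_circ = E * c / (2 * 4.0e3)                  # corresponding circulating power
```

Halving $\xi$ quadruples the required energy; reaching $\xi\sim 10^{-1}$ costs two orders of magnitude in stored energy, which is what motivates the intracavity QND measurement just introduced.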
The main advantage of such a measurement is that in this case a non-classical optical field is created by the measurement process automatically. Therefore, the sensitivity of these schemes does not depend directly on the circulating power and can be improved by increasing the precision of the intracavity measurement device. The only fundamental limitation in this case is the condition $$\frac{\Delta x}L \gtrsim \frac{\Omega}{\omega_p N}\,,$$ where $N$ is the number of optical quanta in the antenna. In the articles [@OptBar1997; @SymPhot1998] two possible realizations of this principle have been proposed and analyzed. Both of them are based on the pondermotive QND measurement of the optical energy proposed in the article [@JETP1977]. In these schemes the displacement of the end mirrors of the gravitational-wave antenna caused by the gravitational wave produces a redistribution of the optical energy between the two arms of the interferometer. This redistribution, in turn, produces a variation of the electromagnetic pressure on some additional local mirror (or mirrors). This variation can be detected by a measurement device which monitors the position of the local mirror(s) relative to a reference mass placed outside the pumping field (for example, a small-scale optical interferometric meter can be used as such a meter). The optical pumping field works here as a passive medium which transfers the signal displacement of the end mirrors to the displacement of the local one(s) and, at the same time, transfers the perturbation of the local mirror(s) due to the measurement back to the end mirrors. In this article we consider an improved version of the “optical bar” scheme considered in the article [@OptBar1997]. We propose to call this scheme “optical lever” because it can provide a gain in the displacement of the local mirror similar to the gain which can be obtained using an ordinary mechanical lever with unequal arms. This scheme is discussed in the section \[sec:OptLeverL\].
In the section \[sec:OptLeverX\] we analyse the instability which can exist in both the “optical bar” and “optical lever” schemes (namely, in the so-called X-topologies of these schemes) and which was not mentioned in the article [@OptBar1997]. We suppose in this article, for simplicity, that all optical elements of the scheme are ideal. This means that the reflectivities of the end mirrors are equal to unity, and all internal elements have no losses. We presume that the optical energy has been pumped into the interferometer using a very small transparency of some of the end mirrors, and that at the time scale of the gravitational-wave signal duration the scheme operates as a conservative one. It has been shown in the article [@OptBar1997] that losses in the optical elements limit the sensitivity at the level $$\xi \gtrsim \frac1{\sqrt{\Omega\tau_\mathrm{opt}^*}}$$ where $\tau_\mathrm{opt}^*$ is the optical relaxation time. Taking into account that the value of $\tau_\mathrm{opt}^*$ can be as high as $\gtrsim 1\,\mathrm{s}$, one can conclude that the optical losses do not affect the sensitivity if $\xi\gtrsim 10^{-1}$.
The optical lever {#sec:OptLeverL} ================= [Fig. \[fig:OptLeverL\]: schematic of the L-topology scheme; the end mirror displacements $x$, the local mirror displacement $y$ and the local position meter are indicated.] One of the possible “optical lever” scheme topologies (the L-topology) is presented in Fig.\[fig:OptLeverL\] (another variant, the X-topology, is considered in the next section). It differs from the L-topology of the “optical bar” scheme [@OptBar1997] by two additional mirrors and only. These mirrors together with the end mirrors and form two Fabry-Perot cavities with the same initial lengths $L=c\tau$, coupled by means of the central mirror with small transmittance $T_C$. Exactly as in the case of the “optical bar” scheme, due to this coupling the eigenmodes of such a system form a set of doublets with frequencies separated by the value $\Omega_B$, which is proportional to $T_C$ and can be made close to the signal frequency $\Omega$. Let the distances between the mirrors be adjusted in such a way that the Fabry-Perot cavities and are tuned in resonance with the upper frequency of one of the doublets, and the additional Fabry-Perot cavities and are tuned in antiresonance with this frequency.
It is supposed here that (i) the distances $l$ between the mirrors and the coupling mirror are small enough to neglect values of the order of $\Omega l/c$, and that (ii) it is possible to neglect the motion of these mirrors. For example, they can be attached rigidly to the platform where the local position meter is situated. Let only the upper frequency of this doublet be excited initially. In this case most of the optical energy is concentrated in the cavities and and distributed evenly between them. A small differential change $x$ in the cavities' optical lengths will redistribute the optical energy between the arms and hence will create a difference in the pondermotive forces acting on the mirrors. In other words, an optical pondermotive rigidity exists in such a scheme. In the article [@OptBar1997] the analogy with two “optical springs”, one of which was situated between the mirrors and and another one (L-shaped) between the mirrors and , has been used. It has been shown in that article that if the optical energy exceeded the threshold value of $$\label{E_OB} \mathcal{E} \sim \frac{ML^2\Omega^3}{\omega_p} \,,$$ then these springs became rigid enough to transfer the displacement $x$ of the mirrors , to the same displacement $y$ of the mirror .
The mechanical degrees of freedom equations of motion in spectral representation have the form (we omit here some very lengthy but rather straightforward calculations devoted to excluding variables for electromagnetic degrees of freedom and reducing the full equations set for the system to mechanical equations only): \[MechEq\] $$\begin{aligned} \left[-2M_x\Omega^2 + K_{xx}(\Omega)\right]x(\Omega) &= K_{xy}(\Omega)y(\Omega) + F_\mathrm{grav}(\Omega) \,, \\ \left[-M_y\Omega^2 + K_{yy}(\Omega)\right]y(\Omega) &= K_{xy}(\Omega)x(\Omega) + F_\mathrm{fluct}(\Omega) \,, \end{aligned}$$ Here $M_x$ is the mass of the mirrors , $M_y$ is the mass of the mirror , $x$ is the displacement of the mirrors , $y$ is the displacement of the mirror , $F_\mathrm{grav}(\Omega)$ is the signal force acting on the mirrors , and $F_\mathrm{fluct}$ is fluctuational back action force produced by the device which monitors variable $y$. We consider here only differential motion of the mirrors , when their displacements have the same absolute value and the directions shown in Fig.\[fig:OptLeverL\]. This motion corresponds to the gravitational wave with optimal polarization. It has been shown in the article [@OptBar1997] that the symmetric motion of the mirrors (when both mirrors approach to or move from the mirror ) did not coupled with the degrees of freedom $x$ and $y$ and could be excluded from the consideration. 
The factors $K_{xx},K_{yy},K_{xy},K_{yx}$ which form the matrix of the pondermotive rigidities are equal to \[K\_OptLeverL\] $$\begin{aligned} K_{xx}(\Omega) &= \frac{2\omega_p\mathcal{E}}{c^2\tau\cos^2\Omega\tau}\, \frac{\tan\Omega_B\tau}{\tan^2\Omega_B\tau-\tan^2\Omega\tau} \,, \\ K_{yy}(\Omega) &= \frac{2\omega_p\mathcal{E}}{c^2\tau\digamma^2}\, \frac{\tan\Omega_B\tau}{\tan^2\Omega_B\tau-\tan^2\Omega\tau} \,, \\ K_{xy}(\Omega) &=\frac{2\omega_p\mathcal{E}} {c^2\tau\digamma\cos\Omega\tau} \, \frac{\tan\Omega_B\tau}{\tan^2\Omega_B\tau-\tan^2\Omega\tau} \,, \end{aligned}$$ where $$\label{R} \digamma = \frac{1+R}{1-R} \approx \frac{2}{\pi}\,\mathcal{F} \,,$$ $$\label{Omega_B} \Omega_B = \frac1\tau\arctan\left(\frac{\tan\phi}{\digamma}\right)\,,$$ $\phi=\arcsin T_C$ and $R$ is the reflectivity of the mirrors . It has to be noted that these rigidities exactly satisfy the condition $$K_{xx}(\Omega)K_{yy}(\Omega) - K_{xy}^2(\Omega) = 0 \,,$$ [Fig. \[Fig:Lever\]: a mechanical lever with unequal arms; the mass $2M_x$ driven by $F_\mathrm{grav}$ is attached to the short arm, and the mass $M_y$ subject to $F_\mathrm{fluct}$ to the long arm.] Suppose now that $\Omega\tau \ll 1$ (in the case of the contemporary terrestrial gravitational-wave antennae $\tau\lesssim 10^{-5}\,\mathrm{s}$ and $\Omega\lesssim 10^3\,\mathrm{s}^{-1}$, so $\Omega\tau \lesssim 10^{-2})$. In this case we obtain that $$\label{K_xx_short} K_{xx}(\Omega) = \frac{2\omega_p\mathcal{E}}{L^2}\, \frac{\Omega_B}{\Omega_B^2-\Omega^2} \,,$$ and $$\label{K_lever} K_{xx}(\Omega) = \digamma K_{xy} = \digamma^2 K_{yy} \,.$$ There is a very simple mechanical model which can also be described by the equations (\[MechEq\],\[K\_lever\]), putting aside for a while the particular spectral dependence (\[K\_xx\_short\]).
This is an ordinary mechanical lever with an arm lengths ratio of $\digamma$ (see Fig.\[Fig:Lever\]). The rigidities $K_{xx}, K_{yy}$, and $K_{xy}$ in this case are proportional to the bending rigidity of the lever bar. It is evident that if the motion is sufficiently slow, and therefore it is possible to neglect bending, then the $y$-arm tip displacement will be $\digamma$ times greater than the $x$-arm tip displacement. Consequently, if the observation frequency $\Omega$ is sufficiently small then in the “optical lever” scheme presented in Fig.\[fig:OptLeverL\] the mirror motion will repeat the end mirrors motion with the gain factor $\digamma$. In all other aspects this scheme is similar to the “optical bars” scheme. As follows from the symmetry conditions (\[K\_lever\]), if one replaces in the equations (\[MechEq\]) $y$ by $y/\digamma$, $F_\mathrm{fluct}$ by $\digamma\times F_\mathrm{fluct}$, $M_y$ by $\digamma^2\times M_y$ and then replaces all rigidities by their values corresponding to the “optical bars” scheme (with $\digamma=1$), then these equations still remain valid. It means that if in the “optical bars” scheme one (i) replaces the mass $M_y$ by a $\digamma^2$ smaller one; (ii) decreases the back-action noise of the meter by the factor of $\digamma$ and proportionally increases its measurement noise (for example, by decreasing the pumping power in the interferometric position meter by the factor of $\digamma^2$); and (iii) inserts the additional mirrors with reflectivity defined by the equation (\[R\]), then the signal-to-noise ratio (relative to the local meter noises) and the dynamical properties of the scheme will remain unchanged, with the only evident replacement of $y$ by $\digamma y$. Two characteristic regimes of the “optical lever” scheme are similar to the quasistatic and resonant regimes of the “optical bars” scheme described in the article [@OptBar1997], and therefore here we consider them in brief only.
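The rigidity matrix (\[K\_OptLeverL\]) and its degeneracy $K_{xx}K_{yy}-K_{xy}^2=0$ can be checked numerically. In this small sketch the common prefactor $2\omega_p\mathcal{E}/(c^2\tau)$ is normalized to one, since it cancels in the identity, and the parameter values are purely illustrative.

```python
import math

def rigidities(Omega, tau, phi, F, A=1.0):
    """L-topology pondermotive rigidities, Eqs. (K_OptLeverL), with the
    common prefactor A = 2*omega_p*E/(c^2*tau) factored out."""
    Omega_B = math.atan(math.tan(phi) / F) / tau          # Eq. (Omega_B)
    g = math.tan(Omega_B * tau) / (math.tan(Omega_B * tau) ** 2
                                   - math.tan(Omega * tau) ** 2)
    K_xx = A * g / math.cos(Omega * tau) ** 2
    K_yy = A * g / F ** 2
    K_xy = A * g / (F * math.cos(Omega * tau))
    return K_xx, K_yy, K_xy

K_xx, K_yy, K_xy = rigidities(Omega=1.0e3, tau=1.0e-5, phi=0.01, F=100.0)
```

For $\Omega\tau\ll 1$ the same expressions reproduce the lever relations $K_{xx} = \digamma K_{xy} = \digamma^2 K_{yy}$ of Eq. (\[K\_lever\]).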
The characteristic equation of the equations set (\[MechEq\]) is the following: $$\label{CharactEq} \Omega^2\left(\Omega^4 - \Omega_B^2\Omega^2 + \frac{2\omega_p\mathcal{E}\Omega_B}{M_\mathrm{eff}L^2}\right) = 0 \,,$$ where $$M_\mathrm{eff} = \left(\frac1{2M_x} + \frac1{\digamma^2M_y}\right)^{-1} \,.$$ The root $\Omega=0$ of this equation corresponds to the quasistatic regime. If the pumping energy is sufficiently high: $$\begin{aligned} \label{CondForE} K_{xx}(\Omega) &= \frac{2\omega_p\mathcal{E}}{L^2}\, \frac{\Omega_B}{\Omega_B^2-\Omega^2} \gg M_x\Omega^2 \,, & K_{yy}(\Omega) &= \frac{2\omega_p\mathcal{E}}{\digamma^2 L^2}\, \frac{\Omega_B}{\Omega_B^2-\Omega^2} \gg M_y\Omega^2 \,,\end{aligned}$$ then the solution of the equations set (\[MechEq\]) for this regime can be presented as $$y(\Omega) = \digamma x(\Omega) = -\frac{ \digamma F_\mathrm{grav}(\Omega) + \digamma^2F_\mathrm{fluct}(\Omega) }{\Omega^2(2M_x + \digamma^2 M_y)} \,.$$ It is evident that the maximal value of the signal response can be obtained here if $$\label{Masses} \digamma = \sqrt{\frac{2M_x}{M_y}} \,,$$ and in this case we will get $$y(\Omega) = \digamma x(\Omega) = -\frac1{2\Omega^2}\left( \frac{F_\mathrm{grav}(\Omega)}{\sqrt{2M_xM_y}} + \frac{F_\mathrm{fluct}(\Omega)}{M_y} \right) \,.$$ Taking into account that the gravitational-wave signal force is proportional to the mass of the end mirrors, $F_\mathrm{grav}\propto M_x$, we can conclude that in the gravitational-wave experiments this regime can provide a wide-band gain in the signal displacement proportional to $\sqrt{M_x/M_y}$. It is necessary to note that this gain by itself does not allow one to overcome the standard quantum limit because the value of the standard quantum limit for the test mass $M_y$ rises exactly in the same proportion.
But it does allow one to use a less sensitive local position meter, and it does increase the signal-to-noise ratio for miscellaneous noises of non-quantum origin and therefore makes it easier to overcome the standard quantum limit using, for example, variational measurement in the local position meter [@Vyatchanin1998; @DSVM2000]. The other two roots of the equation (\[CharactEq\]) $$\Omega_{1,2}^2 = \frac{\Omega_B^2}2 \pm \sqrt{ \frac{\Omega_B^4}4 - \frac{2\omega_p\mathcal{E}\Omega_B}{M_\mathrm{eff}L^2} }$$ correspond to the more sophisticated resonant regime of the scheme. Placing these two roots evenly in the spectral band of the signal, it is possible to obtain a sensitivity a few times better than the standard quantum limit for a free mass in a relatively wide band, as it has been proposed in the article [@Buonanno2001]. Using the value of the pumping energy $$\mathcal{E} = \frac{M_\mathrm{eff}L^2\Omega_B^3}{8\omega_p} \,,$$ it is possible to implement the second-order-pole test object and obtain a sensitivity substantially better than the standard quantum limit for both a free mass and a harmonic oscillator in a narrow band near the frequency $\Omega_B/\sqrt2$ [@FDRigidity2001]. In both cases the “optical lever” allows one to increase the signal displacement of the local mirror and therefore makes the implementation of the local position meter easier.
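Under the second-order-pole tuning $\mathcal{E} = M_\mathrm{eff}L^2\Omega_B^3/(8\omega_p)$ the two mechanical resonances merge at $\Omega_B/\sqrt2$, which can be verified directly from the characteristic equation. A small sketch with illustrative parameter values (not taken from this article):

```python
import math

def mode_frequencies(E, omega_p, M_eff, L, Omega_B):
    """Nonzero roots of the characteristic equation (CharactEq),
    Omega^4 - Omega_B^2 Omega^2 + 2 omega_p E Omega_B / (M_eff L^2) = 0,
    returned as the pair (Omega_1^2, Omega_2^2)."""
    c0 = 2 * omega_p * E * Omega_B / (M_eff * L ** 2)
    disc = Omega_B ** 4 / 4 - c0
    root = math.sqrt(disc) if disc >= 0 else 0.0   # roots merged at/below zero
    return Omega_B ** 2 / 2 + root, Omega_B ** 2 / 2 - root

# Second-order-pole tuning: E = M_eff L^2 Omega_B^3 / (8 omega_p)
omega_p, M_eff, L, Omega_B = 1.77e15, 10.0, 4.0e3, 1.0e3
E_pole = M_eff * L ** 2 * Omega_B ** 3 / (8 * omega_p)
w1, w2 = mode_frequencies(E_pole, omega_p, M_eff, L, Omega_B)
```

Smaller pumping energies split the two roots apart, which is the wide-band placement of the resonant regime mentioned above.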
X-topologies of the “optical bars” and the “optical lever” schemes {#sec:OptLeverX}
==================================================================

*(Figure \[Fig:OptLeverX\]: the X-topology of the scheme, with the signal displacements $x$ and $y$ of the mirrors marked.)*

In the article [@OptBar1997] two possible topologies of the “optical bars” scheme have been considered: the L-topology discussed in the previous section and the X-topology similar to the Michelson interferometer topology. The latter can also be converted to the “optical lever” scheme using additional mirrors, as shown in Fig.\[Fig:OptLeverX\]. Here the coupling mirror has transmittance $T_C=\sin\phi$. In this topology one optical “spring” exists between one pair of mirrors and another one between the other pair. In the article [@OptBar1997] the L- and X-topologies were considered as identical, with the only difference that in the case of the X-topology the value of $\Omega_B$ was about two times greater, $$\Omega_B = \frac1\tau\arctan\left(\frac{\tan 2\phi}{\digamma}\right)\,.$$ More rigorous analysis shows, however, that this is not the case. 
Indeed, the rigidities which appear in the equations (\[MechEq\]) for the case of the X-topology are equal to \[K\_OptLeverX\] $$\begin{aligned} K_{xx}(\Omega) &= \frac{2\omega_p\mathcal{E}}{c^2\tau\cos^2\Omega\tau}\, \frac{\tan\Omega_B\tau}{\tan^2\Omega_B\tau-\tan^2\Omega\tau} \,, \\ K_{yy}(\Omega) &=\frac{2\omega_p\mathcal{E}(1+\digamma^2\tan^2\Omega\tau)} {c^2\tau\digamma^2}\, \frac{\tan\Omega_B\tau}{\tan^2\Omega_B\tau-\tan^2\Omega\tau} \,, \\ K_{xy}(\Omega) &=\frac{2\omega_p\mathcal{E}} {c^2\tau\digamma\cos\Omega\tau\cos 2\phi} \, \frac{\tan\Omega_B\tau}{\tan^2\Omega_B\tau-\tan^2\Omega\tau} \,, \end{aligned}$$ and $$K_{xx}(\Omega)K_{yy}(\Omega)-K_{xy}^2(\Omega) = - \left(\frac{2\omega_p\mathcal{E}}{c^2\tau\cos\Omega\tau}\right)^2\, \frac{\tan^2\Omega_B\tau}{\tan^2\Omega_B\tau-\tan^2\Omega\tau}\ne 0 \,.$$ This means that the low-frequency mechanical mode, which in the article [@OptBar1997] has been considered as a free-mass mode (see the factor $p^2$ in the equation (C.3) in the above-mentioned article) and which does represent a free mass in the case of the L-topology, has a non-zero rigidity in the case of the X-topology. Moreover, if $\Omega<\Omega_B$ then this rigidity is negative, and therefore an asynchronous instability exists in the system[^2]. The characteristic time for this instability is equal to $$\tau_\mathrm{instab} = \left( \frac{2\omega_p\mathcal{E}\Omega_B}{L^2(2M_x/\digamma^2+M_y)} \right)^{-1/2} \,.$$ Taking into account condition (\[CondForE\]) and supposing that $\Omega\sim\Omega_B$, one can obtain that $$\tau_\mathrm{instab}\Omega \approx \frac1{\digamma\Omega\tau} \,.$$ This value is rather large ($\sim 10^2$ if, say, $\Omega\sim 10^3\,\mathrm{s}^{-1}$ and $\tau\sim 10^{-5}\,\mathrm{s}$) in the case of the pure “optical bars” scheme ($\digamma=1$). Therefore, this instability can be easily damped by the feed-back system in this case. 
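The quoted order of magnitude is easy to reproduce with the sample values from the text (a trivial numerical sanity check, using our variable names):

```python
# tau_instab * Omega ~ 1 / (F * Omega * tau) for the pure "optical bars"
# case, with the text's sample values Omega ~ 1e3 1/s and tau ~ 1e-5 s.
Omega = 1.0e3   # signal frequency, 1/s
tau = 1.0e-5    # single-pass light travel time, s
F = 1.0         # coupling factor (digamma) for pure "optical bars"

tau_instab_Omega = 1.0 / (F * Omega * tau)
print(tau_instab_Omega)  # 100.0, i.e. the instability is slow on the signal timescale
```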
On the other hand, in the case of the “optical lever” scheme $\tau_\mathrm{instab}$ can be $\sim \Omega^{-1}$ if one attempts to use too large a value of $\digamma$. In the article [@Buonanno2002] it has been shown, however, that even such a strong instability can in principle be damped by a feed-back scheme without any loss in the signal-to-noise ratio. Conclusion ========== Properties of the “optical bars” intracavity scheme [@OptBar1997] can be substantially improved by converting the arms of the antenna into Fabry-Perot cavities similar to those used in traditional topologies of gravitational-wave antennae with extracavity measurement. This new “optical lever” scheme allows one to obtain a gain in the signal displacement of the local mirror approximately equal to the finesse of the Fabry-Perot cavities. This gain by itself does not allow one to overcome the standard quantum limit in the wide-band regime. But it allows one to use a less sensitive local position meter and increases the signal-to-noise ratio for miscellaneous noises of non-quantum origin, making it easier to overcome the standard quantum limit using, for example, variational measurement in the local position meter. The value of this gain is limited, in principle, by the formula (\[Omega\_B\]) only. As follows from this formula, $\digamma$ cannot exceed the value $(\Omega_B\tau)^{-1}$. If $\Omega_B \sim \Omega \sim 10^3\,\text{s}^{-1}$ and $\tau \sim 10^{-5}\,\text{s}$ (which corresponds to the arm length of the LIGO and VIRGO antennae) then $\digamma \lesssim 10^2$. If $\tau\sim 10^{-6}\,\text{s}$ (GEO-600 and TAMA) then this limitation is about one order of magnitude less strict, $\digamma \lesssim 10^3$. It is interesting to note that if $\digamma$ is close to its limiting value $(\Omega_B\tau)^{-1} \sim (\Omega\tau)^{-1}$ then the signal displacement of the local mirror is close to the signal displacement of the end mirrors of a hypothetical gravitational-wave antenna with arm lengths equal to the half-wavelength of the gravitational wave. 
Acknowledgments {#acknowledgments .unnumbered} =============== The author thanks V.B.Braginsky, M.L.Gorodetsky, Yu.I.Vorontsov and S.P.Vyatchanin for useful remarks and discussions. This paper was supported in part by the California Institute of Technology, US National Science Foundation, by the Russian Foundation for Basic Research, and by the Russian Ministry of Industry and Science. [10]{} , Science [**256**]{}, 325 (1992). , , 1999, . , Physics Letters A [**287**]{}, 331 (2001). , Energetic quantum limit in large-scale interferometers, in [*Proceedings of Third Edoardo Amaldi Conference*]{}, edited by [Sydney Meshkov]{}, 1999. , Physical Review D [**23**]{}, 1693 (1981). , Physical Review D [**65**]{}, 022002 (2002). , Physics Letters A [**218**]{}, 167 (1996). , Physics Letters A [**232**]{}, 340 (1997). , Physics Letters A [**246**]{}, 485 (1998). , Sov. Phys. JETP [**46**]{}, 705 (1977). , Physics Letters A [**239**]{}, 201 (1998). , Physics Letters A [**278**]{}, 123 (2000). , Physical Review D [**64**]{}, 042006 (2001). , Physics Letters A [**288**]{}, 251 (2001). , Physical Review D [**65**]{}, 042001 (2002). [^1]: farid@mol.phys.msu.su [^2]: In the article [@OptBar1997] another instability has been considered, which has a different origin and depends on the optical relaxation time. The instability we consider here exists even if this relaxation time is equal to infinity.
--- abstract: 'In this paper we continue the development of quantum holonomy theory, which is a candidate for a fundamental theory based on gauge fields and non-commutative geometry. The theory is built around the $\mathbf{QHD}(M)$ algebra, which is generated by parallel transports along flows of vector fields and translation operators on an underlying configuration space of connections, and involves a semi-finite spectral triple with an infinite-dimensional Bott-Dirac operator. Previously we have proven that the square of the Bott-Dirac operator gives the free Hamilton operator of a Yang-Mills theory coupled to a fermionic sector in a flat and local limit. In this paper we show that the Hilbert space representation that forms the backbone of this construction can be extended to include many-particle states.' --- **** On the Fermionic Sector of Quantum Holonomy Theory 6ex Johannes <span style="font-variant:small-caps;">Aastrup</span>$^{a}$[^1] & Jesper Møller <span style="font-variant:small-caps;">Grimstrup</span>$^{b}$[^2]\ 3ex $^{a}\,$*Mathematisches Institut, Universität Hannover,\ Welfengarten 1, D-30167 Hannover, Germany.*\ $^{b}\,$*QHT Gruppen, Copenhagen, Denmark.*\ [*This work is financially supported by Ilyas Khan,\ St. Edmund's College, Cambridge, United Kingdom and by\ Tegnestuen Haukohl & Køppen, Copenhagen, Denmark.*]{} 3ex Introduction ============ In this paper we continue the development of [Quantum Holonomy Theory]{}, which is a candidate for a fundamental theory based on gauge fields and formulated within the framework of non-commutative geometry and spectral triples. The basic idea in Quantum Holonomy Theory is to start with an algebra that encodes the canonical commutation relations of a gauge theory in an integrated and non-local fashion. 
The algebra in question is called the quantum holonomy-diffeomorphisms algebra, denoted $\mathbf{QHD}(M)$, which was first presented in [@Aastrup:2014ppa] and which is generated by parallel transports along flows of vector fields and by translation operators on an underlying configuration space of gauge connections. In [@Aastrup:2015gba] it was demonstrated that this algebra encodes the canonical commutation relations of a gauge theory. Once the $\mathbf{QHD}(M)$ has been identified the question arises whether it has non-trivial Hilbert space representations. This question was answered in the affirmative in [@Aastrup:2017vrm] where we proved that separable and strongly continuous Hilbert space representations of the $\mathbf{QHD}(M)$ exist in any dimensions. A key feature of these Hilbert space representations is that they are non-local. They are labelled by a scale $\tau$, which we tentatively interpret as the Planck scale and which essentially serves as a UV-regulator by suppressing modes in the ultra-violet. This UV-suppression does not break any spatial symmetries, i.e. these representations are isometric. In [@Aastrup:2017atr] we constructed an infinite-dimensional Bott-Dirac operator that interacts with an algebra generated by holonomy-diffeomorphisms alone, denoted by $\mathbf{HD}(M)$, and proved that this Bott-Dirac operator together with the aforementioned Hilbert space representation forms a semi-finite spectral triple over a configuration space of connections. In that paper we also demonstrated that the square of the Bott-Dirac operator coincides in a local and flat limit with the free Hamilton operator of a gauge field coupled to a fermionic sector, a result which opens the door to an interpretation of quantum holonomy theory in terms of a quantum field theory on a curved background. In this paper we continue the analysis of these Hilbert space representations. 
One feature of the Bott-Dirac operator is that it naturally introduces the CAR algebra into the construction via an infinite-dimensional Clifford algebra. This CAR algebra has a natural interpretation in terms of a fermionic sector due to the aforementioned result that the square of the Bott-Dirac operator includes the Hamilton operator of a free fermion. One drawback of the Hilbert space representation constructed in [@Aastrup:2017vrm] is that it only involves what amounts to one-particle states. In other words, the Hilbert space representation does not act on the CAR algebra itself. In this paper we construct such a Hilbert space representation of the $\mathbf{QHD}(M)$ algebra. The result that such a representation exists solidifies the interpretation that quantum holonomy theory should be understood as a quantum theory of gauge fields coupled to fermions.\ This paper is organised as follows: We begin by introducing the $\mathbf{HD}(M)$ and $\mathbf{QHD}(M)$ algebras in section 2 and the infinite-dimensional Bott-Dirac operator in section 3. We then review the Hilbert space representation constructed in [@Aastrup:2017vrm] in section 4. Finally we construct in section 5 a new Hilbert space representation where the $\mathbf{QHD}(M)$ algebra acts on the Fock space. We end with a discussion in section 6. The $\mathbf{HD}(M)$ and $\mathbf{QHD}(M)$ algebras {#sektion2} =================================================== In this section we introduce the algebras $\mathbf{HD}(M)$ and $\mathbf{QHD}(M)$, which are generated by parallel transports along flows of vector fields and, in the latter case, also by translation operators on an underlying configuration space of connections. The $\mathbf{HD}(M)$ algebra was first defined in [@Aastrup:2012vq; @AGnew] and the $\mathbf{QHD}(M)$ algebra in [@Aastrup:2014ppa]. 
In the following we shall define these algebras in a local and a global version.\ Let $M$ be a compact manifold and let $\ca$ be a configuration space of gauge connections that takes values in the Lie algebra of a compact gauge group $G$. A holonomy-diffeomorphism $e^X\in \mathbf{HD}(M)$ is a parallel transport along the flow $t\to \exp_t(X)$ of a vector field $X$. To see how this works we first let $\gamma$ be the path $$\gamma (t)=\exp_{t} (X) (m)$$ running from $m$ to $m'=\exp_1 (X)(m)$. Given a connection $\nabla$ that takes values in an $n$-dimensional representation of the Lie algebra $\mathfrak{g}$ of $G$ we then define a map $$e^X_\nabla :L^2 (M )\otimes \mathbb{C}^n \to L^2 (M )\otimes \mathbb{C}^n$$ via the holonomy along the flow of $X$ $$(e^X_\nabla \xi )(m')= \hbox{Hol}(\gamma, \nabla) \xi (m) , \label{chopin1}$$ where $\xi\in L^2(M,\mathbb{C}^n)$ and where $\hbox{Hol}(\gamma, \nabla)$ denotes the holonomy of $\nabla$ along $\gamma$. This map gives rise to an operator valued function on the configuration space $\ca$ of $G$-connections via $$\ca \ni \nabla \to e^X_\nabla , {\nonumber}$$ which we denote by $e^X$ and which we call a holonomy-diffeomorphism[^3]. For a function $f\in C^\infty (M)$ we get another operator valued function $fe^X$ on $\ca$. 
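The key ingredient in (\[chopin1\]) is the holonomy $\hbox{Hol}(\gamma,\nabla)$. For intuition, here is a minimal numerical sketch (our illustration, not from the paper) that approximates the path-ordered transport for a matrix-valued connection by first-order steps, under the sign convention $\nabla = d + A$, and checks it against the exact abelian answer:

```python
import numpy as np

def holonomy(A, path, steps=2000, dim=2):
    """First-order approximation of Hol(gamma, nabla) for nabla = d + A,
    i.e. of the solution of xi' = -A(gamma(t)) gamma'(t) xi.
    A(x) returns a list of dim x dim matrices [A_0(x), A_1(x), ...];
    path(t) returns a point in R^k for t in [0, 1]."""
    H = np.eye(dim, dtype=complex)
    for k in range(steps):
        t0, t1 = k / steps, (k + 1) / steps
        dx = path(t1) - path(t0)
        Amid = A(path((t0 + t1) / 2))
        step = sum(Amid[mu] * dx[mu] for mu in range(len(dx)))
        H = (np.eye(dim) - step) @ H  # Euler step of the transport equation
    return H

# Abelian sanity check: constant A_mu = i a_mu * Id gives Hol = exp(-i a.dx) Id.
a = np.array([0.3, 0.2])
A = lambda x: [1j * a[0] * np.eye(2), 1j * a[1] * np.eye(2)]
path = lambda t: t * np.array([1.0, 2.0])
H = holonomy(A, path)
```

For a non-abelian connection the same ordered product applies, but the per-step matrices no longer commute, which is exactly what the path ordering encodes.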
We call the algebra generated by all holonomy-diffeomorphisms $e^X$ the [*global*]{} holonomy-diffeomorphism algebra, denoted by $\mathbf{HD}_{\mbox{\tiny g}}(M)$, and we call the algebra generated by all holonomy-diffeomorphisms $f e^X$ the [*local*]{} holonomy-diffeomorphism algebra, denoted simply by $\mathbf{HD}(M)$.\ Furthermore, a $\mathfrak{g}$ valued one-form $\oo$ induces a transformation on $\ca$ and therefore an operator $U_\omega $ on functions on $\ca$ via $$U_\omega (\xi )(\nabla) = \xi (\nabla - \omega) ,$$ which gives us the quantum holonomy-diffeomorphism algebras, denoted either by $\mathbf{QHD}_{\mbox{\tiny g}}(M)$, which is the algebra generated by $\mathbf{HD}_{\mbox{\tiny g}}(M)$ and all the $U_\oo$ operators, or by $\mathbf{QHD}(M)$, which is the algebra generated by $\mathbf{HD}(M)$ and all the $U_\oo$ operators (see also [@Aastrup:2014ppa]). An infinite-dimensional Bott-Dirac operator {#Bott} =========================================== In this section we introduce an infinite-dimensional Bott-Dirac operator that acts in a Hilbert space that shall later play a key role in defining a representation of the $\mathbf{QHD}(M)$ algebras. The following formulation of an infinite-dimensional Bott-Dirac operator is due to Higson and Kasparov [@Higson] (see also [@Aastrup:2017atr]).\ Let $\ch_n= L^2(\mathbb{R}^n)$, where the measure is given by the flat metric, and consider the embedding $$\varphi_n : \ch_n\rightarrow\ch_{n+1}$$ given by $$\varphi_n(\eta)(x_1,x_2,\ldots x_{n+1}) = \eta(x_1,\ldots, x_n) \left(\frac{s_{n+1}}{\tau_2\pi}\right)^{1/4}e^{- \frac{s_{n+1} x_{n+1}^2}{2\tau_2}}, \label{ref}$$ where $\{s_n\}_{n\in\mathbb{N}}$ is a monotonically increasing sequence of parameters, which we for now leave unspecified[^4]. 
This gives us an inductive system of Hilbert spaces $$\ch_1\stackrel{\varphi_1}{\longrightarrow} \ch_2 \stackrel{\varphi_2}{\longrightarrow} \ldots \stackrel{\varphi_n}{\longrightarrow} \ch_{n+1} \stackrel{\varphi_{n+1}}{\longrightarrow}\ldots$$ and we define[^5] $L^2(\mathbb{R}^\infty) $ as the Hilbert space direct limit $$L^2(\mathbb{R}^\infty) = \lim_{\rightarrow} L^2(\mathbb{R}^n)$$ taken over the embeddings $\{\varphi_n\}_{n\in\mathbb{N}}$ given in (\[ref\]). We are now going to define the Bott-Dirac operator on $ L^2(\mathbb{R}^n)\otimes \Lambda^*\mathbb{R}^n$. Denote by $\mbox{ext}(v)$ the operator of external multiplication with $v$ on $\Lambda^*\mathbb{R}^n$, where $v$ is a vector in $\mathbb{R}^n$, and denote by $\mbox{int}(v)$ its adjoint, i.e. the interior multiplication by $v$. Denote by $\{v_i\}$ a set of orthonormal basis vectors on $\mathbb{R}^n$ and let $\bar{c}_i$ and $c_i$ be the Clifford multiplication operators given by $$\begin{aligned} {c}_i &=& \mbox{ext}(v_i) + \mbox{int}(v_i) {\nonumber}\\ \bar{c}_i &=& \mbox{ext}(v_i) - \mbox{int}(v_i) \end{aligned}$$ that satisfy the relations $$\begin{aligned} \{c_i, \bar{c}_j\} = 0, \quad \{c_i, {c_j}\} = \d_{ij}, \quad \{\bar{c}_i, \bar{c}_j\} =- \d_{ij}.\end{aligned}$$ The Bott-Dirac operator on $ L^2(\mathbb{R}^n)\otimes \Lambda^*\mathbb{R}^n$ is given by $$B_n = \sum_{i=1}^n\left( \tau_2 \bar{c}_i \frac{{\partial}}{{\partial}x_i} + s_i c_i x_i\right).$$ With $B_n$ we can then construct the Bott-Dirac operator $B$ on $L^2(\mathbb{R}^\infty)\otimes \Lambda^*\mathbb{R}^\infty$ that coincides with $B_n$ on any finite subspace $L^2(\mathbb{R}^n)$. 
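The exterior/interior multiplication operators and the resulting anticommutation relations can be checked directly in finite dimensions. The sketch below (our illustration) realises $\mbox{ext}(v_i)$ as matrices on $\Lambda^*\mathbb{R}^3$ in the subset basis; we include a $1/\sqrt2$ normalisation as one consistent way to realise the relations exactly as stated in the text — without it the non-zero anticommutators acquire a factor of 2 (i.e. $c_i^2=1$):

```python
import itertools
import numpy as np

def exterior_ops(n):
    """Matrices of ext(v_i) on Lambda^* R^n in the basis of ordered subsets."""
    subsets = [frozenset(s) for k in range(n + 1)
               for s in itertools.combinations(range(n), k)]
    idx = {s: j for j, s in enumerate(subsets)}
    ops = []
    for i in range(n):
        E = np.zeros((len(subsets), len(subsets)))
        for s in subsets:
            if i not in s:
                sign = (-1) ** sum(1 for j in s if j < i)  # Koszul sign
                E[idx[s | frozenset([i])], idx[s]] = sign
        ops.append(E)
    return ops

n = 3  # Lambda^* R^3 has dimension 2^3 = 8
ext = exterior_ops(n)
ints = [E.T for E in ext]  # int(v_i) is the adjoint of ext(v_i)
# Normalised so that {c_i, c_j} = delta_ij and {cbar_i, cbar_j} = -delta_ij:
c = [(e + i) / np.sqrt(2) for e, i in zip(ext, ints)]
cb = [(e - i) / np.sqrt(2) for e, i in zip(ext, ints)]

def anti(a, b):
    return a @ b + b @ a
```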
Here we mean by $\Lambda^*\mathbb{R}^\infty$ the inductive limit $$\Lambda^*\mathbb{R}^\infty= \lim_{\rightarrow} \Lambda^*\mathbb{R}^n.$$ For details on the construction of $B$ we refer the reader to [@Higson] and to [@Aastrup:2017atr], where we also showed that the square of $B$ coincides with the free Hamilton operator of a fermion Yang-Mills theory in a flat and local limit. A representation of the $\mathbf{QHD}(M)$ algebra ================================================= In this section we write down the representation of the $\mathbf{QHD}(M)$ algebra, which was first constructed in [@Aastrup:2017vrm]. A key feature of this representation is that it involves a spatial non-locality characterised by a physical parameter $\tau_1$, which effectively acts as an ultra-violet regulator and which we in [@Aastrup:2017vrm] tentatively interpreted in terms of the Planck length.\ To obtain a representation of the $\mathbf{QHD}(M)$ algebra we let $\langle \cdot\vert\cdot\rangle_{\mbox{\tiny s}} $ denote the Sobolev norm on $\OO^1(M\otimes\mathfrak{g})$, which has the form $$\langle \omega_1\vert\omega_2\rangle_{\mbox{\tiny s}} := \int_M \big( (1+ \tau_1\Delta^{\sigma})\omega_1 , (1+ \tau_1\Delta^{\sigma})\omega_2 \big)_{T_x^*M\otimes \mathbb{C}^n} (m) dm \label{sob}$$ where the Hodge-Laplace operator $\Delta$ and the inner product $(,)_{T_x^*M\otimes \mathbb{C}^n}$ on $T_x^*M\otimes \mathbb{C}^n$ depend on a metric g and where $\tau_1$ and $\sigma$ are positive constants. Also, we choose an $n$-dimensional representation of $\mathfrak{g}$. Next, denote by $\{\xi_i\}_{i\in\mathbb{N}}$ an orthonormal basis of $\OO^1(M\otimes\mathfrak{g})$ with respect to the scalar product (\[sob\]). 
With this we can construct a space $L^2(\ca)$ as an inductive limit over intermediate spaces $L^2(\ca_n)$ with an inner product given by $$\begin{aligned} \langle \eta \vert \zeta \rangle_{\ca_n} &=& \int_{\mathbb{R}^n} \overline{\eta(x_1\xi_1 + \ldots + x_n \xi_n)} \zeta (x_1\xi_1 + \ldots + x_n \xi_n) dx_1\ldots dx_n \label{rn} $$ where $\eta$ and $\zeta$ are elements in $L^2(\ca_n)$, as explained in section 3, and also using the same tail behaviour as in section 3. Finally, we define the Hilbert space $$\ch_{\mbox{\bf\tiny YM}}= L^2(\ca)\otimes L^2(M, \mathbb{C}^n) \label{ymm}$$ in which we then construct the following representation of the $\mathbf{QHD}(M)$ algebra. First, given a smooth one-form $\oo\in\OO^1(M,\mathfrak{g})$ we write $\oo =\sum \oo_i \xi_i$. The operator $U_{\oo}$ acts by translation in $L^2(\ca)$, i.e. $$\begin{aligned} U_{\oo}(\eta) (\omega)&=&U_{\oo}(\eta) (x_1 \xi_1+x_2 \xi_2+ \ldots) {\nonumber}\\ &=& \eta ( (x_1+\oo_1)\xi_1+(x_2+\oo_2)\xi_2+ \ldots) \label{rep1}\end{aligned}$$ with $\eta\in L^2(\ca)$. Next, we let $f e^X\in \mathbf{HD}(M)$ be a holonomy-diffeomorphism and $\Psi(\omega,m)=\eta(\omega)\otimes \psi(m)\in\ch_{\mbox{\tiny\bf YM}}$ where $\psi(m)\in L^2(M)\otimes \mathbb{C}^n$. We write $$f e^X \Psi(\omega,m') = f(m) \eta(\omega) Hol(\gamma, \omega) \psi(m) \label{rep2}$$ where $\gamma$ is again the path generated by the vector field $X$ with $m'=\exp_1(X)(m)$. In [@Aastrup:2017vrm] and [@Aastrup:2017atr] we prove that equations (\[rep1\]) and (\[rep2\]) give a strongly continuous Hilbert space representation of the $\mathbf{QHD}(M)$ algebra in $\ch_{\mbox{\bf\tiny YM}}$. Note that this representation is isometric with respect to the background metric $g$, see [@Aastrup:2017vrm] for details. 
Representing $\mathbf{QHD}_{\mbox{\tiny g}}(M)$ on the Fock Space ================================================================= The Bott-Dirac operator acts on $L^2 (\mathbb{R}^\infty )\otimes \Lambda^*\mathbb{R}^\infty$, and not on $L^2(\ca)\otimes L^2(M,\mathbb{C}^n)$ as the $\mathbf{QHD}(M)$-algebra does. The Hilbert space $L^2(\mathbb{R}^\infty)$ is, however, easily identified with $L^2 (\ca )$ via $$\mathbb{R}^n \ni (x_1,\ldots , x_n) \mapsto x_1\xi_1 +\ldots + x_n \xi_n \in \ca_n .$$ We will therefore denote $\Lambda^*\mathbb{R}^\infty$ by $\Lambda^*\ca$. We thus get an action of the Bott-Dirac operator and the $\mathbf{QHD}(M)$-algebra on $L^2(\ca)\otimes\Lambda^*\ca \otimes L^2(M,\mathbb{C}^n)$. This is somewhat unsatisfactory for two reasons:

1. The fermionic sector on which the $\mathbf{QHD}(M)$-algebra acts is a one-particle space. We could of course try to take the Fock space of $L^2(M,\mathbb{C}^n)$ instead of just $L^2(M,\mathbb{C}^n)$.

2. We have a fermionic doubling in the sense that we have the fermionic Fock space $\Lambda^*\ca$, where the bosons, i.e. the $\mathbf{QHD}(M)$-algebra, do not act at all, and then the fermions in $L^2(M,\mathbb{C}^n)$, where the bosons do act.

It is therefore desirable to get an action of the $\mathbf{QHD}(M)$ algebra on $L^2 (\ca )\otimes \Lambda^* \ca $. In this section we show how this can be accomplished for the $\mathbf{QHD}_{\mbox{\tiny g}}(M)$ algebra but at the present moment not for the local $\mathbf{QHD}(M)$ algebra.\ We begin with the base space $$\mathcal{H}^\sigma =\Omega^1 (M,\mathfrak{g}) , \label{trmpp}$$ where the Hilbert space structure is again with respect to a suitable Sobolev norm (\[sob\]) in the sense that the right-hand side of (\[trmpp\]) has been completed in this norm (we remind the reader that the superscript ’$\sigma$’ is the power of the Laplace operator in (\[sob\])). 
The main purpose is to get a unitary, connection-dependent action of the group of holonomy-diffeomorphisms in $\mathbf{HD}_{\mbox{\tiny g}}(M)$ on the Hilbert space $\mathcal{H}^\sigma$. Once we have a unitary action it extends uniquely to an action on the associated Fock space $\Lambda^* \mathcal{H}^\sigma$ via $$F_\nabla(v_1\wedge \ldots \wedge v_n)=F_\nabla (v_1)\wedge \ldots \wedge F_\nabla (v_n) ,$$ where $F$ denotes a holonomy-diffeomorphism and $\nabla$ denotes a connection. Once we have this we get a unitary action of the $\mathbf{HD}_{\mbox{\tiny g}}(M)$ algebra on $\Lambda^* \mathcal{H}^\sigma \otimes L^2(\mathcal{A})$ via $$F (\xi \otimes \eta )(\nabla ) =F_\nabla (\xi)\eta ( \nabla ) .$$ The question is of course how we get an action of $\mathbf{HD}_{\mbox{\tiny g}}(M)$ on $\mathcal{H}^\sigma$. To answer this question we let $F$ be a holonomy-diffeomorphism and let $\nabla$ be a $\mathfrak{g}$-connection. We start with the case $\sigma=0$. Let $F$ be the flow of the vector field $X$ and let $\omega \in \Omega^1 (M,\mathfrak{g})$. Let $m_1\in M$ and $m_2=\exp (X)(m_1)$, and $\gamma$ the path $t\to e^{tX}(m_1)$. Furthermore we denote by $(e^{-X})^* (\omega )$ the pullback of the one-form part of $\omega$ by the diffeomorphism $e^{ -X}$, i.e. $(e^{-X})^*$ leaves the Lie algebra $\mathfrak{g}$ unchanged. We define $$e_\nabla^{X}(\omega ) (m_2)= \hbox{Hol} (\gamma ,\nabla)\Big( (e^{-X})^*(\omega)(m_2) \Big) (\hbox{Hol} (\gamma ,\nabla))^{-1} .$$ This does not define a unitary operator, unless $\exp (X)$ is an isometric flow. Unlike in section \[sektion2\] we cannot compensate for the lack of unitarity by multiplying by a suitable determinant. The problem lies in the one-form part. One possible way to deal with this is to consider only holonomy-diffeomorphisms, which are isometries with respect to a chosen metric. Alternatively – and this is the option that we shall adopt – we can allow the operators to be non-unitary. 
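The wedge-extension rule $F_\nabla(v_1\wedge \ldots \wedge v_n)=F_\nabla (v_1)\wedge \ldots \wedge F_\nabla (v_n)$ can be made concrete in finite dimensions. The following sketch (our illustration) builds the induced matrix on $\Lambda^2\mathbb{R}^4$ and checks two facts: a unitary (here: orthogonal) operator extends unitarily, while a scaling $c\cdot\mathbb{1}$ extends as $c^2\cdot\mathbb{1}$ — the growth of norms with particle number that lies behind the boundedness issues for non-unitary operators:

```python
import itertools
import numpy as np

def wedge2(F):
    """Matrix of F acting on Lambda^2 R^n via F(v ^ w) = Fv ^ Fw,
    in the basis e_i ^ e_j with i < j."""
    n = F.shape[0]
    pairs = list(itertools.combinations(range(n), 2))
    G = np.zeros((len(pairs), len(pairs)), dtype=F.dtype)
    for a, (k, l) in enumerate(pairs):
        for b, (i, j) in enumerate(pairs):
            G[a, b] = F[k, i] * F[l, j] - F[k, j] * F[l, i]  # 2x2 minor of F
    return G

rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((4, 4)))  # an orthogonal operator on R^4
```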
In this latter case we will still get bounded operators on $\mathcal{H}^\sigma$, even when we consider the supremum over all connections. The problem is that when we extend the action to the Fock space the operators will no longer be bounded. The unboundedness is however not so severe since the operators are bounded when restricted to a subspace of the Fock space which contains particle states with particle number bounded by a given value. For general $\sigma$’s there is a natural way to proceed: The map $1+\tau_1\Delta^\sigma: \mathcal{H}^0 \to \mathcal{H}^\sigma$ is a unitary operator, and to get the action on $\mathcal{H}^\sigma$ we simply conjugate the action we have on $\mathcal{H}^0 $ with $1+\tau_1\Delta^\sigma$. If we choose holonomy-diffeomorphisms, which are isometries, this gives a unitary action. For general $\sigma$’s, we could also just proceed directly like above, without conjugating with $1+\tau_1\Delta^\sigma$. However, without this conjugation the action would not be unitary on $\mathcal{H}^\sigma$, $\sigma\not= 0$, for the isometric flows.\ Finally, this representation of the $\mathbf{HD}_{\mbox{\tiny g}}(M)$ algebra is straightforwardly extended to the full $\mathbf{QHD}_{\mbox{\tiny g}}(M)$ algebra via $$U_\oo (\xi \otimes \eta )(\nabla ) = (\xi \otimes \eta )(\nabla +\oo) , \label{NOO2}$$ for $\xi\otimes\eta\in \Lambda^* \mathcal{H}^\sigma \otimes L^2(\mathcal{A})$. This implies that we have a non-unitary action of the $\mathbf{QHD}_{\mbox{\tiny g}}(M)$ algebra on $\Lambda^*\mathcal{A}\otimes L^2 (\mathcal{A})$. The action is strongly continuous if it is restricted to finite particle states. Note that the reason that this representation does not work for the local $\mathbf{HD}(M)$ and $\mathbf{QHD}(M)$ algebras is that it is not clear what the action of a function $f\in C^\infty(M)$ should be on the $0$-forms in $\Lambda^* \mathcal{H}^\sigma$, i.e. on the vacuum. 
For this reason we leave out the $C^\infty(M)$ part and consider instead only global holonomy-diffeomorphisms. Discussion ========== In this paper we show that the representation of the $\mathbf{QHD}(M)$ algebra constructed in [@Aastrup:2017vrm] can be extended to include also the CAR algebra if we consider only global holonomy-diffeomorphisms. This result provides us with what we believe is a completely new interpretation of fermionic quantum field theory in terms of geometrical data of a configuration space of connections. Consider first the ordinary Dirac operator and a spin-geometry. Here the fermion can be viewed as being part of an encoding of geometrical data of the underlying manifold, i.e. a spectral triple. In our case we have, instead of the 4-dimensional Dirac operator, an infinite-dimensional Bott-Dirac operator acting in a Hilbert space over a configuration space of connections. This means that the CAR algebra and the fermionic sector are part of an encoding of geometrical data of this configuration space.\ As we demonstrated in [@Aastrup:2017atr] quantum holonomy theory is closely related to quantum field theory, the latter being based on two basic principles: locality and Lorentz invariance. In the axiomatic approaches these principles are encoded in the Osterwalder-Schrader [@Osterwalder:1973dx] axioms for the Euclidean theory and in the Garding-Wightman [@Wightman] or the Haag-Kastler [@Haag:1963dh] axioms for the Lorentzian theory. To understand the difference between the present approach and ordinary quantum field theory we need to understand the role of the ultra-violet regulator in the form of the Sobolev norm (\[sob\]), which is the central element required to secure the existence of the Hilbert space representations. There are two options: either this regulator is a traditional cut-off that should eventually be taken to zero or it is a physical feature of this particular theory. 
If the ultra-violet regulator is a traditional cut-off then we are firmly within the boundaries of ordinary quantum field theory albeit with a different approach and with a different toolbox. In that case the question is whether the introduction of the Bott-Dirac operator and the fact that we have a spectral triple will give us new information about the limit where the cut-off goes to zero. Similar to algebraic quantum field theory [@Haag:1992hx] this approach is not limited in its choice of background. If on the other hand the regulator is to be viewed as a physical feature then we are decidedly outside the realm of traditional quantum field theory. There are two immediate consequences:

1. The Lorentz symmetry will be broken. The Hilbert space representation based on the Sobolev norm (\[sob\]) is isometric with respect to the metric on the three-dimensional manifold but the Lorentz symmetry will not be preserved. Instead there will be a larger symmetry that involves a scale transformation.

2. The theory is non-local. Whereas ordinary quantum field theory is based on operator valued distributions the present setup does not permit sharply localised entities. This also implies that the canonical commutation relations will only be realised up to a correction at the scale of the regularisation.

Clearly this breaks with all the aforementioned axiomatic systems, but the question is whether it is physically feasible. We believe that it is. First of all, it is not known whether the Lorentz symmetry is an exact symmetry in Nature and indeed much experimental effort has gone into testing whether it is [@Jacobson:2004rj]. We believe that the experimental constraints are sufficiently weak to permit the type of Lorentz breaking that we propose as long as it is restricted to the Planck scale. Secondly, it is generally believed that exact locality is not realised in Nature. 
Simple arguments combining quantum mechanics with general relativity strongly suggest that distances shorter than the Planck length are operationally meaningless [@Doplicher:1994tu]. It is generally believed that a Planck scale screening will be produced by a theory of quantum gravity but we see no reason why it cannot be generated by quantum field theory itself as a part of its representation theory. We would then of course need to address the question of which regulator to choose, since the regulator would now be a quantity that is in principle observable. [**Acknowledgements**]{}\ JMG would like to express his gratitude towards Ilyas Khan, United Kingdom, and towards the engineering company Tegnestuen Haukohl & Køppen, Denmark, for their generous financial support. JMG would also like to express his gratitude towards the following sponsors: Ria Blanken, Niels Peter Dahl, Simon Kitson, Rita and Hans-Jørgen Mogensen, Tero Pulkkinen and Christopher Skak for their financial support, as well as all the backers of the 2016 Indiegogo crowdfunding campaign that has enabled this work. Finally, JMG would like to thank the Mathematical Institute at the Leibniz University in Hannover for kind hospitality during numerous visits.\ [99]{} J. Aastrup and J. M. Grimstrup, “The quantum holonomy-diffeomorphism algebra and quantum gravity,” Int. J. Mod. Phys. A [**31**]{} (2016) no.10, 1650048. J. Aastrup and J. M. Grimstrup, “Quantum Holonomy Theory,” Fortsch. Phys.  [**64**]{} (2016) no.10, 783. J. Aastrup and J. M. Grimstrup, “Representations of the Quantum Holonomy-Diffeomorphism Algebra,” arXiv:1709.02943. J. Aastrup and J. M. Grimstrup, “Nonperturbative Quantum Field Theory and Noncommutative Geometry,” arXiv:1712.05930. J. Aastrup and J. M. Grimstrup, “C\*-algebras of Holonomy-Diffeomorphisms and Quantum Gravity I,” Class. Quant. Grav.  [**30**]{} (2013) 085016. J. Aastrup and J. M. Grimstrup, “C\*-algebras of Holonomy-Diffeomorphisms and Quantum Gravity II”, J. Geom. Phys.  
[**99**]{} (2016) 10. N. Higson and G. Kasparov, “E-theory and KK-theory for groups which act properly and isometrically on Hilbert space”, Inventiones Mathematicae, vol. [**144**]{}, issue 1, pp. 23-74. K. Osterwalder and R. Schrader, “Axioms For Euclidean Green’s Functions,” Commun. Math. Phys.  [**31**]{} (1973) 83. A. S. Wightman, “Hilbert’s sixth problem: Mathematical treatment of the axioms of physics”, in F.E. Browder (ed.): Mathematical Developments Arising from Hilbert’s Problems, Vol. 28:1 of Proc. Symp. Pure Math., Amer. Math. Soc, 1976, pp. 241 - 268. R. Haag and D. Kastler, “An Algebraic approach to quantum field theory,” J. Math. Phys.  [**5**]{} (1964) 848. R. Haag, “Local quantum physics: Fields, particles, algebras,” Berlin, Germany: Springer (1992) 356 p. (Texts and monographs in physics). T. Jacobson, S. Liberati and D. Mattingly, “Astrophysical bounds on Planck suppressed Lorentz violation,” Lect. Notes Phys.  [**669**]{} (2005) 101. S. Doplicher, K. Fredenhagen and J. E. Roberts, “The Quantum structure of space-time at the Planck scale and quantum fields,” Commun. Math. Phys.  [**172**]{} (1995) 187. [^1]: email: `aastrup@math.uni-hannover.de` [^2]: email: `jesper.grimstrup@gmail.com` [^3]: The holonomy-diffeomorphisms, as presented here, are not a priori unitary, but by multiplying with a factor that counters the possible change in volume in (\[chopin1\]) one can make them unitary, see [@AGnew]. [^4]: In [@Higson] these parameters were not included, i.e. $s_n=1\forall n$. [^5]: The notation $L^2 (\mathbb{R}^\infty )$, which we are using here, is somewhat ambiguous. We are here only considering functions on $\mathbb{R}^\infty $ with a specific tail behaviour, namely the one generated by (3). We have not included this tail behaviour in the notation. See [@Aastrup:2017vrm] for further details.
--- abstract: 'We first prove a one-to-one correspondence between finding Hamiltonian cycles in cubic planar graphs and finding trees with specific properties in dual graphs. Using this correspondence, we construct an exact algorithm for finding Hamiltonian cycles in cubic planar graphs. The worst case time complexity of our algorithm is O$(2^n)$.' author: - | **Bohao Yao\ **Charl Ras, Hamid Mokhtar\ **The University of Melbourne****** title: '**An algorithm for finding Hamiltonian Cycles in Cubic Planar Graphs**' --- Introduction ============ A [*Hamiltonian cycle*]{} is a cycle which passes through every vertex in a graph exactly once. A [*planar graph*]{} is a graph which can be drawn in the plane such that no edges intersect one another. A [*cubic graph*]{} is a graph in which all vertices have degree 3. Finding a Hamiltonian cycle in a cubic planar graph is proven to be an $\mathcal{NP}$-Complete problem [@garey1976planar]. This implies that unless $\mathcal{P}=\mathcal{NP}$, we cannot find an efficient algorithm for this problem. Most approaches to finding a Hamiltonian cycle in a planar graph utilise the [*divide-and-conquer*]{} method or its derivative, the [*separator theorem*]{}, which partitions the graph in polynomial time [@lipton1979separator]. Exact algorithms using such methods were found to have a complexity of O$(c^{\sqrt{n}})$ [@klinz2006exact] [@dorn2005efficient], where [*n*]{} denotes the number of vertices and [*c*]{} is a constant. In this paper, we consider only cubic planar graphs and attempt to find a new algorithm to provide researchers with a new method of approaching this problem. The Expansion Algorithm ======================= We first start by introducing our so-called [*Expansion Algorithm*]{}, which increases the number of vertices in a cycle at each iteration. An initial cycle can be found by taking the outer facial cycle of the planar graph. We define it as the [*base cycle*]{}, $\sigma_0$. 
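As an aside not in the original text: a base cycle can be read off a combinatorial planar embedding by traversing a face. A minimal sketch using the `networkx` library (the choice of the cube graph $Q_3$ as input, and of which face to treat as the outer one, are illustrative assumptions):

```python
import networkx as nx

# Q3, the cube graph: a 3-connected cubic planar graph.
G = nx.hypercube_graph(3)
is_planar, emb = nx.check_planarity(G)
assert is_planar

# Traverse the face lying on one side of the half-edge (u, v).
# For a 3-connected planar graph the embedding is essentially unique,
# but which facial cycle is drawn outermost is a choice of drawing;
# here we simply take the face at an arbitrary half-edge.
u = (0, 0, 0)
v = next(iter(emb[u]))
face = emb.traverse_face(u, v)
print(face)  # four nodes: every face of the cube is a 4-cycle
```

In a drawing where this face is placed outermost, its vertices in order form the base cycle $\sigma_0$.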
This base cycle is then expanded by the Expansion Algorithm, which will be described in detail later. \[def1\] Consider a planar graph $G=(V,E)$. A *complementary path*, $P_e^\sigma$, is a path between 2 adjacent vertices $v_1, v_2 \in \sigma$ connected by the edge $e$, s.t. $P_e^\sigma$ is internally disjoint from $\sigma$. Furthermore, $P_e^\sigma$ and $e$ together will form the boundary of a face in $G$. Assuming we are not dealing with multigraphs, the complementary path will always have at least one other vertex besides $v_1, v_2$. The restriction that $P_e^\sigma$ and $e$ have to form the boundary of a face will be used later to prove Corollary \[cor1\]. \[def2\] Let $G_1=(V_1,E_1)$ and $G_2=(V_2,E_2)$. Then, $G_1+G_2 := (V_1 \cup V_2,E_1 \cup E_2)$. \[alg1\] [**Let**]{} $\sigma_0$ be the outer facial cycle.\ [**Let**]{} $i=0$. At each iteration, the algorithm removes an edge $e$ and adds a path $P_e^\sigma$. Since there are no internal vertices on $e$ and there is at least 1 internal vertex on $P_e^\sigma$, the number of vertices on the cycle always increases at each iteration. Since there is only a finite number of vertices in the graph, the algorithm must eventually terminate. ![Example of utilizing the Expansion Algorithm with the base cycle in blue](Expansion.eps) The *interior* of a cycle, $C$, is the connected region lying to the left of an anticlockwise orientation of $C$. \[lem2\] At each iteration of Algorithm \[alg1\], all the vertices of $G$ either lie in the interior of $\sigma$ or on $\sigma$. Let $P_i$ be the statement that all the vertices of $G$ either lie in the interior of $\sigma_i$ or on $\sigma_i$. - $P_0$ is true, as $\sigma_0$ is the outer facial cycle, with all the vertices lying either in the interior of or on $\sigma_0$. - Assume $P_k$ is true. Then all the vertices either lie in the interior of or on $\sigma_k$. Assume $\exists P_{e_k}^{\sigma_k}$; then $\sigma_{k+1}$ exists. Let $f_k$ be the face bounded by $P_{e_k}^{\sigma_k}$ and $e_k$. 
The interior of $f_k$ originally lies inside $\sigma_k$. But by an iteration of Algorithm \[alg1\], the interior of $f_k$ now lies outside the new base cycle, $\sigma_{k+1}$. However, since there are no vertices in the interior of a face, all the vertices still lie either in the interior of or on $\sigma_{k+1}$. Therefore, if $\sigma_{k+1}$ exists, then $P_k \Rightarrow P_{k+1}$. Hence, by mathematical induction, $P_i$ is true $\forall i$ as long as $\sigma_i$ exists. \[cor1\] If $P_e^\sigma$ exists for an edge $e \in \sigma$, then $P_e^\sigma$ is unique. By Lemma \[lem2\], none of the vertices can lie outside $\sigma$; thus, $P_e^\sigma$ also lies in the interior of $\sigma$. Any edge $e$ lies on the boundary of 2 faces. If $e \in \sigma$, then $\exists$ only 1 possible $P_e^\sigma$ such that $e$ and $P_e^\sigma$ form the boundary of a face that lies in the interior of $\sigma$ (the other face that $e$ bounds lies outside $\sigma$). \[lem1\] If $G$ has a Hamiltonian cycle, then $\exists$ a choice of complementary paths that Algorithm \[alg1\] can use to find that Hamiltonian cycle. Consider a Hamiltonian cycle, $C$, in $G$. If $C = \sigma_0$ then the case is trivial. If $C \ne \sigma_0$, then by Lemma \[lem2\], all vertices in $C$ lie either on or in the interior of $\sigma_0$, since $C$ contains all the vertices of the graph. Since $C$ contains all the vertices, and $\sigma_0$ is a cycle and therefore must contain at least 3 vertices, $C \cap \sigma_0 \ne \emptyset$. Suppose $C \ne \sigma_0$. Let $v_1,v_2,...,v_n$ be consecutive vertices on $\sigma_0$ in clockwise order. Let $P_i$ be the path connecting $v_i,v_{i+1}$ that is a subpath of $C$. Let the edge between $v_i$ and $v_{i+1}$ be $e_i$, which also lies on $\sigma_0$. Since $C$ contains all the vertices in the graph, by iteratively finding $P_e^\sigma$, starting with $e = e_i$, a larger subpath of $P_i$ will lie on $\sigma$ at each iteration. 
The algorithm will only move on when $P_i \subset \sigma$. Repeating this process $\forall i$, we will eventually end up with $\sigma = C$, at which point the algorithm terminates. ![Illustration of the proof for Lemma \[lem1\] with $C$ in red](Graph1.eps) The Problem in the Dual Graph ============================= Given a cubic planar graph, $G$, and the corresponding dual graph, $\overline{G}=(\overline{V},\overline{E})$. A *corresponding face*, $f_{\overline{v}} \in G$, is the face corresponding to the vertex $\overline{v} \in \overline{G}$. The *outer vertex*, $\overline{v}^*$, is the vertex in $\overline{G}$ that corresponds to the outer face of $G$. Let $e \in G$ be the shared boundary between $f_{\overline{v}_1}$ and $f_{\overline{v}_2}$. A *dual edge*, $\overline{e} \in \overline{G}$, is defined as an edge between $\overline{v}_1, \overline{v}_2 \in \overline{G}$. ![Example of $\overline{e}$ and $\overline{v}^*$](v.eps) Consider Algorithm \[alg1\]. Since $P_e^\sigma$ is unique for an edge $e$ by Corollary \[cor1\], let $e_0,e_1,...,e_n$ be the edges chosen by the [Expansion Algorithm]{}, in order. Let $\overline{e}_0,\overline{e}_1,...,\overline{e}_n$ be the corresponding dual edges in $\overline{G}$. ![Expansion Algorithm in the Dual Graph with $\overline{T}$ in Purple](Expansion_Dual.eps) \[lem3\] $\bigcup\limits_{i=0}^n \overline{e}_i$ is a tree ($\overline{T}$). Let $P_a$ be the statement that $\bigcup\limits_{i=0}^a \overline{e}_i$ is a tree. - $P_0$ is true, as 2 vertices connected by an edge, $\overline{e}_0$, form a tree. - Assume $P_k$ is true. Then $\bigcup\limits_{i=0}^k \overline{e}_i$ is a tree. From the proof of Corollary \[cor1\], $e_{k+1}$ is a boundary of 2 faces, one that lies inside $\sigma_{k+1}$ ($f_1$) and another that lies outside $\sigma_{k+1}$ ($f_2$). 
From the proof of Lemma \[lem2\], by including path $P_{e_i}^{\sigma_i}$, the face bounded by $e_i$ and $P_{e_i}^{\sigma_i}$, which originally lies in the interior of $\sigma_i$, will now lie on the exterior of $\sigma_{i+1}$. Therefore, the vertex in $\overline{G}$ that corresponds to $f_1$ will lie outside $\overline{T}$ while the vertex that corresponds to $f_2$ will lie inside $\overline{T}$. Thus, $\overline{e}_{k+1}$ will connect a vertex on $\overline{T}$ to a vertex outside of $\overline{T}$, thus expanding the tree. Hence $\bigcup\limits_{i=0}^{k+1} \overline{e}_i$ is also a tree, and therefore $P_{k+1}$ is true. Thus, $P_k \Rightarrow P_{k+1}$. By mathematical induction, $\bigcup\limits_{i=0}^n \overline{e}_i$ is a tree. From Lemma \[lem2\], we know that the face $f_k$, bounded by $e_k$ and $P_{e_k}^{\sigma_k}$, will be on the exterior of $\sigma$ if $e_k$ was used in the algorithm. Conversely, if $e_k$ was not used in the algorithm, then $f_k$ will be in the interior of the cycle $\sigma$. This is equivalent to stating that all the vertices in $\overline{T}$ lie outside $\sigma$, while all the vertices not in $\overline{T}$ lie in the interior of $\sigma$. \[lem4\] If $\overline{v} \in \overline{G}$ lies on $\overline{T}$, then all the vertices in $G$ on $f_{\overline{v}}$ lie on $\sigma_n$. If $\overline{v}$ lies on the tree, then $\exists P_e^\sigma$ s.t. all the vertices in $G$ on $f_{\overline{v}}$ lie on $P_e^\sigma$, since the face is bounded by $P_e^\sigma$ and $e$. Since each iteration of the algorithm only removes an edge and not any vertices, all the vertices already on the cycle will remain on the cycle at each iteration. \[thm1\] $G$ has a Hamiltonian cycle if and only if $\exists \overline{T}$ found by Algorithm \[alg1\], with $\overline{v}^*$ at the root, that satisfies the following properties: 1. 
For each vertex $v \in G$ that lies on the boundaries of $f_{\overline{v}_1},f_{\overline{v}_2},f_{\overline{v}_3}$, at least one of the vertices $\overline{v}_1,\overline{v}_2,\overline{v}_3 \in \overline{G}$ lies on $\overline{T}$. 2. No two vertices on $\overline{T}$ can be joined by an edge $\overline{e}$ unless $\overline{e} \in \overline{T}$. By first taking the outer facial cycle, $\sigma_0$, only $\overline{v}^*$ lies outside our base cycle. Hence, $\overline{T}$ starts with $\overline{v}^*$ as its root. ($\Rightarrow$) We will begin by proving each of the following properties: 1. If $G$ has a Hamiltonian cycle, then $\exists$ a cycle in $G$ that uses every vertex in $G$. If none of the vertices $\overline{v}_1,\overline{v}_2,\overline{v}_3$ lies on $\overline{T}$, then none of the $P_e^\sigma$ found by Algorithm \[alg1\] could contain $v$, from the negation of Lemma \[lem4\]. This contradicts the definition of a Hamiltonian cycle. Therefore, at least one of the vertices lies on $\overline{T}$. ![A vertex lies on the cycle (blue) as $\overline{v}_1 \in \overline{T}$ (purple)](Thm1_1.eps) 2. If 2 vertices $\overline{v}'_1, \overline{v}'_2 \in \overline{T}$ are adjacent, then $\exists$ 2 vertices shared by $f_{\overline{v}'_1}$ and $f_{\overline{v}'_2}$ in $G$, but the edge between them is not used on $\overline{T}$. This implies that the 2 vertices will lie on the corresponding path in $G$ twice, once from each of the faces $f_{\overline{v}'_1},f_{\overline{v}'_2} \in G$. This contradicts the definition of a Hamiltonian cycle. [.5]{} ![$\overline{v}_1,\overline{v}_2 \in \overline{T}$ and $\overline{e} \in \overline{T}$](Thm1_21.eps "fig:"){width=".4\linewidth"} [.5]{} ![$\overline{v}_1,\overline{v}_2 \in \overline{T}$ and $\overline{e} \in \overline{T}$](Thm1_22.eps "fig:"){width=".4\linewidth"} ($\Leftarrow$) Suppose that $\overline{T}=\bigcup\limits_{i=0}^n \overline{e}_i$ is a tree found by Algorithm \[alg1\] such that $\overline{T}$ satisfies those properties. 
From 2, we apply a restriction on $\overline{T}$ such that the path in $G$ passes through each vertex at most once (as proven previously). From 1, we know the path passes through every vertex in $G$ at least once. Since there is a path in $G$ that passes through every vertex at most once and at least once, we know the path passes through every vertex in $G$ exactly once. Furthermore, we know that Algorithm \[alg1\] will always output a cycle, $\sigma$. Therefore, we know that a $\overline{T}$ satisfying those properties will correspond to a Hamiltonian cycle in $G$. Note that Theorem \[thm1\] is a corollary of a theorem by Skupie[ń]{} [@skupien2002hamiltonicity]. Finding $\overline{T}$ ====================== We start solving the problem by modifying a backtracking algorithm (**procedure** `solve`) with restrictions (**procedure** `update`) to limit the amount of guessing required. Restrictions from **procedure** `update` may cause the tree to form 2 disjoint graphs, in which case a new backtracking algorithm (**procedure** `disjoint`) is called to trace a path between the graphs. If a particular choice of paths does not work, the algorithm will negate that choice (**procedure** `backtrack`). The algorithm terminates when it successfully finds $\overline{T}$ (all vertices assigned) or when all other choices are exhausted, concluding that the graph has no Hamiltonian cycle. Search Algorithm ---------------- \[alg2\] To find $\overline{T}$ with the properties in Theorem \[thm1\], we use a modified backtracking algorithm. 
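The control flow of this kind of two-choice backtracking search can be sketched as follows. This is a schematic only, not the authors' exact `update`/`backtrack`/`disjoint` procedures; the `consistent` predicate, standing in for the restrictions derived from Theorem \[thm1\], is a hypothetical placeholder supplied by the caller.

```python
def search(vertices, consistent):
    """Assign each dual vertex to T-bar ('T') or S-bar ('S'),
    backtracking whenever a partial assignment violates the
    caller-supplied consistency predicate."""
    order = list(vertices)

    def solve(i, assignment):
        if i == len(order):
            return dict(assignment)  # all vertices assigned: T-bar found
        v = order[i]
        for choice in ("T", "S"):    # the two choices guessed at each step
            assignment[v] = choice
            if consistent(assignment):
                found = solve(i + 1, assignment)
                if found is not None:
                    return found
            del assignment[v]
        return None                  # both choices exhausted: backtrack

    return solve(0, {})

# Toy usage: three vertices, at most one may be placed in S-bar.
result = search(range(3), lambda a: list(a.values()).count("S") <= 1)
print(result)  # {0: 'T', 1: 'T', 2: 'T'} -- the first consistent full assignment
```

A return value of `None` signals that every choice was exhausted, mirroring the algorithm's conclusion that the graph has no Hamiltonian cycle.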
[**procedure**]{} update($\overline{G},\overline{v}$):\ [**procedure**]{} backtrack($\overline{G},\overline{v}$):\ $V(\overline{T}):= V(\overline{T}) - \overline{v}$ $\overline{S}:=\overline{S}+\overline{v}$ [**procedure**]{} solve($\overline{G}$):\ [**procedure**]{} disjoint($\overline{G},\overline{v}$):\ \ $\overline{S} := \emptyset$ $\overline{T} := \left\{\overline{v}^*\right\}$ call solve($\overline{G}$) Restrictions on vertices do not apply in disjoint graphs where the 2 vertices $\in \overline{T}$ belong to different sets, due to the necessity of connecting the disjoint graphs into one tree. Complexity ---------- In the worst case scenario, we expect to force at least one vertex into $\overline{S}$ at each guess. This reduces the number of faces we have to check to $\frac{f}{2}$, where $f$ is the number of faces in $G$. Since we are working with cubic graphs, $e=\frac{3}{2}n$, where $n$ is the number of vertices. Using Euler’s formula: $$\begin{aligned} n-e+f&=2 \\ n-\frac{3}{2}n+f&=2 \\ f&=2+\frac{n}{2}\end{aligned}$$ Since we have 2 choices ($\overline{T}$ or $\overline{S}$) to guess for each face, in the worst case our algorithm runs in $2^{\frac{f}{2}} = 2^{1+\frac{n}{4}}$ steps. Hence, our algorithm has a time complexity of O$(2^n)$. Future Research =============== We could modify Theorem \[thm1\] so that it encompasses any planar graph, $G_2=(V_2,E_2)$. However, caution is advised when dealing with faces that share a single vertex, but not a boundary, as this is not reflected in the dual graph, $\overline{G}_2=(\overline{V}_2,\overline{E}_2)$. To overcome this, we introduce the *imaginary edge*. An *imaginary edge*, $\overline{e}'$, is added between vertices in $\overline{G}_2$ if the corresponding faces in $G_2$ share a single vertex but not a boundary. \[thm2\] If $G_2$ has a Hamiltonian cycle then $\exists \overline{T}$ found by Algorithm \[alg1\], with $\overline{v}^*$ at the root, that at least satisfies the following properties: 1. 
For each degree-$n$ vertex $v \in G_2$ that lies on the boundaries of $f_{\overline{v}_1},f_{\overline{v}_2}, ..., f_{\overline{v}_n}$, at least one of the vertices $\overline{v}_1,\overline{v}_2, ..., \overline{v}_n \in \overline{G}_2$ lies on $\overline{T}$. 2. No two vertices on $\overline{T}$ can be joined by an edge $\overline{e}$ unless $\overline{e} \in \overline{T}$. 3. No two vertices joined by an imaginary edge can lie on $\overline{T}$. Properties 1 and 2 are proven in Theorem \[thm1\]. The proof of Property 3 follows from the proof of Property 2: if 2 vertices $\overline{v}_1, \overline{v}_2 \in \overline{T}$ are joined by an imaginary edge, then the vertex that $f_{\overline{v}_1}$ and $f_{\overline{v}_2}$ share would be on the cycle twice. Since this contradicts the definition of a Hamiltonian cycle, Property 3 holds. Acknowledgments =============== The author wishes to thank his supervisors, Dr Charl Ras and Hamid Mokhtar, for their continued guidance and for providing useful insights into this problem. This paper was supported by the Vacation Research Scholarship awarded by the Australian Mathematical Sciences Institute. [1]{} F. Dorn, E. Penninkx, H. L. Bodlaender, and F. V. Fomin. Efficient exact algorithms on planar graphs: Exploiting sphere cut branch decompositions. In [*Algorithms-ESA 2005*]{}, pages 95-106. Springer, 2005. M. R. Garey, D. S. Johnson, and R. E. Tarjan. The planar Hamiltonian circuit problem is NP-complete. [*SIAM Journal on Computing*]{}, 5(4):704-714, 1976. B. Klinz, G. J. Woeginger, et al. Exact algorithms for the Hamiltonian cycle problem in planar graphs. [*Operations Research Letters*]{}, 34(3):269-274, 2006. R. J. Lipton and R. E. Tarjan. A separator theorem for planar graphs. [*SIAM Journal on Applied Mathematics*]{}, 36(2):177-189, 1979. Z. Skupie[ń]{}. Hamiltonicity of planar cubic multigraphs. [*Discrete Mathematics*]{}, 251(1):163-168, 2002.
--- abstract: 'Earth’s climate, mantle, and core interact over geologic timescales. Climate influences whether plate tectonics can take place on a planet, with cool climates being favorable for plate tectonics because they enhance stresses in the lithosphere, suppress plate boundary annealing, and promote hydration and weakening of the lithosphere. Plate tectonics plays a vital role in the long-term carbon cycle, which helps to maintain a temperate climate. Plate tectonics provides long-term cooling of the core, which is vital for generating a magnetic field, and the magnetic field is capable of shielding atmospheric volatiles from the solar wind. Coupling between climate, mantle, and core can potentially explain the divergent evolution of Earth and Venus. As Venus lies too close to the sun for liquid water to exist, there is no long-term carbon cycle and thus an extremely hot climate. Therefore plate tectonics cannot operate and a long-lived core dynamo cannot be sustained due to insufficient core cooling. On planets within the habitable zone where liquid water is possible, a wide range of evolutionary scenarios can take place depending on initial atmospheric composition, bulk volatile content, or the timing of when plate tectonics initiates, among other factors. Many of these evolutionary trajectories would render the planet uninhabitable. However, there is still significant uncertainty over the nature of the coupling between climate, mantle, and core. Future work is needed to constrain potential evolutionary scenarios and the likelihood of an Earth-like evolution.' author: - 'Bradford J. Foley, Peter E. Driscoll' title: 'Whole planet coupling between climate, mantle, and core: Implications for the evolution of rocky planets ' --- Introduction ============ Overview -------- Recent discoveries have revealed that rocky exoplanets are relatively common [@Batalha2014]. 
As a consequence, determining the factors necessary for a rocky planet to support life, especially life that may be remotely observable, has become an increasingly important topic. A major requirement, which has been extensively studied, is that solar luminosity must be neither too high nor too low for liquid water to be stable on a planet’s surface; this requirement leads to the concept of the “habitable zone," the range of orbital distances where liquid water is possible [@Hart1978; @Hart1979; @Kasting1993; @Franck2000; @Kopp2014]. Inward of the habitable zone’s inner edge, the critical solar flux that triggers a runaway greenhouse effect is exceeded. The critical flux is typically estimated at $\approx 300 $ W m$^{-2}$, with variations of $\sim 10-100$ W m$^{-2}$ possible due to atmospheric composition, planet size, or surface water inventory [@Ingersoll1969; @Kasting1988; @Nakajima1992; @Abe2011; @Goldblatt2013]. In a runaway greenhouse state liquid water cannot condense out of the atmosphere, so any water present would exist as steam. Furthermore, a runaway greenhouse is thought to cause rapid water loss to space, and can thus leave a planet desiccated [@Kasting1988; @Hamano2013; @Wordsworth2013]. Beyond the outer edge, insolation levels are so low that no amount of CO$_2$ can keep surface temperatures above freezing [@Kasting1993]. However, lying within the habitable zone does not guarantee that surface conditions will be suitable for life. Variations in atmospheric CO$_2$ content can lead to cold climates where a planet is globally glaciated, or hot climates where surface temperatures are higher than any known life can tolerate (i.e. above $\approx 400$ K [@Takai2008]). A hot CO$_2$ greenhouse can also cause rapid water loss to space [@Kasting1988] (though [@Wordsworth2013] argues against this), or even a steam atmosphere if surface temperatures exceed water’s critical temperature of 647 K. 
Moreover, the solar wind can strip the atmosphere of water and expose the surface to harmful radiation, unless a magnetic field is present to shield the planet [e.g. @Kasting2003; @griessmeier2009; @brain2014]. Atmospheric CO$_2$ concentrations are regulated by the long-term carbon cycle on Earth, such that surface temperatures have remained temperate throughout geologic time [e.g. @walker1981; @Berner2004]. The long-term carbon cycle is facilitated by plate tectonics [e.g. @Kasting2003]. Furthermore the magnetic field is maintained by convection in Earth’s liquid iron outer core (i.e. the geodynamo). As a result, interior processes, namely the operation of plate tectonics and the geodynamo, are vital for habitability. However, whether plate tectonics or a strong magnetic field is likely on rocky planets, especially those in the habitable zone where liquid water is possible, is unclear. The four rocky planets of our solar system have taken dramatically different evolutionary paths, with only Earth developing into a habitable planet possessing liquid water oceans, plate tectonics, and a strong, internally generated magnetic field. In particular the contrast between Earth and Venus, which is approximately the same size as Earth and has a similar composition yet lacks plate tectonics, a magnetic field, and a temperate climate, is striking. In this review we synthesize recent work to highlight that plate tectonics, climate, and the geodynamo are coupled, and that this “whole planet coupling" between surface and interior places new constraints on whether plate tectonics, temperate climates, and magnetic fields can develop on a rocky planet. We hypothesize that whole planet coupling can potentially explain the Earth-Venus dichotomy, as it allows two otherwise similar planets to undergo drastically different evolutions, due solely to one lying inward of the habitable zone’s inner edge and the other within the habitable zone. 
We also hypothesize that whole planet coupling can lead to a number of different evolutionary scenarios for habitable zone planets, many of which would be detrimental for life, based on initial atmospheric composition, planetary volatile content, and other factors. We primarily focus on habitable zone planets, as these are most interesting in terms of astrobiology, and because the full series of surface-interior interactions we describe involves the long-term carbon cycle, which requires liquid water. Each process, the generation of plate tectonics from mantle convection, climate regulation due to the long-term carbon cycle, and dynamo action in the core, is still incompletely understood and the couplings between these processes are even more uncertain. As a result, significant future work will be needed to place more quantitative constraints on the evolutionary scenarios discussed in this review. Whole planet coupling --------------------- Several basic concepts illustrate the coupling between the surface and interior (Figure \[fig:CTM\]). (1) Climate influences whether plate tectonics can take place on a planet. (2) Plate tectonics plays a vital role in the long-term carbon cycle, which helps to maintain a temperate climate. (3) Plate tectonics affects the generation of the magnetic field via core cooling. (4) The magnetic field is capable of shielding the atmosphere from the solar wind. Cool climates are favorable for plate tectonics because they facilitate the formation of weak lithospheric shear zones, which are necessary for plate tectonics to operate. Low surface temperatures suppress rock annealing, increase the negative buoyancy of the lithosphere, and allow for deep cracking and subsequent water ingestion into the lithosphere, all of which promote the formation of weak shear zones. 
When liquid water is present on a planet’s surface, silicate weathering, the primary sink of atmospheric CO$_2$ and thus a key component of the long-term carbon cycle, is active. However, silicate weathering also requires a sufficient supply of fresh, weatherable rock at the surface, which plate tectonics helps to provide via orogeny and uplift. As a result, the coupling between plate tectonics and climate can behave as a negative feedback mechanism in some cases, where a cool climate promotes the operation of plate tectonics, and plate tectonics enhances silicate weathering such that the carbon cycle can sustain cool climate conditions. ![Flow chart representing the concept of whole planet coupling. Climate influences tectonics through the role of surface temperature in a planet’s tectonic regime (i.e. stagnant lid versus plate tectonics), while the tectonic regime in turn affects climate through volatile cycling between the surface and interior. The tectonic regime also influences whether a magnetic field can be generated by dictating the core cooling rate. Finally, the strength of the magnetic field influences atmospheric escape, and therefore long-term climate evolution. []{data-label="fig:CTM"}](whole_planet_coup_flowchart.pdf){width="\linewidth"} An additional coupling comes into play via the core dynamo and the magnetic field. The magnetic field is generated by either thermal or chemical convection in the liquid iron core. Thermal convection requires a super-adiabatic heat flux out of the core, which is controlled in part by the style of mantle convection, while chemical convection is driven by light element release during inner core nucleation, which also relies on cooling of the core. Plate tectonics cools the mantle efficiently by continuously subducting cold slabs into the deep interior, thus maintaining a high heat flow out of the core. The magnetic field can in turn limit atmospheric escape, helping retain liquid surface water. 
The coupling between plate tectonics and the core dynamo, and the magnetic field and the climate, completes the concept of whole planet coupling (Figure \[fig:CTM\]). Moreover the magnetic field and plate tectonics can also act as a negative feedback in cases where magnetic shielding is required to prevent rapid planetary water loss, and plate tectonics is needed to drive the dynamo. Applications to the evolution of rocky exoplanets ------------------------------------------------- The concept of whole planet coupling discussed in this review is based on our knowledge of the Earth and the other rocky planets in the solar system, and is therefore limited to planets of a particular composition; i.e. planets made up mainly of silicate mantles and iron cores. It is unknown whether any of the processes discussed here are applicable to planets with more exotic compositions, such as planets primarily composed of carbides rather than silicates [e.g. @Madhu2012]. Furthermore we focus on H$_2$O and CO$_2$ as key volatiles; CO$_2$ is an important greenhouse gas for stabilizing planetary climate, and water is crucial for driving the carbon cycle, may play a role in the operation of plate tectonics, and is thought to be a necessary ingredient for life. Thus planets must accrete a significant supply of both H$_2$O and CO$_2$ for whole planet coupling to be possible. However, volatile accretion is unlikely to be a problem as simulations indicate that planets commonly acquire large volatile inventories (i.e. Earth-like or larger) during late stage planet formation [@Morbi2000; @Raymond2004]. Another important consideration is the redox state of the mantle, which determines whether degassing via planetary magmatism releases H$_2$O and CO$_2$ to the atmosphere or reduced species such as H$_2$ and CH$_4$ [e.g. @Kasting1993b]. Carbon dioxide is the only major greenhouse gas known to be regulated by negative feedbacks such that it has a stabilizing influence on climate. 
Thus having CO$_2$ as a primary greenhouse gas is important for the whole planet coupling discussed here to operate. An oxidized upper mantle favors CO$_2$ over CH$_4$ and other reduced gases. Earth’s mantle has been oxidized at present day levels since at least the early Archean [@Delano2001], and possibly even since the Hadean [@Trail2011]. Oxidation of the mantle is thought to occur by disproportionation of FeO to Fe$_2$O$_3$-bearing perovskite and iron metal in the lower mantle during accretion and core formation. The iron metal is then lost to the core leaving behind oxidized perovskite that mixes with the rest of the mantle [@Frost2008; @Frost2008_rev]. Disproportionation of FeO is expected to occur on rocky planets Earth sized or larger [@Wade2005; @Wood2006], so CO$_2$ is likely to be an important greenhouse gas on exoplanets. Though other greenhouse gases can still be important, any planet where significant amounts of CO$_2$ are degassed by mantle volcanism will need silicate weathering to act as a CO$_2$ sink to avoid extremely hot climates. Outline ------- The paper is structured as follows. We review the basic physics behind the generation of plate tectonics from mantle convection, highlighting the role of climate, in §\[sec:plate\_generation\]. We then review the long-term carbon cycle and its influence on climate, and detail the specific ways plate tectonics is important for the operation of this cycle, in §\[sec:climate\_reg\_cc\]. Next we review magnetic field generation via a core dynamo and the physics of atmospheric shielding and volatile retention in §\[sec:whole\_planet\]. In §\[sec:summary\] we integrate the discussion of the previous three sections to describe the range of different evolutionary scenarios that can result from differences in orbital distance, initial climate state, volatile inventory, and other factors as a consequence of whole planet coupling. 
We conclude by listing some major open questions that must be addressed in order to further our understanding of how terrestrial planets evolve and the geophysical factors that influence habitability (§\[sec:future\]). Generation of plate tectonics from mantle convection and the role of climate {#sec:plate_generation} ============================================================================ General physics of plate generation {#sec:plate_physics_gen} ----------------------------------- Plate tectonics is the surface expression of convection in the Earth’s mantle: the lithosphere, composed of the individual plates, is the cold upper thermal boundary layer of the convecting mantle, subducting slabs are convective downwellings, and plumes are convective upwellings [@davies1999; @Berco2015_treatise]. However, mantle convection does not always lead to plate tectonics, as evidenced by Mercury, Mars, and Venus, each of which is thought to have a convective mantle but lack plate tectonics [e.g. @Breuer2007]. On these planets, “stagnant lid convection," where the lithosphere acts as a rigid, immobile lid lying above the convecting mantle [e.g. @ogawa1991; @Davaille1993; @slava1995], is thought to operate [@Strom1975; @Phillips1981; @Solomon1992; @Solomatov1996; @Solomatov1997; @Spohn2001; @ONeill2007c], though Venus may experience occasional, short-lived episodes of subduction [@Turcotte1993]. Stagnant lid convection is a result of the temperature dependence of mantle viscosity, which increases by many orders of magnitude as temperature decreases from the hot conditions that prevail in the planetary interior to the much cooler temperatures that prevail at the surface [e.g. @Karato1993; @hirth2003]. When the viscosity at the surface is $\sim 10^3 - 10^4$ times that of the interior, the top thermal boundary layer is so viscous that it can no longer sink under its own weight and form subduction zones, and stagnant lid convection ensues. [e.g. 
@Richter1983; @christensen1984c; @ogawa1991; @Davaille1993; @Moresi1995; @slava1995]. Laboratory experiments indicate that the viscosity ratio between the surface and mantle interior for typical terrestrial planets, including Earth, is $\sim 10^{10}-10^{20}$, far larger than the threshold for stagnant lid convection. Stagnant lid convection is thus the “natural" state for rocky planets, and additional processes are necessary for plate tectonics to operate on Earth. In order to generate plate-tectonic style mantle convection, rheological complexities capable of forming weak, localized shear zones in the high viscosity lithosphere are necessary. These weak shear zones (essentially plate boundaries) remove the strong viscous resistance to surface motion that temperature-dependent viscosity causes and thus allow subduction and plate motions to occur. We discuss plate boundary formation in terms of viscous weakening in the lithosphere because the majority of the lithosphere, including the region of peak strength in the mid-lithosphere, deforms via ductile or semibrittle/semiductile behavior; the lithosphere only “breaks" in a brittle fashion at shallow depths of $< 10-20$ km [@Berco2015_treatise]. Many mechanisms have been proposed for producing the lithospheric weakening and shear localization necessary for plate-tectonic style mantle convection [see reviews by @Tackley2000; @Berco2003; @bk2003b; @Regenauer2003; @Berco2015_treatise]; while we won’t detail each of these mechanisms, they do all share some general principles. Namely, shear-thinning non-Newtonian rheologies have been found to be successful at generating plate-like mantle convection. In a shear-thinning fluid, the strain-rate, $\dot{\varepsilon}$, is a non-linear function of the stress, $\tau$ (where $\dot{\varepsilon}$ and $\tau$ are the second invariants of the strain-rate and stress tensors, respectively): $$\eqlbl{non_newt} \dot{\varepsilon} \propto \tau^n$$ where $n$ is a constant.
As the effective viscosity, $\mu_{eff}$, is proportional to $\tau \dot{\varepsilon}^{-1}$, $$\eqlbl{non_newt_mu} \mu_{eff} \propto \tau^{(1-n)} \propto \dot{\varepsilon}^{\frac{(1-n)}{n}} .$$ With $n > 1$, viscosity decreases as stress or strain-rate increases and high stress, high strain-rate regions of the lithosphere, such as regions of compression or tension caused by drips breaking off the base of the lithosphere, are weakened (see Figure \[fig:plate\_gen\]). Thus lithospheric weakening occurs in exactly the regions necessary for facilitating subduction. Shear-thinning behavior can result from inherently non-Newtonian rheologies with $n > 1$, or more complicated rheologies that involve “memory," where a time-evolving state variable, itself a function of stress and/or strain-rate, influences viscosity. In the case of memory, an effectively shear-thinning $\dot{\varepsilon}-\tau$ relationship can be calculated by taking the state variable in steady-state (as we show for grainsize reduction in §\[sec:damage\]). Moreover, rheologies where stress decreases at high strain-rates, resulting in even stronger weakening, are possible but beyond the scope of this paper. ### Viscoplasticity {#sec:plasticity} One popular plate generation mechanism is the viscoplastic rheology, where the lithosphere “fails" and becomes mobilized when its intrinsic strength, the yield stress, is reached [@Fowler1993; @Moresi1998]. Typical implementations of the viscoplastic rheology assume that the mantle behaves like a Newtonian fluid (following the non-Newtonian flow law above with $n = 1$) until stress becomes equal to the yield stress, $\tau_y$ (see Figures \[fig:plate\_gen\]C & D).
At this point stress is independent of strain-rate, fixed at $\tau_y$, and the viscosity follows $$\mu_{eff} \propto \frac{\tau_y}{\dot{\varepsilon}}.$$ When convective stresses in the lithosphere hit the yield stress, low viscosity shear zones at convergent and divergent margins can form, and plate-like mantle convection can be obtained in two-dimensional [@Moresi1998; @richards2001; @korenaga2010], three-dimensional Cartesian [@trompert1998; @Tackley2000a; @stein2004], and three-dimensional spherical convection models [@vanHeck2008; @Foley2009]. Convective stresses lower than the yield stress do not form weak plate boundaries, and thus result in stagnant lid convection. The convective stress, $\tau_m$, has typically been found to scale as $$\eqlbl{tau_m} \tau_m \propto \mu_m \dot{\varepsilon}_m$$ where $\mu_m$ and $\dot{\varepsilon}_m$ are the effective viscosity and strain-rate in the mantle, respectively [e.g. @ONeill2007b]. The strain-rate can be estimated as $v_m/d$, where $v_m$ is convective velocity and $d$ is the thickness of the mantle. Velocity scales as [e.g. @slava1995; @turc1982] $$\eqlbl{v_m} v_m \propto Ra^{\frac{2}{3}} ,$$ where the Rayleigh number ($Ra$) is defined as $$\eqlbl{Ra} Ra = \frac{\rho g \alpha \Delta T d^3}{\kappa \mu_m}$$ and describes the convective vigor of the system ($\rho$ is density, $g$ gravity, $\alpha$ thermal expansivity, $\Delta T$ the non-adiabatic temperature drop across the mantle, and $\kappa$ the thermal diffusivity). However, despite a large Rayleigh number on the order of $10^7 - 10^8$, typical convective stresses are only $\sim 1-100$ MPa [e.g. @Tackley2000a; @Solomatov2004; @ONeill2007], approximately an order of magnitude lower than the strength of the lithosphere based on laboratory experiments [e.g. @kohlstedt1995]. Thus much work has focused on mechanisms for decreasing the lithospheric strength, usually involving water [e.g.
@Tozer1985; @Lenardic1994; @rege2001; @vanderLee2008], though some recent models have looked at mechanisms for increasing convective stresses [@Rolf2011; @Hoink2012]. In §\[sec:climate\_tectonics\] we describe how the possibility of water weakening links the operation of plate tectonics to climate. Moreover, climate also influences convective stresses, by altering $\Delta T$, providing another important link between climate and tectonics. ### Damage and grainsize reduction {#sec:damage} While viscoplasticity has shown some success at generating plate-like mantle convection, it fails to form localized strike-slip faults [@Tackley2000a; @vanHeck2008; @Foley2009], a problem with all simple non-Newtonian rheologies [@berco1993]. Furthermore, viscoplasticity is an instantaneous rheology: when stresses fall below the yield stress, weakened lithosphere instantly regains its strength. On Earth, however, dormant weak zones are known to persist over geologic timescales, and can serve as nucleation points for new subduction zones [@Toth1998; @gurnis2000]. An alternative mechanism, one that allows for dormant weak zones (i.e. memory of past deformation) and is capable of generating strike-slip faults, is grainsize reduction [@bk2003b; @br2012; @br2013; @br2014]. Grainsize reduction is commonly seen in field observations of exhumed lithospheric shear zones [@white1980; @Drury1991; @Jin1998; @Warren2006; @Skemer2010], and experiments show that viscosity is proportional to grainsize when deformation is dominated by diffusion creep or grain boundary sliding [e.g. @hirth2003]. One major problem for a grainsize reduction plate generation mechanism, though, is that grainsize reduction occurs by dynamic recrystallization, which takes place in the dislocation creep regime, while grainsize sensitive flow only occurs in the diffusion creep or grain boundary sliding regimes [e.g. @Etheridge1979; @DeBresser1998; @Karato1993].
Thus, a feedback mechanism, where deformation causes grainsize reduction, which weakens the material and leads to more deformation, is apparently not possible [e.g. @DeBresser2001]. Without such a feedback only modest weakening can occur. However, a new theory argues that when a secondary phase (e.g. pyroxene) is dispersed throughout the primary phase (e.g. olivine), a grainsize feedback is possible, because grainsize reduction can continue in the diffusion creep regime due to the combined effects of damage to the interface between phases and Zener pinning (where the secondary phase blocks grain-growth of the primary phase) [@br2012]. Using a simplified version of the theory in [@br2012] [see also @Foley2013_scaling], the grainsize reduction mechanism can be formulated as: $$\eqlbl{eqd} \frac{dA}{dt} = \frac{f_{A}}{\gamma}\Psi - h A^p$$ where $A$ is the fineness (or inverse grainsize), $\gamma$ is surface tension, $h$ is the grain-growth (or healing) rate, $p$ is a constant, $\Psi$ is the deformational work, defined as $\Psi = \varepsilon_{ij}\tau_{ij}$, where $\varepsilon_{ij}$ is the strain-rate tensor and $\tau_{ij}$ is the stress tensor, and $f_A$ is the fraction of deformational work that partitions into surface energy, thus driving grainsize reduction. The grain-growth rate is strongly temperature dependent, and is given by $$\eqlbl{eqheal} h = h_n \exp \left(\frac{-E_h}{R_g T} \right)$$ where $h_n$ is a constant, $E_h$ is the activation energy for grain-growth, $R_g$ is the universal gas constant, and $T$ is temperature. The effective viscosity is $$\eqlbl{eqv} \mu_{eff} = \mu_n \exp \left(\frac{E_v}{R_g T} \right) \left(\frac{A}{A_0} \right)^{-m} = \mu(T)\left(\frac{A}{A_0} \right)^{-m} ,$$ where $\mu_n$ and $m$ are constants, $E_v$ is the activation energy for diffusion creep, and $A_0$ is the reference fineness.
Setting $dA/dt = 0$ in the fineness evolution equation, using the viscous constitutive law $\tau_{ij} = 2 \mu_{eff} \varepsilon_{ij}$, and the definition $\varepsilon_{ij} \varepsilon_{ij} = 2 \dot{\varepsilon}^2$, the fineness can be written as a function of strain-rate $$\eqlbl{steady_state} A = \left(\frac{4 f_{A} \mu(T)\dot{\varepsilon}^2 A_0^m}{\gamma h} \right)^{\frac{1}{p+m}} .$$ Combining this steady-state fineness with the fineness-dependent viscosity law, the viscosity is also a function of strain-rate, $$\mu_{eff} = \left(\mu(T)^p \left(\frac{4 f_{A} \dot{\varepsilon}^2}{\gamma h A_0^p}\right)^{-m} \right)^{\frac{1}{p+m}}$$ and using $\tau = 2 \mu_{eff} \dot{\varepsilon}$ the effective $\tau-\dot{\varepsilon}$ relationship is $$\eqlbl{dam_const_law} \dot{\varepsilon} = \frac{1}{2} \left(\mu(T)^{-p} \left(\frac{f_{A} }{\gamma h A_0^p} \right)^m \tau^{p+m} \right)^{\frac{1}{p-m}}.$$ With typical parameters of $m=2$ and $p=4$ [@hirth2003; @br2012], the grainsize reduction mechanism results in an effectively non-Newtonian rheology with $n=3$ (Figures \[fig:plate\_gen\]C & D). Furthermore, as detailed in §\[sec:climate\_tectonics\], climate can have a significant impact on plate generation with this mechanism, because grain-growth is temperature-dependent. At high surface temperatures, and thus higher temperatures in the mid-lithosphere, faster grain-growth (i.e. higher $h$) impedes grainsize reduction. Influence of climate on tectonic regime {#sec:climate_tectonics} --------------------------------------- Both the viscoplastic and grainsize reduction mechanisms for generating plate-tectonic style mantle convection are linked to climate. Although we treat these mechanisms separately, they are not mutually exclusive. The viscoplastic rheology is most relevant to brittle deformation in the upper lithosphere, while grainsize reduction provides viscous weakening in the lower lithosphere. As a result both mechanisms can operate simultaneously, and both may be important for generating plate tectonics.
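The steady-state damage rheology derived in §\[sec:damage\] can be checked numerically. The sketch below evaluates the effective $\dot{\varepsilon}-\tau$ law with arbitrary placeholder values for $\mu(T)$, $f_A$, $\gamma$, $h$, and $A_0$ (none taken from the literature), and confirms that $m=2$, $p=4$ gives an effective stress exponent of $(p+m)/(p-m)=3$:

```python
import math

# Effective strain-rate vs. stress law from the steady-state grainsize-damage
# rheology; all parameter values are arbitrary placeholders (assumptions).
def strain_rate(tau, mu_T=1e22, f_A=1e-4, gamma=1.0, h=1e-30, A0=1.0, m=2, p=4):
    C = (f_A / (gamma * h * A0**p))**m
    return 0.5 * (mu_T**(-p) * C * tau**(p + m))**(1.0 / (p - m))

# Effective stress exponent n = d ln(strain-rate) / d ln(stress)
tau1, tau2 = 1.0e6, 2.0e6
n_eff = math.log(strain_rate(tau2) / strain_rate(tau1)) / math.log(tau2 / tau1)
print(round(n_eff, 6))  # (p + m)/(p - m) = 3 for m = 2, p = 4
```

Because the exponent depends only on $m$ and $p$, the placeholder parameters change the magnitude of the strain-rate but not the effective $n$.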
Climate can influence whether plate-like convection occurs with a viscoplastic rheology in three main ways: first, climate dictates whether water can exist in contact with ocean lithosphere at the surface, such that high pore pressure [e.g. @Hubbert1959; @Sibson1977; @Rice1992; @Sleep1992] and weak, hydrous phases [@Escartin2001; @Hilariet2007] can lubricate faults and lower the lithosphere’s yield strength; second, climate influences whether deep cracking of the lithosphere can occur, which is potentially important for hydrating the mid- to lower-lithosphere; and third, climate influences convective stresses. The first point, whether water can interact with rocks at the surface, provides only a weak influence of climate on tectonic regime, because hydrous phases can form over a wide range of surface temperatures. Geothermal heat flow can likely maintain a sub-ice ocean even at temperatures below the water freezing point [e.g. @Warren2002], and hydrous silicates are stable at temperatures up to $\approx 500-700^{\circ}$ C [e.g. @Hacker2003], meaning these phases could even form under a steam atmosphere. However, forming weak faults in the near surface environment is not sufficient for plate tectonics; weakening of the mid- and lower-lithosphere, where strength is maximum, is also necessary. The deep lithosphere is initially dry due to dehydration during mid-ocean ridge melting [@hirth1996; @evans2005], and is deeper than hydrothermal circulation can reach [@Gregory1981]. Thus, a mechanism capable of hydrating the mid-lithosphere is necessary for water weakening to be viable. One possible mechanism is the ingestion of water along deep thermal cracks [@Korenaga2007]. Deep cracking is a result of thermal stresses that arise in the cooling lithosphere. The thermal stresses are proportional to the temperature difference between the cold lithosphere and the mantle interior, so climate can directly influence hydration of the mid-lithosphere.
Higher surface temperatures could potentially lead to shallow cracks that leave the mid-lithosphere dry and strong. The surface temperature range where thermal cracking is effective has not been explored, but temperatures would likely have to be on the order of hundreds of degrees hotter than the present day Earth to impede plate tectonics. An even stronger role for climate stems from the influence of surface temperature on convective stresses. High surface temperatures decrease stresses by lowering the negative thermal buoyancy of the lithosphere (e.g. $\Delta T$ decreases, resulting in lower $\tau_m$ through the stress and velocity scalings of §\[sec:plasticity\]). If the convective stress drops below the yield stress, then stagnant lid convection ensues. [@Lenardic2008] found that increasing Earth’s present day surface temperature by $\sim 100$ K or more is sufficient to induce stagnant lid convection, even with a weak, hydrated lithosphere [see also @Weller2015]. When a grainsize reduction mechanism is considered, climate controls whether plate-like convection can occur through the influence of temperature on lithospheric grain-growth rates (Figure \[fig:damage\]). Grain-growth is faster at higher temperatures (the grain-growth rate of §\[sec:damage\] is thermally activated), and higher grain-growth rates impede the formation of weak shear zones by acting to increase grainsize (faster healing opposes the buildup of fineness in the damage evolution equation). Thus high surface temperatures, which lead to higher temperatures in the mid-lithosphere, promote stagnant lid convection [@Landuyt2009a; @Foley2012]. [@Foley2014_initiation] found that surface temperatures need to reach $\approx 500-600$ K for an Earth-like planet to enter a stagnant lid regime due to enhanced grain-growth. Increasing surface temperature even further would eventually lead to a state where the lithosphere has a low enough viscosity to “subduct" even without any of the rheological weakening mechanisms discussed in this section (i.e. mantle convection would resemble constant viscosity convection).
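A rough sense of the surface temperatures required for this low-viscosity regime follows from a simple Arrhenius viscosity law, $\mu \propto \exp(E_v/(R_g T))$. The activation energy used below is an assumed, illustrative value, not a constraint from this review:

```python
import math

# Surface-to-interior viscosity ratio for an Arrhenius viscosity law:
#   mu(T_s)/mu(T_m) = exp(E_v/R_g * (1/T_s - 1/T_m))
E_v = 300.0e3   # J/mol, assumed activation energy (illustrative)
R_g = 8.314     # J/(mol K), universal gas constant
T_m = 1600.0    # K, Earth-like mantle interior temperature

def visc_ratio(T_s):
    return math.exp(E_v / R_g * (1.0 / T_s - 1.0 / T_m))

# Surface temperature at which the ratio falls to the ~1e4 threshold
# below which the lid can founder without any weakening mechanism:
T_threshold = 1.0 / (R_g * math.log(1.0e4) / E_v + 1.0 / T_m)
print(round(T_threshold))        # ~1100 K with these assumed values
print(visc_ratio(300.0) > 1e4)   # an Earth-like surface sits far above threshold
```

The precise threshold shifts with the assumed activation energy, but it remains many hundreds of degrees above Earth's present surface temperature.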
However, for this style of “subduction" to occur, the viscosity ratio between the surface and mantle interior must be less than $\approx 10^4$ [@slava1995], which requires a surface temperature of $>1000$ K with an Earth-like mantle temperature of $\approx 1600$ K. Summarizing the results from both mechanisms, cold surface temperatures are generally favorable for plate tectonics because they promote grainsize reduction and boost convective stresses. Moreover, cold surface temperatures are unlikely to prevent water weakening because geothermal heat flow can maintain liquid water beneath an ice covered ocean. However, high temperatures, in the range of 400-600 K, can cause stagnant lid convection by dropping convective stresses, increasing grain-growth rates, and possibly suppressing thermal cracking. The influence of climate on tectonic regime is also a leading hypothesis for the lack of plate tectonics on Venus, as the Venusian surface temperature, 750 K, is easily hot enough to shut down plate tectonics based on the geodynamical models discussed in this section [@Lenardic2008; @Landuyt2009a]. Importance of mantle temperature and other factors {#sec:pt_other_factors} -------------------------------------------------- Naturally climate is not the only important factor governing whether terrestrial planets will have plate tectonics. Mantle temperature, in particular, has been shown to have a major influence on a planet’s tectonic regime. Mantle temperature modulates mantle convective stresses and lithospheric grain-growth rates, similar to the influence of surface temperature discussed in §\[sec:climate\_tectonics\], and mantle temperature controls the thickness of the oceanic crust formed at ridges, which affects the lithosphere’s buoyancy with respect to the underlying mantle.
All of these effects act to inhibit plate tectonics when interior temperatures are high, implying that planets with high internal heating rates or young planets, which are hotter because they have had less time to lose their primordial heat [e.g. @abbott1994; @Korenaga2006; @Labrosse2007; @Herzberg2010] (see also §\[sec:core\_mantle\]), will be less likely to have plate tectonics. However, there is disagreement over just how important mantle temperature is in dictating a planet’s mode of surface tectonics. Increasing mantle temperature drops the mantle viscosity, $\mu_m$. From the stress, velocity, and Rayleigh number scalings of §\[sec:plasticity\], mantle stress scales as $\mu_m^{1/3}$ (since $\tau_m \propto \mu_m v_m$, $v_m \propto Ra^{2/3}$, and $Ra \propto \mu_m^{-1}$). Thus higher mantle temperatures lead to lower convective stresses, and, if stresses drop below the yield strength, stagnant lid convection in viscoplastic models [@ONeill2007b; @Moore2013; @Stein2013]; this effect may have even caused stagnant lid convection, possibly with episodic subduction events, on the early Earth [@ONeill2007b; @Moore2013]. The influence of mantle temperature on stress can also lead to hysteresis in viscoplastic mantle convection models [@Crowley2012; @Weller2012; @Weller2015]. Stagnant lid convection results in higher interior temperatures, and thus lower stresses, than mobile lid convection, due to low heat flow across the thick, immobile lithosphere. It is therefore harder to initiate plate tectonics starting from a stagnant lid initial condition than it is to sustain plate tectonics on a planet where it is already in operation. Such hysteresis would also mean that a planet that loses plate tectonics will be unlikely to restart it at a later time, even before the couplings between climate and the magnetic field are taken into account. High mantle temperatures also play a role in plate generation via grainsize reduction, because they result in a warmer mid-lithosphere, and thus higher grain-growth rates.
However, the influence of mantle temperature on lithospheric grain-growth rate is weaker than that of surface temperature, so elevated mantle temperatures alone are not capable of shutting down plate tectonics [@Foley2014_initiation], in contrast to viscoplasticity. Furthermore, the weak role of mantle temperature in plate generation means that the hysteresis loops seen in viscoplastic models may be less prominent with a grainsize reduction mechanism, though future study is needed to confirm this. Mantle temperature also determines the thickness of oceanic crust generated at spreading centers, with hotter temperatures generally leading to a thicker crust [e.g. @White1989]. As crust is less dense than the underlying mantle, cooling of the lithospheric mantle is necessary for the lithosphere as a whole to become convectively unstable [e.g. @Oxburgh1977]. A very thick crust then requires a long cooling time, such that a thick lithospheric mantle root can form, in order to create a negatively buoyant lithospheric column. Forming a thick lithospheric mantle root is problematic, especially under high mantle temperature conditions, because the lithospheric mantle can delaminate before the lithosphere as a whole reaches convective instability, preventing subduction and plate tectonics [e.g. @davies1992b]. The ability of crustal buoyancy to preclude plate tectonics on both the early Earth [@davies1992b; @Vlaar1985; @Vanthienen2004], and on exoplanets [@Kite2009], has been discussed by many authors. However, the transition from basalt to compositionally dense eclogite, which occurs at shallow depths of $\approx 50$ km on Earth [e.g. @Hacker1996], allows episodic subduction events to occur despite a thick, buoyant crust in models of the early Earth [@Vanthienen2004b] and super-Earth exoplanets [@ORourke2012]. 
Moreover, even stable, modern day plate tectonics can potentially still operate because the same melting process that creates the crust also dehydrates and stiffens the sub-crustal mantle; this stiff lithospheric mantle then resists delamination and allows thick, negatively buoyant lithosphere to form through conductive cooling [@Korenaga2006]. It is also important to point out that other factors, in particular composition and size, can potentially have a major impact on whether plate tectonics can take place on a rocky planet. However, the influence of both size and bulk composition on a planet’s tectonic regime is not well understood. There is still significant disagreement over whether increasing planet size makes plate tectonics more or less likely, owing mainly to uncertainties in lithospheric rheology, the pressure dependence of mantle viscosity and thermal conductivity, and the expected radiogenic heating budget of exoplanets (see also §\[sec:sum\_exoplanets\]). Likewise, little is known about the key material properties for planetary mantles with a significantly different composition than Earth. Understanding how these factors influence a planet’s propensity for plate tectonics and overall evolution is a vital area of future research. Climate regulation and the long-term carbon cycle {#sec:climate_reg_cc} ================================================= A primary factor determining whether a planet’s climate will be conducive to plate tectonics is the degree of greenhouse warming. Venus has $\approx 90$ bar of CO$_2$ in its atmosphere resulting in a surface temperature of $\approx 750$ K that is unfavorable for plate tectonics, while Earth only has $\approx 4 \times 10^{-4}$ bar of atmospheric CO$_2$, and thus has a temperate climate that allows for plate tectonics. However, on Earth there is enough CO$_2$, anywhere from $\approx 60-200$ bar [e.g.
@Sleep2001b], locked in carbonate rocks at the surface and in the mantle to cause very hot surface temperatures of $\approx 400-500$ K [@Kasting1986]. Thus a key for maintaining plate tectonics on a planet is the ability to prevent large quantities of CO$_2$ from building up in the atmosphere. On planets possessing liquid water, like Earth, this can be accomplished by silicate weathering at the surface and on the seafloor. Crucially, the weathering rate is sensitive to climate, with higher temperatures leading to larger weathering rates (and thus higher CO$_2$ drawdown rates), meaning that weathering acts as a negative feedback mechanism that works to maintain temperate climate conditions [@walker1981; @berner1983; @TajikaMatsui1990; @Brady1991; @TajikaMatsui1992; @Berner1997; @Berner2004]. Planets that lack liquid water, like Venus, have no mechanism for regulating atmospheric CO$_2$. However, as we show in this section, simply possessing water does not guarantee that weathering can maintain climates within a range favorable for plate tectonics. A sufficient supply of fresh, weatherable rock at the surface is also needed. We further argue that plate tectonics enhances the supply of fresh rock to the surface, opening the possibility that plate tectonics and the long-term carbon cycle act as a self-sustaining feedback mechanism in some cases. Modeling the long-term carbon cycle {#sec:weather_feedback} ----------------------------------- Figure \[fig:c\_cycle\_model\] gives a simple illustration of the global carbon cycle where four main reservoirs of carbon are considered (see [@Berner2004] and [@Ridgwell2005] for detailed reviews): the seafloor, the mantle, the atmosphere, and the ocean (the atmosphere and ocean reservoirs are often combined because equilibration between these two reservoirs is essentially instantaneous on geologic timescales [e.g. @Sleep2001b]). 
Carbon dioxide in the atmosphere is consumed by weathering reactions with silicate rocks, producing calcium, magnesium, and bicarbonate ions that flow from groundwater into rivers, eventually draining into the ocean (see [@Gaillardet1999] for a global compilation of CO$_2$ consumption via chemical weathering). Once in the ocean, these ions recombine to precipitate carbonates, leading to an overall net transfer of CO$_2$ from the atmosphere to carbonate rocks on the seafloor. Hydrothermal alteration of basalt can also act as a sink for CO$_2$ dissolved in the oceans, and given rapid equilibration between the atmosphere and ocean, a sink for atmospheric CO$_2$ as well [@Staudigel1989; @Alt1999; @Gillis2011]. Carbonates on the seafloor, both in the form of sediments and altered basalt, are subducted into the mantle at trenches. Here, a fraction of the carbon devolatilizes and returns to the atmosphere through arc volcanoes, with the remaining carbon being recycled to the deep mantle. To close the cycle, mantle carbon is degassed through mid-ocean ridge and plume volcanism back to the atmosphere and ocean reservoirs. The balance between weathering and volcanic outgassing dictates atmospheric CO$_2$ content. This balance is formulated as $$\eqlbl{balance} F_{weather} + F_{sfw} = F_{arc} + F_{ridge},$$ where $F_{weather}$ is the silicate weathering flux on land, $F_{sfw}$ is the seafloor weathering flux, $F_{arc}$ is the flux of CO$_2$ degassing from volcanic arcs, and $F_{ridge}$ is the degassing flux at ridges. Balance between silicate weathering and degassing typically occurs rapidly, on a timescale of $ \sim 1$ Myr or less [e.g. @Sundquist1991; @Berner1997; @Driscoll2013; @Foley2015_cc], so assuming that weathering always balances degassing is reasonable when studying long-term climate evolution. 
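The rapid approach to weathering-degassing balance can be illustrated with a one-box sketch of the combined atmosphere-ocean carbon reservoir. The reservoir size and the linear weathering feedback below are illustrative assumptions, not values from the studies cited above:

```python
# One-box sketch: relaxation of the atmosphere-ocean carbon reservoir toward
# weathering-degassing balance. All numbers are illustrative assumptions.
R_eq = 3.0e18    # mol, assumed equilibrium atmosphere+ocean carbon reservoir
F_star = 6.0e12  # mol/yr, weathering (= degassing) flux at equilibrium

def step(R, dt):
    F_weather = F_star * (R / R_eq)   # assumed linear weathering feedback
    F_degas = F_star                  # degassing held fixed
    return R + (F_degas - F_weather) * dt

R, t, dt = 2.0 * R_eq, 0.0, 1.0e3     # start with a doubled reservoir; dt in yr
while R > 1.1 * R_eq:                 # integrate until 90% of the excess is gone
    R = step(R, dt)
    t += dt
print(f"{t/1e6:.2f} Myr to relax")    # order 1 Myr
```

The e-folding time is just the reservoir size divided by the throughput flux, $\sim 0.5$ Myr here, so excursions decay on the $\sim 1$ Myr timescale quoted above.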
The arc degassing flux is typically written as $$\eqlbl{arc} F_{arc} = \frac{f v_p L R_p}{A_p} ,$$ where $f$ is the fraction of subducted carbon that degasses, $v_p$ is the plate speed, $L$ is the length of subduction zone trenches, $R_p$ is the amount of carbon on the seafloor, and $A_p$ is the area of the seafloor. The ridge degassing flux is given by $$F_{ridge} = \frac{f_d v_p L d_{melt} R_{man}}{V_{man}} ,$$ where $f_d$ is the fraction of upwelling mantle that degasses, $L$ is the length of ridges (assumed equal to the length of trenches), $d_{melt}$ is the depth where mid-ocean ridge melting begins, $R_{man}$ is the amount of carbon in the mantle, and $V_{man}$ is the volume of the mantle [e.g. @TajikaMatsui1990; @TajikaMatsui1992; @Sleep2001b; @Driscoll2013; @Foley2015_cc]. Silicate weathering can exert a negative feedback on climate because mineral dissolution rates, and hence CO$_2$ drawdown rates, increase with increasing temperature and precipitation [e.g. @walker1981; @berner1983] (see [@Kump2000] and [@Brantley2014] for reviews on mineral dissolution kinetics and atmospheric CO$_2$). A similar link between climate and weathering rate can be seen in many field studies [e.g. @Velbel1993; @White_Blum1995; @Dessert2003; @Gislason2009; @White2014; @Viers2014]. However, many other locations show no link between weathering and climate; instead the weathering flux is linearly related to the physical erosion rate [@Stallard1983; @Edmond1995; @Gaillardet1999; @Oliva1999; @Millot2002; @Dupre2003; @Riebe2004; @West2005; @Viers2014]. These conflicting field observations are likely due to the distinction between “kinetically limited" (or “reaction limited") weathering and “supply limited" weathering. When weathering is kinetically limited, the kinetics of mineral dissolution controls the weathering rate. However, when weathering is supply limited, the supply of fresh rock brought to the weathering zone by erosion controls the weathering rate. 
Supply limited weathering occurs when the reaction between silicate minerals and CO$_2$ runs to completion in the regolith, requiring physical erosion to expose fresh bedrock for weathering to continue. The supply limited weathering flux ($F_{w_s}$) can be written as [@Riebe2004; @West2005; @Mills2011; @Foley2015_cc] $$F_{w_s} = \frac{A_{land} E \chi \rho_{cc}}{\bar{m}} ,$$ where $A_{land}$ is the surface area of all exposed land, $E$ is the physical erosion rate, $\chi$ is the fraction of reactable elements in the crust, $\rho_{cc}$ is the density of the crust, and $\bar{m}$ is the molar mass of reactable elements. The kinetically limited weathering flux, $F_{w_k}$, is sensitive to climate and can be written as [e.g. @walker1981; @berner1983; @TajikaMatsui1990; @TajikaMatsui1992; @Berner1994; @Sleep2001b; @Berner2004; @Driscoll2013] $$\eqlbl{f_weather} F_{w_k} = F_w^* \exp{\left [ \frac{E_a}{R_g} \left(\frac{1}{T^*} - \frac{1}{T} \right) \right ]} \left(\frac{P_{CO_2}}{P^*_{CO_2}}\right)^{\beta} \left(\frac{R}{R^*} \right)^{\alpha} \left(\frac{f_{land}}{f_{land}^*} \right) ,$$ where $F^*_w$ is the present day rate of atmospheric CO$_2$ drawdown by silicate weathering ($F^*_w \approx 6 \times 10^{12}$ mol yr$^{-1}$, or half the estimate of [@Gaillardet1999] because half of the CO$_2$ removed by weathering is re-released to the atmosphere when carbonates form [@Berner2004]), $E_a$ is the activation energy for mineral dissolution, $R_g$ is the universal gas constant, $T$ is temperature, $P_{CO_2}$ is the partial pressure of atmospheric CO$_2$, $R$ is the runoff, $f_{land}$ is the land fraction (subaerial land area divided by Earth’s surface area), and $\beta$ and $\alpha$ are constants. Stars represent present day values. The exponential term in describes the temperature sensitivity of mineral dissolution rates, where typical values of $E_a$ range from $\approx 40-50$ kJ mol$^{-1}$ [e.g. @Brady1991]. 
The $P_{CO_2}$ term describes the direct dependence of silicate weathering rates on atmospheric CO$_2$ concentration, which results from the role of pH in mineral dissolution. Specifically, more acidic pH values cause faster dissolution rates. However, when dissolution is caused by organic acids produced by biology (mainly plants on the modern Earth), soil pH is fixed, and the direct dependence of weathering rate on $P_{CO_2}$ is weak (i.e. $\beta$ is approximately 0.1 or less [@Volk1987; @Berner1994; @Sleep2001b]). Without biologically produced acids, carbonic acid formed when atmospheric CO$_2$ dissolves in rainwater sets the soil pH, making the direct dependence of weathering rate on $P_{CO_2}$ stronger ($\beta \approx 0.5$ [@Berner1992]). The runoff term adds an additional climate feedback, as precipitation rates increase with temperature ($\alpha \approx 0.6-0.8$ [@Berner1994; @Bluth1994; @West2005]; see [@Berner1994] for a detailed discussion of how silicate weathering scales with runoff). Finally, the weathering rate scales with land fraction, because decreasing the total area of land undergoing weathering lowers the total weathering flux. The supply limited and kinetically limited weathering fluxes can be combined into a total weathering flux following [@Gabet2009], [@Hilley2010], and [@West2012] (see [@Foley2015_cc] for a full derivation) as $$F_{weather} = F_{w_s} \left [ 1 - \exp{\left (-\frac{F_{w_k}}{F_{w_s}} \right)} \right ].$$ When $F_{w_k}$ is below the supply limit, $F_{w_s}$, the overall weathering rate is approximately equal to the kinetically limited weathering rate, i.e. $F_{weather} \approx F_{w_k}$. However, when $F_{w_k}$ reaches or exceeds the supply limit, the overall weathering rate is fixed at $F_{w_s}$; i.e. $F_{weather}$ can no longer increase with increasing surface temperature or atmospheric CO$_2$ content once the supply limit to weathering has been reached. 
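The saturation behavior of this combined flux is easy to verify numerically: well below the supply limit the total tracks $F_{w_k}$, while far above it the total pins at $F_{w_s}$. A minimal sketch, with the supply limit set to an assumed round value:

```python
import math

def total_weathering(F_wk, F_ws):
    """Combined flux F_weather = F_ws * (1 - exp(-F_wk / F_ws))."""
    return F_ws * (1.0 - math.exp(-F_wk / F_ws))

F_ws = 1.0e13  # assumed supply limit, mol/yr

low = total_weathering(1.0e12, F_ws)   # kinetic rate well below the limit
high = total_weathering(1.0e14, F_ws)  # kinetic rate far above the limit
# low stays within ~5% of the kinetic rate; high is pinned at ~F_ws.
```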
The present day Earth consists of both regions undergoing supply limited weathering and those undergoing kinetically limited weathering. However, kinetically limited weathering appears to be the dominant component of the global silicate weathering flux [@West2005]. Finally, CO$_2$ drawdown also occurs via seafloor weathering on mid-ocean ridge flanks [@Alt1999]. Seafloor weathering is thought to be a smaller present day CO$_2$ sink than weathering on land, and whether it exerts a strong feedback on climate is debated [@Caldeira1995; @Brady1997; @Berner2004; @Coogan2013; @Coogan2015]. Many studies have assumed a weak dependence on $P_{CO_2}$, based on the experiments of [@Brady1997], and formulate the seafloor weathering flux as [e.g. @Sleep2001b; @Mills2014; @Foley2015_cc] $$\eqlbl{sfw} F_{sfw} = F_{sfw}^* \left(\frac{v_p}{v_p^*}\right) \left(\frac{P_{CO_2}}{P_{CO_2}^*} \right)^{\xi} ,$$ where the present day seafloor weathering flux $F^*_{sfw} \approx 1.75 \times 10^{12}$ mol yr$^{-1}$ [@Mills2014] and $\xi \approx 0.25$. However, both a stronger $P_{CO_2}$ dependence or a direct temperature feedback are possible. Seafloor weathering can also become “supply limited," because only a finite amount of CO$_2$ can be stored in the ocean crust. [@Sleep2001a] and [@Sleep2014] estimate that only the top $\approx 500$ m of ocean crust can be completely reacted with CO$_2$, meaning no more than $\approx 4 \times 10^{21}$ mol (or $\approx 30$ bar), $\approx$ 1/10-1/2 Earth’s total CO$_2$ budget, can be locked in the seafloor at any one time. The silicate weathering climate feedback and plate tectonics ------------------------------------------------------------ Kinetically limited weathering is able to prevent massive quantities of CO$_2$ from building up in the atmosphere, because the weathering rate increases with increasing atmospheric CO$_2$ content. 
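With $\xi \approx 0.25$, the seafloor weathering flux responds only weakly to atmospheric CO$_2$: a 16-fold increase in $P_{CO_2}$ merely doubles the flux. A short sketch using the values quoted above (the reference plate speed and $P^*_{CO_2}$ are assumed placeholders):

```python
def seafloor_weathering(v_p, P_CO2, F_star=1.75e12,
                        v_p_star=0.05, P_star=3.3e-4, xi=0.25):
    """F_sfw = F* (v_p / v_p*) (P_CO2 / P*)^xi, in mol/yr."""
    return F_star * (v_p / v_p_star) * (P_CO2 / P_star) ** xi

F_ref = seafloor_weathering(0.05, 3.3e-4)        # recovers F_star
F_16x = seafloor_weathering(0.05, 16 * 3.3e-4)   # 16x CO2 -> only 2x flux
```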
Considering a balance between a given degassing flux, $F_{vol} = F_{arc} + F_{ridge}$, and weathering, the atmospheric CO$_2$ content as a function of degassing rate can be calculated (Figure \[fig:fvol\]). When the degassing rate is increased, only a small increase in atmospheric CO$_2$ concentration is needed to bring weathering back into balance with degassing, because the weathering rate is a strong function of $P_{CO_2}$. The stronger the dependence of weathering on atmospheric CO$_2$, the less sensitive $P_{CO_2}$ is to variations in the degassing rate. It is this ability to balance the volcanic outgassing flux with minimal changes in $P_{CO_2}$ that allows weathering to prevent extremely hot climates, and thus maintain conditions favorable for plate tectonics. The negative feedback between climate and weathering is also important for habitability, because it acts to stabilize climate in response to variations in solar luminosity. Stars increase in luminosity as they age; for example, the sun’s luminosity was 70% of its present day level during the Archean [@Gough1981]. The weathering feedback acts to partially counteract this change in luminosity by allowing higher CO$_2$ levels when luminosity is low and lower CO$_2$ levels when luminosity is high. However, this topic has been extensively studied [e.g. @walker1981; @Franck1999; @Sleep2001b; @Abbot2012], and going into more detail is beyond our scope. On the other hand, when weathering becomes supply limited, it can no longer increase with atmospheric CO$_2$ level and is therefore unable to balance the degassing flux (globally supply limited weathering requires that $F_{vol} \geq F_{w_s}$, otherwise weathering would not be supply limited). As a result, atmospheric CO$_2$ accumulation from volcanic outgassing continues unabated until the mantle and plate reservoirs are depleted in carbon, and extremely hot climates that are unfavorable for plate tectonics prevail (Figure \[fig:supply\_lim\]). 
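The insensitivity of $P_{CO_2}$ to the degassing rate can be illustrated by solving the balance $F_{w_k}(P_{CO_2}) = F_{vol}$ numerically. The sketch below uses a toy logarithmic greenhouse law, $T = T^* + \gamma \ln(P/P^*)$, with an assumed sensitivity $\gamma$; it is not the climate model used to produce Figure \[fig:fvol\]. Even so, it shows the expected behavior: with the temperature feedback included, doubling the degassing rate raises $P_{CO_2}$ by less than the factor of four implied by the $P^{\beta}$ term alone.

```python
import math

def weathering(P, T_star=285.0, P_star=3.3e-4, F_star=6.0e12,
               E_a=42.0e3, R_g=8.314, beta=0.5, gamma=3.0):
    """Kinetic weathering flux with a toy greenhouse law
    T = T_star + gamma * ln(P / P_star); gamma (in K per e-folding
    of CO2) is an assumed sensitivity, not a fitted climate model."""
    T = T_star + gamma * math.log(P / P_star)
    arrhenius = math.exp((E_a / R_g) * (1.0 / T_star - 1.0 / T))
    return F_star * arrhenius * (P / P_star) ** beta

def co2_balance(F_vol, lo=1e-8, hi=10.0, tol=1e-12):
    """Bisect for the CO2 pressure (bar) where weathering = degassing;
    weathering(P) is monotonically increasing in P, so bisection works."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if weathering(mid) < F_vol:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

P1 = co2_balance(6.0e12)   # balances present day degassing near P_star
P2 = co2_balance(1.2e13)   # doubled degassing: P rises by less than 4x
```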
Small land areas or low erosion rates lead to supply limited weathering, because both factors limit the supply of fresh rock to the surface, and hence lower $F_{w_s}$ [see @Foley2015_cc for details]. Alternatively, factors that increase the degassing rate can also drive a planet into the supply limited weathering regime, even with a large land area or high erosion rates. Planets with large total CO$_2$ inventories are more susceptible to supply limited weathering because degassing rates are higher. Another important factor is the fraction of subducted carbon that reaches the deep mantle, instead of devolatilizing and returning to the atmosphere at arcs ($f$ in the arc degassing flux equation above). When more carbon can be subducted and stored in the mantle, $F_{arc}$ is lower and kinetically limited weathering is easier to maintain (Figure \[fig:supply\_lim\]D). The fraction of subducted carbon that degasses at arcs is not well constrained, and the physical and chemical processes controlling this number are poorly understood [e.g. @Kerrick2001; @Dasgupta2010; @Ague2014; @Kelemen2015]. Moreover, $f$ is likely a function of mantle temperature, as more slab CO$_2$ will devolatilize during subduction into a hotter mantle [e.g. @Dasgupta2010]. Thus planets with hot mantles, as expected for young planets or those with high radiogenic heating budgets (see §\[sec:core\_mantle\]), may be more susceptible to supply limited weathering. Better constraints on carbon subduction and devolatilization are clearly needed for understanding global climate feedbacks. Figure \[fig:supply\_lim\] assumes that seafloor weathering follows the $P_{CO_2}$-dependent formulation above, as in [@Foley2015_cc], which provides only a modest climate feedback. However, including a stronger climate feedback does not prevent extremely hot climates from forming when weathering on land is supply limited, because seafloor weathering will also become supply limited when all of the accessible basalt is completely altered [@Sleep2001a; @Sleep2014]. 
With the ocean crust unable to take in any more carbon, a CO$_2$ rich atmosphere still forms [@Foley2015_cc]. Furthermore, the high surface temperatures predicted for the supply limited weathering regime could potentially lead to rapid water loss to space or even trigger a runaway greenhouse. Such scenarios are not explicitly modeled here, but would reinforce the fact that supply limited weathering causes a climate state that is unfavorable for plate tectonics; losing water would prevent volatile weakening of the lithosphere, and a runaway greenhouse climate results in temperatures even higher than those shown in Figure \[fig:supply\_lim\]. Extent of coupling between climate and surface tectonics {#sec:pt_cc} -------------------------------------------------------- Plate tectonics helps to sustain kinetically limited weathering, and therefore prevent extremely hot climates from forming, by acting to maintain high erosion rates and large subaerial land areas through orogeny and volcanism. Erosion rates are highest in rapidly uplifting, tectonically active areas [e.g. @Portenga2011], and orogenic processes are the primary cause of such uplift. Without tectonic uplift, erosion rates would decay to very low values because the surface can only erode as quickly as uplift creates topography. Orogeny results from plate tectonic processes, such as continent-continent collisions, island arc formation, and accretion of arcs to cratons, so plate tectonics is essential for keeping erosion rates on Earth high. Tectonic uplift and physical erosion have long been thought to be important for weathering and climate [e.g. @Raymo1992; @Maher2014], and the transition between kinetically limited and supply limited weathering provides an exciting new mechanism for explaining this influence [e.g. @Kump1997; @Froelich2014]. Volcanism also creates topography and supplies fresh rock to the surface, so the fact that plate tectonics causes widespread subaerial volcanism (e.g. 
through arc, plume, and other hot spot melting), is important for sustaining high erosion rates as well. Perhaps the more important role for volcanism, however, is in creating exposed land, as large land areas also increase the supply of weatherable rock. In particular, continental crust, which makes up the large majority of Earth’s exposed land, is thought to be predominantly created by subduction zone volcanism [e.g. @Rudnick1995; @Cawood2013], a process that is unique to plate tectonics. Though hotspot or plume volcanism, the dominant mode of volcanism for a stagnant lid planet, can still create land and continental crust [e.g. @Stein1996; @Smithies2005], plate tectonics clearly enhances subaerial land. In fact, on a planet with a large area of exposed land, like Earth, the high erosion rates provided by plate tectonics may not be necessary for maintaining kinetically limited weathering. On Earth, typical erosion rates for flat-lying, tectonically inactive regions are on the order of 0.01 mm yr$^{-1}$ [@Portenga2011; @Willenbring2013], which gives a supply-limited weathering flux on the order of $10^{13}$ mol yr$^{-1}$ when combined with Earth’s large land area of $\approx 1.5 \times 10^{14}$ m$^2$. However, as half of the CO$_2$ removed by silicate weathering is re-released to the atmosphere when carbonates form [e.g. @berner1983], the net rate of CO$_2$ drawdown is $\sim 5 \times 10^{12}$ mol yr$^{-1}$. The present day degassing flux is $\approx (6-10) \times 10^{12}$ mol yr$^{-1}$ [e.g. @Marty1998; @Sleep2001b; @Burton2013], which would be significantly lower if Earth were in a stagnant lid regime. [@Marty1998] estimate $\approx 3 \times 10^{12}$ mol yr$^{-1}$ of CO$_2$ degassing from plumes, which is a reasonable approximation for Earth’s degassing flux were it in a stagnant lid regime. As the stagnant lid degassing flux is lower than the supply-limited weathering flux, kinetically limited weathering is possible on a hypothetically stagnant lid Earth. 
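The back-of-the-envelope estimate above can be reproduced from the supply limited flux formula. The crustal composition parameters below ($\chi$ and $\bar{m}$) are assumed round numbers chosen only to land in the right order of magnitude:

```python
# Back-of-the-envelope supply limited flux for a tectonically quiet Earth.
A_land = 1.5e14   # exposed land area, m^2
E = 1.0e-5        # erosion rate, m/yr (i.e. 0.01 mm/yr)
chi = 0.1         # fraction of reactable elements in the crust (assumed)
rho_cc = 2700.0   # crustal density, kg/m^3
m_bar = 0.05      # molar mass of reactable elements, kg/mol (assumed)

F_ws = A_land * E * chi * rho_cc / m_bar  # ~1e13 mol/yr
F_net = 0.5 * F_ws  # half is re-released when carbonates form
```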
Therefore plate tectonics and the long-term carbon cycle probably only act as a self-sustaining feedback on planets with small land areas or large planetary CO$_2$ inventories, where high erosion rates are needed to prevent supply limited weathering (i.e. planets that would lie near the boundary between kinetically limited and supply limited weathering in Figure \[fig:supply\_lim\]). Unfortunately a more quantitative estimate is not possible with our current level of understanding of how tectonics modulates erosion and weathering. Nevertheless, given that volatile acquisition during terrestrial planet formation can vary significantly [e.g. @Raymond2004], volatile rich planets where plate tectonics and the carbon cycle are so tightly coupled may be common. Furthermore, even if a planet’s exposed land area is large enough to sustain kinetically limited weathering at low erosion rates, plate tectonics may still be important. Plate tectonics may be responsible for creating most of the exposed land through continental crust formation, without which much higher erosion rates would be needed for kinetically limited weathering. Earth may even be an example of such a planet. Another aspect of the coupling between plate tectonics and climate is that plate tectonics leads to long-lived CO$_2$ degassing by recycling carbon into the mantle at subduction zones. This process is important for habitability, because when CO$_2$ degassing rates are low, or cease entirely, snowball climates can form [@Kadoya2014]. However, snowball climates are unlikely to also shut down plate tectonics, so the role of plate tectonics in sustaining CO$_2$ degassing probably does not represent a self-sustaining feedback between tectonics and climate. The different evolutionary scenarios that result from coupling between climate and surface tectonics are described in §\[sec:summary\]. 
In particular planets at different orbital distances can undergo different evolutions if one planet lies inward of the habitable zone’s inner edge and thus lacks silicate weathering. Likewise different initial climate conditions can cause divergent evolutions if an initially hot climate prevents plate tectonics and kinetically limited weathering, or even weathering at all, from transpiring. Generation of the magnetic field and its role in atmospheric escape {#sec:whole\_planet} =================================================================== We have described the interactions between the surface environment (atmosphere plus ocean) and the mantle in sections \[sec:plate\_generation\] & \[sec:climate\_reg\_cc\]. In this section we demonstrate that additional interactions exist between the mantle and core, and between the geomagnetic field (which is generated by the core dynamo) and the atmosphere. The mantle controls the rate at which the core cools, thereby playing a crucial role in maintaining the energy flow necessary to drive convection and dynamo action in the core. The geomagnetic field provides a shield that holds the solar wind far above the surface (presently at about 9 Earth-radii) so that most high energy particles are diverted and prevented from disrupting the near surface environment. As a result, magnetic fields may limit the atmospheric escape rate under certain conditions. The magnetic field’s influence on escape rate can then have an important control on long-term climate evolution, opening the possibility for an indirect influence of the core dynamo on mantle convection, in addition to the direct role mantle convection plays in driving the dynamo. Core dynamo {#sec:dynamo} ----------- Earth has maintained an internally generated planetary magnetic field through convective dynamo action in its core over much of its history [@tarduno2015]. 
The presence of a long-lived magnetic field is another unique feature of our planet, as only Mercury and Ganymede also have active magnetic fields today among the terrestrial planets and moons of the solar system [@schubert2011]. Internally generated magnetic fields are created by convective dynamo action in a large rotating volume of electrically conductive liquid. In the terrestrial planets this typically occurs in an iron core. The most commonly cited process for driving dynamo action, and thus the focus of this section, is thermal or compositional convection. However, other driving mechanisms, such as gravitational tides [e.g. @cebron2014] or precipitation of Mg initially dissolved in the liquid iron [@orourke2016] are possible. Thermal convection occurs when the core heat flow, $Q_{cmb}$, exceeds the heat conducted adiabatically through the core. Recent estimates place the core’s conductive heat flow at 13-15 TW [@Driscoll2014; @pozzo2014]. A high core heat flow is also inferred from revisions to the dynamics of mantle plumes, which are thought to be generated at the core-mantle boundary (CMB) and be indicative of the CMB heat flow [@bunge2005; @zhong2006]. Even if the total core heat flow is less than that needed to keep the core adiabatically well mixed, compositional buoyancy may still be able to drive convection. Compositional convection can be driven in a number of ways: it usually involves either a phase change in the liquid that produces a density gradient, or the exsolution of an element due to a change in its solubility. For example, as Earth’s core cools the inner core crystallizes from below, releasing buoyant light element rich fluid into the surrounding iron-rich fluid and driving compositional convection. In this example core heat flow is still important for driving compositional convection, because core cooling is necessary for inner core growth. 
The heat transferred across the core-mantle boundary, $Q_{cmb}$, is controlled by the temperature gradient in the viscous mantle above: $$Q_{cmb}=A_{cmb}k_{LM}\frac{\Delta T_{LM}}{\delta_{LM}} \label{q_cmb}$$ where $A_{cmb}$ is CMB area, and $k_{LM}$, $\Delta T_{LM}$, and $\delta_{LM}$ are the thermal conductivity, temperature drop, and thickness of the lower mantle thermal boundary layer. The temperature drop, $\Delta T_{LM}=T_{cmb}-T_{m}$, measures the thermal disequilibrium between the mantle and core. If the mantle cools efficiently then $\Delta T_{LM}$ is large, resulting in a high $Q_{cmb}$. The thermal boundary layer thickness, $\delta_{LM}$, can be derived by assuming the boundary layer Rayleigh number (i.e. $\rho g \alpha \Delta T_{LM} \delta_{LM}^3/(\kappa \mu_{LM})$, where $\mu_{LM}$ is the effective viscosity of the lower mantle boundary layer) is at the critical value for convection. Compositional differences between the lower mantle and the bulk mantle [e.g. @sleep1988] can influence the thickness of $\delta_{LM}$; this influence can be incorporated by modifying $\mu_{LM}$ [@Driscoll2014] or the critical Rayleigh number [@stevenson1983]. Calculating $\delta_{LM}$ from the boundary layer Rayleigh number implies that a hotter mantle produces a thinner boundary layer, increasing $Q_{cmb}$. Therefore, the ratio $\Delta T_{LM}/\delta_{LM}$ in (\[q\_cmb\]) implies that the core and mantle will evolve towards thermal equilibrium faster in a hotter mantle and that efficient mantle cooling leads to efficient core cooling. However, cooling the core too fast can be detrimental to dynamo action in two ways: (1) rapid cooling can lead to complete solidification of the core, preventing fluid motion and dynamo action, and (2) rapid cooling can bring the core into thermal equilibrium with the mantle, which will eventually decrease the cooling rate below the threshold for driving convection. 
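Equations of this kind are straightforward to evaluate. The sketch below computes $\delta_{LM}$ from the critical boundary-layer Rayleigh number and then $Q_{cmb}$; every material property is an assumed round number, and the point is only the scaling $Q_{cmb} \propto \Delta T_{LM}^{4/3}$ implied by combining the two relations.

```python
def boundary_layer_thickness(dT_LM, Ra_crit=2000.0, kappa=1.0e-6,
                             mu_LM=1.0e21, rho=5000.0, g=10.0, alpha=3.0e-5):
    """delta_LM from the critical boundary-layer Rayleigh number,
    Ra_crit = rho * g * alpha * dT_LM * delta^3 / (kappa * mu_LM)."""
    return (Ra_crit * kappa * mu_LM / (rho * g * alpha * dT_LM)) ** (1.0 / 3.0)

def cmb_heat_flow(dT_LM, A_cmb=1.5e14, k_LM=5.0):
    """Q_cmb = A_cmb * k_LM * dT_LM / delta_LM, in W."""
    return A_cmb * k_LM * dT_LM / boundary_layer_thickness(dT_LM)

Q_hot = cmb_heat_flow(1000.0)   # a few TW for these assumed properties
Q_cool = cmb_heat_flow(500.0)
# Halving dT_LM thickens the boundary layer, so Q_cmb drops by 2^(4/3), not 2.
```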
The latter is the eventual fate of every rocky body, but the longer a planet can avoid thermal equilibrium and maintain moderate cooling the longer it will generate a magnetic field. Coupling between core dynamo and tectonic mode {#sec:core\_mantle} ---------------------------------------------- As the core cooling rate is determined by the mantle, the surface tectonic mode, which determines the mantle cooling rate, dramatically influences the dynamo. A mobile lid can accommodate a larger surface heat flow by exposing the mantle’s top thermal boundary layer to the surface, while a stagnant lid inhibits cooling by insulating the mantle with a thick conductive layer. To model the cooling and convective power available to drive dynamo action over time, the energy balance of the core and mantle are solved simultaneously. Volume averaged temperature evolution can be derived using the secular cooling equation $Q_{i}=-c_i M_i \dot{T}_i$, where $c$ is specific heat and $i$ refers to either mantle ($i = m$) or core ($i = c$) in the mantle and core energy balances [e.g. @Driscoll2014]. Solving for $\dot{T}_m$ and $\dot{T}_c$ in terms of sources and sinks gives the mantle and core thermal evolution equations, $$\dot{T}_m=\frac{\left( Q_{cmb} + Q_{rad} -Q_{conv}-Q_{melt} \right)}{M_m c_m } \label{dot_T_m}$$ $$\dot{T}_c= -\frac{ (Q_{cmb}-Q_{rad,c})} {M_c c_c - A_{ic} \rho_{ic} \epsilon_c \frac{d R_{ic}}{d T_{cmb}} (L_{Fe}+E_G) } \label{dot_T_c}$$ where $Q_{conv}$ is heat conducted through the lithospheric thermal boundary layer by mantle convection (in W), $Q_{melt}$ is heat loss by mantle melt eruption, and $Q_{rad}$ and $Q_{rad,c}$ are radiogenic heat production in the mantle and core, respectively. 
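A minimal forward-Euler integration of these two balances illustrates the coupled cooling. The closures for $Q_{conv}$ and $Q_{cmb}$ and all constants below are assumed toy values, the inner core terms and $Q_{melt}$ are dropped, and the result is a caricature of the full models of @Driscoll2014, not a reimplementation:

```python
def step(T_m, T_c, dt, M_m=4.0e24, c_m=1200.0, M_c=2.0e24, c_c=800.0,
         Q_rad=2.0e13, Q_rad_c=2.0e12):
    """One Euler step of the mantle/core secular cooling equations."""
    Q_conv = 3.0e13 * (T_m / 1600.0) ** 4   # toy mantle heat-loss law
    Q_cmb = 1.0e13 * (T_c - T_m) / 1500.0   # toy CMB heat-flow law
    dT_m = (Q_cmb + Q_rad - Q_conv) / (M_m * c_m)
    dT_c = -(Q_cmb - Q_rad_c) / (M_c * c_c)
    return T_m + dT_m * dt, T_c + dT_c * dt

T_m, T_c = 1700.0, 4500.0   # hot initial state, K
dt = 1.0e6 * 3.15e7         # 1 Myr time step, in seconds
for _ in range(4500):       # ~4.5 Gyr of evolution
    T_m, T_c = step(T_m, T_c, dt)
# The core cools toward the mantle; Q_cmb (and hence the power
# available to the dynamo) wanes as the two approach equilibrium.
```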
Crustal heat sources are excluded because they do not contribute to the mantle heat budget. The denominator of (\[dot\_T\_c\]) is the sum of core specific heat and heat released by inner core growth, where $A_{ic}$ is inner core surface area, $\rho_{ic}$ is inner core density, $\epsilon_c$ is a constant that relates average core temperature to CMB temperature, $dR_{ic}/dT_{cmb}$ is the rate of inner core growth as a function of CMB temperature, and $L_{Fe}$ and $E_G$ are the latent and gravitational energy released at the ICB per unit mass. Detailed expressions for heat flows and temperature profiles as functions of mantle and core properties are given in @Driscoll2014 and @driscoll2015b. Heat is lost from the mantle via conduction through the upper mantle thermal boundary layer ($Q_{conv}$) and by melt eruption ($Q_{melt}$). The rate at which this heat is lost is a complex function of mantle temperature, material properties, and style of convection. A mobile lid implies that the upper mantle thermal boundary layer reaches the planetary surface, so that the convective heat loss is controlled by conduction through this thin boundary layer. Assuming the boundary layer is at the critical Rayleigh number for convection, the convective heat flow is $$Q_{conv}=a_m \nu_m^{-\beta}T_m^{\beta+1} \label{q_conv} ,$$ where $\nu_m = \mu_m/\rho$ is temperature-dependent mantle kinematic viscosity and $a_m=8.4\times10^{10}$ W(m$^2$s$^{-1}$K$^{-4}$)$^{1/3}$ and $\beta=1/3$ are constants [see @Driscoll2014 equation 43]. In steady state, a stagnant lid has a thicker conductive boundary layer, which can be modeled by decreasing $a_m$ by a factor of $\sim25$ [@slava1995]. Mobile lid convection therefore favors dynamo action because it efficiently cools the mantle and thus boosts the core heat flow. Stagnant lid convection, on the other hand, can impede core cooling; stagnant lid convection on Venus is a leading hypothesis for why Venus lacks a magnetic field [@nimmo2002]. 
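In this scaling the tectonic mode enters only through the prefactor $a_m$, so at the same mantle temperature the stagnant lid heat flow is simply $\sim25$ times lower. A sketch, in which the viscosity law and its constants are assumed for illustration:

```python
import math

def convective_heat_flow(T_m, mobile=True, a_m=8.4e10, beta=1.0 / 3.0,
                         nu_ref=2.5e17, A_visc=3.0e4, T_ref=1600.0):
    """Q_conv = prefactor * nu^-beta * T_m^(beta+1), with an assumed
    Arrhenius-like viscosity nu(T); a stagnant lid is mimicked by
    reducing the prefactor by a factor of ~25, as in the text."""
    nu = nu_ref * math.exp(A_visc / T_m - A_visc / T_ref)
    prefactor = a_m if mobile else a_m / 25.0
    return prefactor * nu ** (-beta) * T_m ** (beta + 1.0)

Q_mobile = convective_heat_flow(1600.0)
Q_stagnant = convective_heat_flow(1600.0, mobile=False)   # ~25x lower
Q_hot = convective_heat_flow(1700.0)  # hotter mantle: lower nu and larger T
```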
However, stagnant lid convection does not always prevent a core dynamo, as thermal history models commonly show a relatively short period of rapid cooling, regardless of the surface tectonic mode, during which core cooling rates are high and dynamo action is possible. This period of rapid cooling, or thermal adjustment period, results from initially hot interior temperatures that are a consequence of planetary accretion. The thermal adjustment period typically lasts for the first $1-2$ Gyr as the internal temperatures and heat flows adjust to boundary conditions and heat sources. Similar early thermal dynamos have been proposed for the Moon [@stegman2003] and Mars [@nimmo2000]. Figure \[fig:EV\] shows example thermal histories for Earth and Venus, where the Earth model uses a mobile lid heat flow scaling while the Venus model uses a stagnant lid scaling [see @Driscoll2014]. The initial mid-mantle and CMB temperatures are assumed to be at the silicate liquidus, which is a reasonable starting temperature following the last large accretion event. Both models experience an initial thermal adjustment for $1-2$ Gyr, but diverge soon after. After the initial adjustment the Earth model cools monotonically, maintaining a thermal dynamo and later a thermo-chemical dynamo after inner core nucleation around 4 Gyr, which can be seen as a jump in the predicted magnetic moment (Figure \[fig:EV\]B). The core of the Venus model cools slightly during the thermal adjustment period, driving a transient thermal dynamo, but is slowly heated as the mantle and core heat up due to radiogenic heat trapped beneath the stagnant lid. In this case the core is too hot to solidify, precluding a compositional dynamo. These models predict Venus maintained a magnetic field for about 1.5-4 Gyr [@Driscoll2014]. During the thermal adjustment period the core and mantle are still strongly coupled, but the cooling rate is less sensitive to the style of mantle convection. 
Thus the primary role of plate tectonics is to extend the life of the dynamo beyond this thermal adjustment period. In addition to normal convective cooling, volcanic heat loss can potentially extend core cooling beyond the thermal adjustment period on a stagnant lid planet, provided most of the melt can reach the surface of the planet and cool [@morgan1983; @nakagawa2012; @armann2012; @Driscoll2014]. Obviously, magmatic heat loss was not efficient enough for Venus, Mars, or the Moon to sustain dynamos, so we are left with the tentative conclusion that a stagnant lid reduces both the long-term convective and volcanic mantle heat loss. ![Comparison of the predicted thermal (A) and magnetic (B) histories of Earth (solid) and Venus (dashed). The only difference between the models is the Earth model uses a mobile lid heat flow scaling while the Venus model uses a stagnant lid scaling (see (\[q\_cmb\])-(\[q\_conv\]); @Driscoll2014 contains additional details).[]{data-label="fig:EV"}](EV_comparison.jpg){width="\linewidth"} The role of mantle dynamics in dictating magnetic field strength also indirectly links the core to climate. Climate influences the tectonic regime of a planet, and the tectonic regime dictates the core cooling rate through time and thus whether a long-lived magnetic field can be maintained. Therefore a cool climate, being favorable for plate tectonics, will also be favorable for long-term magnetic field generation, and hot climates, to the extent that they lead to stagnant lid convection, will be unfavorable for the dynamo. An important question is then whether the core can exert any influence on climate through the role the magnetic field plays in limiting atmospheric escape. We address this topic next. 
Magnetic limited escape {#sec:mag_limited_escape} ----------------------- In describing the escape of a planetary atmosphere it is helpful to think of the limiting escape mechanism rather than the specific physical escape process, as several escape processes typically occur concurrently with a single limiting bottleneck. The escape limit is typically characterized as being in either the diffusion or energy limited regime, with no connection to the presence of a magnetic field. However, it is commonly argued that magnetic field strength should play an important role in atmospheric escape [@michel1971; @chassefiere1997; @lundin2007; @dehant2007; @lammer2012; @owen2014; @brain2014]. Below we speculate about how a magnetic field could influence escape. The hydrodynamic and diffusion limits to escape have been heavily studied [e.g. @hunten1973; @watson1981; @shizgal1996; @lammer2008; @luger2015b]. Both of these escape limits depend on hydrogen number density. In the diffusion limit escape is linearly proportional to hydrogen mixing ratio at the tropopause [@hunten1973], while hydrodynamic (or energy) limited escape has a weaker dependence on hydrogen mixing ratio but is expected to occur at much higher H concentrations [@watson1981]. Hydrodynamic escape is fundamentally driven by incident energy absorbed at the base of the escaping region, but a bottleneck occurs as the escape rate grows and the absorbing species are lost, thereby limiting absorption and the energy available to drive escape. Regardless of their detailed dependences, diffusion and energy limited escape are expected to intersect at some H mixing ratio (therefore denoting a transition from one mechanism to the other) as depicted in Figure \[fig:escape\]. ![Schematic diagram of atmospheric limiting escape regimes. Hydrogen escape rate versus mixing ratio with the diffusive (black), magnetic (blue), and hydrodynamic (black) regimes. 
The transition from diffusive to hydrodynamic escape may be interrupted by magnetically limited escape if the planetary magnetic field is strong. See @Driscoll2013 for more details.[]{data-label="fig:escape"}](escapelimits.jpg){width="\linewidth"} If a strong planetary magnetic field is present that balances the solar wind far from the top of the atmosphere, then the density of species exposed to the flow will be limited. In other words, the rate at which ionized planetary species are swept away by the stellar wind will decrease with increasing magnetopause distance and magnetic field strength. This effect adds a third limiting escape regime between the diffusion and hydrodynamic regimes. Figure \[fig:escape\] illustrates this scenario, where increasing hydrogen concentration leads to a transition from diffusion to magnetic limited, and then from magnetic to hydrodynamic limited escape. [@Driscoll2013] propose that the limiting physical mechanism in the magnetic regime could be the removal of planetary ions from the magnetopause via Kelvin-Helmholtz instabilities, the occurrence of which are well documented [e.g. @wolff1980; @barabash2007; @taylor2008]. Mathematically the magnetically limited escape rate is predicted to be a function of hydrogen ion concentration, magnetopause surface area, and instability time scale, such that stronger magnetic fields expose a lower density of planetary species to the solar wind because their number density decreases faster than the magnetopause surface area [@Driscoll2013]. Weaker (or non-existent) planetary magnetic fields will allow the solar wind to interact with the “top” of the atmosphere (where escape occurs), potentially pushing the magnetic limit above the diffusion and hydrodynamic limits, rendering the magnetic limit irrelevant (Figure \[fig:escape\]). 
Extent of coupling between mantle, core, and climate {#sec:core\_summary} ---------------------------------------------------- ![\[fig:magneticregime\] Schematic of magnetic field intensity over time. Initially the dynamo is maintained by cooling during the thermal adjustment period (solid black) and tectonic mode is not important. This continues until core cooling begins to slow and the dynamo will either die without efficient mantle cooling (dashed black), or stay strong with efficient interior cooling associated with plate tectonics (solid black). If the timing of this event occurs after the transition from magnetic to diffusion limited escape (Scenario \#1, vertical dashed red) then the shielding of the atmosphere is not dependent on the tectonic mode. If the dynamo divergence event occurs before the escape transition (Scenario \#2, vertical dashed red) then the shielding of the atmosphere is coupled to the tectonic mode. ](magregime1.jpg){width="\linewidth"} The magnetic field is therefore most important for atmospheric evolution during the transition from hydrodynamic to diffusion limited escape. A strong magnetic field could reduce the escape rate during this transition, thereby helping to preserve a planet’s water budget. Without magnetic shielding significant water loss could occur, potentially preventing the silicate weathering feedback from acting to stabilize climate and, in turn, plate tectonics from operating. Whether plate tectonics is itself necessary for magnetic shielding during the transition to diffusion limited escape depends on the timing of this transition (Figure \[fig:magneticregime\]). If the transition to the diffusion regime occurs quickly, i.e. $f_H$ decreases rapidly, then it would occur during a planet’s thermal adjustment period, when dynamo action is possible without plate tectonics (see §\[sec:core\_mantle\]). On the contrary, if the magnetic limited escape regime is occupied longer than the mantle thermal adjustment period (e.g. 
$f_H$ decreases slowly) then plate tectonics is probably important for preventing significant water loss through magnetic shielding. Unfortunately, the timing of the transition to diffusion limited escape is not well known, as $f_H$ depends on the details of a planet’s atmospheric structure. Future work in this area is needed to determine how long rocky planets occupy the magnetic limited escape regime, and the conditions under which plate tectonics is necessary for water retention. The timing of the transition to diffusion limited escape is also affected by stellar wind strength. In particular, magnetic shielding may be important over a wider range of stratospheric hydrogen mixing ratios, and thus over a longer period of time, for planets exposed to stellar winds much stronger than those at Earth today. Strong stellar winds are likely for planets orbiting active solar mass stars or orbiting close to moderately active small mass stars. In these cases maintaining a magnetic field may be crucial for preserving liquid surface water over billion year timescales, and such a long lived dynamo likely requires plate tectonics. In fact the magnetic field, climate, and plate tectonics can act as a self-sustaining feedback in this case, where the magnetic field is required to prevent water loss, water is necessary for silicate weathering to keep the climate cool enough for plate tectonics, and plate tectonics is required to drive the dynamo. Our knowledge of how planetary magnetic fields influence atmospheric escape is still in its infancy with many unanswered questions. In fact a strong magnetic field may even enhance escape in some instances, by producing a larger interaction cross section with the solar wind, which can concentrate incident energy flux by a factor of $10-100$ [@brain2014]. 
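The stakes of the escape transition discussed above can be made concrete with the standard diffusion-limited escape approximation. A minimal sketch follows; the $\Phi \approx 2.5\times10^{13}\, f_H$ cm$^{-2}$ s$^{-1}$ constant (a Hunten-style diffusion limit for an Earth-like atmosphere) and the Earth inventory numbers are textbook values we supply for illustration, not values from this paper.

```python
# Sketch: diffusion-limited hydrogen escape and the ocean-loss timescale.
# The 2.5e13 constant and Earth's inventory/area are assumed illustrative
# values (Hunten-style diffusion limit), not figures from this paper.

SECONDS_PER_YEAR = 3.15e7
EARTH_AREA_CM2 = 5.1e18      # Earth's surface area [cm^2]
OCEAN_H_ATOMS = 9.4e46       # ~1.4e21 kg of H2O -> 2 H atoms per molecule

def diffusion_limited_flux(f_h):
    """H escape flux [atoms cm^-2 s^-1] for stratospheric H mixing ratio f_h."""
    return 2.5e13 * f_h

def ocean_loss_time_gyr(f_h):
    """Time [Gyr] to lose one Earth ocean of hydrogen at the diffusion limit."""
    flux_total = diffusion_limited_flux(f_h) * EARTH_AREA_CM2   # atoms/s
    return OCEAN_H_ATOMS / flux_total / SECONDS_PER_YEAR / 1e9

# Modern Earth's dry stratosphere (f_h ~ 1e-5): loss takes thousands of Gyr,
# so water is safe once the diffusion limit with a cold trap is reached.
print(ocean_loss_time_gyr(1e-5))
# A moist greenhouse (f_h ~ 0.1) loses an ocean in a few hundred Myr.
print(ocean_loss_time_gyr(0.1))
```

The contrast between the two regimes is the reason the timing of the transition matters so much: whatever slows escape while $f_H$ is still large, magnetic shielding included, buys the planet its water budget.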
Clearly a more thorough understanding of how the properties of the atmosphere, stellar wind, and magnetic field influence escape rates is needed to constrain the coupling between the core dynamo and climate.

Whole planet coupling and planetary evolution {#sec:summary}
=============================================

Sections \[sec:plate\_generation\], \[sec:climate\_reg\_cc\], and \[sec:whole\_planet\] together outline whole planet coupling between climate, plate tectonics, and the magnetic field. In this section we describe how whole planet coupling can potentially explain the Earth-Venus dichotomy and lead to a number of different evolutionary scenarios for rocky exoplanets, many of which would be unfavorable for life. In particular we focus on habitable zone planets, showing how events early in a planet’s history, such as the initial atmospheric composition and the timing of the initiation of plate tectonics, play a major role in determining whether long-term habitable conditions can develop. A planet’s volatile inventory is likely important as well. Given our incomplete understanding of plate tectonics, the carbon cycle, the geodynamo, and how they interact, our discussion is mostly qualitative, and only represents some initial hypotheses for the factors governing planetary evolution.

The Earth-Venus dichotomy {#sec:Earth_venus}
-------------------------

Figures \[fig:flowE\] and \[fig:flowV\] illustrate how we propose that the climate, mantle, and core interact on Earth and Venus. Venus, being inward of the inner edge of the habitable zone, cannot have liquid water at its surface. As a result, silicate weathering cannot draw CO$_2$ out of the atmosphere, so a hot, CO$_2$-rich climate forms via volcanic degassing and/or primordial degassing during accretion and magma ocean solidification. The hot climate prevents plate tectonics and, in turn, the long-lived operation of a core dynamo due to a low core heat flux (Figure \[fig:flowV\]).
For the Earth, being in the habitable zone allows liquid water, and in turn silicate weathering. Thanks to a temperate climate, plate tectonics can operate, and the resulting high core heat flow powers the geodynamo (Figure \[fig:flowE\]). [@Jellinek2015] invoke a similar series of couplings to explain the Earth-Venus dichotomy, though they argue that the lack of plate tectonics on Venus is caused by high rates of radiogenic heating in the mantle, and that Venus’ hot climate stems from the lack of plate tectonics. Their hypothesis contrasts with our interpretation that Venus’ orbital position is the key factor explaining its evolution.

![Flow chart of climate-tectonic-magnetic coupling for an Earth-like planet. Arrows between reservoirs indicate volatile (blue) and heat (red) fluxes, their width roughly in proportion to magnitude. Black lines indicate conceptual couplings, such as the influence of magnetic field strength on escape rate or chemical weathering on climate. []{data-label="fig:flowE"}](flowchart_earth.pdf){width="\linewidth"}

![Flow chart of climate-tectonic-magnetic coupling on a Venus-like planet. []{data-label="fig:flowV"}](flowchart_venus.pdf){width="0.6\linewidth"}

If surface-interior coupling is responsible for the divergent evolution of Earth and Venus, the runaway greenhouse climate, loss of plate tectonics (if it ever existed), and death of the magnetic field should all be correlated in time. Thus, determining the climate, magnetic, and tectonic history of Venus is a vital test. Unfortunately our current knowledge of Venusian history is poor. The trace amount of water in the atmosphere requires a minimum of about 500 Myr for the H to escape to space [@donahue1999], so the runaway greenhouse must have occurred no later than $\approx 0.5-1$ Ga, but could have taken place much earlier. In fact, Venus may have entered a runaway greenhouse and lost its water during formation [@Hamano2013].
The Venusian tectonic and magnetic histories are similarly uncertain. There is evidence for a massive resurfacing event at 0.5-1 Ga. Surface features related to this event are most consistent with volcanism and lava flows rather than subduction of old crust and creation of new crust at ridges [@smrekar2013; @ivanov2013]. The planet may still be volcanically “active” today [@smrekar2010], but eruptions are likely sporadic. The style of tectonics before the resurfacing event is unknown, with stagnant lid convection, episodic overturns, or even Earth-like plate tectonics all possibilities. Unraveling the Venusian magnetic history is challenging because the preservation and in-situ measurement of any remanent magnetization on Venus’ hot surface is unlikely. However, another possible line of evidence for a paleo-Venusian magnetic field would be the loss of H$^+$, He$^+$, and O$^+$ along polar magnetic field lines to the solar wind, a process known on Earth as the polar wind [@moore2007]. This ion escape mechanism relies on the presence of an internally generated magnetic field, and could potentially leave a chemical fingerprint in the Venusian atmosphere [@brain2014]. However, our present-day knowledge provides no evidence for a strong, internally generated magnetic field on Venus. Future exploration of Venus is needed to place tighter constraints on its evolution.

Evolution of rocky exoplanets {#sec:sum_exoplanets}
-----------------------------

Venus demonstrates one likely evolutionary scenario for planets lying inward of the habitable zone’s inner edge. However, cooler climates that still allow plate tectonics and a magnetic field are also possible for such planets if they have a significantly smaller CO$_2$ inventory than Venus.
In this case a temperate climate can still exist, even without liquid water and silicate weathering to regulate atmospheric CO$_2$ levels, simply because there isn’t enough CO$_2$ to cause extreme greenhouse warming. In some cases a planet that experiences a runaway greenhouse could even retain some water at the poles, and potentially remain habitable [@Kodama2015]. Planets lying beyond the habitable zone’s outer edge will likely be cold, and thus can plausibly sustain plate tectonics and a magnetic field as well. However, whether complex life can develop on any of these planets is unclear. Likewise, an Earth-like evolution is one likely scenario for planets lying within the habitable zone, but a number of factors could lead to a different evolution that is unsuitable for life. Hot, CO$_2$ rich climates are expected after planet formation due to degassing during planetesimal accretion and magma ocean solidification [@Abe1985; @Zahnle2007; @Lindy2008]. If a planet’s initial climate is so hot that liquid water is not stable (i.e. temperature and pressure conditions are beyond the critical point for water), then developing a temperate climate is probably not possible. Initial surface temperature and pressure conditions in excess of the critical point imply that silicate weathering would not occur, with or without plate tectonics (though some limited reaction between atmospheric CO$_2$ and the crust is possible). As a result the climate would remain extremely hot, preventing both plate tectonics and a core dynamo. However, a climate hot enough to exceed the water critical point is an extreme case, requiring $\approx 500$ bar of CO$_2$ for a planet with an atmosphere composed of CO$_2$ and H$_2$O and receiving the same insolation as the Hadean Earth [@Lebrun2013]. Earth’s total planetary CO$_2$ budget is estimated at $\approx 100-200$ bar, so even with complete degassing during accretion or magma ocean solidification liquid water was still stable [e.g. 
@Sleep2001b; @Zahnle2007]. Thus a planet would need a significantly larger total CO$_2$ budget than Earth for initial atmospheric makeup to exceed the liquid water critical point. Another possibility is a climate where liquid water is stable, but is still too hot for plate tectonics. In this case low land fractions or erosion rates can potentially prevent plate tectonics, and a cool climate, from ever developing. High rates of volcanism would be needed to avoid this fate, by creating a sufficient supply of fresh rock at the surface for silicate weathering to cool the climate. Finally, even when climate conditions are amenable to plate tectonics, an uninhabitable state can be reached if plate tectonics does not initiate before increasing insolation warms the climate to the point where it is no longer possible (Figure \[fig:div\_point\]). Before plate tectonics initiates on a planet, weathering may be supply limited and the atmosphere CO$_2$ rich. Thus surface temperature will increase with increasing luminosity, and can potentially become hot enough to preclude plate tectonics from ever starting. If plate tectonics does not initiate before this divergence point is reached, a planet could become permanently stuck with a hot, uninhabitable climate, stagnant lid convection in the mantle, and no protective magnetic field. Conversely on a planet where plate tectonics does initiate before reaching this divergence point, continental growth and orogeny will increase both land area and erosion rates, enhancing the ability of silicate weathering to establish and maintain a temperate, habitable climate. If a large enough land area forms, weathering may even be capable of maintaining a temperate climate without plate tectonics to elevate erosion rates. Likewise plate tectonics means that long-lived dynamo action, and therefore volatile shielding from the solar wind, is possible. 
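The chain of divergence points in the last few paragraphs can be summarized as a small decision sketch. The $\approx 500$ bar water-critical-point CO$_2$ threshold is the value quoted above; the timing inputs are illustrative placeholders, not constraints from this paper.

```python
# Minimal decision sketch of the divergence points described above.
# The 500 bar threshold is the estimate quoted in the text; all timing
# inputs are illustrative placeholders, not values derived in the paper.

def climate_fate(co2_budget_bar, t_plates_start_gyr, t_divergence_gyr):
    """Classify a habitable-zone planet's long-term outcome.

    co2_budget_bar    : total degassable CO2 inventory [bar]
    t_plates_start_gyr: when plate tectonics initiates [Gyr after formation]
    t_divergence_gyr  : when rising insolation makes the climate too hot
                        for plate tectonics ever to start [Gyr]
    """
    if co2_budget_bar >= 500.0:
        # Surface beyond the water critical point: no weathering at all.
        return "permanent runaway: no liquid water, no tectonics, no dynamo"
    if t_plates_start_gyr > t_divergence_gyr:
        # The divergence point is passed before tectonics initiates.
        return "stuck: hot climate, stagnant lid, no magnetic field"
    return "habitable track: weathering, plate tectonics, long-lived dynamo"

print(climate_fate(150.0, 0.5, 1.0))   # Earth-like budget, early tectonics
print(climate_fate(150.0, 2.0, 1.0))   # tectonics starts too late
print(climate_fate(800.0, 0.5, 1.0))   # CO2-rich planet
```

The sketch deliberately ignores the continuous feedbacks discussed in the text (land growth, erosion, supply-limited weathering); it only encodes the qualitative branch points.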
The divergence point involving the initiation of plate tectonics could be particularly important if high mantle temperatures are a significant impediment to plate tectonics through low convective stresses or the creation of thick buoyant crust (see §\[sec:pt\_other\_factors\]). The mantle may need to cool before plate tectonics can begin, even if surface temperatures are not hot enough to preclude plate motions. However, the increase in luminosity during a star’s main sequence evolution is relatively gradual (the sun’s luminosity was only $\approx 30\%$ lower 4.5 Gyrs ago [@Gough1981]), so there is a large time window, on the order of 1 Gyr, for sufficient mantle cooling to occur before the climate becomes too hot for plate tectonics. Another divergence point involves magnetic shielding and planetary water loss. The magnetic field can limit H escape, and thus help preserve surface water, when atmospheric escape transitions from being hydrodynamically limited to diffusion limited (see §\[sec:mag\_limited\_escape\] and §\[sec:core\_summary\]). If no magnetic field is available to shield against the solar wind then massive amounts of H may be lost, leaving the planet desiccated and unable to regulate atmospheric CO$_2$ levels. As discussed in §\[sec:core\_summary\], if the transition to diffusion limited escape occurs early in a planet’s history, then magnetic shielding is possible regardless of tectonic regime, and many planets will likely be able to keep their volatiles and possibly develop habitable climates. However, if the transition occurs after the mantle’s thermal adjustment period, then plate tectonics is likely necessary for magnetic shielding, and fewer planets will be able to retain surface water. Additional divergence points are possible later in planetary evolution if climate, tectonics, and the magnetic field are tightly coupled.
For example, if the carbon cycle and plate tectonics act as a self-sustaining feedback, where high erosion rates, supplied by plate tectonics, are required to maintain kinetically limited weathering and thus keep surface temperatures cool enough for plate tectonics (see §\[sec:pt\_cc\]), then the loss of plate tectonics would lead directly to a hot climate state that likely precludes plate tectonics from re-initiating. Without plate tectonics to enhance erosion, the inhospitably hot climate that results from supply limited weathering would likely be permanent. Such tight coupling between plate tectonics and the carbon cycle is most likely on planets with small exposed land areas or high total CO$_2$ budgets. Another divergence point is possible for planets orbiting very active solar mass stars or very close to active small mass stars where stronger stellar winds can more efficiently strip atmospheric volatiles. For these planets cessation of the core dynamo could cause significant water loss, in turn halting silicate weathering and, due to the ensuing hot climate, likely shutting down plate tectonics as well (see Figure \[fig:divergent\_evol\] for a schematic illustration of the divergence points discussed in this paragraph). Size is also a major factor in terrestrial planet evolution. Large planets are expected to have wider habitable zones due to the influence of higher gravity on atmospheric scale height and the greenhouse effect [@Kopp2014], but also shallower ocean basins and thus less exposed land, unless feedbacks can regulate ocean volume such that continents are always exposed [@Cowan2014 see also §\[sec:future\]]. The influence of size on plate tectonics and magnetic field generation is also unclear. 
Previous studies have found that plate tectonics is more likely on larger planets [@Valencia2007b; @Valencia2009; @vanheck2011; @Foley2012], less likely on larger planets [@ONeill2007; @Kite2009; @Stein2013; @noack2014; @stamenkovic2014; @miyagoshi2014; @tachinami2014], or that size is relatively unimportant [@Korenaga2010a]. Different studies reach very different conclusions because of the large uncertainties in the rheological mechanism necessary for generating plate tectonics, how key features such as internal heating rate scale with size, and how mantle properties are affected by pressure and temperature. The influence of size on magnetic field strength is likewise debatable. Several studies have found a weak dependence of field strength and lifetime on planet and core size, and possibly a peak in strength for Earth-sized planets [@gaidos2010; @tachinami2011; @driscoll2011a; @vansummeren2013]. Generally, larger dynamo regions are expected to produce stronger magnetic fields [e.g. @christensen2009], but variations in more subtle properties, like mantle and core composition, likely play a fundamental role. The coupling between plate tectonics, climate, and the magnetic field is expected to apply to planets of different size, but future work is needed to place tighter constraints on the influence of size on this coupling and on planetary evolution. The issue of size highlights the many uncertainties remaining in our knowledge of planetary dynamics and evolution. Each aspect of planetary dynamics that is important for magnetic, tectonic, and climate evolution needs to be better understood before more rigorous predictions or interpretations can be made. 
In particular, the interactions between different components of the planetary system, specifically interactions between surface tectonics, mantle convection, and the long-term carbon cycle, between mantle convection and the core dynamo, and between the magnetic field, atmospheric escape, and climate, deserve significant attention. With a large number of rocky exoplanets already discovered, many of which are in their respective habitable zones [@Batalha2014], and more certain to follow, improving our knowledge of planetary evolution is an important goal.

Future Directions {#sec:future}
=================

In addition to furthering our understanding of plate tectonics, magnetic field generation, climate evolution, and the interactions between these processes, there are a number of new questions and research topics that must be addressed to advance our knowledge of planetary evolution. We summarize a few of these important questions below.

1\) [**What are the material properties of Earth-like and non-Earth-like terrestrial planets at high temperature, high pressure conditions?**]{} Without tighter constraints on basic properties like density, viscosity, and thermal conductivity, models of exoplanet evolution will continue to be highly uncertain. Conducting both laboratory and numerical experiments at the conditions relevant for Earth and super-Earths is challenging, but is also vital for determining how terrestrial planets behave. Moreover, the composition of exoplanets could differ significantly from Earth, so constraints on the properties of non-Earth-like materials are necessary for modeling the evolution of these planets as well.

2\) [**What controls the volume of water at a planet’s surface (i.e.
the size of the oceans and the amount of subaerial land)?**]{} Earth has maintained a relatively constant freeboard, or water level relative to the continents, throughout much of its history [@Wise1974], despite the fact that there could be multiple oceans worth of water stored in the mantle [e.g. @Ohtani2005]. Are there feedbacks in Earth’s deep water cycle acting to keep a large continental surface area exposed, as proposed by some [@Kasting1992; @Cowan2014]? Given the importance of exposed land to the long-term carbon cycle, this question is of fundamental importance to planetary evolution.

3\) [**What are typical volatile abundances, in particular water and carbon dioxide, for rocky planets, and how much of this volatile inventory resides in the atmosphere just after accretion and magma ocean solidification?**]{} Both the volatile inventory and the initial atmospheric composition are important for planetary evolution. Thus, a better understanding of how and when volatiles are delivered during accretion, how much degassing occurs during accretion and solidification of a possible magma ocean, and how much atmospheric loss occurs during this early phase of planetary history, are all necessary for constraining planetary evolution.

4\) [**Can strong gravitational tides render a planet uninhabitable?**]{} Planets experiencing strong gravitational tides (caused by nearby stars, planets, or satellites) can generate significant internal heating via tidal dissipation, which can cause extreme surface volcanism and hinder dynamo action [@driscoll2015b]. Efficient cooling of the interior could allow the orbits of such planets to circularize faster, minimizing the length of time spent in a tidally heated regime, and could move the planet into (or out of) the habitable zone. Future work should explore the details of tidal dissipation in the mantle and core, and how tides can influence long-term evolution.
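To give a sense of the magnitudes involved in question 4, the standard equilibrium-tide heating rate for a synchronously rotating body on a slightly eccentric orbit can be evaluated directly. The formula and the Io parameters below are textbook values used as a sanity check; none of them come from this paper.

```python
import math

# Standard equilibrium-tide dissipation rate (Peale-style):
#   dE/dt = (21/2) (k2/Q) G M^2 n R^5 e^2 / a^6
# All parameter values below are assumed, illustrative numbers for Io.

G = 6.674e-11  # gravitational constant [m^3 kg^-1 s^-2]

def tidal_heating_w(k2_over_q, m_perturber, radius, ecc, a):
    """Tidal dissipation rate [W] in the distorted body."""
    n = math.sqrt(G * m_perturber / a**3)  # orbital mean motion [rad/s]
    return 10.5 * k2_over_q * G * m_perturber**2 * n * radius**5 * ecc**2 / a**6

# Io around Jupiter: yields ~1e14 W, comparable to Io's observed heat flow,
# showing how tides can dominate a body's internal heat budget.
p_io = tidal_heating_w(k2_over_q=0.015, m_perturber=1.898e27,
                       radius=1.822e6, ecc=0.0041, a=4.217e8)
print(f"{p_io:.2e} W")
```

Scaling this to an Earth-sized planet close to a low-mass star shows why even modest eccentricities can drive the extreme volcanism mentioned above.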
5\) [**How and when does plate tectonics initiate?**]{} Factors other than climate, like mantle temperature, can have an important control over whether plate tectonics can operate. Thus even with favorable climate conditions, other processes such as mantle cooling may be necessary for Earth-like plate tectonics. Understanding the type of tectonics that might take place on a planet before plate tectonics, how much land and weatherable rock can be created through non-plate-tectonic volcanism, and the factors that then allow for Earth-like plate tectonics to develop, will all be crucial for determining how likely planets are to follow an evolutionary trajectory similar to Earth’s.

Acknowledgements
================

BF acknowledges funding from the NASA Astrobiology Institute under cooperative agreement NNA09DA81A. We thank Norm Sleep for a thorough and constructive review that helped to significantly improve the manuscript. No new data was used in producing this paper; all information shown is either obtained by solving the equations presented in the paper or is included in papers cited and listed in the references.
---
abstract: |
    Smart home remains unpopularized because of poor product user experience, high purchasing cost, poor compatibility, and the lack of an industry standard[@avgerinakis2013recognition]. In response to these problems, and having relentlessly devoted ourselves to software and hardware innovation and practice, we have independently developed a solution based on the innovation and integration of router technology, mobile Internet technology, Internet of Things technology, communication technology, digital-to-analog conversion and codec technology, and P2P technology, among others. We have also established the relevant protocols ourselves, without relying on foreign protocols. By doing this, we managed to build a system with a low-to-moderate price, superior performance, all-inclusive functions, easy installation, convenient portability, real-time reliability, security encryption, and the capability to manage household appliances intelligently. Only a new smart home system like this can inject new ideas and energy into the smart home industry and thus vigorously promote the establishment of a smart home industry standard.
author:
-
bibliography:
- 'GG.bib'
title: Promote the Industry Standard of Smart Home in China by Intelligent Router Technology
---

Smart home, router technology, industry standard

Introduction
============

Since the beginning of this year, smart home has been riding a rising wave. This October, Nest, Google’s smart thermostat company, acquired Revolv, a smart home central control equipment start-up. Xiaomi released four new smart products, including a smart plug and a smart camera. Enterprises at home and abroad are rushing one after another into the huge smart home market. Smart home is opening new vistas and space for the Internet and household appliance industries. All View Consulting forecasts that by 2020, the ecosystem of domestic smart home appliances in China will reach one trillion yuan.
SAIF Partners predicts that the scale of the traditionally defined smart home industry in China will reach 5.5 billion yuan in 2014, and that this number will soar to 7.5 billion yuan in 2015. However, three stumbling blocks stand in the way of the smart home industry: user experience, purchasing cost, and poor compatibility[@albuquerque2014solution]. In response, we have independently developed a solution based on smart router technology. Many problems exist in the routers currently on the market. First, wireless control based on the 315 MHz, 433 MHz, and other frequency bands has no network protocol and can only send simple control commands. Collisions occur when more than three devices are connected, which makes successful control difficult. Second, a control network based on ZigBee has a small range, poor through-wall performance, a complex protocol, and a high price, and at the same time is exclusive and incompatible with devices already on the market. The third option is a control network based on Wi-Fi. Wi-Fi has a small control range and is thus limited to only a few connected devices; when a household router is connected to more than ten devices, the network typically drops or becomes unstable. Having taken the characteristics above into consideration, and to meet the needs for transmission distance, stability, and number of controlled devices, our system uses a control network based on the 433 MHz band, on top of which we independently developed a self-organizing protocol that provides dynamic networking functions similar to those of ZigBee and supports more than 100 connected devices. As a result, our system achieves ZigBee-like networking capabilities with a high number of connected devices and long transmission distance. Moreover, the 433 MHz protocol is open and can be made compatible with the smart devices currently on the market.
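The paper's 433 MHz protocol itself is not published. As a rough illustration of how per-device addressing plus a checksum lets a simple sub-GHz control link scale beyond the "three devices collide" limit described above, here is a hypothetical minimal frame format; all field names, sizes, and the layout are our assumptions, not the actual proprietary protocol.

```python
import struct

# Hypothetical minimal addressed 433 MHz control frame with an XOR checksum.
# Field layout and sizes are illustrative assumptions; the actual protocol
# of the system described in the text is not published.

FRAME_FMT = ">HBBB"  # dst address (16-bit), src address, command, sequence no.

def build_frame(dst, src, command, seq):
    """Pack one control frame and append a one-byte XOR checksum."""
    body = struct.pack(FRAME_FMT, dst, src, command, seq)
    checksum = 0
    for b in body:
        checksum ^= b
    return body + bytes([checksum])

def parse_frame(frame):
    """Verify the checksum and unpack; return None for a corrupted frame."""
    body, checksum = frame[:-1], frame[-1]
    calc = 0
    for b in body:
        calc ^= b
    if calc != checksum:
        return None
    dst, src, command, seq = struct.unpack(FRAME_FMT, body)
    return {"dst": dst, "src": src, "command": command, "seq": seq}

f = build_frame(dst=0x0102, src=0x31, command=0x01, seq=7)
print(parse_frame(f))                 # round-trips to the original fields
print(parse_frame(f[:-1] + b"\x00"))  # corrupted checksum -> None
```

With 16-bit destination addresses and per-frame sequence numbers, a hub can acknowledge and retransmit per device, which is what allows far more than three nodes to share the band.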
Therefore, the smart router will change the landscape of the smart home market and the high-end router market, and lays a solid foundation for the establishment of an industry standard for the smart home industry in China.

A set of solution aiming at promoting establishment of industry standard for China smart home market {#SEC: A set of solution aiming at promoting establishment of industry standard for China smart home market}
====================================================================================================

The set of solution includes {#SSEC: The set of solution includes}
----------------------------

A smart router, a cloud server, mobile terminals, and intelligent terminals. Meanwhile, the system is an intelligent development platform that enables programmers worldwide to carry out secondary development on it, so that an ecological chain with sound circulation takes shape (similar to Apple's App Store).

Feature of the whole set of solution {#SSEC: Feature of the whole set of solution}
------------------------------------

As the smart center of the household, the smart router is able to administer all of the connected devices, control the smart devices in the house, keep the house under security surveillance, monitor the household environment in terms of temperature, humidity, PM2.5, etc., raise an alarm when household accidents such as smoke or gas leakage happen, and enable users to connect remotely to the smart center through devices such as cell phones and pads at any time and anywhere to observe and supervise household appliances, as well as to promptly watch the surveillance video in the house. Notably, the cost of this device is only a small percentage of that of a traditional smart home product.
Users learn about the state of the house at first hand through their phones: gas leakage, burglary, illegal opening of doors or windows, abnormal temperature or humidity, smoke alarms, tampering with valuables, and real-time temperature, humidity, and PM2.5 readings. The router thus acts as a safety housekeeper. Users can also view the in-house surveillance video promptly on their phones, giving a more direct and intuitive view of the surroundings. Moreover, the router can control televisions, air conditioners, and any other household appliance that has a remote control. Last but not least, the router is also a cloud service center where users can keep personal data in a family cloud.

Introduction to specific functions {#SSEC: Introduction to specific functions}
----------------------------------

### Functions of the router {#SSSEC: Functions of router}

Like an ordinary router, it provides Internet access and distributes Wi-Fi [@zualkernan2009infopods].

### Intellisense {#SSSEC: Intellisense}

Users receive in-house alarms on their phones. The smart router adapts to every alarm apparatus: when an apparatus raises an alarm, the router promptly pushes the alarm information to the users' phones, providing security for the household. Alarm apparatuses cover gas leakage, burglary, illegal opening of doors or windows, abnormal temperature or humidity, smoke, and tampering with valuables. Users can also check the temperature, humidity, and PM2.5 of the house at any time.

### Intelligent surveillance {#SSSEC: Intelligent surveillance}

The smart router works with inexpensive USB cameras as well as wireless cameras, so users can conveniently obtain real-time images on their phones. Wireless cameras can use the H.264 codec, which keeps the image clear and smooth.
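The alarm-push behaviour described under Intellisense can be sketched as a small dispatcher on the router that fans an incoming device alarm out to every registered phone. This is a minimal illustration; the class, alarm codes, and message format are hypothetical, not the product's actual protocol:

```python
import json
from dataclasses import dataclass, field
from typing import List

# Hypothetical alarm codes; the real device protocol is proprietary.
ALARM_TYPES = {
    0x01: "gas leakage",
    0x02: "door/window opened",
    0x03: "smoke detected",
    0x04: "valuables touched",
}

@dataclass
class AlarmCenter:
    """Router-side dispatcher: maps device alarms to registered phones."""
    phones: List[str] = field(default_factory=list)
    outbox: List[str] = field(default_factory=list)  # stands in for the push channel

    def register(self, phone_id: str) -> None:
        self.phones.append(phone_id)

    def on_alarm(self, device_id: str, code: int) -> None:
        # Encode the alarm once, then push it to every registered phone.
        msg = json.dumps({"device": device_id,
                          "alarm": ALARM_TYPES.get(code, "unknown")})
        for phone in self.phones:
            self.outbox.append(f"{phone}:{msg}")

center = AlarmCenter()
center.register("phone-A")
center.register("phone-B")
center.on_alarm("sensor-7", 0x01)
```

In a real deployment the outbox would be replaced by the server relay or P2P path described later in this section.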
### Intelligent plug {#SSSEC: Intelligent plug}

Phones can remotely switch the plug on and off, and thus control the appliances connected to it.

### Intelligent cloud service {#SSSEC: Intelligent cloud service}

The smart router can serve as a cloud service center that provides personal data management for family members with a good level of privacy.

### Intelligent remote control {#SSSEC: Intelligence remote control}

X-Router lets users put aside all the remote controls at home and control every household appliance through X-Router: for example, lights, curtains, plugs, televisions, air conditioners, DVD players, and set-top boxes, all from a phone.

Main technologies {#SSEC: Main technologies}
-----------------

1\) Design and implementation of the control communication protocol.

2\) Transfer function of the communication protocol, achieving intelligent switching between relayed transmission and P2P.

3\) P2P research, achieving P2P with low traffic.

4\) Design and implementation of the camera protocol, integrated with our own server protocol.

5\) Development of the phone software and the server-side software, and design and implementation of the chat protocol between them.

6\) Device terminal networking.

7\) Design and implementation of the connection control protocol between device terminals and the router.

8\) Design and implementation of the control protocol for each type of device terminal; protocols for different device types differ and are implemented individually.

9\) Development and production of the communication printed circuit board for device terminals.

10\) Software and hardware platform for the routers.

11\) Printed circuit board hardware design with low power consumption, high simultaneous access, and long endurance.

Technical index {#SSEC: Technical index}
---------------

### Low consumption {#SSSEC: Low consumption}

The transmitted power is only around 1mW.
The device also uses a low-power sleep mode, greatly reducing electricity use. By our estimates, it can sustain six months to two years of continuous operation on two AA batteries, which other wireless devices can hardly match.

### Low cost {#SSSEC: Low cost}

The cost of the whole solution is around 100 yuan.

### Short time delay {#SSSEC: Short time delay}

Both the communication delay and the wake-up delay from sleep mode are very short: typically 30ms to search for a device, 15ms to wake from sleep, and 15ms for an active device to join via a channel. The technology is therefore well suited to wireless control applications with demanding latency requirements (such as industrial control).

### Self-organized network technique {#SSSEC:Self-organized network technique}

A star topology supports at most 254 slave units and one primary device, and the network is flexible.

### Reliability {#SSSEC: Reliability}

A collision-avoidance strategy is employed: dedicated time slots are reserved for traffic that needs stable bandwidth, so contention and conflict when sending data are avoided. The MAC layer uses a fully deterministic transmission mode in which every sent data packet must wait for confirmation from the recipient; if anything goes wrong in transmission, the data is resent.

### Security {#SSSEC: Security}

HTTPS encryption, supporting authentication and certification.

### P2P technology {#SSSEC: P2P technology}

A technique for NAT gateway and firewall traversal.

### Low power dissipation technology of cell phone clients {#SSSEC: Low power dissipation technology of cell phone clients}

Real-time techniques for low power dissipation on the phone client.
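The acknowledged-retransmission scheme described under Reliability amounts to a stop-and-wait loop: send a frame, wait for the recipient's confirmation, and resend on failure. A minimal sketch, with a hypothetical channel callable standing in for the radio link and its MAC-layer acknowledgement:

```python
MAX_RETRIES = 3  # illustrative retry budget, not a published parameter

def send_with_ack(frame: bytes, channel, retries: int = MAX_RETRIES) -> bool:
    """Stop-and-wait: keep resending `frame` until the recipient acknowledges it,
    giving up after `retries` attempts."""
    for _ in range(retries):
        if channel(frame):  # True stands in for a received acknowledgement
            return True
    return False

class LossyChannel:
    """Test double for the radio link: drops the first `drop_first` frames."""
    def __init__(self, drop_first: int):
        self.remaining_drops = drop_first

    def __call__(self, frame: bytes) -> bool:
        if self.remaining_drops > 0:
            self.remaining_drops -= 1
            return False
        return True

# Two dropped frames, then a successful delivery on the third attempt.
delivered = send_with_ack(b"\x01\x02", LossyChannel(drop_first=2))
```

The deterministic slot reservation mentioned above is what keeps such retransmissions from colliding with bandwidth-sensitive traffic.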
Explanation of key technologies of each terminal {#SSEC: Explanation of key technologies of each terminal}
------------------------------------------------

### Server terminal {#SSSEC: Server terminal}

The server is the bridge that connects the whole system: phones and routers establish data paths and exchange data through it, or carry out P2P (peer-to-peer) communication, which reduces the load on the servers. The servers also provide login, registration, and user information management. They can scale out into a cluster as the workload grows, increasing the capacity available to phones and routers [@min2013design]. We designed our own protocol, which chooses the type of connection according to the type of request (small data is normally relayed through the server over UDP, while large data uses a P2P connection). It connects the data path from phones to routers and makes it possible to transmit data transparently from a phone to a router terminal.

### Smart router terminal {#SSSEC: Smart router terminal}

The smart router terminal is the core of our project, on which we build the routers' software and hardware platform. First, it is a high-performance router through which ordinary PCs and phones can access the Internet. Second, it is our data processing center: it implements our core interaction protocol, and encodes, decodes, and processes all the control and alarm data. Third, its radio frequency function lets it send and receive RF signals, using our protocol to encode and decode digital and analog signals. Through this protocol it connects the data path from routers to device terminals, and with the aid of the servers it enables transparent transmission of data from phones to device terminals.
### Cell phone {#SSSEC: Cell phone}

As long as a user's phone has 2G/3G/4G/Wi-Fi connectivity, the phone software can connect to the servers and communicate transparently with smart routers and device terminals through the protocol. Users register and log in to the server through the phone software, modify their personal information over HTTP, and add and chat with friends via the chat protocol. The phone can send router administration commands over the phone-to-router path introduced above, configuring the router's switches, wireless settings, and PPPoE. When the phone terminal wants to control a device terminal, a control message is generated and sent along the phone-to-device path introduced above; the device terminal decodes the message, obtains the specific command, and executes it. When a device terminal needs to report its state, or receives alarm information that must be reported to the user, it generates the corresponding report message and sends it to the phone terminal along the same path; the phone terminal decodes the reported message, obtains the specific command, and then updates the device state in the application UI or raises an alarm.

### Device terminal {#SSSEC: Device terminal}

The device terminal implements the solution's communication protocol and can be extended seamlessly: any device that implements the protocol can join the router's network and be controlled through it. Device terminals span all alarm apparatuses, plugs, bulbs, power supplies, sensors, and household appliances.

### Terminal SDK {#SSSEC:Termianl SDK}

Once data channels between devices are connected, a great deal can be built on top of them. Our solution opens an SDK for device communication.
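The encode/decode step described above for control messages can be illustrated with a small binary frame. The layout here (device type, device id, command, checksum) is a hypothetical example for the sketch, not the product's actual wire format:

```python
import struct

# Hypothetical frame layout: 1-byte device type, 2-byte device id,
# 1-byte command, 1-byte checksum (sum of the preceding bytes mod 256).
FRAME = struct.Struct(">BHBB")

def pack_command(dev_type: int, dev_id: int, command: int) -> bytes:
    """Encode a control command for a device terminal."""
    body = struct.pack(">BHB", dev_type, dev_id, command)
    checksum = sum(body) % 256
    return body + bytes([checksum])

def unpack_command(frame: bytes):
    """Decode a frame on the device terminal, verifying the checksum."""
    dev_type, dev_id, command, checksum = FRAME.unpack(frame)
    if sum(frame[:-1]) % 256 != checksum:
        raise ValueError("corrupt frame")
    return dev_type, dev_id, command

# e.g. "plug" device 0x0101, command 0x01 ("switch on")
frame = pack_command(dev_type=0x02, dev_id=0x0101, command=0x01)
```

The report messages travelling in the opposite direction would use the same pack/unpack pattern with their own command codes.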
Using the SDK, third parties can control the devices they develop through their own applications, turning the system into a platform.

### Camera cloud service cluster and camera {#SSSEC: 6)Camera cloud serive cluster and camera}

Given the camera's characteristics (a powerful processor, large data throughput, and large volumes of image data), and the need to ensure that the camera does not affect the stability of other devices [@yuan2014accountability], our camera separates image data from control data. Transmission and storage of image data are handled solely by the server clusters. Communication between the camera terminal and phones is realized either by server relay or by a P2P connection established through hole punching between the servers and the phones. When a phone terminal needs to connect to a camera, it also connects to the camera cloud service cluster; it first requests a P2P connection, and if the path does not support it, falls back to server relay. Control operations from the phone, such as rotating the camera or panning up, down, left, or right, travel over the path to the main servers. Phone terminals can save camera image data locally, or save it directly to a remote cloud drive such as Baidu Yun or Kuaipan via the user's cloud drive account.

Content of innovation technology {#SSEC: Content of innovation technology}
--------------------------------

### Combination of radio frequency technology and router {#SSSEC: Combination of radio frequency technology and router}

Traditional home gateway master control systems all employ specialized processors and foreign communication protocols, with limited functionality and high prices [@nejad2014operation]. With the development of chip technology, the chips in household routers have become powerful enough to control smart home appliances [@razminia2011chaotic].
We therefore designed this smart router: we add a radio frequency module to the router and write the control and communication protocol software, achieving a multi-function master control system with low cost and high stability that can also serve as a general-purpose router.

### Home gateway communication system {#SSSEC:Home gateway communication system}

Existing home gateway communication systems use exclusive custom protocols that are incompatible with third-party systems and hard to upgrade [@garcia2012smart]. Our system aims to remain open and universal: it integrates elements of the Jingle protocol used in social networking, and adds authentication, control, and device discovery functions to the protocol. The system is therefore highly extensible; it can interact with any server, and communicate with any client, that supports Jingle. It supports both device control and social interaction, so it can not only control smart devices but also let family members communicate over a social network.

### Control network protocol {#SSSEC:3£©Control network protocol}

Traditionally there are three types of control network. The first is wireless control based on the 315MHz, 433MHz and other frequency bands. Its advantages are high accuracy and a long control range, up to 1000 meters at maximum power. On the flip side, there is no network protocol and only simple control commands can be sent; collisions occur once more than three devices are connected, making reliable operation difficult [@han2014generating]. The second is a control network based on ZigBee. Its shortcomings are short range, poor through-wall performance, a complex protocol, and a high price; it is also proprietary and incompatible with devices already on the market [@garcia2013multi].
The third is a control network based on Wi-Fi, which has a small control range and supports only a few connected devices; a household router connected to more than ten devices typically drops the network or becomes unstable [@kammerer2012router]. Taking these characteristics into account, and to meet the requirements for transmission distance, stability, and number of controlled devices, our system uses a control network based on the 433MHz band with an independently developed self-organizing protocol on top of it, which supports dynamic networking similar to ZigBee and more than 100 connected devices. Our system thus combines the networking capability of ZigBee with a large device count and long transmission distance. The 433MHz protocol is also open and compatible with smart devices currently on the market, such as 433MHz infrared motion-detecting alarms.

### High-speed router {#SSSEC:High-speed router}

The transmission rate of a traditional router is only 150M/300M. Our router employs 802.11ac technology, so the transmission rate can reach as much as 900M.

![\[fig:2-1\]Product concept structure graph.](luyouqi.pdf "fig:"){height="29.00000%" width="50.00000%"}\

Theoretical basis (including data transmission) {#SSEC:Theoretical basis (including data transmission)}
-----------------------------------------------

### TCP/IP protocol {#SSSEC:TCP/IP protocol}

TCP/IP stands for Transmission Control Protocol/Internet Protocol, also known as the network communication protocol. It is the fundamental protocol of the Internet and the foundation of the international Internet, comprising the IP protocol at the network layer and the TCP protocol at the transport layer. TCP/IP defines the standard for how electronic devices connect to the Internet and how data is transmitted between them.
The protocol suite employs a four-layer hierarchy, and each layer calls on the protocol provided by the layer below it. In plain words, TCP is in charge of spotting transmission problems; when one occurs, it signals for retransmission until all the data has been safely and correctly delivered to the destination, while IP assigns an address to every device connected to the Internet. Communication among all the terminals and devices in this project is based on TCP/IP.

### Jingle/XMPP protocol {#SSSEC:Jingle/XMPP protocol}

XMPP (Extensible Messaging and Presence Protocol) is a protocol based on the Extensible Markup Language (XML), used for instant messaging (IM) and online presence detection. It supports real-time operation between servers, and may ultimately allow Internet users to send instant messages to anyone else on the Internet, regardless of operating system or browser. The P2P connection and transmission in this solution are both built on a revised Jingle/XMPP protocol.

Experimental basis {#SSEC:Experimental basis}
------------------

The router structure below is the final version adopted by this project after repeated experiments.

![\[fig:3-1\]The Router Frame.](jiegou.pdf "fig:"){height="29.00000%" width="50.00000%"}\

Product index {#SSEC:Product index}
-------------

### Smart route {#SSSEC:Smart route}

1\) size: 23×23×5

2\) transmission standard: IEEE 802.11ac/b/g/n

3\) wireless transmission rate: 300M+700M

4\) antenna gain: 5dBi

5\) wired Internet access: one WAN, four LAN, 1000/100/10, subject to adjustment and configuration

6\) security standard: 64/128-bit WEP encryption, WPA/WPA2 encryption, WPS (Wi-Fi Protected Setup) support

7\) transmission bands: 2.4GHz and 5.0GHz

8\) reset key: 1

9\) WPS key: 1

10\) supported connected device terminals: 255

11\) responds to commands from phone clients and device terminals within three seconds.
### Server {#SSSEC:Server}

1\) One server can support 6000 simultaneous users.

2\) 7\*24h smooth operation.

### Wireless device {#SSSEC:Wireless device}

1\) effective range: 1500m outdoors, 300m indoors

2\) frequency range: 240-930MHz

3\) FSK, GFSK and OOK modulation modes

4\) maximum output power: 20dBm

5\) sensitivity: -121dBm

6\) low power dissipation: 18.5mA (reception); 85mA@+20dBm (transmission)

7\) data transmission rate: 0.123-256kbps

Conclusion {#SEC: Conclusion}
==========

After sustained research and practice, our final smart router has largely overcome the three big shortcomings of smart home systems: poor user experience, high purchasing cost, and bad compatibility. It brings a revolution to the smart home market and makes it accessible to ordinary people, enabling them to experience an intelligent, high-quality life at a moderate price and making smart homes more popular in the Chinese market. The appearance of smart routers also opens new vistas for the smart home and high-end router markets, bringing fresh vibrancy and business opportunities to China. With further research and development of smart router technology, an industry standard for China's smart home sector will also see breakthroughs. In researching and developing the smart router we encountered some unavoidable problems; for example, despite a strong awareness of environmental protection and the measures we took during research and development, the external environment was still affected. We believe that when the products are mature enough for mass production, the relevant technologies will be applied to prevent or reduce the risk of environmental pollution at the source.

Acknowledgement {#SEC: Acknowledgement}
================

This research was supported by the Department of Civil Engineering and the Department of Computer Science$\&$Engineering, Jinjiang College, Sichuan University. I would like to express my thanks to Prof.
Bingfa Lee's suggestions and guidance, as well as to Guanguuan Yang$\&$Zhuo Li and Hui Zhang, whose books gave me much inspiration.
---
abstract: 'Cosmic acceleration is explained quantitatively, as an apparent effect due to gravitational energy differences that arise in the decoupling of bound systems from the global expansion of the universe. "Dark energy" is a misidentification of those aspects of gravitational energy which by virtue of the equivalence principle cannot be localised, namely gradients in the energy due to the expansion of space and spatial curvature variations in an inhomogeneous universe. A new scheme for cosmological averaging is proposed which solves the Sandage–de Vaucouleurs paradox. Concordance parameters fit supernovae luminosity distances, the angular scale of the sound horizon in the CMB anisotropies, and the effective comoving baryon acoustic oscillation scale seen in galaxy clustering statistics. Key observational anomalies are potentially resolved, and unique predictions made, including a quantifiable variance in the Hubble flow below the scale of apparent homogeneity.'
address: 'Department of Physics and Astronomy, University of Canterbury, Private Bag 4800, Christchurch 8140, New Zealand'
author:
- 'David L. Wiltshire'
title: Gravitational energy and cosmic acceleration
---

[March 2007. An essay which received [*Honorable Mention*]{} in the 2007 Gravity Research Foundation Essay Competition.]{}

Introduction
============

Our most widely tested "concordance model" of the universe relies on the assumption of an isotropic homogeneous geometry, in spite of the fact that at the present epoch the observed universe is anything but smooth on scales less than 150–300 Mpc. What we actually observe is a foam–like structure, with clusters of galaxies strung in filaments and bubbles surrounding huge voids.
Recent surveys suggest that some 40–50% of the volume of the universe is in voids of a characteristic scale 30$h^{-1}$ Mpc, where $h$ is the dimensionless Hubble parameter, $\Hm=100h\kmsMpc$. If larger supervoids and smaller minivoids are included, then it is fair to say that our observed universe is presently void–dominated. It is nonetheless true that a broadly isotropic Hubble flow is observed, which means that a nearly smooth Friedmann–Lemaître–Robertson–Walker (FLRW) geometry must be a good approximation at some level of averaging, if our position is a typical one. In this essay, I will argue, however, that in arriving at a model of the universe which is dominated by a mysterious form of “dark energy” that violates the strong energy condition, we have overlooked subtle physical properties of general relativity in interpreting the relationship of our own measurements to the average smooth geometry. In particular, “dark energy” is a misidentification of those aspects of gravitational energy which by virtue of the equivalence principle cannot be localized. The proposed re–evaluation of cosmological measurements on the basis of a universal [*finite infinity*]{} scale determined by primordial inflation, leads to a new model for the universe. This model appears to pass key observational tests, potentially resolves anomalies, and makes new quantitative predictions. The fitting problem =================== In an arbitrary inhomogeneous spacetime the rods and clocks of any set of observers can only reliably measure local geometry. They give no indication of measurements elsewhere or of the universe’s global structure. By contrast, in an isotropic homogeneous universe, where ideal observers are comoving particles in a uniform fluid, measurements made locally are the same as those made elsewhere on suitable time slices, on account of global symmetries. Our own universe is somewhere between these two extremes. 
By the evidence of the cosmic microwave background (CMB) radiation, the universe was very smooth at the time of last scattering, and the assumption of isotropy and homogeneity was valid then. At the present epoch we face a much more complicated fitting problem [@fit1] in relating the geometry of the solar system, to that of the galaxy, to that of the local group and the cluster it belongs to, and so on up to the scale of the average observer in a cell which is effectively homogeneous. When we conventionally write down a FLRW metric $$\dd\bar s^2 = -\dd t^2 + \ab^2(t)\,\dd\OM^2_{k} \label{FLRW}$$ where $\dd\OM^2_{k}$ is the 3–metric of a space of constant curvature, we ignore the fitting problem. In particular, even if the rods and clocks of an ideal isotropic observer can be matched closely to the geometry (\[FLRW\]) at a volume–average position, there is no requirement of theory, principle or observation that demands that such volume–average measurements coincide with ours. The fact that we observe an almost isotropic CMB means that other observers should also measure an almost isotropic CMB, if the Copernican principle is assumed. However, it does not demand that other ideal isotropic observers measure the same mean CMB temperature as us, nor the same angular scale for the Doppler peaks in the anisotropy spectrum. Significant differences can arise due to gradients in gravitational energy and spatial curvature. In general relativity space is dynamical and can carry energy and momentum. By the strong equivalence principle, since the laws of physics must coincide with those of special relativity at a point, it is only internal energy that can be localized in an energy–momentum tensor on the r.h.s. of the Einstein equations. Thus the uniquely relativistic aspects of gravitational energy associated with spatial curvature and geometrodynamics cannot be included in the energy momentum tensor, but are at best described by a quasilocal formulation [@quasi]. The l.h.s.
of the Friedmann equation derived from (\[FLRW\]) can be regarded as the difference of a kinetic energy density per unit rest mass, $E\ns{kin}=\half{\dot\ab^2\over\ab^2}$ and a total energy density per unit rest mass $E\ns{tot}=-\half{k\over\ab^2}$ of the opposite sign to the Gaussian curvature, $k$. Such terms represent forms of gravitational energy, but since they are identical for all observers in an isotropic homogeneous geometry, they are not often discussed in introductory cosmology texts. Such discussions appear rarely in the cases of specific inhomogeneous models, such as the Lemaître–Tolman–Bondi (LTB) solutions. In an inhomogeneous cosmology, gradients in the kinetic energy of expansion and in spatial curvature, will be manifest in the Einstein tensor, leading to variations in gravitational energy that cannot be localized. The observation that space is not expanding within bound systems implies that a kinetic energy gradient must exist between bound systems and the volume average in expanding space. Furthermore, the fact that space within galaxies is well approximated by asymptotically flat geometries implies that if there is significant spatial curvature within our present horizon volume, then a spatial curvature gradient should also contribute to the gravitational energy difference between bound systems and the volume average. Finite infinity and boundary conditions from primordial inflation ================================================================= In his pioneering work on the fitting problem, Ellis [@fit1] suggested the notion of [*finite infinity*]{}, “[*fi*]{}$\,$”, as being a timelike surface within which the dynamics of an isolated system such as the solar system can be treated without reference to the rest of the universe. Within finite infinity spatial geometry might be considered to be effectively asymptotically flat, and governed by “almost” Killing vectors. 
Quasilocal gravitational energy is generally defined in terms of surface integrals with respect to surfaces of a fiducial spacetime, and for the discussions of binding energy and rotational energy to which the quasilocal approach is commonly applied, asymptotic flatness is usually assumed. I propose that to quantify cosmological gravitational energy with respect to observers in bound systems an appropriate notion of finite infinity must be used as the fiducial reference point, since bound systems can be considered to be almost asymptotically flat. To date Ellis’ 1984 suggestion [@fit1] has not been further developed, perhaps because there is no obvious way to define finite infinity in an arbitrary inhomogeneous background. To proceed I will make the crucial observation that since our universe was effectively homogeneous and isotropic at last scattering, a notion of a universal critical density scale did exist then. It was the density required for gravity to overcome the initial uniform expansion velocity of the dust fluid. I will assume, as consistent with primordial inflation, that the present horizon volume of the universe was very close to the critical density at last scattering, with scale–invariant perturbations. Since the evolution of inhomogeneities involves back–reaction we must use an averaging scheme such as that developed by Buchert [@buch1]. An important lesson of such schemes is that averaging a quantity such as the density, and then evolving the average by the Friedmann equation, is not the same as evolving the inhomogeneous Einstein equations and then taking the average. Thus even if our present horizon volume, $\cal H$, was close to critical density at last scattering, differing perhaps by a factor of $\left.\de\rh/\rh\right|\Z{{\cal H}i}\goesas -10^{-5}$, the present horizon volume can nonetheless have a density well below critical. 
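Buchert's averaging scheme referred to here can be quoted for orientation. For irrotational dust averaged over a comoving domain ${\cal D}$ with volume scale factor $a_{\cal D}$, the standard averaged equations of [@buch1] take the form (notation here is generic, not necessarily that of ref. [@opus]):

```latex
% Averaged Raychaudhuri and Hamiltonian constraint equations for dust
3\frac{\ddot a_{\cal D}}{a_{\cal D}} \;=\; -4\pi G\,\langle\rho\rangle_{\cal D}
  \;+\; {\cal Q}_{\cal D}\,, \qquad
3\,\frac{\dot a_{\cal D}^{\,2}}{a_{\cal D}^{2}} \;=\; 8\pi G\,\langle\rho\rangle_{\cal D}
  \;-\; \tfrac{1}{2}\langle{\cal R}\rangle_{\cal D} \;-\; \tfrac{1}{2}{\cal Q}_{\cal D}\,,

% Kinematical backreaction: variance of the expansion minus shear
{\cal Q}_{\cal D} \;=\; \tfrac{2}{3}\left(\langle\theta^{2}\rangle_{\cal D}
  - \langle\theta\rangle_{\cal D}^{2}\right) \;-\; 2\langle\sigma^{2}\rangle_{\cal D}\,.
```

The noncommutativity of averaging and time evolution noted above enters precisely through ${\cal Q}_{\cal D}$: a domain close to critical density at last scattering need not remain at the Friedmann critical density once inhomogeneity develops.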
Furthermore, the present [*true critical density*]{} or [*closure density*]{} which demarcates a bound system from an unbound region, can be very different from the notional critical density inferred from a FLRW model using the presently measured global Hubble constant, $\Hm$. This circumstance can in fact be understood as an outcome of cosmic variance combined with the scale–invariance of the primordial perturbation spectrum resulting from inflation, and the subsequent causal evolution of such inhomogeneities [@opus]. In ref. [@opus] I provide a technical definition of finite infinity in terms of the evolution of the true critical density. Finite infinity represents an averaging scale with a non–static boundary analogous to the spheres cut out in the Einstein–Straus Swiss cheese model [@gruyere], but it involves average geometry rather than matching exact solutions, and no assumptions about homogeneity are made outside finite infinity. Finite infinity represents a physical scale expected to lie outside virialized galaxy clusters, but within the filamentary walls surrounding voids. Since it is a scale related to the true critical density, space at finite infinity boundaries can be described by the spatially flat metric $$\dd s^2=-\dd\tw^2+a\ns{w}^2(\tw)\left[\dd\eta\ns{w}^2+\eta\ns{w}^2\,\dd\Omega^2\right]. \label{figeom}$$ Beyond finite infinity, the spatial geometry is not given by (\[figeom\]). Since we live in a universe dominated by voids at the present epoch, a two–scale approximation can be developed by assuming that the geometry near the centres of voids is given by $$\dd s^2=-\dd\tv^2+a\ns{v}^2(\tv)\left[\dd\eta\ns{v}^2+\sinh^2(\eta\ns{v})\,\dd\Omega^2\right], \label{vogeom}$$ where the local spatial curvature is negative, differing from that of (\[figeom\]) determined by observers in galaxies within finite infinity. Furthermore the local void time parameter, $\tv$, differs from that within finite infinity regions, on account of gravitational energy differences.
Clocks run slower where mass is concentrated, but because this time dilation relates mainly to energy associated with spatial curvature gradients, the differences can be significantly larger than those we would arrive at in considering only binding energy below the finite infinity scale, which is very small. In ref. [@opus] Buchert’s scheme is applied to the evolution of a volume average of the two geometries (\[figeom\]) and (\[vogeom\]). The average spatial geometry does not have a simple uniform Gaussian curvature, but can be described in terms of an effective scale factor $\ab(t)\equiv\left[\Vav(t)/\Vav_i\right]^{1/3}$ related to the evolution of the spatial volume, $\Vav$, over a suitable averaging scale, which I identify as the time evolution of the present particle horizon volume. The volume–average geometry which replaces the metric (\[FLRW\]) of a FLRW universe will also have a time parameter, $t$, which differs from that measured by observers within finite infinity regions, via a mean lapse function $\gb(t)=\dd t/\dd\tw$. Effectively, $\tw$ is a universal cosmic time set by the almost stationary Killing vector of the finite infinity scale, which differs from the local time of a volume–average observer. Such a volume–average observer would measure an older age of the universe, and a lower mean CMB temperature. Although this may seem surprising, it is entirely consistent with observation, since we exchange photons with other bound systems which keep a time close to the universal finite infinity time scale, if binding energy is neglected. At early times, $\gb\simeq1$. It grows monotonically, reaching values of order $\gc\goesas1.38$ today. The observation of an isotropic Hubble flow is satisfied by adopting a uniform expansion gauge, [*when expansion is referred to local measurements*]{}, as a change of proper length with respect to proper time.
Referred to any one set of clocks, it appears that voids expand faster than the filamentary bubble walls where galaxy clusters are located. Nonetheless, if we take account of the fact that clocks tick faster in voids, the locally measured expansion can still be uniform. This provides an implicit resolution of the Sandage–de Vaucouleurs paradox: in the standard FLRW paradigm, the statistical scatter in the Hubble flow should be so large that no Hubble constant can be extracted below the scale of homogeneity. Yet Hubble originally derived a linear law on scales of 20 Mpc, of order 10% of the scale of apparent homogeneity. While dark energy has been invoked in a qualitative way to explain the Sandage–de Vaucouleurs paradox, quantitative attempts to resolve the paradox have not been fully successful in the $\Lambda$CDM paradigm. The new model universe [@opus] makes a quantitative prediction as to the variance of the Hubble flow below the scale of apparent homogeneity. The Hubble parameter observed within filamentary walls, $\bH=\gb^{-1}\Hh+\gb^{-2}\Dtc\gb$, is lower than the global average Hubble parameter, $\Hh$. Similarly, a Hubble parameter larger than the global average will be observed across the nearest large voids of diameter 30$h^{-1}$ Mpc. Since voids occupy a greater volume of space than bubble walls, an isotropic average over small redshifts will give an overall higher Hubble constant locally until the scale of apparent homogeneity is reached, when we sample the global average fractions of walls and voids. This is consistent with the observed “Hubble bubble” feature [@JRK; @essence]. Apparent cosmic acceleration without “dark energy” ================================================== The volume–average Buchert equations for the two scale model have been integrated[^1] in ref. [@opus], and best–fit parameters for initial conditions consistent with the evidence of the CMB radiation, and the expectations of primordial inflation, are provided in ref. [@LNW]. 
The present model provides a definitive quantitative answer to the debate about whether back–reaction can mimic cosmic acceleration [@camp1; @camp2]. It is found that, as measured by a volume–average observer, the expansion appears to decelerate, albeit with a deceleration parameter close to zero. This would vindicate the claims of those who have argued that back–reaction is too small to be a source of cosmic acceleration [@camp2], if the position of the observer could be neglected. However, we are in a bound system, not expanding space, and observers in galaxies whose clocks tick slower than the volume average can nonetheless still register apparent acceleration. Gravitational energy and spatial curvature gradients between bound systems and the volume–average position in a void–dominated universe – the intrinsic physics of a curved expanding space and its effect on measurements – are therefore the essential ingredients to understanding apparent cosmic acceleration. By overlooking the operational basis of measurements in general relativity, we have come to misidentify gravitational energy gradients as “dark energy”. The coincidence as to why cosmic “acceleration” should occur at the same epoch when the largest structures form is naturally solved. Voids are associated with negative spatial curvature, and negative spatial curvature is associated with the positive gravitational energy which is largely responsible for the gradient between bound systems and the volume average. Since gravitational energy directly affects relative clock rates, it is at the epoch when the gravitational energy gradient changes significantly that apparent cosmic acceleration is seen. In the new paradigm, apparent cosmic acceleration, which occurs for redshifts of order $z\lsim0.9$, is less extreme than in the $\Lambda$CDM paradigm, and the universe is very close to a coasting Milne universe at late times, in accord with observation. 
Fits to the Riess06 gold data set [@Riess06] yield values of $\chi^2\simeq0.9$ per degree of freedom [@opus; @LNW], while a Bayesian model comparison indicates that the results are statistically indistinguishable from the $\Lambda$CDM model [@LNW]. As shown in Fig. 1, parameters can be found which simultaneously fit supernovae, the angular scale of the sound horizon which sets the angular scale of the CMB Doppler peaks, and the effective “comoving” scale of the baryon acoustic oscillation as measured in galaxy clustering statistics [@bao]. It is remarkable that these values agree precisely with the value of the Hubble constant recently determined by the HST team of Sandage [@Sandage], as this value differs by 14% from that claimed as a best–fit to the $\Lambda$CDM paradigm with WMAP [@wmap]. The new paradigm may also resolve observational anomalies. The expansion age is larger, allowing more time for structure formation. The universe is typically about 14.7 Gyr old as viewed from a galaxy, or 18.6 Gyr at the volume average. Since the baryon–to–photon ratio is conventionally defined at the volume average, a systematic recalibration of cosmological parameters is required. Using standard big bang nucleosynthesis bounds, a best–fit ratio of non–baryonic matter to baryonic matter of 3:1 is found. Non–baryonic dark matter is still very significant, but reduced relative to the $\Lambda$CDM paradigm, making it possible to have enough baryon drag to fit the ratio of heights of the first two Doppler peaks, while simultaneously better fitting helium abundances and potentially resolving the lithium abundance anomaly [@lithium]. The angular scale of the Doppler peaks is often claimed to be a “measure of spatial curvature”, but that is only true in the FLRW paradigm, when spatial curvature is assumed to be the same everywhere. 
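The two ages quoted above admit a quick consistency check, assuming the mean lapse relation ${\rm d} t=\gb\,{\rm d}\tw$ between volume–average and wall time (my own arithmetic, not a result of refs. [@opus; @LNW]):

```latex
t_0=\int_0^{\tau_0}\gb\,{\rm d}\tw
\quad\Longrightarrow\quad
\langle\gb\rangle=\frac{t_0}{\tau_0}
\simeq\frac{18.6\ {\rm Gyr}}{14.7\ {\rm Gyr}}\simeq 1.27 .
```

The time-averaged lapse $\simeq1.27$ lies between the early-time value $\gb\simeq1$ and the quoted present value $\simeq1.38$, as it must for a monotonically growing lapse.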
In the new paradigm, the angular scale might be claimed as a measure of [*local*]{} spatial curvature in the 24% of our present epoch horizon volume occupied by the filamentary bubble walls, where galaxies are located. As Fig. \[fig\] demonstrates, the angular scale can still be fit despite average negative spatial curvature at the present epoch. This in fact may also resolve the anomaly associated with ellipticity in the CMB anisotropy spectrum [@ellipticity], the observation of which implies the greater geodesic mixing associated with average negative spatial curvature. The question of other anomalies, and several possible directions of research are discussed at length in ref. [@opus]. Conclusion ========== Cosmic acceleration can be quantitatively explained within general relativity as an apparent effect due to gravitational energy differences that arise in the decoupling of bound systems from the global expansion of the universe, without any exotic “dark energy”. We must account not only for the back–reaction of inhomogeneities in the Einstein equations, but also for the fact that as observers in bound systems our measurements can differ systematically from those at the volume average. This entails understanding the subtle aspects of gravitational energy that exist by virtue of the equivalence principle, the dynamical nature of general relativity and boundary conditions from primordial inflation. The results of refs. [@opus; @LNW] show that it is likely that a viable concordance model of the universe will be found. Detailed modelling will not only rely heavily on new observational data, but will also be an adventure into as yet still largely unexplored theoretical aspects of general relativity. The revolution that Einstein began exactly 100 years ago, when he first thought about the equivalence principle in 1907, is not yet over. 
It remains to us, the generation that has had the first real glimpse of what the universe actually looks like, to think equally deeply about the operational issues surrounding measurements, and the conceptual basis of general relativity, in its application to the universe as a whole. This work was supported by the Marsden Fund of the Royal Society of New Zealand. References {#references .unnumbered} ========== [66]{} G.F.R. Ellis, in B. Bertotti, F. de Felice and A. Pascolini (eds), [*General Relativity and Gravitation*]{} (Reidel, Dordrecht, 1984) pp. 215–288. L.B. Szabados, Living Rev. Rel. [**7**]{} (2004) 4. T. Buchert, Gen. Rel. Grav. [**32**]{} (2000) 105; [**33**]{} (2001) 1381. D.L. Wiltshire, New J. Phys. [**9**]{} (2007) 377. A. Einstein and E.G. Straus, Rev. Mod. Phys. [**17**]{} (1945) 120; Err. ibid. [**18**]{} (1946) 148. S. Jha, A.G. Riess and R.P. Kirshner, Astrophys. J. [**659**]{} (2007) 122. W.M. Wood-Vasey [*et al.*]{}, Astrophys. J. [**666**]{} (2007) 694. D.L. Wiltshire, Phys. Rev. Lett. [**99**]{} (2007) 251101. B.M. Leith, S.C.C. Ng and D.L. Wiltshire, Astrophys. J. [**672**]{} (2008) L91. T. Buchert, Class. Quantum Grav. [**22**]{} (2005) L113; [**23**]{} (2006) 817; M.N. Célérier, arXiv:astro-ph/0702416; and references therein. A. Ishibashi and R.M. Wald, Class. Quantum Grav. [**23**]{} (2006) 235. A.G. Riess [*et al.*]{}, arXiv:astro-ph/0611572. D.J. Eisenstein [*et al.*]{}, Astrophys. J. [**633**]{} (2005) 560; S. Cole [*et al.*]{}, Mon. Not. R. Astr. Soc. [**362**]{} (2005) 505. A. Sandage, G.A. Tammann, A. Saha, B. Reindl, F.D. Macchetto and N. Panagia, Astrophys. J. [**653**]{} (2006) 843. C.L. Bennett [*et al.*]{}, Astrophys. J. Suppl. [**148**]{} (2003) 1; D.N. Spergel [*et al.*]{}, arXiv:astro-ph/0603449. G. Steigman, Int. J. Mod. Phys. [**E 15**]{} (2006) 1; M. Asplund, D.L. Lambert, P.E. Nissen, F. Primas and V.V. Smith, Astrophys. J. [**644**]{} (2006) 229. V.G. Gurzadyan, C.L. Bianco, A.L. Kashin, H. Kuloghlian and G. Yegorian, Phys. Lett. A [**363**]{} (2007) 121; V.G. Gurzadyan [*et al.*]{}, Mod. Phys. Lett. A [**20**]{} (2005) 813. [^1]: [*Note added*]{}: Numerical integrations were initially undertaken in ref. [@opus]. However, after this essay was submitted an exact solution of the two–scale Buchert equations was subsequently obtained [@sol].
--- author: - | Arttu Rajantie\ Theoretical Physics, Blackett Laboratory, Imperial College, London SW7 2AZ, UK\ E-mail: bibliography: - 'monomass.bib' title: 'Mass of a quantum ’t Hooft-Polyakov monopole' --- Introduction ============ ’t Hooft-Polyakov monopoles [@'tHooft:1974qc; @Polyakov:1974ek] are topological solitons in the Georgi-Glashow model [@Georgi:1972cj] and a wide range of other gauge field theories, including super Yang Mills theories and grand unified theories. They are non-linear objects in which energy is localised around a point in space and which therefore appear as point particles, and they carry non-zero magnetic charge. It is possible that these monopoles actually exist in nature, but so far they have not been discovered[^1] despite extensive searches [@Milton:2001qj]. However, ’t Hooft-Polyakov monopoles are very important theoretically, because they provide a new way of looking at non-Abelian gauge field theories, complementary to the usual perturbative picture. In particular, this has shed more light on the puzzle of confinement [@Mandelstam:1974pi; @tHooftTalk]. So far, concrete results have been limited to supersymmetric theories. The main reason for the lack of progress in non-supersymmetric theories is the difficulty of treating the quantum corrections to the classical monopole solution. For instance, calculating the quantum correction to a soliton mass is a complicated task. Even in simple one-dimensional models, it can typically only be calculated to one-loop order [@Dashen:1974cj], and for ’t Hooft-Polyakov monopoles the situation is even worse as only the leading logarithm is known [@Kiselev:1988gf]. This difficulty is avoided in supersymmetric models, because the symmetry protects the mass from quantum corrections. In this paper, the quantum mechanical mass of a ’t Hooft-Polyakov monopole is calculated using lattice Monte Carlo simulations. The method was developed in Ref. 
[@Davis:2000kv] and has been used earlier [@Davis:2001mg] in a 2+1-dimensional model in which the monopoles are instanton-like space-time events rather than particle excitations. The mass is defined using the free-energy difference between sectors with magnetic charges one and zero, and the corresponding ensembles are constructed using suitably twisted boundary conditions. This method has several advantages over the alternative approaches based on creation and annihilation operators [@Frohlich:1998wq; @Belavin:2002em; @Khvedelidze:2005rv] or fixed boundary conditions [@Smit:1993gy; @Cea:2000zr]. In particular, it gives a unique, unambiguous result, since it requires neither gauge fixing, choice of a classical field configuration nor identification of individual monopoles in the field configurations. Analogous twisted boundary conditions have been used before to compute soliton masses in simpler models, such as 1+1-dimensional scalar field theory [@Ciria:1993yx], 3+1-dimensional compact U(1) gauge theory [@Vettorazzo:2003fg] and the 2+1-dimensional Abelian Higgs model [@Kajantie:1998zn]. In the latter case, the results provided evidence for an asymptotic duality near the critical point [@Kajantie:2004vy]: The model becomes equivalent to a scalar field theory with a global O(2) symmetry, with vortices and scalar fields changing places. It is interesting to speculate whether an electric-magnetic duality might appear in the same way in the Georgi-Glashow model. These methods can, in principle, be used to test that conjecture. The outline of this paper is the following: The Georgi-Glashow model and the classical ’t Hooft-Polyakov solution are introduced in Section \[sect:model\]. In Section \[sect:lattice\], the model is discretised on the lattice and the lattice magnetic field is defined. The twisted boundary conditions are discussed in Section \[sect:twist\]. 
In Sections \[sect:classmass\] and \[sect:simu\] the classical and quantum mechanical monopole masses are computed, and the results are discussed in Section \[sect:discuss\]. Finally, conclusions are presented in Section \[sect:conclude\]. Georgi-Glashow model {#sect:model} ==================== The 3+1-dimensional Georgi-Glashow model [@Georgi:1972cj] consists of an SU(2) gauge field $A_\mu$ and a Higgs field $\Phi$ in the adjoint representation, with the Lagrangian $${\cal L}=-\frac{1}{2}{\rm Tr}F_{\mu\nu}F^{\mu\nu} +{\rm Tr}[D_\mu,\Phi][D^\mu,\Phi]-m^2{\rm Tr}\Phi^2-\lambda\left({\rm Tr}\Phi^2\right)^2,$$ where the covariant derivative $D_\mu$ and the field strength tensor are defined as $D_\mu=\partial_\mu+igA_\mu$ and $F_{\mu\nu}=[D_\mu,D_\nu]/ig$. $A_\mu$ and $\Phi$ are traceless, self-adjoint $2\times 2$ matrices; they can be represented as linear combinations of Pauli $\sigma$ matrices, $$\sigma_1=\left(\matrix{0 & 1 \cr 1 & 0}\right),\quad \sigma_2=\left(\matrix{0 & -i \cr i & 0}\right),\quad \sigma_3=\left(\matrix{1 & 0 \cr 0 & -1 }\right),$$ as $A_\mu=A_\mu^a\sigma^a$, $\Phi=\Phi^a\sigma^a$. At the classical level, the model has two dimensionless parameters, the coupling constants $g$ and $\lambda$, and the scale is set by $m^2$. When $m^2$ is negative, the SU(2) symmetry is broken spontaneously to U(1) by a non-zero vacuum expectation value of the Higgs field ${\rm Tr}\Phi^2=v^2/2\equiv|m^2|/2\lambda$. In the broken phase, the particle spectrum consists of a massless photon, electrically charged $W^{\pm}$ bosons with mass $m_W=gv$, a neutral Higgs scalar with mass $m_H=\sqrt{2\lambda}v$ and massive magnetic monopoles [@'tHooft:1974qc; @Polyakov:1974ek]. 
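Since the rest of the paper leans on these conventions, a quick numerical sanity check may be useful (my own sketch, not from the paper): it verifies the algebra $\sigma_i\sigma_j=\delta_{ij}{\mathbf 1}+i\epsilon_{ijk}\sigma_k$ and, as a consequence, that any $\Phi=\Phi^a\sigma^a$ with real $\Phi^a$ is traceless and self-adjoint, and that $\Phi^2$ is a multiple of the identity (a fact used later when defining $\hat\Phi$).

```python
import numpy as np

# Pauli matrices as defined in the text
sigma = np.array([[[0, 1], [1, 0]],
                  [[0, -1j], [1j, 0]],
                  [[1, 0], [0, -1]]], dtype=complex)
I2 = np.eye(2)

# Levi-Civita symbol
eps = np.zeros((3, 3, 3))
eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1
eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1

# Check sigma_i sigma_j = delta_ij 1 + i eps_ijk sigma_k
alg_err = max(np.abs(sigma[i] @ sigma[j]
                     - ((i == j) * I2
                        + 1j * sum(eps[i, j, k] * sigma[k] for k in range(3)))).max()
              for i in range(3) for j in range(3))

# A generic adjoint field Phi = Phi^a sigma^a: traceless, self-adjoint,
# and Phi^2 proportional to the 2x2 identity
a = np.array([0.3, -1.1, 0.7])
Phi = np.einsum('a,aij->ij', a, sigma)
tr_err = abs(np.trace(Phi))
herm_err = np.abs(Phi - Phi.conj().T).max()
sq_err = np.abs(Phi @ Phi - (a @ a) * I2).max()
print(alg_err, tr_err, herm_err, sq_err)   # all vanish to machine precision
```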
The terms “electric” and “magnetic” refer to the effective U(1) field strength tensor defined as [@'tHooft:1974qc] $$\label{equ:fieldstrength} {\cal F}_{\mu\nu}={\rm Tr}\hat\Phi F_{\mu\nu}-\frac{i}{2g}{\rm Tr} \hat\Phi[D_\mu,\hat\Phi][D_\nu,\hat\Phi].$$ In any smooth field configuration, the corresponding magnetic field ${\cal B}_i=\epsilon_{ijk}{\cal F}_{jk}/2$ is sourceless (i.e., $\vec\nabla\cdot \vec{\cal B}=0$) whenever $\Phi\ne 0$. This is easy to see in the unitary gauge, in which $\Phi\propto\sigma_3$, because Eq. (\[equ:fieldstrength\]) reduces to ${\cal F}_{\mu\nu}=\partial_\mu A^3_\nu-\partial_\nu A^3_\mu$ and therefore $\vec{\cal B}=\vec\nabla \times\vec{A}^3$. At zeros of $\Phi$, the divergence is $\pm 4\pi/g$ times a delta function, indicating a magnetic charge of $q_M=4\pi/g$. The classical ’t Hooft-Polyakov monopole solution [@'tHooft:1974qc; @Polyakov:1974ek] is of the form $$\begin{aligned} \Phi^a&=&\frac{r_a}{gr^2}H(gvr), \nonumber\\ A_i&=&-\epsilon_{aij}\frac{r_j}{gr^2}\left[1-K(gvr)\right],\end{aligned}$$ where $H(x)$ and $K(x)$ are functions that have to be determined numerically. It is easy to check that this solution carries a magnetic charge in the above sense. Because the energy is localised around the origin, the solution describes a particle. Once the functions $H(x)$ and $K(x)$ have been found, it is easy to integrate the energy functional to calculate the mass of the particle, as it is simply given by the total energy of the configuration. The energy density falls as $\rho\sim 1/(2g^2r^4)$, implying that the mass is finite but also that there is a long-range magnetic Coulomb force between monopoles, as expected. The classical monopole mass $M_{\rm cl}$ can be written as $$\label{equ:classmass} M_{\rm cl}=\frac{4\pi m_W}{g^2}f(z),$$ where $f(z)$ is a function of $z=m_H/m_W$ and is known to satisfy $f(0)=1$ [@Bogomolny:1975de; @Prasad:1975kr]. It has recently been calculated numerically to a high accuracy [@Forgacs:2005vx]. 
Asymptotic expressions for small and large $z$ had been found earlier [@Kirkman:1981ck; @Gardner:1982fk], but the authors of Ref. [@Forgacs:2005vx] reported that they had found an error in the small-$z$ expansion. According to them, the correct expansion is $$f(z)=1+\frac{1}{2}z+\frac{1}{2}z^2\left( \ln3\pi z-\frac{13}{12}-\frac{\pi^2}{36} \right)+O(z^3).$$ For large $z$, they found that $$f(z)=1.7866584240(2)-2.228956(7)z^{-1}+7.14(1)z^{-2}+O(z^{-3}).$$ In quantum theory, the mass of a soliton can be defined as the difference between the ground state energies in the sectors with one and zero charge. In principle, it is possible to calculate this perturbatively to leading order [@Dashen:1974cj]. First, one needs to find the classical solution $\phi_0(x)$, and consider small fluctuations $\delta(t,x)$ around it, $$\phi(t,x)=\phi_0(x)+\delta(t,x).$$ When one drops higher-order terms in the Lagrangian, one is left with the field $\delta$ in a harmonic $x$-dependent potential $U(\delta)=\frac{1}{2}V''(\phi_0(x))\delta^2$. One needs to find the energy levels $\omega_k$ of this field by solving the eigenvalue equation $$\left[-\vec\nabla^2+V''(\phi_0(x))\right]\delta_k(x)=\omega_k^2\delta_k(x).$$ The one-loop correction to the soliton mass is then given simply by the difference in the zero-point energies of one- and zero-soliton sectors, $$M_{\rm 1-loop}=M_{\rm cl}+\frac{1}{2}\sum_k\left(\omega^1_k-\omega^0_k\right),$$ where $\omega^1_k$ refers to the energies in the soliton background and $\omega^0_k$ in the trivial vacuum. One has to be careful with degeneracies and ultraviolet divergences, but the calculation can be carried out exactly in, for instance, the 1+1-dimensional $\lambda\phi^4$ model. In the presence of a kink, the energy spectrum consists of two discrete levels $\omega_0^2=0$ and $\omega_1^2=(3/2)m^2$, and a continuum $\omega_q^2=(q^2/2+2)m^2$. 
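The two discrete kink levels can be checked by direct numerical diagonalisation. The sketch below is my own illustration (not code from the paper): it assumes the standard kink background $\phi_0(x)=v\tanh(mx/\sqrt{2})$, for which $V''(\phi_0)=m^2\left[3\tanh^2(mx/\sqrt{2})-1\right]$, discretises the eigenvalue equation with finite differences, and recovers $\omega_0^2=0$ and $\omega_1^2=(3/2)m^2$ with the continuum starting at $2m^2$:

```python
import numpy as np

m = 1.0                      # mass parameter; work in units m = 1
L, N = 15.0, 1201            # half-width of the box and number of grid points
x = np.linspace(-L, L, N)
dx = x[1] - x[0]

# Fluctuation potential around the kink phi_0 = v tanh(m x / sqrt(2))
Vpp = m**2 * (3.0 * np.tanh(m * x / np.sqrt(2))**2 - 1.0)

# Finite-difference Hamiltonian  H = -d^2/dx^2 + V''(phi_0)
H = np.diag(2.0 / dx**2 + Vpp)
off = -np.ones(N - 1) / dx**2
H += np.diag(off, 1) + np.diag(off, -1)

w2 = np.sort(np.linalg.eigvalsh(H))   # eigenvalues omega_k^2
print(w2[:3])   # zero mode, shape mode near 1.5, then the continuum above 2
```

On this grid the two lowest eigenvalues agree with the analytic values to roughly $10^{-3}$, while all higher levels lie above the continuum threshold $2m^2$.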
It is essential that one takes into account the same number of eigenvalues in the two sectors, and the best way to ensure this is to do the calculation on a finite lattice and take the lattice spacing to zero and the lattice volume to infinity afterwards. This gives the result [@Dashen:1974cj] $$M_{\rm kink}=\frac{2\sqrt{2}}{3}\frac{m^3}{\lambda}\left[ 1+\left(\frac{\sqrt{3}}{8}-\frac{9}{4\pi}\right)\frac{\lambda}{m^2}+O\left(\frac{\lambda^2}{m^4}\right) \right].$$ The one-loop calculation of the monopole mass would go along the same lines, but there are many extra complications, which make it technically more difficult. Instead of one field, one has to consider two coupled fields. The background solution is not known analytically except in the special case of $\lambda=0$, and even then, the eigenvalue equation cannot be solved analytically. It is difficult to regularise the theory without breaking either gauge or rotation invariance, and in any case, the theory has much stronger ultraviolet divergences. Nevertheless, because the monopole mass is a physical quantity, it is finite once the couplings have been renormalised. No separate renormalisation of the monopole mass is needed, and that means that the scale dependence of the resulting one-loop expression for the monopole mass would automatically be such that it cancels the running of the couplings. So far, only the leading logarithmic quantum correction near the BPS limit has been calculated [@Kiselev:1988gf], $$\label{equ:leadinglog} M=\frac{4\pi m_W}{g^2}\left[ 1+\frac{g^2}{8\pi^2}\left(\ln\frac{m_H^2}{m_W^2}+O(1) \right)+O(g^4)\right].$$ An interesting aspect of this result is that it is logarithmically divergent in the BPS limit. This is related to the Coleman-Weinberg effect [@Kiselev:1990fh], which makes it impossible to actually reach the BPS limit in the quantum theory. 
Quantum corrections give rise to a logarithmic term $\phi^4\log\phi$, which means that if one tries to decrease the scalar mass below a certain value, the vacuum becomes unstable. This leads to a constraint $m_H\gtrsim gm_W$. If one wants to be able to test Eq. (\[equ:leadinglog\]) numerically, the logarithmic term has to be much larger than the constant term next to it, but that is only possible if $g$ is very small. This, however, means that the whole quantum correction will be small and therefore more difficult to measure. Furthermore, having a large mass hierarchy such as $m_H\ll m_W$ means that one would need to use a very large lattice. For these reasons, a numerical test of Eq. (\[equ:leadinglog\]) is not attempted in this paper. Lattice discretisation {#sect:lattice} ====================== To study the model numerically, let us carry out a Wick rotation to Euclidean space and discretise the model in the standard way, $$\begin{aligned} \label{equ:latticeaction} {\cal L}_E & = & 2\sum_{\mu} \left[ {\rm Tr} \Phi(\vec{x})^2- {\rm Tr} \Phi(\vec{x}) U_\mu(\vec{x}) \Phi(\vec{x}+\hat{\mu}) U_\mu^\dagger(\vec{x})\right] \nonumber \\ & &+\frac{2}{g^2}\sum_{\mu<\nu}\left[2- {\rm Tr} U_{\mu\nu}(\vec{x}) \right] +m^2{\rm Tr}\ \Phi^2+\lambda({\rm Tr}\ \Phi^2)^2.\end{aligned}$$ The scalar field $\Phi$ is defined on lattice sites and the gauge field is represented by SU(2)-valued link variables $U_\mu$, which correspond roughly to $\exp(igA_\mu)$. The plaquette $U_{\mu\nu}$ is defined as $U_{\mu\nu}(\vec{x})=U_\mu(\vec{x})U_\nu(\vec{x}+\hat\mu) U^\dagger_\mu(\vec{x}+\hat\nu)U^\dagger_\nu(\vec{x})$. It will be crucial that the magnetic field can be defined on the lattice and that magnetic monopoles are therefore absolutely stable objects [@Davis:2000kv]. This is highly non-trivial, because many other topological objects such as Yang-Mills instantons are not well defined on the lattice [@Luscher:1981zq]. 
To define the discretised version of the field strength tensor ${\cal F}_{\mu\nu}$, note that the set of configurations with $\Phi=0$ at any lattice site is of measure zero and therefore these configurations do not contribute to any physical observables. One can therefore define a unit vector valued field $\hat\Phi=\Phi/\sqrt{\Phi^2}$. This expression makes sense because $\Phi^2$ is always proportional to the $2\times 2$ identity matrix ${\mathbf 1}$. Because $\hat\Phi^2={\mathbf 1}$, one can define a projection operator $\Pi_+=({\mathbf 1}+\hat\Phi)/2$. Let us use it to define the projected link variable $$u_\mu(x)=\Pi_+(x) U_\mu(x) \Pi_+(x+\hat\mu) ,$$ which is essentially the compact Abelian gauge field that corresponds to the unbroken U(1) subgroup. The corresponding Abelian field strength tensor is $$\alpha_{\mu\nu}=\frac{2}{g}\arg{\rm Tr}\, u_\mu(x)u_\nu(x+\hat\mu) u^\dagger_\mu(x+\hat\nu)u^\dagger_\nu(x),$$ and the lattice version of the magnetic field $$\label{equ:latB} \hat B_i=\frac{1}{2}\epsilon_{ijk}\alpha_{jk}.$$ The lattice magnetic field $\hat{B}_i$ is a well-defined, gauge-invariant quantity. The magnetic charge density is given by its divergence, $$\rho_M(x)=\sum_{i=1}^3\left[\hat{B}_i(x+\hat\imath)-\hat{B}_i(x)\right]\in \frac{4\pi}{g}{\mathbb Z},$$ which is quantised in units of $4\pi/g$. Being a divergence of a vector field, the magnetic charge is automatically a conserved quantity. Twisted boundary conditions {#sect:twist} =========================== Because the magnetic charge $Q_M=\int d^3x \rho_M$ defined in the previous section is a well-defined, gauge-invariant, quantised and conserved quantity even on the lattice, it is a well-defined question to ask what the lowest energy eigenvalue with $Q_M=4\pi/g$ is. Furthermore, since the total magnetic charge inside a volume is given by a surface integral over the boundary, one can fix the total charge in a simulation by choosing appropriate boundary conditions. 
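These definitions are easy to exercise numerically. The following sketch is my own test, not code from the paper: it generates random SU(2) links and a random adjoint direction $\hat\Phi$ on a small periodic lattice, evaluates $\alpha_{jk}$ and $\hat B_i$ exactly as above, and confirms that every local charge $\rho_M$ is an integer multiple of $4\pi/g$, while the total charge telescopes to zero with periodic boundaries.

```python
import numpy as np

rng = np.random.default_rng(1)
Ls, g = 4, 1.0                       # spatial lattice size and gauge coupling
s = np.array([[[0, 1], [1, 0]],
              [[0, -1j], [1j, 0]],
              [[1, 0], [0, -1]]], dtype=complex)   # Pauli matrices

def rand_su2():
    a = rng.normal(size=4); a /= np.linalg.norm(a)
    return a[0]*np.eye(2) + 1j*(a[1]*s[0] + a[2]*s[1] + a[3]*s[2])

# Random unit isovector -> projector Pi_+ = (1 + Phi_hat)/2 at each site
n = rng.normal(size=(Ls, Ls, Ls, 3))
n /= np.linalg.norm(n, axis=-1, keepdims=True)
P = 0.5*(np.eye(2) + np.einsum('xyza,aij->xyzij', n, s))
U = np.array([[[[rand_su2() for _ in range(Ls)] for _ in range(Ls)]
               for _ in range(Ls)] for _ in range(3)])   # U[mu, x, y, z]

def shift(x, mu):                     # periodic neighbour x + mu_hat
    y = list(x); y[mu] = (y[mu] + 1) % Ls; return tuple(y)

def u(mu, x):                         # projected link u_mu(x)
    return P[x] @ U[(mu,)+x] @ P[shift(x, mu)]

def alpha(mu, nu, x):                 # Abelian plaquette angle alpha_{mu nu}
    tr = np.trace(u(mu, x) @ u(nu, shift(x, mu))
                  @ u(mu, shift(x, nu)).conj().T @ u(nu, x).conj().T)
    return (2.0/g)*np.angle(tr)

def B(i, x):                          # B_i = (1/2) eps_ijk alpha_jk = alpha_jk, (i,j,k) cyclic
    return alpha((i+1) % 3, (i+2) % 3, x)

charges = []
for x in np.ndindex(Ls, Ls, Ls):
    rho = sum(B(i, shift(x, i)) - B(i, x) for i in range(3))
    charges.append(rho / (4*np.pi/g))        # should be an integer
dev = max(abs(c - round(c)) for c in charges)
total = sum(charges)
print(dev, total)                     # both tiny: quantised and conserved
```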
In practice, one can therefore define separate partition functions $Z_{N}$ for each magnetic charge sector, $$Z_{N}=\int_{N} DU_\mu D\Phi \exp(-S[U_\mu,\Phi]),$$ where the boundary conditions for each sector are such that they fix the magnetic charge to $Q_M=4\pi N/g$, i.e., the net number of monopoles is $N$. Since monopoles of the same charge are not expected to form bound states, and since their interaction potential decreases with distance as $1/r$, they will be non-interacting provided that the lattice is large enough. Denoting the length of the time direction by $T$, the partition function is therefore $$Z_N=\exp(-|N|MT)Z_0,$$ where $M$ is the quantum mechanical mass of the monopole, and $Z_0$ is the partition function with $N=0$. In particular, one can express the mass as $$M=-\frac{1}{T}\ln\frac{Z_1}{Z_0}.$$ It was shown in Ref. [@Davis:2000kv] that this can be achieved by using suitably “twisted” boundary conditions. It is clear that periodic boundary conditions will not be useful, because they will fix the total charge to zero. On the other hand, they have the attractive feature that they preserve translation invariance and therefore, as all lattice points are equivalent, there will be no physical boundary. However, this does not require perfect periodicity: Periodicity up to the symmetries of the Lagrangian is enough. An obvious alternative is to use C-periodic boundary conditions [@Kronfeld:1990qu], $$\begin{aligned} \label{equ:Cper} U_\mu(x+N\hat\jmath)&=&U_\mu^*(x)=\sigma_2U_\mu(x)\sigma_2,\nonumber\\ \Phi(x+N\hat\jmath)&=&\Phi^*(x)=-\sigma_2\Phi(x)\sigma_2.\end{aligned}$$ They flip the sign of the magnetic field as one goes through the boundary and therefore allow non-zero magnetic charges. In fact, it turns out that C-periodic boundary conditions allow the magnetic charge to have any even value [@Davis:2000kv]. 
This means that the one-monopole sector is still excluded, and also that in practice the magnetic charge will always be zero, because as long as $M$ is non-zero and $T$ is chosen to be large enough (i.e., $T\gg 1/M$), $$Z_{\rm C}=\sum_{k=-\infty}^\infty Z_{2k}=Z_0\left(1+O(e^{-2MT})\right).$$ When the boundary conditions in Eq. (\[equ:Cper\]) are written in the matrix form, it becomes obvious that one could have used $\sigma_1$ or $\sigma_3$ instead of $\sigma_2$. They are all related to each other by gauge transformations and therefore describe identical physical situations. However, this observation allows one to define twisted boundary conditions $$\begin{aligned} \label{equ:twisted} U_\mu(x+N\hat\jmath)&=&\sigma_jU_\mu(x)\sigma_j,\nonumber\\ \Phi(x+N\hat\jmath)&=&-\sigma_j\Phi(x)\sigma_j,\end{aligned}$$ which are locally equivalent to Eq. (\[equ:Cper\]), but not globally. It is possible to carry out a gauge transform to turn the boundary conditions to Eq. (\[equ:Cper\]) in any single direction, but it is not possible to do it to all three directions simultaneously. Considering the total charge of the lattice, one finds that these twisted boundary conditions only allow odd values [@Davis:2000kv], and therefore $$Z_{\rm tw}=\sum_{k=-\infty}^\infty Z_{2k+1}=Z_1\left(2+O(e^{-2MT})\right).$$ Thus, the ratio of the partition functions with twisted and C-periodic boundary conditions can be used to calculate the monopole mass, $$\label{equ:massdef} -\frac{1}{T}\ln\frac{Z_{\rm tw}}{Z_{\rm C}}=M-\frac{\ln 2}{T}+O(e^{-2MT})\to M \quad\mbox{as $T\rightarrow\infty$}.$$ As such, this expression is of little use, because it is not possible to measure partition functions directly in Monte Carlo simulations. One cannot write the ratio of partition functions in Eq. (\[equ:massdef\]) as an expectation value either, because $Z_{\rm tw}$ and $Z_{\rm C}$ have different boundary conditions. 
One possible way to avoid this problem is to change the integration variables in $Z_{\rm tw}$ in such a way that the new variables satisfy C-periodic boundary conditions. This changes the integrand, or equivalently the action $S\rightarrow S+\Delta S$. This way, one can express Eq. (\[equ:massdef\]) in terms of an expectation value $Z_{\rm tw}/Z_{\rm C}=\langle\exp(-\Delta S)\rangle_{\rm C}$, where the subscript ${\rm C}$ indicates that the expectation value is calculated with C-periodic boundary conditions. In principle, this is measurable in the simulations. The shift $\Delta S$ consists of line integrals of the magnetic field around the lattice [@Davis:2000kv]. In practice, this approach does not work, because $\exp(-\Delta S)$ has very little overlap with the vacuum and one would need extremely high statistics to obtain any meaningful results. Let us, however, adopt a different strategy. Going back to Eq. (\[equ:massdef\]), we can differentiate the mass with respect to some parameter $x$, $$\frac{\partial M}{\partial x}=\frac{1}{T}\left( \left\langle\frac{\partial S}{\partial x}\right\rangle_{\rm tw} - \left\langle\frac{\partial S}{\partial x}\right\rangle_{\rm C} \right),$$ where the subscripts “tw” and “C” refer to expectation values calculated with twisted and C-periodic boundary conditions, respectively. If one starts at a point where one knows the monopole mass, one can integrate this to obtain the mass at any other parameter values. Possible choices for the starting point of the integration are the classical limit, where $M$ can be calculated directly, or any point in the symmetric phase where the monopole mass vanishes. Let us choose the latter option and use $x=m^2$. Thus we can write $$\label{equ:massderiv} \frac{\partial M}{\partial m^2}=L^3\left( \left\langle{\rm Tr}\Phi^2\right\rangle_{\rm tw} - \left\langle{\rm Tr}\Phi^2\right\rangle_{\rm C} \right).$$ If one chooses a large enough initial value for $m^2$, the system is guaranteed to be in the symmetric phase. 
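The key step here, trading a ratio of partition functions for reweighted expectation values at nearby parameter values, can be illustrated in a toy ensemble where the answer is known in closed form. The sketch below is my own illustration (not the paper's code): for one-dimensional Gaussian "actions" $S_i=\frac{1}{2}m_i^2x^2$ one has exactly $\ln(Z_2/Z_1)=\ln(m_1/m_2)$, and the reweighted estimator reproduces it:

```python
import numpy as np

rng = np.random.default_rng(0)
m1, m2 = 1.0, 1.2                 # "bare masses" of the two toy ensembles
x = rng.normal(0.0, 1.0/m1, size=200_000)   # samples from exp(-m1^2 x^2 / 2)

# < exp(-(S_2 - S_1)) >_1  =  Z_2 / Z_1
est = np.log(np.mean(np.exp(-0.5*(m2**2 - m1**2)*x**2)))
exact = np.log(m1/m2)
print(est, exact)                 # agree to a few per mille
```

For small steps $m_2^2-m_1^2$ the estimator is well behaved; for large steps its variance blows up, which is the toy version of the overlap problem described above and the reason the mass is built up from many small steps in $m^2$.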
In fact, since one can only carry out a finite number of measurements, it is better to use finite differences instead of derivatives $$\label{equ:finitediff} M(m^2_2)-M(m^2_1)=-\frac{1}{T}\ln \frac{\left\langle e^{-(m^2_2-m^2_1)\sum_x {\rm Tr}\Phi^2}\right\rangle_{m^2_1,\rm tw}}{ \left\langle e^{-(m^2_2-m^2_1)\sum_x {\rm Tr}\Phi^2}\right\rangle_{m^2_1,\rm C}},$$ where the subscript $m^2_1$ indicates that the expectation value is calculated at $m^2=m^2_1$. The spacing between different values of $m^2$ has to be fine enough so that the expectation values can be calculated reliably. Classical Mass {#sect:classmass} ============== It will be interesting to compare the measured quantum masses with classical results to determine the quantum correction. However, the quantum mass will be computed on a finite lattice, and therefore it does not make sense to compare it with the infinite-volume continuum expression (\[equ:classmass\]). Instead, one needs to know the classical mass on the same finite lattice. That is straightforward to compute by minimising the lattice action $S_{\rm tw}$ (\[equ:latticeaction\]) with twisted boundary conditions. The C-periodic boundary conditions are compatible with the classical vacuum solution, and therefore the minimum action in that sector is simply $S_{\rm C}^{\rm min}=-(m^4/4\lambda)TL^3$. The classical mass is therefore given by $$\label{equ:Sdiff} M_{\rm cl}=\frac{S_{\rm tw}^{\rm min}-S_{\rm C}^{\rm min}}{T}= \frac{S_{\rm tw}^{\rm min}}{T}+\frac{m^4}{4\lambda}L^3.$$ In fact, since the classical monopole configuration is time-independent, it is enough to have only one time step in the time direction, $T=1$. The classical monopole mass was measured on three different lattices, $16^3$, $24^3$ and $32^3$ using couplings $\lambda=0.1$ and $g=1/\sqrt{5}$, which correspond to $z=1$. The results are shown in Fig. \[fig:classmass\]. The coloured solid lines correspond to different lattice sizes, the smallest being at the bottom. 
The top dashed line (black) is the infinite-volume mass given by Eq. (\[equ:classmass\]) with $f(1)\approx 1.238$ as computed in Ref. [@Forgacs:2005vx]. These results show a significant finite-size effect and demonstrate why it is necessary to compare the quantum result with the classical mass on the same lattice. Deep in the broken phase, where $m^2\ll 0$, the finite-size effect should be due to the magnetic Coulomb interaction between monopoles. Because our boundary conditions have the physical effect of charge conjugation, we effectively have monopoles and antimonopoles alternating in a cubic array, with distance $L$ between them. The energy of such a configuration is $$E(L)=M+\frac{2\pi}{g^2L}\sum_{\vec{n}\ne 0}\frac{(-1)^{n_1+n_2+n_3}}{|\vec{n}|}\approx M-\frac{10.98}{g^2L}. \label{equ:Coulombeffect}$$ The lower dashed lines (coloured) show the predicted finite-size effects for the relevant lattice volumes, and one can see that the agreement is good deep in the broken phase. In fact, the lattice values are slightly below the continuum results based on Ref. [@Forgacs:2005vx]. This is most likely due to discretisation effects. Although the Coulomb interaction describes the finite-size effects very well deep in the broken phase, it fails badly as $m^2\rightarrow 0$. What happens there is that the size of the monopole, which is proportional to $1/\sqrt{|m^2|}$, grows and eventually becomes comparable with the size of the lattice. At some point it becomes energetically favourable for the whole system to remain in the symmetric phase. Because the field $\Phi$ is zero, the twisted action $S_{\rm tw}^{\rm min}$ in Eq. (\[equ:Sdiff\]) vanishes, and the result is $$E_{\rm symm}(L)=V(0)L^3=\frac{m^4L^3}{4\lambda}.$$ This is shown as a dotted line for $L=16$ in Fig. \[fig:classmass\], and agrees well with the result near $m^2=0$. 
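The numerical coefficient $10.98$ in Eq. (\[equ:Coulombeffect\]) is $2\pi$ times the Madelung constant of an NaCl-type alternating array. This can be cross-checked with Benson's rapidly convergent formula for that constant (the formula is standard but is not part of the paper):

```python
import math

def nacl_madelung(cutoff=60):
    """NaCl Madelung constant from Benson's formula:
    M = -12*pi * sum over odd m, n >= 1 of sech^2( (pi/2)*sqrt(m^2 + n^2) )."""
    s = sum((1.0 / math.cosh(0.5 * math.pi * math.hypot(m, n))) ** 2
            for m in range(1, cutoff, 2) for n in range(1, cutoff, 2))
    return -12.0 * math.pi * s

# 2*pi*|M| reproduces the coefficient quoted in the text
coeff = 2.0 * math.pi * abs(nacl_madelung())
```

The sum converges extremely fast; a handful of terms already gives $|M|\approx 1.74756$ and hence $2\pi|M|\approx 10.98$.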
At intermediate values of $m^2$, the minimum energy configuration corresponds to a deformed monopole, and therefore the actual result interpolates smoothly between the two behaviours. Nevertheless, we have identified the main sources of finite-size effects in the classical calculation, and we are therefore in a position to compare quantum and classical calculations. Simulations {#sect:simu} =========== In the quantum simulations, the ensembles of configurations with twisted and C-periodic boundary conditions were generated using the Metropolis algorithm. Three different lattice sizes were used: $16^4$, $24^3\times 16$ and $32^3\times 16$. The system was first equilibrated by carrying out 20000–60000 update sweeps depending on the lattice size and the value of $m^2$, and after that, measurements were carried out every 100 updates. The number of measurements for each value of $m^2$ was between 100 and 1700. The expectation values needed for the change of $M$ from $m^2_1$ to $m^2_2$ can be calculated in two different ways, because $$\left\langle e^{-(m^2_2-m^2_1)\sum_x {\rm Tr}\Phi^2}\right\rangle_{m^2_1} = \left\langle e^{-(m^2_1-m^2_2)\sum_x {\rm Tr}\Phi^2}\right\rangle_{m^2_2}^{-1}.$$ This can be used to check that the system was properly in equilibrium, the statistics were sufficient and the spacing between different values of $m^2$ was small enough. Defining $$f_1=-\frac{1}{T}\ln\left\langle e^{-(m^2_2-m^2_1)\sum_x {\rm Tr}\Phi^2}\right\rangle_1\quad \mbox{and}\quad f_2=\frac{1}{T}\ln\left\langle e^{-(m^2_1-m^2_2)\sum_x {\rm Tr}\Phi^2}\right\rangle_2,$$ the change in the monopole mass (\[equ:finitediff\]) can be written as $$M(m_2^2)-M(m_1^2)=\frac{1}{2}\left(f_{1,{\rm tw}}+f_{2,{\rm tw}}-f_{1,{\rm C}}-f_{2,{\rm C}}\right).$$ The statistical error $\Delta f$ in each $f_{i,X}$ was estimated using the bootstrap method and it was made sure that $f_{1,X}$ and $f_{2,X}$ agreed within the errors. 
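A minimal sketch of how such a reweighting factor $f$ and its bootstrap error can be computed (function names and the number of resamples are illustrative, not the paper's actual code):

```python
import numpy as np

def f_reweight(samples, dm2, T):
    """f = -(1/T) * ln < exp(-dm2 * sum_x Tr Phi^2) >, averaged over the
    Monte Carlo samples of sum_x Tr Phi^2 from one ensemble."""
    return -np.log(np.mean(np.exp(-dm2 * np.asarray(samples)))) / T

def bootstrap_error(samples, dm2, T, n_boot=200, seed=1):
    """Bootstrap estimate of the statistical error of f_reweight."""
    rng = np.random.default_rng(seed)
    samples = np.asarray(samples)
    boots = [f_reweight(rng.choice(samples, samples.size, replace=True), dm2, T)
             for _ in range(n_boot)]
    return float(np.std(boots))
```

In the two-sided check, $f_1$ is evaluated on the $m^2_1$ ensemble and $f_2$ on the $m^2_2$ ensemble; agreement within the bootstrap errors signals adequate equilibration and a sufficiently small step in $m^2$.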
The error in the mass difference was estimated to be $$\begin{aligned} \Delta\left[M(m_2^2)-M(m_1^2)\right]^2 &=& \frac{1}{4}\left[ \Delta f_{1,{\rm tw}}^2+\Delta f_{2,{\rm tw}}^2+\left(f_{1,{\rm tw}}-f_{2,{\rm tw}}\right)^2 \right.\nonumber\\&&\left. + \Delta f_{1,{\rm C}}^2+\Delta f_{2,{\rm C}}^2+\left(f_{1,{\rm C}}-f_{2,{\rm C}}\right)^2 \right]\end{aligned}$$ The differences were then summed up, starting from $m_0^2$, the highest value of $m^2$, where $M$ was assumed to be zero, $$M(m_N^2)=\sum_{n=0}^{N-1} \left(M(m_{n+1}^2)-M(m_n^2)\right).$$ The total error was calculated by assuming that the errors in the individual mass differences were independent, $$\Delta[M(m_N^2)]^2=\sum_{n=0}^{N-1} \Delta\left[M(m_{n+1}^2)-M(m_n^2)\right]^2$$ The results are shown in Fig. \[fig:quantummass\]. Note that the errors are highly correlated. In Fig. \[fig:massderiv\], we show the derivative of the mass calculated from Eq. (\[equ:massderiv\]). Discussion {#sect:discuss} ========== The mass derivative in Fig. \[fig:massderiv\] has a sharp peak, above which it drops rapidly to zero. This is compatible with the classical result $\partial M/\partial m^2\propto 1/\sqrt{-m^2}$ for negative $m^2$ and zero for positive values. As the horizontal lines show, the peak height is proportional to the linear size of the lattice, which is what one would expect to happen in a second-order phase transition. A fit to the peak position gives an infinite-volume value $m_c^2\approx 0.268$ for the critical point. The curves in Fig. \[fig:quantummass\] show the classical results shifted by this amount. The quantum measurements agree fairly well with them near the critical point, but start to deviate deeper in the broken phase. The qualitative behaviour, as well as the finite-size effects, are nevertheless similar. 
To really carry out a quantitative comparison of the quantum and classical results and to extract the quantum correction to the mass, one has to consider the renormalisation of the parameters. The values of $m^2$, $\lambda$ and $g$ that were used in the simulations correspond to bare couplings, but one should compare the measurements with the classical mass calculated using the corresponding renormalised couplings $m_R^2$, $\lambda_R$ and $g_R$. The values of the renormalised couplings depend on the renormalisation scheme and scale, and therefore there is no unique way to compare the results. Furthermore, if one calculates the renormalisation counterterms to a certain order in perturbation theory, the value of the quantum correction obtained by subtracting the classical value from the quantum result is only valid to the same order, even though the quantum mass itself has been calculated fully non-perturbatively. It would, therefore, be best to use a physically meaningful renormalisation scheme and compute the renormalised couplings non-perturbatively. This can be done by choosing three observable quantities $X$, $Y$ and $Z$, one for each coupling, and measuring their values $\langle X\rangle$, $\langle Y\rangle$ and $\langle Z\rangle$ in Monte Carlo simulations. One would then calculate the same quantities in the classical theory, and fix the values of the renormalised couplings by requiring that the classical values agree with the quantum measurements, $$X_{\rm cl}(m_R^2,\lambda_R,g_R)=\langle X\rangle\quad\mbox{etc.}$$ It would be natural to choose the masses of the perturbative excitations $m_H$ and $m_W$ as two of these observables, although measuring $m_W$ is non-trivial because of its electric charge. One can choose the monopole charge as the third observable, because its value can be determined relatively straightforwardly from the finite-size effects of the monopole mass. In Fig. \[fig:gRdet\] we show the measured finite-size effects at $m^2=-0.35$. 
A fit to Eq. (\[equ:Coulombeffect\]) with $g$ and $M$ as free parameters gives $g_R=0.40(6)$. It agrees with the bare value $g=1/\sqrt{5}\approx 0.447$ within errors, so one would need better statistics to be actually able to measure the renormalisation counterterm. The change of the monopole mass as a function of $m^2$ is also directly related to the renormalisation of the theory. In a classical continuum theory, $m^2$ only fixes the scale, and dimensionless observables do not depend on it. In the quantum theory, this scale invariance is broken, and taking $m^2$ towards the critical point corresponds to renormalisation group flow towards infrared. Roughly speaking one can identify $m_H$ with the renormalisation scale $\mu$. In principle, one should therefore be able to use the non-perturbative renormalisation scheme discussed above to follow the running of the couplings even in the non-perturbative regime near the critical point. One can speculate on what may happen based on the perturbative running of the couplings. The one-loop renormalisation group equations are $$\frac{d\lambda_R}{d\log\mu}=\frac{11\lambda_R^2-12g_R^2\lambda_R+6g_R^4}{8\pi^2}, \qquad \frac{dg_R^2}{d\log\mu}=-\frac{7g_R^4}{8\pi^2}.$$ Moving towards infrared, $\lambda_R$ decreases and $g_R^2$ increases. In fact, $\lambda_R$ becomes negative at a non-zero $\mu$, i.e., before one reaches the critical point. This is a sign of the Coleman-Weinberg effect and means that there is a first-order phase transition. However, if $g_R$ has become large enough before this happens, the one-loop approximation is not valid any more, and it is possible that the critical point can be reached. This would mean that the line of first-order Coleman-Weinberg phase transitions ends at a tricritical point. Beyond that there is a second-order phase transition, around which the theory is strongly coupled. This is exactly what happens in the Abelian Higgs model in 2+1 dimensions. 
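The qualitative statements about the flow can be checked by integrating the one-loop equations numerically towards the infrared. The starting values below are the bare couplings used earlier ($\lambda=0.1$, $g^2=0.2$); treating them as renormalised couplings at the starting scale is only an illustration, and explicit Euler stepping is a simple, not especially accurate, choice:

```python
import math

def flow_to_ir(lam, g2, ds=1e-3, s_max=60.0):
    """Euler-integrate the one-loop RG equations with decreasing log(mu):
      d lam / d log mu = (11 lam^2 - 12 g2 lam + 6 g2^2) / (8 pi^2)
      d g2  / d log mu = -7 g2^2 / (8 pi^2)
    Returns (s, lam, g2) where lambda_R first turns negative, or None."""
    eight_pi2 = 8.0 * math.pi * math.pi
    s = 0.0
    while s < s_max:
        dlam = (11.0 * lam * lam - 12.0 * g2 * lam + 6.0 * g2 * g2) / eight_pi2
        dg2 = -7.0 * g2 * g2 / eight_pi2
        lam -= ds * dlam   # minus sign: log mu decreases towards the IR
        g2 -= ds * dg2
        s += ds
        if lam < 0.0:
            return s, lam, g2
    return None
```

With these inputs, $\lambda_R$ crosses zero at a finite flow "time" while $g_R^2$ has grown, consistent with the Coleman-Weinberg scenario described above; whether the flow instead reaches a strongly coupled critical point, as in the 2+1-dimensional Abelian Higgs model, is the question the text turns to next.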
There are strong arguments that in that case, the second-order phase transition has a dual description in terms of a global O(2) model [@KleinertBook]. In the duality map, vortices and the fundamental scalar fields of the models change places. This was recently tested by measuring the critical behaviour of the vortex mass using a technique that was very similar to what has been discussed in this paper [@Kajantie:2004vy]. The critical exponents that characterise that behaviour of vortices near the transition point were found to agree with the known critical exponents of the O(2) model, which provides strong numerical evidence for the duality. If the Georgi-Glashow model has a second-order phase transition, it may have an analogous dual description. The one-loop renormalisation group equations suggest that as the critical point is approached, $g_R$ diverges and $\lambda_R$ goes to zero. The masses of the $H$ and $W^\pm$ bosons and the monopoles should behave as $$m_H\propto\lambda_R^{1/2},\quad m_W\propto g_R, \quad M\propto g_R^{-1},$$ implying that near the critical point, the $W^\pm$ bosons become much heavier than the other degrees of freedom and decouple. The Higgs scalar is neutral, and therefore it decouples as well. Thus one is left with massive magnetic monopoles coupled to a massless photon field. Because of the symmetry between electric and magnetic fields in electrodynamics, this system is indistinguishable from one with an electrically charged scalar field, i.e., the Abelian Higgs model. It is therefore possible that near the critical point, the broken phase of the Georgi-Glashow model is dual to the symmetric phase of the Abelian Higgs model. If the duality extends through the phase transition, the confining symmetric phase of the Georgi-Glashow model is dual to the broken superconducting phase of the Abelian Higgs model, with Abrikosov flux tubes playing the role of the confining strings. 
This is a concrete example of the ’t Hooft-Mandelstam picture of confinement as a dual phenomenon to superconductivity [@tHooftTalk; @Mandelstam:1974pi]. Earlier studies of the Georgi-Glashow model shed some light on this possible duality [@Greensite:2004ke]. Best known is the limit $\lambda\rightarrow\infty$ taken with constant $v^2=|m^2|/\lambda$. It corresponds to fixing the norm of the Higgs field $\Phi$, and is traditionally parameterised by couplings $\kappa=(m^2+8)/\lambda$ and $\beta=4/g^2$. The limits $\kappa\rightarrow\infty$ and $\beta\rightarrow\infty$ of that theory are, respectively, the compact U(1) gauge theory and the global O(3) spin model. The former is believed to have a weakly first-order phase transition [@Vettorazzo:2003fg], and the latter a second-order one. These two transitions are connected by a phase transition line that separates the Higgs and confining phases. There is evidence for a tricritical point at finite $(\kappa,\beta)$ at which the transition changes from first to second order [@Baier:1988sc], but it is not known if the tricritical line extends to $\lambda=0$. Interestingly, the $\kappa=\infty$ limit, i.e., the compact U(1) theory, is exactly dual to the so-called frozen superconductor [@Peskin:1977kp]. This is an Abelian integer-valued gauge theory, which can be obtained as the $\lambda\rightarrow\infty$, $\kappa\rightarrow\infty$ limit of the Abelian Higgs model. In other words, the hypothetical duality discussed above is real and exact in this particular limit of the theory. The interesting question is whether it exists as an asymptotic duality even away from the $\kappa=\infty$ limit. This is by no means clear because the compact U(1) theory does not have a second-order transition. In principle, the methods discussed and used in this paper can be used to test the duality hypothesis. 
If one finds a second-order phase transition, one can measure how the masses and the gauge coupling $g_R$ change as the transition is approached and determine whether the $W^\pm$ particles decouple. One can then construct observables along the lines of Ref. [@Kajantie:2004vy] to compare the critical behaviours of the monopoles in the Georgi-Glashow model and scalar particles in the Abelian Higgs model. This will, however, be a major computation, and is beyond the scope of this paper. Conclusions {#sect:conclude} =========== We have seen how the quantum mechanical mass of a ’t Hooft-Polyakov monopole can be calculated non-perturbatively using twisted boundary conditions. The method has clear advantages over alternative approaches based on creation and annihilation operators and fixed boundary conditions. While similar calculations have been carried out before in simpler models [@Ciria:1993yx; @Vettorazzo:2003fg; @Kajantie:1998zn], this appears to be the first time it has been used for ’t Hooft-Polyakov monopoles in 3+1 dimensions. The results demonstrate that one can obtain relatively accurate results for the monopole mass. It would be interesting to compare the results with the corresponding classical mass to determine the quantum correction. As we have seen, the finite-size effects due to the magnetic Coulomb interactions are significant, and therefore one has to compute the classical mass on the same lattice to have a meaningful comparison. Furthermore, the classical mass has to be calculated using the renormalised rather than bare couplings, and this introduces a dependence on the renormalisation scheme and scale. The simulations in this paper were done at weak coupling, i.e., deep in the broken phase. This is a useful limit for testing the method and also for identifying the quantum correction. However, the strong-coupling limit, which corresponds to the neighbourhood of the transition point, is arguably more interesting. 
In perturbation theory, the transition is of first order, and therefore one cannot reach the critical point, but it is possible that this changes if $\lambda$ is high enough. The methods discussed in this paper could then be used to study the critical behaviour. A particularly interesting possibility is that an asymptotic electric-magnetic duality appears near the critical point. The theory would then become equivalent to the Abelian Higgs model, with monopoles playing the role of the charged scalars. This would be a concrete example of the picture of confinement as a dual phenomenon to superconductivity. The author would like to thank Philippe de Forcrand, Tanmay Vachaspati and Falk Bruckmann for useful discussions. The research was conducted in cooperation with SGI/Intel utilising the Altix 3700 supercomputer and was supported in part by Churchill College, Cambridge, and the National Science Foundation under Grant No. PHY99-07949. [^1]: One candidate event [@Cabrera:1982gz] was seen on Valentine’s Day 1982 but is unlikely to have been a real monopole.
--- abstract: 'This paper focuses on the study of certain classes of Boolean functions that have appeared in several different contexts. Nested canalyzing functions have been studied recently in the context of Boolean network models of gene regulatory networks. In the same context, polynomial functions over finite fields have been used to develop network inference methods for gene regulatory networks. Finally, unate cascade functions have been studied in the design of logic circuits and binary decision diagrams. This paper shows that the class of nested canalyzing functions is equal to that of unate cascade functions. Furthermore, it provides a description of nested canalyzing functions as a certain type of Boolean polynomial function. Using the polynomial framework one can show that the class of nested canalyzing functions, or, equivalently, the class of unate cascade functions, forms an algebraic variety which makes their analysis amenable to the use of techniques from algebraic geometry and computational algebra. As a corollary of the functional equivalence derived here, a formula in the literature for the number of unate cascade functions provides such a formula for the number of nested canalyzing functions.' address: - 'Virginia Bioinformatics Institute (0477), Virginia Tech, Blacksburg, VA 24061, USA' - 'Mathematics Department, De La Salle University, 2401 Taft Avenue, Manila, Philippines' author: - Abdul Salam Jarrah - Blessilda Raposa - Reinhard Laubenbacher title: 'Nested Canalyzing, Unate Cascade, and Polynomial Functions' --- nested canalyzing function, unate cascade function, parametrization, polynomial function, Boolean function, algebraic variety Introduction ============ Canalyzing functions were introduced by Kauffman [@Kauff1] as appropriate rules in Boolean network models of gene regulatory networks. 
The definition is reminiscent of the concept of “canalisation” introduced by the geneticist Waddington [@wad] to represent the ability of a genotype to produce the same phenotype regardless of environmental variability. Canalyzing functions are known to have other important applications in physics, engineering and biology. They have been used to study the convergence behavior of a class of nonlinear digital filters, called stack filters, which have applications in image and video processing [@gabbouj; @wendt; @yu]. Canalyzing functions also play an important role in the study of random Boolean networks [@Kauff1; @lynch; @stauffer; @stern], and have been used extensively as models for dynamical systems as varied as gene regulatory networks [@Kauff1], evolution [@stern], and chaos [@lynch]. One important characteristic of canalyzing functions is that they exhibit a stabilizing effect on the dynamics of a system. For example, in [@gabbouj], it is shown that stack filters which are defined by canalyzing functions converge to a fixed point called a root signal after a finite number of passes. Moreira and Amaral [@moreira] showed that the dynamics of a Boolean network which operates according to canalyzing rules is robust with regard to small perturbations. A special type of canalyzing functions, the so-called *nested canalyzing functions* (NCFs), was introduced recently in [@Kauff2], and it was shown in [@Kauff] that Boolean networks made from such functions show stable dynamic behavior and might be a good class of functions to express regulatory relationships in biochemical networks. Little is known about this class of functions, however. For instance, there is no known formula for the number of nested canalyzing functions in a given number of variables. Another field in which special families of Boolean functions have been studied extensively is the theory of computing, in particular the design of efficient logical switching circuits. 
Since the 1970s, several families of Boolean functions have been investigated for use in circuit design. For instance, the family of [*fanout-free*]{} functions has been studied extensively, as well as the family of cascade functions. A subclass of these is the family of [*unate cascade functions*]{}; see, e.g., [@maitra; @mukho]. We focus on this class here. It turns out that this class of functions has some very useful properties. For instance, it was shown recently [@butler] that the class of unate cascade functions is precisely the class of Boolean functions that have good properties as binary decision diagrams. In particular, the unate cascade functions (on $n$ variables) are precisely those functions whose binary decision diagrams have the smallest average path length $\ds (2-\frac{1}{2^{n-1}})$ among all Boolean functions of $n$ variables. The notion of average path length is one cost measure for binary decision trees, which measures the average number of steps to evaluate the function on which the tree is based. One way of assessing the relative efficacy of classes of Boolean function for logic circuit or binary decision tree design is to look at the number of different circuits or trees that can be realized with a particular class. That is, one would like to count the number of functions in a given class. This has led to a formula for the number of unate cascade functions [@bendbut]. One of the results in this paper shows that the classes of unate cascade functions and nested canalyzing functions are identical (as classes of functions rather than as classes of logical expressions). As a result of the equivalence we will establish, this formula then also counts the number of nested canalyzing functions. A third framework for studying Boolean functions, in the context of models for biochemical networks, was introduced in [@LS]. There, a new method to reverse engineer gene regulatory networks from experimental data was proposed. 
The proposed modeling framework is that of time-discrete deterministic dynamical systems with a finite set of states for each of the variables. The number of states is chosen so as to support the structure of a finite field. One consequence is that each of the state transition functions can be represented by a polynomial function with coefficients in the finite field, thereby making available powerful computational tools from polynomial algebra. This class of dynamical systems in particular includes Boolean networks, when network nodes take on two states. It is straightforward to translate Boolean functions into polynomial form, with multiplication corresponding to AND, addition to XOR, and addition of the constant 1 to negation. In this paper we provide a characterization of those polynomial functions over the field with two elements that correspond to nested canalyzing (and, therefore, unate cascade) functions. Using a parameterized polynomial representation, one can characterize the parameter set in terms of a well-understood mathematical object, a common method in mathematics. This is done using the concepts and language from algebraic geometry. To be precise, we describe the parameter set as an algebraic variety, that is a set of points in an affine space that represents the set of solutions of a system of polynomial equations. This algebraic variety turns out to have special structure that can be used to study the class of nested canalyzing functions as a rich mathematical object. Boolean Nested Canalyzing and unate cascade Functions are equivalent ==================================================================== Boolean Nested Canalyzing Functions ----------------------------------- Boolean nested canalyzing functions were introduced recently in [@Kauff2], and it was shown in [@Kauff] that Boolean networks made from such functions show stable dynamic behavior. 
In this section we show that the set of Boolean nested canalyzing functions is equivalent to the set of unate cascade functions that has been studied before in the engineering and computer science literature. In particular, this equivalence provides a formula for the number of nested canalyzing functions in a given number of variables. We begin by defining the canalyzing property. A Boolean function $f(x_1,\ldots ,x_n)$ is *canalyzing* if there exists an index $i$ and a Boolean value $a$ for $x_i$ such that $f(x_1, \ldots ,x_{i-1},a,x_{i+1},\ldots ,x_n) = b$ is constant. That is, the variable $x_i$, when given the *canalyzing value* $a$, determines the value of the function $f$, regardless of the other inputs. The output value $b$ is called the *canalyzed value*. Throughout this paper, we use the Boolean functions ${\mathrm{AND}}(x,y) = x \wedge y$, ${\mathrm{OR}}(x,y) = x\vee y$ and ${\mathrm{NOT}}(x) = \overline{x}$. The function ${\mathrm{AND}}(x,y) = x \wedge y$ is a canalyzing function in the variable $x$ with canalyzing value 0 and canalyzed value 0. The function ${\mathrm{XOR}}(x,y) := (x\vee y)\wedge \overline{(x\wedge y)}$ is not canalyzing in either variable. Nested canalyzing functions are a natural specialization of canalyzing functions. They arise from the question of what happens when the function does not get the canalyzing value as input but instead has to rely on its other inputs. Throughout this paper, when we refer to a function of $n$ variables, we mean that $f$ depends on all $n$ variables. That is, for $1 \leq i \leq n$, there exists $(a_1,\dots,a_n) \in {\mathds{F}}_2^n$ such that $f(a_1,\dots,a_{i-1},a_i,a_{i+1},\dots,a_n) \neq f(a_1,\dots,a_{i-1},\overline{a_i},a_{i+1},\dots,a_n)$. \[def-ncf\] Let $f$ be a Boolean function in $n$ variables. - Let $\sigma$ be a permutation on $\{1,\dots,n\}$. 
The function $f$ is a *nested canalyzing function* (NCF) in the variable order $x_{\sigma(1)},\dots,x_{\sigma(n)}$ with canalyzing input values $a_1,\dots, a_n$ and canalyzed output values $b_1,\dots,b_n$, respectively, if it can be represented in the form $$\label{ncf kauff} f(x_1,x_2,\ldots, x_n) = \begin{cases} b_1 & ~{\rm if}~ x_{\sigma(1)} = a_1, \\ b_2 & ~{\rm if}~ x_{\sigma(1)} \ne a_1 ~{\rm and}~ x_{\sigma(2)} = a_2, \\ b_3 & ~{\rm if}~ x_{\sigma(1)} \ne a_1 ~{\rm and}~ x_{\sigma(2)} \ne a_2 ~{\rm and}~ x_{\sigma(3)} = a_3, \\ \vdots & \hspace{1cm} \vdots \\ b_n & ~{\rm if}~ x_{\sigma(1)} \ne a_1 ~{\rm and}~ \cdots ~{\rm and}~ x_{\sigma(n-1)} \ne a_{n-1} ~{\rm and}~ x_{\sigma(n)} = a_n, \\ \overline{b_{n}} & ~{\rm if}~ x_{\sigma(1)} \ne a_1 ~{\rm and}~ \cdots ~{\rm and}~ x_{\sigma(n)} \ne a_n. \end{cases}$$ - The function $f$ is nested canalyzing if $f$ is nested canalyzing in the variable order $x_{\sigma(1)},\dots,x_{\sigma(n)}$ for some permutation $\sigma$. The function $f(x,y,z) = x\wedge \overline{y}\wedge z$ is nested canalyzing in the variable order $x,y,z$ with canalyzing values 0,1,0 and canalyzed values 0,0,0, respectively. However, the function $f(x,y,z,w) = x\wedge y\wedge {\mathrm{XOR}}(z,w)$ is not nested canalyzing because if $x =1$ and $y =1$, then the value of the function is not constant for any input values for either $z$ or $w$. The following lemma follows directly from the definition above. \[referee\] A Boolean function $f$ on $n$ variables is nested canalyzing in the variable order $x_{\sigma(1)},\dots,x_{\sigma(n)}$ with canalyzing input values $a_1,\dots, a_n$ and canalyzed output values $b_1,\dots,b_n$, respectively, if and only if $f$ depends on all $n$ variables and, for all $ 1 \leq i \leq n$, $$f(c_1,\dots,c_n) = b_i,$$ where $(c_1,\dots,c_n) \in {\mathds{F}}_2^n$ such that $c_{\sigma(i)} = a_i$ and, for $1 \leq j < i$, $c_{\sigma(j)} = \overline{a_j}$. The next lemma gives the functional form of a nested canalyzing function. 
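The definition can be restated recursively: $f$ is nested canalyzing if and only if it depends on all its variables and some variable has a canalyzing value whose complementary restriction is again nested canalyzing. The following sketch (an illustration, not part of the paper) implements this check on truth tables:

```python
from itertools import product

def depends_on_all(f, n):
    """True iff flipping each variable changes f on at least one input."""
    pts = list(product((0, 1), repeat=n))
    return all(any(f(*p) != f(*(p[:i] + (1 - p[i],) + p[i + 1:])) for p in pts)
               for i in range(n))

def is_ncf(f, n):
    """True iff f (a 0/1-valued function of n 0/1 arguments) is nested
    canalyzing in the sense of the definition above."""
    if not depends_on_all(f, n):
        return False
    if n == 1:
        return True  # x and NOT x are the 1-variable nested canalyzing functions
    for i in range(n):
        for a in (0, 1):
            # is f constant when x_i is fixed to the candidate canalyzing value a?
            fixed = {f(*(x[:i] + (a,) + x[i:]))
                     for x in product((0, 1), repeat=n - 1)}
            if len(fixed) == 1:
                rest = lambda *x, i=i, a=a: f(*(x[:i] + (1 - a,) + x[i:]))
                if is_ncf(rest, n - 1):
                    return True
    return False
```

On the examples above, `is_ncf` accepts $x\wedge\overline{y}\wedge z$ and rejects $x\wedge y\wedge{\mathrm{XOR}}(z,w)$.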
To simplify notation, we will use the following notational convention. Let $a=0,1$. Then $x+a$ will denote $x$ if $a=0$ and $\overline{x}$ if $a=1$. \[ncf-unate\] Let $$\label{unate-eqn} g(x_1,\dots,x_n) = (x_{\sigma(1)}+a_1+b_1)\Diamond_1((x_{\sigma(2)}+a_2+b_2)\Diamond_2 (\cdots((x_{\sigma(n-1)}+a_{n-1}+b_{n-1})\Diamond_{n-1}(x_{\sigma(n)}+a_n+b_n))\cdots),$$ where $$\label{diamond} \Diamond_i = \left\{ \begin{array}{ll} \vee , & \hbox{ if } b_i =1;\\ \wedge , & \hbox{ if } b_i = 0,\\ \end{array} \right.$$ and $a_i,b_i \in \{0,1\}$ for all $1 \leq i \leq n$. Then $g$ is nested canalyzing in the variable order $x_{\sigma(1)},\dots,x_{\sigma(n)}$ with canalyzing input values $a_1,\dots, a_n$ and canalyzed output values $b_1,\dots,b_n$, respectively. Furthermore, any nested canalyzing function can be represented in the form [(\[unate-eqn\])]{}. It is clear that $g$ depends on all variables $x_1,\dots, x_n$. Let $g_n = x_{\sigma(n)}+a_n+b_n$ and, for $1 \leq i < n$, let $$g_i = (x_{\sigma(i)}+a_{i}+b_{i}) \Diamond_{i} g_{i+1}.$$ Then $g = g_1 = (x_{\sigma(1)}+a_1+b_1) \Diamond_1 g_2$. If $x_{\sigma(1)} = a_1$, then $(x_{\sigma(1)}+a_1+b_1) \Diamond_1 g_2=b_1\Diamond_1 g_2=b_1$, by equation (\[diamond\]). For $ 1 \leq i \leq n-1$, suppose $x_{\sigma(j)}=\overline{a_j}$ for $j < i$ and $x_{\sigma(i)} = a_{i}$. Now, for all $j < i$, we have $\overline{b_{j}} \Diamond_{j} g_{j+1}= g_{j+1}$ and $b_{i}\Diamond_{i} g_{i+1} = b_{i}$. Thus, by equation (\[diamond\]), we get $$\overline{b_1}\Diamond_1 (\overline{b_2}\Diamond_2(\dots (b_{i}\Diamond_{i} g_{i+1})\dots)) = b_{i}\Diamond_{i} g_{i+1} = b_{i}.$$ Hence $g$ is nested canalyzing, with the $a_i$ as canalyzing values and the $b_i$ as canalyzed values. It is left to show that any nested canalyzing function can be represented in the form (\[unate-eqn\]). 
Let $f$ be a nested canalyzing function in the variable order $x_{\sigma(1)},\dots,x_{\sigma(n)}$ with canalyzing input values $a_1,\dots, a_n$ and canalyzed output values $b_1,\dots,b_n$, respectively. By Lemma \[referee\] and the above, it is clear that $f(c_1,\dots,c_n)=g(c_1,\dots,c_n)$ for all $(c_1,\dots,c_n) \in {\mathds{F}}_2^n$. Thus $f=g$ as functions and hence $f$ can be represented in the form (\[unate-eqn\]). NCFs are Unate Cascade Functions {#unate} -------------------------------- We next show that Boolean NCFs are equivalent to unate cascade functions. Unate cascade functions have been defined and studied [@maitra; @mukho] as a special class of fanout-free functions which are used in the design and synthesis of logic circuits and switching theory [@butler; @hayes]. A Boolean function $f$ is a *unate cascade* function if it can be represented as $$\label{unate casc} f(x_1,x_2,\ldots, x_n) = x_{\sigma(1)}^\ast \Diamond_1( x_{\sigma(2)}^\ast \Diamond_2(\ldots (x_{\sigma(n-1)}^\ast \Diamond_{n-1} x_{\sigma(n)}^\ast)) \ldots ),$$ where $\sigma$ is a permutation on $\{1,\dots,n\}, \, x^\ast$ is either $x$ or $x + 1$ and $\Diamond_i$ is either the OR ($\vee$) or AND ($\wedge$) Boolean operator. \[ncf = uc\] A Boolean function is nested canalyzing if and only if it is a unate cascade function. Let $f$ be a unate cascade function in the form (\[unate casc\]). Let $a_n$ and $b_n$ be such that $x_{\sigma(n)}^\ast = x_{\sigma(n)}+a_n+b_n$ and, for $1 \leq i < n$, let $$\label{bs} b_i = \left\{ \begin{array}{ll} 1 , & \hbox{ if } \Diamond_i = \vee;\\ 0 , & \hbox{ if } \Diamond_i = \wedge,\\ \end{array} \right.$$ and let $a_{i} \in \{0,1\}$ such that $x_{\sigma(i)}^\ast = x_{\sigma(i)}+a_i+b_i$. 
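The correspondence in the proof can also be verified exhaustively for small $n$ by evaluating both normal forms, the case form (\[ncf kauff\]) and the cascade form (\[unate-eqn\]) with $\Diamond_i$ chosen by (\[diamond\]), on every input (an illustrative check with $\sigma$ taken to be the identity; the helper names are not from the paper):

```python
from itertools import product

def ncf_cases(a, b):
    """Eq. (ncf kauff) with sigma = identity: the first canalyzing test
    x_k == a_k that fires returns b_k; if none fires, return NOT b_n."""
    n = len(a)
    def f(x):
        for k in range(n):
            if x[k] == a[k]:
                return b[k]
        return 1 - b[n - 1]
    return f

def unate_cascade_form(a, b):
    """Eq. (unate-eqn): literals x_k + a_k + b_k (mod 2), combined with
    OR when b_k = 1 and AND when b_k = 0, nested from the right."""
    n = len(a)
    def f(x):
        val = (x[n - 1] + a[n - 1] + b[n - 1]) % 2
        for k in range(n - 2, -1, -1):
            lit = (x[k] + a[k] + b[k]) % 2
            val = (lit | val) if b[k] == 1 else (lit & val)
        return val
    return f

def agree(n):
    """Do the two forms coincide for every choice of a, b and every input?"""
    pts = list(product((0, 1), repeat=n))
    return all(ncf_cases(a, b)(x) == unate_cascade_form(a, b)(x)
               for a in product((0, 1), repeat=n)
               for b in product((0, 1), repeat=n)
               for x in pts)
```

For $n\le 4$ this checks all $2^n\cdot 2^n$ choices of canalyzing inputs and canalyzed outputs.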
That is, $$\label{ais} a_i = x_{\sigma(i)}^\ast + x_{\sigma(i)} + b_i.$$ Then $$f(x_1,\dots,x_n) = (x_{\sigma(1)}+a_1+b_1)\Diamond_1((x_{\sigma(2)}+a_2+b_2)\Diamond_2(\cdots ((x_{\sigma(n-1)}+a_{n-1}+b_{n-1}) \Diamond_{n-1}(x_{\sigma(n)}+a_n+b_n))\cdots),$$ which is nested canalyzing by Lemma \[ncf-unate\]. Conversely, let $f$ be a nested canalyzing function of the form (\[ncf kauff\]). By Lemma \[ncf-unate\] and equation (\[unate-eqn\]), $f$ can be represented in the form (\[unate casc\]) where $x_{\sigma(i)}^\ast= x_{\sigma(i)}+a_i+b_i$ for all $1 \leq i \leq n$. Thus $f$ is unate cascade. \[two-sets\] The second sentence in the proof above implies that a nested canalyzing function with canalyzing input $(a_1,\dots,a_n)$ and canalyzed output $(b_1,\dots,b_n)$ is also nested canalyzing in the same variable order with canalyzing input $(a_1,\dots,a_{n-1}, \overline{a_n})$ and canalyzed output $(b_1,\dots,b_{n-1}, \overline{b_n})$. The theorem above provides a natural equivalence between the class of nested canalyzing functions and that of unate cascade functions. Namely, for a given unate cascade function in the form (\[unate casc\]), using (\[ais\]) and (\[bs\]) we can explicitly define the canalyzing input values $a_i$ and canalyzed output values $b_i$. On the other hand, any nested canalyzing function in the form (\[ncf kauff\]) can be represented as a unate cascade function in the form (\[unate casc\]) where $x_{\sigma(i)}^\ast= x_{\sigma(i)}+a_i+b_i$ and $\Diamond_{i}$ as in (\[diamond\]) for all $1 \leq i \leq n$. Using the theorem and remark above, it is now possible to translate results about one type of function into results about the other type. We point out one such example. Several papers have been dedicated to counting the number of certain fanout-free functions including the unate cascade functions [@bender; @bendbut; @hayes; @pogosyan; @sasao]. On the other hand, Just et al. [@just] gave a formula for the number of canalyzing functions.
Bender and Butler [@bendbut] and Sasao and Kinoshita [@sasao] independently found the number of unate cascade functions, among other fanout-free functions. As an immediate corollary of Theorem \[ncf = uc\], we therefore know the number of NCFs in $n$ variables, for a given value of $n$. We use the recursive formula found by Sasao and Kinoshita [@sasao] in the following corollary. The number of NCFs in $n$ variables, denoted by $NCF(n)$, is given by $$NCF(n) = 2 \cdot E(n),$$ where $$E(1) = 1, ~~E(2) = 4,$$ and, for $n \geq 3$, $$E(n) = \ds \sum_{r = 2}^{n-1} {n \choose r-1} \cdot 2^{r-1} \cdot E(n - r + 1) + 2^n.$$ For example, the number of Boolean NCFs (unate cascade functions) on $n$ variables for $n \le 8$ is given by Table \[unate-tab\], which is part of the tables given by Sasao and Kinoshita [@sasao] and also Bender and Butler [@bendbut].

  $n$        1   2   3    4     5        6         7           8
  ---------- --- --- ---- ----- -------- --------- ----------- ------------
  $NCF(n)$   2   8   64   736   10,624   183,936   3,715,072   85,755,392

  : \[unate-tab\] The number of NCFs on $n \le 8$ variables

Some interesting facts derive from the equivalence of NCFs with unate cascade functions. Sasao and Kinoshita [@sasao Lemma 4.1] found that unate cascade functions are equivalent to fanout-free threshold functions. Thus NCFs are a special class of threshold functions. It is also interesting to note that among switching networks, the unate cascade functions are precisely those with the smallest average path length, as shown by Butler et al. [@butler], which makes them efficient in logic circuit design.
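The recursion for $E(n)$ quoted above can be checked against Table \[unate-tab\] directly; a short Python sketch with memoization (the naming is ours):

```python
from functools import lru_cache
from math import comb

@lru_cache(maxsize=None)
def E(n):
    """The recursion of Sasao and Kinoshita, as quoted above."""
    if n == 1:
        return 1
    if n == 2:
        return 4
    return sum(comb(n, r - 1) * 2 ** (r - 1) * E(n - r + 1)
               for r in range(2, n)) + 2 ** n

def NCF(n):
    """Number of nested canalyzing (equivalently, unate cascade) functions."""
    return 2 * E(n)
```

Evaluating $NCF(n)$ for $n = 1,\dots,8$ reproduces the eight entries of Table \[unate-tab\].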
In this section, we propose an alternative approach to Boolean functions which provides a whole new set of mathematical tools and results. We will view Boolean functions $\{0, 1\}^n\longrightarrow \{0, 1\}$ as polynomial functions $f: {\mathds{F}}_2^n\longrightarrow {\mathds{F}}_2$, where ${\mathds{F}}_2$ denotes the field with two elements. It is well-known that any function $f: {\mathds{F}}_2^n\longrightarrow {\mathds{F}}_2$ can be represented as a polynomial function [@LN]. If we require that every variable appear with exponent 1, then this representation is unique. For Boolean functions, this representation is straightforward to construct by observing that ${\mathrm{AND}}(x,y)=x \wedge y = xy$, ${\mathrm{OR}}(x,y)= x\vee y =x+y+xy$, and ${\mathrm{NOT}}(x)=\overline{x}=x+1$. Conversely, replacing multiplication by AND and addition by the XOR function, we can translate any polynomial function into a Boolean function. In particular, this shows that any binary function on $n$ variables can be represented as a Boolean function. While this seems like a simple change of language, it has the profound effect of placing the study of Boolean functions into the fields of algebra and algebraic geometry, which have a rich body of results and algorithms available. The goal of this section is to formalize this equivalence and to characterize those polynomial functions that represent nested canalyzing functions. The characterization will be expressed as a parametrization, with the set of parameters taken from an algebraic variety. Algebraic geometry has many tools to study varieties, an approach that will be pursued elsewhere. Polynomial form of nested canalyzing functions ---------------------------------------------- We derive a polynomial representation of the class of Boolean nested canalyzing functions which we then use to identify necessary and sufficient relations among their coefficients. 
Any Boolean function in $n$ variables is a map $f: \{0,1\}^n \longrightarrow \{0,1\}$. The set of all such maps, denoted by $B_n$, can be given the algebraic structure of a ring with the boolean operators ${\mathrm{XOR}}$ for addition and the conjunction ${\mathrm{AND}}$ for multiplication. Consider the polynomial ring ${\mathds{F}}_2[x_1,\dots,x_n]$ over the field ${\mathds{F}}_2 := \{0,1\}$ with two elements. Let $I$ be the ideal generated by the polynomials $x_i^2-x_i$ for all $i=1,\dots,n$. (That is, $I$ consists of all linear combinations of these polynomials with arbitrary polynomials as coefficients.) For any Boolean function $f \in B_n$, there is a unique polynomial $g \in {\mathds{F}}_2[x_1,\dots,x_n]$ such that $g(a_1,\dots,a_n) = f(a_1,\dots,a_n)$ for all $(a_1,\dots,a_n) \in {\mathds{F}}_2^n$ and such that the degree of each variable appearing in $g$ is equal to $1$. Namely $$\label{poly-rep} g(x_1,\dots,x_n) = \sum_{(a_1,\dots,a_n) \in {\mathds{F}}_2^n} f(a_1,\dots,a_n) \prod_{i=1}^n (1-(x_i-a_i)).$$ And it is straightforward to show that this equivalence extends to a ring isomorphism $$R := {\mathds{F}}_2[x_1,\dots,x_n]/I \cong B_n.$$ From now on we will not distinguish between these two rings. Next we present and study the set of all Boolean nested canalyzing functions as a subset of the ring $R$ of all Boolean polynomial functions. The following theorem gives the polynomial form for canalyzing and nested canalyzing functions. \[ncf poly\] Let $f$ be a function in $R$. Then 1. The function $f$ is *canalyzing* in the variable $x_i$, for some $i, ~~ 1 \le i \le n$, with canalyzing input value $a_i$ and canalyzed output value $b_i$, if and only if $$\label{can} f(x_1,x_2, \ldots, x_i, \ldots, x_n) = (x_i -a_i) g(x_1, x_2, \ldots, x_i, \ldots, x_n) + b_i.$$ 2. 
The function $f$ is *nested canalyzing* in the order $x_1, x_2, \ldots, x_n$, with canalyzing values $a_i$ and corresponding canalyzed values $b_i$, $1 \le i \le n$, if and only if it has the polynomial form $$\label{ncf q factor} \begin{array}{lll} f(x_1,x_2,\ldots, x_n) &=& (x_1-a_1) [(x_2 - a_2)[\ldots [ (x_{n-1} - a_{n-1}) [ (x_n-a_n) \\ && +( b_n-b_{n-1}) ] + (b_{n-1}-b_{n-2}) ] \ldots ]+(b_2-b_1)] + b_1 \\ \end{array}$$ or, equivalently, $$\label{ncf q} f(x_1,x_2,\ldots, x_n) = \ds \prod_{i=1}^n (x_i- a_i) + \sum_{j=1}^{n-1}\left [ (b_{n-j+1} - b_{n-j}) \prod_{i=1}^{n-j}(x_i-a_i) \right] + b_1.$$ 1. It is easy to see that if $x_i = a_i$, then the output is $b_i$, no matter what the values of the other variables are. Conversely, if $f$ is canalyzing with input $a_i$ and output $b_i$, then $f-b_i$, as a polynomial in $x_i$, has $a_i$ as a root, hence is divisible by $x_i-a_i$. This proves the first claim. 2. Let $f$ be a nested canalyzing function as in Definition \[def-ncf\], and let $$\begin{array}{lll} g(x_1,\dots,x_n) &=& (x_1-a_1) [(x_2 - a_2)[\ldots [ (x_{n-1} - a_{n-1}) [ (x_n-a_n) \\ && +( b_n-b_{n-1}) ] + (b_{n-1}-b_{n-2}) ] \ldots ]+(b_2-b_1)] + b_1. \\ \end{array}$$ We will show that $g$ is the unique polynomial representation of $f$, as in equation (\[poly-rep\]). Since the degree of each variable in $g$ is equal to one, we only need to show that $g(c_1,\dots,c_n) = f(c_1,\dots,c_n)$ for all $(c_1,\dots,c_n) \in {\mathds{F}}_2^n$. Clearly, if $c_1 = a_1$, then $g(c_1,\dots,c_n) = b_1$. If $c_1 \ne a_1$ and $c_2 = a_2$, then $(c_1-a_1)=1$ and $g(c_1,\dots,c_n) =b_2$. If $c_1 \ne a_1$, $c_2 \ne a_2$ and $c_3 = a_3$, then $g(c_1,\dots,c_n) = b_3$. We continue until we have $c_i \ne a_i$ for all $1 \leq i < n$ and $c_n= a_n$, in which case we get $g(c_1,\dots,c_n) = b_n$. If $c_i \ne a_i$ for all $i$, then $(c_i-a_i)=1$ for all $i$ and hence $g(c_1,\dots,c_n) = 1+b_n$. Thus $g$ is the unique polynomial representation of $f$.
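Over ${\mathds{F}}_2$, subtraction coincides with addition (XOR), so the closed form (\[ncf q\]) can be evaluated mechanically. The sketch below (our own code, not part of the paper) implements equation (\[ncf q\]) and compares it against the case analysis used in the proof: the output is $b_i$ for the first $i$ with $x_i = a_i$, and $1 + b_n$ if no such $i$ exists.

```python
from itertools import product

def ncf_poly(a, b):
    """Equation (ncf q) over F_2, where subtraction equals XOR:
    f(x) = prod_{i=1..n} (x_i + a_i)
         + sum_{j=1..n-1} (b_{n-j+1} + b_{n-j}) * prod_{i=1..n-j} (x_i + a_i)
         + b_1.  The lists a, b are 0-indexed."""
    n = len(a)
    def f(x):
        def pref(m):                       # prod_{i=1..m} (x_i + a_i)
            p = 1
            for i in range(m):
                p &= x[i] ^ a[i]
            return p
        val = pref(n) ^ b[0]
        for j in range(1, n):
            val ^= (b[n - j] ^ b[n - j - 1]) & pref(n - j)
        return val
    return f

def expected_output(a, b, x):
    """Case analysis from the proof: b_i for the first i with x_i = a_i,
    and 1 + b_n if x_i != a_i for all i."""
    n = len(a)
    i = next((i for i in range(n) if x[i] == a[i]), None)
    return b[i] if i is not None else 1 ^ b[n - 1]
```

An exhaustive comparison over all $a$, $b$ and $x$ for small $n$ confirms that the polynomial form agrees with the case analysis at every point.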
A Parametrization of NCFs {#coeff} ------------------------- Our next goal is to derive a criterion as to when a given Boolean polynomial in $n$ variables is a nested canalyzing function. The criterion will be given in terms of a parametrization of such polynomials corresponding to points in the affine space ${\mathds{F}}_2^{2^n}$ that satisfy a certain collection of polynomial equations. Such a set is by definition an algebraic variety, in the language of algebraic geometry. This parametrization describes the entire space of nested canalyzing functions as a geometric object, whose properties can then be studied with the tools of algebraic geometry. Recall that the ring of Boolean functions is isomorphic to the quotient ring $R = {\mathds{F}}_2[x_1,\dots,x_n]/I$, where $I = \langle x_i^2-x_i : 1 \leq i \leq n \rangle$. Therefore, the terms of a Boolean polynomial consist of square-free monomials. Thus, we can uniquely index monomials by the subsets of $[n]:=\{1,\ldots ,n\}$ corresponding to the variables appearing in the monomial, so that we can write the elements of $R$ as $$R = \{\ds \sum_{S \subseteq [n]} c_S \prod_{i \in S} x_i \, : \, c_S \in {\mathds{F}}_2\}.$$ As a vector space over ${\mathds{F}}_2$, $R$ is isomorphic to ${\mathds{F}}_2^{2^n}$ via the correspondence $$\label{corresp} R \ni \ds \sum_{S \subseteq [n]} c_S \prod_{i \in S} x_i \longleftrightarrow (c_{\emptyset},\dots, c_{[n]}) \in {\mathds{F}}_2^{2^n},$$ for a given fixed total ordering of all square-free monomials. That is, a polynomial function corresponds to the vector of coefficients of the monomial summands. In this section we identify the set of nested canalyzing functions in $R$ with a subset $V^{{\mathrm{ncf}}}$ of ${\mathds{F}}_2^{2^n}$ by imposing relations on the coordinates of its elements. Let $S$ be any subset of $[n]$. We introduce a new term called the *completion* of $S$. Let $S$ be a non-empty set whose highest element is $r_S$.
The [*completion*]{} of $S$, which we denote by $[r_S]$, is the set $[r_S] := \{1,2,\ldots,r_S\}$. For $S = \emptyset$, let $[r_\emptyset] :=\emptyset$. The main result of this section is the following theorem. \[coeff rel\] Let $f$ be a Boolean polynomial in $n$ variables, given by $$\label{poly} f(x_1,x_2,\ldots, x_n) = \sum_{S \subseteq [n]} c_S \prod_{i \in S} x_i.$$ The polynomial $f$ is a nested canalyzing function in the order $x_1, x_2, \ldots, x_n$ if and only if $c_{[n]} = 1$, and for any subset $S \subseteq [n]$, $$\label{coeff formula} c_S = c_{[r_S]} \prod_{i \in [r_S] \backslash S} c_{[n]\backslash \{i\}}.$$ First assume that the polynomial $f$ is a Boolean nested canalyzing function in the order $x_1, x_2, \ldots, x_n$, with canalyzing input values $a_i$ and corresponding canalyzed output values $b_i, 1\le i \le n$. Then, by part 2 of Theorem \[ncf poly\], $f$ has the form (\[ncf q\]) which can be expanded as $$\label{expanded} f(x_1,\dots,x_n) = \sum_{S\subseteq [n]} \prod_{i\in S} x_i \prod_{l \in [n]\backslash S} a_l + \sum_{j=1}^{n-1}(b_{n-j+1}-b_{n-j}) (\sum_{S \subseteq [n-j]} \prod_{i \in S} x_i \prod_{l \in [n-j] \backslash S} a_l)+b_1.$$ We now equate corresponding coefficients in equations (\[poly\]) and (\[expanded\]). First let $S=[n]$. Then, clearly, $c_{[r_S]}=1$. Next, consider subscripts of the form $S=[n]\backslash \{i\}, i\neq n$, that is, coefficients of monomials of total degree $n-1$ which contain $x_n$. It is clear from equation (\[expanded\]) that $x_n$ only appears in the first summand and hence, for $1 \leq i \leq n-1$, $$\label{ai1} c_{[n]\backslash\{i\}}=a_i=c_{[r_S]}c_{[n]\backslash\{i\}}.$$ It is easy to check that equation (\[coeff formula\]) holds for any set $S \subseteq [n]$ such that $S = [r_S]$. 
By equating the coefficient of $x_1\cdots x_{r_S}$ in equations (\[expanded\]) and (\[poly\]), we get $$\begin{aligned} c_{S} = c_{[r_S]} &=& \prod_{i \in [n]\backslash S} a_i + (b_n-b_{n-1}) \prod_{i \in [n-1]\backslash S} a_i+\cdots + (b_{r_S+1} - b_{r_S}) \prod_{i \in [r_S]\backslash S} a_i\\ &=& \prod_{i \in [n]\backslash S} a_i + (b_n-b_{n-1}) \prod_{i \in [n-1]\backslash S} a_i+\cdots + (b_{r_S+1} - b_{r_S}),\end{aligned}$$ since $\ds \prod_{i \in [r_S]\backslash S} a_i = \prod_{i \in \emptyset} a_i := 1$, by definition. Now let $S$ be any nonempty index set. Then $$\begin{aligned} c_S &=& \prod_{i \in [n]\backslash S} a_i + (b_n-b_{n-1}) \prod_{i \in [n-1]\backslash S} a_i+\cdots + (b_{r_S+1} - b_{r_S}) \prod_{i \in [r_S]\backslash S} a_i\\ &=& \prod_{i \in [r_S]\backslash S} a_i [\prod_{i \in [n]\backslash [r_S]} a_i+(b_n - b_{n-1}) \prod_{i \in [n-1]\backslash [r_S]} a_i+\cdots+ (b_{r_S+1} - b_{r_S})] \\ &=& (\prod_{i \in [r_S]\backslash S} a_i ) c_{[r_S]}\\ &=& c_{[r_S]} \prod_{i \in [r_S]\backslash S} c_{[n]\backslash \{i\}}.\end{aligned}$$ This completes the proof that a nested canalyzing polynomial has to satisfy equation (\[coeff formula\]). Conversely, suppose that $c_{[n]} = 1$ and equation (\[coeff formula\]) holds for the coefficients of the polynomial $f$ in equation (\[poly\]). We need to show that $f$ is nested canalyzing. Using Lemma \[referee\], it is enough to show that $f$ depends on all $n$ variables and $f(\overline{a_1},\dots,\overline{a_{j-1}},a_j,x_{j+1},\dots,x_n) =b_j$ for some $a_j,b_j \in {\mathds{F}}_2$ and $1\leq j \leq n$. Since $c_{[n]} = 1$, the monomial $x_1\cdots x_n$ is a summand in $f$ and hence $f$ depends on all $n$ variables. Now let $1 \leq j \leq n$.
For any $S\subset [n]$ such that $j\notin S$ and $r_{S} > j$, we have $$c_S=c_{[r_S]}\prod_{i \in [r_S]\backslash S}c_{[n]\backslash \{i\}} \mbox{ \, \, and \, \, } c_{S\cup \{j\}}=c_{[r_S]}\prod_{i \in [r_S] \backslash \{S\cup \{j\}\}}c_{[n]\backslash \{i\}}.$$ By pairing $c_S$ with $c_{S\cup \{j\}}$ and $c_T$ with $c_{T\cup \{j\}}$ where $T\subseteq [j-1]$, we rewrite the form (\[poly\]) into $$\label{new} f(x_1,\dots,x_n)= \sum_{T\subseteq [j-1]}(x_jc_{T\cup\{j\}}+c_{T}) \prod_{i\in T} x_i + (c_{[n]\backslash \{j\}} + x_j)\sum_{\substack{S\subset [n] \\ r_S > j \\ j \notin S}}c_{S\cup \{j\}} \prod_{i \in S} x_i.$$ For $1 \leq j \leq n$, let $a_j = c_{[n]\backslash\{j\}}$. Then $$\label{bj} f(\overline{a_1},\dots,\overline{a_{j-1}},a_j,x_{j+1},\dots,x_n) = \sum_{T\subseteq [j-1]}(c_{[n]\backslash\{j\}}c_{T\cup\{j\}}+c_{T}) \prod_{i\in T} (1+c_{[n]\backslash\{i\}})$$ is a constant which we call $b_j$. Hence, by Lemma \[referee\], the function $f$ is nested canalyzing. Observe that the relations in equation (\[coeff formula\]) leave the coefficients $c_\emptyset$ and $c_{[i]}$, for all $1 \leq i < n$, undetermined, as well as the coefficients $c_S$, where $S$ is any of the $(n-1)$-element subsets of $[n]$ which include $n$. Furthermore, a Boolean NCF requires that $c_{[n]} = 1$. Since a general Boolean polynomial in $n$ variables has $2^n$ coefficients, equation (\[coeff formula\]) yields $2^n - 2n$ equations which have to be satisfied by the coefficients of a Boolean NCF. 
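Theorem \[coeff rel\] yields an effective membership test for the identity variable order: compute the coefficients $c_S$ of a Boolean function (a Möbius/XOR transform over subsets recovers them from function values) and check the relations (\[coeff formula\]). The Python sketch below does this and also recovers the values $a_j$, $b_j$ via the formulas of Corollary \[ai bi\] below; all function names are ours.

```python
from itertools import chain, combinations

def subsets(ground):
    """All subsets of an iterable, as tuples."""
    g = list(ground)
    return chain.from_iterable(combinations(g, k) for k in range(len(g) + 1))

def monomials(f, n):
    """Support {S : c_S = 1} of the unique square-free polynomial of f over F_2,
    computed by the Moebius (XOR) transform: c_S = XOR over T subseteq S of f(chi_T).
    Variables are indexed 1..n; f takes a 0-indexed 0/1 list."""
    supp = set()
    for S in subsets(range(1, n + 1)):
        c = 0
        for T in subsets(S):
            c ^= f([1 if i in T else 0 for i in range(1, n + 1)])
        if c:
            supp.add(frozenset(S))
    return supp

def is_ncf_id(coeffs, n):
    """Membership test of Theorem [coeff rel] for the order x_1,...,x_n:
    c_[n] = 1 and c_S equals c_[r_S] times the product of c_[n minus i]
    over i in [r_S] minus S."""
    c = lambda S: 1 if frozenset(S) in coeffs else 0
    full = frozenset(range(1, n + 1))
    if not c(full):
        return False
    for S in map(frozenset, subsets(range(1, n + 1))):
        r = max(S) if S else 0
        comp = frozenset(range(1, r + 1))      # the completion [r_S]
        rhs = c(comp)
        for i in comp - S:
            rhs &= c(full - {i})
        if c(S) != rhs:
            return False
    return True

def canalyzing_values(coeffs, n, a_n=0):
    """Corollary [ai bi]: recover the a_j and b_j from the coefficients.
    a_n is a free choice; flipping both a_n and b_n gives the second valid pair
    (Remark [two-sets])."""
    c = lambda S: 1 if frozenset(S) in coeffs else 0
    full = frozenset(range(1, n + 1))
    a = [c(full - {j}) for j in range(1, n)] + [a_n]
    b = [c(()) ^ (c({1}) & c(full - {1}))]
    for j in range(1, n - 1):
        b.append(b[-1] ^ (c(range(1, j + 2)) & c(full - {j + 1})) ^ c(range(1, j + 1)))
    b.append(a_n ^ b[-1] ^ c(range(1, n)))
    return a, b
```

As an illustration, $f = x_1 \wedge (x_2 \vee x_3) = x_1x_2 + x_1x_3 + x_1x_2x_3$ passes the test, whereas the parity function $x_1 + x_2 + x_3$ is rejected, since $c_{[n]} = 0$.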
\[variety-id\] The set of points in ${\mathds{F}}_2^{2^n}$ corresponding to coefficient vectors of nested canalyzing functions in the variable order $x_1,\dots,x_n$, denoted by $V_{{\mathrm{id}}}^{{\mathrm{ncf}}}$, is given by $$V_{{\mathrm{id}}}^{{\mathrm{ncf}}} = \{(c_\emptyset,\dots,c_{[n]}) \in {\mathds{F}}_2^{2^n} : c_{[n]}=1, \, c_S = c_{[r_S]} \prod_{i \in [r_S] \backslash S} c_{[n] \backslash \{i\}} \mbox {, for }\, S \subseteq [n]\}.$$ The following corollary provides surprisingly simple expressions of the canalyzing input and canalyzed output values in terms of the coefficients of the polynomial. \[ai bi\] Let $f$ be a Boolean polynomial given by equation ([\[poly\]]{}). If the polynomial $f$ is a nested canalyzing function in the order $x_1, x_2, \ldots, x_n$, with input values $a_j$ and corresponding output values $b_j, 1\le j \le n$, then $$\begin{aligned} \label{ai} a_j &=& c_{[n]\backslash \{j\}}, \hskip 3.7 true cm {\rm for }~~ 1 \le j \le n-1 \\ \label{b1} b_1 &=& c_\emptyset + c_1c_{[n]\backslash \{1\}}, \\ \label{bi} b_{j+1} - b_{j} &=& c_{[j+1]} c_{[n]\backslash \{j+1\}} + c_{[j]}, \hspace{1cm} \mbox{\rm for } 1 \le j < n-1 \,\, \mbox{\rm and} \\ \label{bn-an} b_n - a_n &=& b_{n-1} + c_{[n-1]}~.\end{aligned}$$ Equation (\[ai\]) follows from equation (\[ai1\]), equation (\[b1\]) follows directly from equation (\[bj\]) when $j=1$. In equation (\[ncf q factor\]), we observe that the variable $x_{n-1}$ appears only in the first and second group of products. In particular, $c_{[n-1]} = -a_n + b_n -b_{n-1}$, and hence equation (\[bn-an\]) follows. It is left to show (\[bi\]). 
From equation (\[bj\]), $$\begin{aligned} b_j &=& \sum_{T\subseteq [j-1]}(c_{[n]\backslash\{j\}}c_{T\cup\{j\}}+c_{T}) \prod_{i\in T} (1+ c_{[n]\backslash\{i\}}) \\ &=& \sum_{T\subseteq [j-1]}(c_{[n]\backslash\{j\}}c_{[j]}\prod_{i \in [j-1]\backslash T} c_{[n]\backslash\{i\}} +c_{T}) \prod_{i\in T} (1+ c_{[n]\backslash\{i\}}) \\ &=& c_{[n]\backslash\{j\}}c_{[j]}\sum_{T\subseteq [j-1]}\prod_{i \in [j-1]\backslash T} c_{[n]\backslash\{i\}} \prod_{i\in T} (1+ c_{[n]\backslash\{i\}})+ \sum_{T\subseteq [j-1]}c_{T} \prod_{i\in T} (1+ c_{[n]\backslash\{i\}}) \\ &=& c_{[n]\backslash\{j\}}c_{[j]}+\sum_{T\subseteq [j-1]}c_{T} \prod_{i\in T} (1+ c_{[n]\backslash\{i\}}),\end{aligned}$$ since $$\sum_{T\subseteq [j-1]}\prod_{i \in [j-1]\backslash T} c_{[n]\backslash\{i\}} \prod_{i\in T} (1+ c_{[n]\backslash\{i\}}) = \prod_{i\in [j-1]}(c_{[n]\backslash\{i\}} + (1+c_{[n]\backslash\{i\}})) = \prod_{i\in [j-1]} 1 = 1.$$ Now $$\begin{aligned} b_{j+1} - b_{j} &=& c_{[n]\backslash\{j+1\}}c_{[j+1]}+\sum_{T\subseteq [j]}c_{T} \prod_{i\in T} (1+ c_{[n]\backslash\{i\}}) - c_{[n]\backslash\{j\}}c_{[j]}-\sum_{T\subseteq [j-1]}c_{T} \prod_{i\in T} (1+ c_{[n]\backslash\{i\}}) \\ &=& c_{[n]\backslash\{j+1\}}c_{[j+1]}- c_{[n]\backslash\{j\}}c_{[j]}+\sum_{\substack{T\subseteq [j] \\ j \in T}}c_{T} \prod_{i\in T} (1+ c_{[n]\backslash\{i\}}) \\ &=& c_{[n]\backslash\{j+1\}}c_{[j+1]}- c_{[n]\backslash\{j\}}c_{[j]} +\sum_{T\subseteq [j-1]}(1+c_{[n]\backslash\{j\}}) c_{[j]} \prod_{i \in [j-1] \backslash T} c_{[n]\backslash \{i\}} \prod_{i\in T} (1+ c_{[n]\backslash\{i\}})\\ &=& c_{[n]\backslash\{j+1\}}c_{[j+1]}- c_{[n]\backslash\{j\}}c_{[j]}+(1+c_{[n]\backslash\{j\}})c_{[j]}\sum_{T\subseteq [j-1]} \prod_{i \in [j-1] \backslash T} c_{[n]\backslash \{i\}} \prod_{i\in T} (1+ c_{[n]\backslash\{i\}})\\ &=& c_{[n]\backslash\{j+1\}}c_{[j+1]}- c_{[n]\backslash\{j\}}c_{[j]}+(1+c_{[n]\backslash\{j\}})c_{[j]} \\ &=& c_{[n]\backslash\{j+1\}}c_{[j+1]} + c_{[j]}.\end{aligned}$$ Equations
(\[ai\])–(\[bi\]) imply that the input values $a_i$ and output values $b_i$, $1 \le i \le n-1$, are determined uniquely by the coefficients of the polynomial $f$. Also, equation (\[bn-an\]) implies that there are two sets of values for $a_n$ and $b_n$ which will yield the same nested canalyzing function $f$. These facts recover Remark \[two-sets\]. In Table \[tabl2\], we give some examples of relationships between coefficients of NCFs in $n$ variables, nested in the order $x_1,x_2, \ldots, x_n$ for some small values of $n$.

  $n=3$                         $n=4$                                   $n=5$
  ----------------------------- --------------------------------------- ---------------------------------------------------
  $c_3 = c_{13}c_{23}c_{123}$   $c_4 = c_{234}c_{134}c_{124}c_{1234}$   $c_5 = c_{2345}c_{1345}c_{1245}c_{1235}c_{12345}$
  $c_2 = c_{23}c_{12}$          $c_{13} = c_{134}c_{123}$               $c_{124} = c_{1245}c_{1234}$
                                $c_{24} = c_{234}c_{124}c_{1234}$       $c_{23} = c_{2345}c_{123}$
                                $c_2 = c_{234}c_{12}$                   $c_2 = c_{2345}c_{12}$

  : \[tabl2\] Some examples of relationships between coefficients of NCFs

We now extend Theorem \[coeff rel\] to the general case when the variables are nested in any given order. For this, we will need to extend the definition of [*completion*]{} of a set $S$ with respect to any permutation of its elements. Let $\sigma$ be a permutation on the elements of the set $[n]$. We define a new order relation $<_{\sigma}$ on the elements of $[n]$ as follows: $i <_\sigma j$ if and only if $\sigma^{-1}(i) < \sigma^{-1}(j)$. Let $S$ be a nonempty subset of $[n]$, say $S = \{i_1, \dots, i_t\}$. Let $r_S^{\sigma} := \max\{\sigma^{-1}(i_1),\dots,\sigma^{-1}(i_t)\}$.
The *completion of S with respect to the permutation $\sigma$*, denoted by $[r_S^{\sigma}]_\sigma$, is the set $[r_S^{\sigma}]_\sigma := \{ \sigma(1),\dots,\sigma(r_S^\sigma)\}$. The following corollary is a generalization of Theorem \[coeff rel\]. It gives necessary and sufficient relations among the coefficients of a NCF whose variables are nested in the order specified by a permutation $\sigma$ on $[n]$. \[coeff rel sigma\] Let $f \in R$ and let $\sigma$ be a permutation of the set $[n]$. The polynomial $f$ is a nested canalyzing function in the order $x_{\sigma (1)}, \ldots, x_{\sigma (n)}$, with input values $a_{\sigma (i)}$ and corresponding output values $b_{\sigma (i)}, 1 \le i \le n$, if and only if $c_{[n]} = 1 $ and, for any subset $S \subseteq [n]$, $$\label{coeff formula sigma} c_S = c_{[r_S^{\sigma}]_\sigma} \prod_{w \in [r_S^{\sigma}]_\sigma \backslash S} c_{[n] \backslash \{w\}}.$$ We follow the same argument as in the proof of Theorem \[coeff rel\], where we impose the order relation $<_{\sigma}$ on the elements of $[n]$ and we replace all occurrences of the subscript $i$ by $\sigma (i)$ and $[r_S]$ by $[r_S^{\sigma}]_\sigma$. \[variety-sigma\] Let $\sigma$ be a permutation on $[n]$. The set of points in ${\mathds{F}}_2^{2^n}$ corresponding to nested canalyzing functions in the variable order $x_{\sigma(1)}, \dots, x_{\sigma(n)}$, denoted by $V_{\sigma}^{{\mathrm{ncf}}}$, is defined by $$V_{\sigma}^{{\mathrm{ncf}}} = \{(c_\emptyset,\dots,c_{[n]}) \in {\mathds{F}}_2^{2^n} : c_{[n]}=1, \, c_S = c_{[r_S^{\sigma}]_\sigma} \prod_{w \in [r_S^{\sigma}]_\sigma \backslash S} c_{[n] \backslash \{w\}} \mbox {, \, for } S \subseteq [n]\}.$$ The following corollary is an extension of Corollary \[ai bi\] and gives the input and output values of a Boolean NCF whose variables are nested in the order specified by some permutation $\sigma$. Let $f \in R$ and let $\sigma$ be a permutation of the elements of the set $[n]$. 
If $f$ is a nested canalyzing function in the order $x_{\sigma (1)}, \ldots, x_{\sigma (n)}$, with input values $a_j$ and corresponding output values $b_j, 1\le j \le n$, then $$\begin{aligned} \label{ais } a_j &=& c_{[n] \backslash \{\sigma (j) \}}, \hskip 5.2 true cm {\rm for}~~ 1 \le j \le n-1 \\ \label{b1s} b_1 &=& c_\emptyset + c_{\sigma(1)}c_{[n]\backslash \{\sigma (1) \}}, \\ \label{bis} b_{j+1}- b_{j} &=& c_{[j+1]_\sigma} c_{[n] \backslash \{\sigma (j+1)\}} + c_{[j]_\sigma}, \hskip .75 true cm {\rm for}~~1 \le j < n-1~~~{\rm and} \\ \label{bn-ans} b_{n} - a_{n} &=& b_{n-1} + c_{[n-1]_\sigma}.\end{aligned}$$ This follows from Corollary \[ai bi\], where we replace all occurrences of subscript $j$ by $\sigma (j)$ and $[r]$ by $[r]_{\sigma}$. Recall that the set $V^{{\mathrm{ncf}}}$ of nested canalyzing functions is the union of the sets $V_\sigma^{{\mathrm{ncf}}}$ of canalyzing functions with respect to a specified variable order. By Corollaries \[variety-id\], \[variety-sigma\], and the correspondence (\[corresp\]), we have $$\begin{aligned} V^{{\mathrm{ncf}}} &=& \bigcup_\sigma V_\sigma^{{\mathrm{ncf}}}.\end{aligned}$$ Corollary \[coeff rel sigma\] is the starting point for a geometric analysis of the set of all nested canalyzing functions. It provides a set of equations that have to be satisfied by the coefficient vectors of the polynomial representations of the functions. These coefficient vectors therefore form an algebraic variety in the space ${\mathds{F}}_2^{2^n}$, which turns out to have very nice properties. In particular, it is a so-called toric variety. Discussion ========== Our main contribution in this paper is to connect three different fields of inquiry into Boolean functions, which were heretofore apparently unconnected. The equivalence of nested canalyzing functions and unate cascade functions relates the electrical engineering point of view of logic circuits with the dynamic biological network view, providing a dictionary for results. 
The equivalence of both to a class of polynomial functions brings rich additional mathematical structure to the study of both. In particular, the language and concepts of algebraic geometry and the rich tool set of computational algebra and algebraic geometry provide a foundation that imposes a mathematical structure on the entire class of these functions, which suggests an entirely new way of studying them. As an algebraic variety, the class of nested canalyzing functions has a very special structure, namely that of a toric variety [@fulton]. Toric varieties lie at the interface of geometry, algebra, and combinatorics and have a rich structure [@sturmfels]. In another paper, we will explore the properties of the toric varieties in the previous section in more detail. In particular, our motivation for this study originally was the desire to give a characterization of nested canalyzing functions as polynomials, which could be used as part of the model selection algorithm in [@LS]. That is, we are interested in giving an efficient criterion which allows our symbolic computation algorithm to preferentially pick nested canalyzing functions rather than general polynomials. The characterization of this class as a toric variety is the first important step in this direction. It deserves mention that the connection to unate cascade functions was discovered in a roundabout way. We first established the parametrization of nested canalyzing functions by special polynomials. The structure of these polynomials makes it easy to count how many there are for a given number of variables. Carrying out this counting procedure for the first few values of $n$ resulted in a sequence of integers, which we submitted to N. Sloane’s integer sequence database ([http://www.research.att.com/\~njas/sequences/]{}). One of the matching sequences was that for the number of unate cascade functions.
Acknowledgements ================ Jarrah and Laubenbacher were supported partially by NSF Grant DMS-0511441. Laubenbacher was also supported partially by NIH Grant RO1 GM068947-01, a joint computational biology initiative between NIH and NSF. Raposa was supported by a Fulbright research grant while visiting the Virginia Bioinformatics Institute (VBI) where this research was conducted. She thanks VBI for the hospitality during her stay. [00]{} Bender, E. A.: 1980, ‘The Number of Fanout-Free Functions with Various Gates’. (1), 181–190. Bender, E. A. and J. T. Butler: 1978, ‘Asymptotic Approximations for the Number of Fanout-Free Functions.’. (12), 1180–1183. Butler, J. T., T. Sasao, and M. Matsuura: 2005, ‘Average Path Length of Binary Decision Diagrams’. (9), 1041–1053. Fulton, W.: 1993, [*Introduction to toric varieties*]{}, Vol. 131 of [ *Annals of Mathematics Studies*]{}. Princeton, NJ: Princeton University Press. Gabbouj, M., P.-T. Yu, and E. J. Coyle: 1992, ‘Convergence behavior and root signal sets of stack filters’. (1), 171–193. Hayes, J. P.: 1976, ‘Enumeration of Fanout-Free Boolean Functions’. (4), 700–709. Just, W., I. [Shmulevich]{}, and J. [Konvalina]{}: 2004, ‘[The number and probability of canalizing functions]{}’. , 211–221. Kauffman, S., C. Peterson, B. Samuelsson, and C. Troein: 2003, ‘[Random Boolean network models and the yeast transcriptional network]{}’. (25), 14796–14799. Kauffman, S., C. Peterson, B. Samuelsson, and C. Troein: 2004, ‘[Genetic networks with canalyzing Boolean rules are always stable]{}’. (49), 17102–17107. Kauffman, S. A.: 1993, [*The Origins of Order: Self–Organization and Selection in Evolution*]{}. New York; Oxford: Oxford University Press. Laubenbacher, R. and B. Stigler: 2004, ‘A computational algebra approach to the reverse-engineering of gene regulatory networks’. , 523–537. Lidl, R. and H. Niederreiter: 1997, [*Finite fields*]{}, Vol. 20 of [ *Encyclopedia of Mathematics and its Applications*]{}.
Cambridge: Cambridge University Press, second edition. Lynch, J. F.: 1995, ‘On the threshold of chaos in random Boolean cellular automata’. (2-3), 239–260. Maitra, K.: 1962, ‘Cascaded switching networks of two-input flexible cells’. , 136–143. Moreira, A. A. and L. A. Amaral: 2005, ‘Canalizing [K]{}auffman Networks: Nonergodicity and Its Effect on Their Critical Behavior’. (21), 218702. Mukhopadhyay, A.: 1969, ‘Unate Cellular Logic’. (2), 114–121. Pogosyan, G.: 1999, ‘The Number of Cascade Functions’. , 131. Sasao, T. and K. Kinoshita: 1979, ‘On the Number of Fanout-Free Functions and Unate Cascade Functions.’. (1), 66–72. Stauffer, D.: 1987, ‘On forcing functions in Kauffman’s random Boolean networks’. (3-4), 789–794. Stern, M. D.: 1999, ‘[Emergence of homeostasis and “noise imprinting” in an evolution model]{}’. (19), 10746–10751. Sturmfels, B.: 1996, [*Gröbner bases and convex polytopes*]{}, Vol. 8 of [ *University Lecture Series*]{}. Providence, RI: American Mathematical Society. Waddington, C. H.: 1942, ‘Canalisation of development and the inheritance of acquired characters’. , 563–564. Wendt, P., E. Coyle, and N. Gallagher: 1986, ‘Stack Filters’. , 898–911. Yu, P.-T. and E. Coyle: 1990, ‘Convergence behavior and N-roots of stack filters’. (9), 1529–1544.
--- abstract: 'The Casimir-Polder interaction of ground-state and excited atoms with graphene is investigated with the aim to establish whether graphene systems can be used as a shield for vacuum fluctuations of an underlying substrate. We calculate the zero-temperature Casimir-Polder potential from the reflection coefficients of graphene within the framework of the Dirac model. For both doped and undoped graphene we show limits at which graphene could be used effectively as a shield. Additional results are given for AB-stacked bilayer graphene.' author: - Sofia Ribeiro - Stefan Scheel bibliography: - 'phdthesis.bib' title: Shielding vacuum fluctuations with graphene --- Introduction ============ Graphene’s extraordinary electronic and optical properties hold great promise for applications in photonics and optoelectronics. The existence of a true two-dimensional (2D) material having a thickness of a single atom was believed to be impossible for a long time, because both finite temperature and quantum fluctuations would destroy the 2D structure. However, since the first groundbreaking experiments [@Science306_666_2004], the study of graphene became an active field in condensed matter. Theoretical reviews of graphene’s properties can be found in Refs. [@RMP_81_2009; @RMP_Peres_2010]. The technological push towards miniaturization resulted in the idea of devising small structures based on graphene. For instance, by placing graphene between different substrates or by patterning a given substrate, it is possible to create artificial materials with tunable properties [@RPP74_082501_2011]. Hybrid quantum systems which combine cold atoms with solid structures hold great promise for the study of fundamental science, creating the possibility to build devices to measure precisely gravitational, electric and magnetic fields [@NJP13_083020_2011].
For instance, many of the proposed extensions to the Standard Model of particle physics include forces, due to compactified extra dimensions, that would modify Newtonian gravity on submicrometer scales [@BookNonNewton; @ARNP53_77_2003]. By performing extremely careful force measurements near surfaces, it is hoped that more stringent limits on the presence of such forces may be obtained. With this in mind, hybrid systems in which neutral atoms and graphene are held in close proximity represent an important and attractive case to study. A quick estimate shows that the Casimir-Polder force dominates gravity by several orders of magnitude at micrometer distances. It is therefore necessary to find a system that is simple enough in order to either be able to calculate its dispersion effect to high enough precision, or to provide a shield against vacuum fluctuations of another (macroscopic) body. Graphene has been shown to be a strong absorber of electromagnetic radiation: it interacts strongly with light over a wide wavelength range, particularly in the far-infrared and terahertz parts of the spectrum, due to its high carrier mobility and conductivity [@PRL108_047401_2012]. Considering that graphene is only one atomic layer thick, its (universal) absorption coefficient of $\eta=\pi e^2 / (\hbar c) \approx 2.3\%$ is quite remarkable [@graphenebook]. In Ref. [@nnano7_330_2012], systems made of several layers of graphene were shown to be an effective shield for terahertz radiation while letting visible light pass. These studies drew attention to the development of transparent mid- and far-infrared photonic devices. With graphene’s absorption properties in mind, we investigate the possibility of shielding the electromagnetic vacuum fluctuations of a macroscopic body placed nearby. 
The purpose of this study is to investigate whether and under which circumstances the Casimir-Polder potential between an atom and a graphene-substrate system is dominated by the interaction with graphene, such that the effect of the substrate does not play an important role. This knowledge will allow us to manipulate the Casimir-Polder potential of a layered system by placing the graphene at different graphene-substrate distances or by patterning it into different shapes. This article is organised as follows. After briefly introducing graphene into the formalism of macroscopic QED in Sec. \[sec:CPpotentialgraphene\], we give some numerical results for the Casimir–Polder shift of an atom near a graphene sheet in Sec. \[sec:atomgraphene\]. In Secs. \[sec:1sheet\] and \[sec:bilayer\], we study the shielding of vacuum fluctuations by single-layer and bilayer graphene, respectively, and give concluding remarks in Sec. \[sec:conclusions\]. Casimir-Polder interaction with graphene {#sec:CPpotentialgraphene} ======================================== It is well known that an atom placed near a macroscopic body will experience a dispersion force — the Casimir-Polder force — due to the presence of fluctuations of the electromagnetic field even at zero temperature [@casimirpolderpaper]. We begin by investigating the Casimir-Polder interaction of an atom next to a graphene layer at zero temperature. We adopt the Dirac model for graphene and calculate the Casimir-Polder interactions based on the formalism of macroscopic QED. 
Upon quantization of the electromagnetic field in the presence of absorbing bodies, and application of second-order perturbation theory, the Casimir-Polder potential for planar structures can be written as [@acta2008] $$\begin{gathered} U_{\mathrm{CP}} {\ensuremath{\left(z_{A}\right)}} = \frac{\hbar \mu_{0}}{8 \pi^{2}} \int_{0}^{\infty} d \xi \xi^{2} \alpha_{at} {\ensuremath{\left(i \xi\right)}} \nonumber \\ \times \int\limits_{0}^{\infty} d k_{\parallel} \frac{e^{-2 k_{\parallel} \gamma_{0z} z_{A}} }{\gamma_{0z}} \left[ \mathrm{R}_{\mathrm{TE}} + \mathrm{R}_{\mathrm{TM}} \left( 1- \frac{2 k_{\parallel}^{2} \gamma_{0z}^{2} c^{2} }{\xi^{2}} \right) \right] \label{eq:Ucp_1}\end{gathered}$$ where $\gamma_{iz}=\sqrt{1+\varepsilon_i(i\xi)\xi^{2}/(k_\|^{2}c^2)}$, so that $k_\|\gamma_{iz}$ is the $z$-component of the wavenumber in the medium with permittivity $\varepsilon_i$ at imaginary frequencies (the index 0 refers to the medium in which the atom is placed), and $\alpha_{at} (\omega)$ is the isotropic atomic polarizability defined by $$\begin{aligned} \mathbf{\alpha}_{at} (\omega) = \lim_{\varepsilon \rightarrow 0} \frac{2}{\hbar} \sum_{k_{A} \neq 0_{A}} \frac{\omega_{k 0} \mathbf{d}_{0 k}\cdot \mathbf{d}_{k 0} }{\omega_{k 0}^{2} -\omega^{2} - i \omega \varepsilon } . \label{eq:atomicpol}\end{aligned}$$ This equation is valid for zero temperature. A replacement of the frequency integral by a Matsubara sum has to be performed for finite temperatures [@acta2008]. In this case, the potential is well approximated by inserting the temperature-dependent reflection coefficients in the lowest term of the Matsubara sum ($j=0$) while keeping the zero-temperature coefficients for all higher Matsubara terms [@PRB84_035446_2011]. Only for $k_B T \gtrsim \Delta$ do thermal corrections become important [@PRA86_012515_2012] ($\Delta \approx 0.1$ eV is the gap parameter of quasiparticle excitations). 
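As a consistency check, Eq. \[eq:Ucp\_1\] can be evaluated numerically in the perfect-conductor limit ($\mathrm{R}_{\mathrm{TM}}=1$, $\mathrm{R}_{\mathrm{TE}}=-1$), which serves as a reference later in the text. The sketch below (plain Python, midpoint quadrature; the static rubidium polarizability value is a standard literature number, not taken from this paper) reproduces the closed form $U=-3\hbar c\,\alpha_{at}(0)/(32\pi^2\varepsilon_0 z_A^4)$ that holds for a frequency-independent polarizability:

```python
import math

hbar = 1.054571817e-34   # J s
c    = 2.99792458e8      # m/s
mu0  = 4e-7 * math.pi    # vacuum permeability, N/A^2
eps0 = 8.8541878128e-12  # vacuum permittivity, F/m

def casimir_polder(z_a, alpha0, r_te, r_tm, n_kap=400, n_xi=200):
    """Midpoint-rule evaluation of the zero-temperature potential for a
    frequency-independent polarizability alpha0.

    The inner variable is changed from k_par to kappa = k_par*gamma_0z,
    which turns the exponential into exp(-2*kappa*z_a) and restricts the
    frequency integral to xi < c*kappa."""
    kap_max = 20.0 / z_a            # exp(-2 kappa z_a) suppresses larger kappa
    dkap = kap_max / n_kap
    total = 0.0
    for i in range(n_kap):
        kap = (i + 0.5) * dkap
        dxi = c * kap / n_xi
        inner = 0.0
        for j in range(n_xi):
            xi = (j + 0.5) * dxi
            bracket = (r_te(xi, kap)
                       + r_tm(xi, kap) * (1.0 - 2.0 * kap**2 * c**2 / xi**2))
            inner += xi**2 * bracket * dxi
        total += math.exp(-2.0 * kap * z_a) * inner * dkap
    return hbar * mu0 / (8.0 * math.pi**2) * alpha0 * total

# Perfect conductor: R_TM = 1, R_TE = -1; Rb static polarizability ~5.26e-39 SI
u_num = casimir_polder(1e-6, 5.26e-39,
                       lambda xi, kap: -1.0, lambda xi, kap: 1.0)
u_ana = -3.0 * hbar * c * 5.26e-39 / (32.0 * math.pi**2 * eps0 * (1e-6)**4)
```

For graphene one would pass the reflection coefficients of the following sections in place of the perfect-conductor lambdas.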
In order to compute the reflection coefficients $\mathrm{R}_{\mathrm{TM}}$ and $\mathrm{R}_{\mathrm{TE}}$ for transverse magnetic (TM) and transverse electric (TE) waves from a graphene sheet, two alternative models exist: the hydrodynamic model and the Dirac model. The first treats graphene as an infinitesimally thin, positively charged flat sheet carrying a homogeneous fluid with some mass and negative charge densities. The Dirac model, on the other hand, incorporates the conical electron dispersion relation, modelling graphene as a two-dimensional gas of massless Dirac fermions whose low-energy excitations move with the Fermi velocity $v_F$. The ranges of validity of these respective models are not completely resolved [@NJP13_083020_2011]. The reflection coefficients are calculated by matching the dyadic Green function of free space and its derivatives on either side of a two-dimensional conducting sheet [@PRB85_195427_2012], with the result that $$\begin{aligned} \mathrm{R}_{\mathrm{TM}} &=& \frac{\gamma_{0z} \alpha_{\parallel} (k_{\parallel}, \omega)}{1+\gamma_{0z} \alpha_{\parallel} (k_{\parallel}, \omega)} ,\nonumber\\ \mathrm{R}_{\mathrm{TE}} &=& \frac{(\omega / c k_{\parallel})^{2} \alpha_{\perp} (k_{\parallel}, \omega) }{\gamma_{0z} -(\omega / c k_{\parallel})^{2} \alpha_{\perp} (k_{\parallel}, \omega)}, \label{eq:rBo}\end{aligned}$$ where $$\alpha(\mathbf{k},\omega)=- e^2 \frac{\chi(\mathbf{k},\omega)}{ 2 \varepsilon_0 k_{\parallel}} = i \frac{\sigma (\mathbf{k},\omega) \, k_{\parallel}}{2 \varepsilon_0 \omega}$$ is given by the density-density correlation function $\chi(\mathbf{k},\omega)$ [@NJP8_318_2006; @PRB75_205418_2007] or, alternatively, by the conductivity $\sigma (\mathbf{k},\omega)$ [@PRB77_155409_2008; @PRB86_075439_2012]. The functions for doped and undoped graphene are derived from the band structure of graphene. 
The problem with this approach is that no transverse functions are available in the literature for single sheets, only the longitudinal ones [@PRB85_195427_2012]. It has been shown in Ref. [@PRB80_245424_2009] that, at zero as well as at finite temperature, the retardation effects in graphene systems are negligible. Hence, we neglect retardation, so that the TE modes do not contribute and we can set $\gamma_{0z} \to 1$. The density-density correlation function for undoped graphene in the complex frequency plane ($\omega=i\xi$) is given by [@PRB85_195427_2012] $$\chi (\mathbf{k}, i\xi) = -\frac{g}{16 \hbar} \frac{k^{2}_{\parallel}}{\sqrt{v_{F}^{2} k^{2}_{\parallel} + \xi^2 }}. \label{eq:xi_undoped}$$ The parameter $g=4$ is the degeneracy factor, with a factor of 2 accounting for spin and another factor of 2 for cone degeneracy; $v_{F}$ is the Fermi velocity. However, most materials naturally occur with charge doping, where the Fermi level or chemical potential $\mu$ is away from charge neutrality ($\mu=0$). When the graphene sheet is doped, the density-density correlation function becomes more complicated. Following Ref. [@PRB85_195427_2012], the density-density correlation function on the imaginary frequency axis can be written in terms of the dimensionless variables $\tilde{k}=k_{\parallel}/2k_{F}$ and $\tilde{\xi}=\hbar\xi/2E_F$ ($E_F=\hbar v_{F}k_{F}$ and $k_{F}=\sqrt{4\pi n/g}$), as $$\chi (\mathbf{k}, i\xi) = - D_{0} {\ensuremath{\left[1 + \frac{\tilde{k}^2}{4 \sqrt{\tilde{k}^{2} + \tilde{\xi}^{2}}} {\ensuremath{\left( \pi - f (\tilde{k}, \tilde{\xi})\right)}} \right]}} \label{eq:chi}$$ where $D_{0} = \sqrt{gn/(\pi \hbar^{2} v_{F}^{2})}$ is the density of states at the Fermi level for doping concentration $n$. 
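In the nonretarded limit ($\gamma_{0z}\to 1$), Eqs. \[eq:rBo\] together with Eq. \[eq:xi\_undoped\] give a simple closed form for the TM reflection coefficient of undoped graphene. A minimal numerical sketch (the value $v_F\approx 10^6$ m/s is a typical literature value, not quoted in this paper):

```python
import math

hbar = 1.054571817e-34   # J s
e    = 1.602176634e-19   # C
eps0 = 8.8541878128e-12  # F/m
v_f  = 1.0e6             # Fermi velocity (m/s), typical literature value
g    = 4                 # spin x cone degeneracy

def chi_undoped(k_par, xi):
    """Density-density correlation of undoped graphene at imaginary frequency."""
    return -(g / (16.0 * hbar)) * k_par**2 / math.sqrt(v_f**2 * k_par**2 + xi**2)

def r_tm_nonretarded(k_par, xi):
    """TM reflection coefficient in the nonretarded limit gamma_0z -> 1."""
    alpha = -e**2 * chi_undoped(k_par, xi) / (2.0 * eps0 * k_par)
    return alpha / (1.0 + alpha)
```

In the static limit $\xi \ll v_F k_\parallel$ the sheet polarizability becomes $g e^2/(32\varepsilon_0\hbar v_F)\approx 3.4$, independent of $k_\parallel$, so $\mathrm{R}_{\mathrm{TM}}\approx 0.77$; the coefficient falls off with increasing $\xi$, which is why graphene reflects much more weakly than a perfect conductor once the full frequency integral is taken.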
The function $f(\tilde{k}, \tilde{\xi})$ is defined as $$\begin{gathered} f (\tilde{k}, \tilde{\xi}) = \arcsin {\ensuremath{\left(\frac{1 - i \tilde{\xi}}{ \tilde{k}}\right)}} + \arcsin {\ensuremath{\left(\frac{1 + i \tilde{\xi}}{ \tilde{k}}\right)}}\nonumber \\ - \frac{i \tilde{\xi}-1}{\tilde{k}} \sqrt{1-{\ensuremath{\left( \frac{i \tilde{\xi}-1}{\tilde{k}}\right)}}^2} + \frac{i \tilde{\xi}+1}{\tilde{k}} \sqrt{1-{\ensuremath{\left( \frac{i \tilde{\xi}+1}{\tilde{k}}\right)}}^2}.\end{gathered}$$ In real graphene there are always deviations from the conical shape of the band structure, and bands other than the lowest also contribute; thus the functions used here are valid only for low frequencies ($\hbar \omega \lesssim 4$ eV). Atom near a single graphene sheet {#sec:atomgraphene} ================================= The simplest geometry is an atom at a distance $z_{A}$ away from a perfectly flat graphene sheet. In the case of undoped graphene, the relevant density-density correlation function to be used in the reflection coefficients $\mathrm{R}_{\mathrm{TM}}$ and $\mathrm{R}_{\mathrm{TE}}$ at a graphene/vacuum interface, Eqs. \[eq:rBo\], is the one shown in Eq. \[eq:xi\_undoped\]. In the framework of this study, it is interesting to compare this result to the interaction between an atom and a perfect conductor, where $\mathrm{R}_{\mathrm{TM}}=1$ and $\mathrm{R}_{\mathrm{TE}}=-1$. As an example, we have chosen a ground-state rubidium atom at zero temperature. We found that the interaction between the atom and graphene is about $5\%$ of the interaction between an atom and a perfect conductor, see Fig. \[results\_doped\_graphene\]. Using the same Dirac model for graphene, it has already been shown that, at zero temperature, the interaction between graphene and an ideal conductor is about $2.6\%$ of the interaction between two perfect conductors separated by the same distance [@PRB80_245406_2009]. When the graphene sheets are doped, one has to use the reflection coefficients given in Eqs. \[eq:rBo\] 
with $\chi (\mathbf{k},i\xi)$ defined by Eq. \[eq:chi\]. The results are shown in Fig. \[results\_doped\_graphene\] and Table \[table:results\], where we present the results for doping densities $10^{10}$, $10^{11}$, $10^{12}$ and $10^{13}$ cm$^{-2}$. ![(Color online) Casimir-Polder potential between a ground state rubidium atom and a doped graphene sheet. The upper solid line (blue) is the result for undoped graphene, while the dashed curves (green) are for doping densities $10^{10}$, $10^{11}$, $10^{12}$ and $10^{13}$ cm$^{-2}$, respectively, from top to bottom. The solid bottom line (red) is the result for a perfect conductor. \[results\_doped\_graphene\]](results_doped_graphene){width="8cm"}

  doping density (cm$^{-2}$)   $U_\mathrm{CP}$ (s$^{-1}$)
  ---------------------------- ----------------------------
  no doping                    -90.987
  $10^{10}$                    -121.940
  $10^{11}$                    -165.489
  $10^{12}$                    -244.768
  $10^{13}$                    -371.140

  : Casimir-Polder potential between rubidium atoms in the ground state and graphene sheets at $z_A=1~\mu$m. \[table:results\]

Graphene sheet above a gold substrate \[sec:1sheet\] ==================================================== ![Scheme of an atom above a freestanding graphene sheet placed above a substrate.[]{data-label="GoldGraphene_Atom"}](gold_graphene_atom){width="7cm"} The purpose of this manuscript is to show whether one (or more) freestanding graphene sheets could shield an atom above them from the effects of a substrate below (see Fig. \[GoldGraphene\_Atom\]). For a single graphene layer, the system will in effect be a layered medium of the structure $1\nmid2\mid3$, where the graphene sheet (denoted by $\nmid$) separates the free-space regions 1 and 2, and the index 3 denotes the substrate (subscript $s$) with permittivity $\varepsilon(\omega)$. 
The Fresnel coefficients for this geometry can be written as [@ChewBook] $$\begin{aligned} \tilde{R} = R_{G} + \frac{T_{G} R_{0s} T_{G} e^{2 i k_{0 z} d}}{1 - R_{G} R_{0s} e^{2 i k_{0z} d}},\end{aligned}$$ where $T_{G}$ is the transmission coefficient through the graphene layer. The two polarizations have different coefficients: the relation $T^{\mathrm{TE}}_G = 1+R^{\mathrm{TE}}_G$ holds for the amplitude coefficient of the TE mode, while the condition $T^{\mathrm{TM}}_G = 1-R^{\mathrm{TM}}_G$ should be used for the TM mode. The Fresnel coefficients for the structure $1\nmid2\mid3$ then read $$\begin{aligned} \label{eq:R3} \tilde{R}^{\mathrm{TE}} &= \frac{R^{\mathrm{TE}}_{G} + R^{\mathrm{TE}}_{0s} e^{2 i k_{0 z} d} + 2 R^{\mathrm{TE}}_{G} R^{\mathrm{TE}}_{0s} e^{2 i k_{0 z} d}}{1 - R^{\mathrm{TE}}_{G} R^{\mathrm{TE}}_{0s} e^{2 i k_{0z} d}}, \\ \tilde{R}^{\mathrm{TM}} &= \frac{R^{\mathrm{TM}}_{G} + R^{\mathrm{TM}}_{0s} e^{2 i k_{0 z} d} - 2 R^{\mathrm{TM}}_{G} R^{\mathrm{TM}}_{0s} e^{2 i k_{0 z} d}}{1 - R^{\mathrm{TM}}_{G} R^{\mathrm{TM}}_{0s} e^{2 i k_{0z} d}}.\end{aligned}$$ The reflection coefficients for TE and TM waves at the interface between free space and the substrate are the usual Fresnel coefficients $$\begin{aligned} R^{\mathrm{TM}}_{0s} &= \frac{\varepsilon_{s} \gamma_{0z} - \gamma_{sz}}{\varepsilon_{s} \gamma_{0z} + \gamma_{sz}}, \\ R^{\mathrm{TE}}_{0s} &= \frac{\gamma_{0z} - \gamma_{sz}}{ \gamma_{0z} + \gamma_{sz}}.\end{aligned}$$ In the following, we will present our results for the Casimir-Polder interaction with both doped and undoped graphene sheets. 
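For the zero-temperature potential these coefficients are needed on the imaginary frequency axis, where $e^{2ik_{0z}d}$ becomes the real decay factor $e^{-2\kappa_0 d}$ with $\kappa_0 = k_\parallel\gamma_{0z}$. A small sketch of the combination rule (the helper name is ours):

```python
import math

def layered_reflection(r_g, r_0s, kappa0, d, mode):
    """Effective reflection coefficient of the graphene + substrate stack,
    evaluated on the imaginary axis: exp(2 i k_0z d) -> exp(-2 kappa0 d).
    mode is "TE" (where T_G = 1 + R_G) or "TM" (where T_G = 1 - R_G)."""
    decay = math.exp(-2.0 * kappa0 * d)
    sign = 1.0 if mode == "TE" else -1.0
    num = r_g + r_0s * decay + sign * 2.0 * r_g * r_0s * decay
    return num / (1.0 - r_g * r_0s * decay)
```

Two limits provide quick checks: for $d\to\infty$ the substrate drops out and $\tilde R\to R_G$, while a perfectly reflecting sheet ($R_G^{\mathrm{TM}}=1$) returns $\tilde R^{\mathrm{TM}}=1$ regardless of the substrate, i.e. complete shielding.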
As a substrate material we have chosen gold, a material used in several experimental setups, whose permittivity we describe by the Drude model $$\begin{aligned} \varepsilon (\omega) = 1 - \frac{\omega_{pe}^{2}}{\omega (\omega + i \gamma_{e})}\end{aligned}$$ with parameters $\omega_{pe}=1.37\times 10^{16}\mbox{s}^{-1}$ and $\gamma_{e} = 4.12 \times 10^{13} \mbox{s}^{-1}$. Undoped graphene sheet ---------------------- In this simple geometry with only one graphene sheet, the total Casimir-Polder potential of the graphene-substrate system is limited by the potential of the single graphene sheet and that for the gold substrate. At small distances $d$ between graphene and substrate the Casimir-Polder interaction felt by the atom is dominated by the interaction with the gold substrate. With increasing distance $d$, the Casimir-Polder potential is well approximated by that of a single graphene sheet. ![(Color online) Normalized Casimir-Polder potential of a rubidium atom in the ground state (black line), 32S (blue dotted line), 43S (green dashed line) and 54S (red dashed line) at $z_{A}=1~\mu$m for different distances $d$ between the undoped graphene sheet and gold. \[results\_shield\]](UndopedShield){width="8.5cm"} In order to quantify the shielding effect of graphene, we fix the atom’s position at $z_{A} = 1~\mu$m, vary the distance $d$ between the graphene and the substrate, and normalize these results to the Casimir-Polder potential without the substrate at the same distance $z_{A}$. Due to the recent experimental progress in working with atoms in Rydberg states, one might look at the differences that may arise from having an atom in an excited state rather than in its ground state. 
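On the imaginary frequency axis the Drude permittivity becomes the real, monotonically decreasing function $\varepsilon(i\xi) = 1 + \omega_{pe}^2/[\xi(\xi+\gamma_e)]$, which is what enters the Fresnel coefficients of the previous section. A short sketch with the gold parameters of the text (the function names are ours):

```python
import math

c = 2.99792458e8  # m/s

def eps_gold(xi, w_pe=1.37e16, gamma_e=4.12e13):
    """Drude permittivity of gold at imaginary frequency omega = i*xi."""
    return 1.0 + w_pe**2 / (xi * (xi + gamma_e))

def r_tm_0s(k_par, xi, eps_s):
    """Fresnel TM coefficient of the free-space/substrate interface at
    imaginary frequency, using the gamma_iz factors of the text."""
    g0 = math.sqrt(1.0 + xi**2 / (c**2 * k_par**2))
    gs = math.sqrt(1.0 + eps_s * xi**2 / (c**2 * k_par**2))
    return (eps_s * g0 - gs) / (eps_s * g0 + gs)
```

At low imaginary frequencies $\varepsilon(i\xi)$ is very large and $R^{\mathrm{TM}}_{0s}\to 1$: bare gold is nearly a perfect mirror, which sets the scale against which the graphene shielding is measured.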
The atomic transition frequencies of a highly excited Rydberg atom lie in a window of frequencies in which graphene absorbs well [@PRL108_047401_2012; @nnano7_330_2012], so that one would expect a larger Casimir-Polder shift for the atom-graphene-gold system than for the corresponding atom-gold system. For the calculation of the interaction energy between an atom in an excited state and a surface, one has to add a resonant contribution to the usual non-resonant Casimir-Polder potential $$\begin{gathered} U_{\mathrm{CP}}^{\mathrm{R}} {\ensuremath{\left(z_{A}\right)}} = \frac{\mu_{0}}{4 \pi} \sum_{k \neq n} \omega_{n k}^{2} \mathbf{d}_{n k} \cdot \mathbf{d}_{k n} \int_{0}^{\infty} d \kappa_{0z} e^{-2 \kappa_{0z} z_{A}} \nonumber \\ \times {\ensuremath{\left[\mathrm{Re}{\ensuremath{\left[\mathrm{R}_{\mathrm{TE}}\right]}} + \mathrm{Re}{\ensuremath{\left[\mathrm{R}_{\mathrm{TM}}\right]}} {\ensuremath{\left(1 + \frac{2 \kappa_{0z}^{2} c^{2}}{\omega^{2}}\right)}}\right]}}. \end{gathered}$$ The Casimir-Polder potentials for a selection of Rydberg states of rubidium are shown in Fig. \[results\_shield\] and compared with the corresponding results for the ground state. We can clearly see that for the excited states the shielding properties of graphene are more pronounced. The differences in the potential experienced by the various states reflect the resonances of the different atomic transition frequencies allowed for each state. Doped graphene sheet -------------------- For doped graphene one has to use the density-density correlation function Eq. \[eq:chi\] in the reflection coefficients Eqs. \[eq:rBo\]. In Fig. \[results\_doping\] we show the results for different doping concentrations for fixed atom-graphene and graphene-gold distances. One observes that for higher doping concentration the shielding effect of graphene becomes somewhat better than at lower concentrations. 
This is due to the fact that the conductivity increases, so that the graphene sheet more and more resembles a perfect conductor, see Fig. \[results\_doped\_graphene\]. ![(Color online) Normalized Casimir-Polder potential of a rubidium atom in the ground state at $z_{A}=1~\mu$m and $d = 2~\mu$m for different doping densities. \[results\_doping\]](DopingShield){width="8.5cm"} Note that these results change for different atomic eigenstates, the effect of which is shown in Fig. \[results\_doped\_states\] for ground and excited states (the curves were calculated for a doping concentration of $n = 10^{12}$ cm$^{-2}$). Each state has different transition frequencies that influence the strength of the atom-graphene coupling. In the same way, different doping concentrations influence the absorbance of graphene, Fig. \[results\_doped\_graphene\], so each concentration and each atomic state is expected to yield distinct results. ![(Color online) Normalized Casimir–Polder potential of a rubidium ($^{87}$Rb) atom in the ground state (black line), 32S (blue dotted line), 43S (green dashed line) and 54S (red dashed line) at $z_{A}=1~\mu$m for different distances $d$ between doped graphene ($n = 10^{12}$ cm$^{-2}$) and gold. \[results\_doped\_states\]](DopedShield){width="8.5cm"} Bilayer graphene above a substrate {#sec:bilayer} ================================== During the manufacturing of graphene, layers of varying thickness are typically generated. Besides the 'pure' form of single-layer graphene, bilayers of two weakly bound sheets are common. The natural form of bilayer graphene is AB-stacking, which is the basis of graphite. However, alternative stackings also exist, in which one layer is rotated by some angle relative to the other [@PRB86_075439_2012]. Here we focus on AB-stacking, in which half the atoms are aligned on top of one another whereas the other half are located above the center of the hexagonal lattice of the opposite layer. 
The conductivity of AB-stacked bilayer graphene can be found in Refs. [@PRB77_155409_2008; @PRB86_075439_2012] for both doped and undoped cases. The longitudinal conductivity for undoped AB-stacking ($\mu=0$) at zero temperature can be written as $$\begin{gathered} \sigma_{xx} (\omega) = \frac{e^2}{2 \hbar} \nonumber \\ \times\bigg[ \bigg( \frac{\omega +2 \gamma}{2(\omega+\gamma)}+\frac{\omega -2 \gamma}{2(\omega-\gamma)} \Theta (\omega-2\gamma) \bigg) \Theta (\omega) \nonumber \\ + \frac{\gamma^{2}}{2 \omega^2} {\ensuremath{\left[\Theta (\omega-\gamma) + \Theta (\omega+\gamma)\right]}} \Theta (\omega-\gamma) \bigg] \,,\end{gathered}$$ and the perpendicular conductivity is given by $$\begin{gathered} \sigma_{zz} (\omega) = \frac{e^2}{4 \hbar} {\ensuremath{\left(\frac{\gamma d}{\hbar v_F }\right)}}^2 \nonumber \\ \times \bigg[ \frac{\omega}{2(\omega+\gamma)}+\frac{\omega}{2(\omega-\gamma)} \Theta (\omega-2\gamma) \bigg] \Theta (\omega) \end{gathered}$$ where $\Theta (x)$ is the Heaviside step function, $\gamma=0.4$ eV is the interlayer hopping energy and $d=3.3$ Å the interlayer distance. These conductivity results do not include spin-orbit coupling; that effect could also be taken into account in the calculation of the conductivity in order to cover other possible effects [@PRB86_075439_2012]. The conductivity at imaginary frequencies, as required for the nonresonant Casimir-Polder potential, can be obtained from the Kramers-Kronig relation $$\begin{gathered} \sigma (i \xi) = \frac{2}{\pi} \int_{0}^{\infty} d \omega \, \frac{\omega \, \mathrm{Im} \sigma (\omega)}{\omega^2+ \xi^2}.\end{gathered}$$ In Fig. \[results\_bilayer\] we show the Casimir-Polder potential of an atom (either in its ground state or in a Rydberg state) next to a graphene bilayer. When compared to a single graphene sheet, a bilayer of graphene does not provide better shielding for a ground-state atom, and for an excited one the results are even less favourable, see Fig. \[results\_shield\]. 
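The Kramers-Kronig relation above is straightforward to evaluate numerically once $\mathrm{Im}\,\sigma(\omega)$ is tabulated. The sketch below maps the semi-infinite range onto $(0,1)$ and checks the quadrature against a Drude-form test function, for which the transform is known in closed form, $\sigma(i\xi)=\sigma_0\gamma/(\gamma+\xi)$; this test conductivity is an illustration, not the bilayer formulas above:

```python
import math

def sigma_imag_axis(im_sigma, xi, scale, n=4000):
    """sigma(i xi) = (2/pi) * Int_0^inf dw  w * Im sigma(w) / (w^2 + xi^2),
    with the substitution w = scale*t/(1-t) to handle the infinite range."""
    total = 0.0
    dt = 1.0 / n
    for i in range(n):
        t = (i + 0.5) * dt
        w = scale * t / (1.0 - t)
        jac = scale / (1.0 - t) ** 2          # dw/dt
        total += w * im_sigma(w) / (w * w + xi * xi) * jac * dt
    return 2.0 / math.pi * total

# Drude test (sigma_0 = 1): Im sigma = g*w/(g^2+w^2)  ->  sigma(i xi) = g/(g+xi)
g_e = 1.0e13
num = sigma_imag_axis(lambda w: g_e * w / (g_e**2 + w**2), 0.5e13, scale=g_e)
ana = g_e / (g_e + 0.5e13)
```

The `scale` parameter only controls where the grid points cluster; choosing it near the width of $\mathrm{Im}\,\sigma$ keeps the midpoint rule accurate.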
![(Color online) Normalized Casimir-Polder potential of a rubidium atom in the ground state (black line), 32S (blue dotted line), 43S (green dashed line) and 54S (red dashed line), at $z_{A}=1~\mu$m for different distances $d$ between the graphene bilayer and gold.[]{data-label="results_bilayer"}](BilayerShield){width="8.5cm"} Conclusions \[sec:conclusions\] =============================== The knowledge of how to control and manipulate graphene systems opens up a number of novel research directions. A layered structure made from graphene could be used as an effective shield against the effects of a substrate beneath. By patterning the graphene into disks and ribbons, one could create tunable filters, in which the disks and nanoribbons shield the substrate while the void regions transmit its effects. We have shown that, in some situations, in an atom-graphene-substrate system the Casimir-Polder potential is dominated by the graphene potential. For graphene-substrate distances larger than 4 $\mu$m we verified that, for all studied systems, one or more graphene membranes shield the effect of electromagnetic vacuum fluctuations emanating from a substrate. The optical absorption of graphene is dominated by intraband transitions in the far-infrared spectral range and by interband transitions from mid-infrared to ultraviolet [@SolidStateCom152_1341_2012]. Coupling graphene to different atomic states leads to different coupling strengths, as we verified from the results obtained for ground and highly excited states. Each state has different transition frequencies which influence the strength of the atom-graphene coupling. These results show that shielding the Casimir-Polder forces is a rather delicate issue. It has been proposed that, since hydrogen-switchable mirrors are shiny metals which become optically transparent upon exposure to hydrogen, the Casimir force between them should be stronger in air than in hydrogen. 
However, it was shown in Ref. [@PNAS101_12_2004] that this is not the case, the reason being that, although the mirrors are indeed shiny metals in air, this change in optical properties only affects the optical range of frequencies. In order to have an effective change in the Casimir interaction one would need a mirror that is strongly reflecting at all frequencies [@NJOP8_235_2006]. A similar situation is encountered with graphene. The results for patterned graphene in Ref. [@PRL108_047401_2012] show that it is possible to create a system that is highly absorbing in a small frequency band, so as to tune out the resonant part of the Casimir-Polder interaction for an atom in an excited state. However, in order to shield the non-resonant part, one would need a material that absorbs over a much broader band. Several factors could still be included to make our model more realistic. Amongst them are finite temperature, corrugation of the freestanding graphene sample and the presence of impurities. However, for clean enough samples, these factors can normally be treated as perturbations that do not change the essentials of the Dirac model [@PRB80_245406_2009]. We would like to acknowledge fruitful discussions with E.A. Hinds and S.Y. Buhmann. SR is supported by the PhD grant SFRH/BD/62377/2009 from FCT, co-financed by FSE, POPH/QREN and EU.
--- abstract: 'We have studied the diffusion inside the silica network of sodium atoms initially located outside the surfaces of an amorphous silica film. We have focused our attention on structural and dynamical quantities, and we have found that the local environment of the sodium atoms is close to the local environment of the sodium atoms inside bulk sodo-silicate glasses obtained by quench. This is in agreement with recent experimental results.' author: - 'Michaël Rarivomanantsoa$^{a}$, Philippe Jund$^b$ and Rémi Jullien$^c$' title: 'Sodium diffusion through amorphous silica surfaces: A molecular dynamics study' --- Introduction ============ Many characteristics of materials such as mechanical resistance, adsorption, corrosion or surface diffusion depend on the physico-chemical properties of the surface. Thus, the interactions of the surfaces with their physico-chemical environment are very important, in particular for amorphous materials, which are of great interest for a wide range of industrial and technological applications (optical fiber coating, catalysis, chromatography or microelectronics). Therefore a great number of studies have, for example, focused on the interactions between amorphous silica surfaces and water, experimentally [@exp1] and by molecular dynamics simulations [@md1; @md12; @bakaev; @md10; @md11; @md13]. On the other hand, sodium silicate glasses are of great interest due to their presence in most commercial glasses and geological magmas. They are also often used as simple models for a great number of silicate glasses with more complicated compositions. The influence of sodium atoms on the amorphous silica network is the subject of numerous experimental studies: Raman spectroscopy [@brawer; @millan], IR [@brawer; @wong], XPS [@bruckner; @sprenger] and NMR [@sprenger; @silver], from which we have information about neighboring distances, bond angle distributions or concentrations of so-called Q$^{n}$ tetrahedra. 
In order to improve the insight into the sodium silicate glass structure and to obtain a good understanding of the role of the modifying Na$^+$ cations, Greaves [[*et al.* ]{}]{}have used promising new investigation techniques like EXAFS and MAS NMR [@baker; @greaves]. Despite all these efforts, the structure of sodo-silicate glasses is still a subject of debate. Another means of obtaining information about this structure is provided by simulations, either [*ab initio*]{} [@ispas] or classical [@soules; @vessal; @smith; @oviedo; @horbach; @jund] molecular dynamics (MD). In the present work, we are using classical MD simulations but, in contrast to previous simulations, the sodium atoms are not placed beforehand inside the amorphous silica sample. Recent experimental studies of the diffusion of Na atoms initially placed at the surface of amorphous silica, using EXAFS spectroscopy [@mazzara], showed that the Na atoms diffuse inside the vitreous silica and that, once inside the amorphous silica network, the local environment of the Na atoms is characterized by a Na - O distance $d_{{\rm Na-O}} = 2.3$ Å and by a Na - Si distance $d_{{\rm Na-Si}} = 3.8$ Å. These values are close to the distances characterizing the local environment of Na atoms in sodium silicate glasses obtained by quench. In this study we have used classical MD simulations in order to reproduce the diffusion of sodium atoms inside a silica matrix and to check that the local environment of the sodium atoms is close to what is found for quenched sodo-silicate glasses. The sodium atoms have been inserted at the surface of thin amorphous silica films in the form of Na$_2$O groups in order to respect the charge neutrality. 
Computational method ==================== To simulate the interactions between the different atoms, we use a generalized version [@kramer] of the so-called BKS potential [@bks] where the functional form of the potential remains unchanged: $${\cal{\phi}}\left(\left|\vec{r}_j-\vec{r}_i\right|\right) = \frac{q_iq_j}{\left|\vec{r}_j-\vec{r}_i\right|} -A_{ij}\exp\left(-B_{ij}\left|\vec{r}_j-\vec{r}_i\right|\right) -\frac{C_{ij}}{\left|\vec{r}_j-\vec{r}_i\right|^6}.$$ The potential parameters $A_{ij}$, $B_{ij}$, $C_{ij}$, $q_i$ and $q_j$ involving the silicon and oxygen atoms (describing the interactions inside the amorphous silica network) are taken from van Beest [[*et al.* ]{}]{}[@bks] and remain unchanged (in particular the partial charges q$_{{\rm Si}} = 2.4\rm e$ and q$_{{\rm O}} = -1.2$e are not modified). The new parameters, devoted to describing the interactions between the sodium atoms and the silica network, are given by Kramer [[*et al.* ]{}]{}[@kramer] and are adjusted to [*ab initio*]{} calculations of zeolites, except the partial charge of the sodium atoms, whose value q$_{{\rm Na}} = 0.6\rm e$ is chosen in order to respect the system electroneutrality. However, this sodium partial charge does not reproduce the short-range forces correctly and, to this end, Horbach [[*et al.* ]{}]{}[@horbach] have proposed to vary the charge q$_{{\rm Na}}$ as follows: $$\begin{aligned} q_{{\rm Na}}(r_{ij})&=& \left\{ \begin{array}{ll} 0.6\left(1+\ln\left[C\left( r_c-r_{ij}\right)^2+1\right]\right) & r_{ij} < r_c\\ 0.6 & r_{ij} \geqslant r_c \end{array} \right.\end{aligned}$$ where $r_{ij}$ is the distance between the particles $i$ and $j$. The parameters $C$ and $r_c$ are adjusted to reproduce the experimental structure factor of Na$_2$Si$_2$O$_5$ (NS2) and their values are given in Ref. [@horbach]. It is important to note that, with this model of the sodium charge, the system electroneutrality is respected at large distances (in fact for distances $r \geqslant r_c$). 
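The distance-dependent charge can be sketched directly. The fitted values of $C$ and $r_c$ are given in Ref. [@horbach] and are not reproduced here, so the numbers used in the check below are illustrative placeholders only:

```python
import math

def q_na(r, r_c, c_par):
    """Distance-dependent Na partial charge (in units of e) of the
    modified BKS model; r is in the same length unit as r_c."""
    if r >= r_c:
        return 0.6
    return 0.6 * (1.0 + math.log(c_par * (r_c - r) ** 2 + 1.0))
```

The function is continuous at $r=r_c$ (the logarithm vanishes there) and enhances the charge at short range, which restores the short-range forces while keeping electroneutrality for $r \geqslant r_c$.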
Next we assume that the modified BKS potential describes reasonably well the system studied here, for which the sodium atoms are initially located outside the amorphous silica sample. In addition, other simulations have shown that this interatomic potential is suitable for various compositions, in particular for NS2, NS3 (Na$_2$Si$_3$O$_7$) [@horbach] and NS4 (Na$_2$Si$_4$O$_9$) [@jund], and we assume it is adapted to model any concentration of modifying Na$^+$ cations inside sodo-silicate glasses. Our aim here is to obtain a sodo-silicate glass by deposition of sodium atoms at the amorphous silica surface, as was done experimentally [@mazzara]. Using the [*modus operandi*]{} described in a previous study [@mr], we have generated Amorphous Silica Films (ASF), each containing two free surfaces perpendicular to the $z$-direction. These samples have been made by breaking the periodic boundary conditions along the $z$-direction, normal to the surface, thus creating two free surfaces located at $L/2$ and $-L/2$ with $L=35.8$ Å. In order to evaluate the Coulomb interactions, we used a two-dimensional technique based on a modified Ewald summation to take into account the loss of periodicity in the $z$-direction. For further technical details see Ref. [@mr]. Then, instead of initially positioning the sodium atoms inside the silica matrix, as was done before [@soules; @huang; @smith; @jund; @oviedo], we have deposited 50 Na$_2$O groups inside two layers located at a distance of 4 Å from each free surface, as depicted in Fig. \[figure1\]. Within the layers, the Na$_2$O groups are assumed to be linear, with $d_{{\rm Na-O}}=2$ Å [@elliott], and arranged on a pseudoperiodic lattice represented in the zoom of Fig. \[figure1\]. Hence the system is made of 100 Na$_2$O groups for 1000 SiO$_2$ molecules, corresponding to a sodo-silicate glass of composition NS10 (Na$_2$Si$_{10}$O$_{21}$), and contains 3300 particles. 
Since our goal is to study the diffusion of the sodium atoms placed at the amorphous silica surfaces, we fixed the initial temperature of the whole system at 2000 K. Indeed, the simulations of Smith [[*et al.* ]{}]{}[@smith] and Oviedo [[*et al.* ]{}]{}[@oviedo] of sodo-silicate glasses have shown that there is no appreciable sodium diffusion for temperatures below $\approx$ 1500 K. On the other hand, it is worth noticing that Sunyer [[*et al.* ]{}]{}[@sunyer] have found a simulated glass transition temperature $T_g \simeq 2400$ K for a NS4 glass. Therefore we thermalized the sodium layers at 2000 K and placed them at the ASF surfaces, also thermalized at 2000 K. We have used repulsive ’walls’ at $z=-30$ Å and $z=+30$ Å in order to prevent some Na$_2$O groups from evaporating along the $z$ direction, where the periodic boundary conditions are no longer fulfilled. The energy of the repulsive ’walls’ has an exponential shape, $E=E_0\exp\left [-\left ( z_w-z\right )/\sigma \right ]$, where $z_w$ is the wall position, $\sigma=0.1$ Å the distance over which the repulsion energy is diminished by a factor $e$, and $E_0=10$ eV the repulsion energy at the plane $z=z_w$. The value of 30 Å for $z_w$ was chosen in order to place the repulsive walls at a reasonable distance from the Na$_2$O layers and not too far from the ASF surfaces. We have then performed classical MD simulations, with a timestep $\Delta t=0.7$ fs, using ten statistically independent samples. Since the interactions between the surfaces and the Na$_2$O layers are relatively weak, some Na$_2$O groups may evaporate just before being reflected toward the thin films by the repulsive walls. During this stage, the system temperature increases to approximately 2800 K.
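The wall term is simple enough to write down explicitly. The sketch below uses the values quoted in the text ($E_0=10$ eV, $\sigma=0.1$ Å, $z_w=30$ Å); treating the wall at $z=-30$ Å as the mirror image of the one at $z=+30$ Å through $|z|$ is our shorthand, not something stated in the paper:

```python
import math

def wall_energy(z, z_w=30.0, sigma=0.1, E0=10.0):
    """Repulsive-wall energy E = E0 * exp(-(z_w - z)/sigma) in eV for a
    particle at height z (Angstrom) below the wall at z = z_w.  The
    wall at z = -z_w is handled by the |z| mirror symmetry."""
    return E0 * math.exp(-(z_w - abs(z)) / sigma)
```

With $\sigma=0.1$ Å the repulsion drops by a factor $e$ for every 0.1 Å of distance from the wall, so the walls are essentially invisible a few Å away.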
As described by Athanasopoulos [[*et al.* ]{}]{}[@athanasopoulos], this temperature rise is likely due to the adatoms approaching the substrate surface and dropping into the potential well of the substrate atoms, thus increasing their kinetic energy. This kinetic energy is then transmitted to the substrate upon adsorption. Contrary to Athanasopoulos [[*et al.* ]{}]{} [@athanasopoulos] and Webb [[*et al.* ]{}]{}[@webb], we have not dissipated this excess energy with a thermal sink region; in fact, we have not controlled the temperature at all. With this setup we are able to perform MD simulations of the diffusion of the sodium atoms deposited [*via*]{} Na$_2$O groups at the ASF surfaces. In the following section, we will present the structural and dynamical characteristics of the sodo-silicate film (SSF) obtained in this way. Results ======= The observation of the sodium diffusion in the silica network is an important goal of this molecular dynamics simulation. This can be carried out by analyzing the behavior of the density profiles along the direction normal to the surface (the $z$-direction). The density profiles represent the mass densities within slices, of thickness $\Delta z=0.224$ Å, parallel to the surfaces [@mr]. The time evolution of the sodium density profile is represented in Fig. \[figure2\](a) and \[figure2\](b) and the time evolution of the total density profile in Fig. \[figure2\](c) and \[figure2\](d). The sodium atoms enter the silica network in the time range 14 - 70 ps (Fig. \[figure2\](a)) and during this time interval the sodium profile exhibits a diffusion front. After 42 ps, the sodium atoms are observed in the entire system, illustrating that they have diffused within the whole ASF, as observed experimentally [@mazzara].
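A density profile of this kind amounts to a weighted histogram. A minimal NumPy sketch is given below; the slice thickness is the $\Delta z = 0.224$ Å of the text, while the lateral cross-section `area` and the unit-conversion constant are our additions, needed to turn binned masses in amu into g cm$^{-3}$:

```python
import numpy as np

AMU_PER_A3_TO_G_PER_CM3 = 1.66054  # 1 amu/A^3 = 1.66054 g/cm^3

def density_profile(z, masses, area, dz=0.224):
    """Mass-density profile along z: bin the particle masses (amu) into
    slices of thickness dz (Angstrom) and divide by the slice volume
    area*dz, where `area` is the lateral cross-section of the film."""
    edges = np.arange(z.min(), z.max() + dz, dz)
    hist, edges = np.histogram(z, bins=edges, weights=masses)
    centers = 0.5 * (edges[:-1] + edges[1:])
    rho = hist / (area * dz) * AMU_PER_A3_TO_G_PER_CM3
    return centers, rho
```

Summing `rho * area * dz` over all slices recovers the total mass, a quick consistency check on the binning.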
This result contrasts with that of MD simulations of the diffusion of platinum (electrically neutral) [@athanasopoulos] and potassium [@garofalini2; @zirl1; @zirl2], but is similar to that of the diffusion of lithium [@garofalini2; @zirl1; @zirl2] at the surface of amorphous silica. Moreover, the sodium profiles (Fig. \[figure2\](a) and (b)) show outward rearrangement at the surface, as already pointed out by Zirl [[*et al.* ]{}]{} [@zirl] for glassy sodium aluminosilicate surfaces. This agrees with ion scattering spectroscopy [@kelso] and simulations of sodium silicate glasses [@garofalini], in which high concentrations of sodium ions are found at the surface. For larger times ($t\geqslant 98$ ps, represented in Fig. \[figure2\](b)), the sodium density seems to stabilize around a mean value of 0.3 g.cm$^{-3}$, which corresponds to $\sim 2$ Na atoms per slice ($\Delta z = 0.224$ Å). The surface of the system is identified as the large linear region in which the total density profile decreases. At short times (Fig. \[figure2\](c)) the surface is located approximately in the range 15 - 20 Å and at larger times ($t\geqslant 98$ ps, Fig. \[figure2\](d)) the surface lies in the range 7 - 25 Å. Hence, the introduction of the sodium atoms in the ASF is likely to increase the surface thickness of the system. On the other hand, as observed for the adsorption of platinum atoms on the surface of amorphous silica [@athanasopoulos], the surface position does not seem to vary with time. We have also calculated the silicon and oxygen density profiles, but since they behave like the total density profile, they are not represented here. The non-bridging oxygen (NBO) density profile is not shown in Fig. \[figure2\] either, since it is close to the Na density profile. In the time range 98 - 210 ps, the total and sodium density profiles do not evolve with time.
In particular, in the region $z \lesssim 5$ Å (Fig. \[figure2\](d)), the total density fluctuates around a mean value of 2.6 g.cm$^{-3}$ and the atom composition is approximately 370 Si, 760 O and 40 Na, which is usually written Na$_{2}$O(SiO$_{2}$)$_{18.5}$ or ’NS18.5’ (note that the above-mentioned density is significantly larger than the one expected for a “real” NS18.5 glass ($\approx$ 2.3 g.cm$^{-3}$)). Therefore it seems reasonable to consider that the system is in a [*quasi permanent*]{} regime after 210 ps, and the forthcoming quantities, structural and dynamical, are calculated for the following 70 ps. It is worth remembering that all the quantities are determined for a system containing 3300 particles and averaged over 10 statistically independent samples. As usual when studying the structural and dynamical characteristics of free surfaces, the system is divided into several subsystems: here six slices of equal thickness 10 Å. In order to increase the statistics, the contributions to the physical quantities of the negative and positive slices are averaged. Hence, the system is actually subdivided into three parts, named, from the center to the surface, the interior, intermediate and external regions. In order to improve the characterization of the local environment of the atoms, we have calculated the radial pair distribution functions for all the pairs $(i,j) \in {\rm [Si,Na,O]}^{2}$ within the three subsystems defined previously. The Na - Na, Na - O and Si - Na pair distribution functions are represented in Fig. \[figure3\](a), (b) and (c), respectively, and characterize the local environment of the sodium atoms for $t\geqslant210$ ps. At the surfaces of the system the distances are $d_{{\rm Na-O}} \simeq 2.2$ Å, $d_{{\rm Si-Na}} \simeq 3.5$ Å and, due to a lack of statistics, the Na - Na distance lies in the interval $3.3 \lesssim d_{{\rm Na-Na}} \lesssim 3.9$ Å.
While slightly smaller, these distances are close to the experimental values found by Mazzara [[*et al.* ]{}]{}($d_{{\rm Na-O}}=2.3$ Å and $d_{{\rm Si-Na}}=3.8$ Å) [@mazzara]. Moreover, the values found in the present work agree with the distances, calculated by MD, characterizing the sodium environment in quenched sodo-silicate glasses of several compositions (NS2 [@baker; @smith], NS3 [@horbach] and NS4 [@ispas]). Therefore, as observed experimentally by Mazzara [[*et al.* ]{}]{}[@mazzara], once within the amorphous silica network, the sodium atoms have the same local environment as in a sodo-silicate glass obtained by quenching. This fact is confirmed by the distributions of the $\widehat{{\rm NaONa}}$ and $\widehat{{\rm SiONa}}$ bond angles (not shown), which are close to those determined in quenched sodo-silicate glasses [@oviedo; @sunyer]. In particular, the most probable angles are 90[$^{\rm o}$]{} for $\widehat{{\rm NaONa}}$ and 105[$^{\rm o}$]{} for $\widehat{{\rm SiONa}}$, in agreement with the values found by Oviedo [[*et al.* ]{}]{}[@oviedo] and Sunyer [[*et al.* ]{}]{} [@sunyer].\ The interatomic distances corresponding to the amorphous silica network structure are $d_{{\rm Si-Si}} \simeq$ 3.1 Å, $d_{{\rm Si-O}} \simeq$ 1.6 Å and $d_{{\rm O-O}} \simeq$ 2.6 Å. On the other hand, the most probable values exhibited by the distributions of the $\widehat{{\rm OSiO}}$ and $\widehat{{\rm SiOSi}}$ bond angles (not shown) are respectively $\sim 109$[$^{\rm o}$]{} and $\sim 145$[$^{\rm o}$]{}. These distances and bond angle distributions are very similar to those found experimentally or by MD simulations in bulk amorphous silica. The shoulder exhibited at 2.5 Å by the distribution $g_{{\rm Si-Si}}$ at amorphous silica surfaces [@mr], interpreted as the signature of the twofold rings, is not present in the SSF.
The absence of the shoulders at 80 and 100[$^{\rm o}$]{} observed at ASF surfaces [@mr; @md1; @roder] in the $\widehat{{\rm OSiO}}$ and $\widehat{{\rm SiOSi}}$ bond angle distributions confirms that the small-sized rings have disappeared, as suggested by the radial pair distributions. This result is expected for R$_2$O (with R an element of column I) adsorption on glassy silicate surfaces and observed [*via*]{} MD simulations for R = H, Li, Na and K [@md1; @md12; @md11; @garofalini; @garofalini2; @zirl1; @zirl2]. Note that this also occurs for platinum adsorption on sodium aluminosilicate surfaces [@athanasopoulos; @webb] and for H$_2$ adsorption on amorphous silica surfaces [@lopez]. Small rings like two- and threefold rings are known to be some of the most reactive sites on the surfaces of silicate glasses, since they include strained siloxane bonds that react with water or other adsorbates [@brinker; @bunker]. Since the sodium introduction weakens the amorphous silica network, one important question is to measure the proportion of non-bridging oxygens (NBOs). To this end, the oxygen coordination with silicon was calculated. As expected, there is a significant proportion of defective oxygens due to the presence of modifying cations Na$^{+}$ (8.9 % for the SSF, to be compared with 1 % for the ASF [@mr]), illustrated by the similarities between the NBO and Na density profiles mentioned previously. Moreover, the NBO concentration is consistent with the concentration of 11 % of NBO calculated by MD simulations in a NS9 system [@huang]. The latter concentration is higher than the one found in this study because of the greater proportion of Na atoms in NS9 compared to 100 Na$_2$O for 1000 SiO$_2$ in the present SSF.
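The ’NSx’ shorthand used throughout (NS2, NS4, NS9, NS10, NS18.5, ...) is simply the ratio of SiO$_2$ to Na$_2$O units, and the stoichiometry fixes the oxygen count. Two small helper functions (our own convenience, not from the paper) make the bookkeeping explicit:

```python
def ns_index(n_si, n_na):
    """x in the shorthand 'NSx' = Na2O(SiO2)_x: the number of SiO2
    units per Na2O unit, i.e. n_si / (n_na / 2)."""
    return n_si / (n_na / 2.0)

def nominal_oxygens(n_si, n_na):
    """Oxygen count implied by the stoichiometry:
    2 per SiO2 unit plus 1 per Na2O unit (n_na / 2)."""
    return 2 * n_si + n_na // 2
```

For the full film (1000 SiO$_2$ and 100 Na$_2$O, i.e. 200 Na) this gives $x = 10$ and 2100 oxygens, hence the 3300 particles quoted earlier; for the interior region (370 Si, 40 Na) it gives $x = 18.5$ and 760 oxygens, matching the counted composition.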
At the surface ($z \geqslant 20$ Å), the BOs disappear and, correspondingly, the defective oxygens become dominant, as observed for amorphous silica (36.2 % of NBOs at the SSF surfaces, 15 % at the ASF surfaces [@mr] and 10 % at the nanoporous silica surfaces [@beckers]). When analyzing the silicon coordination with oxygen, we can state that the silicon atoms remain tetrahedrally coordinated, revealing that the sodium introduction does not modify the silicon environment. The modifications created by the Na$^{+}$ cations are not able to break the SiO$_4$ tetrahedra, which are very stable in the amorphous silica network. This agrees with the usual models for the sodo-silicate glasses, [*i.e.*]{} the CRN model of Zachariasen [@zachariasen] and the MRN model of Greaves [@greaves]. One possible way to analyze the silicon environment more precisely consists of calculating the proportion of Q$^n$ tetrahedra. A Q$^n$ structure is a SiO$_4$ tetrahedron containing $n$ BOs. The Q$^n$ proportion is often determined by NMR experiments [@chuang; @maekawa], XPS [@sprenger; @emerson] or by molecular dynamics simulations [@smith], in order to describe the local environment around the silicon atoms. In the SSF, the Q$^2$ and Q$^3$ concentrations (6.8 % and 25.4 % respectively) are weak compared to those determined for NS2, NS3 and NS4 glasses. This result is consistent, since the NS2, NS3 and NS4 glasses contain more Na$^+$ cations than the SSF studied in this work. At the surface, the Q$^3$ proportion is 45.9 %, which is comparable to the proportions obtained by MD (using the BKS potential) in NS2, NS3 and NS4 glasses and to the experimental proportions in NS3 [@sprenger; @silver] and NS4 [@maekawa; @emerson] glasses. Moreover, it is worth noting that some Q$^1$ appear at the surface. In fact, these structural entities cannot build a network, but it is conceivable to find such defects forming ’dead ends’ at the surface.
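The Q$^n$ census just described can be computed directly from a Si-O bond list: an oxygen shared by two silicons is bridging (BO), one bonded to a single silicon is an NBO, and $n$ counts the BOs around each tetrahedron. A small sketch follows (Python; the bond list is assumed to come from a neighbor search with the usual Si-O cutoff, which is not shown here):

```python
from collections import Counter, defaultdict

def qn_distribution(si_o_bonds):
    """Q^n census from a list of (Si, O) bond pairs: an oxygen bonded
    to two silicons is bridging (BO), to one silicon only is an NBO,
    and Q^n counts the BOs around each SiO4 tetrahedron."""
    o_coordination = Counter(o for _, o in si_o_bonds)
    neighbors = defaultdict(list)
    for si, o in si_o_bonds:
        neighbors[si].append(o)
    qn = Counter()
    for si, oxygens in neighbors.items():
        qn[sum(1 for o in oxygens if o_coordination[o] == 2)] += 1
    return qn
```

Two corner-sharing tetrahedra whose remaining oxygens are all NBOs, for instance, come out as two Q$^1$ units.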
A direct method to confirm the previous assumption concerning the disappearance of small rings consists of analyzing the ring size distribution directly. A ring is a particularly interesting structure because it can be detected using infrared and Raman spectroscopy. In particular the highly strained twofold rings result in infrared-active stretching modes [@bunker; @morrow] at 888 and 908 cm$^{-1}$. In order to determine the probability $P_n$ for a given Si atom, whose coordinate along the normal direction to the surface is in one of the three regions, to be a member of an $n$-fold ring, we have used the algorithm described in [@horbach2]. A ring is defined as the shortest path, made of Si - O bonds, between two oxygen atoms that are first neighbors of a given silicon atom. The ring size is given by the number of silicon atoms contained in the ring.\ The probability $P_n$ is reported in Fig. \[figure4\](a) for $n=2,\ldots,9$ and for the three different regions. For comparison, we have also reported $P_n$ determined at the ASF surface and interior [@mr]. In order to improve the medium range order characterization, we have investigated the orientation of the rings by computing $\left<\cos^2\theta\right>$ for a given ring size, within the three regions of the system, where $\theta$ is the angle between the normal of the surface and the normal of the ring [@mr]. The results are reported in Fig. \[figure4\](b) for the three regions and for $n=2,\ldots,9$, together with the results obtained for the ASF surface (dashed line) and interior (dotted line). For the three regions, the distributions of the sodo-silicate film (Fig. \[figure4\](a)) are closer to the ASF interior than to the ASF surface distributions. In particular, the probability for a silicon atom to belong to a small-sized ring (2-, 3- or 4-fold) is low. This confirms the previous conclusion about the disappearance of the small strained rings, which react with the sodium ions during their adsorption at the amorphous silica surface.
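The orientation measure $\left<\cos^2\theta\right>$ can be sketched for a single ring as follows. Taking the ring normal as the best-fit-plane normal of the member atoms (the smallest-variance direction from an SVD) is our assumption, since the text does not spell out how the ring normal is computed. An isotropic distribution of ring orientations averages to $\left<\cos^2\theta\right>=1/3$, while rings standing perpendicular to the surface give values near zero:

```python
import numpy as np

def ring_cos2_theta(ring_xyz, surface_normal=(0.0, 0.0, 1.0)):
    """cos^2 of the angle between the surface normal and the ring
    normal, the latter estimated as the best-fit-plane normal of the
    member atoms (last right-singular vector of the centered
    coordinates, i.e. the direction of least variance)."""
    P = np.asarray(ring_xyz, dtype=float)
    P = P - P.mean(axis=0)
    normal = np.linalg.svd(P)[2][-1]
    n_s = np.asarray(surface_normal, dtype=float)
    n_s = n_s / np.linalg.norm(n_s)
    c = float(normal @ n_s)
    return c * c
```

A ring lying flat in the surface plane gives $\cos^2\theta = 1$; one standing upright gives $\cos^2\theta = 0$.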
As observed for the ring size distributions, the orientation of the rings (Fig. \[figure4\](b)) in the three regions of the sodo-silicate film is similar to that obtained in the interior of the pure silica films [@mr], which means that even at the surface the rings have an isotropic orientation with respect to the surface. This is related to the disappearance of the small-sized rings at the surface. Nevertheless, it is worth noting that in the external region the probability for a Si atom to belong to a 5-fold ring is greater than that for a 6-fold ring (Fig. \[figure4\](a)), as observed at the ASF [*surfaces*]{} [@mr]. In a sense, the 5-fold rings are not affected by the sodium adsorption, in contrast to the small-sized rings, which disappear with the introduction of the sodium atoms. Also in the external region, the 2- and 3-fold rings (those that are still present) are oriented perpendicular to the surfaces (Fig. \[figure4\](b)), as observed at the ASF surfaces [@mr; @ceresoli]. We have also analyzed the dynamics of the Na$^+$ cations and compared the results obtained in the present ’NS10’ system with those obtained in a NS4 glass. To this purpose, the mean square displacements (MSD) $\left< r^2(t)\right > = \left<\left|r_i(t)- r_i(0)\right|^2\right>$ have been calculated for each species composing the SSF. Fig. \[figure5\] represents the MSD for the BO, NBO, Na and Si atoms within the time frame 0.7 fs - 70 ps after the first 210 ps, together with the MSD calculated by Sunyer [[*et al.* ]{}]{}[@sunyer] for NS4 ($t \geqslant 1.5$ ps). At $\sim$ 2800 K, the MSD of each species exhibits three regimes. At short times, we observe the so-called ballistic regime where $\left< r^2(t)\right > \sim t^2$. In this regime, the differences between the species are small. At long times, we recognize the so-called diffusive regime, in which $\left< r^2(t)\right> \sim t$ and in which the Si atoms are the ones that diffuse the least.
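The MSD defined above is straightforward to evaluate on an unwrapped trajectory; it grows as $t^2$ in the ballistic regime and as $t$ in the diffusive regime, where the diffusion constant follows from $D = \lim_{t\to\infty}\left<r^2(t)\right>/6t$. A minimal NumPy sketch for a single trajectory (the paper additionally averages over ten independent samples):

```python
import numpy as np

def mean_square_displacement(traj):
    """<|r_i(t) - r_i(0)|^2> averaged over atoms, for an unwrapped
    trajectory of shape (n_frames, n_atoms, 3).  Returns one MSD
    value per frame."""
    disp = traj - traj[0]            # displacement relative to t = 0
    return (disp ** 2).sum(axis=2).mean(axis=1)
```

In practice one computes this per species (BO, NBO, Na, Si) by passing the corresponding subset of atom columns.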
Similarly to the pure ASF [@mr], the NBOs diffuse more than the BOs. The origin of this feature lies in the fact that the NBOs form only one covalent bond with the Si atoms instead of two for the BOs. The Na atoms diffuse much more (one order of magnitude) than the other species. Between these two regimes, the MSD exhibits the so-called $\beta$-relaxation, clearly observable for the oxygen and silicon atoms. The phenomenon is not as clear for the sodium atoms, but between $\sim 10^{-1}$ and $\sim 1$ ps the Na MSD does not scale as $t$ (sub-diffusion), and it is reasonable to consider that the sodium atoms are subject to the so-called cage effect characterizing the $\beta$-relaxation.\ These results are consistent with the observations made for different sodo-silicate systems [@smith; @oviedo; @horbach] for temperatures close to 2800 K. More precisely, the mean square displacements in the SSF are close to those calculated by Sunyer [[*et al.* ]{}]{} [@sunyer] for a quenched NS4 glass at 3000 K. This is somewhat surprising, since in a NS10 system one expects smaller diffusion constants for all the species compared to the diffusion constants obtained in a NS4 system. The preparation of the present system ([*via*]{} diffusion of the cations through the surface) could explain this observation. Conclusion ========== This work was motivated by a recent experimental study [@mazzara] of the local environment of diffusing sodium atoms deposited at the surfaces of thin amorphous silica films. We have reproduced this experiment numerically by classical molecular dynamics simulations after depositing Na$_2$O groups at the surfaces of amorphous silica thin films. We have quantitatively analyzed the temporal evolution of the sodium and total densities and we have checked that the sodium atoms diffuse inside the amorphous silica network. After a given time, the density profiles no longer evolve, and we have calculated the structure of the resulting sodo-silicate glass.
Our attention has been focused on the local environment of the sodium atoms. Once inside the thin film, they are preferentially bound to NBOs, as predicted by the MRN model of Greaves [@greaves]. The distances and bond angle distributions show that the sodium atoms have a local environment corresponding to that of the sodium atoms in sodo-silicate glasses obtained by quenching, as observed by Mazzara [[*et al.* ]{}]{}[@mazzara]. Moreover, the distances $d_{{\rm Na-O}}=2.2$ Å and $d_{{\rm Si-Na}}\simeq 3.5$ Å are close to the experimental values. Concerning the amorphous silica network, we have observed [*via*]{} the corresponding distance and bond angle distributions that its short-range order is not modified by the introduction of the Na$^+$ cations. We have also calculated the ring size distributions and the orientation of the rings, which show that the introduction of the sodium atoms has an influence on the silica network, but on larger length scales than those of the local environment. The ring size distributions and the orientations are close to the results obtained in the bulk of thin amorphous silica films. This is due to the decrease of the proportion of small rings (particularly two- and threefold), which interact with the Na$_2$O groups since they are known to be highly reactive sites for the adsorption of species on amorphous silica surfaces.\ Finally, concerning the dynamics of the different atoms, we find results similar to those obtained in a NS4 glass at a slightly higher temperature.\ [**Acknowledgments**]{} Calculations have been performed partly at the “Centre Informatique National de l’Enseignement Supérieur” in Montpellier. [99]{} D. W. Sindorf and G. E. Maciel, [[J. Am. Chem. Soc. ]{}]{}[**105**]{}, 1487 (1983); D. M. Krol and J. G. van Lierop, [[J. Non-Cryst. Solids ]{}]{}[**63**]{}, 131 (1984); D. M. Krol and J. G. van Lierop, [[J. Non-Cryst. Solids ]{}]{}[**68**]{}, 163 (1984); C. J. Brinker, E. P. Roth, G. W.
Scherer and D. R. Tallant, [[J. Non-Cryst. Solids ]{}]{}[**71**]{}, 171 (1985); I.-S. Chuang, D. R. Kinney, C. E. Bronnimann, R. C. Zeigler and G. E. Maciel, [[J. Phys. Chem. ]{}]{}[**96**]{}, 4027 (1992); L. Dubois and B. R. Zegarski, [[J. Phys. Chem. ]{}]{}[**97**]{}, 1665 (1993); D. R. Kinney, I.-S. Chuang and G. E. Maciel, [[J. Am. Chem. Soc. ]{}]{}[**115**]{}, 6786 (1993); I.-S. Chuang, D. R. Kinney and G. E. Maciel, [[J. Am. Chem. Soc. ]{}]{}[**115**]{}, 8696 (1993); A. Grabbe, T. A. Michalske and W. L. Smith, , 4648 (1995) S. H. Garofalini, , 2069 (1983); S. M. Levine and S. H. Garofalini, 2997 (1987) B. P. Feuston and S. H. Garofalini, [J. App. Phys. ]{}[**68**]{}, 4830 (1990) V. A. Bakaev and W. A. Steele, , 9803 (1999) B. P. Feuston and S. H. Garofalini, , 564 (1989) S. H. Garofalini, [[J. Non-Cryst. Solids ]{}]{}[**120**]{}, 1 (1990) M. M. Branda and N. J. Castellani, [Surf. Sci. ]{}[**393**]{}, 171 (1997) N. Lopez, M. Vitiello, F. Illas and G. Pacchioni, [[J. Non-Cryst. Solids ]{}]{}[**271**]{}, 56 (2000) S. A. Brawer and W. B. White, , 242 (1975) P. F. Mc Millan and G. H. Wolf, Rev. Mineral. [**32**]{}, 247 (1995); N. Umekasi, N. Iwamoto, M. Tatsumisago and T. Minami, [[J. Non-Cryst. Solids ]{}]{}[**106**]{}, 77 (1988) J. Wong and C. A. Angell, Glass Structure by Spectroscopy, M. Dekker, New York (1976) R. Bruckner, H. U. Chun, H. Goretzki and M. Sammet, [[J. Non-Cryst. Solids ]{}]{}[**42**]{}, 49 (1980) D. Sprenger, H. Bach, W. Meisel and P. Gütlich, [[J. Non-Cryst. Solids ]{}]{}[**159**]{}, 187 (1993) A. H. Silver and P. J. Bray, , 984 (1958); J. F. Stebbins, Rev. Mineral. [**18**]{}, 405 (1988) G. J. Baker, G. N. Greaves, M. Surman and M. Oversluizen, Nucl. Instr. and Meth. in Phys. Res. B [**97**]{}, 375 (1995) G. N. Greaves, [[J. Non-Cryst. Solids ]{}]{}[**71**]{}, 203 (1985) S. Ispas, M. Benoit, P. Jund and R. Jullien, , 214206 (2001); T. Uchino and T. Yoko, [[J. Phys. Chem. B ]{}]{}[**102**]{}, 8372 (1998) T. F. Soules, , 4570 (1979) B. 
Vessal, M. Leslie and C. R. A. Catlow, Mol. Phys. [**3**]{}, 123 (1989); B. Vessal, A. Amini, D. Fincham and C. R. A. Catlow, [[Philos. Mag. B ]{}]{}[**60**]{}, 753 (1989); B. Vessal, G. N. Greaves, P. T. Marten, A. V. Chadwick, R. Mole and S. Houde-Walter, Nature [**356**]{}, 504 (1992) W. Smith, G. N. Greaves and M. J. Gillan, , 3091 (1995) J. Oviedo and J. F. Sanz, , 9047 (1998) J. Horbach, W. Kob and K. Binder, [[Philos. Mag. B ]{}]{}[**79**]{}, 1981 (1999) P. Jund, W. Kob and R. Jullien, , 134303 (2001) C. Mazzara, J. Jupille, A.-M. Flank, and P. Lagarde, [[J. Phys. Chem. B ]{}]{}[**104**]{}, 3438 (2000) G. J. Kramer, A. J. M. de Man and R. A. van Santen, [[J. Am. Chem. Soc. ]{}]{}[**64**]{}, 6435 (1991) B. W. H. van Beest, G. J. Kramer and R. A. van Santen, , 1955 (1990) C. Huang and A. N. Cormack, , 8180 (1990) M. Rarivomanantsoa, P. Jund and R. Jullien, J. Phys.: Condens. Matter [**13**]{}, 6707 (2001) S. D. Elliott and R. Ahlirchs, , 4267 (1998) E. Sunyer, P. Jund and R. Jullien, [unpublished.]{} D. C. Athanasopoulos and S. H. Garofalini, , 3775 (1992); D. C. Athanasopoulos and S. H. Garofalini, [Surf. Sci. ]{}[**273**]{}, 129 (1992) E. B. Webb and S. H. Garofalini, [Surf. Sci. ]{}[**319**]{}, 381 (1994) D. M. Zirl and S. H. Garofalini, J. Am. Ceram. Soc. [**75**]{}, 2353 (1992) J. Kelso, C. G. Pantano and S. H. Garofalini, [Surf. Sci. ]{}[**134**]{}, L543 (1983) S. H. Garofalini and S. M. Levine, [[J. Am. Ceram. Soc. ]{}]{}[**68**]{}, 376 (1985) S. H. Garofalini and D. M. Zirl, J. Vac. Sci. Technol. A [**6**]{}, 975 (1988) D. M. Zirl and S. H. Garofalini, Phys. Chem. Glasses [**30**]{}, 155 (1989) A. Roder, W. Kob and K. Binder, , 7602 (2001) D. M. Zirl and S. H. Garofalini, [[J. Non-Cryst. Solids ]{}]{}[**122**]{}, 111 (1990) C. J. Brinker, R. J. Kirkpatrick, D. R. Tallant and B. C. Bunker, [[J. Non-Cryst. Solids ]{}]{}[**99**]{}, 418 (1988); B. C. Bunker, D. M. Haaland, T. A. Michalske and W. L. Smith, [Surf. Sci. ]{}[**222**]{}, 95 (1989); J. 
Tossell, [[J. Non-Cryst. Solids ]{}]{}[**120**]{}, 13 (1990) B. C. Bunker, D. M. Haaland, K. J. Ward, T. A. Michalske, W. L. Smith, J. S. Binkley, C. F. Melius and C. A. Balfe, [Surf. Sci. ]{}[**210**]{}, 406 (1989) J. L. V. Beckers and S. W. de Leeuw, [[J. Non-Cryst. Solids ]{}]{}[**261**]{}, 87 (2000) W. H. Zachariasen, [[J. Am. Chem. Soc. ]{}]{}[**54**]{}, 3841 (1932) I.-S. Chuang and G. E. Maciel, [[J. Am. Chem. Soc. ]{}]{}[**118**]{}, 401 (1996) H. Maekawa, T. Maekawa, K. Kawamura and T. Yokokawa, [[J. Non-Cryst. Solids ]{}]{}[**127**]{}, 53 (1991) J. F. Emerson, P. E. Stallworth and P. J. Bray, [[J. Non-Cryst. Solids ]{}]{}[**113**]{}, 253 (1989) B. A. Morrow and A. Devi, Trans. Faraday Soc. [**68**]{}, 403 (1972); B. A. Morrow and I. A. Cody, [[J. Phys. Chem. ]{}]{}[**80**]{}, 1995 (1976) J. Horbach and W. Kob, , 3169 (1999) D. Ceresoli, M. Bernasconi, S. Iarlori, M. Parinello and E. Tosatti, , 3887 (2000)
--- abstract: 'The escape fraction, [$f_{\rm esc}$]{}, of ionizing photons from early galaxies is a crucial parameter for determining whether the observed galaxies at $z \geq 6$ are able to reionize the high-redshift intergalactic medium. Previous attempts to measure [$f_{\rm esc}$]{} have found a wide range of values, varying from less than 0.01 to nearly 1. Rather than finding a single value of [$f_{\rm esc}$]{}, we clarify through modeling how internal properties of galaxies affect [$f_{\rm esc}$]{} through the density and distribution of neutral hydrogen within the galaxy, along with the rate of ionizing photon production. We find that the escape fraction depends sensitively on the covering factor of clumps, along with the density of the clumped and interclump medium. One must therefore be cautious when dealing with an inhomogeneous medium. Fewer, high-density clumps lead to a greater escape fraction than more numerous low-density clumps. When more ionizing photons are produced in a starburst, [$f_{\rm esc}$]{} increases, as photons escape more readily from the gas layers. Large variations in the predicted escape fraction, caused by differences in the hydrogen distribution, may explain the large observed differences in [$f_{\rm esc}$]{} among galaxies. Values of [$f_{\rm esc}$]{} must also be consistent with the reionization history. High-mass galaxies alone are unable to reionize the universe, because [$f_{\rm esc}$]{} $> 1$ would be required. Small galaxies are needed to achieve reionization, with a greater mean escape fraction in the past.' author: - 'Elizabeth R. Fernandez, and J.
Michael Shull' title: The Effect of Galactic Properties on the Escape Fraction of Ionizing Photons --- INTRODUCTION {#sec:introduction} ============ Observations of the cosmic microwave background optical depth made with the [*[Wilkinson Microwave Anisotropy Probe (WMAP)]{}*]{} [@kogut/etal:2003; @spergel/etal:2003; @page/etal:2007; @spergel/etal:2007; @dunkley/etal:2008; @komatsu/etal:2008; @wmap7] suggest that the universe was reionized sometime in the interval $6<z<12$. Because massive stars are efficient producers of ultraviolet photons, they are the most likely candidates for driving the majority of reionization. However, in order for early star-forming galaxies to reionize the universe, their ionizing radiation must be able to escape from the halos, in which neutral hydrogen (H I) is the dominant source of Lyman continuum (LyC) opacity. The escape fraction, [$f_{\rm esc}$]{}, of ionizing photons is a key parameter for starburst galaxies at $z > 6$, which are believed to produce the bulk of the photons that reionize the universe [@robertson/etal:2010; @trenti/etal:2010; @Bouwens/etal:2010]. The predicted values of the escape fraction span a large range, $0.01 \lesssim f_{\rm esc} < 1$, derived from a variety of theoretical and observational studies of varying complexity. Various properties of the host galaxy, its stars, or its environment are thought to affect the number of ionizing photons that escape into the intergalactic medium (IGM). For example, @ricotti/shull:2000 studied [$f_{\rm esc}$]{} in spherical halos using a Strömgren approach. @wood/loeb:2000 assumed an isothermal, exponential disk galaxy and followed an ionization front through the galaxy using three-dimensional Monte Carlo radiative transfer. Both @wood/loeb:2000 and @ricotti/shull:2000 state that [$f_{\rm esc}$]{} varies greatly, from $<0.01$ to $1$, depending on galaxy mass, with larger galaxies giving smaller values of [$f_{\rm esc}$]{}.
A similar dependence on galaxy mass is also seen in the simulations of @yajima/etal:2010, because larger galaxies tend to have star formation buried within dense hydrogen clouds, while smaller galaxies often have clearer paths for escaping ionizing radiation. @gnedin/etal:2008, on the other hand, ran a high-resolution N-body simulation with adaptive-mesh refinement in a cosmological context. Contrary to @ricotti/shull:2000, @wood/loeb:2000 and @yajima/etal:2010, they state that lower-mass galaxies have significantly smaller [$f_{\rm esc}$]{}, as the result of a declining star formation rate. In addition, above a critical halo mass, [$f_{\rm esc}$]{} does not change much. The model of @gnedin/etal:2008 allowed the star formation rate to increase with galaxy mass faster than linearly. The larger galaxies also tended to have star formation occurring in the outskirts of the galaxy, which made it easier for ionizing photons to escape. Their model included a distribution of gas within the galaxy, which created free sight-lines out of the galaxy. @wise/cen:2008 used adaptive-mesh hydrodynamical simulations of dwarf galaxies. Even though their simulations covered a different mass range than the larger galaxies studied by @gnedin/etal:2008, they found much higher values of [$f_{\rm esc}$]{} than would be expected from extrapolating the results of @gnedin/etal:2008 to lower masses. @wise/cen:2008 attribute this difference to the irregular morphology of their dwarf galaxies, with a turbulent and clumpy interstellar medium (ISM) allowing for large values of [$f_{\rm esc}$]{}. Others have also looked at how the shape and morphology of the galaxy can affect [$f_{\rm esc}$]{}. @dove/shull:1994, using a Strömgren model, studied how [$f_{\rm esc}$]{} varies with various H I disk density distributions.
In addition, many authors have found that superbubbles and shells can trap radiation until blowout, as seen in the analytical models of @dove/shull/ferrara:2000 as well as in the hydrodynamical simulations of @fujita/etal:2003. The analytical model of @clark/oey:2002 showed that high star formation rates can raise the porosity of the ISM and thereby increase [$f_{\rm esc}$]{}. In addition to bubbles and structure caused by supernovae, galaxies can have a clumpy ISM whose inhomogeneities affect [$f_{\rm esc}$]{}. For example, dense clumps could reduce [$f_{\rm esc}$]{} [@dove/shull/ferrara:2000]. On the other hand, @boisse:1990, @hobson/scheuer:1993, @witt/gordon:1996, and @wood/loeb:2000 all found that clumps in a randomly distributed medium cause [$f_{\rm esc}$]{} to rise, while @ciardi/etal:2002 found that the effects of clumps depend on the ionization rate. A host of other galaxy parameters have been tested analytically and with simulations. Increasing the baryon mass fraction lowers [$f_{\rm esc}$]{} for smaller halos, but increases it at masses greater than $10^8 \: M_{\sun}$ [@wise/cen:2008]. The star formation history changes the amount of ionizing photons and neutral hydrogen, causing [$f_{\rm esc}$]{} to vary from $0.12$ to $0.20$ for coeval star formation and from $0.04$ to $0.10$ for a time-distributed starburst [@dove/shull/ferrara:2000]. Other galactic quantities, such as spin [@wise/cen:2008] or dust content [@gnedin/etal:2008], do not seem to affect the escape fraction. Observations have also been used to constrain [$f_{\rm esc}$]{}, especially at $z\lesssim3$. Searches for escaping Lyman continuum radiation at redshifts $z \lesssim 1-2$ have found escape fractions of at most a few percent [@bland/maloney:2002; @bridge:2010; @cowie/etal:2009; @tumlinson/etal:1999; @deharveng:2001; @grimes/etal:2007; @grimes:2009; @heckman/etal:2001; @Leitherer/etal:1995; @malkan:2003; @Siana/etal:2007].
@hurwitz/etal:1997 saw large variations in the escape fraction, and @hoopes:2007 and @bergvall/etal:2006 saw a relatively high escape fraction of $10\%$. @ferguson/etal:2001 observed [$f_{\rm esc}$]{} $\approx 0.2$ at $z\approx1$. @hanish/etal:2010 do not see a difference in [$f_{\rm esc}$]{} between starbursts and normal galaxies. @siana/etal:2010 also found low escape fractions at $z \approx 1.3$ and showed that no more than $8\%$ of galaxies at this redshift can have $f_{\rm esc,rel} > 0.5$. Note that $f_{\rm esc,rel}$, which the authors use to compare their results to other surveys, is defined as the ratio of escaping LyC photons to escaping 1500 $\AA$ photons. In our own Galaxy, @bland/maloney:1999 and @putman/etal:2003 found an escape fraction of only a few percent. Observations using $\gamma$-ray bursts [@chen:2007] show [$f_{\rm esc}$]{} $\approx 0.02$ at $z \approx 2$. At higher redshift ($z \approx 3$), [$f_{\rm esc}$]{} seems to vary drastically from galaxy to galaxy [@shapley/etal:2006; @Iwata/etal:2009; @vanzella:2010], with a few galaxies having very large escape fractions. Some studies have found low values of the escape fraction at $z \approx 3$ [@fernandez-soto/etal:2003; @Giallongo/etal:2002; @heckman/etal:2001; @inoue/etal:2005; @wyithe:2010], while others have found significant LyC leakage [@Steidel:2002; @shapley/etal:2006]. This large variation from galaxy to galaxy suggests a dependence on viewing angle and could indicate the patchiness and structure of neutral hydrogen within the galaxy [@bergvall/etal:2006; @deharveng:2001; @grimes/etal:2007; @heckman/etal:2001]. From these observations, one infers that the fundamental properties of the galaxies change with time or that [$f_{\rm esc}$]{} increases with increasing redshift [@bridge:2010; @cowie/etal:2009; @inoue/etal:2006; @Iwata/etal:2009; @Siana/etal:2007]. 
@Bouwens/etal:2010 looked at the blue color of high redshift ($z \approx 7$) galaxies and argued that the nebular component must be reduced. This would suggest a much larger escape fraction in the past. The minimum mass of galaxy formation can also put limitations on [$f_{\rm esc}$]{}. Observations of Ly$\alpha$ absorption toward high-redshift quasars, combined with the UV luminosity function of galaxies, can limit [$f_{\rm esc}$]{} from a redshift of $5.5$ to $6$, with [$f_{\rm esc}$]{} $\sim$ 0.20–0.45 if the halos producing these photons are larger than $10^{10} \: M_{\sun}$. This can decrease to [$f_{\rm esc}$]{} $\sim$ 0.05–0.1 if halos down to $10^8 \: M_{\sun}$ are included as sources of escaping ionizing photons [@srbinovsky/wyithe:2008]. It is clear that many factors can affect [$f_{\rm esc}$]{}, and the problem is quite complicated. Cosmological simulations that predict the escape fraction provide a more accurate estimate for [$f_{\rm esc}$]{}. However, many parameters of the galaxy change at once, and it becomes difficult to understand how a single parameter can affect the escape fraction. In addition, trends may be difficult to understand because of the manner in which some physical processes are included or neglected. Analytic models can show clearer trends, even though they may be over-simplified and miss important physics. Therefore, rather than predicting a quantitative value for [$f_{\rm esc}$]{}, we seek to understand how properties of galaxies and their internal structure affect the escape fraction. Because our model is simplified, the values of [$f_{\rm esc}$]{} are not exact, but rather illustrate trends caused by various galactic properties. In section \[sec:Method\], we explain our method of tracing photons that escape the galaxy. In section \[sec:Results\], we present our results, and we compare them to previous literature in section \[sec:Lit\].
In section \[sec:reion\], we consider constraints from reionization, and we conclude in section \[sec:Conclusions\]. Throughout, we use the cosmological parameters from WMAP-7 [@wmap7]. METHODOLOGY {#sec:Method} =========== Properties of the Galaxy ------------------------ We use an exponential hyperbolic secant profile [@spitzer:1942] to describe the density of an isothermal disk in a halo of mass $M_{\rm halo}$: $$n_H(r,Z) = n_0 \exp[-r/r_{h}]\: \rm{sech}^2\left(\frac{Z}{z_0}\right), \label{eq:densitysech}$$ where $n_0$ is the number density of hydrogen at the center of the galaxy, $r$ is the cylindrical radius, $Z$ is the height above the galaxy mid-plane, and $r_{h}$ is the scale radius: $$r_{h} = \frac{j_d \lambda }{\sqrt{2}m_d} r_{\rm vir}$$ [@mo/etal:1998]. The parameter $j_d$ is the fraction of the halo’s angular momentum in the disk, $\lambda$ is the spin parameter, $m_d$ is the fraction of the halo mass in the disk (at most $\Omega_b/\Omega_m$), and $r_{vir}$ is the virial radius. As in @wood/loeb:2000, we assume $j_d/m_d = 1$ and $\lambda = 0.05$. The virial radius is $$r_{vir}=(0.76 {\rm kpc}) \left(\frac{M_{halo}}{10^8 M_{\sun} h^{-1}}\right)^{1/3}\left(\frac{\Omega_m}{\Omega(z_f)}\frac{\Delta_c}{200}\right)^{-1/3} \left(\frac{1+z_f}{10}\right)^{-1} h^{-1} \: , \label{eq:rvir}$$ [@navarro/etal:1997] where $\Delta_c = 18 \pi^2 +82d-39d^2$ and $d=\Omega(z_f)-1$. Here $\Omega(z_f)$ is the local value of $\Omega_m$ at the redshift of galaxy formation, $z_f$. The dependence of the virial radius on $z_f$ affects the density of the disk, with smaller disks of higher density forming earlier. The disk scale height, $z_0$, is given by $$z_0 = \left(\frac{\langle v^2 \rangle}{2\pi G \rho_0}\right)^{1/2} = \left(\frac{M_{halo}}{2\pi \rho_0 r_{vir}}\right)^{1/2},$$ where $\langle v^2 \rangle$ is the mean square of the gas velocity and $\rho_0$ is the central density. In a real galaxy, there will be non-thermal motions of the gas.
However, to simplify the calculations, we assume that the gas is virialized, feels the gravity of the disk, and therefore follows the relation $\langle v^2 \rangle = G M_{halo} / r_{vir}$. Radiative cooling can cause the disk to be thinner than this. The central density is solved for self-consistently once the halo mass and the redshift of formation are specified. We use $15 r_{h}$ and $2z_0$ as the limits of the radius and height of the disk, respectively. The structure of real galaxies is more complicated: the addition of stars in the halo will change the gas distribution through feedback, heating, and gravitational effects. For simplicity, we ignore these effects. The mass of the disk (stars and gas) is taken as $M_{\rm disk} = m_d \, M_{\rm halo}$, where $m_d$ is the fraction of matter that is incorporated into the disk. The upper limit of $m_d$ is $\Omega_b / \Omega_m$. The mass of the stars within the disk is $M_* = M_{\rm disk} f_* $, where $f_*$ is the star formation efficiency, which describes the fraction of baryons that form into stars.[^1] The remainder of the mass of the disk is in gas, distributed according to equation \[eq:densitysech\] with a gas temperature of $10^4$ K. The number of ionizing photons is related to $f_*$, considering either Population III (metal-free) or Population II (metal-poor, $Z = 0.02 Z_\sun$) stars. The total number of ionizing photons per second from the entire stellar population, $Q_{\rm pop}$, is $$Q_{pop} = \frac{\int_{m_1}^{m_2}\overline{Q}_H(m) f(m) dm}{\int_{m_1}^{m_2} m f(m) dm} \times M_* \; ,$$ where $m$ is the mass of the star, and $m_1$ and $m_2$ are the lower and upper mass limits of the mass spectrum, given by $f(m)$. For a distribution weighted toward less massive stars, we use the Salpeter initial mass spectrum [@salpeter:1955]: $$f(m) \propto m^{-2.35}, \label{eq:salpeter}$$ with $m_1=0.4 \; M_\sun$ and $m_2 = 150 \; M_\sun$.
The Larson initial mass spectrum illustrates a case with heavier stars [@larson:1998]: $$f(m)\propto m^{-1}\left(1+\frac{m}{m_c}\right)^{-1.35},$$ with $m_1 = 1 \; M_\sun$, $m_2=500 \; M_\sun$, and $m_c = 250 \; M_\sun$ for Population III stars and $m_1 = 1 \; M_\sun$, $m_2=150 \; M_\sun$, and $m_c = 50 \; M_\sun$ for Population II stars. We define $\overline{Q}_H$ as the number of ionizing photons emitted per second per star, averaged over the star’s lifetime. For Population III stars of mass parameter $x \equiv \log_{10}(m/M_\sun)$, this is $$\begin{aligned} \log_{10}\left[\overline{Q}_H/{\rm s^{-1}}\right] &=&\left\{ \begin{array}{ll} 43.61 + 4.90x - 0.83x^2 & 9-500~M_\sun \; ,\\ 39.29 + 8.55x & 5-9~M_\sun \; ,\\ 0 & \rm {otherwise} \; , \end{array}\right.\end{aligned}$$ and for Population II stars, $$\begin{aligned} \log_{10}\left[\overline{Q}_H/{\rm s^{-1}}\right] &=&\left\{ \begin{array}{ll} 27.80 + 30.68x - 14.80x^2 + 2.50x^3 & \geq 5 M_\sun\\ 0 & \rm {otherwise} \; , \end{array}\right.\end{aligned}$$ as given in Table 6 of @schaerer:2002. Calculating the Escape Fraction ------------------------------- We place the stars at the center of the galaxy. An ionized H II region develops around the stars, where the number of ionizing photons emitted per second by the stellar population, $Q_{\rm pop}$, is balanced by recombinations, such that $$Q_{pop} = \frac{4}{3} \pi r_s^3 n_H^2 \alpha_B(T).$$ Here, $\alpha_B$ is the case-B recombination rate coefficient of hydrogen and $T$ is the temperature of the gas (we assume $T=10^4$ K). The radius of this H II region, called the Strömgren radius, is $$r_s= \left(\frac{3 Q_{\rm pop}}{4 \pi n_H^2 \alpha_B}\right)^{1/3}.$$ This radius is simple to evaluate in the case of a uniform medium, but if we are concerned with clumps and a disk with a density profile, the density will be changing with location.
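For the simple uniform-medium case, these quantities are easy to evaluate numerically. The sketch below (a minimal illustration, not the paper's code: the midpoint-rule grid, the $10^6\,M_\sun$ stellar mass, $n_H = 1~{\rm cm^{-3}}$, and $\alpha_B = 2.6\times10^{-13}~{\rm cm^3\,s^{-1}}$ are our own assumed values) integrates the @schaerer:2002 fits over the Salpeter mass spectrum to get $Q_{\rm pop}$ per unit stellar mass, then evaluates the uniform-medium Strömgren radius:

```python
import math

def qbar_pop3(m):
    """Lifetime-averaged ionizing photon rate (s^-1) of a Population III star
    of mass m (in M_sun), from the Schaerer (2002) fits quoted above."""
    x = math.log10(m)
    if 9.0 <= m <= 500.0:
        return 10.0 ** (43.61 + 4.90 * x - 0.83 * x * x)
    if 5.0 <= m < 9.0:
        return 10.0 ** (39.29 + 8.55 * x)
    return 0.0

def qbar_pop2(m):
    """Same for Population II (Z = 0.02 Z_sun) stars."""
    x = math.log10(m)
    if m >= 5.0:
        return 10.0 ** (27.80 + 30.68 * x - 14.80 * x * x + 2.50 * x ** 3)
    return 0.0

def salpeter(m):
    """Salpeter mass spectrum f(m) ~ m^-2.35 (the normalization cancels)."""
    return m ** -2.35

def q_per_msun(qbar, f, m1, m2, n=4000):
    """Q_pop / M_*: IMF-weighted mean of Qbar_H divided by the mean stellar
    mass, both integrals done with a simple midpoint rule."""
    dm = (m2 - m1) / n
    num = den = 0.0
    for i in range(n):
        m = m1 + (i + 0.5) * dm
        num += qbar(m) * f(m) * dm
        den += m * f(m) * dm
    return num / den

# Ionizing photons per second per solar mass of stars, Salpeter limits from the text.
q3 = q_per_msun(qbar_pop3, salpeter, 0.4, 150.0)
q2 = q_per_msun(qbar_pop2, salpeter, 0.4, 150.0)

# Uniform-medium Stromgren radius for an assumed 1e6 M_sun of Pop III stars,
# with assumed n_H = 1 cm^-3 and alpha_B = 2.6e-13 cm^3 s^-1.
Q_pop = q3 * 1.0e6
r_s = (3.0 * Q_pop / (4.0 * math.pi * 1.0 ** 2 * 2.6e-13)) ** (1.0 / 3.0)  # cm
```

With these fits the Population III population yields more ionizing photons per unit stellar mass than the Population II one, consistent with the discussion of stellar populations below.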
Although we are calculating [$f_{\rm esc}$]{} at a moment in time, in reality the H II region is not static, and the ionization front will propagate at a flux-limited speed. We place all stars at the center of the galaxy, whereas in reality star formation will be distributed throughout the galaxy. If stars are closer to the edge of the galaxy, their photons will have less hydrogen to traverse, and hence will escape more easily. Therefore, the results we present will be lower limits on the escape fraction. We integrate along the path length that a photon takes in order to escape the galaxy, following the formalism in @dove/shull:1994. We can then calculate the escape fraction of ionizing photons along each ray emanating from the center of the galaxy by equating the number of ionizing photons to the number of hydrogen atoms across its path. If there are more photons than hydrogen atoms, the ray can break out of the disk; otherwise, no photons escape and the escape fraction is zero. The escape fraction along a path, $\eta$, thus depends on the amount of hydrogen the ray traverses, which depends on its angle $\theta$, measured from the axis perpendicular to the disk: $$\eta(\theta) = 1 -\frac{4 \pi \alpha_B}{Q_{pop}}\int^{\infty}_0 n_H^2(r,Z)\, r^2 \, dr \; ,$$ where the density is evaluated along the ray. Photons are more likely to escape out of the top and bottom of the disk, rather than the sides, because there is less path length to traverse. This creates a critical angle, beyond which photons no longer escape the galaxy. The total escape fraction, [$f_{\rm esc}$]{}, is then found by averaging $\eta$ over the solid angle $\Omega$: $$\begin{aligned} f_{\rm esc}(Q_{\rm pop}) &=& \int \frac{\eta(\theta)}{4\pi} d\Omega\\ &=& \int \frac{1}{2}\eta(\theta)\: \sin(\theta)\: d\theta \; .\end{aligned}$$ We take into account the whole disk (top and bottom), so that an [$f_{\rm esc}$]{} of $1$ means that all photons produced are escaping into the IGM.
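As a concrete illustration of this angular integral, the sketch below evaluates $\eta(\theta)$ and [$f_{\rm esc}$]{} for a smooth sech$^2$ disk, with each ray truncated where it exits the model disk (height $2z_0$ or radius $15 r_h$). Every numerical value here ($n_0$, the scale lengths, $\alpha_B$, and the two $Q_{\rm pop}$ values) is an illustrative assumption of ours, not the paper's fiducial galaxy:

```python
import math

# Toy disk parameters (assumed for illustration only).
ALPHA_B = 2.6e-13   # case-B recombination coefficient at 1e4 K, cm^3 s^-1
N0 = 1.0            # central hydrogen density, cm^-3
R_H = 3.1e21        # disk scale radius, cm (~1 kpc)
Z0 = 3.1e20         # disk scale height, cm (~0.1 kpc)

def n_H(rcyl, Z):
    """Disk density n_0 exp(-r/r_h) sech^2(Z/z_0)."""
    return N0 * math.exp(-rcyl / R_H) / math.cosh(Z / Z0) ** 2

def eta(theta, q_pop, nr=1500):
    """Escape fraction along a ray at angle theta from the disk normal."""
    ct, st = abs(math.cos(theta)), abs(math.sin(theta))
    # The ray leaves the model disk at height 2 z0 or radius 15 r_h,
    # whichever comes first along the path.
    r_exit = min(2.0 * Z0 / ct if ct > 0.0 else float("inf"),
                 15.0 * R_H / st if st > 0.0 else float("inf"))
    dr = r_exit / nr
    integral = 0.0
    for i in range(nr):
        r = (i + 0.5) * dr
        integral += n_H(r * st, r * ct) ** 2 * r * r * dr
    return max(0.0, 1.0 - 4.0 * math.pi * ALPHA_B * integral / q_pop)

def f_esc(q_pop, ntheta=150):
    """f_esc = (1/2) * integral of eta(theta) sin(theta) dtheta over [0, pi]."""
    dth = math.pi / ntheta
    return 0.5 * sum(eta((i + 0.5) * dth, q_pop) * math.sin((i + 0.5) * dth) * dth
                     for i in range(ntheta))

f_low, f_high = f_esc(1.0e51), f_esc(1.0e52)  # brighter population -> larger f_esc
```

With these toy numbers the escape fraction lies strictly between 0 and 1 and grows with $Q_{\rm pop}$; photons escape preferentially near the poles, where the path length, and hence the recombination integral, is smallest.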
Adding Clumps ------------- A medium with clumps can be described by the density contrast $C = n_{c}/n_{ic}$ between the clumps (density $n_{c}$) and the interclump medium (density $n_{ic}$). The fraction of the volume taken up by the clumps is described by the volume filling factor $f_{V}$. We randomly distribute clumps throughout the galaxy. We define $n_{\rm mean}$ as the density the medium would have if it were not clumpy, given by equation \[eq:densitysech\]. The density at each point is given by $$n_{c}=\frac{n_{\rm mean}}{f_{V}+(1-f_{V})/C} \label{eq:clump}$$ if the point is in a clump and $$n_{ic}=\frac{n_{\rm mean}}{f_{V}(C-1)+1} \label{eq:ic}$$ if the point is not in a clump, similarly to @wood/loeb:2000. In this way, the galaxy retains the same interstellar gas mass, independent of $f_{V}$ and $C$. As $C$ increases, the density of the clumps increases while the density of the interclump medium falls. Similarly, if $f_V$ is larger, more of the medium is contained in less dense clumps. We trace photons on their path through the galaxy and track whether or not they encounter a clump. At each step on the path out of the galaxy, a random number is generated; a clump exists at that step if the random number is less than the volume filling factor. As the filling factor increases, clumps can merge, forming larger, arbitrarily shaped clumps. Counting the number of photons exiting the galaxy then leads to [$f_{\rm esc}$]{}. The covering factor is computed by counting how many clumps intersect a ray as it travels out of the galaxy. RESULTS {#sec:Results} ======= Properties of the Clumps ------------------------ In the first calculations, we placed Population III stars with a Larson mass spectrum and a star formation efficiency $f_* = 0.5$ in a halo of $M_{\rm halo} = 10^9 M_\sun$, with a redshift of formation of $z_f = 10$. The clumps have a diameter of $10^{17}$ cm, unless otherwise stated.
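The clump and interclump densities of equations \[eq:clump\] and \[eq:ic\] conserve the disk gas mass by construction, since their volume-weighted mean recovers $n_{\rm mean}$. A minimal check (Python; the numerical values are arbitrary):

```python
def clump_densities(n_mean, f_V, C):
    """Clump and interclump densities; by construction C = n_c / n_ic."""
    n_c = n_mean / (f_V + (1.0 - f_V) / C)     # density inside a clump
    n_ic = n_mean / (f_V * (C - 1.0) + 1.0)    # density between clumps
    return n_c, n_ic

# Arbitrary test values: mean density 1 cm^-3, 10% of the volume in clumps, C = 100.
n_c, n_ic = clump_densities(1.0, 0.1, 100.0)
mean = 0.1 * n_c + 0.9 * n_ic   # volume-weighted mean recovers n_mean
```

Because $f_V n_c + (1-f_V) n_{ic} = n_{\rm mean}$ identically, raising $C$ concentrates mass into the clumps while draining the interclump medium, which is the behavior the results below rely on.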
The top panel of Figure \[fig:varyff\] shows [$f_{\rm esc}$]{} as a function of $f_V$ for various values of $C$. The case with no clumps is equivalent to $C=1$. As clumps are introduced, [$f_{\rm esc}$]{} quickly falls, but rises again as $f_V$ rises. This is because the clumps become less dense (since more of the medium is in clumps and the mass of the galaxy must be kept constant). In addition, [$f_{\rm esc}$]{} drops as $C$ increases, showing that denser clumps with a less dense interclump medium stop more ionizing radiation than a more evenly distributed medium. The clumps are small enough that essentially every ray traversing the galaxy encounters one of these very dense clumps and is diminished. In the bottom panel of Figure \[fig:varyff\], the same population of stars is shown for various values of $f_V$ as a function of $\log(C)$. As $C$ increases, [$f_{\rm esc}$]{} becomes low for small values of $f_V$. Again, this is because of a few very dense clumps that stop essentially all radiation. As $f_V$ increases, more of the medium is in clumps, and therefore the density of the clumps decreases. The combined effect is an increase of [$f_{\rm esc}$]{}. The solid black line shows the case with no clumps, or when $f_V=0$. For $f_V=0$, [$f_{\rm esc}$]{} equals the case with no clumps ($C=1$), as it should. Above $C \sim 10-100$, increasing $C$ no longer affects [$f_{\rm esc}$]{}. ![ Escape fraction of ionizing photons out of the disk as a function of the clump volume filling factor $f_V$ for various values of the clumping factor $C$ ([*[top panel]{}*]{}) and $\log(C)$ for various values of $f_V$ ([*[bottom panel]{}*]{}). Shown for a $10^9 M_\sun$ halo at $z_f=10$, with $f_* = 0.5$ and Population III stars with a Larson mass spectrum. The fraction of matter incorporated into the disk, $m_d$, scales with the escape fraction.
[]{data-label="fig:varyff"}](figure1a.eps "fig:"){width="120mm"} ![[]{data-label="fig:varyff"}](figure1b.eps "fig:"){width="120mm"} So far, we have only been exploring the results of small clumps, $10^{17}\: \rm{cm}$ ($\sim 0.03\: \rm{pc}$) in diameter. What would happen if we were to increase the size of these clumps? In this case, a ray would traverse fewer clumps as it travels out of the galaxy (the covering factor will fall), but any given clump would be larger. As shown in Figure \[fig:clumpsize\], [$f_{\rm esc}$]{} rises as the clumps increase in size. For very low values of $f_V$, only a few clumps exist and not every ray comes in contact with a clump, increasing the escape fraction above the case with no clumps. To illustrate this further, Figure \[fig:clumplog\] shows [$f_{\rm esc}$]{} against $f_V$ for a large clump size ($10^{19}$ cm in diameter). In the top panel, $C$ is varied for $m_d = 0.01$ and $m_d = 0.17$. The left-most vertical line represents the value of $f_V$ needed for a photon traversing the longest path length to pass through an average of one clump, for both $m_d=0.01$ and $m_d=0.17$. The right-most vertical solid line represents the value of $f_V$ for a photon traversing the shortest path length to pass through an average of one clump for $m_d=0.17$, while the dashed line is the shortest path for $m_d=0.01$. To the right of these lines, all path lengths intersect a clump; in other words, the covering factor is greater than one.
To the left of these lines, there are clump-free path lengths out of the galaxy. For $m_d=0.17$, we see that if $f_V$ is low enough that some rays pass, on average, through fewer than one clump, [$f_{\rm esc}$]{} is much greater than in the no-clump case. For very low values of $f_V$, there are so few clumps that the interclump medium approaches the case with no clumps. Therefore, the plot of [$f_{\rm esc}$]{} is peaked in the region where there are some paths that do not intersect a clump. These results are averaged over ten runs, while Figure \[fig:clumplog2\] shows the distribution for each run. The bottom panel of Figure \[fig:clumplog\] shows [$f_{\rm esc}$]{} for various values of $m_d$. As $m_d$ decreases, the disk is less massive, and the escape fraction for a non-clumped galaxy rises. For $m_d = 0.01$, the escape fraction of a non-clumped galaxy is $\sim 1$. In this case, once clumps are added, the escape fraction decreases because regions dense enough to stop ionizing radiation are finally introduced. When the covering factor is greater than one, the clumps grow less dense, causing the escape fraction to rise again. For all other cases, [$f_{\rm esc}$]{} peaks when the covering factor is less than one, and falls below the escape fraction for the non-clumped case when the covering factor is greater than one. ![ Effect of large clumps on escape fraction: [$f_{\rm esc}$]{} out of the disk is shown for a galaxy in a $10^9 M_\odot$ halo, with $z_f = 10$, $f_* = 0.1$, and a Pop III Larson initial mass spectrum, with $m_d=0.17$. If the clumps are very large and $f_V$ is very low, there are cases where [$f_{\rm esc}$]{} is larger than in the no-clump case. In the top left panel, the results are averaged over ten runs to reduce noise. In the remaining three panels, the distributions are shown for each run. In each case, the black points represent a case with no clumps. Galaxy properties are otherwise the same as in Figure \[fig:varyff\].
[]{data-label="fig:clumpsize"}](figure2a.eps "fig:"){width="80mm"} ![[]{data-label="fig:clumpsize"}](figure2b.eps "fig:"){width="80mm"} ![[]{data-label="fig:clumpsize"}](figure2c.eps "fig:"){width="80mm"} ![[]{data-label="fig:clumpsize"}](figure2d.eps "fig:"){width="80mm"} ![[*[Top:]{}*]{} The [$f_{\rm esc}$]{} out of the disk is shown for a galaxy in a $10^9 M_\odot$ halo, with $z_f=10$, $f_* = 0.1$, and a Population III Larson initial mass spectrum. The left-most solid vertical line represents the value of the volume filling factor $f_V$ needed for a photon traversing the longest path length to pass through an average of one clump for a galaxy with $m_d = 0.17$, while the right-most solid vertical line represents the value of $f_V$ needed for a photon traversing the shortest path length to pass through an average of one clump for the same galaxy. To the right of this line, the covering factor is greater than one and all path lengths intersect a clump, while to the left, there are clump-free path lengths out of the galaxy. For a galaxy with $m_d=0.01$, the dashed vertical line represents the value of $f_V$ needed for a photon traversing the shortest path length to pass through an average of one clump. The value of $f_V$ needed for a photon traversing the longest path overlaps with the $m_d = 0.17$ case, because as $m_d$ changes, the scale height of the galaxy changes, but the radius remains the same. [*[Bottom]{}*]{}: The dependence of the escape fraction on $m_d$. The same population is shown, with $C = 1$ for the solid lines and $C = 1000$ for the dashed lines. The results are averaged over ten runs to reduce noise. []{data-label="fig:clumplog"}](figure3a.eps "fig:"){width="95mm"} ![[]{data-label="fig:clumplog"}](figure3b.eps "fig:"){width="95mm"} ![Distribution of [$f_{\rm esc}$]{} for each run shown in the top panel of Figure \[fig:clumplog\], with $m_d = 0.17$ in the top panel and $m_d = 0.01$ in the bottom panel. []{data-label="fig:clumplog2"}](figure4a.eps "fig:"){width="120mm"} ![[]{data-label="fig:clumplog2"}](figure4b.eps "fig:"){width="120mm"} Star formation is more likely to take place in clumps, and star formation itself affects gas clumping through stellar winds and supernova shells.
Because of this, clumps are likely to be distributed around locations of star formation. To analyze this effect, we define a region extending 100 pc from the center of star formation in the galaxy. Inside this region, the volume filling factor is $f_{\rm V,near}$, and outside of this region, the volume filling factor is $f_{\rm V,far}$. For a case where clumps are located only near the star formation center, $f_{\rm V,far} = 0$. If $f_{\rm V,far} < f_{\rm V,near}$, there are fewer clumps in the outer portions of the galaxy than near the star formation center. Results are shown in Figure \[fig:clumpdist\]. The black solid line shows the case with no clumps, where $f_{V}=0$ throughout, and the red triple-dot dashed line represents the case where $f_V=0.3$ throughout. For the case where $f_{\rm V,near}=0.3$ and $f_{\rm V,far}=0$, clumps are only near stars. This results in a higher escape fraction than if the entire galaxy had $f_{V}=0.3$. When there are some clumps far from star formation, as in the case where $f_{\rm V,near}=0.3$ and $f_{\rm V,far}=0.1$, the escape fraction falls below the case in which the entire galaxy has $f_{V}=0.3$. This is a result of clumps becoming more dense as $f_V$ increases. We have chosen a 100 pc radius for the region near the star formation. If this region is much larger, $\sim 1$ kpc, the escape fraction will equal the case with $f_V = f_{\rm V,near}$ throughout the galaxy; if the region is much smaller, $\sim10$ pc, the escape fraction will equal the case with $f_V = f_{\rm V,far}$ throughout. Overall, the distribution of clumps does not change the escape fraction significantly. Properties of Stars and the Galaxy ---------------------------------- In Figure \[fig:Q1\], we analyze how the stellar population affects the escape fraction. In the top panel, $f_*$ is held constant as $f_V$ increases. In the bottom panel, $f_V$ is held constant as $f_*$ increases.
Both plots show metal-free (Population III) stars and metal-poor (Population II) stars, as well as stars with a heavy Larson initial mass spectrum and a light Salpeter initial mass spectrum. In both cases, [$f_{\rm esc}$]{} rises with the number of ionizing photons emitted by the stars, with heavier stars or stars with fewer metals more likely to produce photons that can escape the nebula. This is because when more ionizing photons are produced, the critical angle within which photons can break free from the halo increases, and hence more photons escape. When the galaxy forms more stars (higher $f_*$), there is the added effect of less hydrogen remaining in the galaxy to absorb ionizing photons. Therefore, [$f_{\rm esc}$]{} increases greatly as $f_*$ increases. ![The [$f_{\rm esc}$]{} for the disk is shown for stars of varied masses and metallicities and various values of $f_V$ with $f_* = 0.5$ ([*[top panel]{}*]{}) and various values of $f_*$ for $f_V=0.1$ ([*[bottom panel]{}*]{}). Very high values of $f_*$ ($0.9$) approach a case in which all ionizing photons are escaping. Both are shown for a $10^9 M_\sun$ halo at $z_f=10$. In each case, the highest line is for Population III Larson, followed by Population II Larson, Population III Salpeter, and Population II Salpeter. For $f_* =0.3$, [$f_{\rm esc}$]{} $\sim 0$ for both Salpeter cases. Please see the electronic edition for a color version of this plot. []{data-label="fig:Q1"}](figure6a.eps "fig:"){width="120mm"} ![[]{data-label="fig:Q1"}](figure6b.eps "fig:"){width="120mm"} The star formation redshift, $z_f$, is varied in Figure \[fig:galaxy1\]. As $z_f$ increases, the galaxy is smaller and more concentrated; it is therefore easiest for photons to escape from the less dense disks formed at low redshifts. At the redshifts where we expect reionization to take place, it is harder for photons to escape the galaxy. This may be offset at high redshift by the expected presence of more massive or metal-free stars and higher values of $f_*$. COMPARISON TO PREVIOUS LITERATURE {#sec:Lit} ================================= As noted in the introduction, many previous studies have calculated the number of ionizing photons emitted from high-redshift halos, resulting in a wide range of values for the escape fraction. Various factors have been proposed that affect the number of ionizing photons that escape into the IGM, in particular the effects of a clumpy ISM. @boisse:1990 and @witt/gordon:1996 found that clumps increase transmission, and @hobson/scheuer:1993 found that a three-phase medium (clumps grouped together, rather than randomly distributed) further increases transmission. Very dense clumps (with $C=10^6$) were studied by @wood/loeb:2000, who found that clumps increase [$f_{\rm esc}$]{} over the case with no clumps. For very small values of $f_V$, their [$f_{\rm esc}$]{} was very high, because most of the density is in a few very dense clumps, and most lines of sight do not encounter a clump. Their clump size is 13.2 pc, which is similar to our largest clump size, $5 \times 10^{19}$ cm.
Their results are consistent with our findings for clumps with large radius, low $f_V$, and high $C$, where most rays do not encounter a clump. @ciardi/etal:2002 included the effect of clumps using a fractal distribution of the ISM with $C = 4-8$. They noted that this distribution of clumps increases [$f_{\rm esc}$]{} in cases with a lower ionization rate because there are clearer sight lines. They found that [$f_{\rm esc}$]{} is more sensitive to the gas distribution than to the stellar distribution. @dove/shull/ferrara:2000 reported that [$f_{\rm esc}$]{} decreases as clumps are added. This results from the fact that, in their model, adding clumps does not change the density of the interclump medium, so the mass of hydrogen in the galaxy increases as clumps are added. Our method, by contrast, decreases the density of the interclump medium as clumps are added or become denser, in order to keep the overall mass of the galaxy constant. Photons are more likely to escape along paths with lower density, as in irregular galaxies and along certain lines of sight [@gnedin/etal:2008; @wise/cen:2008]. Shells, such as those created by supernova remnants (SNRs), can trap ionizing photons until the bubble blows out of the disk, allowing photons a clear path to escape and causing [$f_{\rm esc}$]{} to rise [@dove/shull:1994; @dove/shull/ferrara:2000; @fujita/etal:2003]. These SNRs or superbubbles create porosity in the ISM, and above a critical star formation rate, [$f_{\rm esc}$]{} rises [@clark/oey:2002]. This is similar to what is seen in our results: as with a dense clump, a shell will stop essentially all radiation, while a clear path, similar to the case with a low $f_V$, allows many free paths along which radiation can escape. Our model extends the previous work by enlarging the parameter space: we see how low values of $f_V$, the clump size, and the location of the clumps affect the results.
This heavy dependence on how the clumping can affect the escape fraction can explain why such a variation is seen in observations in the escape fraction from galaxy to galaxy. Previous work differs as to whether or not [$f_{\rm esc}$]{} increases or decreases with redshift. @ricotti/shull:2000 state that [$f_{\rm esc}$]{} decreases with increasing redshift for a fixed halo mass, but is consistent with higher escape fraction from dwarf galaxies. This assumes that the star formation efficiency is proportional to the baryonic content in a galaxy. However, other studies seem to indicate that [$f_{\rm esc}$]{} increases with redshift. High resolution simulations of @razoumov/sl:2006 state that [$f_{\rm esc}$]{}increases with redshift from $z = 2.39$ (where [$f_{\rm esc}$]{} = 0.01–0.02) to $z = 3.8$ (where [$f_{\rm esc}$]{} = 0.06–0.1). This is a result of higher gas clumping and lower star formation rates at lower redshifts, causing the escape fraction to fall. At higher redshifts, the simulations of @razoumov/sl:2009 see this trend continue, with an [$f_{\rm esc}$]{} $\approx 0.8$ at $z \approx 10.4$ that declines with time. This trend has also been seen observationally from $0<z<7$ [@bridge:2010; @cowie/etal:2009; @inoue/etal:2006; @Iwata/etal:2009; @Siana/etal:2007; @Bouwens/etal:2010]. (However, @vanzella/etal:2010 point out that observational measurements of the escape fraction can be contaminated by lower redshift interlopers.) Other simulations have given different results. @gnedin/etal:2008 say that [$f_{\rm esc}$]{} changes little from $3 < z < 9$, always being about 0.01–0.03. This difference could possibly be attributed to how the models deal with star formation efficiencies within galaxies. @wood/loeb:2000 state that since disk density increases with redshift, [$f_{\rm esc}$]{} will fall as the formation redshift increases, ranging from [$f_{\rm esc}$]{} = 0.01–1. 
We found that [$f_{\rm esc}$]{}decreases with increasing redshift of formation, since disks are more dense. However, this assumes that the types of stars and $f_*$ remain constant. If $f_*$ is larger at higher redshifts and if stars are more massive and have a lower metallicity (likely), the number of ionizing photons should increase, which could cause [$f_{\rm esc}$]{} to increase, despite a denser disk. It is also possible that the disk morphology does not exist at high redshifts. Therefore, in order to understand the evolution of [$f_{\rm esc}$]{} with redshift, one must understand the evolution of other properties, namely, $f_*$, $Q_{\rm pop}$, and the density distribution of gas within a galaxy. CONSTRAINTS FROM REIONIZATION {#sec:reion} ============================= If galaxies are responsible for keeping the universe reionized, there must be a minimum number of photons that can escape these galaxies to be consistent with reionization. The star formation rate ($\dot{\rho}$) that corresponds to a star formation efficiency $f_*$ is given by: $$\dot{\rho}(z) = 0.536~M_\sun~{\rm yr^{-1}}~{\rm Mpc}^{-3} \left(\frac{f_*}{0.1}\frac{\Omega_bh^2}{0.02}\right) \left(\frac{\Omega_mh^2}{0.14}\right)^{1/2} \left(\frac{1+z}{10}\right)^{3/2} y_{\rm min}(z) e^{-y_{\rm min}^2(z)/2} \label{eq:ferKom}$$ [@fernandez/komatsu:2006], assuming a Press-Schechter mass function [@press/schechter:1974], with $$y_{\rm min}(z) \equiv \frac{\delta_c}{\sigma[M_{\rm min}(z)]D(z)} \; .$$ Here, $\delta_c$ is the overdensity, $\sigma(M)$ is the present-day rms amplitude of mass fluctuations, and $M_{\rm min}$ is the minimum mass of halos that create stars. 
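As a rough numerical aid, Eq. (\[eq:ferKom\]) can be evaluated directly once $y_{\rm min}(z)$ is known. The short Python sketch below is our illustration (not part of the original analysis); it takes $y_{\rm min}$ as a precomputed input, since that quantity depends on the power-spectrum normalization $\sigma(M)$ and the growth factor $D(z)$.

```python
import math

def sfr_density(z, f_star, y_min, omega_b_h2=0.02, omega_m_h2=0.14):
    """Comoving star formation rate density (Msun/yr/Mpc^3), Eq. (eq:ferKom).

    y_min = delta_c / (sigma[M_min(z)] * D(z)) must be supplied, since it
    depends on the power spectrum and the growth factor.
    """
    return (0.536
            * (f_star / 0.1) * (omega_b_h2 / 0.02)
            * math.sqrt(omega_m_h2 / 0.14)
            * ((1.0 + z) / 10.0) ** 1.5
            * y_min * math.exp(-y_min ** 2 / 2.0))
```

For the fiducial parameters ($f_*=0.1$, $\Omega_b h^2=0.02$, $\Omega_m h^2=0.14$) and $z=9$, all prefactors reduce to unity and the rate is simply $0.536\,y_{\rm min}e^{-y_{\rm min}^2/2}$.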
Similarly, the [$f_{\rm esc}$]{} needed to reionize the universe can be related to the critical star formation rate, $\dot{\rho}_{\rm crit}$, needed to keep the universe ionized: $$\dot{\rho}_{\rm crit}(z) = (0.012 \: M_\sun \: {\rm{yr}}^{-1} \: {\rm{Mpc}}^{-3} \: ) \left[\frac{1+z}{8}\right]^3\left[\frac{C_H/5}{f_{esc}/0.5}\right]\left[\frac{0.004}{Q_{\rm LyC}}\right]T_4^{-0.845} \label{eq:trenti}$$ [@madau/etal:1999; @trenti/shull:inprep], which results from the number of photoionizations needed to balance recombinations to keep the universe ionized at $z=7$. Here, $C_H$ is the clumping of the IGM (which we scale to a typical value of $C_H=5$), $T_4$ is the temperature of the IGM in units of $10^4$ K, and $Q_{\rm LyC}$ is the conversion factor from $\dot{\rho}(z)$ to the total number of Lyman continuum photons produced per $M_{\odot}$ of star formation, $$Q_{\rm LyC} \equiv \frac{N_{\rm LyC}/10^{63}}{\dot{\rho}_{\rm crit}t_{\rm rec}},$$ where $N_{\rm LyC}$ is the number of Lyman continuum (LyC) photons produced by a star and $t_{\rm rec}$ is the hydrogen recombination timescale. We assume that $Q_{\rm LyC}=0.004$, which is reasonable for a low-metallicity stellar population. By requiring the star formation rate to be at least as large as the critical value, we can solve for the value of [$f_{\rm esc}$]{} needed to reionize the universe. Results are shown in the top panel of Figure \[fig:reion1\] plotted at two redshifts, $z=7$ and $z=10$. We also consider stars forming in galaxies down to a minimum mass $$M_{min} = (10^8~M_{\odot}) \left[ \frac{(1+z)}{10}\right] ^{-1.5} \; ,$$ [@barkana/loeb], or if smaller halos are suppressed and only those above $10^9M_\sun$ are forming stars. If the required [$f_{\rm esc}$]{} exceeds $1$ (shown by the dashed line), the given population cannot reionize the universe. As redshift decreases, it becomes much easier to keep the universe reionized, and a smaller [$f_{\rm esc}$]{} is needed, as expected. 
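The comparison just described, equating Eqs. (\[eq:ferKom\]) and (\[eq:trenti\]) and solving for the escape fraction, is straightforward to script. The following hedged sketch uses the fiducial parameters quoted in the text and treats the star formation rate density as an input; it is an illustration of the procedure, not the code behind Figure \[fig:reion1\].

```python
def critical_sfr(z, f_esc, c_h=5.0, q_lyc=0.004, t4=1.0):
    """Critical SFR density (Msun/yr/Mpc^3) to keep the IGM ionized, Eq. (eq:trenti)."""
    return (0.012 * ((1.0 + z) / 8.0) ** 3
            * (c_h / 5.0) / (f_esc / 0.5)
            * (0.004 / q_lyc) * t4 ** -0.845)

def required_f_esc(z, sfr, c_h=5.0, q_lyc=0.004, t4=1.0):
    """Escape fraction needed so that `sfr` matches the critical rate.

    Values above 1 mean the assumed population cannot keep the IGM ionized.
    """
    return (0.5 * 0.012 * ((1.0 + z) / 8.0) ** 3
            * (c_h / 5.0) * (0.004 / q_lyc) * t4 ** -0.845 / sfr)
```

By construction, a population forming stars at exactly the critical rate evaluated at $f_{\rm esc}=0.5$ requires $f_{\rm esc}=0.5$, and the required escape fraction grows as $(1+z)^3$ for a fixed star formation rate.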
At higher redshifts, it is harder for stars to keep the IGM ionized because the gas density is higher and there are fewer massive halos forming stars. If small halos are suppressed, the remaining high-mass halos have a much harder time keeping the universe ionized. It is interesting to note that the universe cannot be reionized at $z = 8-10$ if only larger halos ($M_h > 10^9 M_\sun$) are producing ionizing photons that escape into the IGM. At $z = 9$, $f_*$ must always be above $0.1$ for all cases shown in order for the universe to be reionized. At $z=7$, $f_*$ can be very low. Alternately, one can set equations \[eq:ferKom\] and \[eq:trenti\] equal and constrain directly the value of $f_{esc}f_*Q_{LyC}/C_{IGM}$. Any values greater than this would be able to reionize the universe. This is shown in the bottom panel of Figure \[fig:reion1\]. Observations of high redshift galaxies at $z = 7-10$ [@robertson/etal:2010; @Bouwens/etal:2010] show that currently observed galaxies, along with an escape fraction that is compatible with lower redshift observations ([$f_{\rm esc}$]{} $\approx 0.2$), are not able to reionize the universe. Therefore, the faint end slope of the luminosity function must be very steep to create a greater contribution to reionization from high-redshift small galaxies, the escape fraction at high redshifts was probably much higher, the IMF was top heavy and made of low metallicity stars, and/or the clumping factor of the IGM was low [@bunker/etal:2004; @bunker/etal:2010; @gnedin:2008; @gonzalez/etal:2010; @lehnert/bremer:2003; @mclure/etal:2010; @oesch/etal:2010a; @oesch/etal:2010b; @ouchi/etal:2009; @richard/etal:2008; @stiavelli/etal:2004; @yan/windhorst:2004]. In addition, observations of Ly$\alpha$ absorption toward quasars and the UV luminosity function suggest that [$f_{\rm esc}$]{} is allowed to be much lower if low-mass galaxies are allowed to be sources of ionizing photons [@srbinovsky/wyithe:2008]. 
This is consistent with our findings; small galaxies need to be included and their star formation not suppressed to permit [$f_{\rm esc}$]{}  to be sufficiently high to ionize the universe. Low values of [$f_{\rm esc}$]{}, consistent with the low redshift universe, would require high, and in some cases unreasonably high, values of $f_*$. Therefore, the escape fraction was almost certainly higher in the past than it is today. ![[*[Top:]{}*]{} The [$f_{\rm esc}$]{} needed from a population of galaxies with various values of $f_*$ to reionize the universe at $z = 7$ or $10$. If the required [$f_{\rm esc}$]{}  lies above $1$ (dashed line), the population cannot reionize the universe. [*Bottom:*]{} The product $f_{\rm esc} f_*Q_{LyC}/C_{IGM}$, which is directly constrained by reionization. The region above each line has a value of $f_{\rm esc}f_*Q_{LyC}/C_{IGM}$ large enough to reionize the universe. Please see the electronic edition for a color version of this plot. []{data-label="fig:reion1"}](figure8a.eps "fig:"){width="120mm"} ![[*[Top:]{}*]{} The [$f_{\rm esc}$]{} needed from a population of galaxies with various values of $f_*$ to reionize the universe at $z = 7$ or $10$. If the required [$f_{\rm esc}$]{}  lies above $1$ (dashed line), the population cannot reionize the universe. [*Bottom:*]{} The product $f_{\rm esc} f_*Q_{LyC}/C_{IGM}$, which is directly constrained by reionization. The region above each line has a value of $f_{\rm esc}f_*Q_{LyC}/C_{IGM}$ large enough to reionize the universe. Please see the electronic edition for a color version of this plot. []{data-label="fig:reion1"}](figure8b.eps "fig:"){width="120mm"} CONCLUSIONS {#sec:Conclusions} =========== We have explored how the internal properties of galaxies can affect the amount of escaping ionizing radiation. The properties of clumps within the galaxy have the strongest effect on [$f_{\rm esc}$]{}. When the covering factor is much less than one, only a few, dense clumps exist. 
These clumps only stop a small fraction of the ionizing radiation, and therefore the escape fraction will be large. As the covering factor increases to one, the escape fraction will fall, eventually becoming smaller than in the case of a non-clumped galaxy. (The exception to this is if the non-clumped galaxy has an escape fraction of about one. In this case the addition of clumps will only cause the escape fraction to fall around a covering factor of one.) This indicates that the escape fraction is very sensitive to the way the ISM is distributed, causing the escape fraction to vary from 1 to 0 in some cases. The escape fraction depends on the star formation efficiency and the population of stars formed at the center of the galaxy. As the number of ionizing photons increases, the critical angle at which photons can escape the galaxy will also increase, which will have a direct effect on the escape fraction. Since disks were likely denser at higher redshifts, the escape fractions will be lower, unless a change in the star formation efficiency or stellar population increases the number of ionizing photons. Therefore, the escape fraction depends directly on the number of ionizing photons and the distribution of hydrogen. These variations can be extreme and can help to explain why some observations see large differences in the escape fraction from galaxy to galaxy. Other factors, such as the mass of the galaxy and the redshift, affect the escape fraction indirectly (as the number of ionizing photons and the distribution of hydrogen will change with mass or redshift). At high redshifts, small galaxies are probably needed to help complete reionization, consistent with recent observations. In addition, low values of the escape fraction, $f_{\rm esc} \approx 0.02$, similar to what is seen at low redshifts, are probably insufficient to allow reionization to be completed; otherwise, very high values of the star formation efficiency would be required. 
We acknowledge support from the University of Colorado Astrophysical Theory Program through grants from NASA (NNX07AG77G) and NSF (AST07-07474). [99]{} Barkana, R., & Loeb, A. 2001, Phys. Rep., 349, 125 Bergvall, N., Zackrisson, E., Andersson, B.-G., Arnberg, D., Masegosa, J., & Östlin, G. 2006, A&A, 448, 513 Bland-Hawthorn, J., & Maloney, P. R. 2002, ASPC, 254, 267 Bland-Hawthorn, J., & Maloney, P. R. 1999, ApJ, 510, L33 Boissé, P. 1990, A&A, 228, 483 Bouwens, R. J., et al. 2010, , 708, L69 Bridge, C. R., et al. 2010, , 720, 465 Bunker, A. J., Stanway, E. R., Ellis, R. S., & McMahon, R. G. 2004, MNRAS, 355, 374 Bunker, A. J., et al. 2010, MNRAS, 409, 855 Chen, H. W., Prochaska, J. X., & Gnedin, N. Y. 2007, ApJ, 667, L125 Ciardi, B., Bianchi, S., & Ferrara, A. 2002, MNRAS, 331, 463 Clarke, C., & Oey, M. S. 2002, MNRAS, 337, 1299 Cowie, L. L., Barger, A. J., & Trouille, L. 2009, , 692, 1476 Deharveng, J.-M., Buat, V., Le Brun, V., Milliard, B., Kunth, D., Shull, J. M., & Gry, C. 2001, A&A, 375, 805 Dove, J. B., & Shull, J. M. 1994, , 430, 222 Dove, J. B., Shull, J. M., & Ferrara, A. 2000, , 531, 846 Dunkley, J., et al. 2009, ApJS, 180, 306 Ferguson, H. C. 2001, in Deep Fields, Proc. ESO Workshop, ed. S. Cristiani, A. Renzini, R. E. Williams (Garching: Springer-Verlag), 54 Fernandez, E. R., & Komatsu, E. 2006, ApJ, 646, 703 Fernandez-Soto, A., Lanzetta, K. M., & Chen, H.-W. 2003, MNRAS, 342, 1215 Fujita, A., Martin, C. L., Mac Low, M. M., & Abel, T. 2003, , 599, 50 Giallongo, E., Cristiani, S., D’Odorico, S., & Fontana, A. 2002, , 568, L9 Gnedin, N. Y. 2008, , 673, L1 Gnedin, N. Y., Kravtsov, A. V., & Chen, H.-W. 2008, , 672, 765 González, V., Labbé, I., Bouwens, R. J., Illingworth, G., Franx, M., Kriek, M., & Brammer, G. B. 2010, , 713, 115 Grimes, J. P., et al. 2007, , 668, 891 Grimes, J. P., et al. 2009, , 181, 272 Hanish, D. J., Oey, M. S., Rigby, J. R., de Mello, D. F., & Lee, J. C. 2010, , 725, 2029 Heckman, T. M., Sembach, K. R., Meurer, G. 
R., Leitherer, C., Calzetti, D., & Martin, C. L. 2001, , 558, 56 Hobson, M. P., & Scheuer, P. A. 1993, , 264, 145 Hoopes, C. G., et al. 2007, ApJS, 173, 441 Hurwitz, M., Jelinsky, P., Van Dyke Dixon, W. 1997, , 481, L31 Inoue, A. K., Iwata, I., Deharveng, J.-M., Buat, V., & Burgarella, D. 2005 A&A, 435, 471 Inoue, A. K., Iwata, I., & Deharveng, J.-M. 2006, , 371, L1 Iwata, I., et al. 2009, , 692, 1287 Kogut, A., et al. 2003, , 148, 161 Komatsu, E., et al. 2009, ApJS, 180, 330 Komatsu, E., et al. 2011, ApJS, 192, 18 Larson, R. B. 1998, , 301, 569 Lehnert, M. D., & Bremer, M. 2003, , 593, 630 Leitherer, C., Ferguson, H. C., Heckman, T. M., Lowenthan, J. D. 1995, , 454, L19 Madau, P., Haardt, F., & Rees, M. J. 1999, , 514, 648 Malkan, M., Webb, W., Konopacky, Q. 2002, , 598, 878 McLure, R. J., Dunlop, J. S., Cirasuolo, M., Koekemoer, A. M., Sabbi, E., Stark, D. P., Targett, T. A., & Ellis, R. S.2010, MNRAS, 403, 960 Mo, H. J., Mao, S., & White, S. D. 1998, MNRAS, 295, 319 Navarro, J. F., Frenk, C. S., & White, S. D. M. 1997, , 490, 493 Oesch, P. A., et al. 2010, , 709, L16 Oesch, P. A., et al. 2010, , 690, 1350 Ouchi, M., et al. 2009, , 706, 1136 Page, L., et al. 2007, ApJS, 170, 335 Press, W. H., & Schechter, P. 1974, , 187, 425 Putman, M. E., Bland-Hawthorn, J., Veilleux, S., Gibson, B. K., Freeman, K. C., & Maloney, P. R. 2003 , ApJ, 597, 948 Razoumov, A., & Sommer-Larsen, J. 2006, , 651, L89 Razoumov, A., & Sommer-Larsen, J. 2010, , 710, 1239 Richard, J., Stark, Daniel P., Ellis, Richard S., George, Matthew R., Egami, Eiichi, Kneib, Jean-Paul, Smith, & Graham P. 2008, , 685, 705 Ricotti, M., & Shull, J. M. 2000, , 542, 548 Robertson, B. E., Ellis, R. S., Dunlop, J. S., McLure, R. J., & Stark, D. P. 2010, , 468, 49 Salpeter, E. E. 1955, , 121, 161 Schaerer, D. 2002, , 382, 28 Shapley, A. E., Steidel, C. C., Pettini, M., Adelberger, K. L., & Erb, D. K. 2006, , 651, 688 Siana, B., et al. 2007, , 668, 62 Siana, B., et al. 2010, , 723, 241 Spergel, D. N., et al. 
2003, ApJS, 148, 175 Spergel, D. N., et al. 2007, ApJS, 170, 377 Spitzer, L. 1942, , 95, 329 Srbinovsky, J. A., & Wyithe, J. S. B. 2008, PASA, 27, 110 Steidel, C. C., Pettini, M, & Adelberger, K. L. 2001, , 546, 665 Stiavelli, M., Fall, S. M., & Panagia, N. 2004, , 610, L1 Shull, J. M., & Trenti, M. 2010, in prep Trenti, M., Stiavelli, M., Bouwens, R. J., Oesch, P., Shull, J. M., Illingworth, G. D., Bradley, L. D., & Carollo, C. M. 2010, , 714, 202 Tumlinson, J., Giroux, M. L., Shull, J. M., & Stocke, J. T. 1999, , 118, 2148 Vanzella, E., Siana, B., Cristiani, S, & Nonino, M. 2010, MNRAS, 404, 1672 Vanzella, E., et al. 2010, , 725, 1011 Wise, J. H., & Cen, R. 2009, , 693, 984 Witt, A. N., & Gordon, K. D. 1996, , 463, 681 Wood, L., & Loeb, A. 2000, ApJ, 545, 86 Wyithe, J. S. B., Hopkins, A. M., Kistler, M. D., Yuksel, H., & Beacom, J. F. 2010, MNRAS 401, 2561 Yajima, H., Jun-Hwan, C., & Nagamine, K. 2010, MNRAS , in press, arXiv:1002.3346 Yan, H., & Windhorst, R. A. 2004, , 612, L93 [^1]: The star formation efficiency is defined by the fraction of baryons in stars at any given time. Therefore, the escape fraction calculated is the escape fraction at that point in the galaxy’s lifetime. The total escape fraction of the galaxy will depend on the lifetime of the burst and the star formation history, which can be obtained by integrating $f_*$ over the duration of the burst, taking into account the propagation of the ionizing front within the galaxy.
--- abstract: 'The description of physical processes in accelerated frames opens a window to numerous new phenomena. One can encounter these effects both in the subatomic world and on a macroscale. In the present work we review our recent results on the study of the electroweak interaction of particles with accelerated background matter. In our analysis we choose the noninertial comoving frame, where matter is at rest. Our study is based on the solution of the Dirac equation, which exactly takes into account both the interaction with matter and the noninertial effects. First, we study the interaction of ultrarelativistic neutrinos, electrons and quarks with rotating matter. We consider the influence of the matter rotation on the resonance in neutrino oscillations and the generation of an anomalous electric current of charged particles along the rotation axis. Then, we study the creation of neutrino-antineutrino pairs in linearly accelerated matter. The applications of the obtained results to elementary particle physics and astrophysics are discussed.' author: - Maxim Dvornikov title: ELECTROWEAK INTERACTION OF PARTICLES WITH ACCELERATED MATTER AND ASTROPHYSICAL APPLICATIONS --- Nowadays it is understood that noninertial effects are important in various areas of modern science such as elementary particle physics, general and special relativity, as well as condensed matter physics [@review]. Recently, in Refs. [@Dvo14; @Dvo15a; @Dvo15b], it was realized that the electroweak interaction of particles with accelerated matter leads to interesting applications in physics and astrophysics. In those works, the treatment of the particle evolution was carried out in the comoving frame, where matter is at rest, with the noninertial effects being accounted for exactly. In the present work we review our recent results on the particle interaction with accelerated matter. 
Our study of fermion propagation in accelerated matter is based on the Dirac equation in a comoving frame. In this situation one can unambiguously define the interaction with background matter. It is known that the motion in a noninertial frame is equivalent to the interaction with an effective gravitational field having the metric tensor $g_{\mu\nu}$. The Dirac equation for the particle bispinor $\psi$ in curved space-time has the form [@Dvo15a], $$\label{eq:Depsicurv} \left[ \mathrm{i}\gamma^{\mu}(x)\nabla_{\mu}-m \right] \psi = \gamma_{0}(x) \left\{ \frac{V_{\mathrm{L}}}{2} \left[ 1-\gamma^{5}(x) \right] + \frac{V_{\mathrm{R}}}{2} \left[ 1+\gamma^{5}(x) \right] \right\} \psi,$$ where $\gamma^{\mu}(x)$ are the coordinate-dependent Dirac matrices, $\nabla_{\mu}=\partial_{\mu}+\Gamma_{\mu}$ is the covariant derivative, $\Gamma_{\mu}$ is the spin connection, $m$ is the particle mass, $V_{\mathrm{L,R}} \sim G_\mathrm{F} n_\mathrm{eff}$ are the effective potentials of the interaction of the left and right chiral projections with background matter, $G_\mathrm{F}$ is the Fermi constant, $n_\mathrm{eff}$ is the effective density of background particles, $\gamma^{5}(x) = -(\mathrm{i}/4!) E^{\mu\nu\alpha\beta} \gamma_{\mu}(x) \gamma_{\nu}(x) \gamma_{\alpha}(x) \gamma_{\beta}(x)$, $E^{\mu\nu\alpha\beta} = \varepsilon^{\mu\nu\alpha\beta} / \sqrt{-g}$ is the covariant antisymmetric tensor in curved space-time, and $g=\det(g_{\mu\nu})$. In Ref. [@Dvo14] we found the solution of Eq. (\[eq:Depsicurv\]) for an ultrarelativistic neutrino moving in rotating matter. Note that, in the case of neutrinos, we should set $V_{\mathrm{R}}=0$. Choosing the appropriate vierbein vectors, we obtained $\psi$, which is expressed in terms of the Laguerre functions. Then we generalized our result to include different neutrino eigenstates and the mixing between them. We found that the resonance condition is shifted by the matter rotation, contrary to our previous claim in Ref. [@Dvo10]. 
This effect may be relevant to the explanation of the great linear velocities of pulsars, since there is a correlation between the linear and angular velocities of a pulsar [@Joh05]. In Ref. [@Dvo15a], we obtained the solution of Eq.  for ultrarelativistic electroweakly interacting electrons and quarks in rotating matter. Using this solution we derived the nonzero electric current along the rotation axis in the form, $$\label{eq:elcurr} \mathbf{J} = \frac{q\bm{\omega}}{\pi} \left( V_{\mathrm{R}}\mu_{\mathrm{R}}-V_{\mathrm{L}}\mu_{\mathrm{L}} \right),$$ where $\bm{\omega}$ is the angular velocity, $q$ is the electric charge (including the sign) of a test fermion, and $\mu_{\mathrm{R,L}}$ are the chemical potentials of right and left fermions. The existence of the nonzero current in Eq.  is attributed in Ref. [@Dvo15a] to the new *galvano-rotational effect* (GRE). GRE is analogous to the chiral vortical effect [@Vil79], in which the induced current is $\mathbf{J} \sim \bm{\omega} (\mu_{\mathrm{L}}^2 - \mu_{\mathrm{R}}^2)$. However, in the latter case the current vanishes in equilibrium at $\mu_{\mathrm{L}} = \mu_{\mathrm{R}}$, whereas $\mathbf{J}$ in Eq.  is nonzero in this situation. GRE can be used for the generation of a toroidal magnetic field (TMF) in neutron and quark/hybrid stars. It is well known that, in a star, a purely poloidal magnetic field, which is what astronomers observe, is unstable. A toroidal component, which lies inside a star and can be of the same magnitude as the poloidal one, is required. In Ref. [@Dvo15a] we estimated the strength of the TMF generated owing to GRE as $B_\mathrm{tor}\sim |\mathbf{J}|R$, where $R\sim 10\thinspace\text{km}$ is the star radius. Using the characteristics of the background matter in a compact star, one gets that $B_\mathrm{tor} \sim 10^8\thinspace\text{G}$ can be generated [@Dvo15a]. This TMF strength is comparable with the observed magnetic fields in old millisecond pulsars [@PhiKul94]. In Ref. 
[@Dvo15b] we solved Eq.  for an ultrarelativistic neutrino interacting with a linearly accelerated matter. In this case $\psi$ is expressed via the Whittaker functions. The obtained solution turned out to reveal the instability of the neutrino vacuum leading to the creation of the neutrino-antineutrino ($\nu\bar{\nu}$) pairs. This phenomenon is analogous to the well known Unruh effect [@CriHigMat08] consisting in the emission of the thermal radiation by an accelerated particle, with the effective temperature $T_\mathrm{eff} = a/2\pi$, where $a$ is the particle acceleration. Requiring that the probability of the creation of $\nu\bar{\nu}$ pairs is not suppressed, in Ref. [@Dvo15b], we obtained the upper bound on the neutrino mass, $$\label{eq:masslim} m \lesssim m_{\mathrm{cr}}, \quad m_{\mathrm{cr}}=2\sqrt{\frac{|V_\mathrm{L}|a}{\pi}}.$$ If we study the creation of $\nu\bar{\nu}$ pairs in a core collapsing supernova (SN) at the bounce stage, one gets that $m_{\mathrm{cr}} \sim 10^{-7}\thinspace\text{eV}$. The obtained upper bound is comparable with the constraint on neutrino masses established earlier in Ref. [@DvoGavGit14], where we studied the $\nu\bar{\nu}$ pairs creation in SN at the neutronization. In conclusion we mention that we have studied various phenomena happening with particles electroweakly interacting with accelerated background matter. We have considered two types of the acceleration: due to rotation and a linear acceleration. The exact solutions of the Dirac equation for a test fermion, accounting for both the matter interaction and the noninertial effects, have been found. Then we have discussed the influence of the matter rotation on the resonance in neutrino oscillations, the generation of the electric current flowing along the rotation axis, and the creation of $\nu\bar{\nu}$ pairs in a linearly accelerated matter. 
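For readers who wish to evaluate Eq. (\[eq:masslim\]) numerically, the one-line sketch below is an illustration (not from the original papers). It works in natural units, with the matter potential $V_\mathrm{L}$ and the proper acceleration $a$ expressed in the same units (e.g. eV); the supernova estimate $m_{\mathrm{cr}} \sim 10^{-7}\thinspace\text{eV}$ quoted above corresponds to particular physical values of these inputs, which are not reproduced here.

```python
import math

def critical_neutrino_mass(v_l, a):
    """m_cr = 2*sqrt(|V_L| * a / pi), Eq. (eq:masslim).

    v_l and a must be given in the same natural units (e.g. eV); the
    result is then in those units. Any inputs used below are placeholders,
    not the supernova parameters of the text.
    """
    return 2.0 * math.sqrt(abs(v_l) * a / math.pi)
```

The bound scales as the square root of both the matter potential and the acceleration, so pair creation is unsuppressed only for very light neutrinos unless the medium is both dense and strongly accelerated.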
Finally we have considered the possibility of the implementation of our results in various astrophysical media such as neutron and quark/hybrid stars as well as SNs. Acknowledgments {#acknowledgments .unnumbered} =============== I am thankful to the Tomsk State University Competitiveness Improvement Program and to RFBR (research project No. 15-02-00293) for partial support. [99]{} D. Alba, *et al.*, Int. J. Mod. Phys. **A 21**, 2781 (2006); H.W. Crater, *et al.*, Int. J. Geom. Meth. Mod. Phys. **11**, 1450086 (2014); D. Chowdhury, *et al.*, Annals Phys. **335**, 166 (2013). M. Dvornikov, JHEP **10** (2014) 053, arXiv:1408.2735; Mod. Phys. Lett. **A 30**, 1530017 (2015), arXiv:1503.01431. M. Dvornikov, JCAP **05** (2015) 037, arXiv:1503.00608. M. Dvornikov, JHEP **08** (2015) 151, arXiv:1507.01174. M. Dvornikov, in Proceedings of the 14th Lomonosov Conference on Elementary Particle Physics. Ed. by A.I. Studenikin, World Scientific, Singapore, 2010, pp. 183–185, arXiv:1001.2690; Azerbaij. Astron. J. **6**, 5 (2011), arXiv:1001.2516. S. Johnston, *et al.*, MNRAS **364**, 1397 (2005). A. Vilenkin, Phys. Rev. **D 20**, 1807 (1979). E.S. Phinney, *et al.*, Annu. Rev. Astron. Astrophys. **32**, 591 (1994). L.C.B. Crispino, *et al.*, Rev. Mod. Phys. **80**, 787 (2008). M. Dvornikov, *et al.*, Phys. Rev. **D 89**, 105028 (2014), arXiv:1312.2288.
--- abstract: 'We develop a framework for approximating collapsed Gibbs sampling in generative latent variable cluster models. Collapsed Gibbs is a popular MCMC method, which integrates out variables in the posterior to improve mixing. Unfortunately for many complex models, integrating out these variables is either analytically or computationally intractable. We efficiently approximate the necessary collapsed Gibbs integrals by borrowing ideas from expectation propagation. We present two case studies where exact collapsed Gibbs sampling is intractable: mixtures of Student-$t$’s and time series clustering. Our experiments on real and synthetic data show that our approximate sampler enables a runtime-accuracy tradeoff in sampling these types of models, providing results with competitive accuracy much more rapidly than the naive Gibbs samplers one would otherwise rely on in these scenarios.' author: - 'Christopher Aicher[^1] and Emily B. Fox[^2]' bibliography: - 'bib.bib' title: Approximate Collapsed Gibbs Clustering with Expectation Propagation --- Introduction ============ Background {#sec:background} ========== Approximate Collapsed Gibbs Sampling ==================================== \[sec:inference\] Case Studies ============ \[sec:case\_studies\] We consider two motivating examples for the use of our EP-based approximate collapsed Gibbs algorithm. The first is a mixture of Student-$t$ distributions, which can capture heavy-tailed emissions crucial in robust modeling (i.e., reducing sensitivity to outliers). The second example is a time series clustering model. Mixture of Multivariate Student-$t$ ----------------------------------- \[sec:student\] Time Series Clustering ---------------------- \[sec:tscluster\] Experiments =========== \[sec:experiments\] Conclusion ========== We presented a framework for constructing approximate collapsed Gibbs samplers for efficient inference in complex clustering models. 
The key idea is to approximately marginalize the nuisance variables by using EP to approximate the conditional distributions of the variables with an individual observation removed; by approximating this conditional, the required integral becomes tractable in a much wider range of scenarios than that of conjugate models. Our use of this EP approximation takes two steps from its traditional use: (1) we approximate a (nearly) full conditional rather than directly targeting the posterior, and (2) our targeted conditional changes as we sample the cluster assignment variables. For the latter, we provided a brief analysis and demonstrated the impact of the changing target, drawing parallels to previously proposed samplers that use stale sufficient statistics. We demonstrated how to apply our EP-based approximate sampling approach in two applications: mixtures of Student-$t$ distributions and time series clustering. Our experiments demonstrate that our EP approximate collapsed samplers mix more rapidly than naive Gibbs, while being computationally scalable and analytically tractable. We expect this method to provide the greatest benefit when approximately collapsing large parameter spaces. There are many interesting directions for future work, including deriving bounds on the asymptotic convergence of our approximate sampler [@pillai2014ergodicity; @dinh2017convergence], considering different likelihood approximation update rules such as *power EP* [@minka2004power], and extending our idea of approximately integrating out variables to other samplers. For the analysis, [@dehaene2015expectation] showed that EP with Gaussian approximations is exact in the large data limit; one could extend these results to consider the case of data being allocated amongst *multiple* clusters. 
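For concreteness, the closed-form marginalization that EP stands in for is available in the fully conjugate case. The sketch below is illustrative only: the names, priors, known variance, and the uniform prior over assignments are our simplifying assumptions, not the paper's model. It shows a collapsed Gibbs sweep for a one-dimensional Normal–Normal mixture in which the cluster mean is integrated out exactly; in the nonconjugate models considered above, `log_predictive` is the quantity one would replace with an EP-style Gaussian approximation.

```python
import math
import random

def log_predictive(x, n, s, m0=0.0, tau0=1.0, sigma=1.0):
    """log p(x | n points with sum s in the cluster), Normal-Normal model.

    Prior mu ~ N(m0, tau0^2), likelihood x ~ N(mu, sigma^2); the cluster
    mean is integrated out analytically. This is the integral EP would
    approximate when the model is nonconjugate.
    """
    lam = 1.0 / tau0 ** 2 + n / sigma ** 2          # posterior precision of mu
    mean = (m0 / tau0 ** 2 + s / sigma ** 2) / lam  # posterior mean of mu
    var = sigma ** 2 + 1.0 / lam                    # predictive variance
    return -0.5 * math.log(2 * math.pi * var) - (x - mean) ** 2 / (2 * var)

def collapsed_gibbs(data, k=2, sweeps=50, seed=0):
    """Collapsed Gibbs over cluster assignments with a uniform assignment prior."""
    rng = random.Random(seed)
    z = [rng.randrange(k) for _ in data]
    count, total = [0] * k, [0.0] * k
    for zi, xi in zip(z, data):
        count[zi] += 1
        total[zi] += xi
    for _ in range(sweeps):
        for i, xi in enumerate(data):
            count[z[i]] -= 1             # remove point i from its cluster
            total[z[i]] -= xi
            logp = [log_predictive(xi, count[j], total[j]) for j in range(k)]
            mx = max(logp)
            w = [math.exp(l - mx) for l in logp]
            r = rng.random() * sum(w)
            for j in range(k):           # sample the new assignment
                r -= w[j]
                if r <= 0:
                    z[i] = j
                    break
            count[z[i]] += 1
            total[z[i]] += xi
    return z
```

On well-separated synthetic data this sampler recovers the two groups in a few sweeps; the point of the approximation framework is to retain this update structure when the predictive integral has no closed form.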
Another interesting direction is to explore our EP-based approximate collapsing within the context of variational inference, possibly extending the set of models for which collapsed variational Bayes [@teh2007collapsed] is possible. Finally, there are many ways in which our algorithm could be made even more scalable through distributed, asynchronous implementations, such as in [@ahmed2012scalable]. Acknowledgements {#acknowledgements .unnumbered} ================ We would like to thank Nick Foti, You “Shirley” Ren and Alex Tank for helpful discussions. This paper is based upon work supported by the NSF CAREER Award IIS-1350133 This paper is an extension of our previous workshop paper [@aicher2016scalable]. **Appendix** Mixture of Multivariate Student-$t$ {#app:student} =================================== Time Series Clustering {#app:tscluster} ====================== EP Convergence {#app:ep_converge} ============== Synthetic Time Series Trace Plots {#app:traceplots} ================================= Seattle Housing Data {#app:housing_data} ==================== [^1]: Department of Statistics, University of Washington, `aicherc@uw.edu` [^2]: Department of Computer Science and Statistics, University of Washington, `ebfox@uw.edu`
--- abstract: 'The notion of Kolmogorov complexity (=the minimal length of a program that generates some object) is often useful as a kind of language that allows us to reformulate some notions and therefore provide new intuition. In this survey we provide (with minimal comments) many different examples where notions and statements that involve Kolmogorov complexity are compared with their counterparts not involving complexity.' author: - 'Alexander Shen[^1]' title: Kolmogorov complexity as a language --- Introduction ============ The notion of Kolmogorov complexity is often used as a tool; one may ask, however, whether it is indeed a powerful technique or just a way to present the argument in a more intuitive way (for people accustomed to this notion). The goal of this paper is to provide a series of examples that support both viewpoints. Each example shows some statements or notions that use complexity, and their counterparts that do not mention complexity. In some cases these two parts are direct translations of each other (and sometimes the equivalence can be proved), in other cases they just have the same underlying intuition but reflect it in different ways. Hoping that most readers already know what is Kolmogorov (algorithmic, description) complexity, we still provide a short reminder to fix notation and terminology. The complexity of a bit string $x$ is the minimal length of a program that produces $x$. (The programs are also bit strings; they have no input and may produce binary string as output.) If $D(p)$ is the output of program $p$, the complexity of string $x$ with respect to $D$ is defined as $K_D(x)=\inf\{ |p|\colon D(p)=x\}$. This definition depends on the choice of programming language (i.e., its interpreter $D$), but we can choose an optimal $D$ that makes $K_D$ minimal (up to $O(1)$ constant). Fixing some optimal $D$, we call $K_D(x)$ the *Kolmogorov complexity* of $x$ and denote it by $K(x)$. 
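Although $K(x)$ itself is uncomputable, any concrete compressor gives an effective upper bound on complexity, up to the $O(1)$ constant of fixing the decompressor as the language $D$. The short illustration below is our addition and uses `zlib` only as an arbitrary stand-in for a description language.

```python
import zlib

def complexity_upper_bound(s: bytes) -> int:
    """Length in bits of a zlib description of s: an upper bound on C(s),
    up to the O(1) cost of fixing zlib's decompressor as the language D."""
    return 8 * len(zlib.compress(s, 9))

regular = b"01" * 4096           # 8192 bytes with an obvious regularity
print(complexity_upper_bound(regular), "<<", 8 * len(regular))
```

A regular string compresses far below its literal length, certifying low complexity; the converse direction fails, since no algorithm can certify that a given string is incompressible.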
A technical clarification: there are several different versions of Kolmogorov complexity; if we require the programming language to be self-delimiting or prefix-free (no program is a prefix of another one), we get *prefix* complexity usually denoted by $K(x)$; without this requirement we get *plain* complexity usually denoted by $C(x)$; they are quite close to each other (the difference is $O(\log n)$ for $n$-bit strings and usually can be ignored). *Conditional* complexity of a string $x$ given condition $y$ is the minimal length of a program that gets $y$ as input and transforms it into $x$. Again we need to choose an optimal programming language (for programs with input) among all languages. In this way we get *plain conditional complexity* $C(x|y)$; there exists also a prefix version $K(x|y)$. The value of $C(x)$ can be interpreted as the “amount of information” in $x$, measured in bits. The value of $C(x|y)$ measures the amount of information that exists in $x$ but not in $y$, and the difference $I(y:x)=C(x)-C(x|y)$ measures the amount of information in $y$ about $x$. The latter quantity is almost commutative (the classical Kolmogorov–Levin theorem, one of the first results about Kolmogorov complexity) and can be interpreted as “mutual information” in $x$ and $y$. Foundations of probability theory ================================= Random sequences ---------------- One of the motivations for the notion of description complexity was to define randomness: an $n$-bit string is random if it does not have regularities that allow us to describe it much shorter, i.e., if its complexity is close to $n$. For finite strings we do not get a sharp dividing line between random and non-random objects; to get such a line we have to consider infinite sequences. The most popular definition of random infinite sequences was suggested by Per Martin-Löf.
In terms of complexity one can rephrase it as follows: a bit sequence $\omega_1\omega_2\ldots$ is random if $K(\omega_1\ldots\omega_n)\ge n-c$ for some $c$ and for all $n$. (This reformulation was suggested by Chaitin; the equivalence was proved by Schnorr and Levin. See more in [@livitanyi; @uppsala-notes].) Note that in classical probability theory there is no such thing as an individual random object. We say, for example, that a randomly generated bit sequence $\omega_1\omega_2\ldots$ satisfies the strong law of large numbers (has limit frequency $\lim (\omega_1+\ldots+\omega_n)/n$ equal to $1/2$) almost surely, but this is just a measure-theoretic statement saying that the set of all $\omega$ with limit frequency $1/2$ has measure $1$. This statement (SLLN) can be proved by using the Stirling formula for factorials or the Chernoff bound. Using the notion of Martin-Löf randomness, we can split this statement into two: (1) every Martin-Löf random sequence satisfies SLLN; and (2) the set of Martin-Löf random sequences has measure $1$. The second part is a general statement about Martin-Löf randomness (and is easy to prove). The statement (1) can be proved as follows: if the frequency of ones in a long prefix of $\omega$ deviates significantly from $1/2$, this fact can be used to compress this prefix, e.g., using arithmetic coding or some other technique (Lempel–Ziv compression can also be used), and this is impossible for a random sequence according to the definition. (In fact this argument is a reformulation of a martingale proof for SLLN.) Other classical results (e.g., the law of iterated logarithm, ergodic theorem) can also be presented in this way. Sampling random strings ----------------------- In the proceedings of this conference, S. Aaronson proves a result that can be considered as a connection between two meanings of the word “random” for finite strings. Assume that we bought some device which is marketed as a random number generator.
It has some physical source of randomness inside. The advertisement says that, being switched on, this device produces an $n$-bit random string. What could be the exact meaning of this sentence? There are two ways to understand it. First: the output distribution of this machine is close to the uniform distribution on $n$-bit strings. Second: with high probability the output string is random (=incompressible). The paper of Aaronson establishes some connections between these two interpretations (using some additional machinery). Counting arguments and existence proofs ======================================= A simple example ---------------- Kolmogorov complexity is often used to rephrase counting arguments. We give a simple example (more can be found in [@livitanyi]). Let us prove by counting that there exists an $n\times n$ bit matrix without $3\log n\times 3\log n$ uniform minors. (We obtain minors by selecting some rows and columns; the minor is *uniform* if all its elements are the same.) **Counting**: Let us give an upper bound for the number of matrices with uniform minors. There are at most $n^{3\log n}\cdot n^{3\log n}$ positions for a minor (we select $3\log n$ rows and $3\log n$ columns). For each position we have $2$ possibilities for the minor (zeros or ones) and $2^{n^2-(3\log n)^2}$ possibilities for the rest, so the total number of matrices with uniform minors does not exceed $$n^{3\log n} \cdot n^{3\log n} \cdot 2 \cdot 2^{n^2-9\log^2n}=2^{n^2-3\log^2 n +1}< 2^{n^2},$$ so there are matrices without uniform minors. **Kolmogorov complexity**: Let us prove that an incompressible matrix does not have uniform minors. In other words, let us show that a matrix with a uniform minor is compressible. Indeed, while listing the elements of such a matrix we do not need to specify all $9\log^2 n$ bits in the uniform minor individually.
Instead, it is enough to specify the numbers of the rows of the minor ($3\log n$ numbers; each contains $\log n$ bits) as well as the numbers of columns (this gives together $6\log^2 n$ bits), and to specify the type of the minor ($1$ bit), so we need only $6\log^2 n + 1 \ll 9 \log^2 n$ bits (plus the bits outside the minor, of course). One-tape Turing machines ------------------------ One of the first results of computational complexity theory was the proof that some simple operations (checking symmetry or copying) require quadratic time when performed by a one-tape Turing machine. This proof becomes very natural if presented in terms of Kolmogorov complexity. Assume that initially some string $x$ of length $n$ is written on the tape (followed by the end-marker and empty cells). The task is to copy $x$ just after the marker (Fig. \[tape-1.mps\]). $\raisebox{-1ex}{\hbox{\includegraphics[scale=0.7]{tape-1.mps}}} \qquad\to\qquad \raisebox{-1ex}{\hbox{\includegraphics[scale=0.7]{tape-2.mps}}}$ It is convenient to consider a special case of this task when the first half of $x$ is empty (Fig. \[tape-3.mps\]) and the second half $y$ is an incompressible string of length $n/2$. $\raisebox{-1ex}{\hbox{\includegraphics[scale=0.7]{tape-3.mps}}} \qquad\to\qquad \raisebox{-1ex}{\hbox{\includegraphics[scale=0.7]{tape-4.mps}}}$ To copy $y$, our machine has to move $n/2$ bits of information across the gap of length $n/2$. Since the amount of information carried by the head of a TM is fixed ($\log m$ bits for a TM with $m$ states), this requires $\Omega(n^2)$ steps (the hidden constant depends on the number of states). The last statement can be formalized as follows. Fix some borderline inside the gap and install a “customs office” that writes down the states of the TM when it crosses this border from left to right. This record (together with the office position) is enough to reconstruct $y$ (since the behavior of the TM on the right of the border is determined by this record).
So the record should be of $\Omega(n)$ size. This is true for each of $\Omega(n)$ possible positions of the border, and the sum of the record lengths is a lower bound for the number of steps. Forbidden patterns and everywhere complex sequences --------------------------------------------------- By definition the prefixes of a random sequence have complexity at least $n-O(1)$ where $n$ is the length. Can it be true for all substrings, not only prefixes? No: if it is the case, the sequence should at least be random, and a random sequence contains every combination of bits as a substring. However, Levin noted that the weaker condition $C(x)> \alpha |x| - O(1)$ can be satisfied for all substrings (for any fixed $\alpha<1$). Such a sequence can be called an *$\alpha$-everywhere complex* sequence. Levin suggested a proof of their existence using some properties of Kolmogorov complexity [@dls]. The combinatorial counterpart of Levin’s lemma is the following statement: let $\alpha<1$ be a real number and let $F$ be a set of strings that contains at most $2^{\alpha n}$ strings of length $n$. Then there exists a constant $c$ and a sequence $\omega$ that does not have substrings of length greater than $c$ that belong to $F$. It can be shown that this combinatorial statement is equivalent to the original formulation (so it can be formally proved using Kolmogorov complexity); however, there are other proofs, and the most natural one uses the Lovász local lemma. (See [@rumyantsev].) Gilbert–Varshamov bound and its generalization ---------------------------------------------- The main problem of coding theory is to find a code with maximal cardinality and given distance. This means that for a given $n$ and given $d$ we want to find some set of $n$-bit strings whose pairwise Hamming distances are at least $d$. The strings are called code words, and we want to have as many of them as possible.
There is a lower bound, called the Gilbert–Varshamov bound, that guarantees the existence of a large code. The condition for Hamming distances guarantees that few (less than $d/2$) bit errors during the transmission do not prevent us from reconstructing the original code word. This is true only for errors that change some bits; if, say, some bit is deleted and some other bit is inserted in a different place, this kind of error may be irreparable. It turns out that we can replace Hamming distance by information distance and get almost the same bound for the number of codewords. Consider some family of $n$-bit strings $\{x_1,x_2,\ldots\}$. We say that this family is *$d$-separated*, if $C(x_i|x_j)\ge d$ for $i\ne j$. This means that simple operations of any kind (not only bit changes) cannot transform $x_j$ to $x_i$. Let us show that for every $d$ there exists a $d$-separated family of size $\Omega(2^{n-d})$. Indeed, let us choose randomly strings $x_1,\ldots,x_N$ of length $n$. (The value of $N$ will be chosen later.) For given $i$ and $j$ the probability of the event $C(x_i|x_j)<d$ is less than $2^d/2^n$. For given $i$ the probability that $x_i$ is not separated from *some* $x_j$ (in any direction) does not exceed $2N\cdot 2^d/2^n$, so the expected number of $x_i$ that are “bad” in this sense is less than $2N^2\cdot 2^d/2^n$. Taking a suitable $N=\Omega(2^{n-d})$ (say, $N=2^{n-d}/8$), we can make this expectation less than $N/2$. Then we can take the values of $x_1,\ldots,x_N$ that give less than $N/2$ bad $x_i$ and delete all the bad $x_i$, thus decreasing $N$ by at most a factor of two. The decreased $N$ is still $\Omega(2^{n-d})$. It is easy to see that the Gilbert–Varshamov bound (up to some constant) is a corollary of this simple argument. (See [@chinese] for more applications of this argument.)
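The deletion argument above is easy to simulate if we replace the (uncomputable) conditional complexity by Hamming distance, as in the classical Gilbert–Varshamov setting. A toy Python sketch, with parameters chosen only for illustration (here "bad" means being within distance less than $d$ of some other sampled word, and all bad words are deleted):

```python
import itertools
import random

def hamming(x: int, y: int) -> int:
    # Hamming distance between two n-bit words given as integers.
    return bin(x ^ y).count("1")

def random_code(n: int, d: int, N: int, seed: int = 0) -> list[int]:
    # Sample N random n-bit words, then delete every "bad" word, i.e.,
    # one that is closer than d to some other sampled word.  This mirrors
    # the probabilistic argument in the text, with Hamming distance as a
    # computable stand-in for conditional complexity.
    rng = random.Random(seed)
    words = [rng.getrandbits(n) for _ in range(N)]
    bad = set()
    for x, y in itertools.combinations(words, 2):
        if hamming(x, y) < d:
            bad.add(x)
            bad.add(y)
    return [w for w in set(words) if w not in bad]

# n = 16, d = 4: any two surviving words differ in at least d positions.
code = random_code(16, 4, 23)
print(len(code))
```

By construction every close pair is deleted entirely, so the surviving family is guaranteed to be $d$-separated in the Hamming sense; typically about half of the sampled words survive, matching the expectation bound in the text.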
Complexity and combinatorial statements ======================================= Inequalities for Kolmogorov complexity and their combinatorial meaning {#kolmogorov-levin} ------------------------------------------------- We have already mentioned the Kolmogorov–Levin theorem about the symmetry of algorithmic information. In fact, they proved this symmetry as a corollary of the following result: $ C(x,y)=C(x) + C(y|x) + O(\log n). $ Here $x$ and $y$ are strings of length at most $n$ and $C(x,y)$ is the complexity of some computable encoding of the pair $(x,y)$. The simple direction of this inequality, $C(x,y)\le C(x)+C(y|x)+O(\log n)$, has an equally simple combinatorial meaning. Let $A$ be a finite set of pairs $(x,y)$. Consider the first projection of $A$, i.e., the set $A_X=\{x\colon \exists y\, (x,y)\in A\}$. For each $x$ in $A_X$ we also consider the $x$th section of $A$, i.e., the set $A_x=\{y\colon (x,y)\in A\}$. Now the combinatorial counterpart for the inequality can be formulated as follows: if $\#A_X \le 2^k$ and $\#A_x\le 2^l$ for every $x$, then $\#A \le 2^{k+l}$. (To make the correspondence more clear, we can reformulate the inequality as follows: if $C(x)\le k$ and $C(y|x)\le l$, then $C(x,y)\le k+l+O(\log n)$.) The more difficult direction, $C(x,y)\ge C(x)+C(y|x)-O(\log n)$, also has a combinatorial counterpart, though more complicated. Let us rewrite this inequality as follows: for all integers $k$ and $l$, if $C(x,y)\le k+l$, then either $C(x)\le k+O(\log n)$ or $C(y|x)\le l+O(\log n)$. It is easy to see that this statement is equivalent to the original one. Now we can easily guess the combinatorial counterpart: if $A$ is a set of pairs that has at most $2^{k+l}$ elements, then one can cover it by two sets $A'$ and $A''$ such that $\#A'_X\le 2^k$ and $\#A''_x\le 2^l$ for every $x$. The Kolmogorov–Levin theorem implies also the inequality $2C(x,y,z)\le C(x,y)+C(y,z)+C(x,z)$.
(Here and below we omit $O(\log n)$ terms, where $n$ is an upper bound on the lengths of all strings involved.) Indeed, $C(x,y,z)=C(x,y)+C(z|x,y)=C(y,z)+C(x|y,z)$. So the inequality can be rewritten as $C(z|x,y)+C(x|y,z)\le C(x,z)$. It remains to note that $C(x,z)=C(x)+C(z|x)$, that $C(z|x,y)\le C(z|x)$ (more information in the condition makes complexity smaller), and that $C(x|y,z)\le C(x)$ (a condition can only help). The combinatorial counterpart (and the consequence of the inequality about complexities) says that for $A\subset X\times Y\times Z$ we have $ (\# A)^2 \le \# A_{X,Y} \cdot \#A_{X,Z} \cdot \#A_{Y,Z}, $ where $A_{X,Y}$ is the projection of $A$ onto $X\times Y$, i.e., the set of all pairs $(x,y)$ such that $(x,y,z)\in A$ for some $z\in Z$, etc. In geometric terms: if $A$ is a 3-dimensional body, then the square of its volume does not exceed the product of the areas of its three projections (onto three orthogonal planes). Common information and graph minors ----------------------------------- We have defined the mutual information in two strings $a,b$ as $I(a:b)=C(b)-C(b|a)$; it is equal (with logarithmic precision) to $C(a)+C(b)-C(a,b)$. The easiest way to construct some strings $a$ and $b$ that have a significant amount of mutual information is to take overlapping substrings of a random (incompressible) string; it is easy to see that the mutual information is close to the length (and complexity) of their overlap. We see that in this case the mutual information is not an abstract quantity, but is materialized as a string (the common part of $a$ and $b$). The natural question arises: is it always the case? i.e., is it possible to find for every pair $a,b$ some string $x$ such that $C(x|a)\approx 0$, $C(x|b)\approx 0$ and $C(x)\approx I(a:b)$?
It turns out that it is not always the case (as found by Andrej Muchnik [@common] in the Kolmogorov complexity setting and earlier by Gács and Körner [@gacs-korner] in the Shannon information setting which we do not describe here — it is not that simple). The combinatorial counterpart of this question: consider a bipartite graph with (approximately) $2^\alpha$ vertices on the left and $2^\beta$ vertices on the right; assume also that this graph is almost uniform (all vertices in each part have approximately the same degree). Let $2^\gamma$ be the total number of edges. A typical edge connects some vertex $a$ on the left and some vertex $b$ on the right, and corresponds to a pair of complexity $\gamma$ whose first component $a$ has complexity $\alpha$ and second component $b$ has complexity $\beta$, so the “mutual information” in this edge is $\delta=\alpha+\beta-\gamma$. The question whether this information can be extracted corresponds to the following combinatorial question: can all (or most) edges of the graph be covered by (approximately) $2^\delta$ minors of size $2^{\alpha-\delta}\times 2^{\beta-\delta}$? (Such a minor connects some $2^{\alpha-\delta}$ vertices on the left with $2^{\beta-\delta}$ vertices on the right.) For example, consider some finite field $F$ of size $2^n$ and a plane over this field (i.e., two-dimensional vector space). Consider a bipartite graph whose left vertices are points on this plane, right vertices are lines, and edges correspond to incident pairs. We have about $2^{2n}$ vertices in each part, and about $2^{3n}$ edges. This graph does not have $2\times 2$ minors (two distinct points determine the line through them uniquely). Using this property, one can show that an $M\times M$ minor can cover only $O(M\sqrt{M})$ edges. (Assume that $M$ vertices on the left side of such a minor have degrees $d_1,\ldots,d_M$ in the minor.
Then for the $i$th vertex on the left there are $\Omega(d_i^2)$ pairs of neighboring vertices on the right, and all these pairs are different, so $\sum d_i^2 \le O(M^2)$; the Cauchy–Schwarz inequality then implies that $\sum d_i \le O(M\sqrt{M})$, and this sum is the number of edges in the minor). Translating this argument into the language of complexity, we get the following statement: for a random pair $(a,b)$ of incident point and line, the complexity of $a$ and $b$ is about $2n$, the complexity of the pair is about $3n$, the mutual information is about $n$, but it is not extractable: there is no string $x$ of complexity $n$ such that $C(x|a)$ and $C(x|b)$ are close to zero. In fact, one can prove that for such a pair $(a,b)$ we have $C(x)\le 2C(x|a)+2C(x|b)+O(\log n)$ for all $x$. Almost uniform sets ------------------- Here is an example of a Kolmogorov complexity argument that is difficult to translate to combinatorial language (though one may find a combinatorial proof based on different ideas). Consider a set $A$ of pairs. Let us compare the maximal size of its sections $A_x$ and the average size (that is equal to $\#A/\#A_X$; we use the same notation as in section \[kolmogorov-levin\]); the maximal/average ratio will be called the $X$-nonuniformity of $A$. We can define $Y$-nonuniformity in the same way. Claim: *every set $A$ of pairs having cardinality $N$ can be represented as a union of $\mathrm{polylog}(N)$ sets whose $X$- and $Y$-nonuniformity is bounded by $\mathrm{polylog}(N)$*. Idea of the proof: consider for each pair $(x,y)\in A$ a quintuple of integers $$p(x,y) = \langle C(x), C(y), C(x|y), C(y|x), C(x,y) \rangle$$ where all complexities are taken with the additional condition $A$. Each element $(x_0,y_0)$ in $A$ is covered by the set $U(x_0,y_0)$ that consists of all pairs $(x,y)$ for which $p(x,y)\le p(x_0,y_0)$ (coordinate-wise). The number of elements in $U(x_0,y_0)$ is equal to $2^{C(x_0,y_0)}$ up to polynomial in $N$ factors.
Indeed, it cannot be greater because $C(x,y)\le C(x_0,y_0)$ for all pairs $(x,y)\in U(x_0,y_0)$. On the other hand, the pair $(x_0,y_0)$ can be described by its ordinal number in the enumeration of all elements of $U(x_0,y_0)$. To construct such an enumeration we need to know only the set $A$ and $p(x_0,y_0)$. The set $A$ is given as a condition, and $p(x_0,y_0)$ has complexity $O(\log N)$. So if the size of $U(x_0,y_0)$ were much less than $2^{C(x_0,y_0)}$, we would get a contradiction. A similar argument shows that the projection $U(x_0,y_0)_X$ has about $2^{C(x_0)}$ elements. Therefore, the average section size is about $2^{C(x_0,y_0)-C(x_0)}$; and the maximal section size does not exceed $2^{C(y_0|x_0)}$ since $C(y|x)\le C(y_0|x_0)$ for all $(x,y)\in U(x_0,y_0)$. It remains to note that $C(y_0|x_0)\approx C(x_0,y_0)-C(x_0)$ according to the Kolmogorov–Levin theorem, and that there are only polynomially many different sets $U(x,y)$. A similar argument can be applied to sets of triples, quadruples, etc. For a combinatorial proof of this result (in a stronger version) see [@alon]. Shannon information theory ========================== Shannon coding theorem ---------------------- A random variable $\xi$ that has $k$ values with probabilities $p_1,\ldots,p_k$ has *Shannon entropy* $H(\xi)=\sum _i p_i(-\log p_i)$. The Shannon coding theorem (in its simplest version) says that if we want to transmit a sequence of $N$ independent values of $\xi$ with small error probability, messages of $NH(\xi)+o(N)$ bits are enough, while messages of $NH(\xi)-o(N)$ bits will lead to error probability close to $1$. Kolmogorov complexity reformulation: *with probability close to $1$ the sequence of $N$ independent values of $\xi$ has complexity $NH(\xi)+o(N)$*. Complexity, entropy and group size ---------------------------------- Complexity and entropy are two ways of measuring the amount of information (cf. the title of Kolmogorov’s paper [@kolm65] where he introduced the notion of complexity).
So it is not surprising that there are many parallel results. There are even some “meta-theorems” that relate both notions. A. Romashchenko [@romash-ineq] has shown that the linear inequalities that relate complexities of $2^n-1$ tuples made of $n$ strings $a_1,\ldots,a_n$ are the same as for Shannon entropies of tuples made of $n$ random variables. In fact, this meta-theorem can be extended to provide combinatorial equivalents for complexity inequalities [@combin]. Moreover, in [@group-ineq] it is shown that the same class of inequalities appears when we consider cardinalities of subgroups of some finite group and their intersections! Muchnik’s theorem ----------------- Let $a$ and $b$ be two strings. Imagine that somebody knows $b$ and wants to know $a$. Then it is enough to send $C(a|b)$ bits of information: the shortest program that transforms $b$ into $a$. However, if we want the message to be not only short, but also simple relative to $a$, the shortest program may not work. Andrej Muchnik [@muchnik-conditional] has shown that it is still possible: *for every two strings $a$ and $b$ of length at most $n$ there exists a string $x$ such that $C(x)\le C(a|b)+O(\log n)$, $C(a|x,b)=O(\log n)$, and $C(x|a)=O(\log n)$*. This result is probably one of the most fundamental discoveries in Kolmogorov complexity theory of the last decade. It corresponds to the Slepian–Wolf theorem in Shannon information theory; the latter says that for two dependent random variables $\alpha$ and $\beta$ and $N$ independent trials of this pair one can (with high probability) reconstruct $\alpha_1,\ldots,\alpha_N$ from $\beta_1,\ldots,\beta_N$ and some message that is a function of $\alpha_1,\ldots,\alpha_N$ and has bit length close to $NH(\alpha|\beta)$. However, Muchnik’s theorem and the Slepian–Wolf theorem do not seem to be corollaries of each other (in either direction). Romashchenko’s theorem ---------------------- Let $\alpha,\beta,\gamma$ be three random variables.
The mutual information in $\alpha$ and $\beta$ when $\gamma$ is known is defined as $ I(\alpha:\beta|\gamma)=H(\alpha,\gamma)+H(\beta,\gamma)-H(\alpha,\beta,\gamma)-H(\gamma). $ It is equal to zero if and only if $\alpha$ and $\beta$ are conditionally independent for every fixed value of $\gamma$. One can show the following: *If $I(\alpha:\beta|\gamma)=I(\alpha:\gamma|\beta)=I(\beta:\gamma|\alpha)=0$, then one can extract all the common information from $\alpha,\beta,\gamma$ in the following sense: there is a random variable $\chi$ such that $H(\chi|\alpha)=H(\chi|\beta)=H(\chi|\gamma)=0$ and $\alpha,\beta,\gamma$ are independent random variables when $\chi$ is known*. (The latter statement can be written as $I(\alpha:\beta\gamma|\chi)=I(\beta:\alpha\gamma|\chi)=I(\gamma:\alpha\beta|\chi)=0$.) In algebraic terms: if in a $3$-dimensional matrix with non-negative elements all its $2$-dimensional sections have rank $1$, then (after a suitable permutation for each coordinate) it is made of blocks that have tensor rank $1$. (Each block corresponds to some value of $\chi$.) Romashchenko proved [@romash-triple] a similar-looking result for Kolmogorov complexity: if $a,b,c$ are three strings such that $I(a:b|c)$, $I(b:c|a)$ and $I(a:c|b)$ are close to zero, then there exists $x$ such that $C(x|a)$, $C(x|b)$, $C(x|c)$ are close to zero and strings $a,b,c$ are independent when $x$ is known, i.e., $I(a:bc|x)$, $I(b:ac|x)$ and $I(c:ab|x)$ are close to zero. This theorem looks like a direct translation of the information theory result above. However, neither of these results looks like a corollary of the other one, and Romashchenko’s proof is a very ingenious and nice argument that has nothing to do with the rather simple proof of the information-theoretic version.
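The parallel between entropy and complexity in this section is easy to observe numerically: by the Shannon coding theorem reformulation above, a typical sequence of $N$ independent values of $\xi$ has complexity about $NH(\xi)$, and the ideal (arithmetic-coding) codelength $\sum_i -\log_2 p(x_i)$ of a sampled sequence concentrates around $NH(\xi)$. A small Python sketch, with an arbitrarily chosen distribution used only for illustration:

```python
import math
import random

def entropy(p):
    # Shannon entropy H = sum_i p_i (-log2 p_i), in bits.
    return sum(-q * math.log2(q) for q in p if q > 0)

def ideal_codelength(sample, p):
    # Length in bits of an ideal (arithmetic) code for the sample:
    # encoding symbol i costs -log2 p_i bits.
    return sum(-math.log2(p[i]) for i in sample)

p = [0.5, 0.25, 0.25]   # illustrative distribution; H = 1.5 bits exactly
N = 50_000
random.seed(0)
sample = random.choices(range(3), weights=p, k=N)

print(entropy(p))                       # 1.5
print(ideal_codelength(sample, p) / N)  # close to 1.5 bits per symbol
```

By the law of large numbers the per-symbol codelength deviates from $H(\xi)$ only by $O(1/\sqrt{N})$ with high probability, which is the "with probability close to $1$" in the complexity reformulation.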
Computability (recursion) theory ================================ Simple sets ----------- Long ago Post defined a *simple* set as a (recursively) enumerable set whose complement is infinite but does not contain an infinite enumerable set (see, e.g., [@rogers], Sect. 8.1). His example of such a set is constructed as follows: let $W_i$ be the $i$th enumerable set; wait until a number $j>2i$ appears in $W_i$ and include the first such number $j$ into the enumeration. In this way we enumerate some set $S$ with infinite complement ($S$ may contain at most $n$ integers less than $2n$); on the other hand, $S$ intersects any infinite enumerable set $W_i$, because $W_i$ (being infinite) contains some numbers greater than $2i$. It is interesting to note that one can construct a natural example of a simple set using Kolmogorov complexity. Let us say that a string $x$ is simple if $C(x)<|x|/2$. The set $S$ of simple strings is enumerable (a short program can be discovered if it exists). The complement of $S$ (the set of “complex” strings) is infinite since most $n$-bit strings are incompressible and therefore non-simple. Finally, if there were an infinite enumerable set $x_1,x_2,\ldots$ of non-simple strings, the algorithm “find the first $x_i$ such that $|x_i|>2n$” would describe some string of complexity at least $n$ using only $\log n+O(1)$ bits (needed for the binary representation of $n$). A similar argument, imitating Berry’s paradox, was used by Chaitin to provide a proof of the Gödel incompleteness theorem (see Sect. \[godel\]). Note also a (somewhat mystical) coincidence: the word “simple” appears in two completely different meanings, and the set of all simple strings turns out to be simple. Lower semicomputable random reals --------------------------------- A real number $\alpha$ is *computable* if there is an algorithm that computes rational approximations to $\alpha$ with any given precision. An old example of E.
Specker shows that a computable series of non-negative rationals can have a finite sum that is not computable. (Let $\{n_1,n_2,\ldots\}$ be a computable enumeration without repetitions of an enumerable undecidable set $K$; then $\sum_i 2^{-n_i}$ is such a series.) Sums of computable series with non-negative rational terms are called *lower semicomputable* reals. The reason why the limit of a computable series is not computable is that the convergence is not effective. One can ask whether one can somehow classify how ineffective the convergence is. There are several approaches. R. Solovay introduced a reduction on lower semicomputable reals: $\alpha\preceq \beta$ if $\alpha+\gamma=c\beta$ for some lower semicomputable $\gamma$ and some rational $c>0$. Informally, this means that $\alpha$ converges “better” than $\beta$ (up to a constant $c$). This partial quasi-ordering has maximal elements called *Solovay complete* reals. It turned out (see [@calude; @kucera-slaman]) that Solovay complete reals can be characterized as lower semicomputable reals whose binary expansion is a random sequence. Another characterization: we may consider the *modulus of convergence*, i.e., a function that for given $n$ gives the first place where the tail of the series becomes less than $2^{-n}$. It turns out that a computable series has a random sum if and only if the modulus of convergence grows faster than $BP(n-O(1))$ where $BP(k)$ is the maximal computation time for all terminating $k$-bit self-delimiting programs. Other examples ============== Constructive proof of the Lovász local lemma ---------------------------------------- The Lovász local lemma considers a big (unbounded) number of events that have small probability and are mostly independent. It guarantees that sometimes (with positive probability, maybe very small) none of these events happens.
We do not give the exact statement but show a typical application: *any CNF made of $k$-literal clauses where each clause has $t=o(2^k)$ neighbors is satisfiable*. (Neighbors are clauses that have a common variable.) The original proof by Lovász (a simple induction proving some lower bound for probabilities) is not constructive in the sense that it does not provide any algorithm to find the satisfying assignment (better than exhaustive search). However, recently Moser discovered that the naive algorithm “resample clauses that are false until you are done” converges in polynomial time with high probability, and this can be explained using Kolmogorov complexity. Consider the following procedure (Fig. \[resample\]; by *resampling* a clause we mean that all variables in this clause get fresh random values). It is easy to see that this procedure satisfies the specification if it terminates (induction).

{Clause C is false}
procedure Fix(C: clause) =
    resample(C);
    for all neighbor clauses C' of C:
        if C' is false then Fix(C')
{Clause C is true; all the clauses that were true before the call Fix(C) remain true}

The pre- and post-conditions guarantee that we can find a satisfying assignment applying this procedure to all the clauses (assuming the termination). It remains to show that with high probability this procedure terminates in polynomial time. Imagine that $\texttt{Fix}(X)$ was called for some clause $X$ and this call does not terminate for a long time. We want to get a contradiction. Crucial observation: *at any moment of the computation the sequence of recursive calls made during the execution* (i.e., the ordered list of clauses $C$ for which $\texttt{Fix}(C)$ was called) *together with the current values of all variables completely determines the random bits used for resampling*. (This will allow us to compress the sequence of random bits used for resampling and get a contradiction.)
Indeed, we can roll back the computation; note that for every clause in the CNF there is exactly one combination of its variables that makes it false, and our procedure is called only if the clause is false, so we know the values before each resampling. Now we estimate the number of bits needed to describe the sequence of recursive calls. These calls form a tree. Consider a path that visits all the vertices of this tree (=calls) in the usual way, following the execution process (going from a calling instance to a called one and returning back). Note that the called procedure corresponds to one of the $t$ neighbors of the calling one, so each step down in the tree can be described by $1+\log t$ bits (we need to say that it is a step down and specify the neighbor). Each step up needs only $1$ bit (since we return to a known instance). The number of steps up does not exceed the number of steps down, so we need in total $2+\log t$ bits per call. Since $t=o(2^k)$ by assumption, we can describe the sequence of calls using $k-O(1)$ bits per call, which is less than the number of random bits ($k$ per call), so the sequence of calls cannot be long. Berry, Gödel, Chaitin, Raz {#godel} -------------------------- Chaitin found (and popularized) a proof of the Gödel incompleteness theorem based on the Berry paradox (“the smallest integer not definable by eight words”). He showed that statements of the form “$C(x)>k$” where $x$ is a string and $k$ is a number, can be proved (in some formal theory, e.g., Peano arithmetic) only for bounded values of $k$. Indeed, if it were not the case, we could try all proofs and for every number $n$ effectively find some string $x_n$ which has guaranteed complexity above $n$. Informally, $x_n$ is some string provably not definable by $n$ bits. But it can be defined by $\log n+O(1)$ bits ($\log n$ bits are needed to describe $n$ and $O(1)$ bits describe the algorithm transforming $n$ to $x_n$), so we get a contradiction for large enough $n$.
(The difference with the Berry paradox is that $x_n$ is not the minimal string, just the first one in the proofs enumeration ordering.) Recently Kritchman and Raz found that another paradox, "Surprise Examination" (you are told that there will be a surprise examination next week; you realize that it cannot be on Saturday, since then you would know this by Friday evening; so the last possible day is Friday, and if it were on Friday, you would know this by Thursday evening, etc.), can be transformed into a proof of the second Gödel incompleteness theorem; the role of the day of the examination is played by the number of incompressible strings of length $n$. (The argument starts as follows: we can prove that such a string exists; if there were only one such string, it could be found by waiting until all other strings turn out to be compressible, so we know there are at least two, etc. In fact one needs a more delicate argument that uses some properties of Peano arithmetic, the same properties as in Gödel's proof.) 13th Hilbert problem -------------------- The thirteenth Hilbert problem asked whether some specific function (that gives a root of a degree $7$ polynomial as a function of its coefficients) can be expressed as a composition of continuous functions of one and two real variables. More than fifty years later Kolmogorov and Arnold showed that the answer to this question is positive: any continuous function of several real arguments can be represented as a composition of continuous functions of one variable and addition. (For some other classes of functions this is not the case.) Recently this question was discussed in the framework of circuit complexity [@13H]. It also has a natural counterpart in Kolmogorov complexity theory. Imagine that three strings $a,b,c$ are written on the blackboard.
We are allowed to write any string that is simple (has small conditional complexity) relative to any *two* strings on the board, and can do this several times (but not too many; otherwise we could get any string by changing one bit at a time). Which strings can appear if we follow this rule? A necessary condition: strings that appear are simple relative to $(a,b,c)$. It turns out, however, that this is not enough: some strings are simple relative to $(a,b,c)$ but cannot be obtained in this way. This is not difficult to prove (see [@decomposition] for the proof and references); what would be really interesting is to find some specific example, i.e., to give an explicit function of three string arguments such that $f(a,b,c)$ cannot be obtained in the way described starting from random $a$, $b$, and $c$. Secret sharing -------------- Imagine some secret (e.g., a password) that should be shared among several people in such a way that some (large enough) groups are able to reconstruct the secret while other groups have no information about it. For example, for a secret $s$ that is an element of a finite field $F$, we can choose a random element $a$ of the same field and make three shares $a$, $a+s$ and $a+2s$, giving them to three participants $X,Y,Z$ respectively. Then each of the three participants alone has no information about the secret $s$, since each share is a uniformly distributed random variable. On the other hand, any two participants together can reconstruct the secret. One can say that this secret sharing scheme implements the access structure $\{\{X,Y\},\{X,Z\},\{Y,Z\}\}$ (the access structure lists the minimal sets of participants that are authorized to know the secret).
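A minimal Python sketch of the three-share scheme above, assuming a prime field $F = GF(p)$ with odd $p$ (the particular $p$, the indexing of shares as $a+ks$, and the function names are our own choices):

```python
import random

p = 2**31 - 1  # a Mersenne prime; the field F = GF(p)

def deal(s):
    # shares a, a + s, a + 2s for participants X, Y, Z (k = 0, 1, 2);
    # each share alone is uniform in F, hence carries no information about s
    a = random.randrange(p)
    return [(a + k * s) % p for k in range(3)]

def reconstruct(i, xi, j, xj):
    # participants i != j recover s = (x_j - x_i) / (j - i) in GF(p);
    # division by (j - i) requires the field characteristic to exceed 2
    return (xj - xi) * pow(j - i, -1, p) % p
```

Any pair of shares determines $s$; for $X$ and $Z$ the factor $1/2$ is where the field (rather than a plain group) is used.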
Formally, a secret sharing scheme can be defined as a tuple of random variables (one for the secret and one for each participant); the scheme implements some access structure if all groups of participants listed in this structure can uniquely reconstruct the value of the secret, and for all other groups (that do not contain any of the groups listed) their information is independent of the secret. It is easy to see that any access structure can be implemented; the interesting (and open) question is to find how big the shares need to be (for a given secret size and a given access structure). We gave the definition of secret sharing in the probability theory framework; however, one can also consider it in the Kolmogorov complexity framework. For example, take a binary string $s$ as a secret. We may look for three strings $x,y,z$ such that $C(s|x,y)$, $C(s|y,z)$, and $C(s|x,z)$ are very small (compared to the complexity of the secret itself), as well as the values of $I(x:s)$, $I(y:s)$, and $I(z:s)$. The first requirement means that any two participants together know (almost) everything about the secret; the second requirement means that each participant alone has (almost) no information about it. The interesting (and not yet well studied) question is whether these two frameworks are equivalent in some sense (the same access structure can be implemented with the same efficiency); one may also ask whether, in the Kolmogorov setting, the possibility of sharing a secret $s$ with a given access structure and share sizes depends only on the complexity of $s$. Some partial results were obtained recently by T. Kaced and A. Romashchenko (private communication). The use of Kolmogorov complexity in cryptography is discussed in [@antunes]. Quasi-cryptography ------------------ The notion of Kolmogorov complexity can be used to pose some questions that resemble cryptography (though probably are hardly practical). Imagine that some intelligence agency wants to send a message $b$ to its agent.
They know that the agent has some information $a$. So their message $f$ should be enough to reconstruct $b$ from $a$, i.e., $C(b|a,f)$ should be small. On the other hand, the message $f$ without $a$ should have minimal information about $b$, so the complexity $C(b|f)$ should be maximal. It is easy to see that $C(b|f)$ cannot exceed $\min(C(a),C(b))$ because, given $f$, a description of either $a$ or of $b$ itself suffices to reconstruct $b$. Andrej Muchnik proved that this bound is indeed tight, i.e., there is some message $f$ that reaches it (with logarithmic precision). Moreover, let us assume that an eavesdropper knows some $c$. Then we want to make $C(b|c,f)$ maximal. Muchnik showed that in this case the maximal possible value (for $f$ such that $C(b|a,f)\approx 0$) is $\min(C(a|c),C(b|c))$. He also proved a more difficult result that bounds the size of $f$, at least in the case when $a$ is complex enough. The formal statement of the latter result: *There exists some constant $d$ such that for all strings $a,b,c$ of length at most $N$ such that $C(a|c)\ge C(b|c)+C(b|a)+d\log N$, there exists a string $f$ of length at most $C(b|a)+d\log N$ such that $C(b|a,f)\le d\log N$ and $C(b|c,f)\ge C(b|c)-d\log N$*. [99]{} Noga Alon, Ilan Newman, Alexander Shen, Gabor Tardos, and Nikolai K. Vereshchagin. Partitioning multi-dimensional sets in a small number of "uniform" parts. *European Journal of Combinatorics*, 28(1):134–144, 2007. Luís Antunes, Sophie Laplante, Alexandre Pinto, and Liliana Salvador. Cryptographic Security of Individual Instances. *Information Theoretic Security*, LNCS 4883, p. 195–210, 2009. Cristian S. Calude, Peter H. Hertling, Bakhadyr Khoussainov, and Yongge Wang. Recursively Enumerable Reals and Chaitin $\mathrm\Omega$ Numbers. *Theoretical Computer Science*, 255:125–149, 2001. Terrence H. Chan and Raymond W. Yeung. On a relation between information inequalities and group theory. *IEEE Transactions on Information Theory*, 48(7):1992–1995, 2002. Alexey Chernov, Andrej A. Muchnik, Andrei E.
Romashchenko, Alexander Shen, and Nikolai K. Vereshchagin. Upper semi-lattice of binary strings with the relation "$x$ is simple conditional to $y$". *Theoretical Computer Science*, 271(1–2):69–95, 2002. Bruno Durand, Leonid A. Levin, and Alexander Shen. Complex Tilings. *Journal of Symbolic Logic*, 73(2):593–613, 2007. Peter Gács and János Körner. Common Information is Far Less Than Mutual Information. *Problems of Control and Information Theory*, 2(2):119–162, 1973. Daniel Hammer, Andrei E. Romashchenko, Alexander Shen, and Nikolai Vereshchagin. Inequalities for Shannon Entropy and Kolmogorov Complexity. *Journal of Computer and System Sciences*, 60:442–464, 2000. Daniel Hammer and Alexander Shen. A Strange Application of Kolmogorov Complexity. *Theory of Computing Systems*, 31(1):1–4, 1998. Kristoffer Arnsfelt Hansen, Oded Lachish, and Peter Bro Miltersen. Hilbert's Thirteenth Problem and Circuit Complexity. *ISAAC 2009*, Lecture Notes in Computer Science, 5878, Springer, 2009, 153–162. Andrei N. Kolmogorov. Three approaches to the definition of the concept "quantity of information". *Problemy Peredachi Informatsii* (Russian), 1(1):3–11, 1965. Shira Kritchman and Ran Raz. The Surprise Examination Paradox and the Second Incompleteness Theorem. *Notices of the AMS*, 57(11):1454–1458, 2010. Antonín Kučera and Theodore A. Slaman. Randomness and recursive enumerability. *SIAM Journal on Computing*, 31(1):199–211, 2001. Ming Li and Paul Vitányi. *An Introduction to Kolmogorov Complexity and Its Applications*, 3rd ed., Springer-Verlag, 2008. Andrej Muchnik. Conditional complexity and codes. *Theoretical Computer Science*, 271(1–2):97–109, 2002. Hartley Rogers, Jr. *Theory of Recursive Functions and Effective Computability*, McGraw-Hill Book Company, 1967. Andrei E. Romashchenko. A Criterion of Extractability of Mutual Information for a Triple of Strings. *Problems of Information Transmission*, 39(1):148–157, 2003. Andrei E. Romashchenko, Alexander Shen, and Nikolai K.
Vereshchagin. Combinatorial Interpretation of Kolmogorov Complexity. *Theoretical Computer Science*, 271(1–2):111–123, 2002. Andrey Yu. Rumyantsev and Maxim A. Ushakov. Forbidden substrings, Kolmogorov complexity and almost periodic sequences. STACS 2006, Marseille, France. *Lecture Notes in Computer Science*, v. 3884, p. 396–407. See also `arxiv.org/abs/1009.4455`. Alexander Shen. Algorithmic Information Theory and Kolmogorov Complexity. Uppsala University Technical Report TR2000-034. Available as\ `/research/publications/reports/2000-034/2000-034-nc.ps.gz`\ at `www.it.uu.se`. Alexander Shen. Decomposition complexity. *Journées Automates Cellulaires 2010 (Turku)*, p. 203–213. Available as `hal-00541921` at `archives-ouvertes.fr`. Yen-Wu Ti, Ching-Lueh Chang, Yuh-Dang Lyuu, and Alexander Shen. Sets of $k$-independent strings. *International Journal of Foundations of Computer Science*, 21(3):321–327, 2010. [^1]: LIF Marseille, CNRS & Univ. Aix–Marseille. On leave from IITP, RAS, Moscow. Supported in part by NAFIT ANR-08-EMER-008-01 grant. E-mail: `alexander.shen@lif.univ-mrs.fr`. The author is grateful to all the participants of the Kolmogorov seminar at Moscow State University and to his LIF/ESCAPE colleagues. Many of the results covered in this survey were obtained (or at least inspired) by Andrej Muchnik (1958–2007).
--- abstract: 'We investigate a novel global orientation regression approach for articulated objects using a deep convolutional neural network. This is integrated with an in-plane image derotation scheme, DeROT, to tackle the problem of per-frame fingertip detection in depth images. The method reduces the complexity of learning in the space of articulated poses which is demonstrated by using two distinct state-of-the-art learning based hand pose estimation methods applied to fingertip detection. Significant classification improvements are shown over the baseline implementation. Our framework involves no tracking, kinematic constraints or explicit prior model of the articulated object in hand. To support our approach we also describe a new pipeline for high accuracy magnetic annotation and labeling of objects imaged by a depth camera.' bibliography: - 'egbib.bib' title: 'Rule Of Thumb: Deep derotation for improved fingertip detection' --- Introduction {#sec:intro} ============ ![ Examples from HandNet test set detections. The colors represent fingertips that are correctly located and identified. The white boxes indicate false detections with the error threshold chosen to be 1cm. The top two rows are trained and tested on non-derotated data. The bottom two are trained and tested on derotated data and then rotated back to the non-derotated space. The detections are overlaid on the IR image from the camera which is not part of the classification process. a) Successful examples for all methods. b) Representative challenging examples for which derotation enables better performance. c) Failure cases where derotation fails to improve the results.[]{data-label="fig:fingertips"}](Result.png){width="0.98\linewidth"} In this paper we propose a method for normalizing out the effects of rotation on highly articulated motion of deforming geometric surfaces such as hands observed by a depth camera. 
Changing the global rotation of an object directly increases the variation in appearance of the object parts. The work of [@KimHIBCOO12] physically removes this variability with a wrist-worn camera and samples only a single 3D point on each finger to perform full hand pose estimation. For markerless situations, removing variability through partial canonization can significantly reduce the space of possible images used for pose learning instead of trying to explicitly learn the variability through data augmentation. In [@LepetitLF05] the authors show that learning a derotated 2D patch instead of the original one around a feature point dramatically reduces the learning capacity required and improves the classification results while using fewer randomized trees. To develop our method we use fingertip detection as a challenging representative scenario with a propensity for self-occlusion and high rotational variability relative to an imaging sensor. Many approaches in the literature use fingertip or hand part detection towards the goal of full hand pose estimation ([@KeskinKKA11],[@qian2014realtime],[@tompson14tog],[@Wang09]); however, they all approach the problem by learning on datasets augmented with rotational variability. Instead, we propose to remove this hand-space variability during both the training phase and run-time. To this end we propose to learn the rotation using a deep convolutional neural network (CNN) in a regression context, based on a network similar to that of [@tompson14tog]. We show how this can be used to predict full three degrees of freedom (DOF) orientation information on a database of hand images captured by a depth sensor. We combine the predicted orientation with a novel in-plane derotation scheme. The "Rule of thumb" is derived from the following insight: there is almost always an in-plane rotation that can be applied to an image of the hand to force the base of the thumb to be on the right side of the image.
This implies that the ambiguity inherent in rotationally variant features can be overcome by derotating the hand image to a canonical pose instead of augmenting a dataset with all variations of the rotational degrees of freedom, as is commonly done. Figure \[fig:fingertips\] shows examples of extensive pose variation that can benefit from our approach [^1]. No currently available hand dataset ([@ZhaoCX12],[@qian2014realtime],[@tompson14tog]) includes accurate full 3 DOF ground truth hand orientations on a large database of real depth images. Using joint location data from NYUHands [@tompson14tog] it is possible to extract a global hand orientation per pose. However, we found that the size of this dataset and its rotational variability are not optimal for learning to predict 3 DOF orientation. A significant contribution of this paper is therefore the creation of a new, large-scale database, which we call HandNet[^2], of fully annotated depth images with 212928 unique hand poses captured by an Intel RealSense camera. For the purpose of effectively annotating such a large dataset we describe a novel image annotation technique. To overcome the severe occlusion inherent in such a process we use DC magnetic trackers, which are surprisingly sparsely used by the vision community considering their high accuracy, speed and robustness to occlusions. Using our deep derotation method (DeROT) we show up to 20.5% improvement in mean average precision (mAP) over our baseline results for two state-of-the-art approaches for fingertip detection in depth images, namely, a random decision tree [@KeskinKKA11] (RDT) and a deep convolutional neural network [@tompson14tog] (CNN). We also compare our results to a non-learning based method similar to PCA and show that it produces inferior results, further supporting the proposed use of DeROT. Building HandNet: Creation and annotation {#sec:database} ========================================= ![ The data capture setup. a) 2mm magnetic sensors.
The larger rectangular sensors are not used. b) A fingertip sensor inside the inner seam. c) Virtual model used for planning a multi-sensor setup. We only use 5 sensors. d) The RealSense camera rigidly fixed to the TrakStar transmitter. e) The back of the wooden calibration board where the glass sensor housings are firmly pushed through. f) The front of the calibration board where the glass sensor housings are visible on the corners as seen in the inset.[]{data-label="fig:dataprocess"}](DataRecording.png){width="0.95\linewidth"} Synthetic databases such as those created using [@libhand] have a severe disadvantage in that they cannot accurately account for natural hand motion, occlusions and the noise characteristics of real depth cameras. The creation of a large hand pose database of real depth images with consistent annotations is therefore of great importance, but beyond the capability of human annotators. The NYUHands database [@tompson14tog] uses a full model of the hand and a three-camera setup to annotate hand joint locations. There are instances where fingers are obstructed and accurate orientation information is not reliable. Similarly, the method of [@Wang09] uses inverse kinematics coupled with a colored glove; it likewise lacks explicitly measured orientation, and fingertip locations can be obstructed from the depth camera. An alternative to model-based systems is sparse marker systems such as those used by [@ZhaoCX12]; however, the excessive cost of a modern mocap setup such as Vicon as well as the occlusion problem make such an approach unattractive. In contrast, modern DC magnetic trackers like the TrakStar [@trakstar] are robust to metallic interference and obstruction by non-ferrous metals, and provide sub-millimeter and sub-degree accuracy for location and orientation relative to a fixed base station.
Despite their almost non-existent use in modern computer vision literature, we have found them to be an excellent measurement and annotation tool. **Sensors.** To build and annotate our HandNet database we use a RealSense camera combined with $2mm$ TrakStar magnetic trackers. We affix the sensors to a user's hand and fingertips by using tight elastic loops with sensors in sewn seam pockets. This prevents lateral and medial movement along the finger. This can be seen in Figure \[fig:dataprocess\]. The skin-tight elastic loops have an additional significant benefit over gloves in that the depth profile and hand movements are not affected by the attached sensors and thus do not pollute the data. **Calibration.** Camera calibration with known correspondences is a well-studied problem [@Zhang96]. However, in our case we need to calibrate between a camera frame and a sensor frame. We do this by positioning the magnetic sensors on the corners of a checkerboard pattern, thereby creating physical correspondence between the detected corner locations and the actual sensors. This setup can be seen in Figure \[fig:dataprocess\]. We use the extracted 2D locations of the corner points on the calibration board [@bouguet2004camera] together with the sampled sensor 3D locations to perform EPnP [@Epnp09] to determine the extrinsic configuration between the devices. ![ The available data annotations after calibration. a) Color image. Illustrates a full hand setup for this work. The color is not used. b) The RGB axes indicate the measured location and orientation of each fingertip and the back of the palm. c) IR image (not used) overlaid with the labels generated from the raycasting described in Section \[sec:database\]. d) IR image overlaid with the generated heatmaps per fingertip and the global orientation of the hand represented as an oriented bounding box (not used).
[]{data-label="fig:annotation"}](HandData.png){width="0.9\linewidth"} **Annotation.** We model each sensor as a 3D oriented ellipsoid. We then raycast the ellipsoid into the camera frame and set the label to be the identity of the ellipsoid closest to the camera for every pixel. We also create a heatmap $h_i$ for each fingertip $i$ using the same technique but setting the value per pixel to be gaussian over the distance to the projected sensor location. An example of both types of annotation can be seen in Figure \[fig:annotation\]. **Recording the database.** The database is created from $10$ participants (half male, half female, different hand sizes) who perform random hand motions with extensive pose variation while wearing the magnetic sensors. The RealSense camera operates at $\sim 58$ fps producing $640\times 480$ depth maps which we reduce to $320\times 240$. The TrakStar samples measurements at a rate of $720$Hz. In total we recorded $256987$ images. A portion of these images was removed because of low quality. The final dataset is $212928$ frontal images including full annotation of the position and orientation of each fingertip and the back of the palm. After recording each participant we used a software utility to add offsets to the rotation and location of each sensor to adjust for greater consistency in positioning across subjects. Fingertip detection {#sec:fingertip} =================== Although there are many non-learning based hand pose methods that can produce fingertip locations ([@schmidt2014dart; @oikonomidis2011markerless; @MelaxKO13; @BallanTGGP12]), they use kinematic and frame-to-frame constraints coupled with hand modelling. In contrast, here we specifically focus on per-frame fingertip detection in depth images without either tracking or kinematic modelling.
For our pipeline we first segment the target hand from the depth image using a fast depth-based flood-fill method seeded either from the previous frame for real-time use and testing or from the ground truth hand location for building the database. Using the center of mass (CoM) of the segmented hand and its average depth value we define a depth-dependent bounding box of size $w=\frac{50000}{z}$ for a RealSense camera (HandNet) and $w=\frac{70000}{z}$ for a Kinect camera (NYUHands), where $z$ is the depth of the CoM of the segmented hand. We derotate the image about the CoM by the in-plane angle produced by DeROT, described in Section \[sec:derotation\]. This comes from the predicted full 3D orientation at run-time or from the ground truth sensor orientation for database construction or testing. We then crop the image using the bounding box. We now describe our modifications of the two different, learning-based fingertip detectors that we use in this work. ![ Understanding derotation: We represent the space of poses by a non-uniform 2D region with representative hand poses. Red and green represent the pose-space covered by training images and testing images respectively. Each marker indicates one of the 4 possible combinations of training and testing for a machine learning method where the database remains fixed in size. The larger region indicates greater pose variability while the smaller represents less. Intuitively, by training on a space with low variance and testing in this same space we expect to see an improvement over the opposite. Section \[sec:evaluation\] supports this intuition. []{data-label="fig:ronnyhands"}](Hands2.jpg){width="0.8\linewidth"} Random decision tree {#sec:RDT} -------------------- We follow the method of Keskin [@KeskinKKA11] where a random decision tree (RDT) ensemble learns hand part labels for every pixel in a depth image of a hand.
We refer the reader to the supplementary material of our paper as well as [@KeskinKKA11; @Shotton11] for specific details of this approach. However, here we propose a number of key differences which we found specifically helpful for fingertip detection and run-time efficiency. We use the same random binary depth attributes per pixel but spatially distribute them according to an exponential sampling pattern similar to that of BRISK [@leutenegger2011brisk]. In addition to this, we use only a single RDT which contrasts with the common use of multiple trees in an ensemble. After training our single RDT the class distributions stored at each leaf can be used for inference because they represent the empirical estimate of the posterior probability $p\left(c | x \right)$ of hand part label $c$ given the image evidence $x$. Inferring the most likely fingertip identity label is therefore simply performed pixel-wise by finding the $c^*$ which maximizes $p\left(c | x \right)$ per pixel. However, label inference performed this way results in noisy labels as neighboring classifications do not influence one another. Without adding more trees we propose a simple but highly effective spatial regularization: for each fingertip $i$ we treat the posterior $p\left(c=i | x \right)$ for all pixels as an image and convolve it with a discrete 2D gaussian smoothing kernel $g_\sigma$ with blur radius $\sigma$. This has the effect of correlating the posterior label distributions of nearby pixels. Therefore every pixel $q$ is labeled by fingertip identity (including palm and wrist labels) according to $$\label{eq:treeEq} c^*\left(q;x\right) = \mathop {\arg \max }\limits_{i \in \left\{0..6 \right\}} \left(g_{\sigma}* p_{c=i|x} \right) \left(q\right).$$ Finally, we found that the close proximity of fingers compromises standard mean-shift [@meanshift] clustering. Instead we detect the largest label blobs in the label image from Equation \[eq:treeEq\] above a certain area threshold. 
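The spatial regularization of Equation \[eq:treeEq\] (blur each per-class posterior map, then take a pixel-wise argmax) can be sketched with NumPy/SciPy; the `(C, H, W)` array layout and the function name are our own assumptions:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def regularized_labels(posteriors, sigma):
    # posteriors: array of shape (C, H, W) holding p(c = i | x) per pixel,
    # as read off the leaves of the single RDT
    smoothed = np.stack([gaussian_filter(pm, sigma) for pm in posteriors])
    # pixel-wise argmax over the blurred posteriors (Eq. [eq:treeEq])
    return smoothed.argmax(axis=0)
```

Blurring correlates the posteriors of nearby pixels, so isolated misclassified pixels are voted down by their neighborhood before the argmax.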
The 2D fingertip locations are then assigned to the blob centers and, if necessary, the average depth value for each blob can be used to generate the 3D camera-space coordinates. **Training the RDT.** Training optimal decision trees is known to be NP-complete [@HyafilR76] and therefore trees are built from the root down using breadth-first greedy optimization over tree node impurity. We use the Gini impurity measure which is slightly cheaper to compute than the more typical entropy measure. To build our database for training an RDT we extracted $80\%$ of the fingertip pixels in our training datasets and $50\%$ of the non-fingertip hand pixels. For HandNet this results in a training dataset of $500$ million sample pixels totaling about $600$GB of data for $1200$ attributes. Our tree-builder trains an unpruned randomized tree on $4\times$ GTX 580 GPUs and an Intel i7 processor with $48$GB of RAM in $16$ hours for a tree depth of $21$ with $18000$ query tests per node. We are not aware of another single-workstation tree-builder capable of handling this quantity of data. The very large number of examples helps to prevent the overfitting typically demonstrated by single RDTs. Convolutional neural network {#sec:CNN} ---------------------------- For our second evaluated method we build a CNN architecture based on Tompson [@tompson14tog] to predict the location of the five fingertips by using the maximum location in a set of heatmaps which implicitly represent fingertip locations. We refer the reader to that work for specific details and to our supplementary material for the explicit architecture of our implementation. This multi-layer deep approach is critical for an input space as complicated as the set of images of an articulated object and we found that the deeper convolutional layers extract feature responses on a higher semantic level, such as oriented fingertips. Using the heatmap-based error objective helps to spatially regularize the network during training.
For input to the CNN we set $D_1$ to be the cropped depth resized to $96\times 96$ pixels. We then downsample it by a factor of two twice to produce $D_2$ and $D_3$. We use a subtractive form of local contrast normalization (LCN) [@tompson14tog; @jarret] so that $D_i \leftarrow D_i - g_{\sigma}*D_i$ using a gaussian smoothing kernel with $\sigma=5$ pixels. The triplet $\left(D_1,D_2,D_3\right)$ is then input to the network. The trained network outputs a heatmap $h_i$ per fingertip $i$ for new data. Our method differs in that we augment the output with a non-fingertip heatmap that is strong wherever a fingertip is not likely to be present. Also, instead of fitting a gaussian model to the strongest mode in the low resolution heatmaps, we upsample each $18\times 18$ fingertip heatmap $h_i$ to a fixed size of $128\times 128$ with a smoothing bilinear interpolator. Similar to Section \[sec:RDT\], every pixel $q$ is labeled with a fingertip identity (including a non-fingertip class) $$\label{eq:cnnEq} c^*\left(q\right) = \mathop {\arg \max }\limits_{i \in \left\{0 .. 5 \right\}} h_i\left(q\right).$$ As in Section \[sec:RDT\], the fingertip locations are given by the location of the largest label blob. **Training the CNN.** Both the orientation regression CNN of the next section and the described fingertip CNN are trained using Caffe [@jia2014caffe] on an NVidia GTX 980 with an i7 processor and $16$GB of onboard RAM. We train both with a Euclidean loss and a batch size of $100$ for $100000$ iterations with stochastic gradient descent. We start with a learning rate of 0.01 and reduce it by a factor of 0.2 after every ten thousand iterations. We found that repeated fine-tuning was necessary to help network convergence.
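The subtractive LCN step $D_i \leftarrow D_i - g_{\sigma}*D_i$ amounts to a high-pass filter that keeps only local depth contrast; a short sketch (the function name is ours):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def subtractive_lcn(depth, sigma=5.0):
    # D <- D - g_sigma * D: subtract the gaussian-smoothed local mean,
    # removing the low-frequency depth component
    depth = depth.astype(np.float64)
    return depth - gaussian_filter(depth, sigma)
```

A constant depth patch maps to zero, so the network sees only local structure rather than absolute depth.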
Derotation {#sec:derotation} ========== Orientation regression {#sec:regression} ---------------------- ![ This graph shows the predicted value of all 9 coefficients of the hand orientation matrix in red relative to the ground truth in yellow. For clarity we order each ground truth coefficient monotonically and apply this reordering to the predicted results. The mean squared error for all the coefficients on the HandNet test set before and after SVD is 0.0271 and 0.0234 respectively.[]{data-label="fig:rotationgraph"}](Hand_Rotation_Prediction.pdf){width="0.9\linewidth"} We adapt the deep convolutional architecture from Section \[sec:fingertip\] to predict full $3$ DOF hand orientation. Instead of a heatmap, we directly predict the $9$ coefficients of the rotation matrix. There are only $3$ degrees of freedom in a rotation, but by using $9$ parameters and a large database we are effectively regularizing our over-parameterized output. The representation of a rotation matrix in this way is unique in $SO\left(3\right)$, unlike quaternions and Euler angles, which we found to be noisy and unreliable. This noise was most visible when trying to predict a single representative angle. For training we use a Euclidean loss and do not enforce orthonormality. However, the output of this CNN is directly projected onto the closest unitary matrix using the singular value decomposition $R = USV^T$. $\hat{R} = UV^T$ then provides a least-squares optimal projection into $SO\left(3\right)$, if we additionally enforce $\det\left(\hat{R}\right)=1$. Figure \[fig:rotationgraph\] shows the result of predicting the $9$ ground truth coefficients for HandNet and the full network architecture can be seen in the supplementary material.
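The projection onto the closest rotation can be written in a few lines of NumPy; flipping the column of $U$ belonging to the smallest singular value is a standard way to enforce $\det(\hat{R})=1$ (the function name is ours):

```python
import numpy as np

def project_to_so3(R):
    # Least-squares projection of a 3x3 matrix onto SO(3):
    # R = U S V^T  ->  R_hat = U V^T, with det(R_hat) = +1 enforced.
    U, _, Vt = np.linalg.svd(R)
    R_hat = U @ Vt
    if np.linalg.det(R_hat) < 0:
        U[:, -1] *= -1  # flip the direction of the smallest singular value
        R_hat = U @ Vt
    return R_hat
```

The result is orthonormal with unit determinant even when the network output is a noisy, non-orthogonal matrix.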
DeROT: Designing a derotation method {#sec:derot} ------------------------------------

$r_{align} \leftarrow \operatorname*{\arg\!\max}_{r_i \in \{r_1,r_2,r_3\}} \|\left(0,0,1\right)\cdot r_i\|$
if $r_{align} = r_2$ then $\alpha \leftarrow atan2(r_{3x},r_{3y}) + 90 + (180 \text{ if } r_{2z} \leq 0\text{, else } 0)$
else $\alpha \leftarrow atan2(r_{2x},r_{2y}) + 90$
Return $\alpha$ \[alg:derot\]

![ Synthetic and real examples of DeROT. a) The depth projection of the virtual hand before applying DeROT can be seen on the left wall of the cube representing the camera plane. The axis marked $r_{orient}$ is projected onto the camera plane and used in DeROT to define the angle $\alpha$. The purple circle contains the resulting image of the hand after applying derotation by angle $\alpha$. b) The top row of images are un-derotated. The bottom row have been derotated by $\alpha$ obtained by DeROT. Note that the thumb is consistently on the right of the image.[]{data-label="fig:derot"}](Derotate.png){width="0.95\linewidth"} We take advantage of the orientation prediction $\hat{R}=\left[r_1 r_2 r_3\right]$ to compute an angle $\alpha$ which we will use for rotating the camera image about its center. The aim of this is to reduce pose variance by heuristically forcing the thumb to be on the right side of the image. We could use a predefined axis and set the angle $\alpha$ with which to rotate the image to be that between the projection of this axis and the upwards image direction. Unfortunately, when this axis mostly points toward or away from the camera its projection onto the screen will be small and noisy. As a simple heuristic we detect if this is the case and if so choose an alternative axis. Specifically we first determine the predicted axis $r_{align}$ most aligned with the camera z axis as $r_{align}=\operatorname*{\arg\!\max}_{r_i \in \{r_1,r_2,r_3\}} \|\left(0,0,1\right)\cdot r_i\|$.
If $r_{align}$ is either the palm pointing direction or the direction of the extended fingers then we can be sure that the thumb direction $r_2$ will be non-noisy for this case and set $r_{orient}=r_2$. If the test yields instead that $r_{align}=r_2$ (i.e. thumb direction is mostly pointing towards or away from the camera) then we instead set $r_{orient}=r_3$ which is the palm vector. This procedure is summarized in Algorithm \[alg:derot\]. Synthetic and real examples can be seen in Figure \[fig:derot\]. This choice is arbitrary and can be adapted for objects other than the hand. We thus define DeROT to be the combination of using the CNN from Section \[sec:regression\] to predict $\hat{R}$ together with this derotation heuristic. Derotation with PCA and Procrustes {#sec:pca} ---------------------------------- Instead of using DeROT, an alternative approach is to extract the principal axes of the hand silhouette using PCA and taking the rotation angle of the largest axis to the vertical image axis. We have found that a similar but more stable option is to determine an enclosing ellipse using a Procrustes like algorithm on the convex hull of the points $\cal V$ of the hand segmentation. The minimum area enclosing ellipse can be found efficiently over the points $x_i\in\text{convhull}\left({\cal V}\right)$ by minimizing $-\log \left( {\det \left( A \right)} \right) \text{, s.t } {\left( {{x_i} - {\overline x_i}} \right)^T}A\left( {{x_i} - {\overline x_i}} \right)\leq 1$ for $A,{\overline x_i}$ defining the ellipse. We solve this using Khachiyan’s algorithm [@AspvallS80]. However, as shown in Section \[sec:evaluation\] even with this added stability the method reduces performance rather than improving it. Experiments {#sec:experiments} =========== Evaluation protocol and data {#sec:evaluation} ---------------------------- **Experiments.** We perform our experiments using our HandNet database and the publicly available database NYUHands [@tompson14tog]. 
All experiments are performed separately on the two databases. Our baseline results come from () training on non-derotated data and testing on non-derotated data. We compare this to () training on non-derotated data while testing with derotated data, () training on derotated data while testing with non-derotated data, () training on derotated data while testing with derotated data. **Non-derotated data.** For HandNet training we randomly select 202928 images and use the remaining 10000 images for testing. For NYUHands we use all 3 camera views (72757 images per view) for training and the frontal view for testing (8252 images). We slightly dilute the training and testing sets according to our hand segmentation pipeline, which results in 184100 training images and 7241 testing images. For experiment types () and () we use this data to train two CNN orientation regression networks; one for each dataset. We use the same data for training the RDT and CNN fingertip detectors for experiment types () and (). However, for testing the fingertip detectors in experiments () and (), we rotate each testing image by uniformly random in-plane rotational offsets between $-90$ and $90$ degrees. This further guarantees that the testing data is different from the training data. **Derotated data.** Experiment types () and () use training data which is first derotated by an Oracle, which we define to be DeROT using the ground truth $R_{gt}$ obtained from the magnetic sensors. With experiment types () and () we first apply the same uniform random image rotation to the test images exactly as for experiment types () and (). We then apply one of the following: (a) Procrustes derotation, (b) DeROT using $\hat{R}$ predicted by the CNN regression network, (c) Oracle derotation with $R_{gt}$. **Mean precision and mean average precision.** We compute precision and recall according to the protocol of [@voc11].
We set prediction confidence as the value at the location of the fingertip detection in the $128\times 128$ channel heatmap for each fingertip. The mean precision (mP) represents the mean precision over all fingertips at a recall rate of $100\%$. Mean average precision (mAP) measures the mean of all the areas under the precision-recall curves for each fingertip and takes into account the behaviour over all confidence values. **Error threshold.** The error of a prediction is the distance to the ground truth location. If a fingertip is more than 6 pixels from the ground truth position it is considered a false positive. The threshold of 6 pixels roughly translates into a distance of $1cm$ for both HandNet and NYUHands in an image patch of size $128\times 128$ cropped according to Section \[sec:fingertip\]. $1cm$ is a natural threshold to choose as the distance between adjacent fingertips is over $1.6cm$ on average [@dandekar2003]. [Table \[tab:HandNet\]: mP and mAP results for all experiment types on both databases; the table body was lost in extraction.] Discussion {#sec:discussion} ---------- The results of the experiments can be seen in Table \[tab:HandNet\]. In Figure \[tab:graphs\] we display a precision-recall curve and error threshold graph for the thumb on the HandNet test-set for all experiment types, which is representative of the behavior of all fingertips. The results show that the use of DeROT improves over the baseline results *for all measurements* for both RDT and CNN for experiments on both datasets. On HandNet, when training an RDT and CNN on ground truth derotated data, we see that test-time use of DeROT yields improvement in mAP of 11.3% and 20.5% over the respective baselines. For NYUHands, DeROT gives an RDT a gain of 17.3% in mAP when trained on derotated data and a CNN achieves mAP gains of 14.2% when trained on underotated data but only a marginal gain of 2.5% when trained on derotated data.
We found that the confidence values for this specific case were not reliable (which directly affects mAP) because of confusion between fingertips (specifically index and ring), which further justified the creation of HandNet. *For all experiments and datasets* the mP when using DeROT shows improvements of between 7.8% and 21.1% on underotated training data and between 23.5% and 38.6% for derotated training data. The simplistic Procrustes derotation negatively impacts fingertip detection relative to the baseline, and we therefore chose not to build and train an RDT and CNN on Procrustes derotated versions of the two datasets. For our experiments a single RDT mostly outperforms a CNN. Although they are trained with different data and objectives, this hints that there is no silver bullet for determining which machine learning approach is more appropriate. Conclusions and future work {#sec:conclusion} =========================== We have shown that using derotation, specifically DeROT, significantly improves the localization ability of machine-learning based per-frame fingertip detectors by reducing the variance of the pose space. Furthermore, we find that this procedure works despite the extremely high range of potential poses. We see this approach as an alternative to data augmentation and as a potentially useful additional step in pipelines dedicated to articulated object pose extraction, such as hands. Although we have used no prior model or kinematic constraints to improve the detection results, this is currently an active area that we are investigating. Also, in this work we have considered results only on depth images, but it would be interesting to apply a similar pipeline to pure 2D color images.\ \ **Acknowledgments** This research was supported by European Community’s FP7-ERC program, grant agreement no. 267414. [^1]: All graphs and images in this paper are best viewed in color.
[^2]: To advance research in the field this database and relevant code is available at [www.cs.technion.ac.il/\~twerd/HandNet/](www.cs.technion.ac.il/~twerd/HandNet/)
--- abstract: 'The critical behaviour of three-dimensional semi-infinite Ising ferromagnets at planar surfaces with (i) random surface-bond disorder or (ii) a terrace of monatomic height and macroscopic size is considered. The Griffiths-Kelly-Sherman correlation inequalities are shown to impose constraints on the order-parameter density at the surface, which yield upper and lower bounds for the surface critical exponent $\beta_1$. If the surface bonds do not exceed the threshold for supercritical enhancement of the pure system, these bounds force $\beta_1$ to take the value $\beta_1^{\text{ord}}$ of the latter system’s ordinary transition. This explains the robustness of $\beta_1^{\text{ord}}$ to such surface imperfections observed in recent Monte Carlo simulations.' address: | Fachbereich Physik, Universität - Gesamthochschule Essen,\ D-45117 Essen, Federal Republic of Germany author: - 'H. W. Diehl' title: 'Critical behaviour of three-dimensional Ising ferromagnets at imperfect surfaces: Bounds on the surface critical exponent $\beta_1$' --- In a recent paper Pleimling and Selke (PS) [@PS98] reported the results of a detailed Monte Carlo analysis of the effects of two types of surface imperfections on the surface critical behaviour of $d=3$ dimensional semi-infinite Ising models with planar surfaces and ferromagnetic nearest-neighbour (NN) interactions: (i) random surface-bond disorder and (ii) a terrace of monatomic height and macroscopic size on the surface. For type (i), both the ordinary and special transitions were studied.
They found that the asymptotic temperature dependence of the disorder-averaged surface magnetization on approaching the bulk critical temperature $T_c$ from below could be represented by a power law $\sim |\tau|^{\beta_1}$ with $\tau\equiv (T-T_c)/T_c$, where $\beta_1$ agreed, within the available numerical accuracy, with the respective values $\beta_1^{\text{ord}}\simeq 0.8$ and $\beta_1^{\text{sp}}\simeq 0.2$ of the pure system’s ordinary and special transitions. For type (ii), where the interaction constants were chosen such that only an ordinary transition could occur, the same value $\beta_1^{\text{ord}}$ of the perfect system was found for $\beta_1$. Their findings for the case of (i) are in conformity with the relevance/irrelevance criteria of Diehl and N[ü]{}sser [@DN90a; @Har74] according to which the pure system’s surface critical behaviour should be expected to be stable or unstable with respect to short-range correlated random surface-bond disorder depending on whether the surface specific heat $C_{11}$ [@Die86a] of the pure system remains finite or diverges at the transition. It is fairly well established [@DD83b; @DDE83] that $C_{11}$ approaches a finite constant at the ordinary transition, but has a leading thermal singularity $\sim|\tau|^{(d-1)\nu -2\Phi}$ at the special transition, where $\Phi$ is the surface crossover exponent. In the latter case, the condition for irrelevance, $\Phi<(d-1)\nu/2$, reduces to $$\label{irrelcon} \Phi<\nu$$ in $d=3$ bulk dimensions. Since various Monte Carlo simulations [@LB90a; @RDW92; @RDWW93] (though not all [@vrf]) and renewed field-theory estimates [@DS94] suggest a value of $\Phi$ between $0.5$ and $0.6$, definitely smaller than the accepted value $0.63$ of $\nu$ for $d=3$, one may be quite confident that the condition (\[irrelcon\]) holds. Thus short-range correlated surface-bond disorder should be irrelevant in the renormalization-group sense at both transitions. 
Irrelevance criteria of the above Harris type [@DN90a; @Har74] seem to work quite well in practice. Yet, from a mathematical point of view, they are rather weak because they are nothing but a necessary (though not sufficient) condition for stability of the pure system’s critical behaviour. In this note, I shall employ the Griffiths-Kelly-Sherman (GKS) inequalities [@Gri67a] to obtain upper and lower bounds on the surface magnetization densities of both types of imperfect systems, bounds that are given by surface magnetizations of analogous systems without such imperfections. Their known asymptotic temperature dependence near $T_c$ will then be exploited to obtain restrictions on the surface critical behaviour of the imperfect systems considered. For some cases of interest studied by PS [@PS98], the equality $\beta_1=\beta_1^{\text{ord}}$ will be rigorously established. Following these authors, let us consider an Ising model with ferromagnetic NN interactions on a simple cubic lattice of size $L_x\times L_y\times L_z$. Periodic boundary conditions will be chosen along two principal axes (the $x$ and $y$ directions), and free boundary conditions along the third one (the $z$ direction), so that the surface consists of the top layer at $z=1$ and the bottom layer at $z=L_z$. Associated with each pair of spins on NN sites $i$ and $j$ is an interaction constant $J(i,j)>0$, which we assume to have the same value $J$ whenever $i$ or $j$ (or both) belong to layers with $1<z<L_z$. In the case of surface-bond disorder, which we consider first, the $J(i,j)\equiv J^{\text{(s)}}(i,j)$ of all NN pairs of surface sites are independent random variables. The probability density $P(J_1)$ of any one of these will be assumed to have support only in the interval $[J_1^<,J_1^>]$ (with $J_1^> > J_1^< > 0$). This is in conformity with, but less restrictive than, PS’s assumption that $J_1$ takes just two values $J_1^<$ and $J_1^>$, either one with probability $1/2$.
We will also assume that all (bulk and surface) spins are exposed to the same magnetic field $H>0$, whose limit $H\to 0^+$ will be taken after the thermodynamic limit has been performed. Let $K\equiv J/k_BT$ and $h\equiv H/J$. Define $\bbox{r}^{(\text{s})}$ to be the set of all dimensionless surface coupling constants $J^{(\text{s})}(i,j)/J$. Let $m(i;K,\bbox{r}^{(\text{s})},h)\equiv \langle s_i\rangle$ be the thermal average of a spin at site $i$ for a given disorder configuration $\bbox{r}^{(\text{s})}$, and denote the corresponding quantity of the perfect system with uniform NN surface coupling $J_1=rJ$ as $m(i;K,r,h)$. Since all interactions are ferromagnetic, the GKS inequalities [@Gri67a] are valid. Averages of products of spin variables are monotone non-decreasing functions of all variables $J(i,j)$ and $H$. Hence, for finite $L_x,\, L_y$, and $L_z$, $m(i;K,\bbox{r}^{(\text{s})},h)$ is bounded by $m(i;K,r^<,h)$ from below and by $m(i;K,r^>,h)$ from above. We choose $i\equiv i_s$ to be a surface site, take the thermodynamic limit (first) and then let $H\to 0^+$. The bounds converge towards the respective values of $m_1(K,r,0^+)$, the spontaneous magnetization of the surface layers per site, for $r=r^<$ and $r^>$. Thus we obtain $$\label{Grifineq} m_1(K,r^<,0^+)\le m(i_s;K,\bbox{r}^{(\text{s})},0^+)\le m_1(K,r^>,0^+)\;.$$ The following limiting forms of $m_1$ are well established [@PS98; @Die86a; @LB90a; @rigres; @BD94]: $$\label{limform} m_1=\cases{C_1|\tau|^{\beta_1^{\text{ord}}}[1+o(\tau)]& as $\tau\to 0^-$ at fixed $r<r_c$,\cr C_1'|\tau|^{\beta_1^{\text{sp}}}[1+o(\tau)] &as $\tau\to 0^-$ at fixed $r=r_c$,\cr m_{1c}+O(\tau) &as $\tau\to 0^\pm$ at fixed $r>r_c$,}$$ where $r_c\simeq 1.50 $ [@LB90a] is the critical value associated with the special transition. The quantities $m_{1c}>0$, $C_1$, and $C'_1$ are nonuniversal, whence the first two depend on $r$. Consider first the case $r^> < r_c$.
Let $C^<$ and $C^>$ be the values of $C_1$ for $r=r^<$ and $r=r^>$, respectively. (These satisfy $0 < C^< \le C^> < \infty$ provided $0 < J < \infty$ and $0 < J_1^< \le J_1^> < \infty$.) It follows that there exists a number $\epsilon >0$ independent of the disorder configuration $\bbox{r}^{\text{(s)}}$ such that $$\label{ineq} C^< \le m[i_s;K(\tau),\bbox{r}^{\text{(s)}},0^+]\, \,|\tau|^{-\beta_1^{\text{ord}}}\le C^>$$ whenever $-\epsilon <\tau <0$. We denote the average of a quantity $Q$ over all choices of the random variables $\bbox{r}^{\text{(s)}}$ as $\overline{Q}$. Upon averaging $m(i_s;.)$ to obtain the disorder-averaged surface magnetization $\overline{m_1}$, we see that the inequality (\[ineq\]) holds for $\overline{m_1}\,|\tau|^{-\beta_1^{\text{ord}}}$ as well. An elementary consequence is: If $\overline{m_1}$ has a well-defined critical exponent $\beta_1^{\text{dis}}$ in the sense that [@Sta71] $$\label{defce} \beta_1^{\text{dis}}=\lim_{\tau\to 0^-} \frac{\ln \overline{m_1}(\tau)}{\ln |\tau|}$$ exists, then we have $$\label{ordeq} \beta_1^{\text{dis}}=\beta_1^{\text{ord}}\,.$$ Two further implications of (\[ineq\]) are worth mentioning. First, if a surface critical exponent $\tilde\beta_1^{\text{dis}}$ can be defined via the analog of (\[defce\]) for the most probable value of $m(i_s;.)$ [@comment], then it must have the same value $\beta_1^{\text{ord}}$. Second, the inequality (\[ineq\]) also rules out a limiting $\tau$ dependence of the form $\sim |\tau|^{\beta_1}\,|\ln|\tau||^\varphi$ (standard logarithmic corrections) for $\overline{m_1}$ and the most probable value of $m(i_s;.)$. Consider next the case $r^> = r_c$. Let us again make the assumption that the limit (\[defce\]) or the analogous one defining $\tilde\beta_1^{\text{dis}}$ exist. Then the inequalities $$\label{betadisineq} \beta_1^{\text{sp}}\le \beta_1^{\text{dis}}\le \beta_1^{\text{ord}}$$ and their analogs for $\tilde\beta_1^{\text{dis}}$ can be deduced from (\[ineq\]). (Cf. Lemma 3 of [@Sta71].)
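The squeeze argument behind (\[ordeq\]) rests on the fact that the nonuniversal amplitudes $C^<$ and $C^>$ only shift $\ln \overline{m_1}$ by a constant, which is washed out by the diverging $\ln|\tau|$ in the denominator of (\[defce\]). A purely illustrative numerical sketch (the exponent value $0.8$ and the amplitudes below are made-up inputs, not data from this work):

```python
import math

def beta_estimate(C, beta, tau):
    # Finite-tau estimate ln(m1)/ln|tau| for a pure power law m1 = C*|tau|**beta.
    m1 = C * abs(tau) ** beta
    return math.log(m1) / math.log(abs(tau))

beta_ord = 0.8
tau = -1e-12  # reduced temperature just below T_c
# Two different amplitudes, mimicking the lower and upper bounds C^< and C^>:
estimates = [beta_estimate(C, beta_ord, tau) for C in (0.3, 3.0)]
```

Both estimates approach the same exponent as $\tau\to 0^-$, the amplitude contributing only a $\ln C/\ln|\tau|$ correction; this is why the two-sided bound (\[ineq\]) pins down the exponent while leaving the amplitude free.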
The same reasoning applied in the case $r^> > r_c$ shows that $\beta_1^{\text{dis}}$ or $\tilde\beta_1^{\text{dis}}$ must obey the relations $$0\le \beta_1^{\text{dis}}\le \beta_1^{\text{ord}}$$ whenever the limits (\[defce\]) through which we defined them exist. Likewise in the case $r^< = r_c$, the possible values of $\beta_1^{\text{dis}}$ or $\tilde\beta_1^{\text{dis}}$ are restricted by $$\label{bsp} 0\le \beta_1^{\text{dis}}\le \beta_1^{\text{sp}}$$ at transitions at which $\overline{m_1}$ or the most probable value of $m(i_s;.)$ [@comment] approach zero, respectively. On the other hand, it should be recalled that the surface critical exponent $\beta_1^{\text{ex}}$ of the pure system’s extraordinary transition requires a definition other than (\[defce\]): One must subtract a regular background contribution $m_1^{\text{reg}}$ from $m_1$ and define $\beta_1^{\text{ex}}$ through the limiting behaviour $m_1-m_1^{\text{reg}}\sim |\tau|^{\beta_1^{\text{ex}}}$. For transitions of the impure systems at which $\overline{m_1}$ approaches a constant $\ne 0$, it would also not make much sense to define $\beta_1^{\text{dis}}$ via (\[defce\]). Of course, for surface critical exponents $\beta_1^{\text{dis}}$ not given by (\[defce\]), the above bounds do [*not*]{} apply. This means that they cannot be utilized to draw conclusions about the surface critical exponent $\beta^{\text{ex}}_1$ of the impure system’s extraordinary transition. However, for a special transition of the impure system with $\overline{m_1}(\tau=0)=0$, the inequalities (\[bsp\]) hold. The inequality (\[Grifineq\]) rules out that the impure system has an ordered surface phase for $T>T_c$ whenever $r^> \le r_c$. In order that the impure system can have an extraordinary or special transition, the distribution $P(J_1)$ of the surface couplings typically will have to extend beyond the critical-enhancement threshold $r_cJ$ of the pure system.
But even if $r^> > r_c$, an ordinary transition may still occur if the surface bonds ‘on average’ are not sufficiently enhanced (cf. [@PS98]). However, if $P(J_1)$ extends beyond $r_cJ$, then disorder configurations for which macroscopically large surface regions have the same supercritical value ($>r_cJ$) of $J_1$ occur with finite probability. This happens even if the impure system (for a typical realization of disorder) undergoes an ordinary transition, albeit with exponentially small probability. By analogy with the bulk case [@Gri69], I expect surface quantities like $\overline{m_1}$ and the disorder-averaged surface free energy to be [*non-analytic*]{} functions of the surface magnetic field $H_1$ at $H_1=0$ for temperatures between the bulk critical temperature $T_c$ and the temperature $T_s(r^>)>T_c$ at which the semi-infinite pure system with homogeneous surface coupling $J_1=r^>J$ undergoes a transition to a surface-ordered, bulk-disordered phase. That is, they should display [*Griffiths singularities*]{} [@Gri69], a problem on which we will not embark further here. Turning now to the case of surfaces with a terrace, we start from a pure Ising model of the sort considered above. Just as PS, we assume that [*all*]{} NN couplings $J(i,j)$ (including those between surface sites) have the same value $J$. Let us denote thermal averages pertaining to this system by a superscript $[I]$, writing, e.g., $m^{[I]}(i;K)=\langle s_i\rangle^{[I]}$. We consider another system, $[II]$, which differs from $[I]$ through the addition of a zeroth layer at $z=0$ whose spins are assumed to interact among themselves and with the spins in the $z=1$ layer via NN interaction constants $J_1$ and $J$, respectively. To obtain a system with a terrace, $[T]$, we choose a subregion of the zeroth layer (the terrace) and remove all those NN bonds $J$ and $J_1$ that are connected to lattice sites of this layer outside the terrace region.
PS considered a strip-like terrace of size $(L_x/2)\times L_y$, and assumed that $J_1=J$. For our considerations, the precise form and size of the terrace region will not be important. (One could even assume that an arbitrary subset of the spins in the zeroth layer are decoupled from the rest of the system.) Let $i_1$ be an arbitrary lattice site in the $z=1$ layer. Since the systems $[I]$, $[T]$, and $[II]$ differ by the addition of ferromagnetic interactions, we have from the GKS inequalities, $$m^{[I]}(i_1;K,h)\le m^{[T]}(i_1;K,r,h)\le m^{[II]}(i_1;K,r,h)$$ where, as before, $h=H/J>0$ is a uniform magnetic field and $r=J_1/J$. In the thermodynamic limit $L_x,\,L_y,\, L_z\to\infty$, the lower and upper bounds converge towards $m_1(K,h)$, the magnetization per site of the topmost layer, and to $m_2(K,r,h)$, the magnetization per site of the layer underneath the topmost layer, respectively. If we assume that $r<r_c$ (subcritical surface enhancement) and take the limit $h\to 0^+$, then the limiting form shown in the first line of (\[limform\]) applies to both $m_1$ and $m_2$ (with different values of $C_1$). As a straightforward consequence we find that the surface critical exponent $\beta_1$ of $m^{[T]}(i_1;K,r,0^+)$ (for an arbitrary site $i_1$ with $z=0$) strictly satisfies $\beta_1=\beta_1^{\text{ord}}$. It is evident that the same reasoning can be applied to the analogous two-dimensional model with a terrace to conclude that $\beta_1$ takes the exactly known value $\beta_1^{\text{ord}}=1/2$. Likewise, the inequality (\[ineq\]) and the result (\[ordeq\]) carry over to the two-dimensional case, giving $\beta_1^{\text{dis}}=1/2$ for all values of $r<\infty$, since $r_c=\infty$ for $d=2$. Note also that the inequality (\[ineq\]) excludes the possibility of an asymptotic temperature dependence of the form $\overline{m_1}\approx \text{const } |\tau|^{1/2}|\ln|\tau||^p$ (i.e., of logarithmic correction factors).
This is because it is known for the pure case that no such logarithmic corrections appear in the limiting form of $m_1$. Results of Monte Carlo simulations on the surface critical behaviour of two-dimensional Ising models with bond disorder have been reported in two recent papers [@SSLI97]. However, in this work random bond disorder was assumed to be present both in the bulk and at the surface, a case not captured by our reasoning. Nevertheless, $\overline{m_1}$ was found to behave as $|\tau|^{1/2}$, apparently without logarithmic corrections, even though the presence of such a correction could be detected in the limiting form of the disorder-averaged bulk order parameter. I am indebted to W. Selke for informing me about the work [@PS98] prior to publication, and to him, Joachim Krug, and Kay Wiese for a critical reading of the manuscript. This work has been supported by the Deutsche Forschungsgemeinschaft through the Leibniz program.

M. Pleimling and W. Selke, [*Critical phenomena at perfect and non-perfect surfaces*]{}, European Physical Journal B, to appear; preprint cond-mat/9710097.

H. W. Diehl and A. N[ü]{}sser, Z. Phys. B [**79**]{}, 69 (1990).

These criteria are of a similar nature as the familiar Harris criterion, which assesses the relevance or irrelevance of random bond bulk disorder; see A. B. Harris, J. Phys. C [**7**]{}, 1671 (1974).

For background on surface critical behaviour, see H. W. Diehl, in [*Phase Transitions and Critical Phenomena*]{}, edited by C. Domb and J. L. Lebowitz (Academic Press, London, 1986), Vol. 10, p. 75–267; K. Binder, in [*Phase Transitions and Critical Phenomena*]{}, edited by C. Domb and J. L. Lebowitz (Academic, London, 1983), Vol. 8, p. 1–144; and H. W. Diehl, Int. J. Mod. Phys. B [**11**]{}, 3593 (1997), preprint cond-mat/9610143.

S. Dietrich and H. W. Diehl, Z. Phys. B [**51**]{}, 343 (1983).

H. W. Diehl, S. Dietrich, and E. Eisenriegler, Phys. Rev. B [**27**]{}, 2937 (1983).

D. P. Landau and K. Binder, Phys. Rev. B [**41**]{}, 4633 (1990).

C. Ruge, S. Dunkelmann, and F. Wagner, Phys. Rev. Lett. [**69**]{}, 2465 (1992).

C. Ruge, S. Dunkelmann, F. Wagner, and J. Wulf, J. Stat. Phys. [**73**]{}, 293 (1992).

H. W. Diehl and M. Shpot, Phys. Rev. Lett. [**73**]{}, 3431 (1994), and to be published.

R. B. Griffiths, J. Math. Phys. [**8**]{}, 478 (1967); D. G. Kelly and S. Sherman, J. Math. Phys. [**9**]{}, 466 (1968).

To my knowledge, not all aspects of these limiting forms have been proven in a mathematically rigorous fashion, but they are consistent with all known results. For rigorous results on the 3D semi-infinite Ising model, see J. Fr[ö]{}hlich and C.-E. Pfister, Commun. Math. Phys. [**109**]{}, 493 (1987); C.-E. Pfister and O. Penrose, Commun. Math. Phys. [**115**]{}, 691 (1988).

T. W. Burkhardt and H. W. Diehl, Phys. Rev. B [**50**]{}, 3894 (1994).

Cf. H. E. Stanley, [*Introduction to Phase Transitions and Critical Phenomena*]{}, [*International series of monographs on physics*]{} (Oxford University Press, Oxford, 1971), Eq. (3.2); note also that its Lemma 3 could be utilized to derive the result (\[ordeq\]) from (\[ineq\]).

Quantities like $m_1\equiv \sum_{i_s}\langle s_{i_s}\rangle/\sum_{i_s}1$, the total magnetization of the surface per spin, may be expected to be self-averaging in the thermodynamic limit. Hence the value $m_1$ takes for any choice of the random variables $\bbox{r}^{\text{(s)}}$ that occurs with reasonable probability should agree with $\overline{m_1}$. Once one accepts this, the distinction between $\beta^{\text{dis}}_1$ and $\tilde\beta^{\text{dis}}_1$ becomes unnecessary. It is made here because the assumption of self-averaging is neither required nor used in the derivation of the given inequalities, and not to suggest a possible breakdown of self-averaging.

R. B. Griffiths, Phys. Rev. Lett. [**23**]{}, 17 (1969).

W. Selke, F. Szalma, P. Lajkó, and F. Iglói, J. Stat. Phys. [**89**]{}, 1079 (1997); F. Iglói, P. Lajkó, W. Selke, and F. Szalma, preprint cond-mat/9711182.
--- address: | Institute of Nuclear Physics, Catholic University of Louvain\ 2, Chemin du Cyclotron, B-1348 Louvain-la-Neuve, Belgium\ E-mail: govaerts@fynu.ucl.ac.be author: - Jan GOVAERTS title: | On the Road Towards\ the Quantum Geometer’s Universe:\ an Introduction to Four-Dimensional\ Supersymmetric Quantum Field Theory --- Introduction {#Sect1} ============ The organisers of the third edition of the COPROMAPH Workshops had thought it worthwhile to have the second series of lectures during the week-long meeting dedicated to an introduction to supersymmetric quantum field theories. An internationally renowned expert in the field had been invited, and was to deliver the course. Unfortunately, at the last minute fate had decided otherwise, depriving the participants of what would have been an introduction to the subject of outstanding quality. The present author was finally found to be on hand, without being able to do full justice to the wide relevance of the topic, ranging from pure mathematics and topology to particle physics phenomenology at its utmost best in anticipation of the running of the LHC at CERN by 2007. Fate had it also that the same author had already delivered a similar series of lectures at the previous edition of the COPROMAPH Workshops,[@GovCOPRO2] which in broad brush strokes attempted to paint with vivid colours the fundamental principles of XX$^{\rm th}$ century physics, underlying all the basic conceptual progresses having led to the relativistic quantum gauge field theory and classical general relativity frameworks for the present description of all known forms of elementary matter constituents and their fundamental interactions, as inscribed in the Standard Model of particle physics and Einstein’s classical theory of general relativity. At the same time, a few of the doors onto the roads winding deep into the uncharted territories of the physics that must lie well beyond were also opened.
It is thus all too fitting that we get the opportunity to trace together a few steps onto one of these roads, in the embodiment of a Minkowski spacetime structure extended into a superspace including now also anticommuting coordinates in addition to the usual commuting spacetime ones. We are truly embarking on a journey onto the roads leading towards the quantum geometer’s universe! Even if only by marking the path by a few white and precious pebbles to guide us into the unknown territory when the time will have come for more solitary explorations of one’s own in the composition, with a definite African beat, of the music scores of the unfinished symphony of XXI$^{\rm st}$ century physics.[@GovCOPRO2] Even though none are based on actual experimental facts, there exist a series of theoretical and conceptual motivations for considering supersymmetric extensions of ordinary Yang–Mills theories in the quest for a fundamental unification. Spacetime supersymmetry is a symmetry that exchanges particles of integer — bosons — and half-integer — fermions — spin,[^1] enforcing specific relations between the properties and couplings of each class of particles, when supersymmetry remains manifest in the spectrum of the system. In particular, since for what concerns ultra-violet (UV) short-distance divergences of quantum field theories in four-dimensional Minkowski spacetime fermionic fields are less ill-behaved than bosonic fields (namely, in terms of a cut-off in energy, divergences in fermionic loop amplitudes are usually only logarithmically divergent whereas those of bosonic loops are quadratically divergent), one should expect that in the presence of manifest supersymmetry, UV divergences should be better tamed for bosonic fields, being reduced to a logarithmic behaviour only as in the fermionic sector (this has important consequences which we shall not delve into here).
Another aspect is that within the context of superstring and M-theory[@Strings] with bosonic and fermionic states, quantum consistency is ensured provided supersymmetries are restricting the dynamics. In this sense, the existence of supersymmetry at some stage of unification beyond the Standard Model is often considered to be a natural prediction of M-theory. Besides such physics motivations just hinted at, supersymmetry has also proved to be of great value in mathematical physics, in the understanding of nonperturbative phenomena in quantum field theories and M-theory,[@Witten1; @MATH] and for uncovering deep connections between different fields of pure mathematics. The algebraic structures associated to Grassmann graded algebras are powerful tools with which to explore new limits in the concepts of geometry, topology and algebra.[@MATH] One cannot help but feel that a great opportunity would be missed if tomorrow’s quantum geometry would not make any use of supersymmetric algebraic structures. Since its discovery in the early 1970’s,[@SUSY1970; @WZ] applications of supersymmetry have been developed in such a diversity of directions and in so large a variety of fields of physics and mathematics, that it is impossible to do any justice to all that work in the span of any set of lectures, let alone only a few. Our aim here will thus be very modest. Namely, starting from the contents of the previous lecture notes,[@GovCOPRO2] build a bridge reaching the entry roads and the shores towards supersymmetric field theories and the fundamental concepts entering their construction. 
Not that the lectures delivered at the Workshop did not discuss the general superfield approach over superspace as the most efficient and transparent techniques for such constructions in the case of $\mathcal N=1$ supersymmetry, but the latter material being so widely and in such detailed form available from the literature, it is felt that rather a detailed introduction to the topics missing from Ref.  but necessary to understand supersymmetric field theories is of greater use and interest to most readers of this Proceedings volume. With these notes, our aim is thus to equip any interested reader with a few handy concepts and tools to be added to the backpack to be carried on his/her explorer’s journey towards the quantum geometer’s universe of XXI$^{\rm st}$ century physics, in search of the new principle beyond the symmetry principle of XX$^{\rm th}$ century physics.[@GovCOPRO2] Also by lack of space and time, even of the anticommuting type if the world happens to be supersymmetric indeed, we shall thus stop short of discussing explicitly any supersymmetric field theory in 4-dimensional Minkowski spacetime, even the simplest example of the $\mathcal N=1$ Wess-Zumino model[@WZ] that may be constructed using the hand-made tools of an amateur artist-composer in the art of supersymmetries. 
From where we shall leave the subject in these notes, further study could branch off into a variety of directions of wide-ranging applications, beginning with general supersymmetric quantum mechanics and the general superspace and superfield techniques for $\mathcal N=1$ and $\mathcal N=2$ supersymmetric field theories with Yang–Mills internal gauge symmetries and the associated Higgs mechanism of gauge symmetry breaking, to further encompass the search for new physics at the LHC through the construction of supersymmetric extensions of the Standard Model, or also reaching towards the duality properties of supersymmetric Yang–Mills and M-theory, mirror geometry, topological string and quantum field theories,[@GovTQFT] etc., to name just a few examples.[@Strings; @Witten1; @MATH] Let us thus point out a few standard textbooks and lectures for large and diversified accounts of these classes of theories and more complete references to the original literature. Some important such material is listed in Refs.  and . In particular, the lectures delivered at the Workshop were to a significant degree inspired by the contents of Ref. . Any further search through the SPIRES database ([http://www.slac.stanford.edu/spires/hep/]{}; UK mirror: [http://www-spires.dur.ac.uk/spires/hep/]{}) will quickly uncover many more useful reviews. In Sec. \[Sec2\], we briefly recall the basic facts of relativistic quantum field theory for bosonic degrees of freedom, discussed at greater length in Ref. , in order to explain why such systems are the natural framework for describing relativistic quantum point-particles. The same considerations are then developed in Sec. \[Sec3\] in the case of fermionic degrees of freedom associated to particles of half-integer spin, based on a discussion of the theory of finite dimensional representations of the Lorentz group, leading in particular to the free Dirac equation for the description of spin 1/2 particles without interactions. 
Section \[Sec4\] then considers, as a simple introductory illustration of some facts essential and generic to supersymmetric field theories, and much in the same spirit as that of the discussion in Sec. \[Sec2\], the $\mathcal N=1$ supersymmetric harmonic oscillator which already displays quite a number of interesting properties. Section \[Sec5\] then concludes with a series of final remarks related to the actual construction of supersymmetric field theories based on the general concepts of the Lie symmetry algebraic structures inherent to such relativistic invariant quantum field theories and their manifest realisations through specific choices of field content, indeed the underlying theme to both these lectures and the previous ones.[@GovCOPRO2] Basics of Quantum Field Theory: A Compendium for Scalar Fields {#Sec2} ============================================================== Within a relativistic classical framework,[^2] material reality consists, on the one hand, of dynamical fields, and on the other hand, of point-particles. Fields act on particles through forces that they develop, such as the Lorentz force of the electromagnetic field for charged particles, while particles react back onto the fields being sources for the latter, for instance through the charge and current density sources of the electromagnetic field in Maxwell’s equations (the same characterisation applies to the gravitational field equations of general relativity). This dichotomic distinction between matter and radiation is unified in a dual form when considering a relativistic quantum framework. Indeed, it then turns out that particles are nothing else than the quanta, [*i.e.*]{}, the quantum states of definite energy, momentum and spin, of quantum fields. Particles and fields are just the two complementary aspects of the quantum relativistic world of point-particles. All electrons, for example, are identical, being quanta of a single electron field filling all of spacetime. 
To each distinct species of particle corresponds a field, and vice-versa. This, in a word, is the essence of quantum field theory: the natural framework for a description of relativistic quantum point-particles, explaining their corpuscular properties when detected in energy-momentum eigenstates and their wave behaviour when considering their spacetime propagation. Let us briefly express these points in a somewhat more mathematical setting. Particles and Fields {#Sec2.1} -------------------- A free relativistic field may be seen to correspond to an infinite collection of harmonic oscillators sitting at each point in space, and coupled to one another through a nearest-neighbour term in the action of the field’s dynamics. Let us first recall a few basic facts about the one-dimensional harmonic oscillator. Its dynamics derives through the variational principle from the action $$S[q]=\int dt\,m\left[\frac{1}{2}\left(\frac{dq}{dt}\right)^2- \frac{1}{2}\omega^2q^2\right]\ , \label{eq:SHO}$$ where the coordinate $q(t)\in\mathbb R$ spans the configuration space of the system. How to perform the standard canonical operator quantisation of this system is well known,[@GovCOPRO2] leading, in the Heisenberg picture, to the following quantum operator representation, $$\hat{q}(t)=\sqrt{\frac{\hbar}{2m\omega}}\, \left[a\,e^{-i\omega t}\,+\,a^\dagger\,e^{i\omega t}\right]\ , \label{eq:solHO}$$ obeying the operator equation of motion $$\left[\,\frac{d^2}{dt^2}\ +\ \omega^2\,\right]\,\hat{q}(t)=0\ . 
\label{eq:ELHO}$$ Here, $a$ and $a^\dagger$ are the annihilation and creation operators for the quanta of the system (they are complex conjugate integration constants at the classical level), obeying the Fock space algebra $$[a,a^\dagger]=\one\ .$$ The quantum Hamiltonian $\hat{H}=\hbar\omega(a^\dagger a+1/2)$ is diagonal in the Fock state basis, constructed as follows for all natural numbers $n=0,1,2,\cdots$, $$|n\rangle=\frac{1}{\sqrt{n!}}(a^\dagger)^n|0\rangle\ \ ,\ \ \langle n|m\rangle=\delta_{nm}\ \ ,\ \ a|0\rangle=0\ \ ,\ \ \hat{H}|n\rangle=\hbar\omega(n+\frac{1}{2})|n\rangle\ .$$ The physical interpretation is that the state $|0\rangle$ defines the ground state or vacuum of the quantum oscillator, with the discrete set of states $|n\rangle$ ($n=1,2,\cdots$) corresponding to excitations of the oscillator with $n$ quanta each contributing an energy $\hbar\omega$ on top of the vacuum quantum energy $\hbar\omega/2$ due to so-called vacuum quantum fluctuations. In particular, the operators $a$ and $a^\dagger$ are ladder operators between the Fock states, $$a|n\rangle=\sqrt{n}\,|n-1\rangle\ \ ,\ \ a^\dagger|n\rangle=\sqrt{n+1}|n+1\rangle\ \ ,\ \ a^\dagger a|n\rangle=n|n\rangle\ .$$ Thus, here we have a mathematical framework in which the quantisation of a configuration space $q(t)\in\mathbb R$ leads to an algebra of quantum operators representing the creation and annihilation of energy eigenstates. In order to describe the dynamics of relativistic quantum point-particles which likewise, as observed in experiments, may be created and annihilated, we shall borrow a similar mathematical framework. Since the harmonic oscillator is a system invariant under translations in time, according to Noether’s theorem[@GovBook] there must exist a conserved quantity associated to this continuous symmetry whose value coincides with the energy of the system, namely its Hamiltonian. 
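The ladder relations above are easy to verify numerically on a truncated Fock space. The sketch below (plain Python; the truncation dimension $N$ is an illustrative choice, not from the text) represents $a$ and $a^\dagger$ as finite matrices and checks that $a^\dagger a$ is the number operator and that $[a,a^\dagger]=\one$ holds away from the truncation edge:

```python
import math

N = 8  # truncated Fock space: states |0>, ..., |N-1> (illustrative cut-off)

# a|n> = sqrt(n)|n-1>, so the matrix element <n-1|a|n> = sqrt(n)
a = [[math.sqrt(col) if row == col - 1 else 0.0 for col in range(N)]
     for row in range(N)]
# hermitian conjugate a^dagger (entries are real, so just transpose)
adag = [[a[col][row] for col in range(N)] for row in range(N)]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(N)) for j in range(N)]
            for i in range(N)]

number = matmul(adag, a)   # the number operator a^dagger a
aad = matmul(a, adag)
comm = [[aad[i][j] - number[i][j] for j in range(N)] for i in range(N)]
```

On the truncated space, $a^\dagger a$ comes out diagonal with eigenvalues $0,1,\dots,N-1$, and $[a,a^\dagger]$ equals the identity everywhere except in the last diagonal entry, an artefact of the cut-off (the infinite-dimensional Fock space has no such edge).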
In the case of relativistic particles defined over Minkowski spacetime,[^3] invariance under spacetime translations implies the existence of conserved quantities associated to these symmetries, namely the particle’s total energy and momentum, which we shall denote $k^\mu=(k^0,\vec{k}\,)$ with $k^0=\omega(\vec{k}\,)=\sqrt{\vec{k}\,^2+m^2}$, $m$ being the particle’s mass. Consequently, let us introduce the annihilation and creation operators, $a(\vec{k}\,)$ and $a^\dagger(\vec{k}\,)$, respectively, for particles of given momentum $\vec{k}$ and energy $\omega(\vec{k}\,)$, and obeying the commutation relations[^4] $$\left[a(\vec{k}\,),a^\dagger(\vec{\ell}\,)\right]= (2\pi)^3\,2\omega(\vec{k}\,)\,\delta^{(3)}(\vec{k}-\vec{\ell}\,)\ .$$ For instance, 1-particle states are thus constructed as $$|\vec{k}\rangle=a^\dagger(\vec{k}\,)|0\rangle\ \ ,\ \ \langle\vec{k}|\vec{\ell}\,\rangle=(2\pi)^3 2\omega(\vec{k}\,)\, \delta^{(3)}(\vec{k}-\vec{\ell}\,)\ ,$$ $|0\rangle$ being the Fock vacuum of the system. In order to identify the actual configuration space of the system that is being considered, by analogy with (\[eq:solHO\]), let us construct the following quantum operator in the Heisenberg picture, $$\hat{\phi}(x^\mu)=\int\frac{d^3\vec{k}}{(2\pi)^32\omega(\vec{k}\,)} \left[a(\vec{k}\,)\,e^{-ik\cdot x}\,+\, a^\dagger(\vec{k}\,)\,e^{ik\cdot x}\,\right]\ ,$$ where the inner product in the plane wave factors is defined to be given by $k\cdot x=\omega(\vec{k}\,)x^0-\vec{k}\cdot\vec{x}$, thus with the on-shell energy value $k^0=\omega(\vec{k}\,)$. Note that in comparison to (\[eq:solHO\]), the plane wave factors, corresponding to positive- and negative-frequency components of the wave equation and involving only a time dependence in the case of the harmonic oscillator, have now been extended to the spacetime dependent Lorentz invariant quantity $k\cdot x$. 
A relativistically covariant description of quantum point-particles being created and annihilated requires precisely such an extension of the plane wave contributions. Furthermore, the operator $\hat{\phi}(x^\mu)$ obeys the quantum equation of motion $$\left[\,\frac{\partial^2}{\partial t^2}\,-\, \vec{\nabla}^2\,+\,m^2\,\right]\,\hat{\phi}(x^\mu)=0\ ,$$ in which one recognises of course the Klein–Gordon equation, deriving also from the action $$S[\phi]=\int dt\int_{(\infty)} d^3\vec{x}\, \left[\frac{1}{2}\left(\frac{\partial\phi}{\partial t}\right)^2- \frac{1}{2}\left(\vec{\nabla}\phi\right)^2-\frac{1}{2}m^2\phi^2\right]\ . \label{eq:KGaction}$$ In other words, such a framework for the description of relativistic quantum point-particles and their creation and annihilation naturally leads to a relativistic quantum field theory. The configuration space of such a system is that of the relativistic real scalar field $\phi(x^\mu)$. In the quantum world, configurations of this field are observed through its energy and momentum eigenstates — thanks to the invariance under spacetime translations of the Klein–Gordon action — which are nothing else than the particle quanta of the field. To any relativistic quantum field one associates relativistic quantum point-particles, and to any ensemble of indistinguishable relativistic quantum point-particles one associates a relativistic quantum field. Fields and particles are only two dual aspects in a relativistic quantum universe whose basic “constituents” are dynamical fields. Note that relativistic covariance, which forced the extension to the Lorentz invariant $k\cdot x$ in the plane wave contributions, also explains the appearance of the gradient terms $\vec{\nabla}\phi$ in the Klein–Gordon wave equation and action. 
Had this term been absent, the field $\phi(x^\mu)$ would indeed have described simply an infinite collection of harmonic oscillators $q_{\vec{x}}(t)=\phi(t,\vec{x}\,)$ each fixed at each of the points in space, and oscillating independently of one another. However, the gradient term introduces a specific nearest-neighbour coupling between these oscillators, such that any disturbance set up in any one of them will quickly spread throughout space in a wave-like manner because of the gradient coupling between adjacent oscillators. These linear waves are characterised by their wave-number vector $\vec{k}$ and frequency $\omega(\vec{k}\,)$, which, at the quantum level, are identified with the quanta’s or particles’ conserved momentum and energy. Indeed, because of Noether’s theorem associated to the invariance under spacetime translations, the total energy and momentum of the field take these values for the 1-particle quantum states $|\vec{k}\rangle$. The system is also invariant under the full Lorentz group of spacetime (pseudo)rotations, hence leading also to further conserved quantum numbers of the field and its quanta associated to their spin. In the present instance, since the field $\phi(x^\mu)$ transforms as a scalar under the Lorentz group (namely, it is left invariant), the quanta of such a real scalar field carry zero spin. The above argument thus explains why relativistic quantum field theory provides the natural framework for the description of relativistic quantum point-particles that may be annihilated and created. 
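The nearest-neighbour picture can be made concrete by discretising the field on a periodic one-dimensional lattice: the plane waves then diagonalise the gradient coupling, with a lattice dispersion relation that reduces to $\omega^2(\vec{k}\,)=m^2+\vec{k}\,^2$ in the continuum limit. A minimal sketch (the lattice size, spacing and mass below are illustrative values, not taken from the text):

```python
import cmath, math

M, dx, m = 16, 0.5, 1.0   # number of sites, lattice spacing, mass (illustrative)

def apply_K(v):
    # quadratic form of the discretised Klein-Gordon dynamics on a periodic chain:
    # (K v)_j = m^2 v_j + (2 v_j - v_{j+1} - v_{j-1}) / dx^2
    return [m * m * v[j] + (2 * v[j] - v[(j + 1) % M] - v[(j - 1) % M]) / dx**2
            for j in range(M)]

n = 3                              # mode number
k = 2 * math.pi * n / (M * dx)     # allowed wave number on the periodic lattice
wave = [cmath.exp(1j * k * j * dx) for j in range(M)]

# lattice dispersion: omega^2 = m^2 + (4/dx^2) sin^2(k dx / 2)  ->  m^2 + k^2 as dx -> 0
omega2 = m * m + (4 / dx**2) * math.sin(k * dx / 2) ** 2
Kwave = apply_K(wave)
```

The plane wave is an exact normal mode of the coupled chain, which is the discrete counterpart of the statement that disturbances propagate as waves labelled by $\vec{k}$ and $\omega(\vec{k}\,)$.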
An explicit canonical quantisation starting from the classical Klein–Gordon action (\[eq:KGaction\]) of course recovers all the above results, simply by applying the usual rules of quantum mechanics to this system of degrees of freedom $q_{\vec{x}}(t)=\phi(t,\vec{x}\,)$.[@GovCOPRO2] Furthermore, it is also possible to set up a perturbation expansion for the introduction of spacetime local interactions between such fields or with themselves (simply by adding to the Klein–Gordon Lagrangian density higher order products of the fields at each point in spacetime, thus preserving spacetime locality and causality), and the systematic calculation, through Feynman diagrams and the corresponding Feynman rules, of matrix elements of the scattering S-matrix.[@GovCOPRO2] Hence finally, decay rates and cross-sections for processes occurring between the quanta associated to such fields may be evaluated, at least through perturbation theory, starting from any given quantum field theory extended to include also interactions. As discussed in Ref. , it is at this stage that the short-distance UV divergences appear in loop amplitudes, for which the renormalisation programme has been designed.[@QFT] Large classes of renormalisable theories, [*i.e.*]{}, theories for which specific predictions may be made, have thereby been identified, and they all fall within the class of Yang–Mills theories of local gauge interactions extended in different manners and involving particles of spin 0 and 1/2 for the matter fields, and of spin 1 for the gauge fields associated to the gauge interactions. These are the basic concepts going into the construction of the Standard Model of the strong and electroweak interactions, based on the gauge symmetry group SU(3)$_{\rm C}\times$SU(2)$_{\rm L}\times$U(1)$_{\rm Y}$. A brief discussion of Yang–Mills theories, the Higgs mechanism of spontaneous symmetry breaking and the generation of mass is available in Ref. . 
All the above may readily be extended to collections of real scalar fields. When further internal symmetries appear,[^5] additional internal quantum numbers exist, by virtue of Noether’s theorem, and particle quanta may then be classified according to specific linear representations of that internal symmetry group when realised in the Wigner mode.[^6] It thus proves very efficient to base the consideration of the construction of relativistic quantum field theories, of which the particle quanta carry collections of conserved quantum numbers and specific interactions governed by the associated symmetries, on a Lagrangian formulation, since symmetries of the dynamics are then made manifest, readily leading to the identification of the conserved quantities through the Noether theorem. Thus when turning to the construction of field theories possessing the invariance under supersymmetry transformations, the analysis will be performed directly in terms of the Lagrangian density once the field content is specified. Spacetime Symmetries {#Sec2.2} -------------------- Since supersymmetry relates particles of integer and half-integer spin, it is a symmetry that intertwines with the spacetime symmetry of the full Poincaré group in Minkowski spacetime. It is thus important that we first understand the basics of the Poincaré group algebra, involved in the construction of any relativistic quantum field theory over Minkowski spacetime. Acting on the spacetime coordinates, the ISO(1,3) Poincaré group (in a 4-dimensional Minkowski spacetime) is defined by the transformations $${x'}^\mu={\Lambda^\mu}_\nu\,x^\nu\ +\ a^\mu\ ,$$ where the 4-vector $a^\mu$ represents a constant spacetime translation, and ${\Lambda^\mu}_\nu$ a constant SO(1,3) Lorentz (pseudo)rotation leaving invariant the Minkowski metric, $$\eta_{\rho\sigma}{\Lambda^\rho}_\mu\,{\Lambda^\sigma}_\nu= \eta_{\mu\nu}\ .$$ These transformations also act on the field content of a given theory. 
In the case of scalar fields, one has simply $$\phi'(x')=\phi(x)\ ,$$ and more generally for a collection of real or complex scalar fields, a similar relation holds component by component. Note that at the quantum level for the quantum field operators, such transformations are generated by a representation $U(a,\Lambda)$ of the Poincaré group acting through the adjoint action on the field operator in the Heisenberg picture, $$\hat{\phi}(x')=U(a,\Lambda)\hat{\phi}(x)U^\dagger(a,\Lambda)\ .$$ The Poincaré group, being an abstract Lie group, possesses a collection of generators, each of which is associated to an independent type of transformation. Thus for spacetime translations with the parameters $a^\mu$ one has the generators $P^\mu$, while for spacetime Lorentz (pseudo)rotations with the parameters ${\Lambda^\mu}_\nu$ one has the generators $M^{\mu\nu}=-M^{\nu\mu}$. At the abstract level, the corresponding Lie algebra is given by the nonvanishing Lie brackets, $$\begin{array}{r c l} \left[P^\mu,P^\nu\right]&=&0\ \ \ ,\ \ \ \left[P^\mu,M^{\nu\rho}\right]= i\left[\eta^{\mu\nu}P^\rho-\eta^{\mu\rho}P^\nu\right]\ ,\\ & & \\ \left[M^{\mu\nu},M^{\rho\sigma}\right]&=& -i\left[\eta^{\mu\rho}M^{\nu\sigma}-\eta^{\mu\sigma}M^{\nu\rho}- \eta^{\nu\rho}M^{\mu\sigma}+\eta^{\nu\sigma}M^{\mu\rho}\right]\ , \end{array}$$ where the last set of brackets determines the Lorentz algebra. 
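These brackets can be checked concretely in the defining vector representation, where the Lorentz generators act on $x^\mu$ as the matrices $(M^{\mu\nu})^\alpha{}_\beta = i(\eta^{\mu\alpha}\delta^\nu_\beta - \eta^{\nu\alpha}\delta^\mu_\beta)$ — a standard choice, assumed here for illustration rather than quoted from the text. The plain-Python sketch below verifies the Lorentz part of the algebra for all index combinations:

```python
eta = [1.0, -1.0, -1.0, -1.0]  # Minkowski metric diag(+,-,-,-)

def M(mu, nu):
    # vector representation: (M^{mu nu})^a_b = i (eta^{mu a} delta^nu_b - eta^{nu a} delta^mu_b)
    return [[1j * (eta[mu] * (a == mu) * (b == nu) - eta[nu] * (a == nu) * (b == mu))
             for b in range(4)] for a in range(4)]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)] for i in range(4)]

def lin(*terms):
    # complex-linear combination of 4x4 matrices: lin((c1, A1), (c2, A2), ...)
    return [[sum(c * A[i][j] for c, A in terms) for j in range(4)] for i in range(4)]

def eta_up(m, n):
    return eta[m] if m == n else 0.0   # eta^{mu nu} (diagonal)

def check(mu, nu, rho, sig):
    # [M^{mu nu}, M^{rho sig}] against the bracket quoted in the text
    lhs = lin((1, matmul(M(mu, nu), M(rho, sig))), (-1, matmul(M(rho, sig), M(mu, nu))))
    rhs = lin((-1j * eta_up(mu, rho), M(nu, sig)), (1j * eta_up(mu, sig), M(nu, rho)),
              (1j * eta_up(nu, rho), M(mu, sig)), (-1j * eta_up(nu, sig), M(mu, rho)))
    return max(abs(lhs[i][j] - rhs[i][j]) for i in range(4) for j in range(4))
```

Since the bracket relations are representation independent, passing the check in the vector representation is a reliable sanity test of the signs and index placements quoted above.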
These generators induce finite Poincaré transformations through their exponentiated action in the abstract realisation of the group, $$U(a,\Lambda)=e^{ia_\mu P^\mu+\frac{1}{2}i\omega_{\mu\nu}M^{\mu\nu}}\ .$$ According to the Noether theorem, any dynamics of which the Lagrange function is invariant (possibly up to a total divergence) under Poincaré transformations possesses conserved quantities — the Noether charges — for solutions to the classical equations of motion, which, in the Hamiltonian formulation, generate through Poisson brackets these symmetry transformations on phase space, and possess among themselves Poisson brackets which coincide with the above Lie algebra brackets.[^7] Hence, the Noether charges provide the explicit realisation within a given system of the associated symmetry generators in terms of the relevant degrees of freedom. Thus for instance, in the case of the relativistic quantum scalar particle in the configuration space wave function representation,[@GovCOPRO2] the Poincaré algebra is realised by the operators $$P_\mu=-i\hbar\frac{\partial}{\partial x^\mu}=-i\hbar\partial_\mu\ \ \ ,\ \ \ M_{\mu\nu}=P_\mu x_\nu-P_\nu x_\mu\ ,$$ where in the last expression the Lorentz covariant extension of the usual orbital angular-momentum definition is recognised. Likewise given any relativistic invariant local field theory, Noether’s theorem guarantees the existence of conserved charges given by explicit functionals of the fields which generate Poincaré transformations through Poisson brackets at the classical level, and through commutation relations at the quantum level. However, due to Lorentz covariance and spacetime locality of the Lagrange function given as a space integral of a Lagrangian density, it follows that the conservation condition is expressed[@GovBook] through a divergenceless condition on a conserved current density, of which the conserved charge is given by the integral over space of its time component. 
In general terms, $$\partial_\mu J^\mu=0\ \ \ ({\rm on\!-\!shell})\ \ \ ,\ \ \ Q=\int_{(\infty)}d^3\vec{x}\,J^{\mu=0}\ ,$$ where $J^\mu$ denotes the Noether current density and $Q$ the associated Noether charge, while “on-shell” stands for the fact that the conservation property holds only for solutions to the classical equations of motion. In the case of a single real scalar field, the detailed analysis of the Noether identities[@GovBook; @QFT] associated to the invariance of the Lagrangian density under Poincaré transformations establishes that the Noether density is given by $${\mathcal T}_{\mu\nu}=\partial_\mu\phi\partial_\nu\phi-\eta_{\mu\nu} {\mathcal L}\ ,$$ a quantity which defines the energy-momentum density of the system. In particular, the total energy-momentum content $P^\mu$ of the field is then given as $$P^\mu=\int_{(\infty)}d^3\vec{x}\,{\mathcal T}^{0\mu}\ :\ \ P^0=H_0=\int_{(\infty)}d^3\vec{x}\,{\mathcal H}_0\ \ ,\ \ \vec{P}=-\int_{(\infty)}d^3\vec{x}\,\pi_\phi\vec{\nabla}\phi\ ,$$ where ${\mathcal H}_0$ stands for the canonical Hamiltonian density of the system, and $\pi_\phi=\partial_0\phi$ for the momentum conjugate to the scalar field. For the total angular-momentum content, one also has $$M^{\mu\nu}=\int_{(\infty)}d^3\vec{x}\, \left[{\mathcal T}^{0\mu}x^\nu\,-\,{\mathcal T}^{0\nu}x^\mu\right]\ .$$ Once given such expressions as well as the mode expansions of the scalar field and its conjugate momentum in terms of the creation and annihilation operators of its quanta, it is possible to also determine the representations of these Poincaré charges in terms of the particle content of the field, whether at the classical or the quantum level, as operators acting on Hilbert space. 
Thus, once a normal ordering prescription is applied onto composite operators — whereby creation operators are always brought to the left of annihilation operators[@GovCOPRO2; @QFT] —, one finds for the energy-momentum content of the field, $$\hat{P}^\mu=\int\frac{d^3\vec{k}}{(2\pi)^32\omega(\vec{k}\,)}\, k^\mu\,a^\dagger(\vec{k}\,)a(\vec{k}\,)\ ,$$ while its angular-momentum $\hat{M}^{\mu\nu}$ decomposes according to,[@QFT] $$\begin{array}{r c l} \hat{M}_{0j}&=&i\int\frac{d^3\vec{k}}{(2\pi)^32\omega(\vec{k}\,)}\, a^\dagger(\vec{k}\,)\left[\omega(\vec{k}\,)\frac{\partial}{\partial k^j}\right] a(\vec{k}\,)\ ,\\ & & \\ \hat{M}_{j\ell}&=& i\int\frac{d^3\vec{k}}{(2\pi)^32\omega(\vec{k}\,)}\, a^\dagger(\vec{k}\,)\left[k_j\frac{\partial}{\partial k^{\ell}}\ -\ k_{\ell}\frac{\partial}{\partial k^j}\right] a(\vec{k}\,)\ . \end{array}$$ The expectation values of these quantities may thus be determined for whatever quantum state the quantum field finds itself in. In particular, 1-particle states define specific eigenstates of these Poincaré generators (see below). As is well known, representations of the Poincaré algebra ISO(1,3) are characterised by the eigenvalues of its two Casimir operators, namely the invariant energy $\hat{P}^2=\hat{P}^\mu\hat{P}^\nu\eta_{\mu\nu}$ which measures the invariant mass of field configurations, and the relativistic invariant $\hat{W}^2$ of the Pauli–Lubanski 4-vector $\hat{W}^\mu=\frac{1}{2}\epsilon^{\mu\nu\rho\sigma} \hat{P}_{\nu}\hat{M}_{\rho\sigma}$,[^8] which commutes with $\hat{P}^\mu$. A massive representation of the Poincaré group is thus characterised by the eigenvalues $\hat{P}^2=m^2$ and $\hat{W}^2=-m^2s(s+1)$, where $m>0$ stands for its mass and $s$ for its spin, an integer or half-integer valued quantity defining an irreducible representation of the group SU(2), the universal covering group of the 3-dimensional rotation group SO(3), sharing the same Lie algebra of infinitesimal rotations in space. 
For a massless representation, one has $\hat{P}^2=0$ and $\hat{W}^2=0$. Such representations are characterised by the helicity $s$ of the state, namely a specific representation of the helicity group SO(2), the rotation subgroup of the Wigner little group for a massless particle.[^9] For instance, for a light-like energy-momentum 4-vector $P^\mu=E(1,0,0,1)$, one has $W^\mu=M_{12}P^\mu$, so that $M_{12}$ takes the possible eigenvalues $\pm s$. In the case of the scalar field, it is then straightforward to identify the particle content of its Hilbert space. A 1-particle state $|\vec{k}\rangle=a^\dagger(\vec{k}\,)|0\rangle$ is characterised by the eigenvalues $$\hat{P}^0|\vec{k}\rangle=\omega(\vec{k}\,)|\vec{k}\rangle\ \ ,\ \ \hat{\vec{P}}|\vec{k}\rangle=\vec{k}\,|\vec{k}\rangle\ \ ,\ \ \hat{W}^2|\vec{k}\rangle=0\ ,$$ thus showing that indeed, the quanta of such a quantum field may be identified with particles of definite energy-momentum and mass $m$, carrying a vanishing spin (in the massive case) or helicity (in the massless case). Relativistic quantum field theories are thus the natural framework in which to describe all the relativistic quantum properties, including the processes of their annihilation and creation in interactions, of relativistic quantum point-particles. It is the Poincaré invariance properties, namely the relativistic covariance of such systems, that also justifies, on account of Noether’s theorem, this physical interpretation. One has to learn how to extend the above description to more general field theories whose quanta are particles of nonvanishing spin or helicity. Clearly, one then has to consider collections of fields whose components also mix under Lorentz transformations, namely nontrivial representations[^10] of the Lorentz group. 
Spinor Representations of the Lorentz Group\ and Spin 1/2 Particles {#Sec3} ============================================ The Lorentz Group and Its Covering Algebra {#Sec3.1} ------------------------------------------ Let us now consider the possibility that a collection of fields $\phi_\alpha(x)$ (whether real or complex), distinguished by a component index $\alpha$, provide a linear representation space of the Poincaré group, whose action is defined according to[^11] $$\begin{array}{r l c l} - &{\rm translations}\ :&&\ \ \phi'_\alpha(x'=x+a)=\phi_\alpha(x)\ ,\\ & & & \\ - &{\rm Lorentz\ transformations}\ :&&\ \ \phi'_\alpha(x'=\Lambda\cdot x)={S_\alpha}^\beta(\Lambda)\,\phi_\beta(x)\ . \end{array}$$ The sought for collection of fields is to provide a representation space of the associated Lorentz $so(1,3)$ algebra, $$\left[M^{\mu\nu},M^{\rho\sigma}\right]= -i\left[\eta^{\mu\rho}M^{\nu\sigma}- \eta^{\mu\sigma}M^{\nu\rho}- \eta^{\nu\rho}M^{\mu\sigma}+ \eta^{\nu\sigma}M^{\mu\rho}\right]\ ,$$ where the Lorentz boost generators $M^{0i}$ ($i=1,2,3$) must be taken to be anti-hermitian, and the generators of rotations in space $M^{ij}$ hermitian,[^12] $$\left(M^{0i}\right)^\dagger=-M^{0i}\ \ \ ,\ \ \ \left(M^{ij}\right)^\dagger=M^{ij}\ \ \ ,i,j=1,2,3\ .$$ In order to exploit now a feature unique to 4-dimensional Minkowski spacetime, let us introduce the following change of basis in the complexified Lorentz Lie algebra, $$L^i_\pm=\frac{1}{2}\left[L^i\pm iK^i\right]\ \ ,\ \ K^i=M^{0i}\ \ ,\ \ L^i=\frac{1}{2}\epsilon^{ijk}M^{jk}\ ,$$ $${L^i_\pm}^\dagger=L^i_\pm\ \ ,\ \ {K^i}^\dagger=-K^i\ \ ,\ \ {L^i}^\dagger=L^i\ .$$ Note that the generators $L^i_\pm$ combine a Lorentz boost in the direction $i$ with a rotation around that direction in opposite directions, hence in effect defining chiral rotations in spacetime, and leading to hermitian generators for the complexified Lorentz algebra. 
A direct calculation then readily finds that in terms of these chiral generators $L^i_\pm$, the Lorentz algebra factorises into the direct sum of two $su(2)_\pm$ algebras, $$\left[L^i_\pm,L^j_\pm\right]=i\epsilon^{ijk}L^k_\pm\ \ \ ,\ \ \ \left[L^i_\pm,L^j_\mp\right]=0\ .$$ In other words, the complexification of the $so(1,3)$ Lorentz algebra is isomorphic to the algebra $su(2)_+\oplus su(2)_-=sl(2,\mathbb C)$.[^13] Consequently, the universal covering algebra (over $\mathbb C$) of the Lorentz group algebra $so(1,3)$ is that of the group SU(2)$_+\times$SU(2)$_-$. The obvious advantage of this result is that the representation theory of the Lorentz group in 4-dimensional Minkowski spacetime may be understood in terms of representations of the SU(2) group, which are well known from the notion of spin and angular-momentum in nonrelativistic quantum mechanics. To each of the factors $su(2)_\pm$ one must associate an integer or half-integer value $j_\pm$ which determines a specific irreducible representation of SU(2), namely that of “spin” $j_\pm$. Thus finite dimensional irreducible representations of the Lorentz group SO(1,3) are characterised by a pair of integer or half-integer values $(j_+,j_-)$. The trivial representation is that characterised by $(j_+,j_-)=(0,0)$. Next one has the two inequivalent representations $(j_+,j_-)=(1/2,0)$ and $(j_+,j_-)=(0,1/2)$, which will be seen to play a fundamental role hereafter. One may also have for instance $(j_+,j_-)=(1/2,1/2),(1,0),(0,1)$, etc. 
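The chiral factorisation is easily confirmed in any explicit representation. The sketch below rebuilds the standard vector-representation Lorentz matrices (an assumption of this illustration, with $(M^{\mu\nu})^\alpha{}_\beta = i(\eta^{\mu\alpha}\delta^\nu_\beta - \eta^{\nu\alpha}\delta^\mu_\beta)$), forms $L^i_\pm$, and checks the two decoupled $su(2)_\pm$ algebras numerically:

```python
eta = [1.0, -1.0, -1.0, -1.0]  # Minkowski metric diag(+,-,-,-)

def Mgen(mu, nu):
    # vector representation of the Lorentz generators
    return [[1j * (eta[mu] * (a == mu) * (b == nu) - eta[nu] * (a == nu) * (b == mu))
             for b in range(4)] for a in range(4)]

def add(A, B, c=1):
    return [[A[i][j] + c * B[i][j] for j in range(4)] for i in range(4)]
def scal(c, A):
    return [[c * A[i][j] for j in range(4)] for i in range(4)]
def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)] for i in range(4)]
def comm(A, B):
    return add(mul(A, B), mul(B, A), -1)
def close(A, B):
    return max(abs(A[i][j] - B[i][j]) for i in range(4) for j in range(4)) < 1e-12

K = [Mgen(0, i) for i in (1, 2, 3)]              # boosts K^i = M^{0i}
L = [Mgen(2, 3), Mgen(3, 1), Mgen(1, 2)]         # rotations L^i = (1/2) eps^{ijk} M^{jk}
Lp = [scal(0.5, add(L[i], scal(1j, K[i]))) for i in range(3)]   # L^i_+ = (L^i + i K^i)/2
Lm = [scal(0.5, add(L[i], scal(-1j, K[i]))) for i in range(3)]  # L^i_- = (L^i - i K^i)/2
```

The assertions below check $[L^i_\pm,L^j_\pm]=i\epsilon^{ijk}L^k_\pm$ on representative index choices, and that the two chiral sectors commute entirely.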
In fact, since in SU(2), all representations may be obtained through tensor products of the fundamental $j=1/2$ spinor representation, likewise for the Lorentz group, all its finite dimensional irreducible representations may be obtained through tensor products of the two inequivalent spinor representations $(j_+,j_-)=(1/2,0)$ and $(j_+,j_-)=(0,1/2)$, which are thus the two fundamental representations of the Lorentz group, known as the Weyl spinors of opposite right- or left-handed chiralities, respectively. Given any such $(j_+,j_-)$ Lorentz representation, its spin content may also easily be identified. Indeed, in terms of the chiral generators $L^i_\pm$, the SO(3) angular-momentum generators $L^i$ are obtained simply as the direct sum $L^i=L^i_++L^i_-$. Thus, the spin content of a given $(j_+,j_-)$ representation is simply obtained through the usual rules for spin reduction of tensor products of SU(2) representations. Consequently, a $(j_+,j_-)$ representation contains spin representations of $so(3)=su(2)$ of values spanning the range $$|j_+-j_-|,\ |j_+-j_-|+1,\ \cdots,\ j_++j_-\ .$$ Finally, a given $(j_+,j_-)$ Lorentz representation is not invariant under parity. Indeed, under this transformation in space, the Lorentz boost generators $K^i$ change sign whereas the angular-momentum ones $L^i$ do not. Hence under parity, the two classes of chiral operators $L^i_\pm$ are simply exchanged, inducing the correspondence under parity of the representations $(j_+,j_-)$ and $(j_-,j_+)$. Consequently, when the Lorentz group SO(1,3) is extended to also include the parity transformation, its irreducible representations are to be combined into the direct sums $(j_+,j_-)\oplus(j_-,j_+)$ in the case of distinct values for $j_+$ and $j_-$. 
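The spin-content rule just stated is simple enough to automate. A small helper (a hypothetical function name, using exact rational arithmetic so that half-integer spins stay exact) lists the spins in a $(j_+,j_-)$ representation and lets one check the dimension count $\sum_s (2s+1)=(2j_++1)(2j_-+1)$:

```python
from fractions import Fraction

def spin_content(jp, jm):
    # Spins contained in the (j+, j-) Lorentz representation:
    # |j+ - j-|, |j+ - j-| + 1, ..., j+ + j-   (usual SU(2) tensor-product rule)
    jp, jm = Fraction(jp), Fraction(jm)
    lo, hi = abs(jp - jm), jp + jm
    return [lo + n for n in range(int(hi - lo) + 1)]
```

For instance, `spin_content(Fraction(1, 2), Fraction(1, 2))` recovers the spin 0 and spin 1 content of the 4-vector representation $(1/2,1/2)$.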
Given all these considerations, one may list the representations which are invariant under parity and correspond to the lowest spin or helicity content possible, $$\begin{array}{r c l} (0,0)\ &:&\ \ \ {\rm scalar\ field}\ \phi;\\ (1/2,0)\oplus(0,1/2)\ &:&\ \ {\rm Dirac\ spinor}\ \psi;\\ (1/2,1/2)\ &:&\ \ {\rm vector\ field}\ A_\mu;\\ (1,0)\oplus(0,1)\ &:&\ \ {\rm electromagnetic\ field\ strength\ tensor}\\ & & \ \ F_{\mu\nu}=\partial_\mu A_\nu-\partial_\nu A_\mu\simeq(\vec{E},\vec{B})\ . \end{array}$$ The simplest $\mathcal N=1$ supersymmetry realisation in 4-dimensional Minkowski spacetime in fact relates scalar and spinor fields, as well as spinor and vector fields. The fundamental Lorentz spinors correspond to the right- and left-handed Weyl spinors $(1/2,0)$ and $(0,1/2)$, respectively, which are exchanged under the parity transformation. In terms of the quanta of such fields, Weyl spinors describe massless particles of fixed helicity $s=\pm 1/2$ equal to the chirality $\pm 1/2$ of the Weyl spinor, and antiparticles of the opposite helicity $s=\mp 1/2$. Weyl spinors must be combined in order to describe massive spin 1/2 particles, one possibility being the celebrated Dirac spinor and its Dirac equation, describing massive spin 1/2 particles and antiparticles invariant under parity, which is to be discussed hereafter. An Interlude on SU(2) Representations {#Sec3.2} ------------------------------------- Let us pause for a moment to recall a few well known facts concerning SU(2) representations, that will become relevant in the next section. The $su(2)$ Lie algebra is spanned by three generators $T^i$ $(i=1,2,3)$ with the Lie bracket algebra $$\left[T^i,T^j\right]=i\epsilon^{ijk}\,T^k\ \ ,\ \ \epsilon^{123}=+1\ .$$ As is the case for any SU(N) algebra, [*a priori*]{}, SU(2) possesses two fundamental representations of dimension two, complex conjugates of one another, namely the spinor representations of SU(2) or SO(3). 
There is the “covariant” 2-dimensional representation $\underline{\bf 2}$, a vector space spanned by covariant complex valued doublet vectors $a_\alpha$ $(\alpha=1,2)$ transforming under a SU(2) group element ${U_\alpha}^\beta$, with $U^\dagger=U^{-1}$ and ${\rm det}\,U=1$, as $${a'}_\alpha={U_\alpha}^\beta\,a_\beta\ .$$ This representation is also associated to the generators $$T^i=\frac{1}{2}\sigma_i\ \ \ ,\ \ \ i=1,2,3\ ,$$ the $\sigma_i$ being the usual Pauli matrices,[^14] $$\sigma_1=\left(\begin{array}{c c} 0 & 1 \\ 1 & 0 \end{array}\right)\ \ ,\ \ \sigma_2=\left(\begin{array}{c c} 0 & -i \\ i & 0 \end{array}\right)\ \ ,\ \ \sigma_3=\left(\begin{array}{c c} 1 & 0 \\ 0 & -1 \end{array}\right)\ .$$ Correspondingly, the “contravariant” complex conjugate 2-dimensional representation $\overline{\underline{\bf 2}}$, spanned by vectors $a^\alpha$ ($\alpha=1,2)$, consists of complex valued vectors transforming under SU(2) group elements as $${a'}^\alpha=a^\beta\,{U^\dagger_\beta}^\alpha= a^\beta\,{U^{-1}_\beta}^\alpha\ ,$$ and associated to the generators $T^i=\sigma^*_i/2$. Similar considerations apply to the SU(N) case. The fact that in this general case these are the two fundamental representations is related to the existence of two SU(N)-invariant tensors, namely the Kronecker symbols ${\delta^\alpha}_\beta$ and ${\delta_\alpha}^\beta$ and the totally antisymmetric symbols $\epsilon^{\alpha_1\cdots\alpha_N}$ and $\epsilon_{\alpha_1\cdots\alpha_N}$, which themselves are directly connected to the defining properties of SU(N) matrices, namely the fact that they are unitary, $U^\dagger=U^{-1}$, and of unit determinant, ${\rm det}\,U=1$. Particularised to the SU(2) case, these simple properties may easily be checked. 
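As a small aside, the $su(2)$ brackets for the realisation $T^i=\sigma_i/2$ may be confirmed numerically for all index choices; the following sketch is illustrative only and not part of the notes:

```python
import numpy as np

# Verify [T^i, T^j] = i eps^{ijk} T^k for T^i = sigma_i / 2.
sigma = [
    np.array([[0, 1], [1, 0]], dtype=complex),
    np.array([[0, -1j], [1j, 0]], dtype=complex),
    np.array([[1, 0], [0, -1]], dtype=complex),
]
T = [s / 2 for s in sigma]

def eps(i, j, k):
    """Totally antisymmetric symbol with eps(0, 1, 2) = +1."""
    return (i - j) * (j - k) * (k - i) / 2

for i in range(3):
    for j in range(3):
        lhs = T[i] @ T[j] - T[j] @ T[i]
        rhs = sum(1j * eps(i, j, k) * T[k] for k in range(3))
        assert np.allclose(lhs, rhs)
```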
Indeed, using the transformation rules recalled above for co- and contra-variant indices under the SU(2) action, one has, for instance, $${\delta'_\alpha}^\beta={U_\alpha}^{\alpha_1}\, {\delta_{\alpha_1}}^{\beta_1}\,{U^\dagger_{\beta_1}}^\beta= {\delta_\alpha}^\beta\ ,$$ a result which readily follows from the unitarity property of SU(2) elements, $U^\dagger=U^{-1}$. Likewise for the $\epsilon_{\alpha\beta}$ tensor, for instance, $$\epsilon'_{\alpha\beta}= {U_\alpha}^{\alpha_1}\,{U_\beta}^{\beta_1}\,\epsilon_{{\alpha_1}{\beta_1}}= \epsilon_{\alpha\beta}\ ,$$ a result which follows from the unit determinant value, ${\rm det}\,U=1$. In the general SU(N) case, these considerations imply that the $N$-dimensional contravariant representation $\overline{\underline{\bf N}}$, the complex conjugate of the $N$-dimensional covariant one $\underline{\bf N}$, is also equivalent to the totally antisymmetric representation obtained through the $(N-1)$-times totally antisymmetrised tensor product of the latter representation with itself, $$a_{\alpha_1\cdots\alpha_{N-1}}=\epsilon_{\alpha_1\cdots\alpha_{N-1}\beta} a^\beta\ .$$ However, the SU(2) case is distinguished in this regard by the fact that this transformation also defines a unitary transformation on representation space. In other words, the relations $$a^\alpha=\epsilon^{\alpha\beta}\,a_\beta\ \ \ ,\ \ \ a_\alpha=\epsilon_{\alpha\beta}\,a^\beta\ ,$$ establish the unitary equivalence of the two 2-dimensional SU(2) representations $\underline{\bf 2}$ and $\overline{\underline{\bf 2}}$. 
For example, one may check that these quantities do indeed transform under SU(2) according to the rules associated to the position of the index $\alpha$, using the invariant properties of the two available SU(2) invariant tensors, for instance, $${a'}^\alpha=\epsilon^{\alpha\beta}{U_\beta}^\gamma\,a_\gamma= {\left(U^{-1}\right)_\beta}^\alpha\,\epsilon^{\beta\gamma}\,a_\gamma= a^\beta{U^{-1}_\beta}^\alpha=a^\beta\,{U^\dagger_\beta}^\alpha\ .$$ The unitary equivalence between the two 2-dimensional SU(2) representations is thus determined by the unitary matrix $$\epsilon^{\alpha\beta}=\left(i\sigma_2\right)^{\alpha\beta}\ \ ,\ \ \epsilon_{\alpha\beta}=\left(-i\sigma_2\right)_{\alpha\beta}\ .$$ This matrix being also antisymmetric means that in fact the 2-dimensional SU(2) representation ($\underline{\bf 2}$ or $\overline{\underline{\bf 2}}$) is a pseudoreal representation. Contrary to SU(N) with $N>2$ for which the $\underline{\bf N}$ and $\overline{\underline{\bf N}}$ representations are the two inequivalent fundamental complex representations, in the SU(2) case there is only a single fundamental representation which is also pseudoreal. This is the SU(2) spinor representation. Consequently, it is also clear that all higher spin SU(2) representations are either real or pseudoreal, namely are unitarily equivalent to their complex conjugate representations with a unitary matrix defining this equivalence which is either symmetric or antisymmetric, respectively, according to whether they are obtained with an even or an odd number of tensor product factors of the fundamental spinor representation. In other words, all integer spin SU(2) representations are strictly real, whereas all half-integer spinor representations are pseudoreal representations. 
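The pseudoreality just described may be made concrete: with $\epsilon=i\sigma_2$, one has $\epsilon\,U\,\epsilon^{-1}=U^*$ for every $U\in$ SU(2). A numerical sketch (illustrative only, using the closed form of the SU(2) exponential, valid for a nonzero rotation vector):

```python
import numpy as np

sigma = np.array([
    [[0, 1], [1, 0]],
    [[0, -1j], [1j, 0]],
    [[1, 0], [0, -1]],
], dtype=complex)
eps = 1j * sigma[1]          # the invariant tensor eps^{alpha beta}, det eps = +1

def su2(a):
    """U = exp(i a.sigma/2) via the closed form for SU(2), a a nonzero 3-vector."""
    theta = np.linalg.norm(a)
    n = a / theta
    return (np.cos(theta / 2) * np.eye(2)
            + 1j * np.sin(theta / 2) * np.einsum('i,ijk->jk', n, sigma))

rng = np.random.default_rng(0)
for _ in range(5):
    U = su2(rng.normal(size=3))
    # unitary, unit determinant, and pseudoreal: eps U eps^{-1} = U^*
    assert np.allclose(U.conj().T @ U, np.eye(2))
    assert np.isclose(np.linalg.det(U), 1.0)
    assert np.allclose(eps @ U @ np.linalg.inv(eps), U.conj())
```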
In fact, all integer spin representations are actual representations of SO(3)=SU(2)/${\mathbb Z}_2$ corresponding to all tensor representations of arbitrary rank, whereas all half-integer spin representations are representations of SU(2), the universal covering group of SO(3), but not of SO(3) itself because of the ${\mathbb Z}_2$ factor, the center of SU(2) (a rotation of angle $2\pi$ in a half-integer spin or spinor representation is given by $(-\one)$, but by $\one$ in an integer spin or tensor representation). This distinction between tensor and spinor representations of the rotation group SO(3) is related to the fact that SO(3) is a doubly-connected Lie group, a 3-dimensional manifold equivalent to the solid 2-sphere of radius $\pi$ and with opposite points identified, obtained as the quotient of SU(2) by its center ${\mathbb Z}_2$ taking on the value $(-1)$ (resp. $(+1)$) for any rotation by $2\pi$ (resp. $4\pi$) around any given axis. In contradistinction the SU(2) manifold is that of the 3-sphere,[^15] which is simply connected. This detailed characterisation of SU(2) representations enables the direct construction of quantities which are SU(2) invariants. For instance, consider two covariant spinors $a_\alpha$ and $b_\beta$. Since the tensor product of the spin 1/2 representation with itself includes the trivial representation of zero spin, the associated SU(2) invariant must exist, and is given by the explicit SU(2) invariant contraction of the different indices in a manner involving the two invariant tensors available, $$\epsilon^{\alpha\beta}a_\alpha b_\beta=a_\alpha b^\alpha= a_\alpha\,{\delta^\alpha}_\beta\,b^\beta\ ,$$ showing how the singlet component may be identified within the tensor products $\underline{\bf 2}\otimes\underline{\bf 2}$ or $\underline{\bf 2}\otimes\overline{\underline{\bf 2}}$. 
This simple rule for the construction of SU(2) invariants for SU(2) tensor products will thus readily extend to the construction of Lorentz invariant quantities, since the SO(1,3) Lorentz group shares the same algebra as the SU(2)$_+\times$SU(2)$_-$ group, of which the two independent spinor representations define the two fundamental Weyl spinor representations of the Lorentz group. The Fundamental Lorentz Representations:\ Weyl Spinors {#Sec3.3} ----------------------------------------- We have seen how, on the basis of the chiral SU(2)$_+\times$SU(2)$_-$ group, it is possible to readily identify the finite dimensional representation theory of the Lorentz group SO(1,3). Let us now discuss yet another construction of its two fundamental Weyl spinor representations, which is also of importance in the construction of supersymmetric field theories. The present discussion shall also make explicit why the universal covering group of the Lorentz group SO(1,3) is the group SL(2,$\mathbb C$) of complex 2$\times$2 matrices of unit determinant. We shall thus establish the relation, at the level of the corresponding Lie algebras, $$so(1,3)_{\mathbb C}=su(2)_+\oplus su(2)_-=sl(2,{\mathbb C})\ .$$ Let us introduce the notation $$\sigma_\mu=(\one,\sigma_i)\ \ \ ,\ \ \ \sigma^\mu=(\one,\sigma^i)=(\one,-\sigma_i)\ ,$$ where the space index $i$ carried by the usual Pauli matrices is raised and lowered according to our choice of signature for the Minkowski spacetime metric, namely $\eta_{\mu\nu}={\rm diag}\,(+---)$. Consider now an arbitrary spacetime 4-vector $x^\mu$, and construct the 2$\times$2 hermitian matrix $$X=x^\mu\sigma_\mu=\left(\begin{array}{c c} x^0+x^3 & x^1-ix^2 \\ x^1+ix^2 & x^0-x^3 \end{array}\right)\ .$$ Note that conversely, any 2$\times$2 hermitian matrix $X=X^\dagger$ possesses such a decomposition, and may thus be associated to some spacetime 4-vector $x^\mu$ through the above relation. 
In particular, the determinant of any such matrix is equal to the Lorentz invariant inner product of the associated 4-vector with itself, $${\rm det}\,X=x^2=\eta_{\mu\nu}x^\mu x^\nu\ .$$ Consider now an arbitrary SL(2,$\mathbb C$) group element $M$, thus of unit determinant, ${\rm det}\,M=1$, and its adjoint action on any hermitian matrix $X$ as $$X'=M\,X\,M^\dagger\ .$$ It should be clear that the transformed matrix itself is hermitian, ${X'}^\dagger=X'$, hence possesses a decomposition in terms of a 4-vector ${x'}^\mu$, $X'={x'}^\mu\sigma_\mu$, of which the Lorentz invariant takes the value $${x'}^2={\rm det}\,X'={\rm det}\,MXM^\dagger={\rm det}\,X=x^2\ .$$ In other words, any SL(2,$\mathbb C$) transformation induces a Lorentz transformation on the 4-vector $x^\mu$. The group SL(2,$\mathbb C$) determines a covering group of the Lorentz group SO(1,3). In fact, it is the universal covering of the latter, as the discussion hereafter in terms of its fundamental representations establishes. This conclusion is thus analogous to that which states that SU(2) is the universal covering group of the group SO(3) of spatial rotations. Indeed, the above discussion may also be developed in the latter case, simply by ignoring the time component of the matrices $\sigma_\mu$ and then restricting further the matrices $X$ to be both hermitian and traceless. This conclusion having been reached, the next question is: how does one construct the two fundamental Weyl spinor representations of SO(1,3) in terms of SL(2,$\mathbb C$) representations? An arbitrary SL(2,$\mathbb C$) matrix $M$ with ${\rm det}\,M=1$ may be decomposed according to $$M=e^{(a_j+ib_j)\sigma_j}\ \ \ ,\ \ \ M^\dagger=e^{(a_j-ib_j)\sigma_j}\ ,$$ where $a_j$ and $b_j$ ($j=1,2,3$) are triplets of real numbers. 
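The covering map just described lends itself to a direct numerical illustration: the identity ${\rm det}\,X=x^2$ and the invariance of $x^2$ under the adjoint action of a random SL(2,$\mathbb C$) element may be checked as follows (a sketch with illustrative names, not part of the notes):

```python
import numpy as np

sigma = [np.eye(2, dtype=complex),
         np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex)]
eta = np.diag([1.0, -1.0, -1.0, -1.0])

def to_matrix(x):
    """X = x^mu sigma_mu, hermitian with det X = x.x."""
    return sum(x[mu] * sigma[mu] for mu in range(4))

def to_vector(X):
    """Invert the map: x^mu = Tr(X sigma_mu)/2, using Tr(sigma_mu sigma_nu) = 2 delta_{mu nu}."""
    return np.array([np.trace(X @ sigma[mu]).real / 2 for mu in range(4)])

rng = np.random.default_rng(1)
x = rng.normal(size=4)
X = to_matrix(x)
assert np.isclose(np.linalg.det(X).real, x @ eta @ x)

# A random SL(2,C) element: any invertible complex matrix rescaled to unit determinant.
M = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
M /= np.sqrt(np.linalg.det(M))
xp = to_vector(M @ X @ M.conj().T)      # the induced Lorentz transformation of x
assert np.isclose(xp @ eta @ xp, x @ eta @ x)
```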
In these terms, the SU(2)$_+\times$SU(2)$_-$ structure of the $sl(2,\mathbb C)$ algebra should again be obvious, with in particular the hermitian (related to space rotations) and antihermitian (related to Lorentz boosts) components of the Lie algebra.[^16] In the case of SU(2), the additional property is that the matrices defining the group are also unitary, $U^\dagger=U^{-1}$. As a consequence, we have seen that the two fundamental 2-dimensional SU(2) representations, complex conjugates of one another, are unitarily equivalent. In the SL(2,$\mathbb C$) case, these two representations are no longer equivalent. However, because of the property ${\rm det}\,M=1$, $\epsilon^{\alpha\beta}$ and $\epsilon_{\alpha\beta}$ still define SL(2,$\mathbb C$) invariant tensors, that may be used to raise and lower indices. Because of this latter fact, there exist only two independent fundamental 2-dimensional representations of SL(2,$\mathbb C$), in direct correspondence with the two chiral Weyl spinors considered previously. Another way of arguing the same conclusion is as follows. Given a matrix $M\in$ SL(2,$\mathbb C$), each of the matrices $M$, $M^{-1}$, $M^*$ and $(M^*)^{-1}$ defines [*a priori*]{} another 2-dimensional representation of the same group. As pointed out above, $M$ and $M^*$ are necessarily not unitarily equivalent. However, $M$ and $M^{-1}$ on the one hand, and $M^*$ and $(M^*)^{-1}$ on the other hand, are each unitarily equivalent in pairs, using the invariant tensors $\epsilon^{\alpha\beta}$ and $\epsilon_{\alpha\beta}$ because of the property ${\rm det}\,M=1$ for these $2\times 2$ matrices. 
In conclusion, first we have the right-handed Weyl spinor representation $(1/2,0)$, $\psi_\alpha$ or $\psi^\alpha$, such that $$\psi^\alpha=\epsilon^{\alpha\beta}\,\psi_\beta\ \ ,\ \ \psi_\alpha=\epsilon_{\alpha\beta}\,\psi^\beta\ \ ,\ \ \epsilon^{12}=+1\ \ ,\ \ \epsilon_{12}=-1\ ,$$ and transforming under SL(2,$\mathbb C$) according to $${\psi'}_\alpha={M_\alpha}^\beta\,\psi_\beta\ \ ,\ \ {\psi'}^\alpha=\psi^\beta\,{\left(M^{-1}\right)_\beta}^\alpha\ .$$ Likewise, the left-handed Weyl spinor representation $(0,1/2)$, $\bar{\psi}_{\dot{\alpha}}$ or $\bar{\psi}^{\dot{\alpha}}$, is such that $$\overline{\psi}^{\dot{\alpha}}=\epsilon^{\dot{\alpha}\dot{\beta}}\, \overline{\psi}_{\dot{\beta}}\ \ ,\ \ \overline{\psi}_{\dot{\alpha}}=\epsilon_{\dot{\alpha}\dot{\beta}}\, \overline{\psi}^{\dot{\beta}}\ \ ,\ \ \epsilon^{\dot{1}\dot{2}}=+1\ \ ,\ \ \epsilon_{\dot{1}\dot{2}}=-1\ ,$$ each of these spinors transforming according to $${\overline{\psi}'}_{\dot{\alpha}}={M^*_{\dot{\alpha}}}^{\dot{\beta}}\, \overline{\psi}_{\dot{\beta}}\ \ ,\ \ {\overline{\psi}'}^{\dot{\alpha}}=\overline{\psi}^{\dot{\beta}}\, {\left((M^*)^{-1}\right)_{\dot{\beta}}}^{\dot{\alpha}} \ .$$ Here, in order to distinguish these two SL(2,$\mathbb C$) representations, or equivalently the two Weyl spinors, the van der Waerden dotted and undotted index notation has been introduced. This notation proves particularly valuable for the construction of manifestly supersymmetric invariant Lagrangian densities. The undotted indices $\alpha,\beta$, on the one hand, and dotted indices $\dot{\alpha},\dot{\beta}$, on the other hand, have the same meaning as the $\alpha,\beta$ indices for the SU(2) spinor representations. Consequently, Lorentz invariant quantities are readily constructed in terms of the Weyl spinors $\psi^\alpha$ and $\overline{\psi}_{\dot{\alpha}}$, through simple contraction of the indices using the invariant tensors available. 
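These conventions may be confirmed mechanically: raising and then lowering an index is the identity, and the contravariant transformation rule follows from the covariant one together with ${\rm det}\,M=1$. A numerical sketch (illustrative names only, not part of the notes):

```python
import numpy as np

eps_up = np.array([[0, 1], [-1, 0]], dtype=complex)   # eps^{alpha beta}, eps^{12} = +1
eps_dn = np.array([[0, -1], [1, 0]], dtype=complex)   # eps_{alpha beta}, eps_{12} = -1

# lowering after raising gives back the original spinor
assert np.allclose(eps_dn @ eps_up, np.eye(2))

rng = np.random.default_rng(2)
M = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
M /= np.sqrt(np.linalg.det(M))                         # a random SL(2,C) element
psi_dn = rng.normal(size=2) + 1j * rng.normal(size=2)  # psi_alpha
psi_up = eps_up @ psi_dn                               # psi^alpha

# psi'_alpha = M_alpha^beta psi_beta  implies  psi'^alpha = psi^beta (M^{-1})_beta^alpha
lhs = eps_up @ (M @ psi_dn)
rhs = np.linalg.inv(M).T @ psi_up
assert np.allclose(lhs, rhs)
```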
Furthermore, given that we have $${x'}^\mu\sigma_\mu=X'=MXM^\dagger=M\left(x^\mu\sigma_\mu\right)M^\dagger\ ,$$ it follows that the SL(2,$\mathbb C$) or SO(1,3) Lorentz transformation properties of the matrices $\sigma_\mu$ are those characterised by the index structure, $$\sigma^\mu\ :\ \ \ \left(\sigma^\mu\right)_{\alpha\dot{\alpha}}\ \ ,\ \ \sigma_\mu=(\one,\sigma_i)\ \ ,\ \ \sigma^\mu=(\one,-\sigma_i)=(\one,\sigma^i)\ .$$ By raising the indices, one introduces the quantities $$\overline{\sigma}^\mu\ :\ \ \ \left(\overline{\sigma}^\mu\right)^{\dot{\alpha}\alpha}= \epsilon^{\dot{\alpha}\dot{\beta}}\, \epsilon^{\alpha\beta}\,\left(\sigma^\mu\right)_{\beta\dot{\beta}}\ \ ,\ \ \overline{\sigma}^\mu=(\one,\sigma_i)\ \ ,\ \ \overline{\sigma}_\mu=(\one,-\sigma_i)\ .$$ Note that these properties also justify why indeed a 4-vector $A_\mu$ is equivalent to the $(1/2,1/2)=(1/2,0)\otimes(0,1/2)$ Lorentz representation, $A^\mu\sigma_{\mu\alpha\dot{\alpha}}=A_{\alpha\dot{\alpha}}$, $A^\mu{\overline{\sigma}_{\mu}}^{\dot{\alpha}\alpha}= \overline{A}^{\dot{\alpha}\alpha}$. Let us now consider different Weyl spinors $\psi$, $\chi$, $\overline{\psi}$, $\overline{\chi}$, ... and the Lorentz invariant spinor bilinears that may be constructed out of these quantities. For this purpose, it is important to realise that such field degrees of freedom, at the classical level, need to be described in terms of Grassmann odd variables, namely variables $\theta_1$, $\theta_2$, $\cdots$ which anticommute with one another, $\theta_2\theta_1=-\theta_1\theta_2$, in contradistinction to commuting variables used for fields describing particles of integer spin and obeying Bose–Einstein statistics. 
The reasons for this necessary choice will be discussed somewhat further later on, but at this stage, it suffices to say that spinorial fields are associated to particles of half-integer spin which should thus obey Fermi–Dirac statistics with the consequent Pauli exclusion principle, a result which is readily achieved provided Grassmann odd degrees of freedom are used even at the classical level. The associated Grassmann graded Poisson brackets[@GovBook] then correspond, at the quantum level, to anticommutation rather than commutation relations for the degrees of freedom, ensuring the Fermi–Dirac statistics. The anticommuting character of the Weyl spinors hereafter is an important fact to always keep in mind when performing explicit calculations. Since dotted and undotted indices cannot be contracted with one another in a Lorentz invariant way, there are only two types of Lorentz invariant spinor bilinears that may be considered. By definition, those associated to undotted spinors write as, $$\begin{array}{r c l} \psi\chi&=&\psi^\alpha\,\chi_\alpha= \epsilon^{\alpha\beta}\,\psi_\beta\chi_\alpha= -\epsilon^{\alpha\beta}\psi_\alpha\chi_\beta\\ & & \\ &=&-\psi_\alpha\,\chi^\alpha=\chi^\alpha\,\psi_\alpha=\chi\,\psi\ . \end{array}$$ The convention here, implicit throughout the supersymmetry literature, is that for undotted spinors, the Lorentz invariant contraction denoted $\psi\,\chi$ without displaying the indices, is that in which the undotted indices are contracted from top-left to bottom-right. Note that the Grassmann odd property of the Weyl spinors has been used to derive the above identity, $\psi\chi=\chi\psi$. 
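The anticommuting character can even be verified mechanically. The following minimal Grassmann-number implementation (an illustrative sketch, not from the notes; all names are invented) confirms the identity $\psi\chi=\chi\psi$ just derived:

```python
class G:
    """Sums of products of anticommuting generators theta_i:
    `terms` maps a sorted tuple of generator indices to its coefficient."""
    def __init__(self, terms=None):
        self.terms = {k: v for k, v in (terms or {}).items() if abs(v) > 1e-14}

    @staticmethod
    def gen(i):
        return G({(i,): 1.0})

    def __add__(self, other):
        out = dict(self.terms)
        for k, v in other.terms.items():
            out[k] = out.get(k, 0) + v
        return G(out)

    def __mul__(self, other):
        if not isinstance(other, G):                   # scalar multiple
            return G({k: other * v for k, v in self.terms.items()})
        out = {}
        for k1, v1 in self.terms.items():
            for k2, v2 in other.terms.items():
                if set(k1) & set(k2):
                    continue                           # theta_i^2 = 0
                merged, sign = list(k1 + k2), 1
                for i in range(len(merged)):           # bubble sort, track parity
                    for j in range(len(merged) - 1):
                        if merged[j] > merged[j + 1]:
                            merged[j], merged[j + 1] = merged[j + 1], merged[j]
                            sign = -sign
                key = tuple(merged)
                out[key] = out.get(key, 0) + sign * v1 * v2
        return G(out)

    __rmul__ = __mul__

    def __eq__(self, other):
        keys = set(self.terms) | set(other.terms)
        return all(abs(self.terms.get(k, 0) - other.terms.get(k, 0)) < 1e-12
                   for k in keys)

# two Grassmann-odd Weyl spinors psi_alpha, chi_alpha built on generators theta_0..theta_3
psi = [G.gen(0), G.gen(1)]
chi = [G.gen(2), G.gen(3)]

def contract(a, b):
    """a b = a^alpha b_alpha with a^1 = +a_2, a^2 = -a_1 (eps^{12} = +1)."""
    a_up = [a[1], -1 * a[0]]
    return a_up[0] * b[0] + a_up[1] * b[1]

assert contract(psi, chi) == contract(chi, psi)        # psi chi = chi psi
```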
In contradistinction for dotted spinors, the convention is that the contraction is taken from bottom-left to top-right, namely $$\begin{array}{r c l} \overline{\psi}\,\overline{\chi}&=& \overline{\psi}_{\dot{\alpha}}\,\overline{\chi}^{\dot{\alpha}}= \epsilon_{\dot{\alpha}\dot{\beta}}\,\overline{\psi}^{\dot{\beta}}\, \overline{\chi}^{\dot{\alpha}} =-\epsilon_{\dot{\alpha}\dot{\beta}}\,\overline{\psi}^{\dot{\alpha}}\, \overline{\chi}^{\dot{\beta}}\\ & & \\ &=&-\overline{\psi}^{\dot{\alpha}}\,\overline{\chi}_{\dot{\alpha}}= \overline{\chi}_{\dot{\alpha}}\,\overline{\psi}^{\dot{\alpha}}= \overline{\chi}\,\overline{\psi}\ . \end{array}$$ Further identities that may be established in a likewise manner are, $$\left(\psi\,\chi\right)^\dagger= \overline{\chi}\,\overline{\psi}= \overline{\psi}\,\overline{\chi}\ \ ,\ \ \left(\overline{\psi}\,\overline{\chi}\right)^\dagger= \chi\,\psi=\psi\,\chi\ .$$ For the construction of Lorentz covariant spinor bilinears, one has to also involve the matrices $\sigma_\mu$ and $\overline{\sigma}_\mu$. Thus for instance, we have the quantities transforming as 4-vectors under Lorentz transformations, $$\psi\,\sigma^\mu\,\overline{\chi}= \psi^\alpha\,{\sigma^\mu}_{\alpha\dot{\beta}}\,\overline{\chi}^{\dot{\beta}} \ \ ,\ \ \overline{\psi}\,\overline{\sigma}^\mu\,\chi= \overline{\psi}_{\dot{\alpha}}\,\overline{\sigma}^{\mu\dot{\alpha}\beta}\, \chi_\beta\ .$$ Such quantities also obey a series of identities, for instance, $$\chi\,\sigma^\mu\,\overline{\psi}=- \overline{\psi}\,\overline{\sigma}^\mu\,\chi\ \ ,\ \ \chi\,\sigma^\mu\,\overline{\sigma}^\nu\,\psi= \psi\,\sigma^\nu\,\overline{\sigma}^\mu\,\chi\ ,$$ $$\left(\chi\,\sigma^\mu\,\overline{\psi}\right)^\dagger= \psi\,\sigma^\mu\,\overline{\chi}\ \ ,\ \ \left(\chi\,\sigma^\mu\,\overline{\sigma}^\nu\,\psi\right)^\dagger= \overline{\psi}\,\overline{\sigma}^\nu\,\sigma^\mu\,\overline{\chi}\ .$$ Identities of this type enter the explicit construction of supersymmetric invariant field theories. 
The Dirac Spinor {#Sec3.4} ---------------- As mentioned earlier, Weyl spinors are not parity invariant representations of the Lorentz group. The fundamental parity invariant representation is obtained as the direct sum of a right- and a left-handed Weyl spinor, leading to the Dirac spinor, a 4-dimensional spinor representation of the Lorentz group, which is irreducible for the Lorentz group SO(1,3) extended to also include the parity transformation. Furthermore, since the dotted and undotted notation is not as familiar as the Dirac spinor construction, the latter will now be considered in detail through its relation to the previous discussion. Given the 4-dimensional Dirac representation, it is useful to combine the $\sigma^\mu$ and $\overline{\sigma}^\mu$ matrices into a collection of 4$\times$4 matrices, $$\gamma^\mu=\left(\begin{array}{c c} 0 & \sigma^\mu \\ \overline{\sigma}^\mu & 0 \end{array}\right)\ \ \ ,\ \ \gamma_5=i\gamma^0 \gamma^1 \gamma^2 \gamma^3= \left(\begin{array}{c c} \one & 0 \\ 0 & -\one \end{array}\right) \ ,$$ known as the Dirac matrices. As a matter of fact, the above definition provides a specific representation of the Dirac-Clifford algebra that these matrices obey, $$\begin{array}{r c l} \left\{\gamma^\mu,\gamma^\nu\right\}=2\eta^{\mu\nu}\ \ \ &,&\ \ \ \left\{\gamma^\mu,\gamma_5\right\}=0\ ,\\ & & \\ {\gamma^\mu}^\dagger=\gamma^0 \gamma^\mu \gamma^0\ \ \ &,&\ \ \ \gamma^\dagger_5=\gamma_5\ \ ,\ \ \gamma^2_5=\one\ . \end{array}$$ Other matrix representations of this algebra exist (among which that originally constructed by Dirac himself[@QFT] when he discovered the celebrated Dirac equation). However in a Minkowski spacetime of even dimension, all these representations are unitarily equivalent.[@PvN] The above representation of the Dirac-Clifford algebra is known as the chiral or Weyl representation, since the chiral projection operator $\gamma_5$ is then diagonal. 
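Before proceeding, the defining properties of the chiral representation may be verified numerically. The sketch below (illustrative only, with the conventions $\sigma^\mu=(\one,-\sigma_i)$, $\overline\sigma^\mu=(\one,\sigma_i)$ of the preceding section) checks the Dirac-Clifford algebra, the hermiticity relation, the diagonal form of $\gamma_5$, and the resulting chirality projectors:

```python
import numpy as np

s = [np.eye(2, dtype=complex),
     np.array([[0, 1], [1, 0]], dtype=complex),
     np.array([[0, -1j], [1j, 0]], dtype=complex),
     np.array([[1, 0], [0, -1]], dtype=complex)]
sigma_up = [s[0], -s[1], -s[2], -s[3]]        # sigma^mu = (1, -sigma_i)
sigma_bar_up = [s[0], s[1], s[2], s[3]]       # sigma-bar^mu = (1, +sigma_i)
eta = np.diag([1.0, -1.0, -1.0, -1.0])
zero = np.zeros((2, 2), dtype=complex)

gamma = [np.block([[zero, sigma_up[mu]], [sigma_bar_up[mu], zero]])
         for mu in range(4)]
gamma5 = 1j * gamma[0] @ gamma[1] @ gamma[2] @ gamma[3]

# Dirac-Clifford algebra and properties of gamma5
for mu in range(4):
    for nu in range(4):
        assert np.allclose(gamma[mu] @ gamma[nu] + gamma[nu] @ gamma[mu],
                           2 * eta[mu, nu] * np.eye(4))
    assert np.allclose(gamma[mu] @ gamma5 + gamma5 @ gamma[mu], 0)
    assert np.allclose(gamma[mu].conj().T, gamma[0] @ gamma[mu] @ gamma[0])
assert np.allclose(gamma5, np.diag([1, 1, -1, -1]))   # chiral: gamma5 diagonal
assert np.allclose(gamma5 @ gamma5, np.eye(4))

# the standard chirality projectors built from gamma5
P_R = (np.eye(4) + gamma5) / 2
P_L = (np.eye(4) - gamma5) / 2
assert np.allclose(P_R @ P_R, P_R) and np.allclose(P_L @ P_L, P_L)
assert np.allclose(P_R @ P_L, 0) and np.allclose(P_L @ P_R, 0)

# raising both spinor indices with eps maps sigma^mu to sigma-bar^mu
eps = np.array([[0, 1], [-1, 0]], dtype=complex)      # eps^{alpha beta}
for mu in range(4):
    assert np.allclose(sigma_bar_up[mu], eps @ sigma_up[mu].T @ eps.T)
```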
Being the direct sum of a right- and a left-handed Weyl spinor, within the chiral representation a Dirac spinor decomposes according to $$\psi^{\rm Dirac}_{(4)}=\left(\begin{array}{c} \psi_\alpha \\ \overline{\chi}^{\dot{\alpha}} \end{array}\right)\ ,$$ where $\psi_\alpha$ is a $(1/2,0)$ right-handed Weyl spinor, and $\overline{\chi}^{\dot{\alpha}}$ a $(0,1/2)$ left-handed one. These two chiral components are indeed projected from the Dirac spinor through the projectors $$P_R=\frac{1}{2}\left(1+\gamma_5\right)\ \ ,\ \ P_L=\frac{1}{2}\left(1-\gamma_5\right)\ ,$$ with the properties, $$P^2_R=P_R\ \ ,\ \ P^2_L=P_L\ \ ,\ \ P_L P_R = 0 = P_R P_L\ .$$ [*A priori*]{}, the two Weyl spinors $\psi_\alpha$ and $\overline{\chi}^{\dot{\alpha}}$ are independent spinors, leading to the construction of an actual Dirac spinor with these many independent degrees of freedom. However, it could be that these two Weyl spinors are complex conjugates of one another, in which case the above construction defines what is known as a Majorana spinor, $$\psi^{\rm Majorana}_{(4)}=\left(\begin{array}{c} \psi_\alpha \\ \overline{\psi}^{\dot{\alpha}} \end{array}\right)\ .$$ A Majorana spinor is to a Dirac spinor what a real scalar field is to a complex scalar field. 
Namely, whereas the quanta of a real scalar field are particles that cannot be distinguished from their antiparticles (they do not carry a conserved quantum number that could distinguish them, such as the electric charge), the quanta of a complex scalar field are classified in terms of particles and antiparticles, which may be distinguished according to a conserved quantum number, for instance their electric charge, associated to the global symmetry invariance under arbitrary spacetime constant variations in the complex phase of the complex scalar field.[@GovCOPRO2] Likewise for the above spinors, since the Majorana spinor obeys some sort of restriction under complex conjugation (its Weyl components of opposite chiralities are related through complex conjugation), a Majorana spinor describes spin or helicity 1/2 particles which are their own antiparticles, and thus cannot carry a conserved quantum number such as the electric charge.[^17] In contradistinction, the quanta associated to a Dirac spinor may be distinguished in terms of particles and their antiparticles carrying opposite values of a conserved quantum number, such as for instance the electric charge (or baryon or lepton number), associated to a symmetry under arbitrary global phase transformations of the Dirac spinor. As the above construction clearly shows, in a 4-dimensional Minkowski spacetime, one cannot have both a Weyl and a Majorana condition imposed on a Dirac spinor. In such a case, one has either only Dirac spinors, Majorana spinors, or Weyl spinors of definite chirality, while the fundamental constructs of Lorentz covariant spinors are the two fundamental right- and left-handed Weyl spinors. In fact, it may be shown,[@PvN] using the properties of the Dirac-Clifford algebra, that Majorana-Weyl spinors exist only in a Minkowski spacetime of dimension $D=2$ (mod 8), which includes the dimension $D=10$ in which superstrings may be constructed, which is not an accident. 
Given that the Dirac $\gamma^\mu$ matrices provide a representation space of the Lorentz group, it should be possible to display explicitly the associated generators. Indeed, it may be shown that the latter are obtained as $$\Sigma^{\mu\nu}=\frac{1}{2}i\gamma^{\mu\nu}\ \ \ ,\ \ \ \gamma^{\mu\nu}=\frac{1}{2}\left[\gamma^\mu,\gamma^\nu\right]\ ,$$ with $$\gamma^{\mu\nu}=\frac{1}{2}\left(\begin{array}{c c} \sigma^\mu\overline{\sigma}^\nu-\sigma^\nu\overline{\sigma}^\mu & 0 \\ 0 & \overline{\sigma}^\mu\sigma^\nu-\overline{\sigma}^\nu\sigma^\mu \end{array}\right)\ .$$ Thus a right-handed spinor $\psi_\alpha$ transforms according to the generators, $$\Sigma_R^{\mu\nu}\ :\ \ {\left(\Sigma_R^{\mu\nu}\right)_\alpha}^\beta=\frac{1}{4}i\left[ {\sigma^\mu}_{\alpha\dot{\gamma}}\, \overline{\sigma}^{\nu\dot{\gamma}\beta}\,-\, {\sigma^\nu}_{\alpha\dot{\gamma}}\, \overline{\sigma}^{\mu\dot{\gamma}\beta}\right]\ ,$$ while a left-handed Weyl spinor $\overline{\chi}^{\dot{\alpha}}$ according to $$\overline{\Sigma}_L^{\mu\nu}\ :\ \ {\left(\overline{\Sigma}_L^{\mu\nu}\right)^{\dot{\alpha}}}_{\dot{\beta}}= \frac{1}{4}i\left[ \overline{\sigma}^{\mu\dot{\alpha}\gamma}\, {\sigma^\nu}_{\gamma\dot{\beta}}\,-\, \overline{\sigma}^{\nu\dot{\alpha}\gamma}\, {\sigma^\mu}_{\gamma\dot{\beta}}\right]\ .$$ Given these different considerations, it should not come as a surprise that once a free quantum field theory dynamics is constructed, it turns out that such fundamental spinor representations of the Lorentz group describe quanta which are massive or massless particles whose spin or helicity is 1/2. 
Extending the above considerations to an arbitrary representation of the Dirac-Clifford algebra, any Dirac spinor may be decomposed into its chiral components, $$\psi=\psi_L+\psi_R\ \ ,\ \ \psi_L=P_L\,\psi=\frac{1}{2}\left(1-\gamma_5\right)\psi\ \ ,\ \ \psi_R=P_R\,\psi=\frac{1}{2}\left(1+\gamma_5\right)\psi\ .$$ The SL(2,$\mathbb C$) invariant tensors that enable the raising and lowering of dotted and undotted indices provide for a transformation which, given a Dirac spinor $\psi$ and its complex conjugate, constructs another Dirac spinor also transforming according to the correct rules under Lorentz transformations. This operation, known as charge conjugation since it exchanges the roles played by particles and their antiparticles, is represented through a matrix $C$ such that $$C\gamma^\mu C^{-1}=-{\gamma^\mu}^{\rm T}\ \ ,\ \ C=i\gamma^2\gamma^0\ \ ,\ \ C^\dagger=C^{\rm T}=-C\ \ ,\ \ C^2=-\one\ ,$$ where, except for the very first identity, the last series of properties is valid, for instance, in the Dirac and chiral representations of the $\gamma^\mu$ matrices, but not necessarily in just any other representation of the Dirac-Clifford algebra. The charge conjugate Dirac spinor $\psi_C$ associated to a given Dirac spinor $\psi$ is given by, $$\psi_C=C\overline{\psi}^{\rm T}\ \ ,\ \ \overline{\psi}=\psi^\dagger\,\gamma^0\ ,$$ up to an arbitrary phase factor. Consequently, a Majorana spinor $\psi$ obeys the Majorana condition, $$\psi=\psi_C=C\overline{\psi}^{\rm T}\ ,$$ thus extending to the Dirac spinor representation of the Lorentz group in a manner consistent with Lorentz transformation, the reality condition under complex conjugation for such fields, in a way similar to the simple reality condition $\phi=\phi^\dagger$ for a scalar field real under complex conjugation describing spin 0 particles which are their own antiparticles. Given all the above, different properties may be established. 
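In particular, the stated properties of the charge-conjugation matrix may be confirmed numerically in the chiral representation (an illustrative sketch, not part of the notes):

```python
import numpy as np

s = [np.eye(2, dtype=complex),
     np.array([[0, 1], [1, 0]], dtype=complex),
     np.array([[0, -1j], [1j, 0]], dtype=complex),
     np.array([[1, 0], [0, -1]], dtype=complex)]
zero = np.zeros((2, 2), dtype=complex)
gamma = [np.block([[zero, s[0]], [s[0], zero]])] + \
        [np.block([[zero, -s[i]], [s[i], zero]]) for i in (1, 2, 3)]

# C = i gamma^2 gamma^0 and its defining properties
C = 1j * gamma[2] @ gamma[0]
Cinv = np.linalg.inv(C)

assert np.allclose(C @ C, -np.eye(4))          # C^2 = -1
assert np.allclose(C.T, -C)                    # C^T = -C
assert np.allclose(C.conj().T, C.T)            # C^dagger = C^T (C is real here)
for mu in range(4):
    assert np.allclose(C @ gamma[mu] @ Cinv, -gamma[mu].T)
```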
For instance, one has $$\overline{\left(\psi_L\right)}=\left(\overline{\psi}\right)_R\ ,\ \overline{\left(\psi_R\right)}=\left(\overline{\psi}\right)_L\ ,\ \left(\psi_L\right)_C=\left(\psi_C\right)_R\ ,\ \left(\psi_R\right)_C=\left(\psi_C\right)_L\ .$$ Lorentz invariant spinor bilinears decompose as $$\overline{\psi}\chi=\overline{\psi_L}\chi_R+\overline{\psi_R}\chi_L\ ,\ \overline{\psi}\gamma_5\chi= \overline{\psi_L}\gamma_5\chi_R+\overline{\psi_R}\gamma_5\chi_L =\overline{\psi_L}\chi_R-\overline{\psi_R}\chi_L\ , \label{eq:chiral1}$$ where, under parity, the first quantity is a pure scalar, and the second a pseudoscalar. Likewise, one has the Lorentz covariants, $$\begin{array}{r c l} \overline{\psi}\gamma^\mu\chi&=& \overline{\psi_L}\gamma^\mu\chi_L+\overline{\psi_R}\gamma^\mu\chi_R\ ,\\ & & \\ \overline{\psi}\gamma^\mu\gamma_5\chi&=& -\overline{\psi_L}\gamma^\mu\chi_L+\overline{\psi_R}\gamma^\mu\chi_R\ ,\\ & & \\ \overline{\psi}\sigma^{\mu\nu}\chi&=& \overline{\psi_L}\sigma^{\mu\nu}\chi_R+ \overline{\psi_R}\sigma^{\mu\nu}\chi_L\ , \end{array} \label{eq:chiral2}$$ where in the last relation one defines $\sigma^{\mu\nu}=i[\gamma^\mu,\gamma^\nu]/2$. Note that the bilinears $\overline{\psi}\gamma^\mu\chi$, $\overline{\psi}\gamma^\mu\gamma_5\chi$ and $\overline{\psi}\sigma^{\mu\nu}\chi$ transform as a 4-vector, an axial 4-vector, and a $(1,0)\oplus(0,1)$ tensor, respectively. In fact, the whole $2^4=16$ dimensional Dirac-Clifford algebra, generated by the $2^2\times 2^2$ matrices $\one$ and $\gamma^\mu$, is spanned by the $2^4=16$ independent quantities $\one$, $\gamma_5$, $\gamma^\mu$, $\gamma^\mu\gamma_5$ and $\sigma^{\mu\nu}$ (one has indeed $\sigma^{\mu\nu}\gamma_5=i\epsilon^{\mu\nu\rho\sigma}\sigma_{\rho\sigma}/2$ where $\epsilon^{0123}=+1$). Further identities involving four Dirac spinors are also important to establish supersymmetry invariance. 
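Before turning to those four-spinor identities, note that the trace-orthogonality of the sixteen matrices just listed, which underlies such rearrangement identities, is readily confirmed numerically (an illustrative sketch, not part of the notes):

```python
import numpy as np

s = [np.eye(2, dtype=complex),
     np.array([[0, 1], [1, 0]], dtype=complex),
     np.array([[0, -1j], [1j, 0]], dtype=complex),
     np.array([[1, 0], [0, -1]], dtype=complex)]
zero = np.zeros((2, 2), dtype=complex)
gamma = [np.block([[zero, s[0]], [s[0], zero]])] + \
        [np.block([[zero, -s[i]], [s[i], zero]]) for i in (1, 2, 3)]
gamma5 = 1j * gamma[0] @ gamma[1] @ gamma[2] @ gamma[3]

# the 16 matrices 1, gamma5, gamma^mu, gamma^mu gamma5, sigma^{mu nu}
basis = [np.eye(4, dtype=complex), gamma5]
basis += gamma
basis += [g @ gamma5 for g in gamma]
basis += [0.5j * (gamma[m] @ gamma[n] - gamma[n] @ gamma[m])
          for m in range(4) for n in range(m + 1, 4)]
assert len(basis) == 16

# Tr(G_A G_B) vanishes for A != B and not for A = B, so the 16 matrices
# are linearly independent and span the 4x4 matrix algebra.
for a in range(16):
    for b in range(16):
        tr = np.trace(basis[a] @ basis[b])
        assert (a == b) == (abs(tr) > 1e-12)
```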
These involve the celebrated Fierz identities,[@QFT] the simplest of which is of the form,[^18] $$\begin{array}{r l} \overline{\psi_1}\one\psi_2\,\overline{\psi_3}\one\psi_4= -\frac{1}{4}\Big\{& \overline{\psi_1}\one\psi_4\,\overline{\psi_3}\one\psi_2\,+\, \overline{\psi_1}\gamma^\mu\psi_4\,\overline{\psi_3}\gamma_\mu\psi_2\,+\,\\ & \\ &+\frac{1}{2}\overline{\psi_1}\sigma^{\mu\nu}\psi_4\, \overline{\psi_3}\sigma_{\mu\nu}\psi_2\,-\,\\ & \\ & - \overline{\psi_1}\gamma^\mu\gamma_5\psi_4\, \overline{\psi_3}\gamma_\mu\gamma_5\psi_2\,+\, \overline{\psi_1}\gamma_5\psi_4\,\overline{\psi_3}\gamma_5\psi_2\Big\}\ , \end{array}$$ where $\psi_1$, $\psi_2$, $\psi_3$ and $\psi_4$ are arbitrary Grassmann odd Dirac spinors. An application of this identity leads, for instance, to the relation $$\overline{\epsilon_{1R}}\,\partial_\mu\psi_L\,\gamma^\mu\epsilon_{2R}= -\frac{1}{2}\overline{\epsilon_{1R}}\gamma_\nu\epsilon_{2R}\, \gamma^\mu\gamma^\nu\partial_\mu\psi_L\ ,$$ where $\epsilon_{1R}$, $\epsilon_{2R}$ and $\psi_L$ are Grassmann odd Dirac spinors of definite chirality as indicated by their lower label. 
This relation is central in establishing the supersymmetry invariance property of the simplest example of a supersymmetric field theory, the so-called Wess-Zumino model involving a scalar and a Weyl or Majorana spinor.[@WZ; @Deren] In the case of Grassmann odd Majorana spinors $\epsilon$ and $\lambda$, one also has, $$\begin{array}{r c c c l} \overline{\epsilon}\lambda&=&\overline{\lambda}\epsilon&=& \left(\overline{\epsilon}\lambda\right)^\dagger\ ,\\ \overline{\epsilon}\gamma_5\lambda&=& \overline{\lambda}\gamma_5\epsilon&=& -\left(\overline{\epsilon}\gamma_5\lambda\right)^\dagger\ ,\\ \overline{\epsilon}\gamma^\mu\lambda&=& -\overline{\lambda}\gamma^\mu\epsilon&=& -\left(\overline{\epsilon}\gamma^\mu\lambda\right)^\dagger\ ,\\ \overline{\epsilon}\gamma^\mu\gamma_5\lambda&=& \overline{\lambda}\gamma^\mu\gamma_5\epsilon&=& \left(\overline{\epsilon}\gamma^\mu\gamma_5\lambda\right)^\dagger\ ,\\ \overline{\epsilon}\gamma^\mu\gamma^\nu\lambda&=& -\overline{\lambda}\gamma^\mu\gamma^\nu\epsilon&=& \left(\overline{\epsilon}\gamma^\mu\gamma^\nu\lambda\right)^\dagger\ . \end{array}$$ It is a useful exercise to establish any of these identities. The Dirac Equation {#Sec3.5} ------------------ Let us now consider the dynamics of a single free Dirac spinor field, thus described, at the classical level, by complex valued Grassmann odd variables forming a 4-component Dirac spinor $\psi(x^\mu)$. The action principle for such a system is given by the Lorentz invariant quantity $$S\left[\psi,\overline{\psi}\right]=\int d^4x\, {\mathcal L}\left(\psi,\partial_\mu\psi\right)\ ,$$ with the Lagrangian density[^19] $${\mathcal L}=\frac{1}{2}i\left[\,\overline{\psi}\gamma^\mu\partial_\mu\psi\,-\, \partial_\mu\overline{\psi}\gamma^\mu\psi\,\right]\,-\,m\overline{\psi}\psi\ . \label{eq:DiracL}$$ Through the variational principle, the associated equation of motion is the celebrated Dirac equation, $$\left[i\gamma^\mu\partial_\mu\,-\,m\right]\,\psi(x^\mu)=0\ . 
\label{eq:Dirac}$$ A few remarks are in order. Given the relations in (\[eq:chiral1\]) and (\[eq:chiral2\]), it is clear that the kinetic term $\overline{\psi}\gamma^\mu\partial_\mu\psi$ couples the chiral components of the Dirac spinor by preserving their chirality, while the coupling $m\overline{\psi}\psi$ switches between the two chirality components. As will become clear hereafter, since the real parameter $m\ge 0$ in fact determines the mass of the particle quanta associated to such a field, a massless Dirac particle propagates without flipping its chirality, whereas a massive particle sees both its chiral components contribute to its spacetime dynamics. The term $m\overline{\psi}\psi$ is known as the Dirac mass term. In particular, it preserves the symmetry of the kinetic term under global phase transformations of the Dirac spinor, $${\rm U}_{\rm V}{\rm (1)}\ :\ \ \ \psi'(x)=e^{i\alpha}\,\psi(x)\ ,$$ leading to a conserved U$_{\rm V}$(1) quantum number which, effectively, counts the difference between the numbers of fermions and antifermions present in the system. This U$_{\rm V}$(1) phase symmetry is thus that of the fermion number, which may coincide with the electric charge quantum number when coupled to the electromagnetic interaction. The corresponding conserved Noether current is simply the vector bilinear $J^\mu=\overline{\psi}\gamma^\mu\psi$, thus obeying the divergenceless condition $\partial_\mu J^\mu=0$ for solutions to the Dirac equation (\[eq:Dirac\]). Furthermore, since under the transformation $$\psi'(x)=\gamma_5\,\psi(x)\ ,$$ the mass term $m\overline{\psi}\psi$ changes sign, $\overline{\psi'}\psi'=-\overline{\psi}\psi$, it may always be assumed that the parameter $m$ is not negative, $m\ge 0$. One may also consider U(1)$_{\rm A}$ axial transformations, $${\rm U}_{\rm A}{\rm (1)}\ :\ \ \ \psi'(x)=e^{i\alpha\gamma_5}\,\psi(x)\ ,$$ leaving the kinetic term of the Lagrangian density invariant, but not the Dirac mass term. 
When $m=0$, the associated conserved Noether current density is the axial vector spinor bilinear, $J^\mu_5=\overline{\psi}\gamma^\mu\gamma_5\psi$, which is indeed conserved for solutions to Dirac’s equation (\[eq:Dirac\]) only provided $m=0$, as may explicitly be checked through direct calculation. These vector and axial symmetries of the Dirac Lagrangian density are important aspects for the theory of the strong interactions, quantum chromodynamics (QCD). Rather than considering a Dirac mass term, one may also use the charge conjugate spinor $\psi_C$ to define another type of mass term, $$m_M\overline{\psi}\psi_C\,+\,{\rm hermitian\ conjugate}\ ,$$ known as a Majorana mass term. However, it should be clear that such a term breaks not only the axial symmetry as does a Dirac mass term, but also the above vector symmetry under phase transformations. Hence, a Majorana mass term leads to a violation of the fermion number, again a reason why such a possibility may be contemplated for neutrinos only within the Standard Model of the quarks and leptons and their strong and electroweak interactions. 
A detailed analysis, similar to that applied to the Klein–Gordon equation,[@GovCOPRO2] considering the plane wave solutions[^20] to the Dirac equation (\[eq:Dirac\]), reveals that the general solution may be expressed through the following mode expansion $$\psi(x^\mu)=\int\frac{d^3\vec{k}}{(2\pi)^32\omega(\vec{k}\,)}\, \sum_{s=\pm}\left\{ e^{-ik\cdot x}\,u(\vec{k},s)b(\vec{k},s)\,+\, e^{ik\cdot x}\,v(\vec{k},s)d^\dagger(\vec{k},s)\right\}\ , \label{eq:solution}$$ where the plane wave spinors $u(\vec{k},s)$ and $v(\vec{k},s)$ are positive- and negative-frequency solutions to the Dirac equation in energy-momentum space, $$\left[\gamma^\mu k_\mu-m\right]\,u(\vec{k},s)=0\ \ \ ,\ \ \ \left[\gamma^\mu k_\mu+m\right]\,v(\vec{k},s)=0\ .$$ The normalisation of these spinors is such that $$\sum_{s=\pm}\,u(\vec{k},s)\overline{u}(\vec{k},s)= \left(\gamma^\mu k_\mu+m\right)\ \ \ ,\ \ \ \sum_{s=\pm}\,v(\vec{k},s)\overline{v}(\vec{k},s)= \left(\gamma^\mu k_\mu-m\right)\ .$$ The index $s=\pm$ taking two values is related to a spin or a helicity projection degree of freedom, specifying the polarisation state of the solution. The general solution has to include a summation over the two possible polarisation states of the field. The spinors $u(\vec{k},s)$ and $v(\vec{k},s)$ thus also correspond to polarisation spinors characterising the polarisation state of the field (in the same way that a polarisation vector characterises the polarisation state of a vector field $A_\mu(x^\mu)$, such as the electromagnetic vector field). 
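These spinor properties can be checked numerically without constructing $u(\vec{k},s)$ and $v(\vec{k},s)$ explicitly: on shell, $(\gamma^\mu k_\mu-m)(\gamma^\nu k_\nu+m)=(k^2-m^2)\one=0$, and the spin sums are, up to normalisation, the complementary rank-2 projectors $\Lambda_\pm=(\pm\gamma^\mu k_\mu+m)/(2m)$, whose ranks account for the two polarisation states $s=\pm$. A minimal sketch (in the Dirac representation, an assumed convention; the mass and momentum values are arbitrary):

```python
import numpy as np

# Dirac representation (assumed convention)
I2, Z2 = np.eye(2), np.zeros((2, 2))
sig = [np.array([[0, 1], [1, 0]], complex),
       np.array([[0, -1j], [1j, 0]], complex),
       np.array([[1, 0], [0, -1]], complex)]
g = [np.block([[I2, Z2], [Z2, -I2]])] + \
    [np.block([[Z2, s], [-s, Z2]]) for s in sig]

m = 1.3                                   # arbitrary mass
kvec = np.array([0.4, -0.7, 0.2])         # arbitrary 3-momentum
E = np.sqrt(m**2 + kvec @ kvec)           # on-shell energy omega(k)
kslash = E * g[0] - sum(kvec[i] * g[i + 1] for i in range(3))

# (gamma.k - m)(gamma.k + m) = (k^2 - m^2) 1 = 0 on shell
assert np.allclose((kslash - m*np.eye(4)) @ (kslash + m*np.eye(4)), 0)

# Lambda_± = (±gamma.k + m)/(2m) are complementary rank-2 projectors:
# one polarisation doublet of u-spinors, one of v-spinors
Lp = (kslash + m*np.eye(4)) / (2*m)
Lm = (-kslash + m*np.eye(4)) / (2*m)
for L in (Lp, Lm):
    assert np.allclose(L @ L, L)          # projector
    assert np.isclose(np.trace(L).real, 2.0)   # rank 2
assert np.allclose(Lp + Lm, np.eye(4))
```

The spin sums quoted above are then $\sum_s u\overline{u}=2m\Lambda_+$ and $\sum_s v\overline{v}=-2m\Lambda_-$, consistent with the rank-2 count of independent solutions.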
Finally, in exactly the same manner as for the scalar field,[@GovCOPRO2] the quantities $b(\vec{k},s)$ and $d^\dagger(\vec{k},s)$ are, at the classical level, Grassmann odd integration constants specifying a unique solution to the Dirac equation, which, at the quantum level, correspond to quantum operators for which the quantum algebraic structure is given by the following anticommutation relations $$\left\{b(\vec{k},s),b^\dagger(\vec{\ell},r)\right\}= (2\pi)^32\omega(\vec{k}\,)\,\delta_{sr}\,\delta^{(3)}(\vec{k}-\vec{\ell}\,)= \left\{d(\vec{k},s),d^\dagger(\vec{\ell},r)\right\}\ .$$ Note that the normalisation of these relations is the same as that of the creation and annihilation operators for a scalar field. As explained in Ref. , this choice leads to a Lorentz covariant normalisation of 1-particle states and mode decomposition of fields. One very important point should be emphasized here. By giving the above anticommutation relations, it is understood, as it is also in the bosonic case, that only the nonvanishing anticommutators are displayed. Thus the following anticommutation relations are implicit, $$\begin{array}{r c l} \left\{b(\vec{k},s),b(\vec{\ell},r)\right\}=&0&= \left\{d(\vec{k},s),d(\vec{\ell},r)\right\}\ ,\\ & & \\ \left\{b^\dagger(\vec{k},s),b^\dagger(\vec{\ell},r)\right\}=&0&= \left\{d^\dagger(\vec{k},s),d^\dagger(\vec{\ell},r)\right\}\ . \end{array} \label{eq:vanish}$$ Given that $b(\vec{k},s)$ and $d(\vec{k},s)$ are to be interpreted as annihilation operators for particles and antiparticles, and $b^\dagger(\vec{k},s)$ and $d^\dagger(\vec{k},s)$ as creation operators for particles and antiparticles, respectively, the anticommutators in (\[eq:vanish\]) have as a consequence that no two identical particles may occupy the same quantum state specified by the quantum numbers $\vec{k}$ and $s$. 
In other words, in contradistinction to commutation relations for bosonic degrees of freedom as is the case for a scalar field, anticommutation relations provide a manifest realisation of the Pauli exclusion principle at the operator level. Subsequent action with the same creation operator on a 1-particle state, $|\vec{k},s;-\rangle=b^\dagger(\vec{k},s)|0\rangle$ or $|\vec{k},s;+\rangle=d^\dagger(\vec{k},s)|0\rangle$, leads to the null vector in Hilbert space, since ${b^\dagger}^2(\vec{k},s)=0={d^\dagger}^2(\vec{k},s)$. It thus appears that half-integer spin fields, namely fermionic degrees of freedom, must be quantised according to anticommutation relations, whereas integer spin fields, namely bosonic degrees of freedom, must be quantised with commutation relations. This is the realisation of the spin-statistics connection. The justification of this choice may be seen from a series of arguments. The one often invoked goes as follows.[@QFT] Given the different mode expansions of the bosonic and fermionic fields in terms of creation and annihilation operators in a Fock space representation of their Fock algebra, it is necessary to specify an ordering prescription for composite operators, such as for instance the Hamiltonian operator measuring the total energy content of the field. Within the perturbative Fock space representation, it is customary and natural to choose normal ordering, whereby all creation operators are brought to sit to the left of all annihilation operators. In the case of the Dirac spinor though, when using commutation relations rather than anticommutation ones, this prescription leads to an energy spectrum which is not bounded below: the contribution of the $d^\dagger d$ type (antiparticles) is negative-definite! On the other hand, using anticommutation relations brings in the required minus sign, rendering the energy spectrum of the system positive-definite both for particles and antiparticles. 
Half-integer spin fields must be quantised according to anticommutation relations. For that reason, it is also necessary to use at the classical level Grassmann odd degrees of freedom to describe half-integer spin systems. Consequently, the usual Hamiltonian formulation of such systems involves now Grassmann graded Poisson brackets,[@GovBook] extending the properties of the usual bosonic Poisson brackets based on commuting degrees of freedom, as is the case for the scalar field for instance. Through the correspondence principle, such Grassmann graded Poisson brackets must then correspond to Grassmann graded (anti)commutation relations for the quantised system, in particular anticommutation relations for fermionic degrees of freedom of half-integer spin and commutation relations for bosonic degrees of freedom of integer spin. The algebraic properties shared by Grassmann graded Poisson brackets and Grassmann graded (anti)commutation relations are indeed identical, hence the necessity of such a coherent prescription for their correspondence. From yet another point of view, the necessity of Grassmann odd degrees of freedom for spinor fields may be seen as follows. Note that the Lagrangian function for the Dirac field is linear in the spacetime gradient $\partial_\mu\psi$, whereas that for the scalar field is quadratic in $\partial_\mu\phi$.[^21] This is a crucial fact, when considered in relation to the possibility of adding total derivatives to Lagrange functions. Indeed, for the sake of the argument, consider a one degree of freedom system of configuration space coordinate $\theta(t)$, for which the Lagrange function is first-order in the time derivative, $$L=N\theta\frac{d\theta}{dt}-V(\theta)\ ,$$ $N$ being some normalisation constant with properties under complex conjugation such that $L$ be real ($\theta$ could be complex valued). 
However, one may also write $$\theta\frac{d\theta}{dt}=\frac{d}{dt}\left(\theta^2\right)\,-\, \frac{d\theta}{dt}\theta\ .$$ Thus, if the variable $\theta$ is Grassmann even, namely implying that $\theta$ and $\dot{\theta}$ commute, one has $$\theta\frac{d\theta}{dt}=\frac{d}{dt}\left(\frac{1}{2}\theta^2\right)\ ,$$ showing that such a first-order contribution to such an action for a bosonic degree of freedom reduces purely to a total time derivative, hence leads to an equation of motion which is not a dynamical equation but rather a constraint condition, $\partial_\theta V(\theta)=0$, involving only the $\theta$-derivative of the potential contribution $V(\theta)$ to the Lagrange function. On the other hand, if the variable $\theta$ is Grassmann odd, namely such that $\theta^2=0$ and $\dot{\theta}\theta=-\theta\dot{\theta}$ (since $\theta_1\theta_2=-\theta_2\theta_1$ for Grassmann odd variables $\theta_1$ and $\theta_2$), the first-order contribution $\theta\dot{\theta}$ to the Lagrange function does indeed lead to an equation of motion describing dynamics, namely $$\dot{\theta}=\frac{1}{2N}\frac{\partial V}{\partial\theta}\ ,$$ where in the r.h.s. a left-derivative is implicitly understood. Hence, first-order actions of the above type, which generically apply for spinor field representations of the Lorentz group, need to be defined in terms of Grassmann odd variables in order to lead to nontrivial dynamics. Consequently, at the quantum level, they need to be quantised using anticommutation, rather than commutation relations. The whole mathematical framework is thus consistent, both at the classical as well as the quantum level, provided integer spin degrees of freedom are described in terms of bosonic or commuting Grassmann even variables, hence commutation relations at the quantum level, and half-integer spin degrees of freedom are described in terms of fermionic or anticommuting Grassmann odd variables, hence anticommutation relations at the quantum level. 
Having understood how to quantise the Dirac spinor field, let us conclude with a few more remarks. First, consider the Majorana condition $\psi_C(x)=\psi(x)$ imposed on such a spinor. The associated Lagrangian density then reads, $$\begin{array}{r c l} {\mathcal L}&=&\frac{1}{4}i\left[\overline{\psi}\gamma^\mu\partial_\mu\psi\,-\, \partial_\mu\overline{\psi}\gamma^\mu\psi\right]- \frac{1}{2}m\overline{\psi}\psi\\ & & \\ &=&\frac{1}{2}i\overline{\psi}\gamma^\mu\partial_\mu\psi\,-\, \frac{1}{2}m\overline{\psi}\psi+ \partial_\mu\left(-\frac{1}{4}i\overline{\psi}\gamma^\mu\psi\right)\ , \end{array}$$ where the choice of factor $1/2$ in comparison to the Dirac Lagrangian density is made in order to have a convenient normalisation of the field, leading to the usual normalisation of the anticommutation relations for the creation and annihilation operators of its quanta. This factor is also related to the avoidance of double counting of degrees of freedom. In fact, it is the same factor[@GovCOPRO2] that appears in the Lagrangian density for a real scalar field, as compared to that for a complex scalar field $\phi(x)$, namely related to the factor $1/\sqrt{2}$ in the real and imaginary components of the complex field in terms of real fields, $\phi(x)=(\phi_1(x)+i\phi_2(x))/\sqrt{2}$. Solving the Dirac equation following from the above Majorana field Lagrangian density, subject to the Majorana condition, leads to the mode decomposition, $$\psi(x)=\int\frac{d^3\vec{k}}{(2\pi)^32\omega(\vec{k}\,)} \sum_{s=\pm}\left\{ e^{-ik\cdot x}\,u(\vec{k},s)b(\vec{k},s)\ +\ e^{ik\cdot x}\,v(\vec{k},s)b^\dagger(\vec{k},s)\right\}\ ,$$ with the same quantities as those that appear in the solution (\[eq:solution\]) for the Dirac spinor. 
Note well that indeed there no longer appears the creation operator $d^\dagger(\vec{k},s)$ for antiparticles, but that only the annihilation, $b(\vec{k},s)$, and creation, $b^\dagger(\vec{k},s)$, operators of particles of a single type contribute to the Majorana spinor field operator.[^22] A Majorana spinor describes quanta which are their own antiparticles. Hence, they cannot carry a conserved quantum number, such as fermion number, as was already observed previously. A Majorana spinor describes neutral spin 1/2 particles, whereas a Dirac spinor describes charged (for some symmetry, for instance the U(1) symmetry of electric charge or fermion number) spin 1/2 particles. The fermion number of the Dirac spinor is determined, through Noether’s theorem, from the time component of the conserved vector current $J^\mu=\overline{\psi}\gamma^\mu\psi$. In terms of the mode expansion, one has $$F=\int_{(\infty)}d^3\vec{x}\,J^0= \int\frac{d^3\vec{k}}{(2\pi)^3\omega(\vec{k}\,)}\sum_{s=\pm}\left\{ b^\dagger(\vec{k},s)b(\vec{k},s)\,-\,d^\dagger(\vec{k},s)d(\vec{k},s)\right\}\ ,$$ where the normal ordering prescription has been applied. Clearly, this expression shows that states created by $b^\dagger(\vec{k},s)$ carry an $F$ value opposite to that carried by states created by $d^\dagger(\vec{k},s)$. The conserved $F$ quantum number, related to the invariance of the Dirac Lagrangian density under arbitrary global phase transformations of the Dirac spinor field, is what distinguishes particles from antiparticles of spin 1/2 in this system. If this quantum number is also identified to the electric charge of the electromagnetic interaction for electrons, it is thus seen that the Dirac spinor describes both electrons and their antiparticles, positrons, of identical mass and spin, but opposite electric charge, which remains a conserved quantum number. 
Gauging the associated U$_{\rm V}$(1) vector symmetry then leads to a complete description of the quantum electromagnetic interactions between electrons, positrons and photons, namely quantum electrodynamics (QED). When this is extended to nonabelian internal symmetries, one obtains Yang–Mills theories[@GovCOPRO2] which, for the choice of gauge group SU(3)$_{\rm C}\times$SU(2)$_{\rm L}\times$U(1)$_{\rm Y}$, enter the construction of the Standard Model of quarks and leptons and their interactions. The sector of the strong interactions among quarks is thus based on the colour symmetry SU(3)$_{\rm C}$ and the associated Yang–Mills gauge theory of quantum chromodynamics (QCD). For what concerns spacetime symmetries, the Poincaré generators are now given by the expressions,[@QFT] $$P^\mu=\int\frac{d^3\vec{k}}{(2\pi)^3\omega(\vec{k}\,)}\,k^\mu\, \sum_{s=\pm}\left\{ b^\dagger(\vec{k},s)b(\vec{k},s)\,+\,d^\dagger(\vec{k},s)d(\vec{k},s)\right\}\ ,$$ $$M^{\mu\nu}=\int_{(\infty)} d^3\vec{x}\left\{ \Theta^{0\mu}x^\nu-\Theta^{0\nu}x^\mu- \frac{1}{4}\overline{\psi}\left\{\gamma^0,\sigma^{\mu\nu}\right\}\psi\right\}\ ,$$ where $$\Theta^{\mu\nu}=\frac{1}{2}i\left[\overline{\psi}\gamma^\mu\partial^\nu\psi\,-\, \partial^\nu\overline{\psi}\gamma^\mu\psi\right]\ ,$$ while fermion normal ordering is implicit of course. It then follows that the 1-particle states obtained by acting with the creation operators $b^\dagger(\vec{k},s)$ and $d^\dagger(\vec{k},s)$ on the Fock vacuum $|0\rangle$ are energy-momentum eigenstates of momentum $\vec{k}$ and mass $m$, possessing spin or helicity 1/2. In the same way as for the scalar field,[@GovCOPRO2] it is possible to compute the Feynman propagator of the Dirac field, namely the causal probability amplitude for seeing a particle created at a given point in spacetime and annihilated at some other such point. 
This time-ordered amplitude is thus defined by the 2-point correlation function $$\begin{array}{r c l} \langle 0|T\psi_\alpha(x)\overline{\psi}_\beta(y)|0\rangle&=& \theta(x^0-y^0)\langle 0|\psi_\alpha(x)\overline{\psi}_\beta(y)|0\rangle\,-\,\\ & & \\ &&-\theta(y^0-x^0)\langle 0|\overline{\psi}_\beta(y)\psi_\alpha(x)|0\rangle\ , \end{array}$$ where the anticommuting nature of the spinor is accounted for through the negative sign in the second contribution in the r.h.s. ($\theta(x)$ denotes the usual step function, $\theta(x>0)=1$ and $\theta(x<0)=0$). In the case of the free Dirac field, a direct substitution of the mode expansion (\[eq:solution\]) leads to the integral representation, $$\langle 0|T\psi_\alpha(x)\overline{\psi}_\beta(y)|0\rangle= \int\frac{d^4k}{(2\pi)^4}\,e^{-ik\cdot(x-y)}\, \left(\frac{i}{\gamma^\mu k_\mu-m+i\epsilon}\right)_{\alpha\beta}\ ,$$ where, as usual,[@GovCOPRO2] $\epsilon>0$ corresponds to an infinitesimal imaginary part in the denominator of the momentum-space propagator introduced to specify the contour integration in the complex $k^0$ energy plane in order to pick up the correct pole contributions associated to the positive- and negative-frequency components of the Dirac spinor mode expansion. 
This Dirac propagator is the basis for perturbation theory involving Dirac spinors, in the same way that the Feynman propagator for scalar fields enables the evaluation of the perturbation theory corrections stemming from interactions between scalar particles.[@GovCOPRO2] On the Road Towards Supersymmetry:\ A Simple Quantum Mechanical Model {#Sec4} =================================== The previous sections have reviewed how, by enforcing at all steps the consequences of spacetime Lorentz and Poincaré covariance, relativistic quantum field theories lead to a conceptual framework deeply rooted in basic physical principles which naturally describes the relativistic and quantum properties of point-particles of given mass and spin, and the possibility of their creation and annihilation in a variety of processes for which the fundamental interactions are responsible. The Poincaré symmetry invariance properties of Minkowski spacetime allow for the particle interpretation of definite energy-momentum and spin values for the quanta of such fields. Any further internal symmetries then also account for further conserved quantum numbers that particles carry. When gauged, such internal symmetries lead to specific interactions of the Yang–Mills type, which are at the basis of the construction of the successful Standard Model for quarks and leptons and their fundamental interactions. We have also made clear how bosonic particles of integer spin need to be described in terms of commuting degrees of freedom and quantum commutation relations for the tensor field representations of the Lorentz group, whereas fermionic particles of half-integer spin need to be described in terms of anticommuting degrees of freedom and quantum anticommutation relations for the spinor field representations of the Lorentz group. As briefly discussed in Ref. 
, this widely encompassing framework aiming towards a fundamental unification has now come to a cross-roads at which an irreconcilable clash has arisen between the principles of general relativity, the relativistic invariant classical field theory for the gravitational interaction described through the dynamics of spacetime geometry, and the principles of relativistic quantum field theory, the natural framework for all of matter and the other three fundamental interactions. Many extensions beyond the Standard Model aiming at a resolution of this conflict have been contemplated, most of which involve in one way or another algebraic structures relating fermionic and bosonic degrees of freedom, so-called supersymmetry algebras. Indeed, the distinct separation between bosonic and fermionic fields at the same time suggests a possible unification within a larger framework in which such degrees of freedom could appear on an equal footing, a specific type of a fundamental unification of matter (half-integer spin particles, namely the quarks and leptons) and interactions (integer spin particles, namely the Yang–Mills gauge bosons of the strong and electroweak interactions, the higgs particle yet to be discovered and the graviton). One should expect that, assuming this to be achievable, such a unification should also extend the usual commuting coordinates of Minkowski spacetime into a superspace including both commuting and anticommuting coordinates, truly a first embodiment of an eventual fundamental quantum geometry. The stage has thus been set to embark onto a journey on the roads towards the construction of supersymmetric quantum field theories. 
These notes shall stop short of such a discussion, which is widely available in the literature, and conclude in this section with a series of remarks pointing towards the generic features of such systems, as a way of opening the mind of the reader for whom this is unknown territory of theoretical physics to what he/she may expect from a study of supersymmetry on his/her own. We shall do this starting again from ordinary quantum mechanics. Hopefully, it should have been made abundantly clear[@GovCOPRO2] that the “essence” of relativistic quantum fields is their harmonic oscillator characteristics, extended in such a manner as to make their spacetime dynamics also consistent with the Poincaré invariance of Minkowski spacetime. This is true whether for bosonic or fermionic quantum fields, the simplest examples of which are the fields describing particles of spin or helicity 0 and 1/2. Let us thus once again reduce these field situations to the extreme, by restricting the discussion to a finite number of simple harmonic oscillator degrees of freedom. The generalisation to field degrees of freedom will then be restricted and guided by the constraints stemming from Poincaré invariance, leading [*in fine*]{} to supersymmetric relativistic quantum field theories. To begin with, let us consider a single bosonic harmonic oscillator.[@GovCOPRO2] Once quantised, to such a system is associated a representation space of its quantum states, its physical Hilbert space, on which act the annihilation, $a$, and creation, $a^\dagger$, operators of energy quanta subjected to the commutation relation $[a,a^\dagger]=\one$ (the other commutators vanish identically, $[a,a]=0=[a^\dagger,a^\dagger]$). 
A canonical basis of the Fock algebra is the Fock basis, constructed from a vacuum state $|0\rangle$ annihilated by $a$, $a|0\rangle=0$, on which acts the creation operator $a^\dagger$, leading to the discrete set of states $|n\rangle=(a^\dagger)^n|0\rangle/\sqrt{n!}$ ($n=0,1,2,\cdots$) obeying the properties, $$\langle n|m\rangle=\delta_{nm}\ \ ,\ \ a|n\rangle=\sqrt{n}|n-1\rangle\ \ ,\ \ a^\dagger|n\rangle=\sqrt{n+1}|n+1\rangle\ \ ,\ \ a^\dagger a|n\rangle=n|n\rangle\ .$$ The quantum Hamiltonian of the system, generating also its dynamical evolution in time, is diagonal in the Fock basis, and is given by $$H_B=\frac{1}{2}\hbar\omega\left\{a^\dagger,a\right\}= \frac{1}{2}\hbar\omega\left[a^\dagger a+a a^\dagger\right]= \hbar\omega\left[a^\dagger a+\frac{1}{2}\right]\ ,$$ where the vacuum quantum energy contribution $\hbar\omega/2$ has been retained, while $\omega$ denotes the angular frequency of the system, setting its energy scale in combination with Planck’s constant $\hbar$. The energy spectrum is thus equally spaced in steps of $\hbar\omega$, with the eigenvalues $E_B(n)=\hbar\omega(n+1/2)$, $H_B|n\rangle=E_B(n)|n\rangle$, starting with the vacuum state at $E_B(n=0)=\hbar\omega/2$. Let us now consider likewise the quantum fermionic oscillator of the same angular frequency $\omega$ (the reason being that later on we shall introduce a symmetry relating the bosonic and fermionic systems). The space of states provides a representation for the fermionic anticommutator algebra $$\{b,b\}=0=\{b^\dagger,b^\dagger\}\ \ \ ,\ \ \ \{b,b^\dagger\}=\one\ ,$$ where $b$ and $b^\dagger$ are the fermionic annihilation and creation operators, respectively. Note that by having replaced commutation relations with anticommutation ones, the vanishing anticommutators in fact imply the properties $b^2=0={b^\dagger}^2$, the manifest realisation of the Pauli exclusion principle for fermions. 
As a consequence, the Fock space representation of this fermionic Fock algebra is 2-dimensional (to be contrasted with the discrete infinite dimension of the bosonic Hilbert space), and is spanned by a vacuum state $|0\rangle$ and its first excitation $|1\rangle=b^\dagger|0\rangle$, with the properties, $$\begin{array}{r c l} b|0\rangle=0\ \ ,\ \ b^\dagger|0\rangle=|1\rangle\ \ &,&\ \ b|1\rangle=|0\rangle\ \ ,\ \ b^\dagger|1\rangle=0\ ,\\ & \\ \langle 0|0\rangle=1=\langle 1|1\rangle\ \ &,&\ \ \langle 0|1\rangle=0=\langle 1|0\rangle\ . \end{array}$$ For the quantum Hamiltonian, we shall also choose $$H_F=\frac{1}{2}\hbar\omega\left[b^\dagger,b\right]= \frac{1}{2}\hbar\omega\left[b^\dagger b-b b^\dagger\right]= \hbar\omega\left[b^\dagger b - \frac{1}{2}\right]\ ,$$ where this time the vacuum quantum energy is negative because of the fermionic character of the degree of freedom. The Fock state basis diagonalises this operator, with the energy spectrum, $$H_F|0\rangle=-\frac{1}{2}\hbar\omega\ \ \ ,\ \ \ H_F|1\rangle=\frac{1}{2}\hbar\omega\ ,$$ thus describing a 2-level quantum system split by an energy $\hbar\omega$. Let us now combine these two systems, and consider the tensor product of their operator algebras and representation spaces. Hence, the complete Hilbert space is spanned by the states $|n,0\rangle$ and $|n,1\rangle$, where the first entry stands for the bosonic excitation level, and the second entry for that of the fermionic sector. The total Hamiltonian of the system then reads, $$H=H_B+H_F=\frac{1}{2}\hbar\omega\left[a^\dagger a+a a^\dagger + b^\dagger b - b b^\dagger\right]= \hbar\omega\left[a^\dagger a+ b^\dagger b\right]\ ,$$ in which the vacuum quantum energies of the bosonic and fermionic sectors have cancelled one another. 
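Both oscillator algebras are small enough to check explicitly, and the fermionic sector in particular is fully captured by $2\times 2$ matrices acting on $\{|0\rangle,|1\rangle\}$. A minimal sketch (with $\hbar\omega$ set to $1$, an arbitrary choice of units):

```python
import numpy as np

hbar_omega = 1.0   # energy scale hbar*omega, set to 1 for the check

# 2x2 matrix representation of the fermionic oscillator on {|0>, |1>}:
# b annihilates the single quantum, b^dagger creates it.
b = np.array([[0., 1.],
              [0., 0.]])
bd = b.T

# Pauli exclusion at the operator level: b^2 = 0 = (b^dagger)^2,
# together with the canonical anticommutator {b, b^dagger} = 1.
assert np.allclose(b @ b, 0) and np.allclose(bd @ bd, 0)
assert np.allclose(b @ bd + bd @ b, np.eye(2))

# H_F = (hbar omega / 2)[b^dagger, b]: a two-level system with
# energies -hbar omega/2 and +hbar omega/2, split by hbar omega.
HF = 0.5 * hbar_omega * (bd @ b - b @ bd)
assert np.allclose(np.sort(np.linalg.eigvalsh(HF)), [-0.5, 0.5])
```

The negative vacuum energy $-\hbar\omega/2$ of this sector is precisely what cancels the bosonic $+\hbar\omega/2$ in the combined Hamiltonian $H=H_B+H_F$.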
Consequently, the energy eigenspectrum is still equally spaced in steps of $\hbar\omega$, is doubly degenerate at each level with the states $|n,0\rangle$ and $|n-1,1\rangle$ at level $n$ of energy $\hbar\omega n$, except for the single ground state or vacuum state $|n=0,0\rangle$ at level $n=0$ whose energy vanishes identically, $$H|n=0,0\rangle=0\ \ ,\ \ H|n,0\rangle=\hbar\omega n|n,0\rangle\ \ ,\ \ H|n-1,1\rangle=\hbar\omega n|n-1,1\rangle\ .$$ With these simple remarks, in fact we already encounter a series of features quite unique to supersymmetry. If a system possesses a symmetry that relates fermionic and bosonic degrees of freedom, there are general classes of cancellations between quantum fluctuations and corrections stemming from the two sectors, leading to better behaved short-distance UV divergences generic of 4-dimensional quantum field theories. Indeed, there are even certain classes of quantum operators which, in supersymmetric field theories, are not at all renormalised by perturbative quantum corrections, leading to very powerful so-called non-renormalisation theorems. In addition, the cancellation between bosonic and fermionic vacuum quantum energy contributions implies that in field theories in which supersymmetry is not spontaneously broken, the vacuum state possesses an exactly vanishing energy, suggesting a possible connection with the famous problem of the extremely small (in comparison to the Planck energy scale relevant to quantum gravity, $10^{19}$ GeV) and yet not exactly vanishing cosmological constant of our universe.[@Lambda] If only from that perspective, dynamical spontaneous symmetry breaking of supersymmetry is thus an extremely fascinating issue in the quest for a fundamental unification.[@Witten2] The degeneracy between the bosonic states $|n,0\rangle$ and the fermionic ones $|n,1\rangle$ suggests that there exists a symmetry — a supersymmetry — relating these two sectors of the system. 
We need to construct the operators generating such transformations, by creating a fermion and annihilating a boson, or vice-versa, thus mapping between bosonic and fermionic states degenerate in energy. Clearly these operators are given by $$Q=\sqrt{\hbar\omega}\ a^\dagger b\ \ \ ,\ \ \ Q^\dagger=\sqrt{\hbar\omega}\ ab^\dagger\ ,$$ acting as $$\begin{array}{r c l} Q|n,0\rangle=0\ \ &,&\ \ Q|n,1\rangle=\sqrt{\hbar\omega}\ \sqrt{n+1}|n+1,0\rangle\ ,\\ & & \\ Q^\dagger|n,0\rangle=\sqrt{\hbar\omega}\ \sqrt{n}|n-1,1\rangle\ \ &,&\ \ Q^\dagger|n,1\rangle=0\ . \end{array}$$ Note that the vacuum $|n=0,0\rangle$ is the single state which is annihilated by both $Q$ and $Q^\dagger$, as it must since it is not degenerate in energy with any other state. The operators $Q$ and $Q^\dagger$ are thus the generators of a supersymmetry present in this system. Their algebra is given by $$\left\{Q,Q\right\}=0=\left\{Q^\dagger,Q^\dagger\right\}\ \ ,\ \ \left\{Q,Q^\dagger\right\}=H\ \ ,\ \ \left[Q,H\right]=0=\left[Q^\dagger,H\right]\ . \label{eq:SUSYalgebra}$$ The fact that they define a symmetry is confirmed by their vanishing commutation relations with the Hamiltonian $H$. Once again, we uncover here a general feature of supersymmetry algebras, namely the fact that acting twice with a supersymmetry generator, in fact one gets an identically vanishing result, $Q^2=0={Q^\dagger}^2$, a property directly reminiscent of cohomology classes of differential forms in differential geometry.[@Witten3] In addition, the anticommutator of a supersymmetry generator with its adjoint gives the Hamiltonian of the system. In a certain sense thus, making a system supersymmetric amounts to taking a square-root of its Hamiltonian. Put differently, the square-root of the Klein–Gordon equation is the Dirac equation, when this correspondence is extended to field theories. 
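The algebra (\[eq:SUSYalgebra\]) can be verified numerically on a truncated bosonic Fock space tensored with the 2-dimensional fermionic one. A sketch (with $\hbar\omega=1$ and the truncation dimension an arbitrary choice) checking $Q^2=0$, $\{Q,Q^\dagger\}=H$ away from the truncation edge, $[Q,H]=0$, and the paired spectrum with its single zero-energy vacuum:

```python
import numpy as np

hbar_omega = 1.0
N = 8   # truncation of the bosonic Fock space (an approximation)

# Truncated bosonic ladder operators on {|0>, ..., |N-1>}
a = np.diag(np.sqrt(np.arange(1, N)), k=1)
ad = a.T
# Fermionic operators on {|0>, |1>}
b = np.array([[0., 1.], [0., 0.]])
bd = b.T

I_B, I_F = np.eye(N), np.eye(2)
A, Ad = np.kron(a, I_F), np.kron(ad, I_F)   # index = 2*n + f
B, Bd = np.kron(I_B, b), np.kron(I_B, bd)

Q = np.sqrt(hbar_omega) * Ad @ B            # Q = sqrt(hbar w) a^dag b
Qd = Q.T                                    # Q^dag = sqrt(hbar w) a b^dag
H = hbar_omega * (Ad @ A + Bd @ B)

# Nilpotency and the supersymmetry algebra
assert np.allclose(Q @ Q, 0)
cut = 2 * (N - 1)            # drop the top bosonic level (truncation edge)
anti = Q @ Qd + Qd @ Q
assert np.allclose(anti[:cut, :cut], H[:cut, :cut])   # {Q, Q^dag} = H
assert np.allclose(Q @ H - H @ Q, 0)                  # [Q, H] = 0

# Spectrum: one zero-energy vacuum, excited levels doubly degenerate
ev = np.sort(np.linalg.eigvalsh(H))
assert np.isclose(ev[0], 0.0) and not np.isclose(ev[1], 0.0)
assert np.allclose(ev[1], ev[2])            # first excited doublet
```

The truncation only disturbs $\{Q,Q^\dagger\}=H$ at the artificial top bosonic level, which is why the comparison above excludes it; in the full, infinite-dimensional Fock space the identity holds everywhere.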
From these simple remarks it already transpires that supersymmetry algebras provide powerful new tools with which to explore mathematical questions within a context which may draw on a lot of insight and intuition from quantum physics.[@MATH; @Witten3] Results have indeed been very rewarding already, and many more are still to be established along such lines. To complete the algebraic relations in (\[eq:SUSYalgebra\]), it is also useful to display the supersymmetry action on the creation and annihilation operators, $$\begin{array}{r c l} [Q,a]=-\sqrt{\hbar\omega}\,b\ \ ,\ \ [Q,a^\dagger]=0\ \ &,&\ \ [Q^\dagger,a]=0\ \ ,\ \ [Q^\dagger,a^\dagger]=\sqrt{\hbar\omega}\,b^\dagger\ ,\\ & \\ \left\{Q,b\right\}=0\ \ ,\ \ \left\{Q,b^\dagger\right\}=\sqrt{\hbar\omega}\,a^\dagger\ \ &,&\ \ \left\{Q^\dagger,b\right\}=\sqrt{\hbar\omega}\,a\ \ ,\ \ \left\{Q^\dagger,b^\dagger\right\}=0\ . \end{array} \label{eq:QFock}$$ The properties $Q^2=0={Q^\dagger}^2$ also suggest that it should be possible to obtain wave function representations of the fermionic and supersymmetry algebras using complex valued Grassmann odd variables $\theta$, such that $\theta_1\theta_2=-\theta_2\theta_1$ and thus $\theta^2_1=0=\theta^2_2$, in the same way that the bosonic Fock algebra possesses wave function representations in terms of commuting coordinates, a configuration space coordinate $x$ and its conjugate momentum $p$, obeying the Heisenberg algebra.[@GovCOPRO2] In the latter case, these two variables may be combined into a single complex commuting variable $z$, leading for instance to the usual holomorphic representation in the bosonic sector, $$a=\frac{\partial}{\partial z}\ \ \ ,\ \ \ a^\dagger=z\ .$$ Thus likewise for the fermionic algebra, let us take $$b=\frac{\partial}{\partial\theta}\ \ \ ,\ \ \ b^\dagger=\theta\ ,$$ where it is understood that all derivatives with respect to Grassmann odd variables are taken from the left (left-derivatives). 
Consequently the supersymmetry generators are represented by $$Q=\sqrt{\hbar\omega}\,z\frac{\partial}{\partial\theta}\ \ \ ,\ \ \ Q^\dagger=\sqrt{\hbar\omega}\,\theta\frac{\partial}{\partial z}\ ,$$ leading to the representation for the Hamiltonian, $$H=Q^\dagger Q+Q Q^\dagger=\hbar\omega\left[a^\dagger a+b^\dagger b\right]= \hbar\omega\left[z\frac{\partial }{\partial z} +\theta\frac{\partial}{\partial\theta}\right]\ .$$ These operators thus act on wave functions $\psi(z,\theta)$. Because of the Grassmann property $\theta^2=0$, a power series expansion of such a function terminates at a finite order, in the present case at first order since only one $\theta$ variable is involved, $$\psi(z,\theta)=\psi_B(z)+\theta\psi_F(z)\ \ ,\ \ \psi_F(z)=\frac{\partial}{\partial\theta}\psi(z,\theta)\ ,$$ where, assuming that $\psi(z,\theta)$ itself is Grassmann even, the bosonic component $\psi_B(z)$ is Grassmann even while the fermionic one $\psi_F(z)$ is Grassmann odd, as it should considering the analogous structure of the space of quantum states. In particular, the general wave function representing the energy eigenstates $|n,0\rangle$ and $|n-1,1\rangle$ with value $E(n)=\hbar\omega n$ is given as $$\psi_n(z,\theta)=B_n\frac{z^n}{\sqrt{n!}}\,+\, F_n\theta\frac{z^{n-1}}{\sqrt{(n-1)!}}\ ,$$ where $B_n$ and $F_n$ are arbitrary phase factors associated to the bosonic and fermionic components of this wave function. 
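In this two-component language, where a superfunction is encoded by the pair $(\psi_B,\psi_F)$, the relation $\{Q,Q^\dagger\}=H$ can be verified symbolically. A small sympy sketch (not from the original text; the component actions below simply restate the operator definitions given above, with $\hbar\omega$ kept as a symbol):

```python
import sympy as sp

z = sp.symbols('z')
hw = sp.symbols('hbar_omega', positive=True)

# A superfunction psi(z, theta) = psi_B(z) + theta*psi_F(z) is stored as the
# component pair (psi_B, psi_F); theta^2 = 0 truncates everything at first order.
def Q(c):        # Q = sqrt(hw) z d/dtheta : extracts psi_F, multiplies by z
    B, F = c
    return (sp.sqrt(hw) * z * F, sp.Integer(0))

def Qd(c):       # Q^dag = sqrt(hw) theta d/dz : differentiates psi_B, adds theta
    B, F = c
    return (sp.Integer(0), sp.sqrt(hw) * sp.diff(B, z))

def H(c):        # H = hw (z d/dz + theta d/dtheta), acting componentwise
    B, F = c
    return (hw * z * sp.diff(B, z), hw * (z * sp.diff(F, z) + F))

psiB = sp.Function('psi_B')(z)
psiF = sp.Function('psi_F')(z)
psi = (psiB, psiF)

lhs = [u + v for u, v in zip(Q(Qd(psi)), Qd(Q(psi)))]   # {Q, Q^dag} psi
assert all(sp.simplify(l - h) == 0 for l, h in zip(lhs, H(psi)))
assert Q(Q(psi)) == (0, 0) and Qd(Qd(psi)) == (0, 0)    # nilpotency
```

The check runs on arbitrary component functions, so it establishes the operator identity itself, not merely its action on particular states.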
The supersymmetry charges $Q$ and $Q^\dagger$ act on such general wave functions as $$Q\psi(z,\theta)=\sqrt{\hbar\omega}\ z\psi_F(z)\ \ ,\ \ Q^\dagger\psi(z,\theta)=\sqrt{\hbar\omega}\ \theta\partial_z\psi_B(z)\ .$$ Thus introducing a complex valued Grassmann odd constant parameter $\epsilon$ associated to the symmetries generated by the supercharges $Q$ and $Q^\dagger$, one has for the general self-adjoint combination of supercharges $$Q_\epsilon=\epsilon Q+Q^\dagger\epsilon^\dagger= \epsilon Q-\epsilon^\dagger Q^\dagger\ ,$$ the action $$Q_\epsilon\psi(z,\theta)=\sqrt{\hbar\omega} \left[\left(z\epsilon\psi_F(z)\right)\ +\ \theta\left(\epsilon^\dagger\partial_z\psi_B(z)\right)\right]\ .$$ Consequently, given the variations $\delta_\epsilon\psi(z,\theta)=iQ_\epsilon\psi(z,\theta)$, the bosonic and fermionic components of such wave functions are transformed according to the rules $$\delta_\epsilon\psi_B(z)=i\sqrt{\hbar\omega}\,z\epsilon\psi_F(z) \ \ ,\ \ \delta_\epsilon\psi_F(z)=i\sqrt{\hbar\omega}\, \epsilon^\dagger\partial_z\psi_B(z)\ . \label{eq:SUSYvariation1}$$ These expressions thus provide the infinitesimal supersymmetry transformations of the wave functions of the system. We shall come back to these relations hereafter. In order to identify which type of classical system corresponds to the present situation, let us now introduce the configuration and momentum space degrees of freedom through the usual relations,[@GovCOPRO2] $$\begin{array}{r c l} a=\sqrt{\frac{m\omega}{2\hbar}}\left[x+\frac{i}{m\omega}p\right]\ \ &,&\ \ a^\dagger=\sqrt{\frac{m\omega}{2\hbar}}\left[x-\frac{i}{m\omega}p\right]\ ,\\ & & \\ b=\sqrt{\frac{m\omega}{2\hbar}}\left[\theta_1+i\theta_2\right]\ \ &,&\ \ b^\dagger=\sqrt{\frac{m\omega}{2\hbar}}\left[\theta_1-i\theta_2\right]\ . 
\end{array}$$ Note well that the variables $x$, $p$, $\theta_1$ and $\theta_2$, which are assumed to be self-adjoint, $x^\dagger=x$, $p^\dagger=p$, $\theta^\dagger_1=\theta_1$, $\theta^\dagger_2=\theta_2$, are still operators at this stage. The decomposition of the fermionic operators $b$ and $b^\dagger$ in these terms is of course intended to maintain as manifest as possible the parallel between the bosonic and fermionic sectors of the system, which are exchanged under supersymmetry transformations. Given these operator redefinitions, it follows that the only nonvanishing (anti)commutators are (note that the operators $\theta_1$ and $\theta_2$ thus anticommute with one another, $\left\{\theta_1,\theta_2\right\}=0$) $$[x,p]=i\hbar\ \ ,\ \ \left\{\theta_1,\theta_1\right\}=\frac{\hbar}{m\omega}= \left\{\theta_2,\theta_2\right\}\ . \label{eq:quantumbrackets}$$ Furthermore, the Hamiltonian operator is then expressed as $$H=\frac{p^2}{2m}+\frac{1}{2}m\omega^2x^2+im\omega^2\theta_1\theta_2\ , \label{eq:Hamiltonian}$$ leading to the operator equations of motion in the Heisenberg picture, $$\begin{array}{l c l} i\hbar\dot{x}=\left[x,H\right]=i\hbar\frac{p}{m}\ \ \ &,&\ \ \ i\hbar\dot{p}=\left[p,H\right]=-i\hbar m\omega^2x\ ,\\ & & \\ i\hbar\dot{\theta}_1=\left[\theta_1,H\right]=i\hbar\omega\theta_2\ \ \ &,&\ \ \ i\hbar\dot{\theta}_2=\left[\theta_2,H\right]=-i\hbar\omega\theta_1\ . \end{array} \label{eq:quantumEM}$$ It is also possible to determine how the supercharges $Q$ and $Q^\dagger$ act on the operators $x$, $p$, $\theta_1$ and $\theta_2$, an exercise left to the reader (of which the results are used hereafter). 
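The fermionic sector of these relations can be checked directly in the two-dimensional matrix representation of $b$ and $b^\dagger$. The sketch below (an illustration, with the units $\hbar=m=\omega=1$ chosen as an assumption for convenience) verifies the anticommutators in (\[eq:quantumbrackets\]) and the fermionic Heisenberg equations in (\[eq:quantumEM\]):

```python
import numpy as np

hbar = m = omega = 1.0        # illustrative choice of units (an assumption)

b = np.array([[0, 1], [0, 0]], dtype=complex)     # fermionic annihilation operator
bd = b.conj().T

# self-adjoint combinations, inverted from b = sqrt(m*omega/(2*hbar))(theta1 + i theta2)
theta1 = np.sqrt(hbar / (2 * m * omega)) * (b + bd)
theta2 = -1j * np.sqrt(hbar / (2 * m * omega)) * (b - bd)

anti = lambda X, Y: X @ Y + Y @ X
comm = lambda X, Y: X @ Y - Y @ X

# the anticommutators of eq. (quantumbrackets)
assert np.allclose(anti(theta1, theta1), hbar / (m * omega) * np.eye(2))
assert np.allclose(anti(theta2, theta2), hbar / (m * omega) * np.eye(2))
assert np.allclose(anti(theta1, theta2), 0)

# fermionic part of H and the Heisenberg equations of motion, eq. (quantumEM)
Hf = 1j * m * omega**2 * theta1 @ theta2
assert np.allclose(comm(theta1, Hf), 1j * hbar * omega * theta2)
assert np.allclose(comm(theta2, Hf), -1j * hbar * omega * theta1)
```

The bosonic bracket $[x,p]=i\hbar$ is deliberately not checked here, since it admits no finite-dimensional matrix representation; only the fermionic relations close in two dimensions.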
Through the correspondence principle, the (anti)commutation relations (\[eq:quantumbrackets\]) are required to translate into the following classical Grassmann graded Poisson brackets for the associated degrees of freedom, $$\left\{x,p\right\}=1\ \ ,\ \ \left\{\theta_1,\theta_1\right\}=-\frac{i}{m\omega}= \left\{\theta_2,\theta_2\right\}\ ,$$ with now all the variables $x$, $p$ , $\theta_1$ and $\theta_2$ real under complex conjugation, $x$ and $p$ being ordinary commuting Grassmann even degrees of freedom, but $\theta_1$ and $\theta_2$ being anticommuting Grassmann odd degrees of freedom associated to the fermionic sector of the system. At the classical level, the Hamiltonian is given by the same expression as in (\[eq:Hamiltonian\]). In particular, using these Grassmann graded Poisson brackets, at the classical level the same Hamiltonian equations of motion are recovered as those in (\[eq:quantumEM\]) for the quantum operators. These classical equations of motion follow through the variational principle from the first-order Hamiltonian action $$S[x,p,\theta_1,\theta_2]=\int dt\left\{ \frac{1}{2}\left[\dot{x}p-\dot{p}x\right] -\frac{1}{2}im\omega\left[\dot{\theta}_1\theta_1+\dot{\theta}_2\theta_2\right] -H\right\}\ .$$ Using the Hamiltonian equation of motion for $x$ in order to reduce its conjugate momentum $p$, namely $p=m\dot{x}$, and also introducing the complex valued Grassmann odd variable $\theta=\theta_1+i\theta_2$, it then follows finally that the Lagrange function of the system is given by,[^23] $$L=\frac{1}{2}m\dot{x}^2-\frac{1}{2}m\omega^2x^2 +\frac{1}{2}im\omega\theta^\dagger\dot{\theta}- \frac{1}{2}m\omega^2\theta^\dagger\theta\ . \label{eq:SUSYL}$$ From the above considerations, it should then follow that the transformations associated to the supercharges $Q$ and $Q^\dagger$ generate global symmetries of this action. 
These transformations are given by[^24] $$\delta_Q x=i\epsilon\theta+i\epsilon^\dagger\theta^\dagger\ \ ,\ \ \delta_Q\theta=2i\epsilon^\dagger\left(x+\frac{i}{\omega}\dot{x}\right)\ \ ,\ \ \delta_Q\theta^\dagger=-2i\epsilon \left(x-\frac{i}{\omega}\dot{x}\right)\ . \label{eq:SUSYvariation2}$$ And indeed, it may readily be checked that the infinitesimal variation of the Lagrange function (\[eq:SUSYL\]) then reduces to a simple total time derivative, thus establishing the supersymmetry invariance of this system also at the classical level. Applying Noether’s general analysis to this new type of symmetry for which the parameters are Grassmann odd quantities, leads back to the conserved supercharges generating these transformations.[^25] Once again, a few general lessons may be drawn from the above considerations, which remain valid in the case also of supersymmetric field theories. The supercharges $Q$ and $Q^\dagger$ define transformations between the states $|n,0\rangle$ and $|n-1,1\rangle$, except for the vacuum state $|n=0,0\rangle$ which remains invariant under supersymmetry. Hence, all these pairs of states for $n\ge 1$ define 2-dimensional supermultiplets, namely irreducible representations of the supersymmetry algebra, combining a bosonic and a fermionic state degenerate in energy. In the holomorphic wave function representation, the bosonic component is given by $\psi(z,\theta)=\psi_B(z)=z^n/\sqrt{n!}$, and the fermionic one by $\psi(z,\theta)=\theta\psi_F(z)=\theta z^{n-1}/\sqrt{(n-1)!}$. From a certain point of view, the bosonic phase space $z$ of the system has been extended into a super-phase space of degrees of freedom $(z,\theta)$, on which supersymmetry transformations act through $Q=\sqrt{\hbar\omega}z\partial_\theta$ and $Q^\dagger=\sqrt{\hbar\omega}\theta\partial_z$, thereby inducing a map between the bosonic and fermionic components of a general super-phase space wave function $\psi(z,\theta)$, according to the rules in (\[eq:SUSYvariation1\]). 
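That the wave functions $\psi_n(z,\theta)$ indeed carry energy $\hbar\omega n$ may be confirmed componentwise. A brief sympy check (a sketch under the conventions above, not part of the original lectures):

```python
import sympy as sp

z = sp.symbols('z')
n = sp.symbols('n', positive=True, integer=True)
hw = sp.symbols('hbar_omega', positive=True)

# components of psi_n(z, theta): bosonic z^n/sqrt(n!), fermionic z^(n-1)/sqrt((n-1)!)
psiB = z**n / sp.sqrt(sp.factorial(n))
psiF = z**(n - 1) / sp.sqrt(sp.factorial(n - 1))

# H = hbar*omega (z d/dz + theta d/dtheta) acts componentwise as
H_B = hw * z * sp.diff(psiB, z)               # on the bosonic component
H_F = hw * (z * sp.diff(psiF, z) + psiF)      # on the fermionic component

assert sp.simplify(H_B - hw * n * psiB) == 0  # eigenvalue hbar*omega*n
assert sp.simplify(H_F - hw * n * psiF) == 0
```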
In particular, note that the lowest component $\psi_B(z)$ of such a super-wave function is mapped into its fermionic component, while its highest component $\psi_F(z)$ is mapped into the $z$-derivative of its lowest component. If one recalls that through a process of second-quantisation, quantum fields may be seen, in a certain sense, to correspond to quantum mechanical wave functions of 1-particle states, this remark suggests that by extension, supersymmetric quantum field theories should be constructed in terms of superfields depending not only on the usual spacetime coordinates $x^\mu$, but also on some collection of Grassmann odd variables, in order to extend usual Minkowski spacetime into some form of a superspace[@SS] Minkowski spacetime, all in a manner consistent with the Poincaré covariance properties required of such theories. Consequently, these Grassmann odd variables must be chosen to determine specific spinor representations of the Lorentz group, namely right- and left-handed Weyl spinors $\theta_\alpha$ and $\overline{\theta}^{\dot{\alpha}}$. Indeed, when developing this point of view, it appears that such superfields decompose into specific bosonic and fermionic components, defining a supermultiplet, with supersymmetry transformations mapping these components into one another. In particular, the transformation of the highest component always includes the spacetime derivative of some of the lower components. From the field theory point of view however, it would be more appropriate to develop the same considerations rather in terms of the system degrees of freedom $x(t)$ and $\theta(t)$, which are transformed into one another as shown in (\[eq:SUSYvariation2\]). Indeed, as argued in Sec. 
\[Sec2\], fields may be viewed as collections of oscillators fixed at all points in space, namely $\phi(t,\vec{x}\,)=x_{\vec{x}}(t)$ and $\psi(t,\vec{x}\,)=\theta_{\vec{x}}(t)$ for scalar and spinor fields, respectively, and coupled to one another through their spatial gradients in order to ensure spacetime Poincaré and Lorentz invariance. In the case of the present simple supersymmetric mechanical model, these bosonic and fermionic degrees of freedom $x(t)$ and $\theta(t)$ thus define a certain type of “field” supermultiplet (rather than a supermultiplet of quantum states, as in the discussion above), of which the variation of its highest component includes again the time derivative of some of its lower components. Upon quantisation, these transformation properties also apply to the quantum operators, and translate into the specific transformation rules for the quantum states described above. When extended to field theories, these features survive, this time in terms of bosonic and fermionic field degrees of freedom. Note that in (\[eq:SUSYL\]), time derivatives of bosonic degrees of freedom contribute in quadratic form to the Lagrange function, whereas for fermionic ones they contribute to linear order. This fact, when extended to a relativistic framework, remains valid as well. The Klein–Gordon Lagrangian density is quadratic in spacetime gradients of the scalar field, but the Dirac Lagrangian density is linear in such derivatives of the Dirac spinor. For reasons explained previously, these features are necessary for a consistent dynamics of Grassmann even and odd, or integer and half-integer spin degrees of freedom. For the sake of completing the discussion of the present simple supersymmetric quantum mechanical model of harmonic oscillators, let us indeed show how a superfield calculus may be developed already in this case. 
Again, the general lessons following from such an approach readily extend to the superfield constructions of supersymmetric field theories in which the constraints of spacetime Poincaré covariance are then also accounted for through the knowledge of the representation theory of the Lorentz group. The Hamiltonian of any given system is the generator of translations in time. The fact that the anticommutator of supercharges produces the Hamiltonian, as shown for example in (\[eq:SUSYalgebra\]), means that supersymmetry transformations correspond to taking some sort of square-root of translations in (space)time, in a new “dimension” of (space)time which must be parametrised by a Grassmann odd coordinate this time, since supercharges map bosonic and fermionic states into one another. In addition to the bosonic time coordinate $t$, let us thus extend time into a “supertime” by also introducing a complex valued Grassmann odd coordinate, which shall be denoted $\eta$ (the customary notation $\theta$ for Grassmann odd superspace coordinates being already used for the fermionic degrees of freedom of the system, $\theta(t)$), and its complex conjugate $\eta^\dagger$. Thus we now have the superspace (or rather for this mechanical model simply “supertime”) spanned by the coordinates $(t,\eta,\eta^\dagger)$. Supersymmetry transformations generated by $Q$ and $Q^\dagger$ should then correspond to translations in the Grassmann odd directions in superspace, in the same way that transformations in time generated by the Hamiltonian correspond to translations in the Grassmann even direction of superspace. By analogy with the operator $i\partial_t$ generating the latter translations, and representing the action of the Hamiltonian on degrees of freedom, the naive choice for the supercharges would be $Q=-i\partial_\eta$ and $Q^\dagger=i\partial_{\eta^\dagger}$. 
However, a quick check then finds that all anticommutators $\left\{Q,Q\right\}$, $\left\{Q^\dagger,Q^\dagger\right\}$ and $\left\{Q,Q^\dagger\right\}$ vanish, thus not reproducing the supersymmetry algebra in (\[eq:SUSYalgebra\]). Hence, in order that the anticommutator of $Q$ and $Q^\dagger$ also reproduces the Hamiltonian, it is necessary that while a translation is performed in $\eta$ and $\eta^\dagger$, a translation in $t$ be also included in an amount proportional to the Grassmann odd coordinates in superspace. It turns out that an appropriate choice is given by[^26] $$Q=-i\partial_\eta+\frac{2}{\omega}\eta^\dagger\,\partial_t\ \ \ ,\ \ \ Q^\dagger=i\partial_{\eta^\dagger}-\frac{2}{\omega}\eta\,\partial_t\ .$$ A direct calculation finds that these operators obey the supersymmetry algebra $$\left\{Q,Q\right\}=0=\left\{Q^\dagger,Q^\dagger\right\}\ \ ,\ \ \left\{Q,Q^\dagger\right\}=\left(-\frac{2}{\sqrt{\hbar\omega}}\right)^2\, \left(i\hbar\partial_t\right)\ ,$$ in perfect correspondence with the abstract algebra in (\[eq:SUSYalgebra\]) (one should recall that a rescaling by a factor $(-\sqrt{\hbar\omega}/2)$ of the supersymmetry parameters $\epsilon$ and $\epsilon^\dagger$ or the supercharges $Q$ and $Q^\dagger$ has been applied in the intervening discussion). In order to readily construct manifestly supersymmetric invariant Lagrange functions, it proves necessary to also use another pair of superspace differential operators that anticommute with the supercharges, and define so-called superspace covariant derivatives. These supercovariant derivatives thus enable one to take derivatives of superfields in a manner consistent with supersymmetry transformations. 
Again, a convenient choice turns out to be $$D=\partial_\eta-\frac{2i}{\omega}\eta^\dagger\,\partial_t\ \ \ ,\ \ \ D^\dagger=-\partial_{\eta^\dagger}+\frac{2i}{\omega}\eta\,\partial_t\ ,$$ leading to the algebra $$\left\{D,D\right\}=0=\left\{D^\dagger,D^\dagger\right\}\ \ ,\ \ \left\{D,D^\dagger\right\}=\left(-\frac{2}{\sqrt{\hbar\omega}}\right)^2\, \left(i\hbar\partial_t\right)\ ,$$ as well as the required properties $$\left\{Q,D\right\}=0\ ,\ \left\{Q,D^\dagger\right\}=0\ ,\ \left\{Q^\dagger,D\right\}=0\ ,\ \left\{Q^\dagger,D^\dagger\right\}=0\ .$$ Consider now an arbitrary Grassmann even superfield on superspace, namely a function $X(t,\eta,\eta^\dagger)$. Without loss of generality (by distinguishing its real and imaginary parts), it is always possible to assume that such a superfield obeys a reality condition, $$X^\dagger(t,\eta,\eta^\dagger)=X(t,\eta,\eta^\dagger)\ .$$ On account of the Grassmann odd character of the coordinate $\eta$, namely the fact that $\eta^2=0={\eta^\dagger}^2$, the general form of such a real superfield is given by $$X(t,\eta,\eta^\dagger)=x(t)+i\eta\theta(t)+i\eta^\dagger\theta^\dagger(t)+ \eta^\dagger\eta\,f(t)\ ,$$ where $x(t)$ and $f(t)$ are real bosonic degrees of freedom, whereas $\theta(t)$ and $\theta^\dagger(t)$ are complex valued fermionic ones, complex conjugates of one another. Indeed, it will turn out that $x(t)$ and $\theta(t)$ correspond to the degrees of freedom considered above, while $f(t)$ will be seen to be simply an auxiliary degree of freedom without dynamics, whose equation of motion is purely algebraic and such that upon its reduction the system described in (\[eq:SUSYL\]) is recovered. This is a generic feature of superfields in supersymmetric field theories: they include auxiliary fields which are reduced through their algebraic equations of motion. However, in the superspace formulation, they are required for a supersymmetric covariant superspace calculus. 
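Since the Grassmann algebra generated by $\eta$ and $\eta^\dagger$ is four dimensional, with basis $(1,\eta,\eta^\dagger,\eta^\dagger\eta)$, all of the operator relations above can be verified by elementary linear algebra. The following sketch (not from the original text) encodes left-derivatives and left-multiplications as $4\times 4$ matrices and treats the commuting operator $\partial_t$ as a formal symbol $s$:

```python
import sympy as sp

w, s = sp.symbols('omega s')      # s stands in for the commuting operator d/dt
i = sp.I

# Operators on the Grassmann monomial basis (1, eta, eta^+, eta^+ eta);
# columns are inputs, signs fixed by eta*eta^+ = -eta^+*eta and left-derivatives.
Dh  = sp.Matrix([[0, 1, 0, 0], [0, 0, 0, 0], [0, 0, 0, -1], [0, 0, 0, 0]])  # d/d eta
Dhd = sp.Matrix([[0, 0, 1, 0], [0, 0, 0, 1], [0, 0, 0, 0], [0, 0, 0, 0]])   # d/d eta^+
Mh  = sp.Matrix([[0, 0, 0, 0], [1, 0, 0, 0], [0, 0, 0, 0], [0, 0, -1, 0]])  # eta *
Mhd = sp.Matrix([[0, 0, 0, 0], [0, 0, 0, 0], [1, 0, 0, 0], [0, 1, 0, 0]])   # eta^+ *

Q  = -i * Dh + (2 / w) * s * Mhd
Qd =  i * Dhd - (2 / w) * s * Mh
D  =  Dh - (2 * i / w) * s * Mhd
Dd = -Dhd + (2 * i / w) * s * Mh

anti = lambda X, Y: (X * Y + Y * X).expand()
zero = sp.zeros(4, 4)
assert anti(Q, Q) == zero and anti(Qd, Qd) == zero
assert anti(Q, Qd) == ((4 * i * s / w) * sp.eye(4)).expand()   # = (4i/omega) d/dt
assert anti(D, Dd) == ((4 * i * s / w) * sp.eye(4)).expand()
for X in (D, Dd):                  # covariant derivatives anticommute with Q, Qd
    assert anti(Q, X) == zero and anti(Qd, X) == zero
```

The same matrices also confirm the building-block relations, e.g. $\{\partial_\eta,\eta\}=1$ and $\{\partial_\eta,\eta^\dagger\}=0$, from which the algebra follows.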
These choices having been specified, it is now straightforward to establish how the different components $(x,\theta,\theta^\dagger,f)$ (namely, the components of the terms in $1$, $i\eta$, $i\eta^\dagger$ and $\eta^\dagger\eta$ in the $\eta$-expansion of superfields) of real superfields transform under supersymmetry transformations. By considering the explicit evaluation of $$\delta_Q\,X=i\left[\epsilon Q-\epsilon^\dagger Q^\dagger\right]\,X\ ,$$ $\epsilon$ and $\epsilon^\dagger$ being the arbitrary complex valued Grassmann odd constant supersymmetry parameters, complex conjugates of one another, it readily follows that the components vary according to $$\begin{array}{r c l} \delta_Q x=i\epsilon\theta+i\epsilon^\dagger\theta^\dagger\ \ &,&\ \ \delta_Q \theta=i\epsilon^\dagger\left[f+\frac{2i}{\omega}\dot{x}\right]\ ,\\ & & \\ \delta_Q \theta^\dagger=-i\epsilon\left[f-\frac{2i}{\omega}\dot{x}\right] \ \ &,&\ \ \delta_Q f=-\frac{2}{\omega} \left[\epsilon\dot{\theta}-\epsilon^\dagger\dot{\theta}^\dagger\right]\ . \end{array} \label{eq:SUSYvariation3}$$ It is of interest to compare these transformation rules to those given in (\[eq:SUSYvariation2\]). Here appears yet another generic feature of the superfield technique. One notices that the highest component $f(t)$ in $\eta^\dagger\eta$ of the superfield $X(t,\eta,\eta^\dagger)$ transforms under supersymmetry as a total derivative in time. In the context of supersymmetric field theories, the highest component of superfields transforms as a total spacetime divergence. Thus, if one chooses for the Lagrange function or Lagrangian density the highest component of any relevant superfield, under any supersymmetry transformation the action of the system is invariant up to a total derivative, thus indeed defining an invariance of its equations of motion. 
In superspace, supersymmetric invariant actions are given by the highest component of superfields, in our case the $\eta^\dagger\eta$ component, $$S[X]=\int dt\,d\eta\,d\eta^\dagger\,F(X)\ ,$$ where $F(X)$ is any real valued superfield constructed out of the basic superfield $X$ and its derivatives obtained through the action of the supercovariant derivatives $D$ and $D^\dagger$. In this expression, the definition of Grassmann integration is such that $$\int\,d\eta\,d\eta^\dagger\,1=0\ \ ,\ \ \int\,d\eta\,d\eta^\dagger\,\eta=0\ \ ,\ \ \int\,d\eta\,d\eta^\dagger\,\eta^\dagger=0\ \ ,\ \ \int\,d\eta\,d\eta^\dagger\,\eta^\dagger\eta=1\ \ ,\ \$$ while the result for any linear combination of these $\eta$-monomials is given by the appropriate linear combination of the resulting integrations (the usual integral over Grassmann even variables being also linear for polynomials). It turns out that the choice corresponding to the supersymmetric harmonic oscillator in (\[eq:SUSYL\]) is given by (one has, by construction of the supercovariant derivatives, $(DX)^\dagger=D^\dagger X$ for the real superfield $X$) $$S[X]=\int dt\,d\eta\,d\eta^\dagger\,\left[ -\frac{1}{8}m\omega^2\left(D^\dagger X\right)\left(DX\right)\,-\, \frac{1}{4}m\omega^2X^2\right]\ .$$ Working out the superspace components of this expression, it reduces to $$\begin{array}{r l} S[x,\theta,\theta^\dagger,f]=\int dt\,\Big\{& \frac{1}{8}m\omega^2\left[f^2+\frac{4}{\omega^2}\dot{x}^2+ \frac{2i}{\omega}\left(\theta^\dagger\dot{\theta}+ \theta\dot{\theta}^\dagger\right)\right]\,-\,\\ & \\ &-\frac{1}{2}m\omega^2\left(fx+\theta^\dagger\theta\right)\Big\}\ . 
\end{array}$$ Since no time derivatives of the highest superfield component $f(t)$ contribute to this action, this degree of freedom is indeed auxiliary with a purely algebraic equation of motion given by $$f(t)=2x(t)\ .$$ Upon reduction of this auxiliary degree of freedom, one recovers precisely the Lagrange function in (\[eq:SUSYL\]), up to a total derivative in time $d/dt(-im\omega\theta^\dagger\theta/4)$, while for the remaining dynamical degrees of freedom $x(t)$, $\theta(t)$ and $\theta^\dagger(t)$, the supersymmetry transformations (\[eq:SUSYvariation3\]) coincide then exactly with those in (\[eq:SUSYvariation2\]). Having achieved the construction of the harmonic oscillator with a single supersymmetry generator ${\mathcal N}=1$ from these different but complementary points of view, one may wonder whether generalisations to types of potentials other than the quadratic one in $X^2$, to more general dynamics, and for a larger number $\mathcal N$ of supersymmetries, are possible. The interested reader is invited to explore such issues further, which have been addressed in the literature already to a certain extent.[@SUSYQM] We have thus shown how a superspace extension of the time coordinate into superspace coordinates $(t,\eta,\eta^\dagger)$ over which a superspace calculus is defined for superfields, readily allows for a systematic approach to the construction of supersymmetric quantum mechanical models. This superspace calculus displays already the features generic to the superspace techniques of superfields for the construction of supersymmetric invariant field theories in Minkowski spacetime. 
In the latter case, it is spacetime itself which is extended into a “superspacetime” $(x^\mu,\theta_\alpha,\overline{\theta}_{\dot{\alpha}})$ of bosonic and Weyl spinor coordinates, the latter appearing with multiplicities depending on the number $\mathcal N$ of supersymmetries acting on the theory.[@Deren; @SS] An Invitation to Superspace Exploration {#Sec5} ======================================= As recalled in Sec. \[Sec2\], by enforcing Poincaré invariance, the ordinary bosonic harmonic oscillator extends naturally into the quantum field theory of a scalar field describing relativistic quantum point-particles of zero spin. One could attempt pursuing the same road starting from the fermionic harmonic oscillator described above and reach again the Dirac or the Majorana equation for spin 1/2 charged or neutral particles, but the task would be much more involved, since the answer is known to require a 4-component complex valued field, the parity invariant spinor representation of the Lorentz algebra. Rather, it is by considering the detailed representation theory of the Lorentz group that the correct answer is readily identified in simple algebraic terms. Likewise, one could attempt to extend the simple $\mathcal N=1$ supersymmetric harmonic oscillator model into a relativistic invariant quantum field theory, which should thus include both a scalar field and a Dirac–Majorana spinor. It is indeed possible to construct by hand such a field theory, known as the Wess–Zumino model,[@WZ] the simplest example of a $\mathcal N=1$ supersymmetric quantum field theory in 4-dimensional Minkowski spacetime. However, the approach is very much streamlined by expressing everything in terms of superfields defined over some superspace which extends Minkowski spacetime by including some further Grassmann odd coordinates corresponding to specific Weyl spinors. 
This is the superspace construction of $\mathcal N=1$ supersymmetric quantum field theories.[@SS] Truly a quantum geometer’s approach to a possible quantum geometry of spacetime. However once again, in order to classify and identify the realm of possible supersymmetric quantum field theories, for whatever number $\mathcal N$ of supersymmetries, and whatever dimension of Minkowski spacetime, and even whatever type of interactions consistent with the requirements of perturbative renormalisability, a discussion based on the possible algebraic structures merging and intertwining together the Poincaré algebra with Grassmann odd generators mapping bosons to fermions and vice-versa, is the most efficient approach. It is no small feat that such a complete and finite dimensional classification has been achieved.[@Haag] It is a nontrivial fact that such solutions exist, and also that there is only a small finite number of possibilities consistent with the rules of quantum field theory, in particular unitarity and causality. Clearly, such a situation gives credence to the suggestion that such a combination of Poincaré covariance and supersymmetry invariance brings us onto the right track towards the quest for a final unification. The usefulness, relevance, and even meaning, of these different remarks should find a nice simple illustration in the previous quantum mechanical model. These different avenues towards the construction and classification of supersymmetric quantum field theories have been developed and discussed during the actual lectures delivered at the Workshop. However, since this material is widely available in the literature,[@WB] and in much detail, while the lectures themselves were to a large extent based on those of Ref. , we shall stop short here of pursuing any further the discussion of such field theories and their particle content, except for just one last remark. 
The supersymmetric field theories simplest to construct in 4-dimensional Minkowski spacetime involve a single supersymmetry generator, $\mathcal N=1$, represented by a right-handed Weyl spinor supercharge $Q_\alpha$ and its complex conjugate left-handed Weyl spinor supercharge $\overline{Q}^{\dot{\alpha}}$, using the dotted and undotted index notation (thus the single supercharge combines into a single Majorana spinor). In this case, the supersymmetry algebra is defined by the anticommutation relations, $$\left\{Q_\alpha,Q_\beta\right\}=0= \left\{\overline{Q}_{\dot{\alpha}},\overline{Q}_{\dot{\beta}}\right\} \ \ \ ,\ \ \ \left\{Q_\alpha,\overline{Q}_{\dot{\beta}}\right\}= 2P_\mu\left(\sigma^\mu\right)_{\alpha\dot{\beta}}\ .$$ Clearly, these relations are the natural extension to a Poincaré covariant setting of the supersymmetry algebra in (\[eq:SUSYalgebra\]) relevant to the quantum mechanical oscillator model. Indeed, the Hamiltonian is the time component of the energy-momentum 4-vector $P^\mu$, while the components of the Weyl supercharges $Q$ and $\overline{Q}$ all square to a vanishing operator, implying again important cohomology properties in supersymmetric quantum field theories. The Noether charge $P^\mu$ being also the generator of spacetime translations implies that, in a certain sense, supersymmetry transformations correspond to taking the square-root of spacetime translations, requiring spinor degrees of freedom for consistency with Lorentz covariance. As mentioned in Ref. , often one prefers, if only for aesthetic reasons having to do with spacetime locality and causality, to have a local or gauged symmetry as compared to a global symmetry acting identically and instantaneously throughout all of spacetime. Supersymmetry transformations as described in these notes correspond to global symmetries. 
Indeed, their infinitesimal action generated by the supercharges and mapping bosonic and fermionic degrees of freedom into one another, involves arbitrary Grassmann odd parameters which are spacetime independent constants. This situation suggests that one should gauge supersymmetry transformations, namely consider the possibility of constructing quantum field theories invariant under the same types of transformations between their degrees of freedom for which though, this time, the parameters are local functions of spacetime. Since the anticommutator of supercharges induces spacetime translations, it is clear that by gauging supersymmetry one has to introduce new field degrees of freedom[@GovCOPRO2] — the associated gauge fields, possessing both bosonic and fermionic degrees of freedom — which are in direct relation to spacetime reparametrisations and local Lorentz transformations, the latter being precisely the local gauge symmetries of general relativity for instance. In other words, in exactly the same way that gauged internal symmetries lead to Yang–Mills interactions, gauged supersymmetry implies the gravitational interaction through a dynamical spacetime metric field of helicity $\pm 2$ and its supersymmetric partner field, in fact a Majorana Rarita–Schwinger field of helicity $\pm 3/2$. For this reason, gauged supersymmetric field theories are known as supergravity theories.[@WB; @SUGRA] Such theories exist for spacetime dimensions ranging from $D=2$ to $D=11$. 
Again, it is no accident that M-theory, the modern nonperturbative extension of superstring theory and a possible candidate for a final unification yet to be constructed, exists only in a spacetime of dimension $D=11$.[@Strings] It is hoped that through the above analysis of a simple supersymmetric quantum mechanical model, the reader will have understood enough of the general concepts and generic features entering the formulation and the construction of supersymmetric field theories, as well as of their potential to address from novel and powerful points of view large fields of pure mathematics itself, that he/she may feel sufficiently secure in following the lead of such little white and precious pebbles along the path, to embark on a journey of one’s own onto the roads of the quantum geometer’s superspaces, deep into the uncharted territories of supersymmetries and their yet to be discovered treasure troves in the eternally fascinating worlds of physics and mathematics, thereby fulfilling ever a little more, this time with a definite African beat in the symphony, humanity’s unswerving and yet never ending quest for a complete understanding of our destiny in the physical Universe, the eternal yearning of man’s soul.[@GovCOPRO2] Acknowledgements {#acknowledgements .unnumbered} ================ The author acknowledges the Workshop participants for their many constructive discussions and contributions on the matter of these lectures. This work is partially supported by the Federal Office for Scientific, Technical and Cultural Affairs (Belgium) through the Interuniversity Attraction Pole P5/27. [99]{} J. Govaerts, [*The quantum geometer’s universe: particles, interactions and topology*]{}, in the Proceedings of the Second International Workshop on Contemporary Problems in Mathematical Physics, J. Govaerts, M.N. Hounkonnou and A.Z. Msezane, eds. (World Scientific, Singapore, 2002), pp. 79–212, e-print [arXiv:hep-th/0207276]{} (July 2002). For reviews, see,\ M.B. 
Green, J.H. Schwarz and E. Witten, [*Superstring Theory*]{}, 2 Volumes (Cambridge University Press, Cambridge, 1987);\ J. Polchinski, [*String Theory*]{}, 2 Volumes (Cambridge University Press, Cambridge, 1998);\ C.V. Johnson, [*D-Branes*]{} (Cambridge University Press, Cambridge, 2003);\ J. Polchinski, S. Chaudhuri and C.V. Johnson, [*Notes on D-branes*]{}, e-print [arXiv:hep-th/9602052]{} (February 1996);\ C.V. Johnson, [*D-brane primer*]{}, in TASI 1999, [*Strings, Branes and Gravity*]{} (World Scientific, Singapore, 2001), e-print [arXiv:hep-th/0007170]{} (July 2000);\ E. Kiritsis, [*Introduction to Superstring Theory*]{} (Leuven University Press, 1998), e-print [arXiv:hep-th/9709062]{} (September 1997);\ J. Govaerts, see references quoted in Ref. . E. Witten, [*Nucl. Phys. B*]{}[**443**]{}, 85 (1995), e-print [arXiv:hep-th/9503124]{} (March 1995);\ N. Seiberg and E. Witten, [*Nucl. Phys. B*]{}[**426**]{}, 19 (1994); Erratum, [*ibid. B*]{}[**430**]{}, 485 (1994); [*ibid. B*]{}[**431**]{}, 484 (1994); [*ibid. B*]{}[**435**]{}, 129 (1995). For a comprehensive review, see,\ [*Quantum Fields and Strings: A Course for Mathematicians*]{}, 2 Volumes, P. Deligne, P. Etingof, D.S. Freed, L.C. Jeffrey, D. Kazhdan, J.W. Morgan, D.R. Morrison and E. Witten, eds. (American Mathematical Society, Institute for Advanced Study, Princeton, New Jersey, 1999). Yu.A. Gol’fand and E.P. Likhtman, [*JETP Letters*]{} [**13**]{}, 323 (1971);\ P. Ramond, [*Phys. Rev. D*]{}[**3**]{}, 2415 (1971);\ A. Neveu and J.H. Schwarz, [*Nucl. Phys. B*]{}[**31**]{}, 86 (1971);\ D.V. Volkov and V.P. Akulov, [*Phys. Lett. B*]{}[**46**]{}, 109 (1973). J. Wess and B. Zumino, [*Nucl. Phys. B*]{}[**70**]{}, 39 (1974); [*ibid. B*]{}[**78**]{}, 1 (1974); [*Phys. Lett. B*]{}[**49**]{}, 52 (1974). For references on Topological Quantum Field Theory (TQFT), see,\ J. Govaerts, [*Topological quantum field theory and pure Yang–Mills dynamics*]{}, contribution to this volume. J. Wess and J.
Bagger, [*Supersymmetry and Supergravity*]{}, 2$^{\rm nd}$ edition (Princeton University Press, Princeton, 1983, 1992);\ P.C. West, [*Introduction to Supersymmetry and Supergravity*]{}, 2$^{\rm nd}$ edition (World Scientific, Singapore, 1986, 1990);\ S.J. Gates, M.T. Grisaru, M. Roček and W. Siegel, [*Superspace, or One Thousand and One Lessons in Supersymmetry*]{} (Benjamin/Cummings, Reading, 1983), e-print [arXiv:hep-th/0108200]{} (August 2001);\ P. van Nieuwenhuizen, [*Physics Reports*]{} [**68**]{}, 189 (1981);\ H.P. Nilles, [*Physics Reports*]{} [**110**]{}, 1 (1984);\ M.F. Sohnius, [*Physics Reports*]{} [**128**]{}, 39 (1985);\ A. Bilal, [*Introduction to supersymmetry*]{}, e-print [arXiv:hep-th/0101055]{} (January 2001);\ M.J. Strassler, [*An unorthodox introduction to supersymmetry gauge theory*]{}, Lectures given at the Theoretical Advanced Study Institute in Elementary Particle Physics (TASI 2001), [*Strings, Branes and Extra Dimensions*]{}, Boulder (Colorado, USA), June 3–29, 2001, e-print [arXiv:hep-th/0309149]{} (September 2003). For references and recent work of interest on supersymmetric quantum mechanics, see for example,\ E. Witten, [*Nucl. Phys. B*]{}[**188**]{}, 513 (1981);\ F. Cooper and B. Freedman, [*Annals Phys.*]{} [**146**]{}, 262 (1983);\ M. de Crombrugghe and V. Rittenberg, [*Annals Phys.*]{} [**151**]{}, 99 (1983);\ F. Cooper, A. Khare and U. Sukhatme, [*Physics Reports*]{} [**251**]{}, 267 (1995), e-print [arXiv:hep-th/9405029]{} (May 1994);\ F. Cooper, A. Khare and U. Sukhatme, [*Supersymmetry in Quantum Mechanics*]{} (World Scientific, Singapore, 2001);\ H. Aoyama, M. Sato, T. Tanaka and M. Yamamoto, [*Phys. Lett. B*]{}[**498**]{}, 117 (2001);\ M. Faux, D. Kagan and D. Spector, [*Central charges and extra dimensions in supersymmetric quantum mechanics*]{}, e-print [arXiv:hep-th/0406152]{} (June 2004);\ M. Faux and S.J. 
Gates, Jr., [*Adinkras: a graphical technology for supersymmetry representation theory*]{}, e-print [arXiv:hep-th/0408004]{} (August 2004). J.-P. Derendinger, [*Globally supersymmetric theories in four and two dimensions*]{}, Proceedings of the 3$^{\rm rd}$ Hellenic School on Elementary Particle Physics, Corfu (Greece), September 13–23, 1989, [*Elementary Particle Physics 1989*]{}, E.N. Argyres, N. Tracas, G. Zoupanos, eds. (World Scientific, Singapore, 1990), pp. 111–243, preprint ETH-TH/90-21 (July 1990). (If necessary, on request a copy may be sent by the present author.) For a discussion, see for example,\ J. Govaerts, [*Hamiltonian Quantisation and Constrained Dynamics*]{} (Leuven University Press, Leuven, 1991). For a discussion, see for example,\ C. Itzykson and J.-B. Zuber, [*Quantum Field Theory*]{} (McGraw-Hill Book Company, New York, 1980);\ M.E. Peskin and D.V. Schroeder, [*An Introduction to Quantum Field Theory*]{} (Perseus Books Publishing, Cambridge, Massachusetts, 1995). A. Cabo, J.-L. Lucio and V.M. Villanueva, [*Mod. Phys. Lett. A*]{}[**14**]{}, 1855 (1999). For a discussion, see for example Appendix A in,\ J. Govaerts, [*String and superstring theories: an introduction*]{}, Proceedings of the $2^{\rm nd}$ Mexican School of Particles and Fields, Cuernavaca–Morelos (Mexico), 4–12 December 1986, J.-L. Lucio and A. Zepeda, eds. (World Scientific, Singapore, 1987), pp. 247–442. P. van Nieuwenhuizen, [*An introduction to simple supergravity and the Kaluza-Klein program*]{}, in [*Relativity, Groups and Topology II*]{}, Les Houches 1983, 3 Volumes, B.S. DeWitt and R. Stora, eds. (North-Holland, Elsevier Science Publishers, Amsterdam, 1984), pp. 823–932. L. Faddeev and R. Jackiw, [*Phys. Rev. Lett.*]{} [**60**]{}, 1692 (1988);\ J. Govaerts, [*Int. J. Mod. Phys. A*]{} [**5**]{}, 3625 (1990). For references on the cosmological constant problem, see,\ J. 
Govaerts, [*The cosmological constant of one-dimensional matter coupled quantum gravity is quantised*]{}, contribution to this volume. E. Witten, [*Nucl. Phys. B*]{}[**202**]{}, 253 (1982); [*Adv. Theor. Math. Phys.*]{} [**5**]{}, 841 (2002), e-print [arXiv:hep-th/0006010]{} (June 2000). E. Witten, [*J. Diff. Geom.*]{} [**17**]{}, 661 (1982). A. Salam and J. Strathdee, [*Phys. Lett. B*]{}[**51**]{}, 353 (1974); [*Nucl. Phys. B*]{}[**76**]{}, 477 (1974); [*ibid. B*]{}[**80**]{}, 499 (1974); [*ibid. B*]{}[**86**]{}, 142 (1975); [*Phys. Rev. D*]{}[**11**]{}, 1521 (1975);\ S. Ferrara, J. Wess and B. Zumino, [*Phys. Lett. B*]{}[**51**]{}, 239 (1974). R. Haag, J.T. Lopuszanski and M. Sohnius, [*Nucl. Phys. B*]{}[**88**]{}, 257 (1975). D.Z. Freedman, S. Ferrara and P. van Nieuwenhuizen, [*Phys. Rev. D*]{}[**13**]{}, 3214 (1976);\ S. Deser and B. Zumino, [*Phys. Lett. B*]{}[**62**]{}, 335 (1976). [^1]: The lectures delivered at COPROMAPH2 did not deal with field theories associated to fermionic degrees of freedom described using Grassmann odd variables, and considered only bosonic theories.[@GovCOPRO2] Quantised fermionic field theories are briefly dealt with in Sec. \[Sec3\]. [^2]: Throughout most of these notes, units such that $c=1=\hbar$ are used. [^3]: Our choice of Minkowski spacetime metric signature is such that $\eta_{\mu\nu}={\rm diag}\,(+---)$ with $\mu,\nu=0,1,\ldots,D-1$ and $D=4$. [^4]: Only the nonvanishing commutators are given. The choice of normalisation is made such that the momentum integration measure in the mode decomposition of the fields later on is Lorentz invariant.[@GovCOPRO2] This choice also implies that particle states are normalised in a Lorentz covariant manner. [^5]: For example, taking two real scalar fields of identical mass defines a system with a global SO(2)=U(1) symmetry.
The associated conserved quantum number thus distinguishes quanta according to their U(1) quantum number, taking a value either $(+1)$ for certain quanta — particles — or $(-1)$ for other quanta — antiparticles —, all sharing otherwise the same kinematical and spacetime properties such as mass and spin. The existence of matter and antimatter is thus a natural outcome of relativistic quantum field theory, extended to complex valued fields. Indeed, the two mass-degenerate real scalar fields combine into a single complex scalar field, invariant under any global, namely spacetime independent transformation of its phase. [^6]: This is no longer the case if the symmetry is realised in the Goldstone mode, namely when it is spontaneously broken by the vacuum which is then not invariant under the action of the symmetry.[@GovCOPRO2] [^7]: The notion of a dynamics invariant under a set of symmetry transformations requires in fact that the action of the system, rather than its Lagrange function, be invariant up to a surface term, since the latter does not affect the equations of motion. If indeed the Lagrange function is invariant only up to a surface term, central extensions of the symmetry Lie brackets are also possible, already at the classical level.[@Jose-Luis] Nonetheless, Noether’s theorem then remains valid, though with a contribution of the induced surface terms to the conserved charges.[@GovBook] [^8]: Note that for a massive particle in its rest-frame, the Pauli-Lubanski 4-vector does indeed reduce to its total angular-momentum, [*i.e.*]{}, its spin. [^9]: By definition, the Wigner little group of a particle is the subgroup of the full Lorentz group leaving invariant the particle’s energy-momentum 4-vector. 
For a massive particle, by going to its rest-frame, it is immediate to establish that its little group is isomorphic to the space rotation group SO(3) (at least for its component connected to the identity transformation, namely homotopic to the identity) or SO(D-1) in a $D$-dimensional Minkowski spacetime. For a massless particle whose energy-momentum 4-vector is light-like, a detailed analysis, based on the Lorentz algebra, shows that the little group is isomorphic to the Euclidean group E(D-2) for a Minkowski spacetime of dimension $D$, which combines the rotations SO(D-2) in the space directions transverse to the particle momentum with specific combinations of Lorentz boosts in the momentum direction with space rotations around that momentum direction. At the quantum level, the notion of spin is attached to massive particles, and determines a representation of the SO(D-1) little group, while the notion of helicity is attached to massless particles and is characterised by a representation of the rotation subgroup SO(D-2) of the E(D-2) little group. In four dimensions, $D=4$, both spin and helicity are thus specified by a single integer or half-integer $s$.[@GovMexico] [^10]: A scalar field, being invariant under Lorentz transformations, is associated to the trivial representation of the Lorentz group. [^11]: Note well that the fields are taken to transform under the trivial representation of the spacetime translation subgroup of the full Poincaré group. Hence it is only for Lorentz transformations that we need to understand the representation theory to be discussed in the present section. [^12]: A finite dimensional representation of a noncompact Lie algebra as is that of the Lorentz group is necessarily nonunitary. [^13]: The relation to SL(2,$\mathbb C$) is discussed hereafter. [^14]: The position of the index $i$ is important in these relations, for reasons to become clear later on.
[^15]: This is readily established by considering a real parametrisation of $2\times 2$ complex matrices, and imposing the constraints of unitarity and unit determinant defining the SU(2) matrix group. [^16]: Note that the counting of independent parameters of the different SO(1,3), SU(2)$_+\times$SU(2)$_-$ and SL(2,$\mathbb C$) groups also matches this correspondence. These three Lie groups are all 6-dimensional. [^17]: Consequently, among quarks and leptons, only neutrinos could possibly be Majorana particles. The experimental verdict is still out, and is an important issue in the quest for the fundamental unification of all interactions and particles. [^18]: As a matter of fact, all other Fierz identities follow from the present one, by appropriate choices of the spinors involved. [^19]: Often, this Lagrangian density is given as ${\mathcal L}=i\overline{\psi}\gamma^\mu\partial_\mu\psi- m\overline{\psi}\psi$, which differs from the one given here by a total divergence with no consequence for a choice of boundary conditions at infinity such that fields vanish asymptotically. Note however that the form chosen in (\[eq:DiracL\]) is manifestly real under complex conjugation, as befits any Lagrangian density. [^20]: Such solutions must exist since the Dirac equation is invariant under spacetime translations and is linear in the field. [^21]: This also means that the Dirac Lagrangian is already in Hamiltonian form.[@GovBook; @FJ] [^22]: Again, this conclusion is in perfect analogy with what happens for a real and a complex scalar field.[@GovCOPRO2] [^23]: Note that up to a total time derivative term this function is indeed real under complex conjugation, because of the Grassmann odd character of the fermionic degree of freedom $\theta(t)$. Some total derivative terms in time have been ignored to reach this expression, and to bring it into such a form that no time derivatives of order strictly larger than unity appear in the action. 
[^24]: Compared to the previous parametrisation, a factor $(-\sqrt{\hbar\omega}/2)$ has been absorbed into the normalisation of the supersymmetry constant parameters $\epsilon$ and $\epsilon^\dagger$ or supercharges $Q$ and $Q^\dagger$. Note also that these expressions are consistent with the properties under complex conjugation of the different degrees of freedom as well as their Grassmann parity. [^25]: In such an analysis, one should beware of the surface terms induced by the supersymmetry transformation applied to the action, which also contribute to the definition of the Noether charges.[@GovBook] [^26]: Some properties have to be met in the whole construction, such as preserving under supersymmetry transformations the real character under complex conjugation of the superfield considered hereafter. This leaves open a series of possible choices, essentially related to possible phase factors in the combinations defining the superspace differential operators introduced hereafter.
--- abstract: 'The general lines of the derivation and the main properties of the master equations for the master amplitudes associated to a given Feynman graph are recalled. Some results for the 2-loop self-mass graph with 4 propagators are then presented.' --- [**Master Equations for Master Amplitudes $^{\star}$** ]{} [ [M. Caffo$^{ab}$, H. Czy[ż]{} $^{c}$, S. Laporta$^{b}$]{} and [E. Remiddi$^{ba}$\ ]{} ]{} - [*INFN, Sezione di Bologna, I-40126 Bologna, Italy* ]{} - [*Dipartimento di Fisica, Università di Bologna, I-40126 Bologna, Italy* ]{} - [*Institute of Physics, University of Silesia, PL-40007 Katowice, Poland* ]{} e-mail: [caffo@bo.infn.it\ czyz@usctoux1.cto.us.edu.pl\ laporta@bo.infn.it\ remiddi@bo.infn.it\ ]{} [——————————-\ PACS 11.10.-z Field theory\ PACS 11.10.Kk Field theories in dimensions other than four\ PACS 11.15.Bt General properties of perturbation theory\ ]{} $^{\star}$[Presented at the Zeuthen Workshop on Elementary Particle Physics - Loops and Legs in Gauge Theories - Rheinsberg, 19-24 April 1998. ]{} Introduction. =============== The integration by parts identities [[@CT]]{} are by now a standard tool for obtaining relations between the many integrals associated to any Feynman graph or, equivalently, for working out recurrence relations for expressing the generic integral in terms of the “master integrals” or “master amplitudes" of the considered graph.
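As a textbook warm-up (not taken from this paper) showing how a single integration by parts identity already produces a recurrence relation onto a master integral, consider the one-loop tadpole:

```latex
% One-loop tadpole T(\alpha) = \int d^nk\, (k^2+m^2)^{-\alpha}.
% The identity
%   0 = \int d^nk\, \frac{\partial}{\partial k_\mu}
%         \left[ k_\mu\, (k^2+m^2)^{-\alpha} \right]
% gives, after carrying out the derivative and writing
% k^2 = (k^2+m^2) - m^2,
$$(n-2\alpha)\, T(\alpha) + 2\alpha\, m^2\, T(\alpha+1) = 0 \ ,
  \qquad
  T(\alpha+1) = -\,\frac{n-2\alpha}{2\alpha\, m^2}\, T(\alpha) \ ,$$
% so that every T(\alpha) with \alpha \geq 2 reduces to the
% single master integral T(1).
```

The identities used below are of exactly this type, only with more loop momenta and more propagators.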
A good example of the use of the integration by parts identities is given in [[@Tarasov]]{}, where the recurrence relations for all the 2-loop self-mass amplitudes are established in the arbitrary masses case.\ It has been shown in [[@ER]]{} that by that same technique one can obtain a set of linear first order differential equations for the master integrals themselves; the coefficients of the equations are ratios of polynomials with integer coefficients in all the variables; the equations are further non homogeneous, with the non homogeneous terms given by the master integrals of the simpler graphs obtained from the considered graph by removing one or more internal propagators.\ Restricting ourselves for simplicity to the self-mass case, for any Feynman graph the related integrals can in general be written in the form $$A(\alpha,p^2) = \int d^nk \ B(\alpha,p,k) \ . {\label{1}}$$ In more detail, $ d^nk = d^nk_1...d^nk_l $ stands for the $ n $-continuous integration on an arbitrary number $ l $ of loops and $ k $ stands for the set of the corresponding loop momenta, so that there are altogether $ s=(l+1)(l+2)/2 $ different scalar products, including $ p^2 $; $ B(\alpha,p,k) $ is the product of any power of the scalar products in the numerators divided by any power of the propagators occurring in the graph (all masses will always be taken as different, unless otherwise stated); as the propagators are also simple combinations of the scalar products, simplifications might occur between numerator and denominator and as a consequence one expects quite in general $ (s-1) $ different factors altogether in the numerator and denominator, independently of the actual number of propagators present in the graph (graphs with fewer propagators have more factors in the numerator and [*vice versa*]{}); therefore the symbol $ \alpha $ in [Eq.(\[1\])]{} stands in fact for a set of $ (s-1) $ indices – the (integer) powers of the $ (s-1) $ factors.\ The integration by parts corresponding to
the amplitudes of [Eq.(\[1\])]{} are $$\int d^nk \frac{\partial}{\partial k_{i,\mu}} \Bigl[ v_\mu B(\alpha,p,k) \Bigr] = 0 \ , \hskip 1cm i=1,...,l {\label{2}}$$ where $ v $ stands for any of the $ (l+1) $ vectors $ k $ and $ p $; there are therefore $ l(l+1) $ identities for each set of indices $ \alpha $. The identity is easily established – for small $ n $ the integral of the divergence vanishes. When the derivatives are explicitly carried out, one obtains the sum of a number of terms, all equal to a simple coefficient (an integer number or, occasionally, $ n $), times an integrand of the form $ B(\beta,k,p) $, with the set of indices $ \beta $ differing at most by a unity in two places from the set $ \alpha $.\ That set of identities is infinite; even if they are not all independent, they can be used for obtaining the recurrence relations, by which one can express each integral in terms of a few already mentioned “master amplitudes", through a relation of the form $$A(\alpha,p^2) = \sum \limits_{m} C(\alpha,m) A(m,p^2) + \sum \limits_{j} C(\alpha,j) A(j,p^2) \ , {\label{3}}$$ where the set of indices $ m $ takes the very few values corresponding to the master amplitudes, $ j $ refers to simpler master integrals in which one or more denominators are missing, and the coefficients $ C(\alpha,m), C(\alpha,j) $ are ratios of polynomials in $ n $, masses and $ p^2 $.\ Let us consider now one of the master amplitudes themselves, say the master amplitude identified by the set of indices $ m $; according to [Eq.(\[1\])]{} we can write $$A(m,p^2) = \int d^nk \ B(m,p,k) \ . {\label{4}}$$ By acting with $ p_\mu (\partial/\partial p_\mu) $ on both sides we get $$p^2 \frac{\partial}{\partial p^2} A(m,p^2) = \frac{1}{2} \int d^nk \ p_\mu \frac{\partial}{\partial p_\mu} \ B(m,p,k) \ . 
{\label{5}}$$ According to the discussion following [Eq.(\[2\])]{}, the [*r.h.s.*]{} is a combination of integrands; as [Eq.(\[3\])]{} applies to each of the corresponding integrals, one obtains the relations $$p^2 \frac{\partial}{\partial p^2} A(m,p^2) = \sum \limits_{m'} C(m,m') A(m',p^2) + \sum \limits_{j} C(m,j) A(j,p^2) \ , {\label{6}}$$ which are the required master equations. As in [Eq.(\[3\])]{}, $ j $ refers to simpler master integrals (in which one or more denominators are missing; they constitute the non-homogeneous part of the master equations), to be considered as known when studying the $ A(m,p^2) $.\ It is obvious from the derivation that the master equations can be established regardless of the number of loops. It is equally clear that for graphs depending on several external momenta (such as vertex or 4-body scattering graphs) one has simply to replace the single operator $ p_\mu (\partial/\partial p_\mu) $ of [Eq.(\[5\])]{} by the set of operators $ p^i_\mu (\partial/\partial p^j_\mu) $, where $ i,j $ run on all the external momenta, and with some more algebra one can obtain master equations in any desired Mandelstam variable.\ The master equations are a powerful tool for the study and the evaluation of the master amplitudes; among other things: - they provide information on the values of the master amplitudes at special kinematical points (such as $ p^2=0 $ in [Eq.(\[6\])]{}; the [*l.h.s.*]{} vanishes, as $ p^2=0 $ is a regular point, so that the [*r.h.s.*]{} is a relation among master amplitudes at $ p^2=0 $, usually sufficient to fix their values at that point); - the master equations are valid identically in $ n $, so that they can be expanded in $ (n-4) $ and solved recursively for the various terms of the expansion in $ (n-4) $, starting from the most singular (with 2-loop amplitudes one expects at most a double pole in $ (n-4) $); - when the initial value at $ p^2 = 0 $ has been obtained, the equations can be integrated by means of fast and 
precise numerical methods (for instance with a Runge-Kutta routine), so providing a convenient approach to their numerical evaluation; note that the numerical approach can be used both for arbitrary $ n $ or for $ n=4 $, once the expansion has been properly carried out; - the equations can be used to work out virtually any kind of expansion, in particular the large $ p^2 $ expansion, as will be shown in some detail; - in particularly simple cases (for instance when most of the masses vanish and only one or two scales are left) the analytic quadrature of the equations can lead to the analytic evaluation of the master amplitudes. The 2-loop 4-propagator graph. ============================== The use of the master equations for studying the 1-loop self-mass and the 2-loop sunrise self-mass graph has been already discussed, [[@ER]]{}, [[@CCLR]]{}; we will describe here its application to the 2-loop 4-propagator self-mass graph shown in Fig.1.\ The corresponding amplitude is defined as $$\begin{aligned} {G(n,m_1^2,m_2^2,m_3^2,m_4^2,p^2)}&=& \frac{1}{C^2(n)} \int \frac{d^nk_1}{(2\pi)^{n-2}} \ \frac{d^nk_2}{(2\pi)^{n-2}} \nonumber \\ && {\kern-105pt} \frac{1} {(k_1^2+m_1^2)\ [(p-k_1)^2+m_4^2]\ (k_2^2+m_2^2)\ [(p-k_1-k_2)^2+m_3^2]} ; {\label{6a}} \end{aligned}$$ the conventional factor $ C(n) $ is defined in [[@CCLR]]{}, it is sufficient to know that at $ n=4 $ its value is 1; if all momenta are Euclidean the $ i\epsilon $ in the propagators is not needed.\ Skipping details, the master equation is\ where the $ {F_{k}(n,m_1^2,m_2^2,m_3^2,p^2)}, k=0,..,3 $ are the 2-loop self-mass sunrise master amplitudes [[@Tarasov]]{}[[@CCLR]]{}, $$\begin{aligned} {F_{0}(n,m_1^2,m_2^2,m_3^2,p^2)} &=& \frac{1}{C^2(n)} \int \frac{d^nk_1}{(2\pi)^{n-2}} \ \frac{d^nk_2}{(2\pi)^{n-2}} \nonumber \\ && {\kern-75pt} \frac{1} {(k_1^2+m_1^2)\ (k_2^2+m_2^2)\ [(p-k_1-k_2)^2+m_3^2]} ; {\label{6b}} \end{aligned}$$ and $${F_{i}(n,m_1^2,m_2^2,m_3^2,p^2)} = - \frac{\partial}{\partial m_i^2} 
{F_{0}(n,m_1^2,m_2^2,m_3^2,p^2)},\ \ i=1,2,3 {\label{6c}}$$ while $ V(n,m_1^2,m_2^2,m_3^2) $ corresponds to the 2-loop vacuum amplitude, $$\begin{aligned} V(n,m_1^2,m_2^2,m_3^2) &=& \frac{1}{C^2(n)} \int \frac{d^nk_1}{(2\pi)^{n-2}} \ \frac{d^nk_2}{(2\pi)^{n-2}} \nonumber \\ && {\kern-50pt} \frac{1} {(k_1^2+m_1^2)\ (k_2^2+m_2^2)\ [(k_1+k_2)^2+m_3^2]} {\label{6d}} \end{aligned}$$ and, as usual, $$R^2(-p^2,m_1^2,m_2^2) = p^4+m_1^4+m_2^4+2m_1^2p^2+2m_2^2p^2-2m_1^2m_2^2 \ .$$ The value at $ p^2 = 0 $ is (almost trivially) found to be $$G(n,m_1^2,m_2^2,m_3^2,m_4^2,0) = \frac{V(n,m_2^2,m_3^2,m_4^2)-V(n,m_1^2,m_2^2,m_3^2)}{m_1^2-m_4^2} {\label{8}}$$ The expansion in $ (n-4) $ reads $${G(n,m_1^2,m_2^2,m_3^2,m_4^2,p^2)}= \sum \limits_{k=-2}^{\infty} (n-4)^k G^{(k)}(m_1^2,m_2^2,m_3^2,m_4^2,p^2) {\label{9a}}$$ By expanding in the same way all the other amplitudes occurring in the master equation [Eq.(\[7\])]{} and using the results of [[@CCLR]]{}, the first values are found to be $$\begin{aligned} G^{(-2)}(m_1^2,m_2^2,m_3^2,m_4^2,p^2) &=& + \frac{1}{8} \nonumber \\ G^{(-1)}(m_1^2,m_2^2,m_3^2,m_4^2,p^2) &=& - \frac{1}{16} - \frac{1}{2} S^{(0)}(m_1^2,m_4^2,p^2) \ , {\label{9}} \end{aligned}$$ where $ S^{(0)}(m_1^2,m_4^2,p^2) $ is the finite part at $ n=4 $ of the 1-loop self-mass; more exactly, defining $${S(n,m_1^2,m_2^2,p^2)}= \frac{1}{C(n)} \int \frac{d^nk}{(2\pi)^{n-2}} \ \frac{1} {(k^2+m_1^2)\ [(p-k)^2+m_2^2]} \ ,$$ and expanding in $ (n-4) $, one finds [[@ER]]{} $${S(n,m_1^2,m_2^2,p^2)}= - \frac{1}{2} \frac{1}{(n-4)} + S^{(0)}(m_1^2,m_2^2,p^2) + {\cal O} (n-4) {\label{12}}$$ with $$\begin{aligned} S^{(0)}(m_1^2,m_2^2,p^2) &=& \frac{1}{2} - \frac{1}{4} \ln(m_1m_2) \nonumber \\ && {\kern-80pt} + \frac{1}{4p^2} \Biggl[ R(-p^2,m_1^2,m_2^2) \ln(u(p^2,m_1^2,m_2^2)) + (m_1^2-m_2^2)\ln{\frac{m_1}{m_2}} \Biggr] {\label{13}} \end{aligned}$$ where $$R(-p^2,m_1^2,m_2^2) = \sqrt{ [p^2+(m_1+m_2)^2] [p^2+(m_1-m_2)^2] }$$ $$u(p^2,m_1^2,m_2^2) = \frac { \sqrt{ p^2+(m_1+m_2)^2 } - \sqrt{
p^2+(m_1-m_2)^2 } } { \sqrt{ p^2+(m_1+m_2)^2 } + \sqrt{ p^2+(m_1-m_2)^2 } } \ .$$ The large $ p^2 $ expansion. ============================ Quite in general, with $ {\omega}= (n-4)/2 $, the large $ p^2 $ expansion is $$\begin{aligned} {G(n,m_1^2,m_2^2,m_3^2,m_4^2,p^2)}&=& (p^2)^{2{\omega}} \sum \limits_{k=0}^{\infty} G_k^{(\infty,2)}(n,m_1^2,m_2^2,m_3^2,m_4^2) \frac{1}{(p^2)^k} \nonumber \\ &+& (p^2)^{{\omega}} \sum \limits_{k=0}^{\infty} G_k^{(\infty,1)}(n,m_1^2,m_2^2,m_3^2,m_4^2) \frac{1}{(p^2)^k} \nonumber \\ &+& \frac{1}{p^2} \sum \limits_{k=0}^{\infty} G_k^{(\infty,0)}(n,m_1^2,m_2^2,m_3^2,m_4^2) \frac{1}{(p^2)^k} {\label{14}} \end{aligned}$$ Any $ l $-loop amplitude, indeed, at large $ p^2 $ develops $ l $ terms with “fractional powers" in $ p^2 $, with exponents $ {\omega}, 2{\omega}, \cdots, l{\omega}$, besides the “regular" term containing integer powers only. As any 2-loop amplitude has “fractional dimension" equal to 2 (in square mass units), on dimensional grounds the coefficients $ G_k^{(\infty,2)}, G_k^{(\infty,1)} $ and $ G_k^{(\infty,0)} $ must have “fractional dimension" equal to 0, 1 and 2 respectively.\ Similar expansions are valid for the sunrise amplitudes appearing in the [*r.h.s.*]{} of [Eq.(\[7\])]{} such as $$\begin{aligned} F_0 (n,m_1^2,m_2^2,m_3^2,p^2) &=& p^2 (p^2)^{2{\omega}} \sum \limits_{k=0}^{\infty} F_{0,k}^{(\infty,2)}(n,m_1^2,m_2^2,m_3^2) \frac{1}{(p^2)^k} \nonumber \\ &+& (p^2)^{{\omega}} \sum \limits_{k=0}^{\infty} F_{0,k}^{(\infty,1)}(n,m_1^2,m_2^2,m_3^2) \frac{1}{(p^2)^k} \nonumber \\ &+& \frac{1}{p^2} \sum \limits_{k=0}^{\infty} F_{0,k}^{(\infty,0)}(n,m_1^2,m_2^2,m_3^2) \frac{1}{(p^2)^k} {\label{15}} \end{aligned}$$ as well as for the 1-loop self-mass amplitude (whose “fractional power" is just $ {\omega}$) $$\begin{aligned} S (n,m_1^2,m_2^2,p^2) &=& (p^2)^{{\omega}} \sum \limits_{k=0}^{\infty} S_k^{(\infty,1)}(n,m_1^2,m_2^2) \frac{1}{(p^2)^k} \nonumber \\ &+& \frac{1}{p^2} \sum \limits_{k=0}^{\infty}
S_k^{(\infty,0)}(n,m_1^2,m_2^2) \frac{1}{(p^2)^k} \ . {\label{16}} \end{aligned}$$ When the above expansions are inserted into the master equations and in the recurrence relations (to be regarded, in this context, as differential equations in the masses), one obtains a number of equations providing important relations between the coefficients of the expansions.\ As a result, one finds [[@ER]]{} $$S_0^{(\infty,1)}(n,m_1^2,m_2^2) = S^{(\infty)}(n) \ , {\label{17}}$$ where $ S^{(\infty)}(n) $ is a dimensionless function of $ n $ only; an explicit calculation (most easily performed by putting $ m_2 = 0 $) gives for its expansion in $ n-4 $ $$S^{(\infty)}(n) = - \frac{1}{2} \ \frac{1}{n-4} + \frac{1}{2} + \left( \frac{1}{8} \zeta(2) - \frac{1}{2} \right) (n-4) + {\cal O} \left( (n-4)^2 \right) . {\label{18}}$$ All the other $ S_k^{(\infty,1)}(n,m_1^2,m_2^2), k=1,2,.. $ can then be obtained explicitly and are found to be proportional to $ S^{(\infty)}(n) $.\ One further finds $$S_0^{(\infty,0)}(n,m_1^2,m_2^2) = \frac { m_1^{n-2} + m_2^{n-2} } {(n-2)(n-4)} {\label{19}}$$ and similar expressions for all the other $ S_k^{(\infty,0)} $.\ Note here that in the massless limit one has the exact relation $$S(n,0,0,p^2) = (p^2)^{\omega}S^{(\infty)}(n) \ .
{\label{20}}$$ Likewise, one obtains ([[@CCLR]]{}; the results are reported here with minor changes of notation) $$F_{0,0}^{(\infty,2)}(n,m_1^2,m_2^2,m_3^2) = F^{(\infty,2)}(n) {\label{21}}$$ $$F_{0,0}^{(\infty,1)}(n,m_1^2,m_2^2,m_3^2) = F^{(\infty,1)}(n) \left( m_1^{n-2} + m_2^{n-2} + m_3^{n-2} \right) {\label{22}}$$ $$F_{0,0}^{(\infty,0)}(n,m_1^2,m_2^2,m_3^2) = \frac{ (m_1 m_2)^{n-2} + (m_1 m_3)^{n-2} + (m_2 m_3)^{n-2} } {(n-2)^2(n-4)^2} {\label{23}}$$ By expanding as usual around $ n=4 $ $$F^{(\infty,i)}(n) = \sum \limits_{j=-2}^{\infty} (n-4)^j F^{(i,j)} \ , {\label{24}}$$ one finds $$\begin{aligned} F^{(2,-2)} &&{\kern-20pt}= 0 \ \ , \ \ \ \ \ \ \ \ \ \ \ \ F^{(1,-2)} = -\frac{1}{4} \nonumber \\ F^{(2,-1)} &&{\kern-20pt}= \frac{1}{32} \ \ , \ \ \ \ \ \ \ \ \ \ F^{(1,-1)} = \frac{3}{8} \nonumber \\ F^{(2,0)} &&{\kern-20pt}= -\frac{13}{128} \ \ , \ \ \ \ \ \ F^{(1,0)\phantom{-}} = - 4F^{(2,1)} + \frac{59}{128} \ . {\label{25}} \end{aligned}$$ When the large $ p^2 $ expansions, Eq.s(\[14\],\[15\],\[16\]), are substituted into [Eq.(\[7\])]{}, one can express the $ G_k^{(\infty,i)}(n,m_1^2,m_2^2,m_3^2,m_4^2) $ in terms of the other coefficients, already known.\ One finds $$G_0^{(\infty,0)}(n,m_1^2,m_2^2,m_3^2,m_4^2) = V(n,m_2^2,m_3^2,m_4^2) {\label{26}}$$ and $$G_0^{(\infty,2)}(n,m_1^2,m_2^2,m_3^2,m_4^2) = \frac{3n-8}{n-4} F^{(\infty,2)}(n) \ . {\label{27}}$$ while $ G_0^{(\infty,1)}(n,m_1^2,m_2^2,m_3^2,m_4^2) $ depends on the combination $$(n-2) F^{(\infty,1)}(n) - S^{(\infty)}(n)/(n-4) \ ;$$ when the combination is expanded around $ n=4 $ as in [Eq.(\[24\])]{} and the explicit values of [Eq.(\[25\])]{} are used, the first 3 terms of the expansion – namely the double pole, the simple pole and the term constant in $ (n-4) $ – are all found to vanish, suggesting the existence of the exact relation $$F^{(\infty,1)}(n) = \frac{1}{(n-2)(n-4)} S^{(\infty)}(n) \ . 
{\label{28}}$$ The above result seems to be confirmed by a preliminary investigation of the large $ p^2 $ behaviour of the 5-propagator 2-loop self-mass graph (in progress). When [Eq.(\[28\])]{} is taken as valid, one finds $$G_0^{(\infty,1)}(n,m_1^2,m_2^2,m_3^2,m_4^2) = 0 {\label{29}}$$ and $$\begin{aligned} G_1^{(\infty,1)}(n,m_1^2,m_2^2,m_3^2,m_4^2) &=& \nonumber \\ && {\kern-65pt} \frac{S^{(\infty)}(n)}{(n-2)(n-4)} \Bigl[ (m_1^2)^{\omega}- (n-3) \left( (m_2^2)^{\omega}+ (m_3^2)^{\omega}\right) \Bigr] {\label{30}} \end{aligned}$$ [**Acknowledgements.**]{} As in previous work, the algebra needed in all the steps of the work has been processed by means of the computer program [FORM]{} [[@FORM]]{} by J. Vermaseren. One of the authors (E.R.) is glad to acknowledge an interesting discussion with K.G. Chetyrkin on the universality of the coefficients of the large $ p^2 $ expansions. [99]{} K.G. Chetyrkin and F.V. Tkachov, , 159 (1981); F.V. Tkachov, , 65 (1981). O.V. Tarasov, , 455 (1997). E. Remiddi, , 1435 (1997), hep-th/9711188 (1997). Similar equations, for the one-loop box amplitudes in 4 dimensions, were proposed by V. de Alfaro, B. Jakšić and T. Regge in their paper [*Differential Properties of Feynman Amplitudes*]{} in the volume [*High-Energy Physics and Elementary Particles*]{}, IAEA, Vienna 1965. A differential equations method, based on mass derivatives, has also been proposed by A.V. Kotikov, , 158 (1991), and used in a number of subsequent papers (see A.V. Kotikov, hep-ph/9807440). In that approach, amplitudes with a single non vanishing mass $ m $ are expressed as a suitable integral of the corresponding massless amplitudes, which are taken as known. M. Caffo, H. Czy[ż]{}, S. Laporta and E. Remiddi, hep-th/9805118 (1998), (to be published). J.A.M. Vermaseren, [*Symbolic Manipulation with FORM*]{}, Computer Algebra Nederland, Amsterdam (1991).
--- abstract: | An RNA sequence is a string composed of four types of nucleotides, $A, C, G$, and $U$. Given an RNA sequence, the goal of the RNA folding problem is to find a maximum cardinality set of crossing-free pairs of the form $\{A,U\}$ or $\{C,G\}$. The problem is central in bioinformatics and has received much attention over the years. However, the current best algorithm for the problem still takes $\mathcal{O}\left(\frac{n^3}{\log^2 (n)}\right)$ time, which is only a slight improvement over the classic $\mathcal{O}(n^3)$ dynamic programming algorithm. Whether the RNA folding problem can be solved in $\mathcal{O}(n^{3-\epsilon})$ time remains an open problem. Recently, Abboud, Backurs, and Williams (FOCS’15) made the first progress by showing a conditional lower bound for a generalized version of the RNA folding problem based on a conjectured hardness of the $k$-clique problem. A drawback of their work is that they require the RNA sequence to have at least 36 types of letters, making their result biologically irrelevant. In this paper, we show that by constructing the gadgets using a lemma of Bringmann and Künnemann (FOCS’15) and surrounding them with some carefully designed sequences, the framework of Abboud et al. can be improved upon to work for the case where the alphabet size is 4, yielding a conditional lower bound for the RNA folding problem. We also investigate the Dyck edit distance problem. We demonstrate a reduction from the RNA folding problem to the Dyck edit distance problem with alphabet size 10, establishing a connection between the two fundamental string problems. This leads to a much simpler proof of the conditional lower bound for the Dyck edit distance problem given by Abboud et al. and lowers the required alphabet size for the lower bound to work. 
author: - 'Yi-Jun Chang[^1]' title: Hardness of RNA Folding Problem with Four Symbols --- Keywords: RNA folding, Dyck edit distance, longest common subsequence, conditional lower bound, clique Introduction \[sec.intro\] ========================== An [*RNA sequence*]{} is a string composed of four types of nucleotides, namely $A, C, G$, and $U$. Given an RNA sequence, the goal of the [*RNA folding*]{} problem is to find a maximum cardinality set of crossing-free pairs of nucleotides, where all the pairs are either $\{A,U\}$ or $\{C,G\}$. The problem is central in bioinformatics and has found applications in many areas of molecular biology. For a more comprehensive exposition of the topic, the reader is referred to e.g. [@S15]. It is well-known that the problem can be solved in cubic time using a simple dynamic programming method [@DEKM98]. Due to the importance of RNA folding in practice, there has been a long line of research on improving the cubic time algorithm (see e.g. [@A99; @FG10; @PTZZ11; @PZTZ13; @S15; @VGF13]). Currently the best upper bound is $\mathcal{O}\left(\frac{n^3}{\log^2 (n)}\right)$ [@PZTZ13; @S15], and this can be obtained via the Four-Russians method or fast min-plus matrix multiplication (based on ideas from Valiant’s CFG parser [@V75]). Whether the RNA folding problem can be solved in $\mathcal{O}(n^{3-\epsilon})$ time for some $\epsilon > 0$ is still a major open problem. Other than attempting to improve the upper bound, we should also approach the problem in the opposite direction, i.e. showing a lower bound or arguing why the problem is hard. A popular way to show hardness of a problem is to demonstrate a lower bound conditioned on some widely accepted hypothesis. \[Strongly Exponential Time Hypothesis (SETH)\] \[c-1\] There exists no $\epsilon, k_0 > 0$ such that $k$-SAT with $n$ variables can be solved in time $\mathcal{O}(2^{(1-\epsilon)n})$ for all $k > k_0$. 
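The simple cubic-time dynamic program mentioned above is easy to make concrete. The sketch below is a minimal illustration of the classic recurrence (not code from this paper): $dp[i][j]$ is the maximum number of non-crossing $\{A,U\}$/$\{C,G\}$ pairs in $S[i..j]$, obtained by either leaving $S[i]$ unpaired or pairing it with some $S[k]$.

```python
def rna_folding(s):
    """Maximum number of non-crossing {A,U}/{C,G} pairs in s (classic O(n^3) DP)."""
    n = len(s)
    if n == 0:
        return 0
    ok = {frozenset("AU"), frozenset("CG")}
    dp = [[0] * n for _ in range(n)]  # dp[i][j] = optimum for s[i..j]
    for length in range(2, n + 1):
        for i in range(n - length + 1):
            j = i + length - 1
            best = dp[i + 1][j]  # option 1: s[i] stays unpaired
            for k in range(i + 1, j + 1):  # option 2: s[i] pairs with s[k]
                if frozenset((s[i], s[k])) in ok:
                    inner = dp[i + 1][k - 1] if k > i + 1 else 0
                    outer = dp[k + 1][j] if k < j else 0
                    best = max(best, 1 + inner + outer)
            dp[i][j] = best
    return dp[0][n - 1]
```

For instance, `rna_folding("ACGU")` returns 2: the $C$–$G$ pair nested inside the $A$–$U$ pair.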
\[c-2\] There exists no $\epsilon, k_0 > 0 $ such that $k$-clique on graphs with $n$ nodes can be solved in time $\tilde{\mathcal{O}}\left(n^{(\omega - \epsilon) k/3}\right)$ for all $k > k_0$, where $\omega < 2.373$ is the matrix multiplication exponent. Assuming that SETH (Conjecture \[c-1\]) holds, the following bounds are unattainable for any $\epsilon > 0$: - an $\mathcal{O}(n^{k-\epsilon})$ algorithm for the $k$-dominating set problem [@PR10], - an $\mathcal{O}(n^{2-\epsilon})$ algorithm for dynamic time warping, longest common subsequence, and edit distance [@ABV15*; @BI14; @BK15], - an $\mathcal{O}(m^{2-\epsilon})$ algorithm for ($3/2 - \epsilon$)-approximating the diameter of a graph with $m$ edges [@RV13]. As remarked in [@ABV15], it is easy to reduce the longest common subsequence problem on binary strings to the RNA folding problem as follows: Given two binary strings $X, Y$, we let $\hat{X} \in {\{A,C\}}^{|X|}$ be the string such that $\hat{X}[i] = A$ if $X[i] = 0$, $\hat{X}[i] = C$ if $X[i] = 1$, and we let $\hat{Y} \in {\{G,U\}}^{|Y|}$ be the string such that $\hat{Y}[i] = U$ if $Y[i] = 0$, $\hat{Y}[i] = G$ if $Y[i] = 1$. Then we have a 1-1 correspondence between RNA foldings of $\hat{X} \circ \hat{Y}^R$ (i.e. concatenation of $\hat{X}$ and the reversal of $\hat{Y}$) and common subsequences of $X$ and $Y$. It has been shown in [@BK15] that there is no $\mathcal{O}(n^{2-\epsilon})$ algorithm for the longest common subsequence problem on binary strings conditioned on SETH, and we immediately get the same conditional lower bound for RNA folding from the simple reduction! 
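The reduction just described can be checked mechanically. The sketch below (helper names are mine; the LCS and folding DPs are standard textbook routines, not taken from the paper) builds $\hat{X} \circ \hat{Y}^R$ with exactly the mapping above and confirms that its optimal folding size equals the LCS length: within each half no $\{A,U\}$/$\{C,G\}$ pair is possible, so every pair crosses the halves and a non-crossing matching corresponds to a common subsequence.

```python
def lcs_len(x, y):
    """Length of a longest common subsequence (standard quadratic DP)."""
    dp = [[0] * (len(y) + 1) for _ in range(len(x) + 1)]
    for i, a in enumerate(x):
        for j, b in enumerate(y):
            dp[i + 1][j + 1] = dp[i][j] + 1 if a == b else max(dp[i][j + 1], dp[i + 1][j])
    return dp[len(x)][len(y)]

def rna_folding(s):
    """Maximum non-crossing {A,U}/{C,G} matching (cubic interval DP, half-open intervals)."""
    n, ok = len(s), {frozenset("AU"), frozenset("CG")}
    dp = [[0] * (n + 1) for _ in range(n + 1)]  # dp[i][j] covers s[i:j]
    for length in range(2, n + 1):
        for i in range(n - length + 1):
            j = i + length
            dp[i][j] = dp[i + 1][j]  # s[i] unpaired
            for k in range(i + 1, j):  # s[i] paired with s[k]
                if frozenset((s[i], s[k])) in ok:
                    dp[i][j] = max(dp[i][j], 1 + dp[i + 1][k] + dp[k + 1][j])
    return dp[0][n]

def lcs_to_rna(x, y):
    """Map binary X over {A,C} and Y over {G,U}, then concatenate X^ with the reversal of Y^."""
    xh = "".join("A" if c == "0" else "C" for c in x)
    yh = "".join("U" if c == "0" else "G" for c in y)
    return xh + yh[::-1]

x, y = "10110", "0110"
assert rna_folding(lcs_to_rna(x, y)) == lcs_len(x, y) == 4
```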
Very recently, based on a conjectured hardness of the $k$-clique problem (Conjecture \[c-2\]), a higher conditional lower bound was proved for a generalized version of the RNA folding problem (which coincides with the RNA folding problem when the alphabet size is 4) [@ABV15]: \[[@ABV15]\] \[thm-1\] If the generalized RNA folding problem on sequences of length $n$ with alphabet size 36 can be solved in $T(n)$ time, then $3k$-clique on graphs with $|V|=n$ can be solved in $\mathcal{O}\left(T\left(n^{k + 2} \log(n) \right)\right)$ time. Therefore, an $\mathcal{O}(n^{\omega - \epsilon})$ time algorithm for the generalized RNA folding with alphabet size at least 36 will disprove Conjecture \[c-2\], yielding a breakthrough in the parameterized complexity of the clique problem. However, the above theorem is irrelevant to the RNA folding problem in real life (which has alphabet size 4). It is unknown whether the generalized RNA folding for alphabet size $4$ admits a faster algorithm than the case for alphabet size $> 4$. In fact, there are examples of string algorithms whose running time scales with alphabet size (e.g. string matching with mismatches [@AL91] and jumbled indexing [@ACLL14; @CL15]). We also note that when the alphabet size is 2, the generalized RNA folding can be trivially solved in linear time. In this paper, we improve upon Theorem \[thm-1\] by showing the same conditional lower bound for the RNA folding problem: \[thm-2\] If the RNA folding problem on sequences in ${\{A,C,G,U\}}^n$ can be solved in $T(n)$ time, then $3k$-clique on graphs with $|V|=n$ can be solved in $\mathcal{O}\left(T\left(n^{k + 1} \log(n) \right)\right)$ time. Note that we also get an $\mathcal{O}(n)$ factor improvement inside $T(\cdot)$, though it does not affect the conditional lower bound. 
The current state-of-the-art algorithm for $k$-clique, which takes $\tilde{\mathcal{O}}\left(n^{\omega k/3}\right)$ time, requires the use of fast matrix multiplication [@EG04] which does not perform very efficiently in practice. For combinatorial, non-algebraic algorithms for $k$-clique, the current best one runs in $\tilde{\mathcal{O}}\left(\frac{n^k}{\log^k (n)}\right)$ time [@V09], which is only slightly better than the trivial approach. As a result, by Theorem \[thm-2\], even an $\mathcal{O}(n^{3-\epsilon})$ time combinatorial algorithm for RNA folding will lead to an improvement for combinatorial algorithms for $k$-clique! In the proof of Theorem \[thm-1\] in [@ABV15], given a graph $G=(V,E)$, a sequence $S$ of length $\mathcal{O}(n^{k+2} \log(n))$ is constructed in such a way that we can decide whether $G$ has a $3k$-clique according to the number of pairs in an optimal generalized RNA folding of $S$. Such a construction requires many different types of letters in order to build various “walls” which prevent undesired pairings between different parts of the sequence. Hence extending their approach to handle the case where the alphabet size is 4 may not be easy without aid from other techniques and ideas. [**Overview of our approach.**]{} At a high level, our reduction (from the $3k$-clique problem to the RNA folding problem) follows the approach in [@ABV15]: We enumerate all $k$-cliques, and each of them is encoded as some gadgets. All the gadgets are then put together to form an RNA sequence. The goal is to ensure that an optimal RNA folding corresponds to choosing three $k$-cliques that form a $3k$-clique, given that the underlying graph admits a $3k$-clique. To achieve this goal without using extra types of letters that force the gadgets to match in a desired manner, we construct the gadgets via a key lemma in [@BK15], whose original purpose is to prove that longest common subsequence and other edit distance problems are SETH-hard even on binary strings. 
We will treat it as a black box and apply it multiple times during the construction. This powerful tool will allow us to test whether two $k$-cliques form a $2k$-clique via the longest common subsequence between the two strings representing the two $k$-cliques. In the final RNA sequence, all clique gadgets are well-separated by some carefully designed sequences whose purpose is to “trap” all the clique gadgets except three of them. Since we know that these three clique gadgets are guaranteed to match well if the graph has a $3k$-clique, we can infer whether the graph has a $3k$-clique from the optimal RNA folding of the RNA sequence. [**Dyck Edit Distance.**]{} One other way to formulate the RNA folding problem is as follows: deleting the minimum number of letters in a given string to transform the string into a string in the language defined by the grammar $\mathbf{S} \rightarrow \mathbf{SS}, A\mathbf{S}U, U\mathbf{S}A, C\mathbf{S}G, G\mathbf{S}C, \epsilon$ (empty string). The [*Dyck edit distance problem*]{} [@S14; @S15*], which asks for the minimum number of edits to transform a given string into a well-balanced string of parentheses of $s$ different types, has a similar formulation. Due to the similarity, the same conditional lower bound as Theorem \[thm-1\] was also shown for the Dyck edit distance problem (with alphabet size $\geq 48$) in [@ABV15]. In this paper, we improve and simplify their result by demonstrating a simple reduction from RNA folding to the Dyck edit distance problem: \[thm-3\] If the Dyck edit distance problem on sequences of length $n$ with alphabet size 10 can be solved in $T(n)$ time, then the RNA folding problem on sequences in ${\{A,C,G,U\}}^n$ can be solved in $\mathcal{O}(T(n))$ time. 
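The min-deletion formulation of RNA folding above admits the same kind of interval DP. The sketch below is an illustration (not the paper's reduction): it computes the minimum number of deletions needed to land in the language generated by $\mathbf{S} \rightarrow \mathbf{SS} \mid A\mathbf{S}U \mid U\mathbf{S}A \mid C\mathbf{S}G \mid G\mathbf{S}C \mid \epsilon$, which always equals $|S| - 2\cdot\text{RNA}(S)$, since every letter that is kept must participate in a pair.

```python
def min_deletions(s):
    """Fewest deletions so that what remains is generated by
    S -> SS | A S U | U S A | C S G | G S C | epsilon."""
    n, ok = len(s), {frozenset("AU"), frozenset("CG")}
    d = [[0] * (n + 1) for _ in range(n + 1)]  # d[i][j] over half-open s[i:j]
    for i in range(n):
        d[i][i + 1] = 1  # a lone letter must be deleted
    for length in range(2, n + 1):
        for i in range(n - length + 1):
            j = i + length
            d[i][j] = 1 + d[i + 1][j]  # delete s[i] ...
            for k in range(i + 1, j):  # ... or pair s[i] with s[k]
                if frozenset((s[i], s[k])) in ok:
                    d[i][j] = min(d[i][j], d[i + 1][k] + d[k + 1][j])
    return d[0][n]

assert min_deletions("ACGU") == 0  # fold everything: A(CG)U
assert min_deletions("ACG") == 1   # drop A, keep the C-G pair
```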
Combining Theorems \[thm-2\] and \[thm-3\], we get the following corollary: \[cor-1\] If the Dyck edit distance problem on sequences of length $n$ with alphabet size 10 can be solved in $T(n)$ time, then $3k$-clique on graphs with $|V|=n$ can be solved in $\mathcal{O}\left(T\left(n^{k + 1} \log(n) \right)\right)$ time. Preliminaries \[sec.prelim\] ============================ Given a set of letters $\Sigma$, the set $\Sigma'$ is defined as $\{x' | x \in \Sigma\}$. We require that $\Sigma \cap \Sigma' = \emptyset$, and $\forall x, y \in \Sigma, (x \neq y) \rightarrow (x' \neq y')$. Therefore, we have $|\Sigma'| = |\Sigma|$ and $|\Sigma \cup \Sigma'| = 2|\Sigma|$. For any $X = (x_1, \ldots, x_k) \in \Sigma^k$, we write $p(X)$ to denote $(x_1', \ldots, x_k')$ (the letter $p$ stands for the prime symbol). We denote the reversal of the sequence $X$ as $X^R$. The concatenation of two sequences $X, Y$ is denoted as $X \circ Y$ (or simply $XY$). We write [*substring*]{} to denote a contiguous subsequence. Two pairs of indices $(i_1,j_1)$, $(i_2,j_2)$, with $i_1 < j_1$ and $i_2 < j_2$, form a [*crossing pair*]{} iff $$\left( \{i_1, j_1\} \cap \{i_2,j_2\} \neq \emptyset \right) \vee \left( i_1 < i_2 < j_1 < j_2 \right) \vee \left( i_2 < i_1 < j_2 < j_1\right).$$ [**Generalized RNA Folding.**]{} Given $S \in (\Sigma \cup \Sigma')^n$, the goal of the generalized RNA folding problem is to find a maximum cardinality set $A \subseteq \{(i,j) | 1 \leq i < j \leq n \}$ among all sets meeting the following conditions: - $A$ does not contain any crossing pair. - For any $(i,j) \in A$, either (i) $S[i] \in \Sigma$ and $S[j] = S[i]'$ or (ii) $S[j] \in \Sigma$ and $S[i] = S[j]'$ is true. We write $\text{RNA}(S) = |A|$. Any set meeting the above conditions is called an [*RNA folding*]{} of $S$. If its cardinality equals $\text{RNA}(S)$, then it is said to be [*optimal*]{}. In this paper we will only focus on the generalized RNA folding problem with four types of letters, i.e. 
$\Sigma = \{0,1\}, \Sigma' = \{0',1'\}$, which coincides with the RNA folding problem for alphabet $\{A,C,G,U\}$. With a slight abuse of notation, sometimes we will write $(S[i], S[j])$ to denote a pair $(i,j) \in A$. The notation $\{\cdot,\cdot\}$ is used to indicate an unordered pair. [**Longest Common Subsequence (LCS).**]{} Given $X \in \Sigma^n$ and $Y \in \Sigma^m$, we define $\delta_{\text{LCS}} (X,Y) = n+m - 2k$, where $k =$ the length of the longest common subsequence of $X$ and $Y$. It is easy to observe that $\text{RNA} (X \circ p(Y^R))$ equals the length of LCS $= (n+m - \delta_{\text{LCS}} (X,Y))/2$. In this sense, we can conceive of an LCS problem as an RNA folding problem with some structural constraint on the sequence. In [@BK15], a conditional lower bound for the LCS problem with $|\Sigma| = 2$ based on SETH was presented. A key technique in their approach is a function that transforms an instance of an alignment problem between two sets of sequences to an instance of the LCS problem. [**Alignments of two sets of sequences.**]{} Let $\mathbf{X} = (X_1, \ldots, X_n)$ and $\mathbf{Y} = (Y_1, \ldots, Y_m)$ be two linearly ordered sets of sequences of alphabet $\Sigma$. We assume that $n \geq m$. An [*alignment*]{} is a set $A = \{(i_1, j_1), (i_2, j_2), \ldots$, $(i_{|A|}, j_{|A|})\}$ with $1 \leq i_1 < i_2 < \ldots < i_{|A|} \leq n$ and $1 \leq j_1 < j_2 < \ldots < j_{|A|} \leq m$. An alignment $A$ is called [*structural*]{} iff $|A| = m$ and $i_{m} = i_1 + m - 1$. That is, all sequences in $\mathbf{Y}$ are matched, and the matched positions in $\mathbf{X}$ are contiguous. The set of all alignments is denoted as $\mathcal{A}_{n,m}$, and the set of all structural alignments is denoted as $\mathcal{S}_{n,m}$. 
The [*cost*]{} of an alignment $A$ (with respect to $\mathbf{X}$ and $\mathbf{Y}$) is defined as: $$\delta(A) = \sum_{(i,j) \in A} \delta_{\text{LCS}}(X_i, Y_j) + (m - |A|) \max_{i,j} \delta_{\text{LCS}}(X_i, Y_j).$$ That is, unaligned parts of $\mathbf{Y}$ are penalized by $\max_{i,j} \delta_{\text{LCS}}(X_i, Y_j)$. Given a sequence $X$, the [*type*]{} of $X$ is defined as $(|X|, \sum_i X[i])$, where each letter is assumed to be a number. Note that when the alphabet is simply $\{0, 1\}$, $\sum_i X[i]$ is simply the number of occurrences of $1$ in $X$. The following key lemma was proved in [@BK15] (Lemma 4.3 of [@BK15]): \[[@BK15]\] \[lem-1\] Let $\mathbf{X} = (X_1, \ldots, X_n)$ and $\mathbf{Y} = (Y_1, \ldots, Y_m)$ be two linearly ordered sets of binary strings such that $n \geq m$, all $X_i$ are of type $\mathcal{T}_{X} = (\ell_{X}, s_{X})$, and all $Y_i$ are of type $\mathcal{T}_Y = (\ell_{Y}, s_{Y})$. There are two binary strings $S_X = \mathrm{GA}_{{X}}^{m, \mathcal{T}_{Y}}(X_1, \ldots, X_n)$, $S_Y = \mathrm{GA}_{{Y}}^{n, \mathcal{T}_{X}}(Y_1, \ldots, Y_m)$ and an integer $C$ meeting the following requirements: - $\min_{A \in \mathcal{A}_{n,m}} \delta(A) \leq \delta_{\text{LCS}}(S_X, S_Y) - C \leq \min_{A \in \mathcal{S}_{n,m}} \delta(A)$. - The types of $S_X, S_Y$ and the integer $C$ only depend on $n, m, \mathcal{T}_{X}, \mathcal{T}_{Y}$. - $S_X, S_Y$, and $C$ can be calculated in time $\mathcal{O}((n+m)(\ell_X +\ell_Y))$ (hence $|S_X|$ and $|S_Y|$ are both $\mathcal{O}((n+m)(\ell_{X} +\ell_{Y}))$ ). Note that the term $\mathrm{GA}$ comes from the word gadget. Intuitively, computing an optimal alignment (or an optimal structural alignment) of two sets of sequences is at least as hard as computing a longest common subsequence. 
The above lemma gives a reduction from the computation of a number $s$ with $\min_{A \in \mathcal{A}_{n,m}} \delta(A) \leq s \leq \min_{A \in \mathcal{S}_{n,m}} \delta(A)$ (which can be regarded as an approximation of optimal alignments) to a single LCS instance. We will use the above lemma as a black box to devise two encodings, the clique node gadget $\text{CNG}(t)$ and the clique list gadget $\text{CLG}(t)$, for a $k$-clique $t$ in a graph in such a way that we can decide whether two $k$-cliques $t_1, t_2$ form a $2k$-clique according to the value of $\delta_{\text{LCS}} (\text{CNG}(t_1), \text{CLG}(t_2))$. When invoking the lemma, $\mathbf{X}$, $\mathbf{Y}$ are designed in such a way that we can test whether a condition is met (e.g. whether two given $k$-cliques form a $2k$-clique) by the value of $\min_{A \in \mathcal{A}_{n,m}} \delta(A)$. We will show that $\min_{A \in \mathcal{A}_{n,m}} \delta(A) = \min_{A \in \mathcal{S}_{n,m}} \delta(A)$ for the case we are interested in. Therefore, we can infer whether the condition we are interested in is met from the value of $\delta_{\text{LCS}}(S_X, S_Y)$. From Cliques to RNA Folding \[sec.reduction\] ============================================= The goal of this section is to prove Theorem \[thm-2\]. Let $G = (V,E)$ be a graph, and let $n = |V|$. We write $\mathcal{C}_k$ to denote the set of $k$-cliques in $G$. We fix $\Sigma = \{0, 1\}$. As in [@ABV15], we will construct a sequence $S_G \in (\Sigma \cup \Sigma')^\ast$ such that we can decide whether $G$ has a $3k$-clique according to the value of $\text{RNA}(S_G)$. As our framework of the construction of $S_G$ is similar to the one in [@ABV15], we will give the building blocks (for constructing $S_G$) the same names as their analogues in [@ABV15], even though they may have different lower-level implementations. 
The high-level plan is described as follows: In Section \[ss-1\] we describe two encodings $\text{CNG}(t), \text{CLG}(t)$ for a $k$-clique $t$ based on the black box described in Lemma \[lem-1\]. In Section \[ss-2\], adapting the encodings shown in the previous subsection as the building blocks, we present the definition of the binary sequence $S_G$. We will give a lower bound on $\text{RNA}(S_G)$ by demonstrating an RNA folding of $S_G$, and the bound will depend on whether $G$ has a $3k$-clique. The goal of the next two subsections is to show that the bound given in Section \[ss-2\] is actually the exact value of $\text{RNA}(S_G)$. In Section \[ss-3\], we show that there exists an optimal RNA folding of $S_G$ meeting several constraints. These constraints will simplify the calculation of $\text{RNA}(S_G)$, and we will work out the exact calculation in Section \[ss-4\]. Testing $2k$-cliques via LCS \[ss-1\] ------------------------------------- We associate with each vertex $v \in V$ a distinct integer in $\{0, 1, \ldots, n-1\}$. Let $s_v$ be the binary encoding of this integer with $|s_v| = \lceil \log (n) \rceil$. We define $\bar{v}$ to be the binary string resulting from replacing each 0 in $s_v$ with 01 and replacing each 1 in $s_v$ with 10. It is clear that for each $v \in V$, $\bar{v}$ is of type $\mathcal{T}_0 = (2\lceil \log (n) \rceil, \lceil \log (n) \rceil)$, and $\delta_{\text{LCS}} (\bar{u}, \bar{v}) = 0$ iff $u = v$. In this subsection we present two encodings $\text{CNG}(t), \text{CLG}(t)$ for a $k$-clique $t$ such that we can infer whether two $k$-cliques $t_1, t_2$ form a $2k$-clique from the value of $\delta_{\text{LCS}} (\text{CNG}(t_1), \text{CLG}(t_2))$. 
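The vertex encoding $\bar{v}$ is easy to check experimentally. The sketch below (helper names are mine) builds $\bar{v}$ for every vertex of a small example graph and verifies both the common type $\mathcal{T}_0$ and the property $\delta_{\text{LCS}}(\bar{u}, \bar{v}) = 0$ iff $u = v$.

```python
import math
from itertools import product

def lcs_len(x, y):
    """Length of a longest common subsequence (standard quadratic DP)."""
    dp = [[0] * (len(y) + 1) for _ in range(len(x) + 1)]
    for i, a in enumerate(x):
        for j, b in enumerate(y):
            dp[i + 1][j + 1] = dp[i][j] + 1 if a == b else max(dp[i][j + 1], dp[i + 1][j])
    return dp[len(x)][len(y)]

def delta_lcs(x, y):
    return len(x) + len(y) - 2 * lcs_len(x, y)

def bar(v, n):
    """Encode vertex number v: binary on ceil(log n) bits, then 0 -> 01, 1 -> 10."""
    bits = format(v, "0{}b".format(math.ceil(math.log2(n))))
    return "".join("01" if b == "0" else "10" for b in bits)

n = 8
codes = [bar(v, n) for v in range(n)]
# all codes share the type T0 = (2*ceil(log n), ceil(log n))
assert all(len(c) == 6 and c.count("1") == 3 for c in codes)
# delta_LCS(bar(u), bar(v)) = 0 exactly when u = v
for u, v in product(range(n), repeat=2):
    assert (delta_lcs(codes[u], codes[v]) == 0) == (u == v)
```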
For each $v \in V$, the [*list gadget*]{} $\textrm{LG}(v)$ and the [*node gadget*]{} $\textrm{NG}(v)$ are defined as following: - $\textrm{LG}(v) = \mathrm{GA}_{{X}}^{1, \mathcal{T}_0}(\bar{u}_1, \bar{u}_2, \ldots, \bar{u}_{|N(v)|}, 1^{\lceil \log (n) \rceil}0^{\lceil \log (n) \rceil}, \ldots, 1^{\lceil \log (n) \rceil}0^{\lceil \log (n) \rceil} )$, where $N(v) = \{{u_1}, {u_2},$ $\ldots, {u_{|N(v)|}}\}$, and the number of occurrences of $1^{\lceil \log (n) \rceil}0^{\lceil \log (n) \rceil}$ is $n - |N(v)|$. - $\textrm{NG}(v) = \mathrm{GA}_{{Y}}^{n, \mathcal{T}_0}(\bar{v})$. \[lem-2\] There is a constant $c_0$, depending only on $n$, such that for any $v_1, v_2 \in V$, we have $\{v_1, v_2\} \in E$ iff $\delta_{\text{LCS}}(\textrm{LG}(v_1), \textrm{NG}(v_2)) = c_0 = \min_{v_1', v_2' \in V} \delta_{\text{LCS}} (\textrm{LG}(v_1'), \textrm{NG}(v_2'))$. We let $N(v_1) = \{{u_1, u_2, \ldots, u_{|N(v_1)|}}\}$. Let $\mathbf{X} = (\bar{u}_1, \bar{u}_2, \ldots, \bar{u}_{|N(v_1)|}, 1^{\lceil \log (n) \rceil}0^{\lceil \log (n) \rceil}, \ldots, 1^{\lceil \log (n) \rceil}0^{\lceil \log (n) \rceil})$, where the number of occurrences of $1^{\lceil \log (n) \rceil}0^{\lceil \log (n) \rceil}$ is $n - |N(v_1)|$, and let $\mathbf{Y} = (\bar{v}_2)$. In view of Lemma \[lem-1\], we have $\min_{A \in \mathcal{A}_{n,1}} \delta(A) \leq \delta_{\text{LCS}}(\textrm{LG}(v_1)$, $\textrm{NG}(v_2)) - C \leq \min_{A \in \mathcal{S}_{n,1}} \delta(A)$, for some $C$ whose value depends on $|\mathbf{X}|, |\mathbf{Y}|$, and $\mathcal{T}_0$. As these parameters depend solely on $n$, the number $C$ only depends on $n$. Since $|\mathbf{Y}|=1$, any non-empty alignment between $\mathbf{X}$ and $\mathbf{Y}$ is structural. This implies that $\delta_{\text{LCS}}(\textrm{LG}(v_1)$, $\textrm{NG}(v_2)) - C = \min_{A \in \mathcal{A}_{n,1}} \delta(A) = \min_{A \in \mathcal{S}_{n,1}} \delta(A)$. 
When $\{v_1, v_2\} \in E$, since $\bar{v}_2$ is contained in $\mathbf{X}$, clearly $\min_{A \in \mathcal{S}_{n,m}} \delta(A) = 0$. When $\{v_1, v_2\} \not\in E$, $\bar{v}_2$ does not appear in $\mathbf{X}$, so $\min_{A \in \mathcal{S}_{n,m}} \delta(A) > 0$. Note that $1^{\lceil \log (n) \rceil}0^{\lceil \log (n) \rceil} \neq \bar{v}$, for any $v \in V$. As a result, $\{v_1, v_2\} \in E$ iff $\delta_{\text{LCS}}(\textrm{LG}(v_1)$, $\textrm{NG}(v_2)) = C = \min_{v_1', v_2' \in V} \delta_{\text{LCS}} (\textrm{LG}(v_1'), \textrm{NG}(v_2'))$. Hence setting $c_0 = C$ suffices. We let $\mathcal{T}_X$ be the type of the list gadgets, and we let $\mathcal{T}_Y$ be the type of the node gadgets. For each $k$-clique $t = \{u_1, u_2, \ldots, u_k\}$, we define the [*clique node gadget*]{} $\textrm{CNG}(t)$ and the [*clique list gadget*]{} $\textrm{CLG}(t)$ as following: - $\textrm{CLG}(t) = \mathrm{GA}_{{X}}^{k^2, \mathcal{T}_Y}(\textrm{LG}(u_1), \ldots, \textrm{LG}(u_1), \textrm{LG}(u_2), \ldots, \textrm{LG}(u_2), \ldots, \textrm{LG}(u_k), \ldots, \textrm{LG}(u_k))$, where the number of occurrences of each $\textrm{LG}(u_i)$ is $k$. - $\textrm{CNG}(t) = \mathrm{GA}_{{Y}}^{k^2, \mathcal{T}_X}( \textrm{NG}(u_1), \textrm{NG}(u_2), \ldots, \textrm{NG}(u_k), \textrm{NG}(u_1), \textrm{NG}(u_2), \ldots, \textrm{NG}(u_k), \ldots$, $\textrm{NG}(u_1)$, $\textrm{NG}(u_2), \ldots$, $\textrm{NG}(u_k) )$, where the number of occurrences of each $\textrm{NG}(u_i)$ is $k$. We are ready to prove the main lemma in the subsection: \[lem-3\] There is a constant $c_1$, depending only on $n, k$, such that for any $t_1, t_2 \in \mathcal{C}_k $ , $t_1 \cup t_2$ is a $2k$-clique iff $\delta_{\text{LCS}} (\textrm{CLG}(t_1), \textrm{CNG}(t_2)) = c_1 = \min_{t_1', t_2' \in \mathcal{C}_k} \delta_{\text{LCS}} (\textrm{CLG}(t_1'), \textrm{CNG}(t_2'))$. Let $t_1 = \{u_1, u_2, \ldots, u_k\}$, and let $t_2 = \{v_1, v_2, \ldots, v_k\}$. 
Let $\mathbf{X} = (\textrm{LG}(u_1), \ldots, \textrm{LG}(u_1), \textrm{LG}(u_2), \ldots, \textrm{LG}(u_2), \ldots, \textrm{LG}(u_k), \ldots, \textrm{LG}(u_k))$, where each $\textrm{LG}(u_i)$ appears $k$ times, and let $\mathbf{Y} = ( \textrm{NG}(v_1), \textrm{NG}(v_2), \ldots, \textrm{NG}(v_k), \textrm{NG}(v_1)$, $\textrm{NG}(v_2)$, $\ldots$, $\textrm{NG}(v_k)$, $\ldots$, $\textrm{NG}(v_1)$, $\textrm{NG}(v_2)$, $\ldots$, $\textrm{NG}(v_k) )$, where each $\textrm{NG}(v_i)$ appears $k$ times. In view of Lemma \[lem-2\], we have $\min_{{w_1}, {w_2} \in V} \delta_{\text{LCS}} (\textrm{LG}({w_1}), \textrm{NG}({w_2})) \geq c_0$, so we can lower bound $\min_{A \in \mathcal{A}_{k^2,k^2}} \delta(A)$ by $k^2 c_0$. If $\max_{i,j} \delta_{\text{LCS}}(X_i, Y_j) = c_0$, any alignment has cost $k^2 c_0$. When $\max_{i,j} \delta_{\text{LCS}}(X_i, Y_j) > c_0$, it is easy to observe that in order to achieve $\delta(A) = k^2 c_0$, all sequences in $\mathbf{Y}$ must be aligned (as the cost for any unaligned sequence in $\mathbf{Y}$ is now $> c_0$). Therefore, any alignment $A$ with $\delta(A) = k^2 c_0$ must be $A = \{(i, i)| i \in \{1, 2, \ldots, k^2\}\}$ with $\delta_{\text{LCS}}(X_i, Y_i) = c_0$, for all $i \in \{1, 2, \ldots, k^2\}$. In view of the above, $\min_{A \in \mathcal{A}_{k^2,k^2}} \delta(A) = k^2 c_0$ iff $\delta_{\text{LCS}}(X_i, Y_i) = c_0$ for all $i \in \{1, 2, \ldots, k^2\}$. Since $A = \{(i, i)| i \in \{1, 2, \ldots, k^2\}\}$ is structural, $\min_{A \in \mathcal{A}_{k^2,k^2}} \delta(A) = k^2 c_0$ iff $\min_{A \in \mathcal{S}_{k^2,k^2}} \delta(A) = k^2 c_0$. Therefore, in view of Lemma \[lem-1\], there exists a constant $C$ such that: - If $\min_{A \in \mathcal{A}_{k^2,k^2}} \delta(A) = k^2 c_0$, then $\delta_{\text{LCS}} (\textrm{CLG}({t_1})$, $\textrm{CNG}({t_2})) = k^2 c_0 + C$. - If $\min_{A \in \mathcal{A}_{k^2,k^2}} \delta(A) > k^2 c_0$, then $\delta_{\text{LCS}} (\textrm{CLG}({t_1})$, $\textrm{CNG}({t_2})) > k^2 c_0 + C$. 
Moreover, the value of $C$ depends only on $|\mathbf{X}|, |\mathbf{Y}|$, $\mathcal{T}_X, \mathcal{T}_Y$. As these parameters depend solely on $n, k$, the number $C$ only depends on $n, k$. When $t_1 \cup t_2$ is a $2k$-clique, all vertices in $t_1$ are adjacent to all vertices in $t_2$. In view of Lemma \[lem-2\], $\forall_{i,j} \delta_{\text{LCS}}(X_i, Y_j) = c_0$. Hence $\min_{A \in \mathcal{A}_{k^2,k^2}} \delta(A) = k^2 c_0$, implying that $\delta_{\text{LCS}} (\textrm{CLG}({t_1})$, $\textrm{CNG}({t_2})) = k^2 c_0 + C$. When $t_1 \cup t_2$ is not a $2k$-clique, there exist $u_i \in t_1, v_j \in t_2$ such that $\{u_i,v_j\} \not\in E$. According to our definition of $\mathbf{X}$ and $\mathbf{Y}$, we have $X_{j+k(i-1)} = \textrm{LG}(u_i)$, $Y_{j+k(i-1)} = \textrm{NG}(v_j)$, and hence $\delta_{\text{LCS}}(X_{j+k(i-1)}, Y_{j+k(i-1)}) > c_0$. This implies that $\min_{A \in \mathcal{A}_{k^2,k^2}} \delta(A) > k^2 c_0$, which leads to $\delta_{\text{LCS}} (\textrm{CLG}({t_1})$, $\textrm{CNG}({t_2})) > k^2 c_0 + C$. As a result, $t_1 \cup t_2$ is a $2k$-clique iff $\delta_{\text{LCS}} (\textrm{CLG}(t_1)$, $\textrm{CNG}(t_2)) = k^2 c_0 + C = \min_{t_1', t_2' \in \mathcal{C}_k} \delta_{\text{LCS}} ($ $\textrm{CLG}(t_1'), \textrm{CNG}(t_2'))$. Setting $c_1 = k^2 c_0 + C$ suffices. The following lemma is a simple consequence of Lemma \[lem-1\]: \[lem-length\] There exist four integers $\ell_{\textrm{CNG}, 0}$, $\ell_{\textrm{CNG}, 1}$, $\ell_{\textrm{CLG}, 0}$, $\ell_{\textrm{CLG}, 1} \in \mathcal{O}(k^2 n \log (n))$, such that for any $t \in \mathcal{C}_k$, - $\ell_{\textrm{CNG}, b} = $ the number of occurrences of $b$ in $\textrm{CNG}(t)$, $b \in \{0,1\}$. - $\ell_{\textrm{CLG}, b} = $ the number of occurrences of $b$ in $\textrm{CLG}(t)$, $b \in \{0,1\}$. As a consequence of Lemma \[lem-1\], all $\textrm{CNG}(t)$ have the same type, and all $\textrm{CLG}(t)$ have the same type. Therefore, the existence of these four integers is guaranteed. 
In view of Lemma \[lem-1\], for all $v \in V$, both $\textrm{LG}(v)$ and $\textrm{NG}(v)$ have length at most $(n+1) \cdot (2 \lceil \log (n) \rceil + 2 \lceil \log (n) \rceil ) = \mathcal{O} (n \log (n))$. Applying Lemma \[lem-1\] again, the length of both $\textrm{CNG}(t)$ and $\textrm{CLG}(t)$ for all $t \in \mathcal{C}_k$ is $(k^2 + k^2)(\mathcal{O} (n \log (n)) + \mathcal{O} (n \log (n))) = \mathcal{O}(k^2 n \log (n))$. As a result, the four integers can be bounded by $\mathcal{O}(k^2 n \log (n))$. The RNA sequence $S_G$ \[ss-2\] ------------------------------- Based on the parameters in Lemma \[lem-length\], we define $\ell_0 = \ell_{\textrm{CNG}, 0} + \ell_{\textrm{CNG}, 1} +\ell_{\textrm{CLG}, 0} +\ell_{\textrm{CLG}, 1} = \mathcal{O}(k^2 n \log (n))$; for $i \in \{1,2,3\}$, we set $\ell_i = 100 \ell_{i-1}$; and $\ell_4 = 100 |\mathcal{C}_k| \ell_3 = \mathcal{O}(k^2 n^{k+1} \log (n)) $. The RNA sequence $S_G$ is then defined as following: $$S_G = 0^{\ell_4} \left[{0'}^{\ell_3} \underset{{t \in \mathcal{C}_k}}{\bigcirc} \left( \textrm{CG}_\alpha(t){0'}^{\ell_3} \right) \right] 0^{\ell_4} \left[{0'}^{\ell_3} \underset{{t \in \mathcal{C}_k}}{\bigcirc} \left( \textrm{CG}_\beta(t) {0'}^{\ell_3} \right)\right] 0^{\ell_4} \left[{0'}^{\ell_3} \underset{{t \in \mathcal{C}_k}}{\bigcirc} \left( \textrm{CG}_\gamma(t) {0'}^{\ell_3} \right)\right],$$ where $$\begin{aligned} \textrm{CG}_\alpha(t) &= {1'}^{2 \ell_2} p({\textrm{CLG}(t)}^R) {0'}^{\ell_1} {1}^{\ell_2} {0}^{\ell_1} \textrm{CNG}(t) {1}^{\ell_2},\\ \textrm{CG}_\beta(t) &= {1'}^{\ell_2} p({\textrm{CLG}(t)}^R) {0'}^{\ell_1} {1'}^{2\ell_2} {0'}^{\ell_1} p(\textrm{CNG}(t)) {1'}^{\ell_2},\\ \textrm{CG}_\gamma(t) &= {1}^{\ell_2} {\textrm{CLG}(t)}^R {0}^{\ell_1} {1}^{\ell_2} {0}^{\ell_1} \textrm{CNG}(t) {1}^{2\ell_2}.\end{aligned}$$ For any $t \in \mathcal{C}_k$, $x \in \{\alpha, \beta, \gamma\}$, the string $\textrm{CG}_x(t)$ is called a [*clique gadget*]{}. 
Note that $\textrm{CG}_\alpha(t) \in {(\Sigma \cup \Sigma')}^\ast$, $\textrm{CG}_\beta(t) \in {\Sigma'}^\ast$, and $\textrm{CG}_\gamma(t) \in {\Sigma}^\ast$. It is obvious that $|S_G| = \mathcal{O}(|\mathcal{C}_k| \ell_0) = \mathcal{O}(k^2 n^{k+1} \log (n) )$. Before proceeding further, we explain some intuitions behind the definition of $S_G$ and give a simple lower bound on $\text{RNA}(S_G)$ by constructing an RNA folding as following: - The pairings between letters in some ${0'}^{\ell_3}$ and some ${0}^{\ell_4}$ sometimes make a clique gadget unable to participate in the RNA folding with other clique gadgets. In this sense, a clique gadget is said to be “blocked” if the letters within the clique gadget only pair up with other letters within the same clique gadget or some $0$ in a ${0}^{\ell_4}$. Let’s try linking all the $0'$ in all ${0'}^{\ell_3}$ to some $0$ in some ${0}^{\ell_4}$ in such a way that all clique gadgets are blocked except $\textrm{CG}_\alpha(t_\alpha)$, $\textrm{CG}_\beta(t_\beta)$, and $\textrm{CG}_\gamma(t_\gamma)$. This gives us $3(|\mathcal{C}_k|+1) \ell_3$ amount of pairs. See Fig. \[fig-1\]. - For a clique gadget that is “blocked”, our design of $S_G$ ensures that the number of pairs involving letters in the clique gadget (in certain optimal RNA foldings) is irrelevant to its corresponding $k$-clique (we will prove it later): - For a blocked $\textrm{CG}_\alpha(t)$, since $\ell_2$ is significantly larger than $\ell_1, \ell_0$, an optimal way to pair up the letters is to match as many $\{1', 1\}$ as possible. This gives us $\ell_2 + \min(\ell_{\textrm{CLG}, 1}, \ell_{\textrm{CNG}, 1})$ pairs. - For a blocked $\textrm{CG}_\beta(t)$, since we do not have any 1 here, the best we can do is to match all $0'$ to some ${0}^{\ell_4}$. This gives us $2 \ell_1 + \ell_{\textrm{CLG}, 0} + \ell_{\textrm{CNG}, 0}$ pairs. - For a blocked $\textrm{CG}_\gamma(t)$, no matching can be made. 
Therefore, the total amount of pairs involving blocked clique gadgets is $(|\mathcal{C}_k|-1)( 2 \ell_1 + 2 \ell_2 + \min (\ell_{\textrm{CLG}, 1}, \ell_{\textrm{CNG}, 1}) + \ell_{\textrm{CLG}, 0} + \ell_{\textrm{CNG}, 0} )$. See Fig. \[fig-2\] for an illustration. - For the three clique gadgets that are not blocked, we will later see that (in certain optimal RNA foldings) $\textrm{CG}_\alpha(t_\alpha), \textrm{CG}_\beta(t_\beta), \textrm{CG}_\gamma(t_\gamma)$ correspond to a $3k$-clique if the graph has one. It is a simple exercise to construct an RNA folding for $\textrm{CG}_\alpha(t_\alpha) \circ \textrm{CG}_\beta(t_\beta) \circ \textrm{CG}_\gamma(t_\gamma)$ that uses up all the ${1'}^{2 \ell_2}, {1}^{2 \ell_2}, {1'}^{\ell_2}, {1}^{\ell_2}, {0'}^{ \ell_1}, {0}^{\ell_1}$ and has cardinality $6 \ell_2 + 3 \ell_1 + \frac{1}{2} \left( \ell_0 - \delta_{\text{LCS}} (\textrm{CLG}(t_\alpha), \textrm{CNG}(t_\beta)) \right) + \frac{1}{2} \left( \ell_0 - \delta_{\text{LCS}} (\textrm{CLG}(t_\alpha), \textrm{CNG}(t_\gamma)) \right) + \frac{1}{2} \left( \ell_0 - \delta_{\text{LCS}} (\textrm{CLG}(t_\beta), \textrm{CNG}(t_\gamma)) \right)$. Recall that $\frac{1}{2} ( \ell_0 - \delta_{\text{LCS}} ($ $\textrm{CLG}(t_x), \textrm{CNG}(t_y)) )$ is the length of the LCS between $\textrm{CLG}(t_x)$ and $\textrm{CNG}(t_y)$. See Fig. \[fig-3\] for an illustration. ![The three selected clique gadgets and the matchings between ${0'}^{\ell_3}$ and ${0}^{\ell_4}$. []{data-label="fig-1"}](fig-1){width="100.00000%"} ![The matchings between a blocked clique gadget and ${0}^{\ell_4}$. []{data-label="fig-2"}](fig-2){width="100.00000%"} ![The matchings within the three selected clique gadgets. 
[]{data-label="fig-3"}](fig-3){width="100.00000%"} In light of the above discussion, we define: - $m_1 = 3(|\mathcal{C}_k| +1) \ell_3 + (|\mathcal{C}_k|-1) ( 2 \ell_1 + 2 \ell_2 + \min (\ell_{\textrm{CLG}, 1}, \ell_{\textrm{CNG}, 1}) + \ell_{\textrm{CLG}, 0} + \ell_{\textrm{CNG}, 0} )$, - $m_2 = 6 \ell_2 + 3 \ell_1 + \frac{3}{2} \ell_0 - \min_{t_\alpha, t_\beta, t_\gamma \in \mathcal{C}_k} \frac{1}{2}( \delta_{\text{LCS}} (\textrm{CLG}(t_\alpha), \textrm{CNG}(t_\beta)) + \delta_{\text{LCS}} (\textrm{CLG}(t_\alpha), \textrm{CNG}(t_\gamma)) + \delta_{\text{LCS}} (\textrm{CLG}(t_\beta), \textrm{CNG}(t_\gamma)) )$. The next lemma, which gives a lower bound on $\text{RNA}(S_G)$, is then implied instantly by the above discussion. \[lem-5\] $\text{RNA}(S_G) \geq m_1 + m_2$. Ultimately we will show that $\text{RNA}(S_G) = m_1 + m_2$, and clearly this offers enough information for us to decide whether $G$ has a $3k$-clique. The following lemma calculates $\text{RNA}(\cdot)$ for some sequences; these values will be useful in the subsequent discussion. \[lem-4\] The following statements are true for any $t,t' \in \mathcal{C}_k$: 1. $\text{RNA}( 0^{\ell_4} \textrm{CG}_\alpha(t) ) = 2 \ell_2 + \min (\ell_{\textrm{CLG}, 1}, \ell_{\textrm{CNG}, 1}) $ 2. $\text{RNA}( 0^{\ell_4} \textrm{CG}_\beta(t) ) = 2 \ell_1 + \ell_{\textrm{CLG}, 0} + \ell_{\textrm{CNG}, 0} $ 3. $\text{RNA}( 0^{\ell_4} \textrm{CG}_\gamma(t) ) = 0 $ 4. $\text{RNA}( 0^{\ell_4} \textrm{CG}_\alpha(t) 0^{\ell_4} \textrm{CG}_\beta(t') ) \leq 3.1 \ell_1 + 2 \ell_2$ 5. $\text{RNA}( 0^{\ell_4} \textrm{CG}_\alpha(t) 0^{\ell_4} \textrm{CG}_\gamma(t') ) \leq 1.1 \ell_1 + 2 \ell_2$ 6. $\text{RNA}( 0^{\ell_4} \textrm{CG}_\beta(t) 0^{\ell_4} \textrm{CG}_\gamma(t') ) \leq 1.1 \ell_1 + 4 \ell_2 $ The value of $\text{RNA}(\cdot)$ for each of the six sequences is calculated as follows: 1. Linking as many $1$ to $1'$ as possible yields a matching of size $m = 2 \ell_2 + \min (\ell_{\textrm{CLG}, 1}, \ell_{\textrm{CNG}, 1})$. 
To see that it is optimal, it suffices to show that neither $(0',0)$ nor $(0,0')$ can appear in an optimal RNA folding: - If the RNA folding contains $(0,0')$, then no $1'$ can participate in the RNA folding. As the total number of $0'$ is $\ell_1 + \ell_{\textrm{CLG}, 0}$, the size of the RNA folding is at most $\ell_1 + \ell_{\textrm{CLG}, 0} < m$. - If the RNA folding contains $(0',0)$, then at most $\ell_{\textrm{CLG}, 1}$ letters within the middle ${1}^{\ell_2}$ (the one between ${0'}^{\ell_1}$ and ${0}^{\ell_1}$) can participate in the RNA folding. This implies that the number of $(1',1)$ pairs in the RNA folding is at most $\ell_{\textrm{CLG}, 1} + \ell_2$. Hence the size of the RNA folding can be upper bounded by $(\ell_1 + \ell_{\textrm{CLG}, 0}) + (\ell_{\textrm{CLG}, 1} + \ell_2) < m$. 2. Since there is no $1$, the equation follows from the fact that there are $ 2 \ell_1 + \ell_{\textrm{CLG}, 0} + \ell_{\textrm{CNG}, 0}$ occurrences of $0'$, all of which can be matched to some $0$ without crossing. 3. No matching can be made since there is no $0'$ or $1'$. 4. The value of $\text{RNA}(\cdot)$ can be upper bounded by the number of $1$ and $0'$. This is $(2 \ell_2 + \ell_{\textrm{CNG}, 1}) + (3 \ell_1 + 2\ell_{\textrm{CLG}, 0} + \ell_{\textrm{CNG}, 0}) \leq 3.1 \ell_1 + 2 \ell_2$. 5. The value of $\text{RNA}(\cdot)$ can be upper bounded by the number of $1'$ and $0'$. This is $(2 \ell_2 + \ell_{\textrm{CLG}, 1}) + (\ell_1 + \ell_{\textrm{CLG}, 0} ) \leq 1.1 \ell_1 + 2 \ell_2$. 6. We define $S = 0^{\ell_4} \circ \left( {1'}^{\ell_2} {0'}^{\ell_1} {1'}^{2\ell_2} {0'}^{\ell_1}{1'}^{\ell_2} \right) \circ 0^{\ell_4} \circ \left( {1}^{\ell_2} {0}^{\ell_1} {1}^{\ell_2} {0}^{\ell_1} {1}^{2\ell_2} \right)$, which is the result of removing the clique node gadgets and the clique list gadgets in $0^{\ell_4} \textrm{CG}_\beta(t) 0^{\ell_4} \textrm{CG}_\gamma(t')$. 
It is clear that $\text{RNA}( 0^{\ell_4} \textrm{CG}_\beta(t) 0^{\ell_4} \textrm{CG}_\gamma(t') ) \leq 0.1 \ell_1 + \text{RNA}(S)$, as the total length of the removed substrings can be upper bounded by $0.1 \ell_1$. Therefore, it suffices to show that $\text{RNA}(S) \leq \ell_1 + 4 \ell_2$. Let $A$ be any RNA folding of $S$: - Case: there are some $(0,0')\in A$ where the $0'$ comes from the first ${0'}^{\ell_1}$ in $S$. Clearly, the first substring ${1'}^{\ell_2}$ cannot participate in any pairing. Therefore, $|A| \leq |{0'}^{\ell_1} {1'}^{2\ell_2} {0'}^{\ell_1}{1'}^{\ell_2}| = 2 \ell_1 + 3 \ell_2 < \ell_1 + 4 \ell_2$. - Case: there are some $(0',0)\in A$ where the $0'$ comes from the first ${0'}^{\ell_1}$ in $S$. In this situation, at most half of the ${1}^{2\ell_2}$ can participate in the RNA folding, since only the first ${1'}^{\ell_2}$ in $S$ is reachable from ${1}^{2\ell_2}$ without crossing a pair $(0',0)$. Therefore, $|A|$ is at most the total number of $0'$ and $1$ in $S$ minus $\ell_2$, i.e. $|A| \leq 2 \ell_1 + 3 \ell_2 < \ell_1 + 4 \ell_2$ . - Case: the first ${0'}^{\ell_1}$ in $S$ does not participate in the RNA folding. Then, $|A| \leq |{1'}^{\ell_2} {1'}^{2\ell_2} {0'}^{\ell_1}{1'}^{\ell_2}| = \ell_1 + 4 \ell_2$. Note that (1), (2), (3) in Lemma \[lem-4\] imply that the RNA folding for blocked clique gadgets described in Fig. \[fig-2\] is optimal, and the optimal number of pairings is independent of the corresponding $k$-clique. Optimal RNA foldings of $S_G$ \[ss-3\] -------------------------------------- In the previous subsection, we described an RNA folding of $S_G$ containing $m_1 + m_2$ pairs. The two key properties of this RNA folding are: [*Property 1.*]{} All $0'$ in all ${0'}^{\ell_3}$ are paired up with some $0$ in some $0^{\ell_4}$. 
[*Property 2.*]{} All clique gadgets are “blocked” by the pairings between ${0'}^{\ell_3}$ and $0^{\ell_4}$, except the three clique gadgets: $\textrm{CG}_\alpha(t_\alpha), \textrm{CG}_\beta(t_\beta), \textrm{CG}_\gamma(t_\gamma)$, for some $t_\alpha,t_\beta,t_\gamma \in \mathcal{C}_k$. The goal of this subsection is to show that there is an optimal RNA folding having the above two properties, which facilitates the calculation of $\text{RNA}(S_G)$ in the next subsection. \[lem-6\] For any RNA folding $A$ of $S_G$, if there exists a pair linking a $0'$ in a specific ${0'}^{\ell_3}$ (denoted as $S_1$) to a $0$ in a specific $0^{\ell_4}$ (denoted as $S_2$), then there exists another RNA folding $A'$ with $|A'| \geq |A|$ where all letters in $S_1$ are linked to some letters in $S_2$. This follows immediately from the fact that $\ell_4$ is greater than the total number of $0'$ in $S_G$, which makes it possible to rematch all the letters in $S_1$ to letters in $S_2$. Lemma \[lem-7\] ensures that there is an optimal RNA folding having Property 1: \[lem-7\] There is an optimal RNA folding $A$ of $S_G$ having Property 1. Choose any RNA folding $A$ of $S_G$ with $|A| = \text{RNA}(S_G)$. In view of Lemma \[lem-6\], we can assume that for each ${0'}^{\ell_3}$ in $S_G$, either all its letters are matched to some $0$ in the same $0^{\ell_4}$ or none of its letters is matched to any $0$ in any $0^{\ell_4}$. Let $z$ denote the number of ${0'}^{\ell_3}$ such that none of its letters is matched to any $0$ in any $0^{\ell_4}$. For some $t \in \mathcal{C}_k$, and for some $x \in \{\alpha, \beta, \gamma\}$, $\textrm{CG}_x(t)$ is said to be “trapped” in $A$ if all letters within $\textrm{CG}_x(t)$ are either unmatched, matched to letters within $\textrm{CG}_x(t)$, or matched to letters in some $0^{\ell_4}$. We note that a sufficient condition for $\textrm{CG}_x(t)$ to be trapped is that the letters in its two neighboring ${0'}^{\ell_3}$ are all matched to the same $0^{\ell_4}$. 
The cases in which the condition is violated are enumerated as follows: 1. The two neighboring ${0'}^{\ell_3}$ of $\textrm{CG}_x(t)$ are matched to different $0^{\ell_4}$, and this occurs at most $2|\{\alpha, \beta, \gamma\}| = 6$ times (i.e. at most two times per $x \in \{\alpha, \beta, \gamma\}$). 2. A neighboring ${0'}^{\ell_3}$ of $\textrm{CG}_x(t)$ is not matched to any $0^{\ell_4}$, and this occurs at most $2z$ times. Therefore, the number of clique gadgets that are not trapped in $A$ is at most $6 + 2z$. Using this information, we can derive an upper bound on $|A|$: $$\begin{aligned} |A| & \leq (3(|\mathcal{C}_k|+ 1 ) - z) \ell_3 & (\text{matched }{0'}^{\ell_3}) \\ &\hspace{0.4cm}+ |\mathcal{C}_k| \bigg( \max_{t \in \mathcal{C}_k } \text{RNA}( 0^{\ell_4} \textrm{CG}_\alpha(t) ) + \max_{t\in \mathcal{C}_k} \text{RNA}( 0^{\ell_4} \textrm{CG}_\beta(t) ) & (\text{trapped clique gadgets}) \\ & \hspace{1.4cm} + \max_{t\in \mathcal{C}_k} \text{RNA}( 0^{\ell_4} \textrm{CG}_\gamma(t) ) \bigg)\\ &\hspace{0.4cm} + (6 + 2z) \max_{t\in \mathcal{C}_k , x \in \{\alpha, \beta, \gamma\}} |\textrm{CG}_x(t)|. & (\text{remaining clique gadgets})\end{aligned}$$ In view of the calculation in Lemma \[lem-4\], $|A|$ is at most $$m_1 - z \ell_3 + \left( 2 \ell_2 + \min (\ell_{\textrm{CLG}, 1}, \ell_{\textrm{CNG}, 1}) + 2 \ell_1 + \ell_{\textrm{CLG}, 0} + \ell_{\textrm{CNG}, 0} \right) + (6 +2z) \max_{t, x}|\textrm{CG}_x(t)|.$$ Since $ 2 \ell_2 + \min (\ell_{\textrm{CLG}, 1}, \ell_{\textrm{CNG}, 1}) + 2 \ell_1 + \ell_{\textrm{CLG}, 0} + \ell_{\textrm{CNG}, 0} < 0.1\ell_3$, and since the length of a clique gadget is less than $0.1\ell_3$, we have: $$|A| < m_1 - 0.8 z \ell_3 + 0.7 \ell_3.$$ Therefore, $|A| < m_1 < \text{RNA}(S_G)$ if $z > 0$. Hence we must have $z = 0$, i.e. all $0'$ in all ${0'}^{\ell_3}$ are paired up with some $0$ in some $0^{\ell_4}$. 
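The quantities $\text{RNA}(\cdot)$ used in these counting arguments can be sanity-checked on small strings with a standard Nussinov-style interval dynamic program. The sketch below is our own illustration, not part of the reduction; it assumes an encoding of $0, 1, 0', 1'$ as the characters `'0'`, `'1'`, `'O'`, `'I'`, and it allows a pair to be either $(x, x')$ or $(x', x)$, as in the definition of RNA folding used here:

```python
from functools import lru_cache

# Encoding (ours, for illustration): 0 -> '0', 1 -> '1', 0' -> 'O', 1' -> 'I'.
# A pair may be oriented either way, i.e. (x, x') or (x', x).
COMPLEMENT = {('0', 'O'), ('O', '0'), ('1', 'I'), ('I', '1')}

def rna(s: str) -> int:
    """Maximum number of non-crossing complementary pairs in s (Nussinov-style DP)."""
    @lru_cache(maxsize=None)
    def best(i: int, j: int) -> int:
        # Maximum folding size of the substring s[i..j] (inclusive).
        if i >= j:
            return 0
        # Option 1: leave s[j] unpaired.
        res = best(i, j - 1)
        # Option 2: pair s[j] with some s[k]; the pair splits the interval
        # into two independent parts, which enforces non-crossing.
        for k in range(i, j):
            if (s[k], s[j]) in COMPLEMENT:
                res = max(res, best(i, k - 1) + 1 + best(k + 1, j - 1))
        return res

    return best(0, len(s) - 1) if s else 0
```

For instance, `rna("0I1O")` returns $2$: the pair $(0, 0')$ encloses the nested pair $(1', 1)$, so both pairs coexist without crossing.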
To proceed further, some terminology is needed to formally define Property 2: Let $A$ be an RNA folding of a sequence where $S_1, S_2$ are two substrings (subsequences of consecutive elements). We write $S_1 \overset{A}\longleftrightarrow S_2$ iff - there exists $\{x_1, x_2\} \in A$ with $x_1 \in S_1, x_2 \in S_2$. - $S_1, S_2$ are disjoint substrings. For example, “$\textrm{CG}_x(t_{1})$ is blocked in $A$” is equivalent to “there exist no $y \in \{\alpha, \beta, \gamma\}$, $t_2 \in \mathcal{C}_k$ such that $\textrm{CG}_x(t_{1}) \overset{A}\longleftrightarrow \textrm{CG}_y(t_{2})$”. $\mathcal{M}_\alpha$ is defined as the set of RNA foldings of $S_G$ such that $A \in \mathcal{M}_\alpha$ iff - $A$ has Property 1, and - there exist $t_{\alpha,1}, t_{\alpha,2}, t_{\beta}, t_{\gamma} \in \mathcal{C}_k$ such that $t_{\alpha,1} \neq t_{\alpha,2}$, and for any $t_1, t_2 \in \mathcal{C}_k$, $\{u_1, u_2\} \subseteq \{\alpha, \beta, \gamma\}$, $\textrm{CG}_{u_1}(t_1) \overset{A}\longleftrightarrow \textrm{CG}_{u_2}(t_2)$ implies that $\{(u_1, t_1), (u_2,t_2)\} \in \{ \{(\alpha,t_{\alpha,1}),(\beta, t_\beta)\},\{(\alpha,t_{\alpha,2}),(\gamma, t_\gamma)\} \}$. $\mathcal{M}_\beta$ and $\mathcal{M}_\gamma$ are defined analogously. $\mathcal{M}_{\alpha, \beta, \gamma}$ is defined as the set of RNA foldings of $S_G$ such that $A \in \mathcal{M}_{\alpha, \beta, \gamma}$ iff - $A$ has Property 1, and - there exist $t_{\alpha}, t_{\beta}, t_{\gamma} \in \mathcal{C}_k$ such that for any $t_1, t_2 \in \mathcal{C}_k$, $\{u_1, u_2\} \subseteq \{\alpha, \beta, \gamma\}$, $\textrm{CG}_{u_1}(t_1) \overset{A}\longleftrightarrow \textrm{CG}_{u_2}(t_2)$ implies that $\{(u_1, t_1), (u_2,t_2)\} \subseteq \{(\alpha,t_{\alpha}),(\beta, t_\beta), (\gamma, t_\gamma)\}$. Using the above notions, it is clear that $A \in \mathcal{M}_{\alpha, \beta, \gamma}$ iff $A$ has both Property 1 and Property 2. 
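The conditions appearing in these definitions (pairs are non-crossing and complementary; two substrings are linked when some pair has one endpoint in each) are easy to check mechanically, which is convenient for experimenting with small examples. The following sketch is our own illustration, again encoding $0, 1, 0', 1'$ as `'0'`, `'1'`, `'O'`, `'I'` and representing a folding $A$ as a set of index pairs:

```python
# Encoding (ours, for illustration): 0 -> '0', 1 -> '1', 0' -> 'O', 1' -> 'I'.
COMPLEMENT = {('0', 'O'), ('O', '0'), ('1', 'I'), ('I', '1')}

def is_valid_folding(s: str, pairs: set) -> bool:
    """Check that `pairs` is a set of non-crossing complementary index pairs for s."""
    pairs = [(min(i, j), max(i, j)) for i, j in pairs]
    endpoints = [e for p in pairs for e in p]
    if len(endpoints) != len(set(endpoints)):   # each position in at most one pair
        return False
    if any((s[i], s[j]) not in COMPLEMENT for i, j in pairs):
        return False                            # paired letters must be complementary
    return not any(i < k < j < l                # no crossing configuration
                   for i, j in pairs for k, l in pairs)

def linked(pairs: set, range1: range, range2: range) -> bool:
    """The relation S1 <-A-> S2: some pair has one endpoint in each disjoint range."""
    return any((i in range1 and j in range2) or (i in range2 and j in range1)
               for i, j in pairs)
```

With these helpers, “$\textrm{CG}_x(t_1)$ is blocked in $A$” amounts to `linked` returning `False` for the index range of $\textrm{CG}_x(t_1)$ against the index range of every other clique gadget.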
In the remainder of the subsection, we will prove that there exists an optimal RNA folding of $S_G$ that belongs to $\mathcal{M}_{\alpha, \beta, \gamma}$. To ease the notation, for each $x \in \{\alpha, \beta, \gamma\}$, we call $\textrm{CG}_x(t)$ an “$x$ clique gadget”, for all $t \in \mathcal{C}_k$; we write “$C_1$ and $C_2$ are [*linked*]{} (in $A$)” to denote $C_1 \overset{A}\longleftrightarrow C_2$. \[lem-8\] Let $A$ be an optimal RNA folding of $S_G$ having Property 1. For any $x \in \{\alpha, \beta, \gamma\}$, there do not exist two $x$ clique gadgets $C_1, C_2$ such that $C_1 \overset{A}\longleftrightarrow C_2$. There is a substring ${0'}^{\ell_3}$ located between $C_1$ and $C_2$. Existence of a pair in $A$ linking a letter in $C_1$ and a letter in $C_2$ makes it impossible for any letter in the ${0'}^{\ell_3}$ to be matched to any letter in any $0^{\ell_4}$, a contradiction. \[lem-9\] Let $A$ be an optimal RNA folding of $S_G$ having Property 1. For any $\{x,y\} \in \{ \{\alpha, \beta\}$, $\{\alpha, \gamma\}, \{\beta, \gamma\}\}$, there do not exist two distinct $x$ clique gadgets $C_1, C_2$ and two (not necessarily distinct) $y$ clique gadgets $C_3, C_4$ such that $C_1 \overset{A}\longleftrightarrow C_3$ and $C_2 \overset{A}\longleftrightarrow C_4$. Clearly there must be a substring ${0'}^{\ell_3}$ located between $C_1$ and $C_2$. However, since $C_1 \overset{A}\longleftrightarrow C_3$ and $C_2 \overset{A}\longleftrightarrow C_4$, letters in the substring ${0'}^{\ell_3}$ can only be matched to letters in $C_1, C_2, C_3, C_4$, letters between $C_1, C_2$, and letters between $C_3, C_4$. This contradicts Property 1. \[lem-10\] Let $A$ be an optimal RNA folding of $S_G$ having Property 1. For any $x \in \{\alpha, \beta, \gamma\}$, suppose that there exist two distinct $x$ clique gadgets $C_1, C_2$ such that $C_1 \overset{A}\longleftrightarrow C_3$ and $C_2 \overset{A}\longleftrightarrow C_4$ for some clique gadgets $C_3, C_4$. 
Then no other pair of clique gadgets is linked in $A$. Let $y, z \in \{\alpha, \beta, \gamma\}$ such that $C_3$ is a $y$ clique gadget, and $C_4$ is a $z$ clique gadget. By Lemma \[lem-8\] and Lemma \[lem-9\], $x, y, z$ must be distinct. Suppose that there exist two clique gadgets $C_5, C_6$ that are linked in $A$ such that $\{C_5, C_6\} \not\in \{\{C_1, C_3\},\{C_2, C_4\}\}$. We show that this leads to a contradiction. First of all, none of $C_5, C_6$ can be an $x$ clique gadget. Suppose that $C_5$ is an $x$ clique gadget. Then by Lemma \[lem-8\], $C_6$ is either a $y$ clique gadget or a $z$ clique gadget. In any case, Lemma \[lem-9\] is violated. Therefore, we can (without loss of generality) assume that $C_5$ is a $y$ clique gadget, and $C_6$ is a $z$ clique gadget. Since $C_1, C_2$ are distinct, there must be a substring ${0'}^{\ell_3}$ located between $C_1$ and $C_2$. Since $C_1$ is linked to a $y$ gadget, and since $C_2$ is linked to a $z$ gadget, letters in the substring ${0'}^{\ell_3}$ can only be paired up with letters in the substring $0^{\ell_4}$ bordering both ${0'}^{\ell_3} \underset{{t \in \mathcal{C}_k}}{\bigcirc} \left( \textrm{CG}_y(t) {0'}^{\ell_3} \right)$ and ${0'}^{\ell_3} \underset{{t \in \mathcal{C}_k}}{\bigcirc} \left( \textrm{CG}_z(t) {0'}^{\ell_3} \right)$ (imagining $S_G$ as a circular string). However, the existence of a pair linking a letter in $C_5$ (a $y$ clique gadget) and a letter in $C_6$ (a $z$ clique gadget) implies that such $0^{\ell_4}$ cannot be reached from the ${0'}^{\ell_3}$ without a crossing, a contradiction. The following lemma directly follows from Lemma \[lem-7\] and Lemma \[lem-10\]. \[lem-11\] There exists an optimal RNA folding of $S_G$ that belongs to $\mathcal{M}_\alpha \cup \mathcal{M}_\beta \cup \mathcal{M}_\gamma \cup \mathcal{M}_{\alpha, \beta, \gamma}$. By Lemma \[lem-7\], we can restrict our consideration to optimal RNA foldings having Property 1. 
Let $A$ be any such optimal RNA folding: [**Case 1:**]{} For each $x \in \{\alpha, \beta, \gamma\}$, there is at most one $x$ clique gadget that is linked to other clique gadgets. Then $A \in \mathcal{M}_{\alpha, \beta, \gamma}$. [**Case 2:**]{} For some $x \in \{\alpha, \beta, \gamma\}$, there are two distinct $x$ clique gadgets that are linked to other clique gadgets. By Lemma \[lem-10\], $ A \in \mathcal{M}_x$. We are now in a position to prove the main lemma of this subsection: \[lem-12\] There exists an optimal RNA folding of $S_G$ that belongs to $\mathcal{M}_{\alpha, \beta, \gamma}$. In view of Lemma \[lem-11\], it suffices to show that for any $A \in \mathcal{M}_\alpha \cup \mathcal{M}_\beta \cup \mathcal{M}_\gamma$, we have $|A| < \text{RNA}(S_G)$. Let $A \in \mathcal{M}_x$, and let $t_{x,1}, t_{x,2}, t_{y}, t_{z} \in \mathcal{C}_k$ with $\{y,z\} = \{\alpha, \beta, \gamma\} \setminus \{x\}$ be as in the definition of $\mathcal{M}_x$. Each pair in $A$ falls into one of the following categories: - The ones linking a $0'$ in some ${0'}^{\ell_3}$ to a $0$ in some $0^{\ell_4}$. There are exactly $3(|\mathcal{C}_k|+1) \ell_3$ such pairs. - The ones involving a letter in some $\textrm{CG}_{u}(t)$, where $(u,t) \not\in \{(x,t_{x,1}),(x,t_{x,2}), (y, t_y), (z,t_z)\}$. Since any letter in such a $\textrm{CG}_{u}(t)$ can only be matched to letters within $\textrm{CG}_{u}(t)$ or some $0^{\ell_4}$, the number of such pairs can be upper bounded by $(|\mathcal{C}_k| - 2) \max\limits_{t \in \mathcal{C}_k}\text{RNA}\left(0^{\ell_4} \textrm{CG}_{x}(t)\right) + (|\mathcal{C}_k| - 1) \max\limits_{t \in \mathcal{C}_k} \text{RNA}\left(0^{\ell_4} \textrm{CG}_{y}(t)\right) + (|\mathcal{C}_k| - 1) \max\limits_{t \in \mathcal{C}_k} \text{RNA}\left(0^{\ell_4} \textrm{CG}_{z}(t)\right)$. - The ones involving a letter in some $\textrm{CG}_{u}(t)$, where $(u,t) \in \{(x,t_{x,1}),(x,t_{x,2}), (y, t_y), (z,t_z)\}$. 
The number of such pairs can be upper bounded by $\max\limits_{t, t' \in \mathcal{C}_k} \text{RNA}\left(0^{\ell_4} \textrm{CG}_{x}(t) 0^{\ell_4} \textrm{CG}_{y}(t')\right) + \max\limits_{t, t' \in \mathcal{C}_k} \text{RNA}\left(0^{\ell_4} \textrm{CG}_{x}(t) 0^{\ell_4} \textrm{CG}_{z}(t')\right)$. Therefore, using Lemma \[lem-4\], we can upper bound $|A|$ as follows: - When $x = \alpha$, $|A| \leq m_1 + 2 \ell_2 + 4.2 \ell_1 - \min (\ell_{\textrm{CLG}, 1}, \ell_{\textrm{CNG}, 1})$. - When $x = \beta$, $|A| \leq m_1 + 6 \ell_2 + 2.2 \ell_1 - \ell_{\textrm{CLG}, 0} - \ell_{\textrm{CNG}, 0}$. - When $x = \gamma$, $|A| \leq m_1 + 6 \ell_2 + 2.2 \ell_1 $. By Lemma \[lem-5\], we always have $|A| < m_1 + m_2 \leq \text{RNA}(S_G)$ (recall that $m_2 \geq 6 \ell_2 + 3 \ell_1$). Calculating $\text{RNA}(S_G)$ \[ss-4\] -------------------------------------- In this subsection, we will prove that $\text{RNA}(S_G) = m_1 + m_2$ and finish the proof of Theorem \[thm-2\]. In view of Lemma \[lem-12\], when calculating $\text{RNA}(S_G)$, we can restrict our attention to RNA foldings of $S_G$ in $\mathcal{M}_{\alpha, \beta, \gamma}$. Based on the structural property of RNA foldings in $\mathcal{M}_{\alpha, \beta, \gamma}$, we first reduce the calculation of $\text{RNA}(S_G)$ to the calculation of optimal RNA foldings of much simpler sequences. \[lem-13\] $\text{RNA}(S_G) \leq m_1 + \max\limits_{t_{\alpha}, t_{\beta}, t_{\gamma} \in \mathcal{C}_k} \text{RNA}(0^{\ell_4} \textrm{CG}_\alpha(t_\alpha) 0^{\ell_4} \textrm{CG}_\beta(t_\beta) 0^{\ell_4} \textrm{CG}_\gamma(t_\gamma))$. In view of Lemma \[lem-12\], there is an optimal RNA folding of $S_G$ in $\mathcal{M}_{\alpha, \beta, \gamma}$. For any $A \in \mathcal{M}_{\alpha, \beta, \gamma}$, let $t_{\alpha}, t_{\beta}, t_{\gamma} \in \mathcal{C}_k$ be the ones in the definition of $\mathcal{M}_{\alpha, \beta, \gamma}$. 
Then, each pair in $A$ falls into one of the following categories: - The ones linking a $0'$ in some ${0'}^{\ell_3}$ to a $0$ in some $0^{\ell_4}$. There are exactly $3(|\mathcal{C}_k|+1) \ell_3$ such pairs. - The ones involving a letter in some $\textrm{CG}_{u}(t)$, where $(u,t) \not\in \{(\alpha,t_{\alpha}),(\beta,t_{\beta}), (\gamma, t_\gamma)\}$. Since any letter in such a $\textrm{CG}_{u}(t)$ can only be matched to letters within $\textrm{CG}_{u}(t)$ or some $0^{\ell_4}$, the number of such pairs can be upper bounded by $(|\mathcal{C}_k| - 1) \max\limits_{t \in \mathcal{C}_k}\text{RNA}(0^{\ell_4} \textrm{CG}_{\alpha}(t)) + (|\mathcal{C}_k| - 1) \max\limits_{t \in \mathcal{C}_k} \text{RNA}(0^{\ell_4} \textrm{CG}_{\beta}(t)) + (|\mathcal{C}_k| - 1) \max\limits_{t \in \mathcal{C}_k} \text{RNA}(0^{\ell_4} \textrm{CG}_{\gamma}(t))$. - The ones involving a letter in some $\textrm{CG}_{u}(t)$, where $(u,t) \in \{(\alpha,t_{\alpha}),(\beta,t_{\beta}), (\gamma, t_\gamma)\}$. The number of such pairs can be upper bounded by $\text{RNA}(0^{\ell_4} \textrm{CG}_\alpha(t_\alpha) 0^{\ell_4} \textrm{CG}_\beta(t_\beta) 0^{\ell_4} \textrm{CG}_\gamma(t_\gamma))$. In view of Lemma \[lem-4\], $|A| \leq m_1 + \text{RNA}(0^{\ell_4} \textrm{CG}_\alpha(t_\alpha) 0^{\ell_4} \textrm{CG}_\beta(t_\beta) 0^{\ell_4} \textrm{CG}_\gamma(t_\gamma))$. Hence we conclude the proof. The following auxiliary lemma is useful in the later discussion. \[lem-14\] Let $S = S_1 \circ S_2 \circ S_3 \in \{0,1,0',1'\}^{\ast}$, where $S_2$ is either $1 1'$ or $1' 1$. Then $\text{RNA}(S) = \text{RNA}(S_1 \circ S_3) + 1$. It suffices to show that there exists an optimal RNA folding of $S$ such that the $1$ and the $1'$ in $S_2$ are matched. We first choose any optimal RNA folding $A$ of $S$, and then we show that we can modify $A$ in such a way that the $1$ and the $1'$ in $S_2$ are matched without changing the number of matched pairs. - Case: the $1$ and the $1'$ in $S_2$ are already matched. We are done. 
- Case: Exactly one of the $1$ and the $1'$ in $S_2$ is matched. We first unmatch it, and then we pair up the $1$ and the $1'$. Doing so does not change the number of matched pairs. - Case: both the $1$ and the $1'$ in $S_2$ are matched to some other letters. Let the $1$ be matched with $x$, and let the $1'$ be matched with $y$. Removing these two pairs from $A$ and adding $\{x,y\}$ and $\{1,1'\}$ to $A$ does not change the number of matched pairs. For any choice of three $k$-cliques $t_{\alpha}, t_{\beta}, t_{\gamma} \in \mathcal{C}_k$, we define: $$S_{t_{\alpha}, t_{\beta}, t_{\gamma}} = {1}^{\ell_2} \circ S_{t_{\gamma}, t_{\alpha}} \circ {1}^{\ell_2} \circ S_{t_{\alpha}, t_{\beta}} \circ {1'}^{2 \ell_2} \circ S_{t_{\beta}, t_{\gamma}},$$ where $$\begin{aligned} S_{t_{\gamma}, t_{\alpha}} &= {0}^{\ell_1} \textrm{CNG}(t_\gamma) p({\textrm{CLG}(t_\alpha)}^R) {0'}^{\ell_1},\\ S_{t_{\alpha}, t_{\beta}} &= {0}^{\ell_1} \textrm{CNG}(t_\alpha) p({\textrm{CLG}(t_\beta)}^R) {0'}^{\ell_1},\\ S_{t_{\beta}, t_{\gamma}} &= {0'}^{\ell_1} p(\textrm{CNG}(t_\beta)) {\textrm{CLG}(t_\gamma)}^R {0}^{\ell_1}.\\\end{aligned}$$ $S_{t_{\alpha}, t_{\beta}, t_{\gamma}}$ is simply a cyclic shift of the concatenation of $\textrm{CG}_\alpha(t_\alpha)$, $\textrm{CG}_\beta(t_\beta)$, and $\textrm{CG}_\gamma(t_\gamma)$ after removing the sequences of $1$s and $1'$s at the beginning and the end of these clique gadgets. The next lemma (together with Lemma \[lem-13\]) reduces the calculation of $\text{RNA}(S_G)$ to the calculation of $\text{RNA}(S_{t_{\alpha}, t_{\beta}, t_{\gamma}})$. \[lem-15\] $\text{RNA}(0^{\ell_4} \textrm{CG}_\alpha(t_\alpha) 0^{\ell_4} \textrm{CG}_\beta(t_\beta) 0^{\ell_4} \textrm{CG}_\gamma(t_\gamma)) = 4 \ell_2 + \text{RNA}(S_{t_{\alpha}, t_{\beta}, t_{\gamma}})$. 
First of all, we state a few easy observations that will be applied in the proof: - By simply matching only the letters in $\textrm{CG}_\alpha(t_\alpha), \textrm{CG}_\beta(t_\beta)$, and $\textrm{CG}_\gamma(t_\gamma)$ (as described in Fig. \[fig-3\]), we can infer that $\text{RNA}(0^{\ell_4}\textrm{CG}_\alpha(t_\alpha) 0^{\ell_4} \textrm{CG}_\beta(t_\beta) 0^{\ell_4} \textrm{CG}_\gamma(t_\gamma)) \geq 6 \ell_2 + 3 \ell_1$. - The total number of $0'$ and $1$ in $0^{\ell_4} \textrm{CG}_\alpha(t_\alpha) 0^{\ell_4} \textrm{CG}_\beta(t_\beta) 0^{\ell_4} \textrm{CG}_\gamma(t_\gamma)$ is at most $ 6 \ell_2 + 3.1 \ell_1$. - The difference between the number of $1$ and $1'$ in $0^{\ell_4} \textrm{CG}_\alpha(t_\alpha) 0^{\ell_4} \textrm{CG}_\beta(t_\beta) 0^{\ell_4} \textrm{CG}_\gamma(t_\gamma)$ is at most $0.1 \ell_1$. We claim that in any optimal RNA folding $A$ of $0^{\ell_4} \textrm{CG}_\alpha(t_\alpha) 0^{\ell_4} \textrm{CG}_\beta(t_\beta) 0^{\ell_4} \textrm{CG}_\gamma(t_\gamma)$, no letter within any $0^{\ell_4}$ is matched: - Claim: there is no $0'$ within $\textrm{CG}_\beta(t_\beta)$ matched to any $0$ in the two $0^{\ell_4}$ preceding and following $\textrm{CG}_\beta(t_\beta)$. Recall that $\textrm{CG}_\beta(t_\beta) = {1'}^{\ell_2} p({\textrm{CLG}(t_\beta)}^R) {0'}^{\ell_1} {1'}^{2\ell_2} {0'}^{\ell_1} p(\textrm{CNG}(t_\beta)) {1'}^{\ell_2}$. If there is such a pair, then at least $\ell_2$ amount of $1'$ cannot participate in the RNA folding. Therefore, $|A| \leq ( 6 \ell_2 + 3.1 \ell_1) - (\ell_2 - 0.1\ell_1) < \text{RNA}(0^{\ell_4} \textrm{CG}_\alpha(t_\alpha) 0^{\ell_4} \textrm{CG}_\beta(t_\beta) 0^{\ell_4} \textrm{CG}_\gamma(t_\gamma))$. - Claim: there is no $0'$ within $\textrm{CG}_\beta(t_\beta)$ matched to any $0$ in the $0^{\ell_4}$ at the beginning of the sequence. Suppose that there is such a pair. 
Then the $3\ell_2$ amount of $1'$ within ${1'}^{2\ell_2}$ in $\textrm{CG}_\alpha(t_\alpha)$ and within the first ${1'}^{\ell_2}$ in $\textrm{CG}_\beta(t_\beta)$ can only be matched to letters in $\textrm{CG}_\alpha(t_\alpha)$. However, the amount of $1$ in $\textrm{CG}_\alpha(t_\alpha)$ is at most $2.1 \ell_1$, so at least $0.9 \ell_2$ amount of $1'$ are not matched. Therefore, $|A| \leq ( 6 \ell_2 + 3.1 \ell_1) - 0.9 \ell_2 < \text{RNA}(0^{\ell_4} \textrm{CG}_\alpha(t_\alpha) 0^{\ell_4} \textrm{CG}_\beta(t_\beta) 0^{\ell_4} \textrm{CG}_\gamma(t_\gamma))$. - Claim: there is no $0'$ within $\textrm{CG}_\alpha(t_\alpha)$ matched to any $0$ in any $0^{\ell_4}$. Suppose that there is such a pair. We can show that at least $\ell_2$ amount of $1'$ cannot participate in the RNA folding, so $|A| \leq ( 6 \ell_2 + 3.1 \ell_1) - \ell_2 < \text{RNA}(0^{\ell_4} \textrm{CG}_\alpha(t_\alpha) 0^{\ell_4} \textrm{CG}_\beta(t_\beta) 0^{\ell_4} \textrm{CG}_\gamma(t_\gamma))$. - Case: a $0'$ within $\textrm{CG}_\alpha(t_\alpha)$ is matched to a $0$ in the first $0^{\ell_4}$. Then the ${1'}^{2 \ell_2}$ in the beginning of $\textrm{CG}_\alpha(t_\alpha)$ cannot participate in the RNA folding. - Case: a $0'$ within $\textrm{CG}_\alpha(t_\alpha)$ is matched to a $0$ in the second $0^{\ell_4}$. Then letters in the two ${1}^{\ell_2}$ in $\textrm{CG}_\alpha(t_\alpha)$ can only be matched to letters within $p(\text{CLG}(t_\alpha)^R)$. Hence at least $2 \ell_2 - 0.1 \ell_1$ amount of $1$ are unmatched. Since the difference between the number of $1$ and $1'$ in $0^{\ell_4} \textrm{CG}_\alpha(t_\alpha) 0^{\ell_4} \textrm{CG}_\beta(t_\beta) 0^{\ell_4} \textrm{CG}_\gamma(t_\gamma)$ is at most $0.1 \ell_1$, at least $2 \ell_2 - 0.2 \ell_1 > \ell_2$ amount of $1'$ cannot participate in the RNA folding. - Case: a $0'$ within $\textrm{CG}_\alpha(t_\alpha)$ is matched to a $0$ in the third $0^{\ell_4}$. 
Then all $1'$ within $\textrm{CG}_\beta(t_\beta)$ can only be matched to $1$ within $\textrm{CG}_\alpha(t_\alpha)$. It is obvious that the number of $1'$ within $\textrm{CG}_\beta(t_\beta)$ is at least $\ell_2$ more than the number of $1$ within $\textrm{CG}_\alpha(t_\alpha)$, so at least $ \ell_2 $ amount of $1'$ cannot participate in the RNA folding. Therefore, $$\begin{aligned} &\text{RNA}(0^{\ell_4} \textrm{CG}_\alpha(t_\alpha) 0^{\ell_4} \textrm{CG}_\beta(t_\beta) 0^{\ell_4} \textrm{CG}_\gamma(t_\gamma))\\ &\hspace{0.7cm} = \text{RNA}(\textrm{CG}_\alpha(t_\alpha)\textrm{CG}_\beta(t_\beta) \textrm{CG}_\gamma(t_\gamma))\\ &\hspace{0.7cm} = \text{RNA}( {1'}^{2 \ell_2} p({\textrm{CLG}(t_\alpha)}^R) {0'}^{\ell_1} {1}^{\ell_2} {0}^{\ell_1} \textrm{CNG}(t_\alpha) {1}^{\ell_2} {1'}^{\ell_2} p({\textrm{CLG}(t_\beta)}^R){0'}^{\ell_1} {1'}^{2\ell_2} & (\text{by definition})\\ &\hspace{1.2cm} {0'}^{\ell_1} p(\textrm{CNG}(t_\beta)) {1'}^{\ell_2} {1}^{\ell_2} {\textrm{CLG}(t_\gamma)}^R {0}^{\ell_1} {1}^{\ell_2} {0}^{\ell_1} \textrm{CNG}(t_\gamma) {1}^{2\ell_2} )\\ &\hspace{0.7cm} = \text{RNA}( {1}^{\ell_2} {0}^{\ell_1} \textrm{CNG}(t_\gamma) {1}^{2\ell_2} {1'}^{2 \ell_2} p({\textrm{CLG}(t_\alpha)}^R) {0'}^{\ell_1} {1}^{\ell_2} {0}^{\ell_1} \textrm{CNG}(t_\alpha){1}^{\ell_2} {1'}^{\ell_2} & (\text{cyclic shift})\\ &\hspace{1.2cm} p({\textrm{CLG}(t_\beta)}^R) {0'}^{\ell_1} {1'}^{2\ell_2} {0'}^{\ell_1} p(\textrm{CNG}(t_\beta)) {1'}^{\ell_2} {1}^{\ell_2} {\textrm{CLG}(t_\gamma)}^R {0}^{\ell_1} )\\ &\hspace{0.7cm} = 4 \ell_2 + \text{RNA}( {1}^{\ell_2} {0}^{\ell_1} \textrm{CNG}(t_\gamma) p({\textrm{CLG}(t_\alpha)}^R) {0'}^{\ell_1} {1}^{\ell_2} {0}^{\ell_1} \textrm{CNG}(t_\alpha)p({\textrm{CLG}(t_\beta)}^R) {0'}^{\ell_1} & (\text{Lemma~\ref{lem-14}})\\ &\hspace{1.2cm} {1'}^{2\ell_2} {0'}^{\ell_1} p(\textrm{CNG}(t_\beta)) {\textrm{CLG}(t_\gamma)}^R {0}^{\ell_1} )\\ &\hspace{0.7cm} = 4 \ell_2 + \text{RNA}(S_{t_{\alpha}, t_{\beta}, t_{\gamma}}).\end{aligned}$$ For the third equality, 
we just move ${1}^{\ell_2} {0}^{\ell_1} \textrm{CNG}(t_\gamma) {1}^{2\ell_2}$ from the end of the sequence to the beginning. The fourth equality follows by applying Lemma \[lem-14\] iteratively (which removes ${1}^{2\ell_2} {1'}^{2\ell_2}$, ${1}^{\ell_2} {1'}^{\ell_2}$, and ${1'}^{\ell_2} {1}^{\ell_2}$). By calculating the exact value of $\text{RNA}(S_{t_{\alpha}, t_{\beta}, t_{\gamma}})$ and combining it with several previous lemmas, the next lemma shows that $\text{RNA}(S_G) = m_1 + m_2$. \[lem-16\] $\text{RNA}(S_G) = m_1 + m_2$. In view of Lemmas \[lem-5\], \[lem-13\], and \[lem-15\], it suffices to show that $\text{RNA}(S_{t_{\alpha}, t_{\beta}, t_{\gamma}}) = 2 \ell_2 + 3 \ell_1 + \frac{3}{2} \ell_0 - \frac{1}{2}\big( \delta_{\text{LCS}} (\textrm{CLG}(t_\alpha), \textrm{CNG}(t_\beta)) + \delta_{\text{LCS}}(\textrm{CLG}(t_\alpha), \textrm{CNG}(t_\gamma)) + \delta_{\text{LCS}} (\textrm{CLG}(t_\beta), \textrm{CNG}(t_\gamma)) \big)$. First of all, it is easy to observe that $\text{RNA}(S_{t_{\alpha}, t_{\beta}, t_{\gamma}}) \geq 2 \ell_2 + 3 \ell_1$, so for any optimal RNA folding $A$ (of $S_{t_{\alpha}, t_{\beta}, t_{\gamma}}$), we must have $|A| \geq 2 \ell_2 + 3 \ell_1$. We claim that in any optimal RNA folding $A$ of $S_{t_{\alpha}, t_{\beta}, t_{\gamma}}$, the following two statements are true: - For each of the two ${1}^{\ell_2}$, there is a $1$ that is matched to a $1'$ in the ${1'}^{2\ell_2}$. - For each of the $S_{t_{\gamma}, t_{\alpha}}, S_{t_{\alpha}, t_{\beta}}, S_{t_{\beta}, t_{\gamma}}$, there is a pair linking a $0'$ in its ${0'}^{\ell_1}$ and a $0$ in its ${0}^{\ell_1}$. For the first statement, suppose that one ${1}^{\ell_2}$ does not have any letter matched to a $1'$ in the ${1'}^{2\ell_2}$. It is easy to observe that the number of $1'$ in $S_{t_{\alpha}, t_{\beta}, t_{\gamma}}$ that does not belong to ${1'}^{2\ell_2}$ is at most $0.1 \ell_1$. Therefore, $|A|$ is at most the total number of $0'$ plus the total number of $1$ minus $(\ell_2 - 0.1 \ell_1)$. 
By a simple calculation, $|A| \leq 3.1\ell_1 + (2 \ell_2 + 0.1 \ell_1) - (\ell_2 - 0.1 \ell_1) = \ell_2 + 3.3 \ell_1 < \text{RNA}(S_{t_{\alpha}, t_{\beta}, t_{\gamma}})$. Therefore, we conclude the first statement. For the second statement, suppose that there is an $S \in \{S_{t_{\gamma}, t_{\alpha}}, S_{t_{\alpha}, t_{\beta}}, S_{t_{\beta}, t_{\gamma}}\}$ that has no pair linking a $0'$ in its ${0'}^{\ell_1}$ and a $0$ in its ${0}^{\ell_1}$. By the first statement, any pairing involving ${0'}^{\ell_1}$ and ${0}^{\ell_1}$ is confined to be within $S$. Therefore, the number of pairs involving letters in $S$ is at most $|S| - 2\ell_1 \leq 0.1 \ell_1$. This is certainly not optimal, since simply matching all $0'$ in ${0'}^{\ell_1}$ to all $0$ in ${0}^{\ell_1}$ gives us $\ell_1$ pairs. Therefore, we conclude the second statement. We can infer from the above two statements that for each $S \in \{S_{t_{\gamma}, t_{\alpha}}, S_{t_{\alpha}, t_{\beta}}, S_{t_{\beta}, t_{\gamma}}\}$, letters within $S$ are only matched to letters within $S$ in any optimal RNA folding of $S_{t_{\alpha}, t_{\beta}, t_{\gamma}}$. 
As a result, $$\begin{aligned} \text{RNA}(S_{t_{\alpha}, t_{\beta}, t_{\gamma}}) & = \text{RNA}({1}^{\ell_2} \circ {1}^{\ell_2} \circ {1'}^{2\ell_2}) + \text{RNA}(S_{t_{\gamma}, t_{\alpha}}) + \text{RNA}(S_{t_{\alpha}, t_{\beta}}) +\text{RNA}(S_{t_{\beta}, t_{\gamma}} )\\ &= 2 \ell_2 + 3 \ell_1 + \text{RNA}(\textrm{CNG}(t_\gamma) p({\textrm{CLG}(t_\alpha)}^R))+ \text{RNA}(\textrm{CNG}(t_\alpha) p({\textrm{CLG}(t_\beta)}^R)) \\ & \hspace{0.4cm} + \text{RNA}(p(\textrm{CNG}(t_\beta)) {\textrm{CLG}(t_\gamma)}^R)\\ &=2 \ell_2 + 3 \ell_1 + \frac{3}{2} \ell_0 - \frac{1}{2} \big( \delta_{\text{LCS}} (\textrm{CLG}(t_\alpha), \textrm{CNG}(t_\beta)) + \delta_{\text{LCS}}(\textrm{CLG}(t_\alpha), \textrm{CNG}(t_\gamma))\\ & \hspace{0.4cm} + \delta_{\text{LCS}} (\textrm{CLG}(t_\beta), \textrm{CNG}(t_\gamma)) \big).\end{aligned}$$ We are ready to prove the main theorem of the paper: [**Remainder of Theorem \[thm-2\].**]{} [*If the RNA folding problem on sequences in ${\{A,C,G,U\}}^n$ can be solved in $T(n)$ time, then $3k$-clique on graphs with $|V|=n$ can be solved in $\mathcal{O}\left(T\left(n^{k + 1} \log(n) \right)\right)$ time.*]{} Given a graph $G$, we construct the string $S_G$. According to Lemma \[lem-1\], \[lem-length\], the length of $S_G$ is $\mathcal{O}(k^2 n^{k+1} \log (n))$, and $S_G$ can be constructed in time $\mathcal{O}\left(k^2 n^{k+1} \log (n)\right)$. We let $t_{\alpha}, t_{\beta}, t_{\gamma} \in \mathcal{C}_k$ be chosen such that $Q = \delta_{\text{LCS}} (\textrm{CLG}(t_\alpha), \textrm{CNG}(t_\beta)) + \delta_{\text{LCS}} (\textrm{CLG}(t_\alpha), \textrm{CNG}(t_\gamma)) + \delta_{\text{LCS}} (\textrm{CLG}(t_\beta), \textrm{CNG}(t_\gamma))$ is minimized. By Lemma \[lem-3\], there exists a number $c_1$ such that: - the number $c_1$ depends only on $n,k$, and $Q \geq 3c_1$. 
- If $Q = 3c_1$, then each of $t_{\alpha} \cup t_{\beta}$, $t_{\alpha} \cup t_{\gamma}$, $t_{\beta} \cup t_{\gamma}$ is a $2k$-clique, which in turn is equivalent to “$t_{\alpha} \cup t_{\beta} \cup t_{\gamma}$ is a $3k$-clique”. - If $Q > 3c_1$, then the graph has no $3k$-clique. According to Lemma \[lem-16\], $\text{RNA}(S_G) = m_1 + m_2$. By its definition, $m_1$ only depends on $n,k$, and $m_2 = 6 \ell_2 + 3 \ell_1 + \frac{3}{2} \ell_0 - \frac{Q}{2}$. Hence we are able to decide whether $G$ has a $3k$-clique from the value of $\text{RNA}(S_G)$, which can be calculated in time $T\left(\mathcal{O}\left(k^2 n^{k+1} \log (n)\right)\right) = \mathcal{O}\left(T\left(k^2 n^{k+1} \log (n)\right)\right)$. Note that $k$ is treated as a constant instead of an input parameter. Hardness of Dyck Edit Distance Problem ====================================== In this section, we shift our focus to the Dyck edit distance problem. We will present a simple reduction from the RNA folding problem (with alphabet size 4) to the Dyck edit distance problem (with alphabet size 10). This leads to a much simplified and improved proof for a conditional lower bound of Dyck edit distance based on the conjectured hardness of $k$-clique (the previous proof presented in [@ABV15] requires 48 symbols). [**Dyck Edit Distance.**]{} Given $S \in (\Sigma \cup \Sigma')^n$, the goal of the Dyck edit distance problem is to find a minimum number of edit operations (insertion, deletion, and substitution) that transform $S$ into a string in the Dyck context free language. Given $\Sigma$ and its corresponding $\Sigma'$, the Dyck context free language is defined by the grammar with the following production rules: $\mathbf{S} \rightarrow \mathbf{SS}$, $\forall x \in \Sigma, \mathbf{S} \rightarrow x\mathbf{S}x'$, and $\mathbf{S} \rightarrow \epsilon$ (empty string). 
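For concreteness, the Dyck edit distance can be computed by a standard (folklore) $\mathcal{O}(n^3)$ interval dynamic program; the sketch below is illustrative only and plays no role in the reduction. Primed letters are encoded here as uppercase (so `A` stands for $a'$), which is a convention of this sketch. Note that a pair of the form $(y',x)$ with $x,y \in \Sigma$ cannot be repaired by a single substitution, so it is never treated as a pair.

```python
import math

# Folklore O(n^3) interval DP for the Dyck edit distance.  Encoding
# convention of this sketch only: letters of Sigma are lowercase and
# their primed partners are uppercase, e.g. 'A' stands for a'.
def dyck_edit_distance(s):
    n = len(s)
    if n == 0:
        return 0

    def pair_cost(x, y):
        if x.islower() and y == x.upper():
            return 0          # properly matched pair (x, x')
        if x.islower() or y.isupper():
            return 1          # repairable by a single substitution
        return math.inf       # (y', x): cannot be repaired by one substitution

    # dp[i][j] = Dyck edit distance of the substring s[i..j] (inclusive)
    dp = [[0] * n for _ in range(n)]
    for i in range(n):
        dp[i][i] = 1          # a single letter must be deleted
    for length in range(2, n + 1):
        for i in range(n - length + 1):
            j = i + length - 1
            best = min(1 + dp[i + 1][j], 1 + dp[i][j - 1])   # delete an endpoint
            inner = dp[i + 1][j - 1] if length > 2 else 0
            best = min(best, pair_cost(s[i], s[j]) + inner)  # pair the endpoints
            for k in range(i, j):                            # split the interval
                best = min(best, dp[i][k] + dp[k + 1][j])
            dp[i][j] = best
    return dp[0][n - 1]
```

For instance, `dyck_edit_distance("abBA")` is $0$ (the string $abb'a'$ is already a Dyck word), while `dyck_edit_distance("Aa")` is $2$ (both letters of $a'a$ must be deleted).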
An alternative definition of the Dyck edit distance problem is described as follows: Given a sequence $S \in (\Sigma \cup \Sigma')^n$, find a minimum cost set $A \subseteq \{(i,j) | 1 \leq i < j \leq n \}$ satisfying the following conditions: - $A = A_M \uplus A_S$ has no crossing pair. - $A_M$ contains only pairs of the form $(x,x')$, $x \in \Sigma$ (i.e. for all $(i,j)\in A_M$, we have $S[i]=x$, $S[j]=x'$, for some $x \in \Sigma$). $A_M$ corresponds to the set of matched pairs. - $A_S$ does not contain any pair of the form $(y',x)$, $x,y \in \Sigma$ (i.e. for all $(i,j)\in A_S$ we have either $S[i] \in \Sigma$ or $S[j] \in \Sigma'$). $A_S$ corresponds to the set of pairs that can be fixed by one substitution operation per pair. - Let $D$ be the set of letters in $S$ that do not belong to any pair in $A$. Each letter in $D$ requires one deletion/insertion operation to fix. The cost of $A$ is then defined as $|A_S|+|D|$, and the Dyck edit distance of the string $S$ is the cost of a minimum cost set meeting the above conditions. The Dyck edit distance problem can be thought of as an asymmetric version of the RNA folding problem (in RNA folding, we allow the aligned pair to be either $(x,x')$ or $(x',x)$, $x \in \Sigma$) that also handles substitution (in addition to deletion and insertion). Though these two problems look similar, they can behave quite differently. For example, in Section \[sec.intro\] we describe a simple reduction from LCS to RNA folding; since LCS is the edit distance problem without substitutions, one may hope that the same reduction reduces the edit distance problem to the Dyck edit distance problem. 
However, this is not true due to the following counterexample: the two strings $ababa$ and $abbaa$ both require at least 4 edit operations to be transformed into the string $caaac$; but the Dyck edit distance of $ababac'a'a'a'c'$ is 4 (by deleting all $b,c'$), while the Dyck edit distance of $abbaac'a'a'a'c'$ is 3 (by deleting all $c'$ and substituting the second $b$ with $b'$). Intuitively, the substitution operation makes Dyck edit distance more complicated than RNA folding. Indeed, the same conditional lower bound as Theorem \[thm-1\] for the Dyck edit distance problem shown in [@ABV15] requires a bigger alphabet size (48 instead of 36) and a longer proof. Next, we prove Theorem \[thm-3\] by demonstrating a simple reduction from the RNA folding problem to the Dyck edit distance problem with alphabet size 10. This improves upon the hardness result in [@ABV15], and justifies the intuition that Dyck edit distance is a harder problem than RNA folding. [**Proof of Theorem \[thm-3\].**]{} [*If the Dyck edit distance problem on sequences of length $n$ with alphabet size 10 can be solved in $T(n)$ time, then the RNA folding problem on sequences in ${\{A,C,G,U\}}^n$ can be solved in $\mathcal{O}(T(n))$ time.*]{} For notational simplicity, we let the alphabet for the RNA folding problem be $\Sigma \cup \Sigma'= \{0,0',1,1'\}$ (instead of $\{A,C,G,U\}$). Let $S$ be any string in ${(\Sigma \cup \Sigma')^n}$. We define the string $S_{\text{Dyck}}$ as the result of applying the following operations on $S$: - Replace each letter $0$ with the sequence $S_{0} = aeb'aeb'$. - Replace each letter $0'$ with the sequence $S_{0'} = bba'a'$. - Replace each letter $1$ with the sequence $S_{1} = ced'ced'$. - Replace each letter $1'$ with the sequence $S_{1'} = ddc'c'$. It is clear that $S_{\text{Dyck}}$ is a sequence of length at most $6n$ on the alphabet $\{a,b,c,d,e\} \cup \{a',b',c',d',e'\}$, though the letter $e'$ is not used. 
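The replacement step above is purely mechanical; a minimal sketch follows, in which primed letters are encoded as uppercase (`A` stands for $a'$) and the RNA input is given as a list of symbols over $\{0,0',1,1'\}$ — both encoding choices of this sketch only.

```python
# The four replacement gadgets of the reduction, with primed letters
# written in uppercase (an encoding choice of this sketch):
# 'B' stands for b', 'A' for a', and so on.
GADGET = {
    "0":  "aeBaeB",   # S_0   = a e b' a e b'
    "0'": "bbAA",     # S_0'  = b b a' a'
    "1":  "ceDceD",   # S_1   = c e d' c e d'
    "1'": "ddCC",     # S_1'  = d d c' c'
}

def to_dyck_instance(rna_symbols):
    """Map an RNA folding instance over {0, 0', 1, 1'} (given as a
    list of symbols) to the Dyck edit distance instance S_Dyck."""
    return "".join(GADGET[sym] for sym in rna_symbols)
```

Each gadget has length at most $6$, so $|S_{\text{Dyck}}| \leq 6n$ as claimed.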
We claim that the Dyck edit distance of $S_{\text{Dyck}}$ is $\frac{|S_{\text{Dyck}}|}{2} - 2\text{RNA}(S)$. First, we show that the Dyck edit distance of $S_{\text{Dyck}}$ is at most $\frac{|S_{\text{Dyck}}|}{2} - 2\text{RNA}(S)$. Given an optimal RNA folding of $S$, we construct a crossing-free matching $A$ with cost $\frac{|S_{\text{Dyck}}|}{2} - 2\text{RNA}(S)$ as follows: [For matched pairs in the RNA folding of $S$:]{} - For each matched pair $(0,0')$ in the RNA folding of $S$, we add two pairs $(a,a'),(a,a')$ to $A_M$, and add three pairs $(e,b'),(e,b'),(b,b)$ to $A_S$ in its corresponding pair of substrings $(S_0 = \mathbf{a}(eb')\mathbf{a}(eb'), S_{0'} = (bb)\mathbf{a'a'})$ in $S_{\text{Dyck}}$. - For each matched pair $(0',0)$ in the RNA folding of $S$, we add two pairs $(b,b'),(b,b')$ to $A_M$, and add three pairs $(a',a'),(a,e),(a,e)$ to $A_S$ in its corresponding pair of substrings $(S_{0'} = \mathbf{bb}(a'a'), S_{0} = (ae)\mathbf{b'}(ae)\mathbf{b'})$ in $S_{\text{Dyck}}$. - Similarly, for each matched pair $(1,1')$ or $(1',1)$ in the RNA folding of $S$, we can add two pairs to $A_M$ and three pairs to $A_S$. [For unmatched letters in $S$:]{} - For each unmatched letter $0$ in $S$, we add three pairs $(a,b'),(e,b'),(a,e)$ to $A_S$ in its corresponding substring $S_0 = (a(eb')(ae)b')$. Similarly, for each unmatched letter $1$, we can add three pairs to $A_S$. - For each unmatched letter $0'$ in $S$, we add two pairs $(b,b),(a',a')$ to $A_S$ in its corresponding substring $S_{0'} = (bb)(a'a')$. Similarly, for each unmatched letter $1'$, we can add two pairs to $A_S$. The set $A_M$ has size $2\text{RNA}(S)$, the set $A_S$ has size $\frac{|S_{\text{Dyck}}| - 4\text{RNA}(S)}{2}$, and $D$ is empty. Therefore, the cost of $A$ is $\frac{|S_{\text{Dyck}}| - 4\text{RNA}(S)}{2} = \frac{|S_{\text{Dyck}}|}{2} - 2\text{RNA}(S)$. Second, we show that the Dyck edit distance of $S_{\text{Dyck}}$ is at least $\frac{|S_{\text{Dyck}}|}{2} - 2\text{RNA}(S)$. 
Given a crossing-free matching $A$ (on the string $S_{\text{Dyck}}$) of cost $C$, we recover an RNA folding of $S$ that has at least $\frac{|S_{\text{Dyck}}|}{4} - \frac{C}{2}$ matched pairs. We build a multi-graph $G=(V,E)$ such that $V$ is the set of all substrings $S_0, S_{0'}, S_1, S_{1'}$ that constitute $S_{\text{Dyck}}$, and the number of edges between two substrings in $V$ is the number of pairs in $A_M$ linking letters between these two substrings. Note that $|V|=n$ and $|E|=|A_M|$. It is clear that $C \geq \frac{|S_{\text{Dyck}}| - 2|E|}{2}$ (since $|A_S|+|D| \geq \frac{|S_{\text{Dyck}}| - 2|A_M|}{2} = \frac{|S_{\text{Dyck}}| - 2|E|}{2}$). Therefore, we are done if we can recover an RNA folding of size $\geq \frac{|E|}{2}$, since $\frac{|E|}{2} \geq \frac{|S_{\text{Dyck}}|}{4} - \frac{C}{2}$. We observe the following: - $G$ has degree at most 2 (due to our definition of $S_0, S_{0'}, S_1, S_{1'}$, at most two letters in such a substring can participate in pairings of the form $(x,x')$, $x \in \{a,b,c,d\}$, without crossing). - In the graph $G$, any edge must either link an $S_0$ with an $S_{0'}$ or link an $S_1$ with an $S_{1'}$ (due to our definition of $S_0, S_{0'}, S_1, S_{1'}$, any pairing of the form $(x,x')$, $x \in \{a,b,c,d\}$, must be made between $S_0, S_{0'}$ or between $S_1, S_{1'}$). - $G$ does not contain any cycle of odd length (due to the above observation). In view of the above (second) observation, a (graph-theoretic) matching $M \subseteq E$ of $G$ naturally corresponds to a (size $|M|$) RNA folding of $S$: for each edge (a pair of substrings in $S_{\text{Dyck}}$) in $M$, we add its corresponding pair of letters in $S$ to the RNA folding. Since a maximum matching has size $\geq \frac{|E|}{2}$ in a graph of maximum degree 2 without odd cycles, we conclude the proof. We note that when substitution is not allowed, the letter $e$ in the above proof is not needed, and this lowers the required alphabet size to 8. 
The reason that the letter $e$ is essential for the above proof to work is explained as follows: Suppose that $e$ is removed. For each matched pair $(0,0')$ in the RNA folding of $S$, after adding two pairs $(a,a'),(a,a')$ to $A_M$, the letter $b'$ between the two $a$'s in $S_0=ab'ab'$ cannot participate in any matching anymore. Hence some letters will be in $D$ according to our construction of the crossing-free matching $A$. This indicates that our construction may not be optimal. Indeed, for the string $(0 0' 0')_{\text{Dyck}} = ab'ab'bba'a'bba'a'$ (after removing $e$), if we insist on matching the two pairs $(a,a'),(a,a')$ in $\mathbf{a}b'\mathbf{a}b'bb\mathbf{a'a'}bba'a'$, then the cost will be at least $5$ (three substitutions and two deletions are needed). However, there is a solution that uses only 4 substitutions: $\mathbf{a}(b'\mathbf{a}(b'(bb)a')\mathbf{a'}(bb)a')\mathbf{a'}$. Conclusion and Future Directions ================================ In this paper, we present a hardness result for the RNA folding problem with alphabet size 4 and demonstrate a reduction from the RNA folding problem to the Dyck edit distance problem. A few open problems still remain: - There are still a few cases where the state-of-the-art conditional lower bound requires a certain alphabet size to work (e.g. Theorem \[thm-3\], Corollary \[cor-1\], and the hardness result for Dynamic time warping in [@BK15]). Is it possible to improve them using our technique or other ideas? - Is it possible to reduce the Dyck edit distance problem to the RNA folding problem? - Besides the classic RNA folding problem, several problems in bioinformatics admit a similar formulation (see e.g. [@ABHL12; @FG12]). It would be interesting to see whether the technique presented in this paper (and [@ABV15; @BK15]) can be adapted to give meaningful lower bounds for other problems. [**Acknowledgements.**]{} The author would like to thank Seth Pettie for helpful discussions and comments. [89]{} Tatsuya Akutsu. 
Approximation and exact algorithms for RNA secondary structure prediction and recognition of stochastic context-free languages. Journal of Combinatorial Optimization, 3(2): 321-336, 1999. Mika Amit, Rolf Backofen, Steffen Heyne, Gad M. Landau, Mathias Mohl, Christina Schmiedl, Sebastian Will. Local Exact Pattern Matching for Non-fixed RNA Structures. In Combinatorial Pattern Matching (CPM), 306–320, Springer, 2012. Amir Abboud, Arturs Backurs, and Virginia Vassilevska Williams. If the current clique algorithms are optimal, so is Valiant’s Parser. In Proceedings of IEEE 55th Annual Symposium on Foundations of Computer Science (FOCS), 2015. Amir Abboud, Arturs Backurs, and Virginia Vassilevska Williams. Quadratic-time hardness of LCS and other sequence similarity measures. In Proceedings of IEEE 55th Annual Symposium on Foundations of Computer Science (FOCS), 2015. Amihood Amir, Timothy M. Chan, Moshe Lewenstein, and Noa Lewenstein. On hardness of jumbled indexing. In Proceedings of the 41st International Colloquium Automata, Languages, and Programming (ICALP), 114–125, 2014. Amihood Amir and Gad M. Landau. Fast parallel and serial multidimensional approximate array matching. Theoretical Computer Science, 81(1): 97–115, 1991. Arturs Backurs and Piotr Indyk. Edit distance cannot be computed in strongly subquadratic time (unless SETH is false). In Proceedings of the 47th Annual ACM Symposium on Theory of Computing (STOC), 51–58, 2015. Karl Bringmann and Marvin Künnemann. Quadratic conditional lower bounds for string problems and dynamic time warping. In Proceedings of IEEE 56th Annual Symposium on Foundations of Computer Science (FOCS), 79–97, 2015. Timothy M. Chan and Moshe Lewenstein. Clustered integer 3SUM via additive combinatorics. In Proceedings of the 47th Annual ACM Symposium on Theory of Computing (STOC), 31–40, 2015. Richard Durbin, Sean R. Eddy, Anders Krogh, and Graeme J. Mitchison. 
Biological Sequence Analysis: Probabilistic Models of Proteins and Nucleic Acids. Cambridge University Press, 1998. Friedrich Eisenbrand and Fabrizio Grandoni. On the complexity of fixed parameter clique and dominating set. Theoretical Computer Science, 326(1): 57–67, 2004. Yelena Frid and Dan Gusfield. A simple, practical and complete $\mathcal{O}(\frac{n^3}{\log n})$-time algorithm for RNA folding using the four-russians speedup. Algorithms for Molecular Biology, 5(1):13, 2010. Yelena Frid, Dan Gusfield: Speedup of RNA Pseudoknotted Secondary Structure Recurrence Computation with the Four-Russians Method. In Combinatorial Optimization and Applications (COCOA), 176–187, 2012. Mihai Pătraşcu and Ryan Williams. On the possibility of faster SAT algorithms. In Proceedings of the 21st ACM-SIAM Symposium on Discrete Algorithms (SODA), 1065–1075, 2010. Tamar Pinhas, Dekel Tsur, Shay Zakov, and Michal Ziv-Ukelson. Edit distance with duplications and contractions revisited. In Combinatorial Pattern Matching (CPM), 441–454, Springer, 2011. Tamar Pinhas, Shay Zakov, Dekel Tsur, and Michal Ziv-Ukelson. Efficient edit distance with duplications and contractions. Algorithms for Molecular Biology, 8(1):27, 2013. Liam Roditty and Virginia Vassilevska Williams. Fast approximation algorithms for the diameter and radius of sparse graphs. In Proceedings of the 45th Annual ACM Symposium on Theory of Computing (STOC), 515–524, 2013. Barna Saha. The dyck language edit distance problem in near-linear time. In Proceedings of the IEEE 55th Annual Symposium on Foundations of Computer Science (FOCS), 611–620, 2014. Barna Saha. Language edit distance & maximum likelihood parsing of stochastic grammars: faster Algorithms & connection to fundamental graph problems. In Proceedings of the IEEE 56th Annual Symposium on Foundations of Computer Science (FOCS), 118–135, 2015. Yinglei Song. Time and space efficient algorithms for RNA folding with the four-russians technique. 
Electronic preprint arXiv:1503.05670, 2015. Balaji Venkatachalam, Dan Gusfield, and Yelena Frid. Faster algorithms for RNA-folding using the four-russians method. In Algorithms in Bioinformatics (WABI), 126–140, Springer, 2013. Leslie G. Valiant. General context-free recognition in less than cubic time. Journal of Computer and System Sciences, 10(2): 308–315, 1975. Virginia Vassilevska. Efficient algorithms for clique problems. Information Processing Letters, 109(4): 254–257, 2009. [^1]: Supported by NSF grants CCF-1217338, CNS-1318294, and CCF-1514383.
--- abstract: 'We study the iteration complexity of stochastic gradient descent (SGD) for minimizing the gradient norm of smooth, possibly nonconvex functions. We provide several results, implying that the classical $\mathcal{O}(\epsilon^{-4})$ upper bound (for making the average gradient norm less than $\epsilon$) cannot be improved upon, unless a combination of additional assumptions is made. Notably, this holds even if we limit ourselves to convex quadratic functions. We also show that for nonconvex functions, the feasibility of minimizing gradients with SGD is surprisingly sensitive to the choice of optimality criteria.' author: - | Yoel Drori\ Google Research - | Ohad Shamir\ Weizmann Institute of Science\ and Google Research bibliography: - 'bib.bib' title: | The Complexity of Finding Stationary Points\ with Stochastic Gradient Descent --- Introduction ============ Stochastic gradient descent (SGD) is today one of the main workhorses for solving large-scale supervised learning and optimization problems. Much of its popularity is due to its extreme simplicity: Given a function $f$ and an initialization point ${\mathbf{x}}$, we perform iterations of the form ${\mathbf{x}}_{t+1}={\mathbf{x}}_t-\eta_t {\mathbf{g}}_t$, where $\eta_t>0$ is a step size parameter and ${\mathbf{g}}_t$ is a stochastic vector which satisfies ${\mathbb{E}}[{\mathbf{g}}_t|{\mathbf{x}}_t]=\nabla f({\mathbf{x}}_t)$. For example, in the context of machine learning, $f({\mathbf{x}})$ might be the expected loss of some predictor parameterized by ${\mathbf{x}}$ (over some underlying data distribution) and ${\mathbf{g}}_t$ is the gradient of the loss w.r.t. a single data sample. For convex problems, the convergence rate of SGD to a global minimum of $f$ has been very well studied (for example, [@kushner2003stochastic; @nemirovski2009robust; @moulines2011non; @bertsekas2011incremental; @rakhlin2012making; @bottou2018optimization]). 
However, for nonconvex problems, convergence to a global minimum cannot in general be guaranteed. A reasonable substitute is to study the convergence to local minima, or at the very least, to stationary points. This can also be quantified as an optimization problem, where the goal is not to minimize $f({\mathbf{x}})$ over ${\mathbf{x}}$, but rather ${\|\nabla f({\mathbf{x}})\|}$. This question of finding stationary points has gained more attention in recent years, with the rise of deep learning and other large-scale nonconvex optimization methods. Compared to optimizing function values, the convergence of SGD in terms of minimizing the gradient norm is relatively less well-understood. A folklore result (see e.g., [@ghadimi2013stochastic], which we repeat in [Appendix \[sec:upperbounds\]]{} for completeness, as well as [@allen2018make]) states that for smooth (Lipschitz gradient) functions, ${\mathcal{O}}(\epsilon^{-4})$ iterations are sufficient to make the average expected gradient ${\mathbb{E}}[\frac{1}{T}\sum_{t=1}^{T}{\|\nabla f({\mathbf{x}}_t)\|}]$ (or minimal gradient ${\mathbb{E}}[\min_t {\|\nabla f({\mathbf{x}}_t)\|}]$) less than $\epsilon$, and it was widely conjectured that this is the best complexity achievable with SGD. However, this bound was recently improved in Fang et al. [@fang2019sharp], which showed a complexity bound of ${\mathcal{O}}(\epsilon^{-3.5})$ for SGD, under the following additional assumptions/algorithmic modifications: 1. **(Complex) aggregation.** Rather than considering the average or minimal gradient norm of the iterates, the algorithm considers the norm of a certain adaptive average of a suffix of the iterates (those which do not deviate too much from the final iterate). 2. \[as:hessian\] **Lipschitz Hessian.** The function is twice differentiable, with a Lipschitz Hessian as well as a Lipschitz gradient. 3. 
\[as:noise\] **“Dispersive” noise.** The stochastic noise satisfies a “dispersive” property, which intuitively implies that it is well-spread (it is satisfied, for example, for Gaussian or uniform noise in some ball). 4. \[as:dimension\] **Bounded dimension.** The dimension is bounded, in the sense that there is an explicit logarithmic dependence on it in the iteration complexity bound (in contrast, the folklore ${\mathcal{O}}(\epsilon^{-4})$ result is dimension-free). In fact, the result of Fang et al. is even stronger, as it shows convergence to a *second-order* stationary point (where the Hessian is nearly positive definite), but this will not be our focus here. Moreover, it is known that some dimension dependence is difficult to avoid when considering second-order stationary points (see [@simchowitz2017gap]). In this paper, we study the performance limits of SGD for minimizing gradients, through several variants of lower bounds under different assumptions. In particular, we wish to understand which of the assumptions/modifications above are necessary to break the $\epsilon^{-4}$ barrier. Our main take-home message is that most of these indeed appear to be needed in order to attain an iteration complexity better than ${\mathcal{O}}(\epsilon^{-4})$, in some cases even if we limit ourselves just to convex quadratic functions. In a bit more detail: - If we drop assumption \[as:dimension\] (bounded dimension), and consider the norm of the gradient at the output of some fixed, deterministic aggregation scheme (as opposed to returning, for example, an iterate with a minimal gradient norm), then perhaps surprisingly, we show that it is impossible to provide *any* finite complexity bound. This holds under mild algorithmic conditions, which extend far beyond SGD. 
This implies that for dimension-free bounds, we must either consider rather complicated aggregation schemes, apply randomization, or use optimality criteria which do not depend on a single point (e.g., consider the average gradient $\frac{1}{T}\sum_{t=1}^{T}{\|\nabla f({\mathbf{x}}_t)\|}$ or $\min_t {\|\nabla f(x_t)\|}$, as is often done in the literature). This result is formalized as [Thm. \[thm:infdim\]]{} in [Subsection \[subsec:fixedpoint\]]{}. - Without assumptions \[as:hessian\] (Lipschitz Hessian) and \[as:noise\] (dispersive noise), then even with rather arbitrary aggregation schemes, the iteration complexity of SGD is $\Omega(\epsilon^{-4})$. This result is formalized as [Thm. \[thm:aggregation\_step\]]{} in [Subsection \[subsec:sgdlowbound\]]{}. - Without aggregation and without assumption \[as:noise\] (dispersive noise), the iteration complexity of SGD required to satisfy ${\mathbb{E}}[\min_t {\|\nabla f({\mathbf{x}}_t)\|}]\leq \epsilon$ is $\Omega(\epsilon^{-3})$. This result is formalized as [Thm. \[thm:nonconvex\]]{} in [Subsection \[subsec:sgdlowbound\]]{}. - Without aggregation, the iteration complexity of SGD with “reasonable” step sizes to attain ${\mathbb{E}}[\min_t {\|\nabla f({\mathbf{x}}_t)\|}]\leq \epsilon$ is $\Omega(\epsilon^{-4})$, even for quadratic *convex* functions in moderate dimension and Gaussian noise (namely, all other assumptions are satisfied as well as convexity). This result is formalized as [Thm. \[thm:sgdlow\]]{} in Section \[sec:convex\]. It is important to note that the SGD algorithm, which is the main focus of this paper, is not necessarily an optimal algorithm (in terms of iteration complexity) for minimizing gradient norms in our stochastic optimization setting. 
For example, for convex problems, it is known that it is possible to achieve an iteration complexity of $\tilde{O}(\epsilon^{-2})$, strictly smaller than our $\Omega(\epsilon^{-4})$ lower bound (see [@foster2019complexity], and for a related result in the deterministic setting see [@nesterov2012make]). However, these algorithms are more complicated and less natural than plain SGD. Our results indicate that this algorithmic complexity might be a necessary price to pay in order to achieve optimal iteration complexity in some cases. Setting and Notation {#sec:setting} ==================== We let bold-face letters denote vectors, use ${\mathbf{e}}_i$ to denote the canonical unit vector, and use $[T]$ as shorthand for $\{1,2,\ldots,T\}$. We assume throughout that the objective $f$ maps ${\mathbb{R}}^d$ to ${\mathbb{R}}$, and either has an $L$-Lipschitz gradient for some fixed parameter $L>0$ or a $\rho$-Lipschitz Hessian for some $\rho>0$. We consider algorithms which use a standard stochastic first-order oracle ([@Book:NemirovskyYudin; @agarwal2009information]) in order to minimize some optimality criteria: This oracle, given a point ${\mathbf{x}}_t$, returns $\nabla f({\mathbf{x}}_t)+{\boldsymbol{\xi}}_t$, where ${\boldsymbol{\xi}}_t$ is a random variable satisfying $${\mathbb{E}}[{\boldsymbol{\xi}}_t|{\mathbf{x}}_t]=0~~~\text{and}~~~{\mathbb{E}}[{\|{\boldsymbol{\xi}}_t\|}^2|{\mathbf{x}}_t]\leq \sigma^2$$ almost surely for some fixed $\sigma^2$. In this paper, we focus on optimality criteria involving minimizing gradient norms, using the Stochastic Gradient Descent (SGD) algorithm. This algorithm, given a budget of $T$ iterations and an initialization point ${\mathbf{x}}_1$, produces $T$ stochastic iterates ${\mathbf{x}}_1,\ldots,{\mathbf{x}}_T$ according to $$\label{eq:sgd} {\mathbf{x}}_{t+1}~=~{\mathbf{x}}_t-\eta_t \cdot (\nabla f({\mathbf{x}}_t)+{\boldsymbol{\xi}}_t)~,$$ where $\eta_t$ is a fixed step size parameter. 
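As a concrete illustration (and not one of the hard instances constructed later), the SGD iteration just defined, together with the $\min_t {\|\nabla f({\mathbf{x}}_t)\|}$ criterion discussed below, can be sketched as follows; the quadratic objective and Gaussian noise are illustrative choices of this sketch only.

```python
import random

# A minimal sketch of SGD with fixed step sizes and a stochastic
# first-order oracle (gradient plus mean-zero noise xi_t); it reports
# the minimal gradient norm min_t ||grad f(x_t)|| over the trajectory.
def sgd_min_grad_norm(grad, x1, steps, sigma=0.0, seed=0):
    rng = random.Random(seed)
    x = list(x1)
    best = float("inf")
    for eta in steps:
        g = grad(x)
        best = min(best, sum(gi * gi for gi in g) ** 0.5)
        # Oracle answer: exact gradient perturbed by mean-zero noise
        noisy = [gi + rng.gauss(0.0, sigma) for gi in g]
        x = [xi - eta * ni for xi, ni in zip(x, noisy)]
    return best

# Illustrative objective: f(x) = 0.5 * ||x||^2, so grad f(x) = x.
quad_grad = lambda x: list(x)
```

With $\sigma = 0$ and constant step size $1/2$, the iterates of this quadratic example halve at every step, so the minimal gradient norm decays geometrically in $T$; the lower bounds below show that such fast decay is impossible in general once noise is present.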
In some cases, we will also allow the algorithm to perform an additional aggregation step, generating a point ${{\mathbf{x}_{\mathrm{out}}}}$ which is some function of ${\mathbf{x}}_1,\ldots,{\mathbf{x}}_T$ (for example, the average $\frac{1}{T}\sum_{t=1}^{T}{\mathbf{x}}_t$). Additionally, in some of our results, we will allow the step size to be adaptive, and depend on the previous iterates (under appropriate assumptions), in which case we will use the notation $$\label{eq:adaptive_sgd} {\mathbf{x}}_{t+1} ={\mathbf{x}}_t - \eta_{{\mathbf{x}}_1,\dots,{\mathbf{x}}_t} \cdot (\nabla f({\mathbf{x}}_t)+{\boldsymbol{\xi}}_t).$$ Regarding the initial conditions, we make the standard assumption[^1] that ${\mathbf{x}}_1$ has bounded suboptimality, i.e., $$f({\mathbf{x}}_1)-f({\mathbf{x}}_*)\leq \Delta$$ for some fixed $\Delta > 0$, where in the convex case, we assume ${\mathbf{x}}_*$ is some point ${\mathbf{x}}_* \in \arg\min_{{\mathbf{x}}}f({\mathbf{x}})$, and in the non-convex case, we assume ${\mathbf{x}}_*$ is a stationary point with $f({\mathbf{x}}_*) \leq f({\mathbf{x}}_t)$ for all $t\in [T]$. We note that some analyses (see for example [@allen2018make; @foster2019complexity]) replace the assumption $f({\mathbf{x}}_1)-f({\mathbf{x}}_*)\leq \Delta$ with the assumption ${\|{\mathbf{x}}_1-{\mathbf{x}}_*\|}\leq R$, but we do not consider this variant in this paper (in fact, some of our constructions rely on the fact that even if $f({\mathbf{x}}_1)-f({\mathbf{x}}_*)$ is small, ${\|{\mathbf{x}}_1-{\mathbf{x}}_*\|}$ might be very large). It should also be pointed out that in the non-convex setting, ${\mathbf{x}}_*$ might not be uniquely defined or even belong to a single connected set, which makes ${\|{\mathbf{x}}_1-{\mathbf{x}}_*\|}$ somewhat ambiguous. Lower bounds in the non-convex case {#sec:nonconvex} =================================== In this section, we present several lower bounds relating to first-order methods in the non-convex stochastic setting. 
We start by considering a wide range of first-order methods, showing that if we consider any point which is a *fixed* function of the iterates, then *no* meaningful, dimension-free worst-case bound can be attained on its expected gradient norm. We conclude that it is necessary for any useful optimality criterion to relate to more than one iterate in some way, as is indeed the case with the standard optimality criteria, which consider the average expected norm of the gradients ($\frac{1}{T} \sum_t {\mathbb{E}}\|\nabla f({\mathbf{x}}_t)\|$) or the minimal expected norm of the gradients ($\min_t {\mathbb{E}}\|\nabla f({\mathbf{x}}_t)\|$). We then turn our focus to the SGD method under the standard set of assumptions (see [Sec. \[sec:setting\]]{}), and show that it requires $\Omega(\epsilon^{-4})$ iterations (or $\Omega(\epsilon^{-3})$ with Lipschitz Hessians) to attain a value of $\epsilon$ for any of the standard optimality criteria mentioned above. Impossibility of minimizing the gradient at any fixed point {#subsec:fixedpoint} ----------------------------------------------------------- In this subsection, we show that in the nonconvex setting, perhaps surprisingly, *no* meaningful iteration complexity bound can be provided on ${\|\nabla f({{\mathbf{x}_{\mathrm{out}}}})\|}$, where ${{\mathbf{x}_{\mathrm{out}}}}$ is the point returned by any fixed, deterministic aggregation scheme which depends continuously on the iterates and stochastic gradients (for example, some fixed weighted combination of the iterates). 
To state the result, recall that SGD can be phrased in an oracle-based setting, where we model an optimization algorithm as interacting with a stochastic first-order oracle: Given an initial point ${\mathbf{x}}_1$, at every iteration $t=2,\ldots,T$, the algorithm chooses a point ${\mathbf{x}}_t$, and the oracle returns a stochastic gradient estimate ${\mathbf{g}}_t:=\nabla f({\mathbf{x}}_t)+{\boldsymbol{\xi}}_t$, where ${\mathbb{E}}[{\boldsymbol{\xi}}_t|{\mathbf{x}}_t]=0$ and ${\mathbb{E}}[{\|{\boldsymbol{\xi}}_t\|}^2|{\mathbf{x}}_t]\leq \sigma^2$ for some known $\sigma^2$. The algorithm then uses ${\mathbf{g}}_t$ (as well as ${\mathbf{g}}_1,\ldots,{\mathbf{g}}_{t-1}$ and ${\mathbf{x}}_1,\ldots,{\mathbf{x}}_{t}$) to select a new point ${\mathbf{x}}_{t+1}$. After $T$ iterations, the algorithm returns a final point ${{\mathbf{x}_{\mathrm{out}}}}$, which depends on ${\mathbf{g}}_1,\ldots,{\mathbf{g}}_T$ and ${\mathbf{x}}_1,\ldots,{\mathbf{x}}_T$. \[thm:infdim\] Consider any deterministic algorithm as above, which satisfies the following: - There exists a finite $C_{T}$ (dependent only on $T$) such that for any initialization ${\mathbf{x}}_1$ and any $t\in [T]$, if ${\mathbf{g}}_1=\ldots={\mathbf{g}}_{t}=\mathbf{0}$, then ${\|{\mathbf{x}}_{t+1}-{\mathbf{x}}_1\|}\leq C_{T}$. Moreover, if this holds for $t=T$, then ${\|{{\mathbf{x}_{\mathrm{out}}}}-{\mathbf{x}}_1\|}\leq C_T$. - For any $t\in [T]$, ${\mathbf{x}}_{t+1}$ is a fixed continuous function of ${\mathbf{x}}_1,{\mathbf{g}}_1,\ldots,{\mathbf{x}}_t,{\mathbf{g}}_t$, and ${{\mathbf{x}_{\mathrm{out}}}}$ is a fixed continuous function of ${\mathbf{x}}_1,{\mathbf{g}}_1,\ldots,{\mathbf{x}}_T,{\mathbf{g}}_T$. 
Then for any $\delta\in (0,1)$, and any choice of random variables ${\boldsymbol{\xi}}_t$ satisfying the assumptions above, there exists a dimension $d$, a twice-differentiable function $f:{\mathbb{R}}^d\mapsto{\mathbb{R}}$ with $2$-Lipschitz gradients and $4$-Lipschitz Hessians, and an initialization point ${\mathbf{x}}_1$ satisfying $f({\mathbf{x}}_1)-\inf_{\mathbf{x}}f({\mathbf{x}})\leq 1$, such that with probability at least $1-\delta$, $${\|\nabla f({{\mathbf{x}_{\mathrm{out}}}})\|}\geq \frac{1}{2}.$$ Moreover, if there is no stochastic noise (${\boldsymbol{\xi}}_t\equiv 0$), then the result holds for $d=1$. Intuitively, the first condition in the theorem requires that the algorithm does not “move” too much from the initialization point ${\mathbf{x}}_1$, if all stochastic gradients are zero (this is trivially satisfied by SGD and by any other reasonable algorithm we are aware of). The second condition requires the iterates produced by the algorithm to depend continuously on the previous iterates and stochastic gradients (again, this is satisfied by SGD). The theorem suggests that to get non-trivial results, we must either use a dimension-dependent analysis, use a non-continuous/adaptive/randomized scheme to compute ${{\mathbf{x}_{\mathrm{out}}}}$, or measure the performance of the generated sequence using an optimality criterion that does not depend on a fixed point (e.g., the average gradient $\frac{1}{T}\sum_{t=1}^{T}{\|\nabla f({\mathbf{x}}_t)\|}$ or $\min_{t}{\|\nabla f({\mathbf{x}}_t)\|}$). We note that the positive result of [@fang2019sharp] both assumes a finite dimension and computes ${{\mathbf{x}_{\mathrm{out}}}}$ according to an adaptive non-continuous decision rule (involving branching depending on how far the iterates have moved), hence there is no contradiction with the theorem above. We will first prove the result in the case where there is no noise, i.e. 
${\mathbf{g}}_t=\nabla f({\mathbf{x}}_t)$ deterministically, in which case ${\mathbf{x}}_2,\ldots,{\mathbf{x}}_{T}$ and ${{\mathbf{x}_{\mathrm{out}}}}$ are deterministic functions of ${\mathbf{x}}_1$. To that end, let $d=1$ and let $f(x)=s(x)$, where $s$ is the sigmoid-like function $$s(x)~=~\begin{cases} -\frac{1}{2} & x\leq -1\\ \frac{2}{3}(x+1)^3 -\frac{1}{2}& x\in [-1,-\frac{1}{2}] \\ -\frac{2}{3}x^3+x& x\in [-\frac{1}{2},\frac{1}{2}] \\ \frac{2}{3}(x-1)^3+\frac{1}{2}& x\in [\frac{1}{2},1] \\ \frac{1}{2} & x\geq 1\end{cases}$$ This function smoothly and monotonically interpolates between $-1/2$ at $x=-1$ and $1/2$ at $x=1$. It can be easily verified to have $2$-Lipschitz gradients and $4$-Lipschitz Hessians, and for any $x$, satisfies $f(x)-\inf_{x}f(x) \leq 1$. Let us consider the iterates generated by the algorithm, $x_1,\dots,x_T$ and ${{x_{\mathrm{out}}}}$, as we make $x_1\rightarrow\infty$. Our function is such that $\nabla f(x)=0$ for all $x\geq 1$, so at every iteration, the algorithm gets $g_t=0$ as long as $x_t \geq 1$. Moreover, by the assumptions, as long as the gradients are zero, $|x_t-x_1|$ is bounded. As a result, by induction and our assumption that $|{{x_{\mathrm{out}}}}-x_1|$ is bounded, we get that ${{x_{\mathrm{out}}}}\rightarrow\infty$. A similar argument shows that when $x_1\rightarrow -\infty$, we also have ${{x_{\mathrm{out}}}}\rightarrow -\infty$. Next, we argue that ${{x_{\mathrm{out}}}}$ is a continuous function of $x_1$. Indeed, $x_2$ is a continuous function of $x_1$, since it is a continuous function of $g_1=\nabla f(x_1)$ by assumption, $\nabla f(x_1)$ is Lipschitz (hence continuous) in $x_1$, and a composition of continuous functions is continuous. By induction, a similar argument holds for $x_t$ for any $t$, and hence also for ${{x_{\mathrm{out}}}}$.
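The claimed properties of $s$ can be sanity-checked numerically. A minimal sketch (illustrative only, not part of the proof; the grid size and finite-difference tolerances are ad hoc choices):

```python
import numpy as np

def s(x):
    # The piecewise "sigmoid-like" function defined above.
    if x <= -1:
        return -0.5
    if x <= -0.5:
        return (2 / 3) * (x + 1) ** 3 - 0.5
    if x <= 0.5:
        return -(2 / 3) * x ** 3 + x
    if x <= 1:
        return (2 / 3) * (x - 1) ** 3 + 0.5
    return 0.5

xs = np.linspace(-2.0, 2.0, 80001)
step = xs[1] - xs[0]
vals = np.array([s(x) for x in xs])
grad = np.diff(vals) / step                      # forward differences of s

# s interpolates monotonically between -1/2 and 1/2, so s(x) - inf s <= 1.
assert abs(s(-1.0) + 0.5) < 1e-12 and abs(s(1.0) - 0.5) < 1e-12
assert np.all(np.diff(vals) >= -1e-12)           # monotone
assert vals.max() - vals.min() <= 1.0 + 1e-9

# s'(0) = 1, and finite differences are consistent with a 2-Lipschitz gradient.
assert abs((s(step) - s(-step)) / (2 * step) - 1.0) < 1e-6
assert np.max(np.abs(np.diff(grad))) / step <= 2.0 + 1e-4
```

Since $s'$ vanishes outside $[-1,1]$ while $s'(0)=1$, the check also confirms the two facts the proof relies on: zero gradients in the tails and a gradient of norm $1$ at the origin.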
Overall, we showed that ${{x_{\mathrm{out}}}}$ is a continuous function of $x_1$, that ${{x_{\mathrm{out}}}}\rightarrow\infty$ when $x_1\rightarrow\infty$, and that ${{x_{\mathrm{out}}}}\rightarrow-\infty$ when $x_1\rightarrow-\infty$. Therefore, by the intermediate value theorem, there exists some $x_1$ for which ${{x_{\mathrm{out}}}}$ is precisely zero, in which case $|f'({{x_{\mathrm{out}}}})|=|f'(0)|=|s'(0)|=1$, satisfying the Theorem statement. It remains to prove the theorem in the noisy case, where ${\boldsymbol{\xi}}_t$ are non-zero random variables. In that case, instead of choosing $f(x)=s(x)$, we let $f({\mathbf{x}})=s(\langle {\mathbf{x}}, {\mathbf{e}}_r \rangle)$, where the coordinate $r$ is defined as $$r:=\arg\min_{j\in [d]} \max_{t\in [T]}{\mathbb{E}}[\langle{\boldsymbol{\xi}}_t, {\mathbf{e}}_j\rangle^2].$$ Since we assume $\max_t {\mathbb{E}}[{\|{\boldsymbol{\xi}}_t\|}^2]=\max_t \sum_{j=1}^{d}{\mathbb{E}}[\langle{\boldsymbol{\xi}}_{t}, {\mathbf{e}}_j\rangle^2]$ is bounded by $\sigma^2$ independently of $d$, it follows that the variance of ${\boldsymbol{\xi}}_1,\ldots,{\boldsymbol{\xi}}_T$ along coordinate $r$ goes to zero as $d\rightarrow \infty$. Therefore, by making $d$ large enough and using Chebyshev’s inequality, we can ensure that $\max_t |\langle {\boldsymbol{\xi}}_{t},{\mathbf{e}}_r\rangle|$ is arbitrarily small with arbitrarily high probability. Since the gradients of $f$ are Lipschitz, and we assume each ${\mathbf{x}}_{t+1}$ is a continuous function of the noisy gradients ${\mathbf{g}}_1,{\mathbf{g}}_2,\ldots,{\mathbf{g}}_t$, it follows that the trajectory of ${\mathbf{x}}_1,{\mathbf{x}}_2,\ldots,{\mathbf{x}}_{T}$ and ${{\mathbf{x}_{\mathrm{out}}}}$ on the $r$-th coordinate can be made arbitrarily close to the noiseless case analyzed earlier (where ${\boldsymbol{\xi}}_t\equiv \mathbf{0}$), with arbitrarily high probability.
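The embedding argument can be illustrated numerically: for noise with total second moment $\sigma^2$ spread isotropically over $d$ coordinates, the deviation along any fixed coordinate shrinks as $d$ grows (an illustrative simulation with Gaussian noise; the specific dimensions are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
sigma, T = 1.0, 10

def max_coord_deviation(d):
    # T noise vectors with E||xi_t||^2 = sigma^2, i.e. variance sigma^2/d per coordinate.
    xi = rng.normal(scale=sigma / np.sqrt(d), size=(T, d))
    # Maximal deviation of the noise along one fixed coordinate.
    return np.abs(xi[:, 0]).max()

low_d, high_d = max_coord_deviation(10), max_coord_deviation(100_000)
assert high_d < low_d    # larger d => smaller per-coordinate deviation
assert high_d < 0.1      # roughly 30x the per-coordinate standard deviation
```

Taking $d\to\infty$ drives this per-coordinate deviation to zero, which is exactly what lets the noisy trajectory track the noiseless one along the chosen coordinate.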
In particular, we can find an initialization point ${\mathbf{x}}_1$ such that the $r$-th coordinate of ${{\mathbf{x}_{\mathrm{out}}}}$ is arbitrarily close to $0$, hence the gradient norm is arbitrarily close to $1$ (and in particular, larger than $1/2$). The theorem considers deterministic algorithms for simplicity, but the same proof idea holds for larger families of randomized algorithms, where the randomness is used “obliviously”. For example, consider the popular technique of adding random perturbations to the iterates: If the perturbations have a fixed distribution with finite variance, then we can always embed our construction in a high enough dimension, so that the effective variance of the perturbations is arbitrarily small, and we are back to the deterministic setting. Lower bounds on SGD {#subsec:sgdlowbound} ------------------- In this subsection, we focus on the analysis of SGD in the nonconvex setting. We present two main results: A lower bound on the performance of SGD with an aggregation step for objectives with $L$-Lipschitz gradient, followed by a lower bound in the case where the objective has $\rho$-Lipschitz Hessian that applies to “plain” SGD methods that do not perform an aggregation step. In both cases, the step sizes chosen by the method are allowed to be adaptive, in the sense that they are allowed to depend on past iterates and gradients. This dependence is *not* allowed to be completely general, but rather we assume that the dependence on the past iterates and gradients is done through a function of their norms and the dot-products between them (in the Lipschitz Hessian case, we also allow the step size to depend on the Hessians). Note that all commonly used adaptive schemes (including Adagrad [@duchi2011adaptive] and normalized gradient [@nesterov1984minimization; @kiwiel2001convergence], among others) follow this type of scheme. We start the analysis with a technical lemma.
\[L:detbound\] Let $f:{\mathbb{R}}^d \mapsto {\mathbb{R}}$ be a function with $L$-Lipschitz gradient, and assume that the vectors ${\mathbf{y}}_1,\dots,{\mathbf{y}}_m, {\mathbf{z}}_1,\dots,{\mathbf{z}}_n, {\boldsymbol{\gamma}}\in {\mathbb{R}}^d$ ($n,m\in\mathbb{N}$) are such that 1. \[itm:grad\_eq\] $\nabla f({\mathbf{y}}_1) = \dots = \nabla f({\mathbf{y}}_m)= \nabla f({\mathbf{z}}_1) = \dots = \nabla f({\mathbf{z}}_n) = {\boldsymbol{\gamma}}$, 2. \[itm:constant\_vals\] $f({\mathbf{y}}_1)=\dots=f({\mathbf{y}}_m)$, and 3. \[itm:constant\_prod\] $\langle {\boldsymbol{\gamma}}, {\mathbf{y}}_1\rangle = \dots = \langle {\boldsymbol{\gamma}}, {\mathbf{y}}_m\rangle$. Then there exists a function $\hat f$ with $L$-Lipschitz gradient that has the same first-order information as $f$ at $\{{\mathbf{z}}_i\}_{i\in [n]}$, i.e., for all $i\in [n]$ $$\begin{aligned} & \hat f({\mathbf{z}}_{i}) = f({\mathbf{z}}_{i}), \\ & \nabla \hat f({\mathbf{z}}_i) = \nabla f({\mathbf{z}}_i) = {\boldsymbol{\gamma}},\end{aligned}$$ and in addition, for all $j\in [m]$ $$\begin{aligned} & \hat f({\mathbf{y}}_j) \geq \min_{k\in [n]} f({\mathbf{z}}_k) - \frac{1}{L} \|{\boldsymbol{\gamma}}\|^2, \\ & \nabla \hat f({\mathbf{y}}_j) = \nabla f({\mathbf{y}}_j) = {\boldsymbol{\gamma}}.\end{aligned}$$ We postpone the proof of this lemma to the appendix and turn to present the first main result of this subsection. 
\[thm:aggregation\_step\] Consider a first-order method that given a function $f:{\mathbb{R}}^d\rightarrow{\mathbb{R}}$ and an initial point ${\mathbf{x}}_1\in {\mathbb{R}}^d$ generates a sequence of points $\{{\mathbf{x}}_i\}$ satisfying $$\begin{aligned} & {\mathbf{x}}_{t+1} = {\mathbf{x}}_t + \eta_{{\mathbf{x}}_1,\dots,{\mathbf{x}}_t} \cdot (\nabla f({\mathbf{x}}_t)+{\boldsymbol{\xi}}_t), \quad t\in [T-1],\end{aligned}$$ where ${\boldsymbol{\xi}}_i$ are some random noise vectors, and returns a point ${{\mathbf{x}_{\mathrm{out}}}}\in {\mathbb{R}}^d$ as a non-negative linear combination of the iterates: $${{\mathbf{x}_{\mathrm{out}}}}= \sum_{t=1}^T \zeta^{(t)}_{{\mathbf{x}}_1,\dots,{\mathbf{x}}_T} {\mathbf{x}}_t.$$ We further assume that the step sizes $\eta_{{\mathbf{x}}_1,\dots,{\mathbf{x}}_t}$ and aggregation coefficients $\zeta^{(t)}_{{\mathbf{x}}_1,\dots,{\mathbf{x}}_T}$ are deterministic functions of the norms and inner products between the vectors ${\mathbf{x}}_1,\dots,{\mathbf{x}}_t, \nabla f({\mathbf{x}}_1)+{\boldsymbol{\xi}}_1,\dots, \nabla f({\mathbf{x}}_t)+{\boldsymbol{\xi}}_t$. Then for any $L,\Delta, \sigma\in {\mathbb{R}}_{++}$ and $T\in\mathbb{N}$ there exists a function $f:{\mathbb{R}}^{T}\mapsto {\mathbb{R}}$ with $L$-Lipschitz gradient, a point ${\mathbf{x}}_1\in {\mathbb{R}}^T$ and independent random variables ${\boldsymbol{\xi}}_t$ with ${\mathbb{E}}[{\boldsymbol{\xi}}_t]=0$ and ${\mathbb{E}}[\|{\boldsymbol{\xi}}_t\|^2]=\sigma^2$ such that $\forall t\in[T]$ $$\begin{aligned} & f({\mathbf{x}}_1) - f({\mathbf{x}}_t) \overset{\text{a.s.}}{\leq} \Delta, \\ & \nabla f({\mathbf{x}}_t) \overset{\text{a.s.}}{=} {\boldsymbol{\gamma}}, \end{aligned}$$ and in addition, $$\begin{aligned} & f({\mathbf{x}}_1) - f({{\mathbf{x}_{\mathrm{out}}}}) \overset{\text{a.s.}}{\leq} \Delta + \frac{\sigma}{2L} \sqrt{\frac{L \Delta}{T-1} }, \\ & \nabla f({{\mathbf{x}_{\mathrm{out}}}}) \overset{\text{a.s.}}{=} {\boldsymbol{\gamma}}. 
\end{aligned}$$ where ${\boldsymbol{\gamma}}\in {\mathbb{R}}^T$ is a vector such that $$\|{\boldsymbol{\gamma}}\|^2 = \frac{\sigma}{2} \sqrt{\frac{L \Delta}{T-1} }.$$ We start by assuming that the algorithm performs gradient steps with fixed step sizes $\eta_t$ and aggregation coefficients $\zeta_t$, i.e., the algorithm is defined by the rule $$\begin{aligned} & {\mathbf{x}}_{t+1} = {\mathbf{x}}_t - \eta_t (\nabla f({\mathbf{x}}_t) + {\boldsymbol{\xi}}_t), \quad t\in[T-1], \\ & {{\mathbf{x}_{\mathrm{out}}}}= \sum_{t=1}^T \zeta_t {\mathbf{x}}_t.\end{aligned}$$ Under this assumption, the proof proceeds by defining an adversarial objective and noise distribution, showing that the iterates of SGD on that example possess the claimed properties, and then, using [Lemma \[L:detbound\]]{}, modifying the objective at ${{\mathbf{x}_{\mathrm{out}}}}$ so that the claimed bounds at ${{\mathbf{x}_{\mathrm{out}}}}$ are attained while keeping the behavior of the function at the previous iterates unaffected. We start by defining an adversarial example, choosing the noise vectors $\{{\boldsymbol{\xi}}_t\}$ to be independent random variables distributed such that $$\label{def:nonconvex_xi} P({\boldsymbol{\xi}}_t=\pm \sigma {\mathbf{e}}_{t+1}) = \frac{1}{2}, \quad t\in [T-1],$$ where ${\mathbf{e}}_i$ stands for the $i$-th canonical unit vector, and defining the objective $f:=f_{\{\eta_t\},\{\zeta_t\}}:{\mathbb{R}}^{T} \mapsto {\mathbb{R}}$ by $$\label{def:nonconvex_f} f_{\{\eta_t\},\{\zeta_t\}}({\mathbf{x}}) := G \cdot \langle {\mathbf{x}}, {\mathbf{e}}_1\rangle + \sum_{t=1}^{T-1} h_t (\langle {\mathbf{x}}, {\mathbf{e}}_{t+1}\rangle),$$ where $G \geq 0$ is such that $$G^2 = \frac{\sigma}{2} \sqrt{\frac{L \Delta}{T-1} },$$ and the functions $h_t$ are defined as follows: First denote by $h^{(1,L)}_{b,-}(x)$ and $h^{(1,L)}_{b,+}(x)$ the functions (see [Fig.
\[fig:sfunc\]]{}) $$\begin{aligned} & h^{(1,L)}_{b,+}(x) := \begin{cases} \frac{L}{2} x^2 & |x| \leq b/4,\\ \frac{L}{16} b^2 - \frac{L}{2} (|x|-b/2)^2 & b/4 < |x| < b/2, \\ \frac{L}{16} b^2 & |x|\geq b/2, \end{cases}\end{aligned}$$ $$\begin{aligned} & h^{(1,L)}_{b,-}(x) := \begin{cases} 0 & |x| \leq b/2,\\ \frac{L}{2} (|x|-b/2)^2 & b/2 \leq |x| \leq 3b/4,\\ \frac{L}{16} b^2 - \frac{L}{2} (|x|-b)^2 & 3b/4 < |x| < b, \\ \frac{L}{16} b^2 & |x|\geq b, \end{cases}\end{aligned}$$ then at iterates $t$ where the aggregation coefficient satisfies $|\zeta_{t+1}| \leq \frac{1}{2}$, take $h_t=h^{(1,L)}_{|\eta_t| \sigma,-}$, and otherwise take $h_t=h^{(1,L)}_{|\eta_t| \sigma,+}$. Note that for all $t\in[T-1]$, $$\begin{aligned} & h_t(0)=0, \\ & h_t(x)=h_t(-x), \quad \forall x\in {\mathbb{R}}, \\ & h_t'(0)=h_t'(\eta_t \sigma)=h_t'(-\eta_t \sigma)=h_t'(\zeta_{t+1} \eta_t \sigma)=h_t'(-\zeta_{t+1} \eta_t \sigma)=0,\end{aligned}$$ and that $h_t$ has $L$-Lipschitz gradient. From the definition of $f$ we conclude that, being the sum of a linear term and a separable sum of functions with $L$-Lipschitz gradient, $f$ also has an $L$-Lipschitz gradient. ![$h^{(1,1)}_{1,-}$ and $h^{(1,1)}_{1,+}$.[]{data-label="fig:sfunc"}](sfig1.eps){width="0.6\linewidth"} We now turn to analyze the dynamics of SGD when applied on the function $f$ defined above.
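Before doing so, the stated properties of $h^{(1,L)}_{b,\pm}$ can be checked numerically; a minimal sketch (illustrative only; $b=L=1$ and the finite-difference tolerances are arbitrary choices):

```python
import numpy as np

def h_plus(x, b, L):
    # h^{(1,L)}_{b,+} from the definition above.
    ax = abs(x)
    if ax <= b / 4:
        return (L / 2) * ax ** 2
    if ax < b / 2:
        return (L / 16) * b ** 2 - (L / 2) * (ax - b / 2) ** 2
    return (L / 16) * b ** 2

def h_minus(x, b, L):
    # h^{(1,L)}_{b,-} from the definition above, written with |x| throughout
    # so that the function is even, matching h_t(x) = h_t(-x).
    ax = abs(x)
    if ax <= b / 2:
        return 0.0
    if ax <= 3 * b / 4:
        return (L / 2) * (ax - b / 2) ** 2
    if ax < b:
        return (L / 16) * b ** 2 - (L / 2) * (ax - b) ** 2
    return (L / 16) * b ** 2

b, L = 1.0, 1.0
xs = np.linspace(-2.0, 2.0, 80001)
step = xs[1] - xs[0]
for h in (h_plus, h_minus):
    vals = np.array([h(x, b, L) for x in xs])
    assert h(0.0, b, L) == 0.0              # h_t(0) = 0
    assert np.allclose(vals, vals[::-1])    # h_t(x) = h_t(-x)
    # h_t'(0) = h_t'(+-b) = 0, with b playing the role of |eta_t| * sigma.
    for z in (0.0, b, -b):
        assert abs(h(z + step, b, L) - h(z - step, b, L)) / (2 * step) < 1e-4
    # Finite differences are consistent with an L-Lipschitz gradient.
    grad = np.diff(vals) / step
    assert np.max(np.abs(np.diff(grad))) / step <= L + 1e-3
```

The flat plateaus of $h^{(1,L)}_{b,-}$ near $0$ and of $h^{(1,L)}_{b,+}$ beyond $b/2$ are what make the remaining zero-derivative conditions at $\pm\zeta_{t+1}\eta_t\sigma$ hold in the respective cases.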
Given the objective $f$ and the starting point ${\mathbf{x}}_1=0$, the algorithm at the first iteration sets $${\mathbf{x}}_2 = {\mathbf{x}}_1 - \eta_1 (\nabla f({\mathbf{x}}_1)+ {\boldsymbol{\xi}}_1) = (-\eta_1 G, \pm \eta_1 \sigma, 0, \dots, 0)^\top,$$ hence, from the properties of $h_1$, we get $$\begin{aligned} & f({\mathbf{x}}_2) = - G^2 \eta_1 + h_1(\pm \eta_1 \sigma) + \sum_{t=2}^{T-1} h_t(0) = - G^2 \eta_1 + h_1(\eta_1 \sigma), \\ & \nabla f({\mathbf{x}}_2) = G {\mathbf{e}}_1 + h_1'(\pm \eta_1 \sigma){\mathbf{e}}_2 + \sum_{t=2}^{T-1} h_t'(0) {\mathbf{e}}_{t+1} = G {\mathbf{e}}_1.\end{aligned}$$ Similarly, at the $t$-th iteration, $t\in [T-1]$, the algorithm sets $$\label{eq:x_t_value} {\mathbf{x}}_{t+1} = {\mathbf{x}}_t - \eta_t (\nabla f({\mathbf{x}}_t)+ {\boldsymbol{\xi}}_t) = (-\sum_{k=1}^t \eta_k G, \pm \eta_1 \sigma, \dots, \pm \eta_t \sigma, 0, \dots, 0)^\top,$$ which leads to $$\begin{aligned} & f({\mathbf{x}}_t) = - G^2 \sum_{k=1}^{t-1} \eta_k + \sum_{k=1}^{t-1} h_k(\eta_k \sigma), \\ & \nabla f({\mathbf{x}}_t) = G {\mathbf{e}}_1. \label{eq:nabla_F_x_t}\end{aligned}$$ At the aggregation step, the algorithm sets $$\label{eq:x_out} {{\mathbf{x}_{\mathrm{out}}}}= \sum_{t=1}^T \zeta_t {\mathbf{x}}_t = (-\sum_{t=1}^T \zeta_t \sum_{k=1}^{t-1} \eta_k G, \pm \zeta_2 \eta_1 \sigma, \dots, \pm \zeta_T \eta_{T-1} \sigma)^\top,$$ then by the properties of $h_t$, we get $$\begin{aligned} & f({{\mathbf{x}_{\mathrm{out}}}}) = - G^2 \sum_{t=1}^T \zeta_t \sum_{k=1}^{t-1} \eta_k + \sum_{k=1}^{T-1} h_k(\zeta_{k+1} \eta_k \sigma), \label{eq:F_xout} \\ & \nabla f({{\mathbf{x}_{\mathrm{out}}}}) = G {\mathbf{e}}_1, \label{eq:nabla_F_xout}\end{aligned}$$ where the first equality follows since $h_t$ is even, and the second equality follows from $h_t'(\pm \zeta_{t+1} \eta_t \sigma)=0$.
We get $$\begin{aligned} f({\mathbf{x}}_1) - f({\mathbf{x}}_t) & = 0 + G^2 \sum_{k=1}^{t-1} \eta_k - \sum_{k=1}^{t-1} h_k(\eta_k \sigma) = \sum_{k=1}^{t-1} \left(G^2 \eta_k - h_k(\eta_k \sigma)\right),\end{aligned}$$ and therefore $$\begin{aligned} f({\mathbf{x}}_1) - f({\mathbf{x}}_t) & = \sum_{k=1}^{t-1} \left(G^2 \eta_k - h_k(\eta_k \sigma)\right) \\ & = \sum_{k=1}^{t-1} \left(\frac{\sigma}{2} \sqrt{\frac{L \Delta}{T - 1} } \eta_k - \frac{L}{16} \eta_k^2 \sigma^2\right) \\ & = \sum_{k=1}^{t-1} \left(\sqrt{\frac{\Delta}{T-1} } \left(\frac{1}{2} \eta_k \sigma \sqrt{L}\right) - \frac{1}{4} \left(\frac{1}{2}\eta_k \sigma \sqrt{L}\right)^2 \right) \\ & \leq (t-1) \frac{\Delta}{T-1} \leq \Delta,\end{aligned}$$ where the penultimate inequality follows since $a x - \frac{1}{4} x^2 \leq a^2$ for all $x\in {\mathbb{R}}$. To complete our treatment of the fixed-step case, we turn to show that it is possible to modify the value of $f$ at ${{\mathbf{x}_{\mathrm{out}}}}$ without affecting the first-order information at the previous iterates, and such that the claimed bound on $f({{\mathbf{x}_{\mathrm{out}}}})$ is satisfied. For this purpose, we proceed to show that [Lemma \[L:detbound\]]{} can be applied when taking for ${\mathbf{y}}_1,\dots,{\mathbf{y}}_m$ all the possible values for ${{\mathbf{x}_{\mathrm{out}}}}$, and for ${\mathbf{z}}_1,\dots,{\mathbf{z}}_n$ all possible values the random variables $\{{\mathbf{x}}_t\}_{t\in [T]}$ can attain. Indeed, in view of the expressions for $\nabla f({\mathbf{x}}_t)$ and $\nabla f({{\mathbf{x}_{\mathrm{out}}}})$ derived above, the first condition of [Lemma \[L:detbound\]]{} holds with ${\boldsymbol{\gamma}}= G {\mathbf{e}}_1$. Furthermore, the second requirement follows from the expression for $f({{\mathbf{x}_{\mathrm{out}}}})$ above, and the third requirement follows since $$\langle \nabla f({{\mathbf{x}_{\mathrm{out}}}}), {{\mathbf{x}_{\mathrm{out}}}}\rangle = -\sum_{t=1}^T \zeta_t \sum_{k=1}^{t-1} \eta_k G^2$$ does not depend on the sign of the noise vectors ${\boldsymbol{\xi}}_t$.
As all the requirements of [Lemma \[L:detbound\]]{} hold, we conclude that there exists a function $\hat f$ that shares the first-order information of $f$ at the iterates, and satisfies the claimed properties $$\begin{aligned} & \hat f({{\mathbf{x}_{\mathrm{out}}}}) \overset{\text{a.s.}}{\geq} \min_k f({\mathbf{x}}_k) - \frac{G^2}{L}, \\ & \nabla \hat f({{\mathbf{x}_{\mathrm{out}}}}) \overset{\text{a.s.}}{=} G {\mathbf{e}}_1.\end{aligned}$$ As SGD does not have access to the objective beyond the first-order information at the iterates, we conclude that the algorithm proceeds on $\hat f$ with exactly the same dynamics as it does on $f$, maintaining all bounds derived above. We turn to the general, adaptive step-size case: $$\begin{aligned} & {\mathbf{x}}_{t+1} = {\mathbf{x}}_t + \eta_{{\mathbf{x}}_1,\dots,{\mathbf{x}}_t} \cdot (\nabla f({\mathbf{x}}_t)+{\boldsymbol{\xi}}_t), \quad t\in [T-1], \\ & {{\mathbf{x}_{\mathrm{out}}}}= \sum_{t=1}^T \zeta^{(t)}_{{\mathbf{x}}_1,\dots,{\mathbf{x}}_T} {\mathbf{x}}_t.\end{aligned}$$ Here, our goal is to show that constants $\eta_t$ and $\zeta_t$ can be chosen in such a way that the method, when applied on $f_{\{\eta_t\},\{\zeta_t\}}$ constructed above, chooses step sizes and aggregation coefficients that are almost surely equal to the selected constants, i.e., $$\eta_{{\mathbf{x}}_1,\dots,{\mathbf{x}}_t} \overset{\text{a.s.}}{=} \eta_t, \quad \zeta^{(t)}_{{\mathbf{x}}_1,\dots,{\mathbf{x}}_T} \overset{\text{a.s.}}{=} \zeta_t,$$ and thus the proof for the fixed-step case can proceed without change. We use the following procedure to select $\eta_t$ and $\zeta_t$. We start by executing the first step of the algorithm on the initial point ${\mathbf{x}}_1=0$ and $f$, where the constants $\{\eta_t\}$ and $\{\zeta_t\}$ are set to arbitrary values.
Note that the first-order information of $f$ at ${\mathbf{x}}_1$ is independent of the choice of $\eta_t, \zeta_t$, and that the norm of the noise vector ${\boldsymbol{\xi}}_1$, as well as its inner products with ${\mathbf{x}}_1$ and $\nabla f({\mathbf{x}}_1)$, are independent of the specific direction chosen for the noise; therefore, by the assumption on the step size $\eta_{{\mathbf{x}}_1}$, it is independent of the specific value of ${\boldsymbol{\xi}}_1$, i.e., it is almost surely a constant. We denote this constant by $\eta_1$. We continue by executing the second step of the algorithm on $f$, using the value of $\eta_1$ chosen above while keeping the constants $\eta_2,\dots,\eta_{T-1}$, $\{\zeta_t\}_{t\in[T]}$ set to arbitrary values. As in the first iteration, the first-order information of $f$ at ${\mathbf{x}}_2$ is independent of the specific choice of $\eta_2,\dots,\eta_{T-1}$, $\{\zeta_t\}_{t\in[T]}$, and the norm of the noise vector ${\boldsymbol{\xi}}_2$, as well as its inner products with ${\mathbf{x}}_1, {\mathbf{x}}_2$, $\nabla f({\mathbf{x}}_1), \nabla f({\mathbf{x}}_2)$ and ${\boldsymbol{\xi}}_1$, are independent of the specific direction chosen for the noise; therefore, by the assumption on $\eta_{{\mathbf{x}}_1,{\mathbf{x}}_2}$, it is almost surely a constant. As before, we denote the step size performed by the algorithm by $\eta_2$. Continuing in this fashion, we obtain a set of constants $\eta_1,\dots,\eta_{T-1}$ with the property that when the method is applied to $f=f_{\{\eta_t\},\{\zeta_t\}}$, then for any choice of aggregation coefficients $\{\zeta_t\}$ the step sizes chosen by the method are almost surely equal to $\{\eta_t\}$. Finally, executing the aggregation step, by the assumption on the aggregation function, the coefficients are almost surely constants, which we denote by $\zeta_1,\dots,\zeta_{T}$.
To conclude, we have found a function $f=f_{\{\eta_t\},\{\zeta_t\}}$ such that the step sizes performed by the method on $f$ are almost surely $\eta_1,\dots,\eta_{T-1}$ and the aggregation coefficients chosen by the method are almost surely $\zeta_1,\dots,\zeta_{T}$, hence the proof can continue as in the fixed-step case. The example provided by [Thm. \[thm:aggregation\_step\]]{} comes with a guarantee that the gradient of the objective at all iterates is almost surely a constant; as a result, the theorem is applicable for forming lower bounds for all first-order optimality criteria used in the literature, including the best expected gradient norm $\min_t {\mathbb{E}}\|\nabla f({\mathbf{x}}_t)\|$, average expected gradient norm $\frac{1}{T} \sum_t {\mathbb{E}}\|\nabla f({\mathbf{x}}_t)\|$, and expected norm of the average gradient ${\mathbb{E}}\|\frac{1}{T} \sum_t \nabla f({\mathbf{x}}_t)\|$, both when taking the actual gradient and when taking the noisy version of the gradient. Note that although the theorem does not directly consider randomized sampling schemes for computing ${{\mathbf{x}_{\mathrm{out}}}}$, the performance of any scheme that samples ${{\mathbf{x}_{\mathrm{out}}}}$ out of $\{{\mathbf{x}}_1,\dots,{\mathbf{x}}_T\}$ is bounded from below by the optimality criterion $\min_{t}{\|\nabla f({\mathbf{x}}_t)\|}$, making the guarantees by the theorem applicable. The example provided by the theorem comes with a guarantee that $f({\mathbf{x}}_1) - f({\mathbf{x}}_t)\leq \Delta$ holds almost surely for any $t$: This is a strictly weaker assumption on ${\mathbf{x}}_1$ than used in the standard bounds in this setting, which require $f({\mathbf{x}}_1) - f({\mathbf{x}}_*)\leq \Delta$ with ${\mathbf{x}}_*$ being a stationary point with $f({\mathbf{x}}_*)\leq {\mathbb{E}}f({\mathbf{x}}_T)$. 
Note that although the specific function used in the proof of the theorem does not have any stationary points, the construction can be adjusted, in regions far enough from the points queried by the method, so that a stationary point exists (see, e.g., [Lemma \[L:detbound\]]{}). Setting $$\Delta \leftarrow \Delta - \frac{ \sigma \sqrt{\Delta L (T-1)+\sigma ^2/16}-\sigma^2/4}{2L(T-1)}$$ in [Thm. \[thm:aggregation\_step\]]{}, we get (after some basic algebra) that $\forall t\in \{1,\dots, T,\mathrm{out}\}$ $$\begin{aligned} & f({\mathbf{x}}_1) - f({\mathbf{x}}_t) \overset{\text{a.s.}}{\leq} \Delta, \\ & \| \nabla f({\mathbf{x}}_t) \|^2 \overset{\text{a.s.}}{\geq} \frac{ \sigma \sqrt{\Delta L (T - 1)+\sigma ^2/16}-\sigma^2/4}{2(T - 1)} \geq \frac{\sigma}{2} \sqrt{\frac{L \Delta}{T - 1} } - \frac{\sigma^2}{8(T - 1)},\end{aligned}$$ establishing that SGD with an additional aggregation step requires $\Omega(\epsilon^{-4})$ iterations to drive any of the standard optimality criteria below $\epsilon$. \[rem:tight\_bounds\] Consider the upper bound by Ghadimi and Lan (see [Thm. \[T:upper\_noncovex\]]{} in the appendix) and set the step size by $$\label{eq:optimal_step} \eta_t \equiv \eta:=\sqrt{\frac{2\Delta}{(T-1)L \sigma^2}},$$ where $\Delta$ is an upper bound on $f({\mathbf{x}}_1)-f({\mathbf{x}}_*)$. We obtain $$\begin{aligned} \min_{t \in [T]} \|\nabla f({\mathbf{x}}_t)\|^2 \leq \frac{2\Delta + L (T-1) \eta^2 \sigma^2}{(T-1) \eta (2-L \eta)} \underset{T\gg 1}{\approx} \frac{2\Delta + L (T-1) \eta^2 \sigma^2}{2 (T-1) \eta} = \sqrt{2}\sigma \sqrt{\frac{L \Delta}{T-1}},\end{aligned}$$ which establishes, on the one hand, that the lower bound obtained in [Thm. \[thm:aggregation\_step\]]{} on the iterates ${\mathbf{x}}_t$ is tight up to the constant factor $2\sqrt{2}$, and, on the other hand, that this constant step-size scheme is optimal up to the same constant.
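The “basic algebra” behind this substitution can be verified numerically; a quick sketch (the parameter values below are arbitrary):

```python
import math

def check_substitution(L, sigma, Delta, T):
    # Delta' as defined by the substitution in the text.
    root = sigma * math.sqrt(Delta * L * (T - 1) + sigma ** 2 / 16) - sigma ** 2 / 4
    Dp = Delta - root / (2 * L * (T - 1))
    assert Dp > 0
    # Guarantees of the aggregation-step theorem, instantiated at Delta':
    gamma_sq = (sigma / 2) * math.sqrt(L * Dp / (T - 1))            # ||gamma||^2
    f_drop = Dp + (sigma / (2 * L)) * math.sqrt(L * Dp / (T - 1))   # value bound
    # The function-value bound recovers the original Delta, and the squared
    # gradient norm equals the first closed form displayed above.
    assert math.isclose(f_drop, Delta, rel_tol=1e-9)
    assert math.isclose(gamma_sq, root / (2 * (T - 1)), rel_tol=1e-9)

for params in [(1.0, 1.0, 1.0, 10), (2.5, 0.3, 4.0, 100), (0.1, 5.0, 0.7, 2)]:
    check_substitution(*params)
```

The checks pass for generic parameters because the substitution is an exact algebraic identity (the positive root of a quadratic in $\|{\boldsymbol{\gamma}}\|^2$), not a numerical coincidence.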
The second main result of this subsection gives a lower bound on the performance of “plain” SGD methods (i.e., methods that do not perform an aggregation step) acting on objectives with Lipschitz Hessians. \[thm:nonconvex\] Consider a method that given a function $f:{\mathbb{R}}^d\rightarrow{\mathbb{R}}$ and an initial point ${\mathbf{x}}_1\in {\mathbb{R}}^d$ generates a sequence of points $\{{\mathbf{x}}_t\}$ satisfying $${\mathbf{x}}_{t+1} = {\mathbf{x}}_t + \eta_{{\mathbf{x}}_1,\dots,{\mathbf{x}}_t} \cdot (\nabla f({\mathbf{x}}_t)+{\boldsymbol{\xi}}_t),$$ where ${\boldsymbol{\xi}}_t$ are some random noise vectors. We further assume that the step sizes $\eta_{{\mathbf{x}}_1,\dots,{\mathbf{x}}_t}$ are deterministic functions of the norms and inner products between ${\mathbf{x}}_1,\dots,{\mathbf{x}}_t, \nabla f({\mathbf{x}}_1)+{\boldsymbol{\xi}}_1,\dots, \nabla f({\mathbf{x}}_t)+{\boldsymbol{\xi}}_t$ and may also depend on the exact second-order information $\nabla^2 f({\mathbf{x}}_1),\dots, \nabla^2 f({\mathbf{x}}_t)$. Then for any $\rho,\Delta,\sigma \in {\mathbb{R}}_{++}$ and $T\in\mathbb{N}$ there exists a function $f:{\mathbb{R}}^{T}\mapsto {\mathbb{R}}$ with $\rho$-Lipschitz Hessian, ${\mathbf{x}}_1\in {\mathbb{R}}^T$, and independent random variables ${\boldsymbol{\xi}}_t$ with ${\mathbb{E}}[{\boldsymbol{\xi}}_t]=0$ and ${\mathbb{E}}[\|{\boldsymbol{\xi}}_t\|^2]=\sigma^2$ such that $\forall t\in[T]$ $$\begin{aligned} & f({\mathbf{x}}_1) - f({\mathbf{x}}_t) \overset{\text{a.s.}}{\leq} \Delta, \\ & \nabla f({\mathbf{x}}_t) \overset{\text{a.s.}}{=} {\boldsymbol{\gamma}}, \end{aligned}$$ where ${\boldsymbol{\gamma}}\in {\mathbb{R}}^T$ is a vector that satisfies $$\|{\boldsymbol{\gamma}}\|^2 = \frac{\sigma}{2}\left(\frac{\rho \Delta ^2}{ (T-1)^2}\right)^{1/3}.$$ We proceed as in the proof of [Thm.
\[thm:aggregation\_step\]]{}, taking for $G$ the positive value that satisfies $$G^2 = \frac{3 \sigma}{32} \left(\frac{16^2 \rho \Delta ^2}{ (T-1)^2}\right)^{1/3} \geq \frac{\sigma}{2}\left(\frac{\rho \Delta ^2}{ (T-1)^2}\right)^{1/3},$$ and set $h_t:=h^{(2,\rho)}_{|\eta_t| \sigma}$, with $h^{(2,\rho)}_{b}$ defined by (see [Fig. \[fig:sfunc2\]]{}) $$\begin{aligned} & h^{(2,\rho)}_{b}(x) := \begin{cases} \frac{\rho}{6} |x|^3 & |x| \leq b/4,\\ \frac{\rho}{192} \left(b^3-12 b^2 |x|+48 b x^2-32 |x|^3\right) & b/4 \leq |x| < 3b/4,\\ \frac{\rho}{32} b^3 - \frac{\rho}{6} \left(b - |x| \right)^3 & 3b/4 \leq |x| < b, \\ \frac{\rho}{32} b^3 & |x|\geq b. \end{cases}\end{aligned}$$ It is straightforward to verify that $h_t$ has $\rho$-Lipschitz Hessian, and as in the Lipschitz gradient case, we have $$\begin{aligned} & h_t(0)=0, \\ & h_t(x)=h_t(-x), \quad \forall x\in {\mathbb{R}}, \\ \text{and}\quad & h_t'(0)=h_t'(\eta_t \sigma)=h_t'(-\eta_t \sigma)=0.\end{aligned}$$ Proceeding as before with these new values, we obtain $$\begin{aligned} & f({\mathbf{x}}_1) - f({\mathbf{x}}_t) \\ & = \sum_{k=1}^{t-1} \left(G^2 \eta_k - h^{(2,\rho)}_{|\eta_k| \sigma}(|\eta_k| \sigma)\right) \\ & = \sum_{k=1}^{t-1} \left( \frac{3 \sigma}{32}\left(\frac{16^2 \rho \Delta ^2}{ (T-1)^2}\right)^{1/3} \eta_k - \frac{\rho}{32} |\eta_k|^3 \sigma^3\right) \\ & = \frac{1}{32} \sum_{k=1}^{t-1} \left(3 \left(\frac{16^2 \Delta ^2} {(T-1)^2}\right)^{1/3} (\rho^{1/3} \sigma \eta_k) - |\rho^{1/3} \eta_k \sigma|^3\right) \\ & \leq (t-1) \frac{\Delta}{T-1} \leq \Delta,\end{aligned}$$ where the penultimate inequality follows from the inequality $3 a x - |x|^3 \leq 2 a^{\frac{3}{2}}$, valid for all $x\in{\mathbb{R}}$ and $a\geq 0$. Finally, note that $h''_t (\eta_t \sigma) = h''_t (-\eta_t \sigma) = 0$, and as a result, the second-order information of $f$ at all iterates is identically zero, thus the proof in the adaptive step-size case can proceed without change.
![$h^{(2,1)}_{1}$.[]{data-label="fig:sfunc2"}](sfig2.eps){width="0.6\linewidth"} Note that the main missing component needed for establishing a result on SGD with aggregation steps in the Lipschitz-Hessian case is a set of necessary and sufficient interpolation conditions for Lipschitz-Hessian functions (as in the case of Lipschitz-gradient, [Thm. \[T:nonconvex\_interpolation\]]{}). The existence of such conditions remains an open question. Lower bounds in the convex quadratic case {#sec:convex} ========================================= In this section, we continue our analysis of the SGD method, showing that even for convex, quadratic functions in moderate dimensions, and a standard Gaussian noise, SGD cannot achieve an iteration complexity better than ${\mathcal{O}}(\epsilon^{-4})$ in order for any of its iterates to have gradient norm less than $\epsilon$. Note that for quadratic functions, the Hessian is constant, so the result still holds under additional Lipschitz assumptions on the Hessian and higher-order derivatives. We emphasize that the lower bounds only hold for the iterates themselves, without any aggregation step. Formally, we have the following: \[thm:sgdlow\] Consider the SGD method defined by $${\mathbf{x}}_{t+1} = {\mathbf{x}}_t + \eta_t \cdot (\nabla f({\mathbf{x}}_t)+{\boldsymbol{\xi}}_t), \quad t\in [T-1],$$ for some $T>1$ and suppose that the step sizes $\eta_1,\ldots,\eta_{T-1}$ are non-negative and satisfy at least one of the following conditions: 1. *(Small step sizes)* $\max_{t\in [T-1]} \eta_t\leq 1/L$, and $\sum_{t=1}^{T-1}\eta_t \leq c\sqrt{T}/L$ for some constant $c$ (independent of the problem parameters). 2. *(Fixed step sizes)* $\eta_t$ is the same for all $t$. 3. *(Polynomial decay schedule)* $\eta_t = \frac{a}{b+t^{\theta}}$ for some non-negative constants $a,b,\theta$ (independent of the problem parameters). 
Then for any $\delta\in (0,1)$, there exists a *quadratic* function $f$ on ${\mathbb{R}}^d$ (for any $d\geq d_0$ with $d_0={\mathcal{O}}(\log(T/\delta)\sigma^2T/(L^2\Delta))$) with $L$-Lipschitz gradients, and ${\mathbf{x}}_1$ for which $f({\mathbf{x}}_1)-\inf_{{\mathbf{x}}}f({\mathbf{x}})\leq \Delta$, such that if ${\boldsymbol{\xi}}_t$ has a Gaussian distribution ${\mathcal{N}}(\mathbf{0},\frac{\sigma^2}{d}I_d)$, with probability at least $1-\delta$, $$\min_{t\in [T-1]}{\|\nabla f({\mathbf{x}}_t)\|}^2 \geq c_0 \frac{\min\{L \Delta, \sigma^2\}}{\sqrt{T}},$$ where $c_0$ is a positive constant depending only on the constants in the conditions stated above. We note that all standard analyses for (non-adaptive) SGD methods rely on one of these step size strategies. Moreover, the proof technique can plausibly be extended to other step sizes. Thus, the theorem provides a strong indication that SGD (without an aggregation step) cannot achieve a better iteration complexity, at least when the optimality criterion is $\min_t{\|\nabla f({\mathbf{x}}_t)\|}$, even for convex quadratic functions.
The proof is based on the following two more technical propositions, which provide lower bounds depending on the step sizes and the problem parameters: \[prop:distance\] For any $L>0,\Delta>0,T>1$ and $\delta\in (0,1)$, there exists a convex quadratic function $f$ on ${\mathbb{R}}^d$ (for any $d\geq d_0$ where $d_0={\mathcal{O}}(\log(T/\delta)\sigma^2T/(L^2\Delta))$) with $L$-Lipschitz gradient, and an ${\mathbf{x}}_1$ with $f({\mathbf{x}}_1)-\inf_{{\mathbf{x}}}f({\mathbf{x}})\leq \Delta$, such that if we initialize SGD at ${\mathbf{x}}_1$ with Gaussian noise ${\mathcal{N}}(\mathbf{0},\frac{\sigma^2}{d}I_d)$ and use step sizes $\eta_1,\ldots,\eta_{T-1}$ in $[0,1/L]$, then with probability at least $1-\delta$, $$\min_{t\in [T]}{\|\nabla f({\mathbf{x}}_t)\|}^2 \geq \frac{\Delta}{25 \max\left\{1/L,\sum_{t=1}^{T-1}\eta_t\right\}}.$$ \[prop:noise\] For any $L>0,\Delta>0,T>1$ and $\delta\in (0,1)$, there exists a convex quadratic function $f$ on ${\mathbb{R}}^d$ (for any $d\geq d_0$ where $d_0={\mathcal{O}}(\log(T/\delta))$) with $L$-Lipschitz gradient, and a vector ${\mathbf{x}}_1$ with $f({\mathbf{x}}_1)-\inf_{{\mathbf{x}}}f({\mathbf{x}})\leq \Delta$, such that if we initialize SGD at ${\mathbf{x}}_1$ with Gaussian noise ${\mathcal{N}}(\mathbf{0},\frac{\sigma^2}{d}I_d)$, then the following holds with probability at least $1-\delta$: - If for all $t$, $\eta_t=\eta$ with $\eta\in [0,1/L)$, then $ \min_{t\in [T]}{\|\nabla f({\mathbf{x}}_t)\|}^2 \geq \frac{L}{2} \min\{\Delta,\frac{\eta \sigma^2}{2-L\eta}\}. $ - If for all $t$, $\eta_t \geq c/L$ for some constant $c>0$ then $\min_{t\in [T]}{\|\nabla f({\mathbf{x}}_t)\|}^2 \geq \frac{\sigma^2 c^2}{2}$. - If $\eta_t = \frac{a}{L(b+t^\theta)}$ for some positive constants $a>0,b\geq 0$ and $\theta\in (0,1)$, then $$\min_{t\in [T]}{\|\nabla f({\mathbf{x}}_t)\|}^2 \geq c_{a,b,\theta} \sigma^2 \min\{1, L \eta_T\},$$ where $c_{a,b,\theta}$ is a constant dependent only on $a,b,\theta$.
The proofs of these propositions appear in [Appendix \[sec:proofs\]]{}. Together, Propositions \[prop:distance\] and \[prop:noise\] imply the theorem: The theorem, under the first condition, is an immediate corollary of [Proposition \[prop:distance\]]{}. Indeed, $$\begin{aligned} \min_{t\in [T]}{\|\nabla f({\mathbf{x}}_t)\|}^2 & \geq \frac{\Delta}{25\max\left\{1/L,\sum_{t=1}^{T-1}\eta_t\right\}} \geq \frac{L \Delta}{25\max\left\{1,c \sqrt{T}\right\}} \\ & \geq \frac{L \Delta}{25\max\{1, c\} \sqrt{T}} \geq \frac{\min\{L \Delta, \sigma^2\}}{25\max\{1, c\} \sqrt{T}}.\end{aligned}$$ As to the second condition, let us consider three cases. First, if $\eta_t=\eta$ is at most $T^{-1/2}/L$, then $\sum_{t=1}^{T-1}\eta_t< \sqrt{T}/L$ and the result again follows from [Proposition \[prop:distance\]]{} as in the previous case. Next, suppose $T^{-1/2}/L \leq \eta < 1/L$, then the result follows from [Proposition \[prop:noise\]]{}: $$\min_{t\in [T]} {\|\nabla f({\mathbf{x}}_t)\|}^2 \geq \frac{L}{2} \min\{\Delta, \frac{ \eta \sigma^2}{2-L\eta}\} \geq \frac{1}{2} \min\{L \Delta, \frac{\sigma^2 T^{-1/2}}{2-T^{-1/2}}\} \geq \frac{1}{2} \frac{\min\{L \Delta,\sigma^2\}}{2\sqrt{T}-1}.$$ The last case for this condition is $\eta \geq 1/L$, in which case, by the second case of [Proposition \[prop:noise\]]{} (applied with $c=1$), the minimal squared gradient norm is bounded below by $\sigma^2/2$, and hence does not converge to zero. As to the third condition (namely $\eta_t = \frac{a}{b+t^{\theta}}$), we can assume without loss of generality that $a>0$ (otherwise we are back to the first condition in the theorem, and nothing is left to prove), and that $\theta \in (0,1)$ (since if $\theta=0$, we are back to the second condition in the theorem, and if $\theta \geq 1$, we are back to the first condition in the theorem). The result then follows from the third case of [Proposition \[prop:noise\]]{}.
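The noise floor underlying the constant-step case of [Proposition \[prop:noise\]]{} is easy to see in simulation. Below, SGD with a constant step size (written with the usual descent sign convention) is run on the simple isotropic quadratic $f({\mathbf{x}})=\frac{L}{2}\|{\mathbf{x}}\|^2$; this is an illustration of the phenomenon only, not the construction used in the proofs. The squared gradient norm settles at the stationary value $L\eta\sigma^2/(2-L\eta)$, twice the proposition's lower bound:

```python
import numpy as np

rng = np.random.default_rng(1)
d, L, eta, sigma, T = 50, 1.0, 0.1, 1.0, 6000

x = np.zeros(d)
grad_sq = []
for t in range(T):
    g = L * x                                            # gradient of (L/2)||x||^2
    grad_sq.append(g @ g)
    xi = rng.normal(scale=sigma / np.sqrt(d), size=d)    # E||xi||^2 = sigma^2
    x = x - eta * (g + xi)                               # descent-sign SGD step

floor = L * eta * sigma ** 2 / (2 - L * eta)             # stationary E||grad f(x_t)||^2
empirical = float(np.mean(grad_sq[T // 2:]))             # average after burn-in
assert 0.8 * floor < empirical < 1.25 * floor
```

However long the run, the squared gradient norm cannot be driven below this floor without shrinking $\eta$; trading the floor against the transient term is what produces the $\sigma\sqrt{L\Delta/T}$-type rates and the matching lower bounds above.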
Additional Proofs {#sec:proofs} ================= Proof of [Lemma \[L:detbound\]]{} {#S:lemma_proof} --------------------------------- We start by recalling a recent and fundamental theorem by Taylor, Hendrickx and Glineur. This theorem can be viewed as providing necessary and sufficient conditions under which a set $\{{\mathbf{x}}_i, {\mathbf{g}}_i, f_i\}_{i\in I}$ can be *interpolated* (or *extended* using the terminology from the classical text [@whitney1934analytic]) to a function $f$ with $L$-Lipschitz gradient such that $f({\mathbf{x}}_i) =f_i$ and $\nabla f({\mathbf{x}}_i) = {\mathbf{g}}_i$ for all $i\in I$. \[T:nonconvex\_interpolation\] Let $L>0$, $d\in \mathbb{N}$ and suppose $\{({\mathbf{x}}_i, {\mathbf{g}}_i, f_i)\}_{i\in I}$ is some finite subset of ${\mathbb{R}}^d\times{\mathbb{R}}^d\times {\mathbb{R}}$, where $I$ is a finite set of indices. Then there exists a function $f$ with $L$-Lipschitz gradient that satisfies $f({\mathbf{x}}_i) = f_i$ and $\nabla f({\mathbf{x}}_i) = {\mathbf{g}}_i$ if and only if $$\label{E:nonconvex_interpolation_conditions} \frac{1}{2L} \|{\mathbf{g}}_i - {\mathbf{g}}_j\|^2 - \frac{L}{4} \|{\mathbf{x}}_i - {\mathbf{x}}_j - \frac{1}{L}({\mathbf{g}}_i - {\mathbf{g}}_j)\|^2 \leq f_i - f_j - \langle {\mathbf{g}}_j, {\mathbf{x}}_i - {\mathbf{x}}_j\rangle, \quad \forall i,j \in I.$$ Theorem \[T:nonconvex\_interpolation\] will be the main tool used in the proof of [Lemma \[L:detbound\]]{} below. By Theorem \[T:nonconvex\_interpolation\], it is sufficient to show that there is a choice $\beta$ for the value of $\hat f({\mathbf{y}}_1),\dots,\hat f({\mathbf{y}}_m)$ with $\beta \geq \min_{i \in [n]} f({\mathbf{z}}_{i}) - \frac{1}{L} \|{\boldsymbol{\gamma}}\|^2$ such that the set $$\{({\mathbf{y}}_j, {\boldsymbol{\gamma}}, \beta)\}_{j\in [m]} \cup \{({\mathbf{z}}_{i}, {\boldsymbol{\gamma}}, f({\mathbf{z}}_i) \}_{i\in [n]},$$ satisfies the interpolation conditions . 
Note that all the interpolation conditions involving two points from $\{{\mathbf{z}}_i\}$ are automatically satisfied, by the assumption that there exists some function with $L$-Lipschitz gradient (namely $f$) that interpolates $\{({\mathbf{z}}_i, \nabla f({\mathbf{z}}_i), f({\mathbf{z}}_i) \} = \{({\mathbf{z}}_i, {\boldsymbol{\gamma}}, f({\mathbf{z}}_i) \}$. Further, by the assumptions, $\langle {\boldsymbol{\gamma}}, {\mathbf{y}}_i - {\mathbf{y}}_j\rangle=0$, hence the interpolation conditions involving two points from $\{{\mathbf{y}}_j\}$ are also trivially satisfied. We conclude that we only need to consider  the cases where one of the points is ${\mathbf{z}}_i$ and the other is ${\mathbf{y}}_j$, i.e., we are left with the following set of inequalities: $$\begin{aligned} & - \frac{L}{4} \|{\mathbf{z}}_{i} - {\mathbf{y}}_j \|^2 \leq f({\mathbf{z}}_i) - \beta - \langle {\boldsymbol{\gamma}}, {\mathbf{z}}_i - {\mathbf{y}}_j \rangle, \quad i\in [n],\ j\in [m], \\ & - \frac{L}{4} \|{\mathbf{y}}_j - {\mathbf{z}}_{i} \|^2 \leq \beta - f({\mathbf{z}}_{i}) - \langle {\boldsymbol{\gamma}}, {\mathbf{y}}_j - {\mathbf{z}}_{i}\rangle, \quad i\in [n],\ j\in [m].\end{aligned}$$ Clearly, these inequalities hold if and only if $$\begin{aligned} & \beta \in [\max_{i,j} \left( f({\mathbf{z}}_{i}) + \langle {\boldsymbol{\gamma}}, {\mathbf{y}}_j - {\mathbf{z}}_{i}\rangle - \frac{L}{4} \|{\mathbf{y}}_j - {\mathbf{z}}_{i} \|^2\right), \min_{i,j} \left( f({\mathbf{z}}_{i}) + \langle {\boldsymbol{\gamma}}, {\mathbf{y}}_j - {\mathbf{z}}_{i}\rangle + \frac{L}{4} \|{\mathbf{z}}_{i} - {\mathbf{y}}_j \|^2 \right) ].\end{aligned}$$ Now, this range is non-empty since it contains $f({\mathbf{y}}_1)$ (recall that $f({\mathbf{y}}_1)=\dots=f({\mathbf{y}}_m)$, $\nabla f({\mathbf{y}}_1)=\dots=\nabla f({\mathbf{y}}_m) = {\boldsymbol{\gamma}}$, and that the interpolation conditions for the set $\{({\mathbf{y}}_j, \nabla f({\mathbf{y}}_j), f({\mathbf{y}}_j)\} \cup \{({\mathbf{z}}_i, \nabla 
f({\mathbf{z}}_i), f({\mathbf{z}}_i) \}$ naturally hold), hence there exists some $i,j$ such that the choice $$\hat \beta := f({\mathbf{z}}_{i}) + \langle {\boldsymbol{\gamma}}, {\mathbf{y}}_j - {\mathbf{z}}_{i}\rangle + \frac{L}{4} \|{\mathbf{z}}_{i} - {\mathbf{y}}_j \|^2$$ is a feasible choice for $\beta$. We get $$\begin{aligned} \hat \beta & = f({\mathbf{z}}_{i}) + \frac{L}{4} \| {\mathbf{z}}_{i} - {\mathbf{y}}_j - \frac{2}{L} {\boldsymbol{\gamma}}\|^2 - \frac{1}{L} \|{\boldsymbol{\gamma}}\|^2 \\ & \geq \min_{k} f({\mathbf{z}}_{k}) - \frac{1}{L} \|{\boldsymbol{\gamma}}\|^2,\end{aligned}$$ which concludes the proof, as all interpolation conditions are satisfied, hence a function with the claimed properties exists. Proof of Proposition \[prop:distance\] -------------------------------------- We will utilize the following function: $$f({\mathbf{x}}) = \frac{1}{4\max\left\{1/L,\sum_{t=1}^{T-1}\eta_t\right\}}\cdot \langle {\mathbf{x}}, {\mathbf{e}}_1\rangle^2$$ and assume that the initialization ${\mathbf{x}}_1$ is $${\mathbf{x}}_1 := \left(\sqrt{\Delta\cdot\max\left\{1/L,\sum_{t=1}^{T-1}\eta_t\right\}}~,~0,0,\ldots,0\right)~.$$ It is easily verified that $f$ has $L$-Lipschitz gradient, and that $f({\mathbf{x}}_1)-\inf_{{\mathbf{x}}} f({\mathbf{x}})<\Delta$. Moreover, $$\label{eq:norm_f_distance} {\|\nabla f({\mathbf{x}})\|}=\frac{|\langle {\mathbf{x}}, {\mathbf{e}}_1\rangle|}{2\max\left\{1/L,\sum_{t=1}^{T-1}\eta_t\right\}}.$$ Hereafter, for the sake of simplicity we drop the subscript indicating the coordinate number, and let $x_t$ denote the first coordinate of iterate $t$, and $\xi_t$ the first coordinate of the noise at iteration $t$. We now turn to show that when $d$ is large enough $$\label{eq:toshoww} \min_{t\in [T]}|x_t|\geq \frac{2}{5}\sqrt{\Delta\max\left\{1/L,\sum_{t=1}^{T-1}\eta_t\right\}}$$ holds with arbitrarily high probability, which together with  implies the desired result. 
The dynamics of SGD on the first coordinate is as follows: we initially have $$x_1 = \sqrt{\Delta\cdot\max\{1/L,\sum_{t=1}^{T-1}\eta_t\}},$$ and $$x_{t+1} = \left(1-\frac{\eta_t}{2\max\left\{1/L,\sum_{t=1}^{T-1}\eta_t\right\}}\right)x_t -\eta_t \xi_{t}.$$ Unrolling this recurrence, we have for any $t$ $$\label{eq:xtexpression} \begin{aligned} x_t & = \sqrt{\Delta\cdot\max\left\{1/L,\sum_{t=1}^{T-1}\eta_t\right\}} \cdot \prod_{j=1}^{t-1}\left(1-\frac{\eta_j}{2\max\left\{1/L,\sum_{t=1}^{T-1}\eta_t\right\}}\right) \\ &\qquad - \sum_{j=1}^{t-1}\eta_j \xi_j \prod_{i=j+1}^{t-1}\left(1-\frac{\eta_i}{2\max\left\{1/L,\sum_{t=1}^{T-1}\eta_t\right\}}\right), \end{aligned}$$ where we use the convention that $\prod_{i=a}^{b}c_i$ is always $1$ if $b<a$. Since each $\xi_j$ is a zero-mean independent Gaussian, $x_t$ is also Gaussian with $$\begin{aligned} {\mathbb{E}}[ x_t ] &= \sqrt{\Delta\cdot\max\left\{1/L,\sum_{t=1}^{T-1}\eta_t\right\}} \cdot \prod_{j=1}^{t-1}\left(1-\frac{\eta_j}{2\max\left\{1/L,\sum_{t=1}^{T-1}\eta_t\right\}}\right) \\ & \geq \sqrt{\Delta\cdot\max\left\{1/L,\sum_{t=1}^{T-1}\eta_t\right\}}\cdot \exp\left(\ln \frac{1}{2}\cdot \sum_{j=1}^{t-1}\frac{ \eta_j}{\max\left\{1/L,\sum_{t=1}^{T-1}\eta_t\right\}}\right) \\ &\geq \frac{1}{2} \sqrt{\Delta\cdot\max\left\{1/L,\sum_{t=1}^{T-1}\eta_t\right\}},\end{aligned}$$ where we used the assumption $\eta_t \geq 0$ and the fact that $1-z/2\geq \exp(\ln \frac{1}{2} \cdot z)$ for all $z\in [0,1]$. In addition, $$\begin{aligned} {\mathbb{V}}[x_t] & = \sum_{j=1}^{t-1}\eta_j^2 {\mathbb{V}}[\xi_j] \prod_{i=j+1}^{t-1}\left(1-\frac{\eta_i}{2\max\left\{1/L, \sum_{t=1}^{T-1}\eta_t\right\}}\right)^2 \\ & \leq \sum_{j=1}^{t-1}\eta_j^2 {\mathbb{V}}[\xi_j] \leq \frac{\sigma^2 (T-1)}{L^2d},\end{aligned}$$ which follows since each $\xi_j$ is independent and with variance at most $\sigma^2/d$, and $0\leq \eta_j \leq 1/L$. 
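The mean lower bound above rests on two elementary facts: $\sum_{j=1}^{t-1}\eta_j \leq \max\{1/L,\sum_{t=1}^{T-1}\eta_t\}$ and $1-z/2\geq 2^{-z}$ on $[0,1]$, which together keep the product multiplying the initial value of ${\mathbb{E}}[x_t]$ above $1/2$. A small numerical sketch (with hypothetical step-size sequences of our own choosing) confirms this:

```python
import random

def mean_factor(L, etas):
    """Product of the contraction factors multiplying E[x_t] in the proof:
    prod_j (1 - eta_j / (2M)) with M = max{1/L, sum_j eta_j}."""
    M = max(1.0 / L, sum(etas))
    prod = 1.0
    for eta in etas:
        prod *= 1.0 - eta / (2.0 * M)
    return prod

# Since sum(eta_j) <= M and 1 - z/2 >= 2**(-z) on [0, 1], the product
# is at least 2**(-sum(eta_j)/M) >= 1/2 for any admissible step sizes.
random.seed(0)
L = 2.0
for _ in range(100):
    T = random.randint(2, 200)
    etas = [random.uniform(0.0, 1.0 / L) for _ in range(T - 1)]
    assert mean_factor(L, etas) >= 0.5
print("mean lower bound verified")
```

The check passes for every admissible sequence, not just the random ones sampled here, since the inequality chain is deterministic.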
Choosing $$d \geq d_0 := \frac{ \Phi^{-1}(1-\delta/T)^2 \sigma^2 (T-1)}{\left(\frac{1}{2} - \frac{2}{5} \right)^2 L^2 \Delta\cdot\max\left\{1/L,\sum_{t=1}^{T-1}\eta_t\right\}} = {\mathcal{O}}(\log(T/\delta)\sigma^2T/(L^2\Delta)),$$ where $\Phi^{-1}$ is the inverse CDF of the normal distribution, we get that for all $t$ with ${\mathbb{V}}[x_t]>0$ $$\begin{aligned} & \Pr \left(x_t \geq \frac{2}{5} \sqrt{\Delta\cdot\max\left\{1/L,\sum_{t=1}^{T-1}\eta_t\right\}}\right) \\ & = \Pr \left(\frac{x_t - {\mathbb{E}}x_t}{\sqrt{{\mathbb{V}}[x_t]}} \geq -\frac{\left( \frac{1}{2} - \frac{2}{5} \right) \sqrt{\Delta\cdot\max\left\{1/L,\sum_{t=1}^{T-1}\eta_t\right\}}}{ \sqrt{\sigma^2(T-1) / (L^2 d)}}\right) \\ & \geq \Pr \left(\frac{x_t - {\mathbb{E}}x_t}{\sqrt{{\mathbb{V}}[x_t]}} \geq -\frac{\left( \frac{1}{2} - \frac{2}{5} \right) \sqrt{\Delta\cdot\max\left\{1/L,\sum_{t=1}^{T-1}\eta_t\right\}}}{ \sqrt{\sigma^2(T-1) / (L^2 d_0)}}\right) = 1- \delta/T,\end{aligned}$$ and furthermore, the same bound holds almost surely for all $t$ with ${\mathbb{V}}[x_t]=0$. Finally, taking a union bound over $t$, we conclude that this lower bound holds for all $x_t$ with probability $1-\delta$, which implies as required. Proof of Proposition \[prop:noise\] ----------------------------------- To prove the proposition, we will need the following Lemma, which formalizes the fact that the norm of high-dimensional Gaussian random variables tend to be concentrated around a fixed value: \[lem:gausscon\] Let $M,\gamma>0$ be fixed. For any $d$, let ${\mathbf{x}}_d$ be a random variable normally distributed with ${\mathbf{x}}_d \sim {\mathcal{N}}({\mathbf{u}}, \frac{\gamma}{d}I_d)$, where ${\mathbf{u}}$ is some vector in ${\mathbb{R}}^d$ with $\|{\mathbf{u}}\|^2=M$. Then for any $\epsilon\in (0,1)$, $$\Pr\left(\left|\frac{{\|{\mathbf{x}}_d\|}^2}{M+\gamma}-1\right|\leq \epsilon\right)~\geq~1-4\exp\left(-\frac{d\epsilon^2}{24}\right).$$ Consider ${\mathbf{x}}_d$ for some fixed $d$. 
We can decompose it as ${\mathbf{u}}+\sqrt{\frac{\gamma}{d}}{\mathbf{n}}$, where ${\mathbf{n}}$ has a standard Gaussian distribution in ${\mathbb{R}}^d$ (zero mean and covariance matrix being the identity). Thus, $$\begin{aligned} \frac{{\|{\mathbf{x}}_d\|}^2}{M+\gamma}-1 ~&=~ \frac{{\|{\mathbf{u}}\|}^2+2\sqrt{\frac{\gamma}{d}}{\mathbf{u}}^\top{\mathbf{n}}+\frac{\gamma}{d}{\|{\mathbf{n}}\|}^2} {M+\gamma}-1~=~ \frac{2\sqrt{\frac{\gamma}{d}}{\mathbf{u}}^\top{\mathbf{n}}+\gamma\left(\frac{1}{d}{\|{\mathbf{n}}\|}^2-1\right)} {M+\gamma}\notag\\ &=~\frac{2\sqrt{\gamma/d}}{M+\gamma}{\mathbf{u}}^\top{\mathbf{n}}+\frac{\gamma}{M+\gamma}\left(\frac{1}{d}{\|{\mathbf{n}}\|}^2-1\right). \label{eq:twoterms} \end{aligned}$$ The first term in the sum above is distributed as a Gaussian in ${\mathbb{R}}$ with zero mean and variance $\frac{4\gamma}{d(M+\gamma)^2}{\|{\mathbf{u}}\|}^2=\frac{4\gamma M}{d(M+\gamma)^2}\leq \frac{4\gamma M}{d\cdot 2\gamma M}=\frac{2}{d}$. By a standard Gaussian tail bound, it follows that the probability that it exceeds $\epsilon/2$ in absolute value is at most $2\exp(-d\epsilon^2/16)$. Similarly, for the second term, we have by a standard tail bound for Chi-squared random variables (see for example [@shalev2014understanding Lemma B.12]) that $$\Pr\left(\frac{\gamma}{M+\gamma}\left|\frac{1}{d}{\|{\mathbf{n}}\|}^2-1\right|\geq \frac{\epsilon}{2}\right) ~\leq~ \Pr\left(\left|\frac{1}{d}{\|{\mathbf{n}}\|}^2-1\right|\geq \frac{\epsilon}{2}\right) ~\leq~ 2\exp(-d\epsilon^2/24)~.$$ Combining the above with a union bound, it follows that has absolute value more than $\epsilon$ with probability at most $$2\exp(-d\epsilon^2/16)+2\exp(-d\epsilon^2/24) ~\leq~ 4\exp(-d\epsilon^2/24)~.$$ We will utilize the function $$f({\mathbf{x}}) = \frac{L}{2}{\|{\mathbf{x}}\|}^2,$$ where ${\mathbf{x}}_1$ is some vector such that ${\|{\mathbf{x}}_1\|}=\sqrt{\Delta/L}$. 
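As a quick sanity check of Lemma \[lem:gausscon\]: for moderate $d$ the failure probability $4\exp(-d\epsilon^2/24)$ is already negligible, so sampled vectors should never come close to violating the bound. A sketch (all parameter values are our own illustrative choices):

```python
import math, random

def max_deviation(d, M, gamma, trials=100, seed=1):
    """Sample x ~ N(u, (gamma/d) I_d) with ||u||^2 = M (mean placed on the
    first coordinate) and return the largest observed | ||x||^2/(M+gamma) - 1 |."""
    rng = random.Random(seed)
    scale = math.sqrt(gamma / d)
    u1 = math.sqrt(M)
    worst = 0.0
    for _ in range(trials):
        s = (u1 + scale * rng.gauss(0.0, 1.0)) ** 2
        s += sum((scale * rng.gauss(0.0, 1.0)) ** 2 for _ in range(d - 1))
        worst = max(worst, abs(s / (M + gamma) - 1.0))
    return worst

# With d = 3000 and eps = 1/2 the lemma bounds the per-sample failure
# probability by 4*exp(-3000/96), i.e. about 1e-13, so no trial should
# come anywhere near eps.
assert max_deviation(3000, M=1.0, gamma=2.0) < 0.5
print("concentration verified")
```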
Using a derivation similar to the one used in , we have $${\mathbf{x}}_{t+1}= {\mathbf{x}}_t-\eta_t \cdot (L {\mathbf{x}}_t + {\boldsymbol{\xi}}_t) = (1-L\eta_t) {\mathbf{x}}_t - \eta_t {\boldsymbol{\xi}}_t,$$ hence $$\label{eq:xtexpression2} {\mathbf{x}}_t = \prod_{j=1}^{t-1}\left(1-L\eta_j\right){\mathbf{x}}_1- \sum_{j=1}^{t-1}\eta_j \prod_{i=j+1}^{t-1}\left(1-L\eta_i\right){\boldsymbol{\xi}}_j.$$ Since each ${\boldsymbol{\xi}}_j$ is an independent zero-mean Gaussian with covariance matrix $\frac{\sigma^2}{d}I_d$, we get that ${\mathbf{x}}_t$ has a Gaussian distribution with mean $\prod_{j=1}^{t-1}\left(1-L\eta_j\right){\mathbf{x}}_1$ and covariance matrix $\frac{\gamma_t}{d}I_d$, where $$\gamma_t = \sigma^2 \sum_{j=1}^{t-1}\eta_j^2 \prod_{i=j+1}^{t-1}\left(1-L\eta_i\right)^2.$$ By [Lemma \[lem:gausscon\]]{}, taking $\epsilon=1/2$ and $d\geq d_0$ with $$d_0 := 96 \log \frac{4T}{\delta} = {\mathcal{O}}(\log(T/\delta)),$$ it follows that ${\|\nabla f({\mathbf{x}}_t)\|}^2={\|L {\mathbf{x}}_t\|}^2$ is at least $$\label{eq:lowboundthis} \frac{L}{2}\cdot\left( \Delta\prod_{j=1}^{t-1}\left(1 - L\eta_j\right)^2 + L \sigma^2 \sum_{j=1}^{t-1}\eta_j^2 \prod_{i=j+1}^{t-1}\left(1-L\eta_i\right)^2\right)$$ with probability at least $1-\delta/T$. Our goal now will be to lower bound  under the conditions in the proposition. Plugging this lower bound and applying a union bound over all $t\in [T]$ will result in our proposition. - If $\eta_t=\eta$ and $\eta\in [0,1/L)$, we can lower bound by $$\begin{aligned} & \frac{L}{2} \left( \Delta(1-L\eta)^{2(t-1)} + L \sigma^2 \sum_{j=1}^{t-1} \eta^2 (1-L\eta)^{2(t-1-j)} \right) \\ & = \frac{L}{2} \left(\Delta(1 - L\eta)^{2(t-1)} + L \sigma^2 \eta^2\cdot \frac{1-(1-L\eta)^{2(t-1)}}{1-(1-L\eta)^2} \right) \\ &= \frac{L}{2} \left( \Delta(1 - L\eta)^{2(t-1)} + \frac{\eta \sigma^2}{2-L\eta}\left(1-(1-L\eta)^{2(t-1)}\right)\right). 
\end{aligned}$$ For any $t$, this is a convex combination of $\frac{L}{2} \Delta$ and $\frac{L}{2} \frac{\eta \sigma^2}{2-L\eta}$, hence is at least the minimum between them. - If there exists some constant $c>0$ such that $\eta_t \geq c/L$ for all $t$, we can lower bound by $\frac{L^2}{2} \sigma^2 \eta_{t-1}^2\geq \frac{\sigma^2 c^2}{2}$ (i.e., accounting for the noise at the last iterate). - If $\eta_t=\frac{a}{L(b+t^\theta)}$ (where $a>0, b\geq 0,\theta\in (0,1)$), then it is easily verified that for a certain constant $\tau_{a,b,\theta}$ depending only on $a,b,\theta$, $$1\leq\frac{1}{L\eta_t}\leq \frac{t}{2}~~\text{for all}~~t\geq \tau_{a,b,\theta}.$$ In that case, we can lower bound by $$\begin{aligned} & \frac{L^2 \sigma^2}{2} \sum_{j=t-\lfloor 1/(L\eta_t)\rfloor}^{t-1}\eta_j^2\prod_{i=j+1}^{t-1}(1-L\eta_i)^{2} \\ & \geq \frac{L^2 \sigma^2 \eta_t^2}{2}\cdot \left\lfloor\frac{1}{L\eta_t}\right\rfloor\left(1-L\eta_{\lfloor t/2\rfloor}\right)^{2\lfloor 1/(L\eta_t) \rfloor} \geq \frac{\sigma^2 L \eta_t}{4}\left(1-\frac{a}{b+\lfloor t/2\rfloor^{\theta}}\right)^{2\left\lfloor \frac{b+t^\theta}{a}\right\rfloor}, \end{aligned}$$ which is at least $c_{a,b,\theta} \sigma^2 L \eta_t\geq c_{a,b,\theta} \sigma^2 L \eta_T$ if $t\geq \tau'_{a,b,\theta}$ (for some parameters $c_{a,b,\theta},\tau'_{a,b,\theta}$ depending on $a,b,\theta$). Moreover, if $t<\tau'_{a,b,\theta}$, then is at least $$\frac{L^2 \sigma^2}{2} \eta_{t-1}^2 \geq \frac{L^2 \sigma^2}{2} \eta_{\tau'_{a,b,\theta}}^2 = \frac{\sigma^2}{2} \cdot \left( \frac{a}{b+(\tau'_{a,b,\theta})^\theta}\right)^2.$$ Combining both cases, we get that is at least $c'_{a,b,\theta} \sigma^2 \cdot\min\{1,L \eta_T\}$, where $c'_{a,b,\theta}$ is again some constant dependent on $a,b,\theta$, implying the stated result. 
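The constant-step floor $\frac{\eta\sigma^2}{2-L\eta}$ in the first case has a transparent origin: on $f({\mathbf{x}})=\frac{L}{2}{\|{\mathbf{x}}\|}^2$ the second moment obeys the recursion ${\mathbb{E}}{\|{\mathbf{x}}_{t+1}\|}^2=(1-L\eta)^2\,{\mathbb{E}}{\|{\mathbf{x}}_t\|}^2+\eta^2\sigma^2$, whose fixed point yields a stationary squared gradient norm of $\frac{L\eta\sigma^2}{2-L\eta}$. A sketch iterating this recursion (illustrative parameter values of our own):

```python
def stationary_grad_norm_sq(L, eta, sigma2, T=5000, v0=0.0):
    """Iterate the second-moment recursion for SGD on f(x) = (L/2)||x||^2:
    E||x_{t+1}||^2 = (1 - L*eta)^2 * E||x_t||^2 + eta^2 * sigma2,
    and return L^2 * E||x_T||^2, the stationary squared gradient norm."""
    v = v0
    for _ in range(T):
        v = (1.0 - L * eta) ** 2 * v + eta ** 2 * sigma2
    return L ** 2 * v

L, eta, sigma2 = 1.0, 0.5, 2.0
# Fixed point of the recursion: v* = eta*sigma2/(L*(2 - L*eta)), so the
# squared gradient norm settles at L*eta*sigma2/(2 - L*eta).
floor = L * eta * sigma2 / (2.0 - L * eta)
assert abs(stationary_grad_norm_sq(L, eta, sigma2) - floor) < 1e-9
```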
Upper Bounds for SGD {#sec:upperbounds} ==================== In order to place our lower bounds in perspective, we state and prove a rather standard ${\mathcal{O}}(\epsilon^{-4})$ complexity bound for SGD, which unlike the result discussed in the introduction, does not assume anything special about the Hessians or the noise, and is completely independent of the dimension. We start the analysis with a technical lemma that we will use to derive bounds both in the stochastic and deterministic settings. \[L:nonconvex\_upper\] Consider the Stochastic Gradient Descent $${\mathbf{x}}_{t+1} = {\mathbf{x}}_t - \eta_{t} \left(\nabla f({\mathbf{x}}_t) + {\boldsymbol{\xi}}_t\right), \quad t\in [T-1],$$ where $0 < \eta_t < 1/L$, $f$ is a non-convex function with $L$-Lipschitz gradient, and ${\boldsymbol{\xi}}_t$ is a random noise with ${\mathbb{E}}({\boldsymbol{\xi}}_t)=0$, $V({\boldsymbol{\xi}}_t)=\sigma^2$. Then for any choice of $\kappa_t$, $t\in [T-1]$ such that $1-L\eta_t\leq \kappa_t \leq (1-L\eta_t)^{-1}$ we have $$\min_{t \in [T]} {\mathbb{E}}\|\nabla f({\mathbf{x}}_t)\|^2 \leq \frac{4L(f({\mathbf{x}}_1)-f({\mathbf{x}}_*)) + \sum_{t=1}^{T-1} \frac{L^2 \eta_{t}^2(1-L \eta_{t}+\kappa_{t})}{1 - L\eta_{t}} \sigma^2}{3 (T-1) - \sum_{t=1}^{T-1} (1 - L \eta_t) (1 - L \eta_t + \kappa_t + \frac{1}{\kappa_{t}})},$$ where $x_*$ is a stationary point with $f({\mathbf{x}}_*) \leq f({\mathbf{x}}_T)$. By [Thm. 
\[T:nonconvex\_interpolation\]]{} we have $$\begin{aligned} & \frac{1}{2L} \|\nabla f({\mathbf{x}}_{t})-\nabla f({\mathbf{x}}_{t+1})\|^2 - \frac{L}{4} \|{\mathbf{x}}_{t} - {\mathbf{x}}_{t+1} - \frac{1}{L}(\nabla f({\mathbf{x}}_{t})-\nabla f({\mathbf{x}}_{t+1}))\|^2 \\&\quad\overset{\text{a.s.}}{\leq} f({\mathbf{x}}_{t}) - f({\mathbf{x}}_{t+1}) - \langle \nabla f({\mathbf{x}}_{t+1}), {\mathbf{x}}_{t}-{\mathbf{x}}_{t+1}\rangle, \quad t\in [T-1],\end{aligned}$$ which by the definition of ${\mathbf{x}}_{t+1}$ becomes $$\begin{aligned} & \frac{1}{2L} \|\nabla f({\mathbf{x}}_{t})-\nabla f({\mathbf{x}}_{t+1})\|^2 - \frac{1}{4L} \|L \eta_{t} {\boldsymbol{\xi}}_t - (1-L \eta_{t})\nabla f({\mathbf{x}}_{t}) + \nabla f({\mathbf{x}}_{t+1})\|^2 \\&\quad \overset{\text{a.s.}}{\leq} f({\mathbf{x}}_{t}) - f({\mathbf{x}}_{t+1}) - \langle \nabla f({\mathbf{x}}_{t+1}), \eta_{t} \left(\nabla f({\mathbf{x}}_t) + {\boldsymbol{\xi}}_t\right)\rangle, \quad t\in [T-1],\end{aligned}$$ Adding up the inequality above for all $t\in [T-1]$ brings us to $$\begin{aligned} & \frac{1}{2L} \sum_{t=1}^{T-1} \|\nabla f({\mathbf{x}}_{t})-\nabla f({\mathbf{x}}_{t+1})\|^2 - \frac{1}{4L} \sum_{t=1}^{T-1} \|L \eta_{t} {\boldsymbol{\xi}}_t - (1-L \eta_{t})\nabla f({\mathbf{x}}_{t}) + \nabla f({\mathbf{x}}_{t+1})\|^2 \\&\quad+ \sum_{t=1}^{T-1} \eta_{t} \langle \nabla f({\mathbf{x}}_{t+1}), \nabla f({\mathbf{x}}_t) + {\boldsymbol{\xi}}_t \rangle \overset{\text{a.s.}}{\leq} f({\mathbf{x}}_1) - f({\mathbf{x}}_{T}),\end{aligned}$$ which, after adding $$\frac{1}{4} \sum_{t=1}^{T-1} \eta_{t} (1-L \eta_{t}+\kappa_{t}) \left(\frac{L \eta_{t}}{1-L \eta_{t}} \|{\boldsymbol{\xi}}_t\|^2 + 2 \langle \nabla f({\mathbf{x}}_t), {\boldsymbol{\xi}}_t\rangle \right)$$ to both sides and rearranging the terms, brings us to $$\begin{aligned} & \frac{1}{4L} \sum_{t=1}^{T-1} \left( 2 - (1-L \eta_t) (1 - L \eta_t + \kappa_t) \right) \|\nabla f({\mathbf{x}}_t)\|^2 + \frac{1}{4L} \sum_{t=1}^{T-1} \left(1 - \frac{1-L 
\eta_{t}}{\kappa_{t}}\right) \|\nabla f({\mathbf{x}}_{t+1})\|^2 \\&\quad + \frac{1}{4L} \sum_{t=1}^{T-1}(1 - L \eta_t)\kappa_t \left\| \nabla f({\mathbf{x}}_t) - \frac{1}{\kappa_t} \nabla f({\mathbf{x}}_{t+1}) - \frac{L \eta_t}{1-L \eta_t}{\boldsymbol{\xi}}_t \right\|^2 \\&\quad \overset{\text{a.s.}}{\leq} f({\mathbf{x}}_1) - f({\mathbf{x}}_{T}) + \frac{1}{4} \sum_{t=1}^{T-1} \eta_{t} (1-L \eta_{t}+\kappa_{t}) \left(\frac{L \eta_{t}}{1-L \eta_{t}} \|{\boldsymbol{\xi}}_t\|^2 + 2 \langle \nabla f({\mathbf{x}}_t), {\boldsymbol{\xi}}_t\rangle \right)\end{aligned}$$ Finally, taking the expected value of both sides, and noting that ${\mathbb{E}}\|{\boldsymbol{\xi}}_t\|^2=\sigma^2$, ${\mathbb{E}}\langle \nabla f({\mathbf{x}}_t), {\boldsymbol{\xi}}_t\rangle=0$, and ${\mathbb{E}}f({\mathbf{x}}_T)\geq f({\mathbf{x}}_*)$, we reach $$\begin{aligned} & \frac{1}{4L} \left(3 (T-1) - \sum_{t=1}^{T-1} (1-L \eta_t) (1 - L \eta_t + \kappa_t) - \sum_{t=1}^{T-1} \frac{1-L \eta_t}{\kappa_t} \right)\min_{t\in [T]} {\mathbb{E}}\|\nabla f({\mathbf{x}}_t)\|^2 \\&\quad \leq f({\mathbf{x}}_1) - f({\mathbf{x}}_*) + \frac{1}{4} \sum_{t=1}^{T-1} \eta_{t} (1-L \eta_{t}+\kappa_{t}) \frac{L \eta_{t}}{1-L \eta_{t}} \sigma^2,\end{aligned}$$ concluding the proof. An explicit optimal expression for $\kappa_t$ appears to be complex in the general case; however, for two important cases a good approximation can be obtained. 
First, when $\sigma$ is large, the term in the numerator dominates the expression, thus the optimal value for $\kappa_t$ approaches $1-L \eta_t$ as $\sigma\rightarrow\infty$, recovering the following result by Ghadimi and Lan: \[T:upper\_noncovex\] Consider the fixed-step Stochastic Gradient Descent $${\mathbf{x}}_{t+1} = {\mathbf{x}}_t - \eta_t \left(\nabla f({\mathbf{x}}_t) + {\boldsymbol{\xi}}_t\right), \quad t \in [T-1],$$ where $f$ is a nonconvex function with $L$-Lipschitz gradient, ${\boldsymbol{\xi}}_t$ is a random noise with ${\mathbb{E}}({\boldsymbol{\xi}}_t)=0$, $V({\boldsymbol{\xi}}_t)=\sigma^2$ and $0 < L \eta_t < 1$. Then $$\label{eq:sgp_upper_bound} \min_{t \in [T]} \|\nabla f({\mathbf{x}}_t)\|^2 \leq \frac{2(f({\mathbf{x}}_1)-f({\mathbf{x}}_*)) + L \sum_{t=1}^{T-1} \eta_t^2 \sigma^2}{\sum_{t=1}^{T-1} \eta_t (2 - L \eta_t)},$$ where ${\mathbf{x}}_*$ is a stationary point with $f({\mathbf{x}}_*) \leq f({\mathbf{x}}_T)$. The result follows directly from [Lemma \[L:nonconvex\_upper\]]{}, taking $\kappa_t = 1-L \eta_t$. A second case where a simple expression for $\kappa_t$ can be easily attained is when $\sigma=0$, i.e., in the deterministic case. Here an optimal choice is $\kappa_t=1$, giving the following result, which appears to be a new and slightly improved version of the classical result by Nesterov [@Book:Nesterov (1.2.15)]: \[C:gd\_upper\_bound\] Consider the fixed-step Gradient Descent $${\mathbf{x}}_{t+1} = {\mathbf{x}}_t - \eta_t \nabla f({\mathbf{x}}_t), \quad t \in [T-1],$$ where $f$ is a nonconvex function with $L$-Lipschitz gradient and $0 < \eta_t < 1/L$. Then $$\min_{t \in [T]} \|\nabla f({\mathbf{x}}_t)\|^2 \leq \frac{4 (f({\mathbf{x}}_1)-f({\mathbf{x}}_*))}{\sum_{t=1}^{T-1} \eta_t(4-L \eta_t)},$$ where ${\mathbf{x}}_*$ is a stationary point with $f({\mathbf{x}}_*) \leq f({\mathbf{x}}_T)$. 
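The gradient-descent corollary is easy to probe numerically. The sketch below runs fixed-step gradient descent on the toy function $f(x)=\sin(x)$ (our own choice; here $L=1$ and $\inf f=-1$, so replacing $f({\mathbf{x}}_*)$ by $-1$ only weakens the right-hand side) and checks the claimed inequality:

```python
import math

def gd_min_grad_sq(grad, x1, eta, T):
    """Run fixed-step gradient descent, returning the minimum squared
    gradient norm over the first T iterates x_1, ..., x_T."""
    x, best = x1, float("inf")
    for _ in range(T):
        g = grad(x)
        best = min(best, g * g)
        x -= eta * g
    return best

L, eta, T, x1 = 1.0, 0.9, 200, 1.0
best = gd_min_grad_sq(math.cos, x1, eta, T)  # f(x) = sin(x), f'(x) = cos(x)
# Corollary bound with f(x_*) replaced by inf f = -1 (a valid relaxation).
bound = 4.0 * (math.sin(x1) + 1.0) / ((T - 1) * eta * (4.0 - L * eta))
assert best <= bound
```

Here GD converges linearly toward the stationary point at $-\pi/2$, so the observed minimum is far below the worst-case bound.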
The discovery of the proof of [Lemma \[L:nonconvex\_upper\]]{} was guided by numerically solving an optimization problem called the Performance Estimation Problem, whose solution captures the worst-case performance of the SGD method. This technique was first introduced in [@Article:Drori] and was later shown in [@taylor2015smooth] to achieve tight bounds for a wide range of methods in the deterministic case. This, in conjunction with the nearly matching lower bound established in [Thm. \[thm:aggregation\_step\]]{}, motivates us to raise the conjecture that [Lemma \[L:nonconvex\_upper\]]{} gives a tight bound (including the constant) in the stochastic case. [^1]: See e.g. [@Book:Nesterov] and references mentioned earlier.
--- abstract: 'In a MC study using a cluster update algorithm we investigate the finite-size scaling (FSS) of the correlation lengths of several representatives of the class of three-dimensional classical O($n$) symmetric spin models on the geometry $T^2\times\R$. For all considered models we find strong evidence for a linear relation between FSS amplitudes and scaling dimensions when applying [*antiperiodic*]{} instead of periodic boundary conditions across the torus. The considered type of scaling relation can be proven analytically for systems on two-dimensional strips with [*periodic*]{} bc using conformal field theory.' address: | Institut für Theoretische Physik, Universität Leipzig, 04109 Leipzig, Germany, and\ Institut für Physik, Johannes Gutenberg-Universität Mainz, 55099 Mainz, Germany author: - 'Martin Weigel[@mw] and Wolfhard Janke[@wj]' title: 'Universal amplitudes in the FSS of three-dimensional spin models' --- Conformal invariance of 2D systems at a critical point has turned out to be the key feature for a complete, analytical description of their critical behavior[@CardyBuch; @HenkelBuch]. In particular, conformal field theory (CFT) supplies exact FSS relations [*including the amplitudes*]{} for these 2D models. For strips of width $L$ with periodic boundary conditions, i.e. the $S^1\times\R$ geometry, Cardy[@Cardy84a] has shown that the FSS amplitudes of the correlation lengths $\xi_i$ of primary (conformally covariant) operators are entirely determined by the corresponding scaling dimensions $x_i$: $$\xi_i=\frac{A}{x_i}L, \label{amplit}$$ with a model independent overall amplitude $A=1/2\pi$. This result relies on both, the greater restrictive strength of the 2D conformal group compared with the higher dimensional cases, which is needed for the definition of the “primarity” of operators, and the fact that the considered geometry is conformally related to the corresponding flat space ${\R}^2$. 
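Equation (\[amplit\]) ties the strip correlation lengths directly to the scaling dimensions. For the 2D Ising model, whose exact dimensions are $x_\sigma=1/8$ and $x_\epsilon=1$, the prediction can be evaluated in a few lines (a pure illustration of the formula, not a simulation):

```python
import math

# Eq. (amplit): xi_i = A * L / x_i with A = 1/(2*pi) for periodic 2D strips.
def xi_strip(x_i, L):
    return L / (2.0 * math.pi * x_i)

# Exact 2D Ising scaling dimensions.
x_sigma, x_epsilon = 1.0 / 8.0, 1.0
L = 64
ratio = xi_strip(x_sigma, L) / xi_strip(x_epsilon, L)
# The amplitude ratio equals the inverted ratio of scaling dimensions (= 8 here).
assert abs(ratio - x_epsilon / x_sigma) < 1e-12
```

It is exactly this inverse proportionality between amplitudes and dimensions whose 3D analogue is tested below.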
Generalizing these results to more realistic 3D geometries within the CFT framework generically destroys the rich 2D group structure. Keeping at least the conformal flatness condition, Cardy[@Cardy85] arrived at a conjecture of the form (\[amplit\]) for the $S^{n-1}\times{\R},\,n>2$ geometries. Mainly because these geometries are numerically inaccessible, Henkel[@Henkel86; @Henkel87] considered the situation where even this latter condition is dropped: investigating the scaling behavior of the $S=\frac{1}{2}$ Ising model on 3D columns $T^2\times{\R}$ with periodic (pbc) or antiperiodic (apbc) boundary conditions across the torus via a transfer matrix calculation, he found for the correlation lengths of the magnetization and energy densities (the only primary operators in the [*2D*]{} model) in the scaling regime the ratios: $$\begin{array}{rcl} \xi_\sigma/\xi_\epsilon & = & 3.62(7) \hspace{0.5cm} \mbox{{\em periodic} bc,} \\ \xi_\sigma/\xi_\epsilon & = & 2.76(4) \hspace{0.5cm} \mbox{{\em antiperiodic} bc.} \\ \end{array}$$ Comparing this to the ratio of scaling dimensions, $x_\epsilon/x_\sigma=2.7326(16)$, a relation of the form (\[amplit\]) seems not to hold, [*unless the boundary conditions are changed to be antiperiodic*]{}. This is in qualitative agreement with numerical work done by Weston[@Weston]. In this letter, we first revisit the Ising model on the $T^2\times{\R}$ geometry, aiming to settle this question with an independent Monte Carlo (MC) method and at an increased level of accuracy. The main purpose is to investigate further models – in our case O($n$)$,\,n>1$ spin models – thus adding evidence that Henkel's result is not just a numerical "accident" but reflects a universal property of such 3D systems. 
#### The model — {#the-model .unnumbered} We consider an O($n$) symmetric classical spin model with nearest-neighbor, ferromagnetic interactions in zero field with Hamiltonian $$\label{Hamilton} {\cal H} = -J \sum_{<ij>} {\bf s}_i\cdot{\bf s}_j,\;\;{\bf s}_i \in S^{n-1}.$$ The spins are located on a sc lattice of dimensions $(L_x,L_y,L_z)$ with $L_x=L_y$, modeling the $T^2$ geometry by applying periodic or antiperiodic bc along the $x$- and $y$-directions. Effects of the finite length of the lattice in the $z$-direction are minimized by choosing $L_z$ such that $L_z/\xi\gg 1$ and sticking the ends together via periodic bc. As is well known[@ZinnJustin], all of these models undergo a continuous phase transition in three dimensions, so that at the critical point the correlation length diverges linearly with the finite length $L=L_x$. Particular representatives of this class are the Ising ($n=1$), the XY ($n=2$) and the Heisenberg ($n=3$) model. #### The simulation — {#the-simulation .unnumbered} For our MC simulations we used the Wolff single-cluster update algorithm[@Wolff89] which is known to be more effective than the Swendsen-Wang[@Swendsen] update for three-dimensional systems[@WJChem]. As we want to consider antiperiodic bc for all systems in addition to the generic periodic bc case, the algorithm had to be adapted to this situation using the fact that in the case of nearest-neighbor interactions antiperiodic bc are equivalent to the insertion of a seam of antiferromagnetic bonds along the relevant boundary. The primary observables to measure are the connected correlation functions of the spin and the energy density: $$\begin{array}{rcl} G_{\sigma}^c({\bf x}_1,{\bf x}_2) & = & \langle{\bf s}({\bf x}_1)\cdot{\bf s}({\bf x}_2)\rangle-\langle{\bf s}\rangle\langle{\bf s}\rangle, \\ G_{\epsilon}^c({\bf x}_1,{\bf x}_2) & = & \langle\epsilon({\bf x}_1)\,\epsilon({\bf x}_2)\rangle-\langle\epsilon\rangle\langle\epsilon\rangle. 
\\ \end{array} \label{conncorr}$$ The correlation lengths $\xi_i$ in Eq. (\[amplit\]) being understood as measuring the correlations in the longitudinal $\R$-direction, one may average over estimates $\hat{G}^c({\bf x}_1,{\bf x}_2)$ such that $({\bf x}_1-{\bf x}_2)\parallel \hat{e}_z$ and $i\equiv|{\bf x}_1-{\bf x}_2|=\mbox{const}$, thus ending up at estimates $\hat{G}^{c,\parallel}(i)$. This average can be improved by considering a zero momentum mode projection[@WJ93], i.e., by correlating layer variables made up out of the sum of variables in a given layer $z=\mbox{const}$ instead of the original spins or local energies; this reduces the variance by a factor of $1/L_x^2$, the influence of transversal correlations being irrelevant for large distances $i$[@diplom]. Assuming an exponential long-distance behavior of the correlation functions (\[conncorr\]), extracting the correlation lengths via a straightforward fitting procedure requires a nonlinear three-parameter fit of the form $$G^{c,\parallel}(i)=G^{c,\parallel}(0)\exp{(-i/\xi)}+\mbox{const}, \label{corrfunction}$$ since any numerical estimation of $G^{c,\parallel}(i)$ necessarily fails to reproduce the correct long distance limit $G^{c,\parallel}(i)\rightarrow 0$ as $i\rightarrow\infty$ exactly. As this amounts to an investment of the gathered statistics into the determination of three parameters, two of which are completely irrelevant for our ends, we used an alternative method which intrinsically eliminates the two irrelevant parameters by using differences and ratios of $\hat{G}^{c,\parallel}(i)$ rather than the values themselves. Given the correlation function behaves as (\[corrfunction\]), estimators $\hat{\xi}_i$ for the correlation length are given by: $$\hat{\xi}_i=\Delta{\left[\ln\frac{\hat{G}^{c,\parallel}(i)-\hat{G}^{c,\parallel}(i-\Delta)} {\hat{G}^{c,\parallel}(i+\Delta)-\hat{G}^{c,\parallel}(i)}\right]}^{-1}. 
\label{diffmethoddelta}$$ The generic value for $\Delta$ is one, but it might be advantageous to choose $\Delta>1$ in order to enhance the local drop of $G^{c,\parallel}(i)$ between $i$ and $i+\Delta$ (the signal) against the fluctuations (the noise). Following this procedure one ends up with a set of estimators for the correlation length as a function of distance $i$ as depicted in Fig. \[fig1\] for the spin-spin correlations of the Ising model: after a transition regime starting at $i=\Delta$ which is a consequence of the discreteness of the lattice as well as the above mentioned zero momentum mode projection, the estimates settle at a plateau indicating that the exponential long distance behavior has been reached. The error bars in Fig. \[fig1\] were generated using a combined binning and “jackknife” resampling scheme[@Efron; @Berg] which is necessary due to the strong non-linearity of the transformation (\[diffmethoddelta\]); on the same grounds we checked for the necessity of a bias correction. Final values for the correlation lengths were obtained by an average of the estimators $\hat{\xi}_i$; as neither the estimates for very small distances $i$ nor – because of the periodicity of the lattice in the $z$-direction – those for distances $i\gtrsim L_z/2$ are reliable estimates for the continuum correlation length, the range of distances $i_{\rm min},\ldots,i_{\rm max}$ to average over was determined by a procedure of statistical optimization, a generalized $\chi^2$-test. In order to minimize the theoretical variance of the final average $\bar{\xi}$ each element $\hat{\xi}_i$ was weighted by a factor proportional to a row sum of the inverse covariance matrix, this matrix itself being again estimated by a jackknife technique[@diplom; @ours]. 
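The estimator (\[diffmethoddelta\]) is exact for a correlator of the assumed form, since both the amplitude and the additive constant cancel in the differences. A minimal sketch on noise-free synthetic data (toy parameters of our own choosing):

```python
import math

def xi_estimators(G, delta=1):
    """Estimators xi_i = Delta / ln[(G(i) - G(i-Delta)) / (G(i+Delta) - G(i))],
    insensitive to the overall amplitude and the additive constant in G."""
    est = {}
    for i in range(delta, len(G) - delta):
        num = G[i] - G[i - delta]
        den = G[i + delta] - G[i]
        if den != 0 and num / den > 0:
            est[i] = delta / math.log(num / den)
    return est

# Synthetic correlator of the assumed form G(i) = G0 * exp(-i/xi) + const.
xi_true, G0, const = 7.5, 3.0, 0.2
G = [G0 * math.exp(-i / xi_true) + const for i in range(60)]
est = xi_estimators(G, delta=2)
# Without statistical noise, every estimator sits exactly on the plateau.
assert all(abs(v - xi_true) < 1e-6 for v in est.values())
```

With MC noise the plateau only emerges after the transition regime, which is why the averaging window $i_{\rm min},\ldots,i_{\rm max}$ has to be optimized as described above.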
#### The Ising model — {#the-ising-model .unnumbered} Simulations of the Ising model were done at the most accurate estimate for the bulk inverse critical temperature available, $\beta_c=0.2216544(3)$[@Talapov96], where the influence of the given error in $\beta_c$ on the results for the correlation lengths was checked via a temperature reweighting technique and found negligible compared to the statistical errors; this applies to the other models considered in this note as well. To be able to perform an FSS analysis, simulations were done for system sizes between $4^2\times 48$ and $30^2\times 356\approx 3\times 10^5$ sites, accumulating about eight million independent measurements for each system. As is obvious from the example in Fig. \[fig2\](a), the final estimates for the correlation lengths show an almost perfect linear scaling behavior as a function of the transverse system size $L_x$. Considering the amplitudes $\hat{\xi}/L_x$ reveals, however, that corrections to the leading linear scaling behavior are relevant and can be clearly resolved within the accuracy of the data, cp. Fig. \[fig2\](b). In order to extract the leading amplitudes in the scaling regime, nonlinear fits of the form $$\xi(L_x)=AL_x+BL_x^{\alpha} \label{fitform}$$ were done. Even though some field-theoretical estimates for the correction exponents exist[@ZinnJustin], we decided to keep $\alpha$ as a free parameter, ending up with an effective correction exponent that takes higher-order corrections into account, which have some importance for the small systems; successively dropping systems from the small-$L_x$ end while monitoring the goodness-of-fit parameters $\chi^2$ and $Q$ then acts as a consistency check. As a rule, the overall corrections are negative for systems with periodic bc and positive in the case of antiperiodic bc. As a result of this fitting procedure we arrive at the following final estimates for the amplitudes $A$ in Eq.
(\[fitform\]) and their ratios: $$\begin{array}{l} \begin{array}{l} A_\sigma=0.8183(32) \\ A_\epsilon=0.2232(16) \\ A_\sigma/A_\epsilon=3.666(30) \\ \end{array} \hspace{0.5cm}\mbox{for periodic bc,} \\ \\ \begin{array}{l} A_\sigma=0.23694(80) \\ A_\epsilon=0.08661(31) \\ A_\sigma/A_\epsilon=2.736(13) \\ \end{array} \hspace{0.5cm}\mbox{for antiperiodic bc.} \end{array}$$ Comparing this to the ratio of scaling dimensions[@WJChem; @Bloete95; @Butera97], $$x_\epsilon/x_\sigma=\frac{(1-\alpha)/\nu}{\beta/\nu}=\frac{2(\nu d-1)}{\nu d-\gamma}=2.7326(16),$$ we find that the amplitude and exponent ratios agree very precisely in the case of antiperiodic bc across the torus, while in the periodic case they differ by some thirty sigma. In comparison to a first exploration by Weston[@Weston], who found ratios of about $3.7$ for periodic and $2.6$ for antiperiodic bc, the precision could be increased by over an order of magnitude. #### XY and Heisenberg model — {#xy-and-heisenberg-model .unnumbered} Although stringent in itself, up to this point the above result is an isolated, possibly coincidental, statement for the special case of the Ising model. Belief in a universal law needs broader backing by successful examples, two of which are considered here. Simulations for the XY and Heisenberg model were done at the estimated inverse critical temperature values $\beta_c=0.4541670(32)$ and $\beta_c=0.693004(7)$, respectively, which are weighted means of recent literature estimates[@WJChem; @Butera97; @Ballesteros; @Gottlob]. Investing about three years of workstation time for both models altogether and using the same system sizes as in the Ising case, we took between four and eighteen million independent measurements for each system, both for periodic and antiperiodic bc. Applying the outlined tools of data analysis we arrive at scaling and amplitude plots similar to those in Fig. \[fig2\].
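The three-parameter fits of Eq. (\[fitform\]) can be carried out with any standard least-squares routine; the following Python sketch (with synthetic, noiseless data in place of our measured correlation lengths, and invented parameter values) illustrates the procedure:

```python
import numpy as np
from scipy.optimize import curve_fit

def fss(Lx, A, B, alpha):
    # xi(Lx) = A*Lx + B*Lx**alpha, with alpha an effective correction exponent
    return A * Lx + B * Lx ** alpha

# synthetic data with known parameters (stand-in for the measured xi(Lx))
Lx = np.array([4.0, 6.0, 8.0, 10.0, 14.0, 20.0, 30.0])
xi = fss(Lx, 0.818, -0.5, 0.3)

# nonlinear three-parameter fit; p0 is the initial guess
popt, pcov = curve_fit(fss, Lx, xi, p0=[1.0, -1.0, 0.5])
A_fit = popt[0]   # leading amplitude A
```

Successively dropping the smallest-$L_x$ points and refitting, as described above, then probes the stability of the extracted amplitude $A$ against residual higher-order corrections.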
Traversing the above-described fitting procedure leads to final estimates for the amplitudes $A_\sigma$ and $A_\epsilon$ according to Eq. (\[fitform\]), which are shown in Table \[tab1\]. Comparing the results for the ratios $A_\sigma/A_\epsilon$ with the ratio $x_\epsilon/x_\sigma$ of scaling dimensions, we arrive at a highly precise agreement for the case of antiperiodic bc and an obvious divergence in the standard periodic bc situation for both the XY and the Heisenberg model. Thus a linear relation between scaling amplitudes and scaling dimensions according to Eq. (\[amplit\]) is almost certainly valid for three generic, non-trivial examples of 3D spin models, and one might well assume that it is satisfied for the whole class of O($n$) spin models, a view which is supported by further simulations for the $n=10$ case[@ours] and an analytic result for the limiting case $n\rightarrow\infty$[@Henkel88], which is known to be equivalent to the spherical model[@Stanley]. In view of the analogous 2D results it is not too far-fetched, then, to argue that the numerical results provide evidence that this relation might be of a universal, model-independent kind. #### Universal amplitudes — {#universal-amplitudes .unnumbered} Given that the scaling amplitudes for the 3D systems with antiperiodic bc behave according to Eq. (\[amplit\]), one may ask further what the amplitude $A$ in Eq. (\[amplit\]), which was $1/2\pi$ in the 2D case, becomes in the 3D scenario and, furthermore, whether it holds true that it is universal in the sense that all the model-dependent information is condensed in the scaling dimensions $x_i$. The transfer matrix approach cannot answer this question, because in the Hamiltonian limit amplitudes are only given up to an overall normalization factor. Supposing such a relation holds, from the above results one can give an estimate for these amplitudes using the amplitudes $A_\sigma$, which are usually more accurate than $A_\epsilon$.
Using the scaling dimensions of the spin of $x_\sigma=0.5175(5)$, $x_\sigma=0.5178(15)$ and $x_\sigma=0.5161(17)$ for the Ising, the XY and the Heisenberg model, respectively, one has: $$A=A_\sigma x_\sigma=\left\{ \begin{array}{l@{\hspace{0.5cm}}l} 0.12262(43) & \mbox{Ising} \\ 0.12486(47) & \mbox{XY} \\ 0.12625(49) & \mbox{Heisenberg} \\ \end{array} \right. . \label{metaamp}$$ Taking into account the corresponding amplitude of the spherical model, which is $A\approx 0.13624$[@Henkel88; @Allen93], and comparing the variation of these values with the given errors, as is shown in Fig. \[fig3\], it becomes clear that these amplitudes do in fact depend on the model under consideration and seem to vary smoothly and monotonically with the dimension $n$ of the order parameter. To summarize, the amplitudes of the FSS of the correlation lengths of the magnetization and energy densities of O($n$) spin models are linearly related to the corresponding scaling dimensions for the $T^2\times\R$ geometry when choosing [*antiperiodic*]{} instead of periodic bc across the torus; the amplitudes of this relation themselves depend, in contrast to the 2D case, on the model under consideration. Note, however, that again in contrast to the 2D case, where the influence of boundary conditions on the operator content has been extensively explored[@Cardy86], it is theoretically not understood up to now, why using antiperiodic bc in 3D should restore the 2D situation. 
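The numbers in Eq. (\[metaamp\]) follow from simple Gaussian error propagation, $\Delta A=\sqrt{(x_\sigma\,\Delta A_\sigma)^2+(A_\sigma\,\Delta x_\sigma)^2}$; a minimal numerical check in Python, reproducing the Ising entry from the antiperiodic amplitude $A_\sigma=0.23694(80)$ and $x_\sigma=0.5175(5)$:

```python
import math

def amplitude(A_sigma, dA_sigma, x_sigma, dx_sigma):
    """A = A_sigma * x_sigma with standard Gaussian error propagation."""
    A = A_sigma * x_sigma
    dA = math.hypot(x_sigma * dA_sigma, A_sigma * dx_sigma)
    return A, dA

# Ising model, antiperiodic bc
A, dA = amplitude(0.23694, 0.00080, 0.5175, 0.0005)
# reproduces the quoted value A = 0.12262(43)
```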
  model        quantity                periodic bc   antiperiodic bc
  ------------ ----------------------- ------------- -----------------
  Ising        $A_\sigma$              0.8183(32)    0.23694(80)
               $A_\epsilon$            0.2232(16)    0.08661(31)
               $A_\sigma/A_\epsilon$   3.666(30)     2.736(13)
               $x_\epsilon/x_\sigma$
  XY           $A_\sigma$              0.75409(59)   0.24113(57)
               $A_\epsilon$            0.1899(15)    0.0823(13)
               $A_\sigma/A_\epsilon$   3.971(32)     2.930(47)
               $x_\epsilon/x_\sigma$
  Heisenberg   $A_\sigma$              0.72068(34)   0.24462(51)
               $A_\epsilon$            0.16966(36)   0.0793(20)
               $A_\sigma/A_\epsilon$   4.2478(92)    3.085(78)
               $x_\epsilon/x_\sigma$
  ------------ ----------------------- ------------- -----------------

\[tab1\]

: FSS amplitudes of the correlation lengths of the Ising, XY, and Heisenberg models on the $T^2\times\R$ geometry.

In view of the total lack of exact results for non-trivial 3D systems, it seems to us a rewarding challenge for the field theorists to explain these results. We thank K. Binder for his constant and generous support. We are grateful to J. Cardy and M. Henkel for helpful discussions on the theoretical background. W.J. gratefully acknowledges support from the Deutsche Forschungsgemeinschaft through a Heisenberg Fellowship. Email: [Martin.Weigel@itp.uni-leipzig.de]{} Email: [Wolfhard.Janke@itp.uni-leipzig.de]{} J. L. Cardy, [*Conformal Invariance*]{}, in: [*Phase Transitions and Critical Phenomena*]{}, Vol. 11, eds. C. Domb and J. L. Lebowitz (Academic Press, London, 1987), p. 55. P. Christe, M. Henkel, [*Introduction to Conformal Invariance and Its Applications to Critical Phenomena*]{} (Springer, Berlin/Heidelberg/New York, 1993) \[New Series m: Monographs, Lecture Notes in Physics, m 16\]. J. L. Cardy, J. Phys. [**A17**]{}, L385 (1984). J. L. Cardy, J. Phys. [**A18**]{}, L757 (1985). M. Henkel, J. Phys. [**A19**]{}, L247 (1986). M. Henkel, J. Phys. [**A20**]{}, L769 (1987). R. A. Weston, Phys. Lett. [**B248**]{}, 340 (1990). J. Zinn-Justin, [*Quantum Field Theory and Critical Phenomena*]{} (Clarendon Press, Oxford, 1996). U. Wolff, Phys. Rev. Lett. [**62**]{}, 361 (1989). R. H. Swendsen, J.–S. Wang, Phys. Rev.
Lett. [**58**]{}, 86 (1987). W. Janke, [*Monte Carlo Simulations of Spin Systems*]{}, in: [*Computational Physics*]{}, ed. K.H. Hoffmann, M. Schreiber (Springer, Berlin, 1996), p. 10, and references therein. W. Janke, K. Nather, Phys. Rev. [**B48**]{}, 7419 (1993). M. Weigel, diploma thesis, Universität Mainz (1998), unpublished. B. Efron, [*The Jackknife, the Bootstrap and Other Resampling Plans*]{}, Society for Industrial and Applied Mathematics \[SIAM\], Philadelphia (1982). B. A. Berg, [*Double Jackknife Bias Corrected Estimators*]{}, Comp. Phys. Commun. [**69**]{}, 7 (1992). M. Weigel, W. Janke, in preparation. A. L. Talapov, H. W. J. Blöte, J. Phys. [**A29**]{}, 5727 (1996). H. W. J. Blöte, E. Luijten, J. R. Heringa, J. Phys. [**A28**]{}, 6289 (1995). P. Butera, M. Comi, Phys. Rev. [**B56**]{}, 8212 (1997). H. G. Ballesteros, L. A. Fernández, V. Martín-Mayor, A. Muñoz Sudupe, Phys. Lett. [**B387**]{}, 125 (1996). A. P. Gottlob, M. Hasenbusch, J. Stat. Phys. [**77**]{}, 919 (1994). M. Henkel, J. Phys. [**A21**]{}, L227 (1988); M. Henkel, R. A. Weston, J. Phys. [**A25**]{}, L207 (1992). H. E. Stanley, Phys. Rev. [**176**]{}, 718 (1968). S. Allen, R. K. Pathria, J. Phys. [**A26**]{}, 5173 (1993). J. L. Cardy, Nucl. Phys. [**B275**]{}, 200 (1986).
--- abstract: 'UX Orionis stars (UXors) are Herbig Ae/Be or T Tauri stars exhibiting sporadic occultation of stellar light by circumstellar dust. GMCephei is such a UXor in the young ($\sim4$ Myr) open cluster Trumpler37, showing prominent infrared excess, emission-line spectra, and flare activity. Our photometric monitoring (2008–2018) detects (1) an $\sim$3.43 day period, likely arising from rotational modulation by surface starspots, (2) sporadic brightening on time scales of days due to accretion, (3) irregular minor flux drops due to circumstellar dust extinction, and (4) major flux drops, each lasting for a couple of months with a recurrence time, though not exactly periodic, of about two years. The star experiences normal reddening by large grains, i.e., redder when dimmer, but exhibits an unusual “blueing" phenomenon in that the star turns blue near brightness minima. The maximum extinction during relatively short (lasting $\leq 50$ days) events, is proportional to the duration, a consequence of varying clump sizes. For longer events, the extinction is independent of duration, suggestive of a transverse string distribution of clumps. Polarization monitoring indicates an optical polarization varying $\sim3\%$–8$\%$, with the level anticorrelated with the slow brightness change. Temporal variation of the unpolarized and polarized light sets constraints on the size and orbital distance of the circumstellar clumps in the interplay with the young star and scattering envelope. These transiting clumps are edge-on manifestations of the ring- or spiral-like structures found recently in young stars with imaging in infrared of scattered light, or in submillimeter of thermalized dust emission.' author: - 'P. C. Huang' - 'W. P. Chen' - 'M. Mugrauer' - 'R. Bischoff' - 'J. Budaj' - 'O. Burkhonov' - 'S. Ehgamberdiev' - 'R. Errmann' - 'Z. Garai' - 'H. Y. Hsiao' - 'C. L. Hu' - 'R. Janulis' - 'E. L. N. Jensen' - 'S. Kiyota' - 'K. Kuramoto' - 'C. S. Lin' - 'H. C. Lin' - 'J. Z. 
Liu' - 'O. Lux' - 'H. Naito' - 'R. Neuh[ä]{}user' - 'J. Ohlert' - 'E. Pakštienė' - 'T. Pribulla' - 'J. K. T. Qvam' - 'St. Raetz' - 'S. Sato' - 'M. Schwartz' - 'E. Semkov' - 'S. Takagi' - 'D. Wagner' - 'M. Watanabe' - 'Yu Zhang' title: Diagnosing the Clumpy Protoplanetary Disk of the UXor Type Young Star GM Cephei --- Introduction {#sec:intro} ============ Circumstellar environments are constantly changing. A young stellar object (YSO), with prominent chromospheric and coronal activities, interacts intensely with the surrounding accretion disk by stellar/disk winds and outflows. The first few million years of the pre-main-sequence (PMS) evolution coincide with the epoch of possible planet formation, during which grain growth, already taking place in prestellar molecular cores up to micron sizes, continues on to centimeter sizes, and then to planetesimals [@nat07]. The detailed mechanism to accumulate planetesimals and grow them into eventual planets is still uncertain. Competing theories include planetesimal accretion [@wei00] and gravitational instability [@saf72; @gol73; @joh07]. Given the ubiquity of exoplanets, planet formation must be efficient enough to complete before the dissipation of the PMS optically thick disks, i.e., in less than 10 Myr [@mam04; @bri07; @hil08]. YSOs are known to vary in brightness. Outbursts arising from intermittent mass accretion events are categorized into two major classes: (1) FU Ori-type stars (or FUors), showing abrupt brightening of up to 6 mag from the quiescent to the high state in weeks to months, followed by a slow decline over decades [@har85], and (2) EX Lup-type stars (EXors), showing brightening of up to 5 mag, sometimes recurrent, with roughly the same timescale of months in both rising and fading [@her89]. Sunlike PMS objects, i.e., T Tauri stars, may also display moderate variations in brightness and colors [@her94] due to rotational modulation by magnetic/chromospheric cool spots or accretion/shocking hot spots on the surface.
There is an additional class, owing its variability to extrinsic origin, of UXOri-type stars [UXors; @her94], which display irregular dimming caused by circumstellar dust extinction. In addition to the prototype UXOri itself, examples of UXors include COOri, RRTau, and VVSer. The YSO dimming events can be further categorized according to the levels of extinction and the timescales. The “dippers” [@cod10], with AATau being the prototype [@bou99; @bou03], have short (1–5 days) and quasi-periodic events thought to originate from occultation by warps [@ter00; @cod14] or by funnel flows [@bli16] near the disk truncation radius, induced by the interaction between the stellar magnetosphere and the inner disk [@rom13]. The “faders,” with KH15D being the prototype [@kea98; @ham01], show prolonged fading events, each lasting for months to years with typically large extinction up to several magnitudes, thought to be caused by occultation by the outer part of the disk [@bou13; @rod15; @rod16]. The target of this work, GMCephei (hereafter GMCep), a UXor star known to have a clumpy dusty disk [@che12], displays both dipper and fader events. As a member of Trumpler (Tr) 37, a young (1–4 Myr; [@mar90; @pat95; @sic05; @err13]) star cluster in the Cepheus OB2 association, GMCep (R.A.=21$^{\rm h}$38$^{\rm m}$17$\fs32$, Decl.=+57$\degr$31$\arcmin$22$\arcsec$, J2000) possesses observational properties typical of a T Tauri star, such as emission spectra, infrared excess, and X-ray emission [@sic08; @mer09]. [*Gaia*]{}/DR2 [@bro18] measured a parallax of $\varpi=1.21\pm0.02$ mas ($d=826_{-13}^{+14}$ pc), consistent with being a member of Tr37 at $\sim870$ pc [@con02]. The spectral type of GMCep reported in the literature ranges from a late F [@hua13] to a late G or early K [@sic08].
The star has been measured to have a disk accretion rate of up to $10^{-6} M_\sun$ yr$^{-1}$, which is thought to be 2–3 orders of magnitude higher than the median value of the YSOs in Tr37 and 1–2 orders of magnitude higher than those of typical T Tauri stars [@gul98; @sic08]. The broad spectral lines suggest a rotation of $ v \sin i\sim43.2$ km s$^{-1}$, much faster than the average $v\sin i\sim10.2$ km s$^{-1}$ of the members of Tr37 [@sic08]. @sic08 presented a comprehensive collection of data on GMCep, including optical/infrared photometry and spectroscopy, plus millimeter line and continuum observations, along with the young stellar population in the cluster Tr37 and the Cep OB2 association [see also @sic04; @sic05; @sic06a; @sic06b]. Limited by the time span of their light curve, @sic08 incorrectly concluded that the star belonged to the EXor type. Later, with a century-long light curve derived from archival photographic plates, covering 1895 to 1993, @xia10 classified the star as a UXor, which was confirmed by subsequent intense photometric monitoring [@che12; @sem12; @sem15; @hua18]. @che12 speculated on a possible recurrence time of $\sim1$ yr based on a few major brightness dimming events, but this was not substantiated by @sem15. GMCep has been studied as a part of the Young Exoplanet Transit Initiative (YETI) project [@neu11], which combines a network of small telescopes in distributed time zones to monitor young star clusters, with the goal of finding possible transiting exoplanets [@neu11]. Any exoplanets thus identified would be newly formed or in their earliest evolution, providing a comparative sample to the currently known exoplanets, which are almost exclusively found in the general Galactic fields and so are generally older. While so far YETI has detected only exoplanet candidates [@gar16; @rae16], the data set serves as a valuable inventory for studies such as stellar variability [@err13; @fri16].
The work reported here includes light curves in $BVR$ bands on the basis of the photometry collected from 2008 to 2018. Moreover, polarization measurements in $g^{\prime}$-, $r^{\prime}$-, and $i^{\prime}$-bands have been taken at different brightness phases, enabling simultaneous photometric and polarimetric diagnosis of the properties of the circumstellar dust clumps that cause the UXor variability. §\[sec:data\] summarizes the data used in this study, including those collected in the literature, and our own photometric and polarimetric observations. §\[sec:results\] presents the results of photometric, color, and polarimetric variations. On the basis of the temporal behavior of these measurements, we then discuss in §\[sec:disk\] the implications for the properties of the dust clumps around GMCep. We summarize our findings in §\[sec:conclusion\]. Data Sources and Observations {#sec:data} ============================= Optical data of GMCep consist mostly of our own imaging photometry since mid-2008, and polarimetry since mid-2014, up to mid-2018. These are supplemented by data adopted from the American Association of Variable Star Observers (AAVSO) database, covering timescales from days/weeks to years. @sic08 summarized the photometry from the literature, e.g., those of @mor39, @suy75, and @kun86, and from databases such as VizieR, SIMBAD, and SuperCOSMOS [@mon03], along with the infrared data from $IRAS$ and MSX6C. @xia10 expanded the light-curve baseline and presented century-long photometric measurements, with a photometric uncertainty of $\sim0.15$ mag, derived from the photographic plates collected at the Harvard College Observatory and from Sonneberg Observatory. Previous optical monitoring data include those reported by @che12 [in BVR covering end of 2009 to 2011], by @sem12, and by @sem15 [in $UBVRI$ to end of 2014]. The AAVSO data were adopted only from the observer “MJB” after checking photometric consistency with our results.
Optical photometry ------------------ The imaging photometry covering 10 years has been acquired by 16 telescopes, including seven of the YETI telescopes [@neu11]. The Tenagra Observatory in Arizona and Lulin Observatory in Taiwan each contributed about four years of baseline coverage between mid-2010 and mid-2018. The Tenagra II telescope, a 0.81 m Ritchey–Chrétien-type telescope, carried out the $BVR$ monitoring from 2010 October to 2014 June. No observations were taken in July/August because of the monsoon season, or during February/March because the target was not visible. The SLT 0.4 m telescope, located at Lulin Observatory, has acquired a few data points in $BVR$ bands every night from 2014 September to date, weather permitting. Technical parameters of additional telescopes contributing to the data are listed in Table \[tab:tele\]. For each observing session, darks and bias frames were obtained every night when science frames were taken, except for the STK and CTK-II, for which darks already include biases. The sky flats were obtained when possible. For those nights without sky flats, we used the flats from the nearest previous night. The standard reduction with dark, bias, and flat-field correction was performed with IRAF. For the Maidanak Observatory, Nayoro Observatory, and the ESA’s OGS, the images were only corrected with bias and flat because of the low temperatures of the CCD detectors used. The brightness of GMCep and of the photometric reference stars was measured with the aperture photometry procedure “aper.pro” of IDL, which is similar to the “IRAF/Daophot” task, with an aperture radius of 8.5$\arcsec$ for the target, and a sky annulus with an inner radius of 9.5$\arcsec$ and an outer radius of 13$\arcsec$. The seven reference stars from @xia10 [their Table 2] were originally used by @che12, but later we found that Star A varied at the $\sim0.1$ mag level, and Star E was likely a member of the young cluster, so would likely also be variable.
Excluding these two stars, the remaining five were used as reference stars in the differential photometry of GMCep reported here. Photometric measurements at multiple bands were taken at different epochs in a night, and sometimes with different telescopes. In order to facilitate a quantitative comparison, e.g., between the $B$- and $V$-band light curves, and hence the $B-V$ color curve, the epoch of each observation was rounded to the nearest integer Modified Julian Date (MJD), and the average in each band was taken within the same MJD. For periodicity analysis, the actual timing was used, so there would be no round-off error.

  ------------------------------------ ----------------------- ------------------ ------ --------------------- ---- -----
  0.4 m SLT (Lulin)                    E2V 42-40               2048$\times$2048   13.5   30.0$\times$30.0      7    541
  0.81 m TenagraII (Tenagra)           SITe SI-03xA            1024$\times$1024   24     14.8$\times$14.8      29   463
  0.25 m CTK-II (Jena)$^{a}$           E2V PI47-10             1056$\times$1027   13     21.0$\times$20.4      7    104
  0.6 m STK (Jena)$^{b}$               E2V 42-10               2048$\times$2048   13.5   52.8$\times$52.8      8    79
  1.0 m LOT (Lulin)                    Apogee U42              2048$\times$2048   13.5   11.0$\times$11.0      12   48
  0.61 m RC (Van de camp)              Apogee U16M             4096$\times$4096   9      26.0$\times$26.0      7    13
  0.6 m Zeiss 600/7500 (Stara Lesna)   FLI ML 3041             2048$\times$2048   15     14.0$\times$14.0      5    11
  1.6 m Pirka (Nayoro)$^{c}$           EMCCD C9100-13          512$\times$512     16     3.3$\times$3.3        13   133
  1.5 m AZT-22 (Maidanak)              SI 600 Series           4096$\times$4096   15     16.0$\times$16.0      5    120
  1.0 m NOWT (XinJiang)                E2V 203-82              4096$\times$4096   12     78.0$\times$78.0      5    108
  1.2 m T1T (Michael Adrian)           SBIG STL-6303           3072$\times$2048   9      10.0$\times$6.7       15   12
  0.51 m CDK (Mayhill)                 FLI ProLine PL11002M    4008$\times$2072   9      36.2$\times$54.3      9    12
  1.0 m ESA’s OGS (Teide)$^{d}$        Roper Spec Camera       2048$\times$2048   13.5   13.76$\times$13.76    8    10
  1.5 m P60 (Palomar)                  AR-Coated Tektronix     2048$\times$2048   24     11.0$\times$11.0      9    7
  0.35 m ACT-452 (MAO)                 QSI 516                 1552$\times$1032   9      37.6$\times$25.0      15   2
  ------------------------------------ ----------------------- ------------------ ------ --------------------- ---- -----

\[tab:tele\]

  -------- ------------ ----------- -------- -------- --------
  Star B   324.529226   57.508117   16.015   14.961   14.364
  Star C   324.563184   57.492816   15.445   14.837   14.455
  Star D   324.543391   57.505287   15.333   14.357   13.770
  Star F   324.586443   57.487231   14.389   13.358   12.770
  Star G   324.600939   57.556202   13.374   12.829   12.513
  -------- ------------ ----------- -------- -------- --------

Optical Polarimetry ------------------- The optical polarization of GMCep was measured by TRIPOL2, the second unit of the Triple-Range Imaging POLarimeter (TRIPOL, Chen et al. 2019, in preparation) attached to the LOT. This imaging polarimeter measures polarization in the Sloan $g^{\prime}$-, $r^{\prime}$-, and $i^{\prime}$-bands simultaneously by rotating a half-wave plate to four angles: 0$\degr$, 45$\degr$, 22.5$\degr$, and 67.5$\degr$. To reduce the influence of sky conditions, every polarization measurement reported in this work is the mean value of at least five sets of images having nearly the same counts at each angle. This compromises the ability to detect polarization variations on timescales of less than about an hour, but ensures the reliability of the nightly measurements. For TRIPOL2, we acquired sky flats if weather allowed, or else we used the sky flats from the nearest adjacent night. Several unpolarized and polarized standard stars [@sch92] were observed to calibrate the instrumental polarization and angle offset (Chen et al. 2019, in preparation). The correction for the dark and flat field was performed for all the images following the standard reduction procedure. The fluxes at the four angles were measured with aperture photometry, and the Stokes parameters ($I$, $Q$, and $U$) were then calculated, from which the polarization percentage ($P=\sqrt{Q^2 + U^2}/I$) and position angle ($\theta=\frac{1}{2}\tan^{-1}(U/Q)$) were derived. A typical accuracy of $\Delta P \lesssim 0.3$% in polarization could be achieved in a photometric night (Chen et al. 2019, in preparation).
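The conversion from fluxes at the four half-wave-plate angles to $P$ and $\theta$ can be sketched as follows; this is a generic reduction with made-up fluxes, and the actual TRIPOL2 pipeline details may differ:

```python
import numpy as np

def polarization(f0, f45, f225, f675):
    """Normalized Stokes q = Q/I and u = U/I from fluxes at half-wave-plate
    angles 0, 45, 22.5, and 67.5 deg; P and position angle theta follow."""
    q = (f0 - f45) / (f0 + f45)
    u = (f225 - f675) / (f225 + f675)
    P = np.hypot(q, u)                          # polarization fraction
    # arctan2 resolves the quadrant ambiguity of (1/2) arctan(u/q)
    theta = 0.5 * np.degrees(np.arctan2(u, q))  # position angle in degrees
    return q, u, P, theta

# example: q = 0.03 and u = 0.04 encoded into four made-up fluxes
q, u, P, theta = polarization(1.03, 0.97, 1.04, 0.96)
```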
Results and Discussion {#sec:results} ======================= Photometric Variations ---------------------- Figure \[fig:lc\] exhibits the light curves of GMCep, including data taken from the literature covering more than a century since 1895 (Figure \[fig:lc\]a), and our intense multiband observations starting in 2008 (Figure \[fig:lc\]b). Since last reported [@che12; @sem12; @sem15], the star continued to show abrupt brightness changes. There are three main kinds of variations. Most noticeable are the major flux drops, $\sim1$–$2.5$ mag in all $B$-, $V$-, and $R$-bands, with prominent ones, each lasting for months, occurring in mid-2009, mid-2010, 2011/2012, the beginning of 2014, the end of 2016, and the end of 2017 [@mun17]. The list is not complete, limited by the time coverage of our observations. In addition, there are minor flux drops ($\sim0.2$–1 mag), each with a duration of days to weeks. The third kind, with a typical depth of $0.05$ mag and occurring in a few days, is not discernible on the display scale of Figure \[fig:lc\], and will be discussed later. ![The light curve of GMCep from 1894 to 2018. (a) The century-long data reported by @sic08 and @xia10. (b) The light curves and $(B-V)$ color curve from 2008 to 2018 reported in this work. Epochs at which spectral measurements were reported in the literature are marked, with a triangle symbol for @sic08, an upside-down triangle for @sem15, and an asterisk for @gia18. (c) Dynamical period analysis of the input light curve of (b), with a window size of 2000 days and a step of 1 day. The color represents the power of the periodogram, from high (red) to low (blue). The vertical axis represents either the frequency (on the left) or the corresponding period (right). []{data-label="fig:lc"}](LC.pdf){width="100.00000%"} ### Periodicity Analysis [*Deep Flux Drops*]{} The UXors are thought to have irregular extinction events, despite the attempts to search for cyclic variability [@gri98; @ros99].
For GMCep, period analysis by the Lomb–Scargle algorithm [@lom76; @sca82] was performed, and the result is shown in Figure \[fig:period\]. Significant power is seen at $\sim730$ days, which does not show up in the power spectrum of the sampling function (i.e., a constant magnitude at each sampling point). The secondary peak around 350 days, also visible in the sampling function, is the consequence of annual observing gaps. A dynamical period analysis was performed by repetitive Lomb–Scargle computation within a running window of 2,000 days with a moving step of one day. For example, the power spectrum at date 42500 (plus MJD$+$13000) was calculated from the data within the window ranging from 41500 to 43500. Enough padding was applied to the edges of the light curve. A peak around $\sim700$ days persists, as evidenced in Figure \[fig:lc\]c. An independent investigation of the periodicity was performed by computing the autocorrelation function. The light curve was resampled to be equally spaced with a step of one day; for each day, the average of the data within a window centered on that day was adopted, within 300 days for dates 41500 to 45500, or within 100 days for dates 42300 to 45500. A time lag of $\sim700$–$800$ days is reaffirmed. This is the timescale between the few prominent minima (i.e., near 42900 and 43700). ![(a) The periodogram of the V-band light curve, where the red line marks the peak of the power spectrum. (b) The periodogram of the sampling function.[]{data-label="fig:period"}](period.pdf){width="80.00000%"} [*Rotational Modulation*]{} To investigate possible variability on much shorter time scales, we extracted the segment of the light curve from mid-2014 to the end of 2014, when the star was in the bright state so that there should be little influence by major flux drops. The light curve was fitted with a third-order polynomial function, which was then subtracted to remove the slow-varying trend.
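A Lomb–Scargle search of this kind can be reproduced with standard tools; the sketch below uses synthetic, irregularly sampled data with an injected 730-day signal (not the actual GMCep photometry) and recovers the input period:

```python
import numpy as np
from scipy.signal import lombscargle

# synthetic, irregularly sampled light curve: a 730-day sinusoid plus noise
rng = np.random.default_rng(1)
t = np.sort(rng.uniform(0.0, 3000.0, 800))   # observation epochs in days
mag = 0.3 * np.sin(2.0 * np.pi * t / 730.0) + 0.05 * rng.normal(size=t.size)

# evaluate the periodogram on a grid of trial periods (days)
periods = np.linspace(100.0, 1500.0, 3000)
power = lombscargle(t, mag - mag.mean(), 2.0 * np.pi / periods)
best_period = periods[np.argmax(power)]
```

Applying the same computation inside a sliding window, as described above, yields the dynamical periodogram of Figure \[fig:lc\]c.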
The Lomb–Scargle analysis led to the identification of a period of $\sim3.43$ days in the detrended light curve, and Figure \[fig:spots\] exhibits the original and the detrended light curves, together with the power spectrum and the folded light curve. This variation is caused by modulation of the stellar brightness by dark spots on the surface, with the rotational period of the star [@str09]. Note that this period coincides roughly with the expected rotational period of a few days for the star, given its measured rotation $ v \sin i\sim43$ km s$^{-1}$, and a radius of a few solar radii, estimated from the PMS evolutionary tracks [@sic08]. Guided by the periodicity derived from the short segment of the light curve, we then processed the entire light curve using a more aggressive detrending technique than a polynomial fit to deal with the large fluctuations. The original light curve was smoothed by a running average with an eight-day window. This effectively removes low-frequency signals varying on timescales longer than about 10 days. To investigate possible period changes, we divided the light curve into three segments, with the MJD ranges (plus MJD+13000) (1) 41500 to 43000, (2) 43000 to 44250, and (3) 44250 to 45500, respectively, based on a judicious choice to have sufficiently long trains of undersampled data to recover periods on time scales of days. Figure \[fig:segments\] presents the power spectrum and the phased light curve for each segment, and in each case a significant period stands out, with the periods and amplitudes $P_1=3.421$ days, $A_1=0.039$ mag; $P_2=3.428$ days, $A_2=0.036$ mag; and $P_3=3.564$ days, $A_3=0.020$ mag. The seemingly large scattering in each folded light curve is not noise in the data, but intrinsic variation in the star’s brightness, e.g., by differing total starspot areas. Because such a variation is not Gaussian, a least-squares analysis may not be appropriate to render a reliable estimate of the amplitude.
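The running-average detrending and the phase-folding with a sinusoidal fit described above can be sketched in Python as follows; this is a minimal illustration on synthetic input (edge handling of the smoothing window is simplified), not the actual pipeline:

```python
import numpy as np
from scipy.optimize import curve_fit

def detrend(t, mag, window=8.0):
    """Subtract a centered running average of width `window` (days),
    suppressing variability on timescales much longer than the window."""
    smooth = np.array([mag[np.abs(t - ti) <= window / 2.0].mean() for ti in t])
    return mag - smooth

def fold_and_fit(t, mag, period):
    """Phase-fold at `period` (days) and fit m = m0 + A sin(2*pi*(phase - p0))."""
    phase = (t / period) % 1.0
    model = lambda p, m0, A, p0: m0 + A * np.sin(2.0 * np.pi * (p - p0))
    popt, _ = curve_fit(model, phase, mag, p0=[np.mean(mag), 0.02, 0.0])
    return phase, popt  # popt = (mean level, amplitude, phase offset)

# synthetic check: a slow 100-day trend plus a 3.43-day, 0.039-mag signal
t = np.arange(0.0, 200.0, 0.2)
mag = np.sin(2.0 * np.pi * t / 100.0) + 0.039 * np.sin(2.0 * np.pi * t / 3.43)
resid = detrend(t, mag, window=8.0)
phase, popt = fold_and_fit(t, resid, 3.43)
# |popt[1]| recovers the short-period amplitude, slightly attenuated
# because the running average also absorbs a fraction of the fast signal
```

The attenuation of the fast signal by the smoothing is one reason why the amplitudes quoted above should be read as indicative rather than exact.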
Still, the sinusoidal behavior seems assured. Therefore, a rotation period of roughly 3.43 days is found to persist throughout the entire time of our observations. Moreover, there is marginal evidence of a lengthening period with a reduction in amplitude. This can be understood as a latitudinal dependence of the occurrence of starspots due to surface differential rotation, in analogy to the solar magnetic Schwabe cycle, in which sunspots first appear in heliographic mid-latitudes, and progressively more new sunspots turn up (hence covering a larger total surface) toward the equator (hence with shorter rotational periods). GMCep therefore has an opposite temporal behavior, suggestive of an alternative dynamo mechanism at work [e.g., @kuk11]. Further observations with a shorter cadence should be able to confirm this period shift and to provide a more quantitative diagnostic. ![(a) The bright state in mid-2014 of the $R$-band light curve. (b) The scaled light curve after removal of the slow-varying trend. (c) The power spectrum of (b), from which a period of 3.43 days is detected. (d) The folded light curve with $P=3.43$ days found in (c). The solid curve shows the best-fit sinusoidal function. []{data-label="fig:spots"}](spots.pdf){width="80.00000%"} ![Power spectrum and phased light curve for (plus MJD+13000) (a) 41500–43000, (b) 43000–44250, and (c) 44250–45500. In each case the solid curve is the best-fit sinusoidal function, from which the amplitude is derived.[]{data-label="fig:segments"}](segments.pdf){width="\textwidth"} The detrended light curve shows mostly dimming events with occasional brightening episodes. The dimming must be the consequence of rotational modulation by surface starspots, whereas the brightening arises from sporadic accretion. The amplitude $\lesssim0.2$ mag is consistent with the 0.01–0.5 mag variation range typically observed in T Tauri stars caused by cool or hot starspots [@her94].
Also, the amplitude of variation is marginally larger at shorter wavelengths, namely in $V$ and $B$, lending evidence of accretion. The exceptionally high accretion rate of GMCep reported by @sic08, $10^{-7}$ to $5 \times 10^{-6} M_\sun$ yr$^{-1}$, was estimated from the $U$-band luminosity [@gul98]. Using the H$_\alpha$ velocity as an alternative diagnostic tool [@nat04], the accretion rate would be $5 \times 10^{-8}$ to $3 \times 10^{-7} M_\sun$ yr$^{-1}$ [@sic08]. Similarly, measuring also the H$_\alpha$ velocity, @sem15 derived $1.8 \times 10^{-7} M_\sun$ yr$^{-1}$. @gia18 presented spectra of GMCep at different brightness phases and, on the basis of the dereddened H$_\alpha$ luminosity and its relation to the accretion luminosity [@alc17], and then to the accretion rate [@gul98], derived an average accretion rate of $3.5 \times 10^{-8} M_\sun$ yr$^{-1}$ with no significant temporal variations. Each of these methods has its limitations. The $U$-band flux may be contributed by thermal emission from the hot boundary layer (the accretion funnel) between the star and the disk. The H$_\alpha$ emission, on the other hand, may be contaminated by absorption in the H$_\alpha$ profile, or by a chromospheric contribution not related to accretion. In any case, GMCep does not seem to be unusually active in accretion compared to typical T Tauri stars or Herbig Ae/Be stars. The prominent flux variations are the consequences of dust extinction, not the FUor kind of flares. In Figure \[fig:lc\](a) and (b), the epochs at which literature spectroscopic measurements are available are marked, at date 39091 [@sic08] and at date 41645 [@sem15], both when the star was in a bright state, and at date $\sim45080$ [@gia18] when the star was in a faint state. Among the three datasets, the accretion rate does not seem to correlate with the apparent brightness.
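For reference, the conversion behind such estimates from an accretion luminosity to an accretion rate [@gul98] follows $L_{\rm acc} = (G M_* \dot{M}/R_*)\,(1-R_*/R_{\rm in})$, with $R_{\rm in}$ the magnetospheric truncation radius. A minimal sketch, with placeholder stellar parameters rather than adopted values for GMCep:

```python
# Physical constants in cgs units
G = 6.674e-8          # gravitational constant
M_SUN = 1.989e33      # solar mass, g
R_SUN = 6.957e10      # solar radius, cm
L_SUN = 3.828e33      # solar luminosity, erg/s
SEC_PER_YR = 3.156e7

def mdot_from_lacc(L_acc, M_star, R_star, R_in=5.0):
    """Accretion rate in M_sun/yr from the accretion luminosity,
    using L_acc = (G M_* Mdot / R_*)(1 - R_*/R_in), where R_in is
    the truncation radius in stellar radii (R_in = 5 R_* is the
    conventional choice).

    L_acc in L_sun, M_star in M_sun, R_star in R_sun."""
    factor = 1.0 - 1.0 / R_in
    mdot = L_acc * L_SUN * R_star * R_SUN / (factor * G * M_star * M_SUN)
    return mdot * SEC_PER_YR / M_SUN      # g/s -> M_sun/yr
```

For example, an accretion luminosity of 0.1 $L_\sun$ onto a 2 $M_\sun$, 3 $R_\sun$ star corresponds to roughly $6\times10^{-9}~M_\sun$ yr$^{-1}$.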
### Event Duration and Extinction

We parameterize a flux drop event by its duration and maximum depth, with a least-squares fit by a Gaussian function. Only events sampled over more than half of the duration, e.g., an event lasting for roughly 10 days must have been observed on more than 5 nights, are considered to have sufficient temporal coverage to be included in the analysis. Figure \[fig:gau\] illustrates how the duration, taken as five times the standard deviation, or about 5% below the continuum, and the depth, as the minimum of the Gaussian function, are derived for each major event. The parameters are summarized in Table \[tab:duraDep\], in which the columns list for each event the identification, the MJD, the duration, the depths in $B$-, $V$- and $R$-bands, and the comments. Figure \[fig:duraDep\] exhibits the duration versus depth of the flux drop events. Two distinct classes of events emerge. For the short events the duration in general lengthens with the depth, roughly amounting to $A_V\sim1$ mag per 30 days. This is understood as the various sizes of occulting clumps, so a larger clump leads to a longer event along with a deeper minimum. The extinction depth levels off for longer ($\gtrsim100$ days) events to $A_V\sim1.5$ mag, suggesting that these events are not caused by ever larger clumps. We propose that each long event consists of a series of events, or a continuous event, by clumps distributed along a string or a spiral arm. In this case, the duration gets longer, but the depth is not deeper. The depth–duration relation of T Tauri stars has been discussed by @fin13 with 3 yr of Palomar Transient Factory (PTF) monitoring of the North America Nebula complex. In their sample of 29 stars, there are fading events with a variety of depths (up to $\sim2$ mag) and durations (1–100 days). @sta15, with a high-cadence light curve from the $CoRoT$ campaign for NGC2264, identified YSO fading events up to 1 mag.
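The Gaussian parameterization of a single event can be sketched as follows. This is a minimal version with `scipy.optimize.curve_fit`; the helper names and the initial-guess heuristics are illustrative, not the actual fitting code:

```python
import numpy as np
from scipy.optimize import curve_fit

def dip_model(t, base, depth, t0, sigma):
    """Baseline magnitude plus a Gaussian flux drop (magnitudes
    increase downward, so a fading event adds to the magnitude)."""
    return base + depth * np.exp(-0.5 * ((t - t0) / sigma) ** 2)

def fit_event(t, mag):
    """Fit one flux-drop event and return (depth, duration), the
    duration being five standard deviations, where the dip has fallen
    to about 5% of its maximum depth."""
    p0 = (np.median(mag),                   # baseline guess
          mag.max() - np.median(mag),       # depth guess
          t[np.argmax(mag)],                # epoch of minimum light
          (t.max() - t.min()) / 5.0)        # width guess
    popt, _ = curve_fit(dip_model, t, mag, p0=p0)
    base, depth, t0, sigma = popt
    return depth, 5.0 * abs(sigma)
```

Fitting a synthetic event of depth 1.2 mag and $\sigma=8$ days recovers the input parameters, with the duration reported as $5\sigma=40$ days.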
@guo18 summarized event parameters for different stars, including those in @sta15, and found that those with durations less than 10 days varied typically with a depth of $\leq1$ mag, whereas those lasting more than $\sim20$ days have a roughly constant amplitude of $\sim2$–3 mag. All these studies made use of samples of different stars with diverse star/disk masses, ages, inclination angles, etc., and no clear correlation was evident between depth and duration. In comparison, our investigation is for a single target, with distinct correlations for the short and for the long events. ![The Gaussian fitting to each of the major flux drop events. []{data-label="fig:gau"}](major-gau.pdf){width="\textwidth"} ![Depth versus duration of occultation events. Each event is parameterized by a Gaussian fit to the light curve as illustrated in Figure \[fig:gau\]. There is a linear trend for short events (triangles), whereas for long events (circles) the extinction depth levels off. []{data-label="fig:duraDep"}](duraDep.pdf){width="\textwidth"} [llcllll]{}\
BD01 & 55039 & 100 & 1.45 & 1.50 & 1.50 &\
BD02 & 55401 & 450 & 1.45 & 1.40 & 1.20 &\
BD03 & 55910 & 180 & 1.70 & 1.60 & 1.50 &\
BD04 & 56713 & 310 & 1.75 & 1.64 & 1.47 &\
BD05 & 57759 & 75 & 1.45 & 1.30 & 1.10 &\
\
SD01 & 55736 & 10 & 0.79 & 0.75 & 0.67 &\
SD02 & 55767 & 30 & 0.82 & 0.75 & 0.63 &\
SD03 & 55818 & 35 & 0.80 & 0.70 & 0.65 &\
SD04 & 56205 & 25 & 1.05 & 0.85 & 0.78 &\
SD05 & 56415 & 13 & 0.87 & 0.80 & 0.72 &\
SD06 & 56429 & 10 & 0.52 & 0.44 & 0.40 &\
SD07 & 56510 & 11 & 0.65 & 1.05 & 0.70 & $V$ includes AAVSO data\
SD08 & 56553 & 15 & 0.37 & 0.32 & 0.30 &\
SD09 & 56763 & 13 & 0.35 & 0.55 & 0.68 &\
SD10 & 56784 & 13 & 0.55 & 0.48 & 0.48 &\
SD11 & 56865 & 13 & - & 0.40 & - &\
SD12 & 56944 & 13 & 0.33 & 0.18 & 0.20 &\
SD13 & 56972 & 3 & 0.22 & 0.17 & 0.15 &\
SD14 & 56989 & 3 & 0.22 & 0.20 & 0.18 &\
SD15 & 57184 & 8 & 0.49 & 0.40 & 0.36 &\
SD16 & 57263 & 25 & 1.10 & 1.10 & 1.40 & incomplete sampling in $B$ and $V$\
SD17 &
57291 & 10 & 0.40 & 0.35 & 0.30 &\
SD18 & 57333 & 10 & 0.30 & 0.35 & 0.35 &\
SD19 & 57415 & 28 & 0.95 & - & 0.87 &\
SD20 & 57511 & 20 & 1.10 & 1.00 & 0.92 &\
SD21 & 57591 & 15 & 0.45 & 0.35 & 0.30 &\
SD22 & 57656 & 10 & 0.61 & 0.54 & 0.48 &\
SD23 & 57946 & 15 & 1.05 & 0.85 & 0.80 &\

Color Variations {#sec:color}
----------------

Along with the light curves, Figure \[fig:lc\] also presents the $B-V$ color curve, i.e., the temporal variation of the $B-V$ color. Figure \[fig:cmd\]a illustrates how the $B$ magnitude of GMCep varies with its $B-V$ color. In this color-magnitude diagram (CMD), GMCep in general becomes redder when fainter, suggesting normal interstellar extinction/reddening. The slope of the reddening vector, marked by an arrow, is consistent with a total-to-selective extinction law of $R_V=5$ [@mat90], rather than with the nominal $R_V=3$, implying larger dust grains than in the diffuse interstellar clouds. Between $B\sim15.2$ mag and $B\sim15.7$ mag, the extinction appears independent of the ($B-V$) color, indicative of gray extinction by even larger grains ($> 10~\micron$; @eir02). The trend is yet different toward the faint state; namely, the color turns bluer when fainter. This color reversal, or the “bluing effect”, has been known [@bib90; @gri94; @gra95; @her99; @sem15], with the widely accepted explanation being that during the flux minimum, when direct starlight is heavily obscured by circumstellar dust, the emerging light is dominated by radiation forward scattered into the field of view. The bluing phenomenon is also illustrated in Figure \[fig:lc\], where a few deep minima are marked, each by a thick red line, during which the corresponding color turns blue near the flux minimum.
Additional CMDs in $V$ versus $V-R$, and $R$ versus $R-I$, where the data in $I$ are adopted from those reported by @sem15, indicate also normal reddening in the bright state, whereas the bluing tends to subside toward longer wavelengths, in support of the scattering origin, as shown in Figure \[fig:cmd\]b. ![(a) The $B$ magnitude versus $B-V$ color for GMCep, using data in Figure \[fig:lc\]. The panel on the right plots the histogram of the brightness in $B$, whereas the panel on the top plots the histogram of the $B-V$ color. The arrow marks the reddening vector for $A_V=0.5$ mag assuming a total-to-selective extinction of $R_V=5.0$. (b) The same as in (a) but for $V$ versus $V-R$ and $R$ versus $R-I$. []{data-label="fig:cmd"}](cmdBBV.pdf "fig:"){height="0.45\textheight"} ![(a) The $B$ magnitude versus $B-V$ color for GMCep, using data in Figure \[fig:lc\]. The panel on the right plots the histogram of the brightness in $B$, whereas the panel on the top plots the histogram of the $B-V$ color. The arrow marks the reddening vector for $A_V=0.5$ mag assuming a total-to-selective extinction of $R_V=5.0$. (b) The same as in (a) but for $V$ versus $V-R$ and $R$ versus $R-I$. []{data-label="fig:cmd"}](cmdVRI.pdf "fig:"){height="0.45\textheight"}

Polarization
-------------

Figure \[fig:rpol\] presents the linear polarization in the $r^{\prime}$-band of GMCep, and of two comparison stars including one of the photometric reference stars and a field star. GMCep displays a varying polarization of $P=3\%$–8%, but with an almost constant position angle of $\sim72\degr$. The two comparison stars remain steadily polarized, each with $P \lesssim2\%$ and a variation $\lesssim1\%$. Adding up the TRIPOL measurements at the four polarizer angles gives the total flux. As seen in Figure \[fig:rpol\], the TRIPOL $r^{\prime}$ light curve, albeit with lower cadence, allows for diagnosis of simultaneous photometric and polarimetric behavior.
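For reference, reducing a set of four polarizer-angle intensities to a polarization degree and position angle follows the standard Stokes bookkeeping. The sketch below assumes ideal measurements at polarizer angles 0$\degr$, 45$\degr$, 90$\degr$, and 135$\degr$; the function name is hypothetical and this is not the actual TRIPOL reduction pipeline:

```python
import numpy as np

def stokes_from_four(I0, I45, I90, I135):
    """Linear polarization from intensities measured through a
    polarizer at 0, 45, 90, and 135 degrees.

    Returns (total flux, fractional polarization, position angle in
    degrees, wrapped to [0, 180))."""
    I = 0.5 * (I0 + I45 + I90 + I135)      # total flux
    q = (I0 - I90) / (I0 + I90)            # normalized Stokes q
    u = (I45 - I135) / (I45 + I135)        # normalized Stokes u
    P = np.hypot(q, u)                     # degree of polarization
    theta = 0.5 * np.degrees(np.arctan2(u, q)) % 180.0
    return I, P, theta
```

Feeding in intensities synthesized for $P=5\%$ at position angle 72$\degr$ returns exactly those values, which matches the regime observed for GMCep.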
The broadband light curves in turn serve to indicate the overall brightness states at which the polarization data are taken. Figure \[fig:gripol\]a plots the polarization in each band, $P_{g^{\prime}}$, $P_{r^{\prime}}$, and $P_{i^{\prime}}$. The polarization exhibits a slowly varying pattern, declining from 6%–9% in the fall of 2014 to 3%–5% in 2015 July/August, and rising again to 5%–7% near the end of 2015. A similar pattern seems to exist also in 2017 but with a variation of 2%–5%. At the same time, the slow brightness change in each case, notwithstanding abrupt flux drops, seems to follow a reverse trend. In particular, the smooth brightening in late 2014, where the polarization data are densely sampled, is clearly associated with a monotonic decrease in polarization. A similar brightness–polarization pattern is seen from early 2017 to early 2018, for which the brightening and fading in the light curve is associated with a decreasing-then-increasing trend in polarization. Note that in general the polarization is higher at shorter wavelengths, but at certain epochs, particularly at flux minima, e.g., at the end of 2015 and the beginning of 2017, an “anomalous” wavelength dependence seems to emerge, so that the $g^{\prime}$ band becomes the least polarized. ![(a) The $r^{\prime}$-band light curve (in black) for GMCep, together with one of the photometric reference stars (filled triangles) and one field star (squares), in the same field of images. (b) The changing polarization level of GMCep, in comparison to the two comparison stars. (c) The polarization angle for GMCep remaining steady (72$\degr$) during three years of monitoring. []{data-label="fig:rpol"}](rpol.pdf){width="100.00000%"} ![ (a) The photopolarimetric $r^{\prime}$-band light curve (in red) vs. the $R$-band light curve (in black), and, shown below, the polarization levels in $g^{\prime}$ (in green), $r^{\prime}$ (in red), and $i^{\prime}$ (in brown).
The gray shades represent the slow brightness changes and the simultaneous behavior of the polarization. (b) The light curves for unpolarized flux ($F^u$) and polarized flux ($F^p$), with the same color symbols as in (a).[]{data-label="fig:gripol"}](gripol.pdf){width="90.00000%"}

The Clumpy Disk Structure in GMCep {#sec:disk}
==================================

The photopolarimetric measurements enable inference on the occultation configuration in a qualitative way. For example, a sequential blockage of the circumstellar environs and the star will result in a certain photometric and polarimetric behavior. The high-cadence light curves, furthermore, allow quantitative derivation of the depth, duration, etc., of the occulting body. We present the analysis and interpretation of both kinds in this section.

Occultation Geometry Inferred By the Polarization Data
------------------------------------------------------

The level of polarization at wavelength $\lambda$ is defined as $$P_\lambda (\%) = \frac{F^p_\lambda}{F^t_\lambda} = \frac{F^p_\lambda}{F^p_\lambda + F^u_\lambda} = \frac{1}{1+F^u_\lambda/F^p_\lambda},$$ where $F^t$ is the total flux, which is decomposed into the polarized flux ($F^p$) and the unpolarized flux ($F^u$), with $F^t=F^p + F^u$. At each observing epoch, $P_\lambda$ and $F^t$ are measured; therefore $F^p$ and $F^u$ can be derived. In general the starlight is not polarized, but the scattered light from the inner gaseous envelope/disk is, which is fainter and bluer in color than the direct starlight. The temporal variations of $F^p_\lambda$, $F^u_\lambda$, and $F^t_\lambda$, plus the wavelength dependence of these variations, provide clues to the geometry of a clump, or a string of clumps, relative to the stellar system (star plus disk). The last part of the equation suggests that (1) if $F^u_\lambda$ remains the same, $P_\lambda$ changes with $F^p_\lambda$ in the sense that as $F^p_\lambda$ decreases, so does $P_\lambda$.
The dust reddening by occultation makes this dependence stronger at shorter wavelengths. But (2) if $F^u_\lambda$ changes, because it dominates the brightness over $F^p_\lambda$, the opposite holds: as $F^u_\lambda$ decreases, $P_\lambda$ increases. Figure \[fig:gripol\]b exhibits how the decomposed polarized ($F^p$) and unpolarized ($F^u$) components vary, respectively, at different wavelengths. To facilitate the comparison, each curve is scaled to its first data point to demonstrate the relative level of flux changes. The decomposition makes it clear that the decreasing polarization near the end of 2014, with $P_{g^{\prime}} > P_{r^{\prime}} > P_{i^{\prime}}$ (see Fig. \[fig:gripol\]a), corresponding to the brightening of the star system, is the result of a fading $F^p_\lambda$ alongside a brightening $F^u_\lambda$, as evidenced in Fig. \[fig:gripol\]b, both leading to a decreasing $P_\lambda$ at every wavelength. In the occultation scenario, the star system would be just coming out of a major event, and during such an egress, the clump was unveiling the star and blocking a progressively larger part of the envelope. Incidentally, polarization was measured during the deep flux drop event at the beginning of 2017. At the brightness minimum, the level of polarization changes little, but with the anomaly $P_{r^{\prime}} > P_{i^{\prime}} > P_{g^{\prime}}$. Inspection of the decomposition result reveals that both $F^u_\lambda$ and $F^p_\lambda$ decline to almost an all-time low, particularly at shorter wavelengths. This is the configuration when the star and the envelope are both heavily obscured. On YSO photometric and polarimetric variability, @woo96 and @sta99 modeled the rotationally modulated multiwavelength photopolarization due to scattering of light by stellar hot spots, under different simulation parameters, such as the size and latitude of the hot spot, inclination, truncation radius, and geometry (e.g., flat or flared) of the disk.
In general, the simulations suggested an amplitude of polarization variability less than about $1\%$. The polarization variability due to a warped disk is similarly low, as demonstrated in the case of AATau, a prototype of dippers, with a variation of $\sim0.5\%$ in the $V$-band during the occultation [@osu05]. Recent modeling by @kes16 of the photopolarimetric variability of YSOs plus accretion disks considered the spot temperature, radius of the inner disk, and structure and inclination of the warped disk. Only star and dust emission was included, with no gas emission, but still, the typical polarization is expected to vary by less than $\sim1\%$. It is interesting that in these models the polarization level in the $I$-band is always higher than that in the $V$-band, consistent with the wavelength dependence of our observations, albeit with limited time coverage, near flux minima. Hot starspots or a warped inner disk alone apparently cannot account for the large polarization variability seen in GMCep. An additional gaseous envelope likely plays an important role.

Clump Parameters By the Light-curve Analysis
--------------------------------------------

The long-term light curves render conclusive evidence that the major flux drops detected in GMCep are caused by occultation of the young star and the envelope by circumstellar dust clumps. These dust grains are large in size, inferred from the reddening law (see §\[sec:color\]), and distributed in a highly nonuniform manner. This density inhomogeneity could signify the protoplanetary disk evolution in transition from grain growth (of micron size) to planetesimal formation (of kilometer size) [@che12]. Accretion plus viscous dissipation heats up a young stellar disk early on. As the accretion subsides and grains get clumpy, the disk becomes passive, in the sense that the dust absorbs starlight, warms up, and reradiates in the infrared [@chi97].
The frequent occultation events imply a geometry that would have led to a significant stellar extinction and a flat spectral energy distribution (SED). Instead, however, because of the grain coagulation, GMCep (1) has a moderate $A_V=2$–3 mag, partly of interstellar origin, despite the copious dust content evidenced by the elevated fluxes in far-infrared and submillimeter wavelengths [@sic08], and also (2) has an SED characteristic of a T Tauri star [@sic08] with a noticeable infrared excess. In a passive disk, hydrostatic equilibrium results in a structure that flares outward [@key87; @chi97], so the dust intercepts more starlight than a geometrically thin disk. Ring- or spiral-like structure in YSO disks seems ubiquitous, as evidenced by, e.g., recent ALMA imaging in molecular lines or in continuum of the Herbig Ae/Be star ABAur [@tan12; @tan17], the class II object Elias2-27 [@per16], or by HiCIAO/Subaru polarimetric imaging of FUors [@liu16]. Such a structure may be induced by a planet companion [@zhu15] or by gravitational instability [@kra16]. All these rings or spirals extend over some tens to hundreds of astronomical units. The most enlightening finding relevant to our work is the detection in the T Tauri star HLTau at 7 mm of a distribution of clumps along the main ring of thermalized dust found earlier at shorter wavelengths, where large grains reside [@car16, see their Figure 2]. The most prominent one, at $\sim0\farcs1$ from the star, or $\sim14$ au at a distance of 140 pc, with an estimated mass of 3–8 $M_\earth$, is considered by these authors as a possible planetary embryo. We have no knowledge of the location of the (strings of) clumps in the GMCep disk, or of their geometric shape. But we present the following exercise, using theoretical disk models, to shed light on the possible constraints on clump parameters.
The largest clumps in GMCep, as seen in Figure \[fig:duraDep\], cause a maximal extinction of $A^c_V=1.5$ mag with a time scale of $\sim50$ days. Note that here $A^c_V$ refers to the extinction caused by the occultation of the clump, to be distinguished from the interstellar plus circumstellar extinction of the star. The maximal extinction provides information on the column density of dust, and the duration time on the scale of the clump. The fiducial disk by @chi97 adopts a stellar temperature $T_{*}=4000$ K, mass $M_*=0.5~M_\sun$, and radius $R_*=2.5~R_\sun$. With veiling and line blending due to fast rotation, the spectral type of GMCep is uncertain, ranging from an F9 [@hua13] to G5/K3 [@sic08]. In any case the star is hotter (with higher pressure) but more massive (with stronger gravitational pull), and the hydrostatic conditions in the disk turn out to be similar. This means the disk height ($H$) scales with the radius ($r$) as $H/r \approx 0.17\,(r/{\rm au})^{2/7}$ [@chi97]. A clump at $r=14$ au thus would subtend an opening angle (viewing the rim from the star) of $\sim20\degr$; at $r=1$ au, the angle would become $\sim 10\degr$, for which the disk has to be close to edge-on for occultation to take place. Assuming 2 $M_\sun$ for GMCep, a clump at 4–14 au has a projected Keplerian speed of 11–21 km s$^{-1}$. So for a clump to traverse the GMCep system, the linear size would be 0.3 au for $r=14$ au. In the case $r=1$ au, the orbital speed is faster, so the linear scale would be 1.2 au. Alternatively, the clumps may be located closer in to the central star. The disk may not be monotonically flared, as the innermost disk is irradiated by starlight, and dust evaporation at temperature $T_{\rm evap}\sim 1500$ K results in an inner hole, hence an inner rim or “wall” in the flaring disk, which accounts for the bump near 2–3 $\micron$ observed in the SEDs of some YSOs [@dul01; @eis04].
This temperature corresponds to a distance from the central star, $r_{\rm rim}=(L_*/4\pi \sigma T^4_{\rm rim})^{1/2}\,(1+H_{\rm rim}/r_{\rm rim})^{1/2}$, where $L_*$ is the luminosity of the star, $T_{\rm rim}=T_{\rm evap}$ is the temperature at the rim, $H_{\rm rim}$ is the vertical height of the inner rim, and $\sigma$ is the Stefan–Boltzmann constant [@dul01]. Given $L_*=26~L_\sun$ for GMCep [@sic08], adopting $H_{\rm rim}/r_{\rm rim}=0.2$ [@dul01], the estimated inner rim radius is roughly $r_{\rm rim} \sim 0.4$ au, corresponding to an opening angle $\arctan{(H_{\rm rim}/r_{\rm rim})} \sim11\degr$. Even though the chance of occultation is higher with a clump closer to the star, the faster Keplerian speed would lead to a linear size of 1.7 au. We conclude that the “clump,” or the region of density enhancement in the disk, has a length scale up to roughly 0.1–1 au across the line of sight. The depth, or the length scale along the line of sight, is related to the maximum $A^c_V=1.5$ mag, or the column density of dust. Integration requires detailed disk structure, such as the vertical and radial density profiles, grain size distribution, midplane settling, etc. Such complexity is beyond the scope of this paper and in fact not justified by our data. Here we again attempt to gain some physical insight into the clump properties. For a uniform disk, the volume mass density of dust is $m_d = (N_d / \ell) \, M_{\rm grain}$, where $N_d$ is the column density of dust, $\ell$ is the length of the sightline through the dusty medium, and $M_{\rm grain}$ is the mass of each grain. Each term is evaluated as follows.
The column density $N_d$ is related to the extinction: $A^c_V=1.086 \tau_V = N_d \sigma_d Q_{\rm ext}$, where $\tau_V$ is the optical depth at $V$-band, $\sigma_d = \pi a^2$ is the geometric cross section of each (assumed spherical) grain of radius $a$, and $Q_{\rm ext}$ is the optical extinction coefficient, which, for grains large compared to the wavelength ($2 \pi a \gg \lambda$), is $Q_{\rm ext} \approx 2$ [@spi78; @van57]. Therefore, $N_d=1.6 \times 10^5\, A^c_V\, [10~\micron/a]^2$ cm$^{-2}$, and for each dust grain, assuming a material bulk density of 2 g cm$^{-3}$, the mass is $M_{\rm grain}=8.4 \times 10^{-9} [a/10\micron]^3$ g. Given a gas density $n_g$, and a nominal gas-to-dust mass ratio of 100, $m_d=n_g m_H /100$, and so $$\ell = \frac{5.4 \times 10^{9}}{n_g} \, A_V \left(\frac{a}{10~\micron}\right)~~~{\rm [au]}.$$ For GMCep, $A^c_V=1.5$ mag, and adopting a gas density $n_g=10^{10}$ cm$^{-3}$ [@bar05], $\ell\sim 0.8$ au for $a=10~\micron$ grains. For truly large grains, such as $a=1$ mm, the extinction efficiency becomes much smaller, and $\ell$ is thus 100 times longer, $\ell\sim 80$ au. Admittedly, none of the simple assumptions we have made in the estimation is likely valid. Still, it is reassuring that both the crossing time and the flux drop of occultation by a dust clump could end up with reasonable solutions, namely a region tens of astronomical units across in the young stellar disk, perhaps in a ring or a spiral configuration located tens of astronomical units from the star, consisting primarily of $10~\micron$ or larger grains. Given the overall low extinction of the star, small grains likely exist but not in quantity, as they had been agglomerated into large bodies.

Conclusion {#sec:conclusion}
==========

Optical photometric and polarimetric monitoring of the UX Ori star GMCep for nearly a decade reveals variations in brightness and in polarization with different amplitudes and time scales.
The essential results of our study are:

- GMCep exhibits (1) brightness fluctuations $\lesssim0.05$ mag on time scales of days, due partly to rotational modulation by surface starspots with a period of 3.43 days, and partly to accretion activity; (2) minor flux drops of amplitude 0.2–1.0 mag with durations of days to weeks; and (3) major flux drops up to 2.5 mag, each lasting for months, with a recurrence time of about 2 years, though not exactly periodic.

- The flux drops arise from occultation of the star and gaseous envelope by orbiting dust clumps of various sizes.

- The star experiences normal dust reddening by large grains, i.e., the star becomes redder when fainter, except at the brightness minimum, during which the star turns bluer when fainter.

- The maximum depth of an occultation event is proportional to the duration, about 1 mag per 30 days, for the events lasting less than $\sim 50$ days, a result of occultation by clumps of varying sizes. For the events longer than about 100 days, the maximum depth is independent of the duration and remains $A_V\sim1.5$ mag, a consequence of transiting strings or layers of clumps.

- The $g^{\prime} r^{\prime} i^{\prime}$ polarization levels change between 3% and 8%, and vary inversely with the slow brightness change, while the polarization angle remains constant. The polarization is generally higher at shorter wavelengths, but at flux minima there is a reversal of the wavelength dependence, e.g., the $g^{\prime}$-band becomes the least polarized. Temporal variations of polarization versus brightness, once the total light is decomposed into polarized and unpolarized components, allow diagnosis of the occultation circumstances of the dust clumps relative to the star and envelope.
- Our data do not provide direct information on the size or location of the clumps, but the duration of an occultation sets constraints on the transverse size scale of the clump, while the maximum extinction depth is a measure of the column density of dust, and hence constrains the line-of-sight length through the dusty medium.

It is possible that GMCep is an edge-on manifestation of the ring- or spiral-like structures found recently in young stars with infrared imaging of scattered light, or with submillimeter imaging of dust emission. The NCU group acknowledges the financial support of the grants MOST 106-2112-M-008-005-MY3 and MOST 105-2119-M-008-028-MY3. We greatly thank the Jena group H. Gilbert, T. Zehe, T. Heyne, A. Pannicke, and C. Marka for kindly acquiring data at the Jena Observatory, which is operated by the Astrophysical Institute of the Friedrich-Schiller-University. Furthermore, we would like to thank the Thuringian State (Thüringer Ministerium für Bildung, Wissenschaft und Kultur), project number B 515-07010, for financial support. The work by the Xinjiang Observatory group was in part supported by the program of the Light in China’s Western Region (LCWR, grant No. 2015-XBQN-A-02) and the National Natural Science Foundation of China (grant No. 11661161016). J. Budaj, Z. Garai, and T. Pribulla acknowledge the VEGA 2/0031/18 and APVV 15-0458 grants, as well as V. Kollar, J. Lopatovsky, N. Shagatova, S. Shugarov, and R. Komzik for their help with some of the observations. This research has made use of the International Variable Star Index (VSX) database, operated at AAVSO, Cambridge, Massachusetts, USA. We thank the referee for constructive comments that greatly improved the quality of the paper. Alcal[á]{}, J. M., Manara, C. F., Natta, A., et al. 2017, , 600, A20 Barri[è]{}re-Fouchet, L., Gonzalez, J.-F., Murray, J. R., Humble, R. J., & Maddison, S. T. 2005, , 443, 185 Bibo, E. A., & The, P. S. 1990, , 236, 155 Blinova, A. A., Romanova, M.
M., & Lovelace, R. V. E. 2016, , 459, 2354 Bouvier, J., Chelli, A., Allain, S., et al. 1999, , 349, 619 Bouvier, J., Grankin, K. N., Alencar, S. H. P., et al. 2003, , 409, 169 Bouvier, J., Grankin, K., Ellerbroek, L. E., Bouy, H., & Barrado, D. 2013, , 557, A77 Brice[ñ]{}o, C., Hartmann, L., Hern[á]{}ndez, J., et al. 2007, , 661, 1119 Carrasco-Gonz[á]{}lez, C., Henning, T., Chandler, C. J., et al. 2016, , 821, L16 Chen, W. P., Hu, S. C.-L., Errmann, R., et al. 2012, , 751, 118 Chiang, E. I., & Goldreich, P. 1997, , 490, 368 Cody, A. M., & Hillenbrand, L. A. 2010, , 191, 389 Cody, A. M., Stauffer, J., Baglin, A., et al. 2014, , 147, 82 Contreras, M. E., Sicilia-Aguilar, A., Muzerolle, J., et al. 2002, , 124, 1585 Dullemond, C. P., Dominik, C., & Natta, A. 2001, , 560, 957 Eiroa, C., Oudmaijer, R. D., Davies, J. K., et al. 2002, , 384, 1038 Eisner, J. A., Lane, B. F., Hillenbrand, L. A., Akeson, R. L., & Sargent, A. I. 2004, , 613, 1049 Errmann, R., Neuh[ä]{}user, R., Marschall, L., et al. 2013, Astronomische Nachrichten, 334, 673 Findeisen, K., Hillenbrand, L., Ofek, E., et al. 2013, , 768, 93 Fritzewski, D. J., Kitze, M., Mugrauer, M., et al. 2016, , 462, 2396 Gaia Collaboration, Brown, A. G. A., Vallenari, A., et al. 2018, arXiv:1804.09365 Garai, Z., Pribulla, T., Hamb[á]{}lek, [Ľ]{}., et al. 2016, Astronomische Nachrichten, 337, 261 Giannini, T., Munari, U., Lorenzetti, D., et al. 2018, Research Notes of the American Astronomical Society, 2, 124 Goldreich, P., & Ward, W. R. 1973, , 183, 1051 Grady, C. A., Perez, M. R., The, P. S., et al. 1995, , 302, 472 Grinin, V. P., The, P. S., de Winter, D., et al. 1994, , 292, 165 Grinin, V. P., Rostopchina, A. N., & Shakhovskoi, D. N. 1998, Astronomy Letters, 24, 802 Gullbring, E., Hartmann, L., Brice[ñ]{}o, C., & Calvet, N. 1998, , 492, 323 Guo, Z., Herczeg, G. J., Jose, J., et al. 2018, , 852, 56 Hamilton, C. M., Herbst, W., Shih, C., & Ferro, A. J. 2001, , 554, L201 Hartmann, L., & Kenyon, S. J. 
1985, , 299, 462 Herbig, G. H. 1989, European Southern Observatory Conference and Workshop Proceedings, 33, 233 Herbst, W., Herbst, D. K., Grossman, E. J., & Weinstein, D. 1994, , 108, 1906 Herbst, W., & Shevchenko, V. S. 1999, , 118, 1043 Hillenbrand, L. A. 2008, Physica Scripta Volume T, 130, 014024 Huang, P.-C., Chen, W.-P., Hu, C.-L., et al. 2018, Bulletin de la Societe Royale des Sciences de Liege, 87, 145 Huang, Y. F., Li, J. Z., Rector, T. A. & Mallamaci, C. C. 2013, , 145, 126 Johansen, A., Oishi, J. S., Mac Low, M.-M., et al. 2007, , 448, 1022 Kearns, K. E., & Herbst, W. 1998, , 116, 261 Kenyon, S. J., & Hartmann, L. 1987,, 323, 714 Kesseli, A. Y., Petkova, M. A., Wood, K., et al. 2016, , 828, 42 Kratter, K., & Lodato, G. 2016, , 54, 271 K[ü]{}ker, M., R[ü]{}diger, G., & Kitchatinov, L. L. 2011, , 530, A48 Kun, M. 1986, Information Bulletin on Variable Stars, 2961, 1 Liu, H. B., Takami, M., Kudo, T., et al. 2016, Science Advances, 2, e1500875 Lomb, N. R. 1976, , 39, 447 Mamajek, E. E., Meyer, M. R., Hinz, P. M., et al. 2004, , 612, 496 Marschall, L. A., Karshner, G. B., & Comins, N. F. 1990, , 99, 1536 Mathis, J. S. 1990, , 28, 37 Mercer, E. P., Miller, J. M., Calvet, N., et al. 2009, , 138, 7 Monet, D. G., Levine, S. E., Canzian, B., et al. 2003, , 125, 984 Morgenroth, O. 1939, Astronomische Nachrichten, 268, 273 Munari, U., Castellani, F., Giannini, T., et al. 2017, Atel\#11004 Mugrauer, M., & Berthold, T. 2010, Astronomische Nachrichten, 331, 449 Mugrauer, M. 2016, Astronomische Nachrichten, 337, 226 Natta, A., Testi, L., Muzerolle, J., Randich, S., Comer[ó]{}n, F., & Persi, P.  2004, , 424, 603 Natta, A., Testi, L., Calvet, N., et al. 2007, Protostars and Planets V, eds. B. Reipurth, D. Jewitt, and K. Keil, University of Arizona Press, Tucson, 767 Neuh[ä]{}user, R., Errmann, R., Berndt, A., et al. 2011, Astronomische Nachrichten, 332, 547 O’Sullivan, M., Truss, M., Walker, C., et al. 2005, , 358, 632 Patel, N. A., Goldsmith, P. F., Snell, R. 
L., Hezel, T., & Xie, T. 1995, , 447, 721 P[é]{}rez, L. M., Carpenter, J. M., Andrews, S. M., et al. 2016, Science, 353, 1519 Raetz, S., Schmidt, T. O. B., Czesla, S., et al. 2016, , 460, 2834 Rodriguez, J. E., Pepper, J., Stassun, K. G., et al. 2015, , 150, 32 Rodriguez, J. E., Reed, P. A., Siverd, R. J., et al. 2016, , 151, 29 Romanova, M. M., Ustyugova, G. V., Koldoba, A. V., & Lovelace, R. V. E. 2013, , 430, 699 Rostopchina, A. N., Grinin, V. P., & Shakhovskoi, D. N. 1999, Astronomy Letters, 25, 243 Safronov, V. S. 1972, Evolution of the protoplanetary cloud and formation of the earth and planets., by Safronov, V. S.. Translated from Russian. Jerusalem (Israel): Israel Program for Scientific Translations, Keter Publishing House, 212 p., Scargle, J. D. 1982, , 263, 835 Schmidt, G. D., Elston, R., & Lupie, O. L. 1992, , 104, 1563 Schulz, R., Erd, C., Guilbert-Lepoutre, A., et al. 2014, AAS/Division for Planetary Sciences Meeting Abstracts \#46, 46, 214.05 Semkov, E. H., & Peneva, S. P. 2012, , 338, 95 Semkov, E. H., Ibryamov, S. I., Peneva, S. P., et al. 2015, , 32, e011 Sicilia-Aguilar, A., Hartmann, L. W., Brice[ñ]{}o, C., Muzerolle, J., & Calvet, N. 2004, , 128, 805 Sicilia-Aguilar, A., Hartmann, L. W., Hern[á]{}ndez, J., Brice[ñ]{}o, C., & Calvet, N. 2005, , 130, 188 Sicilia-Aguilar, A., Hartmann, L. W., F[ü]{}r[é]{}sz, G., et al. 2006, , 132, 2135 Sicilia-Aguilar, A., Hartmann, L. W., Calvet, N., et al. 2006, , 638, 897 Sicilia-Aguilar, A., Mer[í]{}n, B., Hormuth, F., et al. 2008, , 673, 382 Spitzer, L. 1978, Physical processes in the interstellar medium, Wiley, New York Stassun, K., & Wood, K. 1999, , 510, 892 Stauffer, J., Cody, A. M., McGinnis, P., et al. 2015, , 149, 130 Strassmeier, K. G. 2009, , 17, 251 Suyarkova, O. G. 1975, Peremennye Zvezdy, 20, 167 Tang, Y.-W., Guilloteau, S., Pi[é]{}tu, V., et al. 2012, , 547, A84 Tang, Y.-W., Guilloteau, S., Dutrey, A., et al. 2017, , 840, 32 Terquem, C., & Papaloizou, J. C. B. 2000, , 360, 1031 van de Hulst, H. 
C. 1957, Light Scattering by Small Particles, Dover, New York Watanabe, M., Takahashi, Y., Sato, M., et al. 2012, , 8446, 84462O Weidenschilling, S. J. 2000, , 92, 295 Wood, K., Kenyon, S. J., Whitney, B. A., & Bjorkman, J. E. 1996, , 458, L79 Xiao, L., Kroll, P., & Henden, A. A. 2010, , 139, 1527 Zhu, Z., Dong, R., Stone, J. M., & Rafikov, R. R. 2015, , 813, 88
--- address: 'Cambridge University Engineering Dept., Trumpington St., Cambridge, CB2 1PZ U.K.' title: Discriminative Neural Clustering for Speaker Diarisation ---
--- abstract: 'We discuss different methods of calculating the screened Coulomb interaction $U$ in transition metals and compare the so-called constraint local-density approximation (LDA) with the GW approach. We clarify that they offer complementary treatments of the screening and, therefore, serve different purposes. In the *ab initio* GW method, the renormalization of on-site Coulomb interactions between $3d$ electrons (which are of the order of 20-30 eV) occurs mainly through screening by the same $3d$ electrons, treated in the random phase approximation (RPA). The basic difference of the constraint-LDA method from the GW method is that it deals with neutral processes, in which the Coulomb interactions are additionally screened by the “excited” electron, since it *continues to stay in the system*. This is the main channel of screening by the itinerant ($4sp$) electrons, which is especially strong in the case of transition metals and missing in the GW approach, although the details of this screening may be affected by the additional approximations that typically supplement these two methods. The major drawback of the conventional constraint-LDA method is that it does not allow one to treat the energy dependence of $U$, while full GW calculations require heavy computations. We propose a promising approximation based on the combination of these two methods. First, we take into account the screening of Coulomb interactions in the $3d$-electron-like bands located near the Fermi level by the states from the subspace orthogonal to these bands, using the constraint-LDA method. The obtained interactions are further renormalized within the bands near the Fermi level in RPA. This allows an energy-dependent screening by the electrons near the Fermi level, including the same $3d$ electrons.' author: - 'I. V. Solovyev' - 'M.
Imada' title: Screening of Coulomb interactions in transition metals --- \[sec:intr\]Introduction ======================== The description of the electronic structure and properties of strongly correlated systems presents a great challenge for *ab initio* electronic structure calculations. The main complexity of the problem stems from the fact that such electronic systems typically bear both localized and itinerant character, to which most conventional methods do not apply. A canonical example is the local-\[spin\]-density approximation (L\[S\]DA) in density-functional theory (DFT).[@DFT] The DFT, which is a ground-state theory, is based on the minimization of the total energy functional $E[\rho]$ with respect to the electron density $\rho$. In the Kohn-Sham (KS) scheme, which is typically employed for practical calculations, this procedure is formulated as the self-consistent solution of the single-particle KS equations $$\left( -\nabla^2 + V_{\rm KS}[\rho] \right) \psi_i[\rho] = \varepsilon_i \psi_i[\rho], \label{eqn:KS}$$ which are combined with the equation for the electron density: $$\rho = \sum_i f_i |\psi_i|^2, \label{eqn:rho}$$ defined in terms of the eigenfunctions ($\psi_i$), eigenvalues ($\varepsilon_i$), and occupation numbers ($f_i$) of the KS quasiparticles. The LSDA provides an explicit expression for $V_{\rm KS}[\rho]$. However, it is based on the homogeneous electron gas model and, strictly speaking, is applicable only to itinerant-electron compounds. The recent progress, which gave rise to such directions as LDA$+$ Hubbard $U$ (Refs. ) and LDA+DMFT (dynamical mean-field theory) (Refs. ), is based on the idea of partitioning of the electronic states.
It implies the validity of the following postulates:\ (1) All solutions of the KS equations (\[eqn:KS\]) in LDA can be divided (by introducing proper projection operators) into two subgroups: $i$$\in$$I$, for which LSDA works reasonably well, and $i$$\in$$L$, for which LSDA encounters serious difficulties and needs to be improved (a typical example is the $3d$ states in transition-metal oxides and some transition metals).\ (2) The two orthogonal subspaces, $I$ and $L$, are “flexible” in the sense that they can be defined for a wider class of electron densities, which can be different from the ground-state density in LDA. This makes it possible to “improve” LDA by adding a proper correction $\Delta\hat{\Sigma}$ (generally, an $\omega$-dependent self-energy) to the KS equations, which acts solely in the $L$-subspace but may also affect the $I$-states through the change of $\rho$ associated with this $\Delta\hat{\Sigma}$. Thus, in the KS equations, the $L$- and $I$-states remain decoupled even after including $\Delta\hat{\Sigma}$: $\langle \psi_{i \in I}[\rho] |(-\nabla^2$$+$$V_{\rm KS}[\rho]$$+$$ \Delta\hat{\Sigma})| \psi_{i \in L}[\rho] \rangle$$=$$0$. For many applications, the $L$-states are atomic or Wannier-type orbitals. In this case, the solution of the problem in the $L$-space becomes equivalent to the solution of a multi-orbital Hubbard-type model, and the formulation of the LDA$+$$U$ approach is basically a mapping of the electronic structure in LDA onto this Hubbard model. In the following, by referring to LDA$+$$U$ we will mean not only the static version of this method, originally proposed in Ref.
, but also its recent extensions designed to treat the dynamics of correlated electrons, which employ the same idea of partitioning of the electronic states.[@LSDADMFT; @LichtPRL01]\ (3) All physical interactions which contribute to $\Delta\hat{\Sigma}$ can be formally derived from LDA by introducing certain constraining fields $\{ \delta \hat{V}_{\rm ext} \}$ in the subspace of $L$-states of the KS equations (i.e., in a way similar to $\Delta\hat{\Sigma}$). The purpose of including these $\{ \delta \hat{V}_{\rm ext} \}$ is to simulate the change of the electron density, $\delta \rho$, and then to extract the parameters of electronic interactions from the total energy difference $E[\rho$$+$$\delta \rho]$$-$$E[\rho]$ by mapping onto the Hubbard model. The total energy difference is typically evaluated in LDA,[@JonesGunnarsson] and the method itself is called the constraint-LDA (CLDA).[@UfromconstraintLSDA; @Gunnarsson; @AnisimovGunnarsson; @PRB94.2] However, despite more than a decade of rather successful history, the central question of LDA$+$$U$ is not completely solved and continues to be the subject of various disputes and controversies.[@NormanBrooks; @PRB96; @Pickett; @Springer; @Kotani; @Ferdi] This question is how to define the parameter of the effective Coulomb interaction $U$. To begin with, the Coulomb $U$ is *not* a uniquely defined quantity, as it strongly depends on the property for whose description we want to correct our LDA scheme. One possible strategy is to target excited-state properties, associated with the complete removal of an electron from (or the addition of a new electron to) the system, i.e., the processes which are described by Koopman’s theorem in Hartree-Fock calculations and which are corrected in the GW method by taking into account the relaxation of the wavefunctions onto the created electron hole (or the new electron).[@Hedin; @FerdiGunnarsson] However, the goal typically pursued in LDA$+$$U$ is somewhat different.
Namely, one would always like to stay as close as possible to the description of the ground-state properties. The necessary precondition for this, which should be taken into account in the definition of the Coulomb $U$ and of all other interactions which may contribute to $\Delta \hat{\Sigma}$, is the conservation of the total number of particles. In principle, a similar strategy can be applied to the analysis of neutral excitations (e.g., by considering the $\omega$-dependence of $\Delta \hat{\Sigma}$), for which the total number of electrons is conserved.[@LichtPRL01] The basic difference between these two processes is that the “excited” electron in the second case continues to stay in the system and may additionally screen the Coulomb $U$. This screening may also affect the relaxation effects.[@OnidaReiningRubio] The purpose of this paper is to clarify several questions related to the definition of the Coulomb interaction $U$ in transition metals. We will discuss both the momentum (${\bf q}$) and energy ($\omega$) dependence of $U$, corresponding to the response of the Coulomb potential to the site (${\bf R}$)- and time ($t$)-dependent perturbation $\delta \hat{V}_{\rm ext}$, and present a comparative analysis of the existing methods of calculating this interaction, such as CLDA and GW. We will argue that, despite a common belief, the GW method does not take into account the major effect of screening of the effective Coulomb interaction $U$ between the $3d$ electrons by the (itinerant) $4sp$ electrons, which may also contribute to the ${\bf q}$-dependence of $U$. This channel of screening is included in CLDA, although under an additional approximation separating the $3d$- and $4sp$-states, while in the GW approach its absence can be compensated by an appropriate choice of the pseudo-Wannier orbitals simulating the basis of $L$-states.
On the other hand, CLDA is a static approach, which does not take into account the $\omega$-dependence of $U$.[@TDDFT] We will consider mainly the ferromagnetic (FM) fcc Ni, although similar arguments can be applied to other metallic compounds. We start with the basic definition of $U$ for systems with a conserved number of particles, which was originally introduced by Herring,[@Herring] and then discuss the connection of this definition with the parameters which come out of CLDA and GW calculations. \[sec:Herring\]Herring’s definition and CLDA ============================================ According to Herring,[@Herring] the Coulomb $U$ is nothing but the energy cost for moving an $L$-electron between two atoms, located at ${\bf R}$ and ${\bf R}'$, and initially populated by $n_{L{\bf R}}$$=$$n_{L{\bf R}'}$$\equiv$$n_L$ electrons: $$U_{{\bf RR}'} = E [ n_{L{\bf R}}+1,n_{L{\bf R}'}-1 ] - E [ n_{L{\bf R}},n_{L{\bf R}'} ]. \label{eqn:HerringU1}$$ In DFT, $U_{{\bf RR}'}$ can be expressed in terms of the KS eigenvalues, $\varepsilon_{L{\bf R}}$$=$$\partial E/\partial n_{L{\bf R}}$, using Slater’s transition state arguments:[@PRB94.2] $$U_{{\bf RR}'} = \varepsilon_{L{\bf R}} [ n_{L{\bf R}}+\frac{1}{2},n_{L{\bf R}'}-\frac{1}{2} ] - \varepsilon_{L{\bf R}} [ n_{L{\bf R}}-\frac{1}{2},n_{L{\bf R}'}+\frac{1}{2} ]. \label{eqn:HerringU1TS}$$ The final definition $$U_{{\bf RR}'} = \left. \frac{\partial \varepsilon_{L{\bf R}}}{\partial n_{L{\bf R}}} \right|_{n_{L{\bf R}}+n_{L{\bf R}'}=const}, \label{eqn:HerringU1TSD}$$ which is typically used in CLDA calculations, is obtained after replacing the finite difference between the two KS eigenvalues in Eq. (\[eqn:HerringU1TS\]) by their derivative. The derivative depends on the path in the sublattice of occupation numbers along which it is calculated (e.g., $n_{L{\bf R}}$$+$$n_{L{\bf R}'}$$=$$const$).
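For a total energy that is quadratic in the occupation numbers, the transition-state formula (\[eqn:HerringU1TS\]) reproduces Herring's definition (\[eqn:HerringU1\]) exactly. A minimal numerical sketch for a hypothetical two-site model (all parameters are made up; an intersite Coulomb term $V n_{L{\bf R}} n_{L{\bf R}'}$ is included to make the role of intersite screening visible):

```python
# Toy quadratic energy of two sites R and R' (hypothetical parameters):
# E = sum_i [E0*n_i + U0/2*n_i*(n_i-1)] + V*n_R*n_Rp,
# used to compare Herring's definition with the Slater transition state.

E0, U0, V = -5.0, 8.0, 1.5   # on-site level, on-site Coulomb, intersite Coulomb (eV)

def E(nR, nRp):
    onsite = sum(E0 * n + 0.5 * U0 * n * (n - 1) for n in (nR, nRp))
    return onsite + V * nR * nRp

def eps_R(nR, nRp):
    # KS-like eigenvalue: eps = dE/dn_R for the quadratic model
    return E0 + U0 * (nR - 0.5) + V * nRp

n = 5.0  # initial occupation n_L on both sites

# Herring's definition: cost of moving one electron from R' to R
U_herring = E(n + 1, n - 1) - E(n, n)

# Slater transition state: eigenvalue difference at half-integer occupations
U_ts = eps_R(n + 0.5, n - 0.5) - eps_R(n - 0.5, n + 0.5)

print(U_herring, U_ts)  # both equal U0 - V for a quadratic E
```

In this model both expressions give $U_0 - V$: the intersite interaction reduces the cost of the charge transfer, which is the origin of the distance (and hence ${\bf q}$) dependence of $U_{{\bf RR}'}$ mentioned above.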
This dependence has a clear physical meaning and originates from the distance dependence of the intersite Coulomb interactions, which contribute to the screening of $U_{{\bf RR}'}$. In the reciprocal (Fourier) space, this distance dependence gives rise to the ${\bf q}$-dependence of $U$. Owing to the existence of the second subsystem, $I$, the reaction (\[eqn:HerringU1\]) may compete with another one $$U = E [ n_{L{\bf R}}+1,n_{I{\bf R}}-1,n_{L{\bf R}'}-1,n_{I{\bf R}'}+1 ] - E [ n_{L{\bf R}},n_{I{\bf R}},n_{L{\bf R}'},n_{I{\bf R}'} ], \label{eqn:HerringU2}$$ corresponding to independent “charge transfer” excitations at the sites ${\bf R}$ and ${\bf R}'$.[@ZSA] It can also be presented in the form (\[eqn:HerringU1TSD\]), but with a different constraint imposed on the numbers of $L$- and $I$-electrons: $n_{L{\bf R}}$$+$$n_{I{\bf R}}$$=$$const$. Generally, the definitions (\[eqn:HerringU1\]) and (\[eqn:HerringU2\]) will yield two different interaction parameters. Since in the charge-transfer scenario any change of $n_{L{\bf R}}$ is totally screened by the change of $n_{I{\bf R}}$ located at the same site, the interaction (\[eqn:HerringU2\]) does not depend on ${\bf R}$. In reality, both processes coexist and the proper interaction parameter is given by the following equation $$U_{{\bf RR}'} = E [ n_{L{\bf R}}+1,n_{I{\bf R}}-\delta,n_{L{\bf R}'}-1,n_{I{\bf R}'}+\delta ] - E [ n_{L{\bf R}},n_{I{\bf R}},n_{L{\bf R}'},n_{I{\bf R}'} ],$$ where the amount of charge $\delta$ redistributed between the two subsystems is determined variationally to minimize $U_{{\bf RR}'}$. In the CLDA scheme, it is convenient to work in the reciprocal (Fourier) space and calculate $U_{\bf q}$ as the response to the ${\bf q}$-dependent constraining field $$\delta \hat{V}_{\rm ext}({\bf q},{\bf R})=V_L \cos {\bf q} \cdot {\bf R}, \label{eqn:dVext}$$ acting in the subspace of $L$-states under the general condition of conservation of the total number of particles.
The results of these calculations will strongly depend on how well the $L$-electrons are screened by the $I$-ones. In the case of perfect (100%) screening, the reaction (\[eqn:HerringU2\]) will dominate, and the parameter $U$ will not depend on ${\bf q}$. If the screening is not perfect (e.g., the change of the number of $3d$ electrons in the transition metals is screened to only about 50% by the $4sp$ electrons at the same atom – Ref. ), it is reasonable to expect a strong ${\bf q}$-dependence of the effective $U$, because the two different channels of screening, given by Eqs. (\[eqn:HerringU1\]) and (\[eqn:HerringU2\]), will work in a different way for different ${\bf q}$’s. Since the excess (or deficiency) of $L$-electrons caused by a uniform shift of the external potential $\delta \hat{V}_{\rm ext}$ can only be compensated from the system of $I$-electrons, the “charge transfer” mechanism (\[eqn:HerringU2\]) will always dominate for small ${\bf q}$. The mechanism (\[eqn:HerringU1\]) becomes increasingly important near the Brillouin zone (BZ) boundary, and will generally compete with the “charge transfer” excitations (\[eqn:HerringU2\]), depending on the distribution of the $I$-electron density. [@AnisimovGunnarsson] \[sec:GWg\]The GW method ======================== It was recently suggested by several authors (e.g., in Refs. , and ) that the Coulomb $U$ in the LDA$+$$U$ approach can be replaced by the screened Coulomb interaction $W$ taken from the *ab initio* GW method. The latter is calculated in the random phase approximation (RPA):[@Springer; @Kotani; @Ferdi] $$\hat{W}(\omega) = \left[1 - \hat{u} \hat{P}(\omega)\right]^{-1} \hat{u}. \label{eqn:Dyson}$$ We adopt the orthogonal atomic-like basis of linear-muffin-tin orbitals (LMTO) $\{ \chi_\alpha \}$,[@LMTO] which specifies all matrix notations in Eq. (\[eqn:Dyson\]).
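As a purely numerical illustration of Eq. (\[eqn:Dyson\]), a minimal sketch with small made-up matrices (the numbers are hypothetical and not real LMTO data), showing how a large bare on-site element is reduced by a static polarization:

```python
import numpy as np

# RPA Dyson equation, W = [1 - u P]^(-1) u, for 2x2 made-up matrices.
u = np.array([[24.9, 6.1],
              [6.1, 10.0]])      # "bare" Coulomb matrix (eV), hypothetical
P0 = np.array([[-0.5, 0.1],
               [0.1, -0.2]])     # static polarization (1/eV), hypothetical

def screened_W(u, P):
    # Solve (1 - u P) W = u instead of forming the inverse explicitly
    return np.linalg.solve(np.eye(len(u)) - u @ P, u)

W = screened_W(u, P0)
print(W[0, 0])   # about 2.0 eV, strongly reduced from the bare 24.9 eV
```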
For example, the matrix of bare Coulomb interactions $e^2/|{\bf r}$$-$${\bf r}'|$ has the form $\langle \alpha \beta | \hat{u} | \gamma \delta \rangle$$=$$ e^2 \int d{\bf r} \int d{\bf r}' \chi_\alpha^*({\bf r}) \chi_\beta^*({\bf r}') |{\bf r}$$-$${\bf r}'|^{-1} \chi_\gamma({\bf r}) \chi_\delta({\bf r}')$, and all other matrices are defined in a similar way. The diagonal part of $\hat{u}$ for the $3d$ states is totally specified by three radial Slater’s integrals: $F^0$, $F^2$, and $F^4$. In the following we will identify $F^0$ with the parameter of the bare Coulomb interaction, which acquires the same meaning as the Coulomb $U$ after taking into account all screening effects. $F^2$ and $F^4$ describe the non-spherical interactions, responsible for Hund’s rule. The first advantage of RPA is that it allows one to handle the $\omega$-dependence of $\hat{W}$, which comes from the $\omega$-dependence of the polarization matrix $\hat{P}$. The most common approximation for $\hat{P}$, which is feasible for *ab initio* GW calculations, is that of non-interacting quasiparticles:[@Hedin; @FerdiGunnarsson] $$P_{\rm GW}({\bf r},{\bf r}',\omega) = \sum_{ij} \frac{(f_i-f_j)\psi_i ({\bf r}) \psi^*_i ({\bf r}') \psi^*_j ({\bf r}) \psi_j ({\bf r}')} {\omega - \varepsilon_j + \varepsilon_i + i\delta (f_i-f_j)}, \label{eqn:polarization}$$ which is typically evaluated starting with the electronic structure in LSDA (the spin indices are already included in the definition of $i$ and $j$). Generally speaking, the use of $\hat{P}_{\rm GW}$ is an additional approximation, which yields a new interaction $\hat{W}_{\rm GW}$. At this stage, it is not clear whether it has the same meaning as the effective $U$ derived from CLDA and whether Eq. (\[eqn:polarization\]) includes all necessary channels of screening. It may also include some other effects, which should be excluded from the final definition of $U$ in order to avoid double counting.
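A sketch of how Eq. (\[eqn:polarization\]) is evaluated in practice, reduced to a single matrix element over a toy set of Kohn-Sham levels (eigenvalues, occupations, and transition matrix elements are all made-up numbers, not LSDA data):

```python
import numpy as np

# Non-interacting polarization for one matrix element: the double sum over
# KS states i, j with the (f_i - f_j) factor, as in eqn:polarization.
# M[i, j] stands for the product of wavefunction overlaps in the numerator.

eps = np.array([-2.0, -1.0, 1.5, 3.0])   # KS eigenvalues (eV), hypothetical
f   = np.array([1.0, 1.0, 0.0, 0.0])     # occupations: two filled, two empty
M   = np.array([[0.0, 0.2, 0.5, 0.1],
                [0.2, 0.0, 0.3, 0.4],
                [0.5, 0.3, 0.0, 0.2],
                [0.1, 0.4, 0.2, 0.0]])   # transition matrix elements, made up
delta = 0.05                              # small broadening

def P(omega):
    val = 0.0 + 0.0j
    for i in range(len(eps)):
        for j in range(len(eps)):
            df = f[i] - f[j]
            if df == 0.0:
                continue   # only particle-hole pairs contribute
            val += df * M[i, j] * M[j, i] / (omega - eps[j] + eps[i] + 1j * delta * df)
    return val

print(P(0.0).real)   # negative in the static limit, i.e. screening
```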
One is the self-screening arising from local (on-site) interactions between the localized electrons. These interactions are not accurately treated in RPA.[@Liebsch] Therefore, the basic idea is to exclude these effects from the definition of $\hat{W}_{\rm GW}$ and to relegate this part to the interaction term of the Hubbard model.[@Biermann] In this respect, the second important property of RPA is that it allows one to easily partition the different contributions to $\hat{P}$ and $\hat{W}$. If $\hat{P}$$=$$\hat{P}_1$$+$$\hat{P}_2$ and $\hat{W}_1$ is the solution of Eq. (\[eqn:Dyson\]) for $\hat{P}$$=$$\hat{P}_1$, the total $\hat{W}$ can be obtained from the same equation after the substitution $\hat{P}$$\rightarrow$$\hat{P}_2$ and $\hat{u}$$\rightarrow$$\hat{W}_1$ in Eq. (\[eqn:Dyson\]). For example, if $\hat{P}_2$$=$$\hat{P}_{LL}$ is the part of $\hat{P}_{\rm GW}$ which includes all possible transitions between the localized states, and $\hat{P}_1$$=$$\hat{P}_r$ is the rest of the polarization, the matrix $\hat{W}_r$ corresponding to $\hat{P}_r$ can be used as the interaction part of the Hubbard model.[@Kotani; @Ferdi] \[sec:GWNi\]The GW story for fcc Ni ----------------------------------- The ferromagnetic fcc Ni is the most notorious example where LSDA encounters serious difficulties, especially for the description of spectroscopic properties. There are three major problems:[@FerdiGunnarsson] (i) the bandwidth is too large (overestimated by $\sim$30%); (ii) the exchange splitting is too large (overestimated by $\sim$50%); (iii) the absence of the 6 eV satellite. The *ab initio* GW approach corrects only the bandwidth (although with a certain tendency to overcorrect), whereas the other two problems remain even in GW.[@FerdiGunnarsson; @YamasakiFujiwara] Therefore, before doing any extensions on the basis of the GW method, it is very important to have a clear idea about its limitations. In this section we would like to clarify several confusing statements about the screening of $W$ in GW.
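The two-step property stated above (solving Eq. (\[eqn:Dyson\]) with $\hat{P}_1$ first and then screening $\hat{W}_1$ by $\hat{P}_2$ reproduces the full $\hat{W}$) is an exact matrix identity, which can be checked numerically; the matrices below are small random examples, not real polarization data:

```python
import numpy as np

# Check: dyson(u, P1 + P2) == dyson(dyson(u, P1), P2), i.e. partial
# screening by P1 followed by screening by P2 gives the fully screened W.
rng = np.random.default_rng(0)
n = 3
a = rng.normal(size=(n, n))
u = a @ a.T + n * np.eye(n)                  # positive-definite "bare" interaction
P1 = -(rng.normal(size=(n, n)) ** 2) * 0.01  # weak negative polarizations
P2 = -(rng.normal(size=(n, n)) ** 2) * 0.01

def dyson(u, P):
    return np.linalg.solve(np.eye(len(u)) - u @ P, u)

W_direct = dyson(u, P1 + P2)          # all screening channels at once
W_twostep = dyson(dyson(u, P1), P2)   # screen by P1, then by P2

print(np.allclose(W_direct, W_twostep))  # True
```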
We argue that the main results of the *ab initio* GW method can be explained, even quantitatively, by retaining, instead of the full matrix $\hat{u}$ in Eq. (\[eqn:Dyson\]), only the site-diagonal block $\hat{u}_{LL}$ of bare Coulomb interactions between $3d$ electrons in the atomic-like LMTO basis set. An intuitive reason behind this observation is the form of the polarization matrix (\[eqn:polarization\]), which can interact only with the exchange matrix elements. The latter are small unless they are calculated between orbitals of the same type, corresponding to the self-interaction. The values of the radial Slater’s integrals calculated in the basis of atomic $3d$ orbitals are $F^0$$=$$24.9$, $F^2$$=$$11.1$, and $F^4$$=$$6.8$ eV, respectively. All other interactions are considerably smaller. Hence, it seems reasonable to adopt the limit $\hat{u}_{LL}$$\rightarrow$$\infty$, which automatically picks up in Eq. (\[eqn:Dyson\]) only those matrix elements which are projected onto the atomic $3d$ orbitals in the LMTO representation. In this sense, the *ab initio* GW method for transition metals can be regarded as the RPA solution of the Hubbard model with the *bare* on-site interactions between $3d$ electrons *defined in the basis of LMTO orbitals*. In the GW method, these interactions are practically not screened by outer electrons. Note, however, that the LMTO basis in the transition metals is generally different from the Wannier basis, which should be used for the construction of the Hubbard Hamiltonian. As will become clear in Sec. \[sec:OQ\], the Wannier representation has several additional features, which may modify the conclusions of this section to a certain extent. Results of these model GW calculations are shown in Fig. \[fig.WandSb\].
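The limit $\hat{u}_{LL}$$\rightarrow$$\infty$ has a simple scalar caricature: in RPA the screened interaction saturates at $-$$1/P$ and becomes independent of the bare value. A one-line sketch with a made-up static polarization:

```python
# W(0) = u / (1 - u*P) approaches -1/P as the bare u grows.
P = -0.55   # static polarization (1/eV), hypothetical number

for u in (10.0, 25.0, 100.0, 1000.0):
    W = u / (1.0 - u * P)
    print(u, W)   # W saturates near -1/P, i.e. about 1.8 eV
```

This is the strong-coupling behavior invoked in the text for fcc Ni: $W(0)$ of order 2 eV regardless of whether the bare $F^0$ is 20 or 30 eV.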
In this case, the energy scale is controlled by the bare interaction $F^0$, which predetermines the asymptotic behavior ${\rm Re}W(\infty)$ (with $W$ denoting the diagonal matrix element of $\hat{W}$) and the position of the “plasmon peak” of ${\rm Im} W(\omega)$ at $\sim$22 eV, which is related to the sharp increase of ${\rm Re}W(\omega)$ at around 25 eV via the Kramers-Kronig transformation. At small $\omega$, the behavior of $\hat{W}(\omega)$ is well consistent with the strong-coupling regime $F^0$$\rightarrow$$\infty$: namely, $\hat{W}(\omega)$$\sim$$-$$\hat{P}^{-1}(\omega)$, which is small ($\sim$1.8 eV at $\omega$$=$$0$) and *does not depend on* $F^0$ (though it may depend on $F^2$ and $F^4$). All these features are in good semi-quantitative agreement with the results of GW calculations.[@FerdiGunnarsson; @Springer; @Kotani; @Ferdi] The self-energy in GW is given by the convolution of $\hat{W}$ with the one-particle Green function $\hat{G}$: $$\hat{\Sigma}(\omega) = \frac{i}{2\pi} \int d\omega' \hat{G}(\omega + \omega') \hat{W}(\omega'). \label{eqn:SigmaGW}$$ Therefore, the $\omega$-dependence of $\hat{\Sigma}$ should incorporate the main features of $\hat{W}(\omega')$. Indeed, the low-energy part of $\hat{\Sigma}$ (close to the Fermi energy or the chemical potential $\mu$) is mainly controlled by ${\rm Im} \hat{W}$. Since the main poles of ${\rm Im} \hat{W}$ and ${\rm Im} \hat{G}$ are well separated on the $\omega$-axis (the $\omega$-range of ${\rm Im} \hat{G}$ is limited by the $3d$ bandwidth, $\sim$4.5 eV in LSDA for fcc Ni, whereas the “plasmon peak” of ${\rm Im} W$ is located only at $\sim$22 eV), one has the following relation: $$\left. \partial \Sigma /\partial \omega \right|_{\omega=\mu} \approx \frac{1}{\pi} \int_0^\infty d\omega {\rm Im} W(\omega)/\omega^2. \label{eqn:dSigmamu}$$ This yields the renormalization factor $Z$$=$$[1$$-$$\left.
\partial \Sigma / \partial \omega \right|_{\omega=\mu}]^{-1}$$\sim$$0.5$, which readily explains the reduction of the $3d$ bandwidth as well as of the intensity of the valence spectrum in *ab initio* GW calculations (Fig. \[fig.DOS\]).[@FerdiGunnarsson; @YamasakiFujiwara] Away from the Fermi energy (i.e., for energies $|\omega|$ much larger than the $3d$ bandwidth), one has another relation, ${\rm Re} \Sigma(\omega)$$\sim$$-$${\rm Re} W(\omega)$, which readily explains the existence of the deep minimum of ${\rm Re} \Sigma(\omega)$ near $-$$30$ eV as well as the large transfer of spectral weight into this region (shown in the inset of Fig. \[fig.DOS\]). Therefore, it is not quite right to say that the satellite structure is missing in the *ab initio* GW approach. It may exist, but only in the wrong region of $\omega$. Thus, quite apart from RPA, the major problem of the GW description of the transition metals is the *wrong energy scale, which is controlled by the bare on-site Coulomb interaction* $F^0$ ($\sim$$20$-$30$ eV) between the $3d$ electrons. In summarizing this section, we would like to stress again the following points:\ (1) The major channel of screening of the Coulomb interaction in the GW method for the transition metals originates from the $3d$$\rightarrow$$3d$ transitions in the polarization function calculated in the atomic-like LMTO basis set. The screening by the $4sp$-electrons is practically absent;\ (2) At small $\omega$, the deficiency of the $3d$-$4sp$ screening is masked by the strong-coupling regime realized in the RPA equations for the screened Coulomb interaction, which explains the small value of $W(0)$ obtained in the GW calculations;\ (3) The main $\omega$-dependence of $\hat{\Sigma}$ and $\hat{W}$ in GW also comes from the $3d$$\rightarrow$$3d$ transitions. Different conclusions obtained in Refs.  
are related to the use of different partitionings into what are called the “$3d$” and “non-$3d$” (pseudo-) Wannier orbitals.[@comment3] In the light of the analysis presented in this section, the strong $\omega$-dependent screening by the “non-$3d$” Wannier states obtained in Refs.  means that in reality these states had a substantial weight of the “$3d$” character of the LMTO basis, which mainly contributed to the screening. We will return to this problem in Sec. \[sec:OQ\]. The next important interaction which contributes to the screening of $F^0$ in GW is due to transitions between states with the same angular momentum: i.e., $3d$$\rightarrow$$nd$ ($n$$=$ $4$, $5$, ...) (see also comments in Sec. \[sec:conventions\]). In the lowest order (non-self-consistent RPA), these contributions can be estimated as $$\Delta W(\omega) \approx \langle 3d 3d | \hat{u} | 3d 4d \rangle_{\rm av}^2 P_{\rm GW}(\omega,3d \rightarrow 4d) + ({\rm higher}~n), \label{eqn:simpleDyson}$$ where $\langle 3d 3d | \hat{u} | 3d 4d \rangle_{\rm av}$$\simeq$$6.1$ eV is the spherical part of the exchange integral $\langle 3d 3d | \hat{u} | 3d 4d \rangle$, corresponding to $F^0$.[@comment4] Results of these calculations are shown in Fig. \[fig.relaxation\]. The region of $3d$$\rightarrow$$4d$ transitions strongly overlaps with the “plasmon peak” of ${\rm Im} W(\omega)$ (Fig. \[fig.WandSb\]). Therefore, in the GW calculations, these two effects are strongly mixed.[@FerdiGunnarsson; @Springer; @Kotani; @Ferdi] The $\omega$-dependence of $\Delta W$ will also contribute to the renormalization of the low-energy part of the spectrum. In GW, this contribution can be estimated using Eq. (\[eqn:dSigmamu\]), which yields $\left. \partial \Sigma /\partial \omega \right|_{\omega=\mu}$$\sim$$0.06$. This contribution is small and can be neglected. \[sec:GWversusCLDA\]GW versus CLDA ================================== What is missing in the *ab initio* GW method, and what is the relation between GW and CLDA?
Let us consider for simplicity the static case, where $\delta \hat{V}_{\rm ext}$ does not depend on time (the generalization to the time-dependent case is rather straightforward). Eventually, both methods are designed to treat the response $\delta\rho({\bf r})$ of the charge density (\[eqn:rho\]) to the change of the external potential $\delta \hat{V}_{\rm ext}$, which can be calculated in the first order of regular perturbation theory. Then, $\delta \hat{V}_{\rm ext}$ will affect both the eigenvalues and the eigenfunctions of the KS equations (\[eqn:KS\]). The corresponding corrections are given by the matrix elements $\langle \psi_i | \delta \hat{V}_{\rm ext} | \psi_j \rangle$ with $i$$=$$j$ and $i$$\neq$$j$, respectively. If two (or more) eigenvalues are located near the Fermi level, their shift can lead to repopulation effects, when some levels become occupied at the expense of the others. This is a direct consequence of the conservation of the total number of particles, which affects the occupation numbers. Therefore, very generally, the total response $\delta\rho({\bf r})$ in metals will consist of two parts, $\delta \rho({\bf r})$$=$$\delta_1 \rho({\bf r})$$+$$\delta_2 \rho({\bf r})$, describing the change of the occupation numbers, $\delta_1 \rho({\bf r})$$=$$\sum_i \delta f_i |\psi_i ({\bf r})|^2$, and the relaxation of the wavefunctions, $\delta_2 \rho({\bf r})$$=$$\sum_i f_i \delta |\psi_i ({\bf r})|^2$, respectively. Then, the polarization function $P$, defined as $$\delta \rho({\bf r}) = \int d {\bf r}' P({\bf r},{\bf r}',0) \delta V_{\rm ext} ({\bf r}'), \label{eqn:deltarhoRPA}$$ will also consist of two parts, $P_1$ and $P_2$, which yield $\delta_1 \rho$ and $\delta_2 \rho$ after acting on $\delta V_{\rm ext}$. Then, it is easy to verify by considering the perturbation-theory expansion for $\{ \psi_i \}$ with respect to $\delta V_{\rm ext}$ that the GW approximation corresponds to the choice $P_1$$=$$0$ and $P_2$$=$$P_{\rm GW}$.
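The decomposition $\delta \rho$$=$$\delta_1 \rho$$+$$\delta_2 \rho$ can be illustrated with a toy Hamiltonian (all numbers made up): two levels lie close to the Fermi energy, so the perturbation reorders them, and the full response contains a repopulation part on top of the wavefunction relaxation captured by $P_{\rm GW}$:

```python
import numpy as np

# delta1_rho: repopulation of levels near the Fermi energy (missing in GW).
# delta2_rho: relaxation of wavefunctions at fixed occupations (P_GW part).
H0 = np.diag([-1.0, 0.1, 0.2, 3.0])   # two levels lie close together
dV = 0.3 * (np.eye(4, k=1) + np.eye(4, k=-1)) + np.diag([0.0, 0.2, -0.2, 0.0])
N_occ = 2                              # fixed total number of electrons

e0, psi0 = np.linalg.eigh(H0)
e1, psi1 = np.linalg.eigh(H0 + dV)

rho0 = (psi0[:, :N_occ] ** 2).sum(axis=1)

# Full response: refill the N_occ lowest perturbed levels (occupations change)
drho_full = (psi1[:, :N_occ] ** 2).sum(axis=1) - rho0

# Fixed-occupation response: keep the perturbed states with maximum overlap
# with the originally occupied ones (wavefunction relaxation only)
keep = [np.argmax(np.abs(psi0[:, i] @ psi1)) for i in range(N_occ)]
drho_relax = (psi1[:, keep] ** 2).sum(axis=1) - rho0

# The difference is the repopulation contribution delta1_rho
drho_repop = drho_full - drho_relax
print(np.linalg.norm(drho_repop))   # nonzero: levels near the Fermi energy swap
```

The fixed-occupation response is the analogue of the GW choice $P_1$$=$$0$; its difference from the full response is the repopulation part $\delta_1\rho$ discussed in the text.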
It yields $\delta_2\rho({\bf r})$, which further induces the new change of the Coulomb (Hartree) potential $\delta_2 V_{\rm H}({\bf r})$$=$$e^2 \int d {\bf r}' \delta_2 \rho({\bf r}')/|{\bf r}$$-$${\bf r}'|$. By solving this problem self-consistently and taking the functional derivative with respect to $\delta_2 \rho$, one obtains the GW expression (\[eqn:Dyson\]) for the screened Coulomb interaction $\hat{W}_{\rm GW}(0)$. Therefore, it is clear that the *ab initio* GW method takes into account only one part of the total response $\delta \rho$, describing the relaxation of the wavefunctions with fixed occupation numbers. Another contribution, corresponding to the change of the occupation numbers (or the charge redistribution near the Fermi level), is totally missing. This result can be paraphrased in a different way, which clearly illustrates its connection with the definition of orthogonal subspaces, $L$ and $I$, discussed in the introduction, and the partitioning of the polarization function $P$ (Sec. \[sec:GWg\]), which is used in the definition of the Hubbard model.[@Kotani; @Ferdi] First, recall that according to the main idea of the LDA$+$$U$ method (see postulates 1-3 of the Introduction), $\delta \hat{V}_{\rm ext}$ should be a projector-type operator acting in the subspace of the $L$ states. Then, the result of the action of the polarization function $P_{\rm GW}$$\equiv$$P_2$, given by Eq. (\[eqn:polarization\]), on this $\delta \hat{V}_{\rm ext}$ will belong to the same $L$ space. Therefore, the projector $\delta \hat{V}_{\rm ext}$ will generate only that part of the polarization function which is associated with the transitions between localized states ($\hat{P}_{LL}$ in Sec. \[sec:GWg\]). 
Meanwhile, this polarization effect should be excluded from the final definition of the parameter $U$ in the Hubbard model to avoid the double counting.[@Kotani; @Ferdi] However, if $\hat{P}_{LL}$ is excluded, there will be nothing left in the polarization function (\[eqn:polarization\]) that can interact with $\delta \hat{V}_{\rm ext}$ and screen the change of the electron density in the $L$-subspace. Therefore, the GW scheme should correspond to the bare Coulomb interaction, which is fully consistent with the analysis presented in Sec. \[sec:GWNi\]. \[sec:difficulties\]Basic Difficulties for Transition Metals ------------------------------------------------------------ There is a certain ambiguity in the construction of the Hubbard model for the transition metals, which is related to the fact that their LDA electronic structure cannot be described in terms of fully separated $L$- and $I$-states without additional approximations. In this section we briefly review two such approximations, which will explain the difference between our point of view on the screening of Coulomb interactions in the transition metals and the one proposed in Refs.  and . The GW approach employed in Refs.  and implies that *all* electronic structure near the Fermi level can be described in terms of *only five* pseudo-Wannier orbitals of predominantly $3d$-character, which serve as the $L$-states in the considered model. Generally, such $L$-states are not the same as the LMTO basis functions and take into account the effects of hybridization between $3d$ and $4sp$ states. An example of such electronic structure, obtained after elimination of the $4sp$-states near the Fermi level through the downfolding procedure,[@PRB04] is shown in Fig. \[fig.bands\]. Other possibilities of defining these pseudo-Wannier functions, which have been actually used in Refs.  and , are summarized in Ref. . 
Then, the remaining electronic states, which are orthogonal to these pseudo-Wannier orbitals, represent the $I$-states. By construction, the $I$-states are expected to be far from the Fermi level. This may justify the use of the GW approximation for the screening of Coulomb interactions in the $3d$-electron-like bands, formed by the pseudo-Wannier orbitals near the Fermi level, by the remote $I$-states. The parameters of Coulomb interactions, constructed in such a way, correspond to the original Herring definition (\[eqn:HerringU1\]) in the basis of pseudo-Wannier orbitals. Formally, it should also include the charge redistribution effects near the Fermi level. However, in this case the charge redistribution goes between pseudo-Wannier orbitals of the same ($L$) type, which constitutes the basis of the Hubbard model. Therefore, the effects of the charge redistribution can be taken into account by including the intra- as well as inter-site Coulomb interactions in the Hubbard Hamiltonian. The latter can be evaluated in the GW approach, provided that the relaxation effects are not very sensitive to whether the excited electron is placed on another $L$-orbital of the same system, or completely removed from it, like in the GW method. The model employed in CLDA calculations is obtained after neglecting the hybridization between $3d$- and $4sp$-states (the so-called canonical-bands approximation–Ref. ). It consists of the pure $3d$-band, located near the Fermi level and representing the $L$-states of the model, which is embedded into the free-electron-like $4sp$-band, representing the $I$-states.[@comment1] Formally, these bands are decoupled and the free-electron-like $4sp$-band can be eliminated from the basis in the process of construction of the Hubbard Hamiltonian. 
However, in this case the definition of the screened Coulomb interaction in the $3d$ band should take into account the processes corresponding to the redistribution of electrons between the $3d$- and $4sp$-bands at a low energy cost, which is traced back to Herring’s scenario of screening in the transition metals,[@Herring] and which is missing in the GW method. However, we would like to emphasize again that both considered models are *approximations* to the real electronic structure of fcc Ni. Even in the first case (model ‘a’ in Fig. \[fig.bands\]), the free-electron-like $4sp$-band lies near the Fermi level (especially around the L-point of the Brillouin zone). Therefore, the charge redistribution effects are expected to play some role even in the basis of Wannier orbitals. On the other hand, because of strong hybridization between $3d$- and $4sp$-states in the transition metals, there is a substantial difference between the electronic structure used in CLDA calculations (model ‘b’ in Fig. \[fig.bands\]) and the real LDA electronic structure of fcc Ni. Strictly speaking, all partial contributions to the screening of Coulomb interactions, which we will consider in the next section, will be evaluated for this particular model of the electronic structure. The values of these parameters can be revised to a certain extent after taking into account the hybridization between $3d$- and $4sp$-states. For example, with a better choice of the Wannier basis for the five $3d$-electron-like bands in model ‘b’ one could possibly incorporate the main effects of model ‘a’ and merge these two approaches. \[sec:CLDATM\]CLDA for transition metals ======================================== How important are the relaxation of the wavefunctions and the change of the occupation numbers in the definition of the Coulomb interaction $U$? For the transition metals, both contributions can be easily evaluated in CLDA. 
For these purposes it is convenient to use the Hellmann-Feynman theorem, which relates the static $U$ to the expectation value of the KS potential $V_{\rm KS}$$=$$V_{\rm H}$$+$$V_{\rm XC}$: [@PRB94.2] $$U=\langle 3d | \frac{\partial V_{\rm KS}}{\partial n_{3d}} | 3d \rangle.$$ In this expression the exchange-correlation (XC) part is small, while $\delta V_{\rm H}$ can be expressed through $\delta\rho$. Hence, the CLDA scheme provides the self-consistent solution for $\delta\rho$ associated with the change of the number of $3d$ electrons, $\delta n_{3d}$. The latter is controlled by $\delta V_{\rm ext}$. Therefore, the procedure is totally equivalent to the calculation of the polarization function $P$ and the screened Coulomb interaction for $\omega$$=$$0$. \[sec:conventions\]Conventions ------------------------------ We use a rather standard setup for the CLDA calculations. Namely, the $3d$ band of Ni should be well separated from the rest of the spectrum (otherwise, the LDA$+$$U$ strategy discussed in the Introduction does not apply). For fcc Ni this is not the case. However, this property can be enforced by using the canonical bands approximation in the LMTO method.[@LMTO] We employ an even cruder approximation and replace the $3d$ band by the atomic $3d$ levels embedded into the $4sp$ band (in other words, we switch off the hybridization between $3d$ orbitals located at different atomic sites as well as between the $3d$ and $4sp$ states).[@AnisimovGunnarsson] Then, each $3d$ orbital can be assigned to a single atomic site. By changing the number of $3d$-electrons at different atomic sites $\{ {\bf R} \}$ in supercell calculations, one can mimic the ${\bf q}$-dependence of the external potential (\[eqn:dVext\]). Other atomic populations (of the $4sp$ states) are allowed to relax self-consistently in response to each change of the number of $3d$ electrons. 
Hence, the contribution of the charge-transfer excitation (\[eqn:HerringU2\]) to the screening of $U$ is unambiguously defined by the form of the external potential and details of the electronic structure of the $4sp$ states. Some aspects of treating the $3d$ states beyond the atomic approximation will be considered in Sec. \[sec:OQ\]. The LMTO method is supplemented with an additional spherical approximation for $V_{\rm KS}({\bf r})$ inside atomic spheres, which excludes small exchange interactions between the $3d$ and $4sp$ electrons from the screening of $U$. By paraphrasing this statement in terms of the polarization function in the GW method, the spherical approximation for $V_{\rm KS}({\bf r})$ in the CLDA calculations is equivalent to retaining in $P_{\rm GW}$ only those contributions which are associated with transitions between states with the same angular momentum (e.g., $3d$$\rightarrow$$4d$, etc.). \[sec:Gamma\]Screened Coulomb Interaction in the $\Gamma$-point --------------------------------------------------------------- First, we evaluate the pure effect associated with the change of the occupation numbers, without relaxation of the wavefunctions. This mechanism is directly related to the conservation of the total number of particles, and simply means that the excess (or deficiency) of the $3d$ electrons for ${\bf q}$$=$$0$ is always compensated by the $4sp$ electrons, which participate in the screening of $3d$ interactions. The corresponding contribution to the screening of $F^0$ is given by:[@PRB94.2] $$\Delta^{(1)} F^0 = \sum_{i \neq 3d} \frac{\delta f_i}{\delta n_{3d}} \langle 3d i|\hat{u}|3d i \rangle_{\rm av}.$$ In transition metals, $\Delta^{(1)} F^0$ is very large and accounts for more than 70% of the total screening of the bare Coulomb interaction $F^0$ (Table \[tab:U\]). This contribution is missing in the GW method. 
The second largest effect ($\sim$25% of the total screening) is caused by the relaxation of the $3d$ orbitals in response to the change of the Hartree potential associated with the change of these occupation numbers ($\Delta^{(2)} F^0$ in Table \[tab:U\]). The remaining part of the screening ($\sim$5%) comes from the relaxation of other orbitals (including the core ones) and the change of the XC potential. In principle, the relaxation effects should be taken into account by the GW calculations. However, this procedure strongly depends on how it is implemented.

  compound    $F^0$   $\Delta^{(1)} F^0$   $\Delta^{(2)} F^0$    $U$
  ---------- ------- -------------------- -------------------- -----
  bcc Fe      22.2         -13.6                -3.5             4.5
  fcc Ni      24.9         -14.2                -5.2             5.0

  : Partial contributions to the screening of the $3d$ interactions in the $\Gamma$-point extracted from constraint-LDA calculations (in eV): (1) the bare Coulomb integral $F^0$, (2) the screening of $F^0$ by the $4sp$ electrons associated with the change of the occupation numbers, without relaxation of the wavefunctions ($\Delta^{(1)} F^0$), (3) the additional screening of $F^0$ associated with the relaxation of the $3d$ orbitals ($\Delta^{(2)} F^0$), and (4) the total value of $U$ obtained in CLDA calculations.[]{data-label="tab:U"}

For example, the CLDA approach is based on a direct solution of the KS equations supplemented with a *flexible* atomic basis set, like in the LMTO method.[@LMTO] Then, the change of $F^0$ caused by relaxation of the $3d$ orbitals can be easily evaluated as[@PRB94.2] $$\Delta^{(2)} F^0=\frac{n_{3d}}{2} \frac{\partial F^0}{\partial n_{3d}}.$$ Since $n_{3d}$ is large in fcc Ni, this contribution is also large. 
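The relative weights of the two channels quoted in the text can be checked directly against the fcc Ni row of Table \[tab:U\] (a trivial arithmetic sketch; all numbers are taken from the table itself):

```python
# Shares of the total screening F^0 - U for fcc Ni (Table [tab:U], all in eV)
F0, d1F0, d2F0, U = 24.9, -14.2, -5.2, 5.0
total = F0 - U                              # 19.9 eV of total screening
share_occupation = -d1F0 / total            # change of the occupation numbers
share_relaxation = -d2F0 / total            # relaxation of the 3d orbitals
print(f"occupation channel: {share_occupation:.0%}, "
      f"relaxation channel: {share_relaxation:.0%}")
# prints: occupation channel: 71%, relaxation channel: 26%
```

The small remainder (the difference between $F^0+\Delta^{(1)}F^0+\Delta^{(2)}F^0$ and $U$) is the contribution of the other orbitals and the XC potential.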
The situation can be different in the GW scheme, based on the perturbation theory expansion, which requires a large basis set.[@HybertsenLouie] For example, in order to describe properly the same relaxation of the $3d$ wavefunctions, the polarization $P_{\rm GW}$ should explicitly include the excitations from the occupied $3d$ to the unoccupied $4d$ (and probably higher) states.[@FerdiGunnarsson] \[sec:q\][**q**]{}-dependence of Coulomb $U$ -------------------------------------------- Since the change of the number of $3d$ electrons in transition metals is totally screened by the $4sp$ electrons at the same atomic site,[@AnisimovGunnarsson] it is reasonable to expect an appreciable ${\bf q}$-dependence of the effective $U$. Results of CLDA calculations for the high-symmetry points of the Brillouin zone (BZ) are summarized in Table \[tab:Uq\]. The effective $U$ appears to be small in the $\Gamma$-point due to the perfect screening by the $4sp$ electrons. At the Brillouin zone boundary this channel of screening is strongly suppressed, which is reflected in the larger values of the Coulomb $U$. The screening by intersite Coulomb interactions, which takes place in the ${\rm X}$-point of the BZ, is substantially weaker and cannot fully compensate the lack of the $4sp$-screening. In the ${\rm L}$-point of the BZ for the fcc lattice, the modulation of the $3d$-electron density in the CLDA calculations is such that the number of nearest neighbors with an excessive or deficient number of $3d$ electrons is the same. Therefore, the contributions of intersite Coulomb interactions to the screening cancel out, resulting in the largest value of the effective $U$ at this point of the BZ. 
  $\Gamma$   ${\rm X}$   ${\rm L}$
  ---------- ----------- -----------
  5.0        6.8         7.3

  : Coulomb interaction $U$ (in eV) for fcc Ni in three different points of the Brillouin zone: $\Gamma$$=$$(0,0,0)$, ${\rm X}$$=$$(2\pi,0,0)$, and ${\rm L}$$=$$(\pi,\pi,\pi)$ (in units of $1/a$, where $a$ is the cubic lattice parameter).[]{data-label="tab:Uq"}

\[sec:GWCLDA\][GW starting with CLDA]{} ======================================= In this section we discuss the relevance of the parameters of effective Coulomb interactions extracted from CLDA for the analysis of the electronic structure and properties of fcc Ni. We consider the “renormalized GW approach”, in which, instead of bare Coulomb interactions, we use parameters extracted from CLDA. The main difference is that the latter incorporates the screening by the $4sp$-electrons, including the effects of charge redistribution beyond the GW approximation. This strategy can be well justified within RPA, because it allows us to partition the polarization function and treat the screening effects in two steps:\ (1) We take into account the screening by “non-$3d$” electrons using CLDA. This yields the new (“renormalized”) matrix of screened Coulomb interactions $\hat{\bar{u}}_{LL}$ between the $3d$ electrons.[@comment2] As discussed in Sec. \[sec:q\], the obtained interaction $\hat{\bar{u}}_{LL}$ is ${\bf q}$-dependent, and this dependence is fully taken into account in our calculations.\ (2) We evaluate the screening caused by $3d$$\rightarrow$$3d$ transitions in the polarization function (\[eqn:polarization\]) using Eq. (\[eqn:Dyson\]) in which the matrix of bare Coulomb interactions $\hat{u}_{LL}$ is replaced by $\hat{\bar{u}}_{LL}$. This yields the new interaction $\hat{\bar{W}}(\omega)$, which is used in subsequent calculations of the self-energy (\[eqn:SigmaGW\]). It is reasonable to expect that the main $\omega$-dependence of $\hat{\bar{W}}$ will come from the $3d$$\rightarrow$$3d$ transitions (see closing arguments in Sec. 
\[sec:GWNi\]), which are taken into account in the second step. The screening by “non-$3d$” states can be treated as static. Results of these calculations are shown in Fig. \[fig.WandSb\]. The main effect of the $4sp$-screening, beyond the standard GW approach, is the change of the energy scale, which is now controlled by the ${\bf q}$-dependent Coulomb interaction $U$, being of the order of 5.0-7.3 eV. It changes the asymptotic behavior ${\rm Re} \bar{W}(\infty)$ as well as the position and intensity of the “plasmon peak” of ${\rm Im} \bar{W}(\omega)$, which is shifted to the lower-energy region and becomes substantially broader in comparison with the case of bare Coulomb interactions considered in Sec. \[sec:GWNi\]. On the other hand, the static limit ${\rm Re}\bar{W}$$\simeq$$1.9$ eV is practically unaffected by the details of the $4sp$-screening, due to the strong-coupling regime realized in the low-$\omega$ region. The ${\rm Re} \bar{W}$ exhibits a strong $\omega$-dependence at around 7 eV, which is related to the position of the plasmon peak of ${\rm Im} \bar{W}(\omega)$. All these features are well reflected in the behavior of $\Sigma(\omega)$. The main effect of the $4sp$-screening on the spectral function in RPA consists in a somewhat milder reduction of the bandwidth, which is also related to the spectral weight transfer (Fig. \[fig.DOS\]): the new renormalization factor is $Z$$\sim$$0.7$ against $Z$$\sim$$0.5$ obtained with bare Coulomb interactions. However, the exchange splitting does not change and the 6 eV satellite structure does not emerge. \[sec:OQ\]Summary and Remaining Questions ========================================= We have considered several mechanisms of screening of the bare Coulomb interactions between $3d$ electrons in transition metals. We have also discussed different methods of calculation of the screened Coulomb interactions. 
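The qualitative effect of replacing the bare interaction by the CLDA-renormalized one can be reproduced with a minimal scalar RPA model (a hedged sketch: a single-pole ansatz for the $3d$$\rightarrow$$3d$ polarization with invented parameters, not the actual LMTO result):

```python
import numpy as np

# Scalar version of the Dyson equation W(w) = u / (1 - u*P(w)) with a
# single-pole retarded polarization of strength p0 and transition energy w0.
w = np.linspace(0.0, 40.0, 40001)
w0, p0, eta = 4.0, 2.0, 0.3
P = 2.0 * w0 * p0 / ((w + 1j * eta)**2 - w0**2)

def screened(u):
    return u / (1.0 - u * P)

W_bare = screened(25.0)   # bare interaction, of the order of F^0
W_ren  = screened(5.0)    # CLDA-renormalized interaction u-bar

peak_bare = w[np.argmax(-W_bare.imag)]   # "plasmon" peak of -Im W
peak_ren  = w[np.argmax(-W_ren.imag)]
print("plasmon peaks:", peak_ren, "<", peak_bare)
print("static limits:", W_ren[0].real, W_bare[0].real)
```

Two features of the discussion above appear directly: the plasmon peak moves down in energy when the bare interaction is renormalized, while the static limit barely moves because both cases are already deep in the strong-coupling regime.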
Our main results can be summarized as follows.\ (1) The processes which mainly contribute to the screening of Coulomb interactions between $3d$ electrons are essentially local, meaning that the on-site Coulomb interactions are most efficiently screened by the $3d$ and $4sp$ electrons located at the same site.[@Gunnarsson; @AnisimovGunnarsson; @PRB96] The most efficient mechanism of screening is basically the self-screening by the same $3d$ electrons, evaluated in some appropriate atomic-like basis set, like that of the LMTO method employed in the present work. The $\omega$-dependence of the effective Coulomb interaction $U$ also originates mainly from the self-screening.\ (2) We have clarified a fundamental difference between the constraint-LDA and GW methods in calculating the effective Coulomb interaction $U$. The GW approximation does not take into account the screening of the on-site Coulomb interactions by the itinerant $4sp$ electrons, which takes place via the redistribution of electrons between the $3d$ and $4sp$ bands. In a number of cases, the GW approach may be justified by using Wannier basis functions representing the bands near the Fermi level. If these bands are well isolated from the other bands, the redistribution of electrons between Wannier orbitals for the bands near the Fermi level and those far from the Fermi level must be negligible. Then, the remote bands can participate in the screening of Coulomb interactions in the “near-Fermi-level bands” only via virtual excitations, which can be treated on the RPA level. However, in the case of Ni, such a separation of bands is not complete, and it is essential to consider additional mechanisms of screening beyond the GW approximation. In the present work, the $4sp$-screening is automatically taken into account in the CLDA approach, which is complementary to the GW method. 
Due to the strong-coupling regime realized in the RPA equations for the screened Coulomb interaction, the static limit appears to be insensitive to the details of the $4sp$-screening. However, from the viewpoint of the present approach, the $4sp$-screening becomes increasingly important at finite $\omega$ and controls both the asymptotic behavior and the position of the plasmon peak of the screened Coulomb interaction in RPA. The latter effect can be especially important as it predetermines the position of the satellite structure. Finally, we would like to make several comments about the implications of the parameters of the screened Coulomb interaction obtained in our work for the description of the electronic structure and properties of transition metals. We will also discuss some future directions and compare with already existing works.\ (1) Our results clearly show that RPA is not an adequate approximation for the electronic structure of fcc Ni. Even after taking into account the additional screening of the $3d$-$3d$ interactions by the itinerant $4sp$ electrons, beyond the GW approximation, and the ${\bf q}$-dependence of the effective $U$, we obtain only a partial agreement with the experimental data. Namely, only the bandwidth is corrected in this “renormalized GW approach”, in better agreement with the experimental data. However, there is only a tiny change of the spectral weight around 6 eV (Fig. \[fig.DOS\]), i.e., in the region where the satellite structure is expected experimentally. Even assuming that our parameters of Coulomb interactions may still be overestimated (due to the reasons which will be discussed below), and the satellite peak can emerge for some smaller values of $U$,[@Ferdi] one can hardly expect the strong spin-dependence of this satellite structure as well as the reduction of the exchange splitting, which are clearly seen in the experiment,[@Kakizaki] on the level of RPA calculations. 
Therefore, it is essential to go beyond.\ (2) Even beyond LDA, do the parameters of the screened Coulomb interaction $U$$\sim$5.0-7.3 eV, obtained in the atomic approximation, provide a coherent description for the electronic structure and properties of fcc Ni? Probably, this is still an open question because so far not all of the possibilities in this direction have been fully investigated. One new aspect suggested by our calculations is the ${\bf q}$-dependence of the effective $U$. On the other hand, all previous calculations suggest that a Coulomb interaction of the order of 5.0-7.3 eV is probably too large. For example, the value of $U$ which provides a coherent description for a number of electronic and magnetic properties of fcc Ni on the level of DMFT calculations is about 3 eV,[@LichtPRL01] which is well consistent with previous estimates based on the $t$-matrix approach.[@Liebsch] Therefore, it is reasonable to ask if there is an additional mechanism which further reduces the effective $U$ from 5.0-7.3 eV down to 3.0 eV? One possibility lies in the atomic approximation, which neglects the hybridization effects between the $3d$ and $4sp$ states and which is a rather crude approximation for the transition metals.[@comment1] The hybridization will generally mix the states of the $3d$ and $4sp$ character, and therefore will affect the form of the Wannier orbitals constructed from the atomic wavefunctions. Since the $3d$, $4s$, and $4p$ states belong to different representations of the point group of cubic symmetry, they cannot mix at the same site. However, the $4s$ (or $4p$) orbital can have tails of the $3d$ character at the neighboring sites (and vice versa). These tails will additionally screen the Coulomb interactions between the (nominally) $3d$ electrons. The screening is expected to be very efficient because it operates between orbitals of the same ($3d$) type. This should explain a further reduction of the static $U$ obtained in the atomic approximation. 
Another feature of this screening is the $\omega$-dependence of the effective $U$, which comes from the $3d$$\rightarrow$$3d$ transitions in the polarization function (namely between tails of the $4sp$-orbitals and the heads of the wavefunctions of the $3d$ character). In RPA, this $\omega$-dependence is directly related to the static limit of the screening via the Kramers-Kronig transformation.[@FerdiGunnarsson] We believe that the screening by the tails of the Wannier functions was the main physical mechanism underlying the calculations of the effective Coulomb interaction in Refs. , in the framework of the *ab initio* GW method, although this idea has not been clearly spelled out before. The effect of charge redistribution between different states located near the Fermi level, which is not taken into account in the GW approximation, is also expected to be smaller with the proper choice of the Wannier orbitals. Another problem is that the $3d$ and $4sp$ bands are strongly mixed in the case of pure transition metals. Therefore, the construction of separate Wannier functions of the “$3d$” and “non-$3d$” type will always suffer from some ambiguities.[@comment3] In this sense, the transition-metal oxides, whose physical properties are mainly predetermined by the behavior of a limited number of $3d$ bands, located near the Fermi level and well separated from the rest of the spectrum, are much more interesting systems for the exploration of the idea of screening of Coulomb interactions, formulated in the Wannier basis. For example, based on the above argument, one can expect a very efficient screening of Coulomb interactions in the $3d$ band by the Wannier states constructed from the oxygen $2p$ orbitals, which have appreciable tails of the $3d$ character at the transition-metal sites. The first attempt to consider this screening has been undertaken in Ref. , on the basis of the constraint-LDA method. 
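The Kramers-Kronig link invoked above can be checked numerically on a toy single-pole model (a hedged sketch with invented parameters): the full depth of the static screening, ${\rm Re}\,W(0)-{\rm Re}\,W(\infty)$, is recovered from the spectral weight of ${\rm Im}\,W(\omega)$.

```python
import numpy as np

# Toy retarded screened interaction W(w) = u / (1 - u*P(w)) with a single-pole
# polarization; Kramers-Kronig at w = 0 gives
#   Re W(0) - Re W(inf) = (2/pi) * Int_0^inf Im W(w')/w' dw'.
w = np.linspace(0.0, 400.0, 400001)
w0, p0, eta, u = 4.0, 2.0, 0.05, 5.0
P = 2.0 * w0 * p0 / ((w + 1j * eta)**2 - w0**2)
W = u / (1.0 - u * P)

lhs = W[0].real - u                    # W(inf) -> bare u
integrand = np.zeros_like(w)
integrand[1:] = W.imag[1:] / w[1:]     # Im W(0) = 0, so drop the w = 0 point
rhs = (2.0 / np.pi) * np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(w))
print(lhs, rhs)                        # the two sides should agree closely
```

Since $W(\omega)-u$ is analytic in the upper half plane for this model, the relation is exact; the residual difference comes only from the grid and the frequency cutoff.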
A similar scheme can be formulated within RPA, which takes into account the $\omega$-dependence of the screened Coulomb interaction $U$. This work is currently in progress. [10]{} P. Hohenberg and W. Kohn, Phys. Rev. [**136**]{}, B864 (1964); W. Kohn and L. J. Sham, Phys. Rev. [**140**]{}, A1133 (1965). V. I. Anisimov, J. Zaanen, and O. K. Andersen, Phys. Rev. B [**44**]{}, 943 (1991). I. V. Solovyev, P. H. Dederichs, and V. I. Anisimov, Phys. Rev. B [**50**]{}, 16861 (1994). V. I. Anisimov, F. Aryasetiawan, and A. I. Lichtenstein, J. Phys. Condens. Matt. [**9**]{}, 767 (1997). V. I. Anisimov, A. I. Poteryaev, M. A. Korotin, A. O. Anokhin, and G. Kotliar, J. Phys. Condens. Matt. [**9**]{}, 7359 (1997); S. Y. Savrasov, G. Kotliar, and E. Abrahams, Nature [**410**]{}, 793 (2001); K. Held, G. Keller, V. Eyert, D. Vollhardt, and V. I. Anisimov, Phys. Rev. Lett. [**86**]{}, 5345 (2001). A. I. Lichtenstein, M. I. Katsnelson, and G. Kotliar, Phys. Rev. Lett. [**87**]{}, 067205 (2001). A great advantage of LSDA is that it satisfies a certain number of fundamental theorems \[R. O. Jones and O. Gunnarsson, Rev. Mod. Phys. [**61**]{}, 689 (1989)\], which makes the range of its applications much wider than just the homogeneous electron gas. This probably explains why for many compounds LSDA provides a reasonable estimate for the parameter $U$ (though there is no solid justification why this is so). P. H. Dederichs, S. Blügel, R. Zeller, and H. Akai, Phys. Rev. Lett. [**53**]{}, 2512 (1984); M. Norman and A. Freeman, Phys. Rev. B [**33**]{}, 8896 (1986). O. Gunnarsson, O. K. Andersen, O. Jepsen, and J. Zaanen, Phys. Rev. B [**39**]{}, 1708 (1989). V. I. Anisimov and O. Gunnarsson, Phys. Rev. B [**43**]{}, 7570 (1991). I. V. Solovyev and P. H. Dederichs, Phys. Rev. B [**49**]{}, 6736 (1994). M. Norman, Phys. Rev. B [**52**]{}, 1421 (1995); M. Brooks, J. Phys. Condens. Matt. [**13**]{}, L469 (2001). I. Solovyev, N. Hamada, and K. Terakura, Phys. Rev. B [**53**]{}, 7158 (1996). W. 
E. Pickett, S. C. Erwin, and E. C. Ethridge, Phys. Rev. B [**58**]{}, 1201 (1998). M. Springer and F. Aryasetiawan, Phys. Rev. B [**57**]{}, 4364 (1998). T. Kotani, J. Phys. Condens. Matt. [**12**]{}, 2413 (2000). F. Aryasetiawan, M. Imada, A. Georges, G. Kotliar, S. Biermann, and A. I. Lichtenstein, cond-mat/0401620. L. Hedin, Phys. Rev. [**139**]{}, A796 (1965). F. Aryasetiawan and O. Gunnarsson, Rep. Prog. Phys. [**61**]{}, 237 (1998). G. Onida, L. Reining, and A. Rubio, Rev. Mod. Phys. [**74**]{}, 601 (2002). An alternative is to calculate the response to the time-dependent (TD) external potential using TDDFT.[@OnidaReiningRubio] C. Herring, in *Magnetism*, ed. by G. T. Rado and H. Suhl (Academic, New York, 1966), Vol. IV. J. Zaanen, G. A. Sawatzky, and J. W. Allen, Phys. Rev. Lett. [**55**]{}, 418 (1985). S. Biermann, F. Aryasetiawan, and A. Georges, Phys. Rev. Lett. [**90**]{}, 086402 (2003). O. K. Andersen, Phys. Rev. B [**12**]{}, 3060 (1975); O. Gunnarsson, O. Jepsen, and O. K. Andersen, [*ibid.*]{} [**27**]{}, 7144 (1983); O. K. Andersen and O. Jepsen, Phys. Rev. Lett. [**53**]{}, 2571 (1984). A. Liebsch, Phys. Rev. Lett. [**43**]{}, 1431 (1979); Phys. Rev. B [**23**]{}, 5203 (1981). A. Yamasaki and T. Fujiwara, J. Phys. Soc. Jpn. [**72**]{}, 607 (2003). The scheme proposed by Kotani (Ref. ) was based on a special construction of the LMTO basis set.[@LMTO] In the process of calculations of the screened Coulomb interaction $\hat{W}_r$, he excluded only the “heads” of the LMTO $3d$-basis functions, $\phi_{3d}$ (solutions of KS equations inside atomic spheres for a given energy $E_\nu$), whereas the “tails” of the wavefunctions of the same $3d$ character, given by the energy derivatives $\dot{\phi}_{3d}$ of $\phi_{3d}$, were allowed to screen $\hat{W}_r$. The idea was probably to go beyond the atomic approximation for the $3d$ states and to mimic the behavior of the Wannier functions. 
However, this procedure depends on the choice of $\{ E_\nu \}$ and the LMTO representation.[@LMTO] For example, in the (most suitable for the construction of the Hubbard model) orthogonal representation, both $\phi_{3d}$ and $\dot{\phi}_{3d}$ can contribute to the “heads” of the basis functions as well as their “tails”. On the other hand, Aryasetiawan and co-workers in Ref.  used the band picture and considered the screening of the “$3d$” band by the “non-$3d$” (mainly $4sp$) bands. In reality, however, the $4sp$ band can carry a considerable weight of atomic $3d$ states, due to the hybridization. Moreover, the $3d$ and $4sp$ bands in fcc Ni are well separated only along the $X$-$\Gamma$-$L$ direction of the BZ. Along other directions, there is a substantial band-crossing (see Fig. \[fig.bands\]) and this procedure is not well defined. The $4d$ and $5d$ states were treated in the canonical bands approximation [@LMTO]. Since the energy distance between the $3d$ and $4d$ (and higher) bands is much larger than the $3d$ bandwidth, one can neglect the dispersion of the $3d$ bands and relate $P_{\rm GW}(\omega,3d$$\rightarrow$$nd)$ with the local density of $nd$ states. I. V. Solovyev, Phys. Rev. B [**69**]{}, 134403 (2004). In fact, the considered example of the electronic structure, where there is a narrow band formed by some particular group of ($L$) states embedded into the free-electron-like band, is rather generic and applies equally to transition, rare-earth, and actinide metals. The main difference between these systems is the strength of hybridization between $L$- and $I$-states (which is the strongest in the case of transition metals), as well as in the system of $L$-states. M. S. Hybertsen and S. G. Louie, Phys. Rev. B [**35**]{}, 5585 (1987). 
In addition to the parameter $U$, which has the meaning of the screened Slater integral $F^0$, CLDA allows one to easily calculate the intraatomic exchange coupling $J$.[@AZA] The other two parameters, $F^2$ and $F^4$, which specify the matrix $\hat{\bar{u}}_{LL}$ of screened Coulomb interactions between the $3d$ electrons, can be estimated from $J$ using the ratio $F^4/F^2$$\simeq$$0.63$ corresponding to the atomic limit.[@PRB94] A. Kakizaki, J. Fujii, K. Shimada, A. Kamata, K. Ono, K.-H. Park, T. Kinoshita, T. Ishii, and H. Fukutani, Phys. Rev. Lett. [**72**]{}, 2781 (1994).
--- abstract: | We consider the model of nonregular nonparametric regression where smoothness constraints are imposed on the regression function $f$ and the regression errors are assumed to decay with some sharpness level at their endpoints. The aim of this paper is to construct an adaptive estimator for the function $f$. In contrast to the standard model where local averaging is fruitful, the nonregular conditions require a substantially different treatment based on local extreme values. We study this model under the realistic setting in which both the smoothness degree $\beta> 0$ and the sharpness degree $\mathfrak {a}\in(0, \infty)$ are unknown in advance. We construct adaptation procedures applying a nested version of Lepski’s method and the negative Hill estimator which incur no loss in the convergence rates with respect to the general risk and only a logarithmic loss with respect to the pointwise risk. Optimality of these rates is proved for $\mathfrak{a}\in(0, \infty)$. Some numerical simulations and an application to real data are provided. address: - | M. Jirak\ M. Rei[ß]{}\ Institut für Mathematik\ Humboldt-Universität zu Berlin\ Unter den Linden 6\ D-10099 Berlin\ Germany\ \ - | A. Meister\ Institut für Mathematik\ Universität Rostock\ D-18051 Rostock\ Germany\ author: - M. Jirak - M. Rei[ß]{} - A. Meister title: 'Adaptive function estimation in nonparametric regression with one-sided errors' --- Introduction {#1} ============ In the standard model of nonparametric regression, the data $$\label{eq11} Y_j = f(x_j) + \varepsilon_j, \qquad j=1,\ldots,n$$ are observed. In this paper, in contrast to classical theory, the observation errors $(\varepsilon_j)$ are not assumed to be centred, but to have certain support properties. This is motivated by many applications where the support rather than the mean properties of the noise are known and where the regression function $f$ describes some frontier or boundary curve. 
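As a quick illustration of model (\[eq11\]) with one-sided errors, the following minimal Python sketch simulates such data; the frontier $f$, the error law and all parameter values here are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Simulate Y_j = f(x_j) + eps_j with one-sided errors eps_j <= 0.
rng = np.random.default_rng(0)
n, a = 500, 1.0                           # sample size, assumed sharpness degree
x = np.arange(1, n + 1) / n               # equidistant design x_j = j/n
f = lambda t: np.sin(2 * np.pi * t)       # hypothetical frontier function
# eps = -U^(1/a) with U ~ Uniform(0,1) gives F(y) = 1 - |y|^a for y in [-1, 0]
eps = -rng.uniform(size=n) ** (1.0 / a)
Y = f(x) + eps                            # every observation lies below the frontier
```

All data points sit on or below the curve $f$, which is the structural feature that local extreme values, rather than local averages, exploit.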
Below we shall discuss concrete applications to sunspot data and annual sport records. Typical economical examples include auctions where the bidders’ private values are inferred from observed bids (see Guerre et al. [@guerre2000] or Donald and Paarsch [@paarsch1993]) and note the extension to bid and ask prices in financial data. Related phenomena arise in the context of inference for deterministic production frontiers, where it is assumed that $f$ is concave (convex) or monotone. A pioneering contribution in this area is due to Farrell [@farrell1957], who introduced data envelopment analysis (DEA), based on either the conical hull or the convex hull of the data. This was further extended by Deprins et al. [@deprins1984] to the free disposal Hull (FDH) estimator, whose properties have been extensively discussed in the literature; see, for instance, Banker [@banker1993], Korostelev et al. [@korostelev1995a], Kneip et al. [@Kneip1998; @kneip2008], Gijbels et al. [@gijbels1999], Park et al. [@park2000; @park2010], Jeong and Park [@jeong2006] and Daouia et al. [@daouiasimar2010]. The issue of stochastic frontier estimation goes back to the works of Aigner et al. [@Aigner1977] and Meeusen and van den Broeck [@meeusen1977]; see also the more recent contributions of Kumbhakar et al. [@Kumbhakar2007], Park et al. [@Park2007] and Kneip et al. [@kneip2012]. In a general nonparametric setting the accuracy of the estimator heavily depends on the average number of observations in the vicinity of the support boundary. 
The key quantity is the sharpness ${\mathfrak{a}}_{{x}}>0$ of the distribution function $F_{{x}}$ of $\varepsilon_j$ at ${x}=x_j$, which in its simplest case has polynomial tails $$\label{defstandard} F_{{x}}(y) = 1 - {\mathfrak{c}}_{{x}}' {\vert}y{\vert}^{{\mathfrak{a}}_{{x}}} + {\mathcal{O}}\bigl({\vert}y{\vert}^{{\mathfrak{a}}_{{x}} + \delta} \bigr)\qquad\mbox{with }{\mathfrak{c}}_{{x}}', \delta> 0\mbox{ as }y \to0.$$ The cases $0 < {\mathfrak{a}}_{{x}} < 1$, ${\mathfrak{a}}_{{x}} = 1$ and ${\mathfrak{a}}_{{x}} > 1$ are sometimes called *sharp boundary*, *fault-type boundary* and *nonsharp boundary*. From a theoretical point of view, noise models with ${\mathfrak{a}}_{{x}}\in(0,2)$ are nonregular (e.g., Ibragimov and Hasminskii [@ibrakhasminskii1981]) since they exhibit nonstandard statistical theory already in the parametric case. Chernozhukov and Hong [@chernozhukovhong2004] discuss extensively parametric efficiency of maximum-likelihood and Bayes estimators in this context and show their relevance. From a nonparametric statistics point of view, Korostelev and Tsybakov [@korostelevtsyb1993] and Goldenshluger and Zeevi [@goldenshluger2006] treat a variety of boundary estimation problems. The focus there is on applications in image recovery, which is mathematically and practically substantially different from ours. The optimal convergence rate $n^{(-2 \beta)/({\mathfrak{a}}\beta+1)}$ over $\beta$-Hölder classes of regression functions $f$ depends heavily on ${\mathfrak{a}}$ (not assumed to be varying in $x$); for ${\mathfrak{a}}_{{x}}\in(0,2)$ it is faster than for local averaging estimators in standard mean regression and can even become faster than the regular squared parametric rate $n^{-1}$. Hall and van Keilegom [@hallkeilegom2009] study a local-linear estimator in a closely related nonparametric regression model and establish minimax optimal rates in $L_2$-loss if the smoothness and sharpness parameters $\beta \in(0,2]$ and ${\mathfrak{a}}>0$ are known. 
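To see informally why the sharpness ${\mathfrak{a}}$ governs the attainable rates, one can check by simulation that the largest of $n$ one-sided errors with tail (\[defstandard\]) sits at distance of order $n^{-1/{\mathfrak{a}}}$ from the boundary, so smaller ${\mathfrak{a}}$ means faster concentration. The error law below is an illustrative choice with ${\mathfrak{c}}_{{x}}' = 1$.

```python
import numpy as np

# Monte Carlo check: E[distance of max(eps) to 0] scales like n^(-1/a).
rng = np.random.default_rng(1)
n, reps = 2000, 400
ratios = {}
for a in (0.5, 1.0, 2.0):
    eps = -rng.uniform(size=(reps, n)) ** (1.0 / a)   # F(y) = 1 - |y|^a near 0
    gap = -eps.max(axis=1)                            # gap between maximum and 0
    # compare the empirical mean gap with the theoretical order n^(-1/a)
    ratios[a] = gap.mean() / n ** (-1.0 / a)
```

The ratios stay bounded while $n^{-1/{\mathfrak{a}}}$ itself varies over orders of magnitude across the three values of $a$, illustrating the scaling.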
Earlier contributions in a related setup are due to Härdle et al. [@haerdle1995], Hall et al. [@hall1997; @hall1998] and Gijbels and Peng [@gijbels2000]. If the support of $(\varepsilon_j)$ is not one-sided, but symmetric like $[-a,a]$ and $\beta\le1$, ${\mathfrak{a}}=1$, Müller and Wefelmeyer [@muellerwefelmayer2010] have shown that mid-range estimators also attain these better rates. Recently, Meister and Reiss [@meisterreiss2013] have proved strong asymptotic equivalence in Le Cam’s sense between a nonregular nonparametric regression model for ${\mathfrak{a}}=1$ and a continuous-time Poisson point process experiment. All the references above consider a theoretically optimal bandwidth choice which depends on the unknown quantities ${\mathfrak{a}}$ and/or $\beta$. Completely data-driven adaptive procedures have rarely been considered in the literature because the nonlinear inference and the nonmonotonicity of the stochastic and approximation error terms block popular concepts from mean regression like cross-validation or general unbiased risk estimation; cf. the discussion in Hall and Park [@hall2004]. Recently, Chichignoud [@chichi2012] was able to produce a $\beta$-adaptive minimax optimal estimator, which, however, uses a Bayesian approach hinging on the assumption that the law of the errors $(\varepsilon_j)$ is perfectly known in advance (in fact, after log transform a uniform law is assumed). Moreover, a log factor due to adaptation is paid, which is natural only under pointwise loss. It remained open whether, under a global loss function like an $L_q$-norm loss, adaptation is possible without paying a log factor. For regular nonparametric problems Goldenshluger and Lepski [@goldenshlugerlepski2011] study adaptive methods and convergence rates with respect to general $L_q$-loss which is much more involved in the general case $q\ge1$ than for $q=2$. 
It is therefore of high interest, both from a theoretical and a practical perspective, to establish a fully data-driven estimation procedure where the error distribution and the regularity of the regression function are unknown and to analyze it under local (pointwise) and global ($L_q$-norm) loss. In particular, neither ${\mathfrak{a}}$ nor $\beta$ that determine the optimal convergence rate are fixed in advance. In this paper we introduce a fully data-driven (${\mathfrak{a}},\beta $)-adaptive procedure for estimating $f$ and prove that it is minimax optimal over ${\mathfrak{a}}, \beta> 0$. To ease the presentation, we restrict to equidistant design points $x_j = j/n$ on $[0,1]$ and regression errors $(\varepsilon_j)$ which are concentrated on the interval $(-\infty,0]$. Given ${x}\in[0,1]$ and an open neighborhood ${\mathcal{N}}({x}) \subseteq[0,1]$, the function $f\dvtx {\mathcal{N}}({x}) \to\mathbb{R}$ is supposed to lie in the Hölder class $H_{{\mathcal{N}}({x})}(\beta,L)$ with $\beta,L >0 $. Note that $\beta= \beta_{{x}}$ and $L = L_{{x}}$ may vary in ${x}$. The $\langle \beta\rangle$-derivatives of all $f \in H_{{\mathcal{N}}({x})}(\beta,L)$ satisfy $$\bigl{\vert}f^{(\langle\beta\rangle)}(y) - f^{(\langle\beta\rangle )}(z)\bigr{\vert}\leq L {\vert}y-z{\vert}^{\beta- \langle\beta\rangle },\qquad y,z \in{\mathcal{N}}({x}).$$ Here $\langle\beta\rangle=\max\{ m\in{\mathbb{N}}_0\dvtx m<\beta\}$ is the largest integer strictly smaller than $\beta$. We consider the case where the $\varepsilon_j$ are independent with individual distribution function $F_{x_j}$ and tail quantile function $$\mathcal{U}_{x_j}(y) = F_{x_j}^{\leftarrow} (1 - 1/y ),$$ where $F_{x_j}^{\leftarrow}$ denotes the generalized inverse of $F_{x_j}$. 
Weakening the polynomial tail behavior in (\[defstandard\]), our key structural condition is that for each ${x}\in [0,1]$, there exist ${\mathfrak{a}}_{{x}},{\mathfrak{c}}_{{x}} > 0$, ${\mathfrak{b}}_{{x}} \in \mathbb{R}$ and a slowly varying function $l_{{x}}(y)$, such that $$\label{defU} \mathcal{U}_{x}(y) = -{\mathfrak{c}}_x y^{-1/{\mathfrak{a}}_x} l_x(y),$$ where $l_x(y)$ satisfies uniformly for $x \in[0,1]$ condition $$\label{defL} l_x(y) = \log(y)^{{\mathfrak{b}}_x} + \mbox{\scriptsize$ \mathcal{O}$} \bigl(\log(y)^{{\mathfrak{b}}_x - 1} \bigr)\qquad\mbox{as }y \to \infty.$$ If (\[defstandard\]) holds, then (\[defU\]), (\[defL\]) are valid with ${\mathfrak{b}}_{{x}} = 0$ (note that ${\mathfrak{c}}_{{x}} \neq{\mathfrak{c}}_{{x}}'$ in general; see Lemma \[lemquantcoomp\] for the precise relation). The polynomial tail condition (\[defstandard\]) is one of the standard models in the literature; see de Haan and Ferreira [@dehaanbook2006], Härdle et al. [@haerdle1995], Hall and van Keilegom [@hallkeilegom2009] or Girard et al. [@girard2013]. In this context, so called *second order conditions* are inevitable whenever one is interested in convergence rates or limit distributions involving estimates of ${\mathfrak{a}}_{{x}}$; see Beirlant et al. [@beirlantteugelsbook2004], de Haan and Ferreira [@dehaanbook2006] or Falk et al. [@falkhr94]. Our second order condition (\[defL\]) is rather mild when compared to examples from the literature; cf. [@beirlantteugelsbook2004; @dehaanbook2006; @falkhr94; @girard2013; @hallkeilegom2009; @haerdle1995]. As will be explained in Section \[32\], a more general formulation seems to be impossible. Let us point out two main conceptual results of this paper. First, we wish to extend the existing theory beyond the limitation $\beta_{{x}}\le2$ imposed by locally constant or linear approximations and to have a clear notion of stochastic and deterministic error for the nonlinear estimators. 
To this end we develop a linear program in terms of general local polynomials, based on a quasi-likelihood method, because the definition in Hall and van Keilegom [@hallkeilegom2009] does not extend to polynomials of degree 2 or more in our setup. Then Theorem \[teocon\] below yields for the estimator a nontrivial decomposition into approximation and stochastic error. This decomposition is a key result for our analysis, and permits us to address the adaptation problem in full generality, thus abolishing the blockade mentioned in Hall and Park [@hall2004]. We can consider not only pointwise, but also the global $L_q$-norm as risk measure for the whole range $q \in[1, \infty)$. Technically, the optimal $L_q$-adaptation is much more demanding compared to the pointwise risk. It requires very tight deviation bounds since no additional $\log n$-factor widens the margin. For adaptive bandwidth selection, we apply a nested variant of the Lepski [@lepski1990] procedure with pre-estimated critical values. Careful adaptive pre-estimation is necessary since the distribution of $(\varepsilon_j)$ is unknown and allowed to vary in ${x}$. The fact that the underlying sample $(Y_j)$ is inhomogeneously shifted by $f$ adds another level of complexity for the estimation of ${\mathfrak{a}}_{{x}}$ and ${\mathfrak{b}}_{{x}}$, which needs to be addressed by translation invariant estimators. The remarkable result of Theorem \[TL2upperbound\] is that for general $L_q$-loss we obtain the rate $n^{(-2 \beta)/({\mathfrak{a}}\beta+1)} (\log n)^{(2{\mathfrak{a}}_{{x}} {\mathfrak{b}}_{{x}} \beta_{{x}})/({\mathfrak{a}}_{{x}} \beta_{{x}}+1)}$ of convergence, the same as in the case of known (global) Hölder regularity $\beta$ and known distribution of $(\varepsilon_j)$. 
For pointwise loss the rate deteriorates to $(n /\log n)^{(-2 \beta_{{x}})/({{\mathfrak{a}}_{{x}} \beta_{{x}}+1})} (\log n)^{(2{\mathfrak{a}}_{{x}} {\mathfrak{b}}_{{x}} \beta_{{x}})/({\mathfrak{a}}_{{x}} \beta_{{x}}+1)}$; see Theorem \[Tpointwise\] below. In Section \[seclowerbounds\] it is shown that all our rates are minimax optimal for adaptive estimation. For regular mean regression these rates, inserting ${\mathfrak{a}}_{{x}}=2$ and ${\mathfrak{b}}_{{x}} = 0$, and particularly the payment for adaptation on $\beta_{{x}}$ under pointwise loss are well known. A priori it is, however, not at all obvious that in the nonregular case with Poisson limit experiments (Meister and Reiss [@meisterreiss2013]) exactly the same factor appears. Interestingly, we do not pay in the convergence rates for not knowing ${\mathfrak{a}}_{{x}}$, ${\mathfrak{b}}_{{x}}$. The lower bound in the “default-type boundary” case ${\mathfrak{a}}_{{x}} > 2$ with slower rates than in regular regression requires a completely new strategy of proof where not only alternatives for the regression function, but also for the error distributions are tested against each other. In Section \[5\] we provide some numerical simulations in order to evaluate the finite sample performance of the estimator. Smaller values of ${\mathfrak{a}}_{{x}}$ indeed lead to significantly improved estimation results. The bandwidth selection shows a quite different behavior from the regular regression case due to taking local extremes. Applications to empirical data from sunspot observations and annual best running times on 1500 m are presented. Most proofs are deferred to Section \[6\], and auxiliary lemmas and details regarding the sharpness estimation are given in the supplementary material [@jirmeireisssuppl]. Methodology {#2} =========== Our approach is local polynomial estimation based on local extreme value statistics. 
We fix some $x\in[0,1]$ and consider the coefficients $(\hat{b}_j)_{j=0,\ldots,\beta^*}$ which minimize the objective function $$\label{defnestimator} (b_0,\ldots,b_{\beta^*}) \mapsto\sum _{{\vert}x_i-x{\vert}\leq h_k} \sum_{j=0}^{\beta^*} b_j (x_i-x)^j,$$ under the constraints $Y_i \leq\sum_{j=0}^{\beta^*} b_j (x_i-x)^j$ for all $i$ with ${\vert}x_i-x{\vert}\leq h_k$. Set $\tilde{f}_k(x):= \hat{b}_0$. As an estimator of $f$ we define $$\label{eqvartheta} \tilde{f}_k\dvtx x \mapsto\tilde{f}_k(x), \qquad x\in[0,1],$$ where the bandwidth $h_k>0$ remains to be selected. If $-\varepsilon_j$ is exponentially distributed and the regression function a polynomial of maximal degree $\beta^*$ on the interval $[x-h_k,x+h_k]$, then $\tilde{f}_k(x)$ is the maximum likelihood estimator (MLE), whence the approach can be seen as a local quasi-MLE method; see also Knight [@knight2001]. The idea of local polynomial estimators in frontier estimation was already employed, for instance, in Hall et al. [@hall1998], Hall and Park [@hall2004] and Hall and van Keilegom [@hallkeilegom2009]. However, in contrast to their local linear estimators (and their higher order extensions), the sum over the evaluations at $x_i$ in the neighborhood of $x$ is minimized instead of the area. This marks a substantial difference and is crucial for our setup. Already in the case of quadratic polynomials $p$, it might occur that the minimization of just the value $p(x)$ under the support constraints yields the inappropriate estimator $\tilde f_k(x)=-\infty$ if $x$ is not a design point and $\tilde f_k(x)=Y_i$ if $x=x_i$ because a sufficiently steep parabola always fits the constraints. This problem is visualized in Figure \[figcompareest\], where Figure \[figunser\](a) corresponds to estimator (\[defnestimator\]), and Figure \[fighall\](b) to the minimization approach employed in the above references. 
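The base estimator (\[defnestimator\]) is an ordinary linear program and can be sketched in a few lines; the solver, the toy frontier and all parameter values below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.optimize import linprog

def frontier_poly_fit(x0, h, xs, Ys, deg=2):
    """Sketch of (defnestimator): minimise sum_i p(x_i) over polynomials p
    of degree <= deg, subject to Y_i <= p(x_i) for |x_i - x0| <= h,
    and return the estimate p(x0) = b_0."""
    mask = np.abs(xs - x0) <= h
    d = xs[mask] - x0
    V = np.vander(d, deg + 1, increasing=True)   # columns (x_i - x0)^j
    # objective sum_i p(x_i); constraints -V b <= -Y, i.e. p(x_i) >= Y_i
    res = linprog(V.sum(axis=0), A_ub=-V, b_ub=-Ys[mask], bounds=(None, None))
    return res.x[0]

# toy data: quadratic frontier, one-sided exponential errors (illustrative)
rng = np.random.default_rng(2)
n = 400
xs = np.arange(1, n + 1) / n
f = lambda t: 1.0 - (t - 0.5) ** 2
Ys = f(xs) - rng.exponential(0.05, size=n)
est = frontier_poly_fit(0.5, 0.1, xs, Ys)
```

Because the objective sums the polynomial over all window points rather than minimising $p(x_0)$ alone, the program is bounded and the steep-parabola degeneracy discussed above cannot occur.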
Note that this problem may or may not occur in practice, but it poses an obstacle for the mathematical analysis. This is why we work with the base estimators defined in (\[defnestimator\]). The calculation of our estimator only requires basic linear optimization, but its error analysis will be more involved. Note that the formulation as a linear program is particularly important for implementation purposes, since our adaptive procedure requires the computation of many sequential estimators as the bandwidth $h_k$ increases. ![Dashed line: true function; solid line: estimated function; red squares: sample points; estimation point: fourth sample point from the left.[]{data-label="figcompareest"}](1248f01.eps) \[figunser\]\[fighall\] The adaptation problem consists of finding an (asymptotically) optimal bandwidth $h_k$ when neither the regression function $f$ nor the specific boundary behavior of the errors $(\varepsilon_j)$ is known, which leads to different convergence rates. We follow the method inaugurated by Lepski [@lepski1990] and consider geometrically growing bandwidths with $h_0 = n^{\mathfrak{h}_0 - 1}$, $\mathfrak {h}_0 \in(0,1)$ and $$\begin{aligned} \label{eqbandwidth} {h}_k &=& {h}_0 \rho^k,\qquad k = 0,\ldots,K+1 \nonumber\\[-10pt]\\[-10pt] \eqntext{\mbox{where }\rho> 1\mbox{ and }K = \bigl\lfloor \log_{\rho} \bigl(n^{1 - \mathfrak {h}_0} \bigr) \bigr\rfloor.}\end{aligned}$$ The purely data-driven estimator $\hat{f}:=\tilde {f}_{\hat{k}}$ is defined as $$\label{defnleskiestimator} \hat{k}:=\inf\bigl\{k = 0,\ldots,K | \exists l \leq k\dvtx {\Vert}\tilde{f}_{k+1} - \tilde{f}_l {\Vert}> \hat{{\mathfrak{z}}}_l^T + \hat{{\mathfrak{z}}}_{k+1}^T \bigr\} \wedge K.$$ The critical values $\hat{{\mathfrak{z}}}_l^T$, $l=0,\ldots,K+1$ depend on the observations $\{Y_i\}_{1 \leq i \leq n}$, and will be specified below. 
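A minimal sketch of the geometric bandwidth grid (\[eqbandwidth\]) and the stopping rule (\[defnleskiestimator\]) may look as follows; the concrete values of $\mathfrak{h}_0$ and $\rho$ and the toy inputs are illustrative, and the critical values are taken as given rather than estimated.

```python
import math

def bandwidth_grid(n, frak_h0=0.4, rho=1.5):
    """Geometric bandwidths h_k = h_0 * rho^k with h_0 = n^(frak_h0 - 1)
    and K = floor(log_rho(n^(1 - frak_h0))), for k = 0, ..., K+1."""
    h0 = n ** (frak_h0 - 1.0)
    K = math.floor(math.log(n ** (1.0 - frak_h0), rho))
    return [h0 * rho ** k for k in range(K + 2)]

def lepski_select(est, z, K):
    """Stopping rule: first k at which some earlier estimate est[l], l <= k,
    differs from est[k+1] by more than z[l] + z[k+1]; otherwise K."""
    for k in range(K):
        if any(abs(est[k + 1] - est[l]) > z[l] + z[k + 1] for l in range(k + 1)):
            return k
    return K

hs = bandwidth_grid(1000)
# toy sequence: estimates stable until the bias kicks in at the last step
k_hat = lepski_select([1.0, 1.01, 1.02, 2.0], [0.1, 0.1, 0.1, 0.1], 3)
```

In the toy call the jump from 1.02 to 2.0 exceeds the summed critical values, so the rule stops one step before the biased estimate enters.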
The basic idea is to increase the bandwidth $h_k$ as long as the distance (in some suitable seminorm ${\Vert}\cdot{\Vert}$) between the estimators is not significantly larger than the usual stochastic fluctuations of the estimators such that at $\hat k$ the bias is not yet dominating. In order to choose $\hat{{\mathfrak{z}}}_l^T$, the extreme-value index ${\mathfrak{a}}_{{x}}$, ${\mathfrak{b}}_{{x}}$ and the constant ${\mathfrak{c}}_{{x}}$ from equations (\[defU\]) and (\[defL\]) have to be estimated locally. For that purpose a quasi-negative-Hill method is developed in Section \[32\]. Asymptotic upper bounds {#3} ======================= In this section we will study the convergence rate of our estimator $\hat{f} = \tilde{f}_{\hat{k}}$ with $\hat{k}$ as defined in (\[defnleskiestimator\]) when the sample size $n$ tends to infinity. We will consider both the pointwise risk $\mathbb{E}_f {\vert}\hat{f}(x) - f(x){\vert}^2$ for some fixed ${x}\in[0,1]$ and the $L_q$-risk $\mathbb{E}_f \int_0^1 {\vert}\hat{f}(x) - f(x){\vert}^q \,dx$ for $q \geq1$. To deal with the upper bounds, first some preparatory remarks and work are necessary. Throughout this section, we suppose: \[assmain\] (i) ${\mathfrak{c}}_x, {\mathfrak{b}}_x, {\mathfrak{a}}_x \in H_{[0,1]}(\beta_0,L_0)$, where $\beta_0, L_0 > 0$ and $\inf_{x \in[0,1]} {\mathfrak{a}}_x,{\mathfrak{c}}_x > 0$, \(ii) $\max_{1 \leq j \leq n}\mathbb{E} [{\vert}\varepsilon _j{\vert}] < \infty$, \(iii) $(\varepsilon_j)$ are independent, and the distribution of $\varepsilon_j$ satisfies (\[defU\]), (\[defL\]). For our theoretical treatment, an important quantity in the sequel is the approximative tail-function $$\label{defnAfisrt} A_x(y) = -{\mathfrak{c}}_x y^{-1/{\mathfrak{a}}_x} \log(y)^{{\mathfrak{b}}_x},$$ since it asymptotically describes the quantile $\mathcal{U}_{x}(y)$. General upper bounds {#31} -------------------- Most of our analysis relies on Theorem \[teocon\] and Proposition \[propestimator\] below. 
These give rise to a decomposition, where the error for the implicitly defined base estimators $\tilde{f}_k(x)$ in (\[eqvartheta\]) is split into a deterministic and a stochastic error part. Even though $\tilde{f}_k(x)$ is highly nonlinear, we obtain a relatively sharp and particularly simple upper bound. \[teocon\] For any $x\in[0,1]$ and $\beta_{{x}}\in(0,\beta^*+1]$ there exist constants $c(\beta^*,L_{{x}})$, $c(\beta^*)$ and $J(\beta^*)$, only depending on $\beta^*$ and $L_{{x}}$, respectively, such that $$\begin{aligned} && \bigl{\vert}\tilde{f}_k(x) - f(x)\bigr{\vert}\\ &&\qquad \leq c\bigl( \beta^*,L\bigr) h_k^{\beta_{{x}}} \\ &&\quad\qquad{}+ c\bigl(\beta^*\bigr)\max\bigl\{ \bigl{\vert}Z_j(h_k,x) \bigr{\vert}\dvtx j=1,\ldots,2J\bigl(\beta^*\bigr), x+h_k{\mathcal I}_j \subseteq[0,1] \bigr\}\end{aligned}$$ holds true for all $f \in H_{{\mathcal{N}}({x})}(\beta,L)$ where $$\begin{aligned} Z_j(h_k,x) &:=& \max\{\varepsilon_i\dvtx x_i \in x + h_k {\mathcal I}_j \}\quad\mbox{and} \\ {\mathcal I}_j &:=& \bigl[-1+(j-1)/J\bigl(\beta^*\bigr),-1+j/J\bigl( \beta^*\bigr)\bigr].\end{aligned}$$ \[remconcentration\] Interestingly, this decomposition holds true for any underlying distribution function $F$ and dependence structure within $(\varepsilon _j)$. Its proof is entirely based on nonprobabilistic arguments and has an interesting connection to algebra. A generalization to arbitrary dimensions or other basis functions than polynomials seems challenging. We continue the range of the indices $j$ of the $(x_j,\varepsilon_j)$ from $\{1,\ldots,n\}$ to $\mathbb{Z}$ while the equidistant location of the $x_j$ and the independence of the $\varepsilon_j$ is maintained. 
Then, Theorem \[teocon\] yields that with $c^* = c(\beta^*,L)$ $$\begin{aligned} \label{eqpointwise} \bigl{\vert}\tilde{f}_k(x) - f(x)\bigr{\vert}&\leq& c^* h_k^{\beta_{{x}}} + c\bigl(\beta^*\bigr) \max\bigl\{ \bigl {\vert}Z_j(h_k,x)\bigr{\vert}\dvtx j=1,\ldots,2J\bigl( \beta^*\bigr) \bigr\},\hspace*{-30pt} \\ \label{eqL2setup} {\Vert}\tilde{f}_k - f{\Vert}_q &\leq& c^* h_k^{\beta_{{x}}} + c\bigl(\beta^*\bigr) \bigl{\Vert}\max\bigl \{ \bigl{\vert}Z_j(h_k,\cdot)\bigr{\vert}\dvtx j=1,\ldots,2J\bigl(\beta ^*\bigr) \bigr\}\bigr{\Vert}_q,\hspace*{-30pt}\end{aligned}$$ where ${\Vert}\cdot{\Vert}_q$ denotes the $L_q([0,1])$-norm, $q\geq 1$. To pursue adaptivity, suppose that in terms of some seminorm ${\Vert}\cdot {\Vert}$, we can bound the error via $$\label{eqcond1} {\Vert}\tilde{f}_k - f{\Vert}\leq{R}_k + B_k \qquad\forall k=0,\ldots,K+1, f\in H_{{\mathcal{N}}({x})}(\beta,L),$$ for some nonnegative random variables $B_k,R_k$, where $B_k$ increases in $k$ and ${R}_k$ decreases in $k$. Neither the $B_k$ nor the ${R}_k$ depend on $f$, only on $\beta_{{x}}$ and $L_{{x}}$. In the sequel $B_k$ will be a bias upper bound while ${R}_k$ is a bound on the stochastic error, which here—in contrast to usual mean regression—decays in $k$ for each noise realisation. The following fundamental proposition addresses both, the pointwise and the $L_q$-risk of the adaptive estimator, since the pointwise distance of function values at some $x$ as well as the $L_q$-distance of functions on $[0,1]$ define seminorms for $q \geq1$. \[propestimator\] Let ${\Vert}\cdot{\Vert}$ denote some seminorm, and let $\tilde {f}_k$, $f$ lie in the corresponding normed space. Assume (\[eqcond1\]) and that the $\hat{{\mathfrak{z}}}_{k}^T$ decrease a.s. in $k$. 
Defining the oracle-type index $$\label{defnoracleestimator} \hat{k}^*:= \inf\bigl\{k = 0,\ldots,K-1\dvtx B_{k+1} > \hat{{\mathfrak{z}}}_{k+1}^T/2 \bigr\} \wedge K,$$ we obtain for $q \geq1$: $\displaystyle \mathbb{E}_f \bigl[{\Vert}\hat{f} - \tilde {f}_{\hat{k}^*}{\Vert}^q {\mathbf{1}}\bigl(\hat{k} > \hat{k}^* \bigr) \bigr]^{1/q} \leq\mathbb{E}_f \bigl[\bigl(\hat{{\mathfrak{z}}}_{\hat {k}^*}^T\bigr)^q \bigr]$, $\displaystyle \mathbb{E}_f \bigl[{\Vert}\hat{f} - \tilde {f}_{\hat{k}^*}{\Vert}^q {\mathbf{1}}\bigl(\hat{k} < \hat{k}^* \bigr) \bigr]^{1/q}$\ $\displaystyle\hspace*{30pt}\qquad \leq2^{(2q-1)/q} \mathbb{E}_f \bigl[\hat{{\mathfrak{z}}}_{\hat{k}^*}^q\bigr]^{1/q}$\ $\displaystyle\hspace*{30pt}\quad\qquad{} + 2^{(2q-1)/q}\sum_{k=0}^{K-1} \mathbb{E}_f \bigl[{R}_k^q {\mathbf{1}}\bigl( \exists l \leq k\dvtx {R}_l > \hat{{\mathfrak{z}}}_l^T/2 \bigr) \bigr]^{1/q}$. Critical values and their estimation {#32} ------------------------------------ Our adaptive procedure and particularly the question of optimality crucially hinge on the (estimated) critical values $\hat{{\mathfrak{z}}}_k^T$, and thereby on quantile estimation for the distribution function $F_{{x}}(-y^{-1})$ as $y \to\infty$. In the literature (de Haan and Ferreira [@dehaanbook2006]), the standard, nonparametric quantile estimator is constructed via the approximation $$\label{eqquantfails} \mathcal{U}_{{x}} (t y ) \approx\mathcal{U}_{{x}} (y ) + a_{{x}} (t ) \bigl(y^{-1/{\mathfrak{a}}_{{x}}} - 1 \bigr) {\mathfrak{a}}_{{x}}\qquad\mbox{as }t,y \to\infty,$$ where the function $a_{{x}} (t )$ is a so-called *first-order scale function*. Unfortunately, this approach fails in our setup. The reason for this failure is the severely shifted sample $(Y_j)$ (we do not observe $\varepsilon_j$) and the particular type of interpolation used in (\[eqquantfails\]), which leads to an insufficient rate of convergence in the above approach. The bias that is induced by the shift will be present in any estimation method. 
This fact makes us believe that under model (\[eq11\]), quantile estimation for general regularly varying distributions is not possible. Since for any $t > 0$ we have the relation $$\begin{aligned} \label{eqmotivateA} F_{{x}} \bigl(A_{{x}}(n/t) \bigr)^n \bigl(1 + \mbox{\scriptsize$\mathcal{O}$}(1) \bigr) &=& F_{{x}} \bigl(\mathcal{U}_{{x}}(n/t) \bigr)^n\nonumber \\ &=& (1 - t/n )^n \\ &=& e^{-t} \bigl(1 + \mbox{\scriptsize$ \mathcal{O}$}(1) \bigr)\qquad\mbox{as }n \to\infty,\nonumber\end{aligned}$$ a viable alternative is provided by a plug-in estimator $\widehat {A}_{{x}}(y)$, based on suitable estimates $\hat{{\mathfrak{a}}}_{{x}}$, $\hat{{\mathfrak{b}}}_{{x}}$ and $\hat{{\mathfrak{c}}}_{{x}}$. Here, the shift may be overcome by location invariant estimators for these quantities. The fact that these parameters additionally vary in ${x}$ with unknown smoothness degree adds another level of complexity and needs to be dealt with in a localized, adaptive manner. At this stage, it is worth mentioning that our adaptive procedure does not hinge on any particular type of quantile estimator. As a matter of fact, we only require the following property of an admissible quantile estimator $\widehat{A}_{{x}}(y)$. \[defnquantestadmissible\] Given ${x}\in[0,1]$, let $\mathcal{Y}_{{x}} = [\log n, n^{4/{\mathfrak{a}}_{{x}}}]$ and $s \in\{0,1\}$. We call $\widehat{A}_{{x}}(y)$ admissible if for any fixed $v\in{\mathbb{N}}$ and constants $c_1^- < 1<c_1^+$, which may be arbitrarily close to one, we have $$P_f \biggl(c_1^- \leq\sup_{y \in\mathcal{Y}_{{x}}} \biggl{\vert}\frac {\widehat{A}_{{x}}^{(s)}(y)}{A_{{x}}^{(s)}(y)}\biggr{\vert}\leq c_1^+ \biggr) = 1 - {\mathcal{O}}\bigl(n^{-v} \bigr),$$ uniformly over $f\in H_{{\mathcal{N}}({x})}(\beta,L)$, where $g^{(s)}(\cdot)$ denotes the $s$th derivative of a function $g(\cdot)$. \[remadmL2\] Admissibility for $s = 1$ is only required in case of the $L_q$-norm loss. Now we shall construct an admissible estimator under Assumption \[assmain\]. 
Even though the class of potential estimators seems to be quite large under Assumption \[assmain\], verifying the conditions of Definition \[defnquantestadmissible\] leads to quite technical and tedious calculations. Moreover, the requirement of location invariance rules out many prominent estimators from the literature. Regarding the shape parameter ${\mathfrak{a}}_{{x}}$, this eliminates, for instance, Hill-type estimators as possible candidates; see Alves [@fraga2002] and de Haan and Ferreira [@dehaanbook2006]. Possible alternatives are Pickands’s estimator (cf. Pickands [@pickandsIII1975] and Drees [@drees1995]) or the probability weighted moment estimator by Hosking and Wallis [@hoskins1987]. These may, however, exhibit a poor performance in practice; see, for instance, de Haan and Peng [@dehaanpeng1998] for a comparison. In [@falk1995], Falk proposed the negative Hill estimator, which, unlike its positive counterpart, is also location invariant; see also de Haan and Ferreira [@dehaanbook2006]. Transferring this approach to our setup, we construct estimators $\hat{{\mathfrak{a}}}_{{x}}$, $\hat{{\mathfrak{b}}}_{{x}}$ and $\hat{{\mathfrak{c}}}_{{x}}$ that are location invariant, and also inherit the favorable variance property of Hill’s estimator. Based on these estimates, we can use the plug-in estimator $$\widehat{A}_{{x}}(y) = -\hat{{\mathfrak{c}}}_{{x}}(\log y)^{\hat{{\mathfrak{b}}}_{{x}}} y^{-1/\hat{{\mathfrak{a}}}_{{x}}}.$$ To construct the estimators $\hat{{\mathfrak{a}}}_{{x}}$, $\hat{{\mathfrak{b}}}_{{x}}$ and $\hat{{\mathfrak{c}}}_{{x}}$ for fixed ${x}\in[0,1]$, consider the neighborhoods ${\mathcal{N}}_k({x}) = \{y\dvtx {\vert}{x}-y{\vert}\leq h_k\}$ for $k = 0,\ldots,K-1$. Introduce the sets $\mathcal {S}_{k}({x}) = \{Y_i\dvtx i/n \in{\mathcal{N}}_{k}({x})\}$, and note that the cardinality $\bar n_k({x}):= \# \mathcal{S}_{k}({x})$ satisfies $n h_k \leq\bar n_k({x})\leq2 nh_k + 1$. 
Let us rearrange the sample in $\mathcal{S}_{k}({x})$ as $$Y_{1,\bar n_{k}({x})}, Y_{2,\bar n_{k}({x})},\ldots, Y_{\bar n_{k}({x}),\bar n_{k}({x})},$$ where $Y_{j,\bar n_{k}({x})}$ denotes the $j$th largest $Y_{i} \in \mathcal{S}_{k}({x})$. For each $k = 0,\ldots,K-1$, let ${m}_k = {m}(\bar n_{k}({x}) )$ such that ${m}_k/\bar n_k \to0$, where $\bar n_k = \bar n_k({x})$ to lighten the notation. In the literature, a common parametrization of ${m}_k$ is ${m}_k = {\bar n_k}^{\mathfrak{m}}$ for $0 < \mathfrak{m}\leq1$. Before discussing the important issue of possible choices of $\mathfrak{m}$, we formally introduce our estimation procedure. Apart from the necessary location invariance, an estimator of ${A}_{{x}}(y)$ should also adapt to the unknown smoothness degree of the parameters ${{\mathfrak{a}}}_{{x}}$, ${{\mathfrak{b}}}_{{x}}$ and ${{\mathfrak{c}}}_{{x}}$. A related issue is dealt with in the literature; see, for instance, Drees [@drees2001] or Grama and Spokoiny [@gramaspokoiny2008]. In order to achieve this adaptivity, we apply a Lepski-type procedure to select among appropriate base estimators. We first tackle the problem of estimating ${{\mathfrak{a}}}_{{x}}$. Using Falk’s idea in [@falk1995], we define $$\qquad\frac{1}{\hat{{\mathfrak{a}}}_{{x}} ({m}_k )} = \frac{1}{{m}_k}\sum_{i = 2}^{{m}_k -1} \log\biggl(\frac{Y_{{m}_k,\bar n_k} - Y_{1,\bar n_k}}{Y_{i,\bar n_k} - Y_{1,\bar n_k}} \biggr),\qquad k = 0,1,\ldots,K-1.$$ Note that this estimator is clearly location invariant. 
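The displayed estimator $1/\hat{{\mathfrak{a}}}_{{x}}({m}_k)$ can be sketched directly; the error law, the sample size and the choice ${m}= 2\bar n^{2/3}$ below are illustrative, and the sketch omits the localization in ${x}$ and the subsequent Lepski-type selection of ${m}_k$.

```python
import numpy as np

def neg_hill(sample, m):
    """Location-invariant negative-Hill estimate of the sharpness a:
    1/a_hat = (1/m) * sum_{i=2}^{m-1} log((Y_m - Y_1) / (Y_i - Y_1)),
    where Y_j denotes the j-th largest observation."""
    srt = np.sort(sample)[::-1]                          # Y_1 >= Y_2 >= ...
    top = srt[0]                                         # sample maximum Y_1
    ratios = (srt[m - 1] - top) / (srt[1:m - 1] - top)   # i = 2, ..., m-1
    return m / np.log(ratios).sum()

rng = np.random.default_rng(3)
a_true = 1.0
eps = -rng.uniform(size=20000) ** (1.0 / a_true)   # F(y) = 1 - |y|^a near 0
m = int(2 * len(eps) ** (2 / 3))                   # m = 2 * n^(2/3), cf. Sec. 3.2
a_hat = neg_hill(eps, m)
```

Because only differences to the sample maximum enter, adding any constant shift to the sample (such as an unknown frontier value) leaves the estimate unchanged.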
For $\rho> 1$ select the index $\hat{k}_{{\mathfrak{a}}}({x})$ via $$\begin{aligned} \label{defnaestimatorindex} \hat{k}_{{\mathfrak{a}}}({x}) &:=&\inf\bigl\{k = 0,\ldots,K -1 | \exists l \leq k\dvtx \nonumber\\[-8pt]\\[-8pt] &&\phantom{\inf\bigl\{} \bigl{\vert}\hat{{\mathfrak{a}}}_{{x}}^{-1} ({m}_{k+1} ) - \hat{{\mathfrak{a}}}_{{x}}^{-1} ( {m}_l )\bigr{\vert}> \rho^{-k} (\log n)^{-1} \bigr\} \wedge K.\nonumber\end{aligned}$$ As a final estimator, we put $$\hat{{\mathfrak{a}}}_{{x}}^{-1} = \hat{{\mathfrak{a}}}_{{x}}^{-1} ({m}_{\hat{k}_{{\mathfrak{a}}}} )\qquad\mbox{where }\hat{k}_{{\mathfrak{a}}} = \hat{k}_{{\mathfrak{a}}}({x}).$$ For the estimation of ${{\mathfrak{b}}}_{{x}}$, we proceed in a similar manner. For $k = 0,1,\ldots,K-1$, we put $$\hat{{\mathfrak{b}}}_{{x}} ({m}_k ) = \frac{1}{{m}_k \log\log \bar n_k}\sum _{i = 2}^{{m}_k -1}\log\biggl(\frac{Y_{i,\bar n_k} - Y_{1,\bar n_k}}{(\bar n_k/i)^{-1/\hat{{\mathfrak{a}}}_{{x}}({m}_k)} - (\bar n_k/1)^{-1/\hat{{\mathfrak{a}}}_{{x}}({m}_k)}} \biggr),\hspace*{-35pt}$$ and select the index $\hat{k}_{{\mathfrak{b}}}({x})$ via $$\begin{aligned} \label{defnbestimatorindex} \hat{k}_{{\mathfrak{b}}}({x})&:=&\inf\bigl\{k = 0,\ldots, \hat{k}_{{\mathfrak{a}}}({x}) | \exists l \leq k\dvtx \nonumber\\[-8pt]\\[-8pt] &&\hphantom{\inf\bigl\{} \bigl{\vert}\hat{{\mathfrak{b}}}_{{x}} ({m}_{k+1} ) - \hat{{\mathfrak{b}}}_{{x}} ({m}_l )\bigr{\vert}> \rho^{-k} ( \log\log n)^{-1} \bigr\} \wedge K.\nonumber\end{aligned}$$ As final estimator, we then put $$\hat{{\mathfrak{b}}}_{{x}} = \hat{{\mathfrak{b}}}_{{x}} ( {m}_{\hat{k}_{{\mathfrak{b}}}} )\qquad\mbox{where }\hat{k}_{{\mathfrak{b}}} = \hat{k}_{{\mathfrak{b}}}({x}).$$ Interestingly, it turns out that $\hat{{\mathfrak{b}}}_{{x}} = {\mathfrak{b}}_{{x}} + \frac{\log{\mathfrak{c}}_{{x}}}{\log\log n h_{\hat{k}_{{\mathfrak{b}}}}} (1 + \mbox{\scriptsize $\mathcal{O}$}_P(1) )$. 
Since this implies that $$\begin{aligned} (\log nh_k)^{\hat{{\mathfrak{b}}}_{{x}}} &=& {\mathfrak{c}}_{{x}}(\log nh_k)^{{\mathfrak{b}}_{{x}}} \bigl(1 + \mbox{\scriptsize$ \mathcal{O}$}_P(1) \bigr)\qquad\mbox{for }k = 0,\ldots, K-1,\end{aligned}$$ there is no need to estimate ${\mathfrak{c}}_{{x}}$ separately; it is included in the bias for free. We are thus led to the definition of our estimator $$\label{defnAestimate} \widehat{A}_{{x}}(y) = -(\log y)^{\hat{{\mathfrak{b}}}_{{x}}} y^{-1/\hat{{\mathfrak{a}}}_{{x}}}.$$ For the consistent estimation of $\widehat{A}_{{x}}(\cdot)$, we need a relation between the initial bandwidth $h_0$ and the bias induced by the parameter $\beta_{{x}}$. Note that such an assumption is inevitable, since any adaptive estimation procedure needs to start off with some initial bandwidth. Thus, in the sequel, we will assume that $$\label{eqparamrelation0} h_0^{\beta_{{x}}} \bigl{\vert}A_{{x}} ( {m}_0 )\bigr{\vert}^{-1} = \mbox{\scriptsize$\mathcal{O}$} \bigl((\log n)^{-1} \bigr).$$ If $\mathfrak{h}_0,\mathfrak{m}> 0$ are such that $$\label{eqparamrelation1} \mathfrak{m}\mathfrak{h}_0 < (1 - \mathfrak{h}_0){\mathfrak{a}}_{0} \beta_0,$$ for some lower bounds $$\label{eqlowbnd} {\mathfrak{a}}_0\leq{\mathfrak{a}}_{{x}}\quad\mbox{and}\quad \beta_0 \leq\beta_{{x}}$$ on the unknown parameters, then (\[eqparamrelation0\]) is valid. In the supplementary material [@jirmeireisssuppl] we prove the following result under the more general Assumption 10.1, which is implied by Assumption \[assmain\]. \[propestimateunivquantile\] Grant Assumption 10.1, and suppose that (\[eqparamrelation0\]) is valid. Then $\widehat{A}_{{x}}(y)$ defined in (\[defnAestimate\]) is admissible.
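The plug-in estimator (\[defnAestimate\]) itself is a one-liner; the sketch below (argument names are ours) is meant for $y > 1$, where $\log y > 0$ so the power $(\log y)^{\hat{{\mathfrak{b}}}_{{x}}}$ is well defined for real exponents:

```python
import math

def A_hat(y, inv_a, b):
    """Plug-in estimate -(log y)^b * y^(-1/a) as in (defnAestimate);
    inv_a plays the role of 1/a_hat_x and b of b_hat_x.  Valid for y > 1."""
    return -(math.log(y)) ** b * y ** (-inv_a)

# With b = 0 the estimate reduces to the pure power law -y^(-1/a),
# and |A_hat(y)| decreases to 0 as y grows, matching A_x(y) -> 0.
value = A_hat(math.e, 1.0, 0.0)   # approximately -exp(-1)
```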
In practice, the negative Hill estimator works well for ${\mathfrak{a}}_{{x}} \in(0,3/2]$ (and small ${\mathfrak{b}}_{{x}}$), but has increasing (asymptotically negligible) bias for ${\mathfrak{a}}_{{x}} > 3/2$ and ${\mathfrak{b}}_{{x}} \neq0$, which should be corrected in applications; see also Section \[5\], paragraph (B). Note that Assumption \[assmain\] includes cases where a CLT for an estimator $\hat{{\mathfrak{a}}}_{{x}}$ fails to hold, and only slower rates of convergence than $m_k^{-1/2}$ are possible. This is particularly the case if ${\mathfrak{b}}_{{x}} \neq0$; we refer to de Haan and Ferreira [@dehaanbook2006] for details. In practice, the choice of the actual bandwidth ${m}_k$ (and hence $\mathfrak{m}$) is of significant relevance, and much research has been devoted to this subject; see, for instance, Drees [@Drees1998] and Drees et al. [@drees2001how]. In [@fraga2002], Alves addresses this question for a related (positive) location invariant Hill-type estimator both in theory (Theorem 2.2) and practice (concluding remarks and algorithm). Transferring the practical aspects, this amounts to the choice ${m}_k = 2{\bar n_k}^{\mathfrak{m}}$, $\mathfrak{m}= 2/3$ in our case. Still, any other choice also leads to the overall optimal rates presented in Theorems \[Tpointwise\] and \[TL2upperbound\], as long as $0 < \mathfrak{m}< 1$ holds.

Pointwise adaptation {#secpointwise}
--------------------

Throughout this subsection we fix a point ${x}\in[0,1]$. For the seminorm in Proposition \[propestimator\] we take ${\Vert}f{\Vert}:={\vert}f({x}){\vert}$. According to Theorem \[teocon\], we set $$\begin{aligned} \label{EqRk2} \nonumber B_k &:=& c\bigl(\beta^*,L\bigr) h_k^\beta, \nonumber\\[-8pt]\\[-8pt]\nonumber {R}_k &:=& c\bigl(\beta^*\bigr) \max\bigl\{ \bigl{\vert}Z_j(h_k,{x})\bigr{\vert}\dvtx j=1,\ldots,2J\bigl(\beta^* \bigr) \bigr\},\end{aligned}$$ in the notation of (\[eqcond1\]).
The nonnegativity and monotonicity constraints on $B_k$ and $R_k$ are satisfied since $h_k$ increases. We define the oracle and estimated critical values as $$\begin{aligned} \nonumber {\mathfrak{z}}_k({x}) &=& 4 c\bigl(\beta^*\bigr)\biggl{\vert}A_{{x}} \biggl(\frac{{\mathfrak{a}}_{{x}} n h_k}{4 J(\beta^*) \log n} \biggr)\biggr{\vert},\qquad\hat{{\mathfrak{z}}}_k({x}) = 4 c\bigl(\beta^*\bigr)\biggl{\vert}\widehat{A}_{{x}} \biggl(\frac {\hat{{\mathfrak{a}}}_{{x}}n h_k}{4 J(\beta^*)\log n } \biggr)\biggr{\vert},\end{aligned}$$ for $k=0,\ldots,K-1$ and set ${{\mathfrak{z}}}_K({x}) = \hat{{\mathfrak{z}}}_K({x}):=0$. To lighten the notation, we often drop the index ${x}$ and write ${\mathfrak{z}}_k$ and $\hat{{\mathfrak{z}}}_k$. As outlined earlier in (\[eqmotivateA\]), this definition is motivated by the fact that $\mathcal {U}_{{x}}(y) \approx A_{{x}}(y)$ as $y \to\infty$. The critical values can thus be viewed as an appropriate estimate for certain extremal quantiles. The additional $\log n$-factor turns out to be the price to pay for adaptation. We proceed by introducing the estimated truncated critical values as $$\label{eqSchaetzerzeta} \hat{{\mathfrak{z}}}_k^T = \min\{\hat{{\mathfrak{z}}}_k,1 \}.$$ The truncation of the estimator $\hat{{\mathfrak{z}}}_k^T$ is required to exclude a possible pathological behavior both in theory and practice. Note that this does not affect its proximity to ${\mathfrak{z}}_k$ if $\hat{{\mathfrak{z}}}_k$ is consistent, since ${\mathfrak{z}}_k \to0$ uniformly in $k = 0, \ldots,K-1$. We have the following pointwise result. \[Tpointwise\] Fix ${x}\in[0,1]$, and suppose ${\mathfrak{a}}_{{x}}, {\mathfrak{b}}_{{x}}, {\mathfrak{c}}_{{x}}$ and $\beta_{{x}}\in(0,\beta^*+1]$ are unknown with $\mathfrak {h}_0 < \beta_{{x}} {\mathfrak{a}}_{{x}} / (\beta_{{x}} {\mathfrak{a}}_{{x}} + 1)$.
If Assumption \[assmain\] holds, then $$\begin{aligned} && \sup_{f \in H_{{\mathcal{N}}({x})}(\beta,L)} \mathbb{E}_f \bigl[ \bigl( \hat{f}({x}) - f({x}) \bigr)^2 \bigr] \\ &&\qquad = {\mathcal{O}}\bigl((n /\log n)^{(-2 \beta_{{x}})/({\mathfrak{a}}_{{x}} \beta_{{x}}+1)} (\log n)^{(2{\mathfrak{a}}_{{x}} {\mathfrak{b}}_{{x}} \beta_{{x}})/({\mathfrak{a}}_{{x}} \beta_{{x}}+1)} \bigr).\end{aligned}$$ As will be demonstrated in Section \[seclowerbounds\], this result is optimal in the minimax sense. $L_q$-adaptation {#secL2} ---------------- Let us consider the $L_q([0,1])$-norm as seminorm in Proposition \[propestimator\]. Due to (\[eqL2setup\]) we can choose $$\begin{aligned} B_k &:=& c\bigl(\beta^*,L\bigr) h_k^\beta, \label{EqBk} \\ R_k &:=& c\bigl(\beta^*\bigr) \bigl{\Vert}\max\bigl\{ \bigl{\vert}Z_j(h_k,\cdot)\bigr{\vert}\dvtx j=1,\ldots,2J\bigl(\beta^* \bigr) \bigr\}\bigr{\Vert}_q \label{EqRk}\end{aligned}$$ in the notation of (\[eqcond1\]). We verify that the nonnegativity and monotonicity constraints on $B_k$ and $R_k$ are satisfied for $\rho >1$ in (\[eqbandwidth\]) since for any $x\in[0,1]$ each interval $x + h_k {\mathcal I}_j$, $j=1,\ldots,2J(\beta^*)$ is included in $x + h_{k+1} {\mathcal I}_{j'}$ for some $j'=1,\ldots,2J(\beta^*)$ for any $k$. Throughout this paragraph, we assume that the parameters ${\mathfrak{a}}_{x}, {\mathfrak{b}}_{x}, {\mathfrak{c}}_{x}$ remain constant for $x \in[0,1]$. We denote these with ${\mathfrak{a}}_{F}, {\mathfrak{b}}_{F}, {\mathfrak{c}}_{F}$, and the corresponding $A_x(\cdot)$ with $A_F(\cdot)$. The construction of the critical values is more intricate compared to the pointwise case, and relies on the following quantity. 
Introduce $$\qquad \widehat{\mathit{IU}}_n(s,q)= \biggl(\int_{n^{-2/\hat{{\mathfrak{a}}}_F}}^{n^{1/2}} \bigl((-\widehat{A}_F)^q (s/y ) \bigr)^{(1)} \exp(-y) \,dy \biggr)^{1/q}, \qquad q \geq1,$$ and the corresponding version ${\mathit{IU}}_n(s,q)$ where we replace $\widehat {A}_F (\cdot)$ by ${A}_F (\cdot)$ and $\hat{{\mathfrak{a}}}_F$ by ${\mathfrak{a}}_F$ \[recall that $g^{(s)}(\cdot)$ denotes the $s$th derivative of a function $g(\cdot)$\]. For $k=0,\ldots,K-1$, we introduce the critical values as $$\begin{aligned} \nonumber {\mathfrak{z}}_k &=& \sqrt{5} c\bigl(\beta^*\bigr)\biggl{\vert}{\mathit{IU}}_n \biggl(\frac{n h_k}{6 J(\beta^*)},q \biggr)\biggr{\vert},\qquad \hat{{\mathfrak{z}}}_k = \sqrt{5} c\bigl(\beta^*\bigr)\biggl{\vert}\widehat{\mathit{IU}}_n \biggl(\frac{n h_k}{6 J(\beta ^*)},q \biggr)\biggr{\vert},\end{aligned}$$ and set ${{\mathfrak{z}}}_K = \hat{{\mathfrak{z}}}_K:=0$. Moreover, we define the corresponding truncated values as $$\hat{{\mathfrak{z}}}_k^T = \min\{\hat{{\mathfrak{z}}}_k,1 \}.$$ Unlike the pointwise case, the critical values do not correspond to an extremal quantile, but they can be considered as an estimate of $\mathbb{E}[R_k^q]^{1/q}$. This already indicates that the $L_q$-case is substantially different from the pointwise situation, and indeed additional, more refined arguments are necessary to prove the result given below. \[TL2upperbound\] Suppose ${\mathfrak{a}}_F>0$ and $\beta\in(0,\beta^*+1]$ are unknown with $\beta{\mathfrak{a}}_F / (\beta{\mathfrak{a}}_F + 1)>\mathfrak{h}_0$. We select $\rho\in{\mathbb{N}}$ with $\rho>1$. If $q \geq1$, then the adaptive estimator $\hat{f}$ from Section \[2\] satisfies $$\sup_{f\in H_{[0,1]}(\beta,L)} \mathbb{E}_f \bigl[{\Vert}\hat{f} - f{\Vert}_q^q \bigr] = {\mathcal{O}}\bigl(n^{(-q\beta)/({\mathfrak{a}}_{F} \beta+1)} (\log n)^{(q \beta{\mathfrak{a}}_{F} {\mathfrak{b}}_{F})/({\mathfrak{a}}_{F} \beta+1)} \bigr).
$$ \[remL2case\] If one allows for ${\mathfrak{a}}_{x}, {\mathfrak{b}}_{x}, {\mathfrak{c}}_{x} \in H (\beta _0,L )$ for $x \in[0,1]$, the above result remains valid if one takes the supremum over the above bound. This result is also optimal in the minimax sense. Theorem \[TL2upperbound\] shows that the estimator $\hat{f}$ is $L_q$-adaptive; that is, it attains the minimax rates, which are optimal in the oracle setting of known ${\mathfrak{a}}_F$ and $\beta$, although it does not use these constants in its construction; see Theorem \[T421\] below for the lower bound.

Asymptotic lower bounds {#seclowerbounds}
=======================

We show that the logarithmic loss in the convergence rate in Theorem \[Tpointwise\] is unavoidable for any sequence of estimators of $f$. First, we treat the case ${\mathfrak{a}}_x\in(0,2]$, for which we derive a lower bound, even for a known error distribution. It suffices to treat the case where ${\mathfrak{a}}= {\mathfrak{a}}_x$, ${\mathfrak{b}}= {\mathfrak{b}}_x$ and ${\mathfrak{c}}= {\mathfrak{c}}_x$ remain constant for $x \in[0,1]$. We maintain this convention throughout this section. We assume that the $\varepsilon_j$ have a Lebesgue density $f_\varepsilon$ which is continuous and strictly positive on $(-\infty,0)$, and vanishes on $[0,\infty)$. Moreover, we impose that the $\chi^2$-distance for the parametric location problem satisfies $$\label{eqeps} \qquad\int_{-\infty}^0 \bigl{\vert}f_\varepsilon(x+\vartheta) - f_\varepsilon(x)\bigr{\vert}^2 / f_\varepsilon(x) \,dx \leq c_\varepsilon\vartheta^{{\mathfrak{a}}} {\vert}\log\vartheta{\vert}^{-{\mathfrak{a}}{\mathfrak{b}}} \qquad\forall\vartheta\in(0,1),$$ for some ${\mathfrak{a}}\in(0,2]$ and ${\mathfrak{b}}\in\mathbb{R}$. Note that ${\mathfrak{a}}$ and ${\mathfrak{b}}$ correspond to ${\mathfrak{a}}_x$ and ${\mathfrak{b}}_x$, respectively, in (\[defU\]) and (\[defL\]) with uniform $x$.
As examples for such error densities with ${\mathfrak{b}}=0$, we consider the reflected gamma-densities $$f_{\lambda}(x):= \frac{1}{\Gamma(\lambda)} (-x)^{\lambda-1} \exp(x){ \mathbf1}_{(-\infty,0)}(x), \qquad x\in\mathbb{R}, $$ for $\lambda\in[1,2)$. Thus, by ${\vert}(1+\vartheta/x)^{\lambda -1}-1{\vert}\le\vartheta/{\vert}x{\vert}$ for $x\le-\vartheta$ we have $$\begin{aligned} &&\int_{-\infty}^0 \bigl{\vert}f_\lambda(x+ \vartheta) - f_\lambda(x)\bigr{\vert}^2 / f_\lambda(x) \,dx \\ &&\qquad = \int_{-\infty}^{-\vartheta} \biggl{\vert}\biggl(1 + \frac{\vartheta}{x} \biggr)^{\lambda-1} \exp(\vartheta) - 1 \biggr {\vert}^2 f_\lambda(x) \,dx + \int_{-\vartheta}^0 f_\lambda(x) \,dx \\ &&\qquad \leq2 \bigl(\exp(\vartheta)-1 \bigr)^2 + 2 \exp(2\vartheta) \vartheta^2 \\ &&\quad\qquad{}+ 2 \exp(2\vartheta) \vartheta^2 \int _{-1}^{-\vartheta } x^{-2} f_\lambda(x) \,dx + \int_{-\vartheta}^0 f_\lambda(x) \,dx \\ &&\qquad \leq{\mathcal{O}}\bigl(\vartheta^2\bigr) + \frac{2}{(2-\lambda) \Gamma(\lambda )} \exp(2 \vartheta) \vartheta^\lambda\bigl(1 - \vartheta^{2-\lambda}\bigr) + \vartheta^\lambda/\Gamma(\lambda+1) \\ &&\qquad = {\mathcal{O}}\bigl(\vartheta^{\lambda}\bigr).\end{aligned}$$ Therefore, the reflected gamma-density satisfies (\[eqeps\]) when putting ${\mathfrak{a}}= \lambda$. Note that (\[eqeps\]) implies (\[defU\]), (\[defL\]) under the Assumption \[assmain\](i). The following theorem together with the upper bound in Theorem \[Tpointwise\] shows that pointwise adaptation causes a logarithmic loss in the convergence rates, which is known from regular regression when inserting ${\mathfrak{a}}=2$. \[Tlowerbound\] Assume condition (\[eqeps\]), and fix some arbitrary $x_0\in[0,1]$, $\beta_1>\beta_2>0$ and $C_0,C_1>0$. 
Let $\{\hat{f}_n(x_0)\}_n$ be any sequence of estimators of $f(x_0)$ based on the data $Y_1,\ldots,Y_n$ which satisfies $$\sup_{f\in{\mathcal H}_{[0,1]}(\beta_1,C_0)} \mathbb{E}_f \bigl{\vert}\hat{f}_n(x_0) - f(x_0)\bigr{\vert}^2 = {\mathcal{O}}\bigl(n^{-2\beta _2/(1+\beta_2 {\mathfrak{a}})} n^{-\xi} \bigr), $$ for some $\xi>0$. Then this estimator sequence suffers from the lower bound $$\begin{aligned} && \liminf_{n\to\infty} (n/\log n)^{(2\beta_2)/(1+ {\mathfrak{a}}\beta _2)} (\log n)^{(-2{\mathfrak{a}}{\mathfrak{b}}\beta_2)/(1 + {\mathfrak{a}}\beta_2)} \\ &&\qquad{}\times \sup_{f\in{\mathcal H}_{[0,1]}(\beta_2,C_1)} \mathbb{E}_f \bigl {\vert}\hat{f}_n(x_0) - f(x_0)\bigr {\vert}^2 > 0.\end{aligned}$$ For completeness we also derive the $L_q$-minimax optimality of the convergence rates established by our estimator $\hat{f}$ in Theorem \[TL2upperbound\]. This rectifies a conjecture after Theorem 3 in Hall and van Keilegom [@hallkeilegom2009] for general smoothness degrees. \[T421\] Assume condition (\[eqeps\]), and let $\{\hat{f}_n\}_n$ be any sequence of estimators of $f$ based on the data $Y_1,\ldots,Y_n$. Then, for any fixed $q\geq1$, we have $$\liminf_{n\to\infty} n^{\beta_2/(1+{\mathfrak{a}}\beta_2)} (\log n)^{(-{\mathfrak{a}}{\mathfrak{b}}\beta_2 )/(1 + {\mathfrak{a}}\beta_2)} \sup _{f\in {\mathcal H}_{[0,1]}(\beta_2,C_1)} \mathbb{E}_f \bigl[{\Vert}\hat{f}_n - f{\Vert}_q \bigr] > 0. $$ Now we focus on the case ${\mathfrak{a}}>2$. To simplify some of the technical arguments in the proofs, we restrict to the case ${\mathfrak{b}}= 0$. If ${\mathfrak{a}}>2$, the convergence rates become slower than in the Gaussian case. Instead of the convenient conditions (\[defU\]) and (\[defL\]), we choose the slightly different Definition \[assmainlowerbound\], under which the upper bound proofs obviously still hold true. 
\[assmainlowerbound\] Let ${\mathfrak{a}}>2$, $0 < \mathfrak{h}_0 \leq1$, and denote with ${\mathcal D}_n({\mathfrak{a}}, \mathfrak{h}_0)$ the set of all error distribution functions whose quantile functions $\mathcal{U}^{(n)}$ satisfy: $\displaystyle \sup_{y \in(0, \infty]} \biggl{\vert}\frac{\mathcal {U}^{(n)}(y)}{A(y/2)}\biggr {\vert}\leq1\qquad\mbox{where }A(y) = - y^{-1/{\mathfrak{a}}}$, $\displaystyle \sup_n \sup_{y \in[\log N, N]} \biggl {\vert}\frac{\mathcal {U}^{(n)}(y)}{A(y)} - 1 \biggr{\vert}{\vert}\log y{\vert}\leq(\log n)^{-2}, \qquad N = n^{\mathfrak{h}_0}$. Note that we have ${\mathcal D}_n({\mathfrak{a}}, \mathfrak{h}_0) \subseteq {\mathcal D}_n({\mathfrak{a}}, \mathfrak{h}_0')$ if $\mathfrak{h}_0 > \mathfrak{h}_0'$. The above conditions particularly imply that the distribution function $F(y) = F^{(n)}(y)$ \[or likewise $\mathcal{U}(y) = \mathcal {U}^{(n)}(y)$\] of the errors $\varepsilon_j$ may depend on $n$. While the lower bound results for ${\mathfrak{a}}\leq2$ still hold true if the error distribution is known and independent of the design point, here two competing types of regression errors have to be considered in the proof. Note that the probability measure thus depends on both the regression function $f$ and the distribution function $F$, which we mark as $P_{f, F}$. \[teo-aexp2\] Fix some arbitrary $x_0\in[0,1]$, $\beta_1>\beta_2> 0$ and $C_0,C_1>0$. Let ${\mathfrak{a}}> 2$, and suppose that $\mathfrak{h}_0 < \frac {\beta_2}{{\mathfrak{a}}\beta_2 + 1}$. Let $\{\hat{f}_n(x_0)\}_n$ be any sequence of estimators of $f(x_0)$ based on the data $Y_1,\ldots,Y_n$ which satisfies $$\sup_{f\in{\mathcal H}_{[0,1]}(\beta_1,C_0)} \sup_{F \in{\mathcal D}_n({\mathfrak{a}}, \mathfrak{h}_0)} \mathbb{E}_{f, F} \bigl{\vert}\hat{f}_n(x_0) - f(x_0)\bigr{\vert}^2 = {\mathcal{O}}\bigl(n^{-2\beta_2/(1+ {\mathfrak{a}}\beta_2)} n^{-\xi} \bigr), $$ for some $\xi>0$. 
Then this estimator sequence suffers from the lower bound $$\liminf_{n\to\infty} (n/\log n)^{(2\beta_2)/(1+{\mathfrak{a}}\beta_2 )} \sup _{f\in{\mathcal H}_{[0,1]}(\beta_2,C_1)} \sup_{F \in {\mathcal D}_n({\mathfrak{a}}, \mathfrak{h}_0)} \mathbb{E}_{f, F} \bigl{\vert}\hat{f}_n(x_0) - f(x_0)\bigr {\vert}^2 > 0. $$ The proofs of Theorems \[Tlowerbound\], \[T421\] and \[teo-aexp2\] are given in the supplementary material [@jirmeireisssuppl]. Theorem \[T421\] can be extended in a similar way to ${\mathfrak{a}}>2$; a detailed proof is omitted.

Numerical simulations and real data application {#5}
===============================================

The aim of this section is to highlight some of the theoretical findings with numerical examples. We will briefly touch on the following points:

- performance of the estimator on different function types and the corresponding effect on adaptive bandwidth selection;
- the effect of the different parameters ${\mathfrak{a}}_{{x}}$, ${\mathfrak{b}}_{{x}}$ and ${\mathfrak{c}}_{{x}}$;
- an application to the Wolf sunspot number;
- an application to the yearly best men’s outdoor 1500 m times.

![ Function $f = f_1$, $\beta^* = 2$, $n = 200$, $\varepsilon_j \sim {\operatorname{exp}}(1)$;  function $f = f_2$, $\beta^* = 2$, $n = 200$, $\varepsilon_j \sim {\operatorname{exp}}(1)$.[]{data-label="figcomparemultid3"}](1248f02.eps)

In order to illustrate the behavior of the estimation procedure, we consider two different regression functions, displayed in black in Figures \[figcomparemultid3\] and \[figqq-a\](a), $$\begin{aligned} f_1(x) &=& -2\cdot{\mathbf{1}}(x < 1/3) - 3\cdot{\mathbf{1}}(1/3\leq x < 2/3) - {\mathbf{1}}(2/3 < x),\qquad x\in[0,1], \\ f_2(x) &=& -2 + 2\cos(2\pi x) + 0.3\sin(19\pi x),\qquad x\in[0,1].\end{aligned}$$ They are similar to those discussed in Chichignoud [@chichi2012].
Comments on the implementation and setup are given in the supplementary material [@jirmeireisssuppl], together with a numerical comparison to oracle estimators and additional simulations. All of the results can be reproduced with the accompanying code, available at [@jirmeireissrcode]. Figure \[figcomparemultid3\] gives a first impression of the behavior and accuracy of our estimation procedure. In both cases, the errors $\varepsilon_j$ follow an exponential distribution ${\operatorname{exp}}(1)$, and the sample size is $n = 200$. The window size in Figure \[figcomparemultid3\] corresponds to the local sample size, chosen by the adaptive procedure. Even though $n$ is only of moderate size, the estimation procedure achieves good results by essentially recovering the shape of the underlying regression function, even in the wiggly case of function $f_2$. Simulations of other nonparametric (adaptive) estimators that do not take the nonregularity into account (cf. Lepski and Spokoiny [@lepskispokoiny1997] and the R-packages crs, gam, smooth-spline, etc.) often fail to do so (even with a mean correction). ![ Function $f = f_2$, $\beta^* = 4$, $n = 600$, $\varepsilon_j \sim\Gamma({\mathfrak{a}}_{{x}},1)$;  ${\mathfrak{a}}_{{x}}$ (black line) and $\hat{{\mathfrak{a}}}_{{x}}$ (blue points).[]{data-label="figcomparead3"}](1248f03.eps) \[figqq-a\] The effect of the shape (type) of the function on the bandwidth selection is highlighted by a color scheme, ranging from dark red (low) to dark violet (high). In order to understand the “coloring of the estimator,” one has to recall that the estimation procedure always tries to fit a local polynomial which “stays above the observations.” At first sight, this can lead to a surprisingly large bandwidth selection at particular spots. The bandwidth size is not necessarily an indicator for estimation accuracy.
The reason for this effect is the maximum function: additional observations are taken into account as long as this does not substantially change the maximum. Here, the setup is different from paragraph (A). We consider a sample size of $n = 600$, and we let the parameters vary in ${x}$. The impact of ${\mathfrak{b}}_{{x}}$, ${\mathfrak{c}}_{{x}}$ (and their estimates) on the overall estimator is rather insignificant. This is not unexpected and can be explained by the very definition in (\[defnAfisrt\]). We therefore focus only on the parameter ${\mathfrak{a}}_{{x}}$ in this paragraph. We consider the setup where the errors follow a Gamma distribution $\varepsilon_j \sim\Gamma({\mathfrak{a}}_{{x}},1)$ and ${\mathfrak{a}}_{{x}}$ varies according to the function ${\mathfrak{a}}_{{x}} = \sin(2 \pi{x}+ \pi /2) -\sqrt{1-({x}-1)^2}+2$. We only discuss function $f_2$ here; a more comprehensive comparison including an additional function $f_3$ is given in the supplementary material [@jirmeireisssuppl]. As can be clearly seen in Figure \[figcomparead3\], there is a considerable increase in estimation accuracy as ${\mathfrak{a}}_{{x}}$ gets closer to zero. Generally speaking, for larger ${\mathfrak{a}}_{{x}}$ the bias can be pronounced, and this is indeed the case at the top left of Figure \[figqq-a\](a). It simply turns out that there are no observations at all near the regression function $f_2$, which leads to the large gap. An approximate bias correction \[e.g., by $\widehat{\mathit{IU}}(nh_k,1)$, Section \[secL2\]\] could be applied, but we do not pursue this any further. Figure \[figcomparead3\] also reveals that the estimator $\hat{{\mathfrak{a}}}_{{x}}$ (blue) has a large variance and problems with quickly oscillating regression functions $f$ (compare with the supplementary material [@jirmeireisssuppl]). On the other hand, it seems to capture the general trend of decrease and increase to some extent.
We would like to point out, however, that these estimates are very sample dependent, and due to the relatively small, local sample size, the actual behavior of local samples may deviate significantly from a large sample of $\Gamma({\mathfrak{a}}_{{x}},1)$-distributed random variables. Significant overestimation leads to critical values that are too large, which in turn results in a slight overestimation of the regression function; see the very center of Figure \[figqq-a\](a) ($0.4$ to $0.6$). The opposite effect can be observed at both endpoints of Figure \[figqq-a\](a), where an underestimation is present, which leads to critical values that are too small. Also note that the negative Hill estimator generally tends to underestimate ${\mathfrak{a}}_{{x}}$ if ${\mathfrak{a}}_{{x}} \geq3/2$, which is due to an (asymptotically negligible) bias; cf. de Haan and Ferreira [@dehaanbook2006]. A thorough bias correction requires a precise second-order asymptotic expansion of the limit distribution of the negative Hill estimator, which is beyond the scope of this paper. Note, however, that a rudimentary bias correction is available in our implemented code. Another, more practical option would be to consider the estimation of ${\mathfrak{a}}_{{x}}$ itself as a regression problem with one-sided errors, treating the local estimates $\hat{{\mathfrak{a}}}_{{x}}$ as “sample.” A similar behavior appears when considering function $f_1$, but, as can be expected, the estimates $\hat{{\mathfrak{a}}}_{{x}}$ are more accurate. The Wolf sunspot number (often also referred to as the *Zürich number*) is a measure of the number of sunspots and groups of sunspots present on the surface of the sun. Initiated by Rudolf Wolf in 1848 in Zürich, this famous time series has been studied for decades by physicists, astronomers and statisticians.
The relative sunspot number $R_t$ is computed via the formula $$\label{eqsunspot} R_t = K_t (10 g_t + s_t ),$$ where $s_t$ is the number of individual spots observed at time $t$, $g_t$ is the number of groups observed at time $t$ and $K_t$ is the *observatory factor* or *personal reduction coefficient*. The factor $K_t$ (always positive and usually smaller than one) depends on the individual observatories around the world and is intended to convert the data to Wolf’s original scale, but also to correct for seeing conditions and other distortions. In general, we have the relationship $$\label{eqmodelgen} \mbox{observed data} = \mbox{observed fraction} \times \mbox{true value},$$ where the random variable $\mbox{observed fraction}$ always lies in $(0,1]$. Therefore, the factor $K_t$ can be viewed as an aggregated individual estimate of the right scaling. Over the last century, many different models have been fit to the sunspot data; we refer to He [@heli2001] and Solanki et al. [@Solanki20041084] for an overview. In fact, the study of sunspots attracted attention long before 1848. Recorded observations are, for instance, due to Thomas Harriot, Johannes and David Fabricius (in the 17th century), Edward Maunder and many more. However, much uncertainty lies in these data, and the sunspot time series before 1850 is usually referred to as “unreliable” or “poor.” It is therefore interesting to reconstruct the “true time series” or at least reduce some uncertainty. We attempt to do so for the period from 1749 to 1810, based on monthly observations. Let us reconsider model (\[eqsunspot\]). Given $R_t$, we may then postulate the model $$\label{eqsunspotpost} R_t = X_t \mathcal{S} \bigl(10 g_t^{\circ} + s_t^{\circ} \bigr),$$ where $g_t^{\circ}$, $s_t^{\circ}$ denote the corresponding true sunspot values, and $X_t \in(0,1]$. This means we concentrate all random components in $X_t$, which is in the spirit of model (\[eqmodelgen\]).
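A small simulation sketch illustrates the structure of model (\[eqsunspotpost\]): taking logarithms turns the multiplicative fraction $X_t \in (0,1]$ into a nonpositive additive error. Everything below (the smooth stand-in signal, the Beta-distributed fractions, the choice $\mathcal{S} = 1$) is hypothetical and for illustration only, not the actual sunspot data:

```python
import numpy as np

rng = np.random.default_rng(42)

t = np.arange(720)                                       # months (hypothetical)
true_signal = 60.0 + 40.0 * np.sin(2 * np.pi * t / 132)  # stand-in for 10*g + s
X = rng.beta(8.0, 2.0, size=t.size)                      # observed fraction in (0, 1]
S = 1.0                                                  # assume no systematic bias
R = X * S * true_signal                                  # multiplicative model

# On the log scale the model becomes additive, with one-sided
# errors log X_t <= 0 below the boundary log(S * true_signal):
errors = np.log(R) - np.log(S * true_signal)
```

Recovering the boundary $\log(\mathcal{S}\,(10 g_t^{\circ} + s_t^{\circ}))$ from $\log R_t$ is then a regression problem with one-sided errors of exactly the type treated in this paper.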
We point out that this is only one possible way from a modeling perspective; we refer to Kneip et al. [@kneip2012] or Koenker et al. [@koenker1994] and the references therein for alternatives and more general models. In our setup, the parameter $\mathcal{S}>0$ reflects the support of the “misjudgment” of the observer. For example, $\mathcal{S} \leq1$ is equivalent to the assumption that every observer always reports less than the true value. As we see below, it incorporates the systematic bias of the observers. By using a $\log$-transformation, we have the additive model $$\label{eqsunspotlog} \log R_t = \log X_t + \log \bigl(10g_t^{\circ} + s_t^{\circ} \bigr) + \log\mathcal{S},$$ which can be interpreted as a nonparametric regression problem with stochastic error $\log X_t \in(-\infty,0]$. The goal is to estimate the function $f(t) = \log(10g_t^{\circ} + s_t^{\circ} )$, the “true” relative sunspot number. Such estimation results can serve as input to structural physical models for sunspot activity like the time series approaches mentioned above. Unfortunately, one can only estimate $f(t) + \log\mathcal{S}$, where the bias $\log\mathcal{S}$ cannot be removed without any further assumptions. This is clear from the nonidentifiability in model (\[eqsunspotpost\]). Generally, $\mathcal{S}$ is a systematic (intrinsic) bias, which has to be overcome using other sources of information (expert judgement). Any other statistical approach will also suffer from such a global bias. ![Estimated Wolf number with $\mathcal{S} = 1$, $\beta^* = 5$.[]{data-label="figsunspot1"}](1248f04.eps) ![Yearly best men’s outdoor 1500 m times in seconds with estimated boundary ($\beta^* = 2$).[]{data-label="fig15001"}](1248f05.eps) The results for the estimated sunspot number are given in Figure \[figsunspot1\], where we plotted an estimate corresponding to $\mathcal {S} = 1$.
Given that observation techniques were much less advanced and coordinated in the 18th and 19th centuries, it is reasonable to assume $\mathcal{S} \leq1$. Apart from the estimated sunspot number itself, our estimation procedure provides a map from the uncertainty level $\mathcal{S}$ to the true sunspot number $f(t)$. The sharpness ${\mathfrak{a}}_{{x}}$ seems to vary mainly within the interval $[0,3.5]$. Finally, we would like to comment on the “peaks” around $1768$ and $1774$. These peaks are artifacts and originate from a too large initial bandwidth selection at these particular points. However, for the sake of reproducibility, we have kept them and did not make any ad hoc, data-dependent changes. As another example, we discuss the yearly best men’s outdoor 1500 m times starting from 1966, depicted in Figure \[fig15001\] with estimated lower boundary. Following Knight [@knight2006], the boundary can be interpreted as the best possible time for a given year. This data set displays an interesting behavior. As can be clearly seen from Figure \[fig15001\], the boundary steadily decreases from 1970 until around the year 2000, followed by a sudden and sharp increase. This event leaves room for speculation. Let us mention that until the year 2000, it had been very difficult to distinguish between biological and synthetic EPO. The breakthrough was achieved by Lasne and Ceaurriz [@lasneceaurriz2000], and since then, more and more refined and efficient doping tests have been developed. It seems plausible that this change and advance in doping controls has led to the sudden increase, but it might as well be attributed to some other reason.

Proof of the main results {#6}
=========================

Throughout the proofs, we make the following convention. For two sequences of positive numbers $a_n$ and $b_n$, we write $a_n \gtrsim b_n$ when $a_n \geq C b_n$ for some absolute constant $C> 0$, and $a_n \lesssim b_n$ when $b_n \gtrsim a_n$.
Finally, we write $a_n \thicksim b_n$ when both $a_n \lesssim b_n$ and $a_n \gtrsim b_n$ hold. [Proof of Theorem \[teocon\]]{} Throughout the proof, we fix some arbitrary $x\in[0,1]$ and write $\beta= \beta_{x}$ to lighten the notation. The data $Y_i$, $i=1,\ldots,n$, can be written as $$Y_i = \sum_{j=0}^{\beta^*} b_j (x_i-x)^j + \varepsilon_i + \Delta_i, $$ where $\Delta_i:= f(x_i) - \sum_{j=0}^{\beta^*} b_j (x_i-x)^j$. Putting $\Delta:= \max\{{\vert}\Delta_i{\vert}\dvtx {\vert}x_i-x{\vert}\leq h_k\}$, the coefficients $b_j$ are chosen as the Taylor coefficients $b_j = f^{(j)}(x)/j!$ for $j\leq\langle\beta\rangle$ and $b_j=0$ otherwise, such that by the Hölder condition on $f^{(\langle\beta \rangle)}$ in the Taylor remainder term $$\label{eqTaylor} \Delta\leq L h_k^\beta/ \bigl(\langle \beta\rangle+1\bigr)!.$$ Selecting $b_0^*:= b_0 + \Delta$, $b_j^*:=b_j$, $j>0$, we realize that $$\begin{aligned} \sum_{j=0}^{\beta^*} b_j^* (x_i-x)^j & =& \sum_{j=0}^{\beta^*} b_j (x_i-x)^j + \Delta\geq Y_i\nonumber \\ \eqntext{\forall i=1,\ldots,n\mbox{ with }{\vert}x-x_i{\vert}\leq h_k,}\end{aligned}$$ so that by the definition of the $\hat{b}_j$, $j=0,\ldots,\beta ^*$, we have $$\begin{aligned} \label{eqlemcon2} \sum_{{\vert}x_i-x{\vert}\leq h_k} \sum _{j=0}^{\beta^*} \hat{b}_j (x_i-x)^j & \leq&\sum_{{\vert}x_i-x{\vert}\leq h_k} \sum_{j=0}^{\beta^*} b_j^* (x_i-x)^j \nonumber\\[-8pt]\\[-8pt]\nonumber &=& \sum_{{\vert}x_i-x{\vert}\leq h_k} \Biggl\{ \sum_{j=0}^{\beta ^*} b_j (x_i-x)^j + \Delta\Biggr\}.\end{aligned}$$ We define the polynomial $$Q(y):= \sum_{j=0}^{\beta^*} ( \hat{b}_j-b_j) (y-x)^j - \Delta. $$ Then inequality (\[eqlemcon2\]) implies that $$\label{eqIntBed} \inf_{n,k,f} \int Q(y) \,d\lambda_n(y) \leq0,$$ where $\lambda_n$ denotes the uniform probability measure on the discrete set $\{x_i\dvtx {\vert}x_i-x{\vert}\leq h_k\}$ inside the interval $[x-h_k,x+h_k] \cap[0,1]$. 
We introduce the sets $Q^\pm$ of all $y\in [x-h_k,x+h_k] \cap[0,1]$ such that $Q(y)$ is nonnegative or negative, respectively. Our first task is to show that $$\label{eqlemcon3} \inf_{n,k,f} \lambda_n\bigl(Q^-\bigr) > 0\quad\mbox{or}\quad Q=0\qquad\mbox{identically}.$$ In the latter case Theorem \[teocon\] is trivially true; hence we focus on the case where $Q \neq0$. As $Q^-$ is the complement of $Q^+$ with respect to $[x-h_k,x+h_k] \cap[0,1]$, we have $\lambda _n(Q^+)\geq1/2$ or $\lambda_n(Q^-)\geq1/2$. Clearly, we have (\[eqlemcon3\]) in the second case, so let us study the situation where $\lambda_n(Q^+)\geq1/2$. As $Q$ is a polynomial with degree $\leq\beta^*$ the set $Q^+$ equals the union of at most $\beta^*+1$ disjoint sub-intervals of $[x-h_k,x+h_k] \cap[0,1]$. The number of all design points in $[x-h_k,x+h_k]$ is denoted by $m_k$. Hence, there exists at least one interval $I_0^+ \subseteq Q^+$ such that $\lambda_n(I_0^+) \geq 1/(2\beta^*+2)$. At least $\lceil m_k / (2\beta^*+2) \rceil$ of the $x_i$ lie in $I_0^+$ so that, due to the equidistant location of the design points, the length of $I_0^+$ is larger or equal to $$\bigl\{\bigl\lceil m_k / \bigl(2\beta^*+2\bigr) \bigr\rceil- 1\bigr \}/n \geq\bigl\{\bigl\lceil\lfloor n h_k \rfloor/ \bigl(2\beta^*+2 \bigr) \bigr\rceil- 1\bigr\}/n \geq c_1\bigl(\beta^*\bigr) \cdot h_k, $$ for $n$ sufficiently large and some uniform constant $c_1(\beta^*)>0$ which does not depend on $n$ or $k$, but only on $\beta^*$. The polynomial $Q$ takes only nonnegative values on the interval $I_0^+$. By Lemma \[lempoly\] below there exists some interval $I_1^+ \subseteq I_0^+$ with the length $c_2(\beta^*) h_k$ such that $$\inf_{y\in I_1^+} \bigl{\vert}Q(y)\bigr{\vert}\geq c_3\bigl(\beta^*\bigr) \cdot\sup_{{\vert}z-x{\vert}\leq h_k} \bigl {\vert}Q(z)\bigr{\vert}, $$ where the constants $c_2(\beta^*), c_3(\beta^*)>0$ only depend on $\beta^*$. 
It follows from there that $$\begin{aligned} \int_{Q^+} Q(y) \,d\lambda_n(y) &\geq&\int _{I_1^+} Q(y) \,d\lambda_n(y) \\ &\geq& \lambda_n\bigl(I_1^+\bigr)\cdot\inf_{y\in I_1^+} \bigl{\vert}Q(y)\bigr{\vert}\\ &\geq&\lambda_n\bigl(I_1^+ \bigr) c_3\bigl(\beta^*\bigr) \cdot\sup_{{\vert}z-x{\vert}\leq h_k} \bigl{\vert}Q(z)\bigr{\vert}.\end{aligned}$$ On the other hand we learn from (\[eqIntBed\]) that $$\int_{Q^+} Q(y) \,d\lambda_n(y) \leq\int _{Q^-} \bigl{\vert}Q(y)\bigr{\vert}\,d\lambda_n(y) \leq\lambda_n\bigl(Q^-\bigr)\cdot\sup_{{\vert}z-x{\vert}\leq h_k} \bigl {\vert}Q(z)\bigr{\vert}, $$ so that $$\begin{aligned} \lambda_n\bigl(Q^-\bigr) &\geq&\lambda_n \bigl(I_1^+\bigr) c_3\bigl(\beta^*\bigr) \geq c_3\bigl(\beta^*\bigr) \cdot\bigl(c_2\bigl(\beta^* \bigr) h_k n - 1\bigr) / m_k \\ &\geq& c_3\bigl( \beta^*\bigr) \cdot\bigl(c_2\bigl(\beta^*\bigr) - n^{-\mathfrak{h}_0} \bigr) / \bigl(2 + n^{-\mathfrak{h}_0}\bigr),\end{aligned}$$ unless $Q=0$ identically. Thus (\[eqlemcon3\]) has been shown. Using the arguments as above, we can now find some interval $I_0^- \subseteq Q^-$ whose length is bounded from below by a constant (only depending on $\beta^*$) times $h_k$. By Lemma \[lempoly\] there exists an interval $I_1^- \subseteq I_0^-$, whose length is also bounded from below by a constant (only depending on $\beta^*$) times $h_k$ and on which ${\vert}Q{\vert}$ is bounded from below by a uniform multiple of $$\sup_{{\vert}z-x{\vert}\leq h_k} \bigl{\vert}Q(z)\bigr{\vert}\geq \bigl{\vert}Q(x)\bigr{\vert}\geq{\vert}\hat{b}_0 - b_0{\vert}- \Delta. 
$$ This implies that $$\label{eqlemcon4} \inf_{y \in I_1^-} \bigl(-Q(y)\bigr) \geq c_4\bigl(\beta^*\bigr) \bigl({\vert}\hat{b}_0 - b_0{\vert}- \Delta\bigr).$$ On the other hand, for all $x_i \in I_1^-$ we have $$\begin{aligned} \label{eqlemcon5} Q(x_i) &=& \sum_{j=0}^{\beta^*} \hat{b}_j (x_i-x)^j + \Delta_i - f(x_i) - \Delta\geq Y_i - f(x_i) - 2\Delta.\end{aligned}$$ Combining the inequalities in (\[eqlemcon4\]) and (\[eqlemcon5\]), we conclude that $$\bigl{\vert}\tilde{f}_k(x) - f(x)\bigr{\vert}={\vert}\hat{b}_0 - b_0{\vert}\leq- c^*\bigl(\beta^*\bigr) \max\bigl\{\varepsilon_i\dvtx x_i\in I_1^- \bigr\} + c^*\bigl(\beta^*\bigr) \Delta$$ for some positive constant $c^*(\beta^*)$. Choosing $J(\beta^*)$ sufficiently large (regardless of $k$, $n$ and $f$), there exists some $l=1,\ldots,2J(\beta^*)$ such that $x+h_k {\mathcal I}_l \subseteq I_1^-$, and hence $$\bigl{\vert}\tilde{f}_k(x) - f(x)\bigr{\vert}\leq c^*\bigl(\beta^* \bigr) \Delta- c^*\bigl(\beta^*\bigr)\cdot Z_l(h_k,x),$$ which completes the proof. \[lempoly\] Let $Q$ be any polynomial of degree $\leq\beta^*$ and let $I\subseteq[x-h_k,x+h_k]$ be an interval of length $\geq c_5(\beta^*) h_k$ for some constant $c_5(\beta^*)>0$. Then there exist some finite constants $c_6(\beta^*),c_7(\beta^*)>0$ which only depend on $\beta^*$ and some interval $I^* \subseteq I$ of length $\geq c_6(\beta^*) h_k$ such that $$\inf_{y\in I^*} \bigl{\vert}Q(y)\bigr{\vert}\geq c_7\bigl(\beta^*\bigr) \cdot\sup_{{\vert}z-x{\vert}\leq h_k} \bigl {\vert}Q(z)\bigr{\vert}.$$ If $Q$ is a constant function, the assertion is trivially satisfied with $I^*=I$ and $c_7(\beta^*)=1$. Otherwise, by the fundamental theorem of algebra, $Q$ can be represented by $$Q(y) = \alpha_Q \prod_{j=1}^{\beta'} (y-y_j), $$ where $1\leq\beta'\leq\beta^*$ and the $y_j$ denote the complex-valued roots of $Q$. 
By the pigeonhole principle there exists some square $I_1^+ \times[-c_5(\beta^*) h_k/(2\beta^*+2),\break c_5(\beta^*) h_k/(2\beta^*+2)]$ in the complex plane which does not contain any $y_j$, where $I_1^+$ has length $c_5(\beta^*) h_k/(\beta^*+1)$. Now we shrink that square by the factor $1/2$, keeping its center fixed, leading to the square $I_2^+ \times[-c_5(\beta^*) h_k/(4\beta^*+4),c_5(\beta^*) h_k/(4\beta^*+4)]$. Thus, for any $y$ in this shrunken square, the distance between $y$ and any $y_j$ is bounded from below by $c_5(\beta^*) h_k/(4\beta^*+4)$ and by ${\vert}y_j-x{\vert}- h_k$. If the latter bound dominates, we have ${\vert}y_j - x{\vert}\geq\{1 + c_5(\beta^*) /(4\beta^*+4)\} \cdot h_k$. Then the distance between any $z \in[x-h_k,x+h_k]$ and $y_j$ has the upper bound $${\vert}y_j-x{\vert}+ h_k \leq{\vert}y_j-y {\vert}+ 2h_k \leq\bigl\{1 + \bigl(8\beta^*+8\bigr)/c_5 \bigl(\beta^*\bigr)\bigr\} \cdot{\vert}y_j-y{\vert}, $$ when applying the first bound. Otherwise, if the first bound dominates, we have $$\begin{aligned} {\vert}z-y_j{\vert}&\leq&{\vert}y_j-x{\vert}+ h_k \leq\bigl\{2 + c_5\bigl(\beta^*\bigr)/\bigl(4 \beta^*+4\bigr)\bigr\} \cdot h_k \\ &\leq&{\vert}y-y_j{\vert}\cdot\bigl(4\beta^*+4\bigr) \bigl\{2/c_5\bigl(\beta^*\bigr) + 1/ \bigl(4\beta^*+4\bigr)\bigr\}.\end{aligned}$$ In both cases ${\vert}z-y_j{\vert}$ is bounded from above by a uniform constant $c_6(\beta^*)$ times ${\vert}y-y_j{\vert}$. Then we learn from the root-decomposition of the polynomial $Q$ that $$\inf_{y\in I^*} \bigl{\vert}Q(y)\bigr{\vert}\geq c_7\bigl(\beta^*\bigr) \cdot\sup_{{\vert}z-x{\vert}\leq h_k} \bigl {\vert}Q(z)\bigr{\vert}, $$ for some deterministic constant $c_7(\beta^*)>0$, which only depends on $\beta^*$. [Proof of Proposition \[propestimator\]]{} Part (a) follows directly from the definition of Lepski’s method. 
For part (b) we obtain from (\[eqcond1\]), and repeated application of the triangle and Jensen’s inequality that $$\begin{aligned} \label{eqlem00} \quad&& \mathbb{E}_f \bigl[{\Vert}\hat{f} - \tilde{f}_{\hat{k}^*}{\Vert}^q {\mathbf{1}}\bigl(\hat{k} < \hat{k}^* \bigr) \bigr]^{1/q}\nonumber \\ &&\qquad \leq\mathbb{E}_f \bigl[ {\Vert}\tilde{f}_{\hat{k}} - f{\Vert}^q {\mathbf{1}}\bigl(\hat{k} < \hat{k}^* \bigr) \bigr]^{1/q} + \mathbb{E}_f \bigl[ {\Vert}\tilde{f}_{\hat{k}^*} - f{\Vert}^q {\mathbf{1}}\bigl(\hat{k} < \hat{k}^* \bigr) \bigr]^{1/q}\nonumber \\ &&\qquad \leq 2^{(q-2)/q} \nonumber\\[-8pt]\\[-8pt]\nonumber &&\quad\qquad{}\times \bigl( \mathbb{E}_f \bigl[ \bigl({R}_{\hat{k}}^q + B_{\hat{k}}^q\bigr) {\mathbf{1}}\bigl(\hat{k} < \hat{k}^* \bigr) \bigr]^{1/q} + \mathbb{E}_f \bigl[\bigl({R}_{\hat{k}^*}^q + B_{\hat{k}^*}^q\bigr) {\mathbf{1}}\bigl(\hat{k} < \hat{k}^* \bigr) \bigr]^{1/q} \bigr) \\ &&\qquad \leq 2^{(2q-1)/{q}} \bigl(\mathbb{E}_f \bigl[ B_{\hat{k}^*}^q \bigr]^{1/q} + \mathbb{E}_f \bigl[{R}_{\hat{k}}^q {\mathbf{1}}\bigl(\hat{k} < \hat{k}^* \bigr) \bigr]^{1/q} \bigr)\nonumber \\ &&\qquad \leq 2^{(2q-1)/q} \Biggl( \mathbb{E}_f \bigl[ \hat{{\mathfrak{z}}}_{\hat{k}^*}^q\bigr]^{1/q} + \sum _{k=0}^{K-1} \mathbb{E}_f \bigl[{R}_k^q {\mathbf{1}}\bigl(\hat{k} = k, k < \hat{k}^* \bigr) \bigr]^{1/q} \Biggr),\nonumber\end{aligned}$$ where we also used that $R_k$ decreases in $k$ and $B_k$ increases in $k$. 
Note that $$\begin{aligned} {\mathbf{1}}\bigl(\hat{k} = k, k < \hat{k}^* \bigr) & \leq&{\mathbf{1}}\bigl (\exists l \leq k\dvtx {\Vert}\tilde{f}_{k+1} - \tilde{f}_l{\Vert}> \hat{{\mathfrak{z}}}_l^T + \hat{{\mathfrak{z}}}_{k+1}^T \bigr)\cdot{\mathbf{1}}\bigl(k < \hat{k}^* \bigr) \\ & \leq&{\mathbf{1}}\bigl(\exists l \leq k\dvtx {\Vert}\tilde{f}_{k+1} - f{\Vert}+ {\Vert}f - \tilde{f}_l{\Vert}> \hat{{\mathfrak{z}}}_l^T + \hat{{\mathfrak{z}}}_{k+1}^T \bigr) \cdot{\mathbf{1}}\bigl(k < \hat{k}^* \bigr) \\ \nonumber & \leq&{\mathbf{1}}\bigl(\exists l \leq k\dvtx {R}_l + B_{k+1} > \hat{{\mathfrak{z}}}_l^T \bigr) \cdot{\mathbf{1}}\bigl(B_{k+1} \leq\hat{{\mathfrak{z}}}_{k+1}^T/2 \bigr) \\ & \leq&{\mathbf{1}}\bigl(\exists l \leq k\dvtx {R}_l > \hat{{\mathfrak{z}}}_l^T/2 \bigr).\end{aligned}$$ Inserting this inequality into (\[eqlem00\]) completes the proof. In the sequel, the following three lemmas will be useful. The proofs are given in the supplementary material [@jirmeireisssuppl]. \[lemquantcoomp\] If $y,t \to\infty$ and $$\begin{aligned} y &=& {\mathfrak{c}}(\log t)^{{\mathfrak{b}}} t^{{\mathfrak{a}}} \bigl(1 + \mbox{\scriptsize$ \mathcal{O}$}(1) \bigr), \qquad{\mathfrak{c}}, {\mathfrak{a}}> 0, {\mathfrak{b}}\in\mathbb{R},\end{aligned}$$ then $$\begin{aligned} t &=& \bigl({\mathfrak{c}}^{-1} \bigl(\log y^{1/{\mathfrak{a}}}\bigr)^{-{\mathfrak{b}}} y \bigr)^{1/{\mathfrak{a}}} + {\mathcal{O}}(1).\end{aligned}$$ In particular, if we have $v = \mathcal{U}_x(y)$ with $v \to0$, then $$\begin{aligned} F_x (v ) &=& 1 - {\mathfrak{c}}_x^{-{\mathfrak{a}}_x} \bigl(\log {\vert}v{\vert}^{-1/{\mathfrak{a}}_x} \bigr)^{-{\mathfrak{b}}_x {\mathfrak{a}}_x}{\vert}v{\vert}^{{\mathfrak{a}}_x} \bigl(1 + \mbox{\scriptsize$\mathcal{O}$}(1) \bigr).\end{aligned}$$ \[lemcomputeprodprob\] For $1 \leq j_0,j_1 \leq n$, let $\mathcal{J} = \{j_0,\ldots,j_1 \}$ such that ${\vert}j_0 - j_1{\vert}/n = {\mathcal{O}}(n^{-\rho_0} )$ for some $0 < \rho_0 < 1$. 
If $u \to0$, $u \leq-n^{-\rho_1}$ for some $\rho_1 > 0$, then $$\begin{aligned} \prod_{j \in\mathcal{J}} P \bigl(\varepsilon_j \leq A_{x_{j_0}}\bigl(-u^{-1}\bigr) \bigr) & \leq& e^{\# \mathcal{J} c_3^- u},\end{aligned}$$ where $c_3^- < 1$ may be chosen arbitrarily close to one. \[lempowern\] Let $(q_n)_n$ be a real-valued sequence which satisfies $q_n \in [1,\log n]$ for all integer $n$, and denote with $F(\cdot)$ the c.d.f. of $\varepsilon$. Then we have $$\mathbb{E}\bigl{\vert}\max\{\varepsilon_1,\ldots, \varepsilon_n\}\bigr{\vert}^{q_n} \leq\bigl(1 + \mbox{ \scriptsize$\mathcal{O}$}(1) \bigr)\int_0^{n^{1/2}} \bigl((-\mathcal{U})^{q_n} (n/y ) \bigr)^{(1)} \exp(-y)\,dy.$$ If $\mathcal{U}(\cdot)$ is not differentiable, replace $\mathcal{U}(\cdot)$ with $c_2^+ A(\cdot)$ in the above inequality, where $c_2^+ > 1$ can be chosen arbitrarily close to one. If $q_n = q$ is finite and independent of $n$, we obtain that $$\mathbb{E}\bigl{\vert}\max\{\varepsilon_1,\ldots, \varepsilon_n\}\bigr{\vert}^{q} = {\mathcal{O}}\bigl((\log n)^{q{\mathfrak{b}}_F} n^{-q/{\mathfrak{a}}_F} \bigr). $$ For arbitrary $q_n \in[1,\log n]$ we have $$\begin{aligned} {\mathcal{O}}\bigl(n^{-c_2^+{\mathfrak{a}}_F/q_n} \bigr) &\leq&\int_0^{\infty} F\bigl(-x^{1/{q_n}}\bigr)^n \,dx \leq{\mathcal{O}}\bigl(n^{-c_2^-{\mathfrak{a}}_F/q_n} \bigr),\end{aligned}$$ where $0 < c_2^- < 1 < c_2^+$ can be chosen arbitrarily close to one. [Proof of Theorem \[Tpointwise\]]{} In the course of the proof we will frequently apply Proposition \[propestimateunivquantile\]. We may do so since condition $\mathfrak {h}_0 < \beta_{{x}} {\mathfrak{a}}_{{x}} / (\beta_{{x}} {\mathfrak{a}}_{{x}} + 1)$ implies (\[eqparamrelation1\]). The general strategy is the following. 
By the triangle inequality and Jensen’s inequality, we have $$\quad\mathbb{E}_f \bigl[ \bigl(\hat{f}({x}) - f({x}) \bigr)^2 \bigr] \leq2 \mathbb{E}_f \bigl[ \bigl( \hat{f}({x}) - \tilde{f}_{\hat{k}^*} \bigr)^2 \bigr] + 2 \mathbb{E}_f \bigl[ \bigl(\tilde{f}_{\hat{k}^*}- f({x}) \bigr)^2 \bigr],$$ and we will treat both quantities separately. In order to deal with the first, Proposition \[propestimator\] implies that it suffices to consider $$\begin{aligned} && \sup_{f \in H_{{\mathcal{N}}({x})}(\beta,L)} \mathbb{E}_f \bigl[\bigl(\hat{{\mathfrak{z}}}_{\hat{k}^*}^T\bigr)^2 \bigr]^{1/2} + \sup_{f \in H_{{\mathcal{N}}({x})}(\beta,L)} \sum_{k=0}^{K-1} \mathbb{E}_f \bigl[{R}_k^{2} {\mathbf{1}}\bigl( \exists l \leq k\dvtx {R}_l > \hat{{\mathfrak{z}}}_l^T/2 \bigr) \bigr]^{1/2} \\ &&\qquad =:I + \sum_{k=0}^{K-1} \mathit{II}_k.\end{aligned}$$ To treat $I$, we require the following simple lemma; the proof is given in the supplementary material [@jirmeireisssuppl]. \[lemdealwithz\] Let $q \geq1$. Under the assumptions of Theorem \[Tpointwise\], we have uniformly over $f \in H_{{\mathcal{N}}({x})}(\beta,L)$ $$\begin{aligned} \mathbb{E}_f \bigl[\bigl(\hat{{\mathfrak{z}}}_{\hat{k}^*}^T \bigr)^q \bigr] & \leq&\bigl(c_1^+\bigr)^q \mathbb{E}_f \bigl[\bigl({\mathfrak{z}}_{\hat{k}^*}^T \bigr)^q \bigr] + {\mathcal{O}}\bigl(n^{-q/{\mathfrak{a}}_{{x}}} \bigr),\end{aligned}$$ where $c_1^+>1$. Applying the above result with $q = 2$ we obtain $$I^2 \leq\bigl(c_1^+\bigr)^2 \mathbb{E}_f \bigl[\bigl({\mathfrak{z}}_{\hat{k}^*}^T \bigr)^2 \bigr] + {\mathcal{O}}\bigl(n^{-2/{\mathfrak{a}}_{{x}}} \bigr),$$ and it remains to deal with $\mathbb{E}_f [({\mathfrak{z}}_{\hat{k}^*}^T)^2 ]$. We define $$k^\pm:= \inf\bigl\{k=0,\ldots,K-1\dvtx B_{k+1} > c_2^\pm{\mathfrak{z}}_{k+1}^T / 2\bigr\} \wedge K. $$ On the event ${\mathcal{A}}_n=\{c_2^- {\mathfrak{z}}_k \leq\hat{{\mathfrak{z}}}_k \leq c_2^+ {\mathfrak{z}}_k\mbox{ for all } k=0,\ldots,K-1\}$ we have $k^- \leq \hat{k}^* \leq k^+$. 
From Proposition \[propestimateunivquantile\] we infer $P(\mathcal{A}_n^c)={\mathcal{O}}(n^{-v} \log n )$ due to. Since ${\mathfrak{z}}_k$ decreases monotonically in $k$, we deduce $\mathbb{E} [({\mathfrak{z}}_{\hat{k}^*}^T)^2 ] \leq{\mathfrak{z}}_{k^-}^2 + {\mathcal{O}}(n^{-2/{\mathfrak{a}}_{{x}}} )$. Note that the deterministic sequences $(h_k^{\pm})$ satisfy (see Lemma \[lemquantcoomp\] for details) $$\begin{aligned} \label{eqthmpointwise15} {\mathfrak{z}}_{k^-} &\thicksim&- \bigl(h_k^{\pm} \bigr)^{\beta_{{x}}} \quad\mbox{and} \nonumber\\[-8pt]\\[-8pt]\nonumber h_{k^{\pm}} &\thicksim& (n /\log n)^{-1/({\mathfrak{a}}_{{x}} \beta_{{x}}+1)} (\log n)^{({\mathfrak{a}}_{{x}} {\mathfrak{b}}_{{x}})/( {\mathfrak{a}}_{{x}} \beta_{{x}} + 1)},\end{aligned}$$ under our assumption $\mathfrak{h}_0 < \beta_0 {\mathfrak{a}}_0 / (\beta_0 {\mathfrak{a}}_0 + 1)$. We obtain $$\begin{aligned} \label{eqthmpointwise2} && \sup_{f\in H_{{\mathcal{N}}({x})}(\beta,L)} \mathbb{E}_f \bigl[ \bigl(\hat{{\mathfrak{z}}}{}^T_{\hat{k}^*}\bigr)^2\bigr] \nonumber\\[-8pt]\\[-8pt]\nonumber &&\qquad = {\mathcal{O}}\bigl((n /\log n)^{(-2\beta_{{x}})/({\mathfrak{a}}_{{x}} \beta_{{x}}+1)} (\log n)^{(2{\mathfrak{a}}_{{x}} {\mathfrak{b}}_{{x}}\beta_{{x}})/({\mathfrak{a}}_{{x}} \beta_{{x}}+1)} \bigr),\end{aligned}$$ and it remains to deal with the second part. Let $\mathcal{B}_k = \{\exists l \leq k\dvtx {R}_l > \hat{{\mathfrak{z}}}_l^T/2 \}$. Then for $\delta_0 = n^{-\mathfrak{h}_0/{\mathfrak{a}}_{{x}}}$, we have $$\begin{aligned} \mathit{II}_k^2 &=& \int_0^{\infty}P_f \bigl(R_k^2 {\mathbf{1}}(\mathcal{B}_k)\geq x \bigr) \,dx \leq\int_0^{\delta_0} P_f ( \mathcal{B}_k )\,dx + \int_{\delta_0}^{\infty}P \bigl(R_k^2 \geq x \bigr)\,dx \\ &\leq&\delta_0 \sum_{l = 0}^k P_f \bigl({R}_l > \hat{{\mathfrak{z}}}_l^T/2 \bigr) + \int_{\delta_0}^{\infty}P \bigl(R_k^2 \geq x \bigr)\,dx =: \mathit{III}_k + \mathit{IV}_k.\end{aligned}$$ We first deal with $\mathit{III}_k$. 
Recall that for $j=1,\ldots,2J(\beta^*)$ we have $$Z_j(h_k,{x}) = \max\{\varepsilon_i\dvtx x_i \in{x}+ h_k {\mathcal I}_j \}$$ and $${\mathcal I}_j:= \bigl[-1+(j-1)/J\bigl(\beta^* \bigr),-1+j/J\bigl(\beta^*\bigr)\bigr].$$ Put $\mathcal{J}_j(l) = \{0,1/n,2/n,\ldots,1 \}\cap \{{x}+ h_l {\mathcal I}_j \}$ for $j=1,\ldots,2J(\beta^*)$, and note that $\#\mathcal{J}_j(l) \geq n h_l/2 J(\beta^*)$. An application of Proposition \[propestimateunivquantile\] and Theorem \[teocon\] then yields that $$\begin{aligned} P_f \bigl({R}_l > \hat{{\mathfrak{z}}}_l^T/2 \bigr) &\leq&\sum_{j = 1}^{J(\beta^*)}P \bigl( \bigl{\vert}Z_j(h_l,{x})\bigr{\vert}> c_1^- {\mathfrak{z}}_l \bigr) + {\mathcal{O}}\bigl(n^{-2/{\mathfrak{a}}_{{x}}} \bigr) \nonumber\\[-8pt]\\[-8pt]\nonumber &\leq&\sum_{j = 1}^{2J(\beta ^*)} \prod _{i \in\mathcal{J}_j(l)} P \bigl(\varepsilon_i < -c_1^-{\mathfrak{z}}_l \bigr) + {\mathcal{O}}\bigl(n^{-2/{\mathfrak{a}}_{{x}}} \bigr),\end{aligned}$$ where $c_1^- <1$ may be chosen arbitrarily close to one. Arguing as in Lemma \[lemcomputeprodprob\] we obtain $$\label{eqthmpointwise3} \prod_{i \in\mathcal{J}_j} P \bigl( \varepsilon_i > -c_1^-{\mathfrak{z}}_l \bigr) = {\mathcal{O}}\bigl(n^{-2c_2^-/{\mathfrak{a}}_{{x}}} \bigr),$$ where $c_2^- <1$ may be chosen arbitrarily close to one. Hence we obtain that $$\label{eqthmpointwise4} \qquad \mathit{III}_k \leq\delta_0 \sum _{l = 0}^k \sum_{j = 1}^{2J(\beta^*)} {\mathcal{O}}\bigl(n^{-2c_2^-/{\mathfrak{a}}_{{x}}} \bigr) = {\mathcal{O}}\bigl(K \delta_0 n^{-2c_2^-/{\mathfrak{a}}_{{x}}} \bigr) = {\mathcal{O}}\bigl(n^{-2/{\mathfrak{a}}_{{x}}} \bigr),$$ since $K = {\mathcal{O}}(\log n )$. For dealing with $\mathit{IV}_k$, set $\eta_0 = \exp( n^{\mathfrak{h}_0/4} )$. Let $u_{{x}} = A_{{x}}^{-1}(\delta_0)$. Then Lemma \[lemquantcoomp\] implies that $u_x < -n^{-\mathfrak{h}_0/2}$ for large enough $n$. 
Then as in (\[eqthmpointwise3\]), it follows from Lemma \[lemcomputeprodprob\] that for sufficiently large $n$, $$\begin{aligned} \int_{\delta_0}^{\eta_k}P \bigl(R_k^2 \geq x \bigr)\,dx &=& \int_{\delta_0}^{\eta_k}P \bigl(R_k \leq-x^{1/2} \bigr)\,dx \\ &\leq&\eta_k \sum _{j = 1}^{2J(\beta^*)} \prod _{i \in\mathcal{J}_j(k)} P \bigl(\varepsilon_i < A_{{x}} \bigl(-n^{-\mathfrak{h}_0/2}\bigr) \bigr) \\ &\leq& \eta_k \sum_{j = 1}^{2J(\beta^*)} \exp\bigl( -\#\mathcal{J}_{j(k)} n^{-\mathfrak{h}_0/2} \bigr) \\ &\leq& \eta_k \sum_{j = 1}^{2J(\beta^*)} \exp \bigl( -n^{\mathfrak{h}_0/2} \bigr) = {\mathcal{O}}\bigl(\exp\bigl( -n^{\mathfrak{h}_0/4} \bigr) \bigr).\end{aligned}$$ Let $p> 2$. Since $\mathbb{E} [{\vert}R_k{\vert}^p ] = {\mathcal{O}}(1 )$ by Lemma \[lempowern\], it follows from the Markov inequality that $$\begin{aligned} \int_{\eta_k}^{\infty}P \bigl(R_k^2 \geq x \bigr)\,dx & \leq&\int_{\eta_k}^{\infty}x^{-p/2} \mathbb{E} \bigl[{\vert}R_k{\vert}^p \bigr] \,dx = \frac{2}{p - 2} \eta_k^{-p/2 + 1} {\mathcal{O}}(1 ).\end{aligned}$$ Combining the above and (\[eqthmpointwise4\]), it follows that $\mathit{IV}_k = {\mathcal{O}}(\exp( -n^{\mathfrak{h}_0/8} ) )$, which in turn yields $$\mathit{II}_k^2 = \mathit{III}_k + \mathit{IV}_k = {\mathcal{O}}\bigl(n^{-2/{\mathfrak{a}}_{{x}}} \bigr) + {\mathcal{O}}\bigl(\exp\bigl( -n^{\mathfrak {h}_0/8} \bigr) \bigr) = {\mathcal{O}}\bigl(n^{-2/{\mathfrak{a}}_{{x}}} \bigr).$$ We thus conclude $$\begin{aligned} \sum_{k=0}^{K-1} \mathit{II}_k &=& {\mathcal{O}}\bigl(K n^{-1/{\mathfrak{a}}_{{x}}} \bigr) = {\mathcal{O}}\bigl(\log n n^{-1/{\mathfrak{a}}_{{x}}} \bigr) \\ &=& \mbox{\scriptsize$\mathcal{O}$} \bigl((n \log n)^{- \beta_{{x}}/({\mathfrak{a}}_{{x}} \beta_{{x}}+1)} (\log n)^{({\mathfrak{a}}_{{x}} {\mathfrak{b}}_{{x}})/({{\mathfrak{a}}_{{x}} \beta_{{x}}+1})} \bigr).\end{aligned}$$ Piecing everything together and taking squares, we arrive at $$\begin{aligned} \label{eqthmpointwise5} && \sup_{f \in 
H_{{\mathcal{N}}({x})}(\beta,L)}\mathbb{E}_f \bigl[ \bigl(\hat{f}({x}) - \tilde{f}_{\hat{k}^*} \bigr)^2 \bigr] \nonumber\\[-8pt]\\[-8pt]\nonumber &&\qquad = {\mathcal{O}}\bigl((n \log n)^{- \beta_{{x}}/({{\mathfrak{a}}_{{x}} \beta_{{x}}+1})} (\log n)^{({\mathfrak{a}}_{{x}} {\mathfrak{b}}_{{x}})/({\mathfrak{a}}_{{x}} \beta _{{x}}+1)} \bigr).\end{aligned}$$ To complete the proof, it remains to deal with $\mathbb{E}_f [ (\tilde{f}_{\hat{k}^*} - f({x}) )^2 ]$. Let $p' > 2$. Then by (\[EqRk2\]) and the triangle, Jensen and Hölder inequalities, we have $$\begin{aligned} \mathbb{E}_f \bigl[ \bigl(\hat{f}({x}) - \tilde{f}_{\hat{k}^*} \bigr)^2 \bigr] &\leq& 2 \mathbb{E}_f \bigl[R_{\hat{k}^*}^2 + B_{\hat{k}^*}^2 \bigr] \nonumber\\[-8pt]\label{eqthmpointwise6} \\[-8pt]\nonumber & \leq&2 \mathbb{E}_f \bigl[R_{\hat{k}^*}^2{\mathbf{1}}( \mathcal{A}_n ) + B_{\hat{k}^*}^2{\mathbf{1}}( \mathcal{A}_n ) \bigr] \\ &&{}+ P \bigl(\mathcal{A}_n^{c} \bigr)^{(p'-2)/p'} \bigl(\mathbb{E} \bigl[R_{k^-}^{p'} \bigr]^{1/p'} + {\mathcal{O}}(L_{{x}} ) \bigr).\end{aligned}$$ Hence Proposition \[propestimateunivquantile\], Lemma \[lempowern\] and (\[eqthmpointwise15\]) imply that the above is of order $$\begin{aligned} \label{eqthmpointwise7} \mathbb{E}_f \bigl[R_{\hat{k}^*}^2 + B_{\hat{k}^*}^2 \bigr] &\leq&2 \mathbb{E}_f \bigl[R_{{k}^-}^2 \bigr] + B_{k^+}^2 + {\mathcal{O}}\bigl(n^{-2/{\mathfrak{a}}_{{x}}} \bigr)\nonumber \\ &=& {\mathcal{O}}\bigl(n_{k^-}^{-2/{\mathfrak{a}}_{{x}}} (\log n_{k^-})^{2 {\mathfrak{b}}_{{x}}} + {\mathfrak{z}}_{k^+}^2 + n^{-2/{\mathfrak{a}}_{{x}}} \bigr) \\ &=& {\mathcal{O}}\bigl((n \log n)^{(- 2\beta_{{x}})/({\mathfrak{a}}_{{x}} \beta_{{x}}+1)} (\log n)^{(2{\mathfrak{a}}_{{x}} {\mathfrak{b}}_{{x}})/({\mathfrak{a}}_{{x}} \beta_{{x}}+1)} \bigr).\nonumber\end{aligned}$$ The above bound is uniform over $H_{{\mathcal{N}}({x})}(\beta,L)$, and the proof is complete. 
[Proof of Theorem \[TL2upperbound\]]{} For the proof we require the following lemma, which provides a sub-polynomial upper bound on the probability that $R_k$ exceeds the threshold $\hat{{\mathfrak{z}}}_l^T/2$. \[LL2estimate\] Suppose $h_k \leq\exp(-c_H \log^\gamma n)$ for fixed constants $\gamma\in(0,1)$, $c_H>0$. Grant Assumption \[assmain\], and let $\mathfrak{m}, {\mathfrak{a}}_0, \beta_0, \mathfrak{h}_0$ satisfy (\[eqparamrelation1\]) in view of (\[eqlowbnd\]). Then $$\sup_{k = 0,\ldots,K-1} P \bigl(R_k > \hat{{\mathfrak{z}}}_l^T/2 \bigr) = {\mathcal{O}}\bigl (\exp\bigl(-c_H \log^{1+\gamma} n / 2q\bigr) \bigr),$$ as $n\to\infty$. Let $p> q$. Then Lemma \[lempowern\] gives $$\label{eqRP} \mathbb{E} \bigl[R_k^p \bigr]^{q/p} = {\mathcal{O}}\bigl((nh_k)^{-q /{\mathfrak{a}}_F} (\log n h_k)^{q {\mathfrak{b}}_F} \bigr).$$ An application of Hölder’s inequality, (\[eqRP\]) and Lemma \[LL2estimate\] yields that $$\begin{aligned} \label{eqest1} && \sup_{f\in H_{[0,1]}(\beta,L)} \sum_{k=0}^{K-1} \mathbb{E}_f \bigl[{R}_k^{q} {\mathbf{1}}\bigl( \exists l \leq k\dvtx {R}_l > \hat{{\mathfrak{z}}}_l^T/2 \bigr) \bigr]^{1/q}\nonumber \\ &&\qquad \leq\sum_{k=0}^{K} (k+1)^{(p-q)/p}1_{[0,\exp(-c_H \log^\gamma n)]}(h_k)\nonumber \\ &&\hspace*{45pt}{}\times {\mathcal{O}}\bigl(n^{-\mathfrak{h}_0 /{\mathfrak{a}}_F}\cdot\exp\bigl(-c_H/q (p-q) \log^{1+\gamma} n / [2 q p] \bigr) \bigr) \\ &&\quad\qquad{} + \sum_{k=0}^{K} 1_{(\exp(-c_H \log^\gamma n),\infty)}(h_k) \cdot(nh_k)^{-1/{\mathfrak{a}}_F}\nonumber \\ &&\qquad = {\mathcal{O}}\bigl(n^{- c_2^-/{\mathfrak{a}}_F} \exp \bigl([c_H/{\mathfrak{a}}_F] \log^\gamma n\bigr)\cdot \log n \bigr).\nonumber\end{aligned}$$ Choosing $c_H, \gamma> 0$ sufficiently small, the above is of order ${\mathcal{O}}(n^{-c_3^-/{\mathfrak{a}}_F} )$, where $c_3^- < 1$ can be chosen arbitrarily close to one. 
According to Proposition \[propestimator\] it remains to bound the expectation $\mathbb{E}_f [(\hat{{\mathfrak{z}}}{}^T_{\hat{k}^*})^q]$ uniformly over $f\in H_{[0,1]}(\beta,L)$. Applying Lemma \[lemdealwithz\], we obtain that uniformly over $f\in H_{[0,1]}(\beta,L)$ $$\begin{aligned} \mathbb{E}\bigl[\bigl(\hat{{\mathfrak{z}}}_{\hat{k}^*}^T \bigr)^q\bigr] & \leq&\bigl(T c_1^+\bigr)^q \mathbb{E}_f \bigl[({\mathfrak{z}}_{\hat{k}^*})^q\bigr] + {\mathcal{O}}\bigl(n^{-q/{\mathfrak{a}}_{F}} \bigr).\end{aligned}$$ To deal with $\mathbb{E}_f [({\mathfrak{z}}_{\hat{k}^*})^q]$, we introduce $$k^\pm:= \inf\bigl\{k=0,\ldots,K-1\dvtx B_{k+1} > c_2^\pm{\mathfrak{z}}_{k+1}^T / 2\bigr\} \wedge K. $$ On the event $\mathcal{A}_n=\{c_2^- {\mathfrak{z}}_k \leq\hat{{\mathfrak{z}}}_k \leq c_2^+ {\mathfrak{z}}_k\mbox{ for all } k=0,\ldots,K-1\}$ we have $k^- \leq \hat{k}^* \leq k^+$. From Proposition \[propestimateunivquantile\] we infer $P(\mathcal{A}_n^c)={\mathcal{O}}(n^{-q/{\mathfrak{a}}_F} )$. Since ${\mathfrak{z}}_k$ decreases monotonically in $k$, we find $\mathbb{E}[{\mathfrak{z}}_{\hat{k}^*}^q] \leq{\mathfrak{z}}_{k^-}^q + {\mathcal{O}}(n^{-q/{\mathfrak{a}}_F} )$. Note that the deterministic sequences $(h_{k^{\pm}})$ satisfy $$\begin{aligned} h_{k^{\pm}} &\thicksim& n^{-1/({\mathfrak{a}}_{F} \beta+1)} (\log n)^{({\mathfrak{a}}_{F} {\mathfrak{b}}_{F})/({\mathfrak{a}}_{F} \beta+1)} \quad\mbox{and} \nonumber\\[-8pt]\\[-8pt]\nonumber {\mathfrak{z}}_{k^-} &\thicksim& n^{-\beta/({\mathfrak{a}}_{F} \beta+1)} (\log n)^{(\beta{\mathfrak{a}}_{F} {\mathfrak{b}}_{F})/({\mathfrak{a}}_{F} \beta+1)},\end{aligned}$$ provided that $\mathfrak{h}_0 < \beta_0 {\mathfrak{a}}_0 / (\beta_0 {\mathfrak{a}}_0 + 1)$. For computational details, refer to Lemma \[lemquantcoomp\]. Moreover, condition $\mathfrak{h}_0 < \beta_0 {\mathfrak{a}}_0 / (\beta_0 {\mathfrak{a}}_0 + 1)$ also implies (\[eqparamrelation1\]). 
We obtain $$\label{eqest2} \quad\sup_{f\in H_{[0,1]}(\beta,L)} \mathbb{E}_f \bigl[ \bigl(\hat{{\mathfrak{z}}}{}^T_{\hat{k}^*}\bigr)^q \bigr] = {\mathcal{O}}\bigl(n^{(-q\beta)/({\mathfrak{a}}_{F} \beta+1)} (\log n)^{(q\beta{\mathfrak{a}}_{F} {\mathfrak{b}}_{F})/({\mathfrak{a}}_{F} \beta+1)} \bigr).$$ Combining this result with (\[eqest1\]), Proposition \[propestimator\] yields that $$\sup_{f\in H_{[0,1]}(\beta,L)} \mathbb{E}_f \bigl[ {\Vert}\hat{f} - \tilde{f}_{\hat{k}^*}{\Vert}_q^q \bigr] = {\mathcal{O}}\bigl(n^{(-q\beta)/({\mathfrak{a}}_{F} \beta+1)} (\log n)^{(q\beta{\mathfrak{a}}_{F} {\mathfrak{b}}_{F})/({\mathfrak{a}}_{F} \beta+1)} \bigr).$$ Arguing similarly as in (\[eqthmpointwise7\]), by (\[EqBk\]) and (\[EqRk\]) we deduce that $$\begin{aligned} \label{eqoracleestimator} \mathbb{E}_f \bigl[ {\Vert}\tilde{f}_{\hat{k}^*} - f{\Vert}_q^q \bigr] & \leq&2^{q} \mathbb{E}_f \bigl[B_{\hat{k}^*}^q\bigr] + 2^{q} \mathbb{E}_f \bigl[R_{\hat{k}^*}^q \bigr]\nonumber \\ &\leq&2^{q} B_{k^+}^q + {\mathcal{O}}\bigl(n^{-q/{\mathfrak{a}}_F} \bigr) + 2^{q} \mathbb{E}_f \bigl[R_{k^-}^q\bigr] \\ &=&{\mathcal{O}}\bigl(n^{(-q\beta)/({\mathfrak{a}}_{F} \beta+1)} (\log n)^{(q\beta{\mathfrak{a}}_{F} {\mathfrak{b}}_{F})/({\mathfrak{a}}_{F} \beta+1)} \bigr),\nonumber\end{aligned}$$ uniformly with respect to $f\in H_{[0,1]}(\beta,L)$, by conditioning on the event ${\mathcal{A}}_n$ and using (\[eqRP\]). The proof is complete. The proofs of Theorems \[Tlowerbound\], \[T421\] and \[teo-aexp2\] are given in the supplementary material [@jirmeireisssuppl]. Acknowledgments {#acknowledgments .unnumbered} =============== We would like to thank the anonymous referees for many constructive remarks that have led to a significant improvement both in the results and the presentation. We also thank Holger Drees and Keith Knight for extensive discussions and insightful comments on quantile estimation in inhomogeneous data. 
--- abstract: 'For any integer $d\geq 1$ we construct examples of finitely presented algebras with intermediate growth of type $[e^{n^{d/(d+1)}}]$. We produce these examples by computing the growth types of some finitely presented metabelian Lie algebras.' author: - Dilber Koçak bibliography: - 'metabelian.bib' title: On growth of metabelian Lie algebras --- [Introduction]{} Let $A$ be a (not necessarily associative) algebra over a field $k$ generated by a finite set $X$, and let $X^n$ denote the subspace of $A$ spanned by all monomials on $X$ of length at most $n$. The *growth function* of $A$ with respect to $X$ is defined by $$\gamma_{X,A}(n)=\dim_k(X^n)$$ This function depends on the choice of the generating set $X$. To remove this dependence the following equivalence relation is introduced: Let $f(n)$ and $g(n)$ be two monotone functions on $\mathbb{N}$. We write $f\precsim g$ if there exists a constant $C\in \mathbb{N}$ such that $f(n)\leq Cg(Cn)$ for all natural numbers $n$. If $f\precsim g$ and $g\precsim f$, we say $f$ and $g$ are equivalent and denote this by $f\sim g$. The equivalence class containing $f$ is denoted by $[f]$ and is called the *growth rate of $f$*. Set $[f]\leq [g]$ if and only if $f\precsim g$. The growth rate $[2^n]$ is called *exponential* and a growth rate strictly less than exponential is called *subexponential*. A subexponential growth rate which is greater than $[n^d]$ for every $d$ is called *intermediate*. The growth rate is a widely studied invariant for finitely generated algebraic structures such as groups, semigroups and algebras. The notion of growth for groups was introduced by Schwarz [@schwarz55] and independently by Milnor [@milnor68]. The study of growth of algebras dates back to the papers by Gelfand and Kirillov, [@GK661; @GK66]. A theorem of Milnor and Wolf [@milnor68; @Wolf68] states that a finitely generated solvable group has polynomial growth if it is virtually nilpotent, and exponential growth otherwise. 
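The growth-function definitions above can be illustrated with two classical examples: the commutative polynomial algebra $k[x_1,\dots,x_d]$, whose monomials of total degree at most $n$ number $\binom{n+d}{d}$ (polynomial growth), and the free associative algebra on $m$ generators, whose words of length at most $n$ number $\sum_{k=0}^{n} m^k$ (exponential growth). The following sketch is my own illustration, not part of the paper; the function names are invented, and the convention of counting the empty monomial is an assumption.

```python
from math import comb

def growth_commutative(n, d):
    # gamma_{X,A}(n) for A = k[x_1,...,x_d]: number of monomials of
    # total degree <= n in d commuting variables (constant included)
    return comb(n + d, d)

def growth_free(n, m):
    # gamma for the free associative algebra on m generators:
    # words of length <= n over an m-letter alphabet (empty word included)
    return sum(m**k for k in range(n + 1))

print([growth_commutative(n, 2) for n in range(5)])  # [1, 3, 6, 10, 15]
print([growth_free(n, 2) for n in range(5)])         # [1, 3, 7, 15, 31]
```

The first sequence is bounded by a quadratic in $n$ while the second doubles at each step, matching the dichotomy between $[n^d]$ and $[2^n]$ described above.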
The description of groups of polynomial growth was obtained by Gromov in his celebrated work [@gromov81]. He proved that every finitely generated group of polynomial growth is virtually nilpotent. The situation for algebras is different from the case of groups. M. Smith [@smith76] showed that there exists an infinite dimensional solvable Lie algebra $L$ whose universal enveloping algebra $U(L)$ has intermediate growth. Later, in [@lichtman84], Lichtman proved that the universal envelope of an arbitrary finitely generated infinite dimensional virtually solvable Lie algebra has intermediate growth. In [@LiUf95], Lichtman and Ufnarovski showed that the growth rate of a finitely generated free solvable Lie algebra of derived length $k>3$ and its universal envelope are almost exponential (this means that it is less than exponential growth $[2^n]$ but greater than the growth $[2^{n^{\alpha}}]$ for any $\alpha< 1$). The first examples of finitely generated groups of intermediate growth were constructed by Grigorchuk [@Gri83; @Gri84]. It is still an open problem whether there exists finitely presented groups of intermediate growth. In contrast, there are examples of finitely presented algebras of intermediate growth. The first such example is the universal enveloping algebra of the Lie algebra $W$ with basis $\{w_{-1},w_0,w_1,w_2,\dots\}$ and brackets defined by $[w_i,w_j]=(i-j)w_{i+j}$. $W$ is a subalgebra of the generalized Witt algebra $W_{\mathbb{Z}}$ (see [@as74 p.206] for definitions). It was proven in [@stewart75] that $W$ has a finite presentation with two generators and six relations. It is also a graded algebra with generators of degree $-1$ and $2$. Since $W$ has linear growth, its universal enveloping algebra has growth $[e^{\sqrt{n}}]$ and it is finitely presented. Known examples of finitely presented algebras of intermediate growth have growth of type $[e^{\sqrt{n}}]$. 
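The bracket $[w_i,w_j]=(i-j)w_{i+j}$ quoted above does define a Lie algebra: the cyclic Jacobi sum acts on $w_{i+j+k}$ with a coefficient that vanishes identically. The brute-force sketch below is my own sanity check of the stated structure constants, not code from any of the cited papers.

```python
# Structure constants of the algebra W above: [w_i, w_j] = (i - j) w_{i+j},
# indices i, j >= -1.  The coefficient of w_{i+j+k} in the cyclic sum
# [[w_i,w_j],w_k] + [[w_j,w_k],w_i] + [[w_k,w_i],w_j] is:
def jacobi_coeff(i, j, k):
    return ((i - j) * (i + j - k)
            + (j - k) * (j + k - i)
            + (k - i) * (k + i - j))

rng = range(-1, 10)
assert all(jacobi_coeff(i, j, k) == 0 for i in rng for j in rng for k in rng)
print("Jacobi identity holds for [w_i, w_j] = (i - j) w_{i+j}")
```

Expanding the three products by hand shows every monomial in $i,j,k$ cancels, so the identity holds for all integer indices, not just the sampled range.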
In this note we present examples of finitely presented associative algebras of intermediate growth having different growth types. Specifically, our main result is the following: For any positive integer $d$, there exists a finitely presented associative algebra with intermediate growth of type $[e^{n^{d/(d+1)}}]$. The steps in proving the theorem are as follows: In [@bau77], Baumslag established the fact that every finitely generated metabelian Lie algebra can be embedded in a finitely presented metabelian Lie algebra. Using ideas of [@bau77] (and clarifying some arguments thereof), we embed the free metabelian Lie algebra $M$ (with $d$ generators) into a finitely presented metabelian Lie algebra $W^+$. Next we show that $W^+$ has polynomial growth of type $[n^d]$. Finally, considering the universal enveloping algebra of $W^+$, we obtain a finitely presented associative algebra of growth type $[e^{n^{d/(d+1)}}]$. Growth of a finitely generated free metabelian Lie algebra ========================================================== Let $k$ be a field and $L$ a Lie algebra over $k$ generated by a finite set $X$. Elements of $X$ are monomials of length $1$. Inductively, a monomial of length $n$ is an element of the form $[u,v]$, where $u$ is a monomial of length $i<n$ and $v$ is a monomial of length $n-i$. Every element of $L$ is a linear combination of monomials. If $a_1,\dots,a_n \in X$ then $[a_1,\dots ,a_n]$ is defined inductively by $$[a_1,\dots,a_n]=[[a_1,\dots,a_{n-1}],a_n]\;\text{for}\; n>2$$ Monomials of the form $[a_1,\dots,a_n]$ are called *left-normed*. We will frequently use the following simple lemmas in the remainder of this note. \[l0\] Let $x,\;y,\;z$ be elements of a Lie algebra and suppose $[x,y]=0$. Then the following relations hold: $$[x,[y,z]]=[y,[x,z]]$$ $$[x,z,y]=[y,z,x].$$ A direct consequence of the Jacobi identity. \[l00\] Any element of a Lie algebra can be written as a linear combination of left-normed monomials. By induction on the length of monomials. 
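Lemma \[l0\] can be double-checked computationally. In the universal envelope the bracket is $[a,b]=ab-ba$, and both claimed identities differ from identities valid in every Lie algebra exactly by the term $[[x,y],z]$, which vanishes under the hypothesis $[x,y]=0$. The sketch below is my own illustration (the dict-based noncommutative polynomial arithmetic is an assumption of the example, not notation from the paper).

```python
# tiny exact noncommutative polynomial arithmetic: a polynomial is a dict
# mapping words (tuples of generator names) to integer coefficients
def add(p, q):
    r = dict(p)
    for w, c in q.items():
        r[w] = r.get(w, 0) + c
    return {w: c for w, c in r.items() if c}

def neg(p):
    return {w: -c for w, c in p.items()}

def mul(p, q):
    r = {}
    for w1, c1 in p.items():
        for w2, c2 in q.items():
            r[w1 + w2] = r.get(w1 + w2, 0) + c1 * c2
    return {w: c for w, c in r.items() if c}

def br(p, q):  # commutator [p, q] = pq - qp in the associative envelope
    return add(mul(p, q), neg(mul(q, p)))

x, y, z = {('x',): 1}, {('y',): 1}, {('z',): 1}

# both identities of the lemma differ from zero exactly by [[x,y],z],
# which vanishes under the hypothesis [x,y] = 0
lhs1 = add(br(x, br(y, z)), neg(br(y, br(x, z))))   # [x,[y,z]] - [y,[x,z]]
lhs2 = add(br(br(x, z), y), neg(br(br(y, z), x)))   # [x,z,y] - [y,z,x]
obstruction = br(br(x, y), z)                       # [[x,y],z]
assert add(lhs1, neg(obstruction)) == {}
assert add(lhs2, neg(obstruction)) == {}
print("both identities verified modulo [[x,y],z]")
```

Since the two differences equal $[[x,y],z]$ as free Lie expressions, the lemma follows for every Lie algebra once $[x,y]=0$ is imposed.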
A Lie algebra $L$ is called *solvable of derived length $n$* if $$L^{(n)}=0\;\text{and}\;L^{(n-1)}\neq 0$$ where $L^{(m+1)}=[L^{(m)},L^{(m)}]$ and $L^{(0)}=L$. We also denote $L^{(1)}=[L,L]$ by $L^{\prime}$ and call it the *commutator* of $L$. A solvable Lie algebra of derived length $2$ is called *metabelian*. Let $X=\{x_1,\dots,x_d\}$ be a finite set and $L_X$ be the free Lie algebra generated by $X$. $M=L_X/L_{X}^{(2)}$ is the free metabelian Lie algebra generated by $X$. The following proposition can be found in [@bokut63]. \[p1\] Let $M$ be a free metabelian Lie algebra over a field $k$ with the generating set $X=\{x_1,\dots,x_d\}$ and let $x_1<\dots <x_d$ be an order on $X$. Let $\mathcal{B}$ be the set of left-normed monomials of the form $$[a_0,a_1,\dots,a_{n-1}]$$ where $a_0>a_1\leq a_2\leq\dots \leq a_{n-1}$ for $a_i\in X$ and $n\geq 1$. Then $\mathcal{B}$ forms a basis for $M$. Let $M_n$ denote the subspace of $M$ spanned by all left-normed monomials of length $n$ in $M$ ($M_0=k$ and $M_1$ is the subspace spanned by $X$). Since the relations $[x,y]=-[y,x]$ for $x,y\in M$ and the Jacobi identity are homogeneous, $ M_k\cap M_l= \{0\} $ for $k\neq l$. By Lemma \[l00\], we get $M= \bigoplus_{n=0}^{\infty} M_n$ and $\mathcal{B}=\bigsqcup_{n=1}^{\infty}\mathcal{B}_n$ where $\mathcal{B}_n$ denotes the set of monomials of length $n$ in $\mathcal{B}$. Hence, it is enough to check that $\mathcal{B}_n$ is a basis of $M_n$ for any $n\geq 1$, i.e., - Any element of $M_n$ can be written as a linear combination of the elements of $\mathcal{B}_n$. - Elements of $\mathcal{B}_n$ are linearly independent. For $n=1$, $\mathcal{B}_1=X$, so $(i)$ and $(ii)$ hold.\ Assume $n=2$. By the anti-symmetry of the Lie bracket, $$[x,y]=-[y,x]$$ for any $x,y\in X$, so $\mathcal{B}_2$ spans $M_2$.\ Assume $n=3$ and $y_1,y_2,y_3\in X$ such that $y_1\leq y_2\leq y_3$. There are at most 6 different monomials of length 3 containing $y_1,y_2,y_3$. 
They are $$m_1=[y_1,y_2,y_3]$$ $$m_2=[y_1,y_3,y_2]$$ $$m_3=[y_2,y_1,y_3]$$ $$m_4=[y_2,y_3,y_1]$$ $$m_5=[y_3,y_1,y_2]$$ $$m_6=[y_3,y_2,y_1]$$ $m_3$ and $m_5$ are either $0$ or elements of $\mathcal{B}_3$ (for example, if $y_1=y_3$, $m_5=0$). $$m_1=[y_1,y_2,y_3]=-[y_2,y_1,y_3]=-m_3$$ $$m_2=[y_1,y_3,y_2]=-[y_3,y_1,y_2]=-m_5$$ and by the Jacobi identity, $${\displaystyle}\begin{array}{lll} {\displaystyle}m_4&=&[y_2,y_3,y_1]\\ &=&-[y_1,y_2,y_3]-[y_3,y_1,y_2]\\ &=&[y_2,y_1,y_3]-[y_3,y_1,y_2]\\ &=&m_3-m_5 \end{array}$$ and, $$m_6=-m_4=m_5-m_3$$ Hence, $\mathcal{B}_3=\{[x_{i_0},x_{i_1},x_{i_2}]\mid x_{i_0}>x_{i_1}\leq x_{i_2},\;x_{i_j}\in X\}$ spans $M_3$.\ Now assume that $\mathcal{B}_k$ spans $M_k$ for every $k\in\{2,3,\dots,n\}$. Let $a=[a_0,a_1,\dots,a_n]$ be an element of length $n+1$ in $M$ where $a_i\in X$. By the assumption, $[a_0,a_1,\dots,a_{n-1}]$ can be written as a linear combination of the elements of $\mathcal{B}_n$. So $a$ is a linear combination of the elements of the form $[a_{i_0},a_{i_1},\dots,a_{i_{n-1}},a_n]$ where $i_0,i_1,\dots,i_{n-1}$ is a permutation of $0,1,\dots,n-1$ and $a_{i_0}>a_{i_1}\leq \dots \leq a_{i_{n-1}}$. If $a_n \geq a_i$ for all $i\in \{0,\dots, n-1\}$, then $[a_{i_0},a_{i_1},\dots,a_{i_{n-1}},a_n]$ is in $\mathcal{B}_{n+1}$. Otherwise there exists $j\in\{0,\dots,n-1\}$ such that $a_{i_j}$ is the smallest element satisfying $a_{i_j}>a_n$. If $j=0$ then again $[a_{i_0},a_{i_1},\dots,a_{i_{n-1}},a_n]$ is in $\mathcal{B}_{n+1}$. 
If $j>0$, we apply the Jacobi identity to $[a_{i_0},a_{i_1},\dots,a_{i_{n-1}},a_n]$, $${\displaystyle}\begin{array}{lll} {\displaystyle}[a_{i_0},a_{i_1},\dots,a_{i_{n-1}},a_n] &=& -([[a_{n},[a_{i_0},\dots,a_{i_{n-2}}]],a_{i_{n-1}}]+[[a_{i_{n-1}},a_n],[a_{i_0}, \dots,a_{i_{n-2}}]])\\ &=&[a_{i_0},a_{i_1},\dots,a_{i_{n-2}},a_n,a_{i_{n-1}}] \end{array}$$ For $j\geq 1$, we can repeat this $n-j-1$ times and get $$\label{a} {\displaystyle}[a_{i_0},a_{i_1},\dots,a_{i_{n-1}},a_n] = [a_{i_0},\dots,a_{i_j},a_n,a_{i_{j+1}},\dots,a_{i_{n-1}}]$$ If $j\geq 2$, we apply the identity one more time and get $${\displaystyle}\begin{array}{lll} {\displaystyle}[a_{i_0},a_{i_1},\dots,a_{i_{n-1}},a_n] &=& [a_{i_0},\dots,a_{i_{j-1}},a_n,a_{i_j},a_{i_{j+1}},\dots,a_{i_{n-1}}] \end{array}$$ and it is a monomial in $\mathcal{B}_{n+1}$. If $j=1$, we obtain $$[a_{i_0},a_{i_1},a_n,a_{i_2},\dots,a_{i_{n-1}}]$$ and since $[a_{i_0},a_{i_1},a_n]$ is an element of the vector space spanned by $\mathcal{B}_3$, $a$ is a linear combination of monomials in $\mathcal{B}_{n+1}$. To complete the proof, we also need to verify $(ii)$ for any $n\geq 2$:\ Since $M_i\cap M_j=\{0\}$ for $i\neq j$, ${\displaystyle}\sum_{i=1}^{m}c_ip_i(x_1,\dots,x_d)=0$ where $c_i\in k$ and $p_i(x_1,\dots,x_d)\in M_i$ implies that either $c_i=0$ or $p_i(x_1,\dots,x_d)=0$. If $ p_n(x_1,\dots,x_d)=\sum_{j=1}^{l}c_j[x_{j_1},\dots,x_{j_n}] $ for $c_j\in k $, $x_{j_t}\in X$, $j\in \{1,\dots,l\}$, $t\in \{1,\dots,n\}$, then we may assume that, for every $j\in \{1,\dots,l\}$, the indices $j_1,\dots,j_n$ are a permutation of the indices $1_1,\dots,1_n$ of the first monomial. For $n=2$, $[x,y]\in \mathcal{B}_2$ implies $[y,x]\notin \mathcal{B}_2$. So $\mathcal{B}_2$ forms a basis for $M_2$.\ For $n=3$, let $m_1,\dots,m_6$ be monomials of length $3$ as we defined before. Among them only $m_3$ and $m_5$ are in $\mathcal{B}_3$, and $m_3\neq cm_5$ for any $c\in k$. Hence, $(ii)$ holds for $n=3$.\ Now assume that for $3\leq i\leq n$, the elements of $\mathcal{B}_i$ are linearly independent and let $[a_0,a_1,\dots,a_n]\in \mathcal{B}_{n+1}$. 
Assume that for $i\in \{1,\dots, m\}$, $m\in\mathbb{N}$, there are indices $i_0,\dots,i_n$ which are permutations of $0,\dots,n$ such that $$[a_{i_0},\dots,a_{i_n}]\in \mathcal{B}_{n+1}\;\text{for any}\; i\in \{1,\dots,m\}$$ and $$[a_0,a_1,\dots,a_n]=\sum_{i=1}^{m}c_i[a_{i_0},\dots,a_{i_n}]\;\text{for some}\;c_i\in k$$ Choose the smallest element $a_j$ among $\{a_0,\dots,a_n\}$. It is clear that $j\neq 0$ and if $j=1$ then there exists $k>1$ such that $a_1=a_k$. So we can assume that $j\geq 2$. Similarly, for any $i\in \{1,\dots,m\}$, there exists $k_i\geq2$ such that $a_j=a_{i_{k_i}}$. By using equation \[a\], we can exchange $a_j$ and $a_n$ in $[a_0,\dots,a_n]$, and $a_{i_{k_i}}$ and $a_{i_n}$ in all the monomials $[a_{i_0},\dots,a_{i_n}]$, $i\in \{1,\dots,m\}$. We get $$[b,a_j]=\sum_{i=1}^{m}c_i[b_i,a_j]$$ for $b,b_1,\dots,b_m\in M_n$. Any element of $M_n$ can be written as a linear combination of elements of $\mathcal{B}_n$, but this contradicts the linear independence of the elements of $\mathcal{B}_n$. Hence, we conclude that $\mathcal{B}$ is a basis of $M$. \[c1\] Let $M$ be a free metabelian Lie algebra over a field $k$ with generating set $X=\{x_1,\dots,x_d\}$. Then $M$ has polynomial growth of degree $d$. Let $x_1<\dots <x_d$ be the order on $X$. The growth function of $M$ is $${\displaystyle}\gamma_{X,M}(n)=\dim_k(X^n)=\sum_{k=1}^{n}|\mathcal{B}_k |$$ where $\mathcal{B}_k$ denotes the set of basis elements of length $k$ as in Proposition \[p1\]. For $m>2$, consider $\mathcal{B}_m$. The elements of $\mathcal{B}_m$ are of the form $$a=[a_0,a_1,\dots,a_{m-1}]$$ where $a_i\in X$ and $a_0>a_1\leq a_2\leq \dots \leq a_{m-1}$.\ If $a_1=x_j$ for some $j\in \{1,\dots ,d-1\}$ then for $i\in \{1,\dots,m-1\}$, $a_i\in \{x_j,\dots,x_d\}$ and $a_0\in \{x_{j+1},\dots,x_d\}$. 
So for fixed $a_1=x_j$, the number of basis elements of length $m$ is $$(d-j)\binom{m-2+d-j}{d-j}$$ Hence, $$\begin{array}{lll} |\mathcal{B}_m|&=&{\displaystyle}\sum_{j=1}^{d-1}(d-j)\binom{m-2+d-j}{d-j}\\ & &\\ &=& {\displaystyle}\sum_{j=1}^{d-1}\frac{(m-1)m\cdots (m-2+d-j)}{(d-j-1)!}\\ & &\\ &\sim&{\displaystyle}\sum_{i=0}^{d-2}\frac{1}{i!}(m-1)^{i+1}\\ & &\\ &{\displaystyle}\sim&m^{d-1} \end{array}$$ Since $${\displaystyle}\gamma_{X,M}(m)=\dim_k(X^m)=d+\binom{d}{2}+\sum_{k=3}^{m}|\mathcal{B}_k|,$$ $M$ has polynomial growth of degree $d$. Wreath product of two abelian Lie algebras ========================================== Let $T$ be a Lie algebra over a field $k$ of characteristic $p\neq 2$ and $B$ a $k$-module. $B$ is called a *right $T$-module* if there exists a $k$-bilinear map $B\times T\rightarrow B$ satisfying $$b[t_1,t_2]=(bt_1)t_2-(bt_2)t_1$$ for $b\in B$, $t_1,t_2\in T$.\ For a Lie algebra $T$ and a right $T$-module $B$, we define the split extension $W=B]T$ of $B$ by $T$ as follows: As a vector space $W=B\oplus T$ is the direct sum of $B$ and $T$, and the Lie operation on $W$ is defined as $$[b_1+t_1,b_2+t_2]= (b_1t_2-b_2t_1)+[t_1,t_2]$$ for $b_1,b_2\in B$ and $t_1,t_2\in T$, so $W$ is a Lie algebra over $k$.\ Here, we consider a special case of this construction that can be found in [@bau77; @lichtman84; @bah87]. Suppose $A$ and $T$ are finite dimensional abelian Lie algebras over a field $k$ of characteristic $p\neq 2$. Let $\{a_1,\dots,a_m\}$ and $\{t_1,\dots,t_n\}$ be bases of $A$ and $T$, respectively. Let $U=U(T)$ be the universal enveloping algebra of $T$ and $B$ be a free right $U$-module with the module basis $\{a_1,\dots,a_m\}$. $B$ can also be viewed as a Lie algebra module for $T$ where the action of $T$ on $B$ is the action of a subset of $U$ on the $U$-module $B$ and hence we can form the split extension $W=B]T$. 
$A$ and $T$ are Lie subalgebras of $W$, $B$ is the ideal of $W$ generated by $\{a_1,\dots,a_m\}$ and it is called the *base ideal* of $W$. $W$ is a Lie algebra generated by $A$ and $T$. It is termed the *wreath product* of the Lie algebras $A$ and $T$ and denoted by $$W=A \wr T$$ (For the general definition of the wreath product of Lie algebras see [@shmelkin73; @bah87]). As a vector space $W=B\oplus T$ is the direct sum of its abelian ideal $B$ and abelian Lie subalgebra $T$. \[l1\] [@bau77] Suppose that the Lie algebra $W$ over $k$ is the direct sum of $B$ and $T$ where $B$ is an abelian ideal generated by $\{a_1,\dots,a_m\}$ and $T$ is an abelian Lie algebra generated by $\{t_1,\dots,t_n\}$: $$W=B\oplus T.$$ Then $B$ is spanned, as a vector space, by $$\{[a_l,t_{j_1},\dots,t_{j_s}] \mid \; l \in \{1,\dots,m\}\;\text{and}\; j_1,\dots,j_s\in \{1,2,\dots, n\}\}.$$ Firstly, we show that $W=B\oplus T$ is metabelian: Let $w,w'$ be elements of $W$, $$w=b+t\;\text{and}\;w'=b'+t',\;\; b,b'\in B\; t,t'\in T.$$ Then, $${\displaystyle}\begin{array}{lll} [w,w']&=&[b+t,b'+t']\\ &=&[b,b'+t']+[t,b'+t']\\ &=&[b,b']+[b,t']+[t,b']+[t,t'] \\ &=&[b,t']-[b',t] \in B \end{array}$$ This implies that $W'\subset B$. Since $B$ is abelian, $W^{(2)}=0$. In $W$, all elements can be written as linear combinations of left-normed commutators with elements from $\{a_1,\dots,a_m,t_1,\dots,t_n\}$.\ Consider a commutator $x=[x_1,\dots,x_s]$ for $s\geq 2$ and $x_i \in \{a_1,\dots,a_m,t_1,\dots,t_n\}$. If $x\neq 0$ then there is exactly one $j$ such that $x_j\in \{a_1,\dots,a_m\}$ (in particular $j=1$ or $j=2$, otherwise $[x_1,x_2]=0$): If $x_i\in\{t_1,\dots,t_n\}$ for all $i\in \{1,\dots,s\}$, then $x=0$. If there exist $x_k,x_l \in \{a_1,\dots,a_m\}$ for some $k\neq l$ then $x=[x_1,\dots,x_k,\dots,x_{l-1},x_l,\dots x_s]=0$ since $[x_1,\dots,x_k,\dots,x_{l-1}],x_l \in B$ and their bracket is equal to $0$. 
So $W$ has a basis which is a subset of the following set: $$\{t_1,\dots,t_n\}\cup \{[a_l,t_{j_1},\dots,t_{j_s}] \mid \; l \in \{1,\dots,m\}\;\text{and}\; j_1,\dots,j_s\in \{1,2,\dots, n\}\}.$$ Since $B\cap T=\{0\}$, $$B\subseteq {\rm span}([a_l,t_{j_1},\dots,t_{j_s}] \mid \; l \in \{1,\dots,m\}\;\text{and}\; j_1,\dots,j_s\in \{1,2,\dots, n\})$$ and we have $[a_l,t_{j_1},\dots,t_{j_s}]\in B$ for $l \in \{1,\dots,m\}$ and $j_1,\dots,j_s\in \{1,2,\dots, n\}$. Hence $$B={\rm span}([a_l,t_{j_1},\dots,t_{j_s}] \mid \; l \in \{1,\dots,m\}\;\text{and}\; j_1,\dots,j_s\in \{1,2,\dots, n\}).$$ \[c0\] $W$ can be presented by the generators $$a_1,\dots,a_m,t_1,\dots,t_n$$ and the following relations: $$[t_i,t_j]=0$$ for $1\leq i,j\leq n$, $$[[a_k,t_{i_1},\dots,t_{i_r}],[a_l,t_{j_1},\dots,t_{j_s}]]=0$$ for $1\leq k,l\leq m,\;\{i_1,\dots,i_r,j_1,\dots,j_s\}\subset \{1,2,\dots,n\},\;r\geq 0 ,\;s\geq 0$. The next lemma follows from a theorem of Lewin [@lewin74] and is reformulated in [@bau77] as: \[l3\] Let $F$ be a finitely generated free Lie algebra and $R$ an ideal of $F$. Then $F/R^{\prime}$ can be embedded in $W=A\wr( F/R)$ where $A$ is a finite dimensional abelian Lie algebra. \[c2\] Let $M$ be a finitely generated free metabelian Lie algebra. Then there exist finite-dimensional abelian Lie algebras $A$ and $T$ such that $M$ can be embedded in $W=A \wr T$. Let $R=L_X^{\prime}$ be the commutator of the free Lie algebra $L_X$ generated by $X$. Then by Lemma \[l3\], $M=L_X/L_X^{(2)}$ can be embedded in $ {\displaystyle}W=A\wr T$ where $T= L_X/L_X^{\prime}$ and $A$ is a finite dimensional abelian Lie algebra. Finitely presented metabelian Lie algebras =========================================== Let $A$ and $T$ be finite dimensional abelian Lie algebras over a field $k$ of characteristic $p\neq 2$ and $W$ the wreath product of $A$ and $T$ as we defined in the previous section. In [@bau77], Baumslag showed that $W$ can be embedded in a finitely presented metabelian Lie algebra $W^+$. 
The construction of $W^+$ is as follows:\ Let $\{a_1,\dots, a_m\}$ and $\{t_1,\dots,t_n\}$ be bases of $A$ and $T$, respectively. Then the universal enveloping algebra $U$ of $T$ is the associative $k$-algebra $k[t_1,\dots, t_n]$ of polynomials with variables $t_1,\dots,t_n$ over $k$. Furthermore, let $B$ be the free right $U$-module with module basis $\{a_1,\dots,a_m\}$. It is well known that $U$ can be turned into a Lie algebra by defining a new multiplication in $U$ by $$[u,v]=uv-vu$$ With this bracket, $U$ is simply an infinite dimensional abelian Lie algebra and $T$ is a finite dimensional subalgebra of $U$. To get a finitely presented metabelian Lie algebra $W^+$, we consider a subalgebra $T^+$ of the Lie algebra $U$ properly containing $T$ as a subalgebra. Let $T^+$ be the subalgebra generated by $\{t_1,\dots,t_n,u_1,\dots,u_n\}$ where $$u_i=t_i^2\;\;\text{for}\;i\in\{1,\dots,n\}$$ $T^+$ is a $2n$-dimensional abelian Lie algebra and we define $W^+$ as the wreath product of $A$ and $T^+$. $$W^+=A\wr T^+$$ [@bau77 Lemma 6]\[l2\] $W^+$ can be presented on the generators $$a_1,\dots,a_m,t_1,\dots,t_n,u_1,\dots,u_n,$$ subject to the relations $$[[a_k,t_{i_1},\dots,t_{i_r}],[a_l,t_{j_1},\dots,t_{j_s}]]=0,$$ $(1\leq k\leq m,\;1\leq l\leq m, i_1,\dots,i_r,j_1,\dots,j_s \in \{1,2,\dots,n\})$, $$[t_i,t_j]=[t_i,u_j]=[u_i,u_j]=0\;\;\;(1\leq i\leq n,\; 1\leq j\leq n),$$ $$[a_k,u_l]=[a_k,t_l,t_l]\;\;\;(1\leq k\leq m,\; 1\leq l\leq n).$$ By the construction of $W^+$, we have the following relations $$\label{eq:1} [a_k,u_i]=a_kt_i^2=[a_k,t_i,t_i],$$ and, since $T^+$ is abelian $$\label{eq:2} [t_i,t_j]=[t_i,u_j]=[u_i,u_j]=0$$ for any $1\leq k\leq m,\;1\leq i,j\leq n.$ It follows from Lemma \[l0\] and \[eq:2\] that $$\label{eq:3} [a_k,x_1,\dots,x_s]=[a_k,x_{i_1},\dots,x_{i_s}]$$ if $1\leq k\leq m$, $x_1,\dots,x_s \in \{t_1,\dots, t_n,u_1,\dots,u_n\}$ and $\{i_1,\dots,i_s\}$ is any permutation of $1,\dots,s$. 
In view of \[eq:1\] and \[eq:3\], we see that $$\label{eq:4} [a_k,t_{j_1},t_{j_2},\dots,t_{j_l},u_i]=[a_k,t_{j_1},t_{j_2},\dots,t_{j_l},t_i,t_i]$$ if $1\leq k\leq m$, $\{j_1,j_2,\dots,j_l,i\}\subset \{1,2,\dots, n\}$. By Lemma \[l1\] and \[eq:4\], we conclude that all the elements of $W^+$ can be presented as linear combinations of the monomials of the following set $$S=\{a_1,\dots, a_m,t_1,\dots,t_n,u_1,\dots,u_n\}\cup \{[a_i,t_{j_1},\dots,t_{j_s}]\mid i\in\{1,\dots,m\},\; j_1,\dots,j_s \in\{1,\dots,n\}\}$$ The product of any two elements of $S$ is determined by the given presentation. We will use the following lemmas to show that $W^+$ has a finite presentation. [@bau77 Lemma 5]\[l5\] Let $L$ be a Lie algebra over a field of characteristic $p\neq 2$. Suppose $a,b,t,u$ are elements of $L$ and suppose $$[a,b]=[a,t,b]=[b,t,a]=[t,u]=0$$ and $$[a,u]=[a,t,t],\;[b,u]=[b,t,t].$$ Then $$[[a,\underbrace{t,\dots,t}_{i}],[b,\underbrace{t,\dots,t}_{j}]]=0$$ for every $i\geq 0, j\geq 0$. Let us denote $a_0=a$, $b_0=b$ and $$a_i=[a,\underbrace{t,\dots,t}_{i}],\;b_j=[b,\underbrace{t,\dots,t}_{j}],\;\text{for}\; i,j\geq 1.$$ To prove that $[a_i,b_j]=0$ whenever $i,j\geq 0$, we apply induction on $i$ and $j$. For $0\leq i,j\leq 1$, $$[a_0,b_0]=[a_1,b_0]=[a_0,b_1]=0$$ are given relations. We only need to verify that $[a_1,b_1]=0$ to complete the base cases of the induction. By Lemma \[l0\], $[a_1,b_0]=0$ implies $[a_1,b_1]=[b_0,a_2]$. Similarly, $[b_1,a_0]=0$ implies $[b_1,a_1]=[a_0,b_2]$. So we get $$\label{l5e1} [a_0,b_2]=[a_2,b_0]=[b_1,a_1]$$ In view of Lemma \[l0\] and the given relations $[a_0,u]=a_2$ and $[b_0,u]=b_2$, $$\label{l5e2} [a_0,b_2]=[a_0,[b_0,u]]=[b_0,a_2]$$ Combining (\[l5e1\]) and (\[l5e2\]), we get $[a_0,b_2]=[b_0,a_2]=[a_2,b_0].$ Since $char(k)\neq 2$, we conclude that $[a_0,b_2]=0$. Thus $[a_1,b_1]=0$. 
Now, suppose that $$[a_i,b_j]=0\;\text{for}\;0\leq i\leq n,\;0\leq j \leq n.$$ Since $[t,u]=0$, by Lemma \[l0\], we have $$[a_i,u]=a_{i+2},\;[b_i,u]=b_{i+2}\;\text{for any}\; i\geq 0.$$ Combining the induction hypothesis with Lemma \[l0\], we get $$\label{l5e3} [a_i,b_{n+1}]=[b_n,a_{i+1}]=0\;\;\text{for}\; 0\leq i\leq n-1$$ and similarly, $$\label{l5e4} [b_j,a_{n+1}]=[a_n,b_{j+1}]=0\;\;\text{for}\; 0\leq j\leq n-1$$ It remains only to verify that $$[a_n,b_{n+1}]=[a_{n+1},b_{n+1}]=[b_n,a_{n+1}]= 0.$$ Now $[b_{n-1},a_{n+1}]=0$, so by Lemma \[l0\] $$\label{l5e5} [b_{n-1},a_{n+2}]=[b_{n-1},[a_{n+1},t]]=[a_{n+1},b_{n}]$$ and $[b_{n-1},a_n]=0$ and $[a_n,b_n]=0$ imply the following relations, respectively. $$\label{l5e6} [b_{n-1},a_{n+2}]=[b_{n-1},[a_n,u]]=[a_n,b_{n+1}]$$ $$\label{l5e7} [a_n,b_{n+1}]=[b_n,a_{n+1}]=-[a_{n+1},b_n]$$ Putting (\[l5e5\]), (\[l5e6\]) and (\[l5e7\]) together, we get $-[a_{n+1},b_n]=[a_{n+1},b_n]$. Since $char(k)\neq 2$, this implies $$\label{l5e8} [a_{n+1},b_n]=0$$ and a similar argument shows that $$\label{l5e9} [a_{n},b_{n+1}]=0$$ We also need to verify that $[a_{n+1},b_{n+1}]=0$. By Lemma \[l0\], $$\label{l5e10} [a_n,b_{n+2}]=[b_{n+1},a_{n+1}]$$ $$\label{l5e11} [a_n,b_{n+2}]=[b_{n},a_{n+2}]$$ $$\label{l5e12} [a_{n+1},b_{n+1}]=[b_{n},a_{n+2}]$$ Combining (\[l5e10\]), (\[l5e11\]) and (\[l5e12\]), we get $[a_{n+1},b_{n+1}]=[b_{n},a_{n+2}]=[a_n,b_{n+2}]=[b_{n+1},a_{n+1}]=-[a_{n+1},b_{n+1}]$ and, since $char(k)\neq 2$, $$\label{l5e13} [a_{n+1},b_{n+1}]=0$$ Equations (\[l5e3\]), (\[l5e4\]), (\[l5e8\]), (\[l5e9\]) and (\[l5e13\]) complete the induction and the proof of Lemma \[l5\]. 
\[l6\] For any $k,l\in \{1,2,\dots,m\}$ and $ i_1,i_2,\dots,i_s \in \{1,2,\dots,n\}$, $$[a_k,t_{i_1},t_{i_2},\dots,t_{i_s},a_l]=0$$ implies that for any $r\in \{1,\dots,s-1\}$, $$[[a_l,t_{i_1},t_{i_2},\dots,t_{i_r}], [a_k,t_{i_{r+1}},\dots,t_{i_s}]]=0 .$$ By Lemma \[l0\] and equation \[eq:3\] we have $${\displaystyle}\begin{array}{lll} [a_k,t_{i_1},t_{i_2},\dots,t_{i_s},a_l]&=&[a_k,t_{i_2},t_{i_3},\dots,t_{i_s},t_{i_1},a_l]\\ &=&[[a_l,t_{i_1}],[a_k,t_{i_2},t_{i_3},\dots,t_{i_s}]]\\ &=&[[a_l,t_{i_1}],[a_k,t_{i_3},\dots,t_{i_s},t_{i_2}]]\\ &=&[[a_k,t_{i_3},\dots,t_{i_s}],[a_l,t_{i_1},t_{i_2}]]\\ &\dots&\\ &=&[[a_l,t_{i_1},t_{i_2},\dots,t_{i_r}],[a_k,t_{i_{r+1}},\dots,t_{i_s}]]\\ &=& 0. \end{array}$$ \[p2\] $W^+$ can be presented by the generators $$a_1,\dots,a_m,t_1,\dots,t_n,u_1,\dots,u_n,$$ subject to the finitely many relations $$[a_k,t_{j_1},t_{j_2},\dots,t_{j_s},a_l]=0\;\;\;(k,l\in\{1,2,\dots,m\}, 1\leq j_1 < j_2<\dots < j_s\leq n),$$ $$[t_i,t_j]=[t_i,u_j]=[u_i,u_j]=0\;\;\;(1\leq i\leq n,\; 1\leq j\leq n),$$ $$[a_k,u_l]=[a_k,t_l,t_l]\;\;\;(1\leq k\leq m,\; 1\leq l\leq n).$$ By Lemma \[l2\], we only need to show that $$\label{eq:5} [[a_k,t_{i_1},\dots,t_{i_r}],[a_l,t_{j_1},\dots,t_{j_s}]]=0,$$ where $1\leq k\leq m,\;1\leq l\leq m,\;i_1,\dots,i_r,j_1,\dots,j_s\in \{1,2,\dots,n\},\;r\geq 0 ,\;s\geq 0$. To prove this, we apply induction on $r>0$, $s>0$:\ If $r=s=1$ we have $$w=[[a_k,t_{i_1}],[a_l,t_{j_1}]]$$ If $i_1=j_1$, we can apply Lemma \[l5\] by taking $a=a_k$, $b=a_l$, $t=t_{i_1}=t_{j_1}$, $u=u_{i_1}$. Since we have the relations $$[a,b]=[a,t,b]=[b,t,a]=[t,u]=0$$ $$[a,u]=[a,t,t]\;\text{and}\;[b,u]=[b,t,t],$$ we get $$[[a,t],[b,t]]=0.$$ If $i_1\neq j_1$, the equation $w=0$ follows from Lemma \[l6\]. Now assume that for $r\leq q$, $s\leq q$, we have $$[[a_k,t_{i_1},\dots,t_{i_r}],[a_l,t_{j_1},\dots,t_{j_s}]]=0$$ Consider $$w=[[a_k,t_{i_1},\dots,t_{i_q},t_{i_{q+1}}],[a_l,t_{j_1},\dots,t_{j_s}]]$$ for $s\leq q$. 
If $t_{i_1},\dots,t_{i_q},t_{i_{q+1}},t_{j_1},\dots,t_{j_s}$ are all distinct, $w=0$ follows from one of the given relations and Lemma \[l6\]. If not, we prove $w=0$ as follows: By Lemma \[l0\] and equation \[eq:3\], we can assume without loss of generality that $$t_{i_{q+1}}=t_{i_q}=t.$$ Taking $a=[a_k,t_{i_1},\dots,t_{i_{q-1}}]$, $b=[a_l,t_{j_1},\dots,t_{j_s}]$, $t=t_{i_q}$ and $u=u_{i_q}$, we can apply Lemma \[l5\]. Since $a,b,t,u$ satisfy the relations in Lemma \[l5\], we have $$\label{eq:6} w=[[a_k,t_{i_1},\dots,t_{i_q},t_{i_{q+1}}],[a_l,t_{j_1},\dots,t_{j_s}]]=[[a,t,t],b]=0$$ Similarly, one can show that $$\label{eq:7} [[a_k,t_{i_1},\dots,t_{i_r}],[a_l,t_{j_1},\dots,t_{j_q},t_{j_{q+1}}]]=0$$ for $r\leq q$. To complete the proof, we only need to show that $$[[a_k,t_{i_1},\dots,t_{i_q},t_{i_{q+1}}],[a_l,t_{j_1},\dots,t_{j_q},t_{j_{q+1}}]]=0$$ and it follows from equations \[eq:6\], \[eq:7\] and Lemma \[l5\]. Proof of Theorem 1 ================== Let $W$ be the wreath product of abelian Lie algebras $A$ and $T$ generated by $\{a_1,\dots,a_d\}$ and $\{t_1,\dots,t_d\}$, respectively. In the previous section we have shown that $W$ is a subalgebra of a finitely presented Lie algebra $W^+$ generated by $\{a_1,\dots, a_d,t_1,\dots,t_d,u_1,\dots,u_d\}$. In order to prove Theorem 1, we compute the growth rate of $W^+$ in this section. In Corollary \[c2\], we have shown that the free metabelian Lie algebra $M$ generated by $d$ elements can be embedded in $W$. So we have $$\gamma_M \sim n^{d}\precsim \gamma_{W^+}$$ To find an upper bound for the growth rate $\gamma_{W^+}$ of $W^+$, we consider the number of the non-zero monomials in $W^+$. 
In the proof of Lemma \[l2\], we have shown that all the elements of $W^+$ can be presented as linear combinations of the monomials of the following set $$S=\{a_1,\dots, a_d,t_1,\dots,t_d,u_1,\dots,u_d\}\cup \{[a_i,t_{j_1},\dots,t_{j_s}]\;\mid\; 1\leq i\leq d ,\;j_1,\dots,j_s \in\{1,\dots,d\}\}$$ and combining this with equation \[eq:3\] we see that, as a vector space $W^+$ has a basis which is a subset of the following set: $$\tilde{S}=\{a_1,\dots, a_d,t_1,\dots,t_d,u_1,\dots,u_d\}\cup \{[a_i,t_{j_1},t_{j_2},\dots,t_{j_s}]\mid 1\leq i\leq d,\;1\leq j_1\leq j_2\leq \dots \leq j_s\leq d\}$$ So the growth function $\gamma_{W^+}(n)$ of $W^+$ is less than or equal to the number of elements of length not greater than $n$ in $\tilde{S}$, $${\displaystyle}\gamma_{W^+}(n)\leq 2d+d+\sum_{s=1}^{n-1}d\binom{s+d-1}{d-1}\sim \sum_{s=1}^{n-1}ds^{d-1}\sim n^d$$ and we conclude that $$\gamma_{W^+}(n)\sim n^{d}.$$ To complete the proof, we consider the relation between the growth functions of a Lie algebra $L$ and its universal enveloping algebra $U(L)$: Let $L$ denote a Lie algebra generated over a field $k$ by the finite set $X$ whose elements are linearly independent over $k$ and $X^n$ denote the subspace of $L$ spanned by all monomials of length less than or equal to $n$, as we defined in the first section. We may assume $L\subset U(L)$, so that $X$ generates $U$ as an associative algebra. Let $u_1,u_2,\dots$ be an ordered basis of $L$ such that $X=\{u_1,\dots,u_{\gamma_L(1)}\}$ and $u_{\gamma_L(n-1)+1},\dots, u_{\gamma_L(n)} $ is a basis for $X^n/X^{n-1}$, $n\geq 2$. By the Poincaré–Birkhoff–Witt theorem [@jacob62], monomials of the form $$u_{i_1}\dots u_{i_r}\;\text{with}\; i_1\leq i_2 \leq \dots \leq i_r$$ form a basis for $U$ and we get the following relation: $$\label{eq:9} \displaystyle \sum_{n=0}^{\infty} b_n t^n =\prod_{n=1}^{\infty}(1-t^n)^{-a_n}$$ where $a_n:=\dim(X^n/X^{n-1})$ and $b_n$ is the number of basis monomials of length $n$ in $U(L)$ ([@smith76]). 
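The identity \[eq:9\] can be checked numerically by expanding the product as a truncated power series. A minimal computational sketch (the function name and truncation parameter $N$ are ours; the test below uses the classical Witt numbers $a_n$ of a free Lie algebra on two generators, for which the product collapses to $1/(1-2t)$ and hence $b_n=2^n$):

```python
from math import comb

def enveloping_coefficients(a, N):
    """Given a[n-1] = dim of the degree-n graded piece of L (n = 1..len(a)),
    return b[0..N], the number of PBW basis monomials of each length, from
    the identity  sum_n b_n t^n = prod_n (1 - t^n)^(-a_n), truncated at N."""
    b = [0] * (N + 1)
    b[0] = 1
    for n in range(1, N + 1):
        an = a[n - 1] if n - 1 < len(a) else 0
        if an == 0:
            continue
        # multiply the series by (1 - t^n)^(-an) = sum_k C(an+k-1, k) t^(n*k)
        new = [0] * (N + 1)
        for k in range(N // n + 1):
            c = comb(an + k - 1, k)
            for i in range(N + 1 - n * k):
                new[i + n * k] += c * b[i]
        b = new
    return b
```

For a one-dimensional abelian $L$ (a single $a_1=1$) this returns the constant sequence $b_n=1$, as expected for $k[t]$.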
The relation between the growth rates of $a_n$ and $b_n$ is given in the following proposition, the proof of which can be found in various papers ([@ber83], [@pet93], [@gribar00]). \[pro2\] If $a_n$ and $b_n$ are related by \[eq:9\] and $a_n\sim n^d$, then $b_n \sim e^{n^{\frac{d+1}{d+2}}}.$ Since $\gamma_{W^+}(n)\sim n^d$, the graded dimensions of $W^+$ satisfy $a_n\sim n^{d-1}$, so the growth of the universal enveloping algebra $U(W^+)$ of $W^+$ is $$\gamma_{U(W^+)}(n)\sim e^{n^\frac{d}{d+1}}.$$ Since $W^+$ is finitely presented, so is $U(W^+)$ [@ufna]. Hence we can conclude that for any integer $d>0$, there exists a finitely presented associative algebra of growth type $[e^{n^{d/(d+1)}}]$.
--- abstract: 'In this paper, we explain the importance of finite decomposition semigroups and present two theorems related to their structure.' address: 'LIPN - UMR 7030 du CNRS, 99, avenue Jean-Baptiste Clément, Université Paris 13, 93430 Villetaneuse, France' author: - 'Matthieu Deneufchâtel, Gérard H. E. Duchamp' bibliography: - 'Biblio.bib' title: Finite Decomposition Semigroups --- Introduction ============ Theories of “special sums” have highlighted different products over the indices. For example, Chen’s lemma states that the product of two iterated integrals is ruled by the shuffle product defined by $$\begin{aligned} 1 \shuffle w = w \shuffle 1 & = w ~; \\ (a u) \shuffle (b v) & = a(u \shuffle (bv)) + b ( (au) \shuffle v) \end{aligned}$$ for all words $u, \, v, \, w \in A^*$ and all letters $a,b$ of the alphabet $A$.\ Indeed (see [@JGJY1]), if $\mathscr H$ is a vector space of integrable functions over $(c_1,c_2)$ and $f_1 , \dots , f_n$ some functions of $\mathscr H$, define the following integral: $$\langle f_1 \dots f_n \rangle = \int_{c_1}^{c_2} dy_1 \int_{c_1}^{y_1} \dots \int_{c_1}^{y_{n-1}} d y_n \, f_1(y_1) \dots f_n(y_n)$$ (considered as a linear form defined on ${\mathscr H}^{\otimes n}$). If the functions $\phi_{a_i}$ are indexed by letters of the alphabet $A$, we associate to $w = a_{i_1} \dots a_{i_{|w|}}$ the integral $$\langle w \rangle = \langle \phi_{a_{i_1}} \dots \phi_{a_{i_{|w|}}} \rangle.$$ Then Chen’s lemma gives the following relation[^1], $\forall \, u , v \in A^*$: $$\label{Chen} \left\{ \begin{aligned} \langle u \rangle \langle v \rangle & = \langle u \shuffle v \rangle; \\ \langle 1 \rangle & = 1 . 
\end{aligned} \right.$$ Some of these iterated integrals have been thoroughly studied, for example the polyzetas: one considers the alphabet $\left\{ x_0 , x_1 \right\}$ and constructs recursively the following integrals: $\forall z \in \C \backslash \left] - \infty , 0 \right] \cup \left[ 1 , + \infty \right[$, $$\displaystyle {\rm Li}_{x_0^n}(z) = \frac{\ln^n(z)}{n!},$$ $${\rm Li}_{x_1 w} (z) = \int_0^z \frac{dt}{1-t} {\rm Li}_w(t),$$ and, $\forall w \in X^* x_1 X^*$, $${\rm Li}_{x_0 w} (z) = \int_0^z \frac{dt}{t} {\rm Li}_w(t).$$ The specialization of these functions for $z=1$ yields the Multiple Zeta Values (henceforth denoted by MZV) $\zeta({\bf s})$ where the multiindex ${\bf s}$ is obtained from $w$ with the correspondence $w = x_0^{s_1 - 1} x_1 \dots x_0^{s_k - 1} x_1 \leftrightarrow {\bf s} = ( s_1 , \dots, s_k )$. One can show that the product of two MZV’s is, like that of the quasi-symmetric functions, ruled by the stuffle product $\stuffle$ defined by $$\begin{array}{rl} (s_1 , \dots , s_p ) \stuffle ( t_1 , \dots , t_q ) = & s_1 \big( (s_2 , \dots, s_p ) \stuffle ( t_1 , \dots , t_q ) \big)\\ + & t_1 \big( (s_1 , \dots , s_p ) \stuffle ( t_2 , \dots , t_q ) \big) \\ + & (s_1 + t_1 ) \big( (s_2 , \dots , s_p ) \stuffle ( t_2 , \dots , t_q ) \big). 
\end{array}$$ Further, coloured polyzetas ([@Kreimer; @SLC44]) need an indexation by bicompositions $\displaystyle \left( \genfrac{}{}{0pt}{}{s'_1 \dots s'_p}{s_1'' \dots s_p''} \right)$ with a product $\diamond$ given by $$\begin{array}{rl} \displaystyle \left( \genfrac{}{}{0pt}{}{s'_1 \dots s'_p}{s_1'' \dots s_p''} \right) \diamond \left( \genfrac{}{}{0pt}{}{t'_1 \dots t'_q}{t_1'' \dots t_q''} \right) & = \\ &\displaystyle \left( \left( \genfrac{}{}{0pt}{}{s'_1}{s_1''} \right) \left( \genfrac{}{}{0pt}{}{s'_2 \dots s'_p}{s_2'' \dots s_p''} \right) \diamond \left( \genfrac{}{}{0pt}{}{t'_1 \dots t'_q}{t_1'' \dots t_q''} \right) \right) \\ + & \displaystyle \left( \left( \genfrac{}{}{0pt}{}{t'_1}{t_1''} \right) \left( \genfrac{}{}{0pt}{}{s'_1 \dots s'_p}{s_1'' \dots s_p''} \right) \diamond \left( \genfrac{}{}{0pt}{}{t'_2 \dots t'_q}{t_2'' \dots t_q''} \right) \right) \\ + & \displaystyle \left( \left( \genfrac{}{}{0pt}{}{s_1'+t'_1}{s_1''+t_1''} \right) \left( \genfrac{}{}{0pt}{}{s'_2 \dots s'_p}{s_2'' \dots s_p''} \right) \diamond \left( \genfrac{}{}{0pt}{}{t'_2 \dots t'_q}{t_2'' \dots t_q''} \right) \right) . \end{array}$$ Even algebras of diagrams $\LDIAG$ ([@SLC62]), which require a coding by words whose letters (belonging to an alphabet $A$) are composable, are endowed with a product $\uparrow$ of this type. These algebras contain plane bipartite graphs with multiple ordered legs which are in bijection with the elements of $(\mathfrak{MON}^+(X))^*$ where $\mathfrak{MON}^+(X)$ is the set of non void commutative monomials in the variables of the alphabet $X$; formally speaking, let $X = \left\{ x_i \right\}_{i \geq 1}$ be an alphabet; denote by $\mathfrak{MON}(X)$ (resp. $\mathfrak{MON}^+(X)$) the monoid of monomials $X^\al$ for $\al \in \N^{(X)}$ (resp. for $\al \in \N^{(X)} \setminus \left\{ 0 \right\}$). Then, the elements of the monoid $(\mathfrak{MON}^+(X))^*$ are *words of monomials* which represent some diagrams. 
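The shuffle and stuffle recursions recalled above translate directly into code. A minimal sketch (our own encoding: words as strings, compositions as tuples of integers, linear combinations as dictionaries from terms to coefficients):

```python
def shuffle(u, v):
    """Shuffle product of two words, as a dict {word: coefficient}."""
    if not u:
        return {v: 1}
    if not v:
        return {u: 1}
    res = {}
    for w, c in shuffle(u[1:], v).items():      # a (u' sh (b v'))
        res[u[0] + w] = res.get(u[0] + w, 0) + c
    for w, c in shuffle(u, v[1:]).items():      # b ((a u') sh v')
        res[v[0] + w] = res.get(v[0] + w, 0) + c
    return res

def stuffle(s, t):
    """Stuffle product of two compositions (tuples of integers)."""
    if not s:
        return {t: 1}
    if not t:
        return {s: 1}
    res = {}
    for w, c in stuffle(s[1:], t).items():          # s1 (...)
        res[(s[0],) + w] = res.get((s[0],) + w, 0) + c
    for w, c in stuffle(s, t[1:]).items():          # t1 (...)
        res[(t[0],) + w] = res.get((t[0],) + w, 0) + c
    for w, c in stuffle(s[1:], t[1:]).items():      # (s1 + t1) (...)
        res[(s[0] + t[0],) + w] = res.get((s[0] + t[0],) + w, 0) + c
    return res
```

For instance, `stuffle((2,), (3,))` produces the three terms behind the classical MZV identity $\zeta(2)\zeta(3)=\zeta(2,3)+\zeta(3,2)+\zeta(5)$.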
The bilinear product $\uparrow$ of two diagrams is given on the corresponding words of monomials by $$\left\{ \begin{aligned} 1_{(\mathfrak{MON}^+(X))^*} \uparrow w & = w \uparrow 1_{(\mathfrak{MON}^+(X))^*} = w; \\ a u \uparrow b v & = a (u \uparrow bv) + b (au \uparrow v ) + (a \cdot b) ( u \uparrow v) \end{aligned} \right.$$ for all $a, \, b \in \mathfrak{MON}^+(X)$ and $u, \, v \in (\mathfrak{MON}^+(X))^*$. The dualization of the superposition law $(a,b)\rightarrow a \cdot b$ leads to the definition of coproducts given by sums over a semigroup which has the following property: each of its elements has a finite number of decompositions as a product of two elements of the semigroup. This fact motivates the study of such semigroups, called *finite decomposition semigroups*, and of their structure. Note that the semigroup law can be deformed with a bicharacter [@Thibon96; @Hoffman] or a colour factor [@EnjalbertMinh; @SLC62; @duchamp:hal-00793118]. The aim of this paper is to present two theorems related to the structure of these semigroups. Section \[MotDef\] is devoted to the detailed presentation of two examples illustrating the importance of finite decomposition semigroups. In section \[DDL\], we give a necessary and sufficient condition for the disjoint direct limit of a family of semigroups to be a finite decomposition semigroup. Finally, in section \[finitestruc\], we provide a structure theorem which describes every finite decomposition semigroup as a disjoint direct limit. Motivations - Definitions {#MotDef} ========================= Definitions ----------- Let us recall the definition of the finite decomposition property. Let $T$ be a semigroup. 
We say that $T$ has the *finite decomposition property* (or, equivalently, that $T$ is a finite decomposition semigroup) if, $\forall t \in T$, $$\label{finitedecomposition} \big| \left\{ t_1 , t_2 \in T, \, t_1 \cdot t_2 = t \right\} \big| < \infty.\tag{D}$$ We will need the following notation: if $(I,\leq)$ is an ordered set and $\alpha \in I$, then $$\left[ \leftarrow , \alpha \right] = \left\{ \beta \in I , \, \beta \leq \alpha \right\}$$ is called the *initial interval* generated by $\alpha$. Motivations ----------- Our interest in the finite decomposition property comes from the study of several problems of combinatorial physics in which semigroups or monoids with this property are involved[^2]. We give below two examples. ### Dualizability A first example of the importance of finite decomposition semigroups comes from the fact that they appear in the study of some bialgebras and, more precisely, in the dualization of the law of the algebra (see, for example, [@SLC62]; one of the authors recently described some of the features of some semigroup bialgebras [@FPSAC2013]). The best-known examples of this kind of bialgebra are given by the shuffle and stuffle algebras where the semigroups involved in the coproduct are respectively the null semigroup and $S = (\N^+ , + )$. If $k$ is a field and $M$ a semigroup, we denote by $k \left[ M \right]$ the algebra of $M$. It is in duality with itself for the scalar product $\scal{\cdot}{\cdot}$ defined by $$\scal{P}{Q} = \sum_{m \in M} \scal{P}{m} \scal{Q}{m}$$ if $P, \, Q \in k\left[ M \right]$ are polynomials with coefficients $\scal{P}{m}$ and $\scal{Q}{m}$, respectively, for all $m \in M$.\ If $M$ is a finite decomposition semigroup, it is possible to dualize the product of $k \left[ M \right]$. 
Indeed, one can define the element $\Delta(m) \in k \left[ M \right] \otimes k \left[ M \right]$ by $$\Delta(m) = \sum_{\genfrac{}{}{0pt}{}{p,q \in M}{pq = m}} p \otimes q$$ since the sum is finite, and then extend $\Delta$ by linearity. One has $$\scal{P \cdot Q}{R} = \scal{P \otimes Q}{\Delta(R)}, \, \forall P, Q, R \in k \left[ M \right].$$ Let $\mathfrak{MON}^{\rm L}(X) = \left\{ X^\alpha , \, \alpha \in \Z^{(X)} \right\}$ denote the monoid of Laurent monomials. Then the map $\displaystyle \Delta : X^{\alpha} \rightarrow \sum_{\al_1 + \al_2 = \alpha} X^{\al_1} \otimes X^{\al_2}$ takes its values in the large algebra $k \left[ \left[ \mathfrak{MON}^{\rm L}(X) \otimes \mathfrak{MON}^{\rm L}(X) \right] \right]$ since the semigroup of multiindices with values in $\Z$ is not a finite decomposition semigroup. ### Existence of the convolution product Let $M$ be a semigroup and ${\mathscr F}(M)$ the space of functions defined on $M$. Assume that $M$ is finite decomposition. Then ${\mathscr F}(M)$ is endowed with the structure of an algebra for the convolution product $\star$ defined by $$(f \star g) (m) = \sum_{m_1 m_2 = m} f(m_1) g(m_2)$$ for all $f,g \in {\mathscr F}(M)$. In fact, the two motivations above are related. Indeed, the law dual to the coproduct defined in section \[dualizability\] gives rise to the convolution product of linear forms on $k \left[ M \right]$. Let $S$ be a finite decomposition semigroup. If $S$ has a neutral $e_S$, we consider $S^{(1)} = S \setminus S^{\times}$ ($S^{\times}$ is the set of invertibles of $S$). The lemmas presented in this paper show that - $S^{(1)}$ is a sub-semigroup of $S$; - $S^\times$ is a finite group. In the next paragraph, we will see how to iterate this process and reconstruct the initial semigroup from these pieces. Disjoint Direct Limit and Finite decomposition property {#DDL} ======================================================= Disjoint Direct Limit --------------------- Let $(I,\leq)$ be an ordered set. 
We consider an inductive system of disjoint semigroups $S_\alpha$, $\alpha \in I$. This structure is given by a family of morphisms of semigroups $\phi_{\alpha \beta}: S_\beta \ra S_\alpha$, $\beta \leq \alpha$, which satisfy the following properties: $$\phi_{\alpha \alpha}={\rm Id}_\alpha \text{ for all }\alpha \in I;$$ $$\label{comp} \phi_{\alpha \beta} \circ \phi_{\beta \gamma} = \phi_{\alpha \gamma} \text{ for all }\gamma \leq \beta \leq \alpha \in I.$$ Then we denote by $S = \underset{\longrightarrow}{\rm DDL}(S_\alpha)$ the *Disjoint Direct Limit* of the system of semigroups, which is the semigroup structure on $S = \displaystyle \bigsqcup_{\alpha \in I} S_\alpha$ constructed as follows. Assume that $I$ is an upper half-lattice. Then $S$ has the structure of a semigroup for the law $\star$ given by $$x \star y = \phi_{\big(\lambda(x) \vee \lambda(y)\big) \lambda(x) } (x) \cdot_{\lambda(x) \vee \lambda(y)} \phi_{\big(\lambda(x) \vee \lambda(y)\big) \lambda(y) } (y)$$ where $\lambda(x)$ denotes the unique element of $I$ such that $x \in S_{\lambda(x)}$. 
Indeed, if $\lambda(x) = \alpha$, $\lambda(y) = \beta$ and $\lambda(z) = \gamma$, $$\begin{aligned} ( x \star y ) \star z & = (\phi_{\big(\alpha \vee \beta \big) \alpha } (x) \cdot_{\alpha \vee \beta} \phi_{\big(\alpha \vee \beta \big) \beta } (y) ) \star z \\ & = \phi_{\big( ( \alpha \vee \beta )\vee \gamma \big) (\alpha \vee \beta)} \Big( (\phi_{\big(\alpha \vee \beta \big) \alpha } (x) \cdot_{\alpha \vee \beta} \phi_{\big(\alpha \vee \beta \big) \beta } (y) ) \Big) \cdot_{( \alpha \vee \beta )\vee \gamma} \phi_{\big( (\alpha \vee \beta) \vee \gamma \big) \gamma}(z) \\ & = \Big( \phi_{\big( ( \alpha \vee \beta )\vee \gamma \big) \alpha} (x) \cdot_{(\alpha \vee \beta)\vee \gamma} \phi_{\big( ( \alpha \vee \beta )\vee \gamma \big) \beta} (y) \Big) \cdot_{(\alpha \vee \beta)\vee \gamma} \phi_{\big( (\alpha \vee \beta) \vee \gamma \big) \gamma}(z) \end{aligned}$$ using the compatibility property of the morphisms of semigroups $\phi_{\alpha \beta}$. The claim follows from the associativity of the product in $S_{\alpha \vee \beta \vee \gamma}$. Note that this construction is very similar to the construction of the direct limit of a family of semigroups (which is described, for example, in [@B_Set]). It is motivated by the structure of the finite decomposition semigroups (see \[finitestruc\]): these semigroups can be decomposed as the disjoint union of a family of groups together with a finite decomposition semigroup. The disjoint direct limit shows that one can conversely build a semigroup from a family of finite decomposition semigroups.\ Remark also that in the case where - the $S_\alpha$’s are monoids with neutral $e_\alpha$; - the morphisms $\phi_{\alpha \beta}$ satisfy $\phi_{\alpha \beta}(e_\beta) = e_\alpha$ for all $\alpha \geq \beta \in I$ (i.e. they are morphisms of monoids); - ${\rm min}(I) = \alpha_0$ exists, then $S$ is a monoid with neutral $e_{\alpha_0}$. 
Indeed, in that case, $$e_{\alpha_0} \star x = \phi_{\lambda(x) \alpha_0}(e_{\alpha_0}) \cdot_{\lambda(x)} x = e_{\lambda(x)} \cdot_{\lambda(x)} x = x$$ for all $x \in S$. Finite decomposition criterion ------------------------------ Let $S = {\rm DDL}(S_\alpha)_{\alpha \in I}$ be the disjoint direct limit of a family of semigroups. The semigroup $S$ is finite decomposition if and only if the following conditions are satisfied: 1. $\forall \alpha \in I$ and $\forall y \in S_\alpha$, $|\left\{ \beta \leq \alpha , \, \phi^{-1}_{\alpha \beta}(y) \neq \emptyset \right\}| < \infty$; 2. every $S_\alpha$ is of finite decomposition type; 3. for all $\beta \leq \alpha \in I$ and for all $x \in S_\alpha$, the fibers $\phi^{-1}_{\alpha \beta}(x)$ of $\phi_{\alpha \beta}$ are finite. **Proof :** - Assume that $S$ is of finite decomposition type. Then none of the sets $\left\{ \beta \leq \alpha , \, \phi^{-1}_{\alpha \beta}(y) \neq \emptyset \right\}$ can be infinite. If that were the case for $y \in S_\alpha$, then $y^2 \in S_\alpha$ would have an infinite number of decompositions since $y^2 = y \phi_{\alpha \beta}(x)$ for all $\beta \leq \alpha$ and $x \in \phi^{-1}_{\alpha \beta}(y)$. Moreover, each of the $S_\alpha$’s is a sub-semigroup of $S$; hence it is finite decomposition since $S$ is finite decomposition. Finally, the decomposition of $y^2$ presented above also explains why there is no morphism $\phi_{\alpha \beta}$ with an infinite fiber. - Assume now that the three properties are satisfied. Let $z \in S_{\lambda(z)}$. Consider its decompositions $x y$; they form a set $D_z$ given by $\displaystyle \bigsqcup_{u,v,\alpha,\beta} D(z,u,v,\alpha , \beta)$ where $D(z,u,v,\alpha,\beta) = \left\{ (x,y) \in S_\alpha \times S_\beta \text{ such that } \phi_{\lambda(z)\alpha}(x)=u \text{ and } \phi_{\lambda(z)\beta}(y)=v \right\}$. Remark that in order that $D(z,u,v,\alpha,\beta) \neq \emptyset$, one must have $\alpha \vee \beta = \lambda(z)$. 
By the first condition, there is only a finite number of indices $\alpha$ and $\beta$ with $\alpha \leq \lambda(z)$ and $\beta \leq \lambda(z)$ that can contribute. Moreover, there is also a finite number of $x$ and $y$ respectively in $S_\alpha$ and $S_\beta$ such that $u = \phi_{\lambda(z) \alpha}(x)$ and $v = \phi_{\lambda(z) \beta}(y)$ because the morphisms have finite fibers. Finally, since the $S_\alpha$’s are finite decomposition, there is a finite number of decompositions of $z$ as a product of elements of $S_{\lambda(z)}$. Hence $S$ is finite decomposition. The following is an example of ${\rm DDL}$ which is of finite decomposition type but whose intervals $\left[ \leftarrow , \alpha \right]$ are all infinite. It proves that the first condition cannot be replaced by the finiteness of every interval $\left[ \leftarrow , \alpha \right]$.\ Let $S_k = \left[ k , +\infty \right[$ be the additive semigroup of integers greater than or equal to $k$. The disjoint direct limit of the family $(S_k)_{k \geq 0}$ is the semigroup $S = \left\{ (k,x) \in \N^2, \, k \leq x \right\}$ with product $\star = \wedge_\N \times +$ where $$(k_1 , y_1) \star (k_2 , y_2 ) = ( k_1 \wedge_\N k_2 , y_1 + y_2).$$ The morphisms $S_k \stackrel{\phi_{\ell k}}{\longrightarrow} S_\ell$ map $(k,y)$ to $(\ell,y)$ for $\ell \leq k$ (see Fig. \[Fig\]). The set of indices of the semigroups is $\N$ with an order $\prec$ such that $$\alpha \prec \beta \Leftrightarrow \alpha \wedge \beta = \beta.$$ Hence, the intervals $\left[ \leftarrow , \alpha \right] = \left\{ \beta , \beta \wedge \alpha = \alpha \right\} = \left[ \alpha , + \infty \right[$ are all infinite. 
$$\begin{tikzpicture}[>=stealth]\draw[->,thick] (0,0) -- (5.5,0); \draw[->,thick] (0,0) -- (0,5.5); \draw[->,dotted] (0,0) -- (5.5,5.5); \draw[->, thick, color=red] (3,2) -- (3,1); \draw (3.7,1.5) node {$\,\,\,\,\phi_{1,2}(2,3)$}; \draw (11,0) node {$S_0$}; \draw (11,1) node {$S_1$}; \draw (11,2) node {$S_2$}; \draw (11,3) node {$S_3$}; \draw (11,4) node {$S_4$}; \draw (11,5) node {$S_5$}; \draw (0,0) node {$\bullet$}; \draw (1,0) node {$\bullet$}; \draw (2,0) node {$\bullet$}; \draw (3,0) node {$\bullet$}; \draw (4,0) node {$\bullet$}; \draw (5,0) node {$\bullet$}; \draw (1,1) node {$\bullet$}; \draw (2,1) node {$\bullet$}; \draw (3,1) node {$\bullet$}; \draw (4,1) node {$\bullet$}; \draw (5,1) node {$\bullet$}; \draw (6,1) node {$\bullet$}; \draw (2,2) node {$\bullet$}; \draw (3,2) node {$\bullet$}; \draw (4,2) node {$\bullet$}; \draw (5,2) node {$\bullet$}; \draw (6,2) node {$\bullet$}; \draw (7,2) node {$\bullet$}; \draw (3,3) node {$\bullet$}; \draw (4,3) node {$\bullet$}; \draw (5,3) node {$\bullet$}; \draw (6,3) node {$\bullet$}; \draw (7,3) node {$\bullet$}; \draw (8,3) node {$\bullet$}; \draw (4,4) node {$\bullet$}; \draw (5,4) node {$\bullet$}; \draw (6,4) node {$\bullet$}; \draw (7,4) node {$\bullet$}; \draw (8,4) node {$\bullet$}; \draw (9,4) node {$\bullet$}; \draw (5,5) node {$\bullet$}; \draw (6,5) node {$\bullet$}; \draw (7,5) node {$\bullet$}; \draw (8,5) node {$\bullet$}; \draw (9,5) node {$\bullet$}; \draw (10,5) node {$\bullet$}; \end{tikzpicture}$$ Structure of the finite decomposition monoids {#finitestruc} ============================================= Structure theorem {#secth} ----------------- Let $T$ be a semigroup. 
One defines two sequences $(T_n)_{n \in \N}$ and $(D_n)_{n \in \N}$ by - $T_0=T, \qquad D_0=\emptyset$; - if $T_n,\, D_n$ are constructed, - either $T_n$ has no neutral and we stop with $D=D_{n}$; - or $T_n$ has a neutral $e_n$ and $$T_{n+1}=T_n\setminus T_n^{\times}\ ;\ D_{n+1}=D_n\cup \{n+1\}.$$ For convenience, we denote by $G_n$ the group of invertible elements of $T_n$: $G_n=T_n^{\times}$ whenever $T_n$ admits a neutral $e_n$. Let $T$ be a finite decomposition semigroup. Then 1. either $D = \left\{ 1 , \dots , N \right\}$ is finite and $T=(\displaystyle \bigsqcup_{1\leq n\leq N}G_n)\sqcup T_{N+1}$ where $T_{N+1}$ is a finite decomposition semigroup without neutral; $T_m=(\displaystyle \bigsqcup_{m\leq n\leq N}G_n)\sqcup T_{N+1}$ is a sub-semigroup of $T$ and there exists a family of morphisms of monoids $\phi_{ij}~: G_j\ra G_{i};\ \phi_{Ni}~: G_i\ra T_{N}$.\ 2. or $D$ is infinite and $T=(\displaystyle \bigsqcup_{n \geq 0}G_n)$; $T_m=(\displaystyle\bigsqcup_{n \geq m}G_n)$ is a sub-semigroup of $T$ and there exists a family of morphisms of monoids $\phi_{ij} : G_j\ra G_{i}$.\ Lemma 1 ------- We will need the following lemma for the proof of the theorem. \[lemme1\] Let $T$ be a finite decomposition semigroup with unit $1$. Then the following properties are equivalent: 1. $u \in T$ is right invertible; 2. $u$ is left invertible; 3. $u$ is cyclic (and hence invertible). **Proof:** Let $u \in T$ be a right invertible element, and let $v \in T$ be such that $uv = 1$. Then, for all $n \in \N$, $u^n v^n = 1$. Since $T$ has the finite decomposition property, it is impossible that all the decompositions of $1$ of the form $u^n v^n$ are different. Thus, there exist $n \in \N$ and $p>0$ such that $(u^{n+p},v^{n+p}) = (u^n , v^n)$. Hence $u^{n+p} = u^n$ and one has $$1 = u^n v^n = u^{n+p} v^n = u^p$$ which proves that $u$ is invertible since $u u^{p-1} = 1 = u^{p-1} u$. The same proof holds if $u$ is left invertible, and both $(iii) \Rightarrow (i)$ and $(iii) \Rightarrow (ii)$ are trivial, hence the claim. 
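The peeling sequence $T_{n+1} = T_n \setminus T_n^{\times}$ can be traced concretely on small finite monoids. The sketch below is illustrative only (the helper names `peel`, `units` and `neutral` are ours, not from the paper); a finite monoid is given here as a list of elements and a binary operation:

```python
def neutral(elems, op):
    """Neutral element of (elems, op), or None if there is none."""
    for e in elems:
        if all(op(e, x) == x and op(x, e) == x for x in elems):
            return e
    return None

def units(elems, op, e):
    """Two-sided invertible elements of the finite monoid with neutral e."""
    return [u for u in elems
            if any(op(u, v) == e and op(v, u) == e for v in elems)]

def peel(elems, op):
    """Iterate T_{n+1} = T_n \\ T_n^x: return the groups G_n peeled off
    and the remaining part without neutral (empty if peeling exhausts T)."""
    groups, current = [], list(elems)
    while current:
        e = neutral(current, op)
        if e is None:
            break                      # remaining semigroup has no neutral
        g = units(current, op, e)
        groups.append(sorted(g))
        current = [x for x in current if x not in g]
    return groups, sorted(current)

# The multiplicative monoid {0, 1}: units {1} are peeled off first,
# then {0} is itself a (trivial) group with neutral 0.
groups, rest = peel([0, 1], lambda a, b: a * b)
```

For the multiplicative monoid of $\Z/3\Z$, `peel([0, 1, 2], lambda a, b: (a * b) % 3)` first removes the group of units $\{1, 2\}$ and then the trivial group $\{0\}$, matching the disjoint-union-of-groups picture of the theorem.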
Lemma 2 ------- In this section, we use the notation of section \[secth\]. Let $T$ be a finite decomposition semigroup. \[lemme2\] For all $x\in T$ and $n\in D$, one has $$e_nxe_n=e_nx=xe_n \in T_n.$$ **Proof:** Let $x \in T$ and $n \in D$. We set $i_0 = {\rm max} \left\{ m \in D , \, xe_m \in T_m \right\}$. If $i_0 = N$ we are done. Assume that $i_0 < N$. One has $x e_{i_0 +1} = x e_{i_0} e_{i_0 + 1}$ (since $e_{i_0}$ is the neutral of $T_{i_0}$ which contains $e_{i_0+1}$); $x e_{i_0} \in T_{i_0}$ hence $x e_{i_0 +1} \in T_{i_0}$. But $x e_{i_0 +1} \notin G_{i_0}$ (if that were the case, then lemma \[lemme1\] would imply that $e_{i_0+1}$ belongs to $G_{i_0}$; this is impossible by definition of $e_{i_0+1}$); thus $x e_{i_0+1} \in T_{i_0} \setminus G_{i_0} = T_{i_0+1}$. This is not possible by definition of $i_0$. Necessarily $i_0 = N$. The same argument proves that $j_0 = {\rm max} \left\{ m \in D , \, e_m x \in T_m \right\}=N$. The claim follows from the fact that $e_n$ is the neutral of $T_n$; hence $x e_n = e_n (x e_n) = (e_n x ) e_n = e_n x$. Proof of the theorem -------------------- Note that, for all $n \in D$, $T_n$ is a semigroup: if $x$ and $y$ belong to $T_n$, then $xy$ belongs to $T_n$. Indeed, if that is not the case, $xy$ belongs to $G_{n-1}$; then $x$ and $y$ are invertible, but this is not possible since $x$ and $y$ belong to $T_n=T_{n-1} \setminus G_{n-1}$. Lemma \[lemme2\] implies that for all $x \in T$ and for all $n \in D$, $e_n x e_n \in T_n$. Hence, $\phi_n : T \ra T_n$ defined by $\phi_n(x) = e_n x e_n$ is a morphism of monoids (since $\phi_n(xy) = e_n x y e_n = e_n x e_n e_n y e_n = \phi_n(x) \phi_n(y)$; of course $\phi_n(e_j) = e_n$). The restrictions $\phi_{i}\Big|_{G_j} : G_j \ra G_i$ define the morphisms $\phi_{ij}$. From now on, we assume that $T$ is a finite decomposition semigroup. 
As an intersection of a nonempty (as soon as $e_1$ exists) family of semigroups, $T_{N+1} = T \displaystyle \setminus \bigsqcup_{n \in D} G_n$ is a finite decomposition semigroup.\ Assume that $D$ is infinite; then $T \displaystyle \setminus \left( \bigsqcup_{n \in D} G_n \right)$ is empty. Indeed, if there were $t \in T \displaystyle \setminus \left( \bigsqcup_{n \in D} G_n \right)$, then $t \in T_n$ for all $n \in D$; thus $e_n t = t$ for all $n \in D$ and $t$ would be an element of $T$ with an infinite number of decompositions; this is not possible.\ If $D$ is finite, $T_{N+1}$ has no neutral. Indeed, assume that there were a neutral $e_{N+1} \in T_{N+1}$. Then $e_{N} e_{N+1} = e_{N}$; hence $e_{N+1}$ is left invertible and thus invertible in $T_N$; this is not possible since $e_{N+1}$ belongs to $T_{N+1} = T_{N} \setminus G_N$. Conclusion ========== We have illustrated the importance of finite decomposition semigroups for the computation of different generalized stuffle products.\ It turns out that every finite decomposition semigroup is the disjoint direct limit of finite groups and possibly a finite decomposition semigroup without neutral.\ Moreover, it is possible to apply the disjoint direct limit process to construct new finite decomposition semigroups. Acknowledgements {#acknowledgements .unnumbered} ================ The authors take advantage of these lines to acknowledge support from the French Ministry of Science and Higher Education under Grant ANR PhysComb and local support from the Project “Polyzetas”. They also wish to express their gratitude to K. Penson and C. Tollu for fruitful discussions. [^1]: In fact, the symbol $\langle \cdot \rangle$ is a character of $(T({\mathscr H}),\shuffle,1_{T({\mathscr H)}})$. [^2]: See [@SLC62; @duchamp3p] for physics and [@duchampint; @Hoffman] for the links with number theory.
--- abstract: 'We theoretically consider charge transport through two quantum dots coupled in series. The corresponding full counting statistics for noninteracting electrons is investigated in the limits of sequential and coherent tunneling by means of a master equation approach and a density matrix formalism, respectively. We clearly demonstrate the effect of quantum coherence on the zero-frequency cumulants of the transport process, focusing on noise and skewness. Moreover, we establish the continuous transition from the coherent to the incoherent tunneling limit in all cumulants of the transport process and compare this with decoherence described by a dephasing voltage probe model.' author: - 'G. Kie[ß]{}lich' - 'P. Samuelsson' - 'A. Wacker' - 'E. Sch[ö]{}ll' title: Counting statistics and decoherence in coupled quantum dots --- [*Introduction*]{}. The analysis of current fluctuations in mesoscopic conductors provides detailed insight into the nature of charge transfer [@BLA00; @NAZ03]. The complete information is available by studying the full counting statistics (FCS), i.e. by the knowledge of all cumulants of the distribution of the number of transferred charges [@LEV; @NAZ03]. As a crucial achievement, the measurement of the third-order cumulant of transport through a single tunnel junction was recently reported [@REU]. To what extent one can extract information about quantum coherence and decoherence from current fluctuations is the subject of intense theoretical investigation: e.g. dephasing in mesoscopic cavities and Aharonov-Bohm rings [@PAL04] and decoherence in a Mach-Zehnder interferometer [@FOE05]. Quantum dots (QDs) constitute a representative system for mesoscopic conductors. Recently, the real-time tunneling of single electrons could be observed in QDs [@WEI], providing an important step towards an experimental observation of the FCS. For single dots the FCS is known to display no effects of quantum coherence [@JON96; @BAG03a]. 
In contrast, in serially-coupled double QDs [@WIE03] the superposition between states from both dots causes prominent coherent effects. Noise properties of these structures have been studied theoretically in both the low- [@ELA02] and finite-frequency [@SUN99; @AGU04] regimes, but no FCS studies are available yet. Experimentally, the low-frequency noise has been investigated very recently in related double-well junctions [@YAU]. In this Letter we show that detailed information about quantum coherence in double QD systems can be extracted from the zero-frequency current fluctuations. For this purpose we elaborate on the FCS in the limits of coherent and incoherent transport through the QD system by means of a density matrix (DM) and master equation (ME) description. We demonstrate a smooth transition between these approaches by decoherence originating from coupling the QDs to a charge detector. The results are compared to a scattering approach, where decoherence is introduced via phenomenological voltage probes. [*Model*]{}. The central quantity in the FCS is $P(N,t_0)$, the distribution function of the number $N$ of transferred charges in the time interval $t_0$. The associated cumulant generating function (CGF) $F(\chi )$ is [@NAZ03] $$\begin{aligned} \exp{[-F(\chi )]}=\sum_NP(N,t_0)\exp{[iN\chi ]} \label{eq:CGF-general}\end{aligned}$$ Here we consider the zero-frequency limit, i.e. $t_0$ much longer than the time for tunneling through the system. From the CGF we can obtain the cumulants $C_k=-(-i\partial_\chi)^kF(\chi )\vert_{\chi=0}$, which are related to e.g. the average current $\langle I\rangle =eC_1/t_0$ and to the zero-frequency noise $S=2e^2C_2/t_0$. The Fano factor is defined as $C_2/C_1$. The skewness of the distribution of transferred charges is given by the third-order cumulant $C_3$. 
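The cumulant extraction $C_k=-(-i\partial_\chi)^kF(\chi)\vert_{\chi=0}$ can be checked numerically. The hedged sketch below differentiates a CGF by central finite differences; with the convention $\exp[-F(\chi)]=\sum_N P(N,t_0)e^{iN\chi}$ of Eq. (\[eq:CGF-general\]), a Poisson distribution of mean $t_0\Gamma$ has $F(\chi)=t_0\Gamma(1-e^{i\chi})$, so all cumulants equal $t_0\Gamma$ and the Fano factor is 1 (the helper name and test value are ours):

```python
import math
import numpy as np

def cumulants(F, kmax=3, h=1e-3):
    """C_k = -(-i d/dchi)^k F(chi) at chi = 0, by central finite differences."""
    out = []
    for k in range(1, kmax + 1):
        # k-th derivative from the binomial central-difference stencil
        d = sum((-1)**j * math.comb(k, j) * F((k / 2 - j) * h)
                for j in range(k + 1)) / h**k
        out.append((-((-1j)**k) * d).real)   # cumulants are real
    return out

# Poissonian CGF in the exp[-F] = <e^{i N chi}> convention of Eq. (1):
t0Gamma = 2.0                                # arbitrary test value
F_poisson = lambda chi: t0Gamma * (1 - np.exp(1j * chi))
C1, C2, C3 = cumulants(F_poisson)            # all three approach t0Gamma
```

The same routine applied to the coherent and sequential CGFs below reproduces the current, noise and skewness curves of Fig. \[fig2\].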
![(Color online) Current statistics for $\Omega /\Gamma =0.5$ and for various dephasing rates $\Gamma_\varphi /\Gamma =$0, 5, 20; dashed lines: master equation (ME) approach, solid lines: density matrix (DM) formalism; on-resonance $\Delta \varepsilon =0$, symmetric contact coupling: $\Gamma =\Gamma_\textrm{e}=\Gamma_\textrm{c}$. $\Gamma_0\equiv (2\Gamma \Omega^2)/[ 4\Omega^2+\Gamma (\Gamma +\Gamma_\varphi )]$. Inset: Setup of the coupled QD system with (e)mitter and (c)ollector contact and mutual coupling $\Omega$.[]{data-label="fig1"}](figure1){width="45.00000%"} The setup of the coupled QD system is shown as the inset of Fig. \[fig1\]: QD1 is connected to the emitter with a tunneling rate $\Gamma_\textrm{e}$ and QD2 to the collector contact with rate $\Gamma_\textrm{c}$. Mutually they are coupled by the tunnel matrix element $\Omega$. One level in each dot, at energies $\varepsilon_1$ and $\varepsilon_2$ respectively, is assumed. We consider zero temperature and work in the limit of large bias applied between the collector and emitter, with the broadened energy levels well inside the bias window. To compare DM/ME- and scattering approaches we consider noninteracting electrons (spin degrees of freedom decouple, we give all results for a single spin direction) throughout this Letter. We note, however, that strong Coulomb blockade can be treated within the DM/ME-approaches along the same lines. [*Coherent tunneling*]{}. The FCS for coherent tunneling through coupled QDs can be obtained from the approach developed by Gurvitz and coworkers in a series of papers [@GUR96c; @ELA02] (for related work see e.g. Ref. [@RAM04]). Starting from the time dependent Schr[ö]{}dinger equation one derives a modified Liouville equation, a system of coupled first order differential equations for DM elements $\rho_{\alpha\beta}^N(t_0)$ at a given number $N$ of electrons transferred through the QD system at time $t_0$. 
Here $\alpha ,\beta\in \{a,b,c,d\}$, where $a,b,c$ and $d$ denote the Fock-states $|00\rangle, |10\rangle, |01\rangle,|11\rangle$ of the system, i.e., no electrons, one electron in the first dot, one in the second dot, and one in each dot, respectively. The probability distribution is then directly given by $P(N,t_0)=\rho_{aa}^N(t_0)+\rho_{bb}^N(t_0)+\rho_{cc}^N(t_0)+\rho_{dd}^N(t_0)$. The FCS is formally obtained by first Fourier transforming the DM elements as $\rho_{\alpha\beta}(\chi,t_0)=\sum_{N}\rho_{\alpha\beta}^N(t_0)e^{iN\chi}$. This gives the Fourier transformed equation $\dot \rho=\mathcal{L}_c(\chi)\rho$, with $$\begin{aligned} \mathcal{L}_c(\chi )=\left( \begin{array}{cccccc} -\Gamma_\textrm{e} & 0 & \Gamma_\textrm{c}e^{i\chi} & 0 & 0 & 0\\ \Gamma_\textrm{e} & 0 & 0 & \Gamma_\textrm{c}e^{i\chi} & 0 & 2\Omega\\ 0 & 0 & -2\Gamma & 0 & 0 & -2\Omega \\ 0 & 0 & \Gamma_\textrm{e} & -\Gamma_\textrm{c} & 0 & 0\\ 0 & 0 & 0 & 0 & -\Gamma & -\Delta\varepsilon\\ 0 & -\Omega & \Omega & 0 & \Delta\varepsilon & -\Gamma \end{array} \right) \label{eq:matrix-coh}\end{aligned}$$ and $\rho\equiv \big(\rho_{aa},\rho_{bb}, \rho_{cc},\rho_{dd},\textrm{Re}[\rho_{bc}],\textrm{Im}[\rho_{bc}]\big)^T$, $\Gamma\equiv (\Gamma_\textrm{e}+\Gamma_\textrm{c})/2$, $\Delta\varepsilon\equiv\varepsilon_1 -\varepsilon_2$. Note that the counting field $\chi$ enters the matrix elements in (\[eq:matrix-coh\]), where an electron jumps from QD2 into the collector contact. The CGF is then obtained as the eigenvalue of $\mathcal{L}_c$ which goes to zero for $\chi=0$, as required by probability conservation \[see Eq. (\[eq:CGF-general\])\] $$\begin{aligned} F_c(\chi)=\frac{t_0}{2}\left[2\Gamma-\left(p_1+2\sqrt{p_2^2+16\Gamma^2 \Omega^2 (e^{i\chi}-1)}\right)^{1/2}\right] \label{eq:CGF-coh}\end{aligned}$$ with $p_1=2(\Gamma^2-4\Omega^2+\Delta\varepsilon^2)$ and $p_2=\Gamma^2+4\Omega^2-\Delta\varepsilon^2$ for symmetric contact coupling $\Gamma_\textrm{e}=\Gamma_\textrm{c}=\Gamma$. 
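The consistency of the coefficient matrix (\[eq:matrix-coh\]) and the CGF (\[eq:CGF-coh\]) can be verified numerically: probability conservation requires a zero eigenvalue of $\mathcal{L}_c$ at $\chi=0$, and for small $\chi$ the CGF is $-t_0$ times the eigenvalue branch emanating from zero. A hedged sketch (parameter values and the principal-branch choice for the nested square roots are ours):

```python
import numpy as np

def L_c(chi, G_e=1.0, G_c=1.0, Om=0.5, de=0.0):
    """Coefficient matrix of Eq. (2); G = (G_e + G_c)/2."""
    G = (G_e + G_c) / 2.0
    z = np.exp(1j * chi)
    return np.array([
        [-G_e, 0.0,  G_c * z,  0.0,     0.0,  0.0    ],
        [ G_e, 0.0,  0.0,      G_c * z, 0.0,  2 * Om ],
        [ 0.0, 0.0, -2 * G,    0.0,     0.0, -2 * Om ],
        [ 0.0, 0.0,  G_e,     -G_c,     0.0,  0.0    ],
        [ 0.0, 0.0,  0.0,      0.0,    -G,   -de     ],
        [ 0.0, -Om,  Om,       0.0,     de,  -G      ],
    ], dtype=complex)

def F_c(chi, t0=1.0, G=1.0, Om=0.5, de=0.0):
    """CGF of Eq. (3) for symmetric coupling G_e = G_c = G (principal branches)."""
    p1 = 2 * (G**2 - 4 * Om**2 + de**2)
    p2 = G**2 + 4 * Om**2 - de**2
    inner = np.sqrt(p2**2 + 16 * G**2 * Om**2 * (np.exp(1j * chi) - 1) + 0j)
    return (t0 / 2) * (2 * G - np.sqrt(p1 + 2 * inner + 0j))

# F_c(0) = 0 and L_c(0) has a zero eigenvalue (probability conservation);
# for small chi, -F_c(chi)/t0 is the dominant eigenvalue of L_c(chi).
```

As a cross-check, a first-order expansion of the zero eigenvalue in $\chi$ reproduces the average current $C_1/t_0 = 2\Gamma\Omega^2/(\Gamma^2+4\Omega^2)$, i.e. $\Gamma_0$ of the Fig. \[fig1\] caption for $\Gamma_\varphi=0$.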
[*Sequential tunneling*]{}. For incoherent tunneling the FCS can be obtained along similar lines from a ME [@BAG03a] for the diagonal elements of $\rho$ as $\dot{\bar \rho}=\mathcal{L}_s\bar \rho$, with $\bar \rho=(\rho_{aa},\rho_{bb},\rho_{cc},\rho_{dd})$. The coefficient matrix is $$\begin{aligned} \mathcal{L}_s(\chi )=\left( \begin{array}{cccc} -\Gamma_\textrm{e} & 0 & \Gamma_\textrm{c}e^{i\chi} & 0\\ \Gamma_\textrm{e} & -Z & Z & \Gamma_\textrm{c}e^{i\chi}\\ 0 & Z & -(2\Gamma +Z) & 0\\ 0 & 0 & \Gamma_\textrm{e} & -\Gamma_\textrm{c} \end{array} \right) \label{eq:matrix-seq}\end{aligned}$$ with the coupling between the single-particle states given by Fermi’s golden rule: $Z\equiv (2\vert\Omega\vert^2/\Gamma) L(\Delta\varepsilon,2\Gamma )$ with the normalized Lorentzian $L(x,w)\equiv [1+(2x/w)^2]^{-1}$ [@SPR04]. The CGF corresponds to the eigenvalue of the matrix (\[eq:matrix-seq\]) which goes to zero for $\chi =0$ and reads $$\begin{aligned} F_s(\chi )&=&\frac{t_0}{6}\left[(1+i\sqrt{3})q_1+(1-i\sqrt{3})q_2+6\Gamma+4Z\right], \nonumber \\ q_{1/2}&=&\left[-u\pm\sqrt{u^2-v^3}\right]^{1/3} \label{eq:CGF_seq}\end{aligned}$$ with $u=8Z^3+9Z\Gamma^2(1-3e^{i\chi})$ and $v=4Z^2+3\Gamma^2$. [*Results*]{}. The probability distributions for coherent and incoherent tunneling obtained from the CGFs (\[eq:CGF-coh\]) and (\[eq:CGF\_seq\]), respectively, in a saddle-point approximation are plotted in Fig. \[fig1\] for $\Omega /\Gamma =0.5$, where the effect of coherence is most pronounced. We see that the fluctuations are smaller in the coherent limit, i.e. decoherence generally enhances current fluctuations. In the limits of small inter-dot coupling $\Omega \ll\Gamma $ one obtains a Poissonian transfer of unit elementary charges and for large coupling $\Omega\gg\Gamma $ the FCS of a single QD is recovered [@JON96; @BAG03a]. In these limits the statistics for sequential and coherent tunneling are indistinguishable. 
![(Color online) Average current $C_1$, noise $C_2$ in units of $t_0\Gamma$, Fano factor $C_2/C_1$, normalized skewness $C_3/C_1$ vs. coupling $\Omega$ for various dephasing rates $\Gamma_\varphi /\Gamma =$0, 5, 20; Master equation approach (ME): dashed lines, Density matrix formalism (DM): solid lines. On-resonance: $\Delta\varepsilon=0$, symmetric contact coupling: $\Gamma =\Gamma_\textrm{e}=\Gamma_\textrm{c}$.[]{data-label="fig2"}](figure2){width="45.00000%"} The CGFs for coherent (\[eq:CGF-coh\]) and sequential (\[eq:CGF\_seq\]) tunneling yield the same expression for the average current through the coupled QD system [@GUR91; @KOR94a; @SPR04]: $$\begin{aligned} \langle I\rangle = e\left[\frac{1}{\Gamma_\textrm{e}}+\frac{1}{\Gamma_\textrm{c}}+\frac{1}{\Gamma_i}\right]^{-1} L\left(\Delta\varepsilon,2\Gamma\sqrt{1+\frac{4\vert\Omega\vert^2}{\Gamma_\textrm{e}\Gamma_\textrm{c}}}\right)\nonumber\\ \label{eq:current}\end{aligned}$$ with $\Gamma_i\equiv 2\Omega^2/\Gamma$. The higher-order cumulants $C_k$ with $k\geq$2 deviate for intermediate $\Omega$, reflecting their sensitivity to quantum coherence in the transport process. For $\Gamma=\Gamma_e=\Gamma_c$ and $\Delta\varepsilon=0$ we have the Fano factors [@SUN99; @ELA02] $$\frac{S_c}{2e\langle I\rangle}=\frac{\Gamma ^4-2\Gamma ^2\Omega^2+8\Omega^4} {(\Gamma ^2+4\Omega^2)^2}$$ for the coherent case and $$\frac{S_s}{2e\langle I\rangle}=\frac{\Gamma ^4+2\Gamma ^2\Omega^2+8\Omega^4} {(\Gamma ^2+4\Omega^2)^2}$$ for the sequential, incoherent case. Clearly, coherence suppresses the noise [@SUN99; @AGU04]. The noise and the Fano factors are shown in Fig. \[fig2\] (results for $\Gamma_{\varphi}$=0). The noise for coherent tunneling shows a local minimum at $2\Omega =\Gamma $. At this coupling the normalized skewness has a local maximum, as can be seen in Fig. \[fig2\], and a close inspection reveals a FCS identical to a Poissonian transfer of quarter elementary charges: $F(\chi )=t_0\Gamma (e^{i\chi /4}-1)$. 
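The two Fano-factor formulas above are easy to evaluate. The sketch below (function names are ours) confirms the quarter-charge value $1/4$ at $2\Omega=\Gamma$ for coherent tunneling versus $1/2$ for sequential tunneling, the general suppression of noise by coherence, and the common limits $1$ (Poissonian, $\Omega\ll\Gamma$) and $1/2$ (single-QD value, $\Omega\gg\Gamma$) where the two statistics are indistinguishable:

```python
def fano_coherent(Om, G=1.0):
    """S_c / (2 e <I>) at resonance, symmetric coupling Gamma_e = Gamma_c = G."""
    return (G**4 - 2 * G**2 * Om**2 + 8 * Om**4) / (G**2 + 4 * Om**2)**2

def fano_sequential(Om, G=1.0):
    """S_s / (2 e <I>) for incoherent (sequential) tunneling."""
    return (G**4 + 2 * G**2 * Om**2 + 8 * Om**4) / (G**2 + 4 * Om**2)**2

# The difference is -4 G^2 Om^2 / (G^2 + 4 Om^2)^2 <= 0:
# coherence suppresses the noise at every coupling strength.
```

Scanning `fano_coherent` over $\Omega$ also locates its minimum near $2\Omega=\Gamma$, matching the local noise minimum visible in Fig. \[fig2\].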
[*Decoherence - charge detector*]{}. In order to connect the limits of coherent and incoherent charge transport through the QD system we consider the exponential damping of the off-diagonal elements in the modified Liouville equation with rate $\Gamma_\varphi$: i.e. in the last two rows of the coefficient matrix (\[eq:matrix-coh\]) $\Gamma$ is replaced by $\Gamma +\Gamma_\varphi$. This apparently phenomenological treatment of decoherence can be substantiated, e.g., by the introduction of a quantum point contact close to one of the QDs: whenever an electron enters the QD the transmission through the quantum point contact changes. This charge detection leads to the exponential damping of the off-diagonals, as microscopically derived in Ref. [@GUR97]. Due to the finite coupling $\Omega$, it also leads to an exponential relaxation of the diagonal density matrix elements. Its effect on the FCS is presented in Fig. \[fig1\] and its effect on the current and noise in Fig. \[fig2\]. For comparison with the sequential tunneling cumulants, the broadening of the resonance due to the coupling to the quantum point contact has to be taken into account, and therefore the replacement $\Gamma \rightarrow \Gamma +\Gamma_\varphi$ in $Z$ of the coefficient matrix (\[eq:matrix-seq\]) is carried out. Then, the currents $C_1$ in both treatments agree for any $\Gamma_\varphi$ (Fig. \[fig2\]). The higher-order cumulants merge for $\Gamma_\varphi\gg\Omega$, as shown for the noise $C_2$, the Fano factor $C_2/C_1$ and the normalized skewness $C_3/C_1$ in Fig. \[fig2\]. [*Decoherence - Voltage probe model*]{}. The coherent FCS in Eq. (\[eq:CGF-coh\]) can also be obtained from the scattering formula of Levitov and coworkers [@LEV], $F(\chi ) =(t_0/\hbar)\int d\varepsilon \ln[1+T(\varepsilon)(e^{i\chi}-1)]$, where $T(\varepsilon)$ is the transmission probability through the QD system (see e.g. [@ELA02]). 
This makes it interesting to compare dephasing within the DM approach with dephasing in a scattering formalism. This is done by introducing phenomenological voltage probes [@BLA00] coupled with strength $\Gamma_{\varphi}=\Gamma_{\varphi 1}=\Gamma_{\varphi 2}$ to the QDs (see inset of Fig. \[fig3\]a). The probes absorb and subsequently re-emit electrons, thereby randomizing their phases. Here we focus on the current and the noise; higher cumulants can be investigated with a modified version of the stochastic path-integral technique of Ref. [@PIL03], but this is beyond the scope of the present Letter. ![Fano factor vs. inter-QD coupling $\Omega$ for various dephasing rates $\Gamma_\varphi$. a) elastic voltage probe, b) inelastic voltage probe in scattering formalism (solid curves). Dashed curves: master equation (ME) Fano factor for $\Gamma_\varphi /\Gamma =20$. On-resonance: $\Delta\varepsilon=0$, symmetric coupling: $\Gamma =\Gamma_\textrm{e}=\Gamma_\textrm{c}$.[]{data-label="fig3"}](figure3){width="45.00000%"} The scattering matrix $\mathbf{s}$ for the four-terminal QD-probe system is given by $$\begin{aligned} \mathbf{s}&=&1-iW^TGW, \hspace{0.2cm}G=[\varepsilon-H+iWW^T]^{-1} \\ \nonumber H&=&\left(\begin{array}{cc} \varepsilon_1 & \Omega \\ \Omega & \varepsilon_2 \end{array}\right), W=\left(\begin{array}{cccc} \sqrt{\Gamma_{\varphi}} &\sqrt{\Gamma_\textrm{c}} &0 &0 \\ 0 & 0 & \sqrt{\Gamma_{\varphi}} &\sqrt{\Gamma_\textrm{e}}\end{array}\right)\end{aligned}$$ The average current in lead $\alpha=e,c,\varphi_1,\varphi_2$ is given by [@BLA00; @BUE92] $$\begin{aligned} \langle I_\alpha\rangle =\frac{e}{h}\sum_{\beta}\int d\varepsilon A_{\beta\beta}^\alpha (\varepsilon )f_\beta (\varepsilon ) \label{eq:scatt-current}\end{aligned}$$ with $A_{\beta\gamma}^\alpha (\varepsilon)=\delta_{\alpha\beta}\delta_{\alpha\gamma}- s_{\alpha\beta}^\dagger (\varepsilon )s_{\alpha\gamma}(\varepsilon )$ and the distribution function $f_\alpha (\varepsilon )$ of terminal $\alpha$. 
The zero-frequency noise between terminal $\alpha$ and $\beta$ reads [@BUE92; @BLA00] $$\begin{aligned} S_{\alpha\beta}=\frac{2e^2}{h}\sum_{\gamma\delta}\int d\varepsilon A_{\gamma\delta}^\alpha (\varepsilon ) A_{\delta\gamma}^\beta (\varepsilon)f_\gamma (\varepsilon )\big[1-f_\delta (\varepsilon )\big] \label{eq:spectral-power}\end{aligned}$$ We first consider an elastic, purely dephasing voltage probe [@JON], where the average current as well as the low-frequency current fluctuations into the probe are zero at each energy. The conservation of average current gives the average distribution functions $f_{\varphi1/\varphi2}$. From the conservation of the current fluctuations one obtains the fluctuating part of the distribution functions $\delta f_{\varphi1/\varphi2}$ in terms of the bare current fluctuations [@BLA00]. The total noise is then obtained as a weighted sum of the bare current correlations in Eq. (\[eq:spectral-power\]). It is found that both current and noise qualitatively reproduce the DM result. The Fano factor is plotted in Fig. \[fig3\]a; however, there is a quantitative difference. Since in the DM approach the electrons in the dots can exchange energy with electrons at the quantum point contacts, the dephasing is inelastic and a quantitative agreement with an elastic scattering dephasing approach is not to be expected. To account for inelastic dephasing we next consider inelastic voltage probes which conserve only the total, energy-integrated current and fluctuations. Trying to mimic the effect of the point contacts in the DM approach, we assume the distribution functions in the probes to be constant, independent of energy in the entire bias window. The average current and noise are then obtained along the same lines as for the purely dephasing probe. We find that the average current coincides with the DM result; the noise, however, again differs quantitatively but not qualitatively. The Fano factor is plotted in Fig. \[fig3\]b. 
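Current conservation over the four terminals is expressed by the unitarity of $\mathbf{s}$. Prefactor conventions for the self-energy term $WW^T$ vary between references; the sketch below uses the choice $G=[\varepsilon-H+(i/2)WW^T]^{-1}$, for which $\mathbf{s}=1-iW^TGW$ is exactly unitary with the $\Gamma$-rate entries of $W$ given in the text (this factor, and all parameter values, are our assumptions, not taken from the Letter):

```python
import numpy as np

def s_matrix(eps, e1=0.0, e2=0.2, Om=0.5, G_phi=0.4, G_c=1.0, G_e=0.8):
    """Four-terminal scattering matrix s = 1 - i W^T G W for the QD-probe setup.
    Self-energy convention: G = [eps - H + (i/2) W W^T]^{-1} (our choice,
    made so that s is unitary, i.e. current is conserved)."""
    H = np.array([[e1, Om],
                  [Om, e2]], dtype=complex)
    W = np.array([[np.sqrt(G_phi), np.sqrt(G_c), 0.0, 0.0],
                  [0.0, 0.0, np.sqrt(G_phi), np.sqrt(G_e)]], dtype=complex)
    G = np.linalg.inv(eps * np.eye(2) - H + 0.5j * (W @ W.T))
    return np.eye(4) - 1j * (W.T @ G @ W)

s = s_matrix(0.3)   # scattering matrix at one energy inside the bias window
```

Unitarity of `s` at each energy guarantees that the currents of Eq. (\[eq:scatt-current\]) summed over all four terminals vanish.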
We thus conclude that in double QD systems, dephasing in a scattering and a DM approach yields qualitatively similar but in general quantitatively different results. [*Conclusions*]{}. Within density matrix and master equation approaches, we have examined the FCS for coherent and sequential charge transport through coupled QDs. While the average currents in the two cases coincide, all higher cumulants differ, clearly demonstrating the sensitivity of the charge transport to quantum coherence, which generally suppresses the fluctuations. Coupling the QDs to a charge detector introduces decoherence, which results in a continuous transition from coherent to sequential tunneling. A scattering approach, where decoherence is introduced via phenomenological voltage probes, gives qualitatively similar results. [*Acknowledgements*]{}. We acknowledge helpful discussions with S. Pilgram and M. B[ü]{}ttiker. This work was supported by Deutsche Forschungsgemeinschaft in the framework of Sfb 296 and the Swedish Research Council.
--- author: - 'N. Pradel' - 'P. Charlot' - 'J.-F. Lestrade' bibliography: - '3021.bib' date: 'Received / Accepted ' title: 'Astrometric accuracy of phase-referenced observations with the VLBA and EVN' ---

Introduction {#Introduction}
============

Very Long Baseline Interferometry (VLBI) narrow-angle astrometry pioneered by Shapiro et al. (1979) makes use of observations of pairs of angularly close sources to cancel atmospheric phase fluctuations between the two close lines of sight. In this initial approach, the relative coordinates between the two strong quasars and other ancillary parameters were adjusted by a least-squares fit of the differenced phases after connecting the VLBI phases for both sources over a multi-hour experiment. Then, [@Mar83; @Mar84] made the first phase-referenced map where structure and astrometry were disentangled for the components A and B of a double quasar. Both of these experiments demonstrated formal errors at the level of a few tens of microarcseconds or less in the relative angular separation between the two sources. Another approach was designed to tackle faint target sources by observing a strong reference source (quasar) to increase the integration time of VLBI from a few minutes to a few hours [@Les90]. This approach improves the sensitivity by the factor $$\sqrt { N_b\times\frac{T_{int}}{T_{scan}}},$$ where $N_b$ is the number of VLBI baselines, $T_{int}$ is the extended integration time permitted by phase-referencing (several hours) and $T_{scan}$ is the individual scan length (a few minutes). As this factor is very large (e.g. $> 50$ for the 45 baselines of the Very Long Baseline Array), faint target sources can be detected and their positions can be concomitantly measured with high precision. In the approach above, the VLBI phases of the strong reference source are connected, interpolated in time and differenced with the VLBI phases of the faint source, which do not need to be connected.
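The size of this sensitivity gain is easy to verify numerically. In the minimal sketch below, the 4 h integration time and 3 min scan length are illustrative assumptions, not values taken from the text:

```python
import math

def sensitivity_gain(n_baselines, t_int, t_scan):
    """Phase-referencing sensitivity gain sqrt(N_b * T_int / T_scan)."""
    return math.sqrt(n_baselines * t_int / t_scan)

# VLBA: 10 stations give N_b = 10*9/2 = 45 baselines.
# Assumed times: 4 h of phase-referenced integration vs 3-min scans.
n_b = 10 * 9 // 2
gain = sensitivity_gain(n_b, t_int=4 * 3600.0, t_scan=3 * 60.0)
print(round(gain, 1))  # 60.0, consistent with the "> 50" quoted above
```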
The differenced visibilities are then inverted to produce the map of the brightness distribution of the faint target source, and its position is determined by directly reading the coordinates of the map peak, which are relative to the a priori reference source coordinates. The map is usually highly undersampled but suffices for astrometry. This [*mapping astrometry*]{} technique is implemented in the SPRINT software [@Les90] and a similar procedure is also used within the NRAO AIPS package to produce phase-referenced VLBI maps with absolute source coordinates on the sky. While phase-referencing in this way is efficient, it does not provide a direct positional uncertainty, as least-squares fitting of differenced phases does [@Sha79]. In order to circumvent this problem, we have developed simulations to evaluate the impact of systematic errors on the derived astrometric results. Such simulations have been carried out for a pair of sources observed with the Very Long Baseline Array (VLBA) and the European VLBI Network (EVN) at various declinations and angular separations. Systematic errors in station coordinates, Earth rotation parameters, reference source coordinates and tropospheric zenith delays were studied in turn. The results of the simulations are summarized below in tables that indicate positional uncertainties when considering these systematic errors either separately or altogether. Such tables can be further interpolated to determine the accuracy of any full-track experiment with the VLBA and EVN. Our study includes atmospheric fluctuations caused by the turbulent atmosphere above all stations. These fluctuations have been considered uniform and equivalent to a delay rate noise of $0.1$ ps/s for all stations. The impact of these fluctuations is limited if the antenna switching cycle between the two sources is fast enough. The phase structure function measured at 22 GHz above the VLA by [@Car99] provides prescriptions on this switching time.
At high frequency, it can be as short as 10 s, as e.g. in @Rei03 who carried out precise 43 GHz VLBA astrometric observations of Sgr A$^*$ at a declination of $-28\degr$. Switching time in more clement conditions is typically a few minutes at 8.4 GHz for northern sources. A few applications of [*mapping astrometry*]{} are the search for extra-solar planets around radio-emitting stars [@Les94], the determination of the Gravity Probe B guide star proper motion [@Leb99], the determination of absolute motions of VLBI components in extragalactic sources, e.g. in compact symmetric objects [@Cha03] or core-jet sources [@Ros99], probing the jet collimation region in extragalactic nuclei [@Ly004], pulsar parallax and proper motion measurements [@Bri02] and the determination of parallaxes and proper motions of maser sources in the whole Galaxy as planned with the VERA project [@Kaw00; @Hon00].

Method {#Method}
======

As indicated in e.g. [@Tho86], the theoretical precision of astrometry with the interferometer phase is $$\sigma_{\alpha, \delta} = {1 \over {2 \pi}} ~ {1 \over SNR } ~ { \lambda \over B} ,$$ where $SNR$ is the signal-to-noise ratio of the observation, $\lambda$ is the wavelength and $B$ is the baseline length projected on the sky. For observations with the VLBA ($B\sim 8000$ km), $\lambda = 3.6$ cm, and a modest $SNR$ of $10$, this theoretical precision is a breathtaking $\sim 15~ \mu$as. Although a single observation of the target yields an ambiguous position, multiple observations over several hours easily remove ambiguities even with a sparse u-v plane coverage [@Les90]. While the theoretical precision above might be regarded as the potential accuracy attainable for the VLBI, systematic errors in the model of the phase limit narrow-angle astrometry precision to roughly ten times this level in practice [@Fom99].
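The quoted $\sim 15~\mu$as follows directly from the formula above; a minimal numerical check:

```python
import math

# radians -> microarcseconds
RAD_TO_MUAS = math.degrees(1.0) * 3600.0e6

def phase_precision_muas(snr, wavelength_m, baseline_m):
    """Theoretical precision sigma = (1/2pi)(1/SNR)(lambda/B) in muas."""
    return (wavelength_m / baseline_m) / (2.0 * math.pi * snr) * RAD_TO_MUAS

# Numbers quoted in the text: lambda = 3.6 cm, B ~ 8000 km, SNR = 10
sigma = phase_precision_muas(10.0, 0.036, 8.0e6)
print(round(sigma, 1))  # 14.8, i.e. the "~15 muas" of the text
```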
An analytical study of systematic errors in phase-referenced VLBI astrometry over a single baseline is given in @Sha79 and it shows that all systematic errors are scaled by the source separation. Another error analysis in such differential VLBI measurements can be found in @Mor84. However, for modern VLBI arrays with 10 or more antennae, the complex geometry makes the analytical approach intractable. For this reason, we have estimated such systematic errors by simulating VLBI visibilities and inverting them for a range of model parameters (station coordinates, reference source coordinates, Earth Orientation Parameters, and tropospheric dry and wet zenith delays) corresponding to the expected errors in these parameters. The visibilities were simulated for a pair of sources at declinations $-25\degr$, $0\degr$, $25\degr$, $50\degr$, $75\degr$, $85\degr$ and with angular separations $0.5\degr$, $1\degr$ and $2\degr$ for the VLBA, EVN and global VLBI array (VLBA+EVN). For each of these cases, we simulated visibilities every 2.5 min from source rise to set (full track) with a lower limit on elevation of 7$\degr$. The adopted flux for each source (calibrator and target) was 1 Jy to make the phase thermal noise negligible in our simulations. For applications to faint target sources, one should combine the corresponding thermal astrometric uncertainty (Eq. 1) with the systematic errors derived below. The simulated visibilities were then inverted using uniform weighting to produce a phase-referenced map of the target source and estimate its position. This operation was repeated 100 times in a [*Monte Carlo*]{} analysis after slightly varying the parameters of the model based on errors drawn from a Gaussian distribution with zero mean and plausible standard deviation.
We report the rms of the differences found between the known a priori position of the target source and the resulting estimated positions as a measure of the corresponding systematic errors for each of the above cases. We have adopted the usual astrometric frequency of $8.4$ GHz for this analysis. Phase model used in simulation {#model} ============================== The phase delay and group delay in VLBI are described in @Sov98. The phase $ \phi = \nu \tau$ at frequency $\nu $ is related to the interferometer delay $$\tau = \tau_g + \tau_{trop} +\tau_{iono}+ \tau_R + \tau_{struc} + \tau_{clk} .$$ Specifically, the geometric delay is: $$\tau_g = { { { [P][N][EOP]} { \vec b . \vec k \over c }}}$$ with the precession matrix $[P]$, the nutation matrix $[N]$, the Earth Orientation Parameters matrix $[EOP]$, the baseline coordinates $\vec b$ in the terrestrial frame, the source direction coordinates $\vec k$ computed with source right ascension and declination in the celestial frame. The “retarded baseline correction” to account for Earth rotation during elapsed time $\tau_g$ must also be modelled [@Sov98]. The differential tropospheric delay $\tau _{trop}$ between the two stations is computed with a static tropospheric model and the simple mapping function $ 1/\sin E $ (where $E$ is the source elevation at station) to transform the zenith delay into the line-of-sight delay at each station. The differential ionospheric phase delay $\tau_{iono} = -{{k {\rm TEC}}/ {\nu^2}}$ is related to the total electronic content TEC in the direction of the source at each station. The General Relativity delay $\tau_R$ takes into account light propagation travel time in the gravitational potential of the Sun. The source structure contribution $\tau_{struc}$ can be computed according to the model by [@Cha90] but was not included in our simulations which are for point sources. The clock delay $\tau _{clk}$ cancels in differenced VLBI phases. 
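The static tropospheric term above is worth a small numerical illustration (our own sketch; the delay values are made up). The simple mapping function $1/\sin E$ maps a zenith delay to the line of sight, and only the *differential* delay between the two close lines of sight survives in the differenced phase, which is why narrow-angle astrometry is so much less sensitive to the troposphere than absolute astrometry:

```python
import math

def line_of_sight_delay(zenith_delay, elevation_deg):
    """Map a zenith delay to the line of sight with the simple
    mapping function 1 / sin(E) used in the phase model above."""
    return zenith_delay / math.sin(math.radians(elevation_deg))

# At E = 30 deg the mapping function is exactly 2
assert abs(line_of_sight_delay(1.0, 30.0) - 2.0) < 1e-12

# Illustrative wet zenith delay of 10 cm; two sources 1 deg apart
tau_z = 10.0  # cm (made-up value)
d = line_of_sight_delay(tau_z, 30.0) - line_of_sight_delay(tau_z, 31.0)
print(round(d, 3))  # 0.584 cm differential, vs a 20 cm slant delay
```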
The model above is that implemented in the SPRINT software used for our simulations. It is thought to be complete for narrow-angle astrometry, and additional refinements, such as ocean loading, atmospheric loading, etc., would make no difference to our results. We have not studied the ionosphere contribution to systematic errors. The unpredictable nature of the ionosphere makes this task difficult. Calibration of the ionosphere by dual-frequency observations, or over a wide bandwidth at low frequency [@Bri02], or simply by observing at high frequency ($>10$ GHz) where the effect is small, offers solutions to this problem.

Results
=======

VLBA {#VLBA}
----

  Parameter                        Adopted rms error
  ------------------------------   -----------------
  Source coordinates
    $\alpha_0 \cos \delta_0$       0.25/1 mas
    $\delta_0$                     0.25/1 mas
  Station coordinates
    $X$                            1–2 mm
    $Y$                            1–3 mm
    $Z$                            1–2 mm
  Earth Orientation Parameters
    $X_p$                          0.2 mas
    $Y_p$                          0.2 mas
    $UT1-UTC$                      0.02 ms
    $\psi \sin\epsilon$            0.3 mas
    $\epsilon$                     0.3 mas

  : Adopted rms errors for the source coordinates, VLBA station coordinates and Earth Orientation Parameters in our [*Monte Carlo*]{} simulations.[]{data-label="errors"}

  Station         $\tau_{dtrp}$   $\Delta\tau_{dtrp}$   $\tau_{wtrp}$ (mean)   $\Delta\tau_{wtrp}$ (mean)   $\tau_{wtrp}$ (max)   $\Delta\tau_{wtrp}$ (max)
  (all in cm)
  Brewster        225             0.5                    8                     2.7                          13                     4.3
  Fort Davis      192             0.5                    8                     2.7                          15                     5.0
  Hancock         223             0.5                    9                     3.0                          19                     6.3
  Kit Peak        185             0.5                    6                     2.0                          15                     5.0
  Los Alamos      185             0.5                    6                     2.0                          13                     4.3
  Mauna Kea       149             0.5                    1                     2.0                           4                     2.0
  North Liberty   225             0.5                   10                     3.3                          19                     6.3
  Owens Valley    199             0.5                    5                     2.0                          20                     6.7
  Pietown         176             0.5                    4                     2.0                          12                     4.0
  Saint Croix     213             0.5                   22                     7.3                          30                    10.0

  : Dry and wet tropospheric zenith path delays ($\tau_{dtrp}$ and $\tau_{wtrp}$) at the VLBA stations
along with the adopted rms errors $\Delta\tau_{dtrp}$ and $\Delta\tau_{wtrp}$ in our [*Monte Carlo*]{} simulations.[]{data-label="delay"} The parameter rms errors adopted as plausible for the VLBA phase model are listed in Tables \[errors\] and \[delay\]. The reference source coordinate uncertainties ($\Delta\alpha_0\cos\delta_0$, $\Delta\delta_0$) of 1 mas are typical of those in the VLBA Calibrator Survey [@Bea02], from which most of the reference sources originate. However, ICRF extragalactic sources have better position accuracies down to 0.25 mas [@Ma998]. We have thus carried out the calculations for both of these cases (1 mas and 0.25 mas) and both $\alpha_0$ and $\delta_0$ have been perturbed by these uncertainties in our simulations. The uncertainties for the station coordinates are from the ITRF2000 frame [@Bou04] while those for the Earth Orientation Parameters are from the IERS web site[^1]. The adopted dry tropospheric rms error $\Delta\tau _{dtrp}$ of 0.5 cm corresponds to $2.5$ millibars in atmospheric pressure uncertainty at sea level. Although barometer reading is usually better, the absolute calibration of station barometers is at this level. Uncertainties in the wet tropospheric zenith delay $\tau_{wtrp}$ derived from temperature and humidity are known to be large [@Saa73]. Experience makes us believe that a 30% error is likely on $\tau_{wtrp}$ and thus we took 1/3 of $\tau_{wtrp}$ as the plausible rms error $\Delta\tau_{wtrp}$ with a minimum value of 2 cm. We carried out simulations for both mean and maximum values of wet zenith path delays based on estimates of $\tau _{wtrp}$ recently derived from multiple VLBA geodetic and astrometric sessions [@Sov03]. The maximum wet zenith delays and corresponding errors were used to investigate the impact of extreme weather conditions on observations. These values are listed for each VLBA station in Table \[delay\]. 
Table \[VLBARA\]: rms astrometric errors (in $\mu$as) for the VLBA with a $1\degr$ source separation in right ascension. Each entry gives $\Delta\alpha \cos\delta$ / $\Delta\delta$ at the indicated target declination.

  Error component                        $-25\degr$   $0\degr$    $25\degr$   $50\degr$   $75\degr$   $85\degr$
  Calibrator position (1 mas error)       8 / 7        1 / 9       8 / 16     20 / 26     59 / 68    196 / 193
  Calibrator position (0.25 mas error)    2 / 7        1 / 3       2 / 5       3 / 5      12 / 11     49 / 50
  Earth orientation                       1 / 8        1 / 5       1 / 6       1 / 5       1 / 4       1 / 4
  Antenna position                        2 / 8        2 / 4       2 / 4       2 / 3       2 / 3       2 / 3
  Dry troposphere                        15 / 45       9 / 16      7 / 9      10 / 11     18 / 23     14 / 16
  Wet troposphere (mean)                 53 / 182     34 / 57     33 / 28     31 / 45     54 / 72     79 / 88
  Wet troposphere (max)                  87 / 219     46 / 66     42 / 38     49 / 56     65 / 78     81 / 91
  Total (mean wtrp)                      60 / 175     36 / 50     33 / 32     37 / 53     87 / 103   227 / 258
  Total (max wtrp)                       85 / 217     43 / 74     42 / 44     46 / 66    100 / 117   226 / 240

Table \[VLBADC\]: same as Table \[VLBARA\] but for a $1\degr$ source separation in declination.

  Error component                        $-25\degr$   $0\degr$    $25\degr$   $50\degr$   $75\degr$   $85\degr$
  Calibrator position (1 mas error)       7 / 8        1 / 7       8 / 11     21 / 2      59 / 2     199 / 1
  Calibrator position (0.25 mas error)    2 / 7        1 / 3       2 / 4       5 / 2      20 / 1      43 / 1
  Earth orientation                       5 / 7        5 / 3       5 / 4       4 / 3       3 / 1       3 / 1
  Antenna position                        2 / 9        2 / 6       2 / 5       3 / 3       2 / 2       2 / 2
  Dry troposphere                        17 / 54       6 / 19      2 / 12      5 / 9      11 / 9      12 / 13
  Wet troposphere (mean)                 80 / 272     32 / 98     11 / 41     12 / 32     43 / 41     60 / 62
  Wet troposphere (max)                 112 / 358     43 / 114    19 / 61     17 / 46     59 / 55     74 / 71
  Total (mean wtrp)                      84 / 284     30 / 99     16 / 42     25 / 33     81 / 36    189 / 67
  Total (max wtrp)                      121 / 481     44 / 134    20 / 56     26 / 46     92 / 65    212 / 74

We simulated the visibilities of a full u-v track experiment with the VLBA for six declinations between $-25\degr$ and $85\degr$ with a $1\degr$ relative source separation (either oriented in right ascension or in declination). Uniform weighting was applied to the visibilities, resulting in a synthesized beam mainly shaped by the longest baselines. As a test, we have also removed the 9 baselines smaller than 1500 km in length out of the 45-baseline array and noted a decrease in systematic errors of $\sim 15\% $ in a few test cases. Conservatively, we have retained these “short” baselines in our final simulations. This is motivated by the fact that all possible baselines must be kept for sensitivity when observing weak sources.
The antenna switching cycle between target and reference sources was set to 2.5 minutes. The results, however, do not depend critically on this value. It was chosen so that the automatic phase connection routine for the reference source does not discard too much data in the presence of a delay rate error of 0.1 ps/s (adopted uniformly for all the stations in the simulation). As mentioned previously, we analysed these simulated data with SPRINT using the [*a priori*]{} parameter values perturbed by some errors. We carried out this analysis 100 times for each systematic error component with perturbation errors drawn from Gaussian distributions with zero mean and standard deviations according to the rms errors in Tables \[errors\] and \[delay\]. The resulting position of the target was estimated by reading the peak position in each of the 100 phase-referenced maps. The pixel size in the maps was 0.05 mas. This size is small compared to the synthesized beam ($\sim1$ mas at 8.4 GHz on an 8000 km baseline) and, hence, the uncertainty in the peak position due to the pixel size is negligible. This position was determined by fitting a parabola over the full half beam width. This procedure was used in the Hipparcos/VLBI work of [@Les99] and was found to be appropriate. As expected, each position was slightly offset from the map phase center, reflecting the corresponding systematic errors. After subtracting the initial perturbation in the calibrator position, we calculated the rms of these 100 relative coordinate offsets $\Delta\alpha\cos\delta$ and $\Delta\delta$ for the adopted $1\degr$ source separation in right ascension or declination. Note that the mean of these 100 coordinate offsets was close to zero in all cases. In Tables \[VLBARA\] and \[VLBADC\], we report the rms astrometric errors for each individual error component along with the total astrometric errors when all model parameters are perturbed together in the simulation.
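The sub-pixel peak refinement described above can be sketched as follows. This is a minimal three-point variant of the parabola fit (the SPRINT procedure fits over the full half beam width), and the beam shape and peak offset are made-up values for the demonstration:

```python
import numpy as np

def refine_peak(x, y):
    """Sub-pixel peak position from a parabola fitted through the
    maximum pixel and its two neighbours: peak at x = -b / (2a)."""
    i = int(np.argmax(y))
    a, b, _ = np.polyfit(x[i - 1:i + 2], y[i - 1:i + 2], 2)
    return -b / (2.0 * a)

# Synthetic 1D cut through a ~1 mas FWHM beam on a 0.05 mas grid;
# the true peak offset of 0.273 mas is illustrative.
true_peak = 0.273
x = np.arange(-0.5, 0.501, 0.05)
beam = np.exp(-4.0 * np.log(2.0) * (x - true_peak) ** 2)  # FWHM = 1 mas

est = refine_peak(x, beam)
print(round(est, 3))  # 0.273: recovered well below the 0.05 mas pixel
```

This is why the 0.05 mas pixel size contributes negligibly to the position error budget.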
The total errors were derived by considering a 1 mas error in the calibrator position. The wet troposphere systematic error clearly dominates over all the other error components for $\delta \leq 50 \degr$ but the calibrator error dominates at higher declinations if its position is not known to better than 1 mas. This behavior was first noted by @Sha79, who derived analytical formulae providing the astrometric errors caused by the calibrator coordinate uncertainties in the case of a single VLBI baseline. A detailed analysis comparing our simulated errors with those obtained from these formulae is given in Appendix A. Other systematic errors, in particular the Earth orientation parameter and the station coordinate errors, are small. In Tables \[VLBARA\] and \[VLBADC\], we note that astrometric errors originating from mean and maximum wet troposphere uncertainties are not drastically different (a ratio of 1.5 at most). Finally, we have plotted in Fig. \[chiDC\] the distribution of all coordinate offsets $\Delta \alpha \cos \delta$ and $\Delta \delta$ for the $50\degr$ declination target when all perturbation errors are present. For this specific case we have carried out 1000 simulations to refine the binning of the distribution. We have also performed the Pearson test on all distributions and provide the reduced chi-square $\chi^2_{\nu}$ and probability $p$ that such distributions are Gaussian in Table \[chi\]. The results of this test show that most of the distributions are not Gaussian with $p$ generally smaller than $0.4$.
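A minimal sketch of such a Pearson test of Gaussianity (our own illustration, not the actual analysis code; the binning choice and the mock offsets are assumptions):

```python
import numpy as np
from math import erf, sqrt

def gauss_cdf(x, mu, sigma):
    """CDF of N(mu, sigma) via the error function."""
    return 0.5 * (1.0 + erf((x - mu) / (sigma * sqrt(2.0))))

def pearson_reduced_chi2(samples, n_bins=10):
    """Pearson test against a Gaussian with the sample mean/std:
    chi^2 = sum (O_i - E_i)^2 / E_i over the bins, reduced by
    nu = n_bins - 3 (counts constrained; mean and std fitted)."""
    mu, sig = samples.mean(), samples.std()
    edges = np.linspace(samples.min(), samples.max(), n_bins + 1)
    obs, _ = np.histogram(samples, bins=edges)
    cdf = np.array([gauss_cdf(e, mu, sig) for e in edges])
    exp = len(samples) * np.diff(cdf)
    return float(np.sum((obs - exp) ** 2 / exp) / (n_bins - 3))

# Mock position offsets (muas): Gaussian draws should give chi2_nu ~ 1,
# while the simulated offsets in Table [chi] often deviate from this.
rng = np.random.default_rng(1)
chi2_nu = pearson_reduced_chi2(rng.normal(0.0, 50.0, size=1000))
print(round(chi2_nu, 2))
```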
  Declination   $\chi ^2_{\nu}$   $p$                    $\chi ^2_{\nu}$   $p$
  $-25 \degr$   1.51              1.43$\times 10^{-2}$   2.10              9.59$\times 10^{-6}$
  $0 \degr$     1.67              1.29$\times 10^{-2}$   1.73              1.95$\times 10^{-3}$
  $ 25 \degr$   2.33              8.12$\times 10^{-8}$   1.97              1.12$\times 10^{-3}$
  $ 50 \degr$   0.97              0.511                  1.18              0.309
  $ 75 \degr$   0.84              0.783                  1.22              0.273
  $ 85 \degr$   1.06              0.659                  1.02              0.424

  $-25 \degr$   1.19              0.286                  0.76              0.885
  $0 \degr$     1.62              2.24$\times 10^{-2}$   1.35              6.01$\times 10^{-2}$
  $ 25 \degr$   3.22              4.62$\times 10^{-4}$   1.91              5.57$\times 10^{-4}$
  $ 50 \degr$   2.23              5.19$\times 10^{-6}$   2.18              1.28$\times 10^{-4}$
  $ 75 \degr$   1.38              0.116                  2.05              1.99$\times 10^{-3}$
  $ 85 \degr$   1.22              0.260                  2.98              5.62$\times 10^{-12}$

  : Reduced chi-square $\chi ^2_\nu$ and probability $p$ of Gaussian distribution for the astrometric errors $\Delta \alpha \cos \delta$ (first pair of columns) and $\Delta \delta$ (second pair) using the Pearson test; upper block for a source separation in right ascension, lower block in declination.[]{data-label="chi"}

EVN {#EVN}
---

We have carried out a similar study for the EVN by simulating full track observations for the 10 stations of the array at $8.4$ GHz. The adopted errors for the reference source coordinates and Earth orientation parameters were identical to those used in the VLBA simulations. Station coordinate errors were similar to the VLBA ones (1–6 mm), with the exception of those for Westerbork which are at the level of 50 mm [@Cha02]. The same scheme as that adopted for the VLBA was used to define zenith dry and wet tropospheric delay errors at each EVN station and the corresponding values are given in Table \[EVNdelay\].
  Station          $\tau_{dtrp}$   $\Delta\tau_{dtrp}$   $\tau_{wtrp}$ (mean)   $\Delta\tau_{wtrp}$ (mean)   $\tau_{wtrp}$ (max)   $\Delta\tau_{wtrp}$ (max)
  (all in cm)
  Effelsberg       220             0.5                    8                     2.7                          20                     6.7
  Hartebeesthoek   199             0.5                   10                     3.3                          17                     5.7
  Medicina         231             0.5                   11                     3.7                          18                     6.0
  Noto             229             0.5                   12                     4.0                          20                     6.7
  Onsala           230             0.5                    8                     2.7                          14                     4.7
  Sheshan          231             0.5                   22                     7.3                          36                    12.0
  Urumqi           210             0.5                   10                     3.3                          10                     3.3
  Westerbork       220             0.5                    8                     2.7                          20                     6.7
  Wettzell         215             0.5                    7                     2.3                          13                     4.3
  Yebes            208             0.5                    5                     2.0                           5                     2.0

  : Dry and wet tropospheric zenith path delays ($\tau_{dtrp}$ and $\tau_{wtrp}$) at the EVN stations along with the adopted rms errors $\Delta\tau_{dtrp}$ and $\Delta\tau_{wtrp}$ in our [*Monte Carlo*]{} simulations.[]{data-label="EVNdelay"}

Table \[EVNRA\]: rms astrometric errors (in $\mu$as) for the EVN with a $1\degr$ source separation in right ascension. Each entry gives $\Delta\alpha\cos\delta$ / $\Delta\delta$ at the indicated target declination.

  Error component           $0\degr$    $25\degr$   $50\degr$   $75\degr$   $85\degr$
  Antenna position           5 / 4       5 / 4       6 / 5       5 / 5       5 / 5
  Wet troposphere (mean)    55 / 11     37 / 14     52 / 33     73 / 40     65 / 31
  Total (mean wtrp)         57 / 12     44 / 15     57 / 45     91 / 81    206 / 185

Table \[EVNDC\]: same as Table \[EVNRA\] but for a $1\degr$ source separation in declination.

  Error component           $0\degr$    $25\degr$   $50\degr$   $75\degr$   $85\degr$
  Antenna position           7 / 6       7 / 5       4 / 5       5 / 5       4 / 5
  Wet troposphere (mean)    51 / 31     29 / 54     18 / 81     31 / 58     33 / 68
  Total (mean wtrp)         62 / 29     33 / 57     27 / 78     79 / 61    201 / 61

Since the EVN comprises antennas with different sensitivities, each baseline has been weighted by the reciprocal of its noise power equivalent $\sqrt{{\rm SEFD}_1\times {\rm SEFD}_2}$ with System Equivalent Flux Densities (SEFD$_i$) for each station according to Table 2 of the EVN Status Table[^2] (as available in May 2003). The Effelsberg–Westerbork baseline is the most sensitive baseline of the array but also the shortest one, and so unfavorable for high-accuracy astrometry. For this reason, we decided to perform the simulations without this baseline, hence using an array of 44 baselines only. We have applied uniform weighting to the visibilities, similarly to the VLBA. We have tested that removing the 12 baselines shorter than 1500 km from this 44-baseline array decreases systematic errors by $\sim 20\%$ but, conservatively, we have kept them in our simulations.
In order to reduce the number of simulations, calculations were carried out for only mean values of the wet zenith tropospheric delays since the results when using mean or maximum values were not found to be drastically different. We also did not calculate individual contributions from calibrator position, dry tropospheric zenith delay and Earth orientation parameter errors since these were found to be very small for the VLBA (see Tables \[VLBARA\] and \[VLBADC\]). One should keep in mind, however, that calibrator error dominates at high declination. The results of the EVN simulations are reported in Tables \[EVNRA\] and \[EVNDC\] for a $1\degr$ source separation in right ascension or declination. At declination $-25\degr$, many SPRINT maps were found to be ambiguous, i.e. the main lobe of the point spread function of the EVN could not be identified because secondary lobes were too high. This is essentially caused by the relatively high latitude of the array and hence the difficulty of observing such low declination sources due to very limited visibility periods. For this reason, we do not provide EVN results for this declination. For other declinations, EVN astrometric errors (Tables \[EVNRA\] and \[EVNDC\]) are similar to those found for the VLBA (Tables \[VLBARA\] and \[VLBADC\]) and the Westerbork position error is not a limiting factor. Declination accuracies are somewhat better for the EVN than for the VLBA at low declination ($0\degr$ and $25\degr$), a consequence of the participation of Hartebeesthoek (South Africa) in such observations.
Global VLBI array
-----------------

Table \[GlobalRA\]: rms astrometric errors (in $\mu$as) for the global VLBI array with a $1\degr$ source separation in right ascension. Each entry gives $\Delta\alpha\cos\delta$ / $\Delta\delta$ at the indicated target declination.

  Error component           $-25\degr$   $0\degr$    $25\degr$   $50\degr$   $75\degr$   $85\degr$
  Wet troposphere (mean)    71 / 76      32 / 42     26 / 34     23 / 13     22 / 6      27 / 9
  Total (mean wtrp)         82 / 67      34 / 46     24 / 44     34 / 33     64 / 76    196 / 203

Table \[GlobalDC\]: same as Table \[GlobalRA\] but for a $1\degr$ source separation in declination.

  Error component           $-25\degr$   $0\degr$    $25\degr$   $50\degr$   $75\degr$   $85\degr$
  Wet troposphere (mean)    60 / 305     26 / 71     10 / 43      9 / 44      5 / 21      5 / 26
  Total (mean wtrp)         61 / 279     24 / 78     15 / 45     22 / 46     61 / 24    183 / 27

We have carried out a similar study for the global VLBI array, which is the combination of the VLBA and the EVN. It includes 20 stations, with 190 possible baselines. As discussed above, the Effelsberg–Westerbork baseline was ignored and the calculations were thus carried out for 189 baselines only. The adopted systematic error values for the simulations with this array were the same as those adopted for the individual VLBA and EVN (Tables \[errors\], \[delay\] and \[EVNdelay\]) and calculations were performed for full track observations as previously. The results of these simulations (Tables \[GlobalRA\] and \[GlobalDC\]) indicate that the astrometric errors for the global VLBI array are consistent with those found for the VLBA and the EVN. As expected, these errors are generally slightly better than the ones derived for each individual array.

Discussion
==========

General results
---------------

Our simulations show that the astrometric accuracy of the VLBI phase-referencing technique (defined as $\sqrt{(\Delta\alpha\cos\delta)^2+(\Delta\delta)^2}$) is $\sim 50~\mu$as for mid declinations and is $\leq 300~\mu$as at low and high declinations for point sources with a relative separation of $1\degr$. The major systematic error components are the wet tropospheric delay and the calibrator astrometric position, the latter only at high declination. Station coordinate, Earth orientation parameter and dry tropospheric zenith delay errors contribute generally to less than $20~\mu$as in the error budget.
Simulation of the VLBA without Saint Croix
------------------------------------------

  Error component           $-25\degr$   $0\degr$   $25\degr$   $50\degr$   $75\degr$   $85\degr$
  ------------------------ ------------ ---------- ----------- ----------- ----------- -----------
  Wet troposphere (mean)    62 / 171     28 / 47    27 / 57     34 / 65     46 / 63     37 / 67
  Total (mean wtrp)         63 / 193     27 / 41    31 / 62     42 / 68     83 / 82     211 / 207

  : Astrometric errors ($\Delta\alpha\cos\delta$ / $\Delta\delta$, in $\mu$as) for the VLBA without the Saint Croix station and a target-calibrator separation of $1\degr$ along the right ascension direction.[]{data-label="SCRA"}

  Error component           $-25\degr$   $0\degr$   $25\degr$   $50\degr$   $75\degr$   $85\degr$
  ------------------------ ------------ ---------- ----------- ----------- ----------- -----------
  Wet troposphere (mean)    183 / 563    62 / 189   19 / 55     17 / 41     39 / 27     42 / 44
  Total (mean wtrp)         190 / 534    70 / 191   24 / 71     26 / 42     74 / 28     216 / 40

  : Astrometric errors ($\Delta\alpha\cos\delta$ / $\Delta\delta$, in $\mu$as) for the VLBA without the Saint Croix station and a target-calibrator separation of $1\degr$ along the declination direction.[]{data-label="SCDC"}

We speculated that withdrawing the VLBA station at Saint Croix in the Virgin Islands, which suffers from dampness, should improve the astrometric accuracy of the VLBA. We thus repeated our VLBA simulations without that station. The results of this test are given in Tables \[SCRA\] and \[SCDC\]. Contrary to our intuition, the astrometric accuracy is actually degraded when the target-calibrator direction is oriented along declination. In fact, the addition of Saint Croix strengthens the geometry of the array and improves the astrometric accuracy despite severe weather conditions. In order to explore this question further, we ran simulations without Pie Town, in the middle of the array, and without Mauna Kea, at the far west of the array. Withdrawing Pie Town does not change the astrometric accuracy, but the absence of Mauna Kea degrades the accuracy in a similar way to Saint Croix.

Linearity of the astrometric accuracy with source separation
------------------------------------------------------------

An important question is whether the astrometric accuracy scales linearly as a function of the source separation. To study this matter, we repeated all the previous simulations but with source separations of $0.5\degr$ and $2\degr$. Then, we performed a linear fit to the astrometric errors for the three values of the calibrator-target separation ($0.5\degr$, $1\degr$ and $2\degr$), considering separately each systematic error component of the tables above. Figure \[plotseparation\] shows an example of such results for the VLBA in the case of a target at $+25^\circ$ declination.
Overall, our plots show that the astrometric accuracy generally scales fairly linearly as a function of the source separation. To obtain a quantitative measure of the degree of linearity, we determined the regression coefficients for each of the 107 linear fits. Such coefficients should be close to 1 for a linear behavior, while they should decrease as the behavior becomes less linear. This analysis reveals that 80% of the coefficients are larger than $0.95$, indicating that the astrometric errors behave linearly. Among all errors, calibrator position systematics are those that were found to behave the least linearly. An empirical formula for the astrometric accuracy $\Delta _{\alpha\cos\delta ,\delta}$ has been further estimated by averaging the parameters of all the fits: $$\label{regression} \Delta _{\alpha\cos\delta ,\delta} = (\Delta _{\alpha\cos\delta , \delta}^{1^{\circ}}-14) \times d+ 14~~~~~~(\mu as)$$ where $\Delta _{\alpha\cos\delta ,\delta}^{1^{\circ}}$ is the astrometric error for $1\degr$ source separation as provided by our tables (\[VLBARA\] and \[VLBADC\] for the VLBA, \[EVNRA\] and \[EVNDC\] for the EVN, \[GlobalRA\] and \[GlobalDC\] for the global VLBI array) and $d=\sqrt{((\alpha -\alpha _0)\cos\delta_0)^2+(\delta- \delta_0)^2}$ is the source separation in degrees. In Section \[EVN\], we noted that the astrometric accuracies of the EVN and the VLBA are similar, hence this formula should apply to the EVN, too. As a verification of this empirical formula, we computed the astrometric accuracy for eight target-calibrator pairs observed with the global VLBI array as part of a project to monitor absolute lobe motions in compact symmetric objects [@Cha03]. For the source pair / with a separation of $1.37\degr$ along the right ascension direction, we obtained simulated accuracies $\Delta \alpha \cos\delta_0=42~\mu$as and $\Delta \delta =63~\mu$as, versus $\Delta \alpha \cos \delta_0 =44~\mu$as and $\Delta \delta =63~\mu$as when derived from Eq.
\[regression\] and Table \[GlobalRA\]. In the worst case (target-calibrator pair / with a separation of $0.50\degr$ along declination), simulated accuracies were $\Delta \alpha \cos \delta_0=18~\mu$as and $\Delta \delta =12~\mu$as, while Eq. \[regression\] and Table \[GlobalDC\] give $\Delta \alpha \cos \delta_0=20~\mu$as and $\Delta \delta =24~\mu$as. Thus, overall we found a discrepancy of at most a factor of 2 between our simple formula (Eq. \[regression\]) and the full simulation of the cases considered.

Conclusion
==========

We have performed extensive simulations of VLBI data with the VLBA, EVN and global VLBI array to study the dependence of the astrometric accuracy on systematic errors in the phase model of phase-referenced VLBI observations. Systematic errors considered in this study are calibrator position uncertainties, station coordinate uncertainties, Earth Orientation Parameter uncertainties, and dry and wet troposphere errors. We have adopted state-of-the-art VLBI values for these errors. Our simulations show that the astrometric accuracy of a full track phase-referenced VLBI experiment is $\sim 50~\mu$as at mid declination and is $\sim 300~\mu$as at low ($-25\degr$) and high ($+85\degr$) declinations for point sources angularly separated by $1\degr$. Not surprisingly, the major systematic error originates from wet tropospheric zenith delay uncertainties, except at high declination where calibrator position uncertainties dominate. We show that the astrometric accuracy $\Delta _{\alpha\cos\delta ,\delta }$ depends linearly on the source separation and we established the simple formula $\Delta _{\alpha\cos\delta ,\delta } = (\Delta _{\alpha\cos\delta , \delta}^{1^{\circ}}-14) \times d+ 14~~(\mu as)$ where $\Delta _{\alpha\cos\delta ,\delta}^{1^{\circ}}$ is the astrometric error provided by our tables for the various arrays and configurations and $d=\sqrt{((\alpha -\alpha _0)\cos\delta_0)^2+(\delta- \delta_0)^2}$ is the source separation in degrees.
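The empirical scaling law above is straightforward to apply; a minimal sketch (the helper name is ours), taking the tabulated $1\degr$ error as the only input:

```python
import math

def scaled_error(err_1deg, ra, dec, ra0, dec0):
    """Empirical scaling of the astrometric error with source separation
    (Eq. [regression]): Delta = (Delta_1deg - 14) * d + 14 [micro-arcsec],
    with d the target-calibrator separation in degrees."""
    d = math.hypot((ra - ra0) * math.cos(math.radians(dec0)), dec - dec0)
    return (err_1deg - 14.0) * d + 14.0

# a pair separated by 2 degrees purely in declination, with a tabulated
# 1-degree error of 50 micro-arcsec, gives (50 - 14) * 2 + 14 = 86
print(scaled_error(50.0, 0.0, 27.0, 0.0, 25.0))
```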
Our study has been carried out for point sources, but variable source structure is likely to degrade the accuracy derived from this formula.

Analytical Behavior
===================

The analytical formulae in Appendix A of @Sha79 provide the astrometric errors caused by the inaccuracy of the calibrator coordinates in the case of a single VLBI baseline. Adopting our notation, these formulae become: $$\begin{aligned} \Delta\alpha &\simeq &((\alpha-\alpha_0)\tan\delta)\Delta\delta_0\\ &&-((\delta-\delta_0)\tan\delta +1/2\times(\alpha-\alpha_0)^2 ) \Delta\alpha_0 ,\end{aligned}$$ and $$\begin{aligned} \Delta \delta &\simeq &(-(\delta -\delta_0) \cot \delta +1/2 \times (\alpha -\alpha_0)^2)\Delta\delta_0 +\\ &&((\alpha-\alpha_0) \cot \delta)\Delta\alpha_0.\end{aligned}$$ where $\Delta\alpha$ and $\Delta\delta$ are the errors in right ascension and declination introduced by errors $\Delta\alpha_0$ and $\Delta\delta_0$ in the coordinates of the reference source. The expression above for $\Delta\delta$ correctly restores the last term of the equation, which was misprinted in the original paper. These simple formulae are, however, valid only for the special geometry adopted by the authors, where the “baseline declination” is $0\degr$. Adopting the same parameters as in our simulations ($\Delta\alpha_0= 1/\cos\delta_0$ mas, $\Delta\delta_0=1$ mas, $\alpha -\alpha_0= 0\degr$ or $(1/\cos\delta_0)\degr$, $\delta -\delta_0=1\degr$ or $0\degr$), we obtain the astrometric errors plotted as a function of declination in Fig. \[shap\] (dotted lines). The results of our simulations for declinations of $-25\degr$, $0\degr$, $25\degr$, $50\degr$, $75\degr$ and $85\degr$ in the case of the VLBA (first lines of Tables 3 and 4) are also superimposed on these plots. ![Astrometric errors $\Delta\alpha\cos\delta$ and $\Delta\delta$ (respectively left and right) as a function of declination.
The two upper plots are for the case $\alpha-\alpha_0=(1/\cos\delta)\degr$ and $\delta -\delta_0 =0\degr$, while the two lower plots are for the case $\alpha -\alpha_0 = 0\degr$ and $\delta -\delta_0 = 1\degr$. The dotted lines show the errors derived from the [@Sha79] formulae. The stars show the errors from our simulations at six declinations from $-25\degr$ to $85\degr$.[]{data-label="shap"}](3021figA.ps){width="8.5cm"} The right ascension errors obtained from the simulations match perfectly those derived analytically, while the declination errors show a strong discrepancy near declination $0\degr$ (although they agree at high declinations). This discrepancy originates from a singularity in the $\Delta\delta$ formula at $\delta =0\degr$ (the term in $\cot\delta$), inherent to the approximation used to establish the formula (baseline declination of $0\degr$). For a more complex and realistic network, such a singularity does not exist, as also demonstrated by the results of our simulations. [^1]: http://hpiers.obspm.fr/iers/eop/eopc04/EOPC04.GUIDE (Table 2). [^2]: http://www.mpifr-bonn.mpg.de/EVN/EVNstatus.txt
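The $\cot\delta$ singularity discussed above is easy to reproduce numerically. The sketch below is a schematic transcription of the two formulae (function names are ours; angular offsets are converted to radians, and the output carries the units of the input errors $\Delta\alpha_0$, $\Delta\delta_0$):

```python
import math

def calibrator_errors(da0, dd0, ra_off_deg, dec_off_deg, dec_deg):
    """Propagation of calibrator position errors (da0, dd0) for a single
    baseline with baseline declination 0, following the formulae above.
    Returns (d_alpha, d_dec) in the units of da0/dd0."""
    t = math.tan(math.radians(dec_deg))
    c = 1.0 / t                       # cot(delta): singular at delta = 0
    a = math.radians(ra_off_deg)      # alpha - alpha_0
    d = math.radians(dec_off_deg)     # delta - delta_0
    d_alpha = a * t * dd0 - (d * t + 0.5 * a ** 2) * da0
    d_dec = (-d * c + 0.5 * a ** 2) * dd0 + a * c * da0
    return d_alpha, d_dec

# the declination error blows up as the target approaches delta = 0
for dec in (25.0, 5.0, 0.5):
    print(dec, calibrator_errors(1.0, 1.0, 0.0, 1.0, dec)[1])
```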
--- abstract: 'Flow-based generative models, conceptually attractive due to tractability of both the exact log-likelihood computation and latent-variable inference, and efficiency of both training and sampling, have led to a number of impressive empirical successes and spawned many advanced variants and theoretical investigations. Despite their computational efficiency, the density estimation performance of flow-based generative models falls significantly behind that of state-of-the-art autoregressive models. In this work, we introduce *masked convolutional generative flow* (**<span style="font-variant:small-caps;">MaCow</span>**), a simple yet effective architecture of generative flow using masked convolution. By restricting the local connectivity to a small kernel, <span style="font-variant:small-caps;">MaCow</span> enjoys the properties of fast and stable training and efficient sampling, while achieving significant improvements over Glow for density estimation on standard image benchmarks, considerably narrowing the gap to autoregressive models.' author: - | Xuezhe Ma\ Language Technologies Institute\ Carnegie Mellon University\ Pittsburgh, PA, USA\ `xuezhem@cs.cmu.edu` Eduard Hovy\ Language Technologies Institute\ Carnegie Mellon University\ Pittsburgh, PA, USA\ `hovy@cmu.edu` bibliography: - 'macow.bib' title: 'MaCow: Masked Convolutional Generative Flow' ---

Introduction
============

Unsupervised learning of probabilistic models is a central yet challenging problem. Deep generative models have shown promising results in modeling complex distributions such as natural images [@radford2015unsupervised], audio [@van2016wavenet] and text [@bowman2015generating].
A number of approaches have emerged over the years, including Variational Autoencoders (VAEs) [@kingma2014auto], Generative Adversarial Networks (GANs) [@goodfellow2014generative], autoregressive neural networks [@larochelle2011neural; @oord2016pixel], and flow-based generative models [@dinh2014nice; @dinh2016density; @kingma2018glow]. Among these models, flow-based generative models have gained popularity for their capability of estimating densities of complex distributions, efficiently generating high-fidelity syntheses, and automatically learning useful latent spaces. Flow-based generative models typically warp a simple distribution into a complex one by mapping points from the simple distribution to the complex data distribution through a chain of invertible transformations whose Jacobian determinants are efficient to compute. This design guarantees that the density of the transformed distribution can be analytically estimated, making maximum likelihood learning feasible. Flow-based generative models have spawned significant interest in improvements and analyses from both theoretical and practical perspectives, and have been applied to a wide range of tasks and domains. In their pioneering work, @dinh2014nice proposed *Non-linear Independent Component Estimation* (NICE), where they first applied flow-based models to modeling complex high-dimensional densities. RealNVP [@dinh2016density] extended NICE with more flexible invertible transformations and experimented on natural images. However, these flow-based generative models have much worse density estimation performance compared to state-of-the-art autoregressive models, and are incapable of realistic-looking synthesis of large images compared to GANs [@karras2017progressive; @brock2018large]. Recently, @kingma2018glow proposed Glow: generative flow with invertible 1x1 convolutions, significantly improving the density estimation performance on natural images.
Importantly, they demonstrated that flow-based generative models optimized towards the plain likelihood-based objective are capable of generating realistic-looking high-resolution natural images efficiently. @prenger2018waveglow investigated applying flow-based generative models to speech synthesis by combining Glow with WaveNet [@van2016wavenet]. Unfortunately, the density estimation performance of Glow on natural images still falls behind autoregressive models, such as PixelRNN/CNN [@oord2016pixel; @salimans2017pixelcnn++], Image Transformer [@parmar2018image], PixelSNAIL [@chen2017pixelsnail] and SPN [@menick2018generating]. We note in passing that there is also a line of work [@rezende2015variational; @kingma2016improved; @zheng2017] applying flows to variational inference. In this paper, we propose a novel architecture of generative flow, *masked convolutional generative flow* (**<span style="font-variant:small-caps;">MaCow</span>**), using masked convolutional neural networks [@oord2016pixel]. The bijective mapping between input and output variables can easily be established, while computation of the determinant of the Jacobian remains efficient. Compared to inverse autoregressive flow (IAF) [@kingma2016improved], <span style="font-variant:small-caps;">MaCow</span> has the merits of stable training and efficient inference and synthesis, obtained by restricting the local connectivity to a small “masked” kernel, and achieves large receptive fields by stacking multiple layers of convolutional flows with reversed-ordering masks (§\[subsec:macow\]). We also propose a fine-grained version of the multi-scale architecture adopted in previous flow-based generative models to further improve the performance (§\[subsec:multi-scale\]).
Experimentally, on three benchmark image datasets, CIFAR-10, ImageNet and CelebA-HQ, we demonstrate the effectiveness of <span style="font-variant:small-caps;">MaCow</span> as a density estimator by consistently achieving significant improvements over Glow on all three datasets. When equipped with the variational dequantization mechanism [@ho2018flow++], <span style="font-variant:small-caps;">MaCow</span> considerably narrows the gap on density estimation to autoregressive models (§\[sec:experiment\]).

Flow-based Generative Models {#sec:background}
============================

In this section, we first set up notation, describe flow-based generative models, and review Glow [@kingma2018glow], on which <span style="font-variant:small-caps;">MaCow</span> is built.

Notations
---------

Throughout, we use uppercase letters for random variables and lowercase letters for realizations of the corresponding random variables. Let $X \in \mathcal{X}$ be the random variable of the observed data; e.g., $X$ is an image or a sentence for image and text generation, respectively. Let $P$ denote the true distribution of the data, i.e., $X \sim P$, and $D = \{x_1, \ldots, x_N\}$ be our training sample, where $x_i, i=1,\ldots, N,$ are usually i.i.d. samples of $X$. Let $\mathcal{P} = \{P_\theta : \theta \in \Theta\}$ denote a parametric statistical model indexed by parameter $\theta \in \Theta$, where $\Theta$ is the parameter space. $p$ is used to denote the density of the corresponding distribution $P$. In the literature of deep generative models, deep neural networks are the most widely used parametric models. The goal of generative models is to learn the parameter $\theta$ such that $P_{\theta}$ best approximates the true distribution $P$.
In the context of maximum likelihood estimation, we wish to minimize the negative log-likelihood of the parameters: $$\label{eq:mle} \min\limits_{\theta \in \Theta} \frac{1}{N} \sum\limits_{i=1}^{N} -\log p_{\theta}(x_i) = \min\limits_{\theta \in \Theta} \mathrm{E}_{\widetilde{P}(X)} [-\log p_{\theta}(X)],$$ where $\widetilde{P}(X)$ is the empirical distribution derived from the training data $D$.

Flow-based Models
-----------------

In the framework of flow-based generative models, a set of latent variables $Z \in \mathcal{Z}$ is introduced with a prior distribution $p_{Z}(z)$, typically a simple distribution like a multivariate Gaussian. For a bijection function $f: \mathcal{X} \rightarrow \mathcal{Z}$ (with $g = f^{-1}$), the change of variables formula defines the model distribution on $X$ by: $$p_{\theta}(x) = p_{Z}(f_{\theta}(x))\left| \det(\frac{\partial f_{\theta}(x)}{\partial x})\right|,$$ where $\frac{\partial f_{\theta}(x)}{\partial x}$ is the Jacobian of $f_{\theta}$ at $x$. The generative process is defined straightforwardly as: $$\begin{array}{rcl} z & \sim & p_{Z}(z) \\ x & = & g_{\theta}(z). \end{array}$$ Flow-based generative models focus on certain types of transformations $f_{\theta}$ for which both the inverse function $g_{\theta}$ and the Jacobian determinant are tractable to compute. By stacking multiple such invertible transformations in a sequence, which is also called a (normalizing) *flow* [@rezende2015variational], a flow is capable of warping a simple distribution ($p_{Z}(z)$) into a complex one ($p(x)$): $$X \underset{g_1}{\overset{f_1}{\longleftrightarrow}} H_1 \underset{g_2}{\overset{f_2}{\longleftrightarrow}} H_2 \underset{g_3}{\overset{f_3}{\longleftrightarrow}} \cdots \underset{g_K}{\overset{f_K}{\longleftrightarrow}} Z,$$ where $f = f_1 \circ f_2 \circ \cdots \circ f_K$ is a flow of $K$ transformations. For brevity, we omit the parameter $\theta$ from $f_{\theta}$ and $g_{\theta}$.
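The change of variables formula is the entire training signal of a flow. A minimal one-dimensional illustration (a single affine transformation with a standard-normal prior; names ours):

```python
import math

# log p(x) = log p_Z(f(x)) + log |df/dx| for the 1-D flow
# f(x) = (x - b) / s with a standard-normal prior p_Z.
def flow_log_prob(x, s, b):
    z = (x - b) / s
    log_pz = -0.5 * (z * z + math.log(2.0 * math.pi))
    log_det = -math.log(abs(s))      # |df/dx| = 1/|s|
    return log_pz + log_det

# this recovers exactly the log-density of a Gaussian with mean b, std |s|
print(flow_log_prob(1.0, 2.0, 0.5))
```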
Glow
----

In recent years, a number of types of invertible transformations have emerged to enhance the expressiveness of flows, among which Glow [@kingma2018glow] has stood out for its simplicity and effectiveness in both density estimation and high-fidelity synthesis. We briefly describe the three types of transformations that Glow consists of.

#### Actnorm.

@kingma2018glow proposed an activation normalization layer (Actnorm) as an alternative to batch normalization [@ioffe2015batch] to alleviate problems encountered in model training. Similar to batch normalization, Actnorm performs an affine transformation of the activations using a scale and bias parameter per channel for 2D images: $$y_{i,j} = s \odot x_{i,j} + b,$$ where both $x$ and $y$ are tensors of shape $[h\times w \times c]$ with spatial dimensions $(h, w)$ and channel dimension $c$.

#### Invertible $1\times1$ convolution.

To incorporate a permutation along the channel dimension, Glow includes a trainable invertible $1\times1$ convolution layer to generalize the permutation operation: $$y_{i, j} = W x_{i,j},$$ where $W$ is the weight matrix of shape $c \times c$.

#### Affine Coupling Layers.

Following @dinh2016density, Glow includes affine coupling layers in its architecture: $$\begin{array}{rcl} x_a, x_b & = & \mathrm{split}(x) \\ y_a & = & x_a \\ y_b & = & \mathrm{s}(x_a) \odot x_b + \mathrm{b}(x_a) \\ y & = & \mathrm{concat}(y_a, y_b), \end{array}$$ where $\mathrm{s}(x_a)$ and $\mathrm{b}(x_a)$ are outputs of two neural networks with $x_a$ as input. The $\mathrm{split}()$ and $\mathrm{concat}()$ functions perform operations along the channel dimension. From the architecture of Glow, we see that interactions between spatial dimensions are incorporated only in the coupling layers. The coupling layer, however, is typically costly in memory, making it infeasible to stack a large number of coupling layers into a single model, particularly for high-resolution images.
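A toy version of the affine coupling transform makes its invertibility and Jacobian explicit. Here `s()` and `b()` are stand-ins for the neural networks in the paper; any functions of $x_a$ alone would work:

```python
import math

def s(xa):                        # toy "scale network"
    return [1.0 + 0.1 * v * v for v in xa]

def b(xa):                        # toy "bias network"
    return [0.5 * v for v in xa]

def coupling_forward(x):
    h = len(x) // 2
    xa, xb = x[:h], x[h:]         # split()
    yb = [si * v + bi for si, v, bi in zip(s(xa), xb, b(xa))]
    # Jacobian is triangular: log-det is the sum of log |s_i|
    log_det = sum(math.log(abs(si)) for si in s(xa))
    return xa + yb, log_det       # concat(y_a, y_b)

def coupling_inverse(y):
    h = len(y) // 2
    ya, yb = y[:h], y[h:]
    xb = [(v - bi) / si for si, v, bi in zip(s(ya), yb, b(ya))]
    return ya + xb
```

Because $y_a = x_a$, the inverse can recompute `s` and `b` exactly, which is what makes the coupling layer invertible without inverting the networks themselves.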
The main goal of this work is to design a new type of transformation that simultaneously models dependencies in both the spatial and channel dimensions while maintaining a relatively low memory footprint, in the hope of improving the capacity of the generative flow.

Masked Convolutional Generative Flows {#sec:macow}
=====================================

In this section, we describe the architectural components that compose the *masked convolutional generative flow* (<span style="font-variant:small-caps;">MaCow</span>). We first introduce the proposed flow transformation using masked convolutions in §\[subsec:macow\]. Then, we present a fine-grained version of the multi-scale architecture adopted in previous generative flows [@dinh2016density; @kingma2018glow] in §\[subsec:multi-scale\]. In §\[subsec:dequant\], we briefly revisit the dequantization problem involved in flow-based generative models.

![Visualization of the receptive field of two masked convolutions with reversed ordering.[]{data-label="fig:mcf"}](figs/conv1.pdf){width="60.00000%"}

Flow with Masked Convolutions {#subsec:macow}
-----------------------------

Applying autoregressive models to normalizing flows has been explored in previous studies [@kingma2016improved; @papamakarios2017masked]. The idea behind autoregressive flows is to model the input random variables sequentially in an autoregressive order, ensuring that the model cannot read input variables that come after the current one: $$\label{eq:auto-flow} y_t = \mathrm{s}(x_{<t}) \odot x_t + \mathrm{b}(x_{<t}),$$ where $x_{<t}$ denotes the input variables in $x$ that precede $x_t$ in the autoregressive order. $\mathrm{s}()$ and $\mathrm{b}()$ are two autoregressive neural networks typically implemented using spatial masks [@germain2015made; @oord2016pixel].
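The sequential inverse of this transform is the root of the synthesis cost. A one-dimensional toy sketch (names ours; `s` and `b` here depend on the prefix only through its sum):

```python
def s_fn(prefix):
    return 1.0 + 0.01 * sum(prefix)

def b_fn(prefix):
    return 0.1 * sum(prefix)

def ar_forward(x):
    # all outputs can be computed in parallel given x
    return [s_fn(x[:t]) * x[t] + b_fn(x[:t]) for t in range(len(x))]

def ar_inverse(y):
    # the inverse is inherently sequential: x_t needs all x_{<t}
    x = []
    for t in range(len(y)):
        x.append((y[t] - b_fn(x)) / s_fn(x))
    return x
```

The affine form is kept in the flow discussed next, but the dependency set shrinks from the whole prefix to a small masked kernel, which is what cuts the number of sequential steps during synthesis.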
Despite their effectiveness in high-dimensional spaces, autoregressive flows suffer from two crucial problems: (1) the training procedure is unstable when modeling a long range of contexts and stacking multiple layers; (2) inference and synthesis are inefficient, due to the non-parallelizable inverse function. We propose to use masked convolutions that restrict the local connectivity to a small “masked” kernel to address these two problems. The two autoregressive neural networks, $\mathrm{s}()$ and $\mathrm{b}()$, are implemented with one-layer masked convolutional networks with small kernels (e.g. $3\times3$) to make sure that they are only able to read contexts in a small neighborhood: $$\mathrm{s}(x_{<t}) = \mathrm{s}(x_{t\star}), \quad \mathrm{b}(x_{<t}) = \mathrm{b}(x_{t\star}),$$ where $x_{t\star}$ denotes the input variables, restricted to a small kernel, on which $x_t$ depends. By using masks in reversed ordering and stacking multiple layers of flows, the model can capture a large receptive field (see Figure \[fig:mcf\]) and is able to model dependencies in both the spatial and channel dimensions.

#### Efficient Synthesis.

As discussed above, synthesis from autoregressive flows is inefficient, since the inverse has to be computed by sequentially traversing the autoregressive order. In the context of 2D images of shape $[h\times w \times c]$, the time complexity of synthesis is $O(h \times w \times \mathrm{NN}(h, w, c))$, where $\mathrm{NN}(h, w, c)$ is the time of computing the outputs of the neural networks $\mathrm{s}()$ and $\mathrm{b}()$ with input shape $[h\times w \times c]$. In our proposed flow with masked convolutions, computation of $x_{i,j}$ can begin as soon as all of $x_{(i,j)\star}$ are available, contrary to the autoregressive requirement that all $x_{<i,j}$ be available. Moreover, at each step we only need to feed $x_{(i,j)\star}$ (with shape $[kh\times kw\times c]$) into $\mathrm{s}()$ and $\mathrm{b}()$.
Here $[\mathit{kh}\times \mathit{kw}\times c]$ is the shape of a kernel in the convolution. Thus, the time complexity reduces significantly to $O((h + w) \times \mathrm{NN}(\mathit{kh}, \mathit{kw}, c))$.

Fine-grained Multi-Scale Architecture {#subsec:multi-scale}
-------------------------------------

@dinh2016density proposed a multi-scale architecture using a squeezing operation, which has been demonstrated to be helpful for training very deep flows. In the original multi-scale architecture, the model factors out half of the dimensions at each scale to reduce computational and memory cost. In this paper, inspired by the size upscaling in subscale ordering [@menick2018generating], which generates an image as a sequence of sub-images of equal size, we propose a fine-grained multi-scale architecture to further improve model performance. In this fine-grained multi-scale architecture, each scale consists of $M/2$ blocks, and after each block the model splits out $1/M$ of the dimensions of the input[^1]. Figure \[fig:architecture\] illustrates the graphical specification of the two versions of the architecture. Experiments demonstrate the effectiveness of the fine-grained multi-scale architecture (§\[sec:experiment\]).

Dequantization {#subsec:dequant}
--------------

From the description in §\[sec:background\], generative flows are defined on continuous random variables. Many real-world datasets, however, are recordings of discrete representations of signals, and fitting a continuous density model to discrete data will produce a degenerate solution that places all probability mass on discrete datapoints [@uria2013rnade; @ho2018flow++]. A common solution to this problem is “dequantization”, which converts the discrete data distribution into a continuous one. Specifically, in the context of natural images, each dimension (pixel) of the discrete data $x$ takes on values in $\{0, 1, \ldots, 255\}$.
The dequantization process adds continuous random noise $u$ to $x$, resulting in a continuous data point: $$\label{eq:dequant} y = x + u,$$ where $u \in [0, 1)^{d}$ is continuous random noise taking values in the interval $[0, 1)$. By modeling the density of $Y \in \mathcal{Y}$ with $p_{\theta}(y)$, the distribution of $X$ is defined as: $$P_\theta(x) = \int_{\mathcal{Y}} p_\theta(y) \operatorname{d\!}y = \int_{[0, 1)^d} p_\theta(x + u) \operatorname{d\!}u.$$ By restricting the range of $u$ to $[0, 1)$, the mapping between $y$ and a pair of $x$ and $u$ is bijective. Thus, we have $p_\theta(y) = p_\theta(x + u) = p_\theta(x, u)$. By introducing a *dequantization noise distribution* $q(u|x)$, the training objective in Eq. \[eq:mle\] can be re-written as: $$\begin{aligned} \mathrm{E}_{P(X)} \Big[-\log P_{\theta}(X)\Big] & = \mathrm{E}_{P(X)} \left[-\log \int_{[0, 1)^d} p_\theta(X, u) \operatorname{d\!}u \right] \nonumber \\ & = \mathrm{E}_{P(X)} \Bigg[\mathrm{E}_{q(u|X)} \left[-\log \frac{p_{\theta}(X, u)}{q(u|X)}\right] - \mathrm{KL}\big(q(u|X) || p_{\theta}(u|X)\big) \Bigg] \nonumber \\ & \leq \mathrm{E}_{P(X)} \Bigg[\mathrm{E}_{q(u|X)} \Big[-\log p_{\theta}(X, u) \Big] + \mathrm{E}_{q(u|X)} \Big[\log q(u|X) \Big]\Bigg] \nonumber \\ & = \mathrm{E}_{p(Y)} \Big[-\log p_{\theta}(Y) \Big] + \mathrm{E}_{P(X)} \mathrm{E}_{q(u|X)} \Big[\log q(u|X)\Big], \label{eq:elbo}\end{aligned}$$ where $p(y) = P(x) q(u|x)$ is the distribution of the dequantized variable $Y$ under the dequantization noise distribution $q(u|X)$.

#### Uniform Dequantization.

The most common dequantization method used in prior work is uniform dequantization, where the noise $u$ is sampled from the uniform distribution $\mathrm{Unif(0, 1)}$: $$q(u|x) \sim \mathrm{Unif(0, 1)}, \forall x \in \mathcal{X}.$$ From Eq. \[eq:elbo\], we have $$\mathrm{E}_{P(X)} \left[-\log P_{\theta}(X)\right] \leq \mathrm{E}_{p(Y)} \left[-\log p_{\theta}(Y) \right],$$ since $\log q(u|x) = 0, \forall x \in \mathcal{X}$.

#### Variational Dequantization.
As discussed in @ho2018flow++, uniform dequantization asks $p_\theta(y)$ to assign uniform density to unit hypercubes $[0, 1)^d$, which is difficult for smooth distribution approximators. They proposed to use a parametric dequantization noise distribution $q_\phi(u|x)$, and the training objective is to optimize the *evidence lower bound* (ELBO) provided in Eq. \[eq:elbo\]: $$\min\limits_{\theta, \phi} \mathrm{E}_{p_\phi(Y)} \left[-\log p_{\theta}(Y) \right] + \mathrm{E}_{P(X)} \mathrm{E}_{q_\phi(u|X)} \left[\log q_\phi(u|X) \right],$$ where $p_\phi(y) = P(x) q_\phi(u|x)$. In this paper, we implement both dequantization methods for our <span style="font-variant:small-caps;">MaCow</span> (detailed in §\[sec:experiment\]).

Experiments {#sec:experiment}
===========

We evaluate our <span style="font-variant:small-caps;">MaCow</span> model on both low-resolution and high-resolution datasets. For a step of <span style="font-variant:small-caps;">MaCow</span>, we use $T=4$ masked convolution units, and the Glow step is the same as that described in @kingma2018glow: an *ActNorm* followed by an *Invertible $1\times1$ convolution*, followed by a *coupling layer*. Each coupling layer has three convolution layers, where the first and last convolutions are $3\times3$, while the center convolution is $1\times1$. For low-resolution images we use affine coupling layers with 512 hidden channels, while for high-resolution images we use additive layers with 256 hidden channels to reduce memory cost. ELU [@clevert2015elu] is used as the activation function throughout the flow architecture. For variational dequantization, the dequantization noise distribution $q_\phi(u|x)$ is modeled with a conditional <span style="font-variant:small-caps;">MaCow</span> with a shallow architecture. More details on the architectures, as well as results and analysis of the conducted experiments, will be given in a source code release.
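Uniform dequantization (§\[subsec:dequant\]) and the bits/dim metric reported below can be sketched in a few lines (the helper names are ours):

```python
import math
import random

def dequantize(pixels):
    """Uniform dequantization y = x + u with u ~ Unif[0, 1)."""
    return [p + random.random() for p in pixels]

def bits_per_dim(nll_nats, n_dims):
    """Convert a total negative log-likelihood in nats into the
    bits-per-dimension score reported in the tables."""
    return nll_nats / (n_dims * math.log(2.0))

# a model assigning 2 bits to each of 100 dimensions scores 2.0 bits/dim
print(bits_per_dim(200.0 * math.log(2.0), 100))
```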
Low-Resolution Images
---------------------

We begin our experiments by evaluating the density estimation performance of <span style="font-variant:small-caps;">MaCow</span> on two datasets of low-resolution images commonly used to evaluate deep generative models: CIFAR-10, with images of size $32\times32$ [@krizhevsky2009learning], and the $64\times64$ downsampled version of ImageNet [@oord2016pixel].

  **Model**                                                **CIFAR-10**   **ImageNet-64**
  ------------------------------------------------------- -------------- -----------------
  IAF VAE [@kingma2016improved]                            3.11           –
  Parallel Multiscale [@reed2017parallel]                  –              3.70
  PixelRNN [@oord2016pixel]                                3.00           3.63
  Gated PixelCNN [@van2016conditional]                     3.03           3.57
  MAE [@ma2019mae]                                         2.95           –
  PixelCNN++ [@salimans2017pixelcnn++]                     2.92           –
  PixelSNAIL [@chen2017pixelsnail]                         **2.85**       **3.52**
  SPN [@menick2018generating]                              –              **3.52**
  Real NVP [@dinh2016density]                              3.49           3.98
  Glow [@kingma2018glow]                                   3.35           3.81
  Flow++: [@ho2018flow++]                                  3.29           –
  Flow++: [@ho2018flow++]                                  **3.09**       3.69
  <span style="font-variant:small-caps;">MaCow</span>:     3.31           3.78
  <span style="font-variant:small-caps;">MaCow</span>:     3.28           3.75
  <span style="font-variant:small-caps;">MaCow</span>:     3.16           **3.64**

  : Negative log-likelihood scores on CIFAR-10 and ImageNet $64\times64$ in bits/dim (lower is better).[]{data-label="tab:density:low"}

We run experiments to dissect the effectiveness of each component of our <span style="font-variant:small-caps;">MaCow</span> model via ablation studies. The model utilizes the original multi-scale architecture, while the model augments the original one with the fine-grained multi-scale architecture proposed in §\[subsec:multi-scale\]. The model further implements the variational dequantization (§\[subsec:dequant\]) on top of to replace the uniform dequantization. For each ablation, we slightly adjust the number of steps in each level so that all the models have similar numbers of parameters for a fair comparison.
Table \[tab:density:low\] provides the density estimation performance of different variations of our <span style="font-variant:small-caps;">MaCow</span> model, together with the top-performing autoregressive models (first section) and flow-based generative models (second section). First, on both datasets, models outperform ones, demonstrating the effectiveness of the fine-grained multi-scale architecture. Second, with uniform dequantization, <span style="font-variant:small-caps;">MaCow</span> combined with the fine-grained multi-scale architecture significantly improves the performance over Glow on both datasets, and obtains slightly better results than Flow++ on CIFAR-10. In addition, with variational dequantization, <span style="font-variant:small-caps;">MaCow</span> achieves a 0.05 bits/dim improvement over Flow++ on ImageNet $64\times64$. On CIFAR-10, however, the performance of is around 0.07 bits/dim behind Flow++. Compared with PixelSNAIL [@chen2017pixelsnail] and SPN [@menick2018generating], the state-of-the-art autoregressive generative models, the performance of <span style="font-variant:small-caps;">MaCow</span> is around 0.31 bits/dim worse on CIFAR-10 and 0.12 bits/dim worse on ImageNet $64\times64$. Further improving the density estimation performance of <span style="font-variant:small-caps;">MaCow</span> on natural images is left to future work.
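The bits/dim figures compared above are per-image negative log-likelihoods normalized by the number of dimensions. A minimal conversion helper (the image shapes, e.g. $3\times32\times32$ for CIFAR-10, are the usual conventions, not stated in the table):

```python
import math

def nats_to_bits_per_dim(nll_nats_per_image, image_shape=(3, 32, 32)):
    # bits/dim = (NLL in nats) / (ln 2 * number of dimensions)
    dims = 1
    for s in image_shape:
        dims *= s
    return nll_nats_per_image / (math.log(2.0) * dims)
```

For example, a CIFAR-10 model with a total NLL of $3072\ln 2$ nats per image scores exactly 1.0 bits/dim.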
  **Model**                                               **CelebA-HQ $256\times256$**
  ------------------------------------------------------- ------------------------------
  Glow [@kingma2018glow]                                   1.03
  SPN [@menick2018generating]                              **0.61**
  <span style="font-variant:small-caps;">MaCow</span>:     0.95
  <span style="font-variant:small-caps;">MaCow</span>:     0.74

  : Negative log-likelihood scores for 5-bit datasets in bits/dim.[]{data-label="tab:density:high"}

![5-bit $256\times256$ CelebA-HQ samples, with temperature 0.7.[]{data-label="fig:celeba:sample"}](figs/celeba_sample.png){width="99.00000%"}

High-Resolution Images
----------------------

We now demonstrate experimentally that our <span style="font-variant:small-caps;">MaCow</span> model is capable of generating high-fidelity samples at high resolution. Following @kingma2018glow, we choose the CelebA-HQ dataset [@karras2017progressive], which consists of 30,000 high-resolution images from the CelebA dataset [@liu2015faceattributes]. We train our models on 5-bit images, with the fine-grained multi-scale architecture and both uniform and variational dequantization.

### Density Estimation

Table \[tab:density:high\] reports the negative log-likelihood scores in bits/dim of two versions of <span style="font-variant:small-caps;">MaCow</span> on the 5-bit $256\times256$ CelebA-HQ dataset. With uniform dequantization, <span style="font-variant:small-caps;">MaCow</span> improves the log-likelihood over Glow from 1.03 bits/dim to 0.95 bits/dim. Equipped with variational dequantization, <span style="font-variant:small-caps;">MaCow</span> obtains 0.74 bits/dim, 0.13 bits/dim behind the state-of-the-art autoregressive generative model SPN [@menick2018generating], significantly narrowing the gap.

### Image Generation

Consistent with previous work on likelihood-based generative models [@parmar2018image; @kingma2018glow], we found that sampling from a reduced-temperature model often results in higher-quality samples.
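Reduced-temperature sampling amounts to scaling the Gaussian prior by $T$ before inverting the flow. A minimal sketch of the latent-sampling step (the inverse flow itself is omitted; shapes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_prior(shape, temperature=0.7):
    # Draw z ~ N(0, T^2 I) instead of N(0, I); an actual model would then
    # push z through the inverse flow to obtain an image sample (not shown).
    return temperature * rng.standard_normal(shape)

# a batch of 4 latents for 3x256x256 images at temperature 0.7
z = sample_prior((4, 3, 256, 256), temperature=0.7)
```

Shrinking the prior concentrates samples in the high-density core of the latent space, trading sample diversity for visual quality.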
Figure \[fig:celeba:sample\] showcases some random samples obtained from our model for 5-bit CelebA-HQ $256\times256$ with temperature 0.7. The images are of remarkably high quality for non-autoregressive likelihood-based models.

Conclusion
==========

In this paper, we proposed a new type of generative flow, coined <span style="font-variant:small-caps;">MaCow</span>, which exploits masked convolutional neural networks. By restricting the local dependencies to a small masked kernel, <span style="font-variant:small-caps;">MaCow</span> enjoys fast and stable training as well as efficient sampling. Experiments on both low- and high-resolution benchmark image datasets show the capability of <span style="font-variant:small-caps;">MaCow</span> for both density estimation and high-fidelity generation, with state-of-the-art or comparable likelihood and sample quality superior to that of previous top-performing models. One potential direction for future work is to extend <span style="font-variant:small-caps;">MaCow</span> to other forms of data, in particular text, to which (to the best of our knowledge) no attempt has been made to apply flow-based generative models. Another exciting direction is to combine <span style="font-variant:small-caps;">MaCow</span>, or general flow-based generative models, with variational inference to automatically learn meaningful (low-dimensional) representations from raw data.

[^1]: In our experiments, we set $M=4$. Note that the original multi-scale architecture is a special case of the fine-grained version with $M=2$.
---
author:
- 'N. Ysard'
- 'M. Juvela'
- 'L. Verstraete'
bibliography:
- 'biblio.bib'
title: Modelling the spinning dust emission from dense interstellar clouds
---

Introduction
============

Discovered in the nineties, the anomalous microwave emission (AME) has aroused great interest [@Kogut1996; @Leitch1997], first because it appears in a frequency window that is optimal for the detection of Cosmic Microwave Background (CMB) fluctuations. @DL98 proposed that the AME could be caused by the electric dipole emission of rapidly rotating grains: the spinning dust emission. This mechanism is now most often invoked to explain the AME, and several models have been published [@Ali2009; @Ysard2010a; @Hoang2010; @Silsbee2011]. The study of spinning dust could help in understanding the life cycle of interstellar dust grains because it may be a new tracer of the smallest grains, the interstellar Polycyclic Aromatic Hydrocarbons (PAHs). The preference for spinning dust models over other mechanisms is based on several arguments. First, the AME is correlated with the dust IR emission, and this correlation is particularly tight for the mid-IR emission of small grains. Second, the AME is weakly polarized, as expected for PAHs because these grains are not expected to be aligned with the interstellar magnetic field [@Battistelli2006; @Casassus2008; @Lopez2011]. Third, the shape and the intensity of the AME can be reproduced with spinning dust spectra (e.g. Watson et al. 2005; Planck Collaboration et al. 2011b, and many other references). However, the spinning dust emission depends on the local physical conditions (gas ionisation state and radiation field) and on the size distribution of small grains. Recent observations of interstellar clouds reveal dissimilar morphologies in the mid-IR and in the microwave range that may be explained by local variations of the environmental conditions [@Casassus2006; @Casassus2008; @Ysard2010b; @Castellanos2011; @Vidal2011].
In this work we study the spinning dust emission of interstellar clouds, including a treatment of the gas state and of radiative transfer. In this context we reexamine the relationship between the AME and the dust IR emission. The paper is organised as follows. In Section \[models\] we describe the models. In Section \[gas\_properties\] we detail our method to estimate the gas properties (ionisation state and temperature). In Section \[environment\] we present the variations of the spinning dust spectrum with the gas density and with the intensity of the radiation field. We also consider variations of the cosmic-ray ionisation rate, as suggested by recent observations. In Section \[radiative\_transfer\] we present the spinning dust emission with radiative transfer modelling. Finally, we present our conclusions in Section \[conclusions\].

Models
======

Current models of spinning dust [@DL98; @Ali2009; @Ysard2010a; @Hoang2010; @Silsbee2011] take into account a number of processes for the rotational excitation and damping of the grains: the emission of IR photons, the collisions with neutral and ionised gas particles (, , and ), the plasma drag ( and ), the photoelectric effect, and the formation of H$_2$ molecules at the surface of the grains. The publicly available SpDust[^1] code [@Ali2009; @Silsbee2011] includes the most recent developments regarding the gas-grain interactions and the grain dynamics (rotation around a non-principal axis of inertia). The results of SpDust agree well with other models that include a more detailed treatment of the IR emission or of the gas-grain interactions [@Ysard2010a; @Hoang2010]. SpDust is fast and well-suited for coupling to other codes, especially radiative transfer codes. In the following, we use SpDust to model the spinning dust emission. In order to estimate the dust emission from the mid-IR to the microwave range in a consistent way, we coupled SpDust with the dust emission model described by @Compiegne2011, DustEM[^2].
DustEM is based on the formalism of @Desert1990 and includes three dust types: interstellar PAHs, amorphous carbonaceous grains, and amorphous silicates. We used the dust populations defined by @Compiegne2011 for the diffuse, high galactic latitude interstellar medium (DHGL). For PAHs we assumed a log-normal size distribution with centroid $a_0=0.64$ nm and width $\sigma=0.4$, with a dust-to-gas mass ratio $M_{PAH}/M_H=7.8\times 10^{-4}$. In current models, the smallest grains (PAHs) carry the spinning dust emission, which is sensitive to the gas density and the radiation field intensity[^3], $G_0$, but also to the ionisation state (the abundance of the $\ion{H}{ii}$ and $\ion{C}{ii}$ ions, noted $x_H$ and $x_C$ respectively). Radiative transfer calculations are performed with the CRT[^4] tool [@Juvela2003; @Juvela2005], to which we have coupled DustEM and SpDust. CRT is only used to estimate the dust temperature and the resulting dust emission from the mid-IR to the microwave range. Our treatment of the gas properties is presented in Section \[gas\_properties\].

Gas state {#gas_properties}
=========

As discussed above, the dynamics of spinning dust grains involves gas-grain interactions and radiative processes. The spinning dust emission is therefore sensitive to the gas density ($n_H$) and temperature ($T_{{\rm gas}}$), and to the intensity of the UV radiation field traced by the factor $G_0$. In particular, the gas-grain interactions depend on the gas ionisation state, i.e., the abundance of the major charged species (electrons, , , etc.), which primarily depends on $n_H$, $T_{{\rm gas}}$, and $G_0$ but also on the chemistry occurring locally. Realistic modelling consequently requires a consistent treatment of the spinning motion of the grains and of the gas ionisation state.
For the present work, where we consider the influence of radiative transfer on the spinning dust emission (see Section \[radiative\_transfer\]), we treat the gas ionisation state with a simplified scheme that we present below. Using this scheme, we then discuss the influence of $n_H$ and $G_0$, and look at the effect of an enhanced cosmic-ray ionisation rate as suggested by recent observations (see Section \[environment\]). When the radiation field intensity is low ($G_0 \leqslant 1$), inelastic collisions with neutral and ionised species of the interstellar gas become the dominant processes for the excitation and the damping of the grain rotation [@Ali2009; @Ysard2010a]. The ion fractions $x_H = n_{\ion{H}{ii}}/n_H$ and $x_C = n_{\ion{C}{ii}}/n_H$ where $n_H = n(\ion{H}{i}) + n(\ion{H}{ii}) + 2n({\rm H}_2)$ accordingly need to be carefully determined to perform a quantitative study of the variations of spinning dust emission with environmental properties. Where CO has not formed (unshielded regions in which most of the gas phase carbon is in the form of or ), we estimate the electron and ion fractions ($x_e=n_e/n_H$, $x_H$, and $x_C$) by simultaneously solving the / and / equilibria, including the recombination of carbon with H$_2$ [@Roellig2006]. Furthermore, we take into account the recombination of with PAHs as described in @Wolfire2008. In neutral gas and neglecting the contribution of helium, the ionisation balance of hydrogen including H$_2$ reads $$\begin{aligned} ({\rm \ion{H}{i}, H_2}) + {\rm CR} & \rightleftarrows & ({\rm \ion{H}{ii}, H_2^+}) + {\rm e}^{\,-} \\ \zeta_{CR} (1-x_H) & = & x_H x_e n_H a_H,\end{aligned}$$ where $\zeta_{CR}$ is the cosmic-ray ionisation rate per second and per proton and $a_H = 3.5 \times 10^{-12} (T/300 {\rm K})^{-0.75}$ cm$^3$/s is the recombination rate [@Roellig2006]. Unless otherwise stated, we assume $\zeta_{CR} = 5 \times 10^{-17}$ s$^{-1}$H$^{-1}$. 
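The hydrogen ionisation balance above can be solved in closed form for $x_H$ given $x_e$. A small sketch of that step (the input values in the test are illustrative):

```python
def a_H(T_gas):
    # hydrogen recombination rate [cm^3 s^-1]:
    # a_H = 3.5e-12 (T/300 K)^-0.75 (Roellig et al. 2006)
    return 3.5e-12 * (T_gas / 300.0) ** (-0.75)

def x_H_from_balance(x_e, n_H, T_gas, zeta_CR=5e-17):
    # zeta_CR (1 - x_H) = x_H x_e n_H a_H
    #   =>  x_H = zeta_CR / (zeta_CR + x_e n_H a_H)
    return zeta_CR / (zeta_CR + x_e * n_H * a_H(T_gas))
```

The form makes the two limits explicit: $x_H \to 1$ when recombination is negligible, and $x_H \propto \zeta_{CR}/(x_e n_H a_H)$ when recombination dominates.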
In regions where CO has not formed, we assume that is the dominant ionised heavy element and write the electron fraction as $x_e \simeq x_H + x_C$. The abundance thus becomes $$\label{premiere_estimation} x_C = x_e - \frac{1}{1+x_e n_H a_H / \zeta_{CR}}.$$ On the other hand, $x_C$ can be derived from the ionisation balance of carbon where we take into account the following reactions: $$\begin{aligned} {\rm \ion{C}{i}} + h\nu & \stackrel{k_i}{\longrightarrow} & {\rm \ion{C}{ii}} + {\rm e}^{\,-} \\ {\rm \ion{C}{ii}} + {\rm e}^{\,-} & \stackrel{k_r}{\longrightarrow} & {\rm \ion{C}{i}} \\ {\rm \ion{C}{ii}} + {\rm PAH}^-/{\rm PAH} & \stackrel{k_x}{\longrightarrow} & {\rm \ion{C}{i}} + {\rm PAH}/{\rm PAH}^+ \\ {\rm \ion{C}{ii}} + {\rm H}_2 & \stackrel{k_a}{\longrightarrow} & {\rm CH_2}^+ + h\nu \;\; {\rm or} \;\; {\rm CH}^+ + {\rm \ion{H}{i}}.\end{aligned}$$ The rate coefficients $k_i$, $k_r$ and $k_x$ are taken from @Wolfire2008. From the database of the Meudon PDR code [@LePetit2006], we take $$k_a \simeq 1.7\times 10^{-15}+1.5\times 10^{-10}\exp(-4640/T_{{\rm gas}})\ {\rm cm^3\,s^{-1}},$$ where the two terms are for the CH$_2^+$ and CH$^+$ products, respectively. The C/ ionisation balance then reads $$([{\rm C}] - x_C) k_i = x_C x_e n_H k_r + x_C x_{{\rm PAH}} n_H k_x + x_C y n_H k_a / 2,$$ where $y = 2n_{H_2}/n_H$ is the H$_2$ fraction and $[{\rm C}] = n_C/n_H = n(\ion{C}{i})/n_H + x_C$ is the total carbon abundance, which we take to be $1.3\times 10^{-4}$. This leads to an expression for $x_C$: $$\label{seconde_estimation} x_C = [{\rm C}] \left( 1 + x_e \frac{n_H k_r}{k_i} + x_{{\rm PAH}} \frac{n_H k_x}{k_i} + y \frac{n_H k_a}{2 k_i} \right)^{-1}.$$ Combining Eqs. \[premiere\_estimation\] and \[seconde\_estimation\] yields a third-degree equation for $x_e$ whose solution is $x_s$. In shielded regions, $x_C$ drops rapidly to form C and then CO.
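A numerical sketch of combining the two estimates of $x_C$ to obtain $x_e$: instead of writing out the cubic, we bracket the root of the difference of Eqs. \[premiere\_estimation\] and \[seconde\_estimation\] by bisection. The rate coefficients `k_i`, `k_r`, `k_x` and the PAH abundance below are placeholders, not the values of @Wolfire2008.

```python
def solve_ionisation(n_H, T_gas=100.0, zeta_CR=5e-17, C_tot=1.3e-4,
                     k_i=3e-10, k_r=1e-11, k_x=1e-9, k_a=1.7e-15,
                     x_PAH=1e-7, y=0.0):
    # Illustrative rate coefficients: k_i, k_r, k_x, x_PAH are placeholders.
    a_H = 3.5e-12 * (T_gas / 300.0) ** (-0.75)

    def x_C_from_H(x_e):
        # Eq. (1): x_C = x_e - 1 / (1 + x_e n_H a_H / zeta_CR)
        return x_e - 1.0 / (1.0 + x_e * n_H * a_H / zeta_CR)

    def x_C_from_C(x_e):
        # Eq. (2): carbon ionisation balance
        return C_tot / (1.0 + x_e * n_H * k_r / k_i
                        + x_PAH * n_H * k_x / k_i
                        + y * n_H * k_a / (2.0 * k_i))

    # f = x_C_from_H - x_C_from_C is increasing, negative at x_e -> 0
    # and positive at x_e = 1, so bisection converges to the root x_s.
    lo, hi = 1e-12, 1.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if x_C_from_H(mid) < x_C_from_C(mid):
            lo = mid
        else:
            hi = mid
    x_e = 0.5 * (lo + hi)
    return x_e, x_C_from_C(x_e)
```

The returned $x_e$ plays the role of $x_s$; $x_H$ then follows from $x_e - x_C$.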
The electron fraction $x_e$ follows this evolution down to a value $x_{dc}$ corresponding to the case of dark clouds where the ionisation is mostly due to cosmic rays. In this case, we assume that the electron fraction follows the formula that @Williams1998 derived from an analysis of C$^{18}$O, H$^{13}$CO$^+$ and DCO$^+$ observations in low-mass cores: $$x_{dc}= 2000 \sqrt{ \frac{2\zeta_{CR}}{y n_H}},$$ where $y=2n_{H_2}/n_H$. We then write $x_e={\rm MAX}(x_{dc}, x_s)$. The and fractions are then derived from Eqs. \[premiere\_estimation\] and \[seconde\_estimation\]. Fig. \[ratio\_xH\_xC\] shows the influence of the recombination of with PAHs on the calculation of $x_H$ and $x_C$. When it is omitted ($k_x = 0$), the impact is the strongest when the radiation field intensity is the weakest. The fraction is overestimated by a factor of up to 20 in dense clouds, whereas the fraction is underestimated. The recombination with PAHs has no influence when $G_0 \gtrsim 50$, whatever the density. The above fractions depend on the gas temperature, $T_{gas}$, as well as on the hydrogen molecular fraction $y$. To derive these quantities, we performed a grid of simulations with CLOUDY [@Ferland1998]. We then interpolated on this grid to obtain $T_{{\rm gas}}$ and $y$ in the optically thin limit ($N_H\leq 10^{20}$ H/cm$^2$). We note that even if the gas is fully molecular ($y=1$), the reactions of with H$_2$ have little influence because their rates are much lower than those of reactions involving PAHs. The results are shown in Fig. \[figure\_gas\] for a gas density of $n_H = 100$ cm$^{-3}$. ![Ratio of ion fractions calculated as described in Section \[gas\_properties\] and of ion fractions calculated omitting the recombination of with PAHs ($k_x = 0$).
Results are shown for $n_H = 0.1$ (blue), 30 (green), 100 (red), $1\,000$ (magenta), and $10\,000$ cm$^{-3}$ (black), for $x_H$ (dashed lines from bottom to top) and $x_C$ (solid lines from top to bottom).[]{data-label="ratio_xH_xC"}](ratio_xH_xC.png){width="40.00000%"}

![Ion fractions $x_H$ (dashed line) and $x_C$ (dot-dashed line), H$_2$ fraction $y$ (dotted line), and gas temperature $T_{{\rm gas}}$ (full line) as a function of the intensity of the radiation field $G_0$ for $n_H = 100$ H/cm$^{-3}$.[]{data-label="figure_gas"}](figure_gas.png){width="40.00000%"}

Influence of local physical conditions {#environment}
======================================

Variations with the gas density {#gas_density}
-------------------------------

![[*a)*]{} Spinning dust spectra per H column density for interstellar PAHs illuminated by the standard ISRF, $G_0 = 1$, for $n_H = 400, 1000, 3\,000, 4\,000, 7\,500, 20\,000$ and $50\,000$ cm$^{-3}$. [*b)*]{} Peak frequencies (full line + circles) and intensities per H column density at the peak (dashed line + diamonds) of the spinning dust spectra for $G_0 = 1$ and $n_H = 0.1 - 10^5$ cm$^{-3}$.[]{data-label="variations_nH"}](variations_nH.png){width="40.00000%"}

We study the variations of the spinning dust spectrum for grains illuminated by the standard interstellar radiation field, ISRF, with $G_0 = 1$, and for clouds with densities ranging from $n_H = 0.1$ to $10^5$ cm$^{-3}$. For each density, the gas properties are estimated according to $n_H$ and the radiation field intensity, as described in Section \[gas\_properties\]. The resulting spectra are shown in Fig. \[variations\_nH\] and we also display the variation of the peak frequency and of the maximum intensity of the spectra as functions of $n_H$. When $n_H \lesssim 10$ cm$^{-3}$, the rotational excitation and damping are dominated by the IR emission (Fig. \[contributions\]).
Thus the peak intensity of the spinning dust spectrum does not much depend on the gas properties and it varies little when increasing $n_H$. The slight increase of the peak frequency and of the intensity of the spectra for $n_H \geqslant 1$ cm$^{-3}$ is caused by the increase of the influence of collisions with neutral species (Fig. \[contributions\]). If $10 \lesssim n_H \lesssim 4\,000$ cm$^{-3}$, the gas-grain interactions become the dominant processes for rotational excitation and damping. This results in an increased peak frequency when $n_H$ increases. For $4\,000 \lesssim n_H \lesssim 10\,000$ cm$^{-3}$, the gas-grain interactions are still the dominant processes but the and fractions dramatically decrease when $n_H$ increases. As a result, the ion-grain collisions are less efficient and the peak frequency decreases (Fig. \[contributions\]). Finally if $n_H \gtrsim 10\,000$ cm$^{-3}$, the peak frequency increases, while the emissivity varies only little. The numbers given here are valid only for the ISRF with $G_0 = 1$. If the intensity of the radiation field is increased (decreased), the threshold values from one domain to the next are increased (decreased). We emphasize the importance of correctly estimating the ion fractions when modelling spinning dust emission. ![Contribution of the different processes to the damping (solid line) and the excitation (dashed line) of the spinning dust emission for a grain with a radius of 3.5 $\mathring{A}$ illuminated by the standard ISRF with $G_0 = 1$. The contributions are normalized to the drag produced by non-elastic collisions in a pure gas of a density $n_H$ [@DL98]. 
The red lines show the contribution of IR emission, the green lines of collisions with neutrals, the blue lines of collisions with ions, the black lines of plasma drag, the magenta line of photoelectric effect, and the yellow line of H$_2$ formation.[]{data-label="contributions"}](contributions.png){width="40.00000%"}

Variations with the intensity of the interstellar radiation field {#intensity}
-----------------------------------------------------------------

![[*a)*]{} Peak frequencies of the spinning dust spectra for $G_0 = 10^{-4}$ to $10^4$ and $n_H = 0.1$ (blue), 30 (green), 100 (red), $1\,000$ (turquoise), and $10\,000$ cm$^{-3}$ (purple), from bottom to top. [*b)*]{} Intensities per H column density of the spinning dust spectra at peak frequencies for the same environmental parameters.[]{data-label="variations_Go"}](variations_Go.png){width="40.00000%"}

We study the variations of the spinning dust spectrum for grains illuminated by the ISRF scaled by $G_0$ factors between $10^{-4}$ and $10^4$. Several gas densities are considered, from 0.1 to $10^4$ H/cm$^3$. The ion fractions are calculated according to the values of $n_H$ and $G_0$ (see Section \[gas\_properties\]). The results can be seen in Fig. \[variations\_Go\]. First, for low $G_0$ the dominant processes are gas-grain interactions: the excitation is driven by collisions with ions, while the damping is dominated by plasma drag and collisions with neutrals. This explains why the spinning dust spectrum varies very little with $G_0$. Depending on the gas density, this applies to $G_0 \lesssim 10^{-3}-10$. A correct estimate of ion fractions is thus important. For instance, if the recombination of with PAHs is omitted when calculating $x_H$ and $x_C$ (Fig. \[ratio\_xH\_xC\]), the spinning dust spectrum is shifted to higher frequencies at low $G_0$. The peak shift caused by reaction 6 is between 1 and 25 GHz for $n_H = 30$ to $10\,000$ cm$^{-3}$, while the intensity is increased by 5 to 45%.
A second domain can be distinguished for $1 \lesssim G_0 \leqslant 100$. The grains become positively charged when $G_0$ increases. For these intermediate radiation field intensities the collisions with ions are still the dominant exciting process even if less efficient. As a result, we see a slight decrease of both the emissivity and the peak frequency of the spinning dust emission, a decrease that is more pronounced at higher density. For $G_0 \geqslant 100$, IR emission dominates the rotational excitation and damping, so that increasing $G_0$ results in increased emissivity and peak frequency. We stress that the variations of the spinning dust spectrum with $G_0$ are not monotonic and that they depend on the gas density. Moreover, the PAHs mid-IR emission is proportional to $G_0$, so that we do not expect a tight correlation of the IR bands with the AME if the latter is caused by spinning dust emission.

Influence of the cosmic-ray ionisation rate $\zeta_{CR}$ {#cosmic_rays}
--------------------------------------------------------

![Peak frequencies (solid line + circles) and intensities per H column density at the peak frequencies (dashed line + diamond) of the spinning dust spectra as a function of the cosmic-ray ionisation rate equal to the standard rate $\zeta_{CR} = 5 \times 10^{-17}$ s$^{-1}$ multiplied by the factor CR = \[1-100\].[]{data-label="variations_CR"}](peak_intensity_CR_nH30.png){width="40.00000%"}

Gas-grain interactions through collisions or plasma drag are among the dominant processes for the rotational damping and excitation. A proper evaluation of the ion fractions in the gas, $x_H$ and $x_C$, is therefore important for the estimation of the spinning dust emission. The carbon is ionised by the UV radiation field and thus $x_C$ depends on $G_0$ (see Section \[gas\_properties\]).
However, because of the Lyman cutoff of the radiation field, the hydrogen inside clouds is mainly ionised by cosmic rays, and $x_H$ depends on the cosmic-ray ionisation rate $\zeta$. In the previous sections, we adopted the Galactic standard value, $\zeta_{CR} = 5 \times 10^{-17}$ s$^{-1}$, measured towards dense molecular clouds. However, several authors have inferred an enhanced cosmic-ray ionisation rate from observations of diffuse clouds ($n_H < 500$ cm$^{-3}$). They showed that this rate can exceed 50 times the standard cosmic-ray ionisation rate [@McCall2003; @Liszt2003; @Shaw2006]. To investigate the influence of the cosmic-ray ionisation rate, we modelled spinning dust spectra for $n_H = 30$ cm$^{-3}$, $G_0 = 1$, and $\zeta = CR \times \zeta_{CR}$ with $1 \leqslant CR \leqslant 100$ (Fig. \[variations\_CR\]). When $\zeta_{CR}$ is increased by two orders of magnitude, $x_H$ is multiplied by 10, the spinning dust spectrum is shifted by 10 GHz, and its intensity is multiplied by three. The cosmic-ray ionisation rate therefore has a significant impact on the spinning dust emission. Its variations as a function of the gas density may produce a more complex behaviour than those presented in Figs. \[variations\_nH\] and \[variations\_Go\].

Spinning dust emission with radiative transfer {#radiative_transfer}
==============================================

An interesting result obtained with the Planck data is that there is AME associated with all the interstellar phases of our Galaxy. It is detected in diffuse and dense media, ionised or neutral, and its spectrum can be fitted with a basic spinning dust model in all cases [@PlanckMarshall2011; @PlanckDickinson2011]. However, at finer angular resolution ($\sim$ a few arcminutes), the morphology of molecular clouds differs in the mid-IR and in the microwave [@Castellanos2011; @Vidal2011].
We showed above that the spinning dust spectrum is very sensitive to the environmental conditions and we claim that it could explain the observed differences. Indeed, the variations of the gas properties and $G_0$ in dense molecular clouds are strong from the cloud edge to the centre. Hence, dense clouds are excellent targets to test the hypothesis that spinning dust emission is at the origin of the observed AME. Therefore, we investigate the spinning dust emission emerging from dense interstellar clouds using the combination of DustEM and CRT, in addition to SpDust. We model starless spherical clouds with density distributions that are suitable to fit interstellar clouds [@Arzoumanian2011; @Dapp2009]: $$n(r) = \frac{n_0}{1 + (r/H_0)^2} \;\;\; {\rm for} \; r \leqslant R_{out},$$ with $R_{out}$ the outer radius, $H_0 = R/3$ the flat internal radius, and $n_0$ the central density. The clouds are illuminated by the ISRF extinguished by a visual extinction $A_V =1$. We consider that small and large grains are present everywhere in the clouds with constant abundances. The gas ionisation state is estimated as described in Section \[gas\_properties\].

Gas temperature
---------------

In dense molecular clouds, the gas temperature grid calculated with CLOUDY in the optically thin limit is no longer relevant (see Section \[gas\_properties\]). We used our own radiative transfer code to estimate $T_{{\rm gas}}$ in spherical model clouds. This model takes into account cosmic-ray heating, line cooling, coupling between gas and dust, and photoelectric heating. The calculations follow the description of @Goldsmith2001 with the exception that the line cooling rates are estimated with Monte-Carlo radiative transfer modelling instead of using the large velocity gradient approximation [@Juvela2011]. The line transfer is calculated with the abundances listed in @Goldsmith2001 and assuming a turbulent linewidth of 1 km/s. The resulting temperature profiles are shown in Fig.
\[Tgas\_profiles\]. ![Gas temperature as a function of the cloud radius for clouds with $R_{out} = 0.1$ pc and $n_0 = 10^4$ (solid line) and $10^5$ H/cm$^3$ (dashed line), and for a cloud with $R_{out} = 1$ pc and $n_0 = 10^3$ H/cm$^3$ (dotted line).[]{data-label="Tgas_profiles"}](Tgas_profiles.png){width="40.00000%"} Starless molecular cloud {#starless_molecular_cloud} ------------------------ ![Surface brightness maps at 12 $\mu$m ($a$) and 30 GHz ($b$) and the emission profiles ($c$) at 12 $\mu$m (solid line), 12 GHz (dotted line), 30 GHz (dashed line), and 44 GHz (dot-dashed line) for the starless molecular cloud with $n_0 = 10^3$ cm$^{-3}$.[]{data-label="figure_1e3"}](figure_1e3_new.png){width="40.00000%"} ![Peak frequency of the spinning dust emission for the molecular cloud with $n_0 = 10^3$ cm$^{-3}$ (dotted line), $10^4$ cm$^{-3}$ (dashed line), and $10^5$ cm$^{-3}$ (solid line).[]{data-label="peak_frequency_new"}](peak_frequency_new.png){width="40.00000%"} We model a starless molecular cloud with an outer radius $R_{out} = 1$ pc and a central density $n_0 = 10^3$ H/cm$^3$. Fig. \[figure\_1e3\] shows the emerging dust emission for this cloud. Both the PAHs mid-IR emission and the spinning dust emission increase towards the centre of the cloud. However, the shapes of the profiles are quite dissimilar: the mid-IR profile is approximately twice as wide as the microwave profile. In this cloud, the intensity of the radiation field varies from 0.24 at the edge to 0.06 at the centre. With these low intensities, the dominant processes for the excitation and the damping of the spinning motion of the grains are the interactions with the gas. The spinning dust emission is consequently stronger at the centre of the cloud. On the other hand, because the PAH mid-IR emission is directly proportional to the intensity of the radiation field and to the column density, the grains emit more at the edge of the cloud, which produces a broader emission profile. 
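The density profile used for these model clouds can be sketched directly. We take the flat internal radius to be $H_0 = R_{out}/3$, our reading of the text's $H_0 = R/3$:

```python
def cloud_density(r, n0, R_out):
    # Plummer-like profile n(r) = n0 / (1 + (r/H0)^2) for r <= R_out,
    # with H0 = R_out / 3 (assumed reading of H_0 = R/3); zero outside.
    if r > R_out:
        return 0.0
    H0 = R_out / 3.0
    return n0 / (1.0 + (r / H0) ** 2)
```

At the outer radius $(r/H_0)^2 = 9$, so the density contrast from centre to edge is a fixed factor of 10 regardless of $n_0$ and $R_{out}$.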
Dense molecular globules: spinning dust as a tracer of grain growth from diffuse to dense medium ------------------------------------------------------------------------------------------------ ![Surface brightness maps at 12 $\mu$m ($a$) and 30 GHz ($b$) and the emission profiles ($c$) at 12 $\mu$m (solid line), 12 GHz (dotted line), 30 GHz (dashed line), and 44 GHz (dot-dashed line) for the dense molecular globule with $n_0 = 10^4$ cm$^{-3}$. The three lower figures ($d$, $e$, $f$) show the same for $n_0 = 10^5$ cm$^{-3}$.[]{data-label="figure_transfert"}](figure_transfert.png){width="40.00000%"} We modeled two dense molecular globules with an outer radius $R = 0.1$ pc and central densities $n_0 = 10^4$ and $10^5$ H/cm$^3$. The emerging dust emission at 12 $\mu$m, at the centre of two Planck-LFI filters (30, 44 GHz), and at 12 GHz are shown in Fig. \[figure\_transfert\]. As expected, the brightness maps in the mid-IR and in the microwave are dissimilar and even anti-correlated depending on the central density and on the frequency. The emission profiles in the microwave range have different widths depending on the frequency. This reflects the variation of the peak frequency of the spinning dust emission with the local physical conditions at a given radius (Fig. \[peak\_frequency\_new\]). Apart from these general trends, the results are strikingly different depending on the central density of the cloud. We emphasize that these findings are quite different from what is expected and observed at degree scale in the diffuse interstellar medium where the mid-IR emission and the AME are correlated. The cloud with $n_0 = 10^4$ H/cm$^3$ illustrates the complexity of the variations of the spinning dust emission with the environmental properties (Figs. \[figure\_transfert\]a-c). The peak frequency of the spinning dust emission varies from $\sim 45$ GHz at the edge to $\sim 33$ GHz at the centre of the cloud (Fig. 
\[peak\_frequency\_new\]), while $0.1 \leqslant G_0 \leqslant 0.25$. These peak frequencies are higher than in the previous case with $n_0 = 10^3$ H/cm$^3$ (Section \[starless\_molecular\_cloud\]), whereas the mid-IR emission is partly extinguished towards the centre. This explains why the profiles at 30 and 44 GHz are wider than the mid-IR profile. The width of the 12 GHz profile varies only little compared to the previous case because of its distance from the peak frequency. For the densest cloud, with $n_0 = 10^5$ H/cm$^3$ (Figs. \[figure\_transfert\]d-f), the brightest region in the mid-IR is a ring at the edge of the cloud. The emission profile of these clouds can be explained in two ways. Either the PAHs are present throughout the cloud with a constant abundance but the radiation field is too extinguished at the centre for the grains to emit efficiently in the mid-IR. Some of the emitted IR photons also are absorbed by the dust, which additionally decreases the surface brightness at the centre. Alternatively the abundance of small grains decreases from the edge to the centre of the cloud. There is no way to distinguish these two scenarios with only data in the mid-IR. However, this ambiguity is lifted with a microwave emission map. Indeed, if PAHs are present at the centre of the cloud, the rotational excitation caused by the interactions with the gas particles is sufficient for them to emit in the microwave range. The peak frequencies vary from $\sim$ 30 GHz at the edge to $\sim$ 27 GHz at the centre, which coincides with the Planck-LFI bands and explains why the emission increases towards the cloud centre for the three frequencies. Because the dense clouds are transparent in the microwave range, spinning dust emission from the cloud centre can be observed. On the other hand, if small grains are absent from the centre, only a ring of mid-IR and spinning dust emission is observed at the edge of the cloud. 
As a result, spinning dust emission could be used to trace the evolution of small grains from the diffuse to the dense medium because the comparison between mid-IR and microwave data allows us to trace the variations of the PAH abundance. In this context, the potential of comparing the AME to the mid-IR bands has recently been illustrated by @Tibbs2011. Conclusion {#conclusions} ========== To understand the emerging spinning dust emission from interstellar clouds, we studied the influence of the gas ionisation state and of radiative transfer on the grain rotational motion with consistent models for the gas state and for the interstellar clouds. We found that the spinning dust spectrum is sensitive to the abundance of the major $\ion{H}{ii}$ and $\ion{C}{ii}$ ions and that these abundances must be described consistently in the modelling by including a treatment of the gas state. In addition, we showed that radiative transfer within interstellar clouds has surprising effects, e.g., that the mid-IR emission and the AME can be anticorrelated towards the cloud centre. From these findings, we argue that the AME from dense interstellar clouds provides a new way to probe grain growth from the diffuse to the dense medium (e.g., depletion of small grains by coagulation on larger ones), provided it is observed at relevant (a few arcminutes) angular scales using ground-based radio telescopes. [^1]: Available at [http://www.tapir.caltech.edu/~yacine/spdust/spdust.html](http://www.tapir.caltech.edu/~yacine/spdust/spdust.html) [^2]: Available at <http://www.ias.u-psud.fr/DUSTEM>. [^3]: $G_0$ is a scaling factor for the radiation field integrated between 6 and 13.6 eV. The standard radiation field corresponds to $G_0 = 1$ and to an intensity of $1.6\times 10^{-3}$ erg s$^{-1}$ cm$^{-2}$ [@Mathis1983]. [^4]: Available at <http://wiki.helsinki.fi/display/~mjuvela@helsinki.fi/CRT>.
--- abstract: 'In order to investigate the low-energy antiferromagnetic Cu-spin correlation and its relation to the superconductivity, we have performed muon spin relaxation ($\mu$SR) measurements using single crystals of the electron-doped high-$T_{\rm c}$ cuprate Pr$_{1-x}$LaCe$_x$CuO$_4$ in the overdoped regime. The $\mu$SR spectra have revealed that the Cu-spin correlation is developed in the overdoped samples where the superconductivity appears. The development of the Cu-spin correlation weakens with increasing $x$ and is negligibly small in the heavily overdoped sample where the superconductivity almost disappears. Considering that the Cu-spin correlation also exists in the superconducting electron-doped cuprates in the undoped and underdoped regimes \[T. Adachi [*et al.*]{}, J. Phys. Soc. Jpn. [**85**]{}, 114716 (2016)\], our findings suggest that the mechanism of the superconductivity is related to the low-energy Cu-spin correlation in the entire doping regime of the electron-doped cuprates.' author: - 'Malik A. Baqiya' - Tadashi Adachi - Akira Takahashi - Takuya Konno - Taro Ohgi - Isao Watanabe - Yoji Koike title: 'Muon spin relaxation study of the spin correlation in the overdoped regime of electron-doped high-$T_{\rm c}$ cuprate superconductors' --- Introduction {#sec:introduction} ============ In the research of high-$T_{\rm c}$ cuprate superconductivity, the relationship between the Cu-spin correlation and superconductivity has been the central issue in both hole-doped and electron-doped cuprates. For the hole-doped cuprate of La$_{2-x}$Sr$_x$CuO$_4$ (LSCO), neutron-scattering experiments have revealed that the commensurate Cu-spin correlation in the antiferromagnetic (AF) state of the parent compound changes to the incommensurate one with hole doping in the superconducting (SC) state, [@yamada-prb] followed by the disappearance of both the incommensurate Cu-spin correlation and superconductivity in the heavily overdoped regime. 
[@wakimoto] Muon-spin-relaxation ($\mu$SR) measurements in Zn-impurity-substituted La$_{2-x}$Sr$_x$Cu$_{1-y}$Zn$_y$O$_4$ have revealed that the development of the Cu-spin correlation vanishes at the end point of the SC region in the heavily overdoped regime of the phase diagram. [@risdi-lsco] Therefore, the incommensurate Cu-spin correlation appears to be intimately related to the superconductivity. For the electron-doped cuprates, on the other hand, the commensurate Cu-spin correlation has been observed in the optimally doped regime of Nd$_{2-x}$Ce$_x$CuO$_4$ [@yamada-prl] and Pr$_{1-x}$LaCe$_x$CuO$_4$ (PLCCO). [@kang; @wilson] The relationship between the commensurate Cu-spin correlation and superconductivity has been unclear in the electron-doped cuprates. Recently, the so-called undoped (Ce-free) superconductivity in the electron-doped cuprates has attracted considerable research attention. It has been reported that the superconductivity appears even in the parent compound of $x=0$ and in a wide range of $x$ in Nd$_{2-x}$Ce$_x$CuO$_4$ thin films through the appropriate reduction annealing to remove excess oxygen from the as-grown thin films. [@tsukada; @matsumoto] The superconductivity in the parent compound has also been confirmed in polycrystalline samples. [@asai; @takamatsu] Two possible mechanisms of the undoped superconductivity have been proposed: the electron doping by the oxygen deficiency (oxygen non-stoichiometry) [@horio-nco] and the collapse of the charge-transfer gap due to square-planar coordination of oxygen in the CuO$_2$ plane. [@adachi-jpsj] If the latter is the case, the undoped superconductivity indicates that the phase diagram is completely different from the conventional one, that is, the superconductivity in the electron-doped cuprates cannot be understood in terms of carrier doping into the parent Mott insulators as in the case of the hole-doped cuprates. 
An important issue is whether or not the Cu-spin correlation is related to the superconductivity in the electron-doped cuprates. Through the improved reduction annealing, high-quality SC single crystals have been obtained in underdoped Pr$_{2-x}$Ce$_x$CuO$_4$ with $x \ge 0.04$ [@brinkmann] and Pr$_{1.3-x}$La$_{0.7}$Ce$_x$CuO$_{4+\delta}$ with $x \ge 0.05$. [@adachi-jpsj; @horio; @adachi-review] Previously, we performed $\mu$SR measurements of the SC parent polycrystal of La$_{1.8}$Eu$_{0.2}$CuO$_4$ and the SC underdoped single crystal of Pr$_{1.3-x}$La$_{0.7}$Ce$_x$CuO$_4$ with $x=0.10$. [@adachi-review; @adachi-jpsj2] It has been found that a short-range magnetic order is formed at low temperatures in both samples, suggesting a coexisting state of superconductivity with the short-range magnetic order. The development of the Cu-spin correlation has also been confirmed in $\mu$SR measurements of the SC parent thin film of La$_{1.9}$Y$_{0.1}$CuO$_4$. [@kojima] These results suggest that a small amount of residual excess oxygen in a sample causes the development of the Cu-spin correlation and/or the formation of the short-range magnetic order, indicating that the undoped and electron-underdoped cuprates are strongly correlated electron systems. The next issue is how the Cu-spin correlation changes with electron doping concomitant with the weakening of the superconductivity in the overdoped regime. Inelastic neutron-scattering experiments in the overdoped PLCCO with $x \le 0.18$ have revealed that the characteristic energy of the Cu-spin correlation decreases with increasing $x$ and seems to disappear with the superconductivity. [@fujita] This is different from the results of the hole-doped cuprates in which the characteristic energy of the Cu-spin correlation is unchanged but the spectral weight decreases with hole doping, [@wakimoto] suggesting the occurrence of a phase separation into SC and normal-state regions in a sample. 
[@tanabe] From previous $\mu$SR measurements in the SC polycrystal of PLCCO with $x=0.14$, slowing down of the Cu-spin fluctuations has been observed at low temperatures without any magnetic order. [@risdi-plcco] NMR experiments of the SC single crystal of Pr$_{1.3-x}$La$_{0.7}$Ce$_x$CuO$_4$ with $x=0.15$ have also indicated the presence of AF spin fluctuations. [@yamamoto] These suggest that, compared with the short-range magnetic order in the parent and underdoped samples, [@adachi-review; @adachi-jpsj2] the development of the Cu-spin correlation weakens with increasing $x$ but is apparently observed in the slightly overdoped regime. In order to obtain detailed information on the low-energy Cu-spin correlation in the heavily overdoped regime and its relation to the superconductivity, we have carried out $\mu$SR measurements using PLCCO single crystals in the heavily overdoped regime of $x=0.17$ and $0.20$. Experimental ============ Single crystals of PLCCO with $x=0.17$ and $0.20$ were prepared by the traveling solvent floating zone method. [@lambacher; @malik] The quality of the grown crystals was confirmed to be good by x-ray back-Laue photography and powder x-ray diffraction. The composition of the crystals was analyzed by inductively-coupled-plasma (ICP) spectrometry. For the reduction annealing in a vacuum condition of $2 \times 10^{-4}$ Pa, the two-step annealing was performed at 900$^{\circ}$C for 12 h and 500$^{\circ}$C for 12 h for $x=0.17$. For $x=0.20$, the improved one-step reduction annealing was carried out at 800$^{\circ}$C for 24 h. [@adachi-jpsj] Magnetic-susceptibility measurements were performed using a superconducting quantum interference device (SQUID) magnetometer (Quantum Design, MPMS). Figure 1 shows the temperature dependence of the magnetic susceptibility of PLCCO with $x=0.17$ and $0.20$ together with $x=0.13$ and $0.15$. 
[@malik] The SC transition temperature $T_{\rm c}$ of $x=0.17$ is $\sim 5$ K and the Meissner diamagnetism at 2 K is much smaller than those of $x=0.13$ and $0.15$, indicating that the superconductivity is weak. For $x=0.20$, the Meissner diamagnetism is unobservable, indicating a non-SC state of this sample. As shown in the inset of Fig. 1, values of $T_{\rm c}$ are almost consistent with those in the former report. [@fujita] Zero-field (ZF) and longitudinal-field (LF) $\mu$SR measurements were performed at low temperatures down to 0.3 K at the RIKEN-RAL Muon Facility at the Rutherford-Appleton Laboratory in the United Kingdom using a pulsed positive surface muon beam. The data were analyzed using the WiMDA program. [@pratt] Results and Discussion ====================== ![(Color online) Temperature dependence of the magnetic susceptibility of reduced single crystals of PLCCO with $x=0.17$ and 0.20 together with $x=0.13$ and 0.15. [@malik] The inset shows the Ce-concentration dependence of $T_{\rm c}$ in PLCCO, together with the former results of $T_{\rm c}$ and $T_{\rm N}$. [@fujita] Solid lines are to guide the reader’s eye[]{data-label="fig:Figure1"}](fig1.eps){width="1.0\linewidth"} Figure 2 shows the ZF-$\mu$SR time spectra of the reduced crystals of PLCCO with $x=0.17$ and $0.20$ together with $x=0.14$. [@risdi-plcco] In both $x=0.17$ and $0.20$, the depolarization of muon spins is slow at a high temperature of 200 K owing to the very small nuclear dipole field randomly oriented at the muon site, indicating an almost paramagnetic state of Cu spins. With decreasing temperature, the depolarization of muon spins becomes fast as seen in $x=0.14$ due to static random magnetism of small magnetic moments of Pr$^{3+}$ ions induced by the mixing of the excited state in the crystal electric field. 
[@risdi-plcco; @kadono] For $x=0.14$, the development of the Cu-spin correlation is characterized by the increase in the asymmetry in the long-time region above $\sim 4$ $\mu$sec with decreasing temperature shown in Fig. 2(a), [@risdi-plcco] which is due to the recovery of the asymmetry toward 1/3 in a magnetically ordered state. It is found that the recovery of the asymmetry in the long-time region is negligibly small at low temperatures down to 9 K for $x=0.17$ and down to 0.3 K for $x=0.20$. This indicates that the Cu-spin correlation is hardly developed in the heavily overdoped regime of PLCCO where the superconductivity almost disappears. To see effects of the Cu-spin correlation in detail, the ZF-$\mu$SR time spectra were analyzed using the following two-component function, [@risdi-plcco] $$A(t) = A_{\rm s} {\rm exp}[-(\lambda t)^\beta] + A_{\rm G} {\rm exp}[-\sigma^2 t^2] + A_{\rm base}. \label{eq1}$$ The first term represents a stretched-exponential component in which effects of nuclear spins and Cu spins are dominant. The $A_{\rm s}$, $\lambda$, and $\beta$ are the initial asymmetry, depolarization rate of muon spins, and power of damping, respectively. The second term represents a static Gaussian component in which the effect of small Pr$^{3+}$ moments is dominant. The $A_{\rm G}$ and $\sigma$ are the initial asymmetry and depolarization rate of muon spins, respectively. The $A_{\rm base}$ is a time-independent background term. The spectra are well fitted with Eq. (\[eq1\]), as clearly shown in Figs. 2(b) and 2(c). It is noted that the use of the two terms with $A_{\rm s}$ and $A_{\rm G}$ suggests possible two muon stopping sites in PLCCO. 
Although former reports have suggested one muon stopping site in the T’-cuprates based on the dipole-field calculation, [@luke; @le] a recent first-principles calculation suggests two muon stopping sites: one is near the CuO$_2$ plane mainly sensing the dipole field of the Cu spins and the other is near the (Pr,La,Ce)-O layer mainly sensing that of Pr$^{3+}$ moments. [@tsutsumi; @tsutsumi-calc] ![(Color online) ZF-$\mu$SR time spectra of reduced PLCCO crystals with (a) $x=0.14$, [@risdi-plcco] (b) $x=0.17$, and (c) $x=0.20$ at various temperatures. (d) ZF-$\mu$SR time spectra of the reduced PLCCO crystal with $x=0.20$ at 10 K and 0.3 K. Solid lines are the best-fit results using the two-component function described in Eq. (1).[]{data-label="fig:Figure2"}](fig2.eps){width="1.0\linewidth"} Figure 3 shows the temperature dependence of the fitting parameters $A_{\rm s}$, $\beta$, $\sigma$, and $\lambda$ for both crystals of PLCCO with $x=0.17$ and $0.20$ together with $x=0.14$. [@risdi-plcco] At high temperatures above 100 K, all parameters seem to be almost independent of temperature and the normalized $A_{\rm s}$ is nearly one. It is noted that the change of the spectra above 100 K shown in Fig. 2 for all samples is predominantly due to the small change of $A_{\rm s}$. The $\sigma$ ($A_{\rm s}$) increases (decreases) with decreasing temperature below $\sim 100$ K owing to the growing effect of Pr$^{3+}$ moments. [@risdi-plcco] For $x=0.14$, the development of the Cu-spin correlation is characterized by the steep increase in $\lambda$ and enhancement of $A_{\rm s}$ below $\sim 30$ K as shown in Figs. 3(d) and 3(a). [@risdi-plcco] For $x=0.17$ and $0.20$, on the contrary, neither steep increase in $\lambda$ nor apparent enhancement of $A_{\rm s}$ is observable at low temperatures, indicating that the development of the Cu-spin correlation is negligibly small in both samples. 
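As an illustration of how a spectrum is fitted with the two-component function of Eq. (1), the following Python sketch performs a least-squares fit with `scipy.optimize.curve_fit`. The data and all parameter values here are synthetic and purely illustrative; they do not correspond to the measured PLCCO spectra.

```python
import numpy as np
from scipy.optimize import curve_fit

def asymmetry(t, A_s, lam, beta, A_G, sigma, A_base):
    """Eq. (1): stretched-exponential term (nuclear dipoles and Cu spins)
    plus a static Gaussian term (Pr3+ moments) plus a constant background."""
    return (A_s * np.exp(-(lam * t) ** beta)
            + A_G * np.exp(-(sigma * t) ** 2)
            + A_base)

# Synthetic ZF spectrum for illustration only (made-up parameter values).
t = np.linspace(0.05, 15.0, 300)                    # time (microseconds)
true_params = (0.12, 0.35, 0.8, 0.08, 0.25, 0.02)
rng = np.random.default_rng(0)
data = asymmetry(t, *true_params) + rng.normal(0.0, 0.002, t.size)

# Non-negative bounds keep (lam*t)**beta well defined during the fit.
popt, pcov = curve_fit(asymmetry, t, data,
                       p0=(0.1, 0.3, 1.0, 0.1, 0.2, 0.01),
                       bounds=(0.0, np.inf))
residual_rms = np.sqrt(np.mean((asymmetry(t, *popt) - data) ** 2))
```

In practice the WiMDA program cited in the text performs this fit; the sketch only shows the functional form and how the depolarization rate $\lambda$ and power $\beta$ enter.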
![(Color online) Temperature dependence of the fitting parameters in the two-component function described in Eq. (1) for reduced crystals of PLCCO with $x=0.17$ and 0.20 together with $x=0.14$ [@risdi-plcco] for comparison.[]{data-label="fig:Figure3"}](fig3.eps){width="0.8\linewidth"} In order to further investigate the effects of Cu spins and Pr$^{3+}$ moments, LF-$\mu$SR measurements were performed under LF up to 1000 G at 0.3 K for PLCCO with $x=0.20$. As shown in Fig. 4, the tail of the spectrum is gradually quenched with increasing field up to 100 G. This suggests the existence of static magnetism due to Pr$^{3+}$ moments. In the long-time region, the slow depolarization is still observed up to 1000 G, indicating that there exist fluctuating internal fields at the muon site due to Cu spins. [@risdi-plcco] Therefore, the LF-$\mu$SR results suggest the coexistence of the static magnetism of Pr$^{3+}$ moments and fluctuating Cu spins in $x=0.20$ as well as $x=0.14$. [@risdi-plcco] For both PLCCO crystals with $x=0.17$ and $0.20$, the $\mu$SR spectra show the existence of both the static magnetic field due to Pr$^{3+}$ moments and the slow depolarization related to Cu-spin fluctuations. The development of the Cu-spin correlation becomes weak with increasing $x$ and is negligibly small at $x=0.20$ where the superconductivity almost disappears. Combined with the results in the undoped and underdoped regimes, [@adachi-review; @adachi-jpsj2] these results suggest an intimate relationship between the Cu-spin correlation and superconductivity in the entire doping regime of PLCCO. ![(Color online) LF-$\mu$SR time spectra of the reduced crystal of PLCCO with $x=0.20$ at 0.3 K. Solid lines are the best-fit results using the two-component function described in Eq. (1).[]{data-label="fig:Figure4"}](fig4.eps){width="1.0\linewidth"} Finally, we briefly compare hole-doped and electron-doped cuprates in terms of the Cu-spin correlation. 
From the $\mu$SR results of Zn-substituted La$_{2-x}$Sr$_x$Cu$_{1-y}$Zn$_y$O$_4$ in the overdoped regime, it has been found that the Zn-induced development of the Cu-spin correlation becomes weak with hole doping and finally disappears at $x \sim 0.30$ where the superconductivity disappears in LSCO. [@risdi-lsco] Moreover, inelastic neutron scattering experiments have uncovered the disappearance of the AF spin fluctuations concomitant with the disappearance of superconductivity at $x=0.30$. [@wakimoto] In the electron-doped cuprates, a short-range magnetic order due to a very small amount of excess oxygen is formed in the parent SC polycrystal of La$_{1.8}$Eu$_{0.2}$CuO$_{4+\delta}$ and the SC underdoped single crystal of Pr$_{1.3-x}$La$_{0.7}$Ce$_x$CuO$_4$ with $x=0.10$. [@adachi-review; @adachi-jpsj2] The Cu-spin correlation is moderately developed in the overdoped PLCCO with $x=0.14$ [@risdi-plcco] and the development of the Cu-spin correlation almost disappears in the heavily overdoped PLCCO with $x=0.20$ where the superconductivity disappears. Therefore, there is a similarity for both hole-doped and electron-doped cuprates in terms of the Cu-spin correlation, that is, the development of the Cu-spin correlation is observed in the SC region of the phase diagram. It is suggested that the Cu-spin correlation is intimately related to the appearance of high-$T_{\rm c}$ superconductivity in both hole- and electron-doped cuprates. Summary ======= ZF- and LF-$\mu$SR spectra have revealed that the development of the Cu-spin correlation weakens with increasing $x$ and is negligibly small in heavily overdoped PLCCO with $x=0.20$ where the superconductivity disappears. These results suggest that the Cu-spin correlation exists in the electron-doped T’-cuprates where the superconductivity appears. It is suggested that, in both hole-doped and electron-doped cuprates, the mechanism of the superconductivity is related to the Cu-spin correlation. 
Acknowledgments {#acknowledgments .unnumbered} =============== We would like to thank M. Ishikuro of the Institute for Materials Research, Tohoku University, Japan, for his help in the ICP analysis. This work was supported by JSPS KAKENHI Grant Numbers 23540399, 16K05458, 17H02915 and by MEXT KAKENHI Grant Number 23108004. [99]{} K. Yamada, C. H. Lee, K. Kurahashi, J. Wada, S. Wakimoto, S. Ueki, H. Kimura, Y. Endoh, S. Hosoya, G. Shirane, R. J. Birgeneau, M. Greven, M. A. Kastner, and Y. J. Kim, Phys. Rev. B [**57**]{}, 6165 (1998). S. Wakimoto, H. Zhang, K. Yamada, I. Swainson, H. Kim, and R. J. Birgeneau, Phys. Rev. Lett. [**92**]{}, 217004 (2004). Risdiana, T. Adachi, N. Oki, S. Yairi, Y. Tanabe, K. Omori, Y. Koike, T. Suzuki, I. Watanabe, A. Koda, and W. Higemoto, Phys. Rev. B [**77**]{}, 054516 (2008). K. Yamada, K. Kurahashi, T. Uefuji, M. Fujita, S. Park, S. H. Lee, and Y. Endoh, Phys. Rev. Lett. [**90**]{}, 137004 (2003). H. J. Kang, P. Dai, H. A. Mook, D. N. Argyriou, V. Sikolenko, J. W. Lynn, Y. Kurita, S. Komiya, and Y. Ando, Phys. Rev. B [**71**]{}, 214512 (2005). S. D. Wilson, S. Li, P. Dai, W. Bao, J.-H. Chung, H. J. Kang, S.-H. Lee, S. Komiya, Y. Ando, and Q. Si, Phys. Rev. B [**74**]{}, 144514 (2006). A. Tsukada, Y. Krockenberger, M. Noda, H. Yamamoto, D. Manske, L. Alff, and M. Naito, Solid State Commun. [**133**]{}, 427 (2005). O. Matsumoto, A. Utsuki, A. Tsukada, H. Yamamoto, T. Manabe, and M. Naito, Physica C [**469**]{}, 924 (2009). S. Asai, S. Ueda, and M. Naito, Physica C [**471**]{}, 682 (2011). T. Takamatsu, M. Kato, T. Noji, and Y. Koike, Appl. Phys. Express [**5**]{}, 073101 (2012). M. Horio, Y. Krockenberger, K. Yamamoto, Y. Yokoyama, K. Takubo, Y. Hirata, S. Sakamoto, K. Koshiishi, A. Yasui, E. Ikenaga, S. Shin, H. Yamamoto, H. Wadati, and A. Fujimori, Phys. Rev. Lett. [**120**]{}, 257001 (2018). T. Adachi, Y. Mori, A. Takahashi, M. Kato, T. Nishizaki, T. Sasaki, N. Kobayashi, and Y. Koike, J. Phys. Soc. Jpn [**82**]{}, 063713 (2013). 
M. Brinkmann, T. Rex, H. Bach, and K. Westerholt, Phys. Rev. Lett. [**74**]{}, 4927 (1995). M. Horio, T. Adachi, Y. Mori, A. Takahashi, T. Yoshida, H. Suzuki, L.C.C. Ambolode II, K. Okazaki, K. Ono, H. Kumigashira, H. Anzai, M. Arita, H. Namatame, M. Taniguchi, D. Ootsuki, K. Sawada, M. Takahashi, T. Mizokawa, Y. Koike, and A. Fujimori, Nat. Commun. [**7**]{}, 10567 (2016). T. Adachi, T. Kawamata, and Y. Koike, Condens. Matter [**2**]{}, 23 (2017). T. Adachi, A. Takahashi, K. M. Suzuki, M. A. Baqiya, T. Konno, T. Takamatsu, M. Kato, I. Watanabe, A. Koda, M. Miyazaki, R. Kadono, and Y. Koike, J. Phys. Soc. Jpn [**85**]{}, 114716 (2016). K. M. Kojima, Y. Krockenberger, I. Yamauchi, M. Miyazaki, M. Hiraishi, A. Koda, R. Kadono, R. Kumai, H. Yamamoto, A. Ikeda, and M. Naito, Phys. Rev. B [**89**]{}, 180508 (2014). M. Fujita, M. Matsuda, S.-H. Lee, M. Nakagawa, and K. Yamada, Phys. Rev. Lett. [**101**]{}, 107003 (2008). Y. Tanabe, T. Adachi, T. Noji, and Y. Koike, J. Phys. Soc. Jpn [**74**]{}, 2893 (2005). Risdiana, T. Adachi, N. Oki, Y. Koike, T. Suzuki, and I. Watanabe, Phys. Rev. B [**82**]{}, 014506 (2010). M. Yamamoto, Y. Kohori, H. Fukazawa, A. Takahashi, T. Ohgi, T. Adachi, and Y. Koike, J. Phys. Soc. Jpn [**85**]{}, 024708 (2016). M. Lambacher, T. Helm, M. Kartsovnik, and A. Erb, Euro. Phys. J. Special Topics [**188**]{}, 61 (2010). M. A. Baqiya, T. Adachi, A. Takahashi, T. Prombood, M. Watanabe, K. Fukumoto, Y. Tanabe, and Y. Koike, J. Phys.: Conf. Ser. [**568**]{}, 022002 (2014). F. L. Pratt, Physica B [**289-290**]{}, 710 (2000). R. Kadono, K. Ohishi, A. Koda, W. Higemoto, K. M. Kojima, S. Kuroshima, M. Fujita, and K. Yamada, J. Phys. Soc. Jpn [**72**]{}, 2955 (2003). G. M. Luke, L. P. Le, B. J. Sternlieb, Y. J. Uemura, J. H. Brewer, R. Kadono, R. F. Kiefl, S. R. Kreitzman, T. M. Riseman, C. E. Stronach, M. R. Davis, S. Uchida, H. Takagi, Y. Tokura, Y. Hidaka, T. Murakami, J. Gopalakrishnan, A. W. Sleight, M. A. Subramanian, E. A. Early, J. T. Markert, M. B. 
Maple, and C. L. Seaman, Phys. Rev. B [**42**]{}, 7981 (1990). L. P. Le, G. M. Luke, B. J. Sternlieb, Y. J. Uemura, J. H. Brewer, T. M. Riseman, D. C. Johnston, L. L. Miller, Y. Hidaka, and H. Murakami, Hyperfine Interact. [**63**]{}, 279 (1990). K. Tsutsumi, M. Fujita, K. Sato, M. Miyazaki, R. Kadono, and K. Yamada, Key Eng. Mater. [**616**]{}, 297 (2014). K. Tsutsumi, M. Fujita, and K. M. Kojima, unpublished.
--- abstract: 'X-ray observations with the ROSAT High Resolution Imager (HRI) often have spatial smearing on the order of 10$\arcsec$ (Morse 1994). This degradation of the intrinsic resolution of the instrument (5$\arcsec$) can be attributed to errors in the aspect solution associated with the wobble of the spacecraft or with the reacquisition of the guide stars. We have developed a set of IRAF/PROS and MIDAS/EXSAS routines to minimize these effects. Our procedure attempts to isolate aspect errors that are repeated through each cycle of the wobble. The method assigns a ‘wobble phase’ to each event based on the 402 second period of the ROSAT wobble. The observation is grouped into a number of phase bins and a centroid is calculated for each sub-image. The corrected HRI event list is reconstructed by adding the sub-images which have been shifted to a common source position. This method has shown a $\sim$30% reduction of the full width at half maximum (FWHM) of an X-ray observation of the radio galaxy 3C 120. Additional examples are presented.' author: - 'D.E. Harris, J.D. Silverman, G. Hasinger' - 'I. Lehmann' date: 'Received date / Accepted date' title: Spatial Corrections of ROSAT HRI Observations --- Introduction ============ Spatial analysis of ROSAT HRI observations is often plagued by poor aspect solutions, precluding the attainment of the potential resolution of about 5”. In many cases (but not all), the major contributions to the degradation in the effective Point Response Function (PRF) come from aspect errors associated either with the ROSAT wobble or with the reacquisition of the guide stars. To avoid the possibility of blocking sources by the window support structures (Position Sensitive Proportional Counter) or to minimize the chance that the pores near the center of the microchannel plate would become burned out from excessive use (High Resolution Imager), the satellite normally operates with a constant dither for pointed observations. 
The period of the dither is 402s and the phase is tied to the spacecraft clock. Any given point on the sky will track back and forth on the detector, tracing out a line of length $\approx$ 3 arcmin with position angle of 135$^{\circ}$ in raw detector coordinates (for the HRI). Imperfections in the star tracker (see section \[sec:MM\]) can produce an erroneous image if the aspect solution is a function of the wobble track on the CCD of the star tracker. This work is similar to an analysis by Morse (1994) except that we do not rely on a direct correlation between spatial detector coordinates and phase of the wobble. Moreover, our method addresses the reacquisition problem which produces the so-called cases of “displaced OBIs”. An “OBI” is an observation interval, normally lasting for 1 ks to 2 ks (i.e. a portion of an orbit of the satellite). A new acquisition of the guide stars occurs at the beginning of each OBI and we have found that different aspect solutions often result. Occasionally a multi-OBI observation consists of two discrete aspect solutions. A recent example (see section \[sec:120B\]) showed one OBI for which the source was 10$^{\prime\prime}$ north of its position in the other 17 OBIs. Note that this sort of error is quite distinct from the wobble error. Throughout this discussion, we use the term “PRF” in the dynamic sense: it is the point response function realized in any given situation: i.e. that which includes whatever aspect errors are present. We start with an observation for which the PRF is much worse than it should be. We seek to improve the PRF by isolating the offending contributions and correcting them if possible or rejecting them if necessary. Model and Method {#sec:MM} ================ The “model” for the wobble error assumes that the star tracker’s CCD has some pixels with different gain than others. 
As the wobble moves the de-focused star image across the CCD, the centroiding of the stellar image yields the wrong value because it is based on the relative response from several pixels. If the roll angle is stable, it is likely that the error is repeated during each cycle of the wobble since the star’s path is over the same pixels (to a first approximation if the aspect ‘jitter’ is small compared to the pixel size of $\approx$ 1 arcmin). What is not addressed is the error in roll angle induced by erroneous star positions. If this error is significant, the centroiding technique with one strong source will fix only that source and its immediate environs. The correction method assigns a ‘wobble phase’ to each event; then divides each OBI (or other suitably defined time interval) into a number of wobble phase bins. The centroid of the reference source is measured for each phase bin. The data are then recombined after applying x and y offsets in order to ensure that the reference source is aligned for each phase bin. What is required is that there are enough counts in the reference source to obtain a reliable centroid. Variations of this method for sources weaker than $\approx$ 0.1 counts/s involve using all OBIs together before dividing into phase bins. This is a valid approach so long as the nominal roll angle is stable (i.e. within a few tenths of a degree) for all OBIs, and so long as major shifts in the aspect solutions of different OBIs are not present. Diagnostics =========== Our normal procedure for evaluation is to measure the FWHM (both the major and minor axes) of the observed response on a map smoothed with a 3$^{\prime\prime}$ Gaussian. For the best data, we find the resulting FWHM is close to 5.7$^{\prime\prime}$. 
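This diagnostic (smooth with a Gaussian, then measure the major- and minor-axis FWHM) can be sketched in Python. The moment-based FWHM estimator below is our own illustration of one way to obtain the two axis widths, not the exact procedure used by the authors; the demo image is synthetic.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def fwhm_axes(image, smooth_sigma):
    """Smooth with a Gaussian of width smooth_sigma (pixels) and estimate
    the major- and minor-axis FWHM from the intensity-weighted second
    moments of the smoothed map.  Returns (major, minor) in pixels."""
    sm = gaussian_filter(image, smooth_sigma)
    yy, xx = np.indices(sm.shape)
    w = sm / sm.sum()
    xm, ym = (w * xx).sum(), (w * yy).sum()
    cov = np.array([
        [(w * (xx - xm) ** 2).sum(), (w * (xx - xm) * (yy - ym)).sum()],
        [(w * (xx - xm) * (yy - ym)).sum(), (w * (yy - ym) ** 2).sum()],
    ])
    eig = np.linalg.eigvalsh(cov)          # variances, ascending order
    # FWHM = 2 sqrt(2 ln 2) sigma for a Gaussian profile
    return 2.3548 * np.sqrt(eig[::-1])     # (major, minor)

# Demo on a synthetic elliptical Gaussian source (sigma_x=4, sigma_y=2 px);
# smoothing with sigma=1 px adds 1 px^2 in quadrature to each variance.
yy, xx = np.indices((101, 101))
img = np.exp(-((xx - 50.0) ** 2 / 32.0 + (yy - 50.0) ** 2 / 8.0))
major, minor = fwhm_axes(img, 1.0)
```

For real HRI maps the pixel scale would convert these widths to arcseconds, and a background estimate would have to be subtracted before taking moments.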
While there are many measures of source smearing, we prefer this approach over measuring radial profiles because there is no uncertainty relating to the position of the source center; we are normally dealing with elliptical rather than circular distributions; and visual inspection of the two-dimensional image serves as a check on severe abnormalities. It has been our experience that when we are able to reduce the FWHM of the PRF, the wings of the PRF are also reduced. Wobble Errors ------------- If the effective PRF is evaluated for each OBI separately, the wobble problem is manifested by a degraded PRF in one or more OBIs. Most OBIs contain only the initial acquisition of the guide stars, so when the PRF of a particular OBI is smeared, it is likely to be caused by the wobble error and the solution is to perform the phased ‘de-wobbling’. Misplaced OBI ------------- For those cases where each OBI has a relatively good PRF but the positions of each centroid have significant dispersion, the error cannot be attributed to the wobble. We use the term ‘misplaced OBI’ to describe the situation in which a different aspect solution is found when the guide stars are reacquired. In the worst case, multiple aspect solutions can produce an image in which every source in the field has a companion displaced by anywhere from 10 to 30 arcsec or more. When the separation is less than 10 arcsec, the source can appear to have a teardrop shape (see section \[sec:120A\]) or an egg shape. However, depending on the number of different aspect solutions, almost any arbitrary distortion to the (circularly symmetric) ideal PRF is possible. The fix for these cases is simply to find the centroid for each OBI, and shift them before co-adding (e.g., see Morse et al. 1995). 
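The phase-binned centroid correction described in the Model and Method section can be sketched in a few lines of Python. This is a simplified stand-in for the actual IRAF/PROS and MIDAS/EXSAS scripts: it assumes an event list of spacecraft-clock times and sky-pixel positions dominated by a single reference source, and the demo event list is synthetic.

```python
import numpy as np

WOBBLE_PERIOD = 402.0   # seconds; phase is tied to the spacecraft clock

def dewobble(times, x, y, n_bins=10):
    """Assign each event a wobble phase, centroid the reference source in
    each phase bin, and shift the events so that all bins share a common
    source position.  Returns corrected (x, y) event coordinates."""
    phase = np.mod(times, WOBBLE_PERIOD) / WOBBLE_PERIOD
    bins = np.minimum((phase * n_bins).astype(int), n_bins - 1)
    x_ref, y_ref = x.mean(), y.mean()      # common target position
    x_out, y_out = x.copy(), y.copy()
    for b in range(n_bins):
        sel = bins == b
        if not sel.any():                   # skip empty phase bins
            continue
        x_out[sel] += x_ref - x[sel].mean()
        y_out[sel] += y_ref - y[sel].mean()
    return x_out, y_out

# Demo: a point source smeared along a phase-dependent wobble track.
rng = np.random.default_rng(1)
t = rng.uniform(0.0, 4020.0, 5000)
ang = 2.0 * np.pi * np.mod(t, WOBBLE_PERIOD) / WOBBLE_PERIOD
x = 100.0 + 3.0 * np.cos(ang) + rng.normal(0.0, 1.0, t.size)
y = 100.0 + 3.0 * np.sin(ang) + rng.normal(0.0, 1.0, t.size)
xc, yc = dewobble(t, x, y, n_bins=20)      # scatter shrinks markedly
```

In the OBI-by-OBI variant, this correction would simply be applied to each OBI's events separately before the shifted sub-images are stacked.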
IRAF/PROS Implementation ======================== The ROSAT Science Data Center (RSDC) at SAO has developed scripts to assist users in evaluating individual OBIs and performing the operations required for de-wobbling and alignment. The scripts are available from our anonftp area: sao-ftp.harvard.edu. cd to pub/rosat/dewob. An initial analysis needs to be performed to determine the stable roll angle intervals, to check for any misalignment of OBIs and to examine the guide star combinations. These factors together with the source intensity are important in deciding what can be done and the best method to use. OBI by OBI Method {#sec:ObyO} ----------------- If the observation contains a strong source ($\ge$ 0.1 counts/s) near the field center (i.e. close enough to the center that the mirror blurring is not important), then the preferred method is to dewobble each OBI. The data are thus divided into n $\times$ p qpoe files (n = number of OBIs; p = number of phase bins). The position of the centroid of the reference source is determined and each file is shifted in x and y so as to align the centroids from all OBIs and all phase bins. The data are then co-added or stacked to realize the final image (qpoe file). Stable Roll Angle Intervals --------------------------- For sources weaker than 0.1 counts/s, it is normally the case that there are not enough counts for centroiding when 10 phase bins are used. If it is determined that there are no noticeable shifts between OBIs, then it is possible to use many OBIs together so long as the roll angle does not change by a degree or more. Method for Visual Inspection ---------------------------- On rare occasions, it may be useful to examine each phase bin visually to evaluate the segments in order to decide if some should be deleted before restacking for the final result. We have found it useful to do this via contour diagrams of the source. 
This approach can be labor-intensive if there are a large number of OBIs and phase bins, but the scripts we provide do most of the manipulations. MIDAS/EXSAS Implementation ========================== The X-ray group at the Astrophysical Institute Potsdam (AIP) has developed some MIDAS/EXSAS routines to correct for the ROSAT wobble effect. The routines can be obtained by anonymous ftp from ftp.aip.de in the directory pub/users/rra/wobble. The correction procedure works interactively in five main steps: - [Choosing a constant roll angle interval]{} - [Folding the data over the 402 sec wobble period]{} - [Creation of images using 5 or 10 phase intervals]{} - [Determining the centroid for the phase-resolved images]{} - [Shifting the photon X/Y positions in the events table]{} We have tested the wobble correction procedures for 21 stars and 24 galaxies of the ROSAT Bright Survey using archival HRI data. The procedures work successfully down to an HRI source count rate of about 0.1 counts/s. For lower count rates, the determination of the centroid position failed because too few photons were available in the phase-binned images. The number of phase bins which can be used is of course dependent on the X-ray brightness of the source. Limitations =========== We briefly describe the effects which limit the general use of the method. In so doing, we also indicate the process one can use in deciding if there is a problem, and estimating the chances of substantial improvement. Presence of Aspect Smearing --------------------------- The FWHM of all sources in the field should be $\ge$ 7$^{\prime\prime}$ (after smoothing with a 3$^{\prime\prime}$ Gaussian). If any source is smaller than this value, it is likely that aspect problems are minimal and little is to be gained by applying the dewobbling method. 
If there is only a single source in the field, without [*a priori*]{} knowledge or further analysis it is difficult to determine whether a distribution significantly larger than the ideal PRF is caused by source structure or aspect smearing. The best approach in this case is to examine the image for each OBI separately to see if some or all are smaller than the total image (i.e. OBI aspect solutions are different). Wobble Phase ------------ It is important that the phase of the wobble is maintained. This is ensured if there is no ‘reset’ of the spacecraft clock during an observation. If an observation spans a clock reset, it will be necessary to divide the data into two segments with a time filter before proceeding to the main analysis. Dates of clock resets (Table 1) are provided by MPE: http://www.ROSAT.mpe-garching.mpg.de/$\sim$prp/timcor.html.

  Year   Day
  ------ ------------------
  90     151.87975 (launch)
  91     25.386331
  92     42.353305
  93     18.705978
  94     19.631352
  95     18.169322
  96     28.489871
  97     16.069990
  98     19.445738

  : Table 1: Dates of ROSAT spacecraft clock resets (day of year).

Characteristics of the Reference Source --------------------------------------- In most cases, the reference source (i.e. the source used for centroiding) will be the same as the target source, but this is not required. Ideally, the reference source should be unresolved in the absence of aspect errors and it should not be embedded in high-brightness diffuse emission (e.g. the core of M87 does not work because of the bright emission from the Virgo Cluster gas). Both of these considerations are important for the operation of the centroiding algorithm, but neither is an absolute imperative. For accurate centroiding, the reference source needs to stand well above any extended component. Obviously the prime concern is that there be enough counts in a phase bin to successfully measure the centroid. 
The last item is usually the determining factor, and as a rule of thumb, it is possible to use 10 phase bins on a source of 0.1 counts/s. We have tested a strong source to see the effect of increasing the number of phase bins. In Fig. \[fig:hz43\], we show the results of several runs on an observation of HZ 43 (12 counts/s). This figure demonstrates that ten phase bins is a reasonable choice, but that there is little to be gained by using more than 20 phase bins. Examples ======== 3C 120 ------ 3C 120 is a nearby radio galaxy (z=0.033) with a prominent radio jet leaving the core at PA $\approx 270^{\circ}$. The ROSAT HRI observation was obtained in two segments, each of which had aspect problems. Since the average source count rate is 0.8 count/s, the X-ray emission is known to be highly variable (and therefore most of its flux must be unresolved), and each segment consisted of many OBIs, we used these observations for testing the dewobbling scripts. ### Segment A: Two aspect solutions, both found multiple times {#sec:120A} The smoothed data (Figure \[fig:120A\]) indicated that in addition to the X-ray core, a second component was present, perhaps associated with the bright radio knot 4$^{\prime\prime}$ west of the core. When analyzing these two components for variability, it was demonstrated that most of the emission was unresolved, but that the aspect solution had at least two different solutions, and that the change from one to the other usually coincided with OBI boundaries. The guide star configuration table showed that a reacquisition coincided with the change of solution. The 24 OBIs comprising the 36.5 ksec exposure were obtained between 96Aug16 and 96Sep12. Because 3C 120 is close to the ecliptic, the roll angle hardly changed, and our first attempts at dewobbling divided the data into 2 ’stable roll angle intervals’. This effort made no noticeable improvement. We then used the method described in section \[sec:ObyO\]. 
The results are shown in Figure \[fig:120Ade\]. It can be seen that a marked improvement has occurred, but some of the E-W smearing remains. ### Segment B: A single displaced OBI {#sec:120B} The second segment of the 3C 120 observation was obtained in 1997 March. In this case, only one OBI out of 17 was displaced. It was positioned 10$^{\prime\prime}$ to the north of the other positions, producing a low level extension (see Fig. \[fig:120B\]). After dewobbling, that feature is gone, the half power size is reduced, and the peak value is larger (Fig. \[fig:120Bde\]). M81 --- M81 is dominated by an unresolved nuclear source. The count rate is 0.31 count/s. The observation has 14 OBIs for a total exposure of 19.9 ks. Figure \[fig:m81A\] shows the data from SASS processing. After running the ‘OBI by OBI’ method, the source is more circularly symmetric, has a higher peak value, and a smaller FWHM (Fig. \[fig:m81B\]). NGC 5548 -------- This source was observed from 25 June to 11 July 1995 for a livetime of 53 ks with 33 OBIs. The average count rate was 0.75 counts/s and the original data had a FWHM = 8.2$^{\prime\prime}\times$6.8$^{\prime\prime}$. Most of the OBIs appeared to have a normal PRF but a few displayed high distortion. After applying the OBI by OBI method, the resulting FWHM was 6.3$^{\prime\prime}$ in both directions and the peak value on the smoothed map increased from 138 to 183 counts per 0.5$^{\prime\prime}$ pixel. RZ Eri ------ The observation of this star was reduced in MIDAS/EXSAS. The source has a count rate of 0.12 count/s. The reduction selected only a group of the OBIs which comprised a ’stable roll angle interval’; almost half the data were rejected. The original smoothed image had a FWHM = 8.4$^{\prime\prime}\times$6.6$^{\prime\prime}$. After dewobbling, the resulting FWHM was 6.9$^{\prime\prime}\times$5.8$^{\prime\prime}$. 
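The phase-resolved dewobbling applied in the examples above can be sketched as follows. This is an illustrative NumPy sketch under simplifying assumptions (a plain event list of photon arrival times and positions, with all photons belonging to the reference source); the 402 s wobble period is taken from the EXSAS procedure described earlier, and the function name is ours.

```python
import numpy as np

WOBBLE_PERIOD = 402.0  # ROSAT wobble period in seconds

def dewobble(t, x, y, n_bins=10):
    """Fold photon arrival times over the wobble period, centroid each
    phase bin, and shift its photons onto a common reference position."""
    phase = (t % WOBBLE_PERIOD) / WOBBLE_PERIOD           # phase in [0, 1)
    bins = np.minimum((phase * n_bins).astype(int), n_bins - 1)
    ref_x, ref_y = x.mean(), y.mean()                     # common reference
    xs, ys = x.copy(), y.copy()
    for b in range(n_bins):
        sel = bins == b
        if not sel.any():
            continue                                      # empty phase bin
        xs[sel] -= x[sel].mean() - ref_x
        ys[sel] -= y[sel].mean() - ref_y
    return xs, ys
```

In a real reduction one would centroid only the photons in a small aperture around the reference source and apply each bin's shift to the whole field, but the simplified version above shows how folding, per-bin centroiding, and shifting combine to reduce the smearing.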
Summary ======= We have developed a method of improving the spatial quality of ROSAT HRI data which suffer from two sorts of aspect problems. This approach requires the presence of a source near the field center which has a count rate of $\approx$ 0.1 counts/s or greater. Although the method does not fix all bad aspect problems, it produces marked improvements in many cases. Acknowledgments =============== We thank M. Hardcastle (Bristol) for testing early versions of the software and for suggesting useful improvements. J. Morse contributed helpful comments on the manuscript. The 3C 120 data were kindly provided by DEH, A. Sadun, M. Vestergaard, and J. Hjorth (a paper is in preparation). The other data were taken from the ROSAT archives. The work at SAO was supported by NASA contract NAS5-30934.

References ==========

David, L.D., Harnden Jr., F.R., Kearns, K.E., Zombeck, M.V., 1995, The ROSAT High Resolution Imager (HRI). A hardcopy is available: Center for Astrophysics, RSDC, MS 3, 60 Garden St., Cambridge, MA, 02138\
(http://hea-www.harvard.edu/ROSAT/rsdc\_www/hricalrep.html)

Morse, J.A., 1994, PASP 106, 675

Morse, J.A., Wilson, A.S., Elvis, M., Weaver, K.A., 1995, ApJ 439, 121
--- abstract: 'We consider a basic cache network, in which a single server is connected to multiple users via a shared bottleneck link. The server has a database of files (content). Each user has an isolated memory that can be used to cache content in a prefetching phase. In a following delivery phase, each user requests a file from the database, and the server needs to deliver users’ demands as efficiently as possible by taking into account their cache contents. We focus on an important and commonly used class of prefetching schemes, where the caches are filled with uncoded data. We provide the exact characterization of the rate-memory tradeoff for this problem, by deriving both the *minimum average rate* (for a uniform file popularity) and the *minimum peak rate* required on the bottleneck link for a given cache size available at each user. In particular, we propose a novel caching scheme, which strictly improves the state of the art by exploiting commonality among user demands. We then demonstrate the exact optimality of our proposed scheme through a matching converse, by dividing the set of all demands into types, and showing that the placement phase in the proposed caching scheme is universally optimal for all types. Using these techniques, we also fully characterize the rate-memory tradeoff for a decentralized setting, in which users fill out their cache content without any coordination.' author: - 'Qian Yu,  Mohammad Ali Maddah-Ali,  and A. Salman Avestimehr,  [^1] [^2] [^3] [^4] [^5] [^6]' bibliography: - 'uache\_checked.bib' title: | The Exact Rate-Memory Tradeoff\ for Caching with Uncoded Prefetching --- Caching, Coding, Rate-Memory Tradeoff, Information-Theoretic Optimality Introduction ============ Caching is a commonly used approach to reduce traffic rate in a network system during peak-traffic times, by duplicating part of the content in the memories distributed across the network. 
In its basic form, a caching system operates in two phases: (1) a placement phase, where each cache is populated up to its size, and (2) a delivery phase, where the users reveal their requests for content and the server has to deliver the requested content. During the delivery phase, the server exploits the content of the caches to reduce network traffic. Conventionally, caching systems have been based on uncoded unicast delivery where the objective is mainly to maximize the hit rate, i.e. the chance that the requested content can be delivered locally [@sleator1985amortized; @dowdy82; @almeroth96; @dan96; @korupolu99; @meyerson01; @baev08; @borst10]. While this approach can achieve optimal performance in systems with a single cache, it has been recently shown in [@maddah-ali12a] that for multi-cache systems, the optimality no longer holds. In [@maddah-ali12a], an information-theoretic framework for multi-cache systems was introduced, and it was shown that coding can offer a significant gain that scales with the size of the network. Several coded caching schemes have been proposed since then [@DBLP:journals/corr/Chen14h; @wan2016caching; @sahraei2016k; @tian2016caching; @amiri2016fundamental; @amiri2016coded]. 
The caching problem has also been extended in various directions, including decentralized caching [@maddah-ali13], online caching [@pedarsani13], caching with nonuniform demands [@niesen13; @zhang15; @ji2015order; @ramakrishnan2015efficient], hierarchical caching [@hachem14; @karamchandani14; @hachem15], device-to-device caching [@ji14b], cache-aided interference channels [@maddah2015cache; @naderializadeh2016fundamental; @hachem2016layered; @DBLP:journals/corr/HachemND16a], caching on file selection networks [@wang2015information; @DBLP:journals/corr/WangLG15; @lim2016information], caching on broadcast channels [@timo2015joint; @bidokhti2016erasure; @bidokhti2016noisy; @bidokhti2016upper], and caching on channels with delayed channel-state-information feedback [@zhang2015fundamental; @zhang2015coded]. The same idea is also useful in the context of distributed computing, in order to take advantage of extra computation to reduce the communication load [@2016arXiv160407086L; @globedcd16; @li2016scalable; @7901473; @yu2017howto]. Characterizing the exact rate-memory tradeoff in the above caching scenarios is an active line of research. Besides developing better achievability schemes, there have been efforts to tighten the outer bound of the rate-memory tradeoff [@ghasemi15; @lim2016information; @sengupta15; @DBLP:journals/corr/WangLG16; @tian2016symmetry; @prem2015critical]. Nevertheless, in almost all scenarios, there is still a gap between the state-of-the-art communication load and the converse, leaving the exact rate-memory tradeoff an open problem. In this paper, we focus on an important class of caching schemes, where the prefetching scheme is required to be uncoded. In fact, almost all caching schemes proposed for the above-mentioned problems use uncoded prefetching. As a major advantage, uncoded prefetching allows us to handle asynchronous demands without increasing the communication rates, by dividing files into smaller subfiles [@maddah-ali13]. 
Within this class of caching schemes, we characterize the exact rate-memory tradeoff for both the *average rate* for uniform file popularity and the *peak rate*, in both centralized and decentralized settings, for all possible parameter values. In particular, we first propose a novel caching strategy for the centralized setting (i.e., where the users can coordinate in designing the caching mechanism, as considered in [@maddah-ali12a]), which strictly improves the state of the art, reducing both the average rate and the peak rate. We exploit commonality among user demands by showing that the scheme in [@maddah-ali12a] may introduce redundancy in the delivery phase, and proposing a new scheme that effectively removes all such redundancies in a systematic way. In addition, we demonstrate the exact optimality of the proposed scheme through a matching converse. The main idea is to divide the set of all demands into smaller subsets (referred to as types), and derive tight lower bounds for the minimum peak rate and the minimum average rate on each type separately. We show that, when the prefetching is uncoded, the rate-memory tradeoff can be completely characterized using this technique, and the placement phase in the proposed caching scheme universally achieves those minimum rates on all types. Moreover, we extend the techniques we developed for the centralized caching problem to characterize the exact rate-memory tradeoff in the decentralized setting (i.e. where the users cache the contents independently without any coordination, as considered in [@maddah-ali13]). Based on the proposed centralized caching scheme, we develop a new decentralized caching scheme that strictly improves the state of the art [@maddah-ali13; @amiri2016coded]. In addition, we formally define the framework of decentralized caching, and prove matching converses given the framework, showing that the proposed scheme is optimal. 
To summarize, the main contributions of this paper are as follows: - Characterizing the rate-memory tradeoff for average rate, by developing a novel caching design and proving a matching information-theoretic converse. - Characterizing the rate-memory tradeoff for peak rate, by extending the achievability and converse proofs to account for the worst-case demands. - Characterizing the rate-memory tradeoff for both average rate and peak rate in a decentralized setting, where the users cache the contents independently without coordination. Furthermore, in one of our recent works [@yu2017characterizing], we have shown that the achievability scheme we developed in this paper also leads to the tightest known characterization (within a factor of $2$) in the general problem with coded prefetching, for both average rate and peak rate, in both centralized and decentralized settings. The problem of caching with uncoded prefetching was initiated in [@kai2016optimality; @wan2016caching], which showed that the scheme in [@maddah-ali12a] is optimal when considering *peak rate* and *centralized caching*, if there are more files than users. Although not stated in [@kai2016optimality; @wan2016caching], the converse bound in our paper for the special case of peak rate and centralized setting could have also been derived using their approach. In this paper, however, we introduce the novel idea of demand types, which allows us to go beyond and characterize the rate-memory tradeoff for both peak rate and average rate for all possible parameter values, in both centralized and decentralized settings. Our result covers the peak-rate centralized setting, and strictly improves the bounds in all other cases. More importantly, we introduce a new achievability scheme, which strictly improves the scheme in [@maddah-ali12a]. The rest of this paper is organized as follows. 
Section \[sec:sys\] formally establishes a centralized caching framework, and defines the main problem studied in this paper. Section \[sec:main\] summarizes the main result of this paper for the centralized setting. Section \[sec:opt\] describes and demonstrates the optimal centralized caching scheme that achieves the minimum expected rate and the minimum peak rate. Section \[sec:conv\] proves matching converses that show the optimality of the proposed centralized caching scheme. Section \[sec:ext\] extends the techniques we developed for the centralized caching problem to characterize the exact rate-memory tradeoff in the decentralized setting. System Model and Problem Definition {#sec:sys} =================================== In this section, we formally introduce the system model for the centralized caching problem. Then, we define the rate-memory tradeoff based on the introduced framework, and state the main problem studied in this paper. System Model ------------ We consider a system with one server connected to $K$ users through a shared, error-free link (see Fig. \[fig:net\]). The server has access to a database of $N$ files $W_1, \ldots, W_N$, each of size $F$ bits.[^7] We denote the $j$th bit in file $i$ by $B_{i,j}$, and we assume that all bits in the database are i.i.d. Bernoulli random variables with $p=0.5$. Each user has an isolated cache memory of size $MF$ bits, where $M\in[0,N]$. For convenience, we define parameter $t=\frac{KM}{N}$. ![Caching system considered in this paper. The figure illustrates the case where $K=N=3$, $M=1$.[]{data-label="fig:net"}](net.pdf){width="40.00000%"} The system operates in two phases, a placement phase and a delivery phase. In the placement phase, users are given access to the entire database, and each user can fill their cache using the database. 
However, instead of allowing coding in prefetching [@maddah-ali12a], we focus on an important class of prefetching schemes, referred to as uncoded prefetching schemes: An *uncoded prefetching scheme* is where each user $k$ selects no more than $MF$ bits from the database and stores them in its own cache, without coding. Let $\mathcal{M}_k$ denote the set of indices of the bits chosen by user $k$, then we denote the prefetching as $$\boldsymbol{\mathcal{M}}=(\mathcal{M}_1,...,\mathcal{M}_K).$$ In the delivery phase, only the server has access to the database. Each user $k$ requests one of the files in the database. To characterize user requests, we define *demand* $\boldsymbol{d}=\left(d_1,...,d_K\right)$, where $d_k$ is the index of the file requested by user $k$. We denote the number of distinct requested files in $\boldsymbol{d}$ by $N_{\textup{e}}(\boldsymbol{d})$, and denote the set of all possible demands by $\mathcal{D}$, i.e., $\mathcal{D}=\{1,...,N\}^K$. The server is informed of the demand and proceeds by generating a signal $X$ of size $RF$ bits as a function of $W_{1},...,W_{N}$, and transmits the signal over the shared link. $R$ is a fixed real number given the demand $\boldsymbol{d}$. The values $RF$ and $R$ are referred to as the load and the rate of the shared link, respectively. Using the values of bits in $\mathcal{M}_k$ and the signal $X$ received over the shared link, each user $k$ aims to reconstruct their requested file $W_{d_k}$. Problem Definition ------------------ Based on the above framework, we define the rate-memory tradeoff for the average rate using the following terminology. Given a prefetching $\boldsymbol{\mathcal{M}}=(\mathcal{M}_1,...,\mathcal{M}_K)$, we say a communication rate $R$ is *$\epsilon$-achievable* for demand $\boldsymbol{d}$ if and only if there exists a message $X$ of length $RF$ such that every active user $k$ is able to recover its desired file $W_{d_k}$ with a probability of error of at most $\epsilon$. 
This is rigorously defined as follows: $R$ is *$\epsilon$-achievable* given a prefetching $\boldsymbol{\mathcal{M}}$ and a demand $\boldsymbol{d}$ if and only if we can find an encoding function $\psi: \{0,1\}^{NF}\rightarrow \{0,1\}^{RF}$ that maps the $N$ files to the message: $$\begin{aligned} X=\psi(W_1,...,W_N), \nonumber \end{aligned}$$ and $K$ decoding functions $\mu_k: \{0,1\}^{RF}\times \{0,1\}^{|\mathcal{M}_k|}\rightarrow \{0,1\}^{F}$ that each map the signal $X$ and the cached content of user $k$ to an estimate of the requested file $W_{d_k}$, denoted by $\hat{W}_{\boldsymbol{d},k}$: $$\begin{aligned} \hat{W}_{\boldsymbol{d},k}=\mu_k(X,\{B_{i,j}\ |\ (i,j)\in \mathcal{M}_k\}), \nonumber \end{aligned}$$ such that $$\begin{aligned} \mathbbm{P} (\hat{W}_{\boldsymbol{d},k}\neq W_{d_k} )\leq \epsilon. \nonumber \end{aligned}$$ We denote $R_\epsilon^*(\boldsymbol{d},\boldsymbol{\mathcal{M}})$ as the minimum $\epsilon$-achievable rate given $\boldsymbol{d}$ and $\boldsymbol{\mathcal{M}}$. Assuming that all users are making requests independently, and that all files are equally likely to be requested by each user, the probability distribution of the demand $\boldsymbol{d}$ is uniform on $\mathcal{D}$. We define the average rate $R_\epsilon^*(\boldsymbol{\mathcal{M}})$ as the expected minimum achievable rate given a prefetching $\boldsymbol{\mathcal{M}}$ under uniformly random demand, i.e., $$R_\epsilon^*(\boldsymbol{\mathcal{M}})=\mathbb{E}_{\boldsymbol{d}}[ R_\epsilon^*(\boldsymbol{d},\boldsymbol{\mathcal{M}})].\nonumber$$ The rate-memory tradeoff for the average rate is essentially finding the minimum average rate $R^*$, for any given memory constraint $M$, that can be achieved by prefetchings satisfying this constraint with vanishing error probability for sufficiently large file size. 
Rigorously, we want to find $$R^*=\sup_{\epsilon>0} \adjustlimits\limsup_{F\rightarrow+\infty}\min_{\boldsymbol{\mathcal{M}}} R^*_\epsilon(\boldsymbol{\mathcal{M}}).\nonumber$$ as a function of $N$, $K$, and $M$. Similarly, the rate-memory tradeoff for peak rate is essentially finding the minimum peak rate, denoted by $R^*_{\textup{peak}}$, which is formally defined in Appendix \[app:worst\]. Main Results {#sec:main} ============ We state the main result of this paper in the following theorem. \[th:p\] For a caching problem with $K$ users, a database of $N$ files, local cache size of $M$ files at each user, and parameter $t=\frac{KM}{N}$, we have $$\begin{aligned} \label{eq:unif_pre} R^*=\mathbb{E}_{\boldsymbol{d}}\left[ \frac{\binom{K}{t+1}-\binom{K-N_{\textup{e}}(\boldsymbol{d})}{t+1}}{\binom{K}{t}}\right], \end{aligned}$$ for $t\in\{0,1,...,K\}$, where $\boldsymbol{d}$ is uniformly random on $\mathcal{D}=\{1,...,N\}^K$ and $N_{\textup{e}}(\boldsymbol{d})$ denotes the number of distinct requests in $\boldsymbol{d}$. Furthermore, for $t\notin\{0,1,...,K\}$, $R^*$ equals the lower convex envelope of its values at $t\in\{0,1,...,K\}$.[^8] To prove Theorem \[th:p\], we propose a new caching scheme that strictly improves the state of the art [@maddah-ali12a], which was relied on by all prior works considering the minimum average rate for the caching problem [@niesen13; @zhang15; @ji2015order; @lim2016information]. In particular, the rate achieved by the previous best known caching scheme equals the lower convex envelope of $\min\{\frac{K-t}{t+1}, \mathbb{E}_{\boldsymbol{d}}[N_{\textup{e}}(\boldsymbol{d})(1-\frac{t}{K})] \}$ at $t\in\{0,1,...,K\}$, which is strictly larger than $R^*$ when $N>1$ and $t<K-1$. For example, when $K=30$, $N=30$, and $t=1$, the state-of-the-art scheme requires a communication rate of $14.12$, while the proposed scheme achieves the rate $12.67$, both rounded to two decimal places. 
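Equation \[eq:unif\_pre\] can be evaluated exactly by averaging over the distribution of $N_{\textup{e}}(\boldsymbol{d})$: for $K$ independent uniform demands over $N$ files, $\mathbb{P}(N_{\textup{e}}=m)=\binom{N}{m}\,m!\,S(K,m)/N^K$, where $S(\cdot,\cdot)$ denotes the Stirling numbers of the second kind. The following Python sketch (illustrative, not part of the paper's software) reproduces the numbers above:

```python
from fractions import Fraction
from math import comb, factorial

def stirling2(n, k, _memo={}):
    # Stirling numbers of the second kind via the recurrence
    # S(n, k) = k*S(n-1, k) + S(n-1, k-1).
    if k == 0:
        return 1 if n == 0 else 0
    if n == 0:
        return 0
    if (n, k) not in _memo:
        _memo[(n, k)] = k * stirling2(n - 1, k) + stirling2(n - 1, k - 1)
    return _memo[(n, k)]

def avg_rate(N, K, t):
    """Exact R* of equation (eq:unif_pre) for integer t, averaged over
    the exact distribution of the number of distinct demands N_e."""
    total = Fraction(0)
    for m in range(1, min(N, K) + 1):
        p = Fraction(comb(N, m) * factorial(m) * stirling2(K, m), N**K)
        total += p * Fraction(comb(K, t + 1) - comb(K - m, t + 1), comb(K, t))
    return float(total)
```

For example, `avg_rate(30, 30, 1)` evaluates to about $12.67$, matching the comparison above; at $t=0$ the formula reduces to $\mathbb{E}[N_{\textup{e}}]$, the rate of plain uncoded delivery.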
The improvement of our proposed scheme over the state of the art can be interpreted intuitively as follows. The caching scheme proposed in [@maddah-ali12a] essentially decomposes the problem into $2$ cases: in one case, the redundancy of user demands is ignored, and the information is delivered by satisfying different demands using a single coded multicast transmission; in the other case, random coding is used to deliver the same request to multiple receivers. Our result demonstrates that the decomposition of the caching problem into these $2$ cases is suboptimal, and our proposed caching scheme precisely accounts for the effect of redundant user demands. The technique for finding the minimum average rate in the centralized setting can be straightforwardly extended to find the minimum peak rate, which was solved for $N\geq K$ [@kai2016optimality]. Here we show that we not only recover their result, but also fully characterize the rate for all possible values of $N$ and $K$, resulting in the following corollary, which will be proved in Appendix \[app:worst\]. \[cor:worst\] For a caching problem with $K$ users, a database of $N$ files, a local cache size of $M$ files at each user, and parameter $t=\frac{KM}{N}$, we have $$\begin{aligned} \label{eq:cent} R^*_{\textup{peak}}= \frac{\binom{K}{t+1}-\binom{K-\min\{K,N\}}{t+1}}{\binom{K}{t}} \end{aligned}$$ for $t\in\{0,1,...,K\}$. Furthermore, for $t\notin\{0,1,...,K\}$, $R^*_{\textup{peak}}$ equals the lower convex envelope of its values at $t\in\{0,1,...,K\}$. As we will discuss in Section \[sec:ext\], we can also extend the techniques that we developed for proving Theorem \[th:p\] to the decentralized setting. The exact rate-memory tradeoff for both the average rate and the peak rate can be fully characterized using these techniques. Moreover, the newly proposed decentralized caching scheme for achieving the minimum rates strictly improves the state of the art [@maddah-ali13; @amiri2016coded]. 
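Equation \[eq:cent\] is a single closed-form expression in $N$, $K$, and $t$; a small sketch (illustrative only) makes the two regimes explicit:

```python
from math import comb

def peak_rate(N, K, t):
    # R*_peak of equation (eq:cent) for integer t in {0, ..., K}.
    # math.comb returns 0 when the top index is smaller than the bottom,
    # so for N >= K this reduces to comb(K, t+1)/comb(K, t) = (K-t)/(t+1).
    return (comb(K, t + 1) - comb(K - min(K, N), t + 1)) / comb(K, t)
```

For $N\geq K$ the subtracted binomial vanishes and the expression reduces to the familiar $\frac{K-t}{t+1}$ of [@maddah-ali12a]; for $N<K$ it is strictly smaller, e.g. `peak_rate(2, 4, 1)` gives $1.25$ versus $\frac{K-t}{t+1}=1.5$.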
Prior to this result, there have been several other works on this coded caching problem. Both centralized and decentralized settings have been considered, and many caching schemes using uncoded prefetching were proposed. Several caching schemes have been proposed focusing on minimizing the average communication rates [@niesen13; @zhang15; @ji2015order; @ramakrishnan2015efficient]. However, in the case of uniform file popularity, the achievable rates provided in these works reduce to the results of [@maddah-ali12a] or [@maddah-ali13], while our proposal strictly improves the state of the art in both [@maddah-ali12a] and [@maddah-ali13] by developing a novel delivery strategy that exploits the commonality of the user demands. There have also been several proposed schemes that aim to minimize the peak rates [@wan2016caching; @amiri2016coded]. The main novelty of our work compared to their results is that we not only propose an optimal design that strictly improves upon all these works through a leader-based strategy, but also provide an intuitive proof for its decodability. The decodability proof is based on the observation that the caching schemes proposed in [@maddah-ali12a] and [@maddah-ali13] may introduce redundancy in the delivery phase, while our proposed scheme provides a systematic way to *optimally* remove all the redundancy, which allows delivering the same amount of information with strictly improved communication rates. [.45]{} ![Numerical comparison between the optimal tradeoff and the state of the art for the centralized setting. Our results strictly improve the prior art in both achievability and converse, for both average rate and peak rate.[]{data-label="fig:cent_compare"}](cent_ave_final.pdf "fig:"){width="0.95\linewidth"} [.45]{} ![Numerical comparison between the optimal tradeoff and the state of the art for the centralized setting. 
Our results strictly improve the prior art in both achievability and converse, for both average rate and peak rate.[]{data-label="fig:cent_compare"}](cent_peak_final.pdf "fig:"){width="0.95\linewidth"} We numerically compare our results with the state-of-the-art schemes and the converses for the centralized setting. As shown in Fig. \[fig:cent\_compare\], both the achievability scheme and the converse provided in our paper strictly improve the prior art, for both average rate and peak rate. Similar results can be shown for the decentralized setting, and a numerical comparison is provided in Section \[sec:ext\]. There have also been several prior works considering caching designs with coded prefetching [@maddah-ali12a; @DBLP:journals/corr/Chen14h; @sahraei2016k; @tian2016caching; @amiri2016fundamental]. They focused on the centralized setting and showed that the peak communication rate achieved by uncoded prefetching schemes can be improved in some low-capacity regimes. Even taking coded prefetching schemes into account, our work strictly improves the prior art in most cases (see Section \[sec:conclu\] for numerical results). More importantly, the caching scheme developed in this paper is within a factor of $2$ of optimal in the general coded prefetching setting, for both average and peak rates, in both centralized and decentralized settings [@yu2017characterizing]. In the following sections, we prove Theorem \[th:p\] by first describing a caching scheme that achieves the minimum average rate (see Section \[sec:opt\]), and then deriving tight lower bounds of the expected rates for any uncoded prefetching scheme (see Section \[sec:conv\]). The Optimal Caching Scheme {#sec:opt} ========================== In this section, we provide a caching scheme (i.e. a prefetching scheme and a delivery scheme) to achieve $R^*$ stated in Theorem \[th:p\]. Before introducing the proposed caching scheme, we demonstrate its main ideas through a motivating example. 
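Throughout this section, the placement is the uncoded prefetching of [@maddah-ali12a]: for integer $t$, each file is split into $\binom{K}{t}$ equal subfiles indexed by the $t$-subsets of users, and user $k$ caches exactly the subfiles whose index contains $k$. A minimal sketch of the resulting index sets (the helper name is ours, for illustration):

```python
from itertools import combinations

def placement(K, t):
    """Index sets of the uncoded placement: user k caches, for every
    file, the subfiles labeled by the t-subsets of {1, ..., K} that
    contain k. That is C(K-1, t-1) of the C(K, t) subfiles per file,
    i.e. a fraction t/K = M/N of each file."""
    subsets = list(combinations(range(1, K + 1), t))
    return {k: [s for s in subsets if k in s] for k in range(1, K + 1)}
```

With $K=6$ and $t=2$ (the setting of the motivating example below), each file has $\binom{6}{2}=15$ subfiles and each user caches $\binom{5}{1}=5$ of them per file, a fraction $t/K=M/N=1/3$ of the database.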
Motivating Example {#sec:example} ------------------ Consider a caching system with $3$ files (denoted by $A$, $B$, and $C$), $6$ users, and a caching size of $1$ file for each user. To develop a caching scheme, we need to design an uncoded prefetching scheme, independent of the demands, and develop delivery strategies for each of the possible $3^6$ demands. For the prefetching strategy, we break file $A$ into $15$ subfiles of equal size, and denote their values by $A_{\{1,2\}}$, $A_{\{1,3\}}$, $A_{\{1,4\}}$, $A_{\{1,5\}}$, $A_{\{1,6\}}$, $A_{\{2,3\}}$, $A_{\{2,4\}}$, $A_{\{2,5\}}$, $A_{\{2,6\}}$, $A_{\{3,4\}}$, $A_{\{3,5\}}$, $A_{\{3,6\}}$, $A_{\{4,5\}}$, $A_{\{4,6\}}$, and $A_{\{5,6\}}$. Each user $k$ caches the subfiles whose index includes $k$, e.g., user $1$ caches $A_{\{1,2\}}$, $A_{\{1,3\}}$, $A_{\{1,4\}}$, $A_{\{1,5\}}$, and $A_{\{1,6\}}$. The same goes for files $B$ and $C$. This prefetching scheme was originally proposed in [@maddah-ali12a]. Given the above prefetching scheme, we now need to develop an optimal delivery strategy for each of the possible demands. In this subsection, we demonstrate the key idea of our proposed delivery scheme through a representative demand scenario, namely, each file is requested by 2 users as shown in Figure \[fig:example\]. ![A caching system with $6$ users, $3$ files, local cache size of $1$ file at each user, and a demand where each file is requested by $2$ users.[]{data-label="fig:example"}](example.pdf){width="45.00000%"} We first consider a subset of 3 users $\{1,2,3\}$. User $1$ requires subfile $A_{\{2,3\}}$, which is only available at users $2$ and $3$. User $2$ requires subfile $A_{\{1,3\}}$, which is only available at users $1$ and $3$. User $3$ requires subfile $B_{\{1,2\}}$, which is only available at users $1$ and $2$. 
In other words, the three users would like to exchange subfiles $A_{\{2,3\}}$, $A_{\{1,3\}}$, and $B_{\{1,2\}}$, which can be enabled by transmitting the message $A_{\{2,3\}}\oplus A_{\{1,3\}}\oplus B_{\{1,2\}}$ over the shared link. Similarly, we can create and broadcast messages for any subset $\mathcal{A}$ of $3$ users that exchange $3$ subfiles among those $3$ users. As a short hand notation, we denote the corresponding message by $Y_\mathcal{A}$. According to the delivery scheme proposed in [@maddah-ali12a], if we broadcast all $\binom{6}{3}=20$ messages that could be created in this way, all users will be able to decode their requested files. However, in this paper we propose a delivery scheme where, instead of broadcasting all those $20$ messages, only $19$ of them are computed and broadcasted, omitting the message $Y_{\{2,4,6\}}$. Specifically, we broadcast the following $19$ values: $$\begin{aligned} Y_{\{1,2,3\}}=B_{\{1,2\}}\oplus A_{\{1,3\}} \oplus A_{\{2,3\}} \\ Y_{\{1,2,4\}}=B_{\{1,2\}}\oplus A_{\{1,4\}} \oplus A_{\{2,4\}}\\ Y_{\{1,2,5\}}=C_{\{1,2\}}\oplus A_{\{1,5\}} \oplus A_{\{2,5\}} \\ Y_{\{1,2,6\}}=C_{\{1,2\}}\oplus A_{\{1,6\}} \oplus A_{\{2,6\}}\\ Y_{\{1,3,4\}}=B_{\{1,3\}}\oplus B_{\{1,4\}} \oplus A_{\{3,4\}}\\ Y_{\{1,3,5\}}=C_{\{1,3\}}\oplus B_{\{1,5\}} \oplus A_{\{3,5\}}\\ Y_{\{1,3,6\}}=C_{\{1,3\}}\oplus B_{\{1,6\}} \oplus A_{\{3,6\}}\\ Y_{\{1,4,5\}}=C_{\{1,4\}}\oplus B_{\{1,5\}} \oplus A_{\{4,5\}}\\ Y_{\{1,4,6\}}=C_{\{1,4\}}\oplus B_{\{1,6\}} \oplus A_{\{4,6\}}\\ Y_{\{1,5,6\}}=C_{\{1,5\}}\oplus C_{\{1,6\}} \oplus A_{\{5,6\}}\\ Y_{\{2,3,4\}}=B_{\{2,3\}}\oplus B_{\{2,4\}} \oplus A_{\{3,4\}}\\ Y_{\{2,3,5\}}=C_{\{2,3\}}\oplus B_{\{2,5\}} \oplus A_{\{3,5\}}\\ Y_{\{2,3,6\}}=C_{\{2,3\}}\oplus B_{\{2,6\}} \oplus A_{\{3,6\}}\\ Y_{\{2,4,5\}}=C_{\{2,4\}}\oplus B_{\{2,5\}} \oplus A_{\{4,5\}}\\ Y_{\{2,5,6\}}=C_{\{2,5\}}\oplus C_{\{2,6\}} \oplus A_{\{5,6\}}\\ Y_{\{3,4,5\}}=C_{\{3,4\}}\oplus B_{\{3,5\}} \oplus B_{\{4,5\}}\\ Y_{\{3,4,6\}}=C_{\{3,4\}}\oplus 
B_{\{3,6\}} \oplus B_{\{4,6\}}\\ Y_{\{3,5,6\}}=C_{\{3,5\}}\oplus C_{\{3,6\}} \oplus B_{\{5,6\}}\\ Y_{\{4,5,6\}}=C_{\{4,5\}}\oplus C_{\{4,6\}} \oplus B_{\{5,6\}} \end{aligned}$$ Surprisingly, even after taking out the extra message, all users are still able to decode the requested files. The reason is as follows: User $1$ is able to decode file $A$, because every subfile $A_{\{i,j\}}$ that is not cached by user $1$ can be computed with the help of $Y_{\{1,i,j\}}$, which is directly broadcasted. The above is the same decoding procedure used in [@maddah-ali12a]. User $2$ can easily decode all subfiles in $A$ except $A_{\{4,6\}}$ in a similar way, although decoding $A_{\{4,6\}}$ is more challenging since the value $Y_{\{2,4,6\}}$, which is needed in the above decoding procedure for decoding $A_{\{4,6\}}$, is not directly broadcasted. However, user $2$ can still decode $A_{\{4,6\}}$ by adding $Y_{\{1,4,6\}}$, $Y_{\{1,4,5\}}$, $Y_{\{1,3,6\}}$, and $Y_{\{1,3,5\}}$, which gives the binary sum of $A_{\{4,6\}}$, $A_{\{4,5\}}$, $A_{\{3,6\}}$, and $A_{\{3,5\}}$. Because $A_{\{4,5\}}$, $A_{\{3,6\}}$, and $A_{\{3,5\}}$ are easily decodable, $A_{\{4,6\}}$ can be obtained consequently. Due to symmetry, all other users can decode their requested files in the same manner. This completes the decoding tasks for the given demand. General Schemes --------------- Now we present a general caching scheme that achieves the rate $R^*$ stated in Theorem \[th:p\]. We focus on presenting prefetching schemes and delivery schemes when $t\in\{0,1,...,K\}$, since for general $t$, the minimum rate $R^*$ can be achieved by memory sharing. \[remark:convexity\] Note that the rates stated in equation (\[eq:unif\_pre\]) for $t\in\{0,1,...,K\}$ form a convex sequence, which are consequently on their lower convex envelope. Thus those rates cannot be further improved using memory sharing. 
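As a quick sanity check of this remark, the following sketch (our own illustration, not from the paper) evaluates the average rate at the corner points $t\in\{0,1,...,K\}$, assuming it takes the form $\mathbb{E}_{\boldsymbol{d}}[(\binom{K}{t+1}-\binom{K-N_{\textup{e}}(\boldsymbol{d})}{t+1})/\binom{K}{t}]$ derived later in this section, and verifies that the second differences are nonnegative for a small instance:

```python
from itertools import product
from math import comb

def avg_rate(N, K, t):
    """E_d[(C(K,t+1) - C(K - Ne(d), t+1)) / C(K,t)] over uniform demands d."""
    total = 0.0
    for d in product(range(N), repeat=K):
        ne = len(set(d))  # number of distinct files requested
        total += (comb(K, t + 1) - comb(K - ne, t + 1)) / comb(K, t)
    return total / N ** K

N, K = 3, 4  # small, arbitrary instance chosen for illustration
rates = [avg_rate(N, K, t) for t in range(K + 1)]
# Nonnegative second differences <=> the sequence is convex, so the
# corner points already lie on their lower convex envelope.
assert all(rates[t - 1] - 2 * rates[t] + rates[t + 1] >= -1e-12
           for t in range(1, K))
```

Enumerating all $N^K$ demands is only feasible for toy parameters, but it suffices to illustrate why memory sharing cannot improve these corner points.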
To prove the achievability of $R^*$, we need to provide an optimal prefetching scheme $\boldsymbol{\mathcal{M}}$, an optimal delivery scheme for every possible user demand $\boldsymbol{d}$ whose average rate achieves $R^*$, and a valid decoding algorithm for the users. The main idea of our proposed achievability scheme is to first design a prefetching scheme that enables multicast coding opportunities, and then, in the delivery phase, to deliver the messages optimally by effectively solving an index coding problem. We consider the following optimal prefetching: We partition each file $i$ into $\binom{K}{t}$ non-overlapping subfiles of approximately equal size. We assign the $\binom{K}{t}$ subfiles to $\binom{K}{t}$ different subsets of $\{1,...,K\}$ of size $t$, and denote the value of the subfile assigned to subset $\mathcal{A}$ by $W_{i,\mathcal{A}}$. Given this partition, each user $k$ caches all bits in all subfiles $W_{i,\mathcal{A}}$ such that $k\in\mathcal{A}$. Because each user caches $\binom{K-1}{t-1}N$ subfiles, and each subfile has $F/\binom{K}{t}$ bits, the caching load of each user equals $NtF/K=MF$ bits, which satisfies the memory constraint. This prefetching was originally proposed in [@maddah-ali12a]. In the rest of the paper, we refer to this prefetching as *symmetric batch prefetching*. Given this prefetching (denoted by $\boldsymbol{\mathcal{M_{\textup{batch}}}}$), our goal is to show that for any demand $\boldsymbol{d}$, we can find a delivery scheme that achieves the following optimal rate with zero error probability: [^9] $$\begin{aligned} \label{eq:single} R^*_{\epsilon=0}\left(\boldsymbol{d},\boldsymbol{\mathcal{M_{\textup{batch}}}}\right)=\frac{\binom{K}{t+1}-\binom{K-N_{\textup{e}}(\boldsymbol{d})}{t+1}}{\binom{K}{t}}. \end{aligned}$$ Hence, by taking the expectation over demand $\boldsymbol{d}$, the rate $R^*$ stated in Theorem \[th:p\] can be achieved.
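The symmetric batch prefetching can be sketched in a few lines; the following is our own minimal illustration (the function name and the small parameters are not from the paper), which also checks the memory-constraint count $\binom{K-1}{t-1}N$:

```python
from itertools import combinations
from math import comb

def batch_prefetch(N, K, t):
    """Symmetric batch prefetching: file i is split into C(K,t) subfiles
    W_{i,A}, one per size-t subset A of users; user k caches W_{i,A} iff k in A."""
    cache = {k: set() for k in range(1, K + 1)}
    for i in range(N):
        for A in combinations(range(1, K + 1), t):
            for k in A:
                cache[k].add((i, A))
    return cache

N, K, t = 3, 6, 2  # the motivating example: M = N * t / K = 1 file per user
cache = batch_prefetch(N, K, t)
# Each user holds C(K-1, t-1) * N subfiles of F / C(K, t) bits each,
# i.e. N * t * F / K = M * F bits, meeting the memory constraint.
assert all(len(c) == comb(K - 1, t - 1) * N for c in cache.values())
```

With $N=3$, $K=6$, $t=2$, each user caches $5\times 3=15$ subfiles, matching the $15$ subfiles per file in the motivating example.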
Note that, in the special case where all users are requesting different files (i.e., $N_{\textup{e}}(\boldsymbol{d})=K$), the above rate equals $\frac{K-t}{t+1}$, which can already be achieved by the delivery scheme proposed in [@maddah-ali12a]. Our proposed scheme aims to achieve this optimal rate in more general circumstances, when some users may share common demands. Finding the minimum communication load given a prefetching $\boldsymbol{\mathcal{M}}$ can be viewed as a special case of the index coding problem. Theorem \[th:p\] indicates the optimality of the delivery scheme given the symmetric batch prefetching, which implies that (\[eq:single\]) gives the solution to a special class of non-symmetric index coding problems. The optimal delivery scheme is designed as follows: For each demand $\boldsymbol{d}$, recall that $N_{\textup{e}}(\boldsymbol{d})$ denotes the number of distinct files requested by all users. The server arbitrarily selects a subset of $N_{\textup{e}}(\boldsymbol{d})$ users, denoted by $\mathcal{U}=\{u_1,...,u_{N_{\textup{e}}(\boldsymbol{d})}\}$, that request $N_{\textup{e}}(\boldsymbol{d})$ different files. We refer to these users as *leaders*. Given an arbitrary subset $\mathcal{A}$ of $t+1$ users, each user $k\in \mathcal{A}$ needs the subfile $W_{d_k, \mathcal{A}\backslash\{k\}}$, which is known by all other users in $\mathcal{A}$. In other words, all users in set $\mathcal{A}$ would like to exchange subfiles $W_{d_k, \mathcal{A}\backslash\{k\}}$ for all $k\in \mathcal{A}$. This exchange can be performed if the binary sum of all those subfiles, i.e., $\underset{x\in \mathcal{A}}{\mathlarger{\oplus}}W_{d_x, \mathcal{A}\backslash\{x\}}$, is available from the broadcasted message.
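This exchange step can be checked concretely. In the sketch below (our own illustration), the three subfiles exchanged by users $\{1,2,3\}$ in the motivating example are modeled as random 64-bit blocks; one broadcast XOR suffices for each user to recover its missing subfile:

```python
import random

rng = random.Random(42)
# Subset {1,2,3} of the motivating example: user 1 wants A_{2,3}, user 2
# wants A_{1,3}, user 3 wants B_{1,2}; each caches the other two subfiles.
A23, A13, B12 = (rng.getrandbits(64) for _ in range(3))

Y = A23 ^ A13 ^ B12  # the single broadcast message Y_{1,2,3}

# Each user XORs out the two subfiles it already caches:
assert Y ^ A13 ^ B12 == A23  # user 1 recovers A_{2,3}
assert Y ^ A23 ^ B12 == A13  # user 2 recovers A_{1,3}
assert Y ^ A23 ^ A13 == B12  # user 3 recovers B_{1,2}
```

One message thus serves $t+1$ users at once, which is the multicasting gain exploited throughout the delivery phase.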
To simplify the description of the delivery scheme, for each subset $\mathcal{A}$ of users, we define the following short hand notation $$\begin{aligned} Y_\mathcal{A}=\underset{x\in \mathcal{A}}{\mathlarger{\oplus}}W_{d_x, \mathcal{A}\backslash\{x\}}.\label{eq:ya} \end{aligned}$$ To achieve the rate stated in (\[eq:single\]), the server only greedily broadcasts the binary sums that directly help at least $1$ leader. Rigorously, the server computes and broadcasts all $Y_\mathcal{A}$ for all subsets $\mathcal{A}$ of size $t+1$ that satisfy $\mathcal{A}\cap\mathcal{U}\neq\emptyset$. The length of the message equals $\binom{K}{t+1}-\binom{K-N_{\textup{e}}(\boldsymbol{d})}{t+1}$ times the size of a subfile, which matches the stated rate. We now prove that each user who requests a file is able to decode the requested file upon receiving the messages. For any leader $k\in \mathcal{U}$ and any subfile $W_{d_k, \mathcal{A}}$ that is requested but not cached by user $k$, the message $Y_{\{k\}\cup \mathcal{A}}$ is directly available from the broadcast. Thus, $k$ is able to obtain all requested subfiles by decoding each subfile $W_{d_k, \mathcal{A}}$ from message $Y_{\{k\}\cup \mathcal{A}}$ using the following equation: $$\begin{aligned} W_{d_k, \mathcal{A}}= Y_{\{k\}\cup \mathcal{A}} \ \mathlarger{\oplus} \left(\underset{x\in \mathcal{A}}{\mathlarger{\oplus}}W_{d_x, \{k\}\cup \mathcal{A}\backslash\{x\}}\right), \end{aligned}$$ which directly follows from equation (\[eq:ya\]). The decoding procedure for a non-leader user $k$ is less straightforward, because not all messages $Y_{\{k\}\cup\mathcal{A}}$ for corresponding required subfiles $W_{d_k, \mathcal{A}}$ are directly broadcasted. However, user $k$ can generate these messages simply based on the received messages, and can thus decode all required subfiles. We prove the above fact as follows. First we prove the following simple lemma: \[dec\_l\] Given a demand $\boldsymbol{d}$, and a set of leaders $\mathcal{U}$. 
For any subset $\mathcal{B}\subseteq\{1,...,K\}$ that includes $\mathcal{U}$, let $\mathcal{V}_{\textup{F}}$ be the family of all subsets $\mathcal{V}$ of $\mathcal{B}$ such that each requested file in $\boldsymbol{d}$ is requested by exactly one user in $\mathcal{V}$. The following equation holds: $$\begin{aligned} \label{dec_lemma} \underset{\mathcal{V}\in\mathcal{V}_{\textup{F}}}{\mathlarger{\oplus}} Y_{\mathcal{B}\backslash\mathcal{V}}=0 \end{aligned}$$ if each $Y_{\mathcal{B}\backslash\mathcal{V}}$ is defined in (\[eq:ya\]). To prove Lemma \[dec\_l\], we essentially need to show that, after expanding the LHS of equation (\[dec\_lemma\]) into a binary sum of subfiles using the definition in (\[eq:ya\]), each subfile is counted an even number of times. This will ensure that the net sum is equal to $0$. To rigorously prove this fact, we start by defining the following. For each $u\in \mathcal{U}$ we define $\mathcal{B}_u$ as $$\begin{aligned} \mathcal{B}_u=\{x\in \mathcal{B}\ |\ d_x=d_u\}. \end{aligned}$$ Then all sets $\mathcal{B}_u$ disjointly cover the set $ \mathcal{B}$, and the following equations hold: $$\begin{aligned} \underset{\mathcal{V}\in\mathcal{V}_{\textup{F}}}{\mathlarger{\oplus}} Y_{ \mathcal{B}\backslash\mathcal{V}}&=\underset{\mathcal{V}\in\mathcal{V}_{\textup{F}}}{\mathlarger{\oplus}} \ \underset{x\in \mathcal{B}\backslash\mathcal{V}}{\mathlarger{\oplus}}W_{d_x, \mathcal{B}\backslash(\mathcal{V} \cup \{x\})}\\ &=\underset{u\in\mathcal{U}}{\mathlarger{\oplus}} \ \underset{\mathcal{V}\in\mathcal{V}_{\textup{F}}}{\mathlarger{\oplus}} \ \underset{x\in ( \mathcal{B}\backslash\mathcal{V})\cap \mathcal{B}_u}{\mathlarger{\oplus}}W_{d_u, \mathcal{B}\backslash(\mathcal{V} \cup \{x\})}\\ &=\underset{u\in\mathcal{U}}{\mathlarger{\oplus}} \ \underset{\mathcal{V}\in\mathcal{V}_{\textup{F}}}{\mathlarger{\oplus}} \ \underset{x\in \mathcal{B}_u\backslash\mathcal{V}}{\mathlarger{\oplus}}W_{d_u, \mathcal{B}\backslash(\mathcal{V} \cup \{x\})}. 
\end{aligned}$$ For each $u\in\mathcal{U}$, we let $\mathcal{V}_u$ be the family of all subsets $\mathcal{V}'$ of $\mathcal{B}\backslash \mathcal{B}_u$ such that each requested file in $\boldsymbol{d}$, except $d_u$, is requested by exactly one user in $\mathcal{V}'$. Then $\mathcal{V}_{\textup{F}}$ can be represented as follows: $$\begin{aligned} \mathcal{V}_{\textup{F}}=\{ \{y\}\cup \mathcal{V}'\ |\ y\in\mathcal{B}_u , \mathcal{V}'\in\mathcal{V}_u\}. \end{aligned}$$ Consequently, the following equation holds for each $u\in\mathcal{U}$: $$\begin{aligned} \underset{\mathcal{V}\in\mathcal{V}_{\textup{F}}}{\mathlarger{\oplus}} \ \underset{x\in \mathcal{B}_u\backslash\mathcal{V}}{\mathlarger{\oplus}}&W_{d_u, \mathcal{B}\backslash(\mathcal{V} \cup \{x\})}\nonumber\\&=\underset{\mathcal{V}'\in\mathcal{V}_u}{\mathlarger{\oplus}} \ \underset{y\in\mathcal{B}_u}{\mathlarger{\oplus}} \ \underset{x\in \mathcal{B}_u \backslash\{y\} }{\mathlarger{\oplus}}W_{d_u, \mathcal{B}\backslash(\mathcal{V}' \cup \{x,y\})}\\ &=\underset{\mathcal{V}'\in\mathcal{V}_u}{\mathlarger{\oplus}} \ \underset{\substack{(x,y)\in\mathcal{B}^2_u\\x\neq y}}{\mathlarger{\oplus}} W_{d_u, \mathcal{B}\backslash(\mathcal{V}' \cup \{x,y\})} \end{aligned}$$ Note that $W_{d_u, \mathcal{B}\backslash(\mathcal{V}' \cup \{x,y\})}$ and $W_{d_u, \mathcal{B}\backslash(\mathcal{V}' \cup \{y,x\})}$ are the same subfile. Hence, every single subfile in the above equation is counted exactly twice, so the terms sum to $0$. Consequently, the LHS of equation (\[dec\_lemma\]) also equals $0$. Consider any subset $\mathcal{A}$ of $t+1$ non-leader users.
From Lemma \[dec\_l\], the message $Y_\mathcal{A}$ can be directly computed from the broadcasted messages using the following equation: $$\begin{aligned} Y_\mathcal{A}=\underset{\mathcal{V}\in\mathcal{V}_{\textup{F}}\backslash \{\mathcal{U}\}}{\mathlarger{\oplus}} Y_{\mathcal{B}\backslash\mathcal{V}}, \end{aligned}$$ where $\mathcal{B}=\mathcal{A}\cup \mathcal{U}$, given the fact that all messages on the RHS of the above equation are broadcasted, because each $\mathcal{B}\backslash\mathcal{V}$ has a size of $t+1$ and contains at least one leader. Hence, each user $k$ can obtain the value $Y_\mathcal{A}$ for any subset $\mathcal{A}$ of $t+1$ users, and can subsequently decode its requested file as previously discussed. An interesting open problem is to find computationally efficient decoding algorithms for the proposed optimal caching scheme. The decoding algorithm proposed in this paper imposes extra computation at the non-leader users, since they have to solve for the missing messages to recover all needed subfiles. However, there are some ideas that one may explore to improve this decoding strategy, e.g., designing a smarter approach for non-leader users instead of naively recovering all required messages before decoding the subfiles (see the decoding approach provided in the motivating example in Section \[sec:example\]). Converse {#sec:conv} ======== In this section, we derive a tight lower bound on the minimum expected rate $R^*$, which shows the optimality of the caching scheme proposed in this paper. To derive the corresponding lower bound on the average rate over all demands, we divide the set $\mathcal{D}$ into smaller subsets, and lower bound the average rates within each subset individually.
We refer to these smaller subsets as *types*, which are defined as follows.[^10] ![Dividing $\mathcal{D}$ into 5 types, for a caching problem with $4$ files and $4$ users.[]{data-label="fig:types"}](ddddd.pdf){width="35.00000%"} Given an arbitrary demand $\boldsymbol{d}$, we define its *statistics*, denoted by $\boldsymbol{s}(\boldsymbol{d})$, as a sorted array of length $N$, such that $s_i(\boldsymbol{d})$ equals the number of users that request the $i$th most requested file. We denote the set of all possible statistics by $\mathcal{S}$. Grouping by the same statistics, the set of all demands $\mathcal{D}$ can be broken into many small subsets. For any statistics $\boldsymbol{s}\in\mathcal{S}$, we define type $\mathcal{D}_{\boldsymbol{s}}$ as the set of queries with statistics $\boldsymbol{s}$. For example, consider a caching problem with $4$ files (denoted by $A$, $B$, $C$, and $D$) and $4$ users. The statistics of the demand $\boldsymbol{d}=(A,A,B,C)$ equals $\boldsymbol{s}(\boldsymbol{d})=(2,1,1,0)$. More generally, the set of all possible statistics for this problem is $\mathcal{S}=\{(4,0,0,0),(3,1,0,0),(2,2,0,0),(2,1,1,0),(1,1,1,1)\}$, and $\mathcal{D}$ can be divided into $5$ types accordingly, as shown in Fig. \[fig:types\]. Note that for each demand $\boldsymbol{d}$, the value $N_{\textup{e}}(\boldsymbol{d})$ only depends on its statistics $\boldsymbol{s}({\boldsymbol{d}})$, and thus the value is identical across all demands in $\mathcal{D}_{\boldsymbol{s}}$. For convenience, we denote that value by $N_{\textup{e}}(\boldsymbol{s})$. Given a prefetching $\boldsymbol{\mathcal{M}}$, we denote the average rate within each type $\mathcal{D}_{\boldsymbol{s}}$ by $R^*_{\epsilon}(\boldsymbol{s}, \boldsymbol{\mathcal{M}})$. Rigorously, $$\begin{aligned} R^*_\epsilon({\boldsymbol{s}}, \boldsymbol{\mathcal{M}})&=\frac{1}{|\mathcal{D}_{\boldsymbol{s}}|}\sum_{\boldsymbol{d}\in \mathcal{D}_{\boldsymbol{s}}}R^*_\epsilon(\boldsymbol{d},\boldsymbol{\mathcal{M}}). 
\end{aligned}$$ Recall that all demands are equally likely, so we have $$\begin{aligned} R^*&=\sup_{\epsilon>0} \adjustlimits\limsup_{F\rightarrow+\infty}\min_{{\boldsymbol{\mathcal{M}}}} \mathbb{E}_{\boldsymbol{s}}[R_\epsilon^*({\boldsymbol{s}}, \boldsymbol{\mathcal{M}})]\\ &\geq \sup_{\epsilon>0} \limsup_{F\rightarrow+\infty}\mathbb{E}_{\boldsymbol{s}}[\min_{{\boldsymbol{\mathcal{M}}}} R_\epsilon^*({\boldsymbol{s}}, \boldsymbol{\mathcal{M}})].\label{ineq:17} \end{aligned}$$ Hence, in order to lower bound $R^*$, it is sufficient to bound the minimum value of $R^*_\epsilon({\boldsymbol{s}}, \boldsymbol{\mathcal{M}})$ for each type $\mathcal{D}_{\boldsymbol{s}}$ individually. We show that, when the prefetching is uncoded, the minimum average rate within a type can be tightly bounded (when $F$ is large and $\epsilon$ is small), thus the rate-memory tradeoff can be completely characterized using this technique. The lower bounds of the minimum average rates within each type are presented in the following lemma: \[lemma:univ\] Consider a caching problem with $N$ files, $K$ users, and a local cache size of $M$ files for each user. For any type $\mathcal{D}_{\boldsymbol{s}}$, the minimum value of $R_\epsilon^*(\boldsymbol{s}, \boldsymbol{\mathcal{M}})$ is lower bounded by $$\begin{aligned} \min_{\boldsymbol{\mathcal{M}}} R^*_\epsilon(\boldsymbol{s} ,\boldsymbol{\mathcal{M}})\geq &\textup{Conv}\left( \frac{\binom{K}{t+1}-\binom{K-N_{\textup{e}}(\boldsymbol{s})}{t+1}}{\binom{K}{t}}\right)\nonumber\\& -\left(\frac{1}{F}+ N^2_{\textup{e}}(\boldsymbol{s})\epsilon \right), \end{aligned}$$ where $\textup{Conv}(f(t))$ denotes the lower convex envelope of the following points: $\{(t,f(t))\ | \ t\in\{0,1,...,K\}\}$. The above lemma characterizes the minimum average rate given a type $\mathcal{D}_{\boldsymbol{s}}$, if the prefetching $\boldsymbol{\mathcal{M}}$ can be designed based on $s$. 
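The type decomposition underlying this lemma is easy to reproduce programmatically. The following sketch (our own illustration) enumerates the statistics of all demands for the 4-file, 4-user example of Fig. \[fig:types\] and confirms that exactly $5$ types arise:

```python
from itertools import product

def statistics(d, N):
    """s(d): per-file request counts, sorted in decreasing order, padded to length N."""
    counts = sorted((d.count(f) for f in set(d)), reverse=True)
    return tuple(counts) + (0,) * (N - len(counts))

N = K = 4  # the 4-file, 4-user example of Fig. [fig:types]
assert statistics(("A", "A", "B", "C"), N) == (2, 1, 1, 0)

all_types = {statistics(d, N) for d in product(range(N), repeat=K)}
assert len(all_types) == 5  # matches the 5 types shown in Fig. [fig:types]
```

Note that $N_{\textup{e}}(\boldsymbol{s})$ is simply the number of nonzero entries of $\boldsymbol{s}$, which is why it is constant within each type.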
However, for (\[ineq:17\]) to be tight, the average rate for each different type has to be minimized by the same prefetching. Surprisingly, such an optimal prefetching exists, an example being the symmetric batch prefetching described in Section \[sec:opt\]. This indicates that the symmetric batch prefetching is universally optimal for all types in terms of the average rates. We postpone the proof of Lemma \[lemma:univ\] to Appendix \[app:keylemma\] and first prove the converse using the lemma. From (\[ineq:17\]) and Lemma \[lemma:univ\], $R^*$ can be lower bounded as follows: $$\begin{aligned} R^*&\geq\sup_{\epsilon>0} \limsup_{F\rightarrow+\infty}\mathbb{E}_{\boldsymbol{s}}\left[\min_{\boldsymbol{\mathcal{M}}}R_\epsilon^*({\boldsymbol{s}},\boldsymbol{\mathcal{M}})\right] \\&\geq \mathbb{E}_{\boldsymbol{s}}\left[\textup{Conv}\left(\frac{\binom{K}{t+1}-\binom{K-N_{\textup{e}}(\boldsymbol{s})}{t+1}}{\binom{K}{t}}\right)\right].\label{eq:39} \end{aligned}$$ Because the sequence $$\begin{aligned} c_n &= \frac{\binom{K}{n+1}-\binom{K-N_{\textup{e}}(\boldsymbol{s})}{n+1}}{\binom{K}{n}} \end{aligned}$$ is convex, we can switch the order of the expectation and the Conv in (\[eq:39\]). Therefore, $R^*$ is lower bounded by the rate defined in Theorem \[th:p\].[^11] Extension to the Decentralized Setting {#sec:ext} ====================================== In the sections above, we introduced a new centralized caching scheme and a new bounding technique that completely characterize the minimum average communication rate and the minimum peak rate, when the prefetching is required to be uncoded. Interestingly, these techniques can also be extended to fully characterize the rate-memory tradeoff for decentralized caching. In this section, we formally establish a system model for decentralized caching systems, and state the exact rate-memory tradeoff as our main results for both the average rate and the peak rate.
System Model and Problem Formulation ------------------------------------ In many practical systems, out of the large number of users that may potentially request files from the server through the shared error-free link, only a random, unknown subset is connected to the link and making requests at any given time. To handle this situation, the concept of a decentralized prefetching scheme was introduced in [@maddah-ali13], where each user fills its cache randomly and independently, based on the same probability distribution. The goal in the decentralized setting is to find a decentralized prefetching scheme, without the knowledge of the number and the identities of the users making requests, to minimize the required communication rates given an arbitrarily large caching system. Based on the above framework, we formally define decentralized caching as follows: In a *decentralized caching scheme*, instead of following a deterministic caching scheme, each user $k$ caches a subset $\mathcal{M}_k$ of size no more than $MF$ bits randomly and independently, based on the same probability distribution, denoted by $P_{{\mathcal{M}}}$. Rigorously, when $K$ users are making requests, the probability distribution of the prefetching $\boldsymbol{\mathcal{M}}$ is given by $$\begin{aligned} \mathbbm{P}\left(\boldsymbol{\mathcal{M}}=(\mathcal{M}_1,...,\mathcal{M}_K)\right)= \prod_{i=1}^{K}{{P}_{\mathcal{M}}(\mathcal{M}_i)}.\nonumber \end{aligned}$$ We define a *decentralized caching scheme*, denoted by ${P}_{\mathcal{M};F}$, as a distribution, parameterized by the file size $F$, that specifies the prefetching distribution $P_{\mathcal{M}}$ for all possible values of $F$.
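As an illustration of this definition, the sketch below (names and parameters are our own) draws i.i.d. caches from one concrete prefetching distribution, caching $MF/N$ uniformly random bits of each file, which is also the distribution the paper later identifies as optimal:

```python
import random

def sample_prefetch(N, F, M, rng):
    """One draw from a decentralized prefetching distribution P_M:
    cache M*F/N uniformly random bits of each of the N files."""
    per_file = M * F // N
    return tuple(frozenset(rng.sample(range(F), per_file)) for _ in range(N))

N, F, M, K = 3, 300, 1, 4
rng = random.Random(0)
# The K users draw i.i.d. from the same distribution; no coordination is
# needed, and K itself never enters the construction of a single cache.
caches = [sample_prefetch(N, F, M, rng) for _ in range(K)]
assert all(sum(len(s) for s in c) <= M * F for c in caches)
```

The key point is that the per-user distribution is fixed before the number of active users $K$ is known, which is exactly what distinguishes this model from the centralized one.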
Similar to the centralized setting, when $K$ users are making requests, we say that a rate $R$ is *$\epsilon$-achievable* given a prefetching distribution $P_{{\mathcal{M}}}$ and a demand $\boldsymbol{d}$ if and only if there exists a message $X$ of length $RF$ such that every active user $k$ is able to recover its desired file $W_{d_k}$ with a probability of error of at most $\epsilon$. This is rigorously defined as follows: When $K$ users are making requests, $R$ is *$\epsilon$-achievable* given a prefetching distribution $P_{{\mathcal{M}}}$ and a demand $\boldsymbol{d}$ if and only if for every possible realization of the prefetching $\boldsymbol{\mathcal{M}}$, we can find a real number $\epsilon_{\boldsymbol{\mathcal{M}}}$, such that $R$ is $\epsilon_{\boldsymbol{\mathcal{M}}}$-achievable given $\boldsymbol{\mathcal{M}}$ and $\boldsymbol{d}$, and $ \mathbbm{E}[\epsilon_{\boldsymbol{\mathcal{M}}}]\leq \epsilon $. We denote $R^*_{\epsilon, K}(\boldsymbol{d}, P_{\mathcal{M}})$ as the minimum $\epsilon$-achievable rate given $K$, $\boldsymbol{d}$ and $P_{{\mathcal{M}}}$, and we define the rate-memory tradeoff for the average rate based on this notation. For each $K\in \mathbb{N}$, and each prefetching scheme $P_{\mathcal{M};F}$, we define the minimum average rate $R^*_{K} ({P}_{\mathcal{M};F})$ as the minimum expected rate under uniformly random demand that can be achieved with vanishing error probability for sufficiently large file size. Specifically, $$\begin{aligned} R^*_{K} ({P}_{\mathcal{M};F})=\sup_{\epsilon>0} \limsup_{F'\rightarrow+\infty}\mathbb{E}_{\boldsymbol{d}} [R^*_{\epsilon,K}(\boldsymbol{d}, P_{{\mathcal{M};F}}(F=F'))],\nonumber \end{aligned}$$ where the demand $\boldsymbol{d}$ is uniformly distributed on $\{1,\dots,N\}^K$. 
Given the fact that a decentralized prefetching scheme is designed without the knowledge of the number of active users $K$, we characterize the rate-memory tradeoff using an infinite dimensional vector, denoted by $\{R_K\}_{K\in\mathbb{N}}$, where each term $R_{K}$ corresponds to the needed communication rates when $K$ users are making requests. We aim to find the region in this infinite dimensional vector space that can be achieved by any decentralized prefetching scheme, and we denote this region by $\mathcal{R}$. Rigorously, we aim to find $$\begin{aligned} \mathcal{R}= \underset{P_{{\mathcal{M};F}}}{\cup} \{\{R_K\}_{K\in\mathbb{N}} \ |\ \forall K\in\mathbb{N}, R_K\geq R^*_{K}(P_{{\mathcal{M};F}}) \}\nonumber, \end{aligned}$$ which is a function of $N$ and $M$. Similarly, we define the rate-memory tradeoff for the peak rate as follows: For each $K\in \mathbb{N}$, and each prefetching scheme $P_{\mathcal{M};F}$, we define the minimum peak rate $R^*_{K,\textup{peak}} ({P}_{\mathcal{M};F})$ as the minimum communication rate that can be achieved with vanishing error probability for sufficiently large file size, for the worst case demand. Specifically, $$\begin{aligned} R^*_{K,\textup{peak}} ({P}_{\mathcal{M};F})=\sup_{\epsilon>0}\adjustlimits \limsup_{F'\rightarrow+\infty}\max_{\boldsymbol{d}\in \mathcal{D}} [R^*_{\epsilon,K}(\boldsymbol{d}, P_{{\mathcal{M};F}}(F=F'))],\nonumber \end{aligned}$$ We aim to find the region in the infinite dimensional vector space that can be achieved by any decentralized prefetching scheme in terms of the peak rate, and we denote this region $\mathcal{R}_{\textup{peak}}$. Rigorously, we aim to find $$\begin{aligned} \mathcal{R}_{\textup{peak}}= \underset{P_{{\mathcal{M};F}}}{\cup}\{\{R_K\}_{K\in\mathbb{N}} \ |\ \forall K\in\mathbb{N}, R_K\geq R^*_{K,\textup{peak}}(P_{{\mathcal{M};F}}) \}\nonumber, \end{aligned}$$ as a function of $N$ and $M$. 
Exact Rate-Memory Tradeoff for Decentralized Setting ---------------------------------------------------- The following theorem completely characterizes the rate-memory tradeoff for the average rate in the decentralized setting: \[thm:ave-dec\] For a decentralized caching problem with parameters $N$ and $M$, $\mathcal{R}$ is completely characterized by the following equation: $$\begin{aligned} \mathcal{R}=&\left\{\{R_K\}_{K\in\mathbb{N}}\ \vphantom{\left. R_K\geq \mathbb{E}_{\boldsymbol{d}}\left[\frac{N-M}{M} \left(1-\left(\frac{N-M}{N}\right)^{N_{\textup{e}}(\boldsymbol{d})}\right)\right]\right\}} \right|\nonumber\\ &\ \ \left. R_K\geq \mathbb{E}_{\boldsymbol{d}}\left[\frac{N-M}{M} \left(1-\left(\frac{N-M}{N}\right)^{N_{\textup{e}}(\boldsymbol{d})}\right)\right]\right\}, \end{aligned}$$ where demand $\boldsymbol{d}$ given each $K$ is uniformly distributed on $\{1,...,N\}^K$ and $N_{\textup{e}}(\boldsymbol{d})$ denotes the number of distinct requests in $\boldsymbol{d}$.[^12] The proof of the above theorem is provided in Appendix \[app:decent-ave\]. Theorem \[thm:ave-dec\] demonstrates that $\mathcal{R}$ has a very simple shape with one dominating point: $\{R_K=\mathbb{E}_{\boldsymbol{d}}[\frac{N-M}{M} (1-(\frac{N-M}{N})^{N_{\textup{e}}(\boldsymbol{d})})]\}_{K\in\mathbb{N}}$. In other words, we can find a decentralized prefetching scheme that simultaneously achieves the minimum expected rates for all possible numbers of active users. Therefore, there is no tension among the expected rates for different numbers of active users. In Appendix \[app:decent-ave\], we will show that one example of the optimal prefetching scheme is to let each user cache $\frac{MF}{N}$ bits in each file uniformly independently. To prove Theorem \[thm:ave-dec\], we propose a decentralized caching scheme that strictly improves the state of the art [@maddah-ali13; @amiri2016coded] (see Appendix \[app:decent-opt\]), for both the average rate and the peak rate. 
In particular, for the average rate, the state-of-the-art scheme proposed in [@maddah-ali13] achieves the rate $\frac{N-M}{N}\cdot\min\{\frac{N}{M}(1-(1-\frac{M}{N})^K),\mathbb{E}_{\boldsymbol{d}}[N_e(\boldsymbol{d})]\}$, which is strictly larger than the rate achieved by our proposed scheme $\mathbb{E}_{\boldsymbol{d}}[\frac{N-M}{M}(1-(\frac{N-M}{N})^{N_{\textup{e}}(\boldsymbol{d})})]$ in most cases. Similarly, one can show that our scheme strictly improves [@amiri2016coded], and we omit the details for brevity. \[remark:fund-ave\] We also prove a matching information-theoretic outer bound of $\mathcal{R}$, by showing that the achievable rate of any decentralized caching scheme can be lower bounded by the achievable rate of a caching scheme with centralized prefetching that is used on a system where there are a large number of users that may potentially request a file, but only a subset of $K$ users are actually making the request. Interestingly, the tightness of this bound indicates that, in a system where the number of potential users is significantly larger than the number of active users, our proposed decentralized caching scheme is optimal, even compared to schemes where the users are not caching according to an i.i.d. distribution. Using the proposed decentralized caching scheme and the same converse bounding technique, the following corollary, which completely characterizes the rate-memory tradeoff for the peak rate in the decentralized setting, directly follows: \[cor:dec\] For a decentralized caching problem with parameters $N$ and $M$, the achievable region $\mathcal{R}_{\textup{peak}}$ is completely characterized by the following equation:[^13] $$\begin{aligned} \mathcal{R}_{\textup{peak}}=&\left\{\{R_K\}_{K\in\mathbb{N}}\ \vphantom{\left. R_K\geq \frac{N-M}{M} \left(1-\left(\frac{N-M}{N}\right)^{\min\{N,K\}}\right)\right\}} \right|\nonumber \\ &\ \ \left. R_K\geq \frac{N-M}{M} \left(1-\left(\frac{N-M}{N}\right)^{\min\{N,K\}}\right)\right\}.
\end{aligned}$$ The proof of the above corollary is provided in Appendix \[app:decent-worst\]. Corollary \[cor:dec\] demonstrates that $\mathcal{R}_{\textup{peak}}$ has a very simple shape with one dominating point: $\{R_K=\frac{N-M}{M} (1-(\frac{N-M}{N})^{\min\{N,K\}})\}_{K\in\mathbb{N}}$. In other words, we can find a decentralized prefetching scheme that simultaneously achieves the minimum peak rates for all possible numbers of active users. Therefore, there is no tension among the peak rates for different numbers of active users. In Appendix \[app:decent-worst\], we will show that one example of the optimal prefetching scheme is to let each user cache $\frac{MF}{N}$ bits in each file uniformly independently. \[remark:fund-w\] Similar to the average rate case, a matching converse can be proved by deriving the minimum achievable rates of centralized caching schemes in a system where only a subset of users are actually making the request. Consequently, in a caching system where the number of potential users is significantly larger than the number of active users, our proposed decentralized scheme is also optimal in terms of peak rate, even compared to schemes where the users are not caching according to an i.i.d. distribution. [.45]{} ![Numerical comparison between the optimal tradeoff and the state of the arts for the decentralized setting. Our results strictly improve the prior arts in both achievability and converse, for both average rate and peak rate.[]{data-label="fig:decent_compare"}](decent_ave_final.pdf "fig:"){width="0.95\linewidth"} [.45]{} ![Numerical comparison between the optimal tradeoff and the state of the arts for the decentralized setting. Our results strictly improve the prior arts in both achievability and converse, for both average rate and peak rate.[]{data-label="fig:decent_compare"}](decent_peak_final.pdf "fig:"){width="0.95\linewidth"} We numerically compare our results with the state-of-the-art schemes and the converses for the decentralized setting.
As shown in Fig. \[fig:decent\_compare\], both the achievability scheme and the converse provided in our paper strictly improve the prior arts, for both average rate and peak rate. Concluding Remarks {#sec:conclu} ================== ![Achievable peak communication rates for centralized schemes that allow coded prefetching. For $N=20$, $K=40$, we compare our proposed achievability scheme with prior-art coded-prefetching schemes [@tian2016caching; @amiri2016fundamental], prior-art converse bounds [@ghasemi15; @sengupta15; @prem2015critical], and two recent results [@yu2017characterizing; @gomez2016fundamental]. The achievability scheme proposed in this paper achieves the best performance to date in most cases, and is within a factor of $2$ of optimal, as shown in [@yu2017characterizing], even compared with schemes that allow coded prefetching. []{data-label="fig:coded"}](coded_final.pdf){width="45.00000%"} In this paper, we characterized the rate-memory tradeoff for the coded caching problem with uncoded prefetching. To that end, we proposed the optimal caching schemes for both the centralized and the decentralized settings, and proved their exact optimality for both average rate and peak rate. The techniques we introduced in this paper can be directly applied to many other problems, immediately improving their state of the art. For instance, the achievability scheme proposed in this paper has already been applied in various settings, achieving improved results [@chugh2017improved; @wan2017novel; @yu2017characterizing]. Beyond these works, the techniques can also be applied in directions such as online caching [@pedarsani13], caching with non-uniform demands [@niesen13], and hierarchical caching [@karamchandani14], where improvements can be immediately achieved by directly plugging in our results. One interesting follow-up direction is to consider the coded caching problem with coded placement.
In this scenario, it has been shown that in the centralized setting, coded prefetching schemes can achieve better peak communication rates. For example, Figure \[fig:coded\] shows that one can improve the peak communication rate by coded placement when the cache size is small. In a recent work [@yu2017characterizing] we have shown that, through a new converse bounding technique, the achievability scheme we proposed in this paper is optimal within a factor of $2$. However, finding the exact optimal solution in this regime remains an open problem. Proof of Lemma \[lemma:univ\] {#app:keylemma} ============================= The proof of Lemma \[lemma:univ\] is organized as follows: We start by proving a lower bound of the communication rate required for a single demand, i.e., $R_\epsilon^*(\boldsymbol{d},\boldsymbol{\mathcal{M}})$. By averaging this lower bound over a single demand type $\mathcal{D}_{\boldsymbol{s}}$, we automatically obtain a lower bound for the rate $R_\epsilon^*(\boldsymbol{s},\boldsymbol{\mathcal{M}})$. Finally, we bound the minimum possible $R_\epsilon^*(\boldsymbol{s},\boldsymbol{\mathcal{M}})$ over all prefetching schemes by solving for the minimum value of our derived lower bound. We first use a genie-aided approach to derive a lower bound of $R_\epsilon^*(\boldsymbol{d},\boldsymbol{\mathcal{M}})$ for any demand $\boldsymbol{d}$ and for any prefetching $\boldsymbol{\mathcal{M}}$: Given a demand $\boldsymbol{d}$, let $\mathcal{U}=\{u_1,...,u_{N_{\textup{e}}(\boldsymbol{d})}\}$ be an arbitrary subset with $N_{\textup{e}}(\boldsymbol{d})$ users that request distinct files. We construct a virtual user whose cache is initially empty. Suppose for each $\ell\in\{1,...,N_{\textup{e}}(\boldsymbol{d})\}$, a genie fills the cache with the values of the bits that are cached by $u_\ell$, but not from files requested by users in $\{u_1,...,u_{\ell-1}\}$.
Then with all the cached information provided by the genie, the virtual user should be able to inductively decode all files requested by users in $\mathcal{U}$ upon receiving the message $X$. Consequently, a lower bound on the communication rate $R_\epsilon^*(\boldsymbol{d},\boldsymbol{\mathcal{M}})$ can be obtained by applying a cut-set bound on the virtual user. Specifically, we prove that the virtual user can decode all $N_{\textup{e}}(\boldsymbol{d})$ requested files with high probability, by inductively decoding each file $d_{u_\ell}$ using the decoding function of user $u_\ell$, from $\ell=1$ to $\ell={N_\textup{e}(\boldsymbol{d})}$: Recall that any communication rate is $\epsilon$-achievable if the error probability of each decoding function is at most $\epsilon$. Consequently, the probability that all $N_{\textup{e}}(\boldsymbol{d})$ decoding functions can correctly decode the requested files is at least $1-N_{\textup{e}}(\boldsymbol{d})\epsilon$. In this scenario, the virtual user can correctly decode all the files, given that at every single step of induction, all bits necessary for the decoding function have either been provided by the genie, or decoded in previous inductive steps. Given this decodability, we can lower bound the needed communication load using Fano’s inequality: $$\begin{aligned} R_\epsilon^*(\boldsymbol{d},&\boldsymbol{\mathcal{M}})F\geq \nonumber\\&H\left( \left.\{W_{d_{u_{\ell}}}\}_{\ell=1}^{N_{\textup{e}}(\boldsymbol{d})}\ \right|\ \textup{Bits cached by the virtual user} \right)\nonumber\\&-(1+ N^2_{\textup{e}}(\boldsymbol{d})\epsilon F ). \end{aligned}$$ Recall that all bits in the library are i.i.d. and uniformly random, the cut-set bound in the above inequality essentially equals the number of bits in the $N_{\textup{e}}(\boldsymbol{d})$ requested files that are not cached by the virtual user. This set includes all bits in each file $d_{u_{\ell}}$ that are not cached by any users in $\{u_1,...,u_{\ell}\}$. 
Hence, the above lower bound is essentially $$\begin{aligned} &R_\epsilon^*(\boldsymbol{d},\boldsymbol{\mathcal{M}})F\geq\nonumber\\& \sum_{\ell=1}^{N_{\textup{e}}(\boldsymbol{d})} \sum_{j=1}^{F} \mathbbm{1}\left(B_{d_{u_\ell},j}\textup{ is not cached by any user in } \{u_1,...,u_{\ell}\}\right)\nonumber\\&-(1+ N^2_{\textup{e}}(\boldsymbol{d})\epsilon F ). \end{aligned}$$ where $B_{i,j}$ denotes the $j$th bit in file $i$. To simplify the discussion, we let $\mathcal{K}_{i,j}$ denote the subset of users that caches $B_{i,j}$. The above lower bound can be equivalently written as $$\begin{aligned} R_\epsilon^*(\boldsymbol{d},\boldsymbol{\mathcal{M}})F\geq &\sum_{\ell=1}^{N_{\textup{e}}(\boldsymbol{d})} \sum_{j=1}^{F} \mathbbm{1}\left(\mathcal{K}_{d_{u_\ell},j}\cap \{u_1,...,u_{\ell}\}=\emptyset\right)\nonumber\\&-(1+ N^2_{\textup{e}}(\boldsymbol{d})\epsilon F ). \end{aligned}$$ Using the above inequality, we derive a lower bound of the average rates as follows: For any positive integer $i$, we denote the set of all permutations on $\{1,...,i\}$ by $\mathcal{P}_i$. Then, for each $p_1\in\mathcal{P}_K$ and $p_2\in\mathcal{P}_N$ given a demand $\boldsymbol{d}$, we define $\boldsymbol{d}(p_1,p_2)$ as a demand satisfying, for each user $k$, $d_k(p_1,p_2)=p_2 (d_{p_1^{-1}(k)})$. We can then apply the above bound to any demand $\boldsymbol{d}(p_1,p_2)$: $$\begin{aligned} R_\epsilon^*&(\boldsymbol{d}(p_1,p_2),\boldsymbol{\mathcal{M}})F\geq\nonumber\\& \sum_{\ell=1}^{N_{\textup{e}}(\boldsymbol{d})} \sum_{j=1}^{F} \mathbbm{1}\left(\mathcal{K}_{p_2(d_{u_\ell}),j}\cap \{p_1(u_1),...,p_1(u_{\ell})\}=\emptyset\right)\nonumber\\&-(1+ N^2_{\textup{e}}(\boldsymbol{d})\epsilon F ).\label{eq:genie_cut_p} \end{aligned}$$ It is easy to verify that by taking the average of (\[eq:genie\_cut\_p\]) over all pairs of $(p_1,p_2)$, only the rates for demands in type $\mathcal{D}_{\boldsymbol{s}(\boldsymbol{d})}$ are counted, and each of them is counted the same number of times due to symmetry. 
Consequently, this approach provides us with a lower bound on the average rate within type $\mathcal{D}_{\boldsymbol{s}(\boldsymbol{d})}$, which is stated as follows: $$\begin{aligned} R^*_\epsilon({\boldsymbol{s}(\boldsymbol{d})},\boldsymbol{\mathcal{M}}) = & \frac{1}{K!N!} \sum_{p_1\in\mathcal{P}_K}\sum_{p_2\in\mathcal{P}_N} R^*_\epsilon(\boldsymbol{d}(p_1,p_2),\boldsymbol{\mathcal{M}}) \\\geq& \frac{1}{K!N!F} \sum_{p_1\in\mathcal{P}_K}\sum_{p_2\in\mathcal{P}_N }\sum_{\ell=1}^{N_{\textup{e}}(\boldsymbol{d})} \sum_{j=1}^{F} \nonumber\\ &\mathbbm{1}\left(\mathcal{K}_{p_2(d_{u_\ell}),j}\cap \{p_1(u_1),...,p_1(u_{\ell})\}=\emptyset\right)\nonumber\\&-\left(\frac{1}{F}+ N^2_{\textup{e}}(\boldsymbol{d})\epsilon \right).\label{ineq:avebound1} \end{aligned}$$ We aim to simplify the above lower bound, in order to find its minimum to prove Lemma \[lemma:univ\]. To simplify this result, we first exchange the order of the summations and evaluate $\frac{1}{K!}\sum\limits_{p_1\in\mathcal{P}_K} \mathbbm{1}\left(\mathcal{K}_{p_2(d_{u_\ell}),j}\cap \{p_1(u_1),...,p_1(u_{\ell})\}=\emptyset\right)$. This is essentially the probability of selecting $\ell$ distinct users $\{p_1(u_1),...,p_1(u_{\ell})\}$ uniformly at random, such that none of them belongs to $\mathcal{K}_{p_2(d_{u_\ell})}$. Out of the $\binom{K}{\ell}$ subsets, $\binom{K-|\mathcal{K}_{p_2(d_{u_\ell}),j}|}{\ell}$ of them satisfy this condition,[^14] which gives the following identity: $$\begin{aligned} \frac{1}{K!}\sum\limits_{p_1\in\mathcal{P}_K} \mathbbm{1}\left(\mathcal{K}_{p_2(d_{u_\ell}),j}\cap \{p_1(u_1),...,p_1(u_{\ell})\}=\emptyset\right)=&\nonumber\\ \frac{\binom{K-|\mathcal{K}_{p_2(d_{u_\ell}),j}|}{\ell}}{\binom{K}{\ell}}&.\label{eq:ident1} \end{aligned}$$ Hence, inequality (\[ineq:avebound1\]) can be simplified based on (\[eq:ident1\]) and the above discussion. 
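The counting identity (\[eq:ident1\]), as well as the two binomial identities invoked later in this appendix, can be verified by brute force. The following sketch (small, arbitrarily chosen parameters, not part of the proof) checks them numerically:

```python
from itertools import permutations
from math import comb

# Brute-force check of identity (eq:ident1) for small, assumed K, n, l:
# the fraction of permutations p1 of {0,...,K-1} whose first l images
# avoid a fixed n-element set equals C(K-n, l) / C(K, l).
K, n, l = 6, 2, 3
blocked = set(range(n))  # stands in for the users caching a given bit
hits = sum(blocked.isdisjoint(p[:l]) for p in permutations(range(K)))
frac = hits / sum(1 for _ in permutations(range(K)))
assert abs(frac - comb(K - n, l) / comb(K, l)) < 1e-12

# The two binomial identities used later to simplify (ineq:avebound3),
# under the paper's convention C(n, k) = 0 for k > n (math.comb agrees):
Ne = 4
for m in range(K + 1):
    for j in range(1, K + 1):
        assert comb(K - m, j) * comb(K, m) == comb(K - j, m) * comb(K, j)
    assert sum(comb(K - j, m) for j in range(1, Ne + 1)) \
        == comb(K, m + 1) - comb(K - Ne, m + 1)
```

The second loop verifies the telescoping (hockey-stick) identity exactly over integers, so no floating-point tolerance is needed there.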
$$\begin{aligned} R^*_\epsilon({\boldsymbol{s}(\boldsymbol{d})},\boldsymbol{\mathcal{M}}) \geq & \frac{1}{N!F} \sum_{p_2\in\mathcal{P}_N}\sum_{\ell=1}^{N_{\textup{e}}(\boldsymbol{d})} \sum_{j=1}^{F} \frac{1}{K!}\sum_{p_1\in\mathcal{P}_K} \nonumber\\&\mathbbm{1}\left(\mathcal{K}_{p_2(d_{u_\ell}),j}\cap \{p_1(u_1),...,p_1(u_{\ell})\}=\emptyset\right)\nonumber\\&-\left(\frac{1}{F}+ N^2_{\textup{e}}(\boldsymbol{d})\epsilon \right)\\ =&\frac{1}{N!F} \sum_{p_2\in\mathcal{P}_N}\sum_{\ell=1}^{N_{\textup{e}}(\boldsymbol{d})} \sum_{j=1}^{F} \frac{\binom{K-|\mathcal{K}_{p_2(d_{u_\ell}),j}|}{\ell}}{\binom{K}{\ell}}\nonumber\\&-\left(\frac{1}{F}+ N^2_{\textup{e}}(\boldsymbol{d})\epsilon \right).\label{ineq:avebound2} \end{aligned}$$ We further simplify this result by computing the summation over $p_2$ and $j$, and evaluating $\frac{1}{N!F}\sum\limits_{p_2\in\mathcal{P}_N}\sum\limits_{j=1}^{F} \binom{K-|\mathcal{K}_{p_2(d_{u_\ell}),j}|}{\ell}$. This is essentially the expectation of $\binom{K-|\mathcal{K}_{i,j}|}{\ell}$ over a uniformly randomly selected bit $B_{i,j}$. Let $a_n$ denote the number of bits in the database that are cached by exactly $n$ users, then $|\mathcal{K}_{i,j}|=n$ holds for $\frac{a_n}{NF}$ fraction of the bits. Consequently, we have $$\begin{aligned} \frac{1}{N!F}\sum\limits_{p_2\in\mathcal{P}_N}\sum\limits_{j=1}^{F} \binom{K-|\mathcal{K}_{p_2(d_{u_\ell}),j}|}{\ell}=\sum_{n=0}^{K} \frac{a_n}{NF}\cdot \binom{K-n}{\ell}. 
\end{aligned}$$ We simplify (\[ineq:avebound2\]) using the above identity: $$\begin{aligned} R^*_\epsilon({\boldsymbol{s}(\boldsymbol{d})},\boldsymbol{\mathcal{M}}) \geq& \sum_{\ell=1}^{N_{\textup{e}}(\boldsymbol{d})} \frac{1}{N!F} \sum_{p_2\in\mathcal{P}_N}\sum_{j=1}^{F} \frac{\binom{K-|\mathcal{K}_{p_2(d_{u_\ell}),j}|}{\ell}}{\binom{K}{\ell}}\nonumber\\&-\left(\frac{1}{F}+ N^2_{\textup{e}}(\boldsymbol{d})\epsilon \right)\\ =&\sum_{\ell=1}^{N_{\textup{e}}(\boldsymbol{d})} \sum_{n=0}^{K} \frac{a_n}{NF}\cdot \frac{\binom{K-n}{\ell}}{\binom{K}{\ell}}-\left(\frac{1}{F}+ N^2_{\textup{e}}(\boldsymbol{d})\epsilon \right)\label{ineq:avebound3} \end{aligned}$$ It can be easily shown that $$\begin{aligned} \frac{\binom{K-n}{\ell}}{\binom{K}{\ell}}&=\frac{\binom{K-\ell}{n}}{\binom{K}{n}} \end{aligned}$$ and $$\begin{aligned} \sum_{\ell=1}^{N_{\textup{e}}(\boldsymbol{d})} \binom{K-\ell}{n}&=\binom{K}{n+1}-\binom{K-N_{\textup{e}}(\boldsymbol{d})}{n+1}. \end{aligned}$$ Thus, we can rewrite (\[ineq:avebound3\]) as $$\begin{aligned} R^*_\epsilon({\boldsymbol{s}(\boldsymbol{d})},\boldsymbol{\mathcal{M}}) \geq& \sum_{n=0}^{K} \frac{a_n}{NF}\cdot \frac{\binom{K}{n+1}-\binom{K-N_{\textup{e}}(\boldsymbol{d})}{n+1}}{\binom{K}{n}}\nonumber\\&-\left(\frac{1}{F}+ N^2_{\textup{e}}(\boldsymbol{d})\epsilon \right). \end{aligned}$$ Hence for any ${\boldsymbol{s}}\in\mathcal{S}$, by arbitrarily selecting a demand $\boldsymbol{d}\in \mathcal{D}_{\boldsymbol{s}}$ and applying the above inequality, the following bound holds for any prefetching $\boldsymbol{\mathcal{M}}$: $$\begin{aligned} R^*_\epsilon({\boldsymbol{s}},\boldsymbol{\mathcal{M}}) &\geq \sum_{n=0}^{K} \frac{a_n}{NF}\cdot \frac{\binom{K}{n+1}-\binom{K-N_{\textup{e}}(\boldsymbol{s})}{n+1}}{\binom{K}{n}}\nonumber\\&-\left(\frac{1}{F}+ N^2_{\textup{e}}(\boldsymbol{s})\epsilon \right).
\end{aligned}$$ After proving a lower bound of $R^*_\epsilon({\boldsymbol{s}},\boldsymbol{\mathcal{M}})$, we proceed to bound its minimum possible value over all prefetching schemes. Let $c_n$ denote the following sequence $$\begin{aligned} c_n &= \frac{\binom{K}{n+1}-\binom{K-N_{\textup{e}}(\boldsymbol{s})}{n+1}}{\binom{K}{n}}. \end{aligned}$$ We have $$\begin{aligned} R^*_\epsilon({\boldsymbol{s}},\boldsymbol{\mathcal{M}}) &\geq \sum_{n=0}^{K} \frac{a_n}{NF}\cdot c_n-\left(\frac{1}{F}+ N^2_{\textup{e}}(\boldsymbol{s})\epsilon \right).\label{ineq:ac} \end{aligned}$$ We denote the lower convex envelope of $c_n$, i.e., the lower convex envelope of points $\{(t,c_t)\ |\ t\in\{0,1,...,K\} \}$, by $\textup{Conv}(c_t)$. Note that $c_n$ is a decreasing sequence, so its lower convex envelope is a decreasing and convex function. Because the following holds for every prefetching (recall that $t=\frac{KM}{N}$): $$\begin{aligned} \sum_{n=0}^K a_n&=NF,\\ \sum_{n=0}^K n a_n&\leq NFt, \end{aligned}$$ we can lower bound (\[ineq:ac\]) using Jensen’s inequality and the monotonicity of $\textup{Conv}(c_t)$: $$\begin{aligned} R^*_\epsilon({\boldsymbol{s}},\boldsymbol{\mathcal{M}}) &\geq \textup{Conv}(c_t)-\left(\frac{1}{F}+ N^2_{\textup{e}}(\boldsymbol{s})\epsilon \right). \end{aligned}$$ Consequently, $$\begin{aligned} \min_{\boldsymbol{\mathcal{M}}}R^*_\epsilon({\boldsymbol{s}},\boldsymbol{\mathcal{M}}) \geq& \min_{\boldsymbol{\mathcal{M}}} \textup{Conv}(c_t)-\left(\frac{1}{F}+ N^2_{\textup{e}}(\boldsymbol{s})\epsilon \right)\\ = &\textup{Conv}(c_t)-\left(\frac{1}{F}+ N^2_{\textup{e}}(\boldsymbol{s})\epsilon \right)\\ =&\textup{Conv}\left(\frac{\binom{K}{t+1}-\binom{K-N_{\textup{e}}(\boldsymbol{s})}{t+1}}{\binom{K}{t}}\right)\nonumber\\&-\left(\frac{1}{F}+ N^2_{\textup{e}}(\boldsymbol{s})\epsilon \right).
\end{aligned}$$ Minimum Peak Rate for Centralized Caching {#app:worst} ========================================= Consider a caching problem with $K$ users, a database of $N$ files, and a local cache size of $M$ files for each user. We define the rate-memory tradeoff for the peak rate as follows: Similar to the average rate case, for each prefetching $\boldsymbol{\mathcal{M}}$, let $R_{\epsilon,\textup{peak}}^*(\boldsymbol{\mathcal{M}})$ denote the peak rate, defined as $$R_{\epsilon,\textup{peak}}^*(\boldsymbol{\mathcal{M}})=\max_{\boldsymbol{d}} R_\epsilon^*(\boldsymbol{d},\boldsymbol{\mathcal{M}}).\nonumber$$ We aim to find the minimum peak rate $R^*_{\textup{peak}}$, where $$R_{\textup{peak}}^*=\sup_{\epsilon>0}\adjustlimits \limsup_{F\rightarrow+\infty}\min_{\boldsymbol{\mathcal{M}}} R_{\epsilon,\textup{peak}}^*(\boldsymbol{\mathcal{M}}),\nonumber$$ which is a function of $N$, $K$, and $M$. Now we prove Corollary \[cor:worst\], which completely characterizes the value of $R_{\textup{peak}}^*$. It is easy to show that the rate stated in Corollary \[cor:worst\] can be exactly achieved using the caching scheme introduced in Section \[sec:opt\]. Hence, we focus on proving the optimality of the proposed coding scheme. Recall the definitions of statistics and types (see Section \[sec:conv\]).
Given a prefetching $\boldsymbol{\mathcal{M}}$ and statistics $\boldsymbol{s}$, we define the peak rate within type $\mathcal{D}_{\boldsymbol{s}}$, denoted by $R_{\epsilon,\textup{peak}}^*(\boldsymbol{s},\boldsymbol{\mathcal{M}})$, as $$R_{\epsilon,\textup{peak}}^*(\boldsymbol{s},\boldsymbol{\mathcal{M}})=\max_{\boldsymbol{d}\in \mathcal{D}_{\boldsymbol{s}}} R_{\epsilon}^*(\boldsymbol{d},\boldsymbol{\mathcal{M}}).$$ Note that $$\begin{aligned} R^*_{\textup{peak}}&=\sup_{\epsilon>0}\adjustlimits \limsup_{F\rightarrow+\infty}\min_{{\boldsymbol{\mathcal{M}}}} \max_{\boldsymbol{s}} R^*_{\epsilon,\textup{peak}}({\boldsymbol{s}}, \boldsymbol{\mathcal{M}}) \\ &\geq \sup_{\epsilon>0}\adjustlimits \limsup_{F\rightarrow+\infty}\max_{\boldsymbol{s}}\min_{{\boldsymbol{\mathcal{M}}}} R^*_{\epsilon,\textup{peak}}({\boldsymbol{s}}, \boldsymbol{\mathcal{M}}).\label{ineq:54} \end{aligned}$$ Hence, in order to lower bound $R^*_{\textup{peak}}$, it is sufficient to bound the minimum value of $R^*_{\epsilon,\textup{peak}}({\boldsymbol{s}}, \boldsymbol{\mathcal{M}})$ for each type $\mathcal{D}_{\boldsymbol{s}}$ individually. Using Lemma \[lemma:univ\], the following bound holds for each $\boldsymbol{s}\in\mathcal{S}$: $$\begin{aligned} \min_{{\boldsymbol{\mathcal{M}}}} R^*_{\epsilon,\textup{peak}}({\boldsymbol{s}}, \boldsymbol{\mathcal{M}})\geq& \min_{{\boldsymbol{\mathcal{M}}}} R_{\epsilon}^*({\boldsymbol{s}}, \boldsymbol{\mathcal{M}})\\\geq& \textup{Conv}\left( \frac{\binom{K}{t+1}-\binom{K-N_{\textup{e}}(\boldsymbol{s})}{t+1}}{\binom{K}{t}}\right)\nonumber\\&-\left(\frac{1}{F}+ N^2_{\textup{e}}(\boldsymbol{s})\epsilon \right)\label{ineq:typ-worst}.
\end{aligned}$$ Consequently, $$\begin{aligned} R^*_{\textup{peak}}\geq& \sup_{\epsilon>0}\adjustlimits \limsup_{F\rightarrow+\infty} \max_{\boldsymbol{s}}\textup{Conv}\left( \frac{\binom{K}{t+1}-\binom{K-N_{\textup{e}}(\boldsymbol{s})}{t+1}}{\binom{K}{t}}\right)\nonumber\\&-\left(\frac{1}{F}+ N^2_{\textup{e}}(\boldsymbol{s})\epsilon \right)\\ =&\textup{Conv}\left( \frac{\binom{K}{t+1}-\binom{K-\min\{N,K\}}{t+1}}{\binom{K}{t}}\right). \end{aligned}$$ \[remark:peakuniv\] Inequality (\[ineq:typ-worst\]) characterizes the minimum peak rate given a type $\mathcal{D}_{\boldsymbol{s}}$, if the prefetching $\boldsymbol{\mathcal{M}}$ can be designed based on $\boldsymbol{s}$. However, for (\[ineq:54\]) to be tight, the peak rate for each different type has to be minimized on the same prefetching. Surprisingly, such an optimal prefetching exists, an example being the symmetric batch prefetching, according to Section \[sec:opt\]. This indicates that the symmetric batch prefetching is also universally optimal for all types in terms of peak rates. Proof of Theorem \[thm:ave-dec\] {#app:decent-ave} ================================ To completely characterize $\mathcal{R}$, we propose decentralized caching schemes to achieve all points in $\mathcal{R}$. We also prove a matching information-theoretic outer bound of the achievable regions, which implies that none of the points outside $\mathcal{R}$ are achievable. The Optimal Decentralized Caching Scheme {#app:decent-opt} ---------------------------------------- To prove the achievability of $\mathcal{R}$, we need to provide an optimal decentralized prefetching scheme ${P}_{\mathcal{M};F}$, an optimal delivery scheme for every possible user demand $\boldsymbol{d}$ that achieves the corner point in $\mathcal{R}$, and a valid decoding algorithm for the users. 
The main idea of our proposed achievability scheme is to first design a decentralized prefetching scheme, such that we can view the resulting content delivery problem as a list of sub-problems that can be individually solved using the techniques we already developed for the centralized setting. Then we optimally solve this delivery problem by greedily applying our proposed centralized delivery and decoding scheme. We consider the following optimal prefetching scheme: all users cache $\frac{MF}{N}$ bits in each file uniformly and independently. This prefetching scheme was originally proposed in [@maddah-ali13]. For convenience, we refer to this prefetching scheme as *uniformly random prefetching scheme*. Given this prefetching scheme, each bit in the database is cached by a random subset of the $K$ users. During the delivery phase, we first greedily categorize all the bits based on the number of users that cache the bit, then within each category, we deliver the corresponding messages in an opportunistic way using the delivery scheme described in Section \[sec:opt\] for centralized caching. For any demand $\boldsymbol{d}$ where $K$ users are making requests, and any realization of the prefetching on these $K$ users, we divide the bits in the database into $K+1$ sets: For each $j\in\{0,1,...,K\}$, let $\mathcal{B}_j$ denote the bits that are cached by exactly $j$ users. To deliver the requested files to the $K$ users, it is sufficient to deliver all the corresponding bits in each $\mathcal{B}_j$ individually. Within each $\mathcal{B}_j$, first note that with high probability for large $F$, the number of bits that belong to each file is approximately $\binom{K}{j}(\frac{M}{N})^j(1-\frac{M}{N})^{K-j}F+o(F)$, which is the same across all files. 
Furthermore, for any subset $\mathcal{K}\subseteq\{1,...,K\}$ of size $j$, a total of $(\frac{M}{N})^j(1-\frac{M}{N})^{K-j}F+o(F)$ bits in file $i$ are exclusively cached by users in $\mathcal{K}$, which is $1/\binom{K}{j}$ fraction of the bits in $\mathcal{B}_j$ that belong to file $i$. This is effectively the symmetric batch prefetching, and hence we can directly apply the same delivery and decoding scheme to deliver all the requested bits within this subset. Recall that in the centralized setting, when each file has a size $F$ and each bit is cached by exactly $t$ users, our proposed delivery scheme achieves a communication load of $\frac{\binom{K}{t+1}-\binom{K-N_{\textup{e}}(\boldsymbol{d})}{t+1}}{\binom{K}{t}} F$. Then to deliver all requested bits within $\mathcal{B}_j$, where the equivalent file size approximately equals $\binom{K}{j}(\frac{M}{N})^j(1-\frac{M}{N})^{K-j}F$, we need a communication rate of $(\frac{M}{N})^j(1-\frac{M}{N})^{K-j} \left(\binom{K}{j+1}-\binom{K-N_{\textup{e}}(\boldsymbol{d})}{j+1}\right)$. Consequently, by applying the delivery scheme for all $j\in\{0,1,...,K\}$, we achieve a total communication rate of $$\begin{aligned} R_K=&\sum_{j=0}^{K}\left(\frac{M}{N}\right)^j\left(1-\frac{M}{N}\right)^{K-j} \nonumber\\&\ \ \ \ \ \cdot\left(\binom{K}{j+1}-\binom{K-N_{\textup{e}}(\boldsymbol{d})}{j+1}\right)\\ =&\frac{N-M}{M}\left(1-\left(1-\frac{M}{N}\right)^{N_{\textup{e}}(\boldsymbol{d})}\right) \end{aligned}$$ for any demand $\boldsymbol{d}$. Hence, for each $K$ we achieve an average rate of $\mathbb{E}[\frac{N-M}{M}(1-\left(1-\frac{M}{N}\right)^{N_{\textup{e}}(\boldsymbol{d})})]$, which dominates all points in $\mathcal{R}$. This provides a tight inner bound for Theorem \[thm:ave-dec\]. 
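The collapse of the per-category rates into the closed form above is a finite binomial identity; the following sketch (with assumed toy parameters) confirms it numerically:

```python
from math import comb

# Check (for assumed parameters N, M, K, Ne) that summing the
# per-category delivery rates over j = 0..K recovers the closed form
# (N-M)/M * (1 - (1-M/N)^Ne), with the convention C(n,k) = 0 for k > n.
N, M, K, Ne = 6, 2, 10, 5
p = M / N
total = sum(p**j * (1 - p)**(K - j)
            * (comb(K, j + 1) - comb(K - Ne, j + 1))
            for j in range(K + 1))
closed = (N - M) / M * (1 - (1 - p)**Ne)
assert abs(total - closed) < 1e-9
```

The agreement follows by substituting $i=j+1$ in both binomial sums and factoring out $\frac{1-p}{p}=\frac{N-M}{M}$.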
Converse {#converse} -------- To prove an outer bound of $\mathcal{R}$, i.e., bounding all possible rate vectors $\{R_K\}_{K\in\mathbb{N}}$ that can be achieved by a prefetching scheme, it is sufficient to bound each entry of the vector individually, by providing a lower bound of $R^*_K(P_{{\mathcal{M}};F})$ that holds for all prefetching schemes. To obtain such a lower bound, for each $K\in\mathbb{N}$ we divide the set of all possible demands into types, and derive the minimum average rate within each type separately. For any statistics $\boldsymbol{s}$, we let $R^*_{\epsilon,K}(\boldsymbol{s}, P_{{\mathcal{M}}})$ denote the average rate within type $\mathcal{D}_{\boldsymbol{s}}$. Rigorously, $$\begin{aligned} R^*_{\epsilon,K}(\boldsymbol{s}, P_{{\mathcal{M}}})&=\frac{1}{|\mathcal{D}_{\boldsymbol{s}}|}\sum_{\boldsymbol{d}\in \mathcal{D}_{\boldsymbol{s}}}R^*_{\epsilon, K}(\boldsymbol{d},P_{{\mathcal{M}}}). \end{aligned}$$ The minimum value of $R^*_{\epsilon,K}(\boldsymbol{s}, P_{{\mathcal{M}}})$ is lower bounded by the following lemma: \[lemma:univ-dec\] Consider a decentralized caching problem with $N$ files and a local cache size of $M$ files for each user. For any type $\mathcal{D}_{\boldsymbol{s}}$, where $K$ users are making requests, the minimum value of $R^*_{\epsilon, K}(\boldsymbol{s}, P_{{\mathcal{M}}})$ is lower bounded by $$\begin{aligned} \min_{P_{{\mathcal{M}}}} R^*_{\epsilon,K}(\boldsymbol{s}, P_{{\mathcal{M}}})\geq& \frac{N-M}{M}\left(1-\left(1-\frac{M}{N}\right)^{N_{\textup{e}}(\boldsymbol{s})}\right)\nonumber\\&-\left(\frac{1}{F}+N_{\textup{e}}^2 (\boldsymbol{s})\epsilon\right). \end{aligned}$$ As proved in Appendix \[app:decent-opt\], the rate $R^*_{\epsilon,K}(\boldsymbol{s}, P_{{\mathcal{M}}})$ for any statistics $\boldsymbol{s}$ and any $K$ can be simultaneously minimized using the uniformly random prefetching scheme.
This demonstrates that the uniformly random prefetching scheme is universally optimal for the decentralized caching problem in terms of average rates. To prove Lemma \[lemma:univ-dec\], we first consider a class of *generalized demands*, where not all users in the caching systems are required to request a file. We define generalized demand $\boldsymbol{d}=(d_1,...,d_K)\in \{0,1,...,N\}^K$, where a nonzero $d_k$ denotes the index of the file requested by $k$, while $d_k=0$ indicates that user $k$ is not making a request. We define statistics and their corresponding types in the same way, and let $R^*_{\epsilon,K}(\boldsymbol{s} ,\boldsymbol{\mathcal{M}})$ denote the centralized average rate on a generalized type $\mathcal{D}_{\boldsymbol{s}}$ given prefetching $\boldsymbol{\mathcal{M}}$. For a centralized caching problem, we can easily generalize Lemma \[lemma:univ\] to the following lemma for the generalized demands: \[lemma:univ-gen\] Consider a caching problem with $N$ files, $K$ users, and a local cache size of $M$ files for each user. For any generalized type $\mathcal{D}_{\boldsymbol{s}}$, the minimum value of $R^*_{\epsilon,K}(\boldsymbol{s}, \boldsymbol{\mathcal{M}})$ is lower bounded by $$\begin{aligned} \min_{\boldsymbol{\mathcal{M}}} R^*_{\epsilon,K}(\boldsymbol{s} ,\boldsymbol{\mathcal{M}})\geq& \textup{Conv}\left( \frac{\binom{K}{t+1}-\binom{K-N_{\textup{e}}(\boldsymbol{s})}{t+1}}{\binom{K}{t}}\right)\nonumber\\&-\left(\frac{1}{F}+N_{\textup{e}}^2 (\boldsymbol{s})\epsilon\right), \end{aligned}$$ where $\textup{Conv}(f(t))$ denotes the lower convex envelope of the following points: $\{(t,f(t))\ | \ t\in\{0,1,...,K\}\}$.
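To make the $\textup{Conv}(\cdot)$ operation concrete, the following sketch evaluates the lower convex envelope of the points $\{(t,f(t))\}$ at a (possibly fractional) memory point. The parameters are assumed for illustration, and the hull routine is a generic monotone-chain sweep, not code from the paper:

```python
from math import comb

# Evaluate Conv(f(t)), the lower convex envelope of {(t, f(t)) : t=0..K},
# at x = K*M/N, as used in the Jensen-inequality step of the converse.
def f(t, K, Ne):
    return (comb(K, t + 1) - comb(K - Ne, t + 1)) / comb(K, t)

def conv_at(points, x):
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    hull = []
    for pt in points:                      # points sorted by t
        while len(hull) >= 2 and cross(hull[-2], hull[-1], pt) <= 0:
            hull.pop()                     # keep only the lower hull
        hull.append(pt)
    for (x1, y1), (x2, y2) in zip(hull, hull[1:]):
        if x1 <= x <= x2:                  # linear interpolation
            return y1 + (y2 - y1) * (x - x1) / (x2 - x1)

K, N, M, Ne = 8, 4, 1, 4                   # assumed toy parameters
pts = [(t, f(t, K, Ne)) for t in range(K + 1)]
bound = conv_at(pts, K * M / N)            # envelope value at t = K*M/N
```

For these parameters the sequence $f(t)$ happens to be convex, so the envelope passes through every point and `bound` coincides with $f(KM/N)$ when $KM/N$ is an integer.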
The above lemma can be proved exactly the same way as we proved Lemma \[lemma:univ\], and the universal optimality of symmetric batch prefetching still holds for the generalized demands. For a decentralized caching problem, we can also generalize the definition of $R^*_{\epsilon,K}(\boldsymbol{s}, P_{{\mathcal{M}}})$ correspondingly. We can easily prove that, when a decentralized caching scheme is used, the expected value of $R^*_{\epsilon,K}(\boldsymbol{s}, \boldsymbol{\mathcal{M}})$ is no greater than $R^*_{\epsilon,K}(\boldsymbol{s}, P_{{\mathcal{M}}})$. Consequently, $$\begin{aligned} R^*_{\epsilon,K}(\boldsymbol{s}, P_{{\mathcal{M}}})\geq& \mathbb{E}_{\boldsymbol{\mathcal{M}}}[R^*_{\epsilon,K}(\boldsymbol{s}, \boldsymbol{\mathcal{M}})]\\ \geq& \textup{Conv}\left( \frac{\binom{K}{t+1}-\binom{K-N_{\textup{e}}(\boldsymbol{s})}{t+1}}{\binom{K}{t}}\right)\nonumber\\&-\left(\frac{1}{F}+N_{\textup{e}}^2 (\boldsymbol{s})\epsilon\right), \end{aligned}$$ for any generalized type $\mathcal{D}_{\boldsymbol{s}}$ and for any $P_{{\mathcal{M}}}$. Now we prove that the value $R^*_{\epsilon,K}(\boldsymbol{s}, P_{{\mathcal{M}}})$ is independent of the parameter $K$ given $\boldsymbol{s}$ and $ P_{{\mathcal{M}}}$: Consider a generalized statistic $\boldsymbol{s}$. Let $K_{\boldsymbol{s}}=\sum\limits_{i=1}^{N} s_i$, which equals the number of active users for demands in $\mathcal{D}_{\boldsymbol{s}}$. For any caching system with $K>K_{\boldsymbol{s}}$ users, and for any subset $\mathcal{K}$ of $K_{\boldsymbol{s}}$ users, let $\mathcal{D}_{\mathcal{K}}$ denote the set of demands in $\mathcal{D}_{\boldsymbol{s}}$ where only users in $\mathcal{K}$ are making requests. Note that $\mathcal{D}_{\boldsymbol{s}}$ equals the union of disjoint sets $\mathcal{D}_{\mathcal{K}}$ for all subsets $\mathcal{K}$ of size $K_{\boldsymbol{s}}$.
Thus, we have $$\begin{aligned} R^*_{\epsilon,K}(\boldsymbol{s}, P_{{\mathcal{M}}})&=\frac{1}{|\mathcal{D}_{\boldsymbol{s}}|} \sum_{\boldsymbol{d}\in\mathcal{D}_{\boldsymbol{s}}} R^*_{\epsilon,K}(\boldsymbol{d}, P_{{\mathcal{M}}}) \\ &=\frac{1}{|\mathcal{D}_{\boldsymbol{s}}|} \sum_{\mathcal{K}:|\mathcal{K}|=K_{\boldsymbol{s}}} \sum_{\boldsymbol{d}\in\mathcal{D}_{\mathcal{K}}} R^*_{\epsilon,K}(\boldsymbol{d}, P_{{\mathcal{M}}})\\ &=\frac{1}{|\mathcal{D}_{\boldsymbol{s}}|} \sum_{\mathcal{K}:|\mathcal{K}|=K_{\boldsymbol{s}}} |\mathcal{D}_{\mathcal{K}}| R^*_{\epsilon,K_{\boldsymbol{s}}}(\boldsymbol{s}, P_{{\mathcal{M}}})\\ &=R^*_{\epsilon,K_{\boldsymbol{s}}}(\boldsymbol{s}, P_{{\mathcal{M}}}). \end{aligned}$$ Consequently, $$\begin{aligned} R^*_{\epsilon,K_{\boldsymbol{s}}}(\boldsymbol{s}, P_{{\mathcal{M}}})=&\lim_{K\rightarrow+\infty} R^*_{\epsilon,K}(\boldsymbol{s}, P_{{\mathcal{M}}})\\ \geq& \lim_{K\rightarrow+\infty} \textup{Conv}\left( \frac{\binom{K}{t+1}-\binom{K-N_{\textup{e}}(\boldsymbol{s})}{t+1}}{\binom{K}{t}}\right)\nonumber\\&-\left(\frac{1}{F}+N_{\textup{e}}^2 (\boldsymbol{s})\epsilon\right)\\ =&\frac{N-M}{M}\left(1-\left(1-\frac{M}{N}\right)^{N_{\textup{e}}(\boldsymbol{s})}\right)\nonumber\\&-\left(\frac{1}{F}+N_{\textup{e}}^2 (\boldsymbol{s})\epsilon\right). \end{aligned}$$ Because the above lower bound is independent of the prefetching distribution $P_{\mathcal{\boldsymbol{M}}}$, the minimum value of $R^*_{\epsilon,K_{\boldsymbol{s}}}(\boldsymbol{s}, P_{{\mathcal{M}}})$ over all possible prefetchings is also bounded by the same formula. This completes the proof of Lemma \[lemma:univ-dec\].
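The limit taken in the last display can also be observed numerically. This sketch (with assumed $N$, $M$, $N_{\textup{e}}$, and $K$ chosen as multiples of $N/M$ so that $t=KM/N$ is an integer) shows the gap to the limiting expression $\frac{N-M}{M}(1-(1-\frac{M}{N})^{N_{\textup{e}}})$ shrinking as $K$ grows:

```python
from math import comb
from fractions import Fraction

# As K grows with t = K*M/N, (C(K,t+1) - C(K-Ne,t+1)) / C(K,t) should
# approach (N-M)/M * (1 - (1-M/N)^Ne). Fractions avoid float overflow
# when converting the huge binomial coefficients.
N, M, Ne = 4, 1, 4                         # assumed toy parameters
limit = (N - M) / M * (1 - (1 - M / N)**Ne)
gaps = []
for K in (8, 80, 800):
    t = K * M // N
    c = Fraction(comb(K, t + 1) - comb(K - Ne, t + 1), comb(K, t))
    gaps.append(abs(float(c) - limit))
assert gaps[0] > gaps[1] > gaps[2]         # the gap shrinks as K grows
```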
From Lemma \[lemma:univ-dec\], the following bound holds by definition $$\begin{aligned} R^*_K(P_{{\mathcal{M}};F})&=\sup_{\epsilon>0} \limsup_{F'\rightarrow+\infty}\mathbb{E}_{\boldsymbol{s}} [R^*_{\epsilon,K}(\boldsymbol{s}, P_{{\mathcal{M}};F}(F=F'))]\\&\geq \mathbb{E}_{\boldsymbol{d}} \left[\frac{N-M}{M}\left(1-\left(1-\frac{M}{N}\right)^{N_{\textup{e}}(\boldsymbol{d})}\right)\right] \end{aligned}$$ for any $K\in \mathbb{N}$ and for any prefetching scheme $P_{{\mathcal{M}};F}$. Consequently, any vector $\{R_K\}_{K\in\mathbb{N}}$ in $\mathcal{R}$ satisfies $$\begin{aligned} R_K&\geq \min_{P_{{\mathcal{M}};F}}R^*_K(P_{{\mathcal{M}};F})\\&\geq \mathbb{E}_{\boldsymbol{d}} \left[\frac{N-M}{M}\left(1-\left(1-\frac{M}{N}\right)^{N_{\textup{e}}(\boldsymbol{d})}\right)\right], \end{aligned}$$ for any $K\in \mathbb{N}$. Hence, $$\begin{aligned} \mathcal{R}\subseteq&\left\{\{R_K\}_{K\in\mathbb{N}}\ \vphantom{\left.R_K\geq \mathbb{E}_{\boldsymbol{d}}\left[\frac{N-M}{M} \left(1-\left(\frac{N-M}{N}\right)^{N_{\textup{e}}(\boldsymbol{d})}\right)\right]\right\}} \right|\nonumber\\ & \ \ \left.R_K\geq \mathbb{E}_{\boldsymbol{d}}\left[\frac{N-M}{M} \left(1-\left(\frac{N-M}{N}\right)^{N_{\textup{e}}(\boldsymbol{d})}\right)\right]\right\}. \end{aligned}$$ Proof of Corollary \[cor:dec\] {#app:decent-worst} ============================== It is easy to show that all points in $\mathcal{R}_{\textup{peak}}$ can be achieved using the decentralized caching scheme introduced in Appendix \[app:decent-opt\]. Hence, we focus on proving the optimality of the proposed decentralized caching scheme. Similar to the average rate case, we prove an outer bound of $\mathcal{R}_{\textup{peak}}$ by bounding $R^*_{K,\textup{peak}}(P_{{\mathcal{M}};F})$ for each $K\in\mathbb{N}$ individually. To do so, we divide the set of all possible demands into types, and derive the minimum average rate within each type separately. Recall the definitions of statistics and types (see Section \[sec:conv\]).
Given a caching system with $N$ files, $K$ users, a prefetching distribution $P_{{\mathcal{M}}}$, and a statistic $\boldsymbol{s}$, we define the peak rate within type $\mathcal{D}_{\boldsymbol{s}}$, denoted by $R_{\epsilon, K,\textup{peak}}^*(\boldsymbol{s},P_{{\mathcal{M}}})$, as $$R_{\epsilon,K,\textup{peak}}^*(\boldsymbol{s},P_{{\mathcal{M}}})=\max_{\boldsymbol{d}\in \mathcal{D}_{\boldsymbol{s}}} R^*_{\epsilon,K}(\boldsymbol{d},P_{{\mathcal{M}}}).$$ Note that any point $\{R_K\}_{K\in\mathbb{N}}$ in $\mathcal{R}_{\textup{peak}}$ satisfies $$\begin{aligned} R_K&\geq \inf_{P_{{\mathcal{M}};F}} R^*_{K,\textup{peak}}(P_{{\mathcal{M}};F})\\&= \inf_{P_{{\mathcal{M}};F}} \sup_{\epsilon>0}\adjustlimits \limsup_{F'\rightarrow+\infty}\max_{\boldsymbol{s}\in \mathcal{S}} [R^*_{\epsilon,K,{\textup{peak}}}(\boldsymbol{s}, P_{{\mathcal{M};F}}(F=F'))] \end{aligned}$$ for any $K\in \mathbb{N}$. We have the following from the min-max inequality $$\begin{aligned} R_K&\geq \sup_{\epsilon>0}\adjustlimits \limsup_{F\rightarrow+\infty}\max_{\boldsymbol{s}\in \mathcal{S}} [\min_{P_{{\mathcal{M}}}} R^*_{\epsilon,K,{\textup{peak}}}(\boldsymbol{s}, P_{{\mathcal{M}}})].\label{ineq:78} \end{aligned}$$ Hence, in order to outer bound $\mathcal{R}_{\textup{peak}}$, it is sufficient to bound the minimum value of $R_{\epsilon,K,\textup{peak}}^*(\boldsymbol{s},P_{{\mathcal{M}}})$ for each type $\mathcal{D}_{\boldsymbol{s}}$ individually.
Using Lemma \[lemma:univ-dec\], the following bound holds for each $\boldsymbol{s}\in\mathcal{S}$: $$\begin{aligned} \min_{P_{{\mathcal{M}}}} R_{\epsilon,K,\textup{peak}}^*(\boldsymbol{s},P_{{\mathcal{M}}})\geq& \min_{P_{{\mathcal{M}}}} R_{\epsilon,K}^*(\boldsymbol{s},P_{{\mathcal{M}}})\\\geq& \frac{N-M}{M}\left(1-\left(1-\frac{M}{N}\right)^{N_{\textup{e}}(\boldsymbol{s})}\right) \nonumber\\&-\left(\frac{1}{F}+N_{\textup{e}}^2 (\boldsymbol{s})\epsilon\right) \label{ineq:typ-dec-worst}. \end{aligned}$$ Hence for any $\{R_K\}_{K\in\mathbb{N}}$, $$\begin{aligned} R_K\geq& \sup_{\epsilon>0}\adjustlimits \limsup_{F\rightarrow+\infty}\max_{\boldsymbol{s}}\left[ \frac{N-M}{M}\left(1-\left(1-\frac{M}{N}\right)^{N_{\textup{e}}(\boldsymbol{s})}\right) \right.\nonumber\\&-\left.\left(\frac{1}{F}+N_{\textup{e}}^2 (\boldsymbol{s})\epsilon\right) \vphantom{\left(1-\frac{M}{N}\right)^{N_{\textup{e}}(\boldsymbol{s})}}\right]\\=&\frac{N-M}{M}\left(1-\left(1-\frac{M}{N}\right)^{\min\{N,K\}}\right). \end{aligned}$$ Consequently, $$\begin{aligned} \mathcal{R}_{\textup{peak}}\subseteq&\left\{\vphantom{\left(1-\frac{M}{N}\right)^{N_{\textup{e}}(\boldsymbol{s})}}\{R_K\}_{K\in\mathbb{N}}\ \right|\nonumber\\ &\ \ \left. R_K\geq \frac{N-M}{M} \left(1-\left(\frac{N-M}{N}\right)^{\min\{N,K\}}\right)\right\}. \end{aligned}$$ According to the above discussion, the rate $R^*_{\epsilon,K,\textup{peak}}(\boldsymbol{s}, P_{{\mathcal{M}}})$ for any statistics $\boldsymbol{s}$ and any $K$ can be simultaneously minimized using the uniformly random prefetching scheme. This indicates that the uniformly random prefetching scheme is universally optimal for all types in terms of peak rates. Biographies {#biographies .unnumbered} =========== [Qian Yu]{} (S’16) is pursuing his Ph.D. degree in Electrical Engineering at University of Southern California (USC), Viterbi School of Engineering. He received his M.Eng. degree in Electrical Engineering and B.S. degree in EECS and Physics, both from Massachusetts Institute of Technology (MIT).
His interests span information theory, distributed computing, and many other math-related problems. Qian received the Jack Keil Wolf ISIT Student Paper Award in 2017. He was also a Qualcomm Innovation Fellowship finalist in 2017, and received the Annenberg Graduate Fellowship in 2015. [Mohammad Ali Maddah-Ali]{} (S’03-M’08) received the B.Sc. degree from Isfahan University of Technology, and the M.A.Sc. degree from the University of Tehran, both in electrical engineering. From 2002 to 2007, he was with the Coding and Signal Transmission Laboratory (CST Lab), Department of Electrical and Computer Engineering, University of Waterloo, Canada, working toward the Ph.D. degree. From 2007 to 2008, he worked at the Wireless Technology Laboratories, Nortel Networks, Ottawa, ON, Canada. From 2008 to 2010, he was a post-doctoral fellow in the Department of Electrical Engineering and Computer Sciences at the University of California at Berkeley. Then, he joined Bell Labs, Holmdel, NJ, as a communication research scientist. Recently, he started working at Sharif University of Technology as a faculty member. Dr. Maddah-Ali is a recipient of the NSERC Postdoctoral Fellowship in 2007, a best paper award from the IEEE International Conference on Communications (ICC) in 2014, the IEEE Communications Society and IEEE Information Theory Society Joint Paper Award in 2015, and the IEEE Information Theory Society Joint Paper Award in 2016. [A. Salman Avestimehr]{} (S’03-M’08-SM’17) is an Associate Professor at the Electrical Engineering Department of the University of Southern California. He received his Ph.D. in 2008 and M.S. degree in 2005 in Electrical Engineering and Computer Science, both from the University of California, Berkeley. Prior to that, he obtained his B.S. in Electrical Engineering from Sharif University of Technology in 2003. His research interests include information theory, the theory of communications, and their applications to distributed computing and data analytics. Dr.
Avestimehr has received a number of awards, including the Communications Society and Information Theory Society Joint Paper Award, the Presidential Early Career Award for Scientists and Engineers (PECASE) for “pushing the frontiers of information theory through its extension to complex wireless information networks”, the Young Investigator Program (YIP) award from the U. S. Air Force Office of Scientific Research, the National Science Foundation CAREER award, and the David J. Sakrison Memorial Prize. He is currently an Associate Editor for the IEEE Transactions on Information Theory. [^1]: Manuscript received September 26, 2016; revised August 09, 2017; accepted November 29, 2017. A shorter version of this paper was presented at ISIT, 2017 [@yu2016exactisit]. [^2]: Q. Yu and A.S. Avestimehr are with the Department of Electrical Engineering, University of Southern California, Los Angeles, CA, 90089, USA (e-mail: qyu880@usc.edu; avestimehr@ee.usc.edu). [^3]: M. A. Maddah-Ali is with the Department of Electrical Engineering, Sharif University of Technology, Tehran, 11365, Iran (e-mail: maddah\_ali@sharif.edu). [^4]: Communicated by P. Mitran, Associate Editor for Shannon Theory. [^5]: This work is in part supported by NSF grants CCF-1408639, NETS-1419632, and ONR award N000141612189. [^6]: Copyright (c) 2017 IEEE. Personal use of this material is permitted. However, permission to use this material for any other purposes must be obtained from the IEEE by sending a request to pubs-permissions@ieee.org. [^7]: Although we only focus on binary files, the same techniques developed in this paper can also be used for cases of q-ary files and files using a mixture of different alphabets, to prove that the same rate-memory trade-off holds. [^8]: In this paper we define $\binom{n}{k}=0$ when $k>n$. [^9]: Rigorously, we prove equation (\[eq:single\]) for $F|\binom{K}{t}$. In other cases, the resulting extra communication overhead is negligible for large $F$.
[^10]: The notion of type was also recently introduced in [@chao2016isit] in order to simplify the LP for finding better converse bounds for the coded caching problem. [^11]: As noted in Remark \[remark:convexity\], the rate $R^*$ stated in equation (\[eq:unif\_pre\]) for $t\in\{0,1,...,K\}$ is convex, so it is sufficient to prove $R^*$ is lower bounded by the convex envelope of its values at $t\in\{0,1,...,K\}$. [^12]: If $M=0$, $\mathcal{R}=\{\{R_K\}_{K\in\mathbb{N}}\ |\ R_K\geq \mathbb{E}_{\boldsymbol{d}}[{N_{\textup{e}}(\boldsymbol{d})}]\}$. [^13]: If $M=0$, $\mathcal{R}=\{\{R_K\}_{K\in\mathbb{N}}\ |\ R_K\geq \min\{N,K\}\}$. [^14]: Recall that we define $\binom{n}{k}=0$ when $k>n$.
[**An Ignored Mechanism**]{} [**for the Longitudinal Recoil Force in Railguns and**]{} [**Revitalization of the Riemann Force Law**]{} Department of Electrical Engineering National Tsinghua University Hsinchu, Taiwan **Abstract** – The electric induction force due to a time-varying current is used to account for the longitudinal recoil force exerted on the rails of railgun accelerators. As observed in the experiments, this induction force is longitudinal to the rails and can be the strongest at the heads of the rails. Besides, for the force due to a closed circuit, it is shown that the Riemann force law, which is based on a potential energy depending on a relative speed and is in accord with Newton’s law of action and reaction, can reduce to the Lorentz force law. PACS numbers: 03.50.De, 41.20.-q [**1. Introduction**]{}\ It is known that a railgun utilizes the magnetic force to accelerate an armature to move along two parallel rails on which it is placed. Further, it has been reported that a recoil force, which is longitudinal to the rails and is exerted on them, was observed during the acceleration of the armature [@Graneau87]. Based on the Biot-Savart (Grassmann) force law, the magnetic force exerted on a wire segment of directed length $d\mathbf{l}_{1}$ and carrying a current $I_{1}$ due to another current element $I_{2}d\mathbf{l}_{2}$ is given by $$\mathbf{F}=-\frac{\mu _{0}}{4\pi }I_{1}I_{2}\frac{1}{R^{2}}\left[ \hat{R}(d\mathbf{l}_{1}\cdot d\mathbf{l}_{2})-(d\mathbf{l}_{1}\cdot \hat{R})d\mathbf{l}_{2}\right] ,\eqno (1)$$ where $\hat{R}$ is a unit vector pointing from element 2 to element 1 and $R$ is the separation distance between them. By using a vector identity it is readily seen that the magnetic force is always perpendicular to the wire segment carrying the current $I_{1}$. Thus the longitudinal force cannot be accounted for by the Biot-Savart force law.
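The perpendicularity just noted follows algebraically from Eq. (1), since $\mathbf{F}\cdot d\mathbf{l}_{1}\propto (\hat{R}\cdot d\mathbf{l}_{1})(d\mathbf{l}_{1}\cdot d\mathbf{l}_{2})-(d\mathbf{l}_{1}\cdot \hat{R})(d\mathbf{l}_{2}\cdot d\mathbf{l}_{1})=0$. A minimal numerical sketch (not from the paper; the element geometry is arbitrary) confirms it:

```python
import math

# Sketch: the Grassmann (Biot-Savart) element-element force of Eq. (1)
# has no component along dl1, so it cannot produce a longitudinal force
# on the segment carrying I1.
MU0_OVER_4PI = 1e-7  # mu_0 / (4 pi) in SI units

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def grassmann_force(I1, dl1, I2, dl2, Rvec):
    """Force on element I1*dl1 due to I2*dl2; Rvec points from 2 to 1."""
    R = math.sqrt(dot(Rvec, Rvec))
    Rhat = tuple(x / R for x in Rvec)
    c1 = dot(dl1, dl2)   # coefficient of Rhat in Eq. (1)
    c2 = dot(dl1, Rhat)  # coefficient of dl2 in Eq. (1)
    k = -MU0_OVER_4PI * I1 * I2 / R**2
    return tuple(k * (c1 * Rhat[i] - c2 * dl2[i]) for i in range(3))

dl1 = (1e-3, 0.0, 0.0)
dl2 = (0.5e-3, 1e-3, 0.0)
F = grassmann_force(10.0, dl1, 10.0, dl2, (0.5, 0.2, 0.1))
print(dot(F, dl1))  # longitudinal component: zero up to rounding
```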
Alternatively, in some experiments the Ampère force law $$\mathbf{F}=-\frac{\mu _{0}}{4\pi }I_{1}I_{2}\frac{\hat{R}}{R^{2}}\left[ 2(d\mathbf{l}_{1}\cdot d\mathbf{l}_{2})-3(d\mathbf{l}_{1}\cdot \hat{R})(d\mathbf{l}_{2}\cdot \hat{R})\right] \eqno (2)$$ is applied to account for this longitudinal recoil force [@Graneau87], though this force law is not well accepted. From this law it seems that the longitudinal force can be expected. However, it can be shown that the force predicted from the Ampère law is identical to the one from the Biot-Savart law, when the force is due to a closed circuit with uniform current as it is ordinarily. Such an identity has also been proved by two elegant but similar approaches by using vector identities [@Jolly; @Ternan], where the current is given by a volume density as it is actually and the singularity problem which occurs when the distance $R$ becomes zero for the self-action term is then avoided. In these derivations the magnetostatic condition, under which the divergence of the current density is zero, is assumed. A closed circuit with uniform current is a common case of this condition. Some specific analytical or numerical integrations with volume or even surface current densities [@Moyssides; @Assis96] also support the identity. Thereby, without doubt, the Ampère law is identical to the Biot-Savart law for the force due to closed circuits and hence the longitudinal recoil force can be accounted for by neither of them. In spite of these theoretical arguments, there remains controversy over the experimental observations of the railgun longitudinal force and the experimental demonstrations for the validity of the force laws \[6–11\]. In this investigation, it is pointed out that the railgun longitudinal force can be accounted for by the electric induction force which as well as the Biot-Savart magnetic force is incorporated in the Lorentz force law.
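The identity between (1) and (2) for a closed circuit with uniform current can also be checked numerically. The following sketch (not from the paper; the loop and element geometry are arbitrary) integrates both element-element laws over a discretized circular loop and compares the total force on an external test element:

```python
import math

# Sketch: for a CLOSED loop with uniform current, the total force on an
# external current element from the Ampere law (2) coincides with that
# from the Biot-Savart/Grassmann law (1).
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def loop_force(law, dl1, r1, n=4000):
    """Total force on element dl1 at r1 from a unit circular loop (I1=I2=1)."""
    total = [0.0, 0.0, 0.0]
    for j in range(n):
        th = 2 * math.pi * (j + 0.5) / n  # midpoint discretization
        r2 = (math.cos(th), math.sin(th), 0.0)
        dl2 = tuple(2 * math.pi / n * c for c in (-math.sin(th), math.cos(th), 0.0))
        Rv = tuple(a - b for a, b in zip(r1, r2))  # points from 2 to 1
        R = math.sqrt(dot(Rv, Rv))
        Rh = tuple(c / R for c in Rv)
        k = -1e-7 / R**2  # -mu0/(4 pi R^2), unit currents
        if law == "grassmann":
            f = [k * (dot(dl1, dl2) * Rh[i] - dot(dl1, Rh) * dl2[i]) for i in range(3)]
        else:  # Ampere
            c = 2 * dot(dl1, dl2) - 3 * dot(dl1, Rh) * dot(dl2, Rh)
            f = [k * c * Rh[i] for i in range(3)]
        total = [t + fi for t, fi in zip(total, f)]
    return total

dl1, r1 = (1e-3, 2e-3, 0.5e-3), (2.0, 0.3, 0.4)
Fg = loop_force("grassmann", dl1, r1)
Fa = loop_force("ampere", dl1, r1)
print(Fg, Fa)  # the two totals agree to discretization accuracy
```

The pairwise forces differ, but their difference is a perfect differential around the closed loop, so the integrated totals agree, consistent with the proofs cited above.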
This induction force is due to a time-varying current and its direction is longitudinal to the current. This force is of the same order of magnitude as the magnetic force, but it appears to be ignored in the literature dealing with railguns. As to the Ampère force law, it has an appealing feature that it is obviously in accord with Newton’s third law of motion. This is a consequence of the situation that the Weber force law and hence the Ampère force law can be derived from a potential energy of which the involved velocity is a relative velocity between two associated charged particles. In section 5 it is shown that the Riemann force law, which is derived from a potential energy where the involved velocity is also relative, can reduce to the Lorentz force law. Thus the longitudinal rail recoil force can be accounted for by a force law which is in accord both with the nowadays standard theory and with Newton’s law of action and reaction. [**2. Electric Induction Force in Railguns**]{}\ It is well known that in the presence of electric and magnetic fields, the electromagnetic force exerted on a particle of charge $q$ and velocity $\mathbf{v}$ is given by the Lorentz force law $$\mathbf{F}=q\left( \mathbf{E}+\mathbf{v}\times \mathbf{B}\right) .\eqno (3)$$ This force law and Maxwell’s equations form the fundamental equations adopted by Lorentz in the early development of electromagnetics. The Lorentz force law can be given directly in terms of the scalar and the vector potential originating from the charge and the current density, respectively. That is, $$\mathbf{F}=q\left( -\nabla \Phi -\frac{\partial \mathbf{A}}{\partial t}+\mathbf{v}\times \nabla \times \mathbf{A}\right) ,\eqno (4)$$ where $\Phi $ is the electric scalar potential and $\mathbf{A}$ is the magnetic vector potential.
The term associated with the gradient of the scalar potential, with the time derivative of the vector potential, and the one with the particle velocity are known as the electrostatic force, the electric induction force, and the magnetic force, respectively. Quantitatively, the scalar and the vector potential are given explicitly in terms of the charge density $\rho $ and the current density $\mathbf{J}$ respectively by the volume integrals $$\Phi (\mathbf{r},t)=\frac{1}{4\pi \epsilon _{0}}\int \frac{\rho (\mathbf{r}^{\prime },t)}{R}dv^{\prime }\eqno (5)$$ and $$\mathbf{A}(\mathbf{r},t)=\frac{\mu _{0}}{4\pi }\int \frac{\mathbf{J}(\mathbf{r}^{\prime },t)}{R}dv^{\prime },\eqno (6)$$ where $\mu _{0}\epsilon _{0}=1/c^{2}$, $R=|\mathbf{r}-\mathbf{r}^{\prime }|$, and the time retardation $R/c$ from the source point $\mathbf{r}^{\prime }$ to the field point $\mathbf{r}$ is neglected. It is noted that compared to the electrostatic force due to the scalar potential, both the electric induction force and the magnetic force due to the vector potential are of the second order of normalized speed with respect to $c$. In railgun accelerators, the current $I$ flowing on the loop formed by the rails, the armature, and the breech generates a magnetic vector potential $\mathbf{A}$ and a magnetic field $\mathbf{B}$. Then the current-carrying armature experiences a magnetic force, which tends to accelerate the armature to move along the rails. Correspondingly, there is another magnetic force exerted on the breech as a recoil force. Meanwhile, the motion of the armature results in another magnetic force on the armature itself. This force is along the armature and then will counteract the electrostatic force which in turn is established by an external power supply to support the current $I$. The current depends on the resultant force and hence on the speed of the armature. If the applied voltage is fixed, the current and hence the magnetic vector potential will decrease.
According to the Lorentz force law, a time-varying vector potential will generate an electric induction force. The electric induction force exerted on the ions of a straight metal wire carrying a current decreasing with time is parallel to the current. Thus the net induction force exerted on each rail of a railgun will have a major component longitudinal to the rails. This force is not expected to depend significantly on the location along each rail, while the forces exerted on the respective rails are in opposite directions. As the electric induction force is proportional to the time rate of change of the current $I$, it depends on the acceleration of the armature. Another effect of the motion of the armature is to constantly introduce new current elements located on the rails just behind the armature, where the current changes abruptly from zero to $I$, as depicted in Fig. 1. Accordingly, the magnetic vector potential has a tendency to increase with time. (On the other hand, the current $I$ itself and hence the vector potential tend to decrease as discussed previously.) This increment of the vector potential is longitudinal to the rails and hence another electric induction force longitudinal to them is induced. The vector potential due to the new current elements is given by superposition. As the currents flowing on the two rail segments of length $dx$ are in opposite directions, the increment is given quantitatively by the difference $$dA=\frac{\mu _{0}}{4\pi }\left( \frac{1}{x}-\frac{1}{\sqrt{x^{2}+s^{2}}}\right) Idx,\eqno (7)$$ where $x$ is the distance of the observation position on one rail from the moving armature and $s$ is the separation distance between the two rails.
In the preceding formula the cross section of the rail is supposed to be vanishing; otherwise, the potential should be evaluated by a surface integral over the cross section to get a more accurate result for a small $x$ and to avoid the singularity for a vanishing $x$. The length $dx$ introduced during the movement of the armature over a short time interval $dt$ is simply given by $vdt$, where $v$ is the speed of the armature with respect to the rails. Thus the corresponding induction force exerted on an ion of the rail is given by $$F=-q\frac{dA}{dt}=-q\frac{\mu _{0}}{4\pi }\left( \frac{1}{x}-\frac{1}{\sqrt{x^{2}+s^{2}}}\right) Iv,\eqno (8)$$ where the force is along the rail and $q$ is the charge of the ion. It is noted that the force is proportional to the speed of the armature and the current. These dependences are similar to those for the magnetic force. Thus the induction force is of the same order as the magnetic force in magnitude. Obviously, this electric induction force is the strongest at the instantaneous heads of the rails. This situation agrees with the experimental observation that the railheads were distorted significantly after the launch of the armature [@Graneau87]. Thus, in railgun accelerators, there are at least two electric induction forces which are longitudinal to the rails and depend both on the speed and on the acceleration of the armature. This force can also depend on the location of the armature along the rails, as it determines the perimeter and the resistance of the loop. Obviously, the electric induction force vanishes for a substantially stationary armature, which is in agreement with some similar experiments where it is found that the measured force is identical to the calculated magnetic force [@Peoglos; @Cavalleri98]. As the aforementioned induction forces in railguns are in opposite directions, the resultant induction force exerted on the ions of one rail can be parallel or antiparallel to the direction of the current.
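The claim that the force of Eq. (8) peaks at the railheads can be seen directly from its geometric factor $f(x)=1/x-1/\sqrt{x^{2}+s^{2}}$, which is strictly decreasing in $x$ (since $x/(x^{2}+s^{2})^{3/2}<1/x^{2}$). A short numerical sketch (illustrative only; the rail separation value is arbitrary) confirms the profile:

```python
import math

# Sketch: the x-dependent factor of Eq. (8),
#   f(x) = 1/x - 1/sqrt(x^2 + s^2),
# decays monotonically with the distance x from the armature, so the
# induction force is strongest at the instantaneous railheads.
def induction_profile(x: float, s: float) -> float:
    return 1.0 / x - 1.0 / math.sqrt(x * x + s * s)

s = 0.05  # rail separation (arbitrary illustrative value)
samples = [induction_profile(x, s) for x in (0.01, 0.05, 0.2, 1.0)]
print(samples)  # strictly decreasing away from the armature
```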
In either case, the induction forces on the respective rails are different in direction, if a direct current is used instead of an alternating current. This situation seems not yet to have been observed experimentally and deserves further investigation. Anyway, the electric induction force should not be ignored in analyzing the longitudinal recoil force in railguns. Another mechanism for the recoil force may be the electrostatic force due to internal sources, which is also ignored in the literature. The electrostatic force is due to charges, stationary or moving, and can be much stronger than the magnetic and induction forces by a factor like $(v/c)^{-2}$. In the previous discussion of the induction force and the magnetic force, electrical neutralization is assumed tacitly. If the neutralization is not complete, a net electrostatic force will emerge and can dominate over the other forces. According to the continuity equation, electric charges tend to accumulate at the location where the current is not uniform, such as the junctions between the rails and the armature and the interface between two conductors of different conductivities. The electrostatic force may be used to account for the experiment of the repulsion between a suspended $\pi $-shaped aluminum wire and the current-supplying wires, where the ends of the wires are connected to mercury troughs. In this experiment it was observed that the direction of the repulsion depends on the direction of the current-supplying wires [@Pappa83]. Further, the wire fragmentation, where a metal wire was observed to break into several segments after a high current passed through it [@Graneau87b], could be ascribed to a complicated process involving a strong electrostatic force. However, quantitative discussions of these electrostatic forces are difficult. [**3. 
Derivation of Lorentz Force Law**]{}\ In classical mechanics the force exerted on a particle due to a potential energy $U$ depending on the particle velocity $\mathbf{v}$ is given by Lagrange’s equation $$\mathbf{F}=-\nabla U+\sum\limits_{i}\hat{i}\frac{d}{dt}\left( \frac{\partial U}{\partial v_{i}}\right) ,\eqno (9)$$ where $v_{i}=\mathbf{v\cdot }\hat{i}$, $\hat{i}$ is a unit vector, and the index $i=x,y,z$. It is known that the Lorentz force law (4) can be derived from Lagrange’s equation by adopting the velocity-dependent potential energy $U$ which in turn incorporates the scalar potential $\Phi $ and the vector potential $\mathbf{A}$. That is, $$U=q\Phi -q\mathbf{v\cdot A}.\eqno (10)$$ This approach was pioneered by Clausius in 1877 [@ORahilly; @Whittaker]. In the derivation the expansion $d\mathbf{A}/dt=\partial \mathbf{A}/\partial t+(\mathbf{v\cdot }\nabla )\mathbf{A}$ and the identity $\nabla (\mathbf{v\cdot A})-(\mathbf{v\cdot }\nabla )\mathbf{A}=\mathbf{v}\times \nabla \times \mathbf{A}$ have been used. It is seen that the electric induction force is similar to the magnetic induction force in their physical origin, where the latter is associated with the term $(\mathbf{v\cdot }\nabla )\mathbf{A}$ and is an ingredient of the magnetic force. In the preceding potential energy $U$, the velocity $\mathbf{v}$ and the velocity of the mobile charged particles involved in the potential $\mathbf{A}$ are not relative. Thus the potential energy and hence the derived force are not frame-invariant under Galilean transformations. Furthermore, the derived force between two moving charged particles is not in accord with Newton’s law of action and reaction. On the other hand, it is known that the Lorentz force law is invariant under the Lorentz transformation. [**4. Weber Force Law and Ampère Force Law**]{}\ In as early as 1846, Weber presented a second-order generalization of Coulomb’s law for electrostatic force.
The Weber force law can be derived from a velocity-dependent potential energy which, for the force exerted on a particle of charge $q_{1}$ and velocity $\mathbf{v}_{1}$ due to another particle of charge $q_{2}$ and velocity $\mathbf{v}_{2}$, is given by [@ORahilly; @Whittaker] $$U=\frac{q_{1}q_{2}}{4\pi \epsilon _{0}}\frac{1}{R}\left( 1+\frac{u_{12}^{2}}{2c^{2}}\right) ,\eqno (11)$$ where $R$ is the relative distance between the two charged particles, $u_{12}=(\mathbf{v}_{1}-\mathbf{v}_{2})\mathbf{\cdot }\hat{R}$ is the radial relative speed between them, and $\hat{R}$ points from particle 2 to particle 1. As the potential energy depends on the radial speed, it is of convenience to use the chain rule to express Lagrange’s equation in the form $$\mathbf{F}=-\nabla U+\sum\limits_{i}\hat{i}\frac{d}{dt}\left( \hat{i}\mathbf{\cdot }\hat{R}\frac{\partial U}{\partial u_{1}}\right) ,\eqno (12)$$ where $v_{i}$ in (9) is understood as $v_{1i}$ ($=\mathbf{v}_{1}\mathbf{\cdot }\hat{i}$). Then, by using the identity $\nabla u_{12}=d\hat{R}/dt=(\mathbf{v}_{12}-u_{12}\hat{R})/R$, the preceding force formula becomes the form given in [@Whittaker] $$\mathbf{F}=\hat{R}\frac{1}{R}U+\hat{R}\frac{d}{dt}\frac{\partial U}{\partial u_{1}}.\eqno (13)$$ In dealing with the time derivative associated with the potential energy, one uses the expansion $d(u_{12}/R)/dt=(du_{12}/dt)/R-u_{12}^{2}/R^{2}$, as both of the variations of $u_{12}$ and $R$ contribute to the time derivative. Further, by expanding the derivative $du_{12}/dt$, one has the Weber force law [@ORahilly; @Whittaker] $$\mathbf{F}=\frac{q_{1}q_{2}}{4\pi \epsilon _{0}}\frac{\hat{R}}{R^{2}}\left( 1+\frac{v_{12}^{2}}{c^{2}}-\frac{3}{2}\frac{u_{12}^{2}}{c^{2}}+\frac{\mathbf{R}\cdot \mathbf{a}_{12}}{c^{2}}\right) ,\eqno (14)$$ where $\mathbf{a}_{12}$ denotes the relative acceleration.
It is noted that the force is always along the radial direction represented by $\hat{R}$ and the involved distance, velocity, and acceleration are all relative between the two particles. Thereby, the Weber force is frame-invariant simply under Galilean transformations and is in accord with Newton’s law of action and reaction. Consider the case where the magnetic force is due to a neutralized current where the mobile charged particles forming the current are actually embedded in a matrix, such as electrons in a metal wire. The ions that constitute the matrix tend to electrically neutralize the mobile particles. Suppose that the various ions and hence the neutralizing matrix move at a fixed velocity $\mathbf{v}_{m}$. Thus the mobile charged particles drift at the speed $v_{2m}$ relative to the matrix. Ordinarily, the drift speed $v_{2m}$ is quite low due to the collision of electrons against ions. Thus, based on the Weber force law, the force due to a neutralized current element exerted on a charged particle of relative velocity $\mathbf{v}_{1m}$ can be given by superposing the forces due to the electron and ion. Thus one has the force law between the current element and the particle $$\mathbf{F}=\frac{q_{1}q_{2}}{4\pi \epsilon _{0}c^{2}}\frac{\hat{R}}{R^{2}}\left( -2\mathbf{v}_{1m}\cdot \mathbf{v}_{2m}+3u_{1m}u_{2m}-\mathbf{R}\cdot \mathbf{a}_{2m}\right) ,\eqno (15)$$ where it has been supposed that the drift speed $v_{2m}$ is sufficiently low as it is ordinarily and thus those terms associated with the second order of $\mathbf{v}_{2m}$ are neglected. It is noted that the term with $\mathbf{a}_{2m}$ is along the direction of $\hat{R}$, instead of the direction of $\mathbf{a}_{2m}$ itself. Consequently, the Weber force law disagrees with the Lorentz force law as far as the longitudinal force in railgun accelerators is concerned. Consider two neutralized current elements flowing on two wire segments which in turn are stationary with respect to each other.
Then, by superposing the forces exerted on the electron and ion, one has the force law between the two current elements $$\mathbf{F}=\frac{q_{1}q_{2}}{4\pi \epsilon _{0}c^{2}}\frac{\hat{R}}{R^{2}}\left( -2\mathbf{v}_{1m}\cdot \mathbf{v}_{2m}+3u_{1m}u_{2m}\right) .\eqno (16)$$ This formula is identical to the Ampère force law (2), as $q_{1}\mathbf{v}_{1m}$ and $q_{2}\mathbf{v}_{2m}$ correspond to $I_{1}d\mathbf{l}_{1}$ and $I_{2}d\mathbf{l}_{2}$, respectively. Since $\mathbf{v}_{1m}$ and $\mathbf{v}_{2m}$ are relative, the Ampère force law is Galilean invariant. And as these velocities appear in a symmetric way, the action of a current element on itself then cancels out. [**5. Riemann Force Law**]{}\ The electromagnetic force law can be derived alternatively from a potential energy incorporating the relative speed, instead of the radial relative speed. That is, [@ORahilly] $$U=\frac{q_{1}q_{2}}{4\pi \epsilon _{0}}\frac{1}{R}\left( 1+\frac{v_{12}^{2}}{2c^{2}}\right) .\eqno (17)$$ This velocity-dependent potential energy was introduced by Riemann in 1861 [@Whittaker] and is almost ignored at the present time. Then Lagrange’s equation immediately leads to the Riemann force law [@ORahilly] $$\mathbf{F}=\frac{q_{1}q_{2}}{4\pi \epsilon _{0}}\left\{ \frac{\hat{R}}{R^{2}}\left( 1+\frac{v_{12}^{2}}{2c^{2}}\right) -\frac{1}{c^{2}R^{2}}u_{12}\mathbf{v}_{12}+\frac{1}{c^{2}R}\mathbf{a}_{12}\right\} ,\eqno (18)$$ where, as in deriving (14), one uses the expansion $$\frac{d}{dt}\frac{\mathbf{v}_{12}}{R}=\frac{\mathbf{a}_{12}}{R}-\frac{u_{12}\mathbf{v}_{12}}{R^{2}},\eqno (19)$$ as both of the variations of $\mathbf{v}_{12}$ and $R$ contribute to the time derivative. Physically, the derivative $d(\mathbf{v}_{12}/R)/dt$ is associated with the time rate of change in the potential energy actually experienced by the affected particle.
And the term with $u_{12}\mathbf{v}_{12}$ in the preceding force formula is associated with the variation of the experienced potential energy due to the relative displacement between the affected and the source particle. It is of essence to note that the potential energy and the force depend on the relative velocity and distance and hence they are independent of the choice of reference frames in uniform motion of translation. Furthermore, the Riemann force law as well as the Weber force law is in accord with Newton’s third law of motion. Now we consider the ordinary case where the force is due to a neutralized current element with a sufficiently low drift speed $v_{2m}$. By superposition the Riemann force exerted on a charged particle moving at a velocity $\mathbf{v}_{1m}$ relative to the matrix is then given by $$\mathbf{F}=\frac{q_{1}q_{2}}{4\pi \epsilon _{0}c^{2}}\left\{ \frac{1}{R^{2}}(-\hat{R}\mathbf{v}_{1m}\cdot \mathbf{v}_{2m}+u_{1m}\mathbf{v}_{2m}+u_{2m}\mathbf{v}_{1m})-\frac{1}{R}\mathbf{a}_{2m}\right\} .\eqno (20)$$ Omitting the acceleration term, a similar force formula between two current elements can be found in [@Whittaker].
When the current-carrying wire forms a loop $C_{2}$ over which the current is uniform and thus the neutralization remains, the force becomes $$\mathbf{F}=\frac{q_{1}}{4\pi \epsilon _{0}c^{2}}\oint_{C_{2}}\frac{\rho _{l}}{R^{2}}(-\hat{R}\mathbf{v}_{1m}\cdot \mathbf{v}_{2m}+u_{1m}\mathbf{v}_{2m})dl\ -\ q_{1}\dfrac{\partial \mathbf{A}}{\partial t},\eqno (21)$$ where $\rho _{l}$ denotes the line charge density of the mobile particles of the neutralized loop, the vector potential is given by $$\mathbf{A}=\frac{\mu _{0}}{4\pi }\oint_{C_{2}}\frac{\rho _{l}\mathbf{v}_{2m}}{R}dl,\eqno (22)$$ and we have made use of the consequence that a uniform current ($\rho _{l}v_{2m}$) leads to $$\oint_{C_{2}}\frac{\rho _{l}u_{2m}}{R^{2}}dl=0.\eqno (23)$$ Similarly, for a volume current density under the magnetostatic condition, it can be shown that the contribution corresponding to that of the term $\rho _{l}u_{2m}$ cancels out collectively. It is noted that the time derivative $\partial \mathbf{A}/\partial t$ is actually referred to the matrix frame (in which the matrix is stationary) so that the variation of $\mathbf{v}_{2m}$ contributes to this derivative, while the variation of $R$ does not as its effect has been counted in the term with $u_{1m}\mathbf{v}_{2m}$ in (21). Further, by using vector identities, the force given by (21) can be written as $$\mathbf{F}=q_{1}\left\{ \mathbf{v}_{1m}\times (\nabla \times \mathbf{A})-\dfrac{\partial \mathbf{A}}{\partial t}\right\} .\eqno (24)$$ It is of essence to note that the preceding formula looks like the Lorentz force law under neutralization. However, the current density generating the potential $\mathbf{A}$, the time derivative of $\mathbf{A}$ in the electric induction force, and the particle velocity connecting to $\nabla \times \mathbf{A}$ in the magnetic force are all referred specifically to the matrix frame.
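The cancellation used in Eq. (23) holds because $\hat{R}/R^{2}$ is the gradient of $1/R$ with respect to the source point, and the closed line integral of a gradient vanishes. A numerical sketch (not from the paper; loop geometry and field points are arbitrary) illustrates this for a circular loop with uniform current:

```python
import math

# Sketch: around a closed loop with uniform current, the line integral of
# u_{2m}/R^2 (cf. Eq. (23)) vanishes for any external field point.
def closed_integral(field_point, n=2000):
    total = 0.0
    dl = 2 * math.pi / n  # unit-radius circular loop, uniform speed 1
    for j in range(n):
        th = 2 * math.pi * (j + 0.5) / n
        src = (math.cos(th), math.sin(th), 0.0)
        tangent = (-math.sin(th), math.cos(th), 0.0)  # direction of v_2m
        Rv = tuple(a - b for a, b in zip(field_point, src))
        R = math.sqrt(sum(c * c for c in Rv))
        u = sum(t * c for t, c in zip(tangent, Rv)) / R  # v_2m . Rhat
        total += u / R**2 * dl
    return total

residual = closed_integral((1.7, -0.6, 0.9))
print(residual)  # ~ 0 up to discretization error
```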
It is noted that this specific frame has been adopted tacitly in common practice with the magnetic and induction forces. Thus, for the force due to closed circuits, the Riemann force law which is Galilean invariant and in accord with Newton’s law of action and reaction can be identical to the Lorentz force law. Recently, based on a wave equation a time evolution equation similar to Schrödinger’s equation is derived. From the evolution equation an electromagnetic force given in a form quite similar to Lagrange’s equation in conjunction with the potential energy (17) is derived [@Su; @QEM]. Thus a quantum-mechanical basis for the Riemann force law has been provided. Further, the divergence and the curl relations for the corresponding electric and magnetic fields are derived. Under the magnetostatic condition, these four relationships are just Maxwell’s equations, with the exception that the velocity determining the involved current density is also relative to the matrix [@QEM]. [**6. Conclusion**]{}\ It is shown that in railgun accelerators the electric induction force longitudinal to the rails is generated during the movement of the armature. This force is due to the decrease of the current and to the newly introduced current elements. Thus it depends on the location, speed, and acceleration of the armature. This induction force is comparable to the magnetic force in magnitude and has a tendency to be the strongest at the railheads. Thus it accounts for the observed longitudinal recoil force exerted on the rails. Besides, we compare the Weber and the Riemann force law, which are derived from Lagrange’s equation in conjunction with a potential energy depending on the radial relative speed and on the relative speed, respectively. For ordinary cases where the force is due to the current on a neutralized and closed wire with low drift speed, it is shown that the Riemann force law reduces to the Lorentz force law. 
Thus the longitudinal force exerted on the rails of railgun accelerators can be well accounted for by the Riemann force law which is in accord both with the nowadays standard theory and with Newton’s law of action and reaction. [99]{} P. Graneau, *J. Appl. Phys*. **62**, 3006 (1987). D.C. Jolly, *Phys. Lett. A* **107**, 231 (1985). J.G. Ternan, *J. Appl. Phys.* **57**, 1743 (1985). P.G. Moyssides, *IEEE Trans. Magn.* **25**, 4307 (1989). A.K.T. Assis and M.A. Bueno, *IEEE Trans. Magn.* **32**, 431 (1996). P. Graneau and N. Graneau, *Phys. Rev. E*. **63**, 058601 (2001). G. Cavalleri, E. Tonni, and G. Spavieri, *Phys. Rev. E* **63**, 058602 (2001). P.T. Pappas, *Nuovo Cimento B* **76**, 189 (1983). T.E. Phipps and T.E. Phipps Jr., *Phys. Lett. A* **146**, 6 (1990). V. Peoglos, *J. Phys. D* **21**, 1055 (1988). G. Cavalleri, G. Bettoni, E. Tonni, and G. Spavieri, *Phys. Rev. E* **58**, 2505 (1998). P. Graneau, *Phys. Lett. A* **120**, 77 (1987). A. O’Rahilly, *Electromagnetic Theory* (Dover, New York, 1965), vol. 1, ch. 7; vol. 2, ch. 11. E. T. Whittaker, *A History of the Theories of Aether and Electricity* (Amer. Inst. Phys., New York, 1987), vol. 1, chs. 3, 7, and 13. C.C. Su, *J. Electromagnetic Waves Applicat.* **16**, 1275 (2002). C.C. Su, *Quantum Electromagnetics – A Local-Ether Wave Equation Unifying Quantum Mechanics, Electromagnetics, and Gravitation* (`http://qem.ee.nthu.edu.tw`).
--- abstract: 'Random tessellations of the space represent a class of prototype models of heterogeneous media, which are central in several applications in physics, engineering and life sciences. In this work, we investigate the statistical properties of $d$-dimensional isotropic Poisson geometries by resorting to Monte Carlo simulation, with special emphasis on the case $d=3$. We first analyse the behaviour of the key features of these stochastic geometries as a function of the dimension $d$ and the linear size $L$ of the domain. Then, we consider the case of Poisson binary mixtures, where the polyhedra are assigned two ‘labels’ with complementary probabilities. For this latter class of random geometries, we numerically characterize the percolation threshold, the strength of the percolating cluster and the average cluster size.' author: - 'C. Larmier' - 'E. Dumonteil' - 'F. Malvagi' - 'A. Mazzolo' - 'A. Zoia' title: 'Finite-size effects and percolation properties of Poisson geometries' --- Introduction {#intro} ============ Heterogeneous and disordered media emerge in several applications in physics, engineering and life sciences. Examples are widespread and concern for instance light propagation through engineered optical materials [@NatureOptical; @PREOptical; @PREQuenched] or turbid media [@davis; @kostinski; @clouds], tracer diffusion in biological tissues [@tuchin], neutron diffusion in pebble-bed reactors [@larsen] or randomly mixed immiscible materials [@renewal], inertial confinement fusion [@zimmerman; @haran], and radiation trapping in hot atomic vapours [@NatureVapours], only to name a few. Stochastic geometries provide convenient models for representing such configurations, and have been therefore widely studied [@santalo; @torquato; @kendall; @solomon; @moran; @ren], especially in relation to heterogeneous materials [@torquato], stochastic or deterministic transport processes [@pomraning], image analysis [@serra], and stereology [@underwood]. 
A particularly relevant class of random media is provided by the so-called Poisson geometries [@santalo], which form a prototype process of isotropic stochastic tessellations: a portion of a $d$-dimensional space is partitioned by randomly generated $(d-1)$-dimensional hyper-planes drawn from an underlying Poisson process. The resulting random geometry (i.e., the collection of random polyhedra determined by the hyper-planes) satisfies the important property that an arbitrary line thrown within the geometry will be cut by the hyper-planes into exponentially distributed segments [@santalo]. In some sense, the exponential correlation induced by Poisson geometries represents perhaps the simplest model of ‘disordered’ random fields, whose single free parameter (i.e., the average correlation length) can be deduced from measured data [@mikhailov]. Following the pioneering works by Goudsmit [@goudsmit], Miles [@miles1964a; @miles1964b] and Richards [@richards] for $d=2$, the statistical features of the Poisson tessellations of the plane have been extensively analysed, and rigorous results have been proven for the limit case of domains having an infinite size: for a review, see, e.g., [@santalo; @moran; @ren]. An explicit construction amenable to Monte Carlo simulations for two-dimensional homogeneous and isotropic Poisson geometries of finite size has been established in [@switzer]. Theoretical results for infinite Poisson geometries have later been generalized by several authors to $d=3$ (which is key for real-world applications but has received comparatively less attention) and to higher dimensions [@miles1969; @miles1970; @miles1971; @miles1972; @matheron; @santalo]. The two-dimensional construction for isotropic Poisson geometries has been analogously extended to three-dimensional (and in principle $d$-dimensional) domains [@serra; @mikhailov].
In this work, we will numerically investigate the statistical properties of $d$-dimensional isotropic Poisson geometries by resorting to Monte Carlo simulation, with special emphasis on the case $d=3$. Our aim is two-fold: first, we will focus on finite-size effects and on the convergence towards the limit behaviour of infinite domains. In order to assess the impact of dimensionality on the convergence patterns, comparisons to analogous numerical or exact findings obtained for $d=1$ and $d=2$ (where available) will be provided. In so doing, we will also present and discuss the simulation results for some physical observables for which exact asymptotic results are not yet known. Then, we will consider the case of ‘coloured’ Poisson geometries, where each polyhedron is assigned a label with a given probability. Such models emerge, for instance, in connection with particle transport problems, where the label defines the physical properties of each polyhedron [@pomraning; @mikhailov]. The case of random binary mixtures, where only two labels are allowed, will be examined in detail. In this context, we will numerically determine the statistical features of the coloured polyhedra, which are obtained by regrouping into clusters the neighbouring volumes by their common label. Attention will be paid in particular to the percolation properties of such binary mixtures for $d=3$: the percolation threshold at which a cluster will span the entire geometry, the average cluster size and the probability that a polyhedron belongs to the spanning cluster will be carefully examined and contrasted to the case of percolation on lattices [@percolation_book]. The effect of dimensionality will again be assessed by comparison with the case $d=2$, for which analogous results were numerically determined in [@lepage]. This paper is structured as follows: in Sec. \[construction\] we will recall the explicit construction for $d$-dimensional isotropic Poisson geometries, with focus on $d=3$.
In Sec. \[uncolored\_geo\] we will discuss the statistical properties of Poisson geometries, and assess the convergence to the limit case of infinite domains. In Sec. \[colored\_geo\] we will extend our analysis to the case of coloured geometries and related percolation properties. Conclusions will be finally drawn in Sec. \[conclusions\]. Construction of Poisson geometries {#construction} ================================== For the sake of completeness, in this Section we will recall the strategy for the construction of Poisson geometries, spatially restricted to a $d$-dimensional box. The case $d=1$ simply stems from the Poisson point process on the line [@santalo], and will not be detailed here. The explicit construction of homogeneous and isotropic Poisson geometries for the case $d=2$ restricted to a square has been originally proposed by [@switzer], based on a Poisson point field in an auxiliary parameter space in polar coordinates. It has been recently shown that this construction can be actually extended to $d=3$ and even higher dimensions [@mikhailov] by suitably generalizing the auxiliary parameter space approach of [@switzer] and using the results of [@serra]. In particular, such $d$-dimensional construction satisfies the homogeneity and isotropy properties [@mikhailov]. The method proposed by [@mikhailov] is based on a spatial decomposition (tessellation) of the $d$-hypersphere of radius $R$ centered at the origin by generating a random number $q$ of $(d-1)$-hyperplanes with random orientation and position. Any given $d$-dimensional subspace included in the $d$-hypersphere will therefore undergo the same tessellation procedure, restricted to the region defined by the boundaries of the subspace. The number $q$ of $(d-1)$-hyperplanes is sampled from a Poisson distribution with parameter $R \Lambda_d$, with $\Lambda_d= \lambda {\cal A}_d(1)/{\cal V}_{d-1}(1) $. 
Here ${\cal A}_{d}(1)=2\pi^{d/2}/\Gamma(d/2)$ denotes the surface of the $d$-dimensional unit sphere ($\Gamma(a)$ being the Gamma function [@special_functions]), ${\cal V}_{d}(1)=\pi^{d/2}/\Gamma(1+d/2)$ denotes the volume of the $d$-dimensional unit sphere, and $\lambda$ is the arbitrary density of the tessellation, carrying the units of an inverse length. This normalization of the density $\lambda$ corresponds to the convention used in [@santalo], and is such that $\lambda t$ yields the mean number of $(d-1)$-hyperplanes intersected by an arbitrary segment of length $t$. ![(Color online) Cutting a cube with a random plane. A cube of side $L$ is centered in $O$. The circumscribed sphere centered in $O$ has a radius $R=\sqrt{3}L/2$. The point $\mathbf M$ is defined by $\mathbf M=r {\mathbf n}$, where $r$ is uniformly sampled in the interval $[0,R]$ and ${\mathbf n}$ is a random unit vector of components ${\mathbf n}=(n_1,n_2,n_3)^T$, with $n_1=1-2\xi_1$, $n_2=\sqrt{1-n_1^2}\cos{(2 \pi \xi_2)}$ and $n_3=\sqrt{1-n_1^2}\sin{(2 \pi \xi_2)}$. The auxiliary variables $\xi_1$ and $\xi_2$ are sampled from two independent uniform distributions in the interval $[0,1]$. The random plane $K$ of equation $n_1 x+n_2 y+n_3 z=r$ is orthogonal to the vector ${\mathbf n}$ and intersects the point $\mathbf M$.[]{data-label="fig1"}](sphere) Let us now focus on the case $d=3$. Suppose, for the sake of simplicity, that we want to obtain an isotropic tessellation of a box of side $L$, centered at the origin $O$. This means that the Poisson tessellation is restricted to the region $[-L/2,L/2]^3$. We denote by $R$ the radius of the sphere circumscribed about the cube. The algorithm then proceeds as follows. The first step consists in sampling a random number of planes $q$ from a Poisson distribution of parameter $4 \lambda R$, where the factor $4$ stems from ${\cal A}_3(1)/{\cal V}_2(1) = 4$. The second step consists in sampling the random planes that will cut the cube.
This is achieved by choosing a radius $r$ uniformly in the interval $[0,R]$ and then sampling two other random numbers, denoted $\xi_1$ and $\xi_2$, from two independent uniform distributions in the interval $[0,1]$. Based on these three random parameters, a unit vector ${\mathbf n}=(n_1,n_2,n_3)^T$ is generated (see Fig. \[fig1\]), with components $$\begin{aligned} n_1&=1-2\xi_1 \nonumber \\ n_2&=\sqrt{1-n_1^2}\cos{(2 \pi \xi_2)}\nonumber \\ n_3&=\sqrt{1-n_1^2}\sin{(2 \pi \xi_2)}.\nonumber\end{aligned}$$ Now let $\mathbf M$ be the point such that ${\mathbf{ OM}}=r {\mathbf n}$. The random plane $K$ is finally defined by the equation $n_1 x + n_2 y +n_3 z =r$, passing through $\mathbf M$ and having normal vector ${\mathbf n}$. By construction, this plane does intersect the circumscribed sphere of radius $R$ but not necessarily the cube: the probability that the plane intersects both the sphere and the cube can be deduced from a classical result of integral geometry. For two convex sets $J_0$ and $J_1$ in $\mathbb R^3$, with $J_1 \subset J_0$, the probability that a randomly chosen plane meets both $J_0$ and $J_1$ is given by the ratio ${\cal M}_{1}(J_1) / {\cal M}_{1}(J_0) $, ${\cal M}_{1}(J)$ being the mean orthogonal $1$-projection of $J$ onto an isotropic random line [@santalo]. The quantity ${\cal M}_{1}(J)$ is also known as the mean caliper diameter of the set $J$ [@miles1972]. The average caliper diameter of a cube of side $L$ is $3L/2$, whereas for the sphere the average caliper diameter coincides with its diameter $2R = L\sqrt{3}$, which yields a probability $\sqrt{3}/2 \simeq 0.866$ for the random planes to actually intersect the cube [^1]. The tessellation is built by successively generating the $q$ random planes. Initially, the stochastic geometry is composed of a single polyhedron, i.e., the cube. If the first sampled plane intersects the region $[-L/2,L/2]^3$, new polyhedra are generated within the cube and the tessellation is updated.
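As an illustrative aside, the sampling steps just described are straightforward to code. The following Python sketch (a minimal illustration of the construction described in the text, not the authors' implementation; `sample_poisson_planes` is a hypothetical helper name) draws the Poisson number $q$ of planes and their parameters $({\mathbf n}, r)$ for $d=3$:

```python
import math
import random

def sample_poisson_planes(L, lam, rng):
    """Sample the random planes n.x = r cutting the cube [-L/2, L/2]^3.

    Illustrative sketch: q ~ Poisson(4*lam*R), with R the radius of the
    circumscribed sphere, since A_3(1)/V_2(1) = 4*pi/pi = 4.
    """
    R = math.sqrt(3.0) * L / 2.0
    # Poisson sampling by inversion of the cumulative distribution.
    q, t, u = 0, math.exp(-4.0 * lam * R), rng.random()
    s = t
    while u > s:
        q += 1
        t *= 4.0 * lam * R / q
        s += t
    planes = []
    for _ in range(q):
        r = rng.uniform(0.0, R)            # distance of the plane from O
        xi1, xi2 = rng.random(), rng.random()
        n1 = 1.0 - 2.0 * xi1               # isotropic unit normal
        n2 = math.sqrt(1.0 - n1 * n1) * math.cos(2.0 * math.pi * xi2)
        n3 = math.sqrt(1.0 - n1 * n1) * math.sin(2.0 * math.pi * xi2)
        planes.append(((n1, n2, n3), r))   # plane n1*x + n2*y + n3*z = r
    return planes
```

Each returned pair $({\mathbf n}, r)$ defines the plane ${\mathbf n} \cdot {\mathbf x} = r$; the point $\mathbf M = r {\mathbf n}$ lies on it by construction, since ${\mathbf n} \cdot (r {\mathbf n}) = r |{\mathbf n}|^2 = r$.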
This procedure is then iterated until $q$ random planes have been generated. By construction, the polyhedra defined by the intersection of such random planes are convex. For illustration purposes, some examples of isotropic Poisson tessellations of a cube of side $L=20$ obtained by Monte Carlo simulation are presented in Fig. \[fig2\], for different values of the density $\lambda$. The number of random polyhedra of the tessellation increases with increasing $\lambda$. Monte Carlo analysis {#uncolored_geo} ==================== The physical observables of interest associated with the stochastic geometries, such as the volume of a polyhedron, its surface, the number of edges, and so on, are clearly random variables, whose statistical distribution we would like to characterize. In the following, we will focus on the case of Poisson geometries restricted to a $d$-dimensional box of linear size $L$. ![(Color online) Examples of Monte Carlo realizations of isotropic Poisson geometries restricted to a three-dimensional box of linear size $L$. For all realizations, we have chosen $L=20$. The geometry at the top (a) has $\lambda=0.2$, that in the middle (b) has $\lambda=1$ and that at the bottom (c) has $\lambda=3$. For fixed $L$, the average number of random polyhedra composing the geometry increases with increasing $\lambda$.[]{data-label="fig2"}](cubes.jpg) With a few remarkable exceptions, the exact distributions for the physical observables are unfortunately unknown [@santalo]. A number of exact results have nonetheless been established for the (typically low-order) moments of the observables and for their correlations, at least in the limit case of domains having an infinite extension [@santalo; @kendall; @solomon]. Monte Carlo simulation offers a unique tool for the numerical exploration of the statistical features of Poisson geometries.
In particular, by resorting to the algorithm described above we can $i)$ investigate the convergence of the moments and distributions of arbitrary physical observables to their known limit behaviour (if any), and $ii)$ numerically explore the scaling of the moments and the distributions for which exact asymptotic results are not yet available. We will thus address these issues with the help of a Monte Carlo code developed for this purpose. Number of polyhedra ------------------- To begin with, we will analyse the growth of the number $N_p$ of polyhedra in $d$-dimensional Poisson geometries as a function of the linear size $L$ of the domain, for a given value of the density $\lambda$. In the following, we will always assume that $\lambda=1$, unless otherwise specified (with both $\lambda$ and $L$ expressed in arbitrary units). The quantity $N_p$ provides a measure of the complexity of the resulting geometries. The simulation findings for the average number $\langle N_p|L\rangle$ of $d$-polyhedra (at finite $L$) and the dispersion factor, i.e., the ratio $\sigma[N_p|L]/\langle N_p|L \rangle$, $\sigma$ denoting the standard deviation, are illustrated in Fig. \[fig3\]. For large $L$, we find an asymptotic scaling law $\langle N_p |L\rangle \sim L^d$: the complexity of the random geometries increases with system size and dimension (Fig. \[fig3\], top), as expected on physical grounds. This means that the computational cost to generate a realization of a Poisson geometry is also an increasing function of the system size and of the dimension. As for the dispersion factor, an asymptotic scaling law $\sigma[N_p|L]/\langle N_p|L\rangle \sim 1/\sqrt{L}$ is found for large $L$, independent of the dimension (Fig. \[fig3\], bottom): for large systems, the distribution of $N_p$ will then be peaked around the average value $\langle N_p|L\rangle$.
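For $d=1$ the dispersion scaling can be made concrete: $N_p$ is one plus the number of Poisson points falling in $[0,L]$, so $\sigma[N_p|L]/\langle N_p|L\rangle \simeq 1/\sqrt{\lambda L}$, consistent with the observed $1/\sqrt{L}$ law. A quick numerical illustration (assuming numpy is available; this is not the simulation code behind Fig. \[fig3\]):

```python
import numpy as np

# Illustrative d = 1 check: N_p = (number of Poisson points in [0, L]) + 1,
# so the dispersion factor sigma/mean decays like 1/sqrt(lambda*L).
rng = np.random.default_rng(0)
lam = 1.0
ratios = []
for L in (10.0, 40.0, 160.0):
    n_p = rng.poisson(lam * L, size=200_000) + 1  # number of 1-polyhedra
    ratios.append(n_p.std() / n_p.mean())
# Quadrupling L should roughly halve the dispersion factor.
```

Successive entries of `ratios` differ by a factor close to $2$, as expected for a $1/\sqrt{L}$ decay when $L$ is quadrupled.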
Markov properties ----------------- Poisson geometries are Markovian, which means that in the limit case of infinite domains an arbitrary line will be cut by the $(d-1)$-surfaces of the $d$-polyhedra into segments whose lengths $\ell$ are exponentially distributed, i.e., $${\cal P}(\ell) = \mu e^{-\mu \ell}, \label{asy_length}$$ with average density $\mu = \lambda$. Conversely, the number of intersections $n_i$ of an arbitrary segment of length $t$ with the $(d-1)$-surfaces of the $d$-polyhedra in an infinite domain will obey a Poisson distribution $${\cal P}(n_i) = \nu^{n_i} \frac{e^{-\nu}}{n_i!}, \label{asy_poisson}$$ with mean value $\nu = \lambda t$. ![(Color online) The average number $\langle N_p|L \rangle$ of $d$-polyhedra in $d$-dimensional Poisson geometries as a function of the linear size $L$ of the domain. The scaling law $ L^d$ is displayed for reference with dashed lines. Inset. The dispersion factor $\sigma[N_p|L]/\langle N_p|L \rangle$ as a function of $L$. The scaling law $1/\sqrt{L}$ is displayed for reference with dashed lines.[]{data-label="fig3"}](Nb_polys_L) In order to verify that the geometries constructed by resorting to the algorithm described above satisfy the Markov properties, we have numerically computed by Monte Carlo simulation the probability density of the segment lengths and the probability of the number of intersections as a function of the linear size $L$ of the domain and for different dimensions $d$. For the former, a Poisson geometry is first generated, and a line is then drawn by uniformly choosing a point in the box and an isotropic direction: this choice corresponds to formally assuming a so-called $I$-randomness for the lines [@coleman]. The intersections of the line with the polyhedra of the geometry are computed, and the resulting segment lengths are recorded. The whole procedure is repeated a large number of times in order to get the appropriate statistics. 
For the latter, a test segment of unit length is sampled by choosing a point and a direction as before, and the number of intersections with the polyhedra is again determined. ![(Color online) The probability densities ${\cal P}(\ell|L)$ of the segment lengths as a function of the linear size $L$ of the domain, in dimension $d=3$. Symbols correspond to the Monte Carlo simulation results, with lines added to guide the eye: blue triangles denote $L=1$, red diamonds $L=2$, green circles $L=3$, black squares $L=5$, and purple crosses $L=40$. The asymptotic (i.e., $L \to \infty$) exponential distribution given in Eq.  is displayed as a black dashed line for reference. The inset displays the same data in log-linear scale.[]{data-label="fig4"}](Distrib_length) The numerical results for ${\cal P}(\ell|L)$ at finite $L$ are illustrated in Figs. \[fig4\] and \[fig5\]. For small $L$, finite-size effects are apparent in the segment length density: this is due to the fact that the longest line that can be drawn across a box of linear size $L$ is $\sqrt{d} L$, which thus induces a cut-off on the distribution (see Fig. \[fig4\]). For $\lambda L \gg 1$, the finite-size effects due to the cut-off fade away and the probability densities eventually converge to the expected exponential behaviour. The rate of convergence appears to be weakly dependent on the dimension $d$ (see Fig. \[fig5\]). The case $d=1$ can be treated analytically and might thus provide a rough idea of the approach to the limit case. For any finite $L$, the distribution of the segment lengths for $d=1$ is $${\cal P}(\ell|L) = \lambda e^{-\lambda \ell} {1\hspace{-1.3mm}{1}}_{\ell < L} + e^{-\lambda L} \delta(\ell-L),$$ ${1\hspace{-1.3mm}{1}}_J$ being the indicator function of the domain $J$.
The moments of order $m$ of the segment length $\ell$ for finite $L$ thus yield $$\langle \ell^m |L \rangle = \frac{\Gamma(m+1)}{\lambda^m}-\frac{\Gamma_{m+1}(\lambda L)}{\lambda^m} + e^{-\lambda L} L^m,$$ where $\Gamma_a(x)$ is the incomplete Gamma function [@special_functions]. In the limit case $L \to \infty$, we have $\langle \ell^m \rangle = \Gamma(m+1)/\lambda^m$, so that for the convergence rate we obtain $$\frac{\langle \ell^m |L\rangle}{\langle \ell^m \rangle} = 1-\frac{\Gamma_{m+1}(\lambda L) -e^{-\lambda L} (\lambda L)^m}{\Gamma(m+1)}, \label{exact_moment_d1}$$ which for large $\lambda L \gg 1$ gives $$\frac{\langle \ell^m |L\rangle}{\langle \ell^m \rangle} \simeq 1-\frac{(\lambda L)^{m-1} e^{-\lambda L}}{\Gamma(m+1)}.$$ Thus, the average segment length ($m=1$) converges exponentially fast to the limit behaviour, whereas the higher moments ($m \ge 2$) converge sub-exponentially with power-law corrections. For $d>1$, the cut-off is less abrupt, but the distributions ${\cal P}(\ell|L)$ still show a peak at $\ell=L$, and vanish for $\ell > L \sqrt{d}$. The asymptotic average segment lengths for $L \to \infty$ yield $\langle \ell \rangle = 1/\lambda$ for any $d$: the Monte Carlo simulation results obtained for a large $L=80$ are compared to the theoretical formulas in Tab. \[tab1\]. $d$ $\langle \ell \rangle$ Theoretical value Monte Carlo ----- ------------------------ ------------------- ------------------------------- $1$ $1/\lambda$ $1$ $1.0002 \pm 10^{-4}$ $2$ $1/\lambda$ $1$ $0.9932 \pm 6\times 10^{-4}$ $3$ $1/\lambda$ $1$ $0.9985 \pm 3 \times 10^{-3}$ : The average segment lengths $\langle \ell \rangle$. Monte Carlo simulation results are obtained with $L=80$ and $\lambda=1$ for any dimension $d$.[]{data-label="tab1"} ![(Color online) The probability densities ${\cal P}(\ell|L)$ of the segment lengths as a function of the linear size $L$ of the domain and of the dimension $d$. 
Symbols correspond to the Monte Carlo simulation results, with lines added to guide the eye: for $L=1$, blue crosses denote $d=1$, green circles $d=2$, and orange triangles $d=3$; for $L=2$, red pluses denote $d=1$, grey squares $d=2$, and purple diamonds $d=3$. The asymptotic (i.e., $L \to \infty$) exponential distribution given in Eq.  is displayed as a black dashed line for reference. Inset. The case of a system size $L=40$: red crosses denote $d=1$, green circles $d=2$, and blue diamonds $d=3$; the black dashed line corresponds to Eq. .[]{data-label="fig5"}](Distrib_length_bis.pdf) For $d=1$ we performed $10^6$ realizations, with an average number $\langle N_p \rangle = 80.986 \pm 9 \times 10^{-3}$ of $1$-polyhedra per realization. For $d=2$ we performed $10^5$ realizations, with an average number $\langle N_p \rangle = 5189 \pm 3$ of $2$-polyhedra per realization. For $d=3$ we performed $2 \times 10^3$ realizations, with an average number $\langle N_p \rangle = 2.82\times 10^5 \pm 1.4 \times 10^3$ of $3$-polyhedra per realization. The convergence of the distribution of the number of intersections to the limit Poisson distribution ${\cal P}(n_i)$ is very fast as a function of $L$, which most probably stems from the unit test segment being only weakly affected by finite-size effects (i.e., by the polyhedra that are cut by the boundaries of the box), contrary to the case of the lines. Finite-size effects are appreciable only for large values of the number of intersections $n_i$, which in turn occur with small probability. The asymptotic average number of intersections per unit length for $L \to \infty$ yields $\langle n_i \rangle = \lambda$ for any $d$: the Monte Carlo simulation results obtained for a large $L=80$ are compared to the theoretical formulas in Tab. \[tab2\], with the same simulation parameters as above.
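The finite-$L$ behaviour for $d=1$ quoted earlier is easy to reproduce, since a segment length at finite $L$ is distributed as $\min(\varepsilon, L)$ with $\varepsilon$ exponential of rate $\lambda$; its exact mean $(1-e^{-\lambda L})/\lambda$ follows from the moment formula with $m=1$. A small illustrative check (not the production code used for the tables):

```python
import math
import random

# d = 1 sanity check: at finite L a segment length is distributed as
# min(eps, L) with eps ~ Exp(lambda), whose exact mean is
# (1 - exp(-lambda*L)) / lambda, recovering 1/lambda as L -> infinity.
rng = random.Random(42)
lam, L, n = 1.0, 5.0, 400_000
mean_mc = sum(min(rng.expovariate(lam), L) for _ in range(n)) / n
mean_exact = (1.0 - math.exp(-lam * L)) / lam
```

For $\lambda = 1$ and $L = 5$ the exact mean is $1 - e^{-5} \approx 0.9933$, already close to the asymptotic value $1/\lambda = 1$, in line with the exponentially fast convergence of the first moment noted above.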
$d$ $\langle n_i \rangle$ Theoretical value Monte Carlo ----- ----------------------- ------------------- ----------------------------- $1$ $\lambda$ $1$ $1.001 \pm 10^{-3}$ $2$ $\lambda$ $1$ $0.995 \pm 3\times 10^{-3}$ $3$ $\lambda$ $1$ $1.03 \pm 2 \times 10^{-2}$ : The average number of intersections $\langle n_i \rangle$. Monte Carlo simulation results are obtained with $L=80$ and $\lambda=1$ for any dimension $d$.[]{data-label="tab2"} The inradius distribution ------------------------- The inradius $r_\text{in}$ is defined as the radius of the largest sphere that can be contained in a (convex) polyhedron, and as such represents a measure of the linear size of the polyhedron [@santalo]. The probability density of the inradius is exactly known in any dimension $d$ for Poisson geometries of infinite size: it turns out that $r_\text{in}$ has an exponential distribution, namely, $${\cal P}(r_\text{in}) = \Lambda_d e^{-\Lambda_d r_\text{in}}, \label{asy_inradius}$$ where the dimension-dependent constant $\Lambda_d$ reads $\Lambda_1=2 \lambda$, $\Lambda_2=\pi \lambda$, and $\Lambda_3=4 \lambda$. In principle, it would be possible to analytically determine the coordinates of the center and the radius of the largest contained sphere, once the equations of the $(d-1)$-hyperplanes defining the $d$-polyhedron are known [@sahu]. We have however chosen to numerically compute the inradius by resorting to a linear programming algorithm. For a given realization of a Poisson geometry, we select in turn a convex $d$-polyhedron: this will be formally defined by a set ${\bf x} \in \mathbb{R}^d$ such that $${\bf a}_i^T {\bf x} \leq {\bf b}_i \,\,\, (1 \leq i \leq q),$$ where $q$ is the number of $(d-1)$-hyperplanes composing the surface of the $d$-polyhedron. 
The inradius $r_\text{in}$ can then be computed based on the Chebyshev center $({\bf x},r_\text{in})$ of the $d$-polyhedron, which can be found by maximising $r_\text{in}$ with the constraints $$\begin{aligned} &\forall i \in \{1, 2, ..., q\}, \,\,\, {\bf a}_i^T {\bf x} + r_\text{in} || {\bf a}_i || \leq {\bf b}_i \\ & r_\text{in} > 0.\end{aligned}$$ This maximisation problem is finally solved using the simplex method [@recipes]. ![(Color online) The probability density ${\cal P}(r_{in}|L)$ of the inradius as a function of the system size $L$ and of the dimension $d$. Symbols correspond to Monte Carlo simulation results. For $d=1$, red squares denote $L=5$ and blue pluses $L=40$. For $d=2$, red crosses denote $L=5$ and blue diamonds $L=40$. For $d=3$, red circles denote $L=5$ and blue triangles $L=40$. The black dashed lines represent the asymptotic (i.e., $L \to \infty$) distribution in Eq. . Inset. Comparison between ${\cal P}(r_{in}|L)$ for a typical polyhedron (blue triangles) and ${\cal P}_0(r_{in}|L)$ for the polyhedron containing the origin (green circles), for $d=3$ and $L=40$. The dashed line represents the asymptotic distribution in Eq. .[]{data-label="fig6"}](Distrib_inradius) The results of the Monte Carlo simulation for $r_\text{in}$ are shown in Fig. \[fig6\] as a function of $L$ and $d$. The case $d=1$ is straightforward, since the inradius simply coincides with the half-length of the $1$-polyhedron. For any finite $L$, the numerical distributions suffer from finite-size effects, analogous to those affecting the distributions of the segment lengths $\ell$: in particular, a cut-off appears at $r_\text{in}=L/2$. As $\lambda L \gg 1$, finite-size effects fade away and the numerical distributions converge to the expected exponential behaviour. The convergence rate as a function of the system size $L$ is weakly dependent on the dimension $d$.
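The Chebyshev-center formulation above maps directly onto an off-the-shelf linear-programming routine. The sketch below is an illustration only (the paper relies on its own simplex implementation [@recipes], and `inradius` is a hypothetical helper name); it uses `scipy.optimize.linprog` and checks the result on the unit cube, whose inradius is $1/2$:

```python
import numpy as np
from scipy.optimize import linprog

def inradius(A, b):
    """Chebyshev center of {x : A x <= b}: maximise r subject to
    a_i.x + r*||a_i|| <= b_i and r >= 0.  Returns (center, r_in)."""
    q, d = A.shape
    norms = np.linalg.norm(A, axis=1)
    # Decision variables z = (x_1, ..., x_d, r); maximise r == minimise -r.
    c = np.zeros(d + 1)
    c[-1] = -1.0
    A_ub = np.hstack([A, norms[:, None]])
    res = linprog(c, A_ub=A_ub, b_ub=b,
                  bounds=[(None, None)] * d + [(0, None)])
    return res.x[:d], res.x[d]

# Unit cube [0, 1]^3, written as x_i <= 1 and -x_i <= 0:
# inradius 1/2, Chebyshev center (1/2, 1/2, 1/2).
A = np.vstack([np.eye(3), -np.eye(3)])
b = np.array([1.0, 1.0, 1.0, 0.0, 0.0, 0.0])
center, r_in = inradius(A, b)
```

For the cube the optimum is unique: $r_\text{in} = 1/2$ forces every coordinate of the center to equal $1/2$.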
The asymptotic average inradius for $L \to \infty$ yields $\langle r_\text{in} \rangle = 1/\Lambda_d $: the Monte Carlo simulation results obtained for a large $L=80$ are compared to the theoretical formulas in Tab. \[tab3\], with the same simulation parameters as above. $d$ $\langle r_{\text{in}} \rangle$ Theoretical value Monte Carlo ----- --------------------------------- ------------------- -------------------------------- $1$ $1/2 \lambda$ $0.5$ $0.50009 \pm 6 \times 10^{-5}$ $2$ $1/\pi \lambda$ $0.31831$ $0.31795 \pm 9 \times 10^{-5}$ $3$ $1/4 \lambda$ $0.25$ $0.2499 \pm 4 \times 10^{-4}$ : The average inradius $\langle r_{\text{in}} \rangle$. Monte Carlo simulation results are obtained with $L=80$ and $\lambda=1$ for any dimension $d$.[]{data-label="tab3"} ![(Color online) The probability density ${\cal P}(Y_d|L)$ of the dimensionless $d$-volume $Y_d=V_d / \langle V_d \rangle$ as a function of the linear size $L$ of the domain and of the dimension $d$. Black inverted triangles denote a system size $L=40$ for $d=1$. For $d=2$, purple diamonds are chosen for a system size $L=40$ and orange squares for $L=10$. For $d=3$, blue crosses are chosen for a system size $L=40$, red circles for $L=10$ and grey triangles for $L=5$. For $d=1$, the asymptotic (i.e., $L \to \infty$) exponential distribution given in Eq.  is displayed as a black dashed line. For $d=2$ and $d=3$, dashed lines denote exponential decay. Inset. Comparison between ${\cal P}(V_d|L)$ for a typical polyhedron (blue triangles) and ${\cal P}_0(V_d|L)$ for the polyhedron containing the origin (green circles), for $d=3$.[]{data-label="fig7"}](Distrib_volume) The volume distribution ----------------------- One of the most important physical observables related to the stochastic geometries is the distribution ${\cal P}(V_d)$ of the $d$-volumes $V_d$ of the polyhedra. 
For $d=1$, this distribution coincides with that of the segment lengths, ${\cal P}(\ell)$, which means that the approach to the limit case of infinite domains follows from the same arguments as above. Unfortunately, the functional form of the distribution ${\cal P}(V_d)$ is not known for $d>1$ [@miles1971; @miles1972; @santalo]. We have thus resorted to Monte Carlo simulation so as to assess the impact of the domain size $L$ and of the dimension $d$ on ${\cal P}(V_d | L)$ for finite $L$. In order to compare the results for different $d$, we found it convenient to introduce the dimensionless variable $Y_d = V_d/\langle V_d\rangle$, where the asymptotic average $d$-volume size is estimated by Monte Carlo for large $L$. The numerical findings are shown in Fig. \[fig7\]. It is apparent that for $\lambda L \gg 1$ the distributions ${\cal P}(Y_d | L)$ approach an asymptotic shape. The rate of convergence as a function of $L$ decreases with increasing $d$, which is expected on physical grounds because the complexity of the geometries grows as $\sim L^d$. The tails of ${\cal P}(Y_d)$ for large values of the argument $Y_d$ also depend on $d$: for $d=1$, ${\cal P}(Y_d) \sim \exp(-Y_d)$, whereas for $d>1$ the tail appears to decay increasingly slowly with $d$. Due to poor statistics for very large values of $Y_d$, we are not able to precisely characterize the asymptotic decay of ${\cal P}(Y_d)$. It seems, however, that for $d>1$ the tail is not purely exponential, and that power law corrections might thus appear. ![(Color online) The dimensionless first moment $\langle Y^1_d |L\rangle=\langle V^1_d |L\rangle /\langle V^1_d\rangle$ of the $d$-volume, as a function of the system size $L$ and of the dimension $d$. Monte Carlo simulation results are displayed as symbols, with dashed lines to guide the eye for $d=2$ and $d=3$. For $d=1$, the solid line represents the exact formula given in Eq. .
Red diamonds denote $d=1$; green circles denote $d=2$; blue triangles denote $d=3$. Inset. The dimensionless moments $\langle Y^m_d |L\rangle=\langle V^m_d |L\rangle /\langle V^m_d\rangle$ of the $d$-volume, for $m=1,2,3$, as a function of the system size $L$, for $d=1$ and $d=2$. Monte Carlo simulation results are displayed as symbols, with dashed lines to guide the eye for $d=3$. For $d=1$, the solid line represents the exact formula given in Eq. . For $d=1$, red diamonds denote $m=1$; red pluses denote $m=2$; red inverted triangles denote $m=3$. For $d=3$, blue triangles denote $m=1$; blue crosses denote $m=2$; blue squares denote $m=3$.[]{data-label="fig8"}](Volume_size) Supplementary information can be retrieved from the analysis of the $m$-th moments $\langle V_d^m\rangle$, for which exact results are available in the case $m=1,2$ and $3$ for infinite domains [@miles1971; @miles1972; @santalo; @matheron]. The convergence of the dimensionless moments $\langle Y^m_d|L\rangle=\langle V^m_d |L\rangle /\langle V^m_d\rangle$ to the limit case as a function of $L$ is displayed in Fig. \[fig8\]. The convergence to the asymptotic value $\lim_{L\to \infty} \langle Y^m_d|L\rangle =1$ becomes increasingly slow as $d$ increases, whereas the order $m$ of the moments has a weak impact on the convergence rate. The Monte Carlo simulation results for the asymptotic $m$-th moments $\langle V_d^m\rangle$ obtained for a large $L=80$ are finally compared to the theoretical formulas in Tab. \[tab4\] for $\langle V_d \rangle$, in Tab. \[tab5\] for $\langle V^2_d \rangle$, and in Tab. \[tab6\] for $\langle V^3_d \rangle$, respectively, with the same simulation parameters as above.
$d$   $\langle V_d \rangle$   Theoretical value   Monte Carlo
----- ----------------------- ------------------- -------------------------------
$1$   $1/\lambda$             $1$                 $1.0002 \pm 10^{-4}$
$2$   $4/\pi \lambda^2$       $1.27324$           $1.2703 \pm 7 \times 10^{-4}$
$3$   $6/\pi \lambda^3$       $1.90986$           $1.91 \pm 10^{-2}$

: The average $d$-volume size $\langle V_d \rangle$. Monte Carlo simulation results are obtained with $L=80$ and $\lambda=1$ for any dimension $d$.[]{data-label="tab4"}

$d$   $\langle V_d^2 \rangle$   Theoretical value   Monte Carlo
----- ------------------------- ------------------- -------------------------------
$1$   $2/\lambda^2$             $2$                 $2.0007 \pm 5\times 10^{-4}$
$2$   $8/\lambda^4$             $8$                 $7.9609 \pm 9 \times 10^{-4}$
$3$   $48/\lambda^6$            $48$                $47.7 \pm 0.5$

: The second moment $\langle V^2_d \rangle$ of the $d$-volume. Monte Carlo simulation results are obtained with $L=80$ and $\lambda=1$ for any dimension $d$.[]{data-label="tab5"}

$d$   $\langle V_d^3 \rangle$   Theoretical value   Monte Carlo
----- ------------------------- ------------------- ------------------------------
$1$   $6/\lambda^3$             $6$                 $6.003 \pm 3 \times 10^{-3}$
$2$   $256 \pi/7\lambda^6$      $114.893$           $114.1 \pm 0.2$
$3$   $1344 \pi/\lambda^9$      $4222.3$            $4144 \pm 75$

: The third moment $\langle V^3_d \rangle$ of the $d$-volume. Monte Carlo simulation results are obtained with $L=80$ and $\lambda=1$ for any dimension $d$.[]{data-label="tab6"}

The moments of the surfaces
---------------------------

The analysis of the $d$-surfaces $A_d$ of the $d$-polyhedra is also of utmost importance, in that it provides information on the interface between the constituents of the geometry (see for instance the considerations in [@miles1972]). We have then computed the first few moments $\langle A^m_d \rangle$ of the $d$-surfaces by Monte Carlo simulation. Results are reported in Tab. \[tab7\], where we compare the numerical findings for large $L=80$ to the exact formulas for infinite domains.
$\langle A^m_d \rangle$                            Theoretical value   Monte Carlo
------------------------- ------------------------ ------------------- ------------------------------
$\langle A_2 \rangle$     $4/\lambda$              $4$                 $3.995 \pm 10^{-3}$
$\langle A_2^2 \rangle$   $(2\pi^2+8)/\lambda^2$   $27.74$             $27.67 \pm 2 \times 10^{-2}$
$\langle A_3 \rangle$     $24/\pi \lambda^2$       $7.64$              $7.63 \pm 2 \times 10^{-2}$
$\langle A_3^2 \rangle$   $240/\lambda^4$          $240$               $239.5 \pm 1.7$

: The moments $\langle A^m_d\rangle$ of the $d$-surface of the $d$-polyhedra. Monte Carlo simulation results are obtained with $L=80$ and $\lambda=1$ for any dimension $d$.[]{data-label="tab7"}

The moments of the outradius
----------------------------

The outradius $r_\text{out}$ is defined as the radius of the smallest sphere enclosing a (convex) polyhedron, and can thus be used together with the inradius so as to characterize the shape of the polyhedra. For $d=1$, the outradius coincides with the inradius. The probability density and the moments of the outradius of Poisson geometries for $d>1$ are not known. We have then numerically computed the moments of the outradius by resorting to an algorithm recently proposed in [@fischer]. This algorithm implements a pivoting scheme similar to the simplex method for linear programming. It starts with a large $d$-ball that includes all vertices of the convex $d$-polyhedron and progressively shrinks it [@fischer]. For reference, the Monte Carlo simulation results for the first few moments of $r_\text{out}$ obtained for a large $L=80$ are given in Tab. \[tab8\], with the same simulation parameters as above: these numerical findings might inspire future theoretical advances.
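In $d=2$ the outradius is the radius of the smallest circle enclosing the polygon's vertices. As an illustration only, and not the pivoting algorithm of [@fischer] used for the actual simulations, a brute-force $O(n^4)$ sketch (our own code and naming) scans all circles defined by pairs and triples of vertices and keeps the smallest one containing every vertex:

```python
# Hedged sketch: brute-force smallest enclosing circle of a 2-polyhedron,
# i.e. its outradius r_out (illustrative only; not the paper's algorithm).
import itertools
import math

def circle_two(p, q):
    """Circle having the segment pq as a diameter."""
    cx, cy = (p[0] + q[0]) / 2.0, (p[1] + q[1]) / 2.0
    return (cx, cy, math.dist(p, q) / 2.0)

def circle_three(p, q, r):
    """Circumcircle of three points, or None if they are collinear."""
    ax, ay = p; bx, by = q; cx, cy = r
    d = 2.0 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    if abs(d) < 1e-12:
        return None
    ux = ((ax*ax + ay*ay) * (by - cy) + (bx*bx + by*by) * (cy - ay)
          + (cx*cx + cy*cy) * (ay - by)) / d
    uy = ((ax*ax + ay*ay) * (cx - bx) + (bx*bx + by*by) * (ax - cx)
          + (cx*cx + cy*cy) * (bx - ax)) / d
    return (ux, uy, math.dist((ux, uy), p))

def contains(c, pts, eps=1e-9):
    return all(math.dist((c[0], c[1]), p) <= c[2] + eps for p in pts)

def outradius(pts):
    """Radius of the smallest circle enclosing all vertices in `pts`."""
    best = None
    for pair in itertools.combinations(pts, 2):
        c = circle_two(*pair)
        if contains(c, pts) and (best is None or c[2] < best[2]):
            best = c
    for triple in itertools.combinations(pts, 3):
        c = circle_three(*triple)
        if c and contains(c, pts) and (best is None or c[2] < best[2]):
            best = c
    return best[2]
```

The pivoting scheme of [@fischer] achieves the same result far more efficiently, which matters when millions of polyhedra per realization must be processed.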
$d$                                      Monte Carlo
----- ---------------------------------- -------------------------------
$2$   $\langle r_\text{out} \rangle$     $0.8444 \pm 2 \times 10^{-4}$
$2$   $\langle r_\text{out}^2 \rangle$   $1.2291 \pm 7 \times 10^{-4}$
$3$   $\langle r_\text{out} \rangle$     $1.153 \pm 2 \times 10^{-3}$
$3$   $\langle r_\text{out}^2 \rangle$   $2.127 \pm 7 \times 10^{-3}$

: The first few moments of the outradius $r_\text{out}$. Monte Carlo simulation results are obtained with $L=80$ and $\lambda=1$.[]{data-label="tab8"}

The polyhedron containing the origin
------------------------------------

So far, the properties of the constituents of the Poisson geometries have been derived by assuming that each $d$-polyhedron has an identical statistical weight (for a precise definition, see, e.g., [@miles1964a; @miles1970; @miles1971; @matheron]). It is also possible to attribute to each $d$-polyhedron a statistical weight equal to its $d$-volume. It can be shown that the statistics of any observable related to the $d$-polyhedron containing the origin $O$ obeys this latter volume-weighted distribution [@miles1970]. This surprising property can be understood by following the heuristic argument proposed by Miles [@miles1964a]: the origin has greater chances of falling within a larger rather than a smaller volume. In particular, for the moments $\langle X \rangle_0$ of the $d$-polyhedron containing the origin we formally have $$\langle X \rangle_0 = \frac{\langle V_d X \rangle}{\langle V_d \rangle},$$ where $X$ denotes an arbitrary observable [@miles1970]. We have carried out an extensive analysis of the moments of the features of the $d$-polyhedra containing the origin by Monte Carlo simulation: numerical findings for the most relevant quantities are reported in Tab. \[tab9\]. For some of the computed quantities, such as the average inradius $\langle r_\text{in} \rangle_0$ or the average outradius $\langle r_\text{out} \rangle_0$, exact results are not available, and our numerical findings may thus support future theoretical investigations.
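In $d=1$ the volume-weighted formula can be checked directly: the segment containing the origin has average length $\langle V_1 \rangle_0 = \langle V_1^2 \rangle / \langle V_1 \rangle = 2/\lambda$, twice the typical value $1/\lambda$ (the classical waiting-time paradox). A hedged sketch, with parameter choices of our own:

```python
# Sketch of the size-biased sampling (d = 1): the segment that contains
# the origin is on average twice as long as a typical segment.
import random

def origin_segment_mean(lam=1.0, L=40.0, realizations=20000, seed=7):
    rng = random.Random(seed)
    total, n = 0.0, 0
    for _ in range(realizations):
        # Poisson points on [-L/2, L/2], generated via exponential gaps.
        pts, x = [], -L / 2 + rng.expovariate(lam)
        while x < L / 2:
            pts.append(x)
            x += rng.expovariate(lam)
        # Segment containing the origin: nearest cut on each side.
        left = max((p for p in pts if p <= 0.0), default=None)
        right = min((p for p in pts if p > 0.0), default=None)
        if left is not None and right is not None:
            total += right - left
            n += 1
    return total / n

mean0 = origin_segment_mean()  # close to 2/lambda, not 1/lambda
```

This matches the row $\langle V_1 \rangle_0 = 2/\lambda$ of Tab. \[tab9\].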
The full distribution ${\cal P}_0(r_\text{in}|L)$ of the inradius of the $d$-polyhedron containing the origin has been estimated, and is compared to ${\cal P}(r_\text{in}|L)$ for the inradius of a typical polyhedron of the tessellation in the inset of Fig. \[fig6\] for $d=3$ and a large system size $L=40$: it is immediately apparent that $\langle r_\text{in} \rangle_0 > \langle r_\text{in} \rangle$. Moreover, the behaviour of the two distributions for small $r_\text{in}$ is also different: for $L \to \infty$, ${\cal P}(r_\text{in}|L)$ attains a finite value for $r_\text{in} \to 0$ due to its exponential shape; on the contrary, our Monte Carlo simulations seem to suggest a power-law scaling ${\cal P}_0(r_\text{in}|L) \sim r^{\alpha_d}_\text{in}$ for $r_\text{in} \to 0$, with $\alpha_d = 1+(d-1)/2$. The distribution ${\cal P}_0(V_d|L)$ of the $d$-volume of the $d$-polyhedron containing the origin has also been computed, and is compared to ${\cal P}(V_d|L)$ for the $d$-volume of a typical polyhedron of the tessellation in the inset of Fig. \[fig7\] for $d=3$ and a large system size $L=40$. Again, $\langle V_d \rangle_0 > \langle V_d \rangle$.
$d$                                        Formula                   Theoretical value   Monte Carlo
----- ---------------------------------- ------------------------- ------------------- ------------------------------
$1$   $\langle V_1 \rangle_0$            $2/ \lambda$              $2$                 $2.000 \pm 10^{-3}$
$1$   $\langle V_1^2 \rangle_0$          $6/\lambda^2$             $6$                 $6.001 \pm 9 \times 10^{-3}$
$2$   $\langle V_2 \rangle_0$            $2 \pi / \lambda^2$       $6.28319$           $6.28 \pm 2 \times 10^{-2}$
$2$   $\langle V_2^2 \rangle_0$          $64 \pi^2 /7 \lambda^4$   $90.2364$           $90.6 \pm 0.9$
$2$   $\langle A_2 \rangle_0$            $\pi^2/ \lambda$          $9.8696$            $9.87 \pm 2 \times 10^{-2}$
$2$   $\langle r_\text{in} \rangle_0$                                                  $0.886 \pm 2 \times 10^{-3}$
$2$   $\langle r_\text{out} \rangle_0$                                                 $2.028 \pm 3 \times 10^{-2}$
$3$   $\langle V_3 \rangle_0$            $8 \pi / \lambda^3$       $25.1327$           $25.3 \pm 0.9$
$3$   $\langle V_3^2 \rangle_0$          $224 \pi^2 / \lambda^6$   $2210.79$           $2129.1 \pm 182$
$3$   $\langle A_3 \rangle_0$            $16 \pi / \lambda^2$      $50.2655$           $50.6 \pm 1.0$
$3$   $\langle r_\text{in} \rangle_0$                                                  $0.89 \pm 10^{-2}$
$3$   $\langle r_\text{out} \rangle_0$                                                 $3.11 \pm 3 \times 10^{-2}$

: Moments of the $d$-polyhedron containing the origin. Monte Carlo simulation results are obtained with $L=80$ and $\lambda=1$ for any dimension $d$.[]{data-label="tab9"}

Other moments and correlations
------------------------------

A number of moments and correlations of other physical observables are exactly known for Poisson geometries of infinite size for $d=2$ and $d=3$. For the sake of completeness, our Monte Carlo estimates corresponding to these quantities are reported in Appendix \[appendix\_moments\]. When analytical results are not known, Monte Carlo simulation findings are displayed for reference.
Coloured geometries {#colored_geo}
===================

So far, we have addressed the statistical properties of Poisson geometries based on the assumption that all polyhedra share the same physical properties, i.e., the medium is homogeneous. In many applications, the polyhedra emerging from a random tessellation are actually characterized by different physical properties, which for the sake of simplicity can be assumed to be piece-wise constant over each volume. Such stochastic mixtures can then be formally described by assigning a distinct ‘label’ (also called ‘colour’) to each polyhedron of the geometry, with a given probability $p$. A widely studied model is that of stochastic binary mixtures, where only two labels are allowed, say ‘red’ and ‘blue’, with associated complementary probabilities $p$ and $1-p$ [@pomraning]. Stochastic mixtures are realized as follows: first, a $d$-dimensional Poisson geometry is constructed by resorting to the algorithm detailed in Sec. \[construction\]. Then, the corresponding coloured geometry is immediately obtained by assigning to each polyhedron a label with a given probability. Adjacent polyhedra sharing the same label are finally merged. For the specific case of binary stochastic mixtures, this gives rise to (generally) non-convex red and blue clusters, each composed of a random number of convex polyhedra. For illustration, some examples of binary stochastic mixtures based on coloured Poisson geometries are provided in Fig. \[fig9\] by Monte Carlo simulation, for a three-dimensional box of side $L=20$ and different values of $\lambda$ and $p$. By increasing $p$, the size of the red clusters also increases, and a large red cluster spanning the entire box may eventually appear for $p>p_c$, where $p_c$ is some critical probability value. In this case, the red clusters are said to have attained the percolation threshold [@percolation_book].
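The colour-and-merge step lends itself to a simple implementation: label each polyhedron independently, then merge neighbouring polyhedra sharing a label with a union-find pass over the adjacency graph. A schematic sketch (the adjacency list is assumed to come from the tessellation code; all names are ours):

```python
# Hedged sketch of the colouring procedure: independent labels followed
# by a union-find merge of same-colour neighbours.
import random

class DisjointSet:
    def __init__(self, n):
        self.parent = list(range(n))
    def find(self, i):
        # Path-halving find.
        while self.parent[i] != i:
            self.parent[i] = self.parent[self.parent[i]]
            i = self.parent[i]
        return i
    def union(self, i, j):
        ri, rj = self.find(i), self.find(j)
        if ri != rj:
            self.parent[ri] = rj

def colour_and_merge(adjacency, p, seed=0):
    """Assign each polyhedron the label 'red' with probability p, then
    merge adjacent polyhedra sharing a label into clusters.
    `adjacency` is a list of (i, j) pairs of neighbouring polyhedra."""
    n = 1 + max(max(i, j) for i, j in adjacency)
    rng = random.Random(seed)
    red = [rng.random() < p for _ in range(n)]
    ds = DisjointSet(n)
    for i, j in adjacency:
        if red[i] == red[j]:
            ds.union(i, j)
    clusters = {}
    for i in range(n):
        clusters.setdefault(ds.find(i), []).append(i)
    return red, list(clusters.values())
```

Each resulting cluster is monochromatic by construction and, in general, non-convex.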
The same argument applies also to the blue clusters: in particular, depending on the kind of underlying stochastic geometry and on the dimension $d$, there might exist a range of probabilities $p$ for which both coloured clusters can simultaneously percolate. ![(Color online) Examples of Monte Carlo realizations of coloured isotropic Poisson geometries restricted to a three-dimensional box of linear size $L$. For all realizations, we have chosen $L=20$. The geometry at the top (a) has $\lambda=0.3$ and $p=0.5$; the geometry in the center (b) has $\lambda=1$ and $p=0.5$; the geometry at the bottom (c) has $\lambda=1$ and $p=0.25$.[]{data-label="fig9"}](cubes_color) Percolation theory has been intensively investigated for the case of regular lattices [@percolation_book]. Although comparatively less is known about percolation in stochastic geometries, remarkable results have nonetheless been obtained in recent years for, e.g., Voronoi and Delaunay tessellations in two dimensions [@voronoi_a; @voronoi_b; @delaunay], whose analysis demands great ingenuity (see, e.g., [@calka2003; @calka2008; @hilhorst]). The percolation properties of two-dimensional isotropic Poisson geometries were first addressed in [@lepage], where the percolation threshold $p_c$ and the fraction of polyhedra pertaining to the percolating cluster were numerically estimated by Monte Carlo simulation. In the following, we will focus on the case of three-dimensional isotropic Poisson geometries, with special emphasis on the transition occurring at $p=p_c$. ![(Color online) Monte Carlo simulation of the percolation probability ${\cal P}_C(p|L)$ for $d=3$ as a function of the colouring probability $p$ and of the system size $L$. Purple crosses represent $L=30$, green diamonds $L=40$, orange squares $L=60$, blue triangles $L=80$, and red circles $L=100$. Curves have been added to guide the eye. The estimated $p_c$ is displayed as a dashed line, with confidence error bars drawn as thinner dashed lines.
For all sizes $L$ we have generated $10^3$ realizations, with the exception of $L=100$, for which $5 \times 10^{2}$ realizations were generated.[]{data-label="fig10"}](Perco)

Percolation threshold
---------------------

To fix the ideas, we will consider the percolation properties of the red clusters in the geometry. The results for blue clusters can be easily obtained by using the symmetry $p \to 1-p$. For infinite geometries, the percolation threshold $p_c$ is defined as the probability of assigning a red label to each $d$-polyhedron above which there exists a giant connected cluster, i.e., an ensemble of connected red $d$-polyhedra spanning the entire geometry [@percolation_book]. The percolation probability ${\cal P}_C(p)$, i.e., the probability that there exists such a connected percolating cluster, thus has a step behaviour as a function of the colouring probability $p$, i.e., ${\cal P}_C(p) = 0$ for $p<p_c$, and ${\cal P}_C(p) = 1$ for $p>p_c$. For any finite $L$, however, there exists a finite probability that a percolating cluster appears below $p=p_c$, due to finite-size effects. The case $d=1$ is straightforward and can be solved analytically: ${\cal P}_C(p)$ simply coincides with the probability that all the segments composing the Poisson geometry on the line are coloured in red. For any finite $L$, this happens with probability $${\cal P}_C(p|L) = p e^{-(1-p) \lambda L}.$$ It follows that for $d=1$ we have $p_c=1$. For $L \to \infty$, ${\cal P}_C(p|L)$ converges to a step function, with ${\cal P}_C(p) = 1$ for $p = p_c$ and ${\cal P}_C(p) = 0$ otherwise. This behaviour is analogous to that of percolation on one-dimensional lattices [@percolation_book]. To the best of our knowledge, exact results for the percolation probability for Poisson geometries in $d>1$ are not known.
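The $d=1$ formula is easily cross-checked numerically: a realization percolates iff all of its $N+1$ segments are coloured red, and averaging $p^{N+1}$ over the Poisson-distributed number of points $N$ recovers $p\,e^{-(1-p)\lambda L}$. A minimal sketch (our own code, with arbitrary illustrative parameters):

```python
# Hedged sketch: Monte Carlo check of P_C(p|L) = p * exp(-(1-p)*lam*L)
# for one-dimensional coloured Poisson geometries.
import math
import random

def perco_prob_1d(p, lam=1.0, L=3.0, realizations=100000, seed=99):
    rng = random.Random(seed)
    hits = 0
    for _ in range(realizations):
        # Number of Poisson points in [0, L], counted via exponential gaps;
        # the geometry then consists of n + 1 segments.
        n, x = 0, rng.expovariate(lam)
        while x < L:
            n += 1
            x += rng.expovariate(lam)
        # The line percolates iff every segment is coloured red.
        if all(rng.random() < p for _ in range(n + 1)):
            hits += 1
    return hits / realizations

est = perco_prob_1d(0.8)
exact = 0.8 * math.exp(-(1.0 - 0.8) * 1.0 * 3.0)
```

The agreement between `est` and `exact` improves as the number of realizations grows, as expected for a simple hit-or-miss estimator.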
The percolation threshold can be numerically estimated by determining $p_c$ at finite $L$ and extrapolating the results to the limit behaviour for $L \to \infty$. The value of $p_c$ for two-dimensional isotropic Poisson geometries has been estimated to be $p_c \simeq 0.586 \pm 10^{-3}$ by means of Monte Carlo simulation [@lepage]. This means that $p_c$ for Poisson geometries in $d=2$ is quite close to the percolation threshold of two-dimensional regular square lattices, which reads $p_c^\text{square} \simeq 0.5927$ [@ziff]. The comparison with respect to regular square lattices might nonetheless appear somewhat artificial, since the features of the constituents of Poisson geometries have broad statistical distributions around their average values. In particular, the typical $2$-polyhedron of infinite Poisson geometries, while having the same average number of sides as a square (see Tab. \[tab10\]), does not share the same surface-to-volume ratio $\chi$, which is a measure of the connectivity of the geometry components: for the $2$-polyhedron we have $\chi = \langle A_2 \rangle/ \langle V_2 \rangle = \pi$ for $\lambda=1$, whereas for a square of side $u$ we have $\chi = 4/u$, which for $u$ equal to the average side of the $2$-polyhedron, namely $u= \langle A_2 \rangle / \langle N \rangle = 1$, yields $\chi= 4$. ![(Color online) Monte Carlo simulation of the segment length distributions ${\cal P}_r(\ell|L)$ and ${\cal P}^\dag_r(\ell|L)$ for $d=3$ as a function of the colouring probability $p$. Purple crosses represent ${\cal P}_r(\ell|L)$ with $p=0.2$; blue triangles: ${\cal P}_r(\ell|L)$ with $p=0.6$; red diamonds: ${\cal P}_r(\ell|L)$ with $p=0.8$. Green circles denote the segment length distribution ${\cal P}^\dag_r(\ell|L)$ for $p=0.2$. All simulations have been performed for a system size $L=40$ and $5 \times 10^{3}$ realizations. For each $p$, the black dashed lines correspond to the exponential distribution ${\cal P}_r(\ell|L) $ given in Eq. 
.[]{data-label="fig11"}](Chord_length_red) Simulation results for the probability ${\cal P}_C(p|L)$ in three-dimensional Poisson geometries are shown in Fig. \[fig10\] as a function of $p$, for various system sizes $L$. As $L$ increases, the shape of ${\cal P}_C(p|L)$ converges to a step function, as expected. Based on the Monte Carlo results, we estimate the percolation threshold to be $p_c = 0.290 \pm 7 \times 10^{-3}$. As expected, $p_c$ decreases as the dimension increases, since the probability that a red cluster can make its way through the blue clusters (acting as obstacles) and eventually reach the opposite side of the box also increases with dimension. For comparison, our estimate of $p_c$ for Poisson geometries lies close to the percolation threshold for three-dimensional regular cubic lattices, which reads $p_c^\text{cube} \simeq 0.3116$ [@grassberger]. This difference might again be explained by noting that the typical $3$-polyhedron of infinite Poisson geometries has the same number of vertices ($n_v =8$), edges ($n_e=12$) and faces ($n_f=6$) as a cube (see Tab. \[tab11\]), but it does not share the same surface-to-volume ratio $\chi$. The $3$-polyhedron has $\chi = \langle A_3 \rangle/ \langle V_3 \rangle = 4$ for $\lambda=1$, whereas for a cube we have $\chi=6/u=6$ by assuming an average side $u=\l_3/n_e =1$. For $d=3$, the estimated $p_c$ for Poisson geometries is also very close to that of continuum percolation models based on spheres, whose threshold reads $p_c^\text{sphere} \simeq 0.2895$ [@torquato_3d]; this is not true for $d=2$, where the threshold for continuum percolation models based on disks yields $p_c^\text{disk} \simeq 0.676339$ [@torquato_2d]. ![(Color online) Monte Carlo simulation of the segment length distributions ${\cal P}_r(\ell)$ and ${\cal P}^\dag_r(\ell)$ for $d=3$ and $L=40$ as a function of the colouring probability $p$.
Grey squares denote ${\cal P}_r(\ell|L)$ for $p=0.6$ and red circles denote ${\cal P}_r(\ell|L)$ for $p=0.8$. Purple crosses denote ${\cal P}^\dag_r(\ell)$ for $p=0.6$, green diamonds ${\cal P}^\dag_r(\ell)$ for $p=0.8$, blue triangles ${\cal P}^\dag_r(\ell|L)$ for $p=0.95$. The dashed curve corresponds to the chord length distribution $h_I(\ell|L)$ of a cube, as given in Eq. . Inset. Effects of system size $L$ for fixed $p=0.8$. Black squares denote ${\cal P}_r(\ell|L)$ for $L=20$; red circles denote ${\cal P}_r(\ell|L)$ for $L=40$. Orange triangles denote ${\cal P}^\dag_r(\ell|L)$ for $L=20$; green diamonds denote ${\cal P}^\dag_r(\ell|L)$ for $L=40$. The chord length distribution $h_I(z)$, $z=\ell/L$, is displayed as a dotted curve for $L=20$; and as a dashed curve for $L=40$.[]{data-label="fig12"}](Distrib_distances_bis) Segment length distributions ---------------------------- In coloured geometries, the distribution of the segment lengths cut by the $(d-1)$-hyperplanes can be quite naturally conditioned to the colour of the $d$-polyhedra. Two possible ways of defining such conditioned probability densities actually exist. Suppose that a line is randomly drawn as before, and that we are interested in assessing the statistics of the segments crossing the red $d$-polyhedra. Then, one can either assume that the counter for the lengths is re-initialized each time that the line crosses a red region (coming from a blue region), regardless of whether the newly crossed region belongs to an already traversed cluster (this is possible since the coloured clusters are generally non-convex); or, one can sum up all the segments crossing red $d$-polyhedra pertaining to the same non-convex cluster. These two definitions give rise to distinct distributions ${\cal P}_c(\ell)$ and ${\cal P}^\dag_c(\ell)$, respectively, where the index $c$ can take the values red ($r$) and blue ($b$). 
In the former case, it can be shown that for domains of infinite size the segment lengths obey $$\begin{aligned} {\cal P}_r(\ell) = \lambda_r e^{-\lambda_r \ell}, \nonumber \\ {\cal P}_b(\ell) = \lambda_b e^{-\lambda_b \ell}, \label{eq_exp_col}\end{aligned}$$ respectively, where $\lambda_r = (1-p) \lambda$ and $\lambda_b=p \lambda$, which can be interpreted as a generalization of the Markov property holding for un-coloured Poisson geometries [@lepage]. Monte Carlo simulation results corresponding to this former definition are illustrated in Fig. \[fig11\] for different values of the probability $p$: for large $\lambda L \gg 1$, the obtained probability densities of the segment lengths conditioned to red polyhedra asymptotically converge to the expected exponential density ${\cal P}_r(\ell)$ given in Eq. . The average segment length $\langle \ell \rangle_r$ has been also computed as a function of $p$: numerical findings are reported in Tab. \[tab10\_bis\] and compared to the exact result $\langle \ell \rangle_r = 1/\lambda_r = 1/(1-p)$ for $\lambda=1$. For the latter definition, the exact functional form ${\cal P}^\dag_c(\ell)$ is not known. For $p \ll p_c$, it turns out that ${\cal P}^\dag_r(\ell) \simeq {\cal P}_r(\ell)$ (see Fig. \[fig11\]); on the contrary, for $p \gg p_c$ the probability density ${\cal P}^\dag_r(\ell)$ largely differs from ${\cal P}_r(\ell)$ and depends on the system size $L$ (see Figs. \[fig12\] and \[fig13\]). This behaviour is due to the shape of the clusters in the geometry: for small $p$, most red clusters are composed of a small number of $d$-polyhedra, and are thus still typically convex. As $p$ increases, there is an increasing probability for a random line to cross non-convex red clusters, and the shape of ${\cal P}^\dag_r(\ell)$ correspondingly drifts away from that of ${\cal P}_r(\ell)$. 
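For $d=1$ the two definitions coincide, since clusters on a line are intervals and hence convex; the exponential law with $\lambda_r=(1-p)\lambda$ can therefore be verified by merging consecutive red segments. A hedged sketch (parameters of our own choosing):

```python
# Hedged sketch (d = 1): the maximal red runs obtained by merging
# consecutive red segments are exponentially distributed with rate
# lambda_r = (1 - p) * lambda, hence mean 1 / ((1 - p) * lambda).
import random

def mean_red_run(p, lam=1.0, L=20000.0, seed=5):
    rng = random.Random(seed)
    runs, current = [], 0.0
    x = 0.0
    while x < L:
        ell = rng.expovariate(lam)  # next Poisson segment length
        x += ell
        if rng.random() < p:        # segment coloured red
            current += ell
        elif current > 0.0:         # blue segment terminates a red run
            runs.append(current)
            current = 0.0
    return sum(runs) / len(runs)
```

For $p=0.6$ and $\lambda=1$ the empirical mean approaches $1/\lambda_r = 2.5$, in line with Eq. \[eq\_exp\_col\] and with prescription (i) of Tab. \[tab10\_bis\].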
Eventually, for $p \to 1$, the entire domain will be coloured in red, and ${\cal P}^\dag_r(\ell)$ converges to the probability density $ h_I(z)$ of the chord through a $d$-box of side $L$, which for our choice of lines obeying the $I$-randomness is given by [@coleman] $$2\pi L h_I(z)= \left\lbrace \begin{array}{ll} 8 z-3 z^2 & \text{if}\;\; 0<z\leq 1 \\ f(z) & \text{if}\;\; 1<z\leq \sqrt{2} \\ g(z) & \text{if}\;\; \sqrt{2} < z\leq \sqrt{3}, \\ \end{array}\right. \label{chord_cube}$$ with $z=\ell/L$, where $$\begin{aligned} f(z)=\frac{6 z^4 + 6 \pi -1 -8 \left[2 z^2+1\right] \sqrt{z^2-1} }{z^2}\nonumber\end{aligned}$$ and $$\begin{aligned} g(z)=&\frac{8 \left[z^2+1\right] \sqrt{z^2-2}+6 \pi -5 -3 z^4 }{z^2} \nonumber \\ &-\frac{24}{z^2} \tan^{-1}\sqrt{z^2-2}.\nonumber\end{aligned}$$ The average segment lengths corresponding to ${\cal P}^\dag_r(\ell)$ have also been computed as a function of $p$, and are reported in Tab. \[tab10\_bis\].

p        $1/\lambda_r$   $\langle \ell |L \rangle_r$ (i)   $\langle \ell |L \rangle_r$ (ii)
-------- --------------- --------------------------------- ----------------------------------
$0.1$    $1.11111$       $1.08 \pm 2 \times 10^{-2}$       $1.09 \pm 2 \times 10^{-2}$
$0.2$    $1.25$          $1.20 \pm 2 \times 10^{-2}$       $1.27 \pm 2 \times 10^{-2}$
$0.25$   $1.33333$       $1.28 \pm 2 \times 10^{-2}$       $1.46 \pm 2 \times 10^{-2}$
$0.3$    $1.42857$       $1.38 \pm 2 \times 10^{-2}$       $2.25 \pm 4 \times 10^{-2}$
$0.35$   $1.53846$       $1.52 \pm 2 \times 10^{-2}$       $6.0 \pm 0.1$
$0.4$    $1.66667$       $1.64 \pm 2 \times 10^{-2}$       $10.7 \pm 0.2$
$0.6$    $2.5$           $2.49 \pm 3 \times 10^{-2}$       $28.8 \pm 0.4$
$0.8$    $5$             $4.89 \pm 7 \times 10^{-2}$       $41.4 \pm 0.5$
$0.9$    $10$            $9.6 \pm 0.2$                     $46.2 \pm 0.6$

: The average segment length $\langle \ell |L \rangle_r$ restricted to the red clusters, as a function of the colouring probability $p$.
Monte Carlo simulation results are obtained by either following the prescriptions coherent with ${\cal P}_r(\ell)$ (marked with $i$), or with ${\cal P}^\dag_r(\ell)$ (marked with $ii$). In both cases, we used $L=60$, with $10^3$ realizations. For reference, the exact result corresponding to prescription (i), namely, $1/\lambda_r = 1/(1-p)$ is also reported.[]{data-label="tab10_bis"} Average cluster size -------------------- For percolation on lattices, the average cluster size $S(p)$ is defined by $$S(p) = \sum_s s w_s, \label{eq_S}$$ where $w_s$ is the probability that the cluster to which a red site belongs contains $s$ sites, and the sum is restricted to sites belonging to non-percolating clusters [@percolation_book]. Now, $w_s \propto s n_s(p)$, where $n_s(p)$ is the number of clusters of size $s$ per lattice site, which means that $S(p) \propto \sum_s s^2 n_s(p)$ [@percolation_book]. Close to the percolation threshold, $S(p)$ is known to behave as $S(p) \propto |p-p_c|^{-\gamma}$ for infinite lattices, where $\gamma$ is a dimension-dependent critical exponent that does not depend on the specific lattice type [@percolation_book]. For finite lattices of linear size $L$, the behaviour of $S(p|L)$ close to $p \to p_c^-$ is dominated by finite-size effects, with a scaling $S(p|L) \propto L^{\gamma/\nu}$, where $\nu$ is another dimension-dependent critical exponent that does not depend on the specific lattice type [@percolation_book]. ![(Color online) The average cluster size $S(p|L)$ as a function of the colouring probability $p$ and of the system size $L$. Purple crosses represent $L=10$, green diamonds $L=20$, orange squares $L=30$, blue triangles $L=40$, and red circles $L=60$. Curves have been added to guide the eye. The estimated $p_c$ is displayed as a dashed line for reference. For all sizes $L$ we have generated $10^{3}$ realizations. Inset. 
The behaviour of $S(p|L)$ as a function of $p-p_c^*$, where $p_c^*$ is our best estimate for the percolation threshold, namely, $p_c^*=0.290$. Blue triangles correspond to $L=40$ and red circles to $L=60$. The dashed line corresponds to the power law scaling $S(p) \propto |p-p_c|^{-\gamma}$, with $\gamma = 1.793$.[]{data-label="fig13"}](S_p) In order to adapt the definition in Eq.  to the calculation of the average cluster size of Poisson geometries, we can either compute the sum by weighting each $d$-polyhedron composing a non-percolating cluster by its volume, or by attributing to each constituent an equal unit weight. The former choice seems more appropriate on physical grounds. We have computed the quantity $S(p|L)$ by Monte Carlo simulation by weighting each polyhedron by its volume: numerical results as a function of the colouring probability $p$ and of the system size $L$ are shown in Fig. \[fig13\]. The shape of $S(p|L)$ is similar to that obtained for percolation on regular lattices (see, for instance, [@percolation_book]), and it displays in particular a divergence for $p$ close to the percolation threshold. Far from the value of $p_c$ estimated above, the curves $S(p|L)$ do not depend on the system size, provided that $L$ is large. For $p \gg p_c$, $S(p|L) \to 0$. For $p \to 0$, numerical evidence shows that $S(p|L) \to \langle V_3 \rangle_0$, which is coherent with the volume-weighted average that we have introduced in order to compute the mean cluster size. Close to $p_c$, $S(p|L)$ suffers from strong finite-size effects, which are coherent with the behaviour of $S(p|L)$ for regular lattices. The inset of Fig. \[fig13\] illustrates the scaling of $S(p|L)$ as a function of $p-p_c^*$, where $p_c^*$ is our best estimate for the percolation threshold, namely, $p_c^*=0.290$. We have examined different values of the system size, namely, $L=40$ and $L=60$.
As $L$ increases, $S(p|L)$ shows a power law behaviour with an exponent that is compatible with the universal critical exponent $\gamma = 1.793$ for dimension $d=3$ [@percolation_book]. ![(Color online) The percolation strength $P(p|L)$ as a function of the colouring probability $p$ and of the system size $L$. Purple crosses represent $L=10$, green diamonds $L=20$, orange squares $L=30$, blue triangles $L=40$, and red circles $L=60$. Curves have been added to guide the eye. The estimated $p_c$ is displayed as a solid line for reference. For all sizes $L$ we have generated $10^{3}$ realizations. Inset. The behaviour of $P(p|L)$ as a function of $p-p_c^*$, where $p_c^*$ is our best estimate for the percolation threshold, namely, $p_c^*=0.290$. Blue triangles correspond to $L=40$ and red circles to $L=60$. The dashed line corresponds to the power law scaling $P(p) \propto (p-p_c)^{\beta}$, with $\beta = 0.4181$.[]{data-label="fig14"}](P_p) Strength of the percolating cluster ----------------------------------- We conclude our investigation of the percolation properties by addressing the behaviour of the so-called strength $P(p)$, which for percolation on lattices is defined as the probability that an arbitrary site belongs to the percolating cluster [@percolation_book]. Close to the percolation threshold, for infinite lattices $P(p)$ is known to behave as $P(p) \propto (p-p_c)^{\beta}$ when $p \to p_c^+$, where $\beta$ is a dimension-dependent critical exponent that does not depend on the specific lattice type [@percolation_book]. For finite lattices of linear size $L$, the behaviour of $P(p|L)$ close to $p=p_c$ is dominated by finite-size effects, with a scaling $P(p|L) \propto L^{-\beta/\nu}$ [@percolation_book]. The strength of Poisson geometries can be again computed by either weighting each $d$-polyhedron composing the percolating cluster by its volume, or by attributing to each constituent an equal unit weight. 
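Once the red clusters and their volumes are known, both estimators reduce to simple bookkeeping. A schematic, volume-weighted version (the input format is ours; `spans` flags a percolating cluster):

```python
# Hedged sketch: volume-weighted average cluster size S and percolation
# strength P for one realization of a coloured Poisson geometry.
def cluster_statistics(red_clusters, total_volume):
    """`red_clusters` is a list of (volume, spans) pairs; `total_volume`
    is the volume of the box. Returns (S, P), where
    S = sum_s s^2 n_s / sum_s s n_s over non-percolating clusters
    (with s the cluster volume) and P is the volume fraction occupied
    by percolating clusters."""
    num = den = perc = 0.0
    for volume, spans in red_clusters:
        if spans:
            perc += volume
        else:
            num += volume * volume
            den += volume
    S = num / den if den > 0.0 else 0.0
    P = perc / total_volume
    return S, P

S, P = cluster_statistics([(1.0, False), (3.0, False), (4.0, True)], 10.0)
```

Averaging `S` and `P` over many realizations yields the curves $S(p|L)$ and $P(p|L)$ discussed in the text.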
Monte Carlo simulation results of $P(p|L)$ corresponding to weighting each polyhedron by its volume are shown in Fig. \[fig14\], as a function of the colouring probability $p$ and of the system size $L$. As in the case of $S(p|L)$, the shape of the strength $P(p|L)$ is similar to that obtained for percolation on regular lattices [@percolation_book]. Far from the value of $p_c$ estimated above, the curves $P(p|L)$ do not depend on the system size, provided that $L$ is large. In particular, for $p \gg p_c$ the entire geometry will be coloured in red, so that we obtain a linear scaling $P(p|L) \propto p$ for the probability of belonging to the percolating cluster. For $p \ll p_c$, $P(p|L)$ falls off rapidly to zero. Close to $p_c$, $P(p|L)$ displays strong finite-size effects, which are again coherent with the behaviour of $P(p|L)$ for regular lattices. The inset of Fig. \[fig14\] shows the scaling of $P(p|L)$ as a function of $p-p_c^*$ for different values of the system size, namely, $L=40$ and $L=60$. As $L$ increases, $P(p|L)$ displays a power law behaviour with an exponent that is compatible with the universal critical exponent $\beta= 0.4181$ for dimension $d=3$ [@percolation_book].

Conclusions
===========

In this paper we have examined the statistical properties of isotropic Poisson stochastic geometries by resorting to Monte Carlo simulation. First, we have addressed the scaling of the key features of the random $d$-polyhedra composing the geometry, encompassing the volume, the surface, the inradius, the crossed lengths, and so on, as a function of the system size and of the dimension. When possible, we have compared the results of our Monte Carlo simulations for very large systems to the exact findings that are known for infinite geometries. When exact asymptotic results were not available from the literature, we have provided accurate numerical estimates that could support future theoretical advances.
Then, we have considered the case of binary mixtures of Poisson geometries, where each $d$-polyhedron is assigned a random label with two possible values. All adjacent polyhedra sharing the same label have been regrouped into possibly non-convex clusters, whose statistical features have been characterized for the case of three-dimensional geometries. We have in particular examined the percolation properties of this prototype model of disordered systems: the probability that a cluster spans the entire geometry, the probability that a given polyhedron belongs to a percolating cluster (the so-called strength), and the average cluster size. We have been able to determine the corresponding percolation threshold, namely, $p_c \simeq 0.290 \pm 7 \times 10^{-3}$, which lies close to that of percolation on regular cubic lattices. An analogous result had been previously established for two-dimensional Poisson geometries, where the percolation threshold had also been found to lie close to that of regular square lattices. The critical exponents associated with the percolation strength and with the average cluster size have finally been determined, and were found to be compatible with the theoretical values $\beta \simeq 0.4181$ and $\gamma \simeq 1.793$, respectively, that are conjectured to be universal for percolation on lattices. Future work will be aimed at refining these Monte Carlo estimates.

Other moments and correlations related to Poisson geometries {#appendix_moments}
============================================================

For the sake of completeness, in this Appendix we report the exhaustive Monte Carlo calculations corresponding to other relevant moments and correlations for the physical observables of Poisson geometries of infinite size, in dimension $d=2$ and $d=3$. The case of ‘typical’ $d$-polyhedra and that of $d$-polyhedra containing the origin are separately considered.
When analytical results are known (from [@santalo; @miles1971; @miles1972; @matheron]), our Monte Carlo estimates are compared to the exact values. Otherwise, numerical findings are provided for reference. Notation is as follows. For the case of the $2$-polyhedron, we denote by $N$ the number of sides. For the $3$-polyhedron, we denote by $\l_3$ the total length of edges, by $n_v$ the number of vertices, by $n_e$ the number of edges, and by $n_f$ the number of faces, respectively. All other symbols have been introduced above. The moments and the correlations are reported in Tabs. \[tab10\] - \[tab14\]. For the case $d=2$ we have also computed the fraction $P_3$ of random polygons having $3$ sides, which yields $0.35505 \pm 2 \times 10^{-5}$, and the fraction $P_4$ of polygons having $4$ sides, which yields $0.38148 \pm 3 \times 10^{-5}$. These estimates are to be compared with the exact results $P_3= 2-\pi^2/6 \simeq 0.35507$ and $$P_4 = -\frac{1}{3} -\frac{7}{36}\pi^2 + \pi^2 \log(2) -\frac{7}{2} \zeta(3) \simeq 0.38147,$$ respectively [@tanner], where $\zeta$ is the Riemann Zeta function [@special_functions].

| Formula                         | Theoretical value           |           | Monte Carlo                   |
|---------------------------------|-----------------------------|-----------|-------------------------------|
| $\langle N \rangle$             | $4$                         | $4$       | $4 \pm 0$                     |
| $\langle N^2 \rangle$           | $(\pi^2+24)/2$              | $16.9348$ | $16.9347 \pm 10^{-4}$         |
| $\langle N A_2 \rangle$         | $(\pi^2+8)/\lambda$         | $17.870$  | $17.848 \pm 5 \times 10^{-3}$ |
| $\langle N V_2 \rangle$         | $2 \pi/\lambda^2$           | $6.283$   | $6.268 \pm 3 \times 10^{-3}$  |
| $\langle N V_2^2 \rangle$       | $16(8\pi^2-21)/21\lambda^4$ | $44.16$   | $43.94 \pm 5 \times 10^{-2}$  |
| $\langle A_2 V_2 \rangle$       | $4 \pi/\lambda^3$           | $12.57$   | $12.52 \pm 10^{-2}$           |
| $\langle A_2 V_2^2 \rangle$     | $256 \pi^2/21\lambda^5$     | $120.3$   | $119.6 \pm 0.2$               |
| $\langle r_\text{in}^2 \rangle$ | $2/\pi^2 \lambda^2$         | $0.2026$  | $0.2022 \pm 10^{-4}$          |

: Moments and correlations of physical observables related to two-dimensional Poisson geometries. Monte Carlo simulation results are obtained with $L=80$ and $\lambda=1$.[]{data-label="tab10"}

| Formula                         | Theoretical value           |            | Monte Carlo                    |
|---------------------------------|-----------------------------|------------|--------------------------------|
| $\langle n_v \rangle$           | $8$                         | $8$        | $7.99999 \pm 2 \times 10^{-6}$ |
| $\langle n_e \rangle$           | $12$                        | $12$       | $12.000 \pm 3 \times 10^{-6}$  |
| $\langle n_f \rangle$           | $6$                         | $6$        | $6.000000 \pm \times 10^{-6}$  |
| $\langle \l_3 \rangle$          | $12/\lambda$                | $12$       | $12.00 \pm 2 \times 10^{-2}$   |
| $\langle n_v^2 \rangle$         | $(13\pi^2+96)/3$            | $74.768$   | $74.767 \pm 10^{-3}$           |
| $\langle n_v V_3 \rangle$       | $8 \pi/\lambda^3$           | $25.13$    | $25.1 \pm 0.1$                 |
| $\langle n_v A_3 \rangle$       | $28 \pi/\lambda^2$          | $87.9646$  | $87.9 \pm 0.3$                 |
| $\langle n_v \l_3 \rangle$      | $(10\pi^2+24)/\lambda$      | $122.696$  | $122.7 \pm 0.2$                |
| $\langle n_f^2 \rangle$         | $(13\pi^2+336)/12$          | $38.6921$  | $38.6916 \pm 3 \times 10^{-4}$ |
| $\langle n_f V_3 \rangle$       | $4(\pi^2+3)/\pi \lambda^3$  | $16.3861$  | $16.36 \pm 8 \times 10^{-2}$   |
| $\langle n_f A_3 \rangle$       | $(14\pi^2+48)/\pi\lambda^2$ | $59.2612$  | $59.2 \pm 0.2$                 |
| $\langle n_f \l_3 \rangle$      | $(5\pi^2+36)/\lambda$       | $85.348$   | $85.3 \pm 0.1$                 |
| $\langle V_3 A_3 \rangle$       | $96/\lambda^5$              | $96$       | $95.7 \pm 0.8$                 |
| $\langle V_3 \l_3 \rangle$      | $24\pi/\lambda^4$           | $75.3982$  | $75.2 \pm 0.5$                 |
| $\langle A_3 \l_3 \rangle$      | $72\pi/\lambda^3$           | $226.195$  | $225.9 \pm 1.2$                |
| $\langle \l_3^2 \rangle$        | $24(\pi^2+1)/\lambda^2$     | $260.871$  | $260.7 \pm 0.9$                |
| $\langle r_\text{in}^2 \rangle$ | $1/8 \lambda^2$             | $0.125$    | $0.1249 \pm 4 \times 10^{-4}$  |

: Moments and correlations of physical observables related to three-dimensional Poisson geometries. Monte Carlo simulation results are obtained with $L=80$ and $\lambda=1$.[]{data-label="tab11"}

|                                            | Monte Carlo                    |
|--------------------------------------------|--------------------------------|
| $\langle N r_\text{in} \rangle$            | $1.4538 \pm 4 \times 10^{-4}$  |
| $\langle V_2 r_\text{in} \rangle$          | $1.125 \pm 10^{-3}$            |
| $\langle A_2 r_\text{in} \rangle$          | $2.269 \pm 10^{-3}$            |
| $\langle N r_\text{out} \rangle$           | $3.755 \pm 10^{-3}$            |
| $\langle V_2 r_\text{out} \rangle$         | $2.572 \pm 2 \times 10^{-3}$   |
| $\langle A_2 r_\text{out} \rangle$         | $5.815 \pm 3 \times 10^{-3}$   |
| $\langle r_\text{in} r_\text{out} \rangle$ | $0.4669 \pm 3 \times 10^{-4}$  |

|                                            | Monte Carlo                    |
|--------------------------------------------|--------------------------------|
| $\langle n_e^2 \rangle$                    | $168.225 \pm 3 \times 10^{-3}$ |
| $\langle n_e n_v \rangle$                  | $112.15 \pm 2 \times 10^{-3}$  |
| $\langle n_e n_f \rangle$                  | $80.075 \pm 10^{-3}$           |
| $\langle n_e V_3 \rangle$                  | $37.6 \pm 0.2$                 |
| $\langle n_e A_3 \rangle$                  | $131.9 \pm 0.4$                |
| $\langle n_e \l_3 \rangle$                 | $184.0 \pm 0.3$                |
| $\langle n_v n_f \rangle$                  | $53.3833 \pm 7 \times 10^{-4}$ |
| $\langle n_v r_\text{in} \rangle$          | $2.584 \pm 4 \times 10^{-3}$   |
| $\langle n_e r_\text{in} \rangle$          | $3.875 \pm 7 \times 10^{-3}$   |
| $\langle n_f r_\text{in} \rangle$          | $1.792 \pm 3 \times 10^{-3}$   |
| $\langle V_3 r_\text{in} \rangle$          | $1.70 \pm 10^{-2}$             |
| $\langle A_3 r_\text{in} \rangle$          | $4.92 \pm 3 \times 10^{-2}$    |
| $\langle \l_3 r_\text{in} \rangle$         | $5.53 \pm 2 \times 10^{-2}$    |
| $\langle n_v r_\text{out} \rangle$         | $11.13 \pm 2 \times 10^{-2}$   |
| $\langle n_e r_\text{out} \rangle$         | $16.70 \pm 3 \times 10^{-2}$   |
| $\langle n_f r_\text{out} \rangle$         | $7.87 \pm 10^{-2}$             |
| $\langle V_3 r_\text{out} \rangle$         | $5.90 \pm 4 \times 10^{-2}$    |
| $\langle A_3 r_\text{out} \rangle$         | $19.1 \pm 0.1$                 |
| $\langle \l_3 r_\text{out} \rangle$        | $23.08 \pm 8 \times 10^{-2}$   |
| $\langle r_\text{in} r_\text{out} \rangle$ | $0.478 \pm 2 \times 10^{-3}$   |

| $d$ | Formula                     | Theoretical value                 |           | Monte Carlo                  |
|-----|-----------------------------|-----------------------------------|-----------|------------------------------|
| $2$ | $\langle N \rangle_0$       | $\pi^2/2$                         | $4.9348$  | $4.932 \pm 4 \times 10^{-3}$ |
| $2$ | $\langle V_2 N \rangle_0$   | $(32\pi^3-84 \pi)/ 21 \lambda^2$  | $34.6813$ | $34.6 \pm 0.1$               |
| $2$ | $\langle V_2 A_2 \rangle_0$ | $64 \pi^3 / 21 \lambda^3$         | $94.4953$ | $94.5 \pm 0.6$               |
| $3$ | $\langle n_v \rangle_0$     | $4 \pi^2/3$                       | $13.1595$ | $13.18 \pm 9 \times 10^{-2}$ |
| $3$ | $\langle n_e \rangle_0$     |                                   |           | $19.8 \pm 0.1$               |
| $3$ | $\langle n_f \rangle_0$     | $(2 \pi^2 + 6)/3$                 | $8.57974$ | $8.59 \pm 4 \times 10^{-2}$  |
| $3$ | $\langle \l_3 \rangle_0$    | $4 \pi^2/\lambda$                 | $39.4784$ | $39.6 \pm 0.4$               |

: Moments and correlations for the $d$-polyhedron containing the origin in $d$-dimensional Poisson geometries. Monte Carlo simulation results are obtained with $L=80$ and $\lambda=1$ for any dimension $d$.[]{data-label="tab14"}

[10]{}
P. Barthelemy, J. Bertolotti, and D. S. Wiersma, Nature [**453**]{}, 495 (2009).
T. Svensson, K. Vynck, M. Grisi, R. Savo, M. Burresi, and D. S. Wiersma, Phys. Rev. E [**87**]{}, 022120 (2013).
T. Svensson, K. Vynck, E. Adolfsson, A. Farina, A. Pifferi, and D. S. Wiersma, Phys. Rev. E [**89**]{}, 022141 (2014).
A. B. Davis and A. Marshak, J. Quant. Spectr. Rad. Transfer [**84**]{}, 3-34 (2004).
A. B. Kostinski and R. A. Shaw, J. Fluid Mech. [**434**]{}, 389 (2001).
F. Malvagi, R. N. Byrne, G. C. Pomraning, and R. C. J. Somerville, J. Atm. Sci. [**50**]{}, 2146-2158 (1992).
V. Tuchin, [*Tissue optics: light scattering methods and instruments for medical diagnosis*]{} (SPIE Press, Cardiff, 2007).
E. W. Larsen and R. Vasques, J. Quant. Spectrosc. Radiat. Transfer [**112**]{}, 619 (2011).
O. Zuchuat, R. Sanchez, I. Zmijarevic, and F. Malvagi, J. Quant. Spectr. Rad. Transfer [**51**]{}, 689-722 (1994).
G. B. Zimmerman and M. L. Adams, Trans. Am. Nucl. Soc. [**63**]{}, 287-288 (1991).
O. Haran, D. Shvarts, and R. Thieberger, Phys. Rev. E [**61**]{}, 6183-6189 (2000).
N. Mercadier, W. Guerin, M. Chevrollier, and R. Kaiser, Nature Physics [**5**]{}, 602 (2008).
L. A. Santaló, [*Integral Geometry and Geometric Probability*]{} (Addison-Wesley, Reading, MA, 1976).
S.
Torquato, [*Random Heterogeneous Materials: Microstructure and Macroscopic Properties*]{} (Springer-Verlag, New York, 2002).
S. N. Chiu, D. Stoyan, W. S. Kendall, and J. Mecke, [*Stochastic Geometry and Its Applications*]{} (Wiley, 2013).
H. Solomon, [*Geometric Probability*]{} (SIAM Press, Philadelphia, PA, 1978).
M. G. Kendall and P. A. P. Moran, [*Geometrical probability*]{} (Charles Griffin And Company Limited, London, 1963).
D. Ren, [*Topics in integral geometry*]{} (World Scientific, Singapore, 1994).
G. C. Pomraning, [*Linear Kinetic Theory and Particle Transport in Stochastic Mixtures*]{} (World Scientific, 1991).
J. Serra, [*Image Analysis and Mathematical Morphology*]{} (Academic Press, London, 1982).
E. Underwood, [*Quantitative Stereology*]{} (Addison-Wesley, 1970).
A. Yu. Ambos and G. A. Mikhailov, Russ. J. Numer. Anal. Math. Modelling [**26**]{}, 263-273 (2011).
S. Goudsmit, Rev. Mod. Phys. [**17**]{}, 321 (1945).
R. E. Miles, Proc. Nat. Acad. Sci. USA [**52**]{}, 901-907 (1964).
R. E. Miles, Proc. Nat. Acad. Sci. USA [**52**]{}, 1157-1160 (1964).
P. I. Richards, Proc. Nat. Acad. Sci. USA [**52**]{}, 1160-1164 (1964).
P. Switzer, Ann. Math. Statist. [**36**]{}, 1859-1863 (1965).
R. E. Miles, Adv. Appl. Prob. [**1**]{}, 211-237 (1969).
R. E. Miles, Izv. Akad. Nauk Arm. SSR Ser. Mat. [**5**]{}, 263-285 (1970).
R. E. Miles, Adv. Appl. Prob. [**3**]{}, 1-43 (1969).
R. E. Miles, Suppl. Adv. Appl. Prob. [**4**]{}, 243-266 (1972).
G. Matheron, Adv. Appl. Prob. [**4**]{}, 508-541 (1972).
D. Stauffer and A. Aharony, [*Introduction To Percolation Theory*]{} (CRC Press, 1994).
T. Lepage, L. Delaby, F. Malvagi, and A. Mazzolo, Prog. Nucl. Sci. Techn. [**2**]{}, 743-748 (2011).
F. W. J. Olver, D. W. Lozier, R. F. Boisvert, and C. W. Clark, [*NIST Handbook of Mathematical Functions*]{} (Cambridge University Press, Cambridge, 2010).
R. Coleman, J. Appl. Probab. [**6**]{}, 430-441 (1969).
K. K. Sahu and A. K. Lahiri, Phil. Mag. [**84**]{}, 1185-1196 (2004).
W. H. Press, S. A. Teukolsky, W. T. Vetterling, and B. P. Flannery, [*Numerical Recipes in C. The Art of Scientific Computing*]{} (Cambridge University Press, Cambridge, 2002).
K. Fischer, B. Gärtner, and M. Kutz, [*Proc. 11th European Symposium on Algorithms (ESA)*]{}, 630-641 (2003).
A. M. Becker and R. M. Ziff, Phys. Rev. E [**80**]{}, 041101 (2009).
R. Neher, K. Mecke, and H. Wagner, J. Stat. Mech. P01011 (2008).
B. Bollobás and O. Riordan, Probab. Theory Relat. Fields [**136**]{}, 417-468 (2006).
P. Calka, Adv. Appl. Prob. [**35**]{}, 551-562 (2003).
P. Calka, J. Stat. Phys. [**132**]{}, 627-647 (2008).
H. J. Hilhorst and P. Calka, J. Stat. Phys. [**132**]{}, 627-647 (2008).
M. E. J. Newman and R. M. Ziff, Phys. Rev. Lett. [**85**]{}, 4104-4107 (2000).
P. Grassberger, J. Phys. A [**25**]{}, 5867-5888 (1992).
M. D. Rintoul and S. Torquato, J. Phys. A: Math. Gen. [**30**]{}, L585 (1997).
J. Quintanilla, S. Torquato, and R. M. Ziff, J. Phys. A: Math. Gen. [**33**]{}, L399-L407 (2000).
J. C. Tanner, J. Appl. Probab. [**20**]{}, 778-787 (1983).

[^1]: In the plane $\mathbb R^2$, the probability that a random line intercepts both the square of side $L$ and the circumscribed circle of radius $R=\sqrt{2}L/2$ is again given by the ratio of the respective mean caliper diameters, which for $d=2$ are simply proportional to the perimeters of each set (the so-called Barbier-Crofton theorem). This yields a probability $4L/(2\pi \sqrt{2}L/2) = 2\sqrt{2}/\pi \simeq 0.900$ for a random line to fall within the square [@santalo].
--- abstract: 'We present the C++ library CppSs (C++ super-scalar), which provides efficient task-parallelism without the need for special compilers or other software. Any C++ compiler that supports C++11 is sufficient. CppSs features different directionality clauses for defining data dependencies. While the variable argument lists of the taskified functions are evaluated at compile time, the resulting task dependencies are fixed by the runtime value of the arguments and are thus analysed at runtime. With CppSs, we provide task-parallelism using merely native C++.' author: - bibliography: - 'bibliography.bib' title: 'CppSs – a C++ Library for Efficient Task Parallelism' --- ***Keywords–high-performance computing; task parallelism; parallel libraries.*** Introduction ============ Programming models implementing task-parallelism play a major role when preparing code for modern architectures with many cores per node and thousands of nodes per cluster. In high performance computing, a common approach for achieving the best parallel performance is to apply the message passing interface (MPI)[@mpi-web] for inter-node communication and a shared-memory programming model for intra-node parallelisation. This way, the communication overhead of pure MPI applications can be overcome. Shared memory models are also crucial when using single node computers as there are systems consisting of hundreds or even thousands of processing units accessing the same memory address space. These systems offer great parallelism to the developer. But utilising the processing units evenly, so that they can run efficiently, is a non-trivial task. Many scientific applications are based on processing large amounts of data. Usually, the processing of this data can be split up and some of these chunks have to be executed in a well defined order while others are independent. This is the level on which task based programming models are employed. 
We will call the chunks of work to be processed tasks, while their appearances in the code (e.g., as functions, methods or subroutines) will be called task instances. The dependencies between tasks can be stated explicitly by the programmer or inferred automatically by some kind of preprocessing of the code. In the case of fork-join models (e.g. OpenMP[@openmp-web]), all tasks after a “fork” are (potentially) parallel, while code after the “join” and all subsequent forks depend on them. For example, in Figure \[fig:forkjoin\]a), tasks 2, 3 and 4 can run in parallel if sufficient processing units are available. Task 5 cannot be executed before all other tasks have finished. In programming models which support nesting (e.g. Cilk[@cilk1]), the dependencies can sometimes be derived from the placement of the calls (see Figure \[fig:forkjoin\]b)). In many implementations of task based programming models, the data dependencies are specified explicitly by the programmer (e.g. SMPSs[@text-web], OMPSs[@ompss-web], StarPU[@starpu-1] and XKAAPI[@xkaapi-web]). This allows for more complex dependency graphs and therefore more possibilities to adjust the parallelisation to the code, the amount of data and the architecture. However, these implementations suffer from a number of disadvantages:

- The tasks and/or task instances and their dependencies have to be marked by special directives, usually within a `#pragma` in C or using special comments in Fortran. These use keywords and syntax which are not part of the actual language and which the programmer needs to learn.

- In order to compile the instrumented code, the programmer needs a special compiler or preprocessor. She depends on this additional software being available on the desired platform, which is not generally the case.
- The need for special compilers also imposes additional work on system administrators, who will be asked by the programmer to install the specific compiler used in the application.

- The code of the programming model implementation itself becomes more difficult to maintain, and usually at least one additional compile step is introduced when compiling the user code.

In order to avoid these inconveniences, we developed a pure C/C++ library which allows functions to be marked as tasks and executed asynchronously. The programmer still needs to prepare the code by identifying the parts suitable for parallelisation and separating them into functions. Also, it is still necessary to instrument the code with the CppSs API. But contrary to the implementations mentioned above, this is achieved using standard C++11 syntax instead of an “imposed” pragma language. To execute the application serially, e.g. for debugging, the programmer can define the macro `NO_CPPSS`, which bypasses the creation of additional threads and converts the task instances into normal function calls. In the following, we will illustrate the usage (Section \[sec:usage\]) and present the basic implementation of the library CppSs (Section \[sec:impl\]). Lastly, we will sum up our conclusions in Section \[sec:concl\].

(a)![(a) Example of fork-join parallelism. After task 1 the execution thread is forked. (b) Example of nested parallelism. Task 1 spawns tasks 2 and 5. Before task 5 is created, tasks 3 and 4 are spawned, hence the numbering.[]{data-label="fig:forkjoin"}](./forkjoin.png "fig:"){width=".20\textwidth"} (b)![(a) Example of fork-join parallelism. After task 1 the execution thread is forked. (b) Example of nested parallelism. Task 1 spawns tasks 2 and 5. Before task 5 is created, tasks 3 and 4 are spawned, hence the numbering.[]{data-label="fig:forkjoin"}](./nested.png "fig:"){width=".22\textwidth"}

CppSs - usage {#sec:usage}
=============

CppSs is a library which compiles on any system with a working C++ compiler.
The C++11 features necessary for CppSs are provided by the GNU compiler from version 4.6 and by the Intel compiler from version 13. In order to use CppSs, the programmer only needs to include the header `CppSs.h` and link against the library `libcppss.so`. All of CppSs’ application programming interface (API) functions are declared in the namespace [CppSs]{} to avoid overlap with other libraries’ functions. In the following, the CppSs API is introduced, presenting the declaration of tasks (Section \[sec:decl\]), the initialisation and finishing of the parallel execution (Section \[sec:initfinish\]) and the setting of barriers (Section \[sec:barriers\]). Finally, we will give a minimal example putting everything together in Section \[sec:example\].

Declaring Tasks {#sec:decl}
---------------

Parallelisation with CppSs relies on functions with well-defined directionality of their parameters. Loop parallelisation and anonymous code blocks are not supported. To convert a function into a task, the programmer has to call the API function [MakeTask]{}, which takes the following parameters (see the listing in Figure \[lst:minimal\]):

- a pointer to the function,

- an initialiser list containing directionality specifiers for each function parameter,

- (optional) a string with the function name for debugging purposes and

- (optional) a priority level, which is ignored in the present version. Future versions will provide one or more priority queues.

<!-- -->

    void func1(int *a1, double *a2, double *b) {
        //...
    }
    auto func1_task = CppSs::MakeTask(func1, {INOUT,IN,OUT}, "func1");

It is required that the arguments of the taskified function which are intended to cause dependencies are pointers. These can be used to access arrays, built-in types or any other data structure. However, potential overlap with other data structures is not detected. The directionality specifier must be one of [IN, OUT, INOUT]{}, [REDUCTION]{} or [PARAMETER]{}.
The latter is used for arguments which are not to be interpreted as a potential dependency and must be of a built-in numerical type. The effects of the directionality specifiers are described in the following:

#### IN

The task treats this argument as input. It will not be executed until all task instantiations which were called before the function and which write to this argument (i.e. have an [OUT, INOUT]{} or [REDUCTION]{} specifier for the same argument value) have finished.

#### OUT

The task treats this argument as output. The content of the variable or array pointed to is (possibly) overwritten. This affects functions with an [IN]{} or [INOUT]{} specifier for the same argument value.

#### INOUT

The task intends to read from and write to this argument value. It will be dependent on the last task writing to this memory address. The following tasks reading from this memory address will be dependent on this task.

#### REDUCTION

Similar to [INOUT]{}. The task intends to read from and write to this argument value. In contrast to [INOUT]{}, the tasks with a [REDUCTION]{} clause will depend on other tasks with a [REDUCTION]{} clause on the same argument value.

#### PARAMETER

The argument is treated as a parameter. It will be ignored for the dependency analysis.

The return type of [MakeTask]{} is an internal template type, which includes the argument types of the taskified function; thus, we recommend using the C++11 keyword [auto]{}. For convenience, two macros were defined that wrap the call to [MakeTask]{}. The three calls in Figure \[lst:macros\] are equivalent.

    auto func_task = CppSs::MakeTask(func, {INOUT,IN,OUT}, "func");
    auto func_task = CPPSS_TASK(func, {INOUT,IN,OUT});
    CPPSS_TASKIFY(func,{INOUT,IN,OUT})

Init and Finish {#sec:initfinish}
---------------

The next instrumentation to be inserted in the application code is calls to [Init]{} and [Finish]{}. These must be called before the first and after the last task instance, respectively.
While [Finish]{} takes no arguments, [Init]{} takes two optional arguments, namely

- the number of threads and

- the reporting level.

The number of threads may be any positive integer. If none is given, the default is 2. The reporting level must be one of [ERROR, WARNING, INFO]{} or [DEBUG]{}, which cause an increasing amount of output. The default is [WARNING]{}. [Init]{} will instantiate a runtime system which enables the queuing and asynchronous execution of tasks. The runtime will create one thread fewer than the number of threads specified in the call to `Init`, as the main thread will also execute tasks. The threads will be constructed using the standard library [std::thread]{} class. This way, portability is guaranteed on any system which provides a C++11 compiler. [Finish]{} will wait for all tasks to be finished and destruct all threads, queues and the runtime.

Barriers {#sec:barriers}
--------

With the API function [Barrier]{} it is possible to halt the main execution thread, i.e. the code outside of tasks, until all tasks instantiated so far have finished. The call takes no arguments. The call to [Finish]{} contains a call to [Barrier]{}.

Minimal example {#sec:example}
---------------

![Task dependency graph of the minimal example in the listing in Figure \[lst:example\]. The blue nodes (1 and 4) represent the task function [set\_task]{}, the red nodes (2 and 5) [increment\_task]{} and the green nodes (3 and 6) [output\_task]{}. []{data-label="fig:example"}](minimal){width="20.00000%"}

To sum up the API usage, we compile everything into a small example, shown in Figure \[lst:example\]. Internally, it produces the dependency graph shown in Figure \[fig:example\] and prints the output shown in Figure \[lst:output\].
    #include <iostream>
    #include "CppSs.h"

    #define N_THREADS 2

    void set(int *a, int b) { (*a) = b; }
    CPPSS_TASKIFY(set,{OUT,PARAMETER})

    void increment(int *a) { ++(*a); }
    CPPSS_TASKIFY(increment,{INOUT})

    void output(int *a) { std::cout << (*a) << std::endl; }
    CPPSS_TASKIFY(output,{IN})

    int main(void) {
        int a[] = {1,11};
        CppSs::Init(N_THREADS,INFO);
        for (unsigned i=0; i < 2; ++i){
            set_task(&(a[i]), i);
            increment_task(&(a[0]));
            output_task(&(a[0]));
        }
        CppSs::Finish();
        return 0;
    }

<!-- -->

    - 13:32:45.207 INFO: ### CppSs::Init ###
    - 13:32:45.207 INFO: adding worker: 1 of 2
    - 13:32:45.207 INFO: Running on 2 threads.
    1
    2
    - 13:32:45.207 INFO: Executed 6 tasks.
    - 13:32:45.207 INFO: ### CppSs::Finish ###

CppSs - implementation with variadic templates {#sec:impl}
==============================================

The major design paradigm for CppSs was to avoid the use of external libraries. All code should be compilable with a standard C++ compiler. In order to achieve this goal, several features of C++11 were used, the most prominent one being variadic templates[@gregor08:VariadicTemplates]. These are of central importance, as the objects representing a task and an instance of a task are implemented as variadic templates, with the function arguments of the taskified function being the template arguments. This is necessary because a function which the application programmer wants to taskify can have any number and type of arguments. These arguments are known at compile time, so an implementation with variadic templates is the most efficient way to handle variable argument lists. An excerpt of the [Task\_functor]{} class declaration which stores the taskified function is shown in Figure \[lst:taskfunctor\].

    template<typename... ARGS>
    class Task_functor : public Task_functor_base {
        //...
        void (*m_f) (ARGS...);
    };

<!-- -->

    template <typename fun, size_t i>
    struct get_types_helper {
        static void get_types(std::vector<std::type_info const*> &types) {
            get_types_helper<fun, i-1>::get_types(types);
            types.push_back(&typeid(typename function_traits<fun>::template arg<i-1>::type));
        }
    };

    template <typename fun>
    struct get_types_helper<fun,0> {
        static void get_types(std::vector<std::type_info const*> &types) {}
    };

    template <typename fun>
    void get_types(std::vector<std::type_info const*> &types) {
        get_types_helper<fun, function_traits<fun>::nargs>::get_types(types);
    }

In order to process the variable argument list at compile time, recursive template evaluation is necessary. For instance, the set of template functions used to retrieve the types of the task function arguments is shown in Figure \[lst:gettypes\].

Conclusion {#sec:concl}
==========

We developed a pure C/C++ library which allows functions in C/C++ source code to be marked as tasks, to have their dependencies specified, and to be executed asynchronously. Contrary to other similar task based programming models like OpenMP, SMPSs or OMPSs, no preprocessor directives are necessary, and the instrumented code will compile with any compiler which supports C++11 features such as variadic templates, smart pointers and initializer lists. The smallest qualifying versions of the GNU compiler collection (gcc) and the Intel C compiler (icc), both of which are widely available, are gcc 4.6 and icc 13. The current version is capable of constructing the task dependency graph and executing the tasks asynchronously. Several directionality clauses are available. The code was checked for correctness but has yet to prove its scalability in realistic scenarios. First performance tests showed more than three times faster execution when running on four cores compared with the serial version of the same algorithm.
We believe that these results can be enhanced by revising the implementation of the queueing and dequeueing as well as the creation and destruction of task functor instances. Acknowledgment {#acknowledgment .unnumbered} ============== The authors acknowledge support by the H4H project funded by the German Federal Ministry for Education and Research (grant number 01IS10036B) within the ITEA2 framework (grant number 09011).
--- abstract: 'The two-dimensional one-component plasma, i.e. the system of pointlike charged particles embedded in a homogeneous neutralizing background, is studied on the surface of a cylinder of finite circumference, or equivalently in a semiperiodic strip of finite width. The model has been solved exactly by Choquard et al. at the free-fermion coupling $\Gamma=2$: in the thermodynamic limit of an infinitely long strip, the particle density turns out to be a nonconstant periodic function in space and the system exhibits long-range order of the Wigner-crystal type. The aim of this paper is to describe, qualitatively as well as quantitatively, the crystalline state for a larger set of couplings $\Gamma=2 \gamma$ ($\gamma=1,2\ldots$ a positive integer) when the plasma is mappable onto a one-dimensional fermionic theory. The fermionic formalism, supplemented by some periodicity assumptions, reveals that the density profile results from a hierarchy of Gaussians with a uniform variance but with different amplitudes. The number and spatial positions of these Gaussians within an elementary cell depend on the particular value of $\gamma$. Analytic results are supported by the exact solution at $\gamma=1$ ($\Gamma=2$) and by exact finite-size calculations at $\gamma=2,3$.' author: - 'L. [Š]{}amaj$^1$, J. Wagner$^1$, and P. Kalinay$^{1,2}$' title: ' Translation Symmetry Breaking in the One-Component Plasma on the Cylinder ' --- [**KEY WORDS:**]{} Two-dimensional jellium; semiperiodic boundary conditions; translation symmetry breaking. 
$^1$ Institute of Physics, Slovak Academy of Sciences, Dúbravská cesta 9, 845 11 Bratislava, Slovak Republic $^2$ Courant Institute of Mathematical Sciences, New York University, New York, NY 10012 INTRODUCTION ============ According to the laws of electrostatics, the Coulomb potential $v$ at a spatial position ${\bf r}\in R^{\nu}$ of the $\nu$-dimensional Euclidean space, induced by a unit charge at the origin ${\bf 0}$, is defined as the solution of the Poisson equation $$\label{1.1} \Delta v({\bf r}) = - s_{\nu} \delta({\bf r})$$ where $s_{\nu}$ is the surface area of the unit sphere in $R^{\nu}$. The pair interaction energy of particles with charges $q$ and $q'$, localized at the respective positions ${\bf r}$ and ${\bf r}'$, is given by $$\label{1.2} v({\bf r},q\vert {\bf r}',q') = q q' v(\vert {\bf r}-{\bf r}'\vert)$$ In one dimension (1D), $s_1=2$ and the solution of (\[1.1\]) reads $$\label{1.3} v(x) = - \vert x \vert , \quad \quad \nu=1$$ In 2D, $s_2=2\pi$ and the solution of (\[1.1\]), subject to the boundary condition $\nabla v({\bf r})\to {\bf 0}$ as $\vert {\bf r} \vert \to \infty$, reads $$\label{1.4} v({\bf r}) = - \ln \left( \frac{\vert {\bf r}\vert}{r_0} \right), \quad \quad \nu=2$$ where $r_0$ is a free length constant which fixes the zero point of the potential. The Coulomb potential defined by Eq. (\[1.1\]) exhibits in the Fourier ${\bf k}$-space the characteristic singular $\vert {\bf k}\vert^{-2}$ form. This maintains many generic properties, like the sum rules [@Martin], of “real” 3D Coulomb systems with the interaction potential $v({\bf r})=1/\vert {\bf r}\vert$, ${\bf r}\in R^3$. The present paper deals with the equilibrium properties of the classical (i.e. non-quantum) one-component plasma, sometimes called jellium, formulated in 1D or quasi-1D domains. 
The jellium model consists of only one mobile pointlike particle species of charge $q$ embedded in a fixed background of charge $-q$ and density $n$ such that the system as a whole is neutral. Thermodynamics of the 1D jellium has been obtained exactly a long time ago by Baxter [@Baxter]. It was proven subsequently that the 1D jellium is never in a fluid state, but forms a Wigner crystal [@Kunz; @Brascamp]. In particular, choosing the free (hard walls) boundary conditions and going to the infinite volume limit, the one-particle density becomes periodic in space with period $1/n$. This long-range order is present for all densities $n$ and all temperatures. Although the 1D jellium is not in a fluid state, it behaves as a conductor in the sense that arbitrary boundary charges are perfectly screened by means of a global transport of the particle lattice in the background, with no additional polarization in the bulk [@Lugrin]. Translation symmetry breaking was documented also on a quasi-1D system, namely the 2D one-component plasma living on the surface of a cylinder of circumference $W$ [@Choquard1]. This system is exactly solvable at the dimensionless coupling constant $\Gamma=2$ [@Choquard2]. In the thermodynamic limit of an infinitely long cylinder, the one-particle density is given by an array of equidistant identical Gaussians along the cylinder’s axis, with period $1/(nW)$. In 1D and quasi-1D Coulomb systems, the variance of the charge in an interval $I$ remains uniformly bounded as $\vert I\vert \to \infty$. The existence of periodic structures is related to this boundedness of the charge fluctuations [@Aizenman; @Jancovici1]. The present work proceeds in the study of the 2D jellium on the cylinder surface [@Choquard1; @Choquard2]. Our aim is to describe, qualitatively as well as quantitatively, the crystalline state for a larger set of couplings $\Gamma=2\gamma$ ($\gamma=1,2,\ldots$ a positive integer). 
At these couplings, the underlying model is shown to be mappable onto a 1D anticommuting-field theory following the method of Ref. [@Samaj1], and its density profile is expressible in terms of the corresponding field correlators. The assumption of the periodicity of the particle density in the thermodynamic limit reveals uniquely that the density profile results from a superposition of a hierarchy of nonidentical Gaussians with a uniform variance but with different amplitudes. The number and spatial positions of these Gaussians within an elementary cell depend on the particular value of $\gamma$. The analytic results for the crystalline state are supported by the exact solution at $\gamma=1$ ($\Gamma=2$) and by the exact finite-size calculations at $\gamma=2,3$. The paper is organized as follows. In Section 2, we present basic formulas for the one-component plasma living on the cylinder surface. Section 3 deals with the 1D fermionic representation of the model for the special values of the coupling constant $\Gamma=2\gamma$ ($\gamma$ a positive integer). Section 4 is devoted to a general analysis of the density profile in the thermodynamic limit; the Gaussian structure of the crystalline state is revealed for any value of $\gamma$. The analytic results are verified in Section 5 on the exact solution of the model at $\gamma=1$, and by the exact finite-size calculations at $\gamma=2$ and $\gamma=3$. THE MODEL ========= First we define the 2D one-component plasma confined to the surface of a cylinder of circumference $W$ and finite length $L$, in the canonical ensemble. The cylinder surface can be represented as a 2D semiperiodic rectangle domain $\Lambda$ with ${\bf r}=(x,y) \in \Lambda$ if $-L/2\le x\le L/2$ (free or hard walls boundary conditions at $x=\pm L/2$) and $-W/2\le y\le W/2$ (periodic boundary conditions at $y=\pm W/2$). It is sometimes useful to use the complex coordinates $z=x+{\rm i}y$ and ${\bar z}=x-{\rm i}y$. 
There are $N$ mobile pointlike particles of charge $q$ in $\Lambda$, embedded in a homogeneous background of charge density $\rho_b=-q n$ with $$\label{2.1} n = \frac{N}{L W}$$ so that the system as a whole is neutral. The interaction potential between two unit charges at ${\bf r}_1$ and ${\bf r}_2$ is given by the 2D Poisson equation (\[1.1\]) with the requirement of periodicity along the $y$-axis with period $W$. Writing the potential as a Fourier series in $y$, one gets [@Choquard1] $$\label{2.2} v({\bf r}_1,{\bf r}_2) = - \ln \left\vert 2\, {\rm sinh} \frac{\pi(z_1-z_2)}{W} \right\vert$$ At small distances $\vert {\bf r}_1-{\bf r}_2 \vert \ll W$, this potential behaves like the 2D Coulomb potential (\[1.4\]) with the constant $r_0 = W/(2\pi)$. At large distances along the cylinder $\vert x_1-x_2\vert \gg W$, this potential behaves like the 1D Coulomb potential $-(\pi/W) \vert x_1-x_2\vert$. We shall need to express the absolute value on the rhs of (\[2.2\]) formally as the product $g(z_1) g(z_2) \vert f(z_1) - f(z_2) \vert$. This can be done in two ways: $$\begin{aligned} \left\vert 2\, {\rm sinh} \frac{\pi(z_1-z_2)}{W} \right\vert & = & {\rm e}^{-\frac{\pi}{W}(x_1+x_2)} \left\vert {\rm e}^{\frac{2\pi}{W}z_1}-{\rm e}^{\frac{2\pi}{W}z_2} \right\vert \label{2.3} \\ & = & {\rm e}^{\frac{\pi}{W}(x_1+x_2)} \left\vert {\rm e}^{-\frac{2\pi}{W}z_1}-{\rm e}^{-\frac{2\pi}{W}z_2} \right\vert \label{2.4}\end{aligned}$$ The final results cannot depend on the particular choice, and we shall adopt the representation (\[2.3\]). For a given configuration $\{ {\bf r}_1, \ldots, {\bf r}_N \}$ of charges, the total energy of the particle-background system is given by [@Choquard1] $$\label{2.5} E_N(\{ {\bf r}\}) = q^2 \sum_{j<k} v({\bf r}_j,{\bf r}_k) + \pi n q^2 \sum_j x_j^2 + B_N$$ where $B_N$ is the background-background interaction constant. 
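Both limiting forms of the potential (\[2.2\]) are easy to confirm numerically. The following is a sketch only (with $W=1$ and arbitrarily chosen test points), checking the short-distance 2D logarithm and the long-distance 1D linear behavior.

```python
import cmath
import math

W = 1.0
r0 = W / (2 * math.pi)

def v(z1, z2):
    """Pair potential on the cylinder, Eq. (2.2): -ln|2 sinh(pi(z1-z2)/W)|."""
    return -math.log(abs(2 * cmath.sinh(math.pi * (z1 - z2) / W)))

# Short distances |r| << W: 2D Coulomb behavior v ~ -ln(|r|/r0), r0 = W/(2 pi)
z = 1e-4 + 1e-4j
assert abs(v(z, 0) + math.log(abs(z) / r0)) < 1e-6

# Large separations along the cylinder: 1D Coulomb behavior v ~ -(pi/W)|x|
x = 10.0
assert abs(v(x, 0) + (math.pi / W) * x) < 1e-8
```

The short-distance check works because $2\sinh u \simeq 2u$ for $\vert u\vert \ll 1$, and the long-distance one because $\vert 2\sinh u\vert \simeq {\rm e}^{\vert u\vert}$ for $\vert u\vert \gg 1$.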
The partition function at inverse temperature $\beta$ is defined by $$\label{2.6} Z_N = \frac{1}{N!} \int_{\Lambda} \prod_{j=1}^N {\rm d}^2 r_j {\rm e}^{-\beta E_N(\{ {\bf r}\})}$$ It depends on the dimensionless combination $\Gamma = \beta q^2$ called the coupling. The multiplication of $Z_N$ by a constant does not affect the particle distribution functions, so for notational convenience we omit the interaction constant $B_N$ in (\[2.5\]) and multiply each volume element ${\rm d}^2 r_j$ in (\[2.6\]) by $W^{-2}$, to get $$\label{2.7} Z_N = \frac{1}{N!} \int_{\Lambda} \prod_{j=1}^N \left[ {\rm d}^2 z_j w(z_j,{\bar z}_j) \right] \prod_{j<k} \left\vert {\rm e}^{\frac{2\pi}{W}z_j} - {\rm e}^{\frac{2\pi}{W}z_k} \right\vert^{\Gamma}$$ Here, $w$ is the one-body Boltzmann factor $$\label{2.8} w(z,{\bar z}) \equiv w(x) = \frac{1}{W^2} \exp \left[ -\pi\Gamma n x^2 -\frac{\pi\Gamma}{W}(N-1)x \right]$$ The particle density at point ${\bf r}\in \Lambda$ is defined as $$\label{2.9} n({\bf r}) = \left\langle \sum_{j=1}^N \delta({\bf r}-{\bf r}_j) \right\rangle$$ where $\langle \cdots \rangle$ denotes the usual canonical average. It can be obtained in a standard way as the functional derivative $$\label{2.10} n(z,{\bar z}) = w(z,{\bar z}) \frac{\delta}{\delta w(z,{\bar z})} \ln Z_N$$ Due to the cylinder geometry of the system, the particle density depends only on the $x$-coordinate, $n({\bf r}) \equiv n(x)$, and exhibits the reflection symmetry $n(x)=n(-x)$ with respect to the origin $x=0$. The charge neutrality of the system is equivalent to the condition $$\label{2.11} \int_{-L/2}^{L/2} {\rm d} x \left[ n(x)-n \right] = 0$$ Our task is to determine the particle density profile in the thermodynamic limit $N,L\to \infty$ (the circumference $W$ of the cylinder is finite), where the background density $n$ given by (\[2.1\]) stays constant. 
The exact 1D solution of the jellium at any temperature [@Kunz] and the exact 2D solution at the coupling $\Gamma=2$ [@Choquard2] indicate two characteristic features of this density profile: - The thermodynamic limit of the density profile depends on which one of the two subsequences, the particle number $N=$ even and $N=$ odd integers, is chosen; we denote by $n^{(e)}(x)$ and $n^{(o)}(x)$ the corresponding density profiles defined in $-\infty<x<\infty$. The plots of $n^{(e,o)}(x)$ are expected to be periodic with a period $\lambda$, $$\label{2.12} n^{(e,o)}(x\pm \lambda) = n^{(e,o)}(x)$$ The two density profiles are supposed to be the two realizations of the same periodic function shifted to one another by a half period, $$\label{2.13} n^{(e)}(x\pm \lambda/2) = n^{(o)}(x)$$ - The period $\lambda$ is equal to $1/n$ in 1D, independently of the temperature. The exact 2D solution for the coupling $\Gamma=2$ gives $\lambda=1/(nW)$. In both cases the elementary cell with the size of the period in the $x$-direction contains just one particle[^1]. This motivates us to suggest that the period $$\label{2.14} \lambda = \frac{1}{n W}$$ is present for an arbitrary coupling $\Gamma$. These two working hypotheses will be incorporated into an analytic treatment of the model, and subsequently justified numerically with high precision by finite-size calculations. FERMIONIC REPRESENTATION ======================== At $\Gamma=2\gamma$ ($\gamma=1,2,\ldots$ a positive integer), the 2D jellium with the interaction Boltzmann factor $\prod_{j<k}\vert z_j-z_k\vert^{\Gamma}$ is mappable onto a discrete 1D fermionic theory [@Samaj1]. The mapping can be readily extended to the present model. 
The partition function (\[2.7\]) is expressed as an integral over two sets of Grassmann variables $\{ \xi_j^{(\alpha)}, \psi_j^{(\alpha)} \}$, each with $\gamma$ components ($\alpha=1,\ldots,\gamma$) defined on a discrete chain of $N$ sites $j=0,1,\ldots,N-1$ as follows $$\begin{aligned} Z_N & = & \int {\cal D}\psi {\cal D}\xi\, {\rm e}^{S(\xi,\psi)} \label{3.1} \\ S(\xi,\psi) & = & \sum_{j,k=0}^{\gamma(N-1)} \Xi_j w_{jk} \Psi_k \label{3.2}\end{aligned}$$ Here ${\cal D}\psi {\cal D}\xi = \prod_{j=0}^{N-1} {\rm d}\psi_j^{(\gamma)} \ldots {\rm d}\psi_j^{(1)} {\rm d}\xi_j^{(\gamma)} \ldots {\rm d}\xi_j^{(1)}$ and the action $S$ involves pair interactions of “composite” variables $$\label{3.3} \Xi_j = \sum_{j_1,\ldots,j_{\gamma}=0\atop (j_1+\ldots+j_{\gamma}=j)}^{N-1} \xi_{j_1}^{(1)} \ldots \xi_{j_{\gamma}}^{(\gamma)}, \quad \quad \Psi_k = \sum_{k_1,\ldots,k_{\gamma}=0\atop (k_1+\ldots+k_{\gamma}=k)}^{N-1} \psi_{k_1}^{(1)} \ldots \psi_{k_{\gamma}}^{(\gamma)}$$ The interaction strengths $w_{jk}$ $[j,k=0,1,\ldots,\gamma(N-1)]$ are given by $$\label{3.4} w_{jk} = \int_{\Lambda} {\rm d}^2 z\, w(z,{\bar z}) \exp\left( \frac{2\pi}{W} j z \right) \exp\left( \frac{2\pi}{W} k {\bar z} \right)$$ Using the notation $\langle \cdots \rangle = \int {\cal D}\psi {\cal D}\xi {\rm e}^S \cdots/Z_N$ for averaging over the anticommuting variables, the particle density (\[2.10\]) is expressible in the fermionic form as follows $$\label{3.5} n(z,{\bar z}) = w(z,{\bar z}) \sum_{j,k=0}^{\gamma(N-1)} \langle \Xi_j \Psi_k \rangle \exp\left( \frac{2\pi}{W} j z \right) \exp\left( \frac{2\pi}{W} k {\bar z} \right)$$ The fermionic correlators $\{ \langle \Xi_j \Psi_k \rangle \}$ can be obtained from the partition function (\[3.1\]) using relation (\[3.2\]) via the derivatives $$\label{3.6} \langle \Xi_j \Psi_k \rangle = \frac{\partial}{\partial w_{jk}} \ln Z_N$$ Inserting the one-body Boltzmann factor $w$ of interest (\[2.8\]) into (\[3.4\]), and using the orthogonality relation $$\label{3.7} 
\int_{-W/2}^{W/2} {\rm d}y\, \exp\left\{ \frac{2\pi}{W} {\rm i} (j-k) y \right\} = W \delta_{jk}$$ leads to the diagonalization of the interaction matrix $$\label{3.8} w_{jk} = w_j \delta_{jk}, \quad \quad w_j = \frac{1}{W} \int_{-L/2}^{L/2} {\rm d}x\, \exp\left\{ -2\pi\gamma n x^2 + \frac{2\pi}{W} \left[ 2 j - \gamma(N-1) \right] x \right\}$$ Notice that the diagonal interaction strengths $\{ w_j \}$ possess the symmetry $$\label{3.9} w_j = w_{\gamma(N-1)-j} \quad \quad \mbox{for all $j=0,1,\ldots,\gamma(N-1)$}$$ With $w_{jk}$ of the form (\[3.8\]), the action (\[3.2\]) of the partition function (\[3.1\]) becomes “diagonalized”, $$\label{3.10} Z_N = \int {\cal D}\psi {\cal D}\xi \exp \left( \sum_{j=0}^{\gamma(N-1)} \Xi_j w_j \Psi_j \right)$$ and the fermionic correlators $\{ \langle \Xi_j \Psi_k \rangle \}$ are given by $$\label{3.11} \langle \Xi_j \Psi_k \rangle = \langle \Xi_j \Psi_j \rangle \delta_{jk}, \quad \quad \langle \Xi_j \Psi_j \rangle = \frac{\partial}{\partial w_j} \ln Z_N$$ The particle density (\[3.5\]) takes the form $$\label{3.12} n(x) = \frac{1}{W^2} \sum_{j=0}^{\gamma(N-1)} \langle \Xi_j \Psi_j \rangle \exp\left\{ -2\pi\gamma n x^2 + \frac{2\pi}{W} \left[ 2 j - \gamma(N-1) \right] x \right\}$$ where $-\lambda N/2\le x\le \lambda N/2$, $\lambda$ being defined by Eqs. (\[2.1\]) and (\[2.14\]). The symmetry (\[3.9\]) of the interaction strengths $\{ w_j \}$ implies an analogous symmetry for the fermionic correlators $$\label{3.13} \langle \Xi_j \Psi_j \rangle = \langle \Xi_{\gamma(N-1)-j} \Psi_{\gamma(N-1)-j} \rangle \quad \quad \mbox{for all $j=0,1,\ldots,\gamma(N-1)$}$$ This relation ensures the mentioned reflection property of the particle density $n(x)=n(-x)$. GENERAL ANALYSIS ================ The explicit formula (\[3.12\]) for the density profile contains the unknown set of fermionic correlators $\{ \langle \Xi_j \Psi_j \rangle \}$. 
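As a side check of the diagonalization above: each strength $w_j$ of Eq. (\[3.8\]) is a Gaussian integral over a finite interval and reduces, after completing the square, to error functions. The following sketch (parameters chosen arbitrarily for illustration) verifies the symmetry (\[3.9\]) numerically.

```python
import math

def w(j, N, gamma, n, W, L):
    """Diagonal interaction strength w_j of Eq. (3.8), after completing the square."""
    a = 2 * math.pi * gamma * n
    b = (2 * math.pi / W) * (2 * j - gamma * (N - 1))
    m = b / (2 * a)                     # centre of the shifted Gaussian
    return (math.exp(b * b / (4 * a)) / W) * 0.5 * math.sqrt(math.pi / a) * (
        math.erf(math.sqrt(a) * (L / 2 - m)) + math.erf(math.sqrt(a) * (L / 2 + m)))

N, gamma, W = 8, 2, 1.0
L = N / W                               # with this choice n = N/(L W) = 1
n = N / (L * W)

# Symmetry (3.9): w_j = w_{gamma(N-1)-j}
for j in range(gamma * (N - 1) + 1):
    wj = w(j, N, gamma, n, W, L)
    assert abs(wj - w(gamma * (N - 1) - j, N, gamma, n, W, L)) < 1e-10 * wj
```

The equality holds exactly because the substitution $x\to -x$ maps the $j$-th integrand onto the $[\gamma(N-1)-j]$-th one.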
In this section we present an analytic treatment of the thermodynamic $N,L\to \infty$ limit of this formula, supplemented by the periodicity assumptions (\[2.12\]) and (\[2.13\]) for the density functions with the period $\lambda$ defined by Eq. (\[2.14\]). It will be shown that for any $\gamma$ the periodicity assumptions determine uniquely the general Gaussian structure of the density profile, without having at one’s disposal the particular values of the fermionic correlators $\{ \langle \Xi_j \Psi_j \rangle \}$. The treatment depends technically on whether $\gamma$ is an even or odd integer. $\gamma=$ even integer ---------------------- For $\gamma$ even, we define the auxiliary integer $M$ as follows $$\label{4.1} \gamma(N-1) = 2 M$$ Let us shift the $j$-index enumeration in Eq. (\[3.12\]) by $M$, $$\label{4.2} j=M+l, \quad \quad l=-M,-M+1,\ldots,M$$ Now, rescaling appropriately the integration $x$-variable in Eq. (\[3.8\]), the interaction strengths can be written as $$\label{4.3} w_{M+l} = \frac{\exp\left( \frac{2\pi l^2}{\gamma\mu}\right)}{\sqrt{2\gamma\mu}} \frac{1}{\sqrt{\pi}} \int_{-\sqrt{\frac{\pi\gamma}{2\mu}}N}^{\sqrt{\frac{\pi\gamma}{2\mu}}N} {\rm d}x \, \exp\left\{ -\left( x- \sqrt{\frac{2\pi}{\gamma\mu}}l\right)^2 \right\}$$ Here, we have introduced the dimensionless parameter $$\label{4.4} \mu = n W^2$$ which measures the number of particles in a square of side $W$; the limits $\mu\to 0$ and $\mu\to\infty$ correspond to the extreme 1D (at zero temperature) and 2D versions of the model, respectively. The symmetry (\[3.9\]) is equivalent to $$\label{4.5} w_{M+l} = w_{M-l} \quad \quad \mbox{for all $l=-M,\ldots,M$}$$ which can be verified directly from the explicit representation (\[4.3\]). 
For $l$ finite and in the limit $N\to\infty$ $$\label{4.6} w_{M+l} \sim \frac{1}{\sqrt{2\gamma\mu}} \exp\left( \frac{2\pi l^2}{\gamma\mu} \right)$$ Under the index shift (\[4.2\]), the density profile (\[3.12\]) can be expressed as $$\label{4.7} \frac{n(x)}{n} = \sqrt{\frac{2}{\gamma\mu}} \sum_{l=-M}^M c_l(M) \exp\left\{ - \frac{2\pi\gamma}{\mu} \left( \frac{x}{\lambda} -\frac{l}{\gamma} \right)^2 \right\}$$ where we have introduced the coefficients $$\label{4.8} c_l(M) = \langle \Xi_{M+l} \Psi_{M+l} \rangle \sqrt{\frac{\gamma}{2\mu}} \exp\left( \frac{2\pi l^2}{\gamma\mu} \right)$$ The symmetry of the fermionic correlators (\[3.13\]) is equivalent to $\langle \Xi_{M+l} \Psi_{M+l} \rangle = \langle \Xi_{M-l} \Psi_{M-l} \rangle$. The consequent relation $$\label{4.9} c_l(M) = c_{-l}(M) \quad \quad \mbox{for all $l=-M,\ldots,M$}$$ ensures the reflection symmetry $n(x)=n(-x)$. We see that, even for a finite-size system, the particle density (\[4.7\]) emerges as a superposition of Gaussians, localized equidistantly at positions $x=\lambda l/\gamma$ ($l=-M,\ldots,M$), with the uniform variance $\sigma^2=\lambda^2\mu/(4\pi\gamma)$ but with different position-dependent amplitudes. The structure of these amplitudes simplifies substantially in the thermodynamic limit discussed below. All of the formal algebra up to this point has been rigorous. To describe the thermodynamic limit of the profile relation (\[4.7\]), we adopt the two assumptions presented at the end of Section 2. In the limit $M\to\infty$, one has to distinguish between the subsequences of $M$ in (\[4.1\]) which correspond to $N=$ even and to $N=$ odd particle numbers. Within each of the even and odd $M$-subsequences, the coefficients $\{ c_l(M) \}$ tend uniformly to their asymptotic values denoted by $\{ c_l^{(e)} \}$ and $\{ c_l^{(o)} \}$, respectively. 
Thence, $$\label{4.10} \frac{n^{(e,o)}(x)}{n} = \sqrt{\frac{2}{\gamma\mu}} \sum_{l=0,\pm 1,\ldots} c^{(e,o)}_l \exp\left\{ - \frac{2\pi\gamma}{\mu} \left( \frac{x}{\lambda}-\frac{l}{\gamma} \right)^2 \right\}$$ The analogue of the symmetry relation (\[4.9\]) takes the form $$\label{4.11} c^{(e,o)}_l = c^{(e,o)}_{-l} \quad \quad \mbox{for all $l=0,\pm 1,\ldots$}$$ The periodicity assumption (\[2.12\]) implies $$\label{4.12} c_l^{(e,o)} = c_{l\pm \gamma}^{(e,o)} \quad \quad \mbox{for all $l=0,\pm 1,\ldots$}$$ The shift condition (\[2.13\]) between the “even” and “odd” states leads to the relations $$\label{4.13} c_l^{(e)} = c_{l\pm \frac{\gamma}{2}}^{(o)} \quad \quad \mbox{for all $l=0,\pm 1,\ldots$}$$ Based on Eqs. (\[4.11\])-(\[4.13\]) we conclude that there exist $\gamma/2+1$ independent asymptotic amplitudes $C_0, C_1,\ldots, C_{\frac{\gamma}{2}}$ such that $$\label{4.14} c_l^{(e)} = C_l, \quad \quad c_l^{(o)} = C_{\frac{\gamma}{2}-l} \quad \quad \mbox{for $l=0,1,\ldots,\frac{\gamma}{2}$}$$ All other coefficients can be generated from the basic set (\[4.14\]) by using the symmetry relations (\[4.11\]) and (\[4.12\]). The values of the asymptotic $C$-amplitudes depend on the dimensionless parameter $\mu$ of Eq. (\[4.4\]). They are constrained by the neutrality condition (\[2.11\]), written for $x$ ranging over one period as follows $$\label{4.15} \int_0^{\lambda} {\rm d} x \left[ \frac{n^{(e,o)}(x)}{n} - 1 \right] = 0$$ Simple algebra gives $$\label{4.16} C_0 +2(C_1 + \cdots + C_{\frac{\gamma}{2}-1}) +C_{\frac{\gamma}{2}} = \gamma$$ For instance, in the $\gamma=2$ case, there are two amplitudes $C_0$ and $C_1$ such that $$\begin{aligned} c_l^{(e)} & = & \left\{ \begin{array}{ll} C_0 & \mbox{for $l$ even} \cr C_1 & \mbox{for $l$ odd} \end{array} \right. \label{4.17} \\ c_l^{(o)} & = & \left\{ \begin{array}{ll} C_1 & \mbox{for $l$ even} \cr C_0 & \mbox{for $l$ odd} \end{array} \right. 
\label{4.18}\end{aligned}$$ The amplitudes are constrained by $$\label{4.19} C_0(\mu) + C_1(\mu) = 2$$ $\gamma=$ odd integer --------------------- For $\gamma$ odd, the fermionic formalism needs to be developed separately for the thermodynamic-limit subsequence with $N$ even and $N$ odd. When the particle number $N$ is even, the product $\gamma(N-1)$ is odd, $$\label{4.20} \gamma(N-1) = 2M + 1$$ with some integer $M$. The shift of the $j$-index enumeration in Eq. (\[3.12\]) $$\label{4.21} j = M + \frac{1}{2} + l, \quad \quad l = -\left( M+\frac{1}{2} \right), -M+\frac{1}{2}, \ldots, M+\frac{1}{2}$$ leads to the interaction strengths $$\label{4.22} w_{M+\frac{1}{2}+l} = \frac{\exp\left( \frac{2\pi l^2}{\gamma\mu}\right)}{\sqrt{2\gamma\mu}} \frac{1}{\sqrt{\pi}} \int_{-\sqrt{\frac{\pi\gamma}{2\mu}}N}^{\sqrt{\frac{\pi\gamma}{2\mu}}N} {\rm d}x \, \exp\left\{ -\left( x- \sqrt{\frac{2\pi}{\gamma\mu}}l\right)^2 \right\}$$ and to the coefficients $$\label{4.23} c_l(M) = \langle \Xi_{M+\frac{1}{2}+l} \Psi_{M+\frac{1}{2}+l} \rangle \sqrt{\frac{\gamma}{2\mu}} \exp\left( \frac{2\pi l^2}{\gamma\mu} \right)$$ possessing the symmetry $c_l(M)=c_{-l}(M)$. In the limit $N\to\infty$ (keeping $N$ even), one arrives at the density profile of the “even” state $$\label{4.24} \frac{n^{(e)}(x)}{n} = \sqrt{\frac{2}{\gamma\mu}} \sum_{l=\pm\frac{1}{2},\pm\frac{3}{2},\ldots} c^{(e)}_l \exp\left\{ - \frac{2\pi\gamma}{\mu} \left( \frac{x}{\lambda}-\frac{l}{\gamma} \right)^2 \right\}$$ The asymptotic coefficients satisfy the symmetry relation $$\label{4.25} c_l^{(e)} = c_{-l}^{(e)} \quad \quad \mbox{for all $l=\pm\frac{1}{2}, \pm\frac{3}{2}, \ldots$}$$ and the periodicity condition $$\label{4.26} c_l^{(e)} = c_{l\pm \gamma}^{(e)} \quad \quad \mbox{for all $l=\pm\frac{1}{2}, \pm\frac{3}{2}, \ldots$}$$ implied by Eqs. (\[2.12\]) and (\[2.14\]). 
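As an aside on the even-$\gamma$ analysis above: the constraint (\[4.19\]) is precisely the condition that makes the superposition (\[4.10\]) neutral over one period. A numerical sketch for $\gamma=2$ (with $\mu$ and the amplitudes chosen arbitrarily, subject only to $C_0+C_1=2$) illustrates the neutrality condition (\[4.15\]).

```python
import math

gamma, mu = 2, 3.0
C0, C1 = 0.6, 1.4                   # arbitrary amplitudes obeying (4.19): C0 + C1 = 2

def density_over_n(u):
    """n(x)/n of Eq. (4.10) for gamma=2 in units u = x/lambda, amplitudes (4.17)."""
    s = sum((C0 if l % 2 == 0 else C1)
            * math.exp(-(2 * math.pi * gamma / mu) * (u - l / gamma) ** 2)
            for l in range(-60, 61))
    return math.sqrt(2 / (gamma * mu)) * s

# Neutrality (4.15): the average of n(x)/n over one period [0, lambda] equals 1
K = 2000
avg = sum(density_over_n((k + 0.5) / K) for k in range(K)) / K
assert abs(avg - 1.0) < 1e-6
```

Summing the Gaussians centered at the even (respectively odd) values of $l$ over one period reproduces a full Gaussian integral, so the period average equals $(C_0+C_1)/\gamma$.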
As a consequence, there exist $(\gamma+1)/2$ independent amplitudes $C_{\frac{1}{2}}, C_{\frac{3}{2}},\ldots,C_{\frac{\gamma}{2}}$ such that $$\label{4.27} c_l^{(e)} = C_l \quad \quad \mbox{for $l=\frac{1}{2},\frac{3}{2},\ldots,\frac{\gamma}{2}$}$$ All other coefficients can be generated from the basic set (\[4.27\]) with the aid of the symmetry relations (\[4.25\]) and (\[4.26\]). When the particle number $N$ is odd, the product $\gamma(N-1)$ is expressible similarly as in Eq. (\[4.1\]), so that we can proceed along the lines of Section 4.1. The density profile of the “odd” state is again of the form (\[4.10\]) $$\label{4.28} \frac{n^{(o)}(x)}{n} = \sqrt{\frac{2}{\gamma\mu}} \sum_{l=0,\pm 1,\ldots} c^{(o)}_l \exp\left\{ - \frac{2\pi\gamma}{\mu} \left( \frac{x}{\lambda}-\frac{l}{\gamma} \right)^2 \right\}$$ where the asymptotic coefficients exhibit the symmetries $c_l^{(o)} = c_{-l}^{(o)}$ and $c_l^{(o)} = c_{l\pm \gamma}^{(o)}$ for all $l=0,\pm 1,\ldots$. The shift condition (\[2.13\]), when applied to the representations (\[4.24\]) and (\[4.28\]), gives $c_l^{(e)} = c_{l\pm\frac{\gamma}{2}}^{(o)}$. In view of Eq. (\[4.27\]), the basic set of the odd coefficients is given by $$\label{4.29} c_l^{(o)} = C_{\frac{\gamma}{2}-l} \quad \quad \mbox{for $l=0,1,\ldots,\frac{\gamma-1}{2}$}$$ The neutrality conditions of type (\[4.15\]) constrain the $C$-amplitudes as follows $$\label{4.30} 2\left[ C_{\frac{1}{2}}(\mu) + \cdots + C_{\frac{\gamma}{2}-1}(\mu) \right] + C_{\frac{\gamma}{2}}(\mu) = \gamma$$ For $\gamma=1$, one has the simple result $$\begin{aligned} c_l^{(e)} & = & 1 \quad \mbox{for all $l=\pm\frac{1}{2},\pm\frac{3}{2},\ldots$} \label{4.31} \\ c_l^{(o)} & = & 1 \quad \mbox{for all $l=0,\pm 1,\ldots$} \label{4.32} \end{aligned}$$ For $\gamma=3$, the above scheme results in $$\begin{aligned} c_l^{(e)} & = & \left\{ \begin{array}{ll} C_{\frac{1}{2}} & \mbox{for $l=\pm\frac{1}{2}+3k$} \cr C_{\frac{3}{2}} & \mbox{for $l=\frac{3}{2}+3k$} \end{array} \right. 
\label{4.33} \\ c_l^{(o)} & = & \left\{ \begin{array}{ll} C_{\frac{3}{2}} & \mbox{for $l=3k$} \cr C_{\frac{1}{2}} & \mbox{for $l=\pm 1+3k$} \end{array} \right. \label{4.34}\end{aligned}$$ where $k$ is an arbitrary integer. The $\mu$-dependent amplitudes are constrained by $$\label{4.35} 2 C_{\frac{1}{2}}(\mu) + C_{\frac{3}{2}}(\mu) = 3$$ FINITE-SIZE CALCULATIONS ======================== In this section, we check the analytic results obtained above against exact finite-$N$ calculations. The crucial problem is to determine the dependence of the partition function $Z_N(\gamma)$, given as the integral over anticommuting variables by Eq. (\[3.10\]), on the set of interaction strengths $\{ w_j \}_{j=0}^{\gamma(N-1)}$. Having this dependence at one’s disposal, the consequent fermionic correlators \[generated by using Eq. (\[3.11\])\] determine the $c$-coefficients of interest via relations (\[4.8\]) \[($\gamma$ even, $N$ arbitrary) or ($\gamma$ odd, $N$ odd)\] and (\[4.23\]) ($\gamma$ odd, $N$ even). For $\gamma=1$, one has the simple result [@Jancovici2] $$\begin{aligned} Z_N(1) & = & w_0 w_1 \cdots w_{N-1} \label{5.1} \\ \langle \Xi_j \Psi_j \rangle & = & \frac{1}{w_j}, \quad \quad j=0,1,\ldots,N-1 \label{5.2}\end{aligned}$$ In the thermodynamic $N\to\infty$ limit, after simple algebra both the even (\[4.31\]) and odd (\[4.32\]) types of the $c$-coefficients are reproduced exactly. For larger values of $\gamma$, the partition function is a more complicated function of the interaction strengths whose complexity grows with the particle number $N$. The methods for a systematic generation of $Z_N(\gamma)$, realized in practice through computer languages like [*Fortran*]{}, are summarized and further developed in Ref. [@Samaj2]. With the aid of these methods we were able to go up to $N=12$ particles for $\gamma=2$ and up to $N=9$ particles for $\gamma=3$. Results for $\gamma=2$ ---------------------- The finite-size results for $\gamma=2$ are summarized in Figs. 1-3. 
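The closed form (5.1)–(5.2) makes the $\gamma=1$ case fully explicit, and it lends itself to a direct numerical check. The following sketch (with $n=W=1$ chosen purely for convenience) verifies that the density (\[3.12\]) built from the correlators $1/w_j$ satisfies the neutrality condition (\[2.11\]), i.e. integrates to $nL=N/W$ over the cylinder length.

```python
import math

def w(j, N, gamma, n, W, L):
    """w_j of Eq. (3.8), evaluated via the error function."""
    a = 2 * math.pi * gamma * n
    b = (2 * math.pi / W) * (2 * j - gamma * (N - 1))
    m = b / (2 * a)
    return (math.exp(b * b / (4 * a)) / W) * 0.5 * math.sqrt(math.pi / a) * (
        math.erf(math.sqrt(a) * (L / 2 - m)) + math.erf(math.sqrt(a) * (L / 2 + m)))

N, W = 6, 1.0
L = N / W
n = N / (L * W)                  # background density, here n = 1
ws = [w(j, N, 1, n, W, L) for j in range(N)]

def density(x):
    """n(x) of Eq. (3.12) at gamma=1, with the exact correlators (5.2): <Xi_j Psi_j> = 1/w_j."""
    return sum(math.exp(-2 * math.pi * n * x * x
                        + (2 * math.pi / W) * (2 * j - (N - 1)) * x) / ws[j]
               for j in range(N)) / W ** 2

# Charge neutrality (2.11): integrating n(x) over [-L/2, L/2] gives n*L = N/W
K = 4000
total = sum(density(-L / 2 + (k + 0.5) * L / K) for k in range(K)) * (L / K)
assert abs(total - N / W) < 1e-4
```

The check works because each $j$-term of (\[3.12\]) integrates to $W w_j$ over $[-L/2,L/2]$, so with $\langle \Xi_j \Psi_j \rangle = 1/w_j$ every particle contributes exactly $1/W$.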
Their discussion follows the analysis of Subsection 4.1. The asymptotic amplitudes $C_0$ and $C_1$ are defined by relations (\[4.17\]) and (\[4.18\]). Taking the parameter $\mu=3$, the density profiles for the subsequence with the particle number $N$ even and the one with $N$ odd are plotted in Figs. 1a and 1b, respectively. In units of the period ($\lambda=1$), hard walls are localized at $x=\pm N/2$ where the boundary effects dominate. On the other hand, close to the center $x=0$, the particle density exhibits the characteristic periodic behavior already for relatively small particle numbers. The density at $x=0$ has a minimum for $N$ even and a maximum for $N$ odd. This means that the asymptotic amplitudes fulfill the inequality $C_0<C_1$. The Gaussians with the smaller asymptotic amplitude $C_0$ are localized at the minimum points of the density profile, and so they modify the density plot given by the equidistant array of the Gaussians corresponding to the larger asymptotic amplitude $C_1$ (and localized at the maximum points of the density profile) only marginally, without generating new extreme points. We observe this phenomenon numerically for any value of $\mu$. Taking the same value of the parameter $\mu=3$, the plots of the coefficients $\{ c_0,c_1,c_2,c_3\}$, reflecting the amplitudes of the Gaussians close to the center, versus $1/N$ are pictured in Fig. 2 where only the subsequence with $N$ odd is considered. As the particle number $N$ increases, $c_0\to c_2$ and $c_1\to c_3$ as expected. As can be seen, the convergence of the $c$-coefficients to their asymptotic values $C_0$ and $C_1$ is fast. The dependence of the asymptotic amplitudes $C_0$ and $C_1$, constrained by $C_0+C_1=2$, on the parameter $\mu$ is shown in Fig. 3. The finite-size errors in determining these amplitudes are deducible directly from the differences $c_0-c_2$ and $c_1-c_3$ at the largest particle number $N=12$. 
The error bars increase with increasing $\mu$; however, they are still so small that we omit them from the figure for clarity of presentation. In the limit $\mu\to 0$, which corresponds to the 1D jellium at zero temperature, $C_0=0$ and $C_1=2$; consequently there exists only one set of Gaussians (more precisely, of $\delta$-function peaks), in agreement with the exact 1D solution [@Kunz]. In the limit $\mu\to \infty$, which corresponds to the bulk 2D jellium, both asymptotic amplitudes tend to unity, as they should. Results for $\gamma=3$ ---------------------- The finite-size results for $\gamma=3$ are summarized in Figs. 4-6. We do not comment on these figures since they provide information similar to the previous ones for $\gamma=2$. We only recall that the asymptotic amplitudes $C_{1/2}$ and $C_{3/2}$, constrained by $2 C_{1/2}+C_{3/2}=3$, are defined by Eqs. (\[4.33\]) and (\[4.34\]). One concludes that the finite-size calculations confirm with high accuracy the predicted Gaussian structure of the crystalline state. The structure of the crystalline state for noninteger values of $\gamma$ is an open problem. ACKNOWLEDGMENTS {#acknowledgments .unnumbered} =============== We thank B. Jancovici for stimulating discussions. The support by a VEGA grant is acknowledged. [9]{} Ph. A. Martin, Sum rules in charged fluids, [*Rev. Mod. Phys.*]{} [**60**]{}:1075 (1988). R. J. Baxter, Statistical mechanics of a one-dimensional Coulomb system with a uniform charge background, [*Proc. Cambridge Phil. Soc.*]{} [**59**]{}:779 (1963). H. Kunz, The one-dimensional classical electron gas, [*Ann. Phys.*]{} [**85**]{}:303 (1974). H. J. Brascamp and E. H. Lieb, Some inequalities for Gaussian measures and the long-range order of the one-dimensional plasma, in [Functional Integration and Its Applications]{}, A. M. Arthurs, ed. (Oxford Univ. Press, 1975). Ch. Lugrin and Ph. A. Martin, Functional integration treatment of one-dimensional ionic mixtures, [*J. 
Math. Phys.*]{} [**23**]{}:2418 (1982). Ph. Choquard, [*Helv. Phys. Acta*]{} [**54**]{}:332 (1981). Ph. Choquard, P. J. Forrester, and E. R. Smith, The two-dimensional one-component plasma at $\Gamma=2$: the semiperiodic strip, [*J. Stat. Phys.*]{} [**33**]{}:13 (1983). M. Aizenman, S. Goldstein, and J. L. Lebowitz, Bounded fluctuations and translation symmetry breaking in one-dimensional particle systems, [*J. Stat. Phys.*]{} [**103**]{}:601 (2001). B. Jancovici and J. L. Lebowitz, Bounded fluctuations and translation symmetry breaking: a solvable model, [*J. Stat. Phys.*]{} [**103**]{}:619 (2001). L. [Š]{}amaj and J. K. Percus, A functional relation among the pair correlations of the two-dimensional one-component plasma, [*J. Stat. Phys.*]{} [**80**]{}:811 (1995). B. Jancovici, Exact results for the two-dimensional one-component plasma, [*Phys. Rev. Lett.*]{} [**46**]{}:386 (1981). L. [Š]{}amaj, Is the Two-Dimensional One-Component Plasma Exactly Solvable?, cond-mat/0402027, to appear in [*J. Stat. Phys.*]{} ![image](fig1a.eps) ![image](fig1b.eps) ![image](fig2.eps) ![image](fig3.eps) ![image](fig4a.eps) ![image](fig4b.eps) ![image](fig5.eps) ![image](fig6.eps) [^1]: We are grateful to B. Jancovici for pointing out this important fact to us.
**HYPERCONTRACTIVE MEASURES,** **TALAGRAND’S INEQUALITY, AND INFLUENCES** [D. Cordero-Erausquin, M. Ledoux]{} *University of Paris 6 and University of Toulouse, France* Abstract. – [*We survey several Talagrand type inequalities and their application to influences with the tool of hypercontractivity for both discrete and continuous, and product and non-product models. The approach covers similarly by a simple interpolation the framework of geometric influences recently developed by N. Keller, E. Mossel and A. Sen. Geometric Brascamp-Lieb decompositions are also considered in this context.*]{} [**1. Introduction**]{} In the famous paper \[T\], M. Talagrand showed that for every function $f$ on the discrete cube $ X = \{-1, +1\}^N$ equipped with the uniform probability measure $\mu $, $${\rm Var}_\mu (f) = \int_X f^2 d\mu - \bigg ( \int_X f d\mu \bigg)^2 \leq C \sum_{i=1}^N { {\| D_i f\| }_2^2 \over 1+ \log \big ( {\| D_i f\| }_2 / {\| D_i f\| }_1 \big ) } \eqno (1)$$ for some numerical constant $C \geq 1$, where ${\| \cdot \| }_p$ denotes the norm in $\L^p(\mu )$, $1 \leq p \leq \infty$, and for every $i = 1, \ldots, N$ and every $ x = (x_1, \ldots, x_N) \in \{-1, +1\}^N$, $$D_i f(x) = f( \tau _i x) - f (x) \eqno (2)$$ with $\tau _i x = (x_1, \ldots, x_{i-1}, -x_i, x_{i+1}, \ldots , x_N)$. Up to the numerical constant, this inequality improves upon the classical spectral gap inequality (see below) $${\rm Var}_\mu (f) \leq {1 \over 4} \sum_{i=1}^N {\| D_i f\| }_2^2 \, . 
\eqno (3)$$ The proof of (1) is based on a hypercontractivity estimate known as the Bonami-Beckner inequality \[Bo\], \[Be\] (see below). Inequality (1) was actually devised to recover (and extend) a famous result of J. Kahn, G. Kalai and N. Linial \[K-K-L\] about influences on the cube. Namely, applying (1) to the Boolean function $f = {\bf 1}_A $ for some set $A \subset \{-1, +1\}^N$, it follows that $$\mu (A) \big ( 1 - \mu (A) \big ) \leq C \sum_{i=1}^N{ 2I_i(A) \over 1 + \log \big (1/ \sqrt {2I_i(A)} \, \big )} \eqno (4)$$ where, for each $i = 1, \ldots, N$, $$I_i (A) = \mu \big ( \{ x \in A, \tau _i x \notin A \} \big )$$ is the so-called influence of the $i$-th coordinate on the set $A$ (noticing that $\| D_i {\bf 1}_A \|^p _p = 2 I_i(A)$ for every $p \geq 1$). In particular, for a set $A$ with $ \mu (A) = a$, there is a coordinate $i$, $1 \leq i\leq N$, such that $$I_i (A) \geq {a(1-a) \over 8CN} \, \log \Big ( {N \over a(1-a)} \Big) \geq {a(1-a) \log N \over 8C N} \eqno (5)$$ which is the main result of \[K-K-L\]. (To deduce (5) from (4), assume for example that $I_i(A) \leq \big ( {a(1-a) \over N} \big) ^{1/2}$ for every $i=1, \ldots , N$, since otherwise the result holds. Then, from (4), there exists $i$, $1 \leq i \leq N$, such that $${a(1-a) \over CN} \leq { 2I_i(A) \over 1 + \log \big (1/\sqrt { 2I_i(A)} \, \big )} \leq { 8 I_i(A) \over 4 + \log ( N / 4 a(1-a) ) }$$ which yields (5).) Note that (5) remarkably improves by an (optimal) factor $\log N$ what would follow from the spectral gap inequality (3) applied to $ f = {\bf 1}_A$. The numerical constants like $C$ throughout this text are not sharp. The aim of this note is to amplify the hypercontractive proof of Talagrand’s original inequality (1) to various settings, including non-product spaces and continuous variables, and in particular to address versions suitable to geometric influences. It is part of the folklore indeed (cf. e.g. 
\[B-H\]) that an inequality similar to (1), with the same hypercontractive proof, holds for the standard Gaussian measure $\mu $ on $\rr^N$ (viewed as a product measure of one-dimensional factors), that is, for every smooth enough function $f$ on $\rr^N$ and some constant $C>0$, $${\rm Var}_\mu (f) \leq C \sum_{i=1}^N { {\| \partial_i f\| }_2^2 \over 1+ \log ( {\| \partial_i f\| }_2 / {\| \partial_i f\| }_1) } \, . \eqno (6)$$ (A proof will be given in Section 2 below.) However, the significance of the latter for influences is not clear, since its application to characteristic functions is not immediate (and requires notions of capacities). Recently, N. Keller, E. Mossel and A. Sen \[K-M-S\] introduced a notion of geometric influence of a Borel set $A$ in $\rr^N$ with respect to a measure $\mu $ (such as the Gaussian measure) simply as $ {\| \partial_i f \| }_1$ for some smooth approximation $f$ of $ {\bf 1}_A$, and proved for it the analogue of (5) (with $\sqrt {\log N}$ instead of $\log N$) for the standard Gaussian measure on $\rr^N$. It is therefore of interest to seek suitable versions of Talagrand’s inequality involving only $\L^1$-norms $ {\| \partial_i f \| }_1$ of the partial derivatives. While the authors of \[K-M-S\] use isoperimetric properties, we show here how the common hypercontractive tool together with a simple interpolation argument may be developed similarly to reach the same conclusion. In particular, for the standard Gaussian measure $\mu $ on $\rr^N$, we will see that for every smooth enough function $f $ on $\rr^N$ such that $|f| \leq 1$, $${\rm Var}_\mu (f) \leq C \sum_{i=1}^N { {\| \partial _i f\|} _1 \big (1 + {\| \partial _i f\|} _1 \big) \over \big [ 1 + \log^+ \big (1/ {\| \partial _i f\|}_1 \big ) \big ]^{1/2} } \, . 
\eqno (7)$$ Applied to $f = {\bf 1}_A$, this inequality indeed ensures the existence of a coordinate $i$, $1 \leq i\leq N$, such that the geometric influence of $A$ along $i$ is at least of the order of $ {\sqrt {\log N} \over N} $, that is one of the main conclusions of \[K-M-S\] (where it is shown moreover that the bound is sharp). In this continuous setting, the hypercontractive approach yields more general examples of measures with such an influence property in the range between exponential and Gaussian for which only a logarithmic Sobolev type inequality is needed while \[K-M-S\] required an isoperimetric inequality for the individual measures $\mu_i$. This note is divided into two main parts. In the first one, we present Talagrand type inequalities for various models, from the discrete cube to Gaussian and more general product measures, by the general principle of hypercontractivity of Markov semigroups. The method of proof, originating in Talagrand’s work, has been used recently by R. O’Donnell and K. Wimmer \[OD-W1\], \[OD-W2\] to investigate non-product models such as random walks on some graphs which enter the general presentation below. Actually, most of the Talagrand inequalities we present in the discrete setting are already contained in the work by R. O’Donnell and K. Wimmer. It is worth mentioning that an approach to the Talagrand inequality (1) rather based on the logarithmic Sobolev inequality was devised in \[Ros\] and \[F-S\] a few years ago. The abstract semigroup approach applies in the same way on the sphere along the decomposition of the Laplacian. Geometric Brascamp-Lieb decompositions within this setting are also discussed. In the second part, we address our new version (7) of Talagrand’s inequality towards geometric influences and the recent results of \[K-M-S\] by a further interpolation step on the hypercontractive proof. 
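On the discrete cube the quantities entering (1)–(5) can be computed exhaustively for small $N$. The following sketch (a majority set on $\{-1,+1\}^8$, chosen only as a test case) verifies the identity $\| D_i {\bf 1}_A \|^p_p = 2 I_i(A)$ and the spectral gap bound (3).

```python
import itertools

N = 8

def flip(x, i):
    """tau_i x: flip the i-th coordinate of x in {-1,+1}^N."""
    return x[:i] + (-x[i],) + x[i + 1:]

cube = list(itertools.product((-1, 1), repeat=N))
mu = 1.0 / len(cube)                    # uniform measure

A = {x for x in cube if sum(x) > 0}     # majority set (test case)
f = {x: (1.0 if x in A else 0.0) for x in cube}

# Influences I_i(A) = mu({x in A : tau_i x not in A})
I = [sum(mu for x in A if flip(x, i) not in A) for i in range(N)]

# ||D_i 1_A||_p^p = 2 I_i(A), here for p = 1 and p = 2
for i in range(N):
    l1 = sum(mu * abs(f[flip(x, i)] - f[x]) for x in cube)
    l2 = sum(mu * (f[flip(x, i)] - f[x]) ** 2 for x in cube)
    assert abs(l1 - 2 * I[i]) < 1e-12 and abs(l2 - 2 * I[i]) < 1e-12

# Spectral gap inequality (3): Var(1_A) <= (1/4) sum_i ||D_i 1_A||_2^2
a = len(A) * mu
assert a * (1 - a) <= 0.25 * sum(2 * Ii for Ii in I) + 1e-12
```

The identity $\| D_i {\bf 1}_A\|^p_p = 2 I_i(A)$ holds because $D_i {\bf 1}_A$ takes only the values $0, \pm 1$ and $\tau_i$ is a measure-preserving involution.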
In the last part of this introduction, we describe a convenient framework in order to develop hypercontractive proofs of Talagrand type inequalities. While of some abstract flavor, the setting easily covers two main concrete instances, probability measures on finite state spaces (as invariant measures of some Markov kernels) and continuous probability measures of the form $d\mu (x) = \eu^{-V(x)} dx$ on the Borel sets of $\rr^n$ where $V$ is some (smooth) potential (as invariant measures of the associated diffusion operators $ {\Delta - \nabla V \cdot \nabla }$). We refer for the material below to the general references \[Ba\], \[D-SC\], \[Roy\], \[Aal\], \[B-G-L\]... Let $\mu $ be a probability measure on a measurable space $(X, {\cal A})$. For a function $f : X \to \rr$ in $\L^2(\mu )$, define its variance with respect to $\mu $ by $${\rm Var}_\mu (f) = \int_X f^2 d\mu - \bigg ( \int_X f d\mu \bigg)^2 .$$ Similarly, whenever $f >0$, define its entropy by $${\rm Ent}_\mu (f) = \int_X f \log f d \mu - \int_X f d\mu \log \bigg ( \int_X f d\mu \bigg)$$ provided it is well-defined. The $\L^p(\mu )$-norms, $1 \leq p \leq \infty$, will be denoted by ${\| \cdot \| }_p$. Let then $\P$ be a Markov semigroup with generator $\L $ acting on a suitable class of functions on $(X, {\cal A})$. Assume that $\P $ and $\L$ have an invariant, reversible and ergodic probability measure $\mu $. This ensures that the operators $P_t$ are contractions in all $\L^p(\mu )$-spaces, $1 \leq p \leq \infty$. The Dirichlet form associated to the couple $(\L , \mu)$ is then defined, on functions $f, g$ of the Dirichlet domain, as $${\cal E} (f, g) = \int_X f (-\L g) d\mu .$$ Within this framework, the first example of interest is the case of a Markov kernel $K$ on a finite state space $X$ with invariant ($ \sum_{x \in X} K(x,y) \mu (x) = \mu (y)$, $ x \in X$) and reversible ($K(x,y) \mu (x) = K(y,x) \mu (y)$, $x,y \in X$) probability measure $\mu $. 
The Markov operator $ \L = K- {\rm Id} $ generates the semigroup of operators $P_t = \eu^{t \L}$, $t\geq 0$, and defines the Dirichlet form $${\cal E} (f,g) = \int _X f (-\L g) d\mu = {1\over 2} \sum_{x,y \in X} \big [ f(x) - f(y) \big ] \big [g(x) - g(y) \big ] K(x,y) \mu (x)$$ on functions $f, g : X \to \rr$. The second class of examples is the case of $X= \rr^n$ equipped with its Borel $\sigma $-field. Letting $V : \rr^n \to \rr$ be such that $\int_{\rrr^n} \eu^{-V(x)} dx = 1$, under mild smoothness and growth conditions on the potential $V$, the second order operator $\L = \Delta - \nabla V \cdot \nabla $ admits $d\mu (x) = \eu^{-V(x)} dx$ as symmetric and invariant probability measure. The operator $\L $ generates the Markov semigroup of operators $\P$ and defines by integration by parts the Dirichlet form $${\cal E} (f,g) = \int_{\rrr^n} f (- \L g) d \mu = \int_{\rrr^n} \nabla f \cdot \nabla g \, d\mu$$ for smooth functions $f,g$ on $\rr^n$. Given such a couple $(\L, \mu )$, it is said to satisfy a spectral gap, or Poincaré, inequality if there is a constant $\lambda >0$ such that for all functions $f$ of the Dirichlet domain, $$\lambda \, {\rm Var}_\mu (f) \leq {\cal E} (f,f) . \eqno (8)$$ Similarly, it satisfies a logarithmic Sobolev inequality if there is a constant $\rho >0$ such that for all functions $f$ of the Dirichlet domain, $$\rho \, {\rm Ent}_\mu (f^2) \leq 2 \, {\cal E}(f,f). \eqno (9)$$ One speaks of the spectral gap constant (of $(\L, \mu )$) as the best $\lambda >0$ for which (8) holds, and of the logarithmic Sobolev constant (of $(\L, \mu )$) as the best $\rho >0$ for which (9) holds. We still use $\lambda $ and $\rho $ for these constants. It is classical that $\rho \leq \lambda $. Both the spectral gap and logarithmic Sobolev inequalities translate equivalently on the associated semigroup $\P$. 
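The finite-state framework just described lends itself to a direct numerical sanity check. The following Python sketch (an illustration only, not part of the argument; the weighted-graph construction and the 5- and 4-point state spaces are arbitrary choices) verifies invariance and reversibility, the agreement of the two expressions for the Dirichlet form, the Poincaré inequality (8) with the computed spectral gap, and the stability of the spectral gap under products:

```python
import numpy as np

rng = np.random.default_rng(0)

def chain(n):
    # Random walk on a weighted complete graph: K(x,y) = w(x,y)/d(x) with
    # symmetric weights w; mu(x) = d(x)/sum(d) is invariant and reversible.
    w = rng.uniform(0.1, 1.0, size=(n, n))
    w = (w + w.T) / 2
    d = w.sum(axis=1)
    return w / d[:, None], d / d.sum()

def dirichlet(K, mu, f, g):
    # E(f,g) = int_X f (-L g) dmu with L = K - Id
    L = K - np.eye(len(mu))
    return float(np.sum(f * (-L @ g) * mu))

def spectral_gap(K, mu):
    # -L is symmetric on L^2(mu); conjugating by diag(sqrt(mu)) makes it a
    # symmetric matrix, so the gap is its second smallest eigenvalue.
    D = np.sqrt(mu)
    M = D[:, None] * (np.eye(len(mu)) - K) / D[None, :]
    return np.sort(np.linalg.eigvalsh((M + M.T) / 2))[1]

K1, mu1 = chain(5)
K2, mu2 = chain(4)

# invariance and reversibility of (K1, mu1)
assert np.allclose(mu1 @ K1, mu1)
assert np.allclose(K1 * mu1[:, None], (K1 * mu1[:, None]).T)

# the two expressions for the Dirichlet form agree
f, g = rng.normal(size=5), rng.normal(size=5)
double_sum = 0.5 * sum((f[x] - f[y]) * (g[x] - g[y]) * K1[x, y] * mu1[x]
                       for x in range(5) for y in range(5))
assert np.isclose(dirichlet(K1, mu1, f, g), double_sum)

# Poincare inequality (8) with the computed spectral gap
lam1 = spectral_gap(K1, mu1)
var = np.sum(f**2 * mu1) - np.sum(f * mu1)**2
assert lam1 * var <= dirichlet(K1, mu1, f, f) + 1e-10

# product stability: the gap of L1 x Id + Id x L2 is min(lam1, lam2)
lam2 = spectral_gap(K2, mu2)
Lp = np.kron(K1 - np.eye(5), np.eye(4)) + np.kron(np.eye(5), K2 - np.eye(4))
Kp, mup = Lp + np.eye(20), np.kron(mu1, mu2)
assert np.isclose(spectral_gap(Kp, mup), min(lam1, lam2))
```

The product check exploits the fact that the eigenvalues of the tensorized generator are the sums of the eigenvalues of the factors, so the smallest nonzero one is the minimum of the two gaps.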
Namely, the spectral gap inequality (8) is equivalent to saying that $${\| P_t f \| }_2 \leq \eu^{-\lambda t} \, {\| f \| }_2$$ for every $t \geq 0$ and every mean zero function $f$ in $\L^2(\mu )$. Equivalently for the further purposes, for every $f \in \L^2(\mu )$ and every $t > 0$, $${\rm Var}_\mu (f) \leq {1 \over 1 - \eu^{-2\lambda t}} \, \big [ {\| f\| }_2^2 - {\| P_t f\| }_2^2 \big ] . \eqno (10)$$ On the other hand, the logarithmic Sobolev inequality gives rise to hypercontractivity which is a smoothing property of the semigroup. Precisely, the logarithmic Sobolev inequality (9) is equivalent to saying that, whenever $p \geq 1 + \eu^{-2\rho t}$, for all functions $f$ in $\L^p(\mu )$, $${\| P_t f \| }_2\leq {\| f \| }_p . \eqno (11)$$ For simplicity, we say below that a probability measure $\mu $ in this context is hypercontractive with constant $\rho $. A standard operation on Markov operators is the product operation. Let $(\L_1, \mu _1)$ and $(\L_2, \mu _2)$ be Markov operators on respective spaces $X_1$ and $X_2$. Then $$\L = \L_1 \otimes {\rm Id} + {\rm Id} \otimes \L_2$$ is a Markov operator on the product space $X_1 \times X_2$ equipped with the product probability measure $\mu _1 \otimes \mu _2$. The product semigroup $\P$ is similarly obtained as the tensor product $P_t = P_t^1 \otimes P_t^2$ of the semigroups on each factor. For the product Dirichlet form, the spectral gap and logarithmic Sobolev constants are stable in the sense that, with the obvious notation, $\lambda = \min (\lambda _1, \lambda _2)$ and $\rho = \min (\rho _1, \rho _2)$. This basic stability by products will allow for constants independent of the dimension in the Talagrand type inequalities under investigation. For the clarity of the exposition, we will not mix below products of continuous and discrete spaces, although this may easily be considered. Let us illustrate the preceding definitions and properties on two basic examples. 
Consider first the two-point space $ X = \{-1, +1\}$ with the measure $\mu = p \delta_{+1} + q \delta_{-1}$, $p \in [0,1]$, $p+q=1$, and the Markov kernel $K(x,y ) = \mu (y)$, $x,y \in X$. Then, for every function $f : X \to \rr$, $${\cal E} (f,f) = \int_X f (- \L f) d\mu = {\rm Var }_\mu (f)$$ so that the spectral gap $\lambda =1$. The logarithmic Sobolev constant is known to be $$\rho = { 2 (p-q) \over \log p - \log q} \quad (= 1 \; \; {\hbox {if}} \; \; p=q). \eqno (12)$$ The product chain on the discrete cube $X = \{-1,+1\}^N$ with the product probability measure $\mu = (p \delta_{+1} + q \delta_{-1})^{\otimes N}$ and generator $\L = \sum_{i=1}^N \L_i$ is associated to the Dirichlet form $${\cal E} (f,f) = \int _X \sum_{i=1}^N f (-\L_i f) d\mu = pq \int _X \sum_{i=1}^N | D_i f |^2 d\mu$$ where $D_i f$ is defined in (2). By the previous product property, it admits 1 as spectral gap and $\rho $ given by (12) as logarithmic Sobolev constant. In its hypercontractive formulation, the case $p= q$ is the content of the Bonami-Beckner inequality \[Bo\], \[Be\]. As mentioned before, M. Talagrand \[T\] thus used hypercontractivity on the discrete cube $\{-1,+1\}^N$ equipped with the product measure $\mu = (p \delta_{+1} + q \delta_{-1})^{\otimes N}$ to prove that for any function $ f : \{-1,+1\}^N \to \rr$, $${\rm Var}_\mu (f) \leq { C pq (\log p - \log q) \over p - q} \, \sum_{i=1}^N { {\| D_i f\| }_2^2 \over 1+ \log \big ( {\| D_i f\| }_2 / 2\, \sqrt {pq} \, {\| D_i f\| }_1\big ) } \eqno (13)$$ for some numerical constant $C >0$ (this statement will be covered in Section 2 below). This in turn yields a version of the influence result of \[K-K-L\] on the biased cube. In the continuous setting $X = \rr^n$, the case of a quadratic potential $V$ amounts to the Hermite or Ornstein-Uhlenbeck operator $\L = \Delta - x \cdot \nabla $ with invariant measure the standard Gaussian measure $d\mu (x) = (2\pi )^{-n/2} \, \eu^{- |x|^2/2} dx$. 
It is known here that $\lambda = \rho =1$ independently of the dimension. (More generally, if $ V (x) - c \, {|x|^2 \over 2}$ is convex for some $c>0$, then $\lambda \geq \rho \geq c$.) Actually, $\L $ may also be viewed as the sum $\sum_{i=1} ^n \L_i$ of one-dimensional Ornstein-Uhlenbeck operators along each coordinate, and $\mu $ as the product measure of standard normal distributions. Within this product structure, the analogue (6) of (13) has been known for some time, and will be recalled below. [**2. Hypercontractivity and Talagrand’s inequality**]{} This section presents the general hypercontractive approach to Talagrand type inequalities including the discrete cube, the Gaussian product measure and more general non-product models. The method of proof, directly inspired from \[T\], has been developed recently by R. O’Donnell and K. Wimmer \[OD-W1\], \[OD-W2\] towards non-product extensions on suitable graphs. Besides hypercontractivity, a key feature necessary to develop the argument is a suitable decomposition of the Dirichlet form along “directions" commuting with the Markov operator or its semigroup. These directions are immediate in a product space, but do require additional structure in more general contexts. In the previous abstract setting of a Markov semigroup $\P$ with generator $\L $, assume thus that the associated Dirichlet form ${\cal E}$ may be decomposed along directions $\Gamma_i$ acting on functions on $X$ as $${\cal E} (f, f) = \sum_{i=1}^N \int_X \Gamma _i (f)^2 d\mu \eqno (14)$$ in such a way that, for each $i= 1, \ldots , N$, $\Gamma _i $ commutes to $\P $ in the sense that, for some constant $ \kappa \in \rr$, every $t\geq 0$ and every $f$ in a suitable family of functions, $$\Gamma _i (P_t f ) \leq \eu^{\kappa t} \, P_t \big ( \Gamma _i (f) \big) . 
\eqno (15)$$ These properties will be clearly illustrated on the main examples of interest below, with in particular explicit descriptions of the classes of functions for which (14) and (15) may hold. We first present the Talagrand inequality in this context. The proof is the prototype of the hypercontractive argument used throughout this note and applied to various examples. [**Theorem 1.**]{} [*In the preceding setting, assume that $(\L , \mu )$ is hypercontractive with constant $\rho >0$ and that (14) and (15) hold. Then, for any function $f$ in $\L^2(\mu)$, $${\rm Var}_\mu (f) \leq C(\rho , \kappa ) \sum_{i=1}^N {{\| \Gamma _i f\| }_2^2 \over 1 + \log ( {\| \Gamma _i f\|}_2 / {\| \Gamma _i f\|}_1 ) }$$ where $C(\rho , \kappa ) = 4 \, \eu ^{(1 + (\kappa /\rho ))^+} \! / \rho $.*]{} [*Proof.*]{} The starting point is the variance representation along the semigroup $\P$ of a function $f $ in the $\L^2(\mu )$-domain of the semigroup as $${\rm Var}_\mu (f) = - \int_0^\infty \bigg ({d \over dt} \int_X (P_t f)^2 d \mu \bigg ) dt = - 2 \int_0^\infty \bigg ( \int_X P_t f \, \L P_t f d\mu \bigg ) dt .$$ The time integral has to be handled both for the large and small values. 
For the large values of $t$, we make use of the exponential decay provided by the spectral gap in the form of (10) to get that, with $T = 1/2\rho $ for example since $\rho \leq \lambda $, $${\rm Var}_\mu (f) \leq 2 \, \big [ {\| f\| }^2_2 - {\| P_T f\| }^2_2 \big ] .$$ We are thus left with the variance representation of $${\| f\| }^2_2 - {\| P_T f\| }^2_2 = - 2 \int_0^T \bigg ( \int_X P_t f \, \L P_t f d\mu \bigg ) dt = 2 \int_0^T {\cal E} (P_t f, P_t f) dt .$$ Now by the decomposition (14), $${\| f\| }^2_2 - {\| P_T f\| }^2_2 = 2 \sum_{i=1}^N \int _0^T \bigg ( \int _X \big(\Gamma _i (P_t f) \big ) ^2 d\mu \bigg ) dt .$$ Under the commutation assumption (15), $$\int _X \big(\Gamma _i (P_t f) \big ) ^2 d\mu \leq \eu^{2\kappa t} \int _X \big (P_t \big (\Gamma _i (f) \big ) \big )^2 d\mu .$$ Since $\P $ is hypercontractive with constant $\rho >0$, for every $i = 1, \ldots , N$ and $t \geq 0$, $$\big \| P_t \big ( \Gamma _i (f) \big ) \big \| _2 \leq { \big \| \Gamma _i (f) \big \|} _p$$ where $ p = p(t) = 1 + \eu^{-2\rho t} \leq 2$. After the change of variables $ p(t) = v $, we have thus reached at this point the inequality $${\rm Var}_\mu (f) \leq {2 \, \eu ^{(1 + (\kappa /\rho ))^+} \over \rho } \sum_{i=1}^N \int_1^2 { \big \| \Gamma _i (f) \big \|} ^2_v \, dv. \eqno (16)$$ This inequality essentially amounts to Theorem 1. Indeed, by Hölder’s inequality, $${ \big \| \Gamma _i (f) \big \|} _v \leq { \big \| \Gamma _i (f) \big \|} _1^\theta \, { \big \| \Gamma _i (f) \big \|} _2^{1 -\theta }$$ where $\theta = \theta (v) \in [0,1]$ is defined by $ {1 \over v} = {\theta \over 1} + {1-\theta \over 2}$. Hence $$\int_1^2 { \big \| \Gamma _i (f) \big \|} ^2_v \, dv \leq { \big \| \Gamma _i (f) \big \|} ^2_2 \int _1^2 b^{2 \theta (v) } dv$$ where $ b = {\| \Gamma _i (f)\| }_1 / {\| \Gamma _i (f)\| }_ 2 \leq 1$. 
It remains to evaluate the latter integral with $2\theta (v) = s$, $$\int_1^2 b ^{2 \theta (v) } dv \leq \int_0^2 b^s ds \leq { 2 \over 1 + \log (1/b)}$$ from which the conclusion follows. Inequality (16) of the preceding proof may also be used towards a version of Theorem 1 with Orlicz norms as emphasized in \[T\]. As in \[T\], let $\varphi : \rr_+ \to \rr_+$ be convex such that $\varphi (x) = {x^2/\log (\eu+x)}$ for $x\geq 1$, and $\varphi (0) = 0$, and denote $${\| g\| }_\varphi = \inf \bigg \{ c > 0 \, ; \int_X \varphi \big ( |g | / c \big ) d\mu \leq 1 \bigg \}$$ the associated Orlicz norm of a measurable function $ g : X \to \rr$. Then, for some numerical constant $C>0$, $$\int _1^2 {\|g \|}_v^2 \, dv \leq C \, {\|g\|}_\varphi ^2 \eqno (17)$$ so that (16) yields $${\rm Var}_\mu (f) \leq {2 C \, \eu ^{(1 + (\kappa /\rho ))^+} \over \rho } \sum_{i=1}^N { \big \| \Gamma _i (f) \big \|} ^2_\varphi . \eqno (18)$$ Since as pointed out in Lemma 2.5 of \[T\], $${\|g\|}_\varphi ^2 \leq { C \, {\| g\|} _2^2 \over 1 + \log ( {\| g \|}_2 / {\| g\|}_1 ) } \, ,$$ we see that (18) improves upon Theorem 1. To briefly check (17), assume by homogeneity that $ \int_X g^2 /\log (e+g) d\mu \leq 1 $ for some non-negative function $g$. Then, setting $g_k = g \, 1_{\{ 2^{k-1} <g\leq 2^k\} }$, $k\geq 1$, and $g_0 = g \, 1_{\{ g\leq 1\} }$, $$\sum_{k \in \nnn} {1 \over k+1} \int_X g_k^2 d\mu \leq C_1 \eqno (19)$$ for some numerical constant $C_1>0$. Hence, since $g_k \leq 2^k$ for every $k$, $$\eqalign { \int _1^2 {\| g\|}_v^2 \, dv & = \int _1^2 \bigg ( \sum_{k \in \nnn} \int_X g_k^v d\mu \bigg )^{2/v} dv \cr & \leq 4 \int _1^2 \bigg ( \sum_{k \in \nnn} 2^{-(2-v)k} \int _X g_k^2 d\mu \bigg )^{2/v} dv \cr & \leq C_2 \sum_ {k \in \nnn} \bigg(\int _1^2 (k+1)^{2/v} 2^{-2(2-v)k/v} dv \bigg ) {1\over k+1}\int \! g_k^2 d\mu \cr }$$ where we used $(19)$ as convexity weights in the last step. 
Now, it is easy to check that $$\int _1^2 (k+1)^{2/v} 2^{-2(2-v)k/v} dv \leq C_3$$ uniformly in $k$ so that $ \int _1^2 {\| g\|}_v^2 \, dv \leq C_1C_2C_3$, thus concluding the claim. We next illustrate the general Theorem 1 on various examples of interest. On a probability space $(X, {\cal A}, \mu )$, consider first the Markov operator $\L f = \int_X f d\mu - f$ acting on integrable functions (in other words $Kf = \int_X f d\mu$). This operator is symmetric with respect to $\mu $ with Dirichlet form $${\cal E} (f, f) = \int _X f (- \L f) d\mu = {\rm Var}_\mu (f).$$ In particular, it has spectral gap 1. Let now $X = X_1 \times \cdots \times X_N$ be a product space with product probability measure $ \mu = \mu _1 \otimes \cdots \otimes \mu _N$. Consider the product operator $\L = \sum_{i=1}^N \L_i$ where $\L_i$ is acting on the $i$-th coordinate of a function $f$ as $\L_i f = \int_{X_i} f d\mu_i - f$. The product operator $\L $ has still spectral gap 1. Its Dirichlet form is given by $${\cal E} (f, f) = \sum_{i=1}^N \int_X f ( - \L_i f ) d \mu = \sum_{i=1}^N \int_X ( \L_i f )^2 d \mu .$$ We are therefore in the setting of a decomposition of the type (14). Moreover, it is immediately checked that $ \L_i \, \L = \L \, \L_i$ for every $i = 1, \ldots, N$, and thus the commutation property (15) also holds (with $\kappa =0$). Hence Theorem 1 applies for this model with hypercontractive constant $ \rho = \min_{1 \leq i\leq N} \rho _i >0$. In particular, Theorem 1 includes Talagrand’s inequality (13) for the hypercube $X = \{-1, +1\}^N$ with the product measure $\mu = (p \delta_{+1} + q \delta_{-1})^{\otimes N}$ with hypercontractive constant given by (12), for which it is immediately checked that, for every $r\geq 1$ and every $i = 1, \ldots , N$, $$\int_X | \L_i f |^r d\mu = (pq^r + p^rq) \int_ X |D_i f|^r d\mu .$$ Non-product examples may be considered similarly as has been emphasized recently in \[OD-W1\] and \[OD-W2\] with similar arguments. 
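Since all the two-point quantities involved are explicit, the biased factor can be checked numerically. The following Python sketch (an illustration only; the choice $p = 0.3$ and the random trials are arbitrary) verifies the identity for $\int_X |\L_i f|^r d\mu$ on a single coordinate, the logarithmic Sobolev inequality (9) with the constant (12), and the hypercontractive bound (11):

```python
import numpy as np

rng = np.random.default_rng(1)
p = 0.3
q = 1.0 - p
rho = 2 * (p - q) / (np.log(p) - np.log(q))   # log-Sobolev constant (12)

def mean(f):                  # integral against mu on {+1, -1}, f = (f(+1), f(-1))
    return p * f[0] + q * f[1]

def norm(f, r):               # L^r(mu) norm on the two-point space
    return (p * abs(f[0])**r + q * abs(f[1])**r) ** (1.0 / r)

# identity for the biased cube (one coordinate): int |L f|^r = (p q^r + p^r q) |Df|^r
f = rng.normal(size=2)
Lf = mean(f) - f
for r in (1.0, 1.5, 2.0, 3.0):
    lhs = p * abs(Lf[0])**r + q * abs(Lf[1])**r
    rhs = (p * q**r + p**r * q) * abs(f[0] - f[1])**r
    assert np.isclose(lhs, rhs)

# log-Sobolev inequality (9): rho Ent(f^2) <= 2 Var(f) = 2 E(f,f)
for _ in range(200):
    f = rng.uniform(0.1, 3.0, size=2)
    var = mean(f**2) - mean(f)**2
    ent = mean(f**2 * np.log(f**2)) - mean(f**2) * np.log(mean(f**2))
    assert rho * ent <= 2 * var + 1e-12

# hypercontractivity (11): ||P_t f||_2 <= ||f||_{p(t)}, p(t) = 1 + e^{-2 rho t}
for t in (0.05, 0.2, 1.0, 5.0):
    for _ in range(200):
        f = rng.uniform(-2.0, 2.0, size=2)
        Ptf = np.exp(-t) * f + (1 - np.exp(-t)) * mean(f)
        assert norm(Ptf, 2.0) <= norm(f, 1 + np.exp(-2 * rho * t)) + 1e-12
```

The semigroup here is explicit, $P_t f = \eu^{-t} f + (1-\eu^{-t}) \int f d\mu$, which is what the third check uses.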
Let for example $G$ be a finite group, and let $S$ be a symmetric set of generators of $G$. The Cayley graph associated to $S$ is the graph whose vertices are the elements of $G$ and whose edges are the couples $(g,gs)$ where $g \in G$ and $s \in S$. The transition kernel associated to this graph is $$K(x,y) = {1 \over |S|} \, {\bf 1}_S (y x^{-1}), \quad x, y \in G,$$ where $|S|$ is the cardinality of $S$. The uniform probability measure $\mu $ on $G$ is an invariant and reversible measure for $K$. This framework includes the example of $G = {\cal S}_n$, the symmetric group on $n$ elements, with the set of transpositions as generating set and the uniform measure as invariant and symmetric measure. Given such a finite Cayley graph $G$ with generator set $S $, kernel $K$ and uniform measure $\mu $ as invariant measure, the associated Dirichlet form may be expressed on functions $ f : G \to \rr$ in the form (14) $${\cal E} (f,f) = {1 \over 2 |S|} \sum_{s \in S} \sum_{x \in G} \big [ f(sx) - f (x) \big]^2 \mu (x) = {1 \over 2 |S|} \sum_{s \in S} {\| D _s f \| }_2^2$$ where for $s \in S$, $ D _s f (x) = f(sx) - f(x)$, $ x \in G$. In order that the operators $D_s$ commute to $K$ in the sense of (15) (with again $\kappa =0$), it is necessary to assume that $S$ is stable under conjugation in the sense that $${\hbox {for all}} \, \, u \in S, \quad u \, S \, u^{-1} = S$$ as it is the case for the set of transpositions on the symmetric group ${\cal S}_n$. The following statement from \[OD-W1\] is thus an immediate consequence of the general Theorem 1. [**Corollary 2.**]{} [*Under the preceding notation and assumptions, denote by $\rho $ the logarithmic Sobolev constant of the chain $ (K, \mu )$. Then for every function $f$ on $G$,*]{} $${\rm Var}_\mu (f ) \leq {2 \eu \over \rho |S| } \, \sum_{s \in S} {\| D _s f\| _2^2 \over 1 + \log \big ( \| D _s f\| _2 / \| D _s f\| _1 \big ) } \, .$$ One may wonder about the significance of this Talagrand type inequality for influences. 
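The role of the conjugation-stability condition in the commutation $D_s K = K D_s$ can be made concrete on the smallest non-trivial example, the symmetric group ${\cal S}_3$ with the transpositions as generating set. The following Python sketch (an illustration only, not part of the argument) builds the kernel $K$ and checks both the stability of $S$ and the commutation:

```python
import numpy as np
from itertools import permutations

# S_3 acting on {0,1,2}; elements as tuples, (s o g)(k) = s[g[k]] is composition
G = list(permutations(range(3)))
idx = {g: i for i, g in enumerate(G)}

def comp(s, g):
    return tuple(s[g[k]] for k in range(3))

def inv(u):                    # inverse permutation
    return tuple(sorted(range(3), key=lambda k: u[k]))

S = [g for g in G if sum(g[k] != k for k in range(3)) == 2]   # transpositions
assert len(S) == 3

# stability under conjugation: u S u^{-1} = S for every u in S
for u in S:
    assert {comp(comp(u, s), inv(u)) for s in S} == set(S)

# kernel K(x,y) = (1/|S|) 1_S(y x^{-1}), i.e. y = s x for some s in S
n = len(G)
K = np.zeros((n, n))
for x in G:
    for s in S:
        K[idx[x], idx[comp(s, x)]] += 1.0 / len(S)

def D(s, f):                   # direction operator D_s f(x) = f(sx) - f(x)
    return np.array([f[idx[comp(s, x)]] for x in G]) - f

rng = np.random.default_rng(2)
f = rng.normal(size=n)
for s in S:
    assert np.allclose(D(s, K @ f), K @ D(s, f))
```

The commutation check goes through precisely because $\sum_{s \in S} f(sux) = \sum_{s \in S} f(usx)$ after the substitution $s \mapsto u s u^{-1}$, which is where stability under conjugation is used.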
For $A \subset G$ and $ s \in S$, define the influence $I_s(A)$ of the direction $s$ on the set $A$ by $$I_s(A) = \mu \big ( \{ x \in G ; x \in A, sx \notin A \} \big ).$$ As on the discrete cube, given $A \subset G$ with $\mu (A) =a $, Corollary 2 yields the existence of $s \in S$ such that $$I_s(A) \geq {1 \over C} \, a(1-a) \rho \, \log \Big ( 1 + {1\over C\rho \, a(1-a) } \Big ) \geq {1 \over C} \, a(1-a) \, \rho \log \Big ( 1 + {1\over C\rho } \Big ) \eqno (20)$$ (where $C \geq 1$ is numerical). However, with respect to the spectral gap inequality of the chain $(K, \mu )$ $$\lambda \, {\rm Var}_\mu (f) \leq {1 \over 2 |S|} \sum_{s \in S} {\| D _s f \| }_2^2 \, ,$$ we see that (20) is only of interest provided that $\rho \log (1 + (1/\rho )) \gg \lambda $. This is the case on the symmetric discrete cube $\{- 1, +1 \}^N$ for which, in the Cayley graph normalization of Dirichlet forms, $\lambda = \rho = 1/N$. On the symmetric group, it is known that the spectral gap $\lambda $ is ${2 \over n-1}$ whereas its logarithmic Sobolev constant $\rho $ is of the order of $1/n \log n$ (\[D-SC\], \[L-Y\]) so that $\rho \log (1 + (1/\rho )) $ and $\lambda $ are actually of the same order for large $n$, and hence only yield the existence of a transposition $\tau $ with influence at least of the order of $1/n$. It is pointed out in \[OD-W2\] that this result is however optimal. The paper \[OD-W1\] presents examples in the more general context of Schreier graphs for which (20) yields influences strictly better than the ones from the spectral gap inequality. Theorem 1 may also be illustrated on continuous models such as Gaussian measures. While the next corollary is stated in some generality, it is already of interest for products of one-dimensional factors and covers in particular the example (6) of the standard Gaussian product measure. 
[**Corollary 3.**]{} [*Let $d\mu_i (x) = \eu^{-V_i(x)}dx $, $i = 1, \ldots, N$, on $X_i = \rr^{n_i}$ be hypercontractive with constant $\rho_i >0$. Let $\mu = \mu _1 \otimes \cdots \otimes \mu _N$ on $ X = X_1 \times \cdots \times X_N$. Assume in addition that $V''_i \geq - \kappa $, $\kappa \in \rr $, $i = 1, \ldots , N$. Then, for any smooth function $f$ on $X$, $${\rm Var}_\mu (f) \leq C( \rho , \kappa ) \sum_{i=1}^N {\| \nabla _i f\| _2^2 \over 1 + \log \big ( \| \nabla _i f\|_2 / \| \nabla _i f\|_1 \big ) }$$ where $ \rho = \min _{1\leq i\leq N} \rho _i$, and where $\nabla _i f$ denotes the gradient of $f$ in the direction $X_i$, $i = 1, \ldots, N$.*]{} Corollary 3 again follows from Theorem 1. Indeed, the product structure immediately allows for the decomposition (14) of the Dirichlet form $${\cal E} (f, f) = \int_X |\nabla f|^2 d\mu = \sum_{i=1}^N \int_X |\nabla _i f|^2 d\mu$$ along smooth functions with thus $\Gamma _i(f) = |\nabla _i f|$. On the other hand, the basic commutation (15) between the semigroup and the gradients $\nabla _i$ is described here as a curvature condition. Namely, whenever the Hessian $V''$ of a smooth potential $V$ on $\rr^n$ is (uniformly) bounded below by $- \kappa $, $ \kappa \in \rr$, the semigroup ${(P_t)}_{t\geq 0} $ generated by the operator $\L = \Delta - \nabla V \cdot \nabla $ commutes to the gradient in the sense that, for every smooth function $f$ and every $t\geq 0$, $$| \nabla P_t f | \leq \eu^{\kappa t} \, P_t \big ( | \nabla f | \big ) . \eqno (21)$$ In the product setting of Corollary 3, the semigroup $\P$ is the tensor product of the semigroups along every coordinate so that (21) ensures that $$| \nabla_i P_t f | \leq \eu^{\kappa t} \, P_t \big ( | \nabla_i f | \big ) \eqno (22)$$ along the partial gradients $\nabla _i$, $i = 1, \ldots , N$ and hence (15) holds on smooth functions. 
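For the one-dimensional Ornstein-Uhlenbeck semigroup, the commutation (21) (with $\kappa = -1$) can be verified numerically from its integral representation, since differentiating under the integral gives $\partial_x P_t f = \eu^{-t} P_t(f')$. The following Python sketch uses Gauss-Hermite quadrature for the Gaussian averages (the test function $f$ below is an arbitrary choice, not from the text):

```python
import numpy as np

# numpy's hermegauss uses the weight e^{-y^2/2}; dividing the weights by
# sqrt(2*pi) turns quadrature sums into integrals against d mu(y)
nodes, weights = np.polynomial.hermite_e.hermegauss(80)
weights = weights / np.sqrt(2 * np.pi)

def P(t, g, x):
    # Mehler integral representation of the Ornstein-Uhlenbeck semigroup
    return np.sum(weights * g(np.exp(-t) * x + np.sqrt(1 - np.exp(-2*t)) * nodes))

f = lambda y: np.sin(y) + 0.1 * y**2
fp = lambda y: np.cos(y) + 0.2 * y            # derivative of f

for t in (0.1, 0.5, 2.0):
    for x in (-1.5, 0.0, 0.7):
        # d/dx P_t f = e^{-t} P_t f' (differentiate under the integral)
        h = 1e-5
        num_grad = (P(t, f, x + h) - P(t, f, x - h)) / (2 * h)
        assert np.isclose(num_grad, np.exp(-t) * P(t, fp, x), atol=1e-6)
        # hence the commutation (21) with kappa = -1
        assert abs(num_grad) <= np.exp(-t) * P(t, lambda y: np.abs(fp(y)), x) + 1e-8
```

The second assertion is exactly (21) at the chosen points: the gradient of $P_t f$ is dominated by $\eu^{-t} P_t(|f'|)$.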
This commutation property (with $\kappa = -1$) is for example explicit on the integral representation $$P_t f(x) = \int_{\rrr^n} f \big ( \eu^{-t} x + (1 - \eu^{-2t})^{1/2} y \big ) d\mu (y), \quad x \in \rr ^n, \, \, t \geq 0, \eqno (23)$$ of the Ornstein-Uhlenbeck semigroup with generator $ \L = \Delta - x \cdot \nabla $ and invariant and symmetric measure the standard Gaussian distribution. The assumption $ V'' \geq - \kappa $ describes a curvature property of the generator $\L$ and is linked to Ricci curvature on Riemannian manifolds. Since only $\kappa \in \rr$ is required here, it appears as a mild property, shared by numerous potentials such as for example double-well potentials on the line of the form $V(x) = a x^4 - bx^2$, $a,b>0$. Recall that the assumption $V'' \geq c > 0$ (for example the quadratic potential with the Gaussian measure as invariant measure) actually implies that $\mu $ satisfies a logarithmic Sobolev inequality, and thus hypercontractivity (with constant $c$). We refer for example to \[Ba\], \[L1\], \[B-G-L\]... for an account on (21) and the preceding discussion. Corollary 3 admits generalizations in broader settings. Weighted measures on Riemannian manifolds with a lower bound on the Ricci curvature may be considered similarly with the same conclusions. In another direction, the hypercontractive approach may be developed in presence of suitable geometric decompositions. The next statements deal with the example of the sphere and with geometric decompositions of the identity in Euclidean space which are familiar in the context of Brascamp-Lieb inequalities (see \[B-CE-L-M\] for further illustrations in a Markovian framework). A non-product example in the continuous setting is the one of the standard sphere $ \ss^{n-1} \subset \rr^n$ ($n\geq 2$) equipped with its uniform normalized measure $\mu $. Consider, for every $i,j = 1, \ldots , n$, $D_{ij} = x_i \partial _j - x_j \partial _i$. 
These will be the directions along which the Talagrand inequality may be considered since $${\cal E}(f,f) = \int_{\sss^{n-1}} f (- \Delta f) d\mu = {1\over 2} \sum_{i,j=1}^n \int_{\sss^{n-1}} (D_{ij} f)^2 d\mu .$$ The operators $D_{ij}$ indeed commute in an essential way with the spherical Laplacian $\Delta = {1\over 2} \sum_{i,j=1}^n D_{ij}^2$ so that (15) holds with $\kappa =0$. Finally, the logarithmic Sobolev constant is known to be $n-1$ \[Ba\], \[L1\], \[B-G-L\].... Corollary 4 thus again follows from the general Theorem 1. [**Corollary 4.**]{} [*For every smooth enough function $f : \ss^{n-1} \to \rr$,*]{} $${\rm Var}_\mu (f) \leq {4 \eu \over n} \sum_{i,j=1}^n { {\| D_{ij} f \|} _2^2 \over 1 + \log \big ( {\| D_{ij} f \|} _2 / {\| D_{ij}f \| }_1 \big ) } \, .$$ Up to the numerical constant, this inequality improves upon the Poincaré inequality for $\mu $ (with constant $\lambda = n-1$). We turn to geometric Brascamp-Lieb decompositions. Consider thus $E_i$, $i = 1, \ldots, m$, subspaces in $\rr^n$, and $c_i > 0$, $i = 1, \ldots, m$, such that $${\rm Id}_{\rrr^n} = \sum_{i=1}^m c_i \, Q_{E_i} \eqno (24)$$ where $Q_{E_i}$ is the projection onto $E_i$. In particular, for every $ x \in \rr^n$, $ |x|^2 = \sum_{i=1}^m c_i | Q_{E_i} (x) |^2$ and thus, for every smooth function $f$ on $\rr^n$, $${\cal E}(f,f) = \int_{\rrr^n} |\nabla f |^2 d\mu = \sum_{i=1}^m c_i \! \bigg ( \int_{\rrr^n} \big | Q_{E_i}(\nabla f) \big |^2 d\mu \bigg ) .$$ Furthermore, $ Q_{E_i}(\nabla P_t f) = \eu^{-t} P_t (Q_{E_i} (\nabla f)) $ which may be exemplified on the representation (23) of the Ornstein-Uhlenbeck semigroup with hypercontractive constant 1. Theorem 1 thus yields the following conclusion. 
[**Corollary 5.**]{} [*Under the decomposition (24), for $\mu $ the standard Gaussian measure on $\rr^n$, and for every smooth function $f$ on $\rr^n$,*]{} $${\rm Var}_\mu (f) \leq 4 \sum_{i=1}^m c_i \, {\big \| Q_{E_i} (\nabla f) \big \| _2^2 \over 1 + \log \big ( {\| Q_{E_i} (\nabla f)\|}_2 / {\| Q_{E_i} (\nabla f) \|}_1 \big ) } \, .$$ [**3. Hypercontractivity and geometric influences**]{} In the continuous context of the preceding section, and as discussed in the introduction, the $\L^2$-norms of gradients in Corollary 3 are not well-suited to the (geometric) influences of \[K-M-S\] which require $\L^1$-norms. In order to reach $\L^1$-norms through the hypercontractive argument, a further simple interpolation trick will be necessary. To this task, we use an additional feature of the curvature condition $V'' \geq - \kappa $, $ \kappa \geq 0 $, namely that the action of the semigroup $\P$ with generator $\L = \Delta - \nabla V \cdot \nabla $ on bounded functions yields functions with bounded gradients. More precisely (cf. \[L1\], \[B-G-L\]...), for every smooth function $f$ with $ |f| \leq 1$, and every $ 0 < t \leq 1/2\kappa $, $$| \nabla P_t f | \leq { 1\over \sqrt t } \, . \eqno (25)$$ This property may again be illustrated in the case of the Ornstein-Uhlenbeck semigroup (23) for which, by integration by parts, $$\nabla P_t f (x) = {\eu^{-t} \over (1 - \eu^{-2t})^{1/2} } \int_{\rrr^n} y \, f \big ( \eu^{-t}x + (1 - \eu^{-2t})^{1/2} y \big ) d\mu (y) .$$ With this additional tool, the following statement then presents the expected result. The setting is similar to the one of Corollary 3. Dependence on $\rho $ and $\kappa $ for the constant $C'(\rho , \kappa )$ below may be drawn from the proof. It will of course be independent of $N$. [**Theorem 6.**]{} [*Let $d\mu_i (x) = \eu^{-V_i(x)}dx $, $i = 1, \ldots, N$, on $X_i = \rr^{n_i}$ be hypercontractive with constant $\rho_i >0$. 
Let $\mu = \mu _1 \otimes \cdots \otimes \mu _N$ on $ X = X_1 \times \cdots \times X_N$, and set as before $ \rho = \min_{1\leq i\leq N} \rho _i$. Assume in addition that $V''_i \geq - \kappa $, $\kappa \geq 0$, $i = 1, \ldots , N$. Then, for some constant $C'(\rho ,\kappa ) \geq 1$ and for any smooth function $f$ on $X$ such that $|f|\leq 1$,*]{} $${\rm Var}_\mu (f) \leq C'(\rho ,\kappa ) \sum_{i=1}^N { {\| \nabla _i f\|} _1 \big ( 1 + \| \nabla _i f\| _1 \big ) \over \big [ 1 + \log ^+ \big (1 / \| \nabla _i f\|_1 \big ) \big ]^{1/2} } \, .$$ [*Proof.*]{} We follow the same line of reasoning as in the proof of Theorem 1, starting on the basis of (10) from $${\| f\| }^2_2 - {\| P_T f\| }^2_2 = 2 \sum_{i=1}^N \int _0^T \bigg ( \int_X | \nabla_i P_t f|^2 d\mu \bigg ) dt \leq 4 \sum_{i=1}^N \int _0^T \bigg ( \int_X | \nabla_i P_{2t} f|^2 d\mu \bigg ) dt$$ for some $T > 0$. By (21) along each coordinate, for each $t\geq 0$, $$| \nabla_i P_{2t} f | \leq \eu^{\kappa t} \, P_t \big ( | \nabla _i P_t f | \big ) .$$ Hence, by the hypercontractivity property as in Theorem 1, $${ \| \nabla _i P_{2t} f \|} _2 \leq \eu^{\kappa t} \, { \| \nabla _i P_t f \|} _p$$ where $ p = p(t) = 1 + \eu^{-2\rho t} \leq 2$. We then proceed to the interpolation trick. Namely, by (25) and the tensor product form of the semigroup, $ |\nabla _i P_t f | \leq t^{-1/2}$ for $0< t \leq 1/2\kappa $, so that in this range, $${ \| \nabla _i P_{2t} f \|} _2 \leq \eu^{\kappa (1 + 1/p)t} \, t^{ -(1-1/p)/2} \, { \| \nabla _i f \|} _1^{1/p}$$ (where we used again (22)). As a consequence, provided $ T \leq 1/2\kappa $, $${\| f\| }^2_2 - {\| P_T f\| }^2_2 \leq 4 \, \eu^{4\kappa T} \sum_{i=1}^N {\| \nabla _i f\| }_1 \int _0^T t^{ - (1 - 1/p(t))} {\| \nabla _i f\| }_1^{(2/p(t)) -1} dt .$$ We are then left with the estimate of the latter integral that only requires elementary calculus. Set $ b = {\| \nabla _i f\| }_1$ and $\theta (t) = {2 \over p(t)} -1 \leq 1$. 
Assuming $T \leq 1$, $$\int _0^T t^{ - (1 - 1/p(t))} \, b^{\theta (t) } dt \leq \int _0^T t^{ - 1/2} \,b^{\theta (t)} dt .$$ Distinguish between two cases. When $b \geq 1$, $$\int _0^T t^{ - 1/2} \, b^{\theta (t)} dt \leq b \int _0^T t^{ - 1/2} dt \leq 2b \sqrt T.$$ When $ b \leq 1$, use that $ \theta (t) \geq \rho t / 2$ for every $0 \leq t\leq 1/2\rho $. Hence, provided $ T \leq 1/2\rho $, $$\int _0^T t^{ -1/2} \, b ^{\theta (t)} dt \leq \int _0^T t^{ -1/2} \, b ^{\rho t /2} dt \leq { C \over \sqrt \rho } \cdot {1 \over \big [1 + \log (1/b) \big ]^{1/2} }$$ where $C \geq 1 $ is numerical. Summarizing, in all cases, provided $T$ is chosen smaller than $ \min \big (1, {1 \over 2\rho } \big )$, we have $$\int _0^T t^{ - (1 - 1/p(t))} b^{\theta (t)} dt \leq { 2C \over \sqrt \rho } \cdot {1 + b \over \big [1 + \log^+ (1/b) \big ]^{1/2} } \, .$$ Choosing for example $ T = \min \big (1, {1 \over 2\rho }, {1 \over 2\kappa } \big )$ and using (10), Theorem 6 follows with $C'(\rho , \kappa ) = C' / (\rho ^{3/2} T)$ for some further numerical constant $C'$. If $ \kappa \leq c \rho $, then this constant is of order $\rho ^{-1/2}$. The preceding proof may actually be adapted to interpolate between Corollary 3 and Theorem 6 as $${\rm Var}_\mu (f) \leq C \sum_{i=1}^N { {\| \nabla _i f\|} _q^q \, \big ( 1 + {\| \nabla _i f\| }_1^2 / {\| \nabla _i f\|}_q^q \big ) \over \big [ 1 + \log^+ \big ( {\| \nabla _i f\|}_q^q / {\| \nabla _i f\|}_1^2\big ) \big ]^{q/2} }$$ for any smooth function $f$ on $X$ such that $ |f|\leq 1$, and any $1 \leq q \leq 2 $ (where $C$ depends on $\rho $, $\kappa $ and $q$). As announced in the introduction, the conclusion of Theorem 6 may be interpreted in terms of influences. Namely, for $f = {\bf 1}_A$ (or some smooth approximation), define ${\| \nabla _i f\| }_1$ as the geometric influence $I_i(A)$ of the $i$-th coordinate on the set $A$. 
In other words, $I_i(A)$ is the surface measure of the section of $A$ along the fiber of $x \in X = X_1 \times \cdots \times X_N$ in the $i$-th direction, $1 \leq i\leq N$, averaged over the remaining coordinates (see \[K-M-S\]). Then Theorem 6 yields that $$\mu (A) \big ( 1 - \mu (A) \big) \leq C(\rho , \kappa ) \sum_{i=1}^N { I_i(A) \big ( 1 + I_i(A) \big ) \over \big [ 1 + \log^+ \big (1/ I_i(A) \big ) \big ]^{1/2} } \, .$$ Proceeding as in the introduction for influences on the cube, the following consequence holds. [**Corollary 7.**]{} [*In the setting of Theorem 6, for any Borel set $A$ in $X$ with $\mu (A) = a$, there is a coordinate $i$, $1 \leq i\leq N$, such that $$I_i(A) \geq { a (1-a)\over C N } \, \bigg ( \log { N \over a (1-a) } \bigg ) ^{1/2} \geq { a (1-a) (\log N) ^{1/2} \over C N }$$ where $C$ only depends on $\rho $ and $\kappa $.* ]{} It is worthwhile mentioning that when $N=1$, $I_1(A)$ corresponds to the surface measure (Minkowski content) $$\mu ^+(A) = \liminf_{\varepsilon \to 0} {1\over \varepsilon } \, \big [ \mu (A_\varepsilon ) - \mu (A) \big ]$$ of $A \subset \rr^{n_1}$, so that Corollary 7 contains the quantitative form of the isoperimetric inequality for Gaussian measures $$\mu ^+(A) \geq {1 \over C} \, a(1-a) \, \bigg ( \log { 1 \over a (1-a) } \bigg ) ^{1/2} .$$ Recall indeed (cf. e.g. \[L1-2\]) that the Gaussian isoperimetric inequality indicates that $\mu ^+(A) \geq \varphi \circ \Phi ^{-1} (a)$ ($ a = \mu (A)$) where $\varphi (x) = (2\pi )^{-1/2} \, \eu^{-x^2/2}$, $x\in \rr$, $\Phi (t) = \int _{-\infty}^t \varphi (x) dx$, $t \in \rr$, and that $\varphi \circ \Phi ^{-1}(u) \sim u (2 \log {1\over u})^{1/2}$ as $u \to 0$. This conclusion, for hypercontractive log-concave measures, was established previously in \[B-L\]. See \[Mi1-2\] for recent improvements in this regard. 
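Both the asymptotics $\varphi \circ \Phi^{-1}(u) \sim u (2 \log (1/u))^{1/2}$ and the quantitative isoperimetric lower bound can be checked numerically on half-spaces, which are the extremal sets for Gaussian isoperimetry. The following Python sketch is an illustration only; the constant $C = 3$ and the tolerance window for the asymptotic ratio are choices made for the check, not optimal values:

```python
import math

def Phi(t):                    # standard normal distribution function
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2)))

def phi(t):                    # standard normal density
    return math.exp(-t * t / 2) / math.sqrt(2 * math.pi)

def Phi_inv(u, lo=-40.0, hi=40.0):
    for _ in range(100):       # bisection is enough here
        mid = 0.5 * (lo + hi)
        if Phi(mid) < u:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# phi(Phi^{-1}(u)) ~ u sqrt(2 log(1/u)) as u -> 0
for u in (1e-4, 1e-6, 1e-8):
    ratio = phi(Phi_inv(u)) / (u * math.sqrt(2 * math.log(1 / u)))
    assert 0.8 < ratio < 1.1

# for a half-space A = (-inf, s], mu^+(A) = phi(s) and mu(A) = Phi(s);
# check mu^+(A) >= (1/C) a(1-a) sqrt(log(1/(a(1-a)))) with C = 3
for a in (0.001, 0.01, 0.1, 0.3, 0.5, 0.7, 0.9, 0.99, 0.999):
    s = Phi_inv(a)
    lower = a * (1 - a) * math.sqrt(math.log(1 / (a * (1 - a)))) / 3
    assert phi(s) >= lower
```

Since the Gaussian isoperimetric inequality states that half-spaces minimize the surface measure at fixed volume, the half-space computation above is exactly the extremal case of the displayed lower bound.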
Theorem 6 admits also generalizations in broader settings such as weighted measures on Riemannian manifolds with a lower bound on the Ricci curvature (this ensures that both (21) and (25) hold). Besides the Gaussian measure, N. Keller, E. Mossel and A. Sen \[K-M-S\] also investigate with isoperimetric tools products of one-dimensional distributions of the type $c_\alpha \eu^{-|x|^\alpha }dx$, $1 < \alpha < \infty$, for which they produce influences at least of the order of $ { (\log N)^{\beta/2} \over N}$ where $ \beta = 2 (1 - {1 \over \alpha })$ ($ \alpha = 2$ corresponding to the Gaussian case). The proof of Theorem 6 may be adapted to cover this result but only seemingly for $1 < \alpha < 2$. Convexity of the potentials $|x|^\alpha $ ensures (21) and (25). When $1 < \alpha < 2$, measures $ c_ \alpha \eu^{- |x|^\alpha } dx$ are not hypercontractive. Nevertheless, the hypercontractive theorems in Orlicz norms of \[B-C-R\] still indicate that the semigroup $\P$ generated by the potential $|x|^\alpha $ is such that, for every bounded function $g$ with ${\| g\| }_\infty =1$ and every $0 \leq t \leq 1$, $${\| P_t g \| }_2^2 \leq C \, {\| g\| }_1 \exp \Big ( -c \, t \log^\beta \big ( 1 + (1 /\| g\| _1) \big ) \Big ) \eqno (26)$$ for $ \beta > 0$ and some constants $C,c>0$, and similarly for the product semigroup with constants independent of $N$. The hypercontractive step in the proof of Theorem 6 is then modified into $$\big \| | \nabla _i P_{2t} f | \big \| _2 ^2 \leq C {\| \nabla _i f\| }_1 \int_0^1 t^{-1/2} \exp \Big ( -ct \log^\beta \big (1 + (1/ {\| \nabla _i f\| }_1) \big ) \Big ) dt .$$ As a consequence, for any smooth $f$ with $|f|\leq 1$, $${\rm Var}_\mu (f) \leq C \sum_{i=1}^N { {\| \nabla _i f\|} _1 \big ( 1 + \| \nabla _i f\| _1 \big ) \over \big [ 1 + \log^+ \big ( 1/\| \nabla _i f\|_1 \big ) \big ]^{\beta /2} } \, . \eqno (27)$$ We thus conclude to the influence result of \[K-M-S\] in this range. 
When $\alpha >2$ ($\beta \in (1,2)$), the potentials are hypercontractive in the usual sense so that the preceding proofs yield (27), but only for $\beta = 1$. We do not know how to reach the exponent $\beta /2$ in this case by the hypercontractive argument. We conclude this note with the $\L^1$ versions of Corollaries 4 and 5. In the case of the sphere, the proof is identical to that of Theorem 6 provided one uses that $ |D_{ij} f | \leq |\nabla f |$, which ensures that $ |D_{ij} P_t f | \leq 1 / \sqrt t $. The behavior of the constant is drawn from the proof of Theorem 6. [**Theorem 8.**]{} [*For every smooth enough function $f : \ss^{n-1} \to \rr$ such that $|f|\leq 1$,*]{} $${\rm Var}_\mu (f) \leq {C \over \sqrt n} \sum_{i,j=1}^n { {\| D_{ij} f \|} _1 \big ( 1 + {\| D_{ij} f \|} _1 \big ) \over \big [ 1 + \log ^+ \big ( 1/ {\| D_{ij} f \|} _1 \big ) \big ]^{1/2} } \, .$$ Application to geometric influences $I_{ij} (A)$, defined as the limit of ${\| D_{ij} f \|} _1$ as $f$ approaches the characteristic function of the set $A$, may be drawn as in the previous corresponding statements. From a geometric perspective, $I_{ij} (A)$ can be viewed as the average over $x$ of the boundary of the section of $A$ in the $2$-plane $x + {\rm span} (e_i, e_j)$. We do not know if the order $n^{-1/2}$ of the constant in Theorem 8 is optimal. As announced, the last statement is the $\L^1$-version of the geometric decompositions of Corollary 5, which again seems of interest for influences. Under the corresponding commutation properties, the proof is developed similarly.
[**Proposition 9.**]{} [*Under the decomposition (24), for $\mu $ the standard Gaussian measure on $\rr^n$ and for every smooth function $f$ on $\rr^n$ such that $|f|\leq 1$, $${\rm Var}_\mu (f) \leq C \sum_{i=1}^m c_i \, { {\| Q_{E_i} (\nabla f) \|} _1 \big ( 1 + {\| Q_{E_i} (\nabla f)\|}_1 \big ) \over \big [ 1 + \log^+ \big (1 / {\| Q_{E_i} (\nabla f)\|}_1 \big )\big ]^{1/2} }$$ where $C>0$ is numerical.*]{} Let us illustrate the last statement on a simple decomposition. As in the Loomis-Whitney inequality, consider the decomposition $${\rm Id}_{\rrr^n} = \sum_{i=1}^n {1 \over n-1} \, Q_{E_i}$$ with $E_i = {e_i}^\perp$, $i = 1, \ldots , n$, $(e_1, \ldots , e_n)$ orthonormal basis. Proposition 9 applied to $ f = {\bf 1}_A$ for a Borel set $ A$ in $\rr^n$ with $\mu (A) = a$ then shows that there is a coordinate $i$, $ 1 \leq i \leq n$, such that $${\big \| Q_{E_i} (\nabla f) \big \|} _1 \geq {1 \over C} \, a (1-a) \bigg ( \log { 1 \over a(1-a)} \bigg)^{1/2}$$ for some constant $C>0$. Now, $ {\| Q_{E_i} (\nabla f)\|} _1$ may be interpreted as the boundary measure of the hyperplane section $$A^{ x \cdot e_i } = \big \{ ( x \cdot e_1, \ldots, x \cdot e_{i-1}, x \cdot e_{i+1}, \ldots, x \cdot e_n) ; \, ( x \cdot e_1, \ldots, x \cdot e_i, \ldots, x \cdot e_n) \in A \big \}$$ along the coordinate $x \cdot e_i \in \rr$ averaged over the standard Gaussian measure. By Fubini’s theorem, there is $x \cdot e_i \in \rr$ (or even a set with measure as close to 1 as possible) such that $$\mu ^+ (A^{x \cdot e_i }) \geq {1 \over C} \, \, a (1-a) \bigg ( \log { 1 \over a(1-a)} \bigg)^{1/2} . \eqno (28)$$ The interesting point here is that $a$ is the full measure of $A$. Indeed, recall that the isoperimetric inequality for $\mu $ indicates that $ \mu ^+ (A) \geq \varphi \circ \Phi ^{-1} (a)$, hence a quantitative lower bound for $ \mu ^+ (A)$ of the same form as (28). 
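The Loomis-Whitney-type decomposition above, ${\rm Id}_{\rrr^n} = \sum_{i=1}^n {1 \over n-1} Q_{E_i}$ with $Q_{E_i} = {\rm Id} - e_i e_i^T$, is elementary to verify; a small numerical sketch (Python, illustration only, exact in floating point since all entries are $0$ or $1$):

```python
def identity(n):
    return [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]

def Q(n, i):
    # orthogonal projection onto E_i = span(e_i)^perp: Id - e_i e_i^T
    m = identity(n)
    m[i][i] = 0.0
    return m

n = 5
# (1 / (n-1)) * sum_k Q_{E_k}: diagonal entries sum to n-1, off-diagonal to 0
total = [[sum(Q(n, k)[i][j] for k in range(n)) / (n - 1)
          for j in range(n)] for i in range(n)]
assert total == identity(n)
```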
When $A$ is a half-space in $\rr^n$, thus an extremal set for the isoperimetric problem satisfying $ \mu ^+ (A) = \varphi \circ \Phi ^{-1} (a)$, it is easy to see that there is indeed a coordinate $x \cdot e_i $ such that $ A^{x \cdot e_i } $ is again a half-space in the lower-dimensional space. The preceding (28) therefore extends this property to all sets.

[*Acknowledgement. We thank F. Barthe and P. Cattiaux for their help with the bound (26) and R. Rossignol for pointing out to us the references \[OD-W1\] and \[OD-W2\].*]{}

References

\[A\] C. Ané, S. Blachère, D. Chafaï, P. Fougères, I. Gentil, F. Malrieu, C. Roberto, G. Scheffer. Sur les inégalités de Sobolev logarithmiques. Panoramas et Synthèses, vol. 10. Soc. Math. de France (2000)

\[Ba\] D. Bakry. L'hypercontractivité et son utilisation en théorie des semigroupes. École d'Été de Probabilités de Saint-Flour. Lecture Notes in Math. 1581, 1–114 (1994). Springer

\[B-L\] D. Bakry, M. Ledoux. Lévy-Gromov's isoperimetric inequality for an infinite dimensional diffusion generator. Invent. Math. 123, 259–281 (1996)

\[B-G-L\] D. Bakry, I. Gentil, M. Ledoux. Forthcoming monograph (2011)

\[B-C-R\] F. Barthe, P. Cattiaux, C. Roberto. Interpolated inequalities between exponential and Gaussian Orlicz hypercontractivity and isoperimetry. Revista Mat. Iberoamericana 22, 993–1067 (2006)

\[B-CE-L-M\] F. Barthe, D. Cordero-Erausquin, M. Ledoux, B. Maurey. Correlation and Brascamp-Lieb inequalities for Markov semigroups (2009). To appear in Int. Math. Res. Notices

\[Be\] W. Beckner. Inequalities in Fourier analysis. Ann. of Math. 102, 159–182 (1975)

\[B-H\] S. Bobkov, C. Houdré. A converse Gaussian Poincaré-type inequality for convex functions. Statist. Probab. Lett. 44, 281–290 (1999)

\[Bo\] A. Bonami. Étude des coefficients de Fourier des fonctions de $L^p(G)$. Ann. Inst. Fourier 20, 335–402 (1971)

\[D-SC\] P. Diaconis, L. Saloff-Coste. Logarithmic Sobolev inequalities for finite Markov chains. Ann. Appl. Prob. 6, 695–750 (1996)

\[F-S\] D. Falik, A. Samorodnitsky. Edge-isoperimetric inequalities and influences. Comb. Probab. Comp. 16, 693–712 (2007)

\[K-K-L\] J. Kahn, G. Kalai, N. Linial. The influence of variables on boolean functions. 29th Symposium on the Foundations of Computer Science, White Plains, 68–80 (1988)

\[K-M-S\] N. Keller, E. Mossel, A. Sen. Geometric influences (2010)

\[L1\] M. Ledoux. The geometry of Markov diffusion generators. Ann. Fac. Sci. Toulouse IX, 305–366 (2000)

\[L2\] M. Ledoux. The concentration of measure phenomenon. Math. Surveys and Monographs 89. Amer. Math. Soc. (2001)

\[L-Y\] T. Y. Lee, H.-T. Yau. Logarithmic Sobolev inequality for some models of random walks. Ann. Probab. 26, 1855–1873 (1998)

\[Mi1\] E. Milman. On the role of convexity in isoperimetry, spectral gap and concentration. Invent. Math. 177, 1–43 (2009)

\[Mi2\] E. Milman. Isoperimetric and concentration inequalities - Equivalence under curvature lower bound. Duke Math. J. 154, 207–239 (2010)

\[OD-W1\] R. O'Donnell, K. Wimmer. KKL, Kruskal-Katona, and monotone nets. 50th Annual IEEE Symposium on Foundations of Computer Science (FOCS 2009), 725–734, IEEE Computer Soc., Los Alamitos, CA (2009)

\[OD-W2\] R. O'Donnell, K. Wimmer. Sharpness of KKL on Schreier graphs (2011)

\[Ros\] R. Rossignol. Threshold for monotone symmetric properties through a logarithmic Sobolev inequality. Ann. Probab. 34, 1707–1725 (2006)

\[Roy\] G. Royer. An initiation to logarithmic Sobolev inequalities. Translated from the 1999 French original. SMF/AMS Texts and Monographs, 14. Amer. Math. Soc. and Soc. Math. de France (2007)

\[T\] M. Talagrand. On Russo's approximate zero-one law. Ann. Probab. 22, 1576–1587 (1994)

D. C.-E.: Institut de Mathématiques de Jussieu, Université Pierre et Marie Curie (Paris 6), 4, place Jussieu, 75252 Paris Cedex 05, France, [cordero@math.jussieu.fr]{}

M. L.: Institut de Mathématiques de Toulouse, Université de Toulouse, 31062 Toulouse, France, and Institut Universitaire de France, [ledoux@math.univ-toulouse.fr]{}
--- author: - 'E. Tognelli' - 'S. Degl’Innocenti' - 'P. G. Prada Moroni' bibliography: - 'bibliografia\_litio.bib' date: 'Received 24 February 2012 / accepted 8 October 2012' subtitle: 'Testing theory against clusters and binary systems.' title: '$^7$Li surface abundance in pre-MS stars.' --- [The disagreement between theoretical predictions and observations for surface lithium abundance in stars is a long-standing problem, which indicates that the adopted physical treatment is still lacking in some points. However, thanks to the recent improvements in both models and observations, it is interesting to analyse the situation to evaluate present uncertainties.]{} [We present a consistent and quantitative analysis of the theoretical uncertainties affecting surface lithium abundance in the current generation of models.]{} [By means of an up-to-date and well tested evolutionary code, `FRANEC`, theoretical errors on surface $^7$Li abundance predictions, during the pre-main sequence (pre-MS) and main sequence (MS) phases, are discussed in detail. Then, the predicted surface $^7$Li abundance was tested against observational data for five open clusters, namely Ic 2602, $\alpha$ Per, Blanco1, Pleiades, and Ngc 2516, and for four detached double-lined eclipsing binary systems. Stellar models for the aforementioned clusters were computed by adopting suitable chemical composition, age, and mixing length parameter for MS stars determined from the analysis of the colour-magnitude diagram of each cluster. We restricted our analysis to young clusters, to avoid additional uncertainty sources such as diffusion and/or radiative levitation efficiency.]{} [We confirm the disagreement, within present uncertainties, between theoretical predictions and $^7$Li observations for standard models. 
However, we notice that a satisfactory agreement with observations for $^7$Li abundance in both young open clusters and binary systems can be achieved if a lower convection efficiency is adopted during the pre-MS phase with respect to the MS one.]{} Introduction ============ In the last two decades, a large number of $^7$Li observations have been collected for isolated stars, binary systems, and open clusters from the pre-MS to the late MS phases [see e.g. Table 1 and references therein in @jeffries00; @sestito05], showing that $^7$Li depletion is a strong function of both mass and age. A detailed and homogeneous analysis has been carried out by @sestito05, who determined surface $^7$Li abundance for a large sample of open clusters in a wide range of ages and chemical compositions, supplying a useful tool for accurately analysing the temporal evolution of surface $^7$Li abundance. Open clusters and detached double-lined eclipsing binaries (EBs) are ideal systems for testing the validity of stellar evolutionary models, since their members have the same chemical composition and age. As a consequence, they allow the different lithium depletion pattern to be investigated as a function of the stellar mass once the age and the chemical composition have been kept fixed. Besides the large amount of $^7$Li data available, a strong effort in theoretical modelling has been made in the past years, and many different theoretical scenarios have been proposed to explain the observed surface $^7$Li abundance and its temporal evolution [see e.g. the reviews in @deliyannis00; @pinsonneault00; @charbonnel00], both in the framework of *standard* and *non-standard models* [see e.g., @pinsonneault90; @pinsonneault94; @chaboyer95; @dantona97; @ventura98; @piau02; @dantona03; @montalban06]. *Standard models* assume a spherically symmetric structure and convection and diffusion are the only processes that mix surface elements with the interior. 
Although the validity of such models in reproducing the main evolutionary parameters has been largely tested against observations, they fail to reproduce the observed $^7$Li abundances. Indeed, standard models show a $^7$Li depletion during the pre-MS phase that is much stronger than observed, while the opposite occurs in the MS phase [see e.g., @jeffries00]. Moreover, they cannot fully account for the formation of the so-called lithium dip for MS stars in the temperature range $6000\,\mathrm{K}\la T_\mathrm{eff}\la 7000\,\mathrm{K}$ [@boesgaard86], see e.g. @richer93. The comparison between theory and observation is improved, in some cases, by introducing *non-standard* processes into the models, e.g. rotation, gravity waves, magnetic fields, and accretion/mass loss [@pinsonneault90; @dantona93; @chaboyer95; @talon98; @ventura98; @mendes99; @siess99; @dantona00; @charbonnel05; @baraffe10; @vick10]. All these processes produce structural changes, with a related strong effect on lithium abundance [see e.g. the reviews by @charbonnel00; @talon08; @talon10]. In particular, models with rotation-induced mixing plus gravity waves are able to reproduce the $^7$Li depletion during the MS and post-MS phases [i.e. the lithium dip feature and red-giant branch abundances, see e.g., @talon10; @pace12]. A crucial point in stellar modelling, both for standard and non-standard models, concerns the treatment of the over-adiabatic convection efficiency in the stellar envelope, which is an important issue for lithium depletion, too. In evolutionary codes, the most widely used convection treatment is the simplified scheme of the *mixing length theory* [MLT, @bohm58]. In this formalism, convection efficiency depends on a free parameter to be calibrated. It is a common approach to calibrate it by reproducing the solar radius.
This choice usually gives good agreement between models and photometric data; however, to reproduce the effective temperature of stars with different masses in different evolutionary phases, an ad hoc value of the mixing length parameter should be adopted, as suggested by observations [see e.g., @chieffi95; @morel00; @ferraro06; @yildiz07; @gennaro11; @piau11; @bonaca12] and detailed hydrodynamical simulations [see e.g., @ludwig99; @trampedach07]. The main goal of this paper is to re-examine the old lithium problem in light of the improvements in the adopted physical inputs and observational data and to perform a quantitative analysis of the uncertainties affecting surface lithium depletion during the pre-MS phase. The aim is to compute, by means of updated models, theoretical error bars to be applied to the comparison between predictions and data available for stars in young open clusters and binary systems, as partially done in earlier works [see e.g., @dantona84; @swenson94; @ventura98; @piau02; @sestito06]. The paper is structured in the following way. Section \[sec:data\] presents the adopted $^7$Li data sample for the selected open clusters, followed by a brief description of present models (Sect. \[sec:models\]). In Sect. \[sec:error\] we evaluate the main theoretical uncertainties affecting surface lithium abundance. Finally, in Sect. \[sec:results\], the comparison between predicted and observed lithium abundances for both young open clusters and binary systems is discussed.

Lithium data {#sec:data}
============

Surface $^7$Li abundances for young open clusters are taken from the homogeneous database made available by @sestito05. Here, we focus our analysis on clusters younger than about 150 - 200 Myr, in order to avoid MS depletion effects [see e.g., @sestito05], with different metallicities for which a significant number of data in a wide range of effective temperatures are available.
The clusters that satisfy these criteria are Ic 2602, $\alpha$ Per, Blanco 1, Pleiades, and Ngc 2516. Lithium abundances for young double-lined eclipsing binaries are not present in the database by @sestito05, but they have been measured by different authors, as we discuss in Sect. \[sec:binary\].

Theoretical stellar models {#sec:models}
==========================

Present stellar models were computed with an updated version of the `FRANEC` evolutionary code [@deglinnocenti08], which adopts the most recent input physics, as described in detail by @tognelli11. The initial deuterium mass fraction abundance is fixed to $X_{\mathrm{D}} = 2\times 10^{-5}$ as a representative value for population I stars [see e.g. @geiss98; @linsky06; @steigman07]. The logarithmic initial lithium abundance is assumed to be $\epsilon_{\mathrm{Li}} = 3.2 \pm 0.2$ [see e.g., @jeffries06; @lodders09], which approximately corresponds to $X_{^7\mathrm{Li}} \approx 7\times 10^{-9}$ - $1\times 10^{-8}$ depending on the metallicity adopted for the models[^1]. Convection is treated according to the mixing length theory, using the same formalism presented in @cox. The adopted reference value of the mixing length parameter is $\alpha = 1.0$ (as suggested by the present comparison with pre-MS data, see Sect. \[sec:ammassi\]).

Theoretical uncertainties {#sec:error}
=========================

Chemical composition {#sec:err_chim}
--------------------

![image](DIFFERENZE_MISTURE_GS98_AS05_AS09.eps){width="0.9\linewidth"}

To properly calculate pre-MS evolution, suitable initial abundances of helium, light elements, and metals are needed. For most of the stars, however, only the \[Fe/H\] value is available, so theoretical or semi-empirical assumptions are required. Assuming for Population I stars a solar-scaled heavy elements distribution [see e.g., @asplund09 for a detailed review], the $Z/X$ value currently present at the stellar surface can be directly inferred from the observed \[Fe/H\].
For all the stars analysed in the paper, this value can be safely adopted as a good approximation of the initial one over the whole structure, since the effect of microscopic diffusion is negligible owing to the very young ages involved. The initial helium content of the star $Y$ cannot be directly measured in the stellar spectra of cool stars, so a further relation between the initial metallicity and helium of the star is required. A common way to proceed is to assume the following linear relation [see e.g., @gennaro10]: $$Y = Y_{\mathrm{P}} + \frac{\Delta Y}{\Delta Z} Z \label{eq:elio}$$ where $Y_{\mathrm{P}}$ and $\Delta Y/\Delta Z$ represent, respectively, the primordial helium abundance and the helium-to-metal enrichment ratio. For the calculations we adopt $Y_\mathrm{p}=0.2485 \pm 0.0008$ [@cyburt04] and $\Delta Y/\Delta Z = 2\pm 1$ [@casagrande07]. Thus, the metallicity of the star can be obtained directly from the following equation, $$Z = \frac{(1-Y_{\mathrm{P}})\,(Z/X)_\odot}{10^{-[\mathrm{Fe/H}]}+(1+\Delta Y/\Delta Z)(Z/X)_\odot} \label{eq:metal}$$ once the solar $(Z/X)_\odot$ has been specified. Regarding this last quantity, there are several values adopted by different authors, i.e. the still widely adopted @grevesse93 (GN93, $(Z/X)_\odot = 0.0244$), @grevesse98 (GS98, $(Z/X)_\odot = 0.0231$) and the recent determinations by @asplund05 (AS05, $(Z/X)_\odot = 0.0165$) and @asplund09 (AS09, $(Z/X)_\odot = 0.0181$), which are based on detailed 3D hydrodynamical atmosphere models. Recently, @caffau10 (CL10) have found a value for the solar carbon photospheric abundance higher by about 0.1 dex than the previous one derived by AS09. This also leads to an increase in the solar metallicity-to-hydrogen ratio, namely $(Z/X)_\odot = 0.0211$, which is higher than the AS09 value and much closer to the GS98 one.
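A minimal sketch (Python, written for this note) of the \[Fe/H\] $\to$ ($Y$, $Z$) conversion described above, with the adopted central values $Y_\mathrm{P}=0.2485$, $\Delta Y/\Delta Z = 2$, and $(Z/X)_\odot = 0.0181$; note that the denominator must carry a plus sign between $10^{-[\mathrm{Fe/H}]}$ and $(1+\Delta Y/\Delta Z)(Z/X)_\odot$ for the output to reproduce the ($Y$, $Z$) pairs quoted in Table \[tab:oc\_cmd\]:

```python
def feh_to_yz(feh, y_p=0.2485, dy_dz=2.0, zx_sun=0.0181):
    """Convert [Fe/H] to initial (Y, Z) via Y = Y_P + (dY/dZ) Z
    and Z/X = (Z/X)_sun * 10**[Fe/H], with X = 1 - Y - Z."""
    z = (1.0 - y_p) * zx_sun / (10.0 ** (-feh) + (1.0 + dy_dz) * zx_sun)
    y = y_p + dy_dz * z
    return y, z

# Cluster values from Table [tab:oc_cmd]:
y, z = feh_to_yz(0.00)     # Ic 2602   -> (0.274, 0.0129)
y2, z2 = feh_to_yz(-0.10)  # alpha Per -> (0.269, 0.0104)
```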
Our models are computed adopting the AS05 mixture, for consistency with the extended pre-MS tracks and isochrones database already provided by our group[^2] [@tognelli11], whereas for the conversion of \[Fe/H\] into ($Y$, $Z$) we prefer to use $(Z/X)_\odot=0.0181$ from the more recent AS09 heavy elements distribution. The inconsistency that may arise is negligible. Indeed, we verified that the effect on pre-MS models of adopting the AS05 or AS09 distribution in the opacity, once $Z$ and $Y$ have been kept fixed, is much lower than the variation produced by a change of ($Y$, $Z$) related to the error on $(Z/X)_\odot$ [see @tognelli11 and the following discussion]. For the uncertainty on $(Z/X)_\odot$, the commonly suggested value is about $\pm 15\%$ [see e.g., @bahcall04; @bahcall05]. However, to take the difference between recent $(Z/X)_\odot$ determinations into account, i.e. GS98, AS09, and CL10, a larger uncertainty of about $+25\%$ with respect to AS09 is required. Thus, we use a final uncertainty of $+25$/$-15\%$ on $(Z/X)_\odot$. Besides the uncertainties on $Y_{\mathrm{P}}, \, \Delta Y/\Delta Z$, and $(Z/X)_\odot$, the initial $Y$ and $Z$ abundances are obviously also affected by the observational error on \[Fe/H\]. Generally the errors quoted in the literature vary from about $\pm0.01$, probably underestimated, to $\pm0.1$. Here we adopt as a conservative error $\Delta \mathrm{[Fe/H]}=\pm0.05$. By means of eqs. (\[eq:elio\]) and (\[eq:metal\]), taking the quoted uncertainties into account, we obtain eight values of ($Y$, $Z$) for each of the selected clusters, for which we compute pre-MS models. More precisely, the different $Y$ and $Z$ values are calculated by adopting, in turn, the minimum and the maximum values of one of the four parameters (\[Fe/H\], $Y_{\mathrm{P}}, \, \Delta Y/\Delta Z$, and $(Z/X)_\odot$), while the others are kept fixed to their central value. 
We computed two additional models with the maximum and minimum values of the estimated initial $^7$Li abundance, which, as already mentioned in Sect. \[sec:models\], is set to $\epsilon_{\mathrm{Li}} = 3.2\pm 0.2$. Obviously a change in the initial chemical composition also affects the position of the star in the colour-magnitude diagram, hence the age and mixing length parameter determination. This effect has been taken into account (see Sect. \[sec:ammassi\]). Another source of uncertainty related to the stellar chemical composition that has to be considered is the assumed distribution of heavy elements, at fixed $Z$, which strongly affects opacity values [@sestito06]. Since the opacity determines the temperature gradients and thus the extension of the convective envelope (in mass and temperature), a variation in this quantity, due to the uncertainty on the adopted mixture, modifies the lithium-burning rate and its resulting surface abundance. Figure \[fig:misture\] shows the relative differences among the `OPAL` radiative opacity tables[^3] computed by adopting the GN93, GS98, AS05 (our reference model), and AS09 solar mixtures. To make the figure clearer, we also show a box representing the region covered by the entire convective envelope for stellar models with masses in the range 0.6 - 1.2 M$_\odot$ and \[Fe/H\] = $+0.0$. The model ages have been chosen in the range 1 - 10 Myr, which roughly corresponds to the phase of efficient lithium burning for such masses. The upper panel of Fig. \[fig:inputs1\] shows the change in surface lithium abundance resulting from the adoption of the aforementioned solar mixtures in the opacity tables, in stars with different masses, namely 0.6, 0.8, 1.0, and 1.2 M$_\odot$. The increase of the iron group elements at fixed metallicity $Z$ leads to higher radiative opacity, and in turn to a deeper convective envelope and consequently to greater lithium depletion.
This is exactly what occurs when updating the heavy elements mixture to the recent AS05 or AS09 from the older GN93 or GS98. The higher the mass, the lower is the surface lithium depletion and, consequently, the sensitivity of the lithium burning to the opacity change. Since AS05 and AS09 metals distributions are quite similar, we expect negligible differences in the opacity coefficients, and thus in the model predictions. ![Comparison among the surface lithium abundance obtained with our reference set of tracks (*solid line*) and models with different assumptions on the adopted physical inputs, for M = 0.6, 0.8, 1.0, and 1.2 M$_\odot$ with $Z=0.01291$, $Y=0.274$, and $\alpha =1$. *Upper panel*: effect of the change of the solar mixture (GN93, GS98, AS05, and AS09 ones) in the opacity tables. *Bottom panel*: effect of adopting the `OPAL` 2006 and `PTEH` EOS.[]{data-label="fig:inputs1"}](EpsiLi_mixtures_new.eps "fig:"){width="\linewidth"} ![Comparison among the surface lithium abundance obtained with our reference set of tracks (*solid line*) and models with different assumptions on the adopted physical inputs, for M = 0.6, 0.8, 1.0, and 1.2 M$_\odot$ with $Z=0.01291$, $Y=0.274$, and $\alpha =1$. *Upper panel*: effect of the change of the solar mixture (GN93, GS98, AS05, and AS09 ones) in the opacity tables. *Bottom panel*: effect of adopting the `OPAL` 2006 and `PTEH` EOS.[]{data-label="fig:inputs1"}](EpsiLi_EOS_new.eps "fig:"){width="\columnwidth"} Opacity coefficients. --------------------- Besides the influence on the opacity of the heavy element distribution, it is worth analysing the error on the calculation of Rosseland radiative opacity coefficients $\kappa_\mathrm{R}$ at fixed chemical composition. The current version of the opacity tables we adopt in present calculations [i.e. `OPAL` 2005 see e.g., @rogers92; @iglesias96] does not contain information about the related uncertainty. 
Thus, to give a conservative uncertainty estimation on $\kappa_\mathrm{R}$, we evaluated the relative differences between the `OPAL` and the `OP` [*Opacity Project*, see e.g., @seaton94; @badnell05] radiative opacity coefficients in their full range of validity, once the same chemical composition has been adopted. We found that the maximum/minimum relative difference between the two opacity tables is close to $\pm 5\%$ in the region of interest for the present calculations (i.e. the convective envelope) [see also @neuforge01; @badnell05; @valle12]. Thus, we assume the value of $\Delta\kappa_\mathrm{R}/\kappa_\mathrm{R} = \pm5 \%$ as a conservative uncertainty.

Equation of state
-----------------

Owing to the complexity of the evaluations of the various thermodynamical quantities, which are strictly correlated with each other, it is very difficult to assess a precise uncertainty on the EOS tables. An idea of how the current indetermination on the EOS propagates into stellar evolutionary predictions can be obtained by computing models with two different EOS tables that have been widely adopted, namely the `OPAL` EOS 2006 (our reference one) and `PTEH` [@pols95][^4]. The comparison between the `OPAL` and the `PTEH` is useful to assess the effect of adopting a completely different treatment of the gas in stellar conditions, the two EOS being computed, respectively, in the formalism of the physical and chemical picture [see e.g., @trampedach06]. The influence of the adopted EOS on the location in the HR diagram has already been discussed in several papers for both pre-MS evolution [see e.g., @mazzitelli89; @dantona93; @tognelli11] and low-mass MS stars [see e.g., @dorman89; @neece84; @chabrier97; @dicriscienzo10]. Here, we simply recall that the models are particularly affected by the EOS in all the phases where a thick convective envelope is present, i.e. the pre-MS or MS structures of low- and very low-mass stars.
In these phases, when lithium burning is efficient, the resulting surface $^7$Li abundance is quite sensitive to the adopted EOS, too, as shown in the bottom panel of Fig. \[fig:inputs1\]. As noticed above, surface lithium abundance becomes less affected by the EOS change as the mass increases, because of the progressive reduction of the burning efficiency.

$^7$Li(p,$\alpha$)$\alpha$ cross section
----------------------------------------

Lithium destruction is obviously dependent on the $^7$Li(p,$\alpha$)$\alpha$ cross section. However, the current uncertainty on the quoted reaction rate for bare nuclei is quite small [a few percent, see e.g., @nacre; @lattuada01], so that the effect of such an error on the $^7$Li abundance is very small compared to the other error sources. We adopt the value of $\pm 5\%$ as a conservative uncertainty on this quantity.

Total uncertainty on $^7$Li surface abundance predictions {#sec:total}
---------------------------------------------------------

The partial uncertainty due to each parameter/physical input was obtained by the difference between the reference model, which is the one computed with the reference values of all the parameters, and the model computed by varying such a parameter. This procedure was iterated for all the uncertainty sources discussed in the text. Then, the total error on surface lithium abundance predictions was computed by quadratically adding all the partial errors. We want to emphasize that the uncertainty analysis was performed for all the chemical compositions suitable for the selected clusters. Thus, for each cluster, error bars consistent with its chemical composition, mixing length parameter, and age were evaluated.
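The quadratic combination of partial errors described above can be sketched as follows (the partial-error values here are hypothetical placeholders, not the mass- and cluster-dependent values computed in the paper):

```python
import math

# Hypothetical partial errors on the predicted surface 7Li abundance
# from each source (chemical composition, opacity, EOS, reaction rate);
# the actual values depend on the stellar mass and cluster parameters.
partial_errors = {"chem": 0.15, "kappa": 0.08, "eos": 0.05, "rate": 0.02}

# Total error: quadratic sum of the partial contributions
total = math.sqrt(sum(e ** 2 for e in partial_errors.values()))
```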
Surface lithium abundance: theory vs observations {#sec:results}
=================================================

Young open clusters {#sec:ammassi}
-------------------

  Cluster        \[Fe/H\]   ($Y$, $Z$)            $\alpha_{\mathrm{MS}}$   age (Myr, $\lambda_\mathrm{ov} = 0.0$)   age (Myr, $\lambda_\mathrm{ov} = 0.2$)
  -------------- ---------- --------------------- ------------------------ ---------------------------------------- ----------------------------------------
  Ic2602         $+0.00$    ($0.274$, $0.0129$)   $1.68 \pm 0.1$           $40\pm10$                                $55\pm10$
  $\alpha$ Per   $-0.10$    ($0.269$, $0.0104$)   $1.68 \pm 0.1$           $60\pm10$                                $75\pm10$
  Blanco1        $+0.04$    ($0.276$, $0.0141$)   $1.90^{+0.1}_{-0.2}$     $110\pm30$                               $130\pm30$
  Pleiades       $+0.03$    ($0.276$, $0.0138$)   $1.90^{+0.1}_{-0.2}$     $120\pm20$                               $130\pm20$
  Ngc2516        $-0.10$    ($0.269$, $0.0104$)   $1.90 \pm 0.1$           $130\pm20$                               $145\pm20$
                 $+0.07$    ($0.278$, $0.0150$)   $1.90 \pm 0.1$           $130\pm20$                               $145\pm20$

The cluster ages and the mixing length parameters for MS stars are determined by comparing the observed CMDs with the present theoretical isochrones. The age is largely affected by the lack of stars near the overall contraction region of such young clusters, and it is only marginally affected by the uncertainty on the chemical composition adopted for the calculations. Similarly, the uncertainty on the calibration of $\alpha_\mathrm{MS}$ essentially comes from the spread of the MS in the CMD. The effects of the indetermination on age and $\alpha_\mathrm{MS}$ on surface $^7$Li abundance are evaluated for each cluster and quadratically added to the other error sources to define the theoretical surface lithium abundance error bars. Table \[tab:oc\_cmd\] summarizes the results for the main parameters of each cluster, namely, chemical composition, $\alpha_{\mathrm{MS}}$, and age. For Ngc 2516 two different \[Fe/H\] values are available: (1) \[Fe/H\] = $-0.10$ (photometric) and (2) \[Fe/H\] = $+0.07$ (spectroscopic).
Since the spectroscopic value is still quite uncertain, because it has been determined using only two stars [see the discussion in @terndrup02], we decided to also use the photometric one for the following comparison with surface lithium data. The age determination for the selected clusters was performed both with models without core overshooting ($\lambda_\mathrm{ov} = 0$) and with a standard upper limit for the presence of overshooting, $\lambda_\mathrm{ov} = 0.2$ [see e.g., @brocato03; @claret07 and references therein], in order to take a suitable range of $\lambda_\mathrm{ov}$ into account. However, as shown in Table \[tab:oc\_cmd\], the age difference is in all cases close to the quoted uncertainty. Moreover, the impact of the age uncertainty on the $\epsilon_\mathrm{Li}-T_\mathrm{eff}$ profile is almost negligible for clusters older than about 80 - 90 Myr, since all the stars have already reached the ZAMS, so that their position in the CMD is weakly dependent on the age. The situation is different for very young clusters, such as Ic2602 and $\alpha$ Per. In this case low-mass stars are close to but not yet on the ZAMS, so an age variation produces an appreciable change in $T_\mathrm{eff}$. As for the age, the $\alpha_\mathrm{MS}$ uncertainty also has a small effect on the predictions of surface $^7$Li, since in the age and mass range we are investigating, the stars undergo an almost negligible depletion during the MS. Only the surface $^7$Li abundances for the lowest masses of the sample, i.e. M $\la$ 0.6 M$_\odot$, are marginally affected by such a variation. Besides this, $\alpha_\mathrm{MS}$ slightly changes the effective temperature of the stars, which, however, is a second-order effect when compared to the $T_\mathrm{eff}$ shift introduced by the other uncertainties previously discussed.
Regarding the comparison between surface $^7$Li data and the present models, as a first step we make the assumption of a constant value of the mixing length parameter from the early pre-MS phase to the MS. Figure \[fig:litio\] shows the comparison between the theoretical predictions and the data for each cluster. As one can see, the models with $\alpha_\mathrm{PMS}=\alpha_\mathrm{MS}$ (dashed line with filled black squares) fail to reproduce the observed $^7$Li abundances in almost all the selected clusters for stars less massive than about 1 M$_\odot$, even if theoretical and observational errors are taken into account. For these stars, the predicted $^7$Li is systematically lower than what is observed, with differences as great as 1 dex for low-mass stars (about 0.6 - 0.7 M$_\odot$), confirming the well-known disagreement between theory and observations for the $^7$Li surface abundance, even in light of the recent data and the updated theoretical models, within the present error estimates. Models computed with $\alpha_\mathrm{PMS}=\alpha_\mathrm{MS}$ partially agree with the data only in the case of $\alpha$ Per for M $\ga 0.7$ M$_\odot$, and of Ngc 2516 if the low photometric \[Fe/H\] value is adopted (bottom left panel of Fig. \[fig:litio\]). If the spectroscopic \[Fe/H\] value is used for NGC 2516, the predictions, as for the other clusters, do not match the observations for M $\la 1$ M$_\odot$. However, we emphasize that, for these two clusters, models and data are compatible with each other because of the large $^7$Li abundance scatter present among stars with similar $T_\mathrm{eff}$ (about 1 dex), combined with the large error bars on the theoretical predictions.
Since in most of the cases the models with $\alpha_\mathrm{PMS}=\alpha_\mathrm{MS}$ disagree with the data, and given the high sensitivity of $^7$Li surface abundance predictions to the convection efficiency, it is worth exploring the possibility that the mixing length parameter value varies from the pre-MS to the MS phases. Indeed, a possible dependence of $\alpha$ on the evolutionary phase (and/or gravity, $T_\mathrm{eff}$, mass) is suggested from both observations and hydrodynamical simulations, as discussed in the introduction. Thus, we computed models with different values of $\alpha_\mathrm{PMS}$, namely, $\alpha_\mathrm{PMS} = 1.0$, 1.2, 1.4, 1.68, and 1.9, once $\alpha_\mathrm{MS}$ and the ages have been fixed by the comparison in the CMD. Figure \[fig:litio\] shows the comparison between our ‘best fit’ models and $^7$Li data for each cluster (dotted lines and filled red squares). The theoretical error bars computed for each cluster are also shown. ![image](Litio_Ic2602_ML1.00.eps){width="\columnwidth"} ![image](Litio_APer_ML1.00.eps){width="\columnwidth"}\ ![image](Litio_Blanco1_ML1.00.eps){width="\columnwidth"} ![image](Litio_Pleiadi_ML1.00.eps){width="\columnwidth"}\ ![image](Litio_Ngc2516_FeH_basso_ML1.00.eps){width="\columnwidth"} ![image](Litio_Ngc2516_FeH_alto_ML1.00.eps){width="\columnwidth"} We emphasize that a satisfactory agreement with all the clusters in the sample (with the exception of the Pleiades) can be achieved by assuming the same pre-MS convection efficiency, namely $\alpha_\mathrm{PMS}=1.0$. Such low-convection efficiency models are able to reproduce, within the error bars, the mean depletion profile even for low-mass stars, especially in the case of Ic 2602 and $\alpha$ Per. As shown in Fig. \[fig:litio\], the poorest match between theory and data is achieved for the Pleiades. 
The hottest stars are nearly compatible within the error bars with the observations, which show a surface abundance about 0.2 - 0.3 dex lower than the predicted one. A possible way to improve the agreement with these stars is to adopt an initial lithium abundance of about $\epsilon_\mathrm{Li} \approx 3$. However, this method does not improve the agreement with the low-mass stars, a problem still largely discussed in the literature [see e.g. @king00; @jeffries00; @umezu00; @dantona03; @clarke04; @xiong06; @king10 and references therein]. The results we obtain for young open clusters confirm the partial results of previous analyses, which noticed that models with a low convection efficiency during the pre-MS phase agree much better with lithium observations than those with solar or MS calibrated values [see e.g., @ventura98; @dantona03; @landin06].

![image](HR_ASAS_052821_A.eps){width="0.98\columnwidth"} ![image](HR_EK_Cep_A.eps){width="0.98\columnwidth"}

![image](HR_RXJ+0529.4Aa.eps){width="0.98\columnwidth"} ![image](HR_V1174_Ori_A.eps){width="0.98\columnwidth"}

![image](ASAS_052821_A_Li.eps){width="0.98\columnwidth"} ![image](EK_Cep_A_Li.eps){width="0.98\columnwidth"}

![image](RXJ+0529.4Aa_Li.eps){width="0.98\columnwidth"} ![image](V1174_Ori_A_Li.eps){width="0.98\columnwidth"}

Binary stars {#sec:binary}
------------

\[tab:binarie\]

| System | Mass \[M$_\odot$\] | $\log T_\mathrm{eff}$ \[K\] | $\log L/L_{\odot}$ | $\epsilon_\mathrm{Li}$ | \[Fe/H\] |
|--------|--------------------|-----------------------------|--------------------|------------------------|----------|
| ASAS J052821+0338.5 (a) | $1.387 \pm0.017$ | $3.708\pm0.009$ | $0.314\pm0.034$ | $3.10\pm0.20$ | $-0.20\pm0.20$ |
| ASAS J052821+0338.5 (b) | $1.331 \pm0.017$ | $3.663\pm0.009$ | $0.107\pm0.034$ | $3.35\pm0.20$ | $-0.10\pm0.20$ |
| EK Cep (a) | $2.020\pm0.010$ | $3.954\pm0.010$ | $1.170\pm0.040$ | $-$ | $+0.07\pm0.05$ |
| EK Cep (b) | $1.124\pm0.012$ | $3.755\pm0.015$ | $0.190\pm0.070$ | $3.11\pm0.30$ | $+0.07\pm0.05$ |
| RXJ 0529.4+0041 A (a) | $1.270\pm0.010$ | $3.716\pm0.013$ | $0.140\pm0.080$ | $3.20\pm0.30$ | $-0.01\pm0.04$ |
| RXJ 0529.4+0041 A (b) | $0.930\pm0.010$ | $3.625\pm0.015$ | $-0.280\pm0.150$ | $2.40\pm0.50$ | $-0.01\pm0.04$ |
| V1174 Ori (a) | $1.009\pm0.015$ | $3.650\pm0.011$ | $-0.193\pm0.048$ | $3.08\pm0.20$ | $-0.01\pm0.04$ |
| V1174 Ori (b) | $0.731\pm0.008$ | $3.558\pm0.011$ | $-0.761\pm0.058$ | $2.20\pm0.20$ | $-0.01\pm0.04$ |

Binary systems and, in particular, the subclass of detached double-lined eclipsing binaries (EBs) are severe tests for stellar models. Indeed, for EBs independent measurements of mass, radius, and effective temperature are available [for a detailed review see e.g., @mathieu07]. The validity of our theoretical models has already been tested against a large sample of pre-MS binaries (26 objects) by @gennaro11, who compared the models of the Pisa pre-MS database with observations by means of a Bayesian method. The present pre-MS models differ from those available in the quoted database only in the minimum value of the mixing length parameter, i.e. $\alpha = 1.0$ instead of $\alpha = 1.2$. From the sample of EBs presented in @gennaro11, we selected a subsample of binary systems for which surface lithium abundances are available, namely ASAS J052821+0338.5 [@stempels08], EK Cep [@popper87], RXJ 0529.4+0041 A [@covino04], and V1174 Ori [@stassun04]. Table \[tab:binarie\] summarizes the main parameters of each system: mass, effective temperature, luminosity, lithium abundance, and \[Fe/H\]. The corresponding models are computed for $\alpha_\mathrm{PMS}=1.0$, 1.2, and 1.68. Figure \[fig:hr\_bin\] shows the HR diagram of the four selected systems compared with our evolutionary tracks. @gennaro11 have already shown that theoretical models with a low initial helium abundance and mixing length parameter agree better with the data of pre-MS binary systems, in particular for those with at least one component near the Hayashi track. In our sample only EK Cep does not have stars near the Hayashi track. For ASAS J052821+0338.5 (Fig.
\[fig:hr\_bin\]), both the lowest and the highest $\alpha_\mathrm{PMS}$ values are compatible with the primary star, whereas $\alpha_\mathrm{PMS} = 1.0$ - 1.2 is required to match the secondary. For EK Cep we cannot constrain the mixing length value during the pre-MS, since both stars are approaching the ZAMS, and consequently their position in the HR diagram is not sensitive to the choice of $\alpha_\mathrm{PMS}$. Moreover, we cannot achieve a satisfactory agreement between our model and the primary star, as already pointed out by @gennaro11. Similarly to ASAS J052821+0338.5, RXJ 0529.4+0041 A has two stars near the ‘heel’. As shown in Fig. \[fig:hr\_bin\], the three different $\alpha_\mathrm{PMS}$ values are all compatible with both stellar components within the observational uncertainties, which are quite large. V1174 Ori is much more problematic. As discussed in @gennaro11, the two stars show a peculiar position in the HR diagram. None of the present models (or other models widely adopted in the literature) can reproduce the correct position of the secondary by adopting the measured mass and chemical composition. A possible explanation of such a peculiar position in the HR diagram can be the presence of a large systematic uncertainty introduced by the adopted spectral type-effective temperature scale [see e.g., @luhman97; @stassun04; @hillenbrand04; @gennaro11]. To be close to our coolest model ($\alpha_\mathrm{PMS}=1.0$), an increase of about 300 K in the secondary effective temperature would be required, which would correspond to a primary effective temperature increment of about 400 K. However, it seems unlikely that such a large shift could be caused uniquely by the adoption of an inadequate spectral type-effective temperature scale. Figure \[fig:lit\_bin\] shows the comparisons between theoretical and observed lithium surface abundances along the evolutionary tracks.
In the case of EK Cep we do not show the primary because its lithium abundance is not currently available. The figure shows the tracks computed with the low convection efficiency, i.e. $\alpha_\mathrm{PMS}=1.0$, and the models with $\alpha_\mathrm{PMS}=\alpha_\mathrm{MS}=1.68$. The lithium predictions of our models computed with both $\alpha_\mathrm{PMS}=1.0$ and $\alpha_\mathrm{PMS}=\alpha_\mathrm{MS}=1.68$ are in good agreement with the data for the primary components (1.0 M$_\odot\la$ M$_1$ $\la 1.4$ M$_\odot$), within the uncertainties, since lithium depletion is almost negligible for such masses. Therefore, the primary components belonging to our sample do not allow further constraints on the $\alpha_\mathrm{PMS}$ value. In contrast, the impact of $\alpha_\mathrm{PMS}$ gets stronger and stronger as the mass decreases below about 1 M$_\odot$. The secondaries of EK Cep and RXJ 0529.4+0041 A might thus give useful constraints[^5]. Unfortunately, among the selected systems, the lithium data for the secondary components are quite uncertain, with errors as large as 0.5 dex (see Table \[tab:binarie\]), preventing a robust comparison. The model with a low convection efficiency is fully compatible with the data for the secondary component of EK Cep, although we cannot exclude $\alpha_\mathrm{PMS} =1.68$, while in the case of RXJ 0529.4+0041 A nothing can be concluded, owing to the extremely uncertain lithium abundance determination, which is only a lower limit [see also the discussion in @alcala00; @covino01; @dantona03]. From this analysis it is evident that an effort to improve the measurements of the main parameters of binary systems, lithium abundance included, is required in order to better constrain the super-adiabatic efficiency of theoretical models. Moreover, we emphasize that EBs are extremely useful tools for testing the validity of stellar models, and consequently precise data on such systems are required.
Conclusions
===========

We have discussed in detail the uncertainties on theoretical models by evaluating the effect of the errors affecting the initial chemical composition and the up-to-date physical inputs (i.e. opacity, reaction rates, EOS), for several ages, masses, and chemical compositions. From these computations, we obtained a quantitative estimate of the error bar associated with the $^7$Li surface abundance predictions. The comparison between theory and observations was conducted on five young clusters, namely Ic 2602, $\alpha$ Per, Blanco1, Pleiades, and Ngc 2516, and on four pre-MS EBs. Our results confirm the disagreement between present standard models and the $^7$Li surface abundance in young stars. Motivated by the high sensitivity of $^7$Li surface depletion to the convection efficiency and by the possibility of a dependence of the mixing length parameter on the evolutionary phase, gravity, and $T_\mathrm{eff}$, we therefore explored the effect of a different mixing length parameter during the pre-MS and the MS evolution. We found that, in this case, very good agreement can be achieved for all clusters of the sample by adopting $\alpha_\mathrm{PMS} = 1.0$, which is considerably lower than the MS value (i.e. $\alpha_\mathrm{MS} = 1.68$ - 1.9) obtained by comparing theoretical isochrones with the observed colour-magnitude diagrams. We also checked the validity of such low-convection-efficiency models against four pre-MS detached double-lined eclipsing binaries, namely ASAS J052821+0338.5, EK Cep, RXJ 0529.4+0041 A, and V1174 Ori. We found that, in the HR diagram, pre-MS tracks with $\alpha = 1.0$ seem to agree well with the data for at least two of the three systems that have a star close to the Hayashi track (ASAS J052821+0338.5 and RXJ 0529.4+0041 A).
However, the models are compatible with the $^7$Li data when adopting both the low and the high convection efficiency, as a consequence of the large observational uncertainty still present in $^7$Li abundance determinations. Our results point out the necessity of a low convection efficiency during the pre-MS phase in standard models. However, further analysis is required to clarify whether the mixing length parameter actually changes from low to high values in evolving from the pre-MS to the MS phase, or whether some physical mechanism that acts to partially inhibit convection in the envelope is lacking [i.e. magnetic fields, see e.g., @ventura98]. It is a great pleasure to thank Matteo Dell’Omodarme for his invaluable help in computer programming and Paola Sestito for useful and pleasant discussions. This research made use of the WEBDA database, operated at the Institute for Astronomy of the University of Vienna. This research has been partially supported by the PRIN-INAF 2011 (*Tracing the formation and evolution of the galactic halo with VST*, P.I. Marcella Marconi).

[^1]: We adopt a simple scaling of the initial $^7$Li abundance with the metallicity because we are mainly interested in reproducing the lithium depletion pattern, i.e. $\epsilon_{\mathrm{Li}} - T_\mathrm{eff}$, which is independent of the initial $^7$Li abundance.

[^2]: The database contains a very large grid of pre-MS models and isochrones between 1 - 100 Myr [for more details see @tognelli11]. The database is available at: <http://astro.df.unipi.it/stellar-models/>

[^3]: We use the `OPAL` opacity tables released in 2005 for $\log T[K] > 4.5$, which are available at <http://opalopacity.llnl.gov/opal.html>. For lower temperatures we use the @ferguson05 radiative opacities, available at <http://webs.wichita.edu/physics/opacity/>

[^4]: The `PTEH` EOS was computed using the FreeEOS fortran library developed by A.W.
Irwin, which allows computing the EOS by the free-energy minimization technique. One of its advantages is the possibility of setting several flags to mimic other historical EOS. [^5]: The case of V1174 Ori is peculiar, and if the problem resides in the effective temperature determinations, then lithium abundances could also be affected by uncertainties greater than the quoted ones.
---
abstract: 'In recent years we have seen that the Grover search algorithm [@grover-search], by using quantum parallelism, has revolutionized the solution of a huge class of NP problems in comparison to classical systems. In this work we explore the idea of extending the Grover search algorithm to approximate algorithms. Here we analyze the applicability of Grover search to processing an unstructured database with a dynamic selection function, as compared to the static selection function of the original work [@grover-search]. This allows us to extend the application of Grover search to the field of randomized search algorithms. We further use the dynamic Grover search algorithm to define the goals for a recommendation system, and define the algorithm for a recommendation system over a binomial similarity distribution space, giving us a quadratic speedup over traditional unstructured recommendation systems. Finally, we see how dynamic Grover search can be used to attack a wide range of optimization problems, where we improve the complexity over existing optimization algorithms.'
author:
- 'Indranil Chakrabarty, Shahzor Khan and Vanshdeep Singh'
title: 'Dynamic Grover Search: Applications in Recommendation systems and Optimization problems'
---

I. Introduction
===============

The promise of quantum computation is to enable new algorithms which render tractable problems that would require exorbitant physical resources for their solution on a classical computer. There are two broad classes of such algorithms. The first class is built upon *Shor’s quantum fourier transform* [@shor] and includes remarkable algorithms for solving the discrete logarithm problem, providing a striking exponential speedup over the best known classical algorithms. The second class of algorithms is based upon Grover’s algorithm for performing *quantum searching* [@grover-search].
Apart from these two broad divisions, the Deutsch algorithm, based on *quantum parallelism/interference* [@Deutsch], is another example which has no classical analogue. It provides a remarkable speedup over the best possible classical algorithms. With the introduction of quantum algorithms, questions were raised about proving the complexity superiority of the quantum model over the classical model [@feynman].\
Grover’s search algorithm was one of the first algorithms that opened up a class of problems solvable by quantum computation [@grover-framework] with a quadratic speedup over classical systems. Classical unstructured search, or processing of a search space, is essentially linear, as we have to process each item; randomized search functions can at best be optimized to examining $N/2$ states. In 1996, L. K. Grover gave the Grover search algorithm to search through a search space in $\mathcal{O}(\sqrt{N})$ [@grover-search]. The algorithm leverages the computational power of superimposed quantum states. In its initialization step an equiprobable superposition state is prepared from the entire search space. In each iteration of the algorithm the coefficients of selected states, based on a selection function, are increased and those of the unselected states are decreased by inversion about the mean. This method increases the coefficients of selected states quadratically, and in $\mathcal{O}(\sqrt{N})$ steps we get the selected states with high probability. The unstructured search approach can be used to compute any NP problem by iterating over the search space.\
From an application perspective quantum search algorithms have several applications, as they can be used to extract statistics, such as the minimal element from an unordered data set, more quickly than is possible on a classical computer [@min]. They have been extended to solve various problems, such as finding different measures of central tendency like the mean [@mean] and the median [@median].
They can be used to speed up algorithms for some problems in NP, specifically those problems for which a straightforward search for a solution is the best algorithm known. Finally, they can be used to speed up the search for keys to cryptographic systems such as the widely used Data Encryption Standard (DES).\
In the field of e-commerce we have seen that recommendation systems collect information on the preferences of users for a set of items. The information can be acquired explicitly (by collecting users’ ratings) or implicitly (by monitoring users’ behavior) [@lee; @nun; @choi]. Such systems make use of different sources of information to provide users with predictions and recommended items. They try to balance various factors like accuracy, novelty, diversity, and stability of the recommended items. Collaborative filtering plays an important role in recommendation, although it is often used together with other filtering techniques like content-based and knowledge-based filtering. Another important approach in the recommendation process is the k-nearest-neighbor approach, in which we find the k nearest neighbors of the search item. Recently, recommendation system implementations have increased and have spread into diverse areas [@park], such as recommending music, television, and books; e-learning and e-commerce; and applications in markets and web search [@car; @serr; @zai; @huang; @castro; @costa; @Mcnally]. Mostly these recommendations are done over structured classical databases.\
Solving NP problems [@NPProblems] with Grover search has been explored in general [@grover-framework]. As an extension, optimization problems have been used to find solutions for various specific applications. The class of NP optimization problems (NPO) [@NPO] consists of combinatorial optimization problems under specific conditions. In this work we replace the static selection function of the Grover search with a dynamic selection function.
This allows us to extend the application of Grover search to the field of randomized search algorithms. One such application is in recommendation systems. We also define the goals for a recommendation system. Finally, we define the algorithm for a recommendation system over a binomial similarity distribution space, giving us a quadratic speedup over traditional unstructured recommendation systems. Another application is in finding an optimal search state for a given NPO problem. We note that Durr and Hoyer’s work [@min] also performs optimization, in $O(\log (N) \sqrt{N})$; however, the use of dynamic Grover search can achieve the same in $O(\sqrt{N})$.\
In section II we give a brief introduction to Grover search using the standard static selection function. In section III we introduce our model of dynamic Grover search, defining the algorithm over the binomial distribution space and comparing it with traditional unstructured recommendation systems. In the last section, IV, we provide applications of dynamic Grover search in recommendation systems and optimization algorithms.\

II. Grover Search Algorithm
===========================

In this section we briefly describe the Grover search algorithm as a standard searching procedure and elaborate on how it expedites the searching process as compared to a classical search in an unstructured database [@nielsen].\
**Oracle:** Suppose we wish to search for a given element through an unstructured search space consisting of $N$ elements. For the sake of simplicity, instead of directly searching for a given element, we assign indices to each of these elements, which are just numbers in the range $0$ to $N-1$. Without loss of generality we assume $N=2^n$, and we also assume that there are exactly $M$ solutions ($1\leq M \leq N$) to this search problem. Further, we define a selection function $f$ which takes an input state $\ket{x}$, where the index $x$ lies in the range $0$ to $N-1$.
It assigns the value $1$ when the state is a solution to the search problem and the value $0$ otherwise,
$$\begin{aligned}
f = \begin{cases} 0 & \text{if $\ket{x}$ is not selected}, \\ 1 & \text{if $\ket{x}$ is selected}. \end{cases}\end{aligned}$$
Here we are provided with a quantum oracle, a black box which is precisely a unitary operator $O$ whose action on the computational basis is given by,
$$\ket{x}\ket{q} \rightarrow \ket{x} \ket{q \oplus f(x)}.$$
In the above equation $\ket{x}$ is the index register. The symbol $\oplus$ denotes addition modulo $2$, and the oracle qubit $\ket{q}$ gets flipped if $f(\ket{x})=1$; it remains unchanged otherwise. This helps us check whether $\ket{x}$ is a solution to the search problem, as this is equivalent to checking whether the oracle qubit is flipped.\
**Algorithm:** The algorithm starts by creating a superposition of $N$ quantum states by applying a Hadamard transformation on $\ket{0}^{\otimes n}$,
$$\ket{\psi} =\frac{1}{\sqrt{N}}\sum _{x=0}^{N-1}\ket{x}.$$
The algorithm then proceeds by repeated application of a quantum subroutine known as the Grover iteration, or the Grover operator, denoted by $G$. The Grover subroutine consists of the following steps:\

**Procedure: Grover Subroutine**

- Apply the oracle $O$.
- Perform inversion about the mean.

**Algorithm: Grover Search**

- Initialize the system, such that there is the same amplitude for all the $N$ states.
- Apply the Grover iteration $O(\sqrt{N})$ times.
- Sample the resulting state, where we get the expected state with probability greater than 1/2.

**Geometry:** The entire process of the Grover iteration can be considered as a rotation in a two dimensional space, where one dimension represents the solution space and the other represents the remaining search space.
These normalized states are written as,
$$\begin{aligned}
\ket{ \alpha} = \frac{1}{\sqrt{N-M}}\sum_x^{''} \ket{ x} \nonumber\\
\ket{ \beta} = \frac{1}{\sqrt{M}}\sum_x^{'} \ket{ x} .\end{aligned}$$
The initial state $\ket{ \psi} $ can be re-expressed as
$$\ket{ \psi} =\sqrt{\frac{N-M}{N}}\ket{ \alpha} +\sqrt{\frac{M}{N}}\ket{ \beta} .$$
The geometric visualization of the Grover iteration can be described in two parts: the oracle function and the inversion about the mean. The oracle function can be considered as a reflection of the state $\ket{\psi}$ about $\ket{\alpha}$, and the inversion about the mean further reflects this state about the new mean ($\approx \ket{\psi} $). In short, $G$ can be understood as a rotation in the two dimensional space spanned by $\ket{ \alpha} $ and $\ket{ \beta} $, rotating the state $\ket{\psi}$ by some angle $\theta$ radians per application of $G$. Applying the Grover rotation operator multiple times brings the state vector very close to $\ket{ \beta} $.\

*(Figure: the Grover iteration as a rotation in the plane spanned by $\ket{\alpha}$ and $\ket{\beta}$; the oracle reflects $\ket{\psi}$ to $O\ket{\psi}$ about $\ket{\alpha}$, and the inversion about the mean rotates it to $G\ket{\psi}$, a net rotation by $\theta$.)*

III. Dynamic Grover’s Search
============================

In this section we introduce a dynamic selection function $f_{s}$ which selects a state $\ket{ x} $ with a certain probability $P_{s}(\ket{ x} )$. We use $f_{s}$ instead of the static selection function $f$ used in Grover’s search to introduce randomness into the search algorithm itself.
This selection criterion can be based on different properties, such as similarity to a given state or the number of satisfied clauses, for applications in recommendation systems, MAX-SAT optimization systems, etc.\
We consider $N$ items in a search space which are represented by (computational) basis vectors in a Hilbert space. Here our goal is to select $N_{s}$ states out of these $N$ states using the dynamic selection function. We define the dynamic selection function as:
$$f_{s} = \begin{cases} 1 & \text{$\ket{x}$ is selected with $P_{s}(x)$}, \\ 0 & \text{otherwise}. \end{cases}$$
The dynamic nature of this function introduces selection scenarios that are fundamentally different from the traditional Grover search.\
For the analysis, let the selected states be represented by $\ket{ x_s} $ with coefficients $a_s$ and the unselected states by $\ket{ x_{us}} $ with coefficients $a_{us}$. The state of the system at a given time can be represented as
$$\ket{ \psi } = \sum_{s} a_s\ket{ x_s} + \sum_{us} a_{us}\ket{ x_{us}} .$$
The probability of sampling from the selected states is given by $P_s(=\sum_{s}|a_s|^2)$. Similarly, for the unselected states we have the corresponding probability $P_{us}(=\sum_{us}|a_{us}|^2)$. We define the gain $G(=\frac{P_s}{P_{us}})$ as an indicator of the achievability of the desired result.

Analysis of Grover’s search in a different scenario
---------------------------------------------------

In this subsection we discuss the impact of the dynamic selection function on the execution of Grover’s search. We also analyze the conditions required for a Grover step to complete successfully.\
**Corollary 1:** For proper execution of Grover search the following conditions must be satisfied.\

1. The mean $\mu$ calculated in the inversion step should be positive.
2. The probability amplitudes $a_{us}$ of the unselected states $\ket{ x_{us}} $ remain positive.
3.
The number of selected states $\ket{ x_{s}} $ for a gain $G$ should satisfy
$$N_{s} < \frac{N}{2G} \;\;\;\; \text{where} \;\; G \gg 1.$$

*Proof 1:* The mean $\mu$ (calculated in the inversion step) should be positive. If the mean is less than 0, then the coefficients of the selected states will decrease and the coefficients of the unselected states will become negative, as given by,
$$\begin{aligned}
&&a_{x_{s}} = \mu - (- a_{x_{s}} - \mu) = 2\mu + a_{x_{s}}{}\nonumber \\&& a_{x_{us}} = \mu - (a_{x_{us}} - \mu) = 2\mu - a_{x_{us}}\end{aligned}$$

*Proof 2:* The coefficients of the unselected states must remain positive. If the coefficient of an unselected state is negative, the mean will be negative.\

*Proof 3:* For successfully achieving a gain $G$,
$$N_{s} < \frac{N}{2G} \;\;\;\; \text{where} \;\; G \gg 1,$$
as described in Appendix A1.\

**Corollary 2:** If a state is selected in a Grover iteration and not in the next one, the coefficient of the selected state after the next iteration will be less than those of the states which are not selected in both these rounds.\

*Proof:* Consider $\ket{ \psi_{i}} $ to be the input state to the iteration process. The first Grover step inverts the selected state to $\ket{ \psi_{iv}} $ and then calculates the mean state $\ket{ \psi_{\mu}} $. The final probability of the selected state $\ket{ \psi_{s}} $ is increased and that of the state $\ket{ \psi_{us}} $ is decreased as compared to $\ket{\psi_{i}}$, as described earlier in the introduction.\
Let $a_s$, $a_{us}$ be the coefficients, $\mu_1$ be the mean, and $a_{s1}$, $a_{us1}$ be the outputs of the first Grover iteration. Then we have,
$$\begin{aligned}
&& a_{s1} = 2 \mu_1 + a_s {}\nonumber \\&& a_{us1} = 2 \mu_1 - a_{us}\end{aligned}$$
For the second iteration, when no states are selected, let $\mu_2$ be the mean and $a_{s2}$, $a_{us2}$ be the outputs of the iteration.
These coefficients are given by,\ $$\begin{aligned} && a_{s2} = 2 \mu_2 - a_{s1} = 2\mu_2 - 2\mu_1 - a_s {}\nonumber \\&& a_{us2} = 2 \mu_2 - a_{us1} = 2\mu_2 - 2\mu_1 + a_{us} \end{aligned}$$ Hence the coefficient $a_{s2}$ of the state that was selected in one iteration is less than the coefficient $a_{us2}$ of the state that was never selected. Fig. \[fig:GeometricReper\] shows a geometric representation of a Grover iteration in the $\ket{\alpha}$–$\ket{\beta}$ plane, with the initial state $\ket{\psi_{i}}$, its inversion $\ket{\psi_{iv}}$, the mean state $\ket{\psi_{\mu}}$, and the resulting selected and unselected states $\ket{\psi_{s}}$ and $\ket{\psi_{us}}$.\ Further, suppose that in the next iteration a state $\ket{ \psi_{s}} $ which was selected earlier is not selected, and an earlier unselected state $\ket{ \psi_{us}} $ is again not selected. Since no state is selected, no inversion occurs, and in that case the mean $\ket{ \psi_{\mu2}} $ lies above $\ket{ \psi_{us}} $.
The inversion about the mean will then cause the state $\ket{ \psi_{s}} $ to become the state $\ket{ \psi_{s1}} $, which has a probability lower than that of the states which were unselected twice, $\ket{ \psi_{us1}} $. (The accompanying figure gives the geometric representation of this second iteration, with $\ket{\psi_{s1}}$ falling below $\ket{\psi_{us1}}$ after inversion about $\ket{\psi_{\mu2}}$.)\ Note that the issue is not restricted to the case where no state is selected; it is inherent in the use of the inversion function for unselected states. In order to overcome this issue we need to always run each Grover iteration twice with a given result from the selection function $f_s$, so that the relative coefficients remain in the order of the number of times each state was selected.\ **Corollary 3:** If no state is selected and the Grover step is repeated twice, no change happens in the coefficients.\ **Corollary 4:** If all states are selected and the Grover step is repeated twice, there is no net change in the coefficients.\ *Proof :* In the first step all the coefficients become the negatives of themselves, so the mean is negative and inversion around the mean leaves them with negative coefficients.
In the second step all the coefficients again become the negatives of themselves, so the mean is positive and the coefficients are rotated back to their original places around the mean.\ Dynamic Grover's Search Algorithm -------------------------------- In this subsection we formalize the procedure of the dynamic Grover search algorithm.\ **Procedure: Dynamic Grover Iteration** - Apply the Oracle $f_{s}$ and store the result. - Apply the *Grover Iteration* using the stored oracle results. - Apply the *Grover Iteration* again using the stored oracle results, to nullify any negative effects of the inversion about the mean. **Algorithm: Dynamic Grover Search** - Initialize the system, such that there is the same amplitude for all the $N$ states - Apply the *Dynamic Grover Iteration* $O(\sqrt{N})$ times - Sample the resulting state, where we get the expected state with probability $> \frac{1}{2}$ Applications of Dynamic Grover Search ========================================= In this section we explore two different applications of our dynamic Grover search algorithm. In the first subsection we show its application to the recommendation process. In the second we give a generic optimization problem and its solution using dynamic Grover search. Quantum Recommendation Algorithm ---------------------------------- Now that we have described the dynamic Grover algorithm, we look at how it can be applied to recommendation systems.\ In essence, a recommendation algorithm on an unstructured search space is similar to Grover search, differing only in the selection function. If we know the search space well, we can construct a static selection function that selects the top $M$ states. In the case of an unknown or dynamic search space, we may not always be able to construct a static selection function. In that case we need to associate the selection dynamically with the similarity to the given state $\ket{x} $ (say).
This will sufficiently increase the probability of selection of the desired $M$ states.\ Recommendation Problem : ------------------------ - Consider a standard recommendation problem, - Given an unstructured search space $S$, - The dimensionality of the space be $n$, and the total number of states be N($=2^n$). - We need to find $M$ recommended states for a given search result $\ket{x} $. Let the similarity of two pure states $S(\ket{x} ,\ket{y} )$ represent a measure of the likeliness of these two states to be recommended for each other.\ **Criteria for an effective Recommendation function**\ Now we give a criterion for the selection function $f_{s}$ to be effective in a dynamic system. Let the dynamic selection function be given as: $$\begin{aligned} f_{s} = \begin{cases} 1 & \text{$\ket{x}$ is selected with $P_{s}$}, \\ 0 & \text{otherwise}. \end{cases}\end{aligned}$$ The $P_{s}$ should satisfy the following criteria for a good recommendation: - The most likely state is selected with high probability, $$\lim_{S(x,y) \to n } P_{s}(x) \geq \Big( 1-\frac{1}{N} \Big)$$ - The least likely state is selected with a low probability, $$\lim_{S(x,y) \to 0 } P_{s}(x) \leq \Big( \frac{1}{N} \Big)$$ - In order to select $M$ states, we have from Corollary 1, $$M < \frac{N}{2G}$$ The expected number of selected states, $$E(N_{s}) = \int_{x} P_{s}(x,y) \approx M$$ Recommendation for Binomial distribution ----------------------------------------- Consider an example of an initial state space with equal probability for each of the states. Let the similarity of the states with respect to a particular state be given by the Manhattan distance between the states. The similarity function $S(x,y)$ would then be a binomial curve, Fig. \[fig:BinomialDistribution\].
The probability of selection is given by the following equation: $$e^{-\log(\sqrt[n]{K}-1) S(\ket{x},\ket{y})}$$ The expected selection probability is shown in Fig. \[fig:BinomialProbabilty\]. (Plot data for Figs. \[fig:BinomialProbabilty\], \[fig:ExecutionComparison\] and \[fig:accuracy\] omitted.) We see that the dynamic selection gives a similar performance (Fig. \[fig:ExecutionComparison\]) as well as the desired accuracy (Fig. \[fig:accuracy\]) as would have been given by a static selection function using the Grover search. However, this algorithm makes our recommendation system robust with respect to changes in the search space and in the distribution of the search space. Further details can be seen in the Appendix. Approximate Optimization Algorithms ------------------------------------ The Grover search algorithm was a landmark algorithm because it provided a framework [@grover-framework] that can be used to solve any NP problem with a quadratic speedup over classical systems.
Durr and Hoyer’s work on finding the minimum of a search space [@min] can be considered a tool to find the optimal value (min or max) for a search space, but it is done by applying the Grover search multiple times, and it uses the quantum probability during the sampling to arrive at the optimal state.\ **Optimization Problem:** An optimization problem can be represented in the following way:\ $\boldsymbol{Given:}$ A function $f : A \to R$ from some set $A$ to the set of all real numbers $R$.\ $\boldsymbol{Sought:}$ An element $\ket{x_o} $ (optimal) in $A$ such that $f(\ket{x_o} ) \leq f(\ket{x} )$ for all $\ket{x} $ in $A$ (minimization) or such that $f(\ket{x_o} ) \geq f(\ket{x} )$ for all $\ket{x} $ in $A$ (maximization).\ Solving Optimization Problem using Durr and Hoyer’s Min Approach ---------------------------------------------------------------- To solve the optimization problem using Durr and Hoyer’s approach [@min], the selection function would use the following: 1. Set $\ket{x_o} = \ket{0} $ 2. Set $f_{s} = f(\ket{x} ) \geq f(\ket{x_o} )$ 3. Run Grover search using $f_{s}$, sample out $\ket{y} $ 4. **if** $ f(\ket{y} ) > f(\ket{x_o}) $ - $\ket{x_o} = \ket{y} $ - Repeat from 2 5. **else** return $\ket{x_o} $. The algorithm runs in expected $O(\log(N)\sqrt{N})$ Grover iterations.\ **Solving Optimization Problem using Dynamic Grover Search :** With the dynamic Grover system, we present a generic framework for solving optimization problems using classical probability. Consider the distribution function $D(\ket{x} ) = f(\ket{x} )$; we use a probabilistic function $P_{s}: A \to [0,1]$ such that,\ $$\lim_{f(\ket{x} ) \to f_{max} } P_{s}(\ket{x} ) \geq \Big( 1-\frac{1}{N} \Big)$$ $$\lim_{f(\ket{x} ) \to f_{min}} P_{s}(\ket{x} ) \leq \Big( \frac{1}{N} \Big)$$ Using a good heuristic, a probabilistic function $P_{s}$ can be chosen to get optimal results with high probability by running the Grover search algorithm once.
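One concrete choice of such a $P_{s}$ can be sketched classically; this is a hypothetical heuristic (not given in the text), assuming rough bounds $f_{lo}$, $f_{hi}$ on $f$ are known, and the exponent $k$ is an illustrative sharpening parameter:

```python
import numpy as np

def selection_probability(f_vals, f_lo, f_hi, N, k=8):
    # Hypothetical heuristic P_s: map f(x) into [1/N, 1 - 1/N] so that
    # states near f_max are selected almost surely and states near
    # f_min almost never, matching the two limit conditions above.
    t = np.clip((f_vals - f_lo) / (f_hi - f_lo), 0.0, 1.0)
    return 1.0 / N + (1.0 - 2.0 / N) * t**k

N = 256
rng = np.random.default_rng(0)
f_vals = rng.uniform(0.0, 1.0, N)        # toy objective values f(x)
P = selection_probability(f_vals, 0.0, 1.0, N)

selected = rng.random(N) < P             # one random draw of f_s
print(P.min() >= 1.0 / N, P.max() <= 1.0 - 1.0 / N)
```

By construction $P_{s}$ stays inside $[1/N,\,1-1/N]$ and is monotone in $f$, so each oracle call realizes one probabilistic draw of $f_{s}$.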
Hence the dynamic Grover search can be modeled for any optimization problem, and it runs in $O(\sqrt{N})$ Grover iterations; the accuracy of the search depends on the probability function $P_{s}$.\ [99]{} \[bb\] L. K. Grover, A fast quantum mechanical algorithm for database search, arXiv:quant-ph/9605043. P. W. Shor, Polynomial-Time Algorithms for Prime Factorization and Discrete Logarithms on a Quantum Computer, SIAM J. Comp. 26(5), 1484 (1997), arXiv:quant-ph/9508027. D. Deutsch and R. Jozsa, Proc. R. Soc. London A, 439, (1992). R. P. Feynman, Simulating physics with computers, keynote speech, Dept. of Physics, California Institute of Technology, Pasadena. http://www.cs.berkeley.edu/ christos/classics/Feynman.pdf L. K. Grover, A framework for fast quantum mechanical algorithms, arXiv:quant-ph/9711043v2. C. Durr, P. Hoyer, A Quantum Algorithm for Finding the Minimum, arXiv:quant-ph/9607014v2. L. K. Grover, A fast quantum mechanical algorithm for estimating the median, Bell Lab Technical Memorandum No. ITD-96-30115J, arXiv:quant-ph/9607024v1. R. R. Tucci, Quantum Circuit for Calculating Mean Values Via Grover-like Algorithm, arXiv:quant-ph/1404.0668. S.K. Lee, Y.H. Cho, S.H. Kim, Collaborative filtering with ordinal scale-based implicit ratings for mobile music recommendations, Information Sciences 180 (11) (2010) 2142–2155. E.R. Núñez-Valdéz, J.M. Cueva-Lovelle, O. Sanjuán-Martınez, V. Garcıa-Dıaz, P. ́Ordoñez, C.E. Montenegro-Marın, Implicit feedback techniques on recommender systems applied to electronic books, Computers in Human Behavior 28 (4) (2012) 1186–1193. K. Choi, D. Yoo, G. Kim, Y. Suh, A hybrid online-product recommendation system: combining implicit rating-based collaborative filtering and sequential pattern analysis, Electronic Commerce Research and Applications, in press, doi: 10.1016/j.elerap.2012.02.004. D.H. Park, H.K. Kim, I.Y. Choi, J.K.
Kim, A literature review and classification of recommender Systems research, Expert Systems with Applications 39 (2012) 10059–10072. W. Carrer-Neto, M.L. Hernández-Alcaraz, R. Valencia-Garcıa, F. Garcıa- Sánchez, Social knowledge-based recommender system, Application to the movies domain. Expert Systems with Applications 39 (12) (2012) 10990– 11000; P. Winoto, T.Y. Tang, The role of user mood in movie recommendations, Expert Systems with Applications 37 (8) (2010) 6086–6092. J. Serrano-Guerrero, E. Herrera-Viedma, J.A. Olivas, A. Cerezo, F.P. Romero, A google wave-based fuzzy recommender system to disseminate information in University Digital Libraries 2.0., Information Sciences 181 (9) (2011) 1503– 1516; C. Porcel, E. Herrera-Viedma, Dealing with incomplete information in a fuzzy linguistic recommender system to disseminate information in university digital libraries, Knowledge-Based Systems 23 (1) (2010) 32–39; C. Porcel, J.M. Moreno, E. Herrera-Viedma, A multi-disciplinar recommender system to advice research resources in university digital libraries, Expert Systems with Applications 36 (10) (2009) 12520–12528; C. Porcel, A. Tejeda-Lorente, M.A. Martınez, E. Herrera-Viedma, A hybrid recommender system for the selective dissemination of research resources in a technology transfer office, Information Sciences 184 (1) (2012) 1–19. O. Zaiane, Building a recommender agent for e-learning systems, in: Proceedings of the International Conference on Computers Education (ICCE’02), vol. 1, 2002, pp. 55–59; J. Bobadilla, F. Serradilla, A. Hernando, Collaborative filtering adapted to recommender systems of e-learning, Knowledge Based Systems 22 (2009) 261–265. Z. Huang, D. Zeng, H. Chen, A comparison of collaborative filtering recommendation algorithms for e-commerce, IEEE Intelligent Systems 22 (5) (2007) 68–78. J.J. Castro-Sanchez, R. Miguel, D. Vallejo, L.M. 
López-López, A highly adaptive recommender system based on fuzzy logic for B2C e-commerce portals, Expert Systems with Applications 38 (3) (2011) 2441–2454. E. Costa-Montenegro, A.B. Barragáns-Martınez, M. Rey-López, Which App? A recommender system of applications in markets: implementation of the service for monitoring users’ interaction, Expert Systems with Applications 39 (10) (2012) 9367–9375. K. Mcnally, M.P. O’Mahony, M. Coyle, P. Briggs, B. Smyth, A case study of collaboration and reputation in social web search, ACM Transactions on Intelligent Systems and Technology 3 (1) (2011). J. Hromkovic, Algorithmics for Hard Problems, Texts in Theoretical Computer Science (2nd ed.), Springer, 2002, ISBN 978-3-540-44134-2. V. Kann, On the Approximability of NP-complete Optimization Problems, Royal Institute of Technology, Sweden, 1992, ISBN 91-7170-082-X. M. A. Nielsen and I. L. Chuang, Quantum Computation and Quantum Information. Appendices ============= Proof of Corollary 1 --------------------- **Corollary 1.** *In order for the Grover's search to have a meaningful next step, the following conditions must be satisfied.* 1. The mean ($\mu$) (calculated in the inversion step) should be positive. 2. The coefficients of the unselected states remain positive. 3. The number of selected states $N_{s}$ for gain G($=\frac{P_{s}}{P_{us}})$ should be $$N_{s} < \frac{N}{2G} \;\;\;\; where \;\; G \gg 1$$ **Proof:** Let $N$, $N_s$ and $N_{us}$ be the total number of states, the number of selected states, and the number of unselected states respectively. Hence $$N_{us} = N - N_s$$ Let $\mu$, $a_s$ and $a_{us}$ be the mean, the coefficient of the selected states, and the coefficient of the unselected states respectively.
Hence $$\mu = \frac{N_{us} a_{us} - N_s a_s}{N}$$ For the coefficient of the unselected states to be positive (say in the last step) $$a_{us1} = 2\mu - a_{us} > 0$$ $$\implies 2\frac{N_{us} a_{us} - N_{s} a_{s}}{N} - a_{us} > 0$$ $$\implies \frac{a_s}{a_{us}} < (\frac{N}{2 N_s} - 1)$$ Now $G = \frac{P_{s}}{P_{us}}$ $$\implies G = \frac{N_s a_s^2}{(N - N_s) a_{us}^2}$$ since $a_s$ and $a_{us}$ are positive, $$\implies G < \frac{N_s}{N - N_s} (\frac{N}{2 N_s} - 1)^2$$ for $G \gg 1 $, $$N_s < \frac{N}{2 G}$$
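The algebra above can be verified with a short classical simulation of the amplitude dynamics (a numerical sketch with illustrative parameters $N=1024$, $N_s=4$; `grover_step` is our own helper, not notation from the text). It iterates the oracle-plus-inversion step on a fixed selected set and checks that the unselected amplitudes stay positive exactly while $a_s/a_{us} < N/(2N_s) - 1$:

```python
import numpy as np

def grover_step(amps, selected):
    # One Grover iteration: oracle sign-flip on the selected states,
    # then inversion about the mean.
    a = amps.copy()
    a[selected] *= -1
    mu = a.mean()
    return 2 * mu - a

N, Ns = 1024, 4
amps = np.full(N, 1 / np.sqrt(N))
sel = np.arange(Ns)
bound = N / (2 * Ns) - 1          # threshold on a_s / a_us derived above

results = []
while amps[Ns] > 0:               # amps[Ns] is an unselected state
    ratio = amps[0] / amps[Ns]    # a_s / a_us before the step
    amps = grover_step(amps, sel)
    # unselected amplitudes stay positive iff the ratio was below bound
    results.append((ratio < bound) == (amps[Ns] > 0))

print(all(results), len(results))
```

Every iteration confirms the equivalence, including the final one, where the ratio crosses the bound and the unselected amplitude turns negative.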
--- abstract: '[**Abstract:**]{} We have studied some thermodynamic features of the Kiselev black hole and the dilaton black hole. Specifically, we consider the Reissner–Nordström black hole surrounded by radiation and dust, and the Schwarzschild black hole surrounded by quintessence, as special cases of the Kiselev solution. We have calculated the products of black hole thermodynamic parameters, including surface gravities, surface temperatures, Komar energies, areas, entropies, horizon radii and the irreducible masses, at the inner and outer horizons. The products of surface gravities, surface temperatures and Komar energies at the horizons are not universal quantities. For the Kiselev solutions, the products of areas and entropies at both horizons are independent of the mass of the black holes (except for the Schwarzschild black hole surrounded by quintessence). For the charged dilaton black hole, all the products vanish. Using the Smarr formula approach, the first law of thermodynamics is also verified for the Kiselev solutions. Phase transitions in the heat capacities are also observed.' author: - Bushra Majeed - Mubasher Jamil - Parthapratim Pradhan title: '**Thermodynamic Relations for Kiselev and Dilaton Black Hole** ' --- Introduction ============ Black holes are the most exotic objects in physics and their connection with thermodynamics is even more surprising. Just like other thermodynamical systems, black holes have a physical temperature and entropy. The analogy between black hole thermodynamics and the four laws of thermodynamics was first proposed in the 1970s [@ch9670; @ch5271; @pe7771; @be3373]; the temperature ($T$) and entropy $(S)$ are the analogues of the surface gravity $(\kappa)$ and area $(A)$ of the black hole event horizon respectively. The laws of black hole thermodynamics have been studied in the literature [@ja6111]. In [@ca0812] universal properties of black holes and the first law of black hole inner mechanics are discussed.
In [@wa3114] horizon entropy sums in A(dS) spacetimes are studied. In [@cu6279] the authors have discussed the spin entropy of a rotating black hole. The study of phase transitions in black holes is a fascinating topic [@pa9590; @ne1865]. If a black hole has a Cauchy horizon (${\cal H}^-$) and an event horizon (${\cal H}^+$) then it is quite interesting to study quantities like the products of the areas of the black hole on these horizons. Products of thermal quantities of rotating black holes [@cv0111; @pr8714; @prarx14; @bushra] and area products for stationary black hole horizons [@vi1413] have been studied in the literature. Calculations show that sometimes these products do not depend on the ADM (Arnowitt–Deser–Misner) mass parameter but only on the charge and angular momentum. The relations that are independent of the black hole mass are of particular interest because these may turn out to be “universal” and hold for more general solutions with nontrivial surroundings too. Kiselev [@ki8703] considered Einstein’s field equations surrounded by quintessential matter and proposed new solutions, dependent on the state parameter $\omega$ of the matter surrounding the black hole. Recently, some dynamical aspects, i.e. collisions between particles and their escape energies after collision around the Kiselev black hole, have been studied [@ja2415]. In this work we consider the solution for the Reissner–Nordström (RN) black hole surrounded by energy-matter, derived by Kiselev, and study the important thermodynamic features at both horizons of the black hole. We also consider the solution for the Schwarzschild black hole surrounded by energy-matter and analyze its different thermodynamic products. Furthermore, we have considered the charged dilaton black hole and computed its various thermodynamic products. The plan of the work is as follows: In section (II), we discuss the basic aspects of the RN black hole surrounded by radiation.
Results show that the products of area and entropy calculated at $\mathcal{H}^{\pm}$ are independent of the mass of the black hole, while the other products are mass dependent. In the subsections of (II), the first law of thermodynamics is obtained by the Smarr formula approach; later the rest mass is written in terms of the irreducible mass of the black hole, and the phase transition in the heat capacity of the black hole is discussed. Section (III) consists of discussions of the thermodynamic aspects of the RN black hole surrounded by dust. In section (IV), the thermodynamics of the Schwarzschild black hole surrounded by quintessence is studied. In section (V) we compute the thermodynamic product relations for the dilaton black hole. All the work is concluded in the last section. We set $G = \hbar = c = 1$ throughout the calculations. RN Black Hole Surrounded by Radiation ===================================== The spherically symmetric and static solutions of Einstein’s field equations, surrounded by energy-matter, as investigated by Kiselev [@ki8703], can be written as: $$\begin{aligned} \label{M1} ds^2&= &-f(r)dt^2 + \frac{1}{f(r)}dr^2+r^2(d\theta^2+ \sin^2\theta d\phi^2),\end{aligned}$$ where $$\label{M01} f(r)= 1-\frac{2{\cal M}}{r}+ \frac{Q^2}{r^2} -\frac{\sigma}{r^{3\omega+1}},$$ here ${\cal M}$ and $Q$ are the mass and electric charge of the black hole respectively, $\sigma$ is the normalization parameter and $\omega$ is the state parameter of the matter around the black hole. We consider the cases when the RN black hole is surrounded by radiation ($\omega= 1/3$) and dust ($\omega= 0$). For $\omega= 1/3$ the two horizons of the black hole are obtained from: $$1-\frac{2{\cal M}}{r}+ \frac{Q^2}{r^2} -\frac{\sigma_r}{r^2}=0,$$i.e.
$$\label{M3} r_{\pm}={\cal M}\pm \sqrt{{\cal M}^2 -Q^2 + \sigma_r}.$$ Here $\sigma_r$ denotes the normalization parameter for the radiation case, with dimensions $[\sigma_r]=L^2$, where $L$ denotes length; $r_+$ is the outer horizon, known as the event horizon ${\cal H}^+$, and $r_-$ is the inner horizon, known as the Cauchy horizon ${\cal H}^{-}$; ${\cal H}^{\pm}$ are the null surfaces of infinite blue-shift and infinite red-shift respectively [@ch83]. Using Eq. (\[M3\]) one can obtain, $$\label{M4} r_+ r_- = Q^2 -\sigma_r,$$ so the product of the horizon radii is independent of the mass of the black hole but depends on the electric charge and $\sigma_r$. The areas of both horizons of the black hole are: $$\label{M5} \mathcal{A_{\pm}}= \int^{2\pi}_0\int^\pi_0 \sqrt{g_{\theta\theta} g_{\phi \phi}}d\theta d\phi=4 \pi r_{\pm}^2= 4\pi(2{\cal M}r_{\pm} -Q^2 +\sigma_r).$$ The corresponding semi-classical Bekenstein-Hawking entropy at ${\cal H}^{\pm}$ is [@ha4471]: $$\begin{aligned} \label{M7} \mathcal{S}_{\pm}&=&\frac{\mathcal{A}_{\pm}}{4}\nonumber\\ &= &\pi(2{\cal M}r_{\pm} -Q^2 +\sigma_r).\end{aligned}$$ The Hawking temperature of ${\cal H}^{\pm}$ is determined by using the formula $$\begin{aligned} \label{M9} T_{\pm}&=&\frac{1}{4 \pi}\frac{df}{dr}\mid_{r=r_{\pm}}\nonumber\\ &=&\frac{1}{4 \pi}\Big[\frac{r_{\pm}^2-Q^2+\sigma_r}{r_{\pm}^3}\Big] \nonumber\\ &=&\frac{r_{\pm}- {\cal M}}{2\pi(2{\cal M}r_{\pm} -Q^2 +\sigma_r)}.\end{aligned}$$ Surface gravity is the force required by an observer at infinity to hold a particle in place; it is equal to the acceleration, due to the gravity of the black hole, at the horizon [@po02]: $$\label{M8}\kappa_{\pm}=\frac{1}{2}\frac{df}{dr}\mid_{r=r_{\pm}}=2\pi T_{\pm},$$ $$\kappa_{\pm}=\frac{r_{\pm}- {\cal M}}{(2{\cal M}r_{\pm} -Q^2 +\sigma_r)}.$$ The Komar energy of the black hole is defined as [@ko3459] $$E_{\pm}= 2 \mathcal{S}_{\pm} T_{\pm}={ r_{\pm}-{\cal M}}.\label{M10}$$ The products of surface gravities and surface temperatures at ${\cal H}^{\pm}$ are $$\kappa_+\kappa_-=4\pi^2 T_+
T_-= \frac{Q^2-\sigma_r-{\cal M}^2}{(Q^2-\sigma_r)^2}.\label{M11}$$ The product of the Komar energies at ${\cal H}^{\pm}$ results in $$\label{M13} E_+E_-= {Q^2-{\cal M}^2- \sigma_r}.$$ Since these products (except the product of the horizon radii $r_+r_-$) are mass dependent, they are not universal quantities. The products of areas and entropies at ${\cal H}^{\pm}$, $$\mathcal{A}_+\mathcal{A}_-=16 \mathcal{S}_+\mathcal{S}_-=16 \pi^2(Q^2-\sigma_r)^2,\label{area}$$ are mass independent, so these are universal parameters of the black hole. Smarr Formula for Cauchy Horizon (${\cal H}^-)$ ----------------------------------------------- The expression for the area of the black hole is [@sm7173]: $$\label{M14} \mathcal{A}=4 \pi \Big[ 2M^2 -Q^2 +\sigma_r+2M\sqrt{M^2-Q^2+\sigma_r}\Big],$$ and from the areas of both horizons $$\label{M15} \mathcal{A}_{\pm}=4 \pi \Big[ 2M^2 -Q^2 +\sigma_r\pm 2M\sqrt{M^2-Q^2+\sigma_r}],$$ one can write the ADM mass of the black hole as: $$\begin{aligned} \label{M16} {\cal M}^2&=& \frac{\mathcal{A}_{\pm}}{16 \pi}+\frac{Q^4 \pi }{\mathcal{A}_{\pm}} +\frac{Q^2}{2}- \frac{2\pi \sigma_r Q^2}{\mathcal{A}_{\pm}}-\frac{\sigma_r}{2}+ \frac{\pi \sigma_r^2}{\mathcal{A}_{\pm}}\nonumber\\ &=&\frac{\mathcal{A}_{\pm}}{16 \pi}+\frac{\pi Q^4}{\mathcal{A}_{\pm}} +{Q^2}\Big(\frac{\mathcal{A}_{\pm}-4 \pi \sigma_r}{2\mathcal{A}_{\pm}}\Big)- \sigma_r \Big(\frac{\mathcal{A}_{\pm}-2\pi \sigma_r}{2 \mathcal{A}_{\pm}}\Big).\end{aligned}$$ According to the first law of thermodynamics, the differential of the mass of the black hole can be related to the change in its area and electric charge.
Since the effective surface tension at the horizon is proportional to the temperature of the black hole horizons, we can write: $$\label{M17} d{\cal M}=\mathcal{T_{\pm}} d\mathcal{A}_{\pm} + \Phi_{\pm}dQ,$$ where $\mathcal{T}_{\pm}$ and $\Phi_{\pm}$ are defined as: $$\begin{aligned} \label{M18} \mathcal{T}_{\pm}&=& \text{Effective surface tension at horizons}\nonumber\\ &=&\frac{1}{{\cal M}}\Big(\frac{1}{32\pi}-\frac{Q^4 \pi}{2 \mathcal{A}^2_{\pm}} +\frac{Q^2 \sigma_r \pi}{\mathcal{A}^2_{\pm}}-\frac{\pi \sigma_r^2}{2 \mathcal{A}_{\pm}}\Big)\nonumber\\ \Phi_{\pm}&=& \text{Electromagnetic potentials at horizons}\nonumber\\ &=&\frac{1}{{\cal M}}\Big(\frac{2\pi Q^3}{\mathcal{A}_{\pm}}- \frac{2\pi Q \sigma_r}{\mathcal{A}_{\pm}}+ \frac{Q}{2}\Big).\end{aligned}$$ The effective surface tension can be rewritten as: $$\begin{aligned} \mathcal{T}_{\pm}&=& \frac{1}{32 \pi{\cal M}}\Big(1-\frac{16 \pi^2 (Q^4 -2Q^2\sigma_r +\sigma_r^2)}{ \mathcal{A}^2_{\pm}}\Big)\nonumber\\&=&\frac{1}{16 \pi{\cal M}} \Big(1-\frac{2M^2 -Q^2+\sigma_r}{ r^2_{\pm}}\Big)\nonumber\\ &=&\frac{1}{8\pi}\Big(\frac{r_{\pm}-{\cal M}}{2{\cal M}r_{\pm}-Q^2+\sigma_r}\Big)\nonumber\\ &=&\frac{\kappa_{\pm}}{8\pi}.\end{aligned}$$ So the RN black hole surrounded by radiation satisfies the first law of thermodynamics, verified by the Smarr formula approach. Christodoulou-Ruffini Mass Formula for RN Black Hole Surrounded by Radiation ---------------------------------------------------------------------------- Christodoulou [@ch9670] and Christodoulou and Ruffini [@ch5271] showed that the mass of a black hole can be increased or decreased, but the irreducible mass ${\cal M}_{\text{irr}}$ of a black hole cannot be decreased. In fact, most processes result in an increase in ${\cal M}_{\text{irr}}$, and during a reversible process this quantity does not change.
Also, the surface area of a black hole obeys [@ba6173] $$\begin{aligned} d{\cal A}_{\pm} \geq 0 \label{arth},\end{aligned}$$ so there exists a relation between the area and the irreducible mass: ${\cal M}_{\text{irr}}$ is proportional to the square root of the black hole’s area. Since the RN-radiation space-time has regular event and Cauchy horizons, the irreducible mass of the black hole is proportional to the square root of its surface area [@ch5271]: $$\sqrt{\frac{\mathcal{A}_{\pm}}{16 \pi}}={\cal M}_{\text{irr}\pm}= \sqrt{\frac{r^2_{\pm}}{4}},$$where ${\cal M}_{\text{irr}-}$ and ${\cal M}_{\text{irr}+}$ are the irreducible masses defined on ${\cal H}^{\pm}$ respectively. The product of the irreducible masses at $\mathcal{H^{\pm}}$ is: $$\begin{aligned} \label{M22} {\cal M}_{\text{irr}+} {\cal M}_{\text{irr}-}&=& \sqrt{\frac{\mathcal{A}_+ \mathcal{A}_-}{(16 \pi)^2}}\nonumber\\ &=& \frac{Q^2-\sigma_r}{4}.\end{aligned}$$ This product is universal because it does not depend on the mass of the black hole. The expression for the rest mass of a rotating charged black hole given by Christodoulou and Ruffini, in terms of its irreducible mass, angular momentum and charge, is [@ch5271]: $${\cal M}^2= ({\cal M}_{\text{irr}\pm}+ \frac{Q^2}{4{\cal M}_{\text{irr}\pm}})^2 + \frac{J^2}{4 {\cal M}^2_{\text{irr}\pm}}.\label{M19}$$ Setting $\rho_{\pm}= 2 {\cal M}_{\text{irr}_{\pm}}$, the mass of the RN black hole surrounded by radiation becomes: $${\cal M}^2=\frac{\rho^4_{\pm}+ Q^4}{4\rho ^2_{\pm}} +\frac{\sigma^2_r-2\sigma_r Q^2}{4\rho^2_{\pm}}+ 2{\cal M}_{\text{irr+}} {\cal M}_{\text{irr}-}.$$ Heat Capacity $C_{\pm}$ on ${\cal H}^{\pm}$ ------------------------------------------- Another important measure for studying the thermodynamic properties of a black hole is the heat capacity; its sign reflects the stability or instability of a thermal system (black hole). A black hole with negative heat capacity is in an unstable equilibrium state, i.e.
it may decay to hot flat space by emitting Hawking radiation, or it may grow without limit by absorbing radiation [@gr3082]. One can get the heat capacity of a black hole by using $$C_{\pm}= \frac{\partial {\cal M}}{\partial T_{\pm}},$$ where the mass ${\cal M}$ in terms of $r_{\pm}$ is: $${\cal M}=\frac{r_{\pm}^2 +Q^2 -\sigma_r}{2r_{\pm}}.$$ The partial derivatives of the mass ${\cal M}$ and the temperature $T_{\pm}$ with respect to $r_{\pm}$ are: $$\frac{\partial {\cal M}}{\partial r_{\pm}}=\frac{r_{\pm}^2-Q^2+\sigma_r}{2 r_{\pm}^2},$$ and from Eq. (\[M9\]) we have $$\frac{\partial T}{\partial r_{\pm}}=\frac{1}{4 \pi}\frac{(3 Q^2-r_{\pm}^2-3\sigma_r)}{r_{\pm}^4}.$$ The expression for the heat capacity $C_{\pm}=\frac{\partial {\cal M}}{\partial r_{\pm}} \frac{\partial r_{\pm}}{\partial T_{\pm}}$ of the RN black hole surrounded by radiation at the horizons becomes: $$\label{M21} C_{\pm}=\frac{2 \pi r_{\pm}^2 (r_{\pm}^2 - Q^2 +\sigma_r)}{3Q^2- 3 \sigma_r-r_{\pm}^2}.$$ Note that there are two possible cases for the heat capacity to be positive:\ **Case $1$:** Both $r_{\pm}^2 - Q^2 +\sigma_r$ and $3Q^2- 3 \sigma_r-r_{\pm}^2$ are positive.\ **Case $2$:** Both $r_{\pm}^2 - Q^2 +\sigma_r$ and $3Q^2- 3 \sigma_r-r_{\pm}^2$ are negative.\ Since we are interested in positive $r$ only, case-$1$ implies that the heat capacity is positive if $$\sqrt{Q^2- \sigma_r}~<r <~\sqrt{3(Q^2- \sigma_r)},$$ while from case-$2$ we get $$3(Q^2- \sigma_r)~<r^2<~(Q^2- \sigma_r),$$which is not possible, so we exclude this case.\ The heat capacity is negative if\ **Case $a$**: $r_{\pm}^2 - Q^2 +\sigma_r>0$ and $3Q^2- 3 \sigma_r-r_{\pm}^2<0$,\ **Case $b$**: $r_{\pm}^2 - Q^2 +\sigma_r<0$ and $3Q^2- 3 \sigma_r-r_{\pm}^2>0$.\ Case-($a$) implies that for $$r> \sqrt{3Q^2-3\sigma_r},$$ the heat capacity is negative. From case-($b$) we get negative heat capacity for $$-\sqrt{Q^2-\sigma_r}~<r<~\sqrt{Q^2-\sigma_r},$$ but we are interested in positive $r$ only.
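The sign analysis above can be checked numerically for the parameter values used in Fig. (\[1M\]), $Q=0.5$ and $\sigma_r=0.01$ (a small illustrative sketch; the function name and sample radii are our own):

```python
import math

def heat_capacity(r, Q=0.5, sigma_r=0.01):
    # Eq. (M21): C = 2 pi r^2 (r^2 - Q^2 + sigma_r) / (3Q^2 - 3 sigma_r - r^2)
    return (2 * math.pi * r**2 * (r**2 - Q**2 + sigma_r)
            / (3 * Q**2 - 3 * sigma_r - r**2))

r1 = math.sqrt(0.5**2 - 0.01)        # ~0.4899: C changes sign here
r2 = math.sqrt(3 * (0.5**2 - 0.01))  # ~0.8485: C diverges here

print(heat_capacity(0.3) < 0)   # unstable region: 0 < r < r1
print(heat_capacity(0.6) > 0)   # stable region:   r1 < r < r2
print(heat_capacity(1.0) < 0)   # unstable region: r > r2
# prints: True / True / True
```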
The region where the heat capacity is negative corresponds to an unstable region around the black hole, whereas a region in which the heat capacity is positive represents a stability region. The behavior of the heat capacity given in Eq. (\[M21\]) is shown in Fig. (\[1M\]). The heat capacity is negative in the regions $0<r<0.4898 $ and $r> 0.8485$, while positive for $0.4898<r<0.8485$. ![The heat capacity undergoes a phase transition from instability to stability, diverges at $r=\sqrt{3(Q^2- \sigma_r)}$ and then returns to an unstable region; we chose $\sigma_r=0.01$ and $Q=0.5$[]{data-label="1M"}](radiation11.pdf){width="9cm"} Interestingly, the product of the heat capacities on ${\cal H}^{\pm}$ becomes $$\begin{aligned} \label{psh} C_{+} C_{-} &=& 4\pi^2 \left(Q^2-\sigma_r \right)^2 \frac{\left[ (Q^2-\sigma_r)-{\cal M}^2 \right]} {\left[4(Q^2-\sigma_r)-3{\cal M}^2\right]},\end{aligned}$$ which depends on the mass and charge parameters. Thus the product of the specific heats is not universal. RN Black Hole Surrounded by Dust ================================ The metric of the RN black hole surrounded by dust is the same as in Eq. (\[M1\]), and $f(r)$ defined in Eq. (\[M01\]) with $\omega= 0$ and $\sigma= \sigma_d$ becomes: $$f(r)= 1-\frac{2{\cal M}}{r}+\frac{Q^2}{r^2}-\frac{\sigma_d}{r}\label{M001},$$ where $[\sigma_d]= L$.
The horizons are $$\label{M03} r_{\pm}=\frac{2{\cal M}+\sigma_d \pm \sqrt{(2{\cal M}+\sigma_d)^2-4Q^2}}{2}.$$ The areas of the horizons ${\cal H}^{\pm}$ are: $$\begin{aligned} \label{M05} \mathcal{A_{\pm}}&=&\pi[2(2{\cal M}+\sigma_d)^2-4Q^2\pm 2(2{\cal M}+\sigma_d)\sqrt{(2{\cal M}+\sigma_d)^2-4 Q^2}],\nonumber\\ &=& 4\pi [(2{\cal M}+\sigma_d)r_{\pm}-Q^2].\end{aligned}$$ The entropies of the horizons are $$\begin{aligned} \label{M07} \mathcal{S}_{\pm}= \pi[(2{\cal M}+\sigma_d)r_{\pm}-Q^2].\end{aligned}$$ The surface gravity and the Hawking temperature of the horizons are, respectively: $$\label{M08} \kappa_{\pm}=\frac{2r_{\pm}-(2 {\cal M}+\sigma_d)}{2[(2{\cal M}+\sigma_d)r_{\pm}-Q^2]},$$ and $$\begin{aligned} \label{M09} T_{\pm} &=& \frac{2r_{\pm}-(2 {\cal M}+\sigma_d)}{4 \pi[(2{\cal M}+\sigma_d)r_{\pm}-Q^2]},\nonumber\\ &=&\frac{1}{4 \pi}\Big(\frac{r_{\pm}^2-Q^2}{r_{\pm}^3}\Big),\end{aligned}$$ where we have used $r_{\pm}^2= (2{\cal M}+\sigma_d)r_{\pm}-Q^2$. The Komar energy becomes: $$E_{\pm}= \frac{2r_{\pm}-(2 {\cal M}+\sigma_d)}{2}.\label{M010}$$ The products of the surface gravities and of the temperatures at the horizons are $$\kappa_+\kappa_-=4\pi^2 T_+ T_-=- \frac{(2{\cal M}+\sigma_d)^2-4Q^2}{4Q^4}.\label{dM11}$$ The product of the Komar energies at the horizons is: $$\label{dM13} E_+E_-= \frac{4Q^2-(2{\cal M}+\sigma_d)^2}{4}.$$ Note that all of these products are mass dependent, so these quantities are not universal. The products of the areas and of the entropies at the two horizons ${\cal H}^{\pm}$ are: $$\mathcal{A}_+\mathcal{A}_-= 16\mathcal{S}_{+}\mathcal{S}_{-}=16 \pi^2 Q^4.$$ It is clear that the area product and the entropy product are universal quantities.

Smarr Formula for Cauchy Horizon (${\cal H}^-)$
-----------------------------------------------

The area of each horizon can be written as $$\begin{aligned} \label{M015} \mathcal{A}_{\pm}&=&\pi[2(2{\cal M}+\sigma_d)^2-4Q^2\pm 2(2{\cal M}+\sigma_d)\sqrt{(2{\cal M}+\sigma_d)^2-4 Q^2}].\end{aligned}$$ Using Eq.
(\[M015\]), the mass of the black hole (the ADM mass) is expressed in terms of the areas of the horizons as: $$\label{M016} {\cal M}^2+{\cal M}\sigma_d = \frac{\mathcal{A}_{\pm}}{16 \pi}+\frac{\pi Q^4 }{\mathcal{A}_{\pm}}-\frac{(\sigma^2_d-2Q^2)}{4}.$$ The differential of the mass can be expressed in terms of physical invariants of the horizons, $$\label{M017} d{\cal M}=\mathcal{T_{\pm}} d\mathcal{A}_{\pm} + \Phi_{\pm}dQ,$$ where $$\begin{aligned} \label{M018} \mathcal{T}_{\pm}&=& \frac{1}{(2{\cal M}+\sigma_d)}\Big(\frac{1}{16 \pi}-\frac{\pi Q^4}{\mathcal{A}^2_{\pm}}\Big),\nonumber\\ \Phi_{\pm}&=&\frac{1}{(2{\cal M}+\sigma_d)}\Big(Q+\frac{4\pi Q^3}{\mathcal{A}_{\pm}}\Big).\end{aligned}$$ We can rewrite the effective surface tension as: $$\begin{aligned} \mathcal{T}_{\pm} &=& \frac{1}{16\pi(2{\cal M}+\sigma_d)}\Big(1-\frac{16\pi^2 Q^4}{\mathcal{A}^2_{\pm}}\Big),\nonumber\\ &=&\frac{1}{8 \pi(2{\cal M}+\sigma_d)} \Big[1-\frac{4({\cal M}^2+{\cal M}\sigma_d)+(\sigma^2_d-2Q^2)}{2 r^2_{\pm}}\Big],\end{aligned}$$ or $$\begin{aligned} \mathcal{T}_{\pm}&=&\pm \frac{\sqrt{(2\mathcal{M}+\sigma_d)^2-4Q^2}}{16 \pi ((2\mathcal{M}+\sigma_d)r_{\pm}-Q^2)},\nonumber\\ &=&\frac{\kappa_{\pm}}{8\pi}.\end{aligned}$$ So the first law of black hole thermodynamics is verified for the RN black hole surrounded by dust, using the Smarr formula approach.

Christodoulou-Ruffini Mass Formula for RN Black Hole Surrounded by Dust
-----------------------------------------------------------------------

The expression for the irreducible mass of the RN black hole surrounded by dust is $${\cal M}_{\text{irr}\pm}= \frac{2{\cal M}+\sigma_d \pm \sqrt{(2{\cal M}+\sigma_d)^2-4Q^2}}{4}.$$ Here ${\cal M}_{\text{irr}-}$ and ${\cal M}_{\text{irr}+}$ are the irreducible masses defined on the inner and outer horizons, respectively.
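As a numerical cross-check of the Smarr-formula result above (the values below are illustrative choices, not from the paper), one can verify that the effective surface tension of Eq. (\[M018\]) coincides with $\kappa_{\pm}/8\pi$ on both horizons:

```python
import math

# Illustrative values, not from the paper
M, Q, sd = 1.0, 0.5, 0.01

D = math.sqrt((2*M + sd)**2 - 4*Q**2)
for sign in (+1.0, -1.0):
    r = (2*M + sd + sign*D) / 2                             # horizon radius (Eq. M03)
    A = 4*math.pi*((2*M + sd)*r - Q**2)                     # horizon area (Eq. M05)
    kappa = (2*r - (2*M + sd)) / (2*((2*M + sd)*r - Q**2))  # surface gravity (Eq. M08)
    # Effective surface tension from Eq. (M018)
    tension = (1/(16*math.pi) - math.pi*Q**4/A**2) / (2*M + sd)
    assert abs(tension - kappa/(8*math.pi)) < 1e-12         # T_pm = kappa_pm / 8 pi
```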
The area of ${\cal H}^{\pm}$ in terms of ${\cal M}_{\text{irr}\pm}$ is: $$\label{M20} \mathcal{A_{\pm}}= 16 \pi ({\cal M}_{\text{irr}\pm})^2.$$ The product of the irreducible masses at the horizons $\mathcal{H^{\pm}}$ is: $$\begin{aligned} \label{M22} {\cal M}_{\text{irr}+} {\cal M}_{\text{irr}-}&=& \sqrt{\frac{\mathcal{A}_+ \mathcal{A}_-}{(16 \pi)^2}}\nonumber\\ &=& \frac{Q^2}{4}.\end{aligned}$$ This product is independent of the mass of the black hole. The mass of the black hole expressed in terms of its irreducible mass and charge is: $${\cal M}^2+ \mathcal{M}\sigma_d=\frac{\rho^4_{\pm}+ Q^4}{4\rho ^2_{\pm}} -\sigma_d +2{\cal M}_{\text{irr+}} {\cal M}_{\text{irr}-}.$$

Heat Capacity $C_{\pm}$ on ${\cal H}^{\pm}$
-------------------------------------------

The mass of the RN black hole surrounded by dust in terms of $r_{\pm}$ is: $${\cal M}=\frac{r_{\pm}^2 +Q^2 -\sigma_d r_{\pm}}{2r_{\pm}}.$$ The partial derivatives of the mass ${\cal M}$ and the temperature $T_{\pm}$ with respect to $r_{\pm}$ are: $$\frac{\partial {\cal M}}{\partial r_{\pm}}=\frac{r_{\pm}^2-Q^2+\sigma_d r_{\pm}}{2 r_{\pm}^2},$$ and $$\frac{\partial T_{\pm}}{\partial r_{\pm}}=\frac{1}{4 \pi}\frac{(3 Q^2-r_{\pm}^2)}{r_{\pm}^4},$$ where $T_{\pm}$ is given in Eq. (\[M09\]). The heat capacity $C_{\pm}=\frac{\partial {\cal M}}{\partial r_{\pm}}\frac{\partial r_{\pm}}{\partial T_{\pm}}$ at the horizons is: $$\label{M23} C_{\pm}=\frac{2 \pi r_{\pm}^2 (r_{\pm}^2 - Q^2 +\sigma_d r_{\pm})}{r_{\pm}^2-Q^2}.$$ In this case the product formula for the heat capacities is found to be $$\begin{aligned} \label{psh1} C_{+} C_{-} &=& \frac{4\pi^2 Q^4 \left[ 4Q^4-Q^2(2{\cal M}+\sigma_d)^2+\sigma_d^2 Q^2 \right]}{4Q^4-Q^2 (2{\cal M}+\sigma_d)^2}.\end{aligned}$$ It is clear that this product formula depends on the mass parameter, so it is not universal in nature.
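The universality of the dust-case products can be illustrated numerically: varying the mass should leave $\mathcal{A}_+\mathcal{A}_-$ and ${\cal M}_{\text{irr}+}{\cal M}_{\text{irr}-}$ unchanged. A minimal sketch (the values of $Q$ and $\sigma_d$ are illustrative, not from the paper):

```python
import math

# Illustrative values; universality means the products below must not change
# when the mass M changes.
Q, sigma_d = 0.5, 0.01

def horizon_products(M):
    D = math.sqrt((2*M + sigma_d)**2 - 4*Q**2)
    rp, rm = (2*M + sigma_d + D)/2, (2*M + sigma_d - D)/2
    A = lambda r: 4*math.pi*((2*M + sigma_d)*r - Q**2)   # area, Eq. (M05)
    Mirr = lambda r: math.sqrt(A(r)/(16*math.pi))        # irreducible mass, Eq. (M20)
    return A(rp)*A(rm), Mirr(rp)*Mirr(rm)

for M in (1.0, 2.0, 5.0):
    AA, MM = horizon_products(M)
    assert abs(AA - 16*math.pi**2*Q**4) < 1e-9           # A+ A- = 16 pi^2 Q^4
    assert abs(MM - Q**2/4) < 1e-12                      # Mirr+ Mirr- = Q^2/4
```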
Note that there are two possible cases for the heat capacity, $C$, to be positive:\ **Case $1$:** When both $r_{\pm}^2 - Q^2 +\sigma_d r_{\pm}$ and $r_{\pm}^2-Q^2$ are positive.\ **Case $2$:** When both $r_{\pm}^2 - Q^2 +\sigma_d r_{\pm}$ and $r_{\pm}^2-Q^2$ are negative.\ Considering $\sigma_d r_{\pm}$ as a positive quantity (in the physically accepted region of $r$), case-$1$ says that $C$ is positive for $r_{\pm}^2 - Q^2 >0$, i.e. $$r~>~\sqrt{Q^2},$$ while from case-$2$, $C$ is positive for $r_{\pm}^2 - Q^2 +\sigma_d r_{\pm}<0$, i.e. $$r~< \frac{-\sigma_d}{2}+\frac{1}{2}\sqrt{4Q^2+ \sigma_d^2}.$$ The heat capacity is negative if\ **Case $a$**: $r_{\pm}^2 - Q^2 +\sigma_d r_{\pm}>0$ and $r_{\pm}^2-Q^2<0$,\ **Case $b$**: $r_{\pm}^2 - Q^2 +\sigma_d r_{\pm}<0$ and $r_{\pm}^2-Q^2>0$.\ Case ($b$) is not possible mathematically, since $\sigma_d r_{\pm}>0$, while in case ($a$) the heat capacity is negative in the region where $r$ satisfies both the conditions $$r~>~\frac{-\sigma_d}{2}+\frac{1}{2}\sqrt{4Q^2+ \sigma_d^2},$$ and $$-\sqrt{Q^2}~<~r<~\sqrt{Q^2},$$ simultaneously. We take $\sigma$ and $Q$ to be positive in all the calculations. The behavior of the heat capacity given in Eq. (\[M23\]) is shown in Fig. (\[2M\]). ![Heat capacity undergoes a phase transition from the stability to the instability region; we chose $\sigma_d=0.01$, $Q=0.5$ and ${\cal M}=1$.[]{data-label="2M"}](dust11.pdf){width="9cm"}

Schwarzschild Black Hole Surrounded by Quintessence
===================================================

The metric for the Schwarzschild black hole surrounded by quintessence is the same as defined in Eq. (\[M1\]), and $f(r)$ defined in Eq. (\[M01\]) with $\omega= -2/3$, $\sigma=\sigma_q$ and $Q=0$ becomes: $$\label{M0001}f(r)=1-\frac{2{\cal M}}{r} -\sigma_q r,$$ where the dimension of $\sigma_q$ is that of $L^{-1}$.
The horizons, $r_{\pm}$, of the black hole are: $$\label{oM3} r_{\pm}=\frac{1\pm \sqrt{1-8{\cal M}\sigma_q}}{2\sigma_q}.$$ The product of the two horizons yields: $$\label{sM4} r_+ r_- = \frac{2{\cal M}}{\sigma_q},$$ which depends on the mass of the black hole and on $\sigma_q$. The areas of the horizons are: $$\begin{aligned} \label{oM5}\mathcal{A_{\pm}}=4\pi\Big[\frac{r_{\pm}-2{\cal M}}{\sigma_q}\Big].\end{aligned}$$ The entropy at the horizons ${\cal H}^{\pm}$ is: $$\begin{aligned} \label{oM7}\mathcal{S}_{\pm}&= &\frac{\pi}{\sigma_q}(r_{\pm} -2{\cal M}).\end{aligned}$$ The Hawking temperature of the horizons is: $$\begin{aligned} \label{oM9} T_{\pm}&=& \frac{1}{4 \pi}\Big[\frac{1-2\sigma_q r_{\pm}}{r_{\pm}}\Big],\end{aligned}$$ and the surface gravity on the black hole horizons ${\cal H}^{\pm}$ is given by: $$\label{oM8} \kappa_{\pm}=\frac{1}{2}\Big[\frac{1-2\sigma_q r_{\pm}}{r_{\pm}}\Big].$$ The Komar energy is given by: $$E_{\pm}= \frac{2\mathcal{M}(1+2\sigma_qr_{\pm})-r_{\pm}}{2\sigma_q r_{\pm}}.\label{oM10}$$ The products of the surface gravities and of the temperatures of ${\cal H}^{\pm}$ are: $$\kappa_+ \kappa_-=4\pi^2T_+ T_-=\frac{(8 \mathcal{M} \sigma_q-1)\sigma_q}{8 \mathcal{M}}.\label{oM11}$$ The product of the Komar energies of the horizons is: $$\label{oM13} E_+E_-= \frac{\mathcal{M}(8 \mathcal{M} \sigma_q-1)}{2\sigma_q}.$$ It is clear that all of these products depend on the mass of the black hole, so these quantities are not universal. The products of the areas and of the entropies at ${\cal H}^{\pm}$ are: $$\mathcal{A}_+\mathcal{A}_-=16\mathcal{S}_+\mathcal{S}_-= \Big(\frac{8 \pi{\cal M}}{\sigma_q}\Big)^2;$$ again, both products are not universal quantities.

Smarr Formula for Cauchy Horizon (${\cal H}^-)$
-----------------------------------------------

The area of each horizon of the black hole can be written as: $$\label{oM15} \mathcal{A}_{\pm}=\frac{\pi}{\sigma^2_q} \Big[ 2-8{\cal M}\sigma_q \pm 2\sqrt{1-8{\cal M}\sigma_q}\Big].$$ Using Eq.
(\[oM15\]), the mass of the black hole (the ADM mass) is expressed in terms of the areas of the horizons as: $$\begin{aligned} \label{oM16} 4{\cal M}^2+\frac{{\cal M}\mathcal{A}_{\pm}\sigma_q}{\pi} &=& \frac{1}{16 \pi}\Big[4\mathcal{A}_{\pm}-\frac{\mathcal{A}_{\pm}^2\sigma^2_q}{\pi}\Big]. \end{aligned}$$ The differential of the mass, expressed in terms of physical invariants of the horizons, is: $$\label{sM17} d{\cal M}=\mathcal{T_{\pm}} d\mathcal{A}_{\pm},$$ where $$\begin{aligned} \label{sM18} \mathcal{T}_{\pm} &=& \frac{1}{8\pi {\cal M}+\sigma_q \mathcal{A}_{\pm}} \Big[\frac{4\pi-16\pi\sigma_q{\cal M}-2\mathcal{A}_{\pm} \sigma^2_q}{16 \pi}\Big]\nonumber\\ &=&\frac{1}{16 \pi \mathcal{A}_{\pm}}[8 \pi \mathcal{M}- \mathcal{A}_{\pm}\sigma_q] \nonumber\\ &=& \frac{\kappa_{\pm}}{8\pi},\end{aligned}$$ where we have used $\mathcal{M}= (r_{\pm}-\sigma_q r_{\pm}^2)/2$, and $\kappa_{\pm}$ is defined in Eq. (\[oM8\]). Hence the first law of thermodynamics is satisfied by the Schwarzschild black hole surrounded by quintessence.

Christodoulou-Ruffini Mass Formula for Schwarzschild Black Hole Surrounded by Quintessence
------------------------------------------------------------------------------------------

The irreducible mass of the Schwarzschild black hole surrounded by quintessence is $${\cal M}_{\text{irr}\pm}= \frac{1\pm \sqrt{1-8{\cal M}\sigma_q}}{4\sigma_q}.$$ Here ${\cal M}_{\text{irr}-}$ and ${\cal M}_{\text{irr}+}$ are the irreducible masses defined on the inner and outer horizons, respectively. The area of ${\cal H}^{\pm}$ in terms of ${\cal M}_{\text{irr}\pm}$ is: $$\label{oM20} \mathcal{A_{\pm}}= 16 \pi ({\cal M}_{\text{irr}\pm})^2.$$ The product of the irreducible masses at the horizons $\mathcal{H^{\pm}}$ is: $$\begin{aligned} \label{oM22} {\cal M}_{\text{irr}+} {\cal M}_{\text{irr}-}&=& \sqrt{\frac{\mathcal{A}_+ \mathcal{A}_-}{(16 \pi)^2}}\nonumber\\ &=& \frac{\mathcal{M}}{2\sigma_q}.\end{aligned}$$ This product depends on the mass of the black hole. The expression for the mass given in Eq.
(\[oM16\]), in terms of the irreducible masses, becomes: $$8 \sigma^2 {\cal M}^2_{\text{irr}_{+}} {\cal M}^2_{\text{irr}_{-}}+4\pi \rho^2_{\pm}({\cal M}_{\text{irr}_{+}}{\cal M}_{\text{irr}_{-}})=4 \rho^2_{\pm}(1-4\sigma^2).$$

Heat Capacity $C_{\pm}$ on ${\cal H}^{\pm}$
-------------------------------------------

The mass of the Schwarzschild black hole surrounded by quintessence in terms of $r_{\pm}$ is: $${\cal M}=\frac{r_{\pm} -\sigma_q r_{\pm}^2}{2}.$$ The partial derivative of the mass ${\cal M}$ with respect to $r_{\pm}$ is: $$\frac{\partial {\cal M}}{\partial r_{\pm}}=\frac{1-2 \sigma_q r_{\pm}}{2},$$ and using Eq. (\[oM9\]) we get $$\frac{\partial T_{\pm}}{\partial r_{\pm}}=-\frac{1}{4\pi}\Big[\frac{\sigma_q}{ r_{\pm}-2 \mathcal{M}}\Big].$$ The expression for the heat capacity $C_{\pm}=\frac{\partial {\cal M}}{\partial r_{\pm}}\frac{\partial r_{\pm}}{\partial T_{\pm}}$ at the horizons becomes: $$\label{sM23} C_{\pm}=\frac{- 2\pi (1-2\sigma_q r_{\pm})(r_{\pm}-2\mathcal{M})}{\sigma_q}.$$

| Parameter | RN-Radiation | RN-Dust | Schwarzschild-Quintessence |
|:---:|:---:|:---:|:---:|
| $r_{\pm}$ | $\mathcal{M}\pm \sqrt{\mathcal{M}^2-Q^2+\sigma_r}$ | $\frac{2{\cal M}+\sigma_d \pm \sqrt{(2{\cal M}+\sigma_d)^2-4Q^2}}{2}$ | $\frac{1\pm \sqrt{1-8{\cal M}\sigma_q}}{2\sigma_q}$ |
| $\mathcal{A}_{\pm}$ | $4\pi(2\mathcal{M}r_{\pm}-Q^2+\sigma_r)$ | $4\pi [(2{\cal M}+\sigma_d)r_{\pm}-Q^2]$ | $4\pi\big[\frac{r_{\pm}-2{\cal M}}{\sigma_q}\big]$ |
| $\mathcal{S}_{\pm}$ | $\pi(2\mathcal{M}r_{\pm}-Q^2+\sigma_r)$ | $\pi [(2{\cal M}+\sigma_d)r_{\pm}-Q^2]$ | $\frac{\pi}{\sigma_q}(r_{\pm} -2{\cal M})$ |
| $T_{\pm}$ | $\frac{r_{\pm}- {\cal M}}{2\pi(2{\cal M}r_{\pm} -Q^2 +\sigma_r)}$ | $\frac{2r_{\pm}-(2 {\cal M}+\sigma_d)}{4 \pi[(2{\cal M}+\sigma_d)r_{\pm}-Q^2]}$ | $\frac{1}{4 \pi}\big[\frac{1-2\sigma_q r_{\pm}}{r_{\pm}}\big]$ |
| $\kappa_{\pm}$ | $\frac{r_{\pm}- {\cal M}}{2{\cal M}r_{\pm} -Q^2 +\sigma_r}$ | $\frac{2r_{\pm}-(2 {\cal M}+\sigma_d)}{2[(2{\cal M}+\sigma_d)r_{\pm}-Q^2]}$ | $\frac{1}{2}\big[\frac{1-2\sigma_q r_{\pm}}{r_{\pm}}\big]$ |
| $E_{\pm}$ | $r_{\pm}-{\cal M}$ | $\frac{2r_{\pm}-(2 {\cal M}+\sigma_d)}{2}$ | $\frac{2\mathcal{M}(1+2\sigma_qr_{\pm})-r_{\pm}}{2\sigma_q r_{\pm}}$ |
| $\kappa_+\kappa_-$ | $\frac{Q^2-\sigma_r-{\cal M}^2}{(Q^2-\sigma_r)^2}$ | $-\frac{(2{\cal M}+\sigma_d)^2-4Q^2}{4Q^4}$ | $\frac{(8 \mathcal{M} \sigma_q-1)\sigma_q}{8 \mathcal{M}}$ |
| $T_+ T_-$ | $\frac{Q^2-{\cal M}^2-\sigma_r}{4\pi^2 (Q^2-\sigma_r)^2}$ | $\frac{4Q^2-(2{\cal M}+\sigma_d)^2}{16 \pi^2 Q^4}$ | $\frac{(8 \mathcal{M} \sigma_q-1)\sigma_q}{32\pi^2 \mathcal{M}}$ |
| $E_+E_-$ | $Q^2-{\cal M}^2-\sigma_r$ | $\frac{4Q^2-(2{\cal M}+\sigma_d)^2}{4}$ | $\frac{\mathcal{M}(8 \mathcal{M} \sigma_q-1)}{2\sigma_q}$ |
| $\mathcal{A}_+\mathcal{A}_-$ | $16 \pi^2(Q^2-\sigma_r)^2$ | $16 \pi^2 Q^4$ | $\big(\frac{8 \pi{\cal M}}{\sigma_q}\big)^2$ |
| $\mathcal{S}_+\mathcal{S}_-$ | $\pi^2(Q^2-\sigma_r)^2$ | $\pi^2Q^4$ | $\big(\frac{2\pi {\cal M}}{\sigma_q}\big)^2$ |
| ${\cal M}_{\text{irr}+} {\cal M}_{\text{irr}-}$ | $\frac{Q^2-\sigma_r}{4}$ | $\frac{Q^2}{4}$ | $\frac{\mathcal{M}}{2\sigma_q}$ |
| $C_{\pm}$ | $\frac{2 \pi r_{\pm}^2 (r_{\pm}^2 - Q^2 +\sigma_r)}{3Q^2- 3 \sigma_r-r_{\pm}^2}$ | $\frac{2 \pi r_{\pm}^2 (r_{\pm}^2 - Q^2 +\sigma_d r_{\pm})}{r_{\pm}^2-Q^2}$ | $\frac{- 2\pi (1-2\sigma_q r_{\pm})(r_{\pm}-2\mathcal{M})}{\sigma_q}$ |
| $C_{+} C_{-}$ | $4\pi^2 \left(Q^2-\sigma_r \right)^2 \frac{(Q^2-\sigma_r)-{\cal M}^2}{4(Q^2-\sigma_r)-3{\cal M}^2}$ | $\frac{4\pi^2 Q^4 \left[ 4Q^4-Q^2(2{\cal M}+\sigma_d)^2+\sigma_d^2 Q^2 \right]}{4Q^4-Q^2(2{\cal M}+\sigma_d)^2}$ | $\frac{16\pi^2 \mathcal{M}^2(8\mathcal{M}\sigma_q-1)}{\sigma_q^2}$ |

: A comparison of thermodynamical parameters for RN-Radiation, RN-Dust and Schwarzschild-Quintessence black holes.[]{data-label="1table"}

The heat capacity would be positive if:\ **Case $1$:** $1-2\sigma_q r_{\pm}<0$ and $r_{\pm}-2\mathcal{M}>0$.\ **Case $2$:** $1-2\sigma_q r_{\pm}>0$ and $r_{\pm}-2\mathcal{M}<0$.\ The heat capacity is negative if\ **Case $a$:** $1-2\sigma_q r_{\pm}>0$ and $r_{\pm}-2\mathcal{M}>0$,\ **Case $b$:** $1-2\sigma_q r_{\pm}<0$ and $r_{\pm}-2\mathcal{M}<0$.\ The behavior of the heat capacity given in Eq. (\[sM23\]) is shown in Fig. (\[3M\]) for $\mathcal{M}=1$ and $\sigma_q= 0.01$: the heat capacity is negative for $2<r<50$, positive for $0<r<2$ and $r>50$, and zero at $r=2$ and $r=50$. ![Heat capacity phase transition of the Schwarzschild black hole surrounded by quintessence. We chose $\sigma_q=0.01$ and ${\cal M}=1$. The heat capacity is positive for $0<r<2$ and $r>50$, negative for $2<r<50$, and zero at $r=2$ and $r= 50$.[]{data-label="3M"}](quintessence11.pdf){width="9cm"} A comparison of all the parameters calculated for the Kiselev solutions is shown in Table (\[1table\]).

Charged Dilaton Black Hole
==========================

The action for the charged black hole in string theory is [@ga4091]: $$\mathcal{S}= \int d^4x \sqrt{-g}[-R+2(\bigtriangledown \varphi)^2+ e^{-2 a \varphi}F^2],$$ where $F$ is the Maxwell field, $\varphi$ is the scalar field and $a$ is an arbitrary parameter specifying the strength of the coupling between the dilaton and the Maxwell field.
We are going to derive the area product formula and the entropy product formula for a spherically symmetric dilaton black hole [@ga4091] whose metric can be written in Schwarzschild-like coordinates as: $$\begin{aligned} ds^2=-{\cal N}(r)dt^{2}+\frac{dr^{2}}{{\cal N}(r)}+ {{\cal R}(r)}^{2}\left(d\theta^{2}+\sin^{2}\theta d\phi^{2}\right) ,\label{sph}\end{aligned}$$ where the function ${\cal N}(r)$ is defined by $$\begin{aligned} {\cal N}(r) &=& \left(1-\frac{r_{+}}{r}\right)\left(1-\frac{r_{-}}{r}\right)^{\frac{1-a^{2}}{1+a^{2}}},\end{aligned}$$ and $$\begin{aligned} {\cal R}^{2}(r) &=& r^{2}\left(1-\frac{r_{-}}{r}\right)^{\frac{2a^{2}}{1+a^{2}}}.\end{aligned}$$ In these equations, $r_{+}$ and $r_{-}$ are constants, which are related to the mass and charge of the black hole as: $$\begin{aligned} {\cal M} &=& \frac{r_{+}}{2} + \left(\frac{1-a^{2}}{1+a^{2}}\right) \frac{r_{-}}{2} , ~~~ Q = \sqrt{\frac{r_{+}r_{-}}{1+a^{2}}},\end{aligned}$$ where, as usual, ${\cal M}$ is the mass of the black hole and $Q$ is its electric charge. It may be noted that $Q$ and $a$ are positive. The horizons of the black hole are determined by the condition ${\cal N}(r)=0$, which yields $$\begin{aligned} r_{+} &=& {\cal M}+ \sqrt{{\cal M}^{2}-\left(\frac{2n}{1+n}\right)Q^{2}},\label{hocde} \\ r_{-} &=& \frac{1}{n}\left[{\cal M}- \sqrt{{\cal M}^{2}-\left(\frac{2n}{1+n}\right)Q^{2}}\right],\label{hocdc}\end{aligned}$$ where $n$ is defined by $$\begin{aligned} n &=& \frac{1-a^{2}}{1+a^{2}};\end{aligned}$$ the minus sign in $r_{-}$ follows from $2{\cal M}=r_{+}+n r_{-}$. Here $r_{+}$ and $r_{-}$ are called the event horizon (${\cal H}^+$) or outer horizon and the Cauchy horizon (${\cal H}^-$) or inner horizon, respectively, and $r_{+}=r_{-}$, i.e. ${\cal M}^{2}=\left(\frac{1+n}{2}\right)Q^{2}$, corresponds to the extreme charged dilaton black hole.\ [**Case I:**]{} When $a=0$ or $n=1$, the metric corresponds to the RN black hole.\ [**Case II:**]{} When $a=1$ or $n=0$, the metric corresponds to the Gibbons-Maeda-Garfinkle-Horowitz-Strominger (GMGHS) black hole.
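A quick numerical sanity check of these horizon relations (the values of ${\cal M}$, $Q$ and $n$ below are illustrative choices, with $n=(1-a^2)/(1+a^2)\in(-1,1]$):

```python
import math

# Illustrative values, not from the paper
M, Q, n = 1.0, 0.5, 0.5

D = math.sqrt(M**2 - (2*n/(1 + n))*Q**2)
r_plus = M + D
r_minus = (M - D)/n          # minus sign, from 2M = r+ + n r-

a2 = (1 - n)/(1 + n)         # a^2 recovered from n
# Mass and charge relations of the dilaton solution
assert abs(M - (r_plus/2 + n*r_minus/2)) < 1e-12
assert abs(Q - math.sqrt(r_plus*r_minus/(1 + a2))) < 1e-12
```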
The expressions for the surface gravity of the dilaton black hole at the two horizons (${\cal H}^{\pm}$) are $$\begin{aligned} {\kappa}_{+} &=& \frac{1}{2r_{+}}\left(\frac{r_{+}-r_{-}}{r_{+}}\right)^{n} \,\, \mbox{and} \, \, {\kappa}_{-} = 0 ~.\label{sgcd}\end{aligned}$$ The black hole (Hawking) temperatures at ${\cal H}^\pm$ are $$\begin{aligned} T_{+} &=& \frac{1}{4\pi r_{+}}\left(\frac{r_{+}-r_{-}}{r_{+}}\right)^{n} \mbox{and} \nonumber\\ T_{-} &=& 0.\end{aligned}$$ The areas of the horizons (${\cal H}^\pm$) are $$\begin{aligned} {\cal A}_{+} &=& 4\pi r_{+}^2 \left(\frac{r_{+}-r_{-}}{r_{+}}\right)^{1-n}, ~~~ \, \, {\cal A}_{-} = 0.\label{arcd}\end{aligned}$$ Interestingly, the areas of both horizons go to zero at the extremal limit ($r_{+}=r_{-}$), which is quite different from the well-known RN and Schwarzschild black holes. Another characteristic of this spacetime is the curvature singularity at $r=r_{-}$. The entropies of the two horizons (${\cal H}^\pm$) are $$\begin{aligned} {\cal S}_{+} &=& \pi r_{+}^2 \left(\frac{r_{+}-r_{-}}{r_{+}}\right)^{1-n}, ~~~ {\cal S}_{-}= 0.\label{etpcd}\end{aligned}$$ Finally, the Komar energies are given by $$\begin{aligned} E_{+} &=& \frac{r_{+}-r_{-}}{2}, ~~~ E_{-}= 0.\label{etpcd1}\end{aligned}$$ Now we compute the products of all the parameters given above: $$\begin{aligned} \mathcal{A}_{+}\mathcal{A}_{-} = 0, \\ \mathcal{S}_+\mathcal{S}_-=0,\\ \kappa_+\kappa_-= 0,\\ T_+ T_- = 0,\\ E_+E_-= 0.\end{aligned}$$ Interestingly, all of these products vanish and are independent of the mass; thus they are universal quantities.
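The statement that the outer-horizon area vanishes in the extremal limit (for $n<1$, i.e. $a\neq 0$), unlike in the RN case $n=1$, can be illustrated with a short numerical sketch (the values below are illustrative, not from the paper):

```python
import math

# Illustrative values: fixed r+ with a dilaton exponent n < 1
r_plus, n = 2.0, 0.5

def area(r_minus):
    # A+ = 4 pi r+^2 ((r+ - r-)/r+)^(1-n), Eq. (arcd)
    return 4*math.pi*r_plus**2*((r_plus - r_minus)/r_plus)**(1 - n)

# As r- approaches r+, the area shrinks toward zero (extremal limit)
assert area(1.999999) < 0.1
# In the r- = 0 limit the area reduces to the Schwarzschild-like 4 pi r+^2
assert abs(area(0.0) - 4*math.pi*r_plus**2) < 1e-12
```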
All of the above thermodynamical quantities must satisfy the first law of thermodynamics: $$\label{d10} d{\cal M}=\mathcal{T}_{\pm} d\mathcal{A}_{\pm} +\Phi_{\pm}dQ,$$ where $$\begin{aligned} \label{d11} \Phi_{\pm} &=& \Big(\frac{2n}{1+n}\Big)\frac{Q}{r_{\pm}}.\end{aligned}$$ ![Heat capacity undergoes a phase transition from the instability to the stability region, and again to the instability region, with divergences at $r= 2Q^2/(1+n)$ and $r=2(1+2n)Q^2/(1+n)$; we chose $Q=0.5$, $n=3$ and ${\cal M}=1$.[]{data-label="d3M"}](dilaton11.pdf){width="9cm"} The irreducible masses at $\mathcal{H}^{\pm}$ for this black hole are $$\begin{aligned} {\cal M}_{irr+} &=& \frac{r_{+}}{2} \left(\frac{r_{+}-r_{-}}{r_{+}}\right)^{\frac{1-n}{2}} ,~~~ {\cal M}_{irr-}= 0.\label{irrd}\end{aligned}$$ Their product yields $$\begin{aligned} \label{d2} {\cal M}_{irr+} {\cal M}_{irr-} &=& 0.\end{aligned}$$ The heat capacity for this dilaton black hole is calculated to be $$\begin{aligned} C_{+} &=& -2\pi r_{+}^2 \frac{\left[1-\frac{2n}{1+n}\frac{Q^2}{r_{+}}\right]\left[1-\frac{2}{1+n}\frac{Q^2}{r_{+}}\right]} {\left[1-\frac{2(1+2n)}{1+n}\frac{Q^2}{r_{+}}\right]\left[1-\frac{2}{1+n}\frac{Q^2}{r_{+}}\right]^{n}}.~\label{cv1}\end{aligned}$$ Due to the curvature singularity at $r=r_{-}$, the heat capacity at the Cauchy horizon diverges; thus the product of the heat capacities diverges. Note that for odd $n$, the heat capacity would be positive if $$\begin{aligned} r<\frac{2(1+2n)Q^2}{1+n}~~~~\text{and}~~ r>\frac{2 n Q^2}{1+n},\end{aligned}$$ or $$\begin{aligned} r>\frac{2(1+2n)Q^2}{1+n}~~~~\text{and}~~ r<\frac{2 n Q^2}{1+n},\end{aligned}$$ and the heat capacity is negative for: $$\begin{aligned} r>\frac{2(1+2n)Q^2}{1+n}~~\text{and}~~ r>\frac{2 n Q^2}{1+n},\end{aligned}$$ or $$\begin{aligned} r<\frac{2(1+2n)Q^2}{1+n}~~\text{and}~~ r<\frac{2nQ^2}{1+n}.\end{aligned}$$ The behavior of the heat capacity is shown in Fig. (\[d3M\]). For $Q=0.5$ and $n=3$, the given expression for the heat capacity (Eq.
(\[cv1\])) diverges at $r=0.125$ and $r=0.875$, is positive for $0.375<r<0.875$, and is negative for $0<r<0.125$, $0.125<r<0.375$ and $0.875<r< 1.5$ (for $r\in (0,1.5)$).

Conclusion
==========

We have studied the thermodynamical properties on the inner and outer horizons of the Kiselev solutions (the RN black hole surrounded by energy-matter in the form of radiation and dust, and the Schwarzschild black hole surrounded by quintessence) and of the charged dilaton black hole. We have studied some important parameters of black hole thermodynamics with reference to their event and Cauchy horizons, and we have derived the expressions for the temperatures and heat capacities of all the black holes mentioned above. It is observed that the product of surface gravities, the product of surface temperatures and the product of Komar energies at the horizons are not universal quantities for the Kiselev solutions, while the products of areas and entropies at the two horizons are independent of the mass of the black hole (except for the Schwarzschild black hole surrounded by quintessence). For the dilaton black hole these products are universal, except the product of specific heats, which diverges due to the curvature singularity at the Cauchy horizon. Thus these thermodynamical products may give us further understanding of the microscopic nature of black hole entropy (both exterior and interior). Using the heat capacity expressions, the stability regions of the black holes are also examined graphically. Figs. (\[1M\]-\[d3M\]) show that the above-mentioned black holes undergo a phase transition under certain conditions on $r$. For the RN black hole surrounded by radiation and dust, and the Schwarzschild black hole surrounded by quintessence, we derived the first law of thermodynamics using the Smarr formula approach.
It is observed that the third law of thermodynamics, which states that “the surface gravity, $\kappa$, of a black hole cannot be reduced to zero in a finite sequence of processes”, holds for all the above-mentioned black holes: the derived expressions for $\kappa$ show that it vanishes only for extreme black holes. [99]{} D. Christodoulou, *Phys. Rev. Lett.* [**25**]{}, 1596 (1970). D. Christodoulou and R. Ruffini, *Phys. Rev.* [**D 4**]{}, 3552 (1971). R. Penrose and R. M. Floyd, *Nature* [**229**]{}, 177 (1971); S. Hawking, *Phys. Rev. Lett.* [**26**]{}, 1344 (1971). J. D. Bekenstein, *Phys. Rev.* [**D 7**]{}, 2333 (1973); J. D. Bekenstein, *Phys. Rev.* [**D 9**]{}, 3292 (1974). M. Jamil, M. Akbar, *Gen. Rel. Grav.* [**43**]{}, 1061 (2011); M. Jamil, I. Hussain, M. U. Farooq, *Astrophys. Space Sci.* [**335**]{}, 339 (2011); M. Jamil, I. Hussain, *Int. J. Theor. Phys.* [**50**]{}, 465 (2011); M. Akbar, A. A. Siddiqui, *Phys. Lett.* [**B 656**]{}, 217 (2007). A. Castro and M. J. Rodriguez, *Phys. Rev.* [**D 86**]{}, [024008]{} (2012). J. Wang, W. Xu and X. Meng, *J. High Energy Phys.* [**01**]{}, 031 (2014). A. Curir, *Nuovo Cimento* [**B 51**]{}, [262]{} (1979). D. Pavon, *Phys. Rev.* [**D 43**]{}, 2495 (1990); S. W. Hawking, D. N. Page, *Commun. Math. Phys.* [**87**]{}, 577 (1983). E. T. Newman, E. Couch, K. Chinnapared, A. Exton, A. Prakash, and R. Torrence, *J. Math. Phys.* [**6**]{}, 918 (1965). P. Pradhan, *Eur. Phys. J.* [**C 74**]{}, [2887]{} (2014). P. Pradhan, arXiv:1503.04514 \[gr-qc\]. B. Majeed, M. Jamil, arXiv:1507.01547 \[hep-th\]. M. Cvetic, G. W. Gibbons and C. N. Pope, *Phys. Rev. Lett.* [**106**]{}, [121301]{} (2011). M. Visser, *Phys. Rev.* [**D 88**]{}, [044014]{} (2013). V. V. Kiselev, *Class. Quant. Grav.* [**20**]{}, 1187 (2003). M. Jamil, S. Hussain, and B. Majeed, *Eur. Phys. J.* [**C 75**]{}, 24 (2015). S. Chandrasekhar, *The Mathematical Theory of Black Holes*, Clarendon Press, Oxford (1983). S. Hawking, *Phys. Rev.
Lett.* [**26**]{}, 1344 (1971). E. Poisson, *A Relativist’s Toolkit: The Mathematics of Black Hole Mechanics*, Cambridge University Press, (2007). A. Komar, *Phys. Rev.* [**113**]{}, 934 (1959). L. Smarr, *Phys. Rev. Lett.* [**30**]{}, 71 (1973); L. Smarr, *Phys. Rev.* [**D 7**]{}, 289 (1973). J. M. Bardeen, B. Carter, S. W. Hawking, *Commun. Math. Phys.* [**31**]{}, 161 (1973). D. J. Gross, M. J. Perry, and L. G. Yaffe, *Phys. Rev.* [**D 25**]{}, 330 (1982). D. Garfinkle, G. T. Horowitz, A. Strominger, *Phys. Rev.* [**D 43**]{}, 3140 (1991).
--- abstract: 'The density matrix of a graph is the combinatorial laplacian matrix of a graph normalized to have unit trace. In this paper we generalize the entanglement properties of mixed density matrices from combinatorial laplacian matrices of graphs discussed in Braunstein [*et al.*]{} Annals of Combinatorics, [**10**]{}(2006)291 to tripartite states. Then we prove that the degree condition defined in Braunstein [*et al.*]{} Phys. Rev. A [**73**]{}, (2006)012320 is sufficient and necessary for the tripartite separability of the density matrix of a nearest point graph.' author: - | Zhen Wang and Zhixi Wang\ [Department of Mathematics]{}\ [Capital Normal University, Beijing 100037, China]{}\ [wangzhen061213@sina.com,  wangzhx@mail.cnu.edu.cn]{} title: The tripartite separability of density matrices of graphs --- Introduction ============ Quantum entanglement is one of the most striking features of the quantum formalism$^{\tiny\cite{peres1}}$. Moreover, quantum entangled states may be used as basic resources in quantum information processing and communication, such as quantum cryptography$^{\tiny\cite{ekert}}$, quantum parallelism$^{\tiny\cite{deutsch}}$, quantum dense coding$^{\tiny\cite{bennett1,mattle}}$ and quantum teleportation$^{\tiny\cite{bennett2,bouwmeester}}$. So testing whether a given state of a composite quantum system is separable or entangled is in general very important. Recently, normalized laplacian matrices of graphs considered as density matrices have been studied in quantum mechanics. One can recall the definition of the density matrices of graphs from [@sam1]. Ali Saif M. Hassan and Pramod Joag$^{\tiny\cite{Ali}}$ studied related issues such as the classification of pure and mixed states, von Neumann entropy, separability of multipartite quantum states and quantum operations in terms of the graphs associated with quantum states.
Chai Wah Wu$^{\tiny\cite{chai}}$ showed that the Peres-Horodecki positive partial transpose condition is necessary and sufficient for separability in $C^2\otimes C^q$. Braunstein [*et al.*]{}$^{\tiny\cite{sam2}}$ proved that the degree condition is necessary for separability of density matrices of any graph and is sufficient for separability of density matrices of nearest point graphs and perfect matching graphs. Ali Saif M. Hassan and Pramod Joag$^{\tiny\cite{Ali2}}$ showed that the degree condition is also a necessary and sufficient condition for the separability of $m$-partite pure quantum states living in a real or complex Hilbert space. Hildebrand [*et al.*]{}$^{\tiny\cite{roland}}$ showed that the degree condition is equivalent to the PPT-criterion. They also considered the concurrence of density matrices of graphs and pointed out that there are examples on four vertices whose concurrence is a rational number. The paper is divided into three sections. In section 2, we recall the definition of the density matrix of a graph, define the tensor product of three graphs, and reconsider the tripartite entanglement properties of the density matrices of graphs introduced in [@sam1]. In section 3, we first define the partially transposed graph and then show that the degree condition introduced in [@sam2] is also a necessary and sufficient condition for the tripartite separability of the density matrices of nearest point graphs.

The tripartite entanglement properties of the density matrices of graphs
========================================================================

Recall from [@sam1] that a [*graph*]{} $G=(V(G),\ E(G))$ is defined as follows: $V(G)=\{v_1,\ v_2,\ \cdots,\ v_n\}$ is a non-empty and finite set whose elements are called [*vertices*]{}; $E(G)=\{\{v_i,\ v_j\}:\ v_i,\ v_j\in V\}$ is a non-empty set of unordered pairs of vertices called [*edges*]{}. An edge of the form $\{v_i,\ v_i\}$ is called a [*loop*]{}. We assume that $E(G)$ does not contain any loops.
A graph $G$ is said to be on $n$ vertices if $|V(G)|=n$. The [*adjacency matrix*]{} of a graph $G$ on $n$ vertices is an $n\times n$ matrix, denoted by $M(G)$, with lines labeled by the vertices of $G$ and $ij$-th entry defined as: $$[M(G)]_{i,j}=\left\{ \begin{array}{ll} 1, & \hbox{if $(v_{i},\ v_{j})\in E(G)$;}\\ 0, & \hbox{if $(v_{i},\ v_{j})\notin E(G)$.} \end{array} \right.$$ If $\{v_i,\ v_j\}\in E(G)$, the two distinct vertices $v_i$ and $v_j$ are said to be [*adjacent*]{}. The [*degree*]{} of a vertex $v_i\in V(G)$ is the number of edges adjacent to $v_i$; we denote it by $d_G(v_i)$. $d_G=\displaystyle\sum_{i=1}^nd_G(v_i)$ is called the [*degree sum*]{}. Notice that $d_G=2|E(G)|.$ The [*degree matrix*]{} of $G$ is an $n\times n$ matrix, denoted by $\Delta(G)$, with $ij$-th entry defined as: $$[\Delta(G)]_{i,\ j}=\left\{ \begin{array}{ll} d_{G}(v_{i}), & \hbox{if $i=j$;\ }\\ 0, & \hbox{if $i\neq j$.\ } \end{array} \right.$$ The [*combinatorial laplacian matrix*]{} of a graph $G$ is the symmetric positive semidefinite matrix $$L(G)=\Delta(G)-M(G).$$ The density matrix of a graph $G$ is the matrix $$\rho(G)=\frac{1}{d_{G}}L(G).$$ Recall that a graph is called [*complete*]{}$^{\tiny\cite{gtm207}}$ if every pair of vertices is adjacent; the [*complete graph*]{} on $n$ vertices is denoted by $K_n$. Obviously, $\rho(K_n)=\frac{1}{n(n-1)}(nI_n-J_n),$ where $I_n$ and $J_n$ are the $n\times n$ identity matrix and the $n\times n$ all-ones matrix, respectively. A [*star graph*]{} on $n$ vertices $\alpha_1,\ \alpha_2,\ \cdots,\ \alpha_n$, denoted by $K_{1,n-1}$, is the graph whose set of edges is $\{\{\alpha_1,\ \alpha_i\}:\ i=2,\ 3,\ \cdots,\ n\}$; we have $$\rho(K_{1,n-1}) =\frac{1}{2(n-1)} \left( \begin{array}{ccccc} n-1&-1&-1&\cdots&-1\\[3mm] -1&1&&&\\[3mm] -1&&1&&\\[3mm] \vdots&&&\ddots&\\[3mm] -1&&&&1 \end{array} \right).$$ Let $G$ be a graph with only one edge. Then the density matrix of $G$ is pure.
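The claim that a single-edge graph yields a pure density matrix can be checked directly. The following sketch (using NumPy on an illustrative two-vertex graph, not an example from the paper) builds $\rho(G)=L(G)/d_G$ and verifies $\rho^2=\rho$:

```python
import numpy as np

# Graph with vertices {v1, v2} and the single edge {v1, v2}
Mg = np.array([[0, 1],
               [1, 0]], dtype=float)          # adjacency matrix M(G)
Delta = np.diag(Mg.sum(axis=1))               # degree matrix
L = Delta - Mg                                # combinatorial laplacian
rho = L / Mg.sum()                            # density matrix, d_G = 2|E(G)| = 2

assert abs(np.trace(rho) - 1) < 1e-12         # unit trace
assert np.allclose(rho @ rho, rho)            # rho^2 = rho, i.e. rho is pure
```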
The density matrix of a graph is a uniform mixture of pure density matrices, that is, for a graph $G$ on $n$ vertices $v_1,\ v_2,\ \cdots,\ v_n,$ having $s$ edges $\{v_{i_1},\ v_{j_1}\},\ \{v_{i_2},\ v_{j_2}\},\ \cdots,\ \{v_{i_s},\ v_{j_s}\},$ where $1\leq i_1,\ j_1,\ i_2,\ j_2,\ \cdots,\ i_s,\ j_s\leq n,$ $$\rho(G)=\displaystyle\frac{1}{s}\sum_{k=1}^{s}\rho(H_{i_kj_k}),$$ here $H_{i_kj_k}$ is the factor of $G$ such that $$[M(H_{i_kj_k})]_{u,\ w}=\left\{ \begin{array}{ll} 1, & \hbox{if}\ u=i_k\ \hbox{and}\ w=j_k\ \hbox{or}\ w=i_k\ \hbox{and}\ u=j_k;\\ 0, & \hbox{otherwise.} \end{array} \right.$$ It is obvious that $\rho(H_{i_kj_k})$ is pure. Before we discuss the tripartite entanglement properties of the density matrices of graphs, we first briefly recall the definition of tripartite separability: [**Definition 1**]{}The state $\rho$ acting on ${\cal H}={\cal H_A}\otimes{\cal H_B}\otimes{\cal H_C}$ is called [*tripartite separable*]{} if it can be written in the form $$\rho=\displaystyle\sum_i p_i\rho_A^i\otimes\rho_B^i\otimes\rho_C^i,$$ where $\rho_A^i=|\alpha_A^i\ra\la\alpha_A^i|,\ \rho_B^i=|\beta_B^i\ra\la\beta_B^i|,\ \rho_C^i=|\gamma_C^i\ra\la\gamma_C^i|,\ \displaystyle\sum_i p_i=1,\ p_i\geq0$ and $|\alpha_A^i\ra$, $|\beta_B^i\ra$, $|\gamma_C^i\ra$ are normalized pure states of subsystems $A,\ B$ and $C$, respectively. Otherwise, the state is called [*entangled*]{}. Now we define the tensor product of three graphs.
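As an aside, the uniform-mixture decomposition just described can be verified numerically on a small example. The sketch below (our own check; the helper name `edge_density` is not from the paper) confirms it for the triangle $K_3$.

```python
import numpy as np

def edge_density(n, i, j):
    """Pure density matrix rho(H_ij) of the single-edge factor H_ij."""
    rho = np.zeros((n, n))
    rho[i, i] = rho[j, j] = 0.5
    rho[i, j] = rho[j, i] = -0.5
    return rho

n, edges = 3, [(0, 1), (0, 2), (1, 2)]      # the triangle K_3
M = np.zeros((n, n))
for i, j in edges:
    M[i, j] = M[j, i] = 1.0
rho = (np.diag(M.sum(axis=1)) - M) / M.sum()            # rho(K_3)
mix = sum(edge_density(n, i, j) for i, j in edges) / len(edges)
print(np.allclose(rho, mix))                            # True: uniform mixture of edge states
print(all(np.linalg.matrix_rank(edge_density(n, i, j)) == 1
          for i, j in edges))                           # True: each rho(H_ij) is pure (rank 1)
```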
The [*tensor product of graphs*]{} $G_A,\ G_B,\ G_C$, denoted by $G_A\otimes G_B\otimes G_C$, is the graph whose adjacency matrix is $M(G_A\otimes G_B\otimes G_C)=M(G_A)\otimes M(G_B)\otimes M(G_C).$ Whenever we consider a graph $G_A\otimes G_B\otimes G_C$, where $G_A$ is on $m$ vertices, $G_B$ is on $p$ vertices and $G_C$ is on $q$ vertices, the tripartite separability of $\rho(G_A\otimes G_B\otimes G_C)$ is described with respect to the Hilbert space ${\cal H}_A\otimes {\cal H}_B\otimes {\cal H}_C$, where ${\cal H}_A$ is the space spanned by the orthonormal basis $\{|u_1\ra,\ |u_{2}\ra,\ \cdots,\ |u_m\ra\}$ associated to $V(G_A)$, ${\cal H}_B$ is the space spanned by the orthonormal basis $\{|v_1\ra,\ |v_{2}\ra,\ \cdots,\ |v_p\ra\}$ associated to $V(G_B)$ and ${\cal H}_C$ is the space spanned by the orthonormal basis $\{|w_1\ra,\ |w_2\ra,\ \cdots,\ |w_q\ra\}$ associated to $V(G_C)$. The vertices of $G_A\otimes G_B\otimes G_C$ are taken as $\{u_iv_jw_k,\ 1\leq i\leq m,\ 1\leq j\leq p,\ 1\leq k\leq q\}.$ We associate $|u_i\ra|v_j\ra|w_k\ra$ to $u_iv_jw_k,$ where $1\leq i\leq m,\ 1\leq j\leq p,\ 1\leq k\leq q.$ Accordingly, whenever we discuss the tripartite separability of a graph $G$ on $n$ vertices $\alpha_1,\ \alpha_2,\ \cdots,\ \alpha_n,$ we consider it in the space $C^m\otimes C^p\otimes C^q$, where $n=mpq$. The vectors $|\alpha_1\ra,\ |\alpha_2\ra,\ \cdots,\ |\alpha_n\ra$ are taken as follows: $|\alpha_1\ra=|u_1\ra|v_1\ra|w_1\ra,\ |\alpha_2\ra=|u_1\ra|v_1\ra|w_2\ra,\ \cdots,\ |\alpha_n\ra=|u_m\ra|v_p\ra|w_q\ra.$ To investigate the tripartite entanglement properties of the density matrices of graphs it is necessary to recall the well-known positive partial transpose criterion (the Peres criterion). It makes use of the notion of [*partial transpose*]{} of a density matrix. Here we will only recall the Peres criterion for tripartite states.
Consider an $n\times n$ matrix $\rho_{ABC}$ acting on $C_A^m\otimes C_B^p\otimes C_C^q$, where $n=mpq$. The partial transposes of $\rho_{ABC}$ with respect to the systems $A,\ B,\ C$ are the matrices $\rho^{T_{A}}_{ABC},\ \rho^{T_{B}}_{ABC},\ \rho^{T_{C}}_{ABC}$, respectively, with $(i,\ j,\ k;\ i^\prime,\ j^\prime,\ k^\prime)$-th entries defined as follows: $$\begin{array}{rl} &[\rho^{T_{A}}_{ABC}]_{i,\ j,\ k;\ i^{\prime},\ j^{\prime},\ k^{\prime}}= \la u_{i^{\prime}}v_{j}w_{k}|\rho_{ABC}|u_{i}v_{j^{\prime}}w_{k^{\prime}}\ra,\\[3mm] &[\rho^{T_{B}}_{ABC}]_{i,\ j,\ k;\ i^{\prime},\ j^{\prime},\ k^{\prime}}= \la u_{i}v_{j^{\prime}}w_{k}|\rho_{ABC}|u_{i^{\prime}}v_{j}w_{k^{\prime}}\ra,\\[3mm] &[\rho^{T_{C}}_{ABC}]_{i,\ j,\ k;\ i^{\prime},\ j^{\prime},\ k^{\prime}}= \la u_{i}v_{j}w_{k^{\prime}}|\rho_{ABC}|u_{i^{\prime}}v_{j^{\prime}}w_{k}\ra, \end{array}$$ where $1 \leq i,\ i^{\prime} \leq m;\ 1\leq j,\ j^{\prime} \leq p\ \hbox{and}\ 1 \leq k,\ k^{\prime} \leq q.$ For separability of $\rho_{ABC}$ we have the following criterion: [**Peres criterion**]{}$^{\tiny\cite{peres2}}$  If $\rho$ is a separable density matrix acting on $C^m\otimes C^p\otimes C^q$, then $\rho^{T_A},\ \rho^{T_B},\ \rho^{T_C}$ are positive semidefinite. [**Lemma 1**]{}The density matrix of the tensor product of three graphs is tripartite separable.
[**Proof.**]{}Let $G_1$ be a graph on $n$ vertices, $u_1,\ u_2,\ \cdots,\ u_n$, and $m$ edges, $\{u_{c_1}$, $u_{d_1}\}$, $\cdots, \{u_{c_m}, u_{d_m}\},$ $1\leq c_1,\ d_1,\ \cdots,\ c_m,\ d_m\leq n.$ Let $G_2$ be a graph on $k$ vertices, $v_1,\ v_2,\ \cdots,\ v_k$, and $e$ edges, $\{v_{i_1},\ v_{j_1}\},\ \cdots,\ \{v_{i_e},\ v_{j_e}\},\ 1\leq i_1,\ j_1,\ \cdots,\ i_e,\ j_e\leq k.$ Let $G_3$ be a graph on $l$ vertices, $w_1,\ w_2,\ \cdots,\ w_l$, and $f$ edges, $\{w_{r_1},\ w_{s_1}\},\ \cdots,\ \{w_{r_f},\ w_{s_f}\},\ 1\leq r_1,\ s_1,\ \cdots,\ r_f,\ s_f\leq l.$ Then $$\rho(G_1)=\frac{1}{m}\sum_{p=1}^m\rho(H_{c_pd_p}),\ \rho(G_2)=\frac{1}{e}\sum_{q=1}^e\rho(L_{i_qj_q}),\ \rho(G_3)=\frac{1}{f}\sum_{t=1}^f\rho(Q_{r_ts_t}).$$ Therefore $$\begin{array}{rl} &\hskip-1cm \rho(G_1\otimes G_2\otimes G_3)\\[3mm] &=\displaystyle\frac{1}{d_{G_1\otimes G_2\otimes G_3}}[\Delta(G_1\otimes G_2\otimes G_3)-M(G_1\otimes G_2\otimes G_3)] \end{array}$$ $$\begin{array}{rl} &=\displaystyle\frac{1}{d_{G_1\otimes G_2\otimes G_3}}\sum_{p=1}^m\sum_{q=1}^e\sum_{t=1}^f[\Delta(H_{c_pd_p}\otimes L_{i_qj_q}\otimes Q_{r_ts_t})-M(H_{c_pd_p}\otimes L_{i_qj_q}\otimes Q_{r_ts_t} )]\\ &=\displaystyle\frac{1}{d_{G_1\otimes G_2\otimes G_3}}\sum_{p=1}^m\sum_{q=1}^e\sum_{t=1}^f8 \rho(H_{c_pd_p}\otimes L_{i_qj_q}\otimes Q_{r_ts_t})\\ &=\displaystyle\frac{1}{mef}\sum_{p=1}^m\sum_{q=1}^e\sum_{t=1}^f \rho(H_{c_pd_p}\otimes L_{i_qj_q}\otimes Q_{r_ts_t})\\ &=\displaystyle\frac{1}{mef}\sum_{p=1}^m\sum_{q=1}^e\sum_{t=1}^f \frac{1}{8}[\Delta(H_{c_pd_p})\otimes\Delta(L_{i_qj_q})\otimes \Delta(Q_{r_ts_t})-M(H_{c_pd_p})\otimes M(L_{i_qj_q})\otimes M(Q_{r_ts_t})]\\ &=\displaystyle\frac{1}{mef}\sum_{p=1}^m\sum_{q=1}^e\sum_{t=1}^f \frac{1}{4}[\rho(H_{c_pd_p})\otimes\rho(L_{i_qj_q})\otimes \rho(Q_{r_ts_t})\\ &+\rho_+(H_{c_pd_p})\otimes\rho(L_{i_qj_q})\otimes \rho_+(Q_{r_ts_t}) +\rho(H_{c_pd_p})\otimes\rho_+(L_{i_qj_q})\otimes \rho_+(Q_{r_ts_t})\\[3mm] &+\rho_+(H_{c_pd_p})\otimes\rho_+(L_{i_qj_q})\otimes \rho(Q_{r_ts_t})], 
\end{array}$$ where $$\rho_+(H_{c_pd_p})\stackrel{\rm def}{=}\Delta(H_{c_pd_p})-\rho(H_{c_pd_p}) =\frac{1}{2}\big(\Delta(H_{c_pd_p})+M(H_{c_pd_p})\big),$$ $$\rho_+(L_{i_qj_q})\stackrel{\rm def}{=}\Delta(L_{i_qj_q})-\rho(L_{i_qj_q}) =\frac{1}{2}\big(\Delta(L_{i_qj_q})+M(L_{i_qj_q})\big),$$ $$\rho_+(Q_{r_ts_t})\stackrel{\rm def}{=}\Delta(Q_{r_ts_t})-\rho(Q_{r_ts_t}) =\frac{1}{2}\big(\Delta(Q_{r_ts_t})+M(Q_{r_ts_t})\big),$$ the fourth equality follows from $d_{G_1\otimes G_2\otimes G_3}=8mef$ and the fifth equality follows from the definition of tensor products of graphs. Notice that $\rho_+(H_{c_pd_p}),\ \rho_+(L_{i_qj_q}),\ \rho_+(Q_{r_ts_t})$ are all density matrices. Let $$\rho_+(G_1)=\frac{1}{m}\sum_{p=1}^m \rho_+(H_{c_pd_p}),\quad \rho_+(G_2)=\frac{1}{e}\sum_{q=1}^e \rho_+(L_{i_qj_q}),\quad \rho_+(G_3)=\frac{1}{f}\sum_{t=1}^f \rho_+(Q_{r_ts_t}).$$ Then $$\begin{array}{rl} \rho(G_1\otimes G_2\otimes G_3) =&\frac{1}{4}[\rho(G_1)\otimes\rho(G_2) \otimes\rho(G_3)+\rho_+(G_1)\otimes\rho(G_2) \otimes\rho_+(G_3)\\[3mm] &+\rho(G_1)\otimes\rho_+(G_2)\otimes\rho_+(G_3) +\rho_+(G_1)\otimes\rho_+(G_2)\otimes\rho(G_3)]. \end{array}$$ So we have that $\rho(G_1\otimes G_2\otimes G_3)$ is tripartite separable. $\Box$ [**Remark**]{}We associate to the vertices $\alpha_1,\ \alpha_2,\ \cdots, \alpha_n$ of a graph $G$ an orthonormal basis $\{|\alpha_1\ra,\ |\alpha_2\ra,$ $\cdots,\ |\alpha_n\ra\}.$ In terms of this basis, the $uw$-th elements of the matrices $\rho(H_{c_pd_p})$ and $\rho_+(H_{c_pd_p})$ are given by $\la\alpha_u|\rho(H_{c_pd_p})|\alpha_w\ra$ and $\la\alpha_u|\rho_+(H_{c_pd_p})|\alpha_w\ra$, respectively.
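The decomposition obtained in the proof of Lemma 1 can be verified numerically in the smallest case. The sketch below (our own check, not part of the original argument) takes each of $G_1,\ G_2,\ G_3$ to be a single edge and confirms the displayed identity for $\rho(G_1\otimes G_2\otimes G_3)$.

```python
import numpy as np

K = np.array([[0., 1.], [1., 0.]])                     # adjacency matrix of one edge
rho  = 0.5 * np.array([[1., -1.], [-1., 1.]])          # rho of a single-edge graph
rhop = 0.5 * np.array([[1.,  1.], [ 1., 1.]])          # rho_+ of a single-edge graph

# Tensor-product graph of three single edges: adjacency M(G1) x M(G2) x M(G3).
M = np.kron(np.kron(K, K), K)
rho_tensor = (np.diag(M.sum(axis=1)) - M) / M.sum()    # here d = 8mef with m = e = f = 1

mix = 0.25 * (np.kron(np.kron(rho,  rho),  rho)
            + np.kron(np.kron(rhop, rho),  rhop)
            + np.kron(np.kron(rho,  rhop), rhop)
            + np.kron(np.kron(rhop, rhop), rho))
print(np.allclose(rho_tensor, mix))    # True: the four-term separable mixture
```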
In this basis we have $$\rho(H_{c_pd_p})= P[\frac{1}{\sqrt{2}}(|\alpha_{c_p}\ra-|\alpha_{d_p}\ra)],\ \rho_+(H_{c_pd_p})= P[\frac{1}{\sqrt{2}}(|\alpha_{c_p}\ra+|\alpha_{d_p}\ra)].$$ [**Lemma 2**]{}The matrix $\sigma=\frac{1}{4}P[\frac{1} {\sqrt{2}}(|ijk\ra-|rst\ra )] +\frac{1}{4}P[\frac{1}{\sqrt{2}}(|ijt\ra -|rsk\ra )]+\frac{1}{4} P[\frac{1}{\sqrt{2}}(|isk\ra -|rjt\ra )] +\frac{1}{4}P[\frac{1}{\sqrt{2}}(|rjk\ra -|ist\ra )]$ is a density matrix and tripartite separable. [**Proof.** ]{}Since projection operators are positive semidefinite, $\sigma$ is positive semidefinite. A direct computation gives $tr(\sigma)=1$, so $\sigma$ is a density matrix. Let $$|u^{\pm}\ra=\frac{1}{\sqrt{2}}(|i\ra\pm|r\ra),\ \ |v^{\pm}\ra=\frac{1}{\sqrt{2}}(|j\ra\pm|s\ra),\ \ |w^{\pm}\ra=\frac{1}{\sqrt{2}}(|k\ra\pm|t\ra).$$ We obtain $$\sigma=\frac{1}{4} P[|u^{+}\ra|v^{-}\ra|w^{+}\ra] +\frac{1}{4}P[|u^{+}\ra|v^{+}\ra|w^{-}\ra] +\frac{1}{4}P[|u^{-}\ra|v^{-}\ra|w^{-}\ra] +\frac{1}{4}P[|u^{-}\ra|v^{+}\ra|w^{+}\ra],$$ thus $\sigma$ is tripartite separable. $\Box$ [**Lemma 3**]{}For any $n=mpq$, the density matrix $\rho(K_n)$ is tripartite separable in $C^m\otimes C^p\otimes C^q.$ [**Proof.**]{}Since $M(K_n)=J_n-I_n$, where $J_n$ is the $n\times n$ all-ones matrix and $I_n$ is the $n\times n$ identity matrix, whenever there is an edge $\{u_{i}v_{j}w_{k},\ u_{r}v_{s}w_{t}\}$, there must also be the edges $\{u_{r}v_{j}w_{k},\ u_{i}v_{s}w_{t}\},$ $\{u_{i}v_{s}w_{k},\ u_{r}v_{j}w_{t}\}$ and $\{u_{i}v_{j}w_{t},\ u_{r}v_{s}w_{k}\}.$ The result follows from Lemma 2. $\Box$ [**Lemma 4**]{}The complete graph on $n>1$ vertices is not a tensor product of three graphs. [**Proof.**]{}It is obvious that $K_n$ is not a tensor product of three graphs if $n$ is a prime or a product of two primes. Thus we can assume that $n$ is a product of three or more primes.
Let $n=mpq,\ m,\ p,\ q>1.$ Suppose that there exist three graphs $G_1,\ G_2$ and $G_3$ on $m,\ p$ and $q$ vertices, respectively, such that $K_{mpq}=G_1\otimes G_2\otimes G_3.$ Let $|E(G_1)|=r,\ |E(G_2)|=s,\ |E(G_3)|=t.$ Then, by the degree sum formula, $2r\leq m(m-1),\ 2s\leq p(p-1),\ 2t\leq q(q-1).$ Hence $$2r\cdot 2s\cdot 2t\leq mpq(m-1)(p-1)(q-1)=mpq(mpq-mp-mq-pq+m+p+q-1).$$ Now, observe that $$|V(G_1\otimes G_2\otimes G_3)|=mpq,\quad |E(G_1\otimes G_2\otimes G_3)|=4rst.$$ Therefore, $$G_1\otimes G_2\otimes G_3=K_{mpq} \iff mpq(mpq-1)=2\cdot 4rst,$$ so $$mpq(mpq-1)=8rst\leq mpq(mpq-mp-mq-pq+m+p+q-1).$$ It follows that $mp+mq+pq-m-p-q\leq 0$, that is, $m(p-1)+q(m-1)+p(q-1)\leq 0.$ Since $m,\ p,\ q>1$, every term on the left-hand side is positive, a contradiction. $\Box$ [**Theorem 1**]{}Given a graph $G_1\otimes G_2\otimes G_3$, the density matrix $\rho(G_1\otimes G_2\otimes G_3)$ is tripartite separable. However, if a density matrix $\rho(L)$ is tripartite separable, it does not necessarily mean that $L=L_1\otimes L_2\otimes L_3,$ for some graphs $L_1,\ L_2$ and $L_3.$ [**Proof.**]{}The result follows from Lemmas 1, 3 and 4. $\Box$ [**Theorem 2**]{}The density matrix $\rho(K_{1,\ n-1})$ is tripartite entangled for $n=mpq\geq 8.$ [**Proof.**]{}Consider a graph $G=K_{1,\ n-1}$ on $n=mpq$ vertices $\alpha_1,\ \alpha_2,\ \cdots,\ \alpha_n.$ Then $$\rho(G)=\frac{1}{n-1}\sum_{k=2}^n\rho(H_{1k})= \frac{1}{n-1}\sum_{k=2}^nP[\frac{1}{\sqrt{2}}(|\alpha_1\ra-|\alpha_k\ra)].$$ We are going to examine tripartite separability of $\rho(G)$ in $C^m_A\otimes C^p_B\otimes C^q_C,$ where $C^m_A,\ C^p_B$ and $C^q_C$ are associated to three quantum systems ${\cal H}_A,\ {\cal H}_B$ and ${\cal H}_C,$ respectively. Let $\{|u_1\ra,\ |u_2\ra,\ \cdots,\ |u_m\ra\}$, $\{|v_1\ra,\ |v_2\ra,\ \cdots,$ $|v_p\ra\}$ and $\{|w_1\ra,\ |w_2\ra,\ \cdots,\ |w_q\ra\}$ be orthonormal bases of $C^m_A,\ C^p_B$ and $C^q_C,$ respectively.
So, $$\rho(G)=\frac{1}{n-1}\sum_{k=2}^{n}P[\frac{1} {\sqrt{2}}(|u_1v_1w_1\ra-|u_{r_k}v_{s_k}w_{t_k}\ra)],$$ where $k=(r_k-1)pq+(s_k-1)q+t_k,\ 1\leq r_k\leq m,\ 1\leq s_k\leq p,\ 1\leq t_k\leq q.$ Hence $$\begin{array}{rl} \rho(G)=&\displaystyle\frac{1}{n-1}\Big\{\sum_{i=2}^{m}P[\frac{1} {\sqrt{2}}(|u_1\ra-|u_i\ra)|v_1\ra|w_1\ra]+\sum_{j=2}^{p}P[ |u_1\ra\frac{1}{\sqrt{2}}(|v_1\ra-|v_j\ra)|w_1\ra]\\[3mm] &+\displaystyle\sum_{k=2}^{q}P[|u_1\ra|v_1\ra\frac{1}{\sqrt{2}} (|w_1\ra-|w_k\ra)] +\sum_{i=2}^{m}\sum_{j=2}^{p}P[\frac{1}{\sqrt{2}} (|u_1v_1w_1\ra-|u_iv_jw_1\ra)]\\[3mm] &+\displaystyle\sum_{j=2}^{p}\sum_{k=2}^{q}P[\frac{1}{\sqrt{2}} (|u_1v_1w_1\ra-|u_1v_jw_k\ra)] +\sum_{i=2}^{m}\sum_{k=2}^{q}P[\frac{1}{\sqrt{2}} (|u_1v_1w_1\ra-|u_iv_1w_k\ra)]\\[3mm] &+\displaystyle\sum_{i=2}^{m}\sum_{j=2}^{p}\sum_{k=2}^{q} P[\frac{1}{\sqrt{2}} (|u_1v_1w_1\ra-|u_iv_jw_k\ra)]\Big\}. \end{array}$$ Consider now the following projectors: $$P=|u_1\ra\la u_1|+|u_2\ra\la u_2|,\quad Q=|v_1\ra\la v_1|+|v_2\ra\la v_2|\ {\rm \ and\ }\ R=|w_1\ra\la w_1|+|w_2\ra\la w_2|.$$ Then $$\begin{array}{rl} &\hskip-2.6cm(P\otimes Q\otimes R)\rho(G)(P\otimes Q\otimes R)\\ =&\frac{1}{n-1}\Big\{\frac{n-8}{2}P[|u_1v_1w_1\ra]+ P[\frac{1}{\sqrt{2}}(|u_1v_1w_1\ra-|u_1v_1w_2\ra)]\\[3mm] &+P[\frac{1}{\sqrt{2}}(|u_1v_1w_1\ra-|u_1v_2w_1\ra)] +P[\frac{1}{\sqrt{2}}(|u_1v_1w_1\ra-|u_2v_1w_1\ra)]\\[3mm] &+P[\frac{1}{\sqrt{2}}(|u_1v_1w_1\ra-|u_1v_2w_2\ra)] +P[\frac{1}{\sqrt{2}}(|u_1v_1w_1\ra-|u_2v_1w_2\ra)]\\[3mm] &+P[\frac{1}{\sqrt{2}}(|u_1v_1w_1\ra-|u_2v_2w_1\ra)] +P[\frac{1}{\sqrt{2}}(|u_1v_1w_1\ra-|u_2v_2w_2\ra)]\Big\}.
\end{array}$$ In the basis $$\{|u_1v_1w_1\ra, |u_1v_1w_2\ra, |u_1v_2w_1\ra, |u_1v_2w_2\ra, |u_2v_1w_1\ra,\ |u_2v_1w_2\ra,\ |u_2v_2w_1\ra, |u_2v_2w_2\ra\},$$ we have $$[(P\otimes Q\otimes R)\rho(G)(P\otimes Q\otimes R)]^{T_{A}} =\frac{1}{n-1} \left( \begin{array}{cccccccccccc} \frac{n-1}{2}&-\frac{1}{2}&-\frac{1}{2}&-\frac{1}{2}&-\frac{1}{2}&0&0&0\\[3mm] -\frac{1}{2}&\frac{1}{2}&0&0&-\frac{1}{2}&0&0&0\\[3mm] -\frac{1}{2}&0&\frac{1}{2}&0&-\frac{1}{2}&0&0&0\\[3mm] -\frac{1}{2}&0&0&\frac{1}{2}&-\frac{1}{2}&0&0&0\\[3mm] -\frac{1}{2}&-\frac{1}{2}&-\frac{1}{2}&-\frac{1}{2}&\frac{1}{2}&0&0&0\\[3mm] 0&0&0&0&0&\frac{1}{2}&0&0\\[3mm] 0&0&0&0&0&0&\frac{1}{2}&0\\[3mm] 0&0&0&0&0&0&0&\frac{1}{2} \end{array} \right).$$ The characteristic polynomial of the above matrix is $$\Big(\lambda-\frac{1}{2(n-1)}\Big)^{5}\Big(\lambda^{3} -\frac{n+1}{2(n-1)}\lambda^{2} +\frac{n-4}{2(n-1)^{2}}\lambda+\frac{n+4}{4(n-1)^{3}}\Big),$$ so the eigenvalues of the matrix are $\frac{1}{2(n-1)}$ (with multiplicity 5) and the roots of the polynomial $\lambda^{3}-\frac{n+1}{2(n-1)}\lambda^{2} +\frac{n-4}{2(n-1)^{2}}\lambda+\frac{n+4}{4(n-1)^{3}}.$ Let the roots of this polynomial of degree three be $\lambda_1,\ \lambda_2$ and $\lambda_3$. Then $\lambda_{1}\lambda_{2}\lambda_{3}=-\frac{n+4}{4(n-1)^{3}}<0,$ so one of the three roots must be negative, i.e., there must be a negative eigenvalue of the above matrix. Hence, by the Peres criterion, the matrix $(P\otimes Q\otimes R)\rho(G)(P\otimes Q\otimes R)$ is tripartite entangled, and hence $\rho(G)$ is tripartite entangled. $\Box$

A necessary and sufficient condition for tripartite separability
===============================================================

$G^{\Gamma_A}=(V,\ E^\prime)$ (i.e.
the partial transpose of a graph $G=(V,E)$ with respect to ${\cal H}_A$) is the graph such that $$\{u_iv_jw_k,\ u_rv_sw_t\}\in E^\prime \ \hbox{if and only if}\ \{u_rv_jw_k,\ u_iv_sw_t\}\in E.$$ Partially transposed graphs $G^{\Gamma_B}$ and $G^{\Gamma_C}$ (with respect to ${\cal H}_B$ and ${\cal H}_C$, respectively) can be defined in a similar way. For tripartite states we refer to the condition $\Delta(G)=\Delta(G^{\Gamma_A})=\Delta(G^{\Gamma_B})=\Delta(G^{\Gamma_C})$ as the [*degree condition*]{}. Hildebrand [*et al.*]{}$^{\tiny\cite{roland}}$ proved that the degree criterion is equivalent to the PPT criterion. It is easy to show that this equivalence still holds for tripartite states. Thus from the Peres criterion we obtain: [**Theorem 3**]{}Let $\rho(G)$ be the density matrix of a graph on $n=mpq$ vertices. If $\rho(G)$ is separable in $C^m_A\otimes C^p_B\otimes C^q_C$, then $\Delta(G)=\Delta(G^{\Gamma_A})=\Delta(G^{\Gamma_B})=\Delta(G^{\Gamma_C}).$ Let $G$ be a graph on $n=mpq$ vertices: $\alpha_1,\ \alpha_2,\ \cdots,\ \alpha_n$ and $f$ edges: $\{\alpha_{i_1},\ \alpha_{j_1}\}$, $\{\alpha_{i_2},\ \alpha_{j_2}\},$ $\cdots$,$\{\alpha_{i_f}$, $\alpha_{j_f}\}.$ Let the vertices be $\alpha_s=u_iv_jw_k,$ where $s=(i-1)pq+(j-1)q+k,\ 1\leq i\leq m,\ 1\leq j\leq p,\ 1\leq k\leq q.$ The vectors $|u_i\ra's,\ |v_j\ra's,\ |w_k\ra's$ form orthonormal bases of $C^m,\ C^p$ and $C^q$, respectively. The edge $\{u_iv_jw_k,\ u_rv_sw_t\}$ is said to be [*entangled*]{} if $i\neq r,\ j\neq s,\ k\neq t.$ Consider a cuboid with $mpq$ points whose length is $m$, width is $p$ and height is $q$, such that the distance between two neighboring points on the same line is 1. A [*nearest point graph*]{} is a graph whose vertices are identified with the points of the cuboid and whose edges join points at distance 1, $\sqrt{2}$ or $\sqrt{3}.$ The degree condition is also a sufficient condition for the tripartite separability of the density matrix of a nearest point graph.
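The necessity direction (Theorem 3, via the Peres criterion) is easy to experiment with numerically. The following sketch (our own illustration; `partial_transpose` is a hypothetical helper) implements the partial transpose with respect to ${\cal H}_A$ and confirms, for the star graph $K_{1,7}$ on $8=2\cdot 2\cdot 2$ vertices, that $\rho(G)^{T_A}$ has a negative eigenvalue, in agreement with Theorem 2.

```python
import numpy as np

def partial_transpose(rho, dims, sys):
    """Partial transpose on C^m x C^p x C^q; sys in {0, 1, 2} picks A, B or C."""
    R = rho.reshape(*dims, *dims)                  # indices (i, j, k, i', j', k')
    axes = list(range(6))
    axes[sys], axes[sys + 3] = axes[sys + 3], axes[sys]   # swap bra/ket index of one factor
    n = rho.shape[0]
    return R.transpose(axes).reshape(n, n)

# Star graph K_{1,7} on n = 8 vertices; vertex 0 is the center alpha_1, and the
# lexicographic vertex order matches the basis |u_i v_j w_k>.
n = 8
M = np.zeros((n, n))
M[0, 1:] = M[1:, 0] = 1.0
rho = (np.diag(M.sum(axis=1)) - M) / M.sum()       # rho(K_{1,7}), d_G = 2(n-1) = 14

lam = np.linalg.eigvalsh(partial_transpose(rho, (2, 2, 2), 0))
print(lam.min() < -1e-9)   # True: a negative eigenvalue, so rho(G) is entangled
```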
[**Theorem 4**]{}Let $G$ be a nearest point graph on $n=mpq$ vertices. If $\Delta(G)=\Delta(G^{\Gamma_A})=\Delta(G^{\Gamma_B})=\Delta(G^{\Gamma_C})$, then the density matrix $\rho(G)$ is tripartite separable in $C^m_A\otimes C^p_B\otimes C^q_C.$ [**Proof.** ]{}Let $G$ be a nearest point graph on $n=mpq$ vertices and $f$ edges. We associate to $G$ the orthonormal basis $\{|\alpha_{l}\ra:l=1,\ 2,\ \cdots,\ n\} =\{|u_{i}\ra\otimes |v_{j}\ra\otimes |w_{k}\ra:\ i=1,\ 2,\ \cdots, m;\ j=1,\ 2,\ \cdots,\ p;\ k=1,\ 2,\ \cdots,\ q\},$ where $\{|u_{i}\ra:\ i=1,\ 2,\ \cdots,\ m\}$ is an orthonormal basis of $C^m_A$, $\{|v_{j}\ra:\ j=1,\ 2,\ \cdots,\ p\}$ is an orthonormal basis of $C^p_B$ and $\{|w_{k}\ra:\ k=1,\ 2,\ \cdots,\ q\}$ is an orthonormal basis of $C^q_C$. Let $i,\ r\in\{1,\ 2,\ \cdots,\ m\},\ j,\ s\in\{1,\ 2,\ \cdots,\ p\},\ k,\ t\in\{1,\ 2, \ \cdots,\ q\},$ and let $\lambda_{ijk,\ rst}\in \{0,\ 1\}$ be defined by $$\lambda_{ijk,\ rst}=\left\{ \begin{array}{ll} 1, & \hbox { if\ $\{u_{i}v_{j}w_{k},\ u_{r}v_{s}w_{t}\}\in E(G);$}\\ 0, & \hbox { if\ $\{u_{i}v_{j}w_{k},\ u_{r}v_{s}w_{t}\}\notin E(G),$} \end{array} \right.$$ where $i,\ j,\ k,\ r,\ s,\ t$ satisfy one of the following seven conditions:

- $i=r,\ j=s,\ k=t+1;$

- $i=r,\ j=s+1,\ k=t;$

- $i=r+1,\ j=s,\ k=t;$

- $i=r,\ j=s+1,\ k=t+1;$

- $i=r+1,\ j=s+1,\ k=t;$

- $i=r+1,\ j=s,\ k=t+1;$

- $i=r+1,\ j=s+1,\ k=t+1.$

Let $\rho(G),\ \rho(G^{\Gamma_A}),\ \rho(G^{\Gamma_B})$ and $\rho(G^{\Gamma_C})$ be the density matrices corresponding to the graphs $G,\ G^{\Gamma_A},\ G^{\Gamma_B}$ and $G^{\Gamma_C}$, respectively. Thus $$\begin{array}{rll} \rho(G)&=\frac{1}{2f}(\Delta(G)-M(G)),\ &\rho(G^{\Gamma_{A}})=\frac{1}{2f} (\Delta(G^{\Gamma_{A}})-M(G^{\Gamma_{A}})),\\[5mm] \rho(G^{\Gamma_{B}})&=\frac{1}{2f} (\Delta(G^{\Gamma_{B}})-M(G^{\Gamma_{B}})),\ &\rho(G^{\Gamma_{C}})=\frac{1}{2f} (\Delta(G^{\Gamma_{C}})-M(G^{\Gamma_{C}})). \end{array}$$ Let $G_1$ be the subgraph of $G$ whose edges are all the entangled edges of $G$.
An edge $\{u_iv_jw_k,\ u_rv_sw_t\}$ is entangled if $i\neq r,\ j\neq s,\ k\neq t$. Let $G^{\Gamma_A}_1$ be the subgraph of $G^{\Gamma_A}$ corresponding to all the entangled edges of $G^{\Gamma_A},\ G^{\Gamma_B}_1$ be the subgraph of $G^{\Gamma_B}$ corresponding to all the entangled edges of $G^{\Gamma_B}$, and $G^{\Gamma_C}_1$ be the subgraph of $G^{\Gamma_C}$ corresponding to all the entangled edges of $G^{\Gamma_C}.$ Obviously, $G^{\Gamma_A}_1=(G_1)^{\Gamma_A},\ G^{\Gamma_B}_1=(G_1)^{\Gamma_B},\ G^{\Gamma_C}_1=(G_1)^{\Gamma_C}.$ We have $$\rho(G_1)=\frac{1}{f}\sum_{i=1}^m\sum_{j=1}^p\sum_{k=1}^q \lambda_{ijk,\ rst}P[\frac{1}{\sqrt{2}}(|u_iv_jw_k\ra-|u_rv_sw_t\ra)],$$ where $i,\ j,\ k;\ r,\ s,\ t$ must satisfy one of the above seven conditions. We can get $\rho(G^{\Gamma_A}_1),\ \rho(G^{\Gamma_B}_1)$ and $\rho(G^{\Gamma_C}_1)$ by permuting the indices of $u,\ v,\ w$ in the above equation, respectively. Also we have $$\Delta(G_1)=\frac{1}{2f}\sum_{i=1}^m\sum_{j=1}^p\sum_{k=1}^q \lambda_{ijk,\ rst}P[|u_iv_jw_k\ra],$$ where $i,\ j,\ k;\ r,\ s,\ t$ must satisfy one of the above seven conditions. We can get $\Delta(G^{\Gamma_A}_1),\ \Delta(G^{\Gamma_B}_1)$ and $\Delta(G^{\Gamma_C}_1)$ by permuting the indices of $\lambda$ with respect to the Hilbert spaces ${\cal H}_A,\ {\cal H}_B,\ {\cal H}_C$, respectively. Let $G_2,\ G^{\Gamma_A}_2,\ G^{\Gamma_B}_2$ and $G^{\Gamma_C}_2$ be the subgraphs of $G,\ G^{\Gamma_A},\ G^{\Gamma_B}$ and $G^{\Gamma_C}$ containing all the unentangled edges, respectively.
It is obvious that $\Delta(G_2)=\Delta(G^{\Gamma_A}_2)=\Delta(G^{\Gamma_B}_2) =\Delta(G^{\Gamma_C}_2).$ So $\Delta(G)=\Delta(G^{\Gamma_A})=\Delta(G^{\Gamma_B}) =\Delta(G^{\Gamma_C})$ if and only if $\Delta(G_1)=\Delta(G^{\Gamma_A}_1)=\Delta(G^{\Gamma_B}_1) =\Delta(G^{\Gamma_C}_1).$ The degree condition implies that $$\lambda_{ijk,\ rst}=\lambda_{rjk,\ ist}=\lambda_{isk,\ rjt}=\lambda_{ijt,\ rsk},$$ for any $i,\ r\in\{1,\ 2,\ \cdots,\ m\},\ j,\ s \in\{1,\ 2,\ \cdots,\ p\},\ \ k,\ t \in\{1,\ 2,\ \cdots,\ q \}.$ The above equation shows that whenever there is an entangled edge $\{u_{i}v_{j}w_{k},\ u_{r}v_{s}w_{t}\}$ in $G$ (here we must have $i\neq r,\ j\neq s,\ k\neq t$), there must be the entangled edges $\{u_{r}v_{j}w_{k},\ u_{i}v_{s}w_{t}\},\ \{u_{i}v_{s}w_{k},\ u_{r}v_{j}w_{t}\}$ and $\{u_{i}v_{j}w_{t},\ u_{r}v_{s}w_{k}\}$ in $G$. Let $$\begin{array}{rl} \rho(i,\ j,\ k;\ r,\ s,\ t) =&\frac{1}{4}(P[\frac{1}{\sqrt{2}} (|u_{i}v_{j}w_{k}\ra-|u_{r}v_{s}w_{t}\ra)] +P[\frac{1}{\sqrt{2}}(|u_{r}v_{j}w_{k}\ra-|u_{i}v_{s}w_{t}\ra)]\\[5mm] &+P[\frac{1}{\sqrt{2}}(|u_{i}v_{s}w_{k}\ra-|u_{r}v_{j}w_{t}\ra)] +P[\frac{1}{\sqrt{2}}(|u_{i}v_{j}w_{t}\ra-|u_{r}v_{s}w_{k}\ra)]). \end{array}$$ By Lemma 2, we know that $\rho(i,\ j,\ k;\ r,\ s,\ t)$ is tripartite separable in $C^m_A\otimes C^p_B\otimes C^q_C.$ By Theorem 3 in [@sam2], $\rho(G_2)$ is also tripartite separable in $C^m_A\otimes C^p_B\otimes C^q_C$, and hence so is $\rho(G)$. $\Box$ From Theorems 3 and 4 we can obtain the following corollary, which gives a necessary and sufficient criterion (which we call the [*degree criterion*]{}) for the tripartite separability of the density matrix of a nearest point graph: [**Corollary 1**]{}Let $G$ be a nearest point graph on $n=mpq$ vertices, then the density matrix $\rho(G)$ is tripartite separable in $C^m_A\otimes C^p_B\otimes C^q_C$ if and only if $\Delta(G)=\Delta(G^{\Gamma_A})=\Delta(G^{\Gamma_B})=\Delta(G^{\Gamma_C}).$ [**Example** ]{}Let $G$ be a graph on $12=3\times 2\times 2$ vertices, having a unique edge $\{u_1v_1w_1,\ u_2v_2w_2\}$.
Then we have $$\rho(G)=\frac{1}{2} \left( \begin{array}{cccccccccccc} 1&0&0&0&0&0&0&-1&0&0&0&0\\ 0&0&0&0&0&0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0&0&0&0&0&0\\ -1&0&0&0&0&0&0&1&0&0&0&0\\ 0&0&0&0&0&0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0&0&0&0&0&0 \end{array} \right).$$ The partially transposed graph $G^{\Gamma_A}$ is a graph on 12 vertices and has an edge $\{u_2v_1w_1$, $u_1v_2w_2\}$. Then $$\rho(G^{\Gamma_{A}})=\frac{1}{2} \left( \begin{array}{cccccccccccc} 0&0&0&0&0&0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0&0&0&0&0&0\\ 0&0&0&1&-1&0&0&0&0&0&0&0\\ 0&0&0&-1&1&0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0&0&0&0&0&0 \end{array} \right).$$ Obviously, the degree matrices of $G$ and $G^{\Gamma_A}$ are different. The eigenvalues of $\rho(G)^{T_A}$ are 0 (with multiplicity 8), $\frac{1}{2}$ (with multiplicity 3) and $-\frac{1}{2}$, so $\rho(G)^{T_A}$ is not positive semidefinite. According to the Peres criterion, $\rho(G)$ is tripartite entangled. $\Box$ Two graphs $G$ and $H$ are said to be [*isomorphic*]{}, denoted as $G\cong H$, if there is an adjacency-preserving bijection between $V(G)$ and $V(H)$, i.e., there is a permutation matrix $P$ such that $PM(G)P^T=M(H).^{\tiny\cite{sam1}}$ [**Theorem 5**]{}Let $G$ and $H$ be two graphs on $n=mpq$ vertices. If $\rho(G)$ is tripartite entangled in $C^m\otimes C^p\otimes C^q$ and $G\cong H$, then $\rho(H)$ is not necessarily tripartite entangled in $C^m\otimes C^p\otimes C^q.$ [**Proof.**]{}Let $G$ be the graph introduced in the above example. Then $\rho(G)$ is tripartite entangled. Let $H$ be a graph on 12 vertices, having an edge $\{u_1v_1w_1,\ u_1v_1w_2\}$. Obviously, $G$ is isomorphic to $H$.
However, $$\rho(H)=P[\frac{1}{\sqrt{2}} (|u_1v_1w_1\ra-|u_1v_1w_2\ra)]=|u_1\ra\la u_1|\otimes|v_1\ra\la v_1|\otimes|w^-\ra\la w^-|,$$ where $|w^-\ra=\displaystyle\frac{1}{\sqrt{2}}(|w_1\ra-|w_2\ra),$ which shows that $\rho(H)$ is tripartite separable. $\Box$ [20]{} A. Peres, [*Quantum Theory: Concepts and Methods*]{} Kluwer, Dordrecht 1995. A. Ekert, [*Phys. Rev. Lett.*]{} [**67**]{} (1991) 661. D. Deutsch, [*Proc. R. Soc. London, Ser. A*]{} [**425**]{} (1989) 73.\ P. Shor, [*SIAM J. Comput.*]{} [**26**]{} (1997) 1484. C. H. Bennett and S. J. Wiesner, [*Phys. Rev. Lett.*]{} [**69**]{} (1992) 2881. K. Mattle, H. Weinfurter, P. Kwiat and A. Zeilinger, [*Phys. Rev. Lett.*]{} [**76**]{} (1996) 4656. C. Bennett, G. Brassard, C. Crepeau, R. Jozsa, A. Peres and W. K. Wootters, [*Phys. Rev. Lett.*]{} [**70**]{} (1993) 1895. D. Bouwmeester, J. W. Pan, K. Mattle, M. Eibl, H. Weinfurter and A. Zeilinger, [*Nature*]{} (London) [**390**]{} (1997) 575.\ D. Boschi, S. Branca, F. de Martini, L. Hardy and S. Popescu, [*Phys. Rev. Lett.*]{} [**80**]{} (1998) 1121. S. L. Braunstein, S. Ghosh, S. Severini, [*Annals of Combinatorics*]{} [**10**]{} (2006) 291. Ali Saif M. Hassan and P. Joag, [*Combinatorial approach to multipartite quantum system: basic formulation*]{}, arXiv:quant-ph/0602053v5. Ali Saif M. Hassan and P. Joag, [*Separability criterion for multipartite pure quantum states*]{}, arXiv:quant-ph/0701040v1. Chai Wah Wu, [*Phys. Lett. A*]{} [**351**]{} (2006) 18. S. L. Braunstein, S. Ghosh, T. Mansour, S. Severini and R. C. Wilson, [*Phys. Rev. A*]{} [**73**]{} (2006) 012320. R. Hildebrand, S. Mancini and S. Severini, [*Combinatorial Laplacian and Positivity Under Partial Transpose*]{}, arXiv:cs.CC/0607036, accepted in Mathematical Structures in Computer Science. A. Peres, [*Phys. Rev. Lett.*]{} [**77**]{} (1996) 1413.\ M. Horodecki, P. Horodecki and R. Horodecki, [*Phys. Lett. A*]{} [**223**]{} (1996) 1. C. Godsil and G. 
Royle, [*Algebraic Graph Theory*]{}, Graduate Texts in Mathematics, [**207**]{}, Springer-Verlag, New York, 2001.
--- abstract: 'To what extent should we expect the syzygies of Veronese embeddings of projective space to depend on the characteristic of the field? As computation of syzygies is impossible for large degree Veronese embeddings, we instead develop an heuristic approach based on random flag complexes. We prove that the corresponding Stanley–Reisner ideals have Betti numbers which almost always depend on the characteristic, and we use this to conjecture that the syzygies of the $d$-uple embedding of projective $r$-space with $r\geq 7$ should depend on the characteristic for almost all $d$.' author: - 'Caitlyn Booms, Daniel Erman, and Jay Yang' bibliography: - 'bib.bib' title: 'Heuristics for $\ell$-torsion in Veronese Syzygies' --- Introduction ============ Imagine ${\mathbb{P}}^{10}$ embedded into a larger projective space by the $d$-uple Veronese embedding, where $d$ is some large integer like $d=100$ or $d=100000$. What should we expect about the syzygies? Such questions were raised by Ein and Lazarsfeld in [@ein-lazarsfeld-asymptotic] and later in [@ein-erman-lazarsfeld-random]. While they focused on quantitative behaviors that are independent of the ground field, we ask: [*To what extent should we expect the syzygies to depend on the characteristic, if at all? Given the impossibility of computing data for large $d$, how can we make a reasonable conjecture?*]{} The central idea in this paper is the development of an heuristic—based on a random flag complex construction—for modelling the syzygies of Veronese embeddings of projective space. The resulting conjectures propose that, when it comes to dependence on the characteristic of the ground field, pathologies are the norm. Let us make this more precise. For any integers $r,d\geq 1$ and any field $k$, we may consider the $d$-uple embedding of ${\mathbb{P}}^r_k$ into ${\mathbb{P}}^{\binom{r+d}{d}-1}$; the image is given by an ideal $I\subset S$, where $S$ is a polynomial ring in $\binom{r+d}{d}$ variables over $k$. 
We denote the algebraic Betti numbers of the image by $\beta_{i,j}({\mathbb{P}}^r_k;d) := \dim_k \operatorname{Tor}_i^S(S/I,k)_j$. These encode the number of degree $j$ generators for the $i$’th syzygies, and a major open question is to describe the Betti table $\beta({\mathbb{P}}^r_k;d)$, which is the collection of all these Betti numbers [@green-koszul2; @castryck-et-al; @big-computation; @anderson; @bouc; @jozefiak-pragacz-weyman; @reiner-roberts; @ottaviani-paoletti; @vu; @greco-martino; @ein-lazarsfeld-asymptotic; @ein-erman-lazarsfeld-quick; @raicu]. Since each individual Betti number is invariant under flat extensions, the Betti table is determined by the integers $r,d$ and the characteristic of $k$. For a prime $\ell$, we say that $\beta({\mathbb{P}}^r;d)$ [**has $\ell$-torsion**]{} if $\beta({\mathbb{P}}^r_{\mathbb F_\ell};d) \ne \beta({\mathbb{P}}^r_{\mathbb Q};d)$, and we say that $\beta({\mathbb{P}}^r;d)$ [**depends on the characteristic**]{} if this occurs for some $\ell$.[^1] There are two known cases. - For $r=1$ and any $d$, the Betti numbers in $\beta({\mathbb{P}}^r; d)$ do not depend on the characteristic, as any rational normal curve is resolved by an Eagon-Northcott complex. - If $r\geq 7$, Andersen’s thesis [@anderson] shows that $\beta_{5,7}(\mathbb P^r;2)$ has $5$-torsion. Very little else seems to be known or even conjectured about the dependence of Veronese syzygies on the characteristic, including no known examples of $\ell$-torsion for $\ell\ne 5$. One key challenge in this area is the difficulty of generating good data. For instance, the syzygies of ${\mathbb{P}}^2$ under the $5$-uple embedding were only recently computed [@castryck-et-al; @big-computation]. For larger values of $d$ and $r$, computation is essentially impossible: in the case of ${\mathbb{P}}^{10}$ and $d=100$ mentioned above, the computation would involve $\approx 4.68\times 10^{13}$ variables. 
Heuristics can provide an alternate route for generating conjectures, especially when computation is infeasible. (Such an approach is quite common for predicting properties of how the prime numbers are distributed, for instance.) In this paper, we use an heuristic model to motivate conjectures about $\ell$-torsion in $\beta({\mathbb{P}}^r;d)$. For instance, we are led to conjecture that dependence on the characteristic should be commonplace as $d\to \infty$. \[conj:dependence\] Let $r\geq 7$. For any $d\gg 0$, the Betti table of $\mathbb P^r$ under the $d$-uple embedding depends on the characteristic. This conjecture is based upon corresponding properties of the following model for Veronese syzygies. We let $\Delta\sim \Delta(n,p)$ denote a random flag complex on $n$ vertices with attaching probability $p$. (See §\[sec:background\] for details.) For a given field $k$, we let $I_\Delta$ be the corresponding Stanley–Reisner ideal in $S=k[x_1,\dots,x_n]$. Ein and Lazarsfeld showed that if $d\gg 0$, then almost all of the Betti numbers in rows $1,\dots, r$ of $\beta({\mathbb{P}}_k^r;d)$ are nonzero (see for instance [@erman-yang Theorem 1.1]). Theorem 1.3 of [@erman-yang] gives that a similar result holds for $I_\Delta$ as long as $n^{-1/(r-1)}\ll p \ll n^{-1/r}$ and $n\gg 0$. Thus, if $p$ is in the specified range, then the Betti table $\beta(S/I_\Delta)$ as $n\to \infty$ satisfies similar nonvanishing properties[^2] as $\beta({\mathbb{P}}_k^r;d)$ as $d\to \infty$; in this sense, the Betti tables $\beta(S/I_\Delta)$ determined by $\Delta(n,p)$ can act as a random model for Veronese syzygies. To predict how $\beta({\mathbb{P}}^r;d)$ depends on the characteristic, we will therefore consider the corresponding questions for $\beta(S/I_\Delta)$ for various fields $k$. 
As with Veronese syzygies, we say that the Betti table of the Stanley–Reisner ideal of $\Delta$ **has $\ell$-torsion** if this Betti table is different when defined over a field of characteristic $\ell$ than it is over $\mathbb Q$, and we say that this Betti table **depends on the characteristic** if this occurs for some $\ell$. We prove: \[thm:Delta depend\] Let $r\geq 7$, and let $\Delta\sim \Delta(n,p)$ be a random flag complex with $n^{-1/(r-1)} \ll p \ll n^{-1/r}$. With high probability as $n\to \infty$, the Betti table of the Stanley–Reisner ideal of $\Delta$ depends on the characteristic. In other words, if $p$ is in the range where the Betti table of the Stanley–Reisner ideal of $\Delta$ behaves like $\mathbb P^{r}$—in the sense of [@erman-yang Theorem 1.3]—then this Betti table will almost always depend on the characteristic for $n\gg 0$. This theorem is the basis of Conjecture \[conj:dependence\]. Since our $r \geq 7$ hypothesis in Conjecture \[conj:dependence\] is based upon properties of the $\Delta(n,p)$ model, the fact that this hypothesis lines up with Andersen’s example appears to be a coincidence; see Remarks \[rmk:r7\] and \[rmk:r bound\] for more details. Note also that, based on [@anderson], we might even find $\ell$-torsion in $\beta({\mathbb{P}}^r; d)$ for small values of $d$ as well; however, Theorem \[thm:Delta depend\] is asymptotic in nature, which motivates the $d\gg 0$ hypothesis in Conjecture \[conj:dependence\]. In fact, we prove the sharper result: \[thm:m torsion\] Let $m\geq 2$, and let $\Delta\sim \Delta(n,p)$ be a random flag complex with $n^{-1/6} \ll p \leq 1-\epsilon$ for some $\epsilon>0$. With high probability as $n\to \infty$, the Betti table of the Stanley–Reisner ideal of $\Delta$ has $\ell$-torsion for every $\ell$ dividing $m$. In particular, this holds for $\Delta\sim \Delta(n,p)$ where $n^{-1/(r-1)} \ll p \ll n^{-1/r}$ for any $r\geq 7$. 
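The proof strategy, detailed below, only needs $\ell$-torsion in the homology of some induced subcomplex. As a standalone warm-up, not taken from the paper, the following sketch (Python) exhibits the mechanism on the classical $6$-vertex triangulation of the real projective plane: this complex is not flag, but it is small, and the rank of its boundary map $\partial_2$ drops from $10$ over $\mathbb{Q}$ to $9$ over $\mathbb{F}_2$, so $\dim H_1$ jumps from $0$ to $1$ in characteristic $2$.

```python
from fractions import Fraction
from itertools import combinations

# Standard 6-vertex triangulation of the real projective plane.
faces = [(1,2,5),(1,2,6),(1,3,4),(1,3,6),(1,4,5),
         (2,3,4),(2,3,5),(2,4,6),(3,5,6),(4,5,6)]
edges = sorted({e for f in faces for e in combinations(f, 2)})

def boundary2():
    # Integer matrix of the simplicial boundary map C_2 -> C_1:
    # the edge omitting the k-th vertex of a face gets sign (-1)^k.
    rows = []
    for e in edges:
        row = []
        for f in faces:
            if set(e) <= set(f):
                k = [i for i in range(3) if f[i] not in e][0]
                row.append((-1) ** k)
            else:
                row.append(0)
        rows.append(row)
    return rows

def rank(mat, char):
    # Gaussian elimination: char=0 works over Q, char=p over F_p.
    M = [[Fraction(x) if char == 0 else x % char for x in row] for row in mat]
    r = 0
    for c in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(len(M)):
            if i != r and M[i][c] != 0:
                t = (M[i][c] / M[r][c]) if char == 0 else \
                    (M[i][c] * pow(M[r][c], -1, char)) % char
                for j in range(len(M[0])):
                    M[i][j] = M[i][j] - t * M[r][j]
                    if char != 0:
                        M[i][j] %= char
        r += 1
    return r

d2 = boundary2()
# dim H_1 = dim ker(d1) - rank(d2) = (15 - 5) - rank(d2) for this complex.
print(rank(d2, 0), rank(d2, 2))  # 10 over Q, 9 over F_2
```

The rank drop over $\mathbb{F}_2$ is exactly the $2$-torsion class in $H_1$; Hochster's formula transports this kind of jump into the Betti table of the associated Stanley–Reisner ideal.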
The proof of Theorem \[thm:m torsion\] (which implies Theorem \[thm:Delta depend\]) proceeds as follows. By Hochster’s formula [@bruns-herzog Theorem 5.5.1], it suffices to show that some induced subcomplex of $\Delta$ has $m$-torsion in its homology. So, for each $m$, we construct a flag complex $X_m$ with a small number of vertices and with $m$-torsion in $H_1(X_m)$. This complex is derived from Newman’s construction of a two-dimensional simplicial complex $X$ where $H_1(X)$ has $m$-torsion [@newman §3], though we modify his work to ensure that $X_m$ is a flag complex and to lower the maximal vertex degree. We then apply Bollobás’s theorem on subgraphs of a random graph [@Bollobas-subgraph Theorem 8]—or rather a minor variant of that result for induced subgraphs—to prove that $X_m$ appears as an induced subcomplex of $\Delta$ with high probability as $n\to \infty$, yielding Theorem \[thm:m torsion\].

Theorem \[thm:m torsion\] fits into an emerging literature on random monomial ideals. Our current work seems to be the first application of random monomial ideal methods to generate new conjectures outside of the world of monomial ideals. Random monomial ideals first appeared in the work of De Loera–Petrović–Silverstein–Stasi–Wilburne [@random-monomial], which outlined an array of frameworks for studying random monomial ideals, including the model used in this paper, as well as models related to other types of random simplicial complexes such as [@costa-farber; @kahle]; they also proved threshold results for dimension and other invariants of these ideals. In [@average-random], similar methods are applied to study the average behavior of Betti tables of random monomial ideals and to compare these with certain resolutions of generic monomial ideals. Recent work of Banerjee and Yogeshwaran analyzes homological properties of the edge ideals of Erdős–Rényi random graphs [@banerjee].
The forthcoming [@stilverstein-wilburne-yang] looks more closely at threshold phenomena in the phase transitions of the random models from [@random-monomial]. There is also the previously referenced [@erman-yang], which uses random monomial methods to demonstrate some asymptotic syzygy phenomena observed or conjectured in [@ein-lazarsfeld-asymptotic; @ein-erman-lazarsfeld-random]. Beyond monomial ideals, there is a great deal of literature on $\ell$-torsion arising in random constructions. The most relevant such study is perhaps the recent work by Kahle–Lutz–Newman on $\ell$-torsion in the homology of random simplicial complexes [@torsion-burst], which conjectures the existence of bursts of torsion homology at specific thresholds. For comparison, those authors are interested in $\ell$-torsion in the global homology of a complex like $\Delta(n,p)$, whereas, due to Hochster’s formula, we analyze the simpler question of finding $\ell$-torsion in the homology of [*any*]{} induced subcomplex of $\Delta(n,p)$.

\[rmk:r7\] We note that the bound $r\geq 7$ in Theorem \[thm:m torsion\] is not necessarily sharp. In fact, we undertake a detailed investigation of the $2$-torsion of the Betti table of the Stanley–Reisner ideal of $\Delta$ in §\[sec:2torsion\], which yields a bound of $r\geq 4$. See Remarks \[rmk:sharpness\] and \[rmk:r bound\] for further discussion on restrictions on $r$ in both Conjecture \[conj:dependence\] and Theorem \[thm:m torsion\].

Theorem \[thm:m torsion\] also leads us to a stronger conjecture on Veronese $\ell$-torsion:

\[conj:bad primes\] Let $r\geq 7$. As $d \to \infty$, the number of primes $\ell$ such that $\beta({\mathbb{P}}^r;d)$ has $\ell$-torsion will be unbounded.

Regarding Conjectures \[conj:dependence\] and \[conj:bad primes\], it is worth emphasizing the total lack of direct evidence. As noted above, [@anderson] appears to provide the only known instance of $\ell$-torsion for any Veronese embedding.
These conjectures are based primarily upon the heuristic model and, to a lesser extent, upon the nonvanishing results of [@ein-lazarsfeld-asymptotic; @ein-erman-lazarsfeld-quick], both of which rely on an inductive structure where pathologies in $\beta({\mathbb{P}}^r;d)$ tend to propagate as $d\to \infty$, and both of which show that the asymptotic behavior of syzygies exhibits a strong uniformity. However, we do not expect our random flag complex model to be a perfect predictor of all properties of Veronese syzygies. In fact, the results in [@erman-yang] imply that while the Betti tables associated to $\Delta$ have similar overarching nonvanishing properties as Veronese embeddings, these Betti tables do not demonstrate more nuanced properties such as Green’s $N_p$-property [@green-koszul2]: our model will not give correct predictions about these properties. This is why Conjectures \[conj:dependence\] and  \[conj:bad primes\] echo certain qualitative aspects of Theorem \[thm:m torsion\] as opposed to more specific and quantitative predictions about $\ell$-torsion in $\beta({\mathbb{P}}^r; d)$. In a rather different direction, an alternate heuristic model for Veronese syzygies is considered in  [@ein-erman-lazarsfeld-random]. That model is based on Boij-Söderberg theory and is used to generate quantitative conjectures about the entries of $\beta({\mathbb{P}}^r_k;d)$ for $d\gg 0$. However, since this model does not take into account the characteristic of the field, it cannot be used to generate conjectures such as those above. See also the results of [@cjw], which provided a combinatorial parallel of the asymptotic results of [@ein-lazarsfeld-asymptotic]. \[rmk:other varieties\] Ein and Lazarsfeld’s asymptotic nonvanishing results are more-or-less uniform for any smooth variety of dimension $r$ [@ein-lazarsfeld-asymptotic Theorem A], and these were even expanded to integral varieties by Zhou [@zhou-integral]. 
In this paper, we restrict attention to ${\mathbb{P}}^r$ for concreteness, but we expect that Conjecture \[conj:dependence\] would likely apply to the $d$-uple embeddings of any $r$-dimensional integral variety which is flat over ${\mathbb{Z}}$, including products of projective spaces, toric varieties, hypersurfaces, Grassmannians, and more.

This paper is organized as follows. In §\[sec:background\], we review notation and background, including on Betti numbers, Hochster’s formula, and random flag complexes. §\[sec:construction\] contains our main construction: an explicit flag complex $X_m$ with $m$-torsion in homology; see Theorem \[thm:Xm\]. In §\[sec:subgraphs\], we apply a minor variant of Bollobás’s Theorem on subgraphs of a random graph to show that, with high probability, $X_m$ appears as an induced subgraph of $\Delta(n,p)$ for any $n^{-1/6}\ll p\ll 1$ and $m\geq 2$. In §\[sec:Betti numbers\], we then combine this result with Hochster’s formula to prove Theorem \[thm:m torsion\]. §\[sec:2torsion\] is a bit separate from the main results, as we analyze the case of 2-torsion more closely. Finally, §\[sec:veronese\] returns to the geometric setting, where we use our results to produce heuristics and conjectures about $\ell$-torsion in Veronese syzygies.

Acknowledgments {#acknowledgments .unnumbered}
---------------

We thank Kevin Kristensen, Rob Lazarsfeld, Andrew Newman, and Melanie Matchett Wood for helpful conversations. We thank Claudiu Raicu for thoughtful comments on an early draft.

Background and Notation {#sec:background}
=======================

Betti tables for Veronese embeddings
------------------------------------

For a given $r,d\geq 1$ and field $k$, we have the $d$-uple Veronese embedding ${\mathbb{P}}^r_k\to {\mathbb{P}}_k^{\binom{r+d}{d}-1}$. The image is determined by a homogeneous ideal $I\subset S$ where $S$ is a polynomial ring with coefficients in $k$ and $\binom{r+d}{d}$ variables.
The homogeneous coordinate ring $S/I$ of the image is a graded $S$-module. We can thus take a minimal free resolution $F_0\gets F_1 \gets \cdots$ of $S/I$, where each $F_i$ is a graded free $S$-module, $F_i=\displaystyle\bigoplus_{j\in {\mathbb{Z}}} S(-j)^{\beta_{i,j}(S/I)}$. This provides one way to define the algebraic Betti numbers; an alternate definition is $\beta_{i,j}(S/I) := \dim_k \operatorname{Tor}_i^S(S/I,k)_j$. To emphasize the dependence on $r$ and $d$ (and to avoid referencing the ambient ring $S$ and the homogeneous coordinate ring $S/I$, both of which change with $d$), we will denote these Betti numbers by $\beta_{i,j}({\mathbb{P}}^r_k;d)$ instead of the more standard $\beta_{i,j}(S/I)$. Further, we write $\beta({\mathbb{P}}^r_k;d)$ for the Betti table of this embedding, which is the collection of all $\beta_{i,j}({\mathbb{P}}^r_k;d)$. Torsion in Betti tables {#subsec:22} ----------------------- Throughout this paper we will analyze graded algebras, all of which have the following form: there is an ideal $I$ in a polynomial ring $T$ with coefficients in ${\mathbb{Z}}$, where $T/I$ is flat over ${\mathbb{Z}}$, and we are interested in specializations $(T/I)\otimes_{{\mathbb{Z}}} k$ to various different fields $k$. We consider such graded algebras that arise in two ways: as the coordinate rings of Veronese embeddings of projective space and as the Stanley–Reisner rings of simplicial complexes. The central questions of this paper are concerned with when the Betti numbers of such algebras depend on the choice of the characteristic of $k$. First, we consider the Veronese embeddings. For any positive integers $r$ and $d$, we can embed ${\mathbb{P}}^r_{{\mathbb{Z}}}\to {\mathbb{P}}^{\binom{r+d}{d}-1}_{{\mathbb{Z}}}$ via the $d$-uple Veronese embedding. If $T$ is the polynomial ring for the larger projective space, then there is an ideal $I\subset T$ defining the image of this map. 
Since $T/I$ is flat over ${\mathbb{Z}}$, the coordinate ring of the Veronese embedding over a field $k$ is given by $(T/I)\otimes_{{\mathbb{Z}}} k$. As noted in the previous subsection (with $S=T\otimes_{{\mathbb{Z}}} k$), the algebraic Betti numbers are defined as $$\beta_{i,j}({\mathbb{P}}^r_k;d) := \dim_k \operatorname{Tor}^{T\otimes_{{\mathbb{Z}}} k}_i((T/I)\otimes_{{\mathbb{Z}}} k,k)_j.$$ Since field extensions are flat, algebraic Betti numbers are invariant under field extensions, and thus $\beta({\mathbb{P}}^r_k;d)$ only depends on $r,d$ and the characteristic of $k$. Moreover, by semicontinuity, we have an inequality $ \beta_{i,j}({\mathbb{P}}^r_{\mathbb Q};d) \leq \beta_{i,j}({\mathbb{P}}^r_{\mathbb F_{\ell}};d) $ for any prime $\ell$ (with equality for all but finitely many $\ell$). As noted in the introduction, we will say that $\beta({\mathbb{P}}^r;d)$ [**has $\ell$-torsion**]{} if this inequality is strict for some $i,j$, and we will say that $\beta({\mathbb{P}}^r;d)$ [**depends on the characteristic**]{} if this inequality is strict for some $i,j$ and some $\ell$.

\[rmk:torsion\] Let $I$ be an ideal in $T={\mathbb{Z}}[x_1,\dots,x_n]$ such that $T/I$ is flat over ${\mathbb{Z}}$. Let $S' = T\otimes_{{\mathbb{Z}}} \mathbb F_{\ell}=\mathbb F_\ell[x_1,\dots,x_n]$ and $I'=IS'$. By a standard argument (the universal coefficient theorem, applied to a free resolution of $T/I$ over $T$ reduced modulo the variables), it follows that $$\dim_{\mathbb{F}_\ell} \operatorname{Tor}^{S'}_i(S'/I',\mathbb F_\ell)_j = \dim_{\mathbb{F}_\ell} (\operatorname{Tor}^T_i(T/I,{\mathbb{Z}})_j\otimes_{\mathbb{Z}}\mathbb F_\ell) + \dim_{\mathbb{F}_\ell}(\operatorname{Tor}^{{\mathbb{Z}}}_1(\operatorname{Tor}^T_{i-1}(T/I,{\mathbb{Z}})_j, \mathbb F_\ell)).$$ In particular, the Betti table of such an ideal has $\ell$-torsion in the sense of the introduction if and only if $\operatorname{Tor}_{i}^T(T/I,{\mathbb{Z}})_j$ has $\ell$-torsion as an abelian group for some $i$ and $j$.

We next consider notation for monomial ideals since Stanley–Reisner ideals of simplicial complexes are monomial ideals.
Let $J$ be a monomial ideal in $T={\mathbb{Z}}[x_1,\dots,x_n]$. For a field $k$, the algebraic Betti numbers of $(T/J)\otimes_{{\mathbb{Z}}} k$ are given by $$\beta_{i,j}((T/J)\otimes_{{\mathbb{Z}}} k) := \dim_k \operatorname{Tor}^{T\otimes_{{\mathbb{Z}}} k}_i((T/J)\otimes_{{\mathbb{Z}}} k,k)_j.$$ As in the Veronese case, these only depend on the characteristic of the field, and we have the same inequality $\beta_{i,j}((T/J)\otimes_{{\mathbb{Z}}} \mathbb Q)\leq \beta_{i,j}((T/J)\otimes_{{\mathbb{Z}}} \mathbb F_{\ell}).$ As in the introduction, we say that $\beta(T/J)$ [**has $\ell$-torsion**]{} if this inequality is strict for some $i,j$, and we say that $\beta(T/J)$ [**depends on the characteristic**]{} if it has $\ell$-torsion for some $\ell$. Graphs and simplicial complexes ------------------------------- For a simplicial complex $X$, we write $V(X),$ $E(X),$ and $F(X)$ for the set of vertices, edges, and (2-dimensional) faces of $X$, respectively. We use $|*|$ to denote the number of elements in these sets. The degree of a vertex $v$ (denoted $\deg(v)$) is the number of edges in $X$ containing $v$. We write $\operatorname{maxdeg}(X)$ for the maximum degree of any vertex of $X$, and we write ${\operatorname{avg}}(X)$ for the average degree of a vertex in $X$. For a pair of graphs $H,G$, we write $H\subset G$ if $H$ is a subgraph of $G$. We write $H \overset{ind}{\subset} G$ if $H$ is an induced subgraph of $G$, that is, if the vertices of $H$ are a subset of the vertices of $G$ and the edges of $H$ are precisely the edges connecting those vertices within $G$ (see Figure \[fig:induced graph\]). We use similar definitions and notations for a simplicial complex $\Delta'$ to be a subcomplex (or an induced subcomplex) of another complex $\Delta$. If $\alpha \subset V(\Delta)$, then we let $\Delta|_{\alpha}$ denote the induced subcomplex of $\Delta$ on $\alpha$. 
[Figure \[fig:induced graph\]: a graph $G$ on the vertices $1,2,3,4$ together with a subgraph $H$ on the vertices $1,2,3$; the edge $\{1,3\}$ of $G$ is absent from $H$, so $H$ is a subgraph of $G$ but not an induced subgraph.]

The following definitions, adapted from [@Bollobas-subgraph] and [@Analytic-bollobas], will be used in sections \[sec:subgraphs\], \[sec:2torsion\], and \[sec:Betti numbers\].

\[mG\] The **essential density** of a graph $G$ is $$m(G):= \max\left\{\frac{|E(H)|}{|V(H)|}\; : \; H\subset G,\, |V(H)|>0\right\},$$ and $G$ is **strictly balanced** if $m(H)<m(G)$ for all proper subgraphs $H\subset G$.

From a simplicial complex $\Delta$ on $n$ vertices, there is a corresponding Stanley–Reisner ideal $I_\Delta\subset S=k[x_1,\dots,x_n]$. Since these $I_\Delta$ are squarefree monomial ideals, Hochster’s Formula [@bruns-herzog Theorem 5.5.1] relates the Betti table of $S/I_\Delta$ to topological properties of $\Delta$, providing our key tool for studying $\beta(S/I_\Delta)$ for various fields $k$. An immediate consequence of Hochster’s formula is the following fact, which characterizes when these Betti tables are different over a field of characteristic $\ell$ than over ${\mathbb{Q}}$.

\[fact:depend\] For a simplicial complex $\Delta$, the Betti table of the Stanley–Reisner ideal $I_\Delta$ has $\ell$-torsion if and only if there exists a subset $\alpha \subset V(\Delta)$ such that $\Delta|_\alpha$ has $\ell$-torsion in one of its homology groups.

Monomial ideals from random flag complexes
------------------------------------------

Our monomial ideals are Stanley–Reisner ideals associated to random flag complexes.
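As a concrete sketch (not from the paper; the parameters are illustrative), the model $\Delta(n,p)$ can be sampled by drawing an Erdős–Rényi graph and recording its triangles, which determine the $2$-skeleton of the clique complex recalled below. Here $p = n^{-0.4}$ is chosen inside the window $n^{-1/2}\ll p \ll n^{-1/3}$ corresponding to $r=3$.

```python
import random
from itertools import combinations

def random_flag_complex(n, p, seed=None):
    # Sample the Erdos-Renyi graph G(n, p), then take its clique complex:
    # every 3-clique of the graph becomes a 2-face.  Higher-dimensional
    # faces are also determined by the graph, but are omitted here since
    # questions about H_1 of induced subcomplexes only need the 2-skeleton.
    rng = random.Random(seed)
    edges = {e for e in combinations(range(n), 2) if rng.random() < p}
    triangles = [t for t in combinations(range(n), 3)
                 if all(e in edges for e in combinations(t, 2))]
    return edges, triangles

edges, triangles = random_flag_complex(30, 30 ** -0.4, seed=0)
print(len(edges), len(triangles))
```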
Recall that a flag complex is a simplicial complex obtained from a graph by adjoining a $k$-simplex to every $(k+1)$-clique in the graph. In particular, a flag complex is entirely determined by its underlying graph, and the process of obtaining a flag complex from its underlying graph is called taking the clique complex. We write $\Delta \sim \Delta(n,p)$ to denote the flag complex which is the clique complex of an Erdős-Rényi random graph $G(n,p)$ on $n$ vertices, where each edge is attached with probability $p$. If $\alpha\subset V(\Delta)$, then we note that $\Delta|_{\alpha}$ is also flag. The properties of random flag complexes have been analyzed extensively, with [@kahle] providing an overview. As discussed in the introduction, the syzygies of Stanley–Reisner ideals of random flag complexes were first studied in [@erman-yang]. Probability ----------- We use the notation ${\mathbf{P}}[*]$ for the probability of an event. For a random variable $X$, we use ${\mathbf{E}}[X]$ for the expected value of $X$ and $\operatorname{Var}(X)$ for the variance of $X$. For functions $f(x)$ and $g(x)$, we write $f\ll g$ if ${\displaystyle}\lim_{x\to \infty} f/g \to 0$. We use $f\in O(g)$ if there is a constant $N$ where $|f(x)|\leq N|g(x)|$ for all sufficiently large values of $x$, and we use $f\in \Omega(g)$ if there is a constant $N'$ where $|f(x)|\geq N'|g(x)|$ for all sufficiently large values of $x$. Constructing a flag complex with $m$-torsion homology {#sec:construction} ===================================================== The goal of this section is to prove the following result: \[thm:Xm\] For every $m \geq 2$, there exists a two-dimensional flag complex $X_m$ such that the torsion subgroup of $H_1(X_m)$ is isomorphic to ${\mathbb{Z}}/m{\mathbb{Z}}$ and $\operatorname{maxdeg}(X_m)\leq 12$. 
This result is the foundation of our proof of Theorem \[thm:m torsion\], as we will show that this specific complex $X_m$ appears as an induced subcomplex of $\Delta(n,p)$ with high probability under the hypotheses of that theorem. Here is an overview of our proof of Theorem \[thm:Xm\], which is largely based on ideas from [@newman]. Given an integer $m\geq 2$, we write its binary expansion as $m=2^{n_1}+\cdots+2^{n_k}$ with $0\leq n_1<\cdots<n_k$. Note that $k$ is the Hamming weight of $m$ and $n_k= \lfloor \log_2(m) \rfloor$. With this setup, the “repeated squares presentation” of ${\mathbb{Z}}/m{\mathbb{Z}}$ is given by $${\mathbb{Z}}/m{\mathbb{Z}}= {\langle}\gamma_0,\gamma_1,\dots,\gamma_{n_k}\; |\; 2\gamma_0=\gamma_1, 2\gamma_1=\gamma_2,\dots, 2\gamma_{n_k-1}=\gamma_{n_k}, \gamma_{n_1}+\cdots+\gamma_{n_k}=0 {\rangle}.$$ We will construct a two-dimensional flag complex $X_m$ such that the torsion subgroup of $H_1(X_m)$ has this presentation. To do so, we follow Newman’s “telescope and sphere” construction in [@newman], where $Y_1$ is the telescope satisfying $$H_1(Y_1) \cong {\langle}\gamma_0,\gamma_1,\dots,\gamma_{n_k}\; |\; 2\gamma_0=\gamma_1, 2\gamma_1=\gamma_2,\dots, 2\gamma_{n_k-1}=\gamma_{n_k} {\rangle},$$ $Y_2$ is the sphere satisfying $$H_1(Y_2) \cong {\langle}\tau_1,\dots,\tau_k \; |\; \tau_1+\cdots+\tau_k=0 {\rangle},$$ and $X_m$ is created by gluing $Y_1$ and $Y_2$ together to yield a complex with the desired $H_1$-group. Because we want our construction to be a flag complex with $\operatorname{maxdeg}(X_m)\leq 12$, we cannot simply quote Newman’s results. Instead, we must alter the triangulations to ensure that $Y_1$, $Y_2$, and $X_m$ are flag complexes. Then, we must further alter the construction to reduce $\operatorname{maxdeg}(X_m)$. However, each of our constructions is homeomorphic to the corresponding construction of Newman’s.

\[notation:hamming etc\] Throughout the remainder of this section we assume that $m\geq 2$ is given.
We write $m=2^{n_1}+\cdots+2^{n_k}$ with $0\leq n_1<\cdots<n_k$. To simplify notation, we also denote $X_m$ by $X$ for the remainder of this section.

The telescope construction
--------------------------

The telescope $Y_1$ that we construct will be homeomorphic to the $Y_1$ that Newman constructs in [@newman Proof of Lemma 3.1] for the $d=2$ case. We start with building blocks which are punctured projective planes; in contrast with [@newman], our blocks are triangulated so that each is a flag complex. Explicitly, for each $i=0,\dots,(n_k-1)$, we produce a building block which is a triangulated projective plane with a square face removed, with vertices, edges, and faces as illustrated in Figure \[fig:Y1\]. Our building blocks differ from Newman’s in order to ensure that $Y_1$ and the final simplicial complex $X$ are flag complexes; for instance, we need to add extra vertices $v'_{8i},\dots,v'_{8i+7}$.

[Figure \[fig:Y1\]: the $i$th building block, a flag triangulation of the projective plane with a square face removed. The outer octagonal boundary carries the labels $v_{4i}, v_{4i+1}, v_{4i+2}, v_{4i+3}$, each appearing twice under the antipodal identification; the removed inner square has vertices $v_{4i+4},\dots,v_{4i+7}$; and the intermediate ring consists of the vertices $v'_{8i},\dots,v'_{8i+7}$.]

We construct $Y_1$ by identifying edges and vertices of these $n_k$ building blocks as labeled. The underlying vertex set is $V(Y_1) = \{v_0,v_1,v_2,\dots,v_{4n_k+3},v'_0,v'_1,\dots,v'_{8n_k-1}\}$, so we have $|V(Y_1)|=(4n_k+4)+8n_k=12n_k+4$. Since each building block has $44$ edges, $4$ of which are glued to the next building block, and $28$ faces, a similar computation yields $|E(Y_1)|=40n_k+4$ and $|F(Y_1)|=28n_k$.
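These counts can be double-checked mechanically. The following sketch (Python, a consistency check rather than part of the proof) rebuilds the vertex labels block by block, using the per-block numbers quoted above, and verifies the formulas together with the Euler characteristic $\chi(Y_1) = |V(Y_1)| - |E(Y_1)| + |F(Y_1)| = 0$.

```python
def telescope_counts(nk):
    # Block i uses v_{4i}, ..., v_{4i+7} (the last four are shared with
    # block i+1) together with its own ring v'_{8i}, ..., v'_{8i+7}.
    verts = set()
    for i in range(nk):
        verts |= {("v", j) for j in range(4 * i, 4 * i + 8)}
        verts |= {("v'", j) for j in range(8 * i, 8 * i + 8)}
    edges = 44 * nk - 4 * (nk - 1)  # 44 per block, 4 glued to the next
    faces = 28 * nk
    return len(verts), edges, faces

for nk in range(1, 7):
    V, E, F = telescope_counts(nk)
    assert (V, E, F) == (12 * nk + 4, 40 * nk + 4, 28 * nk)
    assert V - E + F == 0  # Euler characteristic of the telescope
print("formulas verified for n_k up to 6")
```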
In addition, observe that the vertices of highest degree are those in the squares in the “middle” of the telescope, such as vertex $v_4$ when $n_k\geq 2$. In this case, $v_4$ is adjacent to $v_5, v_7, v'_0, v'_1, v'_7, v'_8, v'_{15}, v'_{11},$ and $v'_{12}$, so $\deg(v_4)=9$. By the symmetry of $Y_1$, we have that $\operatorname{maxdeg}(Y_1)=9$ when $n_k\geq 2$, and $\operatorname{maxdeg}(Y_1)=6$ when $n_k=1$ (when $m=2,3$). To compute $H_1(Y_1)$, we simply apply the identical argument from [@newman]. We order the vertices in the natural way, where $v_j>v_k$ if $j>k$, similarly for the $v_\ell'$, and where $v'_{\ell} > v_j$ for all $\ell,j$. We let these vertex orderings induce orientations on the edges and faces of $Y_1$. For each $i=0,\dots,n_k$, denote by $\gamma_i$ the 1-cycle of $Y_1$ represented by $[v_{4i},v_{4i+1}]+[v_{4i+1},v_{4i+2}]+[v_{4i+2},v_{4i+3}]-[v_{4i},v_{4i+3}]$. Then $2\gamma_i - \gamma_{i+1}$ is a 1-boundary of $Y_1$ for each $i=0,\dots,(n_k-1)$, and, as in Newman’s construction, we have that $H_1(Y_1)$ can be presented as ${\langle}\gamma_0,\gamma_1,\dots,\gamma_{n_k}\; |\; 2\gamma_0=\gamma_1, 2\gamma_1=\gamma_2,\dots, 2\gamma_{n_k-1}=\gamma_{n_k} {\rangle}$. The sphere construction ----------------------- The sphere part $Y_2$ is a flag triangulation of the sphere $S^2$ that has $k$ square holes such that the squares are all vertex disjoint and nonadjacent. Our $Y_2$ will be homeomorphic to the $Y_2$ that Newman constructs in [@newman] for the $d=2$ case, but our construction involves a few different steps. First, we will show that for any integer $k\geq 1$, there exists a flag triangulation $T_i$ of $S^2$ (here $i=\lfloor \frac{k-1}{4}\rfloor$) with at least $k$ faces such that $\operatorname{maxdeg}(T_i)\leq 6$. Then, we will insert square holes on $k$ of the faces of $T_i$, while subdividing the edges, and call the resulting flag complex $\widetilde{T_i}$. 
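Together with the additional relation $\gamma_{n_1}+\cdots+\gamma_{n_k}=0$ that the sphere part will contribute, these relations recover the repeated squares presentation of ${\mathbb{Z}}/m{\mathbb{Z}}$. Since the relations $2\gamma_i=\gamma_{i+1}$ force the presented group to be generated by $\gamma_0$, it is cyclic of order $|\det P|$, where $P$ denotes the square relations matrix (notation ours); a quick sketch (Python) confirms that $|\det P| = m$:

```python
def presentation_matrix(m):
    # Relations on generators gamma_0, ..., gamma_{n_k}:
    # 2*gamma_i - gamma_{i+1} = 0 for i < n_k, plus the sum of the
    # gamma_{n_j} over the binary digits n_1 < ... < n_k of m.
    bits = [i for i in range(m.bit_length()) if (m >> i) & 1]
    nk = bits[-1]
    rows = []
    for i in range(nk):
        row = [0] * (nk + 1)
        row[i], row[i + 1] = 2, -1
        rows.append(row)
    rows.append([1 if i in bits else 0 for i in range(nk + 1)])
    return rows

def det(M):
    # Laplace expansion along the first row; fine at these small sizes.
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * a * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j, a in enumerate(M[0]) if a)

for m in (2, 3, 12, 100):
    print(m, abs(det(presentation_matrix(m))))  # |det| equals m each time
```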
Finally, we describe a process to replace each vertex of degree 14 in $\widetilde{T_i}$ with two degree 9 vertices so that the resulting complex, $Y_2$, has $\operatorname{maxdeg}(Y_2)\leq 12$. Throughout these constructions, we will have four cases corresponding to the value of $k \mod 4$, and we carefully keep track of the degrees of each vertex in $T_i$, $\widetilde{T_i}$, and $Y_2$ for each case. ### $T_i$ and flag bistellar 0-moves We begin by constructing an infinite sequence $T_0, T_1, T_2, \dots$ of flag triangulations of $S^2$ such that $\operatorname{maxdeg}(T_i)\leq 6$ for all $i$. To do so, we adapt the bistellar 0-moves used in [@newman Lemma 5.6]. Let $T_0$ be the $3$-simplex boundary on the vertex set $\{w_0, w_1, w_2, w_3\}$. Note that each vertex of $T_0$ has degree 3. We will construct the remaining $T_i$ inductively. To build $T_1$, first remove the face $[w_1, w_2, w_3]$ and edge $[w_1, w_3]$. Then, add two new vertices $w_4$ and $w_5$ as well as new edges $[w_0, w_4], [w_1, w_4], [w_3, w_4], [w_1, w_5],$ $[w_2, w_5], [w_3, w_5],$ and $[w_4, w_5]$. Taking the clique complex will then give $T_1$. See Figure \[fig:triangulations\]. Essentially, this process is the same as making the face $[w_1,w_2,w_3]$ into a square face $[w_1,w_2,w_3,w_4]$, removing that square face, taking the cone over it, and then ensuring that the resulting complex is a flag triangulation of $S^2$. We will call such a move a **flag bistellar 0-move**. Each $T_{i+1}$ for $i\geq 0$ will be obtained from $T_i$ by performing a flag bistellar 0-move on the face $[w_{2i+1}, w_{2i+2}, w_{2i+3}]$ of $T_i$. Explicitly, to construct $T_{i+1}$, remove the face $[w_{2i+1}, w_{2i+2}, w_{2i+3}]$ and the edge $[w_{2i+1}, w_{2i+3}]$. 
Then, add new vertices $w_{2i+4}$ and $w_{2i+5}$ and new edges $[w_{2i}, w_{2i+4}], [w_{2i+1}, w_{2i+4}], [w_{2i+3}, w_{2i+4}], [w_{2i+1},w_{2i+5}],$ $[w_{2i+2}, w_{2i+5}], [w_{2i+3}, w_{2i+5}],$ $[w_{2i+4}, w_{2i+5}],$ and take the clique complex to get $T_{i+1}$. Note that each flag bistellar 0-move adds 2 vertices, 6 edges, and 4 faces. Since $|V(T_0)|=4, |E(T_0)|=6$, and $|F(T_0)|=4$, this means that $|V(T_i)|=2i+4$, $|E(T_i)|=6i+6$, and $|F(T_i)|=4i+4$. Further, Table \[tab:Ti\_vertices\] summarizes the degrees of the vertices in each $T_i$.

  $T_i$       Degree   Vertices
  ----------- -------- --------------------------------
  $T_0$       3        $w_0, w_1, w_2, w_3$
  $T_1$       4        $w_0, w_1, w_2, w_3, w_4, w_5$
  $T_2$       4        $w_0, w_1, w_6, w_7$
              5        $w_2, w_3, w_4, w_5$
  $T_i$       4        $w_0, w_1, w_{2i+2}, w_{2i+3}$
  $i\geq 3$   5        $w_2, w_3, w_{2i}, w_{2i+1}$
              6        $w_4, \dots, w_{2i-1}$

  : Degrees of the vertices in $T_i$.[]{data-label="tab:Ti_vertices"}

To compute the degrees of vertices in $T_i$ for $i\geq 3$, observe that when the new vertices $w_{2i+2}$ and $w_{2i+3}$ are added, they have degree $4$ in $T_i$. For each of the next two iterations of the flag bistellar 0-move, the degree of these vertices increases by one, resulting in degree 6 in $T_{i+2}$. In the remaining triangulations $T_j$ with $j\geq i+3$, these vertices are not affected. Therefore, $\operatorname{maxdeg}(T_i)\leq 6$ for each $i$.
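Since a flag complex is determined by its underlying graph, the sequence $T_0, T_1, T_2, \dots$ can be simulated purely at the level of edge sets. The following sketch (Python, a consistency check rather than part of the proof) performs the flag bistellar 0-moves and confirms $|V(T_i)| = 2i+4$, $|E(T_i)| = 6i+6$, $|F(T_i)| = 4i+4$, the bound $\operatorname{maxdeg}(T_i)\leq 6$, and $\chi(T_i) = 2$ for small $i$:

```python
from itertools import combinations

def T(steps):
    # Start from T_0 (the boundary of the 3-simplex on w_0, ..., w_3) and
    # apply the flag bistellar 0-move on face [w_{2i+1}, w_{2i+2}, w_{2i+3}].
    edges = {frozenset(e) for e in combinations(range(4), 2)}
    for i in range(steps):
        edges.discard(frozenset((2 * i + 1, 2 * i + 3)))
        a, b = 2 * i + 4, 2 * i + 5
        edges |= {frozenset(e) for e in
                  [(2 * i, a), (2 * i + 1, a), (2 * i + 3, a),
                   (2 * i + 1, b), (2 * i + 2, b), (2 * i + 3, b), (a, b)]}
    return edges

def stats(edges):
    verts = sorted({v for e in edges for v in e})
    # In a flag complex the 2-faces are exactly the triangles of the graph.
    tris = [t for t in combinations(verts, 3)
            if all(frozenset(e) in edges for e in combinations(t, 2))]
    maxdeg = max(sum(1 for e in edges if v in e) for v in verts)
    return len(verts), len(edges), len(tris), maxdeg

for i in range(6):
    V, E, F, md = stats(T(i))
    print(i, (V, E, F), "maxdeg", md)  # expect (2i+4, 6i+6, 4i+4), md <= 6
```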
[Figure \[fig:triangulations\]: the flag triangulations $T_0$, $T_1$, and $T_2$ of $S^2$, on the vertex sets $\{w_0,\dots,w_3\}$, $\{w_0,\dots,w_5\}$, and $\{w_0,\dots,w_7\}$ respectively.]

From this infinite sequence of flag triangulations of $S^2$ with bounded degree, we are interested in the particular $T_i$ with $i=\lfloor \frac{k-1}{4} \rfloor$ to use in our construction of $Y_2$, where $k$ is the Hamming weight of $m$ as in Notation
\[notation:hamming etc\]. Note that this $T_i$ has vertex set $\{w_0,\dots,w_{2i+3}\}$ and has $4\lfloor \frac{k-1}{4}\rfloor+4$ faces. Let $\delta$ be the integer $0\leq \delta \leq 3$ where $\delta \equiv -k \mod 4$. Then $T_i$ has exactly $k+\delta$ faces. ### Constructing $\widetilde{T_i}$ Next, we insert square holes in the first $k$ faces of $T_i$ and subdivide the remaining faces in such a way that the squares will be vertex disjoint and nonadjacent. First, we will insert square holes in $k$ of the faces of $T_i$, making sure to triangulate the resulting faces and take the clique complex so that our simplicial complex remains flag. Let $[w_r,w_s,w_t]$ with $r<s<t$ be the $j$th of these $k$ faces with respect to a fixed ordering of the faces (where $j$ ranges from 1 to $k$). We remove this face and subdivide the edges by adding new vertices $w'_{r,s}, w'_{r,t},$ and $w'_{s,t}$ and new edges $[w_r, w'_{r,s}], [w_s, w'_{r,s}], [w_r, w'_{r,t}], [w_t, w'_{r,t}],$ $[w_s, w'_{s,t}],$ and $[w_t, w'_{s,t}]$. Then, we add vertices $u_{4j-4}, u_{4j-3}, u_{4j-2},$ and $u_{4j-1}$ to form a square inside the original face with indices increasing counterclockwise. Moreover, we add edges $$\begin{aligned} & [w_r, u_{4j-4}], [w_r, u_{4j-1}], [u_{4j-4}, w'_{r,s}], [u_{4j-3}, w'_{r,s}], [w_s, u_{4j-3}] \\ & [u_{4j-3}, w'_{s,t}], [u_{4j-2}, w'_{s,t}], [w_t, u_{4j-2}], [u_{4j-2}, w'_{r,t}], [u_{4j-1}, w'_{r,t}].\end{aligned}$$ After applying this process, we take the clique complex. The result of this operation on face $[w_r, w_s, w_t]$ is depicted in Figure \[fig:Y2\_faces\] (left). The remaining $\delta$ faces of $T_i$ will simply be subdivided and triangulated before taking the clique complex. 
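The arithmetic behind the choice of $i$ and $\delta$ can be checked directly; the following one-loop verification is ours, assuming only the face count $|F(T_i)|=4i+4$ from the previous subsection:

```python
# Check (not from the paper): with i = floor((k-1)/4) and 0 <= delta <= 3
# chosen so that delta = -k (mod 4), T_i has exactly k + delta faces.
for k in range(1, 200):
    i = (k - 1) // 4
    faces = 4 * i + 4      # |F(T_i)| from the previous subsection
    delta = (-k) % 4
    assert 0 <= delta <= 3
    assert faces == k + delta, (k, faces, delta)
print("face count matches k + delta for all tested k")
```

So for every Hamming weight $k\geq 1$ there are enough faces to hold $k$ square holes, with $\delta\leq 3$ faces left over.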
Explicitly, this means that after removing the face $[w_{2i+1},w_{2i+2},w_{2i+3}]$ and its edges, we add vertices $w'_{2i+1,2i+2}, w'_{2i+1,2i+3},$ and $w'_{2i+2,2i+3}$ and edges $$\begin{aligned}
&[w_{2i+1}, w'_{2i+1,2i+2}], [w_{2i+2}, w'_{2i+1,2i+2}], [w_{2i+1}, w'_{2i+1,2i+3}],\\
&[w_{2i+3}, w'_{2i+1,2i+3}], [w'_{2i+1,2i+2}, w'_{2i+1,2i+3}], [w_{2i+2}, w'_{2i+2,2i+3}],\\
&[w_{2i+3}, w'_{2i+2,2i+3}], [w'_{2i+1,2i+2}, w'_{2i+2,2i+3}], [w'_{2i+1,2i+3}, w'_{2i+2,2i+3}].\end{aligned}$$ This subdivision of face $[w_{2i+1},w_{2i+2},w_{2i+3}]$ is shown in Figure \[fig:Y2\_faces\] (right). We do similarly for the faces $[w_{2i-1},w_{2i+2}, w_{2i+3}]$ and $[w_{2i}, w_{2i+1}, w_{2i+3}]$, if necessary. The clique complex of this construction is a flag complex which is homeomorphic to $S^2$ with $k$ distinct points removed. Call this complex $\widetilde{T_i}$.

[Figure \[fig:Y2\_faces\]: left, a face $[w_r,w_s,w_t]$ of $T_i$ after its edges are subdivided by $w'_{r,s}, w'_{s,t}, w'_{r,t}$ and the square $u_{4j-4}, u_{4j-3}, u_{4j-2}, u_{4j-1}$ is inserted; right, the subdivision of the face $[w_{2i+1},w_{2i+2},w_{2i+3}]$ by $w'_{2i+1,2i+2}, w'_{2i+2,2i+3}, w'_{2i+1,2i+3}$.]

Let’s consider the degrees of the vertices of $\widetilde{T_i}$. We have that $\deg(w'_{m,n})=6$ for all $m,n$ and $\deg(u_\ell)\in \{4,5\}$ for all $\ell$, where the “top” $u_\ell$ have degree 4 and the “bottom” $u_\ell$ have degree 5. To determine the degrees of the $w_j$ vertices, we need to consider their degrees in $T_i$ and how their degrees increase during the subdivision and square face removal processes. As we are interested in bounding the maximum degree of the vertices of $\widetilde{T_i}$, we need only consider the case when $\delta=0$ and all $k$ faces of $T_i$ have a square removed from them.
  $\widetilde{T_i}$    Degree   Vertices
  -------------------- -------- ------------------------
  $\widetilde{T_0}$    6        $w_2, w_3$
  $(k=4)$              7        $w_1$
                       9        $w_0$
  $\widetilde{T_1}$    8        $w_4, w_5$
  $(k=8)$              9        $w_2, w_3$
                       10       $w_1$
                       12       $w_0$
  $\widetilde{T_2}$    8        $w_6, w_7$
  $(k=12)$             10       $w_1$
                       11       $w_4, w_5$
                       12       $w_0, w_2, w_3$
  $\widetilde{T_i}$    8        $w_{2i+2}, w_{2i+3}$
  $i\geq 3$            10       $w_1$
  $(k=4i+4)$           11       $w_{2i}, w_{2i+1}$
                       12       $w_0, w_2, w_3$
                       14       $w_4, \dots, w_{2i-1}$

  : Degrees of the vertices in $\widetilde{T_i}$ when $k\equiv 0 \mod 4$.[]{data-label="tab:Ti_tilde"}

Table \[tab:Ti\_tilde\] gives the degrees of each of the $w_j$ vertices in $\widetilde{T_i}$ when $\delta=0$. To verify the degrees of the $w_j$ in $\widetilde{T_i}$ when $i\geq 3$, we consider how the degrees of the vertices change as $i$ increases. Between $\widetilde{T}_{i-1}$ and $\widetilde{T_i}$ (with $\delta=0$ for both), the only vertices that change degree are $w_{2i-2}, w_{2i-1}, w_{2i}, w_{2i+1}$, each of which increases in degree by 3. This is because they each get one new edge from the $T_i$ flag bistellar 0-move and two new edges from the square removal triangulation process (since each vertex is the smallest indexed and hence the “top” vertex of one new triangular face). Further, the new vertices $w_{2i+2}, w_{2i+3}$ in $\widetilde{T_i}$ have degree 8, and they increase degree by 3 in the next two iterations, resulting in degree 14 in $\widetilde{T}_{i+2}$ and all future iterations. The above argument shows that regardless of $m$ and $k$, $\operatorname{maxdeg}(\widetilde{T_i})\leq 14$, where $i=\lfloor \frac{k-1}{4} \rfloor$. Furthermore, the only vertices that could have degree 14 are $w_4,\dots, w_{2i-1}$, each of which is separated from the others by a $w'_{m,n}$ vertex, which only has degree 6. We want to know exactly which vertices in $\widetilde{T_i}$ have degree 14, for all possible $k$ with $i\geq 3$, because we plan to alter these vertices to decrease $\operatorname{maxdeg}(\widetilde{T_i})$.
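The degree table can be cross-checked against the handshake lemma. The edge count of $\widetilde{T_i}$ used below is derived by us from the construction (each of the $6i+6$ edges of $T_i$ is subdivided in two, and each of the $k$ faces contributes the 4 edges of its square plus the 10 listed connecting edges); the function names are our own:

```python
# Consistency check (not part of the proof): handshake lemma for
# \tilde{T_i} when delta = 0, i.e. k = 4i+4 and every face of T_i
# has a square hole.
def tilde_edges(i):
    # subdivided T_i edges + (4 square edges + 10 connecting edges) per face
    return 2 * (6 * i + 6) + 14 * (4 * i + 4)

def tilde_degree_sum(i):
    k = 4 * i + 4
    w_sum = 2 * 8 + 10 + 2 * 11 + 3 * 12 + (2 * i - 4) * 14  # table row i >= 3
    w_prime_sum = 6 * (6 * i + 6)  # one degree-6 w'_{m,n} per edge of T_i
    u_sum = 18 * k                 # each square: two degree-4, two degree-5 u's
    return w_sum + w_prime_sum + u_sum

for i in range(3, 60):
    assert tilde_degree_sum(i) == 2 * tilde_edges(i), i
print("degree table is consistent with the edge count")
```

Note that $2\cdot(6i+6)+14k = 17k$ when $k=4i+4$, agreeing with the count $|E(Y_2)|=17k+6\delta$ obtained later.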
Note that as $\delta$ increases from 0 to 3, the degree of each $w_j$ vertex is nonincreasing. When $k=4i+4$ and $\delta=0$, the above table gives that $w_4,\dots, w_{2i-1}$ have degree 14. When $k=4i+3$ and $\delta=1$, the face $[w_{2i+1},w_{2i+2},w_{2i+3}]$ is subdivided instead of having a square removed, but this does not change the degrees of $w_4,\dots, w_{2i-1}$, so these all still have degree 14. When $k=4i+2$ and $\delta=2$, the faces $[w_{2i+1},w_{2i+2},w_{2i+3}]$ and $[w_{2i-1},w_{2i+2},w_{2i+3}]$ are subdivided. Therefore, $w_{2i-1}$ has two fewer edges than in the previous case since $w_{2i-1}$ is the smallest indexed vertex in $[w_{2i-1}, w_{2i+2},w_{2i+3}]$ and so would have two “top” $u_\ell$ adjacent to it if this face had a square removed from it. So, in this case, $w_4,\dots, w_{2i-2}$ have degree 14 and $w_0,w_2, w_3, w_{2i-1}$ have degree 12 in $\widetilde{T_i}$. Finally, if $k=4i+1$ and $\delta=3$, then additionally the face $[w_{2i},w_{2i+1},w_{2i+3}]$ is subdivided, which means that the degree 12 and 14 vertices are the same as in the previous cases. ### Replacing degree 14 vertices to construct $Y_2$ Having identified the vertices of $\widetilde{T_i}$ of the highest degree, we now describe a process by which we will replace each vertex of degree 14 by two vertices of degree 9 in order to ensure that $\operatorname{maxdeg}(\widetilde{T_i})\leq 12$ for all $k$ and $i$. The resulting flag complex, given by taking the clique complex of this construction, will be the final $Y_2$, and it will be homeomorphic to $\widetilde{T_i}$. The process is summarized by Figure \[fig:Replacing\_vertex\] and described in detail in the following paragraphs. Suppose $w_j$ is a vertex of degree 14 in $\widetilde{T_i}$. Locally, on a small neighborhood of $w_j$, $\widetilde{T_i}$ is homeomorphic to a $2$-manifold. Since $\deg(w_j)=14$, $w_j$ is surrounded by six triangular faces coming from $T_i$, all of which have had a square removed. 
By our construction, two of these squares (which are in adjacent triangular faces) have both of their “top” $u_\ell$ vertices connected to $w_j$, but the other four squares just have a single edge connecting one of their “bottom” $u_\ell$ vertices to $w_j$. So, $w_j$ has six $w'_{m,n}$ neighbors and eight $u_\ell$ neighbors, which form a 14-sided polygon with $w_j$ as its “star” point. Choose two $w'_{m,n}$ vertices which are across from each other in this 14-sided polygon, say $w'_{a,b}$ and $w'_{c,d}$. Next, we will remove $w_j$ and all of the 14 faces that it is contained in. Then, we add vertices $w_{j_1}$ and $w_{j_2}$ in place of $w_j$ and add edges in such a way that $\deg(w_{j_1})=\deg(w_{j_2})=9$, there are edges $[w_{j_1},w_{j_2}], [w_{j_1},w'_{a,b}], [w_{j_1},w'_{c,d}], [w_{j_2},w'_{a,b}],$ and $[w_{j_2},w'_{c,d}]$, and the 14-sided polygon is triangulated with 16 triangles. This process only changes the degree of $w'_{a,b}$ and $w'_{c,d}$, each of which now have degree 7. Therefore, the maximum degree of $w_{j_1}, w_{j_2}$, and the 14 vertices in the polygon is 9 (since $\deg(u_\ell)\in \{4,5\}$ and $\deg(w'_{m,n})=6$). To illustrate this construction, we consider the case when $k=20$. Then $i=4$, $\delta=0$, and $\deg(w_7)=14$ in ${\widetilde}{T_4}$. Figure \[fig:Replacing\_vertex\] depicts this process when $w'_{a,b}=w'_{3,7}$ and $w'_{c,d}=w'_{7,11}$. 
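The bookkeeping of this local move can be verified with Euler's formula; the following sketch is ours and assumes only the data above (a 14-gon link, two new interior vertices of degree 9 joined by an edge) together with the standard fact that a triangulated $n$-gon with $v$ interior vertices has $n+2v-2$ triangles:

```python
# Bookkeeping check (not in the source) for the degree-14 replacement.
# Before: star of w_j = 1 interior vertex, 14 spokes, 14 triangles.
# After: w_{j_1}, w_{j_2} of degree 9 each, joined by an edge.
n = 14                          # boundary polygon of the link
V_before, E_before, F_before = 1, n + n, n          # spokes + polygon edges
V_after = 2
E_after = n + (9 + 9 - 1)       # polygon edges + edges at w_{j_1}, w_{j_2};
                                # the edge [w_{j_1}, w_{j_2}] counted once
F_after = n + 2 * V_after - 2   # triangles in an n-gon with 2 interior vertices

assert F_after == 16
assert (V_after - V_before, E_after - E_before, F_after - F_before) == (1, 3, 2)
# Euler's formula on the disk (chi = 1) holds before and after:
assert (V_before + n) - E_before + F_before == 1
assert (V_after + n) - E_after + F_after == 1
print("replacement adds 1 vertex, 3 edges, 2 faces")
```
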
[Figure \[fig:Replacing\_vertex\]: left, a neighborhood of the degree-14 vertex $w_7$ in $\widetilde{T_4}$, whose link is a 14-sided polygon on the six vertices $w'_{3,4}, w'_{3,6}, w'_{6,10}, w'_{10,11}, w'_{8,11}, w'_{4,8}$ and eight $u_\ell$ vertices; right, the same region after $w_7$ is replaced by the two degree-9 vertices $w_{7_1}$ and $w_{7_2}$.]

After repeating the above process for each degree 14 vertex in $\widetilde{T_i}$, we take the clique complex and call the resulting flag complex $Y_2$. Observe that this process increases the number of vertices by 1, the number of edges by 3, and the number of faces by 2 each time a degree 14 vertex in $\widetilde{T_i}$ is replaced. Also, note that $\operatorname{maxdeg}(Y_2)\leq 12$ for all $m$. Now, we give the $w_j$, $w'_{m,n},$ and $u_\ell$ vertices their natural orderings and say that $w'_{m,n} > w_j$ and $w'_{m,n}>u_\ell$ for all $\ell, m,n,$ and $j$, and then let these vertex orderings induce orientations on the edges and faces of $Y_2$ (as shown in Figure \[fig:triangulations\]). Counting the vertices, edges, and faces of $Y_2$, we have that if $0\leq k\leq 12$, then there were no degree 14 vertices to remove, so $|V(Y_2)|=6k+2\delta +2$, $|E(Y_2)|=17k+6\delta$, and $|F(Y_2)|=10k+4\delta$. If $k\geq 13$, then $i\geq 3$ and at least one degree 14 vertex was removed to construct $Y_2$ from $\widetilde{T_i}$.
Table \[tab:Y2counts\] gives the number of vertices, edges, and faces of $Y_2$ for all values of $k\geq 13$.

  $k$      $\delta$   $|V(Y_2)|$                    $|E(Y_2)|$                     $|F(Y_2)|$
  -------- ---------- ----------------------------- ------------------------------ ------------
  $4i+4$   0          $\frac{13}{2}k-4$             $\frac{37}{2}k-18$             $11k-12$
  $4i+3$   1          $\frac{13}{2}k-\frac{3}{2}$   $\frac{37}{2}k-\frac{21}{2}$   $11k-7$
  $4i+2$   2          $\frac{13}{2}k$               $\frac{37}{2}k-6$              $11k-4$
  $4i+1$   3          $\frac{13}{2}k+\frac{5}{2}$   $\frac{37}{2}k+\frac{3}{2}$    $11k+1$

  : Number of vertices, edges, and faces in $Y_2$ when $k\geq 13$.[]{data-label="tab:Y2counts"}

### Homology of $Y_2$

Since $Y_2$ is an oriented flag triangulation of $S^2$ with $k$ square holes, which are pairwise vertex disjoint and nonadjacent, our $Y_2$ is homeomorphic to Newman’s $Y_2$ in the $d=2$ case of [@newman Lemma 5.7], and we can apply the same argument to compute the homology of $Y_2$. We denote the 1-cycles that are the boundaries of the $k$ square holes by $\tau_1,\dots, \tau_k$. Explicitly, for $j=1, \dots, k$, we define $$\tau_{j} := [u_{4j-4}, u_{4j-3}] + [u_{4j-3}, u_{4j-2}] + [u_{4j-2}, u_{4j-1}] - [u_{4j-4}, u_{4j-1}].$$ Then, by our construction, each $\tau_j$ is a positively-oriented 1-cycle in $H_1(Y_2)$, and exactly as in [@newman Proof of Lemma 5.7], we have that $$H_1(Y_2) = \langle \tau_1, \ldots, \tau_k \vert \tau_1 + \cdots + \tau_k = 0 \rangle.$$

Construction of $X$ and proof of Theorem \[thm:Xm\]
---------------------------------------------------

Now we attach $Y_1$ and $Y_2$ together to form the two-dimensional flag complex $X$ such that the torsion subgroup of $H_1(X)$ is isomorphic to $\mathbb{Z}/m\mathbb{Z}$. This part essentially follows [@newman §3], though we must confirm that the resulting complex is flag and satisfies the desired bound on vertex degree. For a given $m$, let $Y_1$ and $Y_2$ be the complexes constructed in the previous subsections.
Let $S$ denote the subcomplex of $Y_2$ induced by the $4k$ vertices $u_0,\dots, u_{4k-1}$. Since the square holes in $Y_2$ are vertex-disjoint and have no edges between any two of them, $S$ is a disjoint union of $k$ square boundaries. Let $f: S \rightarrow Y_1$ be the simplicial map defined, for $j=1, \ldots, k$, by $$\begin{aligned} & u_{4j-4} \mapsto v_{4n_j}, & & u_{4j-3} \mapsto v_{4n_j + 1}, & u_{4j-2} \mapsto v_{4n_j + 2}, && u_{4j-1} \mapsto v_{4n_j + 3}.\end{aligned}$$ Following [@newman §3], let $X = Y_1 \sqcup_f Y_2$ and observe that this is a simplicial complex by the same argument as Newman gives. In addition, $X$ is a flag complex because $Y_1$ and $Y_2$ are flag, and we subdivided the edges of $Y_1$ and $Y_2$ to avoid the possibility that $X$ might contain a 3-cycle which doesn’t have a face. Furthermore, in $X$ the squares $\tau_j$ and $\gamma_{n_j}$ are identified by $f$ for $j=1, \ldots, k$, and, as in [@newman], $$H_1(X) \cong \mathbb{Z}^{k-1} \oplus \mathbb{Z}/m \mathbb{Z},$$ where ${\mathbb{Z}}/m{\mathbb{Z}}$ has the repeated squares representation given by $${\langle}\gamma_0,\gamma_1,\dots,\gamma_{n_k}\; |\; 2\gamma_0=\gamma_1, 2\gamma_1=\gamma_2,\dots, 2\gamma_{n_k-1}=\gamma_{n_k}, \gamma_{n_1}+\cdots+\gamma_{n_k}=0 {\rangle}.$$ Finally, using our counts for the number of vertices, edges, and faces of $Y_1$ and $Y_2$ and with $\delta$ defined as above, we have $$|V(X)|=2k+12n_k+6+2\delta, \; |E(X)|= 13k+40n_k+4+6\delta, \text{ and }|F(X)|=10k+28n_k+4\delta.$$ If $k\geq 13$, then Table \[tab:Xcounts\] gives the number of vertices, edges, and faces in $X$ (where $i=\lfloor \frac{k-1}{4} \rfloor$). 
  $k$      $\delta$   $|V(X)|$                            $|E(X)|$                             $|F(X)|$
  -------- ---------- ----------------------------------- ------------------------------------ ----------------
  $4i+4$   0          $\frac{5}{2}k+12n_k$                $\frac{29}{2}k+40n_k-14$             $11k+28n_k-12$
  $4i+3$   1          $\frac{5}{2}k+12n_k+\frac{5}{2}$    $\frac{29}{2}k+40n_k-\frac{13}{2}$   $11k+28n_k-7$
  $4i+2$   2          $\frac{5}{2}k+12n_k+4$              $\frac{29}{2}k+40n_k-2$              $11k+28n_k-4$
  $4i+1$   3          $\frac{5}{2}k+12n_k+\frac{13}{2}$   $\frac{29}{2}k+40n_k+\frac{11}{2}$   $11k+28n_k+1$

  : Number of vertices, edges, and faces in $X$ when $k\geq 13$.[]{data-label="tab:Xcounts"}

Additionally, recall that $\operatorname{maxdeg}(Y_1)\leq9$ and $\operatorname{maxdeg}(Y_2)\leq 12$. Since in $X$ we are only identifying the squares of $Y_2$ with $k$ of the squares of $Y_1$, to find the maximum degree of any vertex of $X$, we need only check the degrees of the identified vertices. In $Y_1$, we know that $\deg(v_j)\leq 9$ for each $j$, and in $Y_2$, we know that $\deg(u_\ell) \in \{4,5\}$ for each $\ell$. Let $v_j$ and $u_\ell$ be vertices that are identified in $X$. Since two of their adjacent edges in the squares are identified as well, the common image of $v_j$ and $u_\ell$ has degree $\deg(v_j)+\deg(u_\ell)-2\leq 12$ in $X$. Thus, $\operatorname{maxdeg}(X)\leq 12$. We also note the following corollary:

\[cor:flag-newman\] For every finite abelian group $G$ there is a two-dimensional flag complex $X$ such that the torsion subgroup of $H_1(X)$ is isomorphic to $G$ and $\operatorname{maxdeg}(X)\leq 12$.

Let $G = {\mathbb{Z}}/m_1{\mathbb{Z}}\oplus {\mathbb{Z}}/m_2{\mathbb{Z}}\oplus \cdots \oplus {\mathbb{Z}}/m_r {\mathbb{Z}}$ with $m_1|m_2|\cdots|m_r$ be an arbitrary finite abelian group. By Theorem \[thm:Xm\], there exist two-dimensional flag complexes $X_{m_i}$ such that the torsion subgroup of $H_1(X_{m_i})$ is isomorphic to ${\mathbb{Z}}/m_i{\mathbb{Z}}$ and $\operatorname{maxdeg}(X_{m_i})\leq 12$. If $X$ is the disjoint union of all the $X_{m_i}$, then $X$ has the desired properties.
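A useful cross-check on all of these counts: $X$ is $S^2$ with $k$ open disks removed glued to $Y_1$ along $k$ circles, and (from the counts of $Y_1$ above) $\chi(Y_1)=0$, so we expect $\chi(X)=V-E+F=2-k$ in every case. The following script is ours, not part of the proof; it verifies this for both the closed formulas and every row of Table \[tab:Xcounts\]:

```python
from fractions import Fraction as Fr

# Check (not in the source) that every stated count gives V - E + F = 2 - k.
def chi_closed(k, n_k, delta):
    # closed formulas, valid for the small-k case 0 <= k <= 12
    V = 2 * k + 12 * n_k + 6 + 2 * delta
    E = 13 * k + 40 * n_k + 4 + 6 * delta
    F = 10 * k + 28 * n_k + 4 * delta
    return V - E + F

def chi_table(k, n_k, delta):
    # Table rows for k >= 13, keyed by delta; exact rational arithmetic
    rows = {
        0: (Fr(5, 2) * k + 12 * n_k,
            Fr(29, 2) * k + 40 * n_k - 14, 11 * k + 28 * n_k - 12),
        1: (Fr(5, 2) * k + 12 * n_k + Fr(5, 2),
            Fr(29, 2) * k + 40 * n_k - Fr(13, 2), 11 * k + 28 * n_k - 7),
        2: (Fr(5, 2) * k + 12 * n_k + 4,
            Fr(29, 2) * k + 40 * n_k - 2, 11 * k + 28 * n_k - 4),
        3: (Fr(5, 2) * k + 12 * n_k + Fr(13, 2),
            Fr(29, 2) * k + 40 * n_k + Fr(11, 2), 11 * k + 28 * n_k + 1),
    }
    V, E, F = rows[delta]
    return V - E + F

for n_k in range(1, 10):
    for k in range(1, 13):
        assert chi_closed(k, n_k, (-k) % 4) == 2 - k
    for k in range(13, 60):
        assert chi_table(k, n_k, (-k) % 4) == 2 - k
print("chi(X) = 2 - k in every case")
```
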
Appearance of subcomplexes in $\Delta(n,p)$ {#sec:subgraphs}
===========================================

The goal of this section is to show that, for attaching probabilities $p$ in an appropriate range, the flag complex $X_m$ from Theorem \[thm:Xm\] will appear with high probability as an induced subcomplex of $\Delta(n,p)$. See §\[sec:background\] for the relevant definitions and notation used throughout this section. Here is our main result:

\[prop:high-probability\] Let $m\geq 2$, and let $X_m$ be as in Theorem \[thm:Xm\]. If $\Delta\sim \Delta(n,p)$ is a random flag complex with $n^{-1/6}\ll p\leq 1-\epsilon$ for some $\epsilon>0$, then ${\mathbf{P}}\left[X_m \overset{ind}{\subset} \Delta(n,p)\right]\rightarrow 1$ as $n \to \infty$.

Our proof of this result will rely on Bollobás’s theorem on the appearance of subgraphs of a random graph, which we state here for reference.

\[thm:Bollobás\] Let $G'$ be a fixed graph, let $m(G')$ be the essential density of $G'$ defined in Definition \[mG\], and let $G(n,p)$ be the Erdős-Rényi random graph on $n$ vertices with attaching probability $p$. As $n \to \infty$, we have $${\mathbf{P}}\left[G'\subset G(n,p)\right]\rightarrow \begin{cases} 0 & \text{if } p\ll n^{-1/m(G')}\\ 1 & \text{if } p\gg n^{-1/m(G')} \end{cases}.$$

Since any flag complex is determined by its underlying graph, we can almost apply this to prove Proposition \[prop:high-probability\]. However, Proposition \[prop:high-probability\] (and our eventual application of it via Hochster’s formula to Theorem \[thm:m torsion\]) requires $X_m$ to appear as an induced subcomplex, whereas Bollobás’s result is for not necessarily induced subgraphs. The following proposition, which is likely known to experts, shows that so long as $p$ is bounded away from $1$, this distinction is immaterial in the limit.
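Definition \[mG\] is not reproduced here; assuming the standard notion $m(G)=\max\{e(H)/v(H) : H\subseteq G,\ v(H)\geq 1\}$, the essential density of a small graph can be computed by brute force. The following Python sketch (function names ours) scans induced subgraphs only, which suffices because taking all induced edges on a fixed vertex set maximizes the ratio:

```python
from fractions import Fraction
from itertools import combinations

# Illustration (not from the paper): brute-force the essential density
# m(G) = max over subgraphs H with v(H) >= 1 of e(H)/v(H), assuming the
# standard definition.
def essential_density(vertices, edges):
    edges = {frozenset(e) for e in edges}
    best = Fraction(0)
    for r in range(1, len(list(vertices)) + 1):
        for S in combinations(vertices, r):
            e = sum(1 for u, v in combinations(S, 2)
                    if frozenset((u, v)) in edges)
            best = max(best, Fraction(e, r))
    return best

K4 = (list(range(4)), list(combinations(range(4), 2)))   # complete graph
C4 = (list(range(4)), [(0, 1), (1, 2), (2, 3), (3, 0)])  # 4-cycle
print(essential_density(*K4))  # 3/2, giving the threshold n^(-2/3)
print(essential_density(*C4))  # 1, giving the threshold n^(-1)
```

The underlying graph $H_m$ of $X_m$ is far too large for this exponential scan, which is why the argument below uses the bound $m(H_m)\leq \frac{1}{2}\operatorname{maxdeg}(H_m)\leq 6$ instead.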
\[prop:induced-bollobas\] Let $G'$ be a fixed graph, let $m(G')$ be the essential density of $G'$ defined in Definition \[mG\], and let $G(n,p)$ be the Erdős-Rényi random graph on $n$ vertices with attaching probability $p$. Suppose $p = p(n)\leq 1-\epsilon$ for some constant $\epsilon>0$. Then as $n \to \infty$, we have $${\mathbf{P}}\left[G'\overset{ind}{\subset} G(n,p)\right]\rightarrow \begin{cases} 0 & \text{if } p\ll n^{-1/m(G')}\\ 1 & \text{if } p\gg n^{-1/m(G')} \end{cases}.$$

Since an induced subgraph is a subgraph, if ${\mathbf{P}}[G'\subset G(n,p)]\rightarrow 0$, then\ ${\mathbf{P}}\left[G'\overset{ind}{\subset} G(n,p)\right]\rightarrow 0$. Thus, the first half of the threshold is a direct consequence of Theorem \[thm:Bollobás\], and all that needs to be shown is the second half of the threshold. So, suppose that $p\gg n^{-1/m(G')}$. We will mirror the proof of Bollobás’s theorem from [@frieze-book Theorem 5.3] (originally due to [@ruc-vince]), which relies on the second moment method.

Let $\Lambda(G',n)$ be the set containing all of the possible ways that $G'$ can appear as an induced subgraph of $G(n,p)$. Thus, an element $H\in \Lambda(G',n)$ corresponds to a subset of the $n$ vertices and specified edges among those vertices such that the resulting graph is a copy of $G'$. We want to count the number of times $G'$ appears as an induced subgraph of $G(n,p)$. For each $H\in \Lambda(G',n)$, we let $\mathbf{1}_{H}$ be the corresponding indicator random variable, where $\mathbf{1}_H = 1$ occurs in the event that restricting $G(n,p)$ to the vertices of $H$ is precisely the copy of $G'$ indicated by $H$. Note that the random variables $\mathbf{1}_{H}$ are not independent, as two distinct elements from $\Lambda(G',n)$ might have overlapping vertex sets.
If we let $N_{G'}$ be the random variable for the number of copies of $G'$ appearing as induced subgraphs in $G(n,p)$, then we have $N_{G'} = {\displaystyle}\sum_{H \in \Lambda(G',n)} \mathbf{1}_{H}.$ Our goal is to show that ${\mathbf{P}}[N_{G'}\geq 1]\to 1$, or equivalently that ${\mathbf{P}}[N_{G'}= 0]\to 0$. Since $N_{G'}$ is non-negative, the second moment method as seen in [@alon-spencer-book Theorem 4.3.1] states that ${\mathbf{P}}[N_{G'}= 0]\leq \frac{\operatorname{Var}(N_{G'})}{{\mathbf{E}}[N_{G'}]^2}$, so it suffices to show that $\frac{\operatorname{Var}(N_{G'})}{{\mathbf{E}}[N_{G'}]^2}\rightarrow 0$. To start, we will bound the expected value. To simplify notation throughout the following computation, we let $v=|V(G')|$ and $e=|E(G')|$ denote the number of vertices and edges of $G'$. $$\begin{aligned} {\mathbf{E}}[N_{G'}]&= \sum_{H\in \Lambda(G',n)} {\mathbf{E}}[\mathbf{1}_H]\\ &= \sum_{H\in \Lambda(G',n)} p^{e}(1-p)^{\binom{v}{2}-e}\\ &= \Omega(n^{v})\cdot p^{e}(1-p)^{\binom{v}{2}-e}.\end{aligned}$$ Now let us repeat this with the variance instead. 
$$\begin{aligned} \operatorname{Var}(N_{G'}) &= \sum_{H,H'\in \Lambda(G',n)} {\mathbf{E}}[\mathbf{1}_{H}\mathbf{1}_{H'}] - {\mathbf{E}}[\mathbf{1}_{H}]{\mathbf{E}}[\mathbf{1}_{H'}]\\ &= \sum_{H,H'\in \Lambda(G',n)} {\mathbf{P}}[\mathbf{1}_{H}=1\text{ and }\mathbf{1}_{H'}=1] - {\mathbf{P}}[\mathbf{1}_{H}=1]{\mathbf{P}}[\mathbf{1}_{H'}=1]\\ &= \sum_{H,H'\in \Lambda(G',n)} {\mathbf{P}}[\mathbf{1}_{H}=1]\left({\mathbf{P}}[\mathbf{1}_{H'}=1 \mid \mathbf{1}_{H}=1]- {\mathbf{P}}[\mathbf{1}_{H'}=1]\right)\\ &= p^{e}(1-p)^{\binom{v}{2}-e} \sum_{H,H'\in \Lambda(G',n)}{\mathbf{P}}[\mathbf{1}_{H'}=1 \mid \mathbf{1}_{H}=1]- {\mathbf{P}}[\mathbf{1}_{H'}=1]\\ \intertext{If $H$ and $H'$ don't share at least two vertices, $\mathbf{1}_H$ and $\mathbf{1}_{H'}$ are independent of each other, so we can restrict to the case where they share at least two vertices, which gives} &= p^{e}(1-p)^{\binom{v}{2}-e} \sum_{i=2}^{v}\sum_{\substack{H,H'\in \Lambda(G',n) \\ |V(H)\cap V(H')|=i}}{\mathbf{P}}[\mathbf{1}_{H'}=1 \mid \mathbf{1}_{H}=1]- {\mathbf{P}}[\mathbf{1}_{H'}=1].\end{aligned}$$ We now come to the key observation, which is also at the heart of the proof in [@frieze-book Theorem 5.3]: ${\mathbf{P}}[\mathbf{1}_{H'}=1 \mid \mathbf{1}_{H}=1]$ is maximized if the edges and non-edges in $H$ are exactly those that are required by $H'$. Thus, by applying the fact that any subgraph of $G'$ with $i$ vertices has at most $i\cdot m(G')$ edges and at most $\binom{i}{2}$ non-edges, we get the following bound for $H,H'\in \Lambda(G',n)$ sharing $i$ vertices: $${\mathbf{P}}[\mathbf{1}_{H'}=1 \mid \mathbf{1}_{H}=1]\leq {\mathbf{P}}[\mathbf{1}_{H'}=1]\cdot p^{-i\cdot m(G')}(1-p)^{-\binom{i}{2}}.$$ From here, it is a standard computation.
Substituting this back into the previous equation and simplifying, we get $$\begin{aligned} \operatorname{Var}(N_{G'}) &\leq p^{e}(1-p)^{\binom{v}{2}-e} \sum_{i=2}^{v}\sum_{\substack{H,H'\in \Lambda(G',n) \\ |V(H)\cap V(H')|=i}}{\mathbf{P}}[\mathbf{1}_{H'}=1]\left(p^{-i\cdot m(G')}(1-p)^{-\binom{i}{2}}-1\right)\\ &\leq \left(p^{e}(1-p)^{\binom{v}{2}-e}\right)^2 \sum_{i=2}^{v} O\left(n^{2v-i}\right)\left(p^{-i\cdot m(G')}(1-p)^{-\binom{i}{2}}-1\right).\\ \intertext{And since $p$ is bounded away from $1$ and $1-p$ is bounded away from $0$, we get} &\leq \left(p^{e}(1-p)^{\binom{v}{2}-e}\right)^2 \sum_{i=2}^{v} O\left(n^{2v-i}p^{-i\cdot m(G')}\right). \end{aligned}$$ Finally, applying the second moment method gives $${\mathbf{P}}[N_{G'}\leq 0]\leq \frac{\operatorname{Var}(N_{G'})}{{\mathbf{E}}[N_{G'}]^2}=\frac{{\displaystyle}\sum_{i=2}^{v} O\left(n^{2v-i}p^{-i\cdot m(G')}\right)}{\Omega(n^{2v})} =\sum_{i=2}^{v} O\left(n^{-i}p^{-i\cdot m(G')}\right). $$ Since $p\gg n^{-1/m(G')}$, we conclude that $np^{m(G')}\rightarrow \infty$, and therefore, ${\mathbf{P}}[N_{G'}=0 ]\rightarrow 0$. It follows that ${\mathbf{P}}\left[G'\overset{ind}{\subset} G(n,p)\right]\rightarrow 1$. We now turn to the proof of Proposition \[prop:high-probability\]. Recall that $X_m$ is the complex from Theorem \[thm:Xm\], and let $H_m$ be its underlying graph. Moreover, the underlying graph of $\Delta(n,p)$ is the Erdős-Rényi random graph $G(n,p)$. Since a flag complex is uniquely determined by its 1-skeleton, it suffices to show that ${\mathbf{P}}\left[H_m\overset{ind}{\subset}G(n,p)\right]\rightarrow 1$. Since $\operatorname{maxdeg}(H_m)\leq 12$, every subgraph has average degree at most $12$. Thus, the essential density $m(H_m)$ satisfies $m(H_m)\leq 6$. Since $p\gg n^{-1/6}$, we have $p\gg n^{-1/m(H_m)}$. 
Applying Proposition \[prop:induced-bollobas\] gives ${\mathbf{P}}\left[H_m \overset{ind}{\subset} G(n,p)\right]\rightarrow 1$; thus, ${\mathbf{P}}\left[X_m\overset{ind}{\subset}\Delta(n,p)\right]\rightarrow 1$. \[rmk:sharpness\] Explicitly computing the essential density $m(H_m)$ seems difficult in general, and our chosen bound $m(H_m)\leq 6$, which is determined by the fact that $6 = \frac{1}{2}\operatorname{maxdeg}(X_m)$, is likely too coarse. It would be interesting to see a sharper result on $m(H_m)$, as this could potentially provide an heuristic for decreasing the bound on $r$ in Conjecture \[conj:dependence\]. Might it even be the case that $m(H_m)$ is half the average degree, $\frac{1}{2}{\operatorname{avg}}(H_m)$? In any case, $\frac{1}{2}{\operatorname{avg}}(H_m)$ at least provides a lower bound on $m(H_m)$. Due to the detailed nature of the constructions in §\[sec:construction\], we can estimate this value. Let $k\geq 13$ and $m\gg 0$. By Table \[tab:Xcounts\], $n_k=\lfloor \log_2(m)\rfloor$ will be much larger than $\delta$, and so the number of vertices will be approximately $\frac{5}{2}k+12n_k$ and the number of edges will be approximately $\frac{29}{2}k+40n_k$. The ratio of edges to vertices is smallest when $n_k\gg k$, in which case it is approximately $3\frac{1}{3}$. A similar computation holds for $k\leq 12$ and for $m\gg 0$. We can conclude that $m(H_m)\geq 3 \frac{1}{3}-\epsilon$, where $\epsilon$ is a positive constant that goes to $0$ as $m\to \infty$. A detailed analysis of 2-torsion {#sec:2torsion} ================================ The goal of this section is to provide a more detailed analysis of what happens in the case of 2-torsion. We use a known flag triangulation of ${\mathbb{R}}P^2$ that minimizes the number of vertices and where we can easily compute its essential density to produce induced subcomplexes of $\Delta(n,p)$ with $2$-torsion.
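Before turning to the specific triangulation, we note that the presence of 2-torsion in $H_1$ can be certified by a direct integer computation: the torsion subgroup of $H_1$ coincides with the torsion of the cokernel of the simplicial boundary map $\partial_2$, which can be read off from a Smith-type diagonalization of its matrix. The following sketch (not part of the original argument) carries this out for the triangulation used later in this section; the triangle list is our transcription of the labels in Figure \[fig:flagRP2\], so the labeling should be treated as an assumption.

```python
from math import prod

# Triangles of an 11-vertex flag triangulation of RP^2; the vertex labeling
# is our transcription of Figure [fig:flagRP2] and is an assumption.
triangles = [
    (1, 2, 6), (1, 2, 9), (1, 4, 5), (1, 4, 8), (1, 5, 6),
    (1, 8, 9), (2, 3, 7), (2, 3, 9), (2, 6, 7), (3, 4, 7),
    (3, 4, 10), (3, 9, 10), (4, 5, 10), (4, 7, 8), (5, 6, 11),
    (5, 10, 11), (6, 7, 11), (7, 8, 11), (8, 9, 11), (9, 10, 11),
]

# Boundary matrix d2 (rows = edges, columns = triangles), with
# boundary of [a<b<c] = (b,c) - (a,c) + (a,b).
edges = sorted({(t[i], t[j]) for t in triangles for i, j in ((0, 1), (0, 2), (1, 2))})
row = {e: k for k, e in enumerate(edges)}
d2 = [[0] * len(triangles) for _ in edges]
for col, (a, b, c) in enumerate(triangles):
    d2[row[(b, c)]][col] += 1
    d2[row[(a, c)]][col] -= 1
    d2[row[(a, b)]][col] += 1

def diagonalize(A):
    """Diagonalize an integer matrix by unimodular row/column operations
    (Smith-type reduction); returns the nonzero diagonal entries."""
    m, n = len(A), len(A[0])
    r = 0
    while r < min(m, n):
        piv = None  # nonzero entry of minimal absolute value in A[r:, r:]
        for i in range(r, m):
            for j in range(r, n):
                if A[i][j] and (piv is None or abs(A[i][j]) < abs(A[piv[0]][piv[1]])):
                    piv = (i, j)
        if piv is None:
            break
        i0, j0 = piv
        A[r], A[i0] = A[i0], A[r]
        for rw in A:
            rw[r], rw[j0] = rw[j0], rw[r]
        p = A[r][r]
        for i in range(r + 1, m):           # reduce the pivot column
            q = A[i][r] // p
            if q:
                for j in range(r, n):
                    A[i][j] -= q * A[r][j]
        for j in range(r + 1, n):           # reduce the pivot row
            q = A[r][j] // p
            if q:
                for i in range(r, m):
                    A[i][j] -= q * A[i][r]
        if all(A[i][r] == 0 for i in range(r + 1, m)) and \
           all(A[r][j] == 0 for j in range(r + 1, n)):
            r += 1                          # otherwise re-pick a smaller pivot
    return [A[k][k] for k in range(min(m, n)) if A[k][k]]

diag = diagonalize(d2)
# Any diagonal entry of absolute value > 1 certifies torsion in H_1.
torsion = sorted(abs(d) for d in diag if abs(d) > 1)
```

Under this transcription the reduction returns twenty nonzero diagonal entries with product $\pm 2$, i.e., $\operatorname{rank}\partial_2 = 20$ (so $H_2 = 0$) and the torsion subgroup of $H_1$ is ${\mathbb{Z}}/2$, as expected for ${\mathbb{R}}P^2$.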
In [@bibby2019minimal], the authors find two (nonisomorphic) minimal flag triangulations of ${\mathbb{R}}P^2$, each of which has 11 vertices and 30 edges, and which differ by a single bistellar 0-move. One of these flag triangulations is depicted in Figure \[fig:flagRP2\]. [Figure \[fig:flagRP2\]: an 11-vertex flag triangulation of ${\mathbb{R}}P^2$, drawn as a disc with antipodally identified boundary; the boundary vertices $v_2,v_3,v_5,v_6,v_{10}$ each appear twice. Its 20 triangles are $\{v_1,v_2,v_6\}$, $\{v_1,v_2,v_9\}$, $\{v_1,v_4,v_5\}$, $\{v_1,v_4,v_8\}$, $\{v_1,v_5,v_6\}$, $\{v_1,v_8,v_9\}$, $\{v_2,v_3,v_7\}$, $\{v_2,v_3,v_9\}$, $\{v_2,v_6,v_7\}$, $\{v_3,v_4,v_7\}$, $\{v_3,v_4,v_{10}\}$, $\{v_3,v_9,v_{10}\}$, $\{v_4,v_5,v_{10}\}$, $\{v_4,v_7,v_8\}$, $\{v_5,v_6,v_{11}\}$, $\{v_5,v_{10},v_{11}\}$, $\{v_6,v_7,v_{11}\}$, $\{v_7,v_8,v_{11}\}$, $\{v_8,v_9,v_{11}\}$, $\{v_9,v_{10},v_{11}\}$.] For the remainder of this section, let $G$ denote the
underlying graph of this flag triangulation of ${\mathbb{R}}P^2$, which we denote by $\Delta(G)$ as it is the clique complex of $G$. To understand the probability that this particular triangulation of ${\mathbb{R}}P^2$ appears as an induced subcomplex of $\Delta(n,p)$, we need to compute the essential density $m(G)$. \[lem:bollobas of G\] For the graph $G$ underlying the flag triangulation of ${\mathbb{R}}P^2$ exhibited in Figure \[fig:flagRP2\], the essential density $m(G)$ is $30/11$. This amounts to an exhaustive computation, which is summarized in Table \[tab:RP2 counts\]. In particular, Table \[tab:RP2 counts\] identifies the maximal number of edges that a subgraph $H\subset G$ on $|V(H)|$ vertices can have, for each $|V(H)|\leq 11$. One can see from the table that $m(G)$ is maximized by the entire graph, and thus $m(G) = |E(G)|/|V(G)| = 30/11$. Lemma \[lem:bollobas of G\] shows that the graph $G$ is strongly balanced in the sense of Definition \[mG\]. While we expect the essential density of our complexes $X_m$ to be lower than the coarse bound of $\frac{1}{2}\operatorname{maxdeg}(X_m)$ (see Remark \[rmk:sharpness\]), we note that in the case of the graph $G$, this difference is not very large. In fact, we have $\frac{1}{2}\operatorname{maxdeg}(G)=3$ and $m(G)=30/11\approx 2.72$. 
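The exhaustive computation behind Lemma \[lem:bollobas of G\] is small enough to replicate by brute force over all $2^{11}-1$ vertex subsets. The sketch below (not part of the original computation) uses an edge list transcribed from Figure \[fig:flagRP2\]; the vertex labeling is our reading of that figure and should be treated as an assumption.

```python
from fractions import Fraction
from itertools import combinations

# Edge list of the graph G underlying the flag triangulation of RP^2 in
# Figure [fig:flagRP2] (vertex labels are our transcription of the figure).
edges = [
    (1, 2), (1, 4), (1, 5), (1, 6), (1, 8), (1, 9),
    (2, 3), (2, 6), (2, 7), (2, 9),
    (3, 4), (3, 7), (3, 9), (3, 10),
    (4, 5), (4, 7), (4, 8), (4, 10),
    (5, 6), (5, 10), (5, 11),
    (6, 7), (6, 11),
    (7, 8), (7, 11),
    (8, 9), (8, 11),
    (9, 10), (9, 11),
    (10, 11),
]

def essential_density(vertices, edges):
    """m(G) = max over subgraphs H of |E(H)|/|V(H)|.  The maximum over all
    subgraphs is attained on induced subgraphs, so it suffices to scan
    vertex subsets and count the edges they induce."""
    best = Fraction(0)
    for k in range(1, len(vertices) + 1):
        for subset in combinations(vertices, k):
            s = set(subset)
            e = sum(1 for a, b in edges if a in s and b in s)
            best = max(best, Fraction(e, k))
    return best

m_G = essential_density(range(1, 12), edges)
```

With this edge list the maximum density is attained by the entire graph, recovering $m(G)=30/11$.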
  $|V(H)|$   $\max\{|E(H)|\}$   $V(H)$                                             $\max\left\{\frac{|E(H)|}{|V(H)|}\right\}$
  ---------- ------------------ -------------------------------------------------- --------------------------------------------
  1          0                  $\{v_1\}$                                          0
  2          1                  $\{v_1, v_2\}$                                     $\frac12$
  3          3                  $\{v_1, v_2, v_6\}$                                1
  4          5                  $\{v_1,v_2,v_5,v_6\}$                              $\frac54$
  5          7                  $\{v_1,v_2,v_4,v_5,v_6\}$                          $\frac75$
  6          10                 $\{v_1,v_4,v_7,v_8,v_9,v_{11}\}$                   $\frac53$
  7          13                 $\{v_1,v_2,v_4,v_7,v_8,v_9,v_{11}\}$               $\frac{13}{7}$
  8          17                 $\{v_1,v_2,v_4,v_6,v_7,v_8,v_9,v_{11}\}$           $\frac{17}{8}$
  9          21                 $\{v_1,v_2,v_3,v_4,v_6,v_7,v_8,v_9,v_{11}\}$       $\frac{7}{3}$
  10         25                 $\{v_1,v_2,v_3,v_4,v_5,v_6,v_7,v_8,v_9,v_{11}\}$   $\frac{5}{2}$
  11         30                 $\{v_1,\dots,v_{11}\}$                             $\frac{30}{11}$

  : With $G$ as the underlying graph of the complex in Figure \[fig:flagRP2\], this table computes the maximal number of edges of subgraphs $H\subset G$ with varying number of vertices.[]{data-label="tab:RP2 counts"}

Combining Lemma \[lem:bollobas of G\] and Theorem \[thm:Bollobás\], we obtain an analogue of Proposition \[prop:high-probability\]. \[prop:high-prob 2tor\] If $\Delta\sim \Delta(n,p)$ is a random flag complex with $n^{-11/30}\ll p\leq 1-\epsilon$ for some $\epsilon>0$, then ${\mathbf{P}}\left[\Delta(G) \overset{ind}{\subset} \Delta(n,p)\right]\rightarrow 1$ as $n \to \infty$. The proof is nearly identical to that of Proposition \[prop:high-probability\], so we omit the details. \[q:2torsion threshold\] It would be interesting to know whether $n^{-11/30}$ is a sharp threshold for the appearance of 2-torsion in the homology of $\Delta(n,p)$. A closely related question is whether there exists a flag complex $X$ with 2-torsion homology and a smaller essential density. Torsion in the Betti tables associated to $\Delta$ {#sec:Betti numbers} ================================================== We now prove Theorem \[thm:m torsion\]. The hard work was done in the previous sections. Assume $n^{-1/6}\ll p \leq 1-\epsilon$ and let $\Delta\sim \Delta(n,p)$.
Let $X_m$ be as in Theorem \[thm:Xm\]. By Proposition \[prop:high-probability\], $\Delta$ contains $X_m$ as an induced subcomplex, with high probability. Since $H_1(X_m)$ has $m$-torsion, Hochster’s Formula (see Fact \[fact:depend\]) gives that the Betti table of the Stanley–Reisner ideal of $\Delta$ has $\ell$-torsion for every $\ell$ dividing $m$. We can also apply the more detailed study of $2$-torsion from §\[sec:2torsion\] to obtain a result on the appearance of $2$-torsion in the Betti tables of random flag complexes. \[prop:2tors\] Let $r\geq 4$, and let $\Delta\sim \Delta(n,p)$ be a random flag complex with $n^{-1/(r-1)}\ll p \ll n^{-1/r}$. With high probability as $n\to \infty$, the Betti table of the Stanley–Reisner ideal of $\Delta$ has $2$-torsion. The proof is the same as the proof of Theorem \[thm:m torsion\], but utilizing Proposition \[prop:high-prob 2tor\] in place of Proposition \[prop:high-probability\] since $r\geq 4$ and $n^{-1/(r-1)}\ll p$ gives $n^{-11/30}\ll p$. Note that the bound on $r$ for the appearance of $2$-torsion in Proposition \[prop:2tors\] is lower than in Theorem \[thm:m torsion\]. This is due to our ability to sharply compute the essential density in this case; in contrast, for Theorem \[thm:m torsion\], we work with a bound on the essential density. See Question \[q:threshold\] and Remark \[rmk:r bound\] for more on the possibility of lowering the bound on $r$ in Theorem \[thm:m torsion\]. It would be interesting to understand a precise threshold on the attaching probability $p$ such that the Betti table of the Stanley–Reisner ideal of $\Delta$ does not depend on the characteristic. A related question is posed in Question \[q:threshold\]. We also note that our constructions are based entirely on torsion in the $H_1$-groups, and thus we obtain Betti tables where the entries in the second row of the Betti table (that is the row of entries of the form $\beta_{i,i+2}$) depend on the characteristic. 
Since Newman’s work also produces small simplicial complexes where the $H_i$-groups have torsion, for any $i\geq 1$ [@newman Theorem 1], one could likely apply the methods of §\[sec:construction\] to produce thresholds for where the other rows of the Betti table would depend on the characteristic, and it might be interesting to explore the resulting thresholds. $\ell$-torsion in Veronese syzygies {#sec:veronese} =================================== Finally, we return to the question of $\ell$-torsion in Veronese syzygies. Since there is very little computational evidence either in favor or in opposition to Conjecture \[conj:dependence\], we base the conjecture upon an heuristic model. As noted in the introduction, one of the central results of [@erman-yang] is that for $\Delta \sim \Delta(n,p)$ with $n^{-1/(r-1)}\ll p \ll n^{-1/r}$ and $S=k[x_1,\dots,x_n]$, the Betti table $\beta(S/I_\Delta)$ as $n\to \infty$ will exhibit the known nonvanishing properties of the Betti table of the Veronese embeddings $\beta({\mathbb{P}}_k^r;d)$ as $d\to \infty$. Based on this connection, we use Theorem \[thm:m torsion\] as an heuristic for understanding the behavior of $\beta({\mathbb{P}}^r;d)$, in particular, when these Betti tables depend on the characteristic. For Conjecture \[conj:dependence\], we set $r\geq 7$ and use the framework of Theorem \[thm:m torsion\]. With these hypotheses, as $n\to \infty$, the Betti table associated to $\Delta$ will depend on the characteristic with high probability. We thus conjecture a corresponding statement for $\beta({\mathbb{P}}^r;d)$ with $r\geq 7$ and $d\to \infty$. While we conjecture that this dependence on characteristic should be quite widespread, the only known examples of such behavior come from [@anderson]. It would thus be very interesting to produce any new examples (or non-examples!) of torsion in Veronese syzygies. For instance: Can one find any new examples of Veronese embeddings whose Betti tables depend on the characteristic? 
For a given $\ell$, can one produce a Betti table with $\ell$-torsion? Can one find some $\beta({\mathbb{P}}^r;d)$ which has $\ell$-torsion for two (or more) distinct primes? We find it especially surprising that there are no known examples of $2$-torsion. Conjecture \[conj:bad primes\] represents one way to sharpen Conjecture \[conj:dependence\]. In particular, since Theorem \[thm:m torsion\] shows that, with $r\geq 7$ and within the given framework, $m$-torsion appears with high probability as $n\to \infty$ in the Betti table of the Stanley–Reisner ideal of $\Delta$, we conjecture that $m$-torsion should appear frequently in the Betti tables of the $d$-uple Veronese embeddings for ${\mathbb{P}}^r$ as $d\to \infty$. There are many follow-up questions one might ask, and we assemble some of these below. What is the minimal value of $r$ such that $\beta({\mathbb{P}}^r;d)$ depends on the characteristic for some $d$? (It is known that $1<r\leq 6$.) To develop an heuristic for this question, along the lines of this paper, one would need to consider the following question, which seeks to sharpen Theorem \[thm:m torsion\]. \[q:threshold\] Let $m\geq 2$. For a random flag complex $\Delta\sim \Delta(n,p)$, what is the threshold on $p$ such that the Betti table of the Stanley–Reisner ideal of $\Delta$ has $m$-torsion with high probability as $n\to \infty$? \[rmk:r bound\] We know of two natural ways that one could improve the bound on $r$ in Theorem \[thm:m torsion\]. First, one could perform a more detailed study of the essential density $m(H_m)$, as that value is surely lower than our chosen bound $\frac{1}{2}\operatorname{maxdeg}(X_m)$. Second, one could aim to produce flag complexes $X_m'$ with torsion homology (not necessarily in $H_1$) which have a lower essential density than $X_m$. 
Of course, following the heuristic at the heart of this paper, any such improvement of the bound on $r$ in Theorem \[thm:m torsion\] would suggest a corresponding improvement of the bound on $r$ in Conjectures \[conj:dependence\] and \[conj:bad primes\]. In a different direction, one might ask about how large $n$ needs to be before we expect to see that the Betti table associated to $\Delta$ has $\ell$-torsion. Fix a prime $\ell$ and integer $r\geq 7$. Let $\Delta\sim\Delta(n,p)$ be a random flag complex with $n^{-1/(r-1)}\ll p \ll n^{-1/r}$. For a constant $0<\epsilon<1$, approximately how large does $n$ need to be to guarantee that $${\mathbf{P}}\left[\text{ Betti table associated to $\Delta$ has $\ell$-torsion }\right] \geq 1 - \epsilon?$$ It would be interesting to even answer this question for $2$-torsion, where the concrete constructions from §\[sec:2torsion\] make the question seemingly more tractable. The corresponding question for Veronese embeddings would be the following: Fix a prime $\ell$ and integer $r\geq 7$. Can one provide lower/upper bounds on the minimal value of $d$ such that $\beta({\mathbb{P}}^r; d)$ has $\ell$-torsion? We could turn to even more quantitative questions related to Conjecture \[conj:bad primes\] as well. Fix a prime $\ell$ and an integer $r\geq 7$. Can one describe the set of $d\in {\mathbb{Z}}$ such that $\beta({\mathbb{P}}^r; d)$ has $\ell$-torsion? Can one bound or estimate the density of that set? Can one estimate or bound the growth rate of the number of primes $\ell$ such that $\beta({\mathbb{P}}^r; d)$ has $\ell$-torsion as $d\to \infty$? Even a compelling heuristic for these last two questions could be quite interesting. [^1]: This is equivalent to a certain integral Tor group having $\ell$-torsion: see Remark \[rmk:torsion\]. [^2]: See [@ein-lazarsfeld-asymptotic; @ein-erman-lazarsfeld-quick] for more on these nonvanishing properties.
--- abstract: | We study modes trapped in a rotating ring carrying the self-focusing (SF) or defocusing (SDF) cubic nonlinearity and double-well potential $\cos ^{2}\theta $, where $\theta $ is the angular coordinate. The model, based on the nonlinear Schrödinger (NLS) equation in the rotating reference frame, describes the light propagation in a twisted pipe waveguide, as well as in other optical settings, and also a Bose-Einstein condensate (BEC) trapped in a torus and dragged by the rotating potential. In the SF and SDF regimes, five and four trapped modes of different symmetries are found, respectively. The shapes and stability of the modes, and transitions between them are studied in the first rotational Brillouin zone. In the SF regime, two symmetry-breaking transitions are found, of subcritical and supercritical types. In the SDF regime, an antisymmetry-breaking transition occurs. Ground states are identified in both the SF and SDF systems. author: - 'Yongyao Li$^{1,2}$, Wei Pang$^{3}$, and Boris A. Malomed$^{1}$' bibliography: - 'apssamp.bib' title: 'Nonlinear modes and symmetry breaking in rotating double-well potentials' --- Introduction ============ The concept of the spontaneous symmetry breaking (SSB) in nonlinear systems was introduced in Ref. [@Chris]. Its significance has since been recognized in various physical settings, including numerous ones originating in nonlinear optics [@Snyder]-[@photo], Bose-Einstein condensates (BECs) [@Milburn]-[@Arik], and degenerate fermionic gases [@Padua]. A general analysis of the SSB phenomenology has also been developed [@misc]; it is closely related to the theory of bifurcations in nonlinear systems [@Bif]. Fundamental manifestations of the SSB occur in nonlinear systems based on symmetric double-well potentials (DWPs) or dual-core configurations.
A paradigmatic example of the latter in nonlinear optics is the twin-core nonlinear fiber, which may serve as a basis for the power-controlled optical switching [@Snyder]. DWP settings in optics were analyzed theoretically and implemented experimentally in photorefractive crystals [@photo]. In the realm of matter waves, main effects predicted in DWPs are Josephson oscillations [@Zapata], the asymmetric self-trapping of localized modes [@Warsaw], and similar effects in binary mixtures [@Mazzarella; @Ng]. Both the Josephson and self-trapping regimes were implemented in the atomic condensate with contact repulsive interactions [@Markus]. The SSB was also analyzed in one- and two-dimensional (1D and 2D) models of BEC trapped in dual-core configurations [@Arik]. Another dynamical setting which has produced a number of interesting effects in media with the intrinsic nonlinearity, especially in BEC, is provided by rotating potentials. It is well known that stirring the self-repulsive condensate typically leads to the formation of vortex lattices [@vort-latt], although it was found experimentally [@Cornell] and demonstrated theoretically [@giant] that giant vortices, rather than lattices, may also be formed under special conditions (when the centrifugal force nearly compensates the trapping harmonic-oscillator potential). On the other hand, the rotation of self-attractive condensates gives rise to several varieties of stable localized modes, such as vortices, “crescents” (mixed-vorticity states), and the so-called center-of-mass modes (quasi-solitons) [@rotating-trap]. Further development of this topic was achieved by the consideration of rotating lattice potentials, which can be implemented experimentally as an optical lattice induced in BEC by a broad laser beam transmitted through a revolving sieve [@sieve], or using *twisted* photonic-crystal fibers in optics [@twistedPC].
In these systems, quantum BEC states and vortex lattices have been studied [@in-sieve], as well as solitons and solitary vortices depinning from the lattice when its rotation velocity exceeds a critical value [@HS]. A specific implementation of the latter settings is provided by the quasi-1D lattice, or a single quasi-1D potential well, revolving about its center in the 2D geometry [@Barcelona]. In particular, the rotation makes it possible to create fundamental and vortical solitons in the self-repulsive medium, where, obviously, nonrotating quasi-1D potentials cannot maintain bright solitons [@Kon]. Furthermore, the rotation of a DWP gives rise to azimuthal Bloch bands [@Ueda; @Stringari]. As mentioned above, the static DWP and its limit form, which reduces to dual-core systems, are fundamental settings for the onset of the SSB [@Snyder]-[@Arik]. A natural problem, which is the subject of the present work, is the SSB and related phenomenology, i.e., the existence and stability of symmetric, antisymmetric, and asymmetric modes, in *rotating* DWPs (recently, a revolving DWP configuration was considered in a different context in Ref. [@WenLuo], as a stirrer generating vortex lattices). To analyze basic features of the phenomenology, we here concentrate on the one-dimensional DWP placed onto a rotating ring. As shown in Fig. \[fig\_1\], in optics this setting may be realized as a hollow pipe waveguide twisted with pitch $2\pi /\omega $, while the azimuthal modulation of the refractive index, which gives rise to the effective potential $V(\theta )$, is written into the material of the pipe.
Alternatively, a *helical* potential structure can be created in a straight sheath waveguide by means of optical-induction techniques, using pump waves with the ordinary polarization in a photorefractive material (while the probe wave is to be launched in the extraordinary polarization [@Moti]), or the method of electromagnetically induced transparency (EIT) [@Fleischhauer], including its version recently proposed for supporting spatial solitons [@Yongyao]. In the latter case, one can make the pipe out of a Y$_{2}$SiO$_{5}$ crystal doped by Pr$^{3+}$ (Pr:YSO) ions [@Kuznetsova]. In either case, using the photorefractive material or EIT, the helical structure may be induced by a superposition of a pair of co-propagating *vortical* pump waves, with equal amplitudes, a small mismatch of the propagation constants $k_{1,2}$ ($\Delta k\equiv k_{1}-k_{2}\ll k_{1}$), and opposite vorticities $\left( \pm S\right) $, which will give rise to an effective potential profile, $$V(\theta ,z)\sim r^{S}\cos \left( \Delta k\cdot z+2S\theta \right) , \label{V}$$where $z$ is the propagation distance, while $r$ and $\theta $ are the polar coordinates in the transverse plane. In terms of the BEC, a similar setting may be based on ring-shaped (toroidal) traps, which have been created in experiments [@torus] and investigated in various contexts theoretically [@Salasnich]. In that case, the rotating periodic potential can be added to the toroidal trap [@sieve], which is equivalent to the consideration of the rotating ring [@Stringari; @rotating-ring]. In this work, we study basic types of trapped modes and their SSB phenomenology in the 1D rotating ring, in both cases of the self-focusing and self-defocusing (SF and SDF) cubic nonlinearities. In Sec.
II we formulate the model and present analytical results, which predict a boundary between the symmetric and asymmetric modes, the analysis being possible for the small-amplitude potential and the rotation rate close to $\omega =1/2$. Numerical results are reported in a systematic form, and are compared to the analytical predictions, in Secs. III and IV for the SF and SDF nonlinearities, respectively. The paper is concluded by Sec. V. The model and analytical considerations ======================================= As said above, we consider the limit of a thin helical shell, which implies a fixed value of the radius in Eq. (\[V\]), $r=r_{0}$, that we normalize to be $r_{0}=1$. Taking the harmonic periodic potential in the form of Eq. (\[V\]) with $S=1$, $V(\theta ,z)=2A\cos ^{2}(\theta -\omega z)$, the corresponding scaled nonlinear Schrödinger equation is $$i{\frac{\partial }{\partial z}}\psi =\left[ -{\frac{1}{2}}{\frac{\partial ^{2}}{\partial \theta ^{2}}}+V(\theta ,z)-\sigma |\psi |^{2}\right] \psi , \label{Eq5}$$where $\sigma =+1$ and $-1$ refer to the SF and SDF nonlinearities, respectively. Then, we rewrite Eq. (\[Eq5\]) in the helical coordinate system, with $\theta ^{\prime }\equiv \theta -\omega z$: $$i{\frac{\partial }{\partial z}}\psi =\left[ -{\frac{1}{2}}{\frac{\partial ^{2}}{\partial \theta ^{\prime }{}^{2}}}+i\omega {\frac{\partial }{\partial \theta ^{\prime }}}+2A\cos ^{2}(\theta ^{\prime })-\sigma |\psi |^{2}\right] \psi , \label{Eq5p}$$where the solution domain is defined on $-\pi \leq \theta ^{\prime }\leq +\pi $. For the narrow toroidal BEC trap with the rotating potential, the respective Gross-Pitaevskii equation, written in the co-rotating reference frame (cf. Refs. [@Ueda; @Stringari]), differs by replacing the propagation distance, $z$, with time $t$ [@review].
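As a quick numerical sanity check of Eq. (\[Eq5p\]) (not part of the original analysis), one may note that, for $A=0$, uniform vortical states $\psi =a\exp \left( ik\theta ^{\prime }-i\mu z\right) $ with integer $k$ solve the equation provided $\mu =k^{2}/2-\omega k-\sigma |a|^{2}$. The sketch below verifies this dispersion relation by evaluating the right-hand side with periodic central finite differences; the parameter values are arbitrary illustrative choices.

```python
import cmath
import math

def residual(k, omega, sigma, a, n=512, dmu=0.0):
    """Max |mu*psi - RHS| for psi = a*exp(i*k*theta) on a periodic grid,
    with the potential switched off (A = 0) and the right-hand side of the
    rotating-frame NLS evaluated by central finite differences.  The exact
    dispersion relation is mu = k^2/2 - omega*k - sigma*|a|^2; dmu shifts
    mu to demonstrate that a wrong mu leaves a large residual."""
    h = 2 * math.pi / n
    psi = [a * cmath.exp(1j * k * (-math.pi + j * h)) for j in range(n)]
    mu = 0.5 * k**2 - omega * k - sigma * abs(a)**2 + dmu
    worst = 0.0
    for j in range(n):
        p, q, r = psi[j - 1], psi[j], psi[(j + 1) % n]
        rhs = (-0.5 * (r - 2 * q + p) / h**2      # -(1/2) d^2/dtheta'^2
               + 1j * omega * (r - p) / (2 * h)    # +i omega d/dtheta'
               - sigma * abs(q)**2 * q)            # -sigma |psi|^2 psi
        worst = max(worst, abs(mu * q - rhs))
    return worst
```

With the correct $\mu $ the residual is at the level of the $O(h^{2})$ discretization error, while shifting $\mu $ by a finite amount raises it to $O(|a\,\Delta \mu |)$; the $k=1$ case corresponds to the uniform vortical branch discussed below.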
Stationary modes with real propagation constant $-\mu $ (in terms of the BEC, $\mu $ is the chemical potential) are sought for as $\psi \left( \theta ^{\prime },z\right) =\exp \left( -i\mu z\right) \phi (\theta ^{\prime })$, with complex function $\phi \left( \theta ^{\prime }\right) $ obeying equation $$\mu \phi =\left[ -{\frac{1}{2}}{\frac{d^{2}}{d\theta ^{\prime }{}^{2}}}+i\omega {\frac{d}{d\theta ^{\prime }}}+2A\cos ^{2}(\theta ^{\prime })-\sigma |\phi |^{2}\right] \phi . \label{phi}$$Equation (\[Eq5p\]) conserves the total power (norm) of the field and its Hamiltonian (energy), $$P=\int_{-\pi }^{+\pi }\left\vert \psi (\theta ^{\prime })\right\vert ^{2}d\theta ^{\prime }, \label{Power}$$$$H=\int_{-\pi }^{+\pi }\left[ {\frac{1}{2}}\left\vert {\frac{\partial \psi }{\partial \theta ^{\prime }}}\right\vert ^{2}+{\frac{i}{2}}\omega \left( \psi ^{\ast }{\frac{\partial \psi }{\partial \theta ^{\prime }}}-\psi {\frac{\partial \psi ^{\ast }}{\partial \theta ^{\prime }}}\right) +V(\theta ^{\prime })|\psi |^{2}-{\frac{\sigma }{2}}|\psi |^{4}\right] d\theta ^{\prime }, \label{Ham}$$where the asterisk stands for the complex conjugate. The periodic boundary conditions, $V(\theta ^{\prime }+2\pi )=V(\theta ^{\prime })$ and $\psi (\theta ^{\prime }+2\pi )=\psi (\theta ^{\prime })$, make Eq. (\[Eq5p\]) invariant with respect to the *boost transformation*, which allows one to change the rotation speed from $\omega $ to $\omega -N$ with arbitrary integer $N$:$$\psi \left( \theta ^{\prime },z;\omega -N\right) =\psi \left( \theta ^{\prime },z;\omega \right) \exp \left[ -iN\theta ^{\prime }+i\left( \frac{1}{2}{N}^{2}-N\omega \right) z\right] , \label{boost}$$hence the speed may be restricted to the interval $0\leq \omega <1$. Furthermore, Eq. (\[Eq5p\]) admits an additional invariance, relating solutions for opposite signs of the rotation speed: $\psi (\theta ^{\prime },z;\omega )=\psi ^{\ast }(\theta ^{\prime },-z;-\omega )$.
If combined with shift $\omega \rightarrow \omega +1$, the latter transformation implies that the solutions for the rotation speeds $\omega $ and $1-\omega $ (with $0<\omega <1$) are equivalent to each other, therefore the rotation speed may be eventually restricted to the interval $$0\leq \omega \leq 1/2, \label{zone}$$which plays a role similar to that of the first Brillouin zone in solid-state media [@Kittel]. An analytical approach for small-amplitude modes can be developed if the amplitude of the potential in Eq. (\[Eq5p\]) is small too, $|A|\ll 1$, and $\omega $ is close to the right edge of zone (\[zone\]), $\delta \equiv 1/2-\omega \ll 1/2$. In the simplest approximation, the corresponding stationary solutions are looked for as a combination of the zeroth and first angular harmonics, $$\phi \left( \theta ^{\prime }\right) =a_{0}+a_{1}\exp \left( i\theta ^{\prime }\right) , \label{ansatz}$$where $a_{0}$ is fixed to be real, while amplitude $a_{1}$ may be complex. Indeed, setting $\omega =1/2$, $A=0$, and dropping the cubic term, it is obvious that ansatz (\[ansatz\]) is an exact solution to Eq. (\[phi\]). Then, under the aforementioned conditions, the substitution of ansatz (\[ansatz\]) into Eq. (\[phi\]) yields, in the first approximation (taking care of the balance of the zeroth and first harmonics in the equation), the following algebraic equations: $$\begin{aligned} &&\mu a_{0}=Aa_{0}-\sigma a_{0}^{3}-2\sigma |a_{1}|^{2}a_{0}, \notag \\ &&\mu a_{1}=\delta a_{1}-\sigma |a_{1}|^{2}a_{1}-2\sigma a_{0}^{2}a_{1}+Aa_{1}.
\label{Ea0a1}\end{aligned}$$ First, setting $a_{1}=0$ or $a_{0}=0$ in ansatz (\[ansatz\]) corresponds, respectively, to the CW (continuous-wave) mode, and the uniform vortical one, with $\left\vert \psi \left( \theta ^{\prime }\right) \right\vert =\mathrm{const}$:$$\begin{aligned} a_{1} &=&0,~a_{0}^{2}=\sigma \left( A-\mu \right) , \label{0} \\ a_{0} &=&0,~\left\vert a_{1}\right\vert ^{2}=\sigma \left( A-\mu +\delta \right) \label{1}\end{aligned}$$(recall that $\sigma ^{2}=1$). In terms of the full equation (\[Eq5p\]) with $A\neq 0$, these two solutions extend into non-uniform ones \[with $\left\vert \psi (\theta ^{\prime })\right\vert \neq \mathrm{const}$\], which remain (and are categorized as) *symmetric* with respect to point $\theta ^{\prime }=0$. The respective vortical mode, obtained as the extension of solution (\[1\]), is investigated in a numerical form in the next section, under the name of FHS (fundamental-harmonic symmetric) mode. Solutions to Eqs. (\[Ea0a1\]) with $a_{0}a_{1}\neq 0$ give rise to the species of *asymmetric* modes (categorized as FHA, i.e., fundamental-harmonic asymmetric mode, in the next section), with $$a_{0}^{2}={\frac{\sigma }{3}}(A-\mu +2\delta ),~|a_{1}|^{2}={\frac{\sigma }{3}}(A-\mu -\delta ). \label{a0a1}$$According to Eqs. (\[Power\]) and (\[ansatz\]), the propagation constant is related to the total power (\[Power\]) of mode (\[a0a1\]), $P=2\pi (a_{0}^{2}+|a_{1}|^{2})$, as follows: $$\mu =A+{\frac{\delta }{2}}-{\frac{3\sigma }{4\pi }}P. \label{E}$$In particular, in the case of the SF nonlinearity, with $\sigma =+1$, Eq. (\[E\]) demonstrates that the asymmetric mode meets the Vakhitov-Kolokolov (VK) criterion, $d\mu /dP<0$, which is a necessary stability condition for modes supported by the SF terms [@VK]. On the other hand, in the case of the SDF sign of the nonlinearity, $\sigma =-1$, the asymmetric mode satisfies the “anti-VK” criterion, $d\mu /dP>0$, which, as argued in Ref.
[@anti], may also play the role of a necessary stability condition in the respective setting. Further, Eq. (\[a0a1\]) predicts a transition between the symmetric (FHS) and asymmetric (FHA) vortical modes at $a_{0}^{2}=0$, i.e., at $\mu =A+2\delta $. Then, Eq. (\[E\]) yields the location of this boundary in terms of the total power, $P_{\mathrm{\min }}=2\pi \sigma \delta $. In view of the above-mentioned equivalence between the modes pertaining to rotation speeds $\omega =1/2\pm \delta $, the latter result predicts the coexistence of the FHS and FHA modes at $$P\geq P_{\mathrm{\min }}=2\pi |\delta |. \label{P}$$Comparison of this prediction with numerical findings is presented below. Numerical results for the self-focusing nonlinearity ($\sigma =+1$) ============================================================ Symmetric, asymmetric, and antisymmetric modes ---------------------------------------------- Solutions to stationary equation (\[phi\]) were constructed by means of the numerical code “PCSOM” elaborated in Ref. [@YJK]. In the SF case, the use of different input waveforms makes it possible to identify five distinct species of stationary modes, as listed in Table \[tab:table1\].

  Inputs                          Types of modes
  ------------------------------- ---------------------------------------
  $\cos\theta'$                   Fundamental-harmonic symmetric (FHS)
  $1+\sin\theta'$                 Fundamental-harmonic asymmetric (FHA)
  $\sin^{2}\theta'$               Second-harmonic symmetric (2HS)
  $1+\sin\theta'$                 Second-harmonic asymmetric (2HA)
  $\sin\theta'$                   Anti-symmetric (AnS)
  $b + \sin\theta',\,0<b\leq 1$   Broken-antisymmetry (BAnS)

  : Different species of stationary modes in the case of the SF nonlinearity ($\sigma =+1$), labeled by input waveforms which generate them.
\[tab:table1\]

The symmetry, asymmetry and antisymmetry of the modes designated in Table 1 are realized with respect to point $\theta ^{\prime }=0$, while the solution is considered, as defined above, in the region of $-\pi \leq \theta ^{\prime }\leq +\pi $. Further, the fundamental or second harmonic (“FH” or “2H”, respectively) in the nomenclature adopted in the table refers to the dominant term in the Fourier decomposition of the stationary solution (which is made obvious by their shapes, see below). In particular, the stationary patterns of the FHA and 2HA types are generated by the same input in Table 1, $1+\sin \theta ^{\prime }$, but in non-overlapping regions of the parameter space, $\left( \omega ,A,P\right) $, as shown below. Note also that all the inputs displayed in the table are real functions, but the numerical modes found for $\omega \neq 0$ have a complex structure. The stability of the stationary solutions has been identified through the numerical computation of eigenvalues for infinitesimal perturbation modes. To this end, the perturbed solution was taken in the usual form, $$\psi =e^{-i\mu z}[\phi (\theta ^{\prime })+u(\theta ^{\prime })e^{i\lambda z}+v^{\ast }(\theta ^{\prime })e^{-i\lambda ^{\ast }z}], \label{pert}$$where $u(\theta ^{\prime })$ and $v(\theta ^{\prime })$ are perturbation eigenmodes, and $\lambda $ is the corresponding eigenfrequency. The substitution of expression (\[pert\]) into Eq.
(\[Eq5p\]) and linearization leads to the eigenvalue problem in the following form: $$\left( \begin{array}{cc} \mu -\mathrm{\hat{h}}-i\omega \frac{\partial }{\partial \theta ^{\prime }} & \sigma \phi ^{2} \\ -\sigma \left( \phi ^{\ast }\right) ^{2} & -\mu +\mathrm{\hat{h}}-i\omega \frac{\partial }{\partial \theta ^{\prime }}\end{array}\right) \left( \begin{array}{c} u \\ v\end{array}\right) =\lambda \left( \begin{array}{c} u \\ v\end{array}\right) , \label{lambda}$$where $\mathrm{\hat{h}}=-(1/2)\partial ^{2}/\partial \theta ^{\prime 2}+2A\cos ^{2}(\theta ^{\prime })-2\sigma |\phi |^{2}$ is the respective single-particle Hamiltonian. The underlying solution $\phi $ is stable if all the eigenvalues are real. In the case of the SF nonlinearity, the numerical solutions reveal the existence of all the species of the modes indicated in Table 1, except for BAnS, which exists under the SDF nonlinearity, see below. Typical examples of the five species of the stationary modes which are supported by the SF cubic term are displayed in Figs. \[1stSF\]-\[AntiSF\]. The analysis demonstrates that all these examples are stable. Moreover, the symmetric and asymmetric modes dominated by the fundamental harmonic, FHS and FHA, demonstrate mutual *bistability* in Fig. \[1stSF\], as these stable modes coexist at common values of the parameters. A distinctive feature of the FHS mode, which is evident in Fig. \[1stSF\], is that the maxima and minima of its intensity coincide with the local maxima and minima of the harmonic potential, while its FHA counterpart features an intensity maximum in one potential well, and a minimum in the other. This structure of the FHS mode suggests that it may correspond to a maximum, rather than a minimum, of the energy (see below), but, nevertheless, this mode has a region of stability against small perturbations. On the other hand, Figs.
\[2ndSF\] and \[AntiSF\] demonstrate that both the 2HS and AnS modes have two symmetric intensity peaks trapped in the two potential wells (while local minima of the intensity coincide, quite naturally, with the potential humps), hence these modes have a chance to realize a minimum of the energy (in the best case, it may be the system’s ground state). The 2HA mode features a similar property in Fig. \[2ndSF\], but with unequal density peaks trapped in the two potential minima. In the limit of the uniform ring (without the potential, $A=0$), the FHS and AnS modes go over into the above-mentioned uniform vortical state (\[1\]), the 2HS pattern degenerates into the CW state (\[0\]), while the FHA mode takes the form of a $2\pi $-periodic cnoidal-wave solution of the nonlinear Schrödinger equation (the 2HA pattern does not exist at all in the limit of $A=0$). The evolution of the shape of the modes with the increase of $A$ is shown in Fig. \[Amod\] \[the evolution of the AnS mode, which is not shown here, is similar to that of the 2HS type displayed in Fig. \[fig\_5\_c\]\].

Existence and stability diagrams for the different modes
--------------------------------------------------------

Results of the systematic numerical analysis are summarized in Fig. \[1stPomega\], in the form of diagrams for the existence of the stable FHS and FHA modes in the plane of $\left( P,\omega \right) $ at several fixed values of amplitude $A$ of the harmonic potential. In the blank areas, only *unstable* modes of the FHS type are found \[in direct simulations they feature an oscillatory instability, which is accounted for by a quartet of complex eigenvalues generated by Eq. (\[lambda\])\], while the FHA modes are completely stable in their existence region. The dashed lines in panels (a) and (b) of Fig. \[1stPomega\] demonstrate that Eq.
(\[P\]) correctly predicts the bistability boundary at small values of $\delta \equiv 1/2-\omega $ and $P$, provided that $A$ is small enough too \[recall that this is the condition under which Eq. (\[P\]) was derived\]. Further, the fact that vertical cuts of the panels in Fig. \[1stPomega\], which can be drawn through $\omega =\mathrm{const}$, go, with the increase of the total power ($P$), from the region of the monostability of the FHS mode through the FHS/FHA bistability area into the monostability region of the FHA mode (at least, for $\omega $ sufficiently close to $1/2$), clearly suggests that the symmetry-breaking bifurcation of the *subcritical type*, which features the bistability [@Bif], [@Snyder], occurs along this route. This conclusion is confirmed by a direct consideration of the numerical data (not shown here in detail). As concerns the AnS solitons, in the case of the SF nonlinearity they do not undergo any bifurcation, and are stable in the entire region of their existence, which is shown in the $(\omega ,P)$ plane for fixed values of the lattice’s strength, $A$, in Fig. \[antiSFPomega\] \[as mentioned above, at $A=0$ the AnS mode coincides with the FHS one, hence the respective existence area is the same as the red (bottom) one in Fig. \[1stPomega\](a)\]. From here we conclude that the stability region of the AnS mode quickly expands with the increase of $A$, and the FHS/FHA bistability areas in panels (b) and (c) of Fig. \[1stPomega\] actually support the *tristability*, as the AnS mode is stable there too. The multistability between the three species of the modes based on the fundamental harmonic—symmetric, asymmetric, and antisymmetric ones—makes it necessary to compare their energies, which are defined as per Eq. (\[Ham\]). The comparison along the horizontal dotted line, which is drawn in Fig. \[fig\_7\_c\], is presented in Fig. \[fig\_9\_a\].
The figure clearly shows the following relation between the energies: $$H_{\mathrm{FHA}}<H_{\mathrm{AnS}}<H_{\mathrm{FHS}}. \label{HHH}$$ For the 2HS and 2HA patterns, which are based on the second angular harmonic, Fig. \[2ndPomega\] displays the stability areas in the $\left( P,\omega \right) $ plane, at different values of the lattice’s strength, $A$ \[cf. Fig. \[1stPomega\]\], together with the stability area for the FHA mode, which may stably coexist with the 2HS pattern. Figure \[2ndPomega\] also demonstrates that, as mentioned above, the 2HA mode, which exists and is stable in the blue (intermediate) area in Fig. \[2ndPomega\], does not exist at $A=0$, emerging and expanding with the increase of $A$. Note that, contrary to the situation for the patterns dominated by the fundamental angular harmonic (FHS and FHA) described above, there is no overlap between the stability areas of the symmetric and asymmetric second-harmonic-based modes, 2HS and 2HA, hence (as confirmed by an additional consideration of the numerical results) the symmetry-breaking 2HS $\rightarrow $ 2HA transition, following the increase of $P$ at $A\neq 0$, amounts to a *supercritical* bifurcation, which features no bistability [@Bif], [@Snyder]. Another peculiarity of the symmetric mode based on the second harmonic (2HS) is that it is stable in two *disjoint* stability areas—the red and yellow ones in Fig. \[2ndPomega\] (the bottom and rightmost regions), in the latter of which the 2HS features bistability with the FHA mode, although the two are not linked by any bifurcation. On the other hand, there is no overlap between the stability areas of the asymmetric modes based on the different angular harmonics, FHA and 2HA, therefore these modes may be generated by the same input waveform ($1+\sin \theta ^{\prime }$, in Table 1), in different parameter regions. The effective bistability occurring in Fig.
\[2ndPomega\] makes it necessary to compare the energies of the coexisting stable patterns, which is done in Fig. \[fig\_9\_b\], along the horizontal dotted line drawn in Fig. \[2ndPomega\](c). The energy of the coexisting stable AnS mode is included too. In particular, the short segment corresponding to the 2HS mode appears, above the one pertaining to the FHA mode, when the horizontal dotted line in Fig. \[2ndPomega\](c) enters the yellow (rightmost) region of the 2HS-FHA bistability. Thus, adding the results from Fig. \[fig\_9\_b\], we extend energy relations (\[HHH\]) as follows:$$H_{\mathrm{FHA}}~\mathrm{or}~H_{\mathrm{2HA}}<H_{\mathrm{2HS}}<H_{\mathrm{AnS}}<H_{\mathrm{FHS}}. \label{HHHHH}$$The conclusion is that the asymmetric modes, either FHA or 2HA, represent the *ground state* of the rotating ring carrying the harmonic potential and the SF nonlinearity.

Numerical results for the self-defocusing nonlinearity ($\sigma =-1$)
=====================================================================

In the case of the SDF nonlinearity, symmetric modes of the FHS and 2HS types have been found too, although, in contrast to the system with the SF sign, they do not undergo any bifurcations, hence the FHA and 2HA species do not exist in this case. On the other hand, the AnS mode features an *antisymmetry-breaking* bifurcation, which gives rise to a *broken-antisymmetry* (BAnS) mode (which does not exist under the SF nonlinearity). A typical example of the stable BAnS mode is displayed in Fig. \[Antiasy\]. In fact, antisymmetry-breaking bifurcations are characteristic of double-well systems with the SDF nonlinearity [@Warsaw]. The stationary solutions of the BAnS type can be numerically generated by an initial guess of the form $(b+\sin \theta ^{\prime })$, with $0<b\leq 1$, see Table 1.
Proceeding to the systematic description of the modes supported by the SDF nonlinearity acting in combination with the rotating harmonic potential, we note, first, that the numerical analysis demonstrates the existence and stability of the mode of the 2HS type at all values of $(\omega ,P,A)$, therefore this solution is not included in the stability diagrams, which are displayed for the FHS mode in Fig. \[1stPAomegaSDF\], and for the AnS and newly found BAnS ones in Fig. \[AntiPAomegaSDF\]. In the blank area of Fig. \[1stPAomegaSDF\], the FHS solutions exist too, featuring an instability accounted for by a quartet of complex eigenvalues \[see Eq. (\[lambda\])\]. In direct simulations, this instability converts the stationary mode into an oscillatory one (not shown here). Further, Fig. \[AntiPAomegaSDF\] shows that the antisymmetry-breaking transition, which follows the increase of the total power, $P$, is of a supercritical type. Actually, the supercritical character of the antisymmetry-breaking bifurcation is a characteristic feature of systems with the SDF sign of the nonlinearity [@Warsaw]. Finally, the energy comparison for the model with the SDF nonlinearity is presented in Fig. \[fig\_9\_c\], along the boundary separating the existence (and stability) regions of the AnS and BAnS modes in Fig. \[AntiPAomegaSDF\](b). The figure shows the energies of these modes, which coincide along the boundary, along with the results for the FHS and 2HS symmetric modes. From here, it is concluded that the energies are related as follows:$$H_{\mathrm{2HS}}<H_{\mathrm{BAnS}}=H_{\mathrm{AnS}}<H_{\mathrm{FHS}},$$i.e., the 2HS mode represents the *ground state* in the case of the SDF nonlinearity, cf. Eqs. (\[HHH\]) and (\[HHHHH\]).
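In both the SF and SDF cases, the stability classification above rests on the spectrum of the linearization (\[lambda\]). The sketch below shows how such spectra can be obtained by discretizing Eq. (\[lambda\]) on a periodic grid with finite differences; this is a minimal illustration, not the PCSOM-based machinery used here, and the grid size, the CW test profile, and the parameter values are illustrative assumptions. As a check, it reproduces the standard modulational (in)stability of the uniform state on the ring without the potential: all eigenvalues are real for $\sigma =-1$, while complex quartets appear for $\sigma =+1$.

```python
import numpy as np

# Periodic grid on -pi <= theta' < pi (the resolution N is an arbitrary choice).
N = 128
theta = np.linspace(-np.pi, np.pi, N, endpoint=False)
dx = theta[1] - theta[0]
I = np.eye(N)

# Periodic finite-difference derivative matrices: (D1 f)_j ~ f'(theta_j), etc.
D1 = (np.roll(I, -1, axis=0) - np.roll(I, 1, axis=0)) / (2 * dx)
D2 = (np.roll(I, -1, axis=0) - 2 * I + np.roll(I, 1, axis=0)) / dx**2

def bdg_eigenvalues(phi, mu, omega, A, sigma):
    """Spectrum of the linearization around a stationary profile phi."""
    h = -0.5 * D2 + np.diag(2 * A * np.cos(theta) ** 2 - 2 * sigma * np.abs(phi) ** 2)
    rot = -1j * omega * D1  # the -i*omega*d/dtheta' term, same sign in both rows
    M = np.block([
        [mu * I - h + rot, sigma * np.diag(phi ** 2)],
        [-sigma * np.diag(np.conj(phi) ** 2), -mu * I + h + rot],
    ])
    return np.linalg.eigvals(M)

# Check against the uniform state on the ring (A = 0, phi = a0, mu = -sigma*a0^2):
# real spectrum for sigma = -1, complex quartets for sigma = +1.
a0 = 1.0
lam_sdf = bdg_eigenvalues(np.full(N, a0), a0 ** 2, 0.3, 0.0, -1)
lam_sf = bdg_eigenvalues(np.full(N, a0), -a0 ** 2, 0.3, 0.0, +1)
print(np.abs(lam_sdf.imag).max(), np.abs(lam_sf.imag).max())
```

A solution is classified as stable when the largest imaginary part of the computed spectrum vanishes to within discretization error.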
CONCLUSION
==========

The objective of this work is to study the trapped modes of the symmetric, asymmetric, and antisymmetric types, and the symmetry- or antisymmetry-breaking transitions between them, in the rotating ring carrying the DWP (double-well potential) and the cubic nonlinearity with the SF or SDF (self-focusing or self-defocusing) sign. The analysis has been performed in the first rotational Brillouin zone. In the SF case, five distinct types of modes, and their stability, have been identified: three dominated by the fundamental angular harmonic (FHS, FHA, and AnS), and two others based on the second harmonic, the 2HS and 2HA modes. The SSB (spontaneous symmetry-breaking) transition between the FHS and FHA modes is of the subcritical type, featuring a conspicuous bistability area, while the transition between the 2HS and 2HA states is supercritical. There is no overlap between the existence regions of the asymmetric modes of the FHA and 2HA types, one of which realizes the ground state of the SF system. The SDF model supports four distinct species of trapped modes, *viz*., FHS, 2HS, AnS, and, in addition, the BAnS (broken-antisymmetry) mode. The latter one appears from the AnS state as a result of a supercritical antisymmetry-breaking transition. The 2HS mode represents the ground state of the SDF system. The present work can be naturally extended in other directions. First, it is possible to consider the rotating potential in the form of $A\cos \left( n\theta ^{\prime }\right) $ with $n\geq 3$, while the present analysis corresponds to the DWP with $n=2$ \[note that Eq. (\[V\]) suggests a possibility to create the helical photonic lattice with $n=2S$ and $S>1$\]. This generalization will give rise to many new trapped modes; in particular, it may be possible to build one with a small width, $\Delta \theta ^{\prime }\ll 2\pi $, hence it may be considered as a *soliton* trapped in the lattice and rotating along with it, cf. Ref. [@HS].
It is relevant too to consider effects of the two-dimensionality (the entire rotating plane, rather than a narrow ring). Our preliminary analysis, based on simulations of the 2D counterpart of Eq. (\[Eq5p\]), demonstrates essentially the same types of trapped modes in rotating annuli of a finite radial size, as reported above in the one-dimensional limit. Another interesting extension is to replace the linear rotating potential by its nonlinear counterpart, generated by modulation of the local nonlinearity along the angular coordinate (as was done in Ref. [@Pu] in the absence of the rotation). Results obtained for one-dimensional modes trapped in the rotating nonlinear potential will be reported elsewhere. We appreciate help in the use of numerical methods provided by Nir Dror and Shenhe Fu. This work was supported by Chinese agencies NKBRSF (grant No. G2010CB923204) and CNNSF (grant Nos. 11104083, 10934011), by the German-Israel Foundation through grant No. I-1024-2.7/2009, and by the Tel Aviv University in the framework of the “matching” scheme. [99]{} J. C. Eilbeck, P. S. Lomdahl, and A. C. Scott, Physica D **16**, 318 (1985). A. W. Snyder, D. J. Mitchell, L. Poladian, D. R. Rowland, and Y. Chen, J. Opt. Soc. Am. B **8**, 2102 (1991); C. Paré and M. Florjańczyk, Phys. Rev. A **41**, 6287 (1990); A. I. Maimistov, Kvantovaya Elektron. **18**, 758 (1991); Sov. J. Quantum Electron. **21**, 687 (1991); N. Akhmediev and A. Ankiewicz, Phys. Rev. Lett. **70**, 2395 (1993); P. L. Chu, B. A. Malomed, and G. D. Peng, J. Opt. Soc. Am. B **10**, 1379 (1993); B. A. Malomed, in Progress in Optics, edited by E. Wolf, Vol. 43 (North Holland, Amsterdam, 2002), p. 71. W. C. K. Mak, B. A. Malomed, and P. L. Chu, J. Opt. Soc. Am. B **15**, 1685 (1998). J. W. Fleischer, M. Segev, N. K. Efremidis, and D. N. Christodoulides, Nature **422**, 147 (2003); X. Wang and Z. Chen, Phys. Rev. Lett. **96**, 083904 (2006); S. Jia and J. W. Fleischer, Phys. Rev. A **79**, 041804(R) (2009). P. G.
Kevrekidis, Z. Chen, B. A. Malomed, D. J. Frantzeskakis, and M. I. Weinstein, Phys. Lett. A **340**, 275 (2005). G. J. Milburn, J. Corney, E. M. Wright, and D. F. Walls, Phys. Rev. A **55**, 4318 (1997); K. W. Mahmud, H. Perry, and W. P. Reinhardt, Phys. Rev. A. **71**, 023615 (2005); T. Schumm, S. Hofferberth, L. M. Andersson, S. Wildermuth, S. Groth, I. Bar-Joseph, J. Schmiedmayer and P. Krüger, Nature Physics **1**, 57 (2005); D. R. Dounas-Frazer, A. M. Hermundstad, and L. D. Carr, Phys. Rev. Lett. **99**, 200402 (2007); Y. P. Huang and M. G. Moore, Phys. Rev. Lett. **100**, 250406 (2008); Q. Y. He, M. D. Reid, T. G. Vaughan, C. Gross, M. Oberthaler, and P. D. Drummond, Phys. Rev. Lett. **106**, 120405 (2011). I. Zapata, F. Sols, and A. J. Leggett, Phys. Rev. A **57**, R28 (1998), L. Radzihovsky and V. Gurarie, Phys. Rev. A **81**, 063609 (2010); R. Qi, X. L. Yu, Z. B. Li, and W. M. Liu, Phys. Rev. Lett. **102**, 185301 (2009). M. Matuszewski, B. A. Malomed, and M. Trippenbach, Phys. Rev. A **75**, 063621 (2007); M. Trippenbach, E. Infeld, J. Gocalek, M. Matuszewski, M. Oberthaler, and B. A. Malomed, Phys. Rev. A **78**, 013603 (2008). G. Mazzarella, B. Malomed, L. Salasnich, M. Salerno, and F. Toigo, J. Phys. B: At. Mol. Opt. Phys. **44**, 035301 (2011). H. T. Ng and P. T. Leung, Phys. Rev. A **71**, 013601 (2005); I. I. Satija, R. Balakrishnan, P. Naudus, J. Heward, M. Edwards, and C. W. Clark, Phys. Rev. A **79**, 033616 (2009); C. Wang, P. G. Kevrekidis, N. Whitaker, and B. A. Malomed, Physica D **237**, 2922(2008). M. Albiez, R. Gati, J. Fölling, S. Hunsmann, M. Cristiani, and M. K. Oberthaler, Phys. Rev. Lett. **95**, 010402 (2005); R. Gati, M. Albiez, J. Fölling, B. Hemmerling, and M. K. Oberthaler, Appl. Phys. B **82**, 207 (2006). A. Gubeskys and B. A. Malomed, Phys. Rev. A **75**, 063602 (2007); *ibid*. **76**, 043623 (2007); L. Salasnich, B. A. Malomed, and F. Toigo, *ibid*. **81**, 045603 (2010). S. K. Adhikari, B. A. Malomed, L. Salasnich, and F. 
Toigo, Phys. Rev. A **81**, 053630 (2010). E. A. Ostrovskaya, Y. S. Kivshar, M. Lisak, B. Hall, F. Cattani, and D. Anderson, Phys. Rev. A **61**, 031601(2000); R. D’Agosta, B. A. Malomed, C. Presilla, Phys. Lett. A **275**, 424 (2000); R. K. Jackson and M. I. Weinstein, J. Stat. Phys. **116**, 881 (2004); D. Ananikian and T. Bergeman, Phys. Rev. A **73**, 013604 (2006); E. W. Kirr, P. G. Kevrekidis, E. Shlizerman, and M. I. Weinstein, SIAM J. Math. Anal. **40**, 566 (2008). G. Iooss and D. D. Joseph, *Elementary Stability and Bifurcation Theory* (Springer: New York, 1980). Y. S. Kivshar and G. P. Agrawal, *Optical Solitons: From Fibers to Photonic Crystals* (Academic Press: San Diego). A. E. Fetter, Rev. Mod. Phys. **81**, 647 (2009). P. Engels, I. Coddington, P. C. Haljan, V. Schweikhard, and E. A. Cornell, Phys. Rev. Lett. **90**, 170405 (2003). A. L. Fetter, Phys. Rev. A **64**, 063608 (2001); E. Lundh, *ibid*. **65**, 043604 (2002); K. Kasamatsu, M. Tsubota, and M. Ueda, *ibid*. **66**, 053606 (2002); G. M. Kavoulakis and G. Baym, New J. Phys. **5**, 51 (2003); A. Aftalion and I. Danaila, Phys. Rev. A **69**, 033608 (2004); U. R. Fischer and G. Baym, Phys. Rev. Lett. **90**, 140402 (2003); A. D. Jackson and G. M. Kavoulakis, Phys. Rev. A **70**, 023601 (2004); T. K. Ghosh, *ibid*. **69**, 043606 (2004). N. K. Wilkin, J. M. F. Gunn, and R. A. Smith, Phys. Rev. Lett. **80**, 2265 (1998); B. Mottelson, *ibid*. **83**, 2695 (1999); C. J. Pethick and L. P. Pitaevskii, Phys. Rev. A **62**, 033609 (2000); E. Lundh, A. Collin, and K.-A. Suominen, Phys. Rev. Lett. **92**, 070401 (2004); G. M. Kavoulakis, A. D. Jackson, and G. Baym, Phys. Rev. A **70**, 043603 (2004); A. Collin, *ibid*. **73**, 013611 (2006);  A. Collin, E. Lundh, and K.-A. Suominen, Phys. Rev. A **71**, 023613 (2005); S. Bargi, G. M. Kavoulakis, and S. M. Reimann, *ibid*. **73**, 033613 (2006); H. Sakaguchi and B. A. Malomed, *ibid*. **78**, 063606 (2008). V. Schweikhard, I. Coddington, P. Engels, S. 
Tung, and E. A. Cornell, Phys. Rev. Lett. **93**, 210403 (2004); S. Tung, V. Schweikhard, and E. A. Cornell, *ibid*. 97, 240402 (2006). G. Kakarantzas, A. Ortigosa-Blanch, T. A. Birks, P. St. J. Russell, L. Farr, F. Couny, and B. J. Mangan, Opt. Lett. **28**, 158 (2003). J. W. Reijnders and R. A. Duine, Phys. Rev. A **71**, 063607 (2005); H. Pu, L. O. Baksmaty, S. Yi, and N. P. Bigelow, Phys. Rev. Lett. **94**, 190401 (2005); R. Bhat, L. D. Carr, and M. J. Holland, *ibid*. **96**, 060405 (2006); K. Kasamatsu and M. Tsubota, *ibid*. **97**, 240404 (2006). H. Sakaguchi and B. A. Malomed, Phys. Rev. A **75**, 013609 (2007); H. Sakaguchi and B. A. Malomed, Phys. Rev. A **79**, 043606 (2009). Y. V. Kartashov, B. A. Malomed, and L. Torner, Phys. Rev. A **75**, 061602(R) (2007). V. A. Brazhnyi and V. V. Konotop, Mod. Phys. Lett. B **18**, 627 (2004). H. Saito and M. Ueda, Phys. Rev. Lett. **93**, 220402 (2004). S. Schwartz, M. Cozzini, C. Menotti, I Carusotto, P. Bouyer, and S. Stringari, New J. Phys. **8**, 162 (2006). R. Carretero-González, D. J. Frantzeskakis, and P. G. Kevrekidis, Nonlinearity **21**, R139 (2008). L. H. Wen and X. B. Luo, arXiv:1204.4522 (Laser Phys. Lett., in press). L. Wen, H. Xiong, and B. Wu, Phys. Rev. A **82**, 053627 (2010). N. K. Efremidis, S. Sears, D. N. Christodoulides, J. W. Fleischer, and M. Segev, Phys. Rev. E **66**, 046602 (2002); J. W. Fleischer, T. Carmon, M. Segev, N. K. Efremidis, and D. N. Christodoulides, Phys. Rev. Lett. **90**, 023902 (2003); J. W. Fleischer, M. Segev, N. K. Efremidis, and D. N. Christodoulides, Nature **422**, 147 (2003); H. Martin, E. D. Eugenieva, Z. G. Chen, and D. N. Christodoulides, Phys. Rev. Lett. **92**, 123902 (2004); D. N. Neshev, T. J. Alexander, E. A. Ostrovskaya, Y. S. Kivshar, H. Martin, I. Makasyuk, and Z. G. Chen, *ibid*. **92**, 123903 (2004); J. W. Fleischer, G. Bartal, O. Cohen, O. Manela, M. Segev, J. Hudock, and D. N. Christodoulides, *ibid.* **92**, 123904 (2004); B. Freedman, G. 
Bartal, M. Segev, R. Lifshitz, D. N. Christodoulides, and J. W. Fleischer, Nature **440**, 1166 (2006); H. Trompeter, W. Królikowski, D. N. Neshev, A. S. Desyatnikov, A. A. Sukhorukov, Y. S. Kivshar, T. Pertsch, U. Peschel, and F. Lederer, Phys. Rev. Lett. **96**, 053903 (2006); R. Fischer, D. Trager, D. N. Neshev, A. A. Sukhorukov, W. Królikowski, C. Denz, and Y. S. Kivshar, *ibid*. **96**, 023905 (2006); T. Schwartz, G. Bartal, S. Fishman, and M. Segev, Nature **446**, 52 (2007). J. W. Fleischer, G. Bartal, O. Cohen, T. Schwartz, O. Manela, B. Freedman, M. Segev, H. Buljan, N. K. Efremidis, Opt. Exp. **13**, 1780 (2005); F. Lederer, G. I. Stegeman, D. N. Christodoulides, G. Assanto, M. Segev, and Y. Silberberg, Phys. Rep. **463**, 1 (2008). M. Fleischhauer, A. Imamoǧlu, and J. P. Marangos, Rev. Mod. Phys. **77**, 633 (2005); H. Schmidt and A. Imamoǧlu, Opt. Lett. **21**, 1936 (1996); H. Wang, D. Goorskey, and M. Xiao, Phys. Rev. Lett. **87**, 073601 (2001). Y. Li, B. A. Malomed, M. Feng, and J. Zhou, Phys. Rev. A **82** 063813 (2010); W. Pang, J. Wu, Z. Yuan, Y. Liu and G. Chen, J. Phys. Soc. Jpn. **80**, 113401 (2011); J. Wu, M. Feng, W. Pang, S. Fu, and Y. Li, J. Nonlin. Opt. Phys. **20**, 193 (2011); Y. Li, W. Pang, S. Fu, and B. A. Malomed, Phys. Rev. A **85**, 053821 (2012). E. Kuznetsova, O. Kocharovskaya, P. R. Hemmer, and M. O. Scully, Phys. Rev. A **66**, 063802 (2002); A. V. Turukhin, V. S. Sudarshanam, M. S. Shahriar, J. A. Musser, B. S. Ham, and P. R. Hemmer, Phys. Rev. Lett. **88**, 023602 (2002). S. Gupta, K. W. Murch, K. L. Moore, T. P. Purdy, and D. M. Stamper-Kurn, Phys. Rev. Lett. **95**, 143201 (2005); A. S. Arnold, C. S. Garvie, and E. Riis, Phys. Rev. A 73, 041606(R) (2006); I. Lesanovsky and W. von Klitzing, *ibid*. **99**, 083001 (2007); C. Ryu, M. F. Andersen, P. Cladé, V. Natarajan, K. Helmerson,and W. D. Phillips, Phys. Rev. Lett. **99**, 260401 (2007); A. Ramanathan, K. C. Wright, S. R. Muniz, M. Zelan, W. T. Hill III, C. J. Lobb, K. 
Helmerson, W. D. Phillips, and G. K. Campbell, *ibid*. **106**, 130401 (2011). L. Salasnich, A. Parola, and L. Reatto, Phys. Rev. A **74**, 031603(R) (2006); R. Kanamoto, H. Saito, and M. Ueda, *ibid*. **73**, 033611 (2006); M. Modugno, C. Tozzo, and F. Dalfovo, *ibid*. **74**, 061601 (R) (2006); R. Bhat, M. J. Holland, and L. D. Carr, Phys. Rev. Lett. **96**, 060405 (2006); A. V. Carpentier and H. Michinel, EPL **78**, 10002 (2007); P. Mason and N. G. Berloff, Phys. Rev. A **79**, 043620 (2009); J. Brand, T. J. Haigh, and U. Zülicke, *ibid*. **80**, 011602 (R) (2009); P. Capuzzi and D. M. Jezek, J. Phys. B: At. Mol. Opt. Phys. **42**, 145301 (2009); A. Aftalion and P. Maso, *ibid*. **81**, 023607 (2010); J. Smyrnakis, M. Magiropoulos, G. M. Kavoulakis, and A. D. Jackson, *ibid*. **81**, 063601 (2010); S. Zöllner, G. M. Bruun, C. J. Pethick, and S. M. Reimann, Phys. Rev. Lett. **107**, 035301 (2011); Z.-W. Zhou, S.-L. Zhang, X.-F. Zhou, G.-C. Guo, X. Zhou, and H. Pu, Phys. Rev. A **83**, 043626 (2011); M. Abad, M. Guilleumas, R. Mayol, M. Pi, and D. M. Jezek, *ibid*. **84**, 035601 (2011); X. Zhou, S. Zhang, Z. Zhou, B. A. Malomed, and H. Pu, *ibid*. **85**, 023603 (2012); S. K. Adhikari, *ibid*. **85**, 053631 (2012). J. Smyrnakis, S. Bargi, G. M. Kavoulakis, M. Magiropoulos, K. Kärkkäinen, and S. M. Reimann, Phys. Rev. Lett. **103**, 100404 (2009). C. Kittel, *Introduction to Solid State Physics* (Wiley: New York, 1995). M. Vakhitov and A. Kolokolov, Izvestiya VUZov Radiofizika **16**, 1020 (1973) \[in Russian; English translation: Radiophys. Quantum. Electron. **16**, 783 (1973)\]; L. Bergé, Phys. Rep. **303**, 259 (1998); E. A. Kuznetsov and F. Dias, *ibid*. **507**, 43 (2011). H. Sakaguchi and B. A. Malomed, Phys. Rev. A **81**, 013624 (2010). J. Yang and T. I. Lakoba, Stud. Appl. Math. **118**, 153 (2007); **120**, 265 (2008). L. C. Qian, M. L. Wall, S. Zhang, Z. Zhou, and H. Pu, Phys. Rev. A **77**, 013611 (2008); Z.-W. Zhou, S.-L. Zhang, X.-F. Zhou, G.-C.
Guo, X. Zhou, and H. Pu, *ibid*. A **83**, 043626 (2011); X.-F. Zhou, S.-L. Zhang, Z.-W. Zhou, B. A. Malomed, and H. Pu, *ibid*. **85**, 023603 (2012).
--- abstract: 'In this paper, the dynamical attractor and heteroclinic orbit have been employed to make the late-time behaviors of the model insensitive to the initial condition and thus alleviate the fine-tuning problem in the torsion cosmology. The late-time de Sitter attractor indicates that torsion cosmology is an elegant scheme and the scalar torsion mode is an interesting geometric quantity for physics. The numerical solutions obtained by Nester et al. are not periodic solutions, but are quasi-periodic solutions near the focus for the coupled nonlinear equations.' author: - 'Xin-zhou Li' - 'Chang-bo Sun' - Ping Xi title: Torsion cosmological dynamics ---

INTRODUCTION
============

The current observations, such as SNeIa (Supernovae type Ia), CMB (Cosmic Microwave Background) and large scale structure, converge on the fact that a spatially homogeneous and gravitationally repulsive energy component, referred to as dark energy, accounts for about $70$ % of the energy density of the universe. Some heuristic models that roughly describe the observable consequences of dark energy were proposed in recent years, a number of them stemming from fundamental physics [@Padmanabhan01] and the others being purely phenomenological [@Copeland02]. About thirty years ago, the bouncing cosmological model with torsion was suggested in Ref. [@Kerlick], but the torsion was imagined as playing a role only at high densities in the early universe. Goenner et al. made a general survey of the torsion cosmology [@Goenner], in which the equations for all the PGT (Poincaré Gauge Theory of gravity) cases were discussed, although only a few particular cases were solved in detail. Recently some authors have begun to study torsion as a possible origin of the accelerating universe [@Boeheretal]. Nester and collaborators [@shie03] consider an accounting for the accelerated universe in terms of PGT: dynamic scalar torsion.
With the usual assumptions of homogeneity and isotropy in cosmology, they find that the torsion field could play the role of dark energy. This elegant model has only a few adjustable parameters, so scalar torsion may be easily falsified as “dark energy”. The fine-tuning problem is one of the most important issues for cosmological models, and a good model should limit the fine-tuning as much as possible. The dynamical attractor of the cosmological system has been employed to make the late-time behaviors of the model insensitive to the initial condition of the field, and thus alleviates the fine-tuning problem [@Hao04]. In this paper, we study the attractor and heteroclinic orbit in the torsion cosmology. We show that the late-time de Sitter behaviors cover a wide range of the parameters. This attractor indicates that torsion cosmology is an elegant scheme and the scalar torsion mode is an interesting geometric quantity for physics. Furthermore, there are only exact periodic solutions for the linearized system, which just correspond to the critical line (line of centers). The numerical solutions in Ref. [@shie03] are not periodic, but are quasi-periodic solutions near the focus for the coupled nonlinear equations.

AUTONOMOUS EQUATIONS
====================

PGT [@Hehl05], based on a Riemann-Cartan geometry, allows for dynamic torsion in addition to curvature. The affine connection of the Riemann-Cartan geometry is $$\label{PGT} \Gamma_{\mu\nu}{}^\kappa=\overline{\Gamma}_{\mu\nu}{}^\kappa+\frac{1}{2}(T_{\mu\nu}{}^\kappa+T^\kappa{}_{\mu\nu} +T^\kappa{}_{\nu\mu})\,,$$ where $\bar{\Gamma}_{\mu\nu}{}^{\kappa}$ is the Levi-Civita connection and $T_{\mu\nu}^{\kappa}$ is the torsion tensor.
Meanwhile, the Ricci curvature and scalar curvature can be written as $$\begin{aligned} &&R_{\mu\nu} = \overline{R}_{\mu\nu} + \overline{\nabla}_\nu T_\mu +\frac{1}{2} (\overline{\nabla}_\kappa - T_\kappa)(T_{\nu\mu}{}^\kappa+T^\kappa{}_{\mu\nu}+T^\kappa{}_{\nu\mu})\nonumber\\ &&+\frac{1}{4}(T_{\kappa\sigma\mu}T^{\kappa\sigma}{}_\nu+2T_{\nu \kappa \sigma}T^{\sigma \kappa}{}_\mu)\,,\\ &&R=\overline{R} + 2\overline{\nabla}_\mu T^\mu+\frac{1}{4}(T_{\mu\nu \kappa}T^{\mu\nu \kappa} +2T_{\mu\nu \kappa}T^{\kappa\nu\mu}-4T_\mu T^\mu),\nonumber\\\end{aligned}$$ where $\bar{R}_{\mu\nu}$ and $\bar{R}$ are the Riemannian Ricci curvature and scalar curvature, respectively, $\bar{\nabla}$ is the covariant derivative with the Levi-Civita connection, and $T_{\mu}\equiv T^{\nu}_{\mu\nu}$. Following Ref. [@shie03], we take the restricted form of the torsion in this paper, $$\begin{aligned} T_{\mu\nu\rho}=\frac{2}{3}T_{[\mu}g_{\nu]\rho}\,; \label{restrictedT}\end{aligned}$$ therefore, the gravitational Lagrangian density for the scalar mode is (for a detailed discussion see Ref. [@Nester2]) $$\begin{aligned} L_g &=& -\frac{a_0}{2}R +\frac{b}{24}R^2\nonumber\\ &&+\frac{a_1}{8}(T_{\nu\sigma\mu}T^{\nu\sigma\mu} +2T_{\nu\sigma\mu}T^{\mu\sigma\nu}-4T_\mu T^\mu)\,.\label{Lg0+mode}\end{aligned}$$ Since current observations favor a flat universe, we will work in the spatially flat Robertson-Walker metric. By homogeneity and isotropy, the torsion $T_{\mu}$ should be only time dependent, so one can set $T_{t}(t)\equiv\Phi(t)$, and the spatial parts vanish since we have taken the restricted form (\[restrictedT\]) of the torsion. For the general form, the torsion tensor has two independent components [@Goenner]-[@Boeheretal].
From the field equations one can finally obtain the equations to be integrated for the matter-dominated era (for a detailed discussion see Ref.[@shie03]) $$\begin{aligned} \dot{H}&=&\frac{\mu}{6a_1}R-\frac{\rho}{6a_1}-2H^2\,,\label{dtH}\\ \dot{\Phi}&=&-\frac{a_0}{2a_1}R-\frac{\rho}{2a_1}-3H\Phi +\frac{1}{3}\Phi^2\,,\label{dtphi}\\ \dot{R}&=&-\frac{2}{3}\left(R+\frac{6\mu}{b}\right)\Phi\,,\label{dtR}\end{aligned}$$ where $\mu= a_1-a_0$ and the energy density of the matter component is $$\begin{aligned} &&\rho=\frac{b}{18}(R+\frac{6\mu}{b})(3H-\Phi)^2-\frac{b}{24}R^2-3a_1H^2 \,.\label{fieldrho} \end{aligned}$$ One can scale the variables and the parameters as $$\begin{aligned} &&t\rightarrow l_{p}^{-2}H_{0}^{-1}t,\,\, H\rightarrow l_{p}^{2}H_{0} H, \,\, \Phi\rightarrow l_{p}^{2}H_{0}\Phi,\,\, R\rightarrow l_{p}^{4}H_{0}^{2}R,\nonumber\\ &&a_0\rightarrow l_{p}^{2}a_0,\,\, a_1\rightarrow l_{p}^{2}a_1,\,\, \mu\rightarrow l_{p}^{2}\mu,\,\, b\rightarrow l_{p}^{-2}H_{0}^{-2}b,\label{scale}\end{aligned}$$ where $H_0$ is the present value of the Hubble parameter and $l_p\equiv\sqrt{8\pi G}$ is the Planck length. Under the transformation (\[scale\]), Eqs. (\[dtH\])-(\[dtR\]) remain unchanged, and the new variables $t$, $H$, $\Phi$ and $R$, together with the new parameters $a_0$, $a_1$, $\mu$ and $b$, are all dimensionless. Furthermore, the Newtonian limit requires $a_0=-1$. Obviously, Eqs. (\[dtH\])-(\[dtR\]) form an autonomous system, so we can use the qualitative methods of ordinary differential equations. It is worth noting that, in the analysis of critical points, Copeland et al. [@Copeland] introduced elegant compact variables defined from the Friedmann equation constraint; in our case, however, the Friedmann equation cannot be written in the ordinary form, so the compact variables are not convenient here. Therefore, we will analyze the system of Eqs.(\[dtH\])-(\[dtR\]) using the variables $H$, $\Phi$ and $R$ under the transformation (\[scale\]).
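Since Eqs. (\[dtH\])-(\[dtR\]) form an autonomous system, their phase-space flow is straightforward to explore numerically. The following sketch is illustrative only (it is not part of the paper): it encodes the right-hand sides with $\rho$ eliminated through the constraint (\[fieldrho\]), fixes $a_0=-1$ so that $\mu=1+a_1$, and advances the state with a classical fourth-order Runge-Kutta step.

```python
def rhs(state, a1, b, a0=-1.0):
    """Right-hand sides of Eqs. (dtH)-(dtR); rho is eliminated
    through the constraint (fieldrho), and mu = a1 - a0."""
    H, Phi, R = state
    mu = a1 - a0
    rho = (b / 18.0) * (R + 6.0 * mu / b) * (3.0 * H - Phi) ** 2 \
        - (b / 24.0) * R ** 2 - 3.0 * a1 * H ** 2
    dH = mu / (6.0 * a1) * R - rho / (6.0 * a1) - 2.0 * H ** 2
    dPhi = -a0 / (2.0 * a1) * R - rho / (2.0 * a1) - 3.0 * H * Phi + Phi ** 2 / 3.0
    dR = -(2.0 / 3.0) * (R + 6.0 * mu / b) * Phi
    return (dH, dPhi, dR)


def rk4(state, a1, b, dt, steps):
    """Classical fourth-order Runge-Kutta integration of the system."""
    for _ in range(steps):
        k1 = rhs(state, a1, b)
        k2 = rhs(tuple(s + 0.5 * dt * k for s, k in zip(state, k1)), a1, b)
        k3 = rhs(tuple(s + 0.5 * dt * k for s, k in zip(state, k2)), a1, b)
        k4 = rhs(tuple(s + dt * k for s, k in zip(state, k3)), a1, b)
        state = tuple(s + dt / 6.0 * (ka + 2.0 * kb + 2.0 * kc + kd)
                      for s, ka, kb, kc, kd in zip(state, k1, k2, k3, k4))
    return state
```

For example, starting from $(H,\Phi,R)=(0.8,0.5,1.1)$ with $a_1=0.9$ and $b=8$ (the values used for the focus case below), the trajectory remains bounded and winds toward the origin.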
LATE TIME DE SITTER ATTRACTOR ============================= In the case of scalar torsion mode, the effective energy-momentum tensor can be represented as $$\begin{aligned} \widetilde{T}_t{}^t&=&\!\!-3\mu H^2\!+\frac{b}{18}(R+\frac{6\mu}{b}) (3H-\Phi)^2-\frac{b}{24}R^2,\label{torho}\\ \widetilde{T}_r{}^r&=&\widetilde{T}_\theta{}^\theta =\widetilde{T}_\phi{}^\phi ={1\over 3}[\mu(R-\overline{R})-\widetilde{T}_t{}^t]\,,\label{torpre}\end{aligned}$$ and the off-diagonal terms vanish. The effective energy density $$\begin{aligned} \rho_{eff}=\rho+\rho_{T}\equiv \rho+\widetilde{T}_{tt},\label{effrho} \end{aligned}$$ which is deduced from general relativity. $p_{eff}=p_T\equiv \tilde{T}_r^r$ is an effective pressure, and the effective equation of state is $$\begin{aligned} w_{eff}=\frac{\widetilde{T}_{r}^{r}}{\rho+\widetilde{T}_{tt}}\label{effw} \end{aligned}$$ which is induced by the dynamic torsion. According to equations (\[dtH\])-(\[dtR\]), we can obtain the critical points and study the stability of these points. There are five critical points $(H_c, \Phi_c, R_c)$ of the system as follows $$\begin{aligned} &(\text i) & (0,0,0)\nonumber\\ &(\text{ii}) &\scriptstyle \left( \left(\frac{3(1+a_{1})}{8}-A\right)\sqrt{B+C},\frac{3}{2}\sqrt{B+C},-\frac{6(1+a_{1})}{b}\right)\nonumber\\ &(\text{iii}) &\scriptstyle \left(- \left(\frac{3(1+a_{1})}{8}-A\right)\sqrt{B+C},-\frac{3}{2}\sqrt{B+C},-\frac{6(1+a_{1})}{b}\right)\nonumber\\ &(\text{iv}) &\scriptstyle \left( \left(\frac{3(1+a_{1})}{8}+A\right)\sqrt{B-C},\frac{3}{2}\sqrt{B-C},-\frac{6(1+a_{1})}{b}\right)\nonumber\\ &(\text{v}) &\scriptstyle \left(- \left(\frac{3(1+a_{1})}{8}+A\right)\sqrt{B-C},-\frac{3}{2}\sqrt{B-C},-\frac{6(1+a_{1})}{b}\right)\end{aligned}$$ where $A=\scriptstyle\frac{\sqrt{a_1^2(1+a_1)^3(1+9a_1)}}{8a_1(1+a_1)}$, $B=\scriptstyle-\frac{(1+a_1)(5+9a_1)}{a_1b}$ and $C=\scriptstyle-\frac{3\sqrt{a_1^2(1+a_1)^3(1+9a_1)}}{a_1^2b}$. 
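As a quick consistency check (ours, not from the paper), one can substitute the closed-form point (ii) back into Eqs. (\[dtH\])-(\[dtR\]) and confirm that the vector field vanishes there; the Newtonian limit $a_0=-1$ is assumed throughout, so $\mu=1+a_1$.

```python
import math


def torsion_rhs(H, Phi, R, a1, b):
    """RHS of Eqs. (dtH)-(dtR) with a0 = -1, i.e. mu = 1 + a1;
    rho is eliminated through the constraint (fieldrho)."""
    mu = 1.0 + a1
    rho = (b / 18.0) * (R + 6.0 * mu / b) * (3.0 * H - Phi) ** 2 \
        - (b / 24.0) * R ** 2 - 3.0 * a1 * H ** 2
    dH = mu / (6.0 * a1) * R - rho / (6.0 * a1) - 2.0 * H ** 2
    dPhi = R / (2.0 * a1) - rho / (2.0 * a1) - 3.0 * H * Phi + Phi ** 2 / 3.0
    dR = -(2.0 / 3.0) * (R + 6.0 * mu / b) * Phi
    return dH, dPhi, dR


def critical_point_ii(a1, b):
    """Closed-form critical point (ii); real for b > 0 and -1/9 <= a1 < 0."""
    disc = math.sqrt(a1 ** 2 * (1 + a1) ** 3 * (1 + 9 * a1))
    A = disc / (8 * a1 * (1 + a1))
    B = -(1 + a1) * (5 + 9 * a1) / (a1 * b)
    C = -3 * disc / (a1 ** 2 * b)
    root = math.sqrt(B + C)
    return ((3 * (1 + a1) / 8 - A) * root, 1.5 * root, -6 * (1 + a1) / b)
```

For $a_1=-1/10$ and $b=8$ this gives $(H_c,\Phi_c,R_c)\approx(0.71,\,2.85,\,-0.675)$, at which all three components of the vector field vanish to machine precision.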
The parameter $b$ is associated with the quadratic scalar curvature term $R^2$, so $b$ should be positive [@shie03]. Evidently, the critical points $(H_c, \Phi_c, R_c)$ are not real, except for (0, 0, 0), when $b>0$ and either $a_1>0$ or $-1\leq a_1<-1/9$. If we linearize the equations, then Eqs. (\[dtH\])-(\[dtR\]) reduce to $$\begin{aligned} \dot{H}=\frac{\mu}{6a_1}R,\qquad \dot{\Phi}=\frac{1}{2a_1}R,\qquad \dot{R}=-\frac{4\mu}{b}\Phi\,.\label{dtHPHIRlinear}\end{aligned}$$ In the case $a_1>0$, the nonlinear system has only the critical point $(0, 0, 0)$, while Eqs. (\[dtHPHIRlinear\]) have the exact periodic solution $$\begin{aligned} &&H=\alpha R_{0}\sin \omega t+\frac{\mu}{3}\Phi_{0}\cos \omega t+H_{0}-\frac{\mu}{3}\Phi_{0},\label{Hps}\\ &&\Phi=\beta^{-1}R_{0}\sin \omega t +\Phi_{0}\cos \omega t,\label{phips}\\ &&R=R_{0}\cos \omega t-\beta\Phi_{0}\sin \omega t,\label{Rps}\end{aligned}$$ where $\omega = \sqrt{\frac{2\mu}{a_1b}}$, $\alpha = \sqrt{\frac{b\mu}{72a_1}}$, $\beta = \sqrt{\frac{8\mu a_1}{b}}$, and $H_0=H(0)$, $\Phi_0=\Phi(0)$ and $R_0=R(0)$ are the initial values. Obviously, $(H, 0, 0)$ is a critical line for the linearized system. For the nonlinear system, however, the only critical point $(0, 0, 0)$ is an asymptotically stable focus: although the corresponding eigenvalues ($0$, $-\sqrt{\frac{2\mu}{a_1b}}i$, $\sqrt{\frac{2\mu}{a_1b}}i$) are purely imaginary, the nonlinear terms destroy the exact periodicity, so the nonlinear system has no periodic solution. In Fig.\[Focusa1g0\], we plot the orbits near the point $(0, 0, 0)$ for the nonlinear system. ![The phase diagrams of $(H,\Phi,R)$ for the nonlinear system (\[dtH\])-(\[dtR\]) with $a_{1}>0$. We take $a_{1}=0.9,b=8$ and the initial values $(0.8,0.5,1.1)$. $(0,0,0)$ is an asymptotically stable focus point.[]{data-label="Focusa1g0"}](focusa1g0.eps){height="1.8in" width="1.8in"} In the remainder of this paper, the parameters are restricted to $b>0$ with $-1/9\leq a_1<0$ or $a_1<-1$.
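The exact solution (\[Hps\])-(\[Rps\]) can also be verified mechanically. The sketch below (illustrative only) evaluates the claimed solution and compares its central-difference time derivative against the right-hand sides of the linearized system (\[dtHPHIRlinear\]), with $\mu=1+a_1$:

```python
import math


def linear_solution(t, a1, b, H0, Phi0, R0):
    """Exact periodic solution (Hps)-(Rps) of the linearized system;
    omega, alpha, beta are real for a1 > 0 (so that mu = 1 + a1 > 0)."""
    mu = 1.0 + a1
    w = math.sqrt(2.0 * mu / (a1 * b))
    alpha = math.sqrt(b * mu / (72.0 * a1))
    beta = math.sqrt(8.0 * mu * a1 / b)
    H = alpha * R0 * math.sin(w * t) + mu / 3.0 * Phi0 * math.cos(w * t) \
        + H0 - mu / 3.0 * Phi0
    Phi = R0 / beta * math.sin(w * t) + Phi0 * math.cos(w * t)
    R = R0 * math.cos(w * t) - beta * Phi0 * math.sin(w * t)
    return H, Phi, R


def linear_residuals(t, a1, b, H0, Phi0, R0, h=1e-6):
    """Residuals of dH/dt = mu R/(6 a1), dPhi/dt = R/(2 a1),
    dR/dt = -4 mu Phi/b, with derivatives by central differences."""
    mu = 1.0 + a1
    Hp, Pp, Rp = linear_solution(t + h, a1, b, H0, Phi0, R0)
    Hm, Pm, Rm = linear_solution(t - h, a1, b, H0, Phi0, R0)
    H, Phi, R = linear_solution(t, a1, b, H0, Phi0, R0)
    return ((Hp - Hm) / (2 * h) - mu * R / (6 * a1),
            (Pp - Pm) / (2 * h) - R / (2 * a1),
            (Rp - Rm) / (2 * h) + 4 * mu * Phi / b)
```

The residuals vanish to numerical accuracy for any initial values, confirming the solution by direct substitution.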
To study the stability of the critical points $(H_{c},\Phi_{c},R_{c})$, we write the variables near the critical points in the form $H=H_{c}+U$, $\Phi=\Phi_{c}+V$, and $R=R_{c}+X$ with $U, V, X$ the perturbations of the variables near the critical points. Substituting the expression into the system of equations (\[dtH\])-(\[dtR\]), one can obtain the corresponding eigenvalues of critical points (i)-(v) $$\begin{aligned} &&(\text{i}) \scriptstyle(0,-\sqrt{-\frac{2(1+a_{1})}{a_{1}b}},\sqrt{-\frac{2(1+a_{1})}{a_{1}b}})\nonumber\\ &&(\text{ii}) \scriptstyle\left(-\sqrt{B+C}, -3\left(\frac{3(1+a_{1})}{8}-A\right)\sqrt{B+C}, -\left(\frac{1+9a_{1}}{8}-3A\right)\sqrt{B+C}\right)\nonumber\\ &&(\text{iii}) \scriptstyle \left(\sqrt{B+C}, 3\left(\frac{3(1+a_{1})}{8}-A\right)\sqrt{B+C},\left(\frac{1+9a_{1}}{8}-3A\right)\sqrt{B+C}\right)\nonumber\\ &&(\text{iv}) \scriptstyle\left(-\sqrt{B-C}, -3\left(\frac{3(1+a_{1})}{8}+A\right)\sqrt{B-C},-\left(\frac{1+9a_{1}}{8}+3A\right)\sqrt{B-C}\right)\nonumber\\ &&(\text{v}) \scriptstyle\left(\sqrt{B-C}, 3\left(\frac{3(1+a_{1})}{8}+A\right)\sqrt{B-C},\left(\frac{1+9a_{1}}{8}+3A\right)\sqrt{B-C}\right)\end{aligned}$$ The properties of the critical points are shown in tables \[cripointsa1l-1\] and \[cripointsa1l0\]. We find that critical point (ii) is a late time de Sitter attractor in the case of $-1/9\leq a_1<0$. NUMERICAL ANALYSIS ================== In previous sections, we have studied the phase space of a torsion cosmology. The de Sitter attractor indicates that torsion cosmology [@Kerlick]-[@shie03] is an elegant scheme and the scalar torsion mode is an interesting geometric quantity for physics. In this section, we study their dynamical evolution numerically. 
  critical points   property   $w_{eff}$     stability
  ----------------- ---------- ------------- -----------
  (i)               focus      $\pm\infty$   stable
  (ii)              saddle     -1            unstable
  (iii)             saddle     -1            unstable
  (iv)              saddle     -1            unstable
  (v)               saddle     -1            unstable

  : The physical properties of critical points for $a_{1}<-1$[]{data-label="cripointsa1l-1"}

The crossing of the $w=-1$ barrier is impossible in traditional scalar field models [@Caldwell06]; this impossibility further underlines the importance of torsion cosmology. In Fig. \[Tweffa1\], we plot the dynamical evolution of the equation of state $w_{eff}$ for different initial values $(H, \Phi, R)$. Contrary to the quintessence and phantom models [@Caldwell06], the effective equation of state parameter $w_{eff}$ is time dependent and can cross the cosmological constant divide $w_\Lambda=-1$ from $w_{eff}>-1$ to $w_{eff}<-1$, as the observations mildly indicate. Critical points are always exact constant solutions in the context of autonomous dynamical systems. These points are often the extreme points of the orbits and therefore describe the asymptotic behavior. Solutions that interpolate between critical points are classified as heteroclinic orbits, which connect two different critical points, or homoclinic orbits, which are closed loops connecting a critical point to itself. In the dynamical analysis of cosmology, the heteroclinic orbit is the more interesting [@Li08]. By carrying out the numerical calculation with initial conditions near the critical points, all kinds of heteroclinic orbits can be found. In particular, the heteroclinic orbit shown in Fig.\[heteroclinicorbita1l0\] connects the positive and negative attractors.
  critical points   property             $w_{eff}$   stability
  ----------------- -------------------- ----------- -----------
  (i)               saddle               $-\infty$   unstable
  (ii)              positive attractor   -1          stable
  (iii)             negative attractor   -1          unstable
  (iv)              saddle               -1          unstable
  (v)               saddle               -1          unstable

  : The physical properties of critical points for $-1/9\leq a_{1}<0$[]{data-label="cripointsa1l0"}

![The evolution of the equation-of-state parameter $w_{eff}$ for different initial values $(H,\Phi,R)$ with $-1/9 \leq a_{1}<0$. We take $a_{1}=-1/10,b=8$.[]{data-label="Tweffa1"}](weff.eps){height="1.4in" width="2.1in"}

CONCLUSION AND DISCUSSION
=========================

In this paper, we investigate the dynamics of a torsion cosmology, in which we consider only the “scalar torsion” mode. This mode has certain distinctive and interesting qualities. We show that the late-time asymptotic behavior does not always correspond to an oscillating aspect. In fact, only in the focus case can we say that the “scalar torsion” mode contributes a quasinormal oscillating aspect to the expansion rate of the universe. Exact periodic solutions exist only for the linearized system, and they correspond to the critical line (line of centers); by numerically solving the coupled nonlinear equations, Nester et al. [@shie03] plotted quasi-periodic solutions near the focus. The late-time de Sitter attractor indicates that torsion cosmology is an elegant scheme and the scalar torsion mode is an interesting geometric quantity for physics. We show that the late-time de Sitter behaviors cover a wide range of the parameters and thus alleviate the fine-tuning problem. Furthermore, torsion cosmology raises the possibility that the dynamical scalar torsion (a geometric field) could fully account for the accelerated universe, which is naturally expected from spacetime gauge theory. ![The phase diagrams of $(H,\Phi,R)$ with $-1/9 \leq a_{1}<0$. The heteroclinic orbit connects critical point (iii) to critical point (ii).
We take $a_{1}=-1/10,b=4$.[]{data-label="heteroclinicorbita1l0"}](heteroclinicorbit.eps){height="1.4in" width="2.1in"}

This work is supported by National Science Foundation of China grant No. 10473007 and No. 10671128.

T. Padmanabhan, Phys. Rept. [**380**]{}, 235 (2003); J. G. Hao and X. Z. Li, Phys. Rev. [**D67**]{}, 107303 (2003); D. J. Liu and X. Z. Li, Phys. Rev. [**D68**]{}, 067301 (2003); X. Z. Li and J. G. Hao, Phys. Rev. [**D69**]{}, 107303 (2004).

E. J. Copeland, M. Sami and S. Tsujikawa, Int. J. Mod. Phys. [**D15**]{}, 1753 (2006); J. G. Hao and X. Z. Li, Phys. Lett. [**B606**]{}, 7 (2005); D. J. Liu and X. Z. Li, Phys. Lett. [**B611**]{}, 8 (2005); D. J. Liu, C. B. Sun and X. Z. Li, Phys. Lett. [**B634**]{}, 442 (2006).

G. D. Kerlick, Ann. Phys. [**99**]{}, 127 (1976).

H. Goenner and F. Müller-Hoissen, Class. Quant. Grav. [**1**]{}, 651 (1984).

S. Capozziello, S. Carloni and A. Troisi, Recent Res. Dev. Astron. Astrophys. [**1**]{}, 625 (2003); C. G. Böhmer and J. Burnett, Phys. Rev. [**D78**]{}, 104001 (2008); C. G. Böhmer, Acta Phys. Polon. [**B36**]{}, 2841 (2005); E. W. Mielke and E. Sanchez Romero, Phys. Rev. [**D73**]{}, 043521 (2006); A. V. Minkevich, A. S. Garkun and V. I. Kudin, Class. Quant. Grav. [**24**]{}, 5835 (2007).

K. F. Shie, J. M. Nester and H. J. Yo, Phys. Rev. [**D78**]{}, 023522 (2008); H. J. Yo and J. M. Nester, Mod. Phys. Lett. [**A22**]{}, 2057 (2007).

J. G. Hao and X. Z. Li, Phys. Rev. [**D70**]{}, 043529 (2004).

F. W. Hehl, J. D. McCrea, E. W. Mielke and Y. Neeman, Phys. Rept. [**258**]{}, 1 (1995); M. Blagojević, [*Gravitation and Gauge Symmetries*]{} (Institute of Physics, Bristol, 2002); T. Ortín, [*Gravity and Strings*]{} (Cambridge University Press, Cambridge, 2004).

H. J. Yo and J. M. Nester, Int. J. Mod. Phys. [**D8**]{}, 459 (1999).

E. J. Copeland, A. R. Liddle and D. Wands, Phys. Rev. [**D57**]{}, 4686 (1998).

R. R. Caldwell and M. Doran, Phys. Rev. [**D72**]{}, 043527 (2005); J. G. Hao and X. Z. Li, Phys. Rev. [**D68**]{}, 083514 (2003).

X. Z. Li, Y. B. Zhao and C. B. Sun, Class. Quant. Grav. [**22**]{}, 3759 (2005).
---
abstract: 'We calculate QCD corrections to the transversely polarized Drell-Yan process at a measured $Q_T$ of the produced lepton pair in the dimensional regularization scheme. The $Q_T$ distribution is discussed by resumming soft gluon effects relevant at small $Q_T$.'
address:
- |
    Radiation Laboratory, RIKEN\
    2-1 Hirosawa, Wako, Saitama 351-0198, JAPAN\
    kawamura@rarfaxp.riken.jp
- |
    Theory Division, High Energy Accelerator Research Organization (KEK)\
    1-1 OHO, Tsukuba 305-0801, JAPAN
- 'Department of Physics, Juntendo University, Inba-gun, Chiba 270-1695, JAPAN'
author:
- HIROYUKI KAWAMURA
- JIRO KODAIRA and HIROTAKA SHIMIZU
- KAZUHIRO TANAKA
title: '$Q_T$ RESUMMATION IN TRANSVERSELY POLARIZED DRELL-YAN PROCESS [^1]'
---

Hard processes with polarized nucleon beams enable us to study the spin-dependent dynamics of QCD and the spin structure of the nucleon. The helicity distribution $\Delta q(x)$ of quarks within the nucleon has been measured in polarized DIS experiments, and the gluon distribution $\Delta G(x)$ has also been estimated from their scaling violations. On the other hand, the transversity distribution $\delta q(x)$, i.e. the distribution of transversely polarized quarks inside a transversely polarized nucleon, cannot be measured in inclusive DIS due to its chiral-odd nature,[@RS:79] and remains the last unknown distribution at leading twist. The transversely polarized Drell-Yan (tDY) process is one of the processes in which the transversity distribution can be measured, and such a measurement has been undertaken in the RHIC spin experiment. We compute the 1-loop QCD corrections to tDY at a measured $Q_T$ and azimuthal angle $\phi$ of the produced lepton in the dimensional regularization scheme. For this purpose, the phase space integration in $D$ dimensions, separating out the relevant transverse degrees of freedom, is required to extract the $\propto \!
\cos(2\phi)$ part of the cross section characteristic of the spin asymmetry of tDY.[@RS:79] The calculation is rather cumbersome compared with the corresponding calculation in unpolarized and longitudinally polarized cases, and has not been performed so far. We obtain the NLO ${\cal O}(\alpha_s)$ corrections to the tDY cross section in the $\overline{\rm MS}$ scheme. We also include soft gluon effects by all-order resummation of logarithmically enhanced contributions at small $Q_T$ (“edge regions of the phase space”) up to next-to-leading logarithmic (NLL) accuracy, and obtain the first complete result of the $Q_T$ distribution for all regions of $Q_T$ at NLL level. We first consider the NLO ${\cal O}(\alpha_s)$ corrections to tDY: $h_1(P_1,s_1)+h_2(P_2,s_2)\rightarrow l(k_1)+\bar{l}(k_2)+X$, where $h_1,h_2$ denote nucleons with momentum $P_1,P_2$ and transverse spin $s_1,s_2$, and $Q=k_1+k_2$ is the 4-momentum of DY pair. The spin dependent cross section $\Delta_T d \sigma \equiv (d \sigma (s_1 , s_2) - d \sigma (s_1 , - s_2))/2$ is given as a convolution $$\Delta_T d\sigma = \int d x_1 d x_2\, \delta H (x_1 \,,\,x_2 ; \mu_F)\, \Delta_T d \hat{\sigma} (s_1\,,\,s_2 ; \mu_F),$$ where $\mu_F$ is the factorization scale, and $$\delta H (x_1 \,,\,x_2 ; \mu_F)\, = \sum_i e_i^2 [\delta q_i(x_1 ; \mu_F)\delta \bar{q}_i(x_2 ; \mu_F) +\delta \bar{q}_i(x_1 ; \mu_F)\delta q_i(x_2 ; \mu_F)]$$ is the product of transversity distributions of the two nucleons, and $\Delta_T d \hat{\sigma}$ is the corresponding partonic cross section. Note that, at the leading twist level, the gluon does not contribute to the transversely polarized process due to its chiral odd nature. We compute the one-loop corrections to $\Delta_T d \hat{\sigma}$, which involve the virtual gluon corrections and the real gluon emission contributions, e.g., $q (p_1 , s_1) + \bar{q} (p_2 , s_2) \to l (k_1) + \bar{l} (k_2) + g$, with $p_i = x_i P_i$. 
We regularize the infrared divergence in $D=4 - 2 \epsilon$ dimension, and employ naive anticommuting $\gamma_5$ which is a usual prescription in the transverse spin channel.[@WV:98] In the $\overline{\rm MS}$ scheme, we eventually get,[@KKST; @KKST2] to NLO accuracy, $$\begin{aligned} \frac{\Delta_T d \sigma}{d Q^2 d Q_T^2 d y d \phi} = N\, \cos{(2 \phi )} \left[ X\, (Q_T^2 \,,\, Q^2 \,,\, y) + Y\, (Q_T^2 \,,\, Q^2 \,,\, y) \right], \label{cross section}\end{aligned}$$ where $N = \alpha^2 / (3\, N_c\, S\, Q^2)$ with $S=(P_1 +P_2 )^2$, $y$ is the rapidity of virtual photon, and $\phi$ is the azimuthal angle of one of the leptons with respect to the initial spin axis. For later convenience, we have decomposed the cross section into the two parts: the function $X$ contains all terms that are singular as $Q_T \rightarrow 0$, while $Y$ is of ${\cal O}(\alpha_s)$ and finite at $Q_T=0$. Writing $X = X^{(0)} + X^{(1)}$ as the sum of the LO and NLO contributions, we have[@KKST; @KKST2] $X^{(0)} = \delta H (x_1^0\,,\,x_2^0\,;\, \mu_F )\ \delta (Q_T^2)$, and $$\begin{aligned} X^{(1)} &=& \frac{\alpha_s}{2 \pi} C_F\ \Biggl\{ \delta H (x_1^0\,,\,x_2^0\,;\, \mu_F ) \left[\, 2\, \left( \frac{\ln Q^2 / Q_T^2}{Q_T^2} \right)_+ - \frac{3}{(Q_T^2)_+} + \left(\, - 8 + \pi^2 \right) \delta (Q_T^2) \right] \nonumber\\ \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!+&& \!\!\!\!\!\!\!\!\left( \frac{1}{(Q_T^2)_+} + \delta (Q_T^2) \ln \frac{Q^2}{\mu_F^2} \right)\!\!\! 
\left[ \int^1_{x_1^0} \frac{d z}{z} \delta P_{qq}^{(0)} (z)\ \delta H \left( \frac{x_1^0}{z}, x_2^0 ;\ \mu_F \right) + ( x_1^0 \leftrightarrow x_2^0 ) \right] \Biggr\} , \label{eq:x}\end{aligned}$$ where $x_1^0 = \sqrt{\tau}\ e^y , x_2^0 =\sqrt{\tau}\ e^{-y}$ are the relevant scaling variables with $\tau =Q^2/S$, and $\delta P_{qq}^{(0)} (z) = 2 z/(1 - z)_+ + (3/2)\, \delta (1 - z)$ is the LO transverse splitting function.[@AM:90] In (\[eq:x\]), the terms involving $\delta(Q_T^2 )$ come from the virtual gluon corrections, while the other terms represent the recoil effects due to the real gluon emissions. For the analytic expression of $Y$, see Ref.[@KKST2]. Eq. (\[cross section\]) gives the first NLO result in the $\overline{\rm MS}$ scheme. We note that there has been a similar NLO calculation of tDY cross section in massive gluon scheme.[@VW:93] We also note that, integrating (\[cross section\]) over $Q_T$, our result coincides with the corresponding $Q_T$-integrated cross sections obtained in previous works employing massive gluon scheme[@VW:93] and dimensional reduction scheme,[@CKM:94] via the scheme transformation relation.[@WV:98] The cross section (\[cross section\]) becomes very large when $Q_T \ll Q$, due to the terms behaving $\sim \alpha_s \ln(Q^2/Q_T^2 )/Q_T^2$ and $\sim \alpha_s /Q_T^2$ in the singular part $X$. 
It is well-known that, in unpolarized and longitudinally polarized DY, large “recoil logs” of similar nature appear in each order of perturbation theory as $\alpha_s^n \ln^{2n-1}(Q^2/Q_T^2 )/Q_T^2$, $\alpha_s^n \ln^{2n-2}(Q^2/Q_T^2 )/Q_T^2$, and so on, corresponding to LL, NLL, and higher contributions, respectively, and that the resummation of those “double logarithms” to all orders is necessary to obtain a well-defined, finite prediction of the cross section.[@CSS:85] Because the LL and NLL contributions are universal,[@DG:00] we can work out the all-order resummation of the corresponding logarithmically enhanced contributions in (\[cross section\]) up to the NLL accuracy, based on the general formulation[@CSS:85] of the $Q_T$ resummation. This can be conveniently carried out in the impact parameter $b$ space, conjugate to the $Q_T$ space. As a result, the singular part $X$ of (\[cross section\]) is modified into the corresponding resummed part, which is expressed as the Fourier transform, $$\begin{aligned} X \rightarrow&& \sum_{i}e_i^2 \int_0^{\infty} d b \frac{b}{2} J_0 (b Q_T) e^{\, S (b , Q)} ( C_{qq} \otimes \delta q_i ) \left( x_1^0 , \frac{b_0^2}{b^2} \right) ( C_{\bar{q} \bar{q}} \otimes \delta\bar{q}_i ) \left( x_2^0 , \frac{b_0^2}{b^2} \right) \nonumber\\ &&+ ( x_1^0 \leftrightarrow x_2^0 ) . \label{resum}\end{aligned}$$ Here $b_0 = 2e^{-\gamma_E}$, and the large logarithmic corrections are resummed into the Sudakov factor $e^{S (b , Q)}$ with $S(b,Q)=-\int_{b^2_0/b^2}^{Q^2}(d\kappa^2/\kappa^2) \{ (\ln{\frac{Q^2}{\kappa^2}} )A_q(\alpha_s(\kappa))+ B_q(\alpha_s(\kappa)) \}$. 
The functions $A_q$, $B_q$ as well as the coefficient functions $C_{qq}, C_{\bar{q}\bar{q}}$ are calculable in perturbation theory, and at the present accuracy of NLL, we get:[@KKST; @KKST2] $A_q(\alpha_s ) =(\alpha_s /\pi) C_F +(\alpha_s /2\pi)^2 2C_F \{(67/18-\pi^2/6)C_G-5N_f /9 \}$, $B_q (\alpha_s )=-3C_F(\alpha_s /2\pi)$, $C_{qq}(z, \alpha_s )=C_{\bar{q} \bar{q}}(z, \alpha_s ) =\delta(1-z)\{1+(\alpha_s /4\pi)C_F(\pi^2-8)\}$. We have utilized a relation[@KT:82] between $A_q$ and the DGLAP kernels in order to obtain the two-loop term of $A_q$. The other contributions have been determined so that the expansion of the formula (\[resum\]) in powers of $\alpha_s (\mu_F )$ reproduces $X$ of (\[cross section\]), (\[eq:x\]) to NLO accuracy. Eq. (\[cross section\]) with (\[resum\]) presents the first NLL $Q_T$ resummation formula for tDY. The NLO parton distributions in the $\overline{\rm MS}$ scheme have to be used. One more step is necessary to make a QCD prediction for tDY. As with other all-order resummation formulae, our result (\[resum\]) suffers from IR renormalons due to the Landau pole at $b= (b_0 /Q)e^{(1/2\beta_0 \alpha_s (Q))}$ in the Sudakov factor, and it is necessary to specify a prescription to avoid this singularity. Here we deform the integration contour in (\[resum\]) in the complex $b$ space, following the method introduced in the joint resummation.[@LKSV:01] The prescription to define the $b$ integration is obviously not unique, reflecting the IR renormalon ambiguity; e.g., the “$b_{*}$ prescription”, which effectively “freezes” the $b$ integration along the real axis, is frequently used.[@CSS:85] The renormalon ambiguity should eventually be compensated in the physical quantity by power corrections $\sim (b \Lambda_{\rm QCD})^n$ ($n=2,3, \ldots$) due to non-perturbative effects.
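The structure of the Sudakov exponent can be illustrated with a toy computation. If the running of $\alpha_s$ is switched off (a simplification made here purely for illustration; the NLL analysis requires the running coupling), the $\kappa^2$ integral is elementary, $S=-(A_q L^2/2 + B_q L)$ with $L=\ln(Q^2 b^2/b_0^2)$, which cross-checks a direct quadrature of the defining integral:

```python
import math

CF, CG = 4.0 / 3.0, 3.0
B0 = 2.0 * math.exp(-0.5772156649015329)  # b0 = 2 exp(-gamma_E)


def coefficients(alpha_s, nf=5):
    """A_q to two loops and B_q to one loop, as quoted in the text."""
    A = alpha_s / math.pi * CF + (alpha_s / (2.0 * math.pi)) ** 2 * 2.0 * CF * (
        (67.0 / 18.0 - math.pi ** 2 / 6.0) * CG - 5.0 * nf / 9.0)
    B = -3.0 * CF * alpha_s / (2.0 * math.pi)
    return A, B


def sudakov_frozen(b, Q, alpha_s, nf=5):
    """Closed form of S(b, Q) for a frozen coupling."""
    A, Bq = coefficients(alpha_s, nf)
    L = math.log(Q ** 2 * b ** 2 / B0 ** 2)
    return -(A * L ** 2 / 2.0 + Bq * L)


def sudakov_quadrature(b, Q, alpha_s, nf=5, n=10000):
    """Same exponent by midpoint quadrature in ln(kappa^2)
    over the interval [b0^2/b^2, Q^2]."""
    A, Bq = coefficients(alpha_s, nf)
    lo, hi = math.log(B0 ** 2 / b ** 2), math.log(Q ** 2)
    du = (hi - lo) / n
    total = 0.0
    for i in range(n):
        lk2 = lo + (i + 0.5) * du
        total += (math.log(Q ** 2) - lk2) * A + Bq
    return -total * du
```

At $b=2$, $Q=10$ (in matching units) and $\alpha_s=0.2$ the two evaluations agree, and the exponent is negative, i.e. $e^{S(b,Q)}$ suppresses the large-$b$ region of the Fourier integral.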
Correspondingly, we make the replacement $e^{S (b , Q)}\rightarrow e^{S (b , Q)}F^{NP}(b)$ in (\[resum\]) with the “minimal” ansatz for non-perturbative effects, [@CSS:85; @LKSV:01] $F^{NP}(b)=\exp(-g b^2)$ with a non-perturbative parameter $g$. Fig. 1 shows the $Q_T$ distribution of tDY at $\sqrt{S}=100$ GeV, $Q=10$ GeV, $y=\phi=0$, with a model for the transversity $\delta q(x)$ that saturates Soffer’s inequality at a low scale.[@MSSV:98] The solid line shows the NLO result using (\[cross section\]), while the dashed and dot-dashed lines show the NLL result using (\[cross section\]), (\[resum\]) and $F^{NP}(b)=\exp(-g b^2)$, with $g=0.5$ GeV$^2$ and $g=0$, respectively.

Acknowledgements {#acknowledgements .unnumbered}
================

We would like to thank W. Vogelsang for valuable discussions. The work of J.K. was supported by the Grant-in-Aid for Scientific Research No. C-16540255. The work of K.T. was supported by the Grant-in-Aid for Scientific Research No. C-16540266.

[0]{}

J.P.Ralston and D.Soper, [*Nucl. Phys.*]{} [**B152**]{}, 109 (1979).

W.Vogelsang and A.Weber, [*Phys. Rev.*]{} [**D48**]{}, 2073 (1993).

A.P.Contogouris, B.Kamal and Z.Merebashvili, [*Phys. Lett.*]{} [**B337**]{}, 167 (1994).

W.Vogelsang, [*Phys. Rev.*]{} [**D57**]{}, 1886 (1998).

X.Artru and M.Mekhfi, [*Z. Phys.*]{} [**C45**]{}, 669 (1990).

H.Kawamura [*et al.*]{}, [*Nucl. Phys. Proc. Suppl.*]{} [**135**]{}, 19 (2004).

H.Kawamura, J.Kodaira, H.Shimizu and K.Tanaka, in preparation.

J.C.Collins, D.Soper and G.Sterman, [*Nucl. Phys.*]{} [**B250**]{}, 199 (1985).

J.Kodaira and L.Trentadue, [*Phys. Lett.*]{} [**B112**]{}, 66 (1982).

D. de Florian and M.Grazzini, [*Phys. Rev. Lett.*]{} [**85**]{}, 4678 (2000).

E.Laenen, G.Sterman and W.Vogelsang, [*Phys. Rev.*]{} [**D63**]{}, 114018 (2001); A.Kulesza, G.Sterman and W.Vogelsang, [*ibid.*]{} [**D66**]{}, 014001 (2002); E.Laenen, in these proceedings.

O.Martin [*et al.*]{}, [*Phys. Rev.*]{} [**D57**]{}, 3084 (1998); [*ibid.*]{} [**D60**]{}, 117502 (1999).

[^1]: A talk presented by H.Kawamura
--- author: - | Maxim Naumov$^*$, John Kim$^\dagger$, Dheevatsa Mudigere$^\ddagger$, Srinivas Sridharan, Xiaodong Wang,\ Whitney Zhao, Serhat Yilmaz, Changkyu Kim, Hector Yuen, Mustafa Ozdal, Krishnakumar Nair,\ Isabel Gao, Bor-Yiing Su, Jiyan Yang and Mikhail Smelyanskiy\ Facebook, 1 Hacker Way, Menlo Park, CA bibliography: - 'refs.bib' title: 'Deep Learning Training in Facebook Data Centers: Design of Scale-up and Scale-out Systems' ---
--- author: - 'Simon Taylor, Chris Sherlock, Gareth Ridall and Paul Fearnhead' bibliography: - 'MUNErefs.bib' date: 11th April 2018 title: Motor Unit Number Estimation via Sequential Monte Carlo --- Abstract {#abstract .unnumbered} ======== A change in the number of motor units that operate a particular muscle is an important indicator for the progress of a neuromuscular disease and the efficacy of a therapy. Inference for realistic statistical models of the typical data produced when testing muscle function is difficult, and estimating the number of motor units from these data is an ongoing statistical challenge. We consider a set of models for the data, each with a different number of working motor units, and present a novel method for Bayesian inference, based on sequential Monte Carlo, which provides estimates of the marginal likelihood and, hence, a posterior probability for each model. To implement this approach in practice we require sequential Monte Carlo methods that have excellent computational and Monte Carlo properties. We achieve this by leveraging the conditional independence structure in the model, where given knowledge of which motor units fired as a result of a particular stimulus, parameters that specify the size of each unit’s response are independent of the parameters defining the probability that a unit will respond at all. The scalability of our methodology relies on the natural conjugacy structure that we create for the former and an enforced, approximate conjugate structure for the latter. A simulation study demonstrates the accuracy of our method, and inferences are consistent across two different datasets arising from the same rat tibial muscle. Keywords {#keywords .unnumbered} ======== Motor Unit Number Estimation; Sequential Monte Carlo; Model Selection Introduction {#sec:Intro} ============ Motor unit number estimation (MUNE) is a continuing challenge for clinical neurologists. 
The ability to determine the number of motor units (MUs) that operate a particular muscle provides important insights into the progression of various neuromuscular ailments such as amyotrophic lateral sclerosis [@She06; @Bro07], and aids the assessment of the efficacy of potential therapy treatments [@Cas10]. A MU is the fundamental component of the neuromuscular system and consists of a single motor neuron and the muscle fibres whose contraction it governs. Restriction of a MU’s operation may be a result of impaired communication between the motor neuron and muscle fibres, abnormality in their function, or atrophy of either cell type. A direct investigation into the number of MUs via a biopsy, for example, is not helpful since this only determines the presence of each MU, not its functionality. Electromyography (EMG) provides a set of electrical stimuli of varying intensity to a group of motor neurons; each stimulus artificially induces a twitch in the targeted muscle, providing an *in situ* measurement of the functioning of the MUs. The effect on the muscle may be measured by recording either the minute variation in muscle membrane potential or the physical force the muscle exerts [@Maj05]. The generic methods developed in this article are applicable to either type of measurement. Since our data consist of whole muscle twitch force (WMTF) measurements, we henceforth describe the response in these terms. In a healthy subject, the stimulus-response curve is typically sigmoidal [@Hen06], illustrating the smooth recruitment of additional MUs as the stimulus increases; however, the relatively low number of MUs in a patient with impaired muscle function may manifest within the stimulus-response relationship as large jumps in WMTF measurements. Figure \[fig:RatData\] shows the two data sets that will be described and analysed in detail in Section \[sec:CaseStudy\], with the large jumps clearly visible.
The histograms of absolute differences in response for adjacent stimuli show two main modes, one, near 0mN, corresponding to noise and the other, around 40mN indicating that different MUs fired. The noise arises primarily because of small variations in the contribution to the WMTF provided by any particular MU, whenever it fires. The second general source of noise, visible in isolation at very low stimuli when no MUs are firing, is called the baseline noise. This arises from respiration movements and pulse pressure waves, and particular care is taken to minimise such influences, for example by earthing the subject and equipment, restraining the limb, digitally resetting the force signal prior to each stimulus, synchronising stimuli with the pulse cycle and using highly sensitive measurement devices. MUNE uses the observed stimulus-response pattern to estimate the number of functioning MUs. Techniques for MUNE generally form two classes: the average and comprehensive approaches. The most common averaging approach is the incremental technique of [@McC71], which assumes that the MUs can be characterised by an ‘average’ MU with a particular single motor unit twitch force (MUTF), estimated as the average of the magnitudes of the observed stepped increases in twitch force. A large stimulus, known as the supramaximal stimulus, is applied in order to cause all MUs to react. The quotient of the WMTF arising from the supramaximal stimulus and the average MUTF provides a count estimate. However, there is no guarantee that a particular single-stepped increase in response corresponds to a new, previously latent, MU, since it may instead be due to a phenomenon called alternation [@Bro76]. This occurs when two or more MUs have similar activation thresholds such that different combinations of MUs may fire in reaction to two identical stimuli. Consequently, the incremental technique tends to underestimate the average MUTF and hence overestimate the number of MUs. 
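The incremental estimate just described can be stated in a few lines of code. The sketch below is our own illustration (the function name and the explicit noise threshold are assumptions, not part of the original method's specification): responses are ordered by stimulus, the stepped increases exceeding a noise threshold are averaged as the "average MUTF", and the baseline-corrected supramaximal response is divided by that average.

```python
def incremental_mune(stimuli, forces, step_threshold):
    """Incremental MUNE: supramaximal twitch force divided by the mean
    of the detected single-step increases when responses are ordered
    by stimulus intensity."""
    ordered = [f for _, f in sorted(zip(stimuli, forces))]
    steps = [b - a for a, b in zip(ordered, ordered[1:]) if b - a > step_threshold]
    if not steps:
        raise ValueError("no steps exceed the noise threshold")
    mean_mutf = sum(steps) / len(steps)
    baseline = ordered[0]        # response when no MU fires
    supramaximal = max(forces)   # response when every MU fires
    return (supramaximal - baseline) / mean_mutf
```

On an idealized noise-free staircase of five identical units the estimate is exact; alternation would add spuriously small steps to `steps`, shrinking `mean_mutf` and inflating the count, which is precisely the bias discussed above.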
A number of improvements both experimentally [@Kad76; @Sta94 e.g.] and empirically [@Dau95; @Maj07 e.g.] have been proposed to try to deal with the alternation problem but, despite these improvements, each method oversimplifies the data generating mechanism and there is no gold-standard averaging approach; @Bro07 and @Goo14 provide thorough discussions on these approaches to MUNE. ![Stimulus-response curve from a rat tibial muscle using 10sec (left) and 50sec duration stimuli. Histogram inserts represent the frequency in the absolute difference of twitch forces when ordered by stimulus.[]{data-label="fig:RatData"}](figures/SRcurve_rat.pdf){width="80.00000%"} Motor units are more diverse than simple replicates of the ‘average’ MU, with many factors influencing their function. A desire for a more complete model for the data generating mechanism motivated the comprehensive approach to MUNE in @Rid06, which proposed three assumptions: - MUs fire independently of each other and of previous stimuli in an all-or-nothing response. Each MU fires precisely when the stimulus intensity exceeds a random threshold whose distribution is unique to that MU, with a sigmoidal cumulative distribution function, called an excitability curve. - The firing of a MU is characterised by a MUTF which is independent of the size of the stimulus that caused it to fire, and has a Gaussian distribution with an expectation specific to that MU and a variance common to all MUs. - The measured WMTF is the superposition of the MUTFs of those MUs that fired, together with a baseline component which has a Gaussian distribution with its own mean and variance. From these assumptions, @Rid06 proposed a set of similar statistical models each of which assumed a different *fixed* number of MUs. MUNE thus reduced to selection of a best model, for which the Bayesian information criterion was used. The class of methods which performs MUNE within a Bayesian framework is commonly referred to as Bayesian MUNE. 
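Assumptions A1-A3 specify a complete generative model, which can be simulated directly. The sketch below is an illustration under assumed parameter values; a logistic CDF is used as one concrete choice for the sigmoidal excitability curve of A1 (the model itself only requires some sigmoid):

```python
import math
import random


def simulate_wmtf(stimuli, thresholds, slopes, mutf_means, mutf_sd,
                  base_mean, base_sd, rng):
    """Generate WMTF responses under A1-A3: each MU fires independently
    with probability given by its excitability curve (logistic CDF here),
    contributes a Gaussian MUTF when it fires (A2), and a Gaussian
    baseline term is always present; the response is their sum (A3)."""
    data = []
    for s in stimuli:
        y = rng.gauss(base_mean, base_sd)  # baseline component (A3)
        for thr, slope, m in zip(thresholds, slopes, mutf_means):
            p_fire = 1.0 / (1.0 + math.exp(-(s - thr) / slope))  # A1
            if rng.random() < p_fire:
                y += rng.gauss(m, mutf_sd)  # A2: MUTF of a firing unit
        data.append(y)
    return data
```

Setting the variances to zero and choosing stimuli far below or above every threshold makes the output deterministic, which conveniently exposes the superposition in A3: the supramaximal response equals the baseline mean plus the sum of the MUTF means.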
In a subsequent paper, @Rid07 extended the method by constructing a reversible jump Markov chain Monte Carlo (RJMCMC) [@Gre95] algorithm to sample from the MU-number posterior mass function directly. However, its implementation is highly challenging, with slow and uncertain convergence particularly when the studied muscle has many MUs. This is partly attributed to the difficulty in defining efficient and meaningful transitions between models, with transition rates found to be 0.5–2% [@And07]. The between-model transition rate was improved in @Dro14, where it was noticed that under Assumption A1, for a given stimulus, the majority of MUs are either almost certain to fire or almost certain to not fire. Approximating this near certainty by absolute certainty led to a substantial reduction in the size of the sample space. The approximate sample space for the firing events was sufficiently small to permit marginalisation in the calculation of between-model transition probabilities, increasing the acceptance rate to 9.2% with simulated examples. Nevertheless, substantial issues over convergence remain as the parameter posterior distributions for models with more than the true number of MUs are multimodal.

In this paper, slight alterations of the neuromuscular assumptions permit the development of a fully adapted sequential Monte Carlo (SMC) filter, leading to SMC-MUNE, the first Bayesian MUNE method compatible with real-time analysis. As in @Rid06, the principal inference targets are separate estimates of the marginal likelihood for models with $u=1, \ldots, {u_{\max}}$ MUs, for some maximum size ${u_{\max}}$.

The paper proceeds as follows. Section \[sec:Model\] presents the neuromuscular model of @Rid06 for a fixed number of MUs and defines the priors for the model parameters. Section \[sec:Method\] describes the SMC-MUNE method.
Due to the complexity of the problem that MUNE addresses, this section is broken into three parts: inference for the firing events and associated parameters; inference for the parameters of the baseline and MUTF processes; and estimation of the marginal likelihood so as to evaluate the posterior mass function for MU-number. Section \[sec:SimStudy\] assesses the performance of the SMC-MUNE method for $200$ simulated data sets. Closer examination of cases where the point estimate of the number of MUs was incorrect revealed two classes of error; an example in each of these classes is investigated in detail. Section \[sec:CaseStudy\] applies the SMC-MUNE method to data (collected using the method in [@Cas10]) from a rat tibial muscle that has undergone stem cell therapy. Section \[sec:Discussion\] concludes the paper with a discussion on the effectiveness of SMC-MUNE and of potential avenues for improvement.

The neuromuscular model and prior specification {#sec:Model}
===============================================

The three assumptions A1–A3 underpin a comprehensive description of the neuromuscular system. This section expands on these assumptions to form the model of the neuromuscular system for a given fixed number of MUs. Section \[sec:Notation\] introduces the notational convention. Section \[sec:Neuro-model\] presents the neuromuscular model under the assumptions of @Rid06, and Section \[sec:PriorDist\] defines the prior distributions for the model parameters.

Notation {#sec:Notation}
--------

The total number of MUs operating the muscle of interest is denoted by $u$ and a particular MU is indexed by $j$. An EMG data set consists of $T$ measurements whereby the datum for the $t$th test, $t=1,\ldots,T$, consists of the applied stimulus $s_t$ and resulting WMTF $y_t$.
The data set is re-ordered such that the observations $y_1, \ldots, y_{\tau-1}$ define baseline measurements with $s_t=0$ for $t=1, \ldots, \tau-1$, followed by an overall WMTF $y_\tau$ corresponding to the supramaximal stimulus $s_\tau=\max_t(s_t)$ where all $u$ MUs are known (by the clinician) to have fired. The remaining measurements appear in order of increasing stimulus. The advantages of this ordering will become evident in Section \[sec:DetailObsProc\]. The reaction of MU $j$ to stimulus $s_t$ is denoted by the indicator variable $x_{j,t}$, which is $1$ if MU $j$ fires, and hence contributes to the $y_t$ measurement, and $0$ otherwise. The $u$-vector of indicators ${\mathbf{x}}_t=(x_{1,t},\ldots,x_{u,t})^\top$ defines the firing vector of the MUs in response to stimulus $s_t$. Given the experimental set-up, it is assumed that no MUs fire for any baseline measurement, $x_{j,t}=0$ for each $j=1,\ldots,u$ and $t=1,\dots,\tau-1$, and all MUs fire in response to the supramaximal stimulus, $x_{j,\tau}=1$. A sequentially indexed set of elements, vectors or scalars shall be represented as $a_{1:t} := \{a_1, \ldots, a_t\}$. The vectors where all elements are zero or all are unity are denoted by ${\mathbf{0}}$ and ${\mathbf{1}}$ respectively. The indicator function $\mathbb{I}_{A}(x)$ is $1$ if $x \in A$ and $0$ otherwise.

The neuromuscular model {#sec:Neuro-model}
-----------------------

Following the assumptions A1–A3 of @Rid06, the state-space neuromuscular model for the WMTF observations based on a fixed number $u$ of MUs is as follows. $$\begin{aligned} X_{j,t} | s_t, \eta_j, \lambda_j & \sim \mathrm{Bern}\left[ F \left(s_t; \eta_j, \lambda_j\right) \right], \label{eq:StateProc}\\ Y_{t} | \mathbf{X}_t = {\mathbf{x}}_t, {\bar{\mu}}, {\bar{\nu}}, {\boldsymbol{\mu}}, \nu & \sim \mathrm{N}\left({\bar{\mu}}+ {\mathbf{x}}_t^\top{\boldsymbol{\mu}}, {\bar{\nu}}^{-1} + \nu^{-1}{\mathbf{x}}_t^\top{\mathbf{1}}\right).
\label{eq:ObsProc}\end{aligned}$$ The WMTF in is the sum of independent Gaussian contributions, firstly, from a baseline effect of $N({\bar{\mu}},{\bar{\nu}}^{-1})$ and, secondly, from each MU that fires. If the $j$th MU fires then it makes a $N(\mu_j,\nu^{-1})$ contribution to the WMTF. The parameters ${\boldsymbol{\mu}}= (\mu_1, \ldots, \mu_u)^\top$, $\nu$, ${\bar{\mu}}$, ${\bar{\nu}}$ are collectively referred to as the *observation parameters*. Each firing event in , $X_{j,t}$, is a Bernoulli random variable with success probability given by a sigmoidal function $F$ of the stimulus, called the *excitability curve* [@Bro76]. The *excitability parameters* for the $j$th MU, $\eta_j$ and $\lambda_j$, characterise its excitation features; conditional on these values, firing events are independent. The acyclic graph in Figure \[fig:MUNEdag\] depicts the dependencies within the neuromuscular model. Key to the strategy in this paper is that the observational and excitability parameters are conditionally independent given the unobserved firing events ${\mathbf{x}}_{1:T}$.

![Directed acyclic graph of the neuromuscular model for a fixed number of motor units, $u$. Arrows denote direct dependencies between known data (square nodes) and unknown parameters and states (circle nodes). Plates indicate repeated cases according to the stated index.[]{data-label="fig:MUNEdag"}](figures/MUNEdag.pdf){width="80.00000%"}

The excitability curve is a non-decreasing sigmoid function of the stimulus, parameterised by its median, $\eta$, and the reciprocal gradient at the median: $F(s=\eta;\eta,\lambda) = 1/2$, and $F'(s=\eta;\eta,\lambda) = 1/\lambda$. Under assumption A1, @Rid06 specifies the excitability curve as the Gaussian cumulative distribution function (CDF): $F(s) = \Phi[\delta(s-\eta)]$ where $\Phi(x)$ denotes the standard Gaussian CDF with $\delta = \sqrt{2\pi}/\lambda$.
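As a minimal illustration of the generative model just defined (one draw under assumptions A1–A3, here using the Gaussian-CDF excitability curve of @Rid06), the following sketch is not part of the paper's method and all names and values are illustrative.

```python
import math
import random

def gaussian_cdf(x):
    """Standard Gaussian CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def simulate_wmtf(s, eta, lam, mu, nu, mu_bar, nu_bar, rng):
    """One draw from the neuromuscular model for stimulus s:
    MU j fires w.p. F(s) = Phi[delta_j * (s - eta_j)], delta_j = sqrt(2*pi)/lambda_j,
    and the WMTF is baseline noise plus the firing MUs' Gaussian MUTFs."""
    u = len(eta)
    x = [1 if rng.random() < gaussian_cdf(math.sqrt(2.0 * math.pi) / lam[j] * (s - eta[j])) else 0
         for j in range(u)]
    y = rng.gauss(mu_bar, math.sqrt(1.0 / nu_bar))  # baseline N(mu_bar, 1/nu_bar)
    for j in range(u):
        if x[j]:
            y += rng.gauss(mu[j], math.sqrt(1.0 / nu))  # MUTF contribution N(mu_j, 1/nu)
    return x, y
```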
Evidence for this definition [@Hal04] focused on the central structure of the excitability curve by applying a binned chi-squared goodness-of-fit test. However, evidence to distinguish between this and alternatives such as the logistic curve will arise, chiefly, from tail events. Moreover, the Gaussian assumption allows a small, albeit potentially negligible, probability of a spurious firing event when no stimulus is applied. Given this contradiction with the experimental design, the following log-logistic form of the excitability curve is used: $$\begin{aligned} F(s; \eta, \lambda) = \left[1 + \left(\frac{s}{\eta}\right)^{-4\eta/\lambda}\right]^{-1}. \label{eq:ECdef}\end{aligned}$$ Nonetheless, the inference method described in Section \[sec:DetailFireProc\] is applicable for any sigmoidal curve.

Prior distributions {#sec:PriorDist}
-------------------

The excitability parameters of individual MUs are assumed to be independent *a priori*. For some upper limits ${\eta_{\max}}$ and ${\lambda_{\max}}$, the excitability parameters are assigned vague independent beta prior distributions: $$\begin{aligned} \frac{\eta}{{\eta_{\max}}} \sim \mathrm{Beta}(1.1,~ 1.1), \quad\quad \frac{\lambda}{{\lambda_{\max}}} \sim \mathrm{Beta}(1.1,~ 1.1). \label{eq:ECprior}\end{aligned}$$ The shape parameters are chosen so that the densities are uninformative yet tail off towards the boundaries. The location upper bound is conservatively set just greater than the supramaximal stimulus, ${\eta_{\max}}= 1.1s_\tau$. Evidence for specifying the upper bound ${\lambda_{\max}}$ is taken from @Hal04 where, for a Gaussian excitability curve, the coefficient of variation of a random variable whose cumulative distribution function is given by the excitability curve was estimated to be 1.65%. With the log-logistic curve this corresponds to $\lambda/\eta \approx 3.64\%$. Given that $\eta\le {\eta_{\max}}= 1.1s_\tau$, we deduce that $\lambda \le 0.04 s_\tau$.
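The defining properties of the log-logistic curve, $F(\eta)=1/2$ and $F'(\eta)=1/\lambda$, can be checked numerically; the short sketch below (my own, not from the paper) does so for illustrative parameter values consistent with $\lambda/\eta \approx 4\%$.

```python
def F(s, eta, lam):
    """Log-logistic excitability curve: median eta, reciprocal gradient
    lam at the median, i.e. F(eta) = 1/2 and F'(eta) = 1/lam."""
    return 1.0 / (1.0 + (s / eta) ** (-4.0 * eta / lam))
```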
The limitations of the study of @Hal04, commented on by @Maj07, indicate that a larger bound may be required than initially suggested, so sensitivity of MUNE to ${\lambda_{\max}}$ is investigated in Sections \[sec:SimStudy\_under\] and \[sec:CaseStudy\].

The following pair of four-parameter (multivariate) Gaussian-gamma prior distributions are specified for the observation parameters: $$\begin{aligned} {\bar{\nu}}& \sim \mathrm{Gam}\left({\bar{a}}_0,~ {\bar{b}}_0\right), \quad\quad {\bar{\mu}}|{\bar{\nu}}\sim \mathrm{N}\left({\bar{m}}_0,~ {\bar{\nu}}^{-1}{\bar{c}}_0\right),\nonumber\\ \nu & \sim \mathrm{Gam}\left(a_0,~ b_0\right), \quad\quad {\boldsymbol{\mu}}|\nu \sim \mathrm{MVN}_u\left({\mathbf{m}}_0,~ \nu^{-1}C_0\right).\label{eq:ObsPrior}\end{aligned}$$ All hyper-parameters are strictly positive scalars except for the real-valued scalar expectation ${\bar{m}}_0$, $u$-vector ${\mathbf{m}}_0$ and $u \times u$ positive definite matrix $C_0$. The prior distributions defined for the precision parameters are consistent with @Rid06. However, the prior for both the baseline and MUTF expectations differs from the gamma definition of @Rid06. The tractability reasons for adopting Gaussian rather than gamma priors are detailed in Section \[sec:DetailObsProc\]; the problems that arise from the support now including the whole real line are addressed in Section \[sec:ML\].

The range of MUs to consider, $u=1, \ldots, {u_{\max}}$, defines a set of neuromuscular models. Previous Bayesian MUNE methods defined a uniform prior on the model space in assuming that each is equally probable. However, there is typically a preference for identifying the simplest representation of the underlying process. This is of particular importance in the presence of alternation, where the data could be equally probable under two or more models. To impose an *a priori* preference for smaller models the number of MUs is given a $\mathsf{Geom}(1/2)$ distribution, truncated at ${u_{\max}}$.
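To illustrate how the truncated geometric model prior combines with per-model marginal likelihoods to give posterior model probabilities (anticipating Section \[sec:Method\]), consider the following sketch; `model_posterior` is a hypothetical name, and note that the truncation constant of the prior cancels in the normalisation.

```python
import math

def model_posterior(log_ml, rho=0.5):
    """Posterior model probabilities from log marginal likelihoods and a
    Geom(rho) prior on u truncated at u_max = len(log_ml); log_ml[u-1] is
    the log marginal likelihood for the model with u MUs."""
    u_max = len(log_ml)
    log_prior = [math.log(rho) + (u - 1) * math.log(1.0 - rho)
                 for u in range(1, u_max + 1)]
    log_post = [lp + lm for lp, lm in zip(log_prior, log_ml)]
    m = max(log_post)                      # log-sum-exp for stability
    w = [math.exp(l - m) for l in log_post]
    z = sum(w)
    return [wi / z for wi in w]
```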
Methodology for SMC-MUNE {#sec:Method}
========================

The methodology that defines the SMC-MUNE procedure detailed in this section is based on an approximation to the ideal model defined in Section \[sec:Model\] using, effectively, an approximation to the prior specification. The reasons for the approximations are twofold: firstly, the choice of prior is necessary for certain tractable operations but does not reflect true prior belief; secondly, prior specification does not lead conveniently and efficiently to sequential inference yet a simple approximation achieves this goal. An overview of the methodology is first provided, with details about each part given subsequently. Adapting terminology from sequential inference, the re-ordered index $t$ shall henceforth be referred to as ‘time’.

Overview {#sec:Overview}
--------

The ultimate aim is to calculate and compare the posterior model probabilities for a range of models, each with a different number of MUs, $u$. Posterior model probabilities are straightforward to obtain once the marginal likelihood for each model is available. Hence, for a given model with $u$ MUs, the target for inference is its marginal likelihood, $f(y_{1:T}|s_{1:T})$; throughout this section, for notational simplicity, we suppress the dependence on $u$. This can be expressed as a product of sequential predictive factors with each defined by: $$\begin{aligned} f\left(y_t |~ y_{1:t-1},~ s_{1:t}\right) & = \sum_{{\mathbf{x}}_{1:t-1}\in\mathcal{X}_{1:t-1}} f\left(y_t|~ {\mathbf{x}}_{1:t-1},~ y_{1:t-1},~ s_{1:t}\right) \mathbb{P}\left({\mathbf{x}}_{1:t-1} |~ y_{1:t-1},~ s_{1:t-1}\right)\label{eq:SeqPredFactor}\end{aligned}$$ where $\mathcal{X}_{1:t} = \{0,1\}^{ut}$ denotes the space for the sequence of vectors of historical firing events. The inference scheme is based upon two key observations. Firstly, the observation and excitability parameters are conditionally independent given the set of firing events ${\mathbf{x}}_{1:T}$.
Such an independence structure separates the observational and firing processes and simplifies the marginalisation of the parameter space for evaluating the marginal likelihood. Secondly, conditional on ${\mathbf{x}}_t$, the priors for the observation parameters in are nearly conjugate for the likelihood in . For a baseline measurement (which has ${\mathbf{x}}_t={\mathbf{0}}$) the posterior has the same form as the prior with tractable updates; the same would be true for a non-baseline measurement (${\mathbf{x}}_t\neq{\mathbf{0}}$) if it were possible to set ${\bar{\nu}}^{-1}=0$ and to ignore the further information on ${\bar{\mathcal{A}}}$; such an approximation is described and justified in Section \[sec:DetailObsProc\]. Subject to this approximation, the posterior distribution for the observational parameters after assimilating $y_{1:t}$ and conditional on ${\mathbf{x}}_{1:t}$ is defined by the sufficient statistics and (multivariate) Gaussian-gamma distributions analogous to the prior specification: $$\begin{aligned} {\bar{\mathcal{A}}}_t := \left\{{\bar{a}}_t,~ {\bar{b}}_t,~ {\bar{m}}_t,~ {\bar{c}}_t\right\} \quad \mathrm{and} \quad {\mathcal{A}}_t := \left\{a_t,~ b_t,~ {\mathbf{m}}_t,~ C_t\right\}, \label{eq:SSdef}\end{aligned}$$ $$\begin{aligned} {\bar{\nu}}|y_{1:t},{\mathbf{x}}_{1:t} & \sim \mathrm{Gam}\left({\bar{a}}_t,~ {\bar{b}}_t\right), \quad\quad {\bar{\mu}}|{\bar{\nu}},y_{1:t},{\mathbf{x}}_{1:t} \sim \mathrm{N}\left({\bar{m}}_t,~ {\bar{\nu}}^{-1}{\bar{c}}_t\right),\nonumber\\ \nu|y_{1:t},{\mathbf{x}}_{1:t} & \sim \mathrm{Gam}\left(a_t,~ b_t\right), \quad\quad {\boldsymbol{\mu}}|\nu,y_{1:t},{\mathbf{x}}_{1:t} \sim \mathrm{MVN}_u\left({\mathbf{m}}_t,~ \nu^{-1}C_t\right).\label{eq:ObsPriorApx}\end{aligned}$$ Given these assumptions, the marginal likelihood for the observation $y_t$ conditional on the firing vector ${\mathbf{x}}_t$ and sets ${\bar{\mathcal{A}}}_{t-1}$ and ${\mathcal{A}}_{t-1}$ has tractable form: $$\begin{aligned}
f\left(y_t|{\mathbf{x}}_t,{\bar{\mathcal{A}}}_{t-1},{\mathcal{A}}_{t-1}\right) & = \left\{ \begin{array}{ll} \mathsf{t}\left[y_t;~ {\bar{m}}_{t-1},~ \frac{{\bar{b}}_{t-1}}{{\bar{a}}_{t-1}}\left({\bar{c}}_{t-1}+1\right),~ 2{\bar{a}}_{t-1}\right] \quad & \mathrm{if} ~ {\mathbf{x}}_t={\mathbf{0}},\\ \mathsf{t}\left[y_t;~ {\bar{m}}_{t-1}+{\mathbf{x}}_t^\top{\mathbf{m}}_{t-1},~ \frac{b_{t-1}}{a_{t-1}}\left({\mathbf{x}}_{t}^\top C_{t-1}{\mathbf{x}}_{t}+{\mathbf{x}}_{t}^\top{\mathbf{1}}\right), ~2 a_{t-1}\right] \quad & \mathrm{otherwise}.\\ \end{array} \right.\label{eq:ObsMarginal}\end{aligned}$$ Here, $\mathsf{t}(y; m, v, n)$ denotes the Student’s t-density function on $n$ degrees of freedom with location parameter $m$ and scaling factor $\sqrt{v}$. The statistics ${\bar{\mathcal{A}}}_{t-1}$ and ${\mathcal{A}}_{t-1}$ are deterministic functions of $y_{1:t-1}$ and ${\mathbf{x}}_{1:t-1}$, and are sufficient in that $f(y_t|{\mathbf{x}}_{1:t},y_{1:t-1}) \equiv f(y_t|{\mathbf{x}}_{t},{\bar{\mathcal{A}}}_{t-1},{\mathcal{A}}_{t-1})$.

The posterior-predictive mass function for the next excitation vector, $\mathbb{P}\left({\mathbf{x}}_t|{\mathbf{x}}_{1:t-1},y_{1:t-1},s_{1:t}\right) = \mathbb{P}\left({\mathbf{x}}_t|{\mathbf{x}}_{1:t-1},s_{1:t}\right)$, is given by the following intractable marginalisation: $$\begin{aligned} \mathbb{P}\left({\mathbf{x}}_t|{\mathbf{x}}_{1:t-1},s_{1:t}\right) = \int \mathbb{P}\left({\mathbf{x}}_t|\eta_{1:u},\lambda_{1:u},s_t\right) \pi\left(\eta_{1:u},\lambda_{1:u}| {\mathbf{x}}_{1:t-1},s_{1:t-1}\right)~d\eta_{1:u}~d\lambda_{1:u}, \label{eq:ECmarg}\end{aligned}$$ where $\pi\left(\eta_{1:u},\lambda_{1:u}| {\mathbf{x}}_{1:t-1},s_{1:t-1}\right)$ is the posterior for the excitability parameters given the firing vectors to time $t-1$. Section \[sec:DetailFireProc\] presents a fast numerical quadrature scheme for evaluating to any desired accuracy.
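The tractable predictive density for $y_t$ can be sketched directly from its two-case form; the code below (illustrative only, stdlib implementation of the Student's t-density, names my own) evaluates it given the sufficient statistics.

```python
import math

def t_density(y, m, v, n):
    """Student's t density on n d.o.f. with location m and scaling sqrt(v)."""
    z = (y - m) ** 2 / (n * v)
    return (math.exp(math.lgamma((n + 1.0) / 2.0) - math.lgamma(n / 2.0))
            / math.sqrt(n * math.pi * v) * (1.0 + z) ** (-(n + 1.0) / 2.0))

def obs_predictive(y, x, bar_stats, stats):
    """f(y_t | x_t, Abar_{t-1}, A_{t-1}): baseline branch if no MU fires,
    otherwise the MU branch with quadratic form x' C x."""
    a_b, b_b, m_b, c_b = bar_stats          # baseline stats (a, b, m, c)
    a, b, m_vec, C = stats                  # MU stats (a, b, m-vector, C matrix)
    if not any(x):
        return t_density(y, m_b, b_b / a_b * (c_b + 1.0), 2.0 * a_b)
    loc = m_b + sum(xj * mj for xj, mj in zip(x, m_vec))
    xCx = sum(x[i] * C[i][j] * x[j]
              for i in range(len(x)) for j in range(len(x)))
    scale = b / a * (xCx + sum(x))
    return t_density(y, loc, scale, 2.0 * a)
```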
The marginalisations over the parameters in and together provide the predictive: $$\begin{aligned} f\left(y_t|~ {\mathbf{x}}_{1:t-1},~ y_{1:t-1},~ s_{1:t}\right) & = \sum_{{\mathbf{x}}_t\in\mathcal{X}_t} f\left(y_t|~ {\mathbf{x}}_{1:t},~ y_{1:t-1},~ s_{1:t}\right) \mathbb{P}\left({\mathbf{x}}_{t} |~ {\mathbf{x}}_{1:t-1},~ s_{1:t}\right).\label{eq:Ypred}\end{aligned}$$ Combination of with the historical firing event mass function $\mathbb{P}\left({\mathbf{x}}_{1:t}|y_{1:t},s_{1:t}\right)$ would provide the quantity $f(y_t|y_{1:t-1},~s_{1:t})$ in as required; however, it is infeasible to track $\mathbb{P}\left({\mathbf{x}}_{1:t}|y_{1:t},s_{1:t}\right)$ as the dimension of the event space increases exponentially with time. Instead, combining , and gives the conditional mass function for the current firing vector given all previous firing vectors and all WMTFs to date, $$\begin{aligned} \mathbb{P}\left({\mathbf{x}}_{t} |~ y_{1:t},~ {\mathbf{x}}_{1:t-1},~ s_{1:t}\right) & = \frac{f\left(y_t|~{\mathbf{x}}_{1:t},~y_{1:t-1},~s_{1:t}\right) \mathbb{P}\left({\mathbf{x}}_t|~{\mathbf{x}}_{1:t-1},~s_{1:t}\right)}{f\left(y_t|~{\mathbf{x}}_{1:t-1},~y_{1:t-1},~s_{1:t}\right)}. \label{eq:Xupdate}\end{aligned}$$ Expressions in and together lead to a fully adaptive sequential Monte Carlo (SMC) sampler which approximates the historical firing event mass function by the particle set $\left\{{\mathbf{x}}_{1:t}^{(i)}\right\}_{i=1}^{N}$, for a suitably large $N$, recursively updating the set for $t=1,\dots,T$. Algorithm \[tab:Alg\] presents the auxiliary SMC sampler [@Pit99] which, given the set of samples drawn from $\mathbf{X}_{1:t-1}|~y_{1:t-1},~ s_{1:t-1}$, creates an unweighted sample from the filtering distribution $\mathbf{X}_{1:t}|y_{1:t},s_{1:t}$, and approximates via Monte Carlo so as to update the marginal likelihood estimate $\hat{f}(y_{1:t}|~s_{1:t})$.
- Compute the weights $\omega_t^{(i)}=f(y_t|~{\mathbf{x}}_{1:t-1}^{(i)},~y_{1:t-1},~s_{1:t})$ and normalise: $\bar{\omega}_t^{(i)}=\omega_t^{(i)}/\sum_{k} \omega_t^{(k)}$.

- Sample auxiliary indices $\{\zeta^{(i)}\}_{i=1}^N$ with probabilities $\{\bar{\omega}_{t}^{(i)}\}_{i=1}^N$.

- Sample ${\mathbf{x}}_t^{(i)}$ with probability $\mathbb{P}\left({\mathbf{x}}_t|~y_t,~{\mathbf{x}}_{1:t-1}^{(\zeta_i)},~s_{1:t}\right)$.

- Set ${\mathbf{x}}_{1:t}^{(i)} = \left({\mathbf{x}}_{1:t-1}^{(\zeta_i)},~ {\mathbf{x}}_t^{(i)}\right)$.

- Set $\log \hat{f}\left(y_{1:t}|~s_{1:t},~u\right) = \log \hat{f}\left(y_{1:t-1}|~s_{1:t-1},~u\right) - \log N + \log \sum_i \omega^{(i)}_t$.

Although primary interest lies in the marginal-likelihood estimate, parameter inference is also available to assist in assessing the quality of fit. The deterministic map to the sufficient statistics from the set of firing events and responses permits the transformation from the final particle set $\{X_{1:T}^{(i)}\}_{i=1}^{N}$ to an $N$-component Gaussian-gamma mixture approximating the posterior distribution for the observation parameters. A similar transformation for evaluating the posterior distribution for the excitability parameters is derived from the approximation to the prior; see Section \[sec:DetailFireProc\] for details.

### Equivalent particle specification and degeneracy

The Bayesian conjugate structure for the observation process suggests storing and updating the sufficient statistics when assimilating the latest observations. Given the prior statistics ${\bar{\mathcal{A}}}_0$ and ${\mathcal{A}}_0$, there is a deterministic map from $({\mathbf{x}}_{1:t-1},~ y_{1:t-1})$ to $({\bar{\mathcal{A}}}_{t-1},~{\mathcal{A}}_{t-1})$. Hence estimates relating to the observation process at time $t$ are equivalently expressed with respect to the samples $\{{\bar{\mathcal{A}}}_{t-1}^{(i)},~ {\mathcal{A}}_{t-1}^{(i)},~ {\mathbf{x}}_t^{(i)}\}_{i=1}^N$; the storage required for this set does not increase with the number of observations assimilated.
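The structure of one assimilation step of an auxiliary SMC sampler of this kind (resample by predictive weight, propagate, accumulate the log marginal-likelihood increment) can be sketched generically; `smc_step`, `predictive` and `propagate` are hypothetical names, with the model-specific densities supplied as callables.

```python
import math
import random

def smc_step(particles, predictive, propagate, rng):
    """One auxiliary-SMC update. predictive(p) returns
    f(y_t | x_{1:t-1}, y_{1:t-1}, s_{1:t}) for particle p; propagate(p)
    samples x_t from P(x_t | y_t, x_{1:t-1}, s_{1:t}) and returns the
    extended particle. Returns the new particle set and the increment
    log(sum_i w_i) - log(N) to the log marginal-likelihood estimate."""
    n = len(particles)
    w = [predictive(p) for p in particles]
    total = sum(w)
    # multinomial resampling with probabilities w_i / total
    cum, acc = [], 0.0
    for wi in w:
        acc += wi / total
        cum.append(acc)
    resampled = []
    for _ in range(n):
        r = rng.random()
        idx = next((i for i, c in enumerate(cum) if r <= c), n - 1)
        resampled.append(particles[idx])
    log_ml_inc = math.log(total / n)
    return [propagate(p) for p in resampled], log_ml_inc
```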
Since the method relies on these sufficient statistics, Algorithm \[tab:Alg\] may be considered as a case of particle learning [@Car10]. For notational clarity, however, the particle set is described in terms of the historical firing events, ${\mathbf{x}}_{1:t-1}$, unless otherwise specified. Assimilating the observation, $y_{\tau}$, at the supramaximal stimulus, $s_{\tau}$, before any of the other non-baseline observations ensures an update for *all* MUs, $j$, from the initial vague priors for each $\mu_j$. After this, each $m_j\approx y_{\tau}/u$, ensuring more sensible predictions in and hence , when a new MU fires. This helps to mitigate against the inevitable particle degeneracy that occurs with particle learning. Further mitigation is achieved by iteratively re-running the algorithm with more and more particles until inferences are stable (see Appendix \[sec:APX\]). Details for the firing vector and excitability parameters {#sec:DetailFireProc} --------------------------------------------------------- At time $t-1$, each particle sample consists of a historical sequence of firing events for all MUs, and from this an associated joint posterior for the firing parameters, $\eta_{1:u}$ and $\lambda_{1:u}$, is derived. A representation of the distribution for the excitability parameters is sought that is analogous to that described for the observation parameters in that it should (a) permit simple calculation of the firing event predictive , (b) be deterministically updatable when assimilating the current measurement, and (c) provide a concise and sufficient description for the posterior distribution. 
From the independence of MU firing under Assumption A1 and the excitability parameter prior in , it follows that the predictive for the firing event ${\mathbf{x}}_t$ in factorises: $$\begin{aligned} \mathbb{P}\left({\mathbf{x}}_t|~ {\mathbf{x}}_{1:t-1},~ s_{1:t}\right) & = \prod_{j=1}^{u} \iint \mathbb{P}\left(x_{j,t}|~\eta_j,~\lambda_j,~s_t\right) \pi\left(\eta_j,~ \lambda_j|~x_{j,1:t-1},~s_{1:t-1}\right) d\eta_j~d\lambda_j,\label{eq:ECj_marg}\end{aligned}$$ where the posterior at time $t-1$ for the excitability parameters associated with MU $j$ is: $$\begin{aligned} \pi\left(\eta_j,~ \lambda_j|~x_{j,1:t-1},~s_{1:t-1}\right) & \propto \prod_{r=1}^{t-1} \mathbb{P}\left(x_{j,r}|~\eta_j,~\lambda_j,~s_r\right) \pi\left(\eta_j\right) \pi\left(\lambda_j\right).\label{eq:ECj_post}\end{aligned}$$ Regardless of the excitability curve definition, this product of firing probabilities does not lead to a simple conjugate structure with a concise set of sufficient statistics for the posterior distribution. Furthermore, whilst for specific values of $(\eta_j,\lambda_j)$ the update in may be performed sequentially, the integrations for the normalising constant in and the expectation in require evaluation of the product at arbitrary values in a continuum. To address these issues, the following approximation is proposed:

- (B1) For each MU, store and update at each time point a surface proportional to the posterior density $\pi(\eta_j,\lambda_j|x_{j,1:t-1},s_{1:t-1})$ at a grid of points on a regular rectangular lattice $\mathcal{G}$ spanning the excitability parameter space. For general $(\eta_j,\lambda_j)$, approximate the right-hand side of using bilinear interpolation from the four nearest grid points.
Under this assumption, let $h(\eta,~\lambda)$ be the right-hand side of ; then $\tilde{h}(\eta,~\lambda)$, the interpolated surface specified using points on the unit square in which $(\eta,\lambda)$ resides, is: $$\begin{aligned} \tilde{h}(\eta,\lambda) & = (1-\eta)(1-\lambda)h(0,0) + (1-\eta)\lambda h(0,1) + \eta(1-\lambda)h(1,0) + \eta\lambda h(1,1),\end{aligned}$$ with a similar approximation for $\mathbb{P}\left(x_{j,t}|\eta_j,\lambda_j,s_t\right) h(\eta_j, \lambda_j)$ based on interpolating this between grid points. The resulting approximations for the normalising constant in and the integral in , therefore, correspond to iterative (over the two dimensions) application of the compound trapezium rule. This approach provides a deterministic updating procedure for maintaining the excitability posterior density up to a constant of proportionality for each point on the regular lattice.

A naïve implementation of the above scheme would evaluate the posterior density for each grid point, MU and particle sample. However, two posterior densities will only differ if the historical firing events differ. Consider any two particles, $i$ and $i'$, each with an associated MU, $j$ and $j'$ respectively, that have identical firing histories: $x_{j,1:t}^{(i)} = x_{j',1:t}^{(i')}$. Since the priors for the excitability parameters are identical for all MUs then the posterior distributions for these two MUs on these two particles are identical. Efficiency gains are therefore achieved by storing a single grid of values for each unique firing pattern to date. A higher-order Newton-Cotes numerical integration method would produce a more accurate estimate of , but the associated interpolated density surface of piecewise polynomials would not be guaranteed to be bounded below by zero, making an inspection of parameter estimates for assessing model fit problematic.
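The two numerical ingredients just described, bilinear interpolation on a unit cell and the compound trapezium rule applied iteratively over the two lattice dimensions, can be sketched as follows (illustrative stdlib code, names my own).

```python
def bilinear(h00, h01, h10, h11, eta, lam):
    """Bilinear interpolation on the unit square, matching the interpolant
    tilde-h above: corners h(0,0), h(0,1), h(1,0), h(1,1)."""
    return ((1.0 - eta) * (1.0 - lam) * h00 + (1.0 - eta) * lam * h01
            + eta * (1.0 - lam) * h10 + eta * lam * h11)

def trapezium_2d(grid, d_eta, d_lam):
    """Compound trapezium rule over a regular lattice of surface values,
    applied first over the lambda dimension then over eta."""
    rows = [(0.5 * r[0] + sum(r[1:-1]) + 0.5 * r[-1]) * d_lam for r in grid]
    return (0.5 * rows[0] + sum(rows[1:-1]) + 0.5 * rows[-1]) * d_eta
```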
Alternatively, quadrature on adaptive sparse grids [@Bun03], where the grid is finer at regions of high curvature, could improve estimator accuracy over the static regular rectangular lattice. However, this would be achieved at the expense of additional implementation complexity and further approximation error when estimating the surface at infilled lattice points. Details concerning the observation process {#sec:DetailObsProc} ------------------------------------------ Consider the observation model . At time $t\le \tau-1$, when no MUs fire, ${\mathbf{x}}_t = {\mathbf{0}}$, the observation, $y_t$, provides no new information about the observation parameters for the MUs, ${\mathcal{A}}_t = {\mathcal{A}}_{t-1}$, and $Y_{j,t}|{\mathbf{x}}_t=0,{\bar{\mu}},{\bar{\nu}},{\boldsymbol{\mu}},\nu\sim \mathrm{N}({\bar{\mu}},{\bar{\nu}}^{-1})$. Standard conjugate updates may, therefore, be applied to obtain ${\bar{\mathcal{A}}}_t$ as follows: $$\begin{aligned} {\bar{m}}_t = {\bar{m}}_{t-1} + \frac{y_t-{\bar{m}}_{t-1}}{1+{\bar{c}}_{t-1}},\quad {\bar{c}}_t = \frac{{\bar{c}}_{t-1}}{1+{\bar{c}}_{t-1}},\quad {\bar{a}}_t = {\bar{a}}_{t-1} + \frac{1}{2},\quad {\bar{b}}_t = {\bar{b}}_t + \frac{(y_t-{\bar{m}}_{t-1})^2}{2(1+{\bar{c}}_{t-1})}. $$ When $t\ge \tau$, at least one MU fires and tractable updates are not possible. However, in real experiments, because of the precautions detailed in Section \[sec:Intro\], the variance (and expectation) of the baseline noise are generally much smaller than the variability in response from a given MU when it fires. For example [@Hen06] find a ratio of an order of magnitude. We, therefore make the following approximation: - When assimilating a non-baseline observation, ${\bar{\mathcal{A}}}$ is kept fixed at its previous value, and for updating ${\mathcal{A}}$ it is assumed that ${\bar{\nu}}^{-1}=0$. 
Approximation B2 implies that for ${\mathbf{x}}_t\neq{\mathbf{0}}$, $$\begin{aligned} Y_t|~ {\bar{\nu}},~ {\boldsymbol{\mu}},~ \nu,~ {\bar{\mathcal{A}}}_{t-1},~ {\mathcal{A}}_{t-1} & \stackrel{apx.}{\sim} \mathrm{N} \left(~{\bar{m}}_{t-1} + {\mathbf{x}}_t^\top{\boldsymbol{\mu}},~ \nu^{-1}{\mathbf{x}}_t^\top{\mathbf{1}}~ \right),\end{aligned}$$ which, given distributions at time $t-1$ as specified in , leads to the desired tractable updates for the sufficient statistics for as follows: ${\bar{\mathcal{A}}}_t = {\bar{\mathcal{A}}}_{t-1}$ and $$\begin{aligned} {\mathbf{m}}_t &= {\mathbf{m}}_{t-1} + q_t C_{t-1}{\mathbf{x}}_t(y_t - {\bar{m}}_{t} - {\mathbf{x}}_t^\top{\mathbf{m}}_{t-1}) & C_t &= C_{t-1} - q_t C_{t-1} {\mathbf{x}}_t {\mathbf{x}}_t^\top C_{t-1} \nonumber\\ a_t &= a_{t-1} + \frac{1}{2} & b_t &= b_{t-1} + \frac{q_t}{2}(y_t - {\bar{m}}_{t} - {\mathbf{x}}_t^\top{\mathbf{m}}_{t-1})^2,\end{aligned}$$ where $q_t=({\mathbf{x}}_t^\top{\mathbf{1}}+ {\mathbf{x}}_t^\top C_{t-1} {\mathbf{x}}_t)^{-1}$. In essence, the approximate observation process decouples the learning about the observational parameters: when no MU fires then $({\bar{\mu}},{\bar{\nu}})$ is updated, else $({\boldsymbol{\mu}},\nu)$ is updated. After assimilating the baseline observations $y_1,\dots,y_{\tau-1}$, both ${\bar{\nu}}^{-1}$ and ${\bar{\mu}}$ are known (and known to be small) with considerable certainty. Thus, approximating ${\bar{\nu}}^{-1}$ as $0$ and considering ${\bar{\mu}}$ to be a point mass at ${\bar{m}}$ is reasonable. Furthermore, the prior for $\nu$ does not need to be set until just before the observation $y_{\tau}$ is assimilated. Given the tight posterior for ${\bar{\nu}}$ at this juncture it is, therefore, possible to incorporate the knowledge that ${\bar{\nu}}\gg\nu$ into the vague prior for $\nu$ (which is conceptually equivalent to specifying an initial joint prior on ${\bar{\nu}}$ and $\nu$).
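The non-baseline update above is a rank-one update of $({\mathbf{m}}, C)$ together with a half-unit increment of the shape; a pure-Python sketch (illustrative, names my own) makes the recursion explicit for small $u$.

```python
def update_mu_nu(m, C, a, b, x, y, m_bar):
    """Rank-one conjugate update of (m, C, a, b) for a non-baseline
    observation y with firing vector x, under approximation B2
    (baseline precision treated as infinite, baseline mean m_bar fixed)."""
    u = len(m)
    Cx = [sum(C[i][j] * x[j] for j in range(u)) for i in range(u)]
    q = 1.0 / (sum(x) + sum(x[i] * Cx[i] for i in range(u)))
    resid = y - m_bar - sum(x[i] * m[i] for i in range(u))
    m_new = [m[i] + q * Cx[i] * resid for i in range(u)]
    C_new = [[C[i][j] - q * Cx[i] * Cx[j] for j in range(u)] for i in range(u)]
    return m_new, C_new, a + 0.5, b + 0.5 * q * resid ** 2
```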
Letting ${\bar{\nu}}_{\tau-1}^{\mbox{\scriptsize med}}$ denote the posterior median of ${\bar{\nu}}$ at time $\tau-1$, tuning parameters $\epsilon\ll 1$ and $\delta\ll 1$ are chosen so that $\mathbb{P}(\nu>\epsilon{\bar{\nu}})\approx \delta$. Given that $$\begin{aligned} 1-\delta = \mathbb{P}(\nu\leq\epsilon{\bar{\nu}}) \approx \mathbb{P}(\nu\leq\epsilon{\bar{\nu}}_{\tau-1}^{\mbox{\scriptsize med}}) = \mbox{\textsf{Gam}}(b_{\tau-1}\epsilon{\bar{\nu}}_{\tau-1}^{\mbox{\scriptsize med}};~a_{\tau-1}),\label{eq:DefineNuPrior}\end{aligned}$$ where $\mbox{\textsf{Gam}}(x; \alpha)$ is the cumulative distribution function evaluated at $x$ of a gamma random variable with shape $\alpha$ and unit rate, a practical specification for the prior for $\nu$ is obtained by defining a small $a_{\tau-1}=a_0$ and then solving for $b_{\tau-1}$.

Improving the marginal likelihood estimate {#sec:ML}
------------------------------------------

The following post-processing development is motivated by the analysis of a particular simulated dataset where the point estimate for the number of MUs is one greater than the true number. The detailed analysis in Section \[sec:SimStudyOver\] shows that the extra MU has a very weak expected MUTF and that it, effectively, acts simply to increase the variability in the response. The problem arises because the $u$-vector, ${\boldsymbol{\mu}}$, of expected MUTF contributions has a Gaussian prior which, to allow reasonable uncertainty across the typical range of believable MUTF contributions, also places a non-negligible prior mass at low and even negative values. Negative expectations for an individual MU need not be prohibited by the data provided that MU is always inferred to fire alongside another MU with a positive expectation of similar or larger magnitude. The fact that the parameter support permits this possibility potentially increases the marginal likelihood for a model which is larger than that necessary to explain the data.
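Returning to the implicit specification of $b_{\tau-1}$ above: solving $\mbox{\textsf{Gam}}(b_{\tau-1}\epsilon{\bar{\nu}}_{\tau-1}^{\mbox{\scriptsize med}};~a_0)=1-\delta$ numerically only needs a gamma quantile. A stdlib-only sketch (my own code; the series expansion of the regularised incomplete gamma function is adequate for the small arguments involved) is:

```python
import math

def reg_lower_gamma(a, x, tol=1e-12):
    """Regularised lower incomplete gamma function P(a, x) via the
    standard series expansion."""
    if x <= 0.0:
        return 0.0
    term = 1.0 / a
    total = term
    n = 0
    while abs(term) > tol * abs(total):
        n += 1
        term *= x / (a + n)
        total += term
    return total * math.exp(a * math.log(x) - x - math.lgamma(a))

def gamma_quantile(a, p):
    """Invert P(a, x) = p by interval doubling then bisection."""
    hi = 1.0
    while reg_lower_gamma(a, hi) < p:
        hi *= 2.0
    lo = 0.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if reg_lower_gamma(a, mid) < p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def prior_rate_for_nu(a0, eps, delta, nubar_med):
    """Rate b such that P(nu > eps * nubar_med) ~ delta when
    nu ~ Gam(a0, b) in the shape-rate parameterisation."""
    return gamma_quantile(a0, 1.0 - delta) / (eps * nubar_med)
```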
Guaranteeing positive MUTFs greater than some minimum [@Bro03; @Maj07] would require a change to the likelihood. The approach taken in @Rid06 is to specify independent left-truncated gamma prior distributions for the expected MUTFs, $\mu_j$ for $j=1,\ldots,u$. However, any such change would not lead to the tractable updates required for the concise sequential analysis described in Sections \[sec:Overview\] and \[sec:DetailObsProc\]. Within the constraints of the algorithm overviewed in Section \[sec:Overview\], the natural mechanism for preventing these undesirable scenarios is via post-processing: the conditional prior for ${\boldsymbol{\mu}}|\nu$ in is re-calibrated by truncating it to the region $M=[{\mu_{\min}},\infty)^u$ for some minimum MUTF ${\mu_{\min}}$. It follows that the re-calibrated marginal prior for ${\boldsymbol{\mu}}$ is: $$\begin{aligned} \tilde{\pi}({\boldsymbol{\mu}}) & = \frac{1}{\pi(M)}\pi({\boldsymbol{\mu}})\mathbb{I}_M({\boldsymbol{\mu}}),\label{eq:TruncMU}\end{aligned}$$ where $\pi({\boldsymbol{\mu}})$ is the multivariate Student’s t-density centred at ${\mathbf{m}}_0$ with shape matrix $\frac{b_0}{a_0}C_0$ and $2a_0$ degrees of freedom, and, with a slight abuse of notation, $\pi(M)=\int_M\pi({\boldsymbol{\mu}})d{\boldsymbol{\mu}}$. The effect on the marginal likelihood from the prior re-calibration is examined by Proposition \[prop:AdjML\]. \[prop:AdjML\] Let $f(y_{1:T}|s_{1:T})$ denote the marginal likelihood defined in Section \[sec:Overview\]. The re-calibrated marginal likelihood, denoted by $\tilde{f}(y_{1:T}|s_{1:T})$, resulting from truncating the prior for ${\boldsymbol{\mu}}$ in to $M$ is: $$\begin{aligned} \tilde{f}(y_{1:T}|s_{1:T}) = \frac{\pi(M|y_{1:T},s_{1:T})}{\pi(M)} f(y_{1:T}|s_{1:T}) = \frac{\pi(M|y_{1:T},s_{1:T})}{\pi(M|y_{1:\tau-1},s_{1:\tau-1})} f(y_{1:T}|s_{1:T}), \label{eq:recalib}\end{aligned}$$ where $\pi(M|y_{1:t},s_{1:t}) = \int_M \pi({\boldsymbol{\mu}}|y_{1:t},s_{1:t}) d{\boldsymbol{\mu}}$.
Expressing the re-calibrated marginal likelihood as a marginalisation of ${\boldsymbol{\mu}}$ and substituting the definition produces the first equality: $$\begin{aligned} \tilde{f}(y_{1:T}|s_{1:T}) =\int \tilde{\pi}({\boldsymbol{\mu}}) f(y_{1:T}|{\boldsymbol{\mu}},s_{1:T}) d{\boldsymbol{\mu}}= \frac{\int_M \pi({\boldsymbol{\mu}})f(y_{1:T}|{\boldsymbol{\mu}},s_{1:T}) d{\boldsymbol{\mu}}}{\pi(M)} = \frac{\pi(M|y_{1:T},s_{1:T})}{\pi(M)} f(y_{1:T}|s_{1:T}).\end{aligned}$$ The second equality in arises as the first $\tau-1$ observations relate exclusively to the baseline. Evaluation of $\pi(M|y_{1:\tau-1},s_{1:\tau-1})$ is straightforward since it is an orthant probability for the multivariate Student-t distribution. In contrast, the posterior probability is estimated from the $N$-component-mixture approximation of the posterior distribution by the final particle set: $$\begin{aligned} \hat{\pi}(M|y_{1:T},s_{1:T}) & = \frac{1}{N} \sum_{i=1}^{N} \pi(M|{\mathbf{x}}_{1:T}^{(i)}, y_{1:T}).\end{aligned}$$ There is no theoretical argument against assuming from the outset. Indeed, the firing events sampled in the propagation step of Algorithm \[tab:Alg\] would then account for the restriction to the ${\boldsymbol{\mu}}$ parameter space, thereby directing particle samples toward a more appropriate approximation of the posterior parameter estimates. However, implementing this scheme requires at most $N2^u$ orthant evaluations of the multivariate Student’s t-distribution per time step in calculating the re-sampling weights. Standard procedures for evaluating these orthant probabilities [@Genz09] are expensive, so the computational time of the resulting SMC-MUNE algorithm would increase substantially.

Simulation study {#sec:SimStudy}
================

The performance of the SMC-MUNE algorithm is now assessed using $200$ simulated data sets, $20$ for each true number of MUs of $u^*=1,\ldots,10$.
Each data set consists of $T=220$ measurements with $\tau=21$ so that the first $20$ observations correspond to the baseline, $s_t=0$V, and these are followed by the supramaximal stimulus $s_{21}=40$V. All MUs are excited according to the log-logistic curve with MU parameters simulated anew for each dataset as follows: $\eta_j\sim \mathsf{Unif}(5,40)$, $\lambda_j\sim\mathsf{Gamma}(2,8)\mathbb{I}(\lambda_j<10)$, $\mu_j\sim \mathsf{N}(40,20^2)\mathbb{I}(\mu_j>20)$, $\nu^{-1}\sim \mathsf{Unif}(1,5)$. The measurement units for excitation parameters are all in V and the expected MUTFs are in mN with variance parameter in mN${}^2$. Parameters were independent except for the following constraints, where $(j)$ is the index of the MU with the $j$th highest $\eta$ value: $\eta_{(j)}-\eta_{(j-1)}>2$ (neighbours must be separate) and $|\mu_{(j)}-\mu_{(j-1)}|>4$ (neighbours should have distinct expectations). To test the ability of SMC-MUNE, these values ensure a greater range in MUTF contributions and in the excitation parameters than is typically observed in practice. For example, the average coefficient of variation is $11.2\%$ (as opposed to $1.65\%$; see Section \[sec:PriorDist\]). This resulted in $79\%$ of datasets containing at least one alternation event. Additional noise was generated as in with ${\bar{\mu}}=0$mN and ${\bar{\nu}}^{-1}=0.25^2$mN${}^2$. To each data set, a set of neuromuscular models was fitted with a number of MUs, $u$, ranging from $1$ up to a maximum size of ${u_{\max}}=12$. The sufficient statistics for the parameter prior distributions are provided in Appendix \[sec:APX\]. To control the Monte Carlo variability and the error in the numerical integration, the particle set size, $N$, and the lattice size for numerical integration were iteratively increased until estimates of the marginal likelihood were stable; see Appendix \[sec:APX\] for further details.
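The simulation scheme just described can be sketched as follows. Note that the precise log-logistic parametrisation of the excitability curve is defined earlier in the paper, not in this excerpt, so the form below (median $\eta$, slope governed by $\eta/\lambda$) is an assumption for illustration only, as are all names:

```python
import random

def p_fire(s, eta, lam):
    """Assumed log-logistic excitability: P(fire | stimulus s).
    Median at eta; lam controls the slope (larger lam -> shallower curve)."""
    if s <= 0:
        return 0.0
    return 1.0 / (1.0 + (s / eta) ** (-eta / lam))

def simulate(stimuli, etas, lams, mus, sd_mu, sd_base, rng):
    """Generate WMTF observations: each MU fires independently with
    probability p_fire and contributes its expected MUTF; firing noise
    scales with the number of active MUs (the nu^{-1} x' 1 term), plus
    a small baseline noise everywhere."""
    ys = []
    for s in stimuli:
        fired = [j for j in range(len(etas))
                 if rng.random() < p_fire(s, etas[j], lams[j])]
        y = sum(mus[j] for j in fired)
        y += rng.gauss(0.0, sd_mu) * len(fired) ** 0.5
        y += rng.gauss(0.0, sd_base)
        ys.append(y)
    return ys
```

With a baseline block of zero stimuli followed by a supramaximal stimulus and a staircase, this mimics the structure (though not the exact protocol) of the simulated data sets above.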
The point estimate of the number of MUs is taken to be the maximum *a posteriori* (MAP) model, $\hat{u}$, and uncertainty in the estimate is quantified by the 95% highest posterior credible set (HPCS): the minimal set of models whose total posterior probability is at least 95%. In addition, the estimated posterior probability for the true model, $p_{u^*} = \hat{\mathbb{P}}(U=u^*|y_{1:T},s_{1:T})$, is evaluated.

  Number of MUs, $u^*$                           $\leq 5$   6       7       8       9       10
  ---------------------------------------------- ---------- ------- ------- ------- ------- -------
  No. where $\hat{u}=u^*$                        100        19      19      16      15      12
  No. where $u^*$ in HPCS                        100        20      20      19      19      20
  Avg. size of HPCS                              1.11       1.70    2.10    2.05    2.35    2.45
  Avg. $\hat{p}_{u^*}$ (%)                       97.95      89.20   80.45   69.42   62.70   54.68
  Avg. particle set size for $u^*$               5000       5250    6000    7500    7500    8250
  Avg. particle set size for $\hat{u}$           5000       5000    6000    7500    9250    7750
  Avg. $n{\times}n$ lattice size for $u^*$       30.0       30.5    30.5    32.0    32.5    32.0
  Avg. $n{\times}n$ lattice size for $\hat{u}$   30.0       30.0    30.5    32.0    35.0    32.0

  : Summary of the MU-number posterior mass functions and required numerical resource for 200 simulated data sets.[]{data-label="tab:SimSummary"}

Table \[tab:SimSummary\] presents summaries of the mass functions of the number of MUs and descriptions of the resource required as functions of the true number of MUs. The MAP estimate corresponded to the true number of MUs for all data sets generated from five or fewer MUs, and for most of these datasets the HPCS contained only the true model. For true sizes greater than five the MAP estimate was correct for $81$ of the $100$ data sets and the HPCS contained the truth for all but two data sets. It is unsurprising that the uncertainty in the MU-number increases with the true number of MUs; this is visible both as an increase in the average size of the HPCS and a reduction in the average posterior probability for the true number.
In addition, both the number of particles required to control Monte Carlo variability and the size of the numerical lattice required for accurate numerical integration also increase with the true number of MUs. This demonstrates the challenge of MUNE for large neuromuscular systems that possess complex features resulting from alternation. Of the $19$ datasets where the MAP estimate $\hat{u}$ did not correspond to the truth, $u^*$, one dataset had $\hat{u}>u^*$ with the rest (including the two outliers) having $\hat{u}<u^*$. The stimulus-response curves for the first case and a typical example of the second are presented in Figure \[fig:SimCaseData\] and are discussed in turn below.

![Stimulus-response curve (top) for the simulated data with lines representing the expected WMTF over the stimuli intervals where the joint firing probability is greater than 5% according to the individual excitability curves (bottom). Left: Data set D1 contains $u^*=7$ MUs but $\hat{u}=8$. Right: Data set D2 contains $u^*=8$ MUs but $\hat{u}=7$. Circle points: additional 23 simulations over the 23–32V alternation period involving 5 MUs.[]{data-label="fig:SimCaseData"}](figures/SRcurve_sim2.pdf){width="80.00000%"}

Over-estimation {#sec:SimStudyOver}
---------------

The first data set, D1, contains $u^*=7$ MUs in truth but the SMC-MUNE method produces a MAP estimate of $\hat{u}=8$. The posterior probability of the true model is $\hat{p}_{u^*}=14.9\%$ and this model, along with the larger 9 MU model, is contained within the 95% HPCS. Parameter estimates for the MAP model (Table \[tab:D1muEst\]) show that the penultimate MU has a median expected twitch force of 9.6mN with a credible upper bound of 15.7mN, much lower than the 20mN simulation threshold. Figure \[fig:PredDen\_S37\] presents the construction of the predictive WMTF density for the true and MAP models at a 37V stimulus.
The local modes in the model containing the true MU-number correspond uniquely to particular firing combinations. In contrast, the weak MU in the MAP model principally serves to increase the variability around a specific WMTF response level rather than describing a distinct MU. In light of these concerns, the marginal likelihood estimates are adjusted according to Section \[sec:ML\] with a conservative lower bound of ${\mu_{\min}}=15$mN to guard against small MUs that, when firing, are indistinguishable from other combinations. The corrected posterior mass function places 89.3% of the mass on the correct, seven-MU model, with 10.7% mass on the eight-MU model. The estimates of expected MUTF in Table \[tab:D1muEst\] for the seven-MU model are similar to those prior to the adjustment and are still close to the true values from which the data was generated. However, the prior adjustment for the eight-MU hypothesis has a significant effect on the penultimate MU and, so as to preserve the overall maximum WMTF, induces a small reduction in the estimated $\mu$s for its neighbouring MUs.
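The predictive WMTF density discussed above is a mixture over the $2^u$ firing combinations. A hedged sketch, assuming independent firing and Gaussian components as a plug-in simplification (the exact predictive, with $\nu$ marginalised, is Student-t; all names are illustrative):

```python
import itertools
import math

def predictive_density(y, p_fire, mu, var_unit, var_base, base_mean=0.0):
    """WMTF predictive density as a mixture over the 2^u firing combinations.

    p_fire[j]: firing probability of MU j at the given stimulus.
    mu[j]:     expected MUTF of MU j (mN).
    var_unit:  per-firing-MU noise variance (nu^{-1} plug-in), so that the
               component variance scales as x' 1 per the observation model.
    var_base:  baseline noise variance, present in every component.
    """
    dens = 0.0
    for combo in itertools.product([0, 1], repeat=len(p_fire)):
        w = math.prod(p if f else 1.0 - p for f, p in zip(combo, p_fire))
        mean = base_mean + sum(f * m for f, m in zip(combo, mu))
        var = var_unit * sum(combo) + var_base
        dens += w * math.exp(-0.5 * (y - mean) ** 2 / var) \
                  / math.sqrt(2 * math.pi * var)
    return dens
```

Evaluating this on a grid of $y$ values reproduces the kind of multi-modal predictive curve shown in Figure \[fig:PredDen\_S37\], with each local mode tied to a firing combination.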
  Parameter                   $\mu_6$             $\mu_7$             $\mu_8$             $\nu^{-1}$
  --------------------------- ------------------- ------------------- ------------------- -------------------
  True                        40.2mN              87.9mN              –                   4.54 mN${}^{2}$
  $u=7$                       40.5 (37.7, 43.5)   91.2 (86.6, 95.9)   –                   3.85 (3.13, 4.81)
  $u=8$                       36.3 (32.4, 40.2)   9.6 (4.7, 15.7)     86.7 (80.3, 92.7)   3.22 (2.57, 4.18)
  $u=7$ & ${\mu_{\min}}=15$   40.5 (37.8, 40.2)   91.3 (86.8, 95.7)   –                   3.90 (3.14, 4.92)
  $u=8$ & ${\mu_{\min}}=15$   35.7 (31.1, 40.0)   15.7 (15.0, 20.7)   80.4 (71.4, 86.6)   3.23 (2.60, 4.09)

  : Expected MUTF median and 95% credible interval estimates for MUs with high excitation threshold from the true ($u^*=7$) and MAP ($\hat{u}=8$) models, with and without post-process truncation (${\mu_{\min}}=15$mN) on data set D1.[]{data-label="tab:D1muEst"}

![Predictive density (thick line) at stimulus 37V from the seven (left) and eight (right) MU model without post-process adjustment. Thin lines identify the contribution to the predictive for the indicated firing combinations associated to the final few MUs. In both cases, the first five MUs fire with near certainty. Most firing combinations with negligible predictive probabilities are omitted from the plot.[]{data-label="fig:PredDen_S37"}](figures/PredDen_D1_S37_version2.pdf){width="80.00000%"}

Under-estimation {#sec:SimStudy\_under}
----------------

The second data set, D2, contains $u^*=8$ MUs and presents a period of alternation between 23–32V which involves five MUs. The SMC-MUNE procedure, however, estimates $\hat{u}=7$ and gives this a high posterior probability of 97.1% after applying the post-process adjustment at ${\mu_{\min}}=15$mN.
The main source for this under-estimation arises through the over-estimation of the excitability scale parameter (Table \[tab:lamMAX\]) for the fourth MU ($\lambda_4$), so that the stimulus interval for probabilistic firing behavior is nearly three times wider than it should be. Consequently, this incorrectly estimated MU acts as a surrogate for MU-number $6$, which has similar twitch force properties. One potential solution is to reduce the upper bound for the scale parameter ${\lambda_{\max}}$ in to constrain estimation against shallow excitability curves. Table \[tab:lamMAX\] presents scale parameter estimates for selected MUs at the original (${\lambda_{\max}}=14$V) and reduced (${\lambda_{\max}}=7$V) upper bounds. Under the reduced bound the 8 MU model becomes a member of the HPCS, but the MAP estimate remains at $\hat{u}=7$ with a high posterior probability of 94.6%. Although a further reduction to ${\lambda_{\max}}$ might be appealing, this action is likely to be detrimental in determining good model fits. For example, the scale parameter of the first MU, which has a true value of 5.0V is accurately estimated whether ${\lambda_{\max}}$ is $7$ or $14$, principally because its excitability curve is well separated from the other curves, but a further reduction in $\lambda_{\max}$ risks the introduction of an additional, spurious MU to explain the low-stimulus observations. 
[lcP[18mm]{}P[18mm]{}P[18mm]{}P[18mm]{}P[18mm]{}P[18mm]{}]{}
 & True & & &\
$u$ & 8 & 7 & 8 & 7 & 8 & 7 & 8\
$\mathbb{P}(u|y)$ & – & 96.7% & 3.3% & 94.6% & 5.4% & 28.7% & 71.3%\
$\eta_4$ & 26.0V & 26.9 (25.4, 28.8) & 26.6 (25.3, 28.6) & 27.0 (25.4, 28.9) & 26.5 (25.2, 28.5) & 26.7 (25.7, 27.7) & 26.4 (25.6, 27.6)\
$\eta_5$ & 27.3V & 27.8 (26.0, 29.1) & 27.4 (25.5, 28.9) & 27.8 (25.9, 29.0) & 27.5 (25.6, 28.9) & 27.4 (26.2, 28.5) & 27.2 (25.9, 28.3)\
$\eta_6$ & 27.9V & – & 27.9 (26.5, 29.5) & – & 27.9 (26.6, 29.5) & – & 27.5 (26.5, 28.8)\
$\lambda_4$ & 1.8V & 4.5 (1.8, 7.6) & 3.6 (1.0, 7.6) & 4.3 (1.9, 6.6) & 3.1 (0.7, 6.3) & 4.0 (1.8, 7.8) & 2.5 (0.9, 6.3)\
$\lambda_5$ & 3.6V & 4.1 (2.2, 7.3) & 4.7 (1.8, 7.9) & 4.0 (2.1, 6.5) & 4.4 (1.7, 6.6) & 3.7 (2.1, 6.3) & 4.6 (1.8, 8.3)\
$\lambda_6$ & 4.8V & – & 4.7 (1.6, 8.1) & – & 4.4 (1.4, 6.6) & – & 4.4 (2.3, 7.5)\

In the original analysis, $\lambda_4$ is mis-estimated because of the limited information available in the observations to adequately describe the period of alternation between 23–32V which involves five MUs. To show that this is the case, an additional 23 observations were generated evenly over this interval; see Figure \[fig:SimCaseData\]. This modest addendum to the data set is sufficient for the true model to be identified, $\hat{u}=8$, with a posterior probability of 71.3% and with better scale parameter estimates. However, the increase in computational resource required to obtain the same degree of Monte Carlo and numerical accuracy was substantial: from 5000 to 25000 particles and from a $30{\times}30$ to $50{\times}50$ lattice for the eight-MU hypothesis.

Case study: rat tibial muscle {#sec:CaseStudy}
=============================

The case study arises from [@Cas10] where a rat tibial muscle (medial gastrocnemius) receives stem cell therapy to encourage neuromuscular activation after simulated paralysis.
The two data sets, presented in Figure \[fig:RatData\], are generated by applying stimuli of different durations. The first data set, using 10sec duration stimuli, consists of $T=304$ observations, including 11 baseline measurements, and has a maximal stimulus of 100V. In contrast, the second data set was collected using 50sec duration stimuli and consists of $T=669$ observations, including 7 baseline measurements, with a maximal stimulus of 60V. The data sets are named R10 and R50 respectively. Since both data sets are collected from the same neuromuscular system, it is expected that the MUNE results should be similar between the data sets. Naïve assessment of the stimulus-response curves by counting the number of distinct levels of twitch force suggests that there are perhaps nine or ten MUs, but this would ignore any potential features arising from alternation. The histogram inserts in Figure \[fig:RatData\] present the frequencies of absolute differences between consecutive twitch forces when ordered by stimulus intensity. The highest frequency occurs at low differences and represents the within-MUTF variability, whereas the less-frequent, larger differences appear due to the firing of different combinations of MUs. In both cases a minimum expected MUTF of ${\mu_{\min}}=15$mN is suitable to correct against the estimation of MUs with negligible contribution to the observed twitch forces. The SMC-MUNE procedure was applied up to a maximum model size of ${u_{\max}}=12$ with prior sufficient statistics and algorithmic parameters as specified in Appendix \[sec:APX\]. For both data sets, the estimated motor unit number posterior mass function (Table \[tab:RealResults\]) identifies the MAP estimate as $\hat{u}=8$, with this being the only member of the HPCSs. There is a noticeable difference in the computational resources required, as the MAP model for R50 required twenty times more particles and a three times finer lattice than that for data set R10.
This is in part reflective of the relative sizes of the data sets, but may also relate to the relative complexities of the state-spaces for the firing vectors. Figure \[fig:RealParam\] presents the estimated excitability curves for each of the MUs, with MUs labelled in order of increasing $\mathbb{E}[\eta|y_{1:T}]$. First, the location parameters under the 50sec duration stimuli are approximately four times lower than the corresponding parameters under the 10sec duration stimuli. This difference in scale corresponds to Weiss's law [@Bos83], which relates the excitation of the neuron to the charge built up in the cell. Despite this, it is clear that the majority of the MUs are excited within a short stimulus window, with only the last MU requiring a larger stimulus to be excited. This high degree of activity at low stimulus is reflective of the sudden early rise in the stimulus-response curves. To compare MUs between data sets, the coefficient of variation for the random variable associated with each excitability curve is presented in Figure \[fig:RealParam\]c. Apart from the first MU, the 95% credible intervals from each data set for a given MU overlap, suggesting similar coefficients of variation for the MUs; this might be anticipated since measurements are taken from the same neuromuscular system. These similar estimated coefficients are larger than the estimate in @Hal04, which is presented for comparison. This reflects the experimental setting, where the developed neurons are less stable and are yet to restore full and healthy motor function. Table \[tab:Xest\] presents the most probable firing combinations for each visibly distinct response level in Figure \[fig:RatData\]. The estimated firing behavior of each MU, after label-swapping similarly excitable MUs for R50, is very similar between the two data sets.
It can then be suggested that the levels at approximately 120mN in both data sets and at about 70mN in R50 are potential consequences of alternation, as MUs that fired when contributing to weaker WMTFs are latent in forming these response levels. However, a difference in estimated firing behavior occurs at the 120mN response level, whereby the SMC-MUNE procedure obtained two different model fits: MU1+MU4 in R10 and MU2+MU3 in R50. As a consequence, the estimated excitation range for MU1 in R50 is unusually large, leading to a relatively flat excitability curve with an enlarged coefficient of variation (Figure \[fig:RealParam\]) in relation to other MUs and between data sets. Nevertheless, the net effect of these firing combinations with the estimated expected MUTFs, see Figure \[fig:RealParam\] inserts, does not suggest that the overall descriptions of the two data sets greatly differ. This exemplifies the difficulty in discriminating between MUs with similar excitation and twitch characteristics. The difference in fit could have occurred in part because the 70mN response level in the R50 data set is not represented within data set R10.

  -------------------------------- ------ ------- ------ ------    ------ -------- ------ ------
  Data set                         R10                             R50
  No. of MUs ($u$)                 7      8       9      10        7      8        9      10
  $\mathbb{P}(u|\mathbf{y})$ (%)   0.04   99.95   0.01   0.00      0.00   100.00   0.00   0.00
  Grid Size ($n{\times}n$)         30     30      30     30        100    90       50     90
  No. of Particles (,000s)         20     5       5      5         155    100      65     115
  -------------------------------- ------ ------- ------ ------    ------ -------- ------ ------

  : Posterior summary from the SMC-MUNE procedure for the rat tibial muscle using 10sec and 50sec duration stimuli.[]{data-label="tab:RealResults"}

![Estimated excitability curves from the eight MU hypotheses for data sets R10 (left) and R50 (centre) with corresponding expected MUTF mean estimates.
Right: median and 95% credible interval for the coefficient of variation for the random variable associated with the excitability curve for each MU, together with the mean and 95% confidence interval from @Hal04.[]{data-label="fig:RealParam"}](figures/R10_Fest.pdf "fig:"){width="33.00000%"} ![](figures/R50_Fest.pdf "fig:"){width="33.00000%"} ![](figures/CoV2.pdf "fig:"){width="33.00000%"}

\[tab:Xest\]

  ------------ --- --- --- --- --- --- --- ---    ------------ --- --- --- --- --- --- --- ---
  Level (mN)   1   2   3   4   5   6   7   8      Level (mN)   1   2   4   3   5   7   6   8
  0            0   0   0   0   0   0   0   0      0            0   0   0   0   0   0   0   0
  50           1   0   0   0   0   0   0   0      40           1   0   0   0   0   0   0   0
  –            –   –   –   –   –   –   –   –      70           0   1   0   0   0   0   0   0
  100          1   1   0   0   0   0   0   0      110          1   1   0   0   0   0   0   0
  120          1   0   0   1   0   0   0   0      120          0   1   0   1   0   0   0   0
  170          1   1   0   1   0   0   0   0      150          1   1   0   1   0   0   0   0
  210          1   1   1   1   0   0   0   0      200          1   1   1   1   0   0   0   0
  230          1   1   1   1   1   0   0   0      230          1   1   1   1   1   0   0   0
  270          1   1   1   1   1   1   0   0      270          1   1   1   1   1   1   0   0
  320          1   1   1   1   1   1   1   0      300          1   1   1   1   1   1   1   0
  360          1   1   1   1   1   1   1   1      350          1   1   1   1   1   1   1   1
  ------------ --- --- --- --- --- --- --- ---    ------------ --- --- --- --- --- --- --- ---

  : Most probable firing events (1=fire, 0=latent) for each level in the stimulus-response curve.
The labeled MUs for R50 are re-ordered to demonstrate similarity between the two data sets. The response level around 70mN is not present in the R10 data set.

Discussion {#sec:Discussion}
==========

This paper presents a new sequential Bayesian procedure for motor unit number estimation (MUNE), the assessment of the number of operating motor units (MUs) from an electromyography investigation into muscle function. The fully adapted sequential Monte Carlo (SMC) filter uses the approximate conditional conjugacy of the twitch process. The principal purpose of SMC-MUNE is to estimate the marginal likelihood for the neuromuscular model based on a fixed number of MUs. From this, MUNE is then performed by comparing the evidence between competing MU-number hypotheses. As demonstrated in Sections \[sec:SimStudyOver\] and \[sec:SimStudy\_under\], SMC-MUNE also allows detailed scrutiny of the quality of each model fit. SMC-MUNE performed well on simulated data, but two scenarios that may cause incorrect estimation were identified. In the first scenario, one or more MUs were estimated to have a negligible or negative twitch force, allowing a model that was larger than the truth to fit the data and resulting in over-estimation of the number of MUs. This led to the development of a post-process correction that restricts the parameter space. By contrast, the second scenario resulted in under-estimation because of the difficulty in estimating the underlying process during a period of alternation involving many MUs, where the same stimulus, applied repeatedly, can lead to several different combinations of MUs firing. This issue persisted despite constraining a key parameter, but was resolved when, instead, additional data points were sampled from the region of alternation, strongly suggesting that the original under-estimation arose because the information available in the data was not sufficient to fully characterise the firing process.
Independent application of SMC-MUNE to two data sets (with data collected as in @Cas10) on the same neuromuscular system resulted in the same estimate for the number of MUs. However, closer examination of the model fits identifies minor variations in parameter estimates and firing patterns that reflect subtle and known differences between the two data sets. The examples investigated in this paper involve neuromuscular systems with relatively small numbers of MUs. In practice, large and healthy muscle groups can contain hundreds of MUs [@Goo14]. Application of SMC-MUNE to these larger problems is currently impractical as the computational cost increases exponentially with the assumed number of MUs. As such, SMC-MUNE is currently best applied to small neuromuscular systems, such as in some animal models or in patients with amyotrophic lateral sclerosis who have limited motor function. The computational demand arises from the necessity to evaluate the predictive mass function for sampling the firing vectors and to marginalise this event space for calculating the resampling weights. One approach to address this is to approximate very low or very high excitation probabilities by their respective certainties, as in @Dro14. Alternatively, the excitability curve for SMC-MUNE is specified in generic terms, and so computational savings are possible by defining a function that has finite support. In addition to concerns over the size of the computation, additional resources would be required for a sufficiently fine lattice over the excitability parameter space to minimize numerical error in the marginal likelihood estimate. Although adaptive sparse grids [@Bun03] have the potential to be more beneficial in terms of resource management and precision, care would be needed in automating the grid refinements, and it is likely that a unique grid would be associated with each distinct firing pattern.
The sequential aspect of the proposed methodology provides the opportunity for real-time inference that has the potential to provide in-lab assistance during experimentation. In this framework, an interim SMC-MUNE analysis could help in identifying the choice of stimulus to apply in order to collect the best evidence to distinguish between competing hypotheses, as in Section \[sec:SimStudy\_under\]. The factors limiting the present SMC-MUNE procedure from becoming a wholly online algorithm are the computational aspects discussed earlier and the post-processing stage to correct for potentially negligible estimates of the expected MU twitch forces. Solutions to these outstanding problems would increase the efficiency and accuracy of SMC-MUNE and, hence, its range of application.

Acknowledgements {#acknowledgements .unnumbered}
================

The authors would like to thank Dr. Christine Thomas from the Miami Project to Cure Paralysis for providing the data analysed in this paper and for specialist discussions.

Additional detail {#sec:APX}
=================

The prior sufficient statistics for the simulation and case studies are: ${\bar{m}}_0 = 0$, ${\bar{c}}_0 = 10^3$, ${\bar{a}}_0 = 0.5$, ${\bar{b}}_0 = 0.1$, ${\mathbf{m}}_0 = 40 {\mathbf{1}}_u$, $C_0 = 10^4I_u$ and $a_0 = 0.5$, where ${\mathbf{1}}_u$ is a unit $u$-vector and $I_u$ is the $u\times{u}$ identity matrix. The statistic $b_0$ is defined according to with $\delta=0.05$ and $\epsilon=0.2$. The upper bounds for the excitability parameter space are ${\eta_{\max}}= 1.1s_{\tau}$ and ${\lambda_{\max}}= 14$V. In the case study, the upper bound for the scale parameter was reduced to ${\lambda_{\max}}=7$V. Resampling in Algorithm \[tab:Alg\] is performed by systematic sampling on the residuals of the particle weights [@Hol06]. The number of particle samples and the number of rectangular lattice cells are initially $N=5000$ and $|\mathcal{G}|=30\times{30}$ respectively.
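The residual-systematic scheme cited from @Hol06 can be sketched as follows: keep $\lfloor Nw_i \rfloor$ deterministic copies of each particle, then allocate the remaining slots by systematic sampling on the residual weights (function names are ours, for illustration only):

```python
import random
from itertools import accumulate

def residual_systematic_resample(weights, rng=None):
    """Return replication counts for N particles with normalized weights.

    Deterministic part: floor(N * w_i) copies of particle i.
    Stochastic part: the remaining R slots are placed by systematic
    sampling (one uniform seeds R evenly spaced positions) on the
    residual weights N*w_i - floor(N*w_i).
    """
    rng = rng or random.Random(0)
    N = len(weights)
    counts = [int(N * w) for w in weights]
    R = N - sum(counts)                  # residual draws to allocate
    if R > 0:
        resid = [N * w - c for w, c in zip(weights, counts)]
        cum = list(accumulate(resid))
        step = cum[-1] / R
        u = rng.random() * step          # single uniform in [0, step)
        j = 0
        for k in range(R):
            while cum[j] < u + k * step:
                j += 1
            counts[j] += 1
    return counts
```

The deterministic copies reduce resampling variance relative to plain multinomial resampling, which is the usual motivation for the residual variant.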
Accuracy in the MU-number posterior mass function is managed by ensuring that, for each model $u$, the range in the marginal log-likelihood estimates from 3 independent runs of the SMC scheme is less than $1$ whenever the posterior probability is greater than 1%. If not, then the particle set is increased in steps of $5000$ samples to reduce Monte Carlo variability. Once this criterion is satisfied, the lattice for the numerical integration is made finer by 10 vertices in both dimensions and the stability of the estimates to the increasing grid size is checked; instability leads to a further check of the Monte Carlo variability and, if necessary, an increase in the particle set size, and then a further increase in the number of vertices; iteration between these two steps continues until the results are numerically stable and have a low variance. For a particular data set, once the minimal number of particles and grid size required for stability have been ascertained, a further $7$ runs are performed using these settings and the final marginal log-likelihood estimate is the average of the results from the total of $10$ runs.
---
author:
- |
    [**Jian Gao,  Linzhi Shen,  Fang-Wei Fu** ]{}\
    [Chern Institute of Mathematics and LPMC, Nankai University]{}\
    [Tianjin, 300071, P. R. China]{}
title: '[**Skew Generalized Quasi-Cyclic Codes Over Finite Fields**]{}'
---

[ In this work, we study a class of generalized quasi-cyclic (GQC) codes called skew GQC codes. By the factorization theory of ideals, we give the Chinese Remainder Theorem over the skew polynomial ring, which leads to a canonical decomposition of skew GQC codes. We also focus on some characteristics of skew GQC codes in detail. For a $1$-generator skew GQC code, we define the parity-check polynomial, determine the dimension and give a lower bound on the minimum Hamming distance. The skew quasi-cyclic (QC) codes are also discussed briefly.]{}

[ Skew cyclic codes; Skew GQC codes; $1$-generator skew GQC codes; Skew QC codes]{}

[**Mathematics Subject Classification (2000)** ]{} 11T71 $\cdot$ 94B05 $\cdot$ 94B15

0.2in

[**1 Introduction**]{}

Recently, it has been shown that codes over finite rings are a very important class of codes and that many types of codes with good parameters can be constructed over rings [@Aydin; @Abualrub2; @Siap2]. Skew polynomial rings are an important class of non-commutative rings. More recently, applications in the construction of algebraic codes have been found [@Abualrub1; @Bhaintwal2; @Boucher1; @Boucher2; @Boucher3], where codes are defined as ideals or modules in the quotient ring of skew polynomial rings. The principal motivation for studying codes in this setting is that polynomials in skew polynomial rings have more factorizations than in the commutative case. This suggests that it may be possible to find good or new codes in the skew polynomial ring with larger minimum Hamming distances.
Some researchers have indeed shown that codes in skew polynomial rings have led to the discovery of many new linear codes with better minimum Hamming distances than any previously known linear codes with the same parameters [@Abualrub1; @Boucher1]. Quasi-cyclic (QC) codes over commutative rings constitute a remarkable generalization of cyclic codes [@Aydin; @Bhaintwal1; @Conan; @Ling2; @Siap2]. More recently, many codes which meet the best known minimum distances for their length and dimension have been constructed over finite fields [@Aydin; @Siap2]. In [@Abualrub1], Abualrub et al. studied skew QC codes over finite fields as a generalization of classical QC codes. They introduced the notion of similar polynomials in skew polynomial rings and showed that parity-check polynomials for skew QC codes are unique up to similarity. They also constructed some skew QC codes with minimum Hamming distances greater than those of the previously best known linear codes with the given parameters. In [@Bhaintwal2], Bhaintwal studied skew QC codes over Galois rings. He gave a necessary and sufficient condition for skew cyclic codes over Galois rings to be free, and presented a distance bound for free skew cyclic codes. Furthermore, he also discussed a sufficient condition for 1-generator skew QC codes over Galois rings to be free. A canonical decomposition and the dual codes of skew QC codes were also given. The notion of generalized quasi-cyclic (GQC) codes over finite fields was introduced by Siap and Kulhan [@Siap1], and some further structural properties of such codes were studied by Esmaeili and Yari [@Esmaeili]. Based on the structural properties of GQC codes, Esmaeili and Yari gave some construction methods of GQC codes and obtained some optimal linear codes over finite fields. In [@Cao1], Cao studied GQC codes of arbitrary length over finite fields.
He investigated the structural properties of GQC codes and gave an explicit enumeration of all $1$-generator GQC codes. As a natural generalization, GQC codes over Galois rings were introduced by Cao, and structural properties and an explicit enumeration of such codes were obtained in [@Cao2]. To the best of our knowledge, however, skew GQC codes over finite fields have not yet been considered. Let $\mathbb{F}_{q}$ be a finite field, where $q=p^m$, $p$ is a prime number and $m$ is a positive integer. The Frobenius automorphism $\theta$ of $\mathbb{F}_{q}$ over $\mathbb{F}_p$ is defined by $\theta (a)=a^p$, $a\in\mathbb{F}_{q}$. The automorphism group of $\mathbb{F}_{q}$ is called the Galois group of $\mathbb{F}_{q}$. It is a cyclic group of order $m$ and is generated by $\theta$. Let $\sigma$ be an automorphism of $\mathbb{F}_{q}$. The *skew polynomial ring* $R=\mathbb{F}_q[x, \sigma]$ is the set of polynomials over $\mathbb{F}_q$, where the addition is defined as the usual addition of polynomials and the multiplication is defined by the following basic rule $$(ax^i)(bx^j)=a\sigma^i(b)x^{i+j},~~a,b\in\mathbb{F}_q.$$ From the definition one can see that $R$ is a non-commutative ring unless $\sigma$ is the identity automorphism. Let $\mid \sigma\mid$ denote the order of $\sigma$ and assume $\mid \sigma\mid=t$. Then there exists a positive integer $d$ such that $\sigma=\theta^d$ and $m=td$. Clearly, $\sigma$ fixes the subfield $\mathbb{F}_{p^d}$ of $\mathbb{F}_q$. Let $Z(\mathbb{F}_q[x,\sigma])$ denote the center of $R$. For $f, g\in R$, $g$ is called a *right divisor* (resp. *left divisor*) of $f$ if there exists $r\in R$ such that $f=rg$ (resp. $f=gr$). In this case, $f$ is called a *left multiple* (resp. *right multiple*) of $g$. Left and right division are defined accordingly. Then $\bullet$  If $g, f \in Z(\mathbb{F}_q[x, \sigma])$, then $g\cdot f=f\cdot g$.
$\bullet$  Over finite fields, a skew polynomial ring is both a right Euclidean ring and a left Euclidean ring. Let $f, g \in R$. A polynomial $h$ is called a *greatest common left divisor* (gcld) of $f$ and $g$ if $h$ is a left divisor of $f$ and $g$; and if $u$ is another left divisor of $f$ and $g$, then $u$ is a left divisor of $h$. A polynomial $e$ is called a *least common left multiple* (lclm) of $f$ and $g$ if $e$ is a left multiple of $f$ and $g$; and if $v$ is another left multiple of $f$ and $g$, then $v$ is a left multiple of $e$. The *greatest common right divisor* (gcrd) and *least common right multiple* (lcrm) of polynomials $f$ and $g$ are defined similarly. The main aim of this paper is to study the structural properties of skew generalized quasi-cyclic (GQC) codes over finite fields. The rest of this paper is organized as follows. In Section 2, we survey some well known results of skew cyclic codes and give the BCH-type bound for skew cyclic codes. By the factorization theory of ideals, we give the Chinese Remainder Theorem in skew polynomial rings. In Section 3, using the Chinese Remainder Theorem, we give a necessary and sufficient condition for a code to be a skew GQC code. This leads to a canonical decomposition of skew GQC codes. In Section 4, we mainly describe some characteristics of $1$-generator skew GQC codes, including parity-check polynomials, dimensions and minimum Hamming distance bounds. In Section 5, we discuss a special class of skew GQC codes called skew QC codes. [**2 Skew cyclic codes** ]{} Let $\sigma$ be an automorphism of the finite field $\mathbb{F}_q$ and $n$ be a positive integer such that the order of $\sigma$ divides $n$. A linear code $C$ of length $n$ over $\mathbb{F}_q$ is called a *skew cyclic code* or *$\sigma$-cyclic code* if for any codeword $(c_0, c_1, \ldots, c_{n-1})\in C$, the vector $(\sigma(c_{n-1}), \sigma(c_0), \ldots, \sigma(c_{n-2}))$ is also a codeword in $C$.
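As a concrete illustration of the multiplication rule $(ax^i)(bx^j)=a\sigma^i(b)x^{i+j}$ and of right Euclidean division, the following Python sketch implements $\mathbb{F}_4[x;\sigma]$ with $\sigma$ the Frobenius $a\mapsto a^2$. The choice of $\mathbb{F}_4$ and all function names are ours, for illustration only; the paper works over a general $\mathbb{F}_q$.

```python
# Sketch (not from the paper): GF(4) = {0, 1, w, w+1} encoded as 0..3
# (bit i = coefficient of w^i), with w^2 = w + 1.  Addition is XOR.
MUL = [[0, 0, 0, 0],            # multiplication table of GF(4)
       [0, 1, 2, 3],
       [0, 2, 3, 1],
       [0, 3, 1, 2]]
INV = [None, 1, 3, 2]           # multiplicative inverses of 1, w, w+1
FROB = [0, 1, 3, 2]             # sigma(a) = a^2; sigma has order 2

def frob_pow(a, k):
    """sigma^k(a); since sigma^2 = id, only the parity of k matters."""
    return FROB[a] if k % 2 else a

def trim(p):
    """Drop trailing zero coefficients (lists are lowest-degree first)."""
    while len(p) > 1 and p[-1] == 0:
        p = p[:-1]
    return p

def skew_mul(f, g):
    """Product in GF(4)[x; sigma] via (a x^i)(b x^j) = a sigma^i(b) x^(i+j)."""
    res = [0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        if a:
            for j, b in enumerate(g):
                res[i + j] ^= MUL[a][frob_pow(b, i)]
    return res

def skew_divmod_right(f, g):
    """Right division (g nonzero): returns (q, r) with f = q*g + r, deg r < deg g."""
    f, g = trim(list(f)), trim(g)
    q = [0] * max(1, len(f) - len(g) + 1)
    while len(f) >= len(g) and f != [0]:
        k = len(f) - len(g)
        # leading term: need c with c * sigma^k(lead(g)) = lead(f)
        c = MUL[f[-1]][INV[frob_pow(g[-1], k)]]
        q[k] = c
        step = skew_mul([0] * k + [c], g)
        f = trim([u ^ v for u, v in zip(f, step)])
    return trim(q), f

# Non-commutativity: x * w = (w+1) x, while w * x = w x.
lhs = skew_mul([0, 1], [2])     # x * w
rhs = skew_mul([2], [0, 1])     # w * x
```

Iterating `skew_divmod_right` in the usual Euclidean fashion yields a gcrd; exchanging the sides gives left division and the gcld.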
In polynomial representation, a linear code of length $n$ over $\mathbb{F}_q$ is a skew cyclic code if and only if it is a *left ideal* of the ring $R/(x^n-1)$, where $(x^n-1)$ denotes the *two-sided ideal* generated by $x^n-1$. In general, if $f(x)\in R$ generates a two-sided ideal, then a left ideal of $R/(f(x))$ is a linear code over $\mathbb{F}_q$. Such a linear code will be called a *skew linear code* or a *$\sigma$-linear code*. Let $C$ be a linear code of length $n$ over $\mathbb{F}_q$. The Euclidean dual of $C$ is defined as $$C^\perp=\{v \in \mathbb{F}_q^n|~u\cdot v=0, ~\forall u\in C\}.$$ In this paper, we suppose that the order of $\sigma$ divides $n$ and ${\rm gcd}(n,q)=1$. In the following, we list some well known results of skew cyclic codes in Theorem 2.1. [**Theorem 2.1** ]{}[@Boucher1; @Boucher2] *Let $C$ be a skew cyclic code ($\sigma$-cyclic code) of length $n$ over $\mathbb{F}_q$ generated by a right divisor $g(x)=\sum_{i=0}^{n-k-1}g_ix^i+x^{n-k}$ of $x^n-1$. Then\ * *(i)  The generator matrix of $C$ is given by* $$\left( \begin{array}{ccccccc} g_0 & \cdots & g_{n-k-1}& 1 & 0 & \cdots & 0\\ 0 & \sigma(g_0) & \cdots & \sigma(g_{n-k-1}) & 1 & \cdots & 0\\ 0 & \ddots & \ddots& & & \ddots& \vdots\\ \vdots & & \ddots &\ddots &\cdots &\ddots &0\\ 0& \cdots & 0& \sigma^{k-1}(g_0) & \cdots & \sigma^{k-1}(g_{n-k-1})& 1\\ \end{array} \right)$$ *and $\mid C\mid=q^{n-{\rm deg}(g(x))}$.\ * *(ii)  Let $x^n-1=h(x)g(x)$ and $h(x)=\sum_{i=0}^{k-1}h_ix^i+x^k$. Then $C^\perp$ is also a skew cyclic code of length $n$ generated by $\widetilde{h}(x)=x^{{\rm deg}(h(x))}\varphi(h(x))=1+\sigma(h_{k-1})x+\cdots+\sigma^k(h_0)x^k$, where $\varphi$ is an anti-automorphism of $R$ associated with $\sigma$, defined as $\varphi(\sum_{i=0}^ta_ix^i)=\sum_{i=0}^tx^{-i}a_i$, where $\sum_{i=0}^ta_ix^i\in R$.
The generator matrix of $C^\perp$ is given by* $$\left( \begin{array}{ccccccc} 1 & \sigma(h_{k-1}) & \cdots& \sigma^k(h_0) & 0 & \cdots & 0\\ 0 & 1 & \sigma^2(h_{k-1}) & \cdots & \sigma^{k+1}(h_0) & \cdots & 0\\ 0 & 0 & \ddots& & & \ddots& 0\\ \vdots & & \ddots &\ddots &\cdots &\ddots &\vdots\\ 0& \cdots & 0& 1 & \sigma^{n-k}(h_{k-1}) & \cdots& \sigma^{n-1}(h_0)\\ \end{array} \right)$$\ *and $\mid C^\perp\mid=q^k$.*\ *(iii)  For $c(x)\in R$, $c(x)\in C$ if and only if $c(x)h(x)=0$ in $R/(x^n-1)$.\ * *(iv)  $C$ is a cyclic code of length $n$ over $\mathbb{F}_q$ if and only if the generator polynomial $g(x)\in \mathbb{F}_{p^d}[x]/(x^n-1)$.* The monic polynomials $g(x)$ and $h(x)$ in Theorem 2.1 are called the *generator polynomial* and the *parity-check polynomial* of the skew cyclic code $C$, respectively. [**Theorem 2.2** ]{} *Let $C$ be a skew cyclic code with the generator polynomial $g(x)$ and the check polynomial $h(x)$. Then a polynomial $f(x)\in R/(x^n-1)$ generates $C$ if and only if there exists a polynomial $p(x)\in R$ such that $f(x)=p(x)g(x)$ where $p(x)$ and $h(x)$ are right coprime.* *Proof* Let $f(x)\in R/(x^n-1)$ generate $C$. Then there exist polynomials $p(x), q(x) \in R/(x^n-1)$ such that $f(x)=p(x)g(x)$ and $g(x)=q(x)f(x)$ in $R/(x^n-1)$. Therefore $g(x)=q(x)p(x)g(x)$. In $R$, we have $$g(x)=q(x)p(x)g(x)+r(x)(x^n-1)=q(x)p(x)g(x)+r(x)h(x)g(x)$$ for some $r(x)\in R$. It follows that $$(1-q(x)p(x)-r(x)h(x))g(x)=0.$$ Since $R$ is a domain and $g(x)\neq 0$, we have $1-q(x)p(x)-r(x)h(x)=0$, which implies that $p(x)$ and $h(x)$ are right coprime. Conversely, suppose $f(x)=p(x)g(x)$ where $p(x)$ and $h(x)$ are right coprime. Then there exist polynomials $u(x), v(x) \in R$ such that $u(x)p(x)+v(x)h(x)=1$. Multiplying both sides on the right by $g(x)$, we have $u(x)p(x)g(x)+v(x)h(x)g(x)=g(x)$, and since $h(x)g(x)=x^n-1$, this implies that $u(x)f(x)=g(x)$ in $R/(x^n-1)$. Therefore $g(x)\in (f(x))_l$, where $(f(x))_l$ denotes the left ideal generated by $f(x)$ in $R/(x^n-1)$.
It means that $(g(x))_l \subseteq (f(x))_l$. Clearly, $(f(x))_l\subseteq (g(x))_l$, and hence $(f(x))_l=(g(x))_l=C$. $\Box$ Let $\mathbb{F}_q[Y^{q_0}, \circ]=\{a_0Y+a_1Y^{q_0}+\cdots+a_nY^{q_0^n}\mid~a_0,a_1,\ldots,a_n\in \mathbb{F}_q\}$, where $q_0=p^d$. For $f=a_0Y+a_1Y^{q_0}+\cdots+a_nY^{q_0^n}$ and $g=b_0Y+b_1Y^{q_0}+\cdots+b_tY^{q_0^t}$, define $f+g$ to be ordinary addition of polynomials and define $f\circ g=f(g)$. Thus, $f\circ g=c_0Y+c_1Y^{q_0}+\cdots+c_{n+t}Y^{q_0^{n+t}}$, where $c_i=\sum_{j+s=i}a_jb_s^{q_0^j}$. It is easy to see that $\mathbb{F}_q[Y^{q_0}, \circ]$ under addition and composition $\circ$ forms a non-commutative ring called the *Ore polynomial* ring (see [@McDonald]). Define $$\phi:~ R\rightarrow \mathbb{F}_q[Y^{q_0}, \circ],$$ $$\sum a_ix^i\mapsto \sum a_iY^{q_0^i}.$$ [**Lemma 2.3** ]{}[@McDonald Theorem II.13] *The above mapping $\phi:~R\rightarrow \mathbb{F}_q[Y^{q_0}, \circ]$ is a ring isomorphism between the skew polynomial ring $R=\mathbb{F}_q[x, \sigma]$ and the Ore polynomial ring $\mathbb{F}_q[Y^{q_0}, \circ]$.* $\Box$ A skew cyclic code over $\mathbb{F}_q$ can also be described in terms of $n$-th roots of unity. By the above mapping $\phi$, one can verify that $\phi(x^n-1)=Y^{q_0^n}-Y$. Since $\sigma=\theta^d$, the fixed subfield is $\mathbb{F}_{p^d}=\mathbb{F}_{q_0}$. Let $\mathbb{F}_{q^s}$ be the smallest extension of $\mathbb{F}_q$ containing $\mathbb{F}_{q_0^n}$ as a subfield. Then $\mathbb{F}_{q^s}$ is the splitting field of $\phi(x^n-1)$ over $\mathbb{F}_q$. An element $\alpha \in \mathbb{F}_{q^s}$ is called a *right root* of $f\in R$ if $x-\alpha$ is a right divisor of $f$. Let the extension of $\sigma$ to an automorphism of $\mathbb{F}_{q^s}$ be also denoted by $\sigma$. For any $\alpha\in \mathbb{F}_{q^s}$ define ${\mathcal N}_{\sigma, i}(\alpha)=\sigma^{i-1}(\alpha)\sigma^{i-2}(\alpha)\cdots \sigma(\alpha)\alpha,~i>0$, with ${\mathcal N}_{\sigma, 0}(\alpha)=1$.
[**Lemma 2.4** ]{}[@Jacobson2 Proposition 1.3.11] *Let $f(x)=\sum_{i=0}^ka_ix^i \in R$. Then\ (i)  The remainder $r$ on right division of $f(x)$ by $x-\alpha$ is given by $r=a_0{\mathcal N}_{\sigma, 0}(\alpha)+a_1{\mathcal N}_{\sigma, 1}(\alpha)+\cdots +a_k{\mathcal N}_{\sigma, k}(\alpha)$.\ (ii)  Let $\beta\in \mathbb{F}_{q^s}$. Then $(x-\beta)\mid_rf(x)$ if and only if $\sum_{i=0}^ka_i{\mathcal N}_{\sigma, i}(\beta)=0$.* $\Box$ Note that $\sigma(\alpha)=\theta^d(\alpha)=\alpha^{p^d}=\alpha^{q_0}$, and hence ${\mathcal N}_{\sigma, i}(\alpha)=\alpha^{\frac{q_0^i-1}{q_0-1}}$. The following result can also be found in [@Chaussade Lemma 4]; here we give another proof using Lemma 2.3. [**Lemma 2.5** ]{} *Let $f(x)\in R$, and $\mathbb{F}_{q^s}$ be the smallest extension of $\mathbb{F}_q$ in which $\phi(f(x))$ splits. Then a non-zero element $\alpha \in \mathbb{F}_{q^s}$ is a root of $\phi(f(x))$ if and only if $\alpha^{q_0}/\alpha$ is a right root of $f(x)$.* *Proof* If $\alpha^{q_0}/\alpha$ is a right root of $f(x)$, then $x-\alpha^{q_0}/\alpha$ is a right divisor of $f(x)$. From Lemma 2.3, we have that $Y^{q_0}-(\alpha^{q_0}/\alpha) Y$ is a factor of $\phi (f(x))$. Therefore $\alpha$ is a root of $\phi(f(x))$. Conversely, suppose $\alpha$ is a root of $\phi(f(x))$. Let $f(x)=k(x)(x-\alpha^{q_0}/\alpha)+r$, where $r\in \mathbb{F}_{q^s}$. Then $\phi(f(x))=\phi(k(x))\circ \phi(x-\alpha^{q_0}/\alpha)+\phi(r)$. From the discussion above, $\alpha$ is a root of $\phi(x-\alpha^{q_0}/\alpha)$. Therefore $\alpha$ is also a root of $\phi(r)$, i.e., $r\alpha=0$. Since $\alpha$ is a non-zero element in $\mathbb{F}_{q^s}$, we have $r=0$. This implies that $\alpha^{q_0}/\alpha$ is a right root of $f(x)$. $\Box$ Since $\phi(x^n-1)=Y^{q_0^n}-Y$ splits into linear factors in $\mathbb{F}_{q^s}$, it follows from Lemma 2.3 that $x^n-1$ also splits into linear factors in $\mathbb{F}_{q^s}[x, \sigma]$.
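Lemma 2.4 can be checked numerically. The sketch below (ours, over $\mathbb{F}_4$ with $\sigma(a)=a^2$, so $q_0=2$; names and field choice are illustrative assumptions) computes ${\mathcal N}_{\sigma,i}(\beta)$ from its product definition and compares the norm formula $\sum_i a_i{\mathcal N}_{\sigma,i}(\beta)$ with the remainder obtained by explicit right division by $x-\beta$:

```python
# Sketch (not from the paper): GF(4) encoded as 0..3 (bit i = coeff of w^i),
# with w^2 = w + 1; sigma(a) = a^2, so q_0 = 2.  Addition is XOR.
MUL = [[0, 0, 0, 0],
       [0, 1, 2, 3],
       [0, 2, 3, 1],
       [0, 3, 1, 2]]
FROB = [0, 1, 3, 2]

def frob_pow(a, k):
    return FROB[a] if k % 2 else a          # sigma^k; sigma^2 = id

def norm(beta, i):
    """N_{sigma,i}(beta) = sigma^(i-1)(beta) ... sigma(beta) * beta, with N_0 = 1."""
    n = 1
    for j in range(i):
        n = MUL[frob_pow(beta, j)][n]
    return n

def right_rem_linear(f, beta):
    """Remainder of f (coeffs lowest-degree first) on right division by x - beta.

    Obtained by matching coefficients in f = q*(x - beta) + r;
    in characteristic 2, subtraction equals addition (XOR).
    """
    k = len(f) - 1
    q = [0] * k
    q[k - 1] = f[k]
    for i in range(k - 1, 0, -1):
        q[i - 1] = f[i] ^ MUL[q[i]][frob_pow(beta, i)]
    return f[0] ^ MUL[q[0]][beta]

def norm_formula(f, beta):
    """Lemma 2.4(i): r = sum_i a_i N_{sigma,i}(beta)."""
    s = 0
    for i, a in enumerate(f):
        s ^= MUL[a][norm(beta, i)]
    return s
```

For $x^4-1$ over $\mathbb{F}_4$, every non-zero $\beta$ turns out to be a right root, so the skew polynomial $x^4-1$ has several distinct factorizations, in line with the discussion above.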
It is well known that the non-zero roots of $Y^{q_0^n}-Y$ are precisely the elements of $\{1, \gamma, \ldots, \gamma^{q_0^n-2}\}$, where $\gamma$ is a primitive element of $\mathbb{F}_{q_0^n}$. Therefore, by Lemma 2.5, $x-(\gamma^i)^{q_0}/\gamma^i$ is a right factor of the skew polynomial $x^n-1$. It means that there are several different factorizations of the skew polynomial $x^n-1$. In the following, we give the BCH-type bound for the skew cyclic code over $\mathbb{F}_q$. [**Theorem 2.6** ]{} *Let $C$ be a skew cyclic code of length $n$ generated by a monic right factor $g(x)$ of the skew polynomial $x^n-1$ in $R$. If $x-\gamma^j$ is a right divisor of $g(x)$ for all $j=b, b+1, \ldots, b+\delta-2$, where $b\geq 0$ and $\delta \geq 1$, then the minimum Hamming distance of $C$ is at least $\delta$.* *Proof* Let $c(x)=\sum_{i=0}^{n-1}c_ix^i$ be a codeword of $C$. Then $c(x)$ is a left multiple of $g(x)$, and hence $x-\gamma^j$ is a right divisor of $c(x)$, for all $b\leq j \leq b+\delta-2$. From Lemma 2.4, $x-\gamma^j$ is a right divisor of $c(x)$ if and only if $\sum_{i=0}^{n-1}c_i{\mathcal N}_{\sigma, i}(\gamma^j)=0$, $j=b, b+1, \ldots, b+\delta-2$. Therefore the matrix $$\label{matrix} \left( \begin{array}{cccc} 1 & {\mathcal N}_{\sigma, 1}(\gamma^b) & \cdots & {\mathcal N}_{\sigma, n-1}(\gamma^b)\\ 1 & {\mathcal N}_{\sigma, 1}(\gamma^{b+1}) & \cdots & {\mathcal N}_{\sigma, n-1}(\gamma^{b+1})\\ \vdots & \vdots & \ddots & \vdots \\ 1& {\mathcal N}_{\sigma, 1}(\gamma^{b+\delta-2}) & \cdots & {\mathcal N}_{\sigma, n-1}(\gamma^{b+\delta-2}) \\ \end{array} \right)$$ is a matrix of parity checks satisfied by every codeword of $C$. Any $\delta-1$ columns of (\[matrix\]) form a $(\delta-1)\times (\delta-1)$ matrix; let $D$ denote its determinant. Since $D$ is a Vandermonde determinant, $D=0$ if and only if ${\mathcal N}_{\sigma, i}(\gamma)={\mathcal N}_{\sigma, j}(\gamma)$, for $i\neq j$.
It is equivalent to $$\gamma^{\frac{q_0^i-q_0^j}{q_0-1}}=\gamma^{\frac{q_0^j(q_0^{i-j}-1)}{q_0-1}}=1.$$ In particular, $\gamma^{q_0^j(q_0^{i-j}-1)}=1$ implies that $(q_0^n-1)\mid q_0^j(q_0^{i-j}-1)$. Since ${\rm gcd}(q_0^n-1, q_0^j)=1$, it follows that $(q_0^n-1)\mid (q_0^{i-j}-1)$. Therefore, there exists a positive integer $l$ such that $i-j=nl$. It means that $\frac{q_0^{nl}-1}{q_0-1}=k(q_0^n-1)$ for some positive integer $k$. Thus $(q_0-1)\mid \frac{q_0^{nl}-1}{q_0^n-1}=\sum_{i=0}^{l-1}q_0^{ni}$. It implies that $\gamma^{q_0-1}\mid \gamma$, which is impossible. This shows that any $\delta-1$ columns are linearly independent, and hence the minimum Hamming distance of $C$ is at least $\delta$. $\Box$ [**Example 2.7** ]{} Consider $R=\mathbb{F}_{3^2}[x, \sigma]$, where $\sigma=\theta$ is a Frobenius automorphism of $\mathbb{F}_{3^2}$ over $\mathbb{F}_3$. The polynomial $g(x)=x-\alpha^2$ is a right factor of $x^4-1$, where $\alpha$ is a primitive element of $\mathbb{F}_{3^2}$. Since $\phi(x^4-1)=Y^{81}-Y$, it follows that $\phi(x^4-1)$ splits in $\mathbb{F}_{3^4}$. Let $\xi$ be a primitive element of $\mathbb{F}_{3^4}$. Then $\alpha=\xi^{20}$ and $\phi(g(x))=Y^3-\alpha^2 Y$ has a root $\xi^{20}$. Therefore, by Lemma 2.5, $(\xi^{20})^3/\xi^{20}=\xi^{40}$ is a right root of $g(x)$. Let $C$ be a skew cyclic code of length $4$ generated by $g(x)$ over $\mathbb{F}_{3^2}$. Then $C$ is a code with ${\rm dim}(C)=3$ and $d_H(C)\geq 2$. In fact, $C$ is an optimal $[4,3,2]$ skew cyclic code over $\mathbb{F}_{3^2}$. Also for each $i=1,2,\ldots,40$, $(\xi^i)^3/\xi^i=\xi^{2i}$ is a right root of $x^4-1$, which implies that there are $10$ different factorizations of the skew polynomial $x^4-1$ over $\mathbb{F}_{3^2}$. We now consider the factorization theory of (two-sided) ideals or two-sided elements. An element $a^* \in R$ is called a *two-sided element* if $Ra^*=a^*R$.
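Two-sidedness, and centrality more generally, can be tested directly from the multiplication rule. In the sketch below (ours, over $\mathbb{F}_4$ with $\sigma(a)=a^2$, so $t=2$ and the fixed subfield is $\mathbb{F}_2$; an illustrative assumption, not an example from the paper), $x^2$ commutes with $wx$, while $wx^2$, whose coefficient lies outside the fixed subfield, does not:

```python
# Sketch (not from the paper): GF(4) encoded as 0..3, w^2 = w + 1;
# sigma(a) = a^2 has order t = 2, and its fixed subfield is GF(2) = {0, 1}.
MUL = [[0, 0, 0, 0],
       [0, 1, 2, 3],
       [0, 2, 3, 1],
       [0, 3, 1, 2]]
FROB = [0, 1, 3, 2]

def skew_mul(f, g):
    """(a x^i)(b x^j) = a sigma^i(b) x^(i+j) in GF(4)[x; sigma]."""
    res = [0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        if a:
            for j, b in enumerate(g):
                sb = FROB[b] if i % 2 else b   # sigma^i(b)
                res[i + j] ^= MUL[a][sb]
    return res

x2  = [0, 0, 1]   # x^2: coefficient in the fixed subfield GF(2), so central
wx  = [0, 2]      # w x
wx2 = [0, 0, 2]   # w x^2: coefficient outside GF(2), so not central
```

This matches the description of the center of $R$ as polynomials in $x^t$ with coefficients in the fixed subfield $\mathbb{F}_{q_0}$.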
[**Theorem 2.8** ]{}[@McDonald Theorem II.12] *If a polynomial $f^*$ generates a two-sided ideal in $R$, then $f^*$ has the form $$(a_0+a_1x^t+\cdots +a_nx^{nt})x^m,$$ where $a_i\in \mathbb{F}_q$ and $t=\mid \sigma\mid$.* Obviously, $\mathbb{F}_{q_0}[x]=\{ b_0+b_1x+\cdots +b_nx^n \mid~b_i\in \mathbb{F}_{q_0}\}$ forms a commutative subring of $R$. Then the center of $R$ is $\mathbb{F}_{q_0}[x]\cap \mathbb{F}_q[x^t]$ where $\mathbb{F}_q[x^t]=\{ a_0+a_1x^t+\cdots +a_nx^{nt}\mid~a_i\in \mathbb{F}_{q}\}$, i.e., the center of $R$ is $\mathbb{F}_{q_0}[x^t]=\mathbb{F}_{p^d}[x^t]$ (see [@McDonald]). If $Ra^*$ is a non-zero (two-sided) maximal ideal in $R$, or equivalently, $a^*\neq 0$ and $R/Ra^*$ is a simple ring, then we call the two-sided element $a^*$ a *two-sided maximal* (t.s.m) element. Let $a, b$ be non-zero elements in $R$. Then $a$ is said to be *left similar* to $b$ ($a\sim_lb$) if and only if $R/Ra\cong R/Rb$. Two elements are left similar if and only if they are right similar. Let $a$ be a non-zero element in $R$. If $a$ is not a unit in $R$, then $a$ can be written as $a=p_1p_2\cdots p_s$, where $p_1,p_2,\ldots,p_s$ are irreducible. Moreover, if $a=p_1p_2\cdots p_s=p_1'p_2'\cdots p_t'$, where $p_i$ and $p_j'$ are irreducible, then $s=t$ and there exists a permutation $(1',2', \ldots, s')$ of $(1,2,\ldots,s)$ such that $p_i\sim p_{i'}'$ (see [@Jacobson2]). [**Lemma 2.9** ]{}[@Jacobson2 Theorem 1.2.17$'$, Theorem 1.2.19] *Let $a^*$ be a non-zero two-sided element in $R$ and $a^*$ not a unit. Then\ (i) $a^*=p_1^*p_2^*\cdots p_m^*$, where $p_i^*, 1\leq i \leq m$, are t.s.m elements and such a factorization is unique up to order and unit multipliers.\ (ii) Let $p_i^*=p_{i,1}p_{i,2}\cdots p_{i,n}$, where $p_{i,j}, 1\leq i \leq m, 1\leq j \leq n$ are irreducible.
Then $p_{i,1}, p_{i,2}, \ldots, p_{i,n}$ are all similar.* [**Example 2.10** ]{} Consider $R=\mathbb{F}_{3^3}[x, \sigma]$, where $\sigma=\theta$ is the Frobenius automorphism of $\mathbb{F}_{3^3}$ over $\mathbb{F}_3$. The fixed field of $\sigma$ is $\mathbb{F}_3$. Let $f(x)=x^6-x^3-2 \in R$. Since $f(x)\in \mathbb{F}_3[x^3]$, $f(x)$ is a two-sided element of $R$. A factorization of $f(x)$ in $R$ is $f(x)=(x^3+1)(x^3-2)$. Clearly, both $x^3+1$ and $x^3-2$ are two-sided elements. Moreover, they must be t.s.m elements, since a non-constant two-sided element with non-zero constant term has degree at least $t=\mid\sigma\mid=3$. [**Remark 2.1** ]{} Note that there is an error in Example 4 in [@Bhaintwal2], where the author claimed that the fixed field of $\sigma$ is $\mathbb{F}_9$. But $\mathbb{F}_9$ is not a subfield of $\mathbb{F}_{27}$. We have corrected it in Example 2.10. Suppose $x^n-1$ has a factorization of the form $x^n-1=f_1f_2\cdots f_k$, where $f_1,f_2,\ldots,f_k$ are irreducible polynomials. Since $x^n-1$ is a two-sided element, by Lemma 2.9, it can also be factorized as $x^n-1=f_1^*f_2^*\cdots f_t^*$, where each $f_i^*$ is a t.s.m element and is a product of all polynomials similar to an irreducible factor $f_i$ of $x^n-1$. Since ${\rm gcd}(q, n)=1$, it follows that all factors $f_1^*,f_2^*,\ldots,f_t^*$ are distinct. Also since $(f_i^*)$ is maximal, we can see that $f_i^*$ and $f_j^*$ are coprime for all $i\neq j$. Denoting by $\widehat{f}_i^*$ the product of all $f_j^*$ except $f_i^*$, we have the following *Chinese Remainder Theorem* in the skew polynomial ring $\mathbb{F}_q[x, \sigma]$. [**Theorem 2.11** ]{} *Let $x^n-1=f_1^*f_2^*\cdots f_t^*$ be the unique representation of $x^n-1$ as a product of pairwise coprime t.s.m elements in $R$. Since ${\rm gcd}(\widehat{f}_i^*, f_i^*)=1$, there exist polynomials $b_i, c_i \in R$ such that $b_i\widehat{f}_i^*+c_if_i^*=1$. Let $e_i=b_i\widehat{f}_i^* \in R$.
Then\   (i)  $e_1, e_2, \ldots, e_t$ are mutually orthogonal in $R$;\   (ii)  $e_1+e_2+\cdots +e_t=1$ in $R$;\   (iii) $R_i=(e_i)$ is a two-sided ideal of $R$ and $e_i$ is the identity in $(e_i)$;\   (iv)  $R=R_1\bigoplus R_2 \bigoplus \cdots \bigoplus R_t$;\   (v)  For each $i=1,2,\ldots,t$, the map $$\psi:~R/(f_i^*)\rightarrow R_i$$ $$g+(f_i^*)\mapsto (g+(x^n-1))e_i$$ is a well-defined isomorphism of rings;\   (vi)  $R\cong R/(f_1^*)\bigoplus R/(f_2^*)\bigoplus \cdots \bigoplus R/(f_t^*)$.* *Proof* (i)  Suppose $e_i=0$ for some $i=1,2,\ldots,t$, i.e., $b_i\widehat{f}_i^*\in (x^n-1)$ in $R$. Then $b_i\widehat{f}_i^*\in (f_i^*)$. Thus $1=b_i\widehat{f}_i^*+c_if_i^*\in (f_i^*)$, which is a contradiction. Hence, for each $i=1,2,\ldots,t$, $e_i\neq 0$. Thus we have $b_i\widehat{f}_i^*b_j\widehat{f}_j^* \in (x^n-1)$ for $i\neq j$. This implies that $e_ie_j=0$ in $R$. (ii)  We have $b_1\widehat{f}_1^*+\cdots +b_t\widehat{f}_t^*-1 \in (f_i^*)$, for all $i=1,2,\ldots,t$. Therefore $b_1\widehat{f}_1^*+\cdots +b_t\widehat{f}_t^*-1 \in (x^n-1)$. Thus $e_1+\cdots +e_t=1$ in $R$. (iii)  Let $Re_i=(e_i)_l$. Then $(e_i)_l\subseteq (\widehat{f}_i^*)$. On the other hand, $\widehat{f}_i^*=\widehat{f}_i^*(b_i\widehat{f}_i^*+c_if_i^*)=\widehat{f}_i^*b_i\widehat{f}_i^*$ in $R$, which implies $(\widehat{f}_i^*)\subseteq (e_i)_l$. Therefore $(e_i)_l=(\widehat{f}_i^*)$. Similarly, one can prove that $e_iR=(e_i)_r=(\widehat{f}_i^*)$, which implies that $(e_i)$ is a two-sided ideal of $R$. Clearly, $e_i$ is the identity in $(e_i)$. (iv)  For any $a\in R$, $a$ can be represented as $a=ae_1+ae_2+\cdots +ae_t$. Since $ae_i\in (e_i)$, $R=(e_1)+(e_2)+\cdots +(e_t)$. Assume that $a_1+a_2+\cdots+a_t=0$, where $a_i\in (e_i)$. Multiplying on the left (or on the right) by $e_i$, we obtain that $a_1e_i+a_2e_i+\cdots +a_te_i=a_ie_i=a_i$, for $i=1,2,\ldots,t$. Therefore $R=R_1\bigoplus R_2 \bigoplus \cdots \bigoplus R_t$. (v)  Let $g+(f_i^*)=g'+(f_i^*)$, for $g, g'\in R$. Then $g-g'\in (f_i^*)$. 
But $b_i\widehat{f}_i^* \in (f_j^*)$ for all $i\neq j$. Therefore $(g-g')b_i\widehat{f}_i^*\in (x^n-1)$. Hence, $(g+(x^n-1))e_i=(g'+(x^n-1))e_i$ in $R$, which implies that the map $\psi$ is well-defined. Clearly, $\psi$ is a surjective homomorphism of rings. Let $g+(f_i^*)\in R/(f_i^*)$ satisfy $(g+(x^n-1))e_i=(x^n-1)$. Then $gb_i\widehat{f}_i^* \in (x^n-1)\subseteq (f_i^*)$. Thus $g\in (f_i^*)$, i.e., $g+(f_i^*)=(f_i^*)$. This implies that the kernel of $\psi$ is zero. (vi)  From (iv) and (v), one can deduce this result immediately. $\Box$ [**3 Skew GQC codes**]{} In this section, we investigate the structural properties of skew GQC codes. We give the definition of skew GQC codes first. [**Definition 3.1** ]{} *Let $R=\mathbb{F}_q[x,\sigma]$ be a skew polynomial ring and $m_1, m_2, \ldots,m_l$ positive integers. Let $t$ be a divisor of each $m_i$, where $t$ is the order of $\sigma$ and $i=1,2,\ldots,l$. Denote ${\mathcal R}_i=R/(x^{m_i}-1)$ for $i=1,2,\ldots,l$. Any left $R$-submodule of the $R$-module ${\mathcal R}={\mathcal R}_1\times {\mathcal R}_2\times\cdots\times{\mathcal R}_l$ is called a skew generalized quasi-cyclic (GQC) code over $\mathbb{F}_q$ of block length $(m_1,m_2,\ldots,m_l)$ and length $\sum_{i=1}^lm_i$.* Let $m_i\geq 1$, $i=1,2,\ldots,l$, and ${\rm gcd}(q, m_i)=1$. Then, by Lemma 2.9, $x^{m_i}-1$ has a unique factorization $x^{m_i}-1=f_{i1}^*f_{i2}^*\cdots f_{ir_i}^*$, where $f_{ij}^*$, $j=1,2,\ldots,r_i$, are pairwise coprime monic t.s.m elements in $R$. Let $\{ g_1^*, g_2^*, \ldots, g_s^*\}=\{ f_{ij}^* \mid 1\leq i\leq l, 1\leq j \leq r_i\}$. Then we have $$x^{m_i}-1=g_1^{*d_{i1}}g_2^{*d_{i2}}\ldots g_s^{*d_{is}},$$ where $d_{ik}=1$ if $g_k^*=f_{i,j}^*$ for some $1\leq j \leq r_i$ and $d_{i, k}=0$ if ${\rm gcd}(g_k^*, x^{m_i}-1)=1$, for all $1\leq i \leq l$ and $1\leq k \leq s$. Suppose $n_j=\mid \{i\mid f_{i,\lambda}^*=g_j^*, 1\leq \lambda \leq r_i, 1\leq i \leq l, 1\leq j\leq s \}\mid$. Let ${\mathcal M}_j=(R/(g_j^*))^{n_j}$.
Then we have [**Theorem 3.2** ]{} *Let ${\mathcal R}={\mathcal R}_1\times {\mathcal R}_2\times\cdots \times {\mathcal R}_l$, where ${\mathcal R}_i=R/(x^{m_i}-1)$ for all $i=1,2,\ldots,l$. Then there exists an $R$-module isomorphism $\phi$ from ${\mathcal R}$ onto ${\mathcal M}_1\times {\mathcal M}_2 \times \cdots \times {\mathcal M}_s$ such that a linear code $C$ is a skew GQC code of block length $(m_1, m_2, \ldots, m_l)$ and length $\sum_{i=1}^lm_i$ over $\mathbb{F}_q$ if and only if for each $1\leq k \leq s$ there is a unique left $R$-submodule $M_k$ of ${\mathcal M}_k$ such that $\phi (C)=M_1\times M_2\times\cdots \times M_s$.* *Proof* Denote $$g^*=g_1^*g_2^*\cdots g_s^*,~ \widehat{g}_k^*=\frac{g^*}{g_k^*},$$ $$\widetilde{g}_{i,k}^*=\frac{x^{m_i}-1}{g_k^{*d_{ik}}}, ~i=1,2,\ldots,l,~k=1,2,\ldots,s.$$ Then there exists a polynomial $w_{i,k}^*\in R$ such that $$\widehat{g}_k^*=w_{i,k}^*\widetilde{g}_{i,k}^*,~i=1,2,\ldots,l,~k=1,2,\ldots,s.$$ Since $g_k^*$ and $\widehat{g}_k^*$ are coprime, there exist polynomials $b_k,~s_k\in R$ such that $b_k\widehat{g}_k^*+s_kg_k^*=1$, which implies that $b_kw_{i,k}^*\widetilde{g}_{i,k}^*+s_kg_k^*=1$ in $R$. Let $\varepsilon_{ik}=b_kw_{i,k}^*\widetilde{g}_{i,k}^*+(x^{m_i}-1)=b_k\widehat{g}_k^*+(x^{m_i}-1)\in {\mathcal R}_i$. Then, by Theorem 2.11, we have (i) $\varepsilon_{ik}=0$ if and only if ${\rm gcd}(g_k^*, x^{m_i}-1)=1,~k=1,2,\ldots,s.$ \[\] (ii) $\varepsilon_{i1},\varepsilon_{i2},\ldots,\varepsilon_{is}$ are mutually orthogonal in ${\mathcal R}_i$. \[\] (iii) $\varepsilon_{i1}+\varepsilon_{i2}+\cdots+\varepsilon_{is}=1$ in ${\mathcal R}_i$. \[\] (iv) Let ${\mathcal R}_{ik}=(\varepsilon_{ik})$ be the principal ideal of ${\mathcal R}_i$ generated by $\varepsilon_{ik}$. Then $\varepsilon_{ik}$ is the identity of ${\mathcal R}_{ik}$ and ${\mathcal R}_{ik}=(b_k\widehat{g}_k^*)$. Hence ${\mathcal R}_{ik}=\{ 0\}$ if and only if ${\rm gcd}(g_k^*, x^{m_i}-1)=1$. \[\] (v) ${\mathcal R}_i=\bigoplus_{k=1}^s{\mathcal R}_{ik}$.
\[\] (vi) For each $i=1,2,\ldots,l$ and $k=1,2,\ldots,s$, the mapping $\phi_{ik}:~{\mathcal R}_{ik}\rightarrow R/(g_k^{*d_{ik}})$, defined by $$\phi_{ik}:~fb_k\widehat{g}_k^*+(x^{m_i}-1)\mapsto f+(g_k^{*d_{ik}}), ~\mbox{where}\; f\in R,$$ is a well-defined isomorphism of rings. \[\] (vii) ${\mathcal R}_i=R/(x^{m_i}-1)\cong \bigoplus_{j=1}^sR/(g_j^{*d_{ij}})$. From (vi), we have a well-defined $R$-module isomorphism $\Phi_k$ from $b_k\widehat{g}_k^*{\mathcal R}$ onto $R/(g_k^{*d_{1k}})\times \cdots \times R/(g_k^{*d_{lk}})$, defined by $$\Phi_k:~(\alpha_1, \ldots, \alpha_l)\mapsto (\phi_{1k}(\alpha_1),\ldots,\phi_{lk}(\alpha_l)),~ \mbox{where}\; \alpha_i\in {\mathcal R}_{ik}, i=1,2,\ldots,l.$$ $\Phi_k$ induces a natural $R$-module isomorphism $\mu_k$ from $b_k\widehat{g}_k^*{\mathcal R}$ onto ${\mathcal M}_k$. For any $c=(c_1,c_2,\ldots,c_l)\in {\mathcal R}$, from (v) we deduce $c=(b_1\widehat{g}_1^*c_1+\cdots+b_s\widehat{g}_s^*c_1, \ldots, b_1\widehat{g}_1^*c_l+\cdots+b_s\widehat{g}_s^*c_l)=b_1\widehat{g}_1^*c+\cdots+b_s\widehat{g}^*_sc$, where $b_k\widehat{g}_k^*c\in b_k\widehat{g}_k^*{\mathcal R}_1\times\cdots\times b_k\widehat{g}_k^*{\mathcal R}_l$ for all $k=1,2,\ldots,s$. Hence ${\mathcal R}=b_1\widehat{g}_1^*{\mathcal R}+\cdots+b_s\widehat{g}_s^*{\mathcal R}$. Let $c_1$, $c_2$, $\ldots$, $c_s\in {\mathcal R}$ satisfy $b_1\widehat{g}_1^*c_1+\cdots+b_s\widehat{g}_s^*c_s=0$. Since $(x^{m_i}-1)\mid g^*$ for all $i=1,2,\ldots,l$, it follows that $g^*{\mathcal R}=\{0\}$. Then for each $k=1,2,\ldots,s$, from $b_k\widehat{g}_k^*+s_kg_k^*=1$, $g^*=g_k^*\widehat{g}_k^*$ and $g^*\mid \widehat{g}_\tau^* \widehat{g}_\sigma^*$ for all $1\leq \tau\neq \sigma \leq s$, we deduce $b_k\widehat{g}_k^*c_k=0$. Hence ${\mathcal R}=\bigoplus_{j=1}^sb_j\widehat{g}_j^*{\mathcal R}$. Define $\phi:~\beta_1+\beta_2+\cdots+\beta_s\mapsto (\mu_1(\beta_1), \mu_2(\beta_2), \ldots, \mu_s(\beta_s))~\mbox{where}\; \beta_k\in b_k\widehat{g}_k^*{\mathcal R}, k=1,2,\ldots,s$.
Then $\phi$ is an $R$-module isomorphism from ${\mathcal R}$ onto ${\mathcal M}_1\times\cdots\times{\mathcal M}_s$. For any left $R$-submodules $M_j$ of ${\mathcal M}_j$, $1\leq j\leq s$, it is obvious that $M_1\times\cdots\times M_s$ is a left $R$-submodule of ${\mathcal M}_1\times\cdots\times{\mathcal M}_s$. Therefore there is a unique left $R$-submodule $C$ of ${\mathcal R}$ such that $\phi(C)=M_1\times\cdots\times M_s$. $\Box$ Since ${\mathcal M}_k=(R/(g_k^*))^{n_k}\cong\bigoplus_{i=1}^lR/(g_k^{*d_{ik}})$ as left $R$-modules, Theorem 3.2 leads to a canonical decomposition of skew GQC codes as follows. [**Theorem 3.3** ]{} *Let $C$ be a skew GQC code of block length $(m_1, m_2, \ldots, m_l)$ and length $\sum_{i=1}^lm_i$ over $\mathbb{F}_q$. Then $$C=\bigoplus_{i=1}^sC_i$$ where $C_i$, $1\leq i\leq s$, is a linear code of length $l$ over $R/(g_i^*)$ and each $j$-th, $1\leq j\leq l$, component in $C_i$ is zero if $d_{ji}=0$ and an element of the ring $R/(g_i^*)$ otherwise.* $\Box$ Let $m_1=m_2=\cdots=m_l=m$. Then a skew GQC code $C$ is a *skew quasi-cyclic* (QC) code of length $ml$ over $\mathbb{F}_q$. From Theorems 3.2 and 3.3, we have the following result. [**Corollary 3.4** ]{} *Let $R=\mathbb{F}_q[x, \sigma]$, ${\rm gcd}(m,q)=1$ and $x^m-1=g_1^*g_2^*\cdots g_s^*$, where $g_1^*, g_2^*, \ldots, g_s^*$ are pairwise coprime monic t.s.m elements in $R$.
Then we have\ (i) There is an $R$-module isomorphism $\phi$ from ${\mathcal R}=(R/(x^m-1))^l, ~l\geq 1$, onto $(R/(g_1^*))^l\times (R/(g_2^*))^l\times \cdots \times (R/(g_s^*))^l$.\ (ii) $C$ is a skew QC code of length $ml$ over $\mathbb{F}_q$ if and only if there is a left $R$-submodule $M_i$ of $(R/(g_i^*))^l, ~i=1,2,\ldots,s$, such that $\phi(C)=M_1\times M_2\times\cdots \times M_s$.\ (iii) A skew QC code $C$ of length $ml$ can be decomposed as $C=\bigoplus_{i=1}^sC_i$, where each $C_i$ is a linear code of length $l$ over $R/(g_i^*)$, $i=1,2,\ldots,s$.* $\Box$ A skew GQC code $C$ of block length $(m_1, m_2, \ldots, m_l)$ and length $\sum_{i=1}^lm_i$ is called a *$\rho$-generator* code over $\mathbb{F}_q$ if $\rho$ is the smallest positive integer for which there are codewords $c_i(x)=(c_{i,1}(x),c_{i,2}(x), \ldots, c_{i,l}(x))$, $1\leq i \leq \rho$, in $C$ such that $C=Rc_1(x)+Rc_2(x)+\cdots +Rc_\rho(x)$. Assume that the dimension of each $C_i$, $i=1,2,\ldots,s$, is $k_i$, and set ${\mathcal K}={\rm max}\{ k_i \mid 1\leq i\leq s\}$. Now by generalizing Theorem 3 of [@Esmaeili], we get [**Theorem 3.5** ]{} *Let $C$ be a $\rho$-generator skew GQC code of block length $(m_1, m_2, \ldots, m_l)$ and length $\sum_{i=1}^lm_i$ over $\mathbb{F}_q$. Let $C=\bigoplus_{i=1}^sC_i$, where each $C_i$, $i=1,2,\ldots,s$, has dimension $k_i$ and ${\mathcal K}={\rm max}\{ k_i \mid 1\leq i\leq s\}$. Then $\rho={\mathcal K}$. In fact, any skew GQC code $C$ with $C=\bigoplus_{i=1}^sC_i$, where each $C_i$, $i=1,2,\ldots,s$, has dimension $k_i$ satisfying $\rho={\rm max}_{1\leq i\leq s} k_i$, is a $\rho$-generator skew GQC code.* *Proof* Let $C$ be a $\rho$-generator skew GQC code generated by the elements $c^{(j)}(x)=(c_1^{(j)}(x), c_2^{(j)}(x), \ldots, c_l^{(j)}(x)) \in {\mathcal R}, ~j=1,2,\ldots,\rho$.
Then for each $i=1,2,\ldots,s$, $C_i$ is spanned as a left $R$-module by $\widetilde{c}^{(j)}(x)=(\widetilde{c}_1^{(j)}(x), \widetilde{c}_2^{(j)}(x), \ldots, \widetilde{c}_l^{(j)}(x))$, where $\widetilde{c}_\nu^{(j)}(x)=c_\nu^{(j)}(x)~({\rm mod}~g_i^*)$ if $g_i^*$ is a factor of $x^{m_i}-1$ and $\widetilde{c}_\nu^{(j)}(x)=0$ otherwise, $\nu=1,2,\ldots,l$. Hence $k_i\leq \rho$ for each $i$, and so ${\mathcal K}\leq \rho$. On the other hand, since ${\mathcal K}={\rm max}_{1\leq i\leq s} k_i$, there exist $q_i^{(j)}(x)\in R^l$, $1\leq j \leq {\mathcal K}$, such that the $q_i^{(j)}(x)$ span $C_i$, $1\leq i \leq s$, as a left $R$-module. Then, by Theorem 3.3, for each $1\leq j \leq {\mathcal K}$, there exists $q^{(j)}(x)\in C$ such that $q_i^{(j)}(x)=q^{(j)}(x)~({\rm mod}~g_i^*)$, and $C$ is generated by $q^{(j)}(x)$, $1\leq j \leq{\mathcal K}$. Hence $\rho \leq {\mathcal K}$, which implies that $\rho={\mathcal K}$. $\Box$ If $C$ is a $1$-generator skew GQC code of block length $(m_1, m_2, \ldots, m_l)$ and length $\sum_{i=1}^lm_i$ over $\mathbb{F}_q$, then by Theorem 3.5, each $C_i$, $i=1,2,\ldots,s$, is either trivial or an $[l, 1]$ linear code over $R/(g_i^*)$. Conversely, any linear code $C$ is a $1$-generator skew GQC code when each $C_i$, $i=1,2,\ldots,s$, has dimension at most $1$. [**Example 3.6** ]{} Let $R=\mathbb{F}_{3^2}[x, \sigma]$, where $\sigma$ is the Frobenius automorphism of $\mathbb{F}_{3^2}$ over $\mathbb{F}_3$. Let ${\mathcal R}=R/(x^4-1)\times R/(x^8-1)$ and $C$ be a $2$-generator skew GQC code of block length $(4,8)$ and length $4+8=12$ generated by $c_1(x)=(x^3-x, x^3-\alpha x)$ and $c_2(x)=(x^3, x^3-2\alpha x)$, where $\alpha$ is a primitive $4$-th root of unity in $\mathbb{F}_{3^2}$.
Since $x^4-1=(x^2-1)(x^2-2)$ and $x^8-1=(x^2-1)(x^2-2)(x^2-\alpha)(x^2-\alpha^2)$, by Theorem 3.2, $${\mathcal R}\cong (R/(x^2-1))^2\times (R/(x^2-2))^2\times R/(x^2-\alpha)\times R/(x^2-\alpha^2).$$ Then, up to an $R$-module isomorphism, $$\begin{array}{ccc} {\mathcal R} & \cong & (R/(x^2-1), R/(x^2-1))\\ & \bigoplus & (R/(x^2-2), R/(x^2-2))\\ & \bigoplus & (0, R/(x^2-\alpha))\\ & \bigoplus & (0 , R/(x^2-\alpha^2)).\\ \end{array}$$ This implies that the skew GQC code $C$ can be decomposed into $C=\bigoplus_{i=1}^4C_i$, where $\bullet$  $C_1$ is the $[2,2]$ linear code with the basis $(0, (1-\alpha)x)$ and $(x, (1-2\alpha)x)$ over $R/(x^2-1)$; $\bullet$  $C_2$ is the $[2,2]$ linear code with the basis $(x, (2-\alpha)x)$ and $(2x, (2-2\alpha)x)$ over $R/(x^2-2)$; $\bullet$  $C_3$ is the $[2,1]$ linear code with the basis $(0, 2\alpha x)$ over $R/(x^2-\alpha)$; $\bullet$  $C_4$ is the $[2,1]$ linear code with the basis $(0, (\alpha^2-\alpha)x)$ over $R/(x^2-\alpha^2)$. Let $k_i$ be the dimension of $C_i$, $i=1,2,3,4$. Then $${\rm max}\; k_i=2={\rm the ~number ~of ~generators ~of ~}C.$$ [**4 $1$-generator skew GQC codes**]{} In this section, we discuss some structural properties of $1$-generator skew GQC codes over $\mathbb{F}_q$. Let $R=\mathbb{F}_q[x, \sigma]$ and ${\mathcal R}=R/(x^{m_1}-1)\times R/(x^{m_2}-1)\times\cdots\times R/(x^{m_l}-1)$. [**Definition 4.1** ]{} *Let $C$ be a $1$-generator skew GQC code generated by $c(x)=(c_1(x), c_2(x), \ldots,c_l(x))\in {\mathcal R}$. The monic polynomial $h(x)$ of minimum degree satisfying $c(x)h(x)=0$ is called the parity-check polynomial of $C$.* Let $C$ be a $1$-generator skew GQC code of block length $(m_1, m_2, \ldots, m_l)$ and length $\sum_{i=1}^lm_i$ with the generator $(c_1(x), c_2(x), \ldots, c_l(x))$, $c_i(x)\in {\mathcal R}_i=R/(x^{m_i}-1), ~i=1,2,\ldots,l$.
Define the well-defined $R$-module homomorphism $\varphi_i$ from ${\mathcal R}$ onto ${\mathcal R}_i$ such that $\varphi_i(c_1(x), c_2(x), \ldots, c_l(x))=c_i(x)$. Then $\varphi_i(C)$ is a skew cyclic code of length $m_i$ generated by $c_i(x)$ in ${\mathcal R}_i$. From Theorem 2.2, we have $\varphi_i(C)=(p_i(x)g_i(x))$, where $g_i(x)$ is a right divisor of $x^{m_i}-1$ and $p_i(x)$ and $h_i(x)=(x^{m_i}-1)/g_i(x)$ are right coprime. Therefore, $h_i(x)=(x^{m_i}-1)/g_i(x)=(x^{m_i}-1)/{\rm gcld}(c_i(x), x^{m_i}-1)$ is the parity-check polynomial of $\varphi_i(C)$. It means that $h(x)={\rm lclm}\{h_1(x), h_2(x), \ldots, h_l(x)\}$ is the parity-check polynomial of $C$. Define a map $\psi$ from $R$ to ${\mathcal R}$ such that $\psi(a(x))=c(x)a(x)$. This is an $R$-module homomorphism with the kernel $(h(x))_r$, which implies that $C\cong R/(h(x))_r$. Thus ${\rm dim}(C)={\rm deg}(h(x))$. As stated above, we have the following result. [**Theorem 4.2**]{} *Let $C$ be a $1$-generator skew GQC code of block length $(m_1, m_2, \ldots, m_l)$ and length $\sum_{i=1}^lm_i$ generated by $c(x)=(c_1(x), c_2(x), \ldots, c_l(x))\in {\mathcal R}$. Then the parity-check polynomial of $C$ is $h(x)={\rm lclm}\{h_1(x), h_2(x), \ldots, h_l(x)\}$, where $h_i(x)=(x^{m_i}-1)/{\rm gcld}(c_i(x), x^{m_i}-1)$, $i=1,2,\ldots,l$, and the dimension of $C$ is equal to the degree of $h(x)$.* $\Box$ Let $h_1(x)$ and $h_2(x)$ be the parity-check polynomials of the $1$-generator skew GQC codes $C_1$ and $C_2$, respectively. If $C_1=C_2$, then $h_1(x)=h_2(x)$, which implies that ${\rm deg}(h_1(x))={\rm deg}(h_2(x))$ and hence $R/(h_1(x))_r=R/(h_2(x))_r$. Conversely, suppose $h_1(x)$ and $h_2(x)$ are similar. Then we have $R/(h_1(x))_r\cong R/(h_2(x))_r$, which implies that $C_1= C_2$. From the discussion above, we thus have $C_1=C_2$ if and only if $h_1(x)\sim h_2(x)$, i.e., any $1$-generator skew GQC code has a unique parity-check polynomial up to similarity.
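In the commutative limit $\sigma={\rm id}$, the skew notions ${\rm gcld}$ and ${\rm lclm}$ reduce to the ordinary polynomial $\gcd$ and ${\rm lcm}$, so the recipe of Theorem 4.2 can be sanity-checked directly. The sketch below does this over $\mathbb{F}_3$ for two hypothetical generator components (chosen only for illustration; this is not one of the skew examples of this paper):

```python
# Sanity check of Theorem 4.2 in the commutative limit sigma = id over F_3,
# where gcld and lclm reduce to the ordinary polynomial gcd and lcm.
# Polynomials over F_P are coefficient lists, lowest degree first.
P = 3

def trim(f):
    f = [c % P for c in f]
    while f and f[-1] == 0:
        f.pop()
    return f

def pmul(f, g):
    res = [0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            res[i + j] = (res[i + j] + a * b) % P
    return trim(res)

def pdivmod(f, g):
    """Long division of f by g over F_P; returns (quotient, remainder)."""
    f, g = trim(f), trim(g)
    inv = pow(g[-1], P - 2, P)          # inverse of the leading coefficient
    q = [0] * max(len(f) - len(g) + 1, 1)
    while f and len(f) >= len(g):
        c = (f[-1] * inv) % P
        s = len(f) - len(g)
        q[s] = c
        f = trim([f[i] - c * g[i - s] if 0 <= i - s < len(g) else f[i]
                  for i in range(len(f))])
    return trim(q), f

def pgcd(f, g):
    f, g = trim(f), trim(g)
    while g:
        f, g = g, pdivmod(f, g)[1]
    inv = pow(f[-1], P - 2, P)          # normalize to a monic gcd
    return trim([c * inv for c in f])

def plcm(f, g):
    return pdivmod(pmul(f, g), pgcd(f, g))[0]

def deg(f):
    return len(trim(f)) - 1

# h_i(x) = (x^{m_i}-1) / gcd(c_i(x), x^{m_i}-1)   (Theorem 4.2, sigma = id)
def parity_factor(c, m):
    xm = trim([-1] + [0] * (m - 1) + [1])
    return pdivmod(xm, pgcd(xm, c))[0]

h1 = parity_factor([0, -1, 0, 1], 4)    # c_1(x) = x^3 - x,  m_1 = 4
h2 = parity_factor([0, -2, 0, 1], 8)    # c_2(x) = x^3 - 2x, m_2 = 8
h = plcm(h1, h2)
print(deg(h1), deg(h2), deg(h))         # dim(C) = deg h(x); prints: 2 6 8
```

Here $h_1(x)=x^2+1$ and $h_2(x)=(x^2-1)(x^4+1)$ are coprime over $\mathbb{F}_3$, so the dimension $\deg h(x)$ is the sum of their degrees.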
[**Theorem 4.3**]{} *Let $C$ be a $1$-generator skew GQC code of block length $(m_1, m_2,\ldots, m_l)$ and length $\sum_{i=1}^lm_i$ generated by $c(x)=(c_1(x), c_2(x), \ldots,c_l(x))\in {\mathcal R}$. Suppose $h_i(x)$ is given as in Theorem 4.2 and $h(x)={\rm lclm}\{h_1(x),h_2(x),\ldots,h_l(x)\}$. Let $\delta_i$ denote the number of consecutive powers of a primitive $m_i$-th root of unity that are among the right zeros of $(x^{m_i}-1)/h_i(x)$. Then\ (i) $d_{\rm H}(C)\geq \sum_{{i}\not \in K}(\delta_i+1)$, where $K\subseteq \{1,2,\ldots,l\}$ is a set of maximum size such that ${\rm lclm}_{i\in K}h_i(x)\neq h(x)$.\ (ii) If $h_1(x)=h_2(x)=\cdots =h_l(x)$, then $d_{\rm H}(C)\geq\sum_{i=1}^l(\delta_i+1)$.* *Proof* Let $a(x)\in C$ be a nonzero codeword. Then there exists a polynomial $f(x)\in R$ such that $a(x)=f(x)c(x)$. For each $i=1,2,\ldots,l$, the $i$-th component is zero if and only if $(x^{m_i}-1)\mid f(x)c_i(x)$, i.e., if and only if $h_i(x)\mid f(x)$. Therefore $a(x)=0$ if and only if $h(x)\mid f(x)$, and so $a(x)\neq 0$ if and only if $h(x)\nmid f(x)$. This implies that a nonzero codeword $a(x)$ has the maximal number of zero blocks whenever ${\rm lclm}_{i\in K}h_i(x)\mid f(x)$ with ${\rm lclm}_{i\in K}h_i(x)\neq h(x)$, where $K$ is a maximal subset of $\{1,2,\ldots,l\}$ having this property. Thus, $d_{\rm H}(C)\geq \sum_{i\notin K}d_i$, where $d_i=d_{\rm H}(\varphi_i(C))\geq \delta_i+1$. Clearly, $K=\emptyset$ if and only if $h_1(x)=h_2(x)=\cdots =h_l(x)$. Therefore, if $h_1(x)=h_2(x)=\cdots =h_l(x)$, then $d_{\rm H}(C)=\sum_{i=1}^ld_i\geq \sum_{i=1}^l(\delta_i+1)$. $\Box$ From Theorems 4.2 and 4.3, we have the following corollary immediately. [**Corollary 4.4**]{} *Let $C$ be a $1$-generator skew QC code of length $ml$ generated by $c(x)=(c_1(x), c_2(x), \ldots, c_l(x))\in (R/(x^m-1))^l$. Suppose $h_i(x)=(x^m-1)/{\rm gcld}(c_i(x), x^m-1)$, $i=1,2,\ldots,l$, and $h(x)={\rm lclm}\{h_1(x),h_2(x),\ldots,h_l(x)\}$.
Then\ (i) The dimension of $C$ is the degree of $h(x)$.\ (ii) Let $\delta_i$ denote the number of consecutive powers of a primitive $m$-th root of unity that are among the right zeros of $(x^{m}-1)/h_i(x)$. Then $d_{\rm H}(C)\geq \sum_{{i}\not \in K}(\delta_i+1)$, where $K\subseteq \{1,2,\ldots,l\}$ is a set of maximum size such that ${\rm lclm}_{i\in K}h_i(x)\neq h(x)$.\ (iii) If $h_1(x)=h_2(x)=\cdots =h_l(x)$, then $\delta_i=\delta$ for each $i=1,2,\ldots,l$ and $d_{\rm H}(C)\geq l(\delta+1)$.* $\Box$ [**Example 4.5**]{} Let $R=\mathbb{F}_{3^2}[x,\sigma]$, where $\sigma$ is the Frobenius automorphism of $\mathbb{F}_{3^2}$ over $\mathbb{F}_3$. The polynomial $g(x)=x-\alpha^2$ is a right divisor of $x^4-1$, where $\alpha$ is a primitive element of $\mathbb{F}_{3^2}$. Consider the $1$-generator skew GQC code $C$ of block length $(4,8)$ and length $4+8=12$ generated by $c(x)=(g(x), g(x))$. Then, by Theorems 4.2 and 4.3, $h(x)=(x^8-1)/(x-\alpha^2)$ and $d_H(C)\geq 2$. A generator matrix for $C$ is given as follows $$G=\left( \begin{array}{cccccccccccc} -\alpha^2 & 1 & 0 & 0 & -\alpha^2 & 1 & 0 & 0 & 0 & 0 & 0 & 0\\ 0 & -\alpha^6 & 1 & 0 & 0 & -\alpha^6 & 1& 0 & 0 & 0& 0 & 0\\ 0 & 0 & -\alpha^2 & 1 & 0 & 0 & -\alpha^2 & 1 & 0 & 0 & 0 & 0\\ 1 & 0 & 0 & -\alpha^6 & 0 & 0 & 0 & -\alpha^6 & 1 & 0 & 0 & 0\\ -\alpha^2 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & -\alpha^2 & 1 & 0 & 0\\ 0 & -\alpha^6 & 1 & 0 & 0 & 0 & 0& 0 & 0 & -\alpha^6& 1 & 0\\ 0 & 0 & -\alpha^2 & 1 & 0 & 0& 0 & 0 & 0 & 0 & -\alpha^2 & 1\\ \end{array} \right).$$ In fact, $C$ is an optimal $[12, 8, 4]$ skew GQC code over $\mathbb{F}_{3^2}$. [**Example 4.6**]{} Let $R=\mathbb{F}_{3^2}[x,\sigma]$, where $\sigma$ is the Frobenius automorphism of $\mathbb{F}_{3^2}$ over $\mathbb{F}_3$. The polynomial $g(x)=x-\alpha^2$ is a right divisor of $x^4-1$, where $\alpha$ is a primitive element of $\mathbb{F}_{3^2}$.
Consider the $1$-generator skew QC code $C$ of length $ml=12$ and index $3$ generated by $c(x)=(g(x), g(x), g(x))$ over $\mathbb{F}_{3^2}$. Then $h(x)=h_1(x)=h_2(x)=h_3(x)=(x^4-1)/(x-\alpha^2)$. Thus, by Corollary 4.4, $C$ is a skew QC code of length $12$ and index $3$ with dimension $3$ and minimum Hamming distance at least $3\times 2=6$. A generator matrix for $C$ is given as follows $$G=\left( \begin{array}{cccccccccccc} -\alpha^2 & 1 & 0 & 0 & -\alpha^2 & 1 & 0 & 0 & -\alpha^2 & 1 & 0 & 0\\ 0 & -\alpha^6 & 1 & 0 & 0 & -\alpha^6 & 1& 0 & 0 & -\alpha ^6& 1 & 0\\ 0 & 0 & -\alpha^2 & 1 & 0 & 0& -\alpha^2 & 1 & 0 & 0 & -\alpha ^2 & 1\\ \end{array} \right).$$ From the generator matrix $G$, we see that $C$ is a $[12, 3, 6]$ skew QC code over $\mathbb{F}_{3^2}$. [**5 Skew QC codes**]{} Skew quasi-cyclic (QC) codes, as a special class of skew generalized quasi-cyclic (GQC) codes, have structural properties similar to those of skew GQC codes, such as Corollary 3.4 and Corollary 4.4. In this section, however, we use another point of view, presented in [@Lally], to study skew QC codes over finite fields. The dual codes of skew QC codes are also discussed briefly. For convenience, we write an element $a\in \mathbb{F}_q^{lm}$ as an $m$-tuple $a=(a_0, a_1, \ldots, a_{m-1})$, where $a_i=(a_{i,0}, a_{i,1}, \ldots, a_{i, (l-1)}) \in \mathbb{F}_q^l$. Let the map $T_{\sigma, l}$ on $\mathbb{F}_q^{lm}$ be defined as follows $$T_{\sigma, l}(a_0, a_1, \ldots, a_{m-1})=(\sigma(a_{m-1}), \sigma(a_0), \ldots, \sigma(a_{m-2})),$$ where $\sigma(a_i)=(\sigma(a_{i,0}), \sigma(a_{i,1}), \ldots, \sigma(a_{i,l-1}))$. Define a one-to-one correspondence $$\eta: \mathbb{F}_q^{lm}\rightarrow {\mathcal R}^l,$$ $$(a_{0,0}, a_{0,1}, \ldots, a_{0,l-1},a_{1,0}, a_{1,1}, \ldots, a_{1,l-1}, \ldots, a_{m-1,0}, a_{m-1,1}, \ldots, a_{m-1,l-1})$$ $$\mapsto a(x)=(a_0(x), a_1(x), \ldots, a_{l-1}(x)),$$ where $a_j(x)=\sum_{i=0}^{m-1}a_{i,j}x^i$ for $j=0,1,\ldots,l-1$.
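As a small illustration of the map $T_{\sigma,l}$, the following toy sketch (an assumption-laden example, not one of the paper's constructions) models the case $q=9$, $l=2$, $m=4$, with $\mathbb{F}_9=\mathbb{F}_3[w]/(w^2+1)$ and $\sigma$ the Frobenius $z\mapsto z^3$, for which $\sigma(c_0+c_1 w)=c_0-c_1 w$. Since ${\rm ord}(\sigma)=2$ divides $m$, one must have $T_{\sigma,l}^m={\rm id}$:

```python
# Toy model of T_{sigma,l}: F_9 = F_3[w]/(w^2+1), elements stored as pairs
# (c0, c1) meaning c0 + c1*w; the Frobenius sigma(z) = z^3 sends w to -w.
def frob(a):
    c0, c1 = a
    return (c0 % 3, (-c1) % 3)

def T(v, l):
    """T_{sigma,l}(a_0,...,a_{m-1}) = (sigma(a_{m-1}), sigma(a_0), ...),
    acting on a flat list of m blocks of length l over F_9."""
    m = len(v) // l
    blocks = [v[i * l:(i + 1) * l] for i in range(m)]
    shifted = [blocks[-1]] + blocks[:-1]     # cyclic shift of the blocks
    return [frob(a) for b in shifted for a in b]

m, l = 4, 2                      # ord(sigma) = 2 divides m, as required
v = [(1, 0), (2, 1), (0, 2), (1, 1), (2, 0), (0, 1), (1, 2), (2, 2)]
w = v
for _ in range(m):
    w = T(w, l)
print(w == v)                    # sigma^m = id, so T^m is the identity
```

A skew QC code of index $l$ is then exactly a linear code stable under this operator, which is the statement used in the next paragraph.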
Then a skew QC code $C$ of length $lm$ with index $l$ defined as in Corollary 3.4 is equivalent to a linear code of length $lm$ which is invariant under the map $T_{\sigma,l}$. Let $v=(v_{0,0}, v_{0,1},\ldots,v_{0,l-1}, v_{1,0}, v_{1,1}, \ldots,v_{1,l-1},\ldots, v_{m-1,0}, v_{m-1,1}, \ldots, v_{m-1,l-1} )\in \mathbb{F}_q^{ml}$. Let $\{1, \xi, \xi^2, \ldots, \xi^{l-1}\}$ be a basis of $\mathbb{F}_{q^l}$ over $\mathbb{F}_q$. Define an isomorphism between $\mathbb{F}_q^{ml}$ and $\mathbb{F}_{q^l}^m$ by associating, for $i=0,1,\ldots,m-1$, each $l$-tuple $(v_{i,0}, v_{i,1}, \ldots,v_{i,l-1})$ with the element $v_i\in \mathbb{F}_{q^l}$, where $v_i=v_{i,0}+v_{i,1}\xi+\cdots+v_{i,l-1}\xi^{l-1}$. Then every element in $\mathbb{F}_q^{ml}$ is in one-to-one correspondence with an element in $\mathbb{F}_{q^l}^m$. The operator $T_{\sigma,l}$ on $(v_{0,0}, v_{0,1},\ldots,v_{0,l-1}, v_{1,0},v_{1,1}, \ldots, v_{1,l-1},\ldots, v_{m-1,0}, v_{m-1,1}, \ldots,v_{m-1,l-1} )\in \mathbb{F}_q^{ml}$ corresponds to the element $(\sigma(v_{m-1}), \sigma(v_0),\ldots, \sigma(v_{m-2}))\in \mathbb{F}_{q^l}^m$ under the above isomorphism. The vector $v\in \mathbb{F}_q^{ml}$ can be associated with the polynomial $v(x)=v_0+v_1x+\cdots+v_{m-1}x^{m-1}\in \widetilde{R}/(x^m-1)$, where $\widetilde{R}=\mathbb{F}_{q^l}[x, \sigma]$. Clearly, there is an $R/(x^m-1)$-module isomorphism between $\mathbb{F}_q^{ml}$ and $\widetilde{R}/(x^m-1)$ defined by $\phi(v)=v(x)$. It follows that there is a one-to-one correspondence between the left $R/(x^m-1)$-submodules of $\widetilde{R}/(x^m-1)$ and the skew QC codes of length $ml$ with index $l$ over $\mathbb{F}_q$. In addition, a skew QC code of length $ml$ with index $l$ over $\mathbb{F}_q$ can also be regarded as an $R$-submodule of $\widetilde{R}/(x^m-1)$ because of the equivalence of $\mathbb{F}_q^{ml}$ and $\widetilde{R}/(x^m-1)$.
Let $C$ be a skew QC code of length $ml$ with index $l$ over $\mathbb{F}_q$, generated by the elements $v_1(x), v_2(x),\ldots, v_\rho(x)\in \widetilde{R}/(x^m-1)$ as a left $R/(x^m-1)$-submodule of $\widetilde{R}/(x^m-1)$. Then $C=\{a_1(x)v_1(x)+a_2(x)v_2(x)+\cdots+a_\rho(x)v_\rho(x)\mid a_i(x)\in R/(x^m-1), i=1,2,\ldots,\rho\}$. As discussed above, $C$ is also an $R$-submodule of $\widetilde{R}/(x^m-1)$. As an $R$-submodule of $\widetilde{R}/(x^m-1)$, $C$ is generated by the set $\{v_1(x), xv_1(x), \ldots, x^{m-1}v_1(x),v_2(x), xv_2(x), \ldots,x^{m-1}v_2(x), \ldots,v_\rho(x), xv_\rho(x), \ldots, x^{m-1}v_\rho(x)\}$. Since $R/(x^m-1)$ is a subring of $\widetilde{R}/(x^m-1)$ and $C$ is a left $R/(x^m-1)$-submodule of $\widetilde{R}/(x^m-1)$, $C$ is in particular contained in a left $\widetilde{R}/(x^m-1)$-submodule of $\widetilde{R}/(x^m-1)$, i.e., in a skew cyclic code $\widetilde{C}$ of length $m$ over $\mathbb{F}_{q^l}$. Therefore, $d_H(C)\geq d_H(\widetilde{C})$, where $d_H(C)$ and $d_H(\widetilde {C})$ are the minimum Hamming distances of $C$ and $\widetilde{C}$, respectively. Lally [@Lally Theorem 5] has obtained another lower bound on the minimum Hamming distance of QC codes over finite fields. In the following, we generalize these results to skew QC codes. [**Theorem 5.1**]{} *Let $C$ be a $\rho$-generator skew QC code of length $ml$ with index $l$ over $\mathbb{F}_q$, generated by the set $\{v_i(x)=\widetilde{v}_{i,0}+\widetilde{v}_{i,1}x+\cdots +\widetilde{v}_{i,m-1}x^{m-1}, i=1,2,\ldots,\rho\}\subseteq \widetilde{R}/(x^m-1)$.
Then $C$ has a lower bound on its minimum Hamming distance given by $$d_H(C)\geq d_H(\widetilde{C})d_H(B),$$ where $\widetilde{C}$ is a skew cyclic code of length $m$ over $\mathbb{F}_{q^l}$ with generator polynomial ${\rm gcld}(v_1(x), v_2(x), \ldots, v_\rho (x), x^m-1)$ and $B$ is a linear code of length $l$ generated by $\{{\mathcal V}_{i,j}, i=1,2,\ldots, \rho, j=0,1,\ldots, m-1\}\subseteq \mathbb{F}_q^l$, where each ${\mathcal V}_{i,j}$ is the vector corresponding to the coefficient $\widetilde{v}_{i,j} \in \mathbb{F}_{q^l}$ with respect to an $\mathbb{F}_q$-basis $\{1, \xi, \ldots, \xi^{l-1}\}$.* $\Box$ Define the Euclidean inner product of $u, v\in \mathbb{F}_q^{lm}$ by $$u\cdot v=\sum_{i=0}^{m-1}\sum_{j=0}^{l-1}u_{i,j}v_{i,j}.$$ Let $C$ be a skew QC code of length $lm$ with index $l$, $u\in C$ and $v\in C^\perp$. Since $\sigma^m=1$, we have $u\cdot T_{\sigma,l}(v)=\sum_{i=0}^{m-1}u_i\cdot \sigma(v_{i+m-1})=\sum_{i=0}^{m-1}\sigma(\sigma^{m-1}(u_i)\cdot v_{i+m-1})=\sigma(T_{\sigma,l}^{m-1}(u)\cdot v)=\sigma(0)=0$, where $i+m-1$ is taken modulo $m$. Hence $T_{\sigma, l}(v)\in C^\perp$, which implies that the dual code of a skew QC code $C$ is also a skew QC code of the same index. We define a conjugation map $^-$ on $R$ such that $\overline{ax^i}=\sigma^{-i}(a)x^{m-i}$, for $ax^i\in R$. On $R^l$, we define the Hermitian inner product of $a(x)=(a_0(x), a_1(x), \ldots, a_{l-1}(x))$ and $b(x)=(b_0(x), b_1(x), \ldots, b_{l-1}(x))\in R^l$ by $$\langle a(x), b(x)\rangle=\sum_{i=0}^{l-1}a_i(x)\cdot \overline{b_i(x)}.$$ By generalizing Proposition 3.2 of [@Ling1], we get [**Proposition 5.2**]{} *Let $u, v \in \mathbb{F}_q^{lm}$ and $u(x)$ and $v(x)$ be their polynomial representations in $R^l$, respectively. Then $T_{\sigma,l}^k(u)\cdot v=0$ for all $0\leq k \leq m-1$ if and only if $\langle u(x), v(x)\rangle=0$.* $\Box$ Let $C$ be a skew QC code of length $lm$ with index $l$ over $\mathbb{F}_q$.
Then, by Proposition 5.2, $$C^\perp=\{v(x)\in R^l\mid \langle c(x), v(x)\rangle=0,~\forall c(x)\in C\}.$$ Furthermore, by Corollary 3.4 (iii), we have $C^\perp=\bigoplus_{i=1}^sC_i^\perp$. In [@Ling3], some results for $\rho$-generator QC codes and their duals over finite fields are given. These results can also be generalized to $\rho$-generator skew QC codes over finite fields. By generalizing Corollaries 6.3 and 6.4 in [@Ling3] and Theorem 3.5 in this paper, we get the following result. [**Theorem 5.3**]{} *Let $C$ be a $\rho$-generator skew QC code of length $lm$ with index $l$ over $\mathbb{F}_q$. Let $C=\bigoplus_{i=1}^sC_i$, where each $C_i$, $i=1,2,\ldots,s$, has dimension $k_i$. Then\ (i) $C$ is a ${\mathcal K}$-generator skew QC code and $C^\perp$ is an $(l-{\mathcal K}')$-generator skew QC code, where ${\mathcal K}={\rm max}_{1\leq i\leq s} k_i$ and ${\mathcal K}'={\rm min}_{1\leq i\leq s} k_i$.\ (ii) Let $l\geq 2$. If $C^\perp$ is also a $\rho$-generator skew QC code, then ${\rm min}_{1\leq i\leq s} k_i=l-\rho$ and $l\leq 2\rho$.\ (iii) If $C$ is a self-dual $\rho$-generator skew QC code, then $l$ is even and $l\leq 2\rho$.* $\Box$ For a $1$-generator skew QC code of length $lm$ with index $l$ and the canonical decomposition $C=\bigoplus_{i=1}^sC_i$, $C^\perp$ is also a $1$-generator skew QC code if and only if $l=2$ and ${\rm dim}(C_i)=1$ for each $i=1,2,\ldots,s$. [**6 Conclusion**]{} The structural properties of skew cyclic codes and skew GQC codes over finite fields have been studied. Using the factorization theory of ideals, we give the Chinese Remainder Theorem in the skew polynomial ring $\mathbb{F}_q[x, \sigma]$, which leads to a canonical decomposition of skew GQC codes. Moreover, we give some characteristics of $\rho$-generator skew GQC codes. For $1$-generator skew GQC codes, we give their parity-check polynomials and dimensions. A lower bound on the minimum Hamming distance of $1$-generator skew GQC codes is given.
These special codes may lead to some good linear codes over finite fields. Finally, skew QC codes are also discussed in detail. In this paper, we restricted ourselves to the condition that the order of $\sigma$ divides each $m_i$, $i=1,2,\ldots,l$. If we remove this condition, then the polynomial $x^m-1$ may not be a central element. This implies that the set $R/(x^m-1)$ is not a ring anymore. In this case, the cyclic code in $R/(x^m-1)$ will not be an ideal. It is just a left $R$-submodule, and we call it a *module skew cyclic code*. A GQC code in ${\mathcal R}$ is also a left $R$-submodule of ${\mathcal R}$, and we call it a *module skew GQC code*. Most of our results on skew cyclic codes and skew GQC codes in this paper depend on the fact that $x^m-1$ is a central element of $R$. Since in the module skew case this is no longer true, some results stated in this paper no longer hold. Therefore, the structural properties of module skew cyclic and module skew GQC codes are also interesting open problems for further consideration. Another interesting open problem is to find some new or good linear codes over finite fields from skew GQC codes. [**Acknowledgments**]{} This research is supported by the National Key Basic Research Program of China (973 Program Grant No. 2013CB834204), the National Natural Science Foundation of China (Nos. 61171082, 60872025, 10990011). [s21]{} Abualrub, T., Aydin, N., Siap, I.: *On the construction of skew quasi-cyclic codes*. IEEE Trans. Inform. Theory. 56, 2081-2090(2010). Aydin, N.: *Quasi-Cyclic Codes over $\mathbb{Z}_4$ and Some New Binary Codes*. IEEE Trans. Inform. Theory. 48, 2065-2069(2002). Abualrub, T., Siap, I.: *Cyclic codes over the ring $\mathbb{Z}_2+u\mathbb{Z}_2$ and $\mathbb{Z}_2+u\mathbb{Z}_2+u^2\mathbb{Z}_2$.* Des. Codes and Cryptography. 42, 273-287(2007). Bhaintwal, M., Wasan, S.: *On quasi-cyclic codes over $\mathbb{Z}_q$*. Appl. Algebra Eng. Commun. Comput. 20, 459-480(2009).
Bhaintwal, M.: *Skew quasi-cyclic codes over Galois rings*. Des. Codes Cryptogr. 62, 85-101(2012). Boucher, D., Geiselmann, W., Ulmer, F.: *Skew-cyclic codes*. Appl. Algebra Eng. Commun. Comput. 18, 379-389(2007). Boucher, D., Solé, P., Ulmer, F.: *Skew constacyclic codes over Galois rings*. Adv. Math. Commun. 2, 273-292(2008). Boucher, D., Ulmer, F.: *Coding with skew polynomial rings*. J. Symbolic Comput. 44, 1644-1656(2009). Cao, Y.: *Structural properties and enumeration of $1$-generator generalized quasi-cyclic codes*. Des. Codes Cryptogr. 60, 67-79(2011). Cao, Y.: *Generalized quasi-cyclic codes over Galois rings: structural properties and enumeration*. Appl. Algebra Eng. Commun. Comput. 22, 219-233(2011). Chaussade, L., Loidreau, P.: *Skew codes of prescribed distance or rank*. Des. Codes and Cryptogr. 50, 267-284(2009). Conan, J., Séguin, G.: *Structural Properties and Enumeration of Quasi Cyclic Codes*. Appl. Algebra Eng. Commun. Comput. 4, 25-39(1993). Esmaeili, M., Yari, S.: *Generalized quasi-cyclic codes: structural properties and codes construction*. Appl. Algebra Eng. Commun. Comput. 20, 159-173(2009). Jacobson, N.: *Theory of Rings*. Amer. Math. Soc., New York(1943). Jacobson, N.: *Finite Dimensional Division Algebras over Fields*. Springer, New York(1996). Lally, K.: *Quasicyclic codes of index $l$ over $\mathbb{F}_q$ viewed as $\mathbb{F}_q[x]$-submodules of $\mathbb{F}_{q^l}[x]/(x^m-1)$.* in Applied Algebra, Algebraic Algorithms and Error-Correcting Codes, Lecture Notes in Comput. Sci. 2643, Springer-Verlag, Berlin, Heidelberg, 244-253(2003). Ling, S., Solé, P.: *On the algebra structure of quasi-cyclic codes I: finite fields.* IEEE Trans. Inform. Theory. 47, 2751-2760(2001). Ling, S., Solé, P.: *On the algebra structure of quasi-cyclic codes II: chain rings.* Des. Codes and Cryptography. 30 (1), 113-130(2003). Ling, S., Solé, P.: *On the algebra structure of quasi-cyclic codes III: generator theory.* IEEE Trans. Inform. Theory. 51, 2692-2700(2005).
McDonald, B. R.: *Finite Rings with Identity*. Marcel Dekker, New York(1974). Siap, I., Kulhan, N.: *The structure of generalized quasi-cyclic codes.* Appl. Math. E-Notes. 5, 24-30(2005). Siap, I., Abualrub, T., Yildiz, B.: *One generator quasi-cyclic codes over $\mathbb{F}_2+u\mathbb{F}_2$.* J. Frank. Inst. 349, 284-292(2012).
--- author: - 'Aimeric Colléaux $^{1,}$ [^1] , Sergio Zerbini $^{2,}$' title: | Modified Gravity Models\ Admitting Second Order Equations of Motion --- **Abstract** : The aim of this paper is to find higher order geometrical corrections to the Einstein-Hilbert action that can lead to only second order equations of motion. The metric formalism is used, and static spherically symmetric and Friedmann-Lemaître space-times are considered, in four dimensions. The FKWC bases are introduced in order to consider all the possible invariant scalars, and both polynomial and non-polynomial gravities are investigated.\ Introduction ============ Most of the equations of motion describing physical effects are second order, that is, we need to specify either an initial and a final position in space-time to describe the dynamics between them, or we need an initial position and velocity to describe how the system will evolve. Concerning General Relativity (GR), in which the gravitational field $g_{\mu\nu}(x)$ is encoded into the geometry of space-time : $$\begin{aligned} ds^2 = g_{\mu\nu}(x) dx^{\mu}dx^{\nu} \,,\end{aligned}$$ the Einstein field equations, describing the dynamics of the geometry, are also second order. However, it is well known that two of the simplest solutions of GR, the Schwarzschild metric and the Friedmann-Lemaître one, suffer from the existence of singularities. When one is dealing with ordinary matter, this is a general fact. Furthermore, this theory alone is not able to describe dark energy, unless a suitable cosmological constant is included. But then, other problems arise, like the cosmological constant one [@1] and the coincidence problem. Therefore, one can think about modifying the Einstein equations, in the hope of describing dark energy and curing singularities.
In order to do so, one can add higher order invariant scalars, like $R_{\mu\nu}R^{\mu\nu}$, to the Einstein-Hilbert action to have high energy corrections that could describe what really happens around the singularities [@2; @3]. With regard to the dark energy issue, see [@4; @5; @6; @7; @8]. Within a higher order modified gravity model, the equations of motion will no longer be second order ones: there will be more than two initial conditions to specify in order to find the dynamics, so, to preserve the physical meaning of an equation of motion, one needs to introduce new fields to which these additional initial conditions would apply, such that, at the end, the theory would involve two dynamical fields, with second order equations of motion for both of them. By doing so, we face an important problem, that is, the presence of Ostrogradsky instabilities (see, for example, [@4]): the new field defined in this way can carry negative kinetic energy, such that the Hamiltonian of the theory is not bounded from below and can reach arbitrarily negative energies, which would make this theory impossible to quantize in a satisfying way [@4]. There are no general rules to avoid this problem, although a well-known class of modified gravity theories, equivalent to GR plus a scalar field, the $f \big( R \big)$ one, might not suffer from it [@9]. Moreover, with a new field involved in the dynamics of gravity, the latter would not be a fully geometrical theory anymore, which is yet one of the most important implications of General Relativity. Nevertheless, it is possible to find second order equations of motion from the addition into the Einstein-Hilbert action of higher order scalars [@10]. In this way, the Ostrogradsky instability may be avoided and there is no additional field involved in the dynamics, so these corrections can be said to be “geometrical” ones.
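The instability can be made explicit by the standard Ostrogradsky construction (a textbook sketch, in the spirit of [@4], for a one-dimensional toy model rather than gravity itself): for a Lagrangian $L(q,\dot q,\ddot q)$ which depends non-degenerately on $\ddot q$, one takes as canonical variables $$\begin{aligned} Q_1=q\,, \quad Q_2=\dot q\,, \quad P_1=\frac{\partial L}{\partial \dot{q}}-\frac{d}{dt}\frac{\partial L}{\partial \ddot{q}}\,, \quad P_2=\frac{\partial L}{\partial \ddot{q}}\,,\end{aligned}$$ and the Hamiltonian reads $$\begin{aligned} H=P_1 Q_2+P_2\, \ddot{q}\big(Q_1,Q_2,P_2\big)-L\big(Q_1,Q_2,\ddot{q}(Q_1,Q_2,P_2)\big)\,,\end{aligned}$$ which is linear in $P_1$ and therefore unbounded from below: this is the negative kinetic energy carried by the additional field mentioned above.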
Modifications of this kind are given by the Lovelock scalars, but it turns out that in four dimensions, the only higher order scalar made of contractions of curvature tensors (only) that leads to second order equations [@10] is the so-called Gauss-Bonnet invariant : $$\begin{aligned} \mathcal{E}_4 = R^2 - 4 R_{\alpha \beta} R^{\alpha \beta} + R_{\alpha\beta\gamma}^{\ \ \ \ \delta}R^{\alpha\beta\gamma}_{\ \ \ \ \delta} ,\end{aligned}$$ which is however a total derivative in four dimensions [@11], so that it does not contribute to the equations of motion: $$\begin{aligned} \sqrt{-g}\mathcal{E}_4 = \partial_{\alpha} \Bigg( -\sqrt{-g} \, \epsilon^{\alpha \beta \gamma \delta} \; \epsilon_{\rho \sigma}^{\ \ \mu \nu} \Gamma_{\mu \beta}^{\ \ \ \rho} \Big( \frac{1}{2} R_{\delta\gamma\nu}^{\ \ \ \ \sigma} - \frac{1}{3} \Gamma_{\lambda \gamma}^{\ \ \ \sigma}\Gamma_{\nu \delta}^{\ \ \ \lambda} \Big) \Bigg).\end{aligned}$$ This result is background independent, which means that if we want to find a second order correction valid for all possible metrics in four dimensions, then this unique term does not contribute to the dynamics. That is why, in order to nevertheless find significant corrections to General Relativity that could cure some of its problems, we will search for additional terms that give second order equations for only some specific metrics: the most studied ones, which suffer from singularities, namely the FLRW space-time describing the large scale dynamics of the universe, and the static spherically symmetric space-time describing neutral non-rotating stars and black holes. We note however that our way to find second order corrections is not at all the only possible one. There are other formulations of GR than the metric one, in which the equations of motion are found by varying the action with respect to the metric field only. In the spirit of gauge theories, one can also vary the action with respect to the connections and independently with respect to the metric.
Then, it is possible to find second order corrections with no a priori background structures [@12]. In some sense, our approach is similar to Horndeski’s theory, which is the most general one leading to second order equations of motion for gravity described by a metric $g_{\mu\nu}$ coupled with a scalar field $\phi$ and its first two derivatives [@13]. This theory involves non-linear higher order derivatives of the scalar field, like $\big( \Box \phi \big)^2 $, and yet leads to second order equations. Moreover, if all the matter fields are minimally coupled with the same metric $\widetilde{g}_{\mu \nu} \big( g_{\mu\nu} , \phi \big)$, one can expect the equivalence principle to hold [@14], which is also a fundamental feature of GR that one wants to keep. Briefly, the outline of the paper is the following. First, we consider all the independent scalar invariants built from the metric field and its derivatives, for example of the form $\big( \Box R \big)^2$, and see if some linear combinations of them, or, in the spirit of [@15] and [@16], some roots of these combinations, could lead to second order differential equations for the FLRW space-time and the static spherically symmetric one. The bases of independent scalars that are needed have been presented in [@17], but for specific backgrounds, we will show that these bases may be reduced. Furthermore, we will start to exhibit, order by order for FLRW, the existence of polynomial and non-polynomial gravity models that give second order equations and polynomial corrections to the Friedmann equation. Finally, we will investigate the static spherically symmetric space-times. Order 6 FKWC-basis ================== The basis of all independent invariant geometrical scalars involving $2n$ derivatives of the metric is separated into different classes, depending on how many covariant derivatives act on curvature tensors.
For order 6 ($n=3$), the first class, that does not involve explicitly covariant derivatives, from $\mathcal{L}_1$ to $\mathcal{L}_8$, is denoted by $\mathcal{R}_{6,3}^0$ : these scalars are built with six derivatives of the metric and by the contraction of 3 curvature tensors. The two other classes, $\mathcal{R}_{\left\{ 2,0 \right\}}^0$ and $\mathcal{R}_{\left\{ 1,1 \right\}}^0$, contain scalars that involve respectively, a curvature tensor contracted with two covariant derivatives acting on another curvature tensor (from $\mathscr{L}_1$ to $\mathscr{L}_4$), and two covariant derivatives, each acting on one curvature tensor (from $\mathscr{L}_5$ to $\mathscr{L}_8$) :  \    \   $\left\{ \begin{array}{l} \mathcal{L}_1=R^{\mu\nu\alpha\beta}R_{\alpha\beta\sigma\rho}R^{\sigma\rho}_{\;\,\;\,\,\mu\nu} \quad\;\;\;\; , \;\;\quad \mathcal{L}_2=R^{\mu\nu}_{\;\,\;\,\,\alpha\beta}R^{\alpha\sigma}_{\;\,\;\,\,\nu\rho}R^{\beta\rho}_{\;\,\;\,\,\mu\sigma} \\ \mathcal{L}_3=R^{\mu\nu\alpha\beta}R_{\alpha\beta\nu\sigma}R^{\sigma}_{\;\,\mu} \quad\;\;\;\; , \;\;\quad \mathcal{L}_4=R R^{\mu\nu\alpha\beta} R_{\mu\nu\alpha\beta} =R T \\ \mathcal{L}_5=R^{\mu\nu\alpha\beta}R_{\mu\alpha}R_{\nu\beta} \;\;\quad\;\; , \;\;\quad \mathcal{L}_6=R^{\mu\nu}R_{\nu\alpha}R^{\alpha}_{\;\,\mu} \\ \mathcal{L}_7=R R^{\mu\nu} R_{\mu\nu} = R S \;\,\, , \,\;\quad \mathcal{L}_8=R^3 \end{array} \right. $\  \ \ $\left\{ \begin{array}{l} \mathscr{L}_1=R\Box R \quad \quad \;\;\; \quad\;\; , \;\quad \mathscr{L}_2=R_{\mu\nu} \Box R^{\mu\nu} \\ \mathscr{L}_3=R^{\mu\nu\alpha\beta} \nabla_\nu \nabla_\beta R_{\mu\alpha} \; \; , \;\;\,\,\,\; \mathscr{L}_4=R^{\mu\nu} \nabla_\mu \nabla_\nu R \\ \mathscr{L}_5=\nabla_\sigma R_{\mu\nu} \nabla^\sigma R^{\mu\nu} \quad \quad\,\; , \;\;\;\; \mathscr{L}_6=\nabla_\sigma R_{\mu\nu} \nabla^\nu R^{\mu\sigma} \\ \mathscr{L}_7=\nabla_\sigma R_{\mu\nu\alpha\beta} \nabla^\sigma R^{\mu\nu\alpha\beta} \;\;\;\;\; , \;\;\;\; \mathscr{L}_8=\nabla_\sigma R \nabla^\sigma R \end{array} \right. 
$  \   Recall that our aim is to show that, once we consider all these scalars, there are quite natural modified gravity Lagrangian densities that we can expect to lead to second order equations of motion, and that actually do. There are linear combinations of all the scalars of the basis, but also, for example, square-roots of the $\mathcal{R}_{\left\{ 1,1 \right\}}^0$-class, or cubic-roots of the $\mathcal{R}_{6,3}^0$ one, even if we are not going to study the latter because we search here for high energy geometrical corrections to the Einstein-Hilbert action. We write down this fact as : $$\begin{aligned} \mathscr{L}= \sum \Big( R^3 + R \nabla \nabla R + \nabla R\nabla R \Big) + \sqrt{ \sum \big( \nabla R\nabla R \big) } + \sqrt[3]{\sum \big( R^3 \big) }.\end{aligned}$$ Because, inside a given class, the scalars have approximately the same terms in their expansions, we can indeed expect to cancel higher order derivatives for some specific combinations of them, and then to have second order equations of motion. Now, let us write down some definitions that allow us to find relations between these scalars, coming from the fact that we are going to restrict our study to specific backgrounds, the Friedmann-Lemaître-Robertson-Walker (FLRW) metric and the static spherically symmetric one, both in four dimensions. For both of them, there are relations between the scalars coming from the Lovelock theorem (which are not taken into account in Ref [@17]), and also, for FLRW, relations coming from the fact that this is a conformally flat metric. 
All these relations are written in the first Appendix; to express them, we need to define the Weyl tensor, $$\begin{aligned} W_{\mu\nu\alpha\beta}=R_{\mu\nu\alpha\beta}-\frac{1}{2}(R_{\mu\alpha}g_{\nu\beta}-R_{\mu\beta}g_{\nu\alpha}+R_{\nu\beta}g_{\mu\alpha}-R_{\nu\alpha}g_{\mu\beta})+\frac{1}{6}(g_{\mu\alpha}g_{\nu\beta}-g_{\mu\beta}g_{\nu\alpha})R,\end{aligned}$$ and the following rank 2 tensor, which is null in four dimensions because of the Lovelock theorem : $$\begin{aligned} L_{\mu\nu} = -\frac{1}{2}g_{\mu \nu} \mathcal{E}_4 + 2 Q_{\mu \nu} -4 P_{\mu \nu} +4 R_{\ \nu \mu \ }^{\alpha \ \ \gamma} R_{\alpha \gamma} + 2 R R_{\mu \nu} =0,\end{aligned}$$ where $Q_{\mu \nu} =R_{\mu\eta\alpha}^{\ \ \ \ \beta}R_{\nu \ \ \beta}^{\ \eta \alpha}$ and $P_{\mu \nu} = R_{\nu\gamma}R^{\gamma}_{\ \mu}$. Indeed, if we vary the Lagrangian associated with the Gauss-Bonnet invariant with respect to the metric field, we find : $$\begin{aligned} \begin{split} \delta \big( \sqrt{-g} \mathcal{E}_4 \big) =& \sqrt{-g} \; \delta g^{\mu \nu} L_{\mu\nu}. \end{split}\end{aligned}$$ But as we saw in equation (3), this Lagrangian can be written as a total derivative in four dimensions, which means that its contribution to the equations of motion is identically zero. Friedmann-Lemaître Space-time ============================= Order 6 ------- We start with the flat Friedmann-Lemaître cosmological metric describing the dynamics of the universe at very large scale, in the simplest manner : $$\begin{aligned} ds^2=-dt^2 + a(t)^2 \big( dr^2 + r^2 d\Omega ^2 \big),\end{aligned}$$ where $a(t)$ is the scale factor and $d\Omega^2 = d\theta ^2 + \sin ^2\theta \, d\phi ^2 $ is the metric of the 2-sphere. This metric is conformal to Minkowski space-time, and from the relations written in the first Appendix, we can choose the following reduced basis of all independent order 6 scalar invariants: $(\mathcal{L}_4, \mathcal{L}_6, \mathcal{L}_7,\curv{L}_1,\curv{L}_3,\curv{L}_5,\curv{L}_8)$. 
We note that there is one scalar less than for a general conformally flat space-time, coming from the particular form of the FLRW metric, for which there is the additional relation: $$\begin{aligned} \mathcal{L}_1=\frac{1}{3}\big( -\mathcal{L}_7+2 \mathcal{L}_4 \big).\end{aligned}$$ ### Linear Combination. $H^6$ correction. With this metric, we can write the most general order 6 linear combination of all the independent scalars : $$\begin{aligned} \begin{split} J=&\sum \Big( v_i \mathcal{L}_i +x_i \curv{L}_i \Big) =\frac{3}{a(t)^6} \Bigg( \sigma_1(v_i,x_i) \; \dot{a}(t)^6 + \sigma_2(v_i,x_i) \; a(t) \; \dot{a}(t)^4 \; \ddot{a}(t) + \sigma_3(v_i,x_i) \; a(t)^2 \; \dot{a}(t)^2 \; \ddot{a}(t)^2 \\& + \sigma_4(v_i,x_i) \; a(t)^3 \; \ddot{a}(t)^3 + \sigma_5(v_i,x_i) \; a(t)^2 \; \dot{a}(t)^3 \; a^{(3)}(t)+ \sigma_6(v_i,x_i) \; a(t)^3 \; \dot{a}(t) \; \ddot{a}(t) \; a^{(3)}(t) \\& + \sigma_7(v_i,x_i) \; a(t)^4 \; a^{(3)}(t)^2+ \sigma_8(v_i,x_i) \; a(t)^3 \; \dot{a}(t)^2 \; a^{(4)}(t)+ \sigma_9(v_i,x_i) \; a(t)^4 \; \ddot{a}(t) \; a^{(4)}(t) \Bigg), \end{split}\end{aligned}$$ where the expressions of the $\sigma_j$ in terms of $(v_i,x_i)$ are presented in the second Appendix. Setting all of them to zero allows one to check that our list of scalars is a basis. We can then impose $v_1=v_2=v_3=v_5=v_8=x_2=x_4=x_6=x_7=0$ to take into account the algebraic relations we have found. Moreover, in this section, we are interested in linear combinations of order 6 scalars that lead to second order equations of motion. 
Therefore, we can also consider equivalence relations (up to boundary terms) between the scalars, and there are three of them that remain after considering the previous algebraic relations : $$\begin{aligned} \int d^4x \sqrt{-g} \curv{L}_3 = -\frac{1}{12} \int d^4x \sqrt{-g} \; \curv{L}_8 \quad \quad \text{;} \quad \int d^4x \sqrt{-g} \curv{L}_1 = - \int d^4x \sqrt{-g} \; \curv{L}_8 \end{aligned}$$ $$\begin{aligned} \text{and} \quad \int d^4x \sqrt{-g} \curv{L}_5 = \frac{1}{6 }\int d^4x \sqrt{-g} \; \big( 2 \curv{L}_8 + 3 \mathcal{L}_4 - 12 \mathcal{L}_6 + \mathcal{L}_7 \big).\end{aligned}$$ We check that there are no others by deriving the equation of motion for the scale factor, considering the Lagrangian $L=a(t)^3 \; J$, which we substitute into the generalized Euler-Lagrange equation for third order Lagrangians: $$\begin{aligned} -\frac{d^3}{dt^3} \Big( \frac{\partial L}{ \partial a^{(3)}} \Big) + \frac{d^2}{dt^2} \Big( \frac{\partial L}{ \partial \ddot{a}} \Big) - \frac{d}{dt} \Big( \frac{\partial L}{ \partial \dot{a}} \Big) + \frac{\partial L}{ \partial a} =0.\end{aligned}$$ Finally, we only need to consider the combination : $$\begin{aligned} J=v_4 \mathcal{L}_4 +v_6 \mathcal{L}_6 +v_7 \mathcal{L}_7 + x_8 \curv{L}_8 , \end{aligned}$$ and after deriving the equation of motion and imposing a simultaneous cancellation of the higher order terms, we find that there is only one linear combination leading to second order equations : $$\begin{aligned} J_1 =-7 \mathcal{L}_4 + 2 (6 \mathcal{L}_6 +\mathcal{L}_7)=72 H(t)^4 \left(3 \dot{H}(t)+2 H(t)^2\right),\end{aligned}$$ where $H(t)=\dot{a}(t) / a(t)$ is the Hubble parameter. We note that this linear combination only involves contractions of curvature tensors, which is the kind of corrections expected to follow from quantum field theory. 
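As an independent cross-check (not part of the original derivation), the identity $J_1 =-7 \mathcal{L}_4 + 2 (6 \mathcal{L}_6 +\mathcal{L}_7)=72 H^4 (3 \dot{H}+2 H^2)$ can be reproduced by computing the curvature invariants of the flat FLRW metric directly with a computer algebra system. The following sympy sketch is one way to do it; the tensor routines are generic and assume the standard (Wald) sign conventions with signature $(-,+,+,+)$:

```python
import sympy as sp

t, x, y, z = sp.symbols('t x y z')
a = sp.Function('a', positive=True)(t)
crd = [t, x, y, z]
g = sp.diag(-1, a**2, a**2, a**2)   # flat FLRW metric, signature (-,+,+,+)
gi = g.inv()
N, d = 4, sp.diff

# Christoffel symbols Gamma^r_{mn}
Gam = [[[sp.simplify(sum(gi[r, s]*(d(g[s, n], crd[m]) + d(g[s, m], crd[n])
                                   - d(g[m, n], crd[s])) for s in range(N))/2)
         for n in range(N)] for m in range(N)] for r in range(N)]

# Riemann tensor R^r_{smn} = d_m Gamma^r_{ns} - d_n Gamma^r_{ms} + quadratic terms
def riem(r, s, m, n):
    e = d(Gam[r][n][s], crd[m]) - d(Gam[r][m][s], crd[n])
    e += sum(Gam[r][m][l]*Gam[l][n][s] - Gam[r][n][l]*Gam[l][m][s] for l in range(N))
    return sp.simplify(e)

Rup = [[[[riem(r, s, m, n) for n in range(N)] for m in range(N)]
        for s in range(N)] for r in range(N)]
Rdn = [[[[sp.simplify(sum(g[r, p]*Rup[p][s][m][n] for p in range(N)))
          for n in range(N)] for m in range(N)] for s in range(N)] for r in range(N)]

Ric = sp.Matrix(N, N, lambda s, n: sp.simplify(sum(Rup[r][s][r][n] for r in range(N))))
Rsc = sp.simplify(sum(gi[m, n]*Ric[m, n] for m in range(N) for n in range(N)))

# the metric is diagonal, so raising indices only needs the diagonal inverse factors
T = sp.simplify(sum(gi[i, i]*gi[j, j]*gi[k, k]*gi[l, l]*Rdn[i][j][k][l]**2
                    for i in range(N) for j in range(N)
                    for k in range(N) for l in range(N)))          # T = L_4 / R
S = sp.simplify(sum(gi[i, i]*gi[j, j]*Ric[i, j]**2
                    for i in range(N) for j in range(N)))          # S = L_7 / R
L6 = sp.simplify(sum(gi[i, i]*Ric[i, j]*gi[j, j]*Ric[j, k]*gi[k, k]*Ric[k, i]
                     for i in range(N) for j in range(N) for k in range(N)))

H = d(a, t)/a
J1 = -7*Rsc*T + 12*L6 + 2*Rsc*S          # -7 L_4 + 2 (6 L_6 + L_7)
assert sp.simplify(J1 - 72*H**4*(3*d(H, t) + 2*H**2)) == 0
```

The assertion confirms the coefficient pattern $(-7,\,12,\,2)$ of the unique second order combination, and the same machinery can be reused to check the algebraic relations of the first Appendix.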
Therefore, considering the following action that could represent a natural high energy geometrical correction to the Einstein-Hilbert action for FLRW space-time, $$\begin{aligned} S_1= \int d^4x \sqrt{-g} \; \Bigg( \frac{1}{16 \pi}\bigg[ R +\nu \Big( -7 \, R R^{\mu\nu\alpha\beta}R_{\mu\nu\alpha\beta} + 12\, R^{\mu\nu}R_{\nu\alpha}R^{\alpha}_{\;\,\mu} + 2 \, R R^{\mu\nu} R_{\mu\nu} \;\Big) \;\bigg] + \curv{L}_m \Bigg),\end{aligned}$$ where $\curv{L}_m$ is the Lagrangian density for matter, one finds the acceleration equation : $$\begin{aligned} 3 H(t)^2+2 \dot{H}(t)-36\, \nu \, H(t)^4 \big(H(t)^2+2 \dot{H}(t)\big) = -8\pi p , \end{aligned}$$ where $p$ is the cosmic pressure. Then the energy conservation equation, with $\rho$ the cosmic energy density, $$\begin{aligned} \frac{d\rho}{dt}+3H(t) \big(\rho + p \big)=0,\end{aligned}$$ gives the following modified Friedmann equation : $$\begin{aligned} 3 H(t)^2-36 \, \nu \, H(t)^6= 8 \pi \rho .\end{aligned}$$ One can solve it, and choose only the solutions that reduce to the standard equation when $\rho$ is small. However, the only solution we are going to see in this work for FLRW spacetime comes from a non-polynomial correction involving order 8 scalars, but the fact that $S_1$ is unique and second order could be a sufficient reason to study its cosmological solutions. ### Non-polynomial gravity. $H^3$ correction. Now we want to consider all the order 6 linear combinations that are perfect squares, in order to consider non-polynomial corrections that are the square-roots of these squares. 
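Before doing so, the consistency of the previous three equations — the acceleration equation, energy conservation, and the modified Friedmann equation — can be checked symbolically. The following short sympy sketch (ours, purely illustrative) eliminates $\rho$ and $p$ and verifies that the acceleration equation is recovered identically:

```python
import sympy as sp

t, nu = sp.symbols('t nu', real=True)
H = sp.Function('H')(t)

# modified Friedmann equation of the text: 3H^2 - 36 nu H^6 = 8 pi rho
rho = (3*H**2 - 36*nu*H**6) / (8*sp.pi)
# energy conservation rho' + 3H(rho + p) = 0 fixes the pressure
p = -rho - sp.diff(rho, t) / (3*H)
# acceleration equation derived from S_1
accel = (3*H**2 + 2*sp.diff(H, t)
         - 36*nu*H**4*(H**2 + 2*sp.diff(H, t)) + 8*sp.pi*p)
assert sp.simplify(accel) == 0   # the three equations are mutually consistent
```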
Let us rewrite the most general linear combination in terms of the Hubble parameter $H(t)$ : $$\begin{aligned} \begin{split} J=&\sum\limits_{i=4, 6, 7} v_i \mathcal{L}_i + \sum\limits_{j=1, 3, 5, 8} x_j \curv{L}_j =3 \Bigg( \widetilde{\sigma}_1(v_i,x_i) \; H(t)^6 + \widetilde{\sigma}_2(v_i,x_i) \; H(t)^4 \dot{H}(t) + \widetilde{\sigma}_3(v_i,x_i) \; H(t)^2 \dot{H}(t)^2 \\& \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad + \widetilde{\sigma}_4(v_i,x_i) \; \dot{H}(t)^3 + \widetilde{\sigma}_5(v_i,x_i) \; H(t)^3 \ddot{H}(t)+ \widetilde{\sigma}_6(v_i,x_i) \; H(t) \dot{H}(t) \ddot{H}(t) \\& \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad + \widetilde{\sigma}_7(v_i,x_i) \ddot{H}(t)^2+ \widetilde{\sigma}_8(v_i,x_i) \; H(t)^2 H^{(3)}(t)+ \widetilde{\sigma}_9(v_i,x_i) \; \dot{H}(t) H^{(3)}(t) \Bigg). \end{split}\end{aligned}$$ To find squares, we use the following general procedure, which is useful for FLRW space-time and necessary for spherical symmetry : take the highest order perfect square $\ddot{H}(t)^2$. In the expansion of our square, each term will be multiplied by $\ddot{H}(t)$, so the only terms that can enter inside are those $K_i(t)$ for which $K_i(t) \ddot{H}(t)$ and $K_i(t)K_j(t)$ exist in the expansion of order 6 scalars. Because of this, we need to impose the conditions : $ \widetilde{\sigma}_9\, =\, 0 \;$, $\widetilde{\sigma}_8\, =\, 0 \, \;$ and $\, \widetilde{\sigma}_4\, =\, 0$, which gives $x_1=x_3=0$ and $v_7=-v_4-5 v_6/12$. 
Therefore, $\widetilde{\sigma}_5\, =\, 0$ and there are only two possible forms of squares made of order 6 scalars : $$\begin{aligned} \sum\limits_{i,j} \big( v_i \mathcal{L}_i + x_j \curv{L}_j \big) = \Big( \delta H(t) \dot{H}(t) + \gamma H(t)^3 \Big)^2,\end{aligned}$$ $$\begin{aligned} \text{And :} \quad \quad \quad \sum\limits_{i,j} \big( v_i \mathcal{L}_i + x_j \curv{L}_j \big) = \Big( \xi \ddot{H}(t) + \delta H(t) \dot{H}(t) \Big)^2.\end{aligned}$$ It means that all their square-roots can be decomposed in the basis $\big( H(t)^3, \, H(t) \dot{H}(t) , \, \ddot{H}(t) \big)$. Moreover, the general Lagrangian density : $$\begin{aligned} \sqrt{\sum\limits_{i,j} \big( v_i \mathcal{L}_i + x_j \curv{L}_j \big) }= \xi \ddot{H}(t) + \delta H(t) \dot{H}(t) + \gamma H(t)^3 \, ,\end{aligned}$$ leads to second order differential equations for all $\big( \xi, \delta, \gamma \big) $, so we do not need to impose additional conditions on these coefficients, and there are then only 3 independent second order corrections that we can find in this way. Now let us see which perfect squares one can actually find. Solving the natural conditions for, respectively, the first and the second kind of perfect squares, $\, \widetilde{\sigma}_2^2\, =\, 4 \, \widetilde{\sigma}_3\, \widetilde{\sigma}_1 \, \;$, $\widetilde{\sigma}_6 = \widetilde{\sigma}_7 = 0$ and $\, \widetilde{\sigma}_6^2\, =\, 4 \, \widetilde{\sigma}_3\, \widetilde{\sigma}_7 \, \;$, $\widetilde{\sigma}_1 = \widetilde{\sigma}_2 = 0$, one finds the following : $$\begin{aligned} \begin{split} &J_2 = 2 \mathcal{L}_4+12 \mathcal{L}_6 -7 \mathcal{L}_7 = -72 \bigg( 4 H^3 + 3 \dot{H}H \bigg)^2 \, , \\ & J_3= \bigg(6\mathcal{L}_4 -12\mathcal{L}_6-\mathcal{L}_7 \bigg)\alpha \big( 5\alpha +18\beta \big) +6 \big( \alpha +3 \beta \big) \bigg( \alpha \curv{L}_5 + \beta \curv{L}_8 \bigg) \\ &~~~~~ = -72 \bigg( 3 \big( \alpha +4\beta \big) H(t)\dot{H}(t) + \big( \alpha +3\beta \big) \ddot{H}(t) \bigg)^2 \, . 
\end{split}\end{aligned}$$ The last one is a general formula that gives perfect squares for any value of $\big(\alpha,\beta\big)$. As we just saw, there are only 3 independent square-roots of these squares, so from the general expression $J_3$, we can choose the last two to be the squares given by $\big(\alpha=0,\beta= \sqrt{2}/6 \big)$ and $\big(\alpha=1,\beta=0\big)$: $$\begin{aligned} J_{3,1} = \curv{L}_8 = -36 \left(4 H(t) \dot{H}(t)+\ddot{H}(t)\right)^2,\end{aligned}$$ $$\begin{aligned} \text{And :} \quad \quad \quad J_{3,2} = \Big( 6 \curv{L}_5+5 \big( 6 \mathcal{L}_4 - 12\mathcal{L}_6 -\mathcal{L}_7\big) \Big) =-72 \left(3 H(t) \dot{H}(t)+\ddot{H}(t)\right)^2.\end{aligned}$$ Indeed, one can check that the following combination, with $\epsilon_i$ the signs inside the squares, $$\begin{aligned} \begin{split} \nu_0 \, \epsilon_0 \sqrt{-J_3} +\nu_1 \, \epsilon_1 \sqrt{-J_{3,1}}+\nu_2 \, \epsilon_2 \sqrt{-J_{3,2}} =& 6 \Big(3 \sqrt{2} \, (\alpha +4 \, \beta ) \, \nu_0 \, \epsilon_0+4 \, \nu_1 \, \epsilon_1+3 \, \sqrt{2} \, \nu _2 \, \epsilon _2 \Big) H(t) \dot{H}(t) \\+& 6 \Big( \sqrt{2} \, (\alpha +3\, \beta ) \, \nu _0 \, \epsilon _0+ \, \nu _1\, \epsilon _1 \,+\sqrt{2} \, \nu _2 \, \epsilon _2 \Big) \ddot{H}(t) , \end{split} \end{aligned}$$ vanishes for $\nu _2=-\alpha \nu _0 \epsilon _0/\epsilon _2$ and $\nu _1=-3 \sqrt{2} \beta \nu _0 \epsilon _0/\epsilon _1$, and so the relation becomes explicitly, for all values of $(\alpha,\beta)$ : $$\begin{aligned} \begin{split} ~& \sqrt{- \Bigg[ \Big(6\mathcal{L}_4 -12\mathcal{L}_6-\mathcal{L}_7 \Big)\alpha \big( 5\alpha +18\beta \big) +6 \big( \alpha +3 \beta \big) \Big( \alpha \curv{L}_5 + \beta \curv{L}_8 \Big) \Bigg] } \\ &- \frac{3\beta \epsilon_0 \sqrt{2}}{\epsilon_1} \sqrt{-\curv{L}_8 } - \frac{\epsilon_0 \alpha}{\epsilon_2} \sqrt{- \Big( 6 \curv{L}_5+5 \big( 6 \mathcal{L}_4 - 12\mathcal{L}_6 -\mathcal{L}_7\big) \Big) } =0. 
\end{split} \end{aligned}$$ Therefore, this general formula for perfect squares depends only on $\sqrt{-J_{3,1}}$ and $\sqrt{-J_{3,2}}$, as we said.\ We note here that an interesting property coming from the existence of an infinite number $J_3\big(\alpha,\beta\big)$ of perfect squares, whose square-roots can be decomposed in a small basis, is that it gives some non-linear algebraic relations between the scalars of the FKWC-basis, which reduce it in a non-trivial way and allow one to have a very small number of independent corrections. Indeed, solving the previous equation for $\mathcal{L}_4$, we find : $$\begin{aligned} \begin{split} \mathcal{L}_4 = \frac{1}{54} \Big( -9 \curv{L}_5 + 2 \curv{L}_8 + 108 \mathcal{L}_6 + 9 \mathcal{L}_7 - \sqrt{ \curv{L}_8 \big( 18 \curv{L}_5 -5 \curv{L}_8 \big) } \epsilon_0 \epsilon_1 \Big). \end{split} \end{aligned}$$ We can now calculate the equations of motion for the 3 Lagrangian densities $\sqrt{-J_{2}}$, $\sqrt{-J_{3,1}}$ and $\sqrt{-J_{3,2}}$. First, one can check that the last one is in fact a topological term that does not bring any contribution to the equation of motion. Moreover, the first two Lagrangian densities give the same equation of motion: $54 \, \dot{a}(t) \ddot{a}(t)=0$ for the first one, and $18 \sqrt{2} \, \dot{a}(t) \ddot{a}(t)=0$ for the second one. It means that they are equal up to an invariant scalar $T$ for which $\sqrt{-g}\, T$ is a total derivative, $$\begin{aligned} \sqrt{- \Big(2 \mathcal{L}_4+12 \mathcal{L}_6 -7 \mathcal{L}_7\Big) }=\frac{3}{\sqrt{2}} \sqrt{- \curv{L}_8} + T \; \, \text{,}\end{aligned}$$ such that we can in fact consider a unique scalar (let us choose $\sqrt{- \curv{L}_8}$) made of order 6 scalars that leads to non-vanishing second order differential equations. We can note that $T$ cannot be equal to $\sqrt{-J_{3,2}}$ because $\sqrt{-J_{2}}$ contains an $H^3$ term. Therefore $T$ could be found considering higher order derivative scalars, and other perfect powers. 
In this case, fourth powers of order 12 scalars, for example. There is then another non-linear relation between order 6 scalars and (possibly) order 12 ones. Finally, to recapitulate this part, we have found that it is natural to consider only one perfect square made of order 6 scalars : $\curv{L}_8 = \nabla^\sigma R \nabla_\sigma R $. As a result, the action : $$\begin{aligned} S_2= \int d^4x \sqrt{-g} \; \Bigg( \frac{1}{16 \pi}\bigg[ R +\nu \sqrt{ - \nabla^\sigma R \nabla_\sigma R } \;\bigg] + \curv{L}_m \Bigg),\end{aligned}$$ leads to a unique second order $H^3$-correction to the Friedmann equation, as is easy to see using the same reasoning as in the previous section. Order 8 ------- Now let us study the linear combinations and squares made of order 8 scalars that lead to second order equations of motion. We do not copy all the FKWC basis for a general metric, but we name the scalars according to their position in Ref [@17], where this basis is fully written. The reduced FKWC basis for order 8 scalars in FLRW space-time is the following :  \   $\left\{ \begin{array}{l} \mathcal{K}_1=R^4 \quad\; , \quad \mathcal{K}_{10}=R^{\mu\nu}R^{\alpha\beta}R^{\sigma\rho}_{\;\,\;\,\,\mu\alpha}R_{\sigma\rho\nu\beta} \quad\; , \quad \mathcal{K}_{11}=R\,R^{\mu\nu\alpha\beta}R_{\mu\;\,\alpha}^{\;\,\sigma\;\,\rho}R_{\nu\sigma\beta\rho} = R \big( \frac{1}{4} \mathcal{L}_1 -\mathcal{L}_2 \big) \\~ \\ \mathcal{K}_{12}=T^2 \quad\; , \quad \mathcal{M}_1 = R\,\Box^2R \quad\; , \quad \mathcal{M}_2 =R_{\mu\nu} \nabla^{\mu}\nabla^{\nu}\Box R \quad\; , \quad \mathcal{M}_3=R^{\mu\nu}\Box^2R_{\mu\nu} \\~ \\ \mathcal{M}_5=\nabla^{\mu}\Box R \nabla_{\mu} R \;\;\, , \;\; \mathcal{M}_6=\nabla_{\mu}\nabla_{\nu}\nabla_{\alpha}R \nabla^{\mu} R^{\nu\alpha} \;\;\, , \;\; \mathcal{M}_{10}=\big(\Box R \big)^2 \;\; \, , \;\; \mathcal{M}_{11}=\nabla_{\mu}\nabla_{\nu}R \nabla^{\mu}\nabla^{\nu}R \\~ \\ \mathcal{M}_{12}=\nabla^{\mu}\nabla^{\nu}R \Box R_{\mu\nu} \quad\; , \quad 
\mathcal{M}_{14}=\nabla_{\mu}\nabla_{\nu} R_{\alpha\beta} \nabla^{\mu}\nabla^{\nu} R^{\alpha\beta} \quad\; , \quad \mathcal{M}_{18}=R \, \curv{L}_1 \quad\; , \quad \mathcal{M}_{19}=R \, \curv{L}_4 \\~ \\ \mathcal{M}_{20}= S \Box R \quad\; , \quad \mathcal{M}_{33}= R \, \curv{L}_8 \end{array} \right. $\  \       \ We also introduce the definitions $\mathcal{K}_{9}=R^{\mu\nu}R^{\alpha}_{\;\,\mu}R_{\nu}^{\;\,\beta\sigma\rho} R_{\rho\sigma\beta\alpha}$ $\,$, $\,$ $\mathcal{M}_{13}=\Box R_{\mu\nu} \Box R^{\mu\nu}$ $\,$ and $\,$ $\mathcal{M}_{16}=\nabla_{\mu}\nabla_{\nu} R_{\alpha\beta} \nabla^{\beta}\nabla^{\alpha} R^{\nu\mu}$ that will be useful later for static spherically symmetric space-times. ### Linear Combination. $H^8$ correction. Consider the sum of all independent order 8 scalars for FLRW space-time : $$\begin{aligned} J=&\sum v_i \mathcal{K}_i + \sum x_j \mathcal{M}_j \end{aligned}$$ Here, we follow exactly what we did for order 6 scalars. We derive the equation of motion associated with the previous sum, and see what conditions on $(v_i , x_j)$ cancel the equation, such that we find the 10 equivalence relations that exist between the scalars of the reduced basis. 
Therefore, we can consider only the following independent scalars with respect to the equation of motion, $\big( \mathcal{K}_{1},\mathcal{K}_{10},\mathcal{K}_{11},\mathcal{K}_{12},\mathcal{M}_{1},\mathcal{M}_{11},\mathcal{M}_{12} \big)$, and the reduced sum : $$\begin{aligned} J=&\sum\limits_{i=1, 10, 11, 12} v_i \mathcal{K}_i + \sum\limits_{j=1, 11, 12} x_j \mathcal{M}_j \, .\end{aligned}$$ We derive its associated equation of motion and see what a simultaneous cancellation of all the higher order terms implies for the coefficients $(v_i , x_j)$ : we find that the unique linear combination of order 8 scalars for FLRW space-time that leads to second order equations is : $$\begin{aligned} J_{4}=\mathcal{K}_{1} -48 \, \mathcal{K}_{11} -9 \, \mathcal{K}_{12} =1728 \, \Big( \, H^8 +2 \dot{H}H^6 \, \Big).\end{aligned}$$ Therefore, one may consider the action : $$\begin{aligned} \begin{split} S_{3} = \int d^4x \sqrt{-g} \; \Bigg( \frac{1}{16 \pi}\bigg[ R +\nu \Big( R^4 -48 \, R\,R^{\mu\nu\alpha\beta}R_{\mu\;\,\alpha}^{\;\,\sigma\;\,\rho}R_{\nu\sigma\beta\rho} -9 \, \big( R^{\mu\nu\alpha\beta} R_{\mu\nu\alpha\beta} \big)^2 \Big) \;\bigg] + \curv{L}_m \Bigg) , \end{split}\end{aligned}$$ and see that it brings an $H^8$ correction to the Friedmann equation. We note that this correction involves only contractions of curvature tensors, as in the order 6 case. ### Non-polynomial gravity. $H^4$ correction. #### Correction to the Einstein-Hilbert action  \   To find non-polynomial second order models from order 8 scalars, we follow exactly what we did for order 6 and find the same kind of result : there are only two classes of perfect squares in this case : those whose square-roots give topological scalars, which do not give any contribution to the equation of motion, and a class of equivalent scalars with respect to the equation of motion, up to the topological scalars of the first class. 
Therefore, in this case also, we can consider a unique perfect square that contributes to the dynamics. To begin, the most general square made of order 8 scalars has the form : $$\begin{aligned} \big( \alpha H(t)^4+\beta H(t)^2 \dot{H}(t)+\gamma \dot{H}(t)^2+\delta H(t) \ddot{H}(t)+\sigma H^{(3)}(t) \big)^2.\end{aligned}$$ And its square-root gives the following equation of motion: $$\begin{aligned} \begin{split} ~& 3 \big( \alpha -\beta +\gamma +2 \delta -6 \sigma \big) \, \dot{a}(t)^2 \, \Big( \dot{a}(t)^2-4 a(t) \ddot{a}(t)\Big) \\ &+\big( \gamma -\delta +3 \sigma \big) \, a(t)^2 \, \Big(3 \ddot{a}(t)^2+4 \dot{a}(t) a^{(3)}(t)+2 a(t) a^{(4)}(t)\Big) =0. \end{split}\end{aligned}$$ Following the section concerning order 6, we can say that there are 5 independent contributions made of square-roots of order 8 scalars, but in this case, there is also one condition to impose on $\big( \alpha, \beta, \gamma, \delta, \sigma \big)$ in order to have second order equations of motion. Therefore there will be only 4 independent contributions, and we can choose them to be : $$\begin{aligned} \begin{split} & \sqrt{J_5} = \sqrt{ -38 \mathcal{K}_1-2448 \mathcal{K}_{10}+2400 \mathcal{K}_{11}+1086 \mathcal{K}_{12}-143 \mathcal{M}_{10}-220 \mathcal{M}_{11}+792 \mathcal{M}_{12}+88 \mathcal{M}_{18}-352 \mathcal{M}_{19} } \\ &~~~~~~~=6 \sqrt{33} \left(3 H(t) \ddot{H}(t)+H^{(3)}(t)\right) , \\& \mathcal{E}_4 = \sqrt{ \big( \mathcal{E}_4 \big)^2} =\frac{1}{\sqrt{33}} \sqrt{\mathcal{K}_{1} -144\mathcal{K}_{10}+48 \mathcal{K}_{11} +27 \mathcal{K}_{12}} =24 \,H(t)^2 \left(H(t)^2+\dot{H}(t)\right) , \\& \Box R = \sqrt{\mathcal{M}_{10} } = -6 \left(12 H(t)^2 \dot{H}(t)+4 \dot{H}(t)^2+7 H(t) \ddot{H}(t)+H^{(3)}(t)\right) , \\& \sqrt{ J_6 }=\sqrt{ - \Big( 5\mathcal{K}_{1} + 9 \big( 8 \mathcal{K}_{10}-32 \mathcal{K}_{11} -7 \mathcal{K}_{12} \big) \Big) }= 12 \sqrt{66} H(t)^2 \dot{H}(t) , \end{split}\end{aligned}$$ where only the last one gives a non-vanishing contribution to the equations of 
motion. We note that, of course, because there are many more perfect squares than the previous ones, the fact that they form a basis gives once more some non-linear algebraic relations between the order 8 scalars for FLRW space-time, and reduces the already reduced FKWC-basis. However, it is not the aim of this work to find all these relations. We focus here on the unique modified action with respect to order 8 scalars : $$\begin{aligned} S_4=& \int d^4x \sqrt{-g} \; \Bigg( \frac{1}{16 \pi}\bigg[ R +\nu \sqrt{ -5 R^4 -9 \Big( 8 R^{\mu\nu}R^{\alpha\beta}R^{\sigma\rho}_{\;\,\;\,\,\mu\alpha}R_{\sigma\rho\nu\beta} - 32 R\,R^{\mu\nu\alpha\beta}R_{\mu\;\,\alpha}^{\;\,\sigma\;\,\rho}R_{\nu\sigma\beta\rho} } \nonumber\\ & \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \overline{ ~ -7 \big( R^{\mu\nu\alpha\beta} R_{\mu\nu\alpha\beta} \big)^2 ~\Big) } \;\bigg] + \curv{L}_m \Bigg). \nonumber\\\end{aligned}$$ #### Correction to the Friedmann equation  \   From this action, we get the acceleration equation : $$\begin{aligned} 3 H(t)^2+2 \dot{H}(t)-6 \nu \sqrt{66} H(t)^2 \left(3 H(t)^2+4 \dot{H}(t)\right) = -8\pi p . \end{aligned}$$ In analogy with equations (12) to (14), this last modification leads to an $H^4$ modification of the Friedmann equation. Now we are going to see that this unique correction is very interesting regarding the problem of the Big Bang singularity, as was shown in Ref [@16]. 
It is convenient to introduce the dimensional quantity $\rho_c$, the critical energy density of the universe, and the constant parameter $ \epsilon $, such that $$\begin{aligned} \frac{\epsilon}{8 \pi \rho_c}=- 2 \nu \sqrt{66}\,.\end{aligned}$$ Thus, the modified Friedmann equation induced by the action $S_4$ is : $$\begin{aligned} 3 H(t)^2+ \epsilon \; \frac{9 H(t)^4}{8 \pi \rho_c}=8 \pi \rho.\end{aligned}$$ The solution $H(t)^2$ that reduces to the standard Friedmann equation in the limit when $ \rho_c $ goes to infinity is : $$\begin{aligned} 3 H(t)^2 = \, \frac{4 \pi \rho_c }{ \epsilon }\left(-1+\sqrt{\frac{4 \epsilon \rho }{\rho_c}+1} \, \right).\end{aligned}$$ Furthermore, if the energy density of the universe is small compared to the critical one, $\rho \ll \rho_c$, this equation becomes : $$\begin{aligned} H(t)^2 = \, \frac{8 \pi \rho }{3} \Big( 1 - \epsilon \, \frac{\rho}{\rho_c} \Big) + O(\rho^3).\end{aligned}$$ Choosing $\epsilon=1$ gives the loop quantum cosmology correction to the Friedmann equation [@19], and choosing $\epsilon=-\frac{1}{2}$ gives the Randall-Sundrum brane world model [@20]. These two corrections are regular cosmological solutions and allow one to avoid the Big-Bang singularity [@19; @16]. One should also note that the above equation may be obtained within other approaches (see [@21; @22]). Static Spherically Symmetric Space-times ======================================== In this section, we study static spherically symmetric space-times, defined by the general metric: $$\begin{aligned} ds^2 = -B(r) dt^2 + A(r) dr^2 + r^2 d\Omega^2.\end{aligned}$$ First, following exactly the same procedure as for the FLRW case, it is possible to show that there is no second order linear combination made of order 6 scalars. The same conclusion is valid for the classes $\mathcal{R}_{2,2}^0$ and $\mathcal{R}_{8,4}^0$. 
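Stepping back for a moment to the modified Friedmann equation above: both the selection of the physical branch and the small-density expansion can be verified symbolically. The following sympy sketch (our own check, not part of the original derivation) treats $H^2$ as an algebraic unknown:

```python
import sympy as sp

eps, rho, rho_c = sp.symbols('epsilon rho rho_c', positive=True)
H2 = sp.symbols('H2', positive=True)   # stands for H(t)^2

# modified Friedmann equation: 3 H^2 + eps * 9 H^4 / (8 pi rho_c) = 8 pi rho
friedmann = 3*H2 + eps*9*H2**2/(8*sp.pi*rho_c) - 8*sp.pi*rho

# branch quoted in the text: 3 H^2 = (4 pi rho_c / eps)(-1 + sqrt(1 + 4 eps rho / rho_c))
phys = (4*sp.pi*rho_c/(3*eps))*(-1 + sp.sqrt(1 + 4*eps*rho/rho_c))
assert sp.expand(friedmann.subs(H2, phys)) == 0

# it reduces to the standard equation 3H^2 = 8 pi rho when rho_c -> infinity ...
assert sp.limit(3*phys, rho_c, sp.oo) == 8*sp.pi*rho

# ... and gives H^2 = (8 pi rho / 3)(1 - eps rho / rho_c) + O(rho^3) for rho << rho_c
expansion = sp.series(phys, rho, 0, 3).removeO()
assert sp.simplify(expansion - (8*sp.pi*rho/3)*(1 - eps*rho/rho_c)) == 0
```

The last assertion makes explicit that the leading correction is quadratic in $\rho$, which is what the comparison with loop quantum cosmology ($\epsilon=1$) and the Randall-Sundrum model ($\epsilon=-1/2$) relies on.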
Now for order 4 scalars (and more generally for all orders, considering only monomials of the curvature tensor), the result of Deser and Ryzhov [@18] shows that the most general second order action is : $$\begin{aligned} S_5 = \int d^4x \sqrt{-g} \; \Big( R +\, \sqrt{3} \, \sigma \, \sqrt{ W^{\mu\nu\alpha\beta}W_{\mu\nu\alpha\beta}} \; \Big).\end{aligned}$$ Moreover, there are no other perfect squares than $W^{\mu\nu\alpha\beta}W_{\mu\nu\alpha\beta}$ and $R^2$. Concerning order 4, the square-roots of all the scalars that are perfect squares lead to second order equations of motion. Now, starting from order 6 scalars, it is possible to show that there are again only two perfect squares. Thus, we can consider the action : $$\begin{aligned} S_6 = \int d^4x \sqrt{-g} \; \Bigg( \delta \, \sqrt{ \nabla_\sigma R \nabla^\sigma R } + \, \sqrt{3} \, \gamma \, \sqrt{ C^{\mu\nu\alpha}C_{\mu\nu\alpha}} \; \Bigg),\end{aligned}$$ where $C_{\mu\nu \alpha}$ is the Cotton tensor, expressed in terms of the Weyl tensor as $C_{\mu\nu \alpha} = -2 \nabla^{\sigma} W_{\mu \nu\alpha\sigma} $. Its square may be written in our basis as $C^{\mu\nu\alpha}C_{\mu\nu\alpha}=2 \big( \curv{L}_5-\curv{L}_6 \big) - \frac{1}{6} \curv{L}_8$. In our search for second order differential equations, it is interesting to note that the property of order 4 perfect squares is preserved here : these two terms are such that their higher order terms cancel perfectly, which makes their associated equations of motion second order. 
For example, the term : $$\begin{aligned} \begin{split} \sqrt{-g} \; &\sqrt{ C^{\mu\nu\alpha}C_{\mu\nu\alpha} } =\frac{\sqrt{3}}{6} \frac{\sqrt{B(r)}}{r B(r)^3A(r)^3} \Bigg( \Sigma \Big(r,B(r),A(r),B'(r),A'(r),B''(r),A''(r) \Big) \\ &-3 r^3 B(r)^2 A(r) A'(r) B''(r) -r^3 B(r)^2 A(r) B'(r) A''(r)+2 r^3 B(r)^2 A(r)^2 B^{(3)}(r) \Bigg), \end{split}\end{aligned}$$ where $\Sigma \Big(r,B(r),A(r),B'(r),A'(r),B''(r),A''(r) \Big)$ is a sum of 15 first order terms (which lead trivially to second order differential equations), is equivalent, up to boundary terms, to the following first order expression : $$\begin{aligned} \begin{split} \sqrt{-g} \; &\sqrt{ C^{\mu\nu\alpha}C_{\mu\nu\alpha} } \equiv \frac{\sqrt{3}}{6} \frac{\sqrt{B(r)}}{r B(r)^3A(r)^3} \Bigg( 4B(r)^3 A(r)^2-4 B(r)^3 A(r)^3-5 r B(r)^2 A(r)^2 B'(r) \\&\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad +2 r^2 B(r) A(r)^2 B'(r)^2-\frac{1}{4} r^3 A(r)^2 B'(r)^3 \Bigg). \end{split}\end{aligned}$$ Therefore, we have shown first that, up to order 6, all the perfect squares that one can build for static spherically symmetric space-time are also perfect squares in FLRW, as we have seen for the case of $ \nabla_\sigma R \nabla^\sigma R$, because in this space-time $W_{\mu \nu\alpha\sigma} =0$. And secondly, that they also share the property that their square-roots lead to second order equations of motion for the metric field. 
The equations of motion coming from these perfect squares, i.e. from the action $S_6$, for $A(r)$ and $B(r)$ respectively, are: $$\begin{aligned} \begin{split} ~& 16 \big(\gamma +2 \delta \big) B(r)^3+4 r \big(-5 \gamma +2 \delta \big) B(r)^2 B'(r) \\& +4 r^2 \big(2 \gamma +\delta \big) B(r) B'(r)^2+r^3 \big(-\gamma +\delta \big) B'(r)^3=0, \end{split}\end{aligned}$$ And, $$\begin{aligned} \begin{split} ~& 8 \big(\gamma +2 \delta \big) \Big( -1+A(r)\Big) A(r) B(r)^3+4 r \big(5 \gamma -2 \delta \big) B(r)^3 A'(r)-18 r^2 \gamma A(r) B(r) B'(r)^2 \\& +8 r \big(2 \gamma +\delta \big) B(r)^2 \Big(-r A'(r) B'(r)+A(r) \left(B'(r)+r B''(r)\right)\Big) \\& +r^3 \big(\gamma -\delta \big) B'(r) \Big(5 A(r) B'(r)^2+3 B(r) \left(A'(r) B'(r)-2 A(r) B''(r)\right)\Big)=0. \end{split}\end{aligned}$$ The first one provides solutions for $B(r)$, so the two equations decouple. We have found real exact vacuum solutions for 3 couples $(\gamma \, , \, \delta)$ and have computed their associated Kretschmann scalars $ R^{\mu\nu\alpha\beta}R_{\mu\nu\alpha\beta}$ to see if they suffer from singularities at $r=0$. - For $\gamma =1$ and $\delta =0$ : $$\begin{aligned} ds^2 = -k \, r^2 \; dt^2 + \frac{r^2}{p+r^2} \; dr^2 + r^2 d\Omega^2 \quad\quad \text{and} \quad\quad R^{\mu\nu\alpha\beta}R_{\mu\nu\alpha\beta} \to \frac{p^2}{r^8} , \end{aligned}$$ - For $\gamma =0$ and $\delta =1$ : $$\begin{aligned} ds^2 = -\frac{k}{r^4} \; dt^2 + \frac{3}{1+p \, r^2} \; dr^2 + r^2 d\Omega^2 \quad\quad \text{and} \quad\quad R^{\mu\nu\alpha\beta}R_{\mu\nu\alpha\beta} \to \frac{15 p^2-2 p+3}{r^4} , \end{aligned}$$ - For $\gamma =1$ and $\delta =-\frac{1}{2}$ : $$\begin{aligned} ds^2 = -k \; dt^2 + p \;dr^2 + r^2 d\Omega^2 \quad\quad \text{and} \quad\quad R^{\mu\nu\alpha\beta}R_{\mu\nu\alpha\beta} \to \frac{(p-1)^2}{p^2 \, r^4} . \end{aligned}$$ where $p$ and $k$ are integration constants. Note that $k$ can be set equal to one by a suitable choice of the time coordinate. 
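One can verify directly that the three metric functions $B(r)$ above satisfy the first (decoupled) equation of motion for their respective couples $(\gamma, \delta)$; the following sympy sketch (ours, for illustration) performs the substitution:

```python
import sympy as sp

r, k, p = sp.symbols('r k p', positive=True)
gamma, delta = sp.symbols('gamma delta')
B = sp.Function('B')(r)

# first equation of motion coming from S_6 (it involves only B(r))
eomB = (16*(gamma + 2*delta)*B**3
        + 4*r*(-5*gamma + 2*delta)*B**2*sp.diff(B, r)
        + 4*r**2*(2*gamma + delta)*B*sp.diff(B, r)**2
        + r**3*(-gamma + delta)*sp.diff(B, r)**3)

# the three exact vacuum solutions quoted in the text
cases = [
    (1, 0, k*r**2),              # gamma = 1,  delta = 0
    (0, 1, k/r**4),              # gamma = 0,  delta = 1
    (1, -sp.Rational(1, 2), k),  # gamma = 1,  delta = -1/2
]
for g_, d_, Bsol in cases:
    res = eomB.subs({gamma: g_, delta: d_, B: Bsol}).doit()
    assert sp.simplify(res) == 0
```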
Recall that the Kretschmann scalar of the Schwarzschild solution of the Einstein equations diverges as $ R^{\mu\nu\alpha\beta}R_{\mu\nu\alpha\beta} \to 1/r^6$. Therefore, we see that our first solution has a milder divergence. In the last case, choosing the initial condition $p=1$ provides $R^{\mu\nu\alpha\beta}R_{\mu\nu\alpha\beta}=0$ for all $r$, so there is no curvature singularity because the space-time is flat. However, in the second case, since for real $p$, $15 p^2-2 p+3 >0$, we cannot “cancel” the singularity by a proper choice of initial condition, but still, the divergence is again milder than the standard one. Now let us note that considering only the two order 8 classes $\mathcal{R}_{2,2}^0$ and $\mathcal{R}_{8,4}^0$, we have found a unique perfect square $\Big[ 2\mathcal{K}_{10}-2\mathcal{K}_{9}-\mathcal{M}_{11}+2\mathcal{M}_{12}-\mathcal{M}_{13}+4\mathcal{M}_{14}-4\mathcal{M}_{16}\Big]$ (which is zero in the FLRW case, so possibly again related to the Cotton and Weyl “conformal” tensors), but its square-root does not lead to second order equations of motion, so that we can conjecture that this property is true only for the 3 scalars $W^{\mu\nu\alpha\beta}W_{\mu\nu\alpha\beta}$, $C^{\mu\nu\alpha}C_{\mu\nu\alpha}$ and $ \nabla_\sigma R \nabla^\sigma R $, without counting the scalars $R^2$, $(\Box^i R)^2$, $(\mathcal{E}_4)^{2}$, etc., for which the square-root always leads to second order equations. To conclude this section, we point out that it would be interesting to find black hole solutions, for some values of $\sigma$, $\gamma$ and $\delta$, starting from the following action : $$\begin{aligned} S_7 = \int d^4x \sqrt{-g} \; \Big( R + \, \sqrt{3} \, \sigma \, \sqrt{ W^{\mu\nu\alpha\beta}W_{\mu\nu\alpha\beta}} +\, \sqrt{3} \, \gamma \, \sqrt{ C^{\mu\nu\alpha}C_{\mu\nu\alpha}} + \delta \, \sqrt{ \nabla_\sigma R \nabla^\sigma R } \; \Big),\end{aligned}$$ where the last two terms would be considered as high energy corrections to the standard Einstein-Hilbert action. 
In this action, the two additional terms are very similar to the ones present in $S_5$. We recall that starting from $S_5$, one can find exact black hole solutions [@15]. After finding black hole solutions for some class of the parameters, it would be possible to reduce the number of physically relevant values of $\big( \sigma, \,\gamma, \,\delta \big)$, for example by calculating the Wald entropy associated with the action and the class of solutions, as is done in Ref. [@23], and imposing the positivity of the entropy to find constraints on the parameters. Conclusions =========== Considering the order six and eight FKWC-bases of all independent invariant scalars built from the metric field and its derivatives, we have found modified gravity models admitting second order equations of motion for FLRW and static spherically symmetric spacetimes. For FLRW, we have shown that there are unique second order polynomial corrections to the Einstein-Hilbert action at both order six and eight. They involve only contractions of curvature tensors, which is what one could expect to follow from quantum field theory. Yet, concerning static spherical symmetry, we have seen that there is none. Some non-polynomial corrections, square-roots of the specific combinations that are perfect squares, have also been studied for FLRW and spherical symmetry at both order six and eight. We have shown that up to order 6, all the perfect squares lead to second order equations of motion in this way. However, for order eight, there are squares that do not have this property, for example the one we have found for spherical symmetry that involves scalars of the $\mathcal{R}_{2,2}^0$ and $\mathcal{R}_{8,4}^0$ order eight classes. 
Moreover, concerning FLRW we have seen a “mechanism” that provides unique corrections with non-vanishing contributions to the equations of motion: there are lots of squares, but their square-roots can be decomposed into a very small basis, such that there exist some non-linear algebraic relations between order six and order eight scalars. Taking these into account, we can choose the independent “square-root” scalars such that for both orders there are topological scalars and only one that is not. For order six, the one we chose turned out to be the same as for spherical symmetry. The other perfect squares for this last spacetime are squares of the Weyl and Cotton tensors, which are identically zero for FLRW, such that all the squares of static spherical symmetry are present in FLRW. Note also that the resemblance between $S_6$ and $S_5$, which admits exact black hole solutions, suggests that it might be possible to find such solutions from the former as well. It could also be interesting to understand better why the second order corrections specific to spherical symmetry involve “conformal” tensors. We note here that in addition, we have checked whether this common non-vanishing square, $\nabla_\sigma R \nabla^\sigma R$, is also a square for another physically relevant spacetime, Bianchi I, which describes an anisotropic universe, as the early universe was [@28; @29], and is defined by the following metric: $$\begin{aligned} ds^2= -dt^2 + \alpha(t) dx^2 + \beta(t) dy^2 + \gamma(t) dz^2.\end{aligned}$$ It turns out that this scalar is also a perfect square in this case and that it also leads to second order equations of motion. These results are reproduced in the third Appendix, and make the results concerning the action $$\begin{aligned} S= \int d^4x \sqrt{-g} \; \Bigg( R +\nu \sqrt{ - \nabla^\sigma R \nabla_\sigma R } \Bigg),\end{aligned}$$ our main ones. 
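The perfect-square property underlying this action can be checked quickly for flat FLRW: since $R$ depends only on $t$, $\nabla_\sigma R \nabla^\sigma R = g^{tt}\dot R^2 = -\dot R^2$, so $\sqrt{-\nabla_\sigma R \nabla^\sigma R} = |\dot R|$. The following sympy sketch (our own illustration, assuming signature $(-,+,+,+)$ and flat spatial sections) computes the Ricci scalar from the metric and verifies this:

```python
import sympy as sp

t, x, y, z = sp.symbols('t x y z')
a = sp.Function('a', positive=True)(t)
X = [t, x, y, z]
g = sp.diag(-1, a**2, a**2, a**2)   # flat FLRW, signature (-,+,+,+)
gi = g.inv()
n = 4

# Christoffel symbols Gamma^r_{mq}
Gam = [[[sum(gi[r, s]*(sp.diff(g[s, m], X[q]) + sp.diff(g[s, q], X[m])
             - sp.diff(g[m, q], X[s])) for s in range(n))/2
         for q in range(n)] for m in range(n)] for r in range(n)]

# Ricci tensor R_{mq} and Ricci scalar R
Ric = sp.Matrix(n, n, lambda m, q:
      sum(sp.diff(Gam[r][m][q], X[r]) - sp.diff(Gam[r][m][r], X[q])
          + sum(Gam[r][r][l]*Gam[l][m][q] - Gam[r][q][l]*Gam[l][m][r]
                for l in range(n)) for r in range(n)))
R = sp.simplify(sum(gi[m, q]*Ric[m, q] for m in range(n) for q in range(n)))

# R = R(t) only, so nabla_s R nabla^s R = g^{tt} (dR/dt)^2 = -(dR/dt)^2:
grad_R2 = sp.simplify(sum(gi[m, m]*sp.diff(R, X[m])**2 for m in range(n)))
print(sp.simplify(grad_R2 + sp.diff(R, t)**2))  # 0: minus a perfect square
```

The same computation reproduces the standard result $R = 6\,(\ddot a/a + \dot a^2/a^2)$ along the way.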
The study of non-polynomial gravity with order eight scalars also represents an interesting result in the sense that we can choose the four independent square-root scalars to be three topological ones, and one that contributes to the equations of motion and gives the well-known $H^4$ correction to the Friedmann equation [@26; @27], which leads to a unique physically relevant solution for which the small-density limit allows one to reproduce the loop quantum cosmology result and the one coming from the Randall-Sundrum brane world model. In both of these, the big-bang singularity is absent and the cosmological solutions are regular. To finish, we note that our study was done on fixed backgrounds, but General Relativity is a background independent theory, so it could be important to check for a general background whether the scalar $\nabla_\sigma R \nabla^\sigma R$ is still a perfect square and, if so, whether its square-root still leads to second order equations of motion for the metric field. Moreover, to our knowledge, there is no proof that, considering all the FKWC-bases for all orders, it is impossible to find linear combinations of invariant scalars that lead to second order partial differential equations for a general metric. Indeed, the Lovelock papers [@10; @30; @31] restricted their studies to Lagrangians of the general form: $ L \Big(g_{\mu\nu} , \partial_{\alpha}g_{\mu\nu} ,\partial_{\alpha}\partial_{\beta}g_{\mu\nu}\Big)$, i.e. involving curvature tensors, but not explicit covariant derivatives of them. Therefore, if this remark is true, one could search for second order equations of motion coming from the general Lagrangian $L \Big(g_{\mu\nu} , \partial_{\alpha}g_{\mu\nu} , ... , \partial_{\alpha}... \partial_{\beta} g_{\mu\nu} \Big)$. We know from our results that there is no such combination for order six and eight scalars because there is none for static spherically symmetric spacetime. 
But as the order grows, even if there are more and more terms in the expansions of the scalars, the number of independent scalars within the same class grows as well, such that for some “high” order in the FKWC-basis, it might be possible to cancel all the higher order derivatives for a general metric. Acknowledgments {#acknowledgments .unnumbered} =============== Appendix ======== Relations for the reduced FKWC basis ------------------------------------ First we reproduce the following relations from Ref. [@24]: $$\begin{aligned} \mathcal{E}_6=2 \mathcal{L}_1+8 \mathcal{L}_2+24 \mathcal{L}_3+2 \mathcal{L}_4+24 \mathcal{L}_5+16 \mathcal{L}_6-12 \mathcal{L}_7+ \mathcal{L}_8\end{aligned}$$ is the order 6 Euler density, $$\begin{aligned} W^2=W_{\mu\nu\alpha\beta} W^{\mu\nu\alpha\beta}=\frac{1}{3}R^2-2S+T\end{aligned}$$ is the square of the Weyl tensor, $$\begin{aligned} W^3_1=W^{\mu\nu\alpha\beta}W_{\alpha\beta\sigma\rho}W^{\sigma\rho}_{\;\,\;\,\,\mu\nu}\end{aligned}$$ and $$\begin{aligned} W^3_2=W^{\mu\nu\alpha\beta}W_{\mu\sigma\rho\beta}W_{\;\;\;\nu\alpha}^{\sigma\;\,\;\,\,\;\rho}\end{aligned}$$ are the two independent cubic contractions of the Weyl tensor in four dimensions. 
From the relations of Ref. [@25], we have found the following geometrical identities: $$\begin{aligned} L_{\mu\nu}R^{\mu\nu}=-(2 \mathcal{L}_3 + \frac{1}{2}\mathcal{L}_4+4\mathcal{L}_5+4\mathcal{L}_6-4\mathcal{L}_7+\frac{1}{2}\mathcal{L}_8)\end{aligned}$$ $$\begin{aligned} W_{\mu\nu\alpha\beta}\nabla^{\nu} \nabla^{\beta}R^{\mu\alpha}=\mathscr{L}_3-\frac{1}{2}\mathscr{L}_2+\frac{1}{12}\mathscr{L}_1+\frac{1}{2}\mathcal{L}_6-\frac{1}{2}\mathcal{L}_5\end{aligned}$$ $$\begin{aligned} \nabla^\rho R^{\mu\nu\alpha\beta}\nabla_{\rho}W_{\mu\nu\alpha\beta}=\mathscr{L}_7-2\mathscr{L}_5+\frac{1}{3}\mathscr{L}_8\end{aligned}$$ $$\begin{aligned} R^{\mu\alpha}\nabla^{\beta}\nabla^{\nu}W_{\mu\nu\alpha\beta}=\frac{1}{2}\mathscr{L}_2-\frac{1}{12}\mathscr{L}_1-\frac{1}{6}\mathscr{L}_4-\frac{1}{2}\mathcal{L}_6+\frac{1}{2}\mathcal{L}_5\end{aligned}$$ $$\begin{aligned} \nabla^\mu W_{\mu\nu\alpha\beta}\nabla^{\alpha} R^{\nu\beta}=\frac{1}{2}\mathscr{L}_5-\frac{1}{2}\mathscr{L}_6-\frac{1}{24}\mathscr{L}_8\end{aligned}$$ Note that $\nabla^\mu \nabla^\nu L_{\mu\nu}=0$ identically, such that there are no new relations coming from this term. Because in four dimensions, $\mathcal{E}_6=0$ and $L_{\mu\nu}=0$, leading to $L_{\mu\nu}R^{\mu\nu}=0$ for order 6 scalars, the FKWC basis, which does not take these relations into account, is then reduced. As for conformally invariant space-times, their metrics verify $W_{\mu\nu\alpha\beta}=0$, which again provides new relations between the scalars, coming from: $R \, W^2 =0$, $W^3_1 =0$, $W^3_2 =0$ (note that because $\mathcal{E}_6$ is a linear combination of $W^3_1$ and $W^3_2$ [@24], the relation $\mathcal{E}_6=0$ becomes redundant), $W_{\mu\nu\alpha\beta}\nabla^{\nu} \nabla^{\beta}R^{\mu\alpha}=0$, $\nabla^\rho R^{\mu\nu\alpha\beta}\nabla_{\rho}W_{\mu\nu\alpha\beta}=0$, $R^{\mu\alpha}\nabla^{\beta}\nabla^{\nu}W_{\mu\nu\alpha\beta}=0$ and $\nabla^\mu W_{\mu\nu\alpha\beta}\nabla^{\alpha} R^{\nu\beta}=0$. 
Therefore, for conformally invariant space-times, there is 1 relation coming from the corollary of the Lovelock theorem, and 7 coming from $W_{\mu\nu\alpha\beta}=0$, so there are 8 fewer scalars in the reduced basis, i.e. 8 scalars left. General order 6 linear combination for FLRW ------------------------------------------- The sum of all order six independent scalars, expressed in terms of $a(t)$ and its derivatives, is: $$\begin{aligned} \begin{split} J=&\sum \Big( v_i \mathcal{L}_i +x_i \mathscr{L}_i \Big) =\frac{3}{a(t)^6} \Bigg( \sigma_1(v_i,x_i) \; \dot{a}(t)^6 + \sigma_2(v_i,x_i) \; a(t) \; \dot{a}(t)^4 \; \ddot{a}(t) + \sigma_3(v_i,x_i) \; a(t)^2 \; \dot{a}(t)^2 \; \ddot{a}(t)^2 \\& + \sigma_4(v_i,x_i) \; a(t)^3 \; \ddot{a}(t)^3 + \sigma_5(v_i,x_i) \; a(t)^2 \; \dot{a}(t)^3 \; a^{(3)}(t)+ \sigma_6(v_i,x_i) \; a(t)^3 \; \dot{a}(t) \; \ddot{a}(t) \; a^{(3)}(t) \\& + \sigma_7(v_i,x_i) \; a(t)^4 \; a^{(3)}(t)^2+ \sigma_8(v_i,x_i) \; a(t)^3 \; \dot{a}(t)^2 \; a^{(4)}(t)+ \sigma_9(v_i,x_i) \; a(t)^4 \; \ddot{a}(t) \; a^{(4)}(t) \Bigg) \end{split}\end{aligned}$$ with, $\left\{ \begin{array}{l} \sigma _1=4 (2 v_1-2 v_3+6 v_4+2 v_5+2 v_6+6 v_7+18 v_8+2 x_2+x_3+6 x_4\\ \quad\quad -6 x_5-5 x_6-8 x_7-12 x_8)\vspace{6pt}\\ \sigma _2=2 \big(-2 v_3+12 v_4+4 v_5+6 v_6+24 v_7+108 v_8+30 x_1+x_2\\ \quad\quad -3 x_3-18 x_4+20 x_5+18 x_6+32 x_7+24 x_8\big) \vspace{6pt} \\ \sigma _3= \big[ -6 v_2-4 v_3+24 v_4+14 v_5+6 v_6+48 v_7+216 v_8+48 x_1\\ \quad\quad +14 x_2+7 x_3+42 x_4-20 x_5-19 x_6-36 x_7-12 x_8 \big]\vspace{6pt} \\ \sigma _4=8 v_1+2 v_2-8 v_3+24 v_4+6 v_5+10 v_6+24 v_7+72 v_8-12 x_1-x_3-6 x_4\vspace{6pt} \\ \sigma _5=-2 \left(18 x_1+5 x_2+x_3+6 x_4-4 x_5-2 x_6-24 x_8\right)\vspace{6pt} \\ \sigma _6=-\left(36 x_1+8 x_2+x_3+6 x_4-2 x_6-8 x_7+24 x_8\right)\vspace{6pt} \\ \sigma _7=-\left(4 x_5+3 x_6+4 \left(x_7+3 x_8\right)\right) \vspace{6pt} \\ \sigma _8=-2 \left(6 x_1+x_2\right)\vspace{6pt} \\ \sigma _9=-\left(12 x_1+4 x_2+x_3+6 x_4\right) \end{array} \right. 
$ Now in terms of $H(t)$ and its derivatives, considering only the scalars of the reduced FKWC basis: $$\begin{aligned} \begin{split} J=&\sum\limits_{i=4, 6, 7} v_i \mathcal{L}_i + \sum\limits_{j=1, 3, 5, 8} x_j \mathscr{L}_j =3 \Bigg( \widetilde{\sigma}_1(v_i,x_i) \; H(t)^6 + \widetilde{\sigma}_2(v_i,x_i) \; H(t)^4 \dot{H}(t) + \widetilde{\sigma}_3(v_i,x_i) \; H(t)^2 \dot{H}(t)^2 \\& \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad + \widetilde{\sigma}_4(v_i,x_i) \; \dot{H}(t)^3 + \widetilde{\sigma}_5(v_i,x_i) \; H(t)^3 \ddot{H}(t)+ \widetilde{\sigma}_6(v_i,x_i) \; H(t) \dot{H}(t) \ddot{H}(t) \\& \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad + \widetilde{\sigma}_7(v_i,x_i) \ddot{H}(t)^2+ \widetilde{\sigma}_8(v_i,x_i) \; H(t)^2 H^{(3)}(t)+ \widetilde{\sigma}_9(v_i,x_i) \; \dot{H}(t) H^{(3)}(t) \Bigg) \end{split}\end{aligned}$$ with, $\left\{ \begin{array}{l} \widetilde{\sigma} _1=12 \left(8 v_4+3 \left(v_6+4 v_7\right)\right) \vspace{6pt}\\~ \widetilde{\sigma} _2=6 \left(24 v_4+9 v_6+36 v_7-48 x_1-2 x_3\right) \vspace{6pt}\\~ \widetilde{\sigma} _3= 4 \left(24 v_4+9 v_6+30 v_7-60 x_1-2 x_3-14 x_5-48 x_8\right) \vspace{6pt}\\~ \widetilde{\sigma} _4=2 \left(12 v_4+5 v_6+12 v_7-24 x_1-2 x_3\right) \vspace{6pt}\\~ \widetilde{\sigma} _5=-7 \left(24 x_1+x_3\right) \vspace{6pt}\\~ \widetilde{\sigma} _6=-\left(84 x_1+5 x_3+24 \left(x_5+4 x_8\right)\right) \vspace{6pt}\\~ \widetilde{\sigma} _7=-4 \left(x_5+3 x_8\right) \vspace{6pt}\\~ \widetilde{\sigma} _8=-\left(24 x_1+x_3\right) \vspace{6pt}\\~ \widetilde{\sigma} _9=-\left(12 x_1+x_3\right) \end{array} \right. $ $\nabla_\sigma R \nabla^\sigma R$ in Bianchi I spacetime -------------------------------------------------------- We reproduce here only the value of the particular scalar $\nabla_\sigma R \nabla^\sigma R$ for Bianchi I spacetime, and the associated equations of motion of its square-root. 
$$\begin{aligned} \begin{split} ~& \nabla_\sigma R \nabla^\sigma R= -4 \Bigg\{\alpha (t) \gamma (t) \dot{\beta} (t) \bigg(\alpha (t) \dot{\beta} (t) \dot{\gamma} (t)+\gamma (t) \big(\dot{\alpha} (t) \dot{\beta} (t)+\alpha (t) \ddot{\beta} (t)\big)\bigg) +\beta (t) \bigg(\alpha (t)^2 \dot{\beta} (t) \dot{\gamma} (t)^2 \\& -\alpha (t)^2 \gamma (t) \big(\dot{\gamma} (t) \ddot{\beta} (t)+\dot{\beta} (t) \ddot{\gamma} (t)\bigg) +\gamma (t)^2 \bigg(\dot{\alpha} (t)^2 \dot{\beta} (t)-\alpha (t) \dot{\alpha} (t) \ddot{\beta} (t)-\alpha (t) \big(\dot{\beta} (t) \ddot{\alpha} (t)+\alpha (t) \beta ^{(3)}(t)\big)\big)\bigg) \\ &+\beta (t)^2 \Bigg[\alpha (t) \dot{\gamma} (t) \bigg(\dot{\alpha} (t) \dot{\gamma} (t)+\alpha (t) \ddot{\gamma} (t)\bigg)+\gamma (t)^2 \bigg(\dot{\alpha} (t) \ddot{\alpha} (t)-\alpha (t) \alpha ^{(3)}(t)\bigg)+\gamma (t) \bigg(\dot{\alpha} (t)^2 \dot{\gamma} (t) \\ &-\alpha (t) \dot{\alpha} (t) \ddot{\gamma} (t)-\alpha (t) \Big(\dot{\gamma} (t) \ddot{\alpha} (t)+\alpha (t) \gamma ^{(3)}(t)\Big)\bigg)\Bigg]\Bigg\}^2 \, \Bigg/ \Bigg\{ \alpha (t)^4 \beta (t)^4 \gamma (t)^4 \Bigg\} \end{split} \end{aligned}$$ The associated equations of motion, very complicated, yet second order, for respectively $\alpha (t) $, $ \beta (t) $ and $\gamma (t)$ are : $$\begin{aligned} \begin{split} ~&0 = -\alpha (t)^3 \gamma (t)^3 \dot{\beta} (t)^3+\alpha (t)^3 \beta (t) \gamma (t)^3 \dot{\beta} (t) \ddot{\beta} (t)+\alpha (t) \beta (t)^2 \gamma (t)^2 \Bigg[2 \alpha (t) \dot{\alpha} (t) \dot{\beta}(t) \dot{\gamma} (t) \\ &+\gamma (t) \Big(\dot{\alpha} (t)^2 \dot{\beta} (t)+\alpha (t) \dot{\beta} (t) \ddot{\alpha} (t)+\alpha (t) \dot{\alpha} (t) \ddot{\beta} (t)\Big)\Bigg]+\beta (t)^3 \Bigg[-\alpha (t)^3 \dot{\gamma} (t)^3+\gamma (t)^3 \Big(-2 \dot{\alpha} (t)^3 \\ &+3 \alpha (t) \dot{\alpha} (t) \ddot{\alpha} (t)\Big)+\alpha (t)^3 \gamma (t) \dot{\gamma} (t) \ddot{\gamma}(t)+\alpha (t) \gamma (t)^2 \Big(\dot{\alpha} (t)^2 \dot{\gamma} (t)+\alpha (t) \dot{\gamma} (t) 
\ddot{\alpha} (t)+\alpha (t) \dot{\alpha} (t) \ddot{\gamma} (t)\Big)\Bigg] \end{split} \end{aligned}$$ $$\begin{aligned} \begin{split} ~&0 = -2 \alpha (t)^3 \gamma (t)^3 \dot{\beta} (t)^3+\alpha (t)^2 \beta (t) \gamma (t)^2 \dot{\beta} (t) \Bigg[\alpha (t) \dot{\beta} (t) \dot{\gamma} (t)+\gamma (t) \Big(\dot{\alpha} (t) \dot{\beta} (t)+3 \alpha (t) \ddot{\beta} (t)\Big)\Bigg] \\ &+\alpha (t)^2 \beta (t)^2 \gamma (t)^2 \Bigg[\gamma (t) \dot{\beta} (t) \ddot{\alpha} (t)+\alpha (t) \dot{\gamma} (t) \ddot{\beta} (t)+\dot{\alpha}(t) \Big(2 \dot{\beta} (t) \dot{\gamma} (t)+\gamma (t) \ddot{\beta} (t)\Big)+\alpha (t) \dot{\beta} (t) \ddot{\gamma} (t)\Bigg] \\ &+\beta (t)^3 \Bigg[-\alpha (t)^3 \dot{\gamma} (t)^3+\gamma (t)^3 \Big(-\dot{\alpha} (t)^3+\alpha (t) \dot{\alpha} (t) \ddot{\alpha} (t)\Big)+\alpha (t)^3 \gamma (t) \dot{\gamma} (t) \ddot{\gamma} (t)\Bigg] \end{split} \end{aligned}$$ $$\begin{aligned} \begin{split} &0 = -\alpha (t)^3 \gamma (t)^3 \dot{\beta} (t)^3+\alpha (t)^3 \beta (t) \gamma (t)^3 \dot{\beta} (t) \ddot{\beta} (t)+\beta (t)^3 \Bigg[-2 \alpha (t)^3 \dot{\gamma} (t)^3\\ &+\gamma (t)^3 \Big(-\dot{\alpha}(t)^3+\alpha (t) \dot{\alpha} (t) \ddot{\alpha} (t)\Big) \\ &+\alpha (t)^2 \gamma (t) \dot{\gamma} (t) \Big(\dot{\alpha} (t) \dot{\gamma} (t)+3 \alpha (t) \ddot{\gamma} (t)\Big)+\alpha (t)^2 \gamma (t)^2 \Big(\dot{\gamma} (t) \ddot{\alpha} (t)+\dot{\alpha} (t) \ddot{\gamma} (t)\Big)\Bigg] \\ &+\alpha (t)^2 \beta (t)^2 \gamma (t) \Bigg[\alpha (t) \dot{\beta} (t) \dot{\gamma} (t)^2+\gamma (t) \Big(2 \dot{\alpha} (t) \dot{\beta} (t) \dot{\gamma} (t)+\alpha (t) \dot{\gamma} (t) \ddot{\beta} (t)+\alpha (t) \dot{\beta} (t) \ddot{\gamma} (t)\Big)\Bigg] \end{split} \end{aligned}$$ Steven Weinberg. The cosmological constant problem. [*Rev. Mod. Phys.*]{} [**1989**]{}, [*61*]{}. Alexei A. Starobinsky. A new type of isotropic cosmological models without singularity. [*Physics Letters B.*]{} [**1980**]{}, [*91*]{}, 99-102. Robert H. 
Brandenberger. A Nonsingular Universe. [*Brown University.*]{} [**1992**]{}. Woodard, R. Avoiding dark energy with 1/R modifications of gravity. [*Lect. Notes Phys.*]{} [**2007**]{}, [*720*]{}, 403-433. S. M. Carroll, V. Duvvuri, M. Trodden and M. Turner. [*Phys. Rev. D*]{} [**2004**]{}, [*70*]{}, 043528; S. Capozziello, S. Carloni and A. Troisi. [*Int. J. Mod. Phys. D*]{} [**2003**]{}, [*12*]{}, 1969. T. P. Sotiriou and V. Faraoni. f(R) Theories Of Gravity. [*Rev. Mod. Phys.*]{} [**2010**]{}, [*82*]{}, 451. S. Nojiri and S. D. Odintsov. Unified cosmic history in modified gravity: from F(R) theory to Lorentz non-invariant models. [*Phys. Rept.*]{} [**2011**]{}, [*505*]{}, 59. S. Capozziello and M. De Laurentis. Extended Theories of Gravity. [*Phys. Rept.*]{} [**2011**]{}, [*509*]{}, 167. Antonio De Felice, Shinji Tsujikawa. $f \big(R \big)$ theories. [*Living Rev. Rel.*]{} [**2010**]{}, [*13*]{}, 3. Lovelock, D. The uniqueness of the Einstein field equations in a four-dimensional space. [*Arch. Ration. Mech. Anal.*]{} [**1969**]{}, [*33*]{}, 54-70. C. Cherubini, D. Bini, S. Capozziello, R. Ruffini. Second Order Scalar Invariants of the Riemann Tensor: Applications to Black Hole space-times. [*Int. J. Mod. Phys.*]{} [**2002**]{}, [*D11*]{}, 827-841. James T. Wheeler. Weyl gravity as general relativity. [*Phys. Rev.*]{} [**2014**]{}, [*D90*]{}, 025027. Gregory Walter Horndeski. Second-order scalar-tensor field equations in a four-dimensional space. [*Int. J. Theor. Phys.*]{} [**1974**]{}, [*10*]{}, 363-384. C. Charmousis, E. J. Copeland, A. Padilla, P. M. Saffin. General second order scalar-tensor theory, self tuning, and the Fab Four. [*Phys. Rev. Lett.*]{} [**2012**]{}, [*108*]{}, 051101. S. Deser, O. Sarioglu, B. Tekin. Spherically symmetric solutions of Einstein + non-polynomial gravities. [*Gen. Rel. Grav.*]{} [**2008**]{}, [*40*]{}, 1-7. Changjun Gao. Generalized modified gravity with the second-order acceleration equation. [*Phys. Rev.*]{} [**2012**]{}, [*D86*]{}, 103512. 
S. A. Fulling, R. C. King, B. G. Wybourne and C. J. Cummins. Normal forms for tensor polynomials. I. The Riemann tensor. [*Class. Quantum Grav.*]{} [**1992**]{}, [*9*]{}, 1151. S. Deser and A. V. Ryzhov. Curvature invariants of static spherically symmetric geometries. [*Class. Quant. Grav.*]{} [**2005**]{}, [*22*]{}, 3315-3324. A. Ashtekar and P. Singh. Loop quantum cosmology: a status report. [*Class. Quantum Grav.*]{} [**2011**]{}, [*28*]{}, 213001. P. Binetruy, C. Deffayet, U. Ellwanger and D. Langlois. Brane cosmological evolution in a bulk with cosmological constant. [*Phys. Lett.*]{} [**2000**]{}, [*B477*]{}, 285-291. G. Cognola, R. Myrzakulov, L. Sebastiani and S. Zerbini. Einstein gravity with Gauss-Bonnet entropic corrections. [*Phys. Rev. D*]{} [**2013**]{}, [*88*]{}, 024006. G. Cognola, E. Elizalde, L. Sebastiani and S. Zerbini. Black hole and de Sitter solutions in a covariant renormalizable field theory of gravity. [*Phys. Rev. D*]{} [**2011**]{}, [*83*]{}, 063003. E. Bellini, R. Di Criscienzo, L. Sebastiani and S. Zerbini. Black Hole entropy for two higher derivative theories of gravity. [*Entropy*]{} [**2010**]{}, [*12*]{}, 2186. Julio Oliva, Sourya Ray. Classification of Six Derivative Lagrangians of Gravity and Static Spherically Symmetric Solutions. [*Phys. Rev.*]{} [**2010**]{}, [*D82*]{}, 124030. Yves Décanini, Antoine Folacci. FKWC-bases and geometrical identities for classical and quantum field theories in curved space-time. [*Unpublished report.*]{} [**2008**]{}. Adel Awad, Ahmed Farag Ali. Planck-Scale Corrections to Friedmann Equation. [*Central Eur. J. Phys.*]{} [**2014**]{}, [*12*]{}, 245-255. Pantelis S. Apostolopoulos, George Siopsis, Nikolaos Tetradis. Cosmology from an AdS Schwarzschild black hole via holography. [*Phys. Rev. Lett.*]{} [**2009**]{}, [*102*]{}, 151301. Esra Russell, Can Battal Kilinc, Oktay K. Pashaev. Bianchi I Model: An Alternative Way To Model The Presentday Universe. [*MNRAS.*]{} [**2014**]{}, [*442*]{}, 2331-2341. 
Thomas Schucker, André Tilquin, Galliano Valent. Bianchi I meets the Hubble diagram. [*MNRAS.*]{} [**2014**]{}, [*444*]{}, 2820. D. Lovelock. Divergence-free tensorial concomitants. [*aequationes mathematicae*]{} [**1970**]{}, [*4*]{}, 127-138. D. Lovelock. Degenerate Lagrange densities involving geometric objects. [*Arch. Ration. Mech. Anal.*]{} [**1970**]{}, [*36*]{}, 293-304. [^1]: Electronic address: `aimericcx@yahoo.fr`
--- abstract: 'We present a Monte Carlo simulation of the perturbative Quantum Chromodynamics (pQCD) shower developing after a hard process embedded in a heavy-ion collision. The main assumption is that the cascade of branching partons traverses a medium which (consistent with standard radiative energy loss pictures) is characterized by a local transport coefficient $\hat{q}$ which measures the virtuality per unit length transferred to a parton which propagates in this medium. This increase in parton virtuality alters the development of the shower and in essence leads to extra induced radiation and hence a softening of the momentum distribution in the shower. After hadronization, this leads to the concept of a medium-modified fragmentation function. On the level of observables, this is manifest as the suppression of high transverse momentum ($P_T$) hadron spectra. We simulate the soft medium created in heavy-ion collisions by a 3-d hydrodynamical evolution and average the medium-modified fragmentation function over this evolution in order to compare with data on single inclusive hadron suppression and extract the $\hat{q}$ which characterizes the medium. Finally, we discuss possible uncertainties of the model formulation and argue that the data in the soft momentum region show evidence of qualitatively different physics which presumably cannot be described by a medium-modified parton shower.' author: - Thorsten Renk title: 'Parton shower evolution in a 3-d hydrodynamical medium' --- Introduction ============ Jet quenching, i.e. the energy loss of hard partons created in the first moments of a heavy ion collision due to interactions with the surrounding soft medium, has long been regarded as a promising tool to study properties of the soft medium [@Jet1; @Jet2; @Jet3; @Jet4; @Jet5; @Jet6]. The basic idea is to study the changes induced by the medium in a hard process which is well-known from p-p collisions. 
A number of observables are available for this purpose, among them suppression in single inclusive hard hadron spectra $R_{AA}$ [@PHENIX_R_AA], the suppression of back-to-back correlations [@Dijets1; @Dijets2] or single hadron suppression as a function of the emission angle with respect to the reaction plane [@PHENIX-RP]. Calculations have now reached a high degree of sophistication. Different energy loss formalisms are used together with a 3-d hydrodynamical description of the medium [@Hydro3d] in central and noncentral collisions to determine the pathlength dependence of energy loss [@HydroJet1; @HydroJet2; @HydroJet3; @HydroJet4]. Some of these models have also been employed successfully to describe the suppression of hard back-to-back hadron correlations [@Dihadron1; @Dihadron2; @Dihadron3]. The existing formulations of energy loss can roughly be divided into two groups: Some compute the energy loss from a leading parton [@Jet2; @Jet5; @QuenchingWeights] whereas others compute an in-medium fragmentation function by following the evolution of a parton shower [@HydroJet2; @HBP]. Recently, the Monte Carlo (MC) code JEWEL [@JEWEL] has also been developed, which simulates the evolution of a parton shower in the medium in a non-analytic way. This model builds on the success of MC shower generators like PYTHIA [@PYTHIA; @PYSHOW] or HERWIG [@HERWIG] for showers in vacuum. In the present work, we follow an approach which is very similar to the one taken with JEWEL, i.e. we modify an MC code for vacuum showers to account for medium effects. However, while JEWEL so far chiefly implements elastic scattering with medium constituents and includes radiative energy loss only in a schematic way, we rather wish to focus on radiative energy loss in the following. 
This is based on the observation that elastic energy loss has the wrong pathlength dependence to account properly for the suppression of back-to-back correlations [@Elastic] and hence cannot be a large contribution to the total energy loss of light quarks. In particular, we assume that partons traversing the medium pick up additional virtuality which induces additional branchings in the shower, thus softening the parton spectrum, but that there is no transfer of longitudinal momentum from the hard parton to the scatterers in the medium. This work is organized as follows: First, we outline the computation of hadron spectra in the formalism, starting from the hard process. The key ingredient of our model, the medium-modified fragmentation function (MMFF), is described in detail in section \[S-MMFF\] where we outline the MC simulation of showers in vacuum and present how the algorithm is modified to simulate showers in medium. We present various observables which show the expected modification of the jet by the medium. In section \[S-Data\] we present a comparison of the suppression calculated using the MMFF in a 3-d hydrodynamical model for the medium evolution [@Hydro3d] with the measured nuclear suppression in central AuAu collisions at 200 AGeV and use this result to extract an estimate for the medium transport coefficient $\hat{q}$. We follow with a discussion of the model uncertainties. Finally, the limits of the approach in the light of the patterns seen in semi-hard and soft correlations with a hard trigger hadron are discussed. The hard process ================ We aim at a description of the production of high $P_T$ hadrons both in p-p and in Au-Au collisions. The underlying hard process can be computed in leading order (LO) pQCD. 
We assume in the following that the actual hard process is not influenced by the fact that a soft medium is created in Au-Au collisions, but that the subsequent parton shower (which extends to timescales at which a medium is relevant) is modified by the presence of such a medium, whereas hadronization itself takes place sufficiently away from the medium such that it can be assumed to take place as in vacuum. In this section, we describe the computation of the hard process itself. The production of two hard back to back partons $k,l$ with momentum $p_T$ in a p-p or A-A collision in LO pQCD is described by $$\label{E-2Parton} \frac{d\sigma^{AB\rightarrow kl +X}}{d p_T^2 dy_1 dy_2} \negthickspace = \sum_{ij} x_1 f_{i/A} (x_1, Q^2) x_2 f_{j/B} (x_2,Q^2) \frac{d\hat{\sigma}^{ij\rightarrow kl}}{d\hat{t}}$$ where $A$ and $B$ stand for the colliding objects (protons or nuclei) and $y_{1(2)}$ is the rapidity of parton $k(l)$. The distribution function of a parton type $i$ in $A$ at a momentum fraction $x_1$ and a factorization scale $Q \sim p_T$ is $f_{i/A}(x_1, Q^2)$. The distribution functions are different for the free protons [@CTEQ1; @CTEQ2] and protons in nuclei [@NPDF; @EKS98]. The fractional momenta of the colliding partons $i$, $j$ are given by $ x_{1,2} = \frac{p_T}{\sqrt{s}} \left(\exp[\pm y_1] + \exp[\pm y_2] \right)$. Expressions for the pQCD subprocesses $\frac{d\hat{\sigma}^{ij\rightarrow kl}}{d\hat{t}}(\hat{s}, \hat{t},\hat{u})$ as a function of the parton Mandelstam variables $\hat{s}, \hat{t}$ and $\hat{u}$ can be found e.g. in [@pQCD-Xsec]. 
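For illustration, the fractional momenta and the partonic Mandelstam variables follow directly from $p_T$, $y_1$, $y_2$ and $\sqrt{s}$. The sketch below is our own illustration of the standard massless $2\rightarrow 2$ kinematics; the explicit $\hat t$, $\hat u$ expressions are the textbook ones and are not quoted in the text:

```python
import math

def parton_kinematics(pT, y1, y2, sqrt_s):
    """LO 2 -> 2 kinematics: fractional momenta x1, x2 and the partonic
    Mandelstam variables for two massless back-to-back partons produced
    at transverse momentum pT and rapidities y1, y2."""
    x1 = pT / sqrt_s * (math.exp(y1) + math.exp(y2))
    x2 = pT / sqrt_s * (math.exp(-y1) + math.exp(-y2))
    s_hat = x1 * x2 * sqrt_s**2
    t_hat = -x1 * sqrt_s * pT * math.exp(-y1)   # (p1 - k)^2 for massless partons
    u_hat = -x2 * sqrt_s * pT * math.exp(y1)
    return x1, x2, s_hat, t_hat, u_hat

# sanity check: s_hat + t_hat + u_hat = 0 for massless 2 -> 2 scattering
x1, x2, sh, th, uh = parton_kinematics(10.0, 0.5, -0.3, 200.0)
print(abs(sh + th + uh) < 1e-9 * sh)  # True
```

The closure relation $\hat s + \hat t + \hat u = 0$ provides a quick consistency test of the $x_{1,2}$ formula given above.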
Inclusive production of a parton flavour $f$ at rapidity $y_f$ is found by integrating over either $y_1$ or $y_2$ and summing over appropriate combinations of partons, $$\label{E-1Parton} \begin{split} \frac{d\sigma^{AB\rightarrow f+X}}{dp_T^2 dy_f} = \int d y_2 \sum_{\langle ij\rangle, \langle kl \rangle} \frac{1}{1+\delta_{kl}} \frac{1}{1+\delta_{ij}} &\Bigg\{ x_1 f_{i/A}(x_1,Q^2) x_2 f_{j/B}(x_2,Q^2) \bigg[ \frac{d\sigma^{ij\rightarrow kl}}{d\hat{t}}(\hat{s}, \hat{t},\hat{u}) \delta_{fk} + \frac{d\sigma^{ij\rightarrow kl}}{d\hat{t}}(\hat{s}, \hat{u},\hat{t}) \delta_{fl} \bigg]\\ +&x_1 f_{j/A}(x_1,Q^2) x_2 f_{i/B}(x_2,Q^2) \bigg[ \frac{d\sigma^{ij\rightarrow kl}}{d\hat{t}}(\hat{s}, \hat{u},\hat{t}) \delta_{fk} + \frac{d\sigma^{ij\rightarrow kl}}{d\hat{t}}(\hat{s},\hat{t}, \hat{u}) \delta_{fl} \bigg] \Bigg\} \\ \end{split}$$ where the summation $\langle ij\rangle, \langle kl \rangle$ runs over pairs $gg, gq, g\overline{q}, qq, q\overline{q}, \overline{q}\overline{q}$ and $q$ stands for any of the quark flavours $u,d,s$. For the production of a hadron $h$ with mass $M$, transverse momentum $P_T$ at rapidity $y$ and transverse mass $m_T = \sqrt{M^2 + P_T^2}$ from the parton $f$, let us introduce the fraction $z$ of the parton energy carried by the hadron after fragmentation with $z = E_h/E_f$. 
Assuming collinear fragmentation, the hadronic variables can be written in terms of the partonic ones as $$m_T \cosh y = z p_T \cosh y_f \quad \text{and} \quad m_T \sinh y = P_T \sinh y_f.$$ Thus, the hadronic momentum spectrum arises from the partonic one by folding with the distribution $D_{f\rightarrow h}(z, \mu_f^2)$ which describes the average number of fragment hadrons carrying a fraction $z$ of the parton momentum at a scale $\mu_f \sim P_T$ as $$\label{E-Fragment} \frac{d\sigma^{AB\rightarrow h+X}}{dP_T^2 dy} = \sum_f \int dp_T^2 dy_f \frac{d\sigma^{AB\rightarrow f+X}}{dp_T^2 dy_f} \int_{z_{min}}^1 dz D_{f\rightarrow h}(z, \mu_f^2) \delta\left(m_T^2 - M_T^2(p_T, y_f, z)\right) \delta\left(y - Y(p_T, y_f,z)\right)$$ with $$M_T^2(p_T, y_f, z) = (zp_T)^2 + M^2 \tanh^2 y_f$$ and $$Y(p_T, y_f, z) = \text{arsinh} \left(\frac{P_T}{m_T} \sinh y_f \right).$$ The lower cutoff $z_{min} = \frac{2 m_T}{\sqrt{s}} \cosh y$ arises from the fact that there is a kinematical limit on the parton momentum; it cannot exceed $\sqrt{s}/(2\cosh y_f)$ and thus for given hadron momentum there is a minimal $z$ corresponding to fragmentation of a parton with maximal momentum. In the following, we describe this key ingredient in our formulation of energy loss, i.e. the computation of the fragmentation function both in vacuum and in medium using a pQCD shower evolution code. The medium-modified fragmentation function {#S-MMFF} ========================================== In this section, we describe how the medium-modified fragmentation function is obtained from a computation of an in-medium shower followed by hadronization. The key ingredient for this computation is a pQCD MC shower algorithm. In this work, we employ a modification of the PYTHIA shower algorithm PYSHOW [@PYSHOW]. In the absence of any medium effects, our algorithm therefore corresponds directly to the PYTHIA shower. 
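At midrapidity and for an (approximately) massless hadron, the folding of Eq. (\[E-Fragment\]) collapses to a one-dimensional convolution, $d\sigma^h/dP_T^2 = \sum_f \int_{z_{min}}^1 dz\, D_{f\rightarrow h}(z)\, z^{-2}\, d\sigma^f/dp_T^2\big|_{p_T = P_T/z}$. A toy numerical sketch of this convolution follows; the power-law spectrum and fragmentation-function shape below are illustrative assumptions of ours, not fitted inputs:

```python
def hadron_spectrum(parton_spectrum, D, P_T, z_min, n_steps=400):
    """Midrapidity folding of a parton spectrum dsigma/dp_T^2 with a
    fragmentation function D(z) for a massless hadron:
        dsigma^h/dP_T^2 = int_{z_min}^1 dz D(z) z^-2 dsigma^f/dp_T^2(P_T/z).
    Simple midpoint rule; z_min is the kinematic cutoff 2 m_T cosh(y)/sqrt(s)."""
    dz = (1.0 - z_min) / n_steps
    total = 0.0
    for i in range(n_steps):
        z = z_min + (i + 0.5) * dz
        total += D(z) / z**2 * parton_spectrum(P_T / z) * dz
    return total

# toy inputs, for illustration only
parton = lambda pT: pT**-7.0              # steeply falling LO-like spectrum
frag = lambda z: 2.0 * (1.0 - z)**2 / z   # FF-like shape, not a fitted set
print(hadron_spectrum(parton, frag, P_T=10.0, z_min=0.1))
```

For a pure power-law parton spectrum $p_T^{-n}$ and fixed $z_{min}$, the hadron spectrum inherits the same power, which is a useful check of the folding.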
Furthermore, the subsequent hadronization of the shower is assumed to take place outside of the medium, even if the shower itself was medium-modified. It is computed using the Lund string fragmentation scheme [@Lund] which is also part of PYTHIA. Shower evolution in vacuum -------------------------- We model the evolution from some initial, highly virtual parton to a final state parton shower as a series of branching processes $a \rightarrow b+c$ where $a$ is called the parent parton and $b$ and $c$ are referred to as daughters. In a longer chain of branchings, all partons which have been part of the evolution branch towards some parton $i$ are called the ancestors of $i$ in the following. In QCD, the allowed branching processes are $q \rightarrow qg$, $g \rightarrow gg$ and $g \rightarrow q \overline{q}$. The kinematics of a branching is described in terms of the virtuality scale $Q^2$ and of the energy fraction $z$, where the energy of daughter $b$ is given by $E_b = z E_a$ and of the daughter $c$ by $E_c = (1-z) E_a$. It is convenient to introduce $t = \ln Q^2/\Lambda_{QCD}$ where $\Lambda_{QCD}$ is the scale parameter of QCD. $t$ takes a role similar to a time in the evolution equations, as it describes the evolution from some high initial virtuality $Q_0$ ($t_0$) to a lower virtuality $Q_m$ ($t_m$) at which the next branching occurs. 
In terms of the two variables, the differential probability $dP_a$ for a parton $a$ to branch is [@DGLAP1; @DGLAP2] $$dP_a = \sum_{b,c} \frac{\alpha_s}{2\pi} P_{a\rightarrow bc}(z) dt dz$$ where $\alpha_s$ is the strong coupling and the splitting kernels $P_{a\rightarrow bc}(z)$ read $$\begin{aligned} &&P_{q\rightarrow qg}(z) = 4/3 \frac{1+z^2}{1-z} \label{E-qqg}\\ &&P_{g\rightarrow gg}(z) = 3 \frac{(1-z(1-z))^2}{z(1-z)}\label{E-ggg}\\ &&P_{g\rightarrow q\overline{q}}(z) = N_F/2 (z^2 + (1-z)^2)\label{E-gqq}\end{aligned}$$ where we do not consider electromagnetic branchings involving a photon or a lepton pair and $N_F$ counts the number of active quark flavours for given virtuality. At a given value of the scale $t$, the differential probability for a branching to occur is given by the integral over all allowed values of $z$ in the branching kernel as $$I_{a\rightarrow bc}(t) = \int_{z_-(t)}^{z_+(t)} dz \frac{\alpha_s}{2\pi} P_{a\rightarrow bc}(z).$$ The kinematically allowed range of $z$ is given by $$\label{E-KB} z_\pm = \frac{1}{2} \left( 1+ \frac{M_b^2 - M_c^2}{M_a^2}\pm \frac{|{\bf p}_a|}{E_a}\frac{\sqrt{(M_a^2-M_b^2-M_c^2)^2 -4M_b^2M_c^2}}{M_a^2} \right)$$ where $M_i^2 = Q_i^2 + m_i^2$ with $m_i$ the bare quark mass, or zero in the case of a gluon. Note that the gluon emission from a quark, Eq. (\[E-qqg\]), is singular at $z=1$ and the gluon splitting probability (\[E-ggg\]) at $z=1$ and $z=0$, hence they preferentially lead to soft gluon radiation close to the kinematic cutoffs $z_\pm$. Note also that a high virtuality $Q_a$ shrinks the available range of $z$ and hence enforces on average harder radiation. Given the initial parent virtuality $Q_a^2$ or equivalently $t_a$, the virtuality at which the next branching occurs can be determined with the help of the Sudakov form factor $S_a(t)$, i.e.
the probability that no branching occurs between $t_0$ and $t_m$, where $$S_a(t_m) = \exp\left[ - \int_{t_0}^{t_m} dt' \sum_{b,c} I_{a \rightarrow bc}(t') \right].$$ Thus, the probability density that a branching of $a$ occurs at $t_m$ is given by $$\label{E-Qsq} \frac{dP_a}{dt} = \left[\sum_{b,c}I_{a\rightarrow bc}(t) \right] S_a(t).$$ These equations are solved for each branching by the PYSHOW algorithm [@PYSHOW] iteratively to generate a shower. For each branching, first Eq. (\[E-Qsq\]) is solved to determine the scale of the next branching, then Eqs. (\[E-qqg\])-(\[E-gqq\]) are evaluated to determine the type of branching and the value of $z$; if the value of $z$ is outside the kinematic bound given by Eq. (\[E-KB\]), the branching is rejected. Given $t_0, t_m$ and $z$, energy-momentum conservation determines the rest of the kinematics except for an azimuthal angle by which the plane spanned by the momenta of the two daughters can be rotated. In order to account in a schematic way for higher order interference terms, angular ordering is enforced onto the shower, i.e. opening angles spanned between daughter pairs $b,c$ from a parent $a$ are enforced to decrease according to the condition $$\label{E-Angular} \frac{z_b (1-z_b)}{M_b^2} > \frac{1-z_a}{z_a M_a^2}.$$ After a branching process has been computed, the same algorithm is applied to the two daughter partons, treating them as new parents. The branching is continued down to a scale $Q_{min}$ which is set to 1 GeV in the MC simulation, after which the partons are set on-shell, adjusting transverse momentum to ensure energy-momentum conservation. Shower evolution in medium -------------------------- We now embed this shower evolution in a soft medium.
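Before medium effects are added, the vacuum branching machinery described above can be condensed into a short sketch. This is our own illustration rather than the actual PYSHOW code: the splitting kernels are transcribed from Eqs. (\[E-qqg\])-(\[E-gqq\]), and the Sudakov sampling is done by naive stepwise integration, with `integrand(t)` standing in for $\sum_{b,c} I_{a\rightarrow bc}(t)$:

```python
import math, random

def P_qqg(z):
    # q -> q g: singular at z = 1 (soft-gluon enhancement)
    return 4.0 / 3.0 * (1.0 + z * z) / (1.0 - z)

def P_ggg(z):
    # g -> g g: singular at z = 0 and z = 1
    return 3.0 * (1.0 - z * (1.0 - z)) ** 2 / (z * (1.0 - z))

def P_gqq(z, n_f=3):
    # g -> q qbar: finite everywhere in (0, 1)
    return n_f / 2.0 * (z * z + (1.0 - z) ** 2)

def next_branching_t(t0, integrand, t_max, r=None, dt=1e-3):
    """Sample the scale t_m of the next branching: draw a uniform random
    number r and advance t until the no-branching probability
    S(t) = exp(-int_{t0}^{t} dt' I(t')) drops below r."""
    if r is None:
        r = random.random()
    acc, t = 0.0, t0
    while t < t_max:
        acc += integrand(t) * dt
        t += dt
        if math.exp(-acc) < r:
            return t        # a branching occurs at this scale
    return None             # parton leaves the evolution window unbranched
```

For a constant integrand $I(t) = c$ the sampled scale is analytic, $t_m = t_0 - \ln r / c$, which provides a quick check of the sampler.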
It is important to realize that this implies that energy and momentum in this case are in general [*not*]{} conserved if one includes only one component (shower or medium) in the simulation, as there is flow of energy and momentum through interactions from the shower to the medium and vice versa. In this particular model, we aim at a description of radiative energy loss, i.e. we assume a medium which does not absorb momentum by recoil of its constituents, but rather increases the virtuality of partons propagating through it and thus induces additional radiation. While this is clearly an idealized assumption about the medium, it allows one to study cleanly the effect of induced radiation on a shower, and moreover it is expected to capture the physics of the dominant component of energy loss [@Elastic]. Such a medium can be characterized by a transport coefficient $\hat{q}$ which represents the increase in virtuality $\Delta Q^2$ per unit pathlength of a parton traversing the medium. Note that this represents an average transfer, i.e. a picture which would be realized in a medium characterized by multiple soft scatterings with the hard parton. Alternatively, one could implement a model in which probabilistically occurring hard scatterings transfer the virtuality. While the vacuum shower develops only in momentum space, the interaction with the medium requires modelling in position space as well. In order to make the link, we assume that a shower parton with virtuality $Q$ is formed on the timescale $1/Q$, i.e.
the lifetime of a virtual parton with virtuality $Q_b$ coming from a parent parton with virtuality $Q_a$ is in the rest frame of the original hard collision (the rest frame of the medium may be different by a flow boost as the medium may not be static) given by $$\label{E-Lifetime} \tau_b = \frac{E_b}{Q_b^2} - \frac{E_b}{Q_a^2}.$$ The time $\tau_a^0$ at which a parton $a$ is produced in a branching can be determined by summing the lifetimes of all ancestors. Thus, during its lifetime, the parton virtuality is increased by the amount $$\label{E-Qgain} \Delta Q_a^2 = \int_{\tau_a^0}^{\tau_a^0 + \tau_a} d\zeta \hat{q}(\zeta)$$ where $\zeta$ is integrated along the spacetime path of the parton through the medium and $\hat{q}(\zeta) = \hat{q}(\tau,r,\phi,\eta_s)$ describes the spacetime variation of the transport coefficient, where $\hat{q}$ is specified at each set of coordinates proper time $\tau$, radius $r$, angle $\phi$ and spacetime rapidity $\eta_s$. In the following, we will use a hydrodynamical evolution model of the medium [@Hydro3d] for this dependence. If the parton is a gluon, the virtuality transfer from the medium is increased by the ratio of the Casimir color factors, $3/\frac{4}{3} = 2.25$. If $\Delta Q_a^2 \ll Q_a^2$ holds, i.e. the virtuality picked up from the medium is a correction to the initial parton virtuality, we may add $\Delta Q_a^2$ to the virtuality of parton $a$ before using Eq. (\[E-Qsq\]) to determine the kinematics of the next branching. If the condition is not fulfilled, the lifetime is determined by $Q_a^2 + \Delta Q_a^2$ and may be significantly shortened by the virtuality picked up from the medium. In this case we iterate Eqs. (\[E-Lifetime\]),(\[E-Qgain\]) to determine a self-consistent pair $(\tau_a, \Delta Q^2_a)$. This ensures that on the level of averages, the lifetime is treated consistently with the virtuality picked up from the medium.
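The self-consistent determination of $(\tau_a, \Delta Q^2_a)$ can be sketched as a damped fixed-point iteration. The under-relaxation is our own addition for numerical stability; the text only requires that Eqs. (\[E-Lifetime\]) and (\[E-Qgain\]) are iterated to consistency. The sketch assumes a straight path parametrized directly by time:

```python
def self_consistent_lifetime(E, Q2, Q2_parent, qhat, tau0,
                             n_iter=200, mix=0.5):
    """Find (tau, dQ2) such that tau = E/(Q2 + dQ2) - E/Q2_parent
    and dQ2 = int_{tau0}^{tau0 + tau} qhat(zeta) d zeta hold
    simultaneously; the integral is done with the midpoint rule."""
    dQ2 = 0.0
    for _ in range(n_iter):
        tau = max(E / (Q2 + dQ2) - E / Q2_parent, 0.0)
        n = 400
        step = tau / n
        target = step * sum(qhat(tau0 + (i + 0.5) * step) for i in range(n))
        dQ2 = (1.0 - mix) * dQ2 + mix * target   # under-relaxed update
    tau = max(E / (Q2 + dQ2) - E / Q2_parent, 0.0)
    return tau, dQ2
```

For constant $\hat{q} = 1$ GeV$^2$/fm, $E = 100$ GeV, $Q^2 = 1$ GeV$^2$ and parent virtuality $Q_a^2 = 100$ GeV$^2$, the fixed point can be solved by hand: $\Delta Q^2 = \tau$ and $\tau = 100/(1+\Delta Q^2) - 1$ give $(1+\Delta Q^2)^2 = 100$, i.e. $\tau = \Delta Q^2 = 9$.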
The need for the self-consistent iteration must be investigated a posteriori when an actual $\hat{q}(\zeta)$ is specified. At this point, we note that it may frequently occur in a constant medium that virtuality transferred from the medium determines the lifetime of a parton, as towards the end of the shower development $Q^2$ may become small, leading to large lifetimes and large $\Delta Q^2$. But the situation is much less severe for an expanding medium in which $\hat{q} \sim 1/\tau^\alpha$ where $\alpha >1$ and $\tau$ is the lifetime of the medium. We note that in this model, the shower [*gains*]{} energy from the medium. This is not unexpected, as we have only included energy transfer to the shower, but no interaction which would feed energy from the shower into the medium. In principle, one may argue that sufficiently soft partons of the shower would thermalize and thus become part of the medium, but at the moment we refrain from modelling energy transfer to the medium. Instead, we note that the increase in parton virtuality leads to induced radiation, transferring energy from hard partons to soft radiation and thus to a softening of the momentum distribution of partons inside the shower, along with an increase in momentum $|q_T|$ transverse to the shower initiating parton direction. Properties of the hydrodynamical medium --------------------------------------- After using the Lund string fragmentation scheme [@Lund] to hadronize the shower, we can compute the longitudinal momentum distribution of a shower initiated by a parton at fixed energy and hence the fragmentation function into any hadron species at this given partonic momentum scale. It is clear that the form of the medium-modified fragmentation function (MMFF) is dependent on $\hat{q}(\zeta)$ and that no general form can be given without specifying this quantity. However, we are only interested in very specific forms of $\hat{q}(\zeta)$, i.e. 
those which occur in the soft medium created in a heavy-ion collision. If we assume the relation $$\label{E-qhat} \hat{q}(\zeta) = K \cdot 2 \cdot [\epsilon(\zeta)]^{3/4} (\cosh \rho(\zeta) - \sinh \rho(\zeta) \cos\psi)$$ between $\hat{q}$ and the medium energy density $\epsilon$, the local flow rapidity $\rho$ with angle $\psi$ between flow and parton trajectory [@Flow1; @Flow2], we find that the vast majority of paths leads to a $\hat{q}(\zeta)$ which can be described by (see Fig. \[F-qhat\] for examples) $$\hat{q}(\zeta) = \frac{a}{(b+\tau/(1\,\mathrm{fm}/c))^c}.$$ The exceptions are paths close to the surface for which the parton travels [*inward*]{}. However, such events are suppressed not only by the overlap function, which is small at the medium surface, but also by the strong medium effect along such long paths. Thus, the medium which will be probed by a parton emerging from a particular vertex can be characterized by the four parameters $(a,b,c,\tau_E)$ where $\tau_E$ is the time at which the parton emerges from the medium. In practice, the range of parameters can be sufficiently narrowed down to $1 < b < 2$, $\tau_E < \tau_F$ (with $\tau_F$ the freeze-out time of the medium) and $2 < c < 4$. We thus investigate in the following three different scenarios (approximately representing a parton travelling into the $+x$ direction, originating from $x=4$ fm (A), $x=0$ (B) and $x=-4$ fm (C), with $y=0$ in all cases) in the transverse $(x,y)$ plane at midrapidity. These trajectories are characterized by the parameters $(b=1.5, c=3.3, \tau_E= 5.8$ fm/c$)$ (A), $(b=1.5, c=2.2, \tau_E= 10$ fm/c$)$ (B) and $(b=1.5, c=2.2, \tau_E= 15$ fm/c$)$ (C) and are quite typical for partons close to the surface (A), emerging from the central region (B) or traversing the whole medium (C). The medium-modified fragmentation function {#S-scaling} ------------------------------------------ In Fig.
\[F-MMFF\], we present the longitudinal momentum $q_z$ distribution of hadrons in a jet originating from a 20 GeV quark, both in vacuum and medium-modified according to the three scenarios outlined above. Instead of specifying the parameter $a$, we characterize the scenarios by the total amount of virtuality $$\Delta Q^2_{tot} = \int_{\tau_0}^{\tau_E} d \zeta \hat{q}(\zeta)$$ which would be transferred to a single parton crossing the medium (note that since branchings occur, multiple partons travel through the medium in a shower, hence the virtuality transfer to the shower as a whole is larger than $\Delta Q^2_{tot}$, but can only be computed on an event-by-event basis). In general, the medium modification induces the expected changes to the shower, i.e. the high $q_z$ tail is reduced by the medium, whereas induced radiation creates additional multiplicity at low $q_z$. This transfer of energy from hard partons to soft radiation increases with the strength of the medium transport coefficient. One notes that for small $\Delta Q^2_{tot}$ there is good scaling between all scenarios, and only for larger values of $\Delta Q^2_{tot}$ does the precise time pattern at which the virtuality is transferred to the shower become significant, with longer paths leading to more suppression of the high $q_z$ tail. However, since partons close to the edge of the medium propagating outward traverse only low density matter, they probe a region in which the scaling holds. Thus, scenario (B) can effectively be assumed to be a fair approximation for all paths if $\Delta Q^2_{tot}$ is used to characterize the path. This observation simplifies the subsequent averaging of paths through the hydrodynamical medium considerably, as it reduces the problem to determining $\Delta Q^2_{tot}$ via line integrals through the medium.
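Given the parametrized form $\hat{q}(\zeta) = a/(b+\tau)^c$ introduced above, $\Delta Q^2_{tot}$ reduces to a one-dimensional integral. A short sketch of our own (the normalization $a$ is left free, as in the text, and the numerical result can be checked against the analytic antiderivative):

```python
def qhat_path(tau, a, b, c):
    """Transport coefficient along an eikonal path,
    qhat(tau) = a / (b + tau)^c, with tau in fm/c."""
    return a / (b + tau) ** c

def delta_Q2_tot(a, b, c, tau_E, tau_0=0.0, n=10000):
    """Total virtuality int_{tau_0}^{tau_E} d zeta qhat(zeta) a single
    parton would pick up on this path (midpoint rule)."""
    step = (tau_E - tau_0) / n
    return step * sum(qhat_path(tau_0 + (i + 0.5) * step, a, b, c)
                      for i in range(n))

# Scenario (A) of the text: b = 1.5, c = 3.3, tau_E = 5.8 fm/c.
```

For $c \neq 1$ the analytic result is $\frac{a}{c-1}\left[b^{1-c} - (b+\tau_E)^{1-c}\right]$, which the midpoint sum reproduces to high accuracy.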
The Hump-backed plateau ----------------------- In order to focus more on the hadron production at low $p_z$, we introduce the variable $\xi = \ln(1/x)$ where $x = p/E_{jet}$ is the fraction of the jet momentum carried by a particular hadron and $E_{jet}$ is the total energy of the jet. The inclusive distribution $dN/d\xi$, the so-called Hump-backed plateau, is an important feature of QCD radiation [@Muller; @Dokshitzer] and is in vacuum dominated by color coherence physics. Fig. \[F-HBP\] shows the calculated distribution in vacuum and its distortion by the medium effect. Qualitatively, the results are very similar to those observed in [@HBP] and [@JEWEL] for schematic radiative energy loss, i.e. a depletion of the distribution at low $\xi$ and a strong enhancement of the hump. Transverse jet momentum spectra and angular distribution -------------------------------------------------------- Since the parton shower picks up virtuality from the medium, we may expect some broadening of the transverse distribution of hadrons in the jet due to the medium effect. We show the transverse momentum $q_T$ distribution (where transverse in this section denotes the direction transverse to the hard parton initiating the shower, not, as in a description of the whole p-p or Au-Au collision, the direction transverse to the beam axis) of hadrons in the shower in Fig. \[F-pT\]. Although the effects on the transverse momentum spectrum are not large, there is a clear trend towards a rise of multiplicity at low $q_T$ and some depletion in the high $q_T$ tail through the medium effect. One may also note that the integral $\int dq_T q_T dN/dq_T$ increases by $\sim 30$% from vacuum to the $\Delta Q^2_{tot}=10$ GeV$^2$ medium-modified jet. The effect of the medium on the transverse structure of the shower is more clearly seen when looking at the angular distribution of hadrons in the jet. The distribution $dN/d\alpha$ where $\alpha$ is the angle between hadron and jet axis is shown in Fig.
\[F-ang\] where a cut in momentum of 2 GeV has been applied to focus on hadrons which would appear above the soft background of a heavy-ion collision. The angular broadening of the distribution by the medium is clearly visible (note that the dip at $\alpha = 0$ is caused by the Jacobian in a polar coordinate system and is not an indication of a double-hump angular structure). Computation of $R_{AA}$ ======================= In this section, we show how to compute the nuclear suppression factor $R_{AA}$ using the medium-modified fragmentation function in the hydrodynamically evolving medium. The nuclear suppression factor is defined as $$R_{AA}(P_T,y) = \frac{dN^h_{AA}/dP_Tdy}{T_{AA}(0) d\sigma^{pp}/dP_Tdy}$$ where $T_{AA}({\bf b})$ is the standard nuclear overlap function. We can compute it by forming the ratio $$\label{E-R_AA} R_{AA}(P_T,y) = \frac{d\tilde\sigma_{medium}^{AA\rightarrow h+X}}{dP_T^2 dy}/\frac{d\sigma^{pp\rightarrow h+X}}{dP_T^2 dy}$$ where ${d\sigma^{pp\rightarrow h+X}}/{dP_T^2 dy}$ follows from Eq. (\[E-Fragment\]) when $D_{f\rightarrow h}(z, \mu_f^2)$ is set to be the vacuum fragmentation function whereas ${d\tilde\sigma_{medium}^{AA\rightarrow h+X}}/{dP_T^2 dy}$ is computed from the same equation with $D_{f\rightarrow h}(z, \mu_f^2)$ replaced by the suitably averaged MMFF $\langle D_{MM}(z,\mu_f^2)\rangle_{T_{AA}}$. This averaging has to be done over all possible paths of partons through the medium. We make the simplifying assumption that instead of computing the detailed paths of all partons in a shower through the medium, we only sample the medium along the eikonal path of the shower originator and consider this the medium seen by all partons in the shower. In essence, this neglects the transverse spread of paths in a shower.
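The averaging over production points that enters here can be sketched as follows. The Woods-Saxon parameters are generic gold-nucleus values of our own choosing for illustration, not those of the actual hydrodynamical evolution; the vertex weight implements the product of thickness functions of Eq. (\[E-Profile\]) below, without the overall $T_{AA}$ normalization:

```python
import math

def thickness(r_perp, R=6.38, d=0.535, rho0=0.17, z_max=15.0, n=600):
    """Nuclear thickness T_A(r) = int dz rho_WS(sqrt(r^2 + z^2)) for a
    Woods-Saxon density with radius R, skin depth d (fm) and central
    density rho0 (fm^-3), integrated with the midpoint rule."""
    step = 2.0 * z_max / n
    total = 0.0
    for i in range(n):
        z = -z_max + (i + 0.5) * step
        r = math.sqrt(r_perp * r_perp + z * z)
        total += rho0 / (1.0 + math.exp((r - R) / d)) * step
    return total

def vertex_weight(x0, y0, b=0.0):
    """Unnormalized probability of a hard vertex at (x0, y0) for impact
    parameter b along the x axis: T_A(|r0 + b/2|) * T_A(|r0 - b/2|)."""
    return (thickness(math.hypot(x0 + b / 2.0, y0))
            * thickness(math.hypot(x0 - b / 2.0, y0)))
```

Sampling vertices from this weight and directions uniformly in $\phi$ reproduces the geometric average; by construction the weight is largest in the center of the overlap zone and symmetric under $x_0 \rightarrow -x_0$.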
The probability density $P(x_0, y_0)$ for finding a hard vertex at the transverse position ${\bf r_0} = (x_0,y_0)$ and impact parameter ${\bf b}$ is given by the product of the nuclear profile functions as $$\label{E-Profile} P(x_0,y_0) = \frac{T_{A}({\bf r_0 + b/2}) T_A({\bf r_0 - b/2})}{T_{AA}({\bf b})},$$ where the thickness function is given in terms of the Woods-Saxon nuclear density $\rho_{A}({\bf r},z)$ as $T_{A}({\bf r})=\int dz \rho_{A}({\bf r},z)$. The MMFF must then be averaged over this quantity and all possible directions $\phi$ partons could travel from a vertex as $$\label{E-P_TAA} \langle D_{MM}(z,\mu^2) \rangle_{T_{AA}} \negthickspace = \negthickspace \frac{1}{2\pi} \int_0^{2\pi} \negthickspace \negthickspace \negthickspace d\phi \int_{-\infty}^{\infty} \negthickspace \negthickspace \negthickspace \negthickspace dx_0 \int_{-\infty}^{\infty} \negthickspace \negthickspace \negthickspace \negthickspace dy_0 P(x_0,y_0) D_{MM}(z,\mu^2,\zeta).$$ Using the approximate scaling relation described in section \[S-scaling\], the medium-modified fragmentation function $D_{MM}(z, \mu^2,\zeta)$ for a path $\zeta$ can be found by computing the line integral $$\Delta Q^2_{tot} = \int_0^\infty \negthickspace d \zeta \hat{q}(\zeta)$$ through the hydrodynamical medium where $\hat{q}(\zeta)$ is given by Eq. (\[E-qhat\]). The MC shower code is then used to compute $D_{MM}(z, \mu^2,\zeta)$ for each value of $\Delta Q^2_{tot}$. Note at this point that there is in principle a conceptual difference between using a fragmentation function extracted from a shower simulation (be it in medium or vacuum) and a phenomenological fragmentation function such as the KKP [@KKP] or AKK [@AKK] set in Eq. (\[E-Fragment\]). This has to do with the fact that in the usual fragmentation functions $\mu_f$ represents the relevant [*hadronic*]{} momentum scale. On the other hand, one can extract fragmentation functions from a shower evolution only for a given [*partonic*]{} momentum scale.
Thus, the scale evolution of the fragmentation function must be accounted for in a different way. At present, we use an MMFF for a fixed partonic scale of 20 GeV, hence the description of high $P_T$ hadron momentum spectra with the shower-generator extracted fragmentation function is not as good as using the phenomenological functions. However, here we are chiefly interested in the ratio of modified over unmodified spectra, and in this ratio the uncertainties due to the scale evolution largely cancel out. Comparison with data {#S-Data} ==================== At this point, our model is completely determined except for the parameter $K$ in Eq. (\[E-qhat\]) which links the energy density in the medium with the transport coefficient. We adjust $K$ such that the best fit to the data is achieved. In Fig. \[F-R\_AA\] we show $R_{AA}$ for different values of $K$. The data seem to favour $K=1.5$. This implies that the medium exhibits a 50% stronger effect than expected in an ideal QGP [@Baier]. $\hat{q}_0$, i.e. the highest transport coefficient reached in the evolution in the medium center at thermalization time, is found to be 7.8 GeV$^2$/fm. Note that this is well below the values of $K$ (and $\hat{q}$) extracted from computations which consider energy loss from the leading parton in the ASW [@QuenchingWeights] formulation of energy loss, cf. [@HydroJet1; @Dijets2; @JetFlow] where $K$ values between 2 and 5 are found. However, it is evident that the overall trend of the data is not well described by the model. While the data indicate a small rise of $R_{AA}$ from $p_T=5$ GeV and above, the MMFF model finds a drop. In particular, the region between 10 and 14 GeV is not well described. This is quite unlike models based on energy loss of the leading parton where a rise of $R_{AA}$ with $p_T$ is found, cf. e.g. [@HydroJet1; @Dijets2]. On the other hand, a similar drop of $R_{AA}$ has been found in the schematic shower evolution model in [@HBP].
It seems clear that there is a direct link between the characteristic distortion of the Hump-backed plateau at larger $\xi$ by medium effects and the rise of $R_{AA}$ towards lower $p_T$ — in both cases, the effects of additional parton production at lower momenta by medium-induced radiation become visible. It seems at this point that this is a generic, unavoidable feature of the model formulation. Nevertheless, let us revisit the assumptions of the model and discuss their validity. Discussion of model assumptions =============================== While a detailed discussion of all model uncertainties is beyond the scope of this paper and will be postponed to a future publication, we will at this point provide a list of possible uncertainties and estimate their effect where possible. First, there are uncertainties in the calculation of the hard process. This is done in LO pQCD, and one may wonder about NLO corrections. However, in [@LOpQCD] it has been shown that LO pQCD supplemented with a $K$ factor to account for higher order effects is in fact a good description of hard hadron production in p-p collisions. Another approximation we have made is to neglect the scale dependence of the fragmentation function and instead to compute the fragmentation function at a single partonic scale of 20 GeV. While there is no conceptual problem with an MC computation of the dependence of the vacuum or medium-modified fragmentation on $(z, \mu)$ and in the latter case also $\Delta Q^2_{tot}$, mapping out a dependence on three parameters is very demanding in terms of MC simulation time and will be done in a future investigation. As shown in Fig. \[F-Scale\], the effects are not substantial. A second class of uncertainties has to do with the way the shower algorithm is implemented and affects both vacuum and medium results. In essence, these are encoded in the control parameters of the PYTHIA shower algorithm.
For example, the scale $Q^2_{min}$ down to which partons are evolved is a parameter, and the results show some dependence on it. Other uncertainties have to do with the scale used to determine the running of $\alpha_s$ in the shower. In general, these effects are known to be small. A third class of uncertainties has to do with the specific implementation of medium effects. Chiefly, one may wonder about the effect of replacing a probabilistic formation time with its average value in Eq. (\[E-Lifetime\]), or about replacing a probabilistic transfer of virtuality associated with scattering processes with an average growth of virtuality as realized in Eq. (\[E-Qgain\]). The latter assumption has some impact on the treatment of QCD coherence effects, as the angular ordering Eq. (\[E-Angular\]) would be destroyed by a scattering process. Thus, a detailed formulation in terms of individual scatterings with medium constituents would include the information on when angular ordering is effective and when it is not. This information is lost when the computation is based on an average virtuality transfer. On the other hand, Landau-Pomeranchuk-Migdal (LPM) interference [@LPM] would lead to a suppression of radiation as compared to the vacuum situation due to frequent interactions with the medium. While answering the question about the effect of replacing a probabilistic formulation with averages requires a restructured algorithm which is beyond the scope of this paper, the effect of a detailed treatment of the angular ordering can be estimated comparatively easily by computing in the two limits of retaining or dropping angular ordering completely in the medium. As evident from Fig. \[F-AngOrder\], the effects are in practice relatively small and moreover do not alter the shape of the distribution significantly, hence they can largely be absorbed into a rescaling of $\Delta Q^2_{tot}$ or alternatively $K$.
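The angular-ordering condition Eq. (\[E-Angular\]) that is switched on or off in this comparison reduces to a simple predicate; a sketch of our own:

```python
def angular_ordered(z_a, M2_a, z_b, M2_b):
    """True if the branching of daughter b (momentum fraction z_b,
    virtuality scale M2_b) has a smaller opening angle than its parent
    a had, i.e. z_b (1 - z_b) / M2_b > (1 - z_a) / (z_a M2_a);
    branchings failing this test are vetoed when ordering is enforced."""
    return z_b * (1.0 - z_b) / M2_b > (1.0 - z_a) / (z_a * M2_a)
```

Dropping the ordering in the medium simply amounts to skipping this veto for branchings after the first medium interaction.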
One may also keep in mind that using the Lund string fragmentation mechanism to hadronize in a heavy-ion environment is not straightforward, especially at low $z$ and for heavy hadrons. If one estimates the hadronization time for a hadron with mass $m_h$ and energy $E_h$ as $$\tau \sim E_h/m_h^2$$ then a 10 GeV pion will be formed several tens of fm away from the medium, but a 5 GeV proton has a formation time of about 1.2 fm, clearly not enough to assume that the process takes place outside the medium or even outside the partonic region. Thus, there is some reason to suspect that the treatment of low $z$ hadrons will miss part of the relevant physics. Finally, one may question the validity of the scaling assumption made in section \[S-scaling\], which in essence allows one to reduce the information of $\hat{q}(\zeta)$ specified along the whole path to a simple line integral. While the scaling is not exact, a detailed comparison of Eq. (\[E-Qgain\]) for some selected explicit forms of paths through the hydro medium with the scaled version shows that the differences in the resulting MMFF are at most on the level of 20% and moreover can again largely be absorbed into a rescaling of $\Delta Q^2_{tot}$, as the shape of the MMFF is not changed. Conclusions =========== There is evidence that the model uncertainties which can be estimated mainly affect the precise determination of $K$, but not the more striking fact that the model does not capture the rising trend of $R_{AA}$ with $p_T$ apparent in the data. While there is still the possibility that one of the effects which have been neglected in the modelling so far may correct this disagreement, one may note that this does not seem likely, as there is within the model a rather generic physics mechanism which produces this particular shape, i.e. the energy transport from high $z$ partons into radiated low $z$ multiplicity, and it is this additional multiplicity which leads to the rise of $R_{AA}$ for low momenta.
Supporting evidence for this is found in the fact that the calculation in [@HBP], which is based on a completely different algorithm but models the same physics mechanism, nevertheless results in a very similar shape of $R_{AA}$ when folded with the pQCD parton spectrum. At this point, we note that there is a second failure of the model, also connected with subleading hadrons in the shower. The angular distribution of hadrons in Fig. \[F-ang\] shows, even for a jet emerging from the center of the medium, only moderate broadening, but no sign of an energy-momentum flow to large angles or the formation of a dip at zero angle as would be required to account for experimental data on two- and three-particle correlations [@PHENIX_Mach; @PHENIX_Mach2; @STAR_Mach; @STAR_3p]. In contrast, models which assume that the energy radiated from the leading parton does not develop as a shower but rather excites a hydrodynamical shockwave in the medium [@MachShuryak; @Mach1; @Mach2; @Mach3] can account for the observed structures in the data. On the other hand, radiative energy loss considered for the leading parton only is able to account well for the data even in the nontrivial case of back-to-back correlations [@HydroJet1; @Dihadron1; @Dihadron2]. Furthermore, conceptually, branchings at large $Q^2$ should take place regardless of the presence or absence of a medium, as they on average develop before a thermalized medium can be assumed to exist. These observations lead to the idea that while especially the initial development of a high $p_T$ process is described by the shower evolution, at some lower scale a pQCD shower ceases to be an adequate description for soft radiated partons, but instead some form of coupling to hydrodynamical modes of the medium (cf. e.g. [@Source]) takes place.
The interesting question (especially in the light of the kinematic range of hard probes accessible at LHC) is what part of the shower (starting from the leading parton) is still correctly described by the in-medium shower evolution equations and under what conditions this evolution should be replaced by different physics. The answer to these questions at RHIC energies may be found in the application of the formalism presented here to hard multiparticle correlations, in which the high $P_T$ part of the shower is systematically accessible. This, however, will be the subject of a future investigation. I would like to thank Kari J. Eskola for valuable discussions on the problem. This work was financially supported by the Academy of Finland, Project 115262. [99]{} M. Gyulassy and X. N. Wang, Nucl. Phys. B [**420**]{}, (1994) 583. R. Baier, Y. L. Dokshitzer, A. H. Mueller, S. Peigne and D. Schiff, Nucl. Phys. B [**484**]{}, (1997) 265. B. G. Zakharov, JETP Lett.  [**65**]{}, (1997) 615. U. A. Wiedemann, Nucl. Phys. B [**588**]{}, (2000) 303. M. Gyulassy, P. Levai and I. Vitev, Nucl. Phys. B [**594**]{}, (2001) 371. X. N. Wang and X. F. Guo, Nucl. Phys. A [**696**]{}, (2001) 788. M. Shimomura \[PHENIX Collaboration\], nucl-ex/0510023. D. Magestro \[STAR Collaboration\], nucl-ex/0510002; talk Quark Matter 2005. J. Adams [*et al.*]{} \[STAR Collaboration\], nucl-ex/0604018. S. S. Adler [*et al.*]{} \[PHENIX Collaboration\], Phys. Rev.  C [**76**]{} (2007) 034904. C. Nonaka and S. A. Bass, Phys. Rev.  C [**75**]{} (2007) 014902. T. Renk, J. Ruppert, C. Nonaka and S. A. Bass, Phys. Rev.  C [**75**]{} (2007) 031902. A. Majumder, C. Nonaka and S. A. Bass, Phys. Rev.  C [**76**]{} (2007) 041902. G. Y. Qin, J. Ruppert, S. Turbide, C. Gale, C. Nonaka and S. A. Bass, Phys. Rev.  C [**76**]{} (2007) 064907. S. A. Bass, C. Gale, A. Majumder, C. Nonaka, G. Y. Qin, T. Renk and J. Ruppert, 0805.3271 \[nucl-th\]. T. Renk, Phys. Rev.  C [**74**]{} (2006) 024903. T. Renk and K.
Eskola, Phys. Rev.  C [**75**]{} (2007) 054910. H. Zhang, J. F. Owens, E. Wang and X. N. Wang, Phys. Rev. Lett.  [**98**]{} (2007) 212301. C. A. Salgado and U. A. Wiedemann, Phys. Rev. D [**68**]{}, (2003) 014008. N. Borghini and U. A. Wiedemann, hep-ph/0506218. K. Zapp, G. Ingelman, J. Rathsman, J. Stachel and U. A. Wiedemann, 0804.3568 \[hep-ph\]. T. Sjostrand, Comput. Phys. Commun.  [**82**]{} (1994) 74. M. Bengtsson and T. Sjöstrand, Phys. Lett. B [**185**]{} (1987) 435; Nucl. Phys. B [**289**]{} (1987) 810; E. Norrbin and T. Sjöstrand, Nucl. Phys. B [**603**]{} (2001) 297. G. Corcella [*et al.*]{}, JHEP [**0101**]{} (2001) 010. T. Renk, Phys. Rev.  C [**76**]{} (2007) 064905. J. Pumplin, D. R. Stump, J. Huston, H. L. Lai, P. Nadolsky and W. K. Tung, JHEP [**0207**]{}, (2002) 012. D. Stump, J. Huston, J. Pumplin, W. K. Tung, H. L. Lai, S. Kuhlmann and J. F. Owens, JHEP [**0310**]{}, (2003) 046. M. Hirai, S. Kumano and T. H. Nagai, Phys. Rev. C [**70**]{}, (2004) 044905. K. J. Eskola, V. J. Kolhinen and C. A. Salgado, Eur. Phys. J. C [**9**]{} (1999) 61. I. Sarcevic, S. D. Ellis and P. Carruthers, Phys. Rev. D [**40**]{}, (1989) 1446. B. Andersson, G. Gustafson, G. Ingelman and T. Sjostrand, Phys. Rep. [**97**]{} (1983) 31. V. N. Gribov and L. N. Lipatov, Sov. J. Nucl. Phys. [**15**]{} (1972) 438, [*ibid.*]{} [**75**]{}; Yu. L. Dokshitzer, Sov. J. Phys. JETP [**46**]{} (1977) 641. G. Altarelli and G. Parisi, Nucl. Phys. B [**126**]{} (1977) 298. R. Baier, A. H. Mueller and D. Schiff, nucl-th/0612068. H. Liu, K. Rajagopal and U. A. Wiedemann, hep-ph/0612168. A. H. Muller, Nucl. Phys. B [**213**]{} (1983) 85. Yu. L. Dokshitzer, V. A. Khoze and S. I. Troian, Adv. Ser. Direct. High Energy Phys. [**5**]{} (1988) 241. B. A. Kniehl, G. Kramer and B. Potter, Nucl. Phys. B [**582**]{}, (2000) 514. S. Albino, B. A. Kniehl and G. Kramer, Nucl. Phys. B [**725**]{} (2005) 181. R. Baier, Nucl. Phys. A [**715**]{}, (2003) 209. T. Renk and J. Ruppert, Phys. Rev.  
C [**72**]{} (2005) 044901. K. J. Eskola and H. Honkanen, Nucl. Phys. A [**713**]{} (2003) 167. L. D. Landau and I. Pomeranchuk, Dokl. Akad. Nauk Ser. Fiz.  [**92**]{}, (1953) 735; L. D. Landau and I. Pomeranchuk, Dokl. Akad. Nauk Ser. Fiz.  [**92**]{}, (1953) 535; A. B. Migdal, Phys. Rev.  [**103**]{}, (1956) 1811. S. S. Adler [*et al.*]{} \[PHENIX Collaboration\], Phys. Rev. Lett.  [**97**]{} (2006) 052301. A. Adare [*et al.*]{} \[PHENIX Collaboration\], Phys. Rev. Lett.  [**98**]{} (2007) 232302. M. J. Horner \[STAR Collaboration\], J. Phys. G [**34**]{} (2007) S995. J. G. Ulery \[STAR Collaboration\], Int. J. Mod. Phys.  E [**16**]{} (2007) 2005; J. G. Ulery, 0709.1633 \[nucl-ex\]. J. Casalderrey-Solana, E. V. Shuryak and D. Teaney, hep-ph/0602183. T. Renk and J. Ruppert, Phys. Rev.  C [**73**]{} (2006) 011901. T. Renk and J. Ruppert, Phys. Lett.  B [**646**]{} (2007) 19. T. Renk and J. Ruppert, Phys. Rev.  C [**76**]{} (2007) 014908. R. B. Neufeld, 0805.0385 \[hep-ph\].
--- abstract: 'Disentanglement is the process which transforms a state $\rho$ of two subsystems into an unentangled state, while not affecting the reduced density matrices of each of the two subsystems. Recently Terno [@Terno98] showed that an arbitrary state cannot be disentangled into a [*tensor product*]{} of its reduced density matrices. In this letter we present various novel results regarding disentanglement of states. Our main result is that there are sets of states which cannot be successfully disentangled (not even into a separable state). Thus, we prove that a universal disentangling machine cannot exist.' author: - 'Tal Mor[^1]' title: On the Disentanglement of States --- [2]{} Entanglement plays an important role in quantum physics [@Peres93]. Due to its peculiar non-local properties, entanglement is one of the main pillars of non-classicality. The creation of entanglement and the destruction of entanglement via general operations are still under extensive study [@entanglement]. Here we concentrate on the process of disentanglement of states. For simplicity, we concentrate on qubits in this letter, and on the disentanglement of two subsystems. Let there be two two-level systems “X” and “Y”. The state of each such system is called a quantum bit (qubit). A pure state which is a tensor product of two qubits can always be written as $|0({\rm X})0({\rm Y})\rangle$ by an appropriate choice of basis, $|0\rangle$ and $|1\rangle $ for each qubit. For convenience, we drop the index of the subsystem (whenever it is possible), and order them so that “X” is at the left side.
By an appropriate choice of the basis $|0\rangle$ and $|1\rangle$, and using the Schmidt decomposition (see [@Peres93]), an entangled pure state of two qubits can always be written as $ | \psi \rangle = \cos \phi |00\rangle + \sin \phi |11\rangle $ or using a density matrix notation $\rho = |\psi\rangle \langle \psi|$ $$\rho = [ \cos \phi |00\rangle + \sin \phi |11\rangle ] [ \cos \phi \langle 00| + \sin \phi \langle 11|] \ .$$ The reduced density matrix of each of the qubits is $\rho_{\rm X} = {\rm Tr}_{\rm Y} [\rho({\rm XY})] $ and $\rho_{\rm Y} = {\rm Tr}_{\rm X} [\rho({\rm XY})] $. In the basis used for the Schmidt decomposition the two reduced density matrices are $$\label{reduced-state} \rho_{\rm X} = \rho_{\rm Y} = \left( \begin{array}{cc} \cos^2\phi & 0 \\ 0 & \sin^2 \phi \end{array} \right) \ .$$ Following Terno [@Terno98] and Fuchs [@Fuchs] let us provide the following two definitions (note that the second is an interesting special case of the first): [*Definition*]{}.— Disentanglement is the process that transforms a state of two (or more) subsystems into an unentangled state (in general, a mixture of product states) such that the reduced density matrices of each of the subsystems are unaffected. [*Definition*]{}.— Disentanglement into a tensor product state is the process that transforms a state of two (or more) subsystems into a tensor product of the two reduced density matrices. Note that, according to these definitions, when a successful disentanglement is applied to any pure product state, the state must be left unmodified. That is, $$\label{pure-ps} |00\rangle \longrightarrow |00\rangle$$ (in an appropriate basis). This fact proved very useful in the analysis we report here. The main goal of this letter is to show that a universal disentangling machine cannot exist. A universal disentangling machine is a machine that could disentangle any state which is given to it as an input.
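The partial traces in Eq. (\[reduced-state\]) are easy to verify numerically. The sketch below is our illustration, not part of the letter: it assumes numpy, and the helper name `partial_trace` is ours. It builds $\rho = |\psi\rangle\langle\psi|$ for the Schmidt-form state and traces out each qubit in turn:

```python
import numpy as np

def partial_trace(rho, keep):
    """Partial trace of a two-qubit density matrix.
    keep=0 traces out qubit Y (returns rho_X);
    keep=1 traces out qubit X (returns rho_Y)."""
    r = rho.reshape(2, 2, 2, 2)        # indices (x, y, x', y')
    if keep == 0:
        return np.einsum('ikjk->ij', r)  # sum over y = y'
    return np.einsum('kikj->ij', r)      # sum over x = x'

phi = 0.3
# Schmidt-form pure state |psi> = cos(phi)|00> + sin(phi)|11>
psi = np.cos(phi) * np.kron([1, 0], [1, 0]) + np.sin(phi) * np.kron([0, 1], [0, 1])
rho = np.outer(psi, psi)

rho_X = partial_trace(rho, keep=0)
rho_Y = partial_trace(rho, keep=1)
expected = np.diag([np.cos(phi)**2, np.sin(phi)**2])
assert np.allclose(rho_X, expected) and np.allclose(rho_Y, expected)
```

Both reduced matrices come out diagonal with entries $\cos^2\phi$ and $\sin^2\phi$, for any $\phi$, as the Schmidt decomposition guarantees.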
In order to prove that such a machine cannot exist, it is enough to find [*one*]{} set of states that cannot be disentangled if the data (regarding which state is used) is not available. To analyze the process of disentanglement consider the following experiment involving two subsystems “X” and “Y”, and a sender who sends [*both systems*]{} to the receiver who wishes to disentangle the state of these two subsystems: Let the sender (Alice) and the disentangler (Eve) define a finite set of states $|\psi_i\rangle$; let Alice choose one of the states at random, and let it be the input of the disentangling machine designed by Eve. Eve does not get from Alice the data regarding [*which*]{} of the states Alice chose, so Eve’s aim is to design a machine that will succeed in disentangling any of the possible states $|\psi_i\rangle$. In the same sense that an arbitrary state cannot be cloned (a universal cloning machine does not exist [@WZ82]), it was recently shown by Terno [@Terno98] that an arbitrary state cannot be disentangled into a tensor product of its reduced density matrices. Note that this novel result of [@Terno98] proves that [*universal disentanglement into product states*]{} is impossible, and it leaves open the more general question of whether a [*universal disentanglement*]{} is impossible (that is, disentanglement into separable states). We extend the investigation of the process of disentanglement well beyond Terno’s novel analysis in several ways. First, we find a larger class (than the one found by Terno) of states which cannot be disentangled into product states. Then, we show that there are non-trivial sets of states that [*can*]{} be disentangled. In particular, we present a set of states that cannot be disentangled into tensor product states, [*but*]{} can be disentangled into separable states. Finally, we present our most important result: a set of states that [*cannot be disentangled*]{}.
The existence of such a set of states proves that a universal disentangling machine cannot exist. Using the terminology of [@WZ82] we can say that our letter shows that [*a single quantum cannot be disentangled*]{}. Consider a set of states containing only one state. Since the state is known, obviously it can be disentangled. E.g., it is replaced by the appropriate tensor product state. We first prove that there are infinitely many sets of states that [*cannot*]{} be disentangled into product states. Our proof here follows Terno’s method, with the addition of using the Schmidt decomposition to analyze a larger class of states. The most general form of two entangled states can always be presented (by an appropriate choice of bases) as: $$\begin{aligned} \label{the-states} |\psi_0 \rangle &=& \cos \phi_0 |00\rangle + \sin \phi_0 |11\rangle \nonumber \\ |\psi_1 \rangle &=& \cos \phi_1 |0'0'\rangle + \sin \phi_1 |1'1'\rangle \ .\end{aligned}$$ To prove that there are states for which disentanglement into tensor product states is impossible, let us restrict ourselves to the simpler subclass $$\begin{aligned} |\psi_0 \rangle &=& \cos \phi |00\rangle + \sin \phi |11\rangle \nonumber \\ |\psi_1 \rangle &=& \cos \phi |0'0'\rangle + \sin \phi |1'1'\rangle \ .\end{aligned}$$ There exists some basis $$|0''\rangle = {1 \choose 0} ; |1''\rangle = {0 \choose 1}$$ such that the basis vectors $|0\rangle;|1\rangle$ and $|0'\rangle;|1'\rangle$ become $$|0\rangle = {\cos \theta \choose \sin \theta} ; |1\rangle = {\sin \theta \choose -\cos \theta} \ ,$$ and $$|0'\rangle = {\cos \theta \choose -\sin \theta} ; |1'\rangle = {\sin \theta \choose \cos \theta}$$ respectively, in that basis.
The states (\[the-states\]) are now $$\begin{aligned} |\psi_0 \rangle &=& c_\phi {c_\theta \choose s_\theta} {c_\theta \choose s_\theta} + s_\phi {s_\theta \choose - c_\theta} {s_\theta \choose - c_\theta} \nonumber \\ |\psi_1 \rangle &=& c_\phi {c_\theta \choose - s_\theta} {c_\theta \choose - s_\theta} + s_\phi {s_\theta \choose c_\theta} {s_\theta \choose c_\theta} \ ,\end{aligned}$$ with $c_\phi \equiv \cos \phi$, etc. The overlap of the two states is ${\rm OL}= \langle \psi_0 | \psi_1 \rangle = \cos^2 2\theta + \sin 2\phi \sin^2 2 \theta$. The reduced states are given by $$\hat{\rho_0} = {c_\phi}^2 \left( \begin{array}{cc} {c_\theta}^2 & c_\theta s_\theta \\ c_\theta s_\theta & {s_\theta}^2 \end{array} \right) + {s_\phi}^2 \left( \begin{array}{cc} {s_\theta}^2 & - c_\theta s_\theta \\ - c_\theta s_\theta & {c_\theta}^2 \end{array} \right) \ ,$$ and $$\hat{\rho_1} = {c_\phi}^2 \left( \begin{array}{cc} {c_\theta}^2 & - c_\theta s_\theta \\ - c_\theta s_\theta & {s_\theta}^2 \end{array} \right) + {s_\phi}^2 \left( \begin{array}{cc} {s_\theta}^2 & c_\theta s_\theta \\ c_\theta s_\theta & {c_\theta}^2 \end{array} \right) \ .$$ Thus, the state after the disentanglement into tensor product states is $ (\rho_{\rm disent})_0 = \hat{\rho_0} \hat{\rho_0}$ or $ (\rho_{\rm disent})_1 = \hat{\rho_1} \hat{\rho_1}$. The minimal probability of error for distinguishing two states [@Helstrom76] is given by $ {\rm PE} = \frac{1}{2} - \frac{1}{4} {\rm Tr}| \rho_0 - \rho_1| $. For two pure states there is a simpler expression: $ {\rm PE} = \frac{1}{2} - \frac{1}{2} \sqrt{[1 - OL^2]}$. Thus, $${\rm PE}_{\ \!\rm ent} = \frac{1}{2} - \frac{1}{2} \sqrt{[1 - ({c_{2\theta}}^2 + s_{2 \phi} {s_{2 \theta}}^2)^2]}$$ for the two initial entangled states. This probability of error is minimal, hence it cannot be reduced by any physical process. 
Therefore, if, for some $\theta$ and $\phi$, the disentanglement into the tensor product states [*reduces*]{} the ${\rm PE}$, then that process is non-physical. The difference of the states obtained after disentangling into tensor product states is $ \Delta_{\rm disent} = \hat{\rho_0} \hat{\rho_0}-\hat{\rho_1} \hat{\rho_1}$. This matrix is $$\Delta_{\rm disent} = \cos 2 \phi \sin 2 \theta \left( \begin{array}{cccc} 0 & a & a & 0 \\ a & 0 & 0 & b \\ a & 0 & 0 & b \\ 0 & b & b & 0 \end{array} \right) \ ,$$ with $a = \cos^2 \phi \cos^2 \theta + \sin^2 \phi \sin^2 \theta$ and $b = \cos^2 \phi \sin^2 \theta + \sin^2 \phi \cos^2 \theta$. After diagonalization, we can calculate the Trace-Norm, so finally we get $$\begin{aligned} {\rm PE}_{\rm \ \!disent} &=& \frac{1}{2} - \frac{1}{\sqrt2}\sin 2 \theta \cos 2 \phi \sqrt{a^2 + b^2} \nonumber \\ &=& \frac{1}{2} - \frac{1}{2} s_{ 2 \theta} c_{ 2 \phi} \sqrt{1 + {c_{2 \phi}}^2 {c_{2 \theta}}^2 } \ .\end{aligned}$$ We can now observe that there are values of $\theta$ and $\phi$, e.g., $\theta = \phi = \pi/8$, for which the outcomes of the disentanglement process are illegitimate since they satisfy ${\rm PE}_{\rm disent} < {\rm PE}_{\rm ent} $. Once these outcomes are illegitimate, the disentanglement process leading to these outcomes is non-physical, proving that a disentangling machine which disentangles the states $|\psi_0\rangle$ and $|\psi_1\rangle $ cannot exist for these values of $\theta $ and $\phi$. Therefore, this analysis provides a proof (similar to Terno’s proof [@Terno98]) that a universal machine performing disentanglement into tensor product states cannot exist. The following set of states can easily be disentangled: $$|\psi_0\rangle = \frac{1}{\sqrt2} [ |00\rangle + |11\rangle ] \ ; \quad |\psi_1\rangle = \frac{1}{\sqrt2} [ |00\rangle - |11\rangle ]$$ To disentangle them, Eve’s machine uses an ancilla which is another pair of particles in a maximally entangled state (e.g., the singlet state) in any basis.
Eve’s machine swaps one of the above particles with one of the members of the added pair, and traces out the ancillary particles. As a result, the state of the remaining two particles (one from each entangled pair) is $$\label{cms} (1/4)[ |00 \rangle \langle 00| + |01 \rangle \langle 01| + |10 \rangle \langle 10| + |11 \rangle \langle 11| ] \ ,$$ the completely mixed state in four dimensions. This set provides a trivial example of the ability to perform the disentanglement process. It is a trivial case of disentanglement, since the two states are orthogonal: they can first be measured and distinguished, and then, once the state is known, clearly it can be disentangled. However, exactly the same disentanglement process can be used to successfully disentangle a non-trivial set of states. Now let the two states be defined in different bases (rather than the same basis), so the first state is still $|\psi_0\rangle$, and the second state is $$|\psi_1'\rangle = \frac{1}{\sqrt2} [ |0'0'\rangle - |1'1'\rangle ] \ .$$ The same process of disentanglement still works, while now the states are non-orthogonal, and cannot always be successfully distinguished. Hence, this disentanglement process is non-trivial. Note that the same process works successfully also when more than two maximally entangled states are used as the possible inputs. Before we continue, let us recall some proofs of the no-cloning argument, since the methods we use here are quite similar to those used in the no-cloning argument. Let the cloner obtain an unknown state and try to clone it. To prove that this is impossible, it is enough to provide one set of states for which the cloner cannot clone any state in this set. Let the sender (Alice) and the cloner (Eve) use three states $|0\rangle$, $|1\rangle$, and $|+\rangle = (1/\sqrt2)[|0\rangle + |1\rangle]$.
The most general process which can be used here in the attempt of cloning the unknown state from this set is to attach an ancilla in an arbitrary dimension and in a known state (say $|E\rangle$), to transform the entire system using an arbitrary unitary transformation, and to trace out the unrequired parts of the ancilla. In order to clone the states $|0 \rangle$ and $|1\rangle$ the transformations are restricted to be $$|E0\rangle \longrightarrow |E_0 00\rangle \ ; \quad |E1\rangle \longrightarrow |E_1 11\rangle$$ and once the remaining ancilla is traced out, the cloning process is completed. Due to linearity, this fully determines the transformation of the last state to be $$|E+\rangle \longrightarrow \frac{1}{\sqrt2} [|E_0 00\rangle + |E_1 11\rangle] \ ,$$ while a cloning process should yield $$|E+\rangle \longrightarrow |E_+\! +\!+\rangle \ .$$ The contradiction is clearly apparent since, once the remaining ancilla is traced out, the second expression has a non-zero amplitude for the term $|01\rangle $ while the first expression does not. The conventional way [@WZ82] of proving the no-cloning theorem (using only two states, say $|0\rangle$ and $|+\rangle$) is to compare the overlap before and after the transformation (it must be equal due to the unitarity of quantum mechanics): We obtain that $\langle E|E\rangle \langle 0 | + \rangle = \langle E_0|E_+\rangle \langle 0 | + \rangle \langle 0 | + \rangle$. Hence $ 1 = \langle E_0|E_+\rangle \langle 0 | + \rangle $ which is obviously wrong since all the terms on the right hand side are smaller than one. We shall now use the linearity of quantum mechanics to show that there are states that cannot be disentangled into tensor product states, but can only be disentangled into a mixture of tensor product states. Surprisingly, our proof is [*mainly*]{} based on the [*disentanglement of product states*]{}, that is, on the disentanglement of states which are anyhow not entangled even before the disentanglement process.
The reason for the usefulness of such states is that they provide strict restrictions on the allowed transformations. The following set of states cannot be disentangled into product states: $$\begin{aligned} |\psi_0 \rangle &=& |00\rangle \nonumber \\ |\psi_1 \rangle &=& |11\rangle \nonumber \\ |\psi_2 \rangle &=& (1/\sqrt2) [ |00\rangle + |11\rangle ] \end{aligned}$$ We shall assume that these states can be disentangled into product states and we shall reach a contradiction. Note that the resulting states should be $|\psi_0 \rangle$ and $|\psi_1\rangle $ in the first two cases (see Eq. \[pure-ps\]), and the resulting state should be the completely mixed state (in 4 dimensions) in the last case (see Eq. \[cms\]). The most general process which can be used here is to attach an ancilla in an arbitrary dimension and in a known state (say $|E\rangle$), to transform the entire system using an arbitrary unitary transformation, and to trace out the ancilla. In order to avoid changing the states $|\psi_0 \rangle$ and $|\psi_1\rangle$ the transformations are restricted to be $$\begin{aligned} \label{two-states} |E\psi_0\rangle &=& |E00\rangle \longrightarrow |E_0 00\rangle \nonumber \\ |E\psi_1\rangle &=& |E11\rangle \longrightarrow |E_1 11\rangle \ .\end{aligned}$$ As in the no-cloning argument, these transformations fully determine the transformation of the last state to be $$|E\psi_2\rangle \longrightarrow \frac{1}{\sqrt2} [|E_0 00\rangle + |E_1 11\rangle] \ .$$ Once we trace out the ancilla, the resulting state is still entangled unless $|E_0\rangle$ and $|E_1\rangle$ are orthogonal. The proof of that statement is as follows: Without loss of generality the states $|E_0\rangle $ and $|E_1\rangle $ can be written as $|E_0 \rangle = |0 \rangle$ and $|E_1 \rangle = \alpha |0 \rangle + \beta |1 \rangle$ with $|\alpha|^2 + |\beta|^2 = 1$. Thus, $|E\psi_2\rangle \longrightarrow \frac{1}{\sqrt2} |0\rangle ( |00\rangle + \alpha |11\rangle ) + \frac{\beta}{\sqrt2} |111\rangle$.
When the ancilla is traced out the remaining state is $$\label{resulting} \left( \begin{array}{cccc} 1/2 & 0 & 0 & \alpha^*/2 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ \alpha/2 & 0 & 0 &1/2 \end{array} \right) \ .$$ The resulting state is entangled unless $\alpha = 0$. Thus, in a successful disentanglement process $\alpha = 0$ and hence, $|E_1\rangle = e^{i\theta} |1\rangle$. The resulting state, however, is then not a tensor product state; thus the above set of states [*cannot*]{} be disentangled into tensor product states. At the same time, this example also shows that the above set of states [*can*]{} be disentangled into a mixture of tensor product states. The resulting state (\[resulting\]) still has the correct reduced density matrices for each subsystem—the completely mixed state in two dimensions. With $\alpha=0$, the resulting state is $(1/2) [|00\rangle \langle 00| + |11\rangle \langle 11|] $, so we succeeded in showing an example where the states can only be disentangled into a separable state, but not into a tensor product state. Our result resembles a result regarding two commuting mixed states [@broadcast]: these states cannot be cloned, but they can be broadcast. That is, the resulting state of the cloning device cannot be a tensor product of states which are equal to the original states, but can be a separable state whose reduced density matrices are equal to the original states [@Bennett]. At that stage, the main question (raised by [@Terno98] and [@Fuchs]) is still left open: Can there be a universal disentangling machine? That is, can there exist a machine that disentangles any set of states into separable states? We shall now show that such a machine cannot exist. Our result is obtained by combining several of the previous techniques: the use of linearity, unitarity, and the disentanglement of product states.
Consider the following set of states $$\begin{aligned} |\psi_0 \rangle &=& |00\rangle \nonumber \\ |\psi_1 \rangle &=& |11\rangle \nonumber \\ |\psi_2 \rangle &=& (1/\sqrt2)[ |00\rangle + |11\rangle ] \nonumber \\ |\psi_3 \rangle &=& |++\rangle \nonumber \\\end{aligned}$$ in which we added the state $|\psi_3 \rangle$ to the previous set. This set of states cannot be disentangled. The allowed transformations are now more restricted since, in addition to (Eq. \[two-states\]), the state $|\psi_3\rangle$ must also not be changed by the disentangling machine: $$|E \psi_3\rangle = |E\!+\!+\rangle \longrightarrow |E_+\! +\!+\rangle \ .$$ Due to unitarity, $\langle E|E\rangle \langle 0 | + \rangle \langle 0 | + \rangle = \langle E_0|E_+\rangle \langle 0 | + \rangle \langle 0 | + \rangle$, and also $\langle E|E\rangle \langle 1 | + \rangle \langle 1 | + \rangle = \langle E_1|E_+\rangle \langle 1| + \rangle \langle 1| + \rangle$. These expressions yield $ 1 = \langle E_0|E_+\rangle $, and $ 1 = \langle E_1|E_+\rangle $, from which we must conclude that $|E_+\rangle = |E_0\rangle = |E_1\rangle$. Recall that we already found that $|E_0\rangle = |0 \rangle$ and $|E_1\rangle = e^{i\theta} |1 \rangle$, due to the disentanglement of $|\psi_2\rangle$. Since the two requirements contradict each other, the proof that the above set of states cannot be disentangled (not even to a separable state) is completed. Thus, we have proved that a universal disentangling machine cannot exist. In other words, a single quantum cannot be disentangled. This result resembles a result regarding two non-commuting mixed states [@broadcast]: these states cannot be cloned, and furthermore, they cannot be broadcast. To summarize, we provided a thorough analysis of disentanglement processes, and we proved that a single quantum cannot be disentangled. Interestingly, we used a set of four states to prove this, but we conjecture that there are smaller sets that could be used to establish the same conclusion.
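Two of the steps above lend themselves to a quick numerical cross-check (a sketch of ours, not part of the letter; numpy is assumed): comparing the closed-form error probabilities ${\rm PE}_{\rm ent}$ and ${\rm PE}_{\rm disent}$ at $\theta = \phi = \pi/8$, and confirming with the Peres-Horodecki (PPT) criterion, which is necessary and sufficient for two qubits, that the state (\[resulting\]) is entangled whenever $\alpha \neq 0$:

```python
import numpy as np

def pe_ent(theta, phi):
    # minimal error probability for the two entangled pure-state inputs
    ol = np.cos(2*theta)**2 + np.sin(2*phi) * np.sin(2*theta)**2
    return 0.5 - 0.5 * np.sqrt(1 - ol**2)

def pe_disent(theta, phi):
    # minimal error probability after disentangling into tensor products
    c2p, c2t = np.cos(2*phi), np.cos(2*theta)
    return 0.5 - 0.5 * np.sin(2*theta) * c2p * np.sqrt(1 + c2p**2 * c2t**2)

theta = phi = np.pi / 8
assert pe_disent(theta, phi) < pe_ent(theta, phi)  # illegitimate: PE decreased

def partial_transpose(rho):
    # transpose on the second qubit of a two-qubit density matrix
    return rho.reshape(2, 2, 2, 2).transpose(0, 3, 2, 1).reshape(4, 4)

def is_entangled(rho):
    # PPT test: a negative partial-transpose eigenvalue signals entanglement
    return np.min(np.linalg.eigvalsh(partial_transpose(rho))) < -1e-12

def resulting_state(alpha):
    # the post-disentanglement state of Eq. (resulting)
    rho = np.zeros((4, 4), dtype=complex)
    rho[0, 0] = rho[3, 3] = 0.5
    rho[0, 3] = np.conj(alpha) / 2
    rho[3, 0] = alpha / 2
    return rho

assert is_entangled(resulting_state(0.8))       # alpha != 0: still entangled
assert not is_entangled(resulting_state(0.0))   # alpha == 0: separable mixture
```

The first assertion reproduces the inequality ${\rm PE}_{\rm disent} < {\rm PE}_{\rm ent}$ that rules out disentanglement into tensor products at these angles; the last two reproduce the requirement $\alpha = 0$ derived from the disentanglement of $|\psi_2\rangle$.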
The no-cloning of states of composite systems was investigated recently [@BDFMRSSW; @Mor98], and it seems that several interesting connections between these works and the idea of disentanglement can be further explored. For instance, one can probably find systems where the states can only be disentangled (or only be disentangled into product states) if the two subsystems are available together, but [*cannot*]{} be disentangled if the subsystems are available one at a time (with similarity to [@Mor98]), or [*cannot*]{} be disentangled if only bilocal superoperators can be used for the disentanglement process (with similarity to [@BDFMRSSW]). I would like to thank Charles Bennett, Oscar Boykin, and Danny Terno for very helpful remarks and discussions. [99]{} D. Terno, [*Non-linear operations in quantum information theory*]{}, submitted to Phys. Rev. A. Los-Alamos archive, Quant-Ph 9811036. A. Peres, [*Quantum Theory: Concepts and Methods*]{} (Kluwer, Dordrecht, 1993). C. H. Bennett, D. P. DiVincenzo, J. S. Smolin and W. K. Wootters, Phys. Rev. A [**54**]{}, 3824 (1996); V. Vedral and M. Plenio, Phys. Rev. A [**57**]{}, 1619 (1998). C. A. Fuchs. Cited in [@Terno98] for private communication. W. K. Wootters and W. H. Zurek, Nature [**299**]{}, 802 (1982). C. W. Helstrom, [*Quantum Detection and Estimation Theory*]{} (Academic Press, New York, 1976). H. Barnum, C. M. Caves, C. A. Fuchs, R. Jozsa, and B. Schumacher, Phys. Rev. Lett. [**76**]{}, 2818 (1996). The observation of this similarity is due to C. H. Bennett, private communication. C. H. Bennett, D. P. DiVincenzo, C. A. Fuchs, T. Mor, E. Rains, P. W. Shor, J. S. Smolin and W. K. Wootters, [*Quantum nonlocality without entanglement*]{}, to be published in Phys. Rev. A. Los-Alamos archive, Quant-Ph 9804053. T. Mor, Phys. Rev. Lett. [**80**]{}, 3137 (1998). [^1]: Electrical Engineering, UCLA, Los Angeles, Cal., USA
--- abstract: 'In a D-brane model of space-time foam, there are contributions to the dark energy that depend on the D-brane velocities and on the density of D-particle defects. The latter may also reduce the speeds of photons [*linearly*]{} with their energies, establishing a phenomenological connection with astrophysical probes of the universality of the velocity of light. Specifically, the cosmological dark energy density measured at the present epoch may be linked to the apparent retardation of energetic photons propagating from nearby AGNs. However, this nascent field of ‘D-foam phenomenology’ may be complicated by a dependence of the D-particle density on the cosmological epoch. A reduced density of D-particles at redshifts $z \sim 1$ - a ‘D-void’ - would increase the dark energy while suppressing the vacuum refractive index, and thereby might reconcile the AGN measurements with the relatively small retardation seen for the energetic photons propagating from GRB 090510, as measured by the Fermi satellite.' author: - John Ellis - 'Nick E. Mavromatos' - 'Dimitri V. Nanopoulos' title: 'D-Foam Phenomenology: Dark Energy, the Velocity of Light and a Possible D-Void' --- Introduction to D-Phenomenology =============================== The most promising framework for a quantum theory of gravity is string theory, particularly in its non-perturbative formulation known as M-theory. This contains solitonic configurations such as D-branes [@polchinski], including D-particle defects in space-time. One of the most challenging problems in quantum gravity is the description of the vacuum and its properties. At the classical level, the vacuum may be analyzed using the tools of critical string theory. However, we have argued [@emn1] that a consistent approach to quantum fluctuations in the vacuum, the so-called ‘space-time foam’, needs the tools of non-critical string theory. 
As an example, we have outlined an approach to this problem in which D-branes and D-particles play an essential role [@Dfoam2; @Dfoam]. Within this approach, we have identified two possible observable consequences, which may give birth to an emergent subject of ‘D-foam phenomenology’. One possible consequence is a linear energy-dependence of the velocity of light [@aemn; @nature; @Farakos] due to the interactions of photons with D-particle defects, which would also depend linearly on the space-time density of these defects [@emnnewuncert; @mavro_review2009]. Another possible consequence is a contribution to the vacuum energy density (dark energy) that depends on the velocities of the D-branes and, again, the density of D-particle defects [@emninfl]. Therefore, between them, measurements of the dark energy and of the velocities of energetic photons could in principle constrain the density of D-particle defects and the velocities of D-branes. The experimental value of the present dark energy density $\Lambda$ is a fraction $\Omega_\Lambda = 0.73 (3)$ of the critical density $1.5368 (11) \times 10^{-5} h^2$GeV/cm$^3$, where $h = 0.73 (3)$, and the matter density fraction $\Omega_M = 0.27 (3)$. The available cosmological data are consistent with the dark energy density being constant, but some non-zero redshift dependence of $\Lambda$ cannot be excluded, and would be an interesting observable in the context of D-foam phenomenology, as we discuss later. The observational status of a possible linear energy dependence of the velocity of light is less clear. As shown in Fig. \[fig:data\], observations of high-energy emissions from AGN Mkn 501 [@MAGIC2] and PKS 2155-304 [@hessnew] are compatible with photon velocities $$v \; = \; c \times \left( 1 - \frac{E}{M_{QG}} \right) , \label{linear}$$ where $\Delta t/E_\gamma = 0.43 (0.19) \times K(z)$ s/GeV, corresponding to $M_{QG} =(0.98^{+0.77}_{-0.30}) \times 10^{18}$ GeV [@emnnewuncert2]. 
This range is also compatible with Fermi satellite observations of GRB 090902B [@grb09092B], from which at least one high-energy photon arrived significantly later than those of low energies, and of GRB 080916c [@grbglast]. On the other hand, as also seen in Fig. \[fig:data\], Fermi observations of GRB 090510 [@grb090510] seem to allow only much smaller values of the retardation $\Delta t$, and hence only values of $M_{QG} > M_P = 1.22 \times 10^{19}$ GeV. However, these data probe different redshift ranges. In this first combined exploration of D-foam phenomenology, we start by reviewing the general connection between dark energy and a vacuum refractive index in the general framework of D-branes moving through a gas of D-particle defects in a 10-dimensional space. As we discuss, there are various contributions to the dark energy that depend in general on the density of defects and on the relative velocity of the D-branes. On the other hand, the magnitude of the vacuum refractive index (\[linear\]) is proportional to the density of D-particle defects. We then discuss the ranges of D-brane velocity and D-particle density that are compatible with the measurements of $\Lambda$ and delays in the arrivals of photons from AGN Mkn 501 and PKS 2155-304. As seen in Fig. \[fig:data\], these AGNs are both at relatively low redshift $z$, where the experimental measurement of $\Lambda$ has been made, so the same D-particle density is relevant to the two measurements. However, it is not yet clear whether this value of $\Lambda$, and hence the same D-particle density, also applied when $z \sim 1$. If the density of D-particles was suppressed when $z \sim 1$ - a [*D-void*]{} - this could explain [@mavro_review2009] the much weaker energy dependence of the velocity of light allowed by the Fermi observations of GRB 090510, which has a redshift of $0.903 (3)$, as also seen in Fig. \[fig:data\].
In this case, the value of $\Lambda$ should also have varied at that epoch - a [*clear experimental prediction of this interpretation of the data*]{}. On the other hand, only a very abrupt resurgence of the D-particle density could explain the retardation seen by Fermi in observations of GRB 090902B. Alternatively, this retardation could be due to source effects - which should in any case also be allowed for when analyzing the retardations seen in emissions from other sources. A D-Brane Model of Space-Time Foam and Cosmology ================================================ As a concrete framework for D-foam phenomenology, we use the model illustrated in the left panel of Fig. \[fig:recoil\] [@Dfoam2; @Dfoam]. In this model, our Universe, perhaps after appropriate compactification, is represented as a Dirichlet three-brane (D3-brane), propagating in a bulk space-time punctured by D-particle defects [^1]. As the D3-brane world moves through the bulk, the D-particles cross it. To an observer on the D3-brane the model looks like ‘space-time foam’ with defects ‘flashing’ on and off as the D-particles cross it: this is the structure we term ‘D-foam’. As shown in the left panel of Fig. \[fig:recoil\], matter particles are represented in this scenario by open strings whose ends are attached to the D3-brane. They can interact with the D-particles through splitting and capture of the strings by the D-particles, and subsequent re-emission of the open string state, as illustrated in the right panel of Fig. \[fig:recoil\]. This set-up for D-foam can be considered either in the context of type-IIA string theory [@emnnewuncert], in which the D-particles are represented by point-like D0-branes, or in the context of the phenomenologically more realistic type-IIB strings [@li], in which case the D-particles are modelled as D3-branes compactified around spatial three-cycles (in the simplest scenario), since the theory admits no D0-branes.
For the time being, we work in the type-IIA framework, returning later to the type-IIB version of D-foam phenomenology. ![*Left: schematic representation of a generic D-particle space-time foam model, in which matter particles are treated as open strings propagating on a D3-brane, and the higher-dimensional bulk space-time is punctured by D-particle defects. Right: details of the process whereby an open string state propagating on the D3-brane is captured by a D-particle defect, which then recoils. This process involves an intermediate composite state that persists for a period $\delta t \sim \sqrt{\alpha '} E$, where $E$ is the energy of the incident string state, which distorts the surrounding space time during the scattering, leading to an effective refractive index but *not* birefringence.*[]{data-label="fig:recoil"}](recoil2_sarbennickJE.eps "fig:"){width="7.5cm"} ![](restore.eps "fig:"){width="7.5cm"} Within this approach, a photon propagating on the D3-brane (represented as an open-string state) interacts with ‘flashing’ D-particles at a rate proportional to their local density $n_{D0}$.
During each such interaction, a simple string amplitude calculation or the application of the quantum-mechanical uncertainty principle leads to the following order-of-magnitude estimate for the time delay between absorption and re-emission of a photon of energy $E$: $$\delta t_{D0} \; = \; C \sqrt{\alpha'} E , \label{delayD0}$$ where $C$ is an unknown and model-dependent coefficient, and $\alpha'$ is the Regge slope of the string, which is related to the fundamental string length $\ell_*$ and mass $M_*$ by $\sqrt{\alpha'} = \ell_* = 1/M_*$ [^2]. This effect is independent of the photon polarization, and hence does [*not*]{} yield birefringence. When discussing D-foam phenomenology that involves constraining the delay (\[delayD0\]) accumulated over cosmological distances, one must incorporate the effects of the redshift $z$ and the Hubble expansion rate $H(z)$. Specifically, in a Robertson-Walker cosmology the delay due to any single scattering event is affected by: (i) a time dilation factor [@JP] $(1 + z)$ and (ii) the redshifting of the photon energy [@naturelater] that implies that the observed energy of a photon with initial energy $E$ is reduced to $E_{\rm obs} = E/(1 + z)$. Thus, the observed delay associated with (\[delayD0\]) is: $$\label{obsdelay} \delta t_{\rm obs} = (1 + z) \delta t_{D0} = (1 + z)^2 C \sqrt{\alpha '} E_{\rm obs} .$$ For a line density of D-particles $n(z)$ at redshift $z$, we have $n(z) d\ell = n(z) dt$ defects per co-moving length, where $dt$ denotes the infinitesimal Robertson-Walker time interval of a co-moving observer. Hence the total delay of an energetic photon in a co-moving time interval $dt$ is given by $n(z) (1 + z)^2 C \sqrt{\alpha '} E_{\rm obs} \, dt $. The time interval $dt$ is related to the Hubble rate $H(z)$ in the standard way: $dt = - dz/[(1 + z) H(z)]$. 
Hence the total observed delay of a photon of observed energy $E_{\rm obs}$ from a source at redshift $z$ till observation ($z=0$) is [@emnnewuncert; @naturelater; @mavro_review2009]: $$\label{totaldelay} \Delta t_{\rm obs} = \int_0^z dz C \frac{n(z)\, E_{\rm obs}}{M_s \,H_0} \,\frac{(1 + z)}{\sqrt{\Omega_M \, (1 + z)^3 + \Omega_\Lambda (z)}}~.$$ In writing this expression, we assume Robertson-Walker cosmology, with $H_0$ denoting the present Hubble expansion rate, where $\Omega_M = 0.27$ is the present fraction of the critical energy density in the form of non-relativistic matter, and $\Omega_\Lambda (z)$ is the dark energy density. We assume that the non-relativistic matter co-moves, with no creation or destruction, but leave open the redshift dependence of the dark energy density, which we now discuss. The D3-branes appear in stacks, and supersymmetry dictates the number of D3-branes in each stack, but does not restrict the number of D0-branes in the bulk. For definiteness, we restrict our attention for now to the type-IIA model, in which the bulk space is restricted to a finite range by two appropriate stacks of D8-branes, each stack being supplemented by an appropriate orientifold eight-plane with specific reflecting properties, so that the bulk space-time is effectively compactified to a finite region, as illustrated in Fig. \[fig:inhom\]. We then postulate that two of the D8-branes have been detached from their respective stacks, and are propagating in the bulk. As discussed above, the bulk region is punctured by D0-branes (D-particles), whose density may be inhomogeneous, as discussed below. When there are no relative motions of the D3-branes or D-particles, it was shown in [@Dfoam] that the ground-state energy vanishes, as decreed by the supersymmetries of the configuration. Thus, such static configurations constitute appropriate ground states of string/brane theory. 
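The line-of-sight factor in (\[totaldelay\]) is straightforward to evaluate numerically. The sketch below is our own illustration, not the paper's fit: the cosmological parameters are the quoted $\Omega_M = 0.27$, $\Omega_\Lambda = 0.73$, and the dimensionless prefactor standing for $C\, n\, E_{\rm obs}/M_s$ is a hypothetical placeholder.

```python
import math

def hubble_ratio(z, om=0.27, ol=0.73):
    """Dimensionless H(z)/H0 for a flat Lambda-CDM cosmology with constant dark energy."""
    return math.sqrt(om * (1.0 + z) ** 3 + ol)

def delay_factor(z_source, n_steps=2000):
    """Composite Simpson's rule for I(z) = int_0^z (1+z') / [H(z')/H0] dz'."""
    if n_steps % 2:
        n_steps += 1
    h = z_source / n_steps
    f = lambda z: (1.0 + z) / hubble_ratio(z)
    total = f(0.0) + f(z_source)
    total += 4.0 * sum(f((2 * k - 1) * h) for k in range(1, n_steps // 2 + 1))
    total += 2.0 * sum(f(2 * k * h) for k in range(1, n_steps // 2))
    return total * h / 3.0

# Illustrative inputs only (assumptions, not the paper's fitted values):
H0 = 2.5e-18        # present Hubble rate in s^-1, as quoted later in the text
prefactor = 1e-17   # stands for C * n * E_obs / M_s (dimensionless, hypothetical)
dt_obs = prefactor * delay_factor(0.9) / H0   # observed delay in seconds
```

For the GRB 090510 redshift $z \simeq 0.9$ the integral evaluates to roughly unity, so the observed delay is controlled almost entirely by the prefactor times the Hubble time; the placeholder value above yields a delay at the level of a few seconds.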
On the other hand, relative motions of the branes break target-space supersymmetry explicitly, contributing to the dark energy density. It is natural to assume that, during the current late era of the Universe, the D3-brane representing our Universe is moving slowly and the configuration is evolving adiabatically. One can calculate the vacuum energy induced on the brane world in such an adiabatic situation by considering its interaction with the D-particles as well as the other branes in the construction. This calculation was made in detail in [@Dfoam], where we refer the interested reader for further details. Here we mention only the results relevant for the present discussion. ![*Schematic representation of a D-particle space-time foam model with bulk density $n^\star (r) $ of D-particles that may be inhomogeneous. The 10-dimensional space-time is bounded by two stacks of D-branes, each accompanied by an orientifold.*[]{data-label="fig:inhom"}](dbraneworld.eps){width="7.5cm"} We concentrate first on D0-particle/D8-brane interactions in the type-IIA model of [@Dfoam]. During the late era of the Universe when the approximation of adiabatic motion is valid, we use a weak-string-coupling approximation $g_s \ll 1$. In such a case, the D-particle masses $\sim M_s/g_s$ are large, i.e., these masses could be of the Planck size: $M_s/g_s \sim M_P = 1.22 \cdot 10^{19}$ GeV or higher. In the adiabatic approximation for the relative motion, these interactions may be represented by a string stretched between the D0-particle and the D8-brane, as shown in Fig. \[fig:inhom\]. The world-sheet amplitude of such a string yields the appropriate potential energy between the D-particle and the D-brane, which in turn determines the relevant contribution to the vacuum energy of the brane. As is well known, parallel relative motion does not generate any potential, and the only non-trivial contributions to the brane vacuum energy come from motion transverse to the D-brane. 
Neglecting a velocity-independent term in the D0-particle/D8-brane potential that is cancelled for a D8-brane in the presence of orientifold $O_8$ planes [@polchinski] [^3], we find: $$\begin{aligned} \mathcal{V}^{short}_{D0-D8} & = & - \frac{\pi \alpha '}{12}\frac{v^2}{r^3}~~{\rm for} ~~ r \ll \sqrt{\alpha '}~, \label{short}\\ \mathcal{V}^{long}_{D0-D8} & = & + \frac{r\, v^2}{8\,\pi \,\alpha '}~~~{\rm for} ~~ r \gg \sqrt{\alpha '}~. \label{long}\end{aligned}$$ where $v \ll 1$ is the relative velocity between the D-particle and the brane world, which is assumed to be non-relativistic. We note that the sign of the effective potential changes between short distances (\[short\]) and long distances (\[long\]). We also note that there is a minimum distance given by: $$r_{\rm min} \simeq \sqrt{v \,\alpha '}~, \qquad v \ll 1~, \label{newmin}$$ which guarantees that (\[short\]) is less than $r/\alpha'$, rendering the effective low-energy field theory well-defined. Below this minimum distance, the D0-particle/D8-brane string amplitude diverges when expanded in powers of $(\alpha')^2 v^2 /r^4$. When they are separated from a D-brane by a distance smaller than $r_{\rm min}$, D-particles should be considered as lying on the D-brane world, and two D-branes separated by less than $r_{\rm min}$ should be considered as coincident. We now consider a configuration with a moving D8-brane located at distances $R_{i}(t)$ from the orientifold end-planes, where $ R_1(t) + R_2(t) = R_0$ the fixed extent of the ninth bulk dimension, and the 9-density of the D-particles in the bulk is denoted by $n^\star (r) $: see Fig. \[fig:inhom\]. 
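The scales in (\[short\])-(\[newmin\]) can be made concrete with a small numerical sketch (ours, in string units with $\alpha' = 1$; the velocity value is a hypothetical illustration):

```python
import math

ALPHA_PRIME = 1.0  # string units: alpha' = l_s^2 = 1 (an assumption for illustration)

def v_short(r, v):
    """Short-distance D0-D8 potential, Eq. (short): attractive for r << l_s."""
    return -(math.pi * ALPHA_PRIME / 12.0) * v ** 2 / r ** 3

def v_long(r, v):
    """Long-distance D0-D8 potential, Eq. (long): repulsive for r >> l_s."""
    return r * v ** 2 / (8.0 * math.pi * ALPHA_PRIME)

def r_min(v):
    """Minimum separation, Eq. (newmin), below which the expansion in
    (alpha')^2 v^2 / r^4 breaks down."""
    return math.sqrt(v * ALPHA_PRIME)

v = 1e-4          # non-relativistic relative velocity (hypothetical value)
r0 = r_min(v)     # = 0.01 in units of l_s for this v
# At r_min the magnitude of the short-range potential stays below r/alpha',
# so the effective low-energy field theory remains well defined:
assert abs(v_short(r0, v)) < r0 / ALPHA_PRIME
```

The sign flip between the two regimes is immediate: `v_short` is negative (attractive) and `v_long` positive (repulsive), consistent with the change of sign noted in the text.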
The total D8-vacuum-energy density $\rho^8$ - [*D-energy*]{} - due to the relative motions is [@Dfoam]: $$\mathcal{\rho}^{D8-D0}_{\rm total} = - \int_{r_{\rm min}}^{\ell_s} n^\star (r) \, \frac{\pi \alpha '}{12}\frac{v^2}{r^3}\, dr - \int_{-r_{\rm min}}^{-\ell_s} n^\star (r) \, \frac{\pi \alpha '}{12}\frac{v^2}{r^3}\, dr + \int_{-\ell_s} ^{-R_{1}(t)} \,n^\star (r) \frac{r\, v^2}{8\,\pi \,\alpha '}\, dr + \int_{\ell_s} ^{R_{2}(t)} \,n^\star (r) \frac{r\, v^2}{8\,\pi \,\alpha '}\, dr + \rho_0 \label{total}$$ where the origin of the $r$ coordinate is placed on the 8-brane world and $\rho_0$ combines the contributions to the vacuum energy density from inside the band $-r_{\rm min} \le r \le r_{\rm min}$, which include the brane tension. When the D8-brane is moving in a uniform bulk distribution of D-particles, we may set $n^\star(r) = n_0$, a constant, and the D-energy density $\mathcal{\rho}^{D8-D0}_{\rm total}$ on the D8-brane is also (approximately) constant for a long period of time: $$\mathcal{\rho}^{D8-D0}_{\rm total} = -n_0 \frac{\pi }{12} v(1 - v) + n_0 v^2 \frac{1}{16\pi \alpha '} (R_1(t)^2 + R_2(t)^2 - 2 \alpha ') + \rho_0~. \label{total2}$$ Because of the adiabatic motion of the D-brane, the time dependence of $R_i(t)$ is weak, so that there is only a weak time dependence of the D-brane vacuum energy density: it is positive if $\rho_0 > 0$, which can be arranged by considering branes with positive tension. However, one can also consider the possibility that the D-particle density $n^\star (r)$ is inhomogeneous, perhaps because of some prior catastrophic cosmic collision, or some subsequent disturbance. If there is a region depleted by D-particles - a [*D-void*]{} - the relative importance of the terms in (\[total2\]) may be changed. In such a case, the first term on the right-hand side of (\[total2\]) may become significantly smaller than the term proportional to $R_i^2(t)/\alpha'$. 
As an illustration, consider for simplicity and concreteness a situation in which there are different densities of D-particles close to the D8-brane ($n_{\rm local}$) and at long distances to the left and right ($n_{\rm left}, n_{\rm right}$). In this case, one obtains from (\[total\]): $$\mathcal{\rho}^{D8-D0}_{\rm total} \simeq -n_{\rm local} \frac{\pi }{12} v(1 - v) + n_{\rm left} v^2 \frac{1}{16 \pi \alpha '} \left(R_1(t)^2 - \alpha '\right)+ n_{\rm right} v^2 \frac{1}{16 \pi \alpha '} \left( R_2(t)^2 - \alpha '\right) + \rho_0~. \label{total3}$$ for the induced energy density on the D8-brane. The first term can be significantly smaller in magnitude than the corresponding term in the uniform case, if the local density of D-particles is suppressed. Overall, the D-particle-induced energy density on the D-brane world [*increases*]{} as the brane enters a region where the D-particle density is [*depleted*]{}. This could cause the onset of an accelerating phase in the expansion of the Universe. It is intriguing that the redshift of GRB 090510 is in the ballpark of the redshift range where the expansion of the Universe apparently made a transition from deceleration to acceleration [@decel]. The result (\[total3\]) was derived in an oversimplified case, where the possible effects of other branes and orientifolds were not taken into account. However, as we now show, the ideas emerging from this simple example persist in more realistic structures. As argued in [@Dfoam], the presence of orientifolds, whose reflecting properties cause a D-brane on one side of the orientifold plane to interact non-trivially with its image on the other side, and the appropriate stacks of D8-branes are such that the *long-range contributions* of the D-particles to the D8-brane energy density *vanish*. What remain are the short-range D0-D8 brane contributions and the contributions from the other D8-branes and O8 orientifold planes. 
The latter are proportional to the fourth power of the relative velocity of the moving D8-brane world: $$\mathcal{V}_{{\rm long},\, D8-D8,\, D8-O8} = V_8\frac{\left(aR_0 - b R_2(t)\right)v^4}{2^{13}\pi^9 {\alpha '}^5}~, \label{vlongo8d8}$$ where $V_8$ is the eight-brane volume, $R_0$ is the size of the orientifold-compactified ninth dimension in the arrangement shown in Fig. \[fig:inhom\], and the numerical coefficients in (\[vlongo8d8\]) result from the relevant factors in the appropriate amplitudes of strings in a nine-dimensional space-time. The constants $a > 0$ and $b >0$ depend on the number of moving D8-branes in the configuration shown in Fig. \[fig:inhom\]. If there are just two moving D8-branes that have collided in the past, as in the model we consider here, then $a=30$ and $b = 64$ [@Dfoam]. The potential (\[vlongo8d8\]) is positive during late eras of the Universe as long as $R_2(t) < 15R_0/32$. One must add to (\[vlongo8d8\]) the negative contributions due to the D-particles near the D8-brane world, so the total energy density on the 8-brane world becomes: $$\mathcal{\rho}_{8} \equiv \frac{\left(aR_0 - b R_2(t)\right)v^4}{2^{13}\pi^9 {\alpha '}^5} -n_{\rm short} \frac{\pi }{12} v(1 - v)~. \label{vtotal2}$$ As above, $n_{\rm short}$ denotes the nine-dimensional bulk density of D-particles near the (moving) brane world. As in the previous oversimplified example, the transition of the D8-brane world from a region densely populated with D-particles to a depleted D-void causes these negative contributions to the total energy density to diminish, leading potentially to an acceleration of the expansion of the Universe. The latter lasts as long as the energy density remains positive and overcomes matter contributions. In the particular example shown in Fig. 
\[fig:inhom\], $R_2(t)$ diminishes as time elapses and the D8-brane moves towards the right-hand stack of D-branes, so the net long-distance contribution to the energy density (\[vtotal2\]) (the first term) increases, tending further to increase the acceleration. It is the linear density of the D-particle defects $n(z)$ encountered by a propagating photon that determines the amount of refraction. The density of D-particles crossing the D-brane world cannot be determined from first principles, and so may be regarded as a parameter in phenomenological models. The flux of D-particles is proportional also to the velocity $v$ of the D8-brane in the bulk, if the relative motion of the population of D-particles is ignored. Towards D-Foam Phenomenology ============================ In order to make some phenomenological headway, we adopt some simplifying assumptions. For example, we may assume that between a redshift $z < 1$ and today ($z=0$), the energy density (\[vtotal2\]) has remained approximately constant, as suggested by the available cosmological data. This assumption corresponds to [^4]: $$\label{zero} 0 \simeq \frac{d\mathcal{\rho}_8}{d t } = -H(z) (1 + z) \frac{d\mathcal{\rho}_8}{d z} = \frac{v^5}{2^{7}\pi^9 {\alpha '}^5} -\frac{d n_{\rm short}}{d t} \frac{\pi }{12} v(1 - v)~.$$ where $t$ denotes the Robertson-Walker time on the brane world, for which $d/dt = -H(z) (1 + z) d/d z$, where $z$ is the redshift and $H(z)$ the Hubble parameter of the Universe, and we take into account the fact that $dR_2(t)/dt = -v$ with $v > 0$, due to the motion of the brane world towards the right-hand stack of branes in the model illustrated in Fig. \[fig:inhom\]. Using $a(t) = a_0/(1 + z)$ for the cosmic scale factor, equation (\[zero\]) yields: $$\label{nchange} \frac{v^5}{c^4\,2^{7}\pi^9 {\alpha '}^5} = -H(z) (1 + z) \frac{d n_{\rm short}}{d z} \frac{\pi }{12} \frac{v}{c}(1 - \frac{v}{c})~,$$ where we have incorporated explicitly the speed of light *in vacuo*, $c$. 
Equation (\[nchange\]) may then be integrated to yield: $$\label{nofz} n_{\rm short} (z) = n_{\rm short}(0) - \frac{12}{2^{7}\pi^{10}{\alpha '}^5} \frac{1}{\ell_s^9}\,\frac{c}{\ell_s}\, \frac{(v/c)^4}{(1 - \frac{v}{c})}\int_0^z \frac{d z'}{H(z') \, (1 + z')} \; \; {\rm where} \; \; \ell_s \equiv \sqrt{\alpha '}~.$$ In order to match the photon delay data of Fig. \[fig:data\] with the D-foam model, we need a reduction of the *linear* density of defects encountered by the photon by about two orders of magnitude in the region $0.2 < z < 1$, whereas for $z < 0.2 $ there must be, on average, one D-particle defect per unit string length $\ell_s$. We therefore assume that our D-brane encountered a D-void when $0.2 < z < 1$, in which there was a significant reduction in the bulk nine-dimensional density. Ignoring the details of compactification, and assuming that a standard $\Lambda$CDM cosmological model is a good underlying description of four-dimensional physics for $z < 1$, which is compatible with the above-assumed constancy of $\mathcal{\rho}_8$, we may write $H(z) = H_0 \sqrt{\Omega_\Lambda + \Omega_m (1 + z)^3}$, where $H_0 \sim 2.5 \times 10^{-18} ~s^{-1}$ is the present-epoch Hubble rate. Using (\[nofz\]), and assuming $n_{\rm short}(0) = n^\star/\ell_s^9$, we obtain: $$\label{nofz5} n_{\rm short} (z) = \frac{n^\star}{\ell_s^9} \left( 1 - \frac{12}{2^{7}\pi^{10}}\,\frac{c H_0^{-1}}{n^\star \,\ell_s}\, \frac{(v/c)^4}{(1 - \frac{v}{c})}\int_0^z \frac{d z'}{(1 + z')\,\sqrt{\Omega_\Lambda + \Omega_m (1 + z')^3}}\right)~.$$ It is clear that, in order for $n_{\rm short}(z)$ to be reduced significantly, e.g., by two orders of magnitude, at the epoch $z \simeq 0.9$ of the GRB 090510 [@grb090510], while keeping $n^\star ={\cal O}(1)$, one must consider the magnitude of $v$ as well as the string scale $\ell_s$. 
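The suppression factor in (\[nofz5\]) is easy to tabulate numerically. The sketch below is our own and makes no attempt to reproduce the quoted fit; the velocity and the ratio $cH_0^{-1}/(n^\star \ell_s)$ are supplied as illustrative inputs. Note that the linearized expression can drop below zero for large enough $v$, which simply signals complete depletion.

```python
import math

def hubble_ratio(z, om=0.27, ol=0.73):
    """Dimensionless H(z)/H0 for flat Lambda-CDM."""
    return math.sqrt(om * (1.0 + z) ** 3 + ol)

def void_integral(z_source, n_steps=2000):
    """Simpson's rule for J(z) = int_0^z dz' / [(1+z') H(z')/H0]."""
    if n_steps % 2:
        n_steps += 1
    h = z_source / n_steps
    f = lambda z: 1.0 / ((1.0 + z) * hubble_ratio(z))
    total = f(0.0) + f(z_source)
    total += 4.0 * sum(f((2 * k - 1) * h) for k in range(1, n_steps // 2 + 1))
    total += 2.0 * sum(f(2 * k * h) for k in range(1, n_steps // 2))
    return total * h / 3.0

def density_ratio(z, v_over_c, hubble_time_over_ls, n_star=1.0):
    """n_short(z) / n_short(0) from Eq. (nofz5); inputs are illustrative."""
    pref = 12.0 / (2 ** 7 * math.pi ** 10)
    depletion = (pref * hubble_time_over_ls / n_star
                 * v_over_c ** 4 / (1.0 - v_over_c) * void_integral(z))
    return 1.0 - depletion
```

With $\ell_s/c = 10^{-27}\,$s and $H_0^{-1} \simeq 4\times 10^{17}\,$s, one has $cH_0^{-1}/\ell_s \simeq 4\times 10^{44}$; the depletion at $z \simeq 0.9$ then depends very sensitively on the fourth power of $v/c$.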
For instance, for string energy scales of the order of TeV, i.e., string time scales $\ell_s/c = 10^{-27}~s$, one must consider a brane velocity $v \le \sqrt{10} \times 10^{-11} \, c$, which is not implausible for a slowly moving D-brane at a late era of the Universe [^5]. This is compatible with the constraint on $v$ obtained from inflation in [@emninfl], namely $v^2 \le 1.48 \times 10^{-5}\,g_s^{-1}$, where $g_s < 1$ for the weak string coupling we assume here. On the other hand, if we assume a 9-volume $V_9 = (K \ell_s)^9$: $K \sim 10^3$ and $\ell_s \sim 10^{-17}$/GeV, then the brane velocity $v \le 10^{-4} \, c$. In our model, due to the friction induced on the D-brane by the bulk D-particles, one would expect that the late-epoch brane velocity should be much smaller than during the inflationary era immediately following a D-brane collision. Towards D3-Foam Phenomenology ============================= The above discussion was in the context of type-IIA string theories, but may easily be extended to compactified type-IIB theories of D-particle foam [@li], which may be of interest for low-energy Standard Model phenomenology. In this case, as suggested in [@li], one may construct ‘effective’ D-particle defects by compactifying D3-branes on three-cycles. In the foam model considered in [@li], the rôle of the brane Universe was played by D7-branes compactified appropriately on four-cycles. The foam was provided by compactified D3-brane ‘D-particles’, in such a way that there is on average a D-particle in each three-dimensional volume, $V_{A3}$, in the large Minkowski space dimensions of the D7-brane. We recall that, although in the conformal field theory description a D-brane is an object with a well-defined position, in the full string field theory appropriate for incorporating quantum fluctuations in its position, a D-brane is regarded as an object with thickness of the order of the string scale. 
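As a quick arithmetic cross-check (ours; the value of $g_s$ is an assumed illustration), the quoted late-time velocity is indeed far inside the inflationary bound $v^2 \le 1.48 \times 10^{-5}\, g_s^{-1}$ of [@emninfl]:

```python
import math

v_late = math.sqrt(10.0) * 1e-11    # v <= sqrt(10) x 10^-11 (units of c), from the text
g_s = 0.5                           # assumed weak string coupling, g_s < 1
inflation_bound_v2 = 1.48e-5 / g_s  # bound on v^2 from the inflationary analysis

# The late-era v^2 ~ 10^-21 sits many orders of magnitude below the bound,
# consistent with the expectation that bulk friction slows the brane at late times.
assert v_late ** 2 < inflation_bound_v2
```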
In particular, as discussed in [@li], an analysis of the tachyonic lump solution in the string field theory, which may be considered as a D-brane [@Moeller:2000jy], suggests that the widths of the D-brane in the transverse dimensions are about $1.55\ell_s$. In the setting of [@li], there are open strings between the D7-branes and D3-branes that satisfy Neumann (N) and Dirichlet (D) boundary conditions, respectively, which we call ND particles. Their gauge couplings with the gauge fields on the D7-branes are $$\frac{1}{g_{37}^2} = \frac{V}{g_7^2}~,~\, \label{coupl}$$ where $g_7$ is the gauge coupling on the D7-branes, and $V$ denotes the volume of the extra four spatial dimensions of the D7 branes transverse to the D3-branes [@giveon]. If there were no D-particle foam, the Minkowski space dimensions would be non-compact, in which case $V$ in (\[coupl\]) would approach infinity and $g_{37} \to 0$. In such a case, Standard Model particles would have no interactions with the ND-particles on the compactified D3-brane (‘D-particle’). Thus, our Ansatz for the gauge couplings between the gauge fields on the D7-branes and the ND-particles is [@li] $$\frac{1}{g_{37}^2} = \frac{V_{A3} R'}{(1.55\ell_s)^4} \frac{\ell_s^4}{g_7^2} = \frac{V_{A3} R'}{(1.55)^4} \frac{1}{g_{7}^2}~.~\, \label{coupl-N}$$ where $R'$ is the radius of the fourth space dimension transverse to the D3-branes. We remind the reader that the D7-brane gauge couplings are such that $g_7^2 \propto g_s$, in accordance with general properties of D-branes and their equivalence at low energies to gauge theories [@giveon]. The amplitudes describing the splitting and capture of a photon state by a D-particle may be calculated in this picture using the standard techniques outlined in [@benakli], with the result [@li] that they are effectively proportional to the coupling $g_{37}^2$ in (\[coupl-N\]). 
The amplitudes depend on kinematical invariants expressible in terms of the Mandelstam variables: $s=-(k_1 + k_2)^2$, $t=-(k_2 + k_3)^2$ and $u=-(k_1 + k_3)^2$, which satisfy $s + t + u =0$ for massless particles. The ordered four-point amplitude $\mathcal{A}(1,2,3,4)$, describing *partly* the splitting and capture of a photon state by the D-particle, is given by $$\begin{aligned} \mathcal{A} (1_{j_1 I_1},2_{j_2 I_2},3_{j_3 I_3},4_{j_4 I_4}) = && - { g_{37}^2} l_s^2 \int_0^1 dx \, \, x^{-1 -s\, l_s^2}\, \, \, (1-x)^{-1 -t\, l_s^2} \, \, \, \frac {1}{ [F (x)]^2 } \, \times \nonumber \\ && \left[ {\bar u}^{(1)} \gamma_{\mu} u^{(2)} {\bar u}^{(4)} \gamma^{\mu} u^{(3)} (1-x) + {\bar u}^{(1)} \gamma_{\mu} u^{(4)} {\bar u}^{(2)} \gamma^{\mu} u^{(3)} x \right ] \, \nonumber \\ && \times \{ \eta \delta_{I_1,{\bar I_2}} \delta_{I_3,{\bar I_4}} \delta_{{\bar j_1}, j_4} \delta_{j_2,{\bar j_3}} \sum_{m\in {\bf Z}} \, \, e^{ - {\pi} {\tau}\, m ^2 \, \ell_s^2 /R^{\prime 2} } \nonumber \\ && + \delta_{j_1,{\bar j_2}} \delta_{j_3,{\bar j_4}} \delta_{{\bar I_1}, I_4} \delta_{I_2,{\bar I_3}} \sum_{n\in {\bf Z}} e^{- {\pi \tau} n^2 \, R^{\prime 2} / \ell_s^{2} } \}~,~\, \label{4ampl}\end{aligned}$$ where $F(x)\equiv F(1/2; 1/2; 1; x)$ is the hypergeometric function, $\tau (x) = F(1-x)/F(x)$, $j_i$ and $I_i$ with $i=1, ~2, ~3, ~4$ are indices on the D7-branes and D3-branes, respectively, and $\eta=(1.55\ell_s)^4/(V_{A3} R')$ in the notation of [@benakli], $u$ is a fermion polarization spinor, and the dependence on the appropriate Chan-Paton factors has been made explicit [^6]. In the above we considered for concreteness the case where the photon state splits into two fermion excitations, represented by open strings. The results are qualitatively identical in the case of boson excitations. In fact, the spin of the incident open strings does not affect the delays or the relevant discussion on the qualitative features of the effective number of defects interacting with the photon. 
Technically, it should be noted that the novelty of our results above, as compared with those of [@benakli], lies in the specific compactification procedure adopted, and the existence of a uniformly-distributed population of D-particles (foam), leading to (\[coupl-N\]). It is this feature that leads to the replacement of the simple string coupling $g_s$ in the Veneziano amplitude by the effective D3-D7 coupling $g_{37}^2$. In this approach, time delays for photons arise [@sussk1] from the amplitude $\mathcal{A}(1,2,3,4)$ by considering backward scattering $u=0$ and looking at the corresponding pole structure. The details of the discussion are presented in [@li] and are not repeated here. The photon delays are proportional to the incident energy, and are suppressed by a single power of the string scale, as discussed above. Due to the $\eta$-dependent terms in (\[4ampl\]), charged-particle excitations in the low-energy limit, such as electrons, have interactions with the D-particle foam that are non-zero in type-IIB models, in contrast to the type-IIA models discussed previously. However, these interactions are suppressed by further factors of $\eta$ compared to photons. The stringent constraints from the Crab-Nebula synchrotron-radiation observations [@crab] are satisfied for $\eta ={\cal O}(10^{-6})$, which is naturally obtained in such models [@li] [^7]. The cosmological scenario of a D-brane moving in a bulk space punctured by D-particles can be extended to this type-IIB construction. If there was a depletion of D-particle defects in a certain range of redshifts in the past, e.g., when $z \sim 0.9$, the volume $V_{A3}$ in (\[coupl-N\]) would have been larger than the corresponding volume at $z \ll 1$, and the corresponding coupling reduced, and hence also the corresponding cross section $\sigma$ describing the probability of interaction of a photon with the D-particle in the foam. 
The *effective* linear density of defects $n(z)$ appearing in (\[totaldelay\]) can then be related to the number density $\tilde{n}^{(3)}$ per unit three-volume on the compactified D7-brane via $ n(z) \sim \sigma \tilde{n}^{(3)}$. The quantity $\tilde{n}^{(3)}$ is, in turn, directly related to the bulk density of D-particles in this model, where there is no capture of defects by the moving D-brane, as a result of the repulsive forces between them. The cross section $\sigma$ is proportional to the square of the string amplitude (\[4ampl\]) describing the capture of an open photon string state by the D-particle in the model, and hence is proportional to $g_{37}^4$, where $g_{37}$ is given in (\[coupl-N\]). If recoil of the D-particle is included, the amplitude is proportional to the effective string coupling: $g_s^{\rm eff} \propto g_{37}^2$, where $g_s^{\rm eff} = g_s (1 - |\vec u|^2)^{1/2}$, with $\vec u \ll 1$ the recoil velocity of the D-particle [@emnnewuncert; @li]. One may normalize the string couplings in such a way that $n(z=0)={\cal O}(1)$, in which case $n(z\simeq 0.9) \ll 1$ by about two orders of magnitude. However, in general the induced gauge string couplings depend crucially on the details of the compactification as well as the Standard Model phenomenology on the D-brane world. Hence the precise magnitude of the time delays is model-dependent, and can differ between models. We defer a detailed discussion of such issues to a future work, limiting ourselves here to noting that photon delay experiments place crucial constraints on such diverse matters as the dark sector of a string Universe and its compactification details. Conclusions =========== The above examples are simplified cases capable of an analytic treatment, in which the bulk density of D-particles may be determined phenomenologically by fitting the photon delay data of Fig. \[fig:data\], and those to be obtained by future measurements. 
In a more complete theory, the late-era bulk distribution of defects would be determined by the detailed dynamics and the subsequent evolution of disturbances in the initial population of D-particles after the brane collision. Therefore, one needs detailed numerical simulations of the gas of D-particles in such colliding D-brane models, in order to determine the relative time scales involved in the various phases of the D-brane Universe in such foamy situations. These are complicated by the supersymmetric constructions and the orientifold planes involved, but may be motivated by the important rôles that the D-particles can play in providing dark energy, as well as making a link to a seemingly unrelated phenomenon, namely a possible refractive index for photons propagating [*in vacuo*]{}. Already in this first discussion of D-foam phenomenology, we have established that the available data are not incompatible with a refractive index that varies [*linearly*]{} with the photon energy, with a coefficient that depends on the density and coupling of the D-particle defects. These may depend on the cosmological epoch, and their density may also influence the magnitude of the cosmological dark energy density. We defer to later work a more detailed fit to the photon delays and dark energy observed. [*Une affaire à suivre...*]{} Acknowledgements {#acknowledgements .unnumbered} ================ The work of J.E. and N.E.M. is partially supported by the European Union through the Marie Curie Research and Training Network *UniverseNet* (MRTN-2006-035863), and that of D.V.N. by DOE grant DE-FG02-95ER40917. [99]{} See for instance: J. Polchinski, *String Theory*, Vol. 2 (Cambridge University Press, 1998). J. R. Ellis, N. E. Mavromatos and D. V. Nanopoulos, Phys. Lett. B **293**, 37 (1992); *A microscopic Liouville arrow of time*, Invited review for the special Issue of *J. Chaos Solitons Fractals*, Vol. 10, p. 345-363 (eds. C. Castro and M.S. 
El Naschie, Elsevier Science, Pergamon 1999) \[arXiv:hep-th/9805120\]. J. R. Ellis, N. E. Mavromatos and D. V. Nanopoulos, Gen. Rel. Grav.  [**32**]{}, 127 (2000) \[arXiv:gr-qc/9904068\]; Phys. Rev.  D [**61**]{}, 027503 (2000) \[arXiv:gr-qc/9906029\]; Phys. Rev.  D [**62**]{}, 084019 (2000) \[arXiv:gr-qc/0006004\]. J. R. Ellis, N. E. Mavromatos and M. Westmuckett, Phys. Rev. D **70**, 044036 (2004) \[arXiv:gr-qc/0405066\]; *ibid.* [**71**]{}, 106006 (2005). G. Amelino-Camelia, J. R. Ellis, N. E. Mavromatos and D. V. Nanopoulos, Int. J. Mod. Phys.  A [**12**]{}, 607 (1997); this model of refractive index is based on the non-critical string approach of [@emn1]. G. Amelino-Camelia, J. R. Ellis, N. E. Mavromatos, D. V. Nanopoulos and S. Sarkar, Nature [**393**]{}, 763 (1998). J. R. Ellis, K. Farakos, N. E. Mavromatos, V. A. Mitsou and D. V. Nanopoulos, Astrophys. J.  [**535**]{}, 139 (2000). J. R. Ellis, N. E. Mavromatos and D. V. Nanopoulos, Phys. Lett.  B [**665**]{}, 412 (2008) \[arXiv:0804.3566 \[hep-th\]\]; N. E. Mavromatos, arXiv:0909.2319 \[hep-th\]. J. R. Ellis, N. E. Mavromatos, D. V. Nanopoulos and A. Sakharov, New J. Phys.  [**6**]{}, 171 (2004) \[arXiv:gr-qc/0407089\]. J. Albert [*et al.*]{} \[MAGIC Collaboration\] and J. R. Ellis, N. E. Mavromatos, D. V. Nanopoulos, A. S. Sakharov and E. K. G. Sarkisyan, Phys. Lett.  B [**668**]{}, 253 (2008). F. Aharonian [*et al.*]{} \[HESS Collaboration\], Phys. Rev. Lett.  [**101**]{}, 170402 (2008). J. Ellis, N. E. Mavromatos and D. V. Nanopoulos, Phys. Lett.  B [**674**]{}, 83 (2009) \[arXiv:0901.4052 \[astro-ph.HE\]\]. T. F. collaboration, T. F. Collaborations and T. S. Team, arXiv:0909.2470 \[astro-ph.HE\]. A. A. Abdo [*et al.*]{} \[Fermi LAT and Fermi GBM Collaborations\], Science [**323**]{} (2009) 1688. Fermi GBM/LAT Collaborations, arXiv:0908.1832 \[astro-ph.HE\]. T. Li, N. E. Mavromatos, D. V. Nanopoulos and D. Xie, Phys. Lett.  B [**679**]{}, 407 (2009) \[arXiv:0903.1303 \[hep-th\]\]. U. Jacob and T. 
Piran, JCAP [**0801**]{}, 031 (2008); J. R. Ellis, N. E. Mavromatos, D. V. Nanopoulos, A. S. Sakharov and E. K. G. Sarkisyan, Astropart. Phys.  [**25**]{}, 402 (2006) \[Astropart. Phys.  [**29**]{}, 158 (2008)\]; See, for instance: A. Melchiorri, L. Pagano and S. Pandolfi, Phys. Rev.  D [**76**]{}, 041301 (2007) \[arXiv:0706.1314 \[astro-ph\]\]; R. A. Daly, S. G. Djorgovski, K. A. Freeman, M. P. Mory, C. P. O’Dea, P. Kharb and S. Baum, arXiv:0710.5345 \[astro-ph\]. N. Moeller, A. Sen and B. Zwiebach, JHEP [**0008**]{}, 039 (2000), and references therein. A. Giveon and D. Kutasov, Rev. Mod. Phys.  [**71**]{}, 983 (1999), and references therein. I. Antoniadis, K. Benakli and A. Laugier, JHEP [**0105**]{}, 044 (2001). N. Seiberg, L. Susskind and N. Toumbas, JHEP [**0006**]{}, 021 (2000) \[arXiv:hep-th/0005040\]; T. Jacobson, S. Liberati and D. Mattingly, Nature [**424**]{}, 1019 (2003); L. Maccione, S. Liberati, A. Celotti and J. G. Kirk, JCAP [**0710**]{}, 013 (2007). [^1]: Since an isolated D-particle cannot exist [@polchinski], because of gauge flux conservation, the presence of a D-brane is essential. [^2]: An estimate similar to (\[delayD0\]) is also obtained by considering the off-diagonal entry in the effective metric experienced by the energetic photon due to the recoil of the struck D-particle. [^3]: This cancellation is crucial for obtaining an appropriate supersymmetric string ground state with zero ground-state energy. [^4]: We recall that, for the case of two D8 branes moving in the bulk in the model of [@Dfoam], we have $b=64 = 2^6$. [^5]: Much smaller velocities are required for small string scales that are comparable to the four-dimensional Planck length. [^6]: This is only part of the process. 
The initial splitting of a photon into two open string states and their subsequent re-joining to form the re-emitted photon both depend on the couplings $g_{37}^2$, so there are extra factors proportional to $\eta$ in a complete treatment, which we do not discuss here. [^7]: As already mentioned, the ordered amplitude (\[4ampl\]) describes only part of the process, and there are extra factors of $\eta$ coming from the initial splitting of the photon into two open string states. When these are taken into account, the constraints from Crab Nebula [@crab] can be satisfied for much larger values of $\eta$. We postpone a more detailed discussion of D3-foam phenomenology for future work.
--- abstract: 'Let $2\leq m \leqs n$ and $q \in (1,\infty)$, we denote by $W^mL^{\frac nm,q}(\mathbb H^n)$ the Lorentz–Sobolev space of order $m$ in the hyperbolic space $\mathbb H^n$. In this paper, we establish the following Adams inequality in the Lorentz–Sobolev space $W^m L^{\frac nm,q}(\mathbb H^n)$ $$\sup_{u\in W^mL^{\frac nm,q}(\mathbb H^n),\, \|\nabla_g^m u\|_{\frac nm,q}\leq 1} \int_{\mathbb H^n} \Phi_{\frac nm,q}\big(\beta_{n,m}^{\frac q{q-1}} |u|^{\frac q{q-1}}\big) dV_g \leqs \infty$$ for $q \in (1,\infty)$ if $m$ is even, and $q \in (1,n/m)$ if $m$ is odd, where $\beta_{n,m}^{q/(q-1)}$ is the sharp exponent in the Adams inequality under Lorentz–Sobolev norm in the Euclidean space. To our knowledge, much less is known about the Adams inequality under the Lorentz–Sobolev norm in the hyperbolic spaces. We also prove an improved Adams inequality under the Lorentz–Sobolev norm provided that $q\geq 2n/(n-1)$ if $m$ is even and $2n/(n-1) \leq q \leq \frac nm$ if $m$ is odd, $$\sup_{u\in W^mL^{\frac nm,q}(\mathbb H^n),\, \|\na_g^m u\|_{\frac nm,q}^q -\lam \|u\|_{\frac nm,q}^q \leq 1} \int_{\mathbb H^n} \Phi_{\frac nm,q}\big(\beta_{n,m}^{\frac q{q-1}} |u|^{\frac q{q-1}}\big) dV_g \leqs \infty$$ for any $0\leqs \lambda \leqs C(n,m,n/m)^q$ where $C(n,m,n/m)^q$ is the sharp constant in the Lorentz–Poincaré inequality. 
Finally, we establish a Hardy–Adams inequality in the unit ball when $m\geq 3$, $n\geq 2m+1$ and $q \geq 2n/(n-1)$ if $m$ is even and $2n/(n-1) \leq q \leq n/m$ if $m$ is odd $$\sup_{u\in W^mL^{\frac nm,q}(\mathbb H^n),\, \|\na_g^m u\|_{\frac nm,q}^q -C(n,m,\frac nm)^q \|u\|_{\frac nm,q}^q \leq 1} \int_{\mathbb B^n} \exp\big(\beta_{n,m}^{\frac q{q-1}} |u|^{\frac q{q-1}}\big) dx \leqs \infty.$$' author: - Van Hoang Nguyen title: 'The sharp Adams type inequalities in the hyperbolic spaces under the Lorentz-Sobolev norms' --- [^1] [^2] [^3] Introduction ============ It is well-known that the Sobolev embedding theorems play important roles in analysis, geometry, partial differential equations, etc. Let $m\geq 1$; we traditionally use the notation $$\na^m = \begin{cases} \Delta^{\frac m2} &\mbox{if $m$ is even,}\\ \na \Delta^{\frac{m-1}2} &\mbox{if $m$ is odd} \end{cases}$$ to denote the $m-$th derivatives. For a bounded domain $\Om\subset \R^n, n\geq 2$ and $1\leq p \leqs \infty$, we denote by $W^{m,p}_0(\Om)$ the usual Sobolev space, which is the completion of $C_0^\infty(\Om)$ under the Dirichlet norm $\|\na^m u\|_{L^p(\Om)} = \Big(\int_\Om |\na^m u|^p dx \Big)^{\frac1p}$. The Sobolev inequality asserts that $W^{m,p}_0(\Om) \hookrightarrow L^q(\Om)$ for any $q \leq \frac{np}{n-mp}$ provided $mp \leqs n$. However, in the limiting case $mp = n$ the embedding $W^{m,\frac nm}_0(\Om) \hookrightarrow L^\infty(\Om)$ fails. In this situation, the Moser–Trudinger inequality and the Adams inequality are perfect replacements. The Moser–Trudinger inequality was proved independently by Yudovic [@Yudovic1961], Pohozaev [@Pohozaev1965] and Trudinger [@Trudinger67].
This inequality was then sharpened by Moser [@Moser70] in the following form $$\label{eq:Moserineq} \sup_{u\in W^{1,n}_0(\Om), \|\nabla u\|_{L^n(\Om)} \leq 1} \int_\Om e^{\alpha |u|^{\frac n{n-1}}} dx \leqs \infty$$ for any $\al \leq \al_{n}: = n \om_{n-1}^{\frac 1{n-1}}$ where $\om_{n-1}$ denotes the surface area of the unit sphere in $\R^n$. Furthermore, the inequality is sharp in the sense that the supremum in will be infinite if $\al \geqs \al_n$. The inequality was generalized to higher order Sobolev spaces $W^{m,\frac nm}_0(\Om)$ by Adams [@Adams] in the following form $$\label{eq:AMT} \sup_{u \in W^{m,\frac nm}_0(\Om), \, \int_\Om |\na^m u|^{\frac nm} dx \leq 1} \int_\Om e^{\al |u|^{\frac n{n-m}}} dx \leqs \infty,$$ for any $$\al \leq \al_{n,m}: = \begin{cases} \frac 1{\si_n}\Big(\frac{\pi^{n/2} 2^m \Gamma(\frac m2)}{\Gamma(\frac{n-m}2)}\Big)^{\frac n{n-m}} &\mbox{if $m$ is even},\\ \frac 1{\si_n}\Big(\frac{\pi^{n/2} 2^m \Gamma(\frac {m+1}2)}{\Gamma(\frac{n-m+1}2)}\Big)^{\frac n{n-m}} &\mbox{if $m$ is odd}, \end{cases}$$ where $\si_n = \om_{n-1}/n$ is the volume of the unit ball in $\R^n$. Moreover, if $\al \geqs \al_{n,m}$ then the supremum in becomes infinite though all integrals are still finite. The Moser–Trudinger inequality and Adams inequality play the role of the Sobolev embedding theorems in the limiting case $mp = n$. They have many applications in the study of problems in analysis, geometry, partial differential equations, etc., such as the Yamabe equation and the $Q-$curvature equations, especially problems in partial differential equations with exponential nonlinearity. There have been many generalizations of the Moser–Trudinger inequality and Adams inequality in the literature.
For example, the Moser–Trudinger inequality and Adams inequality were established on Riemannian manifolds in [@YangSuKong; @ManciniSandeep2010; @AdimurthiTintarev2010; @ManciniSandeepTintarev2013; @Bertrand; @Karmakar; @LuTang2013; @DongYang] and on sub-Riemannian manifolds in [@CohnLu; @CohnLu1; @Balogh]. The singular version of the Moser–Trudinger inequality and Adams inequality was proved in [@AdimurthiSandeep2007; @LamLusingular]. The Moser–Trudinger inequality and Adams inequality were extended to unbounded domains and whole spaces in [@Ruf2005; @LiRuf2008; @RufSani; @AdimurthiYang2010; @LamLuHei; @Adachi00; @LamLuAdams; @LamLunew], and to fractional order Sobolev spaces in [@Martinazzi; @FM1; @FM2]. Improved versions of the Moser–Trudinger inequality and Adams inequality were given in [@AdimurthiDruet2004; @Tintarev2014; @WangYe2012; @Nguyenimproved; @LuYangAiM; @Nguyen4; @delaTorre; @Mancini; @Yangjfa; @DOO; @NguyenCCM; @LuZhu; @LuYangHA; @LiLuYang]. An interesting question concerning the Moser–Trudinger inequality and Adams inequality is whether or not extremal functions exist. For this interesting topic, the reader may consult the papers [@Carleson86; @Flucher92; @Lin96; @Ruf2005; @LiRuf2008; @Chen; @LiYang; @LuZhu; @NguyenCCM; @LuYangAiM; @Nguyen4] and many other papers. Another generalization of the Moser–Trudinger inequality and Adams inequality is to establish inequalities of the same type in the Lorentz–Sobolev spaces.
The Moser–Trudinger inequality and the Adams inequality in the Lorentz spaces were established by Alvino, Ferone and Trombetti [@Alvino1996] and Alberico [@Alberico] in the following form $$\label{eq:AMTLorentz} \sup_{u\in W^m L^{\frac nm,q}(\Om), \, \|\na^m u\|_{\frac nm,q} \leq 1} \int_{\Om} e^{\al |u|^{\frac q{q-1}}} dx < \infty$$ for any $\al \leq \beta_{n,m}^{\frac q{q-1}}$ with $$\beta_{n,m} = \begin{cases} \frac{\pi^{n/2} 2^m \Gamma(\frac m2)}{\si_n^{(n-m)/n} \Gamma(\frac{n-m}2)}&\mbox{if $m$ is even,}\\ \frac{\pi^{n/2} 2^m \Gamma(\frac {m+1}2)}{\si_n^{(n-m)/n} \Gamma(\frac{n-m+1}2)}&\mbox{if $m$ is odd.} \end{cases}$$ The constant $\beta_{n,m}$ is sharp in the sense that the supremum will become infinite if $\al > \beta_{n,m}^{\frac q{q-1}}$. For unbounded domains in $\R^n$, the Moser–Trudinger inequality was proved by Cassani and Tarsi [@CassaniTarsi2009] (see Theorem $1$ and Theorem $2$ in [@CassaniTarsi2009]). In [@LuTang2016], Lu and Tang proved several sharp singular Moser–Trudinger inequalities in the Lorentz–Sobolev spaces which generalize the results in [@Alvino1996; @CassaniTarsi2009] to the singular weights. The singular Adams type inequalities in the Lorentz–Sobolev spaces were studied by the author in [@NguyenLorentz]. The motivation of this paper is to study the Adams inequalities in the hyperbolic spaces under the Lorentz–Sobolev norm. For $n\geq 2$, let us denote by $\H^n$ the hyperbolic space of dimension $n$, i.e., a complete, simply connected, $n-$dimensional Riemannian manifold having constant sectional curvature $-1$. The aim in this paper is to generalize the main results obtained by the author in [@Nguyen2020a] to the higher order Lorentz–Sobolev spaces in $\H^n$. Before stating our results, let us fix some notation. Let $V_g, \na_g$ and $\Delta_g$ denote the volume element, the hyperbolic gradient and the Laplace–Beltrami operator in $\H^n$ with respect to the metric $g$ respectively.
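As a purely numerical illustration (ours, not part of the original argument), the sharp constant above can be evaluated directly from its Gamma-function formula; the helper names below are hypothetical, and the odd-$m$ branch uses the Gamma argument $(n-m+1)/2$, following the same pattern as $\al_{n,m}$.

```python
import math

def sigma_n(n):
    # Volume of the unit ball in R^n: sigma_n = pi^(n/2) / Gamma(n/2 + 1).
    return math.pi ** (n / 2) / math.gamma(n / 2 + 1)

def beta_nm(n, m):
    # Sharp Lorentz-Adams constant beta_{n,m} from the displayed formula.
    if m % 2 == 0:
        num = math.pi ** (n / 2) * 2 ** m * math.gamma(m / 2)
        den = sigma_n(n) ** ((n - m) / n) * math.gamma((n - m) / 2)
    else:
        num = math.pi ** (n / 2) * 2 ** m * math.gamma((m + 1) / 2)
        den = sigma_n(n) ** ((n - m) / n) * math.gamma((n - m + 1) / 2)
    return num / den

# Example: beta_{4,2} simplifies by hand to 4*sqrt(2)*pi.
print(beta_nm(4, 2))  # ≈ 17.7715
```

For instance, in the borderline second-order case $n=4$, $m=2$ the formula collapses to $4\sqrt{2}\pi$, which the sketch reproduces.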
For higher order derivatives, we shall adopt the following convention $$\na_g^m \cdot = \begin{cases} \Delta_g^{\frac m2} \cdot &\mbox{if $m$ is even,}\\ \na_g (\Delta_g^{\frac{m-1}2} \cdot) &\mbox{if $m$ is odd.} \end{cases}$$ Furthermore, for simplicity, we write $|\na^m_g \cdot|$ instead of $|\na_g^m \cdot|_g$ when $m$ is odd if no confusion occurs. For $1\leq p, q\leqs \infty$, we denote by $L^{p,q}(\H^n)$ the Lorentz space in $\H^n$ and by $\|\cdot\|_{p,q}$ the Lorentz quasi-norm in $L^{p,q}(\H^n)$. When $p=q$, $\|\cdot\|_{p,p}$ is replaced by $\|\cdot\|_p$, the Lebesgue $L^p-$norm in $\H^n$, i.e., $\|f\|_p = (\int_{\H^n} |f|^p dV_g)^{\frac1p}$ for a measurable function $f$ on $\H^n$. The Lorentz–Sobolev space $W^m L^{p,q}(\H^n)$ is defined as the completion of $C_0^\infty(\H^n)$ under the Lorentz quasi-norm $\|\na_g^m u\|_{p,q}:=\| |\na_g^m u| \|_{p,q}$. In [@Nguyen2020a; @Nguyen2020b], the author proved the following Poincaré inequality in $W^m L^{p,q}(\H^n)$ $$\label{eq:Poincare} \|\na_g^m u\|_{p,q}^q \geq C(n,m,p)^q \|u\|_{p,q}^q,\quad\forall\, u\in W^m L^{p,q}(\H^n).$$ provided $1\leqs q \leq p$ if $m$ is odd and for any $1\leqs q \leqs \infty$ if $m$ is even, where $$C(n,m,p) = \begin{cases} (\frac{(n-1)^2}{pp'})^{\frac m2} &\mbox{if $m$ is even,}\\ \frac {n-1}p (\frac{(n-1)^2}{pp'})^{\frac {m-1}2}&\mbox{if $m$ is odd,} \end{cases}$$ with $p' = p/(p-1)$. Furthermore, the constant $C(n,m,p)^q$ in is the best possible and is never attained. The inequality generalizes the result in [@NgoNguyenAMV] to the setting of Lorentz–Sobolev spaces.
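For concreteness, the Poincaré constant $C(n,m,p)$ can be tabulated mechanically from the case distinction above; this is an illustrative sketch of ours (the helper name is hypothetical), not part of the proof.

```python
def poincare_constant(n, m, p):
    # C(n,m,p) from the Lorentz-Poincare inequality, with p' = p/(p-1):
    #   even m: ((n-1)^2 / (p p'))^(m/2)
    #   odd m:  ((n-1)/p) * ((n-1)^2 / (p p'))^((m-1)/2)
    p_prime = p / (p - 1)
    base = (n - 1) ** 2 / (p * p_prime)
    if m % 2 == 0:
        return base ** (m / 2)
    return (n - 1) / p * base ** ((m - 1) / 2)

# Examples: C(4,2,2) = (9/4)^1 = 2.25 and C(4,1,2) = 3/2.
print(poincare_constant(4, 2, 2), poincare_constant(4, 1, 2))  # 2.25 1.5
```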
The Moser–Trudinger inequality in the hyperbolic spaces was first proved by Mancini and Sandeep [@ManciniSandeep2010] in the dimension $n =2$ (another proof of this result was given by Adimurthi and Tintarev [@AdimurthiTintarev2010]) and by Mancini, Sandeep and Tintarev [@ManciniSandeepTintarev2013] in higher dimensions $n\geq 3$ (see [@FontanaMorpurgo2020] for an alternative proof) $$\label{eq:MThyperbolic} \sup_{u\in W^{1,n}(\H^n),\, \int_{\H^n} |\na_g u|_g^n dV_g \leq 1} \int_{\H^n} \Phi(\al_n |u|^{\frac n{n-1}}) dV_g < \infty,$$ where $\Phi(t) = e^t -\sum_{j=0}^{n-2} \frac{t^j}{j!}$. Lu and Tang [@LuTang2013] also established the sharp singular Moser–Trudinger inequality under the condition $\|\na u\|_{L^n(\H^n)}^n + \tau \|u\|_{L^n(\H^n)}^n \leq 1$ for any $\tau >0$ (see Theorem $1.4$ in [@LuTang2013]). In [@NguyenMT2018], the author improved the inequality by proving the following inequality $$\label{eq:NguyenMT} \sup_{u\in W^{1,n}(\H^n),\, \int_{\H^n} |\na_g u|_g^n dV_g - \lam \int_{\H^n} |u|^n dV_g \leq 1} \int_{\H^n} \Phi(\al_n |u|^{\frac n{n-1}}) dV_g < \infty,$$ for any $\lambda < (\frac{n-1}n)^n$.
The Adams inequality in the hyperbolic spaces was proved by Karmakar and Sandeep [@Karmakar] in the following form $$\sup_{u\in C_0^\infty(\H^{2n}),\, \int_{\H^{2n}} P_nu \cdot u dV_g \leq 1} \int_{\H^{2n}} \Big(e^{ \al_{2n,n} u^2} -1\Big) dV_g < \infty,$$ where $P_k$ is the GJMS operator on the hyperbolic spaces $\H^{2n}$, i.e., $P_1 = -\Delta_g -n(n-1)$ and $$P_k = P_1(P_1+2)\cdots (P_1 + k(k-1)),\quad k\geq 2.$$ In a recent paper, Fontana and Morpurgo [@FM2] established the following Adams inequality in the hyperbolic spaces $\H^n$, $$\label{eq:FM} \sup_{u\in W^{m,\frac nm}(\H^n), \int_{\H^n} |\na_g^m u|^{\frac nm} dV_g \leq 1} \int_{\H^n} \Phi_{\frac nm}(\al_{n,m} |u|^{\frac n{n-m}}) dV_g < \infty$$ where $$\Phi_{\frac nm}(t) = e^{t} -\sum_{j=0}^{j_{\frac nm} -2} \frac{t^j}{j!}, \quad\text{\rm and }\quad j_{\frac nm} = \min\{j\, :\, j \geq \frac nm\} \geq \frac nm.$$ In [@NgoNguyenRMI], Ngo and the author proved several Adams type inequalities in the hyperbolic spaces. To our knowledge, much less is known about the Trudinger–Moser inequality and Adams inequality under the Lorentz–Sobolev norm on complete noncompact Riemannian manifolds other than the Euclidean spaces. Recently, Yang and Li [@YangLi2019] proved a sharp Moser–Trudinger inequality in the Lorentz–Sobolev spaces defined in the hyperbolic spaces. More precisely, their result ([@YangLi2019 Theorem $1.6$]) states that for $1\leqs q \leqs \infty$ it holds $$\sup_{u\in W^1L^{n,q}(\H^n),\, \|\na_g u\|_{n,q} \leq 1} \int_{\H^n} \Phi_{n,q}(\al_{n,q} |u|^{\frac q{q-1}}) dV_g \leqs \infty,$$ where $$\Phi_{a,q}(t) =e^t - \sum_{j=0}^{j_{a,q} -2} \frac{t^j}{j!},\quad \text{\rm where}\,\, j_{a,q} = \min\{j\in \N\, :\, j \geqs 1+ a(q-1)/q\},$$ with $a \geqs 1$. The first aim in this paper is to establish the sharp Adams inequality in the hyperbolic spaces under the Lorentz–Sobolev norm, which generalizes the result of Yang and Li to higher order derivatives. Our first result in this paper reads as follows.
\[MAINI\] Let $n\geqs m \geq 2$ and $q \in (1,\infty)$. Then it holds $$\label{eq:AdamsLorentz} \sup_{u\in W^mL^{\frac nm,q}(\H^n),\, \|\na_g^m u\|_{\frac nm,q}\leq 1} \int_{\H^n} \Phi_{\frac nm,q}\big(\beta_{n,m}^{\frac q{q-1}} |u|^{\frac q{q-1}}\big) dV_g \leqs \infty,$$ for any $q \in (1,\infty)$ if $m$ is even, or $1\leqs q \leq \frac nm$ if $m$ is odd. Furthermore, the constant $\beta_{n,m}^{\frac q{q-1}}$ is sharp in the sense that the supremum in will become infinite if $\beta_{n,m}^{\frac q{q-1}}$ is replaced by any larger constant. Let us make some comments on Theorem \[MAINI\]. When $q =\frac nm$, we obtain the inequality of Fontana and Morpurgo from Theorem \[MAINI\]. However, our approach is completely different from the one of Fontana and Morpurgo. Notice that in the case that $m$ is odd, we need the extra assumption $q \leq \frac nm$ compared with the case that $m$ is even. This extra condition is a technical condition in our approach, under which we can apply the Pólya–Szegö principle in the hyperbolic space (see Theorem \[PS\] below). This principle was proved by the author in [@Nguyen2020a] and generalizes the classical Pólya–Szegö principle in Euclidean space to the hyperbolic space. Note that when $m=1$, the extra condition is not needed, by the result of Yang and Li [@YangLi2019]. The approach of Yang and Li is based on a representation formula for functions via the Green's function of the Laplace–Beltrami operator $-\Delta_g$ (similar to the one of Fontana and Morpurgo [@FM2]). Hence, we believe that the extra condition $q \leq \frac nm$ is superfluous when $m > 1$ is odd. One reasonable approach is to follow the one of Fontana and Morpurgo by using the representation formulas and estimates in [@FM2 Section $5$]. This problem is left for the interested reader. Next, we aim to improve the Lorentz–Adams inequality in Theorem \[MAINI\] in the spirit of .
In the case $m=1$, an analogue of under the Lorentz–Sobolev norm was obtained by the author in [@Nguyen2020a Theorem $1.3$]. The result for $m > 1$ is given in the following theorem. \[MAINII\] Let $n > m\geq 2$ and $q \geq \frac{2n}{n-1}$. Suppose in addition that $q \leq \frac nm$ if $m$ is odd. Then we have $$\label{eq:improvedAL} \sup_{u\in W^mL^{\frac nm,q}(\H^n),\, \|\na_g^m u\|_{\frac nm,q}^q -\lam \|u\|_{\frac nm,q}^q \leq 1} \int_{\H^n} \Phi_{\frac nm,q}\big(\beta_{n,m}^{\frac q{q-1}} |u|^{\frac q{q-1}}\big) dV_g \leqs \infty.$$ for any $ \lam \leqs C(n,m,\frac nm)^q$. Obviously, Theorem \[MAINII\] is stronger than Theorem \[MAINI\]. The extra condition $q \geq \frac{2n}{n-1}$ in Theorem \[MAINII\] is needed to apply a crucial point-wise estimate in [@NguyenPS2018 Lemma $2.1$]. Theorem \[MAINII\] is proved by using an iteration method and some estimates in [@Nguyen2020b], which we will recall in Section §2 below. The Hardy–Moser–Trudinger inequality was proved by Wang and Ye (see [@WangYe2012]) in dimension $2$ $$\label{eq:WangYe} \sup_{u \in W^{1,2}_0(\B^2), \int_{\B^2} |\na u|^2 dx - \int_{\B^2} \frac{u^2}{(1-|x|^2)^2} dx \leq 1} \int_{\B^2} e^{4\pi u^2} dx < \infty.$$ The inequality is stronger than the classical Moser–Trudinger inequality in $\B^2$. It connects both the sharp Moser–Trudinger inequality in $\B^2$ and the sharp Hardy inequality in $\B^2$ $$\int_{\B^2} |\na u|^2 dx \geq \int_{\B^2} \frac{u^2}{(1-|x|^2)^2} dx, \quad u \in W^{1,2}_0(\B^2).$$ The higher dimensional version of was recently established by the author [@NguyenHMT] $$\sup_{u \in W^{1,n}_0(\B^n), \int_{\B^n} |\na u|^n dx - \lt(\frac{2(n-1)}n\rt)^n\int_{\B^n} \frac{|u|^n}{(1-|x|^2)^n} dx \leq 1} \int_{\B^2} e^{\al_n |u|^{\frac n{n-1}}} dx < \infty.$$ For higher order derivatives, the sharp Hardy–Adams inequality was proved by Lu and Yang [@LuYangHA] in dimension $4$ and by Li, Lu and Yang [@LiLuYang] in any even dimension.
The approach in [@LuYangHA; @LiLuYang] relies heavily on the Hilbertian structure of the space $W^{\frac n2,2}_0(\B^n)$ with $n$ even, for which the Fourier analysis in the hyperbolic spaces can be applied. Our next motivation in this paper is to establish the sharp Hardy–Adams inequality in any dimension. Our next result reads as follows. \[HARDYADAMS\] Let $m \geq 3$, $n \geq 2m+1$ and $q \geq \frac{2n}{n-1}$. Suppose in addition that $q \leq \frac nm$ if $m$ is odd. Then it holds $$\label{eq:HAineq} \sup_{u\in W^mL^{\frac nm,q}(\H^n),\, \|\na_g^m u\|_{\frac nm,q}^q -C(n,m,\frac nm)^q \|u\|_{\frac nm,q}^q \leq 1} \int_{\B^n} \exp\big(\beta_{n,m}^{\frac q{q-1}} |u|^{\frac q{q-1}}\big) dx \leqs \infty.$$ Notice that the condition $m \geq 3$ is crucial in our approach. Indeed, under this condition we can make some estimates for $\|\na_g^m u\|_{\frac nm,q}^q -C(n,m,\frac nm)^q \|u\|_{\frac nm,q}^q$ for which we can apply the results from Theorem \[MAINI\] and Theorem \[MAINII\]. We do not know an analogue of when $m=2$. When $q = \frac nm$, we obtain the following Hardy–Adams inequality $$\sup_{u\in W^{m,\frac nm}_0(\H^n),\, \int_{\H^n} |\na_g^m u|^{\frac nm} dV_g -C(n,m,\frac nm)^{\frac nm} \int_{\H^n} |u|^{\frac nm} dV_g \leq 1} \int_{\B^n} \exp\big(\al_{n,m} |u|^{\frac n{n-m}}\big) dx \leqs \infty.$$ The rest of this paper is organized as follows. In Section §2, we recall some facts on the hyperbolic spaces, the non-increasing rearrangement argument in the hyperbolic spaces and some important results from [@Nguyen2020b] which are used in the proof of Theorem \[MAINII\] and Theorem \[HARDYADAMS\]. The proof of Theorem \[MAINI\] is given in Section §3. Section §4 is devoted to the proof of Theorem \[MAINII\]. Finally, in Section §5 we provide the proof of Theorem \[HARDYADAMS\]. Preliminaries ============= We start this section by briefly recalling some basic facts on the hyperbolic spaces and the Lorentz–Sobolev space defined in the hyperbolic spaces.
Let $n\geq 2$. A hyperbolic space of dimension $n$ (denoted by $\H^n$) is a complete, simply connected Riemannian manifold having constant sectional curvature $-1$. There are several models for the hyperbolic space $\H^n$, such as the half-space model, the hyperboloid (or Lorentz) model and the Poincaré ball model. Notice that all these models are mutually isometric. In this paper, we are interested in the Poincaré ball model of the hyperbolic space, since this model is very useful for questions involving rotational symmetry. In the Poincaré ball model, the hyperbolic space $\H^n$ is the open unit ball $B_n\subset \R^n$ equipped with the Riemannian metric $$g(x) = \Big(\frac2{1- |x|^2}\Big)^2 dx \otimes dx.$$ The volume element of $\H^n$ with respect to the metric $g$ is given by $$dV_g(x) = \Big(\frac 2{1 -|x|^2}\Big)^n dx,$$ where $dx$ is the usual Lebesgue measure in $\R^n$. For $x \in B_n$, let $d(0,x)$ denote the geodesic distance between $x$ and the origin; then we have $d(0,x) = \ln \frac{1+|x|}{1 -|x|}$. For $\rho \geqs 0$, $B(0,\rho)$ denotes the geodesic ball with center at the origin and radius $\rho$. If we denote by $\na$ and $\Delta$ the Euclidean gradient and Euclidean Laplacian, respectively, as well as by $\la \cdot, \cdot\ra$ the standard scalar product in $\R^n$, then the hyperbolic gradient $\na_g$ and the Laplace–Beltrami operator $\Delta_g$ in $\H^n$ with respect to the metric $g$ are given by $$\na_g = \Big(\frac{1 -|x|^2}2\Big)^2 \na,\quad \Delta_g = \Big(\frac{1 -|x|^2}2\Big)^2 \Delta + (n-2) \Big(\frac{1 -|x|^2}2\Big)\la x, \na \ra,$$ respectively. For a function $u$, we shall denote $\sqrt{g(\na_g u, \na_g u)}$ by $|\na_g u|_g$ to simplify the notation.
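The closed form of the geodesic distance can be checked against a direct numerical integration of the conformal factor $2/(1-t^2)$ along a radius; the sketch below is our own illustration (function names hypothetical) and uses only the standard library.

```python
import math

def geodesic_distance_numeric(r, steps=20000):
    # Midpoint-rule integral of 2/(1 - t^2) over [0, r]: the hyperbolic
    # length of the Euclidean radial segment from 0 to |x| = r.
    h = r / steps
    return h * sum(2.0 / (1.0 - (h * (i + 0.5)) ** 2) for i in range(steps))

def geodesic_distance_closed(r):
    # Closed form from the text: d(0, x) = ln((1 + |x|)/(1 - |x|)).
    return math.log((1 + r) / (1 - r))

# For r = 1/2 both values agree with ln 3 ≈ 1.0986.
print(geodesic_distance_numeric(0.5), geodesic_distance_closed(0.5))
```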
Finally, for a radial function $u$ (i.e., a function depending only on $d(0,x)$) we have the following polar coordinate formula $$\int_{\H^n} u(x)\, dV_g(x) = n \sigma_n \int_0^\infty u(\rho) \sinh^{n-1}(\rho)\, d\rho.$$ It is now known that the symmetrization argument works well in the setting of the hyperbolic space. It is the key tool in the proof of several important inequalities such as the Poincaré inequality, the Sobolev inequality, the Moser–Trudinger inequality in $\H^n$. We shall see that this argument is also the key tool to establish the main results in the present paper. Let us recall some facts about the rearrangement argument in the hyperbolic space $\H^n$. A measurable function $u:\H^n \to \R$ is said to vanish at infinity if for any $t >0$ the set $\{|u| > t\}$ has finite $V_g-$measure, i.e., $$V_g(\{|u|> t\}) = \int_{\{|u|> t\}} dV_g < \infty.$$ For such a function $u$, its distribution function is defined by $$\mu_u(t) = V_g( \{|u|> t\}).$$ Notice that $t \to \mu_u(t)$ is non-increasing and right-continuous. The non-increasing rearrangement function $u^*$ of $u$ is defined by $$u^*(t) = \sup\{s > 0\, :\, \mu_u(s) > t\}.$$ The non-increasing, spherically symmetric rearrangement function $u^\sharp$ of $u$ is defined by $$u^\sharp(x) = u^*(V_g(B(0,d(0,x)))),\quad x \in \H^n.$$ It is well-known that $u$ and $u^\sharp$ have the same non-increasing rearrangement function (which is $u^*$). Finally, the maximal function $u^{**}$ of $u^*$ is defined by $$u^{**}(t) = \frac1t \int_0^t u^*(s) ds.$$ Evidently, $u^*(t) \leq u^{**}(t)$. For $1\leq p, q < \infty$, the Lorentz space $L^{p,q}(\H^n)$ is defined as the set of all measurable functions $u: \H^n \to \R$ satisfying $$\|u\|_{L^{p,q}(\H^n)}: = \lt(\int_0^\infty \lt(t^{\frac1p} u^*(t)\rt)^q \frac{dt}t\rt)^{\frac1q} < \infty.$$ It is clear that $L^{p,p}(\H^n) = L^p(\H^n)$.
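The rearrangement machinery just recalled has a transparent discrete analogue, which we sketch below as our own illustration (not from the paper): on a grid of equal-measure cells, $u^*$ is simply the sorted list of values of $|u|$ and $u^{**}$ its running averages, and the pointwise bound $u^* \leq u^{**}$ is visible directly.

```python
def decreasing_rearrangement(samples):
    # Discrete analogue of u*: sort the sampled values of |u| in
    # non-increasing order (each slot carries equal measure).
    return sorted((abs(v) for v in samples), reverse=True)

def maximal_function(ustar):
    # Discrete analogue of u**(t) = (1/t) * int_0^t u*(s) ds: running averages.
    out, acc = [], 0.0
    for k, v in enumerate(ustar, start=1):
        acc += v
        out.append(acc / k)
    return out

u = [0.2, -1.5, 0.7, 3.0, -0.1, 1.5]
ustar = decreasing_rearrangement(u)
ustarstar = maximal_function(ustar)
# u* is non-increasing, so each running average dominates the current value:
# the discrete form of u*(t) <= u**(t).
print(all(a <= b + 1e-12 for a, b in zip(ustar, ustarstar)))  # True
```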
Moreover, the Lorentz spaces are monotone with respect to the second exponent, namely $$L^{p,q_1}(\H^n) \subsetneq L^{p,q_2}(\H^n),\quad 1\leq q_1 < q_2 < \infty.$$ The functional $ u\to \|u\|_{L^{p,q}(\H^n)}$ is not a norm in $L^{p,q}(\H^n)$ except in the case $q \leq p$ (see [@Bennett Chapter $4$, Theorem $4.3$]). In general, it is a quasi-norm which turns out to be equivalent to the norm obtained by replacing $u^*$ with its maximal function $u^{**}$ in the definition of $\|\cdot\|_{L^{p,q}(\H^n)}$. Moreover, as a consequence of the Hardy inequality, we have the following. Given $p\in (1,\infty)$ and $q \in [1,\infty)$. Then for any function $u \in L^{p,q}(\H^n)$ it holds $$\label{eq:Hardy} \lt(\int_0^\infty \lt(t^{\frac1p} u^{**}(t)\rt)^q \frac{dt}t\rt)^{\frac1q} \leq \frac p{p-1} \lt(\int_0^\infty \lt(t^{\frac1p} u^*(t)\rt)^q \frac{dt}t\rt)^{\frac1q} = \frac p{p-1} \|u\|_{L^{p,q}(\H^n)}.$$ For $1\leq p, q \leqs \infty$ and an integer $m\geq 1$, we define the $m-$th order Lorentz–Sobolev space $W^mL^{p,q}(\H^n)$ by taking the completion of $C_0^\infty(\H^n)$ under the quasi-norm $$\|\na_g^m u\|_{p,q} := \| |\na_g^m u|\|_{p,q}.$$ It is obvious that $W^mL^{p,p}(\H^n) = W^{m,p}(\H^n)$, the $m-$th order Sobolev space in $\H^n$. In [@Nguyen2020a], the author established the following Pólya–Szegö principle in the first order Lorentz–Sobolev spaces $W^1L^{p,q}(\H^n)$, which generalizes the classical Pólya–Szegö principle to the hyperbolic space. \[PS\] Let $n\geq 2$, $1\leq q \leq p \leqs \infty$ and $u\in W^{1}L^{p,q}(\H^n)$. Then $u^\sharp \in W^{1}L^{p,q}(\H^n)$ and $$\|\na_g u^\sharp\|_{p,q} \leq \|\na_g u\|_{p,q}.$$ For $r \geq 0$, define $$\Phi(r) = n \int_0^r \sinh^{n-1}(s) ds, \quad r\geq 0,$$ and let $F$ be the function such that $$r = n \si_n \int_0^{F(r)} \sinh^{n-1}(s) ds, \quad r\geq 0,$$ i.e., $F(r) = \Phi^{-1}(r/\si_n)$. The following results were proved in [@Nguyen2020b] (see Section §2). Let $n \geq 2$.
Then it holds $$\label{eq:keyyeu} \sinh^{n}(F(t)) \geqs \frac t{\si_n},\quad t\geqs 0.$$ Furthermore, the function $$\varphi(t) =\frac{t}{\sinh^{n-1}(F(t))}$$ is strictly increasing on $(0,\infty)$, and $$\label{eq:keyyeu*} \lim_{t\to \infty} \varphi(t) = \frac{n \si_n}{n-1} > \frac{t}{\sinh^{n-1}(F(t))},\quad t >0.$$ It should be remarked that, under the extra condition $q \geq \frac{2n}{n-1}$, a stronger estimate which combines both and was established by the author in [@Nguyen2020a Lemma $2.1$]: $$\sinh^{q(n-1)}(F(t)) \geq \lt(\frac t{\si_n}\rt)^{q \frac{n-1}n} + \lt(\frac{n-1}n\rt)^q \lt(\frac t{\si_n}\rt)^q,\quad t \geqs 0.$$ Let $u \in C_0^\infty(\H^n)$ and $f = -\Delta_g u$. It was proved by Ngo and the author (see [@NgoNguyenAMV Proposition $2.2$]) that $$\label{eq:NgoNguyen} u^*(t) \leq v(t):= \int_t^\infty \frac{s f^{**}(s)}{(n \si_n \sinh^{n-1}(F(s)))^2} ds,\quad t\geqs 0.$$ The following results, which were proved in [@Nguyen2020b; @Nguyen2020a], play an important role in the proof of our main results. \[L1\] Let $p\in (1,n)$ and $\frac{2n}{n-1} \leq q \leq p$. Then we have $$\label{eq:improvedLS1a} \|\na_g u\|_{p,q}^q - \lt(\frac{n-1}p\rt)^q \|u\|_{p,q}^q \geq \lt(\frac{n-p}p \si_n^{\frac1n}\rt)^q \|u\|_{p^*,q}^q,\quad u\in C_0^\infty(\H^n)$$ where $p' = p/(p-1)$. \[L2\] Let $n\geq 2$, $p \in (1,n)$ and $q \in (1,\infty)$.
If $p \in (1,\frac n2)$ then it holds $$\label{eq:LSorder2} \|\Delta_g u\|_{p,q}^q \geq \lt(\frac{n(n-2p)}{p p'} \si_n^{\frac 2n}\rt)^q \|u\|_{p_2^*,q}^q.$$ If $p\in (1,n)$ and $q \geq \frac{2n}{n-1}$ then we have $$\label{eq:improvedLS2} \|\Delta_g u\|_{p,q}^q - C(n,2,p)^q \|u\|_{p,q}^q \geq \lt(\frac{n^2 \si_n^{\frac2n}}{p'}\rt)^q \int_0^\infty |v'(t)|^q t^{q(\frac1p -\frac2n) + q -1} dt.$$ Furthermore, if $p\in (1,\frac n2)$ and $\frac{2n}{n-1} \leq q \leq p$ then we have $$\label{eq:improvedLS2a} \|\Delta_g u\|_{p,q}^q - C(n,2,p)^q \|u\|_{p,q}^q \geq \lt(\frac{n(n-2p)}{p p'} \si_n^{\frac 2n}\rt)^q \|u\|_{p_2^*,q}^q,\quad u \in C_0^\infty(\H^n).$$ Proposition \[L1\] follows from [@Nguyen2020a Theorem $1.2$] while Proposition \[L2\] follows from Theorem $2.8$ in [@Nguyen2020b]. Proof of Theorem \[MAINI\] ========================== In this section, we prove Theorem \[MAINI\]. The main point is the proof of the case $m=2$. For the case $m\geq 3$, the proof is based on the iteration argument by using the inequalities and below. We divide the proof of into the three following cases:\ *Case 1: $m =2$.* It is enough to consider $u \in C_0^\infty(\H^n)$ with $\|\Delta_g u\|_{\frac n2,q} \leq 1$. Denote $f = -\Delta_g u$ and define $v$ by ; then we have $u^* \leq v$. By [@Nguyen2020b Theorem $1.1$], we have $\|u\|_{\frac n2,q}^q \leq C$. Here and in the sequel, we denote by $C$ a generic constant which does not depend on $u$ and whose value may change from line to line. For any $t\geqs 0$, we have $$\frac n{2q} u^*(t)^q t^{\frac{2q}n} \leq \int_0^t u^*(s)^q s^{\frac{2q}n-1} ds \leq \|u\|_{\frac n2,q}^q \leq C,$$ which yields $u^*(t) \leq C t^{-\frac 2n}$, $t\geqs 0$.
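Before continuing, the two elementary bounds from the preliminaries, $\si_n\sinh^n(F(t)) \geq t$ and $n\si_n\sinh^{n-1}(F(t)) \geq (n-1)t$, which are invoked repeatedly in the estimates below, admit a quick numerical sanity check. The sketch is ours: the helper names and the bisection inversion of $\Phi$ are illustrative, not part of the proof.

```python
import math

def sigma_n(n):
    # Volume of the unit ball in R^n.
    return math.pi ** (n / 2) / math.gamma(n / 2 + 1)

def Phi(n, r, steps=4000):
    # Phi(r) = n * int_0^r sinh^(n-1)(s) ds, midpoint rule.
    h = r / steps
    return n * h * sum(math.sinh(h * (i + 0.5)) ** (n - 1) for i in range(steps))

def F(n, t, hi=50.0):
    # F(t) = Phi^{-1}(t / sigma_n), computed by bisection on [0, hi].
    target, lo = t / sigma_n(n), 0.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if Phi(n, mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

n, t = 3, 2.0
r = F(n, t)
# The two bounds used in the text: sigma_n*sinh^n(F(t)) >= t and
# n*sigma_n*sinh^(n-1)(F(t)) >= (n-1)*t.
print(math.sinh(r) ** n >= t / sigma_n(n),
      n * sigma_n(n) * math.sinh(r) ** (n - 1) >= (n - 1) * t)  # True True
```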
Therefore, it is not hard to see that $$\Phi_{\frac n2,q}(\beta_{n,2}^{\frac q{q-1}} u^*(t)^{\frac q{q-1}}) \leq C u^*(t)^{\frac{q}{q-1} (j_{\frac n2,q} -1)} \leq C t^{-\frac 2n \frac{q}{q-1} (j_{\frac n2,q} -1)},\quad \forall\, t\geq 1.$$ By the choice of $j_{\frac n2,q}$, we then have $$\label{eq:tach12} \int_1^\infty \Phi_{\frac n2,q}(\beta_{n,2}^{\frac q{q-1}} u^*(t)^{\frac q{q-1}}) dt \leq C.$$ On the other hand, we have $$\begin{aligned} \label{eq:on01} \int_0^1 \Phi_{\frac n2,q}(\beta_{n,2}^{\frac q{q-1}} u^*(t)^{\frac q{q-1}}) dt &\leq \int_0^1 \exp\Big(\beta_{n,2}^{\frac q{q-1}} u^*(t)^{\frac q{q-1}}\Big) dt \notag\\ &\leq \int_0^1 \exp\Big(\beta_{n,2}^{\frac q{q-1}} v(t)^{\frac q{q-1}}\Big) dt \notag\\ &= \int_0^\infty \exp\Big(-t + \beta_{n,2}^{\frac q{q-1}} v(e^{-t})^{\frac q{q-1}}\Big) dt.\end{aligned}$$ Notice that $$v(e^{-t}) =\int_{e^{-t}}^\infty \frac{r}{(n\si_n \sinh^{n-1}(F(r)))^2} f^{**}(r) dr = \int_{-\infty}^t \frac{e^{-2(1-\frac1n)s}}{(n\si_n \sinh^{n-1}(F(e^{-s})))^2} e^{-\frac{2}ns}f^{**}(e^{-s}) ds.$$ Denoting $$\phi(s) = \frac{n-2}{n} e^{-\frac 2n s} f^{**}(e^{-s}),$$ we then have $$\label{eq:boundnormorder2} \int_{\R}\phi(s)^q ds = \lt(\frac{n-2}n\rt)^q \int_0^\infty (f^{**}(t) t^{\frac2n})^q \frac{dt}t \leq 1,$$ here we used the Hardy inequality and $\|\Delta_g u\|_{L^{\frac n2,q}(\H^n)} \leq 1$.
Define the function $$a(s,t) = \begin{cases} \be_{n,2} \frac{n}{n-2} \frac{e^{-2(1-\frac1n)s}}{(n\si_n \sinh^{n-1}(F(e^{-s})))^2} &\mbox{if $s \leq t$,}\\ 0&\mbox{if $s > t$.} \end{cases}$$ Using the inequality $\si_n \sinh^n(F(r)) \geq r$, we have for $0 \leq s \leq t$ $$\label{eq:boundby1order2} a(s,t) \leq \be_{n,2} \frac1{n(n-2) \si_n^{\frac2n}} = 1.$$ Moreover, for $t >0$ we have $$\begin{aligned} \int_{-\infty}^0 a(s,t)^{q'} ds + \int_t^\infty a(s,t)^{q'} ds& = \be_{n,2}^{q'} \lt(\frac n{n-2}\rt)^{q'} \int_{-\infty}^0 \lt(\frac{e^{-2(1-\frac1n)s}}{(n\si_n \sinh^{n-1}(F(e^{-s})))^2}\rt)^{q'} ds\\ &\leq \be_{n,2}^{q'} \lt(\frac n{n-2}\rt)^{q'} (n-1)^{-2q'} \int_{-\infty}^0 e^{\frac2n q' s} ds\\ &= \be_{n,2}^{q'} \lt(\frac n{n-2}\rt)^{q'} (n-1)^{-2q'} \frac{n}{2q'},\end{aligned}$$ here we used $n\sigma_n \sinh^{n-1}(F(r)) \geq (n-1) r$. Hence $$\label{eq:dk2Adamsorder2} \sup_{t >0} \lt(\int_{-\infty}^0 a(s,t)^{q'} ds + \int_t^\infty a(s,t)^{q'} ds\rt)^{\frac1{q'}} \leq \lt(\beta_{n,2}^{q'} \lt(\frac n{n-2}\rt)^{q'} (n-1)^{-2q'} \frac{n}{2q'}\rt)^{\frac1{q'}}.$$ Notice that $$\label{eq:majoru*2} \be_{n,2}v(e^{-t}) \leq \int_{\R} a(s,t) \phi(s) ds.$$ With , , , and at hand, we can apply Adams’ Lemma [@Adams] to obtain $$\label{eq:tporder22} \int_0^1 \Phi_{\frac n2,q}(\beta_{n,2}^{q'} u^*(t)^{\frac q{q-1}}) dt \leq \int_0^\infty e^{-t + \beta_{n,2}^{q'} v(e^{-t})^{q'}} dt \leq C.$$ Combining and together, we arrive at $$\int_{\H^n} \Phi_{\frac n2,q}(\be_{n,2}^{q'} |u|^{q'}) dV_g = \int_0^\infty \Phi_{\frac n2,q}(\be_{n,2}^{q'} (u^*(t))^{q'}) dt \leq C,$$ for any $u \in W^2L^{\frac n2,q}(\H^n)$ with $\|\Delta_g u\|_{L^{\frac n2,q}(\H^n)} \leq 1$. This proves for $m =2$.\ *Case 2: $m =2k$, $k\geq 2$.* To obtain the result in this case, we apply the iteration argument.
Firstly, by iterating the inequality , we have that for $k\geq 1$, $q \in (1,\infty)$ and $p \in (1,\frac n{2k})$ $$\|\Delta_g^k u\|_{p,q}^q \geq S(n,2k,p)^q \|u\|_{p_{2k}^*,q}^q.$$ Hence, if $u \in W^{2k} L^{\frac n{2k},q}(\H^n)$ with $\|\Delta_g^k u\|_{\frac n{2k},q} \leq 1$, then we have $$S(n,2(k-1),\frac n{2k}) \|\Delta_g u\|_{\frac n2,q} \leq 1.$$ Define $w = S(n,2(k-1),\frac n{2k}) u$; then $\|\Delta_g w\|_{\frac n2,q} \leq 1$. Using the result in the *Case 1* with the remark that $$\beta_{n,2k} = \beta_{n,2} S(n,2(k-1),\frac n{2k}),$$ we obtain $$\label{eq:Case1} \int_{\H^n} \Phi_{\frac n2,q}(\beta_{n,2k}^{q'} |u|^{q'}) dV_g \leq C.$$ By the Lorentz–Poincaré inequality , we have $\|u\|_{\frac n{2k},q}^q \leq C$. As in the *Case 1*, we get $u^*(t) \leq C t^{-\frac{2k}n}$, $t\geqs 0$. Hence, for $t\geq 1$, it holds $$\Phi_{\frac n{2k},q}(\beta_{n,2k}^{q'} u^*(t)^{q'}) \leq C (u^*(t))^{q'(j_{\frac n{2k},q} -1)} \leq C t^{-\frac{2k}n q'(j_{\frac n{2k},q} -1)},$$ which implies $$\label{eq:tach12k} \int_1^\infty \Phi_{\frac n{2k} ,q}(\beta_{n,2k}^{\frac q{q-1}} u^*(t)^{\frac q{q-1}}) dt \leq C$$ by the choice of $j_{\frac n{2k},q}$. Since $$\lim_{t\to \infty} \frac{\Phi_{\frac n{2k},q}(t)}{\Phi_{\frac n2,q}(t)} = 1,$$ there exists $A$ such that $\Phi_{\frac n{2k},q}(t) \leq 2 \Phi_{\frac n2,q}(t)$ for $t \geq A$.
Hence, we have $$\begin{aligned} \int_0^1 \Phi_{\frac n{2k},q}(\beta_{n,2k}^{\frac q{q-1}} u^*(t)^{\frac q{q-1}}) dt & = \int_{\{t\in (0,1):u^*(t) \leqs A^{1/q'} \beta_{n,2k}^{-1}\}} \Phi_{\frac n{2k},q}(\beta_{n,2k}^{\frac q{q-1}} u^*(t)^{\frac q{q-1}}) dt\\ &\quad + \int_{\{t\in (0,1):u^*(t) \geq A^{1/q'} \beta_{n,2k}^{-1}\}} \Phi_{\frac n{2k},q}(\beta_{n,2k}^{\frac q{q-1}} u^*(t)^{\frac q{q-1}}) dt\\ &\leq C + 2\int_{\{t\in (0,1):u^*(t) \geq A^{1/q'} \beta_{n,2k}^{-1}\}} \Phi_{\frac n2,q}(\beta_{n,2k}^{\frac q{q-1}} u^*(t)^{\frac q{q-1}}) dt\\ &\leq C + \int_0^1 \Phi_{\frac n2,q}(\beta_{n,2k}^{\frac q{q-1}} u^*(t)^{\frac q{q-1}}) dt\\ &\leq C\end{aligned}$$ here we have used . Combining the previous inequality together with proves the result in this case.\ *Case 3: $m =2k+1$, $k\geq 1$.* Let $f = -\Delta_g^{k} u$. Since $q \leq \frac n{2k+1}$, it was proved in [@Nguyen2020a] (the formula after $(2.8)$ with $u$ replaced by $f$) that $$\|\na_g^m u\|_{\frac n{2k+1},q}^q = \|\na_g f\|_{\frac n{2k+1},q}^q \geq \int_0^\infty |(f^*)'(t)|^q (n \si_n \sinh^{n-1} (F(t)))^q t^{\frac{(2k+1) q}n -1} dt.$$ Using , we have $$\|\na_g^m u\|_{\frac n{2k+1},q}^q \geq n^q \si_n^{\frac qn} \int_0^\infty |(f^*)'(t)|^q t^{\frac{2kq}n + q -1} dt.$$ Applying the one-dimensional Hardy inequality, it holds $$\label{eq:Sob} \|\na_g^m u\|_{\frac n{2k+1},q}^q \geq (2k)^q \si_n^{\frac qn}\int_0^\infty |f^*(t)|^q t^{\frac{2kq}n -1} dt = (2k)^q \si_n^{\frac qn} \|\Delta_g^k u\|_{\frac n{2k},q}^q.$$ For any $u \in W^{2k+1} L^{\frac n{2k+1},q}(\H^n)$ with $\|\na_g^m u\|_{\frac n{2k+1},q} \leq 1$, define $w= 2k \si_n^{\frac1n} u$. By , we have $\|\Delta_g^k w\|_{\frac n{2k},q}^q \leq 1$.
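The one-dimensional Hardy inequality invoked above has the weighted form $\int_0^\infty\big(\int_t^\infty g\,ds\big)^q t^{a-1}\,dt \leq (q/a)^q\int_0^\infty g(t)^q t^{a+q-1}\,dt$ for $g\geq 0$ and $a>0$; with $a = \frac{2kq}{n}$ the constant $(a/q)^q = (2k/n)^q$ combines with $n^q\si_n^{\frac qn}$ to give the factor $(2k)^q \si_n^{\frac qn}$ above. A minimal numerical sketch (the test function $g(t)=e^{-t}$ and the midpoint discretization are our choices, not from the source):

```python
import math

def hardy_sides(g, q, a, T=60.0, N=200_000):
    # Midpoint-rule estimate of both sides of the weighted Hardy inequality
    #   int_0^T (int_t^T g ds)^q t^(a-1) dt <= (q/a)^q int_0^T g^q t^(a+q-1) dt
    h = T / N
    ts = [(i + 0.5) * h for i in range(N)]
    gs = [g(t) for t in ts]
    F, tail = [0.0] * N, 0.0
    for i in range(N - 1, -1, -1):  # tail integrals F(t_i) = int_{t_i}^T g
        tail += gs[i] * h
        F[i] = tail
    lhs = sum(F[i] ** q * ts[i] ** (a - 1) for i in range(N)) * h
    rhs = (q / a) ** q * sum(gs[i] ** q * ts[i] ** (a + q - 1) for i in range(N)) * h
    return lhs, rhs

# for g(t) = e^(-t), q = 2, a = 1 the two sides are exactly 1/2 and 1
lhs, rhs = hardy_sides(lambda t: math.exp(-t), q=2.0, a=1.0)
assert abs(lhs - 0.5) < 1e-3 and abs(rhs - 1.0) < 1e-3 and lhs <= rhs
```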
Using the result in *Case 2*, noting that $$\beta_{n,2k+1} =2k \si_n^{\frac 1n} \beta_{n,2k},$$ we obtain $$\label{eq:Case2} \int_{\H^n} \Phi_{\frac n{2k},q} (\beta_{n,2k+1}^{q'} |u|^{q'}) dV_g \leq C.$$ Using together with the last arguments in the proof of *Case 2* proves the result in this case.\ It remains to check the sharpness of the constant $\beta_{n,m}^{\frac q{q-1}}$. To do this, we construct a sequence of test functions as follows $$v_j(x) = \begin{cases} \frac{(\ln j)^{1/q'}}{\beta_{n,m}} + \frac{n\beta_{n,m}}{2(\ln j)^{1/q}} \sum_{i=1}^{m-1} \frac{(1-j^{\frac 2n}|x|^2)^i}{i} &\mbox{if $0\leq |x| \leq j^{-\frac 1n}$,}\\ -\frac n{\beta_{n,m}} (\ln j)^{-1/q} \ln |x|&\mbox{if $j^{-\frac1n} \leqs |x| \leq 1$,}\\ \xi_j(x) &\mbox{if $1 \leqs |x| \leqs 2$}, \end{cases} \quad j\geq 2$$ where $\xi_j \in C_0^\infty(2\B^n)$ are radial functions chosen such that $\xi_j = 0$ on $\pa \B^n$ and for $i=1,\ldots,m-1$ $$\frac{\pa^i \xi_j}{\pa r^i} \Big{|}_{\pa \B^n} = (-1)^i (i-1)! n\beta_{n,m}^{-1} (\ln j)^{-1/q},$$ and $\xi_j$, $|\na^l \xi_j|$ and $|\na^m \xi_j|$ are all $O((\ln j)^{-1/q})$ as $j\to \infty$. For $\ep \in (0,1/3)$ let us define $u_{\ep,j}(x) =v_j(x/\ep)$. Then $u_{\ep,j} \in W^m L^{\frac nm,q}(\H^n)$ has support contained in $\{|x| \leq 2\ep\}$. It is easy to check that $$|\na_g^m u_{\ep,j}(x)| \leq \lt(\frac{1-|x|^2}2\rt)^m C (\ep^{-1} j^{\frac1n})^m (\ln j)^{-1/q}\leq C2^{-m}(\ep^{-1} j^{\frac1n})^m (\ln j)^{-1/q}$$ for $|x| \leq \ep j^{-\frac1n}$, and $$|\na_g^m u_{\ep,j}(x)| \leq C \ep^{-m} (\ln j)^{-\frac1q}\lt(\frac{1-|x|^2}2\rt)^m \leq C2^{-m}\ep^{-m} (\ln j)^{-\frac1q}$$ for $|x|\in (\ep, 2\ep)$ with a positive constant $C$ independent of $\ep\leqs \frac13$ and $j$.
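The two inner branches of $v_j$ glue continuously at $|x| = j^{-1/n}$: the polynomial sum vanishes there, and both branches reduce to $(\ln j)^{1/q'}/\beta_{n,m}$. A quick numerical sketch (the parameter values, including the placeholder value used for $\beta_{n,m}$, are our choices; the matching itself does not depend on them):

```python
import math

# hypothetical parameter values; beta stands in for beta_{n,m}, whose actual
# value does not affect the matching because the polynomial sum vanishes at
# the gluing radius
n, m, q, beta, j = 4, 2, 2.0, 2.0, 100
qp = q / (q - 1)  # q' = q/(q-1)

def inner(r):
    # branch of v_j for 0 <= |x| <= j^(-1/n)
    s = sum((1 - j ** (2 / n) * r ** 2) ** i / i for i in range(1, m))
    return math.log(j) ** (1 / qp) / beta + n * beta / (2 * math.log(j) ** (1 / q)) * s

def middle(r):
    # branch of v_j for j^(-1/n) < |x| <= 1
    return -(n / beta) * math.log(j) ** (-1 / q) * math.log(r)

r0 = j ** (-1 / n)  # gluing radius
assert abs(inner(r0) - middle(r0)) < 1e-9
```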
Furthermore, we can check that $$\begin{aligned} |\na^m_g u_{\ep,j}(x)|&\leq \lt(\frac{1-|x|^2}2\rt)^m\lt((|x|^n \si_n)^{-\frac mn} + C |x|^{-m+1}\rt) (\ln j)^{-\frac1q}\\ & \leq 2^{-m}(\ln j)^{-\frac1q} \lt((|x|^n \si_n)^{-\frac mn} + C |x|^{-m+1}\rt)\end{aligned}$$ and $$\begin{aligned} |\na^m_g u_{\ep,j}(x)|&\geq \lt(\frac{1-|x|^2}2\rt)^m\lt((|x|^n \si_n)^{-\frac mn} - C |x|^{-m+1}\rt) (\ln j)^{-\frac1q}\\ & \geq \lt(\frac{1-\ep^2}2\rt)^{m}(\ln j)^{-\frac1q} \lt((|x|^n \si_n)^{-\frac mn} - C |x|^{-m+1}\rt)\end{aligned}$$ for $|x| \in (\ep j^{-\frac1n}, \ep )$ with $\ep \geqs 0$ small enough, where $C$ is a positive constant independent of $\ep$ and $j$. Define $$h_1(x) = \begin{cases} C2^{-m}(\ep^{-1} j^{\frac1n})^m (\ln j)^{-1/q}&\mbox{if $|x| \leq \ep j^{-\frac1n}$}\\ 2^{-m}(\ln j)^{-\frac1q} \lt((|x|^n \si_n)^{-\frac mn} + C |x|^{-m+1}\rt)&\mbox{if $|x| \in (\ep j^{-\frac1n}, \ep )$}\\ C2^{-m}\ep^{-m} (\ln j)^{-\frac1q}&\mbox{if $|x|\in (\ep, 2\ep)$}\\ 0&\mbox{if $|x| \in (2\ep,1)$}, \end{cases}$$ Then we have $0 \leq |\na_g^m u_{\ep,j}| \leq h_1$. Consequently, we get $0 \leq |\na^m_g u_{\ep,j}|^* \leq h_1^*$. Let us denote by $h_1^{*,e}$ the rearrangement function of $h_1$ with respect to Lebesgue measure. Since the support of $h_1$ is contained in $\{|x| \leq 2\ep\}$, then we can easily check that $$h_1^*(t) \leq h_1^{*,e}\lt(\Big(\frac{1-\ep^2}2\Big)^n t\rt).$$ Consequently, we have $$\|\na_g^m u_{\ep,j}\|_{\frac nm,q}^q \leq \lt(\frac2{1-\ep^2}\rt)^{mq} \int_0^\infty h_1^{*,e}(t)^q t^{\frac{mq}n -1} dt.$$ Notice that by enlarging the constant $C$ (which is still independent of $\ep$ and $j$), we can assume that $$C2^{-m}\ep^{-m} (\ln j)^{-\frac1q} \geq h_1\Big |_{\{|x| = \ep\}} =2^{-m}(\ln j)^{-\frac1q} \ep^{-m}\lt(\si_n^{-\frac mn} + C \ep \rt)$$ for $\ep \geqs 0$ small enough. For $j$ large enough, we can choose $x_0$ with $\ep j^{-\frac1n} \leqs |x_0| \leq \ep$ such that $C2^{-m}\ep^{-m} (\ln j)^{-\frac1q} = h_1(x_0)$.
It is easy to see that $c\ep \leq |x_0| \leq C \ep$ for constants $C, c \geqs 0$ independent of $\ep$ and $j$. We have $$h_1(x) \leq g(x) :=\begin{cases} h_1(x)&\mbox{if $|x| \leq |x_0| $}\\ C2^{-m}\ep^{-m} (\ln j)^{-\frac1q}&\mbox{if $|x|\in (|x_0|, 2\ep)$}\\ 0&\mbox{if $|x| \geq 2\ep$}. \end{cases}$$ Notice that $g$ is a non-increasing radially symmetric function in $\B^n$, hence $g^{\sharp,e} = g$. Using the function $g$, we can prove that $$\int_0^\infty h_1^{*,e}(t)^q t^{\frac{mq}n -1} dt \leq 2^{-mq}(1 + C (\ln j)^{-1}).$$ Therefore, we have $$\|\na_g^m u_{\ep,j}\|_{\frac nm,q}^q \leq \lt(\frac1{1-\ep^2}\rt)^{mq}(1 + C (\ln j)^{-1}).$$ Set $w_{\ep,j} = u_{\ep,j}/\|\na_g^m u_{\ep,j}\|_{\frac nm,q}$. For any $\beta \geqs \beta_{n,m}^{q'}$, we choose $\ep \geqs 0$ small enough such that $\gamma := \beta (1-\ep^2)^{\frac{mq}{q-1}}\geqs \be_{n,m}^{q'}$. Then we have $$\begin{aligned} \int_{\H^n} \Phi_{\frac nm,q}(\beta |w_{\ep,j}|^{q'}) dV_g &\geq \int_{\{|x|\leq \ep j^{-\frac1n}\}} \Phi_{\frac nm,q}\Big(\frac{\beta}{\|\na_g^m u_{\ep,j}\|_{\frac nm,q}^{q'}} |u_{\ep,j}|^{q'}\Big) dV_g\\ &\geq \int_{\{|x|\leq \ep j^{-\frac1n}\}} \Phi_{\frac nm,q}\Big(\frac{\ga}{(1+ C (\ln j)^{-1})^{q'}} |u_{\ep,j}|^{q'}\Big) dV_g\\ &\geq 2^n \int_{\{|x|\leq \ep j^{-\frac1n}\}} \Phi_{\frac nm,q}\Big(\frac{\ga}{(1+ C (\ln j)^{-1})^{q'}} |u_{\ep,j}|^{q'}\Big) dx\\ &= 2^n \ep^{n}\int_{\{|x|\leq j^{-\frac1n}\}} \Phi_{\frac nm,q}\Big(\frac{\ga}{(1+ C (\ln j)^{-1})^{q'}} |v_{j}|^{q'}\Big) dx\\ &\geq 2^n \ep^{n}\int_{\{|x|\leq j^{-\frac1n}\}} \Phi_{\frac nm,q}\Big(\frac{\ga}{\beta_{n,m}^{q'}}\frac{\ln j}{(1+ C (\ln j)^{-1})^{q'}} \Big) dx\\ &=2^n \ep^{n}\si_n \Phi_{\frac nm,q}\Big(\frac{\ga}{\beta_{n,m}^{q'}}\frac{\ln j}{(1+ C (\ln j)^{-1})^{q'}} \Big) e^{-\ln j}.\end{aligned}$$ Since $$\lim_{j\to \infty} \frac{\ga}{\beta_{n,m}^{q'}}\frac{\ln j}{(1+ C (\ln j)^{-1})^{q'}} = \infty,$$ then $$\Phi_{\frac nm,q}\Big(\frac{\ga}{\beta_{n,m}^{q'}}\frac{\ln j}{(1+ C (\ln j)^{-1})^{q'}} \Big) \geq C
e^{\frac{\ga}{\beta_{n,m}^{q'}}\frac{\ln j}{(1+ C (\ln j)^{-1})^{q'}}}$$ for $j$ large enough. Consequently, we get $$\int_{\H^n} \Phi_{\frac nm,q}(\beta |w_{\ep,j}|^{q'}) dV_g \geq 2^n \ep^n \si_n C e^{\frac{\ga}{\beta_{n,m}^{q'}}\frac{\ln j}{(1+ C (\ln j)^{-1})^{q'}} -\ln j} \to \infty$$ as $j\to \infty$ since $\ga \geqs \beta_{n,m}^{q'}$. This proves the sharpness of $\beta_{n,m}^{q'}$. The proof of Theorem \[MAINI\] is then completely finished. Proof of Theorem \[MAINII\] =========================== This section is devoted to the proof of Theorem \[MAINII\]. The proof is based on the inequalities and , the iteration argument and Theorem \[MAINI\] for $m\geq 3$. The case $m=2$ is proved by using inequality and the Moser–Trudinger inequality involving the fractional dimension in Lemma \[MT\] below. For $\theta \geqs 1$, we denote by $\lam_\theta$ the measure on $[0,\infty)$ of density $$d\lam_{\theta} = \theta \sigma_{\theta} x^{\theta -1} dx, \quad \sigma_{\theta} = \frac{ \pi^{\frac\theta 2}}{\Gamma(\frac\theta 2+ 1)}.$$ For $0\leqs R \leq \infty$ and $1\leq p \leqs \infty$, we denote by $L_\theta^p(0,R)$ the weighted Lebesgue space of all measurable functions $u: (0,R) \to \R$ for which $$\|u\|_{L^p_\theta(0,R)}= \lt(\int_0^R |u|^p d\lam_\theta\rt)^{\frac1p} \leqs \infty.$$ Besides, we define $$W^{1,p}_{\al,\theta}(0,R) =\Big\{u\in L^p_\theta(0,R)\, :\, u' \in L_\alpha^p(0,R),\,\, \lim_{x\to R^{-}} u(x) =0\Big\}, \quad \al, \theta \geqs 1.$$ In [@deOliveira], de Oliveira and do Ó proved the following sharp Moser–Trudinger inequality involving the measure $\lam_\theta$: suppose $0 \leqs R \leqs \infty$ and $\alpha \geq 2, \theta \geq 1$, then $$\label{eq:MTOO} D_{\al,\theta}(R) :=\sup_{u\in W^{1,\al}_{\al,\theta}(0,R),\, \|u'\|_{L^\alpha_\alpha(0,R)} \leq 1} \int_0^R e^{\mu_{\al,\theta} |u|^{\frac{\alpha}{\alpha -1}}} d\lam_\theta \leqs \infty$$ where $\mu_{\al,\theta} = \theta \alpha^{\frac1{\al -1}} \sigma_{\al}^{\frac1{\alpha -1}}$.
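For orientation, the normalizing constants can be checked numerically: $\sigma_\theta$ interpolates the volume of the unit ball through fractional dimensions, and for $\al=\theta=n$ the exponent becomes $n(n\si_n)^{1/(n-1)}$, which for $n=2$ equals $4\pi$, the classical Moser constant. A small sketch (this identification is our remark, not a claim of the source):

```python
import math

def sigma(theta):
    # sigma_theta = pi^(theta/2) / Gamma(theta/2 + 1): for integer theta this
    # is the volume of the unit theta-ball, extended here to fractional theta
    return math.pi ** (theta / 2) / math.gamma(theta / 2 + 1)

def mu(alpha, theta):
    # mu_{alpha,theta} = theta * alpha^(1/(alpha-1)) * sigma_alpha^(1/(alpha-1))
    return theta * (alpha * sigma(alpha)) ** (1 / (alpha - 1))

# integer theta recovers familiar ball volumes
assert abs(sigma(2) - math.pi) < 1e-12          # area of the unit disk
assert abs(sigma(3) - 4 * math.pi / 3) < 1e-12  # volume of the unit 3-ball
# for alpha = theta = n the exponent is n*(n*sigma_n)^(1/(n-1));
# for n = 2 this equals 4*pi
assert abs(mu(2, 2) - 4 * math.pi) < 1e-12
```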
Denote $D_{\al,\theta} = D_{\al,\theta}(1)$. It is easy to see that $D_{\al,\theta}(R) = D_{\al,\theta} R^\theta$. \[MT\] Let $\alpha \geqs 1$ and $q\geq 2$. There exists a constant $C_{\al,q} \geqs 0$ such that for any $u \in W^{1,q}_{q,\al}(0,\infty)$ with $u' \leq 0$ and $\|u\|_{L^q_{\al}(0,\infty)}^q + \|u'\|_{L^q_q(0,\infty)}^q\leq 1$, it holds $$\label{eq:Abreu} \int_0^\infty \Phi_{\frac q\al,q}(\mu_{q,1} |u|^{\frac q{q -1}}) d\lam_1 \leq C_{\al,q}.$$ We follow the argument in [@Ruf2005]. Since $u' \leq 0$, $u$ is a non-increasing function. Hence, for any $r \geqs 0$, it holds $$\label{eq:boundu} u(r)^q \leq \frac{1}{\si_\al r^\al} \int_0^r u(s)^q d\lam_\al \leq \frac{\int_0^\infty u(s)^q d\lam_\al}{\si_\al r^\al} \leq \frac{\|u\|_{L^q_\al(0,\infty)}^q}{\si_\al r^\al}.$$ For $R \geqs 0$, define $w(r) = u(r) - u(R)$ for $r \leq R$ and $w(r) =0$ for $r \geqs R$. Then $w \in W^{1,q}_{q,q}(0,R)$ and $$\label{eq:on0R} \|w'\|_{L^q_q(0,R)}^q = \int_0^R |u'(s)|^q d\lam_q \leq 1 - \|u\|_{L^q_\al(0,\infty)}^q.$$ For $r \leq R$, we have $u(r) = w(r) + u(R)$. Since $q \geq 2$, there exists $C \geqs 0$ depending only on $q$ such that $$u(r)^{q'} \leq w(r)^{q'} + C w(r)^{q'-1} u(R) + u(R)^{q'}.$$ Applying Young’s inequality and , we get $$\begin{aligned} \label{eq:E1} u(r)^{q'} &\leq w(r)^{q'}\lt(1+ \frac{C}{q} u(R)^q\rt) + \frac{q-1}q + u(R)^{q'}\notag\\ &\leq w(r)^{q'}\lt(1+ \frac{C}{q\si_\al R^\al}\rt) + \frac{q-1}q + \lt(\frac1{\si_\al R^\al}\rt)^{q'-1}.\end{aligned}$$ Fix $R \geq 1$ large enough such that $\frac{C}{q\si_\al R^\al} \leq 1$, and set $$v(r) = w(r) \lt(1+ \frac{C}{q\si_\al R^\al}\rt)^{\frac{q-1}q}.$$ Using and the choice of $R$, we can easily verify that $\|v'\|_{L^q_q(0,R)}^q \leq 1$.
Hence, applying , we get $$\label{eq:on0R1} \int_0^R e^{\mu_{q,1} |v|^{q'}} d\lam_1 \leq D_{q,1} R.$$ For $r \geq R$, we have $u(r) \leq \si_\al^{-\frac1q} R^{-\frac\al q}$, hence it holds $$\Phi_{\frac q\al,q} (\mu_{q,1} |u(r)|^{q'}) \leq C |u(r)|^{q'(j_{\al,q}-1)} \leq C r^{-\frac{\al}{q-1}(j_{\al,q} -1)}.$$ By the choice of $j_{\al,q}$, we have $$\label{eq:E2} \int_R^\infty \Phi_{\frac q\al,q}(\mu_{q,1} |u(r)|^{q'}) d\lam_1 \leq C.$$ Putting , , together and using $R\geq 1$, we get $$\begin{aligned} \int_0^\infty \Phi_{\frac q\al,q}(\mu_{q,1} |u|^{q'}) d\lam_1 &\leq \int_0^R \Phi_{\frac q\al,q}(\mu_{q,1} |u|^{q'}) d\lam_1 + \int_R^\infty \Phi_{\frac q\al,q}(\mu_{q,1} |u|^{q'}) d\lam_1 \\ &\leq \int_0^R \exp\Big(\mu_{q,1} |u|^{q'}\Big) d\lam_1 + C\\ &\leq \int_0^R \exp\Big(\mu_{q,1} v^{q'} + \mu_{q,1} \big(\frac{q-1}q + \si_\al^{-\frac1{q-1}}\big)\Big) d\lam_1 + C\\ &\leq \exp\Big(\mu_{q,1} \big(\frac{q-1}q + \si_\al^{-\frac1{q-1}}\big)\Big)D_{q,1} R + C\\ &\leq C.\end{aligned}$$ Let $\tau \geqs 0$ and $u \in W^{1,q}_{q,\al}(0,\infty)$ be such that $u' \leq 0$ and $\tau \|u\|_{L^q_\alpha(0,\infty)}^q + \|u'\|_{L^q_q(0,\infty)}^q\leq 1$. Applying to the function $u_\tau(x) = u(\tau^{-\frac1\al} x)$ and making the change of variables, we obtain $$\label{eq:Abreu1} \int_0^\infty \Phi_{\frac q \alpha,q}(\mu_{q,1} |u|^{q'}) d\lam_1 \leq C \tau^{-\frac 1\al}.$$ We are now ready to give the proof of Theorem \[MAINII\]. We divide the proof into the following cases.\ *Case 1: $m=2$.* Let $u \in C_0^\infty(\H^n)$ with $\|\Delta_g u\|_{\frac n2,q}^q - \lam \|u\|_{\frac n2,q}^q \leq 1$. Define $v$ by and $\tilde v(x) = v(V_g(B(0,d(0,x))))$, then $u^* \leq v$, $\|\Delta_g u\|_{\frac n2,q} = \|\Delta_g \tilde v\|_{\frac n2,q}$ and $\|u\|_{\frac n2,q} \leq \|\tilde v\|_{\frac n2,q}$.
So, we have $$\|\Delta_g \tilde v\|_{\frac n2,q}^q -\lam \|\tilde v\|_{\frac n2,q}^q \leq 1.$$ We show that $\int_{\H^n} \Phi_{\frac n2,q}(\beta_{n,2}^{q'} |\tilde v|^{q'}) dV_g \leq C.$ Set $\kappa = C(n,2,n/2)^q -\lam \geqs 0$. Applying the inequality to $\tilde v$, we get $$\lt(n(n-2) \si_n^{\frac2n}\rt)^q \int_0^\infty |v'(t)|^q t^{ q -1} dt + \kappa \int_0^\infty v(t)^q t^{\frac{2q}n -1} dt \leq 1.$$ Define $$w = \frac{n(n-2) \si_n^{\frac2n}}{(q \si_q)^{\frac1q}} v,\quad \tau = \frac{q \si_q}{(n(n-2) \si_n^{\frac2n})^q \frac{2q}n \si_{\frac{2q}n}}\kappa,$$ then we have $$\int_0^\infty |w'|^q d\lam_q + \tau \int_0^\infty |w|^q d \lam_{\frac{2q}n} \leq 1.$$ Applying the inequality , we obtain $$\int_0^\infty \Phi_{\frac{n}2,q} (\mu_{q,1} w^{\frac q{q-1}}) d\lam_1 \leq C_{\frac{2q}n,q} \tau^{-\frac n{2q}}.$$ Notice that $$\int_{\H^n} \Phi_{\frac n2,q} (\beta_{n,2}^{q'} |\tilde v|^{q'}) dV_g = \frac12 \int_0^\infty \Phi_{\frac{n}2,q}(\beta_{n,2}^{q'} |v|^{q'}) d\lam_1=\frac12 \int_0^\infty \Phi_{\frac{n}2,q} (\mu_{q,1} w^{\frac q{q-1}}) d\lam_1.$$ Hence, it holds $$\int_{\H^n} \Phi_{\frac n2,q} (\beta_{n,2}^{q'} |\tilde v|^{q'}) dV_g \leq \frac12 C_{\frac{2q}n,q} \tau^{-\frac n{2q}}.$$ This completes the proof of this case.\ *Case 2: $m = 2k$, $k\geq 2$.* Denote $\tau = C(n,2k, \frac n{2k})^q -\lam \geqs 0$.
We have $$1\geq \|\Delta^k_g u\|_{\frac n{2k},q}^q - \lam \|u\|_{\frac n{2k},q}^q \geq \tau \|u\|_{\frac n{2k},q}^q,$$ which yields $$\label{eq:normu} \|u\|_{\frac n{2k},q}^q \leq \tau^{-1}.$$ On the other hand, by the Lorentz–Poincaré inequality and the Poincaré–Sobolev inequality under Lorentz–Sobolev norm , we have $$\begin{aligned} \|\Delta^k_g u\|_{\frac n{2k},q}^q - \lam \|u\|_{\frac n{2k},q}^q &\geq \|\Delta^k_g u\|_{\frac n{2k},q}^q - C(n,2k,\frac{n}{2k})^q\|u\|_{\frac n{2k},q}^q + \tau \|u\|_{\frac n{2k},q}^q\\ &\geq \|\Delta^k_g u\|_{\frac n{2k},q}^q - C(n,2,\frac{n}{2k})^q\|\Delta^{k-1}_g u\|_{\frac n{2k},q}^q + \tau \|u\|_{\frac n{2k},q}^q\\ &\geq (2(k-1)(n-2k) \si_n^{\frac 2n})^q \|\Delta^{k-1}_g u\|_{\frac n{2(k-1)},q}^ q + \tau \|u\|_{\frac n{2k},q}^q.\end{aligned}$$ Set $w = 2(k-1)(n-2k) \si_n^{\frac 2n} u$; then we have $\|\Delta^{k-1}_g w\|_{\frac n{2(k-1)},q}^ q \leq 1$. Applying the Adams inequality , we obtain $$\int_{\H^n} \Phi_{n,2(k-1),q}(\beta_{n,2k}^{q'} |u|^{q'}) dV_g = \int_{\H^n} \Phi_{n,2(k-1),q}(\beta_{n,2(k-1)}^{q'} |w|^{q'}) dV_g \leq C,$$ here we use $$\beta_{n,2k} = 2(k-1)(n-2k) \si_n^{\frac 2n} \beta_{n,2(k-1)}.$$ Using and repeating the last argument in the proof of *Case 2* in the proof of Theorem \[MAINI\], we obtain in this case.\ *Case 3: $m =2k+1$, $k\geq 1$.* Denote $\tau = C(n,2k+1,\frac n{2k+1})^q -\lam \geqs 0$.
Since $\frac{2n}{n-1} \leq q \leq \frac n{2k+1}$, using the Lorentz–Poincaré inequality and the Poincaré–Sobolev inequality under Lorentz–Sobolev norm , we get $$\begin{aligned} 1&\geq \|\na_g \Delta_g^k u\|_{\frac n{2k+1},q}^q - \lam \|u\|_{\frac n{2k+1},q}^q \\ &\geq \|\na_g \Delta_g^k u\|_{\frac n{2k+1},q}^q - C(n,2k+1,\frac n{2k+1})^q \|u\|_{\frac n{2k+1},q}^q + \tau \|u\|_{\frac n{2k+1},q}^q\\ &\geq \|\na_g \Delta_g^k u\|_{\frac n{2k+1},q}^q - \lt(\frac{(2k+1)(n-1)}{n}\rt)^q \|\Delta_g^k u\|_{\frac n{2k+1},q}^q + \tau \|u\|_{\frac n{2k+1},q}^q\\ &\geq (2k \si_n^{\frac 1n})^q \|\Delta_g^k u\|_{\frac n{2k},q}^q + \tau \|u\|_{\frac n{2k+1},q}^q.\end{aligned}$$ We can now use the argument in the proof of *Case 2* to obtain the result in this case. The proof of Theorem \[MAINII\] is then completely finished. Proof of Theorem \[HARDYADAMS\] =============================== In this section, we provide the proof of Theorem \[HARDYADAMS\]. The proof uses the Lorentz–Poincaré inequality , the Poincaré–Sobolev inequality under Lorentz–Sobolev norm and , and the Adams type inequality . We divide the proof into two cases according to whether $m$ is even or odd.\ *Case 1: $m=2k$, $k\geq 2$.* Using the Lorentz–Poincaré inequality and the inequality , we have $$\begin{aligned} 1\geq \|\Delta^k_g u\|_{\frac n{2k},q}^q - C(n,2k,\frac{n}{2k})^q\|u\|_{\frac n{2k},q}^q &\geq \|\Delta^k_g u\|_{\frac n{2k},q}^q - C(n,2,\frac{n}{2k})^q\|\Delta^{k-1}_g u\|_{\frac n{2k},q}^q\notag\\ &\geq (2(k-1)(n-2k) \si_n^{\frac 2n})^q \|\Delta^{k-1}_g u\|_{\frac n{2(k-1)},q}^q.\end{aligned}$$ Let us define the function $w$ by $w = 2(k-1)(n-2k) \si_n^{\frac 2n} u$. Then we have $\|\Delta^{k-1}_g w\|_{\frac n{2(k-1)},q}^ q \leq 1$.
Applying the Adams type inequality , we obtain $$\label{eq:aa0} \int_{\H^n} \Phi_{n,2(k-1),q}(\beta_{n,2k}^{q'} |u|^{q'}) dV_g = \int_{\H^n} \Phi_{n,2(k-1),q}(\beta_{n,2(k-1)}^{q'} |w|^{q'}) dV_g\leq C,$$ here we use $$\beta_{n,2k} = 2(k-1)(n-2k) \si_n^{\frac 2n} \beta_{n,2(k-1)}.$$ It follows from and the fact $\Phi_{n,2(k-1),q}(t) \geq C t^{j_{\frac{n}{2(k-1)},q}-1}$ that $$\int_0^\infty (u^*(t))^{q'(j_{\frac{n}{2(k-1)},q}-1)} dt = \int_{\H^n} |u|^{q'(j_{\frac{n}{2(k-1)},q}-1)} dV_g \leq C.$$ Using the fact that $u^*$ is non-increasing, we can easily verify that $$u^*(t) \leq C t^{-1/(q'(j_{\frac{n}{2(k-1)},q}-1))}$$ for any $t \geqs 0$. Let $x_0 \in \B^n$ be such that $V_g(B(0,d(0,x_0))) = 1$. Since the function $h(x) = (1-|x|^2)^n$ is decreasing with respect to $d(0,x)$, then $h^\sharp = h$. Using the Hardy–Littlewood inequality, we have $$\begin{aligned} \label{eq:aa1} \int_{\B^n} e^{\beta_{n,2k}^{q'} |u|^{q'}} dx = 2^{-n} \int_{\H^n} e^{\beta_{n,2k}^{q'} |u|^{q'}} h(x) dV_g& \leq 2^{-n} \int_{\H^n} e^{\beta_{n,2k}^{q'} |u^\sharp|^{q'}} h(x) dV_g\notag\\ &= 2^{-n} \int_0^\infty e^{\beta_{n,2k}^{q'} |u^*(t)|^{q'}} h(t) dt.\end{aligned}$$ For $t \geq 1$ we have $u^*(t) \leq C$, hence it holds $$\label{eq:aa2} 2^{-n}\int_1^\infty e^{\beta_{n,2k}^{q'} |u^*(t)|^{q'}} h(t) dt \leq C2^{-n} \int_1^\infty h(t) dt =C \int_{\{|x| \geq |x_0|\}} dx \leq C\si_n.$$ Notice that $$e^t = \Phi_{\frac n{2(k-1)},q}(t) + \sum_{j=0}^{j_{\frac{n}{2(k-1)},q} -2} \frac{t^j}{j!}.$$ Using Young’s inequality, we get $$e^t \leq \Phi_{\frac n{2(k-1)},q}(t) + C(1+ t^{j_{\frac{n}{2(k-1)},q} -2}).$$ Consequently, by using the previous inequality and the inequality and the fact $h\leq 1$, we obtain $$\begin{aligned} \label{eq:aa3} \int_0^1 e^{\beta_{n,2k}^{q'} |u^*(t)|^{q'}} h(t) dt&\leq \int_0^1 \Phi_{\frac n{2(k-1)},q}(\beta_{n,2k}^{q'} |u^*(t)|^{q'}) dt + C \int_0^1\lt(1 + (u^*(t))^{q'(j_{\frac{n}{2(k-1)},q} -2)}\rt) dt\notag\\ &\leq \int_0^\infty \Phi_{\frac n{2(k-1)},q}(\beta_{n,2k}^{q'} |u^*(t)|^{q'}) dt +
C + C\int_0^1(u^*(t))^{q'(j_{\frac{n}{2(k-1)},q} -2)} dt\notag\\ &\leq \int_{\H^n} \Phi_{n,2(k-1),q}(\beta_{n,2k}^{q'} |u|^{q'}) dV_g + C + C \int_0^1 t^{-\frac{j_{\frac{n}{2(k-1)},q} -2}{j_{\frac{n}{2(k-1)},q} -1}} dt\notag\\ &\leq C.\end{aligned}$$ Combining , and we obtain the desired estimate.\ *Case 2: $m=2k+1$, $k\geq 1$.* Since $\frac{2n}{n-1} \leq q \leq \frac{n}{2k+1}$, by using the Lorentz–Poincaré inequality and the Poincaré–Sobolev inequality under Lorentz–Sobolev norm , we get $$\begin{aligned} 1&\geq \|\na_g \Delta_g^k u\|_{\frac n{2k+1},q}^q - C(n,2k+1,\frac n{2k+1})^q \|u\|_{\frac n{2k+1},q}^q \\ &\geq \|\na_g \Delta_g^k u\|_{\frac n{2k+1},q}^q - \lt(\frac{(2k+1)(n-1)}{n}\rt)^q \|\Delta_g^k u\|_{\frac n{2k+1},q}^q\\ &\geq (2k \si_n^{\frac 1n})^q \|\Delta_g^k u\|_{\frac n{2k},q}^q.\end{aligned}$$ Setting $w = 2k \si_n^{\frac 1n} u$, we have $\|\Delta^k_g w\|_{\frac n{2k}, q}^q \leq 1$. Applying the Adams type inequality , we obtain $$\label{eq:bb0} \int_{\H^n} \Phi_{n,2k,q}(\beta_{n,2k+1}^{q'} |u|^{q'}) dV_g = \int_{\H^n} \Phi_{n,2k,q}(\beta_{n,2k}^{q'} |w|^{q'}) dV_g\leq C,$$ here we use $$\beta_{n,2k+1} = 2k \si_n^{\frac 1n} \beta_{n,2k}.$$ As in *Case 1*, the inequality yields $$\int_0^\infty (u^*(t))^{q'(j_{\frac{n}{2k},q}-1)} dt = \int_{\H^n} |u|^{q'(j_{\frac{n}{2k},q}-1)} dV_g \leq C,$$ which implies $$u^*(t) \leq C t^{-\frac1{q'(j_{\frac{n}{2k},q}-1)}},\quad t \geqs 0.$$ Repeating the last arguments in the proof of *Case 1*, we obtain the result in this case.\ The proof of Theorem \[HARDYADAMS\] is then completed. [10]{} S. Adachi and K. Tanaka. Trudinger type inequalities in [$\bold R^N$]{} and their best exponents. , 128(7):2051–2057, 2000. D. R. Adams. A sharp inequality of [J]{}. [M]{}oser for higher order derivatives. , 128(2):385–398, 1988. Adimurthi and O. Druet. Blow-up analysis in dimension 2 and a sharp form of [T]{}rudinger-[M]{}oser inequality. , 29(1-2):295–322, 2004. Adimurthi and K. Sandeep.
A singular [M]{}oser-[T]{}rudinger embedding and its applications. , 13(5-6):585–603, 2007. Adimurthi and K. Tintarev. On a version of [T]{}rudinger-[M]{}oser inequality with [M]{}öbius shift invariance. , 39(1-2):203–212, 2010. Adimurthi and Y. Yang. An interpolation of [H]{}ardy inequality and [T]{}rundinger-[M]{}oser inequality in [$\Bbb R^N$]{} and its applications. , (13):2394–2426, 2010. A. Alberico. Moser type inequalities for higher-order derivatives in [L]{}orentz spaces. , 28(4):389–400, 2008. A. Alvino, V. Ferone, and G. Trombetti. Moser-type inequalities in [L]{}orentz spaces. , 5(3):273–299, 1996. Z. M. Balogh, J. J. Manfredi, and J. T. Tyson. Fundamental solution for the [$Q$]{}-[L]{}aplacian and sharp [M]{}oser-[T]{}rudinger inequality in [C]{}arnot groups. , 204(1):35–49, 2003. C. Bennett and R. Sharpley. , volume 129 of [*Pure and Applied Mathematics*]{}. Academic Press, Inc., Boston, MA, 1988. J. Bertrand and K. Sandeep. Adams inequality on pinched hadamard manifolds. , 2019. L. Carleson and S.-Y. A. Chang. On the existence of an extremal function for an inequality of [J]{}. [M]{}oser. , 110(2):113–127, 1986. D. Cassani and C. Tarsi. A [M]{}oser-type inequality in [L]{}orentz-[S]{}obolev spaces for unbounded domains in [$\Bbb R^N$]{}. , 64(1-2):29–51, 2009. L. Chen, G. Lu, and M. Zhu. Existence and nonexistence of extremals for critical adams inequalities in $\mathbb r^4$ and trudinger–moser inequalities in $\mathbb r^2$. , 2018. W. S. Cohn and G. Lu. Best constants for [M]{}oser-[T]{}rudinger inequalities on the [H]{}eisenberg group. , 50(4):1567–1591, 2001. W. S. Cohn and G. Z. Lu. Best constants for [M]{}oser-[T]{}rudinger inequalities, fundamental solutions and one-parameter representation formulas on groups of [H]{}eisenberg type. , 18(2):375–390, 2002. J. F. de Oliveira and J. a. M. do Ó. Trudinger-[M]{}oser type inequalities for weighted [S]{}obolev spaces involving fractional dimensions. , 142(8):2813–2828, 2014. A. DelaTorre and G. 
Mancini. Improved adams–type inequalities and their extremals in dimension $2m$. , 2017. J. a. M. do Ó and M. de Souza. A sharp inequality of [T]{}rudinger-[M]{}oser type and extremal functions in [$H^{1,n}(\Bbb{R}^n)$]{}. , 258(11):4062–4101, 2015. Y. Q. Dong and Q. H. Yang. An interpolation of [H]{}ardy inequality and [M]{}oser-[T]{}rudinger inequality on [R]{}iemannian manifolds with negative curvature. , 32(7):856–866, 2016. M. Flucher. Extremal functions for the [T]{}rudinger-[M]{}oser inequality in [$2$]{} dimensions. , 67(3):471–497, 1992. L. Fontana and C. Morpurgo. Sharp exponential integrability for critical [R]{}iesz potentials and fractional [L]{}aplacians on [$\Bbb R^n$]{}. , 167:85–122, 2018. L. Fontana and C. Morpurgo. Adams inequalities for [R]{}iesz subcritical potentials. , 192:111662, 32, 2020. V. I. Judovič. Some estimates connected with integral operators and with solutions of elliptic equations. , 138:805–808, 1961. D. Karmakar and K. Sandeep. Adams inequality on the hyperbolic space. , 270(5):1792–1817, 2016. N. Lam and G. Lu. Sharp [A]{}dams type inequalities in [S]{}obolev spaces [$W^{m,\frac{n}{m}} (\Bbb R^n)$]{} for arbitrary integer [$m$]{}. , 253(4):1143–1171, 2012. N. Lam and G. Lu. Sharp [M]{}oser-[T]{}rudinger inequality on the [H]{}eisenberg group at the critical case and applications. , 231(6):3259–3287, 2012. N. Lam and G. Lu. Sharp singular [A]{}dams inequalities in high order [S]{}obolev spaces. , 19(3):243–266, 2012. N. Lam and G. Lu. A new approach to sharp [M]{}oser-[T]{}rudinger and [A]{}dams type inequalities: a rearrangement-free argument. , 255(3):298–325, 2013. J. Li, G. Lu, and Q. Yang. Fourier analysis and optimal [H]{}ardy-[A]{}dams inequalities on hyperbolic spaces of any even dimension. , 333:350–385, 2018. X. Li and Y. Yang.
Extremal functions for singular [T]{}rudinger-[M]{}oser inequalities in the entire [E]{}uclidean space. , 264(8):4901–4943, 2018. Y. Li and B. Ruf. A sharp [T]{}rudinger-[M]{}oser type inequality for unbounded domains in [$\Bbb R^n$]{}. , 57(1):451–480, 2008. K.-C. Lin. Extremal functions for [M]{}oser’s inequality. , 348(7):2663–2671, 1996. G. Lu and H. Tang. Best constants for [M]{}oser-[T]{}rudinger inequalities on high dimensional hyperbolic spaces. , 13(4):1035–1052, 2013. G. Lu and H. Tang. Sharp singular [T]{}rudinger-[M]{}oser inequalities in [L]{}orentz-[S]{}obolev spaces. , 16(3):581–601, 2016. G. Lu and Q. Yang. Sharp [H]{}ardy-[A]{}dams inequalities for bi-[L]{}aplacian on hyperbolic space of dimension four. , 319:567–598, 2017. G. Lu and Y. Yang. Adams’ inequalities for bi-[L]{}aplacian and extremal functions in dimension four. , 220(4):1135–1170, 2009. G. Lu and M. Zhu. A sharp [T]{}rudinger-[M]{}oser type inequality involving [$L^n$]{} norm in the entire space [$\Bbb{R}^n$]{}. , 267(5):3046–3082, 2019. G. Mancini and L. Martinazzi. Extremals for fractional moser–trudinger inequalities in dimension $1$ via harmonic extensions and commutator estimates. , 2019. G. Mancini and K. Sandeep. Moser-[T]{}rudinger inequality on conformal discs. , 12(6):1055–1068, 2010. G. Mancini, K. Sandeep, and C. Tintarev. Trudinger-[M]{}oser inequality in the hyperbolic space [${\Bbb H}^N$]{}. , 2(3):309–324, 2013. L. Martinazzi. Fractional [A]{}dams-[M]{}oser-[T]{}rudinger type inequalities. , 127:263–278, 2015. J. Moser. A sharp form of an inequality by [N]{}. [T]{}rudinger. , 20:1077–1092, 1970/71. Q. A. Ngô and V. H. Nguyen. Sharp adams–moser–trudinger type inequalities in the hyperbolic space. , 2016. Q. A. Ngô and V. H. Nguyen. Sharp constant for [P]{}oincaré-type inequalities in the hyperbolic space. , 44(3):781–795, 2019. V. H. Nguyen. A sharp adams inequality in dimension four and its extremal functions. , 2017. V. H. Nguyen. 
Improved [M]{}oser-[T]{}rudinger type inequalities in the hyperbolic space [$\Bbb{H}^n$]{}. , 168:67–80, 2018. V. H. Nguyen. Improved singular [M]{}oser-[T]{}rudinger and their extremal functions. , 2018. V. H. Nguyen. The sharp [P]{}oincaré-[S]{}obolev type inequalities in the hyperbolic spaces [$\Bbb{H}^n$]{}. , 462(2):1570–1584, 2018. V. H. Nguyen. Extremal functions for the [M]{}oser-[T]{}rudinger inequality of [A]{}dimurthi-[D]{}ruet type in [$W^{1,N}(\Bbb R^N)$]{}. , 21(4):1850023, 37, 2019. V. H. Nguyen. The sharp hardy-moser-trudinger inequality in dimension $n$. , 2019. V. H. Nguyen. The sharp [S]{}obolev type inequalities in the [L]{}orentz–[S]{}obolev spaces in the hyperbolic spaces. , 2019. V. H. Nguyen. The sharp higher order [L]{}orentz-[P]{}oincaré and [L]{}orentz-[S]{}obolev inequalities in the hyperbolic spaces. , 2020. V. H. Nguyen. Singular adams inequalities in [L]{}orentz–[S]{}obolev spaces. , 2020. S. I. Pohožaev. On the eigenfunctions of the equation [$\Delta u+\lambda f(u)=0$]{}. , 165:36–39, 1965. B. Ruf. A sharp [T]{}rudinger-[M]{}oser type inequality for unbounded domains in [$\Bbb R^2$]{}. , 219(2):340–367, 2005. B. Ruf and F. Sani. Sharp [A]{}dams-type inequalities in [$\Bbb{R}^n$]{}. , 365(2):645–670, 2013. C. Tintarev. Trudinger-[M]{}oser inequality with remainder terms. , 266(1):55–66, 2014. N. S. Trudinger. On imbeddings into [O]{}rlicz spaces and some applications. , 17:473–483, 1967. G. Wang and D. Ye. A [H]{}ardy-[M]{}oser-[T]{}rudinger inequality. , 230(1):294–320, 2012. Q. Yang and Y. Li. Trudinger-[M]{}oser inequalities on hyperbolic spaces under [L]{}orentz norms. , 472(1):1236–1252, 2019. Q. Yang, D. Su, and Y. Kong. Sharp [M]{}oser-[T]{}rudinger inequalities on [R]{}iemannian manifolds with negative curvature. , 195(2):459–471, 2016. Y. Yang. A sharp form of [M]{}oser-[T]{}rudinger inequality in high dimension. , 239(1):100–126, 2006. [^1]: Email: [vanhoang0610@yahoo.com](mailto: Van Hoang Nguyen <vanhoang0610@yahoo.com>). 
[^2]: 2010 *Mathematics Subject Classification*: 26D10, 46E35, 46E30. [^3]: *Key words and phrases*: Adams inequality, improved Adams inequality, Hardy–Adams inequality, Lorentz–Sobolev space, hyperbolic spaces, rearrangement argument.
--- abstract: 'The general solution of Einstein’s gravity equation in $D$ dimensions for an anisotropic and spherically symmetric matter distribution is calculated in a bulk with position-dependent cosmological constant. Results for $n$ concentric $(D-2)-$branes with arbitrary mass, radius, and pressure, with different cosmological constants between the branes, are found. It is shown how the different cosmological constants contribute to the effective mass of each brane. It is also shown that the equation of state for each brane influences the dynamics of the branes, which can be divided into eras according to the dominant matter. This scenario can be used to model the universe in the $D=5$ case, which may present a richer phenomenology than the current models. The evolution law of the branes is studied, and the anisotropic pressure that removes divergences is found. The Randall-Sundrum metric in the exterior region is also derived in the flat-brane limit.' author: - 'I. C. Jardim' - 'R. R. Landim' - 'G. Alencar' - 'R. N. Costa Filho' title: 'Construction of multiple spherical branes cosmological scenario\' --- Introduction ============ The general model for the cosmos is based on the description of the universe as a perfect fluid that admits a global cosmic time. This scenario has a space-time with constant curvature given by the Friedmann-Robertson-Walker metric, whose dynamics is determined by a cosmological scale factor that depends on the fluid equation of state. Despite the success of that model in the description of the primordial nucleosynthesis and the cosmic microwave background, it has shortcomings that led to the emergence of new models.
Among the major flaws of the current model are the dark energy problem, related to the accelerated expansion observed in the present universe, and the dark matter problem, i.e., the discrepancy between the rotation of the halos of some galaxies and the amount of matter contained in them according to gravitational dynamics [@weinberg:cosmology]. Cosmological models with extra dimensions appeared first in Kaluza-Klein models and later in Randall-Sundrum scenarios [@Randall:1999vf; @Randall:1999ee]. These models describe the observed universe as a brane universe in a hyper-dimensional space-time. Although the Friedmann-Robertson-Walker metric does not determine the geometry of the observed universe, the majority of studies have focused on flat geometry. Because of its simplicity, this geometry is not able to change the dynamics of the universe and thus cannot solve the problem of dark matter or the initial singularity. Although the first studies to describe the universe as a spherical shell date back to the 80’s [@Rubakov:1983bb; @Visser:1985qm; @Squires:1985aq], the spherical brane-universe has shown very rich phenomenology in the past decade [@Gogberashvili:1998iu; @Boyarsky:2004bu]. Besides being compatible with the observational data [@Tonry:2003zg; @Luminet:2003dx; @Overduin:1998pn], the models provide an explanation for the isotropic runaway of galaxies (isotropic expansion), the existence of a preferred frame, and a cosmic time. They show how the introduction of different cosmological constants in each region of the bulk can change the dynamics of the cosmological scale factor so as to make it compatible with the observed dynamics [@Knop:2003iy; @Riess:2004nr] without the introduction of dark energy [@Gogberashvili:2005wy].
Similar to other models with extra dimensions, the spherical shell models open the possibility of obtaining an energy scale in order to solve the problem of hierarchy [@Gogberashvili:1998vx] and can be used as a basis for systems with varying speed of light in the observed universe [@Gogberashvili:2006dz]. The introduction of other branes and different cosmological constants can modify the overall dynamics of the observed universe. Local fluctuations of density can change the local dynamics, such as galactic dynamics (since the field of other branes interacts gravitationally with the matter of the brane-universe), without dark matter. In this work we extend and generalize the scenario of the world as one expanding shell [@Gogberashvili:1998iu] to multiple concentric spherical $(D-2)$-branes in a $D$ dimensional space-time. For this, we solve the Einstein’s equation in $D$ dimensions for $n$ $(D-2)-$branes with different masses in a space with different cosmological constants between the branes. A previous study considered a continuous distribution of matter. However, only one cosmological constant was used [@Das:2001md]. We solve the $D-$dimensional case, but for a cosmological model we limit ourselves to the case $D=5$, since the observed universe has only three spatial dimensions. This work is organized as follows: In the second section we review the Einstein’s equations in $D$ dimensions with a cosmological constant for spherically symmetric matter distribution. In the third section we solve this set of equations for $n$ shells with different cosmological constants $\Lambda$ between them. In the fourth section, the energy-momentum tensor conservation law is used to determine the possible anisotropic pressure which removes the divergences in the brane evolution equation. In the fifth section we particularize the solution found to take the flat brane limit in order to obtain the Randall-Sundrum metric in the exterior region. 
In the last section we discuss the conclusions and possible consequences. Static and Spherically Symmetric Space-time in $D$ Dimensions ============================================================= To learn about the gravitational effect of a distribution of matter we must determine the geometry of space-time. For this we need to know the $D(D+1)/2$ independent components of the metric by solving the Einstein’s equation. However, it is possible to use the symmetry of the problem to reduce these components to just two, given by the invariant line element [@Gogberashvili:1998iu], $$ds^{2} = -A(r,t)dt^{2} +B(r,t)dr^{2} +r^{2}d\Omega^{2}_{D-2}$$ where $\Omega_{D-2}$ is the element of solid angle in $D$ dimensions, formed by $D-2$ angular variables. Therefore we are left with only two functions, $A(r,t)$ and $B(r,t)$, to be determined by the Einstein’s equation in $D$ dimensions $$\label{einD} R_{\mu}^{\nu} -\frac{1}{2}R\delta_{\mu}^{\nu} +\Lambda\delta_{\mu}^{\nu} = \kappa_{D}T_{\mu}^{\nu},$$ where $\Lambda$ is the cosmological constant, which depends on $r$ and possibly on $t$. Also $\kappa_{D}$ is the gravitational coupling constant in $D$ dimensions. 
Due to the symmetries of the problem we only have four non null independent components of the Einstein’s equation (\[einD\]), which are $$\begin{aligned} \kappa_{D}T_{0}^{0} &=&-\frac{D-2}{2r^{2}}\left[(D-3)\left(1 -B^{-1}\right) +\frac{rB'}{B^{2}}\right] +\Lambda \label{ein00}, \\ \kappa_{D}T_{1}^{1} &=& -\frac{D-2}{2r^{2}}\left[(D-3)\left(1 -B^{-1}\right) -\frac{rA'}{AB}\right] +\Lambda \label{ein11},\\ \kappa_{D}T^{1}_{0} &=& \frac{D-2}{2r}\frac{\dot{B}}{B^{2}}, \label{ein10}\\ \kappa_{D}T_{2}^{2} &=& \frac{1}{4A}\left[\frac{\dot{A}\dot{B}}{AB} +\frac{\dot{B}^{2}}{B^{2}} -\frac{2\ddot{B}}{B}\right] +\frac{(D-3)(D-4)}{2Br^{2}} - \nonumber \\&&-\frac{2(D-3)(D-4)}{r^{2}} +\frac{(D-3)}{2Br}\left(\frac{A'}{A} -\frac{B'}{B}\right) + \nonumber \\&& +\frac{1}{4B}\left[\frac{2A''}{A} -\frac{A'^{2}}{A^{2}} -\frac{A'B'}{AB}\right] +\Lambda \label{ein22},\end{aligned}$$ where the prime means derivation with respect to $r$ and the dot is the derivative with respect to $t$. We can see that if we know $T^{0}_{0}$, $T^{1}_{1}$ and $\Lambda$ we can, from (\[ein00\]) and (\[ein11\]), completely determine the solutions with two boundary conditions. This comes from the fact that we have two first order differential equations. In this case the remaining equations determine the flow of energy $T^{1}_{0} $, and the tangential stresses $ T^{2}_{2}$. To find the exact solution we need to specify the form of matter $T^{\mu}_{\nu}$ which we use. General Solution for Thin Spherical Branes =========================================== The cosmological scenario we shall consider consists of $n$ concentric spherical delta type $(D-2)$-branes in a $D$ dimensional space with different cosmological constant between them. As said in the introduction this is a generalization of [@Gogberashvili:1998iu]. 
For this we fix the energy-momentum tensor and the cosmological constant to the form $$\label{fixT} T_{0}^{0}(r,t) = -\sum_{i=1}^{n}\rho_{i}\delta(r-R_{i}), \quad T_{1}^{1} = \sum_{i=1}^{n}P_{i}\delta(r-R_{i}),$$ and $$\Lambda(r,t) = \frac{(D-1)(D-2)}{2}\sum_{i=0}^{n}\lambda_{i}\left[\theta(r-R_{i}) -\theta(r -R_{i+1})\right],$$ where the dependence on $t$ should be solely due to the branes radii ($ R_{i} = R_{i}(t) $). The $\theta$ function is defined in such a way that it equals 1 when the argument vanishes; this ensures that the above expression covers all space, including the point $r=0$. The cosmological constant can be understood as a special fluid, so we can think that the difference between the cosmological constants arises because each brane contains a fluid with different density. Fixing $T^{0}_{0}$ and $\Lambda$ we can find $B(r,t)$ using equation (\[ein00\]) in the form $$\kappa_{D}T^{0}_{0} = -\frac{D-2}{2r^{D-2}}\left[r^{D-3}\left(1 -B^{-1}\right)\right]' +\Lambda \label{ein002}$$ and according to the above equation, $B$ has a first order discontinuity at $R_{i}$ because $T^{0}_{0}$ has a second order one and $\Lambda$ has only a first order discontinuity. We consider that in the region $R_{i} \leq r <R_{i+1}$, $B(r,t) = B_{i}(r)$, since in this region $B$ does not depend on $t$. The above-mentioned time dependence occurs in the region where this solution is valid. The region between the branes has no matter. Therefore, the equation (\[ein10\]) assures us that the solution is static in this region. This information is contained in Birkhoff’s theorem. Integrating (\[ein002\]) from $ R_{j} - \epsilon $ to $R_{j} + \epsilon$ and taking the limit $ \epsilon \to 0$, we obtain the discontinuity at the point $ r = R_ {j} $ $$\label{descont} B_{j}^{-1}(R_{j}) -B_{j-1}^{-1}(R_{j}) = -\frac{2\kappa_{D}}{D-2}\rho_{j}R_{j}$$ The limit $\epsilon \to 0$ eliminates the $\Lambda$ term because its divergence is only first order. 
Integrating (\[ein002\]) from $R_{j} +\epsilon$ to $r< R_{j+1}$, where $B$ is continuous, and taking the limit $\epsilon \to 0$ we obtain $$\begin{aligned} B_{j}^{-1}(r) &=& 1- \left(\frac{R_{j}}{r}\right)^{D-3}\left[1 -\lambda_{j}R_{j}^{2} -B_{j}^{-1}(R_{j})\right] -\lambda_{j}r^{2} \\&=& 1- \left(\frac{R_{j}}{r}\right)^{D-3}\left[1 -\lambda_{j}R_{j}^{2} +\frac{2\kappa_{D}}{D-2}\rho_{j}R_{j} \right] + \\&& +\left(\frac{R_{j}}{r}\right)^{D-3}B_{j-1}^{-1}(R_{j})-\lambda_{j}r^{2}.\end{aligned}$$ By recurrence we find that $$\begin{aligned} B_{j}^{-1}(r) &=& 1- \frac{1}{r^{D-3}}\sum_{i=1}^{j}\left[\frac{2\kappa_{D}}{D-2}\rho_{i}R_{i}^{D-2} -\Delta\lambda_{i}R^{D-1}_{i}\right] + \\&&+ \left(\frac{R_{1}}{r}\right)^{D-3}\left(1-\lambda_{0}R_{1}^{2} -B_{0}^{-1}(R_{1})\right) -\lambda_{j}r^{2} ,\end{aligned}$$ where $\Delta\lambda_{i} =\lambda_{i} -\lambda_{i-1}$. Considering that inside all branes the solution is a de Sitter vacuum, i.e., $ B_{0}(r) =\left( 1-\lambda_{0}r^{2}\right)^{-1}, $ we get $$\begin{aligned} B_{j}^{-1}(r) &=& 1- \frac{1}{r^{D-3}}\sum_{i=1}^{j}\left[\frac{2\kappa_{D}}{D-2}\rho_{i}R_{i}^{D-2} -\Delta\lambda_{i}R^{D-1}_{i}\right] -\nonumber \\&& -\lambda_{j}r^{2}.\label{Bj}\end{aligned}$$ The above solution is valid only in the region $R_{j}\leq r<R_{j+1}$, but we can write the solution valid in any region in the form $$B^{-1}(r,t) = 1- \frac{2G_{D}M(r,t)}{r^{D-3}} -r^{2}\lambda(r,t) \label{B}$$ where $M(r,t)$ and $\lambda(r,t)$ are defined by $$\begin{aligned} M(r,t) &\equiv & \sum_{i=0}^{n}\left[\frac{\kappa_{D}}{(D-2)G_{D}}\rho_{i}R_{i}^{D-2} -\frac{\Delta\lambda_{i}}{2G_{D}}R^{D-1}_{i}\right]\theta(r-R_{i}), \\ \lambda(r,t) &\equiv& \sum_{i=0}^{n}\lambda_{i}[\theta(r -R_{i}) - \theta(r -R_{i+1})],\end{aligned}$$ and the time dependence is implicit in $R_{i}$. It is important to note that $M(r,t)$ is not positive definite, in order to enable a repulsive gravitational situation. 
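As a numerical consistency check of the recurrence above, the following Python sketch evaluates $B_{j}^{-1}(r)$ from (\[Bj\]) for a hypothetical two-brane configuration (all parameter values are arbitrary illustrations, not fits) and verifies that the jump of $B^{-1}$ across each brane reproduces the junction condition (\[descont\]), the $\Delta\lambda_{i}$ contributions cancelling at $r=R_{j}$:

```python
def B_inv(r, j, D, kappa, rho, R, lam):
    """B_j^{-1}(r) in the region R_j <= r < R_{j+1}; branes are labelled
    i = 1..n, regions j = 0..n, with region 0 the de Sitter interior."""
    s = sum(2.0*kappa/(D - 2)*rho[i]*R[i]**(D - 2)
            - (lam[i] - lam[i - 1])*R[i]**(D - 1)
            for i in range(1, j + 1))
    return 1.0 - s/r**(D - 3) - lam[j]*r**2

# hypothetical two-brane configuration in D = 5
D, kappa = 5, 1.0
rho = [None, 0.3, 0.1]    # surface densities rho_1, rho_2 (index 0 unused)
R   = [None, 1.0, 2.0]    # brane radii R_1 < R_2
lam = [-0.5, -0.3, -0.1]  # lambda_0, lambda_1, lambda_2

for j in (1, 2):          # jump of B^{-1} at each brane: Eq. (descont)
    jump = (B_inv(R[j], j, D, kappa, rho, R, lam)
            - B_inv(R[j], j - 1, D, kappa, rho, R, lam))
    assert abs(jump + 2.0*kappa/(D - 2)*rho[j]*R[j]) < 1e-12
```

The check is exact up to floating-point rounding, since the $\Delta\lambda_{j}R_{j}^{2}$ terms cancel algebraically between the two adjacent regions.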
Using the above definition in (\[ein11\]) we find the equation which governs $A$ $$\frac{A'}{A} = \frac{2\kappa_{D}}{D-2}BrT^{1}_{1} +2B\left[(D-3)G_{D}\frac{M(r,t)}{r^{D-2}} -r\lambda(r,t) \right]. \nonumber$$ Taking the way $T^{1}_{1}$ was fixed at (\[fixT\]), $A$ has a second order discontinuity. Now, using the same procedure to find $B$ we can show that $$A_{j}(r) = B_{j}^{-1}(r)A_{0}(R_{1})B_{0}(R_{1})\prod_{i=1}^{j}\frac{B_{i}(R_{i})}{B_{i-1}(R_{i})}e^{\pi_{i}}.\nonumber$$ where $$\pi_{i} \equiv \frac{2\kappa_{D}}{D-2}R_{i}B_{i}(R_{i})P_{i}.$$ The asymptotic behavior of $B(r,t)$ is $\lim_{r\to\infty} B(r) = \left[ 1-\lambda(r)r^{2}\right]^{-1}$, which is the generalization of the de Sitter vacuum to a cosmological constant that is position dependent. Likewise we expect that $ A(r) $ behaves asymptotically as the vacuum, i.e., $\lim_{r\to\infty} A(r) = 1-\lambda(r)r^{2}$, so that we can use this to fix the multiplicative constants appearing in the temporal solution and write $$A_{j}(r) = B_{j}^{-1}(r)\prod_{i=j+1}^{n}\frac{B_{i-1}(R_{i})}{B_{i}(R_{i})}e^{-\pi_{i}}.\nonumber$$ In the same way we did for $B$, we can rewrite the above solution in order to be valid in all space as $$\begin{aligned} A(r,t) &=& B^{-1}(r,t)\prod_{i=1}^{n}{e}^{-\pi_{i}\theta(R_{i} -r)}\times \nonumber \\&& \times\left[1+\left(\frac{B_{i-1}(R_{i})}{B_{i}(R_{i})}-1\right)\theta(R_{i}-r)\right],\label{A}\end{aligned}$$ where $B_{j}$ is defined by (\[Bj\]). The solutions (\[A\]) and (\[B\]) are generalizations of the Kottler solution [@Kottler-Ann.Phys.361] in $D$ dimensions with position dependent $\Lambda $. In the case where $\lambda$ is constant, these solutions agree with those found by Das [@Das:2001md]. However, these solutions only make sense if the branes are not in a time-like region. In order to avoid a singularity in the solutions, we impose that the radius of the brane relates to the masses so that they are beyond their respective generalized Kottler’s radii, i.e. 
$$\begin{aligned} -\lambda(R_{i}) R_{i}^{D-1} +R_{i}^{D-3} -2GM(R_{i}) >0. \end{aligned}$$ The above solutions perfectly agree with Birkhoff’s theorem and, up to a constant, coincide with the Schwarzschild solution with a cosmological constant (Kottler solution). The temporal dependence of the solutions is in $R_{i}$, so that in each region the solution is static. The multiplicative constants in the temporal part of the solutions indicate the gravitational redshift, even within the shells. Mathematically this makes the solution continuous in all regions. The time dependence of $B$ is given exclusively by $R_{i}$. Therefore in a dynamical case ($R_{i} = R_{i}(t)$) we can obtain the energy flow from (\[ein10\]). It is easy to show that $$T^{1}_{0} = -\sum_{i=1}^{n}\rho_{i}V_{i}\delta(r -R_{i}),\nonumber$$ where $V_{i} \equiv \dot{R}_{i}$. The tangential stresses can be obtained from (\[ein22\]), but it is easier to compute them from the energy-momentum tensor conservation law. Energy-Momentum Tensor Conservation Law ======================================= The Einstein’s field equation relates the energy-momentum tensor and the metric tensor. But due to the symmetry, only two components of the energy-momentum tensor are necessary to determine the metric. The other terms of the energy-momentum tensor are then determined by the Einstein’s equation or by a conservation law. Taking the covariant derivative of the equation (\[einD\]) we obtain the $D$ dimensional conservation law $$\begin{aligned} T_{\mu;\nu}^{\nu} = \frac{\Lambda_{,\mu}}{\kappa_{D}},\end{aligned}$$ The above equation states that energy and momentum are not conserved inside the brane because of the extra-dimensional pressure given by the difference between the cosmological constants. This difference can be used to model the dark energy, which makes the universe expand. In our case, however, no strange matter is needed inside the brane, unlike in the usual dark matter models. 
In terms of independent components the above conservation law can be written as $$\begin{aligned} \frac{\dot{\Lambda}}{\kappa_{D}} &=&\dot{T}^{0}_{0} +T^{1\prime}_{0} +\frac{\dot{B}}{2B}\left[T^{0}_{0} -T^{1}_{1}\right]+ \nonumber \\&&+\frac{T^{1}_{0}}{2}\left[\frac{A'}{A} +\frac{B'}{B} +\frac{2(D-2)}{r}\right] \\\frac{\Lambda'}{\kappa_{D}} &=&\dot{T}^{0}_{1} +T^{1\prime}_{1} +\left[\frac{A'}{2A} +\frac{(D-2)}{r}\right]T^{1}_{1}+ \nonumber \\&& +\frac{T^{0}_{1}}{2}\left[\frac{\dot{A}}{A} +\frac{\dot{B}}{B}\right] -\left[\frac{A'}{2A}T^{0}_{0} +\frac{(D-2)}{r}T^{2}_{2}\right].\end{aligned}$$ The first equation is trivially satisfied if we use the known solutions (\[A\]) and (\[B\]) and the energy-momentum tensor components. The second equation gives us the propagation speed of the brane as a function of the tangential stress, the masses, and the cosmological constant in the same way as in (\[ein22\]). Taking $$T^{2}_{2} = \sum_{i=1}^{n}T_{i}\delta(r-R_{i})\nonumber$$ we can integrate the unsolved component of the conservation law from $R_{i}-\epsilon$ to $R_{i}+\epsilon$ to obtain $$\begin{aligned} \frac{\Delta\Lambda_{i}}{\kappa_{D}} &=& \frac{B_{i}(R_{i})}{A_{i}(R_{i})}[\dot{\rho}_{i}V_{i} +\rho_{i}\dot{V}_{i}] +\frac{D-2}{R_{i}}\left[P_{i} -T_{i}\right]+ \\&&+\left[\left.\left(P_{i} +\rho_{i}\right)\frac{A'}{2A} +\rho_{i}V_{i}^{2}\left(\frac{B'}{A} -\frac{BA'}{A^{2}}\right)-\right.\right. \\&&-\left.\left.\rho_{i}V_{i}\left(\frac{B\dot{A}}{2A^{2}} -\frac{3\dot{B}}{2A}\right)\right]\right|_{r=R_{i}}\end{aligned}$$ Since the functions $A$ and $B$ have second order divergences at $r = R_{i}$, the last term in the above expression has the same divergence. 
Analyzing the divergent terms separately, $$\begin{aligned} \mbox{div} &=& \overbrace{\frac{\kappa_{D}}{D-2}B_{i}(R_{i})R_{i}\left(P_{i} +\rho_{i}\right)\left[P_{i}-\frac{B_{i}(R_{i})}{A_{i}(R_{i})}\rho_{i}V_{i}^{2} \right]\left.\delta(r-R_{i})\right|_{r=R_{i}}}^{\mbox{real divergence}} + \\&&+B_{i}(R_{i})\left[P_{i} +\rho_{i} -4\rho_{i}V_{i}^{2}\frac{B_{i}(R_{i})}{A_{i}(R_{i})}\right]\left[(D-3)\frac{G_{D}M(R_{i})}{R_{i}^{D-2}} -R_{i}\lambda_{i}\right] -\rho_{i}V_{i}\frac{B_{i}(R_{i})}{2A_{i}(R_{i})}\left[K_{0}(R_{i}) -\sum_{j=i}^{n}\dot{\pi}_{j}\right]\end{aligned}$$ where $$K_{0} = \sum_{j=1}^{n}\left[\frac{\dot{B}_{j-1}(R_{j})}{B_{j-1}(R_{j})} -\frac{\dot{B}_{j}(R_{j})}{B_{j}(R_{j})} \right]\theta(R_{j}-r).\nonumber$$ To avoid a real divergence we need to fix $$P_{i} = -\rho_{i} \;\;\;\;\;\mbox{or}\;\;\;\;\; P_{i} =\frac{B_{i}(R_{i})}{A_{i}(R_{i})}\rho_{i}V_{i}^{2}.\nonumber$$ The first case indicates a cosmological-constant-type equation of state; this is the only equation of state that is independent of motion, i.e., the properties of a fluid with this equation of state are independent of its movement. Therefore, it was already expected that the divergences found in the dynamic case can be removed. The second case relates the normal pressure to the brane velocity. That indicates an increase in the pressure if the velocity increases, so as to keep the spherical shape of the brane. This relationship ensures that $P$ can vanish in the static case, as in the Randall-Sundrum scenario. 
Assuming a linear state equation relating the tangential stresses and the energy density, $ T_{i} = \gamma_{i}\rho_{i}, $ and defining, for the $i$-th brane, the time $$dt_{i} = \sqrt{A_{i}(R_{i})/B_{i}(R_{i})}dt \nonumber$$ the brane evolution is given by $$\begin{aligned} \rho_{i}\frac{dU_{i}}{dt_{i}}&=& \frac{\Delta\Lambda_{i}}{\kappa_{D}}\left(1- U_{i}^{2}\right) -\frac{D-2}{R_{i}}\left[P_{i} -\rho_{i}\left(\gamma_{i} +U_{i}^{2}\right)\right] - \nonumber \\&&-B_{i}(R_{i})\left[P_{i} +\rho_{i} -2\rho_{i}U_{i}^{2}\right]\times \nonumber \\&&\times\left[(D-3)\frac{G_{D}M(R_{i})}{R_{i}^{D-2}} -R_{i}\lambda_{i}\right],\label{evolution}\end{aligned}$$ where $ U_{i} \equiv \frac{d R_{i}}{dt_{i}} . $ This indicates a different dynamics for each cosmological era, driven by a different equation of state, i.e., by the related tangential pressures. The Randall-Sundrum Flat Brane Limit ==================================== In the previous sections we found the general solution for $n$ spherical branes in a $D$-dimensional space-time with different cosmological constant between them. To find a scenario similar to Randall-Sundrum we need to fix $D=5$, $n=1$ and $\lambda_{0}=\lambda_{1}$. In this case the exterior solution is $$ds^{2} = -f(r)dt^{2} +f^{-1}(r)dr^{2} +r^{2}d\Omega_{3}^{2},\nonumber$$ where $$f(r)= \left(1 -\frac{2G_{5}M}{r^{2}} -\lambda r^{2}\right).$$ In order to obtain the Randall-Sundrum metric we define $$dz \equiv \left(1 -\frac{2G_{5}M}{r^{2}} -\lambda r^{2}\right)^{-1/2}dr\nonumber$$ or, fixing that $z$ vanishes when $r =R$, $$z = \frac{1}{2k}\ln\left[\frac{2k\left(k^{2}r^{4} +r^{2} -2G_{5}M\right)^{1/2} +2k^{2}r^{2} +1}{2k\left(k^{2}R^{4} +R^{2} -2G_{5}M\right)^{1/2} +2k^{2}R^{2} +1}\right],\nonumber$$ where $\lambda =-k^{2}$ to avoid the de Sitter horizon. To obtain the flat brane limit we will consider that $R$ and consequently $r$ tend to infinity. 
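The change of variable above can be checked numerically by integrating $dz = f^{-1/2}(r)\,dr$ and comparing with the closed-form logarithm. The sketch below uses NumPy with arbitrary, hypothetical values for $k$, $G_{5}M$ and $R$ (any values with $f(r)>0$ on the integration range would do):

```python
import numpy as np

k, G5M, R = 0.7, 0.2, 1.5       # hypothetical values; lambda = -k**2

def f(r):                       # exterior metric function for D = 5, n = 1
    return 1.0 - 2.0*G5M/r**2 + (k*r)**2

def z_closed(r):                # closed-form z(r), normalized so that z(R) = 0
    def h(x):
        return 2*k*np.sqrt(k**2*x**4 + x**2 - 2*G5M) + 2*k**2*x**2 + 1
    return np.log(h(r)/h(R))/(2*k)

r = 3.0
xs = np.linspace(R, r, 200001)  # trapezoidal quadrature of dz = dr/sqrt(f)
ys = 1.0/np.sqrt(f(xs))
z_num = float(((ys[:-1] + ys[1:])*np.diff(xs)/2.0).sum())
assert abs(z_num - z_closed(r)) < 1e-6
```

Differentiating the closed form indeed returns $dz/dr = r\,(k^{2}r^{4}+r^{2}-2G_{5}M)^{-1/2} = f^{-1/2}(r)$, which is what the quadrature confirms numerically.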
In this limit the dominant term, given that $M$ grows as $R^{3}$, is $$z = \frac{1}{2k}\ln\left[\frac{r^{2}}{R^{2}}\right]\nonumber$$ Writing $r$ as a function of $z$, we obtain the line element $$ds^{2} = -k^{2}{e}^{2kz}R^{2}dt^{2} +dz^{2} +{e}^{2kz}R^{2}d\Omega_{3}^{2}.\nonumber$$ Absorbing the constants into the coordinates we obtain the Randall-Sundrum metric $$ds^{2} = {e}^{2kz}\eta_{\mu\nu}dx^{\mu}dx^{\nu} +dz^{2}.\nonumber$$ The exponential in the warp factor is positive because the bulk is anti-de Sitter, in contrast to the original RS scenario. Conclusions and Perspectives ============================ In this work we built a scenario of multiple concentric membranes through the solution of Einstein’s equation in $D$ dimensions with different cosmological constant in each region. The results we found may serve as a basis for more specific scenarios, by fixing the radii and masses of each brane and the bulk cosmological constants. In a dynamical case, the solutions we found can be used to model the universe for $D=5$. The model used here is more accurate than the previous ones, and as a consequence we have a multiplicative constant that appears in the temporal solution, which is the redshift measured by observers in the region inside the brane. Through the momentum-energy tensor conservation law we obtained two possible anisotropic pressures that remove the singularities in the brane dynamics. These two possible pressures give us two possible fixations, leading to more freedom in the construction of a cosmological model that can better fit the observed data. The tangential pressure and the difference between the cosmological constants are responsible for the evolution of each brane according to Eq. (\[evolution\]). This tangential pressure is found from a state equation that is determined by the dominant matter in each cosmological era. 
We show that the difference between the cosmological constants modifies the effective mass of the matter distribution, and can be fixed in a way that the observed universe expansion rate is independent of the dark energy. Finally, we were able to arrive at the Randall-Sundrum metric for an anti-de Sitter bulk by calculating the external solution in the limit of plane branes. This metric was first introduced in the literature as an ansatz [@Randall:1999ee]. Now, it is derived from the Kottler anti-de Sitter solution. A natural extension of the model developed here is the phenomenological study of the cosmology generated by solving the dynamical brane-universe equation. We would like to thank the financial support provided by Fundação Cearense de Apoio ao Desenvolvimento Científico e Tecnológico (FUNCAP), the Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq), and FUNCAP/CNPq/PRONEX. [5]{} S. Weinberg, [*Cosmology*]{}, Oxford University Press Inc., New York (2008) L. Randall and R. Sundrum, Phys. Rev. Lett.  [**83**]{}, 4690 (1999) \[arXiv:hep-th/9906064\]. L. Randall and R. Sundrum, “A large mass hierarchy from a small extra dimension,” Phys. Rev. Lett.  [**83**]{}, 3370 (1999) \[arXiv:hep-ph/9905221\]. V. A. Rubakov and M. E. Shaposhnikov, Phys. Lett.  B [**125**]{}, 136 (1983). M. Visser, Phys. Lett.  B [**159**]{}, 22 (1985) \[arXiv:hep-th/9910093\]. E. J. Squires, Phys. Lett.  B [**167**]{}, 286 (1986). M. Gogberashvili, Europhys. Lett.  [**49**]{}, 396 (2000) \[arXiv:hep-ph/9812365\]. A. Boyarsky, A. Neronov and I. Tkachev, Phys. Rev. Lett.  [**95**]{}, 091301 (2005) \[arXiv:gr-qc/0411144\]. J. L. Tonry [*et al.*]{} \[Supernova Search Team Collaboration\], Astrophys. J.  [**594**]{}, 1 (2003) \[arXiv:astro-ph/0305008\]. J. P. Luminet, J. Weeks, A. Riazuelo, R. Lehoucq and J. P. Uzan, Nature [**425**]{}, 593 (2003) \[arXiv:astro-ph/0310253\]. J. M. Overduin and P. S. Wesson, Phys. Rept.  [**283**]{}, 303 (1997) \[arXiv:gr-qc/9805018\]. R. A. 
Knop [*et al.*]{} \[Supernova Cosmology Project Collaboration\], Astrophys. J.  [**598**]{}, 102 (2003) \[arXiv:astro-ph/0309368\]. A. G. Riess [*et al.*]{} \[Supernova Search Team Collaboration\], Astrophys. J.  [**607**]{}, 665 (2004) \[arXiv:astro-ph/0402512\]. M. Gogberashvili, Phys. Lett.  B [**636**]{}, 147 (2006) \[arXiv:gr-qc/0511039\]. M. Gogberashvili, Int. J. Mod. Phys.  D [**11**]{}, 1635 (2002) \[arXiv:hep-ph/9812296\]. M. Gogberashvili, Europhys. Lett.  [**77**]{}, 20004 (2007) \[arXiv:hep-th/0603235\]. A. Das and A. DeBenedictis, Prog. Theor. Phys.  [**108**]{}, 119 (2002) \[arXiv:gr-qc/0110083\]. F. Kottler, Ann. der Phys. [**361**]{}, 14 (1918). G. D. Birkhoff, [*Relativity and Modern Physics*]{}, Harvard University Press: Boston (1923).
--- abstract: 'We study indirect CPT violating effects in $B_d$ meson decays and mixing, taking into account the recent constraints on the CPT violating parameters from the Belle collaboration. The lifetime difference of the $B_d$ meson mass eigenstates, expected to be negligible in the standard model and many of its CPT conserving extensions, could be sizeable ($\sim$ a few percent of the total width) due to breakdown of this fundamental symmetry. The time evolution of the direct CP violating asymmetries in one amplitude dominated processes (inclusive semileptonic $B_d$ decays, in particular) turns out to be particularly sensitive to this effect.' --- DO-TH 02/10\ hep–ph/0209090\ May 2002 \ Amitava Datta[^1]\ \ Emmanuel A. Paschos[^2]\ \ and L.P. Singh[^3]\ \ The suggestion for two distinct lifetimes for the $B_d$ or $B_s$ meson mass eigenstates originated in parton model calculations [@ref1], which, at that time, were limited by numerous uncertainties in hadronic ($f_B$, the bag parameter, top quark mass, ...) and weak parameters (CKM matrix elements). Many of these, however, cancel in the ratio $$\left(\frac{\Delta m}{\Delta\Gamma}\right)_d = \frac{8}{9\pi} \left( \frac{\eta_t}{\eta}\right)\, \left(\frac{m_t}{m_b}\right)^2 f(x_t)$$ where $\Delta m_d(\Delta\Gamma_d)$ is the mass (width) difference of the $B_d$ meson mass eigenstates, $\eta_t,\,\eta$ are calculable perturbative QCD corrections, $x_t = \frac{m_t}{m_w}$ and $$f(x) = \frac{3}{2} \frac{x^2}{(1-x)^3}\, \ln x- \left( \frac{1}{4}+\frac{9}{4}\frac{1}{(1-x)}\, - \frac{3}{2}\, \frac{1}{(1-x)^2}\right)\, .$$ Following the discovery of mixing in the $B_d$ system [@ref2], $\Delta m_d$ was measured and $m_t$ was the only major source of uncertainty in the ratio. 
Using the then lower bound on $m_t$ it was shown [@ref3; @ref4] that $\Delta\Gamma_d$ is indeed very small, while $\Delta\Gamma_s$, the width difference of the $B_s$ meson mass eigenstates, could be rather large, as is indicated by the scaling law [@ref4] $$\left(\frac{\Delta\Gamma}{\Gamma}\right)_s = \left( \frac{X_{Bs}}{X_{Bd}}\right)\cdot \left|\frac{V_{ts}}{V_{td}}\right|^2\cdot \left(\frac{\Delta\Gamma}{\Gamma}\right)_d$$ where $V_{ij}$’s are the elements of the CKM matrix and $$X_{B_q}=\langle B_q|\left[ \bar{q}\gamma^{\mu} (1-\gamma_5)b\right]^2|\bar{B}_q\rangle\, .$$ In the meantime, many advances have taken place, with the discovery of the top quark and the determination of its mass [@ref5] and more precise values for the CKM matrix elements. Combining the new values with the above scaling laws, the width difference among the $B_d$ states is $(\Delta\Gamma/\Gamma)_d \approx 0.0012$, which is unobservable, but for the $B_s$ eigenstates $(\Delta\Gamma/\Gamma)_s\approx 0.045$. More recent calculations using heavy quark effective theory and improved QCD corrections [@ref6; @ref7] suggest that calculations based on the absorptive parts of the box diagram improved by QCD corrections give reasonable estimates for both the $B_d$ and $B_s$ systems. Nevertheless the possibility that there are loopholes in the above calculations cannot be totally excluded. For example, $\Delta \Gamma_q$ (q = d or s) is determined by only those channels which are accessible to both $B_q$ and $\bar{B}_q$ decays. Its computation in the parton model may not be as reliable as the calculation of $\Gamma_q$, the total width, which depends on fully inclusive decays and for which quark–hadron duality is valid. In addition to the expected phenomena, one should, therefore, be prepared for unexpected effects, and the final verdict on this subject should wait for the experimental determination of $\Delta\Gamma$ from the B–factories, B–TeV or LHC–B. 
Many different suggestions for measuring $\Delta\Gamma_s$ have been put forward [@ref3; @ref4; @ref8]. It is believed that $(\dg)_d \sim 0.1$ can be measured at B–factories [@ref9] while $(\dg)_d \sim 0.001$ [@ref10] might be accessible at the LHC. In this article we wish to emphasize that apart from dynamical surprises in the decay mechanism, a possible breakdown of the CPT symmetry contributes to $\dg$. The currently available constraints on CPT violating parameters [@opal; @belle] certainly allow this possibility. If this happens, its effect will be more visible and detectable in the $B_d$ system which, in the electroweak theory, is expected to have negligible $(\dg)_d$. In other words, the scenario with $(\dg)_d$ large not only due to hitherto unknown dynamics but also due to a breakdown of CPT is quite an open possibility. In the case of $(\dg)_s$, CPT violation may act in tandem with the already known electroweak dynamics to produce an even larger effect.\ There are several motivations for drawing out a strategy to test CPT symmetry. From the experimental point of view, all symmetries of nature must be scrutinized as accurately as possible, irrespective of the prevailing theoretical prejudices. It may be recalled that before the discovery of CP violation, there was very little theoretical argument in its favour. There are purely theoretical motivations as well. First of all, the CPT theorem is valid for local, renormalizable field theories with well defined asymptotic states. It is quite possible that the theory we are dealing with is an effective theory involving small nonlocal/nonrenormalizable interactions. Further, the concept of asymptotic states is not unambiguous in the presence of confined quarks and gluons. It has been suggested that physics at the string scale may indeed induce nonlocal interactions in the effective low energy theory, leading to CPT violation [@ref11]. 
Moreover, modification of quantum mechanics due to gravity may also lead to a breakdown of CPT [@ref12]. One of the major goals of the B–factories running at KEK or SLAC is to reveal CP violation in the B system. The discrete symmetry CPT has not yet been adequately tested for the B meson system, although there are many interesting suggestions to test it [@ref13; @ref14]. In all such works, however, the correlation between $\D \gm$ and CPT violation was either ignored or not adequately emphasized. It will be shown below that $\D \gm$ can in general be numerically significant even if CPT violation is not too large. We consider the time development of neutral mesons $M^0$ (which can be $K^0$ or $D^0$ or $B_d^0$ or $B_s^0$) and their antiparticles $\bar{M}^0$. The time development is determined by the effective Hamiltonian $H_{ij} = M_{ij}-\frac{i}{2}\Gamma_{ij}$ with $M_{ij}$ and $\Gamma_{ij}$ being the dispersive and absorptive parts of the Hamiltonian, respectively [@ref15]. CPT invariance relates the diagonal elements $$M_{11} = M_{22}\quad\quad {\rm and} \quad\quad \Gamma_{11} = \Gamma_{22}\, .$$ A measure of CPT violation is, therefore, given by the parameter $$\delta = \frac{H_{22}-H_{11}}{\sqrt{H_{12}H_{21}}}$$ which is phase convention independent. In order to keep the discussion simple we shall study the consequences of indirect CPT violation only. Since indirect CPT violation is a cumulative effect involving summations over many amplitudes, it is likely that its magnitude would be much larger than that of direct violation in a single decay amplitude. It is further assumed that CPT violation does not affect the off–diagonal elements of $H_{ij}$. These assumptions can be justified in specific string models [@ref11], where terms involving both flavour and CPT violations receive negligible corrections due to string scale physics. 
A further consequence of this assumption is that the usual SM inequality $M_{12}\gg \Gamma_{12}$ holds even in the presence of CPT violation. The eigenfunctions of the Hamiltonian are defined as $$|M_1\rangle = p_1|M^0\rangle + q_1|\bar{M}^0\rangle \quad\quad {\rm and} \quad\quad |M_2\rangle = p_2|M^0\rangle -q_2|\bar{M}^0\rangle$$ with the normalization $|p_1|^2+|q_1|^2 = |p_2|^2+ |q_2|^2=1$. We summarize the consequences of the symmetries. We define $$\begin{aligned} \eta_1 = \frac{q_1}{p_1} = \left[\left(1+\frac{\delta^2}{4}\right)^{1/2} + \frac{\delta}{2}\right]\, \left[\frac{H_{21}}{H_{12}}\right]^{1/2}\\ \eta_2 = \frac{q_2}{p_2} = \left[\left(1+\frac{\delta^2}{4}\right)^{1/2} - \frac{\delta}{2}\right]\, \left[\frac{H_{21}}{H_{12}}\right]^{1/2}\end{aligned}$$ and note that CPT violation is contained in the first factor, while indirect CP violation is in the second factor with the square root. In many expressions we need the ratio $\omega=\eta_1/\eta_2 =\frac{q_1 p_2}{p_1 q_2}$ which is only a CPT violating quantity. CPT conservation requires\ Im $\omega=0$, Re $\omega=1$ and $\eta_1=\eta_2$. The time development of the states is determined by the eigenvalues $$\begin{aligned} \lambda_1 & = & H_{11}+\sqrt{H_{12}H_{21}} \left[ \left( 1+\frac{\delta^2}{4} \right)^{1/2} + \delta/2 \right] \quad {\rm and}\nonumber\\ \lambda_2 & = & H_{22}-\sqrt{H_{12}H_{21}} \left[ \left( 1+\frac{\delta^2}{4}\right)^{1/2} + \delta/2 \right] \end{aligned}$$ which can be parametrized as $\lambda_{1,2} = m_{1,~2}-\frac{i}{2}~\Gamma_{1,~2}$. The quantities which occur in the asymmetries are $\lambda_1-\lambda_2 = \Delta m -\frac{i}{2}\Delta\Gamma$ and $\Gamma = \frac{1}{2}(\Gamma_1+\Gamma_2)$. 
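Since $\omega=\eta_1/\eta_2$ carries all of the indirect CPT violation, it is useful to record its small-$\delta$ behaviour, $\omega \simeq 1+\delta+\delta^{2}/2$, which makes explicit that $\omega=1$ in the CPT conserving limit. A short SymPy check of this expansion (purely illustrative):

```python
import sympy as sp

delta = sp.symbols('delta')                # the CPT violating parameter
y = sp.sqrt(1 + delta**2/4)
omega = (y + delta/2)/(y - delta/2)        # omega = eta_1/eta_2; sqrt(H21/H12) cancels

assert sp.simplify(omega.subs(delta, 0)) == 1   # CPT conservation: omega = 1
ser = sp.series(omega, delta, 0, 3).removeO()   # expand to second order in delta
assert sp.expand(ser - (1 + delta + delta**2/2)) == 0
```

The first factor in $\eta_{1,2}$ is all that survives in the ratio, so $\omega$ is independent of $H_{12}$ and $H_{21}$, as stated in the text.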
To leading order in $(\Gamma_{12}/M_{12})$ they are expressed in terms of the CPT parameter $$y= \left(1+\frac{\delta^2}{4}\right)^{1/2}$$ as follows: $$\begin{aligned} \Delta m = m_1 - m_2 = 2|M_{12}|(\Re\, y+\frac{1}{2}\, \Re \frac{\Gamma_{12}}{M_{12}}\Im\,y)\nonumber\\ \Delta\Gamma=\Gamma_1-\Gamma_2=2|M_{12}| (\Re\, \frac{\Gamma_{12}}{M_{12}}\, \Re\, y -2\Im\, y)\, .\end{aligned}$$ In the CPT conserving limit $y=1$ and the contribution to $\Delta m$ is large, overwhelming CPT violating corrections. The CPT conserving contribution to $\Delta\Gamma$, on the other hand, is suppressed by $\Re\, \frac{\Gamma_{12}} {M_{12}}$. The purely CPT violating term dominating $\Delta \Gamma$ remains, therefore, an open possibility. In order to get a feeling for the magnitude of $\Delta\Gamma/\Gamma$, we use the small $\delta$ approximation and obtain $|\Delta\Gamma/\Gamma|=0.5 \times (\Delta m/\Gamma)\times (\Re\delta\times \Im\delta)$. Most of the measurements of $\Delta m/\Gamma$ have been carried out by assuming CPT conservation. If CPT is violated its magnitude could be somewhat different (see, e.g., Kobayashi and Sanda in [@ref13]). Recently the Belle collaboration has determined $\Delta m$ with and without assuming CPT symmetry [@belle]. The two results, $\D m = 0.463 \pm 0.016$ and $0.461 \pm 0.008 \pm 0.016~ps^{-1}$ respectively, do not differ appreciably from each other or from the average $\D m$ given by the Particle Data Group (PDG). We shall, therefore, use throughout the paper $\Delta m/\Gamma=0.73$, which is perfectly consistent with the PDG value. The relevant limits on CPT violating parameters from Belle are $|m_{B^0} - m_{\bar{B^0}}|/m_{B^0} < $1.6 $\times 10^{-14}$ and $|\gm_{B^0} - \gm_{\bar{B^0}}|/\gm_{B^0} <$ 0.161, which imply $|\Re \dl | < $ 0.54 and $|\Im \dl| <$ 0.23. 
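The quoted estimate $|\Delta\Gamma/\Gamma| = 0.5\,(\Delta m/\Gamma)\,(\Re\delta\times\Im\delta)$ follows from expanding $y=(1+\delta^{2}/4)^{1/2}$ to second order in $\delta$ and keeping only the purely CPT violating piece of $\Delta\Gamma$ (i.e., dropping the $\Re(\Gamma_{12}/M_{12})$ term). A SymPy verification of this algebra, with $\delta=\epsilon(a+ib)$ and $\epsilon$ a small bookkeeping parameter:

```python
import sympy as sp

a, b, eps = sp.symbols('a b epsilon', real=True, positive=True)
delta = eps*(a + sp.I*b)            # Re(delta) = eps*a, Im(delta) = eps*b
y = sp.sqrt(1 + delta**2/4)

y2 = sp.expand(y.series(eps, 0, 3).removeO())
im_y, re_y = sp.im(y2), sp.re(y2)   # Im y -> (Re delta)(Im delta)/4 at leading order

# pure CPT piece: DeltaGamma = -4|M12| Im y, while Delta m = 2|M12| Re y,
# so DeltaGamma/Delta m = -2 Im y / Re y -> -(Re delta)(Im delta)/2
ratio = sp.series(-2*im_y/re_y, eps, 0, 3).removeO()
assert sp.expand(ratio + (eps*a)*(eps*b)/2) == 0
```

Multiplying the leading-order ratio by $\Delta m/\Gamma$ reproduces the estimate used in the text.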
A choice like $\Re\delta \times \Im\delta\sim 0.1$, consistent with the above bounds, would then yield $\Delta\Gamma/\Gamma$ of the order of a few %, larger than the SM estimate by an order of magnitude. Moreover, $\Delta\Gamma/\Gamma$ will be well within the measurable limits of LHCb, should $\delta$ happen to be much smaller. The Belle limits are derived under the assumption that $\Delta\Gamma$ is negligible. We emphasize that for a refined analysis of CPT violation $\Delta m/\Gamma$ and $\Delta\Gamma/\Gamma$, along with $\delta$, should be fitted directly from the data. Such a combined fit may open up the possibility that $\delta$ could be somewhat larger than the above bounds. In our numerical analysis values consistent with the bounds as well as somewhat larger values will be considered. The time development of the states involves, now, the time factors $$\begin{aligned} f_-(t) & = & e^{-i\lambda_1t}-e^{-i\lambda_2t}\\ f_+(t) & = & e^{-i\lambda_1t}+ \omega e^{-i\lambda_2t}\quad{\rm and}\\ \bar{f}_+(t) & = & \omega e^{-i\lambda_1t} + e^{-i\lambda_2t}\, .\end{aligned}$$ The new feature is the presence of the factor $\omega$ in the second and third of these equations. The decays of an original $|B^0\rangle$ or $|\bar{B}^0\rangle$ state to a flavor eigenstate $|f\rangle$ vary with time and are given by $$\begin{aligned} P_f(t) & = & |\langle f | B^0(t)\rangle|^2 = |f_+(t)|^2\, N\, |\langle f|B^0\rangle|^2\nonumber\\ \bar{P}_{\bar{f}}(t) & = & |\langle \bar{f} |\bar{B}^0(t)\rangle|^2 = |\bar{f}_+(t)|^2\, N\, |\langle\bar{f}|\bar{B}^0\rangle|^2\nonumber\\ P_{\bar{f}}(t) & = & |\langle \bar{f} | B^0(t)\rangle|^2 = |\eta_1|^2\, |f_-(t)|^2\, N\, |\langle\bar{f}|\bar{B}^0\rangle|^2\nonumber\\ \bar{P}_{f}(t) & = & |\langle f |\bar{B}^0(t)\rangle|^2 = |f_-(t)|^2\, N\, |\omega|^2\, |\langle f|B^0\rangle|^2/|\eta_1|^2\end{aligned}$$ where $N^{-1} = |1 + \omega|^2$ and the matrix elements on the right-hand side, $g = \langle f|B^0\rangle, \, \bar{g}=\langle \bar {f}|\bar{B}^0 \rangle,\, \ldots$, are computed at $t=0$ and have no time dependence.
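These time factors are easy to evaluate numerically. The sketch below works in units of $\Gamma=1$ with $\Delta m/\Gamma=0.73$; both $\Delta\Gamma$ and the CPT parameter $\delta$ are illustrative assumed inputs, not fitted values:

```python
import cmath

# Time factors f_-(t), f_+(t), fbar_+(t) in units Gamma = 1.
GAMMA, DM, DGAMMA = 1.0, 0.73, 0.02           # Gamma, Delta m, Delta Gamma (assumed)
lam1 = DM / 2 - 0.5j * (GAMMA + DGAMMA / 2)   # lambda_1 = m_1 - (i/2) Gamma_1
lam2 = -DM / 2 - 0.5j * (GAMMA - DGAMMA / 2)  # lambda_2 = m_2 - (i/2) Gamma_2

delta = 0.1 + 0.1j                            # illustrative CPT parameter
y = cmath.sqrt(1 + delta**2 / 4)
omega = (y + delta / 2) / (y - delta / 2)     # omega = eta_1/eta_2; CPT conservation <=> omega = 1

def f_minus(t): return cmath.exp(-1j * lam1 * t) - cmath.exp(-1j * lam2 * t)
def f_plus(t):  return cmath.exp(-1j * lam1 * t) + omega * cmath.exp(-1j * lam2 * t)
def fbar_plus(t): return omega * cmath.exp(-1j * lam1 * t) + cmath.exp(-1j * lam2 * t)
```

For small $\delta$ one finds $\omega\approx 1+\delta$, so the whole CPT violating effect in the probabilities is of first order in $\delta$.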
From these expressions it is evident that the five unknowns, $\Gamma$, $\Delta m$, $\Delta\Gamma$, $\Re\delta$, and $\Im\delta$ (or equivalently $\Re\omega$ and $\Im\omega$), must be determined from the data. We emphasize that $\Delta\Gamma$ must be treated as a free parameter, since in addition to the CPT violating contributions it may also receive contributions from new dynamics. In addition, by taking linear combinations of these decays, we can produce exponential decays accompanied by oscillatory terms which help in separating the various contributions. It may be recalled that the time dependent techniques for extracting these probabilities and the associated electroweak parameters from data are now being used extensively. Different schemes for testing CPT violation suggested in the literature [@ref13; @ref14] often involve observables specifically constructed for this purpose. Here we wish to point out that some of the observables involving the above probabilities, which are now being routinely measured at BABAR and BELLE, are also sufficiently sensitive to CPT violation and have the potential of either revealing the breakdown of this fundamental symmetry or improving the limit on the CPT violating parameter. One such observable is the direct CP violating asymmetry in $B_d$ and $\bar{B}_d$ decays to flavor specific channels $f$ and $\bar{f}$, respectively [@ref16], but with $f$ different from $\bar{f}$. The following ratio is at the center of current interest: $$\begin{aligned} {\mbox{a}}^{\mbox{dir}}_{CP}(t) & = & \frac{|\langle f|B^0(t)\rangle|^2 - |\langle\bar{f}|\bar{B}^0(t)\rangle|^2}{|\langle f|B^0(t)\rangle|^2 + |\langle\bar{f}|\bar{B}^0(t)\rangle|^2}\nonumber\\ & = & \frac{|f_+(t)|^2\, |g|^2 - |\bar{f}_+(t)|^2\, |\bar{g}|^2}{|f_+(t)|^2\, |g|^2 + |\bar{f}_+(t)|^2\, |\bar{g}|^2}\, .\end{aligned}$$ In the SM or in any of its CPT conserving extensions, $\bar{f}_+(t) = f_+(t)$ and the asymmetry is, in general, time independent. The time independence holds even if $\Delta\Gamma$ happens to be large due to new dynamics, or if direct CPT violation and/or new physics influence the hadronic matrix elements. Time evolution of this asymmetry is, therefore, a sure signal of indirect CPT violation.
Flavour specific $B$ decays involving a single lepton or a kaon in the final state are possible candidates for this measurement. This consequence is even more dramatic for decays dominated by a single amplitude in the SM, in which case $|g|=|\bar{g}|$ and ${\mbox{ a}}^{\mbox{dir}}_{CP} (t)$ vanishes at all times. Purely tree level decays arising from the subprocess $b\to u_i\bar{u}_j d_k$ ($i \ne j$), penguin induced processes $b\to d_i\bar{d}_i d_k$ dominated by a single penguin operator, or inclusive semileptonic decays $b\to X l^+ \nu$ ($l = e$ or $\mu$ and $X$ is any hadronic final state) are examples of such decays. The last process is particularly promising. A single amplitude strongly dominates the decay not only in the SM but also in many extensions of it. The large branching ratio ($\sim 20\%$ for $l = e$ and $\mu$) and the reasonably large efficiency of detecting leptons are sufficient to ensure the measurement of this asymmetry at $B$ factories, provided it is of the order of a few percent. For this class of decays the matrix elements along with their theoretical uncertainties cancel out in the ratio. Consequently, in the presence of indirect CPT violation the time dependent asymmetry is the same for all one-amplitude-dominated processes and the statistics may be improved by including several channels. If a difference in the time dependence of various modes is observed, the assumption of one amplitude dominance will be questionable and new physics beyond the standard model leading to $|g| \ne |\bar{g}|$, in addition to indirect CPT violation, may be revealed. In Figure 1 we present the asymmetry for a one-amplitude-dominated process as a function of time for $\Im\,\delta = 0.1$ and $\Re\,\delta = 0.1$ (solid curve, here $\Delta\Gamma/\Gamma = 0.004$), $0.5$ (dotted curve, $\Delta\Gamma/\Gamma = 0.02$), and $1.0$ (dashed curve, $\Delta\Gamma/\Gamma = 0.04$). Both the time evolution and the nonvanishing of the asymmetry are clearly demonstrated.
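The qualitative behavior shown in Figure 1 can be reproduced with a short sketch (one-amplitude dominance, $|g|=|\bar g|$, so the matrix elements cancel; parameter values are illustrative, as in the solid curve):

```python
import cmath

# Time-dependent direct CP asymmetry for a one-amplitude-dominated, flavour-specific decay.
GAMMA, DM, DGAMMA = 1.0, 0.73, 0.004          # units Gamma = 1; solid-curve parameters
lam1 = DM / 2 - 0.5j * (GAMMA + DGAMMA / 2)
lam2 = -DM / 2 - 0.5j * (GAMMA - DGAMMA / 2)
delta = 0.1 + 0.1j                            # Re delta = Im delta = 0.1
y = cmath.sqrt(1 + delta**2 / 4)
omega = (y + delta / 2) / (y - delta / 2)

def acp_dir(t):
    """a_CP^dir(t) = (|f_+|^2 - |fbar_+|^2) / (|f_+|^2 + |fbar_+|^2) for |g| = |gbar|."""
    fp = cmath.exp(-1j * lam1 * t) + omega * cmath.exp(-1j * lam2 * t)
    fbp = omega * cmath.exp(-1j * lam1 * t) + cmath.exp(-1j * lam2 * t)
    return (abs(fp)**2 - abs(fbp)**2) / (abs(fp)**2 + abs(fbp)**2)
```

The asymmetry vanishes at $t=0$ (since $f_+(0)=\bar f_+(0)=1+\omega$) and grows to the few-per-cent level within one lifetime, exactly the signature of indirect CPT violation discussed above.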
The correlation between ${\mbox{ a}}^{\mbox{dir}}_{CP}$ and $\Delta\Gamma$ calls for a more detailed analysis. As has been noted, $\Delta\Gamma$ is significantly different from the SM prediction only if $\Im\,\delta \times \Re\,\delta \ne 0$. The numerator and the denominator of the asymmetry are $$D(t) = P_f(t) - \bar{P}_{\bar f}(t) \quad\quad {\rm and} \quad\quad S(t) = P_f(t) + \bar{P}_{\bar f}(t)\, .$$ A non-vanishing asymmetry can arise in various ways:\ i) $\Im\,\omega \ne 0$, which requires $\Im\,\delta \ne 0$;\ ii) $\Re\,\omega \ne 1$ and $\Delta\Gamma$ as small as in the SM;\ or from a combination of the two possibilities. It is trivial to express $\omega$ in terms of $\delta$ and confirm that both $D(t)$ and $S(t)$ are modified from the SM prediction through $\delta$. When both the numerator and the denominator of the asymmetry are measured accurately, one can determine separately the real and imaginary parts of $\delta$. This may indicate, albeit indirectly, that $\Delta\Gamma$ is unexpectedly large. In order to have an idea of how large the effects can be, in Figure 2 we plot $D(t)$ as a function of time for the values $\Re\,\delta=\Im\,\delta=0.1$ (the solid curve). $D(t)$ vanishes for $\Im\,\delta=0$ and has a relatively weak dependence on $\Re\,\delta$, as illustrated by also plotting on the same figure the cases with $\Re\,\delta=0.5$ (the dotted curve) and 1.0 (the dashed curve). A similar study of $S(t)$ is presented in Figure 3. This quantity is fairly insensitive to $\Im\,\delta$. In order to estimate roughly the number of tagged $B$ mesons needed to establish a non-zero $D(t)$, we assume that at $t=0$ there is a sample of $N_0$ tagged $B_d^0$ and $\bar{B}_d^0$. Let the number of semileptonic $B_d^0$ ($\bar{B}_d^0$) decays in the time interval $t= (1.0 \pm 0.1) \times \tau_B$ be $n(t)$ ($\bar{n}(t)$) (we assume the lepton detection efficiency to be $\sim$ 1).
By requiring $$\frac{|n(t) - \bar{n}(t)|}{\sqrt{n(t)} + \sqrt{\bar{n}(t)}} \geq 3.0 , \nonumber$$ we obtain for $\Re\,\delta = 0.1$ and $\Im\,\delta = 0.1$, $N_0 \approx 2.0 \times 10^6$, a number which is realizable at $B$ factories after several years of running and certainly at the LHC. Including other flavour specific channels like $B_d^0 \rightarrow K^+ + X$, which has a larger branching ratio ($\approx$ 70%), a measurable asymmetry may be obtained with a smaller $N_0$. In the presence of indirect CPT violation, the time integrated asymmetry is obtained by integrating the numerator $D(t)$ and the denominator $S(t)$. This leads to $$a_{CP}^{\rm dir} = \left( (|\omega|^2-1)\,\frac{\Delta\Gamma}{\Gamma} - \frac{4\,\Im\,\omega\; x}{1+x^2}\right) \Bigg/ \left( 2\,(|\omega|^2+1) + \frac{4\,\Re\,\omega}{1+x^2} \right)$$ with $x = \Delta m / \Gamma$. In the standard model and for processes dominated by one amplitude the integrated asymmetry vanishes. In extensions of the SM in which the decays are no longer dominated by one amplitude the integrated asymmetry may be nonzero [@ref17]. Thus a nonzero integrated asymmetry points either to new physics (coming from additional amplitudes) or to indirect CPT violation. In Figure 4 we present the variation of this observable with $\Im\,\delta$ for $\Re\,\delta = 0.1$ (solid line) and 0.75 (dashed line). Experimental studies of CPT violating phenomena can be combined with experiments that search for a nonzero $\Delta\Gamma/\Gamma$. For example, one can consider untagged B mesons decaying to a specific flavour [@ref3; @ref4]. The observable $S_1(t) = P_f(t)+ \bar P_{f}(t)$ has, in the absence of CPT violation, a time dependence governed by two exponentials. If CPT violation is also included, then an oscillation is superimposed on the exponentials. The original articles [@ref3; @ref4] considered $B_s$ decays but the same properties hold for $B_d$ mesons decaying semileptonically or to specific flavour final states.
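The sample-size estimate above can be sketched as an order-of-magnitude calculation. The branching ratio, the time window, and the assumed size of $|D(t)/S(t)|$ near $t=\tau_B$ are the only inputs; the last of these is an assumption, so the result should only be read to within a factor of a few:

```python
import math

# Order-of-magnitude sketch of the 3-sigma criterion |n - nbar| / (sqrt(n) + sqrt(nbar)) >= 3.
BR = 0.20                                    # semileptonic branching ratio (l = e, mu)
WINDOW = math.exp(-0.9) - math.exp(-1.1)     # fraction of decays in t = (1.0 +/- 0.1) tau_B
ASYM = 0.02                                  # assumed |D/S| near tau_B (illustrative)

def n0_required(asym=ASYM, br=BR, frac=WINDOW, nsigma=3.0):
    # With n ~ nbar ~ N0*br*frac the criterion reads asym*sqrt(N0*br*frac) >= nsigma.
    return nsigma**2 / (asym**2 * br * frac)

print("%.1e" % n0_required())  # ~10^6 tagged mesons
```

The answer is of the same order as the $N_0 \approx 2\times 10^6$ quoted in the text, and it scales as $1/{\rm asym}^2$, which is why including higher-branching-ratio channels helps.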
Looking at flavour non-specific channels, there are results for $a^{\rm dir}_{CP}$ (also denoted by $C_{\pi \pi}$) from BABAR [@ref18] and BELLE [@ref19] for the channel $B\to \pi^+\pi^-$. In the SM, using naive factorization, this asymmetry turns out to be small [@ref20]. It is interesting to note that although the BABAR result is fairly consistent with the SM prediction, the BELLE result indicates a much larger asymmetry. It should, however, be noted that there are many theoretical uncertainties. Neither the magnitude of the penguin pollution nor the magnitude of the strong phase difference between the interfering amplitudes can be computed in a foolproof way. Direct CP violation in flavour specific, charmless decays has also been measured [@ref21]. Here the data are not yet very precise and the theoretical uncertainties are also large. In view of these uncertainties it is difficult to draw any conclusion regarding new physics effects. This underlines the importance of inclusive semileptonic decays, which are theoretically clean and whose branching ratios are much larger than those of the above exclusive modes. $B_d$ decays to CP eigenstates have been observed and have established CP violation via time dependent measurements [@ref22; @ref23]. The golden example is $B^0\to\psi K_s$, where the time dependent asymmetry is proportional to $\sin 2 \beta$, $\beta$ (also denoted by $\phi_1$) being an angle of the unitarity triangle. The current averaged value of this parameter is $\sin 2 \beta = 0.78 \pm 0.08$. It is straightforward to obtain the asymmetry in the presence of indirect CPT violation. An attempt to fit the data as in the SM would lead to an effective $\sin 2 \beta$ which is time dependent. We have checked that with $\Re\,\delta = 0.1$ and $\Im\,\delta = 0.1$, this effective $\sin 2 \beta$ varies between 0.74 and 0.84. We therefore conclude that if $\sin 2\beta$ is determined with an accuracy of 5% or better, some hint of indirect CPT violation may be obtained.
However, this observation cannot establish CPT violation unambiguously, since CPT conserving new physics may change the phase of the $B_d - \bar{B}_d$ mixing amplitude and/or the decay amplitudes and lead to similar effects.\ Many other observables specifically constructed for the measurement of CPT violation [@ref13; @ref14] have been suggested in the literature. It will be interesting to compare the sensitivities of these observables to the CPT parameter $\delta$ with those of the observables considered in this paper, which are already being measured in the context of CP violation. In summary, we wish to emphasize again that an unexpectedly large lifetime difference of $B_d$ mesons, which is predicted to be negligible in the SM and many of its CPT conserving extensions, may reveal indirect CPT violation. The time dependence of the direct CP violating asymmetry for flavour specific decays, which in CPT conserving theories is time independent and vanishes for decays dominated by only one amplitude, may establish CPT violation as well as a large lifetime difference. The theoretically clean inclusive semileptonic decays, having relatively large branching ratios, might be particularly suitable in this context.

[**[Acknowledgements]{}**]{}\ We wish to thank the Bundesministerium für Bildung und Forschung for financial support under contract No. 05HT1PEA9. One of us (EAP) thanks Mr. W. Horn for useful discussions. AD thanks the Department of Science and Technology, Government of India, for financial support under project no. SP/S2/k01/97 and Abhijit Samanta for help in computation.

J.S. Hagelin, [*Nucl. Phys.*]{} [**B193**]{} (1981) 123;\ V.A. Khoze, M.A. Shifman, N.G. Uraltsev and M.B. Voloshin, [*Sov. J. Nucl. Phys.*]{} [**46**]{} (1987) 112

H. Albrecht et al. (ARGUS collaboration), [*Phys. Lett.*]{} [**B192**]{} (1987) 245

A. Datta, E.A. Paschos and U. Türke, [*Phys. Lett.*]{} [**B196**]{} (1987) 382

A. Datta, E.A. Paschos and Y.L. Wu, [*Nucl. Phys.*]{} [**B311**]{} (1988) 35

S. Abachi et al. (D0 collaboration), [*Phys. Rev. Lett.*]{} [**74**]{} (1995) 2632

R. Aleksan, A. Le Yaouanc, L. Oliver and J.C. Raynal, [*Phys. Lett.*]{} [**B316**]{} (1993) 567

M. Beneke et al., [*Phys. Lett.*]{} [**B459**]{} (1999) 631;\ A.S. Dighe et al., [*Nucl. Phys.*]{} [**B624**]{} (2002) 377

I. Dunietz, [*Phys. Rev.*]{} [**D52**]{} (1995) 3048;\ I. Dunietz, R. Fleischer and U. Nierste, [*Phys. Rev.*]{} [**D63**]{} (2001) 114015;\ A.S. Dighe et al., in ref. 7

R. Aleksan, private communications

A.S. Dighe et al., in ref. 7

K. Ackerstaff et al. (OPAL collaboration), [*Z. Phys.*]{} [**C76**]{} (1997) 401

C. Leonidopoulos (Belle collaboration), hep-ex/0107001 and Ph.D. thesis, Princeton University, 2000 (see http://belle.kek.jp)

V.A. Kostelecky and R. Potting, [*Phys. Lett.*]{} [**B381**]{} (1996) 89

S.W. Hawking, [*Phys. Rev.*]{} [**D14**]{} (1976) 2460;\ J. Ellis et al., [*Phys. Rev.*]{} [**D53**]{} (1996) 3846

M. Kobayashi and A. Sanda, [*Phys. Rev. Lett.*]{} [**69**]{} (1992) 3139;\ Z.Z. Xing, [*Phys. Rev.*]{} [**D50**]{} (1994) 2957;\ D. Colladay and V.A. Kostelecky, [*Phys. Lett.*]{} [**B344**]{} (1995) 359;\ V.A. Kostelecky and R. Potting, [*Phys. Rev.*]{} [**D51**]{} (1995) 3923;\ V.A. Kostelecky and R. Van Kooten, [*Phys. Rev.*]{} [**D54**]{} (1996) 5585;\ M.C. Banuls and J. Bernabeu, [*Nucl. Phys.*]{} [**B590**]{} (2000) 19;\ P. Colangelo and G. Corcella, [*Eur. Phys. J.*]{} [**C1**]{} (1998) 515;\ A. Mohapatra et al., [*Phys. Rev.*]{} [**D58**]{} (1998) 036003

K.C. Chou, W.F. Palmer, E.A. Paschos and Y.L. Wu, [*Eur. Phys. J.*]{} [**C16**]{} (2000) 279

See for instance, E.A. Paschos and U. Türke, [*Phys. Rep.*]{} [**178**]{} (1989) 145

For a review see, e.g., I.I. Bigi and A. Sanda, in CP Violation, Ed. C. Jarlskog (World Scientific, 1989); I.I. Bigi and A. Sanda, "CP Violation" (Cambridge University Press, 1999)

R-parity violating models are interesting in this context. See, e.g., G. Bhattacharya and A. Datta, [*Phys. Rev. Lett.*]{} [**83**]{} (1999) 2300

Talk by A. Farbin (BABAR collaboration), XXXVIIth Rencontres de Moriond, Les Arcs, France, 9-16 March, 2002, http://moriond.in2p3.fr/EW/2002/

K. Abe et al. (BELLE collaboration), hep-ex/0204002

A. Ali, G. Kramer and C.-D. Lü, [*Phys. Rev.*]{} [**D59**]{} (1998) 014005

B. Aubert et al. (BABAR collaboration), [*Phys. Rev.*]{} [**D65**]{} (2002) 051101

Talk by T. Karim (BELLE collaboration), XXXVIIth Rencontres de Moriond, Les Arcs, France, 9-16 March, 2002, http://moriond.in2p3.fr/EW/2002/

B. Aubert et al. (BABAR collaboration), hep-ex/0203007

[^1]: Electronic address: adatta@juphys.ernet.in

[^2]: Electronic address: paschos@physik.uni-dortmund.de

[^3]: Electronic address: lambodar@iopb.res.in
--- author: - '**S.A. Grebenev**' title: '**THE ORIGIN OF THE BIMODAL LUMINOSITY DISTRIBUTION OF ULTRALUMINOUS X-RAY PULSARS**' --- [*to be published in Astronomy Letters, 2017, v. 43, n. 7, pp. 464–470*]{}\ The mechanism that can be responsible for the bimodal luminosity distribution of super-Eddington X-ray pulsars in binary systems is pointed out. The transition from the high to the low state of these objects is explained by accretion flow spherization due to the radiation pressure at certain (high) accretion rates. The transition between the states can be associated with a gradual change in the accretion rate. The complex behavior of the recently discovered ultraluminous X-ray pulsars M 82 X-2, NGC 5907 ULX-1, and NGC 7793 P13 is explained by the proposed mechanism. The proposed model also naturally explains the measured spinup of the neutron star in these pulsars, which is several times slower than expected. [**DOI:**]{} 10.1134/S1063773717050012 [**Keywords:**]{} ultraluminous X-ray sources, supercritical accretion, X-ray pulsars, neutron stars, bimodality. ------------------------------------------------------------------------ \ [$^*$ E-mail $<$sergei@hea.iki.rssi.ru$>$]{} INTRODUCTION {#introduction .unnumbered} ============ The discovery (Bachetti et al. 2014) of X-ray pulsations with a mean period $P_s\simeq 1.37$ s from the ultraluminous X-ray (ULX) source M 82 X-2 (=NuSTAR J095551+6940.8) and its sinusoidal modulation with a period $P_b\simeq2.5$ days (the orbital period of the binary system) drastically changed our views of the nature of ULX sources. Previously, it had been assumed that a high observed isotropic X-ray luminosity of such sources, $L_{\rm iso}\ga 10^{40}\ \mbox{erg s}^{-1}$, could be reached only during accretion onto a black hole with a moderately large, $\sim10^3\ M_{\odot}$, or at least stellar, $\sim10\ M_{\odot}$, mass (provided a relativistic jet with the associated strong radiation anisotropy is formed).
It has now become clear that such a luminosity can also take place during accretion onto a neutron star with a strong magnetic field and a mass of only $M_*\sim1.4\ M_{\odot}$. Such binary systems must be widespread and can even dominate in the population of ULX sources (Shao and Li 2015). The discoveries of the ultraluminous X-ray pulsars NGC7793 P13 and NGC5907 ULX-1 by the XMM-Newton satellite (Israel et al. 2017a, 2017b) shortly afterward confirmed this point of view and gave hope for the detection of other objects of this type. Note that NGC5907 ULX-1 has a record peak luminosity even among ULX sources; in particular, it exceeds the maximum detected luminosity of M 82 X-2 by several times (see the table). The discovery of ULX pulsars has thrown down a serious challenge to theorists. For example, it is still unclear, though widely discussed, how such a high luminosity is reached, which exceeds the Eddington one for spherically symmetric accretion onto a neutron star by hundreds of times: $$\label{led} L_{\rm ed}=\frac{4\pi GM_*m_p c}{\sigma_{\rm es}}\simeq 1.9\times 10^{38} \left(\frac{\sigma_{\rm T}}{\sigma_{\rm es}}\right) \left(\frac{M_*}{1.4\ M_{\odot}}\right)\ \mbox{erg s}^{-1}.$$ Here, $\sigma_{\rm es}$ is the electron scattering cross section, $\sigma_{\rm T}$ is the Thomson cross section, $G$ is the gravitational constant, $m_p$ is the proton mass, and $c$ is the speed of light. Of course, the accretion onto a neutron star with a strong magnetic field is far from spherically symmetric. As early as 1976, having considered a realistic accretion flow geometry at a supercritical accretion rate, Basko and Sunyaev (1976) showed that the isotropic luminosity of a pulsar could exceed $L_{\rm ed}$ by more than an order of magnitude (see below). Nevertheless, it is still insufficient to explain the observations of ULX pulsars. Many of the authors (e.g., Lyutikov 2014; Tong 2015; Eksi et al. 2015; Tsygankov et al. 2016a; Israel et al.
2017a, 2017b) are inclined to assume an extreme magnetic field strength of the neutron star in ULX systems ($B_*\ga 10^{14}$ G), which reduces the electron scattering cross section $\sigma_{\rm es}$ and, thus, raises the Eddington limit. Others (e.g., Kluzniak and Lasota 2015) think that the high luminosity is reached precisely because of a reduced (to $B_*\sim10^{9}$ G) magnetic field strength (compared to the values $B_*\sim10^{12}-10^{13}$ G typical of X-ray pulsars). Because of the weak magnetic field, the accretion disk almost reaches the neutron star surface and radiates in the same way as during super-Eddington accretion onto a black hole. In both cases, the limiting observed luminosity of ULX pulsars, $L_{\rm iso}\sim 10^{41}\ \mbox{erg s}^{-1}$, still cannot be explained and one has to appeal to a strong anisotropy of their radiation (dall’Osso et al. 2015; Chen 2017).

\begin{tabular}{l|c|c|c|c|c|c|c|c|c|c|c|c}
\hline
Source & $P_{b}$, & $P_{s}$, & $\dot{P}_{-10}$, & $\gamma$ & $\mu_3$ & $R_{\rm c}$, & $\dot{M}_{20}$, & $R_{\rm s}$, & $L_{39}$, & $R_{\rm m}$, & $L_{39}$, & $R_{\rm ms}$, \\
 & days & s & s\,s$^{-1}$ & & & km & g\,s$^{-1}$ & km & erg\,s$^{-1}$ & km & erg\,s$^{-1}$ & km \\
\hline
M82 X-2 & 2.5 & 1.37 & $-2.0$ & 4 & 3 & 2080 & 0.59 & 860 & 37 & 900 & 0.28 & 890 \\
NGC 5907 ULX-1 & 5.3 & 1.13 & $-8.1$ & 6 & 12 & 1830 & 1.06 & 1550 & 100 & 1670 & $<0.3$ & 1640 \\
NGC 7793 P13 & & 0.42 & $-0.4$ & 2 & 1.4 & 950 & 0.41 & 610 & 13 & 640 & 0.3 & 630 \\
\hline
\end{tabular}

The nature of the bimodal luminosity distribution of ULX pulsars pointed out by Tsygankov et al. (2016a) and Israel et al. (2017a, 2017b) also remains a puzzle. In addition to the state with a very high X-ray luminosity (hereafter the high state), periods during which the luminosity dropped to $\la 3\times10^{38}\ \mbox{erg s}^{-1}$ (hereafter the low state) have been detected for all three sources (see the table). Tsygankov et al. (2016a) and Israel et al.
(2017a) assumed the bimodality of the luminosity distribution to be associated with the action of centrifugal forces, which inhibit accretion and are capable of expelling the excess of accreting matter from the system (the propeller effect; Illarionov and Sunyaev 1975; see also Corbet 1996). This effect begins to manifest itself as soon as the magnetospheric radius of the neutron star $R_m$, in the course of the evolution of the system (for example, during a temporary decrease in the accretion rate), exceeds the corotation radius $R_c$ (otherwise the surface rotation velocity of the magnetosphere would exceed the Keplerian velocity). In this case, the accretion onto the neutron star ceases, and only the radiation from the outer disk region $R>R_m$ is observed. In order for the propeller effect to operate in the systems being discussed, it is necessary that the neutron stars in them possess a very strong magnetic field, $B_*\sim10^{14}-10^{15}$ G, similar to the field of magnetars (Tsygankov et al. 2016a). The very existence of the propeller effect is beyond doubt, and it is widely used by astrophysicists to explain the observed luminosity jumps in millisecond (LMXBs; Campana et al. 2008, 2014) and ordinary (HMXBs; Corbet et al. 1996; Campana et al. 2002; Tsygankov et al. 2016b; Postnov et al. 2017) X-ray pulsars, the existence of “equilibrium” pulsar periods (van den Heuvel 1984; Corbet 1986), the outbursts of fast X-ray transients (Grebenev and Sunyaev 2007; Grebenev 2009), and many other observed phenomena. Nevertheless, its action as a cause of the bimodal luminosity distribution of ULX pulsars raises doubts. This is not only due to the very strong neutron star magnetic field, $B_*\ga10^{14}$ G, required for this purpose, but also due to the observed range of the luminosity drop, which is several times smaller than the expected one, $\sim R_c/R_*\simeq 140\ m_*^{1/3}p_*^{2/3}R_{12}^{-1}$ (Corbet 1996; Tsygankov et al.
2016a; here, $p_*$ is the spin period of the neutron star $P_s$ in seconds, while $m_*$ and $R_{12}$ are its mass $M_*$ and radius $R_*$ normalized to their standard values of $1.4\ M_{\odot}$ and $12$ km), and, most importantly, the very close coincidence of the luminosity of the sources in their low state with the Eddington one $L_{\rm ed}$. In this paper we show that there exists a different explanation for the abrupt change of the luminosity in these sources, associated with the transitions between two different regimes of supercritical accretion onto a neutron star with a strong magnetic field. The transitions are caused by accretion flow spherization in the disk due to the radiation pressure when a certain accretion rate, dependent on the magnetic field strength of the neutron star, is exceeded. THE REGIMES OF SUPERCRITICAL ACCRETION {#the-regimes-of-supercritical-accretion .unnumbered} ====================================== The properties of the accretion flow onto a neutron star and the interaction of this flow with its magnetic field are defined by four characteristic radii: The [*magnetospheric radius*]{} $$\label{rmag} R_m\simeq\xi\left(\frac{\mu_*^2}{\sqrt{2GM_*}\dot{M}}\right)^{2/7}\simeq 8.2\times 10^7 \ \xi\ \mu_{3}^{4/7} m_*^{-1/7} \dot{m}_{20}^{-2/7}\ \mbox{cm},$$ at which the pressure of the matter inflowing through the accretion disk is equal to the pressure of the neutron star magnetic field (Davidson and Ostriker 1973; Illarionov and Sunyaev 1975); the [*spherization radius*]{} of the accretion flow $$\label{rspher} R_s=\frac{3}{8\pi}\frac{\dot{M}_0}{m_p}\frac{\sigma_{\rm T}}{c}= \frac{3}{2}\frac{GM_*\dot{M}_0}{L_{\rm ed}}\simeq {1.5\times 10^8}\ \dot{m}_{20}\ \mbox{cm},$$ at which the accretion disk under radiation pressure swells so that its half-thickness is equal to the radius $R$ (Shakura and Sunyaev 1973)[^1]; the [*corotation radius*]{} $$\label{rcor} R_c=\left(\frac{GM_*P_s^2}{4\pi^2}\right)^{1/3}=1.7\times 10^8\ m_*^{1/3} p_*^{2/3}\
\mbox{cm},$$ at which the surface rotation velocity of the magnetosphere $(2\pi/P_s) R_c$ is equal to the Keplerian velocity $(GM_*/R_c)^{1/2}$ (Illarionov and Sunyaev 1975); and, of course, the [*intrinsic radius*]{} of the neutron star $R_*$. Here, $\xi\simeq0.5$ is the correction that takes into account the deviation of the magnetospheric radius in the case of disk accretion from the Alfvén radius computed for spherically symmetric accretion (Ghosh and Lamb 1978), $\dot{m}_{20}$ is the accretion rate $\dot{M}_0$ in units of $10^{20} \ \mbox{g s}^{-1}$ ($= 1.6\times 10^{-6}\ M_{\odot}\ \mbox{yr}^{-1}$), $\mu_*=0.5 B_* R_*^3$ is the dipole magnetic moment of the neutron star, and $B_*$ is the magnetic field strength at its poles. The magnetic moment $\mu_*$ expressed in units of $3\times10^{30}\ \mbox{G cm}^3$ will be denoted by $\mu_{3}$. Note that by $\dot{M}$ in Eq. (\[rmag\]) we mean the accretion rate near the magnetospheric boundary. During super-Eddington accretion, in the inner disk regions $\dot{M}$ can decrease compared to the external value $\dot{M}_0$ due to the outflow of matter. In Fig. 1 the radii $R_m$, $R_s$, and $R_c$ are plotted against the accretion rate for three magnetic moments of the neutron star, $\mu_3= 0.1,\ 1,$ and $10$. These values correspond to magnetic field strengths at the stellar poles $B_*\simeq 3.5\times10^{11}$, $3.5\times10^{12}$, and $3.5\times 10^{13}$ G, respectively. The pulsation period was assumed to be $1.37$ s, the same as that of the ULX pulsar M82 X-2. The estimates of the radii $R_m$, $R_s$, $R_c$ for this and the two other ULX pulsars known to date are given in the table. As will be shown below, once $R_s$ has reached $R_m$, the magnetospheric radius $R_m$ ceases to depend on $\dot{M}_0$. Therefore, the dependence (\[rmag\]) in this region is indicated in Fig. 1 by the dotted line. Figure 1 suggests that the dependence of the magnetospheric radius of the neutron star $R_m$ on $\dot{M}_0$ has two singular points.
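The dependences plotted in Fig. 1 and the entries in the table can be cross-checked numerically. The sketch below evaluates the scaling formulas for $R_m$, $R_s$, and $R_c$ with the coefficients quoted above ($\xi=0.5$, $m_*$ in units of $1.4\,M_\odot$) and solves analytically for the two crossing points, $R_m=R_c$ and $R_m=R_s$; the parameter values for M82 X-2 are taken from the table:

```python
XI = 0.5  # disk-accretion correction to the Alfven radius (Ghosh and Lamb 1978)

def r_m(mu3, mdot20, m=1.0):
    """Magnetospheric radius, cm; mu3 in 3e30 G cm^3, mdot20 in 1e20 g/s."""
    return 8.2e7 * XI * mu3**(4 / 7) * m**(-1 / 7) * mdot20**(-2 / 7)

def r_s(mdot20):
    """Spherization radius, cm."""
    return 1.5e8 * mdot20

def r_c(p, m=1.0):
    """Corotation radius, cm; p is the spin period in seconds."""
    return 1.7e8 * m**(1 / 3) * p**(2 / 3)

def mdot_mc20(mu3, p, m=1.0):
    """R_m = R_c (propeller threshold), units of 1e20 g/s."""
    return (1.7e8 / (8.2e7 * XI))**(-7 / 2) * mu3**2 * m**(-5 / 3) * p**(-7 / 3)

def mdot_ms20(mu3, m=1.0):
    """R_m = R_s (spherization threshold), units of 1e20 g/s."""
    return ((8.2e7 * XI) / 1.5e8)**(7 / 9) * mu3**(4 / 9) * m**(-1 / 9)

# M82 X-2 (mu3 = 3, P_s = 1.37 s, Mdot_20 = 0.59), radii in km:
print(r_c(1.37) / 1e5, r_s(0.59) / 1e5, r_m(3.0, 0.59) / 1e5)
```

With these inputs the radii come out within a few per cent of the tabulated 2080, 860, and 900 km, and the coefficients of the two critical rates reproduce the $7.2\times10^{17}$ and $3.7\times10^{19}\ \mbox{g s}^{-1}$ quoted below to within the rounding of the input coefficients.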
This radius is equal to the corotation radius $R_c$ and the spherization radius $R_s$ at the first and second points, respectively. The first event occurs at an accretion rate $$\label{mdmc} \dot{M}_{mc}\simeq7.2\times 10^{17}\ \mu_{3}^{2}\, m_*^{-5/3} p_*^{-7/3}\ \mbox{g s}^{-1},$$ and the second one occurs at an accretion rate (Lipunov 1982) $$\label{mdms} \dot{M}_{ms}\simeq 3.7\times 10^{19}\ \mu_{3}^{4/9} m_*^{-1/9}\ \mbox{g s}^{-1}.$$ At $\dot{M}_0\le\dot{M}_{mc}$ no efficient accretion is possible because of the propeller effect: the infalling matter is ejected from the system. At $\dot{M}_0\ge\dot{M}_{mc}$ and up to $\dot{M}_0\simeq\dot{M}_{ms}$ nothing inhibits accretion; the regime of direct accretion observed in ordinary X-ray pulsars is realized, with the modifications described by Basko and Sunyaev (1976) for $\dot{M}_0 \ga \dot{M}_{\rm ed}=L_{\rm ed}R_*/(GM_*)\simeq1.2\times10^{18}\ \mbox{g s}^{-1}$. Note that in this regime the spherization radius $R_s<R_m$. Let us consider the case of direct supercritical accretion in more detail. The High State (an Accretion Rate $\dot{M}_{mc}\leq\dot{M}_0\leq\dot{M}_{ms}$) {#the-high-state-an-accretion-rate-dotm_mcleqdotm_0leqdotm_ms .unnumbered} ------------------------------------------------------------------------------ In this case, just as in ordinary X-ray pulsars, upon reaching the boundary of the magnetosphere the accretion disk matter is frozen into its upper layer and is transferred by two streams to high-latitude regions, where it flows down along the so-called accretion columns into the vicinity of the neutron star magnetic poles. Basko and Sunyaev (1976) (see also Lyubarskii and Sunyaev 1988; Mushtukov et al.
2015) showed that at $\dot{M}_0\ga\dot{M}_{\rm ed}$ the radiation flux escaping through the walls of the accretion columns in X-ray pulsars exceeds the radial (Eddington) flux by a factor of $\sim H/d\simeq 240\ R_{12}\, (H/R_*)$; accordingly, their total luminosity $$L_{\rm iso}=4Hl \left(\frac{H}{d}\right) \left(\frac{L_{\rm ed}}{4\pi R_*^2}\right)= \frac{l}{\pi d}\left(\frac{H}{R_*}\right)^2L_{\rm ed}$$ can exceed $L_{\rm ed}$. Here, $H$ is the height of the accretion columns[^2], $l\simeq2.5\times10^{5}$ cm is the width of their base, and $d\simeq5\times10^3$ cm is the thickness of the walls. Since typically $H\la R_*$, the actual increase in luminosity is limited by $$L^{\rm max}_{\rm iso}\la 3\times10^{39} \left(\frac{l/d}{50}\right) \left(\frac{\sigma_{\rm T}}{\sigma_{\rm es}}\right) \left(\frac{M_*}{1.4\ M_{\odot}}\right)\ \mbox{erg s}^{-1}.\label{lbasko}$$ The luminosity of the accretion disk, $L_{\rm d}\la L_{\rm ed}$, should be added to the luminosity of the accretion columns $L_{\rm iso}$, but still, to achieve agreement with the observations of ULX pulsars, it is necessary either to assume an appreciable radiation anisotropy or to take into account the decrease in the scattering cross section due to a strong magnetic field (Basko and Sunyaev 1975, 1976). Indeed, in the presence of a magnetic field, at energies $E<E_{B}=11.6\,(B_*/10^{12}\ \mbox{G})$ keV the electron scattering cross section $\sigma_{\rm es}$ decreases compared to the Thomson one as $\sigma_{\rm X}\simeq\sigma_{\rm T}(E/E_{B})^2$ for the extraordinary wave and as $\sigma_{\rm O}\simeq\sigma_{\rm T}[\sin^2\theta+(E/E_{B})^2]$ for the ordinary one; here, $\theta$ is the angle between the direction of wave propagation and the magnetic field lines.
Since the emergent radiation is multiply scattered in the accretion column walls, with the ordinary and extraordinary waves being transformed into one another, the effective scattering cross section in the standard X-ray band ($E\la10$ keV) can be appreciably smaller than $\sigma_{\rm T}$ (Paczynski 1992). Introducing an anisotropy factor $\gamma>1$, meaning that the radiation intensity toward us is greater than the mean intensity by a factor of $\gamma$, from inequality (\[lbasko\]) we finally obtain $$L^{\rm max}_{\rm aniso}\la 3\times 10^{40}\ \left(\frac{\gamma\, \sigma_{\rm T}/\sigma_{\rm es}}{10}\right)\ m_*\ \mbox{\rm erg s}^{-1}.\label{lbasko2}$$ The parameter $\epsilon=\gamma\, (\sigma_{\rm T}/\sigma_{\rm es})$, which we set equal to $10$, characterizes the joint uncertainty in the anisotropy of the emergent radiation and the decrease in the scattering cross section. The pulse profile of ULX pulsars is fairly smooth, nearly sinusoidal (Bachetti et al. 2014; Israel et al. 2017a, 2017b). Given that it is shaped by the radiation emerging from the walls of the accretion columns, it is hard to expect a very strong anisotropy of this radiation. Below we assume that $\gamma=2-4$. If the accretion occurred with the maximum possible efficiency, then one would expect the observed luminosity to be $$\label{lobs} L^{\rm obs}_{\rm iso}=\gamma\,GM_*\dot{M}_0/R_*\simeq 1.6\times10^{40}\ \gamma\,R_{12}^{-1} m_*\dot{m}_{20}\ \mbox{erg s}^{-1}.$$ Comparing this expression with inequality (\[lbasko2\]), we see that the energy released during accretion can be efficiently reprocessed into radiation only as long as $\dot{m}_{20}\leq 0.2 R_{12} (\sigma_{\rm T}/\sigma_{\rm es})$. As the accretion rate increases further, no rise in luminosity occurs; the excess of the energy being released is carried away to the neutron star surface (Basko and Sunyaev 1976).
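The saturation just described can be made concrete with a short sketch (coefficients taken from the two luminosity expressions above; note that the anisotropy factor $\gamma$ cancels in the critical accretion rate):

```python
def l_obs(gamma, mdot20, m=1.0, r12=1.0):
    """Accretion luminosity gamma*G*M_*Mdot_0/R_*, erg/s (coefficient from the text)."""
    return 1.6e40 * gamma * m * mdot20 / r12

def l_aniso_max(eps, m=1.0):
    """Cap on the column luminosity, erg/s; eps = gamma*sigma_T/sigma_es."""
    return 3e40 * (eps / 10.0) * m

def mdot20_crit(sigma_ratio, r12=1.0):
    """Accretion rate (units of 1e20 g/s) at which l_obs reaches l_aniso_max."""
    return 3.0 / 16.0 * sigma_ratio * r12  # = 0.1875, i.e. the ~0.2 quoted in the text
```

For $\dot m_{20}$ above this value the released energy can no longer all be radiated away by the column walls, which is the origin of the luminosity plateau.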
Note that the adopted values of $\epsilon$ and $\gamma$ allow the observed maximum luminosities of the ULX pulsars M82X-2 and NGC7793 P13 to be explained (see the table). In the case of NGC5907 ULX-1, however, $\epsilon=\gamma (\sigma_{\rm T}/\sigma_{\rm es})$ and especially $\gamma$ should be additionally increased by a factor of 2–3. A much stronger magnetic field apparently operates in this source, which leads to a more noticeable decrease in the scattering cross section $\sigma_{\rm es}$, a more significant increase in the Eddington limit, and a stronger anisotropy of the radiation.

The Low State (a High Accretion Rate $\dot{M}_0\geq\dot{M}_{ms}$) {#the-low-state-a-high-accretion-rate-dotm_0geqdotm_ms .unnumbered}
-----------------------------------------------------------------

The spherization radius is equal to $R_m$ at an accretion rate $\dot{M}_0\simeq\dot{M}_{ms}$ and begins to exceed it as $\dot{M}_0$ increases further. In this case: (1) the accretion disk swells near $R_s$, (2) an efficient outflow of excess matter and angular momentum with a nearly parabolic velocity is formed above the disk at $R<R_s$, and (3) the accretion in the region $R_m<R<R_s$ occurs in a regime close to the spherically symmetric one with a rate decreasing as $$\label{ss-sol} \dot{M}(R<R_s)=\dot{M}_0\ R/R_s$$ (Shakura and Sunyaev 1973; Lipunova 1999). The total luminosity of the source in this case does not exceed $\simeq 2\,L_{\rm ed}.$ Half of it, $\simeq L_{\rm ed}$, is emitted by the outer $R>R_s$ disk regions, and the other half is emitted by the inner envelope formed by the outflowing matter. Irrespective of precisely where and how the energy release occurs here, due to the quasi-sphericity of this envelope, the luminosity of the radiation leaving it cannot exceed $\simeq L_{\rm ed},$ the remaining energy being released is spent on the acceleration of the outflowing matter[^3].
In this sense, the observed picture has much in common with the photospheric expansion of the neutron star atmosphere during super-Eddington X-ray bursts (see, e.g., Lewin et al. 1993). Just as for bursts, depending on the accretion rate, the density of the outflowing matter and the size of its photosphere (inner envelope) and, accordingly, the effective temperature of the emergent radiation change. Although, on the whole, the X-ray observations of ULX pulsars in their low state are consistent with $\simeq 2\,L_{\rm ed},$ at high accretion rates an increasingly large fraction of the radiation must fall into the ultraviolet and optical spectral ranges. Therefore, the X-ray luminosity in the low state for some of the sources can be appreciably below the Eddington level. This may be true for NGC5907 ULX-1 (Israel et al. 2017a). Because of the decrease in the accretion rate at $R<R_s$, the flux of matter reaching the boundary of the neutron star magnetosphere turns out to be equal only to $\dot{M}_0 R_m/R_s$. Accordingly, the magnetospheric radius $R_m$ does not decrease with increasing $\dot{M}_0$ after reaching the critical accretion rate $\dot{M}_{ms}$ (as $\dot{M}_0^{-2/7}$, see Eq. \[rmag\]), but remains equal to its value at $\dot{M}_0=\dot{M}_{ms},$ $$R_m^{\rm low}\simeq 5.4\times 10^{7}\ \mu_{3}^{4/9} m_*^{-1/9}\ \mbox{cm}.\label{rms}$$ In Fig. 1 this part of the dependence of $R_m$ on $\dot{M}_0$ is indicated by the solid horizontal line, while the dependence (\[rmag\]) is indicated by the dashed line. The shaded region in Fig. 1 indicates the forbidden values of the magnetospheric radius that exceed the corotation radius, $R_m>R_c$. At such $R_m$ a rapidly rotating neutron star magnetosphere would produce a centrifugal barrier for the accreting matter, inhibiting its penetration inward: the propeller regime would be switched on. Previously, it has already been mentioned that for this reason, no efficient accretion is possible at $\dot{M}_0<\dot{M}_{mc}$.
It can be seen from Fig. 1 that in the case of a strong magnetic field $B_*$, the situation when no direct accretion onto the neutron star is possible at any $\dot{M}_0$ is realistic. Figure 2 shows that this is actually the case. The solid line in this figure indicates the accretion rate $\dot{M}_{mc}$ at which the magnetospheric radius $R_m$ is equal to the corotation radius $R_c$ (Eq. \[mdmc\]) as a function of the magnetic field strength $B_*$. In the shaded region to the right of this line $R_m\ge R_c$; therefore, no accretion is possible here due to the propeller effect. The dashed line in this figure indicates the accretion rate $\dot{M}_{ms}$ at which the magnetospheric radius $R_m$ is equal to the radius $R_s$ (Eq. \[mdms\]) as a function of $B_*$. Above this curve $R_m<R_s$. As has already been said, accretion flow spherization, a strong outflow of matter under radiation pressure, and a drop in luminosity to $\simeq 2 L_{\rm ed}$ begin here. Direct (efficient) accretion is possible only in the $\dot{M}_0-B_*$ region lying between these lines, to the left of their intersection. The limiting field at which direct accretion is still possible corresponds to the point of their intersection: $$B_*^{\rm max}\simeq 4.4\times10^{13}\ R_{12}^{-3} m_* p_*^{3/2}\ \mbox{G}.$$ However, it should be noted that at high accretion rates, when $R_m<R_s$, being in the shaded region in comparison with being outside it makes very little difference observationally: as before, we will see a source with a nearly Eddington total luminosity. This luminosity will be emitted by the accretion disk at large distances ($R>R_s$) from the neutron star and by the outflowing envelope in the region $R_m^{\rm low}<R<R_s$. Obviously, the accreting matter does not fall below the radius $R_m^{\rm low},$ so that it is impossible to record any radiation pulsations in this regime of accretion.
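The characteristic radii and the limiting field can be evaluated numerically. The sketch below (not part of the original paper) encodes Eq. (\[rms\]) and the limiting-field estimate, together with the standard corotation radius $R_c=(GM_*P^2/4\pi^2)^{1/3}$; the normalizations ($m_*$ in units of $1.4\,M_\odot$, $p_*$ in seconds, $\mu_3$ following the paper's magnetic-moment convention, which is not restated in this excerpt) are assumptions carried over from the text:

```python
import math

G = 6.674e-8       # gravitational constant, cgs
M_SUN = 1.989e33   # solar mass, g

def r_corotation(p_star, m_star=1.0):
    """Corotation radius R_c = (G M P^2 / 4 pi^2)^(1/3), in cm;
    m_star is M* in units of 1.4 M_sun, as elsewhere in the text."""
    m = 1.4 * m_star * M_SUN
    return (G * m * p_star**2 / (4.0 * math.pi**2)) ** (1.0 / 3.0)

def r_m_low(mu_3, m_star=1.0):
    """Eq. (rms): magnetospheric radius in the low state, cm."""
    return 5.4e7 * mu_3 ** (4.0 / 9.0) * m_star ** (-1.0 / 9.0)

def b_max(r_12=1.0, m_star=1.0, p_star=1.0):
    """Quoted limiting surface field for direct accretion, G."""
    return 4.4e13 * r_12**-3 * m_star * p_star**1.5

# For a P = 1 s, 1.4 M_sun pulsar with mu_3 = 1:
# R_m^low ~ 5.4e7 cm < R_c ~ 1.7e8 cm, so the propeller barrier is avoided.
print(r_m_low(1.0) < r_corotation(1.0))  # True
```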
Given what has been said above about the softness of the radiation spectrum for such a source, it will most likely be impossible to determine whether the two-fold drop in its luminosity is associated with the excess of $R_m$ above $R_c$ or with an excessively narrow and hard range of its observations.

THE NEUTRON STAR SPINUP RATE {#the-neutron-star-spinup-rate .unnumbered}
============================

Although the measured long-term spinup rate of the neutron star in the three discovered ULX pulsars, $\dot{\nu}=-\dot{P}_sP_s^{-2}\sim (1-6)\times10^{-10}\ \mbox{Hz s}^{-1},$ exceeds the spinup rate of the neutron star in ordinary X-ray pulsars by an order of magnitude or more (see the table), it turns out to be several times lower than the spinup rate expected for these sources, given the observed essentially super-Eddington accretion rate, $$\label{dotnu} \dot{\nu}=(GM_* R_m)^{1/2}\frac{\dot{M}_0}{2\pi I} \simeq 1.4\times10^{-9}\ \dot{m}_{20}^{6/7} m_*^{3/7} \mu_3^{2/7} I_{45}^{-1}\ \mbox{Hz s}^{-1}.$$ Here, $I_{45}$ is the moment of inertia $I_*$ of the neutron star (normalized to its standard value of $10^{45}\ \mbox{g cm}^{2}$). Such a slow spinup is naturally explained in the scenario of supercritical accretion onto these pulsars proposed above. Indeed, the estimate (\[dotnu\]) refers only to the high luminosity state of these sources. During their low state the actual accretion rate near the magnetosphere decreases considerably to $\dot{M}_0\,R_m^{\rm low}/R_s \simeq\dot{M}_{ms};$ the spinup rate of the neutron star drops accordingly. Moreover, it should be noted that in this state the flow is no longer disk accretion, which is capable of efficiently transferring the angular momentum of Keplerian motion to the neutron star, but almost quasi-spherical accretion of matter that has lost much of its angular momentum. The angular momentum is transferred only in a narrow circular region whose width is much smaller than the real width of the accretion disk.
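For illustration, the scaling in Eq. (\[dotnu\]) can be evaluated directly; the sketch below (an addition, not from the paper) simply encodes the quoted numerical form and does not re-derive the prefactor:

```python
def nu_dot(mdot_20=1.0, m_star=1.0, mu_3=1.0, i_45=1.0):
    """Eq. (dotnu): expected accretion-driven spinup rate, Hz/s,
    in the normalized variables used in the text."""
    return 1.4e-9 * mdot_20**(6.0 / 7.0) * m_star**(3.0 / 7.0) \
        * mu_3**(2.0 / 7.0) / i_45

# Even at mdot_20 = 1 the expected spinup, 1.4e-9 Hz/s, already exceeds the
# measured long-term values of (1-6)e-10 Hz/s by a factor of a few; at
# mdot_20 = 10 the discrepancy grows to roughly an order of magnitude.
print(f"{nu_dot(10.0):.2e}")  # ~1e-8 Hz/s
```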
The foregoing implies that an efficient spinup of the neutron star in the ULX pulsars occurs only during a certain fraction of the entire time of their active existence, while in the remaining time they barely spin up. For this reason, the mean spinup determined from long time intervals turns out to be appreciably smaller than their maximum spinup during the episodes of direct super-Eddington accretion.

CONCLUSIONS {#conclusions .unnumbered}
===========

We have explained the bimodality of the X-ray luminosity distribution of ULX pulsars. The transition from the high to the low state of these sources is explained by accretion flow spherization when a certain accretion rate is exceeded. In this case, the luminosity of the source drops to a nearly Eddington level of $(1-2) L_{\rm ed}.$ The observed X-ray luminosity can be even lower, given the softness of the radiation spectrum forming in the envelope of matter outflowing from the accretion disk due to the radiation pressure. Apart from the rate of change of the accretion rate, the transition rate between the states is determined by the time it takes for the neutron star magnetosphere to be rearranged, the speed of the mass transfer through the disk, and the outflow velocity of the excess matter. The accretion-driven spinup rate of the neutron star in the low state decreases considerably compared to the spinup rate in the high state. This explains why the measured mean spinup rate of the ULX pulsars is several times lower than that expected at the given accretion rates.

ACKNOWLEDGMENTS {#acknowledgments .unnumbered}
===============

This work was financially supported by the Program of the President of the Russian Federation for support of leading scientific Schools (grant NSh-10222.2016.2) and the “Transitional and Explosive Processes in Astrophysics” Subprogram of the Basic Research Program P-7 of the Presidium of the Russian Academy of Sciences.

,  [**514**]{}, 202 (2014).
,  [**42**]{}, 311 (1975).
,  [**175**]{}, 395 (1976).
,  [**441**]{}, 1984 (2014).
,  [**580**]{}, 389 (2002).
,  [**684**]{}, L99 (2008).
,  [**465**]{}, L6 (2017).
,  [**220**]{}, 1047 (1986).
,  [**457**]{}, L31 (1996).
,  [**179**]{}, 585 (1973).
,  [**448**]{}, L40 (2015).
,  [**223**]{}, L83 (1978).
, Proceedings of Science, [**96**]{}, 60 (Proc. of the Conference “The Extreme Sky: Sampling the Universe above 10 keV”, Otranto, Italy, October 13–17, 2009).
,  [**33**]{}, 149 (2007).
, J. Astrophys. Astron. [**5**]{}, 209 (1984).
,  [**39**]{}, 185 (1975).
, Science [**355**]{}, 817; arXiv:1609.07375v1 (2017a).
,  [**466**]{}, L48; arXiv:1609.06538v1 (2017b).
,  [**448**]{}, L43 (2015).
,  [**62**]{}, 223 (1993).
,  [**26**]{}, 54 (1982).
,  [**25**]{}, 508 (1999).
,  [**14**]{}, 390 (1988).
, arXiv:1410.8745 (2014).
,  [**454**]{}, 2539 (2015).
,  [**449**]{}, 2144 (2015).
, Acta Astronomica [**42**]{}, 145 (1992).
,  [**465**]{}, L119 (2017).
,  [**24**]{}, 337 (1973).
,  [**802**]{}, 131 (2015).
, Res. Astron. Astrophys. [**15**]{}, 517 (2015).
,  [**457**]{}, 1101 (2016a).
,  [**593**]{}, A16 (2016b).

[^1]: The disk luminosity in the region $R>R_s$ turns out then to be equal to the Eddington luminosity $L_d=(3/2) G M_* \dot{M}_0/R_s=L_{\rm ed}$ (Lipunova 1999).

[^2]: To be more precise, the height of the radiation-dominated shock in which the matter sinking in the column walls is heated to high temperatures above the neutron star surface.

[^3]: Note that even formally the above solution (\[ss-sol\]) for the decrease in the accretion rate at $R<R_s$ was obtained by assuming that the [*entire*]{} energy being released at $R<R_s$ is spent on the radiation acceleration of the outflowing matter (Lipunova 1999). Therefore, the frequently encountered assertion that the inner region gives a logarithmic $\sim L_{\rm ed}\ln{(\dot{M}_0/\dot{M}_{\rm ed})}$ increase of the luminosity of the outer disk is incorrect.
---
abstract: 'Based on network analysis of hierarchical structural relations among Chinese characters, we develop an efficient learning strategy of Chinese characters. We regard a learning method as more efficient if one learns the same number of useful Chinese characters with less effort or time. We construct a node-weighted network of Chinese characters, where character usage frequencies are used as node weights. Using this hierarchical node-weighted network, we propose a new learning method, the distributed node weight (DNW) strategy, which is based on a new measure of node importance that takes into account both the weight of the nodes and the hierarchical structure of the network. Chinese character learning strategies, particularly their learning order, are analyzed as dynamical processes over the network. We compare the efficiency of three theoretical learning methods and two commonly used methods from mainstream Chinese textbooks, one for Chinese elementary school students and the other for students learning Chinese as a second language. We find that the DNW method significantly outperforms the others, implying that the efficiency of current learning methods of major textbooks can be greatly improved.'
author:
- 'Xiaoyong Yan$^{1,2}$, Ying Fan$^{1,3}$, Zengru Di$^{1,3}$, Shlomo Havlin$^{4}$, Jinshan Wu$^{1,3,\dag}$'
bibliography:
- 'characters.bib'
title: Efficient learning strategy of Chinese characters based on network approach
---

[**[Introduction]{}**]{}. It is widely accepted that learning Chinese is much more difficult than learning western languages, and the main obstacle is learning to read and write Chinese characters. However, students who have learned a certain amount of Chinese characters and gradually come to understand the intrinsic coherent structure of the relations between them quite often find that learning Chinese is not that hard [@Bellassen]. Unfortunately, such experiences remain at the individual level.
To date, no textbook has systematically exploited these intrinsic coherent structures to form a better learning strategy. Here we explore such relations between Chinese characters systematically and use them to form an efficient learning strategy. Complex network theory has been found useful in diverse fields, ranging from social systems and economics to genetics, physiology and climate systems [@Watts; @Strogatz; @Albert; @Newman; @Wu; @Costa; @Fortunato]. An important challenge in studies of complex networks in different disciplines is how network analysis can improve our understanding of the function and structure of complex systems [@Costa; @Fortunato; @Chen]. Here we address the question of whether and how a network approach can improve the efficiency of learning Chinese. Unlike western languages such as English, Chinese characters are not alphabetic but rather ideographic and orthographic [@Branner]. A straightforward example is the relation among the Chinese characters ‘’, ‘’ and ‘’, representing tree, woods and forest, respectively. These characters appear as one tree, two trees and three trees. The connection between the composition forms of these characters and their meanings is obvious. Another example is ‘’ (root), which is also related to the character ‘ ’ (tree): a bar near the bottom of a tree denotes the tree root. Such relations among Chinese characters are common, though they are not always easy to recognize intuitively and, even worse, may have become fuzzy after a few thousand years of evolution of the characters. However, the overall forms and meanings of Chinese characters are still closely related [@Qiu; @Bai; @Bellassen]: usually, combinations of simple Chinese characters are used to form complex characters. Most Chinese users and learners eventually notice such structural relations, although often only implicitly, through accumulated knowledge of and intuition about Chinese characters [@Lam1].
Making use of such relations explicitly might help turn rote learning into meaningful learning [@Novak:Cmap], which could improve the efficiency of students’ Chinese learning. In the above example of ‘’, ‘ ’, and ‘’, instead of memorizing all three characters individually in rote learning, one just needs to memorize one simple character ‘’ and then use the logical relation among the three characters to learn the other two. However, such structural relations among Chinese characters have not yet been fully exploited in practical Chinese teaching and learning. As far as we know, among all mainstream Chinese textbooks, that of Bellassen et al. [@Bellassen] is the only one that has partially taken such structural information into consideration. However, considerations of such relations in teaching Chinese in their textbook are, at best, at the individual-character level and focus on the details of using such relations to teach characters one by one. With network analysis tools at hand, we are able to analyze these relations at the system level. The goal of the present manuscript is to perform such a system-level network analysis of Chinese characters and to show that it can be used to significantly improve Chinese learning. Major aspects of strategies for teaching Chinese include character set choices, the teaching order of the chosen characters, and details of how to teach every individual character. Although our investigation is potentially applicable to all three aspects, we focus here only on the teaching order question. The learning order of English words is a well-studied and well-established question [@English_Order]; however, there are almost no such explicit studies for Chinese characters. In this work, the character choice is taken to be the set of the most frequently used characters, covering $99\%$ of accumulated usage frequency [@Frequency].
To demonstrate our main point, how network analysis can improve Chinese learning, we focus here on the issue of Chinese character learning order. Although some researchers have applied complex network theory to study the Chinese character network [@Li; @Lee], they mainly focus on the network’s structural properties and/or evolution dynamics, but not on learning strategies. A recent work studied the evolution of relative word usage frequencies and its implication on coevolution of language and culture [@Petersen]. In contrast to these studies, our work considers the whole structural Chinese character network and, more importantly, its value for developing efficient Chinese character learning strategies. We find that our approach, based on both word usage and network analysis, provides a valuable tool for efficient language learning. [**[Data and methods.]{}**]{} Although nearly a hundred thousand Chinese characters have been used throughout history, modern Chinese no longer uses most of them. For a common Chinese person, knowing $3,000 - 4,000$ characters will enable him or her to read modern Chinese smoothly. In this work, we thus focus only on the most used $3500$ Chinese characters, extracted from a standard character list provided by the Ministry of Education of China [@Characters]. According to statistics [@Frequency], these 3500 characters account for more than $99\%$ of the accumulated usage frequency in the modern Chinese written language. ![\[fig1\] Chinese character decomposition and network construction. The numerical values in the figure represent learning cost, which will be discussed later.](Wu_fig1.pdf){width="8.4cm"} Most Chinese characters can be decomposed into several simpler sub-characters [@Qiu; @Bai]. For instance, as illustrated in Fig.
\[fig1\], character ‘’ (meaning ‘add’) is made from ‘’ (ashamed) and ‘’ (water); ‘’ can then be decomposed into ‘’ (head, or sky) and ‘’ (heart), and ‘’ can be decomposed into ‘’ (one) and ‘’ (a person standing up, or big). The characters ‘’, ‘’, ‘ ’ and ‘’ cannot be decomposed any further, as they are all radical hieroglyphic symbols in Chinese. There are general principles about how simple characters form compound characters, the so-called “Liu Shu” (the six ways of creating Chinese characters). Ideally, when, for example, two characters are combined to form another, the compound character should be connected to its sub-characters either via their meanings or via their pronunciations. We have illustrated those principles using characters listed in Fig. \[fig1\]. See [**[Supporting Online Material]{}**]{} for more details. While certain decompositions are structurally meaningful and intuitive, others are not that obvious, at least with the current Chinese character forms [@Bai]. In this work, we are not concerned with the question of to what extent Chinese character decompositions are reasonable (the so-called Chinese character rationale [@Qiu]), but rather with the existing structural relations (sometimes called character-formation rationale or configuration rationale) among Chinese characters and how to extract useful information from these relations to learn Chinese. Our decompositions are based primarily on Ref. [@ShuoWen; @Qiu; @Bai]. Following the general principles shown in the above example and the information in Ref. [@ShuoWen; @Qiu; @Bai], we decompose all 3500 characters and construct a network by connecting character $B$ to $A$ (adjacency matrix element $a_{BA}=1$, otherwise zero) through a directed link if $B$ is a “direct” component of $A$. Here, “direct” means to connect characters hierarchically (see Fig.
\[fig1\]): Assuming $B$ is part of $A$, if $C$ is part of $B$ and thus in principle $C$ is also part of $A$, we connect only $B$ to $A$ and $C$ to $B$, but NOT $C$ to $A$. In constructing this network, there are additional considerations on including specific characters that are not within the list of the $3500$ most-used characters but are used as radicals of characters in the list. More technical details can be found in the [**[Supporting Online Material]{}**]{}. Decomposing characters and building up links in this way, the network is a Directed Acyclic Graph (DAG), which has a giant component of $3687$ nodes (see [**[Supporting Online Material]{}**]{} for details on the number of nodes) and $7024$ links, plus $15$ isolated nodes. Fig. \[fullmap\] is a skeleton illustration of the full map of the network. ![\[fullmap\] Full map of the Chinese character network. For a better visual demonstration, we plot here the minimum spanning tree of the whole network, shown in blue, while other links are presented in grey as background. All characters can be seen when the figure is magnified properly. ](Wu_fig2.pdf){width="8.4cm"} As a DAG, the Chinese character network is hierarchical. Starting from the bottom in Fig. \[fig1\], where nodes have no incoming links, we can assign a number to a character to denote its level: all components of a character should have lower levels than the character itself. Fig. \[fig2\](a) shows the hierarchical distribution of characters in the network. The figure shows that the network has a small set of radical characters ($224$ nodes at the bottom level, level $1$), while nearly $94\%$ of the characters lie at higher levels. Moreover, the network has a broad, heterogeneous offspring degree distribution (a node’s offspring degree is defined as its number of outgoing edges). Notice in Fig.
\[fig2\](b) that the number of characters with more than one offspring (the smallest number on the vertical axis) is close to $1000$ (the largest number shown on the horizontal axis). This means that fewer than $1000$ of the $3687$ characters are involved in forming other characters. The other characters are simply the top ones in their paths, so that no characters are formed based on them. Their distribution in the different levels is also shown in Fig. \[fig2\]a. ![\[fig2\] Topological properties of the Chinese character network. (a) Hierarchical distribution: number of characters at each level. The number of characters in each level that have no offsprings is shown in brown. (b) Node-offspring distribution: Zipf plot, where characters are ranked according to their number of offsprings. The number of offsprings of a character is plotted against the rank of the character.](Wu_fig3.pdf){width="8.4cm"} [**[Learning Strategy.]{}**]{} The heterogeneity of the hierarchical structure reflected in the broad node-offspring distribution in the Chinese character network suggests that learning Chinese characters in a “bottom-up” order (starting from level $1$ characters and gradually climbing along the hierarchical paths) may be an efficient approach. At the level of learning of [*[individual]{}*]{} characters, Chinese teaching has indeed used this rationale [@Bellassen; @Zhou]. Other approaches are based on character usage frequencies, learning first the most used characters, i.e., those appearing in the most used words (Ref. [@Lam2] provides a critical review of this and other approaches). To assess the efficiency of different approaches, here limited to Chinese character learning orders, one needs a method to measure learning efficiency. However, measuring learning efficiency is not trivial, and to the best of our knowledge no such measure currently exists.
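The hierarchical level assignment described above (radicals at level 1, every character one level above its highest direct component) can be sketched on a toy decomposition graph. The labels A–E below are hypothetical stand-ins for actual characters, since the glyphs themselves cannot be reproduced here:

```python
# Toy version of the decomposition DAG: components[x] lists the direct
# sub-characters of x; radicals have no components.
components = {
    "A": ["B", "C"],   # A is composed of B and C
    "B": ["D", "E"],   # B is composed of the radicals D and E
    "C": [], "D": [], "E": [],
}

def level(char):
    """Hierarchical level: radicals sit at level 1; every other character
    sits one level above its highest direct component."""
    subs = components[char]
    if not subs:
        return 1
    return 1 + max(level(s) for s in subs)

print({c: level(c) for c in components})
# {'A': 3, 'B': 2, 'C': 1, 'D': 1, 'E': 1}
```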
In our approach, we regard a learning strategy as more efficient if it reaches the same learning goal, a desired number of learned characters or accumulated character usage frequency, with lower learning costs compared to other strategies. The question thus becomes: how should the learning cost be determined? Of all possible factors related to cost, it is reasonable to assume that a character with more sub-characters and more unlearned sub-characters is more difficult to learn. For example, the character ‘’, with 5 sub-characters, is obviously more difficult to learn than ‘’, with 2 sub-characters. Conversely, it is easier to learn a character for which all sub-characters have been learned earlier than another character with the same number of sub-characters, all of which are previously unknown to the learner. We thus intuitively define the cost for a student to learn a character as the sum of the number of sub-characters and the learning cost of the unlearned sub-characters at his or her current stage. The learning cost of the unlearned sub-characters is calculated recursively until characters at the first level are reached or until all sub-characters have been learned previously. Each unlearned character of the first level contributes cost $1$, while previously learned characters contribute cost $0$. For example, assume that, at a given stage, a student needs to learn the character ‘’ and already knows the characters shown in blue in Fig. \[fig1\]. We demonstrate the cost for the student to learn this character. First, the character ‘ ’ has $2$ sub-characters (‘’ and ‘’), and the student does not know one of them, ‘’. The total cost of learning the character ‘’ is thus equal to $2$ plus the cost of learning ‘’, which, calculated using the same principle, is $2$ ($2$ sub-characters, ‘’ and ‘’, none of which are new to the student). The cost for the student is thus $4$.
If the student somehow learned the character ‘’ before and then needs to learn ‘’, the cost of acquiring ‘’ is only $2$. Thus, to learn both characters, it is cheaper to first learn ‘ ’ and then ‘’ (total cost $2+2=4$), rather than the other way around ($4+2=6$). If we assume that learning more characters, independent of their usage frequency, is the learning goal, the optimal learning strategy is to follow the node-offspring order (NOO) from many to few, which means learning characters with more offspring first. In this way, an ancestor character is always learned before its offspring characters, since the ancestor has at least one more offspring than the offspring character. From the learning cost definition, we know that using this approach we never waste effort in learning characters twice. No other strategy is thus better than this one. However, in this way we might learn many characters with low usage frequencies, which are less useful. Hence, as shown in Fig. \[fig3\]b, if our aim is acquiring more accumulated usage frequency, the NOO-based strategy is indeed not a good one. Being able to achieve a high accumulated usage frequency in a relatively short time is not only good for those who cannot spend much time but also helps students with extracurricular reading. Thus, our main objective is to develop a learning strategy that reaches the highest accumulated usage frequency with limited cost. When simply following the character usage frequency order (UFO method) from high to low, one discards topological relations among characters that could help in the learning process and save cost. In UFO, one learns characters at higher levels before those at lower levels, which is more costly. Thus, the question comes down to developing a new centrality measure of character importance that considers both topological relations and usage frequencies. Such a measure could help to obtain a learning order better than both NOO and UFO.
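The recursive learning-cost definition and the worked example above can be sketched as follows; the labels A–E are hypothetical stand-ins for the characters in Fig. \[fig1\] (A = B + C, B = D + E, with C, D and E already learned):

```python
def learning_cost(char, components, learned):
    """Recursive learning cost as defined in the text: the number of direct
    sub-characters plus the cost of each still-unlearned sub-character.
    An unlearned radical (no sub-characters) costs 1, a learned one 0."""
    subs = components.get(char, [])
    if not subs:
        return 0 if char in learned else 1
    return len(subs) + sum(
        learning_cost(s, components, learned) for s in subs if s not in learned
    )

# Toy graph mirroring the worked example.
components = {"A": ["B", "C"], "B": ["D", "E"]}
print(learning_cost("A", components, learned={"C", "D", "E"}))       # 4
print(learning_cost("A", components, learned={"B", "C", "D", "E"}))  # 2
```

Learning B first and then A thus costs $2+2=4$, whereas the reverse order costs $4+2=6$, matching the example in the text.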
One additional consideration is to first learn the characters with larger out-degree in the character network, since a large out-degree means the character is involved as a component in many characters. The method proposed below in fact takes all three of these aspects into consideration. Here we develop a centrality measure that we call distributed node weight (DNW), based on both the network structure and the usage frequencies, which serve as the node weights ($W^{(m)}_j$). Here $j$ represents the node (character) and $m$ its level in the network. The top level is $m=5$ (no outgoing links) and the bottom level is $m=0$ (no incoming links). To measure the character centrality of node $j$ at level $m$, we pick each of its predecessors (denoted as node $i$ at level $m+1$) and add its weight $W^{(m+1)}_i$ multiplied by $b$ to the weight $W^{(m)}_j$ as follows: $$\label{eq1} \tilde{W}^{(m)}_j=W^{(m)}_j+b\sum_{i}W^{(m+1)}_i a_{ji},$$ where $b\geq0$ is a parameter and $a_{ji}=1 \mbox{ or } 0$ is the adjacency matrix element from node $j$ to node $i$ (whether or not character $j$ is a direct part of character $i$). In the DNW method one learns characters in order according to their centrality, from highest to lowest. Thus, when $b=0$, the DNW is equivalent to the UFO method. For $b>0$, the node’s offspring play an important role. When $b=1$ and all $W_{j}=1$ (which means ignoring the difference in character usage frequencies), the DNW centrality order becomes the node-offspring order (NOO). In this sense, the NOO is an unweighted version of the DNW. The DNW order can thus be considered a hybrid of the NOO and UFO. ![\[fig3\] Learning efficiency comparison for different learning orders: node-offspring order (NOO), usage frequency order (UFO), distributed node weight (DNW) and two common empirical orders (EM1 for Chinese pupils and EM2 for LCSL). (a) Number of characters is set as the learning goal. (b) Accumulated usage frequency is set as the learning goal.
$C_{min}$ is defined as the learning cost of $1775$ characters using the NOO method and it will be used in the discussion of the learning efficiency index.](Wu_fig4a.pdf "fig:"){width="4.2cm"} ![](Wu_fig4b.pdf "fig:"){width="4.2cm"} Using numerical analysis, we find that the optimal $b$ value for the DNW strategy is $b\simeq 0.35$, as discussed below. With this optimal parameter $b$, we compare our strategy of DNW learning order against the NOO and the UFO in Fig. \[fig3\]. We find in Fig. \[fig3\]a that DNW is close to NOO, regarding the total number of characters vs. the learning cost. However, in Fig. \[fig3\]b, the DNW is significantly better than NOO and even better than UFO, regarding the total accumulated usage frequency vs. the learning cost. In the left panel, NOO and DNW are much better than UFO, while in the right panel the UFO and DNW are much better than NOO. Thus, only the DNW demonstrates a high efficiency in both accumulated frequency and total number of characters. The DNW in the right figure appears to be only slightly better than the UFO, but this is a little misleading. From the left figure we can see that at the same cost, say around $1000$, although the difference between the two is relatively small in the right figure, there is a much bigger difference in the left figure. This means that even though the DNW is only slightly better than the UFO on the accumulated usage frequency, significantly more characters are learned following the DNW than the UFO.
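A literal implementation of the DNW score in Eq. (\[eq1\]) is sketched below on a toy graph with hypothetical characters and made-up usage frequencies; each character's score is its own frequency plus $b$ times the frequencies of the characters it directly composes:

```python
def dnw_scores(freq, components, b=0.35):
    """Literal form of Eq. (1): score(j) = W_j + b * sum of W_i over all
    characters i of which j is a direct component.
    components[x] lists the direct sub-characters of x."""
    parents = {c: [] for c in freq}       # parents[j] = characters containing j
    for parent, subs in components.items():
        for s in subs:
            parents[s].append(parent)
    return {j: freq[j] + b * sum(freq[i] for i in parents[j]) for j in freq}

# Hypothetical frequencies for the toy graph A = B + C, B = D + E:
freq = {"A": 5.0, "B": 0.5, "C": 3.0, "D": 1.0, "E": 0.2}
components = {"A": ["B", "C"], "B": ["D", "E"]}
scores = dnw_scores(freq, components, b=0.35)
order = sorted(scores, key=scores.get, reverse=True)
print(order)  # ['A', 'C', 'B', 'D', 'E']
```

Note how the rarely used component B (frequency $0.5$) is promoted ahead of D because it helps form the frequent character A; with $b=0$ the order would collapse to the plain frequency order.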
Such a difference in the number of known characters is sometimes as important as the accumulated usage frequency when estimating whether an individual is literate or not. For beginners, $400-500$ characters is roughly the first barrier. Many stop there. Using the UFO, this corresponds to a cost of about $2000$, while using the DNW it is only around $1000$. Thus, it will be much easier for students to overcome this barrier when using the DNW compared to the UFO. We next compare the DNW against two commonly used empirical orders: one is from a widely used set of Chinese textbooks [@Textbook1] for primary schools in China, which contains $2475$ different Chinese characters (EM1); the other is from a mainstream Chinese textbook [@Textbook2] for students Learning Chinese as a Second Language (LCSL), which contains $1775$ different Chinese characters (EM2). We sort the two character sets by the first appearances of characters in the new-character lists of the two textbooks and plot their learning results in Fig.\[fig3\]. The figure shows that compared to our developed DNW method, the empirical learning orders have relatively poor performance in both the total number of characters and the accumulated usage frequency. This emphasizes the urgent need to improve the efficiency of current Chinese character learning. [**[Optimal b.]{}**]{} To find the optimal $b$ value, we define an efficiency index for learning strategies. We first take a certain learning cost and denote it as $C_{min}$, which is here set to be the cost of learning a total of $N_{min}=1775$ characters using the NOO order ($C_{min}=3351$, see Fig. \[fig3\]a). We intuitively assume that the sooner a curve reaches $N_{min}$, the more efficient the learning. Thus, the larger the area under a curve in Fig. \[fig3\]a, the more efficient the learning can be regarded. The same consideration holds for the curves in Fig. \[fig3\]b.
We therefore measure the areas underneath the learning efficiency curves (Fig.\[fig3\]) up to cost $C_{min}$ and denote them as $S_n$ (area under a curve of number of characters vs. cost, like the ones in Fig. \[fig3\]a) and similarly $S_f$ (area under a curve of accumulated usage frequency vs. cost, like those in Fig. \[fig3\]b), respectively. The ratio between the area underneath the curves $S_{n}$ ($S_{f}$) and the area of a rectangular region defined by $C_{min}N_{min}$ ($C_{min}F_{min}$, where $F_{min}$ is the maximum accumulated frequency of the curves at $C=C_{min}$) is defined as the learning efficiency index, $$\begin{aligned} v_n=\frac{S_n}{C_{min}N_{min}},\\ v_f=\frac{S_f}{C_{min}F_{min}}. {\label{eq:speed}}\end{aligned}$$ The sooner a curve reaches $N_{min}$ ($F_{min}$), the larger the area and the ratio, and the more efficient the learning order. In this sense, the above ratios serve as indices of the efficiency of learning orders. In Fig. \[fig4\], we plot $v_n$ and $v_f$ of the hybrid strategy (DNW) as functions of $b$. For comparison, we also plot two lines showing the learning efficiency of the NOO (blue line) and the UFO (green line). As $b$ increases, $v_n$ of the hybrid strategy approaches that of the NOO. On the other hand, when $b=0.35$, $v_f$ of the hybrid strategy reaches its maximum. Thus, with respect to frequency usage the DNW with $b=0.35$ is the most efficient. However, if we also consider the number of characters, values of $b\in\left[0.35, 0.7\right]$ can be regarded as very good choices. As an example, in this work we use $b=0.35$, which shows a significant improvement over commonly used methods (Fig. \[fig3\]). ![\[fig4\] Efficiency index of hybrid strategies as a function of $b$ (dots). The two horizontal lines are the efficiency of the node-offspring order (blue line) and the usage frequency order (green line). (a) Efficiency when using number of characters as the learning goal.
(b) Efficiency when using accumulated usage frequency as the learning goal.](Wu_fig5.pdf){width="8.4cm"} In order to compare the DNW strategy against the others in more detail, we have analyzed the learning cost statistics of the characters covered by cost $C_{min}$ for all five learning strategies in Fig. \[fig5\]. Recall that $C_{min}$ is the cost of learning the first $1775$ characters using the NOO, and the number of characters covered by this $C_{min}$ differs between methods. Using the measure of learning cost proposed earlier, we record the learning cost of every character before the accumulated cost reaches $C_{min}$ in each learning order and then plot a histogram of the learning costs of all those characters for each learning order. From Fig. \[fig5\]a, we see that in both the DNW and NOO learning orders, characters with learning cost $2$ are dominant (roughly $80\%$). In these two learning orders, few characters have a learning cost higher than $3$. The other three learning orders have a much smaller fraction of cost-$2$ characters and more characters with cost higher than $3$. Most Chinese characters can be decomposed into $2$ direct parts; therefore, a learning cost of $2$ means that when a character is learned, its parts have usually been learned before. This is natural in the NOO order since it is designed that way. However, as seen here it also holds in the DNW order, which is a major advantage of the DNW order. In Fig. \[fig5\]b we also plot the corresponding usage frequencies of the sets of characters with the same learning cost. In the DNW one in fact learns about $6\%$ fewer characters compared to the NOO, but the usage of the characters learned in the DNW is more than $30\%$ higher. Thus the DNW is significantly better than the NOO. We also find that although the DNW and UFO have comparable overall usage frequencies, the DNW is concentrated on the cost-$1$ and cost-$2$ characters while the UFO is distributed widely over characters with learning costs from $1$ to $4$.
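The efficiency indices $v_n$ and $v_f$ defined above are simply normalised areas under a learning curve up to $C_{min}$, and can be computed by trapezoidal integration; the sample curves below are illustrative, not the paper's data.

```python
# Trapezoidal estimate of the learning-efficiency index: the area under a
# (cost, amount-learned) curve up to c_min, divided by the enclosing
# rectangle c_min * goal (N_min or F_min). Sample curves are illustrative.

def efficiency_index(costs, learned, c_min, goal):
    points = list(zip(costs, learned))
    area = 0.0
    for (c0, n0), (c1, n1) in zip(points, points[1:]):
        if c0 >= c_min:
            break
        c1c = min(c1, c_min)
        # linearly interpolate the curve at the clipped right endpoint
        n1c = n0 + (n1 - n0) * (c1c - c0) / (c1 - c0)
        area += 0.5 * (n0 + n1c) * (c1c - c0)
    return area / (c_min * goal)

# A straight line from (0, 0) to (C_min, N_min) gives exactly 0.5;
# a curve that rises faster early on scores higher.
print(efficiency_index([0, 3351], [0, 1775], 3351, 1775))
print(efficiency_index([0, 1000, 3351], [0, 1500, 1775], 3351, 1775))
```

The index thus rewards orders whose curves rise early, exactly the intuition stated in the text.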
This further illustrates why our DNW is an efficient learning order in both the sense of the total number and the total usage frequency of characters. ![\[fig5\] Up to a fixed total learning cost $C_{min}$, for all five learning orders, we count and plot the number of characters according to their individual learning costs in (a) and convert the number of characters into the corresponding usage frequency in (b). ](Wu_fig6.pdf){width="8.4cm"} [**[Conclusion and Discussion]{}**]{}. We demonstrate the potential of the network approach to significantly increase the efficiency of learning Chinese. By including character usage frequencies as node weights in the structural character network, we discover and develop an efficient learning strategy which makes it possible to turn rote learning of Chinese characters into meaningful learning. In the [**[Supporting Online Material]{}**]{}, we present an adjacency list form of the constructed network; we also list the Chinese characters ordered according to our DNW centrality. The constructed network might also help design a customized Chinese character learning order for students who have previously learned some Chinese and want to continue their studies at their own pace. Given the information about a student’s known characters in our network, our DNW centrality measure can be adapted to find a student-specific optimal learning order. This goal is completely out of reach of standard textbook-based education, and it will be especially useful for Chinese learners who do not study Chinese in a formal Chinese school, study only occasionally, or learn with private tutors. We hope that our study will lead to the development of textbooks applying the DNW learning order and a detailed decomposition of each character. It would also be valuable for Chinese learners to have a dictionary explaining every character and word simply from a small core set of basic characters.
Note that we are not claiming that our decomposition is perfect or that our character choice is good enough. These questions are still debated in the field of Chinese character structure. There are possibly also other topological quantities that might be valuable for Chinese learning. Considering our node-weighted network, the concept of a path that accumulates the largest node weight in the fewest steps clearly differs from the usual shortest path. How such quantities are related to Chinese learning is an interesting question that we have not discussed in this work. Writers, reporters and citizens in China have argued that the Chinese textbooks currently used in mainland China are going in the wrong direction, and that textbooks used $70$ years ago seem to be more reasonable. Influenced by English teaching, Chinese teaching has indeed become increasingly speaking- and listening-oriented [@Lam2]. A speaking- and listening-oriented approach is a reasonable way to learn a phonetic language. However, for Chinese, an ideographic language, it results in an inefficient learning order of Chinese characters, where structurally complicated characters are often taught before simpler ones. What we are suggesting is that in designing the speaking, listening and reading materials, one should utilize the logographic relations among Chinese characters and also respect the optimal learning order discovered from analyzing the character network of these same relations. Only by using network analysis can we capture the entire picture of these structural relations. [**Acknowledgements**]{} This work was supported by NSFC Grants $61174150$ and $60974084$. [**Competing interests statement**]{} The authors declare that they have no competing financial interests. [**Correspondence**]{} should be addressed to J. Wu (jinshanw@bnu.edu.cn).
Supporting Online Material ========================== Data and methods ---------------- ### Decomposition of Chinese characters According to “Liu Shu” (six ways of creating Chinese characters), ideally when sub-characters are combined to form a character the compound character should be connected to its sub-characters either via their meanings or pronunciations. Thus, Chinese characters are usually meaningfully and coherently connected to each other. Let us start from the bottom of Fig. 1 in the main text. The four characters are “” (one), “”(person, big), “”(heart), “” (water). These characters closely resemble the shapes or characteristics of the objects to which they refer, though their forms today might not hold as much of a resemblance as their ancient forms. One can compare the modern simplified Chinese character against their ancient Zhuanti forms in the figures. Such characters are called pictographic (Xiangxing) characters. Initially, the character “” (sky) refers to the head, the primary part of a person, by placing a bar over the character “”(person, big). The meaning later developed and became the sky, heaven and god, the primary part of everything as ancient Chinese people believed. This way of forming new characters from radical parts is called “simple” ideogram (Zhishi) or “combination character” ideogram (Huiyi). These two mechanisms are in fact slightly different in that the first is based on only one radical part, usually with only a very simple additional stroke while the second usually involves two radical parts. For a character formed by these two principles, its meaning usually can be read out intuitively from the combination. For example, the character ‘’ (forest) mentioned in the introduction of the main text follows the principle of “combined” ideogram: it is a stack of three ‘’(tree). However, in this work, we will not distinguish the two mechanisms. The character ‘’ (, ashamed) is a compound character of ‘’ and ‘’. 
It follows a different principle, which later became popular in forming new Chinese characters, the so-called pictophonetic formation (Xingsheng). Here, ‘’ and ‘’ have exactly the same pronunciation, and the meaning of ‘’ refers to a psychological phenomenon, which was believed to be related to ‘’ (heart). The same pictophonetic relation holds among ‘’ (add), ‘’ and ‘’ (water): the first two share the pronunciation while the last part ‘’ is remotely connected to the meaning of ‘’. In Fig.1 of the main text, we also notice that the characters ‘’ and ‘’ also form the character ‘ ’ (seep). The character ‘’ also follows the pictophonetic formation. It is quite common that some basic characters are used in quite a few composed characters. Here we have demonstrated four of the six principles. The other two are phonetic loan (Jiajie) and derivative cognates (Zhuanzhu). These two principles concern the usage of characters rather than the creation of new characters, and it is not the focus of this work to discuss the various usages of Chinese characters. Following the above general principles, our decompositions of characters are based primarily on Ref.\[11,12,21\]. The first is a standard reference in Chinese etymology studies, where the six principles were first explicitly discussed, and the last two are regarded as developments of the first, mainly due to discoveries of new materials, including Oracle characters (Jiaguwen) and Bronze characters (Jinwen). Starting from $3500$ characters, our network ends up with a giant component of $3687$ nodes and $7025$ links, plus $15$ isolated nodes. Why do we have more nodes than the total number of characters we start with? In our decomposition, we find some sub-characters beyond the set of the most used $3500$ characters. Sometimes, such sub-characters are just variations of their normal forms. The situation becomes more complicated when a radical appears whose corresponding normal form is not within the most-used set.
In such cases, we add these “never-independent characters” as extra nodes in the network. For example, ‘’ is such a rarely used character, but we keep it in our network. See Fig. 2 in the main text for the full map of structural relations among Chinese characters. ### Additional explanation of the definition of learning cost We define the learning cost of a character for a student to be the sum of the number of sub-characters and the learning cost (calculated recursively) of the sub-characters unlearned at his current stage. The recursive definition seems to imply that when a student is learning a compound character, he has to first recognize the sub-characters. However, this dynamic process is only a fictitious process used to represent the difficulty that the student faces in learning the character. It does not mean that the learning process indeed proceeds as such. Recall from the main text that the total cost of learning ‘’ before ‘’ is $4=2+2$, which comes from the fact that it has $2$ sub-characters and from the fact that the cost of learning the unknown ‘’ is $2$. Therefore, determining the cost of learning ‘’ first obviously involves the cost of learning ‘’. However, this does not imply that the student should have known ‘’ after acquiring ‘’. If it then happens that the student must learn ‘’, its learning cost is still $2$ even though he had learned ‘’ before. Thus the total learning cost of the two characters following the order ‘’ $\rightarrow$ ‘’ is $6$. Of course, if the student learned the character ‘’ meaningfully, then when he learns the character ‘ ’, he indeed also learns the relation between ‘’ and ‘ ’ (and the meaning of ‘’) explicitly from his books or his instructors, and the total cost for him to learn both characters is in fact $4$ (no cost for learning ‘’), which is the same as the cost of learning both characters in the order ‘’ $\rightarrow$ ‘ ’.
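The recursive cost just illustrated can be sketched directly. The romanized names ('chou', 'qiu', etc.) stand in for the characters of the example and are assumptions (the actual glyphs were lost in extraction), as is the convention that a basic character with no parts costs $1$.

```python
# Sketch of the recursive learning cost: cost(c) = number of direct parts
# of c, plus the cost of each part not yet known. Basic characters with no
# parts are assumed to cost 1. Romanized names are illustrative stand-ins.

def learning_cost(char, parts_of, known):
    parts = parts_of.get(char, [])
    if not parts:          # a basic character (assumed cost 1)
        return 1
    cost = len(parts)
    for p in parts:
        if p not in known:
            cost += learning_cost(p, parts_of, known)
    return cost

def total_cost(order, parts_of, known=()):
    """Cost of learning characters in the given order; a learned character
    becomes known, but its merely 'used' parts do not."""
    known, total = set(known), 0
    for c in order:
        total += learning_cost(c, parts_of, known)
        known.add(c)
    return total

# Mirror of the text's example: 'chou' = 'qiu' + 'xin', 'qiu' = 'he' + 'huo',
# with the three basic parts already known.
parts_of = {"chou": ["qiu", "xin"], "qiu": ["he", "huo"]}
base = {"he", "huo", "xin"}
print(total_cost(["chou", "qiu"], parts_of, base))  # compound-first order
print(total_cost(["qiu", "chou"], parts_of, base))  # part-first order
```

The two orders reproduce the $6$ vs. $4$ totals of the example above.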
Therefore, learning closely connected characters together at the same time, and learning them meaningfully, would reduce the cost. Thus, one might conclude that our definition of learning cost does not apply to such meaningful learning. However, we would argue that such meaningful learning has implicitly used the optimal learning orders: learning the two characters simultaneously and meaningfully is equivalent to learning them according to the proper order. Another problem related to our definition of learning cost is that we treated the number of sub-characters and the cost of unlearned sub-characters equally. This can be questioned and should be investigated further. For example, one might introduce a parameter to rescale the number of sub-characters and then sum the two together. For simplicity, we have not yet discussed this issue. Finding the proper value of such a parameter from empirical studies and then comparing the performance of the learning orders again using the new definition of cost should be an interesting topic. Supplemental Results -------------------- Finally, we provide two important lists of characters as the final results of our network-based analysis of Chinese characters. The first is the adjacency list of the network of characters. The first character of every line is the starting point of links and all other characters in the same line are the ending points of the links, meaning the first character is a part of every one of the other characters. The second is the order of Chinese characters listed according to the calculated DNW centrality. This list includes all $3500$ characters, and $b=0.5$ is used in the calculation of the DNW. In the main text, when $1775$ characters are used as the learning target, we find that the optimal value of the parameter $b$ is $b=0.35$. Repeating the same analysis for all $3500$ characters, we find that the learning efficiency is higher when $b=0.5$ is used instead of $b=0.35$.
Here the list is produced considering the whole set of most-used characters as the learning goal. The lists can be downloaded from our still-developing website on Chinese learning <http://www.learnm.org/data/>.
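For readers who want to experiment with the downloadable adjacency list, a parser for the line format described above might look as follows; the whitespace-separated sample lines with romanized names are assumptions for illustration, since the actual file uses Chinese characters.

```python
# Sketch: parse the adjacency-list format described above, where the first
# character on each line is a component and the remaining entries are the
# compounds containing it. Sample data are romanized placeholders.

def parse_adjacency(lines):
    edges = []
    for line in lines:
        head, *rest = line.split()
        edges.extend((head, compound) for compound in rest)  # head is a part of compound
    return edges

sample = ["mu lin xiu", "ren xiu"]
print(parse_adjacency(sample))
```

The resulting edge list can then be fed directly into any graph library for further analysis.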
[**[Vector-like quarks in a “composite” Higgs model]{}**]{} **[Abstract]{}** Vector-like quarks are a common feature of “composite” Higgs models, where they intervene in cutting off the top-loop contribution to the Higgs boson mass and may, at the same time, affect the Electroweak Precision Tests (EWPT). A model based on $SO(5)/SO(4)$ is analyzed here. In a specific non-minimal version, vector-like quarks of mass as low as 300-500 GeV are allowed in a thin region of its parameter space. Other models fail to be consistent with the EWPT. Introduction ============ The great success of the Standard Model (SM) in predicting the electroweak observables leaves many theoretical questions open. One of them is the famous “naturalness problem” of the Fermi scale: one looks for a non-accidental reason that explains why the Higgs boson is so light relative to any other short-distance scale in physics. In order to keep the Higgs boson mass near the weak-scale expectation value $v$ with no more than $10 \%$ finetuning it is necessary to cut off the top, gauge, and scalar loops at a scale $\Lambda_{nat} \lesssim 1-2$ TeV. This fact tells us that the SM is not natural at the energy of the Large Hadron Collider (LHC); more specifically, new physics that cuts off the divergent loops has to be expected at or below 2 TeV. In a weakly coupled theory this means new particles with masses below 2 TeV, related to the SM particles by some symmetry. For concreteness, the dominant contribution comes from the top loop. Thus naturalness arguments predict new multiplet(s) of top-symmetry-related particles that should be easily produced at the LHC, which has a maximum available energy of 14 TeV. The possibilities for extending the SM are many. Here we focus on a model (see [@contino2]) in which the Higgs particle is realized as a pseudo-Goldstone boson associated with the breaking $SO(5)\rightarrow SO(4)$ at a scale $f > v$.
In some sense this extension is “minimal” since we add only one field in the scalar sector. The Higgs mass will then be protected from self-coupling corrections, and the cutoff scale can be raised up to $3$ TeV. Following the approach of [@barbieri1], the $SO(5)$ symmetry then has to be extended to the top sector by adding new vector-like quarks in order to reduce the UV sensitivity of $m_h$ to the top loop. In principle new heavy vectors should also be included in order to cut off the gauge boson loops; however, here only the quark sector will be studied because the dominant contribution comes from the top. Moreover, from a phenomenological point of view, heavy quark searches at the LHC may be easier than heavy vector searches (as pointed out in [@barbieri2]). In enlarging the fermion sector it is necessary to fulfill the requirements of the Electroweak Precision Tests (EWPT). More specifically, as shown in Figure \[figuraewpt\], the composite nature of the Higgs boson and the physics at the cutoff produce two corrections to the $S$ and $T$ parameters of the SM. For this reason, in order to be consistent with data, one can look for a positive contribution to $T$ coming from the fermion sector. Another experimental constraint comes from the modified bottom coupling to the $Z$ boson. The main virtues of this model are minimality and effectiveness. That is, we concentrate on the fermion resonances, which can be lighter than the new gauge bosons and play a central role in reducing the sensitivity of the Higgs boson mass to the new physics. Moreover, we do so by introducing the least possible number of new particles and parameters. In fact there are models which can be compatible with the EWPT data and have the same scalar sector, but since they start from 5d considerations they are forced to introduce many more new fields (see e.g. [@contino] and [@carena-santiago]). In section \[modelloSO5\] a summary of some relevant previous works is reported.
In section \[extendedmodel\] I work out a non minimal model which can be consistent with data. In section \[othermodels\] two examples are given of other models ruled out by the EWPT. ![The experimentally allowed region in the $ST$ plane, including contributions “from scalars” and “from cutoff” (see [@barbieri1], section 2). The dashed arrow shows that an extra positive contribution to $T$ is needed in order to make the model consistent with data. In section \[extendedmodel\] it will be shown that such contribution may come from a suitably extended top sector. This figure is taken from [@barbieri1].[]{data-label="figuraewpt"}](./EWPT){width="60.00000%"} Summary of previous works {#modelloSO5} ========================= Making reference to [@contino2] and [@barbieri1] for a detailed description of the model, here I concentrate on quarks. The fermion sector has to be enlarged in such a way that the top is ($SO(5)$ symmetrically) given the right mass $m_t = 171$ GeV, and new heavy quarks are vector-like in the $v/f \rightarrow 0$ limit. The bottom quark can be considered massless at this level of approximation, while lighter quarks are completely neglected. The minimal way to do this is to enlarge the left-handed top-bottom doublet $q_L$ to a vector (one for each colour) $\Psi_L$ of $SO(5)$, which under $SU(2)_L \times SU(2)_R$ breaks up as $(2,2)+1$. The SM gauge group $G_{SM}=SU(2)_L \times U(1)$ is here given by the $SU(2)_L$ and the $T_3$ of the $SU(2)_R$ of a fixed subgroup $SO(4)=SU(2)_L \times SU(2)_R \subset SO(5)$. The full fermionic content of the third quark generation is now: $$\Psi_L= \left( q= \left( \begin{array}{c} t \\ b\end{array}\right) ,\, X= \left( \begin{array}{c} X^{5/3} \\ X \end{array}\right) , \, T \right)_L \, , \, t_R, X_R= \left( \begin{array}{c} X^{5/3} \\ X \end{array}\right)_R, T_R ,$$ where the needed right handed states have been introduced in order to give mass to the new fermions. 
Hypercharges are fixed in order to obtain the correct values of the electric charges. Note that the upper component of the “exotic” $X$ has electric charge $5/3$. In the next section an extended model with fermions in the fundamental representation will be examined. The spinor representation (see e.g. [@contino2]) is ruled out by requiring that the physical left-handed b-quark is a true doublet of $SU(2)_L$ and not an admixture of doublet and singlet, as noted in [@barbieri1] or in [@contino-lett]. The requirement that there be no left-handed charge $-\frac{1}{3}$ singlet to mix with $b_L$ is a sort of “custodial symmetry” which protects the $Zb\overline{b}$ coupling from large corrections ([@agashe-contino]). The Yukawa Lagrangian of the fermion sector consists of an $SO(5)$-symmetric mass term for the top (this guarantees the absence of quadratic divergences in the contribution to $m_h$, as shown by equation \[diverglogaritm5q\]) and the most general (up to redefinitions) gauge-invariant mass terms for the heavy $X$ and $T$: $$\label{lagr5iniziale} \mathcal{L}_{top}= \lambda_1 \overline{\Psi}_L \phi t_R + \lambda_2 f \overline{T}_L T_R + \lambda_3 f \overline{T}_L t_R +M_X \overline{X}_L X_R + h.c,$$ where $\phi$ is the scalar 5-plet containing the Higgs field. Note that the adjoint representation of $SO(5)$ splits into the adjoint representation of $SO(4)$ plus a $(4)$ of $SO(4)$: this fact guarantees that the Goldstone bosons of the $SO(5)\rightarrow SO(4)$ breaking have the quantum numbers of the Higgs doublet.
Up to rotations that preserve all the quantum numbers, with a convenient definition of the various parameters, we can rewrite \[lagr5iniziale\] in the form: $$\label{yukawaminimal} \mathcal{L}_{top}=\overline{q}_L H^c (\lambda_t t_R + \lambda_T T_R) + \overline{X}_L H (\lambda_t t_R + \lambda_T T_R) + M_T \overline{T}_L T_R + M_X \overline{X}_L X_R + h.c.$$ Through diagonalization of the mass matrix we obtain the physical fields, in terms of which it is possible to evaluate the physical quantities. For example let us check the cancellation of the quadratically divergent contribution to $m_h$ due to the top loop, starting from the potential: $$\label{potenziale} V= \lambda (\phi^2 -f^2)^2 - A f^2 {\overrightarrow{\phi}}^2 + B f^3 \phi_5,$$ where ${\overrightarrow{\phi}}$ are the first four components of $\phi$. The Higgs boson mass can be shown to be controlled by the $A$ parameter, that is by the $SO(5)$-breaking term ($m_h = 2 v \sqrt{A}$ for big $\lambda$). This is reasonable since if everything were $SO(5)$-symmetric the Higgs particle would be a massless Goldstone boson. The divergent part of the one loop correction to $A$, evaluated as in [@barbierisusy] and setting $v=0$ for simplicity, is now: $$\begin{aligned} \delta A &=& -\frac{12 f^2}{64 \pi^2} \label{diverglogaritm5q} \lambda_1^2\left(\frac{M_X^2}{f^2}-4\left(\lambda_1+\lambda_3\right)^2-2\lambda_2^2\right)\log\Lambda^2 \nonumber \\ &=& -\frac{3}{16\pi^2 f^2} \left(\lambda_t^2+\lambda_T^2\right) \left(M_X^2 + M_T^2 \left(\frac{2}{1 + \lambda_T^2/\lambda_t^2}-4\right)\right)\log\Lambda^2.\end{aligned}$$ Notice that there is no quadratic divergence. Moreover $M_X$ and $M_T$ take the role of the cutoff $\Lambda$ in the original top-loop contribution. For this reason we cannot allow them to be much above 2 TeV, otherwise this logarithmic term alone produces a $\delta m_h$ of the same order of the weak-scale expectation value $v$, and we are led again to a naturalness problem. 
Some finetuning of the parameters $A, B$ of equation \[potenziale\] is necessary in order to obtain $v < f$. This can be quantified by the logarithmic derivative: $$\Delta = \frac{A}{v^2} \frac{{\partial}v^2}{{\partial}A} \approx \frac{v^2}{f^2}.$$ To avoid a large $\Delta$, throughout this paper I will assume $f=500$ GeV, which means $\approx 10 \%$ finetuning. This implies that for the “naturalness cutoff” of this model we have (see [@barbieri1] for a detailed discussion): $$\Lambda \approx \frac{4 \pi f}{\sqrt{N_g}} \backsim 3 \mbox{ TeV,}$$ where $N_g=4$ is the number of Goldstones. As shown in [@barbieri1] this model can be considered as the low-energy description of any model in which the EWSB sector has an $SO(5)$ global symmetry partly gauged with $G_{SM}=SU(2)\times U(1)$. Different models can be meaningfully compared at the same level of finetuning, which in practice means the same value of $f$. Generally, at this level of finetuning, the heavy vector resonances have masses exceeding the cutoff, or at least exceeding the energy scale at which $WW$-scattering violates unitarity in the effective sigma model with the heavy scalar sent above the cutoff[^1]. For this reason it is hard to see any gain in introducing them at all, since they do not substantially improve the calculability.
In order to check the compatibility with the EWPT, it is necessary to evaluate the relative deviations of the $T$ parameter and of the $Z \rightarrow b\overline{b}$ coupling with respect to the usual SM results: $$\hat{T}_{SM}=\frac{3 g^2 m_t^2}{64 \pi^2 m_W^2}=\frac{3 G_F m_t^2}{8 \sqrt{2} \pi^2}.$$ $$A^{bb}_{SM}=\frac{\lambda_t^2}{32 \pi^2} \quad , \quad A^{bs}_{SM}=V_{ts}V_{tb}^*A^{bb}_{SM},$$ where the definition of $A^{bb}$ and $A^{bs}$ is: $$\left( -\frac{1}{2} + \frac{\sin^2 \theta_W}{3} + A^{bb} \right)\frac{g}{\cos \theta_W} Z_\mu \overline{b}_L \gamma^\mu b_L \quad , \quad A^{bs}\frac{g}{\cos \theta_W} Z_\mu \overline{b}_L \gamma^\mu s_L$$ and the limit $m_t \gg m_W$ is understood (see [@barbieri3] or [@barbieri4]). The experimental constraints are summarized in Table \[expconstraints\], where the former condition comes from Figure \[figuraewpt\], the latter from LEP precision measurements[^2]. In principle one could also consider the constraint coming from b-factory data on $B\rightarrow X_s l^+ l^-$ decays: $$\frac{A^{bs}}{A_{SM}^{bs}} = 0.95 \pm 0.20,$$ however using this constraint or the one in Table \[expconstraints\], the final conclusions do not change. Table \[expconstraints\]: $$0.25 \leq \delta T_{fermions} \leq 0.50 \quad , \quad \frac{A^{bb}}{A_{SM}^{bb}} = 0.88 \pm 0.15 .$$ Analytic approximate expressions for $\delta T$ and $\delta A^{bb}$ can be found in [@barbieri1]. In Figure \[plot5q1.0\] a typical result of the numerical computation of the one-loop $\delta T$ and $A^{bb}/A^{bb}_{SM}$ is reported in terms of the parameters of Lagrangian \[yukawaminimal\]. The only effective free parameters are $M_X$, $M_T$ (which are roughly equal to the physical masses) and $\lambda_t / \lambda_T$, which is taken from 1/3 to 3 so that the theory is not strongly coupled. The result is that there are no allowed regions in the parameter space of this minimal model. This fact suggests considering the non-minimal model of the next section.
![Numerical isoplot of the one-loop corrections to $\delta T$ and $A^{bb}/A^{bb}_{SM}$ versus the Lagrangian parameters $(M_X,M_T)$ in the minimal model for $\lambda_T/\lambda_t= 1$. Note that there is no experimentally allowed region.[]{data-label="plot5q1.0"}](./5qTnumerico1 "fig:"){width="46.00000%"} ![Numerical isoplot of the one-loop corrections to $\delta T$ and $A^{bb}/A^{bb}_{SM}$ versus the Lagrangian parameters $(M_X,M_T)$ in the minimal model for $\lambda_T/\lambda_t= 1$. Note that there is no experimentally allowed region.[]{data-label="plot5q1.0"}](./5qZnumerico1 "fig:"){width="46.00000%"} An extended model in the top sector {#extendedmodel} =================================== In this section I study an explicit extended top sector, motivated solely by the requirements of the EWPT. Non-minimal model {#fenomenologia7q} ----------------- In this non-minimal model the $SO(5)$-symmetric quark sector is completely made up of new quarks, and the top mass term arises through order-$v/f$ mass mixing. The fermionic content is now: $$\Psi_{L,R}= \left( Q= \left( \begin{array}{c} Q^u \\ Q^d \end{array}\right) ,\, X= \left( \begin{array}{c} X^u \\ X^d \end{array}\right) , \, T \right)_{L,R} \, , \, q_L= \left( \begin{array}{c} t \\ b \end{array}\right)_L \, , \, t_R,$$ where $Q$ is now a standard ($Y=1/6$) $SU(2)_L$ doublet and the quantum numbers are the same as in the previous case.
The Yukawa Lagrangian is now $\mathcal{L} = \mathcal{L}_{int}+\mathcal{L}_{BSM}$, where $\mathcal{L}_{BSM}$ involves only “beyond the SM” fields with a non-renormalizable Yukawa interaction, and $\mathcal{L}_{int}$ describes the mass mixing of the standard fields with the heavy fermions: $$\begin{aligned} \label{interazioneNONRIN} \mathcal{L}_{BSM} &=& \frac{y}{f} \overline{\Psi}_L \phi \phi^T \Psi_R + m_Q \overline{Q}_L Q_R + m_X \overline{X}_L X_R+ m_T \overline{T}_L T_R +h.c. \nonumber \\ \mathcal{L}_{int} &=& \lambda_1 f \overline{q}_{L}Q_R + \lambda_2 f \overline{T}_L t_R +h.c. \end{aligned}$$ Defining: $$\lambda_t= \frac{y \lambda_1 \lambda_2 f^2}{\sqrt{(m_T+y f)^2+(\lambda_2 f)^2} \sqrt{m_Q^2+(\lambda_1 f)^2}}$$ $$\begin{array}{ll} M_T = \sqrt{(m_T+y f)^2+(\lambda_2 f)^2} & A=\frac{m_T+y f}{\lambda_2 f}\\ M_Q = \sqrt{m_Q^2+(\lambda_1 f)^2} & B=\frac{m_Q}{\lambda_1 f} \\ M_X =m_X & M_f=\lambda_t f \qquad, \end{array}$$ up to rotations which preserve quantum numbers, the charge-$2/3$ mass matrix becomes, with an obvious notation and “quark vectors” $(t,T,Q,X)_{L,R}$: $$\label{massmatrix7q} \left(\begin{array}{cccc} \lambda_t v & - A \lambda_t v & -\frac{\sqrt{1+A^2}(\lambda_t v)^2}{M_f} & -\frac{\sqrt{1+A^2}(\lambda_t v)^2}{M_f} \\ 0 & M_T & \sqrt{1+A^2}\sqrt{1+B^2} \lambda_t v & \sqrt{1+A^2}\sqrt{1+B^2} \lambda_t v \\ - B \lambda_t v & A B \lambda_t v & M_Q + \frac{B \sqrt{1+A^2}(\lambda_t v)^2}{M_f} & \frac{B \sqrt{1+A^2}(\lambda_t v)^2}{M_f} \\ - \sqrt{1+B^2} \lambda_t v & A \sqrt{1+B^2} \lambda_t v & \frac{ \sqrt{1+A^2}\sqrt{1+B^2}(\lambda_t v)^2}{M_f} & M_X +\frac{ \sqrt{1+A^2}\sqrt{1+B^2}(\lambda_t v)^2}{M_f} \end{array}\right) .$$ The physical masses of the charge $2/3$ quarks will be corrected by diagonalization, while the $Q^d$ (charge $-\frac{1}{3}$) and $X^u$ (charge $\frac{5}{3}$) masses remain exactly $M_Q$ and $M_X$, since there is no state for them to mix with.
As already mentioned, to avoid fine-tuning we shall take $f=500$ GeV, so that $M_f$ is not a free parameter. I report the exact one loop results for $\delta T$ and $A^{bb}$ up to order $\epsilon^2$ in the limit in which three of the masses are much larger than the fourth. For the correction to $T$ we have: $$\begin{aligned} \label{deltaT7qNonrinOrd2} M_Q,M_X,M_f>>M_T:& & \frac{\delta T}{T_{SM}}\approx 2A^2(\log\frac{M_T^2}{m_t^2} - 1 + \frac{A^2}{2})(\frac{m_t}{M_T})^2 \nonumber \\ M_T,M_X,M_f>>M_Q: & & \frac{\delta T}{T_{SM}}\approx 4B^2(\log\frac{M_Q^2}{m_t^2} - \frac{3}{2} + \frac{1}{3}B^2)(\frac{m_t}{M_Q})^2 \\ M_T,M_Q,M_f>>M_X: & & \frac{\delta T}{T_{SM}}\approx -4(1+B^2)(\log\frac{M_X^2}{m_t^2} -\frac{11}{6}-\frac{1}{3} B^2 )(\frac{m_t}{M_X})^2 \nonumber\end{aligned}$$ $M_T,M_Q,M_X>>M_f$: $$\begin{aligned} \frac{\delta T}{T_{SM}} &\approx & \frac{2(1+A^2)}{3(M_Q^2-M_X^2)^2} \{ 12 B \sqrt{1+B^2}(M_Q^3 M_X+M_Q M_X^3) - \\ &&- (1+2B^2)(7(M_Q^4+M_X^4)-26M_Q^2 M_X^2) + \\&& + \frac{6\log\frac{M_Q^2}{M_X^2}}{M_Q^2-M_X^2}(-4B\sqrt{1+B^2}M_Q^3 M_X^3-3M_Q^2 M_X^4 + M_X^6 + \\ && +B^2(M_Q^6 -3M_Q^4M_X^2-3M_Q^2M_X^4+M_X^6)) \}(\frac{\lambda_t v}{M_f})^2 .\end{aligned}$$ while for $Z \rightarrow b \overline{b}$ it is: $$\begin{aligned} M_Q, M_X>>M_T: & & \quad \frac{\delta A^{bb}}{A_{SM}^{bb}} \approx 2A^2(\log\frac{M_T^2}{m_t^2}-1+\frac{A^2}{2})(\frac{m_t}{M_T})^2 \label{Zinbb7qNonrinOrd2} \\ M_T, M_X>>M_Q: & & \frac{\delta A^{bb}}{A_{SM}^{bb}} \approx B^2(\log\frac{M_Q^2}{m_t^2}-1)(\frac{m_t}{M_Q})^2 + 2B \sqrt{1+A^2} \frac{(\lambda_t v)^2}{M_Q M_f} \nonumber\end{aligned}$$ $M_T, M_Q>>M_X:$ $$\frac{\delta A^{bb}}{A_{SM}^{bb}} \approx (1+B^2)(\log\frac{M_X^2}{m_t^2}-1)(\frac{m_t}{M_X})^2 + 2\sqrt{1+B^2} \sqrt{1+A^2} \frac{(\lambda_t v)^2}{M_X M_f}$$ These results are compatible with [@carena-santiago].
In the following, through numerical diagonalization of the mass matrix, it will be shown that compatibility with the experimental constraints of Table \[expconstraints\] is now possible, in a thin slice of parameter space.

Minimal values for the masses of the new quarks {#minimalmasses}
-----------------------------------------------

The parameter space has been studied for $\frac{1}{3}\leq A,B\leq 3$ with vector-like quark masses all below $M_{max}$, looking for experimentally allowed configurations with relatively light vector-like quarks. For naturalness considerations, $M_{max}$ cannot be much above 2 TeV (see equation \[diverglogaritm5q\]). A typical situation, for example $A=1.8$, $B=1.1$, $M_Q=900$ GeV, is represented in Figure \[plot1.8\_1.1\_Q900\], where I report the isolines of $\delta T$ and $A^{bb}$ in the $(M_X,M_T)$ plane. The thicker lines correspond to the regions constrained as in Table \[expconstraints\]. The small overlap between the two regions around $M_X, M_T \approx$ 1 TeV is the allowed portion of the parameter space.
![Isoplot of $\delta T$ and $A^{bb}/A^{bb}_{SM}$ in the $(M_X,M_T)$ plane for ($A=1.8$, $B=1.1$, $M_Q=900$ GeV) in the non-minimal model.[]{data-label="plot1.8_1.1_Q900"}](./18_11_Q900_T "fig:"){width="46.00000%"} ![Isoplot of $\delta T$ and $A^{bb}/A^{bb}_{SM}$ in the $(M_X,M_T)$ plane for ($A=1.8$, $B=1.1$, $M_Q=900$ GeV) in the non-minimal model.[]{data-label="plot1.8_1.1_Q900"}](./18_11_Q900_Zinbb "fig:"){width="46.00000%"}

For a better illustration of this case, consider for example an exact one loop calculation, which corresponds to a point in Figure \[plot1.8\_1.1\_Q900\] (masses in GeV): $$M_T=1000, \; M_X=1100, \; M_Q=900, \; A=1.8, \; B=1.1 \;\rightarrow\; \left\{ \begin{array}{l} M_T^{phys}=1010, \quad M_Q^{phys}=550, \quad M_{Q^{-1/3}}^{phys}=920, \\ M_X^{phys}=1940, \quad M_{X^{5/3}}^{phys}=1130, \\ \delta T = 0.28, \quad A^{bb}= 0.97 \end{array} \right.$$ Note the significant difference between $M_{heavy}$ and $M_{heavy}^{phys}$.
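The "exact one loop calculation" quoted above rests on a numerical diagonalization of the non-symmetric mass matrix (\[massmatrix7q\]): the physical masses are its singular values. A minimal sketch of that step is given below; the value of $\lambda_t v$ (taken of order the top mass) is an illustrative assumption, so the numbers printed are not the exact point quoted in the text.

```python
import numpy as np

# Sketch: physical charge-2/3 masses as singular values of the mass matrix
# of eq. (massmatrix7q). A, B and the Lagrangian masses (GeV) are the example
# point of the text; lt_v = lambda_t * v is an ASSUMED input of order m_t.
A, B = 1.8, 1.1
M_T, M_Q, M_X = 1000.0, 900.0, 1100.0
f = 500.0
lt_v = 170.0                 # assumption: lambda_t v ~ top mass scale
M_f = lt_v / 246.0 * f       # M_f = lambda_t f, with v = 246 GeV

sA, sB = np.sqrt(1 + A**2), np.sqrt(1 + B**2)
M = np.array([
    [lt_v,     -A * lt_v,    -sA * lt_v**2 / M_f,           -sA * lt_v**2 / M_f],
    [0.0,       M_T,          sA * sB * lt_v,                sA * sB * lt_v],
    [-B * lt_v, A * B * lt_v, M_Q + B * sA * lt_v**2 / M_f,  B * sA * lt_v**2 / M_f],
    [-sB * lt_v, A * sB * lt_v, sA * sB * lt_v**2 / M_f,     M_X + sA * sB * lt_v**2 / M_f],
])

# Singular values of the (non-Hermitian) mass matrix give the mass spectrum.
masses = np.sort(np.linalg.svd(M, compute_uv=False))  # ascending order
print(masses)
```

The lightest singular value plays the role of the top mass; in the full analysis the matrix is also rescaled so that this eigenvalue reproduces $m_t$ exactly, which is the origin of the rescaling mentioned below.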
This is mainly due to diagonalization splitting, but also to a mass-matrix rescaling which is necessary in order to get the correct value for the top mass. A most interesting phenomenological question concerns the smallest possible values for the masses of the new quarks which are compatible with the constraints of Table \[expconstraints\]. A study of the full parameter space allows one to assert that the following properties hold:

1. At least one of the new charge 2/3 quarks has to be heavy, that is around 1.9 TeV.

2. *Light Q:* The lightest possible new-quark state is the $Q^{2/3}$, which in principle can be as light as 290 GeV. In such configurations a heavy $T$ or $X$ is required, for example: $$M_T=1225, \; M_X=630, \; M_Q=320, \; A=2.81, \; B=0.33 \;\rightarrow\; \left\{ \begin{array}{l} M_T^{phys}=1890, \quad M_Q^{phys}=290, \quad M_{Q^{-1/3}}^{phys}=340, \\ M_X^{phys}=555, \quad M_{X^{5/3}}^{phys}=670, \\ \delta T = 0.40, \quad A^{bb}= 1.02 \end{array} \right.$$

3. *Light T:* Allowing $M_X \approx$ 1.9 TeV it is possible to obtain a $T$ quark mass around $500$ GeV, for example: $$M_T=940, \; M_X=1200, \; M_Q=960, \; A=1.86, \; B=1.1 \;\rightarrow\; \left\{ \begin{array}{l} M_T^{phys}=510, \quad M_Q^{phys}=1060, \quad M_{Q^{-1/3}}^{phys}=955, \\ M_X^{phys}=1940, \quad M_{X^{5/3}}^{phys}=1195, \\ \delta T = 0.43, \quad A^{bb}= 1.01 \end{array} \right.$$

4.
*Light X:* $M_{X^{2/3}}$ can be as low as 450 GeV with $X^{5/3}$ at 950 GeV, and a heavy $T$, for example: $$M_T=1152, \; M_X=969, \; M_Q=971, \; A=2.99, \; B=0.71 \;\rightarrow\; \left\{ \begin{array}{l} M_T^{phys}=2050, \quad M_Q^{phys}=925, \quad M_{Q^{-1/3}}^{phys}=1026, \\ M_X^{phys}=460, \quad M_{X^{5/3}}^{phys}=1024, \\ \delta T = 0.28, \quad A^{bb}= 0.93 \end{array} \right.$$

5. *Light $X^{5/3}$:* $M_{X^{5/3}}$ can be relatively small. From point 2 we see that the $X^{5/3}$ can also be as light as 670 GeV.

Allowed volume in parameter space {#parameterspace}
---------------------------------

Of some interest is the following question: how large is the volume of the parameter space which is allowed by the experimental data? To answer this question one can consider the fractional volume (using a linear sampling) of the experimentally allowed region in the relevant parameter space: $$\left\{ \frac{1}{3} \, \leq \, A, B \, \leq \, 3 \right\} \cap \left\{ 200 \mbox{ GeV } \leq \, M_{T, X, Q} \, \leq M_{max} \right\}$$ I call this fractional volume the “probability” of the model. Note that all the points in the “total volume” of this parameter space are viable in the sense of giving a correct EWSB, even if most of them do not satisfy the EWPT. In Figure \[spazioparametri\] the result of this calculation is given as a function of $M_{max}$. For example we have: $$\label{probability} \frac{\mbox{Allowed volume}}{\mbox{Total volume }(M_{max} = 2.5 \mbox{ TeV})}\approx 0.05 \% = \frac{1}{2000} \quad .$$ Note that for the model to be consistent with data it is necessary to have at least one $M_{heavy} \gtrsim 1$ TeV (which actually leads to one $M_{heavy}^{phys} \gtrsim 1.8$ TeV because of the mass splitting and rescaling, as explained in section \[minimalmasses\]).
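The fractional-volume ("probability") estimate can be organized as a simple linear Monte Carlo scan. The sketch below is purely structural: `is_allowed` is a hypothetical stand-in predicate encoding only the qualitative requirement of section \[minimalmasses\] (at least one heavy Lagrangian mass), not the actual one-loop evaluation of $\delta T$ and $A^{bb}$, so the printed fraction is far from the quoted $1/2000$.

```python
import numpy as np

# Linear sampling of the parameter space
#   1/3 <= A, B <= 3,   200 GeV <= M_T, M_Q, M_X <= M_max.
rng = np.random.default_rng(1)
M_max = 2500.0   # GeV
N = 100_000

A = rng.uniform(1 / 3, 3.0, N)
B = rng.uniform(1 / 3, 3.0, N)
M = rng.uniform(200.0, M_max, (3, N))   # rows: M_T, M_Q, M_X

def is_allowed(A, B, M):
    # HYPOTHETICAL stand-in constraint, NOT the EWPT computation:
    # it only demands at least one heavy Lagrangian mass (~1.9 TeV).
    return M.max(axis=0) > 1900.0

probability = is_allowed(A, B, M).mean()
print(f"fractional allowed volume (toy predicate) ~ {probability:.3f}")
```

Replacing the toy predicate by the full one-loop $\delta T$ and $A^{bb}$ test, applied pointwise, reproduces the kind of curve shown in Figure \[spazioparametri\].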
![% Probability of the allowed region (see text).[]{data-label="spazioparametri"}](./paramspacevol){width="75.00000%"}

Alternative models {#othermodels}
==================

In the course of this investigation other Lagrangian models for fermion masses have been considered, all involving more fields than the minimal model. In this section I briefly report on two of them, with different motivations. Neither gives an acceptable region of parameter space.

A different coupling {#so5rinormalizz}
--------------------

One may ask what happens if Lagrangian \[interazioneNONRIN\] is considered with a standard Yukawa fermion-scalar interaction instead of the non-renormalizable one studied in the previous section. This can be done by taking exactly the same fermion sector extension described in section \[fenomenologia7q\] with: $$\begin{aligned} \label{interazioneRIN} \mathcal{L} &=& \lambda_1 f \overline{q}_{L}Q_R + \lambda_2 f \overline{T}_L t_R +y \overline{\Psi}_L \phi t_R\nonumber \\ && + m_Q \overline{Q}_L Q_R + m_X \overline{X}_L X_R+ m_T \overline{T}_L T_R +h.c. \end{aligned}$$ Note that in this case there is no separation between $\mathcal{L}_{int}$ and $\mathcal{L}_{BSM}$ as in the model of section \[extendedmodel\].
Up to rotations which preserve quantum numbers, the charge-$2/3$ mass matrix is now, with the same notation as in \[massmatrix7q\]: $$\label{massmatrix7qRIN} \left(\begin{array}{c} \overline{t}_L^0 \\ \overline{T}_L^0 \\ \overline{Q}_L^0 \\ \overline{X}_L^0 \end{array}\right) \left(\begin{array}{cccc} -\lambda_t v & - A \lambda_t v & 0 & 0 \\ 0 & M_T & 0 & 0 \\ B \lambda_t v & A B \lambda_t v & M_Q & 0 \\ - \sqrt{1+B^2} \lambda_t v & A \sqrt{1+B^2} \lambda_t v & 0 & M_X \end{array}\right) \left(\begin{array}{c} t_R^0 \\ T_R^0 \\ Q_R^0 \\ X_R^0 \end{array}\right) +h.c,$$ where: $$\lambda_t= \frac{y \lambda_1 f^2}{\sqrt{1+\frac{f^2(\lambda_2 +y )^2}{(m_T)^2}} \sqrt{m_Q^2+(\lambda_1 f)^2}}$$ $$\begin{array}{ll} M_T = \sqrt{m_T^2+f^2(\lambda_2 +y)^2} & A=\frac{(\lambda_2+y) f}{m_T}\\ M_Q = \sqrt{m_Q^2+(\lambda_1 f)^2} & B=\frac{m_Q}{\lambda_1 f} \\ M_X =m_X . & \end{array}$$ Repeating the procedure to evaluate the $\delta T$ and $A^{bb}$ corrections, we obtain exactly the expressions \[deltaT7qNonrinOrd2\] and \[Zinbb7qNonrinOrd2\] with $M_f \rightarrow \infty$. Note however that the definitions of $A$ and $B$ are different. The result of the numerical calculation is that there is no experimentally allowed region. This fact also shows that compatibility with the EWPT is a delicate issue, and that in general the approximation $\frac{m_t}{M_{heavy}}\ll 1$ is not reliable.

A different model: $SU(4)/Sp(4)$
--------------------------------

In the literature several other models for the Higgs particle as a pseudo-Goldstone boson have been considered, based on different groups. A common feature shared with the $SO(5)/SO(4)$ model is an extended top sector. Here we consider a suitable extension of the $SU(4)/Sp(4)$ composite-Higgs theory described in [@katz]. The number of fields which can mix with the top is exactly the same as in the $SO(5)/SO(4)$ extended model, with the same number of free parameters.
Enlarging the top sector with new fermions in the vectorial representation of $SU(4)$, as done in [@katz], is problematic because there is an $SU(2)_L$ left-handed singlet with electric charge $-1/3$ which will mix at tree level with the bottom, and this is phenomenologically not defensible (see section \[modelloSO5\]). This problem is avoided with new fermions in the antisymmetric representation: $$A_L=\left( \begin{array}{cccc} 0 & Q_L & X_L^{5/3} & t_L \\ & 0 & X_L & b_L \\ & & 0 & T_L \\ & & & 0 \end{array} \right).$$ The quantum numbers of the fields are fixed by the natural embedding of $SU(2)_L \times SU(2)_R$ in $Sp(4)$. Introducing the needed right-handed states, the third generation is therefore enlarged as: $$\left(\begin{array}{c}X^{5/3}_{L,R} \\ X_{L,R} \end{array}\right)=(2)_{7/6} \quad , \quad \left(\begin{array}{c}t_L \\ b_L \end{array}\right)=(2)_{1/6} \quad , \quad t_R, Q_{L,R}, T_{L,R}=(1)_{2/3} .$$ The most general Lagrangian respecting $SU(2)\times U(1)$ gauge invariance and the $SU(4)$ symmetry of the Yukawa interaction is: $$\begin{aligned} \mathcal{L} &=& \lambda_1 f \overline{t}_R Q_L + \lambda_2 f \overline{t}_R T_L + \frac{1}{2} y_1 \overline{Q}_R tr(\Sigma^* A_L) + y_2 f \overline{Q}_R T_L \\ && + m_Q \overline{Q}_R Q_L + m_T \overline{T}_R T_L + m_X \left( \overline{X}_R X_L + \overline{X}^{5/3}_R X^{5/3}_L \right)\end{aligned}$$ where, keeping only the Yukawa interactions with the Higgs doublet: $$\frac{1}{2} tr (\Sigma^* A_L)= f (Q_L + T_L) + H \left(\begin{array}{c}t_L \\ b_L \end{array}\right) + H^c \left(\begin{array}{c}X^{5/3}_L \\ X_L \end{array}\right).$$ Here $f$ is the scale of the $SU(4)/Sp(4)$ breaking and $H$ is the Higgs doublet. This Lagrangian can be analyzed in complete analogy with the previous sections.
The mass matrix, concentrating on charge $2/3$ quark mass terms and up to quantum-number preserving rotations, is: $$\label{massmatrix} \mathcal{L}= \left(\begin{array}{c} \overline{t}_L \\ \overline{T}_L \\ \overline{Q}_L \\ \overline{X}_L^d \end{array}\right)^T \left(\begin{array}{cccc} \lambda_t v & A \lambda_t v & B \lambda_t v & 0 \\ 0 & M_T & 0 & 0 \\ 0 & 0 & M_Q & 0\\ \lambda_t v & A \lambda_t v & B \lambda_t v & M_X \end{array}\right) \left(\begin{array}{c} t_R \\ T_R \\ Q_R \\ X_R^d \end{array}\right) + h.c.$$ where for example: $$\lambda_t= \frac{\lambda_1 y_1 m_T f}{\sqrt{2}\sqrt{f^2 \lambda_1^2 + (m_Q + f y_1)^2}\sqrt{m_T^2+\frac{f^2(\lambda_2(m_Q+f y_1)-f \lambda_1 (y_1+y_2))^2}{f^2 \lambda_1^2+(m_Q+f y_1)^2}}},$$ and also the other new parameters are combinations of the original ones. Note that now $Q$ is a singlet like $T$, while $X$ is again a component of a $Y=7/6$ doublet. Computing the one loop correction to the $T$ parameter up to second order in $\lambda_t v /M_{heavy}$ we now obtain: $$\begin{aligned} M_X,M_Q>>M_T: && \frac{\delta T}{T_{SM}}\approx 2 A^2 (\log\frac{M_T^2}{m_t^2} - 1 + \frac{A^2}{2})(\frac{m_t}{M_T})^2 \\ M_X,M_T>>M_Q: && \frac{\delta T}{T_{SM}}\approx 2B^2(\log\frac{M_Q^2}{m_t^2} - 1 + \frac{B^2}{2})(\frac{m_t}{M_Q})^2 \\ M_Q,M_T>>M_X: && \frac{\delta T}{T_{SM}}\approx -4(\log\frac{M_X^2}{m_t^2} -\frac{11}{6} )(\frac{m_t}{M_X})^2\end{aligned}$$ while for $Z \rightarrow b \overline{b}$ it is: $$\begin{aligned} M_Q, M_X>>M_T: && \frac{\delta A^{bb}}{A_{SM}^{bb}} \approx 2A^2(\log\frac{M_T^2}{m_t^2}-1+\frac{A^2}{2})(\frac{m_t}{M_T})^2 \\ M_T, M_X>>M_Q: && \frac{\delta A^{bb}}{A_{SM}^{bb}} \approx 2B^2(\log\frac{M_Q^2}{m_t^2}-1+\frac{B^2}{2})(\frac{m_t}{M_Q})^2 \\ M_Q, M_T>>M_X: && \frac{\delta A^{bb}}{A_{SM}^{bb}} \approx (\log\frac{M_X^2}{m_t^2}-\frac{1}{2})(\frac{m_t}{M_X})^2\end{aligned}$$ The experimental consistency of the model has been checked via numerical diagonalization of the mass matrix (\[massmatrix\]) in the
relevant parameter space: $\frac{1}{3} \leq A,B \leq 3$ and $M_T, M_Q, M_X$ below 2 TeV. The final result is that this model cannot be made consistent with the experimental data. In Figure \[plotsu4sp4\] I give an example of the typical situation.

![Isolines of $\delta T$ and $A^{bb}/A^{bb}_{SM}$ in the $(M_X,M_T)$ plane for the $SU(4)/Sp(4)$ model. Plot for $A=B=1, \, M_Q=800$ GeV.[]{data-label="plotsu4sp4"}](./susp1_1_MQ800_T "fig:"){width="46.00000%"} ![Isolines of $\delta T$ and $A^{bb}/A^{bb}_{SM}$ in the $(M_X,M_T)$ plane for the $SU(4)/Sp(4)$ model. Plot for $A=B=1, \, M_Q=800$ GeV.[]{data-label="plotsu4sp4"}](./susp1_1_MQ800_Zinbb "fig:"){width="46.00000%"}

Conclusions
===========

Heavy vector-like fermions are a likely component of models for electroweak symmetry breaking which address the naturalness problem of the Fermi scale. Constraining their mass is crucial in order to assess the potential for their discovery at the LHC. Here I have analyzed such constraints in an $SO(5)/SO(4)$ model for the Higgs doublet as a pseudo-Goldstone boson. These constraints arise from the EWPT, including B-physics.
Confirming the results of [@barbieri1], I have found that the minimal extension of the top sector has problems in fulfilling the experimental requirements. For this reason I have considered other possible extensions of the fermion sector, as well as another model based on a different symmetry. These models have received attention in the literature and have different motivations. The main result is that one such extension is consistent with the constraints coming from the EWPT, including B-physics, in a thin region of its parameter space. To the third generation quarks of the Standard Model one has to add a full vector-like 5-plet of $SO(5)$, i.e. in particular three new quarks of charge 2/3 which mix with the top: $T,Q,X$. In this region of parameter space the new quarks can be as light as a few hundred GeV and might therefore be accessible at the LHC. The range of possible masses is summarized in the following Table (see section \[minimalmasses\]):

  Quark   $SU(2)_L \times U(1)_Y$   Constraints on mass
  ------- ------------------------- ---------------------------------------------------------------
  $Q$     $(2)_{1/6}$               $M_{Q^{2/3}} \gtrsim 300$ GeV, $M_{Q^{-1/3}} \gtrsim 350$ GeV
  $T$     $(1)_{2/3}$               $M_{T^{2/3}} \gtrsim 500$ GeV
  $X$     $(2)_{7/6}$               $M_{X^{2/3}} \geq$ 450 GeV, $M_{X^{5/3}} \gtrsim$ 650 GeV

It is of interest that, randomly picking a point in the relevant parameter space with all fermion masses below 2.5 TeV, the probability of being consistent with the data is very small, roughly $1/2000$. None of the other similar models that have been examined has regions of the corresponding parameter space which are compatible with the experimental data.

Acknowledgements {#acknowledgements .unnumbered}
================

For this work I am greatly indebted to Riccardo Barbieri. I also thank Vyacheslav S. Rychkov, Duccio Pappadopulo, and Giovanni Pizzi for useful discussions.

[9]{} K. Agashe, R. Contino, A. Pomarol, *The minimal composite Higgs model*, Nucl. Phys.
B 719, 165 (2005) \[arXiv:hep-ph/0412089\].

R. Barbieri, B. Bellazzini, V. S. Rychkov, A. Varagnolo, *The Higgs boson from an extended symmetry*, Phys. Rev. D 76, (2007) 115008, \[arXiv:hep-ph/0706.0432v3\].

R. Barbieri, *Signatures of new physics at 14 TeV*, \[arXiv:hep-ph/08023988v1\].

R. Contino, L. Da Rold, A. Pomarol, *Light custodians in a natural composite Higgs model*, Phys. Rev. D 75, 055014 (2007) \[arXiv:hep-ph/0612048\].

R. Contino, *A holographic composite Higgs model*, \[arXiv:hep-ph/0609148v1\].

M. Carena, E. Pontón, J. Santiago, C. E. M. Wagner, *Electroweak constraints on warped models with custodial symmetry*, \[arXiv:hep-ph/0701055v1\].

K. Agashe, R. Contino, L. Da Rold, A. Pomarol, *A custodial symmetry for $Zb\overline{b}$*, Phys. Lett. B 641 (2006) 62, \[arXiv:hep-ph/0605341v2\].

R. Barbieri, *Supersymmetric gauge models of the fundamental interactions*, Acta Physica Austriaca, Suppl. XXIV, 363-392 (1982).

M. Schmaltz, D. Tucker-Smith, *Little Higgs Review*, \[arXiv:hep-ph/0502182v1\].

R. Barbieri, M. Beccaria, P. Ciafaloni, G. Curci, A. Vicerè, *Radiative correction effects of a very heavy top*, Phys. Lett. B288 (1992) 95.

R. Barbieri, M. Beccaria, P. Ciafaloni, G. Curci, A. Vicerè, *Two-loop heavy-top effects in the Standard Model*, Nucl. Phys. B409 (1993) 105-127.

ALEPH, DELPHI, L3, OPAL, SLD Collaborations, LEP and SLD Electroweak Working Groups, SLD Heavy Flavour Group, *Precision electroweak measurements on the Z resonance*, Phys. Rept. 427, 257 (2006) \[arXiv:hep-ex/0509008\].

E. Katz, A. Nelson, D. G. E. Walker, *The Intermediate Higgs*, \[arXiv:hep-ph/0404252v1\].

Ling-Fong Li, *Group theory of spontaneously broken gauge symmetries*, Phys. Rev. D 9, 1723 (1974).

[^1]: This is a general consequence of the composite nature of the Higgs boson, as pointed out in [@schmaltz]. For $f=500$ GeV the unitarity is saturated at $s=2.5$ TeV, see [@barbieri1].

[^2]: See [@barbieri1], par. 3.2.2. Experimental data are from [@lepdata].
---
abstract: 'The quantum Zakharov system in three spatial dimensions and an associated Lagrangian description, as well as its basic conservation laws, are derived. In the adiabatic and semiclassical case, the quantum Zakharov system reduces to a quantum modified vector nonlinear Schrödinger (NLS) equation for the envelope electric field. The Lagrangian structure for the resulting vector NLS equation is used to investigate the time-dependence of the Gaussian-shaped localized solutions, via the Rayleigh-Ritz variational method. The formal classical limit is considered in detail. The quantum corrections are shown to prevent the collapse of localized Langmuir envelope fields, in both two and three spatial dimensions. Moreover, the quantum terms can produce an oscillatory behavior of the width of the approximate Gaussian solutions. The variational method is shown to preserve the essential conservation laws of the quantum modified vector NLS equation.'
author:
- 'F. Haas'
- 'P. K. Shukla'
title: Quantum and classical dynamics of Langmuir wave packets
---

Introduction
============

The Zakharov system [@Zakharov], describing the coupling between Langmuir and ion-acoustic waves, is one of the basic plasma models; see Refs. [@Goldman; @Thornhill] for reviews. Recently [@Garcia], a quantum modified Zakharov system was derived, by means of the quantum plasma hydrodynamic model [@Haas]–[@HaasQMHD]. In this context, enhancement of the quantum effects was then shown [*e. g.*]{} to suppress the four-wave decay instability. Subsequently [@Marklund], a kinetic treatment of the quantum Zakharov system has shown that the modulational instability growth rate can be increased in comparison to the classical case, for partially coherent Langmuir wave electric fields. Also [@Haasvar], a variational formalism was obtained and used to study the radiation of localized structures described by the quantum Zakharov system.
Bell-shaped electric field envelopes of electron plasma oscillations in dense quantum plasmas obeying Fermi statistics were analyzed in Ref. [@Shukla]. More mathematically oriented works on the quantum Zakharov equations concern its Lie symmetry group [@Tang] and the derivation of exact solutions [@Abdou]–[@Yang]. Finally, there is evidence of hyperchaos in the reduced temporal dynamics arising from the quantum Zakharov equations [@Misra]. All these papers refer to the quantum Zakharov equations in one spatial dimension only. In the present work, we extend the quantum Zakharov system to fully three-dimensional space, allowing also for the magnetic field perturbation. In the classical case, both heuristic arguments and numerical simulations indicate that the ponderomotive force can produce finite-time collapse of Langmuir wave packets in two or three dimensions [@Goldman], [@Zakharov2; @Zakharov3]. This is in contrast to the one-dimensional case, whose solutions are smooth for all time. A dynamic rescaling method was used for the time-evolution of electrostatic self-similar and asymptotically self-similar solutions in two and three dimensions, respectively [@Landman]. Allowing for transverse fields shows that singular solutions of the resulting vector Zakharov equations are weakly anisotropic, for a large class of initial conditions [@Papanicolaou]. The electrostatic nonlinear collapse of Langmuir wave packets in ionospheric and laboratory plasmas has been observed [@Dubois; @Robinson]. Also, the collapse of Langmuir wave packets in beam-plasma experiments verifies the basic concepts of strong Langmuir turbulence, as introduced by Zakharov [@Cheung]. The analysis of the coupled longitudinal and transverse modes in classical strong Langmuir turbulence has been less studied [@Alinejad]–[@Li], as has the intrinsically magnetized case [@Pelletier], which can lead to upper-hybrid wave collapse [@Stenflo].
Finally, Zakharov-like equations have been proposed for the electromagnetic wave collapse in a radiation background [@Marklund2]. It is expected that the ponderomotive force causing the collapse of localized solutions in two or three space dimensions could be weakened by the inclusion of quantum effects, making the dynamics less violent. This conjecture is checked after establishing the quantum Zakharov system in higher-dimensional space and using its variational structure in association with a (Rayleigh-Ritz) trial function method. The manuscript is organized in the following fashion. In Section 2, the quantum Zakharov system in three spatial dimensions is derived by means of the usual two-time scale method applied to the fully 3D quantum hydrodynamic model. In Section 3, the 3D quantum Zakharov system is shown to be described by a Lagrangian formalism. The basic conservation laws are then also derived. When the density fluctuations are sufficiently slow in time that an adiabatic approximation is possible, and treating the quantum term of the low-frequency equation as a perturbation, a quantum modified vector nonlinear Schrödinger equation for the envelope electric field is obtained. In Section 4, the variational structure is used to analyze the temporal dynamics of localized (Gaussian) solutions of this quantum NLS equation, through the Rayleigh-Ritz method, in two spatial dimensions. Section 5 follows the same strategy, extended to fully 3D space. Special attention is paid to the comparison between the classical and quantum cases, with considerable qualitative and quantitative differences. Section 6 contains conclusions.

Quantum Zakharov equations in $3+1$ dimensions
==============================================

The starting point for the derivation of the electromagnetic quantum Zakharov equations is the quantum hydrodynamic model for an electron-ion plasma, Equations (20)-(28) of Ref. [@HaasQMHD].
For the electron fluid pressure $p_e$, consider the equation of state for spin $1/2$ particles at zero temperature, $$\label{e1} p_e = \frac{3}{5}\,\frac{m_{e}v_{Fe}^2 \,n_{e}^{5/3}}{n_{0}^{2/3}} \,,$$ where $m_e$ is the electron mass, $v_{Fe}$ is the Fermi electron thermal speed, $n_e$ is the electron number density and $n_0$ is the equilibrium particle number density both for electrons and ions. The pressure and quantum effects are neglected for the ions, due to their larger mass. Also due to the larger ion mass, it is possible to introduce a two-time scale decomposition, $n_e = n_0 + \delta n_s + \delta n_f$, $n_i = n_0 + \delta n_s$, ${\bf u}_e = \delta{\bf u}_s + \delta{\bf u}_f$, ${\bf u}_i = \delta{\bf u}_s$, ${\bf E} = \delta{\bf E}_s + \delta{\bf E}_f$, ${\bf B} = \delta{\bf B}_f$, where the subscripts $s$ and $f$ refer to slowly and rapidly changing quantities, respectively. Also, ${\bf u}_e$ is the electron fluid velocity, $n_i$ the ion number density, ${\bf u}_i$ the ion fluid velocity, ${\bf E}$ the electric field, and ${\bf B}$ the magnetic field. Notice that it is assumed that there is no slow contribution to the magnetic field, a restriction which allows one to obtain ${\bf B} = (m_{e}/e)\,\nabla\times\delta{\bf u}_f$ (see Equation (2.21) of Ref. [@Thornhill]), where $-e$ is the electron charge. Including a slow contribution to the magnetic field could be an important improvement, but this is outside the scope of the present work.
Following the usual approximations [@Thornhill; @Garcia], the quantum corrected 3D Zakharov equations read $$\begin{aligned} \label{e2} 2i\omega_{pe}\frac{\partial{\bf\tilde{E}}}{\partial t} &-& c^2\, \nabla\times(\nabla\times{\bf\tilde{E}}) + v_{Fe}^2 \nabla(\nabla\cdot{\bf\tilde{E}}) = \nonumber \\ &=& \frac{\delta n_s}{n_0} \,\omega_{pe}^2 \,{\bf\tilde{E}} + \frac{\hbar^2}{4m_{e}^2}\nabla\left[\nabla^2 (\nabla\cdot{\bf\tilde{E}})\right] \,, \\ \label{e3} \frac{\partial^2 \delta n_s}{\partial t^2} &-& c_{s}^2 \,\nabla^2 \delta n_s - \frac{\varepsilon_0}{4m_i}\nabla^2 (|{\bf\tilde{E}}|^2) + \frac{\hbar^2}{4m_e m_i} \,\nabla^4 \delta n_s = 0 \,.\end{aligned}$$ Here ${\bf\tilde{E}}$ is the slowly varying envelope electric field defined via $${\bf E}_f = \frac{1}{2}\,({\bf\tilde{E}} \, e^{-i\omega_{pe}t} + {\bf\tilde{E}}^{*} \, e^{i\omega_{pe}t}) \,,$$ where $\omega_{pe}$ is the electron plasma frequency. Also, in Eqs. (\[e2\]–\[e3\]) $c$ is the speed of light in vacuum, $\hbar$ the reduced Planck constant, $\varepsilon_0$ the vacuum permittivity and $m_i$ the ion mass. In addition, $c_{s}^2 = \kappa_B T_{Fe}/m_i \,,$ where $\kappa_B T_{Fe} = m_e v_{Fe}^2$. Therefore, $c_s$ is a Fermi ion-acoustic speed, with the Fermi temperature replacing the thermal temperature for the electrons. In comparison to the classical Zakharov system (see Eqs. (2.48a)–(2.48b) of Ref. [@Thornhill]), there is the inclusion of the extra dispersive terms proportional to $\hbar^2$ in Eqs. (\[e2\])–(\[e3\]). Another quantum difference is the presence of the Fermi speed instead of the thermal speed in the last term on the left hand side of Eq. (\[e2\]). From the qualitative point of view, the terms proportional to $\hbar^2$ are responsible for extra dispersion which can prevent the collapse of Langmuir envelopes, at least in principle. This possibility is investigated in Sections 4 and 5. Finally, notice the nontrivial form of the fourth order derivative term in Eq. (\[e2\]).
It is not simply proportional to $\nabla^{4} {\bf\tilde{E}}$ as could be wrongly guessed from the quantum Zakharov equations in $1+1$ dimensions, where there is a $\sim \partial^{4}{\bf\tilde{E}}/\partial x^4$ contribution [@Garcia]. It is useful to consider the rescaling $$\begin{aligned} \label{e4} \bar{\bf r} &=& \frac{2\sqrt{\mu}\,\omega_{pe}\,{\bf r}}{v_{Fe}} \,, \quad \bar{t} = 2\,\mu\,\omega_{pe}t \,, \\ n &=& \frac{\delta n_s}{4\mu n_0} \,, \quad {\bf\cal E} = \frac{e\,\tilde{\bf E}}{4\,\sqrt{\mu}\,m_{e}\omega_{pe}v_{Fe}} \,, \nonumber\end{aligned}$$ where $\mu = m_{e}/m_{i}$. Then, dropping the bars in ${\bf r}, t$, we obtain $$\begin{aligned} \label{e5} i\frac{\partial{\bf\cal E}}{\partial t} &-& \frac{c^2}{v_{Fe}^2} \nabla\times(\nabla\times{\bf\cal E}) + \nabla(\nabla\cdot{\bf\cal E}) = \nonumber \\ &=& n \, {\bf\cal E} + \Gamma\,\nabla\left[\nabla^2 (\nabla\cdot{\bf\cal E})\right] \,, \\ \label{e6} \frac{\partial^2 n}{\partial t^2} &-& \nabla^2 n - \nabla^2 (|{\bf\cal E}|^2) + \Gamma\, \nabla^4 n = 0 \,, \end{aligned}$$ where $$\label{e7} \Gamma = \frac{m_e}{m_i}\left(\frac{\hbar\,\omega_{pe}}{\kappa_{B}T_{Fe}}\right)^2$$ is a non-dimensional parameter associated with the quantum effects. Usually, it is an extremely small quantity, but it is nevertheless interesting to retain the $\sim \Gamma$ terms, especially for the collapse scenarios. The reason is not only due to a general theoretical motivation, but also because from some simple estimates one concludes that these terms become of the same order as some of the other terms in Eqs. (\[e2\])–(\[e3\]) provided that the characteristic length $l$ for the spatial derivatives becomes as small as the mean inter-particle distance, $l \sim n_{0}^{-1/3}$. Of course, the Zakharov equations are not able to describe the late stages of the collapse, since they do not include dissipation, which is unavoidable for short scales. 
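As a rough numerical illustration of how small $\Gamma$ typically is, one may evaluate Eq. (\[e7\]) for a dense hydrogen plasma, using $\omega_{pe} = (n_0 e^2/\varepsilon_0 m_e)^{1/2}$, $\kappa_B T_{Fe} = m_e v_{Fe}^2$ and the common Fermi-speed convention $v_{Fe} = \hbar(3\pi^2 n_0)^{1/3}/m_e$ (an illustrative sketch; the numerical prefactor in $v_{Fe}$ depends on the convention adopted):

```python
# Order-of-magnitude estimate of the quantum parameter Gamma, Eq. (e7),
# for a hydrogen plasma at laser-solid densities (n0 ~ 1e33 m^-3).
import math

e    = 1.602176634e-19    # elementary charge [C]
me   = 9.1093837015e-31   # electron mass [kg]
mi   = 1.67262192369e-27  # proton (ion) mass [kg]
eps0 = 8.8541878128e-12   # vacuum permittivity [F/m]
hbar = 1.054571817e-34    # reduced Planck constant [J s]

n0 = 1.0e33               # equilibrium number density [m^-3]

omega_pe = math.sqrt(n0 * e**2 / (eps0 * me))            # electron plasma frequency
v_Fe = hbar * (3.0 * math.pi**2 * n0)**(1.0/3.0) / me    # Fermi speed (one convention)
kB_T_Fe = me * v_Fe**2                                   # kappa_B T_Fe = m_e v_Fe^2

Gamma = (me / mi) * (hbar * omega_pe / kB_T_Fe)**2
print(f"Gamma = {Gamma:.2e}")   # -> of order 1e-5: extremely small, as stated
```

The estimate confirms the claim in the text: $\Gamma$ is tiny in absolute terms, yet the $\sim\Gamma$ terms dominate at inter-particle length scales.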
But even Landau damping would be irrelevant for a zero-temperature Fermi plasma, where the main influence comes from the Pauli pressure. On the left-hand side of Eq. (\[e5\]), the $\nabla(\nabla\cdot{\bf\cal E})$ term is retained because the $\sim c^{2}/v_{Fe}^2$ transverse term disappears in the electrostatic approximation. In the adiabatic limit, neglecting $\partial^{2} n/\partial t^2$ in Eq. (\[e6\]) and under appropriate boundary conditions, it follows that $$\label{n} n = - |{\bf\cal E}|^2 + \Gamma\, \nabla^{2} n \,.$$ When $\Gamma \neq 0$, it is not easy to directly express $n$ as a function of $|{\bf\cal E}|$ as in the classical case. Therefore, the adiabatic limit is not enough to derive a vector nonlinear Schrödinger equation, due to the coupling in Eq. (\[n\]). Lagrangian structure and conservation $\!\!\!$ laws =================================================== The quantum Zakharov equations (\[e5\])–(\[e6\]) can be described by the Lagrangian density $$\begin{aligned} {\cal L} &=& \frac{i}{2}\,\Bigl(\,{\bf\cal E}^{*}\cdot\frac{\partial{\bf\cal E}}{\partial t} - {\bf\cal E}\cdot\frac{\partial{\bf\cal E}^{*}}{\partial t}\,\Bigr) - \frac{c^2}{v_{Fe}^2} |\nabla\times {\bf\cal E}|^2 - |\nabla\cdot{\bf\cal E}|^2 - \Gamma \,|\nabla(\nabla\cdot{\bf\cal E})|^2 \nonumber \\ \label{e8} &+& n\,\Bigl(\,\frac{\partial\alpha}{\partial t} - |{\bf\cal E}|^2\,\Bigr) - \frac{1}{2}\,\Bigl(n^2 + \Gamma |\nabla n|^2 + |\nabla\alpha|^2\Bigr) \,,\end{aligned}$$ where $n$, the auxiliary function $\alpha$ and the components of ${\bf\cal E}, {\bf\cal E}^{*}$ are regarded as independent fields. 
Remark: for the particular form (\[e8\]) and for a generic field $\psi$, one computes the functional derivative as $$\label{e9} \frac{\delta{\cal L}}{\delta\psi} = \frac{\partial{\cal L}}{\partial\psi} - \frac{\partial}{\partial r_i}\,\frac{\partial{\cal L}}{\partial(\partial\psi/\partial r_i)} - \frac{\partial}{\partial t}\,\frac{\partial{\cal L}}{\partial(\partial\psi/\partial t)} + \frac{\partial^2}{\partial r_{i}\,\partial r_j}\, \frac{\partial{\cal L}}{\partial(\partial^{2}\psi/\partial r_i \partial r_j)} \,,$$ using the summation convention and where $r_i$ are Cartesian components. Taking the functional derivatives with respect to $n$ and $\alpha$, we have $$\label{e10} \frac{\partial\alpha}{\partial t} = n + |{\bf\cal E}|^2 - \Gamma\nabla^2 n ,$$ and $$\label{e11} \frac{\partial n}{\partial t} = \nabla^2 \alpha \,,$$ respectively. Eliminating $\alpha$ from Eqs. (\[e10\]) and (\[e11\]) we obtain the low frequency equation. In addition, the functional derivatives with respect to ${\bf\cal E}^{*}$ and ${\bf\cal E}$ produce the high-frequency equation and its complex conjugate. The present formalism is inspired by the Lagrangian formulation of the classical Zakharov equations [@Gibbons]. 
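The elimination of $\alpha$ just described can be checked symbolically; a minimal sketch in one spatial dimension (so $\nabla^2 \to \partial^2/\partial x^2$) using sympy, with `E2` standing for $|{\bf\cal E}|^2$, is:

```python
import sympy as sp

x, t, G = sp.symbols('x t Gamma')
n  = sp.Function('n')(x, t)
a  = sp.Function('alpha')(x, t)
E2 = sp.Function('E2')(x, t)   # stands for |E|^2

# 1D analogues of the Euler-Lagrange pair, Eqs. (e10)-(e11):
alpha_t = n + E2 - G * sp.diff(n, x, 2)   # Eq. (e10): d(alpha)/dt
# Eq. (e11): dn/dt = alpha_xx, hence n_tt = d/dt(alpha_xx) = (alpha_t)_xx
n_tt = sp.diff(alpha_t, x, 2)

# This must reproduce the low-frequency equation, Eq. (e6):
low_freq = n_tt - sp.diff(n, x, 2) - sp.diff(E2, x, 2) + G * sp.diff(n, x, 4)
print(sp.simplify(low_freq))   # -> 0
```

The residual vanishes identically, confirming that Eqs. (\[e10\])–(\[e11\]) combine into the low-frequency equation (\[e6\]).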
The quantum Zakharov equations admit as exact conserved quantities the “number of plasmons" of the Langmuir field, $$\label{e12} N = \int |{\bf\cal E}|^2 \,d{\bf r} \,,$$ the linear momentum (with components $P_i, \, i = x, y, z$), $$\label{e13} P_i = \int \Bigl[\frac{i}{2} \left({\cal E}_j \,\frac{\partial{\cal E}^{*}_j}{\partial r_i} - {\cal E}^{*}_j \,\frac{\partial{\cal E}_j}{\partial r_i}\right) - n \,\frac{\partial\alpha}{\partial r_i} \Bigr] \,d{\bf r}$$ and the Hamiltonian, $$\begin{aligned} {\cal H} &=& \int \Bigl[n|{\bf\cal E}|^2 + \frac{c^2}{v_{Fe}^2}\, |\nabla\times{\bf\cal E}|^2 + |\nabla\cdot{\bf\cal E}|^2 + \Gamma\,|\nabla(\nabla\cdot{\bf\cal E})|^2 \nonumber \\ \label{e14} &+& \frac{1}{2}\,\Bigl(n^2 + \Gamma |\nabla n|^2 + |\nabla\alpha|^2\Bigr) \Bigr] \,d{\bf r} \,.\end{aligned}$$ Furthermore, there is also a conserved angular momentum functional, but it is not relevant in the present work. These four conserved quantities can be associated, through Noether’s theorem, to the invariance of the action under gauge transformation, time translation, space translation and rotations, respectively. The conservation laws can be used [*e. g.*]{} to test the accuracy of numerical procedures. Also, observe that equations (\[e6\]) and (\[n\]) for the adiabatic limit are described by the same Lagrangian density (\[e8\]). In this approximation, it suffices to set $\alpha \equiv 0$. In addition to the adiabatic limit, Eq. (\[n\]) can be further approximated to $$\label{e15} n = - |{\bf\cal{E}}|^2 - \Gamma \nabla^{2}(|{\bf\cal{E}}|^{2}) \,,$$ assuming that the quantum term is a perturbation. In this way and using Eq. 
(\[e5\]), a quantum modified vector nonlinear Schrödinger equation is derived $$\begin{aligned} i\frac{\partial{\bf\cal E}}{\partial t} &+& \nabla(\nabla\cdot{\bf\cal E}) - \frac{c^2}{v_{Fe}^2} \nabla\times(\nabla\times{\bf\cal E}) + |{\bf\cal{E}}|^2 {\bf\cal{E}} = \nonumber \\ \label{e16} &=& \Gamma\nabla\left[\nabla^2 (\nabla\cdot{\bf\cal E})\right] -\Gamma \, {\bf\cal E} \nabla^{2}(|{\bf\cal{E}}|^{2}) \,. \end{aligned}$$ The appropriate Lagrangian density ${\cal L}_{ad,sc}$ for the semiclassical equation (\[e16\]) is given by $$\begin{aligned} {\cal L}_{ad,sc} &=& \frac{i}{2}\,\Bigl(\,{\bf\cal{E}}^{*}\cdot\frac{\partial{\bf\cal{E}}}{\partial t} - {\bf\cal{E}}\cdot\frac{\partial{\bf\cal{E}}^{*}}{\partial t}\,\Bigr) - \frac{c^2}{v_{Fe}^2}|\nabla\times{\bf\cal{E}}|^2 - |\nabla\cdot{\bf\cal{E}}|^2 \nonumber \\ \label{e17} &-& \Gamma \,|\nabla(\nabla\cdot{\bf\cal{E}})|^2 + \frac{1}{2}\,|{\bf\cal{E}}|^4 - \frac{\Gamma}{2}\,\Bigl|\nabla[\,|{\bf\cal{E}}|^2] \Bigr|^2 \,,\end{aligned}$$ where the independent fields are taken as ${\bf\cal{E}}$ and ${\bf\cal{E}}^{*}$ components. The expression $N$ for the number of plasmons in Eq. (\[e12\]) remains valid as a constant of motion in the joint adiabatic and semiclassical limit, as well as the momentum ${\bf P}$ in Eq. (\[e13\]) with $\alpha \equiv 0$. Finally, the Hamiltonian $$\begin{aligned} {\cal H}_{ad,sc} = \int \Bigl[\,\frac{c^2}{v_{Fe}^2}\,|\nabla&\times&{\bf\cal{E}}|^2 + |\nabla\cdot{\bf\cal{E}}|^2 + \Gamma \,|\nabla(\nabla\cdot{\bf\cal{E}})|^2 \nonumber \\ \label{e18} &-& \frac{1}{2}\,|{\bf\cal{E}}|^4 + \frac{\Gamma}{2}\,\Bigl|\nabla[\,|{\bf\cal{E}}|^2\,] \Bigr|^2 \,\,\Bigr] \,d{\bf r} \end{aligned}$$ is also a conserved quantity. In the following, the influence of the quantum terms on the right-hand side of Eq. (\[e16\]) is investigated, assuming adiabatic conditions for collapsing quantum Langmuir envelopes. 
Other scenarios for collapse, like the supersonic one [@Landman; @Papanicolaou], could also be relevant and shall be investigated in the future. Variational solution in two dimensions ====================================== Consider the adiabatic semiclassical system defined by Eq. (\[e16\]). We refer to localized solutions of this vector NLS equation as (quantum) “Langmuir wave packets", or envelopes. As discussed in detail in [@Gibbons] in the purely classical case, Langmuir wave packets will become singular in a finite time, provided the energy is not bounded from below. Of course, explicit analytic Langmuir envelopes are difficult to derive. A fruitful approach is to make use of the Lagrangian structure for deriving approximate solutions. This approach has been pursued in [@Malomed] for the classical and in [@Haasvar] for the quantum Zakharov system. Both studies considered the internal vibrations of Langmuir envelopes in one spatial dimension. Presently, we shall apply the time-dependent Rayleigh-Ritz method to the higher-dimensional cases. A priori, it is expected that the quantum corrections would inhibit the collapse of localized solutions, in view of wave-packet spreading. To check this conjecture, and to have more definite information on the influence of the quantum terms, first we consider the following [*Ansatz*]{}, $$\label{e19} {\bf\cal{E}} = \left(\frac{N}{\pi}\right)^{1/2}\,\frac{1}{\sigma}\,\exp\left(-\frac{\rho^2}{2\sigma^2}\right)\, \exp\left(i(\Theta + k\rho^2)\right)\,\,(\cos\phi, \sin\phi, 0) \,,$$ which is appropriate for two-spatial-dimensions. Here $\sigma, k, \Theta$ and $\phi$ are real functions of time, and $\rho = \sqrt{x^2+y^2}$. The normalization condition (\[e12\]) is automatically satisfied (in 2D the spatial integrations reduce to integrations on the plane). Other localized forms, involving [*e. g.*]{} a [*sech*]{} type dependence, could also have been proposed. 
Here a Gaussian form was suggested mainly for the sake of simplicity [@Fedele]. Notice that the envelope electric field (\[e19\]) is not necessarily electrostatic: it can carry a transverse ($\nabla\times{\bf\cal{E}} \neq 0$) component. The free functions in Eq. (\[e19\]) should be determined by extremization of the action functional associated with the Lagrangian density (\[e17\]). A straightforward calculation gives $$\begin{aligned} L_2 \equiv \int\,{\cal L}_{ad,sc}\,dx\,dy &=& - N \,\Bigl[\dot\Theta + \sigma^2 \dot{k} + \frac{2c^2}{v_{Fe}^2}\,k^2 \sigma^2 + \frac{1}{2}\,\left( \frac{c^2}{v_{Fe}^2} - \frac{N}{2\pi}\right)\,\frac{1}{\sigma^2} \nonumber \\ \label{e20} &+& 8\Gamma k^2 + 16\Gamma k^4 \sigma^4 + \left(1+\frac{N}{2\pi}\right)\,\frac{\Gamma}{\sigma^4}\,\Bigr] \,,\end{aligned}$$ where only the main quantum contributions are retained. Now $L_2$ is the Lagrangian for a mechanical system, after the spatial form of the envelope electric field was defined in advance via Eq. (\[e19\]). Of special interest is the behavior of the dispersion $\sigma$. For a collapsing solution one could expect that $\sigma$ goes to zero in a finite time. The phase $\Theta$ and the chirp function $k$ should be regarded as auxiliary fields. Notice that $L_2$ is not dependent on the angle $\phi$, which remains arbitrary as far as the variational method is concerned. Applying the functional derivative of $L_2$ with respect to $\Theta$, we obtain $$\label{e21} \frac{\delta L_2}{\delta\Theta} = 0 \quad \rightarrow \quad \dot{N} = 0 \,,$$ so that the variational solution preserves the number of plasmons, as expected. 
The remaining Euler-Lagrange equations are $$\begin{aligned} \label{e22} \frac{\delta L_2}{\delta k} = 0 \quad \rightarrow \quad \sigma\dot\sigma &=& \frac{2 c^2}{v_{Fe}^2}\,\sigma^2 k + 8\Gamma k + 32\Gamma\sigma^4 k^3 \,,\\ \frac{\delta L_2}{\delta\sigma} = 0 \quad \rightarrow \quad \sigma\dot{k} &=& - \frac{2c^2}{v_{Fe}^2}\,k^2 \sigma + \frac{1}{2}\,\left( \frac{c^2}{v_{Fe}^2} - \frac{N}{2\pi}\right)\,\frac{1}{\sigma^3} - 32\Gamma k^4 \sigma^3 \nonumber \\ \label{e23} &+& \left(1+\frac{N}{2\pi}\right)\,\frac{2\Gamma}{\sigma^5} \,.\end{aligned}$$ The exact solution of the nonlinear system (\[e22\]–\[e23\]) is difficult to obtain, but at least the dynamics has been reduced to ordinary differential equations. It is instructive to analyze the purely classical ($\Gamma \equiv 0$) case first. This is especially true since, to our knowledge, the Rayleigh-Ritz method has not been applied to the vector NLS equation (\[e16\]), even for classical systems. The reason may be the computational complexity induced by the transverse term. When $\Gamma = 0$, Eq. (\[e22\]) gives $k = v_{Fe}^2 \dot\sigma/2c^2 \sigma$. Inserting this in Eq. (\[e23\]) we have $$\label{e24} \ddot\sigma = - \frac{\partial V_{2c}}{\partial\sigma} \,,$$ where the pseudo-potential $V_{2c}$ is $$\label{e25} V_{2c} = \frac{c^2}{2v_{Fe}^2}\,\left( \frac{c^2}{v_{Fe}^2} - \frac{N}{2\pi}\right)\,\frac{1}{\sigma^2} \,.$$ From Eq. (\[e25\]) it is evident that the repulsive character of the pseudo-potential will be converted into an attractive one, whenever the number of plasmons exceeds a threshold, $$\label{e26} N > \frac{2\pi c^2}{v_{Fe}^2} \,,$$ a condition for Langmuir wave packet collapse in the classical two-dimensional case. The interpretation of the result is as follows. When the number of plasmons satisfies Eq. (\[e26\]), the refractive $\sim |{\bf\cal{E}}|^4$ term dominates over the dispersive terms in the Lagrangian density (\[e17\]), producing a singularity in a finite time. 
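The classical collapse condition (\[e26\]) can be checked by direct numerical integration of Eq. (\[e24\]). A sketch with illustrative dimensionless values ($c^2/v_{Fe}^2 = 1$, so the threshold is $N/2\pi > 1$; not physical parameters): above the threshold the width $\sigma$ reaches zero in a finite time, below it the wave packet disperses.

```python
# Integrate sigma'' = -dV2c/dsigma, with V2c from Eq. (e25), by classical RK4.
# c2v denotes c^2/v_Fe^2 and N2pi denotes N/(2*pi); illustrative values only.

def accel(sigma, c2v, N2pi):
    # -dV2c/dsigma = c2v*(c2v - N2pi)/sigma^3: attractive when N2pi > c2v
    return c2v * (c2v - N2pi) / sigma**3

def evolve(N2pi, c2v=1.0, sigma0=1.0, dt=1e-4, t_max=2.0):
    sigma, v, t = sigma0, 0.0, 0.0
    while t < t_max and sigma > 0.05:       # stop near collapse
        k1s, k1v = v, accel(sigma, c2v, N2pi)
        k2s, k2v = v + 0.5*dt*k1v, accel(sigma + 0.5*dt*k1s, c2v, N2pi)
        k3s, k3v = v + 0.5*dt*k2v, accel(sigma + 0.5*dt*k2s, c2v, N2pi)
        k4s, k4v = v + dt*k3v, accel(sigma + dt*k3s, c2v, N2pi)
        sigma += dt*(k1s + 2*k2s + 2*k3s + k4s)/6.0
        v     += dt*(k1v + 2*k2v + 2*k3v + k4v)/6.0
        t += dt
    return sigma, t

s_above, t_above = evolve(N2pi=2.0)   # above threshold: sigma collapses
s_below, t_below = evolve(N2pi=0.5)   # below threshold: sigma grows
print(s_above, t_above)   # hits the 0.05 cutoff at t close to 1
print(s_below, t_below)   # dispersion: sigma well above its initial value
```

For these parameters the above-threshold case admits the exact solution $\sigma = \sqrt{1-t^2}$, so the numerical collapse time near $t = 1$ also serves as an accuracy check of the integrator.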
Finally, notice the ballistic motion when $N = 2\pi c^{2}/v_{Fe}^2$, which can also lead to singularity. Further insight follows after evaluating the energy integral (\[e18\]) with the [*Ansatz*]{} (\[e19\]), which gives, after eliminating $k$, $$\label{e27} {\cal H}_{ad,sc,2c} = \frac{N v_{Fe}^2}{c^2}\,\left[\frac{\dot\sigma^2}{2} + V_{2c}\right] \quad (\Gamma \equiv 0) \,.$$ Of course, this energy first integral could be obtained directly from Eq. (\[e24\]). However, the plausibility of the variational solution is reinforced, since Eq. (\[e27\]) shows that it preserves the exact constant of motion ${\cal H}_{ad,sc}$. In addition, in the attractive (collapsing) case the energy (\[e27\]) is not bounded from below. In the quantum ($\Gamma \neq 0$) case, Eq. (\[e22\]) becomes a cubic equation in $k$, whose exact solution is too cumbersome to be of practical use. It is better to proceed by successive approximations, taking into account that the quantum and electromagnetic terms are small. In this way, one arrives at $$\label{e28} \ddot\sigma = - \frac{\partial V_{2}}{\partial\sigma} \,,$$ where the pseudo-potential $V_{2}$ is $$\label{e29} V_{2} = \frac{c^2}{2v_{Fe}^2}\,\left( \frac{c^2}{v_{Fe}^2} - \frac{N}{2\pi}\right)\,\frac{1}{\sigma^2} + \frac{\Gamma c^2}{v_{Fe}^2}\,\left(1+\frac{N}{2\pi}\right)\,\frac{1}{\sigma^4}\,.$$ Now, even if the threshold (\[e26\]) is exceeded, the repulsive $\sim\sigma^{-4}$ quantum term in $V_2$ will prevent singularities. This adds quantum diffraction as another physical mechanism, besides dissipation and Landau damping, by which collapsing Langmuir wave packets are avoided in the vector NLS equation. Also, similar to Eq. (\[e27\]), it can be shown that the approximate dynamics preserves the energy integral, even in the quantum case. Indeed, calculating from Eq. 
(\[e18\]) and the variational solution gives ${\cal H}_{ad,sc}$ as $${\cal H}_{ad,sc,2} = \frac{N v_{Fe}^2}{c^2}\,\left[\frac{\dot\sigma^2}{2} + V_{2}\right] \quad (\Gamma \geq 0) \,.$$ From Eq. (\[e28\]), obviously $\dot{\cal H}_{ad,sc,2} = 0$. It should be noticed that oscillations of purely quantum nature are obtained when the number of plasmons exceeds the threshold (\[e26\]). Indeed, in this case the pseudo-potential $V_2$ in Eq. (\[e29\]) assumes a potential well form as shown in Figure 1, which clearly admits oscillations around a minimum $\sigma = \sigma_{m}$. Here, $$\label{e30} \sigma_{m} = 2\,\left[\frac{\Gamma (1 + N/2\pi)}{N/2\pi - c^2/v_{Fe}^2}\right]^{1/2} \,.$$ Also, the minimum value of $V_2$ is $$V_{2}(\sigma_{m}) = - \frac{c^2}{16\Gamma\,v_{Fe}^2}\,\frac{(N/2\pi - c^2/v_{Fe}^2)^2}{1+N/2\pi} > - \frac{1}{16\Gamma}\,\left(\frac{N}{2\pi}-\frac{c^2}{v_{Fe}^2}\right)^2 \,,$$ the last inequality follows since Eq. (\[e26\]) is assumed. Therefore, the potential well becomes deeper as $N$ increases. Also, for overly large quantum effects the trapping of the localized electric field in this potential well would be difficult, since $V_{2}(\sigma_{m}) \rightarrow 0_{-}$ as $\Gamma$ increases. This is due to the dispersive nature of the quantum corrections. The frequency $\omega$ of the small amplitude oscillations is derived by linearizing Eq. (\[e28\]) around the equilibrium point (\[e30\]). Restoring physical coordinates via Eq. 
(\[e4\]) this frequency is calculated as $$\begin{aligned} \omega &=& \frac{c}{\sqrt{2}\,v_{Fe}}\,\left(\frac{\kappa_{B}T_{Fe}}{\hbar\,\omega_{pe}}\right)^2\, \frac{(N/2\pi-c^{2}/v_{Fe}^{2})^{3/2}}{1+N/2\pi}\,\,\omega_{pe} \nonumber \\ \label{e31} &<& \frac{v_{Fe}}{\sqrt{2}\,c}\,\left(\frac{\kappa_{B}T_{Fe}}{\hbar\,\omega_{pe}}\right)^2\, \left(\frac{N}{2\pi}-\frac{c^2}{v_{Fe}^2}\right)^{3/2}\,\omega_{pe} \,.\end{aligned}$$ ![image](fig1.eps) To conclude, the variational solution suggests that the extra dispersion arising from the quantum terms would inhibit the collapse of Langmuir wave packets in two-spatial-dimensions. Moreover, for sufficient electric field energy (which is proportional to $N$), instead of collapse there will be oscillations of the width of the localized solution, due to the competition between the classical refraction and the quantum diffraction. The frequency of linear oscillations is then given by Eq. (\[e31\]). The emergence of a pulsating Langmuir envelope is a qualitatively new phenomenon, which could be tested quantitatively in experiments. Variational solution in three-dimensions ======================================== It is worthwhile to study the dynamics of localized solutions for the vector NLS equation (\[e16\]) in fully three-dimensional space. For this purpose, we consider the Gaussian form $$\label{e32} {\bf\cal{E}} = \left(\frac{N}{(\sqrt{\pi}\,\sigma)^3}\right)^{1/2}\!\!\!\!\!\exp\left[-\frac{r^2}{2\sigma^2}\! +\!i(\Theta\!+\!k\,r^2)\right](\cos\phi\sin\theta, \sin\phi\sin\theta, \cos\theta) \,,$$ where $\sigma, k, \Theta, \theta$ and $\phi$ are real functions of time and $r = \sqrt{x^2+y^2+z^2}$, applying the Rayleigh-Ritz method just as in the previous Section. The normalization condition (\[e12\]) is automatically satisfied with Eq. (\[e32\]), which, incidentally, can also support a transverse ($\nabla\times{\bf\cal{E}} \neq 0$) part. 
Proceeding as before, the Lagrangian $$\begin{aligned} L_3 \equiv \int\,{\cal L}_{ad,sc}\,d{\bf r} &=& - N \,\Bigl[\dot\Theta + \frac{3}{2}\,\sigma^2 \dot{k} + \frac{4\,c^2}{v_{Fe}^2}\,k^2 \sigma^2 + \frac{c^2}{v_{Fe}^2\,\sigma^2} - \frac{N}{4\sqrt{2}\,\pi^{3/2}\,\sigma^3} \nonumber \\ \label{e33} &+& 10\,\Gamma k^2 + 20\,\Gamma k^4 \sigma^4 + \frac{5\,\Gamma}{4\,\sigma^4} +\frac{3\,\Gamma N}{4\sqrt{2}\,\pi^{3/2}\,\sigma^5}\,\Bigr] \end{aligned}$$ is derived. In comparison to the reduced 2D-Lagrangian in Eq. (\[e20\]), there are different numerical factors as well as qualitative changes due to higher-order nonlinearities. Also, the angular variables $\theta$ and $\phi$ do not appear in $L_3$. The main remaining task is to analyze the dynamics of the width $\sigma$ as a function of time. This is achieved from the Euler-Lagrange equations for the action functional associated with $L_3$. As before, $\delta L_{3}/\delta\Theta = 0$ gives $\dot{N}=0$, a consistency test satisfied by the variational solution. The other functional derivatives yield $$\begin{aligned} \label{e34} \frac{\delta L_3}{\delta k} = 0 \rightarrow \sigma\dot\sigma &=& \frac{4k}{3}\, \left[\frac{2\, c^2}{v_{Fe}^2}\,\sigma^2 + 5\Gamma\,(1+4k^2 \sigma^4)\right] \,,\\ \frac{\delta L_3}{\delta\sigma} = 0 \rightarrow \sigma\dot{k} &=& \frac{1}{3}\, \Bigl[- \frac{8\,c^2}{v_{Fe}^2}\,k^2 \sigma + \frac{2\,c^2}{v_{Fe}^2 \sigma^3} - \frac{3\,N}{4\,\sqrt{2}\,\pi^{3/2}\,\sigma^4} \nonumber \\ \label{e35} &-& 80\, \Gamma k^4 \sigma^3 + \frac{5\,\Gamma}{\sigma^5} + \frac{15\,\Gamma N}{4\,\sqrt{2}\,\pi^{3/2}\,\sigma^6}\Bigr] \,.\end{aligned}$$ In the formal classical limit ($\Gamma \equiv 0$), and using Eq. 
(\[e34\]) to eliminate $k$, we obtain $$\label{e36} \ddot\sigma = - \frac{\partial V_{3c}}{\partial\sigma} \,,$$ where now the pseudo-potential $V_{3c}$ is $$\label{e37} V_{3c} = \frac{c^2}{v_{Fe}^2}\,\left( \frac{8\,c^2}{9\,v_{Fe}^2\,\sigma^2} - \frac{2\,N}{9\,\sqrt{2}\,\pi^{3/2}\,\sigma^3}\right) \,.$$ The form (\[e37\]) shows a generic singular behavior, since the attractive $\sim \sigma^{-3}$ term will dominate for sufficiently small $\sigma$, irrespective of the value of $N$. Hence, in fully three-dimensional space there is more “room" for a collapsing dynamics. Figure 2 shows the qualitative form of $V_{3c}$, attaining a maximum at $\sigma = \sigma_M$, where $$\label{e38} \sigma_M = \frac{3\, v_{F}^2\,N}{8\,\sqrt{2}\,\pi^{3/2}\,c^2} \,.$$ ![image](fig2.eps) By Eq. (\[e35\]) and using successive approximations in the parameter $\Gamma$ to eliminate $k$ via Eq. (\[e34\]), we obtain $$\label{ee} \ddot\sigma = - \frac{\partial V_{3}}{\partial\sigma} \,,$$ where $$\label{e39} V_3 = \frac{8\,c^2}{3\,v_{Fe}^2}\,\left[\frac{c^2}{3\,v_{Fe}^2\,\sigma^2} - \frac{N}{12\,\sqrt{2}\,\pi^{3/2}\,\sigma^3} + \frac{5\,\Gamma}{12\,\sigma^4} + \frac{\Gamma\,N}{4\,\sqrt{2}\,\pi^{3/2}\,\sigma^5}\right] \,.$$ The quantum terms are repulsive and prevent collapse, since they dominate for sufficiently small $\sigma$. Moreover, when $\Gamma \neq 0$ an oscillatory behavior is possible, provided a certain condition, to be explained in the following, is met. To examine the possibility of oscillations, consider $V_{3}'(\sigma) = 0$, the equation for the critical points of $V_3$. Under the rescaling $s = \sigma/\sigma_{M}$, where $\sigma_M$ (defined in Eq. (\[e38\])) is the maximum of the purely classical pseudo-potential, the equation for the critical points reads $$\label{e40} V_{3}' = 0 \quad \rightarrow \quad s^3 - s^2 + \frac{4\,g}{27} = 0 \,,$$ where $$\label{e41} g = \frac{480\,\pi^3\,\Gamma\,c^4}{N^2\,v_{Fe}^4}$$ is a new dimensionless parameter. In deriving Eq. 
(\[e40\]), a term was omitted that is negligible except if $s \sim c^2/v_{Fe}^2$, which is unlikely. The quantity $g$ plays a decisive rôle in the shape of $V_3$. Indeed, calculating the discriminant shows that the solutions to the cubic in Eq. (\[e40\]) are as follows: (a) $g < 1 \rightarrow$ three distinct real roots (one negative and two positive); (b) $g = 1 \rightarrow$ one negative root, one (positive) double root; (c) $g > 1 \rightarrow$ one (negative) real root, two complex conjugate roots. Therefore, $g < 1$ is the condition for the existence of a potential well, which can support oscillations. This is shown in Figure 3. The analytic formulae for the solutions of the cubic in Eq. (\[e40\]) are cumbersome and will be omitted. ![image](fig3.eps) Restoring physical coordinates, the necessary condition for oscillations is rewritten as $$\label{e42} g < 1 \quad \rightarrow \quad \frac{\varepsilon_0}{2}\,\int\,|\tilde{\bf E}|^2\,d{\bf r} > \frac{\sqrt{30\pi}}{\gamma}\,\,m_e\,v_{Fe}\,c \,,$$ where $\gamma = e^2/4\,\pi\varepsilon_{0}\,\hbar\,c \simeq 1/137$ is the fine structure constant. From Eq. (\[e42\]) it is seen that for sufficient electrostatic energy the width $\sigma$ of the localized envelope field can show oscillations, supported by the competition between classical refraction and quantum diffraction. Also, due to the Fermi pressure, for large particle densities the inequality (\[e42\]) becomes more difficult to meet, since $v_{Fe} \sim n_{0}^{1/3}$. For example, when $n_0 \sim 10^{36}\,m^{-3}$ (white dwarf), the right-hand side of Eq. (\[e42\]) is $0.6\,$ GeV. For $n_0 \sim 10^{33}\,m^{-3}$ (the next generation intense laser-solid density plasma experiments), it is $57.5$ MeV. Finally, notice that ${\cal H}_{ad,sc}$ from Eq. (\[e18\]), evaluated with the variational solution (\[e32\]), is proportional to $\dot{\sigma}^2/2 + V_3$, which is a constant of motion for Eq. (\[ee\]). 
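The root structure of the cubic in Eq. (\[e40\]) as a function of $g$ is easily confirmed numerically; a quick check with numpy (for the marginal double root at $g = 1$ a looser tolerance would be needed, so only the generic cases are sampled here):

```python
import numpy as np

def real_roots(g, tol=1e-8):
    """Real solutions of s^3 - s^2 + 4g/27 = 0, i.e. Eq. (e40)."""
    roots = np.roots([1.0, -1.0, 0.0, 4.0 * g / 27.0])
    return sorted(z.real for z in roots if abs(z.imag) < tol)

print(real_roots(0.5))  # g < 1: three real roots (one negative, two positive)
print(real_roots(2.0))  # g > 1: a single (negative) real root
```

Only the positive roots are physically meaningful, since $s = \sigma/\sigma_M > 0$; for $g < 1$ they correspond to the local maximum and the local minimum of $V_3$ visible in Figure 3.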
Therefore, the approximate solution preserves one of the basic first integrals of the vector NLS equation (\[e16\]), as it should be. Conclusion ========== In this paper, the quantum Zakharov system in fully three-dimensional space has been derived. An associated Lagrangian structure was found, as well as the pertinent conservation laws. From the Lagrangian formalism, many possibilities open up. Here, the variational description was used to analyze the behavior of localized envelope electric fields of Gaussian shape, in both two- and three-space dimensions. It was shown that the quantum corrections induce qualitative and quantitative changes, inhibiting singularities and allowing for oscillations of the width of the Langmuir envelope field. This new dynamics can be tested in experiments. In particular, the rôle of the parameter $g$ and the inequality in Eq. (\[e42\]) should be investigated. However, the variational method was applied only for the adiabatic and semiclassical case, which allows one to derive the quantum modified vector NLS equation (\[e16\]). Other, more general, scenarios for the solutions of the fully three-dimensional quantum Zakharov system are also worth studying, through numerical and laboratory experiments. [**Acknowledgments**]{} This work was partially supported by the Alexander von Humboldt Foundation. Fernando Haas also thanks Professors Mattias Marklund and Gert Brodin for their warm hospitality at the Department of Physics of Umeå University, where part of this work was produced. [99]{} V. E. Zakharov, Zh. Eksp. Teor. Fiz. [**62**]{}, 1745 (1972) \[Sov. Phys. JETP [**35**]{}, 908 (1972)\]. M. V. Goldman, Rev. Mod. Phys. [**56**]{}, 709 (1984). S. G. Thornhill and D. ter Haar, Phys. Reports [**43**]{}, 43 (1978). L. G. Garcia, F. Haas, L. P. L. de Oliveira and J. Goedert, Phys. Plasmas [**12**]{}, 012302 (2005). F. Haas, G. Manfredi and M. R. Feix, Phys. Rev. E [**62**]{}, 2763 (2000). G. Manfredi and F. Haas, Phys. Rev. 
B [**64**]{}, 075316 (2001). F. Haas, Phys. Plasmas [**12**]{}, 062117 (2005). M. Marklund, Phys. Plasmas [**12**]{}, 082110 (2005). F. Haas, Phys. Plasmas [**14**]{}, 042309 (2007). P. K. Shukla and B. Eliasson, Phys. Rev. Lett. [**96**]{}, 245001 (2006); Phys. Lett. A [**372**]{}, 2893 (2008). X. Y. Tang and P. K. Shukla, Phys. Scripta [**76**]{}, 665 (2007). M. A. Abdou and E. M. Abulwafa, Z. Naturforsch. A [**63**]{}, 646 (2008). S. A. El-Wakil and M. A. Abdou, Nonl. Anal. TMA [**68**]{}, 235 (2008). Q. Yang, C. Q. Dai, X. Y. Wang and J. F. Zhang, J. Phys. Soc. Japan [**74**]{}, 2492 (2005). See the comments about this work in Ref. [@Tang]. A. P. Misra, D. Ghosh and A. R. Chowdhury, Phys. Lett. A [**372**]{}, 1469 (2008). V. E. Zakharov, A. F. Mastryukov and V. H. Sinakh, Fiz. Plazmy [**1**]{}, 614 (1975) \[Sov. J. Plasma Phys. [**1**]{}, 339 (1975)\]. V. E. Zakharov, [*Handbook of Plasma Physics*]{}, eds. M. N. Rosenbluth and R. Z. Sagdeev (Elsevier, New York, 1984), vol. 2, p. 81. M. Landman, G. C. Papanicolaou, C. Sulem, P. L. Sulem and X. P. Wang, Phys. Rev. A [**46**]{}, 7869 (1992). G. C. Papanicolaou, C. Sulem, P. L. Sulem and X. P. Wang, Phys. Fluids B [**3**]{}, 969 (1991). D. F. Dubois, A. Hanssen, H. A. Rose and D. Russel, J. Geophys. Res. [**98**]{}, 17543 (1993). P. A. Robinson and D. H. Newman, Phys. Fluids B [**2**]{}, 3120 (1990). P. Y. Cheung and A. Y. Wong, Phys. Fluids [**18**]{}, 1538 (1985). H. Alinejad, P. A. Robinson, I. H. Cairns, O. Skjaeraasen and C. Sobhanian, Phys. Plasmas [**14**]{}, 082304 (2007). K. Akimoto, H. L. Rowland and K. Papadopoulos, Phys. Fluids [**31**]{}, 2185 (1988). L. H. Li and X. Q. Li, Phys. Fluids B [**5**]{}, 3819 (1993). G. Pelletier, H. Sol and E. Asseo, Phys. Rev. A [**38**]{}, 2552 (1988). L. Stenflo, Phys. Rev. Lett. [**48**]{}, 1441 (1982). M. Marklund, G. Brodin and L. Stenflo, Phys. Rev. Lett. [**91**]{}, 163601 (2003). J. Gibbons, S. G. Thornhill, M. J. Wardrop and D. ter Haar, J. Plasma Phys. 
[**17**]{}, 153 (1977). B. Malomed, D. Anderson, M. Lisak, M. L. Quiroga-Teixeiro and L. Stenflo, Phys. Rev. E [**55**]{}, 962 (1997). R. Fedele, U. de Angelis and T. Katsouleas, Phys. Rev. A [**33**]{}, 4412 (1986).
--- abstract: 'We undertake a regularity analysis of the solutions to initial/boundary value problems for the (third-order in time) Moore-Gibson-Thompson (MGT) equation. The key to the present investigation is that the MGT equation falls within a large class of systems with memory, with affine term depending on a parameter. For this model equation a regularity theory is provided, which is also of independent interest; it is shown in particular that the effect of boundary data that are square integrable (in time and space) is the same as that displayed by wave equations. Then, a general picture of the (interior) regularity of solutions corresponding to homogeneous boundary conditions is specifically derived for the MGT equation in various functional settings. This confirms the gain of one unity in space regularity for the time derivative of the unknown, a feature that sets the MGT equation apart from other PDE models for wave propagation. The adopted perspective and method of proof enable us to attain as well the (sharp) regularity of boundary traces.' address: - ' Francesca Bucci, Università degli Studi di Firenze, [*Dipartimento di Matematica e Informatica*]{}, [Via S. Marta 3, 50139 Firenze, ITALY]{} ' - 'Luciano Pandolfi, Politecnico di Torino, [*Dipartimento di Scienze Matematiche “Giuseppe Luigi Lagrange”*]{}, [Corso Duca degli Abruzzi 24, 10129 Torino, ITALY]{} ' author: - Francesca Bucci - Luciano Pandolfi title: 'On the regularity of solutions to the Moore-Gibson-Thompson equation: a perspective via wave equations with memory' --- Introduction ============ The Jordan-Moore-Gibson-Thompson equation is a nonlinear Partial Differential Equation (PDE) model which describes the acoustic velocity potential in ultrasound wave propagation; the use of the constitutive Cattaneo law for the heat flux, in place of the Fourier law, accounts for its being of third order in time. 
The quasilinear PDE is $$\label{Eq:quasilineare} \tau \psi_{ttt} + \psi_{tt}-c^2\Delta \psi - b\Delta \psi_t= \frac{\partial}{\partial t}\Big(\frac1{c^2}\frac{B}{2A}\psi^2_t+|\nabla \psi|^2\Big)$$ in the unknown $\psi=\psi(t,x)$, that is the acoustic velocity potential (then $-\nabla \psi$ is the acoustic particle velocity), $A$ and $B$ being suitable constants; [*cf.*]{} Moore & Gibson [@moore-gibson_1960], Thompson [@thompson_1972], Jordan [@jordan_2009]. For a brief overview on nonlinear acoustics, along with a list of relevant references, see the recent paper by Kaltenbacher [@kalt_2015]. Aiming at the understanding of the nonlinear equation, a great deal of attention has been recently devoted to its linearization—referred to in the literature as the Moore-Gibson-Thompson (MGT) equation—whose mathematical analysis is also of independent interest, posing already several questions and challenges. Let $\Omega\subset \mathbb{R}^n$ be a region with smooth ($C^2$) boundary $\Gamma:=\partial\Omega$. (It is a natural conjecture that existence results for wave equations in non-smooth domains ([*cf.*]{} [@Grisvard]) can be extended to wave equations with memory and to the MGT equation, by using the methods we present in this paper.) We consider the MGT equation $$\label{e:mgt} \tau u_{ttt}+\alpha u_{tt} -c^2 \Delta u -b \Delta u_t =0 \qquad \text{in $(0,T)\times\Omega$}$$ in the unknown $u=u(t,x)$, $t\ge 0$, $x\in \Omega$, representing the acoustic velocity potential or alternatively, the acoustic pressure (see [@kalt-las-posp_2012] for a discussion on this issue). The coefficients $c$, $b$, $\alpha$ are constant and positive; they represent the speed and diffusivity of sound ($c$, $b$), and, respectively, a viscosity parameter ($\alpha$). For simplicity we set $\tau=1$ throughout the paper. 
Equation is supplemented with initial and boundary conditions: $$\begin{aligned} & u(0,\cdot)=u_0\,,\; u_t(0,\cdot)=u_1\,,\; u_{tt}(0,\cdot)=u_2\,, & \text{in $\Omega$} \label{e:IC} \\[1mm] & {{\mathcal T}}u(t,\cdot) =g(t,\cdot) & \text{on $(0,T)\times\Gamma$}; \label{e:BC}\end{aligned}$$ ${{\mathcal T}}$ denotes here a boundary operator, which—for the sake of simplicity—associates to a function either the trace on $\Gamma$, or the outward normal derivative $\frac{\partial}{\partial \nu}\big|_\Gamma$ (it would be the [*conormal*]{} derivative, in the case of a more general elliptic operator than the Laplacian). The original studies of the MGT equation with homogeneous (Dirichlet or Neumann) boundary data carried out in Kaltenbacher [*et al.*]{} [@kalt-etal_2011] and Marchand [*et al.*]{} [@marchand-etal_2012] establish appropriate functional settings for semigroup well-posedness, as well as stability and spectral properties of the dynamics, depending on the parameter values. They obtain, in particular, 1. that assuming $b>0$ the linear dynamics is governed by a strongly continuous [*group*]{} in the function space $H^1_0(\Omega)\times H^1_0(\Omega)\times L^2(\Omega)$ (Dirichlet BC), or $H^1(\Omega)\times H^1(\Omega)\times L^2(\Omega)$ (Neumann BC); 2. that in the case $b=0$ the associated initial/boundary value problems are ill-posed ([*cf.*]{} Remark \[r:role-of-b\]); 3. that the parameter $\gamma=\alpha - \tau c^2/b$ is a threshold of stability/instability: it must be positive, if the property of uniform stability is required. The critical role of $\gamma$ for a dissipative behaviour was recently pointed out also in Dell’Oro and Pata [@delloro-pata_2016], within the framework of viscoelasticity. (We add that linear and true nonlinear variants of the MGT equation including an [*additional*]{} memory term have been the object of recent investigation; see [@las-jee_2017] and references therein.)
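The threshold role of $\gamma$ recalled in item 3 can be made plausible by an elementary mode-by-mode computation, which we sketch here as a heuristic only (it is not a substitute for the spectral analysis of the cited works). Expanding along Dirichlet eigenfunctions $-\Delta e_n=\mu_n e_n$, $\mu_n>0$, and inserting the ansatz $u=e^{\lambda t}e_n$ into the linear equation yields a cubic characteristic polynomial:

```latex
% Ansatz u(t,x) = e^{\lambda t} e_n(x), with -\Delta e_n = \mu_n e_n, \mu_n > 0,
% inserted into \tau u_{ttt} + \alpha u_{tt} - c^2\Delta u - b\Delta u_t = 0:
\tau\lambda^3 + \alpha\lambda^2 + b\mu_n\,\lambda + c^2\mu_n = 0\,.
% Routh--Hurwitz for a_3\lambda^3 + a_2\lambda^2 + a_1\lambda + a_0 (all a_i > 0):
% every root has negative real part iff a_2 a_1 > a_3 a_0, which here reads
\alpha\, b\mu_n > \tau\, c^2\mu_n
\quad\Longleftrightarrow\quad
\gamma = \alpha - \frac{\tau c^2}{b} > 0\,.
```

Note also that for $b=0$ the coefficient of $\lambda$ degenerates for every mode, consistently with the ill-posedness recalled in item 2.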
Our interest lies in studying the regularity of the mapping $$(u_0,u_1,u_2,g)\longmapsto u$$ that associates to initial and boundary data—taken in appropriate spaces—the corresponding solution $u=u(t,x)$ to the initial/boundary value problem (IBVP) --. (We note that the time variable and, more often, the space variable $x$ will generally not be written explicitly, unless needed for the sake of clarity.) As will be shown in the paper, it is the embedding of equation in a general class of integro-differential equations (depending on a parameter) that sparks our method of proof for the regularity analysis of the associated initial/boundary value problems. Indeed, the MGT equation is a special instance of the following wave equation with persistent memory, $$\label{e:memory} u_{tt}-b \Delta u=-b\gamma \int_0^t N(t-s) \Delta u(s)\,ds + F(t)\xi\,,$$ which displays an affine term depending on a suitable $\xi$, and that will be supplemented with (initial and boundary) data $$\label{eq:dataDIe:memory} u(0)=u_0\,,\ u_t(0)=u_1\,, \qquad {{\mathcal T}}u=g\,.$$ The assumptions on the real valued functions $N(t)$, $F(t)$ and on $\xi$ in are specified later; see Theorem \[t:sample\]. As will be apparent below, the parameter $\xi$ includes the component $u_2$ of initial data $(u_0,u_1,u_2)$ for the MGT equation, while - reduces to the MGT equation (with -) when $$N(t)=F(t)=e^{-\alpha t}\,,\qquad \xi=u_2-b\Delta u_0\,.$$ The obtained regularity results will follow by combining the (interior and trace) regularity theory for wave equations with non-homogeneous boundary data—the Neumann case being the most challenging (see [@las-trig_wave1], and the optimal result of [@tataru_1998])—with the methods developed in [@PandLIBRO] for equations with persistent memory. In order to carry out a regularity analysis of the model equation with memory we shall use the trick of MacCamy [@maccamy_1977] and the theory of Volterra equations.
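The reduction just claimed can be verified directly by differentiating in time; we record the short computation (a sketch, with $\tau=1$ as above):

```latex
% With N(t) = F(t) = e^{-\alpha t} and \xi = u_2 - b\Delta u_0, differentiate
%   u_{tt} - b\Delta u = -b\gamma\int_0^t e^{-\alpha(t-s)}\Delta u(s)\,ds + e^{-\alpha t}\xi
% in t; the right-hand side reproduces itself up to the factor -\alpha, whence
u_{ttt} - b\Delta u_t
  = -b\gamma\,\Delta u(t) - \alpha\big(u_{tt} - b\Delta u\big)\,.
% Since b\gamma = \alpha b - c^2, this is exactly the MGT equation:
u_{ttt} + \alpha u_{tt} - c^2\Delta u - b\Delta u_t = 0\,;
% moreover, evaluating the memory equation at t = 0 returns
% u_{tt}(0) - b\Delta u_0 = \xi, i.e. u_{tt}(0) = u_2.
```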
For equations with memory of the form the reader is referred, e.g., to [@PandLIBRO Chapter 2]; see also [@corduneanu Chapter 5]. A novelty in the equation is brought about by the presence of the (vectorial) parameter $\xi$. A classical reference on—and thorough treatment of—evolutionary integral equations is [@pruess_1993]. It is important to emphasize at the very outset that the adopted perspective and approach pave the way for establishing the (sharp) regularity of boundary traces for the MGT equation, as well as for the solutions to a rather general family of wave equations with memory, supplemented with Dirichlet or Neumann boundary conditions. While this is a topic of recognized current interest, the only result that appears to be available so far is the one obtained (via energy methods) in [@loreti-sforza_parma], tailored for the case of Dirichlet boundary conditions. Our alternative proof for the model equation with memory, depending on the parameter $\xi$ (and with the same BC), is given in Theorem \[t:traces-memory\], which brings about a boundary regularity result for the MGT equation, namely Corollary \[c:traces-mgt\]. (The study of the regularity of boundary traces for wave equations with memory in the case of Neumann boundary conditions is left to a separate, subsequent investigation.) Main results: synopsis ---------------------- The outcome of the [*interior*]{} regularity analysis carried out in this paper is stated in Theorem \[t:main\_1\], pertaining to the general model equation with memory , and Theorem \[t:main\_2\] for the MGT equation itself. Besides being instrumental in achieving the subsequent ones, the former results are also of independent interest.\ Because the said results are presented by means of elaborate tables, aiming at rendering explicit the major achievements on the regularity of equations and —the latter linked to and complementing those in our key reference [@kalt-etal_2011]—we highlight them in Theorem \[t:sample\] below.
Theorem \[t:sample\] includes as well a last statement on the regularity of [*boundary*]{} traces, an issue which is dealt with in Section \[s:traces\]; see Theorem \[t:traces-memory\] and Corollary \[c:traces-mgt\]. For the statement and understanding of all our findings, we need to introduce appropriate functional spaces along with the related notation. Let $A$ be the unbounded operator defined as follows: $$\label{definizOPERATORE-A} A w:=(\Delta -I) w\,, \quad {{\mathcal D}}(A) =\big\{w\in H^2(\Omega)\colon {{\mathcal T}}w =0 \; \text{on $\Gamma$}\big\}\,;$$ namely, $A$ is the (so called) [*realization*]{} of the differential operator $\Delta -I$ in $L^2(\Omega)$, with homogeneous boundary conditions (BC) defined by ${{\mathcal T}}$, in the present work of either Dirichlet or Neumann type; of course, the domain of $A$ depends on $\mathcal{T}$. (We might take the realization of the Laplacian in the case of Dirichlet BC; translating the differential operator allows us to deal with both significant BC at once.) We further note that $A$ is the infinitesimal generator of an exponentially stable analytic semigroup and the fractional powers of $-A$ are well defined. Thus, we are allowed to introduce the functional spaces $X_s$ defined as follows: $$\label{e:x_r} X_s= \begin{cases} {{\mathcal D}}((-A)^{s/2}) & \text{if $s \ge 0$} \\[1mm] [{{\mathcal D}}((-A)^{s/2})]' & \text{if $s < 0$}\,, \end{cases}$$ endowed with the graph norm if $s\ge 0$, while the norm of a dual space is needed otherwise.\ Then, it is well known that $A\colon X_s \rightarrow X_{s-2}$ is continuous, surjective and boundedly invertible. The next theorem collects results obtained in Sections \[s:main\_1\], \[sect:RegulaMGT\],  \[s:traces\]. \[t:sample\] The following assertions hold: - \[item1THTheorem:sample\] [ (Interior regularity for the equation with memory , with homogeneous BC). ]{} Assume $N(t)\in H^2(0,T)$ and $F(t)\in L^2(0,T)$ for every $T>0$. Let $g\equiv 0$.
If $u_0\in X_0$ and $u_1,\xi\in X_{-1}$, then the solution $u$ to the IBVP problem - satisfies $$u\in C([0,T];X_0)\cap C^1([0,T];X_{-1})\cap L^2(0,T;X_{-2})\,.$$ - \[item2THTheorem:sample\] [ (Interior regularity for the MGT equation with homogeneous BC). ]{} If $g\equiv 0$ and $(u_0,u_1,u_2)\in X_1\times X_1\times X_0$, then the solution $u$ to the IBVP problem -- satisfies $$\label{e:boundary-tointerior} u\in C([0,T];X_1)\cap C^1([0,T];X_1)\cap C^2([0,T];X_0)$$ and the map $(u_0,u_1,u_2)\longmapsto u(t,x)$ is continuous in the specified spaces. - \[item3THTheorem:sample\] [ (Boundary-to-interior regularity for equations and , with trivial initial data). ]{} Assume $u_0=u_1=\xi=0$ ($u_0=u_1=u_2=0$, respectively), and $g\in L^2(0,T;L^2(\Gamma))$. Then there exists $\alpha_0$ such that the solutions $u$ to the IBVP problem - (-–, respectively) satisfy $$\label{e:boundary-to-interior} u\in C([0,T];X_{\alpha_0})\cap C^1([0,T];X_{\alpha_0-1})\cap L^2(0,T;X_{\alpha_0-2})\,;$$ the value of $\alpha_0$ depends on the boundary operator $\mathcal{T}$ (and partly on $\Omega$), and is specified in below. - [ (Regularity of boundary traces for the MGT equation ). ]{} Let $u=u(t,x)$ be a solution to the MGT equation corresponding to initial data $(u_0,u_1,u_2)$ and homogeneous boundary data. Assume $(u_0,u_1,u_2)\in H^1_0(\Omega)\times L^2(\Omega)\times H^{-1}(\Omega)$, along with the compatibility condition $$u_2-\Delta u_0\in L^2(\Omega)\,.$$ Then, for every $T>0$ there exists $M=M_T$ such that $$\begin{split} & \int_0^T\!\!\!\int_{\partial\Omega} \Big|\frac{\partial}{\partial\nu} u(x,t)\Big|^2 d\sigma\,d t \le M\, \Big( \|u_0\|^2_{H^1_0(\Omega)}+\|u_1\|^2_{L^2(\Omega)} + \\[1mm] & {\qquad\qquad\qquad}{\qquad\qquad\qquad}\qquad +\|u_2-\Delta u_0\|^2_{L^2(\Omega)}\Big)\,.
\end{split}$$ We see from the statements in i) and ii), respectively, that while the equation with memory displays a somewhat expected regularity, namely, the same as most PDE models for wave propagation, the interior regularity of solutions to the MGT equation under homogeneous boundary conditions improves.\ Instead, the regularity result in iii)—that pertains to the case of trivial initial data ($u_0=u_1=u_2=0$) and non-homogeneous boundary data ($g\ne 0$)—is not improved by special choices of the kernel $N(t)$, such as $N(t)=e^{-\alpha t}$. It is worth mentioning that our analysis does not disclose that the dynamics of the MGT equation is governed by a [*group*]{}. Following the studies on well-posedness performed in [@kalt-etal_2011] and [@marchand-etal_2012], the present study focuses on the regularity analysis of a general class of PDE systems which are governed by [*semigroups*]{}—not necessarily groups—and whose solutions generally display a lower regularity than the ones of equation . The higher interior regularity for the MGT equation is obtained when we particularize the formulas and exploit the smoothness of the coefficients. We note that the values of $\alpha_0$ which occur in —and which correspond to appropriate Sobolev exponents—are the ones established in the case of (linear) hyperbolic equations with $L^2(\Sigma)$ boundary data (of either Dirichlet or Neumann type). We record explicitly for the IBVP $$\label{e:ibvp-wave} \begin{cases} u_{tt}=\Delta u - u+f & \text{in $(0,T)\times \Omega$} \\[1mm] u(0,\cdot)=u_0\,, \; u_t(0,\cdot)=u_1 & \text{in $\Omega$} \\[1mm] {{\mathcal T}}u=g & \text{on $(0,T)\times \Gamma$} \end{cases}$$ a statement which embodies a complex of successive achievements; see the cited references. \[t:tataru\] Assume that $u_0=u_1=0$, $f= 0$, and $g\in L^2(\Sigma)$.
Then, the unique solution to the initial/boundary value problem satisfies $$(u,u_t)\in C([0,T];H^{\alpha_0}(\Omega) \times H^{\alpha_0-1}(\Omega))\,,$$ with $$\label{e:alfazero} \alpha_0 = \begin{cases} 0 & \text{if ${{\mathcal T}}$ is the Dirichlet trace operator} \\[1mm] \frac23 & \text{if ${{\mathcal T}}$ is the Neumann trace operator and $\Omega$ is a smooth domain} \\[1mm] \frac34 & \text{if ${{\mathcal T}}$ is the Neumann trace operator and $\Omega$ is a parallelepiped.} \end{cases}$$ For a chronological overview with historical and technical remarks see, e.g., [@las-trig-book Notes on Chapter 8, p. 761]. Concerning the regularity of wave equations, we finally point out the recent progress of [@triggiani_2016], dealing with the case of boundary data $g$ that are [*not*]{} ‘smooth in space’, e.g., $g\in L^2(0,T;H^{-1/2}(\Gamma))$. In view of the approach taken in the present work, it is clear that the results obtained therein could be utilized as well in order to attain regularity results for equations with memory and for the MGT equation under boundary data that are less regular (than square integrable) in space. Orientation ----------- The plan of the paper is briefly outlined below. For the reader’s convenience and since these tools will be utilized throughout, in Section \[s:preliminaries-on-wave\] we provide a minimal background and references on the approach to linear wave equations via cosine operator theory. In Section \[sec:memory\] we perform an analysis of the equation with memory that encompasses the MGT equation. An equivalent equation—in fact easier, since the convolution term therein does not involve differential operators at all—is derived, which in turn results in a Volterra equation of the second kind; see Proposition \[p:volterra\]. This step will play a crucial role in the proof of our first regularity result, that is Theorem \[t:main\_1\], concerning the model equation with memory .
Section \[s:main\_1\] is then almost entirely devoted to the proof of Theorem \[t:main\_1\]. In Section \[sect:RegulaMGT\] we return to the third order MGT equation and show how the (interior) regularity results specifically pertaining to the MGT equation, stated in Theorem \[t:main\_2\], follow as a consequence of Theorem \[t:main\_1\]. Finally, Section \[s:traces\] is devoted to the regularity of boundary traces; see Theorem \[t:traces-memory\] and Corollary \[c:traces-mgt\].\ A discussion and explanation of the introduced definition of solutions to the third order (in time) equation under investigation is postponed to Appendix \[a:def-solutions\]. Preliminaries on wave equations {#s:preliminaries-on-wave} =============================== Consider the initial/boundary value problem for a linear wave equation . Since the methods of proof employed in the present work rely in a crucial way on the representation of solutions to wave equations by means of cosine operators, a few lines on this approach follow. The reader is referred to [@belleni] and [@las-trig-cos], where a first use of cosine operators is found in order to study equations with persistent memory and in the context of boundary control theory, respectively; see also the former contribution of [@sova_1966]. We adopt here the notation of [@belleni] and [@fattorini]. We shall use the operator $A$ in , which is the realization of the translation $\Delta -I$ of the Laplacian in $L^2(\Omega)$, with suitable homogeneous boundary conditions, according to a (boundary) operator ${{\mathcal T}}$. (In the Dirichlet case $A$ might be simply the realization of the Laplacian.) As noted already, $A$ is boundedly invertible, i.e. $A^{-1}$ exists and is bounded, in fact compact (even if ${{\mathcal T}}$ represents the normal derivative on $\Gamma$).
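The bounded invertibility of $A$ also under Neumann conditions—where the Laplacian itself is not invertible, constants lying in its kernel—is a consequence of the coercivity of the translated operator; the following one-line check is standard and is recorded here only for the reader's convenience (the boundary term in the integration by parts vanishes for either BC):

```latex
% For w \in {\mathcal D}(A), with homogeneous Dirichlet or Neumann BC,
\langle -Aw, w\rangle_{L^2(\Omega)}
  = \int_\Omega \big(|\nabla w|^2 + |w|^2\big)\,dx \;\ge\; \|w\|_{L^2(\Omega)}^2\,,
% so that 0 \in \rho(A); the compactness of A^{-1} then follows from the
% compact embedding H^1(\Omega) \hookrightarrow L^2(\Omega) (for bounded \Omega).
```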
It generates an exponentially stable analytic semigroup, the fractional powers of $-A$ are well defined, and we shall use the spaces $X_s$ in ($X_s$ has the graph norm if $s\ge 0$, and the norm as a dual space otherwise). We recall once more that $A$: $X_s \rightarrow X_{s -2}$ is continuous, surjective and boundedly invertible. Next, we introduce the Green map $G\in {{\mathcal L}}(L^2(\Gamma),L^2(\Omega))$, defined as follows: $$\label{e:green-map} G\colon L^2(\Gamma)\ni \varphi\longmapsto G\varphi=:\psi \; \Longleftrightarrow \; \begin{cases} \Delta \psi =\psi & \textrm{in $\Omega$} \\[1mm] {{\mathcal T}}\psi=\varphi & \textrm{on $\Gamma$}\,. \end{cases}$$ By elliptic theory, it is known that there exists an appropriate $s >0 $ such that $\text{im}\,G\subset X_s$ so that $\text{im}\,(AG)\subset X_{s -2}$. For instance, in the case of Dirichlet boundary conditions one has $\text{im}\,G= H^{1/2}(\Omega) \subset X_s$, with inclusion that holds true for any $s=1/2 -\sigma$, $0<\sigma<\frac{1}{2}$. Thus, it is known that the solution to the IBVP is given by $$\label{e:waves-explicit} \begin{split} u(t)& =R_+(t)u_0+{{\mathcal A}}^{-1} R_-(t)u_1-{{\mathcal A}}\int_0^t R_-(t-s)Gg(s)\,ds \,+ \\[1mm] & {\qquad\qquad\qquad}+{{\mathcal A}}^{-1} \int_0^t R_-(t-s)f(s)\,ds \end{split}$$ where the operator ${{\mathcal A}}$, and the families of operators $R_+(\cdot)$, $R_{-}(\cdot)$ are defined as follows: $$\label{eq:defiOperRpm} {{\mathcal A}}=i(-A)^{1/2}\,,\qquad R_+(t)=\frac{e^{{{\mathcal A}}t}+e^{-{{\mathcal A}}t}}{2}\,, \qquad R_-(t)=\frac{e^{{{\mathcal A}}t}-e^{-{{\mathcal A}}t}}{2}\,,$$ $R_+(t)$ being the strongly continuous [*cosine*]{} operator generated by $A$ in $L^2(\Omega)$; see [@sova_1966], [@fattorini], [@las-trig-book Vol. II].
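For homogeneous data the representation formula can be checked by direct differentiation, using the relations $R_\pm'(t)={{\mathcal A}}R_\mp(t)$ and ${{\mathcal A}}^2=A$, which follow at once from the definitions above (a sketch):

```latex
% With f = 0 and g = 0 the formula reduces to u(t) = R_+(t)u_0 + {\mathcal A}^{-1}R_-(t)u_1; hence
u_t(t) = {\mathcal A}R_-(t)u_0 + R_+(t)u_1\,,
\qquad
u_{tt}(t) = {\mathcal A}^2 R_+(t)u_0 + {\mathcal A}R_-(t)u_1 = A\,u(t)\,,
% while R_+(0) = I and R_-(0) = 0 give u(0) = u_0 and u_t(0) = u_1.
```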
The previous definitions make sense because ${{\mathcal A}}$ is the infinitesimal generator of a $C_0$-[*group*]{} of operators; in particular, we have as well $$X_s={{\mathcal D}}((i{{\mathcal A}})^s)={{\mathcal D}}({{\mathcal A}}^s) \qquad\qquad \text{if $s\ge 0$,}$$ and ${{\mathcal A}}$ is bounded and boundedly invertible from $X_s$ to $X_{s-1}$ for every $s$. Computing the derivatives of we obtain the following equalities, valid in $H^{-1}(\Omega)$ and $H^{-2}(\Omega)$, respectively: $$\label{e:waves-explicit-prime} \begin{split} u_t(t)& ={{\mathcal A}}R_-(t)u_0+R_+(t)u_1-A\int_0^t R_+(t-s) Gg(s)\,ds\, + \\[1mm] & {\qquad\qquad\qquad}+ \int_0^t R_+(t-s)f(s)\,ds\,, \end{split}$$ as well as $$\label{e:waves-explicit-second} \begin{split} u_{tt}(t)&=A R_+(t)u_0+{{\mathcal A}}R_-(t)u_1 -AG g(t) -A\Big({{\mathcal A}}\int_0^t R_-(t-s) Gg(s)\,ds\Big)+ \\[1mm] & + f(t) +{{\mathcal A}}\int_0^t R_-(t-s)f(s)\,ds = \\[1mm] &=Au(t)-AGg(t)+f(t)\,. \end{split}$$ If $f(\cdot)$ is of class $C^1([0,T])$ then it is possible to integrate by parts, like in $${{\mathcal A}}^{-1} \int_0^t R_-(t-s)f(s)\,ds=-A^{-1}\left [ f(t)-R_+(t)f(0)-\int_0^t R_+(t-s) f'(s)\, d s \right ]$$ which brings about a gain of one unity in space regularity. The integration by parts is rigorously justified in [@PandAMO Lemma 5]. The explicit formula , along with and , is among the keys for the following regularity result. The statement in iii) is by far the most challenging, as its proof is based on pseudo-differential methods and microlocal analysis. \[teo:propertyONDE\] Let $T>0$ be given, and $s\in \mathbb{R}$. The following statements hold true for the solutions to the initial/boundary value problem (\[e:ibvp-wave\]). 1. \[teo:propertyONDE-1\] Assume $g=0$, $f=0$. Then $(u_0,u_1)\longmapsto u(t)$ is continuous from $X_s\times X_{s-1}$ into $C([0,T],X_s)\cap C^1([0,T],X_{s-1})\cap C^2([0,T],X_{s-2})$. 2. \[teo:propertyONDE-2\] Assume $u_0=0$, $u_1=0$, $g=0$.
Then the map $f \longmapsto u(t)$ is continuous from $L^2(0,T;X_s)$ into $C([0,T],X_{s+1})\cap C^1([0,T],X_s)$, while $u_{tt}(t)-f(t)\in C([0,T],X_{s-1})$. 3. \[teo:propertyONDE-3\] Assume $u_0=0$, $u_1=0$, $f=0$. Then, there exists $\alpha_0\ge 0$—depending on ${{\mathcal T}}$ and possibly on the geometry of $\Omega$—such that for every $g\in L^2((0,T)\times \Gamma)$ we have $u\in C([0,T],X_{\alpha_0})\cap C^1([0,T];X_{\alpha_0-1})\cap C^2([0,T],X_{\alpha_0-2})$. The mapping $g\longmapsto u$ is continuous in the indicated spaces. 1\. With reference to the assertion $iii)$ above, we remind the reader that the proper values of the Sobolev exponent $\alpha_0$ are given in . 2\. The properties stated in the previous Theorem justify as a formula for the solutions to the IBVP , since the following fact is easily checked: when $u_0, u_1\in {{\mathcal D}}(\Omega)$ ($C^\infty(\Omega)$ functions with compact support), $f \in {{\mathcal D}}((0,T)\times \Omega)$, $g \in {{\mathcal D}}((0,T)\times \Gamma)$, then $u-Gg\in C([0,T],{{\mathcal D}}(A))\cap C^1([0,T],{{\mathcal D}}(A))\cap C^2([0,T];L^2(\Omega))$ and the following equality holds: $$u_{tt}(t)=A(u(t)-Gg(t))+f(t)\,,$$ along with $u(0)=u_0$, $u_t(0)=u_1$. Thus, the boundary condition ${{\mathcal T}}u=g$ is satisfied in the sense that $u(t)-Gg(t)\in {{\mathcal D}}(A)$ for almost any $t$. The MGT equation as an equation with memory {#sec:memory} =========================================== We initially proceed formally. Rewrite the left-hand side of equation as $$\label{eq:MGTPasso1} \begin{split} & u_{ttt}+\alpha u_{tt} -c^2 \Delta u -b \Delta u_t = \\[1mm] & {\qquad\qquad\qquad}= \big(u_{tt}-b \Delta u\big)_t +\alpha \big(u_{tt}-b\Delta u\big) - c^2\Delta u +\alpha b \Delta u= \\[1mm] & {\qquad\qquad\qquad}= \big(u_{tt}-b \Delta u\big)_t +\alpha \big(u_{tt}-b\Delta u\big)+b \gamma \Delta u =0 \end{split}$$ where we recall that $\gamma = \alpha -c^2/b$.
Solving the equation $$\big(u_{tt}-b \Delta u\big)_t =-\alpha \big(u_{tt}-b\Delta u\big)-b \gamma \Delta u$$ in the ‘unknown’ $u_{tt}-b \Delta u$ gives the following integral equation in the unknown $u$ (whose notion of solution has in fact not yet been defined): $$\label{eq:MGTriscritta} u_{tt}-b \Delta u= e^{-\alpha t}\xi-b\gamma \int_0^t e^{-\alpha (t-s)}\Delta u(s) \, ds\,,$$ with $\xi= u_2-b\Delta u_0$. Thus, in view of the obtained equation , we consider the following (more general) model equation with persistent memory, depending on the parameter $\xi$: $$\label{eq:MEMORY} u_{tt}-b \Delta u= -b\gamma \int_0^t N(t-s)\Delta u(s) \, ds + F(t)\xi$$ (which already appeared—as —in the Introduction and is recorded here for the reader’s convenience; notice that both functions $N(t)$ and $F(t)$ equal $e^{-\alpha t}$ in the MGT equation). \[r:role-of-b\] If it happens that $\gamma=0$, then is nothing but a wave equation with affine term $F(t)\xi$ and the regularity of the corresponding solutions follows from Theorem \[teo:propertyONDE\]. Thus, we explicitly assume $\gamma \ne 0$, and recall from the Introduction that $b>0$. It is important to emphasize that in the case $b=0$ the problem is ill-posed, since the semigroup generation fails, as proved in [@kalt-etal_2011 Theorem 1.1]; instead, if $b<0$ then the PDE becomes a [*nonlocal*]{} elliptic equation of a kind studied by Skubacevskiǐ in [@skubacevskii]. The regularity analysis of equation is carried out under the assumptions listed below. \[a:kernel-and-affine\] i) The coefficient $b$ is positive. ii) The memory kernel $N(t)$ and the function $F(t)$ are real valued; $N(t)\in H^2(0,T)$ while $F(t)\in L^2(0,T)$ for every $T>0$. An equivalent Volterra integral equation ---------------------------------------- A first step in our analysis is to show that we can get rid of the (second order) differential operator in the convolution term of .
To do so, let us preliminarily introduce the Volterra equation of the second kind $$\label{e:volterra-2-kind} X(t)- \gamma \int_0^t N(t-s) X(s) \, ds = G(t)\,, \qquad t\in [0,T]\,.$$ This equation has a unique solution $X(t)$ given by the following formula: $$\label{eq:soluFORMvolte} X(t)=G(t)-\int_0^t R_0(t-s) G(s)\,ds\,,$$ where $R_0(\cdot)$ is the (unique) solution to the integral equation $$\label{e:resolvent-kernel} R_0(t)-\gamma \int_0^t N(t-s) R_0(s)\,ds = -\gamma N(t)\,, \qquad t\in [0,T]\,.$$ The function $t \longmapsto R_0(t)$ is the [*resolvent kernel*]{} of the Volterra equation. An important observation is that $R_0\in H^2(0,T)$, since $N\in H^2(0,T)$. We then see (either from  or from ) that if $G(t)$ is continuous then $X(t)$ is continuous; if $G(t)$ is square integrable then $X(t)$ is square integrable. We now perform several formal computations which will lead to a definition of the solutions to equation (with appropriate initial and boundary data). Rewrite the equation in the following different fashion, $$\label{e:like-a-volterra} \Delta u -\gamma \int_0^t N(t-s)\Delta u(s) \, ds =\frac{1}{b}\big(u_{tt}- F(t)\xi\big)\,,$$ that is a Volterra integral equation of the second kind in the unknown $\Delta u$. With reference to the general form , we have here $$G(t)=\frac{1}{b}\big(u_{tt}- F(t)\xi\big)\,.$$ The formula gives $$b\Delta u =u_{tt}- F(t)\xi- \int_0^t R_0(t-s)\big(u_{ss}(s)- F(s)\xi\big)\,ds\,,$$ where $R_0(\cdot)$ is the unique solution to the integral equation , as explained above.
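In the MGT case the resolvent kernel is actually available in closed form; the following explicit expression is not needed in the sequel, but it may help to fix ideas (a direct verification, assuming $\gamma\ne 0$ as in Remark \[r:role-of-b\]): with $N(t)=e^{-\alpha t}$, the resolvent equation is solved by $R_0(t)=-\gamma\,e^{(\gamma-\alpha)t}$. Indeed,

```latex
% Check that R_0(t) = -\gamma e^{(\gamma-\alpha)t} solves
%   R_0(t) - \gamma\int_0^t e^{-\alpha(t-s)} R_0(s)\,ds = -\gamma e^{-\alpha t}:
-\gamma e^{(\gamma-\alpha)t}
  + \gamma^2 e^{-\alpha t}\!\int_0^t e^{\gamma s}\,ds
  = -\gamma e^{(\gamma-\alpha)t} + \gamma e^{-\alpha t}\big(e^{\gamma t}-1\big)
  = -\gamma e^{-\alpha t}\,.
% In particular R_0(0) = -\gamma, R_0'(0) = -\gamma(\gamma-\alpha), and
% R_0 \in H^2(0,T) for every T > 0, as asserted above.
```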
Since $R_0\in H^2(0,T)$ we are allowed to integrate by parts twice, thereby obtaining $$\begin{split} b\Delta u & = u_{tt}- F(t)\xi - \Big\{R_0(t-s)u_t(s)\big|_{s=0}^{s=t}+\int_0^t R_0'(t-s)u_s(s)\,ds\Big\} + \\[1mm] & {\qquad\qquad\qquad}+ \int_0^t R_0(t-s)F(s)\xi\,ds= \\[1mm] & = u_{tt}- F(t)\xi-R_0(0)u_t(t) + R_0(t) u_1 - R_0'(0)u(t)+R_0'(t)u_0- \\[1mm] & {\qquad\qquad\qquad}- \int_0^t R_0''(t-s)u(s)\,ds + \int_0^t R_0(t-s)F(s)\xi\,ds\,, \end{split}$$ where the memory term does not contain differential operators. The computations carried out so far—known as MacCamy’s trick [@maccamy_1977]—are purely formal, since the solutions to the equation have not yet been defined. The obtained equation is a wave equation perturbed by a persistent memory, namely, $$\begin{split} u_{tt} & = b(\Delta-I) u + (R_0'(0)+b) u(t) + R_0(0)u_t(t) + \int_0^t R_0''(t-s)u(s)\,ds- \\[1mm] & {\qquad\qquad\qquad}-R_0'(t)u_0 - R_0(t) u_1 + \Big\{F(t)\xi -\int_0^t R_0(t-s)F(s)\xi\,ds\Big\}\,. \end{split}$$ The introduction of the function $$\label{e:variable-v} v(t) = e^{-\frac{1}{2}R_0(0)t} u(t)$$ enables us to eliminate the term $R_0(0)u_t$, and to attain the following equation in the unknown $v$: $$\label{eq:MEMORY-for-v} v_{tt}=b (\Delta v-v)+ \int_0^t K(t-s)v(s)\,ds + \beta v(t) + \big(h_2(t)\xi+h_1(t)u_1+ h_0(t) u_0\big)\,,$$ with the constant $\beta$ and the functions $K(\cdot)$, $h_i(\cdot)$, $i=0,1,2$ given by the formulas below: $$\label{e:various-functions} \begin{split} & \text{$K(t) = e^{-\frac{1}{2}R_0(0)t} R_0''(t)$ is square integrable in $(0,T)$;} \\ & \beta=b+\frac{1}{4}R_0^2(0)+R_0'(0)\,; \\ & h_0(t)=-e^{-\frac12 R_0(0) t} R_0'(t)\in H^1(0,T)\,; \\ & h_1(t)=-e^{-\frac12 R_0(0) t} R_0(t)\in H^2(0,T)\,; \\ & \text{$h_2(t)= e^{-\frac12 R_0(0) t}\big(F(t)-\int_0^t R_0(t-s) F(s)\,ds\big)$ is square integrable.} \end{split}$$ The above suggests the following Definition, which is rigorously justified in the Appendix. \[d:def-solution\] Let ${{\mathcal H}}$ be a Hilbert space.
An ${{\mathcal H}}$-valued function $t\longmapsto u(t)$ is a solution of equation supplemented with initial/boundary conditions if the function $t\longmapsto v(t)$ defined in is an ${{\mathcal H}}$-valued continuous function which solves the Volterra integral equation , with $\beta$, $K(\cdot)$, $h_i(\cdot)$, $i=0,1,2$ defined by . In the case $F(t)\equiv N(t)=e^{-\alpha t}$, the above definition yields the definition of solutions to the MGT equation , with initial/boundary conditions -. On the basis of Definition \[d:def-solution\] we are led to study the regularity of solutions to the following IBVP for the wave equation with memory , that is: $$\label{ibvp-for-v} \begin{cases} v_{tt}=b (\Delta v-v)+ \int_0^t K(t-s)v(s)\,ds + \beta v(t) + \big(h_2(t)\xi+h_1(t)u_1+ h_0(t) u_0\big) \\[1mm] v(0,\cdot)=v_0\,, \; v_t(0,\cdot)=v_1 \\[1mm] {{\mathcal T}}v=e^{-\frac{1}{2}R_0(0)t} g\,, \end{cases}$$ where initial data are related to the ones of $u$ via the following relations: $$\label{eq:datiUdatiV} v_0=u_0\,, \qquad v_1= u_1-\frac{1}{2}R_0(0)u_0\,.$$ The next Proposition connects the IBVP to a Volterra equation of the second kind, with suitable kernel and affine term.
\[p:volterra\] Any solution to the initial/boundary value problem solves the Volterra equation $$\label{eq:represent} v(t)+\int_0^t L(t-s)v(s)\,ds=H(t)\,,$$ where $L(\cdot)$ is a strongly continuous kernel defined by $$\label{eq:defiL} L(t)v=-\frac{\beta}{\sqrt b} {{\mathcal A}}^{-1}R_-(\sqrt b t) v -\frac{1}{\sqrt b}{{\mathcal A}}^{-1}\int_0^t R_-(\sqrt b(t-s)) K(s)v\,ds$$ (and $K(\cdot)$ is defined explicitly in ), while the affine term $H(\cdot)$ is given by $$\label{eq:defiH} \begin{split} H(t)&=\Big[R_+(\sqrt{b}t)-\frac{R_0(0)}{2\sqrt{b}}{{\mathcal A}}^{-1}R_-(\sqrt{b}t)\Big]u_0 +\frac{1}{\sqrt{b}}{{\mathcal A}}^{-1} R_-(\sqrt{b}t) u_1- \\[1mm] & \qquad -\sqrt{b} {{\mathcal A}}\int_0^t R_-(\sqrt{b}(t-s)) G\,e^{-\frac{1}{2}R_0(0)s} g(s)\,ds+ \\[1mm] & \qquad +\frac{1}{\sqrt{b}} {{\mathcal A}}^{-1}\int_0^t R_-(\sqrt{b}(t-s)) \big[h_2 (s)\xi+ h_1(s)u_1+h_0(s)u_0\big]\, ds\,. \end{split}$$ We recall once more that $R_0(\cdot)$ is the (scalar) resolvent kernel defined—in terms of $N(\cdot)$—by the integral equation . The proof is straightforward, in view of formula for the solution to wave equations with initial and boundary data. We just recall here that the abstract operator $A$ is the realization of the differential operator $\Delta -I$ with boundary conditions driven by ${{\mathcal T}}$, while $R_+(\sqrt{b}t)$, the [*cosine*]{} operator generated by $bA$, and $R_-(\sqrt{b}t)$ are defined in (\[eq:defiOperRpm\]). We finally note that $H(\cdot) z\in C([0,T];X_\alpha)$, for every $z\in X_\alpha$. Interior regularity for the equation {#s:main_1} ===================================== In this Section we see how the regularity results pertaining to wave equations stated in Theorem \[teo:propertyONDE\] can be suitably extended to the general equation with memory of the form . This will eventually imply the [*stronger*]{} regularity of solutions to the third order MGT equation (see the next Section). 
The key and starting point is the Volterra integral equation in the unknown $v$. Its kernel $L(\cdot)$ is now operator valued and *strongly continuous* from $[0,+\infty)$ to ${{\mathcal L}}(X_\alpha)$ for every $\alpha$. By using Theorem \[teo:propertyONDE\] we will derive the regularity properties of the right hand side of , which will be inherited by $v$ and then by the solutions to the wave equation with memory . These properties will be expressed in terms of the boundary datum $g$, as well as of initial data $u_0$, $u_1$ and $\xi$. It is convenient to write explicitly the solution of a Volterra integral equation in a Hilbert space ${{\mathcal H}}$. We introduce the notation $*$ for the convolution, $$L * h=\int_0^t L(t-s)h(s)\,ds=\int_0^t L(s)h(t-s)\,ds\,.$$ Here $L(t)$ is a strongly continuous function of time, with values in ${{\mathcal L}}({{\mathcal H}})$ and $h(t)$ is an integrable ${{\mathcal H}}$-valued function.\ Moreover, let $L^{(*n)}$ denote iterated convolutions, recursively defined by the following equalities $$L^{(*1)}=L\,,\quad L^{(*(n+1))}*h=L*\Big( L^{(*n)}*h\Big)$$ (for every integrable ${{\mathcal H}}$-valued function $h$). Then, the solution to the Volterra equation —that is $v+L*v=H$, in short—is $$v=H+\sum _{k=1}^{\infty} (-1)^k\, L^{(*k)}*H\,.$$ Uniform convergence of the series is easily proved. In the special case of our interest, ${{\mathcal H}}=X_\alpha$ and $L(t)$ is given by . The following result is well known. \[lemmaVOLTE\] Let $T>0$ and let the kernel $L(\cdot)$ be given by . If $H\in C([0,T];X_\alpha)$, then the solution $v$ of the Volterra equation $v+L*v=H$ satisfies $v\in C([0,T];X_\alpha)$. We will repeatedly use Lemma \[lemmaVOLTE\] in order to pinpoint the regularity of the solutions to the initial/boundary value problems associated with the equation . \[t:main\_1\] Consider Eq.  with initial data $(u_0,u_1)$ and boundary data defined by .
Then, the regularity of the (linear) map $(u_0,u_1,\xi,g) \longmapsto (u,u_t,u_{tt})$ is detailed in Table \[tableGENEresu\]. ------------- --------- --------- ----------------- --------------------------------------------------------------------------------------- $u_0$ $u_1$ $\xi$ $g $ $u=u(t,x)$ solution of $X_\lambda$ $0$ $0$ $0$ $C([0,T];X_\lambda)\cap C^1([0,T];X_{\lambda-1})\cap C^2([0,T];X_{\lambda-2})$ $0$ $X_\mu$ $0$ $0$ $C([0,T];X_{\mu+1})\cap C^1([0,T];X_\mu)\cap C^2([0,T]; X_{\mu-1})$ $0$ $0$ $X_\nu$ $0$ $\left\{\begin{array}{l} \mbox{if $F(t)\in L^2(0,T)$ then:}\\ C([0,T];X_{\nu+1})\cap C^1([0,T]; X_{\nu }) \cap H^2([0,T]; X_{\nu-1})\,; \\ \mbox{if $F(t)\in H^1(0,T)$ then:}\\ C([0,T];X_{\nu+2})\cap C^1([0,T]; X_{\nu+1}) \cap H^2([0,T]; X_\nu) \end{array}\right.$ $0$ $0$ $0$ $ L^2(\Sigma) $ $C([0,T];X_{\alpha_0})\cap C^1([0,T];X_{\alpha_0-1 })\cap H^2([0,T]; X_{\alpha_0-2})$ ------------- --------- --------- ----------------- --------------------------------------------------------------------------------------- : Regularity of solutions to the equation . The transformations are continuous between the indicated spaces.[]{data-label="tableGENEresu"} The proof of the several statements contained in Table \[tableGENEresu\] is structured in few major steps. #### **0. Premise and outline.** Consider the Volterra equation , and note that the functions $v_t(t)$ and $v_{tt}(t)$ solve the same Volterra integral equation of the second kind in a Hilbert space, yet with different affine terms, $H_1(\cdot)$ and $H_2(\cdot)$ say, respectively, which will be computed in the next step. In view of Lemma \[lemmaVOLTE\], the (time and space) regularity of these affine terms—depending on $u_0$, $u_1$, $\xi$ and $g$—will naturally bring about the (time and space) regularity for the triple $(v,v_t,v_{tt})$.\ To do so we will set to zero all data but one. 
Finally, the derived regularity properties will be inherited by the triple $(u,u_t,u_{tt})$ pertaining to the original equation with persistent memory , still depending on $u_0$, $u_1$, $\xi$ and $g$.

#### **1. The affine terms of Volterra equations.**

We rewrite in the form $$v(t)+\int_0^t L(s)v(t-s) ds= H(t)$$ and compute the derivatives of both sides. Inserting the expressions and of $L(t)$ and $H(t)$, and replacing the initial data $v_0$ and $v_1$ with their respective expressions in terms of $u_0$ and $u_1$ (see ), we obtain the following Volterra integral equations in the unknowns $v_t(t)$ and $v_{tt}(t)$: $$\begin{aligned} v_t(t)+\int_0^t L(t-s)v_s(s)\,ds= H_1(t)\,, \label{eq:deriPRIMA} \\[1mm] v_{tt}(t)+\int_0^t L(t-s)v_{ss}(s)\,ds = H_2(t) \label{eq:deriSEconda}\end{aligned}$$ where $$\begin{aligned} \nonumber & H_1(t):= L(t)v_0+H_t(t)= \\ &=\Big[\frac{\beta}{\sqrt{b}}{{\mathcal A}}^{-1} R_-(\sqrt{b} t)u_0 + \frac{1}{\sqrt b}\int_0^t R_-(\sqrt{b}(t-s)){{\mathcal A}}^{-1}K(s)u_0\,ds\Big] +H_t(t)\,, \label{e:y_1} \\[2mm] & H_2(t):= \Big[\frac{\beta}{\sqrt b}{{\mathcal A}}^{-1} R_-(\sqrt{b}t)u_1 +\frac{1}{\sqrt b} {{\mathcal A}}^{-1}\int_0^t R_-(\sqrt{b}(t-s))K(s)u_1\,ds \Big] - \nonumber \\[1mm] & \qquad\qquad-\frac{R_0(0)}{2\sqrt b}\Big[\beta{{\mathcal A}}^{-1}R_-(\sqrt{b}t)u_0 +{{\mathcal A}}^{-1}\int_0^t R_-(\sqrt{b}(t-s))K(s)u_0 \,ds \Big]+ \nonumber \\[1mm] & {\qquad\qquad\qquad}+\beta R_+(\sqrt{b}t)u_0+\int_0^t R_+(\sqrt{b}(t-s)) K(s)u_0\,ds +H_{tt}(t)\,, \nonumber\end{aligned}$$ while the explicit expression of $H(t)$ is recorded here for the reader’s convenience: $$\begin{split} H(t)&=\Big[R_+(\sqrt{b}t)-\frac{R_0(0)}{2\sqrt{b}}{{\mathcal A}}^{-1}R_-(\sqrt{b}t)\Big]u_0 +\frac{1}{\sqrt{b}}{{\mathcal A}}^{-1} R_-(\sqrt{b}t) u_1- \\[1mm] & \qquad -\sqrt{b} {{\mathcal A}}\int_0^t R_-(\sqrt{b}(t-s)) G\,e^{-\frac{1}{2}R_0(0)s} g(s)\,ds+ \\[1mm] & \qquad +\frac{1}{\sqrt{b}} {{\mathcal A}}^{-1}\int_0^t R_-(\sqrt{b}(t-s)) \big[h_2 (s)\xi+ 
h_1(s)u_1+h_0(s)u_0\big]\, ds\,. \end{split}$$ As will become clear immediately below, we have not written the derivatives of $H(t)$ explicitly, since their regularity is easily deduced by invoking Theorem \[teo:propertyONDE\] once more.

#### **2a. Effects of boundary data action.**

With $u_0,\,u_1,\,\xi\equiv 0$, $g\in L^2(\Sigma)$, the affine term $H(t)$ in (recorded above) reduces to $$H(t)=-\sqrt{b}{{\mathcal A}}\int_0^t R_-(\sqrt{b}(t-s))Gg(s)\, ds\,.$$ Therefore we know from assertion iii) of Theorem \[teo:propertyONDE\] that $$(H,H_t,H_{tt})\in C([0,T];X_{\alpha_0}\times X_{\alpha_0-1}\times X_{\alpha_0-2})\,.$$ Thus, Lemma \[lemmaVOLTE\] shows that the solutions of the Volterra equation, as well as those pertaining to the original equation with memory, belong to $$C([0,T];X_{\alpha_0})\cap C^1 ([0,T];X_{\alpha_0-1})\cap C^2([0,T];X_{\alpha_0-2})\,.$$

#### **2b. Effects of the initial datum $u_0$.**

Assume $u_1=0$, $\xi=0$, $g=0$, and $u_0\in X_\lambda$. The affine term of the equation in the unknown $v$ becomes $$H(t) = \Big[R_+(\sqrt{b}t)-\frac{R_0(0)}{2\sqrt{b}}{{\mathcal A}}^{-1}R_-(\sqrt{b}t)\Big]u_0 +\frac{1}{\sqrt{b}} {{\mathcal A}}^{-1}\int_0^t R_-(\sqrt{b}(t-s)) h_0(s)u_0\, ds\,,$$ so that readily $$H\in C([0,T];X_\lambda)\cap C^1([0,T];X_{\lambda-1})\cap C^2([0,T];X_{\lambda-2})\,,$$ which immediately implies $v(t)\in C([0,T];X_\lambda)$. Recall now the term $H_1$ in and notice that its regularity is determined by the regularity of $H_t$. Then, $H_1$—as well as $v_t$, in view of Lemma \[lemmaVOLTE\]—belongs to $C([0,T];X_{\lambda-1})$, while $H_2$ and then $v_{tt}(t)$ belong to $C([0,T];X_{\lambda-2})$, which establishes the first row of Table \[tableGENEresu\].

#### **2c. Effect of the initial datum $u_1$.**

Assume $u_0=0$, $\xi=0$, and $g=0$, while $u_1\in X_\mu$. 
In this case $$H(t) = \frac{1}{\sqrt{b}}{{\mathcal A}}^{-1} R_-(\sqrt{b}t) u_1 +\frac{1}{\sqrt{b}} {{\mathcal A}}^{-1}\int_0^t R_-(\sqrt{b}(t-s)) h_1(s)u_1\, ds\,,$$ so that we have a slight regularization $$(H,H_t,H_{tt})\in C([0,T];X_{\mu +1}\times X_\mu\times X_{\mu-1});$$ the transformation $u_1\longmapsto H$ is continuous in the indicated spaces ([*cf.*]{} assertion [*iii)*]{} of Theorem \[teo:propertyONDE\]).\ The obtained regularity for $H$ and its derivatives holds for $H_i$, $i=1,2$, and then is inherited by the solution $v(t)$: namely, $$v\in C([0,T];X_{\mu+1})\cap C^1([0,T];X_\mu)\cap C^2([0,T]; X_{\mu-1})\,;$$ in turn, the same is valid for $u$, thereby confirming the second row of Table \[tableGENEresu\].

#### **2d. Effect of the parameter $\xi$.**

We finally discuss the dependence on $\xi$. Assume $u_0=u_1=0$, $g=0$, and $\xi\in X_\nu$. In this case $$H(t)=\frac{1}{\sqrt{b}} {{\mathcal A}}^{-1}\int_0^t R_-(\sqrt{b}(t-s)) h_2 (s)\xi \,ds$$ and, from , $h_2(t)\in L^2(0,T)$, just like $F(t)$. We invoke once more item [*ii)*]{} of Theorem \[teo:propertyONDE\], and ascertain again a slightly regularizing property: the transformation $\xi\longmapsto v$ is continuous from $X_\nu$ to $C([0,T];X_{\nu+1})\cap C^1([0,T];X_\nu)\cap H^2([0,T]; X_{\nu-1})$ (while if in addition $F(t)$—and consequently, $h_2(t)$—is continuous, then $v\in C^2([0,T];X_{\nu-1})$). In the case $F\in H^1(0,T)$ (as is the case for the MGT equation) we have a stronger regularization, since we can integrate by parts as follows: $$\begin{split} H(t) &=- \frac{1}{b}\,A^{-1}\int_0^t \frac{d}{ds }R_+(\sqrt{b}(t-s)) h_2 (s)\xi \, ds= \\[1mm] & \qquad=- \frac{1}{b}\,A^{-1}\Big[\big(h_2(t)-R_+(\sqrt{b}t) h_2(0)\big)\xi -\int_0^t R_+(\sqrt{b}(t-s)) h_2'(s)\xi \,ds\Big]\,; \end{split}$$ a rigorous justification is found, e.g., in [@PandAMO Lemma 5]. 
For a better understanding, we compute explicitly $$\begin{split} H_t(t) &=-\frac{1}{b}\, A^{-1}\Big[\cancel{h_2'(t)\xi}-\sqrt{b}{{\mathcal A}}R_-(\sqrt{b}t)\xi -\cancel{h_2'(t)\xi}- \\[1mm] & {\qquad\qquad\qquad}-\sqrt{b}{{\mathcal A}}\int_0^t R_-(\sqrt{b}(t-s)) h_2' (s)\xi \,ds\Big]= \\[1mm] &= \frac{1}{\sqrt{b}} {{\mathcal A}}^{-1} R_-(\sqrt{b}t)\xi + \frac{1}{\sqrt{b}} {{\mathcal A}}^{-1}\int_0^t R_-(\sqrt{b}(t-s)) h_2'(s)\xi \,ds\in X_{\nu+1}\,, \end{split}$$ and $$H_{tt}(t) =R_+(\sqrt{b}t)\xi + \int_0^t R_+(\sqrt{b}(t-s)) h_2'(s)\xi \,ds\in X_{\nu}\,,$$ which completes the proof of the membership $H\in X_{\nu+2}$. It is important to note that the space regularity increases by one unit, and we get the result in the third row of Table \[tableGENEresu\], where $H^2$ is replaced with $C^2$ if furthermore $F\in C^2(0,T)$, which is the case of the MGT equation. The noticeable outcome of the obtained regularity result is that $u_1$ and $\xi$ are regularized by one and two units, respectively. Hence, when $g=0$ and $u_0=0$, while $u_1$ and $\xi$ belong to $X_r$, the triple $(u(t),u_t(t),u_{tt}(t))$ evolves in $X_{r+1}\times X_r\times X_{r-1}$. From Table \[tableGENEresu\] of Theorem \[t:main\_1\] we deduce, in particular, the following regularity result. \[c:example\] Consider equation with initial data $(u_0,u_1)$, and homogeneous boundary data, namely, $g\equiv 0$ in . If $F(\cdot)\in C^1$ and $(u_0,u_1,\xi)\in X_r\times X_{r-1} \times X_{r-2}$, then the corresponding weak solution satisfies $$(u,u_t,u_{tt})\in C([0,T];X_r\times X_{r-1}\times X_{r-2})\,.$$

Interior regularity for the MGT equation {#sect:RegulaMGT}
========================================

In this Section we utilize the analysis performed for the general class of equations with memory , in order to derive a result pertaining to the MGT equation, that is Theorem \[t:main\_2\] below. 
This Theorem establishes, in particular, the statements of Theorem \[t:sample\] detailing the regularity from the boundary to the interior for the MGT equation (i.e. item [*iii)*]{}), as well as the one pertaining to the interior regularity, under homogeneous boundary data (i.e. item [*ii)*]{}). The latter result is consistent with the analysis formerly carried out in [@kalt-etal_2011], which brought about semigroup well-posedness of the MGT equation in the space ${{\mathcal D}}(A^{1/2})\times {{\mathcal D}}(A^{1/2}) \times L^2(\Omega)$, $A$ being the proper realization of the Laplacian in $L^2(\Omega)$; see [@kalt-etal_2011 Theorem 1.2]. The peculiar regularity of the MGT equation is here (re)confirmed in a wealth of functional settings. Recall that for the special case of the MGT equation we have in particular $$N(t)=F(t) = e^{-\alpha t}\,, \qquad \xi = u_2-b\Delta u_0$$ in . The meaning given to solutions is still the one in Definition \[d:def-solution\]. We restart from the integral equation which defines the resolvent associated with the convolution kernel $-\gamma N(t)$ of , that is equation , which in the present case—with $N(t)= e^{-\alpha t}$—reads as $$\label{e:resolvent-eq-mgt} R_0(t)-\gamma \int_0^t e^{-\alpha(t-s)}\, R_0(s)\, ds= - \gamma e^{-\alpha t}\,.$$ It is then easily verified that the solution to is given by $$\label{e:kernel-mgt} R_0(t)= -\gamma e^{(\gamma-\alpha)t}=-\gamma e^{-\frac{c^2}{b}t}\,,$$ which gives $R_0(0)= -\gamma$ and hence $$v(t)= e^{\frac{\gamma}2 t}u(t)$$ for $v$ defined in . In view of Definition \[d:def-solution\], and taking into account the actual expression of $R_0(t)$—depending on $N(t)$ and $F(t)$—in , the following instance of Definition \[d:def-solution\] comes into the picture. 
A function $u=u(t,x)$ is a solution of the IBVP -- for the Moore-Gibson-Thompson equation if and only if the function $v(t,x) = e^{\frac{\gamma}{2}t}u(t,x)$ solves $$\begin{split} v_{tt} &=b (\Delta v-v)+ \int_0^t K(t-s)v(s)\,ds + \beta v(t) \\[1mm] & \qquad \qquad + h_2(t)\big(u_2-b\Delta u_0\big)+h_1(t)u_1+ h_0(t) u_0\,, \end{split}$$ where $$\begin{split} & K(t) = -\gamma (\gamma-\alpha)^2 e^{(\frac{3}{2}\gamma-\alpha)t}\,, \qquad \beta=b-\gamma\Big(\frac{3}{4}\gamma -\alpha\Big)\,, \\ & h_0(t)=-\gamma (\gamma-\alpha) e^{(\frac{3}{2}\gamma -\alpha)t}\,, \quad h_1(t)=-\gamma e^{(\frac{3}{2}\gamma -\alpha)t}\,, \quad h_2(t)= e^{(\frac{3}{2}\gamma-\alpha)t}\,. \end{split}$$ Thus, on the basis of Theorem \[t:main\_1\], we develop a picture of the interior regularity for the MGT equation. \[t:main\_2\] The regularity of the map $(u_0,u_1,u_2,g)\longmapsto u$ that associates to initial and boundary data the solution $u=u(t,x)$ to the Moore-Gibson-Thompson equation is described in Table \[tableMGTresu\].

  ------------- --------- --------- ----------------- ---------------------------------------------------------------------------------------
  $u_0$         $u_1$     $u_2$     $g$               $u=u(t,x)$ solution of the MGT equation

  $X_\lambda$   $0$       $0$       $0$               $C([0,T];X_\lambda)\cap C^1([0,T];X_{\lambda })\cap C^2([0,T];X_{\lambda-1})$

  $0$           $X_\mu$   $0$       $0$               $C([0,T];X_{\mu+1})\cap C^1([0,T];X_\mu)\cap C^2([0,T]; X_{\mu-1})$

  $0$           $0$       $X_\nu$   $0$               $C([0,T];X_{\nu+2})\cap C^1([0,T]; X_{\nu+1})\cap C^2([0,T]; X_{\nu })$

  $0$           $0$       $0$       $L^2(\Sigma)$     $C([0,T];X_{\alpha_0})\cap C^1([0,T];X_{\alpha_0-1 })\cap H^2([0,T]; X_{\alpha_0-2})$
  ------------- --------- --------- ----------------- ---------------------------------------------------------------------------------------

  : Regularity of solutions to the equation . 
The transformations are continuous between the indicated spaces.[]{data-label="tableMGTresu"}

Along the lines of the first steps of the proof of Theorem \[t:main\_1\], we return to the Volterra equation and note that the affine term $H(t)$ in must be rewritten taking into account that in the present case we have $\xi=u_2-b\Delta u_0$. Let us focus on the last summand of $H(t)$, that is $$\frac{1}{\sqrt{b}} {{\mathcal A}}^{-1}\int_0^t R_-(\sqrt{b}(t-s)) \big[h_2 (s)\big(u_2-b\Delta u_0\big)+ h_1(s)u_1+h_0(s)u_0\big]\, ds\,,$$ and more specifically on the term $$T(t)=\frac{1}{\sqrt{b}} {{\mathcal A}}^{-1}\int_0^t R_-(\sqrt{b}(t-s))h_2 (s)[u_2-b\Delta u_0]\,ds\,.$$ We rewrite $$\label{e:t} \begin{split} T(t)&=\underbrace{\frac{1}{\sqrt{b}} {{\mathcal A}}^{-1}\int_0^t R_-(\sqrt{b}(t-s))h_2 (s)u_2\,ds}_{T_1(t)}- \\[1mm] & \qquad \underbrace{-\sqrt{b} {{\mathcal A}}^{-1}\int_0^t R_-(\sqrt{b}(t-s))h_2 (s)\Delta u_0\,ds}_{T_2(t)} \end{split}$$ and compute $$\label{e:t_2} \begin{split} T_2(t)&=-\sqrt{b} {{\mathcal A}}^{-1}\int_0^t R_-(\sqrt{b}(t-s))h_2 (s)\big(\Delta u_0-u_0+u_0\big)\,ds= \\[1mm] &=\underbrace{-\sqrt{b} {{\mathcal A}}^{-1}\int_0^t R_-(\sqrt{b}(t-s))h_2 (s) Au_0\,ds}_{T_{21}(t)}- \\[1mm] & {\qquad\qquad\qquad}\underbrace{-\sqrt{b} {{\mathcal A}}^{-1}\int_0^t R_-(\sqrt{b}(t-s))h_2 (s)u_0\,ds}_{T_{22}(t)}\,. \end{split}$$ If $u_0\in X_\lambda$, then $Au_0\in X_{\lambda-2}$; moreover, recall that $A={{\mathcal A}}^2$, as well as the relation between the operators $R_-(\cdot)$ and $R_+(\cdot)$. We then proceed with the computations, integrating by parts, to get $$\label{e:t_21} \begin{split} T_{21}(t)& =-\sqrt{b} {{\mathcal A}}^{-1}\int_0^t R_-(\sqrt{b}(t-s))h_2 (s)Au_0\,ds \\[1mm] & = -\sqrt{b} {{\mathcal A}}\int_0^t R_-(\sqrt{b}(t-s))h_2 (s)u_0\,ds= \\[1mm] & =\int_0^t \frac{d}{ds} \Big\{R_+(\sqrt{b}(t-s)) h_2 (s)u_0\Big\}\,ds - \int_0^t R_+(\sqrt{b}(t-s))h_2'(s)u_0\,ds= \\[1mm] & =h_2(t)u_0-R_+(\sqrt{b}t)h_2(0)u_0 -\int_0^t R_+(\sqrt{b}(t-s))h_2'(s)u_0\,ds\,. 
\end{split}$$ Combining with and , and inserting the resulting expression of $T(t)$ in $H(t)$, we obtain $$\begin{split} H(t)& =\cancel{R_+(\sqrt{b}t)u_0}-\frac{R_0(0)}{2\sqrt{b}}{{\mathcal A}}^{-1} R_-(\sqrt{b}t)u_0 +\frac{1}{\sqrt{b}}{{\mathcal A}}^{-1} R_-(\sqrt{b}t) u_1- \\[1mm] & \qquad -\sqrt{b} {{\mathcal A}}\int_0^t R_-(\sqrt{b}(t-s)) G\,e^{-\frac{1}{2}R_0(0)s} g(s)\,ds+ \\[1mm] & \qquad +\frac{1}{\sqrt{b}} {{\mathcal A}}^{-1}\int_0^t R_-(\sqrt{b}(t-s))h_2 (s)u_2\, ds+ \\[1mm] & \qquad + h_2(t) u_0\cancel{- R_+(\sqrt{b}t)u_0} - \int_0^t R_+(\sqrt{b}(t-s))h_2'(s)u_0\, ds- \\[1mm] & \qquad - \sqrt{b} {{\mathcal A}}^{-1}\int_0^t R_-(\sqrt{b}(t-s))h_2 (s)u_0\,ds+ \\[1mm] & \qquad + \frac{1}{\sqrt{b}} {{\mathcal A}}^{-1}\int_0^t R_-(\sqrt{b}(t-s))\big[h_1(s)u_1+h_0(s)u_0\big]\,ds\,, \end{split}$$ where the term $R_+(\sqrt{b}t)u_0$ appears twice with opposite signs, and hence cancels. Rearranging the summands and replacing $t-s$ with $s$ in the integrals, we obtain $$\label{e:h} \begin{split} H(t)& =\Big(h_2(t) -\frac{R_0(0)}{2\sqrt{b}}{{\mathcal A}}^{-1} R_-(\sqrt{b}t)\Big) u_0 - \int_0^t R_+(\sqrt{b}s)h_2'(t-s)u_0\, ds+ \\[1mm] & \qquad + {{\mathcal A}}^{-1}\int_0^t R_-(\sqrt{b}s) \Big(\frac{1}{\sqrt{b}} h_0(t-s)-\sqrt{b} h_2 (t-s)\Big)u_0\,ds+ \\[1mm] & \qquad +\frac{1}{\sqrt{b}}{{\mathcal A}}^{-1} R_-(\sqrt{b}t) u_1 + \frac{1}{\sqrt{b}} {{\mathcal A}}^{-1}\int_0^t R_-(\sqrt{b}s)h_1(t-s)u_1\,ds \\[1mm] & \qquad +\frac{1}{\sqrt{b}} {{\mathcal A}}^{-1}\int_0^t R_-(\sqrt{b}s)h_2 (t-s)u_2\, ds \\[1mm] & \qquad -\sqrt{b} {{\mathcal A}}\int_0^t R_-(\sqrt{b}(t-s)) G\,e^{-\frac{1}{2}R_0(0)s} g(s)\,ds\,, \end{split}$$ which makes apparent the regularity of $H(t)$, and with it the sought regularity properties of solutions to the MGT equation. Notice first that in comparison with the general model equation with memory the space regularity of $H(t)$ is not improved, owing to the presence of the term $h_2(t)u_0$. 
Instead, the regularity of $H_t(t)$ is improved thanks to the cancellation of the term $R_+(\sqrt{b}t)u_0$: in fact, if $g\equiv 0$, $u_1=u_2=0$, $u_0\in X_\lambda$, then $H_t\in C([0,T];X_\lambda)$, while $H_{tt}\in C([0,T];X_{\lambda-1})$. However, the said cancellation (of a term depending only on $u_0$) has no effect on the remaining terms: the dependence on $u_1$ and $u_2$ is subject to the smoothing effect already described in Table \[tableGENEresu\] (in terms of $u_1$ and $\xi$). Thus, the results displayed in Table \[tableMGTresu\] follow. (The cancellation also has another significant effect: the summand $h_2(t)u_0$ decays in time, but does not propagate in space, as explained in Remark \[Rem:nonpropagazione\].) Observe that in the term $$\frac{1}{\sqrt{b}} {{\mathcal A}}^{-1}\int_0^t R_-(\sqrt{b}(t-s))h_2 (s)u_2\, ds$$ one may integrate by parts, thereby obtaining $$\begin{split} &\frac{1}{\sqrt{b}} {{\mathcal A}}^{-1}\int_0^t R_-(\sqrt{b}(t-s))h_2 (s)u_2\, ds = -\frac{1}{b} {{\mathcal A}}^{-2}\Big\{h_2 (t)u_2-R_+(\sqrt{b}t)h_2 (0)u_2 \\[1mm] & {\qquad\qquad\qquad}+ \int_0^t R_+(\sqrt{b}(t-s))h_2' (s)u_2\, ds\Big\} \end{split}$$ which confirms the said smoothing effect. Using once more the fact that the functions $h_i(t)$, $i=0,1,2$, are twice differentiable, it is easily seen that when $u_0\in X_\lambda$, $u_1=u_2=0$, $g=0$, then $$\label{e:anomaly} H(t)\in C([0,T];X_\lambda)\cap C^1([0,T];X_\lambda)\cap C^2([0,T];X_{\lambda-1})\,;$$ the regularity of $v$ and that of $u$ follow accordingly. In conclusion, the representation of $H(t)$ shows that all the regularity results summarized in the rows of Table \[tableGENEresu\] remain valid, with the exception of those in the first row, which are improved consistently with . 
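As a quick sanity check (illustrative only, with arbitrarily chosen sample values of $\gamma$ and $\alpha$), the explicit MGT resolvent kernel $R_0(t)=-\gamma e^{(\gamma-\alpha)t}$ computed above can be verified against its defining Volterra equation by direct quadrature:

```python
import numpy as np

# Check by quadrature that R0(t) = -gamma * exp((gamma - alpha) t) solves
#   R0(t) - gamma * int_0^t exp(-alpha (t - s)) R0(s) ds = -gamma * exp(-alpha t),
# the resolvent equation for the convolution kernel N(t) = exp(-alpha t).
gamma, alpha = 0.7, 1.3                    # sample values, purely illustrative
t = np.linspace(0.0, 2.0, 4001)
dt = t[1] - t[0]
R0 = -gamma * np.exp((gamma - alpha) * t)

residual = np.empty_like(t)
for i in range(t.size):
    w = np.exp(-alpha * (t[i] - t[:i + 1])) * R0[:i + 1]
    integral = dt * (w.sum() - 0.5 * (w[0] + w[-1]))   # trapezoidal rule
    residual[i] = R0[i] - gamma * integral + gamma * np.exp(-alpha * t[i])

err = np.max(np.abs(residual))
print(f"max residual of the resolvent equation: {err:.2e}")
```

The residual is of the order of the quadrature error, confirming the closed-form kernel.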
We note that in particular the regularity of the mapping $$g \longmapsto (u,u_t,u_{tt}) \qquad \text{(assuming $u_0=u_1=u_2=0$)}$$ for the Moore-Gibson-Thompson equation is the same as in the case of the wave equation (as well as of the equation with memory ). Hence, the last row in Table \[tableMGTresu\], explicitly stated in [*iii)*]{} of Theorem \[t:sample\], follows readily from Theorem \[t:tataru\]. \[Rem:nonpropagazione\] We note that $R_-(\sqrt{b}t)u_0$ and $R_+(\sqrt{b}t)u_0$ solve the wave equation, and so the ‘shape’ of $u_0$ is propagated in space, as in the wave equation. Instead, the term $h_2(t)u_0$ (which decays exponentially in time) is a stationary wave and does not propagate in the space variables.\ Thanks to the formulas for the solutions of the Volterra integral equations, this stationary wave also appears in the solution of the MGT equation.

Boundary regularity {#s:traces}
===================

In this Section we establish a sharp regularity result for the normal trace on $\Gamma=\partial \Omega$ of solutions to the MGT equation , supplemented with (homogeneous) Dirichlet boundary conditions. This result, presented as Corollary \[c:traces-mgt\], follows from a boundary regularity result pertaining to the family of wave equations with memory , depending on $\xi\in L^2(\Omega)$, that is Theorem \[t:traces-memory\] below. In doing so we re-obtain, in the case $\xi=0$, a result established only recently in [@loreti-sforza_parma] via multiplier techniques, that is Theorem 1.1 therein. We point out that the present approach to the analysis of wave equations with memory enables us to pinpoint the boundary (besides the interior) regularity of solutions in a direct and straightforward way. 
The tools employed are the ones of operator and semigroup theories, along with the theory of Volterra equations; the regularity results for wave equations already available in the literature play a crucial role.\ Thus, our method of proof paves the way for the derivation of appropriate boundary regularity results for the model equation with memory , as well as for the MGT equation , when supplemented with Neumann boundary conditions (a case which is known to be drastically more difficult for the wave equation itself). These are—to the authors’ knowledge—both open problems (the former, even in the case $\xi=0$). Let the operator ${{\mathcal T}}$ be the Dirichlet trace on $\Gamma = \partial \Omega$, and let $G=G_D$ be the Green map defined by accordingly. Then, an elementary computation which utilizes the (second) Green Theorem yields, for $\phi\in D(A)$, the following trace result: $$\label{e:basic-trace-d} G^*A\phi=-\frac{\partial \phi}{\partial\nu}\Big|_\Gamma \qquad \forall \phi\in D(A)\,;$$ see, e.g., [@las-trig-book Vol. I, p. 181]. We begin by recalling the (by now well known) result on the boundary traces of the wave equation: \[t:traces-waves\] Let $u=u(t,x)$ be a solution to the initial/boundary value problem for the wave equation with homogeneous boundary data (i.e. $g=0$). Then, for every $T>0$ there exists $M=M_T$ such that $$\int_0^T\!\!\!\int_{\partial\Omega} \Big|\frac{\partial}{\partial\nu} u(x,t)\Big|^2 d\sigma\,d t \le M\, \Big( \|u_0\|^2_{H^1_0(\Omega)}+\|u_1\|_{L^2(\Omega)}^2 +\|f\|^2_{L^1(0,T;L^2(\Omega))}\Big)\,.$$ We now see that this property is inherited by the solutions to the equation with memory , and next by the solutions to the MGT equation , provided a suitable compatibility condition for initial data is satisfied; see below. The first precise statement is as follows. ([*Cf.*]{} [@loreti-sforza_parma Theorem 1.1] for the case $\xi=0$.) 
\[t:traces-memory\] Under the standing Hypotheses \[a:kernel-and-affine\] and assuming $\xi\in L^2(\Omega)$, let $u=u(t,x)$ be a solution to the equation with memory , with initial data $(u_0,u_1)$ and homogeneous boundary data. Then, for every $T>0$ there exists $M=M_T$ such that $$\label{e:traces-memory} \int_0^T\!\!\!\int_{\partial\Omega} \Big|\frac{\partial}{\partial\nu} u(x,t)\Big|^2 d\sigma\,d t \le M\, \Big(\|u_0\|^2_{H^1_0(\Omega)}+\|u_1\|_{L^2(\Omega)}^2 +\|\xi\|^2_{L^2(\Omega)}\Big)\,.$$ Since the equation is supplemented with Dirichlet boundary conditions, we may take $A=\Delta$, with domain $H^2(\Omega)\cap H^1_0(\Omega)$. The estimate is obtained as a simple consequence of the boundary regularity of solutions to the equation with memory in , in which the convolution term is free of differential terms. Rewrite the equation in as $$v_{tt}=\Delta v+k_0v+\int_0^t K(t-s)v(s) ds+{{\mathcal F}}(t)\,,$$ where we have set $b=1$ for the sake of simplicity (recall that $b$ must be positive), while ${{\mathcal F}}(t)$ is now $${{\mathcal F}}(t):=\big(h_2(t)\xi+h_1(t)u_1+ h_0(t) u_0\big)\,,$$ with the scalar functions $h_i(\cdot)$, $i=0,1,2$ introduced in . It then follows that $$\begin{split} & v(t)=\underbrace{R_+(t)v_0+{{\mathcal A}}^{-1}R_-(t)v_1}_{=: u(t)}+ \\ & {\qquad\qquad\qquad}+ {{\mathcal A}}^{-1}\int_0^t R_-(t-s)\Big [k_0 v(s)+\int_0^s K(s-r) v(r) dr\Big]\,ds+ \\[1mm] & {\qquad\qquad\qquad}\qquad +\,\underbrace{{{\mathcal A}}^{-1}\int_0^t R_-(t-s) {{\mathcal F}}(s)\,ds}_{=:T(t)}\,. \end{split}$$ First of all we note that Theorem \[t:traces-waves\] is valid for the function $u(t)$. Next, we observe that the integral term $T(t)$ depends on $\xi$, as the function ${{\mathcal F}}(t)$ does. Let us examine this dependence. 
We recall the following version of the Young inequality: given a Hilbert space $H$, if $h\in L^1(0,T;\mathbb{R})$ and $X\in L^2(0,T;H)$ then the convolution $X*h$ satisfies $$\|X*h\|_{L^2(0,T;H)}\le \|X\|_{L^2(0,T;H)}\,\|h\|_{L^1(0,T;\mathbb{R})}\,.$$ Then, assuming $\xi\in D(A)$, we obtain $$\begin{aligned} & \frac{\partial }{\partial \nu} {{\mathcal A}}^{-1}\int_0^t R_-(t-s) h_2(s) \xi\,ds =D^*A\left [{{\mathcal A}}^{-1}\int_0^t R_-(t-s) h_2(s) \xi\,ds \right ]= \\[1mm] & {\qquad\qquad\qquad}=\int_0^t h_2(t-s)X(s)\, ds\,,\end{aligned}$$ where we set $$X(t):= D^*A\left [{{\mathcal A}}^{-1} R_-(t)\xi\right ]=\frac{\partial }{\partial \nu} \left [{{\mathcal A}}^{-1} R_-(t)\xi\right ]\,.$$ Then, the (direct) inequality pertaining to the wave equation establishes $$\Big\|\frac{\partial }{\partial \nu} \left [{{\mathcal A}}^{-1} R_-(t)\xi\right ]\Big\|_{L^2(0,T;L^2(\Gamma))} \le M\|\xi\|_{L^2(\Omega)}\,,$$ which is extended by continuity to every $\xi\in L^2(\Omega)$. The Young inequality then gives $$\Big \| \int_0^\cdot h_2(\cdot-s)X(s)\, ds\Big\|_{L^2(0,T;L^2(\Gamma))}\le M\|\xi\|_{L^2(\Omega)}\,.$$ The remaining summands within $T(t)$, depending on $u_0$ and $u_1$, are continuous $D(A)$-valued functions, too. Therefore, the normal trace of $v$ reads as $$\begin{aligned} &\frac{\partial}{\partial \nu} v(t)\Big|_\Gamma=-G^*A v(t)= -G^*A\Big[u(t)+{{\mathcal A}}^{-1}\int_0^t R_-(t-s) {{\mathcal F}}(s)\,ds\Big]- \\[1mm] & \qquad\qquad -G^*A \Big[{{\mathcal A}}^{-1}\int_0^t R_-(t-s)\Big(k_0 v(s)+\int_0^s K(s-r) v(r) dr\Big)\,ds\Big] \end{aligned}$$ and we see that there exists $M>0$ such that $$\begin{aligned} & \left \|-G^*A\Big[u(t)+{{\mathcal A}}^{-1}\int_0^t R_-(t-s) {{\mathcal F}}(s)\,ds\Big]\right \|^2_{L^2(0,T;L^2(\Gamma))}\le \nonumber \\[1mm] & {\qquad\qquad\qquad}\le M \left (\|u_0\|^2_{H^1_0(\Omega)}+\|u_1\|^2_{L^2(\Omega)} +\|\xi\|^2_{L^2(\Omega)}\right )\,. \label{e:to-be-combined}\end{aligned}$$ A similar inequality is valid for the second summand. 
In fact, we know ([*cf.*]{} the second statement of Theorem \[t:sample\]) that $v\in C([0,T];H^1_0(\Omega))$, with continuous dependence on initial data. Therefore, the second summand satisfies $$G^*A \Big[{{\mathcal A}}^{-1}\int_0^t R_-(t-s)\Big(k_0 v(s)+\int_0^s K(s-r) v(r) dr\Big)\,ds\Big] \in C(0,T;L^2(\Gamma))\,,$$ which, combined with , implies the sought estimate . For the MGT equation, one readily obtains the following result. \[c:traces-mgt\] Let $u=u(t,x)$ be a solution to the Moore-Gibson-Thompson equation , with initial data $(u_0,u_1,u_2)$ and homogeneous boundary data. Assume $(u_0,u_1,u_2)\in H^1_0(\Omega)\times L^2(\Omega)\times H^{-1}(\Omega)$, along with the compatibility condition $$\label{e:compatibility} \xi=u_2-\Delta u_0\in L^2(\Omega)\,.$$ Then, for every $T>0$ there exists $M=M_T$ such that $$\int_0^T\!\!\!\int_{\partial\Omega} \Big|\frac{\partial}{\partial\nu} u(x,t)\Big|^2 d\sigma\,d t\le M\, \Big( \|u_0\|^2_{H^1_0(\Omega)}+\|u_1\|_{L^2(\Omega)}^2 +\|u_2-\Delta u_0\|^2_{L^2(\Omega)}\Big)\,.$$

Justification of Definition \[d:def-solution\] {#a:def-solutions}
==============================================

Let us recall that in order to give a Definition of solutions to the MGT Equation  we proceeded as follows: formal calculations were used to reduce equation to the integro-differential equation and then to the Volterra integral equation in the unknown $v$. By definition, $u$ solves when $v(t)=e^{-(R(0)/2) t} u(t)$ solves the Volterra integral equation (with $g$ replaced by $e^{-(R(0)/2) t}g(t)$). In this Appendix we provide a rigorous justification of the said Definition. The argument is similar to the one used in Section \[s:preliminaries-on-wave\] in the case of wave equations: we prove that the solution $u$ is smooth and can be replaced in both sides of when the initial data and the control are “smooth”, and then we use continuous dependence as stated in Table \[tableMGTresu\] to justify the definition in general. 
This procedure is a bit more elaborate than the one pertaining to the wave equation, since the third derivative (in time) comes into the picture, which requires more information on the solutions of the wave equation. In order to distinguish the memoryless wave equation from the equation with memory and the MGT equation, we will denote by $u_3$ the solution to equation (this is because we use suitable results from [@PandLIBRO § 2.2], where $u_3$ solves the wave equation when the initial data and the affine term are zero). We assume $u_3(0) \in {{\mathcal D}}(\Omega)$, $\frac{\partial}{\partial t} u_3\big|_{t=0}\in {{\mathcal D}}(\Omega)$, $g\in {{\mathcal D}}((0,T) \times \partial \Omega)$, where ${{\mathcal D}}$ denotes the space of $C^\infty$ functions with compact support in the indicated open set (which should not be confused with the domain of an operator), while $\partial\Omega$ is relatively open with respect to itself. The assumptions on the affine term $F(t)$ are made explicit below. For the sake of simplicity, in the sequel the time derivative will be denoted by $'$. It is known that $u_3$ is given by formula : it is also clear that if $g, f\equiv 0$, then in view of the Sobolev embedding theorems one has $u_3 \in C^\infty((0,T) \times \Omega)$, for every $T>0$. 
Our aim is to show that a similar property holds true when $g\ne 0$, $f\ne 0$.\ Let us study separately the effects of $g$ and $f$: accordingly, we assume first $f=0$, so that $$u_3(t)=-{{\mathcal A}}\int_0^t R_-(s) Gg(t-s) \,ds= Gg(t)+ \int_0^t R_+(s) Gg'(t-s)\,ds\,.$$ As already noted we have $u_3(t)-Gg(t)\in \mathcal{D}(A)$ and the boundary condition is satisfied; moreover, $$A \left(u_3(t)-Gg(t)\right)=-Gg''(t)+\int_0^t R_+(s) Gg'''(t-s)\, ds\in C^\infty\left ([0,T];L^2(\Omega)\right)\,.$$ Observe that, by definition, $$A \left (u_3(t)-Gg(t)\right )=(\Delta-I) u_3(t)\in C^\infty\left([0,T];L^2(\Omega)\right)$$ that is, $u_3(t)\in C^\infty \left([0,T];H^2(\Omega)\right)$ with suitable homogeneous boundary conditions. Analogously, $${{\mathcal A}}\Big\{A\big[ u_3(t)-Gg(t)\big]+Gg''(t)\Big\}=\int_0^t R_-(s)Gg^{(4)}(t-s)\, ds$$ which again is of class $C^\infty([0,T];L^2(\Omega))$. So we have $${{\mathcal A}}\Big\{A\big[ u_3(t)-Gg(t)\big]+Gg''(t)\Big\}\in C^\infty([0,T];X_1)\,,$$ that is, $u_3\in C^\infty\big([0,T];H^3(\Omega)\big)$. By iteration we see that in the interior of $(0,T) \times\Omega$ the solution $u_3$ is of class $C^\infty$ and hence, when computing the derivatives, the order can be interchanged. Let us consider now the effect of the affine term $f(t)$. We assume $f\in C^\infty\big([0,T]\times \Omega\big)$ and that, for every fixed $t\ge 0$, $f(t,\cdot)\in {{\mathcal D}}(\Omega)$, possibly with $f(0,\cdot)\ne 0$. The contribution of this affine term is $$u_3(t)={{\mathcal A}}^{-1}\int_0^t R_-(s)f(t-s)\, ds\in C^\infty\big([0,T];X_1\big)$$ since $f^{(n)}(0)\in {{\mathcal D}}(A^k)$ for every pair of integers $n$ and $k$, so that $$u_3(t)\in C^\infty\big([0,T];X_k\big) \quad \text{for every $k$.}$$ In particular, $u_3 \in C^\infty\left( [0,T]\times\Omega\right)$ as above. 
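The mode-wise mechanism behind these smoothness arguments can be checked numerically. Assuming the standard concrete realization of the cosine/sine operator families on the eigenbasis of the one-dimensional Dirichlet Laplacian on $(0,\pi)$ (a setting chosen purely for illustration; the abstract operators ${{\mathcal A}}$, $R_\pm$ are defined earlier in the paper, and their sign conventions may differ), a finite-difference test confirms that a single-mode cosine/sine superposition is a classical solution of the wave equation:

```python
import numpy as np

# Finite-difference residual of u_tt = u_xx for the mode superposition
#   u(t,x) = cos(n t) sin(n x) + sin(m t)/m * sin(m x)
# on (0, pi) with Dirichlet boundary conditions (illustrative assumption:
# cosine family ~ cos(n t), sine family ~ sin(n t)/n on Fourier modes).
n_mode, m_mode = 2, 3
x = np.linspace(0.0, np.pi, 201)
t = np.linspace(0.0, 1.0, 201)
dx, dt = x[1] - x[0], t[1] - t[0]
T, X = np.meshgrid(t, x, indexing="ij")

u = (np.cos(n_mode * T) * np.sin(n_mode * X)
     + np.sin(m_mode * T) / m_mode * np.sin(m_mode * X))

# second-order central differences on the interior grid
u_tt = (u[2:, 1:-1] - 2 * u[1:-1, 1:-1] + u[:-2, 1:-1]) / dt**2
u_xx = (u[1:-1, 2:] - 2 * u[1:-1, 1:-1] + u[1:-1, :-2]) / dx**2

res = np.max(np.abs(u_tt - u_xx))
print(f"max interior residual: {res:.2e}")
```

The residual is of the size of the discretization error, i.e. the superposition solves the PDE classically, which is the elementary fact the bootstrap argument above upgrades to $C^\infty$ regularity.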
We now extend the obtained properties to the solutions $v$ to the Volterra integral equation so that it is possible to track back the computation and to see that equality  holds pointwise (when the boundary control and the initial conditions have the stated regularity, $u_2\in {{\mathcal D}}(\Omega)$ included). We confine ourselves to examining the effect of the boundary data $g$ (the effect of initial data can be examined in a similar way). Moreover, multiplication by $e^{-R(0)t/2}$ does not affect the desired results and hence is ignored; i.e. we assume $v(t)\equiv u(t)$. Because equation has the form of equation (2.25) in [@PandLIBRO § 2] (the notations are easily adapted, in particular $b$ is substituted by $c^2$ in [@PandLIBRO]), following the proof of [@PandLIBRO Theorem 2.4, item 2] we see that $y(t)=v(t)-Gg(t)$ solves $$y(t)=\left(u_3(t)-Gg(t)\right)+\int_0^t L(s)Gg(t-s)\,ds+\int_0^t L(s)y(t-s)\,ds$$ so that $${{\mathcal A}}y(t)=\int_0^t {{\mathcal A}}L(s) Gg(t-s)\, ds+\int_0^t L(s){{\mathcal A}}y(t-s)\, ds$$ (note that ${{\mathcal A}}L(t)$ is a continuous operator for every $t$). It then follows that $y(t)\in C^\infty\left([0,T];X_1\right)$. Exploiting the definition of $L(t)$ and integrating by parts the integral which contains $g(t)$, we see that $y(t)\in C^\infty\left ([0,T];X_2\right)$. Iterating this procedure, we obtain $u\in C^\infty\left([0,T]\times\Omega\right)$. Using this regularity result we can track back the computation leading to the fact that $u(t)$ solves the MGT equation, including the fact that the Laplacian and the time derivative can be interchanged.

Acknowledgements {#acknowledgements .unnumbered}
================

The research of F.B. was partially supported by the Università degli Studi di Firenze under the Project [*Analisi e controllo di sistemi di Equazioni a Derivate Parziali di evoluzione*]{}, and by the GDRE (Groupement de Recherche Européen) ConEDP ([*Control of PDEs*]{}). F.B. 
is a member of the Gruppo Nazionale per l’Analisi Matematica, la Probabilità e le loro Applicazioni (GNAMPA) of the Istituto Nazionale di Alta Matematica (INdAM).\ The research of L.P. was partially supported by the Politecnico di Torino, and by the GDRE (Groupement de Recherche Européen) ConEDP ([*Control of PDEs*]{}). L.P. is a member of the Gruppo Nazionale per l’Analisi Matematica, la Probabilità e le loro Applicazioni (GNAMPA) of the Istituto Nazionale di Alta Matematica (INdAM). [99]{} , An integro-differential equation arising from the theory of heat conduction in rigid materials with memory, *Boll. Un. Mat. Ital.* [**5**]{} 15-B (1978), 470-482. , [*Integral Equations and Applications*]{}, Cambridge University Press, 2010. , On the Moore-Gibson-Thompson equation and its relation to viscoelasticity, [*Appl. Math. Optim.*]{} [**76**]{} (2017), 641-655. , [*Second order linear differential equations in Banach spaces*]{}, North-Holland Publishing Co., Amsterdam, 1985. , Contrôlabilité exacte des solutions de l’équation des ondes en présence de singularités, [*J. Math. Pures Appl.*]{} [**68**]{} (1989), 215-259. , Nonlinear acoustic phenomena in viscous thermally relaxing fluids: Shock bifurcation and the emergence of diffusive solitons, [*The Journal of the Acoustical Society of America*]{} [**124**]{} (2009), no. 4, 2491-2491. , Wellposedness and exponential decay rates for the Moore-Gibson-Thompson equation arising in high intensity ultrasound, [*Control Cybernet.*]{} [**40**]{} (2011), no. 4, 971-988. , Mathematics of nonlinear acoustics, [*Evol. Equ. Control Theory*]{} [**4**]{} (2015), no. 4, 447-491. , Wellposedness and exponential decay of the energy in the nonlinear Jordan-Moore-Gibson-Thompson equation arising in high intensity ultrasound, [*Math. Models Methods Appl. Sci.*]{} [**22**]{} (2012), 1250035, 34 pp. , Global solvability of Moore-Gibson-Thompson equation with memory arising in nonlinear acoustics, [*J. Evol. Equ.*]{} [**17**]{} (2017), no. 1, 411-441. 
, A cosine operator approach to $L_2(0,T;L_2(\Gamma))$-boundary input hyperbolic equations, [*Appl. Math. Optim.*]{} [**7**]{} (1981), 35-93.

, Nonhomogeneous boundary value problems for second order hyperbolic operators, [*J. Math. Pures Appl.*]{} (9) [**65**]{} (1986), no. 2, 149-192.

, Sharp regularity theory for second order hyperbolic equations of Neumann type, I. $L^2$ nonhomogeneous data, [*Ann. Mat. Pura Appl.*]{} (4) [**157**]{} (1990), 285-367.

, Regularity theory of hyperbolic equations with nonhomogeneous Neumann boundary conditions, II. General boundary data, [*J. Differential Equations*]{} [**94**]{} (1991), no. 1, 112-164.

, [*Control theory for partial differential equations: continuous and approximation theories, I. Abstract parabolic systems; II. Abstract hyperbolic-like systems over a finite time horizon*]{}, Encyclopedia of Mathematics and its Applications, [**74**]{}, Cambridge University Press, Cambridge, 2000. xxii+644+I4 pp.

, [*Contrôlabilité exacte, perturbations et stabilisation de systèmes distribués*]{}, Tome 1 \[Exact controllability, perturbations and stabilization of distributed systems. Vol. 1\], with appendices by E. Zuazua, C. Bardos, G. Lebeau and J. Rauch, [*Recherches en Mathématiques Appliquées*]{} \[Research in Applied Mathematics\], Vol. 8, Masson, Paris, 1988. x+541 pp.

, Hidden regularity for wave equations with memory, [*Riv. Mat. Univ. Parma*]{} [**7**]{} (2016), no. 2, 391-405.

, A model for one-dimensional, nonlinear viscoelasticity, [*Quart. Appl. Math.*]{} [**35**]{} (1977), 21-33.

, An abstract semigroup approach to the third-order Moore-Gibson-Thompson partial differential equation arising in high-intensity ultrasound: structural decomposition, spectral analysis, exponential stability, [*Math. Methods Appl. Sci.*]{} [**35**]{} (2012), no. 15, 1896-1929.

, Propagation of weak disturbances in a gas subject to relaxation effects, [*Journal of Aerospace Sciences and Technologies*]{} [**27**]{} (1960), 117-127.
, The controllability of the Gurtin-Pipkin equation: a cosine operator approach, [*Appl. Math. Optim.*]{} [**52**]{} (2005), 143-165; (a correction in [*Appl. Math. Optim.*]{} [**64**]{} (2011), 467-468).

, [*Distributed systems with persistent memory. Control and moment problems*]{}, SpringerBriefs in Electrical and Computer Engineering, SpringerBriefs in Control, Automation and Robotics, Springer, Cham, 2014. x+152 pp.

, [*Evolutionary Integral Equations and Applications*]{}, Monographs in Mathematics, Vol. 87, Birkhäuser Verlag, Basel, 1993; also: \[2012\] reprint of the 1993 edition, Modern Birkhäuser Classics, Birkhäuser/Springer Basel AG, Basel. xxvi+366 pp.

, [*Elliptic functional-differential equations and applications*]{}, Birkhäuser Verlag, Basel, 1997.

, Cosine operator functions, [*Rozprawy Mat.*]{} [**49**]{} (1966), 47 pp.

, On the regularity of boundary traces for the wave equation, [*Ann. Scuola Norm. Sup. Pisa Cl. Sci.*]{} (4) [**26**]{} (1998), no. 1, 185-206.

, [*Compressible-fluid Dynamics*]{}, McGraw-Hill, New York, 1972.

, Sharp regularity theory of second order hyperbolic equations with Neumann boundary control non-smooth in space, [*Evol. Equ. Control Theory*]{} [**5**]{} (2016), no. 4, 489-514.